Search Results

Search found 454 results on 19 pages for 'clare a keith ward'.


  • Newbie question: When to use extern "C" { //code } ?

    - by Russel
    Hello, maybe I'm not understanding the differences between C and C++, but when and why do we need to use extern "C" { ? Apparently it's a "linkage convention"? I read about it briefly and noticed that all the .h header files included with MSVS surround their code with it. What type of code exactly is "C code" and NOT "C++ code"? I thought C++ included all C code? I'm guessing that this is not the case, that C++ is different, and that standard features/functions exist in one or the other but not both (i.e. printf is C and cout is C++), but that C++ is backwards compatible through the extern "C" declaration. Is this correct? My next question depends on the answer to the first, but I'll ask it here anyway: since MSVS header files that are written in C are surrounded by extern "C" { ... }, when would you ever need to use this yourself in your own code? If your code is C code and you are trying to compile it with a C++ compiler, shouldn't it work without problems because all the standard .h files you include will already have the extern "C" wrapper in them for the C++ compiler? Do you have to use this when compiling in C++ but linking to already built C libraries or something? Please help clarify this for me... Thanks! --Keith
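
    One concrete illustration of the "linking to already built C libraries" case, seen from the consumer's side: a foreign-function interface such as .NET P/Invoke can only bind to unmangled, C-linkage exports, which is one practical reason to wrap your own C++ functions in extern "C". This is only a sketch - the DLL name and function below are made up for illustration.

        // Hypothetical C++ export compiled into legacy.dll:
        //   extern "C" __declspec(dllexport) int add_numbers(int a, int b) { return a + b; }
        // Without extern "C", the C++ compiler would mangle the export name and the binding below would fail.
        using System;
        using System.Runtime.InteropServices;

        static class NativeMethods
        {
            [DllImport("legacy.dll", CallingConvention = CallingConvention.Cdecl)]
            public static extern int add_numbers(int a, int b);
        }

        class Program
        {
            static void Main()
            {
                Console.WriteLine(NativeMethods.add_numbers(2, 3)); // prints 5
            }
        }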

    Read the article

  • Microsecond-resolution timestamps on Windows.

    - by Nikhil
    How do I get microsecond-resolution timestamps on Windows? I am looking for something better than QueryPerformanceCounter and QueryPerformanceFrequency (these can only give you an elapsed time since boot, and are not necessarily accurate if they are called on different threads - i.e. QueryPerformanceCounter may return different results on different CPUs. There are also some processors that adjust their frequency for power saving, which apparently isn't always reflected in their QueryPerformanceFrequency result.) There is this, http://msdn.microsoft.com/en-us/magazine/cc163996.aspx but it does not seem to be solid. This looks great but it's not available for download any more: http://www.ibm.com/developerworks/library/i-seconds/ This is another resource: http://www.lochan.org/2005/keith-cl/useful/win32time.html but it requires a number of steps, running a helper program plus some init stuff, and I am not sure if it works on multiple CPUs. I also looked at the Wikipedia link on the subject, which is interesting but not that useful: http://en.wikipedia.org/wiki/Time_Stamp_Counter If the answer is just "do this with BSD or Linux, it's a lot easier", that's fine, but I would like to confirm this and get some explanation as to why this is so hard on Windows and so easy on Linux and BSD. It's the same damn hardware...
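
    For reference, the managed wrapper over the same QueryPerformanceCounter/QueryPerformanceFrequency machinery is System.Diagnostics.Stopwatch. This is just a sketch of the baseline the question wants to improve on - it does not address the multi-CPU or frequency-scaling concerns raised above.

        using System;
        using System.Diagnostics;

        class MicrosecondTimer
        {
            static void Main()
            {
                // Stopwatch uses QueryPerformanceCounter/Frequency when a high-resolution timer is available.
                Console.WriteLine("High resolution: " + Stopwatch.IsHighResolution);

                long start = Stopwatch.GetTimestamp();   // raw performance-counter ticks
                // ... work being timed ...
                long elapsedTicks = Stopwatch.GetTimestamp() - start;

                // Convert ticks to microseconds using the counter frequency (ticks per second).
                double microseconds = elapsedTicks * 1000000.0 / Stopwatch.Frequency;
                Console.WriteLine("Elapsed: " + microseconds + " us");
            }
        }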

    Read the article

  • Unable to get the correct time when calling serverTime using jquery.countdown.js + ASP.NET?

    - by user312891
    When I call the function below I am unable to get the correct answer. Both shortly and newTime hold the same time, one coming from the client side and the other synced with the server. http://keith-wood.name/countdown.html I am waiting for your response. Thanks. $(function() { var shortly = new Date('April 9, 2010 20:38:10'); var newTime = new Date('April 9, 2010 20:38:10'); //for loop divid /// $('#defaultCountdown').countdown({ until: shortly, onExpiry: liftOff, onTick: watchCountdown, serverSync: serverTime }); $('#div1').countdown({ until: newTime }); }); function serverTime() { var time = null; $.ajax({ type: "POST", //Page Name (in which the method should be called) and method name url: "Default.aspx/GetTime", // If you want to pass parameter or data to server side function you can try line contentType: "application/json; charset=utf-8", dataType: "json", data: "{}", async: false, //else If you don't want to pass any value to server side function leave the data to blank line below //data: "{}", success: function(msg) { //Got the response from server and render to the client time = new Date(msg.d); alert(time); }, error: function(msg) { time = new Date(); alert('1'); } }); return time; }
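
    The client posts to Default.aspx/GetTime, but the server side is not shown in the question. A minimal sketch of what that ASP.NET page method might look like (an assumption - the real implementation isn't included above), returning a string that new Date(msg.d) can parse:

        // Default.aspx.cs (hypothetical)
        using System;
        using System.Web.Services;

        public partial class _Default : System.Web.UI.Page
        {
            // Static page method reachable via a JSON POST to Default.aspx/GetTime;
            // the ASP.NET AJAX JSON serializer returns the value as msg.d on the client.
            [WebMethod]
            public static string GetTime()
            {
                // Format chosen so the JavaScript Date constructor can parse it directly.
                return DateTime.Now.ToString("MMMM d, yyyy HH:mm:ss");
            }
        }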

    Read the article

  • Why would a PCI scan fail because of components that are not even installed?

    - by Brandon
    Recently a PCI scan was run against a web server and the result was a failure. Some of the issues could be fixed; however, others simply make no sense to me. The machine was a clean install; the only two things running are the .NET 3.5 website and the dotDefender web application firewall. However, there are several errors similar to: Web server vulnerability Impact: /servlet/SessionServlet: JRun or Netware WebSphere default servlet found. All default code should be removed from servers. Risk Factor: Medium/ CVSS2 Base Score: 6.4 CVE: CVE-2000-0539 I'm not sure what this is, but I can't find anything on the server that looks anything like this. Web server vulnerability Impact: /some.php?=PHPE9568F35- D428-11d2-A769-00AA001ACF42: PHP reveals potentially sensitive information via certain HTTP requests that contain specific QUERY strings. Risk Factor: Medium/ CVSS2 Base Score: 5.0 PHP is not installed. Trying to add that query string to any page does nothing because the application ignores it. And doing that phpVersion check results in a 404. Similar to this, there are dozens of errors related to JSP and Oracle, which are also not installed. Web server vulnerability Impact: /admin/database/wwForum.mdb: Web Wiz Forums pre 7.5 is vulnerable to Cross-Site Scripting attacks. Default login/pass is Administrator/letmein Risk Factor: Medium/ CVSS2 Base Score: 4.0 There are several errors like this, telling me that Web Wiz Forums, Alan Ward A-Cart 2.0, IlohaMail, etc. are all vulnerable. These are not installed or referenced anywhere I can find. There are even references to pages that simply don't exist, like OpenAutoClassifieds. Can anyone point me in the right direction as to why these errors are showing up, or where I might look to find these components if they are in fact installed? Note: This website and server are for a subdomain of the main website. The main website runs on a server that is running Apache/PHP, but I don't have access to that server. The report says the subdomain was the site being scanned, but is it possible for it to have scanned the main site as well?

    Read the article

  • Dec 5th Links: ASP.NET, ASP.NET MVC, jQuery, Silverlight, Visual Studio

    - by ScottGu
    Here is the latest in my link-listing series.  Also check out my VS 2010 and .NET 4 series for another on-going blog series I’m working on. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] ASP.NET ASP.NET Code Samples Collection: J.D. Meier has a great post that provides a detailed round-up of ASP.NET code samples and tutorials from a wide variety of sources.  Lots of useful pointers. Slash your ASP.NET compile/load time without any hard work: Nice article that details a bunch of optimizations you can make to speed up ASP.NET project load and compile times. You might also want to read my previous blog post on this topic here. 10 Essential Tools for Building ASP.NET Websites: Great article by Stephen Walther on 10 great (and free) tools that enable you to more easily build great ASP.NET Websites.  Highly recommended reading. Optimize Images using the ASP.NET Sprite and Image Optimization Framework: A nice article by 4GuysFromRolla that discusses how to use the open-source ASP.NET Sprite and Image Optimization Framework (one of the tools recommended by Stephen in the previous article).  You can use this to significantly improve the load-time of your pages on the client. Formatting Dates, Times and Numbers in ASP.NET: Scott Mitchell has a great article that discusses formatting dates, times and numbers in ASP.NET.  A very useful link to bookmark.  Also check out James Michael’s DateTime is Packed with Goodies blog post for other DateTime tips. Examining ASP.NET’s Membership, Roles and Profile APIs (Part 18): Everything you could possibly want to known about ASP.NET’s built-in Membership, Roles and Profile APIs must surely be in this tutorial series. Part 18 covers how to store additional user info with Membership. ASP.NET with jQuery An Introduction to jQuery Templates: Stephen Walther has written an outstanding introduction and tutorial on the new jQuery Template plugin that the ASP.NET team has contributed to the jQuery project. Composition with jQuery Templates and jQuery Templates, Composite Rendering, and Remote Loading: Dave Ward has written two nice posts that talk about composition scenarios with jQuery Templates and some cool scenarios you can enable with them. Using jQuery and ASP.NET to Build a News Ticker: Scott Mitchell has a nice tutorial that demonstrates how to build a dynamically updated “news ticker” style UI with ASP.NET and jQuery. Checking All Checkboxes in a GridView using jQuery: Scott Mitchell has a nice post that covers how to use jQuery to enable a checkbox within a GridView’s header to automatically check/uncheck all checkboxes contained within rows of it. Using jQuery to POST Form Data to an ASP.NET AJAX Web Service: Rick Strahl has a nice post that discusses how to capture form variables and post them to an ASP.NET AJAX Web Service (.asmx). ASP.NET MVC ASP.NET MVC Diagnostics Using NuGet: Phil Haack has a nice post that demonstrates how to easily install a diagnostics page (using NuGet) that can help identify and diagnose common configuration issues within your apps. ASP.NET MVC 3 JsonValueProviderFactory: James Hughes has a nice post that discusses how to take advantage of the new JsonValueProviderFactory support built into ASP.NET MVC 3.  This makes it easy to post JSON payloads to MVC action methods. 
Practical jQuery Mobile with ASP.NET MVC: James Hughes has another nice post that discusses how to use the new jQuery Mobile library with ASP.NET MVC to build great mobile web applications. Credit Card Validator for ASP.NET MVC 3: Benjii Me has a nice post that demonstrates how to build a [CreditCard] validator attribute that can be used to easily validate credit card numbers are in the correct format with ASP.NET MVC. Silverlight Silverlight FireStarter Keynote and Sessions: A great blog post from John Papa that contains pointers and descriptions of all the great Silverlight content we published last week at the Silverlight FireStarter.  You can watch all of the talks online.  More details on my keynote and Silverlight 5 announcements can be found here. 31 Days of Windows Phone 7: 31 great tutorials on how to build Windows Phone 7 applications (using Silverlight).  Silverlight for Windows Phone Toolkit Update: David Anson has a nice post that discusses some of the additional controls provided with the Silverlight for Windows Phone Toolkit. Visual Studio JavaScript Editor Extensions: A nice (and free) Visual Studio plugin built by the web tools team that significantly improves the JavaScript intellisense support within Visual Studio. HTML5 Intellisense for Visual Studio: Gil has a blog post that discusses a new extension my team has posted to the Visual Studio Extension Gallery that adds HTML5 schema support to Visual Studio 2008 and 2010. Team Build + Web Deployment + Web Deploy + VS 2010 = Goodness: Visual blogs about how to enable a continuous deployment system with VS 2010, TFS 2010 and the Microsoft Web Deploy framework.  Visual Studio 2010 Emacs Emulation Extension and VIM Emulation Extension: Check out these two extensions if you are fond of Emacs and VIM key bindings and want to enable them within Visual Studio 2010. Hope this helps, Scott

    Read the article

  • Learn About Oracle’s Strategy for a Simple, Modern User Experience at OpenWorld 2012

    - by Applications User Experience
    By Kathy Miedema, Oracle Applications User Experience If you’re interested in what the best possible user experience looks like, you’ll want to hear what Oracle’s Applications User Experience team is planning for OpenWorld 2012, Sept. 30-Oct. 4 in San Francisco. This year, we will talk Fusion, Fusion, Fusion. We were among the first to show Oracle Fusion Applications in the last couple of years, and we’ll be showing it again this year so you can see what Oracle is planning for the next generation of enterprise applications. Attend our sessions to learn more about the user experience strategy in which Oracle is investing. Simplicity is the driving force behind the demos that we are unveiling now, which you can see at OpenWorld. We want to create opportunities for productivity and efficiency, and deliver enterprise data across devices to help you do your work in the way best suited to your job and needs, said Jeremy Ashley, Vice President, Oracle Applications User Experience. You can see the new look for Fusion Applications at a general session led by Ashley at 3:30 p.m. on Wednesday, Oct. 3. You’ll also have the chance to learn more about tailoring in Oracle Fusion Applications, and gain a new understanding of the investment in the user experience behind Fusion Applications at our sessions (see session information below). Inside the Oracle Applications User Experience team’s on-site lab at Oracle OpenWorld 2011. Head to the demogrounds to see new demos from the Applications User Experience team, including the new look for Fusion Applications and what we’re building for mobile platforms. Take a spin on our eye tracker, a very cool tool that we use to research the usability of a particular design. Visit the Usable Apps OpenWorld page to find out where our demopods will be located. We are also recruiting participants for our on-site lab, in which we gather feedback on new user experience designs, and taking reservations for a charter bus that will bring you to Oracle headquarters for a lab tour Thursday, Oct. 4, or Friday, Oct. 5. Tours leave at 10 a.m. and 1:45 p.m. from the Moscone Center in San Francisco. You’ll see more of our newest designs at the lab tour, and some of our research tools in action. Can’t participate in a customer feedback session or take a lab tour this time around? Visit Usable Apps to participate or book a tour another time. For more information on any OpenWorld sessions, check the content catalog – also available at www.oracle.com/openworld. For information on Applications User Experience (Apps UX) sessions and activities, go to the Usable Apps OpenWorld page. APPS UX OPENWORLD SESSIONS Oracle’s Roadmap to a Simple, Modern User Experience Presenter: Jeremy Ashley, Vice President Applications User Experience, Oracle; with Debra Lilley, Fujitsu Consulting; Basheer Khan, Innowave; and Edward Roske, InterRelSession ID: CON9467Date: Wednesday, Oct. 3 Time: 3:30 - 4:30 p.m.Location: Moscone West - 3002/3004 Jeremy Ashley Oracle Fusion Applications: Transforming Insight into Action Presenters: Killian Evers and Kristin Desmond, OracleSession ID: CON8718Date: Thursday, Oct. 4Time: 11:15 a.m. - 12:15 p.m.Location: Moscone West - 2008 “FRIENDS OF UX” OPENWORLD SESSIONS Sessions by the Oracle Usability Advisory Board (OUAB) members: Advances in Oracle Enterprise Governance, Risk, and Compliance Manager  Presenters: Koen Delaure, KPMG Advisory NV, and Oracle Usability Advisory Board member; Russell Stohr, Oracle Session ID: CON9389Date: Tuesday, Oct. 
2, Time: 1:15 - 2:15 p.m., Location: Palace Hotel - Concert. Optimize Oracle E-Business Suite Procure-to-Pay: Cut Inefficiencies/Fraud with Oracle GRC Apps. Presenters: Koen Delaure, KPMG Advisory NV, and Solveig Wagner, Seadrill Management AS, both Oracle Usability Advisory Board members; and Swarnali Bag, Oracle. Session ID: CON9401, Date: Monday, Oct. 1, Time: 12:15 - 1:15 p.m., Location: InterContinental - Sutter. Showcase of JD Edwards EnterpriseOne Mobility. Presenters: Jon Wells, Westmoreland Coal Co., Oracle Usability Advisory Board member; Rob Mills and Liz Davson, Town of Oakville; Keith Sholes and Louise Farner, Oracle. Session ID: CON9123, Date: Tuesday, Oct. 2, Time: 1:15 - 2:15 p.m., Location: InterContinental - Grand Ballroom B. Sessions by the Fusion User Experience Advocates (FXA): Usability and Features of Oracle Fusion Applications, Built upon Oracle Fusion Middleware. Presenters: Debra Lilley, Fujitsu Consulting and Oracle Usability Advisory Board member; John King, King Training Resources. Session ID: UGF10371, Date: Sunday, Sept. 30, Time: 11 a.m. - 11:45 a.m., Location: Moscone West – 2010. Ten Things to Love About Oracle Fusion Project Portfolio Management. Presenter: Floyd Teter, EiS Technologies. Session ID: CON6021, Date: Tuesday, Oct. 2, Time: 10:15 - 11:15 a.m., Location: Moscone West – 2003

    Read the article

  • Silverlight Firestarter Wrap Up and WCF RIA Services Talk Sample Code

    - by dwahlin
    I had a great time attending and speaking at the Silverlight Firestarter event up in Redmond on December 2, 2010. In addition to getting a chance to hang out with a lot of cool people from Microsoft such as Scott Guthrie, John Papa, Tim Heuer, Brian Goldfarb, John Allwright, David Pugmire, Jesse Liberty, Jeff Handley, Yavor Georgiev, Jossef Goldberg, Mike Cook and many others, I also had a chance to chat with a lot of people attending the event and hear about what projects they’re working on which was awesome. If you didn’t get a chance to look through all of the new features coming in Silverlight 5 check out John Papa’s post on the subject. While at the Silverlight Firestarter event I gave a presentation on WCF RIA Services and wanted to get the code posted since several people have asked when it’d be available. The talk can be viewed by clicking the image below. Code from the talk follows as well as additional links. I had a few people ask about the green bracelet on my left hand since it looks like something you’d get from a waterpark. It was used to get us access down a little hall that led backstage and allowed us to go backstage during the event. I thought it looked kind of dorky but it was required to get through security. Sample Code from My WCF RIA Services Talk (To login to the 2 apps use “user” and “P@ssw0rd”. Make sure to do a rebuild of the projects in Visual Studio before running them.) View All Silverlight Firestarter Talks and Scott Guthrie’s Keynote WCF RIA Services SP1 Beta for Silverlight 4 WCF RIA Services Code Samples (including some SP1 samples) Improved binding support in EntitySet and EntityCollection with SP1 (Kyle McClellan’s Blog) Introducing an MVVM-Friendly DomainDataSource: The DomainCollectionView (Kyle McClellan’s Blog) I’ve had the chance to speak at a lot of conferences but never with as many cameras, streaming capabilities, people watching live and overall hype involved. Over 1000 people registered to attend the conference in person at the Microsoft campus and well over 15,000 to watch it through the live stream.  The event started for me on Tuesday afternoon with a flight up to Seattle from Phoenix. My flight was delayed 1 1/2 hours (I seem to be good at booking delayed flights) so I didn’t get up there until almost 8 PM. John Papa did a tech check at 9 PM that night and I was scheduled for 9:30 PM. We basically plugged in my laptop backstage (amazing number of servers, racks and audio devices back there) and made sure everything showed up properly on the projector and the machines recording the presentation. In addition to a dedicated show director, there were at least 5 tech people back stage and at least that many up in the booth running lights, audio, cameras, and other aspects of the show. I wish I would’ve taken a picture of the backstage setup since it was pretty massive – servers all over the place. I definitely gained a new appreciation for how much work goes into these types of events. Here’s what the room looked like right before my tech check– not real exciting at this point. That’s Yavor Georgiev (who spoke on WCF Services at the Firestarter) in the background. We had plenty of monitors to reference during the presentation. Two monitors for slides (right and left side) and a notes monitor. The 4th monitor showed the time and they’d type in notes to us as we talked (such as “You’re over time!” in my case since I went around 4 minutes over :-)). 
Wednesday morning I went back on campus at Microsoft and watched John Papa film a few Silverlight TV episodes with Dave Campbell and Ryan Plemons.   Next I had the chance to watch the dry run of the keynote with Scott Guthrie and John Papa. We were all blown away by the demos shown since they were even better than expected. Starting at 1 PM on Wednesday I went over to Building 35 and listened to Yavor Georgiev (WCF Services), Jaime Rodriguez (Windows Phone 7), Jesse Liberty (Data Binding) and Jossef Goldberg and Mike Cook (Silverlight Performance) give their different talks and we all shared feedback with each other which was a lot of fun. Jeff Handley from the RIA Services team came afterwards and listened to me give a dry run of my WCF RIA Services talk. He had some great feedback that I really appreciated getting. That night I hung out with John Papa and Ward Bell and listened to John walk through his keynote demos. I also got a sneak peak of the gift given to Dave Campbell for all his work with Silverlight Cream over the years. It’s a poster signed by all of the key people involved with Silverlight: Thursday morning I got up fairly early to get to the event center by 8 AM for speaker pictures. It was nice and quiet at that point although outside the room there was a huge line of people waiting to get in.     At around 8:30 AM everyone was let in and the main room was filled quickly. Two other overflow rooms in the Microsoft conference center (Building 33) were also filled to capacity. At around 9 AM Scott Guthrie kicked off the event and all the excitement started! From there it was all a blur but it was definitely a lot of fun. All of the sessions for the Silverlight Firestarter were recorded and can be watched here (including the keynote). Corey Schuman, John Papa and I also released 11 lab exercises and associated videos to help people get started with Silverlight. Definitely check them out if you’re interested in learning more! Level 100: Getting Started Lab 01 - WinForms and Silverlight Lab 02 - ASP.NET and Silverlight Lab 03 - XAML and Controls Lab 04 - Data Binding Level 200: Ready for More Lab 05 - Migrating Apps to Out-of-Browser Lab 06 - Great UX with Blend Lab 07 - Web Services and Silverlight Lab 08 - Using WCF RIA Services Level 300: Take me Further Lab 09 - Deep Dive into Out-of-Browser Lab 10 - Silverlight Patterns: Using MVVM Lab 11 - Silverlight and Windows Phone 7

    Read the article

  • PASS Summit 2011 – Part III

    - by Tara Kizer
    Well we’re about a month past PASS Summit 2011, and yet I haven’t finished blogging my notes! Between work and home life, I haven’t been able to come up for air in a bit.  Now on to my notes… On Thursday of the PASS Summit 2011, I attended Klaus Aschenbrenner’s (blog|twitter) “Advanced SQL Server 2008 Troubleshooting”, Joe Webb’s (blog|twitter) “SQL Server Locking & Blocking Made Simple”, Kalen Delaney’s (blog|twitter) “What Happened? Exploring the Plan Cache”, and Paul Randal’s (blog|twitter) “More DBA Mythbusters”.  I think my head grew two times in size from the Thursday sessions.  Just WOW! I took a ton of notes in Klaus' session.  He took a deep dive into how to troubleshoot performance problems.  Here is how he goes about solving a performance problem: start by checking the wait stats DMV, then system health, memory issues, and I/O issues. I normally start with blocking and then hit the wait stats.  Here’s the wait stat query (Paul Randal’s) that I use when working on a performance problem.  He highlighted a few waits to be aware of such as WRITELOG (indicates IO subsystem problem), SOS_SCHEDULER_YIELD (indicates CPU problem), and PAGEIOLATCH_XX (indicates an IO subsystem problem or a buffer pool problem).  Regarding memory issues, Klaus recommended that as a bare minimum, one should set the “max server memory (MB)” in sp_configure to 2GB or 10% reserved for the OS (whichever comes first).  This is just a starting point though! Regarding I/O issues, Klaus talked about disk partition alignment, which can improve SQL I/O performance by up to 100%.  You should use 64KB for the NTFS cluster size, and it’s automatic in Windows 2008 R2. Joe’s locking and blocking presentation was a good session to really clear up the fog in my mind about locking.  One takeaway that I had no idea could be done was that you can set a timeout in T-SQL code via LOCK_TIMEOUT.  If you do this via the application, you should trap error 1222. Kalen’s session went into execution plans.  The minimum size of a plan is 24k.  This adds up fast especially if you have a lot of plans that don’t get reused much.  You can use sys.dm_exec_cached_plans to check how often a plan is being reused by checking the usecounts column.  She said that we can use DBCC FLUSHPROCINDB to clear out the stored procedure cache for a specific database.  I didn’t know we had this available, so this was great to hear.  This will be less intrusive when an emergency comes up where I’ve needed to run DBCC FREEPROCCACHE. Kalen said one should enable “optimize for ad hoc workloads” if you have an ad hoc workload.  This stores only a 300-byte stub of the first plan, and if it gets run again, it’ll store the whole thing.  This helps with plan cache bloat.  I have a lot of systems that use prepared statements, and Kalen says we simulate those calls by using sp_executesql.  Cool! Paul did a series of posts last year to debunk various myths and misconceptions around SQL Server.  He continues to debunk things via “DBA Mythbusters”.  You can get a PDF of a bunch of these here.  One of the myths he went over is the number of tempdb data files that you should have.  Back in 2000, the recommendation was to have as many tempdb data files as there are CPU cores on your server.  This no longer holds true due to the numerous cores we have on our servers.  Paul says you should start out with 1/4 to 1/2 the number of cores and work your way up from there.  BUT!  
Paul likes what Bob Ward (twitter) says on this topic: 8 or less cores –> set number of files equal to the number of cores Greater than 8 cores –> start with 8 files and increase in blocks of 4 One common myth out there is to set your MAXDOP to 1 for an OLTP workload with high CXPACKET waits.  Instead of that, dig deeper first.  Look for missing indexes, out-of-date statistics, increase the “cost threshold for parallelism” setting, and perhaps set MAXDOP at the query level.  Paul stressed that you should not plan a backup strategy but instead plan a restore strategy.  What are your recoverability requirements?  Once you know that, now plan out your backups. As Paul always does, he talked about DBCC CHECKDB.  He said how fabulous it is.  I didn’t want to interrupt the presentation, so after his session had ended, I asked Paul about the need to run DBCC CHECKDB on your mirror systems.  You could have data corruption occur at the mirror and not at the principal server.  If you aren’t checking for data corruption on your mirror systems, you could be failing over to a corrupt database in the case of a disaster or even a planned failover.  You can’t run DBCC CHECKDB against the mirrored database, but you can run it against a snapshot off the mirrored database.

    Read the article

  • Eloqua Experience 2013: Mystique, Modern Marketing and Masterful Engagement

    - by Mike Stiles
    The following is a guest post from Erick Mott, a social business leader at Oracle Eloqua. There’s a growing gap between 20th century marketing and a modern marketing way of doing business. I can’t think of a better example of modern marketing in action than what more than 2,000 people experienced in San Francisco at #EE13; customer-obsession, multichannel content, and real-time engagement all coming together at one extraordinary event. This was my first Eloqua Experience as a new Oracle Eloqua employee. In weeks prior, I heard about the mystique but didn’t know what to expect. What I’ve come to understand with more clarity is everything we do revolves around customer success, and we operate and educate at all times with these five tenets in mind: 1. Targeting: Really Know Your Buyer 2. Engagement: Create a 1:1 Relationship 3. Conversion: Visualize Guided Thinking 4. Analysis: Learn What’s Working 5. Marketing Technology: Enable and Extend the Cloud Product News from Eloqua Experience 2013 We made some announcements that John Stetic, VP of Products, Oracle Eloqua covers in this brief ‘Modern Marketing Minute’ video recorded after Wednesday’s keynote; summarized below, too: Oracle Eloqua AdFocus: While understanding the impact of a specific marketing channel was formerly relegated to marketers’ wish lists, the channels we now focus on are digital, social, and mobile. AdFocus gives marketers a single platform to dynamically create, manage and measure display ads alongside owned and earned media. AdFocus enables marketers to target only key accounts or prospects you want to reach with display ads, as well as provide creative content or personalized ad copy based on their persona and activities. Oracle Eloqua Profiler: The details of what we now know about customers have expanded into a universal customer profile, which can be used to create highly targeted segments. Marketers now can take data that’s not even stored in Eloqua to help targeted and score prospects for a complete, multichannel view of the customer. Profiler gives sales reps one, detailed view of the prospect to extend views beyond Oracle Eloqua asset activity (emails, forms, page views) to any external assets stored in Oracle Eloqua. Marketing Resource Management: New capabilities create more secure and controlled access to marketing resources and data. New integrations provide greater insight into campaign resources and management through a central marketing calendar and simplify resource management. Integrated Sales and Marketing Funnel: An integrated sales and marketing funnel view gives marketing and sales users, cross-functional teams, and executive management a consistent and clear view of pipeline performance. It also quickly provides users with historical metrics across different time spans and conditions. Eloqua AppCloud: More than 20 new AppCloud partners have been added to the community, which now includes 100+ apps. Eloqua AppCloud now provides modern marketers with an even broader range of marketing applications that help expand and enrich sales and marketing efforts; easily accessible in the Topliners Community. Social Capabilities: Recent integration between Oracle Eloqua and Oracle Social Relationship Management (SRM) deliver a comprehensive, scalable and integrated modern marketing solution. New capabilities include better tracking of social activities for a more complete customer profile. Engage Facebook custom audiences with AdFocus to deliver ads and meaningful experiences through trusted social networks. 
Biggest and Best Eloqua Experience. There’s a lot of talk in the industry about the Marketing Cloud. At Oracle Eloqua, we have been on a mission of delivering the most advanced and integrated modern marketing technology on the planet. It’s not just a concept but reality with proven execution, as seen first-hand this week in San Francisco. In this video, Kevin Akeroyd, SVP of Oracle Eloqua, provides some highlights of what made this year’s Eloqua Experience, exceptional, including Steve Woods’ presentation about the journey of modern marketers and Andrea Ward’s conversation with Vince Gilligan, creator of the Breaking Bad television series. The 2013 Markie Awards The Oracle Eloqua Marketing Cloud was best exemplified for me as 19 Markies were awarded to customers for their exceptional creativity and results as modern marketers. Wow, what a night to remember with so many committed and talented people working to create an extraordinary experience! To learn more about how to become a modern marketer, check out these resources. We look forward to seeing you next year at Eloqua Experience. More on Erick: 20 years experience at Oracle, Ektron, Sitecore, Lyris, Habeas, Nokia, creatorbase, Mark Monitor, Cisco Systems, GlobalFluency, Sun Microsystems, Philips NV, Elm Products and CBS TV. Patent holder with agency, Fortune 500, media, and startup company expertise. @mikestiles

    Read the article

  • How to fire server-side methods with jQuery

    - by Nasser Hajloo
    I have a large application and I'm going to enable shortcut keys for it. I found 2 jQuery plug-ins (demo plug-in 1 - demo plug-in 2) that do this for me. You can find both of them in this post on StackOverflow. My application is already complete and I'm going to add some functionality to it, so I don't want to write code again. Since a shortcut just catches a key combination, I'm wondering how I can call the server methods that a shortcut key should fire. So how do I use either of these plug-ins by just calling the methods I'd written before? Actually, how do I fire server methods with jQuery? You can also find a good article here, by Dave Ward. Update: here is the scenario. When the user presses CTRL+Del, the GridView1_OnDeleteCommand should fire, so I have this protected void grdDocumentRows_DeleteCommand(object source, System.Web.UI.WebControls.DataGridCommandEventArgs e) { try { DeleteRow(grdDocumentRows.DataKeys[e.Item.ItemIndex].ToString()); clearControls(); cmdSaveTrans.Text = Hajloo.Portal.Common.Constants.Accounting.Documents.InsertClickText; btnDelete.Visible = false; grdDocumentRows.EditItemIndex = -1; BindGrid(); } catch (Exception ex) { Page.AddMessage(GetLocalResourceObject("AProblemAccuredTryAgain").ToString(), MessageControl.TypeEnum.Error); } } private void BindGrid() { RefreshPage(); grdDocumentRows.DataSource = ((DataSet)Session[Hajloo.Portal.Common.Constants.Accounting.Session.AccDocument]).Tables[AccDocument.TRANSACTIONS_TABLE]; grdDocumentRows.DataBind(); } private void RefreshPage() { Creditors = (decimal)((AccDocument)Session[Hajloo.Portal.Common.Constants.Accounting.Session.AccDocument]).Tables[AccDocument.ACCDOCUMENT_TABLE].Rows[0][AccDocument.ACCDOCUMENT_CREDITORS_SUM_FIELD]; Debtors = (decimal)((AccDocument)Session[Hajloo.Portal.Common.Constants.Accounting.Session.AccDocument]).Tables[AccDocument.ACCDOCUMENT_TABLE].Rows[0][AccDocument.ACCDOCUMENT_DEBTORS_SUM_FIELD]; if ((Creditors - Debtors) != 0) labBalance.InnerText = GetLocalResourceObject("Differentiate").ToString() + "?" + (Creditors - Debtors).ToString(Hajloo.Portal.Common.Constants.Common.Documents.CF) + "?"; else labBalance.InnerText = GetLocalResourceObject("Balance").ToString(); lblSumDebit.Text = Debtors.ToString(Hajloo.Portal.Common.Constants.Common.Documents.CF); lblSumCredit.Text = Creditors.ToString(Hajloo.Portal.Common.Constants.Common.Documents.CF); if (grdDocumentRows.EditItemIndex == -1) clearControls(); } The other scenarios are the same. How do I enable shortcuts for this kind of code (using Session, NHibernate, etc.)?
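
    The approach the linked Dave Ward article describes is exposing a static page method and calling it from script, so the keyboard-shortcut handler can simply make an $.ajax POST (like the serverTime example earlier on this page). A rough sketch under assumed names - the page, method and parameter below are made up, and the existing delete logic would need to be refactored so it can run without the control tree:

        // Documents.aspx.cs (hypothetical): expose a static [WebMethod] the shortcut
        // plug-in's handler can call with $.ajax({ url: "Documents.aspx/DeleteCurrentRow", ... }).
        // Instance members (controls, the grid bound in Session) are not available in a
        // static method, so pass whatever the server needs as parameters.
        using System.Web.Services;

        public partial class Documents : System.Web.UI.Page
        {
            [WebMethod]
            public static string DeleteCurrentRow(string rowKey)
            {
                // Call into the same data-access code the DeleteCommand handler uses,
                // e.g. a shared DeleteRow(rowKey) helper; rebinding the grid then happens
                // on the client (or via a partial postback), since no page instance exists here.
                return "deleted " + rowKey;
            }
        }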

    Read the article

  • I am using a constructor for my array, but why is it saying I need a return type?

    - by JB
    using System; using System.IO; using System.Data; using System.Text; using System.Drawing; using System.Data.OleDb; using System.Collections; using System.ComponentModel; using System.Windows.Forms; using System.Drawing.Printing; using System.Collections.Generic; namespace Eagle_Eye_Class_Finder { public class GetSchedule { public class IDnumber { public string Name { get; set; } public string ID { get; set; } public string year { get; set; } public string class1 { get; set; } public string class2 { get; set; } public string class3 { get; set; } public string class4 { get; set; } IDnumber[] IDnumbers = new IDnumber[3]; public GetSchedule() //here i get an error saying method must have a return type??? { IDnumbers[0] = new IDnumber() { Name = "Joshua Banks",ID = "900456317", year = "Senior", class1 = "TEET 4090", class2 = "TEET 3020", class3 = "TEET 3090", class4 = "TEET 4290" }; IDnumbers[1] = new IDnumber() { Name = "Sean Ward", ID = "900456318", year = "Junior", class1 = "ENGNR 4090", class2 = "ENGNR 3020", class3 = "ENGNR 3090", class4 = "ENGNR 4290" }; IDnumbers[2] = new IDnumber() { Name = "Terrell Johnson",ID = "900456319",year = "Sophomore", class1 = "BUS 4090", class2 = "BUS 3020", class3 = "BUS 3090", class4 = "BUS 4290" }; } static void ProcessNumber(IDnumber myNum) { StringBuilder myData = new StringBuilder(); myData.AppendLine(IDnumber.Name); myData.AppendLine(": "); myData.AppendLine(IDnumber.ID); myData.AppendLine(IDnumber.year); myData.AppendLine(IDnumber.class1); myData.AppendLine(IDnumber.class2); myData.AppendLine(IDnumber.class3); myData.AppendLine(IDnumber.class4); MessageBox.Show(myData); } public string GetDataFromNumber(string ID) { foreach (IDnumber idCandidateMatch in IDnumbers) { if (IDCandidateMatch.ID == ID) { StringBuilder myData = new StringBuilder(); myData.AppendLine(IDnumber.Name); myData.AppendLine(": "); myData.AppendLine(IDnumber.ID); myData.AppendLine(IDnumber.year); myData.AppendLine(IDnumber.class1); myData.AppendLine(IDnumber.class2); myData.AppendLine(IDnumber.class3); myData.AppendLine(IDnumber.class4); return myData; } } return ""; } } } }
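
    The compiler message points to a bracing problem rather than a constructor problem: the nested IDnumber class is never closed, so the IDnumbers field and the GetSchedule() "constructor" end up declared inside IDnumber. A member named GetSchedule inside a class named IDnumber is not a constructor, so C# demands a return type. A trimmed sketch of the intended structure (assuming the remaining fields and entries stay as posted):

        using System.Text;

        namespace Eagle_Eye_Class_Finder
        {
            public class GetSchedule
            {
                public class IDnumber
                {
                    public string Name { get; set; }
                    public string ID { get; set; }
                    public string year { get; set; }
                    // ... class1 through class4 as in the original ...
                }   // <-- close IDnumber here, before the array field and the constructor

                IDnumber[] IDnumbers = new IDnumber[3];

                public GetSchedule()   // now matches the enclosing class name, so it is a valid constructor
                {
                    IDnumbers[0] = new IDnumber { Name = "Joshua Banks", ID = "900456317", year = "Senior" };
                    // ... remaining entries as in the original ...
                }

                public string GetDataFromNumber(string ID)
                {
                    foreach (IDnumber candidate in IDnumbers)
                    {
                        if (candidate.ID == ID)          // note: instance members, not IDnumber.Name etc.
                        {
                            var myData = new StringBuilder();
                            myData.AppendLine(candidate.Name);
                            myData.AppendLine(candidate.ID);
                            return myData.ToString();    // convert the StringBuilder before returning a string
                        }
                    }
                    return "";
                }
            }
        }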

    Read the article

  • Big Data’s Killer App…

    - by jean-pierre.dijcks
    Recently Keith spent  some time talking about the cloud on this blog and I will spare you my thoughts on the whole thing. What I do want to write down is something about the Big Data movement and what I think is the killer app for Big Data... Where is this coming from, ok, I confess... I spent 3 days in cloud land at the Cloud Connect conference in Santa Clara and it was quite a lot of fun. One of the nice things at Cloud Connect was that there was a track dedicated to Big Data, which prompted me to some extend to write this post. What is Big Data anyways? The most valuable point made in the Big Data track was that Big Data in itself is not very cool. Doing something with Big Data is what makes all of this cool and interesting to a business user! The other good insight I got was that a lot of people think Big Data means a single gigantic monolithic system holding gazillions of bytes or documents or log files. Well turns out that most people in the Big Data track are talking about a lot of collections of smaller data sets. So rather than thinking "big = monolithic" you should be thinking "big = many data sets". This is more than just theoretical, it is actually relevant when thinking about big data and how to process it. It is important because it means that the platform that stores data will most likely consist out of multiple solutions. You may be storing logs on something like HDFS, you may store your customer information in Oracle and you may store distilled clickstream information in some distilled form in MySQL. The big question you will need to solve is not what lives where, but how to get it all together and get some value out of all that data. NoSQL and MapReduce Nope, sorry, this is not the killer app... and no I'm not saying this because my business card says Oracle and I'm therefore biased. I think language is important, but as with storage I think pragmatic is better. In other words, some questions can be answered with SQL very efficiently, others can be answered with PERL or TCL others with MR. History should teach us that anyone trying to solve a problem will use any and all tools around. For example, most data warehouses (Big Data 1.0?) get a lot of data in flat files. Everyone then runs a bunch of shell scripts to massage or verify those files and then shoves those files into the database. We've even built shell script support into external tables to allow for this. I think the Big Data projects will do the same. Some people will use MapReduce, although I would argue that things like Cascading are more interesting, some people will use Java. Some data is stored on HDFS making Cascading the way to go, some data is stored in Oracle and SQL does do a good job there. As with storage and with history, be pragmatic and use what fits and neither NoSQL nor MR will be the one and only. Also, a language, while important, does in itself not deliver business value. So while cool it is not a killer app... Vertical Behavioral Analytics This is the killer app! And you are now thinking: "what does that mean?" Let's decompose that heading. First of all, analytics. I would think you had guessed by now that this is really what I'm after, and of course you are right. But not just analytics, which has a very large scope and means many things to many people. I'm not just after Business Intelligence (analytics 1.0?) or data mining (analytics 2.0?) but I'm after something more interesting that you can only do after collecting large volumes of specific data. 
That all-important data is about behavior. What do my customers do? More importantly why do they behave like that? If you can figure that out, you can tailor web sites, stores, products etc. to that behavior and figure out how to be successful. Today's behavior that is somewhat easily tracked is web site clicks, search patterns and all of those things that a web site or web server tracks. That is where the Big Data lives and where these patterns are now emerging. Other examples however are emerging, and one of the examples used at the conference was about predicting churn for a telco based on the social network its members are a part of. That social network is not about LinkedIn or Facebook, but about who calls whom. I call you a lot, you switch provider, and I might/will switch too. And that just naturally brings me to the next word, vertical. Vertical in this context means per industry, e.g. communications or retail or government or any other vertical. The reason for being more specific than just behavioral analytics is that each industry has its own data sources, has its own quirky logic and has its own demands and priorities. Of course, the methods and some of the software will be common and some will have both retail and service industry analytics in place (your corner coffee store for example). But the gist of it all is that analytics that can predict customer behavior for a specific focused group of people in a specific industry is what makes Big Data interesting. Building a Vertical Behavioral Analysis System Well, that is going to be interesting. I have not seen much going on in that space and if I had to have some criticism on the Cloud Connect conference it would be the lack of concrete use cases on big data. The telco example, while a step into the vertical behavioral part, is not really on big data. It used a sample of data from the customers' data warehouse. One thing I do think, and this is where I think parts of the NoSQL stuff come from, is that we will be doing this analysis where the data is. Over the past 10 years we at Oracle have called this in-database analytics. I guess we were (too) early? Now the entire market is going there including companies like SAS. In-place btw does not mean "no data movement at all"; what it means is that you will do this in the data's permanent home. For SAS that is kind of the current problem. Most of the inputs live in a data warehouse. So why move it into SAS and back? That all worked with 1 TB data warehouses, but when we are looking at 100TB to 500 TB of distilled data... Comments? As it is still early days with these systems, I'm very interested in seeing reactions and thoughts to some of these thoughts...

    Read the article

  • Windows Azure VMs - New "Stopped" VM Options Provide Cost-effective Flexibility for On-Demand Workloads

    - by KeithMayer
    Originally posted on: http://geekswithblogs.net/KeithMayer/archive/2013/06/22/windows-azure-vms---new-stopped-vm-options-provide-cost-effective.aspx Didn’t make it to TechEd this year? Don’t worry!  This month, we’ll be releasing a new article series that highlights the Best of TechEd announcements and technical information for IT Pros.  Today’s article focuses on a new, much-heralded enhancement to Windows Azure Infrastructure Services to make it more cost-effective for spinning VMs up and down on-demand on the Windows Azure cloud platform. NEW! VMs that are shut down from the Windows Azure Management Portal will no longer continue to accumulate compute charges while stopped! Before this enhancement was available, the Azure platform maintained fabric resource reservations for VMs, even in a shutdown state, to ensure consistent resource availability when starting those VMs in the future.  And, this meant that VMs had to be exported and completely deprovisioned when not in use to avoid compute charges. In this article, I'll provide more details on the scenarios that this enhancement best fits, and I'll also review the new options and considerations that we now have for performing safe shutdowns of Windows Azure VMs. Which scenarios does the new enhancement best fit? Being able to easily shut down VMs from the Windows Azure Management Portal without continued compute charges is a great enhancement for certain cloud use cases, such as: On-demand dev/test/lab environments - Freely start and stop lab VMs so that they are only accumulating compute charges when being actively used.  "Bursting" load-balanced web applications - Provision a number of load-balanced VMs, but keep the minimum number of VMs running to support "normal" loads. Easily start up the remaining VMs only when needed to support peak loads. Disaster Recovery - Start up "cold" VMs when needed to recover from disaster scenarios. BUT ... there is a consideration to keep in mind when using the Windows Azure Management Portal to shut down VMs: although performing a VM shutdown via the Windows Azure Management Portal causes that VM to no longer accumulate compute charges, it also deallocates the VM from fabric resources to which it was previously assigned.  These fabric resources include compute resources such as virtual CPU cores and memory, as well as network resources, such as IP addresses.  This means that when the VM is later started after being shut down from the portal, the VM could be assigned a different IP address or placed on a different compute node within the fabric. In some cases, you may want to shut down VMs using the old approach, where fabric resource assignments are maintained while the VM is in a shutdown state.  Specifically, you may wish to do this when temporarily shutting down or restarting a "7x24" VM as part of a maintenance activity.  Good news - you can still revert back to the old VM shutdown behavior when necessary by using the alternate VM shutdown approaches listed below.  Let's walk through each approach for performing a VM Shutdown action on Windows Azure so that we can understand the benefits and considerations of each... How many ways can I shut down a VM?
In Windows Azure Infrastructure Services, there's three general ways that can be used to safely shutdown VMs: Shutdown VM via Windows Azure Management Portal Shutdown Guest Operating System inside the VM Stop VM via Windows PowerShell using Windows Azure PowerShell Module Although each of these options performs a safe shutdown of the guest operation system and the VM itself, each option handles the VM shutdown end state differently. Shutdown VM via Windows Azure Management Portal When clicking the Shutdown button at the bottom of the Virtual Machines page in the Windows Azure Management Portal, the VM is safely shutdown and "deallocated" from fabric resources.  Shutdown button on Virtual Machines page in Windows Azure Management Portal  When the shutdown process completes, the VM will be shown on the Virtual Machines page with a "Stopped ( Deallocated )" status as shown in the figure below. Virtual Machine in a "Stopped (Deallocated)" Status "Deallocated" means that the VM configuration is no longer being actively associated with fabric resources, such as virtual CPUs, memory and networks. In this state, the VM will not continue to allocate compute charges, but since fabric resources are deallocated, the VM could receive a different internal IP address ( called "Dynamic IPs" or "DIPs" in Windows Azure ) the next time it is started.  TIP: If you are leveraging this shutdown option and consistency of DIPs is important to applications running inside your VMs, you should consider using virtual networks with your VMs.  Virtual networks permit you to assign a specific IP Address Space for use with VMs that are assigned to that virtual network.  As long as you start VMs in the same order in which they were originally provisioned, each VM should be reassigned the same DIP that it was previously using. What about consistency of External IP Addresses? Great question! External IP addresses ( called "Virtual IPs" or "VIPs" in Windows Azure ) are associated with the cloud service in which one or more Windows Azure VMs are running.  As long as at least 1 VM inside a cloud service remains in a "Running" state, the VIP assigned to a cloud service will be preserved.  If all VMs inside a cloud service are in a "Stopped ( Deallocated )" status, then the cloud service may receive a different VIP when VMs are next restarted. TIP: If consistency of VIPs is important for the cloud services in which you are running VMs, consider keeping one VM inside each cloud service in the alternate VM shutdown state listed below to preserve the VIP associated with the cloud service. Shutdown Guest Operating System inside the VM When performing a Guest OS shutdown or restart ( ie., a shutdown or restart operation initiated from the Guest OS running inside the VM ), the VM configuration will not be deallocated from fabric resources. In the figure below, the VM has been shutdown from within the Guest OS and is shown with a "Stopped" VM status rather than the "Stopped ( Deallocated )" VM status that was shown in the previous figure. Note that it may require a few minutes for the Windows Azure Management Portal to reflect that the VM is in a "Stopped" state in this scenario, because we are performing an OS shutdown inside the VM rather than through an Azure management endpoint. Virtual Machine in a "Stopped" Status VMs shown in a "Stopped" status will continue to accumulate compute charges, because fabric resources are still being reserved for these VMs.  
However, this also means that DIPs and VIPs are preserved for VMs in this state, so you don't have to worry about VMs and cloud services getting different IP addresses when they are started in the future. Stop VM via Windows PowerShell In the latest version of the Windows Azure PowerShell Module, a new -StayProvisioned parameter has been added to the Stop-AzureVM cmdlet. This new parameter provides the flexibility to choose the VM configuration end result when stopping VMs using PowerShell: When running the Stop-AzureVM cmdlet without the -StayProvisioned parameter specified, the VM will be safely stopped and deallocated; that is, the VM will be left in a "Stopped ( Deallocated )" status just like the end result when a VM Shutdown operation is performed via the Windows Azure Management Portal.  When running the Stop-AzureVM cmdlet with the -StayProvisioned parameter specified, the VM will be safely stopped but fabric resource reservations will be preserved; that is the VM will be left in a "Stopped" status just like the end result when performing a Guest OS shutdown operation. So, with PowerShell, you can choose how Windows Azure should handle VM configuration and fabric resource reservations when stopping VMs on a case-by-case basis. TIP: It's important to note that the -StayProvisioned parameter is only available in the latest version of the Windows Azure PowerShell Module.  So, if you've previously downloaded this module, be sure to download and install the latest version to get this new functionality. Want to Learn More about Windows Azure Infrastructure Services? To learn more about Windows Azure Infrastructure Services, be sure to check-out these additional FREE resources: Become our next "Early Expert"! Complete the Early Experts "Cloud Quest" and build a multi-VM lab network in the cloud for FREE!  Build some cool scenarios! Check out our list of over 20+ Step-by-Step Lab Guides based on key scenarios that IT Pros are implementing on Windows Azure Infrastructure Services TODAY!  Looking forward to seeing you in the Cloud! - Keith Build Your Lab! Download Windows Server 2012 Don’t Have a Lab? Build Your Lab in the Cloud with Windows Azure Virtual Machines Want to Get Certified? Join our Windows Server 2012 "Early Experts" Study Group

    Read the article

  • Visiting the Emtel Data Centre

    Back in February at the first event of the Emtel Knowledge Series (EKS) I spoke to various people at Emtel about their data centre here on the island. I was trying to see whether it would be possible to arrange a meeting over there for a selected group of our community members. Well, let's say it like this... My first approach wasn't that promising and was far from successful, but during the following months there were more and more occasions to get in touch with the "right" contact persons at Emtel to make it happen... Setting up an appointment and pre-requisites The major improvement came during a Boot Camp for Windows Phone 8.1 App development organised by Microsoft Indian Ocean Islands in cooperation with Emtel at the Emtel World, Ebene. Apart from learning bits and pieces regarding Universal Apps I took the opportunity to get in touch with Arvin Lockee, Sales Executive - Data, during our lunch break. And this really kicked off the whole procedure. Prior to getting access to the Emtel data centre it is requested that you provide the full name and National ID of anyone going to visit. Also, it should be noted that there was only a limited number of seats available. Anyways, packed with this information I posted through the usual social media channels. Responses came in very quickly and on a first-come, first-served (FCFS) basis I noted down the details and forwarded them to Emtel in order to fix a date and time for the visit. In preparation on our side, all attendees exchanged contact details and we organised transport options to go to the data centre in Arsenal. The day before and on the day of our meeting, Arvin sent me a reminder to check whether everything was still confirmed and ready to go... Of course, it was! Arriving at the Emtel Data Centre As I'm coming from Flic En Flac towards the North, we agreed that I'm going to pick up a couple of young fellows near the old post office in Port Louis. All went well, except that Sean eventually might be living in another time zone compared to the rest of us. Anyway, after an extended stop we were complete and arrived just in time in Arsenal to meet and greet with Ish and Veer. Again, Emtel takes access procedures to their data centre very seriously, and the gate stayed closed until all our IDs had been noted and compared to the list of registered attendees. Despite having a good laugh at the mixture of old and new ID cards it was a straightforward process. The guard was very helpful and guided us to the waiting area at the entrance section of the building. Shortly after we were welcomed by Kamlesh Bokhoree, the Data Centre Officer. He gave us a brief introduction to the rules and regulations for our visit, like no photography allowed, not touching the buttons, and following his instructions through the whole visit. Of course! Inside the data centre Next, he explained the multi-factor authentication system to us, using a combination of bio-metric data, like a fingerprint reader, and a "classic" PIN panel. The Emtel data centre provides multiple services, and in addition to co-location for your own hardware they also offer storage options for your backup and archive data in their massive, fire-resistant vault. Very impressive to get to know about the considerations that went into choosing the right location and how to set up the whole premises. It should also be noted that there is 24/7 CCTV surveillance inside and outside the buildings. Strengths of the Emtel TIER 3 Data Centre, Mauritius Finally, we were guided into the first server room. 
And wow, the whole setup is cleverly planned and outlined in the architecture. From the false floor and ceilings in order to provide optimum air flow, to the separation of cold and hot aisles between the full-size server racks, and of course the monitored air conditions in order to analyse and watch changes in temperature, smoke detection and other parameters. And not surprisingly everything has been implemented in two independent circuits. There is a standardised classification for the construction and operation of data centres world-wide, and Emtel's has been designed to be a TIER 4 building, but due to the lack of an alternative power supplier on the island it is officially registered as a TIER 3 compliant data centre. Maybe in the long run there might be a second supplier of energy next to CEB... time will tell. Luckily, the data centre is integrated into the National Fibre Optic Gigabit Ring and Emtel already connects internationally through diverse undersea cable routes like SAFE & LION/LION2 out of Mauritius and through several other providers for onwards connectivity. The data centre is part of the National Fibre Optic Gigabit Ring and has redundant internet connectivity onwards. Meanwhile, Arvin managed to join our little group of geeks and he supported Kamlesh in answering our technical questions regarding the capacities and general operation of the data centre. Visiting the NOC and its dedicated team of IT professionals was surely one of the visual highlights. Seeing their wall of screens to monitor any kind of activities on the data lines, the managed servers and the activity in and around the building was great. Even though I've been using a multi-head setup for years I cannot keep up with that setup... ;-) But I got a couple of ideas on how to improve my work spaces here at the office. Clear advantages of hosting your e-commerce and mobile backends locally After the completely isolated NOC area we continued our Q&A session with Kamlesh and Arvin in the second server room which is dedicated to shared environments. On first thought it should be well-noted that there is lots of space for full-sized racks and therefore co-location of your own hardware. Actually, given the feedback that there will be upcoming changes in prices, the facilities at the Emtel data centre are getting more and more competitive and interesting for local companies, especially small and medium enterprises. After seeing this world-class infrastructure available on the island, I'm already considering moving one of my root servers from abroad to be co-located here on the island. This would provide an improved user experience in terms of site performance and latency. This would be a good improvement, especially for upcoming e-commerce solutions for two of my local clients. Later on, we actually started a conversation about additional services that could be a catalyst for the local market in order to attract more small and medium companies to take the data centre into their evaluations regarding online activities. To date, Emtel does not provide virtualised server environments, but there might be plans in the future to cover this field as well. Emtel is first and foremost a mobile operator and internet connectivity provider; entering a market of managed and virtualised server infrastructures, including capacities in terms of cloud storage and computing, is rather new, and there is a continuous learning curve at Emtel, too. You cannot just jump into a new market and see how it works out... 
And I appreciate Emtel's approach of building a solid foundation first and then adding new services on top of it: Emtel as a future one-stop-shop service provider for all your internet and telecommunications needs. Emtel's promotional video about their TIER 3 data centre in Arsenal, Mauritius More details are thoroughly described in Emtel's brochure about their data centre. Check out their PDF document here. Thanks for this opportunity Visiting and walking through the Emtel data centre for more than 2 hours was a great experience. As a representative of the Mauritius Software Craftsmanship Community (MSCC) I would like to thank everyone at Emtel involved in the process of making it happen, and especially Arvin Lockee and Kamlesh Bokhoree for their time and patience in explaining the infrastructure and answering all the endless questions from our members. Thank You!

    Read the article

  • DBCC CHECKDB on VVLDB and latches (Or: My Pain is Your Gain)

    - by Argenis
      Does your CHECKDB hurt, Argenis? There is a classic blog series by Paul Randal [blog|twitter] called “CHECKDB From Every Angle” which is pretty much mandatory reading for anybody who’s even remotely considering going for the MCM certification, or its replacement (the Microsoft Certified Solutions Master: Data Platform – makes my fingers hurt just from typing it). Of particular interest is the post “Consistency Options for a VLDB” – on it, Paul provides solid, timeless advice (I use the word “timeless” because it was written in 2007, and it all applies today!) on how to perform checks on very large databases. Well, here I was trying to figure out how to make CHECKDB run faster on a restored copy of one of our databases, which happens to exceed 7TB in size. The whole thing was taking several days on multiple systems, regardless of the storage used – SAS, SATA or even SSD…and I actually didn’t pay much attention to how long it was taking, or even bother to look at the reasons why - as long as it was finishing okay and found no consistency errors. Yes – I know. That was a huge mistake, as corruption found in a database several days after taking place could only allow for further spread of the corruption – and potentially large data loss. In the last two weeks I increased my attention towards this problem, as we noticed that CHECKDB was taking EVEN LONGER on brand new all-flash storage in the SAN! I couldn’t really explain it, and we were almost ready to blame the storage vendor. The vendor told us that they could initially see the server driving decent I/O – around 450Mb/sec, and then it would settle at a very slow rate of 10Mb/sec or so. “Hum”, I thought – “CHECKDB is just not pushing the I/O subsystem hard enough”. Perfmon confirmed the vendor’s observations. Dreaded @BlobEater What was CHECKDB doing all the time while doing so little I/O? Eating Blobs. It turns out that CHECKDB was taking an extremely long time on one of our frankentables, which happens to have 35 billion rows (yup, with a b) and sucks up several terabytes of space in the database. We do have a project ongoing to purge/split/partition this table, so it’s just a matter of time before we deal with it. But the reality today is that CHECKDB is coming to a screeching halt in performance when dealing with this particular table. Checking sys.dm_os_waiting_tasks and sys.dm_os_latch_stats showed that LATCH_EX (DBCC_OBJECT_METADATA) was by far the top wait type. I remembered hearing recently about that wait from another post that Paul Randal made, but that was related to computed-column indexes, and in fact, Paul himself reminded me of his article via twitter. But alas, our pathologic table had no non-clustered indexes on computed columns. I knew that latches are used by the database engine to do internal synchronization – but how could I help speed this up? After all, this is stuff that doesn’t have a lot of knobs to tweak. (There’s a fantastic level 500 talk by Bob Ward from Microsoft CSS [blog|twitter] called “Inside SQL Server Latches” given at PASS 2010 – and you can check it out here. DISCLAIMER: I assume no responsibility for any brain melting that might ensue from watching Bob’s talk!) 
Failed Hypotheses Earlier on this week I flew down to Palo Alto, CA, to visit our Headquarters – and after having a great time with my Monkey peers, I was relaxing on the plane back to Seattle watching a great talk by SQL Server MVP and fellow MCM Maciej Pilecki [twitter] called “Masterclass: A Day in the Life of a Database Transaction” where he discusses many different topics related to transaction management inside SQL Server. Very good stuff, and when I got home it was a little late – that slow DBCC CHECKDB that I had been dealing with was way in the back of my head. As I was looking at the problem at hand earlier on this week, I thought “How about I set the database to read-only?” I remembered one of the things Maciej had (jokingly) said in his talk: “if you don’t want locking and blocking, set the database to read-only” (or something to that effect, pardon my loose memory). I immediately killed the CHECKDB which had been running painfully for days, and set the database to read-only mode. Then I ran DBCC CHECKDB against it. It started going really fast (even a bit faster than before), and then throttled down again to around 10Mb/sec. All sorts of expletives went through my head at the time. Sure enough, the same latching scenario was present. Oh well. I even spent some time trying to figure out if NUMA was hurting performance. Folks on Twitter made suggestions in this regard (thanks, Lonny! [twitter]) …Eureka? This past Friday I was still scratching my head about the whole thing; I was ready to start profiling with XPERF to see if I could figure out which part of the engine was to blame and then get Microsoft to look at the evidence. After getting a bunch of good news I’ll blog about separately, I sat down for a figurative smack down with CHECKDB before the weekend. And then the light bulb went on. A sparse column. I thought that I couldn’t possibly be experiencing the same scenario that Paul blogged about back in March showing extreme latching with non-clustered indexes on computed columns. Did I even have a non-clustered index on my sparse column? As it turns out, I did. I had one filtered non-clustered index – with the sparse column as the index key (and only column). To prove that this was the problem, I went and set up a test. Yup, that'll do it The repro is very simple for this issue: I tested it on the latest public builds of SQL Server 2008 R2 SP2 (CU6) and SQL Server 2012 SP1 (CU4). First, create a test database and a test table, which only needs to contain a sparse column: CREATE DATABASE SparseColTest; GO USE SparseColTest; GO CREATE TABLE testTable (testCol smalldatetime SPARSE NULL); GO INSERT INTO testTable (testCol) VALUES (NULL); GO 1000000 That’s 1 million rows, and even though you’re inserting NULLs, that’s going to take a while. On my laptop, it took 3 minutes and 31 seconds. Next, we run DBCC CHECKDB against the database: DBCC CHECKDB('SparseColTest') WITH NO_INFOMSGS, ALL_ERRORMSGS; This runs extremely fast, at least on my test rig – 198 milliseconds. Now let’s create a filtered non-clustered index on the sparse column: CREATE NONCLUSTERED INDEX [badBadIndex] ON testTable (testCol) WHERE testCol IS NOT NULL; With the index in place now, let’s run DBCC CHECKDB one more time: DBCC CHECKDB('SparseColTest') WITH NO_INFOMSGS, ALL_ERRORMSGS; In my test system this statement completed in 11433 milliseconds. 11.43 full seconds. Quite the jump from 198 milliseconds. 
I went ahead and dropped the filtered non-clustered indexes on the restored copy of our production database, and ran CHECKDB against that. We went down from 7+ days to 19 hours and 20 minutes. Cue the “Argenis is not impressed” meme, please, Mr. LaRock. My pain is your gain, folks. Go check to see if you have any such indexes – they’re likely causing your consistency checks to run very, very slowly. Happy CHECKDBing, -Argenis ps: I plan to file a Connect item for this issue – I consider it a pretty serious bug in the engine. After all, filtered indexes were invented BECAUSE of the sparse column feature – and it makes a lot of sense to use them together. Watch this space and my twitter timeline for a link.
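If you want to check for this pattern programmatically rather than eyeballing Object Explorer, a minimal C# sketch along the following lines should surface filtered indexes whose key or included columns contain a sparse column. To be clear, this is an editorial addition and not part of Argenis' post: the connection string is a placeholder and the catalog query is simply an assumption of how to express the check against sys.indexes, sys.index_columns and sys.columns.

    using System;
    using System.Data.SqlClient;

    // Lists filtered indexes that touch a sparse column, the combination that
    // slowed DBCC CHECKDB so dramatically in the post above.
    class SparseFilteredIndexCheck
    {
        static void Main()
        {
            // Placeholder connection string: point it at the database you want to inspect.
            const string connectionString = @"Server=.;Database=SparseColTest;Integrated Security=true";

            const string query = @"
                SELECT OBJECT_NAME(i.object_id) AS table_name, i.name AS index_name
                FROM sys.indexes AS i
                JOIN sys.index_columns AS ic
                    ON ic.object_id = i.object_id AND ic.index_id = i.index_id
                JOIN sys.columns AS c
                    ON c.object_id = ic.object_id AND c.column_id = ic.column_id
                WHERE i.has_filter = 1 AND c.is_sparse = 1;";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(query, connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine("{0}.{1}", reader.GetString(0), reader.GetString(1));
                    }
                }
            }
        }
    }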

    Read the article

  • Need personal advice on how to get out of a company..

    - by SOfan
    Hi, I am an SO user since past 6 months and this is the first time I am turning to SO for personal help. I have asked technical questions before with my real ID. I am stuck inside a service based IT company for the past one year and haven't been able to decide if to leave it, when to leave it and how to leave it. I had taken 2 weeks LWP on medical reason roughly at end of 1 year and then soon after reporting, I applied for 2 months more LWP (on medical/personal ground) with the intention of working on my health,take up a hobby class to ward off depression,pessimism, to have some fun in life, and to look for a job which I really would be excited about - that interests me and which matches with my strength. My leave starts from this Monday. So in any case, I had hard set in mind that I will leave the company after I join them back hopefully with some job offer already in hand (after figuring out what I want do). Neither I can stand the past project,past colleagues,company, HR, pathetically low salary. But if I really listen to my heart, I don't want to have to go back to that office after my sabbatical and again have to see those people. I will have to resign it after my sabbatical ends. Then HR people perhaps wont like it, may even accuse me on face or behind back that primary purpose of my leave must have been to hunt for a better job and I lied about medical and person reasons. Also, if they get nasty and force me to serve 2 months notice period. There is no way I see myself after sabbatical resuming in old project or starting new work. It will be a pain. Since they have already approved 2 months leave and stuff, ideally if they want, they should be just able to relieve me right on the next day after I join back. But, I don't know if they want to get nasty, will they mention about my 2 months sabbatical leave in my experience letter or more scary, the term medical/personal reason. I have hard earned my experience here, have worked against my will, mostly it has been painful and slogged like anything, because I realize the importance of work experience in IT industry. I don't have greed to have those 2 months included extra in my experience letter, but I don't want to mess up with my experience letter in a way which makes my next employer ask question, get suspicious, or be wary if I have any medical reason going on. Being an emotional,moody person or somebody who can't be in an environment, once my mind and heart starts hating it. I think it perhaps is best, if I resign on Monday itself telling them (in polite manner) something that look I took sabbatical for some reason but I don't want to resume working in the company after my sabbatical ends. So please accept my resignation. Now tell me what you want to do about my leave request, my notice period and when you are willing to relieve me. What should I write and how? Some background: I am working in an IT company in India.I am overqualified in the company. It is grossly underpaying me. My education qualifications far exceed anyone's in the whole company being a CS undergrad as well as a CS grad. I joined this company after finishing the grad. I had self-doubts about my skills and interest as a programmer. I like doing research oriented work, though didn't have any particular success during grad. My life here has been very hectic. The project containing many many sub-projects has kept me on my toes and I have never really liked the work. I have been playing against my strength. 
Also the company strict internet usage policy (you can't read gmails, can't browse any non-work related sites not even news). When working for a client, from the machine we can't even check company related emails.For this one has to go to kiosk like 5 machines in a small room etc. Most of the times those machines are not available, so it was not unusual to keep making rounds to these kiosk machines to check company emails, browse company related emails etc.So it was not so easy to keep in touch with company related basic affairs for a not particular careful person. Things like this which are new to me, make me feel restricted. I am an undecisive person with a sense of failure, self-doubt, not meeting up unrealistic expectation. Somewhere at back of mind, I envy my classmates who make a smooth transition from company to company without causing any gap in their resume. I on other hand have gaps in resume. I get tired after working in a place for sometime. problem with colleagues in general. I am not particular great with people, have few friends, not known for a fun nature, rather serious, scholar. I am not a typical conventional female. I think females are usually more disciplined. But I am not so. I reach office late (though after informing manager). I don't want to blame them entirely, because from my past, it is not unusual for me to get undecisive on things. Also I had doubts about my ability as researched and to succeed there. of building a relationship in a group, to have something to talk about, newspaper. I get cut-off from people. peer pressure. I make blunders in coding, lose patience. Consciously or unconsciously I feel contempt for people here, work here, environment here. I have doubts that either I go to a place which does innovation, does research oriented work, product biggies, have great motivated people, have competent people passionate about products they are building. But then I also doubt my ability to survive there. I have identified that an idea job for me would be 4 days a week, a high salary job. When among people in company/team, I can't think much. I need some time at home to read good authentic books written in good style on what work I am doing.So that I am comfortable with my understanding of work. I get into pressure easily under deadline and need 5th day to cool myself off. I took for 2 weeks leave, because each day was hell for me. May be the depression phase of bipolar is on and also partially it could be that being a work centered person, who derives happiness,self-esteem from work, haven't been enjoying work and have been working for the sole person of proving stability, and ability to stick, against all odds, and facing what challenges I see, bonding with people, identifying opportunities to learn in given task etc.have been averaging one day LWP in 1 week or 10 days. or may be because of my nature,ADD,not being able to switch context,out of touch with news, don't have a circle of friends with who I enjoy. less knowledge in general to talk about, just some technical stuff.anyway, so due to emotional reason, some practical reason etc, I wanted to be very sure before leaving. So my leave starts from Monday and I should feel happy about it. I have taken the leave to for a few purposes - to take care of my health by regular yoga/exercise (with project on, I just can't do anything regular), reassess myself to see what I want to try next which work I might like, look for next job, take up a hobby which I like say singing. 
I am not clear on my career or job aspirations. I have tried my hand at research. During this year's appraisal yesterday, I even had some conflict with my last manager. In meetings with me one on one, he would say all nice things about me, but in his feedback to my new manager, he hasn't given any excellent feedback. It is all only good. I am angry at this old manager. My new manager also scolded me, as I didn't agree with his appraisal and waited to hear from my old manager myself. He kind of scolded me for wasting his time. Am I being unethical somewhere? I am always very conscious of whether I am cheating anywhere. What advice am I seeking? How to resign, and what to write in the resignation letter.

    Read the article

  • How can I read the verbose output from a Cmdlet in C# using Exchange Powershell

    - by mrkeith
    Environment: Exchange 2007 sp3 (2003 sp2 mixed mode) Visual Studio 2008, .Net 3.5 Hello, I'm working with an Exchange powershell move-mailbox cmdlet and have noted when I do so from the Exchange Management shell (using the Verbose switch) there is a ton of real-time information provided. To provide a little context, I'm attempting to create a UI application that moves mailboxes similarly to the Exchange Management Console but desire to support an input file and specific server/database destinations for each entry (and threading). Here's roughly what I have at present but I'm not sure if there is an event I need to register for or what... And to be clear, I desire to get this information in real-time so I may update my UI to reflect what's occurring in the move sequence for the appropriate user (pretty much like the native functionality offered in the Management Console). And in case you are wondering, the reason why I'm not content with the Management Console functionality is, I have an algorithm which I'm using to balance users depending on storage limit, Blackberry use, journaling, exception mailbox size etc which demands user be mapped to specific locations... and I do not desire to create many/several move groups for each common destination or to hunt for lists of users individually through the management console UI. I can not seem to find any good documentation or examples of how to tie into reading the verbose messages that are provided within the console using C# (I see value in being able to read this kind of information in many different scenarios). I've explored the Invoke and InvokeAsync methods and the StateChanged & DataReady events but none of these seem to provide the information (verbose comments) that I'm after. Any direction or examples that can be provided will be very appreciated! 
A code sample, which is little more than how I would ordinarily call any other PowerShell command, follows:

    // config to use ExMgmt shell, create runspace and open it
    RunspaceConfiguration rsConfig = RunspaceConfiguration.Create();
    PSSnapInException snapInException = null;
    PSSnapInInfo info = rsConfig.AddPSSnapIn("Microsoft.Exchange.Management.PowerShell.Admin", out snapInException);
    if (snapInException != null) throw snapInException;
    Runspace runspace = RunspaceFactory.CreateRunspace(rsConfig);
    try
    {
        runspace.Open();
        // create a pipeline and feed script text
        Pipeline pipeline = runspace.CreatePipeline();
        string targetDatabase = @"myServer\myStorageGroup\myDB";
        string mbxOwner = "[email protected]";
        Command myMoveMailbox = new Command("Move-Mailbox", false, false);
        myMoveMailbox.Parameters.Add("Identity", mbxOwner);
        myMoveMailbox.Parameters.Add("TargetDatabase", targetDatabase);
        myMoveMailbox.Parameters.Add("Verbose");
        myMoveMailbox.Parameters.Add("ValidateOnly");
        myMoveMailbox.Parameters.Add("Confirm", false);
        pipeline.Commands.Add(myMoveMailbox);
        System.Collections.ObjectModel.Collection<PSObject> output = null;
        // these next few lines that are commented out are where I've tried
        // registering for events and calling asynchronously but this doesn't
        // seem to get me anywhere closer
        //
        //pipeline.StateChanged += new EventHandler<PipelineStateEventArgs>(pipeline_StateChanged);
        //pipeline.Output.DataReady += new EventHandler(Output_DataReady);
        //pipeline.InvokeAsync();
        //pipeline.Input.Close();
        //return;
        // tried these variations that are commented out but none seem to be useful
        output = pipeline.Invoke();
        // Check for errors in the pipeline and throw an exception if necessary
        if (pipeline.Error != null && pipeline.Error.Count > 0)
        {
            StringBuilder pipelineError = new StringBuilder();
            pipelineError.AppendFormat("Error calling Test() Cmdlet. ");
            foreach (object item in pipeline.Error.ReadToEnd())
                pipelineError.AppendFormat("{0}\n", item.ToString());
            throw new Exception(pipelineError.ToString());
        }
        foreach (PSObject psObject in output)
        {
            // blah, blah, blah
            // this is normally where I would read details about a particular PS command
            // but it really pertains to a command once it finishes and has nothing to do with
            // the verbose messages that I'm after... since this part of the method pertains
            // to the after-effects of a command having run, I'm suspecting I need to look to
            // the async invoke method but am not certain how.
        }
    }
    finally
    {
        runspace.Close();
    }

Thanks! Keith
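One approach that is often suggested for reading the verbose stream as it happens, offered here only as a sketch and assuming the PowerShell 2.0 runtime (System.Management.Automation v2) is available alongside the Exchange snap-in, is to swap the Pipeline API for the PowerShell class. Its Streams.Verbose collection raises DataAdded for every record the cmdlet emits, so progress can be pushed to the UI while Move-Mailbox is still running:

    using System;
    using System.Management.Automation;
    using System.Management.Automation.Runspaces;

    // Sketch only: the runspace is assumed to be the same one opened with the
    // Exchange snap-in in the question; names such as RunMove are hypothetical.
    static class VerboseMoveMailbox
    {
        public static void RunMove(Runspace runspace, string mailboxOwner, string targetDatabase)
        {
            using (PowerShell ps = PowerShell.Create())
            {
                ps.Runspace = runspace;

                ps.AddCommand("Move-Mailbox")
                  .AddParameter("Identity", mailboxOwner)
                  .AddParameter("TargetDatabase", targetDatabase)
                  .AddParameter("Verbose")
                  .AddParameter("Confirm", false);

                // Fires as each verbose record arrives, not after Invoke() returns.
                ps.Streams.Verbose.DataAdded += (sender, e) =>
                {
                    VerboseRecord record = ps.Streams.Verbose[e.Index];
                    Console.WriteLine("VERBOSE: " + record.Message); // update the UI here instead
                };

                ps.Invoke(); // or BeginInvoke() to keep the calling thread free
            }
        }
    }

The DataAdded handler runs on a worker thread, so a WinForms or WPF UI would need to marshal back to the UI thread (Control.Invoke or the Dispatcher) before touching any controls.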

    Read the article

  • How to show form in front in C#

    - by corlettk
    Folks, please does anyone know how to show a Form from an otherwise invisible application, and have it get the focus (i.e. appear on top of other windows)? I'm working in C# .NET 3.5. I suspect I've taken "completely the wrong approach"... I do not call Application.Run(new TheForm()); instead I call (new TheForm()).ShowModal()... The Form is basically a modal dialogue, with a few check-boxes, a text-box, and OK and Cancel buttons. The user ticks a checkbox and types in a description (or whatever) then presses OK; the form disappears and the process reads the user input from the Form, Disposes it, and continues processing. This works, except that when the form is shown it doesn't get the focus; instead it appears behind the "host" application, until you click on it in the taskbar (or whatever). This is a most annoying behaviour, which I predict will cause many "support calls", and the existing VB6 version doesn't have this problem, so I'm going backwards in usability... and users won't accept that (and nor should they). So... I'm starting to think I need to rethink the whole shebang... I should show the form up front, as a "normal application", and attach the remainder of the processing to the OK-button-click event. It should work, but that will take time which I don't have (I'm already over time/budget)... so first I really need to try to make the current approach work... even by quick-and-dirty methods. So please does anyone know how to "force" a .NET 3.5 Form (by fair means or foul) to get the focus? I'm thinking "magic" Windows API calls (I know...). Twilight Zone: This only appears to be an issue at work, where I'm using Visual Studio 2008 on Windows XP SP3... I've just failed to reproduce the problem with an SSCCE (see below) at home on Visual C# 2008 on Vista Ultimate... This works fine. Huh? WTF? Also, I'd swear that at work yesterday it showed the form when I ran the EXE, but not when F5'ed (or Ctrl-F5'ed) straight from the IDE (which I just put up with)... At home the form shows fine either way. Totally confusterpating! It may or may not be relevant, but Visual Studio crashed-and-burned this morning when the project was running in debug mode and I was editing the code "on the fly"... it got stuck in what I presumed was an endless loop of error messages. The error message was something about "can't debug this project because it is not the current project", or something... So I just killed it off with Process Explorer. It started up again fine, and even offered to recover the "lost" file, an offer which I accepted. using System; using System.Windows.Forms; namespace ShowFormOnTop { static class Program { [STAThread] static void Main() { Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); //Application.Run(new Form1()); Form1 frm = new Form1(); frm.ShowDialog(); } } } Background: I'm porting an existing VB6 implementation to .NET... It's a "plugin" for a "client" GIS application called MapInfo. The existing client "worked invisibly" and my instructions are "to keep the new version as close as possible to the old version", which works well enough (after years of patching); it's just written in an unsupported language, so we need to port it. About me: I'm pretty much a noob to C# and .NET generally, though I've got a bottoms-wiping certificate and I have been a professional programmer for 10 years, so I sort of "know some stuff". Any insights would be most welcome... and thank you all for taking the time to read this far. Conciseness isn't (apparently) my forte. Cheers. Keith.
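For the quick-and-dirty route asked about above, the usual tricks are TopMost, Activate and, as a last resort, the SetForegroundWindow API. Below is a rough sketch only; TheForm is the questioner's own dialog class (so this will not compile on its own), and Windows can refuse foreground activation in some situations, which may explain why the behaviour differs between the XP and Vista machines:

    using System;
    using System.Runtime.InteropServices;
    using System.Windows.Forms;

    static class FrontMostDialog
    {
        [DllImport("user32.dll")]
        private static extern bool SetForegroundWindow(IntPtr hWnd);

        // Shows the questioner's dialog and tries hard to bring it to the front.
        public static DialogResult ShowInFront()
        {
            using (var form = new TheForm())
            {
                form.TopMost = true;                   // stay above non-topmost windows
                form.Shown += delegate
                {
                    form.Activate();                   // ask for focus politely
                    SetForegroundWindow(form.Handle);  // ...then insist via the API
                };
                return form.ShowDialog();
            }
        }
    }

If the dialog still ends up behind the host application, passing the host's window as the owner to ShowDialog(IWin32Window) is another common workaround, provided a handle to that window can be obtained.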

    Read the article

  • Fitting it together, database, reporting, applications in C#

    - by alvonellos
    Introduction Preamble I was hesitant to post this, since it's an application whose intricate details are defined elsewhere, and answers may not be helpful to others. Within the past few weeks (I was actually going to write a blog post about this after I finished) I've discovered that the barrier I'm encountering is one that's actually quite common for newer developers. This question is not so much about a specific thing as it is about piecing those things together. I've searched the internet far and wide, and found many tutorials on how to create applications that are kind of similar to what I'm looking for. I've also looked at hiring another, more experienced, developer to help me along, but all I've gotten are unqualified candidates that don't have the experience necessary and won't take care of the client or project like I will. I'd rather have the project never transpire than to release a solution that is half-baked. I've asked professors at my school, but they've not turned up answers to my question. I'm an experienced developer, and I've written many applications that are -- very abstractly -- close to what I'm doing, but my experiences from those applications aren't giving me enough leverage to solve this particular problem. I just hope that posting this article isn't a mistake for me to write. Project Description I have a project I'm working on for a client that is a rewrite of an application, originally written in Foxpro 2.6 by someone before me, that performs some analysis (which, sadly, I'm not allowed to disclose as per of my employment contract) on financial data. One day, after a long talk between the client and I -- where he intimately described his frustrations with all the bugs I've been hacking out of this code for 6 months now -- he told me to just rewrite it and gave me a month to write a good 1/8 of this 65k LOC Foxpro monstrosity. this 65k line of code foxpro monstrosity. It'll take me a good 3 - 6 months to rewrite this software (I know things the original programmer did not, like inheritance) going as I am right now, but I'm quickly discovering that I'm going to need to use databases. Prior to this contract I didn't even know about foxpro, and so I've had to learn foxpro on the fly, write procedures and make modifications to the database. I've actually come to like it, and this project would be rewritten in Foxpro if it were still a supported language, because over the past few months, I've come to like the features of Foxpro that make it so easy to develop data-driven applications. I once perfomed an experiment, comparing C# to Foxpro. What took me 45 minutes in C# took me two in Foxpro, and I knew C# prior to Foxpro. I was hoping to leverage the power of C#, but it intimidates me that in foxpro, you can have one line of code and be using a database. Prior to this, I have never written any serious database development from scratch. All the applications that I've written are in a different league. They are either completely data-naive or data-naive enough that I can get away with not using a database through serialization or by designing algorithms that work with the data in a manner that is stateless, so there is no need to worry about databases. I've come to realize, very quickly, that serialization and my efficacy with data structures has been my crutch all these years that's prevented me from adventuring into databases, and has consequently hindered my success in real-world programming. 
Sure, I've written some database stuff in Perl and Python, and I've done forms and worked with relational databases and tables. I'm a wizard in Access and Excel (seriously) and can do just about anything, but it just feels unnatural writing SQL code in another language... I don't mind writing SQL; it's that bridge between the database and the program code that drives me absolutely bonkers. I hope I'm not the only one to think this, but it bothers me that I have to create statements like the following: string sSql = "SELECT * from tablename" when there's really no reason for that kind of unchecked language binding between two languages and two APIs. Don't get me wrong, SQL is great, but I don't like the idea that, when executing commands on a SQL database, one must intermix database and application software, and there's no database independence, which means that different versions of different databases can break code. This isn't very nice. The nicest thing about Foxpro is the cohesiveness between programming language and database. It's so easy, and Foxpro makes it easy, because the tool just fits the task. I can see why so many developers have created a career with this language, because it lowered the barrier of entry to data-driven applications that so many businesses need. It was wonderful. For my purposes today, with the demands and need for community support, extensibility, and language features, Foxpro isn't a solution that I feel would be the right tool for the job. I'm also worried about working too heavily with the database, because I've seen data-driven .NET applications have issues with database caches, running out of memory, and objects in the database not being collected (memory leaks). And OH, the queries. Which one, how, and why? There are a plethora of different ways that a database can be set up; I think I counted 5 or 6 different kinds of database applications alone that I can choose from. That is a great mountain for me to climb when I don't even know where to begin when it comes to writing data-driven applications. The problem isn't that I don't know SQL or that I don't know C#. I know both and have worked with both extensively. It's making them work together that's the problem, and it's something I've never done in C# before. Reports The client likes paper. The data needs to be printed out in a format that is extensible, layered, and easy to use. I have never done reporting before, and so this is a bit of a problem. From the data source come Crystal Reports, and so there's a dependency on the database, from what I understand. Code reuse A large part of the design decision that I've gone through so far is to break the task of writing a piece of this software into routines and modular DLLs and so forth such that much of the code can be reused. For example, when I set up this database, I want to be able to reuse the same database code over and over again. I also want to make sure that when the day comes that another developer is here, he/she will be able to pick up just where I left off. The quicker I develop these applications, the better off I am. Tasks & Goals In my project, I need to write routines that apply algorithms and look for predefined patterns in financial data. Additionally, I need to simulate trading based on predefined algorithms and data. Then I need to prepare reports on that data. 
Additionally, I need to have a way to change the code base for this application quickly and effectively, without hacking together some band-aid solution for a problem that really needs a trauma ward. Special Considerations The solution must be fast, run quickly on existing hardware, and not be too much of a pain to maintain and write. I understand that anything I write I'm married to -- I'm responsible for the things that I write because my reputation and livelihood are dependent on it. Do I really need a database? What about performance? Performance was such a big issue that I hand-wrote a data structure that is capable of performing 2 billion operations, using a total of 4 gigs of memory, in under 1/4 of a second on a standard Core 2 Duo processor. I could not find a similar, pre-written data structure in C# to perform this task. What setup do I use in terms of database? What about reporting? I'd prefer to have PDFs generated, but I'd like to be able to visually sketch those reports and then just have a ReportFactory of some sort that, when I pass some variables in, just renders that data. About Me I'm a lone developer for a small business in this area. This is the first time I've done this and I've never had the breadth and depth of my knowledge tested. I'm incredibly frustrated with this project because I feel incredibly overwhelmed by the task at hand. I'm looking for that entry-level point where I can draw a line and say "this is what I need to do". Conclusion I may not have been clear enough in my post. I'm still new to this whole thing, and I've been doing my best to contribute back to the community that I've leeched so much knowledge from. I'd be glad to edit my post and add more information if possible. I'm looking for a big-picture solution or design process that helps me get off the ground in this world of data-driven applications, because I have a feeling that it's going to be central to my entire career as a programmer for some time. Specifically, if you didn't get it from the rest of the post (I may not have been clear enough), I really need some guidance as to where to go in terms of the design decisions for this project. One thing that would be useful is a pro/con list for the different kinds of database projects available in VS2010. I've tried, but generating that list has been as hard as solving the problem itself... If you could walk a developer through writing a data-driven application for the first time in C#, how would you do it? Where would you point them?
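On the string-SQL concern raised earlier in this question ("SELECT * from tablename" pasted straight into C#), one common mitigation is to hide the SQL behind a small repository built on the provider-neutral ADO.NET base classes with parameterized commands. The following is only a sketch under assumptions: the provider name, connection string and the DailyPrice/Symbol/ClosePrice names are invented for illustration, and the @-style parameter prefix still varies between providers, so this is not full database independence:

    using System.Collections.Generic;
    using System.Data.Common;

    // Illustrative repository: the SQL lives in one place, values travel as parameters,
    // and the concrete provider is chosen by name rather than hard-coded.
    class PriceRepository
    {
        private readonly DbProviderFactory factory;
        private readonly string connectionString;

        public PriceRepository(string providerName, string connectionString)
        {
            // e.g. "System.Data.SqlClient"; resolved from the providers registered on the machine
            this.factory = DbProviderFactories.GetFactory(providerName);
            this.connectionString = connectionString;
        }

        public IList<decimal> GetClosingPrices(string symbol)
        {
            var prices = new List<decimal>();
            using (DbConnection connection = factory.CreateConnection())
            using (DbCommand command = connection.CreateCommand())
            {
                connection.ConnectionString = connectionString;
                command.CommandText = "SELECT ClosePrice FROM DailyPrice WHERE Symbol = @symbol";

                DbParameter parameter = command.CreateParameter();
                parameter.ParameterName = "@symbol";
                parameter.Value = symbol;
                command.Parameters.Add(parameter);

                connection.Open();
                using (DbDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        prices.Add(reader.GetDecimal(0));
                    }
                }
            }
            return prices;
        }
    }

Whether a thin repository like this, a full ORM, or typed datasets is the right trade-off depends on how much of the heavy lifting stays in the hand-rolled in-memory structures described above; the point is only that the SQL text and the C# code do not have to be smeared across the whole code base.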

    Read the article

  • JPA Entity (in multiple persistence-unit) in OSGi (Spring DM) Environnement is confusing me.

    - by Vincent Demeester
    Hi, I'm a bit confused about a strange behavior of my JPA's related objects. I have three bundle : The User bundle does contain some user-related objects, but mainly the User object. The Energy bundle does contain some energy-related objects, and particularly a ConsumptionTerminal which contains a List of User. The Index bundle does contain an Index object that has no dependency at all. My OSGi environment is the following : A DataSource bundle that provide 2 services : dataSource and jpaVendorAdapter. The three bundles. They consume dataSource and jpaVendorAdapter. Their module-context.xml file look like : And they all have a persistence.xml file : User <?xml version="1.0" encoding="UTF-8"?> <persistence> <persistence-unit name="securityPU" transaction-type="JTA"> <jta-data-source>java:/securityDataSourceService</jta-data-source> <class>net.nextep.amundsen.security.domain.User</class> <!-- [...] --> <exclude-unlisted-classes>true</exclude-unlisted-classes> <properties> <property name="eclipselink.logging.level" value="INFO" /> <property name="eclipselink.ddl-generation" value="create-tables" /> <property name="eclipselink.ddl-generation.output-mode" value="database" /> <property name="eclipselink.orm.throw.exceptions" value="true" /> </properties> </persistence-unit> </persistence> Energy <?xml version="1.0" encoding="UTF-8"?> <persistence> <persistence-unit name="energyPU" transaction-type="JTA"> <jta-data-source>java:/securityDataSourceService</jta-data-source> <class>net.nextep.amundsen.security.domain.User</class> <class>net.nextep.amundsen.energy.domain.User</class> <!-- [...] --> <exclude-unlisted-classes>true</exclude-unlisted-classes> <properties> <property name="eclipselink.logging.level" value="INFO" /> <property name="eclipselink.ddl-generation" value="create-tables" /> <property name="eclipselink.ddl-generation.output-mode" value="database" /> <property name="eclipselink.orm.throw.exceptions" value="true" /> </properties> </persistence-unit> </persistence> Index : This one has the most simple persistence.xml with just the Index class (no shared Class). I'm using named @PersistenceUnit annotation like @PersitenceUnit(name = 'securityPU') (for the User bundle). And finally, I'm using EclipseLink as Jpa provider and Spring DM (+ Spring DM Server in the development process) The problem is the following : When the User bundle is deployed, I'm able to persist User objects. When the User bundle and Energy bundles are both deployed, I'm not able to persist User objects (neither the Energy object). But I don't have any exception at all ! There is no problem at all with the Index bundle. The bug is dataSource independent (I tried with PostgreSQL and MySQL so far). My first conclusion was that the <class>net.nextep.amundsen.security.domain.User</class> in both persistence unit was causing the trouble. I tried without it (and hiding the User dependent object in the Energy bundle) but it failed too. I'm a bit confused about that bug. I'm also not quite sure about the transaction management in this context. I wasn't the one who designed this architecture (but I tell my intern OK without testing it.. shame on me) but if I could understand this bug and maybe fix it without rewrite the bundle (and break my intern work), I would appreciate. Am I doing something wrong ? (it's obvious, but what..) Did I miss something while reading documentation ? 
By the way, I'm also looking for some best practices or advices when it comes to JPA, EclipseLink (or whatever JPA Provider) and Spring DM (and OSGi in general). I found interesting slides from Mike Keith about this topic (by browsing Stackoverflow).

    Read the article

  • jQuery templates - Load another template within a template (composite)

    - by Saxman
    I'm following this post by Dave Ward (http://encosia.com/2010/12/02/jquery-templates-composite-rendering-and-remote-loading/) to load a composite templates for a Blog, where I have a total of 3 small templates (all in one file) for a blog post. In the template file, I have these 3 templates: blogTemplate, where I render the "postTemplate" Inside the "postTemplate", I would like to render another template that displays comments, I called this "commentsTemplate" the "commentsTemplate" Here's the structure of my json data: blog Title Content PostedDate Comments (a collection of comments) CommentContents CommentedBy CommentedDate For now, I was able to render the Post content using the code below: Javascript $(document).ready(function () { $.get('/GetPost', function (data) { $.get('/Content/Templates/_postWithComments.tmpl.htm', function (templates) { $('body').append(templates); $('#blogTemplate').tmpl(data).appendTo('#blogPost'); }); }); }); Templates <!--Blog Container Templates--> <script id="blogTemplate" type="x-jquery-tmpl"> <div class="latestPost"> {{tmpl() '#postTemplate'}} </div> </script> <!--Post Item Container--> <script id="postTemplate" type="x-jquery-tmpl"> <h2> ${Title}</h2> <div class="entryHead"> Posted in <a class="category" rel="#">Design</a> on ${PostedDateString} <a class="comments" rel="#">${NumberOfComments} Comments</a></div> ${Content} <div class="tags"> {{if Tags.length}} <strong>Tags:</strong> {{each(i, tag) Tags}} <a class="tag" href="/blog/tags/{{= tag.Name}}"> {{= tag.Name}}</a> {{/each}} <a class="share" rel="#"><strong>TELL A FRIEND</strong></a> <a class="share twitter" rel="#">Twitter</a> <a class="share facebook" rel="#">Facebook</a> {{/if}} </div> <!-- close .tags --> <!-- end Entry 01 --> {{if Comments.length}} {{each(i, comment) Comments}} {{tmpl() '#commentTemplate'}} {{/each}} {{/if}} <div class="lineHor"> </div> </script> <!--Comment Items Container--> <script id="commentTemplate" type="x-jquery-tmpl"> <h4> Comments</h4> &nbsp; <!-- COMMENT --> <div id="authorComment1"> <div id="gravatar1" class="grid_2"> <!--<img src="images/gravatar.png" alt="" />--> </div> <!-- close #gravatar --> <div id="commentText1"> <span class="replyHead">by<a class="author" rel="#">${= comment.CommentedBy}</a>on today</span> <p> {{= comment.CommentContents}}</p> </div> <!-- close #commentText --> <div id="quote1"> <a class="quote" rel="#"><strong>Quote this Comment</strong></a> </div> <!-- close #quote --> </div> <!-- close #authorComment --> <!-- END COMMENT --> </script> Where I'm having problem is at the {{each(i, comment) Comments}} {{tmpl() '#commentTemplate'}} {{/each}} Update - Example Json data when GetPost method is called { Id: 1, Title: "Test Blog", Content: "This is a test post asdf asdf asdf asdf asdf", PostedDateString: "2010-12-20", - Comments: [ - { Id: 1, PostId: 1, CommentContents: "Test comments # 1, asdf asdf asdf", PostedBy: "User 1", CommentedDate: "2010-12-20" }, - { Id: 2, PostId: 1, CommentContents: "Test comments # 2, ghjk gjjk gjkk", PostedBy: "User 2", CommentedDate: "2010-12-21" } ] } I've tried passing in {{tmpl(comment) ..., {{tmpl(Comments) ..., or leave {{tmpl() ... but none seems to work. I don't know how to iterate over the Comments collection and pass that object into the commentsTemplate. Any suggestions? Thank you very much.

    Read the article

  • SQLAuthority News – SQLPASS Nov 8-11, 2010-Seattle – An Alternative Look at Experience

    - by pinaldave
    I recently attended most prestigious SQL Server event SQLPASS between Nov 8-11, 2010 at Seattle. I have only one expression for the event - Best Summit Ever This year the summit was at its best. Instead of writing about my usual routine or the event, I am going to write about the interesting things I did and how I felt about it! Best Summit Ever Trip to Seattle! This was my second trip to Seattle this year and the journey is always long. Here is the travel stats on how long it takes to get to Seattle: 24 hours official air time 36 hours total travel time (connection waits and airport commute) Every time I travel to USA I gain a day and when I travel back to home, I lose a day. However, the total traveling time is around 3 days. The journey is long and very exhausting. However, it is all worth it when you’re attending an event like SQLPASS. Here are few things I carry when I travel for a long journey: Dry Snack packs – I like to have some good Indian Dry Snacks along with me in my backpack so I can have my own snack when I want Amazon Kindle – Loaded with 80+ books A physical book – This is usually a very easy to read book I do not watch movies on the plane and usually spend my time reading something quick and easy. If I can go to sleep, I go for it. I prefer to not to spend time in conversation with the guy sitting next to me because usually I end up listening to their biography, which I cannot blog about. Sheraton Seattle SQLPASS In any case, I love to go to Seattle as the city is great and has everything a brilliant metropolis has to offer. The new Light Train is extremely convenient, and I can take it directly from the airport to the city center. My hotel, the Sheraton, was only few meters (in the USA people count in blocks – 3 blocks) away from the train station. This time I saved USD 40 each round trip due to the Light Train. Sessions I attended! Well, I really wanted to attend most of the sessions but there was great dilemma of which ones to choose. There were many, many sessions to be attended and at any given time there was more than one good session being presented. I had decided to attend sessions in area performance tuning and I attended quite a few sessions this year, compared to what I was able to do last year. Here are few names of the speakers whose sessions I attended (please note, following great speakers are not listed in any order. I loved them and I enjoyed their sessions): Conor Cunningham Rushabh Mehta Buck Woody Brent Ozar Jonathan Kehayias Chris Leonard Bob Ward Grant Fritchey I had great fun attending their sessions. The sessions were meaningful and enlightening. It is hard to rate any session but I have found that the insights learned in Conor Cunningham’s sessions are the highlight of the PASS Summit. Rushabh Mehta at Keynote SQLPASS   Bucky Woody and Brent Ozar I always like the sessions where the speaker is much closer to the audience and has real world experience. I think speakers who have worked in the real world deliver the best content and most useful information. Sessions I did not like! Indeed there were few sessions I did not like it and I am not going to name them here. However, there were strong reasons I did not like their sessions, and here is why: Sessions were all theory and had no real world connections. 
All technical questions ended with confusing answers (lots of “I will get back to you on it,” “it depends,” “let us take this offline” and many more…) “I am God” kind of attitude in the speakers For example, I attended a session of one very well known speaker who is a specialist for one particular area. I was bit late for the session and was surprised to see that in a room that could hold 350 people there were only 30 attendees. After sitting there for 15 minutes, I realized why lots of people left. Very soon I found I preferred to stare out the window instead of listening to that particular speaker. One on One Talk! Many times people ask me what I really like about PASS. I always say the experience of meeting SQL legends and spending time with them one on one and LEARNING! Here is the quick list of the people I met during this event and spent more than 30 minutes with each of them talking about various subjects: Pinal Dave and Brad Shulz Pinal Dave and Rushabh Mehta Michael Coles and Pinal Dave Rushabh Mehta – It is always pleasure to meet with him. He is a man with lots of energy and a passion for community. He recently told me that he really wanted to turn PASS into resource for learning for every SQL Server Developer and Administrator in the world. I had great in-depth discussion regarding how a single person can contribute to a community. Michael Coles – I consider him my best friend. It is always fun to meet him. He is funny and very knowledgeable. I think there are very few people who are as expert as he is in encryption and spatial databases. Worth meeting him every single time. Glenn Berry – A real friend of everybody. He is very a simple person and very true to his heart. I think there is not a single person in whole community who does not like him. He is a friends of all and everybody likes him very much. I once again had time to sit with him and learn so much from him. As he is known as Dr. DMV, I can be his nurse in the area of DMV. Brad Schulz – I always wanted to meet him but never got chance until today. I had great time meeting him in person and we have spent considerable amount of time together discussing various T-SQL tricks and tips. I do not know where he comes up with all the different ideas but I enjoy reading his blog and sharing his wisdom with me. Jonathan Kehayias – He is drill sergeant in US army. If you get the impression that he is a giant with very strong personality – you are wrong. He is very kind and soft spoken DBA with strong performance tuning skills. I asked him how he has kept his two jobs separate and I got very good answer – just work hard and have passion for what you do. I attended his sessions and his presentation style is very unique.  I feel like he is speaking in a language I understand. Louis Davidson – I had never had a chance to sit with him and talk about technology before. He has so much wisdom and he is very kind. During the dinner, I had talked with him for long time and without hesitation he started to draw a schema for me on the menu. It was a wonderful experience to learn from a master at the dinner table. He explained to me the real and practical differences between third normal form and forth normal form. Honestly I did not know earlier, but now I do. Erland Sommarskog – This man needs no introduction, he is very well known and very clear in conveying his ideas. I learned a lot from him during the course of year. Every time I meet him, I learn something new and this time was no exception. 
Joe Webb – Joey is all about community and people, we had interesting conversation about community, MVP and how one can be helpful to community without losing passion for long time. It is always pleasant to talk to him and of course, I had fun time. Ross Mistry – I call him my brother many times because he indeed looks like my cousin. He provided me lots of insight of how one can write book and how he keeps his books simple to appeal to all the readers. A wonderful person and great friend. Ola Hallgren - I did not know he was coming to the summit. I had great time meeting him and had a wonderful conversation with him regarding his scripts and future community activities. Blythe Morrow – She used to be integrated part of SQL Server Community and PASS HQ. It was wonderful to meet her again and re-connect. She is wonderful person and I had a great time talking to her. Solid Quality Mentors – It is difficult to decide who to mention here. Instead of writing all the names, I am going to include a photo of our meeting. I had great fun meeting various members of our global branches. This year I was sitting with my Spanish speaking friends and had great fun as Javier Loria from Solid Quality translated lots of things for me. Party, Party and Parties Every evening there were various parties. I did attend almost all of them. Every party had different theme but the goal of all the parties the same – networking. Here are the few parties where I had lots of fun: Dell Reception Party Exhibitor Party Solid Quality Fun Party Red Gate Friends Party MVP Dinner Microsoft Party MVP Dinner Quest Party Gameworks PASS Party Volunteer Party at Garage Solid Quality Mentors (10 Members out of 120) They were all great networking opportunities and lots of fun. I really had great time meeting people at the various parties. There were few people everywhere – well, I will say I am among them – who hopped parties. NDA – Not Decided Agenda During the event there were few meetings marked “NDA.” Someone asked me “why are these things NDA?”  My response was simple: because they are not sure themselves. NDA stands for Not Decided Agenda. Toys, Giveaways and Luggage I admit, I was like child in Gameworks and was playing to win soft toys. I was doing it for my daughter. I must thank all of the people who gave me their cards to try my luck. I won 4 soft-toys for my daughter and it was fun. Also, thanks to Angel who did a final toy swap with me to get the desired toy for my daughter. I also collected ducks from Idera, as my daughter really loves them. Solid Quality Booth Each of the exhibitors was giving away something and I got so much stuff that my luggage got quite a bit bigger when I returned. Best Exhibitor Idera had SQLDoctor (a real magician and fun guy) to promote their new tool SQLDoctor. I really had a great time participating in the magic myself. At one point, the magician made my watch disappear.  I have seen better magic before, but this time it caught me unexpectedly and I was taken by surprise. I won many ducks again. The Common Question I heard the following common questions: I have seen you somewhere – who are you? – I am Pinal Dave. I did not know that Pinal is your first name and Dave is your last name, how do you pronounce your last name again? – Da-way How old are you? – I am as old as I can be. Are you an Indian because you look like one? – I did not answer this one. Where are you from? This question was usually asked after looking at my badge which says India. So did you really fly from India? 
– Yes, because I have seasickness so I do not prefer the sea journey. How long was the journey? – 24/36/12 (air travel time/total travel time/time zone difference) Why do you write on SQLAuthority.com? – Because I want to. I remember your daughter looks like you. – Is this even a question? Of course, she is daddy’s little girl. There were so many other questions, I will have to write another blog post about it. SQLPASS Again, Best Summit Ever! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: About Me, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology Tagged: SQLPASS

    Read the article

  • Office 2010: It&rsquo;s not just DOC(X) and XLS(X)

    - by andrewbrust
    Office 2010 has released to manufacturing.  The bits have left the (product team’s) building.  Will you upgrade? This version of Office is officially numbered 14, a designation that correlates with the various releases, through the years, of Microsoft Word.  There were six major versions of Word for DOS, during whose release cycles came three 16-bit Windows versions.  Then, starting with Word 95 and counting through Word 2007, there have been six more versions – all for the 32-bit Windows platform.  Skip version 13 to ward off folksy bad luck (and, perhaps, the bugs that could come with it) and that brings us to version 14, which includes implementations for both 32- and 64-bit Windows platforms.  We’ve come a long way baby.  Or have we? As it does every three years or so, debate will now start to rage on over whether we need a “14th” version the PC platform’s standard word processor, or a “13th” version of the spreadsheet.  If you accept the premise of that question, then you may be on a slippery slope toward answering it in the negative.  Thing is, that premise is valid for certain customers and not others. The Microsoft Office product has morphed from one that offered core word processing, spreadsheet, presentation and email functionality to a suite of applications that provides unique, new value-added features, and even whole applications, in the context of those core services.  The core apps thus grow in mission: Excel is a BI tool.  Word is a collaborative editorial system for the production of publications.  PowerPoint is a media production platform for for live presentations and, increasingly, for delivering more effective presentations online.  Outlook is a time and task management system.  Access is a rich client front-end for data-driven self-service SharePoint applications.  OneNote helps you capture ideas, corral random thoughts in a semi-structured way, and then tie them back to other, more rigidly structured, Office documents. Google Docs and other cloud productivity platforms like Zoho don’t really do these things.  And there is a growing chorus of voices who say that they shouldn’t, because those ancillary capabilities are over-engineered, over-produced and “under-necessary.”  They might say Microsoft is layering on superfluous capabilities to avoid admitting that Office’s core capabilities, the ones people really need, have become commoditized. It’s hard to take sides in that argument, because different people, and the different companies that employ them, have different needs.  For my own needs, it all comes down to three basic questions: will the new version of Office save me time, will it make the mundane parts of my job easier, and will it augment my services to customers?  I need my time back.  I need to spend more of it with my family, and more of it focusing on my own core capabilities rather than the administrative tasks around them.  And I also need my customers to be able to get more value out of the services I provide. Help me triage my inbox, help me get proposals done more quickly and make them easier to read.  Let me get my presentations done faster, make them more effective and make it easier for me to reuse materials from other presentations.  And, since I’m in the BI and data business, help me and my customers manage data and analytics more easily, both on the desktop and online. Those are my criteria.  And, with those in mind, Office 2010 is looking like a worthwhile upgrade.  
Perhaps it’s not earth-shattering, but it offers a combination of incremental improvements and a few new major capabilities that I think are quite compelling.  I provide a brief roundup of them here.  It’s admittedly arbitrary and not comprehensive, but I think it tells the Office 2010 story effectively. Across the Suite More than any other, this release of Office aims to give collaboration a real workout.  In certain apps, for the first time, documents can be opened simultaneously by multiple users, with colleagues’ changes appearing in near real-time.  Web-browser-based versions of Word, Excel, PowerPoint and OneNote will be available to extend collaboration to contributors who are off the corporate network. The ribbon user interface is now more pervasive (for example, it appears in OneNote and in Outlook’s main window).  It’s also customizable, allowing users to add, easily, buttons and options of their choosing, into new tabs, or into new groups within existing tabs. Microsoft has also taken the File menu (which was the “Office Button” menu in the 2007 release) and made it into a full-screen “Backstage” view where document-wide operations, like saving, printing and online publishing are performed. And because, more and more, heavily formatted content is cut and pasted between documents and applications, Office 2010 makes it easier to manage the retention or jettisoning of that formatting right as the paste operation is performed.  That’s much nicer than stripping it off, or adding it back, afterwards. And, speaking of pasting, a number of Office apps now make it especially easy to insert screenshots within their documents.  I know that’s useful to me, because I often document or critique applications and need to show them in action.  For the vast majority of users, I expect that this feature will be more useful for capturing snapshots of Web pages, but we’ll have to see whether this feature becomes popular.   Excel At first glance, Excel 2010 looks and acts nearly identically to the 2007 version.  But additional glances are necessary.  It’s important to understand that lots of people in the working world use Excel as more of a database, analytics and mathematical modeling tool than merely as a spreadsheet.  And it’s also important to understand that Excel wasn’t designed to handle such workloads past a certain scale.  That all changes with this release. The first reason things change is that Excel has been tuned for performance.  It’s been optimized for multi-threaded operation; previously lengthy processes have been shortened, especially for large data sets; more rows and columns are allowed and, for the first time, Excel (and the rest of Office) is available in a 64-bit version.  For Excel, this means users can take advantage of more than the 2GB of memory that the 32-bit version is limited to. On the analysis side, Excel 2010 adds Sparklines (tiny charts that fit into a single cell and can therefore be presented down an entire column or across a row) and Slicers (a more user-friendly filter mechanism for PivotTables and charts, which visually indicates what the filtered state of a given data member is).  But most important, Excel 2010 supports the new PowerPIvot add-in which brings true self-service BI to Office.  PowerPivot allows users to import data from almost anywhere, model it, and then analyze it.  
Rather than forcing users to build "spreadmarts" or use corporate-built data warehouses, PowerPivot models function as true columnar, in-memory OLAP cubes that can accommodate millions of rows of data and deliver fast drill-down performance.

And speaking of OLAP, Excel 2010 now supports an important Analysis Services OLAP feature called write-back.  Write-back is especially useful in financial forecasting scenarios, for which Excel is the natural home.  Support for write-back is long overdue, but I'm still glad it's there, because I had almost given up on it.

PowerPoint
This version of PowerPoint marks its progression from a presentation tool to a video and photo editing and production tool.  Whether or not it's successful in this pursuit, and whether offering this is even a sensible goal, is another question.

Regardless, the new capabilities are kind of interesting.  A greatly enhanced set of slide transitions with 3D effects; in-product photo and video editing; accommodation of embedded videos from services such as YouTube; and the ability to save a presentation as a video each lay testimony to PowerPoint's transformation into a media tool and away from a pure presentation tool.  These capabilities also recognize the importance of the Web as both a source for materials and a channel for disseminating PowerPoint output.

Congruent with that is PowerPoint's new ability to broadcast a slide presentation, using a quickly-generated public URL, without involving the hassle or expense of a Web meeting service like GoToMeeting or Microsoft's own LiveMeeting.  Slides presented through this broadcast feature retain full color fidelity, and transitions and animations are preserved as well.

Outlook
Microsoft's ubiquitous email/calendar/contact/task management tool gains long-overdue speed improvements, especially against POP3 email accounts.  Outlook 2010 also supports multiple Exchange accounts, rather than just one; tighter integration with OneNote; and a new Social Connector providing integration with, and presence information from, online social network services like LinkedIn and Facebook (not to mention Windows Live).  A revamped conversation view now includes messages that are part of a given thread regardless of which folder they may be stored in.

I don't know yet how well the Social Connector will work or whether it will keep Outlook relevant to those who live on Facebook and LinkedIn.  But among the other features, there's very little not to like.

OneNote
To me, OneNote is the part of Office that just keeps getting better.  There is one major caveat to this, which I'll cover in a moment, but let's first catalog what new stuff OneNote 2010 brings.

The best part of OneNote is the way each of its versions has managed hierarchy: notebooks have sections, sections have pages, pages have sub-pages, multiple notes can be contained in any of them, and each note supports infinite levels of indentation.  None of that is new to 2010, but the new version does make creation of pages and sub-pages easier, and also makes simple work of promoting and demoting pages from sub-page to full-page status.  And relationships between pages are quite easy to create now: much like a wiki, simply typing a page's name in double square brackets ("[[…]]") creates a link to it.

OneNote is also great at integrating content outside of its notebooks.
With a new Dock to Desktop feature, OneNote becomes aware of what window is displayed in the rest of the screen and, if it's an Office document or a Web page, links the notes you're typing at the time to it.  A single click from your notes later on will bring that same document or Web page back on-screen.  Embedding content from Web pages and elsewhere is also easier.  Using OneNote's Windows Key+S combination to grab part of the screen now allows you to specify the destination of that bitmap instead of automatically creating a new note in the Unfiled Notes area.  Using the Send to OneNote buttons in Internet Explorer and Outlook results in the same choice.

Collaboration gets better too.  Real-time multi-author editing is better accommodated, and determining the author lineage of particular changes is easily carried out.

My one pet peeve with OneNote is the difficulty of using it when I'm not on a Windows PC.  OneNote's main competitor, Evernote, while inferior in terms of features in my view, has client versions for PC, Mac, Windows Mobile, Android, iPhone, iPad and Web browsers.  Since I have an Android phone and an iPad, I am practically forced to use it.  However, the OneNote Web app should help here, as should a forthcoming version of OneNote for Windows Phone 7.  In the meantime, it turns out that using OneNote's Email Page ribbon button lets you move a OneNote page easily into Evernote (since every Evernote account gets a unique email address for adding notes), and that Evernote's Email function combined with Outlook's Send to OneNote button (in the Move group of the ribbon's Home tab) can achieve the reverse.

Access
To me, the big change in Access 2007 was its tight integration with SharePoint lists.  Access 2010 and SharePoint 2010 continue this integration with the introduction of SharePoint's Access Services.  Much as Excel Services provides a SharePoint-hosted experience for viewing (and now editing) Excel spreadsheet, PivotTable and chart content, Access Services allows for SharePoint browser-hosted editing of Access data within the forms that are built in the Access client itself.

To me this makes all kinds of sense, although it does raise the question of where to draw the line between Access, InfoPath, SharePoint list maintenance and SharePoint 2010's new Business Connectivity Services.  Each of these tools provides overlapping data entry and data maintenance functionality.

But if you do prefer Access, then you'll like things like templates and application parts that make it easier to get off the blank page.  These features help you quickly get tables, forms and reports built out.  To make things look nice, Access even gets its own version of Excel's Conditional Formatting feature, letting you add data bars and data-driven text formatting.

Word
As I said at the beginning of this post, upgrades to Office are about much more than enhancing the suite's flagship word processing application.  So are there any enhancements in Word worth mentioning?  I think so.

The most important one has to be the collaboration features.  Essentially, when a user opens a Word document that is in a SharePoint document library (or Windows Live SkyDrive folder), rather than the whole document being locked, Word has the ability to observe more granular locks on the individual paragraphs being edited.  Word also shows you who's editing what, and its Save function morphs into a sync feature that both saves your changes and loads those made by anyone editing the document concurrently.
There's also a new navigation pane that lets you manage sections in your document in much the same way as you manage slides in a PowerPoint deck.  Using the navigation pane, you can reorder sections, insert new ones, or promote and demote sections in the outline hierarchy.  Not earth-shattering, but nice.

Other Apps and Summarized Findings
What about InfoPath, Publisher, Visio and Project?  I haven't looked at them yet.  And for this post, I think that's fine.  While those apps (and, arguably, Access) cater to specific tasks, I think the apps we've looked at in this post serve the general-purpose needs of most users.  And the theme in those 2010 apps is clear: collaboration is key, the Web and productivity are indivisible, and making data and analytics into a self-service amenity is the way to go.  But perhaps most of all, features are still important, as long as they get you through your day faster, rather than adding complexity for its own sake.  I would argue that this is true for just about every product Microsoft makes: users want utility, not complexity.

    Read the article

  • Oracle Database 12c New Feature: Information Lifecycle Management (ILM) Storage Enhancements

    - by Liu Maclean
    Oracle Database 12c introduces a set of Information Lifecycle Management (ILM) and Storage Enhancements.  On the ILM side, the centerpiece is the Automatic Data Placement (ADP) capability, better known as Automatic Data Optimization (ADO).  On the storage side, 12c adds Online Move Datafile: a datafile can now be relocated while the database stays open and the file remains accessible to users, without the offline steps earlier releases required.

In the initial release (12.1.0.1), Automatic Data Optimization and Heat Map carry the following restrictions.  Note in particular that ADO and Heat Map are not supported in a multitenant container database (CDB).

- Row-level policies for ADO are not supported for Temporal Validity. Partition-level ADO and compression are supported if partitioned on the end-time columns.
- Row-level policies for ADO are not supported for in-database archiving. Partition-level ADO and compression are supported if partitioned on the ORA_ARCHIVE_STATE column.
- Custom policies (user-defined functions) for ADO are not supported if the policies default at the tablespace level.
- ADO does not perform checks for storage space in a target tablespace when using storage tiering.
- ADO is not supported on tables with object types or materialized views.
- ADO concurrency (the number of simultaneous policy jobs for ADO) depends on the concurrency of the Oracle scheduler.
- If a policy job for ADO fails more than two times, then the job is marked disabled and the job must be manually enabled later.
- Policies for ADO are only run in the Oracle Scheduler maintenance windows. Outside of the maintenance windows all policies are stopped. The only exceptions are those jobs for rebuilding indexes in ADO offline mode.
- ADO has restrictions related to moving tables and table partitions.

ADO policies can be defined at the row or segment level and are attached to a table with CREATE TABLE or ALTER TABLE.  A policy specifies the action (compression, or movement to another storage tier), the scope (segment, row or group), and the condition under which the action fires (for example, a period of no access or no modification).  Policies can be added when the table is created or later with ALTER TABLE, and they can be disabled or deleted at any time.  For example:

CREATE TABLE sales_ado
(PROD_ID NUMBER NOT NULL,
 CUST_ID NUMBER NOT NULL,
 TIME_ID DATE NOT NULL,
 CHANNEL_ID NUMBER NOT NULL,
 PROMO_ID NUMBER NOT NULL,
 QUANTITY_SOLD NUMBER(10,2) NOT NULL,
 AMOUNT_SOLD NUMBER(10,2) NOT NULL
)
ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH SEGMENT
AFTER 6 MONTHS OF NO ACCESS;

SQL> SELECT SUBSTR(policy_name,1,24) AS POLICY_NAME, policy_type, enabled
  2  FROM USER_ILMPOLICIES;

POLICY_NAME          POLICY_TYPE                ENABLED
-------------------- -------------------------- --------------
P41                  DATA MOVEMENT              YES

ALTER TABLE sales MODIFY PARTITION sales_1995
ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH SEGMENT
AFTER 6 MONTHS OF NO ACCESS;

SELECT SUBSTR(policy_name,1,24) AS POLICY_NAME, policy_type, enabled
FROM USER_ILMPOLICIES;

POLICY_NAME              POLICY_TYPE   ENABLE
------------------------ ------------- ------
P1                       DATA MOVEMENT YES
P2                       DATA MOVEMENT YES

/* You can disable an ADO policy with the following */
ALTER TABLE sales_ado ILM DISABLE POLICY P1;
/* You can delete an ADO policy with the following */
ALTER TABLE sales_ado ILM DELETE POLICY P1;
/* You can disable all ADO policies with the following */
ALTER TABLE sales_ado ILM DISABLE_ALL;
/* You can delete all ADO policies with the following */
ALTER TABLE sales_ado ILM DELETE_ALL;
/* You can disable an ADO policy in a partition with the following */
ALTER TABLE sales MODIFY PARTITION sales_1995 ILM DISABLE POLICY P2;
/* You can delete an ADO policy in a partition with the following */
ALTER TABLE sales MODIFY PARTITION sales_1995 ILM DELETE POLICY P2;
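The policies above are all compression policies.  For storage tiering, the basic documented form is TIER TO <tablespace>.  A minimal sketch follows; the low_cost_tbs tablespace name is invented for illustration, and in this simple form the movement is driven by tablespace-fullness thresholds (configurable through DBMS_ILM_ADMIN) rather than by an explicit time condition:

/* Hypothetical tiering policy: move the segment to a cheaper tablespace */
ALTER TABLE sales_ado ILM ADD POLICY
  TIER TO low_cost_tbs;

/* Confirm the new policy is registered alongside the compression policies */
SELECT SUBSTR(policy_name,1,24) AS POLICY_NAME, policy_type, enabled
FROM USER_ILMPOLICIES;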
So how does the database know that a segment has gone unaccessed or unmodified long enough for a policy to fire?  Through activity tracking.  Activity tracking can be requested at two levels, which record different information:

SEGMENT-LEVEL tracking records access and modification times for the segment as a whole.
ROW-LEVEL tracking records modification times for individual rows (maintained at the block level).

Some examples:

1. Enable SEGMENT-LEVEL activity tracking:

ALTER TABLE interval_sales ILM ENABLE ACTIVITY TRACKING SEGMENT ACCESS;

This enables segment-level activity tracking for the INTERVAL_SALES table, so the time of the most recent access to the segment is recorded.

2. Track creation time and write time:

ALTER TABLE emp ILM ENABLE ACTIVITY TRACKING (CREATE TIME , WRITE TIME);

3. Track read time:

ALTER TABLE emp ILM ENABLE ACTIVITY TRACKING (READ TIME);

Release 12.1.0.1.0 also introduces the HEAT_MAP parameter, which can be set at the system or the session level and controls heat map collection.  To enable Heat Map:

ALTER SYSTEM SET HEAT_MAP = ON;

With Heat Map enabled, the database records read and write activity against segments and rows.  Objects in the SYSTEM and SYSAUX tablespaces are not tracked.  To disable Heat Map:

ALTER SYSTEM SET HEAT_MAP = OFF;

The HEAT_MAP parameter can be changed dynamically, and it also governs Automatic Data Optimization (ADO): to use ADO, Heat Map must be enabled.  The collected information can be queried through the V$HEAT_MAP_SEGMENT view:

SQL> select * from V$heat_map_segment;

no rows selected

SQL> alter session set heat_map=on;

Session altered.

SQL> select * from scott.emp;

EMPNO ENAME      JOB        MGR  HIREDATE    SAL  COMM DEPTNO
----- ---------- --------- ----- --------- ----- ----- ------
 7369 SMITH      CLERK      7902 17-DEC-80   800          20
 7499 ALLEN      SALESMAN   7698 20-FEB-81  1600   300    30
 7521 WARD       SALESMAN   7698 22-FEB-81  1250   500    30
 7566 JONES      MANAGER    7839 02-APR-81  2975          20
 7654 MARTIN     SALESMAN   7698 28-SEP-81  1250  1400    30
 7698 BLAKE      MANAGER    7839 01-MAY-81  2850          30
 7782 CLARK      MANAGER    7839 09-JUN-81  2450          10
 7788 SCOTT      ANALYST    7566 19-APR-87  3000          20
 7839 KING       PRESIDENT       17-NOV-81  5000          10
 7844 TURNER     SALESMAN   7698 08-SEP-81  1500     0    30
 7876 ADAMS      CLERK      7788 23-MAY-87  1100          20
 7900 JAMES      CLERK      7698 03-DEC-81   950          30
 7902 FORD       ANALYST    7566 03-DEC-81  3000          20
 7934 MILLER     CLERK      7782 23-JAN-82  1300          10

14 rows selected.

SQL> select * from v$heat_map_segment;

OBJECT_NAME          SUBOBJECT_NAME             OBJ#   DATAOBJ# TRACK_TIM SEG SEG FUL LOO     CON_ID
-------------------- -------------------- ---------- ---------- --------- --- --- --- --- ----------
EMP                                            92997      92997 23-JUL-13 NO  NO  YES NO           0

The EMP row now appears in v$heat_map_segment; the view is built on the fixed table X$HEATMAPSEGMENT.

V$HEAT_MAP_SEGMENT displays real-time segment access information.  Its columns are:

OBJECT_NAME     VARCHAR2(128)  Name of the object
SUBOBJECT_NAME  VARCHAR2(128)  Name of the subobject
OBJ#            NUMBER         Object number
DATAOBJ#        NUMBER         Data object number
TRACK_TIME      DATE           Timestamp of current activity tracking
SEGMENT_WRITE   VARCHAR2(3)    Indicates whether the segment has write access (YES or NO)
SEGMENT_READ    VARCHAR2(3)    Indicates whether the segment has read access (YES or NO)
FULL_SCAN       VARCHAR2(3)    Indicates whether the segment has full table scan (YES or NO)
LOOKUP_SCAN     VARCHAR2(3)    Indicates whether the segment has lookup scan (YES or NO)
CON_ID          NUMBER         The ID of the container to which the data pertains. Possible values: 0 (rows that pertain to the entire CDB, or to a non-CDB), 1 (rows that pertain only to the root), n (the applicable container ID for the rows). The Heat Map feature is not supported in CDBs in Oracle Database 12c, so the value in this column can be ignored.

Historical heat map information is retained as well and can be queried through DBA_HEAT_MAP_SEGMENT; the underlying statistics are stored in the HEAT_MAP_STAT$ table.

Now for an Automatic Data Optimization walk-through.

Use case 1:

SQL> alter system set heat_map=on;

System altered.

Set up the demo environment (the SCOTT schema script is at http://www.askmaclean.com/archives/scott-schema-script.html):

SQL> grant all on dbms_lock to scott;

Grant succeeded.
SQL> grant dba to scott;

Grant succeeded.

@ilm_setup_basic C:\APP\XIANGBLI\ORADATA\MACLEAN\ilm.dbf
@tktgilm_demo_env_setup

SQL> connect scott/tiger ;
Connected.

SQL> select count(*) from scott.employee;

  COUNT(*)
----------
      3072

1 row selected.

SQL> set serveroutput on
SQL> exec print_compression_stats('SCOTT','EMPLOYEE');
Compression Stats
------------------
Uncmpressed          : 3072
Adv/basic compressed : 0
Others               : 0

PL/SQL procedure successfully completed.

The table starts with 3072 uncompressed rows.  Next, define an ADO policy that compresses rows which have gone 3 days without modification:

alter table employee ilm add policy row store compress advanced row after 3 days of no modification
/

SQL> set serveroutput on
SQL> execute list_ilm_policies;
--------------------------------------------------
Policies defined for SCOTT
--------------------------------------------------
Object Name------ : EMPLOYEE
Subobject Name--- :
Object Type------ : TABLE
Inherited from--- : POLICY NOT INHERITED
Policy Name------ : P1
Action Type------ : COMPRESSION
Scope------------ : ROW
Compression level : ADVANCED
Tier Tablespace-- :
Condition type--- : LAST MODIFICATION TIME
Condition days--- : 3
Enabled---------- : YES
--------------------------------------------------

PL/SQL procedure successfully completed.

SQL> select sysdate from dual;

SYSDATE
--------------
29-JUL-13

SQL> execute set_back_chktime(get_policy_name('EMPLOYEE',null,'COMPRESSION','ROW','ADVANCED',3,null,null),'EMPLOYEE',null,6);

Object check time reset ...
--------------------------------------
Object Name    : EMPLOYEE
Object Number  : 93123
D.Object Numbr : 93123
Policy Number  : 1
Object chktime : 23-JUL-13 08.13.42.000000 AM
Distnt chktime : 0
--------------------------------------

PL/SQL procedure successfully completed.

The policy check time is set back by 6 days here.  The set_back_chktime helper simply "ages" the object's check time for demo purposes, so the policy condition appears to be satisfied without actually waiting for days to pass; it is for testing only.

Flush the caches, then open a maintenance window so that the ADO background jobs are allowed to run:

alter system flush buffer_cache;
alter system flush buffer_cache;
alter system flush shared_pool;
alter system flush shared_pool;

SQL> execute set_window('MONDAY_WINDOW','OPEN');
Set Maint. Window OPEN
-----------------------------
Window Name : MONDAY_WINDOW
Enabled?    : TRUE
Active?     : TRUE
-----------------------------

PL/SQL procedure successfully completed.

SQL> exec dbms_lock.sleep(60) ;

PL/SQL procedure successfully completed.

SQL> exec print_compression_stats('SCOTT', 'EMPLOYEE');
Compression Stats
------------------
Uncmpressed          : 338
Adv/basic compressed : 2734
Others               : 0

PL/SQL procedure successfully completed.

Most of the rows are now compressed (Adv/basic compressed : 2734), which shows that the ADO policy has been executed.

SQL> col object_name for a20
SQL> select object_id,object_name from dba_objects where object_name='EMPLOYEE';

 OBJECT_ID OBJECT_NAME
---------- --------------------
     93123 EMPLOYEE

SQL> execute list_ilm_policy_executions ;
--------------------------------------------------
Policies execution details for SCOTT
--------------------------------------------------
Policy Name------ : P22
Job Name--------- : ILMJOB48
Start time------- : 29-JUL-13 08.37.45.061000 AM
End time--------- : 29-JUL-13 08.37.48.629000 AM
-----------------
Object Name------ : EMPLOYEE
Sub_obj Name----- :
Obj Type--------- : TABLE
-----------------
Exec-state------- : SELECTED FOR EXECUTION
Job state-------- : COMPLETED SUCCESSFULLY
Exec comments---- :
Results comments- : ---
--------------------------------------------------

PL/SQL procedure successfully completed.

ILMJOB48 is the scheduler job that executed the policy (in 12.1.0.1 these run as J00x jobs).  The policy evaluation itself is performed in the background by the MMON slave (M00x) processes, roughly every 15 minutes, and shows up in Active Session History under the action 'KDILM background EXEcution':

select sample_time,program,module,action
from v$active_session_history
where action ='KDILM background EXEcution'
order by sample_time;

29-JUL-13 08.16.38.369000000 AM  ORACLE.EXE (M000)  MMON_SLAVE  KDILM background EXEcution
29-JUL-13 08.17.38.388000000 AM  ORACLE.EXE (M000)  MMON_SLAVE  KDILM background EXEcution
29-JUL-13 08.17.39.390000000 AM
ORACLE.EXE (M000)  MMON_SLAVE  KDILM background EXEcution
29-JUL-13 08.23.38.681000000 AM  ORACLE.EXE (M002)  MMON_SLAVE  KDILM background EXEcution
29-JUL-13 08.32.38.968000000 AM  ORACLE.EXE (M000)  MMON_SLAVE  KDILM background EXEcution
29-JUL-13 08.33.39.993000000 AM  ORACLE.EXE (M003)  MMON_SLAVE  KDILM background EXEcution
29-JUL-13 08.33.40.993000000 AM  ORACLE.EXE (M003)  MMON_SLAVE  KDILM background EXEcution
29-JUL-13 08.36.40.066000000 AM  ORACLE.EXE (M000)  MMON_SLAVE  KDILM background EXEcution
29-JUL-13 08.37.42.258000000 AM  ORACLE.EXE (M000)  MMON_SLAVE  KDILM background EXEcution
29-JUL-13 08.37.43.258000000 AM  ORACLE.EXE (M000)  MMON_SLAVE  KDILM background EXEcution
29-JUL-13 08.37.44.258000000 AM  ORACLE.EXE (M000)  MMON_SLAVE  KDILM background EXEcution
29-JUL-13 08.38.42.386000000 AM  ORACLE.EXE (M001)  MMON_SLAVE  KDILM background EXEcution

select distinct action from v$active_session_history where action like 'KDILM%'

KDILM background CLeaNup
KDILM background EXEcution

Finally, close the maintenance window and clean up the demo objects:

SQL> execute set_window('MONDAY_WINDOW','CLOSE');
Set Maint. Window CLOSE
-----------------------------
Window Name : MONDAY_WINDOW
Enabled?    : TRUE
Active?     : FALSE
-----------------------------

PL/SQL procedure successfully completed.

SQL> drop table employee purge ;

Table dropped.

spool ilm_usecase_1_cleanup.lst
@ilm_demo_cleanup ;
spool off
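One more sketch, for the Online Move Datafile enhancement mentioned at the top of this post.  The documented 12c statement is ALTER DATABASE MOVE DATAFILE, and the move happens while the file stays online and accessible; the file paths below are invented for illustration:

ALTER DATABASE MOVE DATAFILE '/u01/app/oracle/oradata/MACLEAN/users01.dbf'
  TO '/u02/oradata/MACLEAN/users01.dbf';

-- Add KEEP to retain the original file after the copy completes,
-- or REUSE to overwrite an existing file at the target location.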

    Read the article

  • An Introduction to jQuery Templates

    - by Stephen Walther
    The goal of this blog entry is to provide you with enough information to start working with jQuery Templates. jQuery Templates enable you to display and manipulate data in the browser. For example, you can use jQuery Templates to format and display a set of database records that you have retrieved with an Ajax call. jQuery Templates supports a number of powerful features such as template tags, template composition, and wrapped templates. I’ll concentrate on the features that I think that you will find most useful. In order to focus on the jQuery Templates feature itself, this blog entry is server technology agnostic. All the samples use HTML pages instead of ASP.NET pages. In a future blog entry, I’ll focus on using jQuery Templates with ASP.NET Web Forms and ASP.NET MVC (You can do some pretty powerful things when jQuery Templates are used on the client and ASP.NET is used on the server). Introduction to jQuery Templates The jQuery Templates plugin was developed by the Microsoft ASP.NET team in collaboration with the open-source jQuery team. While working at Microsoft, I wrote the original proposal for jQuery Templates, Dave Reed wrote the original code, and Boris Moore wrote the final code. The jQuery team – especially John Resig – was very involved in each step of the process. Both the jQuery community and ASP.NET communities were very active in providing feedback. jQuery Templates will be included in the jQuery core library (the jQuery.js library) when jQuery 1.5 is released. Until jQuery 1.5 is released, you can download the jQuery Templates plugin from the jQuery Source Code Repository or you can use jQuery Templates directly from the ASP.NET CDN. The documentation for jQuery Templates is already included with the official jQuery documentation at http://api.jQuery.com. The main entry for jQuery templates is located under the topic plugins/templates. A Basic Sample of jQuery Templates Let’s start with a really simple sample of using jQuery Templates. We’ll use the plugin to display a list of books stored in a JavaScript array. Here’s the complete code: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html > <head> <title>Intro</title> <link href="0_Site.css" rel="stylesheet" type="text/css" /> </head> <body> <div id="pageContent"> <h1>ASP.NET Bookstore</h1> <div id="bookContainer"></div> </div> <script id="bookTemplate" type="text/x-jQuery-tmpl"> <div> <img src="BookPictures/${picture}" alt="" /> <h2>${title}</h2> price: ${formatPrice(price)} </div> </script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.js"></script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.js"></script> <script type="text/javascript"> // Create an array of books var books = [ { title: "ASP.NET 4 Unleashed", price: 37.79, picture: "AspNet4Unleashed.jpg" }, { title: "ASP.NET MVC Unleashed", price: 44.99, picture: "AspNetMvcUnleashed.jpg" }, { title: "ASP.NET Kick Start", price: 4.00, picture: "AspNetKickStart.jpg" }, { title: "ASP.NET MVC Unleashed iPhone", price: 44.99, picture: "AspNetMvcUnleashedIPhone.jpg" }, ]; // Render the books using the template $("#bookTemplate").tmpl(books).appendTo("#bookContainer"); function formatPrice(price) { return "$" + price.toFixed(2); } </script> </body> </html> When you open this page in a browser, a list of books is displayed: There are several things going on in this page which require explanation. 
First, notice that the page uses both the jQuery 1.4.4 and jQuery Templates libraries. Both libraries are retrieved from the ASP.NET CDN:

<script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.js"></script>
<script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.js"></script>

You can use the ASP.NET CDN for free (even for production websites). You can learn more about the files included on the ASP.NET CDN by visiting the ASP.NET CDN documentation page.

Second, you should notice that the actual template is included in a script tag with a special MIME type:

<script id="bookTemplate" type="text/x-jQuery-tmpl"> <div> <img src="BookPictures/${picture}" alt="" /> <h2>${title}</h2> price: ${formatPrice(price)} </div> </script>

This template is displayed for each of the books rendered by the template. The template displays a book picture, title, and price. Notice that the SCRIPT tag which wraps the template has a MIME type of text/x-jQuery-tmpl. Why is the template wrapped in a SCRIPT tag and why the strange MIME type? When a browser encounters a SCRIPT tag with an unknown MIME type, it ignores the content of the tag. This is the behavior that you want with a template. You don't want a browser to attempt to parse the contents of a template because this might cause side effects. For example, the template above includes an <img> tag with a src attribute that points at "BookPictures/${picture}". You don't want the browser to attempt to load an image at the URL "BookPictures/${picture}". Instead, you want to prevent the browser from processing the IMG tag until the ${picture} expression is replaced with the actual name of an image by the jQuery Templates plugin.

If you are not worried about browser side-effects then you can wrap a template inside any HTML tag that you please. For example, the following DIV tag would also work with the jQuery Templates plugin:

<div id="bookTemplate" style="display:none"> <div> <h2>${title}</h2> price: ${formatPrice(price)} </div> </div>

Notice that the DIV tag includes a style="display:none" attribute to prevent the template from being displayed until the template is parsed by the jQuery Templates plugin.

Third, notice that the expression ${…} is used to display the value of a JavaScript expression within a template. For example, the expression ${title} is used to display the value of the book title property. You can use any JavaScript function that you please within the ${…} expression. For example, in the template above, the book price is formatted with the help of the custom JavaScript formatPrice() function which is defined lower in the page.

Fourth, and finally, the template is rendered with the help of the tmpl() method. The following statement selects the bookTemplate and renders an array of books using the bookTemplate. The results are appended to a DIV element named bookContainer by using the standard jQuery appendTo() method.

$("#bookTemplate").tmpl(books).appendTo("#bookContainer");

Using Template Tags
Within a template, you can use any of the following template tags.

{{tmpl}} – Used for template composition. See the section below.
{{wrap}} – Used for wrapped templates. See the section below.
{{each}} – Used to iterate through a collection.
{{if}} – Used to conditionally display template content.
{{else}} – Used with {{if}} to conditionally display template content.
{{html}} – Used to display the value of an HTML expression without encoding the value. Using ${…} or {{= }} performs HTML encoding automatically.
{{= }} – Used in exactly the same way as ${…}.
{{! }} – Used for displaying comments. The contents of a {{!...}} tag are ignored.
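Since {{html}} and ${…} look interchangeable at first glance, here is a minimal standalone sketch of the difference. It is not part of the bookstore sample; the noteTemplate id, the noteContainer element and the sample note data are invented for illustration:

<div id="noteContainer"></div>

<script id="noteTemplate" type="text/x-jQuery-tmpl">
  <p>Encoded: ${body}</p>
  <p>Raw: {{html body}}</p>
</script>

<script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.js"></script>
<script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.js"></script>

<script type="text/javascript">
  // The first paragraph shows the literal text "please <b>call</b> me",
  // because ${...} HTML-encodes the value; the second paragraph renders
  // "call" in bold, because {{html}} skips the encoding step.
  var note = { body: "please <b>call</b> me" };
  $("#noteTemplate").tmpl(note).appendTo("#noteContainer");
</script>

Only use {{html}} with markup you trust, since it bypasses encoding entirely.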
Moving on to a larger example: imagine that you want to display a list of blog entries. Each blog entry could, possibly, have an associated list of categories. The following page illustrates how you can use the {{if}} and {{each}} template tags to conditionally display categories for each blog entry:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>each</title> <link href="1_Site.css" rel="stylesheet" type="text/css" /> </head> <body> <div id="blogPostContainer"></div> <script id="blogPostTemplate" type="text/x-jQuery-tmpl"> <h1>${postTitle}</h1> <p> ${postEntry} </p> {{if categories}} Categories: {{each categories}} <i>${$value}</i> {{/each}} {{else}} Uncategorized {{/if}} </script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.js"></script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.js"></script> <script type="text/javascript"> var blogPosts = [ { postTitle: "How to fix a sink plunger in 5 minutes", postEntry: "Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Maecenas porttitor congue massa. Fusce posuere, magna sed pulvinar ultricies, purus lectus malesuada libero, sit amet commodo magna eros quis urna.", categories: ["HowTo", "Sinks", "Plumbing"] }, { postTitle: "How to remove a broken lightbulb", postEntry: "Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Maecenas porttitor congue massa. Fusce posuere, magna sed pulvinar ultricies, purus lectus malesuada libero, sit amet commodo magna eros quis urna.", categories: ["HowTo", "Lightbulbs", "Electricity"] }, { postTitle: "New associate website", postEntry: "Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Maecenas porttitor congue massa. Fusce posuere, magna sed pulvinar ultricies, purus lectus malesuada libero, sit amet commodo magna eros quis urna." } ]; // Render the blog posts $("#blogPostTemplate").tmpl(blogPosts).appendTo("#blogPostContainer"); </script> </body> </html>

When this page is opened in a web browser, a list of blog posts and categories is displayed. Notice that the first and second blog entries have associated categories but the third blog entry does not. The third blog entry is "Uncategorized". The template used to render the blog entries and categories looks like this:

<script id="blogPostTemplate" type="text/x-jQuery-tmpl"> <h1>${postTitle}</h1> <p> ${postEntry} </p> {{if categories}} Categories: {{each categories}} <i>${$value}</i> {{/each}} {{else}} Uncategorized {{/if}} </script>

Notice the special expression $value used within the {{each}} template tag. You can use $value to display the value of the current template item. In this case, $value is used to display the value of each category in the collection of categories.

Template Composition
When building a fancy page, you might want to build a template out of multiple templates. In other words, you might want to take advantage of template composition. For example, imagine that you want to display a list of products. Some of the products are being sold at their normal price and some of the products are on sale.
In that case, you might want to use two different templates for displaying a product: a productTemplate and a productOnSaleTemplate. The following page illustrates how you can use the {{tmpl}} tag to build a template from multiple templates:   <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Composition</title> <link href="2_Site.css" rel="stylesheet" type="text/css" /> </head> <body> <div id="pageContainer"> <h1>Products</h1> <div id="productListContainer"></div> <!-- Show list of products using composition --> <script id="productListTemplate" type="text/x-jQuery-tmpl"> <div> {{if onSale}} {{tmpl "#productOnSaleTemplate"}} {{else}} {{tmpl "#productTemplate"}} {{/if}} </div> </script> <!-- Show product --> <script id="productTemplate" type="text/x-jQuery-tmpl"> ${name} </script> <!-- Show product on sale --> <script id="productOnSaleTemplate" type="text/x-jQuery-tmpl"> <b>${name}</b> <img src="images/on_sale.png" alt="On Sale" /> </script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.js"></script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.js"></script> <script type="text/javascript"> var products = [ { name: "Laptop", onSale: false }, { name: "Apples", onSale: true }, { name: "Comb", onSale: false } ]; $("#productListTemplate").tmpl(products).appendTo("#productListContainer"); </script> </div> </body> </html>   In the page above, the main template used to display the list of products looks like this: <script id="productListTemplate" type="text/x-jQuery-tmpl"> <div> {{if onSale}} {{tmpl "#productOnSaleTemplate"}} {{else}} {{tmpl "#productTemplate"}} {{/if}} </div> </script>   If a product is on sale then the product is displayed with the productOnSaleTemplate (which includes an on sale image): <script id="productOnSaleTemplate" type="text/x-jQuery-tmpl"> <b>${name}</b> <img src="images/on_sale.png" alt="On Sale" /> </script>   Otherwise, the product is displayed with the normal productTemplate (which does not include the on sale image): <script id="productTemplate" type="text/x-jQuery-tmpl"> ${name} </script>   You can pass a parameter to the {{tmpl}} tag. The parameter becomes the data passed to the template rendered by the {{tmpl}} tag. For example, in the previous section, we used the {{each}} template tag to display a list of categories for each blog entry like this: <script id="blogPostTemplate" type="text/x-jQuery-tmpl"> <h1>${postTitle}</h1> <p> ${postEntry} </p> {{if categories}} Categories: {{each categories}} <i>${$value}</i> {{/each}} {{else}} Uncategorized {{/if}} </script>   Another way to create this template is to use template composition like this: <script id="blogPostTemplate" type="text/x-jQuery-tmpl"> <h1>${postTitle}</h1> <p> ${postEntry} </p> {{if categories}} Categories: {{tmpl(categories) "#categoryTemplate"}} {{else}} Uncategorized {{/if}} </script> <script id="categoryTemplate" type="text/x-jQuery-tmpl"> <i>${$data}</i> &nbsp; </script>   Using the {{each}} tag or {{tmpl}} tag is largely a matter of personal preference. Wrapped Templates The {{wrap}} template tag enables you to take a chunk of HTML and transform the HTML into another chunk of HTML (think easy XSLT). When you use the {{wrap}} tag, you work with two templates. 
The first template contains the HTML being transformed and the second template includes the filter expressions for transforming the HTML. For example, you can use the {{wrap}} template tag to transform a chunk of HTML into an interactive tab strip: When you click any of the tabs, you see the corresponding content. This tab strip was created with the following page: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Wrapped Templates</title> <style type="text/css"> body { font-family: Arial; background-color:black; } .tabs div { display:inline-block; border-bottom: 1px solid black; padding:4px; background-color:gray; cursor:pointer; } .tabs div.tabState_true { background-color:white; border-bottom:1px solid white; } .tabBody { border-top:1px solid white; padding:10px; background-color:white; min-height:400px; width:400px; } </style> </head> <body> <div id="tabsView"></div> <script id="tabsContent" type="text/x-jquery-tmpl"> {{wrap "#tabsWrap"}} <h3>Tab 1</h3> <div> Content of tab 1. Lorem ipsum dolor <b>sit</b> amet, consectetuer adipiscing elit. Maecenas porttitor congue massa. Fusce posuere, magna sed pulvinar ultricies, purus lectus malesuada libero, sit amet commodo magna eros quis urna. </div> <h3>Tab 2</h3> <div> Content of tab 2. Lorem ipsum dolor <b>sit</b> amet, consectetuer adipiscing elit. Maecenas porttitor congue massa. Fusce posuere, magna sed pulvinar ultricies, purus lectus malesuada libero, sit amet commodo magna eros quis urna. </div> <h3>Tab 3</h3> <div> Content of tab 3. Lorem ipsum dolor <b>sit</b> amet, consectetuer adipiscing elit. Maecenas porttitor congue massa. Fusce posuere, magna sed pulvinar ultricies, purus lectus malesuada libero, sit amet commodo magna eros quis urna. </div> {{/wrap}} </script> <script id="tabsWrap" type="text/x-jquery-tmpl"> <div class="tabs"> {{each $item.html("h3", true)}} <div class="tabState_${$index === selectedTabIndex}"> ${$value} </div> {{/each}} </div> <div class="tabBody"> {{html $item.html("div")[selectedTabIndex]}} </div> </script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.js"></script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.js"></script> <script type="text/javascript"> // Global for tracking selected tab var selectedTabIndex = 0; // Render the tab strip $("#tabsContent").tmpl().appendTo("#tabsView"); // When a tab is clicked, update the tab strip $("#tabsView") .delegate(".tabState_false", "click", function () { var templateItem = $.tmplItem(this); selectedTabIndex = $(this).index(); templateItem.update(); }); </script> </body> </html>   The “source” for the tab strip is contained in the following template: <script id="tabsContent" type="text/x-jquery-tmpl"> {{wrap "#tabsWrap"}} <h3>Tab 1</h3> <div> Content of tab 1. Lorem ipsum dolor <b>sit</b> amet, consectetuer adipiscing elit. Maecenas porttitor congue massa. Fusce posuere, magna sed pulvinar ultricies, purus lectus malesuada libero, sit amet commodo magna eros quis urna. </div> <h3>Tab 2</h3> <div> Content of tab 2. Lorem ipsum dolor <b>sit</b> amet, consectetuer adipiscing elit. Maecenas porttitor congue massa. Fusce posuere, magna sed pulvinar ultricies, purus lectus malesuada libero, sit amet commodo magna eros quis urna. </div> <h3>Tab 3</h3> <div> Content of tab 3. Lorem ipsum dolor <b>sit</b> amet, consectetuer adipiscing elit. 
Maecenas porttitor congue massa. Fusce posuere, magna sed pulvinar ultricies, purus lectus malesuada libero, sit amet commodo magna eros quis urna. </div> {{/wrap}} </script>   The tab strip is created with a list of H3 elements (which represent each tab) and DIV elements (which represent the body of each tab). Notice that the HTML content is wrapped in the {{wrap}} template tag. This template tag points at the following tabsWrap template: <script id="tabsWrap" type="text/x-jquery-tmpl"> <div class="tabs"> {{each $item.html("h3", true)}} <div class="tabState_${$index === selectedTabIndex}"> ${$value} </div> {{/each}} </div> <div class="tabBody"> {{html $item.html("div")[selectedTabIndex]}} </div> </script> The tabs DIV contains all of the tabs. The {{each}} template tag is used to loop through each of the H3 elements from the source template and render a DIV tag that represents a particular tab. The template item html() method is used to filter content from the “source” HTML template. The html() method accepts a jQuery selector for its first parameter. The tabs are retrieved from the source template by using an h3 filter. The second parameter passed to the html() method – the textOnly parameter -- causes the filter to return the inner text of each h3 element. You can learn more about the html() method at the jQuery website (see the section on $item.html()). The tabBody DIV renders the body of the selected tab. Notice that the {{html}} template tag is used to display the tab body so that HTML content in the body won’t be HTML encoded. The html() method is used, once again, to grab all of the DIV elements from the source HTML template. The selectedTabIndex global variable is used to display the contents of the selected tab. Remote Templates A common feature request for jQuery templates is support for remote templates. Developers want to be able to separate templates into different files. Adding support for remote templates requires only a few lines of extra code (Dave Ward has a nice blog entry on this). 
For example, the following page uses a remote template from a file named BookTemplate.htm:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Remote Templates</title> <link href="0_Site.css" rel="stylesheet" type="text/css" /> </head> <body> <div id="pageContent"> <h1>ASP.NET Bookstore</h1> <div id="bookContainer"></div> </div> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.js"></script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.js"></script> <script type="text/javascript"> // Create an array of books var books = [ { title: "ASP.NET 4 Unleashed", price: 37.79, picture: "AspNet4Unleashed.jpg" }, { title: "ASP.NET MVC Unleashed", price: 44.99, picture: "AspNetMvcUnleashed.jpg" }, { title: "ASP.NET Kick Start", price: 4.00, picture: "AspNetKickStart.jpg" }, { title: "ASP.NET MVC Unleashed iPhone", price: 44.99, picture: "AspNetMvcUnleashedIPhone.jpg" } ]; // Get the remote template $.get("BookTemplate.htm", null, function (bookTemplate) { // Render the books using the remote template $.tmpl(bookTemplate, books).appendTo("#bookContainer"); }); function formatPrice(price) { return "$" + price.toFixed(2); } </script> </body> </html>

The remote template is retrieved (and rendered) with the following code:

// Get the remote template $.get("BookTemplate.htm", null, function (bookTemplate) { // Render the books using the remote template $.tmpl(bookTemplate, books).appendTo("#bookContainer"); });

This code uses the standard jQuery $.get() method to get the BookTemplate.htm file from the server with an Ajax request. After the BookTemplate.htm file is successfully retrieved, the $.tmpl() method is used to render an array of books with the template. Here's what the BookTemplate.htm file looks like:

<div> <img src="BookPictures/${picture}" alt="" /> <h2>${title}</h2> price: ${formatPrice(price)} </div>

Notice that the template in the BookTemplate.htm file is not wrapped by a SCRIPT element. There is no need to wrap the template in this case because there is no possibility that the template will get interpreted before you want it to be interpreted. If you plan to use the bookTemplate multiple times – for example, if you are paging or sorting the books – then you should compile the template into a function and cache the compiled template function. For example, the following page can be used to page through a list of 100 products (using iPhone-style "More" paging).
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Template Caching</title> <link href="6_Site.css" rel="stylesheet" type="text/css" /> </head> <body> <h1>Products</h1> <div id="productContainer"></div> <button id="more">More</button> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.js"></script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.js"></script> <script type="text/javascript"> // Globals var pageIndex = 0; // Create an array of products var products = []; for (var i = 0; i < 100; i++) { products.push({ name: "Product " + (i + 1) }); } // Get the remote template $.get("ProductTemplate.htm", null, function (productTemplate) { // Compile and cache the template $.template("productTemplate", productTemplate); // Render the products renderProducts(); }); $("#more").click(function () { pageIndex++; renderProducts(); }); function renderProducts() { // Get a page of products var pageOfProducts = products.slice(pageIndex * 5, pageIndex * 5 + 5); // Use the cached productTemplate to render the products $.tmpl("productTemplate", pageOfProducts).appendTo("#productContainer"); } function formatPrice(price) { return "$" + price.toFixed(2); } </script> </body> </html>

The ProductTemplate is retrieved from an external file named ProductTemplate.htm. This template is retrieved only once. Furthermore, it is compiled and cached with the help of the $.template() method:

// Get the remote template $.get("ProductTemplate.htm", null, function (productTemplate) { // Compile and cache the template $.template("productTemplate", productTemplate); // Render the products renderProducts(); });

The $.template() method compiles the HTML representation of the template into a JavaScript function and caches the template function with the name productTemplate. The cached template can then be rendered by calling the $.tmpl() method. The productTemplate is used in the renderProducts() function:

function renderProducts() { // Get a page of products var pageOfProducts = products.slice(pageIndex * 5, pageIndex * 5 + 5); // Use the cached productTemplate to render the products $.tmpl("productTemplate", pageOfProducts).appendTo("#productContainer"); }

In the code above, the first parameter passed to the $.tmpl() method is the name of a cached template.

Working with Template Items
In this final section, I want to devote some space to discussing Template Items. A new Template Item is created for each rendered instance of a template. For example, if you are displaying a list of 100 products with a template, then 100 Template Items are created. A Template Item has the following properties and methods:

data – The data associated with the Template Instance. For example, a product.
tmpl – The template associated with the Template Instance.
parent – The parent template item if the template is nested.
nodes – The HTML content of the template.
calls – Used by the {{wrap}} template tag.
nest – Used by the {{tmpl}} template tag.
wrap – Used to imperatively enable wrapped templates.
html – Used to filter content from a wrapped template. See the above section on wrapped templates.
update – Used to re-render a template item.

The last method – the update() method – is especially interesting because it enables you to re-render a template item with new data or even a new template. For example, the following page displays a list of books.
When you hover your mouse over any of the books, additional book details are displayed. In the following screenshot, details for ASP.NET Kick Start are displayed. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Template Item</title> <link href="0_Site.css" rel="stylesheet" type="text/css" /> </head> <body> <div id="pageContent"> <h1>ASP.NET Bookstore</h1> <div id="bookContainer"></div> </div> <script id="bookTemplate" type="text/x-jQuery-tmpl"> <div class="bookItem"> <img src="BookPictures/${picture}" alt="" /> <h2>${title}</h2> price: ${formatPrice(price)} </div> </script> <script id="bookDetailsTemplate" type="text/x-jQuery-tmpl"> <div class="bookItem"> <img src="BookPictures/${picture}" alt="" /> <h2>${title}</h2> price: ${formatPrice(price)} <p> ${description} </p> </div> </script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.js"></script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.js"></script> <script type="text/javascript"> // Create an array of books var books = [ { title: "ASP.NET 4 Unleashed", price: 37.79, picture: "AspNet4Unleashed.jpg", description: "The most comprehensive book on Microsoft’s new ASP.NET 4.. " }, { title: "ASP.NET MVC Unleashed", price: 44.99, picture: "AspNetMvcUnleashed.jpg", description: "Writing for professional programmers, Walther explains the crucial concepts that make the Model-View-Controller (MVC) development paradigm work…" }, { title: "ASP.NET Kick Start", price: 4.00, picture: "AspNetKickStart.jpg", description: "Visual Studio .NET is the premier development environment for creating .NET applications…." }, { title: "ASP.NET MVC Unleashed iPhone", price: 44.99, picture: "AspNetMvcUnleashedIPhone.jpg", description: "ASP.NET MVC Unleashed for the iPhone…" }, ]; // Render the books using the template $("#bookTemplate").tmpl(books).appendTo("#bookContainer"); // Get compiled details template var bookDetailsTemplate = $("#bookDetailsTemplate").template(); // Add hover handler $(".bookItem").mouseenter(function () { // Get template item associated with DIV var templateItem = $(this).tmplItem(); // Change template to compiled template templateItem.tmpl = bookDetailsTemplate; // Re-render template templateItem.update(); }); function formatPrice(price) { return "$" + price.toFixed(2); } </script> </body> </html>   There are two templates used to display a book: bookTemplate and bookDetailsTemplate. When you hover your mouse over a template item, the standard bookTemplate is swapped out for the bookDetailsTemplate. The bookDetailsTemplate displays a book description. The books are rendered with the bookTemplate with the following line of code: // Render the books using the template $("#bookTemplate").tmpl(books).appendTo("#bookContainer");   The following code is used to swap the bookTemplate and the bookDetailsTemplate to show details for a book: // Get compiled details template var bookDetailsTemplate = $("#bookDetailsTemplate").template(); // Add hover handler $(".bookItem").mouseenter(function () { // Get template item associated with DIV var templateItem = $(this).tmplItem(); // Change template to compiled template templateItem.tmpl = bookDetailsTemplate; // Re-render template templateItem.update(); });   When you hover your mouse over a DIV element rendered by the bookTemplate, the mouseenter handler executes. 
First, this handler retrieves the Template Item associated with the DIV element by calling the tmplItem() method. The tmplItem() method returns a Template Item. Next, a new template is assigned to the Template Item. Notice that a compiled version of the bookDetailsTemplate is assigned to the Template Item's tmpl property. The template is compiled earlier in the code by calling the template() method. Finally, the Template Item update() method is called to re-render the Template Item with the bookDetailsTemplate instead of the original bookTemplate.

Summary
This is a long blog entry, and I still have not managed to cover all of the features of jQuery Templates. However, I've tried to cover the most important features of jQuery Templates such as template composition, template wrapping, and template items. To learn more about jQuery Templates, I recommend that you look at the documentation for jQuery Templates at the official jQuery website. Another great way to learn more about jQuery Templates is to look at the (unminified) source code.

    Read the article
