Search Results

Search found 501 results on 21 pages for 'rick crawford'.

Page 20/21 | < Previous Page | 16 17 18 19 20 21  | Next Page >

  • IIS6 compressing static files, but not JSON requests

    - by user500038
    I have IIS6 compression set up, and static content is being compressed correctly as per Coding Horror. Example of a static file:

    Response Headers
        Content-Length    55513
        Content-Type      application/x-javascript
        Content-Encoding  gzip
        Last-Modified     Mon, 20 Dec 2010 15:31:58 GMT
        Accept-Ranges     bytes
        Vary              Accept-Encoding
        Server            Microsoft-IIS/6.0
        X-Powered-By      ASP.NET
        Date              Wed, 29 Dec 2010 16:37:23 GMT

    Request Headers
        Accept            */*
        Accept-Language   en-us,en;q=0.5
        Accept-Encoding   gzip,deflate
        Accept-Charset    ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive        115
        Connection        keep-alive

    As you can see, the response is correctly compressed. However, with a JSON call you'll see the request with the gzip parameter correctly set:

        Accept            application/json, text/javascript, */*; q=0.01
        Accept-Language   en-us,en;q=0.5
        Accept-Encoding   gzip,deflate
        Accept-Charset    ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive        115
        Connection        keep-alive
        X-Requested-With  XMLHttpRequest

    Then for the response:

        Cache-Control       private
        Content-Length      6811
        Content-Type        application/json; charset=utf-8
        Server              Microsoft-IIS/6.0
        X-Powered-By        ASP.NET
        X-AspNet-Version    2.0.50727
        X-AspNetMvc-Version 2.0

    No gzip! I found this article by Rick Strahl, which outlines how to compress in your code, but I'd like IIS to handle this. Is this possible with IIS6?
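    For reference, the in-code approach mentioned above (compressing the response yourself rather than letting IIS do it) can be sketched roughly as follows. This is only a minimal illustration, not the linked article's code: it assumes ASP.NET on .NET 2.0 or later, and the module name and wiring are hypothetical. On the IIS side, IIS6 dynamic compression is governed by metabase settings (for example HcDoDynamicCompression and the per-scheme file-extension lists), which is usually the first thing to check.

        using System;
        using System.IO.Compression;
        using System.Web;

        // Hypothetical module name; sketch of compressing JSON responses in code.
        public class JsonCompressionModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.PostRequestHandlerExecute += new EventHandler(OnPostRequestHandlerExecute);
            }

            private void OnPostRequestHandlerExecute(object sender, EventArgs e)
            {
                HttpContext ctx = ((HttpApplication)sender).Context;
                string acceptEncoding = ctx.Request.Headers["Accept-Encoding"];

                // Only compress JSON responses for clients that advertise gzip support.
                if (acceptEncoding != null && acceptEncoding.Contains("gzip") &&
                    ctx.Response.ContentType.StartsWith("application/json"))
                {
                    ctx.Response.Filter = new GZipStream(ctx.Response.Filter, CompressionMode.Compress);
                    ctx.Response.AppendHeader("Content-Encoding", "gzip");
                    ctx.Response.AppendHeader("Vary", "Accept-Encoding");
                }
            }

            public void Dispose() { }
        }

    The module would still have to be registered under <httpModules> in web.config, and care is needed not to double-compress if IIS-level dynamic compression is later switched on.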

    Read the article

  • missing rake tasks ??

    - by richard moss
    Hi, I have run gem install rails and am running 2.3.4, but I am missing some rake tasks like 'db' and 'gems'. If I run rake -T I get the following tasks. How can I get all the others?

        rake apache2                  # Build Apache 2 module
        rake clean                    # Remove compiled files
        rake clobber                  # Remove all generated files
        rake default                  # Build everything
        rake doc                      # Generate all documentation
        rake doxygen                  # Generate Doxygen C++ API documentation if ...
        rake doxygen:clobber          # Remove generated Doxygen C++ API documenta...
        rake doxygen:force            # Force generation of Doxygen C++ API docume...
        rake fakeroot                 # Create a fakeroot, useful for building nat...
        rake nginx                    # Build Nginx helper server
        rake package                  # Build all the packages
        rake package:clean            # Remove package products
        rake package:debian           # Create a Debian package
        rake package:force            # Force a rebuild of the package files
        rake package:gem              # Build the gem file passenger-2.2.4.gem
        rake rdoc                     # Build the rdoc HTML Files
        rake rdoc:clobber             # Remove rdoc products
        rake rdoc:force               # Force a rebuild of the RDOC files
        rake sloccount                # Run 'sloccount' to see how much code Passe...
        rake test                     # Run all unit tests and integration tests
        rake test:cxx                 # Run unit tests for the Apache 2 and Nginx ...
        rake test:integration         # Run all integration tests
        rake test:integration:apache2 # Run Apache 2 integration tests
        rake test:integration:nginx   # Run Nginx integration tests
        rake test:oxt                 # Run unit tests for the OXT library
        rake test:rcov                # Run coverage tests for the Ruby libraries
        rake test:restart             # Run the 'restart' integration test infinit...
        rake test:ruby

    If anyone knows why this has happened, how I can fix it, or anything else that could help, please let me know. Thanks a lot, rick

    Read the article

  • Parsing the Youtube API with DOM

    - by Kirk
    I'm using the Youtube API and I can retrieve the date information without a problem, but don't know how to retrieve the description information. My code:

        <?php
        $v = "dQw4w9WgXcQ";
        $url = "http://gdata.youtube.com/feeds/api/videos/" . $v;
        $doc = new DOMDocument;
        $doc->load($url);
        $pub = $doc->getElementsByTagName("published")->item(0)->nodeValue;
        $desc = $doc->getElementsByTagName("media:description")->item(0)->nodeValue;
        echo "<b>Video Uploaded:</b> ";
        echo date( "F jS, Y", strtotime( $pub ) );
        echo '<br>';
        if (isset ($desc)) {
            echo "<b>Description:</b> ";
            echo $desc;
            echo '<br>';
        }
        ?>

    Here's a link to the feed: http://gdata.youtube.com/feeds/api/videos/dQw4w9WgXcQ?prettyprint=true

    And the excerpt of the feed I don't know how to retrieve data from:

        <media:group>
            <media:description type='plain'>Music video by Rick Astley performing Never Gonna Give You Up. (C) 1987 PWL</media:description>
        </media:group>

    Thanks in advance.

    Read the article

  • Dec 5th Links: ASP.NET, ASP.NET MVC, jQuery, Silverlight, Visual Studio

    - by ScottGu
    Here is the latest in my link-listing series. Also check out my VS 2010 and .NET 4 series for another on-going blog series I'm working on. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    ASP.NET

    - ASP.NET Code Samples Collection: J.D. Meier has a great post that provides a detailed round-up of ASP.NET code samples and tutorials from a wide variety of sources. Lots of useful pointers.
    - Slash your ASP.NET compile/load time without any hard work: Nice article that details a bunch of optimizations you can make to speed up ASP.NET project load and compile times. You might also want to read my previous blog post on this topic here.
    - 10 Essential Tools for Building ASP.NET Websites: Great article by Stephen Walther on 10 great (and free) tools that enable you to more easily build great ASP.NET Websites. Highly recommended reading.
    - Optimize Images using the ASP.NET Sprite and Image Optimization Framework: A nice article by 4GuysFromRolla that discusses how to use the open-source ASP.NET Sprite and Image Optimization Framework (one of the tools recommended by Stephen in the previous article). You can use this to significantly improve the load-time of your pages on the client.
    - Formatting Dates, Times and Numbers in ASP.NET: Scott Mitchell has a great article that discusses formatting dates, times and numbers in ASP.NET. A very useful link to bookmark. Also check out James Michael's DateTime is Packed with Goodies blog post for other DateTime tips.
    - Examining ASP.NET's Membership, Roles and Profile APIs (Part 18): Everything you could possibly want to know about ASP.NET's built-in Membership, Roles and Profile APIs must surely be in this tutorial series. Part 18 covers how to store additional user info with Membership.

    ASP.NET with jQuery

    - An Introduction to jQuery Templates: Stephen Walther has written an outstanding introduction and tutorial on the new jQuery Template plugin that the ASP.NET team has contributed to the jQuery project.
    - Composition with jQuery Templates and jQuery Templates, Composite Rendering, and Remote Loading: Dave Ward has written two nice posts that talk about composition scenarios with jQuery Templates and some cool scenarios you can enable with them.
    - Using jQuery and ASP.NET to Build a News Ticker: Scott Mitchell has a nice tutorial that demonstrates how to build a dynamically updated "news ticker" style UI with ASP.NET and jQuery.
    - Checking All Checkboxes in a GridView using jQuery: Scott Mitchell has a nice post that covers how to use jQuery to enable a checkbox within a GridView's header to automatically check/uncheck all checkboxes contained within rows of it.
    - Using jQuery to POST Form Data to an ASP.NET AJAX Web Service: Rick Strahl has a nice post that discusses how to capture form variables and post them to an ASP.NET AJAX Web Service (.asmx).

    ASP.NET MVC

    - ASP.NET MVC Diagnostics Using NuGet: Phil Haack has a nice post that demonstrates how to easily install a diagnostics page (using NuGet) that can help identify and diagnose common configuration issues within your apps.
    - ASP.NET MVC 3 JsonValueProviderFactory: James Hughes has a nice post that discusses how to take advantage of the new JsonValueProviderFactory support built into ASP.NET MVC 3. This makes it easy to post JSON payloads to MVC action methods.
    - Practical jQuery Mobile with ASP.NET MVC: James Hughes has another nice post that discusses how to use the new jQuery Mobile library with ASP.NET MVC to build great mobile web applications.
    - Credit Card Validator for ASP.NET MVC 3: Benjii Me has a nice post that demonstrates how to build a [CreditCard] validator attribute that can be used to easily validate that credit card numbers are in the correct format with ASP.NET MVC.

    Silverlight

    - Silverlight FireStarter Keynote and Sessions: A great blog post from John Papa that contains pointers and descriptions of all the great Silverlight content we published last week at the Silverlight FireStarter. You can watch all of the talks online. More details on my keynote and Silverlight 5 announcements can be found here.
    - 31 Days of Windows Phone 7: 31 great tutorials on how to build Windows Phone 7 applications (using Silverlight).
    - Silverlight for Windows Phone Toolkit Update: David Anson has a nice post that discusses some of the additional controls provided with the Silverlight for Windows Phone Toolkit.

    Visual Studio

    - JavaScript Editor Extensions: A nice (and free) Visual Studio plugin built by the web tools team that significantly improves the JavaScript intellisense support within Visual Studio.
    - HTML5 Intellisense for Visual Studio: Gil has a blog post that discusses a new extension my team has posted to the Visual Studio Extension Gallery that adds HTML5 schema support to Visual Studio 2008 and 2010.
    - Team Build + Web Deployment + Web Deploy + VS 2010 = Goodness: Vishal Joshi blogs about how to enable a continuous deployment system with VS 2010, TFS 2010 and the Microsoft Web Deploy framework.
    - Visual Studio 2010 Emacs Emulation Extension and VIM Emulation Extension: Check out these two extensions if you are fond of Emacs and VIM key bindings and want to enable them within Visual Studio 2010.

    Hope this helps,

    Scott

    Read the article

  • The Winds of Change are a Blowin’

    - by Ajarn Mark Caldwell
    For six years I have been an avid and outspoken fan and paying customer of SourceGear products…from Vault to Dragnet to Fortress and on to Vault Professional, but that is all changing now.  Not the fan part, but the paying customer part.  I’m still a huge fan.  I think that SourceGear does a great job with their product and support has been fantastic when needed (which is not very often).  I think that Eric Sink has done a fine job building a quality company and products, and I appreciate his contributions to the tech community through this blogging and books.  I still think their products are high quality and do a fantastic job of what they do.  But there’s the rub…what they do is no longer enough for me. As I have rebuilt our development team over the last couple of years, and we have begun to investigate Scrum and Kanban, I realize that I need more visibility into the progress of the team.  I need better project management tools, and this is where Vault Professional lags behind several other tools.  Granted, in the latest release (Vault 6.0) they added a nice time tracking feature, but I want more.  (Note, I did contact SourceGear about my quest for more, but apparently, the rest of their customer base has not been clamoring for this and so they have not built it.  Granted, I wasn’t clamoring for it either until just recently, but unfortunately for SourceGear, I want it now and don’t want to wait for them to build it into their system.) Ironically, it was SourceGear themselves who started to turn me on to the possibilities of other tools.  They built a limited integration with Axosoft OnTime which I read about several times on their support site (I used to regularly read and occasionally comment on their Support Forum).  I decided to check out OnTime and was very impressed with the tool for work item tracking and project management (not to mention their great Scrum Master in 10 Minutes video).  I fell in love with the capabilities of OnTime.  Unfortunately, the integration with Vault for source control management was, as I mentioned, limited.  I could have forfeited the integration between work items and source code, but there is too much benefit to linking check-ins to work items for me to give that up.  So then I did what was previously unthinkable for me, I considered switching not just the work tracking tool, but also the source code management tool.  This was really stepping outside my comfort zone because source code is Gold, and not to be trifled with.  When you find a good weapon to protect your gold, stick with it. I looked at Git and Tortoise SVN, but the integration methods for those was pretty rough compared to what I was used to.  The recommended tool from Axosoft’s point of view appeared to be RocketSVN, but I really wasn’t sure I wanted to go the “flavor of Subversion” route.  Then I started thinking about that other tool I liked back when I first chose to go with Vault, but couldn’t afford:  Team Foundation Server.  And what do you know…Microsoft has not only radically improved it over that version from back in 2006, but they also came to their senses about how it should be licensed, and it is much more affordable now.  So I started looking into the latest capabilities in the 2012 version, and I fell in love all over again. I really went deep on checking out the tools.  I watched numerous webcasts from Microsoft partners, went to a beta preview on Microsoft’s campus, and watched a lot of Channel 9 videos on the new ALM features (oooh…shiny).  
Frankly, I was very impressed with the capabilities of the newest version, and figured this was probably our direction.  As an interesting twist of fate, one of my employees crossed paths with an ALM Consultant from Northwest Cadence, a local Microsoft Partner, and one of the companies that produced several of the webcasts that I had been watching.  So I gave Bryon a call and started grilling him to see if he really knew anything or was just another guy who couldn’t find a job so he called himself a consultant.  It turns out Bryon actually knows a lot, especially in an area that was becoming a frustration point for us: Branching strategies and automated builds (that’s probably a whole separate blog entry).  As we talked, Bryon suggested we look into doing a DTDPS (Developer Tools Deployment Planning Services) session with his company.  This is a service that can be paid for by Microsoft Enterprise Agreement planning services credits or SA training benefits, and, again, coincidentally, we had several that were just about to expire, so I put them to good use. The DTDPS sessions were great; and Bryon, Rick, and the rest of the folks at Northwest Cadence have been a pleasure to work with.  We have just purchased a new server for our TFS rollout and are planning the steps and options right now.  This is still a big project ahead of us to not only install and configure TFS, but also to load all of our source code (many different systems, not just one program) and transition to the new way of life with TFS, but I am convinced that it is the right move for my team at this point in time.  We need the new capabilities that are in alignment with Scrum and Kanban methodologies in order to more efficiently manage all the different projects that we have going on at one time. I would still wholeheartedly endorse SourceGear’s products and Axosoft’s OnTime for those whose needs are met by those tools, but for me and my team, I think that TFS is the right fit, and I am looking forward to the change.

    Read the article

  • Back Up to Tape the Way You Shop For Groceries

    - by rickramsey
    Imagine if this was how you shopped for groceries: From the end of the aisle sprint to the point where you reach the ketchup. Pull a bottle from the shelf and yell at the top of your lungs, “Got it!” Sprint back to the end of the aisle. Start again and sprint down the same aisle to the mustard, pull a bottle from the shelf and again yell for the whole store to hear, “Got it!” Sprint back to the end of the aisle. Repeat this procedure for every item you need in the aisle. Proceed to the next aisle and follow the same steps for the list of items you need from that aisle. Sounds ridiculous, doesn’t it? Not only is it horribly inefficient, it’s exhausting and can lead to wear out failures on your grocery cart, or worse, yourself. This is essentially how NetApp and some other applications write NDMP backups to tape. In the analogy, the ketchup and mustard are the files to be written, yelling “Got it!” is the equivalent of a sync mark at the end of a file, and the sprint back to the end of an aisle is the process most commonly called a “backhitch” where the drive has to back up on a tape to start writing again. Writing to tape in this way results in very slow tape drive performance and imposes unnecessary wear on the tape drive and the media, especially when writing small files. The good news is not all tape drives behave this way when writing small files. Unlike midrange LTO drives, Oracle’s StorageTek T10000D tape drive is designed to handle this scenario efficiently. The difference between the two drive types is that the T10000D drive gives you the ability to write files in a NetApp NDMP backup environment the way you would normally shop for groceries. With grocery shopping, you essentially stream through aisles picking up items as you go, and then after checking out, yell, “Got it!”, though you might do that last step silently. With the T10000D, it has a feature called the Tape Application Accelerator, which prevents the drive from having to stop after each file is written to notify NetApp or another application that the write was successful. When enabled in the T10000D tape drive, Tape Application Accelerator causes the tape drive to respond to tape mark and file sync commands differently than when disabled: A tape mark received by the tape drive is treated as a buffered tape mark. A file sync received by the tape drive is treated as a no op command. Since buffered tape marks and no op commands do not cause the tape drive to empty the contents of its buffer to tape and backhitch, the data is written to tape in significantly less time. Oracle has emulated NetApp environments with a number of different file sizes and found the following when comparing the T10000D with the Tape Application Accelerator enabled versus LTO6 tape drives. Notice how the T10000D is not only monumentally faster, but also remarkably consistent? In addition, the writing of the 50 GB of files is done without a single backhitch. The LTO6 drive, meanwhile, will perform as many as 3,800 backhitches! At the end of writing the entire set of files, the T10000D tape drive reports back to the application, in this case NetApp, that the write was successful via a tape mark. So if the Tape Application Accelerator dramatically improves performance and reliability, why wouldn’t you always have it enabled? The reason is because tape drive buffers are meant to be just temporary data repositories so in the event of a power loss, there could be data loss in certain environments for the files that resided in the buffer. 
    Fortunately, we do have best practices, depending on your environment, to keep this from happening. I highly recommend reading Maximizing Tape Performance with StorageTek T10000 Tape Drives (pdf) to decide which best practice is right for you. The white paper also digs deeper into the benefits of the Tape Application Accelerator. The white paper is free, and after downloading it you can decide for yourself whether you want to yell “Got it!” out loud or just silently to yourself.

    Customer Advisory Panel

    One final link: Oracle has started up a Customer Advisory Panel program to collect feedback from customers on their current experiences with Oracle products, as well as desires for future product development. If you would like to participate in the program, go to this link at oracle.com.

    Photo taken on Idaho's Sacajawea Historic Byway by Rick Ramsey.

    - Brian Zents

    Follow OTN on Blog | Facebook | Twitter | YouTube

    Read the article

  • Stir Trek: Iron Man Edition Recap and Photos

    - by Brian Jackett
    If you’ve noticed my blogging activity has reduced in frequency and technical content lately it’s primarily due to all of the conferences I’ve been attending, speaking at, or planning in the past few months.  This past Friday myself and six other dedicated individuals put on Stir Trek: Iron Man Edition as the culmination of a few months of hard work.  For those unfamiliar, Stir Trek is a web developer conference that was founded last year as an event to showcase content from Microsoft’s MIX conference and end the day with a private showing of the then just-released Star Trek movie.  This year’s conference expanded from 2 to 4 content tracks and upped the number of tickets from 350 to 600.  Even more amazing was the fact that we had 592 people show up day of the event for the lowest drop-off percentage of any conference I’ve been to before.   Nerd Dinner and Swag Bags     The night before Stir Trek: Iron Man Edition we hosted a nerd dinner at the Polaris Shopping mall food court with about 30 in attendance.  Nerd dinners are a great time to meet others passionate about technology and socialize before the whirlwind of the conference hits.  After the nerd dinner 20+ volunteers headed to the conference location and helped us stuff swag bags.  This in and of itself was a monumental task of putting together 600 swag bags with numerous leaflets, sponsor items, and t-shirts.  A big thanks goes out to all who assisted us that night so that we could finish in just under 2 hours instead of taking all night.  My sleep schedule also thanks you. Morning of Stir Trek     After getting a decent amount of sleep I arrived at Marcus Crosswoods theater at 6am to begin setting up for the day.  Myself and Jody Morgan were in charge of registration so we got tables set up, laid out swag bags, and organized our volunteer crew to assist with checking-in attendees.  Despite having 600+ people registration went fairly smoothly and got the day off to a great start.  I especially appreciated the 3+ cups of coffee from Crimson Cup, a local coffee shop.  For any of you that know me you’ll know that I rarely drink coffee except a few times a year when I really need the energy, so that says a lot about how good their coffee is.   Conference Starts     Once registration was completed the day kicked off with Molly Holzschlag keynoting.  Unfortunately Molly suffered from an ear infection and wasn’t able to fly so she had a virtual keynote and a session later in the day.  I was working behind the scenes on various tasks so I was only able to drop in very briefly on the keynote and rest of the morning sessions.  Throughout the day I tried to grab at least 1 or 2 pics of each presenter.  See my album below for the full set of pics.      For lunch we ordered around 150 pizzas from Mellow Mushroom, a local pizza place (notice the theme of supporting local businesses.)  Early on we were concerned about Mellow Mushroom being able to supply that many pizzas and get them delivered (still hot) to the theater, but they did an excellent job day of the event.  I wish I had gotten some pictures of the old school VW van they delivered the pizza in, but I was just a bit busy running around trying to get theaters ready for lunch.  We had attendees from last year who specifically requested that we have Mellow Mushroom supply lunch this year and I’m glad everything worked out being able to use them again.     During the afternoon I was able to attend a few sessions and hear some great content from various speakers.  
It was also nice to just sit down and get off my feet for a bit.  After the last sessions the day concluded with a raffle.  There were a few logistical and technical issues that hampered our ability to smoothly conduct the raffle.  To those of you that agree the raffle wasn’t the smoothest experience I would like to say that the Stir Trek planning committee has already begun meeting to discuss ways of improving the conference for next year.  We are also accepting feedback (both positive and negative) at the following link: click here.  If you don’t wish to use the Joind In site you can also email me directly and I’ll be sure to pass along the feedback.   Iron Man 2 Movie     Last but not least, what Stir Trek event would be complete without the feature movie.  This year’s movie was Iron Man 2.  The theater had some really cool props and promotions (see pic below) for the movie.  I really enjoyed Iron Man 2, but I would recommend brushing up on the Iron Man comics and Marvel’s plans for future movies to understand some of the plot elements that come up.  Also make sure you stay through to the end of the movie credits to see a sneak peak of something special, that’s all I’ll say. Conclusion     Again a big thanks goes out to all of the speakers, sponsors, attendees, movie theater staff, volunteers, and everyone else involved in making this event great.  Also big thanks to my fellow Stir Trek planning committee members: Jeff Blankenburg, Matt Casto, Carey Payette, Jody Morgan, Rick Kierner, and Sarah Dutkiewitcz.  I am grateful for everything I learned while helping plan this event and look forward to being involved again next year.  For those interested we are currently targeting Thor as our movie theme for 2011 and then The Avengers for 2012.  These are tentative based on release dates that could shift as we get closer, but for now look solid.   Photos Pics on Facebook (includes tagging)     Stir Trek: Iron Man Edition photos on Facebook Pics on Live site (higher res)      View Full Album         -Frog Out

    Read the article

  • SQLAuthority News – #SQLPASS 2012 Seattle Update – Memorylane 2009, 2010, 2011

    - by pinaldave
    Today is the first day of SQLPASS 2012 and I will soon be posting my SQL Server 2012 experience over here. Today when I landed in Seattle, I got a feeling of nostalgia. I used to live in the USA. I stayed there for more than 7 years – I studied there and I worked there. I had lots of friends in Seattle when I lived in the USA. I always wanted to visit Seattle because it is THE place. I remember once I purchased a ticket to travel to Seattle through Priceline (well, it was the cheapest option and I was a student) but could not fly because of an interesting issue. I used to be the Teaching Assistant for an advanced course and the professor asked me to build a pop-quiz for the course, so I unfortunately had to cancel the trip. Before I returned to India I pretty much covered every city on my must-visit list, except one – Seattle. It was so interesting that I never made it to Seattle even though I wanted to visit while I was in the USA. After that one time I never got a chance to travel to Seattle, and after a few years I returned to India for good. Once I saw the movie “Sleepless in Seattle” playing on television and I immediately changed the channel, as it reminded me that I had never made it to Seattle. However, destiny has its own way of handling decisions. After I returned to India I have visited Seattle a total of 5 times, and this is my 6th visit to Seattle in less than 3 years. I was here for 3 previous SQLPASS events – 2009, 2010, and 2011 – as well as two Microsoft Most Valuable Professional Summits, in 2009 and 2010. During these five trips I tried to catch up with all of my friends, but I realize that time has its own way of doing things. Many moved out of Seattle and many were too busy to revive the old friendship, but there were a few who always make a point to meet me when I travel to the city. During the course of my visits I have made a few fantastic new friends – Rick Morelan (Joes 2 Pros) and Greg Lynch. Every time I meet them I feel that I have known them for years. I think the city of Seattle has played a very important part in how I got these fantastic friends. SQLPASS is the event where I find all of my SQL friends and I look forward to this event the entire year. This year my goal is to meet as many new friends as I can. If you are going to be at SQLPASS – FIND ME. I want to have a photo with you. I want to remember each name, as I believe this is a very important part of our life – making new friends and sustaining new friendships. Here are a few of the places where you can find me:

        All Keynotes – Blogger's Table
        Exhibition Booth – Joes 2 Pros Booth #117 – Do not forget to stop by at the booth – I might have goodies for you – limited editions.
        Book Signing Events – Check details in tomorrow's blog or stop by Booth #117
        Evening Parties, 6th Nov – Welcome Reception
        Evening Parties, 7th Nov – Exhibitor Reception – Do not miss Booth #117
        Evening Parties, 8th Nov – Community Appreciation Party
        Additionally at a few other locations – Embarcadero Booth
        In coffee shops in the Convention Center

    If you are at SQLPASS – make sure that I find an opportunity to meet you at the event. Reserve a little time and let's have a coffee together. I will be continuously tweeting about my whereabouts on Twitter, so let us stay connected on Twitter. Here is my experience from attending earlier SQLPASS events:

        SQLAuthority News – Book Signing Event – SQLPASS 2011 Event Log
        SQLAuthority News – Meeting SQL Friends – SQLPASS 2011 Event Log
        SQLAuthority News – Story of Seattle – SQLPASS 2011 Event Log
        SQLAuthority News – SQLPASS Nov 8-11, 2010, Seattle – An Alternative Look at Experience
        SQLAuthority News – Notes of Excellent Experience at SQL PASS 2009 Summit, Seattle

    Let us meet!

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL PASS, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • Global Perspective: Oracle AppAdvantage Does its Stage Debut in the UK

    - by Tanu Sood
    Global Perspective is a monthly series that brings experiences, business needs and real-world use cases from regions across the globe. This month's feature is a follow-up from last month's Global Perspective note from a well known ACE Director based in EMEA. My first contribution to this blog was before Oracle Open World, and I was quite excited about where this initiative would take me in my understanding of the value of Oracle Fusion Middleware. Rimi Bewtra from the Oracle AppAdvantage team came as promised to the Oracle ACE Director briefings and explained what this initiative was all about, and I then asked the directors to take part in the new survey. The story was really well received, and then at the SOA advisory board that many of these ACE Directors already take part in, there was a further discussion on how this initiative will help customers understand the benefits of adoption. A few days later Rick Beers launched the program at a lunch of invited customer executives, which included one from Pella who talked about their projects (a quick recap on that here). I wasn't able to stay for the whole event, but what really interested me was that these executives understood the technology but were looking for how they could use it to drive their businesses. Lots of ideas were bubbling up in my head about how we can use this in user groups to help our members, and the timing was fantastic, as just three weeks later we had UKOUG_Apps13, our flagship Applications conference in the UK. We had independently been working with Oracle marketing in the UK on an initiative called Apps Transformation to help our members look beyond just the application they use today. We have had a Fusion community page but felt the options open are now much wider than Fusion Applications; there are acquired applications, social, mobility and of course the underlying technology, Oracle Fusion Middleware. I was really pleased to be allowed to give the Oracle AppAdvantage story as a session at our conference, and we are planning a special Apps Transformation event in March where I hope the Oracle AppAdvantage team will take part and we will have the results of the survey to discuss. But life also came full circle for me. In my first post, I talked about Andrew Sutherland and his original theory that Oracle Fusion Middleware adoption had technical drivers. Well, Andrew was a speaker at our event and he gave a potted, tech-talk-free update on Oracle Open World. Andrew talked about the Prevailing Technology Winds and what is driving this today, and he talked about how in the past it was the move from simply automating processes (ERP etc.), through the altering of those processes (SOA), and on to consolidation.
The next drivers are around the need to predict, both faster and more accurately; how to better exploit the information that we have available. He went on to talk about The Nexus of Forces: Social, Mobile, Cloud and Information – harnessing these forces of change with Oracle technology. Gartner really likes this concept and if you want to know more you can get their paper here. All this has made me think, and I hope it will make you too. Technology can help us drive our businesses better and understanding your needs can be the first step on your journey, which was the theme of our event in the UK. I spoke to a number of the delegates and I hope to share some of their stories in later posts. If you have a story to share, the survey is at: https://www.surveymonkey.com/s/P335DD3 About the Author: Debra Lilley, Fujitsu Fusion Champion, UKOUG Board Member, Fusion User Experience Advocate and ACE Director. Debra has 18 years experience with Oracle Applications, with E Business Suite since 9.4.1, moving to Business Intelligence Team Leader and then Oracle Alliance Director. She has spoken at over 100 conferences worldwide and posts at debrasoraclethoughts Editor’s Note: Debra has kindly agreed to share her musings and experience in a monthly column on the Fusion Middleware blog so do stay tuned…

    Read the article

  • Best Method For Evaluating Existing Software or New Software

    How many of us have been faced with having to decide on an off-the-shelf or a custom-built component, application, or solution to integrate into an existing system or to be the core foundation of a new system? What is the best method for evaluating existing software or new software still in the design phase? One of the industry-preferred methodologies to use is the Active Reviews for Intermediate Designs (ARID) evaluation process. ARID is a hybrid mixture of the Active Design Review (ADR) methodology and the Architectural Tradeoff Analysis Method (ATAM).

    So what is ARID? ARID's main goal is to ensure quality, detailed designs in software. One way in which it does this is by empowering reviewers through generic, open-ended survey questions. This approach attempts to remove the possibility of standard answers such as “Yes” or “No”. The ADR process avoids “Yes”/”No” questions due to the fact that they can be leading, based on how the question is asked. Additionally, these questions tend to receive less thought in comparison to more open-ended questions.

    Common Active Design Review questions:

    - What possible exceptions can occur in this component, application, or solution?
    - How should exceptions be handled in this component, application, or solution?
    - Where should exceptions be handled in this component, application, or solution?
    - How should the component, application, or solution flow based on the design?
    - What is the maximum execution time for every component, application, or solution?
    - What environments can this component, application, or solution run in?
    - What data dependencies does this component, application, or solution have?
    - What kind of data does this component, application, or solution require?

    Ok, now I know what ARID is, but how can I apply it? Let's imagine that your organization is going to purchase an off-the-shelf (OTS) solution for its customer-relationship management software. What process would we use to ensure that the correct purchase is made? If we use ARID, then we will have a series of 9 steps broken up into 2 phases in order to ensure that the correct OTS solution is purchased.

    Phase 1

    1. Identify the Reviewers
    2. Prepare the Design Briefing
    3. Prepare the Seed Scenarios
    4. Prepare the Materials

    When identifying reviewers for a design it is preferred that they be pulled from a candidate pool comprised of developers who are going to implement the design. The belief is that developers actually implementing the design will have more of a vested interest in ensuring that the design is correct prior to the start of coding. The design briefing consists of a summary of the design and examples of the design solving real-world problems, and should typically be no longer than two hours. The primary goal of this briefing is to adequately summarize the design so that the review members could actually implement the design. In the example of purchasing an OTS product, I would attempt to review my briefing with the review facilitator prior to its distribution to ensure that nothing was excluded that should not have been. This practice will also allow me to test the length of the briefing to ensure that it can be delivered in an appropriate amount of time. Seed scenarios are designed to illustrate conceptualized scenarios when applied with a set of sample data. These scenarios can then be used by the reviewers in the actual evaluation of the software. All materials needed for the evaluation should be prepared ahead of time so that they can be reviewed prior to and during the meeting.

    Materials included:

    - Presentation
    - Seed Scenarios
    - Review Agenda

    Phase 2

    5. Present ARID
    6. Present Design
    7. Brainstorm and prioritize scenarios
    8. Apply scenarios
    9. Summarize

    Prior to the start of any ARID review meeting, the facilitator should define the remaining steps of ARID so that all the participants know exactly what they will be doing prior to the start of the review process. Once the ARID rules have been laid out, the lead designer presents an overview of the design, which typically takes about two hours. During this time no questions about the design or its rationale are allowed to be asked by the review panel, but they are written down for use later in the process. After the presentation, the list of compiled questions is summarized and sent back to the lead designer as areas that need to be addressed further. In the example of purchasing an OTS product, issues could arise regarding security, the implementation needed, or even whether this is the correct product to solve the needed problem. After the design presentation, a brainstorming and prioritization process begins by reducing the seed scenarios down to just the highest-priority scenarios. These will then be used to test the design for suitability. Once the selected scenarios have been defined, the reviewers apply the examples provided in the presentation to the scenarios. The intended output of this process is code or pseudo code that makes use of the examples provided while solving the selected seed scenarios. As a standard rule, the designers of the system are not allowed to help the review board unless they all become stuck. When this occurs, it is documented along with the reason why the designer needed to help get the review panel back on track. Once all of the scenarios have been completed, the review facilitator reviews with the group the issues that arose during the process. Then the reviewers are polled as to the efficacy of the review experience.

    References: Clements, Paul, Kazman, Rick, and Klein, Mark (2002). Evaluating Software Architectures: Methods and Case Studies. Indianapolis, IN: Addison-Wesley.

    Read the article

  • About Solaris 11 and UltraSPARC II/III/IV/IV+

    - by nospam(at)example.com (Joerg Moellenkamp)
    I know that I will get the usual amount of comments like "Oh, Jörg – you can't be negative about Oracle" for this article. However, as usual I want to explain the logic behind my reasoning. Yes – I know that there is a lot of UltraSPARC III, IV and IV+ gear out there. But there are some very basic questions: Does the application you are currently running on this gear stop running just because you can't run Solaris 11 on it? What is the need to upgrade a system already in production to Solaris 11? I have the impression that some people think the systems become useless the moment Oracle releases Solaris 11. I know that Sun sold UltraSPARC IV+ systems until 2009. The Sun SF490, introduced in 2004 for example, was a Sun SF480 with UltraSPARC IV and later with UltraSPARC IV+. And yes, Sun made some speedbumps. At that time the systems of the UltraSPARC III to IV+ generations were supported on Solaris 8, on Solaris 9 and on Solaris 10. However, from my perspective we sold them to customers who weren't able to migrate to Solaris 10 because they used applications not supported on Solaris 9, or who just didn't want to migrate to Solaris 10. Believe it or not – I personally know two customers that migrated core systems to Solaris 10 in – well – 2008/9. This was especially true when the M3000 was announced in 2008, when it closed the darned single-socket gap. It may be different at your site, however that's what I remember about that time when talking with customers. At first: just because there is no Solaris 11 for UltraSPARC III, IV and IV+, it doesn't mean that Solaris 10 will go away anytime soon. I just want to point you to "Expect Lifetime Support - Hardware and Operating Systems". It states about Premier Support:

        Maintenance and software upgrades are included for Oracle operating systems and Oracle VM for a minimum of eight years from the general availability date.

    GA for Solaris 10 was in 2005. Plus 8 years – 2013 – at minimum. Then you can still opt for 3 years of "Extended Support" – 2016 – at minimum. In 2016, systems purchased in 2009 are 7 years old – even systems purchased at the very end of the lifetime of that system generation. Those are the rules as written in the linked document. And I said minimum. The actual dates are even further in the future: Premier Support for Solaris 10 ends in 2015, Extended Support ends in 2018. Sustaining Support – indefinite. You will find this in the document "Oracle Lifetime Support Policy: Oracle Hardware and Operating Systems". So I don't understand when some people write that Oracle is less protective about hardware investments than Sun. And for hardware it's the same as with Sun: service 5 years after EOL as part of Premier Support. I would like to write about a different perspective as well. I have to be a little cautious here, because this is going into roadmap territory, so I will only mention public sources here: John Fowler said last year that we have to expect at least 3x the single-thread performance of T3 in T4. We have 8 cores in T4, as stated by Rick Hetherington. Let's assume for a moment that a T4 core will have the performance of an UltraSPARC core (just to simplify the math and not to disclose anything about the performance; all existing SPARC cores are considered equal). So given these pieces of information, you could consolidate 8 V215, 4 or 8 V245, 2 full-blown V445, 2 full-blown V490, or 2 full-blown M3000 on a single T4 SPARC processor. The Fowler roadmap prezo talked about 4-socket systems with T4.
    So that is 32 V215, 16 to 8 V245, 8 full-blown V445, 8 full-blown V490, or 8 full-blown M3000 in a single system image. I think you get the idea. That said, most of the systems we are talking about have already been amortized, and perhaps it's just time to invest in new systems to yield other advantages like reduced space consumption, reduced power consumption, some of the neat features sun4v gives you, and yes – a reduced number of processor licenses for Oracle and less money for Oracle HW/SW support. As much as I dislike it myself that my own UltraSPARC III and UltraSPARC II based systems won't run Solaris 11 (and I have quite a few of them in my personal lab), I really think that the impact on production environments will be much less than most people think now. By the way: the reason for this move is a quite significant new feature. I will tell you that it was this feature when it's out. I assume telling even a word more could lead to much more time to blog.

    Read the article

  • Is READ UNCOMMITTED / NOLOCK safe in this situation?

    - by Ben Challenor
    I know that snapshot isolation would fix this problem, but I'm wondering if NOLOCK is safe in this specific case so that I can avoid the overhead. I have a table that looks something like this:

        drop table Data

        create table Data
        (
            Id BIGINT NOT NULL,
            Date BIGINT NOT NULL,
            Value BIGINT,
            constraint Cx primary key (Date, Id)
        )

        create nonclustered index Ix on Data (Id, Date)

    There are no updates to the table, ever. Deletes can occur but they should never contend with the SELECT because they affect the other, older end of the table. Inserts are regular and page splits to the (Id, Date) index are extremely common. I have a deadlock situation between a standard INSERT and a SELECT that looks like this:

        select top 1 Date, Value
        from Data
        where Id = @p0
        order by Date desc

    because the INSERT acquires a lock on Cx (Date, Id; Value) and then Ix (Id, Date), but the SELECT acquires a lock on Ix (Id, Date) and then Cx (Date, Id; Value). This is because the SELECT first seeks on Ix and then joins to a seek on Cx. Swapping the clustered and non-clustered index would break this cycle, but it is not an acceptable solution because it would introduce cycles with other (more complex) SELECTs. If I add NOLOCK to the SELECT, can it go wrong in this case? Can it return:

    - More than one row, even though I asked for TOP 1?
    - No rows, even though one exists and has been committed?
    - Worst of all, a row that doesn't satisfy the WHERE clause?

    I've done a lot of reading about this online, but the only reproductions of over- or under-count anomalies I've seen (one, two) involve a scan. This involves only seeks. Jeff Atwood has a post about using NOLOCK that generated a good discussion. I was particularly interested in a comment by Rick Townsend:

        Secondly, if you read dirty data, the risk you run is of reading the entirely wrong row. For example, if your select reads an index to find your row, then the update changes the location of the rows (e.g.: due to a page split or an update to the clustered index), when your select goes to read the actual data row, it's either no longer there, or a different row altogether!

    Is this possible with inserts only, and no updates? If so, then I guess even my seeks on an insert-only table could be dangerous. Update: I'm trying to figure out how snapshot isolation works. It seems to be row-based, where transactions read the table (with no shared lock!), find the row they are interested in, and then see if they need to get an old version of the row from the version store in tempdb. But in my case, no row will have more than one version, so the version store seems rather pointless. And if the row was found with no shared lock, how is it different to just using NOLOCK?

    Read the article

  • Is there a standard way to encode a .NET string into javascript string for use in MS AJAX?

    - by Rich Andrews
    I'm trying to pass the output of a SQL Server exception to the client using the RegisterStartupScript method of the MS ScriptManager. This works fine for some errors, but when the exception contains single quotes the alert fails. I don't want to only escape single quotes though - is there a standard function I can call to escape any special chars for use in JavaScript?

        string scriptstring = "alert('" + ex.Message + "');";
        ScriptManager.RegisterStartupScript(this, this.GetType(), "Alert", scriptstring, true);

    Thanks tpeczek, the code almost worked for me :) but with a slight amendment (the escaping of single quotes) it works a treat. I've included my amended version here...

        public class JSEncode
        {
            /// <summary>
            /// Encodes a string to be represented as a string literal. The format
            /// is essentially a JSON string.
            ///
            /// The string returned includes outer quotes
            /// Example Output: "Hello \"Rick\"!\r\nRock on"
            /// </summary>
            /// <param name="s"></param>
            /// <returns></returns>
            public static string EncodeJsString(string s)
            {
                StringBuilder sb = new StringBuilder();
                sb.Append("\"");
                foreach (char c in s)
                {
                    switch (c)
                    {
                        case '\'': sb.Append("\\\'"); break;
                        case '\"': sb.Append("\\\""); break;
                        case '\\': sb.Append("\\\\"); break;
                        case '\b': sb.Append("\\b"); break;
                        case '\f': sb.Append("\\f"); break;
                        case '\n': sb.Append("\\n"); break;
                        case '\r': sb.Append("\\r"); break;
                        case '\t': sb.Append("\\t"); break;
                        default:
                            int i = (int)c;
                            if (i < 32 || i > 127)
                            {
                                sb.AppendFormat("\\u{0:X04}", i);
                            }
                            else
                            {
                                sb.Append(c);
                            }
                            break;
                    }
                }
                sb.Append("\"");
                return sb.ToString();
            }
        }

    As mentioned below - original source: here
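    With that helper in place, a usage sketch along the lines of the snippet at the top of the question might look like this. Note that EncodeJsString already includes the outer quotes, so no extra quotes are needed around it; the exception variable is assumed to be in scope:

        // ex.Message is fully escaped, so quotes, newlines and control characters are safe.
        string scriptstring = "alert(" + JSEncode.EncodeJsString(ex.Message) + ");";
        ScriptManager.RegisterStartupScript(this, this.GetType(), "Alert", scriptstring, true);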

    Read the article

  • Simple way of converting server side objects into client side using JSON serialization for asp.net websites

    - by anil.kasalanati
    Introduction: With the growth of Web 2.0 and the need for a faster user experience, the spotlight has shifted onto JavaScript-based applications built using the REST pattern or the ASP.NET AJAX PageRequestManager. And when we are working with JavaScript, wouldn't it be much better if we could create objects in an OOAD way and easily push them to the client side? Following are the reasons why you would push server-side objects onto the client side:

    - Easy availability of the complex object.
    - Use the C# compiler and rich IntelliSense to create and maintain the objects but use them in the JavaScript. You could run code analysis etc.
    - Reduce the number of calls we make to the server side by loading data on page load.

    I would like to explain the 3rd point because that proved to be highly beneficial to me when I was fixing the performance issues of a major website. There could be a scenario wherein you are making multiple AJAX-based WebRequestManager calls in order to get the same response in a single page. This happens in the case of a widget-based framework when all the widgets are independent but they need some common information available in the framework to load the data. So instead of making n calls we could load the data needed during page load. The above picture shows the scenario wherein all the widgets need the common information and then call the GetData webservice on the server side. Of course the result can be cached on the client side, but a better solution would be to avoid the call completely.

    Example: I have developed a simple application to demonstrate the idea and I will be explaining that in detail here. The class called SimpleClass would be sent as serialized JSON to the client side. It inherits from the base class, which has the implementation of the GetJSONString method. You can create a single base class, and all the objects which need to be pushed to the client side can inherit from that class. The important thing to note is that the class should be annotated with the DataContract attribute and the members should have the DataMember attribute. This is needed by the .NET DataContractSerializer, and it follows the opt-in mode, so if you want to send a member to the client side then you need to annotate it with the DataMember attribute. So if I didn't want to send the Result I would simply remove the DataMember attribute. This is default WCF/.NET 3.5 stuff, but it provides the flexibility of having a full-fledged object on the server side while sending a smaller object to the client side. Sometimes you may hide some values due to security constraints. And one thing you will notice is that I have marked the class as Serializable so that it can be stored in the Session and used in web farm deployment scenarios.
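    As an illustration of the contract class just described, a sketch could look as follows. This is an assumption for illustration only, since the post's own listings are not reproduced in this excerpt; ResultText is taken from the client-side alert shown further down, while the hidden Result property and the base class name are hypothetical.

        using System;
        using System.Runtime.Serialization;

        // Serializable so the object can also be stored in Session (web farm scenarios).
        [Serializable]
        [DataContract]
        public class SimpleClass : JsonObjectBase   // hypothetical base class providing GetJSONString()
        {
            // Opt-in: only members marked [DataMember] are sent to the client.
            [DataMember]
            public string ResultText { get; set; }

            // No [DataMember] here, so this value stays on the server side.
            public string Result { get; set; }
        }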
    Following is the implementation of the base class. This implements the default DataContractJsonSerializer; for more information or customization refer to the following blogs:

        http://softcero.blogspot.com/2010/03/optimizing-net-json-serializing-and-ii.html
        http://weblogs.asp.net/gunnarpeipman/archive/2010/12/28/asp-net-serializing-and-deserializing-json-objects.aspx

    The next part is pretty simple: I just need to inject this object into the aspx page. In the aspx markup I have the following lines:

        <script type="text/javascript">
            var data = (<%= SimpleClassJSON %>);
            alert(data.ResultText);
        </script>

    This will output the content as JSON into the variable data, and this can be any element in the DOM. You can verify the element by checking data in the Firebug console.

    Design considerations:

    - If you have a lot of JavaScript then you need to think about using Script#, so that you can write the JavaScript in C#. Refer to Nikhil's blog – http://projects.nikhilk.net/ScriptSharp
    - Ensure that you are taking security into consideration while exposing server-side objects to the client side. I have seen applications exposing passwords and secret keys, and that is not a good practice.

    The application can be tested using the following url – http://techconsulting.vpscustomer.com/Samples/JsonTest.aspx

    The source code is available at http://techconsulting.vpscustomer.com/Source/HistoryTest.zip
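    The base-class listing itself is likewise not included in this excerpt. As a rough sketch only (names and wiring assumed, not the author's exact code), a GetJSONString implementation based on the DataContractJsonSerializer mentioned above could look like this:

        using System;
        using System.IO;
        using System.Runtime.Serialization.Json;
        using System.Text;

        // Hypothetical name; any class inheriting from this can be pushed to the client.
        [Serializable]
        public abstract class JsonObjectBase
        {
            // Serializes the runtime type of the current object to a JSON string,
            // suitable for assigning to a protected page property that the
            // <%= ... %> block in the markup emits into the page.
            public string GetJSONString()
            {
                DataContractJsonSerializer serializer = new DataContractJsonSerializer(this.GetType());
                using (MemoryStream stream = new MemoryStream())
                {
                    serializer.WriteObject(stream, this);
                    return Encoding.UTF8.GetString(stream.ToArray());
                }
            }
        }

    In the page code-behind, a protected member such as SimpleClassJSON would then be assigned the result of GetJSONString() on a populated SimpleClass instance, which is what the <%= SimpleClassJSON %> block above renders into the page.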

    Read the article

  • Speaker at the German Visual FoxPro Developer Conference 2004

    The following is an excerpt from the UniversalThread conference coverage of the German Visual FoxPro Developer Conference 2004 written by Hans-Otto Lochmann, Armin Neudert and myself. TRACK Active FoxPro Pages Back in 1996 Peter Herzog invented a FoxPro based solution to provide intranet capabilities for one of his customers. Nearly at the same time Rick Strahl had the same task and created WestWind Web Connection (WWWC). The aspect that developers have to have a full Visual FoxPro development environment to create WWWC solutions was the starting point of a "personal sportive competition" of Peter to write his own solution. But the main aspect has to be that it doesn't rely on a full VFP version in order to run. The VFP runtime should enough and the source code has to be compiled and interpreted on the fly. So, as Microsoft released Active Server Pages a name for Peter's solution was found: Active FoxPro Pages (AFP). During the years many drawbacks, design aspects as well as technological hassles forced ProLib Software to refactor the product. This way many limits like DCOM configuration, file-based information transfer between Web server and AFP, missing features (like upload forms or other Web servers than IIS) and extensibility were eliminated. As a consequence ProLib Software decided to rewrite Active FoxPro Pages in mid of 2002 completely. Christof Wollenhaupt, before his marriage known as Christof Lange, and Jochen Kirstätter had to solve this task. AFP 3.0 was officially released at German Devcon in November 2002. Today AFP has six distributors world-wide and there is a lot more information available online than before version 3.0. Directly after a short welcome speech by Rainer Becker, Jochen Kirstätter - aka JoKi - opened today's AFP track and introduced the basic concepts how Active FoxPro Pages works in general, explained the AFP terminilogy and every single component, and presented a small Walk-Through about how to write an AFP-based Web solution. Actually his presentation slides themselves were an AFP Web application. This way it was easy to integrate accompanying AFP samples on the fly. Additionally it was shown that no Visual FoxPro development environment is needed to create a Web application. A simple text editor like NotePad or any WYSIWYG editor on the market is usable to fullfil customer's requirements.Welcome at least two new speakers - Nina Schwanzer and Bernhard Reiter. Both are working at ProLib Software and this year's conference is their first time as speakers. And they did their job very well. The whole session was kind of a "ping pong" game and those two complemented each other to keep the audience in tension. First, they described typical requirements a modern desktop application should fullfil - online registration and activation, auto-update capabilities, or even frontend to administer a Web application on a remote system via internet, and explained how possible solutions like Web Services (using the SOAP interface), DCOM, and even .NET might solve those requirements. But any of those ways has different drawbacks like complicated installation or configuration, or extraordinary download sizes. Next, they introduced a technology they developed and used in a customer's project: Active FoxPro Pages Remote Procedure Call (AFP RPC). [...]   In the next session JoKi described how to extend Active FoxPro Pages. On the one hand AFP provides a plugin interface, and on the other hand any addon for Visual FoxPro might be usable as well. 
During the first half he spoke about the plugin interface and wrote live a new AFP extension - the Devcon plugin. Later he questioned any former step and showed that a single AFP document may solve the problem as well. So, developing extensions is only interesting if they are re-usable and generic. At the end he talked about multiple interfaces for the same business logic. For instance plain VFP class, COM server and .NET integration. Currently there are several specialized AFP extensions for sending mail, for using cryptographic routines (ie. based on .NET classes), or enhanced methods to handle HTML/XML strings.Rainer Becker and Peter Herzog introduced a new development for Visual Extend (VFX) - an AFP form builder. With this builder creating an AFP Web form designed with Visual FoxPro's form designer was a matter of seconds. The builder itself is currently in pre-release status and will be part of the VFX framework in the future. It was very impressive to see that the whole design of a form as well as most parts of its functionality were exported to a combination of HTML, JavaScript and Active FoxPro Pages. At half-time Jürgen "wOOdy" Wondzinski and JoKi changed places with Rainer and Peter, and presented some Web solutions in AFP. [...] Visual FoxPro 9.0 und Linux Is Linux still a topic for Visual FoxPro developers based on the activities during this year? In his session Jochen Kirstätter - aka JoKi - went not through the technical steps and requirements on how to setup and run FoxPro on a Linux client. Instead, he explained what Linux actually is, and talked about the high variety of distributions. In fact there are a lot of distributions around but since some several years there are some specialized ones available: Live Distributions (aka LiveCDs).The intension of LiveCDs is to run a full-featured Linux operating system on any personal computer directly from a bootable medium, like CD, DVD, or even USB memory stick, without installation on a hard disk. One of the first Linux LiveCDs was made by Klaus Knopper and is well-known as Knoppix. Today, many other LiveCDs are based on the concepts of Knoppix. During the session Jochen booted Morphix, a very light-weighted LiveCD, on his notebook, and actually showed the attendees that testing and playing around with Linux is absolutely easy. Running a text processing application swept away most of the contrary aspects the audience had. Okay, where is the part about FoxPro? Well, there are several scenarios a customer might require usage of Linux, and actually with all of them FoxPro could deal with. I guess that one of the more common ones is the situation that a customer has a heterogeneous intranet with Windows clients and Linux servers, i.e. Windows XP Professional and any Linux distribution on their servers. Even in this scenario there are two variants hidden! Why? Well, on the one hand there is a software package called Samba, that provides Windows server capabilities to a Linux system, and on the other hand there are several SQL servers for Linux, like PostgreSQL, DB2 and MySQL. Either way, FoxPro is able to deal with these scenarios, but you as developer have to know what you are talking about with your customers. And even if there's no Windows operating system, you are able to provide a FoxPro-based solution. Using the wine library - wine stands for Wine Is Not an Emulator - you are able to run your VFP applications on Linux clients, too; but not without reading VFP's EULA. 
Licenses were also part of the session, and Jochen discussed the meaning of Open Source and how it is misunderstood by many developers. Open Source does not mean that something comes without a fee. Instead, it stands for access to the source code of an application or tool. And VFP itself is one of the best examples to explain Open Source, due to the fact that for years VFP has shipped with the xSource.zip archive. [...]

    Read the article

  • The APEX of Business Value...or...the Business Value of APEX? Oracle Cloud Takes Oracle APEX to New Heights!

    - by Gene Eun
    The attraction of Oracle Application Express (APEX) has increased tremendously with the recent launch of the Oracle Cloud. APEX already supported departmental development and deployment of business applications with minimal involvement from the IT department. Positioned as the ideal replacement for MS Access, APEX has probably managed better to capture the eye of developers, and was used for enterprise application development at least as much as for the kind of tactical applications that Oracle strategically positioned it for. With APEX as PaaS from the Oracle Cloud, a leap is made to a much higher level of business value. Now the IT department is not even needed to make infrastructure available with a database running on it. All the business needs is a credit card. And the business application that is developed, managed and used from the cloud through a standard browser can now just as easily be accessed by users from around the world as by users from the business department itself. As a bonus – the development of the APEX application is also done in the cloud – with no special demands on the location or the enterprise access privileges of the developers. To sum it up, with APEX from the Oracle Cloud Database Service: the development environment is up and running in minutes; no involvement from the internal IT department is required (not for infrastructure, platform, or development); superior availability and scalability are offered by Oracle; users from anywhere in the world can be invited to access the application; and developers from anywhere in the world can participate in creating and maintaining the application. In addition: because the Oracle Cloud platform is the same as the on-premise platform, you can still decide to move the APEX application between the cloud and the local environment – and back again. The REST-ful services that are available through APEX allow programmatic interaction with the database under the APEX application. That means that this database can be synchronized with on-premise databases or data stores in (other) clouds. Through the Oracle Cloud Messaging Service, the APEX application can easily enter into asynchronous conversations with other APEX applications, Fusion Middleware applications (ADF, SOA, BPM) and any other type of REST-enabled application. In my opinion, now, for the first time perhaps, APEX offers the attraction to the business that has been suggested before: because of the cloud, all the business needs is a credit card (a budget of $175 per month), an internet connection and a browser. Not like before, with a PC hidden under a desk or a database running somewhere in the data center. No matter how unattended: equipment is needed, power is consumed, the database needs to be kept running and, if Oracle Database XE does not suffice, software licenses are required as well. And this setup always has a security challenge associated with it. The cloud fee for the Oracle Cloud Database Service includes infrastructure, power, licenses, availability, platform upgrades, a collection of reusable application components and the development and runtime environments containing the APEX platform. Of course this not only means that business departments can move quickly without having to convince their IT colleagues to move along – it also means that small organizations that do not even have IT colleagues can do the same. 
Getting tailored applications or applications up and running to get in touch with users and customers all over the world is now within easy reach for small outfits – without any investment. My misunderstanding: for a long time, I was under the impression that the essence of APEX was that the business could create applications themselves – meaning that business ‘people’ would actually go into APEX to create the application. To me APEX was too much of a developers’ tool to see that happen – apart from the odd business analyst who missed his or her calling as an IT developer. Having looked at various other cloud based development offerings – including Force.com, Mendix, WaveMaker, WorkXpress, OrangeScape, Caspio and Cordys – I have come to realize my mistake. All these platforms are positioned for 'the business' but require a fair amount of coding and technical expertise. However, they make the business happy nevertheless, because they allow the business to completely circumvent the IT department. That is the essence. Not having to go through the red tape, not having to wait for IT staff who (justifiably) need weeks or months to provide an environment, not having to deal with administrators (again, justifiably) refusing to take on that 'strange environment'. Being able to think of an initiative and turn it into action right away. The business does not have to build the application - it can easily hire some external developers or even that nerdy boy next door. They can get started, get an application up and running and invite users in – especially external users such as customers. They will worry later about upgrades, life cycle management and integration. Getting applications up and running quickly and turning ideas into action and results right away - that is the key selling point for all these cloud offerings, including APEX from the Cloud. And it is a compelling story. For APEX probably even more so than for the others. While I consider APEX a somewhat proprietary framework compared with ‘regular’ Java/JEE web development (or even .NET and PHP development), it is still far more open than most cloud environments. APEX is SQL and PL/SQL based – nothing special about those languages – and can run just as easily on site as in the cloud. It has been around since 2004 (not including several predecessors that fed straight into APEX) so it can be considered pretty mature. Oracle as a company seems pretty stable – so investments in its technology are bound to last for some time to come. By the way: neither APEX nor the other Cloud DevaaS offerings are targeted at creating applications with enormous lifetimes. They fit into a trend of agile development and rapid life cycle management, with fairly lightweight user interfaces that quickly adapt to taste, technology trends and functional requirements and that are easily replaced. APEX and ADF – a match made in heaven?! (or at least in the sky) Note that using APEX only for a cloud based database with REST-ful Services is also a perfectly viable scenario: any UI – mobile or browser based – capable of consuming REST-ful services can be created against such a business tier. Creating an ADF Mobile application, for example, that runs against REST-ful services is a best practice for mobile development. Such REST-ful services can be consumed from any service provider – including the cloud based, APEX powered REST-ful services running against the Oracle Cloud Database Service! 
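To make the REST angle a little more concrete, here is a minimal sketch of what consuming such a service could look like from any HTTP-capable client. The host, path, resource name and the "items" wrapper below are assumptions for illustration only, not an actual Oracle Cloud API:

<?php
// Hypothetical APEX RESTful Service endpoint - the host, path and resource
// name are placeholders for illustration, not a real Oracle Cloud URL.
$endpoint = "https://example-apex-host/apex/myworkspace/hr/employees/";

$ch = curl_init($endpoint);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);  // return the body instead of echoing it
curl_setopt($ch, CURLOPT_HTTPHEADER, array("Accept: application/json"));

$response = curl_exec($ch);
if ($response === false) {
    die("Request failed: " . curl_error($ch));
}
curl_close($ch);

// Assumption: the service wraps its result set in an "items" array;
// adjust the key to whatever your REST template actually returns.
$data = json_decode($response, true);
foreach ($data["items"] as $row) {
    echo $row["ename"] . "\n";
}
?>

The same pattern works in either direction: any REST-enabled application - ADF Mobile, a Fusion Middleware service, or a plain script like the sketch above - can act as the consumer, which is exactly the decoupling described here.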
The ADF Mobile architecture overview can easily be morphed to fit the APEX services in – allowing for a cloud based mobile app: Want to learn more about Oracle Database Cloud Service or Oracle Cloud, just visit cloud.oracle.com  or oracle.com/cloud. Repost of a blog entry by Rick Greenwald, Director of Product Management, Oracle Database Cloud Service.

    Read the article

  • Thoughts on the Nomination Committee Campaign 2014

    - by Testas
    Congratulations to Erin, Andy and Allen on making the Nomination Committee for 2014. As Mark Broadbent (@retracement) stated in his tweet, there’s a great set of individuals for the Nom Com, and I could not agree more. I know Erin and Allen, and I know how much value they will bring to the process. I don’t know Andy as well, but I am sure he will do a great job and I hope I can meet him at PASS soon. The final candidate appointed by the PASS board is Rick Bolesta, who brings a wealth of experience to the process. I also want to take the opportunity to thank all who have voted. Not just for me, but for all the candidates during the election. Your contribution is greatly appreciated. Would I apply for the Nom Com again?  Yes I would. My first election experience has been a learning experience in itself. So I accept the result and look forward to applying next year. Moving on from this, I do want to express my opinion about the lack of international representation in the election process. One of the tweets that I saw after the result was from Adam Machanic (@AdamMachanic) who commented on the lack of international members on the Nom Com. If truth be told, I was disappointed – when the candidate list was released -- that for the second time in recent elections there was a lack of international candidates on the candidate list. It feels that only Brits and Americans partake in such elections. This is a real shame, and I can’t help thinking why this is the case. Hugo Kornelis (@Hugo_Kornelis) wrote a blog here to express his thoughts. He did raise some valid points. I don’t know why there is an absence of international candidates. I know that the team at PASS are looking to improve the situation, so I do not want to give the impression that PASS are doing nothing. For reference please see Bill Graziano’ s article here to see how PASS are addressing the situation. There is a clear direction to change the rules within PASS to give greater inclusion of international members. In addition to this, I wanted to explore a couple of potential approaches to address the situation. I am not saying that they are the right answer, but when I see challenges, I like to bring potential solutions to the table. 1.       Use the PASS mission statement to define a tactical objective that engages community leaders into the election process. If you are not familiar with the PASS mission statement, let me provide it here as laid out on the PASS website. “Empower data professionals who leverage Microsoft technologies to connect, share, and learn through networking, knowledge sharing, and peer-based learning” PASS fulfil this mission statement regularly. Whether you attend SQL Saturday, SQLRally, SQLPASS and BA conference itself. The biggest value of PASS is the ability to bring our profession together. And the 24 hour hop allows you to learn from the comfort of your own office/home. This mission should be extended to define a tactical objectives that bring greater networking and knowledge sharing between PASS Chapter leaders/Regional Mentors and PASS HQ. It should help educate the leaders about the opportunities of elections and how leaders can become involved. I know PASS engage with Chapter leaders on a regular basis to discuss community matters for the benefit of PASS members. How could this be achieved? Perhaps PASS could perform a quarterly virtual meeting that specifically looks at helping leaders become more involved with the election process 2.       
Evolve the Global Growth Strategy into a Global Engagement Strategy. One of the remits of the PASS board over the last couple of years is the Global Growth strategy. This has been very successful as we have seen the massive growth of events across the world. For that, I congratulate the board for this success. Perhaps the time is now right to look at solidifying this success, through a Global Engagement Strategy that starts with the collaboration of Chapter Leaders, Regional Mentors and Evangelists in their respective Countries or Regions. The engagement strategy should look at increasing collaboration between community leaders for the benefit of their respective communities. It should also provide a channel for encouraging leaders to put themselves forward for the elections. How could this be achieved? In the UK, there has been a big growth in PASS Chapters and SQL Server Events that was approaching saturation point. The introduction of the Community Engagement Day -- channelled through the SQLBits conference -- has enabled Chapter Leaders to collaborate, connect and share with PASS, Sponsors and Microsoft. It also provides the ability for Chapter Leaders to speak directly to the PASS representatives from PASSHQ. This brings with it the ability for PASS community evangelists to communicate PASS objectives. It has also been the event where we have found out; and/or encouraged, Chapter Leaders to put themselves forward for elections. People like encouragement and validation when going for something like an election, and being able to discuss this with peers at a dedicated event provides a useful platform. PASS has the people in place already to facilitate such an event. Regional Mentors could potentially help organise such events on an annual basis, with PASSHQ providing support in providing a room/Lync access for the event to take place. It would be really good if a PASSHQ representative could attend in person as well.   3.       Restrict candidates to serve only a limited number of terms. A frequent comment I saw on social networking was that the elections can be seen by some as a popularity conference. Perhaps by limiting the number of terms that an individual can serve on either the Nom Com or the BOD, other candidates may be encouraged to be more actively involved within the PASS election process. I don’t think that the current byelaws deal with this particular suggestion. I also saw a couple of tweets that stated that more active community members did not apply for the Nom Com. I struggled to understand how the individuals of the tweets measured “more active”. It just also further solidified the subjective nature of elections. In the absence of how candidates are put forward for the elections. Then a restriction of terms enables the opportunity to be extended to others. How could this be achieved? Set a resolution that is put to a community vote as to the viability of such a solution. For example, the questions for the vote could be: Should individuals in the Nom Com and BoD be limited to a certain number of terms?  Yes/No. What is the maximum number of terms a candidate could serve?   It would be simple to execute such a vote, and the community will have an opportunity to have a say in an important aspect of the PASS organisation. And is the change is successful, then add it as a byelaw.   So there are some of my thoughts. I am not saying they are right or wrong. 
But I do hope that there is a concerted effort to encourage more candidates from other reaches of the Globe to become involved with future elections.   It would be good to hear your thoughts   Thanks   Chris

    Read the article

  • SQL SERVER – Data Sources and Data Sets in Reporting Services SSRS

    - by Pinal Dave
    This example is from the Beginning SSRS by Kathi Kellenberger. Supporting files are available with a free download from the www.Joes2Pros.com web site. Connecting to Your Data? When I was a child, the telephone book was an important part of my life. Maybe I was just a nerd, but I enjoyed getting a new book every year to page through to learn about the businesses in my small town or to discover where some of my school acquaintances lived. It was also the source of maps to my town’s neighborhoods and the towns that surrounded me. To make a phone call, I would need a telephone number. In order to find a telephone number, I had to know how to use the telephone book. That seems pretty simple, but it resembles connecting to any data. You have to know where the data is and how to interact with it. A data source is the connection information that the report uses to connect to the database. You have two choices when creating a data source: whether to embed it in the report or to make it a shared resource usable by many reports. Data Sources and Data Sets A few basic terms will make the upcoming choices make more sense. What database on what server do you want to connect to? It would be better to just ask… “what is your data source?” The connection you need to make to get your report’s data is called a data source. If you connect to a data source (like the JProCo database) there may be hundreds of tables. You probably only want data from just a few tables. This means you want to write a specific query against this data source. A query on a data source to get just the records you need for an SSRS report is called a Data Set. Creating a local Data Source You can embed a connection from your report directly to your JProCo database which (let’s say) is installed on a server named Reno. If you move JProCo to a new server named Tampa then you need to update the Data Set. If you have 10 reports in one project that were all pointing to the JProCo database on the Reno server then they would all need to be updated at once. It’s possible to make a project level Data Source and have each report use that. This means one change can fix all 10 reports at once. This would be called a Shared Data Source. Creating a Shared Data Source The best advice I can give you is to create shared data sources. The reason I recommend this is that if a database moves to a new server you will have just one place in Report Manager to make the server name change. That one change will update the connection information in all the reports that use that data source. To get started, you will start with a fresh project. Go to Start > All Programs > SQL Server 2012 > Microsoft SQL Server Data Tools to launch SSDT. Once SSDT is running, click New Project to create a new project. Once the New Project dialog box appears, fill in the form, as shown in. Be sure to select Report Server Project this time – not the wizard. Click OK to dismiss the New Project dialog box. You should now have an empty project, as shown in the Solution Explorer. A report is meant to show you data. Where is the data? The first task is to create a Shared Data Source. Right-click on the Shared Data Sources folder and choose Add New Data Source. The Shared Data Source Properties dialog box will launch where you can fill in a name for the data source. By default, it is named DataSource1. 
The best practice is to give the data source a more meaningful name. It is possible that you will have projects with more than one data source and, by naming them, you can tell one from another. Type the name JProCo for the data source name and click the Edit button to configure the database connection properties. If you take a look at the types of data sources you can choose, you will see that SSRS works with many data platforms including Oracle, XML, and Teradata. Make sure SQL Server is selected before continuing. For this post, I am assuming that you are using a local SQL Server and that you can use your Windows account to log in to the SQL Server. If, for some reason you must use SQL Server Authentication, choose that option and fill in your SQL Server account credentials. Otherwise, just accept Windows Authentication. If your database server was installed locally and with the default instance, just type in Localhost for the Server name. Select the JProCo database from the database list. At this point, the connection properties should look like. If you have installed a named instance of SQL Server, you will have to specify the server name like this: Localhost\InstanceName, replacing the InstanceName with whatever your instance name is. If you are not sure about the named instance, launch the SQL Server Configuration Manager found at Start > All Programs > Microsoft SQL Server 2012 > Configuration Tools. If you have a named instance, the name will be shown in parentheses. A default instance of SQL Server will display MSSQLSERVER; a named instance will display the name chosen during installation. Once you get the connection properties filled in, click OK to dismiss the Connection Properties dialog box and OK again to dismiss the Shared Data Source properties. You now have a data source in the Solution Explorer. What’s next I really need to thank Kathi Kellenberger and Rick Morelan for sharing this material for this 5 day series of posts on SSRS. To get really comfortable with SSRS you will get to know the different SSDT windows, Build reports on your own (without the wizards),  Add report headers and footers, Accept user input,  create levels, charts, or even maps for visual appeal. You might be surprise to know a small 230 page book starts from the very beginning and covers the steps to do all these items. Beginning SSRS 2012 is a small easy to follow book so you can learn SSRS for less than $20. See Joes2Pros.com for more on this and other books. If you want to learn SSRS in easy to simple words – I strongly recommend you to get Beginning SSRS book from Joes 2 Pros. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Reporting Services, SSRS
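As a side note to the walkthrough above: the ingredients the Shared Data Source dialog asks for - server name (with an optional named instance), database, and credentials - are the same ones any client supplies when connecting to SQL Server. Purely as an illustration, and assuming the Microsoft sqlsrv extension for PHP is installed, connecting to the same JProCo database might look like this (the instance name is a placeholder):

<?php
// Same connection ingredients as the SSRS data source dialog:
// server (optionally with a named instance), database, credentials.
// "MyInstance" is a placeholder; for a default instance use just "localhost".
$serverName = "localhost\\MyInstance";

$connectionInfo = array(
    "Database" => "JProCo",
    // Leaving out "UID" and "PWD" uses Windows Authentication, mirroring
    // the dialog's default; add them if you need SQL Server authentication.
);

$conn = sqlsrv_connect($serverName, $connectionInfo);
if ($conn === false) {
    die(print_r(sqlsrv_errors(), true));
}
echo "Connected to JProCo";
sqlsrv_close($conn);
?>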

    Read the article

  • Community Outreach - Where Should I Go

    - by Roger Brinkley
    A few days ago I was talking to a person new to community development and they asked me what guidelines I used to determine the worthiness of a particular event. After our conversation was over I thought about it a little bit more and figured out there are three ways to determine if any event (be it a conference, blog, podcast or other social media) is worth doing: Transferability, Multiplication, and Impact. Transferability - Is what I have to say useful to the people that are going to hear it? For instance, consider a company that has a product offering that can connect up using a number of languages like Scala, Groovy or Java. Sending a Scala expert to talk about Scala and the product is not transferable to a Java User Group, but a Java expert doing the same talk with a Java slant is. Similarly, talking about JavaFX to any Java User Group meeting in Brazil was pretty much a wasted effort until it was open sourced. Once it was open sourced it was well received. You can also look at transferability in relation to the subject matter that you're dealing with. How transferable is a presentation that I create? Can I, or a technical writer on the staff, turn it into some technical document? Could it be converted into some type of screencast? If we have a regular podcast can we make a reference to the document, catch the high points or turn it into an interview? Is there a way of using this in the sales group? In other words, is the document purely one-dimensional or can it be re-purposed in other forms? Multiplication - On every trip I'm looking for 2 to 5 solid connections that I can make with developers. These are long-term connections, because I know that once that relationship is established it will lead to another 2 - 5 from that connection, and within a couple of years we're talking about some 100 connections from just one developer. For instance, when I was working on JavaHelp in 2000 I hired a science teacher with a programming background. We've developed a very tight relationship over the years though we rarely see each other more than once a year. But at this JavaOne, one of his employees came up to me and said, "Richard (Rick Hard in Czech) told me to tell you that he couldn't make it to JavaOne this year but if I saw you to tell you hi". Another example is from my Mobile & Embedded days in Brazil. On our very first FISL trip about 5 years ago there were two university students that had created a project called "Marge". Marge was a Bluetooth framework that made connecting Bluetooth devices easier. I invited them to a "Sun" dinner that evening. Originally they were planning on leaving that afternoon, but they changed their plans recognizing the opportunity. Their eyes were as big as saucers when they realized the level of engineers at the meeting. They went home and started a JUG in Florianopolis that we've visited more than a couple of times. One of them went to work for a Brazilian government lab similar to Berkeley Labs, MIT Labs, Johns Hopkins Applied Physics Labs or Lincoln Labs in the US. That presented us with an opportunity to show Embedded Java as a possibility for some of the work they were doing there. Impact - The final criterion is how life-changing what I'm going to say will be to the individuals I'm reaching. A t-shirt is just a token, but when I reach down and tug at their developer hearts then I know I've succeeded. I'll never forget one time we flew all night to reach Joan Pasoa in Northern Brazil. 
We arrived at 2 am and went immediately to our hotel, only to be woken up at 6 am to travel 2 hours by car to the presentation hall. When we arrived we were totally exhausted. Outside the facility there were 500 people lined up to hear 6 speakers for the day. That itself was uplifting. I delivered one of my favorite talks on "I have passion". It was a talk on golf and embedded Java development, "Find your passion". When we finished, a couple of first-year students came up to me and said how much my talk had inspired them. FISL is another great example. I had been there about 4 years in a row. FISL is a very young group of developers so capturing their attention is important. Several of the students will come back 2 or 3 years later and ask me questions about research or jobs. And then there's Louis. Louis is one of my favorite Brazilians. I can only describe him as a big Brazilian teddy bear. I see him every year at FISL. He works primarily in Java EE but he's attended every single one of my talks over the last 4 years. I can't tell you why, but he always greets me and gives me a hug. For some reason I've had a real impact. And of course when it comes to impact you don't just measure a presentation but every single interaction you have at an event. It's the hallway conversations, the booth conversations, but more importantly it's the conversations at dinner tables or in the cars when you're getting transported to an event. There's a good story that illustrates this. Last year in the spring I was traveling to Goiânia in Brazil. I've been there many times and the leaders there know me well. One young man has picked me up at the airport on more than one occasion. We were going out to dinner one evening and he brought his girlfriend along. One thing led to another and I eventually asked him, in front of her, "Why haven't you asked her to marry you?" There were all kinds of excuses and she just looked at him and smiled. When I came back in December for JavaOne he came and sought me out. "I just want to tell you that I thought a lot about what you said, and I asked her to marry me. We're getting married next spring." Sometimes just one presentation is all it takes to make an impact. Other times it takes years. Some impacts are directly related to the company and some are more personal in nature. It doesn't matter which it is, because it's having the impact that matters.

    Read the article

  • How to organize pictures on website using css? [on hold]

    - by user3624023
    Here is my website without any CSS: http://www.wmcicompsci.ca/cs20/students/theglowcloud/Bare%20bones%20website/classics_bare.html I am new to CSS and I would like to organize these pictures in this fashion: http://css-tricks.com/examples/SlideinCaptions/ I would just like this layout for the pictures, but I do not need the sliding of the captions (although I would like that, it does not work in my browser). I would like the captions to be like titles on top of the pictures. Here is my current HTML code:
    <!DOCTYPE html>
    <html>
    <head>
        <title> My favourite Fantasy books</title>
        <meta charset = "utf-8">
        <link rel="stylesheet" href="css.css">
    </head>
    <body>
        <nav id="main_nav">
            <ul>
                <li><a href = " homepage_css.html"> Homepage</a></li>
                <li><a href="science_fiction_css.html">Science Fiction</a></li>
                <li><a href="classics_css.html">Classics</a></li>
                <li><a href="fantasy_css.html">Fantasy</a></li>
            </ul>
        </nav>
        <h1> Fantasy Genre</h1>
        <p> Here are my favourites:</p>
        <ul>
            <li> Goblet of Fire by J.K Rowling (4th book in the Harry Potter Series) </li>
            <li><img class= displayed src="pics/fantasy/goblet_of_fire.jpg" width="200" alt="Goblet of Fire book cover"></li>
            <li> Graceling by Kristan Cashore </li>
            <li><img src="pics/fantasy/graceling.jpg" width="200" alt = " Graceling book cover"></li>
            <li> Serpent's Shadow by Rick Riordan (3rd book in the Kane Chronicles) </li>
            <li><img src="pics/fantasy/serpents_shadow.jpg" width="200" alt="Serpent's Shadow book cover"></li>
            <li> The Hobbit by J.R.R. Tolkein </li>
            <li><img src="pics/fantasy/the_hobbit.jpg" width="200" alt="The Hobbit book cover"></li>
            <li> The False Prince by Jennifer Neilson (1st book in the Ascendance Triology) </li>
            <li><img src="pics/fantasy/the_false_prince.jpg" width="200" alt="The False Prince book cover"></li>
        </ul>
    </body>
    </html>

    Read the article

  • Adding an Admin user to an ASP.NET MVC 4 application using a single drop-in file

    - by Jon Galloway
    I'm working on an ASP.NET MVC 4 tutorial and wanted to set it up so just dropping a file in App_Start would create a user named "Owner" and assign them to the "Administrator" role (more explanation at the end if you're interested). There are reasons why this wouldn't fit into most application scenarios: It's not efficient, as it checks for (and creates, if necessary) the user every time the app starts up The username, password, and role name are hardcoded in the app (although they could be pulled from config) Automatically creating an administrative account in code (without user interaction) could lead to obvious security issues if the user isn't informed However, with some modifications it might be more broadly useful - e.g. creating a test user with limited privileges, ensuring a required account isn't accidentally deleted, or - as in my case - setting up an account for demonstration or tutorial purposes. Challenge #1: Running on startup without requiring the user to install or configure anything I wanted to see if this could be done just by having the user drop a file into the App_Start folder and go. No copying code into Global.asax.cs, no installing addition NuGet packages, etc. That may not be the best approach - perhaps a NuGet package with a dependency on WebActivator would be better - but I wanted to see if this was possible and see if it offered the best experience. Fortunately ASP.NET 4 and later provide a PreApplicationStartMethod attribute which allows you to register a method which will run when the application starts up. You drop this attribute in your application and give it two parameters: a method name and the type that contains it. I created a static class named PreApplicationTasks with a static method named, then dropped this attribute in it: [assembly: PreApplicationStartMethod(typeof(PreApplicationTasks), "Initializer")] That's it. One small gotcha: the namespace can be a problem with assembly attributes. I decided my class didn't need a namespace. Challenge #2: Only one PreApplicationStartMethod per assembly In .NET 4, the PreApplicationStartMethod is marked as AllMultiple=false, so you can only have one PreApplicationStartMethod per assembly. This was fixed in .NET 4.5, as noted by Jon Skeet, so you can have as many PreApplicationStartMethods as you want (allowing you to keep your users waiting for the application to start indefinitely!). The WebActivator NuGet package solves the multiple instance problem if you're in .NET 4 - it registers as a PreApplicationStartMethod, then calls any methods you've indicated using [assembly: WebActivator.PreApplicationStartMethod(type, method)]. David Ebbo blogged about that here:  Light up your NuGets with startup code and WebActivator. In my scenario (bootstrapping a beginner level tutorial) I decided not to worry about this and stick with PreApplicationStartMethod. Challenge #3: PreApplicationStartMethod kicks in before configuration has been read This is by design, as Phil explains. It allows you to make changes that need to happen very early in the pipeline, well before Application_Start. That's fine in some cases, but it caused me problems when trying to add users, since the Membership Provider configuration hadn't yet been read - I got an exception stating that "Default Membership Provider could not be found." The solution here is to run code that requires configuration in a PostApplicationStart method. But how to do that? 
Challenge #4: Getting PostApplicationStartMethod without requiring WebActivator The WebActivator NuGet package, among other things, provides a PostApplicationStartMethod attribute. That's generally how I'd recommend running code that needs to happen after Application_Start: [assembly: WebActivator.PostApplicationStartMethod(typeof(TestLibrary.MyStartupCode), "CallMeAfterAppStart")] This works well, but I wanted to see if this would be possible without WebActivator. Hmm. Well, wait a minute - WebActivator works in .NET 4, so clearly it's registering and calling PostApplicationStartup tasks somehow. Off to the source code! Sure enough, there's even a handy comment in ActivationManager.cs which shows where PostApplicationStartup tasks are being registered: public static void Run() { if (!_hasInited) { RunPreStartMethods(); // Register our module to handle any Post Start methods. But outside of ASP.NET, just run them now if (HostingEnvironment.IsHosted) { Microsoft.Web.Infrastructure.DynamicModuleHelper.DynamicModuleUtility.RegisterModule(typeof(StartMethodCallingModule)); } else { RunPostStartMethods(); } _hasInited = true; } } Excellent. Hey, that DynamicModuleUtility seems familiar... Sure enough, K. Scott Allen mentioned it on his blog last year. This is really slick - a PreApplicationStartMethod can register a new HttpModule in code. Modules are run right after application startup, so that's a perfect time to do any startup stuff that requires configuration to be read. As K. Scott says, it's this easy: using System; using System.Web; using Microsoft.Web.Infrastructure.DynamicModuleHelper; [assembly:PreApplicationStartMethod(typeof(MyAppStart), "Start")] public class CoolModule : IHttpModule { // implementation not important // imagine something cool here } public static class MyAppStart { public static void Start() { DynamicModuleUtility.RegisterModule(typeof(CoolModule)); } } Challenge #5: Cooperating with SimpleMembership The ASP.NET MVC Internet template includes SimpleMembership. SimpleMembership is a big improvement over traditional ASP.NET Membership. For one thing, rather than forcing a database schema, it can work with your database schema. In the MVC 4 Internet template case, it uses Entity Framework Code First to define the user model. SimpleMembership bootstrap includes a call to InitializeDatabaseConnection, and I want to play nice with that. There's a new [InitializeSimpleMembership] attribute on the AccountController, which calls \Filters\InitializeSimpleMembershipAttribute.cs::OnActionExecuting(). That comment in that method that says "Ensure ASP.NET Simple Membership is initialized only once per app start" which sounds like good advice. I figured the best thing would be to call that directly: new Mvc4SampleApplication.Filters.InitializeSimpleMembershipAttribute().OnActionExecuting(null); I'm not 100% happy with this - in fact, it's my least favorite part of this solution. There are two problems - first, directly calling a method on a filter, while legal, seems odd. Worse, though, the Filter lives in the application's namespace, which means that this code no longer works well as a generic drop-in. The simplest workaround would be to duplicate the relevant SimpleMembership initialization code into my startup code, but I'd rather not. I'm interested in your suggestions here. 
Challenge #6: Module Init methods are called more than once When debugging, I noticed (and remembered) that the Init method may be called more than once per page request - it's run once per instance in the app pool, and an individual page request can cause multiple resource requests to the server. While SimpleMembership does have internal checks to prevent duplicate user or role entries, I'd rather not cause or handle those exceptions. So here's the standard single-use lock in the Module's init method: void IHttpModule.Init(HttpApplication context) { lock (lockObject) { if (!initialized) { //Do stuff } initialized = true; } } Putting it all together With all of that out of the way, here's the code I came up with: using Mvc4SampleApplication.Filters; using System.Web; using System.Web.Security; using WebMatrix.WebData; [assembly: PreApplicationStartMethod(typeof(PreApplicationTasks), "Initializer")] public static class PreApplicationTasks { public static void Initializer() { Microsoft.Web.Infrastructure.DynamicModuleHelper.DynamicModuleUtility .RegisterModule(typeof(UserInitializationModule)); } } public class UserInitializationModule : IHttpModule { private static bool initialized; private static object lockObject = new object(); private const string _username = "Owner"; private const string _password = "p@ssword123"; private const string _role = "Administrator"; void IHttpModule.Init(HttpApplication context) { lock (lockObject) { if (!initialized) { new InitializeSimpleMembershipAttribute().OnActionExecuting(null); if (!WebSecurity.UserExists(_username)) WebSecurity.CreateUserAndAccount(_username, _password); if (!Roles.RoleExists(_role)) Roles.CreateRole(_role); if (!Roles.IsUserInRole(_username, _role)) Roles.AddUserToRole(_username, _role); } initialized = true; } } void IHttpModule.Dispose() { } } The Verdict: Is this a good thing? Maybe. I think you'll agree that the journey was undoubtedly worthwhile, as it took us through some of the finer points of hooking into application startup, integrating with membership, and understanding why the WebActivator NuGet package is so useful Will I use this in the tutorial? I'm leaning towards no - I think a NuGet package with a dependency on WebActivator might work better: It's a little more clear what's going on Installing a NuGet package might be a little less error prone than copying a file A novice user could uninstall the package when complete It's a good introduction to NuGet, which is a good thing for beginners to see This code either requires either duplicating a little code from that filter or modifying the file to use the namespace Honestly I'm undecided at this point, but I'm glad that I can weigh the options. If you're interested: Why are you doing this? I'm updating the MVC Music Store tutorial to ASP.NET MVC 4, taking advantage of a lot of new ASP.NET MVC 4 features and trying to simplify areas that are giving people trouble. One change that addresses both needs us using the new OAuth support for membership as much as possible - it's a great new feature from an application perspective, and we get a fair amount of beginners struggling with setting up membership on a variety of database and development setups, which is a distraction from the focus of the tutorial - learning ASP.NET MVC. Side note: Thanks to some great help from Rick Anderson, we had a draft of the tutorial that was looking pretty good earlier this summer, but there were enough changes in ASP.NET MVC 4 all the way up to RTM that there's still some work to be done. 
It's high priority and should be out very soon. The one issue I ran into with OAuth is that we still need an administrative user who can edit the store's inventory. I thought about a number of solutions for that - making the first user to register the admin, or assigning any user who registers with the username "Administrator" to the Administrator role - but they both ended up requiring extra code; also, I worried that people would use that code without understanding it or thinking about whether it was a good fit.

    Read the article

  • count specific values in a multidimensional array

    - by user1680701
    I have an odd set of arrays that I need to count how many times specific values show in the results. Currently I have this bit of code. $nested_arrays = shopp_orders( '2011-11-30 00:00:00', '2012-11-30 12:59:59', false, '', 2 ); print_r($nested_arrays); This code pulls multiple arrays (serialized data) from the database and outputs like this Array ( [30] => Purchase Object ( [purchased] => Array ( ) [columns] => Array ( ) [message] => Array ( ) [data] => Array ( ) [invoiced] => [authorized] => [captured] => [refunded] => [voided] => [balance] => 0 [downloads] => [shipable] => [shipped] => [stocked] => [_position:DatabaseObject:private] => 0 [_properties:DatabaseObject:private] => Array ( ) [_ignores:DatabaseObject:private] => Array ( [0] => _ ) [_map:protected] => Array ( ) [_table] => wp_shopp_demo_shopp_purchase [_key] => id [_datatypes] => Array ( [id] => int [customer] => int [shipping] => int [billing] => int [currency] => int [ip] => string [firstname] => string [lastname] => string [email] => string [phone] => string [company] => string [card] => string [cardtype] => string [cardexpires] => date [cardholder] => string [address] => string [xaddress] => string [city] => string [state] => string [country] => string [postcode] => string [shipname] => string [shipaddress] => string [shipxaddress] => string [shipcity] => string [shipstate] => string [shipcountry] => string [shippostcode] => string [geocode] => string [promos] => string [subtotal] => float [freight] => float [tax] => float [total] => float [discount] => float [fees] => float [taxing] => list [txnid] => string [txnstatus] => string [gateway] => string [paymethod] => string [shipmethod] => string [shipoption] => string [status] => int [data] => string [secured] => string [created] => date [modified] => date ) [_lists] => Array ( [taxing] => Array ( [0] => exclusive [1] => inclusive ) ) [id] => 30 [customer] => 12 [shipping] => 23 [billing] => 23 [currency] => 0 [ip] => 24.125.58.205 [firstname] => test [lastname] => test [email] => [email protected] [phone] => 1234567890 [company] => [card] => 1111 [cardtype] => Visa [cardexpires] => 1420070400 [cardholder] => test [address] => 123 Any Street [xaddress] => [city] => Danville [state] => VA [country] => US [postcode] => 24541 [shipname] => [shipaddress] => 123 Any Street [shipxaddress] => [shipcity] => Danville [shipstate] => VA [shipcountry] => US [shippostcode] => 24541 [geocode] => [promos] => Array ( ) [subtotal] => 49.37 [freight] => 9.98 [tax] => 9.874 [total] => 69.22 [discount] => 0 [fees] => 0 [taxing] => exclusive [txnid] => [txnstatus] => authed [gateway] => TestMode [paymethod] => credit-card-test-mode [shipmethod] => ItemRates-0 [shipoption] => Fast Shipping [status] => 0 [secured] => [created] => 1354096946 [modified] => 1354096946 ) [29] => Purchase Object ( [purchased] => Array ( ) [columns] => Array ( ) [message] => Array ( ) [data] => Array ( ) [invoiced] => [authorized] => [captured] => [refunded] => [voided] => [balance] => 0 [downloads] => [shipable] => [shipped] => [stocked] => [_position:DatabaseObject:private] => 0 [_properties:DatabaseObject:private] => Array ( ) [_ignores:DatabaseObject:private] => Array ( [0] => _ ) [_map:protected] => Array ( ) [_table] => wp_shopp_demo_shopp_purchase [_key] => id [_datatypes] => Array ( [id] => int [customer] => int [shipping] => int [billing] => int [currency] => int [ip] => string [firstname] => string [lastname] => string [email] => string [phone] => string [company] => string [card] => string [cardtype] => 
string [cardexpires] => date [cardholder] => string [address] => string [xaddress] => string [city] => string [state] => string [country] => string [postcode] => string [shipname] => string [shipaddress] => string [shipxaddress] => string [shipcity] => string [shipstate] => string [shipcountry] => string [shippostcode] => string [geocode] => string [promos] => string [subtotal] => float [freight] => float [tax] => float [total] => float [discount] => float [fees] => float [taxing] => list [txnid] => string [txnstatus] => string [gateway] => string [paymethod] => string [shipmethod] => string [shipoption] => string [status] => int [data] => string [secured] => string [created] => date [modified] => date ) [_lists] => Array ( [taxing] => Array ( [0] => exclusive [1] => inclusive ) ) [id] => 29 [customer] => 13 [shipping] => 26 [billing] => 25 [currency] => 0 [ip] => 70.176.223.40 [firstname] => Bryan [lastname] => Crawford [email] => [email protected] [phone] => 4802323049 [company] => ggg [card] => 1111 [cardtype] => Visa [cardexpires] => 1356998400 [cardholder] => ggg [address] => 1300 W Warner Rd [xaddress] => [city] => Gilbert [state] => AZ [country] => US [postcode] => 85224 [shipname] => [shipaddress] => 1300 W Warner Rd [shipxaddress] => [shipcity] => Gilbert [shipstate] => AZ [shipcountry] => US [shippostcode] => 85224 [geocode] => [promos] => Array ( ) [subtotal] => 29.95 [freight] => 9.98 [tax] => 0 [total] => 39.93 [discount] => 0 [fees] => 0 [taxing] => exclusive [txnid] => [txnstatus] => authed [gateway] => TestMode [paymethod] => credit-card-test-mode [shipmethod] => ItemRates-0 [shipoption] => Fast Shipping [status] => 0 [secured] => [created] => 1353538691 [modified] => 1353538691 ) ) This is order data from only two orders. I need to count how many times each state, each city, shipmethod, etc occur in the array. I tried the following but it only counted the 2 large arrays. function count_nested_array_keys(array &$a, array &$res=array()) { $i = 0; foreach ($a as $key=>$value) { if (is_array($value)) { $i += count_nested_array_keys($value, &$res); } else { if(!isset($res[$key])) $res[$key] = 0; $res[$key]++; $i++; } } return $i; } $total_item_count = count_nested_array_keys($nested_arrays, $count_per_key); echo "count per key: ", print_r($count_per_key), "\n"; If someone could show me how to count how many times each state value occurs, example, VA = 2 NC = 1 I can take it from there. Thank You.
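A minimal sketch of one way to get those counts, assuming shopp_orders() returns the array of Purchase objects shown in the dump above, with public properties such as state, shipstate, shipcity and shipmethod (the property names are taken from the dump - adjust the $fields list if your objects differ):

<?php
// Count how often specific field values occur across the returned orders.
$nested_arrays = shopp_orders( '2011-11-30 00:00:00', '2012-11-30 12:59:59', false, '', 2 );

$fields = array( 'state', 'shipstate', 'shipcity', 'shipmethod' );
$counts = array();

foreach ( $nested_arrays as $purchase ) {
    foreach ( $fields as $field ) {
        // The dump shows Purchase objects, but handle plain arrays too.
        if ( is_object( $purchase ) && isset( $purchase->$field ) ) {
            $value = $purchase->$field;
        } elseif ( is_array( $purchase ) && isset( $purchase[ $field ] ) ) {
            $value = $purchase[ $field ];
        } else {
            continue;
        }
        if ( $value === '' || $value === null ) {
            continue;
        }
        if ( ! isset( $counts[ $field ][ $value ] ) ) {
            $counts[ $field ][ $value ] = 0;
        }
        $counts[ $field ][ $value ]++;
    }
}

print_r( $counts );
?>

For the two orders in the dump this would print something like [shipstate] => Array ( [VA] => 1 [AZ] => 1 ); with more orders the tallies accumulate into the VA = 2, NC = 1 style of result you describe.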

    Read the article

  • Building The Right SharePoint Team For Your Organization

    - by Mark Rackley
    I see the question posted fairly often asking what kind SharePoint team an organization should have. How many people do I need? What roles do I need to fill? What is best for my organization? Well, just like every other answer in SharePoint, the correct answer is “it depends”. Do you ever get sick of hearing that??? I know I do… So, let me give you my thoughts and opinions based upon my experience and what I’ve seen and let you come to your own conclusions. What are the possible SharePoint roles? I guess the first thing you need to understand are the different roles that exist in SharePoint (and their are LOTS). Remember, SharePoint is a massive beast and you will NOT find one person who can do it all. If you are hoping to find that person you will be sorely disappointed. For the most part this is true in SharePoint 2007 and 2010. However, generally things are improved in 2010 and easier for junior individuals to grasp. SharePoint Administrator The absolutely positively only role that you should not be without no matter the size of your organization or SharePoint deployment is a SharePoint administrator. These guys are essential to keeping things running and figuring out what’s wrong when things aren’t running well. These unsung heroes do more before 10 am than I do all day. The bad thing is, when these guys are awesome, you don’t even know they exist because everything is running so smoothly. You should definitely invest some time and money here to make sure you have some competent if not rockstar help. You need an admin who truly loves SharePoint and will go that extra mile when necessary. Let me give you a real world example of what I’m talking about: We have a rockstar admin… and I’m sure she’s sick of my throwing her name around so she’ll just have to live with remaining anonymous in this post… sorry Lori… Anyway! A couple of weeks ago our Server teams came to us and said Hi Lori, I’m finalizing the MOSS servers and doing updates that require a restart; can I restart them? Seems like a harmless request from your server team does it not? Sure, go ahead and apply the patches and reboot during our scheduled maintenance window. No problem? right? Sounded fair to me… but no…. not to our fearless SharePoint admin… I need a complete list of patches that will be applied. There is an update that is out there that will break SharePoint… KB973917 is the patch that has been shown to cause issues. What? You mean Microsoft released a patch that would actually adversely affect SharePoint? If we did NOT have a rockstar admin, our server team would have applied these patches and then when some problem occurred in SharePoint we’d have to go through the fun task of tracking down exactly what caused the issue and resolve it. How much time would that have taken? If you have a junior SharePoint admin or an admin who’s not out there staying on top of what’s going on you could have spent days tracking down something so simple as applying a patch you should not have applied. I will even go as far to say the only SharePoint rockstar you NEED in your organization is a SharePoint admin. You can always outsource really complicated development projects or bring in a rockstar contractor every now and then to make sure you aren’t way off track in other areas. For your day-to-day sanity and to keep SharePoint running smoothly, you need an awesome Admin. Some rockstars in this category are: Ben Curry, Mike Watson, Joel Oleson, Todd Klindt, Shane Young, John Ferringer, Sean McDonough, and of course Lori Gowin. 
SharePoint Developer Another essential role for your SharePoint deployment is a SharePoint developer. Things do start to get a little hazy here and there are many flavors of “developers”. Are you writing custom code? using SharePoint Designer? What about SharePoint Branding?  Are all of these considered developers? I would say yes. Are they interchangeable? I’d say no. Development in SharePoint is such a large beast in itself. I would say that it’s not so large that you can’t know it all well, but it is so large that there are many people who specialize in one particular category. If you are lucky enough to have someone on staff who knows it all well, you better make sure they are well taken care of because those guys are ready-made to move over to a consulting role and charge you 3 times what you are probably paying them. :) Some of the all-around rockstars are Eric Shupps, Andrew Connell (go Razorbacks), Rob Foster, Paul Schaeflein, and Todd Bleeker SharePoint Power User/No-Code Solutions Developer These SharePoint Swiss Army Knives are essential for quick wins in your organization. These people can twist the out-of-the-box functionality to make it do things you would not even imagine. Give these guys SharePoint Designer, jQuery, InfoPath, and a little time and they will create views, dashboards, and KPI’s that will blow your mind away and give your execs the “wow” they are looking for. Not only can they deliver that wow factor, but they can mashup, merge, and really help make your SharePoint application usable and deliver an overall better user experience. Before you hand off a project to your SharePoint Custom Code developer, let one of these rockstars look at it and show you what they can do (in probably less time). I would say the second most important role you can fill in your organization is one of these guys. Rockstars in this category are Christina Wheeler, Laura Rogers, Jennifer Mason, and Mark Miller SharePoint Developer – Custom Code If you want to really integrate SharePoint into your legacy systems, or really twist it and make it bend to your will, you are going to have to open up Visual Studio and write some custom code.  Remember, SharePoint is essentially just a big, huge, ginormous .NET application, so you CAN write code to make it do ANYTHING, but do you really want to spend the time and effort to do so? At some point with every other form of SharePoint development you are going to run into SOME limitation (SPD Workflows is the big one that comes to mind). If you truly want to knock down all the walls then custom development is the way to go. PLEASE keep in mind when you are looking for a custom code developer that a .NET developer does NOT equal a SharePoint developer. Just SOME of the things these guys write are: Custom Workflows Custom Web Parts Web Service functionality Import data from legacy systems Export data to legacy systems Custom Actions Event Receivers Service Applications (2010) These guys are also the ones generally responsible for packaging everything up into solution packages (you are doing that, right?). Rockstars in this category are Phil Wicklund, Christina Wheeler, Geoff Varosky, and Brian Jackett. SharePoint Branding “But it LOOKS like SharePoint!” Somebody call the WAAAAAAAAAAAAHMbulance…   Themes, Master Pages, Page Layouts, Zones, and over 2000 styles in CSS.. these guys not only have to be comfortable with all of SharePoint’s quirks and pain points when branding, but they have to know it TWICE for publishing and non-publishing sites.  
    Not only that, but these guys really need to have an eye for graphic design and be able to translate the ramblings of the business into something visually stunning. They also have to be comfortable with XSLT and XML, and be able to hand off what they do to your custom developers for them to package as solutions (which you are doing, right?). These rockstars include Heather Waterman, Cathy Dew, and Marcy Kellar.

    SharePoint Architect

    SharePoint Architects are generally SharePoint Admins or Developers who have moved into more of a BA role? Is that fair to say? These guys really have a grasp and understanding of what SharePoint IS and what it can do. They help you structure your farms to meet your needs and help you design your applications the correct way. It’s always a good idea to bring in a rockstar SharePoint Architect to do a sanity check and make sure you aren’t doing anything stupid. Most organizations probably do not have a rockstar architect on staff. These guys are generally brought in at the deployment of a farm, the upgrade of a farm, or for large development projects. I personally also find architects very useful for sitting down with the business to translate their needs into what SharePoint can do. A good architect will be able to pick out what can be done out-of-the-box and what has to be custom built, and hand those requirements to the development staff. Architects can generally fill in as an admin or a developer when needed. Some rockstar architects are Rick Taylor, Dan Usher, Bill English, Spence Harbar, Neil Hodgkins, Eric Harlan, and Bjørn Furuknap.

    Other Roles / Specialties

    On top of all these other roles you also get people who specialize in things like Reporting, BDC (BCS in 2010), Search, Performance, Security, Project Management, etc... etc... etc... Again, most organizations will not have one of these gurus on staff; they’ll just pay out the nose for them when they need them. :)

    SharePoint End User

    Everyone else in your organization that touches SharePoint falls into this category. What they actually DO in SharePoint is determined by your governance and what permissions you give these guys. Hopefully you have these guys on a fairly short leash and are NOT giving them access to tools like SharePoint Designer. Sadly, end users are the ones who truly make your deployment a success by using it, but they are also your biggest enemy in breaking it. :) We love you guys… really!!!

    Okay, all that’s fine and dandy, but what should MY SharePoint team look like?

    It depends! Okay… Are you just doing out-of-the-box team sites with no custom development? Then you are probably fine with a great admin team and a great no-code solution development team. How many people do you need? Depends on how busy you can keep them. Sorry, I can’t answer the question about numbers without knowing your specific needs. I can just tell you who you MIGHT need and what they will do for you. I’ll leave you with what my ideal SharePoint team would look like for a particular scenario:

    Farm / Organization Structure

    Dev, QA, and 2 Production Farms
    5,000 – 10,000 users
    Custom development and integration with legacy systems
    Team Sites, My Sites, intranet, document libraries, and overall company collaboration

    Team

    Rockstar SharePoint Administrator
    2-3 junior SharePoint Administrators
    SharePoint Architect / Lead Developer
    2 Power User / No-Code Solution Developers
    2-3 Custom Code developers
    Branding expert

    With a team of that size and skill set, they should be able to keep a substantial SharePoint deployment running smoothly and meet your business needs. This does NOT mean that you would not need to bring in contract help from time to time when you need an uber specialist in one area. Also, this team assumes there will be ongoing development for the life of your SharePoint farm. If you are just going to be doing sporadic custom development, it might make sense to partner with an awesome firm that specializes in that sort of work (I can give you the names of a couple if you are interested). Again, though, the size of your team depends on the number of requests you are receiving and how much active deployment you are doing. So, don’t bring in a team that looks like this and then yell at me because they are sitting around with nothing to do or are so overwhelmed that nothing is getting done. I do URGE you to take the proper time to assess your needs and determine what team is BEST for your organization. Also, PLEASE PLEASE PLEASE do not skimp on the talent. When it comes to SharePoint you really do get what you pay for in employees, contractors, and software. SharePoint can become absolutely critical to your business; skimp on hiring a developer and he creates a web part that brings down the farm because he doesn’t know what he’s doing, or hire an admin who thinks it’s fine to stick everything in the same content database and then can’t figure out why people are complaining. SharePoint can be an enormous blessing to an organization or its biggest curse. Spend the time and money to do it right, or be prepared to spend even more time and money later to fix it.

    Read the article

  • WCF fails to deserialize correct(?) response message security headers (Security header is empty)

    - by Soeteman
    I'm communicating with an OC4J web service, using a WCF client. The client is configured as follows:

    <basicHttpBinding>
      <binding name="MyBinding">
        <security mode="TransportWithMessageCredential">
          <transport clientCredentialType="None" proxyCredentialType="None" realm=""/>
          <message clientCredentialType="UserName" algorithmSuite="Default"/>
        </security>
      </binding>

    My client code looks as follows:

    ServicePointManager.CertificatePolicy = new AcceptAllCertificatePolicy();
    string username = ConfigurationManager.AppSettings["user"];
    string password = ConfigurationManager.AppSettings["pass"];
    // create the client instance
    WebserviceClient client = new WebserviceClient();
    client.Endpoint.Binding = new BasicHttpBinding("MyBinding");
    // add the credentials
    client.ClientCredentials.UserName.UserName = username;
    client.ClientCredentials.UserName.Password = password;
    // execute the request
    var response = client.Ping();

    I've altered the CertificatePolicy to accept all certificates, because I need to insert Charles (an SSL proxy) between client and server to intercept the actual XML that is sent across the wire. My request looks as follows:

    <s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
      <s:Header>
        <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
          <u:Timestamp u:Id="_0">
            <u:Created>2010-04-01T09:47:01.161Z</u:Created>
            <u:Expires>2010-04-01T09:52:01.161Z</u:Expires>
          </u:Timestamp>
          <o:UsernameToken u:Id="uuid-9b39760f-d504-4e53-908d-6125a1827aea-21">
            <o:Username>user</o:Username>
            <o:Password o:Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">pass</o:Password>
          </o:UsernameToken>
        </o:Security>
      </s:Header>
      <s:Body>
        <getPrdStatus xmlns="http://mynamespace.org/wsdl">
          <request xmlns="" xmlns:a="http://mynamespace.org/wsdl" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
            <a:IsgrStsRequestTypeUser>
              <a:prdCode>LEPTO</a:prdCode>
              <a:sequenceNumber i:nil="true" />
              <a:productionType i:nil="true" />
              <a:statusDate>2010-04-01T11:47:01.1617641+02:00</a:statusDate>
              <a:ubn>123456</a:ubn>
              <a:animalSpeciesCode>RU</a:animalSpeciesCode>
            </a:IsgrStsRequestTypeUser>
          </request>
        </getPrdStatus>
      </s:Body>
    </s:Envelope>

    In return, I receive the following response:

    <env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns0="http://mynamespace.org/wsdl">
      <env:Header>
        <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:env="http://schemas.xmlsoap.org/soap/envelope/" env:mustUnderstand="1" />
      </env:Header>
      <env:Body>
        <ns0:getPrdStatusResponse>
          <result>
            <ns0:IsgrStsResponseTypeUser>
              <ns0:prdCode>LEPTO</ns0:prdCode>
              <ns0:color>green</ns0:color>
              <ns0:stsCode>LEP1</ns0:stsCode>
              <ns0:sequenceNumber xsi:nil="1" />
              <ns0:productionType xsi:nil="1" />
              <ns0:IAndRCode>00</ns0:IAndRCode>
              <ns0:statusDate>2010-04-01T00:00:00.000+02:00</ns0:statusDate>
              <ns0:description>Gecertificeerd vrij</ns0:description>
              <ns0:ubn>123456</ns0:ubn>
              <ns0:animalSpeciesCode>RU</ns0:animalSpeciesCode>
              <ns0:name>gecertificeerd vrij</ns0:name>
              <ns0:ranking>17</ns0:ranking>
            </ns0:IsgrStsResponseTypeUser>
          </result>
        </ns0:getPrdStatusResponse>
      </env:Body>
    </env:Envelope>
    Why can't WCF deserialize this response header? I'm getting a "Security header is empty" exception:

    Server stack trace:
    at System.ServiceModel.Security.ReceiveSecurityHeader.Process(TimeSpan timeout)
    at System.ServiceModel.Security.TransportSecurityProtocol.VerifyIncomingMessageCore(Message& message, TimeSpan timeout)
    at System.ServiceModel.Security.TransportSecurityProtocol.VerifyIncomingMessage(Message& message, TimeSpan timeout)
    at System.ServiceModel.Security.SecurityProtocol.VerifyIncomingMessage(Message& message, TimeSpan timeout, SecurityProtocolCorrelationState[] correlationStates)
    at System.ServiceModel.Channels.SecurityChannelFactory`1.SecurityRequestChannel.ProcessReply(Message reply, SecurityProtocolCorrelationState correlationState, TimeSpan timeout)
    at System.ServiceModel.Channels.SecurityChannelFactory`1.SecurityRequestChannel.Request(Message message, TimeSpan timeout)
    at System.ServiceModel.Dispatcher.RequestChannelBinder.Request(Message message, TimeSpan timeout)
    at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
    at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs)
    at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
    at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)

    Who knows what is going on here? I've already tried Rick Strahl's suggestion and removed the timestamp from the request header. Any help greatly appreciated!
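    One commonly suggested workaround for the "Security header is empty" error (shown here only as a sketch, assuming .NET 3.5 SP1 or later and the same generated WebserviceClient proxy as above) is to rebuild the binding in code as a CustomBinding and set EnableUnsecuredResponse, which tells WCF to accept a reply whose wsse:Security header is empty:

    // Sketch: code equivalent of the basicHttpBinding above (TransportWithMessageCredential
    // + UserName), with EnableUnsecuredResponse enabled so the empty Security header in the
    // response is tolerated. Requires System.ServiceModel.Channels and System.Text.
    var security = SecurityBindingElement.CreateUserNameOverTransportBindingElement();
    security.EnableUnsecuredResponse = true;   // available from .NET 3.5 SP1
    security.IncludeTimestamp = false;         // optional, mirrors removing the Timestamp

    var binding = new CustomBinding(
        security,
        new TextMessageEncodingBindingElement(MessageVersion.Soap11, Encoding.UTF8),
        new HttpsTransportBindingElement());

    client.Endpoint.Binding = binding;

    The username/password credentials are still supplied through client.ClientCredentials exactly as in the snippet above.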

    Read the article

  • Tips on Migrating from AquaLogic .NET Accelerator to WebCenter WSRP Producer for .NET

    - by user647124
    This year I embarked on a journey to migrate a group of ASP.NET web applications that had been developed to integrate with WebLogic Portal 9.2 via the AquaLogic® Interaction .NET Application Accelerator 1.0, so that they instead use the Oracle WebCenter WSRP Producer for .NET and integrate with WebLogic Portal 10.3.4. It has been a very winding path, and this blog entry is intended to share both the lessons learned and the approaches that led to those learnings. Like most journeys of discovery, it was not a direct path, and there are notes to let you know when it is practical to skip a section if you are in a hurry to get from here to there.

    For the Curious

    From the perspective of necessity, this section would be better at the end. If it were there, though, it would probably be read by far fewer people, including those that are actually interested in these types of sections. Those in a hurry may skip past and be none the worse for it in dealing with the hands-on bits of performing a migration from .NET Accelerator to WSRP Producer. For others who want to talk about why they did what they did after they did it, or just want to know for themselves, enjoy.

    A Brief (and edited) History of the WSRP for .NET Technologies (as Relevant to this Post)

    Note: This section is for those who are curious about why the migration path is not as simple as many other Oracle technologies. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it.

    The currently deployed architecture that was to be migrated and upgraded achieved its initial integration between .NET and J2EE over the WSRP protocol through the AquaLogic Interaction .NET Application Accelerator. The .NET Accelerator allowed applications written in ASP.NET and deployed on Microsoft Internet Information Server (IIS) to interact with a WebLogic Portal application deployed on a WebLogic (J2EE application) Server (both version 9.2, the state of the art at the time of its creation). At the time this architectural decision was made, both the AquaLogic and WebLogic brands were owned by BEA Systems. The AquaLogic brand included products acquired by BEA through the acquisition of Plumtree, whose flagship product was a portal platform available in both J2EE and .NET versions. As part of this dual technology support, an adapter was created to facilitate the use of WSRP as a communication protocol where customers wished to integrate components from both versions of the Plumtree portal. The adapter evolved over several product generations to include a broad array of both standard and proprietary WSRP integration capabilities.

    Later, BEA Systems was acquired by Oracle. Over the course of several years Oracle has acquired a large number of portal applications and has taken the strategic direction of migrating users of these myriad (and formerly competitive) products to the Oracle WebCenter technology stack. As part of Oracle's strategic technology roadmap, older portal products are being scheduled for end of life, including the portal products that were part of the BEA acquisition. The .NET Accelerator has been modified over a very long period of time with features driven by users of that product and developed under three different vendors (each a direct competitor in the same solution space prior to merger).
The Oracle WebCenter WSRP Producer for .NET was introduced much more recently with the key objective to specifically address the needs of the WebCenter customers developing solutions accessible through both J2EE and .NET platforms utilizing the WSRP specifications. The Oracle Product Development Team also provides these insights on the drivers for developing the WSRP Producer: ***************************************** Support for ASP.NET AJAX. Controls using the ASP.NET AJAX script manager do not function properly in the Application Accelerator for .NET. Support 2 way SSL in WLP. This was not possible with the proxy/bridge set up in the existing Application Accelerator for .NET. Allow developers to code portlets (Web Parts) using the .NET framework rather than a proprietary framework. Developers had to use the Application Accelerator for .NET plug-ins to Visual Studio to manage preferences and profile data. This is now replaced with the .NET Framework Personalization (for preferences) and Profile providers. The WSRP Producer for .NET was created as a new way of developing .NET portlets. It was never designed to be an upgrade path for the Application Accelerator for .NET. .NET developers would create new .NET portlets with the WSRP Producer for .NET and leave any existing .NET portlets running in the Application Accelerator for .NET. ***************************************** The advantage to creating a new solution for WSRP is a product that is far easier for Oracle to maintain and support which in turn improves quality, reliability and maintainability for their customers. No changes to J2EE applications consuming the WSRP portlets previously rendered by the.NET Accelerator is required to migrate from the Aqualogic WSRP solution. For some customers using the .NET Accelerator the challenge is adapting their current .NET applications to work with the WSRP Producer (or any other WSRP adapter as they are proprietary by nature). Part of this adaptation is the need to deploy the .NET applications as a child to the WSRP producer web application as root. Differences between .NET Accelerator and WSRP Producer Note: This section is for those who are curious about why the migration is not as pluggable as something such as changing security providers in WebLogic Server. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it. The basic terminology used to describe the participating applications in a WSRP environment are the same when applied to either the .NET Accelerator or the WSRP Producer: Producer and Consumer. In both cases the .NET application serves as what is referred to as a WSRP environment as the Producer. The difference lies in how the two adapters create the WSRP translation of the .NET application. The .NET Accelerator, as the name implies, is meant to serve as a quick way of adding WSRP capability to a .NET application. As such, at a high level, the .NET Accelerator behaves as a proxy for requests between the .NET application and the WSRP Consumer. A WSRP request is sent from the consumer to the .NET Accelerator, the.NET Accelerator transforms this request into an ASP.NET request, receives the response, then transforms the response into a WSRP response. The .NET Accelerator is deployed as a stand-alone application on IIS. The WSRP Producer is deployed as a parent application on IIS and all ASP.NET modules that will be made available over WSRP are deployed as children of the WSRP Producer application. 
    In this manner, the WSRP Producer acts more as a request filter than a proxy in the WSRP transactions between Producer and Consumer.

    Highly Recommended: Enabling Logging

    Note: You can skip this section now, but you will most likely want to come back to it later, so why not just read it now? Logging is very helpful in tracking down the causes of any anomalies during testing of migrated portlets.

    To enable WSRP Producer logging, update the Application_Start method in the Global.asax.cs of your .NET application by adding log4net.Config.XmlConfigurator.Configure();

    IIS logs will usually (in a standard configuration) be in a sub folder under C:\WINDOWS\system32\LogFiles\W3SVC. WSRP Producer logs will be found at C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\Logs\WSRPProducer.log. InputTrace.webinfo and OutputTrace.webinfo are located under C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault and can be useful in debugging issues related to markup transformations.

    Things You Must Do

    Merge Web.Config

    Note: If you have been skipping all the sections that you can, now is the time to stop and pay attention. :) Because the existing .NET application will become a sub-application of the WSRP Producer, you will want to merge the required settings from the existing Web.Config into the one in the WSRP Producer.

    Use the WSRP Producer Master Page

    The Master Page installed with the WSRP Producer provides common, hidden form fields and JavaScript to facilitate portlet instance management and display configuration when the child page is being rendered over WSRP. You add the Master Page by including it in the <%@ Page %> declaration with MasterPageFile="~/portlets/Resources/MasterPages/WSRP.Master".

    You then replace:

    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" >
    <HTML>
    <HEAD>

    with:

    <asp:Content ID="ContentHead1" ContentPlaceHolderID="wsrphead" Runat="Server">

    and:

    </HEAD>
    <body>
    <form id="theForm" method="post" runat="server">

    with:

    </asp:Content>
    <asp:Content ID="ContentBody1" ContentPlaceHolderID="Main" Runat="Server">

    and finally:

    </form>
    </body>
    </HTML>

    with:

    </asp:Content>

    In the event you already use Master Pages, adapt your existing Master Pages to be sub-masters. See Nested ASP.NET Master Pages for a detailed reference on how to do this.

    It Happened to Me, It Might Happen to You… Or Not

    Watch for Use of Session or Request in OnInit

    If the .NET application being modified has pages developed on the assumption that the user has been authenticated in an earlier page request, there may be direct or indirect references in the OnInit method to request or session objects that have not been created yet. This will vary from application to application, so the recommended approach is to test first. If there is an issue with a page running as a WSRP portlet, check for references in the OnInit method (including references by methods called within OnInit) to session or request objects. If there are, the simplest solution is to move that work into a new method and call that method once the necessary object(s) are fully available. I find doing this at the start of the Page_Load method to be the simplest solution.
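    A minimal sketch of that pattern (the class, method, and session key names here are illustrative, not taken from the original applications):

    // Sketch only: defer Session/Request-dependent work from OnInit to Page_Load.
    using System;
    using System.Web.UI;

    public partial class ReportPage : Page
    {
        protected override void OnInit(EventArgs e)
        {
            base.OnInit(e);
            // Do not touch Session or Request here; when rendered over WSRP they
            // may not have been created yet at this point in the page life cycle.
        }

        protected void Page_Load(object sender, EventArgs e)
        {
            // By Page_Load the request and session objects are fully available.
            LoadUserContext();
        }

        private void LoadUserContext()
        {
            var roles = Session["UserRoles"];   // hypothetical session key
            // ... use the session data to configure the page ...
        }
    }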
    Case Sensitivity

    .NET languages are not case sensitive, but Java is. This means it is possible to have many variations of SRC= and src=, or .JPG and .jpg. The preferred solution is to make these markup instances all lower case in your .NET application. This will allow the default Rewriter rules in wsrp-producer.xml to work as is. If this is not practical, then make duplicates of any rules where an issue is occurring due to upper or mixed case usage in the .NET application markup, and match the case in use in the duplicate rule. For example:

    <RewriterRule>
      <LookFor>(href=\"([^\"]+)</LookFor>
      <ChangeToAbsolute>true</ChangeToAbsolute>
      <ApplyTo>.axd,.css</ApplyTo>
      <MakeResource>true</MakeResource>
    </RewriterRule>

    may need to be duplicated as:

    <RewriterRule>
      <LookFor>(HREF=\"([^\"]+)</LookFor>
      <ChangeToAbsolute>true</ChangeToAbsolute>
      <ApplyTo>.axd,.css</ApplyTo>
      <MakeResource>true</MakeResource>
    </RewriterRule>

    While it is possible to write a regular expression that will handle mixed case usage, it would be long and strenuous to test and maintain, so the recommendation is to use duplicate rules.

    Is it Still Relative?

    Some .NET applications base relative paths on a fixed root location. With the introduction of the WSRP Producer, the root has moved up one level. References to ~/ will need to be updated to ~/portlets, and many ../ paths will need another ../ in front.

    I Can See You But I Can't Find You

    This issue was first discovered while debugging modules with code that referenced the form on a page from the code-behind by name and/or id. The initial error presented itself as a run-time error that was difficult to interpret over WSRP, but seemed clear when run as straight ASP.NET: it indicated that the object with the form name did not exist. Since the form name was no longer valid after implementing the WSRP Master Page, the likely fix seemed to be simply updating the references in the code. However, as the WSRP Master Page is external to the code, a compile-time error resulted:

    Error 155: The name 'form1' does not exist in the current context. C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\portlets\legacywebsite\module\Screens\Reporting.aspx.cs, line 51, column 52, project legacywebsite.module

    Much hair-pulling research later, it was discovered that the use of the FindControl method was causing the issue. FindControl doesn't work quite as expected once a Master Page has been introduced, as the controls become embedded in other controls, requiring a recursion that FindControl does not perform. In code where the page form is referenced by name, there are two steps to the solution. First, the form needs to be referenced generically in code with Page.Form. For example, this:

    ToggleControl ctrl = new ToggleControl(frmManualEntry, FunctionLibrary.ParseArrayLst(userObj.Roles));

    becomes this:

    ToggleControl ctrl = new ToggleControl(Page.Form, FunctionLibrary.ParseArrayLst(userObj.Roles));

    Second, the form id is generally referenced in most ASP.NET applications as a path to a control on the form. To reach the control once a Master Page has been added requires an additional method to recurse through the control collections within the form and find the control ID. The following method (found at Rick Strahl's Web Log) corrects this very nicely:

    public static Control FindControlRecursive(Control Root, string Id)
    {
        if (Root.ID == Id)
            return Root;
        foreach (Control Ctl in Root.Controls)
        {
            Control FoundCtl = FindControlRecursive(Ctl, Id);
            if (FoundCtl != null)
                return FoundCtl;
        }
        return null;
    }

    Where the form name is not referenced, simply using the FindControlRecursive method in place of FindControl will be all that is necessary. Following the second part of the example referenced earlier, the method called with Page.Form changes its value extraction code block from this:

    Label lblErrMsg = (Label)frmRef.FindControl("lblBRMsg");

    to this:

    Label lblErrMsg = (Label)FunctionLibrary.FindControlRecursive(frmRef, "lblBRMsg");
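    Putting both steps together, here is a minimal, self-contained sketch of a code-behind class updated for the WSRP Master Page (the class name and the control id "lblBRMsg" are reused from the snippets above purely as examples):

    // Sketch: code-behind that locates a control through the nested control tree
    // created by the WSRP Master Page, instead of relying on the old form id.
    using System;
    using System.Web.UI;
    using System.Web.UI.WebControls;

    public partial class Reporting : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // FindControl("lblBRMsg") fails once the Master Page wraps the form,
            // so walk the control tree recursively starting from Page.Form.
            Label lblErrMsg = (Label)FindControlRecursive(Page.Form, "lblBRMsg");
            if (lblErrMsg != null)
            {
                lblErrMsg.Text = "Validation message goes here.";
            }
        }

        // The recursive helper shown above (via Rick Strahl's Web Log).
        public static Control FindControlRecursive(Control root, string id)
        {
            if (root.ID == id)
                return root;
            foreach (Control ctl in root.Controls)
            {
                Control found = FindControlRecursive(ctl, id);
                if (found != null)
                    return found;
            }
            return null;
        }
    }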
    The Master That Won't Step Aside

    In most migrations it is preferable to make as few changes as possible. In one case I ran across an existing Master Page that would not function as a sub-Master Page. While it would probably have been educational to trace down why, the expedient approach of updating it to take the place of the WSRP Master Page is the route I took. The changes are highlighted below:

    …
    <asp:ContentPlaceHolder ID="wsrphead" runat="server"></asp:ContentPlaceHolder>
    </head>
    <body leftMargin="0" topMargin="0">
    <form id="TheForm" runat="server">
    <input type="hidden" name="key" id="key" value="" />
    <input type="hidden" name="formactionurl" id="formactionurl" value="" />
    <input type="hidden" name="handle" id="handle" value="" />
    <asp:ScriptManager ID="ScriptManager1" runat="server" EnablePartialRendering="true" >
    </asp:ScriptManager>

    This approach did not work for all existing Master Pages, but fortunately all of the other existing Master Pages I have run across worked fine as sub-Masters to the WSRP Master Page.

    Moving On

    In enterprise portals, even after you get everything working, the work is not finished. Next you need to get it to where everyone can work with it.

    Migration Planning

    Provided that the server where IIS is running is adequately sized, it is possible to run both the .NET Accelerator and the WSRP Producer on the same server during the upgrade process. The upgrade can be performed incrementally, i.e., one portlet at a time, if server administration processes support it. Those processes would include the ability to manage a second producer in the consuming portal and to change over individual portlet instances from one provider to the other. If processes or requirements demand that all portlets be cut over at the same time, it needs to be determined whether this cutover should include a new producer, updating all of the portlets in the consumer, or whether the WSRP Producer portlet configuration must maintain the naming conventions used by the .NET Accelerator and simply change the WSRP endpoint configured in the consumer. In some enterprises it may even be necessary to maintain the same WSDL endpoint, at which point the IIS configuration will be where the updates occur. The downside to such a requirement is that it makes rolling back very difficult, should the need arise.

    Location, Location, Location

    Not everyone wants the web application to have the descriptively obvious wsrpdefault location, or needs to create a second WSRP site on the same server. The instructions below are from the product team and, while targeted towards making a second site, will also work for creating a site with a different name and then removing the old site. You can also change just the name in IIS.

    Manually Creating a WSRP Producer Site

    Instructions (NOTE: all executables used are the same ones used by the installer, and "wsrpdev" will be the name of the new instance):

    1. Copy C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault to C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev.
    2. Bring up a command window as an administrator.
    3. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\IISAppAccelSiteCreator.exe install WSRPProducers wsrpdev "C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev" 8678 2.0.50727
    4. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev "NETWORK SERVICE" 3 1
    5. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev EVERYONE 1 1
    6. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\1.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev.
    7. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\2.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev.

    Tests:

    1. Bring up a browser on the host itself and go to http://localhost:8678/wsrpdev/wsdl/1.0/WSRPService.wsdl and make sure that the URLs in the XML returned include the wsrpdev changes you made in step 6.
    2. Bring up a browser on the host itself and see if the default sample comes up: http://localhost:8678/wsrpdev/portlets/ASPNET_AJAX_sample/default.aspx
    3. Register the producer in WLP and test the portlet.

    Changing the Port used by WSRP Producer

    The pre-configured port for the WSRP Producer is 8678. You can change this port by updating both the IIS configuration and C:\Oracle\Middleware\WSRPProducerForDotNet\[WSRP_APP_NAME]\wsdl\1.0\WSRPService.wsdl.

    Do You Need to Migrate?

    Oracle Premier Support ended in November of 2010 for AquaLogic Interaction .NET Application Accelerator 1.x, and Extended Support ends in November 2012 (see http://www.oracle.com/us/support/lifetime-support/lifetime-support-software-342730.html for other related dates). This means that integration with products released after November of 2010 is not supported. If having such support is the policy within your enterprise, you do indeed need to migrate. If changes in your enterprise cause your current solution with the .NET Accelerator to no longer function properly, you may need to migrate. Migration is a choice, and if the goals of your enterprise are to take full advantage of newer technologies, then migration is certainly one activity you should be planning for.

    Read the article
