Search Results

Search found 5633 results on 226 pages for 'follow'.


  • Parallelism in .NET – Part 6, Declarative Data Parallelism

    - by Reed
    When working with a problem that can be decomposed by data, we have a collection, and some operation being performed upon the collection. I’ve demonstrated how this can be parallelized using the Task Parallel Library and imperative data parallelism via the Parallel class. While this provides a huge step forward in terms of power and capabilities, in many cases special care must still be given to relatively common scenarios.

    C# 3.0 and Visual Basic 9.0 introduced a new, declarative programming model to .NET via the LINQ Project. When working with collections, we can now write software that describes what we want to occur without having to explicitly state how the program should accomplish the task. By taking advantage of LINQ, many operations become much shorter, more elegant, and easier to understand and maintain. Version 4.0 of the .NET Framework extends this concept into the parallel computation space by introducing Parallel LINQ.

    Before we delve into PLINQ, let’s begin with a short discussion of LINQ. LINQ, the extensions to the .NET Framework which implement language-integrated query, set, and transform operations, is implemented in many flavors. For our purposes, we are interested in LINQ to Objects. When parallelizing a routine, we are typically dealing with in-memory data storage. More data-access-oriented LINQ variants, such as LINQ to SQL and LINQ to Entities in the Entity Framework, fall outside of our concern, since the parallelism there is the concern of the database engine processing the query itself.

    LINQ (LINQ to Objects in particular) works by implementing a series of extension methods, most of which work on IEnumerable<T>. The language enhancements use these extension methods to create a very concise, readable alternative to the traditional foreach statement. For example, let’s revisit the minimum aggregation routine we wrote in Part 4:

        double min = double.MaxValue;
        foreach(var item in collection)
        {
            double value = item.PerformComputation();
            min = System.Math.Min(min, value);
        }

    Here, we’re doing a very simple computation, but writing it in an imperative style. This can be loosely translated to English as:

        Create a very large number, and save it in min
        Loop through each item in the collection
        For every item:
            Perform some computation, and save the result
            If the computation is less than min, set min to the computation

    Although this is fairly easy to follow, it’s quite a few lines of code, and it requires us to read through the code, step by step, line by line, in order to understand the intention of the developer.

    We can rework this same statement using LINQ:

        double min = collection.Min(item => item.PerformComputation());

    Here, we’re after the same information. However, this is written using a declarative programming style.
    When we see this code, we’d naturally translate it to English as:

        Save the Min value of collection, determined via calling item.PerformComputation()

    That’s it – instead of multiple logical steps, we have one single, declarative request. This makes the developer’s intentions very clear and very easy to follow. The system is free to implement this using whatever method is required.

    Parallel LINQ (PLINQ) extends LINQ to Objects to support parallel operations. This is a perfect fit in many cases when you have a problem that can be decomposed by data. To show this, let’s again refer to our minimum aggregation routine from Part 4, but this time, let’s review our final, parallelized version:

        // Safe, and fast!
        double min = double.MaxValue;
        // Make a "lock" object
        object syncObject = new object();
        Parallel.ForEach(
            collection,
            // First, we provide a local state initialization delegate.
            () => double.MaxValue,
            // Next, we supply the body, which takes the original item, loop state,
            // and local state, and returns a new local state
            (item, loopState, localState) =>
            {
                double value = item.PerformComputation();
                return System.Math.Min(localState, value);
            },
            // Finally, we provide an Action<TLocal>, to "merge" results together
            localState =>
            {
                // This requires locking, but it's only once per used thread
                lock(syncObject)
                    min = System.Math.Min(min, localState);
            }
        );

    Here, we’re doing the same computation as above, but fully parallelized. Describing this in English becomes quite a feat:

        Create a very large number, and save it in min
        Create a temporary object we can use for locking
        Call Parallel.ForEach, specifying three delegates
        For the first delegate:
            Initialize a local variable to hold the local state to a very large number
        For the second delegate:
            For each item in the collection, perform some computation, save the result
            If the result is less than our local state, save the result in local state
        For the final delegate:
            Take a lock on our temporary object to protect our min variable
            Save the min of our min and local state variables

    Although this solves our problem, and does it in a very efficient way, we’ve created a set of code that is quite a bit more difficult to understand and maintain.

    PLINQ provides us with a very nice alternative. In order to use PLINQ, we need to learn one new extension method that works on IEnumerable<T> – ParallelEnumerable.AsParallel(). That’s all we need to learn in order to use PLINQ: one single method. We can write our minimum aggregation in PLINQ very simply:

        double min = collection.AsParallel().Min(item => item.PerformComputation());

    By simply adding “.AsParallel()” to our LINQ to Objects query, we converted this to using PLINQ and running this computation in parallel! This can be loosely translated into English easily, as well:

        Process the collection in parallel
        Get the Minimum value, determined by calling PerformComputation on each item

    Here, our intention is very clear and easy to understand. We just want to perform the same operation we did in serial, but run it “as parallel”. PLINQ completely extends LINQ to Objects: the entire functionality of LINQ to Objects is available. By simply adding a call to AsParallel(), we can specify that a collection should be processed in parallel. This is simple, safe, and incredibly useful.
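    To make the comparison concrete, here is a small, self-contained C# sketch of the same aggregation in both styles. The WorkItem class and its PerformComputation method are stand-ins invented for this illustration (the article never shows them); the PLINQ calls themselves – AsParallel, WithDegreeOfParallelism, and Min – are standard .NET 4.0 APIs.

        using System;
        using System.Linq;

        class PlinqMinimumSketch
        {
            // Hypothetical work item; PerformComputation simulates a CPU-bound calculation.
            class WorkItem
            {
                private readonly int seed;
                public WorkItem(int seed) { this.seed = seed; }

                public double PerformComputation()
                {
                    double result = 0;
                    for (int i = 1; i <= 10000; i++)
                        result += Math.Sqrt(i + seed);
                    return result % 100;
                }
            }

            static void Main()
            {
                var collection = Enumerable.Range(0, 1000)
                                           .Select(i => new WorkItem(i))
                                           .ToList();

                // Serial LINQ to Objects version.
                double serialMin = collection.Min(item => item.PerformComputation());

                // PLINQ version: AsParallel() opts the query into parallel execution;
                // WithDegreeOfParallelism is optional and caps the number of worker threads.
                double parallelMin = collection
                    .AsParallel()
                    .WithDegreeOfParallelism(Environment.ProcessorCount)
                    .Min(item => item.PerformComputation());

                Console.WriteLine("Serial: {0}, Parallel: {1}", serialMin, parallelMin);
            }
        }

    Both queries return the same value; the parallel version simply spreads the PerformComputation calls across the available cores.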

    Read the article

  • Twitter traffic might not be what it seems

    - by Piet
    Are you using bit.ly stats to measure interest in the links you post on Twitter? I’ve been hearing for a while about people claiming to get the majority of their traffic originating from Twitter these days.

    Now, I’ve been playing with the twitter ruby gem recently, doing various experiments which I’ll not go into in detail here because they could be regarded as spamming… if I’d conduct them on a large scale, that is. It’s scary to see people actually engaging with @replies crafted with some regular expressions and eliza-like trickery on status updates found using the twitter api. I’m wondering how Twitter is going to contain the coming spam-flood.

    When posting links I used bit.ly as URL shortener, since this one seems to be the de-facto standard on Twitter. A nice thing about bit.ly is that it shows some basic stats about the redirects it performs for your shortened links. To my surprise, most links posted almost immediately resulted in several visitors. Now, seeing that I was posting the links together with some information concerning what the link is about, I concluded that the people who were actually clicking the links should be very targeted visitors. This felt a bit like free adwords, and I suddenly started to understand why everyone was raving about getting traffic from Twitter.

    How wrong I was! (and I think several thousand online marketers with me)

    On the destination site I used a traffic logging solution that works by including a little javascript snippet in your pages. It seemed that somehow all visitors disappeared after the bit.ly redirect and before getting to the site, because I was hardly seeing any visitors there. So I started investigating what was happening: by looking at the logfiles of the destination site, and by making my own ’shortened’ URLs by doing redirects using a very short domain name I own. This way, I could check the apache access_log before the redirects. Most user agents turned out to be bots without a doubt. Here’s an excerpt of user-agents awk’ed from apache’s access_log for a time period of about one hour, right after posting some links:

        AideRSS 2.0 (postrank.com)
        Java/1.6.0_13
        Java/1.6.0_14
        libwww-perl/5.816
        MLBot (www.metadatalabs.com/mlbot)
        Mozilla/4.0 (compatible;MSIE 5.01; Windows -NT 5.0 - real-url.org)
        Mozilla/5.0 (compatible; Twitturls; +http://twitturls.com)
        Mozilla/5.0 (compatible; Viralheat Bot/1.0; +http://www.viralheat.com/)
        Mozilla/5.0 (Danger hiptop 4.6; U; rv:1.7.12) Gecko/20050920
        Mozilla/5.0 (X11; U; Linux i686; en-us; rv:1.9.0.2) Gecko/2008092313 Ubuntu/9.04 (jaunty) Firefox/3.5
        OpenCalaisSemanticProxy
        PycURL/7.18.2
        PycURL/7.19.3
        Python-urllib/1.17
        Twingly Recon
        twitmatic
        Twitturly / v0.6
        Wget/1.10.2 (Red Hat modified)
        Wget/1.11.1 (Red Hat modified)

    Of the few user-agents that seem ‘real’ at first, half are originating from an IP address used by Amazon EC2. And I doubt people are setting up proxies on there. Oh yeah, Googlebot (the real deal, from a legit Google-owned address) is sucking up posted links like fresh oysters. I guess Google is trying to make sure in advance that it never gets beaten by Twitter in the ‘realtime search’ department. Actually, I think it’d be almost stupid NOT to post any new pages/posts/websites on Twitter; it must be one of the fastest ways to get a Googlebot visit.
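    The author pulled these user agents out with awk; purely as an illustration of the same kind of tally, here is a minimal C# sketch that counts user agents in an Apache access log. It assumes the standard “combined” log format, where the user agent is the last quoted field on each line, and the log path is hypothetical.

        using System;
        using System.IO;
        using System.Linq;
        using System.Text.RegularExpressions;

        class UserAgentTally
        {
            static void Main(string[] args)
            {
                // Point this at a local copy of the web server's access_log.
                string logPath = args.Length > 0 ? args[0] : "access_log";

                // Apache's "combined" format ends with two quoted fields: referer, then user agent.
                var trailingFields = new Regex("\"[^\"]*\" \"(?<agent>[^\"]*)\"\\s*$");

                var counts = File.ReadLines(logPath)
                    .Select(line => trailingFields.Match(line))
                    .Where(match => match.Success)
                    .GroupBy(match => match.Groups["agent"].Value)
                    .OrderByDescending(group => group.Count());

                foreach (var group in counts)
                    Console.WriteLine("{0,6}  {1}", group.Count(), group.Key);
            }
        }

    Run over an hour-long window like the one described above, this produces the same kind of bot-heavy breakdown shown in the list.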
    Same experiment with a real, established twitter account

    Now, because I was posting the URLs either as ’status’ messages or directed @people, on a test-account with hardly any (human) followers, I checked again using the twitter accounts from a commercial site I’m involved with. These accounts all have between 500 and 1000 targeted (I think) followers. I checked the destination access_logs and also added ‘my’ redirect after the bit.ly redirect: same results, although with a seemingly somewhat higher real-visitor/bot ratio.

    Btw: one of these accounts was ‘punished’ with a 1-week lock recently because the same (one!) status update was sent that had been sent right before using another account. They got an email explaining the lock because the account didn’t act according to their TOS. I can’t find anything in their TOS about it, can you? I don’t think Twitter is on the right track punishing a legit account, knowing the trickery I had been doing with its api went totally unpunished. I might be wrong though, I often am.

    On the other hand: this commercial site reported targeted traffic and actual signups from visitors coming from Twitter. The ones that are really real visitors are also very targeted. I’m just not sure if the amount of work involved could hold up against an adwords campaign.

    Reposting the same link over and over again helps

    One thing I noticed: it helps to keep on reposting the same links at regular intervals. I guess most people only look at their first page when checking out recent posts of the ones they’re following, or don’t look too far back when performing a search. Now, this probably isn’t according to the twitter TOS. Actually, it might be spamming, but no-one is obligated to follow anyone else of course. This way, I was getting more real visitors and fewer bots. To my surprise (when my programmer’s hat is on) there were still repeated visits from the same bots coming from the same IP addresses. Did they expect to find something else when visiting for a 2nd or 3rd time? (Actually, this gave me an idea: you can’t change a link once it’s posted, but you can change where it redirects to.) Most bots were smart enough not to follow the same link again though.

    Are you successful in getting real visitors from Twitter? Are you only relying on bit.ly to provide traffic stats?

    Read the article

  • Building Great-Looking, Usable Apps: A two-day workshop applying Oracle’s best UX practices in ADF

    - by mvaughan
    By Misha Vaughan, Oracle Applications User Experience

    I have been with Oracle for more than 12 years. It is a company that has granted me extraordinary creative freedom to help deliver compelling experiences for customers. I am beyond proud to talk about one of the experiences we just took for a test drive. Recently, we delivered a first-of-its-kind, three-team collaboration, train-the-trainer event in Reading, U.K., on building great-looking, usable apps based on Oracle Fusion Applications -- using the ADF tool kit.

    A new kind of workshop

    Kevin Li, Platform Product Director, asked the Oracle Applications User Experience VP, Jeremy Ashley, if the team had anything to help partners and customers build applications that looked like Fusion. He was receiving this request from European partners and customers. Some quick conversations ensued, and the idea for the workshop was born: We would conduct an experiment. We would work with feedback from the key Platform Technology Solutions (PTS) trainers under Andre Pavanello, Director, Platform Technology Solutions, in Europe, Middle East, and Africa. We would partner with the ADF team led by Grant Ronald, Director of Product Management, and leverage the Applications UX expertise in Ashley’s team.

    The goal: Create a pilot workshop that in two days would explain to an ADF developer how to leverage the next-generation user experience best practices developed for Fusion Apps. Why? Customers who need integrations with Oracle Fusion Applications, who are looking for custom applications that need to co-exist with Fusion, or who quite simply want a next-generation design for a custom app, need their solutions to reflect the next-generation research and design.

    Building an event for an ADF developer

    The biggest hurdle was figuring out where to start. How far into user experience country do you take an ADF developer? How far into ADF do you need to go if you are a UX professional? After some time in the UX kitchen, the workshop recipe looked like this. Mix equal parts:

        Fusion user experience design principles and functional design patterns
        The art and science behind UX
        How to wireframe designs that you can build in Fusion
        How to translate those designs into an ADF application

    Photo: Ultan O’Broin, Director of Global User Experience, explaining the trouble ticket wireframe design exercise
    Photo: Lynn Munsinger, Senior Group Product Manager, explaining the follow-on trouble ticket ADF coding exercise

    For spice, add:

        Debra Lilley, Fujitsu and ACE director, showcasing some of the latest ADF design work in the new face of Fusion Applications
        Partner show-and-tell of example apps they have built with FMW and ADF that are dynamic, beautiful, and interactive.

    Photo: Debra Lilley, Oracle ACE Director and Fujitsu Fusion Champion, on the new face of Fusion built with ADF, and Fusion extensibility with composers as a window into “the possible”

    The taste test

    This first go-round of the workshop was aimed squarely at ADF developers and partners. We were privileged to have participation and feedback from:

        Sten Vesterli, Scott/Tiger S.A., Denmark
        John Sim, Fishbowl Solutions, UK
        Josef Huber, Primus Delphi Group, Munich
        Thaddaus Weindl, Primus Delphi Group, Munich
        Praveen Pillalamarri, EiS Technologies, Bangalore
        Balaji Kamepalli, EiS Technologies, Bangalore
        Plinio Arbizu, Services & Processes Solutions S.A., Mexico
        Yannick Ongena, infoMENTUM, UK
        Jakub Ciszek, infoMENTUM, UK
        Mauro Flores, infoMENTUM, UK
        Matteo Formica, infoMENTUM, UK

    Photo: Richard Bingham, Oracle, Mauro Flores and Matteo Formica, infoMENTUM

    Why is this so exciting? Oracle has invested heavily in the research and development of the Oracle Fusion Applications user experience. This investment has been and continues to be applied across the product lines. Now, we finally get to teach customers and partners how to take advantage of this investment for custom solutions. This event was a pilot to test-drive the content, as well as a train-the-trainer event that our EMEA colleagues will be using with partners who want to build with Fusion Apps design patterns.

    What did attendees think?

    "I liked most the science stuff, like eye-tracking, design patterns and best-practice (color, contrast),” Josef Huber said. “It was a very good introduction to UI design, and most developers and project managers are very bad in that. So this course would be good for all developers and even project managers."

    Photo: Team Anonymous: John Sim, Fishbowl Solutions, Flavius Sana, Oracle, Josef Huber, infoMENTUM, Mireille Duroussaud, Oracle. Winners of the wireframing design exercise.

    Sten Vesterli, of Scott/Tiger, said he attended to learn techniques he could use in his own projects. He wants to ensure that his applications better meet the needs of his users, and he said sessions during the workshop on user interface design and wireframing were most useful to him. “Go to this event to learn the art and science of good user interfaces from people who really know how to do it,” he said.

    Photo: Sten Vesterli, Scott/Tiger, Angelo Santagata, Oracle

    Plinio Arbizu said the workshop fulfilled his goals, thanks to the recommendations given on how to design user interfaces to facilitate the adoption of applications among the final users. “The workshop combined these recommendations with an exercise that improved the technical comprehension, permitting the usage of JDeveloper to set forth our solutions,” he said. He added: “The first session that I really enjoyed was the five Fusion design principles. It was incredible to discover how these simple principles were included in an inherent manner in Fusion Applications, and I had been using many of them applying only ADF components. Another topic that I enjoyed a lot was the eight recommendations about the visual design of UIs. The issues that were raised in that lesson are unknown to the developers and of great value to achieve an attractive presentation layer to the end users. Participate in this workshop, and include these usability features in your projects and in this manner not only facilitate and improve the user productivity, but also distinguish yourself as a professional who takes full advantage of the functionalities offered by Oracle technology.”

    Praveen Pillalamarri came to the workshop to learn about the difficulties faced in UI and UX development, and how these can be resolved with the help of ADF. He also appreciated the opportunity to talk with other individuals who came to the workshop. Pillalamarri said, “The way we looked at things in terms of work and projects was sharpened. The UI and UX design knowledge shared by you was quite interesting, especially the minute things which we ignored in the UI or UX design.”

    Photo: Plinio Arbizu, Services & Processes Solutions S.A., Richard Bingham, Oracle, Balaji Kamepalli, & Praveen Pillalamarri, EiS Technologies

    Ready to spread the word

    In EMEA, Oracle customers and partners have access to three world-class trainers via Platform Technology Solutions: Mireille Duroussaud, Flavius Sana, and Angelo Santagata. Contact Andre Pavanello if you would like to experience this workshop firsthand, or if you have customers or partners who would benefit from the training. We are looking to bring the event to the U.S. in spring 2013. If you have interest in this kind of a workshop, leave a comment below. For those who want to follow the action, join the ADF Enterprise Methodology Group run by Oracle’s Chris Muir. Ask questions and continue the conversation in this forum, or check blogs.oracle.com/usableapps for topics emerging from the workshop.

    Read the article

  • Interview with Lenz Grimmer about MySQL Connect

    - by Keith Larson
    Keith Larson: Thank you for allowing me to do this interview with you.  I have been talking with a few different Oracle ACEs   about the MySQL Connect Conference. I figured the MySQL community might be missing you as well. You have been very busy with Oracle Linux but I know you still have an eye on the MySQL Community. How have things been?Lenz Grimmer: Thanks for including me in this series of interviews, I feel honored! I've read the other interviews, and really liked them. I still try to follow what's going on over in the MySQL community and it's good to see that many of the familiar faces are still around. Over the course of the 9 years that I was involved with MySQL, many colleagues and contacts turned into good friends and we still maintain close relationships.It's been almost 1.5 years ago that I moved into my new role here in the Linux team at Oracle, and I really enjoy working on a Linux distribution again (I worked for SUSE before I joined MySQL AB in 2002). I'm still learning a lot - Linux in the data center has greatly evolved in so many ways and there are a lot of new and exciting technologies to explore. Keith Larson: What were your thoughts when you heard that Oracle was going to deliver the MySQL Connect conference to the MySQL Community?Lenz Grimmer: I think it's testament to the fact that Oracle deeply cares about MySQL, despite what many skeptics may say. What started as "MySQL Sunday" two years ago has now evolved into a full-blown sub-conference, with 80 sessions at one of the largest corporate IT events in the world. I find this quite telling, not many products at Oracle enjoy this level of exposure! So it certainly makes me feel proud to see how far MySQL has come. Keith Larson: Have you had a chance to look over the sessions? What are your thoughts on them?Lenz Grimmer: I did indeed look at the final schedule.The content committee did a great job with selecting these sessions. I'm glad to see that the content selection was influenced by involving well-known and respected members of the MySQL community. The sessions cover a broad range of topics and technologies, both covering established topics as well as recent developments. Keith Larson: When you get a chance, what sessions do you plan on attending?Lenz Grimmer: I will actually be manning the Oracle booth in the exhibition area on one of these days, so I'm not sure if I'll have a lot of time attending sessions. But if I do, I'd love to see the keynotes and catch some of the sessions that talk about recent developments and new features in MySQL, High Availability and Clustering . Quite a lot has happened and it's hard to keep up with this constant flow of new MySQL releases.In particular, the following sessions caught my attention: MySQL Connect Keynote: The State of the Dolphin Evaluating MySQL High-Availability Alternatives CERN’s MySQL “as a Service” Deployment with Oracle VM: Empowering Users MySQL 5.6 Replication: Taking Scalability and High Availability to the Next Level What’s New in MySQL Server 5.6? MySQL Security: Past and Present MySQL at Twitter: Development and Deployment MySQL Community BOF MySQL Connect Keynote: MySQL Perspectives Keith Larson: So I will ask you just like I have asked the others I have interviewed, any tips that you would give to people for handling the long hours at conferences?Lenz Grimmer: Wear comfortable shoes and make sure to drink a lot! 
Also prepare a plan of the sessions you would like to attend beforehand and familiarize yourself with the venue, so you can get to the next talk in time without scrambling to find the location. The good thing about piggybacking on such a large conference like Oracle OpenWorld is that you benefit from the whole infrastructure. For example, there is a nice schedule builder that helps you to keep track of your sessions of interest. Other than that, bring enough business cards and talk to people, build up your network among your peers and other MySQL professionals! Keith Larson: What features of the MySQL 5.6 release do you look forward to the most ?Lenz Grimmer: There has been solid progress in so many areas like the InnoDB Storage Engine, the Optimizer, Replication or Performance Schema, it's hard for me to really highlight anything in particular. All in all, MySQL 5.6 sounds like a very promising release. I'm confident it will follow the tradition that Oracle already established with MySQL 5.5, which received a lot of praise even from very critical members of the MySQL community. If I had to name a single feature, I'm particularly and personally happy that the precise GIS functions have finally made it into a GA release - that was long overdue. Keith Larson:  In your opinion what is the best reason for someone to attend this event?Lenz Grimmer: This conference is an excellent opportunity to get in touch with the key people in the MySQL community and ecosystem and to get facts and information from the domain experts and developers that work on MySQL. The broad range of topics should attract people from a variety of roles and relations to MySQL, beginning with Developers and DBAs, to CIOs considering MySQL as a viable solution for their requirements. Keith Larson: You will be attending MySQL Connect and have some Oracle Linux Demos, do you see a growing demand for MySQL on Oracle Linux ?Lenz Grimmer: Yes! Oracle Linux is our recommended Linux distribution and we have a good relationship to the MySQL engineering group. They use Oracle Linux as a base Linux platform for development and QA, so we make sure that MySQL and Oracle Linux are well tested together. Setting up a MySQL server on Oracle Linux can be done very quickly, and many customers recognize the benefits of using them both in combination.Because Oracle Linux is available for free (including free bug fixes and errata), it's an ideal choice for running MySQL in your data center. You can run the same Linux distribution on both your development/staging systems as well as on the production machines, you decide which of these should be covered by a support subscription and at which level of support. This gives you flexibility and provides some really attractive cost-saving opportunities. Keith Larson: Since I am a Linux user and fan, what is on the horizon for  Oracle Linux?Lenz Grimmer: We're working hard on broadening the ecosystem around Oracle Linux, building up partnerships with ISVs and IHVs to certify Oracle Linux as a fully supported platform for their products. We also continue to collaborate closely with the Linux kernel community on various projects, to make sure that Linux scales and performs well on large systems and meets the demands of today's data centers. These improvements and enhancements will then rolled into the Unbreakable Enterprise Kernel, which is the key ingredient that sets Oracle Linux apart from other distributions. 
We also have a number of ongoing projects which are making good progress, and I'm sure you'll hear more about this at the upcoming OpenWorld conference :) Keith Larson: What is something that more people should be aware of when it comes to Oracle Linux and MySQL ?Lenz Grimmer: Many people assume that Oracle Linux is just tuned for Oracle products, such as the Oracle Database or our Engineered Systems. While it's of course true that we do a lot of testing and optimization for these workloads, Oracle Linux is and will remain a general-purpose Linux distribution that is a very good foundation for setting up a LAMP-Stack, for example. We also provide MySQL RPM packages for Oracle Linux, so you can easily stay up to date if you need something newer than what's included in the stock distribution.One more thing that is really unique to Oracle Linux is Ksplice, which allows you to apply security patches to the running Linux kernel, without having to reboot. This ensures that your MySQL database server keeps up and running and is not affected by any downtime. Keith Larson: What else would you like to add ?Lenz Grimmer: Thanks again for getting in touch with me, I appreciated the opportunity. I'm looking forward to MySQL Connect and Oracle OpenWorld and to meet you and many other people from the MySQL community that I haven't seen for quite some time! Keith Larson:  Thank you Lenz!

    Read the article

  • Top tweets SOA Partner Community – October 2012

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity

    SOA Community: Deploying Fusion Order Demo on 11.1.1.6 by Antony Reynolds http://wp.me/p10C8u-vA
    leonsmiers: Cant wait to test it >> RT @OracleSOA: Case Management patterns, session coverage from #OOW #OracleBPM #ACM #BPM http://bit.ly/OdcZL6
    Danilo Schmiedel: Bye bye San Francisco. #oow was a great conference in a wonderful city! Thanks! @soacommunity pic.twitter.com/lcYSe9xC
    OPITZ CONSULTING: The Journey towards #Oracle #BPM @OpenWorld 2012 - Slides by @t_winterberg & H. Normann: http://ow.ly/edkWE #oow
    demed: Full house at the SOA Customer Advisory Board! #oow12 http://instagr.am/p/QX9B8eLMLS/
    Danilo Schmiedel: "@whitehorsesnl: Had some great talks with the BPM guys at the DEMOgrounds. It is one of the best things at #oow" -> I agree!! @soacommunity
    Mark Simpson: Fusion Middleware Global Innovation Awards: nice to pick up a soa and bpm with our customer. #oow
    Mark Simpson: RT @SOASimone: #oraclesoa #oow hands on lab fully booked pic.twitter.com/pwI94Ew7 <--quick, provision some more compute power on the cloud!
    Oracle SOA: Join us for BPM and Analytics: Process Dashboards, BAM, and Intelligent Optimization. Moscone South - 308 #OracleBPM #OOW
    Oracle SOA: Real-time public safety demo! License plate recognition and processing in London via Oracle Event Processing. #oow pic.twitter.com/WufesDBq
    Marc: Nice session on customer success stories on #SOA11g with @SOASimone. Pro and cons and architectural overview. #oow pic.twitter.com/bzuhsujm
    Lucas Jellema: Full length Keynote on Middleware #oow : http://medianetwork.oracle.com/video/player/1873556035001 #oow_amis
    OracleBlogs: Why Fusion Middleware matters to Oracle Applications and Fusion Applications customers? http://ow.ly/2stVQ0
    OracleBlogs: Open World Session - BPM, SOA and ADF Combined: Patterns learned from Fusion Applications http://ow.ly/2suhzf
    Ronald Luttikhuizen: VENNSTER BLOG | Presentations at OpenWorld 2012 | http://blog.vennster.nl/2012/10/presentations-at-openworld-2012.html
    Andrejus Baranovskis: @dschmied @soacommunity next OOW for sure, and may be SOA community event ! @soacommunity
    Danilo Schmiedel: @andrejusb Thanks Andrejus - I really enjoyed having a session with you at #oow. When is next time :-) ? @soacommunity
    Lionel Dubreuil: @soacommunity #oow12 Today-1:15pm-Marriott Marquis Salon 7 Jump-starting Integration with Oracle Foundation Pack http://bit.ly/QKKJzF
    Ronald Luttikhuizen: Impression from our fault handling session in OSB and SOA Suite from the audience @soacommunity @gschmutz #oow pic.twitter.com/WSg1Z89E
    Marc: Nice session on Oracle Virtual Assembly for #SOA11g, @soacommunity Works with #exalogic but not required
    SOA Community: Send your #soacommunity #oow pictures and blog posts @soacommunity or http://www.facebook.com/soacommunity Enjoy OOW ;-)
    Jon petter hjulstad: Oracle BPM - Big leap forward in 11.1.1.7 !
    Whitehorses: Common BPM Use Cases from Oracle #bpm #oow pic.twitter.com/ofOv04EF
    Whitehorses: Oracle BPM 11.1.1.7 top new features. Interesting #oow #oowbenelux pic.twitter.com/HY9QN5un
    SOA Community: Industrialized SOA - topic of Business Technology Magazine http://wp.me/p10C8u-vi
    orclateamsoa: A-Team Blog #ateam: The curious case of SOA Human tasks' automatic completion http://ow.ly/1mq6YU
    Simone Geib: Look for this sign #oow #oraclesoa pic.twitter.com/MJsPV4PO
    Lucas Jellema: My summary of Larry Ellison's keynote at #oow on the AMIS Blog: http://technology.amis.nl/2012/10/01/oow-2012-larry-ellisons-keynote-announcements-exa-cloud-database/ #oow_amis
    gschmutz: Join my #oow session "Five Cool Use Cases for the Spring Component" to see the power of Spring and SOA Suite combined! Moscone 310 - 3:15 PM
    Ronald Luttikhuizen: Thanks to @soacommunity for great SOA/BPM dinner event yesterday night! #oow pic.twitter.com/v7x3i0DC
    OracleBlogs: OSB, Service Callouts and OQL http://ow.ly/2sq6B2
    OracleBlogs: Cloud and On-Premises Applications Integration using Oracle Integration Adapters http://ow.ly/2sqiDy
    OracleBlogs: Adapters, SOA Suite and More @Openworld 2012 http://ow.ly/2srdTg
    Eric Elzinga: OSB, Service Callouts and OQL - Part 3, http://see.sc/JodzEx #oracleservicebus
    Donatas Valys: interesting articles about soa industrialization to read #soa #industrialization http://it-republik.de/business-technology/bt-magazin-ausgaben/Industrialized-SOA-000516.html
    gschmutz: "@techsymp: 2012 Symposium Presentation Download Page Now Available! 75% of presentations published. http://www.servicetechsymposium.com" find mine there..
    Oracle BPM: Customer Experience and BPM – From Efficiency to Engagement #bpm #oraclebpm #processmanagement #socialbpm http://pub.vitrue.com/Tahi
    SOA Community: @soacommunity SOA Community Newsletter September 2012 http://wp.me/p10C8u-wa
    SOA Community: again again again.... it is Oracle Open World 2012 http://wp.me/p10C8u-wk
    OracleBlogs: SOA Proactive support http://ow.ly/2smrSJ
    demed: @gschmutz on NoSQL at @techsymp http://lockerz.com/s/247601661
    demed: Just finished "#BigData and its impact on #SOA" talk @techsymp. Really enjoyed getting out of beaten path. #london #oep http://lockerz.com/s/247636974
    OTNArchBeat: Need help selling SOA to business stakeholders? Give them this free eBook. #soasuite http://pub.vitrue.com/hsQY
    SOA Community: top Tweets SOA Partner Community – September 2012 http://wp.me/p10C8u-vc
    SOA Community: Move Data into the grid for scalable, predictable response times http://wp.me/p10C8u-vv
    ServiceTechSymposium: The September issue of the Service Technology Magazine is now published with six new items! Read them at http://www.servicetechmag.com
    Marc: Reviewed @Packt_OracleFMW new book on SOA11g administration! Very good ! http://tinyurl.com/8pzd5ww
    SOA Community: BPM Solution Catalogue – promote your process templates http://wp.me/p10C8u-vt
    OTNArchBeat: BPM ADF Task forms: Checking whether the current user is in a BPM Swimlane | @ChrisKarlChan http://pub.vitrue.com/aPMG
    OTNArchBeat: Cloud, automation drive new growth in SOA governance market | @JoeMcKendrick http://pub.vitrue.com/hNPv
    Simon Haslam: Looking for "oak style"(!) advanced content but you're a middleware specialist? See #ukoug2012 #middlewaresunday http://2012.ukoug.org/default.asp?p=9355
    Simon Haslam: The #ukoug2012 agenda is "go, go, go!" (as Murray would say!) http://2012.ukoug.org/agendagrid
    Germán Gazzoni: SOA Spezial II available – Industrialized SOA: the revised and expanded new edition of the SOA Spezial special issue... http://bit.ly/PAWwN9
    Oracle SOA: Flip thru new interactive "Oracle SOA Suite eBook - In the Customers Words" #middleware #soa #oraclesoa http://pub.vitrue.com/NzFZ
    SOA Community: Follow SOA Community on Facebook http://www.facebook.com/soacommunity #soacommunity #opn

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Blog | Twitter | LinkedIn | Mix | Forum

    Technorati Tags: SOA Community twitter, SOA Community, Oracle SOA, Oracle BPM, BPM Community, OPN, Jürgen Kress

    Read the article

  • JMaghreb 2012 Trip Report

    - by arungupta
    JMaghreb is the inaugural Java conference organized by Morocco JUG. It is the biggest Java conference in Maghreb (5 countries in North West Africa). Oracle was the exclusive platinum sponsor with several others. The registrations had to be closed at 1412 for the free conference and several folks were already on the waiting list. Rabat with 531 registrations and Casablanca with 426 were the top cities. Some statistics ... 850+ attendees over 2 days, 500+ every day 30 sessions were delivered by 18 speakers from 10 different countries 10 sessions in French and 20 in English 6 of the speakers spoke at JavaOne 2012 8 will be at Devoxx Attendees from 5 different countries and 57 cities in Morocco 40.9% qualified them as professional and rest as students Topics ranged from HTML5, Java EE 7, ADF, JavaFX, MySQL, JCP, Vaadin, Android, Community, JCP Java EE 6 hands-on lab was sold out within 7 minutes and JavaFX in 12 minutes I gave the keynote along with Simon Ritter which was basically a recap of the Strategy and Technical keynotes presented at JavaOne 2012. An informal survey during the keynote showed the following numbers: 25% using NetBeans, 90% on Eclipse, 3 on JDeveloper, 1 on IntelliJ About 10 subscribers to free online Java magazine. This digital magazine is a comprehensive source of information for everything Java - subscribe for free!! About 10-15% using Java SE 7. Download JDK 7 and get started today! Even JDK 8 builds have been available for a while now. My second talk explained the core concepts of WebSocket and how JSR 356 is providing a standard API to build WebSocket-driven applications in Java EE 7. TOTD #183 explains how you can easily get started with WebSocket in GlassFish 4. The complete slide deck is available: Next day started with a community keynote by Sonya Barry. Some of us live the life of JCP, JSR, EG, EC, RI, etc every day, but not every body is. To address that, Sonya prepared an excellent introductory presentation providing an explanation of these terms and how java.net infrastructure supports Java development. The registration for the lab showed there is a definite demand for these technologies in this part of the world. I delivered the Java EE 6 hands-on lab to a packed room of about 120 attendees. Most of the attendees were able to progress and follow the lab instructions. Some of the attendees did not have a laptop but were taking extensive notes on paper notepads. Several attendees were already using Java EE 6 in their projects and typically they are the ones asking deep dive questions. Also gave out three copies of my recently released Java EE 6 Pocket Guide and new GlassFish t-shirts. Definitely feels happy to coach ~120 more Java developers learn standards-based enterprise Java programming. I also participated in a JCP BoF along with Werner, Sonya, and Badr. Adotp-a-JSR, java.net infrastructure, how to file a JSR, what is an RI, and other similar topics were discussed in a candid manner. You can follow @JMaghrebConf or check out their facebook page. java.net published a timely conversation with Badr El Houari - the fearless leader of the Morocco JUG team. Did you know that Morocco JUG stood for JCP EC elections (ADD LINK) ? Even though they did not get elected but did fairly well. Now some sample tweets from #JMaghreb ... #JMaghreb is over. Impressive for a first edition! Thanks @badrelhouari and all the @MoroccoJUG team ! Since you @speakjava : System.out.println("Thank you so much dear Tech Evangelist ! The JavaFX was pretty amazing !!! 
"); #JMaghreb @YounesVendetta @arungupta @JMaghrebConf Right ! hope he will be back to morocco again and again .. :) @Alji_ @arungupta @JMaghrebConf That dude is a genius ;) Put it on your wall :p @arungupta rocking Java EE 6 at @JMaghrebConf #Java #JavaEE #JMaghreb http://t.co/isl0Iq5p @sonyabarry you are an awesome speaker ;-) #JMaghreb rich more than 550 attendees in day one. Expecting more tomorrow! ongratulations @badrelhouari the organisation was great! The talks were pretty interesting, and the turnout was surprising at #JMaghreb! #JMaghreb is truly awesome... The speakers are unbelievable ! #JavaFX... Just amazing #JMaghreb Charmed by the talk about #javaFX ( nodes architecture, MVC, Lazy loading, binding... ) gotta start using it intead of SWT. #JMaghreb JavaFX is killing JFreeChart. It supports Charts a lot of kind of them ... #JMaghreb The british man is back #JMaghreb I do like him!! #JMaghreb @arungupta rocking @JMaghrebConf. pic.twitter.com/CNohA3PE @arungupta Great talk about the future of Java EE (JEE 7 & JEE 8) Thank you. #JMaghreb JEE7 more mooore power , leeess less code !! #JMaghreb They are simplifying the existing API for Java Message Service 2.0 #JMaghreb good to know , the more the code is simplified the better ! The Glassdoor guy #arungupta is doing it RIGHT ! #JMaghreb Great presentation of The Future of the Java Platform: Java EE 7, Java SE 8 & Beyond #jMaghreb @arungupta is a great Guy apparently #JMaghreb On a personal front, the hotel (Soiftel Jardin des Roses) was pretty nice and the location was perfect. There was a 1.8 mile loop dirt trail right next to it so I managed to squeeze some runs before my upcoming marathon. Also enjoyed some great Moroccan cuisine - Couscous, Tajine, mint tea, and moroccan salad. Visit to Kasbah of the Udayas, Hassan II (one of the tallest mosque in the world), and eating in a restaurant in a kasbah are some of the exciting local experiences. Now some pictures from the event (and around the city) ... And the complete album: Many thanks to Badr, Faisal, and rest of the team for organizing a great conference. They are already thinking about how to improve the content, logisitics, and flow for the next year. I'm certainly looking forward to JMaghreb 2.0 :-)

    Read the article

  • SQL – Migrate Database from SQL Server to NuoDB – A Quick Tutorial

    - by Pinal Dave
    Data is growing exponentially, and every organization with growing data is thinking of the next big innovation in the world of Big Data. Big data is indeed the future for every organization at some point in time. Just like every other next big thing, big data has its own challenges and issues. The biggest challenge associated with big data is to find the ideal platform which supports the scalability and growth of the data. If you are a regular reader of this blog, you must be familiar with NuoDB. I have been working with NuoDB for a while and their recent release is the best thus far. NuoDB is an elastically scalable SQL database that can run on local host, datacenter and cloud-based resources. A key feature of the product is that it does not require sharding (read more here). Last week, I was able to install NuoDB in less than 90 seconds and have explored their Explorer and Admin sections. You can read about my experiences in these posts:

        SQL – Step by Step Guide to Download and Install NuoDB – Getting Started with NuoDB
        SQL – Quick Start with Admin Sections of NuoDB – Manage NuoDB Database
        SQL – Quick Start with Explorer Sections of NuoDB – Query NuoDB Database

    Many SQL Authority readers have been following me in my journey to evaluate NuoDB. One of the frequently asked questions I’ve received from you is if there is any way to migrate data from SQL Server to NuoDB. The fact is that there is indeed a way to do so, and NuoDB provides a fantastic tool which can help users do it. NuoDB Migrator is a command line utility that supports the migration of Microsoft SQL Server, MySQL, Oracle, and PostgreSQL schemas and data to NuoDB. The migration to NuoDB is a three-step process:

        NuoDB Migrator generates a schema for a target NuoDB database
        It dumps data from the source database
        It loads data into the target NuoDB database

    Let’s see how we can migrate our data from SQL Server to NuoDB using a simple three-step approach. But before we do that, we will create a sample database in MSSQL and later migrate that same database to NuoDB.

    Setup Step 1: Build sample data

        CREATE DATABASE [Test];
        CREATE TABLE [Department](
            [DepartmentID] [smallint] NOT NULL,
            [Name] VARCHAR(100) NOT NULL,
            [GroupName] VARCHAR(100) NOT NULL,
            [ModifiedDate] [datetime] NOT NULL,
            CONSTRAINT [PK_Department_DepartmentID] PRIMARY KEY CLUSTERED
            (
                [DepartmentID] ASC
            )
        ) ON [PRIMARY];
        INSERT INTO Department
        SELECT * FROM AdventureWorks2012.HumanResources.Department;

    Note that I am using the SQL Server AdventureWorks database to build this sample table, but you can build this sample table any way you prefer.

    Setup Step 2: Install Java 64 bit

    Before you can begin the migration process to NuoDB, make sure you have 64-bit Java installed on your computer. This is due to the fact that the NuoDB Migrator tool is built in Java. You can download 64-bit Java for Windows, Mac OSX, or Linux from the following link: http://java.com/en/download/manual.jsp. One more thing to remember: make sure that the JAVA_HOME variable in your environment settings points to your Java installation directory, or else the tool will not work. Here is how you can do it: Go to My Computer >> Right Click >> Select Properties >> Click on Advanced System Settings >> Click on Environment Variables >> Click on New and enter the following values.

        Variable Name: JAVA_HOME
        Variable Value: C:\Program Files\Java\jre7

    Make sure you enter your Java installation directory in the Variable Value field.

    Setup Step 3: Install JDBC driver for SQL Server
    There are two JDBC drivers available for SQL Server. Select the one you prefer to use by following one of the two links below:

        Microsoft JDBC Driver
        jTDS JDBC Driver

    In this example we will be using the jTDS JDBC driver. Once you download the driver, move it to your NuoDB installation folder. In my case, I have moved the JAR file of the driver into the C:\Program Files\NuoDB\tools\migrator\jar folder, as this is my NuoDB installation directory. Now we are all set to start the three-step migration process from SQL Server to NuoDB.

    Migration Step 1: NuoDB Schema Generation

    Here is the command I use to generate a schema of my SQL Server database in NuoDB. First I go to the folder C:\Program Files\NuoDB\tools\migrator\bin and execute the nuodb-migrator.bat file. Note that my database name is ‘test’. Additionally, my username and password are also ‘test’. You can see that my SQL Server database is running on my localhost on port 1433. Additionally, the schema of the table is ‘dbo’.

        nuodb-migrator schema --source.driver=net.sourceforge.jtds.jdbc.Driver --source.url=jdbc:jtds:sqlserver://localhost:1433/ --source.username=test --source.password=test --source.catalog=test --source.schema=dbo --output.path=/tmp/schema.sql

    The above command will generate a schema of all my SQL Server tables and put it in the file C:\tmp\schema.sql. You can open the schema.sql file and execute it directly in your NuoDB instance. You can follow the link here to see how you can execute the SQL script in NuoDB. Please note that if you have not yet created the schema in the NuoDB database, you should create it before executing this step.

    Migration Step 2: Generate the Dump File of the Data

    Once you have recreated your schema in NuoDB from SQL Server, the next step is very easy. Here we create a CSV format dump file, which will contain all the data from all the tables from the SQL Server database. The command to do so is very similar to the above command. Be aware that this step may take a bit of time based on your database size.

        nuodb-migrator dump --source.driver=net.sourceforge.jtds.jdbc.Driver --source.url=jdbc:jtds:sqlserver://localhost:1433/ --source.username=test --source.password=test --source.catalog=test --source.schema=dbo --output.type=csv --output.path=/tmp/dump.cat

    Once the above command is successfully executed you can find your CSV file in the C:\tmp\ folder. However, you do not have to do anything with it manually. The third and final step will take care of completing the migration process.

    Migration Step 3: Load the Data into NuoDB

    After building the schema and taking a dump of the data, the very next step is essential and crucial. It will take the CSV file and load it into the NuoDB database.

        nuodb-migrator load --target.url=jdbc:com.nuodb://localhost:48004/mytest --target.schema=dbo --target.username=test --target.password=test --input.path=/tmp/dump.cat

    Please note that in the above command we are now targeting the NuoDB database, which we have already created with the name of “MyTest”. If the database does not exist, create it manually before executing the above command. I have kept the username and password as “test”, but please make sure that you create a more secure password for your database for security reasons.

    Voila! You’re Done

    That’s it. You are done. It took 3 setup and 3 migration steps to migrate your SQL Server database to NuoDB. You can now start exploring the database and build excellent, scale-out applications.
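    As a quick sanity check after the load step, it helps to compare row counts between the source and target databases. The following is a minimal C# sketch (not part of the original tutorial) that counts the rows in the source SQL Server table using the same connection details as the example above; the equivalent COUNT(*) can then be run against NuoDB, for instance from the NuoDB Explorer mentioned earlier, and the two numbers compared.

        using System;
        using System.Data.SqlClient;

        class MigrationRowCountCheck
        {
            static void Main()
            {
                // Connection details mirror the tutorial's sample setup (localhost:1433,
                // database/user/password all "test"); adjust them for your environment.
                const string connectionString =
                    "Data Source=localhost,1433;Initial Catalog=test;User ID=test;Password=test;";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand("SELECT COUNT(*) FROM dbo.Department;", connection))
                {
                    connection.Open();
                    int sourceRows = (int)command.ExecuteScalar();
                    Console.WriteLine("Rows in source dbo.Department: {0}", sourceRows);
                }
            }
        }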
    In this blog post, I have done my best to come up with a simple and easy process which you can follow to migrate your app from SQL Server to NuoDB.

    Download NuoDB

    I strongly encourage you to download NuoDB and go through my 3-step migration tutorial from SQL Server to NuoDB. Additionally, here are two very important blog posts from NuoDB CTO Seth Proctor. He has written excellent blog posts on the concept of Administrative Domains. NuoDB has this concept of an Administrative Domain, which is a collection of hosts that can run one or multiple databases. Each database has its own TEs and SMs, but all are managed within the Admin Console for that particular domain.

        http://www.nuodb.com/techblog/2013/03/11/getting-started-provisioning-a-domain/
        http://www.nuodb.com/techblog/2013/03/14/getting-started-running-a-database/

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

    Read the article

  • Getting your bearings and defining the project objective

    - by johndoucette
    I wrote this two years ago and thought it was worth posting… Some may think this is a daunting task and some may even say “what a waste of time” and want to open MS Project and start typing out tasks because someone asked for an estimate and a task list. Hell, maybe you even use Excel and pump out a spreadsheet with some real scientific formula for guessing how long it will take to code a bunch of classes. However, this short exercise will provide the basis for the entire project, whether small or large, and be a great friend when communicating with anyone on your team or even your client. I call this the Project Brief. If you find yourself going beyond a single page, then you must decompose the sections and summarize your findings so there is a complete and clear picture of the project you are working on in a relatively short statement.

    Here is a great quote from the PMBOK (Project Management Body of Knowledge) relative to what a project is:

        A project is a temporary endeavor undertaken to create a unique product, service or result.

    With this in mind, the project brief should encompass the entirety (objective) of the endeavor in its explanation and what it will take (goals) to create the product, service or result (deliverables). Normally the process of identifying the project objective is done during the first stage of a project, called the Project Kickoff, but you can perform this very important step anytime to help you get a bearing. There are many more parts to helping a project stay on course, but this is usually the foundation it can be grounded on. Through a series of 3 exercises, you should be able to come up with the objective, goals and deliverables of your project. Follow these steps, and in no time (about half an hour), you will have the foundation of your project plan. (See examples below)

    Exercise 1 – Objectives

    Begin with the end in mind. Think about your project in business terms, with a couple of things to help you understand the objective:

        Reference the business benefit in terms of cost, speed and/or quality
        Provide a higher-level view of what the outcome will look like (future sense)
        It should be non-measurable; that’s what the goals are all about

    The output should be a single paragraph with three sentences and take 10 minutes to write. *Typically, agreement must be reached on the objectives of the project before you would proceed to the next steps of the project.

    Exercise 2 – Goals

    A project goal is a statement that answers questions about who, what, why, where and when. A good project goal statement:

        Answers the five “W” questions for the project
        Is measurable in each of its parts
        Is published and agreed on by all the owners

    This helps the Project Manager receive confirmation on defining the project target. Using the project objective established in the first exercise, think about the things it will take to get the job done. Think about tangible activities, which are the top-level tasks in a typical Work Breakdown Structure (WBS). The overall goal statement plus all the deliverables (next exercise) can be seen as the project team’s contract with the project owners. Write 3 - 5 goals in about 10 minutes. You should not write the words “who, what, why, where and when,” but merely be able to answer the questions when you read a goal.

    Exercise 3 – Deliverables

    Every project creates some type of output and these outputs are called deliverables.
    There are two classes of deliverables:

        Internal – produced for project team members to meet their goals
        External – produced for project owners to meet their expectations

    The list you enter here provides a checklist for the team’s delivery and/or is a statement of all the expectations of the project owners. Here are some typical project deliverables:

        Product and product documentation
            End product/system
            Requirements/feature documents
            Installation guides
            Demo/prototype
            System design documents
            User guides/help files
        Plans
            Project plan
            Training plan
            Conversion/installation/delivery plan
            Test plans
            Documentation plan
            Communication plan
        Reports and general documentation
            Progress reports
            System acceptance tests
            Outstanding bug list
            Procedures
            Risk and issue logs
            Project history

    Deliverables should go with each of the goals. Have 3-5 deliverables for each goal. When you are done, you will have established a great foundation for the clarity of your project. This exercise can take some time, but with practice, you should be able to whip this one out in 10 minutes as well, especially if you are intimate with an ongoing project.

    Samples

    Objective

    [Client] is implementing a series of MOSS sites to support external public (Internet), internal employee (Intranet) and external secure (password protected Internet) applications. This project will focus on the public-facing web site and will provide [Client] with architectural recommendations based on the current design being done by their design partner [Partner] and the internal Content Team. In addition, it will provide [Client] with a development plan and the confidence they need to deploy a world class public Internet website.

    Goals

    1.  [Consultant] will provide technical guidance and set project team expectations for the implementation of the MOSS Internet site based on provided features/functions within three weeks.
    2.  [Consultant] will understand phase 2 secure password-protected Internet site design and provide recommendations.

    Deliverables

    1.1  Public Internet (unsecure) Architectural Recommendation Plan
    1.2  Physical Site construction Work Breakdown Structure and plan (Time, cost and resources needed)
    2.1  Two Factor authentication recommendation document

    Objective

    [Client] is currently using an application developed by [Consultant] many years ago called "XXX". This application, although functional, does not meet their new updated business requirements and contains a few defects for which [Client] has developed work-around processes. [Client] would like to have a "new and improved" system to support their membership management needs by expanding membership and subscription capabilities, providing accounting integration with internal (GL) and external (VeriSign) systems, and implementing hooks to the current CRM solution. This effort will take place through a series of phases, beginning with envisioning.

    Goals

    1. Through discussions with users, [Consultant] will discover current issues/bugs which need to be resolved and which must meet the current functionality requirements within three weeks.
    2. [Consultant] will gather requirements from the users about what is "needed" vs. "what they have" for enhancements and provide a high level document supporting their needs.
    3. [Consultant] will meet with the team members through a series of meetings and help define the overall project plan to deliver a new and improved solution.
    Deliverables
    1.1 Prioritized list of current application issues/bugs that need to be resolved
    1.2 A resolution plan for the issues/bugs identified in the current application
    1.3 Risk Assessment Document
    2.1 A Requirements Document showing high-level [Client] needs for the new XXX application:
    · New feature functionality not in the application today
    · Existing functionality that will remain in the new application
    2.2 Reporting Requirements Document
    3.1 A Project Plan showing the deliverables and cost for the next (second) phase of this project
    3.2 A Statement of Work for the next (second) phase of this project
    3.3 An Estimate of any work that would need to follow the second phase

    Read the article

  • Grow Your Business with Security

    - by Darin Pendergraft
    Author: Kevin Moulton. Kevin Moulton has been in the security space for more than 25 years, and with Oracle for 7 years. He manages the East Enterprise Security Sales Consulting Team. He is also a Distinguished Toastmaster. Follow Kevin on Twitter at twitter.com/kevin_moulton, where he sometimes tweets about security, but might also tweet about running, beer, food, baseball, football, good books, or whatever else grabs his attention. Kevin will be a regular contributor to this blog, so stay tuned for more posts from him.
    It happened again! There I was, reading something interesting online, and realizing that a friend might find it interesting too. I clicked on the little email link, thinking that I could easily forward this to my friend, but no! Instead, a new screen popped up where I was asked to create an account. I was expected to create a User ID and password, not to mention providing some personally identifiable information, just for the privilege of helping that website spread their word. Of course, I didn’t want to have to remember a new account and password, I didn’t want to provide the requisite information, and I didn’t want to waste my time. I gave up, closed the web page, and moved on to something else. I was left with a bad taste in my mouth, and my friend might never find her way to this interesting website. If you were this content provider, would this be the outcome you were looking for?
    A few days later, I had a similar experience, but this one went a little differently. I was surfing the web when I happened upon some little tchotchke that I just had to have. I added it to my cart. When I went to buy the item, I was again brought to a page to create an account. Groan! But wait! On this page, I also had the option to sign in with my OpenID account, my Facebook account, my Yahoo account, or my Google Account. I have all of those! No new account to create, no new password to remember, and no personally identifiable information to be given to someone else (I’ve already given it all to those other guys, after all). In this case, the vendor was easy to deal with, and I happily completed the transaction. That pleasant experience will bring me back again.
    This is where security can grow your business. It’s a differentiator. You’ve got to have a presence on the web, and that presence has to take into account all the smartphones everyone’s carrying, and the tablets that took over Cyber Monday this year. If you are a company that a customer can deal with securely, and do so easily, then you are a company customers will come back to again and again.
    I recently had a need to open a new bank account. Every bank has a web presence now, but they are certainly not all the same. I wanted one that I could deal with easily using my laptop, but I also wanted 2-factor authentication in case I had to log in from a shared machine, and I wanted an app for my iPad. I found a bank with all three, and that’s who I am doing business with.
    Let’s say, for example, that I’m in a regular Texas Hold ’em game on Friday nights, so I move a couple of hundred bucks from checking to savings on Friday afternoons. I move a similar amount each week and I do it from the same machine. The bank trusts me, and they trust my machine. Most importantly, they trust my behavior. This is adaptive authentication. There should be no reason for my bank to make this transaction difficult for me. Now let's say that I log in from a Starbucks in Uzbekistan, and I transfer $2,500. What should my bank do now?
    Should they stop the transaction? Should they call my home number? (My former bank did exactly this once when I was taking money out of an ATM on a business trip, even though I had provided my cell phone number as my primary contact. When I asked them why they called my home number rather than my cell, they told me that their “policy” is to call the home number. If I'm on the road, what exactly is the use of trying to reach me at home to verify my transaction?)
    But, back to Uzbekistan… Should my bank assume that I am happily at home in New Jersey, and someone is trying to hack into my account? Perhaps they think they are protecting me, but I wouldn’t be very happy if I happened to be traveling on business in Central Asia. What if my bank were to automatically analyze my behavior and calculate a risk score? Clearly, this scenario would be outside of my typical behavior, so my risk score would necessitate something more than a simple login and password. Perhaps, in this case, a one-time password to my cell phone would prove that this is not just some hacker halfway around the world.
    But, what if you're not a bank? Do you need this level of security? If you want to be a business that is easy to deal with while also protecting your customers, then of course you do. You want your customers to trust you, but you also want them to enjoy doing business with you. Make it easy for them to do business with you, and they’ll come back, and perhaps even Tweet about it, or Like you, and then their friends will follow.
    How can Oracle help? Oracle has the technology and expertise to help you grow your business with security. Oracle Adaptive Access Manager will help you to prevent fraud while making it easier for your customers to do business with you by providing the risk analysis I discussed above, step-up authentication, and much more. Oracle Mobile and Social Access Service will help you to secure mobile access to applications by expanding on your existing back-end identity management infrastructure, and allowing your customers to transact business with you using the social media accounts they already know. You also have device fingerprinting and metrics to help you to grow your business securely. Security is not just a cost anymore. It’s a way to set your business apart. With Oracle’s help, you can be the business that everyone’s tweeting about. Image courtesy of Flickr user shareski
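    To make the adaptive authentication scenario above a bit more concrete, here is a minimal C# sketch of the kind of risk-scoring decision the post describes. It is purely illustrative: the RiskScorer class, its inputs and its thresholds are invented for this example and are not the API of Oracle Adaptive Access Manager.
    using System;

    // Hypothetical risk-scoring sketch: score a login/transfer attempt against the
    // customer's usual behavior and decide whether to step up authentication.
    class LoginAttempt
    {
        public string Country { get; set; }            // e.g. home country vs. Uzbekistan
        public string DeviceFingerprint { get; set; }
        public decimal TransferAmount { get; set; }
    }

    class RiskScorer
    {
        private readonly string _usualCountry;
        private readonly string _knownDevice;
        private readonly decimal _typicalTransfer;

        public RiskScorer(string usualCountry, string knownDevice, decimal typicalTransfer)
        {
            _usualCountry = usualCountry;
            _knownDevice = knownDevice;
            _typicalTransfer = typicalTransfer;
        }

        public int Score(LoginAttempt attempt)
        {
            int score = 0;
            if (attempt.Country != _usualCountry) score += 50;               // unfamiliar location
            if (attempt.DeviceFingerprint != _knownDevice) score += 30;      // unfamiliar device
            if (attempt.TransferAmount > _typicalTransfer * 5) score += 20;  // unusual amount
            return score;
        }
    }

    class Program
    {
        static void Main()
        {
            var scorer = new RiskScorer("US", "laptop-at-home", 200m);
            var attempt = new LoginAttempt { Country = "UZ", DeviceFingerprint = "shared-pc", TransferAmount = 2500m };

            // Low risk: let the transaction through. High risk: step up with a one-time password.
            int risk = scorer.Score(attempt);
            Console.WriteLine(risk < 50 ? "Allow" : "Send one-time password to registered phone");
        }
    }
    The real products mentioned in the post do far more than this (device fingerprinting, behavioral history, policy management), but the decision flow is the same: score the context of the request, then decide how much authentication to demand.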

    Read the article

  • Get your TFS 2012 task board demo ready in under 1 minute

    - by Tarun Arora
    Release Notes – http://tfsdemosetup.codeplex.com/  | Download | Source Code | Report a Bug | Ideas In this blog post, I’ll show you how to use the ‘TfsDemoSetup’ application to configure and setup the TFS 2012 task board for a demo in well less than 1 minute Step 1 – Note what you get with a newly created Team Project Create a new Team Project on TFS Preview         2. Click Create Project         3. The project creation has completed        4. Open the team web access and have a look at the home page Note – Since I created the project I am the only Team Member       A default Team by the name AdventureWorks Team has been created       A few sprints have been assigned to the default team but no dates for sprint start and end have been specified        A default Area Path for the team is missing       Step 2. Download the TFS Demo Setup Console application from Codeplex 1. Navigate to the TFS Demo Setup project on codeplex https://tfsdemosetup.codeplex.com/       2. Download Instructions and TFSDemo_<version>      3. Follow the steps in the Instructions.txt file      4. Unzip TFSDemo_<version> and open the target folder. Two important files in this folder, DemoDictionary.xml – This file contains the settings using which the demo environment will be setup SetupTfsDemo.exe – This will run the TFS demo environment setup application       Step 3 – Configure the setup (i.e. team name, members, sprint dates, etc) 1. Open up DemoDictionary.xml      2. Walkthrough DemoDictionary.xml             a. Basic Team Details         <Name> – Specify the name of the team         <Description> – Specify a description to go with the team         <SetAsDefaultTeam> – This accepts a value “true/false” when set to true, the newly created team will be set as the default team in the project         <BacklogIterationPath> – Specify a backlog iteration path for the team     b. Iterations – The iterations you specify here will be set as the Teams iterations        <Iterations> – Accepts multiple <Iteration> nodes.        <Iteration> – This is the most granular level of an Iteration        <Path> – The path to the sprint, sample values, Release 1\Sprint 1 or Release 2\Sprint 2        <StartDate> – The sprint start date, this accepts the format yyyy-MM-dd        <FinishDate> – The sprint finish date, this accepts the format yyyy-MM-dd     c. Team Members – Team Members that need to be added to the newly created team will be added under this section         <TeamMembers> – Accepts multiple <TeamMember> nodes.         <TeamMember> – This is the most granular level of a Team Member         <User> – This accepts the username, if you are running this against TFSPreview then the live id of the user will need to be passed. If you are running this against TFS Server then the user id i.e. Domain\UserName will need to be passed          <Team> – Specify the name of the team that you want the user to be assigned to.     d. WorkItems – This section will allow you to add work items (product backlog Items and linked tasks) to the current sprint of the team         <WorkItems> – Accepts multiple <WorkItem> nodes.         <WorkItem> – Accepts one <ProductBacklogItem> and multiple <Task> nodes         <ProductBacklogItem> – Used to create a Product Backlog Item type work item               <Title> – The title of the Product Backlog Item               <Description> – The description of the Product Backlog Type Work Item               <AssignedTo> – Used to assign the work item to a team member. 
The team member name or email address can be passed.               <Effort> – The total effort required to complete the Product Backlog Item         <Task> – Used to create a linked task to the Product Backlog type work item               <Title> – The title of the task type work item               <Description> – The description of the Task Type Work Item               <AssignedTo> – Used to assign the work item to a team member. The team member name or email address can be passed.               <RemainingWork> – The remaining effort to complete the task type work item Step 4 – Setup the demo environment against the newly created Team Project 1. Run SetupTfsDemo.exe    2. Enter Y or y on the prompt to continue setting up TFS Demo setup.     3. Select the newly created Team project, for this blogpost I had created the Team Project – AdventureWorks, so that is what I’ll select in the Connect to TFS Server pop up    3. Click Connect and follow the messages that are written to the console application       Step 5 – Validate that the Demo environment is set up as per the configuration 1. The team web access is all lit up You have a Sprint, a burn down chart, team members…    2. The team Demo has been added and has been set up as the default team    3. The Sprint Backlog Iteration path, Sprints and Sprint start and finish dates have been set    4. The default area path has been setup    5. Taskboard – Backlog items view    6. Taskboard – Team members view      Step 6 – Exception Handling! 1. This solution has been tested against TFS 2012 Service/Server for the Scrum 2.1 process template. 2. You are likely to run into an exception if you mess up the config file 3. If the team already exists and you run the console app to set up the team (with the same name) you will run into exceptions. Please remember this is just an alpha release, if you have any feedback please leave a comment! Didn’t I say that it would just take 1 minute, Enjoy!
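    To make Step 3 a little easier to follow, here is a small illustrative DemoDictionary.xml fragment assembled from the element descriptions above. The element names come from the post, but the exact nesting of the root and team elements is a guess, and the team name, dates, users and work item text are made-up sample values; the Instructions.txt file that ships with the download remains the authoritative reference.
    <DemoDictionary>
      <Team>
        <Name>Demo</Name>
        <Description>Sample team created by the TFS demo setup tool</Description>
        <SetAsDefaultTeam>true</SetAsDefaultTeam>
        <BacklogIterationPath>AdventureWorks</BacklogIterationPath>
      </Team>
      <Iterations>
        <Iteration>
          <Path>Release 1\Sprint 1</Path>
          <StartDate>2012-11-01</StartDate>
          <FinishDate>2012-11-14</FinishDate>
        </Iteration>
      </Iterations>
      <TeamMembers>
        <TeamMember>
          <User>someone@example.com</User>
          <Team>Demo</Team>
        </TeamMember>
      </TeamMembers>
      <WorkItems>
        <WorkItem>
          <ProductBacklogItem>
            <Title>As a customer I can browse the product catalogue</Title>
            <Description>Sample product backlog item</Description>
            <AssignedTo>someone@example.com</AssignedTo>
            <Effort>5</Effort>
          </ProductBacklogItem>
          <Task>
            <Title>Build the catalogue page</Title>
            <Description>Sample linked task</Description>
            <AssignedTo>someone@example.com</AssignedTo>
            <RemainingWork>8</RemainingWork>
          </Task>
        </WorkItem>
      </WorkItems>
    </DemoDictionary>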

    Read the article

  • Behavior Driven Development (BDD) and DevExpress XAF

    - by Patrick Liekhus
    So in my previous posts I showed you how I used EDMX to quickly build my business objects within XPO and XAF. But how do you test whether your business objects are actually doing what you want and verify that your business logic is correct? Well, I was reading my monthly MSDN magazine late last year and came across an article about using SpecFlow and WatiN to build BDD tests. So why not use these same techniques to write SpecFlow-style scripts and have them generate EasyTest scripts for use with XAF? Let me outline and show a few things below. I plan on releasing this code in a short while; I just wanted to preview what I was thinking.
    Before we begin…
    First, if you have not read the article in MSDN, here is the link to the article where I found my inspiration. It covers the overview of BDD vs. TDD, how to write some of the SpecFlow syntax and how to use the “Steps” logic to create your own tests.
    Second, if you have not heard of EasyTest from DevExpress I strongly recommend you review it here. It basically takes the power of XAF and the beauty of your application and allows you to create text-based files to execute automated commands within your application.
    Why would we do this? Because as you will see below, the cucumber syntax is easier for business analysts to interpret and digest the business rules from. You can find most of the information you will need on Cucumber syntax within The Secret Ninja Cucumber Scrolls located here. The basic syntax is Given X, When Y, Then Z. For example: Given I am at the login screen, When I enter my login credentials, Then I expect to see the home screen. Pretty easy syntax to follow.
    Finally, we will need to download and install SpecFlow. You can find it on their website here. Once you have this installed, let’s write our first test.
    Let’s get started…
    So where to start. Create a new testing project within your solution. I typically name this with a naming convention similar to the one used by XAF: my project name plus .FunctionalTests (i.e. AlbumManager.FunctionalTests). Remove the basic test that is created for you. We will not use the default test but rather create our own SpecFlow “Feature” files. Add a new item to your project and select the SpecFlow Feature file under C#. Name your feature file, as you do your class files, after the test it is performing.
    Now you can crack open your new feature file and write the actual test. Make sure to have your Ninja Scrolls from above as they provide valuable resources on how to write your test syntax. In the test below you can see how I defined the documentation in the Feature section. This is strictly for our purposes of readability and does not affect the test. The next section is the Scenario Outline, which is considered a test template. You can see the brackets <> around the fields that will be filled in for each test. So in the example below you can see that “Given I am starting a new test and the application is open”. This means I want a new EasyTest file and the Windows application generated by XAF is open. Next, “When I am at the Albums screen” tells XAF to navigate to the Albums list view. “And I click the New:Album button” tells XAF to click the New button on the list grid. “And I enter the following information” tells XAF which fields to complete with the mapped values. “And I click the Save and Close button” causes the record to be saved and the detail form to be closed.
    “Then I verify results” tests the input data against what is visible in the grid to ensure that your record was created. The Scenarios section gives each test a unique name and then fills in the values for each test. This way you can use the same test to make multiple passes with different data.
    Almost there. Now we must save the feature file, and the BDD tests will be written using standard unit test syntax. This is all handled for you by SpecFlow, so just save the file. What you will see in your Test List Editor is a unit test for each of the above scenarios you just built. You can now use standard unit testing frameworks to execute the test as you desire. As you would expect, these BDD SpecFlow tests can be automated into your build process to ensure that your business requirements are satisfied each and every time.
    How does it work?
    What we have done is to intercept the testing logic at runtime to interpret the SpecFlow syntax into EasyTest syntax. These are the basic StepDefinitions that we are working on now. We expect to put these on CodePlex within the next few days. You can always override and make your own rules as you see fit for your project. Follow the MSDN magazine article above to start your own. You can see part of our implementation below. As you can gather from the MSDN article and the code sample below, we have created our own common rules to build the above syntax.
    The code implementation for these rules basically saves your information from the feature file into an EasyTest file format. It then executes the EasyTest file and parses the XML results of the test. If the test succeeds, the test is passed. If the test fails, the EasyTest failure message is logged and the screenshot (as captured by EasyTest) is saved for your review.
    Again, we are working on getting this code ready for mass consumption, but at this time it is not ready. We will post another message when it is ready with all details about usage and setup. Thanks
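    The implementation snippet the author refers to above was shown as an image in the original post and has not survived here. As a stand-in, the fragment below is a hypothetical C# sketch of what a SpecFlow binding that records EasyTest commands could look like; the EasyTestScript helper and the command strings are invented for illustration and are not the author's actual code (the real EasyTest command format is omitted).
    using System.Text;
    using TechTalk.SpecFlow;

    // Hypothetical helper that accumulates EasyTest commands as text for later execution.
    public class EasyTestScript
    {
        private readonly StringBuilder _script = new StringBuilder();
        public void Append(string command) { _script.AppendLine(command); }
        public string Text { get { return _script.ToString(); } }
    }

    [Binding]
    public class EasyTestSteps
    {
        private readonly EasyTestScript _script = new EasyTestScript();

        [Given(@"I am starting a new test")]
        public void GivenIAmStartingANewTest()
        {
            // Start a fresh script for this scenario.
            _script.Append("StartNewTest");
        }

        [When(@"I am at the (.*) screen")]
        public void WhenIAmAtTheScreen(string viewName)
        {
            // Translate the step into the corresponding EasyTest navigation command.
            _script.Append("Navigate: " + viewName);
        }

        [Then(@"I verify results")]
        public void ThenIVerifyResults()
        {
            // In the real implementation the EasyTest file would be executed here
            // and its XML results parsed to pass or fail the scenario.
        }
    }
    The pattern is the one described in the post: each Gherkin step is bound to a method, the methods write out EasyTest instructions, and the final step runs the generated file and inspects the results.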

    Read the article

  • Probation is Over: PASS Board Year 1, Q2

    - by Denise McInerney
    Though it's not always official, every job begins with a probation period. You start out with lots of questions and every day you find out how much more you have to learn. Usually after a few months you discover that you can actually answer some questions and have at least an idea of what you are supposed to be doing. Now at the end of my second quarter on the "job" of serving on the PASS Board I have reached that point. My probation period is over. The last three months were busy for the entire Board with the budget process, an in-person meeting and moving forward with PASS Global Growth plans. I had also set a specific goal for myself for my 2nd quarter: to see the Board adopt a Code of Conduct for the PASS Summit.
    Code of Conduct
    When I ran for the Board I included my desire to see PASS establish a code of conduct in my campaign platform. I was motivated to do this for a few reasons:
    - Other technical conferences have had incidents of harassment. Most of these did not have a policy in place prior to having a problem, though several conference organizers have since adopted anti-harassment policies or codes of conduct. I felt it would be in PASS' interest to establish a policy so we would be prepared should there be an incident.
    - "This is Community." Adopting a code of conduct would reinforce our community orientation and send a message about the positive character of the Summit.
    - PASS is a leader among technical organizations for its promotion and support of women. Adopting a code of conduct would further demonstrate our leadership in this area.
    After researching similar policies from other organizations, I published a first draft in April. I solicited feedback from the Board, HQ staff and some PASS members. Incorporating that feedback, I presented version 4 at the May Board meeting, where we had a good discussion. You can read the meeting minutes for details. I incorporated points from the Board discussion as well as feedback from a legal review to produce a final version which has been submitted to the Board. It will be discussed at the Board meeting July 12. You can read the full text at the end of this post.
    Virtual Chapters
    In the first quarter we started ramping up marketing support for the Virtual Chapters. Since then each edition of the Connector has highlighted a different VC to help get out the message about the variety of educational opportunities that are offered. These VC profiles will continue in the coming months. I was very pleased to welcome the new DBA Fundamentals VC, which is geared toward new DBAs, people who are considering entering the field and those transitioning from a different IT role. Thanks to the contributions of Erin Stellato, Michelle Nalliah and Karla Landrum, we published a "Virtual Chapter Guidebook". This document includes great advice on how to build and promote a VC. It's also a reference for how things work, from budgets to webinar hosting. I think this document will be extremely valuable to all our VC leaders and am grateful to those who put it together.
    Board Meeting/SQL Rally
    The Board met in May in Dallas. Among the items discussed were Global Growth, the budget, future events and the upcoming elections. We covered a lot of ground in two days and I will again refer you to the meeting minutes for details. The meeting schedule allowed us to participate in the SQL Rally networking events and one full day of the conference. I enjoyed having the opportunity to meet and talk with many PASS members.
    And my hat is off to the SQL Rally organizers who put on an outstanding event.
    Global Growth
    PASS has undertaken a major initiative to reach and engage SQL Server professionals around the world. This Global Growth plan is ambitious and will have a significant impact on the strategic direction of the organization. We have been reaching out to the community for feedback, including hosting Twitter chats and live Town Hall meetings. I co-hosted two of these events and appreciated hearing the different perspectives of the people who participated. If you have not done so, I encourage you to read about the Global Growth vision and proposed governance changes and submit your feedback.
    FY13 Budget
    July 1 is the beginning of PASS' fiscal year, which makes the end of June the deadline for approving a budget. Each director submits a budget for his or her portfolio. For the Virtual Chapter portfolio, I focused on how we can allocate resources to grow the VCs. Budgeting is a give-and-take process, and while I didn't get everything I asked for, I'm pleased the FY13 budget includes a significant increase in financial support for the Virtual Chapters. Many people put a lot of work into the budget, but no two people deserve credit more than VP of Finance Douglas McDowell and Accounting Manager Sandy Cherry. Thanks to both of them for getting us across the goal line on time.
    SQL Saturday
    I attended SQL Saturdays in Orange Co., CA, and Phoenix. It's always inspiring to see the enthusiasm in the community for learning and networking. These events are successful due to the hard work of many volunteers. Thanks to the organizers in both cities for all your efforts.
    Next Up
    This quarter we'll be gearing up plans for the VCs at the Summit and exploring ways the VCs can best support PASS' Global Growth work. I'll also be wrapping up work on the Code of Conduct and attending a Board meeting in September. And I will be at SQL Saturday #144 in Sacramento later this month. Here is the language of the Code of Conduct I have submitted to the Board for consideration:
    PASS Code of Conduct
    The PASS Summit provides database professionals from a variety of backgrounds with an opportunity to connect, share and learn. We value the strong sense of community that characterizes this event and we seek to foster an inclusive, professional atmosphere. We are dedicated to providing a harassment-free conference experience for everyone, regardless of gender, race, sexual orientation, disability, physical appearance, religion or any other protected classification. Everyone at the Summit is expected to follow the Code of Conduct. This includes but is not limited to: PASS Staff, Exhibitors, Speakers, Attendees and anyone affiliated with the event. Participants are expected to follow the Code of Conduct at all Summit events, including PASS-sponsored social events.
    Participant behavior
    Harassment includes, but is not limited to, offensive verbal comments related to gender, race, sexual orientation, disability, physical appearance, religion, or any other protected classification. Intimidation, threats, stalking, harassing photography or recording, sustained disruption of talks or other events, inappropriate physical contact and unwelcome attention will also be considered harassment. Similarly, sexual, racist, derogatory, threatening or other inappropriate language and imagery are not appropriate for any conference venue, including sessions.
Recourse If a participant engages in any conduct that is prohibited under this Code of Conduct, the conference organizers may take any action they deem appropriate, including warning the offender or expelling the offender from the conference. No refunds will be granted to attendees expelled from the Summit due to violations of the Code of Conduct. If you are being harassed, witness harassment, or have any other concerns, please contact a member of conference staff immediately. Conference staff can be identified by their “Headquarters/Staff” shirts and are trained to handle the situation appropriately. A Code of Conduct Committee (CCC) made up of the Executive Manager and three members of the Board of Directors designated by the President will be authorized to take action in response to an incident or behavior that violates the Code of Conduct.

    Read the article

  • Orchestrating the Virtual Enterprise

    - by John Murphy
    During the American Industrial Revolution, the Ford Motor Company did it all. It turned raw materials into a showroom full of Model Ts. It owned a steel mill, a glass factory, and an automobile assembly line. The company was both self-sufficient and innovative and went on to become one of the largest and most profitable companies in the world. Nowadays, it's unusual for any business to follow this vertical integration model because it's much harder to be best in class across such a wide range of capabilities and services. Instead, businesses focus on their core competencies and outsource other business functions to specialized suppliers. They exchange vertical integration for collaboration. When done well, all parties benefit from this arrangement and the collaboration leads to the creation of an agile, lean and successful "virtual enterprise."
    Case in point: For Sun hardware, Oracle outsources most of its manufacturing and all of its logistics to third parties. These are vital activities, but ones where Oracle doesn't have a core competency, so we shift them to business partners who do. Within our enterprise, we always retain the core functions of product development, support, and most of the sales function, because that's what constitutes our core value to our customers. This is a perfect example of a virtual enterprise.
    What are the implications of this? It means that we must exchange direct internal control for indirect external collaboration. This fundamentally changes the relative importance of different business processes, the boundaries of security and information sharing, and the relationship of the supply chain systems to the ERP. The challenge is that the systems required to support this virtual paradigm are still mired in "island enterprise" thinking. But help is at hand. Developments such as the Web, social networks, collaboration, and rules-based orchestration offer great potential to fundamentally re-architect supply chain systems to better support the virtual enterprise.
    Supply Chain Management Systems in a Virtual Enterprise
    Historically, enterprise software was constructed to automate the ERP - and then the supply chain systems extended the ERP. They were joined at the hip. In virtual enterprises, the supply chain system needs to be ERP-agnostic, sitting above each of the ERPs that are distributed across the virtual enterprise - most of which are operating in other businesses. This is vital so that the supply chain system can manage the flow of material and the related information through the multiple enterprises. It has to have strong collaboration tools. It needs to be highly flexible. Users need to be able to see information that's coming from multiple sources and be able to react and respond to events across those sources.
    Oracle Fusion Distributed Order Orchestration (DOO) is a perfect example of a supply chain system designed to operate in this virtual way. DOO embraces the idea that a company's fulfillment challenge is a distributed, multi-enterprise problem. It enables users to manage the process and the trading partners in a uniform way and deliver a consistent user experience while operating over a heterogeneous, virtual enterprise. This is a fundamental shift at the core of managing supply chains. It forces virtual enterprises to think architecturally about how best to construct their supply chain systems.
    Case in point, almost everyone has ordered from Amazon.com at one time or another.
Our orders are as likely to be fulfilled by third parties as they are by Amazon itself. To deliver the order promptly and efficiently, Amazon has to send it to the right fulfillment location and know the availability in that location. It needs to be able to track status of the fulfillment and deal with exceptions. As a virtual enterprise, Amazon's operations, using thousands of trading partners, requires a very different approach to fulfillment than the traditional 'take an order and ship it from your own warehouse' model. Amazon had no choice but to develop a complex, expensive and custom solution to tackle this problem as there used to be no product solution available. Now, other companies who want to follow similar models have a better off-the-shelf choice -- Oracle Distributed Order Orchestration (DOO).  Consider how another of our customers is using our distributed orchestration solution. This major airplane manufacturer has a highly complex business and interacts regularly with the U.S. Government and major airlines. It sits in the middle of an intricate supply chain and needed to improve visibility across its many different entities. Oracle Fusion DOO gives the company an orchestration mechanism so it could improve quality, speed, flexibility, and consistency without requiring an organ transplant of these highly complex legacy systems. Many retailers face the challenge of dealing with brick and mortar, Web, and reseller channels. They all need to be knitted together into a virtual enterprise experience that is consistent for their customers. When a large U.K. grocer with a strong brick and mortar retail operation added an online business, they turned to Oracle Fusion DOO to bring these entities together. Disturbing the Peace with Acquisitions Quite often a company's ERP system is disrupted when it acquires a new company. An acquisition can inject a new set of processes and systems -- or even introduce an entirely new business like Sun's hardware did at Oracle. This challenge has been a driver for some of our DOO customers. A large power management company is using Oracle Fusion DOO to provide the flexibility to rapidly integrate additional products and services into its central fulfillment operation. The Flip Side of Fulfillment Meanwhile, we haven't ignored similar challenges on the supply side of the equation. Specifically, how to manage complex supply in a flexible way when there are multiple trading parties involved? How to manage the supply to suppliers? How to manage critical components that need to merge in a tier two or tier three supply chain? By investing in supply orchestration solutions for the virtual enterprise, we plan to give users better visibility into their network of suppliers to help them drive down costs. We also think this technology and full orchestration process can be applied to the financial side of organizations. An example is transactions that flow through complex internal structures to minimize tax exposure. We can help companies manage those transactions effectively by thinking about the internal organization as a virtual enterprise and bringing the same solution set to this internal challenge.  The Clear Front Runner No other company is investing in solving the virtual enterprise supply chain issues like Oracle is. Oracle is in a unique position to become the gold standard in this market space. We have the infrastructure of Oracle technology. We already have an Oracle Fusion DOO application which embraces the best of what's required in this area. 
And we're absolutely committed to extending our Fusion solution to other use cases and delivering even more business value.

    Read the article

  • MapRedux - PowerShell and Big Data

    - by Dittenhafer Solutions
    MapRedux – #PowerShell and #Big Data
    Have you been hearing about “big data”, “map reduce” and other large-scale computing terms over the past couple of years and been curious to dig into more detail? Have you read some of the Apache Hadoop online documentation and unfortunately concluded that it wasn't feasible to set up a “test” Hadoop environment on your machine? More recently, I have read about some of Microsoft’s work to enable Hadoop on the Azure cloud. Being a "Microsoft"-leaning technologist, I am more inclined to be successful with experimentation when on the Windows platform. Of course, it is not that I am "religious" about one set of technologies over another, but rather that I am more experienced with one. Anyway, within the past couple of weeks I have been thinking about PowerShell a bit more as the 2012 PowerShell Scripting Games approach, and it occurred to me that PowerShell's support for Windows Remote Management (WinRM) and some other inherent features of PowerShell might lend themselves particularly well to a simple implementation of the MapReduce framework. I fired up my PowerShell ISE and started writing just to see where it would take me.
    Quite simply, the ScriptBlock feature combined with the ability of Invoke-Command to create remote jobs on networked servers provides much of the plumbing of a distributed computing environment. There are some limiting factors of course. Microsoft provided some default settings which prevent PowerShell from taking over a network without administrative approval first. But even with just one adjustment, a given Windows-based machine can become a node in a MapReduce-style distributed computing environment.
    Ok, so enough introduction. Let's talk about the code. First, any machine that will participate as a remote "node" will need WinRM enabled for remote access, as shown below. This is not exactly practical for hundreds of intended nodes, but for one (or five) machines in a test environment it does just fine.
    C:> winrm quickconfig
    WinRM is not set up to receive requests on this machine.
    The following changes must be made:
    Set the WinRM service type to auto start.
    Start the WinRM service.
    Make these changes [y/n]? y
    Alternatively, you could take the approach described in the Remotely enable PSRemoting post from the TechNet forum and use PowerShell to create remote scheduled tasks that will call Enable-PSRemoting on each intended node.
    Invoke-MapRedux
    Moving on, now that you have one or more remote "nodes" enabled, you can consider the actual Map and Reduce algorithms. Consider the following snippet:
    $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose
    Invoke-MapRedux takes an instance of a MapReduceItem, which references the Map and Reduce scriptblocks, an array of computer names which are the remote nodes, and the initial data set to be processed. As simple as that, you can start working with concepts of big data and the MapReduce paradigm. Now, how did we get there? I have published the initial version of my PsMapRedux PowerShell Module on GitHub. The PsMapRedux module provides the Invoke-MapRedux function described above. Feel free to browse the underlying code and even contribute to the project! In a later post, I plan to show some of the inner workings of the module, but for now let's move on to how the Map and Reduce functions are defined.
    Map
    Both the Map and Reduce functions need to follow a prescribed prototype. The prototype for a Map function in the MapRedux module is as follows.
A simple scriptblock that takes one PsObject parameter and returns a hashtable. It is important to note that the PsObject $dataset parameter is a MapRedux custom object that has a "Data" property which offers an array of data to be processed by the Map function. $aMap = { Param ( [PsObject] $dataset ) # Indicate the job is running on the remote node. Write-Host ($env:computername + "::Map"); # The hashtable to return $list = @{}; # ... Perform the mapping work and prepare the $list hashtable result with your custom PSObject... # ... The $dataset has a single 'Data' property which contains an array of data rows # which is a subset of the originally submitted data set. # Return the hashtable (Key, PSObject) Write-Output $list; } Reduce Likewise, with the Reduce function a simple prototype must be followed which takes a $key and a result $dataset from the MapRedux's partitioning function (which joins the Map results by key). Again, the $dataset is a MapRedux custom object that has a "Data" property as described in the Map section. $aReduce = { Param ( [object] $key, [PSObject] $dataset ) Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count) # The hashtable to return $redux = @{}; # Return Write-Output $redux; } All Together Now When everything is put together in a short example script, you implement your Map and Reduce functions, query for some starting data, build the MapReduxItem via New-MapReduxItem and call Invoke-MapRedux to get the process started: # Import the MapRedux and SQL Server providers Import-Module "MapRedux" Import-Module “sqlps” -DisableNameChecking # Query the database for a dataset Set-Location SQLSERVER:\sql\dbserver1\default\databases\myDb $query = "SELECT MyKey, Date, Value1 FROM BigData ORDER BY MyKey"; Write-Host "Query: $query" $dataset = Invoke-SqlCmd -query $query # Build the Map function $MyMap = { Param ( [PsObject] $dataset ) Write-Host ($env:computername + "::Map"); $list = @{}; foreach($row in $dataset.Data) { # Write-Host ("Key: " + $row.MyKey.ToString()); if($list.ContainsKey($row.MyKey) -eq $true) { $s = $list.Item($row.MyKey); $s.Sum += $row.Value1; $s.Count++; } else { $s = New-Object PSObject; $s | Add-Member -Type NoteProperty -Name MyKey -Value $row.MyKey; $s | Add-Member -type NoteProperty -Name Sum -Value $row.Value1; $list.Add($row.MyKey, $s); } } Write-Output $list; } $MyReduce = { Param ( [object] $key, [PSObject] $dataset ) Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count) $redux = @{}; $count = 0; foreach($s in $dataset.Data) { $sum += $s.Sum; $count += 1; } # Reduce $redux.Add($s.MyKey, $sum / $count); # Return Write-Output $redux; } # Create the item data $Mr = New-MapReduxItem "My Test MapReduce Job" $MyMap $MyReduce # Array of processing nodes... $MyNodes = ("node1", "node2", "node3", "node4", "localhost") # Run the Map Reduce routine... $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose # Show the results Set-Location C:\ $MyMrResults | Out-GridView Conclusion I hope you have seen through this article that PowerShell has a significant infrastructure available for distributed computing. While it does take some code to expose a MapReduce-style framework, much of the work is already done and PowerShell could prove to be the the easiest platform to develop and run big data jobs in your corporate data center, potentially in the Azure cloud, or certainly as an academic excerise at home or school. 
Follow me on Twitter to stay up to date on the continuing progress of my Powershell MapRedux module, and thanks for reading! Daniel

    Read the article

  • Conducting Effective Web Meetings

    - by BuckWoody
    There are several forms of corporate communication. From immediate, rich communications like phones and IM messaging to historical transactions like e-mail, there are a lot of ways to get information to one or more people. From time to time, it's even useful to have a meeting. (This is where a witty picture of a guy sleeping in a meeting goes. I won't bother actually putting one here; you're already envisioning it in your mind) Most meetings are pointless, and a complete waste of time. This is the fault, completely and solely, of the organizer. It's because he or she hasn't thought things through enough to think about alternate forms of information passing. Here's the criteria for a good meeting - whether in-person or over the web: 100% of the content of a meeting should require the participation of 100% of the attendees for 100% of the time It doesn't get any simpler than that. If it doesn't meet that criteria, then don't invite that person to that meeting. If you're just conveying information and no one has the need for immediate interaction with that information (like telling you something that modifies the message), then send an e-mail. If you're a manager, and you need to get status from lots of people, pick up the phone.If you need a quick answer, use IM. I once had a high-level manager that called frequent meetings. His real need was status updates on various processes, so 50 of us would sit in a room while he asked each one of us questions. He believed this larger meeting helped us "cross pollinate ideas". In fact, it was a complete waste of time for most everyone, except in the one or two moments that they interacted with him. So I wrote some code for a Palm Pilot (which was a kind of SmartPhone but with no phone and no real graphics, but this was in the days when we had just discovered fire and the wheel, although the order of those things is still in debate) that took an average of the salaries of the people in the room (I guessed at it) and ran a timer which multiplied the number of people against the salaries. I left that running in plain sight for him, and when he asked about it, I explained how much the meetings were really costing the company. We had far fewer meetings after. Meetings are now web-enabled. I believe that's largely a good thing, since it saves on travel time and allows more people to participate, but I think the rule above still holds. And in fact, there are some other rules that you should follow to have a great meeting - and fewer of them. Be Clear About the Goal This is important in any meeting, but all of us have probably gotten an invite with a web link and an ambiguous title. Then you get to the meeting, and it's a 500-level deep-dive on something everyone expects you to know. This is unfair to the "expert" and to the participants. I always tell people that invite me to a meeting that I will be as detailed as I can - but the more detail they can tell me about the questions, the more detailed I can be in my responses. Granted, there are times when you don't know what you don't know, but the more you can say about the topic the better. There's another point here - and it's that you should have a clearly defined "win" for the meeting. When the meeting is over, and everyone goes back to work, what were you expecting them to do with the information? Have that clearly defined in your head, and in the meeting invite. Understand the Technology There are several web-meeting clients out there. I use them all, since I meet with clients all over the world. 
They all work differently - so I take a few moments and read up on the different clients and find out how I can use the tools properly. I do this with the technology I use for everything else, and it's important to understand it if the meeting is to be a success. If you're running the meeting, know the tools. I don't care if you like the tools or not, learn them anyway. Don't waste everyone else's time just because you're too bitter/snarky/lazy to spend a few minutes reading. Check your phone or mic. Check your video size. Install (and learn to use)  ZoomIT (http://technet.microsoft.com/en-us/sysinternals/bb897434.aspx). Format your slides or screen or output correctly. Learn to use the voting features of the meeting software, and especially it's whiteboard features. Figure out how multiple monitors work. Try a quick meeting with someone to test all this. Do this *before* you invite lots of other people to your meeting.   Use a WebCam I'm not a pretty man. I have a face fit for radio. But after attending a meeting with clients where one Microsoft person used a webcam and another did not, I'm convinced that people pay more attention when a face is involved. There are tons of studies around this, or you can take my word for it, but toss a shirt on over those pajamas and turn the webcam on. Set Up Early Whether you're attending or leading the meeting, don't wait to sign on to the meeting at the time when it starts. I can almost plan that a 10:00 meeting will actually start at 10:10 because the participants/leader is just now installing the web client for the meeting at 10:00. Sign on early, go on mute, and then wait for everyone to arrive. Mute When Not Talking No one wants to hear your screaming offspring / yappy dog / other cubicle conversations / car wind noise (are you driving in a desert storm or something?) while the person leading the meeting is trying to talk. I use the Lync software from Microsoft for my meetings, and I mute everyone by default, and then tell them to un-mute to talk to the group. Share Collateral If you have a PowerPoint deck, mail it out in case you have a tech failure. If you have a document, share it as an attachment to the meeting. Don't make people ask you for the information - that's why you're there to begin with. Even better, send it out early. "But", you say, "then no one will come to the meeting if they have the deck first!" Uhm, then don't have a meeting. Send out the deck and a quick e-mail and let everyone get on with their productive day. Set Actions At the Meeting A meeting should have some sort of outcome (see point one). That means there are actions to take, a follow up, or some deliverable. Otherwise, it's an e-mail. At the meeting, decide who will do what, when things are needed, and so on. And avoid, if at all possible, setting up another meeting, unless absolutely necessary. So there you have it. Whether it's on-premises or on the web, meetings are a necessary evil, and should be treated that way. Like politicians, you should have as few of them as are necessary to keep the roads paved and public libraries open.
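    As a side note, the "meeting cost timer" from the Palm Pilot anecdote earlier in the post is easy to recreate today. The snippet below is a rough, hypothetical C# version of that idea; the salary figure, headcount and reporting interval are made-up values.
    using System;
    using System.Threading;

    class MeetingCostTimer
    {
        static void Main()
        {
            const decimal averageHourlyRate = 75m;   // guessed average salary rate, as in the anecdote
            const int attendees = 50;
            var start = DateTime.UtcNow;

            // Print the running cost of the meeting once a minute.
            while (true)
            {
                var elapsedHours = (decimal)(DateTime.UtcNow - start).TotalHours;
                var cost = averageHourlyRate * attendees * elapsedHours;
                Console.WriteLine("This meeting has cost roughly {0:C} so far.", cost);
                Thread.Sleep(TimeSpan.FromMinutes(1));
            }
        }
    }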

    Read the article

  • Looking into the JQuery Overlays Plugin

    - by nikolaosk
    I have been using JQuery for a couple of years now and it has helped me to solve many problems on the client side of web development.  You can find all my posts about JQuery in this link. In this post I will be providing you with a hands-on example on the JQuery Overlays Plugin.If you want you can have a look at this post, where I describe the JQuery Cycle Plugin.You can find another post of mine talking about the JQuery Carousel Lite Plugin here. Another post of mine regarding the JQuery Image Zoom Plugin can be found here.I will be writing more posts regarding the most commonly used JQuery Plugins. With the JQuery Overlays Plugin we can provide the user (overlay) with more information about an image when the user hovers over the image. I have been using extensively this plugin in my websites. In this hands-on example I will be using Expression Web 4.0.This application is not a free application. You can use any HTML editor you like. You can use Visual Studio 2012 Express edition. You can download it here.  You can download this plugin from this link. I launch Expression Web 4.0 and then I type the following HTML markup (I am using HTML 5) <html lang="en"> <head>    <link rel="stylesheet" href="ImageOverlay.css" type="text/css" media="screen" />    <script type="text/javascript" src="jquery-1.8.3.min.js"></script>    <script type="text/javascript" src="jquery.ImageOverlay.min.js"></script>         <script type="text/javascript">        $(function () {            $("#Liverpool").ImageOverlay();        });    </script>   </head><body>    <ul id="Liverpool" class="image-overlay">        <li>            <a href="www.liverpoolfc.com">                <img alt="Liverpool" src="championsofeurope.jpg" />                <div class="caption">                    <h3>Liverpool Football club</h3>                    <p>The greatest club in the world</p>                </div>            </a>        </li>    </ul></body></html> This is a very simple markup. I have added references to the JQuery library (current version is 1.8.3) and the JQuery Overlays Plugin. Then I add 1 image in the element with "id=Liverpool". There is a caption class as well, where I place the text I want to show when the mouse hovers over the image. The caption class and the Liverpool id element are styled in the ImageOverlay.css file that can also be downloaded with the plugin.You can style the various elements of the html markup in the .css file. The Javascript code that makes it all happen follows.   <script type="text/javascript">        $(function () {            $("#Liverpool").ImageOverlay();        });    </script>        I am just calling the ImageOverlay function for the Liverpool ID element.The contents of ImageOverlay.css file follow .image-overlay { list-style: none; text-align: left; }.image-overlay li { display: inline; }.image-overlay a:link, .image-overlay a:visited, .image-overlay a:hover, .image-overlay a:active { text-decoration: none; }.image-overlay a:link img, .image-overlay a:visited img, .image-overlay a:hover img, .image-overlay a:active img { border: none; }.image-overlay a{    margin: 9px;    float: left;    background: #fff;    border: solid 2px;    overflow: hidden;    position: relative;}.image-overlay img{    position: absolute;    top: 0;    left: 0;    border: 0;}.image-overlay .caption{    float: left;    position: absolute;    background-color: #000;    width: 100%;    cursor: pointer;    /* The way to change overlay opacity is the follow properties. 
Opacity is a tricky issue due to        longtime IE abuse of it, so opacity is not offically supported - use at your own risk.         To play it safe, disable overlay opacity in IE. */    /* For Firefox/Opera/Safari/Chrome */    opacity: .8;    /* For IE 5-7 */    filter: progid:DXImageTransform.Microsoft.Alpha(Opacity=80);    /* For IE 8 */    -MS-filter: "progid:DXImageTransform.Microsoft.Alpha(Opacity=80)";}.image-overlay .caption h1, .image-overlay .caption h2, .image-overlay .caption h3,.image-overlay .caption h4, .image-overlay .caption h5, .image-overlay .caption h6{    margin: 10px 0 10px 2px;    font-size: 26px;    font-weight: bold;    padding: 0 0 0 5px;    color:#92171a;}.image-overlay p{    text-indent: 0;    margin: 10px;    font-size: 1.2em;} It couldn't be any simpler than that. I view my simple page in Internet Explorer 10 and it works as expected. I have tested this simple solution in all major browsers and it works fine.Have a look at the picture below. You can test it yourself and see the results in your favorite browser. Hope it helps!!!

    Read the article

  • parallel_for_each from amp.h – part 1

    - by Daniel Moth
    This post assumes that you've read my other C++ AMP posts on index<N> and extent<N>, as well as about the restrict modifier. It also assumes you are familiar with C++ lambdas (if not, follow my links to C++ documentation).
    Basic structure and parameters
    Now we are ready for part 1 of the description of the new overload for the concurrency::parallel_for_each function. The basic new parallel_for_each method signature returns void and accepts two parameters: a grid<N> (think of it as an alias to extent) and a restrict(direct3d) lambda, whose signature is such that it returns void and accepts an index of the same rank as the grid. So it looks something like this (with generous returns for more palatable formatting), assuming we are dealing with a 2-dimensional space:
    // some_code_A
    parallel_for_each(
        g, // g is of type grid<2>
        [ ](index<2> idx) restrict(direct3d)
        {
            // kernel code
        }
    );
    // some_code_B
    The parallel_for_each will execute the body of the lambda (which must have the restrict modifier) on the GPU. We also call the lambda body the "kernel". The kernel will be executed multiple times, once per scheduled GPU thread. The only difference in each execution is the value of the index object (aka the GPU thread ID in this context) that gets passed to your kernel code. The number of GPU threads (and the values of each index) is determined by the grid object you pass, as described next.
    You know that grid is simply a wrapper on extent. In this context, one way to think about it is that the extent generates a number of index objects. So for the example above, if your grid was set up by some_code_A as follows:
    extent<2> e(2,3); grid<2> g(e);
    ...then given that e.size()==6, e[0]==2, and e[1]==3, the six index<2> objects it generates (and hence the values that your lambda would receive) are: (0,0) (1,0) (0,1) (1,1) (0,2) (1,2)
    So what the above means is that the lambda body with the algorithm that you wrote will get executed 6 times, and the index<2> object you receive each time will have one of the values just listed above (of course, each one will only appear once, the order is indeterminate, and they are likely to call your code at the same exact time). Obviously, in real GPU programming, you'd typically be scheduling thousands if not millions of threads, not just 6. If you've been following along you should be thinking: "that is all fine and makes sense, but what can I do in the kernel since I passed nothing else meaningful to it, and it is not returning any values out to me?"
    Passing data in and out
    It is a good question, and in data parallel algorithms indeed you typically want to pass some data in, perform some operation, and then return some results out. The way you pass data into the kernel is by capturing variables in the lambda (again, if you are not familiar with them, follow the links about C++ lambdas), and the way you use data after the kernel is done executing is simply by using those same variables. In the example above, the lambda was written in a fairly useless way with an empty capture list: [ ](index<2> idx) restrict(direct3d), where the empty square brackets mean that no variables were captured. If instead I write it like this: [&](index<2> idx) restrict(direct3d), then all variables in the some_code_A region are made available to the lambda by reference, but as soon as I try to use any of those variables in the lambda, I will receive a compiler error.
    This has to do with one of the direct3d restrictions, where only one type can be captured by reference: objects of the new concurrency::array class that I'll introduce in the next post (suffice for now to think of it as a container of data). If I write the lambda line like this: [=](index<2> idx) restrict(direct3d), all variables in the some_code_A region are made available to the lambda by value. This works for some types (e.g. an integer), but not for all, as per the restrictions for direct3d. In particular, no useful data classes work except for one new type we introduce with C++ AMP: objects of the new concurrency::array_view class, which I'll introduce in the post after next. Also note that if you capture some variable by value, you could use it as input to your algorithm, but you wouldn’t be able to observe changes to it after the parallel_for_each call (e.g. in the some_code_B region, since it was passed by value) – the exception to this rule is the array_view, since (as we'll see in a future post) it is a wrapper for data, not a container. Finally, for completeness, you can write your lambda, e.g. like this: [av, &ar](index<2> idx) restrict(direct3d), where av is a variable of type array_view and ar is a variable of type array - the point being you can be very specific about which variables you capture and how.
    So it looks like from a large data perspective you can only capture array and array_view objects in the lambda (that is how you pass data to your kernel) and then use the many threads that call your code (each with a unique index) to perform some operation. You can also capture some limited types by value, as input only. When the last thread completes execution of your lambda, the data in the array_view or array is ready to be used in the some_code_B region. We'll talk more about all this in future posts…
    (a)synchronous
    Please note that the parallel_for_each executes as if synchronous to the calling code, but in reality, it is asynchronous. I.e. once the parallel_for_each call is made and the kernel has been passed to the runtime, the some_code_B region continues to execute immediately on the CPU thread, while in parallel the kernel is executed by the GPU threads. However, if you try to access the (array or array_view) data that you captured in the lambda in the some_code_B region, your code will block until the results become available. Hence the correct statement: the parallel_for_each is as-if synchronous in terms of visible side-effects, but asynchronous in reality.
    That's all for now; we'll revisit the parallel_for_each description once we properly introduce array and array_view – coming next. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Entity Framework 6: Alpha2 Now Available

    - by ScottGu
    The Entity Framework team recently announced the 2nd alpha release of EF6. The alpha 2 package is available for download from NuGet. Since this is a pre-release package, make sure to select “Include Prereleases” in the NuGet package manager, or execute the following from the package manager console to install it:

    PM> Install-Package EntityFramework -Pre

This week’s alpha release includes a bunch of great improvements in the following areas:

Async language support is now available for queries and updates when running on .NET 4.5.
Custom conventions now provide the ability to override the default conventions that Code First uses for mapping types, properties, etc. to your database.
Multi-tenant migrations allow the same database to be used by multiple contexts, with full Code First Migrations support for independently evolving the model backing each context.
Using Enumerable.Contains in a LINQ query is now handled much more efficiently by EF and the SQL Server provider, resulting in greatly improved performance.
All features of EF6 (except async) are available on both .NET 4 and .NET 4.5. This includes support for enums and spatial types and the performance improvements that were previously only available when using .NET 4.5.
Start-up time for many large models has been dramatically improved thanks to improved view generation performance.

Below are some additional details about a few of the improvements above:

Async Support

.NET 4.5 introduced the Task-Based Asynchronous Pattern, which uses the async and await keywords to help make writing asynchronous code easier. EF 6 now supports this pattern. This is great for ASP.NET applications, as database calls made through EF can now be processed asynchronously – avoiding any blocking of worker threads. This can increase scalability on the server by allowing more requests to be processed while waiting for the database to respond. The following code shows an MVC controller that is querying a database for a list of location entities:

    public class HomeController : Controller
    {
        LocationContext db = new LocationContext();

        public async Task<ActionResult> Index()
        {
            var locations = await db.Locations.ToListAsync();
            return View(locations);
        }
    }

Notice above the call to the new ToListAsync method with the await keyword. When the web server reaches this code it initiates the database request, but rather than blocking while waiting for the results to come back, the thread that is processing the request returns to the thread pool, allowing ASP.NET to process another incoming request with the same thread. In other words, a thread is only consumed when there is actual processing work to do, allowing the web server to handle more concurrent requests with the same resources. A more detailed walkthrough covering async in EF is available with additional information and examples. Also a walkthrough is available showing how to use async in an ASP.NET MVC application.

Custom Conventions

When working with EF Code First, the default behavior is to map .NET classes to tables using a set of conventions baked into EF. For example, Code First will detect properties that end with “ID” and configure them automatically as primary keys. However, sometimes you cannot or do not want to follow those conventions and would rather provide your own. For example, maybe your primary key properties all end in “Key” instead of “Id”.
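
As an illustration of that last scenario, the sketch below is not from the announcement; Customer and CustomerKey are hypothetical names, and the exact shape of the conventions API in alpha 2 may differ from what eventually shipped. It configures any property whose name ends in “Key” as the primary key:

    using System.Data.Entity;

    public class MyContext : DbContext
    {
        public DbSet<Customer> Customers { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // Treat any property whose name ends in "Key" as the primary key.
            modelBuilder.Properties()
                .Where(p => p.Name.EndsWith("Key"))
                .Configure(p => p.IsKey());
        }
    }

    public class Customer
    {
        public int CustomerKey { get; set; }   // picked up as the primary key by the convention above
        public string Name { get; set; }
    }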
Custom conventions allow the default conventions to be overridden, or new conventions to be added, so that Code First can map by convention using whatever rules make sense for your project. The following code demonstrates using custom conventions to set the precision of all decimals to 5. As with other Code First configuration, this code is placed in the OnModelCreating method, which is overridden on your derived DbContext class:

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Properties<decimal>()
            .Configure(x => x.HasPrecision(5));
    }

But what if there are a couple of places where a decimal property should have a different precision? Just as with all the existing Code First conventions, this new convention can be overridden for a particular property simply by explicitly configuring that property using either the fluent API or a data annotation. A more detailed description of custom Code First conventions is available here.

Community Involvement

I blogged a while ago about EF being released under an open source license. Since then a number of community members have made contributions, and these are included in EF6 alpha 2. Two examples of community contributions are:

AlirezaHaghshenas contributed a change that increases the startup performance of EF for larger models by improving the performance of view generation. The change means that it is less often necessary to use pre-generated views.

UnaiZorrilla contributed the first community feature to EF: the ability to load all Code First configuration classes in an assembly with a single method call like the following:

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Configurations
            .AddFromAssembly(typeof(LocationContext).Assembly);
    }

This code will find and load all the classes that inherit from EntityTypeConfiguration<T> or ComplexTypeConfiguration<T> in the assembly where LocationContext is defined. This reduces the amount of coupling between the context and the Code First configuration classes, and is also a very convenient shortcut for large models.

Other upcoming features in EF6

Lots of information about the development of EF6 can be found on the EF CodePlex site, including a roadmap showing the other features that are planned for EF6. One of the nice upcoming features is connection resiliency, which will automate the process of retrying database operations on transient failures that are common in cloud environments and with databases such as Windows Azure SQL Database. Another often-requested feature that will be included in EF6 is the ability to map stored procedures to query and update operations on entities when using Code First.

Summary

EF6 is the first open source release of Entity Framework being developed in CodePlex. The alpha 2 preview release of EF6 is now available on NuGet and contains some really great features for you to try. The EF team is always looking for feedback from developers – especially on the new features such as custom Code First conventions and async support. To provide feedback you can post a comment on the EF6 alpha 2 announcement post, start a discussion, or file a bug on the CodePlex site.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
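
As a footnote to the Custom Conventions section above, here is a minimal sketch of the per-property override that was mentioned there. It is not from the announcement itself, and the Order entity and Total property are hypothetical names used only for illustration:

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // The decimal precision convention from the announcement above would go here.

        // An explicit per-property configuration wins over the convention:
        modelBuilder.Entity<Order>()
            .Property(o => o.Total)
            .HasPrecision(18, 2);
    }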

    Read the article

  • FOUR questions to ask if you are implementing DATABASE-AS-A-SERVICE

    - by Sudip Datta
    During my ongoing tenure at Oracle, I have met all types of DBAs. Happy DBAs, unhappy DBAs, proud DBAs, risk-loving DBAs, cautious DBAs. These days, as Database-as-a-Service (DBaaS) becomes more mainstream, I find some complacent DBAs who are basking in their achievement of having implemented DBaaS. Some others, however, are not that happy. They grudgingly complain that they did not have much of a say in the implementation; they simply had to follow what their cloud architects (mostly infrastructure admins) offered them. In most cases it would be a database wrapped inside a VM that would be labeled as “Database as a Service”. In other cases, it would be existing brute-force automation simply exposed in a portal. As much as I think that there is more to DBaaS than those approaches, and often get tempted to propose Enterprise Manager 12c, I try to be objective. Neither do I want to dampen the spirit of the happy ones, nor do I want to stoke the pain of the unhappy ones. As I mentioned in my previous post, I don’t deny that vanilla automation could be useful. I like virtualization too for what it has helped us accomplish in terms of resource management, but we need to scrutinize its merit on a case-by-case basis and apply it meaningfully. For DBAs who either claim to have implemented DBaaS or are planning to do so, I simply want to provide four key questions to ponder:

1. Does it make life easier for your end users?

Database-as-a-Service can have several types of end users – junior DBAs, QA engineers, developers – each with their own skill set. The objective of DBaaS is to make their life simple, so that they can focus on their core responsibilities without having to worry about additional stuff. For example, if you are a developer using Oracle Application Express (APEX), you want to deal with schema, objects and PL/SQL code, and not with datafiles or listener configuration. If you are a QA engineer needing database copies for functional testing, you do not want to deal with underlying operating system patching and compliance issues. The question to ask, therefore, is whether DBaaS makes life easier for those users. It is often convenient to give them VM shells to deal with, a la Amazon EC2 IaaS, but is that what they really want? Is it a productive use of a developer's time if he needs to apply RPM errata to his Linux operating system? Asking him to keep the underlying operating system current is like making a guest responsible for a restaurant's decor.

2. Does it make life easier for your administrators?

Cloud, in general, is supposed to free administrators from attending to mundane tasks like provisioning services for every single end-user request. It is supposed to enable a readily consumable platform and enforce standardization in the process. For example, if a Service Catalog exposes DBaaS of specific database versions and configurations, it, by its very nature, enforces certain discipline and standardization within the IT environment. What if, instead of specific database configurations, the cloud allowed each end user to create databases of their liking, resulting in hundreds of version and patch levels and thousands of individual databases? Therefore the right question to ask is whether the unwanted consequence of DBaaS is OS and database sprawl. And if so, who is responsible for tracking them, backing them up, administering them? Studies have shown that these administrative overheads increase exponentially with new targets, and it could result in a management nightmare.
That leads us to our next question.

3. Does it satisfy your Security Officers and Compliance Auditors?

Compliance Auditors need to know who did what and when. They also want the cloud platform to be secure, so that end users have little freedom to tamper with it. Dealing with VM sprawl is not the easiest of challenges, let alone dealing with VMs as they keep getting reconfigured and moved around. This leads to the proverbial needle-in-the-haystack problem, and all it takes is one needle to cause a serious compliance issue in the enterprise. The bottom line is that flexibility and agility should not come at the expense of compliance, and it is very important to get the balance right. Can we have security and isolation without creating compliance challenges? Instead of a ‘one size fits all’ approach, i.e. OS-level isolation, can we think smartly about database isolation or schema-based isolation? This is where the appropriate resource modeling needs to be applied. The usual systems management vendors out there, with their heterogeneous common-denominator approach, have compromised on these semantics. If you follow Enterprise Manager’s DBaaS solution, you will see that we have considered different models, not precluding virtualization, for different customer use cases. The judgment to use virtual assemblies versus databases on physical RAC versus Schema-as-a-Service in a single database should be governed by the needs of the applications and not by putting compliance considerations on the back burner.

4. Does it satisfy your CIO?

Finally, does it satisfy your higher-ups? As the sponsor of the cloud initiative, the CIO is expected to lead an IT transformation project, not merely run-of-the-mill IT operations. Simply virtualizing server resources and delivering them through self-service is a good start, but hardly transformational. CIOs may appreciate the instant benefit from server consolidation, but studies have revealed that the ROI from consolidation would flatten out at 20-25%. The question would be: what next? As we go higher up in the stack, the need to virtualize, segregate and optimize shifts to those layers that are more palpable to the business users. As Sushil Kumar noted in his blog post, "the most important thing to note here is the enterprise private cloud is not just an IT project, rather it is a business initiative to create an IT setup that is more aligned with the needs of today's dynamic and highly competitive business environment." Business users could not care less about infrastructure consolidation or virtualization – they care about business agility and service level assurance. Last but not least, a lot of CIOs get miffed if we ask them to throw away their existing hardware investments for implementing DBaaS. At Oracle, we always emphasize the freedom to choose a platform; hence Enterprise Manager’s DBaaS solution is platform neutral. It can work on any operating system (that the agent is certified on), on Oracle’s hardware as well as 3rd-party hardware.

As a parting note, I urge you to remember these 4 questions. Remember that your satisfaction as an implementer lies in the satisfaction of others.

    Read the article

  • Meraki wireless access point disconnects clients

    - by resolver101
    We have a Meraki MR16 Cloud Managed AP and it disconnects certain clients. The clients with Intel wireless cards work without any disconnects. The Meraki reports the following in its event log:

    Sep 4 09:55:47 WPA authentication
    Sep 4 09:55:47 802.11 association channel: 11, rssi: 64
    Sep 4 09:55:38 802.11 disassociation client has left AP
    Sep 4 09:55:38 WPA deauthentication vap: 0, radio: 0, aid: 1633956416

An example of a wireless network card that the Meraki disconnects is the Realtek RTL8191SE 802.11b/g/n WiFi Adapter. The Realtek laptop sits 2 meters away from the AP, has a strong signal, and the Meraki reports minimal interference. Any ideas why it disconnects non-Intel wireless network cards?

    Read the article

  • Uncompiled WCF on IIS7: The type could not be found

    - by Jimmy
    Hello, I've been trying to follow this tutorial for deploying a WCF sample to IIS. I can't get it to work. This is a hosted site, but I do have IIS Manager access to the server. However, in step 2 of the tutorial, I can't "create a new IIS application that is physically located in this application directory". I can't seem to find a menu item, context menu item, or anything else to create a new application. I've been right-clicking everywhere like crazy and still can't figure out how to create a new app. I suppose that's probably the root issue, but I tried a few other things (described below) just in case that actually is not the issue. This is "deployed" at http://test.com.cws1.my-hosting-panel.com/IISHostedCalcService/Service.svc. The error says:

    The type 'Microsoft.ServiceModel.Samples.CalculatorService', provided as the Service attribute value in the ServiceHost directive, or provided in the configuration element system.serviceModel/serviceHostingEnvironment/serviceActivations could not be found.

I also tried to create a virtual dir (IISHostedCalc) in dotnetpanel that points to IISHostedCalcService. When I navigate to http://test.com.cws1.my-hosting-panel.com/IISHostedCalc/Service.svc, there is a different error:

    This collection already contains an address with scheme http. There can be at most one address per scheme in this collection.

As per the tutorial, there was no compiling involved; I just dropped the following files on the server inside the folder IISHostedCalcService: service.svc, Web.config and Service.cs.

service.svc contains:

    <%@ServiceHost language=c# Debug="true" Service="Microsoft.ServiceModel.Samples.CalculatorService"%>

(I tried with quotes around the c# attribute, as this looks a little strange without quotes, but it made no difference.)

Web.config contains:

    <?xml version="1.0" encoding="utf-8" ?>
    <configuration>
      <system.serviceModel>
        <services>
          <service name="Microsoft.ServiceModel.Samples.CalculatorService">
            <!-- This endpoint is exposed at the base address provided by host: http://localhost/servicemodelsamples/service.svc -->
            <endpoint address="" binding="wsHttpBinding" contract="Microsoft.ServiceModel.Samples.ICalculator" />
            <!-- The mex endpoint is exposed at http://localhost/servicemodelsamples/service.svc/mex -->
            <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
          </service>
        </services>
      </system.serviceModel>
      <system.web>
        <customErrors mode="Off"/>
      </system.web>
    </configuration>

Service.cs contains:

    using System;
    using System.ServiceModel;

    namespace Microsoft.ServiceModel.Samples
    {
        [ServiceContract]
        public interface ICalculator
        {
            [OperationContract]
            double Add(double n1, double n2);
            [OperationContract]
            double Subtract(double n1, double n2);
            [OperationContract]
            double Multiply(double n1, double n2);
            [OperationContract]
            double Divide(double n1, double n2);
        }

        public class CalculatorService : ICalculator
        {
            public double Add(double n1, double n2) { return n1 + n2; }
            public double Subtract(double n1, double n2) { return n1 - n2; }
            public double Multiply(double n1, double n2) { return n1 * n2; }
            public double Divide(double n1, double n2) { return n1 / n2; }
        }
    }
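
For reference, creating an IIS application for an existing directory can also be done from the command line with appcmd. This is only a sketch: the site name and physical path below are placeholders for the actual hosting setup, and a hosted server may not allow running it:

    %windir%\system32\inetsrv\appcmd add app /site.name:"Default Web Site" /path:/IISHostedCalcService /physicalPath:"C:\inetpub\wwwroot\IISHostedCalcService"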

    Read the article

  • Du tells me it can't find the current directory?

    - by C. Ross
    I'm on AIX, and in some directories I can't use the du command. I get the following error message:

    du: 0653-175 Cannot find the current directory.

Obviously the current directory exists, and I have permissions to it. I can list the directory and create files in it, both before and after running du. What could possibly be wrong here? The du command works just fine in my home directory. A quick Google search turns up a bunch of forum posts about the same problem, but no clear answers.

    Read the article

  • .NET Framework 4 updates breaking MMC.exe and other CLR.dll Exceptions

    - by Fox
    I've seen this issue floating around the net for the last few weeks and I'm facing exactly the same problem. My servers are set to auto-install updates using Windows Update (not clever, I know), and since about 2 months ago, I've been getting strange exceptions. The first thing that happens is that MMC.exe just crashes randomly, and sometimes on startup of the console. The exception in the Windows Application log is as follows:

    Faulting application name: mmc.exe, version: 6.1.7600.16385, time stamp: 0x4a5bc808
    Faulting module name: mscorwks.dll, version: 2.0.50727.5448, time stamp: 0x4e153960

Secondly, on the same server, I have some custom Windows services which constantly crash with exceptions:

    Faulting application name: Myservice.exe, version: 1.0.0.0, time stamp: 0x4f44cb11
    Faulting module name: clr.dll, version: 4.0.30319.239, time stamp: 0x4e181a6d
    Exception code: 0xc0000005
    Fault offset: 0x000378aa

The exception is not in my code. I've tested and retested it. My server has the following .NET Framework updates installed: Does anyone have any idea?

    Read the article

  • Error 0x6ba (RPC server is unavailable) when running sfc /scannow on Windows XP in Safe Mode

    - by leeand00
    I think that my mup.sys file is corrupt. I received the following error when trying to access a network share that was located on my Windows 7 box from my Windows XP box:

    No network provider accepted the given network path.

After reading this, I attempted to follow the directions by rebooting my computer into Safe Mode. After I run "sfc /scannow" I receive the following error message:

    The specific error code is 0x000006ba [The RPC server is unavailable].

When I go into Services, it says that the Remote Procedure Call (RPC) service is running but that the Remote Procedure Call (RPC) Locator is not running. When I try to start the Remote Procedure Call (RPC) Locator, it gives me an error saying:

    Error 1084: This service cannot be started in Safe Mode

What can I do about this, if it can't find the Remote Procedure Call service in Safe Mode?

    Read the article

  • Migration of VM from Hyper-V to Hyper-V R2 - Pass through disks

    - by Andrew Gillen
    I am trying to migrate a VM which is using two pass-through disks from a legacy Hyper-V cluster to a new R2 cluster. The migrated VM cannot use the pass-through disks, though. The guest OS (2008 R2) doesn't seem to like the disk and eventually tries to format it instead of mounting it. The migration process I have been using for all my VMs is to export the VM to a new LUN, add that new LUN to the new cluster, import the VM off it in the Hyper-V console, and then make it highly available. I assumed I could do the same thing and just add the two pass-through disks to the new cluster and then attach them inside Hyper-V. Is there a process I need to follow to migrate pass-through disks that does not involve setting up new LUNs and robocopying the data over?

    Read the article
