Search Results

Search found 1068 results on 43 pages for 'richard ibarra ramirez'.


  • Community Outreach - Where Should I Go

    - by Roger Brinkley
    A few days ago I was talking to a person new to community development, and they asked me what guidelines I use to determine the worthiness of a particular event. After our conversation was over I thought about it a little more and figured out there are three ways to determine whether any event (be it a conference, blog, podcast, or other social media) is worth doing: Transferability, Multiplication, and Impact.

    Transferability - Is what I have to say useful to the people who are going to hear it? For instance, consider a company whose product offering can connect up using a number of languages like Scala, Groovy, or Java. Sending a Scala expert to talk about Scala and the product is not transferable to a Java User Group, but a Java expert doing the same talk with a Java slant is. Similarly, talking about JavaFX to any Java User Group meeting in Brazil was pretty much a wasted effort until it was open sourced; once it was open sourced, it was well received. You can also look at transferability in relation to the subject matter you're dealing with. How transferable is a presentation that I create? Can I, or a technical writer on the staff, turn it into a technical document? Could it be converted into some type of screencast? If we have a regular podcast, can we make a reference to the document, catch the high points, or turn it into an interview? Is there a way of using this in the sales group? In other words, is the document purely one-dimensional, or can it be re-purposed in other forms?

    Multiplication - On every trip I'm looking for 2 to 5 solid connections that I can make with developers. These are long-term connections, because I know that once that relationship is established it will lead to another 2 to 5 from that connection, and within a couple of years we're talking about some 100 connections from just one developer. For instance, when I was working on JavaHelp in 2000 I hired a science teacher with a programming background. We've developed a very tight relationship over the years, though we rarely see each other more than once a year. But at this JavaOne, one of his employees came up to me and said, "Richard (Rick Hard in Czech) told me to tell you that he couldn't make it to JavaOne this year, but if I saw you, to tell you hi". Another example is from my Mobile & Embedded days in Brazil. On our very first FISL trip, about 5 years ago, there were two university students who had created a project called "Marge", a Bluetooth framework that made connecting Bluetooth devices easier. I invited them to a "Sun" dinner that evening. Originally they were planning on leaving that afternoon, but they changed their plans, recognizing the opportunity. Their eyes were as big as saucers when they realized the level of engineers at the meeting. They went home and started a JUG in Florianópolis that we've visited more than a couple of times. One of them went to work for a Brazilian government lab, something like Berkeley Lab, the MIT labs, the Johns Hopkins Applied Physics Laboratory, or Lincoln Labs in the US. That presented us with an opportunity to show Embedded Java as a possibility for some of the work they were doing there.

    Impact - The final criterion is how life-changing what I'm going to say will be to the individuals I'm reaching. A t-shirt is just a token, but when I reach down and tug at their developer hearts, then I know I've succeeded. I'll never forget one time we flew all night to reach João Pessoa in northern Brazil. We arrived at 2am and went immediately to our hotel, only to be woken up at 6am to travel 2 hours by car to the presentation hall. When we arrived we were totally exhausted. Outside the facility there were 500 people lined up to hear 6 speakers for the day. That itself was uplifting. I delivered one of my favorite talks, on having passion: a talk on golf and embedded Java development called "Find your passion". When we finished, a couple of first-year students came up to me and said how much my talk had inspired them. FISL is another great example. I had been going about 4 years in a row. FISL is a very young group of developers, so capturing their attention is important. Several of the students will come back 2 or 3 years later and ask me questions about research or jobs. And then there's Louis. Louis is one of my favorite Brazilians; I can only describe him as a big Brazilian teddy bear. I see him every year at FISL. He works primarily in Java EE, but he's attended every single one of my talks over the last 4 years. I can't tell you why, but he always greets me and gives me a hug. For some reason I've had a real impact. And of course, when it comes to impact, you don't just measure a presentation but every single interaction you have at an event. It's the hallway conversations, the booth conversations, but more importantly it's the conversations at dinner tables or in the cars when you're being transported to an event. There's a good story that illustrates this. Last year in the spring I was traveling to Goiânia in Brazil. I've been there many times and the leaders there know me well. One young man had picked me up at the airport on more than one occasion. We were going out to dinner one evening and he brought his girlfriend along. One thing led to another and I eventually asked him, in front of her, "Why haven't you asked her to marry you?" There were all kinds of excuses, and she just looked at him and smiled. When I came back in December for JavaOne he sought me out. "I just want to tell you that I thought a lot about what you said, and I asked her to marry me. We're getting married next spring." Sometimes just one presentation is all it takes to make an impact; other times it takes years. Some impacts are directly related to the company and some are more personal in nature. It doesn't matter which it is, because it's having the impact that matters.

    Read the article

  • Delivering the Integrated Portal Experience!

    - by Michael Snow
    Guest post by Richard Maldonado, Principal Product Manager, Oracle WebCenter Portal

    Organizations are still struggling to standardize on a user interaction platform which can meet the needs of all their target audiences. This has not only resulted in inefficient and inconsistent experiences for their users, but it also creates inefficiencies (productivity and costs) for the departments that manage the applications and information systems. Portals have historically been the unifying platform that provides IT with a common interface which can securely surface the most relevant interactions for a given user and/or group of users. However, organizations have found that the available technologies have either not provided the flexibility necessary to address all of their use cases, or they rely too much on IT resources to manage, maintain, and evolve.

    Empowering the Business Groups

    The core issue that IT departments face in delivering portal experiences is having enough resources to respond to and address the influx of requirements which come in from the business. Commonly, when a business group wants a new portal site established for their group, they submit a request to the IT department, which then assigns an administrator and/or developer to build it. Unfortunately, this approach is not scalable; it is a time-consuming activity which requires significant interaction between the business owner and the IT resource. A modern user interaction platform should empower the business groups by providing them tools which they can use to build and manage the portal experiences without the need for IT's involvement. And because business groups rarely have technical resources (developers) on staff, the tools must be easy enough that virtually any business user could use them. In addition, the tools must be powerful enough to allow them to build the experience that they need: creating a whole new portal, adding and managing pages and page hierarchy, managing user/group access, adding and modifying components within a page, and so on. This balance between ease-of-use and flexibility is key to the successful adoption of tools which will ultimately reduce the burden on IT, respond to the needs of the business, and deliver high-value experiences for the users.

    Ready or Not, Here They Come: Smartphones and Tablets

    Recently, several studies have highlighted that smartphone and tablet-style devices have overtaken PCs in both sales and usage. This shift is further driving organizations to re-evaluate how they deliver data, information, and applications to their users. Users expect the same level of access and interaction, but in ways which are optimized for the capabilities of the device they are using.

    Expect More

    With the ever-growing number of new IT projects and flat or shrinking budgets, organizations are looking for comprehensive solutions which can deliver integrated web experiences that are tailored for the users and optimized for mobile devices. Piecing together a number of point solutions is no longer an option. A modern portal technology should not only address the traditional needs of integrating and surfacing back-end applications and information, but should enable the business through easy-to-use tools and accelerate the delivery of mobile-optimized experiences.
    WebCenter in Action Series: Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter, featuring Qualcomm & Keste

    Read the article

  • Mr Flibble: As Seen Through a Lens, Darkly

    - by Phil Factor
    One of the rewarding things about getting involved with Simple-Talk has been meeting and working with some pretty daunting talents. I'd like to say that Dom Reed's talents are at the end of the visible spectrum, but then there is Richard, who pops up on national radio occasionally presenting intellectual programs; Andrew, master of the ukulele, with his pioneering local history work; and Tony, with his marathon running and his past as a university lecturer. However, Dom, who is Red Gate's head of creative design and who did the preliminary design work for Simple-Talk, has taken art photography to an extreme that was impossible before Photoshop. He's not the first person to take a photograph of himself every day for two years, but he is definitely the first to weave the results into a frightening narrative that veers from comedy to pathos, using all the arts of Photoshop to create a fictional character, Mr Flibble.

    Have a look at some of the Flickr pages:

    - Uncle Spike
    - The B-Men – Woolverine
    - The 2011 BoyZ iN Sink reunion tour turned out to be their last
    - Error 404 – Flibble not found

    Mr Flibble is not a normal type of alter-ego. We generally prefer to choose bronze-age warriors of impossibly magnificent physique and stamina; superheroes who bestride the world, scorning the forces of evil and anarchy in a series of noble and righteous quests. Not so Dom, whose Mr Flibble is vulnerable, and laid low by an addiction to toxic substances. His work has gained an international cult following and is used as course material on several photography courses. Although his work was for a while ignored by the more conventional world of 'art' photography, it became famous through the internet. His photos have received well over a million views on Flickr.

    It was definitely time to turn this work into a book, because the images have their maximum effect when seen in sequence. He has a Kickstarter project page, one of the first following the recent UK launch of the crowdfunding platform. The publication of the book should be a major event, and the £45 I shall divvy up will be one of the securest investments I shall ever make. The local news in Cambridge picked up on the project, and I can quote from the report by the excellent Cabume website, the source of tech news from the 'Cambridge cluster':

    Put really simply, Mr Flibble likes to dress up and take pictures of himself. One of the benefits of a split personality, however, is that Mr Flibble is supported in his endeavour by Reed's top-notch photography skills, supreme mastery of Photoshop and unflinching dedication to the cause. The duo have collaborated to take a picture every day for the past 730-plus days. It is not a big surprise that neither Mr Flibble nor Reed watches any TV: in addition to his full-time role as head of creativity at the Cambridge software house Red Gate Software and the two to five hours a day he spends taking the Mr Flibble shots, Reed also helps organise the… And now Reed is using Kickstarter to see if the world is ready for a Mr Flibble coffee table book. Judging by the early response, it is. At the time of writing, just a few days after it went live, 'I Drink Lead Paint: An absurd photography book by Mr Flibble' had raised £1,545 from 37 backers towards the £10,000 target it needs to reach by the Friday 30 November deadline.

    Following the standard Kickstarter template, Reed is offering a series of rewards based on the amount pledged, ranging from a Mr Flibble desktop wallpaper for pledges of £5 or more to a signed copy of the book for pledges of £45 or more, right up to a starring role in the book for £1,500. Mr Flibble is unquestionably one of the more deranged Kickstarter hopefuls, but don't think for a second that he doesn't have a firm grasp on the challenges he faces on the road to immortalisation on 150 gsm stock. Under the section 'risks and challenges' on his Kickstarter page his statement begins: "An angry horde of telepathic iguanas discover the world's last remaining stock of vintage lead paint and hold me to ransom. Gosh how I love to guzzle lead paint. Anyway… faced with such brazen bravado, I cower at the thought of taking on their combined might and die a sad and lonely Flibble deprived of my one and only true liquid love." At which point, Reed manages to wrestle away the keyboard, giving him the opportunity to present a slightly more cogent analysis of the obstacles the project must still overcome.

    We asked Reed a few questions about Mr Flibble's Kickstarter adventure and felt that his responses were worth publishing in full.

    Firstly, how did you manage it – holding down a full-time job and also conceiving and executing these ideas on a daily basis?

    I employed a small team of ferocious gerbils to feed me ideas on a daily basis. Whilst most of their ideas were incomprehensibly rubbish and usually revolved around food, just occasionally they'd give me an idea like my B-Men series. As a backup plan though, I found that the best way to generate ideas was to actually start taking photos. If I were to stand in front of the camera, pull a silly face, place a vegetable on my head or something else equally stupid, the resulting photo would typically spark an idea when I came to look at it. Sitting around idly trying to think of an idea was doomed to result in no ideas. I admit that I really struggled with time. I'm proud that I never missed a day, but it was definitely hard when you were late back from work, tired, or doing something socially on the same day. I don't watch TV, which I guess really helps, because I'd frequently be spending 2-5 hours taking and processing the photos every day.

    Are there any overlaps between software development and creative thinking?

    Software is an inherently creative business, and the speed at which it moves ensures you always have to find solutions to new things. Everyone in the team needs to be a problem solver. Has it helped me specifically with my photography? Probably. Working within teams that continually need to figure out new stuff keeps the brain feisty, I suppose, and I guess I'm continually exposed to a lot of possible sources of inspiration.

    How specifically will this Kickstarter project allow you to test the commercial appeal of your work, and do you plan to get the book into shops?

    It's taken a while to be confident saying it, but I know that people like the work that I do. I've had well over a million views of my pictures, many humbling comments, and I know I've garnered some loyal fans out there who anticipate my next photo. For me, this Kickstarter is about seeing if there's worth to my work beyond just making people smile. In an online world where there's an abundance of freely available content, can you hope to receive anything from what you do, or would people just move on to the next piece of content if you happen to ask for some support? A book has been the single most requested thing that people have asked me to produce, and it's something that I feel would showcase my work well. It's just hard to convince people in the publishing industry to take any kind of risk right now – they've been hit hard. If I can show that people would like my work enough to buy a book, then it sends a pretty clear message that publishers might hear, or it gives me the confidence to invest in myself a bit more – hard to do when you're riddled with self-doubt! I'd love to see my work in the shops, yes. I could see it being the thing that someone flips through idly as they're Christmas shopping, recognizing that it'd be just the perfect gift for their difficult-to-buy-for friend or relative. That said, working in the software industry means I'm clearly aware of how I could use technology to distribute my work, but I can't deny that there's something very appealing about having a physical thing to hold in your hands.

    If the project is successful, is there a chance that it could become a full-time job?

    At the moment that seems like a distant dream, as should this be successful, there are many more steps I'd need to take to reach any kind of business viability. Kickstarter seems exactly that – a way for people to help kick-start me into something that could take off. If people like my work and want me to succeed with it, then taking a look at my Kickstarter page (and hopefully pledging a bit of support) would make my elbows blush considerably.

    So there it is. An opportunity to open the wallet just a bit to ensure that one of the more unusual talents sees the light in the format it deserves.

    Read the article

  • PASS Summit Feedback

    - by Rob Farley
    PASS feedback came in last week. I also saw my dentist for some fillings... At the PASS Summit this year, I delivered a couple of regular sessions and a Lightning Talk. People told me they enjoyed them, but when the rankings came out, they showed that I didn't score particularly well. Brent Ozar was keen to discuss it with me.

    Brent: PASS speaker feedback is out. You did two sessions and a Lightning Talk. How did you go?

    Rob: Not so well actually, thanks for asking.

    Brent: Ha! Sorry. Of course you know that's why I wanted to discuss this with you. I was in one of your sessions at SQLBits in the UK a month before PASS, and I thought you rocked. You've got a really good and distinctive delivery style. Then I noticed your talks were ranked in the bottom quarter of the Summit ratings and wanted to discuss it.

    Rob: Yeah, I know. You did ask me if we could do this... I should explain – my presentation style is not the stereotypical IT conference one. I throw in jokes, and try to engage the audience thoroughly. I find many talks amazingly dry, and I guess I try to buck that trend. I also run training courses, and find that I get a lot of feedback from people thanking me for keeping things interesting. That said, I also get feedback criticising me for my style, and that's basically what's happened here. For the rest of this discussion, let's focus on my talk about the Incredible Shrinking Execution Plan, which I considered to be my main talk.

    Brent: I thought that session title was the very best one at the entire Summit, and I had it on my recommended sessions list. In four words, you managed to sum up the topic and your sense of humor. I read that and immediately thought, "People need to be in this session," and then it didn't score well. Tell me about your scores.

    Rob: The questions on the feedback form covered the usefulness of the information, the speaker's presentation skills, their knowledge of the subject, how well the session was described, the amount of time allocated, and the quality of the presentation materials.

    Brent: Presentation materials? But you don't do slides. Did they rate your thong?

    Rob: No-one saw my flip-flops in this talk, Brent. I created a script in Management Studio, and published that afterwards, but I think people will have scored that question based on the lack of slides. I wasn't expecting to do particularly well on that one. That was the only section that didn't have 5/5 as the most popular score.

    Brent: See, that sucks, because cookbook-style scripts are often some of my favorites. Adam Machanic's Service Broker workbench series helped me immensely when I was prepping for the MCM. As an attendee, I'd rather have a commented script than a slide deck. So how did you rank so low?

    Rob: When I look at the scores that you got (based on your blog post), you got very few scores below 3 – people who felt strongly enough about your talk to post a negative score. In my scores, between 5% and 10% were below 3 (except on the question about whether I knew my stuff – I guess I came across as knowledgeable).

    Brent: Wow – so quite a few people really didn't like your talk then?

    Rob: Yeah. Mind you, based on the comments, some people really loved it. I'd like to think that there would be a certain portion of the room who may have rated the talk as one of the best of the conference. Some of my comments included "amazing!", "Best presentation so far!", "Wow, best session yet", "fantastic" and "Outstanding!". I think lots of talks can be "Great", but not so many talks can be "Outstanding" without the word losing its meaning. One wrote "Pretty amazing presentation, considering it was completely extemporaneous."

    Brent: Extemporaneous, eh?

    Rob: Yeah. I guess they don't realise how much preparation goes into coming across as unprepared. In many ways it's much easier to give a written speech than to deliver a presentation without slides as a prompt.

    Brent: That delivery style, the really relaxed, casual, college-professor approach, was one of the things I really liked about your presentation at SQLBits. As somebody who presents a lot, I "get" it - I know how hard it is to come off as relaxed and comfortable with your own material. It's like improv done by jazz players and comedians - if you've never tried it, you don't realize how hard it is. People also don't realize how hard it is to make a tough subject fun.

    Rob: Yeah, well... There will be people writing comments on this post saying I wasn't trying to make the subject fun, and that I was making it all about me. Sometimes the style works, sometimes it doesn't. Most of the comments mentioned the fact that I tell jokes, some in a nice way, but some not so much (and it wasn't just a PASS thing - that's the mix of feedback I generally get). One comment at PASS was: "great stand up comedian - not what I'm looking for at pass", and there were certainly a few that said "too many jokes". I'm not trying to do stand-up – jokes are my way of engaging with the audience while I demonstrate some of the amazing things that the Query Optimizer can do if you write your queries the right way. Some people didn't think it was technical enough, but I've also had some people tell me that the concepts I'm explaining are deep and profound.

    Brent: To me, that's a hallmark of a great explanation - when someone says, "But of course it has to work that way - how could it work any other way? It seems so simple and logical." Well, sure it does when it's explained correctly, but now pick up any number of thick SQL Server books and try to understand the Redundant Joins concept. I guarantee it'll take more than 45 minutes.

    Rob: Some people in my audiences realise that, but definitely not everyone. There's only so much you can do to tell someone that something is profound; generally it's something they either have an epiphany about or not. I like to lull my audience into knowing what's going on, and then do something that surprises them. Gain their trust, build a rapport, and then show them the deeper truth of what just happened.

    Brent: So you've learned your lesson about presentation scores, right? From here on out, you're going to be dry, humorless, and all your presentations will consist of you reading bullet points off the screen.

    Rob: No Brent, I'm not. I'm also not going to suggest that most presentations at PASS are like that. No-one tries to present like that. There's a big space to occupy between "dry and humourless" and me. My difference is to focus on the relationship I have with the crowd, rather than focusing on delivering the perfect session. I want to see people smiling and know they're relaxed. I think most presenters focus on the material, which is completely reasonable and safe. I remember once hearing someone talking about product creation. They talked about mediocrity. They said that one of the worst things people can ever say about your product is that it's "good". What you want is for 10% of the world to love it enough to want to buy it. If 10% of the world gave me a dollar, I'd have more money than I could ever use (assuming it wasn't the SAME dollar they were giving me, I guess).

    Brent: It's the Raving Fans theory. It's better to have a small number of raving customers than a large number of almost-but-not-really customers who don't care that much about your product or service. I know exactly how you feel - when I got survey feedback from my Quest video presentation, where I was dressed up in a Richard Simmons costume, some of the attendees said I was unprofessional and distracting. Some of the attendees couldn't get enough and Photoshopped all kinds of stuff into the screen captures. On the whole, I probably didn't score that well, and I'm fine with that. It sucks to look at the scores though - do those lower scores bother you?

    Rob: Of course they do. It hurts deeply. I open myself up and give presentations in a very personal way. All presenters do that, and we all feel the pain of negative feedback. I hate coming 146th and 162nd out of 185, but I have to acknowledge that many sessions did worse still. Plus, once I feel the wounds have healed, I'll be able to remember that there are people in the world who rave about my presentation style, and figure that people will hopefully talk about me. One day maybe those people who don't like my presentation style will stay away and I might be able to score better. You don't pay to hear country music if you prefer western... Lots of people find chili too spicy, but it's still a popular food.

    Brent: But don't you want to appeal to everyone?

    Rob: I do, but I don't want to be lukewarm, as in Revelation 3:16. I'd rather disgust and be discussed. Well, maybe not 'disgust', but I don't want to conform. I'm not sure I've ever been one to do that. I try not to offend, but I definitely like to be different.

    Brent: Count me among your raving fans, sir. Where can we see you next?

    Rob: Considering I live in Adelaide in Australia, I'm not about to appear at anyone's local SQL Saturday. I'm still trying to plan which events I'll get to in 2011. I've submitted abstracts for TechEd North America, but won't hold my breath. I'm also considering the SQLBits conferences in the UK in April, PASS in October, and I'm sure I'll do some LiveMeeting presentations for user groups. Online, people can download some of my recent SQLBits presentations at http://bit.ly/RFSarg and http://bit.ly/Simplification, and they can download a 5-minute MP3 of my Lightning Talk at http://www.lobsterpot.com.au/files/Collation.mp3, in which I try to explain the idea behind collation, using thongs as an example.

    Brent: I was in the audience for http://bit.ly/RFSarg. That was a great presentation.

    Rob: Thanks, Brent. Now where's my dollar?

    Read the article

  • Professional Scrum Developer (.NET) Training in London

    - by Martin Hinshelwood
    On the 26th - 30th July, in Microsoft's offices in London, Adam Cogan from SSW will be presenting the first Professional Scrum Developer course in the UK. I will be teaching this course alongside Adam, and it is a fantastic experience. You are split into teams and go head-to-head to deliver units of potentially shippable work in four two-hour sprints.

    The Professional Scrum Developer course is the only course endorsed by both Microsoft and Ken Schwaber, and they have worked together very effectively in bringing this course to fruition. This course is the brainchild of Richard Hundhausen, a Microsoft Regional Director, and both Adam and I attended the Trainer Prep in Sydney when he was there earlier this year. He is a fantastic trainer, and no matter where you do this course you can be safe in the knowledge that he has trained and vetted all of the teachers. A tools version of Ken, if you will.

    Find a course and register | Download this syllabus | Download the Scrum Guide

    What is the Professional Scrum Developer course all about?

    The Professional Scrum Developer course is a unique and intensive five-day experience for software developers. The course guides teams on how to turn product requirements into potentially shippable increments of software using the Scrum framework, Visual Studio 2010, and modern software engineering practices. Attendees will work in self-organizing, self-managing teams using a common instance of Team Foundation Server 2010.

    Who should attend this course?

    This course is suitable for any member of a software development team - architect, programmer, database developer, tester, etc. Entire teams are encouraged to attend and experience the course together, but individuals are welcome too. Attendees will self-organize to form cross-functional Scrum teams. These teams require an aggregate of skills specific to the selected case study. Please see the last page of this document for specific details. Product Owners, ScrumMasters, and other stakeholders are welcome too, but keep in mind that everyone who attends will be expected to commit to work and pull their weight on a Scrum team.

    What should you know by the end of the course?

    Scrum will be experienced through a combination of lecture, demonstration, discussion, and hands-on exercises. Attendees will learn how to do Scrum correctly while being coached and critiqued by the instructor, in the following topic areas:

    - Form effective teams
    - Explore and understand legacy "Brownfield" architecture
    - Define quality attributes, acceptance criteria, and "done"
    - Create automated builds
    - Handle software hotfixes
    - Verify that bugs are identified and eliminated
    - Plan releases and sprints
    - Estimate product backlog items
    - Create and manage a sprint backlog
    - Hold an effective sprint review
    - Improve your process by using retrospectives
    - Use emergent architecture to avoid technical debt
    - Use Test Driven Development as a design tool
    - Set up and leverage continuous integration
    - Use Test Impact Analysis to decrease testing times
    - Manage SQL Server development in an Agile way
    - Use .NET and T-SQL refactoring effectively
    - Build, deploy, and test SQL Server databases
    - Create and manage test plans and cases
    - Create, run, record, and play back manual tests
    - Set up a branching strategy and branch code
    - Write more maintainable code
    - Identify and eliminate people and process dysfunctions
    - Inspect and improve your team's software development process

    What does the week look like?

    This course is a mix of lecture, demonstration, group discussion, simulation, and hands-on software development. The bulk of the course will be spent working as a team on a case study application, delivering increments of new functionality in mini-sprints. Here is the week at a glance: Monday morning and most of the day Friday will be spent with the computers powered off, so you can focus on sharpening your game of Scrum and avoiding the common pitfalls when implementing it.

    The Sprints

    Timeboxing is a critical concept in Scrum as well as in this course. We expect each team and student to understand and obey all of the timeboxes. The timebox duration will always be clearly displayed during each activity. Expect the instructor to enforce it. Each of the half-day sprints will roughly follow this schedule:

    - Instruction (60 minutes): presentation and demonstration of new and relevant tools and practices
    - Sprint planning meeting (10 minutes): product owner presents the backlog; each team commits to delivering functionality
    - Sprint planning meeting (10 minutes): each team determines how to build the functionality
    - The sprint (120 minutes): the team self-organizes and self-manages to complete its tasks
    - Sprint review meeting (30 minutes): each team presents its increment of functionality to the other teams
    - Sprint retrospective (10 minutes): a group retrospective meeting is held to inspect and adapt

    Each team is expected to self-organize and manage their own work during the sprint. Pairing is highly encouraged. The instructor/product owner will be available if there are questions or impediments, but will be hands-off by default. You should be prepared to communicate and work with your team members in order to achieve your sprint goal. If you have development-related questions or get stuck, your partner or team should be your first level of support.

    Module 1: INTRODUCTION

    This module provides a chance for the attendees to get to know the instructors as well as each other. The Professional Scrum Developer program, as well as the day-by-day agenda, will be explained. Finally, the Scrum team will be selected and assembled so that the forming, storming, norming, and performing can begin. Topics: trainer and student introductions; the Professional Scrum Developer program; agenda; logistics; team formation; retrospective.

    Module 2: SCRUMDAMENTALS

    This module provides a level-setting understanding of the Scrum framework, including the roles, timeboxes, and artifacts. The team will then experience Scrum firsthand by simulating a multi-day sprint of product development, including planning, review, and retrospective meetings. Topics: Scrum overview; Scrum roles; Scrum timeboxes (ceremonies); Scrum artifacts; simulation; retrospective. It's required that you read Ken Schwaber's Scrum Guide in preparation for this module and course.

    Module 3: IMPLEMENTING SCRUM IN VISUAL STUDIO 2010

    This module demonstrates how to implement Scrum in Visual Studio 2010 using a Scrum process template*. The team will learn the mapping between the Scrum concepts and how they are implemented in the tool. After connecting to the shared Team Foundation Server, the team members will then return to the simulation - this time using Visual Studio to manage their product development. Topics: mapping Scrum to Visual Studio 2010; User Story work items; Task work items; Bug work items; demonstration; simulation; retrospective.

    Module 4: THE CASE STUDY

    In this module the team is introduced to their problem domain for the week. A kickoff meeting by the Product Owner (the instructor) will set the stage for the why and the what of the upcoming sprints. The team will then define the quality attributes of the project and their definition of "done". The legacy application code will be downloaded, built, and explored, so that any bugs can be discovered and reported. Topics: introduction to the case study; downloading, building, and exploring the application source code; defining the quality attributes for the project; defining "done"; how to file effective bugs in Visual Studio 2010; retrospective.

    Module 5: HOTFIX

    This module drops the team directly into a Brownfield (legacy) experience by forcing them to analyze the existing application's architecture and code in order to locate and fix the Product Owner's high-priority bug(s). The team will learn best practices around finding, testing, fixing, validating, and closing a bug. Topics: using Architecture Explorer to visualize and explore; creating a unit test to validate the existence of a bug; finding and fixing the bug; validating and closing the bug; retrospective.

    Module 6: PLANNING

    This short module introduces the team to release and sprint planning within Visual Studio 2010. The team will define and capture their goals as well as other important planning information. Topics: release vs. sprint planning; release planning and the Product Backlog; Product Backlog prioritization; acceptance criteria and tests; sprint planning and the Sprint Backlog; creating and linking sprint tasks; retrospective. At this point the team will have the knowledge of Scrum, Visual Studio 2010, and the case study application to begin developing increments of potentially shippable functionality that meet their definition of done.

    Module 7: EMERGENT ARCHITECTURE

    This module introduces the architectural practices and tools a team can use to develop a valid design on which to build new functionality. The teams will learn how Scrum supports good architecture and design practices. After the discussion, the teams will be presented with the product owner's prioritized backlog so that they may select and commit to the functionality they can deliver in this sprint. Topics: architecture and Scrum; emergent architecture; principles, patterns, and practices; Visual Studio 2010 modeling tools; UML and layer diagrams. SPRINT 1. Retrospective.

    Module 8: TEST DRIVEN DEVELOPMENT

    This module introduces Test Driven Development as a design tool and how to implement it using Visual Studio 2010. To maximize productivity and quality, a Scrum team should set up Continuous Integration to regularly build every team member's code changes and run regression tests. Refactoring will also be defined and demonstrated in combination with Visual Studio's Test Impact Analysis to efficiently re-run just those tests which were impacted by refactoring. Topics: continuous integration; Team Foundation Build; Test Driven Development (TDD); refactoring; Test Impact Analysis. SPRINT 2. Retrospective.

    Module 9: AGILE DATABASE DEVELOPMENT

    This module lets the SQL Server database developers in on a little secret - they can be agile too. By using the database projects in Visual Studio 2010, the database developers can join the rest of the team. The students will see how to apply Agile database techniques within Visual Studio to support the SQL Server 2005/2008/2008R2 development lifecycle. Topics: Agile database development; Visual Studio database projects; importing schema and scripts; building and deploying; generating data; unit testing. SPRINT 3. Retrospective.

    Module 10: SHIP IT

    Teams need to know that just because they like the functionality doesn't mean the Product Owner will. This module revisits acceptance criteria as it pertains to acceptance testing. By refining acceptance criteria into manual test steps, team members can execute the tests, recording the results and reporting bugs in a number of ways. Manual tests will be defined and executed using the Microsoft Test Manager tool. As the sprint completes and an increment of functionality is delivered, the team will also learn why and when they should create a branch of the codeline. Topics: acceptance criteria; testing in Visual Studio 2010; Microsoft Test Manager; writing and running manual tests; branching. SPRINT 4. Retrospective.

    Module 11: OVERCOMING DYSFUNCTION

    This module introduces the many types of people, process, and tool dysfunctions that teams face in the real world. Many dysfunctions and scenarios will be identified, along with ideas and discussion for how a team might mitigate them. This module will enable you and your team to move toward independence and improve your game of Scrum when you depart class. Topics: Scrum-butts and flaccid Scrum; best practices working as a team; team challenges; ScrumMaster challenges; Product Owner challenges; stakeholder challenges; course retrospective.

    What will be expected of you and your team?

    This is a unique course in that it's technically-focused, team-based, and employs timeboxes. It demands that the members of the teams self-organize and self-manage their own work to collaboratively develop increments of software. All attendees must commit to:

    - Pay attention to all lectures and demonstrations
    - Participate in team and group discussions
    - Work collaboratively with other team members
    - Obey the timebox for each activity
    - Commit to work and do your best to deliver

    All teams should have these skills:

    - Understanding of Scrum
    - Familiarity with Visual Studio 2010
    - C#, .NET 4.0 & ASP.NET 4.0 experience*
    - SQL Server 2008 development experience
    - Software testing experience

    * Check with the instructor ahead of time for the exact technologies

    Self-organising teams

    Another unique attribute of this course is that it's a technical training class being delivered to teams of developers, not pairs, and not individuals. Ideally, your actual software development team will attend the training to ensure that all necessary skills are covered. However, if you wish to attend an open enrolment course alone or with just a couple of colleagues, realize that you may be placed on a team with other attendees. The instructor will do his or her best to ensure that each team is cross-functional to tackle the case study, but there are no guarantees. You may be required to try a new role, learn a new skill, or pair with somebody unfamiliar to you. This is just good Scrum!

    Who should NOT take this course?

    Because of the nature of this course, as explained above, certain types of people should probably not attend:

    - Students requiring command-and-control style instruction - there are no prescriptive/step-by-step (think traditional Microsoft Learning) labs in this course
    - Students who are unwilling to work within a timebox
    - Students who are unwilling to work collaboratively on a team
    - Students who don't have any skill in any of the software development disciplines
    - Students who are unable to commit fully to their team - not only will this diminish the student's learning experience, but it will also impact their team's learning experience

    Find a course and register | Download this syllabus | Download the Scrum Guide

    Technorati Tags: Scrum, SSW, Pro Scrum Dev

    Read the article

  • Why does an error appear every time I try to open the Ubuntu Software Center? [duplicate]

    - by askubuntu7639
    This question already has an answer here: How do I remove a broken software source? (3 answers)

    There is a glitch in the Ubuntu Software Center: whenever I open it, an error appears, and it keeps loading and never opens. Why does this happen? I have installed Ubuntu 13.04 on a disk and partitioned it. Please help me, and ask for extra information if you need it. If you know of any duplicates, please point me to them!

    This is the output someone asked me for:

        SystemError: E:Type '<!DOCTYPE' is not known on line 1 in source list /etc/apt/sources.list.d/medibuntu.list

    The next output is from cat /etc/apt/sources.list.d/medibuntu.list:
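
    The '<!DOCTYPE' in that SystemError suggests APT is parsing HTML rather than a valid source list; a dead repository (Medibuntu was shut down) can leave an HTML error page saved in the .list file, and the Software Center then fails every time it refreshes the package cache. A minimal sketch of the usual fix, assuming medibuntu.list is the only broken source, is to move that file aside and rebuild the cache:

        # Back up the broken source list, then remove it so APT stops parsing it
        sudo mv /etc/apt/sources.list.d/medibuntu.list ~/medibuntu.list.bak

        # Rebuild the package cache; the Software Center should open normally afterwards
        sudo apt-get update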

    Read the article

  • Token replacement

    - by ClarkeyBoy
    Hey,

    I currently have a system on my website whereby I can put something like "[cfe]" anywhere in the site and, when the page is rendered, it will be replaced with the root of the customer front end (the same goes for "[afe]" and the admin front end), so in the admin front end I can put "[cfe]/Default.aspx" to link to the homepage of the customer front end. This is in place because I have a development version of the site, plus a test and a live version too. All three may have different roots to each section (for example, the way the website is set up, the root of the admin front end in test is "/test/Administration/", but in live and development it is just "/Administration/"). Which version it is depends on the URL: all my development sites are in a folder called "development", test is in a folder called "test", and live URLs contain neither. There are also three different databases, one for each, and all three obviously require a different connection string.

    I also have a string replacement function in place which can change, for example, "[Product:Cards]" to point to the Cards catalogue page. The problem is that for this I go through all the products and do a replacement on "[Product:" & Product.Name() & "]". However, I would like to take this further: I would like to pick up these custom strings when the page is rendered, so that it finds "[Product:Cards]", goes off to look up the product "Cards", and replaces the string with a link to the Cards page, rather than looping through all the products and doing a replace just on the off chance that there are any replacements to make.

    One use for this, which I may start using in the future if I can figure out how to do it, is like on Wikipedia, where you put the title of the page you want to point to, then a divider (I think it's a pipe, from memory), then the link text. I would like to apply this to the above situation. This way broken links can also be picked up and reported to admin (a major advantage, as they can then locate them and remove the link, or add the product or page that it refers to).

    I would like to take this to the stage where the content of entire pages can be rearranged (kind of like web parts, but not as advanced as that). I mean so that you can put [layout type="3columnImageTopRight" image="imageurl"]Content here[/layout]. This will display, as specified, an image in the top right (with padding at the left and bottom) and 3 columns (maybe with the image spanning one or two columns). The imageurl can be specified as another token, maybe like [Image:imagename.gif] or something; this gets replaced with the root of the image folder plus the specified filename. I have not really looked into how I am going to split the content into 3 columns yet, but this would be something to look at for my dissertation and then implement after my project deadline, at least.

    Does anyone have any ideas or pointers which could help me with this? Also, if this is not strictly token replacement, then please point me to what it is, so I can develop it further.

    Thanks in advance,
    Regards,
    Richard Clarke
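
    Since the site is ASP.NET, one way to get this behaviour is a single regex pass over the rendered page: collect only the tokens that actually occur, resolve each one against the catalogue, and report any that don't resolve. A minimal C# sketch, where LookupProductUrl and ReportBrokenLink are hypothetical stand-ins for the site's real product lookup and admin notification:

        using System.Text.RegularExpressions;

        static class TokenExpander
        {
            // Matches [Product:Cards] and the Wikipedia-style [Product:Cards|link text]
            private static readonly Regex Token = new Regex(
                @"\[Product:(?<name>[^\]|]+)(\|(?<text>[^\]]+))?\]",
                RegexOptions.Compiled);

            public static string Expand(string html)
            {
                // One pass over the page; only tokens that are present get looked up
                return Token.Replace(html, delegate(Match m)
                {
                    string name = m.Groups["name"].Value;
                    string text = m.Groups["text"].Success ? m.Groups["text"].Value : name;

                    string url = LookupProductUrl(name);   // hypothetical product lookup
                    if (url == null)
                    {
                        ReportBrokenLink(name);            // hypothetical admin report
                        return text;                       // degrade to plain text
                    }
                    return "<a href=\"" + url + "\">" + text + "</a>";
                });
            }

            // Stubs standing in for real data access and admin alerting
            private static string LookupProductUrl(string name) { return null; }
            private static void ReportBrokenLink(string name) { }
        }

    The same pattern extends to [Image:...] and [layout ...] tokens: one regex per token family, each run once over the page at render time.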

    Read the article

  • Overflow - what am I doing wrong?

    - by ClarkeyBoy
    Hi,

    I have been working on trying to get a page to display a title at the top of the content pane, and then a scrollable list of products below that, so that the title of the product range is displayed at all times. I am sure this is a very simple thing to do, but I cannot figure it out.

    Currently the actual page (not the test page for which the code is given below) works OK, in the sense that I set the heading div to 5% of the height of .content-container and then set the scrollable div to 95% with top: 5%, both with position: absolute applied. However, I would like to place some links in the heading div to different pages (1, 2, 3, etc.), which I would like to centre vertically if they are shorter than the heading, and I would like the heading div to expand to match the height of the heading or the links, whichever is smallest. Furthermore, I would like the div below the heading to shrink so that it doesn't go below the bottom of the content div as the heading div gets taller. The point of this is that the site is for a client who may, or may not, be happy with the heading sizes and so on, so the heading div height could easily change. Specifying heights so precisely means that changing the h1 height could mean 5 changes to the CSS file, something I want to avoid.

    The content pane currently has its height fixed to 80% of the page, with the header and footer being 10% each on top of that, so there is no scrollbar at the side of the page and the header and footer are always showing. This is something I would like to keep.

    In the code below, .content-container is the main content pane; this is contained in another div which is centred using the margin at 50% of the page width. .test-div is the div which contains the heading. .test-div-2 is an attempt to place a div below .test-div, in the hope that I can force .test-div-3 to extend to 100% of its height but no further, and to display a scrollbar if the content exceeds the height. So far I have the following, but it doesn't do exactly what I would like it to:

        <div class="content-container">
            <div class="test-div">
                <h1 style="text-align: center;">Dogs</h1>
            </div>
            <div class="test-div-2">
                <div class="test-div-3">
                    <!-- Content here -->
                </div>
            </div>
        </div>

        .content-container {
            position: absolute;
            width: 100%;
            height: 100%;
            left: 0;
            right: 0;
            bottom: 0;
            overflow: auto;
        }
        .test-div {
            position: relative;
            padding: 0;
            margin: 0;
        }
        .test-div-2 {
            position: relative;
            background-color: #CCCCCC;
        }
        .test-div-3 {
            max-height: 100%;
            background-color: #999999;
        }

    Any help with this would be greatly appreciated. I would like to achieve this without the use of JavaScript or jQuery if possible: pure HTML and CSS solutions only, please!

    Regards,
    Richard
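
    For what it's worth, a pure-CSS way to avoid the hard-coded 5%/95% split is to lay the container out as a flex column: the heading row takes whatever height its content needs, and the row below absorbs the remaining height and scrolls. A minimal sketch of that idea, assuming a browser with flexbox support (which postdates this question, so treat it as one option rather than the answer):

        .content-container {
            position: absolute;
            top: 0; left: 0; right: 0; bottom: 0;
            display: flex;              /* stack heading and body vertically */
            flex-direction: column;
        }
        .test-div {
            flex: 0 0 auto;             /* heading row keeps its natural height */
            display: flex;              /* centre the h1 and page links vertically */
            align-items: center;
            justify-content: center;
        }
        .test-div-2 {
            flex: 1 1 auto;             /* body row absorbs all remaining height */
            min-height: 0;              /* allow it to shrink inside the container */
            overflow-y: auto;           /* scroll the product list, not the page */
        }

    Changing the h1 size then needs no CSS changes elsewhere, because nothing depends on a fixed percentage.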

    Read the article

  • Writing more efficient XQuery code (avoiding redundant iteration)

    - by Coquelicot
    Here's a simplified version of a problem I'm working on: I have a bunch of XML data that encodes information about people. Each person is uniquely identified by an 'id' attribute, but they may go by many names. For example, in one document I might find

        <person id=1>Paul Mcartney</person>
        <person id=2>Ringo Starr</person>

    and in another I might find:

        <person id=1>Sir Paul McCartney</person>
        <person id=2>Richard Starkey</person>

    I want to use XQuery to produce a new document that lists every name associated with a given id, i.e.:

        <person id=1>
            <name>Paul McCartney</name>
            <name>Sir Paul McCartney</name>
            <name>James Paul McCartney</name>
        </person>
        <person id=2>
            ...
        </person>

    The way I'm doing this now in XQuery is something like this (pseudocode-esque):

        let $ids := distinct-values( [all the id attributes on people] )
        for $id in $ids
        return
            <person id={$id}> {
                for $unique-name in distinct-values (
                    for $name in ( [all names] )
                    where $name/@id=$id
                    return $name
                )
                return <name>{$unique-name}</name>
            } </person>

    The problem is that this is really slow. I imagine the bottleneck is the innermost loop, which executes once for every id (of which there are about 1200). I'm dealing with a fair bit of data (300 MB, spread over about 800 XML files), so even a single execution of the query in the inner loop takes about 12 seconds, which means that repeating it 1200 times will take about 4 hours (which might be optimistic - the process has been running for 3 hours so far). Not only is it slow, it's using a whole lot of virtual memory. I'm using Saxon, and I had to set Java's maximum heap size to 10 GB (!) to avoid out-of-memory errors, and it's currently using 6 GB of physical memory.

    So here's how I'd really like to do this (in Pythonic pseudocode):

        persons = {}
        for id in ids:
            persons[id] = set()
        for person in all_the_people_in_my_xml_document:
            persons[person.id].add(person.name)

    There, I just did it in linear time, with only one sweep of the XML document. Now, is there some way to do something similar in XQuery? Surely if I can imagine it, a reasonable programming language should be able to do it (he said quixotically). The problem, I suppose, is that unlike Python, XQuery doesn't (as far as I know) have anything like an associative array. Is there some clever way around this? Failing that, is there something better than XQuery that I might use to accomplish my goal? Because really, the computational resources I'm throwing at this relatively simple problem are kind of ridiculous.
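    For what it's worth, the Pythonic pseudocode above is only a few lines from a runnable single-pass version. Here is a minimal sketch using the standard library's ElementTree; the data/*.xml glob, the output file name, and the flat <person> layout (with well-formed, quoted attribute values) are assumptions based on the examples in the question. Later XQuery versions also added a group by clause to FLWOR expressions, which addresses exactly this pattern.

        import xml.etree.ElementTree as ET
        from collections import defaultdict
        from glob import glob

        names_by_id = defaultdict(set)

        # One linear sweep over every document; no per-id re-querying.
        for path in glob("data/*.xml"):          # assumed location of the ~800 source files
            for person in ET.parse(path).iter("person"):
                names_by_id[person.get("id")].add(person.text)

        # Emit one <person> per id, with every distinct name seen for it.
        root = ET.Element("people")
        for pid, names in sorted(names_by_id.items()):
            person = ET.SubElement(root, "person", id=pid)
            for name in sorted(names):
                ET.SubElement(person, "name").text = name

        ET.ElementTree(root).write("merged.xml")

    The dictionary-of-sets is the associative array the question is missing: the whole job becomes O(total people) instead of O(ids x people).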

    Read the article

  • How do you read from a file into an array of struct?

    - by Thomas.Winsnes
    I'm currently working on an assignment and this has had me stuck for hours. Can someone please help me point out why this isn't working for me?

        struct book {
            char title[25];
            char author[50];
            char subject[20];
            int callNumber;
            char publisher[250];
            char publishDate[11];
            char location[20];
            char status[11];
            char type[12];
            int circulationPeriod;
            int costOfBook;
        };

        void PrintBookList(struct book **bookList) {
            int i;
            for (i = 0; i < sizeof(bookList); i++) {
                struct book newBook = *bookList[i];
                printf("%s;%s;%s;%d;%s;%s;%s;%s;%s;%d;%d\n",
                    newBook.title, newBook.author, newBook.subject,
                    newBook.callNumber, newBook.publisher, newBook.publishDate,
                    newBook.location, newBook.status, newBook.type,
                    newBook.circulationPeriod, newBook.costOfBook);
            }
        }

        void GetBookList(struct book **bookList) {
            FILE *file = fopen("book.txt", "r");
            struct book newBook[1024];
            int i = 0;
            while (fscanf(file, "%s;%s;%s;%d;%s;%s;%s;%s;%s;%d;%d",
                    &newBook[i].title, &newBook[i].author, &newBook[i].subject,
                    &newBook[i].callNumber, &newBook[i].publisher, &newBook[i].publishDate,
                    &newBook[i].location, &newBook[i].status, &newBook[i].type,
                    &newBook[i].circulationPeriod, &newBook[i].costOfBook) != EOF) {
                bookList[i] = &newBook[i];
                i++;
            }
            /* while (fscanf(file, "%s;%s;%s;%d;%s;%s;%s;%s;%s;%d;%d",
                    &bookList[i].title, &bookList[i].author, &bookList[i].subject,
                    &bookList[i].callNumber, &bookList[i].publisher, &bookList[i].publishDate,
                    &bookList[i].location, &bookList[i].status, &bookList[i].type,
                    &bookList[i].circulationPeriod, &bookList[i].costOfBook) != EOF) {
                i++;
            } */
            PrintBookList(bookList);
            fclose(file);
        }

        int main() {
            struct book *bookList[1024];
            GetBookList(bookList);
        }

    I get no errors or warnings on compile. It should print the content of the file, just like it is in the file. Like this:

        OperatingSystems Internals and Design principles;William.S;IT;741012759;Upper Saddle River;2009;QA7676063;Available;circulation;3;11200
        Communication skills handbook;Summers.J;Accounting;771239216;Milton;2010;BF637C451;Available;circulation;3;7900
        Business marketing management:B2B;Hutt.D;Management;741912319;Mason;2010;HF5415131;Available;circulation;3;1053
        Patient education rehabilitation;Dreeben.O;Education;745121511;Sudbury;2010;CF5671A98;Available;reference;0;6895
        Tomorrow's technology and you;Beekman.G;Science;764102174;Upper Saddle River;2009;QA76B41;Out;reserved;1;7825
        Property & security: selected essay;Cathy.S;Law;750131231;Rozelle;2010;D4A3C56;Available;reference;0;20075
        Introducing communication theory;Richard.W;IT;714789013;McGraw-Hill;2010;Q360W47;Available;circulation;3;12150
        Maths for computing and information technology;Giannasi.F;Mathematics;729890537;Longman;Scientific;1995;QA769M35G;Available;reference;0;13500
        Labor economics;George.J;Economics;715784761;McGraw-Hill;2010;HD4901B67;Available;circulation;3;7585
        Human physiology:from cells to systems;Sherwood.L;Physiology;707558936;Cengage Learning;2010;QP345S32;Out;circulation;3;11135
        bobs;thomas;IT;701000000;UC;1006;QA7548;Available;Circulation;7;5050

    but when I run it, it outputs this:

        OperatingSystems;;;0;;;;;;0;0
        Internals;;;0;;;;;;0;0
        and;;;0;;;;;;0;0
        Design;;;0;;;;;;0;0
        principles;William.S;IT;741012759;Upper;41012759;Upper;;0;;;;;;0;0
        Saddle;;;0;;;;;;0;0
        River;2009;QA7676063;Available;circulation;3;11200;lable;circulation;3;11200;;0;;;;;;0;0
        Communication;;;0;;;;;;0;0

    Thanks in advance, you're a life saver
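    Two observations on the code as posted, offered as likely causes rather than a verified answer. First, %s in scanf reads a whitespace-delimited token and never stops at ';', so "OperatingSystems" is consumed whole and the literal ';' directives in the format then fail to match - which is exactly the word-per-record output shown; the usual C fix is to read a line with fgets() and split it with strtok(), or to use width-limited "%24[^;];" -style conversions per field. Second, sizeof(bookList) in PrintBookList is the size of a pointer, not the number of books read, so the count needs to be passed in separately. Since the record-splitting is the crux, here is the same parsing logic sketched in Python (file name taken from the question):

        from collections import namedtuple

        Book = namedtuple("Book", "title author subject call_number publisher "
                                  "publish_date location status type circulation_period cost")

        books = []
        with open("book.txt") as f:
            for line in f:
                fields = line.rstrip("\n").split(";")   # split on ';', not on whitespace
                if len(fields) == 11:                   # records with a stray extra ';' are skipped here
                    books.append(Book(*fields))

        for b in books:
            print(";".join(b))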

    Read the article

  • Ajax and using responseXML

    - by Banderdash
    Hello, I have an XML file that looks like this:

        <response>
          <library name="My Library">
            <book id="1" checked-out="1">
              <authors>
                <author>David Flanagan</author>
              </authors>
              <title>JavaScript: The Definitive Guide</title>
              <isbn-10>0596101996</isbn-10>
            </book>
            <book id="2" checked-out="1">
              <authors>
                <author>John Resig</author>
              </authors>
              <title>Pro JavaScript Techniques (Pro)</title>
              <isbn-10>1590597273</isbn-10>
            </book>
            <book id="3" checked-out="0">
              <authors>
                <author>Erich Gamma</author>
                <author>Richard Helm</author>
                <author>Ralph Johnson</author>
                <author>John M. Vlissides</author>
              </authors>
              <title>Design Patterns: Elements of Reusable Object-Oriented Software</title>
              <isbn-10>0201633612</isbn-10>
            </book>
            ...
          </library>
        </response>

    I'm using a simple JS script to show all the titles of the books on click:

        <script type="text/javascript">
        function loadXMLDoc() {
          if (window.XMLHttpRequest) { // code for IE7+, Firefox, Chrome, Opera, Safari
            xmlhttp = new XMLHttpRequest();
          } else { // code for IE6, IE5
            xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
          }
          xmlhttp.onreadystatechange = function() {
            if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
              xmlDoc = xmlhttp.responseXML;
              var txt = "";
              x = xmlDoc.getElementsByTagName("title");
              for (i = 0; i < x.length; i++) {
                txt = txt + x[i].childNodes[0].nodeValue + "<br />";
              }
              document.getElementById("checkedIn").innerHTML = txt;
            }
          }
          xmlhttp.open("GET", "ajax-response-data.xml", true);
          xmlhttp.send();
        }
        </script>

    This works fine, as you can see here: http://clients.pixelbleed.net/ajax-test/

    What I'd like to do is have the results post, on page load (not on click), into two separate divs, depending on the checked-out attribute in the XML. So <book id="#" checked-out="1"> would post to the checkedIn div, and <book id="#" checked-out="0"> posts to a checkedOut div. I also want to display the title and the author - I would love any ideas on the best method for accomplishing this. Apologies in advance for the newbieness of my query.
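    Two hedged pointers. Running this on page load rather than on click is just a matter of calling loadXMLDoc() from window.onload (or a body onload attribute). The grouping itself is one pass over the <book> elements, keyed on the checked-out attribute; here is that traversal sketched in Python with ElementTree (file name from the question), and the same walk translates almost line for line into DOM calls - getElementsByTagName("book"), getAttribute("checked-out") - inside the onreadystatechange handler, appending to two buffers and assigning one to each div:

        import xml.etree.ElementTree as ET

        doc = ET.parse("ajax-response-data.xml")
        buckets = {"1": [], "0": []}  # checked-out="1" -> checkedIn div, "0" -> checkedOut div

        for book in doc.iter("book"):
            title = book.findtext("title")
            authors = ", ".join(a.text for a in book.iter("author"))
            buckets[book.get("checked-out")].append("%s - %s" % (title, authors))

        print("checkedIn:", buckets["1"])
        print("checkedOut:", buckets["0"])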

    Read the article

  • ADO program to list members of a large group.

    - by AlexGomez
    Hi everyone, I'm attempting to list all the members in an Active Directory group using ADO. The problem I have is that many of these groups have over 1500 members, and ADSI cannot handle more than 1500 items in a multi-valued attribute. Fortunately I came across Richard Muller's wonderful VBScript that handles more than 1500 members at http://www.rlmueller.net/DocumentLargeGroup.htm

    I modified his code as shown below so that I can list ALL the groups and their memberships in a certain OU. However, I keep getting the exception shown below:

        "ADODB.Recordset: Item cannot be found in the collection corresponding to the requested name or ordinal."

    My program appears to get stuck at:

        strPath = adoRecordset.Fields("ADsPath").Value
        Set objGroup = GetObject(strPath)

    All I am doing above is issuing the query to get back a recordset consisting of the ADsPath for each group in the OU. It then walks through the recordset, grabs the ADsPath for the first group and stores it in a variable named strPath; we then use the value of that variable to bind to the group account for that group. It really should work! Any idea why the code below doesn't work for me? Any pointers will be greatly appreciated. Thanks.

        Option Explicit

        Dim objRootDSE, strDNSDomain, adoCommand
        Dim adoConnection, strBase, strAttributes
        Dim strFilter, strQuery, adoRecordset
        Dim strDN, intCount, blnLast, intLowRange
        Dim intHighRange, intRangeStep, objField
        Dim objGroup, objMember, strName

        ' Determine DNS domain name.
        Set objRootDSE = GetObject("LDAP://RootDSE")
        'strDNSDomain = objRootDSE.Get("DefaultNamingContext")
        strDNSDomain = "XXXXXXXX"

        ' Use ADO to search Active Directory.
        Set adoCommand = CreateObject("ADODB.Command")
        Set adoConnection = CreateObject("ADODB.Connection")
        adoConnection.Provider = "ADsDSOObject"
        adoConnection.Open "Active Directory Provider"
        adoCommand.ActiveConnection = adoConnection
        adoCommand.Properties("Page Size") = 100
        adoCommand.Properties("Timeout") = 30
        adoCommand.Properties("Cache Results") = False

        ' Specify base of search.
        strBase = "<LDAP://" & strDNSDomain & ">"
        ' Specify the attribute values to retrieve.
        strAttributes = "member"
        ' Filter on objects of class "group".
        strFilter = "(&(objectClass=group)(samAccountName=*))"

        ' Enumerate direct group members.
        ' Use range limits to handle more than 1000/1500 members.
        ' Setup to retrieve 1000 members at a time.
        blnLast = False
        intRangeStep = 999
        intLowRange = 0
        intHighRange = intLowRange + intRangeStep

        Do While True
            If (blnLast = True) Then
                ' If last query, retrieve remaining members.
                strQuery = strBase & ";" & strFilter & ";" _
                    & strAttributes & ";range=" & intLowRange _
                    & "-*;subtree"
            Else
                ' If not last query, retrieve 1000 members.
                strQuery = strBase & ";" & strFilter & ";" _
                    & strAttributes & ";range=" & intLowRange & "-" _
                    & intHighRange & ";subtree"
            End If
            adoCommand.CommandText = strQuery
            Set adoRecordset = adoCommand.Execute
            adoRecordset.MoveFirst
            intCount = 0
            Do Until adoRecordset.EOF
                strPath = adoRecordset.Fields("ADsPath").Value
                Set objGroup = GetObject(strPath)
                For Each objField In adoRecordset.Fields
                    If (VarType(objField) = (vbArray + vbVariant)) Then
                        For Each strDN In objField.Value
                            ' Escape any forward slash characters, "/", with the backslash
                            ' escape character. All other characters that should be escaped are.
                            strDN = Replace(strDN, "/", "\/")
                            ' Check dictionary object for duplicates.
                            'If (objGroupList.Exists(strDN) = False) Then
                                ' Add to dictionary object.
                                'objGroupList.Add strDN, True
                                ' Bind to each group member, to find the member's samAccountName.
                                Set objMember = GetObject("LDAP://" & strDN)
                                ' Output group cn, group samAccountName and group member's samAccountName.
                                Wscript.Echo objMember.samAccountName
                                intCount = intCount + 1
                            'End If
                        Next
                    End If
                Next
                adoRecordset.MoveNext
            Loop
            adoRecordset.Close
            ' If this is the last query, exit the Do While loop.
            If (blnLast = True) Then
                Exit Do
            End If
            ' If the previous query returned no members, then the previous
            ' query for the next 1000 members failed. Perform one more
            ' query to retrieve remaining members (less than 1000).
            If (intCount = 0) Then
                blnLast = True
            Else
                ' Setup to retrieve next 1000 members.
                intLowRange = intHighRange + 1
                intHighRange = intLowRange + intRangeStep
            End If
        Loop
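    Two observations on the script as posted, offered as likely culprits rather than a verified fix: the query requests only the member attribute (strAttributes = "member"), yet the loop reads adoRecordset.Fields("ADsPath") - asking the recordset for a field that was never requested is precisely what produces "Item cannot be found in the collection", so ADsPath likely needs adding to strAttributes; and strPath is used without a Dim under Option Explicit. The range-stepping logic itself is easy to get wrong, so here is the same windowing algorithm sketched in Python, with a hypothetical fetch_members(low, high) standing in for the ADO ranged query (high=None meaning the open-ended "range=low-*" form):

        def fetch_members(low, high):
            # Hypothetical stand-in for the ADO query that asks for
            # 'member;range=low-high' (or 'member;range=low-*' when high is None)
            # and returns the list of member DNs it got back.
            raise NotImplementedError

        def all_members():
            members = []
            step = 999                  # AD returns at most 1000/1500 values per request
            low, last = 0, False
            while True:
                batch = fetch_members(low, None if last else low + step)
                members.extend(batch)
                if last:
                    break               # the open-ended query always finishes the job
                if not batch:
                    last = True         # ranged window exhausted; ask for the remainder
                else:
                    low += step + 1     # next window: 0-999, then 1000-1999, ...
            return members

    This mirrors the VBScript's control flow: an empty batch flips the "last" flag, and one final open-ended request collects whatever is left.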

    Read the article

  • "Randomly" occurring errors...

    - by ClarkeyBoy
    Hi, My website has a setup whereby, when the application starts, a module called SiteContent is "created". This runs a clear-up function which basically deletes any irrelevant data from the database, in case any has been left in there by previously run functions.

    The module has instances of manager classes - namely RangeManager, CollectionManager and DesignManager. There are others, but I will just use these as an example. Each manager class contains an array of items - items may be of type Range, Collection or Design, whichever one is relevant. Data for each range is then read into an instance of Range, Collection or Design. I know this is basically duplicating data - not very efficient, but it's my final year project at the moment, so I can always change it to use Linq or something similar later, when I am not pressured by the one-month deadline.

    I have a form which, on clicking the Save button, saves data by calling SiteContent.RangeManager.Create(vars) or SiteContent.RangeManager.Update(Range As Range, vars) (or the equivalent for the other manager classes, whichever one happens to be relevant). These functions call a stored procedure to insert or update the relevant table. Classes Range, Collection and Design all have attributes such as Name, Description, Display and several others. When the Create or Update function is called, the manager loops through all the other items to check if an item with the same name already exists. The Update function ensures that it does not compare the item being updated to itself. A custom exception (ItemAlreadyExistsException) is thrown if another item with the same name is found.

    For some weird reason, if I go into a Range, Collection or Design in edit mode, change something and try to update it, it occasionally doesn't update the item. When I say occasionally I mean every 3 - 4 page loads, sometimes more. I see absolutely no pattern in when or why it occurs. I have a try-catch statement which catches ItemAlreadyExistsException and outputs "An item with this name already exists" when caught. Occasionally it will output this; other times it will not. Does anyone have any idea why this could happen? Maybe a mistake which someone has made and solved before?

    I used to have regular expressions in place that the names were compared to - I believe it was [a-zA-Z]{1,100} (between 1 and 100 lower- or upper-case characters). For some reason the customer who I am developing the site for used to get errors saying it's not in the correct format, yet he could try the same text 5 minutes later and it would work fine. I am thinking this could well be the same problem, since both problems occur at random. Many thanks in advance. Regards, Richard Clarke

    Edit: After much time spent narrowing down the code, I have decided to wait for my brother, who has been a programmer for at least 8 years more than I have, to come down over Easter and get him to have a look at it. If he can't solve it then I will zip the files up and put them somewhere for people to access and have a go at. I narrowed it down literally to the minimum number of files possible, and it still occurs - it seems to be about every 10th time. Having said that, I force the manager classes to refresh every 10 page loads or 5 minutes (whichever is sooner). I should look into this - it could be causing the problem.

    Basically each manager contains an array of an object. This array is populated using data from the database. The Update function takes an instance of the item and the new values to be set for the object. If it happens to be a page load where the array is reset (i.e. the data is loaded freshly from the database) then the object instance with the same ID won't be the same instance as the one being passed in. This explains why it throws an ItemAlreadyExistsException now and then. It all makes sense now the more I think about it. If I were to pass in the ID of the object to be altered, rather than the object itself, then it should work perfectly. I will answer the question if I solve it.
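    The self-diagnosis at the end is a classic identity-versus-key bug, and it is worth a tiny illustration: a duplicate check that excludes "the item being updated" by object reference silently breaks the moment the cached array is rebuilt, because the caller still holds the old instance, while comparing by ID does not. A minimal sketch (Python, with made-up class names mirroring the post's managers):

        class Item:
            def __init__(self, item_id, name):
                self.id, self.name = item_id, name

        class ItemAlreadyExistsException(Exception):
            pass

        class Manager:
            def __init__(self, items):
                self.items = items  # rebuilt from the database every N page loads

            def update(self, item_id, new_name):
                # Exclude the updated item by ID, not by reference ('other is item'),
                # so a cache refresh between requests cannot break the comparison.
                for other in self.items:
                    if other.id != item_id and other.name == new_name:
                        raise ItemAlreadyExistsException(new_name)
                target = next(i for i in self.items if i.id == item_id)
                target.name = new_name

    With this shape, mgr.update(2, "Cards") keeps working even if mgr.items was reloaded after the edit form was rendered, which matches the "pass in the ID rather than the object" conclusion.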

    Read the article

  • SOA Community Newsletter: new edition!

    - by mseika
    SOA PARTNER COMMUNITY NEWSLETTER - AUGUST 2012

    Dear SOA partner community member,

    Have you submitted your feedback on the SOA Partner Community Survey 2012? This is the last chance to participate in the survey. We recommend you complete the survey and help us to improve our SOA Community.

    Thanks to all attendees and trainers for their participation in the excellent Fusion Middleware Summer Camps held in Lisbon and Munich. I would also like to thank you for the great feedback and the nice reports provided by the AMIS Technology Blog & Middleware by Link Consulting. Most of our courses were overbooked; if you did not get a chance or missed it, we offer a wide range of online training and the course material. A key take-away from the advanced BPM course is to become an expert in ADF - here is the course from Grant Ronald, Learn Advanced ADF, available online. The Link Consulting team became experts in SOA Governance with EAMS and Oracle Enterprise Repository!

    We always encourage our community members to share their best practices and are very keen to publish them. Please let us know if you want to share your best practices through this medium. We encourage you to make use of the Specialization benefits - this month we are giving you an opportunity to promote your SOA & BPM events.

    Jürgen Kress, Oracle SOA & BPM Partner Adoption EMEA

    In this issue:
    - Presentations & training material: OFM Summer Camps
    - Promote your SOA & BPM events
    - Advanced ADF online, for free, by Grant
    - BPM 11g customer stories & solution catalog & process accelerators
    - Delivering SOA Governance with EAMS by the Link Consulting team
    - WebLogic Server provisioning and patching
    - News from our partners & community

    PRESENTATIONS & TRAINING MATERIAL - OFM SUMMER CAMPS

    Thanks to all attendees who invested their time and utilized the opportunity to attend the Summer Camps! Due to the high demand for most of our trainings, we had a long waiting list of partners keen to attend. We would like to give our special thanks to all the trainers, who delivered excellent workshops! Most of the presentations and course material have been posted on our SOA Community Workspace and WebLogic Community Workspace. You can access the content only if you are a registered community member. To register for the SOA Community please click here. You can register for the WebLogic Community here. To find out the first impressions of the event please visit our Facebook pages: www.facebook.com/WebLogicCommunity & www.facebook.com/soacommunity, or the Picasa album. Thanks for the excellent blog posts from the AMIS Technology Blog & Middleware by Link Consulting. Let us know if you published a twitter blog on @soacommunity & @wlscommunity. We will be pleased to publish it in our newsletters.

    BPM course quotes:
    - "It's always easy, if you know what you are doing" - Torsten Winterberg, Opitz
    - "The best ideas are the ideas from the best" - Filipe Sequeria, Primesoft
    - "Best investment in education in the last 12 months" - Richard Schaller, IPT
    - "Practice best practice with the best instructor" - Graham Lamond, Capgemini
    - "If you have basic BPM knowledge, this is the course to really master it" - Diogo Henriques, Link Consulting
    - "Very good trainers, lot of work. Lot of fun as well" - Matthias Gris, Workflow Factory
    - "If you like to accelerate in Oracle come to the training to bring it all together" - Marcel van der Glind, Amis

    ADF course quotes:
    - "Excellent training, great opportunity to network!" - Frank Houweling, Amis
    - "Lots of fun and good ideas" - Ana Santiago, GFI
    - "Learn ADF, worth it, Fusion Apps is the future" - Miguel Delgadillo, STO Consulting
    - "The best way to learn Fusion Middleware from the #1" - Alexandro Montantes, STO Consulting
    - "Be advanced to be the first" - Dimitar Petrov, Fadata
    - "Great opportunity to suck all the knowledge out of some very experienced product managers" - Wilfred von der Deijl, The Future Group

    WebLogic course quotes:
    - "Oracle trainings are the best" - Pedro Neto, Novobase
    - "Excellent training, well organized" - Pedro Antunh, Capgemini
    - "This course dives you into Oracle WebLogic giving you a quick start on benefiting from Fusion Apps" - Leonardo Fernandes, Outsystems

    Additional quotes:
    - "Thanks a lot again for organizing such a great and informative Summer Camp. Both training and networking were organized very professionally. I have gained tons of very useful info, which will definitely help to increase the quality of our future projects." - Daniel Fasko, fss-group.com
    - "I didn't get the chance yesterday to thank you for a most enjoyable and thoroughly educational time I had in Munich over the last few days." - Jeroen Bakker, Ordina
    - "Just to congratulate you on a great event, not only today but also in the previous days of training. As we know, a very good organization and, as a native Portuguese that knows Lisbon very well, a nice choice of places to visit. Looking forward to coming again next year." - Pedro Miguel Neto, Novobase

    PROMOTE YOUR SOA & BPM EVENTS

    The Partner Event Publisher has just been made available to all SOA & BPM specialized partners in EMEA. Partners now have the opportunity to publish their events to the Oracle.com/events site and spread the word on their upcoming live in-person and/or live webcast events. See the demo below and click here to read more information.

    ADVANCED ADF ONLINE, FOR FREE, BY GRANT

    The second part of the advanced ADF online eCourse is live now! This covers the advanced topics of region and region interaction, as well as getting down and dirty with some of the layout features of ADF Faces, skinning and DVT components. The aim of this course is to give you a self-paced learning aid which covers the more advanced topics of ADF development. The content is developed by Product Management and our curriculum development teams and is based on advanced training material we have been running internally for about 18 months. We will get started on the next chapter, but in the meantime, please have a look at chapters one and two.

    BPM 11G CUSTOMER STORIES & SOLUTION CATALOG & PROCESS ACCELERATORS

    Stories: Everyone loves a good story on planning or implementing a BPM strategy. Everyone wants to hear how it was done before, what worked, and what was achieved. If you have achieved success with BPM, we are very keen to hear your stories and examples of how your customers use it. We receive lots of requests from people who are thinking of using BPM to solve a specific problem, or in combination with a specific technology, to talk to someone who has done it before. These stories are invaluable. Drop us the details of anything you think is relevant, with a bit of detail, and we will follow up on it.
    As one good deed deserves another, we will do our best to give you stories if you need them, to show that where you are going, others have trodden before. Send your stories to us using this e-mail link and we will share them among other like-minded people.

    Solution catalogue: This summer, Oracle is launching a solution catalogue specifically intended for partners. If you have delivered a successful implementation in BPM and think it could be reused and applied again in a similar scenario, in the same industry or in a similar environment, then we are keen to know about it and will add it to the solution catalogue. The solution catalogue will showcase successful BPM solutions both inside and outside Oracle. Be in touch with us on this e-mail link and we will make sure to add your solution.

    Process accelerators: Finally, if there are specific processes that you are expert on, that you have implemented at a customer and that you want to work with us on getting productised, then we would love to know about it. The process accelerator programme is explained in the most recent SOA/BPM Community Newsletter, but again feel free to contact us if you want to get involved. Good luck with BPM and let us know how we can help. Barry O'Reilly, Director BPM, [email protected]

    DELIVERING SOA GOVERNANCE WITH EAMS, BY THE LINK CONSULTING TEAM

    In the last 12 years Link Consulting has been building its presence in specific areas such as governance and architecture, both in terms of practices and methodologies, products, know-how and technological expertise. The Enterprise Architecture Management System - Oracle Enterprise Edition (EAMS - OER Edition) is the result of this experience; it combines the architecture management solution with OER in order to deliver a product specialized for SOA Governance that gathers the best of two worlds into a solution that enables SOA Governance projects, initiatives and programs.

    Enterprise Architecture Management System: EAMS is an automation-based solution that enables the efficient management of enterprise architectures. The solution uses configured enterprise repositories and takes advantage of their features to provide automation capabilities to the users. EAMS provides capabilities to create, customize and analyze repository data, architectural blueprints, reports and analytic charts.

    Oracle Enterprise Repository: Oracle Enterprise Repository (OER) is one of the major and central elements of the Oracle SOA Governance solution. Oracle Enterprise Repository provides the tools to manage and govern the metadata for any type of software asset, from business processes and services to patterns, frameworks, applications, components, and models. OER maps the relationships and inter-dependencies that connect those assets to improve impact analysis, promote and optimize their reuse, and measure their impact on the bottom line. It provides the visibility, feedback, controls, and analytics to keep your SOA on track to deliver business value. The intense focus on automation helps to overcome barriers to SOA adoption and streamline governance throughout the lifecycle. Core capabilities of the OER include:
    - Asset Management
    - Asset Lifecycle Management
    - Usage Tracking
    - Service Discovery
    - Version Management
    - Dependency Analysis
    - Portfolio Management

    EAMS - OER Edition: The solution takes the advantages and features of both products and combines them in a symbiotic tool that enhances the quality of SOA Governance initiatives and programs. EAMS is able to produce a vast number of outputs by combining its analytical engine, SOA-specific configurations and the assets in OER and other related tools, catalogs and repositories. The configurations encompass not only the extendable parametrization of the metadata but also fully configurable blueprints, PowerPoint reports, charts and queries.

    The SOA blueprints: The solution comes with a set of predefined architectural representations that help the organization better perceive its SOA landscape. More blueprints can easily be created in order to accommodate the organization's needs in terms of detail, audience and metadata.

    Charts & dashboards: The solution encompasses a set of predefined charts and dashboards that promote a more agile way to control and explore the assets.

    Time-based visualization: All representations are time-bound, and with EAMS - OER you can truly govern SOA with a complete view of the past, present and future. The solution delivers gap analysis and a project-oriented approach, taking into consideration the As-Was, As-Is and To-Be. Time-based visualization differentiating factors:
    - Extensive automation and maintenance of architectural representations
    - Organization-wide solution
    - Easy access and navigation to and between all architectural artifacts and representations
    - Flexible meta-model, customization and extensibility capabilities
    - Lifecycle management and enforcement of the time dimension over all the repository content
    - Profile-based customization
    - Comprehensive visibility
    - Architectural alignment
    - Friendly and striking user interfaces

    For more information on EAMS visit us here. For more information on SOA visit us here.

    WEBLOGIC SERVER PROVISIONING AND PATCHING

    For access to the Oracle demo systems please visit OPN and talk to your Partner Expert. SOA Suite and BPM Suite run on WebLogic! We are pleased to announce the availability of a WebLogic Server Management demo that showcases some of the key provisioning and patching capabilities of WebLogic Server Management Pack Enterprise Edition (EE). To learn more about these features - as well as other features of the pack - please visit the pack's saleskit page.

    Demo highlights - the demo showcases the following capabilities:
    - Patching Oracle WebLogic Servers
    - Standardizing WebLogic Server patch rollouts
    - Creating a WebLogic domain provisioning profile
    - Cloning a WebLogic domain from a provisioning profile
    - Deploying a Java EE application
    - Scaling out an Oracle WebLogic cluster

    Demo instructions: Go to the DSS website for Oracle Partners. On the Standard Demo Launchpad page, under the "Software Lifecycle Automation" section, click on the link "EM Cloud Control 12c WLS Provisioning and Patching" (tagged as "NEW"). The specific demo launchpad page contains a link to the detailed demo script with instructions on how to show the demo.

    Read the article

  • TFS API Change WorkItem CreatedDate And ChangedDate To Historic Dates

    - by Tarun Arora
    There may be times when you need to modify the value of the fields "System.CreatedDate" and "System.ChangedDate" on a work item. Richard Hundhausen has a great blog post with ample reasons why you may or may not need to set the values of these fields to historic dates. In this blog post I'll show you how to:
    - Create a PBI work item linked to a Task work item, pre-setting the value of the field 'System.ChangedDate' to a historic date
    - Change the value of the field 'System.CreatedDate' to a historic date
    - Simulate the historic burn down of a task type work item in a sprint
    - Explain the impact of updating the values of the fields CreatedDate and ChangedDate on the sprint burn down chart

    Rules of play:
    1. You need to be a member of the Project Collection Service Accounts group.
    2. You need to use 'WorkItemStoreFlags.BypassRules' when you instantiate the WorkItemStore service:

        // Instantiate the Work Item Store with the BypassRules flag
        _wis = new WorkItemStore(_tfs, WorkItemStoreFlags.BypassRules);

    3. You cannot set the ChangedDate
       - to less than the changed date of the previous revision, or
       - to greater than the current date.

    Walkthrough

    The walkthrough contains 5 parts:
    00 – Required References
    01 – Connect to TFS Programmatically
    02 – Create a Work Item Programmatically
    03 – Set the values of the fields 'System.ChangedDate' and 'System.CreatedDate' to historic dates
    04 – Results of our experiment

    Let's get started…

    00 – Required References

    - Microsoft.TeamFoundation.dll
    - Microsoft.TeamFoundation.Client.dll
    - Microsoft.TeamFoundation.Common.dll
    - Microsoft.TeamFoundation.WorkItemTracking.Client.dll

    01 – Connect to TFS Programmatically

    I have an in-depth blog post on how to connect to TFS programmatically in case you are interested. However, the code snippet below will enable you to connect to TFS using the Team Project Picker.

        // Services I need access to globally
        private static TfsTeamProjectCollection _tfs;
        private static ProjectInfo _selectedTeamProject;
        private static WorkItemStore _wis;

        // Connect to TFS using the Team Project Picker
        public static bool ConnectToTfs()
        {
            var isSelected = false;
            // The user is allowed to select only one project
            var tfsPp = new TeamProjectPicker(TeamProjectPickerMode.SingleProject, false);
            tfsPp.ShowDialog();
            // The TFS project collection
            _tfs = tfsPp.SelectedTeamProjectCollection;
            if (tfsPp.SelectedProjects.Any())
            {
                // The selected Team Project
                _selectedTeamProject = tfsPp.SelectedProjects[0];
                isSelected = true;
            }
            return isSelected;
        }

    02 – Create a Work Item Programmatically

    In the code snippet below I create a Product Backlog Item and a Task type work item and then link them together as parent and child. Note – you will have to set the ChangedDate to a historic date when you create the work item. Remember, if you try to set the ChangedDate to a value earlier than the last assigned value, you will receive the following exception:

        TF26212: Team Foundation Server could not save your changes. There may be problems with the work item type definition. Try again or contact your Team Foundation Server administrator.

    If you notice below, I have added a few seconds each time I have modified the 'ChangedDate', just to avoid running into the exception listed above.
        // Create linked work items and return their ids
        private static List<int> CreateWorkItemsProgrammatically()
        {
            // Instantiate the Work Item Store with the ByPassRules flag
            _wis = new WorkItemStore(_tfs, WorkItemStoreFlags.BypassRules);

            // List of work items to return
            var listOfWorkItems = new List<int>();

            // Create a new Product Backlog Item
            var p = new WorkItem(_wis.Projects[_selectedTeamProject.Name].WorkItemTypes["Product Backlog Item"]);
            p.Title = "This is a new PBI";
            p.Description = "Description";
            p.IterationPath = string.Format("{0}\\Release 1\\Sprint 1", _selectedTeamProject.Name);
            p.AreaPath = _selectedTeamProject.Name;
            p["Effort"] = 10;

            // Just double checking that ByPassRules is set to true
            if (_wis.BypassRules)
            {
                p.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01");
            }
            if (p.Validate().Count == 0)
            {
                p.Save();
                listOfWorkItems.Add(p.Id);
            }
            else
            {
                Console.WriteLine(">> Following exception(s) encountered during work item save: ");
                foreach (var e in p.Validate())
                {
                    Console.WriteLine(" - '{0}' ", e);
                }
            }

            var t = new WorkItem(_wis.Projects[_selectedTeamProject.Name].WorkItemTypes["Task"]);
            t.Title = "This is a task";
            t.Description = "Task Description";
            t.IterationPath = string.Format("{0}\\Release 1\\Sprint 1", _selectedTeamProject.Name);
            t.AreaPath = _selectedTeamProject.Name;
            t["Remaining Work"] = 10;
            if (_wis.BypassRules)
            {
                t.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01");
            }
            if (t.Validate().Count == 0)
            {
                t.Save();
                listOfWorkItems.Add(t.Id);
            }
            else
            {
                Console.WriteLine(">> Following exception(s) encountered during work item save: ");
                foreach (var e in t.Validate())
                {
                    Console.WriteLine(" - '{0}' ", e);
                }
            }

            // Link the task to the PBI as its child
            var linkTypEnd = _wis.WorkItemLinkTypes.LinkTypeEnds["Child"];
            p.Links.Add(new WorkItemLink(linkTypEnd, t.Id) { ChangedDate = Convert.ToDateTime("2012-01-01").AddSeconds(20) });
            if (_wis.BypassRules)
            {
                p.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01").AddSeconds(20);
            }
            if (p.Validate().Count == 0)
            {
                p.Save();
            }
            else
            {
                Console.WriteLine(">> Following exception(s) encountered during work item save: ");
                foreach (var e in p.Validate())
                {
                    Console.WriteLine(" - '{0}' ", e);
                }
            }
            return listOfWorkItems;
        }

    03 – Set the value of "Created Date" and change the value of "Changed Date" to historic dates

    The CreatedDate can only be changed after a work item has been created. If you try to set the CreatedDate to a historic date at the time of creation of a work item, it will not work.
        // Let's do a work item effort burn down simulation by updating the
        // ChangedDate & CreatedDate to historic values
        private static void WorkItemChangeSimulation(IEnumerable<int> listOfWorkItems)
        {
            foreach (var id in listOfWorkItems)
            {
                var wi = _wis.GetWorkItem(id);
                switch (wi.Type.Name)
                {
                    case "ProductBacklogItem":
                        if (wi.State.ToLower() == "new")
                            wi.State = "Approved";
                        // Advance the changed date by a few seconds
                        wi.Fields["System.ChangedDate"].Value =
                            Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10);
                        // Set the CreatedDate to the changed date
                        wi.Fields["System.CreatedDate"].Value =
                            Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10);
                        wi.Save();
                        break;
                    case "Task":
                        // Advance the changed date by a few seconds
                        wi.Fields["System.ChangedDate"].Value =
                            Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10);
                        // Set the CreatedDate to the changed date
                        wi.Fields["System.CreatedDate"].Value =
                            Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10);
                        wi.Save();
                        break;
                }
            }

            // A mock sprint start date
            var sprintStart = DateTime.Today.AddDays(-5);
            // A mock sprint end date
            var sprintEnd = DateTime.Today.AddDays(5);
            // What is the total sprint duration
            var totalSprintDuration = (sprintEnd - sprintStart).Days;
            // How much of the sprint have we already covered
            var noOfDaysIntoSprint = (DateTime.Today - sprintStart).Days;
            // Get the effort assigned to our tasks
            var totalEffortRemaining = QueryTaskTotalEfforRemaining(listOfWorkItems);
            // Define how much effort to burn every day
            decimal dailyBurnRate = totalEffortRemaining / totalSprintDuration < 1
                ? 1
                : totalEffortRemaining / totalSprintDuration;
            // We have just created one task
            var totalNoOfTasks = 1;

            var simulation = sprintStart;
            var currentDate = DateTime.Today.Date;

            // Carry on till effort has been burned down from sprint start to today
            while (simulation.Date != currentDate.Date)
            {
                var dailyBurnRate1 = dailyBurnRate;
                // A fixed amount needs to be burned down each day
                while (dailyBurnRate1 > 0)
                {
                    // Burn down bit by bit from all unfinished task type work items
                    foreach (var id in listOfWorkItems)
                    {
                        var wi = _wis.GetWorkItem(id);
                        var isDirty = false;
                        // Set the status to in progress
                        if (wi.State.ToLower() == "to do")
                        {
                            wi.State = "In Progress";
                            isDirty = true;
                        }
                        // Ensure that there is enough effort remaining in tasks to burn down the daily burn rate
                        if (QueryTaskTotalEfforRemaining(listOfWorkItems) > dailyBurnRate1)
                        {
                            // If there is less than 1 unit of effort left in the task, burn it all
                            if (Convert.ToDecimal(wi["Remaining Work"]) <= 1)
                            {
                                wi["Remaining Work"] = 0;
                                dailyBurnRate1 = dailyBurnRate1 - Convert.ToDecimal(wi["Remaining Work"]);
                                isDirty = true;
                            }
                            else
                            {
                                // How much to burn from each task?
                                var toBurn = (dailyBurnRate / totalNoOfTasks) < 1
                                    ? 1
                                    : (dailyBurnRate / totalNoOfTasks);
                                // Check that the task has enough effort to allow burning toBurn effort
                                if (Convert.ToDecimal(wi["Remaining Work"]) >= toBurn)
                                {
                                    wi["Remaining Work"] = Convert.ToDecimal(wi["Remaining Work"]) - toBurn;
                                    dailyBurnRate1 = dailyBurnRate1 - toBurn;
                                    isDirty = true;
                                }
                                else
                                {
                                    wi["Remaining Work"] = 0;
                                    dailyBurnRate1 = dailyBurnRate1 - Convert.ToDecimal(wi["Remaining Work"]);
                                    isDirty = true;
                                }
                            }
                        }
                        else
                        {
                            dailyBurnRate1 = 0;
                        }
                        if (isDirty)
                        {
                            if (Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).Date == simulation.Date)
                            {
                                wi.Fields["System.ChangedDate"].Value =
                                    Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(20);
                            }
                            else
                            {
                                wi.Fields["System.ChangedDate"].Value = simulation.AddSeconds(20);
                            }
                            wi.Save();
                        }
                    }
                }
                // Increase the date by 1 to perform the burn down day by day
                simulation = Convert.ToDateTime(simulation).AddDays(1);
            }
        }

        // Get the total effort remaining in the current sprint
        private static decimal QueryTaskTotalEfforRemaining(List<int> listOfWorkItems)
        {
            var unfinishedWorkInCurrentSprint = _wis.GetQueryDefinition(
                new Guid(QueryAndGuid.FirstOrDefault(c => c.Key == "Unfinished Work").Value));
            var parameters = new Dictionary<string, object> { { "project", _selectedTeamProject.Name } };
            var q = new Query(_wis, unfinishedWorkInCurrentSprint.QueryText, parameters);
            var results = q.RunLinkQuery();
            var wis = new List<WorkItem>();
            foreach (var result in results)
            {
                var _wi = _wis.GetWorkItem(result.TargetId);
                if (_wi.Type.Name == "Task" && listOfWorkItems.Contains(_wi.Id))
                    wis.Add(_wi);
            }
            return wis.Sum(r => Convert.ToDecimal(r["Remaining Work"]));
        }

    04 – The Results

    If you are still reading, the results are beautiful!

    Image 1 – Create a work item with the Changed Date pre-set to a historic date.
    Image 2 – Set the CreatedDate to a historic date (the same as the ChangedDate).
    Image 3 – Simulation of effort burn down on a task via the TFS API.
    Image 4 – The history of changes on the task. So, essentially this task has burned 1 hour per day.

    Sprint Burn Down Chart – What's not possible?

    The sprint burn down chart is calculated from System.AuthorizedDate and not from System.ChangedDate/System.CreatedDate. So, though you can change System.ChangedDate and System.CreatedDate to historic dates, you will not be able to synthesize the sprint burn down chart.

    Image 1 – By changing the Created Date and Changed Date to '18/Oct/2012' you would have expected the burn down to have been impacted, but it won't be, because the sprint burn down chart uses the value of the field 'System.AuthorizedDate' to calculate the unfinished work points. The AsOf queries that are used to calculate the unfinished work points use the value of the field 'System.AuthorizedDate'.

    Image 2 – Using the above code I burned down 1 hour of effort per day over 5 days from the task work item. I would have expected the sprint burn down to show a constant burn down; instead the burn down shows the effort exhausted on the 24th itself, simply because the burn down is calculated using 'System.AuthorizedDate'.

    Now you would ask… "Can I change the value of the field System.AuthorizedDate to a historic date?" Unfortunately that's not possible!
    You will run into the exception ValidationException – "TF26194: The value for field 'Authorized Date' cannot be changed."

    Conclusion

    - You need to be a member of the Project Collection Service Accounts group in order to set the fields 'System.ChangedDate' and 'System.CreatedDate' to historic dates.
    - You need to instantiate the WorkItemStore using the flag BypassRules.
    - The System.ChangedDate needs to be set to a historic date at the time of work item creation. You cannot reset the ChangedDate to a date earlier than the existing ChangedDate, and you cannot reset the ChangedDate to a date greater than the current date time.
    - The System.CreatedDate can only be reset after a work item has been created. You cannot set the CreatedDate at the time of work item creation. The CreatedDate cannot be greater than the current date. You can, however, reset the CreatedDate to a date earlier than the existing value.
    - You will not be able to synthesize the sprint burn down chart by changing the values of System.ChangedDate and System.CreatedDate to historic dates, since the burn down chart uses AsOf queries to calculate the unfinished work points, which internally use System.AuthorizedDate and NOT System.ChangedDate & System.CreatedDate.
    - System.AuthorizedDate cannot be set to a historic date using the TFS API.

    Read other posts on using the TFS API here… Enjoy!

    Read the article

  • Selling Federal Enterprise Architecture (EA)

    - by TedMcLaughlan
    Selling Federal Enterprise Architecture: a taxonomy of subject areas from which to develop a prioritized marketing and communications plan to evangelize EA activities within and among US Federal Government organizations and constituents. Any and all feedback is appreciated, particularly in developing and extending this discussion as a tool for use – more information and details are also available.

    "Selling" the discipline of Enterprise Architecture (EA) in the Federal Government (particularly in non-DoD agencies) is difficult, notwithstanding the general availability and use of the Federal Enterprise Architecture Framework (FEAF) for some time now, and the relatively mature use of the reference models in the OMB Capital Planning and Investment Control (CPIC) cycles. EA in the Federal Government also tends to be a very esoteric and hard-to-decipher conversation – early apologies to those who agree to continue reading this somewhat lengthy article.

    Alignment to the FEAF and OMB compliance mandates has long been underway across the Federal Departments and Agencies (and is visible via tools like PortfolioStat and ITDashboard.gov) – but there is still a gap between the top-down compliance directives and enablement programs, and the bottom-up awareness and effective use of EA for either IT investment management or actual mission effectiveness. "EA isn't getting deep enough penetration into programs, components, sub-agencies, etc.", verified a panelist at the most recent EA Government Conference in DC. Newer guidance from OMB may be especially difficult to handle, where bottom-up input can't be accurately aligned, analyzed and reported via standardized EA discipline at the Agency level – for example in addressing the new (for FY13) Exhibit 53D "Agency IT Reductions and Reinvestments" and the information required for "Cloud Computing Alternatives Evaluation" (supporting the new Exhibit 53C, "Agency Cloud Computing Portfolio").

    Therefore, EA must be "sold" directly to the communities that matter, from a coordinated, proactive messaging perspective that takes BOTH the program-level value drivers AND the broader Agency mission and IT maturity context into consideration. Selling EA means persuading others to take additional time and possibly assign additional resources, for a mix of direct and indirect benefits – many of which aren't likely to be realized in the short term. This means there's probably little current, allocated budget to work with; ergo the challenge of trying to sell an "unfunded mandate". Also, the concept of "Enterprise" in large Departments like Homeland Security tends to cross all kinds of organizational boundaries – as Richard Spires recently indicated by commenting that "...organizational boundaries still trump functional similarities. Most people understand what we're trying to do internally, and at a high level they get it. The problem, of course, is when you get down to them and their system and the fact that you're going to be touching them...there's always that fear factor," Spires said.

    It is quite clear to the Federal IT investment community that for EA to meet its objective, understandable, relevant value must be measured and reported using a repeatable method – as described by GAO's recent report "Enterprise Architecture Value Needs To Be Measured and Reported". What's not clear is the method or guidance for selling this value. In fact, the current GAO "Framework for Assessing and Improving Enterprise Architecture Management (Version 2.0)", a.k.a. the "EAMMF", does not include words like "sell", "persuade", "market", etc., except in reference (within Core Element 19: "Organization business owner and CXO representatives are actively engaged in architecture development") to a brief section in the CIO Council's 2001 "Practical Guide to Federal Enterprise Architecture", entitled "3.3.1. Develop an EA Marketing Strategy and Communications Plan." Furthermore, Core Element 19 of the EAMMF is advised to be applied in "Stage 3: Developing Initial EA Versions". This kind of EA sales campaign truly should start much earlier in the maturity progression, i.e. in Stages 0 or 1.

    So, what are the understandable, relevant benefits (or value) to sell, that can find an agreeable, participatory audience, and can pave the way towards the success of a longer-term, funded set of EA mechanisms that can be methodically measured and reported? Pragmatic benefits from a useful EA that can help overcome the fear of change? And how should they be sold?

    Following is a brief taxonomy (it's a taxonomy, to help organize SME support) of benefit-related subjects that might make the most sense in creating the messages and organizing an initial "engagement plan" for evangelizing EA "from within" – an EA "sales taxonomy" of sorts. We're not boiling the ocean here; the subjects that are included are ones that currently appear to be urgently relevant to the current Federal IT investment landscape. Note that successful dialogue on these topics is directly usable as input or guidance for actually developing early-stage, "Fit-for-Purpose" (a DoDAF term) Enterprise Architecture artifacts, as prescribed by common methods found in most EA methodologies, including FEAF, TOGAF, DoDAF and our own Oracle Enterprise Architecture Framework (OEAF).

    The taxonomy below is organized by (1) Target Community, (2) Benefit or Value, and (3) EA Program Facet – as in: "Let's talk to (1: Community Member) about how and why (3: EA Facet) the EA program can help with (2: Benefit/Value)". Once the initial discussion targets and subjects are approved (those that can be measured and reported), a "marketing and communications plan" can be created. A working example follows the taxonomy.

    Enterprise Architecture Sales Taxonomy (Draft, Summary Version)

    1. Community
       1.1. Budgeted Programs or Portfolios - Communities of Purpose (CoPR)
            1.1.1. Program/System Owners (Senior Execs) Creating or Executing Acquisition Plans
            1.1.2. Program/System Owners Facing Strategic Change
                   1.1.2.1. Mandated
                   1.1.2.2. Expected/Anticipated
            1.1.3. Program Managers - Creating Employee Performance Plans
            1.1.4. CO/COTRs - Creating Contractor Performance Plans, or evaluating Value Engineering Change Proposals (VECP)
       1.2. Governance & Communications - Communities of Practice (CoP)
            1.2.1. Policy Owners
                   1.2.1.1. OCFO
                            1.2.1.1.1. Budget/Procurement Office
                            1.2.1.1.2. Strategic Planning
                   1.2.1.2. OCIO
                            1.2.1.2.1. IT Management
                            1.2.1.2.2. IT Operations
                            1.2.1.2.3. Information Assurance (Cyber Security)
                            1.2.1.2.4. IT Innovation
                   1.2.1.3. Information-Sharing / Process Collaboration (i.e. policies and procedures regarding Partners, Agreements)
            1.2.2. Governing IT Council / SME Peers (i.e. an "Architects Council")
                   1.2.2.1. Enterprise Architects (assumes others exist; also assumes EA participants aren't buried solely within the CIO shop)
                   1.2.2.2. Domain, Enclave, Segment Architects - i.e. the right affinity group for a "shared services" EA structure (per the EAMMF), which may be classified as Federated, Segmented, Service-Oriented, or Extended
                   1.2.2.3. External Oversight/Constraints
                            1.2.2.3.1. GAO/OIG & Legal
                            1.2.2.3.2. Industry Standards
                            1.2.2.3.3. Official public notification, response
            1.2.3. Mission Constituents - Participant & Analyst Community of Interest (CoI)
                   1.2.3.1. Mission Operators/Users
                   1.2.3.2. Public Constituents
                   1.2.3.3. Industry Advisory Groups, Stakeholders
                   1.2.3.4. Media

    2. Benefit/Value (Note: the actual benefits may not be discretely attributable to EA alone; EA is a very collaborative, cross-cutting discipline.)
       2.1. Program Costs - EA enables sound decisions regarding...
            2.1.1. Cost Avoidance - a TCO theme
            2.1.2. Sequencing - alignment of capability delivery
            2.1.3. Budget Instability - a Federal reality
       2.2. Investment Capital - EA illuminates new investment resources via...
            2.2.1. Value Engineering - contractor-driven cost savings on existing budgets, direct or collateral
            2.2.2. Reuse - reuse of investments between programs can result in savings, chargeback models; avoiding duplication
            2.2.3. License Refactoring - IT license & support models may not reflect actual or intended usage
       2.3. Contextual Knowledge - EA enables informed decisions by revealing...
            2.3.1. Common Operating Picture (COP) - i.e. cross-program impacts and synergy, relative to context
            2.3.2. Expertise & Skill - who truly should be involved in architectural decisions, both business and IT
            2.3.3. Influence - the impact of politics and relationships can be examined
            2.3.4. Disruptive Technologies - new technologies may reduce costs or mitigate risk in unanticipated ways
            2.3.5. What-If Scenarios - can become much more refined, current, verifiable; basis for Target Architectures
       2.4. Mission Performance - EA enables beneficial decision results regarding...
            2.4.1. IT Performance and Optimization - towards 100% effective, available resource utilization
            2.4.2. IT Stability - towards 100%, real-time uptime
            2.4.3. Agility - responding to rapid changes in mission
            2.4.4. Outcomes - measures of mission success, KPIs - vs. only "Outputs"
            2.4.5. Constraints - appropriate response to constraints
            2.4.6. Personnel Performance - better line-of-sight through performance plans to mission outcome
       2.5. Mission Risk Mitigation - EA mitigates decision risks in terms of...
            2.5.1. Compliance - all the right boxes are checked
            2.5.2. Dependencies - cross-agency, segment, government
            2.5.3. Transparency - risks, impact and resource utilization are illuminated quickly, comprehensively
            2.5.4. Threats and Vulnerabilities - current, realistic awareness and profiles
            2.5.5. Consequences - realization of risk can be mapped as a series of consequences, from earlier decisions or new decisions required for current issues
                   2.5.5.1. Unanticipated - illuminating signals of future or non-symmetric risk; helping to "future-proof"
                   2.5.5.2. Anticipated - discovering the level of impact that matters

    3. EA Program Facet (What parts of the EA can and should be communicated, using business or mission terms?)
       3.1. Architecture Models - the visual tools to be created and used
            3.1.1. Operating Architecture - the Business Operating Model/Architecture elements of the EA truly drive all other elements, plus expose communication channels
            3.1.2. Use Of - how can the EA models be used, and how are they populated, from a reasonable, pragmatic yet compliant perspective? What are the core/minimal models required? What's the relationship of these models with existing system models?
            3.1.3. Scope - what level of granularity within the models, and what level of abstraction across the models, is likely to be most effective and useful?
       3.2. Traceability - the maturity, status, completeness of the tools
            3.2.1. Status - what in fact is the degree of maturity across the integrated EA model and other relevant governance models, and who may already be benefiting from it?
            3.2.2. Visibility - how does the EA visibly and effectively prove IT investment performance goals are being reached, with positive mission outcome?
       3.3. Governance - what's the interaction, participation method; how are the tools used?
            3.3.1. Contributions - how is the EA program informed, how does it accept submissions and collect data? Who are the experts?
            3.3.2. Review - how is the EA validated, against what criteria?

    Taxonomy Usage Example:

    1. To speak with:
       a. ...a particular set of System Owners Facing Strategic Change, via mandate (like the "Cloud First" mandate); about...
       b. ...how the EA program's visible and easily accessible Infrastructure Reference Model (i.e. "IRM" or "TRM"), if updated more completely with current system data, can...
       c. ...help shed light on ways to mitigate risks and avoid future costs associated with NOT leveraging potentially-available shared services across the enterprise...

    2. ...the following Marketing & Communications (Sales) Plan can be constructed:
       a. Create an easy-to-read "Consequence Model" that illustrates how adoption of a cloud capability (like elastic operational storage) can enable rapid and durable compliance with the mandate – using EA traceability. Traceability might be from the IRM to the ARM (that identifies reusable services invoking the elastic storage), and then to the PRM with performance measures (such as % utilization of purchased storage allocation) included in the OMB Exhibits; and
       b. Schedule a meeting with the Program Owners, timed during their Acquisition Strategy meetings in response to the mandate, to use the "Consequence Model" for advising them to organize a rapid and relevant RFI solicitation for this cloud capability (regarding alternatives for sourcing elastic operational storage); and
       c. Schedule a series of short "Discovery" meetings with the system architecture leads (as agreed by the Program Owners), to further populate/validate the "As-Is" models and frame the "To-Be" models (via scenarios), to better inform the RFI, obtain the best feedback from the vendor community, and provide potential value for and avoid impact to all other programs and systems.

    -- end example --

    Note that communications with the intended audience should take a page out of the standard "Search Engine Optimization" (SEO) playbook, using keywords and phrases relating to "value" and "outcome" vs. "compliance" and "output". Searches in email boxes, and in internal and external search engines, for phrases like "cost avoidance strategies", "mission performance metrics" and "innovation funding" should yield messages and content from the EA team. This targeted, informed, practical sales approach should result in additional buy-in and participation, additional EA information contribution and model validation, development of more SMEs, and quick "proof points" (with real-life testing) to bolster the case for EA. The proof point here is a successful, timely procurement that satisfies not only the external mandate and external oversight review, but also meets internal EA compliance/conformance goals and is therefore more transparently useful across the community.
In short, if sold effectively, the EA will perform and be recognized: it will no longer be used only for compliance, but also, according to a validated and stated purpose, to directly influence decisions and outcomes.

The opinions, views and analysis expressed in this document are those of the author and do not necessarily reflect the views of Oracle.


  • How to get Passive FTP Working Through an Iptables Firewall?

    - by user1133248
I have an iptables firewall running on a Fedora Linux server that is basically being used as a firewall router and OpenVPN server. That's it. We have been using the same iptables firewall code for YEARS. I did make some changes on 21 December to re-route a MySQL port, but given what has happened I've completely backed those changes out. Sometime after those changes were made and backed out, passive FTP, served from a vsftpd process, stopped working. We use a passive ftp client to FLING (that's the name of the ftp client running under Windows! :-) ) images from our remote telescopes to our server. I believe it is something in the firewall code, because if I drop the firewall the FTP file transfer works (as does connecting to the ftp site with Internet Explorer to see the file list). When I raise the iptables firewall, it stops working. Again, this is code that we'd been using for years. However, I felt that maybe there was something I missed, so I tried a .bak file we had from 2009. Same behavior: passive ftp does not work. So, I went and rebuilt the firewall code line by line to see what line was causing the problem. Everything worked until I put in the line -A FORWARD -j DROP very near the end. Of course, if I am correct, this is the line that basically "turns on" the firewall, saying drop everything except for the exceptions I've made above. However, this line has been in the iptables code probably since 2003. So, I'm at the end of my rope, and I still can't figure out why this has stopped working. I guess I need an expert on iptables configuration. Here is the iptables code (from iptables-save) with comments.

# Generated by iptables-save v1.3.8 on Thu Jan 5 18:36:25 2012
*nat
# One of the things that I remain ignorant about is what these following three lines
# do in both the nat table (which we're not using on this machine) and the following
# filter table. I don't know what the numbers are, but I'm ASSUMING they're port
# ranges.
:PREROUTING ACCEPT [7435:551429]
:POSTROUTING ACCEPT [6097:354458]
:OUTPUT ACCEPT [5:451]
COMMIT
# Completed on Thu Jan 5 18:36:25 2012
# Generated by iptables-save v1.3.8 on Thu Jan 5 18:36:25 2012
*filter
:INPUT ACCEPT [10423:1046501]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [15184:16948770]
# The following line is for my OpenVPN configuration.
-A INPUT -i tun+ -j ACCEPT
# In researching this on the Internet I found some iptables code that was supposed to
# open the needed ports up. I never needed this before this week, but since passive FTP
# was no longer working, I decided to put the code in. The next three lines are part of
# that code.
-A INPUT -p tcp -m tcp --dport 21 -m state --state NEW,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --sport 1024:65535 --dport 20 -m state --state ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --sport 1024:65535 --dport 1024:65535 -m state --state RELATED,ESTABLISHED -j ACCEPT
# Another line for the OpenVPN configuration. I don't know why iptables-save mixed
# the lines up.
-A FORWARD -i tun+ -j ACCEPT
# Various forwards for all our services
-A FORWARD -s 65.118.148.197 -p tcp -m tcp --dport 3307 -j ACCEPT
-A FORWARD -d 65.118.148.197 -p tcp -m tcp --dport 3307 -j ACCEPT
-A FORWARD -s 65.118.148.197 -p tcp -m tcp --dport 3306 -j ACCEPT
-A FORWARD -d 65.118.148.197 -p tcp -m tcp --dport 3306 -j ACCEPT
-A FORWARD -s 65.118.148.196 -p tcp -m tcp --dport 21 -j ACCEPT
-A FORWARD -d 65.118.148.196 -p tcp -m tcp --dport 21 -j ACCEPT
-A FORWARD -d 65.118.148.196 -p tcp -m tcp --dport 20 -j ACCEPT
-A FORWARD -s 65.118.148.196 -p tcp -m tcp --dport 20 -j ACCEPT
-A FORWARD -s 65.118.148.196 -p tcp -m tcp --dport 7191 -j ACCEPT
-A FORWARD -d 65.118.148.196 -p tcp -m tcp --dport 7191 -j ACCEPT
-A FORWARD -s 65.118.148.196 -p tcp -m tcp --dport 46000:46999 -j ACCEPT
-A FORWARD -d 65.118.148.196 -p tcp -m tcp --dport 46000:46999 -j ACCEPT
-A FORWARD -s 65.118.148.0/255.255.255.0 -j ACCEPT
-A FORWARD -d 65.118.148.196 -p udp -m udp --dport 53 -j ACCEPT
-A FORWARD -s 65.118.148.196 -p udp -m udp --dport 53 -j ACCEPT
-A FORWARD -d 65.118.148.196 -p tcp -m tcp --dport 53 -j ACCEPT
-A FORWARD -s 65.118.148.196 -p tcp -m tcp --dport 53 -j ACCEPT
-A FORWARD -d 65.118.148.196 -p udp -m udp --dport 25 -j ACCEPT
-A FORWARD -s 65.118.148.196 -p udp -m udp --dport 25 -j ACCEPT
-A FORWARD -d 65.118.148.196 -p tcp -m tcp --dport 42 -j ACCEPT
-A FORWARD -s 65.118.148.196 -p tcp -m tcp --dport 42 -j ACCEPT
-A FORWARD -s 65.118.148.196 -p tcp -m tcp --dport 25 -j ACCEPT
-A FORWARD -d 65.118.148.196 -p tcp -m tcp --dport 25 -j ACCEPT
-A FORWARD -d 65.118.148.196 -p tcp -m tcp --dport 80 -j ACCEPT
-A FORWARD -s 65.118.148.196 -p tcp -m tcp --dport 80 -j ACCEPT
-A FORWARD -d 65.118.148.204 -p tcp -m tcp --dport 80 -j ACCEPT
-A FORWARD -s 65.118.148.204 -p tcp -m tcp --dport 80 -j ACCEPT
-A FORWARD -d 65.118.148.196 -p tcp -m tcp --dport 6667 -j ACCEPT
-A FORWARD -s 65.118.148.196 -p tcp -m tcp --dport 6667 -j ACCEPT
-A FORWARD -s 65.96.214.242 -p tcp -m tcp --dport 22 -j ACCEPT
-A FORWARD -s 192.68.148.66 -p tcp -m tcp --dport 22 -j ACCEPT
-A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
# "The line" that causes passive ftp to stop working. Insofar as I can tell, everything
# else seems to work - ssh, telnet, mysql, httpd.
-A FORWARD -j DROP
-A FORWARD -p icmp -j ACCEPT
# The following code is again part of my attempt to put in code that would cause passive
# ftp to work. I don't know why iptables-save scattered it about like this.
-A OUTPUT -p tcp -m tcp --sport 21 -m state --state ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp -m tcp --sport 20 --dport 1024:65535 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp -m tcp --sport 1024:65535 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT
COMMIT
# Completed on Thu Jan 5 18:36:25 2012

So, with all that prelude, my basic question is: How can I get passive ftp to work behind an iptables firewall? As you can see, I've tried to get it working (again) and tried to do some research on the issue, but have come up...short. Any answers would be appreciated by both me and various variable star astronomers around the world! THANKS! -Richard "Doc" Kinne, American Assoc. of Variable Star Observers, [email protected]
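A conventional starting point for this class of problem, sketched below under stated assumptions rather than as a verified fix for this exact setup, is to let the kernel's FTP connection-tracking helper classify the dynamically negotiated passive data connections as RELATED, so the existing stateful FORWARD rule accepts them before the final DROP. The module name varies by kernel (ip_conntrack_ftp on older 2.6 kernels, nf_conntrack_ftp on newer ones), and the 46000:46999 passive port range is an assumption chosen to match the range the FORWARD rules above already accept:

# Load the FTP connection-tracking helper so passive data connections are
# classified as RELATED by the stateful rules.
modprobe nf_conntrack_ftp        # on older kernels: modprobe ip_conntrack_ftp

# On Fedora/RHEL, make the module load persistent across reboots by listing
# it in /etc/sysconfig/iptables-config:
#   IPTABLES_MODULES="nf_conntrack_ftp"

# In /etc/vsftpd/vsftpd.conf, pin the passive data ports to a known range so
# the server and the firewall agree (values assumed to match the 46000:46999
# range already forwarded above):
#   pasv_enable=YES
#   pasv_min_port=46000
#   pasv_max_port=46999

# With the helper loaded, the stateful rule already present in the ruleset,
# placed before the final DROP, covers the negotiated data connections:
#   -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
#   -A FORWARD -j DROP

Two incidental notes on the listing above: the bracketed numbers in the chain-policy lines (e.g. [7435:551429]) are packet and byte counters, not port ranges; and because iptables evaluates rules in order, the -A FORWARD -p icmp -j ACCEPT rule appended after -A FORWARD -j DROP can never match, so any ACCEPT exception must precede the final DROP.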


  • Looking for a workaround for IE 6/7 "Unspecified Error" bug when accessing offsetParent; using ASP.NET

    - by CodeChef
I'm using jQuery UI's draggable and droppable libraries in a simple ASP.NET proof-of-concept application. The page uses the ASP.NET AJAX UpdatePanel to do partial page updates. It allows a user to drop an item into a trashcan div, which invokes a postback that deletes a record from the database, then rebinds the list (and other controls) that the item was dragged from. All of these elements (the draggable items and the trashcan div) are inside an ASP.NET UpdatePanel. Here is the dragging and dropping initialization script:

function initDragging() {
    $(".person").draggable({ helper: 'clone' });
    $("#trashcan").droppable({
        accept: '.person',
        tolerance: 'pointer',
        hoverClass: 'trashcan-hover',
        activeClass: 'trashcan-active',
        drop: onTrashCanned
    });
}

$(document).ready(function() {
    initDragging();
    var prm = Sys.WebForms.PageRequestManager.getInstance();
    prm.add_endRequest(function() {
        initDragging();
    });
});

function onTrashCanned(e, ui) {
    var id = $('input[id$=hidID]', ui.draggable).val();
    if (id != undefined) {
        $('#hidTrashcanID').val(id);
        __doPostBack('btnTrashcan', '');
    }
}

When the page posts back, partially updating the UpdatePanel's content, I rebind the draggables and droppables. When I then grab a draggable with my cursor, I get an "htmlfile: Unspecified error." exception. I can resolve this problem in the jQuery library by replacing elem.offsetParent with calls to this function that I wrote:

function IESafeOffsetParent(elem) {
    try {
        return elem.offsetParent;
    } catch (e) {
        return document.body;
    }
}

I also have to avoid calls to elem.getBoundingClientRect(), as it throws the same error. For those interested, I only had to make these changes in the jQuery.fn.offset function in the Dimensions plugin. My questions are:

1. Although this works, are there better ways (cleaner, better performance, without having to modify the jQuery library) to solve this problem?
2. If not, what's the best way to keep my changes in sync when I update the jQuery libraries in the future? For example, can I extend the library somewhere other than inline in the files that I download from the jQuery website?

Update: @some It's not publicly accessible, but I will see if SO will let me post the relevant code into this answer. Just create an ASP.NET Web Application (name it DragAndDrop) and create these files. Don't forget to set Complex.aspx as your start page.
You'll also need to download the jQuery UI drag and drop plug-in as well as jQuery core.

Complex.aspx

<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Complex.aspx.cs" Inherits="DragAndDrop.Complex" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title>Untitled Page</title>
    <script src="jquery-1.2.6.min.js" type="text/javascript"></script>
    <script src="jquery-ui-personalized-1.5.3.min.js" type="text/javascript"></script>
    <script type="text/javascript">
        function initDragging() {
            $(".person").draggable({ helper: 'clone' });
            $("#trashcan").droppable({
                accept: '.person',
                tolerance: 'pointer',
                hoverClass: 'trashcan-hover',
                activeClass: 'trashcan-active',
                drop: onTrashCanned
            });
        }

        $(document).ready(function() {
            initDragging();
            var prm = Sys.WebForms.PageRequestManager.getInstance();
            prm.add_endRequest(function() {
                initDragging();
            });
        });

        function onTrashCanned(e, ui) {
            var id = $('input[id$=hidID]', ui.draggable).val();
            if (id != undefined) {
                $('#hidTrashcanID').val(id);
                __doPostBack('btnTrashcan', '');
            }
        }
    </script>
</head>
<body>
    <form id="form1" runat="server">
        <asp:ScriptManager ID="ScriptManager1" runat="server"></asp:ScriptManager>
        <div>
            <asp:UpdatePanel ID="updContent" runat="server" UpdateMode="Always">
                <ContentTemplate>
                    <asp:LinkButton ID="btnTrashcan" Text="trashcan" runat="server" CommandName="trashcan" onclick="btnTrashcan_Click" style="display:none;"></asp:LinkButton>
                    <input type="hidden" id="hidTrashcanID" runat="server" />
                    <asp:Button ID="Button1" runat="server" Text="Save" onclick="Button1_Click" />
                    <table>
                        <tr>
                            <td style="width: 300px;">
                                <asp:DataList ID="lstAllPeople" runat="server" DataSourceID="odsAllPeople" DataKeyField="ID">
                                    <ItemTemplate>
                                        <div class="person">
                                            <asp:HiddenField ID="hidID" runat="server" Value='<%# Eval("ID") %>' />
                                            Name: <asp:Label ID="lblName" runat="server" Text='<%# Eval("Name") %>' />
                                            <br /><br />
                                        </div>
                                    </ItemTemplate>
                                </asp:DataList>
                                <asp:ObjectDataSource ID="odsAllPeople" runat="server" SelectMethod="SelectAllPeople" TypeName="DragAndDrop.Complex+DataAccess" onselecting="odsAllPeople_Selecting">
                                    <SelectParameters>
                                        <asp:Parameter Name="filter" Type="Object" />
                                    </SelectParameters>
                                </asp:ObjectDataSource>
                            </td>
                            <td style="width: 300px; vertical-align: top;">
                                <div id="trashcan">drop here to delete</div>
                                <asp:DataList ID="lstPeopleToDelete" runat="server" DataSourceID="odsPeopleToDelete">
                                    <ItemTemplate>
                                        ID: <asp:Label ID="IDLabel" runat="server" Text='<%# Eval("ID") %>' />
                                        <br />
                                        Name: <asp:Label ID="NameLabel" runat="server" Text='<%# Eval("Name") %>' />
                                        <br /><br />
                                    </ItemTemplate>
                                </asp:DataList>
                                <asp:ObjectDataSource ID="odsPeopleToDelete" runat="server" onselecting="odsPeopleToDelete_Selecting" SelectMethod="GetDeleteList" TypeName="DragAndDrop.Complex+DataAccess">
                                    <SelectParameters>
                                        <asp:Parameter Name="list" Type="Object" />
                                    </SelectParameters>
                                </asp:ObjectDataSource>
                            </td>
                        </tr>
                    </table>
                </ContentTemplate>
            </asp:UpdatePanel>
        </div>
    </form>
</body>
</html>

Complex.aspx.cs

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web.UI.WebControls;

namespace DragAndDrop
{
    public partial class Complex : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
        }

        // The delete list is kept in ViewState so it survives partial postbacks.
        protected List<int> DeleteList
        {
            get
            {
                if (ViewState["dl"] == null)
                {
                    List<int> dl = new List<int>();
                    ViewState["dl"] = dl;
                    return dl;
                }
                else
                {
                    return (List<int>)ViewState["dl"];
                }
            }
        }

        public class DataAccess
        {
            public IEnumerable<Person> SelectAllPeople(IEnumerable<int> filter)
            {
                return Database.SelectAll().Where(p => !filter.Contains(p.ID));
            }

            public IEnumerable<Person> GetDeleteList(IEnumerable<int> list)
            {
                return Database.SelectAll().Where(p => list.Contains(p.ID));
            }
        }

        protected void odsAllPeople_Selecting(object sender, ObjectDataSourceSelectingEventArgs e)
        {
            e.InputParameters["filter"] = this.DeleteList;
        }

        protected void odsPeopleToDelete_Selecting(object sender, ObjectDataSourceSelectingEventArgs e)
        {
            e.InputParameters["list"] = this.DeleteList;
        }

        protected void Button1_Click(object sender, EventArgs e)
        {
            foreach (int id in DeleteList)
            {
                Database.DeletePerson(id);
            }
            DeleteList.Clear();
            lstAllPeople.DataBind();
            lstPeopleToDelete.DataBind();
        }

        protected void btnTrashcan_Click(object sender, EventArgs e)
        {
            int id = int.Parse(hidTrashcanID.Value);
            DeleteList.Add(id);
            lstAllPeople.DataBind();
            lstPeopleToDelete.DataBind();
        }
    }
}

Database.cs

using System.Collections.Generic;

namespace DragAndDrop
{
    public static class Database
    {
        private static Dictionary<int, Person> _people = new Dictionary<int, Person>();

        static Database()
        {
            Person[] people = new Person[]
            {
                new Person("Chad"),
                new Person("Carrie"),
                new Person("Richard"),
                new Person("Ron")
            };
            foreach (Person p in people)
            {
                _people.Add(p.ID, p);
            }
        }

        public static IEnumerable<Person> SelectAll()
        {
            return _people.Values;
        }

        public static void DeletePerson(int id)
        {
            if (_people.ContainsKey(id))
            {
                _people.Remove(id);
            }
        }

        public static Person CreatePerson(string name)
        {
            Person p = new Person(name);
            _people.Add(p.ID, p);
            return p;
        }
    }

    public class Person
    {
        private static int _curID = 1;

        public int ID { get; set; }
        public string Name { get; set; }

        public Person()
        {
            ID = _curID++;
        }

        public Person(string name) : this()
        {
            Name = name;
        }
    }
}
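On the first question, one commonly used pattern, shown here as a minimal sketch rather than a tested fix for this exact IE 6/7 bug, is to keep the workaround in a separate patch file loaded after jQuery (and after the Dimensions plugin, since it also installs an offset implementation) instead of editing the library files inline. The file name jquery.ie-offset-patch.js is hypothetical, and the { top: 0, left: 0 } fallback is a deliberately degraded value for the rare throwing path:

// jquery.ie-offset-patch.js (hypothetical file name)
// Include this after jquery-1.2.6.min.js and any plugin that overrides
// jQuery.fn.offset; it wraps whichever implementation was installed last.
(function($) {
    // Keep a reference to the original offset implementation.
    var originalOffset = $.fn.offset;

    $.fn.offset = function() {
        try {
            // Normal case: delegate to the stock implementation.
            return originalOffset.apply(this, arguments);
        } catch (e) {
            // IE 6/7 can throw "htmlfile: Unspecified error." when touching
            // offsetParent or getBoundingClientRect on nodes re-rendered by
            // an UpdatePanel partial postback. Fall back to a harmless
            // default instead of letting the drag operation die.
            return { top: 0, left: 0 };
        }
    };
})(jQuery);

Because this patch never touches the downloaded library files, it also speaks to the second question: upgrading jQuery becomes a drop-in replacement, and the wrapper keeps working as long as jQuery.fn.offset keeps the same contract (no arguments in 1.2.x, returning an object with top and left properties).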

