Search Results

Search found 1473 results on 59 pages for 'richard jones'.


  • Mr Flibble: As Seen Through a Lens, Darkly

    - by Phil Factor
    One of the rewarding things about getting involved with Simple-Talk has been meeting and working with some pretty daunting talents. I’d like to say that Dom Reed’s talents are at the end of the visible spectrum, but then there is Richard, who pops up on national radio occasionally, presenting intellectual programs, Andrew, master of the ukulele, with his pioneering local history work, and Tony, with his marathon running and his past as a university lecturer. However, Dom, who is Red Gate’s head of creative design and who did the preliminary design work for Simple-Talk, has taken art photography to an extreme that was impossible before Photoshop. He’s not the first person to take a photograph of himself every day for two years, but he is definitely the first to weave the results into a frightening narrative that veers from comedy to pathos, using all the arts of Photoshop to create a fictional character, Mr Flibble. Have a look at some of the Flickr pages: ‘Uncle Spike’, ‘The B-Men – Woolverine’, ‘The 2011 BoyZ iN Sink reunion tour turned out to be their last’ and ‘Error 404 – Flibble not found’. Mr Flibble is not a normal type of alter-ego. We generally prefer to choose bronze age warriors of impossibly magnificent physique and stamina; superheroes who bestride the world, scorning the forces of evil and anarchy in a series of noble and righteous quests. Not so Dom, whose Mr Flibble is vulnerable, and laid low by an addiction to toxic substances. His work has gained an international cult following and is used as course material in several photography courses. Although his work was for a while ignored by the more conventional world of ‘art’ photography, it became famous through the internet. His photos have received well over a million views on Flickr. It was definitely time to turn this work into a book, because the whole sequence of images has its maximum effect when seen together. He has a Kickstarter project page, one of the first following the recent UK launch of the crowdfunding platform. The publication of the book should be a major event and the £45 I shall divvy up will be one of the securest investments I shall ever make. The local news in Cambridge picked up on the project and I can quote from the report by the excellent Cabume website, the source of tech news from the ‘Cambridge cluster’: Put really simply, Mr Flibble likes to dress up and take pictures of himself. One of the benefits of a split personality, however, is that Mr Flibble is supported in his endeavour by Reed’s top notch photography skills, supreme mastery of Photoshop and unflinching dedication to the cause. The duo have collaborated to take a picture every day for the past 730-plus days. It is not a big surprise that neither Mr Flibble nor Reed watches any TV: in addition to his full-time role at Cambridge software house, Red Gate Software, as head of creativity and the two to five hours a day he spends taking the Mr Flibble shots, Reed also helps organise the . And now Reed is using Kickstarter to see if the world is ready for a Mr Flibble coffee table book. Judging by the early response, it is. At the time of writing, just a few days after it went live, ‘I Drink Lead Paint: An absurd photography book by Mr Flibble’ had raised £1,545 from 37 backers towards the £10,000 target it needs to reach by the Friday 30 November deadline.
Following the standard Kickstarter template, Reed is offering a series of rewards based on the amount pledged, ranging from a Mr Flibble desktop wallpaper for pledges of £5 or more to a signed copy of the book for pledges of £45 or more, right up to a starring role in the book for £1,500. Mr Flibble is unquestionably one of the more deranged Kickstarter hopefuls, but don’t think for a second that he doesn’t have a firm grasp on the challenges he faces on the road to immortalisation on 150 gsm stock. Under the section ‘risks and challenges’ on his Kickstarter page his statement begins: “An angry horde of telepathic iguanas discover the world’s last remaining stock of vintage lead paint and hold me to ransom. Gosh how I love to guzzle lead paint. Anyway… faced with such brazen bravado, I cower at the thought of taking on their combined might and die a sad and lonely Flibble deprived of my one and only true liquid love.” At which point, Reed manages to wrestle away the keyboard, giving him the opportunity to present slightly more cogent analysis of the obstacles the project must still overcome. We asked Reed a few questions about Mr Flibble’s Kickstarter adventure and felt that his responses were worth publishing in full: Firstly, how did you manage it – holding down a full time job and also conceiving and executing these ideas on a daily basis? I employed a small team of ferocious gerbils to feed me ideas on a daily basis. Whilst most of their ideas were incomprehensibly rubbish and usually revolved around food, just occasionally they’d give me an idea like my B-Men series. As a backup plan though, I found that the best way to generate ideas was to actually start taking photos. If I were to stand in front of the camera, pull a silly face, place a vegetable on my head or something else equally stupid, the resulting photo of that would typically spark an idea when I came to look at it. Sitting around idly trying to think of an idea was doomed to result in no ideas. I admit that I really struggled with time. I’m proud that I never missed a day, but it was definitely hard when you were late from work, tired or doing something socially on the same day. I don’t watch TV, which I guess really helps, because I’d frequently be spending 2-5 hours taking and processing the photos every day. Are there any overlaps between software development and creative thinking? Software is an inherently creative business and the speed that it moves ensures you always have to find solutions to new things. Everyone in the team needs to be a problem solver. Has it helped me specifically with my photography? Probably. Working within teams that continually need to figure out new stuff keeps the brain feisty I suppose, and I guess I’m continually exposed to a lot of possible sources of inspiration. How specifically will this Kickstarter project allow you to test the commercial appeal of your work and do you plan to get the book into shops? It’s taken a while to be confident saying it, but I know that people like the work that I do. I’ve had well over a million views of my pictures, many humbling comments and I know I’ve garnered some loyal fans out there who anticipate my next photo. For me, this Kickstarter is about seeing if there’s worth to my work beyond just making people smile. In an online world where there’s an abundance of freely available content, can you hope to receive anything from what you do, or would people just move onto the next piece of content if you happen to ask for some support? 
    A book has been the single most requested thing that people have asked me to produce and it’s something that I feel would showcase my work well. It’s just hard to convince people in the publishing industry just now to take any kind of risk – they’ve been hit hard. If I can show that people like my work enough to buy a book, then it sends a pretty clear message that publishers might hear, or it gives me confidence enough to invest in myself a bit more – hard to do when you’re riddled with self-doubt! I’d love to see my work in the shops, yes. I could see it being the thing that someone flips through idly as they’re Christmas shopping and recognizes would be just the perfect gift for their difficult-to-buy-for friend or relative. That said, working in the software industry means I’m clearly aware of how I could use technology to distribute my work, but I can’t deny that there’s something very appealing about having a physical thing to hold in your hands. If the project is successful is there a chance that it could become a full-time job? At the moment that seems like a distant dream, as, should this be successful, there are many more steps I’d need to take to reach any kind of business viability. Kickstarter seems exactly that – a way for people to help kick-start me into something that could take off. If people like my work and want me to succeed with it, then taking a look at my Kickstarter page (and hopefully pledging a bit of support) would make my elbows blush considerably. So there it is. An opportunity to open the wallet just a bit to ensure that one of the more unusual talents sees the light of day in the format it deserves.

    Read the article

  • PASS Summit Feedback

    - by Rob Farley
    PASS Feedback came in last week. I also saw my dentist for some fillings... At the PASS Summit this year, I delivered a couple of regular sessions and a Lightning Talk. People told me they enjoyed them, but when the rankings came out, they showed that I didn’t score particularly well. Brent Ozar was keen to discuss it with me. Brent: PASS speaker feedback is out. You did two sessions and a Lightning Talk. How did you go? Rob: Not so well actually, thanks for asking. Brent: Ha! Sorry. Of course you know that's why I wanted to discuss this with you. I was in one of your sessions at SQLBits in the UK a month before PASS, and I thought you rocked. You've got a really good and distinctive delivery style. Then I noticed your talks were ranked in the bottom quarter of the Summit ratings and wanted to discuss it. Rob: Yeah, I know. You did ask me if we could do this... I should explain – my presentation style is not the stereotypical IT conference one. I throw in jokes, and try to engage the audience thoroughly. I find many talks amazingly dry, and I guess I try to buck that trend. I also run training courses, and find that I get a lot of feedback from people thanking me for keeping things interesting. That said, I also get feedback criticising me for my style, and that’s basically what’s happened here. For the rest of this discussion, let’s focus on my talk about the Incredible Shrinking Execution Plan, which I considered to be my main talk. Brent: I thought that session title was the very best one at the entire Summit, and I had it on my recommended sessions list. In four words, you managed to sum up the topic and your sense of humor. I read that and immediately thought, "People need to be in this session," and then it didn't score well. Tell me about your scores. Rob: The questions on the feedback form covered the usefulness of the information, the speaker’s presentation skills, their knowledge of the subject, how well the session was described, the amount of time allocated, and the quality of the presentation materials. Brent: Presentation materials? But you don’t do slides. Did they rate your thong? Rob: No-one saw my flip-flops in this talk, Brent. I created a script in Management Studio, and published that afterwards, but I think people will have scored that question based on the lack of slides. I wasn’t expecting to do particularly well on that one. That was the only section that didn’t have 5/5 as the most popular score. Brent: See, that sucks, because cookbook-style scripts are often some of my favorites. Adam Machanic's Service Broker workbench series helped me immensely when I was prepping for the MCM. As an attendee, I'd rather have a commented script than a slide deck. So how did you rank so low? Rob: When I look at the scores that you got (based on your blog post), you got very few scores below 3 – people who felt strongly enough about your talk to post a negative score. In my scores, between 5% and 10% were below 3 (except on the question about whether I knew my stuff – I guess I came across as knowledgeable). Brent: Wow – so quite a few people really didn’t like your talk then? Rob: Yeah. Mind you, based on the comments, some people really loved it. I’d like to think that there would be a certain portion of the room who may have rated the talk as one of the best of the conference. Some of my comments included “amazing!”, “Best presentation so far!”, “Wow, best session yet”, “fantastic” and “Outstanding!”.
    I think lots of talks can be “Great”, but not so many talks can be “Outstanding” without the word losing its meaning. One wrote “Pretty amazing presentation, considering it was completely extemporaneous.” Brent: Extemporaneous, eh? Rob: Yeah. I guess they don’t realise how much preparation goes into coming across as unprepared. In many ways it’s much easier to give a written speech than to deliver a presentation without slides as a prompt. Brent: That delivery style, the really relaxed, casual, college-professor approach was one of the things I really liked about your presentation at SQLbits. As somebody who presents a lot, I "get" it - I know how hard it is to come off as relaxed and comfortable with your own material. It's like improv done by jazz players and comedians - if you've never tried it, you don't realize how hard it is. People also don't realize how hard it is to make a tough subject fun. Rob: Yeah well... There will be people writing comments on this post that say I wasn't trying to make the subject fun, and that I was making it all about me. Sometimes the style works, sometimes it doesn't. Most of the comments mentioned the fact that I tell jokes, some in a nice way, but some not so much (and it wasn't just a PASS thing - that's the mix of feedback I generally get). One comment at PASS was: “great stand up comedian - not what I'm looking for at pass”, and there were certainly a few that said “too many jokes”. I’m not trying to do stand-up – jokes are my way of engaging with the audience while I demonstrate some of the amazing things that the Query Optimizer can do if you write your queries the right way. Some people didn’t think it was technical enough, but I’ve also had some people tell me that the concepts I’m explaining are deep and profound. Brent: To me, that's a hallmark of a great explanation - when someone says, "But of course it has to work that way - how could it work any other way? It seems so simple and logical." Well, sure it does when it's explained correctly, but now pick up any number of thick SQL Server books and try to understand the Redundant Joins concept. I guarantee it'll take more than 45 minutes. Rob: Some people in my audiences realise that, but definitely not everyone. There's only so much you can do to tell someone that something is profound. Generally it's something that they either have an epiphany on or not. I like to lull my audience into knowing what's going on, and do something that surprises them. Gain their trust, build a rapport, and then show them the deeper truth of what just happened. Brent: So you've learned your lesson about presentation scores, right? From here on out, you're going to be dry, humorless, and all your presentations will consist of you reading bullet points off the screen. Rob: No Brent, I’m not. I'm also not going to suggest that most presentations at PASS are like that. No-one tries to present like that. There's a big space to occupy between "dry and humourless" and me. My difference is to focus on the relationship I have with the crowd, rather than focussing on delivering the perfect session. I want to see people smiling and know they're relaxed. I think most presenters focus on the material, which is completely reasonable and safe. I remember once hearing someone talking about product creation. They talked about mediocrity. They said that one of the worst things that people can ever say about your product is that it’s “good”. What you want is for 10% of the world to love it enough to want to buy it.
    If 10% of the world gave me a dollar, I’d have more money than I could ever use (assuming it wasn’t the SAME dollar they were giving me, I guess). Brent: It's the Raving Fans theory. It's better to have a small number of raving customers than a large number of almost-but-not-really customers who don't care that much about your product or service. I know exactly how you feel - when I got survey feedback from my Quest video presentation, where I was dressed up in a Richard Simmons costume, some of the attendees said I was unprofessional and distracting. Some of the attendees couldn't get enough and Photoshopped all kinds of stuff into the screen captures. On the whole, I probably didn't score that well, and I'm fine with that. It sucks to look at the scores though - do those lower scores bother you? Rob: Of course they do. It hurts deeply. I open myself up and give presentations in a very personal way. All presenters do that, and we all feel the pain of negative feedback. I hate coming 146th & 162nd out of 185, but have to acknowledge that many sessions did worse still. Plus, once I feel the wounds have healed, I’ll be able to remember that there are people in the world who rave about my presentation style, and figure that people will hopefully talk about me. One day maybe those people who don’t like my presentation style will stay away and I might be able to score better. You don’t pay to hear country music if you prefer western... Lots of people find chili too spicy, but it’s still a popular food. Brent: But don’t you want to appeal to everyone? Rob: I do, but I don’t want to be lukewarm as in Revelation 3:16. I’d rather disgust and be discussed. Well, maybe not ‘disgust’, but I don’t want to conform. Conformity just isn’t the same any more. I’m not sure I’ve ever been one to do that. I try not to offend, but definitely like to be different. Brent: Count me among your raving fans, sir. Where can we see you next? Rob: Considering I live in Adelaide in Australia, I’m not about to appear at anyone’s local SQL Saturday. I’m still trying to plan which events I’ll get to in 2011. I’ve submitted abstracts for TechEd North America, but won’t hold my breath. I’m also considering the SQLBits conferences in the UK in April, PASS in October, and I’m sure I’ll do some LiveMeeting presentations for user groups. Online, people can download some of my recent SQLBits presentations at http://bit.ly/RFSarg and http://bit.ly/Simplification, though. And they can download a 5-minute MP3 of my Lightning Talk at http://www.lobsterpot.com.au/files/Collation.mp3, in which I try to explain the idea behind collation, using thongs as an example. Brent: I was in the audience for http://bit.ly/RFSarg. That was a great presentation. Rob: Thanks, Brent. Now where’s my dollar?

    Read the article

  • Professional Scrum Developer (.NET) Training in London

    - by Martin Hinshelwood
    On the 26th–30th July, in Microsoft’s offices in London, Adam Cogan from SSW will be presenting the first Professional Scrum Developer course in the UK. I will be teaching this course alongside Adam and it is a fantastic experience. You are split into teams and go head-to-head to deliver units of potentially shippable work in four two-hour sprints. The Professional Scrum Developer course is the only course endorsed by both Microsoft and Ken Schwaber, and they have worked together very effectively in bringing this course to fruition. This course is the brainchild of Richard Hundhausen, a Microsoft Regional Director, and both Adam and I attended the Trainer Prep in Sydney when he was there earlier this year. He is a fantastic trainer and no matter where you do this course you can be safe in the knowledge that he has trained and vetted all of the teachers. A tools version of Ken, if you will. Find a course and register. Download this syllabus. Download the Scrum Guide. What is the Professional Scrum Developer course all about? The Professional Scrum Developer course is a unique and intensive five-day experience for software developers. The course guides teams on how to turn product requirements into potentially shippable increments of software using the Scrum framework, Visual Studio 2010, and modern software engineering practices. Attendees will work in self-organizing, self-managing teams using a common instance of Team Foundation Server 2010. Who should attend this course? This course is suitable for any member of a software development team – architect, programmer, database developer, tester, etc. Entire teams are encouraged to attend and experience the course together, but individuals are welcome too. Attendees will self-organize to form cross-functional Scrum teams. These teams require an aggregate of skills specific to the selected case study. Please see the last page of this document for specific details. Product Owners, ScrumMasters, and other stakeholders are welcome too, but keep in mind that everyone who attends will be expected to commit to work and pull their weight on a Scrum team. What should you know by the end of the course? Scrum will be experienced through a combination of lecture, demonstration, discussion, and hands-on exercises. Attendees will learn how to do Scrum correctly while being coached and critiqued by the instructor, in the following topic areas: form effective teams; explore and understand legacy “Brownfield” architecture; define quality attributes, acceptance criteria, and “done”; create automated builds; handle software hotfixes; verify that bugs are identified and eliminated; plan releases and sprints; estimate product backlog items; create and manage a sprint backlog; hold an effective sprint review; improve your process by using retrospectives; use emergent architecture to avoid technical debt; use Test Driven Development as a design tool; set up and leverage continuous integration; use Test Impact Analysis to decrease testing times; manage SQL Server development in an Agile way; use .NET and T-SQL refactoring effectively; build, deploy, and test SQL Server databases; create and manage test plans and cases; create, run, record, and play back manual tests; set up a branching strategy and branch code; write more maintainable code; identify and eliminate people and process dysfunctions; and inspect and improve your team’s software development process. What does the week look like?
    This course is a mix of lecture, demonstration, group discussion, simulation, and hands-on software development. The bulk of the course will be spent working as a team on a case study application delivering increments of new functionality in mini-sprints. Here is the week at a glance: Monday morning and most of the day Friday will be spent with the computers powered off, so you can focus on sharpening your game of Scrum and avoiding the common pitfalls when implementing it. The Sprints: Timeboxing is a critical concept in Scrum as well as in this course. We expect each team and student to understand and obey all of the timeboxes. The timebox duration will always be clearly displayed during each activity. Expect the instructor to enforce it. Each of the ½ day sprints will roughly follow this schedule:
    Instruction (60 minutes): Presentation and demonstration of new and relevant tools & practices.
    Sprint planning meeting (10 minutes): Product owner presents backlog; each team commits to delivering functionality.
    Sprint planning meeting (10 minutes): Each team determines how to build the functionality.
    The Sprint (120 minutes): The team self-organizes and self-manages to complete their tasks.
    Sprint Review meeting (30 minutes): Each team will present their increment of functionality to the other teams.
    Sprint Retrospective (10 minutes): A group retrospective meeting will be held to inspect and adapt.
    Each team is expected to self-organize and manage their own work during the sprint. Pairing is highly encouraged. The instructor/product owner will be available if there are questions or impediments, but will be hands-off by default. You should be prepared to communicate and work with your team members in order to achieve your sprint goal. If you have development-related questions or get stuck, your partner or team should be your first level of support. Module 1: INTRODUCTION This module provides a chance for the attendees to get to know the instructors as well as each other. The Professional Scrum Developer program, as well as the day by day agenda, will be explained. Finally, the Scrum team will be selected and assembled so that the forming, storming, norming, and performing can begin. Trainer and student introductions Professional Scrum Developer program Agenda Logistics Team formation Retrospective Module 2: SCRUMDAMENTALS This module provides a level-setting understanding of the Scrum framework including the roles, timeboxes, and artifacts. The team will then experience Scrum firsthand by simulating a multi-day sprint of product development, including planning, review, and retrospective meetings. Scrum overview Scrum roles Scrum timeboxes (ceremonies) Scrum artifacts Simulation Retrospective It’s required that you read Ken Schwaber’s Scrum Guide in preparation for this module and course. MODULE 3: IMPLEMENTING SCRUM IN VISUAL STUDIO 2010 This module demonstrates how to implement Scrum in Visual Studio 2010 using a Scrum process template*. The team will learn the mapping between the Scrum concepts and how they are implemented in the tool. After connecting to the shared Team Foundation Server, the team members will then return to the simulation – this time using Visual Studio to manage their product development. Mapping Scrum to Visual Studio 2010 User Story work items Task work items Bug work items Demonstration Simulation Retrospective Module 4: THE CASE STUDY In this module the team is introduced to their problem domain for the week.
A kickoff meeting by the Product Owner (the instructor) will set the stage for the why and what that will take during the upcoming sprints. The team will then define the quality attributes of the project and their definition of “done.” The legacy application code will be downloaded, built, and explored, so that any bugs can be discovered and reported. Introduction to the case study Download the source code, build, and explore the application Define the quality attributes for the project Define “done” How to file effective bugs in Visual Studio 2010 Retrospective Module 5: HOTFIX This module drops the team directly into a Brownfield (legacy) experience by forcing them to analyze the existing application’s architecture and code in order to locate and fix the Product Owner’s high-priority bug(s). The team will learn best practices around finding, testing, fixing, validating, and closing a bug. How to use Architecture Explorer to visualize and explore Create a unit test to validate the existence of a bug Find and fix the bug Validate and close the bug Retrospective Module 6: PLANNING This short module introduces the team to release and sprint planning within Visual Studio 2010. The team will define and capture their goals as well as other important planning information. Release vs. Sprint planning Release planning and the Product Backlog Product Backlog prioritization Acceptance criteria and tests Sprint planning and the Sprint Backlog Creating and linking Sprint tasks Retrospective At this point the team will have the knowledge of Scrum, Visual Studio 2010, and the case study application to begin developing increments of potentially shippable functionality that meet their definition of done. Module 7: EMERGENT ARCHITECTURE This module introduces the architectural practices and tools a team can use to develop a valid design on which to develop new functionality. The teams will learn how Scrum supports good architecture and design practices. After the discussion, the teams will be presented with the product owner’s prioritized backlog so that they may select and commit to the functionality they can deliver in this sprint. Architecture and Scrum Emergent architecture Principles, patterns, and practices Visual Studio 2010 modeling tools UML and layer diagrams SPRINT 1 Retrospective Module 8: TEST DRIVEN DEVELOPMENT This module introduces Test Driven Development as a design tool and how to implement it using Visual Studio 2010. To maximize productivity and quality, a Scrum team should setup Continuous Integration to regularly build every team member’s code changes and run regression tests. Refactoring will also be defined and demonstrated in combination with Visual Studio’s Test Impact Analysis to efficiently re-run just those tests which were impacted by refactoring. Continuous integration Team Foundation Build Test Driven Development (TDD) Refactoring Test Impact Analysis SPRINT 2 Retrospective Module 9: AGILE DATABASE DEVELOPMENT This module lets the SQL Server database developers in on a little secret – they can be agile too. By using the database projects in Visual Studio 2010, the database developers can join the rest of the team. The students will see how to apply Agile database techniques within Visual Studio to support the SQL Server 2005/2008/2008R2 development lifecycle. 
    Agile database development Visual Studio database projects Importing schema and scripts Building and deploying Generating data Unit testing SPRINT 3 Retrospective Module 10: SHIP IT Teams need to know that just because they like the functionality doesn’t mean the Product Owner will. This module revisits acceptance criteria as it pertains to acceptance testing. By refining acceptance criteria into manual test steps, team members can execute the tests, recording the results and reporting bugs in a number of ways. Manual tests will be defined and executed using the Microsoft Test Manager tool. As the Sprint completes and an increment of functionality is delivered, the team will also learn why and when they should create a branch of the codeline. Acceptance criteria Testing in Visual Studio 2010 Microsoft Test Manager Writing and running manual tests Branching SPRINT 4 Retrospective Module 11: OVERCOMING DYSFUNCTION This module introduces the many types of people, process, and tool dysfunctions that teams face in the real world. Many dysfunctions and scenarios will be identified, along with ideas and discussion for how a team might mitigate them. This module will enable you and your team to move toward independence and improve your game of Scrum when you depart class. Scrum-butts and flaccid Scrum Best practices working as a team Team challenges ScrumMaster challenges Product Owner challenges Stakeholder challenges Course Retrospective What will be expected of you and your team? This is a unique course in that it’s technically-focused, team-based, and employs timeboxes. It demands that the members of the teams self-organize and self-manage their own work to collaboratively develop increments of software. All attendees must commit to: Pay attention to all lectures and demonstrations Participate in team and group discussions Work collaboratively with other team members Obey the timebox for each activity Commit to work and do your best to deliver All teams should have these skills: Understanding of Scrum Familiarity with Visual Studio 2010 C#, .NET 4.0 & ASP.NET 4.0 experience* SQL Server 2008 development experience Software testing experience * Check with the instructor ahead of time for the exact technologies Self-organising teams Another unique attribute of this course is that it’s a technical training class being delivered to teams of developers, not pairs, and not individuals. Ideally, your actual software development team will attend the training to ensure that all necessary skills are covered. However, if you wish to attend an open enrolment course alone or with just a couple of colleagues, realize that you may be placed on a team with other attendees. The instructor will do his or her best to ensure that each team is cross-functional to tackle the case study, but there are no guarantees. You may be required to try a new role, learn a new skill, or pair with somebody unfamiliar to you. This is just good Scrum! Who should NOT take this course?
Because of the nature of this course, as explained above, certain types of people should probably not attend this course: Students requiring command and control style instruction – there are no prescriptive/step-by-step (think traditional Microsoft Learning) labs in this course Students who are unwilling to work within a timebox Students who are unwilling to work collaboratively on a team Students who don’t have any skill in any of the software development disciplines Students who are unable to commit fully to their team – not only will this diminish the student’s learning experience, but it will also impact their team’s learning experience Find a course and register Download this syllabus Download the Scrum Guide Technorati Tags: Scrum,SSW,Pro Scrum Dev

    Read the article

  • Pain Comes Instantly

    - by user701213
    When I look back at recent blog entries – many of which are not all that current (more on where my available writing time is going later) – I am struck by how many of them focus on public policy or legislative issues instead of, say, the latest nefarious cyberattack or exploit (or everyone’s favorite new pastime: coining terms for the Coming Cyberpocalypse: “digital Pearl Harbor” is so 1941). Speaking of which, I personally hope evil hackers from Malefactoria will someday hack into my bathroom scale – which in a future time will be connected to the Internet because, gosh, wouldn’t it be great to have absolutely everything in your life Internet-enabled? – and recalibrate it so I’m 10 pounds thinner. The horror. In part, my focus on public policy is due to an admitted limitation of my skill set. I enjoy reading technical articles about exploits and cybersecurity trends, but writing a blog entry on those topics would take more research than I have time for and, quite honestly, doesn’t play to my strengths. The first rule of writing is “write what you know.” The bigger contributing factor to my recent paucity of blog entries is that more and more of my waking hours are spent engaging in “thrust and parry” activity involving emerging regulations of some sort or other. I’ve opined in earlier blogs about what constitutes good and reasonable public policy so nobody can accuse me of being reflexively anti-regulation. That said, you have so many cycles in the day, and most of us would rather spend it slaying actual dragons than participating in focus groups on whether dragons are really a problem, whether lassoing them (with organic, sustainable and recyclable lassos) is preferable to slaying them – after all, dragons are people, too - and whether we need lasso compliance auditors to make sure lassos are being used correctly and humanely. (A point that seems to evade many rule makers: slaying dragons actually accomplishes something, whereas talking about “approved dragon slaying procedures and requirements” wastes the time of those who are competent to dispatch actual dragons and who were doing so very well without the input of “dragon-slaying theorists.”) Unfortunately for so many of us who would just get on with doing our day jobs, cybersecurity is rapidly devolving into the “focus groups on dragon dispatching” realm, which actual dragons slayers have little choice but to participate in. The general trend in cybersecurity is that powers-that-be – which encompasses groups other than just legislators – are often increasingly concerned and therefore feel they need to Do Something About Cybersecurity. Many seem to believe that if only we had the right amount of regulation and oversight, there would be no data breaches: a breach simply must mean Someone Is At Fault and Needs Supervision. (Leaving aside the fact that we have lots of home invasions despite a) guard dogs b) liberal carry permits c) alarm systems d) etc.) Also note that many well-managed and security-aware organizations, like the US Department of Defense, still get hacked. More specifically, many powers-that-be feel they must direct industry in a multiplicity of ways, up to and including how we actually build and deploy information technology systems. The more prescriptive the requirement, the more regulators or overseers a) can be seen to be doing something b) feel as if they are doing something regardless of whether they are actually doing something useful or cost effective. 
    Note: an unfortunate concomitant of Doing Something is that often the cure is worse than the ailment. That is, doing what overseers want creates unfortunate byproducts that they either didn’t foresee or, worse, don’t care about. After all, the logic goes, we Did Something. Prescriptive practice in the IT industry is problematic for a number of reasons. For a start, prescriptive guidance is really only appropriate if:
    • It is cost effective
    • It is “current” (meaning, the guidance doesn’t require the use of the technical equivalent of buggy whips long after horse-drawn transportation has become passé)*
    • It is practical (that is, pragmatic, proven and effective in the real world, not theoretical and unproven)
    • It solves the right problem
    With the above in mind, heading up the list of “you must be joking” regulations are recent disturbing developments in the Payment Card Industry (PCI) world. I’d like to give PCI kahunas the benefit of the doubt about their intentions, except that efforts by Oracle among others to make them aware of “unfortunate side effects of your requirements” – which is as tactful as I can be, for reasons that I believe will become obvious below – have gone, to date, unanswered and, more importantly, unchanged. A little background on PCI before I get too wound up. In 2008, the Payment Card Industry (PCI) Security Standards Council (SSC) introduced the Payment Application Data Security Standard (PA-DSS). That standard requires vendors of payment applications to ensure that their products implement specific requirements and undergo security assessment procedures. In order to have an application listed as a Validated Payment Application (VPA) and available for use by merchants, software vendors are required to execute the PCI Payment Application Vendor Release Agreement (VRA). (Are you still with me through all the acronyms?) Beginning in August 2010, the VRA imposed new obligations on vendors that are extraordinary and extraordinarily bad, short-sighted and unworkable. Specifically, PCI requires vendors to disclose (dare we say “tell all?”) to PCI any known security vulnerabilities and associated security breaches involving VPAs. ASAP. Think about the impact of that. PCI is asking a vendor to disclose to them:
    • Specific details of security vulnerabilities
    • Including exploit information or technical details of the vulnerability
    • Whether or not there is any mitigation available (as in a patch)
    PCI, in turn, has the right to blab about any and all of the above – specifically, to distribute all the gory details of what is disclosed – to the PCI SSC, qualified security assessors (QSAs), and any affiliate or agent or adviser of those entities, who are in turn permitted to share it with their respective affiliates, agents, employees, contractors, merchants, processors, service providers and other business partners. This assorted crew can’t be more than, oh, hundreds of thousands of entities. Does anybody believe that several hundred thousand people can keep a secret? Or that several hundred thousand people are all equally trustworthy? Or that not one of the people getting all that information would blab vulnerability details to a bad guy, even by accident? Or be a bad guy who uses the information to break into systems? (Wait, was that the Easter Bunny that just hopped by? Bringing world peace, no doubt.) Sarcasm aside, common sense tells us that telling lots of people a secret is guaranteed to “unsecret” the secret.
Notably, being provided details of a vulnerability (without a patch) is of little or no use to companies running the affected application. Few users have the technological sophistication to create a workaround, and even if they do, most workarounds break some other functionality in the application or surrounding environment. Also, given the differences among corporate implementations of any application, it is highly unlikely that a single workaround is going to work for all corporate users. So until a patch is developed by the vendor, users remain at risk of exploit: even more so if the details of vulnerability have been widely shared. Sharing that information widely before a patch is available therefore does not help users, and instead helps only those wanting to exploit known security bugs. There’s a shocker for you. Furthermore, we already know that insider information about security vulnerabilities inevitably leaks, which is why most vendors closely hold such information and limit dissemination until a patch is available (and frequently limit dissemination of technical details even with the release of a patch). That’s the industry norm, not that PCI seems to realize or acknowledge that. Why would anybody release a bunch of highly technical exploit information to a cast of thousands, whose only “vetting” is that they are members of a PCI consortium? Oracle has had personal experience with this problem, which is one reason why information on security vulnerabilities at Oracle is “need to know” (we use our own row level access control to limit access to security bugs in our bug database, and thus less than 1% of development has access to this information), and we don’t provide some customers with more information than others or with vulnerability information and/or patches earlier than others. Failure to remember “insider information always leaks” creates problems in the general case, and has created problems for us specifically. A number of years ago, one of the UK intelligence agencies had information about a non-public security vulnerability in an Oracle product that they circulated among other UK and Commonwealth defense and intelligence entities. Nobody, it should be pointed out, bothered to report the problem to Oracle, even though only Oracle could produce a patch. The vulnerability was finally reported to Oracle by (drum roll) a US-based commercial company, to whom the information had leaked. (Note: every time I tell this story, the MI-whatever agency that created the problem gets a bit shirty with us. I know they meant well and have improved their vulnerability handling/sharing processes but, dudes, next time you find an Oracle vulnerability, try reporting it to us first before blabbing to lots of people who can’t actually fix the problem. Thank you!) Getting back to PCI: clearly, these new disclosure obligations increase the risk of exploitation of a vulnerability in a VPA and thus, of misappropriation of payment card data and customer information that a VPA processes, stores or transmits. It stands to reason that VRA’s current requirement for the widespread distribution of security vulnerability exploit details -- at any time, but particularly before a vendor can issue a patch or a workaround -- is very poor public policy. It effectively publicizes information of great value to potential attackers while not providing compensating benefits - actually, any benefits - to payment card merchants or consumers. In fact, it magnifies the risk to payment card merchants and consumers. 
    The risk is most prominent in the time before a patch has been released, since customers often have little option but to continue using an application or system despite the risks. However, the risk is not limited to the time before a patch is issued: customers often need days, or weeks, to apply patches to systems, based upon the complexity of the issue and dependence on surrounding programs. Rather than decreasing the available window of exploit, this requirement increases the available window of exploit, both as to the time available to exploit a vulnerability and the ease with which it can be exploited. Also, why would hackers focus on finding new vulnerabilities to exploit if they can get “EZHack” handed to them in such a manner: a) a vulnerability b) in a payment application c) with exploit code: the “Hacking Trifecta!” It’s fair to say that this is probably the exact opposite of what PCI – or any of us – would want. Established industry practice concerning vulnerability handling avoids the risks created by the VRA’s vulnerability disclosure requirements. Specifically, the norm is not to release information about a security bug until the associated patch (or a pretty darn good workaround) has been issued. Once a patch is available, the notice to the user community is a high-level communication discussing the product at issue, the level of risk associated with the vulnerability, and how to apply the patch. The notices do not include either the specific customers affected by the vulnerability or forensic reports with maps of the exploit (both of which are required by the current VRA). In this way, customers have the tools they need to prioritize patching and to help prevent an attack, and the information released does not increase the risk of exploit. Furthermore, many vendors already use industry standards for vulnerability description: Common Vulnerabilities and Exposures (CVE) and the Common Vulnerability Scoring System (CVSS). CVE helps ensure that customers know which particular issues a patch addresses and CVSS helps customers determine how severe a vulnerability is on a relative scale. Industry already provides the tools customers need to know what the patch contains and how bad the problem is that the patch remediates. So, what’s a poor vendor to do? Oracle is reaching out to other vendors subject to PCI and attempting to enlist them in a broad effort to engage PCI in rethinking (that is, eradicating) these requirements. I would therefore urge all who care about this issue, but especially those in the vendor community whose applications are subject to PCI and who may not have known they were being asked to tell all to PCI and put their customers at risk, to do one of the following:
    • Contact PCI with your concerns
    • Contact Oracle (we are looking for vendors to sign our statement of concern)
    • And make sure you tell your customers that you have to rat them out to PCI if there is a breach involving the payment application
    I like to be charitable and say “PCI meant well”, but in as important a public policy issue as what you disclose about vulnerabilities, to whom and when, meaning well isn’t enough. We need to do well. PCI, as regards this particular issue, has not done well, and has compounded the error by thus far being nonresponsive to those of us who have labored mightily to try to explain why they might want to rethink telling the entire planet about security problems with no solutions.
    By Way of Explanation… Not related to PCI whatsoever, and the explanation for why I have not been blogging a lot recently: I have been working on Other Writing Venues with my sister Diane (who has also worked in the tech sector, inflicting upgrades on unsuspecting and largely ungrateful end users). I am pleased to note that we have recently (self-)published the first in the Miss Information Technology Murder Mystery series, Outsourcing Murder. The genre might best be described as “chick lit meets geek scene.” Our sisterly nom de plume is Maddi Davidson and (shameless plug follows): you can order the paper version of the book on Amazon, or the Kindle or Nook versions on www.amazon.com or www.bn.com, respectively. From our book jacket: Emma Jones, a 20-something IT consultant, is working on an outsourcing project at Tahiti Tacos, a restaurant chain offering Polynexican cuisine: refried poi, anyone? Emma despises her boss Padmanabh, a brilliant but arrogant partner in GD Consulting. When Emma discovers His-Royal-Padness’s body (verdict: death by cricket bat), she becomes a suspect. With her overprotective family and her best friend Stacey providing endless support and advice, Emma stumbles her way through an investigation of Padmanabh’s murder, bolstered by fusion food feeding frenzies, endless cups of frou-frou coffee and serious surfing sessions. While Stacey knows a PI who owes her a favor, landlady Magda urges Emma to tart up her underwear drawer before the next cute cop with a search warrant arrives. Emma’s mother offers to fix her up with a PhD student at Berkeley and showers her with self-defense gizmos, while her old lover Keoni beckons from Hawai’i. And everyone, even Shaun the barista, knows a good lawyer. Book 2, Denial of Service, is coming out this summer. * Given the rate of change in technology, today’s “thou shalts” are easily next year’s “buggy whip guidance.”

    Read the article

  • Why does an error appear every time I try to open the Ubuntu Software Center? [duplicate]

    - by askubuntu7639
    This question already has an answer here: How do I remove a broken software source? (3 answers) There is a glitch in the Ubuntu Software Center: whenever I open it, an error appears, and it keeps loading and never opens. Why does this happen? I have installed Ubuntu 13.04 on a disk and partitioned it. Please help me, and ask for extra information if you need it. If you know of any duplicates, please show me them! This is the output from a question someone asked me:
    SystemError: E:Type '<!DOCTYPE' is not known on line 1 in source list /etc/apt/sources.list.d/medibuntu.list
    The next output is the output of cat /etc/apt/sources.list.d/medibuntu.list

    Read the article

  • Oracle Database 12c New Feature: Information Lifecycle Management (ILM) Storage Enhancements

    - by Liu Maclean
    Oracle Database 12c introduces a set of Information Lifecycle Management (ILM) Storage Enhancements, including Automatic Data Placement (ADP, surfaced as Automatic Data Optimization, ADO) driven by heat map data, and Online Move Datafile, which lets a datafile be relocated while it remains online.

    In the initial release (12.1.0.1), Automatic Data Optimization and heat map come with a number of restrictions. A multitenant container database (CDB) does not support Automatic Data Optimization or heat map. In addition:
    - Row-level policies for ADO are not supported for Temporal Validity. Partition-level ADO and compression are supported if partitioned on the end-time columns.
    - Row-level policies for ADO are not supported for in-database archiving. Partition-level ADO and compression are supported if partitioned on the ORA_ARCHIVE_STATE column.
    - Custom policies (user-defined functions) for ADO are not supported if the policies default at the tablespace level.
    - ADO does not perform checks for storage space in a target tablespace when using storage tiering.
    - ADO is not supported on tables with object types or materialized views.
    - ADO concurrency (the number of simultaneous policy jobs for ADO) depends on the concurrency of the Oracle scheduler.
    - If a policy job for ADO fails more than two times, then the job is marked disabled and the job must be manually enabled later.
    - Policies for ADO are only run in the Oracle Scheduler maintenance windows. Outside of the maintenance windows all policies are stopped. The only exceptions are those jobs for rebuilding indexes in ADO offline mode.
    - ADO has restrictions related to moving tables and table partitions.

    ADO policies are defined at the row or segment level and are added with the ILM clause of CREATE TABLE or ALTER TABLE. For example:

    CREATE TABLE sales_ado
    (PROD_ID NUMBER NOT NULL,
     CUST_ID NUMBER NOT NULL,
     TIME_ID DATE NOT NULL,
     CHANNEL_ID NUMBER NOT NULL,
     PROMO_ID NUMBER NOT NULL,
     QUANTITY_SOLD NUMBER(10,2) NOT NULL,
     AMOUNT_SOLD NUMBER(10,2) NOT NULL)
    ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH SEGMENT
    AFTER 6 MONTHS OF NO ACCESS;

    SQL> SELECT SUBSTR(policy_name,1,24) AS POLICY_NAME, policy_type, enabled
      2  FROM USER_ILMPOLICIES;

    POLICY_NAME              POLICY_TYPE    ENABLED
    ------------------------ -------------- -------
    P41                      DATA MOVEMENT  YES

    ALTER TABLE sales MODIFY PARTITION sales_1995
    ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH SEGMENT
    AFTER 6 MONTHS OF NO ACCESS;

    SELECT SUBSTR(policy_name,1,24) AS POLICY_NAME, policy_type, enabled
    FROM USER_ILMPOLICIES;

    POLICY_NAME              POLICY_TYPE    ENABLED
    ------------------------ -------------- -------
    P1                       DATA MOVEMENT  YES
    P2                       DATA MOVEMENT  YES

    /* You can disable an ADO policy with the following */
    ALTER TABLE sales_ado ILM DISABLE POLICY P1;
    /* You can delete an ADO policy with the following */
    ALTER TABLE sales_ado ILM DELETE POLICY P1;
    /* You can disable all ADO policies with the following */
    ALTER TABLE sales_ado ILM DISABLE_ALL;
    /* You can delete all ADO policies with the following */
    ALTER TABLE sales_ado ILM DELETE_ALL;
    /* You can disable an ADO policy in a partition with the following */
    ALTER TABLE sales MODIFY PARTITION sales_1995 ILM DISABLE POLICY P2;
    /* You can delete an ADO policy in a partition with the following */
    ALTER TABLE sales MODIFY PARTITION sales_1995 ILM DELETE POLICY P2;

    ILM decisions are driven by activity tracking, which works at two levels: SEGMENT-LEVEL tracking records how a segment is accessed, and ROW-LEVEL tracking records when individual rows are modified. Some examples:

    1. Enable SEGMENT-LEVEL activity tracking:
    ALTER TABLE interval_sales ILM ENABLE ACTIVITY TRACKING SEGMENT ACCESS;
    This enables segment-level activity tracking on the INTERVAL_SALES table.

    2. Track creation and write time:
    ALTER TABLE emp ILM ENABLE ACTIVITY TRACKING (CREATE TIME, WRITE TIME);

    3. Track read time:
    ALTER TABLE emp ILM ENABLE ACTIVITY TRACKING (READ TIME);

    In 12.1.0.1.0 the HEAT_MAP parameter controls heat map tracking and can be set at the system or session level. To enable it:
    ALTER SYSTEM SET HEAT_MAP = ON;
    With heat map enabled, access to objects is tracked automatically; objects in the SYSTEM and SYSAUX tablespaces are not tracked. To disable it again:
    ALTER SYSTEM SET HEAT_MAP = OFF;
    The HEAT_MAP parameter also governs Automatic Data Optimization: to use ADO, heat map tracking must be enabled.

    The V$HEAT_MAP_SEGMENT view exposes the current heat map data:

    SQL> select * from v$heat_map_segment;
    no rows selected

    SQL> alter session set heat_map=on;
    Session altered.

    SQL> select * from scott.emp;
    EMPNO ENAME  JOB       MGR  HIREDATE   SAL  COMM DEPTNO
    ----- ------ --------- ---- --------- ----- ---- ------
    7369  SMITH  CLERK     7902 17-DEC-80   800        20
    7499  ALLEN  SALESMAN  7698 20-FEB-81  1600  300   30
    7521  WARD   SALESMAN  7698 22-FEB-81  1250  500   30
    7566  JONES  MANAGER   7839 02-APR-81  2975        20
    7654  MARTIN SALESMAN  7698 28-SEP-81  1250 1400   30
    7698  BLAKE  MANAGER   7839 01-MAY-81  2850        30
    7782  CLARK  MANAGER   7839 09-JUN-81  2450        10
    7788  SCOTT  ANALYST   7566 19-APR-87  3000        20
    7839  KING   PRESIDENT      17-NOV-81  5000        10
    7844  TURNER SALESMAN  7698 08-SEP-81  1500    0   30
    7876  ADAMS  CLERK     7788 23-MAY-87  1100        20
    7900  JAMES  CLERK     7698 03-DEC-81   950        30
    7902  FORD   ANALYST   7566 03-DEC-81  3000        20
    7934  MILLER CLERK     7782 23-JAN-82  1300        10
    14 rows selected.

    SQL> select * from v$heat_map_segment;
    OBJECT_NAME SUBOBJECT_NAME OBJ#  DATAOBJ# TRACK_TIM SEG SEG FUL LOO CON_ID
    ----------- -------------- ----- -------- --------- --- --- --- --- ------
    EMP                        92997 92997    23-JUL-13 NO  NO  YES NO  0

    V$HEAT_MAP_SEGMENT is built on the fixed table X$HEATMAPSEGMENT and displays real-time segment access information. Its columns are:
    OBJECT_NAME    VARCHAR2(128) Name of the object
    SUBOBJECT_NAME VARCHAR2(128) Name of the subobject
    OBJ#           NUMBER        Object number
    DATAOBJ#       NUMBER        Data object number
    TRACK_TIME     DATE          Timestamp of current activity tracking
    SEGMENT_WRITE  VARCHAR2(3)   Indicates whether the segment has write access (YES or NO)
    SEGMENT_READ   VARCHAR2(3)   Indicates whether the segment has read access (YES or NO)
    FULL_SCAN      VARCHAR2(3)   Indicates whether the segment has full table scan (YES or NO)
    LOOKUP_SCAN    VARCHAR2(3)   Indicates whether the segment has lookup scan (YES or NO)
    CON_ID         NUMBER        ID of the container to which the data pertains: 0 for rows that pertain to the entire CDB (also used in non-CDBs), 1 for rows that pertain only to the root, or n, the applicable container ID. The Heat Map feature is not supported in CDBs in Oracle Database 12c, so the value in this column can be ignored.

    Historical heat map information can be queried through DBA_HEAT_MAP_SEGMENT and is persisted in HEAT_MAP_STAT$.

    An end-to-end Automatic Data Optimization example (use case 1). The demo uses the SCOTT schema (script at http://www.askmaclean.com/archives/scott-schema-script.html):

    SQL> alter system set heat_map=on;
    System altered.
    SQL> grant all on dbms_lock to scott;
    Grant succeeded.
    SQL> grant dba to scott;
    Grant succeeded.
    @ilm_setup_basic C:\APP\XIANGBLI\ORADATA\MACLEAN\ilm.dbf
    @tktgilm_demo_env_setup
    SQL> connect scott/tiger
    Connected.
    SQL> select count(*) from scott.employee;
    COUNT(*)
    ----------
    3072

    SQL> set serveroutput on
    SQL> exec print_compression_stats('SCOTT','EMPLOYEE');
    Compression Stats
    ------------------
    Uncmpressed          : 3072
    Adv/basic compressed : 0
    Others               : 0
    PL/SQL procedure successfully completed.

    The table starts with 3072 uncompressed rows. Now add a row-level ADO policy:

    alter table employee ilm add policy
    row store compress advanced row
    after 3 days of no modification
    /

    SQL> set serveroutput on
    SQL> execute list_ilm_policies;
    --------------------------------------------------
    Policies defined for SCOTT
    --------------------------------------------------
    Object Name------ : EMPLOYEE
    Subobject Name--- :
    Object Type------ : TABLE
    Inherited from--- : POLICY NOT INHERITED
    Policy Name------ : P1
    Action Type------ : COMPRESSION
    Scope------------ : ROW
    Compression level : ADVANCED
    Tier Tablespace-- :
    Condition type--- : LAST MODIFICATION TIME
    Condition days--- : 3
    Enabled---------- : YES
    --------------------------------------------------
    PL/SQL procedure successfully completed.

    SQL> select sysdate from dual;
    SYSDATE
    --------------
    29-JUL-13

    SQL> execute set_back_chktime(get_policy_name('EMPLOYEE',null,'COMPRESSION','ROW','ADVANCED',3,null,null),'EMPLOYEE',null,6);
    Object check time reset ...
    --------------------------------------
    Object Name    : EMPLOYEE
    Object Number  : 93123
    D.Object Numbr : 93123
    Policy Number  : 1
    Object chktime : 23-JUL-13 08.13.42.000000 AM
    Distnt chktime : 0
    --------------------------------------
    PL/SQL procedure successfully completed.

    set_back_chktime moves the policy check time back (here by six days), so the demo does not have to wait for the condition to age naturally. Then flush the caches and open a maintenance window so the ADO background jobs can run:

    alter system flush buffer_cache;
    alter system flush buffer_cache;
    alter system flush shared_pool;
    alter system flush shared_pool;

    SQL> execute set_window('MONDAY_WINDOW','OPEN');
    Set Maint. Window OPEN
    -----------------------------
    Window Name : MONDAY_WINDOW
    Enabled?    : TRUE
    Active?     : TRUE
    -----------------------------
    PL/SQL procedure successfully completed.

    SQL> exec dbms_lock.sleep(60);
    PL/SQL procedure successfully completed.

    SQL> exec print_compression_stats('SCOTT', 'EMPLOYEE');
    Compression Stats
    ------------------
    Uncmpressed          : 338
    Adv/basic compressed : 2734
    Others               : 0
    PL/SQL procedure successfully completed.

    Most of the rows have now been compressed (Adv/basic compressed: 2734).

    SQL> col object_name for a20
    SQL> select object_id,object_name from dba_objects where object_name='EMPLOYEE';
    OBJECT_ID  OBJECT_NAME
    ---------- --------------------
    93123      EMPLOYEE

    SQL> execute list_ilm_policy_executions ;
    --------------------------------------------------
    Policies execution details for SCOTT
    --------------------------------------------------
    Policy Name------ : P22
    Job Name--------- : ILMJOB48
    Start time------- : 29-JUL-13 08.37.45.061000 AM
    End time--------- : 29-JUL-13 08.37.48.629000 AM
    -----------------
    Object Name------ : EMPLOYEE
    Sub_obj Name----- :
    Obj Type--------- : TABLE
    -----------------
    Exec-state------- : SELECTED FOR EXECUTION
    Job state-------- : COMPLETED SUCCESSFULLY
    Exec comments---- :
    Results comments- : ---
    --------------------------------------------------
    PL/SQL procedure successfully completed.

    ILMJOB48 is the job that executed the policy (in 12.1.0.1 it runs on the J00x job slaves), while the MMON slave (M00x) processes perform the background evaluation, visible in ASH as "KDILM background EXEcution":

    select sample_time,program,module,action from v$active_session_history
    where action = 'KDILM background EXEcution' order by sample_time;

    29-JUL-13 08.16.38.369000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
    29-JUL-13 08.17.38.388000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
    29-JUL-13 08.17.39.390000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
    29-JUL-13 08.23.38.681000000 AM ORACLE.EXE (M002) MMON_SLAVE KDILM background EXEcution
    29-JUL-13 08.32.38.968000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
    29-JUL-13 08.33.39.993000000 AM ORACLE.EXE (M003) MMON_SLAVE KDILM background EXEcution
    29-JUL-13 08.33.40.993000000 AM ORACLE.EXE (M003) MMON_SLAVE KDILM background EXEcution
    29-JUL-13 08.36.40.066000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
    29-JUL-13 08.37.42.258000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
    29-JUL-13 08.37.43.258000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
    29-JUL-13 08.37.44.258000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
    29-JUL-13 08.38.42.386000000 AM ORACLE.EXE (M001) MMON_SLAVE KDILM background EXEcution

    select distinct action from v$active_session_history where action like 'KDILM%';
    KDILM background CLeaNup
    KDILM background EXEcution

    Finally, close the maintenance window and clean up:

    SQL> execute set_window('MONDAY_WINDOW','CLOSE');
    Set Maint. Window CLOSE
    -----------------------------
    Window Name : MONDAY_WINDOW
    Enabled?    : TRUE
    Active?     : FALSE
    -----------------------------
    PL/SQL procedure successfully completed.

    SQL> drop table employee purge ;
    Table dropped.

    spool ilm_usecase_1_cleanup.lst
    @ilm_demo_cleanup ;
    spool off

    Read the article

  • Help with Design for Vacation Tracking System (C#/.NET/Access/WebServices/SOA/Excel) [closed]

    - by Aaronaught
    I have been tasked with developing a system for tracking our company's paid time-off (vacation, sick days, etc.) At the moment we are using an Excel spreadsheet on a shared network drive, and it works pretty well, but we are concerned that we won't be able to "trust" employees forever and sometimes we run into locking issues when two people try to open the spreadsheet at once. So we are trying to build something a little more robust. I would like some input on this design in terms of maintainability, scalability, extensibility, etc. It's a pretty simple workflow we need to represent right now: I started with a basic MS Access schema like this: Employees (EmpID int, EmpName varchar(50), AllowedDays int) Vacations (VacationID int, EmpID int, BeginDate datetime, EndDate datetime) But we don't want to spend a lot of time building a schema and database like this and have to change it later, so I think I am going to go with something that will be easier to expand through configuration. Right now the vacation table has this schema: Vacations (VacationID int, PropName varchar(50), PropValue varchar(50)) And the table will be populated with data like this: VacationID | PropName | PropValue -----------+--------------+------------------ 1 | EmpID | 4 1 | EmpName | James Jones 1 | Reason | Vacation 1 | BeginDate | 2/24/2010 1 | EndDate | 2/30/2010 1 | Destination | Spectate Swamp 2 | ... | ... I think this is a pretty good, extensible design, we can easily add new properties to the vacation like the destination or maybe approval status, etc. I wasn't too sure how to go about managing the database of valid properties, I thought of putting them in a separate PropNames table but it gets complicated to manage all the different data types and people say that you shouldn't put CLR type names into a SQL database, so I decided to use XML instead, here is the schema: <VacationProperties> <PropertyNames>EmpID,EmpName,Reason,BeginDate,EndDate,Destination</PropertyNames> <PropertyTypes>System.Int32,System.String,System.String,System.DateTime,System.DateTime,System.String</PropertyTypes> <PropertiesRequired>true,true,false,true,true,false</PropertiesRequired> </VacationProperties> I might need more fields than that, I'm not completely sure. 
    I'm parsing the XML like this (would like some feedback on the parsing code): string xml = File.ReadAllText("properties.xml"); Match m = Regex.Match(xml, "<(PropertyNames)>(.*?)</PropertyNames>"); string[] pn = m.Value.Split(','); // do the same for PropertyTypes, PropertiesRequired Then I use the following code to persist configuration changes to the database: string sql = "DROP TABLE VacationProperties"; sql = sql + " CREATE TABLE VacationProperties "; sql = sql + "(PropertyName varchar(100), PropertyType varchar(100) "; sql = sql + "IsRequired varchar(100))"; for (int i = 0; i < pn.Length; i++) { sql = sql + " INSERT VacationProperties VALUES (" + pn[i] + "," + pt[i] + "," + pv[i] + ")"; } // GlobalConnection is a singleton new SqlCommand(sql, GlobalConnection.Instance).ExecuteReader(); So far so good, but after a few days of this I then realized that a lot of this was just a more specific kind of a generic workflow which could be further abstracted, and instead of writing all of this boilerplate plumbing code I could just come up with a workflow and plug it into a workflow engine like Windows Workflow Foundation and have the users configure it: In order to support routing these configurations through the workflow system, it seemed natural to implement generic XML Web Services for this instead of just using an XML file as above. I've used this code to implement the Web Services: public class VacationConfigurationService : WebService { [WebMethod] public void UpdateConfiguration(string xml) { // Above code goes here } } Which was pretty easy, although I'm still working on a way to validate that XML against some kind of schema as there's no error-checking yet. I also created a few different services for other operations like VacationSubmissionService, VacationReportService, VacationDataService, VacationAuthenticationService, etc. The whole Service Oriented Architecture looks like this: And because the workflow itself might change, I have been working on a way to integrate the WF workflow system with MS Visio, which everybody at the office already knows how to use so they could make changes pretty easily. We have a diagram that looks like the following (it's kind of hard to read but the main items are Activities, Authenticators, Validators, Transformers, Processors, and Data Connections, they're all analogous to the services in the SOA diagram above). The requirements for this system are: (Note - I don't control these, they were given to me by management) Main workflow must interface with Excel spreadsheet, probably through VBA macros (to ease the transition to the new system) Alerts should integrate with MS Outlook, Lotus Notes, and SMS (text messages). We also want to interface it with the company Voice Mail system but that is not a "hard" requirement. Performance requirements: Must handle 250,000 Transactions Per Second Should be able to handle up to 20,000 employees (right now we have 3) 99.99% uptime ("four nines") expected Must be secure against outside hacking, but users cannot be required to enter a username/password. Platforms: Must support Windows XP/Vista/7, Linux, iPhone, Blackberry, DOS 2.0, VAX, IRIX, PDP-11, Apple IIc. Time to complete: 6 to 8 weeks. My questions are: Is this a good design for the system so far? Am I using all of the recommended best practices for these technologies? How do I integrate the Visio diagram above with the Windows Workflow Foundation to call the ConfigurationService and persist workflow changes? Am I missing any important components? 
    Will this be extensible enough to support any scenario via end-user configuration? Will the system scale to the above performance requirements? Will we need any expensive hardware to run it? Are there any "gotchas" I should know about with respect to cross-platform compatibility? For example, would it be difficult to convert this to an iPhone app? How long would you expect this to take? (We've dedicated 1 week for testing so I'm thinking maybe 5 weeks?) Many thanks for your advice, Aaron
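
    As a side note on the configuration parsing described above: a minimal sketch of reading the same <VacationProperties> document with the .NET XML APIs rather than a regular expression might look like the following C#. The element names match the XML quoted above; the VacationProperty class, the file name and the error handling are illustrative assumptions, not part of the original design.

    using System;
    using System.IO;
    using System.Linq;
    using System.Xml.Linq;

    // Illustrative container for one configured property (not part of the original design).
    public sealed class VacationProperty
    {
        public string Name { get; set; }
        public string TypeName { get; set; }
        public bool IsRequired { get; set; }
    }

    public static class VacationConfigReader
    {
        // Reads the three comma-separated lists from the <VacationProperties> document shown above.
        public static VacationProperty[] Read(string path)
        {
            XDocument doc = XDocument.Load(path);
            XElement root = doc.Element("VacationProperties")
                ?? throw new InvalidDataException("Missing <VacationProperties> root element.");

            string[] names = Split(root, "PropertyNames");
            string[] types = Split(root, "PropertyTypes");
            string[] required = Split(root, "PropertiesRequired");

            if (names.Length != types.Length || names.Length != required.Length)
                throw new InvalidDataException("The three property lists must have the same length.");

            return names.Select((name, i) => new VacationProperty
            {
                Name = name,
                TypeName = types[i],
                IsRequired = bool.Parse(required[i])
            }).ToArray();
        }

        private static string[] Split(XElement root, string elementName)
        {
            // Each element holds a single comma-separated list, e.g. "EmpID,EmpName,...".
            string value = (string)root.Element(elementName) ?? string.Empty;
            return value.Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries)
                        .Select(s => s.Trim())
                        .ToArray();
        }
    }

    Usage would be something like var props = VacationConfigReader.Read("properties.xml"), after which the three parallel lists are available as one object per property instead of three loosely related arrays.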

    Read the article

  • Token replacement

    - by ClarkeyBoy
    Hey, I currently have a system on my website whereby I can put something like "[cfe]" anywhere in the site and, when the page is rendered, it will replace it with the root to the customer front end (same for "[afe]" and admin front end - so in the admin front end I can put "[cfe]/Default.aspx" to link to the homepage on the customer front end. This is in place as I have a development version of the site, then a test and a live version too. All 3 may have different roots to each section (for example the way the website is set up, the root to the admin front end in test is "/test/Administration/", but in live and development it is just "/Administration/"). Which version it is depends on the URL - all my development sites are in a folder called "development", whereas test is in a folder called "test" and any live urls do not contain either of these. There are also 3 different databases - one for each. All 3, obviously, require a different connection string. I also have a string replacement function in place which can change, for example, "[Product:Cards]" to point to the Cards catalogue page. Problem is that for this I go through all the products and do a replacement on "[Product:" & Product.Name() & "]". However I would like to take this further. I would like to pick up these custom strings when the page is rendered so it picks up "[Product:Cards]" and then goes off to find product "Cards" and replaces the string with a link to the Cards page, rather than looping through all the products and doing a replace just on the off chance that there are any replacements to make. One use for this, which I may start using in the future if I can figure out how to do this, is like on Wikipedia where you put the title of the page you want to point to, then a divider (think its a pipe from memory) then the link text. I would like to apply this to the above situation. This way broken links can also be picked up, and reported to admin (a major advantage as they can then locate them and remove the link or add the product / page that it refers to). I would like to take this to the stage where content of entire pages can be rearranged (kinda like web parts, but not as advanced as that). I mean like so you can put [layout type="3columnImageTopRight" image="imageurl"]Content here[/layout]. This will display, as specified, an image in the top right (with padding at the left and bottom) and 3 columns - maybe with the image spanning one or two columns). The imageurl can be specified as another token: maybe like [Image:imagename.gif] or something. This replaces it with the root to the image folder and then the specified filename. I have not really looked into how I am going to split the content into 3 columns yet, but this would be something to look at for my dissertation and then implement after my project deadline at least. Does anyone have any ideas or pointers which could help me with this? Also if this is not strictly token replacement then please point me to what it is, so I can further develop this. Thanks in advance, Regards, Richard Clarke
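
    One common way to do the render-time token pass being described is a single Regex.Replace with a MatchEvaluator, resolving each [Product:Name] (or [Product:Name|Link text]) token only when it actually occurs and recording the ones that do not resolve so they can be reported to admin. The C# below is only a sketch under those assumptions; the lookup delegate and type names are made up for illustration and are not the site's actual API.

    using System;
    using System.Collections.Generic;
    using System.Text.RegularExpressions;

    public static class TokenExpander
    {
        // Matches tokens such as [Product:Cards] or [Product:Cards|Greeting cards].
        private static readonly Regex ProductToken =
            new Regex(@"\[Product:(?<name>[^\]|]+)(\|(?<text>[^\]]+))?\]", RegexOptions.Compiled);

        // resolveUrl looks a product up by name and returns its URL, or null if it does not exist.
        // brokenTokens collects unresolved names so they can be reported to an administrator.
        public static string Expand(string html,
                                    Func<string, string> resolveUrl,
                                    ICollection<string> brokenTokens)
        {
            return ProductToken.Replace(html, match =>
            {
                string name = match.Groups["name"].Value.Trim();
                string linkText = match.Groups["text"].Success ? match.Groups["text"].Value : name;

                string url = resolveUrl(name);
                if (url == null)
                {
                    brokenTokens.Add(name);   // report later instead of emitting a dead link
                    return linkText;          // degrade gracefully to plain text
                }
                return $"<a href=\"{url}\">{linkText}</a>";
            });
        }
    }

    At render time the call might be Expand(pageHtml, name => productLookup(name), broken), with anything left in broken logged for the admin report; the pipe-separated link text mirrors the Wikipedia-style syntax mentioned above.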

    Read the article

  • Overflow - what am I doing wrong?

    - by ClarkeyBoy
    Hi, I have been working on trying to get a page to display a title at the top of the content pane, and then a scrollable list of products below that so that the title of the product range is displayed at all times. I am sure this is a very simple thing to do - but cannot figure it out. Currently the actual page (not the test page for which the code is given below) works ok in the sense that I set the heading div to 5% of the height of .content-container and then set the scrollable div to 95% with top: 5%, both with position: absolute applied. - however I would like to place some links in the heading div to different pages (1, 2, 3 etc), which I would like to centre vertically if they are shorter than the heading and expand the heading div to match the height of the heading or the links, whichever is smallest. Furthermore I would like the div below the heading to shrink so that it doesn't go below the bottom of the content div as the heading div gets taller. The point of this is because it is for a client who may, or may not, be happy with the heading sizes and so on - therefore the heading div height could easily change. Specifying heights so precisely means that changing the h1 height could mean 5 changes to the CSS file - something I want to avoid. The content pane currently has its height fixed to 80% of the page, with the header and footer being 10% each on top of that, so there is no scrollbar at the side of the page and the header / footer are always showing. This is something I would like to keep. In the code below, .content-container is the main content pane - this is contained in another div which is centred using the margin at 50% of the page width. .test-div is the div which contains the heading. .test-div-2 is an attempt to place a div below .test-div, in the hope that I can force .test-div-3 to extend to 100% of its' height but no further, and to display a scrollbar if the content exceeds the height. So far I have the following, but it doesn't do exactly what I would like it to: <div class="content-container"> <div class="test-div"> <h1 style="text-align: center;">Dogs</h1> </div> <div class="test-div-2"> <div class="test-div-3"> //Content here </div> </div> </div> .content-container { position: absolute; width: 100%; height: 100%; left: 0; right: 0; bottom: 0; overflow: auto; } .test-div { position: relative; padding: 0; margin: 0; } .test-div-2 { position: relative; background-color: #CCCCCC; } .test-div-3 { max-height: 100%; background-color: #999999; } Any help with this would be greatly appreciated. I would like to achieve this without the use of JavaScript / jQuery if possible - pure HTML / CSS solutions only please! Regards, Richard

    Read the article

  • Writing more efficient xquery code (avoiding redundant iteration)

    - by Coquelicot
    Here's a simplified version of a problem I'm working on: I have a bunch of xml data that encodes information about people. Each person is uniquely identified by an 'id' attribute, but they may go by many names. For example, in one document, I might find <person id=1>Paul Mcartney</person> <person id=2>Ringo Starr</person> And in another I might find: <person id=1>Sir Paul McCartney</person> <person id=2>Richard Starkey</person> I want to use xquery to produce a new document that lists every name associated with a given id. i.e.: <person id=1> <name>Paul McCartney</name> <name>Sir Paul McCartney</name> <name>James Paul McCartney</name> </person> <person id=2> ... </person> The way I'm doing this now in xquery is something like this (pseudocode-esque): let $ids := distinct-terms( [all the id attributes on people] ) for $id in $ids return <person id={$id}> { for $unique-name in distinct-values ( for $name in ( [all names] ) where $name/@id=$id return $name ) return <name>{$unique-name}</name> } </person> The problem is that this is really slow. I imagine the bottleneck is the innermost loop, which executes once for every id (of which there are about 1200). I'm dealing with a fair bit of data (300 MB, spread over about 800 xml files), so even a single execution of the query in the inner loop takes about 12 seconds, which means that repeating it 1200 times will take about 4 hours (which might be optimistic - the process has been running for 3 hours so far). Not only is it slow, it's using a whole lot of virtual memory. I'm using Saxon, and I had to set java's maximum heap size to 10 GB (!) to avoid getting out of memory errors, and it's currently using 6 GB of physical memory. So here's how I'd really like to do this (in Pythonic pseudocode): persons = {} for id in ids: person[id] = set() for person in all_the_people_in_my_xml_document: persons[person.id].add(person.name) There, I just did it in linear time, with only one sweep of the xml document. Now, is there some way to do something similar in xquery? Surely if I can imagine it, a reasonable programming language should be able to do it (he said quixotically). The problem, I suppose, is that unlike Python, xquery doesn't (as far as I know) have anything like an associative array. Is there some clever way around this? Failing that, is there something better than xquery that I might use to accomplish my goal? Because really, the computational resources I'm throwing at this relatively simple problem are kind of ridiculous.
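
    If stepping outside XQuery is acceptable, the one-sweep dictionary approach in the pseudocode above is easy to express in most general-purpose languages. Here is a hedged C# sketch, assuming the documents contain <person> elements with quoted id attributes as in the fragments shown; the data directory and output shape are illustrative assumptions. Each file is loaded once, and only the id-to-names map is kept in memory.

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;
    using System.Xml.Linq;

    public static class PersonNameCollector
    {
        public static void Main()
        {
            // id -> set of distinct names seen for that id.
            var namesById = new Dictionary<string, HashSet<string>>();

            // One linear sweep over all documents; nothing is re-scanned per id.
            foreach (string file in Directory.EnumerateFiles("data", "*.xml"))
            {
                XDocument doc = XDocument.Load(file);
                foreach (XElement person in doc.Descendants("person"))
                {
                    string id = (string)person.Attribute("id");
                    if (id == null) continue;

                    if (!namesById.TryGetValue(id, out HashSet<string> names))
                    {
                        names = new HashSet<string>();
                        namesById[id] = names;
                    }
                    names.Add(person.Value.Trim());
                }
            }

            // Emit one <person> element per id with all of its names.
            var result = new XElement("people",
                namesById.OrderBy(kv => kv.Key).Select(kv =>
                    new XElement("person",
                        new XAttribute("id", kv.Key),
                        kv.Value.Select(n => new XElement("name", n)))));

            Console.WriteLine(result);
        }
    }

    The point of the sketch is the data structure rather than the language: one dictionary keyed by id replaces the nested loop over all ids, so the work grows with the number of person elements instead of ids times elements.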

    Read the article

  • How do you read from a file into an array of struct?

    - by Thomas.Winsnes
    I'm currently working on an assignment and this have had me stuck for hours. Can someone please help me point out why this isn't working for me? struct book { char title[25]; char author[50]; char subject[20]; int callNumber; char publisher[250]; char publishDate[11]; char location[20]; char status[11]; char type[12]; int circulationPeriod; int costOfBook; }; void PrintBookList(struct book **bookList) { int i; for(i = 0; i < sizeof(bookList); i++) { struct book newBook = *bookList[i]; printf("%s;%s;%s;%d;%s;%s;%s;%s;%s;%d;%d\n",newBook.title, newBook.author, newBook.subject, newBook.callNumber,newBook.publisher, newBook.publishDate, newBook.location, newBook.status, newBook.type,newBook.circulationPeriod, newBook.costOfBook); } } void GetBookList(struct book** bookList) { FILE* file = fopen("book.txt", "r"); struct book newBook[1024]; int i = 0; while(fscanf(file, "%s;%s;%s;%d;%s;%s;%s;%s;%s;%d;%d", &newBook[i].title, &newBook[i].author, &newBook[i].subject, &newBook[i].callNumber,&newBook[i].publisher, &newBook[i].publishDate, &newBook[i].location, &newBook[i].status, &newBook[i].type,&newBook[i].circulationPeriod, &newBook[i].costOfBook) != EOF) { bookList[i] = &newBook[i]; i++; } /*while(fscanf(file, "%s;%s;%s;%d;%s;%s;%s;%s;%s;%d;%d", &bookList[i].title, &bookList[i].author, &bookList[i].subject, &bookList[i].callNumber, &bookList[i].publisher, &bookList[i].publishDate, &bookList[i].location, &bookList[i].status, &bookList[i].type, &bookList[i].circulationPeriod, &bookList[i].costOfBook) != EOF) { i++; }*/ PrintBookList(bookList); fclose(file); } int main() { struct book *bookList[1024]; GetBookList(bookList); } I get no errors or warnings on compile it should print the content of the file, just like it is in the file. Like this: OperatingSystems Internals and Design principles;William.S;IT;741012759;Upper Saddle River;2009;QA7676063;Available;circulation;3;11200 Communication skills handbook;Summers.J;Accounting;771239216;Milton;2010;BF637C451;Available;circulation;3;7900 Business marketing management:B2B;Hutt.D;Management;741912319;Mason;2010;HF5415131;Available;circulation;3;1053 Patient education rehabilitation;Dreeben.O;Education;745121511;Sudbury;2010;CF5671A98;Available;reference;0;6895 Tomorrow's technology and you;Beekman.G;Science;764102174;Upper Saddle River;2009;QA76B41;Out;reserved;1;7825 Property & security: selected essay;Cathy.S;Law;750131231;Rozelle;2010;D4A3C56;Available;reference;0;20075 Introducing communication theory;Richard.W;IT;714789013;McGraw-Hill;2010;Q360W47;Available;circulation;3;12150 Maths for computing and information technology;Giannasi.F;Mathematics;729890537;Longman;Scientific;1995;QA769M35G;Available;reference;0;13500 Labor economics;George.J;Economics;715784761;McGraw-Hill;2010;HD4901B67;Available;circulation;3;7585 Human physiology:from cells to systems;Sherwood.L;Physiology;707558936;Cengage Learning;2010;QP345S32;Out;circulation;3;11135 bobs;thomas;IT;701000000;UC;1006;QA7548;Available;Circulation;7;5050 but when I run it, it outputs this: OperatingSystems;;;0;;;;;;0;0 Internals;;;0;;;;;;0;0 and;;;0;;;;;;0;0 Design;;;0;;;;;;0;0 principles;William.S;IT;741012759;Upper;41012759;Upper;;0;;;;;;0;0 Saddle;;;0;;;;;;0;0 River;2009;QA7676063;Available;circulation;3;11200;lable;circulation;3;11200;;0;;;;;;0;0 Communication;;;0;;;;;;0;0 Thanks in advance, you're a life saver

    Read the article

  • Jquery Live Function

    - by marharépa
    Hi! I want to make this script to work as LIVE() function. Please help me! $(".img img").each(function() { $(this).cjObjectScaler({ destElem: $(this).parent(), method: "fit" }); }); the cjObjectScaler script (called in the html header) is this: (thanks for Doug Jones) (function ($) { jQuery.fn.imagesLoaded = function (callback) { var elems = this.filter('img'), len = elems.length; elems.bind('load', function () { if (--len <= 0) { callback.call(elems, this); } }).each(function () { // cached images don't fire load sometimes, so we reset src. if (this.complete || this.complete === undefined) { var src = this.src; // webkit hack from http://groups.google.com/group/jquery-dev/browse_thread/thread/eee6ab7b2da50e1f this.src = '#'; this.src = src; } }); }; })(jQuery); /* CJ Object Scaler */ (function ($) { jQuery.fn.cjObjectScaler = function (options) { /* user variables (settings) ***************************************/ var settings = { // must be a jQuery object method: "fill", // the parent object to scale our object into destElem: null, // fit|fill fade: 0 // if positive value, do hide/fadeIn }; /* system variables ***************************************/ var sys = { // function parameters version: '2.1.1', elem: null }; /* scale the image ***************************************/ function scaleObj(obj) { // declare some local variables var destW = jQuery(settings.destElem).width(), destH = jQuery(settings.destElem).height(), ratioX, ratioY, scale, newWidth, newHeight, borderW = parseInt(jQuery(obj).css("borderLeftWidth"), 10) + parseInt(jQuery(obj).css("borderRightWidth"), 10), borderH = parseInt(jQuery(obj).css("borderTopWidth"), 10) + parseInt(jQuery(obj).css("borderBottomWidth"), 10), objW = jQuery(obj).width(), objH = jQuery(obj).height(); // check for valid border values. IE takes in account border size when calculating width/height so just set to 0 borderW = isNaN(borderW) ? 0 : borderW; borderH = isNaN(borderH) ? 0 : borderH; // calculate scale ratios ratioX = destW / jQuery(obj).width(); ratioY = destH / jQuery(obj).height(); // Determine which algorithm to use if (!jQuery(obj).hasClass("cf_image_scaler_fill") && (jQuery(obj).hasClass("cf_image_scaler_fit") || settings.method === "fit")) { scale = ratioX < ratioY ? ratioX : ratioY; } else if (!jQuery(obj).hasClass("cf_image_scaler_fit") && (jQuery(obj).hasClass("cf_image_scaler_fill") || settings.method === "fill")) { scale = ratioX < ratioY ? ratioX : ratioY; } // calculate our new image dimensions newWidth = parseInt(jQuery(obj).width() * scale, 10) - borderW; newHeight = parseInt(jQuery(obj).height() * scale, 10) - borderH; // Set new dimensions & offset jQuery(obj).css({ "width": newWidth + "px", "height": newHeight + "px"//, // "position": "absolute", // "top": (parseInt((destH - newHeight) / 2, 10) - parseInt(borderH / 2, 10)) + "px", // "left": (parseInt((destW - newWidth) / 2, 10) - parseInt(borderW / 2, 10)) + "px" }).attr({ "width": newWidth, "height": newHeight }); // do our fancy fade in, if user supplied a fade amount if (settings.fade > 0) { jQuery(obj).fadeIn(settings.fade); } } /* set up any user passed variables ***************************************/ if (options) { jQuery.extend(settings, options); } /* main ***************************************/ return this.each(function () { sys.elem = this; // if they don't provide a destObject, use parent if (settings.destElem === null) { settings.destElem = jQuery(sys.elem).parent(); } // need to make sure the user set the parent's position. 
Things go bonker's if not set. // valid values: absolute|relative|fixed if (jQuery(settings.destElem).css("position") === "static") { jQuery(settings.destElem).css({ "position": "relative" }); } // if our object to scale is an image, we need to make sure it's loaded before we continue. if (typeof sys.elem === "object" && typeof settings.destElem === "object" && typeof settings.method === "string") { // if the user supplied a fade amount, hide our image if (settings.fade > 0) { jQuery(sys.elem).hide(); } if (sys.elem.nodeName === "IMG") { // to fix the weird width/height caching issue we set the image dimensions to be auto; jQuery(sys.elem).width("auto"); jQuery(sys.elem).height("auto"); // wait until the image is loaded before scaling jQuery(sys.elem).imagesLoaded(function () { scaleObj(this); }); } else { scaleObj(jQuery(sys.elem)); } } else { console.debug("CJ Object Scaler could not initialize."); return; } }); }; })(jQuery);

    Read the article

  • Ajax and using responseXML

    - by Banderdash
    Hello, I have a XML file that looks like this: <response> <library name="My Library"> <book id="1" checked-out="1"> <authors> <author>David Flanagan</author> </authors> <title>JavaScript: The Definitive Guide</title> <isbn-10>0596101996</isbn-10> </book> <book id="2" checked-out="1"> <authors> <author>John Resig</author> </authors> <title>Pro JavaScript Techniques (Pro)</title> <isbn-10>1590597273</isbn-10> </book> <book id="3" checked-out="0"> <authors> <author>Erich Gamma</author> <author>Richard Helm</author> <author>Ralph Johnson</author> <author>John M. Vlissides</author> </authors> <title>Design Patterns: Elements of Reusable Object-Oriented Software</title> <isbn-10>0201633612</isbn-10> </book> ... </library> </response> I'm using a simple JS script to, on click show all the titles of the books: <script type="text/javascript"> function loadXMLDoc() { if (window.XMLHttpRequest) {// code for IE7+, Firefox, Chrome, Opera, Safari xmlhttp=new XMLHttpRequest(); } else {// code for IE6, IE5 xmlhttp=new ActiveXObject("Microsoft.XMLHTTP"); } xmlhttp.onreadystatechange=function() { if (xmlhttp.readyState==4 && xmlhttp.status==200) { xmlDoc=xmlhttp.responseXML; var txt=""; x=xmlDoc.getElementsByTagName("title"); for (i=0;i<x.length;i++) { txt=txt + x[i].childNodes[0].nodeValue + "<br />"; } document.getElementById("checkedIn").innerHTML=txt; } } xmlhttp.open("GET","ajax-response-data.xml",true); xmlhttp.send(); } </script> This works fine, as you can see here: http://clients.pixelbleed.net/ajax-test/ What I'd like to do is have the results post, on page load (not on click) into two separate DIV's depending on checked-out variable in the XML. So <book id="#" checked-out="1"> would post to the checkedIn div, <book id="#" checked-out="0"> posts to a checkedOut div. Also want to display the title and the author--would love any ideas as best method for accomplishing this. Apologize in advanced for the newbieness of my query.

    Read the article

  • ADO program to list members of a large group.

    - by AlexGomez
    Hi everyone, I'm attempting to list all the members in a Active Directory group using ADO. The problem I have is that many of these groups have over 1500 members and ADSI cannot handle more than 1500 items in a multi-valued attribute. Fortunately I came across Richard Muller's wonderful VBScript that handles more than 1500 members at http://www.rlmueller.net/DocumentLargeGroup.htm I modified his code as shown below so that I can list ALL the groups and its memberships in a certain OU. However, I'm keeping getting the exception shown below: "ADODB.Recordset: Item cannot be found in the collection corresponding to the requested name or ordinal." My program appears to get stuck at: strPath = adoRecordset.Fields("ADsPath").Value Set objGroup = GetObject(strPath) All I am doing above is issuing the query to get back a recordset consisting of the ADsPath for each group in the OU. It then walks through the recordset and grabs the ADsPath for the first group and store its in a variable named strPath; we then use the value of that variable to bind to the group account for that group. It really should work! Any idea why the code below doesn't work for me? Any pointers will be great appreciated. Thanks. Option Explicit Dim objRootDSE, strDNSDomain, adoCommand Dim adoConnection, strBase, strAttributes Dim strFilter, strQuery, adoRecordset Dim strDN, intCount, blnLast, intLowRange Dim intHighRange, intRangeStep, objField Dim objGroup, objMember, strName ' Determine DNS domain name. Set objRootDSE = GetObject("LDAP://RootDSE") 'strDNSDomain = objRootDSE.Get("DefaultNamingContext") strDNSDomain = "XXXXXXXX" ' Use ADO to search Active Directory. Set adoCommand = CreateObject("ADODB.Command") Set adoConnection = CreateObject("ADODB.Connection") adoConnection.Provider = "ADsDSOObject" adoConnection.Open = "Active Directory Provider" adoCommand.ActiveConnection = adoConnection adoCommand.Properties("Page Size") = 100 adoCommand.Properties("Timeout") = 30 adoCommand.Properties("Cache Results") = False ' Specify base of search. strBase = "<LDAP://" & strDNSDomain & ">" ' Specify the attribute values to retrieve. strAttributes = "member" ' Filter on objects of class "group" strFilter = "(&(objectClass=group)(samAccountName=*))" ' Enumerate direct group members. ' Use range limits to handle more than 1000/1500 members. ' Setup to retrieve 1000 members at a time. blnLast = False intRangeStep = 999 intLowRange = 0 IntHighRange = intLowRange + intRangeStep Do While True If (blnLast = True) Then ' If last query, retrieve remaining members. strQuery = strBase & ";" & strFilter & ";" _ & strAttributes & ";range=" & intLowRange _ & "-*;subtree" Else ' If not last query, retrieve 1000 members. strQuery = strBase & ";" & strFilter & ";" _ & strAttributes & ";range=" & intLowRange & "-" _ & intHighRange & ";subtree" End If adoCommand.CommandText = strQuery Set adoRecordset = adoCommand.Execute adoRecordset.MoveFirst intCount = 0 Do Until adoRecordset.EOF strPath = adoRecordset.Fields("ADsPath").Value Set objGroup = GetObject(strPath) For Each objField In adoRecordset.Fields If (VarType(objField) = (vbArray + vbVariant)) _ Then For Each strDN In objField.Value ' Escape any forward slash characters, "/", with the backslash ' escape character. All other characters that should be escaped are. strDN = Replace(strDN, "/", "\/") ' Check dictionary object for duplicates. 'If (objGroupList.Exists(strDN) = False) Then ' Add to dictionary object. 
'objGroupList.Add strDN, True ' Bind to each group member, to find member's samAccountName Set objMember = GetObject("LDAP://" & strDN) ' Output group cn, group samaAccountName and group member's samAccountName. Wscript.Echo objMember.samAccountName intCount = intCount + 1 'End if Next End If Next adoRecordset.MoveNext Loop adoRecordset.Close ' If this is the last query, exit the Do While loop. If (blnLast = True) Then Exit Do End If ' If the previous query returned no members, then the previous ' query for the next 1000 members failed. Perform one more ' query to retrieve remaining members (less than 1000). If (intCount = 0) Then blnLast = True Else ' Setup to retrieve next 1000 members. intLowRange = intHighRange + 1 intHighRange = intLowRange + intRangeStep End If Loop
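
    For comparison, the same ranged-retrieval technique (asking for member;range=low-high slices until the server answers with a range ending in "-*") can also be written in C# with System.DirectoryServices. This is only a rough sketch, not a drop-in replacement for the VBScript above: the LDAP path, the page size and the error handling are assumptions.

    using System;
    using System.Collections.Generic;
    using System.DirectoryServices;   // add a reference to System.DirectoryServices

    public static class LargeGroupReader
    {
        // Returns every value of the "member" attribute of a group, 1500 at a time.
        // groupPath is an assumption, e.g. "LDAP://CN=BigGroup,OU=Groups,DC=example,DC=com".
        public static List<string> GetMembers(string groupPath)
        {
            var members = new List<string>();
            const int rangeStep = 1500;
            int low = 0;
            bool lastPage = false;

            using (var entry = new DirectoryEntry(groupPath))
            {
                while (!lastPage)
                {
                    string rangedAttribute = $"member;range={low}-{low + rangeStep - 1}";

                    using (var searcher = new DirectorySearcher(entry, "(objectClass=*)",
                                                                new[] { rangedAttribute },
                                                                SearchScope.Base))
                    {
                        SearchResult result = searcher.FindOne();
                        if (result == null) break;

                        bool foundAny = false;
                        foreach (string propertyName in result.Properties.PropertyNames)
                        {
                            // The server answers with "member;range=0-1499" or, on the
                            // final slice, something like "member;range=3000-*".
                            if (!propertyName.StartsWith("member;range=", StringComparison.OrdinalIgnoreCase))
                                continue;

                            foreach (object value in result.Properties[propertyName])
                                members.Add(value.ToString());

                            foundAny = true;
                            if (propertyName.EndsWith("-*")) lastPage = true;
                        }

                        if (!foundAny) lastPage = true;   // empty group or no more values
                    }

                    low += rangeStep;
                }
            }
            return members;
        }
    }

    The loop mirrors the VBScript logic: request a slice, append its values, and stop when the directory returns the open-ended range that marks the last page.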

    Read the article

  • SOA Community Newsletter: nouvelle lettre !

    - by mseika
    SOA PARTNER COMMUNITY NEWSLETTERAUGUST 2012 Dear SOA partner community member Have you submitted your feedback on SOA Partner Community Survey 2012? This is the last chance to participate in the survey. We recommend you to complete the survey and help us to improve our SOA Community. Thanks to all attendees and trainers for their participation in the excellent Fusion Middleware Summer Camps held in Lisbon and Munich. I would also like to thank you for the great feedback and the nice reports provided by AMIS Technology Blog & Middleware by Link Consulting. Most of our courses have been overbooked, if you did not get a chance or missed it, we offer a wide range of online training and the course material. Key take-away from the advanced BPM course is to become an expert in ADF. Here is the course from Grant Ronald Learn Advanced ADF online available. The Link Consulting Team became experts in SOA Governance with EAMS and Oracle Enterprise Repository! We always encourage our community members to share their best practices and are very keen to publish it. Please let us know if you want to share your best practices through this medium.We encourage you to make use of the Specialization benefits - this month we are giving an opportunity to Promote Your SOA & BPM Events. Jürgen KressOracle SOA & BPM Partner Adoption EMEA NEW CONTENT Presentations & Training material OFM Summer CampsPromote Your SOA & BPM Events Advanced ADF Online, For Free By Grant BPM 11g Customer Stories & Solution Catalog & Process Accelerators Delivering SOA Governance with EAMS by Link Consulting Team WebLogic Server Provisioning and Patching News from our Partners & CommunityUpdated material by Oracle Connect and Network SOA Blogs SOA on Facebook SOA on LinkedIn SOA on Twitter Mix SOA Forum SOA Workspace PRESENTATIONS & TRAINING MATERIAL OFM SUMMER CAMPS Thanks to all attendees who invested their time and utilized the opportunity to attend the Summer Camps! Due to high demand of our most of the trainings, we had a long waiting list with more numbers of partners who are keen to attend it. We would like to give our special thanks to all trainers, who delivered excellent workshops! Most of the presentations and course material have been posted on our SOA Community Workspaceand WebLogic Community Workspace. You can access the content only if you are a registered community member. To register for the SOA Community please click here. You can register for the WebLogic Community here. To find out the first impressions of the event please visit our Facebook pages:www.facebook.com/WebLogicCommunity &www.facebook.com/soacommunity or Picasa AlbumThanks for the excellent blog posts from AMIS Technology Blog & Middleware by Link Consulting. Let us know if you published a twitter blog on@soacommunity & @wlscommunity. We will be pleased to publish it in our Newsletters. BPM Course Quotes “Its always easy, if you know, what you are doing” - Torsten Winterberg, Opitz“ The best ideas are the ideas from the best” - Filipe Sequeria, Primesoft “Best invest in the education in the last 12 months” - Richard Schaller, IPT “Practice best practice with the best instructor” - Graham Lamond Capgemini “If you have basic BPM knowledge, this is the course to really mater it” - Diogo Henriques Link Consulting “Very good trainers lot of work. 
Lot of fun as well” - Matthias Gris Workflow Factory “If you like to accelerate in Oracle come to the training to bring it all together” - Marcel van der Glind, Amis ADF Course Quotes "Excellent training, great opportunity to network!" - Frank Houweling, Amis "Lots of fun and good ideas" - Ana Santiago, GFI "Learn ADF, worth it Fusion Apps is the future" - Miguel Delgadillo, STO Consulting "The best way to learn Fusion Middleware from the #1" Alexandro Montantes, STO Consulting "Be advanced to to be the first” - Dimitar Petrov Fadata "Great opportunity to suck all the knowledge out of some very experienced product managers” - Wilfred von der Deijl, The Future Group WebLogic Course Quotes “Oracle trainings are the best” - Pedro Neto Novobas“ "Excellent training, well organized” - Pedro Antunh, Capgemini “This course dives you into Oracle WebLogic giving you a quick start on benefiting from Fusion Apps” - Leonardo Fernandes, Outsystems Additional Quotes “Thanks a lot again for organizing such a great and informative Summer Camp. Both training and networking were organized very professionally. I have gained tons of very useful Info, which will definitely help to increase quality of our future projects.” - Daniel Fasko fss-group.com I didn’t get the chance yesterday to thank you for a most enjoyable and thoroughly educational time I had in Munich over the last few days.” - Jeroen Bakker Ordina “Just to congratulate you on a great event, not only today but also in the previous days of training. As we know, a very good organization and, as a native Portuguese that knows Lisbon very good, a nice choice of places to visit. Looking forward to come again next year.” Pedro Miguel Neto, Novobase PROMOTE YOUR SOA & BPM EVENTS The Partner Event Publisher has just been made available to all SOA & BPM specialized partners in EMEA. Partners now have the opportunity to publish their events to theOracle.com/events site and spread the word on their upcoming live in-person and/or live webcast events. See the demo below and click here to read more information. ADVANCED ADF ONLINE, FOR FREE BY GRANT The second part of the advanced ADF online eCourse is Live now! This covers the advanced topics of region and region interaction as well as getting down and dirty with some of the layout features of ADF Faces, skinning and DVT components. The aim of this course is to give you a self-paced learning aid which covers the more advanced topics of ADF development. The content is developed by Product Management and our Curriculum development teams and is based on advanced training material we have been running internally for about 18 months. We will get started on the next chapter, but in the meantime, please have a look at chapters one and two. Back to top BPM 11G CUSTOMER STORIES & SOLUTION CATALOG & PROCESS ACCELERATORS Stories Everyone loves a good story on planning or implementing a BPM strategy. Everyone wants to hear how it was done before?, what worked?, what was achieved? If you have achieved success with BPM, we are very keen to hear your stories and examples of how your customers use it. We receive lots of requests from people who are thinking of using BPM to solve a specific problem or in combination with a specific technology to talk to someone who has done it before. These stories are invaluable. Drop down the details of anything you think is relevant with a bit of detail and we will follow up on it. 
As one good deed deserves another, we will do our best to give you stories if you need them to show that where you are going, others have treaded before. Send your stories to us using this e-mail link and we will share them among other like minded people. Solution Catalogue This summer, Oracle is launching a solution catalogue specifically intended for partners. If you have delivered a successful implementation in BPM and think it could be reused and applied again in a similar scenario in the same industry or in a similar environment, then we ware keen to know about it and will add it to the solution catalogue. The solution catalogue will showcase successful BPM solutions both inside and outside Oracle. Be in touch with us on this e-mail link and we will make sure to add your solution. Process AcceleratorsFinally if you have specific processes that you are expert on, you have implemented at a customer and you want to work with us on getting these productised, then we would love to know about it. The process accelerator programme is explained in the most recent SOA/BPM Community Newsletter but again feel free to contact us if you want to get involved. Good luck with BPM and let us know how we can help. Barry O'Reilly Director BPM [email protected] DELIVERING SOA GOVERNANCE WITH EAMS BY LINK CONSULTING TEAM In the last 12 years Link Consulting has been making its presence in specific areas such as Governance and Architecture, both in terms of practices and methodologies, products, know-how and technological expertise. The Enterprise Architecture Management System - Oracle Enterprise Edition (EAMS - OER Edition) is the result of this experience and combines the architecture management solution with OER in order to deliver a product specialized for SOA Governance that gathers the better of two worlds in solution that enables SOA Governance projects, initiatives and programs. Enterprise Architecture Management System Enterprise Architecture Management System (EAMS), is an automation based solution that enables the efficient management of Enterprise Architectures. The solution uses configured enterprise repositories and takes advantages of its features to provide automation capabilities to the users. EAMS provides capabilities to create/customize/analyze repository data, architectural blueprints, reports and analytic charts. Oracle Enterprise Repository Oracle Enterprise Repository (OER) is one of the major and central elements of the Oracle SOA Governance solution. Oracle Enterprise Repository provides the tools to manage and govern the metadata for any type of software asset, from business processes and services to patterns, frameworks, applications, components, and models. OER maps the relationships and inter-dependencies that connect those assets to improve impact analysis, promote and optimize their reuse, and measure their impact on the bottom line. It provides the visibility, feedback, controls, and analytics to keep your SOA on track to deliver business value. The intense focus on automation helps to overcome barriers to SOA adoption and streamline governance throughout the lifecycle. Core capabilities of the OER include: Asset Management Asset Lifecycle Management Usage Tracking Service Discovery Version Management Dependency Analysis Portfolio Management EAMS - OER Edition The solution takes the advantages and features from both products and combines them in a symbiotic tool that enhances the quality of SOA Governance Initiatives and Programs. 
EAMS is able to produce a vast number of outputs by combining its analytical engine, SOA-specific configurations and the assets in OER and other related tools, catalogs and repositories. The configurations encompass not only the extendable parametrization of the metadata but also fully configurable blueprints, PowerPoint reports, charts and queries.

The SOA Blueprints
The solution comes with a set of predefined architectural representations that help the organization better perceive its SOA landscape. More blueprints can easily be created in order to accommodate the organization's needs in terms of detail, audience and metadata.

Charts & Dashboards
The solution encompasses a set of predefined charts and dashboards that promote a more agile way to control and explore the assets.

Time Based Visualization
All representations are time bound, and with EAMS - OER you can truly govern SOA with a complete view of the Past, Present and Future. The solution delivers Gap Analysis and a project-oriented approach while taking into consideration the As-Was, As-Is and To-Be. Time-based visualization differentiating factors:
- Extensive automation and maintenance of architectural representations
- Organization-wide solution
- Easy access and navigation to and between all architectural artifacts and representations
- Flexible meta-model, customization and extensibility capabilities
- Lifecycle management and enforcement of the time dimension over all the repository content
- Profile-based customization
- Comprehensive visibility
- Architectural alignment
- Friendly and striking user interfaces
For more information on EAMS visit us here. For more information on SOA visit us here.

WEBLOGIC SERVER PROVISIONING AND PATCHING
For access to the Oracle demo systems please visit OPN and talk to your Partner Expert. SOA Suite and BPM Suite run on WebLogic! We are pleased to announce the availability of a WebLogic Server Management demo that showcases some of the key provisioning and patching capabilities of WebLogic Server Management Pack Enterprise Edition (EE). To learn more about these features - as well as other features of the pack - please visit the pack's saleskit page.
Demo Highlights
The demo showcases the following capabilities:
- Patching Oracle WebLogic Servers
- Standardizing WebLogic Server Patch Rollouts
- Creating a WebLogic Domain Provisioning Profile
- Cloning a WebLogic Domain from a Provisioning Profile
- Deploying a Java EE Application
- Scaling Out an Oracle WebLogic Cluster
Demo Instructions
Go to the DSS website for Oracle Partners. On the Standard Demo Launchpad page, under the “Software Lifecycle Automation” section, click on the link “EM Cloud Control 12c WLS Provisioning and Patching” (tagged as “NEW”). The specific demo launchpad page contains a link to the detailed demo script with instructions on how to show the demo.

    Read the article

  • "Randomly" occurring errors...

    - by ClarkeyBoy
    Hi, My website has a setup whereby when the application starts a module called SiteContent is "created". This runs a clean-up function which basically deletes any irrelevant data from the database, in case any has been left in there from previously run functions. The module has instances of Manager classes - namely RangeManager, CollectionManager, DesignManager. There are others but I will just use these as an example. Each Manager class contains an array of items - items may be of type Range, Collection or Design, whichever one is relevant. Data for each range is then read into an instance of Range, Collection or Design. I know this is basically duplicating data - not very efficient - but it's my final year project at the moment, so I can always change it to use Linq or something similar later, when I am not pressured by the one-month deadline.
    I have a form which, on clicking the Save button, saves data by calling SiteContent.RangeManager.Create(vars) or SiteContent.RangeManager.Update(Range As Range, vars) (or the equivalent for other manager classes, whichever one happens to be relevant). These functions call a stored procedure to insert or update in the relevant table. Classes Range, Collection and Design all have attributes such as Name, Description, Display and several others. When the Create or Update function is called, the Manager loops through all the other items to check if an item with the same name already exists. The Update function ensures that it does not compare the item being updated to itself. A custom exception (ItemAlreadyExistsException) is thrown if another item with the same name is found.
    For some weird reason, if I go into a Range, Collection or Design in edit mode, change something and try to update it, it occasionally doesn't update the item. When I say occasionally I mean every 3 - 4 page loads, sometimes more. I see absolutely no pattern in when or why it occurs. I have a try-catch statement which catches ItemAlreadyExistsException, and outputs "An item with this name already exists" when caught. Occasionally it will output this; other times it will not. Does anyone have any idea why this could happen? Maybe a mistake which someone has made and solved before?
    I used to have regular expressions in place that the names were compared to - I believe it was [a-zA-Z]{1, 100} (between 1 and 100 lower- or upper-case characters). For some reason the customer who I am developing the site for used to get errors saying it's not in the correct format. Yet he could try the same text 5 minutes later and it would work fine. I am thinking this could well be the same problem, since both problems occur at random. Many thanks in advance. Regards, Richard Clarke
    Edit: After much time spent narrowing down the code, I have decided to wait for my brother, who has been a programmer for at least 8 years longer than I have, to come down over Easter and have a look at it. If he can't solve it then I will zip the files up and put them somewhere for people to access and have a go at. I narrowed it down literally to the minimum number of files possible, and it still occurs. It seems to be about every 10th time. Having said that, I force the manager classes to refresh every 10 page loads or 5 minutes (whichever one is sooner). I may look into this - this could be causing the problem. Basically each Manager contains an array of an object. This array is populated using data from the database. The Update function takes an instance of the item and the new values to be set for the object. If it happens to be a page load where the array is reset (i.e. the data is loaded freshly from the database) then the object instance with the same ID won't be the same instance as the one being passed in. This explains the fact that it throws an ItemAlreadyExistsException now and then. It all makes sense now the more I think about it. If I were to pass in the ID of the object to be altered, rather than the object itself, then it should work perfectly. I will answer the question if I solve it.
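    A minimal C# sketch of that ID-based update is below. The type and member names (Range, RangeManager, ItemAlreadyExistsException) follow the question, but everything else is illustrative guesswork rather than the original VB.NET code.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class Range
        {
            public int ID { get; set; }
            public string Name { get; set; }
            public string Description { get; set; }
        }

        public class ItemAlreadyExistsException : Exception
        {
            public ItemAlreadyExistsException(string name)
                : base("An item with this name already exists: " + name) { }
        }

        public class RangeManager
        {
            // Re-populated from the database every few page loads, so any object
            // references cached by earlier requests become stale.
            private List<Range> items = new List<Range>();

            // Look the target up by ID instead of comparing object references, so a
            // refresh between page loads cannot break the "is this the same item?" check.
            public void Update(int id, string newName, string newDescription)
            {
                bool duplicate = items.Any(r => r.ID != id &&
                    string.Equals(r.Name, newName, StringComparison.OrdinalIgnoreCase));
                if (duplicate)
                    throw new ItemAlreadyExistsException(newName);

                Range target = items.Single(r => r.ID == id);
                target.Name = newName;
                target.Description = newDescription;
                // ...call the stored procedure here to persist the change...
            }
        }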

    Read the article

  • TFS API Change WorkItem CreatedDate And ChangedDate To Historic Dates

    - by Tarun Arora
    There may be times when you need to modify the value of the fields “System.CreatedDate” and “System.ChangedDate” on a work item. Richard Hundhausen has a great blog with ample of reason why or why not you should need to set the values of these fields to historic dates. In this blog post I’ll show you, Create a PBI WorkItem linked to a Task work item by pre-setting the value of the field ‘System.ChangedDate’ to a historic date Change the value of the field ‘System.Created’ to a historic date Simulate the historic burn down of a task type work item in a sprint Explain the impact of updating values of the fields CreatedDate and ChangedDate on the Sprint burn down chart Rules of Play      1. You need to be a member of the Project Collection Service Accounts              2. You need to use ‘WorkItemStoreFlags.BypassRules’ when you instantiate the WorkItemStore service // Instanciate Work Item Store with the ByPassRules flag _wis = new WorkItemStore(_tfs, WorkItemStoreFlags.BypassRules);      3. You cannot set the ChangedDate         - Less than the changed date of previous revision         - Greater than current date Walkthrough The walkthrough contains 5 parts 00 – Required References 01 – Connect to TFS Programmatically 02 – Create a Work Item Programmatically 03 – Set the values of fields ‘System.ChangedDate’ and ‘System.CreatedDate’ to historic dates 04 – Results of our experiment Lets get started………………………………………………… 00 – Required References Microsoft.TeamFoundation.dll Microsoft.TeamFoundation.Client.dll Microsoft.TeamFoundation.Common.dll Microsoft.TeamFoundation.WorkItemTracking.Client.dll 01 – Connect to TFS Programmatically I have a in depth blog post on how to connect to TFS programmatically in case you are interested. However, the code snippet below will enable you to connect to TFS using the Team Project Picker. // Services I need access to globally private static TfsTeamProjectCollection _tfs; private static ProjectInfo _selectedTeamProject; private static WorkItemStore _wis; // Connect to TFS Using Team Project Picker public static bool ConnectToTfs() { var isSelected = false; // The user is allowed to select only one project var tfsPp = new TeamProjectPicker(TeamProjectPickerMode.SingleProject, false); tfsPp.ShowDialog(); // The TFS project collection _tfs = tfsPp.SelectedTeamProjectCollection; if (tfsPp.SelectedProjects.Any()) { // The selected Team Project _selectedTeamProject = tfsPp.SelectedProjects[0]; isSelected = true; } return isSelected; } 02 – Create a Work Item Programmatically In the below code snippet I have create a Product Backlog Item and a Task type work item and then link them together as parent and child. Note – You will have to set the ChangedDate to a historic date when you created the work item. Remember, If you try and set the ChangedDate to a value earlier than last assigned you will receive the following exception… TF26212: Team Foundation Server could not save your changes. There may be problems with the work item type definition. Try again or contact your Team Foundation Server administrator. If you notice below I have added a few seconds each time I have modified the ‘ChangedDate’ just to avoid running into the exception listed above. 
// Create Linked Work Items and return Ids private static List<int> CreateWorkItemsProgrammatically() { // Instantiate Work Item Store with the ByPassRules flag _wis = new WorkItemStore(_tfs, WorkItemStoreFlags.BypassRules); // List of work items to return var listOfWorkItems = new List<int>(); // Create a new Product Backlog Item var p = new WorkItem(_wis.Projects[_selectedTeamProject.Name].WorkItemTypes["Product Backlog Item"]); p.Title = "This is a new PBI"; p.Description = "Description"; p.IterationPath = string.Format("{0}\\Release 1\\Sprint 1", _selectedTeamProject.Name); p.AreaPath = _selectedTeamProject.Name; p["Effort"] = 10; // Just double checking that ByPassRules is set to true if (_wis.BypassRules) { p.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01"); } if (p.Validate().Count == 0) { p.Save(); listOfWorkItems.Add(p.Id); } else { Console.WriteLine(">> Following exception(s) encountered during work item save: "); foreach (var e in p.Validate()) { Console.WriteLine(" - '{0}' ", e); } } var t = new WorkItem(_wis.Projects[_selectedTeamProject.Name].WorkItemTypes["Task"]); t.Title = "This is a task"; t.Description = "Task Description"; t.IterationPath = string.Format("{0}\\Release 1\\Sprint 1", _selectedTeamProject.Name); t.AreaPath = _selectedTeamProject.Name; t["Remaining Work"] = 10; if (_wis.BypassRules) { t.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01"); } if (t.Validate().Count == 0) { t.Save(); listOfWorkItems.Add(t.Id); } else { Console.WriteLine(">> Following exception(s) encountered during work item save: "); foreach (var e in t.Validate()) { Console.WriteLine(" - '{0}' ", e); } } var linkTypEnd = _wis.WorkItemLinkTypes.LinkTypeEnds["Child"]; p.Links.Add(new WorkItemLink(linkTypEnd, t.Id) {ChangedDate = Convert.ToDateTime("2012-01-01").AddSeconds(20)}); if (_wis.BypassRules) { p.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01").AddSeconds(20); } if (p.Validate().Count == 0) { p.Save(); } else { Console.WriteLine(">> Following exception(s) encountered during work item save: "); foreach (var e in p.Validate()) { Console.WriteLine(" - '{0}' ", e); } } return listOfWorkItems; } 03 – Set the value of “Created Date” and Change the value of “Changed Date” to Historic Dates The CreatedDate can only be changed after a work item has been created. If you try and set the CreatedDate to a historic date at the time of creation of a work item, it will not work. 
// Lets do a work item effort burn down simulation by updating the ChangedDate & CreatedDate to historic Values private static void WorkItemChangeSimulation(IEnumerable<int> listOfWorkItems) { foreach (var id in listOfWorkItems) { var wi = _wis.GetWorkItem(id); switch (wi.Type.Name) { case "ProductBacklogItem": if (wi.State.ToLower() == "new") wi.State = "Approved"; // Advance the changed date by few seconds wi.Fields["System.ChangedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10); // Set the CreatedDate to Changed Date wi.Fields["System.CreatedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10); wi.Save(); break; case "Task": // Advance the changed date by few seconds wi.Fields["System.ChangedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10); // Set the CreatedDate to Changed date wi.Fields["System.CreatedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10); wi.Save(); break; } } // A mock sprint start date var sprintStart = DateTime.Today.AddDays(-5); // A mock sprint end date var sprintEnd = DateTime.Today.AddDays(5); // What is the total Sprint duration var totalSprintDuration = (sprintEnd - sprintStart).Days; // How much of the sprint have we already covered var noOfDaysIntoSprint = (DateTime.Today - sprintStart).Days; // Get the effort assigned to our tasks var totalEffortRemaining = QueryTaskTotalEfforRemaining(listOfWorkItems); // Defining how much effort to burn every day decimal dailyBurnRate = totalEffortRemaining / totalSprintDuration < 1 ? 1 : totalEffortRemaining / totalSprintDuration; // we have just created one task var totalNoOfTasks = 1; var simulation = sprintStart; var currentDate = DateTime.Today.Date; // Carry on till effort has been burned down from sprint start to today while (simulation.Date != currentDate.Date) { var dailyBurnRate1 = dailyBurnRate; // A fixed amount needs to be burned down each day while (dailyBurnRate1 > 0) { // burn down bit by bit from all unfinished task type work items foreach (var id in listOfWorkItems) { var wi = _wis.GetWorkItem(id); var isDirty = false; // Set the status to in progress if (wi.State.ToLower() == "to do") { wi.State = "In Progress"; isDirty = true; } // Ensure that there is enough effort remaining in tasks to burn down the daily burn rate if (QueryTaskTotalEfforRemaining(listOfWorkItems) > dailyBurnRate1) { // If there is less than 1 unit of effort left in the task, burn it all if (Convert.ToDecimal(wi["Remaining Work"]) <= 1) { wi["Remaining Work"] = 0; dailyBurnRate1 = dailyBurnRate1 - Convert.ToDecimal(wi["Remaining Work"]); isDirty = true; } else { // How much to burn from each task? var toBurn = (dailyBurnRate / totalNoOfTasks) < 1 ? 
1 : (dailyBurnRate / totalNoOfTasks); // Check that the task has enough effort to allow burnForTask effort if (Convert.ToDecimal(wi["Remaining Work"]) >= toBurn) { wi["Remaining Work"] = Convert.ToDecimal(wi["Remaining Work"]) - toBurn; dailyBurnRate1 = dailyBurnRate1 - toBurn; isDirty = true; } else { wi["Remaining Work"] = 0; dailyBurnRate1 = dailyBurnRate1 - Convert.ToDecimal(wi["Remaining Work"]); isDirty = true; } } } else { dailyBurnRate1 = 0; } if (isDirty) { if (Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).Date == simulation.Date) { wi.Fields["System.ChangedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(20); } else { wi.Fields["System.ChangedDate"].Value = simulation.AddSeconds(20); } wi.Save(); } } } // Increase date by 1 to perform daily burn down by day simulation = Convert.ToDateTime(simulation).AddDays(1); } } // Get the Total effort remaining in the current sprint private static decimal QueryTaskTotalEfforRemaining(List<int> listOfWorkItems) { var unfinishedWorkInCurrentSprint = _wis.GetQueryDefinition( new Guid(QueryAndGuid.FirstOrDefault(c => c.Key == "Unfinished Work").Value)); var parameters = new Dictionary<string, object> { { "project", _selectedTeamProject.Name } }; var q = new Query(_wis, unfinishedWorkInCurrentSprint.QueryText, parameters); var results = q.RunLinkQuery(); var wis = new List<WorkItem>(); foreach (var result in results) { var _wi = _wis.GetWorkItem(result.TargetId); if (_wi.Type.Name == "Task" && listOfWorkItems.Contains(_wi.Id)) wis.Add(_wi); } return wis.Sum(r => Convert.ToDecimal(r["Remaining Work"])); }   04 – The Results If you are still reading, the results are beautiful! Image 1 – Create work item with Changed Date pre-set to historic date Image 2 – Set the CreatedDate to historic date (Same as the ChangedDate) Image 3 – Simulate of effort burn down on a task via the TFS API   Image 4 – The history of changes on the Task. So, essentially this task has burned 1 hour per day Sprint Burn Down Chart – What’s not possible? The Sprint burn down chart is calculated from the System.AuthorizedDate and not the System.ChangedDate/System.CreatedDate. So, though you can change the System.ChangedDate and System.CreatedDate to historic dates you will not be able to synthesize the sprint burn down chart. Image 1 – By changing the Created Date and Changed Date to ‘18/Oct/2012’ you would have expected the burn down to have been impacted, but it won’t be, because the sprint burn down chart uses the value of field ‘System.AuthorizedDate’ to calculate the unfinished work points. The AsOf queries that are used to calculate the unfinished work points use the value of the field ‘System.AuthorizedDate’. Image 2 – Using the above code I burned down 1 hour effort per day over 5 days from the task work item, I would have expected the sprint burn down to show a constant burn down, instead the burn down shows the effort exhausted on the 24th itself. Simply because the burn down is calculated using the ‘System.AuthorizedDate’. Now you would ask… “Can I change the value of the field System.AuthorizedDate to a historic date” Unfortunately that’s not possible! 
You will run into the exception ValidationException – “TF26194: The value for field ‘Authorized Date’ cannot be changed.”
Conclusion
- You need to be a member of the Project Collection Service Accounts group in order to set the fields ‘System.ChangedDate’ and ‘System.CreatedDate’ to historic dates.
- You need to instantiate the WorkItemStore using the flag WorkItemStoreFlags.BypassRules.
- The System.ChangedDate needs to be set to a historic date at the time of work item creation. You cannot reset the ChangedDate to a date earlier than the existing ChangedDate, and you cannot reset the ChangedDate to a date greater than the current date and time.
- The System.CreatedDate can only be reset after a work item has been created. You cannot set the CreatedDate at the time of work item creation. The CreatedDate cannot be greater than the current date. You can, however, reset the CreatedDate to a date earlier than the existing value.
- You will not be able to synthesize the Sprint burn down chart by changing the value of System.ChangedDate and System.CreatedDate to historic dates, since the burn down chart uses AsOf queries to calculate the unfinished work points, and those internally use the System.AuthorizedDate and NOT the System.ChangedDate & System.CreatedDate.
- System.AuthorizedDate cannot be set to a historic date using the TFS API.
Read other posts on using the TFS API here… Enjoy!

    Read the article

  • Selling Federal Enterprise Architecture (EA)

    - by TedMcLaughlan
    Selling Federal Enterprise Architecture A taxonomy of subject areas, from which to develop a prioritized marketing and communications plan to evangelize EA activities within and among US Federal Government organizations and constituents. Any and all feedback is appreciated, particularly in developing and extending this discussion as a tool for use – more information and details are also available. "Selling" the discipline of Enterprise Architecture (EA) in the Federal Government (particularly in non-DoD agencies) is difficult, notwithstanding the general availability and use of the Federal Enterprise Architecture Framework (FEAF) for some time now, and the relatively mature use of the reference models in the OMB Capital Planning and Investment (CPIC) cycles. EA in the Federal Government also tends to be a very esoteric and hard to decipher conversation – early apologies to those who agree to continue reading this somewhat lengthy article. Alignment to the FEAF and OMB compliance mandates is long underway across the Federal Departments and Agencies (and visible via tools like PortfolioStat and ITDashboard.gov – but there is still a gap between the top-down compliance directives and enablement programs, and the bottom-up awareness and effective use of EA for either IT investment management or actual mission effectiveness. "EA isn't getting deep enough penetration into programs, components, sub-agencies, etc.", verified a panelist at the most recent EA Government Conference in DC. Newer guidance from OMB may be especially difficult to handle, where bottom-up input can't be accurately aligned, analyzed and reported via standardized EA discipline at the Agency level – for example in addressing the new (for FY13) Exhibit 53D "Agency IT Reductions and Reinvestments" and the information required for "Cloud Computing Alternatives Evaluation" (supporting the new Exhibit 53C, "Agency Cloud Computing Portfolio"). Therefore, EA must be "sold" directly to the communities that matter, from a coordinated, proactive messaging perspective that takes BOTH the Program-level value drivers AND the broader Agency mission and IT maturity context into consideration. Selling EA means persuading others to take additional time and possibly assign additional resources, for a mix of direct and indirect benefits – many of which aren't likely to be realized in the short-term. This means there's probably little current, allocated budget to work with; ergo the challenge of trying to sell an "unfunded mandate". Also, the concept of "Enterprise" in large Departments like Homeland Security tends to cross all kinds of organizational boundaries – as Richard Spires recently indicated by commenting that "...organizational boundaries still trump functional similarities. Most people understand what we're trying to do internally, and at a high level they get it. The problem, of course, is when you get down to them and their system and the fact that you're going to be touching them...there's always that fear factor," Spires said. It is quite clear to the Federal IT Investment community that for EA to meet its objective, understandable, relevant value must be measured and reported using a repeatable method – as described by GAO's recent report "Enterprise Architecture Value Needs To Be Measured and Reported". What's not clear is the method or guidance to sell this value. In fact, the current GAO "Framework for Assessing and Improving Enterprise Architecture Management (Version 2.0)", a.k.a. 
the "EAMMF", does not include words like "sell", "persuade", "market", etc., except in reference ("within Core Element 19: Organization business owner and CXO representatives are actively engaged in architecture development") to a brief section in the CIO Council's 2001 "Practical Guide to Federal Enterprise Architecture", entitled "3.3.1. Develop an EA Marketing Strategy and Communications Plan." Furthermore, Core Element 19 of the EAMMF is advised to be applied in "Stage 3: Developing Initial EA Versions". This kind of EA sales campaign truly should start much earlier in the maturity progress, i.e. in Stages 0 or 1. So, what are the understandable, relevant benefits (or value) to sell, that can find an agreeable, participatory audience, and can pave the way towards success of a longer-term, funded set of EA mechanisms that can be methodically measured and reported? Pragmatic benefits from a useful EA that can help overcome the fear of change? And how should they be sold? Following is a brief taxonomy (it's a taxonomy, to help organize SME support) of benefit-related subjects that might make the most sense, in creating the messages and organizing an initial "engagement plan" for evangelizing EA "from within". An EA "Sales Taxonomy" of sorts. We're not boiling the ocean here; the subjects that are included are ones that currently appear to be urgently relevant to the current Federal IT Investment landscape. Note that successful dialogue in these topics is directly usable as input or guidance for actually developing early-stage, "Fit-for-Purpose" (a DoDAF term) Enterprise Architecture artifacts, as prescribed by common methods found in most EA methodologies, including FEAF, TOGAF, DoDAF and our own Oracle Enterprise Architecture Framework (OEAF). The taxonomy below is organized by (1) Target Community, (2) Benefit or Value, and (3) EA Program Facet - as in: "Let's talk to (1: Community Member) about how and why (3: EA Facet) the EA program can help with (2: Benefit/Value)". Once the initial discussion targets and subjects are approved (that can be measured and reported), a "marketing and communications plan" can be created. A working example follows the Taxonomy. Enterprise Architecture Sales Taxonomy Draft, Summary Version 1. Community 1.1. Budgeted Programs or Portfolios Communities of Purpose (CoPR) 1.1.1. Program/System Owners (Senior Execs) Creating or Executing Acquisition Plans 1.1.2. Program/System Owners Facing Strategic Change 1.1.2.1. Mandated 1.1.2.2. Expected/Anticipated 1.1.3. Program Managers - Creating Employee Performance Plans 1.1.4. CO/COTRs – Creating Contractor Performance Plans, or evaluating Value Engineering Change Proposals (VECP) 1.2. Governance & Communications Communities of Practice (CoP) 1.2.1. Policy Owners 1.2.1.1. OCFO 1.2.1.1.1. Budget/Procurement Office 1.2.1.1.2. Strategic Planning 1.2.1.2. OCIO 1.2.1.2.1. IT Management 1.2.1.2.2. IT Operations 1.2.1.2.3. Information Assurance (Cyber Security) 1.2.1.2.4. IT Innovation 1.2.1.3. Information-Sharing/ Process Collaboration (i.e. policies and procedures regarding Partners, Agreements) 1.2.2. Governing IT Council/SME Peers (i.e. an "Architects Council") 1.2.2.1. Enterprise Architects (assumes others exist; also assumes EA participants aren't buried solely within the CIO shop) 1.2.2.2. Domain, Enclave, Segment Architects – i.e. the right affinity group for a "shared services" EA structure (per the EAMMF), which may be classified as Federated, Segmented, Service-Oriented, or Extended 1.2.2.3. 
External Oversight/Constraints 1.2.2.3.1. GAO/OIG & Legal 1.2.2.3.2. Industry Standards 1.2.2.3.3. Official public notification, response 1.2.3. Mission Constituents Participant & Analyst Community of Interest (CoI) 1.2.3.1. Mission Operators/Users 1.2.3.2. Public Constituents 1.2.3.3. Industry Advisory Groups, Stakeholders 1.2.3.4. Media 2. Benefit/Value (Note the actual benefits may not be discretely attributable to EA alone; EA is a very collaborative, cross-cutting discipline.) 2.1. Program Costs – EA enables sound decisions regarding... 2.1.1. Cost Avoidance – a TCO theme 2.1.2. Sequencing – alignment of capability delivery 2.1.3. Budget Instability – a Federal reality 2.2. Investment Capital – EA illuminates new investment resources via... 2.2.1. Value Engineering – contractor-driven cost savings on existing budgets, direct or collateral 2.2.2. Reuse – reuse of investments between programs can result in savings, chargeback models; avoiding duplication 2.2.3. License Refactoring – IT license & support models may not reflect actual or intended usage 2.3. Contextual Knowledge – EA enables informed decisions by revealing... 2.3.1. Common Operating Picture (COP) – i.e. cross-program impacts and synergy, relative to context 2.3.2. Expertise & Skill – who truly should be involved in architectural decisions, both business and IT 2.3.3. Influence – the impact of politics and relationships can be examined 2.3.4. Disruptive Technologies – new technologies may reduce costs or mitigate risk in unanticipated ways 2.3.5. What-If Scenarios – can become much more refined, current, verifiable; basis for Target Architectures 2.4. Mission Performance – EA enables beneficial decision results regarding... 2.4.1. IT Performance and Optimization – towards 100% effective, available resource utilization 2.4.2. IT Stability – towards 100%, real-time uptime 2.4.3. Agility – responding to rapid changes in mission 2.4.4. Outcomes –measures of mission success, KPIs – vs. only "Outputs" 2.4.5. Constraints – appropriate response to constraints 2.4.6. Personnel Performance – better line-of-sight through performance plans to mission outcome 2.5. Mission Risk Mitigation – EA mitigates decision risks in terms of... 2.5.1. Compliance – all the right boxes are checked 2.5.2. Dependencies –cross-agency, segment, government 2.5.3. Transparency – risks, impact and resource utilization are illuminated quickly, comprehensively 2.5.4. Threats and Vulnerabilities – current, realistic awareness and profiles 2.5.5. Consequences – realization of risk can be mapped as a series of consequences, from earlier decisions or new decisions required for current issues 2.5.5.1. Unanticipated – illuminating signals of future or non-symmetric risk; helping to "future-proof" 2.5.5.2. Anticipated – discovering the level of impact that matters 3. EA Program Facet (What parts of the EA can and should be communicated, using business or mission terms?) 3.1. Architecture Models – the visual tools to be created and used 3.1.1. Operating Architecture – the Business Operating Model/Architecture elements of the EA truly drive all other elements, plus expose communication channels 3.1.2. Use Of – how can the EA models be used, and how are they populated, from a reasonable, pragmatic yet compliant perspective? What are the core/minimal models required? What's the relationship of these models, with existing system models? 3.1.3. 
Scope – what level of granularity within the models, and what level of abstraction across the models, is likely to be most effective and useful? 3.2. Traceability – the maturity, status, completeness of the tools 3.2.1. Status – what in fact is the degree of maturity across the integrated EA model and other relevant governance models, and who may already be benefiting from it? 3.2.2. Visibility – how does the EA visibly and effectively prove IT investment performance goals are being reached, with positive mission outcome? 3.3. Governance – what's the interaction, participation method; how are the tools used? 3.3.1. Contributions – how is the EA program informed, accept submissions, collect data? Who are the experts? 3.3.2. Review – how is the EA validated, against what criteria?  Taxonomy Usage Example:   1. To speak with: a. ...a particular set of System Owners Facing Strategic Change, via mandate (like the "Cloud First" mandate); about... b. ...how the EA program's visible and easily accessible Infrastructure Reference Model (i.e. "IRM" or "TRM"), if updated more completely with current system data, can... c. ...help shed light on ways to mitigate risks and avoid future costs associated with NOT leveraging potentially-available shared services across the enterprise... 2. ....the following Marketing & Communications (Sales) Plan can be constructed: a. Create an easy-to-read "Consequence Model" that illustrates how adoption of a cloud capability (like elastic operational storage) can enable rapid and durable compliance with the mandate – using EA traceability. Traceability might be from the IRM to the ARM (that identifies reusable services invoking the elastic storage), and then to the PRM with performance measures (such as % utilization of purchased storage allocation) included in the OMB Exhibits; and b. Schedule a meeting with the Program Owners, timed during their Acquisition Strategy meetings in response to the mandate, to use the "Consequence Model" for advising them to organize a rapid and relevant RFI solicitation for this cloud capability (regarding alternatives for sourcing elastic operational storage); and c. Schedule a series of short "Discovery" meetings with the system architecture leads (as agreed by the Program Owners), to further populate/validate the "As-Is" models and frame the "To Be" models (via scenarios), to better inform the RFI, obtain the best feedback from the vendor community, and provide potential value for and avoid impact to all other programs and systems. --end example -- Note that communications with the intended audience should take a page out of the standard "Search Engine Optimization" (SEO) playbook, using keywords and phrases relating to "value" and "outcome" vs. "compliance" and "output". Searches in email boxes, internal and external search engines for phrases like "cost avoidance strategies", "mission performance metrics" and "innovation funding" should yield messages and content from the EA team. This targeted, informed, practical sales approach should result in additional buy-in and participation, additional EA information contribution and model validation, development of more SMEs and quick "proof points" (with real-life testing) to bolster the case for EA. The proof point here is a successful, timely procurement that satisfies not only the external mandate and external oversight review, but also meets internal EA compliance/conformance goals and therefore is more transparently useful across the community. 
In short, if sold effectively, the EA will perform and be recognized. EA won’t therefore be used only for compliance, but also (according to a validated, stated purpose) to directly influence decisions and outcomes. The opinions, views and analysis expressed in this document are those of the author and do not necessarily reflect the views of Oracle.

    Read the article

  • Server compromised. Bounce message contains many email addresses message was not sent to

    - by Tim Duncklee
    This is not a dupe. Please read and understand the issue before marking this as a duplicate question that has been answered already. Several customers are reporting bounce messages like the one below. At first I thought their computers had a virus but then I received one that was server generated so the problem is with the server. I've inspected the logs and these email addresses do not appear in the logs. The only thing I see that I do not remember seeing in the past are entries like this: Apr 30 13:34:49 psa86 qmail-queue-handlers[20994]: hook_dir = '/var/qmail//handlers/before-queue' Apr 30 13:34:49 psa86 qmail-queue-handlers[20994]: recipient[3] = '[email protected]' Apr 30 13:34:49 psa86 qmail-queue-handlers[20994]: handlers dir = '/var/qmail//handlers/before-queue/recipient/[email protected]' I've searched here and the web and maybe I'm just not entering the right search terms but I find nothing on this issue. Does anyone know how a hacker would attach additional email addresses to a message at the server and have them not appear in the logs? CentOS release 5.4, Plesk 8.6, QMail 1.03 Hi. This is the qmail-send program at psa.aaaaaa.com. I'm afraid I wasn't able to deliver your message to the following addresses. This is a permanent error; I've given up. Sorry it didn't work out. <[email protected]>: 82.201.133.227 does not like recipient. Remote host said: 550 #5.1.0 Address rejected. Giving up on 82.201.133.227. <[email protected]>: 64.18.7.10 does not like recipient. Remote host said: 550 No such user - psmtp Giving up on 64.18.7.10. <[email protected]>: 173.194.68.27 does not like recipient. Remote host said: 550-5.1.1 The email account that you tried to reach does not exist. Please try 550-5.1.1 double-checking the recipient's email address for typos or 550-5.1.1 unnecessary spaces. Learn more at 550 5.1.1 http://support.google.com/mail/bin/answer.py?answer=6596 w8si1903qag.18 - gsmtp Giving up on 173.194.68.27. <[email protected]>: 207.115.36.23 does not like recipient. Remote host said: 550 5.2.1 <[email protected]>... Addressee unknown, relay=[174.142.62.210] Giving up on 207.115.36.23. <[email protected]>: 207.115.37.22 does not like recipient. Remote host said: 550 5.2.1 <[email protected]>... Addressee unknown, relay=[174.142.62.210] Giving up on 207.115.37.22. <[email protected]>: 207.115.37.20 does not like recipient. Remote host said: 550 5.2.1 <[email protected]>... Addressee unknown, relay=[174.142.62.210] Giving up on 207.115.37.20. <[email protected]>: 207.115.37.23 does not like recipient. Remote host said: 550 5.2.1 <[email protected]>... Addressee unknown, relay=[174.142.62.210] Giving up on 207.115.37.23. <[email protected]>: 207.115.36.22 does not like recipient. Remote host said: 550 5.2.1 <[email protected]>... Addressee unknown, relay=[174.142.62.210] Giving up on 207.115.36.22. <[email protected]>: 74.205.16.140 does not like recipient. Remote host said: 553 sorry, that domain isn't in my list of allowed rcpthosts; no valid cert for gatewaying (#5.7.1) Giving up on 74.205.16.140. <[email protected]>: 207.115.36.20 does not like recipient. Remote host said: 550 5.2.1 <[email protected]>... Addressee unknown, relay=[174.142.62.210] Giving up on 207.115.36.20. <[email protected]>: 207.115.37.21 does not like recipient. Remote host said: 550 5.2.1 <[email protected]>... Addressee unknown, relay=[174.142.62.210] Giving up on 207.115.37.21. <[email protected]>: 192.169.41.23 failed after I sent the message. 
Remote host said: 554 qq Sorry, no valid recipients (#5.1.3) --- Below this line is a copy of the message. Return-Path: <[email protected]> Received: (qmail 15962 invoked from network); 1 May 2013 06:49:34 -0400 Received: from exprod6mo107.postini.com (64.18.1.18) by psa.aaaaaa.com with (DHE-RSA-AES256-SHA encrypted) SMTP; 1 May 2013 06:49:34 -0400 Received: from aaaaaa.com (exprod6lut001.postini.com [64.18.1.199]) by exprod6mo107.postini.com (Postfix) with SMTP id 47F80B8CA4 for <[email protected]>; Wed, 1 May 2013 03:49:33 -0700 (PDT) From: "Support" <[email protected]> To: [email protected] Subject: Detected Potential Junk Mail Date: Wed, 1 May 2013 03:49:33 -0700 Dear [email protected], junk mail protection service has detected suspicious email message(s) since your last visit and directed them to your Message Center. You can inspect your suspicious email at: ... UPDATE: After not seeing this problem for a while, I personally sent a message and immediately got a bounce with several bad addresses that I know I did not send to. These are addresses that are not on my system or on the server. This problem happens with both Mac and Windows clients and with messages generated from Postini and sent to users on my system. This is NOT backscatter. If it was backscatter it would not have the contents of my message in it. UPDATE #2 Here is another bounce. This one was sent by me and the bounce came back immediately. Hi. This is the qmail-send program at psa.aaaaaa.com. I'm afraid I wasn't able to deliver your message to the following addresses. This is a permanent error; I've given up. Sorry it didn't work out. <[email protected]>: 71.74.56.227 does not like recipient. Remote host said: 550 5.1.1 <[email protected]>... User unknown Giving up on 71.74.56.227. <[email protected]>: Connected to 208.34.236.3 but sender was rejected. Remote host said: 550 5.7.1 This system is configured to reject mail from 174.142.62.210 [174.142.62.210] (Host blacklisted - Found on Realtime Black List server 'bl.mailspike.net') <[email protected]>: 66.96.80.22 failed after I sent the message. Remote host said: 552 sorry, mailbox [email protected] is over quota temporarily (#5.1.1) <[email protected]>: 83.145.109.52 does not like recipient. Remote host said: 550 5.1.1 <[email protected]>: Recipient address rejected: User unknown in virtual mailbox table Giving up on 83.145.109.52. <[email protected]>: 69.49.101.234 does not like recipient. Remote host said: 550 5.7.1 <[email protected]>... H:M12 [174.142.62.210] Connection refused due to abuse. Please see http://mailspike.org/anubis/lookup.html or contact your E-mail provider. Giving up on 69.49.101.234. <[email protected]>: 212.55.154.36 does not like recipient. Remote host said: 550-The account has been suspended for inactivity 550 A conta do destinatario encontra-se suspensa por inactividade (#5.2.1) Giving up on 212.55.154.36. <[email protected]>: 199.168.90.102 failed after I sent the message. Remote host said: 552 Transaction [email protected] failed, remote said "550 No such user" (#5.1.1) <[email protected]>: 98.136.217.192 failed after I sent the message. Remote host said: 554 delivery error: dd Sorry your message to [email protected] cannot be delivered. This account has been disabled or discontinued [#102]. - mta1210.sbc.mail.gq1.yahoo.com --- Below this line is a copy of the message. 
Return-Path: <[email protected]> Received: (qmail 2618 invoked from network); 2 Jun 2013 22:32:51 -0400 Received: from 75-138-254-239.dhcp.jcsn.tn.charter.com (HELO ?192.168.0.66?) (75.138.254.239) by psa.aaaaaa.com with SMTP; 2 Jun 2013 22:32:48 -0400 User-Agent: Microsoft-Entourage/12.34.0.120813 Date: Sun, 02 Jun 2013 21:32:39 -0500 Subject: Refinance From: Tim Duncklee <[email protected]> To: Scott jones <[email protected]> Message-ID: <CDD16A79.67344%[email protected]> Thread-Topic: Reference Thread-Index: Ac5gAp2QmTs+LRv0SEOy7AJTX2DWzQ== Mime-version: 1.0 Content-type: multipart/mixed; boundary="B_3453053568_12034440" > This message is in MIME format. Since your mail reader does not understand this format, some or all of this message may not be legible. --B_3453053568_12034440 Content-type: multipart/related; boundary="B_3453053568_11982218" --B_3453053568_11982218 Content-type: multipart/alternative; boundary="B_3453053568_12000660" --B_3453053568_12000660 Content-type: text/plain; charset="ISO-8859-1" Content-transfer-encoding: quoted-printable Scott, ... email body here ... Here are the relevant log entries: Jun 2 22:32:50 psa qmail-queue[2616]: mail: all addreses are uncheckable - need to skip scanning (by deny mode) Jun 2 22:32:50 psa qmail-queue[2616]: scan: the message(drweb.tmp.i2SY0n) sent by [email protected] to [email protected] should be passed without checks, because contains uncheckable addresses Jun 2 22:32:50 psa qmail-queue-handlers[2617]: Handlers Filter before-queue for qmail started ... Jun 2 22:32:50 psa qmail-queue-handlers[2617]: [email protected] Jun 2 22:32:50 psa qmail-queue-handlers[2617]: [email protected] Jun 2 22:32:50 psa qmail-queue-handlers[2617]: hook_dir = '/var/qmail//handlers/before-queue' Jun 2 22:32:50 psa qmail-queue-handlers[2617]: recipient[3] = '[email protected]' Jun 2 22:32:50 psa qmail-queue-handlers[2617]: handlers dir = '/var/qmail//handlers/before-queue/recipient/[email protected]' Jun 2 22:32:51 psa qmail: 1370226771.060211 starting delivery 57: msg 1540285 to remote [email protected] Jun 2 22:32:51 psa qmail: 1370226771.060402 status: local 0/10 remote 1/20 Jun 2 22:32:51 psa qmail: 1370226771.060556 new msg 4915232 Jun 2 22:32:51 psa qmail: 1370226771.060671 info msg 4915232: bytes 687899 from <[email protected]> qp 2618 uid 2020 Jun 2 22:32:51 psa qmail-remote-handlers[2619]: Handlers Filter before-remote for qmail started ... Jun 2 22:32:51 psa qmail-queue-handlers[2617]: starter: submitter[2618] exited normally Jun 2 22:32:51 psa qmail-remote-handlers[2619]: from= Jun 2 22:32:51 psa qmail-remote-handlers[2619]: [email protected] Jun 2 22:32:51 psa qmail: 1370226771.078732 starting delivery 58: msg 4915232 to remote [email protected] Jun 2 22:32:51 psa qmail: 1370226771.078825 status: local 0/10 remote 2/20 Jun 2 22:32:51 psa qmail-remote-handlers[2621]: Handlers Filter before-remote for qmail started ... Jun 2 22:32:51 psa qmail-remote-handlers[2621]: [email protected] Jun 2 22:32:51 psa qmail-remote-handlers[2621]: [email protected]

    Read the article

  • How to get Passive FTP Working Through an Iptables Firewall?

    - by user1133248
    I have an iptables firewall running on a Fedora Linux server that is basically being used as a firewall router and OpenVPN server. That's it. We have been using the same iptables firewall code for YEARS. I did make some changes on 21 December to re-route a mySQL port, but given what has happened I've completely backed those changes out. Sometime after those changes were made and backed out passive FTP, served from a vsftpd process, stopped working. We use a passive ftp client to FLING (that's the name of the ftp client running under Windows! :-) ) images from our remote telescopes to our server. I believe it is something in the firewall code because I can drop the firewall and the FTP file transfer (and connecting to the ftp site with Internet Explorer to see the file list) works. When I raise the iptables firewall, it stops working. Again, this is code that we'd been using for years. However, I felt that maybe there was something I missed, so we had a .bak file from 2009 that I used. Same behavior, passive ftp does not work. So, I went and rebuilt the firewall code line by line to see what line was causing the problem. Everything worked until I put the line -A FORWARD -j DROP in very near the end. Of course, if I am correct, this is the line that basically "turns on" the firewall, saying drop everything except for the exceptions I've made above. However, this line has been in the iptables code probably since 2003. So, I'm at the end of my rope, and I still can't figure out why this has stopped working. I guess I need an expert on iptables configuration. Here is the iptables code (from iptables-save) with comments. # Generated by iptables-save v1.3.8 on Thu Jan 5 18:36:25 2012 *nat # One of the things that I remain ignorant about is what these following three lines # do in both the nat tables (which we're not using on this machine) and the following # filter table. I don't know what the numbers are, but I'm ASSUMING they're port # ranges. # :PREROUTING ACCEPT [7435:551429] :POSTROUTING ACCEPT [6097:354458] :OUTPUT ACCEPT [5:451] COMMIT # Completed on Thu Jan 5 18:36:25 2012 # Generated by iptables-save v1.3.8 on Thu Jan 5 18:36:25 2012 *filter :INPUT ACCEPT [10423:1046501] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [15184:16948770] # The following line is for my OpenVPN configuration. -A INPUT -i tun+ -j ACCEPT # In researching this on the Internet I found some iptables code that was supposed to # open the needed ports up. I never needed this before this week, but since passive FTP # was no longer working, I decided to put the code in. The next three lines are part of # that code. -A INPUT -p tcp -m tcp --dport 21 -m state --state NEW,ESTABLISHED -j ACCEPT -A INPUT -p tcp -m tcp --sport 1024:65535 --dport 20 -m state --state ESTABLISHED -j ACCEPT -A INPUT -p tcp -m tcp --sport 1024:65535 --dport 1024:65535 -m state --state RELATED,ESTABLISHED -j ACCEPT # Another line for the OpenVPN configuration. I don't know why the iptables-save mixed # the lines up. 
-A FORWARD -i tun+ -j ACCEPT # Various forwards for all our services -A FORWARD -s 65.118.148.197 -p tcp -m tcp --dport 3307 -j ACCEPT -A FORWARD -d 65.118.148.197 -p tcp -m tcp --dport 3307 -j ACCEPT -A FORWARD -s 65.118.148.197 -p tcp -m tcp --dport 3306 -j ACCEPT -A FORWARD -d 65.118.148.197 -p tcp -m tcp --dport 3306 -j ACCEPT -A FORWARD -s 65.118.148.196 -p tcp -m tcp --dport 21 -j ACCEPT -A FORWARD -d 65.118.148.196 -p tcp -m tcp --dport 21 -j ACCEPT -A FORWARD -d 65.118.148.196 -p tcp -m tcp --dport 20 -j ACCEPT -A FORWARD -s 65.118.148.196 -p tcp -m tcp --dport 20 -j ACCEPT -A FORWARD -s 65.118.148.196 -p tcp -m tcp --dport 7191 -j ACCEPT -A FORWARD -d 65.118.148.196 -p tcp -m tcp --dport 7191 -j ACCEPT -A FORWARD -s 65.118.148.196 -p tcp -m tcp --dport 46000:46999 -j ACCEPT -A FORWARD -d 65.118.148.196 -p tcp -m tcp --dport 46000:46999 -j ACCEPT -A FORWARD -s 65.118.148.0/255.255.255.0 -j ACCEPT -A FORWARD -d 65.118.148.196 -p udp -m udp --dport 53 -j ACCEPT -A FORWARD -s 65.118.148.196 -p udp -m udp --dport 53 -j ACCEPT -A FORWARD -d 65.118.148.196 -p tcp -m tcp --dport 53 -j ACCEPT -A FORWARD -s 65.118.148.196 -p tcp -m tcp --dport 53 -j ACCEPT -A FORWARD -d 65.118.148.196 -p udp -m udp --dport 25 -j ACCEPT -A FORWARD -s 65.118.148.196 -p udp -m udp --dport 25 -j ACCEPT -A FORWARD -d 65.118.148.196 -p tcp -m tcp --dport 42 -j ACCEPT -A FORWARD -s 65.118.148.196 -p tcp -m tcp --dport 42 -j ACCEPT -A FORWARD -s 65.118.148.196 -p tcp -m tcp --dport 25 -j ACCEPT -A FORWARD -d 65.118.148.196 -p tcp -m tcp --dport 25 -j ACCEPT -A FORWARD -d 65.118.148.196 -p tcp -m tcp --dport 80 -j ACCEPT -A FORWARD -s 65.118.148.196 -p tcp -m tcp --dport 80 -j ACCEPT -A FORWARD -d 65.118.148.204 -p tcp -m tcp --dport 80 -j ACCEPT -A FORWARD -s 65.118.148.204 -p tcp -m tcp --dport 80 -j ACCEPT -A FORWARD -d 65.118.148.196 -p tcp -m tcp --dport 6667 -j ACCEPT -A FORWARD -s 65.118.148.196 -p tcp -m tcp --dport 6667 -j ACCEPT -A FORWARD -s 65.96.214.242 -p tcp -m tcp --dport 22 -j ACCEPT -A FORWARD -s 192.68.148.66 -p tcp -m tcp --dport 22 -j ACCEPT -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT # "The line" that causes passive ftp to stop working. Insofar as I can tell, everything # else seems to work - ssh, telnet, mysql, httpd. -A FORWARD -j DROP -A FORWARD -p icmp -j ACCEPT # The following code is again part of my attempt to put in code that would cause passive # ftp to work. I don't know why iptables-save scattered it about like this. -A OUTPUT -p tcp -m tcp --sport 21 -m state --state ESTABLISHED -j ACCEPT -A OUTPUT -p tcp -m tcp --sport 20 --dport 1024:65535 -m state --state RELATED,ESTABLISHED -j ACCEPT -A OUTPUT -p tcp -m tcp --sport 1024:65535 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT COMMIT # Completed on Thu Jan 5 18:36:25 2012 So, with all that prelude, my basic question is: How can I get passive ftp to work behind an iptables firewall? As you can see, I've tried to get it working (again) and tried to do some research on the issue, but have come up...short. Any answers would be appreciated by both me and various variable star astronomers around the world! THANKS! -Richard "Doc" Kinne, American Assoc. of Variable Star Observers, [email protected]
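    For anyone stuck on the same problem: the usual missing piece for passive FTP behind a stateful iptables firewall is the FTP connection-tracking helper. Without it the kernel cannot associate the dynamically negotiated data connection with the port 21 control connection, so the data connection never matches the RELATED,ESTABLISHED rules and falls through to the final DROP. The following is only a hedged sketch - module names vary with kernel version, and it assumes the existing state rules quoted above stay in place:

        # Load the FTP connection-tracking helper (and the NAT helper if this box
        # also NATs FTP traffic) so passive-mode data connections are marked RELATED.
        modprobe ip_conntrack_ftp    # on newer kernels the module is nf_conntrack_ftp
        modprobe ip_nat_ftp          # only needed when NAT is involved; newer: nf_nat_ftp

        # On Fedora/RHEL-style systems the iptables init script can load these
        # automatically via /etc/sysconfig/iptables-config, along these lines:
        #   IPTABLES_MODULES="ip_conntrack_ftp ip_nat_ftp"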

    Read the article

  • In Flex, how to drag a component into a column of DataGrid (not the whole DataGrid)?

    - by Yousui
    Hi guys, I have a custom component: <?xml version="1.0" encoding="utf-8"?> <s:Group xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" xmlns:mx="library://ns.adobe.com/flex/mx"> <fx:Declarations> </fx:Declarations> <fx:Script> <![CDATA[ [Bindable] public var label:String = "don't know"; [Bindable] public var imageName:String = "x.gif"; ]]> </fx:Script> <s:HGroup paddingLeft="8" paddingTop="8" paddingRight="8" paddingBottom="8"> <mx:Image id="img" source="assets/{imageName}" /> <s:Label text="{label}"/> </s:HGroup> </s:Group> and a custom render, which will be used in my DataGrid: <?xml version="1.0" encoding="utf-8"?> <s:MXDataGridItemRenderer xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" xmlns:mx="library://ns.adobe.com/flex/mx" focusEnabled="true" xmlns:components="components.*"> <s:VGroup> <components:Person label="{dataGridListData.label}"> </components:Person> </s:VGroup> </s:MXDataGridItemRenderer> This is my application: <?xml version="1.0" encoding="utf-8"?> <s:Application xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" xmlns:mx="library://ns.adobe.com/flex/mx" minWidth="955" minHeight="600" xmlns:services="services.*"> <s:layout> <s:VerticalLayout/> </s:layout> <fx:Script> <![CDATA[ import mx.collections.ArrayCollection; import mx.controls.Alert; import mx.controls.Image; import mx.rpc.events.ResultEvent; import mx.utils.ArrayUtil; ]]> </fx:Script> <fx:Declarations> <fx:XMLList id="employees"> <employee> <name>Christina Coenraets</name> <phone>555-219-2270</phone> <email>[email protected]</email> <active>true</active> <image>assets/001.png</image> </employee> <employee> <name>Joanne Wall</name> <phone>555-219-2012</phone> <email>[email protected]</email> <active>true</active> <image>assets/002.png</image> </employee> <employee> <name>Maurice Smith</name> <phone>555-219-2012</phone> <email>[email protected]</email> <active>false</active> <image>assets/003.png</image> </employee> <employee> <name>Mary Jones</name> <phone>555-219-2000</phone> <email>[email protected]</email> <active>true</active> <image>assets/004.png</image> </employee> </fx:XMLList> </fx:Declarations> <s:HGroup> <mx:DataGrid dataProvider="{employees}" width="100%" dropEnabled="true"> <mx:columns> <mx:DataGridColumn headerText="Employee Name" dataField="name"/> <mx:DataGridColumn headerText="Email" dataField="email"/> <mx:DataGridColumn headerText="Image" dataField="image" itemRenderer="renderers.render1"/> </mx:columns> </mx:DataGrid> <s:List dragEnabled="true" dragMoveEnabled="false"> <s:dataProvider> <s:ArrayCollection> <fx:String>aaa</fx:String> <fx:String>bbb</fx:String> <fx:String>ccc</fx:String> <fx:String>ddd</fx:String> </s:ArrayCollection> </s:dataProvider> </s:List> </s:HGroup> </s:Application> Now what I want to do is let the user drag an one or more item from the left List component and drop at the third column of the DataGrid, then using the dragged data to create another <components:Person /> object. So in the final result, maybe the first line contains just one <components:Person /> object at the third column, the second line contains two <components:Person /> object at the third column and so on. Can this be implemented in Flex? How? Great thanks.

    Read the article

  • Looking for a workaround for IE 6/7 "Unspecified Error" bug when accessing offsetParent; using ASP.N

    - by CodeChef
    I'm using jQuery UI's draggable and droppable libraries in a simple ASP.NET proof of concept application. This page uses the ASP.NET AJAX UpdatePanel to do partial page updates. The page allows a user to drop an item into a trashcan div, which will invoke a postback that deletes a record from the database, then rebinds the list (and other controls) that the item was drug from. All of these elements (the draggable items and the trashcan div) are inside an ASP.NET UpdatePanel. Here is the dragging and dropping initialization script: function initDragging() { $(".person").draggable({helper:'clone'}); $("#trashcan").droppable({ accept: '.person', tolerance: 'pointer', hoverClass: 'trashcan-hover', activeClass: 'trashcan-active', drop: onTrashCanned }); } $(document).ready(function(){ initDragging(); var prm = Sys.WebForms.PageRequestManager.getInstance(); prm.add_endRequest(function() { initDragging(); }); }); function onTrashCanned(e,ui) { var id = $('input[id$=hidID]', ui.draggable).val(); if (id != undefined) { $('#hidTrashcanID').val(id); __doPostBack('btnTrashcan',''); } } When the page posts back, partially updating the UpdatePanel's content, I rebind the draggables and droppables. When I then grab a draggable with my cursor, I get an "htmlfile: Unspecified error." exception. I can resolve this problem in the jQuery library by replacing elem.offsetParent with calls to this function that I wrote: function IESafeOffsetParent(elem) { try { return elem.offsetParent; } catch(e) { return document.body; } } I also have to avoid calls to elem.getBoundingClientRect() as it throws the same error. For those interested, I only had to make these changes in the jQuery.fn.offset function in the Dimensions Plugin. My questions are: Although this works, are there better ways (cleaner; better performance; without having to modify the jQuery library) to solve this problem? If not, what's the best way to manage keeping my changes in sync when I update the jQuery libraries in the future? For, example can I extend the library somewhere other than just inline in the files that I download from the jQuery website. Update: @some It's not publicly accessible, but I will see if SO will let me post the relevant code into this answer. Just create an ASP.NET Web Application (name it DragAndDrop) and create these files. Don't forget to set Complex.aspx as your start page. 
    You'll also need to download the jQuery UI drag-and-drop plug-in as well as jQuery core.

    Complex.aspx:

    <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Complex.aspx.cs" Inherits="DragAndDrop.Complex" %>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head runat="server">
        <title>Untitled Page</title>
        <script src="jquery-1.2.6.min.js" type="text/javascript"></script>
        <script src="jquery-ui-personalized-1.5.3.min.js" type="text/javascript"></script>
        <script type="text/javascript">
            function initDragging() {
                $(".person").draggable({ helper: 'clone' });
                $("#trashcan").droppable({
                    accept: '.person',
                    tolerance: 'pointer',
                    hoverClass: 'trashcan-hover',
                    activeClass: 'trashcan-active',
                    drop: onTrashCanned
                });
            }

            $(document).ready(function() {
                initDragging();
                var prm = Sys.WebForms.PageRequestManager.getInstance();
                prm.add_endRequest(function() {
                    initDragging();
                });
            });

            function onTrashCanned(e, ui) {
                var id = $('input[id$=hidID]', ui.draggable).val();
                if (id != undefined) {
                    $('#hidTrashcanID').val(id);
                    __doPostBack('btnTrashcan', '');
                }
            }
        </script>
    </head>
    <body>
        <form id="form1" runat="server">
            <asp:ScriptManager ID="ScriptManager1" runat="server"></asp:ScriptManager>
            <div>
                <asp:UpdatePanel ID="updContent" runat="server" UpdateMode="Always">
                    <ContentTemplate>
                        <asp:LinkButton ID="btnTrashcan" Text="trashcan" runat="server" CommandName="trashcan"
                            onclick="btnTrashcan_Click" style="display:none;"></asp:LinkButton>
                        <input type="hidden" id="hidTrashcanID" runat="server" />
                        <asp:Button ID="Button1" runat="server" Text="Save" onclick="Button1_Click" />
                        <table>
                            <tr>
                                <td style="width: 300px;">
                                    <asp:DataList ID="lstAllPeople" runat="server" DataSourceID="odsAllPeople" DataKeyField="ID">
                                        <ItemTemplate>
                                            <div class="person">
                                                <asp:HiddenField ID="hidID" runat="server" Value='<%# Eval("ID") %>' />
                                                Name: <asp:Label ID="lblName" runat="server" Text='<%# Eval("Name") %>' />
                                                <br /><br />
                                            </div>
                                        </ItemTemplate>
                                    </asp:DataList>
                                    <asp:ObjectDataSource ID="odsAllPeople" runat="server" SelectMethod="SelectAllPeople"
                                        TypeName="DragAndDrop.Complex+DataAccess" onselecting="odsAllPeople_Selecting">
                                        <SelectParameters>
                                            <asp:Parameter Name="filter" Type="Object" />
                                        </SelectParameters>
                                    </asp:ObjectDataSource>
                                </td>
                                <td style="width: 300px; vertical-align: top;">
                                    <div id="trashcan">drop here to delete</div>
                                    <asp:DataList ID="lstPeopleToDelete" runat="server" DataSourceID="odsPeopleToDelete">
                                        <ItemTemplate>
                                            ID: <asp:Label ID="IDLabel" runat="server" Text='<%# Eval("ID") %>' />
                                            <br />
                                            Name: <asp:Label ID="NameLabel" runat="server" Text='<%# Eval("Name") %>' />
                                            <br /><br />
                                        </ItemTemplate>
                                    </asp:DataList>
                                    <asp:ObjectDataSource ID="odsPeopleToDelete" runat="server" onselecting="odsPeopleToDelete_Selecting"
                                        SelectMethod="GetDeleteList" TypeName="DragAndDrop.Complex+DataAccess">
                                        <SelectParameters>
                                            <asp:Parameter Name="list" Type="Object" />
                                        </SelectParameters>
                                    </asp:ObjectDataSource>
                                </td>
                            </tr>
                        </table>
                    </ContentTemplate>
                </asp:UpdatePanel>
            </div>
        </form>
    </body>
    </html>

    Complex.aspx.cs:

    // namespaces required by the code below
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web.UI.WebControls;

    namespace DragAndDrop
    {
        public partial class Complex : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e) { }

            protected List<int> DeleteList
            {
                get
                {
                    if (ViewState["dl"] == null)
                    {
                        List<int> dl = new List<int>();
                        ViewState["dl"] = dl;
                        return dl;
                    }
                    else
                    {
                        return (List<int>)ViewState["dl"];
                    }
                }
            }

            public class DataAccess
            {
                public IEnumerable<Person> SelectAllPeople(IEnumerable<int> filter)
                {
                    return Database.SelectAll().Where(p => !filter.Contains(p.ID));
                }

                public IEnumerable<Person> GetDeleteList(IEnumerable<int> list)
                {
                    return Database.SelectAll().Where(p => list.Contains(p.ID));
                }
            }

            protected void odsAllPeople_Selecting(object sender, ObjectDataSourceSelectingEventArgs e)
            {
                e.InputParameters["filter"] = this.DeleteList;
            }

            protected void odsPeopleToDelete_Selecting(object sender, ObjectDataSourceSelectingEventArgs e)
            {
                e.InputParameters["list"] = this.DeleteList;
            }

            protected void Button1_Click(object sender, EventArgs e)
            {
                foreach (int id in DeleteList)
                {
                    Database.DeletePerson(id);
                }
                DeleteList.Clear();
                lstAllPeople.DataBind();
                lstPeopleToDelete.DataBind();
            }

            protected void btnTrashcan_Click(object sender, EventArgs e)
            {
                int id = int.Parse(hidTrashcanID.Value);
                DeleteList.Add(id);
                lstAllPeople.DataBind();
                lstPeopleToDelete.DataBind();
            }
        }
    }

    Database.cs:

    using System.Collections.Generic;

    namespace DragAndDrop
    {
        public static class Database
        {
            private static Dictionary<int, Person> _people = new Dictionary<int, Person>();

            static Database()
            {
                Person[] people = new Person[]
                {
                    new Person("Chad"),
                    new Person("Carrie"),
                    new Person("Richard"),
                    new Person("Ron")
                };
                foreach (Person p in people)
                {
                    _people.Add(p.ID, p);
                }
            }

            public static IEnumerable<Person> SelectAll()
            {
                return _people.Values;
            }

            public static void DeletePerson(int id)
            {
                if (_people.ContainsKey(id))
                {
                    _people.Remove(id);
                }
            }

            public static Person CreatePerson(string name)
            {
                Person p = new Person(name);
                _people.Add(p.ID, p);
                return p;
            }
        }

        public class Person
        {
            private static int _curID = 1;

            public int ID { get; set; }
            public string Name { get; set; }

            public Person()
            {
                ID = _curID++;
            }

            public Person(string name) : this()
            {
                Name = name;
            }
        }
    }
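    On the question of extending jQuery without editing the downloaded files: one common approach is to wrap jQuery.fn.offset from a separate patch script loaded after jQuery core and the Dimensions plugin. The sketch below is an illustration rather than code from the original post; the file name jquery.ie-offset-patch.js and the { top: 0, left: 0 } fallback are assumptions, and the try/catch mirrors the IESafeOffsetParent idea shown above.

    // jquery.ie-offset-patch.js -- include after jquery-1.2.6.min.js and the Dimensions plugin.
    // A minimal sketch: wrap the library's jQuery.fn.offset in a try/catch so the
    // IE 6/7 "Unspecified error" raised while reading offsetParent on an element
    // orphaned by an UpdatePanel refresh no longer breaks draggable initialisation.
    // The library file itself stays untouched, so upgrading jQuery later does not
    // overwrite the workaround.
    (function ($) {
        var originalOffset = $.fn.offset;   // keep a reference to the unmodified implementation

        $.fn.offset = function () {
            try {
                // Delegate to the library version in the normal case.
                return originalOffset.apply(this, arguments);
            } catch (e) {
                // IE 6/7 threw while touching offsetParent / getBoundingClientRect;
                // return a neutral position (an assumed fallback) instead of failing.
                return { top: 0, left: 0 };
            }
        };
    })(jQuery);

    Because the patch lives in its own file, re-downloading or upgrading the jQuery and Dimensions files does not clobber it; the trade-off is that an element which genuinely triggers the error reports a (0, 0) offset, which may be acceptable here since the draggables are re-initialised on the next endRequest anyway.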

    Read the article
