Search Results

Search found 18314 results on 733 pages for 'document architecture'.


  • Is it possible to use Google Docs Viewer to view files already in Google Docs?

    - by john2x
    The title is a little confusing, so I'll elaborate. As far as I can tell, the Google Docs Viewer tool accepts a link to a raw document file (e.g. .doc, .pdf, et al.) and renders its contents in the browser. For example, this URL to a PDF, http://research.google.com/archive/bigtable-osdi06.pdf, when passed to the Viewer, returns this link: http://docs.google.com/viewer?url=http%3A%2F%2Fresearch.google.com%2Farchive%2Fbigtable-osdi06.pdf

    What I'm trying to achieve is to use the Viewer to view a document already hosted in Google Docs (i.e. no longer a raw document file). When I pass a link to a Google Docs document to the Viewer, the result is not as expected: it renders the link's HTML source instead of the document's contents. The reason I want to do this is that I want to be able to use the "embed" feature of the Viewer to view Google Docs documents. Does Google Docs have a "link to embeddable view" feature?

    P.S. Here is a sample snippet for an embedded document. This is what I want, but pointing to an existing Google Docs document.

        <iframe src="http://docs.google.com/viewer?url=http%3A%2F%2Fresearch.google.com%2Farchive%2Fbigtable-osdi06.pdf&embedded=true" width="600" height="780" style="border: none;"></iframe>
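
    For reference, the embed URL above is just the raw file's address percent-encoded into the Viewer's url parameter. A minimal sketch of building it in the browser (assuming the target file is publicly reachable; nothing here beyond the question's own URLs):

        // Build the Google Docs Viewer embed URL from a raw document URL.
        var rawUrl = "http://research.google.com/archive/bigtable-osdi06.pdf";
        var viewerUrl = "http://docs.google.com/viewer?url=" +
                        encodeURIComponent(rawUrl) + "&embedded=true";
        // Drop the result into an iframe, as in the snippet above.
        document.write('<iframe src="' + viewerUrl + '" width="600" height="780" style="border: none;"></iframe>');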


  • Looking for resources to explain a security risk.

    - by Dave
    I have a developer who has given users the ability to download a zip archive containing an HTML document that references a relative JavaScript file and a Flash document. The Flash document accepts as one of its parameters a URL that is embedded in the HTML document. I believe this archive is meant to be used as a means of transferring an advertisement to someone who would use the source to display the ad on their site; however, the end user appears to want to view it locally.

    When one opens the HTML document, the Flash document is presented, and when the user clicks on the Flash document it redirects to this embedded URL. However, if one extracts the archive to the desktop, opens the HTML document in a browser, and clicks the Flash object, nothing observable happens; they will not be redirected to the external URL. I believe this is a security risk because one is transferring from the local computer zone to an external zone.

    I'm trying to determine the best way to explain this security risk in the simplest of terms to a very non-technical end user. They simply believe it's "broken" when it's not broken; they're being protected from a known vulnerability. The developer attempted to explain how to copy the files to a local IIS instance, which I highly doubt is running on the user's machine, and I do not consider this to be a viable explanation.


  • javascript split() array contains

    - by Mahesha999
    While learning JavaScript, I did not understand the output when we print the array returned by the String.split() method (with a regular expression as an argument), as shown below.

        var colorString = "red,blue,green,yellow";
        var colors = colorString.split(/[^\,]+/);
        document.write(colors); // this prints seven commas: ,,,,,,,

    However, when I print the individual elements of the array colors, it prints an empty string, three commas, and an empty string:

        document.write(colors[0]); // empty string
        document.write(colors[1]); // ,
        document.write(colors[2]); // ,
        document.write(colors[3]); // ,
        document.write(colors[4]); // empty string
        document.write(colors[5]); // undefined
        document.write(colors[6]); // undefined

    So why does printing the array directly give seven commas? Though I think it's correct to have three commas in the second output, I did not get why there is a starting empty string (at index 0) and an ending empty string (at index 4). Please explain; I am stuck here.
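
    For what it's worth, here is a minimal sketch of what that regular expression actually does (nothing here beyond the question's own strings): /[^\,]+/ matches the runs of non-comma characters, so split() treats the words themselves as the separators and returns whatever lies between them.

        var colorString = "red,blue,green,yellow";

        // /[^\,]+/ matches "red", "blue", "green" and "yellow", so those become the
        // separators; the result is ["", ",", ",", ",", ""], which document.write
        // joins with commas and therefore prints as ,,,,,,,
        document.write(colorString.split(/[^\,]+/));

        // Splitting on the comma itself gives the four color names instead.
        document.write(colorString.split(","));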


  • js loading page message not working on php file

    - by eggman20
    Merry Christmas, guys. I found some code that displays a loading message using a GIF. It uses onLoad="init()" in the body tag. It works fine in an HTML file, but it doesn't when the file is PHP. Do I need to change anything here, or will this just not work in a PHP file? Here's the code:

        <body onLoad="init()">
        <div id="loading" style="position:absolute; width:100%; text-align:center; top:300px;">
        <img src="loading.gif" border=0></div>
        <script>
        // Old-school browser sniffing: pick the right handle to the "loading" element.
        var ld=(document.all);
        var ns4=document.layers;
        var ns6=document.getElementById&&!document.all;
        var ie4=document.all;
        if (ns4)
            ld=document.loading;
        else if (ns6)
            ld=document.getElementById("loading").style;
        else if (ie4)
            ld=document.all.loading.style;

        // Hide the loading message once the page has finished loading.
        function init()
        {
            if(ns4){ld.visibility="hidden";}
            else if (ns6||ie4) ld.display="none";
        }
        </script>
        <!-- Content here -->
        </body>

    Thanks in advance!!!
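
    For what it's worth, whether the page is served as static HTML or generated by PHP should make no difference to the browser, as long as the markup it receives is the same. A simplified sketch of the same idea without the legacy Netscape 4 / IE 4 sniffing (my own variant, assuming any browser that supports getElementById):

        <script>
        // Hide the loading indicator once the page has finished loading.
        window.onload = function () {
            document.getElementById("loading").style.display = "none";
        };
        </script>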


  • How to retain the value of a text box?

    - by udaya
    This is the content of my view page, and I want to replace the value "0" when monthfunc() is called. The monthfunc() function is given below the view content.

        <tr>
          <td>
            <input type="submit" name="monthplus" id="monthplus" onClick="return monthfunc('monthplus');" value=">>>>>>">
            <input type="hidden" name="monthnum" id="monthnum" value="1">
            <input type="text" name="monthname" id="monthname" value="0" value="<? echo $month;?>" >
            <input type="submit" name="monthminus" id="monthminus" onClick="return monthfunc('monthminus');" value="<<<<<<">
          </td>
        </tr>

    My script is:

        function monthfunc(mnth)
        {
            if(mnth == 'monthplus')
            {
                var yn = document.getElementById('monthnum').value;
                ynpo = parseInt(yn)+1;
                if(ynpo==13) { ynpo=1; }
            }
            else if(mnth == 'monthminus')
            {
                var yn = document.getElementById('monthnum').value;
                ynpo = parseInt(yn)-1;
                if(ynpo==0) { ynpo=12; }
            }
            if(ynpo ==1)
            {
                document.getElementById('monthname').value = 'january';
                document.getElementById('monthnum').value = ynpo;
                return true;
            }
            else if(ynpo ==2)
            {
                document.getElementById('monthname').value = 'february';
                document.getElementById('monthnum').value = ynpo;
                return true;
            }
            else if(ynpo ==3)
            {
                document.getElementById('monthname').value = 'March';
                document.getElementById('monthnum').value = ynpo;
                return true;
            }
            return false;
        }

    How can I replace the value with a value like "january", "february", etc.? Actually, I can change the values but cannot retain them... I want to retain the values. How do I do that?
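
    As an aside, the long if/else chain above is usually written as a table lookup; a minimal sketch of the same idea (the monthNames array is my own addition, not part of the original code):

        // Hypothetical compact version of monthfunc(): look the month name up in an
        // array instead of using one if/else branch per month.
        var monthNames = ['january', 'february', 'march', 'april', 'may', 'june',
                          'july', 'august', 'september', 'october', 'november', 'december'];

        function monthfunc(mnth)
        {
            var ynpo = parseInt(document.getElementById('monthnum').value, 10);
            ynpo = (mnth == 'monthplus') ? ynpo + 1 : ynpo - 1;
            if (ynpo == 13) ynpo = 1;   // wrap December forward to January
            if (ynpo == 0)  ynpo = 12;  // wrap January back to December
            document.getElementById('monthname').value = monthNames[ynpo - 1];
            document.getElementById('monthnum').value = ynpo;
            return true;
        }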


  • Populate a TreeView on demand with data from an XML file

    - by m6a-uds
    Hi, I have a large XML file (3000+ nodes) that I want to represent in a TreeView in ASP.NET. I cannot databind it to an XmlDataSource because loading the TreeView would then be way too slow (I never even waited long enough to see it finish...). So the solution would be to use the PopulateOnDemand property of the TreeNodes to load data only when needed. The problem is, I can't think of a way to achieve this. How can I, based on the ID of a node, search an XmlDocument to get all the child nodes of the node having this ID? The XML would look like this:

        <document ID="1">
          <document ID="2">
            <document ID="3">
            </document>
          </document>
          <document ID="4">
          </document>
        </document>

    There are no rules on how many levels it can go down or anything...


  • javascript freezing ie8

    - by florin
    Can someone tell me why this code freezes IE8? It is supposed to generate input fields. In Firefox, Safari, and Chrome it works, but in IE8, when I press the generate button, it freezes.

        var monthNames = [ "Ianuarie", "Februarie", "Martie", "Aprilie", "Mai", "Iunie",
                           "Iulie", "August", "Septembrie", "Octombrie", "Noiembrie", "Decembrie" ];

        function buildMonthlyEntries()
        {
            var startDate = new Date(document.getElementById('datastart').value);
            var endDate = new Date(document.getElementById('dataend').value);
            if (startDate == "Invalid Date" || endDate == "Invalid Date") { return null; }

            var monthlyEntries = document.getElementById('monthlyEntries');
            monthlyEntries.innerHTML = "";

            // inclusiv dataend
            endDate.setMonth(endDate.getMonth() + 1);

            // start with startDate; loop until we reach endDate
            for (var dt = startDate;
                 ! ( dt.getFullYear() == endDate.getFullYear() && dt.getMonth() == endDate.getMonth() );
                 dt.setMonth( dt.getMonth() + 1 ) )
            {
                monthlyEntries.appendChild( document.createTextNode(
                    monthNames[dt.getMonth()] + " " + String(dt.getFullYear()).substring(2) ) );

                var textElement = document.createElement('input');
                var textElement2 = document.createElement('input');
                var textElement3 = document.createElement('input');

                textElement.setAttribute('type', 'text');
                //textElement.setAttribute('name', 'entry['+ monthNames[dt.getMonth()] + + String(dt.getFullYear()).substring(2) + ']');
                textElement.setAttribute('name', 'entry[]');

                textElement2.setAttribute('type', 'hidden');
                textElement2.setAttribute('name', 'luna[]');
                textElement2.setAttribute('value', '' + monthNames[dt.getMonth()] + '');

                textElement3.setAttribute('type', 'hidden');
                textElement3.setAttribute('name', 'an[]');
                textElement3.setAttribute('value', '' + String(dt.getFullYear()) + '');

                monthlyEntries.appendChild(textElement);
                monthlyEntries.appendChild(textElement2);
                monthlyEntries.appendChild(textElement3);

                // adauga br
                // monthlyEntries.appendChild(document.createElement("br"));
            }
            return null;
        }
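
    One hedged guess, based only on the code above: if IE8's Date() constructor cannot parse the text in datastart or dataend, the startDate == "Invalid Date" guard may never fire there (that string comparison is not a reliable invalid-date test across browsers), so the for loop's end condition is never met and the page locks up. A more portable guard might look like this sketch (not from the original post):

        // Cross-browser invalid-date test.
        function isValidDate(d) {
            return !isNaN(d.getTime()); // NaN time means the input string could not be parsed
        }

        // Inside buildMonthlyEntries(), the guard would then read:
        //   if (!isValidDate(startDate) || !isValidDate(endDate)) { return null; }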


  • IE8 error when using dynamic form actions

    - by user330711
    Hello all: Please go here to see an iframe-based web app. Click on the map of Australia, choose a city, then buy some tickets. Now you will see the cart form located in the lower right corner. The problem is that in IE8 I cannot delete checked rows from the table, whereas in other browsers such as Firefox 3.6, Opera 10, Safari 4, and Chrome 4 this works fine. Below is the related JavaScript. It doesn't use jQuery, as part of the requirement is that no framework is allowed! And iframes are my best bet; AJAX would simply kill me under this restriction.

        /* cartForm.js */
        function toDeleteRoutes() // this function is executed before the form is submitted.
        {
            if(document.getElementsByClassName('delete_box').length > 0) // there are rows to delete
            {
                document.getElementById('cartForm').action ="./deleteRoutes.php";
                document.getElementById('cartForm').target ="section4";
                return true;  // this enables the form to be submitted as usual.
            }
            else
                return false; // there is no more row in the table to delete!
        }

        function toSendEmail() // this function is executed before the form is submitted.
        {
            document.getElementById('cartForm').action ="./sendEmail.php";
            document.getElementById('cartForm').target ="section3";
            document.getElementById('delete_btn').disabled = true; // disable the delete button now
            return true;  // this enables the form to be submitted as usual.
        }

        function toCancelPurchase()
        {
            document.getElementById('cartForm').action ="./cancelPurchase.php";
            document.getElementById('cartForm').target ="section4";
            return true;  // this enables the form to be submitted as usual.
        }

    I don't know which part is wrong, or whether this is just IE8 being IE8.
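
    A hedged guess, since the live page isn't reproduced here: document.getElementsByClassName does not exist in IE8, so toDeleteRoutes() would throw before it ever changes the form action. IE8 does support querySelectorAll for simple selectors, so a fallback along these lines (my own sketch, not from the original code) may behave the same in every browser listed:

        // Return the elements with a given class, with an IE8 fallback.
        function getByClassName(name) {
            if (document.getElementsByClassName) {
                return document.getElementsByClassName(name);
            }
            return document.querySelectorAll('.' + name); // IE8 supports CSS class selectors here
        }

        // In toDeleteRoutes(), the check would then become:
        //   if (getByClassName('delete_box').length > 0) { ... }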


  • IASA South East Florida Chapter Meeting Recap - June 2011

    - by Sam Abraham
    Erik Russell and Giles Marino were our speakers for the June 2011 IASA South East Florida Chapter meeting. Attendees filled all available seats at the Microsoft office conference room where the event was held, which highlights the high interest in Enterprise Architecture as a career track and chartered project role. Also in attendance were our Board of Directors and Alex Funkhouser, President, Sherlock Technology.

    Rainer Habermann, Chapter President, kicked off the meeting by introducing our speakers and Board of Directors. Alex Funkhouser, President of South Florida’s staffing firm Sherlock Technology, spoke briefly about available Software Architect positions in the area. Alex also congratulated this week’s Sherlock Raffle winner and presented them with $500 in cash.

    Our speakers Giles and Erik then proceeded with their talk. Erik presented a business case in the government sector where Enterprise Architecture helped a government entity cut costs and streamline its various business operations. The technologies leveraged in Erik’s demonstrated project were Java-based. Giles then followed with a thorough demonstration of the architecture patterns he used to migrate a complete backend system for an insurance company to the .NET platform.

    The audience was very engaged with our speakers, as evidenced by the large number of follow-up questions asked at the end of the talk. We greatly enjoyed Giles and Erik’s talk and look forward to having them share more of their adventures as Enterprise Architects with us in the near future. Below are some photos of the event.

    Sam Abraham
    Secretary - IASA South East Florida Chapter
    http://www.iasaglobal.org/iasa/South_East_Florida.asp

    Photo captions:
    - Chapter President Rainer Habermann kicks off our meeting.
    - Sherlock Technology President Alex Funkhouser holding Sherlock's weekly cash prize.
    - Alex shares available Software Architect opportunities with our members.
    - Erik Russell addressing our membership.
    - Giles Marino sharing his architecture experience in the insurance industry.
    - In this photo: Dave Noderer, Rainer Habermann, Quent Herschelman and Alex Funkhouser.
    - The event attracted a large audience and filled the Microsoft conference room where it was held.


  • Professional Scrum Developer (.NET) Training in London

    - by Martin Hinshelwood
    On the 26th - 30th July in Microsoft’s offices in London Adam Cogan from SSW will be presenting the first Professional Scrum Developer course in the UK. I will be teaching this course along side Adam and it is a fantastic experience. You are split into teams and go head-to-head to deliver units of potentially shippable work in four two hour sprints. The Professional Scrum Developer course is the only course endorsed by both Microsoft and Ken Schwaber and they have worked together very effectively in brining this course to fruition. This course is the brain child of Richard Hundhausen, a Microsoft Regional Director, and both Adam and I attending the Trainer Prep in Sydney when he was there earlier this year. He is a fantastic trainer and no matter where you do this course you can be safe in the knowledge that he has trained and vetted all of the teachers. A tools version of Ken if you will Find a course and register Download this syllabus Download the Scrum Guide What is the Professional Scrum Developer course all about? Professional Scrum Developer course is a unique and intensive five-day experience for software developers. The course guides teams on how to turn product requirements into potentially shippable increments of software using the Scrum framework, Visual Studio 2010, and modern software engineering practices. Attendees will work in self-organizing, self-managing teams using a common instance of Team Foundation Server 2010. Who should attend this course? This course is suitable for any member of a software development team – architect, programmer, database developer, tester, etc. Entire teams are encouraged to attend and experience the course together, but individuals are welcome too. Attendees will self-organize to form cross-functional Scrum teams. These teams require an aggregate of skills specific to the selected case study. Please see the last page of this document for specific details. Product Owners, ScrumMasters, and other stakeholders are welcome too, but keep in mind that everyone who attends will be expected to commit to work and pull their weight on a Scrum team. What should you know by the end of the course? Scrum will be experienced through a combination of lecture, demonstration, discussion, and hands-on exercises. Attendees will learn how to do Scrum correctly while being coached and critiqued by the instructor, in the following topic areas: Form effective teams Explore and understand legacy “Brownfield” architecture Define quality attributes, acceptance criteria, and “done” Create automated builds How to handle software hotfixes Verify that bugs are identified and eliminated Plan releases and sprints Estimate product backlog items Create and manage a sprint backlog Hold an effective sprint review Improve your process by using retrospectives Use emergent architecture to avoid technical debt Use Test Driven Development as a design tool Setup and leverage continuous integration Use Test Impact Analysis to decrease testing times Manage SQL Server development in an Agile way Use .NET and T-SQL refactoring effectively Build, deploy, and test SQL Server databases Create and manage test plans and cases Create, run, record, and play back manual tests Setup a branching strategy and branch code Write more maintainable code Identify and eliminate people and process dysfunctions Inspect and improve your team’s software development process What does the week look like? 
This course is a mix of lecture, demonstration, group discussion, simulation, and hands-on software development. The bulk of the course will be spent working as a team on a case study application delivering increments of new functionality in mini-sprints. Here is the week at a glance: Monday morning and most of the day Friday will be spent with the computers powered off, so you can focus on sharpening your game of Scrum and avoiding the common pitfalls when implementing it. The Sprints Timeboxing is a critical concept in Scrum as well as in this course. We expect each team and student to understand and obey all of the timeboxes. The timebox duration will always be clearly displayed during each activity. Expect the instructor to enforce it. Each of the ½ day sprints will roughly follow this schedule: Component Description Minutes Instruction Presentation and demonstration of new and relevant tools & practices 60 Sprint planning meeting Product owner presents backlog; each team commits to delivering functionality 10 Sprint planning meeting Each team determines how to build the functionality 10 The Sprint The team self-organizes and self-manages to complete their tasks 120 Sprint Review meeting Each team will present their increment of functionality to the other teams = 30 Sprint Retrospective A group retrospective meeting will be held to inspect and adapt 10 Each team is expected to self-organize and manage their own work during the sprint. Pairing is highly encouraged. The instructor/product owner will be available if there are questions or impediments, but will be hands-off by default. You should be prepared to communicate and work with your team members in order to achieve your sprint goal. If you have development-related questions or get stuck, your partner or team should be your first level of support. Module 1: INTRODUCTION This module provides a chance for the attendees to get to know the instructors as well as each other. The Professional Scrum Developer program, as well as the day by day agenda, will be explained. Finally, the Scrum team will be selected and assembled so that the forming, storming, norming, and performing can begin. Trainer and student introductions Professional Scrum Developer program Agenda Logistics Team formation Retrospective Module 2: SCRUMDAMENTALS This module provides a level-setting understanding of the Scrum framework including the roles, timeboxes, and artifacts. The team will then experience Scrum firsthand by simulating a multi-day sprint of product development, including planning, review, and retrospective meetings. Scrum overview Scrum roles Scrum timeboxes (ceremonies) Scrum artifacts Simulation Retrospective It’s required that you read Ken Schwaber’s Scrum Guide in preparation for this module and course. MODULE 3: IMPLEMENTING SCRUM IN VISUAL STUDIO 2010 This module demonstrates how to implement Scrum in Visual Studio 2010 using a Scrum process template*. The team will learn the mapping between the Scrum concepts and how they are implemented in the tool. After connecting to the shared Team Foundation Server, the team members will then return to the simulation – this time using Visual Studio to manage their product development. Mapping Scrum to Visual Studio 2010 User Story work items Task work items Bug work items Demonstration Simulation Retrospective Module 4: THE CASE STUDY In this module the team is introduced to their problem domain for the week. 
A kickoff meeting by the Product Owner (the instructor) will set the stage for the why and what that will take during the upcoming sprints. The team will then define the quality attributes of the project and their definition of “done.” The legacy application code will be downloaded, built, and explored, so that any bugs can be discovered and reported. Introduction to the case study Download the source code, build, and explore the application Define the quality attributes for the project Define “done” How to file effective bugs in Visual Studio 2010 Retrospective Module 5: HOTFIX This module drops the team directly into a Brownfield (legacy) experience by forcing them to analyze the existing application’s architecture and code in order to locate and fix the Product Owner’s high-priority bug(s). The team will learn best practices around finding, testing, fixing, validating, and closing a bug. How to use Architecture Explorer to visualize and explore Create a unit test to validate the existence of a bug Find and fix the bug Validate and close the bug Retrospective Module 6: PLANNING This short module introduces the team to release and sprint planning within Visual Studio 2010. The team will define and capture their goals as well as other important planning information. Release vs. Sprint planning Release planning and the Product Backlog Product Backlog prioritization Acceptance criteria and tests Sprint planning and the Sprint Backlog Creating and linking Sprint tasks Retrospective At this point the team will have the knowledge of Scrum, Visual Studio 2010, and the case study application to begin developing increments of potentially shippable functionality that meet their definition of done. Module 7: EMERGENT ARCHITECTURE This module introduces the architectural practices and tools a team can use to develop a valid design on which to develop new functionality. The teams will learn how Scrum supports good architecture and design practices. After the discussion, the teams will be presented with the product owner’s prioritized backlog so that they may select and commit to the functionality they can deliver in this sprint. Architecture and Scrum Emergent architecture Principles, patterns, and practices Visual Studio 2010 modeling tools UML and layer diagrams SPRINT 1 Retrospective Module 8: TEST DRIVEN DEVELOPMENT This module introduces Test Driven Development as a design tool and how to implement it using Visual Studio 2010. To maximize productivity and quality, a Scrum team should setup Continuous Integration to regularly build every team member’s code changes and run regression tests. Refactoring will also be defined and demonstrated in combination with Visual Studio’s Test Impact Analysis to efficiently re-run just those tests which were impacted by refactoring. Continuous integration Team Foundation Build Test Driven Development (TDD) Refactoring Test Impact Analysis SPRINT 2 Retrospective Module 9: AGILE DATABASE DEVELOPMENT This module lets the SQL Server database developers in on a little secret – they can be agile too. By using the database projects in Visual Studio 2010, the database developers can join the rest of the team. The students will see how to apply Agile database techniques within Visual Studio to support the SQL Server 2005/2008/2008R2 development lifecycle. 
Agile database development Visual Studio database projects Importing schema and scripts Building and deploying Generating data Unit testing SPRINT 3 Retrospective Module 10: SHIP IT Teams need to know that just because they like the functionality doesn’t mean the Product Owner will. This module revisits acceptance criteria as it pertains to acceptance testing. By refining acceptance criteria into manual test steps, team members can execute the tests, recording the results and reporting bugs in a number of ways. Manual tests will be defined and executed using the Microsoft Test Manager tool. As the Sprint completes and an increment of functionality is delivered, the team will also learn why and when they should create a branch of the codeline. Acceptance criteria Testing in Visual Studio 2010 Microsoft Test Manager Writing and running manual tests Branching SPRINT 4 Retrospective Module 11: OVERCOMING DYSFUNCTION This module introduces the many types of people, process, and tool dysfunctions that teams face in the real world. Many dysfunctions and scenarios will be identified, along with ideas and discussion for how a team might mitigate them. This module will enable you and your team to move toward independence and improve your game of Scrum when you depart class. Scrum-butts and flaccid Scrum Best practices working as a team Team challenges ScrumMaster challenges Product Owner challenges Stakeholder challenges Course Retrospective What will be expected of you and you team? This is a unique course in that it’s technically-focused, team-based, and employs timeboxes. It demands that the members of the teams self-organize and self-manage their own work to collaboratively develop increments of software. All attendees must commit to: Pay attention to all lectures and demonstrations Participate in team and group discussions Work collaboratively with other team members Obey the timebox for each activity Commit to work and do your best to deliver All teams should have these skills: Understanding of Scrum Familiarity with Visual Studio 201 C#, .NET 4.0 & ASP.NET 4.0 experience*  SQL Server 2008 development experience Software testing experience * Check with the instructor ahead of time for the exact technologies Self-organising teams Another unique attribute of this course is that it’s a technical training class being delivered to teams of developers, not pairs, and not individuals. Ideally, your actual software development team will attend the training to ensure that all necessary skills are covered. However, if you wish to attend an open enrolment course alone or with just a couple of colleagues, realize that you may be placed on a team with other attendees. The instructor will do his or her best to ensure that each team is cross-functional to tackle the case study, but there are no guarantees. You may be required to try a new role, learn a new skill, or pair with somebody unfamiliar to you. This is just good Scrum! Who should NOT take this course? 
Because of the nature of this course, as explained above, certain types of people should probably not attend this course: Students requiring command and control style instruction – there are no prescriptive/step-by-step (think traditional Microsoft Learning) labs in this course Students who are unwilling to work within a timebox Students who are unwilling to work collaboratively on a team Students who don’t have any skill in any of the software development disciplines Students who are unable to commit fully to their team – not only will this diminish the student’s learning experience, but it will also impact their team’s learning experience Find a course and register Download this syllabus Download the Scrum Guide Technorati Tags: Scrum,SSW,Pro Scrum Dev


  • Podcast Show Notes: Fear and Loathing in SOA

    - by Bob Rhubart
    The latest program (#47) in the Arch2Arch podcast series is the first of three segments from another virtual mini-meet-up with architects from the OTN community, recorded on March 9, 2010. In keeping with the meet-up format, I sent an invitation to my list of past participants in Arch2Arch panel discussions. The following people showed up to take seats at the virtual table and drive the conversation: Hajo Normann is a SOA architect and consultant at EDS in Frankfurt Blog | LinkedIn | Oracle Mix | Oracle ACE Profile | Books  Jeff Davies is a Senior Product Manager at Oracle, and is the primary author of The Definitive Guide to SOA: Oracle Service Bus Homepage | Blog | LinkedIn | Oracle Mix Pat Shepherd is an enterprise architect with the Oracle Enterprise Solutions Group. Oracle Mix | LinkedIn | Blog This first segment focuses on a discussion of the persistent fear of SOA the panelists have observed among many developers and architects. Listen to Part 1 The discussion continues in next week’s segment with a look at the misinformation and misunderstanding behind the fear of SOA, and a discussion of possible solutions. So stay tuned: RSS   del.icio.us Tags: oracle,otn,arch2arch,podcast,soa,service-oriented architecture,enterprise architecture Technorati Tags: oracle,otn,arch2arch,podcast,soa,service-oriented architecture,enterprise architecture


  • ArchBeat Link-o-Rama for 2012-03-16

    - by Bob Rhubart
    Applications Architecture | Roy Hunter and Brian Rasmussen www.oracle.com Roy Hunter and Brian Rasmussen examine the strategies three organizations applied to modernize their application architectures. Part of the Oracle Experiences in Enterprise Architecture article series. Public Sector Architecture | Jeremy Foreman and Hamza Jahangir www.oracle.com Jeremy Foreman and Hamza Jahangir examine the strategies used by two different organizations in deploying their respective future-state architectures. Part of the Oracle Experiences in Enterprise Architecture article series. XMLA vs BAPI | Sunil S. Ranka sranka.wordpress.com Oracle ACE Sunil Ranka's brief primer on the XMLA and BAPI standards. The Java EE 6 Example - Running Galleria on WebLogic 12 - Part 3 | Markus Eisele blog.eisele.net Oracle ACE Director Markus Eisele continues his series on working with Galleria. Oracle Linux Online Forum - March 27 event.on24.com Date: Tuesday, March 27, 2012 Time: 9:30 AM PT / 12:30 PM ET Hosts: Oracle Executives Edward Screven and Wim Coekaerts. Customer Presentation: How Oracle Helps Reduce Cost and Improve Performance of Database Applications at Progressive Insurance Speaker: John Dome What's New in Oracle Linux Speakers: Waseem Daher, Chris Mason, Elena Zannoni, Lenz Grimmer Get More Value from your Linux Vendor Speakers: Sergio Leunissen, Chris Mason, Monica Kumar JavaOne 2012 Call for Papers www.oracle.com Don't keep all that Java skill locked up in your overstuffed cranium. Submit your proposal for that killer paper now to share your experience at this year’s JavaOne. Running applications in the cloud are not designed for the cloud | Tom Laszewski blogs.oracle.com "The issue you face with moving client/server applications to the cloud via rehosting is 'where will the applications run?'" says Tom Laszewski. GlassFish 3.1.2 - Which Platform(s)? | The Aquarium blogs.oracle.com The Aquarium shares a list of GlassFish 3.1.2-supported operating systems and JVMs. IT Strategies from Oracle; Three Recipes for Oracle Service Bus 11g ; Stir Up Some SOA www.oracle.com Featured this week on the OTN Architect Portal, along with the latest events, product downloads, community social resources, articles on hot topics, and a whole lot more. Thought for the Day "No matter what the problem is, it's always a people problem." — Gerald M. Weinberg


  • links for 2011-02-28

    - by Bob Rhubart
    Apache Tuscany : SCA Java 2.x Releases (tags: ping.fm) Richard Veryard on Architecture: Modernism and Enterprise Architecture "Underlying conventional enterprise architecture theory and practice are some implicit assumptions that could be loosely characterized as modernist. Several people are offering more or less radical departures from conventional enterprise architecture..." - Richard Veryard (tags: ping.fm entarch) Java / Oracle SOA blog: Building an asynchronous web service with OSB "A few weeks ago I made a blogpost over how you can build an asynchronous web service with JAX-WS. In this blogpost I will do the same in the Oracle Service Bus." - Oracle ACE Edwin Biemond (tags: oracle otn oracleace servicebus esb osb webservices soa) Enterprise Software Development with Java: GlassFish 3.1 arrived! Yes sir, we do cluster now! "GlassFish 3.1 is finally there. As promised by Oracle back in March last year! And it is an exciting release. It brings back all the clustering and high availability support we were missing since 2.x into the Java EE 6 world." - Oracle ACE Director Markus Eisele (tags: oracle otn oracleace glassfish)


  • links for 2010-06-17

    - by Bob Rhubart
    Live Webcast: Alcatel-Lucent Delivers Modern Customer User Experience with New Interactive Portal Saeed Hosseiniyar (CIO of Alcatel-Lucent’s Enterprise Products Group) and Andy MacMillan  (VP of Product Management for Oracle’s Enterprise 2.0 Solutions) discuss how  Alcatel used Oracle’s Enterprise 2.0 solutions to build a community and  give customers a rich interactive experience. (tags: oracle otn webcast enterprise2.0) Up Next, More Browser Tools for WebCenter Sharing | The AppsLab On the heels of our bookmarklet for sharing to WebCenter, today we were designing another other way to help people interact with WebCenter from the browser (tags: ping.fm oracle e20) BPM 11gR1 now available on Amazon EC2 "This is a fully configured image which requires absolutely no installation and lets you get hands on experience with the software within minutes," says  Prasen Palvankar. "This image has all the required software installed and configured." (tags: oracle otn bpm amazon ec2) Webcast: Introducing Next-Generation Business Process Management Hasan Rizvi, Senior Vice President, Oracle Product Development, discusses innovations in Oracle's new BPM Suite 11g in this webcast. (tags: oracle otn webcast bpm) Tim Pinchetti: Architecture as a navigation system "Metaphors have value in communicating different aspects of architecture. So I’d like to explore different perspectives on architecture using different metaphors, starting with: navigation!" -- Tim Pinchetti  (tags: architecture enterprisearchitecture entarch) Oracle Fusion Middleware Innovation Awards 2010 Nominate your organization today for a chance to be recognized for your cutting-edge solution using Oracle Fusion Middleware products. (tags: oracle openworld fusionmiddleware innovation) Oracle OpenWorld Key Financials Sessions Theresa Hickman with highlights on the some of the 70 financial sessions scheduled for Oracle Open World,  crossing all the financials product lines: e-Business Suite, JD Edwards, PeopleSoft, and Fusion. (tags: oracle otn openworld financials) Liberate Your Laptops! The Return of Virtual Developer Day Details on the upcoming Oracle Technology Network Virtual Developer Day - Tuxedo. (English-language version scheduled for July 27th.)  (tags: oracle tuxedo virtualbox otn) Webcast: Effective Smart Data Grid Management David Haak (Accenture), Brad Williams (Oracle), and Chris Foretich (Southern) discuss the strategy behind and the application of smart data grid technology in this on-demand webcast.  (tags: ping.fm oracle bpm)


  • links for 2010-03-18

    - by Bob Rhubart
    Oracle Database HA Architecture « The Oracle Instructor Oracle Certified Master Uwe Hesse introduces his blog's new Oracle Database HA Architecture page. (tags: oracle otn highavailability database) Mario Morgado: Where is the value of Enterprise Architecture? "When we purchase a product, its value is equivalent to the maximum amount that someone is willing to pay for the product. However, is the same equation valid in terms of the business value of enterprise architecture?" Mario Morgado (tags: otn oracle enterprisearchitecture) Steve Wilson: Managing Application to Disk "Of course, what we're introducing today goes beyond a mere re-skinning of Sun Ops Center. The promise is to offer real integration, and now we're delivering on the first phase in that roadmap by introducing the Oracle Management Connector for Ops Center. This software allows customers to connect an instance of Ops Center to an instance of Oracle Enterprise Manager's grid control server and connect the event streams of the two products, allowing for new levels of visibility into the customer's systems when using the combination of Oracle and Sun technology." "Virtual" Steve Wilson (tags: oraclesun opscenter)


  • Partner Webcast: Implementing on SOA - A Hands-On Technology Demonstration

    - by Thanos
    Service Oriented Architecture enables organizations to operate more efficiently and react faster to opportunities. How? By helping you create a flexible application architecture that supports greater business agility. You decide how quickly you want to move. You can start by implementing an application integration platform. Then, you can evolve your environment gradually by introducing business process management, business rules, governance and event processing. This unified but flexible approach also allows you to maximize the long-term cost reduction benefits of SOA and cloud-based applications. In this session, you dive into SOA Suite and you will see the usage of some advanced features. The topics covered range from adapters, automatic and custom business process correlation through service routing, rule based and manual decisions and to error handling, compensations and extending SOA Suite with your own Java code. Agenda: Service Oriented Architecture The Auctions Scenario Live Demo of the Oracle SOA Suite Features Connecting to non service enabled technologies with adapters (Database and File adapter) Orchestrating services with BPEL processes Correlating processes with correlation sets Mediating services Service Component Architecture Event Handling User Notification Human Workflow Business Rules Fault Handling patterns Developing custom components with Spring and using them in SOA Suite composites Delivery Format This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24hours prior to start time may not receive confirmation to attend. Duration: 1 hour Register Now For all your questions and support requests to adopt and implement the latest Oracle technologies please contact us at [email protected]


  • Oracle CEP on OTN

    - by seth.white
    Here are the latest links to Oracle CEP on the Oracle website (OTN). I've heard from a few folks that these are hard to find. As of this writing, the latest release of Oracle CEP is 11.1.3. You may also see this release referred to as 11.1.1.3 and 11.1.1.3.0. An overview of the new features in 11.1.3 can be found here. The product download page: http://www.oracle.com/technology/software/products/middleware/htdocs/fmw_11_download.html The product documentation: http://download.oracle.com/docs/cd/E14571_01/soa.htm#ocep Don't be alarmed that the release number in the documentation is 11.1.1. This is the documentation for the 11.1.3 release. The user forum: http://forums.oracle.com/forums/forum.jspa?forumID=820 The Oracle CEP samples page (contains useful code samples): http://www.oracle.com/technology/sample_code/products/event-driven-architecture/index.html   The Oracle CEP product page (maintained by our product manager): http://www.oracle.com/technology/products/event-driven-architecture/complex-event-processing.html   The event driven architecture page (Oracle CEP is bundled in the EDA Suite product): http://www.oracle.com/technology/products/event-driven-architecture/index.html   Technorati Tags: Oracle,CEP,OTN


  • Show Notes: Debra Lilley on Fusion Applications

    - by Bob Rhubart
    The latest ArchBeat program features a three-part interview with Oracle ACE Director Debra Lilley (ACE Profile). Debra is Oracle Alliance Director at Fujitsu, Executive Member at the International Oracle Users Group Community (IOUG), Director and Deputy Chair at the UK Oracle Users Group (UKOUG), and a partner at Oracle UK.  So yeah, she’s connected.  In this interview Debra talks about her connection to Oracle Fusion Applications. Listen to Part 1 Debra talks about her role as the as the Director and Deputy Chairperson of the UKOUG and about the UKOUG development group’s involvement in Oracle Fusion Applications. Listen to Part 2 (March 9) Debra shares her insight into what Fusion Applications will bring to Enterprise Architecture, and the importance of user experience in enterprise architecture. Listen to Part 3 (March 16) Debra discusses the need to  close the gap between IT and business, and about how business users should be able to use applications without having to think about the underlying technology. Debra is very active in social networks, so if you have questions or comments you can connect with her via the following: Blog: http://www.debrasoracle.blogspot.com/ Twitter: @debralilley LinkedIn:  http://uk.linkedin.com/pub/debra-lilley/1/438/bba And if you’d like to learn more about Oracle Fusion Applications: http://www.oracle.com/us/products/applications/fusion/index.html Coming Soon Dr. Frank Munz, author of Middleware and Cloud Computing: Oracle Fusion Middleware on Amazon Web Services and Rackspace Cloud.  Andy MacMillan (VP, Enterprise 2.0, Oracle) on the socialization of the enterprise. A panel discussion on “Who gets to be a software architect?” Stay tuned: RSS Technorati Tags: oracle,fusion applications,enterprise architecture,IOUG,UKOUG del.icio.us Tags: oracle,fusion applications,enterprise architecture,IOUG,UKOUG


  • Managed Service Architectures Part I

    - by barryoreilly
    Instead of thinking about service oriented architecture, a concept that is continually defined, redefined, abused and mistreated, perhaps it is time to drop the acronym and consider what we actually need to get the job done.   ‘Pure’ SOA involves the modeling of an organisation’s processes, the so called ‘Top Down’ approach, followed by the implementation of these processes as services.     Another approach, more commonly seen in the wild, is the bottom up approach. This usually involves services that simply start popping up in the organization, and SOA in this case is often just an attempt to rein in these services. Such projects, although described as SOA projects for a variety of reasons, have clearly little relation to process driven architecture. Much has been written about these two approaches, with many deciding that a hybrid of both methods is needed to succeed with SOA.   These hybrid methods are a sensible compromise, but one gets the feeling that there is too much focus on ‘Succeeding with SOA’. Organisations who focus too much on bottom up development, or who waste too much time and money on top down approaches that don’t produce results, are often recommended to attempt an ‘agile’(Erl) or ‘middle-out’ (Microsoft) approach in order to succeed with SOA.  The problem with recommending this approach is that, in most cases, succeeding with SOA isn’t the aim of the project. If a project is started with the simple aim of ‘Succeeding with SOA’ then the reasons for the projects existence probably need to be questioned.   There are a number of things we can be sure of: ·         An organisation will have a number of disparate IT systems ·         Some of these systems will have redundant data and functionality ·         Integration will give considerable ROI ·         Integration will already be under way. ·         Services will already exist in the organisation ·         These services will be inconsistent in their implementation and in their governance   So there are three goals here: 1.       Alignment between the business and IT 2.     Integration of disparate systems 3.     Management of services.   2 and 3 are going to happen,  in fact they must happen if any degree of return is expected from the IT department. Ignoring 1 is considered a typical mistake in SOA implementations, as it ignores the business implications. However, the business implication of this approach is the money saved in more efficient IT processes. 2 and 3 are ongoing, and they will continue happening, even if a large project to produce a SOA metamodel is started. The result will then be an unstructured cackle of services, and a metamodel that is already going out of date. So we get stuck in and rebuild our services so that they match the metamodel, with the far reaching consequences that this will have on all our LOB systems are current. Lets imagine that this actually works ( how often do we rip and replace working software because it doesn't fit a certain pattern? Never -that's the point of integration), we will now be working with a metamodel that is out of date, and most likely incomplete if the organisation is large.      Accepting that an object can have more than one model over time, with perhaps more than one model being  at any given time will help us realise the limitations of the top down model. It is entirely normal , and perhaps necessary, for an organisation to be able to view an entity from different perspectives.   
So, instead of trying to constantly force these goals in a straight line, why not let them happen in parallel, and manage the changes in each layer.     If  company A has chosen to model their business processes and create a business architecture, there will be a reason behind this. Often the aim is to make the business more flexible and able to cope with change, through alignment between the business and the IT department.   If company B’s IT department recognizes the problem of wild services springing up everywhere, and decides to do something about it, by designing a platform and processes for the introduction of services, is this not a valid approach?   With the hybrid approach, it is recommended that company A begin deploying services as quickly as possible. Based on models that are clearly incomplete, and which will therefore change rapidly and often in the near future. Natural business evolution will also mean that the models can be guaranteed to change in the not so near future. To ‘Succeed with SOA’ Company B needs to go back to the drawing board and start modeling processes and objects. So, in effect, we are telling business analysts to start developing code based on a model they are unsure of, and telling programmers to ignore the obvious and growing problems in their IT department and start drawing lines and boxes.     Could the problem be that there are two different problem domains? And the whole concept of SOA as it being described by clever salespeople today creates an example of oft dreaded ‘tight coupling’ between these two domains?   Could it be that we have taken two large problem areas, and bundled the solution together in order to create a magic bullet? And then convinced ourselves that the bullet actually exists?   Company A wants to have a closer relationship between the business and its IT department, in order to become a more flexible organization. Company B wants to decrease the maintenance costs of its IT infrastructure. If both companies focus on succeeding with SOA, then they aren’t focusing on their actual goals.   If Company A starts building services from incomplete models, without a gameplan, they will end up in the same situation as company B, with wild services. If company B focuses on modeling, they could easily end up with the same problems as company A.   Now we have two companies, who a short while ago had one problem each, that now have two problems each. This has happened because of a focus on ‘Succeeding with SOA’, rather than solving the problem at hand.   This is not to suggest that the two problem domains are unrelated, a strategy that encompasses both will obviously be good for the organization. But only if the organization realizes this and can develop such a strategy. This strategy cannot be bought in a box.       Anyone who has worked with SOA for a while will be used to analyzing the solutions to a problem and judging the solution’s level of coupling. If we have two applications that each perform separate functions, but need to communicate with each other, we create a integration layer between them, perhaps with a service, but we do all we can to reduce the dependency between the two systems. Using the same approach, we can separate the modeling (business architecture) and the service hosting (technical architecture).     The business architecture describes the processes and business objects in the business domain.   The technical architecture describes the hosting and management and implementation of services.   
The glue that binds these together, the integration layer in our analogy, is the service contract, where the operations map the processes to their technical implementation, and the messages map business concepts to software objects in the implementation.   If we reduce the coupling between these layers, we should be able to allow developers to develop services, and business analysts to develop models, without the changes rippling through from one side to the other.   This would allow company A to carry on modeling, and company B to develop a service platform, each achieving their intended goal, without necessarily creating the problems seen in pure top down or bottom up approaches. Company B could then at a later date map their service infrastructure to a unified model, and company A could carry on modeling, insulating deployed services from changes in the ongoing modeling.   How do we do this?  The concept of service virtualization has been around for a while, and is instantly realizable in Microsoft’s Managed Services Engine. Here we can create a layer of virtual services, which represent the business analyst’s view, presenting uniform contracts to the outside world. These services can then transform and route messages to the actual service implementations. I like to think of the virtual services with their beautifully modeled interfaces as ‘SOA services’, and the implementations as simple integration ‘adapter’ services providing an interface to a technical implementation. The Managed Services Engine also provides policy based control over services, regardless of where they are deployed, simplifying handling of security, logging, exception handling etc.   This solves a big problem. The pressure to deliver services quickly is always there in projects. It is very important to quickly show value when implementing service architectures. There is also pressure to deliver quality, and you can’t easily do both at the same time. This approach allows quick delivery with quality increasing over time, allowing modeling and service development to occur in parallel and independent of each other. The link between business modeling and service implementation is not one that is obvious to many organizations, and requires a certain maturity to realize and drive forward. It is also completely possible that a company can benefit from one without the other, even if this approach is frowned upon today, there are many companies doing so and seeing ROI.   Of course there are disadvantages to this. The biggest one being the transformations necessary between the virtual interfaces and the service implementations. Bad choices in developing the services in the service implementation could mean that it is impossible to map the modeled processes to the implementation with redevelopment of the service. In many cases the architect will not have a choice here anyway, as proprietary systems are often delivered with predeveloped services. The alternative is to wait until the model is finished and then build the service according the model. However, if that approach worked we wouldn’t be having this discussion! And even when it does work, natural business evolution will mean that the two concepts (model and implementation) will immediately start to drift away from each other, so coupling them tightly together so that they are forever bound to the model that only applies at the time of the modeling work will not really achieve a great deal. Architecture is all about trade offs, and here a choice has to be made. 
The choice is between something will initially be of low quality but will work, or something that may well be impossible to achieve in most situations.         In conclusion, top-down is a natural approach for business analysts, and bottom-up  is a natural approach for developers. Instead of trying to force something on both that neither want, and which has not shown itself to be successful,  why not let them get on with their jobs, and let an enterprise architect coordinate the processes?


  • Why Are Minimized Programs Often Slow to Open Again?

    - by Jason Fitzpatrick
    It seems particularly counterintuitive: you minimize an application because you plan on returning to it later and wish to skip shutting the application down and restarting it later, but sometimes maximizing it takes even longer than launching it fresh. What gives? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. The Question SuperUser reader Bart wants to know why he’s not saving any time with application minimization: I’m working in Photoshop CS6 and multiple browsers a lot. I’m not using them all at once, so sometimes some applications are minimized to taskbar for hours or days. The problem is, when I try to maximize them from the taskbar – it sometimes takes longer than starting them! Especially Photoshop feels really weird for many seconds after finally showing up, it’s slow, unresponsive and even sometimes totally freezes for minute or two. It’s not a hardware problem as it’s been like that since always on all on my PCs. Would I also notice it after upgrading my HDD to SDD and adding RAM (my main PC holds 4 GB currently)? Could guys with powerful pcs / macs tell me – does it also happen to you? I guess OSes somehow “focus” on active software and move all the resources away from the ones that run, but are not used. Is it possible to somehow set RAM / CPU / HDD priorities or something, for let’s say, Photoshop, so it won’t slow down after long period of inactivity? So what is the deal? Why does he find himself waiting to maximize a minimized app? The Answer SuperUser contributor Allquixotic explains why: Summary The immediate problem is that the programs that you have minimized are being paged out to the “page file” on your hard disk. This symptom can be improved by installing a Solid State Disk (SSD), adding more RAM to your system, reducing the number of programs you have open, or upgrading to a newer system architecture (for instance, Ivy Bridge or Haswell). Out of these options, adding more RAM is generally the most effective solution. Explanation The default behavior of Windows is to give active applications priority over inactive applications for having a spot in RAM. When there’s significant memory pressure (meaning the system doesn’t have a lot of free RAM if it were to let every program have all the RAM it wants), it starts putting minimized programs into the page file, which means it writes out their contents from RAM to disk, and then makes that area of RAM free. That free RAM helps programs you’re actively using — say, your web browser — run faster, because if they need to claim a new segment of RAM (like when you open a new tab), they can do so. This “free” RAM is also used as page cache, which means that when active programs attempt to read data on your hard disk, that data might be cached in RAM, which prevents your hard disk from being accessed to get that data. By using the majority of your RAM for page cache, and swapping out unused programs to disk, Windows is trying to improve responsiveness of the program(s) you are actively using, by making RAM available to them, and caching the files they access in RAM instead of the hard disk. The downside of this behavior is that minimized programs can take a while to have their contents copied from the page file, on disk, back into RAM. The time increases the larger the program’s footprint in memory. This is why you experience that delay when maximizing Photoshop. 
RAM is many times faster than a hard disk (depending on the specific hardware, it can be up to several orders of magnitude). An SSD is considerably faster than a hard disk, but it is still slower than RAM by orders of magnitude. Having your page file on an SSD will help, but it will also wear out the SSD more quickly than usual if your page file is heavily utilized due to RAM pressure. Remedies Here is an explanation of the available remedies, and their general effectiveness: Installing more RAM: This is the recommended path. If your system does not support more RAM than you already have installed, you will need to upgrade more of your system: possibly your motherboard, CPU, chassis, power supply, etc. depending on how old it is. If it’s a laptop, chances are you’ll have to buy an entire new laptop that supports more installed RAM. When you install more RAM, you reduce memory pressure, which reduces use of the page file, which is a good thing all around. You also make available more RAM for page cache, which will make all programs that access the hard disk run faster. As of Q4 2013, my personal recommendation is that you have at least 8 GB of RAM for a desktop or laptop whose purpose is anything more complex than web browsing and email. That means photo editing, video editing/viewing, playing computer games, audio editing or recording, programming / development, etc. all should have at least 8 GB of RAM, if not more. Run fewer programs at a time: This will only work if the programs you are running do not use a lot of memory on their own. Unfortunately, Adobe Creative Suite products such as Photoshop CS6 are known for using an enormous amount of memory. This also limits your multitasking ability. It’s a temporary, free remedy, but it can be an inconvenience to close down your web browser or Word every time you start Photoshop, for instance. This also wouldn’t stop Photoshop from being swapped when minimizing it, so it really isn’t a very effective solution. It only helps in some specific situations. Install an SSD: If your page file is on an SSD, the SSD’s improved speed compared to a hard disk will result in generally improved performance when the page file has to be read from or written to. Be aware that SSDs are not designed to withstand a very frequent and constant random stream of writes; they can only be written over a limited number of times before they start to break down. Heavy use of a page file is not a particularly good workload for an SSD. You should install an SSD in combination with a large amount of RAM if you want maximum performance while preserving the longevity of the SSD. Use a newer system architecture: Depending on the age of your system, you may be using an out of date system architecture. The “system architecture” is generally defined as the “generation” (think generations like children, parents, grandparents, etc.) of the motherboard and CPU. Newer generations generally support faster I/O (input/output), better memory bandwidth, lower latency, and less contention over shared resources, instead providing dedicated links between components. For example, starting with the “Nehalem” generation (around 2009), the Front-Side Bus (FSB) was eliminated, which removed a common bottleneck, because almost all system components had to share the same FSB for transmitting data. This was replaced with a “point to point” architecture, meaning that each component gets its own dedicated “lane” to the CPU, which continues to be improved every few years with new generations. 
You will generally see a more significant improvement in overall system performance depending on the “gap” between your computer’s architecture and the latest one available. For example, a Pentium 4 architecture from 2004 is going to see a much more significant improvement upgrading to “Haswell” (the latest as of Q4 2013) than a “Sandy Bridge” architecture from ~2010.

Links

Related questions: How to reduce disk thrashing (paging)? Windows Swap (Page File): Enable or Disable?

Also, just in case you’re considering it, you really shouldn’t disable the page file, as this will only make matters worse; see here. And, in case you needed extra convincing to leave the Windows Page File alone, see here and here.

Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • Additional new content SOA Partner Community

    - by JuergenKress
    Oracle Reference Architecture: Application Infrastructure Foundation — One of the earliest additions to the IT Strategies from Oracle library, this paper describes the concepts and capabilities of the application infrastructure and defines the platform on which solutions are built. Read it.

Scaling Service Oriented Architecture — What is scaling, and what does it mean to a service oriented architecture? Author Philip Wik explores those issues and proposes Oracle-based solutions to SOA scaling and a SOA scaling roadmap. Read it.

SOA, Cloud, and Service Technologies: A Conversation with Thomas Erl — Thomas Erl, the world's best-selling SOA author, is joined by Oracle SOA experts Tim Hall and Demed L'Her for a wide-ranging four-part conversation on the evolution of SOA and the emergence of the architect in the era of cloud computing. Listen to the Podcast & Read a Transcript.

Cloud e-book — Invite your customers to download this Cloud e-book, packed with multi-media resources to educate your customers on the value of Oracle Cloud computing.

Assessment: Are you Leading or Lagging when it comes to SOA and BPM? Take the online SOA Assessment and BPM Assessment.

New Collateral:
Whitepaper Series: The Promise of BPM Technology for Financial Services Institutions - Resource Kit
Whitepaper: Reaching Process Excellence with Process Accelerators - PDF
Demystifying Cloud Integration: Whitepapers, webcasts, and customer case studies - Resource Kit
Whitepaper: Leveraging Governance to sustain Enterprise Architecture - PDF
Article: Rethink SOA: A Recipe for Business Transformation - Article
Oracle SOA Resource Kit
Oracle SOA Governance Resource Kit
Oracle BPM Resource Kit

SOA & BPM Partner Community — For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Java SE 8 (with JavaFX) Developer Preview Release for ARM

    - by Roger Brinkley
    In an effort to get ARM developers testing Java SE 8 before the scheduled release later this year, a Java SE 8 Developer Preview Release for ARM has been made available. This release has been tested on the Raspberry Pi but should work on other ARM platforms. In addition to the new Java SE features, this release provides specific support for the hard float ABI on the Raspberry Pi, which a number of developers have been anticipating. Additionally, this release includes support for an optimized JavaFX. Specific configurations of JDK 8 on ARM are listed below:

JavaFX is supported on: ARM architecture v6/7 (hard float)
Supported platforms without JavaFX: ARM architecture v6/7 (hard float); ARM architecture v7 (VFP, little endian); ARM architecture v5 (soft float, little endian); Linux x86

The download page includes setup instructions for a Raspberry Pi device as well as demos and samples. Developers are also encouraged to try their own applications and to share their stories via the JavaFX or Project Feedback Forums. If you've got a Raspberry Pi or another ARM device, it's time to get started with the Java SE 8 Developer Preview release.

    Read the article

  • Visual Studio 2010 Extension Manager (and the new VS 2010 PowerCommands Extension)

    - by ScottGu
    This is the twenty-third in a series of blog posts I’m doing on the VS 2010 and .NET 4 release. Today’s blog post covers some of the extensibility improvements made in VS 2010 – as well as a cool new “PowerCommands for Visual Studio 2010” extension that Microsoft just released (and which can be downloaded and used for free). [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

Extensibility in VS 2010

VS 2010 provides a much richer extensibility model than previous releases. Anyone can build extensions that add, customize, and light up the Visual Studio 2010 IDE, Code Editors, Project System and associated Designers. VS 2010 Extensions can be created using the new MEF (Managed Extensibility Framework), which is built into .NET 4. You can learn more about how to create VS 2010 extensions from this blog post from the Visual Studio Team Blog.

VS 2010 Extension Manager

Developers building extensions can distribute them on their own (via their own web-sites or by selling them). Visual Studio 2010 also now includes a built-in “Extension Manager” within the IDE that makes it much easier for developers to find, download, and enable extensions online. You can launch the “Extension Manager” by selecting the Tools->Extension Manager menu option. This loads an “Extension Manager” dialog which accesses an “online gallery” at Microsoft, and then populates a list of available extensions that you can optionally download and enable within your copy of Visual Studio. There are already hundreds of cool extensions populated within the online gallery. You can browse them by category (use the tree-view on the top-left to filter them). Clicking “download” on any of the extensions will download, install, and enable it.

PowerCommands for Visual Studio 2010

This weekend Microsoft released the free PowerCommands for Visual Studio 2010 extension to the online gallery. You can learn more about it here, and download and install it via the “Extension Manager” above (search for PowerCommands to find it). The PowerCommands download adds dozens of useful commands to Visual Studio 2010, including a number of handy additions to the Solution Explorer context menus. Below is a list of all the commands included with this weekend’s PowerCommands for Visual Studio 2010 release:

Enable/Disable PowerCommands in Options dialog
This feature allows you to select which commands to enable in the Visual Studio IDE. Point to the Tools menu, then click Options. Expand the PowerCommands options, then click Commands. Check the commands you would like to enable. Note: all PowerCommands are enabled by default.

Format document on save / Remove and Sort Usings on save
The Format document on save option formats the tabs, spaces, and so on of the document being saved. It is equivalent to pointing to the Edit menu, clicking Advanced, and then clicking Format Document. The Remove and sort usings option removes unused using statements and sorts the remaining using statements in the document being saved. Note: the Remove and sort usings option is only available for C# documents. Both options are off by default.

Clear All Panes
This command clears all output panes. It can be executed from the button on the toolbar of the Output window.

Copy Path
This command copies the full path of the currently selected item to the clipboard.
It can be executed by right-clicking one of these nodes in the Solution Explorer: the solution node, a project node, any project item node, or any folder.

Email CodeSnippet
To email the lines of text you select in the code editor, right-click anywhere in the editor and then click Email CodeSnippet.

Insert Guid Attribute
This command adds a Guid attribute to a selected class. From the code editor, right-click anywhere within the class definition, then click Insert Guid Attribute.

Show All Files
This command shows the hidden files in all projects displayed in the Solution Explorer when the solution node is selected. It enhances the Show All Files button, which normally shows only the hidden files in the selected project node.

Undo Close
This command reopens a closed document, returning the cursor to its last position. To reopen the most recently closed document, point to the Edit menu, then click Undo Close. Alternately, you can use the Ctrl+Shift+Z shortcut. To reopen any other recently closed document, point to the View menu, click Other Windows, and then click Undo Close Window. The Undo Close window appears, typically next to the Output window. Double-click any document in the list to reopen it.

Collapse Projects
This command collapses a project or projects in the Solution Explorer starting from the root selected node. Collapsing a project can increase the readability of the solution. This command can be executed from three different places: solution, solution folder and project nodes respectively.

Copy Class
This command copies a selected class's entire content to the clipboard, renaming the class. This command is normally followed by a Paste Class command, which renames the class to avoid a compilation error. It can be executed from a single project item or a project item with dependent sub items.

Paste Class
This command pastes a class's entire content from the clipboard, renaming the class to avoid a compilation error. This command is normally preceded by a Copy Class command. It can be executed from a project or folder node.

Copy References
This command copies a reference or set of references to the clipboard. It can be executed from the references node, a single reference node or a set of reference nodes.

Paste References
This command pastes a reference or set of references from the clipboard. It can be executed from different places depending on the type of project. For C# projects it can be executed from the references node. For Visual Basic and Website projects it can be executed from the project node.

Copy As Project Reference
This command copies a project as a project reference to the clipboard. It can be executed from a project node.

Edit Project File
This command opens the MSBuild project file for a selected project inside Visual Studio. It combines the existing Unload Project and Edit Project commands.

Open Containing Folder
This command opens a Windows Explorer window pointing to the physical path of a selected item. It can be executed from a project item node.

Open Command Prompt
This command opens a Visual Studio command prompt pointing to the physical path of a selected item. It can be executed from four different places: solution, project, folder and project item nodes respectively.

Unload Projects
This command unloads all projects in a solution. This can be useful in MSBuild scenarios when multiple projects are being edited. This command can be executed from the solution node.

Reload Projects
This command reloads all unloaded projects in a solution.
It can be executed from the solution node.

Remove and Sort Usings
This command removes and sorts using statements for all classes in a given project. It is useful, for example, in removing or organizing the using statements generated by a wizard. This command can be executed from a solution node or a single project node.

Extract Constant
This command creates a constant definition statement for a selected text. Extracting a constant effectively names a literal value, which can improve readability. This command can be executed from the code editor by right-clicking selected text.

Clear Recent File List
This command clears the Visual Studio recent file list. The Clear Recent File List command brings up a Clear File dialog which allows any or all recent files to be selected.

Clear Recent Project List
This command clears the Visual Studio recent project list. The Clear Recent Project List command brings up a Clear File dialog which allows any or all recent projects to be selected.

Transform Templates
This command executes a custom tool with associated text template items. It can be executed from a DSL project node or a DSL folder node.

Close All
This command closes all documents. It can be executed from a document tab.

How to temporarily disable extensions

Extensions provide a great way to make Visual Studio even more powerful, and can help improve your overall productivity. One thing to keep in mind, though, is that extensions run within the Visual Studio process (DevEnv.exe), and so a bug within an extension can impact both the stability and performance of Visual Studio. If you ever run into a situation where things seem slower than they should be, or if you crash repeatedly, please temporarily disable any installed extensions and see if that fixes the problem. You can do this for extensions that were installed via the online gallery by re-running the Extension Manager (using the Tools->Extension Manager menu option), selecting the “Installed Extensions” node on the top-left of the dialog, and then clicking “Disable” on any of the extensions in your installed list.

Hope this helps,

Scott

    Read the article

  • Of transactions and Mongo

    - by Nuri Halperin
    Originally posted on: http://geekswithblogs.net/nuri/archive/2014/05/20/of-transactions-and-mongo-again.aspx

What's the first thing you hear about NoSQL databases? That they lose your data? That there are no transactions? No joins? No hope for "real" applications? Well, you *should* be wondering whether a certain kind of database is the right one for your job. But if you do so, you should be wondering that about "traditional" databases as well!

In the spirit of exploration, let's take a look at a common challenge: You are a bank. You have customers with accounts. Customer A wants to pay B. You want to allow that only if A can cover the amount being transferred.

Let's look at the problem without any context of any database engine in mind. What would you do? How would you ensure that the amount transfer is done "properly"? Would you prevent a "transaction" from taking place unless A can cover the amount? There are several options:

1. Prevent any change to A's account while the transfer is taking place. That boils down to locking.
2. Apply the change, and allow A's balance to go below zero. Charge person A some interest on the negative balance. Not friendly, but certainly a choice.
3. Don't do either.

Options 1 and 2 are difficult to attain in the NoSQL world. Mongo won't save you headaches here either. Option 3 looks a bit harsh. But here's where this can go: ledger. See, an account doesn't need to be represented by a single row in a table of all accounts with only the current balance on it. More often than not, accounting systems use ledgers. And entries in ledgers - as it turns out - don't actually get updated. Once a ledger entry is written, it is not removed or altered. A transaction is represented by an entry in the ledger stating an amount withdrawn from A's account and an entry in the ledger stating an addition of said amount to B's account. For the sake of space, the two sides of the transfer can be recorded as a single entry. Think {Timestamp, FromAccountId, ToAccountId, Amount}.

The implication of the original question - "how do you enforce the non-negative balance rule?" - then boils down to:

1. Insert an entry in the ledger.
2. Run validation of recent entries.
3. Insert a reverse entry to roll back the transaction if validation failed.

What is validation? Sum up the transactions that A's account has (all deposits and debits), and ensure the balance is positive. For the sake of efficiency, one can roll up transactions and "close the book" on transactions with a pseudo entry stating the balance as of midnight or something. This lets you avoid doing math on the fly on too many transactions. You simply run from the latest "approved balance" marker to date. But that's an optimization, and premature optimizations are the root of (some? most?) evil.

Back to some nagging questions though: "But mongo is only eventually consistent!" Well, yes, kind of. It's not actually true that Mongo has no transactions. It would be more descriptive to say that Mongo's transaction scope is a single document in a single collection. A write to a Mongo document happens completely or not at all. So although it is true that you can't update more than one document "at the same time" under a "transaction" umbrella as an atomic update, it is NOT true that there is no isolation. So a competition between two concurrent updates is completely coherent and the writes will be serialized. They will not scribble on the same document at the same time.
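To make the ledger idea concrete, here is a minimal sketch of what such an entry might look like in the mongo shell of that era. The collection name "ledger", the field names, and the "validated" flag are illustrative assumptions for this post's scenario, not anything MongoDB prescribes:

    // One ledger entry records both sides of the transfer in a single document,
    // because a single-document write in MongoDB is all-or-nothing.
    db.ledger.insert({
        ts: new Date(),      // Timestamp
        from: "A",           // FromAccountId
        to: "B",             // ToAccountId
        amount: 150.00,      // Amount transferred
        validated: false     // flag for the validation step described later
    });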
In our case - in choosing a ledger approach - we're not even trying to "update" a document; we're simply adding a document to a collection. So there goes the "no transaction" issue.

Now let's turn our attention to consistency. What you should know about mongo is that at any given moment, only one member of a replica set is writable. This means that the writable instance in a set of replicated instances always has "the truth". There could be a replication lag such that a reader going to one of the replicas still sees an "old" state of a collection or document. But in our ledger case, things fall nicely into place: run your validation against the writable instance. It is guaranteed to have a ledger either with (after) or without (before) the ledger entry got written. No funky states. Again, the ledger writing *adds* a document, so there's no inconsistent document state to be had either way.

Next, we might worry about data loss. Here, mongo offers several write concerns. A write concern in Mongo is a mode that marshals how uptight you want the db engine to be about actually persisting a document write to disk before it reports to the application that it is "done". The most volatile is to say you don't care. In that case, mongo would just accept your write command and say back "thanks" with no guarantee of persistence. If the server loses power at the wrong moment, it may have said "ok" but not actually written the data to disk. That's kind of bad. Don't do that with data you care about. It may be good for votes on a poll regarding how cute a furry animal is, but not so good for business. There are several other write concerns, varying from flushing the write to the disk of the writable instance, to flushing to disk on several members of the replica set, a majority of the replica set, or all of the members of a replica set. The first choice is the quickest, as no network coordination is required beyond the main writable instance. The others impose extra network and time cost. Depending on your tolerance for latency and read lag, you will face a choice of what works for you. It's really important to understand that no data loss occurs once a document is flushed to an instance. The record is on disk at that point. From that point on, backup strategies and disaster recovery are your worry, not loss of power to the writable machine. This scenario is not different from a relational database at that point.

Where does this leave us? Oh, yes. Eventual consistency. By now, we ensured that the "source of truth" instance has the correct data, persisted and coherent. But because of lag, the app may have gone to the writable instance, performed the update, and then gone to a replica and looked at the ledger there before the transaction replicated. Here are two options to deal with this. Similar to write concerns, mongo supports read preferences. An app may choose to read only from the writable instance. This is not an awesome choice to make for every read, because it just burdens the one instance and doesn't make use of the other read-only servers. But this choice can be made on a query-by-query basis. So for the app that our person A is using, we can have person A issue the transfer command to B, and then if that same app is going to immediately ask "are we there yet?", we'll query that same writable instance. But B and anyone else in the world can just chill and read from the read-only instance. They have no basis to expect that the ledger has just been written to. (A quick sketch of what these two knobs look like follows below.)
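As a rough sketch of those two knobs in the mongo shell of roughly the 2.6 era this post dates from (the exact option names vary across driver and server versions, so treat this as illustrative rather than canonical, and the collection and field names are the same assumed ones as above):

    // Write concern: wait until a majority of the replica set has the entry
    // (and it is journaled) before reporting success back to the application.
    db.ledger.insert(
        { ts: new Date(), from: "A", to: "B", amount: 150.00, validated: false },
        { writeConcern: { w: "majority", j: true } }
    );

    // Read preference: the transferring app checks its own transfer on the
    // primary, while everyone else can read from a secondary and tolerate lag.
    db.ledger.find({ from: "A" }).readPref("primary");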
So as far as those secondary readers know, the transaction hasn't happened until they see it appear later. We can further relax the demand by creating an application UI that reacts to a write command with "thank you, we will post it shortly" instead of "thank you, we just did everything and here's the new balance". This is a very powerful thing. UI design for highly scalable systems can't insist that all databases be locked just to paint an "all done" on screen. People understand. They were trained by many online businesses already that placing an order does not mean that your product is already outside your door waiting (yes, I know, large retailers are working on it... but we're not there yet).

The second thing we can do is add some artificial delay to a transaction's visibility on the ledger. The way that works is simply adding some logic such that the query against the ledger never shows customers a transaction that is newer than, say, 15 minutes and whose validation flag is not set. This buys us time in two ways: replication can catch up to all instances by then, and validation rules can run and determine if this transaction should be "negated" with a compensating transaction. In case we do need to "roll back" the transaction, the backend system can place the timestamp of the compensating transaction at the exact same time or 1 ms after the original one. Effectively, once A or B visits their ledger, both transactions would be visible and the overall balance "as of now" would reflect no change. The two transactions (attempted / reverted) would be visible, since we do actually account for the attempt.

Hold on a second. There's a hole in the story: what if several transfers from A to some accounts are registered, and two independent validators attempt to compute the balance concurrently? Is there a chance that both would conclude non-sufficient funds even though rolling back transaction 100 would free up enough for transaction 117 (some random later transaction)? Yes, there is that chance. But the integrity of the business rule is not compromised, since the prime rule is: don't dispense money you don't have. To minimize or eliminate this scenario, we can also assign a single validation process per origin account. This may seem non-scalable, but it can easily be done as a "sharded" distribution. Say we have 11 validation threads (or processing nodes, etc.). We divide the account number space such that each validator is exclusively responsible for a certain range of account numbers. Sounds cunningly similar to Mongo's sharding strategy, doesn't it? Each validator then works in isolation. More capacity needed? Chop the account space into more chunks.

So where are we now with the nagging questions? "No joins": Huh? What are those for? "No transactions": You mean no cross-collection and no cross-document transactions? Granted - but you don't always need them either. "No hope for real applications": well... There are more issues and edge cases to slog through, I'm sure. But hopefully this gives you some ideas of how to solve common problems without distributed locking and relational databases. But then again, you can choose relational databases if they suit your problem.
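To tie the ledger, the validation flag, and the compensating entry together, here is a rough, hypothetical validation pass written for the mongo shell. It reuses the illustrative collection and field names assumed above; a real system would run something like this per account shard, as described in the post, and with far more care around concurrency and error handling:

    // Validate the most recent unvalidated transfer out of an account: sum the ledger
    // and, if the balance went negative, insert a compensating entry 1 ms later.
    function validateLatestTransfer(accountId) {
        var balance = 0;
        db.ledger.find({ $or: [ { from: accountId }, { to: accountId } ] }).forEach(function (e) {
            balance += (e.to === accountId ? e.amount : -e.amount);
        });

        var cur = db.ledger.find({ from: accountId, validated: false })
                           .sort({ ts: -1 }).limit(1);
        if (!cur.hasNext()) {
            return;  // nothing pending for this account
        }
        var latest = cur.next();

        if (balance < 0) {
            // Roll back by compensating, not by deleting: the attempt stays on record.
            db.ledger.insert({
                ts: new Date(latest.ts.getTime() + 1),
                from: latest.to,
                to: latest.from,
                amount: latest.amount,
                validated: true,
                compensates: latest._id
            });
        } else {
            db.ledger.update({ _id: latest._id }, { $set: { validated: true } });
        }
    }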

    Read the article

  • dynatree: how can i select child node programmatically

    - by Muhammad Adeel Zahid
    Hello everyone. I'm using jQuery's dynaTree in my application and I want to select all the child nodes programmatically when a node is selected. The structure of my tree is as follows:

    <div id="tree">
      <ul>
        <li>package 1
          <ul>
            <li>module 1.1
              <ul>
                <li>document 1.1.1</li>
                <li>document 1.1.2</li>
              </ul>
            </li>
            <li>module 1.2
              <ul>
                <li>document 1.2.1</li>
                <li>document 1.2.2</li>
              </ul>
            </li>
          </ul>
        </li>
        <li>package 2
          <ul>
            <li>module 2.1
              <ul>
                <li>document 2.1.1</li>
                <li>document 2.1.1</li>
              </ul>
            </li>
          </ul>
        </li>
      </ul>
    </div>

Now what I want is that when I click on the tree node with title "package 1", all its child nodes (i.e. module 1.1, document 1.1.1, document 1.1.2, module 1.2, document 1.2.1, document 1.2.2) should also be selected. Below is the approach I tried:

    $("#tree").dynatree({
      onSelect: function(flag, dtnode) {
        // This will happen each time a check box is selected/deselected
        var selectedNodes = dtnode.tree.getSelectedNodes();
        var selectedKeys = $.map(selectedNodes, function(node) {
          //alert(node.data.key);
          return node.data.key;
        });
        // Set the hidden input field's value to the selected items
        $('#SelectedItems').val(selectedKeys.join(","));
        if (flag) {
          child = dtnode.childList;
          alert(child.length);
          for (i = 0; i < child.length; i++) {
            var x = child[i].select(true);
            alert(i);
          }
        }
      },
      checkbox: true,
      onActivate: function(dtnode) {
        //alert("You activated " + dtnode.data.key);
      }
    });

In the if (flag) condition I get all the child nodes of the element selected by the user, and it gives me the correct count, as I can see from the alert(child.length) statement. Then I run the loop to select all the children, but the loop never goes beyond the statement var x = child[i].select(true), and I never see the alert(i) statement execute. The result of the above code is that if I select package 1, module 1.1 and document 1.1.1 are also selected, but alert(i) never fires and the other children of package 1 are never selected. In my view, when child[i].select(true) is executed the first time, it also triggers the onSelect event of its children, creating a kind of recursion - is my thinking correct? Whether it is recursion or something else, why does it not complete the loop and execute the very next instruction, alert(i)? Please help me solve this problem; I'm dying to see that alert. Any suggestion and help is highly appreciated.

Thanks, Adeel
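One possible direction, not from the original question and written against the dynatree node API as I recall it (in particular node.visit() and node.select(), so verify the method names against the dynatree version in use): walk the clicked node's subtree and guard the handler so the child selections it triggers don't re-enter the loop.

    var programmaticSelect = false;  // guard so onSelect ignores selections we trigger ourselves

    $("#tree").dynatree({
      checkbox: true,
      onSelect: function(flag, dtnode) {
        if (programmaticSelect) {
          return;  // this call was caused by our own child selection below
        }
        programmaticSelect = true;
        try {
          // visit() is assumed to walk all descendants of the clicked node
          dtnode.visit(function(childNode) {
            childNode.select(flag);
          });
        } finally {
          programmaticSelect = false;
        }

        // Recompute the selected keys once, after the whole subtree has been updated
        var selectedKeys = $.map(dtnode.tree.getSelectedNodes(), function(node) {
          return node.data.key;
        });
        $('#SelectedItems').val(selectedKeys.join(","));
      }
    });

The guard variable is the key design choice here: since select(true) on a child fires onSelect again, the handler needs a way to tell its own programmatic selections apart from the user's click.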

    Read the article
