Search Results

Search found 7651 results on 307 pages for 'execution plan'.

Page 154/307

  • Takeaways From CSO Roundtable New York

    - by Naresh Persaud
    Thanks to everyone who attended the Chief Security Officer Roundtable in New York last week. We were lucky to have Dennis Brixius, CSO of McGraw-Hill, as a guest speaker. In addition, Jeff Henley provided a board-level perspective on security, and Amit Jasuja discussed Oracle's security formula. A few interesting takeaways from Jeff's talk: Security is a board-level issue, but the challenge is that boards have a short attention span; the CSO needs to be vigilant in educating the board on the strategic importance of security. Every CSO needs to think about cost: the CSO has to look at the economics of security and demonstrate fiduciary responsibility. We have to think of security as a business enabler, the enabler that helps us expand into new markets and connect better with our customers and partners. And while the CSO can't prevent every threat, we have to expect the CSO to have a plan. The "Oracle Security Formula" slides are available from OracleIDM.

    Read the article

  • How atomic is a SELECT INTO?

    - by leo.pasta
    Last week I got an interesting situation that prompted me to challenge a long-standing assumption. I always thought that a SELECT INTO was an atomic statement, i.e. it would either complete successfully or the table would not be created. So I was very surprised when, after a "select into" query was chosen as a deadlock victim, the next execution (as the app would handle the deadlock and retry) failed with: Msg 2714, Level 16, State 6, Line 1 There is already an object named '#test' in the database. The only hypothesis we could come up with was that the "create table" part of the statement was committed independently from the actual "insert". We can confirm that by capturing the "Transaction Log" event in Profiler (filtering by SPID). When we run: SELECT * INTO #results FROM master.sys.objects the Profiler output makes it easy to see the two independent transactions. Although this behaviour was a surprise to me, it is very easy to work around if you feel the need (as we did in this case). You can either change it into independent "CREATE TABLE / INSERT ... SELECT" statements, or you can enclose the SELECT INTO in an explicit transaction: SET XACT_ABORT ON; BEGIN TRANSACTION; SELECT * INTO #results FROM master.sys.objects; COMMIT; (both workarounds are sketched below).
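
    A minimal T-SQL sketch of the two workarounds described above (the column choices and the second temp table name are assumptions for illustration, not from the original post):

        -- Workaround 1: split into an explicit CREATE TABLE plus INSERT ... SELECT,
        -- so a retried INSERT never re-runs the implicit CREATE TABLE
        CREATE TABLE #results (name sysname, object_id int);
        INSERT INTO #results (name, object_id)
        SELECT name, object_id FROM master.sys.objects;

        -- Workaround 2: enclose the SELECT INTO in an explicit transaction, so the
        -- implicit CREATE TABLE and the INSERT commit or roll back together
        SET XACT_ABORT ON;  -- roll back the whole transaction on any error
        BEGIN TRANSACTION;
            SELECT * INTO #results2 FROM master.sys.objects;
        COMMIT;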

    Read the article

  • How do you avoid jumping to a solution when under pressure? [closed]

    - by GlenPeterson
    When under a particularly strict programming deadline (like an hour), if I panic at all, my tendency is to jump into coding without a real plan and hope I figure it out as I go along. Given enough time this can work, but in an interview it's been pretty unsuccessful, if not downright counter-productive. I'm not always comfortable sitting there thinking while the clock ticks away. Is there a checklist, or are there techniques, to recognize when you understand the problem well enough to start coding? Maybe don't touch the keyboard for the first 5-10 minutes of the problem? At what point do you give up and code a brute-force solution with the hope of reasoning out a better solution later? A related follow-up question might be, "How do you ensure that you are solving the right problem?" Or, "When is it most productive to think and design more vs. code some experiments and figure out the design later?" EDIT: One close vote already, but I'm not sure why. I wrote this in the first person, but I doubt I'm the only programmer to ever choke in an interview. Here is a list of techniques for taking a math test, and another for taking an oral exam. Maybe I'm not expressing myself well, but I'm asking whether there is a similar list of techniques for handling a programming problem under pressure?

    Read the article

  • Are you ready for SharePoint 2010?

    - by Michael Van Cleave
    With SharePoint's next release on the horizon (May 12th), many of my clients and colleagues are starting to ramp up for the upcoming tidal wave of functionality. Microsoft has been doing a terrific job of getting as much information out into the public limelight as possible over the last few months, and I think that will definitely pay off with regard to acceptance of the new version of SharePoint. However, there are still some aspects of the new platform that are a little murky, such as: "Should we upgrade?" "Will my current installation upgrade without issues?" "What benefits will I see by upgrading?" "What are the best practices for upgrading, or best practices in general relating to 2010?" "How should we plan to deploy SharePoint 2010 in our organization?" There is a ton of information out there, but how do you go about getting some of these questions answered? Well, I am glad you asked. :) ShareSquared will be delivering a FREE SharePoint 2010 Readiness Webinar covering preparation, strategies, and best practices for the upcoming version of SharePoint. The webinar will be presented by two of ShareSquared's outstanding SharePoint MVPs, Gary Lapointe and Paul Stork. As all those TV commercials say... "Space is limited, so sign up now!" Just kidding; well, kind of, but not really. I am sure that signups will be huge and space really is limited, so the sooner you sign up the better; I would hate for any of you to miss out. If you have any questions, please don't hesitate to shoot me an e-mail through my blog or contact ShareSquared directly. See you at the webinar! Michael

    Read the article

  • Want to be a Speaker at Oracle OpenWorld 2012?

    - by Tony Berk
    Yes, your calendar is correct: it is March, but planning has already started for Oracle OpenWorld 2012. So if you want to be a speaker and propose your own session for this year's event in San Francisco, September 30th - October 4th, start thinking now! The annual OpenWorld Call for Papers is open until April 9th, and all of the details to submit a paper are available here. Of course we are interested in sessions around any of the Oracle CRM products, but the Call for Papers is open to all Oracle topics. When submitting your topic, be sure to describe what you plan to discuss and the value of the presentation to other attendees. Sell your session, because there will be a lot of competition to be selected. Bonus news: speakers for selected sessions receive a complimentary full conference pass! Get your papers in and we'll see you in San Francisco! Finally, if there are topics you would like to hear about this year, let us know: comment on our Facebook page or on Twitter @OracleCRM with hashtag #oow12crm.

    Read the article

  • Optimize Many-to-Many with SUMMARIZE and Other Techniques

    - by Marco Russo (SQLBI)
    We are still in the early days of DAX, and even though I have been using it for two years, there is still a lot to learn. One of the topics that has historically interested me (and many of the readers here, probably) is many-to-many relationships between dimensions in a dimensional data model. When Alberto and I wrote The Many-to-Many Revolution 2.0, we discovered the SUMMARIZE-based pattern very late in the writing of the whitepaper. It is very important for performance optimization and should always be used. In the last month, Gerhard Brueckl also presented an approach based on cross table filtering behavior that simplifies the syntax involved, even if it's harder to explain how it works internally. I published a short article titled Optimize Many-to-Many Calculation in DAX with SUMMARIZE and Cross Table Filtering on the SQLBI website just to provide a quick reference to the three patterns available (two of them are sketched below). Further study is still required to compare performance between the SUMMARIZE and cross table filtering patterns. Up to now, I haven't observed big differences between them, even if their execution plans might not be identical, and this suggests to me that depending on other conditions you might favor one over the other.
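
    For quick reference, minimal DAX sketches of the two patterns being compared (the Sales, Account, and BridgeAccountCustomer names are illustrative assumptions, not the model from the whitepaper):

        -- SUMMARIZE-based pattern: filter by the set of dimension keys
        -- that actually appear in the bridge table
        SalesAmountM2M :=
        CALCULATE (
            SUM ( Sales[Amount] ),
            SUMMARIZE ( BridgeAccountCustomer, Account[AccountKey] )
        )

        -- Cross table filtering pattern: the bridge table itself is used
        -- as a filter argument, which simplifies the syntax
        SalesAmountM2M :=
        CALCULATE (
            SUM ( Sales[Amount] ),
            BridgeAccountCustomer
        )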

    Read the article

  • Roadmap for Thinktecture IdentityServer

    - by Your DisplayName here!
    I got asked today if I could publish a roadmap for thinktecture IdentityServer (idsrv for short). Well, I got a lot of feedback after B1, and one of the biggest points was the data access layer. So I made two changes: I moved the configuration database access code to EF 4.1 code first. That makes it much easier to change the underlying database, so it is now just a matter of changing the connection string to use real SQL Server instead of SQL Compact; important when you plan to scale out. I also included the ASP.NET Universal Providers in the download. This adds official support for SQL Azure, SQL Server, and SQL Compact for the membership, roles, and profile features. Unfortunately the Universal Providers use a different schema than the original ASP.NET providers (that sucks, btw!), so I made them optional: if you want to use them, go to web.config and uncomment the new providers. Then there are some other small changes: The relying party registration entries now have extra fields for data that you want to couple with the RP. One use case could be to give the UI a hint about how the login experience should look per RP; this allows a different look and feel for different relying parties. I also included a small helper API that you can use to retrieve the RP record based on the incoming WS-Federation query string. WS-Federation single sign-out now conforms to the spec. I made certificate-based endpoint identities for SSL endpoints optional, since they caused some problems with configuration and versioning of existing clients. I hope I can release the RC in the next few days; if there are no major issues, RTM will follow very soon!

    Read the article

  • The Oracle VM Hall of Fame

    - by Kristin Rose
    “Take me out to the ball game, take me out to the crowd. Buy me a new Oracle VM, I want my competition to be history!”... Yes, baseball is in full swing, and as we make our way to the close of the quarter, Oracle is ready to knock it out of the park with its newly updated release of Oracle VM 3.1. This home run of a server virtualization solution will let you deploy software faster, as it intelligently manages your entire infrastructure, from application to disk. As if that wasn't enough, the competition can't even get on base! Have a look at the final score below: Partners will be hitting grand slams left and right, because management tools, application templates, and single-source support have all teamed up to create one heck of a curve ball for the competition, but more importantly, an absolute first-round draft pick for our partners. With no license cost and an affordable enterprise support cost, crowds have gathered to see this 'All Star' play some hardball. Watch as Jeff Doolan, Sr. Director of Linux and Virtualization Channel Sales at Oracle, goes into more depth on how Oracle VM is a real game changer that eliminates the competition. Adding to the line-up are two key components of Oracle VM 3.1: Enhanced ease of use: the new GUI design is engineered for faster workflow execution, maximum ease of use, and reduced deployment time; administrators have more time to spend at the ballpark, or to focus on the business. New Oracle VM Templates: Oracle E-Business Suite 12.1.3, Oracle PeopleSoft FSCM 9.1, Oracle Enterprise Manager 12c, Oracle Linux 5.8, Oracle Linux 6.1, and Oracle Solaris 11, which add to the 100+ existing templates ready for download. Oracle VM Templates are pre-configured as an entire stack, OS and application included, fully tested, production-ready, and certified by Oracle. For more information on Oracle's newest player, Oracle VM 3.1, read this press release or visit our technology information page. Batter Up, The OPN Communications Team

    Read the article

  • Is shipping a Clojure desktop app realistic?

    - by Cedric Martin
    I'm currently shipping a desktop Java application. It is a plain old Java 5 Java/Swing app, and so far everything has worked nicely. Java 5 was targeted because some users were on OS X versions/computers that will never have Java 6 (we may lift this limitation soon, switch to a newer Java, and simply abandon the users stuck with Java 5). I'm quickly getting up to speed with Clojure, but I haven't really done lots of Clojure-to-Java and Java-to-Clojure work yet, and I was wondering if it is realistic to ship a Clojure desktop application instead of a Java application. The application I'm shipping is currently about 12 MB with all the .jar files, so adding Clojure doesn't seem to be too much of an issue. My plan would be to have Clojure call the Java APIs: my application is already divided into several independent jars. If I understand correctly, calling Clojure from Java is harder than calling Java code from Clojure, which is why I'd basically rewrite all the UI (part of the UI, mixing Swing components and self-made BufferedImages, needs to be rewritten anyway due to the rise of retina displays) and do all the 'wiring' from Clojure. So that's the problem I'm facing: is it realistic to ship a Clojure desktop app? (It certainly doesn't seem to be very widespread, but then shipping plain Java desktop apps ain't that common either, and I'm doing it anyway.) Technically, what would need to be done, compared to shipping a Java app?

    Read the article

  • Welcome to the Oracle FedApps blog

    - by jeffrey.waterman
    Congratulations, you have stumbled upon Oracle's newest blog: the Federal Applications Blog. Periodically I plan to provide some insight into how Oracle's application solutions are being applied, or how they can be applied, within the Federal Government. If you are a user of, or just interested in, Oracle's applications in the Federal space and have questions or topics you would like to see addressed in this blog, please post a comment. So bear with me as I take a bit of time to refine the content, look, and feel of this blog. http://www.oracle.com/us/industries/public-sector/038044.htm http://www.oracle.com/us/industries/public-sector/038046.htm -- JMW

    Read the article

  • Optimizing MySQL, Improving Performance of Database Servers

    - by Antoinette O'Sullivan
    Optimization involves improving the performance of a database server and of the queries that run against it. Optimization reduces query execution time, and optimized queries benefit everyone who uses the server. When the server runs more smoothly and processes more queries with less effort, it performs better as a whole. To learn more about how a MySQL developer can make a difference with optimization, take the MySQL for Developers training course. This 5-day instructor-led course is available as: Live-Virtual Event: attend a live class from your own desk, no travel required; choose from a selection of events on the schedule to suit different timezones. In-Class Event: travel to an education center to attend an event. Below is a selection of the events on the schedule.

    Location                    Date               Delivery Language
    Vienna, Austria             17 November 2014   German
    Brussels, Belgium           8 December 2014    English
    Sao Paulo, Brazil           14 July 2014       Brazilian Portuguese
    London, England             29 September 2014  English
    Belfast, Ireland            6 October 2014     English
    Dublin, Ireland             27 October 2014    English
    Milan, Italy                10 November 2014   Italian
    Rome, Italy                 21 July 2014       Italian
    Nairobi, Kenya              14 July 2014       English
    Petaling Jaya, Malaysia     25 August 2014     English
    Utrecht, Netherlands        21 July 2014       English
    Makati City, Philippines    29 September 2014  English
    Warsaw, Poland              25 August 2014     Polish
    Lisbon, Portugal            13 October 2014    European Portuguese
    Porto, Portugal             13 October 2014    European Portuguese
    Barcelona, Spain            7 July 2014        Spanish
    Madrid, Spain               3 November 2014    Spanish
    Valencia, Spain             24 November 2014   Spanish
    Basel, Switzerland          4 August 2014      German
    Bern, Switzerland           4 August 2014      German
    Zurich, Switzerland         4 August 2014      German

    The MySQL for Developers course helps prepare you for the MySQL 5.6 Developer OCP certification exam. To register for an event, request an additional event, or learn more about the authentic MySQL curriculum, go to http://education.oracle.com/mysql.

    Read the article

  • Are there good resources for leading documentation for an existing software product having none?

    - by Ben Rose
    Hello. I'm a software developer at a technology company. I have been tasked with leading the documentation effort for the product I work on, both internally for developers and spilling over into facilitating the business side of requirements documentation. This internal product has been around for at least 6 years. One challenge is that this software application has no form of documentation other than some small, outdated pieces here and there. There are comments in the code, but they are technical and do not convey any over-arching behavior (even on the technical side). As a consequence of having little to no documentation, this product is often unnecessarily complex under the covers, adding to the challenge. We are very limited in the time we will be given to work on documentation. Another thing about me: I've displayed some ability in writing/communication around the office, but I'm not coming from any sort of documentation or formal writing background (beyond my academic career). Please share your advice or recommend resources (book/website/forum/whatever) for helping me come up with a plan with milestones, best practices, task delegation, templates, buy-in, etc. I'm hoping for a resource that targets, or at least gives special mention to, introducing good documentation on existing projects where there previously was none. I would be very grateful for your responses. Ben

    Read the article

  • Updated Batch Best Practices

    - by ACShorten
    The Batch Best Practices whitepaper has been updated and published to My Oracle Support with the latest advice and new facilities. Two of the more interesting updates are: Addition of a Batch Cache Flush - Just like the online component, the batch component of the framework caches data. It is now possible to refresh the cache manually using a new batch job, F1-FLUSH. This is particularly useful if you execute long-running threadpool workers across many different batch processes (submitters). New EXTENDED execution mode - A new specialist mode has been introduced for sites that use a large number of submitters (concurrent threads) and are experiencing intermittent communication issues in the threadpoolworker. This mode uses Oracle Coherence's Extend mode to allow submitters to be allocated to threadpoolworkers via proxy connections. It differs from CLUSTERED mode in that a submitter can be explicitly allocated to a specific threadpoolworker via a proxy connection. This mode is only intended for specific situations, and customers should generally continue to use CLUSTERED mode. The whitepaper outlines advice for these new facilities and provides advice for existing functionality. The whitepaper is available from My Oracle Support as Doc Id: 836362.1.

    Read the article

  • Slides and Pictures from PowerShell Saturday Columbus 2012

    - by Brian Jackett
    On March 10th, 2012 the first ever PowerShell Saturday conference took place in Columbus, OH, and I couldn't be happier with the outcome. We had 100 attendees from 10 different states (the biggest surprise to me) come to see 6 speakers present on a variety of PowerShell topics: introduction, WMI, SharePoint, Active Directory, Exchange, 3rd party products, and more. A big thank you goes out to a number of people. Planning committee: Wes Stahler, lead organizer of PowerShell Saturday Columbus and president of the Central Ohio PowerShell User Group; Ed "Microsoft Scripting Guy" Wilson; Teresa "The Scripting Wife" Wilson; Ashley McGlone; and Brian T. Jackett (myself). Speakers: Ed Wilson, Ashley McGlone, James Brundage, Trevor Sullivan, and Daniel Cruz. Volunteer: Lisa Gardner, a fellow Microsoft PFE, volunteered her time on a Saturday to assist with smooth operation of the day. Facility coordination: Debbie Carrier, facilities coordinator for the Columbus Microsoft office, helped us out greatly with the venue. Slides and script samples: I presented my session on "PowerShell for the SharePoint 2010 Developer"; you can download the slides and script samples below. Photos: I wasn't able to take many pictures (only 3) as I was busy doing my presentation, answering questions, and taking care of random items throughout the day. Pictures are on Facebook, with higher-resolution versions in the "PowerShell Saturday Columbus Mar '12" SkyDrive album. Conclusion: I'm very happy that this first ever PowerShell Saturday was a success. My fellow PFE and speaker Ashley McGlone also has a short write-up on his blog about the event (click here). I have heard rumors that other cities are starting to plan their own local events; when I hear more details I'll spread the word here and on Twitter. -Frog Out

    Read the article

  • What FOSS solutions are available to manage software requirements?

    - by boos
    In the company where I work, we are starting to plan to be compliant with the software development life cycle. We already have a wiki, a VCS, a bug tracking system, and a continuous integration system. The next step is to start managing software requirements in a structured way. We don't want to use a wiki or shared documentation because we have many inputs (developer, manager, commercial, security analyst, and others) and we don't want to handle a proliferation of .doc files around the network share. We are searching for, and hope we can find, FOSS software to manage all these things. We have about 30 people and don't have a budget for commercial software, so we need a free solution for requirements management. What we want is software that can manage the following. Required features: software requirements divided in a structured, configurable way; versioning of the requirements (history, diff, etc., like source code); interdependency of requirements (child of, parent of, related to); rule-based access control for data handling; multi-user, multi-project; file upload (for graphs, related documents, and so on); report and extraction features. Optional features: web-based; test cases; time-based management (timeline, expected data, result data); person allocation and so on; business-related stuff; hardware allocation handling. I have already played with Testlink and am now playing with RTH; the next one I'll try is Redmine.

    Read the article

  • Which problem(s) do YOU want to see solved?

    - by buu700
    My team and I are meeting tonight to come up with a business plan, and some community input would be amazing. I've been mulling over this issue for the past few months and bouncing ideas off of others, and now I'd finally like some input from the community. I have come up with a fair selection of ideas, but most of them amount to either fun projects which could potentially be profitable, or otherwise solid business models that have one or two major hurdles (usually related to resources or legality). For our team meeting tonight, my idea is to take inventory of our available skills, resources, and compelling problems which interest us. The last is where I would greatly appreciate some community input. Hell, even entire business ideas/plans would be appreciated. No matter how big or small your thoughts, any input would be appreciated. We're a team of computer scientists, so our business will be primarily based around software/technology/Web solutions. Among my relevant available resources (entire Internet aside), I have the following: a pretty reliable connection to an SEO company and a large production company; a stash of fairly powerful server hardware; a fast network with static IPs; the backend for Hackswipe, which includes credit card payment processing and a Google Voice-based SMS gateway; this work-in-progress design for something completely unrelated but which is backed by some fairly decent infrastructure; direct access to experts in just about any relevant field (on-campus Carnegie Mellon professors); a sexual relationship with the baron of a small nation; for further down the line, some investor relationships; not as likely to be relevant, but a decent social media presence (Stack Overflow reputation, modship in some major reddits, various tech forums); and the source code for Eugene fucking McCabe. Pooled with the other team members, the list of projects we can build off of would be longer (including an Android app). So, what are your thoughts? Crossposted to reddit

    Read the article

  • Term for unit testing that separates test logic from test result data

    - by mario
    So I'm not doing any unit testing. But I've had an idea to make it more appropriate for my field of use. Yet it's not clear if something like this exists, and if it does, what it would be called. Ordinary unit tests combine the test logic and the expected outcome. In essence the testing framework only checks booleans (did this match, did the expected result occur). To generalize, the test code itself references the audited functions and also makes the result values explicit, like so: unit::assert( test_me() == 17 ) What I'm looking for is a separation of concerns. The test itself should only contain the tested logic. The outcome and result data should be handled by the unit testing or assertion framework. As an example: unit::probe( test_me() ) Here the probe doubles as a collector on the first run, and as a verification method afterwards. The expected 17 is not mentioned in the test code, but stored or managed elsewhere. What is this scheme called? Or what would you call it? I hope I can find some actual implementations with the proper terminology. Obviously such a pattern is unfit for TDD; it's strictly for regression testing. Also obviously, it cannot be used for all cases: only the simpler test subjects can be analyzed that way; for anything else, the ordinary unit test setup and assertion steps are required. And yes, this could be accomplished manually by crafting a ResultWhateverObject, but that would still require hardwiring it to the test logic. Also keep in mind that I'm asking about scripting languages, not Java. I'm aware that the xUnit pattern originates there, and why it's hence as elaborate as it is. Btw, I've discovered one test execution framework which allows for shortening simple test notations to: test_me(); // 17 While the result data is thus no longer coded in (it's a comment), that's still not a complete separation, and of course it would work only for scalar results.

    Read the article

  • Startup/Shutdown time in Xubuntu is increasing!

    - by Ankit
    I am a novice Xubuntu user on a dual-boot machine. The other OS I have is Windows 7. When I first began using Xubuntu, I had really fast startup and shutdown (much, much faster than Windows 7 :) ). However, as I started using it more and more for my work, these times started rising. I do not have any problems with the execution speed of running applications; my main concern is the shutdown time. It has now gone above the Windows shutdown time (startup time has increased only partially compared to shutdown). I checked some similar questions like this one, but they do not seem to answer my concern, as I feel the users there experience a long wait before the screen goes blue. In my case, the screen goes blue pretty fast (the desktop session ends and a blue screen with a moving slider appears); however, it then remains blue for a long time. Another answer I saw on Google was to use dmesg and then stop some services that I do not want; however, being a novice, I could not completely understand what it meant.

    Read the article

  • MySQL – Export the Resultset to CSV file

    - by Pinal Dave
    In SQL Server, you can use the BCP command to export a result set to a CSV file. In MySQL, too, you can export data from a table or result set as a CSV file in several ways. Here are two methods. Method 1: Make use of Workbench. If you are using MySQL Workbench as a querying tool, you can make use of its Export option in the result window. Run the following code in Workbench: SELECT db_names FROM mysql_testing; The result will be shown in the result window. There is an option called "File"; click on it and it will prompt you with a window to save the result set (the original post includes a screenshot showing how the File option is used). Choose the directory and type out the name of the file. Method 2: Make use of the OUTFILE command. You can do the export using a query with the OUTFILE command: SELECT db_names FROM mysql_testing INTO OUTFILE 'C:/testing.csv' FIELDS ENCLOSED BY '"' TERMINATED BY ';' ESCAPED BY '"' LINES TERMINATED BY '\r\n'; (a formatted version appears below). After the execution of the above code, you can find a file named testing.csv on the C drive of the server. Reference: Pinal Dave (http://blog.sqlauthority.com)
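
    For readability, a formatted version of the OUTFILE query from the article (the table name and output path are from the post; note, as an added caveat not mentioned there, that INTO OUTFILE writes the file on the server host and requires the FILE privilege):

        SELECT db_names
        FROM mysql_testing
        INTO OUTFILE 'C:/testing.csv'
          FIELDS ENCLOSED BY '"'
                 TERMINATED BY ';'
                 ESCAPED BY '"'
          LINES TERMINATED BY '\r\n';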

    Read the article

  • Got a Great Solaris Story to Tell? Come to OpenWorld and Tell It

    - by Larry Wake
    I know there are a lot of Solaris veterans who still haven't experienced the enormousness that is Oracle OpenWorld. Simply put: if you have a chance to go, you should go. You'll learn a lot, and you'll be in one of the greatest cities in the world at the same time. Even better: if you've got something to share, we might be able to get you in for free. Yep, it's that time already: the Call for Papers for this year's OpenWorld (and JavaOne) is open. But not for long; you've only got until April 9th to submit your abstract. As a Solaris person, you'll probably be most interested in participating in one of two tracks: "Server and Storage Systems: Oracle Solaris" or "Oracle Develop: Oracle Solaris and Oracle Linux Development". All you need to give us right now is a title and an abstract. If your session is accepted, we'll let you know by early June, and you can start to plan to join us in San Francisco from September 30 to October 4. (If you're planning on attending in listen-only mode, be aware that the early registration price is available until March 30.) As is true every year, this is your opportunity to meet the leading Oracle hardware and software engineers, including lots of the Oracle Solaris team, and to interact with your peers from all over the world. See you there!

    Read the article

  • BizTalk: History of one project architecture

    - by Leonid Ganeline
    "In the beginning God made heaven and earth. Then he started to integrate." At the very start was the requirement: integrate two working systems. Small digging up: It was one system. It was good but IT guys want to change it to the new one, much better, chipper, more flexible, and more progressive in technologies, more suitable for the future, for the faster world and hungry competitors. One thing. One small, little thing. We cannot turn off the old system (call it A, because it was the first), turn on the new one (call it B, because it is second but not the last one). The A has a hundreds users all across a country, they must study B. A still has a lot nice custom features, home-made features that cannot disappear. These features have to be moved to the B and it is a long process, months and months of redevelopment. So, the decision was simple. Let’s move not jump, let’s both systems working side-by-side several months. In this time we could teach the users and move all custom A’s special functionality to B. That automatically means both systems should work side-by-side all these months and use the same data. Data in A and B must be in sync. That’s how the integration projects get birth. Moreover, the specific of the user tasks requires the both systems must be in sync in real-time. Nightly synchronization is not working, absolutely.   First draft The first draft seems simple. Both systems keep data in SQL databases. When data changes, the Create, Update, Delete operations performed on the data, and the sync process could be started. The obvious decision is to use triggers on tables. When we are talking about data, we are talking about several entities. For example, Orders and Items [in Orders]. We decided to use the BizTalk Server to synchronize systems. Why it was chosen is another story. Second draft   Let’s take an example how it works in more details. 1.       User creates a new entity in the A system. This fires an insert trigger on the entity table. Trigger has to pass the message “Entity created”. This message includes all attributes of the new entity, but I focused on the Id of this entity in the A system. Notation for this message is id.A. System A sends id.A to the BizTalk Server. 2.       BizTalk transforms id.A to the format of the system B. This is easiest part and I will not focus on this kind of transformations in the following text. The message on the picture is still id.A but it is in slightly different format, that’s why it is changing in color. BizTalk sends id.A to the system B. 3.       The system B creates the entity on its side. But it uses different id-s for entities, these id-s are id.B. System B saves id.A+id.B. System B sends the message id.A+id.B back to the BizTalk. 4.       BizTalk sends the message id.A+id.B to the system A. 5.       System A saves id.A+id.B. Why both id-s should be saved on both systems? It was one of the next requirements. Users of both systems have to know the systems are in sync or not in sync. Users working with the entity on the system A can see the id.B and use it to switch to the system B and work there with the copy of the same entity. The decision was to store the pairs of entity id-s on both sides. If there is only one id, the entities are not in sync yet (for the Create operation). Third draft Next problem was the reliability of the synchronization. The synchronizing process can be interrupted on each step, when message goes through the wires. 
It can be communication problem, timeout, temporary shutdown one of the systems, the second system cannot be synchronized by some internal reason. There were several potential problems that prevented from enclosing the whole synchronization process in one transaction. Decision was to restart the whole sync process if it was not finished (in case of the error). For this purpose was created an additional service. Let’s call it the Resync service. We still keep the id pairs in both systems, but only for the fast access not for the synchronization process. For the synchronizing these id-s now are kept in one main place, in the Resync service database. The Resync service keeps record as: ·       Id.A ·       Id.B ·       Entity.Type ·       Operation (Create, Update, Delete) ·       IsSyncStarted (true/false) ·       IsSyncFinished (true/false0 The example now looks like: 1.       System A creates id.A. id.A is saved on the A. Id.A is sent to the BizTalk. 2.       BizTalk sends id.A to the Resync and to the B. id.A is saved on the Resync. 3.       System B creates id.B. id.A+id.B are saved on the B. id.A+id.B are sent to the BizTalk. 4.       BizTalk sends id.A+id.B to the Resync and to the A. id.A+id.B are saved on the Resync. 5.       id.A+id.B are saved on the B. Resync changes the IsSyncStarted and IsSyncFinished flags accordingly. The Resync service implements three main methods: ·       Save (id.A, Entity.Type, Operation) ·       Save (id.A, id.B, Entity.Type, Operation) ·       Resync () Two Save() are used to save id-s to the service storage. See in the above example, in 2 and 4 steps. What about the Resync()? It is the method that finishes the interrupted synchronization processes. If Save() is started by the trigger event, the Resync() is working as an independent process. It periodically scans the Resync storage to find out “unfinished” records. Then it restarts the synchronization processes. It tries to synchronize them several times then gives up.     One more thing, both systems A and B must tolerate duplicates of one synchronizing process. Say on the step 3 the system B was not able to send id.A+id.B back. The Resync service must restart the synchronization process that will send the id.A to B second time. In this case system B must just send back again also created id.A+id.B pair without errors. That means “tolerate duplicates”. Fourth draft Next draft was created only because of the aesthetics. As it always happens, aesthetics gave significant performance gain to the whole system. First was the stupid question. Why do we need this additional service with special database? Can we just master the BizTalk to do something like this Resync() does? So the Resync orchestration is doing the same thing as the Resync service. It is started by the Id.A and finished by the id.A+id.B message. The first works as a Start message, the second works as a Finish message.     Here is a diagram the whole process without errors. It is pretty straightforward. The Resync orchestration is waiting for the Finish message specific period of time then resubmits the Id.A message. It resubmits the Id.A message specific number of times then gives up and gets suspended. It can be resubmitted then it starts the whole process again: waiting [, resubmitting [, get suspended]], finishing. Tuning up The Resync orchestration resubmits the id.A message with special “Resubmitted” flag. The subscription filter on the Resync orchestration includes predicate as (Resubmit_Flag != “Resubmitted”). 
That means only the first Sync orchestration starts the Resync orchestration. Other Sync orchestration instantiated by the resubmitting can finish this Resync orchestration but cannot start another instance of the Resync   Here is a diagram where system B was inaccessible for some period of time. The Resync orchestration resubmitted the id.A two times. Then system B got the response the id.A+id.B and this finished the Resync service execution. What is interesting about this, there were submitted several identical id.A messages and only one id.A+id.B message. Because of this, the system B and the Resync must tolerate the duplicate messages. We also told about this requirement for the system B. Now the same requirement is for the Resunc. Let’s assume the system B was very slow in the first response and the Resync service had time to resubmit two id.A messages. System B responded not, as it was in previous case, with one id.A+id.B but with two id.A+id.B messages. First of them finished the Resync execution for the id.A. What about the second id.A+id.B? Where it goes? So, we have to add one more internal requirement. The whole solution must tolerate many identical id.A+id.B messages. It is easy task with the BizTalk. I added the “SinkExtraMessages” subscriber (orchestration with one receive shape), that just get these messages and do nothing. Real design Real architecture is much more complex and interesting. In reality each system can submit several id.A almost simultaneously and completely unordered. There are not only the “Create entity” operation but the Update and Delete operations. And these operations relate each other. Say the Update operation after Delete means not the same as Update after Create. In reality there are entities related each other. Say the Order and Order Items. Change on one of it could start the series of the operations on another. Moreover, the system internals are the “black boxes” and we cannot predict the exact content and order of the operation series. It worth to say, I had to spend a time to manage the zombie message problems. The zombies are still here, but this is not a problem now. And this is another story. What is interesting in the last design? One orchestration works to help another to be more reliable. Why two orchestration design is more reliable, isn’t it something strange? The Synch orchestration takes all the message exchange between systems, here is the area where most of the errors could happen. The Resync orchestration sends and receives messages only within the BizTalk server. Is there another design? Sure. All Resync functionality could be implemented inside the Sync orchestration. Hey guys, some other ideas?
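
    As a rough illustration of the Resync storage described in the third draft, a minimal SQL sketch of the record layout and the periodic scan for unfinished work (table and column names are assumptions based on the fields listed above, not the author's actual schema):

        -- One row per synchronization in flight
        CREATE TABLE ResyncRecord (
            IdA            varchar(50) NOT NULL,  -- entity id in system A
            IdB            varchar(50) NULL,      -- entity id in system B, filled late in the cycle
            EntityType     varchar(50) NOT NULL,
            Operation      varchar(10) NOT NULL,  -- Create / Update / Delete
            IsSyncStarted  bit NOT NULL DEFAULT 0,
            IsSyncFinished bit NOT NULL DEFAULT 0
        );

        -- What Resync() looks for on each periodic scan: started but never finished
        SELECT IdA, EntityType, Operation
        FROM ResyncRecord
        WHERE IsSyncStarted = 1
          AND IsSyncFinished = 0;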

    Read the article

  • Drive project success & financial performance with business critical Enterprise Project Portfolio Management

    - by Sylvie MacKenzie, PMP
    Oracle Primavera invites you to the first in a series of three webcasts linking Enterprise Project Portfolio Management with enhanced operational performance and better financial results. Few organizations fully understand the impact projects have on their business, yet consistently delivering successful projects is vital to the financial success of an asset-intensive organization. Enterprise Project Portfolio Management (EPPM) is not a new concept, yet many organizations still do not consider it "business critical". Webcast 1: Plan - Aligning project selection and prioritization with corporate objectives. This webcast will look at two key questions: Are you aligning portfolio decisions with strategic objectives? How do you effectively measure the success of your portfolio decisions? Hear from Accenture, who will present a compelling case for why asset-intensive organizations should consider EPPM business critical. They'll explore how technology is being used to enhance project delivery, how collaboration enhances delivery performance, and the major challenges associated with the planning phase of a project. Next, hear from Geoff Roberts, Industry Strategist at Oracle Primavera. With over 30 years of experience in project management/project controls in the construction, utilities, and oil & gas sectors, Geoff will investigate how EPPM is a best practice and can support an organization through project selection and prioritization, ensuring that decisions are aligned with corporate objectives. Don't miss out; register today!

    Read the article

  • Where to find Hg/Git technical support?

    - by Rook
    Posting this as a kind of favour for a former colleague, so I don't know the exact circumstances, but I'll try to provide as much info as I can... A friend from my old place of employment (a maritime research institute; half government, half commercial funding) has asked me if I could find out who provides commercial technical support for the two major DVCSs of today, Git and Mercurial. They have been using a VCS for years now (Subversion while I was there; I don't know what they're using now, probably the same), and now they're renewing their software licences (they have to give a plan some time in advance for everything... then it goes "through the system"). Although they will be keeping Subversion as well, they would like to justify introducing a DVCS as an alternative system (most people root for Mercurial since it seems simpler; they're mostly engineers and physicists who are not that interested in checking Git repos for corruption and the finer workings of Git, but I believe either of the two could "pass"). It has to have a price (which can be zero; no problem there) and some sort of official technical support. It is a pro forma matter, but it has to be specified. Most of the people there are using one of the two already, but this has to be made official. So I'm asking you: do you know where one could go for Git or Mercurial technical support (it can be commercial)? Technical forums and the like are out of the question. It has to work on the principle: I have a problem; I post a question with the details; I get an answer in a specified time. The answer can be "we cannot do that," but it has to be an official answer given within an agreed time. I'm sure by now most of you understand what I'm asking, but if not, post a comment or similar. Also, if you can think of any reasons that could justify introducing Git/Hg from a technical and administrative viewpoint, feel free to write them down as well.

    Read the article

  • Oracle OpenWorld 2011 Call For Papers is Now Open

    - by ruth.donohue
    What is Call for Papers? First, let's take a small step back. Oracle OpenWorld is the world's largest event dedicated to helping enterprises understand and harness the power of information, and the best place to see that technology in action. Oracle OpenWorld showcases the customers and partners whose innovation with Oracle translates to better business results. In addition, there are many opportunities to network with Oracle employees, partners, and customers. Oracle OpenWorld 2011 will be held October 2-6 in San Francisco. (Note: Oracle hosts other OpenWorld conferences in China and South America, usually in December.) Call for Papers is your opportunity to submit a topic to present at Oracle OpenWorld. When submitting your topic, be sure to describe what you plan to discuss and the value of the presentation to other attendees. So think about the interesting and exciting things you have done with Siebel CRM or Oracle CRM On Demand (or any Oracle product), and submit your topic. The deadline is Sunday, March 27th, so think fast. By the way, if you are selected to present at Oracle OpenWorld, you'll receive a complimentary full conference pass! Stay tuned in the coming months; we'll keep you posted on all of the exciting things happening with Oracle CRM at Oracle OpenWorld 2011. Start making your plans to attend now... you won't want to miss it!

    Read the article

  • As an indie game dev, what processes are the best for soliciting feedback on my design/spec/idea? [closed]

    - by Jess Telford
    Background: I have worked in a professional environment where the process usually goes like the following: (1) brainstorm an idea; (2) solidify the game mechanics/design; (3) iterate on the design/idea to create a more solid experience; (4) spec out the details of the design/idea; (5) build it. Step 3 is generally done with the stakeholders of the game (developers, designers, investors, publishers, etc.) to reach an 'agreement' which meets the goals of all involved. Because this process involves a series of often opposing and unique viewpoints, creative solutions can surface through discussion and iteration. This is backed up by a process for collating the changes/new ideas, as well as structured time for discussion. As a (now) indie developer, I have to play the role of all the stakeholders (developers, designers, investors, publishers, etc.), and often find myself too close to the idea/design to do more than minor changes, which I feel to be local maxima when it comes to the best result (I'm looking for the global maximum, of course). I have read that ideas/game designs/unique mechanics are merely multipliers of execution, and that keeping them secret is just silly. In sharing the idea with others outside the realm of my own thinking, I hope to replicate the influence other stakeholders have. I am struggling with the collation of changes/new ideas, and with any kind of structured method of receiving feedback. My question: As an indie game developer, how and where can I share my ideas/designs to receive meaningful/constructive feedback? How can I successfully collate the feedback into a new iteration of the design? Are there any specialized websites, etc.?

    Read the article
