Search Results

Search found 5166 results on 207 pages for 'cost benefit'.

  • Java EE 7 turns one today!

    - by delabassee
    "Tell me and I forget. Teach me and I remember. Involve me and I learn." (Benjamin Franklin) Today marks the first year anniversary of Java EE 7. The JSR 342 specification was finalised on May 28, 2013 with the official launch taking place on June 12, 2013 (original press release). As of today, there are already 3 Java EE 7 compatible Application Servers, coming from different 'vendors' (Oracle, TmaxSoft and Red Hat). Two of those Java EE 7 Application Servers are free and open source. We expect the list of Java EE 7 compatible Application Servers to grow over the coming months. Source: RebelLabs - 'Java Tools and Technologies Landscape for 2014' According to a recent independent survey, one third of the Java EE users who participated in that survey is already using Java EE 7. This is a good sign but it also means that a lot of people are not yet on Java EE 7. So if you haven't yet embarked on Java EE 7, now is really the time to do so! There are various ways to learn Java EE 7, in no particular order ... Continue to read The Aquarium. Through this blog, we are relaying Java EE news but we are also doing our best to highlight relevant technical contents such as articles, community tutorials, etc. Watch the GlassFish YouTube channel. Amongst others, it contains the different videos of the Java EE 7 launch, those videos will give you good technical update on Java EE and its different components specifications (JMS 2.0, JAX-RS 2.0, EJB 3.2, etc.) Take a formal training. Oracle University is starting to roll-out Java EE 7 trainings like the 'Java EE 7: New Features' class.  Attend conferences and JUGs sessions. On that note, we have spent a lot of time to create a strong JavaOne 'Server-Side Java' track. It's still possible to benefit from the early bird JavaOne pricing but don't wait too much! Read books. There are more than 25 (!) books related to Java EE 7 or to one of the Java EE 7 component specification.  There are many more ways to learn Java EE but if I have to suggest one and only one way, I would recommend the Java EE 7 Tutorial. It's exhaustive and clear, it's free and it continues to evolve. And finally as the introductory quote suggest, participation is key to learning. Participate in JUGs,  participate in Adopt-a-JSR, get involved in the different open source communities evolving around Java EE, participate in the JCP... in one word, participate!

    Read the article

  • Introducing Oracle Retail Mobile Point-of-Service

    - by user801960
    Oracle recently announced the introduction of Oracle Retail Mobile Point-of-Service, a mobile extension to the Oracle Retail Point-of-Service (POS) used by many retailers internationally. Oracle Retail Mobile POS offers wide-ranging cost and efficiency benefits by allowing staff resources to be used more effectively whilst also reducing the spend associated with fixed POS solutions. For retailers utilising Oracle Retail Stores Solutions, additional benefits can be realised: Oracle Retail Mobile POS works with these solutions to allow store personnel to check in-store inventory, access product information and specifications, and perform tasks such as printing or emailing receipts and activating gift cards. As Oracle Retail Mobile POS is an extension of Oracle Retail Point-of-Service, retailers can benefit from seamless integration with existing systems, simple upgrade procedures and seamless delivery across the business. However, the solution’s scalable and flexible architecture also supports multiple mobile operators and systems, so retailers are not locked into particular vendors. As well as being popular with retailers, Mobile POS has also proved to be well liked by consumers, as it facilitates improved customer service levels: retail staff are able to spend more time with consumers on the shop floor, access requested inventory information, and perform tasks that would traditionally have needed to be completed at a fixed cash register. Additional information can be accessed on Oracle Retail Point-of-Service, or read the press announcement Oracle Introduces Mobile Point-of-Service for Retailers.

    Read the article

  • SQL Azure and Trust Services

    - by BuckWoody
    Microsoft is working on a new Windows Azure service called “Trust Services”. Trust Services takes a certificate you upload and uses it to encrypt and decrypt sensitive data in the cloud. Of course, like any security service, there’s a bit more to it than that. I’ll give you a quick overview of how you can use this product to protect data you send to SQL Azure. The primary issue with storing data in the cloud is that you are in an environment that isn’t under your control – in fact, that’s the benefit of being in a distributed computing environment in the first place. On premises you’re able to encrypt data you don’t want anyone else to see, using various methods such as passwords (not very strong) or certificates (stronger). When you use a certificate, it’s vital that you create (or procure) and protect it yourself. When you store data remotely, regardless of IaaS, PaaS or SaaS, you don’t own the machines where the data lives. That means if you use a certificate from the cloud vendor to encrypt the data, you have to trust that the data won’t be accessed by the vendor. In some cases having a signed agreement with the vendor that they won’t access your data is sufficient; in other cases that doesn’t meet the requirements your system has for security. With the new Trust Services service, the basic process is that you use a portal to create a Trust Server using policies and other controls. You place an X.509 certificate you create or procure in that server. Using the Software Development Kit (SDK), the developer has access to an Application Layer Encryption Framework to set the fields of data they want to encrypt. From there, the data can be stored in SQL Azure as a standard field – only it is encrypted before it ever arrives. The portion of the client software that decrypts the data uses the same service, so the authenticated user sees the data if they are allowed to do so. The data remains encrypted “at rest”. You can learn more about this product and check it out in the SQL Azure labs at Microsoft Codename "Trust Services".
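    To make that flow concrete, here is a minimal sketch of application-layer field encryption using standard .NET crypto. To be clear, this is not the Trust Services SDK itself (the FieldProtector class is my own invention for illustration); it just shows the general pattern of encrypting a field with a certificate you own before it ever leaves for SQL Azure:

        using System;
        using System.Security.Cryptography;
        using System.Security.Cryptography.X509Certificates;
        using System.Text;

        // Hypothetical helper: protect a single field with an X.509 certificate
        // you create or procure yourself, so the stored value stays opaque to
        // the cloud vendor. (Raw RSA is fine for small fields; real systems
        // typically wrap a symmetric key instead.)
        static class FieldProtector
        {
            public static byte[] Encrypt(X509Certificate2 cert, string plainText)
            {
                // Encrypt with the public key (OAEP padding)
                var rsa = (RSACryptoServiceProvider)cert.PublicKey.Key;
                return rsa.Encrypt(Encoding.UTF8.GetBytes(plainText), true);
            }

            public static string Decrypt(X509Certificate2 cert, byte[] cipherText)
            {
                // Decrypt with the private key, held only by authenticated clients
                var rsa = (RSACryptoServiceProvider)cert.PrivateKey;
                return Encoding.UTF8.GetString(rsa.Decrypt(cipherText, true));
            }
        }

    The encrypted bytes can go into an ordinary varbinary column in SQL Azure; only a client holding the private key can read them back, which is what keeps the data encrypted “at rest”.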

    Read the article

  • Sabre Manages Fast Data Growth with Oracle Data Integration Products

    - by Irem Radzik
    Last year at OpenWorld we announced Sabre Holdings as a winner of the Fusion Middleware Innovation Awards. The Sabre team did an excellent job of leveraging cutting-edge technologies to manage the rapid data growth and exponential scalability demands they have experienced in the travel industry. Today we announced the details and specific benefits of Sabre’s new real-time data integration solution in a press release. Please take a look if you haven’t seen it yet: Sabre Holdings Deploys Oracle Data Integrator and Oracle GoldenGate to Support Rapid Customer Growth. Sabre achieved benefits in three different areas by using Oracle Data Integration products:
    - Managed a 7X increase in data sources for the enterprise data warehouse
    - Reduced infrastructure complexity
    - Decreased time to market for new products and services by 30 percent
    This simply shows that using the latest technologies helps companies innovate robust solutions against today’s key data management challenges. And the benefit of using a next-generation data integration technology is seen not only in IT operations, but also on the business side. A better data integration solution for the enterprise data warehouse delivered the platform they needed to accelerate how they service their customers, improving their competitive advantage. Tomorrow I will give another great example of innovation with next-generation data integration from Oracle: we will be discussing the Fusion Middleware Innovation Awards 2012 winners and their results with Oracle’s data integration products.

    Read the article

  • Diving into Scala with Cay Horstmann

    - by Janice J. Heiss
    A new interview with Java Champion Cay Horstmann, now up on otn/java, titled "Diving into Scala: A Conversation with Java Champion Cay Horstmann," explores Horstmann's ideas about Scala as reflected in his much-lauded new book, Scala for the Impatient. None other than Martin Odersky, the inventor of Scala, called it "a joy to read" and the "best introduction to Scala". Odersky was so enthused by the book that he asked Horstmann if the first section could be made available as a free download on the Typesafe website, something Horstmann graciously assented to. Horstmann acknowledges that some aspects of Scala are very complex, but he encourages developers to simply stay away from those parts of the language. He points to several ways Java developers can benefit from Scala: "For example," he says, "you can write classes with less boilerplate, file and XML handling is more concise, and you can replace tedious loops over collections with more elegant constructs. Typically, programmers at this level report that they write about half the number of lines of code in Scala that they would in Java, and that's nothing to sneeze at. Another entry point can be if you want to use a Scala-based framework such as Akka or Play; you can use these with Java, but the Scala API is more enjoyable." Horstmann observes that developers can do fine with Scala without grasping the theory behind it. He argues that most of us learn best through examples and not through trying to comprehend abstract theories. He also believes that Scala is the most attractive choice for developers who want to move beyond Java and C++. When asked about other choices, he comments: "Clojure is pretty nice, but I found its Lisp syntax a bit off-putting, and it seems very focused on software transactional memory, which isn't all that useful to me. And it's not statically typed. I wanted to like Groovy, but it really bothers me that the semantics seems under-defined and in flux. And it's not statically typed. Yes, there is Groovy++, but that's in even sketchier shape. There are a couple of contenders such as Kotlin and Ceylon, but so far they aren't real. So, if you want to do work with a statically typed language on the JVM that exists today, Scala is simply the pragmatic choice. It's a good thing that it's such a nice choice." Learn more about Scala by reading the full interview here.

    Read the article

  • Updates about Multidimensional vs Tabular #ssas #msbi

    - by Marco Russo (SQLBI)
    I recently read the blog post from James Serra, Tabular model: Not ready for prime time? (read the comments too, because there are discussions about a few points raised by James), and the following post from Christian Wade, Multidimensional or Tabular. Over the last two years I have worked with many companies adopting Tabular in different scenarios, and I agree with some of the points expressed by James in his post (especially about missing features in Tabular compared to Multidimensional), but I strongly disagree with others. In general, Tabular is a good choice for a new project when:
    - the development team does not have a good knowledge of Multidimensional and MDX (DAX is faster to learn; not as easy as Microsoft sells it, but definitely easier than MDX)
    - you don’t need calculations based on hierarchies (common in certain financial applications, but not as common as it might seem)
    - there are important calculations based on distinct count measures
    - there are complex calculations based on many-to-many relationships
    Until now, I have never suggested migrating an existing Multidimensional model to a Tabular one. There would have to be very important reasons for that, such as performance issues in distinct count and many-to-many relationships that cannot be easily solved by optimizing the Multidimensional model, but I have still never encountered this scenario. I would say that in 80% of new projects you might use either Multidimensional or Tabular, and the real difference is the time-to-market, depending on the skills of the development team. So it’s not strange that those who are used to Multidimensional are not moving to Tabular, since they would not get a particular benefit from the new model unless specific requirements exist. The recent DAXMD feature that allows using SharePoint Power View on Multidimensional is a really important one, even if I’d also like to see Excel Power View enabled for this scenario (this should be just a question of time). Another scenario in which I’m seeing growing adoption of Tabular is in companies that create models for their product/service and do so by using XMLA or Tabular AMO 2012. I usually call them ISVs, even if those providing services cannot really be defined that way. These companies are facing the multitenancy challenge with Tabular, and even if this is a niche market, I see some potential here, because adopting Tabular seems a much more natural choice than Multidimensional in those scenarios where an analytical engine has to be embedded to deliver one of the features of a larger product/service delivered to customers. I’d like to see more feedback in the comments: tell your story of choosing between Tabular and Multidimensional in a BI project you started with SQL Server 2012. Thanks!

    Read the article

  • F# Objects – Integrating with the other .Net Languages – Part 1

    - by MarkPearl
    In the next few blog posts I am going to explore objects in F#. Up to now, my dabbling in F# has really been a few-liners, and while I haven’t reached the point where F# is my language of preference, I am already seeing the benefits of the language when solving certain types of programming problems. I believe that the F# language will be used in a silo-like architecture, and that the real benefit of having F# under your belt is that you can solve the problems F# lends itself towards, and then interact with other .Net languages for the rest. When I was still very new to the F# language I did a post covering how to get F# & C# to talk to each other. Today I am going to use a similar approach to demonstrate the structure of F# objects when inter-operating with other languages. Let’s start with an empty F# class…

        type Person() = class end

    Very simple; all we have really done is declare an object that has nothing in it. We could, if we wanted, make an object that takes a constructor with parameters… the code for this would look something like this…

        type Person =
            {
                Firstname : string
                Lastname : string
            }

    What’s interesting about this syntax is what you see when you try to interop with this object from another .Net language like C#: not only has a constructor been created that accepts two parameters, but Firstname and Lastname are also exposed on the object. Now it’s important to keep in mind that value holders in F# are immutable by default, so you would not be able to change the value of Firstname after the construction of the object; in C# terms it has been set to readonly. One could however explicitly state that the value holders are mutable, which would then allow you to change the values after the actual creation of the object.

        type Person =
            {
                mutable Firstname : string
                mutable Lastname : string
            }

    Something that bugged me for a while was: what if I wanted an F# object that requires values in its constructor, but does not expose them as part of the object? After bashing my head for a few moments I came up with the following syntax, which achieves this result.

        type Person(Firstname : string, Lastname : string) =
            member v.Fullname = Firstname + " " + Lastname

    What I haven’t figured out yet is what the difference is between the () & {} brackets when declaring an object.
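    To illustrate the interop, here is a minimal C# sketch (using the Person record from the example above) of what the generated members look like from the consumer side:

        // C# consumer of the F# record type above
        var person = new Person("Ada", "Lovelace");   // constructor generated from the record fields
        Console.WriteLine(person.Firstname);          // prints "Ada"
        // person.Firstname = "Grace";                // compile error, unless the field was declared mutable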

    Read the article

  • Where Are You on the Visualization Maturity Curve?

    - by Celine Beck
    The old phrase “A picture is worth a thousand words” is as true now as ever. Providing the right users with access to the right product data, at the right time, can provide significant benefits to a business. This is especially evident with increasing technical and product complexities, elongated supply chains, and growing pressure to bring innovative products to market faster. With this in mind, it is easy to understand why visualization is an integral part of any successful product lifecycle management (PLM) strategy. At a bare minimum, knowledge workers use multiple individual documents of different formats and structure, and leverage visualization solutions to access information; but the real value of visualization can be fully reaped when it is connected to enterprise applications like PLM and tied to the appropriate business context. The picture below illustrates this visualization maturity curve, which we presented during the last Oracle OpenWorld, and the transformational effect that visualization can have on PLM processes and performance (check out the post about AutoVue Key Highlights from Oracle Open World 2012 for more information). Organizations are likely to see a greater positive impact on business performance when visualization is connected to enterprise systems, allowing access to information coming from multiple sources, such as PLM, supply chain management (SCM) and enterprise resource planning (ERP). This allows organizations to reach higher levels of collaboration and optimize decision-making capacity, as users can benefit from in-context access to visual information. For instance, within a PLM system, a design engineer can access a product assembly and review digital annotations added by other users specific to the engineering change request he is reviewing, rather than all historical annotations. The last stage on the curve is what we call augmented business visualization (ABV). ABV is an innovative framework which lets structured data (from Oracle’s Agile PLM, for instance) interact with unstructured data (documents, designs, 3D models, etc.). With this new level of integration, information coming from multiple sources can be presented in a highly visual fashion; color displays can be used to identify parts with specific characteristics (for example, pending quality issues), and you can take actions directly from within the context of documents and designs, maximizing user productivity. Those who had the chance to attend our PLM session during Oracle OpenWorld already got a sneak peek of our latest augmented business visualization for Oracle’s Agile PLM. The solution generated a lot of wows. Stephen Porter, CEO at Zero Wait State, indicated in a post entitled “The PLM State: the Manhattan Project - Oracle’s Next Big Secret Weapon” that “this kind of synergy between visualization and PLM could qualify as a powerful weapon differentiating Agile PLM from other solutions.” If you are interested in learning more about ABV for Oracle’s Agile PLM and hearing about real examples of the use of visualization at all stages of the visualization maturity curve, don’t miss our Visual Decision Making to Optimize New Product Development and Introduction session during the Oracle Value Chain Summit (Feb. 4-6, 2013, San Francisco). We look forward to seeing you there!

    Read the article

  • Redgate ANTS Performance Profiler

    - by Jon Canning
    Seemingly forever I've been working on a business idea: a REST API delivering content to mobiles, and I've never really had much idea about its performance. Yes, I have a suite of unit tests and integration tests, but these only tell me that it works, not how well it works. I was also about to embark on a major refactor, swapping the database from MongoDB to RavenDB, and was curious to see if that impacted performance at all, so I needed a profiler that supported IIS Express that I could run my integration tests against, and Google gave me: http://www.red-gate.com/supportcenter/content/ANTS_Performance_Profiler/help/7.4/app_iise Excellent. Following the above guide, an instance of IIS Express is launched, as is Internet Explorer. The latter eventually becomes annoying; I would like to decide whether I want a browser opened, but thankfully the guide is wrong in that it can be closed and profiling will continue. So I ran my tests, stopped profiling, and was presented with a call tree listing the endpoints called and allowing me to drill down to the source code beneath. Although useful and fascinating, this wasn't what I was expecting to see; I was after the method timings from the entire test suite. Switching Show to Methods Grid presented me with a list of my methods, with the slowest lit up in red at the top. Marvellous. I did find that if you switch to Methods Grid before the Call tree has loaded, you do not get the red warnings. StructureMap was very busy, and next on the list was a request filter that I didn't expect to be so overworked. Highlighting it, the source code was presented to me in the bottom window with timings and a nice red indicator to show me where to look. Oh horror, that reflection hack I put in months ago; I'd forgotten all about it. It was calling Validate<T>(), which in turn was resolving a validator from StructureMap. Note to self: use //TODO: when leaving smelly code lying around. Before refactoring, remember to Save Profile Results from the File menu. Annoyingly you are not prompted to save your results when exiting, and using Save Project will only leave you thankful that you have version control and can go back in time to run your tests again. Having implemented StructureMap's ForGenericType, I ran my tests again and: win. Thank you, ANTS (what does ANTS stand for, BTW?). There's definitely room in my toolbox for a profiler; what started out as idle curiosity actually solved a potential problem. When presented with a new codebase I can see enormous benefit from getting an overview of the pipeline from the call tree before drilling into the code, and as a sanity check before release it gives a little more reassurance that you've done your best, and shows you exactly where to look if you haven't. Next I'm going to profile a load test.
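    For anyone curious about the fix itself, here's a rough sketch of the idea (from memory, and the exact registration API differs between StructureMap versions; IValidator<>, Validator<> and CreateOrderRequest here are stand-ins for my actual types). Registering the open generic once at startup means each Validate<T>() call becomes a cheap container lookup instead of a reflection hack:

        using StructureMap;

        // Hypothetical open-generic contract and its default implementation
        public interface IValidator<T> { void Validate(T instance); }
        public class Validator<T> : IValidator<T>
        {
            public void Validate(T instance) { /* validation rules here */ }
        }

        // Register the open generic mapping once, at startup
        var container = new Container(x =>
        {
            x.For(typeof(IValidator<>)).Use(typeof(Validator<>));
        });

        // Resolving a closed type is now a straightforward lookup
        var validator = container.GetInstance<IValidator<CreateOrderRequest>>();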

    Read the article

  • Good, simple reasons for having multiple environments

    - by smp7d
    Throughout my career I have worked at companies that had a collection of different environments for different purposes. We always had, more or less, a desktop environment, a test environment, a QA environment, a staging environment and a production environment. This went for both servers/applications and any data sources we were using. When I started at my current company I found that 90% of the apps were either developed in a desktop environment against production data sources, or developed directly on the production server, depending on the platform. I wasn't fazed, because I was hired in part to make changes to improve the way the development team functioned, which was clear from my interview process. We slowly started to turn the philosophy around, and pretty soon most of the apps could be run in either a desktop, test or production environment. Not too long after that, staging came around as well. Now most of our developers see the benefit of this methodology and defend it vigorously. However, we have a number of legacy apps that never got migrated. We also have a number of legacy programmers who think of this as a waste of time. Unfortunately, we got lip service but never full buy-in from management. We got what we thought was a commitment to invest substantially in this about a year ago, but nothing materialized despite the considerable planning that we put into it. Now we are finding that we need more and more environments. We need help from the server/network administration teams for setup, and we need participation from the business stakeholders to support the release cycle. We are at a place now where a project can function in what I consider a "normal" way only if you have the right people on the project and the time to set up the proper environments. I'd love to present a complete argument, but management really has no time or interest in hearing me out until there is a critical issue. I can't really articulate the benefits simply, as it has always just seemed second nature to me. I was wondering if there are any good, simple, irrefutable reasons for the separation of environments that would get managers with no development experience behind this idea. Are there any good resources/literature on the topic?

    Read the article

  • Transform Your Application Integration with Best Practices from Oracle Customers

    - by Lionel Dubreuil
    You want to transform your application integration into an environment based on a service-oriented architecture (SOA). You also want to utilize business process management (BPM) to improve efficiency, deliver business agility, lower total cost of ownership, and increase business visibility. And you want to hear directly from like-minded professionals who have made those types of transformations. Easy enough. Attend this Webcast series to learn from customers who have successfully integrated with Oracle SOA and BPM solutions. Join us for this series and discover how to:
    - Use a single unified platform for all types of processes
    - Increase real-time process visibility
    - Improve efficiency of existing IT investments
    - Lower up-front costs and achieve faster time to market
    - Gain greater benefit from SOA with the addition of BPM
    Here's the list of upcoming webcasts:
    - “Migrating to SOA at Choice Hotels” on Thurs., June 21, 2012, 10 a.m. PT / 1 p.m. ET. Hear how Choice Hotels successfully made the transition from a complex legacy environment into a SOA-based shared services infrastructure that accelerated time to market as the company implemented its event-driven Google API project.
    - “San Joaquin County: Optimizing Justice and Public Safety with Oracle BPM and Oracle SOA” on Thurs., July 26, 2012, 10 a.m. PT / 1 p.m. ET. Learn how San Joaquin County moved to a service-oriented architecture foundation and business process management platform to gain efficiency and greater visibility into mission-critical information for public safety.
    - “Streamlining Order to Cash with SOA at Eaton” on Thurs., August 23, 2012, 10 a.m. PT / 1 p.m. ET. Discover how Eaton transitioned from a legacy TIBCO infrastructure. Learn about the company’s reference architecture for a SOA-based Oracle Fusion Distributed Order Orchestration (DOO).
    - “Fast BPM Implementation with Fusion: Production in Five Months” on Thurs., September 13, 2012, 10 a.m. PT / 1 p.m. ET. Learn how Nets Denmark A/S implemented Oracle Unified Business Process Management Suite in just five months. The Webcast will cover the implementation from start to production, including integration with legacy systems.
    - “SOA Implementation at Farmers Insurance” on Thurs., October 18, 2012, 10 a.m. PT / 1 p.m. ET. Learn how Farmers Insurance Group lowered application infrastructure costs, reduced time to market, and introduced flexibility by transforming to a SOA-based infrastructure with SOA governance.
    Register today!

    Read the article

  • On the art of self-promotion

    - by Tony Davis
    I attended Brent Ozar's Building the Fastest SQL Servers session at Tech Ed last week, and found myself engulfed in a 'perfect storm' of excellent technical and presentational skills coupled with an astute awareness of the value of promoting one's work. I spend a lot of time at such events talking to developers and DBAs about the value of blogging and writing articles, and my impression is that some could benefit from a touch less modesty and a little more self-promotion. I sense a reticence in many would-be writers. Is what I have to say important enough? Haven't far more qualified and established commentators, MVPs and so on, already said it? While it's a good idea to pick reasonably fresh and interesting topics, it's more important not to let such fears lead to writer's block. In the eyes of any future employer, your published writing is an extension of your resume. They will not care that a certain MVP knows how to solve problem x, but they will be very interested to see that you have tackled that same problem, and solved it in your own way, and described the process in your own voice. In your current job, your writing is one of the ways you can express to your peers, and to the organization as a whole, the value of what you contribute. Many Developers and DBAs seem to rely on the idea that their work will speak for itself, and that their skill shines out from it. Unfortunately, this isn't always true. Many Development DBAs, for example, will be painfully aware of the massive effort involved in tuning and adding resilience to rapidly developed applications. However, others in the organization who are unaware of what's involved in getting an application that is 'done' ready for production may dismiss such efforts as fussiness or conservatism. At the dark end of the development cycle, chickens come home to roost, but their droppings tend to land on those trying to clear up the mess. My advice is this: next time you fix a bug or improve the resilience or performance of a database or application, make sure that you use team meetings, informal discussions and so on to ensure that people understand what the problem was and what you had to do to fix it. Use your blog to describe, generally, the process you adopted, the resources you used and the insights that came from your work. Encourage your colleagues to do the same. By spreading the art of self-promotion to everyone involved in an IT project, we get a better idea of the extent of the work and the value of the contribution of all the team members. As always, we'd love to hear what you think. This very week, Simple-talk launches its new blogging platform. If any of this has moved you to 'throw your hat into the ring', drop us a mail at [email protected]. Cheers, Tony.

    Read the article

  • Some More New ADF Features in JDeveloper 11.1.2

    - by Steven Davelaar
    The official list of new features in JDeveloper 11.1.2 is documented here. While playing with JDeveloper 11.1.2 and scanning the web user interface developer's guide for 11.1.2, I noticed some additional new features in ADF Faces; small, but they might come in handy:
    - You can use the af:formatString and af:formatNamed constructs in EL expressions to use substitution variables. For example: <af:outputText value="#{af:formatString('The current user is: {0}',someBean.currentUser)}"/>. See section 3.5.2 in the web user interface guide for more info.
    - A new ADF Faces client behavior tag: af:checkUncommittedDataBehavior. See section 20.3 in the web user interface guide for more info. For this tag to work, you also need to set the uncommittedDataWarning property on the af:document tag, and this property has quite some issues, as you can read here. I did a quick test: the alert is shown for a button that is on the same page; however, if you have a menu in a shell page with dynamic regions, then clicking on another menu item does not raise the alert if you have pending changes in the currently displayed region. For now, the JHeadstart implementation of pending changes still seems the best choice (will blog about that soon).
    - New properties on the af:document tag: smallIconSource creates a so-called favicon that is displayed in front of the URL in the browser address bar, and largeIconSource specifies the icon used by a mobile device when bookmarking the page to the home page. See section 9.2.5 in the web user interface guide for more info. Also notice the failedConnectionText property, which I didn't know about but was already available in JDeveloper 11.1.1.4.
    - The af:showDetail tag has a new property handleDisclosure, which you can set to client for faster rendering.
    - In JDeveloper 11.1.1.x, an expression like #{bindings.JobId.inputValue} would return the internal list index number when JobId was a list binding. To get the actual JobId attribute value, you needed to use #{bindings.JobId.attributeValue}. In JDeveloper 11.1.2 this is no longer needed: the #{bindings.JobId.inputValue} expression will return the attribute value corresponding to the selected index in the choice list.
    Did you discover other "hidden" new features? Please add them as a comment to this blog post so everybody can benefit.

    Read the article

  • A "First" at Oracle OpenWorld

    - by Kathryn Perry
    A guest post by Adam May, Director, Fusion CRM, Oracle Applications Development. There are always firsts at OpenWorld. These firsts keep the conference fresh and are the reason people come back year after year. An important first this year is our Fusion CRM customers who are using the product and deriving real benefit from it. Everyone can learn from and interact with them, including us! We love talking to customers, especially those who are using our solutions in unexpected ways, because they challenge us! At previous OpenWorlds, we presented our overall Fusion vision and our plans for Fusion CRM. Those presentations helped customers plan their strategies and map out their new release uptakes. Fast forward to March of this year, when the first Fusion CRM customer went live. Since then we've watched the pace of go-lives accelerate every single month. Now we're at the threshold of another OpenWorld, with over 45,000 attendees, 2,500 sessions and LOTS of other activities. To avoid having our customers curl into a ball with sensory overload, we designed a Focus On document to outline the most important Fusion CRM activities. Here are some of the highlights:
    - Anthony Lye's "Oracle Fusion Customer Relationship Management: Overview/Strategy/Customer Experiences/Roadmap" on Monday at 3:15 p.m.
    - The CRM Pavilion, open in Moscone West from Monday through Wednesday; it features our strategic Fusion CRM partners and provides live demonstrations of their capabilities
    - General Session: "Oracle Fusion CRM: Improving Sales Effectiveness, Efficiency, and Ease of Use" on Tuesday at 11:45 a.m.; features Anthony Lye and Deloitte
    - "Meet the Fusion CRM Experts" on Tuesday at 5:00 p.m.; this session gives customers the opportunity to interact one-on-one with Fusion experts divided into eight categories of expertise
    - CRM Social Reception on Tuesday from 6-8 p.m.; there's no better way to spend the early evening than discussing Fusion CRM with Oracle experts and strategic partners over appetizers and drinks
    - Wednesday night is Oracle's Customer Appreciation event; enjoy Pearl Jam, Kings of Leon, etc. beginning at 7:30 p.m. at Treasure Island
    Be sure to drink plenty of water before sleeping Wednesday night and don't stay out too late, because we have lots of great content on Thursday; at the top of the list is "Oracle Fusion Social CRM Strategy and Roadmap: Future of Collaboration and Social Engagement" at 11:15 a.m. We hope you have a fantastic experience at OpenWorld 2012! And here's a little video treat to whet your appetite: http://www.youtube.com/user/FusionAppsAtOracle

    Read the article

  • Dealing with coworkers when developing, need advice [closed]

    - by Yippie-Kai-Yay
    I developed our current project architecture and started developing it on my own (reaching something like revision 40). We're developing a simple subway routing framework, and my design seemed to be done extremely well: several main models, corresponding views, main logic and data structures were modeled "as they should be" and fully separated from rendering, and the algorithmic part was implemented apart from the main models, with a minor number of intersection points. I would call the design scalable, customizable, easy to implement, interacting mostly via "black box interaction", and, well, very nice. Now, what was done:
    - I started some implementations of the corresponding interfaces, ported some convenient libraries and wrote implementation stubs for some application parts.
    - I had a document describing the coding style, with examples of its usage (my own written code).
    - I enforced the use of more or less modern C++ development techniques, including no-delete code (wrapped via smart pointers), etc.
    - I documented the purpose of concrete interface implementations and how they should be used.
    - Unit tests (mostly integration tests, because there wasn't a lot of "actual" code) and a set of mocks for all the core abstractions.
    I was absent for 12 days. What do we have now (the project was developed by 4 other members of the team)?
    - 3 different coding styles all over the project (I guess two of them agreed to use the same style); the same applies to the naming of our abstractions (e.g. CommonPathData.h, SubwaySchemeStructures.h), which are basically headers declaring some data structures.
    - Absolute lack of documentation for the recently implemented parts.
    - What I could recently call a single-purpose abstraction now handles at least 2 different types of events, has tight coupling with other parts, and so on.
    - Half of the used interfaces now contain member variables (sic!).
    - Raw pointer usage almost everywhere.
    - Unit tests disabled, because "(Rev. 57) They are unnecessary for this project".
    - ... (and that's probably not everything).
    The commit history shows that my design was interpreted as overkill; people started combining it with personal bicycles and reimplemented wheels, and then had problems integrating the code chunks. Now the project still does only a small amount of what it has to do, we have severe integration problems, and I assume some memory leaks. Is there anything that can be done in this case? I do realize that all my efforts didn't have any benefit, but the deadline is pretty soon and we have to do something. Has anyone had a similar situation? Basically I thought that a good start for the project (well, I did everything that I could) would probably lead to something nice; however, I understand that I was wrong. Any advice would be appreciated. Sorry for my bad English.

    Read the article

  • Is there a pedagogical game engine?

    - by K.G.
    I'm looking for a book, website, or other resource that gives modern 3D game engines the same treatment as Operating Systems: Design and Implementation gave operating systems. I have read Jason Gregory's Game Engine Architecture, which I enjoyed. However, by intent the author treated components of the architecture as atomic units, whereas what I'm interested in is the plumbing between those units that makes a coherent whole out of ideally loosely coupled parts. In books such as these, one usually reads that "that's academic," but that's the point! I have also read Julian Gold's Object-oriented Game Development, which likewise was good, but I feel is beginning to show its age. Since even mobile platforms these days are multicore and have fast video memory, those kinds of things (concurrency, display item buffering) would ideally be covered. There are other resources, such as the Doom 3 source code, which is highly instructive for its being a shipped product. The problem with those is as follows:

        float Q_rsqrt( float number )
        {
            long i;
            float x2, y;
            const float threehalfs = 1.5F;

            x2 = number * 0.5F;
            y  = number;
            i  = * ( long * ) &y;                        // evil floating point bit level hacking
            i  = 0x5f3759df - ( i >> 1 );                // what the f***?
            y  = * ( float * ) &i;
            y  = y * ( threehalfs - ( x2 * y * y ) );    // 1st iteration
        //  y  = y * ( threehalfs - ( x2 * y * y ) );    // 2nd iteration, this can be removed

            return y;
        }

    To wit, while brilliant, this kind of source requires more enlightenment than I can usually muster upon first read. In summary, here's my white whale:
    - For an adult reader with experience in programming. I wish I could save all the trees killed by every. Single. Game programming book ever devoting the first two chapters to "Now just what is a variable anyway?"
    - In C or C++, very preferably C++. Languages that are more concise are fantastic for teaching, except for when what you want to learn is how to cope with a verbose language. There is also the benefit of the guardrails that C++ doesn't provide, such as garbage collection.
    - Platform agnostic. I'm sincerely afraid that this book is out there and it's Visual C++/DirectX oriented. I'm a Linux guy, and I'd do what it takes, but I would very much like to be able to use OpenGL.
    Thanks for everything! Before anyone gets on my case about it, fast inverse square root was from Quake III Arena, not Doom 3!

    Read the article

  • As a tooling/automation developer, can I be making better use of OOP?

    - by Tom Pickles
    My time as a developer (~8 years) has been spent creating tooling/automation of one sort or another. The tools I develop usually interface with one or more APIs. These APIs could be Win32, WMI, VMware, a help-desk application, LDAP; you get the picture. The apps I develop could be just to pull back data and store/report it. It could be to provision groups of VMs to create live-like mock environments, update a trouble ticket, etc. I've been developing in .Net, and I'm currently reading into design patterns and trying to think about how I can improve my skills to make better use of, and increase my understanding of, OOP. For example, I've never used an interface of my own making in anger (which is probably not a good thing), because I honestly cannot identify where using one would benefit later on when modifying my code. My classes are usually very specific, and I don't create similar classes with similar properties/methods which could use a common interface (like perhaps a car dealership or shop application might). I generally use an n-tier approach to my apps, having a presentation layer and a business logic/manager layer which interfaces with the layer(s) that make calls to the APIs I'm working with. My business entities are always just method-less container objects, which I populate with data and pass back and forth between my API-facing layer, using static methods to proxy/validate between the front and the back end. My code, by the nature of my work, has few common components, at least from what I can see. So I'm struggling to see how I can make better use of OOP design and perhaps reusable patterns. Am I right to be concerned that I could be smarter about how I work, or is what I'm doing now right for my line of work? Or am I missing something fundamental in OOP? EDIT: Here is some basic code to show how my manager and API-facing layers work. I use static classes, as they do not persist any data, only facilitate moving it between layers.

        public static class MgrClass
        {
            public static bool PowerOnVM(string VMName)
            {
                // Perform logic to validate or apply biz logic
                // call APIClass to do the work
                return APIClass.PowerOnVM(VMName);
            }
        }

        public static class APIClass
        {
            public static bool PowerOnVM(string VMName)
            {
                // Calls to 3rd party API to power on a virtual machine;
                // returns true or false depending on whether it was successful
                return true; // stub: assume success for illustration
            }
        }

    Read the article

  • SharePoint Thoughts

    - by Tim Murphy
    I was listening to .NET Rocks episode #713 and it got me thinking about a number of SharePoint-related topics. I have been working with SharePoint since the 2001 product came out and have watched it evolve over the years. Today SharePoint is one of the most powerful and flexible products in the market. Of course that doesn't mean there isn't room for improvement (a lot of improvement, in fact), and with much power comes much responsibility. My main gripe these days is that you have to develop on a server instance. This adds a real barrier to entry for developers. You either have to run VMware or Hyper-V on your developer machine or actually develop on your dev server for most tasks. Yes, there is a way to set up a Windows 7 machine with the SharePoint components, but it is very hackish. Beyond that, the tools in VS2010 are a great leap forward from past generations. Not requiring a separate package creation tool is not the least of the improvements. Better workflow and web part development have also eased the burden on many developers. The other thing the show brought up in my thoughts was more around usage. Users want to be able to self-serve everything without regard to what effect that has on leveraging their data from a corporate perspective. My coworkers who work on Lotus Notes ask why the user can't just do whatever they want. Part of the reason is that those features have not been built, but the other part is that giving them those features is often like giving an infant a loaded handgun: you can do it, but that doesn't make it the smart thing to do. As with any tool that is going to be used in the enterprise, it should be subject to governance. If controls are not in place then, as they said in that episode of DNR, document libraries, and I believe SharePoint in general, start to look as disarrayed and unusable as a shared drive. Consider these factors before giving in to every whim of the users. You should be able to explain to them the tradeoffs of giving them full control versus being able to leverage the information they collect to the benefit of the organization. These are just a couple of the thoughts that were triggered by the show. I'm sure there are more discussions to be had. Feel free to leave your comments about the pros and cons of SharePoint. del.icio.us Tags: .NET Rocks,SharePoint,software development

    Read the article

  • Hadoop, NOSQL, and the Relational Model

    - by Phil Factor
    (Guest Editorial for the IT Pro/SysAdmin Newsletter)

    Whereas Relational Databases fit the world of commerce like a glove, it is useless to pretend that they are a perfect fit for all human endeavours. Although, with SQL Server, we’ve made great strides in indexing text, processing spatial data and processing markup, there is still a problem in dealing efficiently with large volumes of ephemeral semi-structured data. Key-value stores such as Cassandra, Project Voldemort, and Riak are of great value for ephemeral data, and seem of equal value as a data feed that provides aggregations to an RDBMS. Document databases such as MongoDB and CouchDB, however, are ideal for semi-structured data for which no fixed schema exists; analytics and logging are obvious examples.

    NoSQL products, such as MongoDB, tackle the semi-structured data problem with panache. MongoDB is designed with a simple document-oriented data model that scales horizontally across multiple servers. It doesn’t impose a schema, and relies on the application to enforce the data structure. This is another take on the old ‘EAV’ problem (where you don’t know in advance all the attributes of a particular entity). It uses a clever replica set design that allows automatic failover, and uses journaling for data durability. It allows indexing and ad-hoc querying.

    However, for SQL Server users, the obvious choice for handling semi-structured data is Apache Hadoop. There will soon be an ODBC driver for Apache Hive and an add-in for Excel. Additionally, there are now two Hadoop-based connectors for SQL Server: the Apache Hadoop connector for SQL Server 2008 R2, and the SQL Server Parallel Data Warehouse (PDW) connector. We can connect to Hadoop, process the semi-structured data, and then store it in SQL Server.

    For one steeped in the culture of Relational SQL Databases, I might be expected to throw up my hands in a gesture of contempt for a technology that was, judging by the overblown journalism on the subject, about to make my own profession as archaic as the saggar maker’s bottom knocker (a potter’s assistant who helped the saggar maker to make the bottom of the saggar by placing clay in a metal hoop and bashing it). On the contrary, I find that I’m delighted with the advances made by the NoSQL databases in the past few years. The flow of ideas from the NoSQL providers will knock any trace of complacency out of the providers of Relational Databases and inspire them to back-fit features such as horizontal scaling, sharding and automatic failover into SQL-based RDBMSs. It will do the breed a power of good to benefit from all this lateral thinking.
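    To make the “doesn’t impose a schema” point concrete, here is a minimal sketch using the MongoDB .NET driver (the connection string, database and collection names are placeholders): two documents of different shapes land in the same collection, with no schema to declare and no migration to run.

        using System;
        using MongoDB.Bson;
        using MongoDB.Driver;

        class SchemalessDemo
        {
            static void Main()
            {
                var client = new MongoClient("mongodb://localhost:27017");
                var events = client.GetDatabase("analytics")
                                   .GetCollection<BsonDocument>("events");

                // Two documents with different attributes in the same
                // collection: the application, not the database, decides
                // what each document looks like.
                events.InsertOne(new BsonDocument
                {
                    { "type", "page_view" },
                    { "url", "/home" },
                    { "ts", DateTime.UtcNow }
                });

                events.InsertOne(new BsonDocument
                {
                    { "type", "purchase" },
                    { "sku", "A-1001" },
                    { "amount", 19.99 },
                    { "ts", DateTime.UtcNow }
                });

                // Ad-hoc query on a field only some documents carry.
                var purchases = events.Find(new BsonDocument("type", "purchase")).ToList();
                Console.WriteLine("Purchases found: " + purchases.Count);
            }
        }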

    Read the article

  • How granular should a command be in a CQ[R]S model?

    - by Aaronaught
    I'm considering a project to migrate part of our WCF-based SOA over to a service bus model (probably nServiceBus) and using some basic pub-sub to achieve Command-Query Separation. I'm not new to SOA, or even to service bus models, but I confess that until recently my concept of "separation" was limited to run-of-the-mill database mirroring and replication. Still, I'm attracted to the idea because it seems to provide all the benefits of an eventually-consistent system while sidestepping many of the obvious drawbacks (most notably the lack of proper transactional support).

    I've read a lot on the subject from Udi Dahan, who is basically the guru on ESB architectures (at least in the Microsoft world), but one thing he says really puzzles me:

    As we get larger entities with more fields on them, we also get more actors working with those same entities, and the higher the likelihood that something will touch some attribute of them at any given time, increasing the number of concurrency conflicts. [...] A core element of CQRS is rethinking the design of the user interface to enable us to capture our users’ intent such that making a customer preferred is a different unit of work for the user than indicating that the customer has moved or that they’ve gotten married. Using an Excel-like UI for data changes doesn’t capture intent, as we saw above. -- Udi Dahan, Clarified CQRS

    From the perspective described in the quotation, it's hard to argue with that logic. But it seems to go against the grain with respect to SOAs. SOAs (and really services in general) are supposed to deal with coarse-grained messages so as to minimize network chatter, among many other benefits. I realize that network chatter is less of an issue when you've got highly distributed systems with good message queuing and none of the baggage of RPC, but it doesn't seem wise to dismiss the issue entirely. Udi almost seems to be saying that every attribute change (i.e. field update) ought to be its own command, which is hard to imagine in the context of one user potentially updating hundreds or thousands of combined entities and attributes, as is often the case with a traditional web service. One batch update in SQL Server may take a fraction of a second given a good highly-parameterized query, table-valued parameter or bulk insert to a staging table; processing these updates one at a time is slow, slow, slow, and OLTP database hardware is the most expensive of all to scale up/out.

    Is there some way to reconcile these competing concerns? Am I thinking about it the wrong way? Does this problem have a well-known solution in the CQS/ESB world? If not, then how does one decide what the "right level" of granularity in a command should be? Is there some "standard" one can use as a starting point - sort of like 3NF in databases - and deviate from it only when careful profiling suggests a potentially significant performance benefit? Or is this possibly one of those things that, despite several strong opinions being expressed by various experts, is really just a matter of opinion?
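    To put the granularity question into code, here is the contrast as I understand it - all class names below are invented for illustration. The coarse-grained command is the "Excel-like UI" case Udi warns about; the fine-grained commands each capture a distinct user intent:

        using System;

        // Coarse-grained, CRUD-style: one command carries every field,
        // whether or not it changed. It captures no intent, and two users
        // editing different attributes conflict on the whole entity.
        public class UpdateCustomerCommand
        {
            public Guid CustomerId { get; set; }
            public string Name { get; set; }
            public string Address { get; set; }
            public bool IsPreferred { get; set; }
            public string MaritalStatus { get; set; }
            // ...potentially dozens more fields
        }

        // Fine-grained, intent-capturing: each command is its own unit of
        // work, so making a customer preferred and recording a move can
        // happen concurrently without a concurrency conflict.
        public class MakeCustomerPreferredCommand
        {
            public Guid CustomerId { get; set; }
        }

        public class RecordCustomerMoveCommand
        {
            public Guid CustomerId { get; set; }
            public string NewAddress { get; set; }
        }

    As I understand it, bus frameworks like nServiceBus can also batch several commands into a single physical message or transaction, which would mitigate some of the chatter concern - though that's exactly the kind of trade-off I'm asking about.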

    Read the article

  • Big-name School for Undergrad Students

    - by itaiferber
    As a soon-to-be-graduating high school senior in the U.S., I'm going to be facing a tough decision in a few months: which college should I go to? Will it be worth it to go to Cornell or Stanford or Carnegie Mellon (assuming I get in, of course) to get a big-name computer science degree, internships, and connections with professors, while taking on massive debt; or am I better off going to SUNY Binghamton (probably the best state school in New York) and still getting a pretty decent education while saving myself from over a hundred thousand dollars' worth of debt?

    Yes, I know questions like this have been asked before (namely here and here), but please bear with me, because I haven't found an answer that fits my particular situation. I've read the two linked questions above in depth, but they haven't answered what I want to know:

    Yes, I understand that going to a big-name college can potentially get me connected with some wonderful professors and leaders in the field, but on average, how does that translate financially? I mean, will good connections pay off so well that I could easily get rid of over a hundred thousand dollars of debt?

    And how does the fact that I can get a fifth-year master's degree at Carnegie Mellon play into the equation? Will the higher degree right off the bat help me get a better-paying job just out of college, or will the extra year only put me further into debt? Not having to go to graduate school to get a comparable degree will, of course, be a great financial relief, but will getting it so early give it any greater worth?

    And if I go to SUNY Binghamton, which is far less well-known than the schools I've mentioned (although if there are any alumni out there who want to share their experience, I would greatly appreciate it), would I be closing off doors that would potentially offset my short-term economic gain with long-term benefits? Essentially, is the short-term benefit outweighed by a potential long-term loss?

    The answers to these questions all tie into my final college decision (again, assuming I get into these schools), so I hope that asking the skilled and knowledgeable people of the field will help me make the right choice (if there is such a thing).

    Also, please note: I'm in a rather peculiar situation where I can't pay for college without taking out a bunch of loans, but I will be getting little to no financial aid (federal or otherwise). I don't want to elaborate on this too much (so take it at face value), but this is mainly the reason I'm asking the question. Thanks a lot! It means a lot to me.

    Read the article
