Search Results

Search found 350 results on 14 pages for 'mature'.

Page 9/14 | < Previous Page | 5 6 7 8 9 10 11 12 13 14  | Next Page >

  • What's the advantage of an Adobe AIR app over a traditional desktop app?

    - by John
    I'm pretty familiar with using Adobe Flex & AS3, and compared with writing apps in JS/HTML I think it's very cool. However, since AIR is essentially a non-browser version of Flex with benefits like local storage, it seems to be competing as a cross-platform desktop application platform... and in that space it's much less mature than more established desktop technologies. So what's the advantage of creating a desktop application using AIR compared to something like Java (or C++ using a cross-platform GUI library like wxWidgets)? Java is equally capable of communicating with a server, for instance, so I'm not quite sure what AIR adds when competing head-to-head in the desktop development world.

    Read the article

  • Is RIA Services right for our Silverlight app at this point?

    - by Alex
    Hi, I'm looking at Silverlight architectures and RIA Services looks interesting, but I am a bit concerned about its prerelease status and how the feature set will change. We need our client app to be as responsive as possible over a slow network link, so a high priority is a solid sync system for pushing model state changes from the client back to the server. Will RIA Services help us in this regard, or will I have to roll my own logic to do this? Are there any other frameworks that can assist with this? Is the feature set involved in these requirements liable to change much in the next couple of months? If it makes any difference, our frontend is 100% Silverlight, so we don't need to worry about exposing SOAP APIs from the server or anything like that. It appears to me that RIA Services so far is a bit more mature for Silverlight use. Is this correct?

    Read the article

  • Getting Started With nServiceBus on VAN Mar 31

    - by van
    Topic: nServiceBus is a mature and powerful open source framework that enables developers to design robust, scalable, message-based, service-oriented architectures. The latest improvements in the configuration API enable developers to get started quickly and build a simple working system that uses a messaging infrastructure. The goal of this session is to give a jump start with the framework, introduce basic concepts such as message handlers, Sagas, Pub/Sub and the Generic Host, and also create a working demo application that uses publish/subscribe messaging. The content of the session is addressed to developers who are interested in learning how to get started with nServiceBus in order to design and build distributed systems. Bio: Bernard Kowalski is currently a Software Developer at Microdesk, one of Autodesk's leading partners, providing a variety of Geospatial and Computer-Aided Design solutions. Bernard has experience developing .NET framework-based applications utilizing Windows Forms, Windows Services, ASP.NET MVC, and Web services. In a recent project, Bernard architected and implemented a distributed system based on SOA principles using an open source implementation of an Enterprise Service Bus. Bernard develops software with Agile patterns and practices using Domain Driven Design combined with TDD (Test Driven Development). He is familiar with all of the following APIs: Autodesk Vault/Product Stream API, AutoCAD ActiveX/VBA/.NET API, AutoCAD Mechanical API, Autodesk Inventor API, Autodesk MapGuide Enterprise. Prior to joining Microdesk, Bernard worked as a researcher and teacher at the University of Science and Technology in Krakow, Poland, where he was awarded a PhD in Computer Methods in Materials Science. He also participated in research projects where he developed applications for the analysis of hot compression test results using advanced optimization techniques, and he developed Finite Element Method-based programs for thermal and stress analysis using C++ and FORTRAN. Bernard is a member of the Domain Driven Design and ALT.NET user groups in NYC. Virtual ALT.NET (VAN) is the online gathering place of the ALT.NET community. Through conversations, presentations, pair programming and dojos, we strive to improve, explore, and challenge the way we create software. Using net conferencing technology such as Skype and LiveMeeting, we hold regular meetings, open to anyone, usually taking the form of a presentation or an Open Space Technology-style conversation. Please see the Calendar (http://www.virtualaltnet.com/Home/Calendar) to find a VAN group that meets at a time convenient to you, and feel welcome to join a meeting. Past sessions can be found on the Recording page. To stay informed about VAN activities, you can subscribe to the Virtual ALT.NET Google Group and follow the Virtual ALT.NET blog. Times below are Central Standard Time. Start Time: Wed, Mar 31, 2010 8:00 PM UTC/GMT -5 hours End Time: Wed, Mar 31, 2010 10:00 PM UTC/GMT -5 hours Attendee URL: http://www.virtualaltnet.com/van Zach Young http://www.virtualaltnet.com

    Read the article

  • Web application / Domain model integration using JSON capable DTOs [on hold]

    - by g-makulik
    I'm a bit confused about architectural choices in the web-application/Java/Python world. In the C/C++ world the available (open source) choices for implementing web applications are pretty much limited to zero; bring in Java or Python and the choices explode into a hard-to-sort-out mess of available 'frameworks' and application approaches. I want to sort out a clean MVC model, where the M stands for a fully blown (POCO/POJO-driven) domain model (according to M. Fowler's EAA patterns) using a mature OO language (Java, C++) for implementation. The background is: I have a system with certain hardware components (which introduce system-immanent active behavior) and a configuration database for system metadata and HW-component configuration data (these are usually even self-contained, since the HW components are capable of persisting their configuration data anyway). For the configuration/status data exchange protocol with the HW components we have chosen the Google Protobuf format, which works well for the directly wired communication with these components. This protocol is already used successfully by a Java-based GUI application via a TCP/IP connection to the main system-controlling HW component. That application has some drawbacks and design flaws for historical reasons. Now we want to develop an abstract model (domain model) for configuring and monitoring those HW components, one that represents a more use-case-oriented view of the overall system behavior. I have the feeling that a plain Java class model would fit best for this (a C++ implementation seems to carry too much implementation/integration overhead with viable language-bridge interfaces). Google Protobuf message definitions could still serve well to describe the DTO objects used to interact with a domain model API. But integrating Google Protobuf messages client side, e.g. for data binding in the current view, doesn't seem to be a good choice. I'm thinking about some extra serialization features, e.g. for JSON-based data exchange with the views/controllers. Most lightweight solutions seem to involve a Python-based presentation layer using JSON-based data transfer (though I'm not sure I'm fully informed about this). Is there some lightweight framework available (applicable to a limited ARM Linux platform) that supports such an architecture for realizing a web application? UPDATE: Based on my recent research and comments from colleagues, I've noticed that using Java (and some JVM) might not be the preferable choice for integration with Python on a limited Linux system like ours (running on ARM9, with memory and MCU costs that are hard to negotiate), but C/C++ modules would do well for this (since C forms the native interface for Python extensions, doesn't it?). I can imagine providing a domain model from an appropriate C/C++ API (though I still think it means more effort and higher skill requirements for the developers involved than doing it in Java). Still, I'm searching for a good approach that supports such an architecture. I'd appreciate any pointers!
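
    A minimal sketch of the JSON bridging idea mentioned above, in Python, assuming a hypothetical hw_config.proto compiled by protoc into hw_config_pb2 with a ComponentConfig message; MessageToJson/Parse come from the standard protobuf Python package:

        from google.protobuf.json_format import MessageToJson, Parse

        import hw_config_pb2  # hypothetical module generated by protoc

        def component_to_json(raw_bytes):
            """Decode a component's Protobuf config and return JSON for view/controller data binding."""
            msg = hw_config_pb2.ComponentConfig()   # hypothetical message type
            msg.ParseFromString(raw_bytes)          # wire bytes as received from the HW component
            return MessageToJson(msg)

        def json_to_component(json_text):
            """Reverse direction: JSON from a controller back to Protobuf wire bytes."""
            msg = Parse(json_text, hw_config_pb2.ComponentConfig())
            return msg.SerializeToString()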

    Read the article

  • User Experience Highlights in Siebel: Direct from George Jacob

    - by mvaughan
    By Misha Vaughan and Kathy Miedema, Oracle Applications User Experience. This is the first in a series of blog posts on the user experience (UX) highlights coming in various Oracle product families. You'll see themes around productivity and efficiency, as well as a thoughtful approach to pushing UX capabilities into the underlying tooling. Of course, you can also expect to get an early look at the latest mobile offerings coming through these product lines. Today's post is on Siebel. To learn more about what's ahead, attend the Siebel presentations at OpenWorld. Our first interview is with George Jacob, the Group Vice President for CRM Applications. [Photo: George Jacob] Q: How would you describe the vision you have for the user experience of Siebel? A: Contemporary: Siebel runs in all browsers and all browser-capable devices using the latest web technology standards, such as JavaScript, CSS, and HTML5. Productive: Siebel is designed for a user experience that reduces clutter and user keystrokes. User-sensitive: The user experience enables Siebel to adapt easily to site and user preferences. Q: How are the UX features you have delivered so far resonating with customers? A: Customers are very excited about our refresh of the Siebel user interface framework; the Siebel roadmap and user interface sessions at Oracle OpenWorld last year overflowed. We have had to turn back customer requests to participate in the early adopter program because we had more than we could handle. Customers are calling this a game-changer for Siebel. Q: So the UX highlights are popular? A: Yes, the UX highlights are very popular, although to a certain extent we expected this! Q: What's coming in Siebel on a mobile platform? A: Our current mobile offering is based on Windows Mobile (a native application) and is fairly mature (over 5 years). The new Siebel Open User Interface Framework, by virtue of working in all browsers, will run - when it is released this year - on tablets and smartphones. This is one of the reasons a number of customers are most excited about our UX changes. [Image: Views of Siebel data on mobile devices] Q: What are you working on now that you think is going to be exciting to customers at OOW? A: We are working on the Siebel Open User Interface Framework, to be released this year in the Siebel 2012 8.1.1.9 & 8.2.2.2 innovation packs. We are also working on Connected Mobile applications for Sales, Service, Consumer Goods and Pharmaceuticals, and Disconnected Mobile applications for Pharmaceuticals in the same release. We are building specialized applications that exploit the new UI framework for Telco Order Capture and for Life Sciences healthcare professional visits. Our 2012 delivery will be the foundation for further user experience enhancements, next year and beyond. Q: What do you want Siebel customers to know? A: We are excited to be focused on improving the user experience of Siebel applications, and it is encouraging to see the positive feedback from Siebel customers and partners. If you would like to see more of the Siebel user experience, be sure to check out these sessions at OpenWorld: CON9700 - Siebel CRM Overview, Strategy, and Roadmap; CON9703 - User Interface Innovations with the New Siebel "Open UI"; CON9705 - Unleash the Power of "Open UI"; CON9697 - Mobile Solutions for Siebel CRM

    Read the article

  • CISDI Cloud - Industrial Cloud Computing Platform based on Oracle Products

    - by Wenyu Duan
    In today's era, cloud computing is becoming integral to the vision and corporate strategy of leading organizations and is often seen as a key business driver for growth and innovation. Headquartered in Chongqing, China, CISDI Engineering Co., Ltd. is a large state-owned engineering company offering consulting, engineering design, EPC contracting, and equipment integration services to steel producers all over the world. With over 50 years of experience, CISDI offers quality services for every aspect of production for projects in the metal industry, and the company has evolved into a leading international engineering service group with 18 subsidiaries providing complete lifecycle services for E&C projects. A CISDI Group delegation led by Mr. Zhaohui Yu, CEO of CISDI Group, Mr. Zhiyou Li, CEO of CISDI Info, Mr. Qing Peng, CTO of CISDI Info, and Mr. Xin Xiao, Head of CISDI Info's R&D, joined Oracle OpenWorld 2012 and presented a very impressive cloud initiative case in their session titled “E&C Industry Solution in CISDI Cloud - An Industrial Cloud Computing Platform Based on Oracle Products”. CISDI Group plans to build its cloud computing platform in three phases: first, it will relocate its existing technologies to Oracle systems and establish a private cloud for CISDI; second, it will gradually provide mixed cloud services for its subsidiaries and partners; and finally, it plans to launch an industrial cloud - a highly mature, secure and scalable environment providing cloud services for customers in the engineering construction and steel industries, among others. “CISDI Cloud” will become the growth engine for the organization to expand its global reach through online services and to achieve the strategic objective of being the preferred choice of E&C companies worldwide. The new cloud computing platform is designed to provide access to a shared pool of computing resources in a self-service, dynamic, elastic and measurable way. Its flexible and scalable grid structure can support elastic expansion and sustainable growth, and can bring significant benefits in speed, agility and efficiency. Further, the platform can greatly cut deployment and maintenance costs. The CISDI delegation highlighted these points as the key reasons why the group decided on a strategic collaboration with Oracle for building this world-class industrial cloud: Oracle's strategy of being open, complete and integrated; Oracle being the only company that can provide engineered systems, with a complete product chain of hardware and software; and Exadata, Exalogic and EM 12c providing a solid foundation for "CISDI Cloud". The cloud blueprint and advanced architecture for the industrial cloud computing platform presented in the session show how Oracle products and technologies, together with industrial applications from CISDI, can provide an end-to-end portfolio of E&C industry services in the cloud. CISDI Group was recognized for business leadership and innovative solutions and was presented with the Engineering and Construction Industry Excellence Award during Oracle OpenWorld.

    Read the article

  • 2 Birds, 1 Stone: Enabling M2M and Mobility in Healthcare

    - by Eric Jensen
    Jim Connors has created a video showcase of a comprehensive healthcare solution, connecting a mobile application directly to an embedded patient monitoring system. In the demo, Jim illustrates how you can easily build solutions on top of the Java embedded platform, using Oracle products like Berkeley DB and Database Mobile Server. Jim is running Apache Tomcat on an embedded device, using Berkeley DB as the data store. BDB is transparently linked to an Oracle Database backend using Database Mobile Server. Information protection is important in healthcare, so it is worth pointing out that these products offer strong data encryption, for storage as well as transit. In his video, Jim does a great job of demystifying M2M. What's compelling about this demo is that it uses a solution architecture that enterprise developers are already comfortable and familiar with: a Java app server with a database backend. The additional pieces used to embed this solution are Oracle Berkeley DB and Database Mobile Server, and they function transparently from the perspective of Java app developers. This means that organizations who understand Java apps (basically everyone) can use this technology to develop embedded M2M products. The potential uses for this technology in healthcare alone are immense; any device that measures and records some aspect of the patient could be linked, securely and directly, to the medical records database: breathing, circulation, other vitals, sensory perception, blood tests, x-rays or CAT scans. The list goes on and on. In this demo case, it's a testament to the power of the Java embedded platform that they are able to easily interface the device, called a pulse oximeter, with the web application. If Jim had stopped there, it would've been a cool demo. But he didn't; he actually saved the most awesome part for the end! At 9:52 Jim drops a bombshell: he's also created an Android app, something a doctor would use to view patient health data from his mobile device. The mobile app is seamlessly integrated into the rest of the system, using the device agent from Oracle's Database Mobile Server. In doing so, Jim has really showcased the full power of this solution: the ability to build M2M solutions that integrate seamlessly with mobile applications. In closing, I want to point out that this is not a hypothetical demo using beta or even v1.0 products. Everything in Jim's demo is available today. What's more, every product shown is mature and already in production at many customer sites, albeit not in the innovative combination Jim has come up with. If your customers are in the market for these types of solutions (and they almost certainly are), I encourage you to download the components and try it out yourself! All the Oracle products showcased in this video are available for evaluation download via Oracle Technology Network.

    Read the article

  • Handling Coding Standards at Work (I'm not the boss)

    - by Josh Johnson
    I work on a small team, around 10 devs. We have no coding standards at all. There are certain things that have become the norm, but some ways of doing things are completely disparate. My big one is indentation. Some use tabs, some use spaces, some use a different number of spaces, which creates a huge problem. I often end up with conflicts when I merge because someone used their IDE to auto-format and they use a different character to indent than I do. I don't care which we use, I just want us all to use the same one. Or else I'll open a file and some lines have curly brackets on the same line as the condition while others have them on the next line. Again, I don't mind which one so long as they are all the same. I've brought up the issue of standards with my direct manager, one on one and in group meetings, and he is not overly concerned about it (there are several others who share the same view as myself). I brought up my specific concern about indentation characters and he thought a better solution would be to "create some kind of script that could convert all that when we push/pull from the repo." I suspect that he doesn't want to change, and this solution seems overly complicated and prone to maintenance issues down the road (also, it addresses only one manifestation of a larger issue). Have any of you run into a similar situation at work? If so, how did you handle it? What would be some good points to help sell my boss on standards? Would starting a grassroots movement to create coding standards, among those of us who are interested, be a good idea? Am I being too particular, should I just let it go? Thank you all for your time. Note: Thanks everyone for the great feedback so far! To be clear, I don't want to dictate One Style To Rule Them All. I'm willing to concede my preferred way of doing something in favor of what suits everyone the best. I want consistency and I want this to be a democracy. I want it to be a group decision that everyone agrees on. True, not everyone will get their way, but I'm hoping that everyone will be mature enough to compromise for the betterment of the group. Note 2: Some people are getting caught up in the two examples I gave above. I'm more after the heart of the matter, which manifests itself in many ways: naming conventions, huge functions that should be broken up, whether something should go in a util or a service, whether something should be a constant or injected, whether we should all use different versions of a dependency or the same one, whether an interface should be used for a given case, how unit tests should be set up, what should be unit tested, and (Java-specific) whether we should use annotations or external config. I could go on.

    Read the article

  • Automated build platform for .NET portfolio - best choice?

    - by jkohlhepp
    I am involved with maintaining a fairly large portfolio of .NET applications. Also in the portfolio are legacy applications built on top of other platforms - native C++, ECLIPS Forms, etc. I have a complex build framework on top of NAnt right now that manages the builds for all of these applications. The build framework uses NAnt to do a number of different things: pull code out of Subversion, as well as create tags in Subversion; build the code, using MSBuild for .NET or other compilers for other platforms; peek inside AssemblyInfo files to increment version numbers; delete certain files that shouldn't be included in builds/releases; release code to deployment folders; zip code up for backup purposes; deploy Windows services and start and stop them; etc. Most of those things can be done with just NAnt by itself, but we did build a couple of extension tasks for NAnt to do some things that were specific to our environment. Also, most of those processes above are genericized and reused across a lot of our different application build scripts, so that we don't repeat logic. So it is not simple NAnt code, and not simple build scripts. There are dozens of NAnt files that come together to execute a build. Lately I've been dissatisfied with NAnt for a couple of reasons: (1) its syntax is just awful - programming languages on top of XML are really horrific to maintain, and (2) the project seems to have died on the vine; there haven't been a ton of updates lately and it seems like no one is really at the helm. Trying to get it working with .NET 4 has caused some pain points due to this lack of activity. So, with all of that background out of the way, here's my question. Given some of the things that I want to accomplish based on that list above, and given that I am primarily in a .NET shop but also need to build non-.NET projects, is there an alternative to NAnt that I should consider switching to? Things on my radar include PowerShell (with or without psake), MSBuild by itself, and rake. These all have pros and cons. For example, is MSBuild powerful enough? I remember using it years ago and it didn't seem to have as much power as NAnt. Do I really want my team to learn Ruby just to do builds using rake? Is psake really mature enough of a project to pin my portfolio to? Is PowerShell "too close to the metal," meaning I'll end up having to write my own build library akin to psake to use it on its own? Are there other tools that I should consider? If you were involved with maintaining a .NET portfolio of significant complexity, what build tool would you be looking at? What does your team currently use?
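
    As an illustration only (Python used here just for brevity; the actual framework uses NAnt tasks and the question is about NAnt alternatives), the "peek inside AssemblyInfo files to increment version numbers" step mentioned above boils down to a small text rewrite along these lines:

        import re
        from pathlib import Path

        def bump_build_number(assembly_info_path):
            """Increment the build component of AssemblyVersion in an AssemblyInfo.cs file."""
            path = Path(assembly_info_path)
            text = path.read_text()

            def bump(match):
                major, minor, build, rev = (int(p) for p in match.group(1).split("."))
                return '[assembly: AssemblyVersion("{0}.{1}.{2}.{3}")]'.format(
                    major, minor, build + 1, rev)

            path.write_text(re.sub(
                r'\[assembly: AssemblyVersion\("(\d+\.\d+\.\d+\.\d+)"\)\]', bump, text))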

    Read the article

  • Transparent Technology from Amazon

    - by David Dorf
    Amazon has been making some interesting moves again, this time in the augmented humanity area. Augmented humanity is about helping humans overcome their shortcomings using technology. Putting a powerful smartphone in your pocket helps you in many ways, like navigating streets, communicating with far-off friends, and accessing information. But the interface for smartphones is somewhat limiting and unnatural, so companies have been looking for ways to make the technology more transparent and therefore easier to use. When Apple helped us drop the stylus, we took a giant leap forward in simplicity. Using touchscreens with intuitive gestures was part of the iPhone's original appeal. People don't want to know that technology is there -- they just want the benefits. So what's the next leap beyond the touchscreen to make smartphones even easier to use? Two natural ways we interact with the world around us are sight and voice. Google and Apple have been using both in their mobile platforms for limited use cases. Nobody actually wants to type a text message, so why not just speak it? And if you want more information about a book, why not just snap a picture of the cover? That's much more accurate than trying to key in the title and/or author. So what has Amazon been doing? First, Amazon released a new iPhone app called Flow that allows iPhone users to see information about products in context. Yes, it's an augmented reality app that uses the phone's camera to view products and overlays data about the products on the screen. For the most part it requires the barcode to be visible to correctly identify the product, but I believe it can also recognize certain logos as well. Download the app and try it out, but don't expect perfection. It's good enough to demonstrate the concept, but it's far from accurate enough. (MobileBeat did a pretty good review.) Extrapolate to the future and we might just have a heads-up display in our eyeglasses. The second interesting area is voice response, for which Siri is getting lots of attention. Amazon may have purchased a voice recognition company called Yap, although the deal is not confirmed. But it would make perfect sense, especially with the Kindle Fire in Amazon's lineup. I believe over the next 3-5 years the way in which we interact with smartphones will mature, and they will become more transparent yet more important to our daily lives. This will, of course, impact the way we shop, making information more readily accessible than it already is. Amazon seems to be positioning itself to be at the forefront of this trend, so we should be watching them carefully.

    Read the article

  • New Options for MySQL High Availability

    - by Mat Keep
    Data is the currency of today’s web, mobile, social, enterprise and cloud applications. Ensuring data is always available is a top priority for any organization - minutes of downtime will result in significant loss of revenue and reputation. There is no “one size fits all” approach to delivering High Availability (HA). Unique application attributes, business requirements, operational capabilities and legacy infrastructure can all influence HA technology selection. And technology is only one element in delivering HA - “people and processes” are just as critical as the technology itself. For this reason, MySQL Enterprise Edition is available supporting a range of HA solutions, fully certified and supported by Oracle. MySQL Enterprise HA is not some expensive add-on; it is included within the core Enterprise Edition offering, along with the management tools, consulting and 24x7 support needed to deliver true HA. At the recent MySQL Connect conference, we announced new HA options for MySQL users running on both Linux and Solaris: DRBD for MySQL, and Oracle Solaris Clustering for MySQL. DRBD (Distributed Replicated Block Device) is an open source Linux kernel module which leverages synchronous replication to deliver highly available database applications across local storage. DRBD synchronizes database changes by mirroring data from an active node to a standby node and supports automatic failover and recovery. Linux, DRBD, Corosync and Pacemaker provide an integrated stack of mature and proven open source technologies. [Figure: DRBD stack providing synchronous replication for the MySQL Database with InnoDB] Download the DRBD for MySQL whitepaper to learn more, including step-by-step instructions to install, configure and provision DRBD with MySQL. Oracle Solaris Cluster provides high availability and load balancing to mission-critical applications and services in physical or virtualized environments. With Oracle Solaris Cluster, organizations have a scalable and flexible solution that is suited equally to small clusters in local datacenters or larger multi-site, multi-cluster deployments that are part of enterprise disaster recovery implementations. The Oracle Solaris Cluster MySQL agent integrates seamlessly with MySQL, offering a selection of configuration options in the various Oracle Solaris Cluster topologies. Putting it all together: when you add MySQL Replication and MySQL Cluster into the HA mix, along with 3rd party solutions, users have extensive choice (and decisions to make) to deliver HA services built on MySQL. To make the decision process simpler, we have also published a new MySQL HA Solutions Guide. Exploring beyond just the technology, the guide presents a methodology for selecting the best HA solution for your new web, cloud and mobile services, while also discussing the importance of people and process in ensuring service continuity. This subject was recently presented at Oracle OpenWorld, and the slides are available here. Whatever your uptime requirements, you can be sure MySQL has an HA solution for your needs. Please don't hesitate to let us know of your HA requirements in the comments section of this blog. You can also contact MySQL Consulting to learn more about their HA Jumpstart offering, which will help you scope out your scaling and HA requirements.

    Read the article

  • ArchBeat Link-o-Rama for December 6, 2012

    - by Bob Rhubart
    Above and Beyond with the A-Team Maybe it's the coffee… If you follow this blog you've probably noticed that I regularly feature posts from members of the Oracle Fusion Middleware Architecture team, otherwise known as the A-Team. One of those bloggers, someone identified only as "fip" who writes on the A-Team SOA blog, went above and beyond on Dec 4, publishing a total of four substantial technical posts in a single day, each one worth a look: "Retrieve Performance Data from SOA Infrastructure Database"; "Configure Oracle SOA JMSAdapter to Work with WLS JMS Topics"; "How to Achieve OC4J RMI Load Balancing"; and "Using BPEL Performance Statistics to Diagnose Performance Bottlenecks". Web Service Example - Part 3: Asynchronous | The Oracle ADF Mobile Blog Part 3 in this series from the Oracle ADF Mobile blog looks at "firing the web service asynchronously and then filling in the UI when it completes." Denis says, "This can be useful when you have data on the device in a local store and want to show that to the user while the application uses lazy loading from a web service to load more data." ADF Mobile - Implementing Reusable Mobile Architecture | Andrejus Baranovskis "Reusability was always a strong part of ADF," says Oracle ACE Director Andrejus Baranovskis. "The same high reusability level is supported now in ADF Mobile." The objective of this post is "to prove technically that [the] reusable architecture concept works for ADF Mobile." Basic is Best | Eric Stephens "The world we live in and the enterprises we strive to transform with enterprise architecture are complicated organisms, much like the human body," says Oracle Enterprise Architect Eric Stephens. "But sometimes a simple solution is the best approach... Whatever level of abstraction you are working at, less is more." Selling Federal Enterprise Architecture | Ted McLaughlan "EA must be 'sold' directly to the communities that matter from a coordinated, proactive messaging perspective that takes BOTH the Program-level value drivers AND the broader Agency mission and IT maturity context into consideration," explains Ted McLaughlan. And that's true for any organization. Avoiding the "I'm Spartacus" Scenario in SOA | Ben Wilcock "This ‘SOA Spartacus’ scenario usually occurs quite soon after SOA is articulated as the primary strategic direction of the programme," says Ben Wilcock, "but before the organisation’s SOA capability is mature enough to understand what is meant by SOA, and how it should be designed and delivered." In such cases, perhaps the "A" in SOA is missing, no? Thought for the Day "It makes me feel guilty that anybody should have such a good time doing what they are supposed to do." — Charles Eames (1907–1978) Source: SoftwareQuotes.com

    Read the article

  • Is Java viable for serious game development?

    - by tehtros
    Ever since I was a little kid, my dream has been to develop games. Well, now that I am older, more mature, and have some programming experience, I would like to start. However, I would like to turn this into a career. The problem is that my language of choice is Java. Now, I am not intending this to be a Java vs. C++ question, but rather: is Java an acceptable language for serious game development, instead of lower-level languages like C++? By serious, I mean high-quality graphics, and being able to play a game with said high-quality graphics without much lag on decent computers. Also, eventually, possibly making it to consoles. I have scoured the internet, but there are not very many resources for Java game development, not nearly as many as for C++. In fact, most engines are written in C++. Once, I tried to play a game made with jMonkeyEngine. The game was terribly slow, to the point where my computer froze. I had no other Java applications running and nothing too resource-intensive. Keep in mind that my computer can play most modern 3D games with ease. So, I am really serious about game development; is Java still a viable choice? I have tried multiple times to learn C++, but I don't really like the language. I don't really know why, but usually, whenever I try to learn, I can never grasp the topics. Also, most of my friends know Java, and one is even anti-C++, saying that no one knows how to use it right. He goes on to say that "there is no right way to use C++, that it cannot be used correctly. The nature of the language prevents good code." Also, if I continue to learn and improve Java now, and it turns out that later I am required to learn C++, will making the switch be difficult? So, in short, can Java be taken seriously for serious game development? This includes heavy graphics, fast gameplay without lag, and possibly an easy switch to consoles.

    Read the article

  • WPF Control Toolkits Comparison for LOB Apps

    In preparation for a new WPF project I've been researching options for WPF control toolkits. While we want a lot of the benefits of WPF, the application is a fairly typical line of business (LOB) application. So we're not focused on things like media and animations, but instead on a simple, solid, intuitive, and modern user interface that allows for well-architected separation of business logic and presentation layers. While WPF is mature, it hasn't lived the long life that WinForms has yet, so there is still a lot of room for third-party and community control toolkits to fill the gaps between the controls that ship with the Framework. There are two such gaps I was concerned about. As this is an LOB app, we have needs for presenting lots of data, and not surprisingly much of it is in grid format, with the need for high performance, grouping, inline editing, aggregation, printing and exporting and the other things that we've been doing with LOB apps for a long time. In addition, we want a dashboard style for the UI in which the user can rearrange and shrink and grow tiles that house the content and functionality. From a cost perspective, building these types of well-performing controls from scratch doesn't make sense. So I evaluated what you get from the .NET Framework along with a few different options for control toolkits. I tried to be fairly thorough, but know that this isn't a detailed benchmarking comparison or intense evaluation. It's just meant to be a feature-set comparison to be used when thinking about building an LOB app in WPF. I tried to list important feature differences and notes based on my experience with the trial versions and what I found in documentation, reference materials and samples. I've also listed the importance of the controls based on how I think they are needed in LOB apps. There are several toolkits available, but given I don't have unlimited time, I picked just a few. Maybe I'll add more later. The toolkits I compared are: Telerik's RadControls for WPF, since I had heard some good things about Telerik; Infragistics NetAdvantage WPF, since both I and the customer have some experience with the vendor's tools; the WPF Toolkit on CodePlex, since many of my colleagues have used it; and the Blacklight CodePlex project, which had WPF support for the Tile View control (with Release 4.3, WPF is no longer going to be supported in favor of focusing only on Silverlight controls, so I dropped it from the comparison). Click Here to Download the WPF Control Toolkits Comparison. Hopefully this helps someone out there. Feel free to post a comment on your experiences or if you think something I listed is incorrect or missing.

    Read the article

  • best way to "introduce" OOP/OOD to team of experienced C++ engineers

    - by DXM
    I am looking for an efficient way, one that doesn't come off as an insult, to introduce OOP concepts to existing team members. My teammates are not new to OO languages. We've been doing C++/C# for a long time, so the technology itself is familiar. However, I look around and, without a major infusion of effort (mostly in the form of code reviews), it seems what we are producing is C code that happens to be inside classes. There's almost no use of the single responsibility principle, abstractions, or attempts to minimize coupling, just to name a few. I've seen classes that don't have a constructor but get memset to 0 every time they are instantiated. But every time I bring up OOP, everyone always nods and makes it seem like they know exactly what I'm talking about. Knowing the concepts is good, but we (some more than others) seem to have a very hard time applying them when it comes to delivering actual work. Code reviews have been very helpful, but the problem with code reviews is that they only occur after the fact, so to some it seems we end up rewriting (it's mostly refactoring, but it still takes lots of time) code that was just written. Also, code reviews only give feedback to an individual engineer, not the entire team. I am toying with the idea of doing a presentation (or a series) and trying to bring up OOP again along with some examples of existing code that could've been written better and could be refactored. I could use some really old projects that no one owns anymore, so at least that part shouldn't be a sensitive issue. However, will this work? As I said, most people have done C++ for a long time, so my guess is that a) they'll sit there thinking why I'm telling them stuff they already know, or b) they might actually take it as an insult because I'm telling them they don't know how to do the job they've been doing for years if not decades. Is there another approach which would reach a broader audience than a code review would, but at the same time wouldn't feel like a punishment lecture? I'm not a fresh kid out of college who has utopian ideals of perfectly designed code, and I don't expect that from anyone. The reason I'm writing this is because I just did a review of a person who actually had a decent high-level design on paper. However, if you picture classes A - B - C - D, in the code B, C and D all implement almost the same public interface, and B/C have one-liner functions, so that the top-most class A is doing absolutely all the work (down to memory management, string parsing, setup negotiations...) primarily in 4 mongo methods and, for all intents and purposes, calls almost directly into D. Update: I'm a tech lead (6 months in this role) and do have the full support of the group manager. We are working on a very mature product, and maintenance costs are definitely letting themselves be known.

    Read the article

  • Addressing threats introduced by the BYOD trend

    - by kyap
    With the growth of the mobile technology segment, enterprises are facing a new type of threat introduced by the BYOD (Bring Your Own Device) trend, where employees use their own devices (laptops, tablets or smartphones), not necessarily secured, to access the corporate network and information. In the past - and actually even right now - enterprises have provided laptops to their employees for their daily work, with specific operating systems including anti-virus and desktop management tools, in order to make sure that the pool of allocated laptops is free of spyware and trojan horses when accessing the internal network and sensitive information. But the BYOD reality is breaking this paradigm and opening new security holes for enterprises, as most username/password-based systems, especially internal web applications, can be accessed by poorly protected or unprotected devices. To address this reality we can adopt 3 approaches: 1. Coué's approach: close your eyes and assume that your employees are mature enough to know what they should or should not do. 2. Consensus approach: provide a list of restricted and 'certified' devices for the internal network. 3. Military approach: access internal systems with certified laptops ONLY. If you choose option 1: thanks for visiting my blog, and I hope you find the other entries more useful :) If you choose option 2: the proliferation of new hardware and software updates every quarter makes this approach very costly and difficult to maintain. If you choose option 3: you need to find a way to allow access to your sensitive applications from corporate-authorized machines only, managed by the IT administrators... but how? The challenge with option 3 is to find a way to restrict access to certain sensitive applications to authorized machines only - or, from another angle, to ensure end users cannot access the sensitive applications if they are not using an authorized machine. So what if we find a way to store the application credentials secretly from the end users, and then automatically submit them when the end users access the application? With this model, end users do not know the username/password needed to access the applications, so even if they use their own devices they will not be able to log in. Also, there's no need to reconfigure existing applications to adapt to a new authentication scheme, given that we still leverage the same username/password authentication model at the application level. To adopt this model, you can leverage Oracle Enterprise Single Sign-On. In short, Oracle ESSO is a desktop-based solution capable of storing credentials for web and native applications. At application startup, if the application is configured as an esso-enabled application - check out my previous post on how to make Skype esso-enabled - Oracle ESSO automatically takes over the sign-in sequence with the stored credentials on behalf of the end user.
    Combined with the Oracle ESSO Provisioning Gateway, the credentials can be 'pushed' in advance from an actual provisioning server, like Oracle Identity Manager or Tivoli Identity Manager, so end users can log in to sensitive applications without even knowing the actual username and password, and they cannot log in from machines other than those secured by Oracle ESSO. Below is a graphical illustration of this approach. [Diagram: ESSO-based credential flow] With this model, not only can you ensure that access to sensitive applications comes only from authorized machines, you can also implement much stronger password policies in terms of password complexity and password reset frequency, while end users no longer need to remember the passwords at all. If you are interested, do not hesitate to check out the Oracle Enterprise Single Sign-On products on OTN!

    Read the article

  • What is SOA?

    - by llaszews
    First, let's mention what SOA is not: • SOA is not the same thing as web services. Web Services implies the use of standards such as Java/JAX-RPC, .NET or REST. Web Services also implies the use of WSDL, SOAP, and/or J2EE Connector Architecture (J2EE CA) and HTTP. SOA architectures can be implemented using J2EE CA, XML file transfer or Remote Procedure Call (RPC) over File Transfer Protocol (FTP), TCP/IP, Remote Method Invocation (RMI) or other protocols. In other words, Web Services are a very specific set of technologies. SOA is a concept and can be implemented in many different ways, some very rudimentary, such as transferring flat files between applications. • SOA will not solve all of your problems. It will make your business more agile, increase business visibility, reduce integration costs and provide better reuse. However, if you don't need help in these areas or expect SOA to cure all of your IT problems, you are looking in the wrong place. • The concepts behind SOA are not new, but SOA is also not mature. SOA as it stands today has really only been around for 5 years. The concepts of standards-based protocol handlers, predefined communication schemas and remote method invocation have been around for decades. So, what is SOA? SOA is an architectural blueprint, a way of developing applications, and a set of best practices. SOA is not an 'out of the box' solution you buy, install and then have up and running in a matter of months. SOA is a journey to a better way of doing business, and the technology architecture to support this better way of doing business. SOA is also a broader set of technologies than just web services. Technologies like an Enterprise Service Bus (ESB), Business Process Execution Language (BPEL), message queues and Business Activity Monitoring (BAM) are all part of a SOA architecture. Read more here: Oracle Modernization Solutions

    Read the article

  • Enterprise with eyes on NoSQL

    - by thegreeneman
    Since joining Oracle a few months back, I have had the fortune of being able to interact with a number of large enterprise organizations and discuss their current state of adoption of NoSQL database technology. It is worth noting that a large percentage of these organizations do have some NoSQL use and have been steadily increasing their understanding of its applicability for certain data management workloads. Through those discussions I've learned that one of the biggest issues confronting enterprise adoption of NoSQL databases seems to be the lack of standards for access, administration and monitoring. This was not so much of an issue with the early adopters of NoSQL technology, because they employed a highly DevOps-centric approach to application deployment, leaving a select few highly qualified developers with the task of managing in production the systems that they designed and implemented. However, as NoSQL technology moves out of the startup and into the hands of larger corporate entities, developers with a skill set broad enough to cover both development and IT-style production management are in short supply, and they quickly get moved on to new projects, often moving to different roles within the company. This difference between the way smaller, more agile startups operate and the way more established companies operate is revealing a gap in the NoSQL technology segment that needs to be addressed. This is one of the places where a company such as Oracle has a leg up on the NoSQL database front. Having gone through a database maturation process in the past, combined with a vast set of corporate relationships that have grown hand in hand with solving these types of issues, Oracle is in a great place to lead the way in closing the requirements gap for NoSQL technology. Oracle's understanding of the needs specific to mature organizations has already made its way into the Oracle NoSQL Database offering, with features such as one-click cluster deployment with visual topology planning, standards-based monitoring protocols such as SNMP, support for data access for reporting via standard SQL, and integration with emerging standards for data access such as MapReduce. Given the exciting developments we're driving in the Oracle NoSQL Database group, I will have a lot more to say about this topic as we move into the second half of the year.

    Read the article

  • How do I install OpenStack on a single Ubuntu 12.04 node?

    - by Sam Edwards
    I'm having trouble installing OpenStack in Ubuntu 12.04, for various reasons: The official Ubuntu website recommends Juju and MAAS. However, this is a single node I am trying to get OpenStack installed on, and MAAS requires "two or more nodes" according to the docs. Additionally, I don't have any experience in MAAS and Juju and would rather stick to technologies I am more familiar with so that I can debug problems that arise. I have tried StackGeek but this fails because the node only has a single Ethernet port. The node does, however, have the second hard drive required for the nova storage. I have tried DevStack but cannot log into the dashboard. The login form appears fine, but as soon as I try to submit the page, my browser begins loading indefinitely. I have tried installing straight from packages, but I get an Internal Server Error in the dashboard upon trying to log in, with no helpful logs anywhere in sight to aid me in debugging the issue. Each of these attempts was with a fresh Ubuntu 12.04 LTS setup; I'm finding it really strange that no matter what I try, I cannot get OpenStack installed. Is this even a stable/mature project? Why am I encountering so many bugs?

    Read the article

  • Network monitoring tools with API features

    - by Kev
    We use ks-soft's Advanced Host Monitor package to monitor around 2000 items on our network. We think it's great - the chap that supports it is fantastic, and the product is fast, stable and mature - but I feel that as we grow as a company it's beginning to show some friction points in the area of integration with our back-office admin systems. One of the things we'd like to do is be able to add new tests to whatever monitoring tool we use via an API. For example, when orders for servers come in from our retail interface, the server gets built automatically, and as part of the automated build process we'd like to automatically add new tests to the network monitoring systems. HostMonitor has some support for this via a feature called HM Script, but we're starting to encounter some speed bumps: we can't add new operators/users, and we can't define new "Action Profiles" - these are the actions to be taken when a test goes good or bad. What we love about HostMonitor, though, are the Action Profiles. For example, if a Windows IIS box goes bad, our action profile for a bad test does something like: check the host again (one time); wait another 30 seconds, then test again; try restarting the app pool on the remote machine (up to two times); send an email to ops about the restart failure; try restarting IIS on the remote machine (up to four times); page the duty admin (up to 5 times - stops after the duty admin ACKs the alert); page the backup duty admin (5 times - stops after the duty admin ACKs the alert). I'm starting to look around at other network monitoring tools, and I'm looking for: a comprehensive API to be able to add/remove/control tests, test "action profiles" and operators (not just plugins - we need control and admin interfaces); and the ability to have quite detailed action/escalation profiles (and to define these via an API). I've looked at Nagios and Icinga, but I can't seem to glean from their documentation whether we could have these features or not, or if we could, how much work would be involved to implement/customise them. Can anyone provide any advice, guidance or experiences?
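
    To make the "add tests via an API" requirement concrete, the call we would like to make from the provisioning pipeline would look roughly like the sketch below. Everything here is hypothetical - the endpoint, payload fields and token are invented for illustration and do not correspond to HostMonitor, Nagios or Icinga:

        import requests

        MONITOR_API = "https://monitor.example.internal/api/v1"   # hypothetical endpoint

        def add_http_check(hostname, action_profile="iis-restart-then-page"):
            """Register a new HTTP test for a freshly built server (hypothetical API)."""
            payload = {
                "name": "http-%s" % hostname,
                "type": "http",
                "target": "http://%s/healthcheck" % hostname,
                "interval_seconds": 60,
                "action_profile": action_profile,   # escalation steps defined on the server side
            }
            resp = requests.post(MONITOR_API + "/tests", json=payload,
                                 headers={"Authorization": "Bearer <api-token>"}, timeout=10)
            resp.raise_for_status()
            return resp.json()["id"]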

    Read the article

  • Widespread misinterpretation of DNS rules in resolving wildcards

    - by Dominic Sayers
    [EDITED to add: This problem has gone away on its own. I believe Cloudflare's name resolution may have been to blame. See my own answer below.] Here is a snippet of my zone file:
        *.example.com.    300 IN CNAME proxy.herokuapp.com.
        foo.example.com.  300 IN A     111.111.111.111
    If I dig @8.8.8.8 foo.example.com I get the answer I expect:
        ;; ANSWER SECTION:
        foo.example.com. 30 IN A 111.111.111.111
    The same is true of all other public DNS servers I've tried. However, when I try to set up a check with Pingdom to a URL on foo.example.com, it instead sends the traffic to my Heroku app referenced by the *.example.com RR. The same is true of checks set up on New Relic and Errplane, and of traffic generated by the Heroku app itself. So on the one side, all public DNS servers interpret the zone file one way. Yet four service providers all interpret it a different way - one that differs from the behaviour suggested by RFC 4592, under which an exact match such as foo.example.com should win over the wildcard. My question is: are these reputable, mature service providers all wrong? Or is it little me?
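
    A quick way to reproduce the comparison is to ask several resolvers directly and diff the answers; here is a small sketch using dnspython (the 2.x API - the resolver IPs and hostname below are examples only):

        import dns.resolver

        RESOLVERS = {"google": "8.8.8.8", "opendns": "208.67.222.222", "level3": "4.2.2.2"}
        NAME = "foo.example.com"

        for label, ip in sorted(RESOLVERS.items()):
            r = dns.resolver.Resolver(configure=False)
            r.nameservers = [ip]
            try:
                answer = r.resolve(NAME, "A")   # on dnspython 1.x use r.query(NAME, "A")
                print(label, sorted(rr.to_text() for rr in answer))
            except dns.resolver.NXDOMAIN:
                print(label, "NXDOMAIN")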

    Read the article

  • Surprising corruption and never-ending fsck after resizing a filesystem.

    - by Steve Kemp
    The system in question has Debian Lenny installed, running a 2.6.27.38 kernel. The system has 16GB memory and 8x1TB drives running behind a 3Ware RAID card. The storage is managed via LVM. Short version: we are running a KVM guest which had 1.7TB of storage allocated to it. The guest was reaching a full disk, so we decided to resize the disk that it was running upon. We're pretty familiar with LVM and KVM, so we figured this would be a painless operation: stop the KVM guest; extend the size of the LVM partition ("lvextend -L+500Gb ..."); check the filesystem ("e2fsck -f /dev/mapper/..."); resize the filesystem ("resize2fs /dev/mapper/..."); start the guest. The guest booted successfully, and running "df" showed the extra space; however, a short time later the system decided to remount the filesystem read-only, without any explicit indication of error. Being paranoid, we shut the guest down and ran the filesystem check again. Given the new size of the filesystem we expected this to take a while; however, it has now been running for 24 hours and there is no indication of how long it will take. Using strace I can see the fsck is "doing stuff"; similarly, running "vmstat 1" I can see that there are a lot of block input/output operations occurring. So now my question is threefold: Has anybody come across a similar situation? Generally we've done this kind of resize in the past with zero issues. What is the most likely cause? (The 3Ware card shows the RAID arrays of the backing stores as being A-OK, the host system hasn't rebooted and nothing in dmesg looks important/unusual.) Ignoring btrfs + ext3 (not mature enough to trust), should we make our larger partitions with a different filesystem in the future to avoid either this corruption (whatever the cause) or reduce the fsck time? xfs seems like the obvious candidate?

    Read the article

  • Recommended open-source firmware for ASUS RT-N16

    - by MasterF
    I have recently acquired an ASUS RT-N16 router. My original plan was to install Tomato on it. However, after checking their website I found that the firmware has not been updated in the last 2 years. There seem to be a few updated mods, but none of them really seemed mature/stable/well-documented. I would like to know what other people recommend as open-source firmware for this router. I know the answers will probably be subjective, so I will give a bit of background on my needs: for now I will only use the Wi-Fi with an Android phone; the connection will not be shared with anyone (so QoS is optional); I want a stable (wired) connection on my PC (for online gaming etc.); I want the (wired) download/upload speeds to be as close as possible to those achieved by plugging the Ethernet cable directly into the PC's network card (I have a 100 Mbps connection); my ISP uses PPPoE; my technical level: I am a software developer and I have good knowledge of bash scripting, but no experience with networking. Also, I know that I could probably just use the stock firmware (and maybe will use it for a while), but I'm interested in trying an open-source version (for more features, flexibility, as a learning exercise etc.).

    Read the article

  • CLI package to replace Plesk

    - by dotancohen
    Another programmer and I are tasked with maintaining a few web servers. I prefer CLI tools; she prefers Plesk. However, I am adamant about not installing Plesk, for quite a few reasons. I have written a small Python script for adding new domains, and now I am about to add the ability to configure email addresses while abstracting the details of Postfix from her. Before I go that route, I have googled to see if anything already exists, and am surprised that I have come up with nothing! Are there any mature, stable "control panels" or "server admin" tools like Plesk, but which are accessed via the CLI over SSH? I am looking for the following features: add / remove / configure domains served by Apache; add / remove / configure email boxes and mail groups; add / remove MySQL databases and users, and assign users to databases; provide basic monitoring of "server health", that is: memory usage, disk usage, CPU usage, bandwidth usage; and possibly set up SFTP accounts so that only specific FTP users can access specific /var/www/someSite/ directories. Note that I was unsure if this question is OT for ServerFault. As per the ServerFault about page (there seems to be no FAQ any more) this question meets two of the "ask about" criteria and zero of the "don't ask about" ones, with the possible exception of being opinion-based. Therefore, to keep it on-topic, I would like to know about the available applications, but we should stay objective and less opinionated. Thank you!
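
    The kind of CLI helper described above might look roughly like the sketch below (the paths, the vhost template and the Debian-style a2ensite/service reload commands are assumptions for illustration, not a reference to any particular tool):

        import subprocess
        import sys
        from pathlib import Path

        VHOST_TEMPLATE = """<VirtualHost *:80>
            ServerName {domain}
            DocumentRoot /var/www/{domain}/public
        </VirtualHost>
        """

        def add_domain(domain):
            """Create a docroot, write an Apache vhost and enable it (Debian-style layout assumed)."""
            (Path("/var/www") / domain / "public").mkdir(parents=True, exist_ok=True)
            conf = Path("/etc/apache2/sites-available") / ("%s.conf" % domain)
            conf.write_text(VHOST_TEMPLATE.format(domain=domain))
            subprocess.check_call(["a2ensite", "%s.conf" % domain])
            subprocess.check_call(["service", "apache2", "reload"])

        if __name__ == "__main__":
            add_domain(sys.argv[1])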

    Read the article

  • Distributed storage and computing

    - by Tim van Elteren
    Dear ServerFault community, after researching a number of distributed file systems for deployment in a production environment, with the main purpose of performing both batch and real-time distributed computing, I've identified the following list of potential candidates, based mainly on maturity, license and support: Ceph, Lustre, GlusterFS, HDFS, FhGFS, MooseFS and XtreemFS. The key properties that our system should exhibit: an open source, liberally licensed, yet production-ready solution, i.e. mature, reliable, and community- and commercially supported; the ability to run on commodity hardware, preferably being designed for it; high availability of the data, with the most focus on reads; high scalability, so operation over multiple data centres, possibly on a global scale; and removal of single points of failure through replication and distribution of (meta)data, i.e. fault tolerance. The sensitivity points that were identified, and that resulted in the following questions, are: Transparency to the processing layer/application with respect to data locality, i.e. knowing where data is physically located at a server level, mainly for resource allocation and fast, high-performance processing - how can this be accomplished? Do you know from experience which solutions provide this transparency, and to what extent? POSIX compliance, or conformance, is mentioned on the wiki pages of most of the solutions listed above. The main question here is: how relevant is support for the POSIX standard? Hadoop, for example, isn't POSIX-compliant by design - what are the pros and cons? What about the difference between synchronous and asynchronous operation of a distributed file system? Though a synchronous distributed file system is preferred for reliability, it also imposes certain limitations with respect to scalability. What would be, from your expertise, the way to go on this? I'm looking forward to your replies. Thanks in advance! :) With kind regards, Tim van Elteren

    Read the article
