Search Results

Search found 9349 results on 374 pages for 'turing complete'.


  • JMSContext, @JMSDestinationDefinition, DefaultJMSConnectionFactory with simplified JMS API: TOTD #213

    - by arungupta
    "What's New in JMS 2.0" Part 1 and Part 2 provide comprehensive introduction to new messaging features introduced in JMS 2.0. The biggest improvement in JMS 2.0 is introduction of the "new simplified API". This was explained in the Java EE 7 Launch Technical Keynote. You can watch a complete replay here. Sending and Receiving a JMS message using JMS 1.1 requires lot of boilerplate code, primarily because the API was designed 10+ years ago. Here is a code that shows how to send a message using JMS 1.1 API: @Statelesspublic class ClassicMessageSender { @Resource(lookup = "java:comp/DefaultJMSConnectionFactory") ConnectionFactory connectionFactory; @Resource(mappedName = "java:global/jms/myQueue") Queue demoQueue; public void sendMessage(String payload) { Connection connection = null; try { connection = connectionFactory.createConnection(); connection.start(); Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE); MessageProducer messageProducer = session.createProducer(demoQueue); TextMessage textMessage = session.createTextMessage(payload); messageProducer.send(textMessage); } catch (JMSException ex) { ex.printStackTrace(); } finally { if (connection != null) { try { connection.close(); } catch (JMSException ex) { ex.printStackTrace(); } } } }} There are several issues with this code: A JMS ConnectionFactory needs to be created in a application server-specific way before this application can run. Application-specific destination needs to be created in an application server-specific way before this application can run. Several intermediate objects need to be created to honor the JMS 1.1 API, e.g. ConnectionFactory -> Connection -> Session -> MessageProducer -> TextMessage. Everything is a checked exception and so try/catch block must be specified. Connection need to be explicitly started and closed, and that bloats even the finally block. The new JMS 2.0 simplified API code looks like: @Statelesspublic class SimplifiedMessageSender { @Inject JMSContext context; @Resource(mappedName="java:global/jms/myQueue") Queue myQueue; public void sendMessage(String message) { context.createProducer().send(myQueue, message); }} The code is significantly improved from the previous version in the following ways: The JMSContext interface combines in a single object the functionality of both the Connection and the Session in the earlier JMS APIs.  You can obtain a JMSContext object by simply injecting it with the @Inject annotation.  No need to explicitly specify a ConnectionFactory. A default ConnectionFactory under the JNDI name of java:comp/DefaultJMSConnectionFactory is used if no explicit ConnectionFactory is specified. The destination can be easily created using newly introduced @JMSDestinationDefinition as: @JMSDestinationDefinition(name = "java:global/jms/myQueue",        interfaceName = "javax.jms.Queue") It can be specified on any Java EE component and the destination is created during deployment. JMSContext, Session, Connection, JMSProducer and JMSConsumer objects are now AutoCloseable. This means that these resources are automatically closed when they go out of scope. This also obviates the need to explicitly start the connection JMSException is now a runtime exception. Method chaining on JMSProducers allows to use builder patterns. No need to create separate Message object, you can specify the message body as an argument to the send() method instead. Want to try this code ? Download source code! Download Java EE 7 SDK and install. 
    Start GlassFish: bin/asadmin start-domain
    Build the WAR (in the unzipped source code directory): mvn package
    Deploy the WAR: bin/asadmin deploy <source-code>/jms/target/jms-1.0-SNAPSHOT.war
    Then access the application at http://localhost:8080/jms-1.0-SNAPSHOT/index.jsp to send and receive a message using the classic and simplified APIs. A replay of the JMS 2.0 session from the Java EE 7 Launch Webinar provides complete details on what's new in this specification. Enjoy!
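    Receiving is just as concise with the simplified API. Here is a minimal sketch of a synchronous receiver using the same injected JMSContext and queue; it is not part of the sample above, so treat the bean name and the 5-second timeout as illustrative. Note how JMSConsumer being AutoCloseable keeps even the cleanup terse:

    import javax.annotation.Resource;
    import javax.ejb.Stateless;
    import javax.inject.Inject;
    import javax.jms.JMSConsumer;
    import javax.jms.JMSContext;
    import javax.jms.Queue;

    @Stateless
    public class SimplifiedMessageReceiver {

        @Inject
        JMSContext context;

        @Resource(mappedName = "java:global/jms/myQueue")
        Queue myQueue;

        // Synchronously receive the next message body, waiting up to 5 seconds.
        // receiveBody() returns the body directly, or null if the timeout expires.
        public String receiveMessage() {
            // JMSConsumer is AutoCloseable, so try-with-resources closes it for us.
            try (JMSConsumer consumer = context.createConsumer(myQueue)) {
                return consumer.receiveBody(String.class, 5000);
            }
        }
    }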

    Read the article

  • Implementing Search for BlogReader Windows 8 Sample

    - by Harish Ranganathan
    The BlogReader sample is an excellent place to start speeding up your Windows 8 development skills. The tutorial is available here and the complete source code is available here. Create a project called WindowsBlogReader, create pages for ItemsPage.xaml, SplitPage.xaml and DetailPage.xaml, and copy the corresponding code blocks from the sample listed above. Create a class file FeedData.cs and copy the code. Finally, create a class DateConverter.cs and copy the code associated with it. With that you should be able to build and run the project. There is one issue in the sample: the first feed listed (feed1) doesn't seem to be exposed, so you can skip it and use the second feed as the first. You will end up with one feed less, but it works. I demonstrated this in the recent TechDays at Chennai: how we can use the Search Contract and implement search within the blog titles.

    First off, we need to declare that the app will be using the Search Contract, in the Package.appxmanifest file. Next, we need a handle on the Search Contract when the user types in the search window of the Charms menu. If you completed the code sample from the link above, you will have ItemsPage.xaml and ItemsPage.xaml.cs. Open ItemsPage.xaml.cs and import the namespaces System.Collections.ObjectModel and System.Linq. In the ItemsPage() constructor, right after this.InitializeComponent(); add the following code:

    Windows.ApplicationModel.Search.SearchPane.GetForCurrentView().QuerySubmitted += ItemsPage_QuerySubmitted;

    This event is fired when users open the Search panel from the Charms menu, type something, and hit Enter. We need to handle the event declared in the delegate. For that, we need to pull the FeedDataSource instantiation up to the root of the class to make it global. So, add the following as the first line within the partial class:

    FeedDataSource feedDataSource;

    Also, modify the LoadState method as follows:

    protected override void LoadState(Object navigationParameter, Dictionary<String, Object> pageState)
    {
        feedDataSource = (FeedDataSource)App.Current.Resources["feedDataSource"];
        if (feedDataSource != null)
        {
            this.DefaultViewModel["Items"] = feedDataSource.Feeds;
        }
    }

    Next is to implement the ItemsPage_QuerySubmitted method:

    void ItemsPage_QuerySubmitted(Windows.ApplicationModel.Search.SearchPane sender, Windows.ApplicationModel.Search.SearchPaneQuerySubmittedEventArgs args)
    {
        this.DefaultViewModel["Items"] = from dynamic item in feedDataSource.Feeds
                                         where item.Title.Contains(args.QueryText)
                                         select item;
    }

    As you can see, we are using almost the same DefaultViewModel, with the change that we use a LINQ query to search for feeds whose Title contains the QueryText. With this we are ready to run the app. Run the app, open the Charms menu with the Windows + C key combination, and type some text to search within the blog. You can see that it filters the blogs containing the matching text. We can modify the above LINQ query to search for the text in other attributes like the description, the actual blog content, etc. I have uploaded the complete code since the original WindowsBlogReader code is not available for download. You can download it from here. Note: this code is provided as-is without any warranties. Cheers!!!

    Read the article

  • My Tech Ed North America Preview - Certification Edition

    - by Chris Gardner
    In my previous TechEd North America Preview, I addressed all the content I wanted to see at the show. This time, we shall turn our attention to the certifications I might try to pick up. If you have never been to TechEd North America before, one of the greatest things about the event is an on-site certification center. If you have a couple of hours to spare, you can walk up to a test. The first test on my agenda is 70-523 [1]. I took this update test once, but did not do well on the MVC portion [2]. A few practice tests later, and I think I'm ready to fake that section. After that, I need to complete my road to being a master. The good folks here at work have been having a real love / hate relationship with the idea of me becoming an MCM in SQL Server [3]. Of course, before I do that, I need to finally take the SQL administration tests. Thus, we shall add 70-432 [4] and 70-450 [5] to the list. Speaking of MCM, TechEd North America will have a special on test 88-970 [6]. This test is normally $500, and you have to find a place to take it [7]. However, there is a special 50% off rate for people who take it on location. With those kinds of prices, I may just take it as a form of study guide. As a final push, I may take some Windows Phone exams. I mentioned in my previous post that I may attend the 70-599 [8] Exam Cram session. Unfortunately, I will be staffing the Hands-On-Lab at that time. As we know, this has never stopped me from taking a test. This may lead to fits of 70-506 [9], but after we've come this far... That should complete my list. Do I really think I'll find time to take 6 tests at TechEd North America? Probably not. I have done it at TechEd North America before, but that was before I was TechEd North America staff. I also had a co-worker pass 9 in one year, but he basically did nothing but travel to Orlando in 2007 to take tests. And what's the point of attending a HUGE conference if you don't network? Of course, networking will have to wait for Friday's post...
    [1] Upgrade: Transition Your MCPD .NET Framework 3.5 Web Developer Skills to MCPD .NET Framework 4 Web Developer
    [2] Because I never have used, nor do I really think I ever will use, MVC...
    [3] By that, I mean they love the idea, and they hate the price
    [4] Microsoft SQL Server 2008, Implementation and Maintenance
    [5] PRO: Designing, Optimizing and Maintaining a Database Administrative Solution Using Microsoft SQL Server 2008
    [6] SQL Server 2008 Microsoft Certified Master: Knowledge Exam
    [7] Which isn't nearly as expensive as the Lab Exam, nor as difficult to find a location. However, it is not offered at every testing facility.
    [8] PRO: Designing and Developing Windows Phone Applications
    [9] TS: Silverlight 4, Development

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-26

    - by Bob Rhubart
    Software Architecture for High Availability in the Cloud | Brian Jimerson
    Brian Jimerson looks at the paradigm shifts from machine-based architectures to cloud-based architectures when designing fault tolerance, and how enterprise applications need to be engineered to ensure the highest level of availability in the cloud.
    SOA, Cloud & Service Technology Symposium 2012 London - Special Oracle Discount
    Registration is now open for one of the premier SOA, Cloud, and Service Technology events. Once again, the Oracle community is well represented in the session schedule. And now you can save on registration with a special Oracle discount code.
    Progress 4GL and DB to Oracle and cloud | Tom Laszewski
    "Getting from client/server based 4GLs and databases where the 4GL is tightly linked to the database to Oracle and the cloud is not easy," says cloud migration expert Tom Laszewski. "The least risky and expensive option...is to use the Progress OpenEdge DataServer for Oracle."
    Embrace 'big data' now or fall behind the competition, analyst warns | TechTarget
    TechTarget's Mark Brunelli's story says, in essence, that Big Data is not your father's Business Intelligence.
    Calculating the Size (in Bytes and MB) of an Oracle Coherence Cache | Ricardo Ferreira
    Ferreira illustrates a programmatic way to use the Oracle Coherence API to calculate the total size of a specific cache that resides in the data grid.
    WebCenter Portal Tutorial Part 7: Integrating Discussions and Link service | Yannick Ongena
    The latest chapter in Oracle ACE Yannick Ongena's ongoing series.
    How to Setup JDeveloper workspace for ADF Fusion Applications to run Business Component Tester? | Jack Desai
    Helpful technical tips from yet another member of the Oracle Fusion Middleware Architecture Team.
    Big Data for the Enterprise; Software Architecture for High Availability in the Cloud; Why Cloud Computing is a Paradigm Shift - And Why It Isn't
    This week on the OTN Solution Architect Homepage, along with an updated events list and this week's list of selected community blog posts.
    Worst Practices for Big Data | Dain Hansen
    Dain Hansen shares some insight on what NOT to do if you want to capitalize on Big Data.
    Free Virtual Developer Day - Oracle Fusion Development | Grant Ronald
    "The online conference will include seminars, hands-on lab and live chats with our technical staff including me!" says Grant Ronald. "And the best bit, it doesn't cost you a single penny. It's free and available right on your desktop."
    Penguin is Getting Ready for Oracle OpenWorld 2012 | Zeynep Koch
    Linux fan? Check out Zeynep Koch's post for a list of Linux-based sessions at Oracle OpenWorld 2012 in San Francisco.
    Amazon Web Services (AWS) Autoscaling | Frank Munz
    "Autoscaling on AWS can only be configured with lengthy commands from the command line but not from the web based AWS console," says Frank Munz. "Getting all the parameters right can be tricky." He demonstrates one easy example in this video.
    Oracle Fusion Applications Design Patterns Now Available For Developers | Ultan O'Broin
    "These Oracle Fusion Applications UX Design Patterns, or blueprints, enable Oracle applications developers and system implementers everywhere to leverage professional usability insight," says O'Broin.
    How Much Data Is Created Every Minute? [INFOGRAPHIC] | Mashable
    Explaining what the "Big" in Big Data really means -- and it's more than a little mind-boggling.
    Thought for the Day
    "Real, though miniature, Turing Tests are happening all the time, every day, whenever a person puts up with stupid computer software." — Jaron Lanier
    Source: SoftwareQuotes.com

    Read the article

  • Partner Blog: Hub City Media Introduces iPad Application for Oracle Identity Analytics

    - by Tanu Sood
    About the Writer: Steve Giovannetti is CTO of Hub City Media, Inc., a company that specializes in implementation and product development on the Oracle Identity Management platform. Recently, Hub City Media announced the introduction of the iPad application IdentityCert for Oracle Identity Analytics. This post explores the business use cases and application of IdentityCert.

    Hub City Media (HCM) has been deploying certification solutions based on Oracle Identity Analytics since it first appeared on the market as Vaau RBACx. With each deployment we've seen the same pattern repeat time and time again:
    1. Customers suffering under the weight of manual access certification regimens deploy Oracle Identity Analytics (OIA) for automated certification.
    2. OIA improves the frequency, speed, accuracy, and participation of certifications across the organization.
    3. Then the certifiers, typically managers and supervisors, ask, “Is there any easier way to do these certifications offline?”

    The current version of OIA has a way to export certification data to a spreadsheet. For some customers, we've leveraged this feature and combined it with some of our own custom code to provide a solution based on spreadsheet exports and imports. Customers export the certification to Microsoft Excel, complete it, and then import the spreadsheet to OIA. It worked well for offline certification, but if the user accidentally altered the format of the spreadsheet, the import of the data could fail. We were close to a solution, but it wasn’t reliable.

    Over the past few years, we've seen the proliferation of Apple iOS devices, specifically the iPhone and iPad, in the enterprise. As our customers were asking for offline certification, we noticed that the same population of users traditionally responsible for access certification were early adopters of the iPad. The environment seemed ideal for us to create an iPad application to support offline certifications using Oracle Identity Analytics. That’s why we created IdentityCert™.

    IdentityCert allows users to view their analytics dashboard, complete user certifications, and resolve policy violations with OIA, from their iPads. The current IdentityCert analytics dashboard displays the same charts that are available in the Oracle Identity Analytics product. However, we plan to expand the number of available analytics in future releases. The main function of IdentityCert is user certification, which can be performed quickly and efficiently using a simple touch interface. Managers tap into a certification and use simple gestures to claim users and certify their access. Certifications can be securely downloaded to IdentityCert and can be completed with or without a network connection. The user can upload the completed certifications once they are connected to a cellular or wi-fi network.

    Oracle Identity Analytics can generate policy violation notifications based on detective scans of the identity warehouse or via preventative analysis of identity access requests. IdentityCert allows users to view all policy violations and resolve or delegate them to appropriate users. IdentityCert also analyzes the policy violation expression and produces more human-friendly descriptions of the policy violation, which improves the ability of users to resolve the violation. IdentityCert can be deployed quickly into a customer's environment. It is deployed with Hub City Media's ID Services to connect Oracle Identity Analytics securely with the iPad application.

    Oracle Identity Management 11g R2 is an important evolutionary release. Oracle's Identity Management suite has more characteristics of a cohesive platform. This platform provides an integrated set of identity services that can be used to protect, manage, and audit security within the enterprise. At HCM we take the platform concept a step further and see it as an opportunity to create unique solutions for Oracle Identity Management customers. IdentityCert is our commitment to this platform. You can download IdentityCert from the Apple iOS App Store today. It includes a demo dataset that you can use to explore the functions of the product without any server infrastructure. Download it. Give it a try. We would appreciate your interest and welcome any feedback.

    Resources:
    Press Release: Hub City Media Introduces iPad Application IdentityCert™ for Oracle Identity Analytics
    App Store Download: http://bit.ly/IdentityCert
    Oracle Identity Governance Suite

    Read the article

  • Oracle Database Upcoming Event dates to know

    - by mandy.ho
    February may be a short month, but it's not short of exciting Oracle events. From information-packed "Real Performance Days" to participation in one of the biggest IT security events, look out for Oracle Database and let us know if you are there with us!

    Feb 13-18, 2011 - Las Vegas, NV
    TDWI World Conference Series
    Join Oracle in highlighting Exadata X2-2 and X2-8, along with Oracle Business Intelligence, Enterprise Performance Management and Data Warehousing solutions. Oracle will be presenting a workshop, Oracle Data Integration: Best-of-Breed Solutions for the Enterprise, on Wednesday, February 16, 2011, 7 p.m. - 9 p.m., with Glen Goodrich, Director of Product Management, and Christophe Dupupet, Director of Product Management, Data Integration.
    http://events.tdwi.org/events/las-vegas-world-conference-2011/sessions/session-list.aspx

    Feb 14-17, 2011 - Barcelona, Spain
    Mobile World Congress
    MWC is an event where Oracle showcases the near complete breadth and depth of value that our Communications Industry strategy and Hardware and Software Solutions can deliver. Oracle supports Communications Service Providers today and delivers platforms and flexibility primed for the future. Oracle will have a two-story Pavilion, along with an Oracle Java and Embedded Solutions Center - App Planet. Exhibition times are Monday, 14th February 09.00 - 19.00; Tuesday, 15th February 09.00 - 19.00; Wednesday, 16th February 09.00 - 19.00; Thursday, 17th February 09.00 - 16.00. Have questions? Meet with Oracle Sales representatives at the Oracle Café, open every day from 9:00 to 17:00.
    http://eventreg.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=109912&src=6973382&src=6973382&Act=4

    Feb 14-18, 2011 - San Francisco, CA
    RSA Conference
    As the world's most complete, open, integrated business software and hardware systems provider, Oracle can uniquely safeguard your information throughout its entire lifecycle. Learn more by attending these sessions:
    Cloud Computing: A Brave New World for Security and Privacy (CLD-201), Wednesday, February 16 at 8:30 a.m.
    Databases Under Attack - Securing Heterogeneous Database Infrastructures (DAS-301), Thursday, February 17 at 8:30 a.m.
    Seven Steps to Protecting Databases (DAS-402), Friday, February 18 at 10:10 a.m.
    RSA Conference attendees will also have the opportunity to meet with Oracle Security Solution experts, see live product demos and more by visiting booth #1559. Hours: Monday, February 14, 6:00 p.m. - 8:00 p.m.; Tuesday, February 15, 11:00 a.m. - 6:00 p.m.; Wednesday, February 16, 11:00 a.m. - 6:00 p.m.; Thursday, February 17, 11:00 a.m. - 3:00 p.m.
    http://eventreg.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=127657&src=6967733&src=6967733&Act=12

    Feb 21-25, 2011 - Various Locations
    IOUG Presents - A Day of Real World Performance with Tom Kyte, Andrew Holdsworth and Graham Wood
    These Oracle experts will debate, discuss and delineate the best practices for designing hardware architectures, deploying Oracle databases, and developing applications that deliver the fastest possible performance for your business. Topics are covered in a conversational format, with all three chiming in where appropriate. Each presenter has their own screen projector to demonstrate their individual points to the participants. Customers will have the opportunity to get their specific performance/tuning questions answered and learn how to balance all the different environmental requirements for their applications to improve performance. Register today for the following dates and locations:
    February 21 in San Diego, CA
    February 22 in Los Angeles, CA
    February 23 in Seattle, WA
    February 25 in Phoenix, AZ
    http://www.ioug.org/tabid/194/Default.aspx

    Feb 8-24 - Various Locations
    Oracle Enterprise Cloud Summit
    This series of full-day events with cloud experts, sharing real-world best practices, reference architectures and more, continues during the month of February. Attend the Oracle Enterprise Cloud Summit to learn how to:
    Build a state-of-the-art cloud architecture
    Leverage your existing IT investments
    Optimize your IT management processes
    Whether you are considering a move to cloud computing or have already adopted a cloud model, this event offers you the insights you need to take full advantage of cloud computing. Check below to see if the event is coming to a city near you.
    http://www.oracle.com/us/corporate/events/cloud-events-214342.html

    Read the article

  • My search for what the Cloud will mean for my Work, part 2

    - by Kay Sellenrode
    My experience with the cloud, and why work will change rather than disappear. So far I have had multiple experiences with the cloud, mostly good. I have worked on multiple cloud solutions in the past, but let me describe them as 0.x versions. My first really serious cloud experience was a bit more than a year ago, when our company switched from an in-house server to Microsoft BPOS as a complete replacement. Since we are a small consultancy firm and don't do much else than consulting, our IT requirements are quite simple: we need mail and storage space for our documents. With the in-house server we had multiple outages a year, mostly through lack of administration. Being consultants in the field and hardly having time to maintain a server, BPOS was and still is the right solution for us. Since the migration we have had fewer outages and a much more robust solution. Have we run into issues with BPOS for our own environment? No, not that I'm aware of.

    Based on this experience I took a stance on the deployability of BPOS and cloud solutions: they are suitable for the MKB (Dutch for small and medium-sized businesses). Most small businesses don't have enough work to hire a full-time IT admin, and hiring a service provider to maintain their own server might be even more costly than hiring an admin. So seeing the capabilities of BPOS and the needs of most businesses, I see it as a great solution that gives a business a complete server replacement for a fixed price per user, resulting in a clear budget for IT spending, something most small businesses have been looking for, for a long time.

    Right now I'm deploying BPOS with a customer, and I am running into some of the Cloud 1.0 issues. In my opinion BPOS is a well working Cloud 1.0 solution. What do I mean by 1.0? Well, 1.0 is mostly a tested solution (unlike 0.x versions) but it still has quite some limitations, caused by too little market experience. In my opinion this is also the reason why we don't see that many BPOS customers yet, and why I think Office 365 will make a huge difference. What I have seen of 365 shows me it is a Cloud 2.0 version, meaning it has all the needed features and is much more flexible for the customer. This is also why I see changes happening in my field of work: changes, not unemployment, due to cloud solutions. Cloud 1.0 solutions gave me the idea that if every customer adopted them I would be out of work. But in reality, Cloud 1.0 solutions are here just to set the market needs. The Cloud 2.0 and higher versions will give the customer much more flexibility, but they also require a consultant. Where the 1.0 versions are simple to set up and maintain, the 2.0 solutions need more thought up front and afterwards. For example, BPOS in its 1.0 version brings you a very simplified Exchange 2007 solution, suitable for some customers, while with Office 365 you receive an almost full-blown Exchange 2010 solution. I expect this to be even more customizable in the next version.

    In my search for the changes to my work, I try to regularly write a post with my thoughts around the cloud and the impact on my work as a consultant. I'm also planning to present around this topic, so if anyone is interested in seeing me present on it, you're more than welcome to contact me.

    Read the article

  • How to prepare for a programming competition? Graphs, Stacks, Trees, oh my! [closed]

    - by Simucal
    Last semester I attended ACM's (Association for Computing Machinery) bi-annual programming competition at a local university. My university sent 2 teams of 3 people, and we competed amongst other schools in the mid-west. We got our butts kicked. You are given a packet with about 11 problems (1 problem per page) and you have 4 hours to solve as many as you can. They run the programs you submit against a set of data, and your output must match theirs exactly. In fact, the judging is automated for the most part. In any case, I went there fairly confident in my programming skills, and I left there feeling drained and weak. It was a terribly humbling experience. In 4 hours my team of 3 people completed only one of the problems. The top team completed 4 of them and took 1st place. The problems they asked were like no problems I have ever had to answer before. I later learned that in order to solve some of them effectively you have to use graphs/graph algorithms, trees, and stacks. Some of them were simply "greedy" algorithms. My question is: how can I better prepare for this semester's programming competition so I don't leave there feeling like a complete moron? What tips do you have for me to be able to answer these problems that involve graphs, trees, and various "well known" algorithms? How can I easily identify the algorithm we should implement for a given problem? I have yet to take Algorithm Design in school, so I just feel a little out of my element. Here are some examples of the questions asked at the competitions: ACM Problem Sets

    Update: Just wanted to update this since the latest competition is over. My team placed 1st for our small region (about 6-7 universities with between 1-5 teams each) and ~15th for the midwest! So it is a marked improvement over last year's performance for sure. We also had no graduate students on our team, and after reviewing the rules we found out that many teams had several! That would be a pretty big advantage, in my opinion. Problems this semester ranged from 1-2 "easy" problems (i.e., bit manipulation, string manipulation) to hard (graph problems involving fairly complex math and network flow problems). We were able to solve 4 problems in our 5 hours. Just wanted to thank everyone for the resources they provided here; we used them for our weekly team practices, and they definitely helped! Some quick tips that I have that aren't suggested below:
    - When you are seated at your computer before the competition starts, quickly type out various data structures that you might need that aren't in your language's libraries. I typed out a graph data structure complete with the Floyd-Warshall and Dijkstra's algorithms before the competition began (see the sketch at the end of this post). We ended up using it in our 2nd solved problem, and this is the main reason why we solved it before anyone else in the midwest: we had it ready to go from the beginning.
    - Similarly, type out the code to read in a file, since this will be required for every problem. Save this answer "template" someplace so you can quickly copy/paste it to your IDE at the beginning of each problem. There are no rules against programming anything before the competition starts, so get any boilerplate code out of the way.
    - We found it useful to have one person on permanent whiteboard duty. This is usually the person who is best at math and at working out solutions, to get a head start on future problems.
    - One person is on permanent programming duty. This should be your fastest/most skilled "programmer" (the one most familiar with the language). This will save debugging time also.
    - The last person switches between assessing the packet for the next "easiest" problem, helping the person at the whiteboard work out solutions, and helping the person programming work out bugs/issues. This person needs to be flexible and able to switch between roles easily.
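    To give a flavor of the kind of boilerplate worth pre-typing, here is a minimal Dijkstra sketch in Java. It is illustrative only, not the actual code our team used:

    import java.util.*;

    // Adjacency-list Dijkstra with a priority queue: O((V + E) log V).
    public class Dijkstra {

        // Each edge is {neighbor, weight}; returns the shortest distance from
        // source to every node (Long.MAX_VALUE = unreachable).
        static long[] shortestPaths(List<int[]>[] adj, int source) {
            long[] dist = new long[adj.length];
            Arrays.fill(dist, Long.MAX_VALUE);
            dist[source] = 0;
            // Queue entries are {node, distance-at-insertion}.
            PriorityQueue<long[]> pq = new PriorityQueue<>((a, b) -> Long.compare(a[1], b[1]));
            pq.add(new long[]{source, 0});
            while (!pq.isEmpty()) {
                long[] top = pq.poll();
                int u = (int) top[0];
                if (top[1] > dist[u]) continue; // stale entry, already improved
                for (int[] edge : adj[u]) {
                    long nd = dist[u] + edge[1];
                    if (nd < dist[edge[0]]) {
                        dist[edge[0]] = nd;
                        pq.add(new long[]{edge[0], nd});
                    }
                }
            }
            return dist;
        }

        public static void main(String[] args) {
            // Tiny smoke test: 0->1 (1), 1->2 (2), 0->2 (5), 2->3 (1).
            @SuppressWarnings("unchecked")
            List<int[]>[] adj = new List[4];
            for (int i = 0; i < adj.length; i++) adj[i] = new ArrayList<>();
            adj[0].add(new int[]{1, 1});
            adj[1].add(new int[]{2, 2});
            adj[0].add(new int[]{2, 5});
            adj[2].add(new int[]{3, 1});
            System.out.println(Arrays.toString(shortestPaths(adj, 0))); // [0, 1, 3, 4]
        }
    }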

    Read the article

  • Tyrus 1.8

    - by Pavel Bucek
    Another version of Tyrus, the reference implementation of JSR 356 – Java API for WebSocket, is out! The complete list of fixes and features is below, but let me describe some of the new features in more detail. All information presented here is also available in the Tyrus documentation.

    What's new? The first thing to mention is that the JSR 356 Maintenance Review Ballot is over and the change proposed for the 1.1 release was accepted. More details about the changes in the API can be found in this article. The important part is that Tyrus 1.8 implements this API, meaning you can use lambda expressions and some features of Nashorn without the need for any workarounds.

    Almost all other features are related to client-side support, which was significantly improved in this release. Firstly, I have to admit that the Tyrus client contained a security issue: SSL hostname verification was not performed when connecting to "wss" endpoints. This was fixed as part of TYRUS-339 and resulted in some changes in the client configuration API. Now you can control whether hostname verification should be performed (SslEngineConfigurator#setHostnameVerificationEnabled(boolean)) or even set your own HostnameVerifier (please use carefully): #setHostnameVerifier(…). A detailed description can be found in the Host Verification chapter.

    Another related enhancement is support for the HTTP Basic and Digest authentication schemes. The Tyrus client now enables users to provide credentials, and the underlying implementation will take care of everything else. Our implementation is strictly non pre-emptive, so the login information is always sent as a response to a 401 HTTP status code. If Basic and Digest are not good enough and there is a need to use some custom scheme or something not yet supported in Tyrus, a custom Authenticator can be registered, and the authentication part of the handshake process will be handled by it. Please see the Client HTTP Authentication chapter in the user guide for more details.

    There are other features, like fine-grained threadpool configuration for the JDK client container, built-in HTTP redirect support, and some reshuffling related to unifying the location of client configuration classes and properties definitions: every property should now be part of the ClientProperties class. All new features are described in the user guide, in the chapter Tyrus Proprietary Configuration.

    Update – Tyrus 1.8.1: There was another, slightly late reported issue related to running in environments with a SecurityManager enabled, so this version fixes that. Other noteworthy fixes are TYRUS-355 and TYRUS-361; the first one is about an incorrect thread factory used for the shared container timeout, which resulted in the JVM waiting for that thread and not exiting as it should. The other issue enables relative URIs in the Location header when using the redirect feature.
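    To put the client-side pieces together, here is a rough sketch of a client that enables both hostname verification and HTTP Basic credentials. The endpoint class and URL are placeholders, and the configuration class and property names reflect the 1.8 package reshuffling described above, so double-check them against the Tyrus documentation before copying:

    import java.net.URI;
    import javax.websocket.ClientEndpoint;
    import javax.websocket.OnMessage;
    import org.glassfish.tyrus.client.ClientManager;
    import org.glassfish.tyrus.client.ClientProperties;
    import org.glassfish.tyrus.client.SslContextConfigurator;
    import org.glassfish.tyrus.client.SslEngineConfigurator;
    import org.glassfish.tyrus.client.auth.Credentials;

    public class SecureClientSketch {

        // Placeholder endpoint, just to make the sketch self-contained.
        @ClientEndpoint
        public static class SketchEndpoint {
            @OnMessage
            public void onMessage(String message) {
                System.out.println("Received: " + message);
            }
        }

        public static void main(String[] args) throws Exception {
            ClientManager client = ClientManager.createClient();

            // Hostname verification is on by default for "wss" since this fix;
            // setHostnameVerificationEnabled(false) would relax it for testing.
            SslEngineConfigurator ssl = new SslEngineConfigurator(
                    new SslContextConfigurator(), true, false, false);
            ssl.setHostnameVerificationEnabled(true);
            client.getProperties().put(ClientProperties.SSL_ENGINE_CONFIGURATOR, ssl);

            // HTTP Basic/Digest is strictly non pre-emptive: the credentials
            // are only sent in response to a 401 challenge.
            client.getProperties().put(ClientProperties.CREDENTIALS,
                    new Credentials("user", "password"));

            client.connectToServer(SketchEndpoint.class,
                    URI.create("wss://example.com/endpoint"));
        }
    }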
    Links:
    - Tyrus homepage
    - mailing list
    - JIRA

    Complete list of changes:

    Bug
    - [TYRUS-333] – Multiple endpoints on one client
    - [TYRUS-334] – When connection is closed by a peer, periodic heartbeat pong is not stopped
    - [TYRUS-336] – ReaderBuffer.getNextChars() keeps blocking a server thread after client has closed the session
    - [TYRUS-338] – JDK client SSL filter needs better synchronization during handshake phase
    - [TYRUS-339] – SSL hostname verification is missing
    - [TYRUS-340] – Test PathParamTest are not stable with JDK client
    - [TYRUS-341] – A control frame inside a stream of continuation frames is treated as the part of the stream
    - [TYRUS-343] – ControlFrameInDataStreamTest does not pass on GF
    - [TYRUS-345] – NPE is thrown, when shared container timeout property in JDK client is not set
    - [TYRUS-346] – IllegalStateException is thrown, when using proxy in JDK client
    - [TYRUS-347] – Introduce better synchronization in JDK client thread pool
    - [TYRUS-348] – When a client and server close connection simultaneously, JDK client throws NPE
    - [TYRUS-356] – Tyrus cannot determine the connection port for a wss URL
    - [TYRUS-357] – Exception thrown in MessageHandler#OnMessage is not caught in @OnError method
    - [TYRUS-359] – Client based on Java 7 Asynchronous IO makes application unexitable

    Improvement
    - [TYRUS-328] – JDK 1.7 AIO Client container – threads – (setting threadpool, limits, …)
    - [TYRUS-332] – Consolidate shared client properties into one file.
    - [TYRUS-337] – Create an SSL version of Basic Servlet test

    New Feature
    - [TYRUS-228] – Add client support for HTTP Basic/Digest

    Task
    - [TYRUS-330] – create/run tests/servlet/basic via wss
    - [TYRUS-335] – [clustering] – introduce RemoteSession and expose them via separate method (not include remote sessions in the getOpenSessions())
    - [TYRUS-344] – Introduce Client support for HTTP Redirect

    Read the article

  • Is a university education really worth it for a good programmer?

    - by Jon Purdy
    The title says it all, but here's the personal side of it: I've been doing design and programming for about as long as I can remember. If there's a programming problem, I can figure it out. (Though admittedly StackOverflow has allowed me to skip the figuring out and get straight to the doing in many instances.) I've made games, esoteric programming languages, and widgets and gizmos galore. I'm currently working on a general-purpose programming language. There's nothing I do better than programming. However, I'm just as passionate about design. Thus, when I left high school feeling that my design skills were lacking, I decided to attend university for New Media Design and Imaging, a digital design-related major. For a year, I diligently studied art and programmed in my free time. As the next year progressed, however, I was obligated to take fewer art and design classes and more technical classes. The trouble was of course that these classes were geared toward non-technical students, and were far beneath my skill level at the time. No amount of petitioning could overcome the institution's reluctance to allow me to test out of such classes, and the major offered no promise of any greater challenge in the future, so I took the extreme route: I switched into the technical equivalent of the major, New Media Interactive Development.

    A lot of my credits moved over into the new major, but many didn't. It would have been infeasible to switch to a more rigorous technical major such as Computer Science, and having tutored Computer Science students at every level here, I doubt I would be exposed to anything that I haven't already learned or won't eventually find out on my own, since I'm so involved in the field. I'm now on track to graduate perhaps a year later than I had planned, which puts a significant financial strain on my family and my future self. My schedule continues to be bogged down with classes that are wholly unnecessary for me to take. I'm being re-introduced to subjects that I've covered a thousand times over, simply because I've always been interested in it all. And though I succeed in avoiding the cynical and immature tactic of failing to complete work out of some undeserved sense of superiority, I'm becoming increasingly disillusioned by the lack of intellectual stimulation. Further, my school requires students to complete a number of quarters of co-op work experience, proportional to their major. My original major required two quarters, but my current one requires three, delaying my graduation even more. To top it all off, college is putting a severe strain on my relationship with my very close partner of a few years, so I've searched diligently for co-op jobs in my area, alas, to no avail.

    I'm now in my third year, and approaching the point past which I can no longer handle this. Either I keep my head down, get a degree no matter what it takes, and try to get a job with a company that will pay me enough to do what I love so that I can eventually pay off my loans; or I cut my losses now, move wherever there is work, and in six months start paying off what debt I've accumulated thus far. So the real question is: is a university education really more than just a formality? It's a big decision, and one I can't make lightly. I think this is the appropriate venue for this kind of question, and I hope it sticks around for the sake of others who might someday find themselves in similar situations. My heartfelt thanks for reading, and in advance for your help.

    Read the article

  • Join us on our Journey to be #1 in SaaS!

    - by jessica.ebbelaar(at)oracle.com
    WHY ORACLE? Oracle is a robust organization that has proven to maintain growth and innovation at all levels, with a constantly evolving attitude. The main ingredient of Oracle's success is the 105,000 talented employees who constantly amaze each other in building a better and more innovative organization. Oracle is a company where YOU can make a difference.

    What is OD? Oracle Direct is a state-of-the-art, multi-channel EMEA sales operation bringing to life the benefits of Oracle’s complete technology stack. It offers you the unique opportunity to work with the most talented and like-minded sales professionals in the industry. You will have access to world-class training and structured career development programmes, allowing you to accelerate your Solution Sales career across a multitude of product lines and a choice of attractive locations.

    What positions is OD hiring for? Oracle is on a journey to be the #1 SaaS vendor in EMEA. Due to recent expansion and acquisitions within our Cloud Business, we are now growing our EMEA Cloud Applications Sales Group in Dublin. We have many exciting NEW opportunities across our CRM and HCM SaaS Sales teams. As a SaaS Sales Account Manager, you will proactively manage an assigned territory / vertical with responsibility for the full sales cycle. This role requires strong business development, solution selling, account management and closing skills.

    What is the Business Development Group (BDG)? The Business Development Group is the key entry point in Oracle for the future Sales and Management talent of the organisation. We are the Demand Generation engine for Oracle in EMEA. We provide revenue-generating, quality sales pipeline to our Inside and Field Sales professionals as well as to our Channel Partners. Our current focus is to provide an agile and flexible service offering to our customers and stakeholders to meet ever-changing business needs, whilst constantly striving to improve the customer experience, quality of our pipeline, market coverage and penetration. As a SaaS Business Development Consultant (BDC) you will be the first touch point with new customers. Your goal is to proactively identify and qualify business opportunities leading to revenue for Oracle. You will work closely with your Inside Sales colleagues, who will progress your qualified pipeline and opportunities.

    Work for us:
    - Work for the only multi-pillar SaaS vendor in the market
    - Be part of a FUN, fast paced and truly international sales team
    - Develop your solution sales EXPERTISE
    - Drive your CAREER development within a structured and supportive environment

    The Profile:
    - You have a passion for selling cutting-edge technology
    - You thrive in a fast paced and dynamic work environment where being the best is paramount
    - Your priority is always the customer
    - You live for a challenge and you love to win

    Join us on our Journey to be #1 in SaaS and be part of our Cloud Success Story! You will find more information about open roles here.

    Read the article

  • Oracle Fusion Supply Chain Management (SCM) Designs May Improve End User Productivity

    - by Applications User Experience
    By Applications User Experience on March 10, 2011
    Michele Molnar, Senior Usability Engineer, Applications User Experience

    The Challenge: The SCM User Experience team, in close collaboration with product management and strategy, completely redesigned the user experience for Oracle Fusion Applications. One of the goals of this redesign was to increase end user productivity by applying design patterns and guidelines and incorporating findings from extensive usability research. But a question remained: How do we know that the Oracle Fusion designs will actually increase end user productivity?

    The Test: To answer this question, the SCM usability engineers compared Oracle Fusion designs to their corresponding existing Oracle applications using the workflow time analysis method. The workflow time analysis method breaks tasks into a sequence of operators. By applying standard time estimates for all of the operators in the task, an estimate of the overall task time can be calculated. The workflow time analysis method has recently been adopted by the Applications User Experience group for use in predicting end user productivity. Using this method, a design can be tested and refined as needed to improve productivity even before the design is coded. For the study, we selected some of our recent designs for Oracle Fusion Product Information Management (PIM). The designs encompassed tasks performed by Product Managers to create, manage, and define products for their organization. (See Figure 1 for an example.) In applying this method, the SCM usability engineers collaborated with Product Management to compare the new Oracle Fusion Applications designs against Oracle’s existing applications. Together, we performed the following activities:
    - Identified the five most frequently performed tasks
    - Created detailed task scenarios that provided the context for each task
    - Conducted task walkthroughs
    - Analyzed and documented the steps and flow required to complete each task
    - Applied standard time estimates to the operators in each task to estimate the overall task completion time

    Figure 1. The interactions on each Oracle Fusion Product Information Management screen were documented, as indicated by the red highlighting. The task scenario and script provided the context for each task.

    The Results: The workflow time analysis method predicted that the Oracle Fusion Applications designs would result in productivity gains in each task, ranging from 8% to 62%, with an overall productivity gain of 43%. All other factors being equal, the new designs should enable these tasks to be completed in about half the time it takes with existing Oracle Applications. Further analysis revealed that these performance gains would be achieved by reducing the number of clicks and screens needed to complete the tasks.

    Conclusions: Using the workflow time analysis method, we can expect the Oracle Fusion Applications redesign to succeed in improving end user productivity. The workflow time analysis method appears to be an effective and efficient tool for testing, refining, and retesting designs to optimize productivity. The workflow time analysis method does not replace usability testing with end users, but it can be used as an early predictor of design productivity even before designs are coded. We are planning to conduct usability tests later in the development cycle to compare actual end user data with the workflow time analysis results. Such results can potentially be used to validate the productivity improvement predictions.
Used together, the workflow time analysis method and usability testing will enable us to continue creating, evaluating, and delivering Oracle Fusion designs that exceed the expectations of our end users, both in the quality of the user experience and in productivity. (For more information about studying productivity, refer to the Measuring User Productivity blog.)
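    To make the operator arithmetic concrete, here is a small illustrative sketch. The operator names, per-operator seconds, and counts below are assumed, keystroke-level-model-style values chosen for illustration, not figures from the Oracle study:

    import java.util.Map;

    // Workflow time analysis in miniature: task time is the sum of
    // (operator count x standard time estimate per operator).
    public class WorkflowTime {
        public static void main(String[] args) {
            // Assumed per-operator estimates, in seconds.
            Map<String, Double> secondsPerOperator = Map.of(
                    "keystroke", 0.2,
                    "point", 1.1,     // move pointer to a target
                    "click", 0.2,
                    "mental", 1.35,   // prepare / decide
                    "pageLoad", 2.0);

            // Assumed operator counts for one walked-through task.
            Map<String, Integer> taskOperatorCounts = Map.of(
                    "keystroke", 40, "point", 6, "click", 8,
                    "mental", 5, "pageLoad", 2);

            double total = taskOperatorCounts.entrySet().stream()
                    .mapToDouble(e -> e.getValue() * secondsPerOperator.get(e.getKey()))
                    .sum();
            System.out.printf("Estimated task time: %.1f seconds%n", total);
        }
    }

    Comparing this total for the old design against the same computation for the new design gives the kind of percentage productivity gain reported above.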

    Read the article

  • Designing status management for a file processing module

    - by bot
    The background: One of the functions of a product that I am currently working on is to process a set of compressed files (containing XML files) that will be made available at a fixed location periodically (local or remote location, it doesn't really matter for now) and dump the contents of each XML file in a database. I have taken care of the design for a generic parsing module that should be able to accommodate the parsing of any file type, as I have explained in the question linked below. There is no need to take a look at the following link to answer my question, but it would definitely provide better context to the problem: Generic file parser design in Java using the Strategy pattern

    The Goal: I want to be able to keep track of the status of each XML file and the status of each compressed file containing the XML files. I can probably have different statuses defined for the XML files, such as NEW, PROCESSING, LOADING, COMPLETE or FAILED. I can derive the status of a compressed file based on the status of the XML files within it. E.g., the status of the compressed file is COMPLETE if no XML file inside it is in a FAILED state, or the status of the compressed file is FAILED if the status of at least one XML file inside it is FAILED.

    A possible solution: The Model. I need to maintain the status of each XML file and the compressed file. I will have to define some POJOs for holding the information about an XML file, as shown below. Note that there is no need to store the status of a compressed file, as it can be derived from the status of its XML files.

    public class FileInformation {
        private String compressedFileName;
        private String xmlFileName;
        private long lastModifiedDate;
        private int status;

        public FileInformation(final String compressedFileName, final String xmlFileName,
                               final long lastModified, final int status) {
            this.compressedFileName = compressedFileName;
            this.xmlFileName = xmlFileName;
            this.lastModifiedDate = lastModified;
            this.status = status;
        }
    }

    I can then have a class called StatusManager that aggregates a Map of FileInformation instances and provides me the status of a given file at any given time in the lifetime of the application, as shown below:

    public class StatusManager {
        private Map<String, FileInformation> processingMap = new HashMap<String, FileInformation>();

        public void add(FileInformation fileInformation) {
            fileInformation.setStatus(0); // 0 indicates that the file is in the NEW state, 1 that it is in process, and so on...
            processingMap.put(fileInformation.getXmlFileName(), fileInformation);
        }

        public void update(String filename, int status) {
            FileInformation fileInformation = processingMap.get(filename);
            fileInformation.setStatus(status);
        }
    }

    That takes care of the model, for the sake of explanation. So what's my question? (Edited after comments from Loki and an answer from Eric.) I would like to know if there are any existing design patterns that I can refer to while coming up with a design. I would also like to know how I should go about designing the status management classes. I am more interested in understanding how I can model the status management classes. I am not interested in how other components are going to be updated about a change in status at the moment, as suggested by Eric.
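    As one possible direction for the derivation itself, here is an illustrative sketch using an enum instead of int status codes. The names and the rules are taken from the question above, not a definitive design:

    import java.util.Collection;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;

    // Derives a compressed file's status from the statuses of its XML files.
    public class ArchiveStatusManager {

        public enum Status { NEW, PROCESSING, LOADING, COMPLETE, FAILED }

        // compressedFileName -> (xmlFileName -> status)
        private final Map<String, Map<String, Status>> byArchive =
                new HashMap<String, Map<String, Status>>();

        public void update(String compressedFileName, String xmlFileName, Status status) {
            Map<String, Status> xmlStatuses = byArchive.get(compressedFileName);
            if (xmlStatuses == null) {
                xmlStatuses = new HashMap<String, Status>();
                byArchive.put(compressedFileName, xmlStatuses);
            }
            xmlStatuses.put(xmlFileName, status);
        }

        // FAILED if at least one XML file failed; COMPLETE only when every
        // XML file completed; otherwise the archive is still in flight.
        public Status statusOf(String compressedFileName) {
            Collection<Status> statuses = byArchive.containsKey(compressedFileName)
                    ? byArchive.get(compressedFileName).values()
                    : Collections.<Status>emptySet();
            if (statuses.isEmpty()) return Status.NEW;
            boolean allComplete = true;
            for (Status s : statuses) {
                if (s == Status.FAILED) return Status.FAILED;
                if (s != Status.COMPLETE) allComplete = false;
            }
            return allComplete ? Status.COMPLETE : Status.PROCESSING;
        }
    }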

    Read the article

  • Cloud hosted CI for .NET projects

    - by Scott Dorman
    Originally posted on: http://geekswithblogs.net/sdorman/archive/2014/06/02/cloud-hosted-ci-for-.net-projects.aspx

    Continuous integration (CI) is important. If you don’t have it set up…you should. There are a lot of different options available for hosting your own CI server, but they all require you to maintain your own infrastructure. If you’re a business, that generally isn’t a problem. However, if you have some open source projects hosted, for example on GitHub, there haven’t really been any options. That has changed with the latest release of AppVeyor, which bills itself as “Continuous integration for busy developers.”

    What’s different about AppVeyor is that it’s a hosted solution. Why is that important? By being a hosted solution, it means that I don’t have to maintain my own infrastructure for a build server. How does that help if you’re hosting an open source project? AppVeyor has a really competitive pricing plan. For an unlimited amount of public repositories, it’s free. That gives you a cloud hosted CI system for all of your GitHub projects for the cost of some time to set them up, which actually isn’t hard to do at all.

    I have several open source projects (hosted at https://github.com/scottdorman), so I signed up using my GitHub credentials. AppVeyor fully supported my two-factor authentication with GitHub, so I never once had to enter my password for GitHub into AppVeyor. Once it was done, I authorized GitHub and it instantly found all of the repositories I have (both the ones I created and the ones I cloned from elsewhere). You can even add “build badges” to your markdown files in GitHub, so anyone who visits your project can see the status of the latest build. Out of the box, you can simply select a repository, add the build project, click New Build and wait for the build to complete. You now have a complete CI server running for your project.

    The best part of this, besides the fact that it “just worked” with almost zero configuration, is that you can configure it through a web-based interface, which is very streamlined, clean and easy to use, or you can use an appveyor.yml file (a sketch follows at the end of this post). This means that you can define your CI build process (including any scripts that might need to be run, etc.) in a standard file format (the YAML format) and store it in your repository. The benefits to that are huge. The file becomes a versioned artifact in your source control system, so it can be branched, merged, and is completely transparent to anyone working on the project. By the way, AppVeyor isn’t limited to just GitHub. It currently supports GitHub, BitBucket, Visual Studio Online, and Kiln.

    I did have a few issues getting one of my projects to build, but the same day I posted the problem to the support forum a fix was deployed, and I had a functioning CI build about 5 minutes after that. Since then, I’ve provided some additional feature requests and had a few other questions, all of which have seen responses within a 24-hour period. I have to say that it’s easily been one of the best customer support experiences I’ve seen in a long time.

    AppVeyor is still young, so it doesn’t yet have full feature parity with some of the older (more established) CI systems available, but it’s getting better all the time and I have no doubt that it will quickly catch up to those other CI systems and then pass them. The bottom line: if you’re looking for a good cloud-hosted CI system for your .NET-based projects, look at AppVeyor.
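    For reference, here is a minimal sketch of what an appveyor.yml for a .NET project might contain. The solution and assembly names are placeholders, and the keys should be checked against the AppVeyor documentation, which remains the authoritative schema:

    # Illustrative appveyor.yml sketch; MySolution.sln is a placeholder.
    version: 1.0.{build}

    configuration: Release

    before_build:
      - nuget restore MySolution.sln

    build:
      project: MySolution.sln

    test:
      assemblies:
        - '**\*.Tests.dll'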

    Read the article

  • My collection of favourite TFS utilities

    - by Aaron Kowall
    So, you’re in charge of your company or team’s Team Foundation Server. Wish it was easier to manage, administer, extend? Well, here are a few utilities that I highly recommend looking at. I’ve recently had need to rebuild my laptop and upgrade my local TFS environment to TFS 2012 Update 1. This gave me cause to enumerate some of the utilities I like to have on hand.

    One of the reasons I love to use TFS on projects is that it’s basically a complete ALM toolkit. Everything from task management, version control, build management, test management, metrics and reporting is there ‘in the box’. However, no matter how complete a product set is, there are always ways to make it better. Here is a list of utilities and libraries that are pretty generally useful. This is not intended to be an exhaustive list of TFS extensions, but rather a set that I recommend you look at. There are many more out there that may be applicable in one scenario or another. This set of tools should work with TFS 2012 or 2010 if you grab the right version. Most of these tools (and more) are available from the Visual Studio Gallery or CodePlex.

    General
    - TFS Power Tools – This is ‘the’ collection of utilities and extensions delivered by the Product Group. Highly recommended from here are the Best Practice Analyzer, for ensuring your TFS implementation is healthy, and Team Foundation Server Backups, to ensure your TFS databases are backed up correctly.
    - TFS Administrators Toolkit – Helps make updates to work item types and reports across many team projects. Also provides visibility of disk usage, by finding large files in version control or test attachments, to assist in managing storage utilization.

    Version Control
    - Git-TF – A set of cross-platform, command line tools that facilitate sharing of changes between TFS and Git. These tools allow a developer to use a local Git repository and configure it to share changes with a TFS server. Great for all Git lovers who must integrate with a TFS repository.

    Testing
    - TFS 2012 Tester Power Tool – A utility for bulk copying test cases, which assists in an approach for managing test cases across multiple releases. A little plug that this utility was written and maintained by Anna Russo of Imaginet, where I also work.
    - Test Scribe – A documentation power tool designed to construct documents directly from TFS for test plan and test run artifacts, for the purpose of discussion, reporting, etc.

    Reporting
    - Community TFS Report Extensions – A single repository of SQL Server Reporting Services reports for Team Foundation 2010 (and above). Check out the Test Plan Status report by Imaginet’s Steve St. Jean. Very valuable for your test managers.

    Builds
    - TFS Build Manager – A great utility if you are the build manager over a complex build environment with many TFS build definitions.
    - Community TFS Build Extensions – Contains many custom build activities. Current release binaries are for TFS 2010, but many of the activities can be recompiled for use with TFS 2012.

    While compiling this list, I was surprised by the number of TFS utilities and extensions I no longer use/need in TFS 2012 because of the great work by the TFS team addressing many gaps since the 2010 release. Are there any utilities you depend on that I’ve missed? I’d love to hear about them in the comments!

    Read the article

  • Where Have All the Ugly Forms Gone? Users and ADF Took Care Of It

    - by ultan o'broin
    Sometimes I hear that our application demos are a bit too "cutesy" and that we never talk about user roles that have lots of data entry as a requirement. Some (no names) consider those old clunker forms, with the myriad rows of fields, to be super-productive for data clerks. We do have such roles covered in Oracle Fusion Applications, for sure. But consider what is really the issue here: productivity. Check out how the Oracle Fusion Financials Applications User Experience team went about designing for productivity when receiving and entering invoice data, for example. See how Fusion Financials caters so well for input and control of data? Central to all this is knowing the users and how they work: what tasks do they need to perform, and when. Read more about Fusion Financials productivity in the white paper, Get It Done Fast, Get It Done Right: The Oracle Fusion Financials User Experience. Now and then, I see forms that weren't designed for end user activity at all. Instead, they were designed by developers or by the IT department around the database schema. Forms with literally dozens of fields on the same page, sometimes. Forms that give the impression there was only one task involved, when there may have been several. At times, completing one of these huge forms accurately became so tedious that, under pressure, it made more sense for the user to complete it as quickly as possible and then let somebody else check it for accuracy and fill in the gaps from data emailed along in spreadsheet form. Data accuracy is critical in our business. Not good. Not efficient. Not productive. So here are a few basics on forms design for data entry-type user roles. A great place for developers to start exploring what is possible with forms layout is the Rich Client User Experience (RCUX) guidance on Form Layout, using ADF components.

    User-Centered Forms Design Considerations
    The starting point, something you must always keep in mind with your own design, is design for the end user. Find a representative end user, and keep that user engaged throughout the design, deployment, and test process. Consider these points in user testing those forms:
    * Are there automated or technical solutions to entering the data that avoid manual input in the first place? For example, imports, uploads, OCR, whatever. Some day we will be able to tell Siri to do it, but leave that for now.
    * Design your form to reflect the task involved (i.e., the business process) and not the database schema.
    * On the form, group like fields together, logically.
    * Eliminate duplicate data entry, or prepopulate from previous data entry.
    * Allow users to complete fields in the order they wish (i.e., no interdependency).
    * Allow for tabbing between fields (keyboard is faster than mouse), so know how the browser supports this (see that RCUX guideline).
    * Allow for final validation at the page level, not at field-level entry. Way better for heads-down users. For example, ADF messages allow you to see a list of all validation errors on a page on a final submit or navigation action, and to easily navigate to the point of error (a short hypothetical fragment follows this list).
    * Better still, be error tolerant. Allow users to enter data in formats they are comfortable with. Bind any relevant user preference setting to the input format allowed (for example, the locale date format). Explore what data entry conversion can do for you automatically too (see the ADF converter demos; convenience patterns can also be written).
    * Only ask for data input when it's needed. Get rid of, or hide, optional fields.
    * Cut down on the number of mandatory fields, and mark them clearly (use a *).
    * Clearly label the fields in plain language.
    I am sure you have a few more tips on forms design for data entry users. Remember the user, and share your tips in the comments.
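    As a purely illustrative sketch of the page-level validation point above, an ADF Faces page fragment might pair a page-level af:messages component with clearly marked mandatory fields. The component names are standard ADF Faces; the field names and bindings are made up:

        <!-- Collects all validation errors in one list on submit,
             instead of interrupting the user field by field -->
        <af:messages id="msgs"/>
        <!-- required="true" renders the * marker; the label is plain language -->
        <af:inputText id="invNum" label="Invoice Number" required="true"
                      value="#{bindings.InvoiceNumber.inputValue}"/>
        <af:inputDate id="invDate" label="Invoice Date"
                      value="#{bindings.InvoiceDate.inputValue}"/>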

    Read the article

  • How do you formulate the Domain Model in Domain Driven Design properly (Bounded Contexts, Domains)?

    - by lko
    Say you have a few applications which deal with a few different Core Domains. The examples are made up, and it's hard to put a real example with meaningful data together concisely. In Domain Driven Design (DDD), when you start looking at Bounded Contexts and Domains/Sub-Domains, it says that a Bounded Context is a "phase" in a lifecycle. An example of a Context here would be within an ecommerce system. Although you could model this as a single system, it would also warrant splitting into separate Contexts. Each of these areas within the application has its own Ubiquitous Language, its own Model, and a way to talk to other Bounded Contexts to obtain the information it needs. The Core, Sub, and Generic Domains are the areas of expertise and can be numerous in complex applications. Say there is a long process dealing with an Entity, for example a Book, in a core domain. Now looking at the Bounded Contexts, there can be a number of phases in the book's life-cycle: say outline, creation, correction, publish, and sale phases. Now imagine a second core domain, perhaps a store domain. The publisher has its own branch of stores to sell books. The store can have a number of Bounded Contexts (life-cycle phases), for example a "Stock" or "Inventory" context. In the first domain there is probably a Book database table with basically just an ID to track the different book Entities in the different life-cycles. Now suppose you have 10+ supporting domains, e.g. Users, Catalogs, Inventory, .. (hard to think of relevant examples). For example, there is a Domain Model for the Book Outline phase, the Creation phase, the Correction phase, the Publish phase, and the Sale phase. Then the Store core domain probably has a number of life-cycle phases of its own.

    public class BookId : Entity
    {
        public long Id { get; set; }
    }

    In the creation phase (Bounded Context) the book could be a simple class.

    public class Book : BookId
    {
        public string Title { get; set; }
        public List<string> Chapters { get; set; }
        //...
    }

    Whereas in the publish phase (Bounded Context) it would have all the text, release date etc.

    public class Book : BookId
    {
        public DateTime ReleaseDate { get; set; }
        //...
    }

    The immediate benefit I can see in separating by "life-cycle phase" is that it's a great way to separate business logic so there aren't mammoth all-encompassing Entities nor Domain Services. A problem I have is figuring out how to concretely define the rules for the physical layout of the Domain Model.

    A. Does the Domain Model get "modeled" so there are as many Bounded Contexts (separate projects etc.) as there are life-cycle phases across the core domains in a complex application? Edit: Answer to A. Yes, according to the answer by Alexey Zimarev, there should be an entire "Domain" for each Bounded Context.

    B. Is the Domain Model typically arranged by Bounded Contexts (or Domains, or both)? Edit: Answer to B. Each Bounded Context should have its own complete "Domain" (Service/Entities/VOs/Repositories).

    C. Does it mean there can easily be tens of "segregated" Domain Models, and multiple projects can use them (the Entities/Value Objects)? Edit: Answer to C. There is a complete "Domain" for each Bounded Context, and the Domain Model (Entity/VO layer/project) isn't "used" by the other Bounded Contexts directly, only via chosen paths (i.e. via Domain Events; see the illustrative sketch below).

    The part that I am trying to figure out is how the Domain Model is actually implemented once you start to figure out your Bounded Contexts and Core/Sub-Domains, particularly in complex applications.
The goal is to establish the definitions which can help to separate Entities between the Bounded Contexts and Domains.
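    As an illustration of those "chosen paths" in the answer to C, here is a hypothetical C# sketch (all names invented) of a Domain Event crossing from the publish Bounded Context to the store Bounded Context without sharing Entities:

        using System;

        // Raised by the Publish context; carries only the data other contexts need.
        public class BookPublished
        {
            public long BookId { get; set; }
            public DateTime ReleaseDate { get; set; }
        }

        // Lives in the Store context; a hypothetical repository abstraction.
        public interface IStockRepository
        {
            void AddTitle(long bookId, DateTime releaseDate);
        }

        // The Store context reacts to the event and updates its own model;
        // it never references the Publish context's Book entity directly.
        public class BookPublishedHandler
        {
            private readonly IStockRepository stock;

            public BookPublishedHandler(IStockRepository stock)
            {
                this.stock = stock;
            }

            public void Handle(BookPublished domainEvent)
            {
                stock.AddTitle(domainEvent.BookId, domainEvent.ReleaseDate);
            }
        }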

    Read the article

  • Separation of project responsibilities in a new project

    - by dreza
    We have very recently started a new project (MVC 3.0), and some of our early discussion has been around how the work and development will be split amongst the team members, to ensure we get the least amount of overlap and so make it a bit easier for each developer to get on and do their work. The project is expected to take about 6 months to 1 year (although not all developers are likely to stay on; some might roll off towards the end). Our team is going to be small, so this will help out a bit, I believe. The team will essentially consist of: 3 x developers (one slightly more experienced, who will be the lead), 1 x project manager / product owner / tester, and an external company responsible for doing our design work. General project/development decisions so far have included: develop in an Agile way using SCRUM techniques (we are still very much learning this approach as a company); use MVVM architecture; use Ninject and DI where possible; attempt to use TDD as much as possible to drive development; keep our controllers as skinny as possible; keep our views as simple as possible. During our discussions, two approaches have been broached as to how to separate the workload given our objectives outlined above.

    OPTION 1: A framework separation where each person is responsible for conceptual areas, with overlap and discussion primarily in the integration areas. The integration areas would be the responsibility of both developers as required.

    View prototypes (**Graphic designer**)
      |- Mockups
    Views (Razor and view helpers etc.) & Javascript (**Developer 1**)
      |- View models (integration point)
    Controllers and application logic (**Developer 2**)
      |- Models (integration point)
    Domain model and persistence (**Developer 3**)

    PROS: Integration points are quite clear, so developers can work without dependencies on others fairly easily. Code practices such as naming conventions and style are more easily kept consistent, as primarily only one developer will be handling an area. CONS: Completion of an entire feature becomes a bit grey, as no single person is responsible for an entire feature (story?). A person might not have a full appreciation for all areas of the project, so knowledge overlap might be lacking if that person suddenly left.

    OPTION 2: A more task-orientated approach where each person is responsible for the completion of an entire task from view to controller to model.

    PROS: A person is responsible for one entire feature, so its "complete" state can be clearly defined. Code overlap into different areas will occur, so each individual has good coverage over the entire application. CONS: Overlap of development will occur in all the modules, and developers can develop/extend without a true understanding of what the original code owner was intending. This could potentially lead more easily to code bloat? Following a convention might be harder, as developers are adding to all areas of the project. If a developer sets up a way of doing things, would it be harder to get the other developers to follow that convention or even build on it (or even discuss it?). Dunno.. Bugs could more easily be introduced into areas not thought about by the developer. It's possibly easier to carry a team member, insofar as one member just hacks code together to complete a task whilst another takes time to build a foundation that could be used by others and so help make future tasks easier, i.e. starts building a framework?
    QUESTION: As is probably apparent, I'm more in favor of option 1; however, I'm interested to see how others might have approached this, or what the standard, best, or preferred way of undertaking a project is. Or indeed any different approach to handling this?

    Read the article

  • Server-Sent Events using GlassFish (TOTD #179)

    - by arungupta
    Bhakti blogged about Server-Sent Events on GlassFish and I've been planning to try it out for the past few days. Finally, I took some time out today to learn about it and build a simplistic example showcasing the touch points. Server-Sent Events is developed as part of the HTML5 specification and provides push notifications from a server to a browser client in the form of DOM events. It is defined as a cross-browser JavaScript API called EventSource. The client creates an EventSource by requesting a particular URL and registers an onmessage event listener to receive the event notifications. This can be done as shown below:

    var url = 'http://' + document.location.host + '/glassfish-sse/simple';
    eventSource = new EventSource(url);
    eventSource.onmessage = function (event) {
        var theParagraph = document.createElement('p');
        theParagraph.innerHTML = event.data.toString();
        document.body.appendChild(theParagraph);
    }

    This code subscribes to a URL, receives the data in the event listener, adds it to an HTML paragraph element, and displays it in the document. This is where you'd parse JSON or do other processing if some other data format is received from the URL. The URL to which the EventSource is subscribed is updated on the server side, and there are multiple ways to do that. GlassFish 4.0 provides support for Server-Sent Events, and it can be achieved by registering a handler as shown below:

    @ServerSentEvent("/simple")
    public class MySimpleHandler extends ServerSentEventHandler {
        public void sendMessage(String data) {
            try {
                connection.sendMessage(data);
            } catch (IOException ex) {
                . . .
            }
        }
    }

    And then events can be sent to this handler using a singleton session bean as shown:

    @Startup
    @Stateless
    public class SimpleEvent {
        @Inject
        @ServerSentEventContext("/simple")
        ServerSentEventHandlerContext<MySimpleHandler> simpleHandlers;

        @Schedule(hour="*", minute="*", second="*/10")
        public void sendDate() {
            for (MySimpleHandler handler : simpleHandlers.getHandlers()) {
                handler.sendMessage(new Date().toString());
            }
        }
    }

    This stateless session bean injects ServerSentEventHandlers listening on the "/simple" path. Note, there may be multiple handlers listening on this path. The sendDate method triggers every 10 seconds and sends the current timestamp to all the handlers. The client-side browser simply displays the string. The HTTP request headers look like:

    Accept: text/event-stream
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
    Accept-Encoding: gzip,deflate,sdch
    Accept-Language: en-US,en;q=0.8
    Cache-Control: no-cache
    Connection: keep-alive
    Cookie: JSESSIONID=97ff28773ea6a085e11131acf47b
    Host: localhost:8080
    Referer: http://localhost:8080/glassfish-sse/faces/index2.xhtml
    User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.54 Safari/536.5

    And the response headers as:

    Content-Type: text/event-stream
    Date: Thu, 14 Jun 2012 21:16:10 GMT
    Server: GlassFish Server Open Source Edition 4.0
    Transfer-Encoding: chunked
    X-Powered-By: Servlet/3.0 JSP/2.2 (GlassFish Server Open Source Edition 4.0 Java/Apple Inc./1.6)

    Notice, the MIME type of the messages from server to the client is text/event-stream, and that is defined by the specification.
    The code in Bhakti's blog can be further simplified by using the recently-introduced Twitter API for Java as shown below:

    @Schedule(hour="*", minute="*", second="*/10")
    public void sendTweets() {
        for (MyTwitterHandler handler : twitterHandler.getHandlers()) {
            String result = twitter.search("glassfish", String.class);
            handler.sendMessage(result);
        }
    }

    The complete source explained in this blog can be downloaded here and tried on GlassFish 4.0 build 34. The latest promoted build can be downloaded from here, and the complete source code for the API and implementation is here. I tried this sample on Chrome version 19.0.1084.54 on Mac OS X 10.7.3.
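    As a side note, and not part of the original sample, the same EventSource object also exposes onopen and onerror callbacks from the standard API, which are handy while experimenting with examples like this one:

        eventSource.onopen = function () {
            console.log('Connected to ' + url);
        };
        eventSource.onerror = function () {
            // Browsers retry automatically; CLOSED means the browser gave up.
            if (eventSource.readyState === EventSource.CLOSED) {
                console.log('Connection closed');
            }
        };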

    Read the article

  • Oracle Executive Strategy Brief: Enterprise-Grade Cloud Applications

    - by B Shashikumar
    Cloud Computing has clearly evolved into one of the dominant secular trends in the industry. Organizations are looking to the cloud to change how they buy and consume IT. And it's no longer just about lower up-front costs. The cloud promises to deliver greater agility and free up resources to focus on innovation versus running and maintaining systems. But are organizations actually realizing these benefits? The full promise of cloud is not being realized by customers who entrust their business to multiple niche cloud providers. While almost 9 out of 10 companies expect more IT agility with cloud, only 47% are actually getting it (Source: 2011 State of Cloud Survey by Symantec). These niche cloud customers have also seen the promises of lower costs, efficiency gains, improved security, and compliance go unfulfilled. Having one cloud provider for customer relationship management (CRM) and another for human capital management (HCM), and then trying to glue these proprietary systems together while integrating with a back-office financial system, can add complexity and long-term costs. Completing a business process or generating an integrated report is cumbersome, and leverages incomplete data. Why can’t niche cloud providers deliver on the full promise of cloud? It’s simple: you still need to complete business processes. You still need reporting that enables you to take action using data from multiple systems. You still have to comply with SOX and other industry regulations. These requirements don’t go away just because you deploy in the cloud. Delivering lower up-front costs by enabling customers to buy software as a service (SaaS) is the easy part. To get real value that lasts longer than your quarterly report, it’s important to realize the benefits of cloud without compromising on functionality and while having the right level of control and flexibility. This is the true promise of cloud. Oracle’s cloud strategy centers around delivering the benefits of cloud—without compromise. We uniquely empower our customers with complete solutions and choice. From the richest functionality to integrated reporting and a great user experience. It’s all available in the cloud. And it works not just with other Oracle cloud applications, but with your existing Oracle and third-party systems as well. This helps protect your current investments and extend their value as you journey to the cloud. We’ve made the necessary investments not only in our applications but also in the underlying technology that makes it all run—from the platform down to the hardware and operating system. We make it all. And we’ve engineered it to work together and be highly optimized for our customers, in the cloud. With Oracle enterprise-grade cloud applications, you get the benefits of cloud plus more power, more choice, and more confidence. Read more about how you can realize the true advantage of Cloud with Oracle enterprise-grade Cloud applications in the Oracle Executive Strategy Brief here. You can also attend an Oracle Cloud Conference event at a city near you. Register here.

    Read the article

  • MySQL Connector/Net 6.6.2 has been released

    - by fernando
    MySQL Connector/Net 6.6.2, a new version of the all-managed .NET driver for MySQL, has been released. This is the first of two beta releases intended to introduce users to the new features in the release. This release is feature complete, and it should be stable enough for users to understand the new features and how we expect them to work. As is the case with all non-GA releases, it should not be used in any production environment. It is appropriate for use with MySQL server versions 5.0-5.6. It is now available in source and binary form from http://dev.mysql.com/downloads/connector/net/#downloads and mirror sites (note that not all mirror sites may be up to date at this point; if you can't find this version on some mirror, please try again later or choose another download site). The 6.6 version of MySQL Connector/Net brings the following new features:
      * Stored routine debugging
      * Entity Framework 4.3 Code First support
      * Pluggable authentication (now third parties can plug new authentication mechanisms into the driver)
      * Full Visual Studio 2012 support: everything from Server Explorer to Intellisense & the Stored Routine debugger

    Stored Procedure Debugging
    -------------------------------------------
    We are very excited to introduce stored procedure debugging in our Visual Studio integration. It works in a very intuitive manner by simply clicking 'Debug Routine' from Server Explorer. You can debug stored routines, functions & triggers. Some of the new features in this release include:
      * Besides normal breakpoints, you can define conditional & pass-count breakpoints.
      * The debugger editor now shows colorizing.
      * You can now change the values of locals in a function scope (previously this caused a deadlock due to functions executing within their own transaction).
      * You can now also debug triggers for 'replace' SQL statements.
      * In general, anything related to locals, watches, breakpoints, stepping & the call stack should work in a similar way to C#'s Visual Studio debugger.
    Some limitations remain, due to the current debugger architecture:
      * Some MySQL functions cannot currently be debugged (get_lock, release_lock, begin, commit, rollback, set transaction level).
      * Only one debug session may be active on a given server.
    The Debugger is feature complete at this point. We look forward to your feedback.

    Documentation
    -------------------------------------
    The documentation is still being developed and will be readily available soon (before Beta 2). You can view current Connector/Net documentation at http://dev.mysql.com/doc/refman/5.5/en/connector-net.html. You can find our team blog at http://blogs.oracle.com/MySQLOnWindows. You can also post questions on our forums at http://forums.mysql.com/. Enjoy and thanks for the support!

    Read the article

  • What's new in Servlet 3.1? - Java EE 7 moving forward

    - by arungupta
    Servlet 3.0 was released as part of Java EE 6 and made huge changes focused on ease-of-use. The idea was to leverage the latest language features, such as annotations and generics, and modernize how Servlets are written. The web.xml was made as optional as possible. Servlet 3.1 (JSR 340), scheduled to be part of Java EE 7, is an incremental release focusing on a couple of key features and some clarifications in the specification. The main features of Servlet 3.1 are explained below:

    Non-blocking I/O - Servlet 3.0 allowed asynchronous request processing, but only traditional I/O was permitted, which can restrict the scalability of your applications. Non-blocking I/O allows you to build scalable applications. TOTD #188 provides more details about how non-blocking I/O can be done using Servlet 3.1 (a minimal sketch of the new API appears at the end of this post).

    HTTP protocol upgrade mechanism - Section 14.42 in the HTTP 1.1 specification (RFC 2616) defines an upgrade mechanism that allows a transition from HTTP 1.1 to some other, incompatible protocol. The capabilities and nature of the application-layer communication after the protocol change are entirely dependent upon the new protocol chosen. After an upgrade is negotiated between the client and the server, the subsequent requests use the newly chosen protocol for message exchanges. A typical example is how the WebSocket protocol is upgraded from HTTP, as described in the Opening Handshake section of RFC 6455. The decision to upgrade is made in the Servlet.service method. This is achieved by adding a new method, HttpServletRequest.upgrade, and two new interfaces, javax.servlet.http.HttpUpgradeHandler and javax.servlet.http.WebConnection. TyrusHttpUpgradeHandler shows how the WebSocket protocol upgrade is done in Tyrus (the Reference Implementation for the Java API for WebSocket).

    Security enhancements - Applying run-as security roles to #init and #destroy methods; protection against session fixation attacks by adding HttpServletRequest.changeSessionId and a new interface, HttpSessionIdListener (you can listen for any session id changes using these methods); a default security semantic for non-specified HTTP methods in <security-constraint>; and clarified semantics when a parameter is specified in both the URI and the payload.

    Miscellaneous - ServletResponse.reset clears any data that exists in the buffer as well as the status code and headers; in addition, Servlet 3.1 also clears the state set by calling getServletOutputStream or getWriter. ServletResponse.setCharacterEncoding sets the character encoding (MIME charset) of the response being sent to the client, for example to UTF-8. A relative protocol URL can be specified in HttpServletResponse.sendRedirect; this allows a URL to be specified without a scheme, meaning that instead of specifying "http://anotherhost.com/foo/bar.jsp" as a redirect address, "//anotherhost.com/foo/bar.jsp" can be specified, in which case the scheme of the corresponding request will be used. Clarification of HttpServletRequest.getPart and .getParts without multipart configuration. Clarification that ServletContainerInitializer is independent of metadata-complete and is instantiated per web application.

    A complete replay of What's New in Servlet 3.1: An Overview from JavaOne 2012 can be seen here (click on CON6793_mp4_6793_001 in Media). Each feature will be added to the JSR subject to EG approval. You can share your feedback at [email protected].
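    To make the non-blocking I/O feature more concrete, here is a minimal sketch of a read listener using the Servlet 3.1 API. The servlet path and the processing logic are made up for illustration:

        import java.io.IOException;
        import javax.servlet.AsyncContext;
        import javax.servlet.ReadListener;
        import javax.servlet.ServletInputStream;
        import javax.servlet.annotation.WebServlet;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        @WebServlet(urlPatterns = "/nonblocking", asyncSupported = true)
        public class NonBlockingReadServlet extends HttpServlet {
            @Override
            protected void doPost(HttpServletRequest request, HttpServletResponse response)
                    throws IOException {
                final AsyncContext context = request.startAsync();
                final ServletInputStream input = request.getInputStream();
                // The container calls back when data is ready, instead of read() blocking.
                input.setReadListener(new ReadListener() {
                    @Override
                    public void onDataAvailable() throws IOException {
                        byte[] buffer = new byte[1024];
                        // isReady() returns false when a read would block; the container
                        // calls onDataAvailable() again when more data arrives.
                        while (input.isReady() && input.read(buffer) != -1) {
                            // process the chunk here
                        }
                    }

                    @Override
                    public void onAllDataRead() throws IOException {
                        context.complete();
                    }

                    @Override
                    public void onError(Throwable t) {
                        context.complete();
                    }
                });
            }
        }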
    Here are some more references for you: Servlet 3.1 Public Review Candidate Downloads | Servlet 3.1 PR Candidate Spec | Servlet 3.1 PR Candidate Javadocs | Servlet Specification Project | JSR Expert Group Discussion Archive | Java EE 7 Specification Status. Several features have already been integrated in GlassFish 4 Promoted Builds. Have you tried any of them? Here are some other Java EE 7 primers published so far: Concurrency Utilities for Java EE (JSR 236), Collaborative Whiteboard using WebSocket in GlassFish 4 (TOTD #189), Non-blocking I/O using Servlet 3.1 (TOTD #188), What's New in EJB 3.2?, JPA 2.1 Schema Generation (TOTD #187), WebSocket Applications using Java (JSR 356), Jersey 2 in GlassFish 4 (TOTD #182), WebSocket and Java EE 7 (TOTD #181), Java API for JSON Processing (JSR 353), JMS 2.0 Early Draft (JSR 343). And of course, more on their way! Do you want to see any particular one first?

    Read the article

  • I.T. Chargeback: Core to Cloud Computing

    - by Anand Akela
    Contributed by Mark McGill

    Consolidation and virtualization have been widely adopted over the years to help deliver benefits such as increased server utilization, greater agility and lower cost to the I.T. organization. These are key enablers of cloud, but in themselves they do not provide a complete cloud solution. Building a true enterprise private cloud involves moving from an admin-driven world, where the I.T. department is ultimately responsible for the provisioning of servers, databases, middleware and applications, to a world where the consumers of I.T. resources can provision their infrastructure, platforms and even complete application stacks on demand. Switching from an admin-driven provisioning model to a user-driven model creates some challenges. How do you ensure that users provisioning resources will not provision more than they need? How do you encourage users to return resources when they have finished with them, so that others can use them? While chargeback has existed as a concept for many years (especially in mainframe environments), it is the move to this self-service model that has created a need for a new breed of chargeback applications for cloud. Enabling self-service without some form of chargeback is like opening a shop where all of the goods are free. A successful chargeback solution will be able to allocate the costs of shared I.T. infrastructure based on the relative consumption by the users. Doing this creates transparency between the I.T. department and the consumers of I.T. When users are able to understand how their consumption translates to cost, they are much more likely to be prudent when it comes to their use of I.T. resources. This also gives them control of their I.T. costs, as moderate usage will translate to a lower charge at the end of the month. Implementing chargeback successfully creates a win-win situation for I.T. and the consumers. Chargeback can help to ensure that I.T. resources are used for activities that deliver business value. It also improves the overall utilization of I.T. infrastructure, as I.T. resources that are not needed are not left running idle. Enterprise Manager 12c provides an integrated metering and chargeback solution for Enterprise Manager Targets. This solution is built on top of the rich configuration and utilization information already available in Enterprise Manager. It provides metering not just for virtual machines, but also for physical hosts, databases and middleware. Enterprise Manager 12c provides metering based on the utilization and configuration of the following types of Enterprise Manager Target: Oracle VM Host, Oracle Database, and Oracle WebLogic Server. Using Enterprise Manager Chargeback, administrators are able to create a set of Charge Plans that are used to attach prices to the various metered resources. These plans can contain fixed costs (e.g. $10/month/database), configuration-based costs (e.g. $10/month if the OS is Windows) and utilization-based costs (e.g. $0.05/GB of memory/hour); a hypothetical calculation along these lines is sketched at the end of this post. The self-service user provisioning these resources is then able to view a report that details their usage and helps them understand how this usage translates into cost. Armed with this information, the user is able to determine if the resources are delivering adequate business value based on what is being charged.

    [Figure 1: Chargeback in Self-Service Portal]

    Enterprise Manager 12c provides a variety of additional interfaces into this data. The administrator can access summary and trending reports. Summary reports allow the administrator to drill down through the cost center hierarchy to identify, for example, the top resource consumers across the organization.

    [Figure 2: Charge Summary Report]

    Trending reports can be used for I.T. planning and budgeting, as they show utilization and charge trends over a period of time.

    [Figure 3: CPU Trend Report]

    We also provide chargeback reports through BI Publisher. This provides a way for users who do not have an Enterprise Manager login (such as Line of Business managers) to view charge and usage information. For situations where a bill needs to be produced, chargeback can be integrated with billing applications such as Oracle Billing and Revenue Management (BRM). Further information on Enterprise Manager 12c’s integrated metering and chargeback: White Paper | Screenwatch | Cloud Management on OTN
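    As a purely hypothetical illustration (not Enterprise Manager code), a charge plan of the kind described above boils down to a simple calculation. The rates below are the example figures from this post; the usage numbers are invented:

        // Hypothetical sketch of a charge plan calculation; not an Enterprise Manager API.
        public class ChargePlanExample {
            static final double FIXED_PER_DATABASE_PER_MONTH = 10.00;  // $10/month/database
            static final double WINDOWS_SURCHARGE_PER_MONTH  = 10.00;  // $10/month if the OS is Windows
            static final double MEMORY_RATE_PER_GB_HOUR      = 0.05;   // $0.05/GB of memory/hour

            public static void main(String[] args) {
                int databases = 2;
                boolean runsWindows = true;
                double gbHoursOfMemory = 4 * 24 * 30;  // 4 GB in use for a 30-day month

                double monthlyCharge = databases * FIXED_PER_DATABASE_PER_MONTH
                        + (runsWindows ? WINDOWS_SURCHARGE_PER_MONTH : 0.0)
                        + gbHoursOfMemory * MEMORY_RATE_PER_GB_HOUR;
                System.out.printf("Monthly charge: $%.2f%n", monthlyCharge);  // $174.00
            }
        }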

    Read the article

  • Cloud Fact for Business Managers #3: Where Your Data Is, and Who Has Access to It Might Surprise You

    - by yaldahhakim
    Written by: David Krauss

    While data security and operational risk conversations usually happen around the desk of a CCO/CSO (chief compliance and/or security officer), or perhaps the CFO, since business managers are now selecting cloud providers, they need to be able to at least ask some high-level questions on the topic of risk and compliance. While the report found that 76% of adopters were motivated to adopt cloud apps because of quick access to software, most of these managers found that after they made a purchase decision, their access to exciting new capabilities in the cloud could be hindered by performance and scalability constraints put forth by their cloud provider. If you are going to let your business consume its mission-critical business applications as a service, then it’s important to understand who is providing those cloud services and what kind of performance you are going to get. Different types of departments, companies and industries will all have unique requirements, so it’s key to take this into consideration as well. Nothing puts a CEO in a bad mood like a public data breach, or finding out the company lost money when customers couldn’t buy a product or service because your cloud service provider had a problem. With 42% of business managers having seen a data security breach in their department associated directly with the use of cloud applications, this is happening more than you think. We’ve talked about the importance of being able to avoid information silos through a unified cloud approach and platform. This is also important when keeping your data safe and secure, and it is a key conversation to have with your cloud provider. Your customers want to know that their information is protected when they do business with you, just like you want your own company information protected. This is really hard to do when each line of business is running different cloud application services managed by different cloud providers, all with different processes and controls. It only adds to the complexity, and the more complex, the more risky and the greater the chance that something will go wrong.

    What about compliance? Depending on the cloud provider, it can be difficult at best to understand who has access to your data, and where your data is actually stored. Add to this multiple cloud providers spanning multiple departments, and it becomes very problematic when trying to comply with certain industry and country data security regulations. With 73% of business managers complaining that having cloud data handled externally by one or more cloud vendors makes it hard for their department to be compliant, this is a big time suck for executives and it puts the organization at risk.

    Is There A Complete, Integrated, Modern Cloud Out There for Business Executives? If you are a business manager looking to drive faster innovation for your business and want a cloud application that your CIO would approve of, I would encourage you to take a look at Oracle Cloud. It’s everything you want from a SaaS-based application, but without compromising on functionality and other modern capabilities like embedded business intelligence, social relationship management (for your entire business), and advanced mobile. And because Oracle Cloud is built and managed by Oracle, you can be confident that your cloud application services are enterprise-grade. Over 25 million users and tens of thousands of companies around the globe rely on Oracle Cloud application services every day; maybe your business should too.
For more information, visit cloud.oracle.com. Additional Resources •    Try it: cloud.oracle.com•    Learn more: http://www.oracle.com/us/corporate/features/complete-cloud/index.html•    Research Report: Cloud for Business Managers: The Good, the Bad, and the Ugly

    Read the article

  • Wix Custom Action problems

    - by Grandpappy
    I'm trying to create a custom action for my WiX install, and it's just not working, and I'm unsure why. Here's the relevant bit in the WiX file:

    <Binary Id="INSTALLERHELPER" SourceFile=".\Lib\InstallerHelper.dll" />
    <CustomAction Id="SQLHelperAction" BinaryKey="INSTALLERHELPER" DllEntry="CustomAction1" Execute="immediate" />

    Here's the full class file for my custom action:

    using Microsoft.Deployment.WindowsInstaller;

    namespace InstallerHelper
    {
        public class CustomActions
        {
            [CustomAction]
            public static ActionResult CustomAction1(Session session)
            {
                session.Log("Begin CustomAction1");
                return ActionResult.Success;
            }
        }
    }

    When I run the MSI, I get this error in the log:

    MSI (c) (08:5C) [10:08:36:978]: Connected to service for CA interface.
    MSI (c) (08:4C) [10:08:37:030]: Note: 1: 1723 2: SQLHelperAction 3: CustomAction1 4: C:\Users\NATHAN~1.TYL\AppData\Local\Temp\MSI684F.tmp
    Error 1723. There is a problem with this Windows Installer package. A DLL required for this install to complete could not be run. Contact your support personnel or package vendor. Action SQLHelperAction, entry: CustomAction1, library: C:\Users\NATHAN~1.TYL\AppData\Local\Temp\MSI684F.tmp
    MSI (c) (08:4C) [10:08:38:501]: Product: SessionWorks :: Judge Edition -- Error 1723. There is a problem with this Windows Installer package. A DLL required for this install to complete could not be run. Contact your support personnel or package vendor. Action SQLHelperAction, entry: CustomAction1, library: C:\Users\NATHAN~1.TYL\AppData\Local\Temp\MSI684F.tmp
    Action ended 10:08:38: SQLHelperAction. Return value 3.
    DEBUG: Error 2896: Executing action SQLHelperAction failed.
    The installer has encountered an unexpected error installing this package. This may indicate a problem with this package. The error code is 2896. The arguments are: SQLHelperAction, ,

    Neither of the two error codes or messages it gives me is enough to tell me what's wrong. Or perhaps I'm just not understanding what they're saying is wrong. Any ideas on what I'm doing wrong?
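    One likely culprit, based on how WiX Deployment Tools Foundation (DTF) custom actions are packaged: a managed custom action has to be referenced through the self-extracting wrapper assembly (the .CA.dll produced by MakeSfxCA.exe, or automatically by the WiX C# Custom Action project template), not the plain class library. A sketch of the change, assuming the project is built with that template and emits InstallerHelper.CA.dll:

        <!-- Point at the MakeSfxCA-wrapped assembly, not the raw InstallerHelper.dll -->
        <Binary Id="INSTALLERHELPER" SourceFile=".\Lib\InstallerHelper.CA.dll" />
        <CustomAction Id="SQLHelperAction"
                      BinaryKey="INSTALLERHELPER"
                      DllEntry="CustomAction1"
                      Execute="immediate" />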

    Read the article
