Search Results

Search found 4193 results on 168 pages for 'dave aaron smith'.

  • What's In Storage?

    - by [email protected]
    Oracle Flies South for Storage Networking Event
    Storage Networking World (now simply called SNW) is the place you'll find the most comprehensive education on storage, infrastructure, and the datacenter in the spring of 2010. It's also the place where you'll see Oracle. During the April 12-15 event in Orlando, Florida, the industry's premier presentations on storage trends and best practices are combined with hands-on labs covering storage management and IP storage. You'll also have the opportunity to learn about Oracle's Sun storage solutions, from Flash and open storage to enterprise disk and tape. Plus, if you stop by booth 207 in the expo hall, you might walk away with a bookish prize: an Amazon Kindle, courtesy of Oracle. Proving, once again, that education can be quite rewarding.

    Read the article

  • I am getting a SQUID Error

    - by Dave
    Hello, what exactly is wrong here?
    Entry in squid config file:
    httpd_accel_host virtual
    httpd_accel_port 80
    httpd_accel_with_proxy on
    httpd_accel_uses_host_header on
    acl lan src 192.168.1.1 192.168.2.0/24
    http_access allow localhost
    Error after "service squid restart":
    2010/02/01 14:24:29| Processing Configuration File: /etc/squid/squid.conf (depth 0)
    2010/02/01 14:24:29| cache_cf.cc(361) parseOneConfigFile: squid.conf:10 unrecognized: 'broken_vary_encoding'
    2010/02/01 14:24:29| WARNING: Netmasks are deprecated. Please use CIDR masks instead.
    2010/02/01 14:24:29| WARNING: IPv4 netmasks are particularly nasty when used to compare IPv6 to IPv4 ranges.
    2010/02/01 14:24:29| WARNING: For now we assume you meant to write /0
    2010/02/01 14:24:29| WARNING: (B) '::/4294967200' is a subnetwork of (A) '::'
    2010/02/01 14:24:29| WARNING: because of this '::' is ignored to keep splay tree searching predictable
    2010/02/01 14:24:29| WARNING: You should probably remove '::/4294967200' from the ACL named 'all'
    2010/02/01 14:24:29| WARNING: Netmasks are deprecated. Please use CIDR masks instead.
    2010/02/01 14:24:29| WARNING: IPv4 netmasks are particularly nasty when used to compare IPv6 to IPv4 ranges.
    2010/02/01 14:24:29| WARNING: For now we assume you meant to write /128
    2010/02/01 14:24:29| aclParseIpData: unknown netmask '255.255.255.255' in '127.0.0.1/255.255.255.255'
    FATAL: Bungled squid.conf line 25: acl localhost src 127.0.0.1/255.255.255.255
    Squid Cache (Version 3.1.0.14): Terminated abnormally.
    CPU Usage: 0.013 seconds = 0.006 user + 0.007 sys
    Maximum Resident Size: 0 KB
    Page faults with physical i/o: 0
    Also, please provide me with the simplest squid config for the proxy to run. Restrictions can be entered. Thanks, Dave
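    The fatal error is the last problem in that log: Squid 3.1 rejects the dotted-netmask form on line 25 ('acl localhost src 127.0.0.1/255.255.255.255') and wants CIDR notation such as 127.0.0.1/32. The 'broken_vary_encoding' and 'httpd_accel_*' directives are Squid 2.x-era settings that 3.x no longer understands (in 3.x, accelerator mode is configured through options on http_port instead). As for the simplest proxy config, a minimal sketch, assuming your LAN really is on 192.168.1.x/192.168.2.x (adjust to taste), might look like:
      # Minimal forward-proxy squid.conf sketch for Squid 3.x
      http_port 3128
      acl localnet src 192.168.1.0/24 192.168.2.0/24
      http_access allow localnet
      http_access deny all
      # Recent Squid 3.x releases predefine 'all' (and often 'localhost'); if your
      # build complains about an unknown ACL, define it in CIDR form, e.g.
      # acl localhost src 127.0.0.1/32
      cache_dir ufs /var/spool/squid 1024 16 256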

    Read the article

  • A Video Chat with OAUG President David Ferguson

    - by Aaron Lazenby
    A week ago, I had a chance to sit down with OAUG president David Ferguson. I was really looking forward to this conversation after the sharp opinion piece David submitted to Profit Online last year about what it takes to implement social CRM in a sales organization. Here, David shares his thoughts about this year's Collaborate 10 conference, the topics users are excited about, and the work the OAUG will be doing in the next twelve months.

    Read the article

  • Is Agile the new micromanagement?

    - by Smith James
    Hi, this question has been cooking in my head for a while, so I wanted to ask those who are following agile/scrum practices in their development environments. My company has finally ventured into incorporating agile practices and has started out with a team of 4 developers in an agile group on a trial basis. It has been 4 months with 3 iterations, and they continue to do it without going fully agile for the rest of us, partly because management doesn't yet trust it to meet business requirements, given quite a bit of ad hoc requests coming from high above. Recently, I talked to the developers who are part of this initiative; they tell me that it's not fun. They are not allowed to talk to other developers by their Scrum master and are not allowed to take any phone calls in the work area (which may be fine to an extent). For example, if I want to talk to my friend on the agile team just for kicks, I am not allowed to without the approval of the Scrum master, who is sitting right next to the agile team. The idea of all this, or of agile, is to put the agile developers in a complete vacuum, away from any interruptions, and have them put in a good 6+ productive hours. Well, guys, I am no agile guru, but from what I have read of Yahoo's agile rollout document and similar ones for other organizations, I get the feeling that agile is not cheap. It requires resources and budget to instill agile into the teams and to correct issues as they arise to put them back on track. For starters, it requires training for developers, coaching for managers, etc., etc. The current Scrum master is a manager who took a couple of days of agile training classes, paid for by management, and is now leading this agile team. I have also heard in meetings that the agile manifesto isn't a strict dictate, that agile is not set in stone, and that it is customized differently for each company. Well, it all sounds good and reasonable. In conclusion, I always thought agile was supposed to bring harmony to development teams, resulting in happy developers. However, I am getting the very opposite feeling when talking to the developers on the agile team. They are unhappy that they cannot talk about anything but work, sitting quietly all day just working, and they feel it's just another way for management to make them work more. Tell me, please: is this one of those examples of good practices being used for selfish advantage, for more dollars? Or maybe it's just that developers like me and this agile team don't like to work in an environment where they breathe nothing but work just because they are at work. Thanks. Edit: It's a company in the healthcare domain that has offices across the US, but we're in Texas. It definitely feels like cowboy-style agile, which makes me really not want to go for agile at all, especially at my current company. All of it has to do with management being completely cheap: cutting out the expensive coffee for a cheaper version, an emphasis on savings and on being productive while staying as lean as possible. My feeling is that someone in management, behind closed doors, threw out this idea that agile makes you produce more, so we can show our bosses we're producing more with the same headcount. Or maybe it will allow us to reduce headcount, if that's the case. EDITED: They are having their 5-minute daily meeting, but they are not allowed to chat or talk with anyone outside of their team. All focus is on work.

    Read the article

  • Stopping by the Store

    - by [email protected]
    Registrants Get Online Savings on Oracle Products
    Have you heard about the Oracle Store? It's the one-stop online shop for buying Oracle software and support at significant savings. Better yet, when you register for Oracle OpenWorld 2010 by April 30, you can get an additional 10% off your next purchase. The 10% discount applies to a one-time "click and buy" checkout, so load up as many items as you can. To get started, you'll need to visit the Oracle OpenWorld registration page to get more information about the promotion, including the promo code and link. It's another great way to turn your early bird registration into a long-term gain for your organization.

    Read the article

  • CQRS – Questions and Concerns

    - by Dylan Smith
    I’ve been doing a lot of learning on CQRS and Event Sourcing over the last little while and I have a number of questions that I haven’t been able to answer.
    1. What is the benefit of CQRS when compared to a typical DDD architecture that uses Event Sourcing and properly captures intent and behavior via verb-based commands? (other than Scalability)
    2. When using CQRS what do you do with complex query-based logic?
    I’m going to elaborate on #1 in this blog post and I’ll do a follow-up post on #2. I watched through Greg Young’s video on the business benefits of CQRS + Event Sourcing and first let me say that I thought it was an excellent presentation that really drives home a lot of the benefits to this approach to architecture (I watched it twice in a row, I enjoyed it so much!). But it didn’t answer some of my questions fully (I wish I had been there to ask these of Greg in person!). So let me pick apart some of the points he makes and how they relate to my first question above. I’m completely sold on the idea of event sourcing and have a clear understanding of the benefits that it brings to the table, so I’m not going to question that. But you can use event sourcing without going to a CQRS architecture, so my main question is around the benefits of CQRS + Event Sourcing vs Event Sourcing + Typical DDD architecture.
    Architecture with Event Sourcing + Commands on Left, CQRS on Right
    Greg talks about how the stereotypical architecture doesn’t support DDD, but is that only because his diagram shows DTO’s coming up from the client? If we use the same diagram but allow the client to send commands, doesn’t that remove a lot of the arguments that Greg makes against the stereotypical architecture? We can now introduce verbs into the system. We can capture intent now (storing it still requires event sourcing, but you can implement event sourcing without doing CQRS). We can create a rich domain model (as opposed to an anemic domain model). Scalability is obviously a benefit that CQRS brings to the table, but like Greg says, very few of the systems we create truly need significant scalability. Greg talks about the ability to scale your development efforts. He says CQRS allows you to split the system into 3 parts (Client, Domain/Commands, Reads) and assign 3 teams of developers to work on them in parallel; letting you scale your development efforts by 3x with nearly linear gains. But in the stereotypical architecture don’t you already have 2 separate modules that you can split your dev efforts between: the client that sends commands/queries and receives DTO’s, and the Domain which accepts commands/queries and generates events/DTO’s? If this is true it’s not really a 3x scaling you achieve with CQRS but merely a 1.5x scaling, which, while great, doesn’t sound nearly as dramatic (“I can do it with 10 devs in 12 months – let me hire 5 more and we can have it done in 8 months”). Making the Query side “stupid simple” such that you can assign junior developers (or even outsource it) sounds like a valid benefit, but I have some concerns over what you do with complex query-based logic/behavior. I’m going to go into more detail on this in a follow-up blog post shortly. He also seemed to focus on how “stupid-simple” it is doing queries against the de-normalized data store, but I imagine there is still significant complexity in the event handlers that interpret the events and apply them to the de-normalized tables.
    It sounds like Greg suggests that doing CQRS is what allows us to apply Event Sourcing when we otherwise wouldn’t be able to (~33:30 in the video). I don’t believe this is true. I don’t see why you wouldn’t be able to apply Event Sourcing without separating out the Commands and Queries. The queries would just operate against the domain model instead of the database. But you’d still get the benefits of Event Sourcing. Without CQRS the queries would only be able to operate against the current state rather than the event history, but even in CQRS the domain behaviors can only operate against the current state and I don’t see that being a big limiting factor. If some query needs to operate against something that is not captured by the current state, you would just have to update the domain model to capture that information (no different than if that statement were made about a Command under CQRS). Some of the benefits I do see being applicable are that your domain model might end up being simpler/smaller since it only needs to represent the state needed to process commands and not worry about the reads (like the Deactivate Inventory Item and associated comment example that Greg provides), and also that commands which can be handled in a Transaction Script style manner can be implemented by the command handler simply generating events and not touching the domain model. It also makes it easier for your senior developers to focus on the command behavior and ignore the queries, which is usually going to be a better use of their time. And of course scalability. If anybody out there has any thoughts on this and can help educate me further, please either leave a comment or feel free to get in touch with me via email:
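    To make the command/query split concrete, here is a minimal TypeScript sketch of the two sides, with illustrative names (DeactivateInventoryItem, InventoryItemDeactivated) loosely borrowed from Greg's example rather than anything CQRS prescribes: a command handler that works against the event-sourced domain model and publishes events, and a read-side projection that folds those events into a denormalized view that queries hit directly.
      // Write side: commands go through the domain model and produce events.
      interface DomainEvent { readonly type: string; }
      class InventoryItemDeactivated implements DomainEvent {
        readonly type = "InventoryItemDeactivated";
        constructor(readonly itemId: string, readonly comment: string) {}
      }
      class InventoryItem {
        private active = true;
        constructor(private readonly id: string) {}
        deactivate(comment: string): DomainEvent[] {
          if (!this.active) throw new Error("Item is already deactivated");
          this.active = false;
          return [new InventoryItemDeactivated(this.id, comment)];
        }
      }
      class DeactivateInventoryItemHandler {
        constructor(private readonly load: (id: string) => InventoryItem,
                    private readonly publish: (events: DomainEvent[]) => void) {}
        handle(itemId: string, comment: string): void {
          const item = this.load(itemId);          // rebuilt from its event stream
          this.publish(item.deactivate(comment));  // append/publish the new events
        }
      }
      // Read side: a projection keeps a denormalized view current; queries never touch the domain.
      interface InventoryRow { itemId: string; active: boolean; lastComment?: string; }
      class InventoryListProjection {
        private readonly rows = new Map<string, InventoryRow>();
        apply(e: DomainEvent): void {
          if (e instanceof InventoryItemDeactivated) {
            const row = this.rows.get(e.itemId) ?? { itemId: e.itemId, active: true };
            row.active = false;
            row.lastComment = e.comment;
            this.rows.set(e.itemId, row);
          }
        }
        activeItems(): InventoryRow[] {
          return Array.from(this.rows.values()).filter(r => r.active);
        }
      }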

    Read the article

  • What’s in a name?

    - by Aaron Kowall
    My online presence has become caffeinatedgeek. As such, I recently had my blog moved from geekswithblogs.net/aaronsblog to geekswithblogs.net/caffeinatedgeek. Same sporadic but hopefully valuable posting, just a new web home. Technorati Tags: caffeinatedgeek

    Read the article

  • Michael Stephenson joins CloudCasts

    - by Alan Smith
    Mike Stephenson has recorded a couple of webcasts focusing on build and test in BizTalk Server 2009. These are part of the “BizTalk Light & Easy” series of webcasts created by some of the BizTalk Server MVPs. Testing BizTalk Applications Implementing an Automated Build Process with BizTalk Server 2009

    Read the article

  • Thomas Kurian's COLLABORATE Keynote: Process not Product

    - by Aaron Lazenby
    Right off the bat, Oracle's Senior Vice President of Server Technologies Development made his purpose very clear: demonstrate how the elements of the Oracle product stack are evolving (and integrating) together. There are some great details about the new functionality of each Oracle application line and how the different products sync and interact. The lifecycle charts in Kurian's presentation illustrate how data can flow from an Oracle Demantra system into Oracle E-Business Suite and back out to an Oracle Agile system to support value chain planning. With so many products at play in the enterprise, Kurian shows that if you trust that your systems can work together, IT strategy becomes much more about managing business processes than managing software products.

    Read the article

  • Up in the Air: Team Oracle Play-by-Play

    - by Aaron Lazenby
    Yesterday, I had the amazing opportunity to fly along with Sean D. Tucker and Team Oracle. Leaving from the San Carlos airport, we did a 30-minute flight over the Pacific just south of the coastal town of Half Moon Bay. In that half hour, I rode through a massive 4G loop, survived a crushing hammerhead, and took control of the plane to perform a basic wing over (you can learn what the heck I'm talking about by visiting this website). I have lots of great video, but it's going to take me some time to make sense of it. For now, here's my Twitter-based play-by-play of yesterday's events. Many thanks to Sean D. Tucker and the whole crew (Ben and Ian, especially) for this great opportunity to fly with Team Oracle.
    Live tweets from @OracleProfit:
    I will be spending the afternoon in a stunt plane, upside down above the San Francisco bay. http://bit.ly/cwkrkI
    At the San Carlos airport. More than slightly freaked out. Shaking hands diminish texting ability. Slightly reassuring. http://yfrog.com/1qt61nj
    There go the doors to the photo plane... #teamoracle http://yfrog.com/58ywlj
    Sean D Tucker assures me: "The sky is a great place to be." Helpful, but I'm still nervous. #teamoracle
    "You get a parachute. He gets a harness." How was this decision made? #teamoracle
    The plane with @radu43 has returned. I'm up next...
    Couldn't help myself...drank a soda before flying. Mistake? We'll see... #teamoracle
    Advice of the day: "If you pull with two hands, you improve the chances of the chute deploying on the first try." Lovely. #teamoracle
    I feel so strange. But I flew a high-performance airplane. And did an aerobatics move. Wild. #teamoracle
    "Flying ten feet off the ground, upside-down at 250 miles per hour isn't exciting to me." Sean D. Tucker #teamoracle
    "What is exciting to me is flying that perfect pattern, just like I imagined it in my head." Sean D. Tucker #teamoracle
    "You're going to sleep well tonight. You just carried four times your body weight." #teamoracle #gforce
    Just watched the #teamoracle plane take off for its flight home. I'm waiting for Caltrain. #undignifiedanticlimax
    Enough with the #teamoracle. Check http://blogs.oracle.com/profit for the video. Coming soon!

    Read the article

  • Silent Partner

    - by [email protected]
    The Team Behind the Man Behind the Mask
    As a continuing sponsor of the blockbuster Iron Man franchise, Oracle has been quietly preparing for the explosive sequel blasting its way into theaters this May. Through a series of advertising campaigns, immersive online experiences, and contests, Oracle plans to highlight its backstage efforts to help Marvel Entertainment hone its newfound superpowers. By driving the performance of critical systems, Oracle technologies are helping Marvel transform itself from mild-mannered comic book publisher to film industry power broker. You can learn more about this dynamic duo, and get free movie memorabilia, by visiting our Iron Man 2 showcase site.

    Read the article

  • iPad Impressions

    - by Aaron Lazenby
    So, I spent some quality time with my new iPad on Saturday. Here are things I like/don't like:
    -- Don't like that it has to sync with iTunes before you use it: I was traveling and left my laptop at home thinking I'd use this iPad thing instead. But the first thing it asked me to do is connect it to a laptop. Ugh. Had to borrow my mother-in-law's MacBook Pro just to get the iPad rolling.
    -- Like that magazines and newspapers are forever changed: And I think for the better...it's why I bought this thing in the first place. I spent significant time with The New York Times, The Wall Street Journal, Time Magazine and Popular Science on the iPad. Sliding stories around, jumping from section to section, enlarging images = all excellent experiences. Actually prefer the iPad magazine to print, which will require a major shift in editorial strategy, summed up by Popular Science's Mark Jannot in his editor's note: "What defines a magazine? Curated expertise--not paper."
    -- Don't like the screwy human factors: I actually enjoy the virtual keyboard (although I think I'm in the minority), but you have to hunch over to look down at what you're typing. Bad technology ergonomics have already jacked my body in various ways. The iPad just introduced a new one.
    -- Like the multitouch: In fact, it's awesome. Hands down. Probably will have the most lasting impact on the personal computing industry as a whole.
    -- Don't like that it's heavy: If you plan to read in bed, you'd better double up on the creatine and curls. Holding this thing up on your own gets pretty uncomfortable.
    -- Like the Netflix app: I wanted to watch "The Big Lebowski," so I did. That is all.
    -- Don't like that people feel 3G is necessary: For $30 a month? Please. I'm already accustomed to limiting my laptop internet use to readily available free wi-fi. Why do I expect anything different with the iPad? Most anyplace I have time to sit and read/use a computer (cafe, airport, your house, library, etc.) has free wi-fi. I can live without web surfing in the car. That's what the iPhone is for.
    -- Don't like that not everyone was ready on day one: I'm looking at you, Facebook. No iPad app for launch? Lame. iPhone apps scaled up to work on the iPad look grainy and cheap. Not a quality befitting this beautiful $700 piece of glass.
    Verdict: I'm bringing it to COLLABORATE 10 and seeing if I can go the whole week using only the iPad. If I can trade this thing for my laptop, I know it's a winner. For now, I'm enjoying Popular Science.

    Read the article

  • Oracle on iPad

    - by Aaron Lazenby
    This came across the Twitter-sphere from Steve Wilson (aka @virtualsteve), Oracle Vice President, Systems management:"One of the engineers on the Ops Center team just sent me a pic of OC running on an iPad. Neat!"And here's proof:

    Read the article

  • Missing features from WebGL and OpenGL ES

    - by Chris Smith
    I've started using WebGL and am pleased with how easy it is to leverage my OpenGL (and by extension OpenGL ES) experience. However, my understanding is as follows:
    OpenGL ES is a subset of OpenGL
    WebGL is a subset of OpenGL ES
    Is this correct for both cases? If so, are there resources for detailing which features are missing? For example, one notable missing feature is glPushMatrix and glPopMatrix. I don't see those in WebGL, but in my searches I cannot find them referenced in OpenGL ES material either.
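    For what it's worth, the fixed-function matrix stack (glPushMatrix/glPopMatrix and friends) was dropped in OpenGL ES 2.0, which is what WebGL is based on; you track matrices yourself and pass them to your shaders as uniforms. A minimal client-side stand-in, sketched in TypeScript with column-major 4x4 matrices (libraries such as glMatrix do this properly), might look like:
      type Mat4 = Float32Array; // 16 floats, column-major like OpenGL
      function identity(): Mat4 {
        return new Float32Array([1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1]);
      }
      // Column-major 4x4 multiply: result = a * b
      function multiply(a: Mat4, b: Mat4): Mat4 {
        const out = new Float32Array(16);
        for (let col = 0; col < 4; col++) {
          for (let row = 0; row < 4; row++) {
            let sum = 0;
            for (let k = 0; k < 4; k++) sum += a[k * 4 + row] * b[col * 4 + k];
            out[col * 4 + row] = sum;
          }
        }
        return out;
      }
      class MatrixStack {
        private readonly stack: Mat4[] = [];
        current: Mat4 = identity();
        push(): void { this.stack.push(new Float32Array(this.current)); } // like glPushMatrix
        pop(): void {                                                     // like glPopMatrix
          const m = this.stack.pop();
          if (!m) throw new Error("Matrix stack underflow");
          this.current = m;
        }
        mult(m: Mat4): void { this.current = multiply(this.current, m); } // like glMultMatrix
      }
      // Usage: push, apply the per-object transform, upload, draw, pop:
      //   gl.uniformMatrix4fv(modelViewLocation, false, stack.current);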

    Read the article

  • OSB and Coherence Integration

    - by mark.ms.smith
    Anyone who has tried to manage Coherence nodes or tried to cache results in OSB will appreciate the new functionality now available. As of WebLogic Server 10.3.4, you can use the WebLogic Administration Server, via the Administration Console or WLST, and the Java-based Node Manager to manage and monitor the life cycle of stand-alone Coherence cache servers. This is a great step forward, as the previous options mainly involved writing your own scripts to do this. You can find an excellent description of how this works at James Bayer’s blog. You can also find the WebLogic documentation here.
    As of Oracle Service Bus 11gR1 (11.1.1.3.0), OSB now supports service result caching for Business Services with Coherence. If you use Business Services that return somewhat static results that do not change often, you can configure those Business Services to cache results. For Business Services that use result caching, you can control the time to live for the cached result. After the cached result expires, the next Business Service call results in invoking the back-end service to get the result. This result is then stored in the cache for future requests to access. I’m thinking that this caching functionality would be perfect for some sort of cross-reference data that is refreshed nightly by batch. You can find the OSB Business Service documentation here.
    Result Caching in a dedicated JVM
    This example demonstrates these new features by configuring an OSB Business Service to cache results in a separate Coherence JVM managed by WebLogic. The reason why you may want to use a separate, dedicated JVM is that the result cache data could potentially be quite large and you may want to protect your OSB Java heap.
    In this example, the client will call an OSB Proxy Service to get Employee data based on an Employee Id. Using a Business Service, OSB calls an external system. The results are automatically cached, and when called again, the respective results are retrieved from the cache rather than the external system.
    Step 1 – Set up your Coherence Server
    Via the OSB Administration Server Console, create your Coherence Server to be used as the results cache. Here are the configured Coherence Server arguments from the Server Start tab. Note that I’m using the default Cache Config and Override files in the domain.
    -Xms256m -Xmx512m -XX:PermSize=128m -XX:MaxPermSize=256m
    -Dtangosol.coherence.override=/app/middleware/jdev_11.1.1.4/user_projects/domains/osb_domain2/config/osb/coherence/osb-coherence-override.xml
    -Dtangosol.coherence.cluster=OSB-cluster
    -Dtangosol.coherence.cacheconfig=/app/middleware/jdev_11.1.1.4/user_projects/domains/osb_domain2/config/osb/coherence/osb-coherence-cache-config.xml
    -Dtangosol.coherence.distributed.localstorage=true
    -Dtangosol.coherence.management=all
    -Dtangosol.coherence.management.remote=true
    -Dcom.sun.management.jmxremote
    Just in case you need it, here is my Coherence Server classpath:
    /app/middleware/jdev_11.1.1.4/oracle_common/modules/oracle.coherence_3.6/coherence.jar:
    /app/middleware/jdev_11.1.1.4/modules/features/weblogic.server.modules.coherence.server_10.3.4.0.jar:
    /app/middleware/jdev_11.1.1.4/oracle_osb/lib/osb-coherence-client.jar
    By default, OSB will try to create a local result cache instance. You need to disable this by adding the following JVM parameters to each of the OSB Managed Servers:
    -Dtangosol.coherence.distributed.localstorage=false
    -DOSB.coherence.cluster=OSB-cluster
    If you need more information on configuring a remote result cache, have a look at the configuration documentation under the heading Using an Out-of-Process Coherence Cache Server.
    Step 2 – Configure your Business Service
    Under the respective Business Service Message Handling Configuration (Advanced Properties), you need to enable "Result Caching". Additionally, you need to determine what the cache data will be keyed on. In the example below, I’m keying it on the unique Employee Id.
    The Results
    As this test was on my laptop, the actual timings are just an indication that there is a benefit to caching results. Using my test harness, I sent 10,000 requests to OSB, all with the same Employee Id. In this case, I had result caching disabled. You can see that this caused the back-end Business Service (BS_GetEmployeeData) to be called for each request. Then, after enabling result caching, I sent the same number of identical requests. You can now see the Business Service was only invoked once, on the first request. All subsequent requests used the Results Cache.

    Read the article

  • Tomorrow: Profit Rides into the DANGER ZONE!!!

    - by Aaron Lazenby
    On May 4 I'll be suiting up with Oracle social media maven Marius Ciortea -- Iceman and Maverick-style -- for a flight in the Team Oracle stunt plane. World-renowned pilot Sean Tucker and his team were nice enough to invite us along to participate in aerial photo shoots over Oracle headquarters and the San Francisco bay. I don't think we'll be able to recreate the epic tension generated between Tom Cruise and Val Kilmer in "Top Gun," but we'll do our best to get some good photos, videos, and interviews along the way. Check back on Wednesday for a full report.

    Read the article

  • What kinds of low level knowledge matter?

    - by Peter Smith
    I realize that this question is similar to "Low level programming - what's in it for me?", but the answers didn't really address my question well. Apart from just an understanding, how exactly does your low-level knowledge translate into faster and better programs? There's the obvious lack of garbage collection, but what else is an advantage? Do you really outperform your optimizing compiler? Do you pack your data structures as tightly as possible and worry about alignment? There's extra freedom naturally, but does that really translate into a faster program?

    Read the article

  • CQRS - Benefits

    - by Dylan Smith
    Thanks to all the comments and feedback from the last post I think I have a better understanding now of the benefits of CQRS (separate from the benefits of Event Sourcing). I’m going to try and sum it up here, and point out some areas where I could still use some advice:
    CQRS Benefits
    Sounds like the primary benefit of CQRS as an architecture is it allows you to create a simpler domain model by sucking out everything related to queries. I can definitely see the benefit to this; in general the domain logic related to commands is the high-value behavior in the software, but the logic required to service the queries would add a lot of low-value “noise” to the domain model that would dilute the high-value (command) behavior – sorting, paging, filtering, pre-fetch paths, etc. Also the most appropriate domain structure for implementing commands might not be the most optimal for implementing queries. To paraphrase Greg, this usually results in a domain model that is mediocre at both, piss-poor at one, or more likely piss-poor at both commands and queries. Not only will you be able to simplify your domain model by pulling out all the query logic, but at least a handful of commands in most systems will probably be “pass-through” type commands with little to no logic that just generate events. If these can be implemented directly in the command-handler and never touch the domain model, this allows you to slim down the domain model even more. Also, if you were to do event sourcing without CQRS, you no longer have a database containing the current state (only the domain model would), which makes it difficult (or impossible) to support ad-hoc querying and/or reporting that is common in most business software. Of course CQRS provides some great scalability benefits; not only scalability, but I have to assume that it provides extremely low latency for most operations, especially if you have an asynchronous event bus. I know Greg says that you get a 3x scaling (Commands, Queries, Client) of your ability to perform parallel development, but IMHO it seems like it only provides 1.5x scaling, since even without CQRS you’re going to have your client loosely coupled to your domain - which is still a great benefit to be able to realize.
    Questions / Concerns
    If all the queries against an aggregate get pulled out to the Query layer, what if the only commands for that aggregate can be handled in a “pass-through” manner with the command handler directly generating events? Is it possible to have an aggregate that isn’t modeled in the domain model? Are there any issues or downsides to this? I know in the feedback from my previous posts it was suggested that having one domain model handling both commands and queries requires implementing a lot of traversals between objects that wouldn’t be necessary if it was only servicing commands. My question is, do you include traversals in your domain model based on the needs of the code, or based on the conceptual domain model? If none of my Commands require a Customer.Orders traversal, but the conceptual domain includes the concept of a set of orders belonging to a customer – should I model that in my domain model or not? I like the idea of using the Query side of the architecture as a place to put junior devs where the risk of them screwing something up has minimal impact. But I’m not sold on the idea that you can actually outsource it. Like I said in one of my comments on my previous post, the code to handle a query and generate DTO’s is going to be dead simple, but the code to process events and apply them to the tables on the query side is going to require a significant amount of domain knowledge to know which events to listen for to update each of the de-normalized tables (and what changes need to be made when each event is processed). I don’t know about everybody else, but having Indian/Russian/whatever outsourced developers have to do anything that requires significant domain knowledge has never been successful in my experience. And if you need to spec out for each new query which events to listen to and what to do with each one, well that’s probably going to be just as much work to document as it would be to just implement it. Greg made the point in a comment that doing an aggregate query like “Total Sales By Customer” is going to be inefficient if you use event sourcing but not CQRS. I don’t understand why that would be the case. I imagine in that case you’d simply have a method/property on the Customer object that calculated total sales for that customer by enumerating over the Orders collection. Then the application services layer would generate DTO’s off of the Customers collection that included say the CustomerID, CustomerName, TotalSales, or whatever the case may be. As long as you use a snapshotting implementation, I don’t see why that would be any more inefficient in a DDD+Event Sourcing implementation than in a typical DDD implementation. Like I mentioned in my last post I still have some questions about query logic that haven’t been answered yet, but before I start asking those I want to make sure I have a strong grasp on what benefits CQRS provides.  My main concern with the query logic was that I know I could just toss it all into the query side, but I was concerned that I would be losing the benefits of using CQRS in the first place if I did that.  I want to elaborate more on this though with some example situations in an upcoming post.
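    To picture the kind of read-side event handler being discussed here, a small TypeScript sketch (with hypothetical names like OrderPlaced and a "Total Sales By Customer" view; the shape, not the specifics, is the point) might be:
      // Hypothetical event published by the write side.
      interface OrderPlaced {
        type: "OrderPlaced";
        customerId: string;
        customerName: string;
        orderTotal: number;
      }
      // Denormalized "Total Sales By Customer" view kept current by the handler.
      interface TotalSalesRow { customerId: string; customerName: string; totalSales: number; }
      class TotalSalesByCustomerHandler {
        private readonly rows = new Map<string, TotalSalesRow>();
        // The domain knowledge lives here: knowing that OrderPlaced (and, in a fuller
        // model, OrderCancelled, OrderAmended, ...) must be folded into this table.
        on(event: OrderPlaced): void {
          const row = this.rows.get(event.customerId)
            ?? { customerId: event.customerId, customerName: event.customerName, totalSales: 0 };
          row.totalSales += event.orderTotal;
          this.rows.set(event.customerId, row);
        }
        // The query itself stays dead simple: read the pre-computed row.
        totalFor(customerId: string): number {
          return this.rows.get(customerId)?.totalSales ?? 0;
        }
      }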

    Read the article

  • Software Engineering Practices – Different Projects should have different maturity levels

    - by Dylan Smith
    I’ve had a lot of discussions at the office lately about the drastically different sets of software engineering practices used on our various projects, if what we are doing is appropriate, and what factors should you be considering when determining what practices are most appropriate in a given context. I wanted to write up my thoughts in a little more detail on this subject, so here we go: If you compare any two software projects (specifically comparing their codebases) you’ll often see very different levels of maturity in the software engineering practices employed. By software engineering practices, I’m specifically referring to the quality of the code and the amount of technical debt present in the project. Things such as Test Driven Development, Domain Driven Design, Behavior Driven Development, proper adherence to the SOLID principles, etc. are all practices that you would expect at the mature end of the spectrum. At the other end of the spectrum would be the quick-and-dirty solutions that are done using something like an Access Database, Excel Spreadsheet, or maybe some quick “drag-and-drop coding”. For this blog post I’m going to refer to this as the Software Engineering Maturity Spectrum (SEMS). I believe there is a time and a place for projects at every part of that SEMS. The risks and costs associated with under-engineering solutions have been written about a million times over so I won’t bother going into them again here, but there are also (unnecessary) costs with over-engineering a solution. Sometimes putting multiple layers, and IoC containers, and abstracting out the persistence, etc is complete overkill if a one-time use Access database could solve the problem perfectly well. A lot of software developers I talk to seem to automatically jump to the very right-hand side of this SEMS in everything they do. A common rationalization I hear is that it may seem like a small trivial application today, but these things always grow and stick around for many years, then you’re stuck maintaining a big ball of mud. I think this is a cop-out. Sure you can’t always anticipate how an application will be used or grow over its lifetime (can you ever??), but that doesn’t mean you can’t manage it and evolve the underlying software architecture as necessary (even if that means having to toss the code out and re-write it at some point…maybe even multiple times). My thoughts are that we should be making a conscious decision around the start of each project approximately where on the SEMS we want the project to exist. I believe this decision should be based on 3 factors: 1. Importance - How important to the business is this application? What is the impact if the application were to suddenly stop working? 2. Complexity - How complex is the application functionality? 3. Life-Expectancy - How long is this application expected to be in use? Is this a one-time use application, does it fill a short-term need, or is it more strategic and is expected to be in-use for many years to come? Of course this isn’t an exact science. You can’t say that Project X should be at the 73% mark on the SEMS and expect that to be helpful. My point is not that you need to precisely figure out what point on the SEMS the project should be at then translate that into some prescriptive set of practices and techniques you should be using. 
    Rather my point is that we need to be aware that there is a spectrum, and that not everything is going to be (or should be) at the edges of that spectrum, indeed a large number of projects should probably fall somewhere within the middle; and different projects should adopt a different level of software engineering practices and maturity levels based on the needs of that project. To give an example of this way of thinking from my day job: Every couple of years my company plans and hosts a large event where ~400 of our customers all fly in to one location for a multi-day event with various activities. We have some staff whose job it is to organize the logistics of this event, which includes tracking which flights everybody is booked on, arranging for transportation to/from airports, arranging for hotel rooms, name tags, etc. The last time we arranged this event all these various pieces of data were tracked in separate spreadsheets and reconciliation and cross-referencing of all the data was literally done by hand using printed copies of the spreadsheets and several people sitting around a table going down each list row by row. Obviously there is some room for improvement in how we are using software to manage the event’s logistics. The next time this event occurs we plan to provide the event planning staff with a more intelligent tool (either an Excel spreadsheet or probably an Access database) that can track all the information in one location and make sure that the various pieces of data are properly linked together (so for example if a person cancels you only need to delete them from one place, and not a dozen separate lists). This solution would fall at or near the very left end of the SEMS meaning that we will just quickly create something with very little attention paid to using mature software engineering practices. If we examine this project against the 3 criteria I listed above for determining its place within the SEMS we can see why:
    Importance – If this application were to stop working the business doesn’t grind to a halt, revenue doesn’t stop, and in fact our customers wouldn’t even notice since it isn’t a customer-facing application. The impact would simply be more work for our event planning staff as they revert back to the previous way of doing things (assuming we don’t have any data loss).
    Complexity – The use cases for this project are pretty straightforward. It simply needs to manage several lists of data, and link them together appropriately. Precisely the task that Access (and/or Excel) can do with minimal custom development required.
    Life-Expectancy – For this specific project we’re only planning to create something to be used for the one event (we only hold these events every 2 years). If it works well this may change (see below).
    Let’s assume we hack something out quickly and it works great when we plan the next event. We may decide that we want to make some tweaks to the tool and adopt it for planning all future events of this nature. In that case we should examine where the current application is on the SEMS, and make a conscious decision whether something needs to be done to move it further to the right based on the new objectives and goals for this application. This may mean scrapping the Access database and re-writing it as an actual web or Windows application. In this case, the life-expectancy changed, but let’s assume the importance and complexity didn’t change all that much.
We can still probably get away with not adopting a lot of the so-called “best practices”. For example, we can probably still use some of the RAD tooling available and might have an Autonomous View style design that connects directly to the database and binds to typed datasets (we might even choose to simply leave it as an access database and continue using it; this is a decision that needs to be made on a case-by-case basis). At Anvil Digital we have aspirations to become a primarily product-based company. So let’s say we use this tool to plan a handful of events internally, and everybody loves it. Maybe a couple years down the road we decide we want to package the tool up and sell it as a product to some of our customers. In this case the project objectives/goals change quite drastically. Now the tool becomes a source of revenue, and the impact of it suddenly stopping working is significantly less acceptable. Also as we hold focus groups, and gather feedback from customers and potential customers there’s a pretty good chance the feature-set and complexity will have to grow considerably from when we were using it only internally for planning a small handful of events for one company. In this fictional scenario I would expect the target on the SEMS to jump to the far right. Depending on how we implemented the previous release we may be able to refactor and evolve the existing codebase to introduce a more layered architecture, a robust set of automated tests, introduce a proper ORM and IoC container, etc. More likely in this example the jump along the SEMS would be so large we’d probably end up scrapping the current code and re-writing. Although, if it was a slow phased roll-out to only a handful of customers, where we collected feedback, made some tweaks, and then rolled out to a couple more customers, we may be able to slowly refactor and evolve the code over time rather than tossing it out and starting from scratch. The key point I’m trying to get across is not that you should be throwing out your code and starting from scratch all the time. But rather that you should be aware of when and how the context and objectives around a project changes and periodically re-assess where the project currently falls on the SEMS and whether that needs to be adjusted based on changing needs. Note: There is also the idea of “spectrum decay”. Since our industry is rapidly evolving, what we currently accept as mature software engineering practices (the right end of the SEMS) probably won’t be the same 3 years from now. If you have a project that you were to assess at somewhere around the 80% mark on the SEMS today, but don’t touch the code for 3 years and come back and re-assess its position, it will almost certainly have changed since the right end of the SEMS will have moved farther out (maybe the project is now only around 60% due to decay). Developer Skills Another important aspect to this whole discussion is around the skill sets of your architects and lead developers. When talking about the progression of a developers skills from junior->intermediate->senior->… they generally start by only being able to write code that belongs on the left side of the SEMS and as they gain more knowledge and skill they become capable of working at a higher and higher level along the SEMS. 
We all realize that the learning never stops, but eventually you’ll get to the point where you can comfortably develop at the right-end of the SEMS (the exact practices and techniques that translates to is constantly changing, but that’s not the point here). A critical skill that I’d love to see more evidence of in our industry is the most senior guys not only being able to work at the right-end of the SEMS, but more importantly be able to consciously work at any point along the SEMS as project needs dictate. An even more valuable skill would be if you could make the conscious decision to move a projects code further right on the SEMS (based on changing needs) and do so in an incremental manner without having to start from scratch. An exercise that I’m planning to go through with all of our projects here at Anvil in the near future is to map out where I believe each project currently falls within this SEMS, where I believe the project *should* be on the SEMS based on the business needs, and for those that don’t match up (i.e. most of them) come up with a plan to improve the situation.

    Read the article

  • Blogging is Hard

    - by Aaron Lazenby
    Not really. But wi-fi access is limited to common areas in the COLLABORATE 10 conference center here in Las Vegas. So my grand roving iPad blog update plan has been delayed a day while I measured signal strength and searched for a place to sit. Tuesday morning, I accomplished both. Yesterday I shot a nice, quick video of Bahseer Khan about embedded decision support--a part of his Oracle Fusion Applications presentation that I think could do with some additional discussion as we ramp up for Oracle's next-generation applications. I'll post that video here by the end of the day. Later today I'll also be interviewing OAUG president David Ferguson about the prevailing trends at COLLABORATE 10, the addition of Sun (and Sun's user groups) to the Oracle portfolio, and what the next 12 months hold in store for the Oracle user community. Look for that video later today too. If you can't wait for me to dash down to the lobby to make a blog update, don't forget that you can follow Profit at COLLABORATE 10 on Twitter (@OracleProfit). That way, you'll get updates about Billy Cripe's kilt in real time. More to come as this day develops. Next up: virtualization. Also, notes and coverage from yesterday's keynote presentation.

    Read the article

  • Profit Staff Takes Center Stage...

    - by Aaron Lazenby
    ...for a moment, at least. Here's a somewhat unflattering shot of me (left) and a nice one of Profit/Oracle Magazine art director Richard Merchan (right) at the Wells Fargo museum in San Francisco, CA. We were shooting the cover for the May issue of Profit with CFO Howard Atkins and took some souvenir shots in front of the classic Wells Fargo stage coach. Thanks to Richard and photographer Bob Adler for their hard work on the May issue.

    Read the article

  • Website Editor control for WYSIWYG/regions

    - by Dan Smith
    For lack of a better title, let me try to explain further: I'm looking for a control that will allow me to have a library of "page elements" (such as a list of employees, or a photo gallery, or a contact form, etc.) that could be dragged onto the page canvas. The page canvas could have pre-set regions/boxes where these items could be dragged into, preventing the user from screwing up the page's layout. I'm looking for any pre-built commercial (or open-source with commercial use allowed) tools available like this.

    Read the article

  • Thunderbird uses the wrong browser

    - by Aaron Digulla
    I'm unable to make Thunderbird open the default browser. In the browser preferences, Chromium is selected as the default browser. It's also selected in "Default Applications" in System Settings. In Thunderbird, I read "Chrome (Default)", which is wrong on all levels: Chrome itself complains that it's not the default browser when I click a link inside Thunderbird. In all other places that I could find, Chromium is the default. Here is what I tried: I used update-alternatives --config x-www-browser to select chromium-browser as well (see How do I change the default browser?). And even when I select a different browser from the list in the Thunderbird preferences, it still opens Chrome. My current solution is to create a link from /usr/bin/google-chrome to chromium-browser. How can I force Thunderbird to use the browser I want? EDIT: I also updated gnome-www-browser (update-alternatives --config gnome-www-browser) after feedback from roadmr, but that didn't help. At least sensible-browser opens Chromium now, but Thunderbird is stubborn.
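    For what it's worth, Thunderbird's link handling generally follows the desktop-wide URL handler rather than the Debian x-www-browser alternative, so the xdg settings are worth checking too. A sketch, assuming the desktop file really is named chromium-browser.desktop on your system:
      # What does the desktop currently consider the default browser?
      xdg-settings get default-web-browser
      # Point the default browser and the http/https URL handlers at Chromium
      xdg-settings set default-web-browser chromium-browser.desktop
      xdg-mime default chromium-browser.desktop x-scheme-handler/http
      xdg-mime default chromium-browser.desktop x-scheme-handler/https
    After that, restarting Thunderbird and re-testing a link should show whether it picks up the desktop setting or is still caching its own idea of the default.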

    Read the article

  • Windows 7 on an EEE PC 901 - Is it a practical change?

    - by Dave
    I am currently running WinXP on my EEE PC 901, and I'm happy to say that it runs really well. But this did not come without significant manipulation of the OS. Here are the basic steps I took:
    Install XP
    Modify registry to install
    Install bare essential drivers
    Relocate page file to d:\ (remember, this model has two SSDs, one roughly 3.6gb, and the other roughly 16gb - XP won't run on the bigger drive, only the smaller one)
    Install remaining drivers
    Skip normal updates, install service pack 2 straight away
    Modify system registry to place service pack backup folder into new Program Files directory on D drive (where software is being installed)
    Change My Documents folder to sit on D drive
    Install .NET Framework
    Install remaining updates and service pack 3 (the hidden backup folders in the c:\Windows directory are deleted after every update, as well as the contents of the service pack downloads folder, in order to continually free up space)
    I have also found Disktrix UltimateDefrag to be brilliant at keeping the system clean and tidy. This is roughly the order I did things in. In this configuration the machine works really well. QUESTION: Can this kind of configuration be implemented with Windows 7 to achieve the same result on this machine? Thanks in advance. Dave.

    Read the article
