Search Results

Search found 3213 results on 129 pages for 'spring ws'.


  • Developer Training – A Conclusive Summary- Part 5

    - by pinaldave
    Developer Training - Importance and Significance - Part 1 | Developer Training – Employee Morals and Ethics – Part 2 | Developer Training – Difficult Questions and Alternative Perspective - Part 3 | Developer Training – Various Options for Developer Training – Part 4 | Developer Training – A Conclusive Summary - Part 5
    We have now reached the end of our series about developer training. I hope you have come away thinking that training is the best way to advance in your company and that you are looking for training opportunities right now. If you're still not convinced, here are a few things to keep in mind: Training benefits the employer and the employee. A well-trained employee is a happy employee, and a happy employee is more efficient and productive. Training an employee might be expensive, but it is less expensive than hiring a new person. Whether you look at it from the employee's or the company's point of view, there are always advantages to training.
    A Broader View: This series was written for developer training, but it is not limited to developers. IT pros, system admins, DBAs and many other technology professionals can benefit; this series is for professionals everywhere. The concepts and takeaways remain the same across platforms, regardless of technology affiliation.
    Pass the Knowledge: If I had to pick the single most important piece of advice related to training, it would be this: pass the knowledge on. Once you have decided in favor of training, there is more to it than simply showing up and staying awake. It is always a good idea to take notes; at the very least they will help you stay awake, and they will often serve as a good way to remember your training when you go back to work. You can also use them to pass your new knowledge on to fellow employees, which can be very fun and rewarding.
    Right Place, Right Time and Right Training: There are many ways to get developer training. In-person, on-the-job training is easy to come by and is the most common type, but don't overlook my favorite: On Demand. Being able to learn at your own pace, in your own place and on your own time makes training a realistic goal for almost every employee. I can think of nothing more important in life than furthering your education, especially when you work in a field that is constantly changing, like technology. Whether you like it or not, training is incredibly important, and because there are so many different training formats (live, online, through books, through people) I am certain that we all can find a way to be trained that best suits our goals and personalities.
    The Teacher Within: If you think of anyone who is a master of the technology field or an incredibly successful developer (the obvious examples that spring to mind are Steve Jobs or Bill Gates), you will also find a teacher. Both these individuals spent their lives developing better technology, but also educating other developers and the public about how to use these technologies and how they can change your life for the better. I think we all should strive to be like these wonderful teachers. We might not be able to change the world, but we can certainly change a few lives around us. Even if we never become trainers ourselves, being trained as a student is a good exercise. We learn a lot and become better employees, and it would not be a stretch to say that this makes us better individuals as well.
    Final Say: I think learning and growing in your chosen field is not only a good idea career-wise, but can be fun, too! I for one never feel more alive than when I am learning about something I am really passionate about. I think my job title, technology evangelist, explains how enthusiastic I am about this subject. But please don't think of me only as someone who wants to train and educate others (although that is also one of my passions). I am also a passionate student. I enjoy learning new things and am always on the lookout for new ways to learn and new people to learn from.
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Developer Training, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Tricks and Optimizations for your Sitecore website

    - by amaniar
    When working with Sitecore there are some optimizations/configurations I usually repeat in order to make my app production ready. The following is a small list I have compiled from experience, Sitecore documentation, conversations with Sitecore engineers, etc. It is not meant to be technically complete and might not fit all environments. Simple configurations that can make a difference:
    1) Configure Sitecore caches. This is the most straightforward and surest way of increasing the performance of your website. Data and item cache sizes (/databases/database [id=web]) should be configured as needed. You may start with a smaller number and tune them as needed.
        <cacheSizes hint="setting"> <data>300MB</data> <items>300MB</items> <paths>5MB</paths> <standardValues>5MB</standardValues> </cacheSizes>
    Tune the html, registry, etc. cache sizes for your website.
        <cacheSizes> <sites> <website> <html>300MB</html> <registry>1MB</registry> <viewState>10MB</viewState> <xsl>5MB</xsl> </website> </sites> </cacheSizes>
    Tune the prefetch cache settings under the App_Config/Prefetch/ folder. Sample /App_Config/Prefetch/Web.Config:
        <configuration> <cacheSize>300MB</cacheSize> <!--preload items that use this template--> <template desc="mytemplate">{XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}</template> <!--preload this item--> <item desc="myitem">{XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}</item> <!--preload children of this item--> <children desc="childitems">{XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}</children> </configuration>
    Break your page into sublayouts so you may cache most of them. Read the caching configuration reference: http://sdn.sitecore.net/upload/sitecore6/sc62keywords/cache_configuration_reference_a4.pdf
    2) Disable Analytics for the shell site.
        <site name="shell" virtualFolder="/sitecore/shell" physicalFolder="/sitecore/shell" rootPath="/sitecore/content" startItem="/home" language="en" database="core" domain="sitecore" loginPage="/sitecore/login" content="master" contentStartItem="/Home" enableWorkflow="true" enableAnalytics="false" xmlControlPage="/sitecore/shell/default.aspx" browserTitle="Sitecore" htmlCacheSize="2MB" registryCacheSize="3MB" viewStateCacheSize="200KB" xslCacheSize="5MB" />
    3) Increase the check interval for the MemoryMonitorHook so it doesn't run every 5 seconds (the default).
        <hook type="Sitecore.Diagnostics.MemoryMonitorHook, Sitecore.Kernel"> <param desc="Threshold">800MB</param> <param desc="Check interval">00:05:00</param> <param desc="Minimum time between log entries">00:01:00</param> <ClearCaches>false</ClearCaches> <GarbageCollect>false</GarbageCollect> <AdjustLoadFactor>false</AdjustLoadFactor> </hook>
    4) Set Analytics.PerformLookup (Sitecore.Analytics.config) to false if your environment doesn't have access to the internet or you don't intend to use reverse DNS lookup.
        <setting name="Analytics.PerformLookup" value="false" />
    5) Set the value of the "Media.MediaLinkPrefix" setting to "-/media": <setting name="Media.MediaLinkPrefix" value="-/media" /> and add the corresponding handler to the customHandlers section:
        <customHandlers> <handler trigger="-/media/" handler="sitecore_media.ashx" /> <handler trigger="~/media/" handler="sitecore_media.ashx" /> <handler trigger="~/api/" handler="sitecore_api.ashx" /> <handler trigger="~/xaml/" handler="sitecore_xaml.ashx" /> <handler trigger="~/icon/" handler="sitecore_icon.ashx" /> <handler trigger="~/feed/" handler="sitecore_feed.ashx" /> </customHandlers>
    Link: http://squad.jpkeisala.com/2011/10/sitecore-media-library-performance-optimization-checklist/
    6) Performance counters should be disabled in production if they are not being monitored. <setting name="Counters.Enabled" value="false" />
    7) Disable item/memory/timing threshold warnings. Due to the nature of this component, it brings no value in production.
        <!--<processor type="Sitecore.Pipelines.HttpRequest.StartMeasurements, Sitecore.Kernel" />--> <!--<processor type="Sitecore.Pipelines.HttpRequest.StopMeasurements, Sitecore.Kernel"> <TimingThreshold desc="Milliseconds">1000</TimingThreshold> <ItemThreshold desc="Item count">1000</ItemThreshold> <MemoryThreshold desc="KB">10000</MemoryThreshold> </processor>-->
    8) The ContentEditor.RenderCollapsedSections setting is a hidden setting in the web.config file, which by default is true. Setting it to false will improve client performance for authoring environments. <setting name="ContentEditor.RenderCollapsedSections" value="false" />
    9) Add a machineKey section to your Web.Config file when using a web farm. Link: http://msdn.microsoft.com/en-us/library/ff649308.aspx
    10) If you get errors in the log files similar to: WARN Could not create an instance of the counter 'XXX.XXX' (category: 'Sitecore.System') Exception: System.UnauthorizedAccessException Message: Access to the registry key 'Global' is denied. - make sure the ApplicationPool user is a member of the system "Performance Monitor Users" group on the server.
    11) Disable WebDAV configurations on the CD server if WebDAV is not being used. More: http://sitecoreblog.alexshyba.com/2011/04/disable-webdav-in-sitecore.html
    12) Change Log4Net settings to only log errors on content delivery environments to avoid unnecessary logging. <root> <priority value="ERROR" /> <appender-ref ref="LogFileAppender" /> </root>
    13) Disable Analytics for any content item that doesn't add value, for example a page that redirects to another page.
    14) When using Web User Controls avoid registering them on the page the plain ASP.NET way: <%@ Register Src="~/layouts/UserControls/MyControl.ascx" TagName="MyControl" TagPrefix="uc2" %> Use a Sublayout web control instead, so Sitecore caching can be leveraged: <sc:Sublayout ID="ID" Path="/layouts/UserControls/MyControl.ascx" Cacheable="true" runat="server" />
    15) Avoid querying for all children recursively when all items are direct children.
        Sitecore.Context.Database.SelectItems("/sitecore/content/Home//*");
        //Use instead:
        Sitecore.Context.Database.GetItem("/sitecore/content/Home").GetChildren();
    16) On IIS, enable static and dynamic content compression on both CM and CD. More: http://technet.microsoft.com/en-us/library/cc754668%28WS.10%29.aspx
    17) Enable HTTP keep-alive and content expiration in IIS.
    18) Use GUIDs when accessing items and fields instead of names or paths. It's faster and won't break your code when things get moved or renamed.
        Context.Database.GetItem("{324DFD16-BD4F-4853-8FF1-D663F6422DFF}");
        Context.Item.Fields["{89D38A8F-394E-45B0-826B-1A826CF4046D}"];
        //is better than:
        Context.Database.GetItem("/Home/MyItem");
        Context.Item.Fields["FieldName"];
    Hope this helps.

    Read the article

  • Top Tweets SOA Partner Community – November 2011

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity
    soacommunity SOA Community Dutch ACEs SOA Partner Community award celebration wp.me/p10C8u-i9
    OracleBPM Gauging Maturity of your BPM Strategy – part 1/2, bit.ly/vJE9UZ
    MagicChatzi Dutch ACE's and ACE Directors had a small party: achatzia.blogspot.com/2011/11/celebr…
    leonsmiers #Capgemini #Oracle #BPM Blog index bit.ly/tUYtvD #yam
    lucasjellema Blog post by my colleague Emiel on the AMIS blog: Timeouts in Oracle SOA Suite 11g – tinyurl.com/73amo3r
    biemond Solving __OAUX_GENXSD_.TOP.XSD with BPEL: When you use an external web service in combination with a BPEL servic… t.co/Gzzatzrr
    OracleBlogs Jumpstart Fusion Middleware projects with Oracle User Productivity Kit ow.ly/1fJMev
    cpurdy on Oracle Coherence data grid, its new RESTful APIs, and Oracle Service Bus (OSB): blogs.oracle.com/slc/entry/orac…
    Accenture Learn how Service-Oriented Architecture can help public service agencies solve legacy system issues. bit.ly/sTteM4 #SOA
    eelzinga Thanks for organising it Andreas! #soacommunity
    eelzinga Had a nice drink with the fellow Dutch Oracle ACE members for a little celebration of the SOA Community Partner Award. #soacommunity
    EmielP Wrote a blogpost about timeouts in the #Oracle #SOA Suite: bit.ly/uhUcrX
    OracleBlogs Processing Binary Data in SOA Suite 11g t.co/Tzd1xBsY
    OracleBlogs Finding the Value in SOA by Stephen Bennett t.co/9MMLJoLz
    OTNArchBeat SOA All the Time; Architects in AZ; Clearing Info Integration hurdles t.co/5viNj8ib
    OracleBlogs Demo: Business Transaction Management with SOA Management Pack ow.ly/1fFBv3
    OTNArchBeat SOA All the Time; Architects in AZ; Clearing Info Integration hurdles t.co/Dnfzo0PN
    oracletechnet Wikis.oracle.com lives
    leonsmiers A new #capgemini #oracle #blog, Measuring the Human Task activity in Oracle BPM bit.ly/uPan08 #yam @CapgeminiOracle
    OTNArchBeat 3 SOA business cases, explained in a 2-minute elevator speech | @JoeMcKendrick t.co/aYGNkZup
    OTNArchBeat Gartner, Inc. places Oracle SOA Governance in Magic Quadrant for SOA Governance Technologies t.co/bSG5cuTr
    Jphjulstad Red carpet to Oracle BPM – evita.no/ikbViewer/soa-…
    Oracle #Oracle Named a Leader in #SOA Governance Magic Quadrant by Leading Analyst Firm t.co/prnyGu2U
    soacommunity What presentations & topics do you like to see at the next SOA & BPM & Webcenter Community Forum early 2012? #soacommunity
    soacommunity Oracle BPM Suite 11g Handbook Released wp.me/p10C8u-hU
    OTNArchBeat SOA Development Virtual Developer Day (On Demand) | @soacommunity bit.ly/sqhQmX
    OracleBlogs SOA Development Virtual Developer Day (On Demand) t.co/MDrdnx0h
    OracleBlogs Specialized Partners Only! New Service to Promote Your Events t.co/qTgyEpY4
    biemond @stevendavelaar this is for you t.co/hInKCcfY it explains your sso problem
    soacommunity SOA Development Virtual Developer Day (on demand) t.co/flXPWk4R
    soacommunity IPT Swiss SOA Experts – thanks for the nice ink wp.me/p10C8u-i3
    soacommunity Enjoy #wjax specially the presentations from our #ACE @t_winterberg @myfear @AdamBien pic.twitter.com/m8VcBSG3
    OTNArchBeat Discounts on books, more, for Oracle Technology Network members bit.ly/vRxMfB
    OracleSOA Justify the ROI of SOA in 10 seconds…a pic is worth 1000 words bit.ly/roi_of_soa_img #oraclesoa #soa #oow11
    orclateamsoa A-Team SOA Blog: Case Management in BPM 11g - Mark Foster: Oracle BPM 11g & Case Management I've seen… t.co/l5zb6pFr
    t_winterberg The next SIG #SOA is coming up: December 7 in Hamburg. New tooling and experiences around Oracle FMW, SOA, BPM… (cont) deck.ly/~YC57v
    OracleBlogs Continuous Integration for SOA/BPM ow.ly/1fsekI
    OracleBlogs BPM Suite 11g Handbook Released ow.ly/1frlzv
    lucasjellema Iterating over collection (array) in BPM (and dispatching jobs for entries in array): t.co/1SEhSvWv – subprocesses are the key.
    lucasjellema Useful tip from Mark Nelson: BPM API documentation (as well as Human Workflow Service) available: redstack.wordpress.com/2011/09/28/api…
    OTNArchBeat SOA, cloud: it's the architecture that matters | Joe McKendrick zd.net/tNCiTF
    orclateamsoa Building a job dispatcher in BPM -or- Iterating over collections in BPM ow.ly/1frbrz
    orclateamsoa Using the Database as a Policy Store for SOA 11g ow.ly/1frbrA
    OracleBPM Oracle launches Process Accelerators for BPM: t.co/XPEE61QL
    Jphjulstad Human-Centric BPM Selection Checklist t.co/3TZXZHLH
    OracleBlogs Fusion Middleware General Session at OOW 2011: Missed It? Read On… t.co/aU5JvM6K
    gschmutz Great! The product page of the OSB 11g Development Cookbook is now online: t.co/5Jfbe6Ng Looking forward to get it, u too?
    brhubart Oracle IT Architecture Essentials; Lightweight Composite Service Development with SCA and Spring; Cloud Migration ow.ly/7esNg
    eelzinga New blogpost: Oracle Service Bus, Generic fault handling, bit.ly/sGr4UL #osb #oracleservicebus
    For regular information on Oracle SOA Suite become a member of the SOA Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). Technorati Tags: soacommunity, twitter, Oracle, SOA Community, Jürgen Kress, OPN

    Read the article

  • Spotlight on Claims: Serving Customers Under Extreme Conditions

    - by [email protected]
    Oracle Insurance's director of marketing for EMEA, John Sinclair, recently attended the CII Spotlight on Claims event in London. Bad weather and its implications for the insurance industry have become very topical as the frequency and diversity of natural disasters - including rains, wind and snow - have surged across Europe this winter. On England's wettest day on record, the county of Cumbria was flooded with 12 inches of rain within 24 hours. Freezing temperatures wreaked havoc on European travel, causing high-speed TGV trains to break down and stranding hundreds of passengers in a tunnel under the English Channel all night long without heat or electricity. A storm named Xynthia thrashed France and surrounding countries with hurricane force, flooding ports and killing 51 people. After the Spring Equinox, insurers may have thought the worst had passed. Then along came Eyjafjallajökull, spewing out vast quantities of volcanic ash in what is turning out to be one of the most costly natural disasters in history. Such extreme events challenge insurance companies' ability to serve their customers just when customers need their help most. When you add economic downturn and competitive pressures to the mix, insurers are further stretched and required to continually learn and innovate to meet high customer expectations with reduced budgets. These and other issues were hot topics of discussion at the recent "Spotlight on Claims" seminar in London, focused on how weather is affecting claims and the insurance industry. The event was organized by the CII (Chartered Insurance Institute), a group with 90,000 members. CII has been at the forefront of setting professional standards for the insurance industry for over a century. Insurers came to the conference to hear how they could better serve their customers under extreme weather conditions, learn from the experience of their peers, and hear about technological breakthroughs in climate modeling, geographic intelligence and IT. Customer case studies at the conference highlighted the importance of effective and constant communication in handling the overflow of catastrophe-related claims. First and foremost is the need to rapidly establish initial communication with claimants to build their confidence in a positive outcome. Ongoing communication then needs to be continued throughout the claims cycle to manage expectations and maintain ownership of the process from start to finish. Strong internal communication to support frontline staff was also deemed critical to successful crisis management, as was communication with the broader insurance ecosystem to tap into extended resources and business intelligence. Advances in technology - such as web-based systems to access policies and enter first notice of loss in the field, as well as customer-focused self-service portals and multichannel alerts - are instrumental in improving customer satisfaction and helping insurers to deal with the claims surge, which often can reach four or more times normal workloads. Dynamic models of the global climate system can now be used to better understand weather-related risks, and as these models mature it is hoped that they will soon become more accurate in predicting the timing of catastrophic events. Geographic intelligence is also being used within a claims environment to better assess loss reserves and detect fraud.
    Despite these advances in dealing with catastrophes and predicting their occurrence, there will never be a substitute for qualified front-line staff to deal with customers. In light of pressures to streamline efficiency, there was debate as to whether outsourcing was the solution, or whether it was better to build on the people you have. In the final analysis, nearly everybody agreed that in the future insurance companies would have to work better and smarter to keep on top. An appeal was also made for greater collaboration amongst industry participants in dealing with the extreme conditions and systematic stress brought on by natural disasters. It was pointed out that the public oftentimes judges the industry as a whole rather than the individual carriers when it comes to freakish events, and that all would benefit at such times - especially the end customer - from the pooling of limited resources and professional skills rather than competing in silos for competitive advantage. One case study that stood out was on how The Motorists Insurance Group was able to power through one of the most devastating catastrophes in recent years - Hurricane Ike. The keys to Motorists' success were superior people, processes and technology. They did a lot of upfront planning and invested in their people, creating a healthy team environment that delivered "max service" even when they were experiencing the same level of devastation as the rest of the population. Processes were rapidly adapted to meet the challenge of the catastrophe and continually adjusted to Ike's specific conditions as they evolved. Technology was fundamental to the execution of their strategy, enabling anywhere access, on-the-fly reassignment of resources and rapid training to augment the workforce. You can learn more about the Motorists experience by watching this video. John Sinclair is marketing director for Oracle Insurance in EMEA. He has more than 20 years of experience in insurance and financial services.

    Read the article

  • Implementing an Interceptor Using NHibernate’s Built In Dynamic Proxy Generator

    - by Ricardo Peres
    NHibernate 3.2 came with an included proxy generator, which means there is no longer the need - or the possibility, for that matter - to choose Castle DynamicProxy, LinFu or Spring. This is actually a good thing, because it means one less assembly to deploy. Apparently, this generator was based, at least partially, on LinFu. As there are not many tutorials out there demonstrating its usage, here's one, demonstrating one of the most requested features: implementing INotifyPropertyChanged. This interceptor, of course, will still offer all of the NHibernate functionality that you are used to, such as lazy loading. We will start by implementing an NHibernate interceptor, by inheriting from the base class NHibernate.EmptyInterceptor. This class does not do anything by itself, but it allows us to plug in behavior by overriding some of its methods, in this case, Instantiate:

        public class NotifyPropertyChangedInterceptor : EmptyInterceptor
        {
            private ISession session = null;

            private static readonly ProxyFactory factory = new ProxyFactory();

            public override void SetSession(ISession session)
            {
                this.session = session;
                base.SetSession(session);
            }

            public override Object Instantiate(String clazz, EntityMode entityMode, Object id)
            {
                Type entityType = Type.GetType(clazz);
                IProxy proxy = factory.CreateProxy(entityType, new _NotifyPropertyChangedInterceptor(), typeof(INotifyPropertyChanged)) as IProxy;

                _NotifyPropertyChangedInterceptor interceptor = proxy.Interceptor as _NotifyPropertyChangedInterceptor;
                interceptor.Proxy = this.session.SessionFactory.GetClassMetadata(entityType).Instantiate(id, entityMode);

                this.session.SessionFactory.GetClassMetadata(entityType).SetIdentifier(proxy, id, entityMode);

                return (proxy);
            }
        }

    Then we need a class that implements the NHibernate dynamic proxy behavior; let's place it inside our interceptor, because it will only need to be used there:

        class _NotifyPropertyChangedInterceptor : NHibernate.Proxy.DynamicProxy.IInterceptor
        {
            private PropertyChangedEventHandler changed = delegate { };

            public Object Proxy
            {
                get;
                set;
            }

            #region IInterceptor Members

            public Object Intercept(InvocationInfo info)
            {
                Boolean isSetter = info.TargetMethod.Name.StartsWith("set_") == true;
                Object result = null;

                if (info.TargetMethod.Name == "add_PropertyChanged")
                {
                    PropertyChangedEventHandler propertyChangedEventHandler = info.Arguments[0] as PropertyChangedEventHandler;
                    this.changed += propertyChangedEventHandler;
                }
                else if (info.TargetMethod.Name == "remove_PropertyChanged")
                {
                    PropertyChangedEventHandler propertyChangedEventHandler = info.Arguments[0] as PropertyChangedEventHandler;
                    this.changed -= propertyChangedEventHandler;
                }
                else
                {
                    result = info.TargetMethod.Invoke(this.Proxy, info.Arguments);
                }

                if (isSetter == true)
                {
                    String propertyName = info.TargetMethod.Name.Substring("set_".Length);
                    this.changed(this.Proxy, new PropertyChangedEventArgs(propertyName));
                }

                return (result);
            }

            #endregion
        }

    What this does for every interceptable method (those that are either virtual or come from INotifyPropertyChanged) is:
    For the methods that came from the INotifyPropertyChanged interface, add_PropertyChanged and remove_PropertyChanged (yes, events are methods), it adds an implementation that adds or removes the event handlers to the delegate declared as changed;
    For all the others, it directs them to the place where they are actually implemented, which is the Proxy field;
    If the call is setting a property, it fires the PropertyChanged event afterwards.
    In order to use this, we need to add the interceptor to the Configuration before building the ISessionFactory:

        using (ISessionFactory factory = cfg.SetInterceptor(new NotifyPropertyChangedInterceptor()).BuildSessionFactory())
        {
            using (ISession session = factory.OpenSession())
            using (ITransaction tx = session.BeginTransaction())
            {
                Customer customer = session.Get<Customer>(100); //some id
                INotifyPropertyChanged inpc = customer as INotifyPropertyChanged;
                inpc.PropertyChanged += delegate(Object sender, PropertyChangedEventArgs e)
                {
                    //fired when a property changes
                };
                customer.Address = "some other address"; //will raise PropertyChanged
                customer.RecentOrders.ToList(); //will trigger the lazy loading
            }
        }

    Any problems, questions, do drop me a line!
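    One prerequisite worth spelling out: for the proxy to intercept property setters, the entity's members must be declared virtual. A minimal sketch of an entity shaped like the Customer used above follows; the exact class is my assumption, since the original post does not show the entity:

        using System.Collections.Generic;

        // Sketch of a proxy-friendly entity: every member is virtual so the
        // NHibernate dynamic proxy can intercept it and raise PropertyChanged.
        public class Customer
        {
            public virtual int Id { get; set; }
            public virtual string Address { get; set; }
            public virtual IList<Order> RecentOrders { get; set; } // Order is another mapped entity
        }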

    Read the article

  • Identity Propagation across Web and Web Service - 11g

    - by Prakash Yamuna
    I was on a customer call recently and this topic came up. In fact, since this topic seems to come up fairly frequently, I thought I would describe the recommended model for doing SSO for web apps and then propagating identity across the back-end web services. The image below shows a typical flow. Here is a more detailed drill-down of what happens at each step of the flow (the number in red in the diagram maps to the description below of the behind-the-scenes processing that happens in the stack).
    [1] The Web App is protected with OAM, so the typical SSO scenario applies. The Web App URL is protected in OAM. The Web Gate intercepts the request from the browser to the Web App; if there is an OAM (SSO) token, the Web Gate validates it. If there is no SSO token, the user is directed to the login page, enters credentials, is authenticated, and an OAM token is created for that browser session.
    [2] Once the Web Gate validates the OAM token, the token is propagated to the WLS Server where the Web App is running. You need to ensure that you have configured the OAM Identity Asserter in the WebLogic domain. If the OAM Identity Asserter is configured, this will end up creating a JAAS Subject. Details can be found at: http://docs.oracle.com/cd/E23943_01/doc.1111/e15478/webgate.htm#CACIAEDJ
    [3] The Web Service client (in the Web App) is secured with one of the OWSM SAML client policies. If secured in this fashion, the OWSM Agent creates a SAML token from the JAAS Subject (created in [2] by the OAM Identity Asserter) and injects it into the SOAP message. Steps for securing a JEE JAX-WS proxy client using OWSM policies are documented at: http://docs.oracle.com/cd/E23943_01/web.1111/b32511/attaching.htm#BABBHHHC Note: As shown in the diagram, instead of building a JEE Web App you can also use WebCenter and build portlets. If you are using WebCenter then you can follow the same architecture; only the steps for securing WebCenter portlets with OWSM are different: http://docs.oracle.com/cd/E23943_01/webcenter.1111/e12405/wcadm_security_wss.htm#CIHEBAHB
    [4] The SOA Composite App is secured with the OWSM SAML service policy. The OWSM Agent intercepts the incoming SOAP request, validates the SAML token and creates a JAAS Subject.
    [5] When the SOA Composite App tries to invoke the OSB Proxy Service, the SOA Composite App "Reference" is secured with the OWSM SAML client policy. Here again the OWSM Agent will create a new SAML token from the JAAS Subject created in [4] and inject it into the SOAP message. Steps for securing SOA Composite Apps (Service, Reference, Component) are documented at: http://docs.oracle.com/cd/E23943_01/web.1111/b32511/attaching.htm#CEGDGIHD
    [6] When the request reaches the OSB Proxy Service, the Proxy Service is again secured with the OWSM SAML token service policy, so the same steps are performed as in [4]. The end result is a JAAS Subject.
    [7] When OSB needs to invoke the Business App Web Service, it goes through the OSB Business Service. The OSB Business Service is secured with the OWSM SAML client policy and step [5] is repeated. Steps for securing OSB Proxy Services and OSB Business Services are documented at: http://docs.oracle.com/cd/E23943_01/admin.1111/e15867/proxy_services.htm#OSBAG1097
    [8] Finally, when the message reaches the Business App Web Service, this service is protected by the OWSM SAML service policy and step [4] is repeated by the OWSM Agent. Steps for securing WebLogic Web Services, ADF Web Services, etc. are documented at: http://docs.oracle.com/cd/E23943_01/web.1111/b32511/attaching.htm#CEGCJDIF
    In the above description, for purposes of brevity, I have not described which OWSM SAML policies one should use. OWSM ships with a number of SAML policies; I briefly described some of the trade-offs involved with the various SAML policies here. The diagram above and the accompanying description of each step of the flow assume you are using "SAML SV" or "SAML Bearer" based policies without an STS.
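    As a concrete illustration of the client-side attachment in steps [5] and [7], a SAML client policy attached to a composite reference in composite.xml looks roughly like the sketch below. The reference name, namespaces and WSDL details are placeholders of mine, and while the policy URI shown is one of the standard OWSM SAML client policies, verify the exact policy name against your security requirements:

        <!-- Sketch: OWSM SAML client policy attached to a composite reference;
             names and endpoints below are placeholders -->
        <reference name="MyOsbProxyReference">
          <interface.wsdl interface="http://example.com/osb#wsdl.interface(MyPortType)"/>
          <binding.ws port="http://example.com/osb#wsdl.endpoint(MyService/MyPort)"
                      location="MyOsbProxy.wsdl">
            <wsp:PolicyReference URI="oracle/wss11_saml_token_with_message_protection_client_policy"
                                 orawsp:category="security" orawsp:status="enabled"/>
          </binding.ws>
        </reference>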

    Read the article

  • Java Cloud Service Integration using Web Service Data Control

    - by Jani Rautiainen
    Java Cloud Service (JCS) provides a platform to develop and deploy business applications in the cloud. In Fusion Applications Cloud deployments customers do not have the option to deploy custom applications developed with JDeveloper, to ensure the integrity and supportability of the hosted application service. Instead, custom applications can be deployed to JCS and integrated with the Fusion Applications Cloud instance. This series of articles will go through the features of JCS, provide end-to-end examples of how to develop and deploy applications on JCS, and show how to integrate them with a Fusion Applications instance. In this article a custom application integrating with Fusion Applications using a Web Service Data Control will be implemented.
    Pre-requisites
    Access to Cloud instance: In order to deploy the application, access to a JCS instance is needed; a free trial JCS instance can be obtained from the Oracle Cloud site. To register you will need a credit card, even though the credit card will not be charged. To register simply click "Try it" and choose the "Java" option. The confirmation email will contain the connection details. See this video for an example of the registration. Once the request is processed you will be assigned 2 service instances: Java and Database. Applications deployed to JCS must use Oracle Database Cloud Service as their underlying database, so when a JCS instance is created a database instance is associated with it using a JDBC data source. The cloud services can be monitored and managed through the web UI. For details refer to Getting Started with Oracle Cloud.
    JDeveloper: JDeveloper contains Cloud-specific features related to e.g. connection and deployment. To use these features, download JDeveloper from the JDeveloper download site by clicking the "Download JDeveloper 11.1.1.7.1 for ADF deployment on Oracle Cloud" link; this version of JDeveloper has the JCS integration features that will be used in this article. For versions that do not include the Cloud integration features, the Oracle Java Cloud Service SDK or the JCS Java Console can be used for deployment. For details on installing and configuring JDeveloper refer to the installation guide. For details on the SDK refer to Using the Command-Line Interface to Monitor Oracle Java Cloud Service and Using the Command-Line Interface to Manage Oracle Java Cloud Service.
    Create Application
    In this example the "JcsWsDemo" application created in the "Java Cloud Service Integration using Web Service Proxy" article is used as the base.
    Create Web Service Data Control
    In this example we will use a Web Service Data Control to integrate with the Credit Rule Service in Fusion Applications. The data control will be used to query data from Fusion Applications using a web service call and present the data in a table.
    To generate the data control choose the "Model" project and navigate to "New -> All Technologies -> Business Tier -> Data Controls -> Web Service Data Control" and enter the following:
    Name: CreditRuleServiceDC
    URL: https://ic-[POD].oracleoutsourcing.com/icCnSetupCreditRulesPublicService/CreditRuleService?WSDL
    Service: {http://xmlns.oracle.com/apps/incentiveCompensation/cn/creditSetup/creditRule/creditRuleService/}CreditRuleService
    On step 2 select the "findRule" operation. Skip step 3 and on step 4 define the credentials to access the service. Do note that in this example these credentials are only used when testing locally; for JCS deployment the credentials need to be manually updated in the EAR file. Click "Finish" and the data control generation is done.
    Creating UI
    In order to use the data control we will need to populate the complex objects FindCriteria and FindControl. For simplicity, in this example we will create logic in a managed bean that populates the objects. Open "JcsWsDemoBean.java" and add the following logic:

        Map findCriteria;
        Map findControl;

        public void setFindCriteria(Map findCriteria) {
            this.findCriteria = findCriteria;
        }

        public Map getFindCriteria() {
            findCriteria = new HashMap();
            findCriteria.put("fetchSize", 10);
            findCriteria.put("fetchStart", 0);
            return findCriteria;
        }

        public void setFindControl(Map findControl) {
            this.findControl = findControl;
        }

        public Map getFindControl() {
            findControl = new HashMap();
            return findControl;
        }

    Open "JcsWsDemo.jspx", navigate to "Data Controls -> CreditRuleServiceDC -> findRule(Object, Object) -> result" and drag and drop the "result" node into the "af:form" element in the page. In the "Edit Table Columns" dialog remove all columns except "RuleId" and "Name". In the "Edit Action Binding" window displayed, enter a reference to the Java class created above by selecting "#{JcsWsDemoBean.findCriteria}". Also define the value for "findControl" by selecting "#{JcsWsDemoBean.findControl}".
    Deploy to JCS
    For the WS DC the authentication details need to be updated in the connection details before deploying. Open "connections.xml" by navigating "Application Resources -> Descriptors -> ADF META-INF -> connections.xml" and change the user name and password entry from:
        <soap username="transportUserName" password="transportPassword"
    to match the access details for the target environment. Follow the same steps as documented in the previous article "Java Cloud Service ADF Web Application". Once deployed the application can be accessed with the URL: https://java-[identity domain].java.[data center].oraclecloudapps.com/JcsWsDemo-ViewController-context-root/faces/JcsWsDemo.jspx
    When accessed, the first 10 rules in the system are displayed.
    Summary
    In this article we learned how to integrate with Fusion Applications using a Web Service Data Control in JCS. In future articles various other integration techniques will be covered.

    Read the article

  • Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service

    - by Elton Stoneman
    We're in the process of delivering an enabling project to expose on-premise WCF services securely to Internet consumers. The Azure Service Bus Relay is doing the clever stuff; we register our on-premise service with Azure, consumers call into our .servicebus.windows.net namespace, and their requests are relayed and serviced on-premise. In theory it's all wonderfully simple; by using the relay we get lots of protocol options, free HTTPS and load balancing, and by integrating with ACS we get plenty of security options. Part of our delivery is a suite of sample consumers for the service - .NET, jQuery, PHP - and this set of posts will cover setting up the service and the consumers.
    Part 1: Exposing the on-premise service
    In theory, this is ultra-straightforward. In practice, and on a dev laptop, it is - but in a corporate network with firewalls and proxies, it isn't, so we'll walk through some of the pitfalls. Note that I'm using the "old" Azure portal which will soon be out of date, but the new shiny portal should have the same steps available and be easier to use. We start with a simple WCF service which takes a string as input, reverses the string and returns it. The Part 1 version of the code is on GitHub here: IPASBR Part 1.
    Configuring Azure Service Bus
    Start by logging into the Azure portal and registering a Service Bus namespace which will be our endpoint in the cloud. Give it a globally unique name, set it up somewhere near you (if you're in Europe, remember Europe (North) is Ireland, and Europe (West) is the Netherlands), and enable ACS integration by ticking "Access Control" as a service.
    Authenticating and authorizing to ACS
    When we try to register our on-premise service as a listener for the Service Bus endpoint, we need to supply credentials, which means only trusted service providers can act as listeners. We can use the default "owner" credentials, but that has admin permissions, so a dedicated service account is better (Neil Mackenzie has a good post On Not Using owner with the Azure AppFabric Service Bus with lots of permission details). Click on "Access Control Service" for the namespace, navigate to Service Identities and add a new one. Give the new account a sensible name and description. Let ACS generate a symmetric key for you (this will be the shared secret we use in the on-premise service to authenticate as a listener), but be sure to set the expiration date to something usable. The portal defaults to expiring new identities after 1 year - but when your year is up *your identity will expire without warning* and everything will stop working. In production, you'll need governance to manage identity expiration and a process to make sure you renew identities and roll new keys regularly. The new service identity needs to be authorized to listen on the service bus endpoint. This is done through claim mapping in ACS - we'll set up a rule that says if the nameidentifier in the input claims has the value serviceProvider, in the output we'll have an action claim with the value Listen. In the ACS portal you'll see that there is already a Relying Party Application set up for ServiceBus, which has a Default rule group.
    Edit the rule group and click Add to add this new rule. The values to use are:
    Issuer: Access Control Service
    Input claim type: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier
    Input claim value: serviceProvider
    Output claim type: net.windows.servicebus.action
    Output claim value: Listen
    When your service namespace and identity are set up, open the Part 1 solution and put your own namespace, service identity name and secret key into the file AzureConnectionDetails.xml in Solution Items, e.g.:
        <azure namespace="sixeyed-ipasbr">
          <!-- ACS credentials for the listening service (Part1):-->
          <service identityName="serviceProvider"
                   symmetricKey="nuR2tHhlrTCqf4YwjT2RA2BZ/+xa23euaRJNLh1a/V4="/>
        </azure>
    Build the solution, and the T4 template will generate the Web.config for the service project with your Azure details in the transportClientEndpointBehavior:
        <behavior name="SharedSecret">
          <transportClientEndpointBehavior credentialType="SharedSecret">
            <clientCredentials>
              <sharedSecret issuerName="serviceProvider"
                            issuerSecret="nuR2tHhlrTCqf4YwjT2RA2BZ/+xa23euaRJNLh1a/V4="/>
            </clientCredentials>
          </transportClientEndpointBehavior>
        </behavior>
    and your service namespace in the Azure endpoint:
        <!-- Azure Service Bus endpoints -->
        <endpoint address="sb://sixeyed-ipasbr.servicebus.windows.net/net"
                  binding="netTcpRelayBinding"
                  contract="Sixeyed.Ipasbr.Services.IFormatService"
                  behaviorConfiguration="SharedSecret">
        </endpoint>
    The sample project is hosted in IIS, but it won't register with Azure until the service is activated. Typically you'd install AppFabric 1.1 for Windows Server and set the service to auto-start in IIS, but for dev just navigate to the local REST URL, which will activate the service and register it with Azure.
    Testing the service locally
    As well as an Azure endpoint, the service has a WebHttpBinding for local REST access:
        <!-- local REST endpoint for internal use -->
        <endpoint address="rest"
                  binding="webHttpBinding"
                  behaviorConfiguration="RESTBehavior"
                  contract="Sixeyed.Ipasbr.Services.IFormatService" />
    Build the service, then navigate to: http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc/rest/reverse?string=abc123 - and you should see the reversed string response. If your network allows it, you'll get the expected response as before, but in the background your service will also be listening in the cloud. Good stuff! Who needs network security? Onto the next post for consuming the service with the netTcpRelayBinding.
    Setting up network access to Azure
    But, if you get an error, it's because your network is secured and it's doing something to stop the relay working. The Service Bus relay bindings try to use direct TCP connections to Azure, so if ports 9350-9354 are available *outbound*, then the relay will run through them. If not, the binding steps down to standard HTTP, and issues a CONNECT across port 443 or 80 to set up a tunnel for the relay.
    If your network security guys are doing their job, the first option will be blocked by the firewall, and the second option will be blocked by the proxy, so you'll get this error: System.ServiceModel.CommunicationException: Unable to reach sixeyed-ipasbr.servicebus.windows.net via TCP (9351, 9352) or HTTP (80, 443) - and that will probably be the start of lots of discussions. Network guys don't really like giving servers special permissions for the web proxy, and they really don't like opening ports, so they'll need to be convinced about this. The resolution in our case was to put up a dedicated box in a DMZ, tinker with the firewall and the proxy until we got a relay connection working, then run some traffic which the network guys monitored to do a security assessment afterwards. Along the way we hit a few more issues, diagnosed mainly with Fiddler and Wireshark:
    System.Net.ProtocolViolationException: Chunked encoding upload is not supported on the HTTP/1.0 protocol - this means the TCP ports are not available, so Azure tries to relay messaging traffic across HTTP. The service can access the endpoint, but the proxy is downgrading traffic to HTTP 1.0, which does not support tunneling, so Azure can't make its connection. We were using the Squid proxy, version 2.6. The Squid project is incrementally adding HTTP 1.1 support, but there's no definitive list of what's supported in what version (here are some hints).
    System.ServiceModel.Security.SecurityNegotiationException: The X.509 certificate CN=servicebus.windows.net chain building failed. The certificate that was used has a trust chain that cannot be verified. Replace the certificate or change the certificateValidationMode. The revocation function was unable to check revocation because the revocation server was offline. - by this point we'd given up on the HTTP proxy and opened the TCP ports. We got this error when the relay binding does its authentication hop to ACS. The messaging traffic is TCP, but the control traffic still goes over HTTP, and as part of the ACS authentication the process checks with a revocation server to see if Microsoft's ACS cert is still valid, so the proxy still needs some clearance. The service account (the IIS app pool identity) needs access to: www.public-trust.com and mscrl.microsoft.com
    We still got this error periodically with different accounts running the app pool. We fixed that by ensuring the machine-wide proxy settings are set up, so every account uses the correct proxy: netsh winhttp set proxy proxy-server="http://proxy.x.y.z" - and you might need to run this to clear out your credential cache: certutil -urlcache * delete
    If your network guys end up grudgingly opening ports, they can restrict connections to the IP address range for your chosen Azure datacentre, which might make them happier - see Windows Azure Datacenter IP Ranges. After all that you've hopefully got an on-premise service listening in the cloud, which you can consume from pretty much any technology.
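    For context, the on-premise service being relayed is conceptually as small as the sketch below. The contract and namespace names follow the endpoint configuration shown earlier; the operation name and its body are my assumptions rather than the actual GitHub source:

        using System.ServiceModel;
        using System.ServiceModel.Web;

        namespace Sixeyed.Ipasbr.Services
        {
            [ServiceContract]
            public interface IFormatService
            {
                // Reached via netTcpRelayBinding through Azure, or locally via
                // the webHttpBinding REST endpoint (/rest/reverse?string=abc123).
                [OperationContract]
                [WebGet(UriTemplate = "reverse?string={value}")]
                string Reverse(string value);
            }

            public class FormatService : IFormatService
            {
                public string Reverse(string value)
                {
                    char[] chars = value.ToCharArray();
                    System.Array.Reverse(chars);
                    return new string(chars);
                }
            }
        }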

    Read the article

  • Is Apache 2.2.22 able to sustain 1,000 simultaneously connected clients?

    - by Fnux
    For an article in a newspaper, I'm benchmarking 5 different web servers (Apache2, Cherokee, Lighttpd, Monkey and Nginx). The tests consist of measuring execution times as well as other parameters such as the number of requests served per second and the amount of RAM and CPU used, under a growing load of simultaneous clients (from 1 to 1,000 with a step of 10), each client sending 1,000,000 requests for a small fixed file, then for a medium fixed file, then for small dynamic content (hello.php) and finally for complex dynamic content (the computation of the reimbursement of a loan). All the web servers are able to sustain such a load (up to 1,000 clients) except Apache2, which always stops responding when the test reaches 450 to 500 simultaneous clients. My configuration is:
    CPU: AMD FX 8150 8 cores @ 4.2 GHz
    RAM: 32 Gb
    SSD: 2 x Crucial 240 Gb SATA6
    OS: Ubuntu 12.04.3 64 bit
    WS: Apache 2.2.22
    My Apache2 configuration is as follows:
    /etc/apache2/apache2.conf
        LockFile ${APACHE_LOCK_DIR}/accept.lock
        PidFile ${APACHE_PID_FILE}
        Timeout 30
        KeepAlive On
        MaxKeepAliveRequests 1000000
        KeepAliveTimeout 2
        ServerName "fnux.net"
        <IfModule mpm_prefork_module>
        StartServers 16
        MinSpareServers 16
        MaxSpareServers 16
        ServerLimit 2048
        MaxClients 1024
        MaxRequestsPerChild 0
        </IfModule>
        User ${APACHE_RUN_USER}
        Group ${APACHE_RUN_GROUP}
        AccessFileName .htaccess
        <Files ~ "^\.ht">
        Order allow,deny
        Deny from all
        Satisfy all
        </Files>
        DefaultType None
        HostnameLookups Off
        ErrorLog ${APACHE_LOG_DIR}/error.log
        LogLevel emerg
        Include mods-enabled/*.load
        Include mods-enabled/*.conf
        Include httpd.conf
        Include ports.conf
        LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
        LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
        LogFormat "%h %l %u %t \"%r\" %>s %O" common
        LogFormat "%{Referer}i -> %U" referer
        LogFormat "%{User-agent}i" agent
        Include conf.d/
        Include sites-enabled/
    /etc/apache2/ports.conf
        NameVirtualHost *:8180
        Listen 8180
        <IfModule mod_ssl.c>
        Listen 443
        </IfModule>
        <IfModule mod_gnutls.c>
        Listen 443
        </IfModule>
    /etc/apache2/mods-available
        <IfModule mod_fastcgi.c>
        AddHandler php5-fcgi .php
        Action php5-fcgi /cgi-bin/php5.external
        <Location "/cgi-bin/php5.external">
        Order Deny,Allow
        Deny from All
        Allow from env=REDIRECT_STATUS
        </Location>
        </IfModule>
    /etc/apache2/sites-available/default
        <VirtualHost *:8180>
        ServerAdmin webmaster@localhost
        DocumentRoot /var/www/apache2
        <Directory />
        Options FollowSymLinks
        AllowOverride None
        </Directory>
        <Directory /var/www/>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride None
        Order allow,deny
        allow from all
        </Directory>
        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory "/usr/lib/cgi-bin">
        AllowOverride None
        Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
        Order allow,deny
        Allow from all
        </Directory>
        ErrorLog ${APACHE_LOG_DIR}/error.log
        LogLevel emerg
        ##### CustomLog ${APACHE_LOG_DIR}/access.log combined
        Alias /doc/ "/usr/share/doc/"
        <Directory "/usr/share/doc/">
        Options Indexes MultiViews FollowSymLinks
        AllowOverride None
        Order deny,allow
        Deny from all
        Allow from 127.0.0.0/255.0.0.0 ::1/128
        </Directory>
        <IfModule mod_fastcgi.c>
        AddHandler php5-fcgi .php
        Action php5-fcgi /php5-fcgi
        Alias /php5-fcgi /usr/lib/cgi-bin/php5-fcgi
        FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -host 127.0.0.1:9000 -pass-header Authorization
        </IfModule>
        </VirtualHost>
    /etc/security/limits.conf
        * soft nofile 1000000
        * hard nofile 1000000
    So, I would truly appreciate your advice on setting up Apache2 to make it able to sustain 1,000 simultaneous clients, if this is even possible. TIA for your help. Cheers.

    Read the article

  • Java2Days 2012 Trip Report

    - by reza_rahman
    Java2Days 2012 was held in beautiful Sofia, Bulgaria on October 25-26. For those of you not familiar with it, this is the third installment of the premier Java conference for the Balkan region. It is an excellent effort by admirable husband and wife team Emo Abadjiev and Iva Abadjieva as well as the rest of the Java2Days team including Yoana Ivanova and Nadia Kostova. Thanks to their hard work, the conference continues to grow vigorously with almost a thousand enthusiastic, bright young people attending this year and no less than three tracks on Java, the Cloud and Mobile. The conference is a true gem in this region of the world and I am very proud to have been a part of it again, along with the other world class speakers the event rightfully attracts. It was my honor to present the first talk of the conference. It was a full-house session on Java EE 7 and 8 titled "JavaEE.Next(): Java EE 7, 8, and Beyond". The talk was primarily along the same lines as Arun Gupta's JavaOne 2012 technical keynote. I covered the changes in JMS 2, the Java API for WebSocket (JSR 356), the Java API for JSON Processing (JSON-P), JAX-RS 2, JCache, JPA 2.1, JTA 1.2, JSF 2.2, Java Batch, Bean Validation 1.1 and the rest of the APIs in Java EE 7. I also briefly talked about the possible contents of Java EE 8. My stretch goal was to gather some feedback on some open issues in the Java EE EG (more on that soon) but I ran out of time in the short format forty-five minute session. The talk was received well and I had some pretty good discussions afterwards. The slides for the talk are here: JavaEE.Next(): Java EE 7, 8, and Beyond from reza_rahman To my delight, the Java2Days folks were very interested in my domain-driven design/Java EE 6 talk (titled "Domain Driven Design with Java EE 6"). I've had this talk in my inventory for a long time now but it always gets overridden by less theoretical talks on APIs, tools, etc. The talk has three parts -- a brief overview of DDD theory, mapping DDD to Java EE and actual running DDD code in Java EE 6/GlassFish. For the demo, I converted the well-known DDD sample application (http://dddsample.sourceforge.net/) written mostly in Spring 2 and Hibernate 2 to Java EE 6. My eventual plan is to make the code available via a top level java.net project. Even despite the broad topic and time constraints, the talk went very well. It was a full house, the Q & A was excellent and one of the other speakers even told me they thought this was the best talk of the conference! The slides for the talk are here: Domain Driven Design with Java EE 6 from Reza Rahman The code examples are available here: https://blogs.oracle.com/reza/resource/dddsample.zip for now, as a simple zip file. Give me a shout if you would like to get it up and running. It was also a great honor to present the last session of the conference. It was a talk on the Java API for WebSocket/JSR 356 titled "Building HTML5/WebSocket Applications with JSR 356 and GlassFish". The talk is based on Danny Coward's JavaOne 2012 talk. The talk covers the basic of WebSocket, the JSR 356 API and a simple demo using Tyrus/GlassFish. The talk went very well and there were some very good questions afterwards. The slides for the talk are here: Building HTML5/WebSocket Applications with GlassFish and JSR 356 from Reza Rahman The code samples are available here: https://blogs.oracle.com/arungupta/resource/totd183-HelloWebSocket.zip. You'll need the latest promoted GlassFish 4 build to run the code. Give me a shout if you need help. 
Besides presenting my talks, I got to attend some great sessions on OSGi, HTML5, cloud, agile and Java 8. I got an invite to speak at the Macedonia JUG when possible. Victor Grazi of InfoQ wrote about my sessions and Java2Days here: http://www.infoq.com/news/2012/11/Java2DaysConference. Stoyan Rachev was very kind to blog about my sessions here: http://www.stoyanr.com/2012/11/java2days-2012-java-ee.html. I definitely enjoyed Java2Days 2012 and hope to be part of the conference next year!

    Read the article

  • Goals for 2010 Retrospective

    - by Brian Jackett
    As we approach the end of 2010 I’d like to take a few minutes to reflect back on this past year and revisit the goals that I set for myself at the beginning of the year (click here to see those goals).  I feel it is important to track your goals not only to see if you accomplished them, but also to see what new directions in life you pursued.  Once we enter 2011 I’ll follow up with a new post on goals for the new year.

Professional

Blog – This year I intended to write at least 2 posts a month.  Looking back, I far surpassed that goal by writing 47 posts (this one being my 48th).  As with many things in life, quantity does not mean quality.  A good example is the number of my posts announcing upcoming speaking engagements and providing links to presentation slides and scripts.  That aside, I like to at least keep content relatively fresh on this blog, which I was able to accomplish.  At the same time I’ve gotten much more comfortable in my blogging style and it has become much easier to write.

Speaking – I didn’t define a clear goal for speaking engagements, but had a rough idea of wanting to speak at 2-3 events.  Once again I far exceeded that number by speaking at 10 separate events and delivering 12+ presentations.  I’m very thankful for all of the opportunities that I was given and all of the wonderful people I have met as a result.

Volunteering – This year I intended to help out with the COSPUG (now Buckeye SPUG) steering committee and the Stir Trek conference.  I fulfilled both goals, as well as taking on lead organizer duties for the first ever SharePoint Saturday Columbus.  Each of these events and groups turned out to be successful and I was glad to be a part of them all.  I look forward to continuing to volunteer with each next year in some capacity.

Android Development – My goal of getting into Android development was a late addition, but one I didn’t necessarily fulfill.  I spent a couple nights downloading the tools, configuring my environment, and going through some “simple” tutorials.  I say “simple” because in my opinion the tutorials were not laid out very well, took a long time to get running properly, and confused me more than helped.  After about a week I was frustrated with the process and didn’t think it was a good use of my time.  On a side note, I’ve dabbled in Windows Phone 7 development over the past few months and have been very excited by how easy and intuitive it was to get started and develop some proofs of concept.

Personal

Getting in Shape – I had intended to play on recreational sports leagues and work out on a semi-regular basis.  For the most part I fulfilled this goal by playing on various softball and volleyball leagues as well as using the gym.  At the same time I had some major setbacks.  In the spring I badly sprained my ankle and got hit in the knee with a softball, which kept me inactive for almost 2 months.  More recently I broke my knuckle (click here to read about it), which I am still recovering from.

Volunteering – On the volunteering front I kept my commitments at my parish’s high school youth group.  As for other volunteering opportunities, I got involved with a great organization called Columbus Gives Back (website).  I’ve volunteered with them a few times and really enjoy their goal of providing opportunities to people with busy schedules.  They offer a variety of events, typically after work hours and spread out around Columbus, with no set commitment on the time you need to put in.  If you have the time or motivation I highly recommend them.
House/Condo – I had been thinking of buying a house or condo this past summer, but decided to extend my apartment lease for another year instead.  I have begun the search for a place in the past few weeks and am excited to begin the process of owning a home.

Conclusion

This year I was able to set and achieve many of my goals.  For next year I’ll try to put more specific numbers to all of my goals.  If any of you readers set goals for 2011, feel free to send me a link as I’d love to see what you are aiming to accomplish.  Have a great end of 2010 and best wishes for the start of 2011!

-Frog Out

    Read the article

  • Custom Configuration Section Handlers

    Most .NET developers who need to store something in configuration tend to use appSettings for this purpose, in my experience.  More recently, the framework itself has helped things by adding the <connectionStrings /> section, so at least connection strings are in their own section and not adding to the appSettings clutter that pollutes most apps.  I recommend avoiding appSettings for several reasons; in addition to the usual arguments, strong typing and validation are further reasons to go the custom configuration section route. For my ASP.NET Tips and Tricks talk, I use the following example, which is a simple DemoSettings class that includes two fields.  The first is an integer representing how many attendees are present for the talk, and the second is the title of the talk.  The setup in web.config is as follows:

<configSections>
  <section name="DemoSettings" type="ASPNETTipsAndTricks.Code.DemoSettings" />
</configSections>

<DemoSettings sessionAttendees="100" title="ASP.NET Tips and Tricks DevConnections Spring 2010" />

Referencing the values in code is strongly typed and straightforward.  Here I have a page that exposes two properties which internally get their values from the configuration section handler:

public partial class CustomConfig1 : System.Web.UI.Page
{
    public string SessionTitle
    {
        get { return DemoSettings.Settings.Title; }
    }

    public int SessionAttendees
    {
        get { return DemoSettings.Settings.SessionAttendees; }
    }
}

Note that the settings are only read from the config file once; after that they are cached, so there is no need to be concerned about excessive file access. Now we've seen how to set things up in the config file and how to refer to the settings in code.  All that remains is to see the configuration class itself:

public class DemoSettings : ConfigurationSection
{
    private static DemoSettings settings =
        ConfigurationManager.GetSection("DemoSettings") as DemoSettings;

    public static DemoSettings Settings
    {
        get { return settings; }
    }

    [ConfigurationProperty("sessionAttendees", DefaultValue = 200, IsRequired = false)]
    [IntegerValidator(MinValue = 1, MaxValue = 10000)]
    public int SessionAttendees
    {
        get { return (int)this["sessionAttendees"]; }
        set { this["sessionAttendees"] = value; }
    }

    [ConfigurationProperty("title", IsRequired = true)]
    [StringValidator(InvalidCharacters = "~!@#$%^&*()[]{}/;\"|\\")]
    public string Title
    {
        get { return (string)this["title"]; }
        set { this["title"] = value; }
    }
}

The class is pretty straightforward, but there are some important components to note.  First, it must inherit from System.Configuration.ConfigurationSection.  Next, as a convention I like to have a static settings member that is responsible for pulling out the section when the class is first referenced, and to expose that instance via a static read-only property, Settings.  Note that the types of both of these are the type of my class, DemoSettings. The properties of the class, SessionAttendees and Title, should map to the attributes of the config element in the XML file.  The [ConfigurationProperty] attribute allows you to map the attribute name to the property name (thus using XML naming conventions in the config file and C# naming conventions in the code).  In addition, you can specify a default value to use if nothing is specified in the config file, and whether or not the setting must be provided (IsRequired).  If it is required, then it doesn't make sense to include a default value.
Beyond defaults and required values, you can specify more advanced validation rules for the configuration values using additional attributes, such as [IntegerValidator] and [StringValidator].  Using these, you can declaratively specify that your configuration values must fall within a given range, or omit certain forbidden characters, for instance.  Of course you can write your own custom validation attributes, and there are others provided in System.Configuration. Individual sections can also be loaded from separate files, using syntax like this:

<DemoSettings configSource="demosettings.config" />

Summary

Using a custom configuration section handler is not hard.  If your application or component requires configuration, I recommend creating a custom configuration handler dedicated to your app or component.  Doing so will reduce the clutter in appSettings, will provide you with strong typing and validation, and will make it much easier for other developers or system administrators to locate and understand the various configuration values that are necessary for a given application.

    Read the article

  • Integration Patterns with Azure Service Bus Relay, Part 3: Anonymous partial-trust consumer

    - by Elton Stoneman
    This is the third in the IPASBR series; see also: Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service, and Integration Patterns with Azure Service Bus Relay, Part 2: Anonymous full-trust .NET consumer. As the patterns get further from the simple .NET full-trust consumer, all that changes is the communication protocol and the authentication mechanism. In Part 3 the scenario is that we still have a secure .NET environment consuming our service, so we can store shared keys securely, but the runtime environment is locked down so we can't use Microsoft.ServiceBus to get the nice WCF relay bindings. To support this we will expose a RESTful endpoint through the Azure Service Bus, and require the consumer to send a security token with each HTTP service request.

Pattern applicability

This is a good fit for scenarios where:

- the runtime environment is secure enough to keep shared secrets
- the consumer can execute custom code, including building HTTP requests with custom headers
- the consumer cannot use the Azure SDK assemblies
- the service may need to know who is consuming it
- the service does not need to know who the end-user is

Note there isn't actually a .NET requirement here. By exposing the service in a REST endpoint, anything that can talk HTTP can be a consumer. We'll authenticate through ACS, which also gives us REST endpoints, so the service is still accessed securely. Our real-world example would be a hosted cloud app, where we have enough room in the app's customisation to keep the shared secret somewhere safe and to hook in some HTTP calls. We will be flowing an identity through to the on-premise service now, but it will be the service identity given to the consuming app - the end user's identity isn't flowed through yet. In this post, we’ll consume the service from Part 1 in ASP.NET using the WebHttpRelayBinding. The code for Part 3 (+ Part 1) is on GitHub here: IPASBR Part 3.

Authenticating and authorizing with ACS

We'll follow the previous examples and add a new service identity for the namespace in ACS, so we can separate permissions for different consumers (see the walkthrough in Part 1). I've named the identity partialTrustConsumer. We’ll be authenticating against ACS with an explicit HTTP call, so we need a password credential rather than a symmetric key - for a nice secure option, generate a symmetric key, copy it to the clipboard, then change the credential type to password and paste in the key. We then need to do the same as in Part 2 and add a rule to map the incoming identity claim to an outgoing authorization claim that allows the identity to send messages to Service Bus:

Issuer: Access Control Service
Input claim type: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier
Input claim value: partialTrustConsumer
Output claim type: net.windows.servicebus.action
Output claim value: Send

As with Part 2, this sets up a service identity which can send messages into Service Bus, but cannot register itself as a listener, or manage the namespace.

RESTfully exposing the on-premise service through Azure Service Bus Relay

The Part 3 sample code is ready to go, just put your Azure details into Solution Items\AzureConnectionDetails.xml and “Run Custom Tool” on the .tt files.  But to do it yourself is very simple. We already have a WebGet attribute in the service for locally making REST calls, so we are just going to add a new endpoint which uses the WebHttpRelayBinding to relay that service through Azure.
It's as easy as adding this endpoint to Web.config for the service:

<endpoint address="https://sixeyed-ipasbr.servicebus.windows.net/rest"
          binding="webHttpRelayBinding"
          contract="Sixeyed.Ipasbr.Services.IFormatService"
          behaviorConfiguration="SharedSecret">
</endpoint>

- and adding the webHttp attribute in your endpoint behavior:

<behavior name="SharedSecret">
  <webHttp/>
  <transportClientEndpointBehavior credentialType="SharedSecret">
    <clientCredentials>
      <sharedSecret issuerName="serviceProvider"
                    issuerSecret="gl0xaVmlebKKJUAnpripKhr8YnLf9Neaf6LR53N8uGs="/>
    </clientCredentials>
  </transportClientEndpointBehavior>
</behavior>

Where's my WSDL?

The metadata story for REST is a bit less automated. In our local webHttp endpoint we've enabled WCF's built-in help, so if you navigate to: http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc/rest/help - you'll see the URI format for making a GET request to the service. The format is the same over Azure, so this is where you'll be connecting: https://[your-namespace].servicebus.windows.net/rest/reverse?string=abc123. Build the service with the new endpoint, open that in a browser and you'll get an XML version of an HTTP status code - a 401 with an error message stating that you haven’t provided an authorization header:

<?xml version="1.0"?><Error><Code>401</Code><Detail>MissingToken: The request contains no authorization header..TrackingId:4cb53408-646b-4163-87b9-bc2b20cdfb75_5,TimeStamp:10/3/2012 8:34:07 PM</Detail></Error>

By default, the setup of your Service Bus endpoint as a relying party in ACS expects a Simple Web Token to be presented with each service request, and in the browser we're not passing one, so we can't access the service. Note that this request doesn't get anywhere near your on-premise service; Service Bus only relays requests once they've got the necessary approval from ACS. Why didn't the consumer need to get ACS authorization in Part 2? It did, but it was all done behind the scenes in the NetTcpRelayBinding. By specifying our Shared Secret credentials in the consumer, the service call is preceded by a check on ACS to see that the identity provided is a) valid, and b) allowed access to our Service Bus endpoint. By making manual HTTP requests, we need to take care of that ACS check ourselves now. We do that with a simple WebClient call to the ACS endpoint of our service; passing the shared secret credentials, we will get back an SWT:

var values = new System.Collections.Specialized.NameValueCollection();
values.Add("wrap_name", "partialTrustConsumer"); // service identity name
values.Add("wrap_password", "suCei7AzdXY9toVH+S47C4TVyXO/UUFzu0zZiSCp64Y="); // service identity password
values.Add("wrap_scope", "http://sixeyed-ipasbr.servicebus.windows.net/"); // this is the realm of the RP in ACS
var acsClient = new WebClient();
var responseBytes = acsClient.UploadValues("https://sixeyed-ipasbr-sb.accesscontrol.windows.net/WRAPv0.9/", "POST", values);
rawToken = System.Text.Encoding.UTF8.GetString(responseBytes);

With a little manipulation, we then attach the SWT to subsequent REST calls in the authorization header; the token contains the Send claim returned from ACS, so we will be authorized to send messages into Service Bus.
Running the sample

Navigate to http://localhost:2028/Sixeyed.Ipasbr.WebHttpClient/Default.cshtml, enter a string and hit Go! - your string will be reversed by your on-premise service, routed through Azure. Using shared secret client credentials in this way means ACS is the identity provider for your service, and the claim which allows Send access to Service Bus is consumed by Service Bus. None of the authentication details make it through to your service, so your service is not aware of who the consumer is (MSDN calls this "anonymous authentication").

    Read the article

  • Gamification = -10#/3mo

    - by erikanollwebb
    One of the purposes of gamification of anything is to see if you can modify the behavior of the user. In the enterprise, that might mean getting sales people to enter more information into a CRM system, encouraging employees to update their HR records, motivating people to participate in forums and discussions, or to process invoices more quickly.  Wikipedia defines behavior modification as "the traditional term for the use of empirically demonstrated behavior change techniques to increase or decrease the frequency of behaviors, such as altering an individual's behaviors and reactions to stimuli through positive and negative reinforcement of adaptive behavior and/or the reduction of behavior through its extinction, punishment and/or satiation."  Gamification is just a way to modify someone's behavior using game mechanics. And the magic question is always whether it works. So I thought I would present my own little experiment from the last few months.  This spring, I upgraded to a Samsung Galaxy S4.  It's a pretty sweet phone in many ways, but one of the little extras I discovered was a built-in app called S Health. S Health is an app that you can use to track calories, weight and exercise, and it has a built-in pedometer. I looked at it when I got the phone, but assumed you had to turn it on to use it, so I didn't look at it much.  But sometime in July, I realized that in fact it just ran in the background and was quietly tracking my steps, with a goal of 10,000 per day.  10,000 steps per day is the magic number recommended by the Surgeon General and the American Heart Association.  Dr. Oz pushes it as the goal for daily exercise.  It's about 5 miles of walking. I'm generally not the kind of person who always has my phone with me.  I leave it in my purse and pull it out when I need it.  But then I realized that meant I wasn't getting a good measure of my steps.  I decided to do a little experiment and carry it with me as much as possible for a week.  That's when I discovered the gamification that changed my life over the last 3 months.  When I hit 10,000 steps, the app jingled out a little "success!" tune and I got a badge.  I was hooked.  I started carrying my phone.  I started making sure I had shoes I could walk in with me.  I started walking at lunch time, because I realized how often I sat at my desk for 8-10 hours every day without moving.  I started pestering my husband to walk with me after work because I hadn't hit my 10,000 yet, leading him at one point to say "I'm not as much a slave to that badge as you are!"  I started looking at parking lots differently.  Can't get a space up close?  No worries, just that many more steps toward my 10,000.  I even tried to see if there was a second power-user level at 15,000 or 20,000 (sadly, no).  If I was close at the end of the day, I have done laps around my house until I got my badge.  I have walked around the block one more time to get my badge.  I have mentally chastised myself when I forgot to put my phone in my pocket, because then I don't know how many steps I got.  One badge I remember especially fondly came when my boss and I were in New York City and we walked around the block of our hotel just to watch it pop up. There are a bunch of tools on the market now built around similar ideas for helping you track your exercise and make it social.  There are apps (my favorite is still Zombies, Run!).  You could buy a FitBit or an UP by Jawbone.   Interactive Fitness makes the Expresso stationary bike with built-in video games.
All designed to help you be more aware of your activity and keep you engaged and motivated.  And the idea is to help you change your behavior. I know someone who would spend extra time and work hard on the Expresso because he had built up strategies for how to kill the most dragons while he was riding to get more points.  When the machine broke down, he didn't ride a different bike because it just wasn't that interesting. But for me, just the simple jingle and badge have been all I needed.  I admit, I still giggle gleefully when I hear the tune sing out from my pocket. After a few weeks, I noticed I had dropped a few pounds.  Not a lot, just 2-3.  But then I was really hooked.  I started making a point both to eat a little less and hit 10,000 steps as much as I could.  I bemoaned that during the floods in Boulder, I wasn't hitting my 10,000 steps.  And now, a few months later, I'm almost 10 lbs lighter. All for 1 badge a day. So yes, simple gamification can increase motivation and engagement.  And that can lead to changes in behavior.  Now the job is to apply that to the enterprise space in a meaningful and engaging way. 

    Read the article

  • World Backup Day

    - by red(at)work
    Here at Red Gate Towers, the SQL Backup development team have been hunkered down in their shed for the last few months, with the toolbox, blowtorch and chamois leather out, upgrading SQL Backup. When we started, autumn leaves were falling. Now we're about to finish, spring flowers are budding. If not quite a gleaming new machine, at the very least a familiar, reliable engine with some shiny new bits on it will trundle magnificently out of the workshop. One of the interesting things I've noticed about working on software development teams is that the team is together for so long 'implementing' stuff - designing, coding, testing, fixing bugs and so on - that you occasionally forget why you're doing what you're doing. Doubt creeps in. It feels like a long time since we launched this project in a fanfare of optimism and enthusiasm, and all that clarity of purpose and mission "yee-haw" has dissipated with the daily pressures of development. Every now and again, we look up from our bunker and notice all those thousands of users out there, with their different configurations and working practices and each with their own set of problems and requirements, and we ask ourselves "does anyone care about what we're doing?" Has the world moved on while we've been busy? Could we have been doing something more useful with the time and talent of all these excellent people we've assembled? In truth, you can research and test and validate all you like, but you never really know if you've done the right thing (or at least, something valuable for some users) until you release. All projects suffer this insecurity. If they don't, maybe you're not worrying enough about what you're building. The two enemies of software development are certainty and complacency. Oh, and of course, rival teams with Nerf guns. The goal of SQL Backup 7 is to make it so easy to schedule regular restores of your backups that you have no excuse not to. Why schedule a restore? Because your data is not as good as your last backup. It's only as good as your last successful restore. If you're not checking your backups by restoring them and running an integrity check on the database, you're only doing half the job. It seems that most DBAs know that this is best practice, but it can be tricky and time-consuming to set up, so it's one of those tasks that can get forgotten in the midst of all the other demands on their time. Sometimes, they're just too busy firefighting. But what if it was simple to do? That was our inspiration for SQL Backup 7. So it was heartening to read Brent Ozar's blog post the other day about World Backup Day. To be honest, I'd never heard of World Backup Day (Talk Like a Pirate Day, yes, but not this one); however, its emphasis on not just backing up your data but checking the validity of those backups was exactly the same message we had in mind when building SQL Backup 7. It's printed on a piece of A3 above our planning board: "Make backup verification so easy to do that no DBA has an excuse for not doing it." It's the missing piece that completes the puzzle. Simple idea, great concept, useful feature, but, as it turned out, far from straightforward to implement. The problem is the future. As Marty McFly discovered over the course of three movies, the future is uncertain and hard to predict - so when you are scheduling a restore to take place an hour, day, week or month after the backup, there are all kinds of questions that you wouldn't normally have to consider. Where will this backup live? Will it even exist at the time?
Will it be split into multiple files? What will the file names be? Will it be encrypted? What files should it be restored to? SQL Backup needs to know what to expect at the time the restore job is actually run. Of course, a DBA will know the answer to all these questions, but to deliver the whole point of version 7, we wanted to make it easy for them to input that information into SQL Backup. We think we've done that. When you create your scheduled backup job, there is now an option to create a "reminder" to follow it up with a scheduled restore to verify the resulting backups. Actually, it's much more than a reminder, as it stores all the relevant data so you can click it and pre-populate the wizard with all the right settings to set up your verification restores. Simple. But, what do you think? We'd love you to try it. Post by Brian Harris

    Read the article

  • What Counts for A DBA: Observant

    - by drsql
    When walking up to the building where I work, I can see CCTV cameras placed here and there for monitoring access to the building. We are required to wear authorization badges which could be checked at any time. Do we have enemies?  Of course! No one is 100% safe; even if your life is a fairy tale, there is always a witch with an apple waiting to snack you into a thousand years of slumber (or at least so I recollect from elementary school). Even Little Bo Peep had to keep a wary lookout.    We nerdy types (or maybe it was just me?) generally learned on the school playground to keep an eye open for unprovoked attack from simpler, but more muscular souls, and to take steps to avoid messy confrontations well in advance. After we’d apprehensively negotiated adulthood with varying degrees of success, these skills of watching for danger, and avoiding it, translated quite well to the technical careers so many of us were destined for. And nowhere else is this talent for watching out for irrational malevolence so appropriate as in a career as a production DBA.   It isn’t always active malevolence that the DBA needs to watch out for, but the even scarier quirks of common humanity.  A large number of the issues that occur in the enterprise happen just randomly, or even just one time ever, in a spurious manner, as in the case where a person decided to download the entire MSDN library of software, cross join every non-indexed billion-row table together, and simultaneously stream the HD feed of 5 different sporting events, slowing network access just as the corporate online sale started. When the going gets tough like this, the decent DBA team gets going. They spring into action, check all of the sources of active information, observe that the issue is no longer happening, and figure that either it wasn’t the database’s fault or that the reboot of whatever device on the network was responsible fixed the problem.  This sort of reactive support is good, and will be the initial reaction of even excellent DBAs, but it is not the end of the story if you really want to know what happened and avoid getting called again when it isn’t even your fault.   When fires start raging within the corporate software forest, the DBA’s instinct is to actively find a way to douse the flames and get back to having no one in the company have any idea who they are.  Even better for them is to find a way of killing a potential problem while the fires are small, long before they can be classified as raging. The observant DBA will have already been monitoring the server environment for months in advance.  Most troubles, such as disk space and security intrusions, can be predicted and dealt with by alerting systems, whereas other trouble can come out of the blue and requires the skill of observing ongoing conditions and noticing inexplicable changes that could signal an emerging problem.  You can’t automate the DBA, because the bankable skill of a DBA is in detecting the early signs of unexpected problems, and working out how to deal with them before anyone else notices them.    To achieve this, the DBA will check the situation as it is currently happening, and in many cases is likely to have been the person who submitted the problem to the level 1 support person in the first place, just to let the support team know of impending issues (always well received, I tell you what!). Database and host computer settings, configurations, and even critical data might be profiled and captured for later comparisons.
He’ll use monitoring tools, whether built-in, commercial (not to be too crassly commercial or anything, but one such tool is SQL Monitor), or homebrew, to watch for problems and changes in the server environment.   You will know that you have it right when a support call comes in and you can look at your monitoring tools and quickly respond that “response time is well within the normal range, the query that supports the failing interface works perfectly and has actually only been called 67% as often as normal, so I am more than willing to help diagnose the problem, but it isn’t the database server’s fault and is probably a client or networking slowdown causing the interface to be used less frequently than normal.” And that is the best thing for any DBA to observe…

    Read the article

  • ActionScript/Flex ArrayCollection of Number objects to Java Collection<Long> using BlazeDS

    - by Justin
    Hello, I am using Flex 3 and make a call through a RemoteObject to a Java 1.6 method exposed with BlazeDS and Spring 2.5.5 integration over a SecureAMFChannel. The ActionScript is as follows (this code is an example of the real thing, which is on a separate dev network):

import com.adobe.cairngorm.business.ServiceLocator;
import mx.collections.ArrayCollection;
import mx.rpc.remoting.RemoteObject;
import mx.rpc.AsyncToken;
import mx.rpc.IResponder;

public class MyClass implements IResponder {
    private var service:RemoteObject = ServiceLocator.getInstance().getRemoteObject("myService");

    [ArrayElementType("Number")]
    private var myArray:ArrayCollection;

    public function MyClass() {
        var id1:Number = 1;
        var id2:Number = 2;
        var id3:Number = 3;
        myArray = new ArrayCollection([id1, id2, id3]);
        getData(myArray);
    }

    public function getData(myArrayParam:ArrayCollection):void {
        var token:AsyncToken = service.getData(myArrayParam);
        token.addResponder(this.responder); // assume the responder implementation exists and works
    }
}

This will make a call, once created, to the service Java class which is exposed through BlazeDS (assume the mechanics work, because they do for all other calls not involving Collection parameters). My Java service class looks like this:

public class MyService {
    public Collection<DataObjectPOJO> getData(Collection<Long> myArrayParam) {
        // The following loop is never executed; it throws an exception
        for (Long l : myArrayParam) {
            System.out.println(l);
        }
        // ... build and return the result collection
    }
}

The exception that is thrown is a ClassCastException saying that a java.lang.Integer cannot be cast to a java.lang.Long. I worked around this issue by looping through the collection using Object instead, checking whether each element is an Integer, casting it, calling .longValue() on it, and adding the result to a temp ArrayList. Yuk. The big problem is that my application is supposed to handle records in the billions from a DB, and the id will overflow the 2.147 billion limit of an integer. I would love to have BlazeDS, or the JavaAdapter in it, translate the ActionScript Number to a Long as specified in the method. I hate that, even though I use the generic, the underlying element type of the collection is an Integer. If this was straight Java, it wouldn't compile. Any ideas are appreciated. Solutions are even better! :)
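For reference, a minimal sketch of the workaround described above; DataObjectPOJO is the question's own placeholder, the Number-based signature is an assumption about what AMF actually delivers, and findByIds is a hypothetical lookup call:

import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

public class MyService {
    // Accepting Number sidesteps the ClassCastException: AMF deserializes small
    // ids as Integer, so iterating the collection as Long blows up as described.
    public Collection<DataObjectPOJO> getData(Collection<? extends Number> myArrayParam) {
        List<Long> ids = new ArrayList<Long>();
        for (Number n : myArrayParam) {
            ids.add(Long.valueOf(n.longValue())); // widen Integer (or Double) safely to long
        }
        return findByIds(ids); // hypothetical DAO call, stands in for the real lookup
    }
}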

    Read the article

  • Using DisplayTag library, I want to have the currently selected row have a unique custom class using

    - by Mary
    I have been trying to figure out how to highlight the selected row in a table. In my JSP I have a scriptlet that can get access to the id of the row the DisplayTag library is creating. I want to compare it to the id of the current row selected by the user, ${currentNoteId}. Right now, if the row id = 849 (hardcoded), the class "currentClass" is added to just that row of the table. I need to replace the 849 with ${currentNoteId} and I don't know how to do it. I am using Java and Spring MVC. The JSP:

...
<%
request.setAttribute("dyndecorator", new org.displaytag.decorator.TableDecorator() {
    public String addRowClass() {
        edu.ilstu.ais.advisorApps.business.Note row =
            (edu.ilstu.ais.advisorApps.business.Note) getCurrentRowObject();
        String rowId = row.getId();
        if (rowId.equals("849")) {
            return "currentClass";
        }
        return null;
    }
});
%>
<c:set var="currentNoteId" value="${studentNotes.currentNote.id}"/>
...
<display:table id="noteTable" name="${ studentNotes.studentList }" pagesize="20"
    requestURI="notesView.form.html" decorator="dyndecorator">
    <display:column title="Select" class="yui-button-match" href="/notesView.form.html"
        paramId="note.id" paramProperty="id">
        <input type="button" class="yui-button-match2" name="select" value="Select"/>
    </display:column>
    <display:column property="userName" title="Created By" sortable="true"/>
    <display:column property="createDate" title="Created On" sortable="true" format="{0,date,MM/dd/yy hh:mm:ss a}"/>
    <display:column property="detail" title="Detail" sortable="true"/>
</display:table>
...

This could also get done using JavaScript, and that might be best, but the documentation suggested this approach so I thought I would try it. I cannot find an example anywhere using addRowClass() unless the comparison is to a field already in the row (a dollar amount is used in the documentation example) or hardcoded in, like the "849" id. Thanks for any help you can provide.
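One possible direction, sketched under the assumption that the controller (or an earlier c:set with request scope) has placed the current note id in the request before the scriptlet runs: capture it in a final local variable, which the anonymous decorator can then read instead of the hardcoded 849.

<%
final String currentNoteId = String.valueOf(request.getAttribute("currentNoteId")); // assumes the attribute was set upstream
request.setAttribute("dyndecorator", new org.displaytag.decorator.TableDecorator() {
    public String addRowClass() {
        edu.ilstu.ais.advisorApps.business.Note row =
            (edu.ilstu.ais.advisorApps.business.Note) getCurrentRowObject();
        // an anonymous inner class can read final locals, so no hardcoded id is needed
        return currentNoteId.equals(row.getId()) ? "currentClass" : null;
    }
});
%>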

    Read the article

  • Why would Java classloading fail on Linux, but succeed on Windows?

    - by arnsholt
    I've got a Java web application (using Spring), deployed with Jetty. If I try to run it on a Windows machine everything works as expected, but if I try to run the same code on my Linux machine, it fails like this:

[normal startup output]
11:16:39.657 INFO [main] org.mortbay.jetty.servlet.ServletHandler$Context.log(ServletHandler.java:1145) 16 Set web app root system property: 'webapp.root' = [/path/to/working/dir]
java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.mortbay.start.Main.invokeMain(Main.java:151)
    at org.mortbay.start.Main.start(Main.java:476)
    at org.mortbay.start.Main.main(Main.java:94)
Caused by: java.lang.ExceptionInInitializerError
    at org.springframework.web.util.Log4jWebConfigurer.initLogging(Log4jWebConfigurer.java:129)
    at org.springframework.web.util.Log4jConfigListener.contextInitialized(Log4jConfigListener.java:51)
    at org.mortbay.jetty.servlet.WebApplicationContext.doStart(WebApplicationContext.java:495)
    at org.mortbay.util.Container.start(Container.java:72)
    at org.mortbay.http.HttpServer.doStart(HttpServer.java:708)
    at org.mortbay.util.Container.start(Container.java:72)
    at org.mortbay.jetty.Server.main(Server.java:460)
    ... 7 more
Caused by: org.apache.commons.logging.LogConfigurationException: org.apache.commons.logging.LogConfigurationException: No suitable Log constructor [Ljava.lang.Class;@15311bd for org.apache.commons.logging.impl.Log4JLogger (Caused by java.lang.NoClassDefFoundError: org/apache/log4j/Category) (Caused by org.apache.commons.logging.LogConfigurationException: No suitable Log constructor [Ljava.lang.Class;@15311bd for org.apache.commons.logging.impl.Log4JLogger (Caused by java.lang.NoClassDefFoundError: org/apache/log4j/Category))
    at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:543)
    at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:235)
    at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:209)
    at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:351)
    at org.springframework.util.SystemPropertyUtils.(SystemPropertyUtils.java:42)
    ... 14 more
Caused by: org.apache.commons.logging.LogConfigurationException: No suitable Log constructor [Ljava.lang.Class;@15311bd for org.apache.commons.logging.impl.Log4JLogger (Caused by java.lang.NoClassDefFoundError: org/apache/log4j/Category)
    at org.apache.commons.logging.impl.LogFactoryImpl.getLogConstructor(LogFactoryImpl.java:413)
    at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:529)
    ... 18 more
Caused by: java.lang.NoClassDefFoundError: org/apache/log4j/Category
    at java.lang.Class.getDeclaredConstructors0(Native Method)
    at java.lang.Class.privateGetDeclaredConstructors(Class.java:2389)
    at java.lang.Class.getConstructor0(Class.java:2699)
    at java.lang.Class.getConstructor(Class.java:1657)
    at org.apache.commons.logging.impl.LogFactoryImpl.getLogConstructor(LogFactoryImpl.java:410)
    ... 19 more
Caused by: java.lang.ClassNotFoundException: org.apache.log4j.Category
    at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
    at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
    ... 24 more
[shutdown output]

I've run the app with java -verbose:class, and according to that output, org.apache.log4j.Category is loaded from the log4j JAR in my /WEB-INF/lib, just before the first exception is thrown. Now, the Java versions on the two machines are slightly different. Both machines have Sun's Java; the Linux machine has 1.6.0_10, while the Windows machine has 1.6.0_08, or maybe 07 or 06 (I can't remember the exact number right now and don't have the machine at hand). But even though the minor versions of the two Javas are slightly different, the code shouldn't break like this. Does anyone understand what's wrong here?
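A small diagnostic sketch that complements the -verbose:class output: ask the runtime directly which jar supplies the class. Call it from a servlet or context listener inside the webapp so the webapp classloader is the one consulted; note that getCodeSource() can return null for bootstrap-loaded classes.

// Throwaway check: prints the jar (or directory) a class was loaded from,
// plus the classloader that resolved it.
public static void printClassOrigin(String className) throws ClassNotFoundException {
    Class<?> clazz = Class.forName(className);
    System.out.println(clazz.getProtectionDomain().getCodeSource().getLocation());
    System.out.println(clazz.getClassLoader());
}

// e.g. printClassOrigin("org.apache.log4j.Category");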

    Read the article

  • Efficiency of Java "Double Brace Initialization"?

    - by Jim Ferrans
    In Hidden Features of Java the top answer mentions Double Brace Initialization, with a very enticing syntax:

Set<String> flavors = new HashSet<String>() {{
    add("vanilla");
    add("strawberry");
    add("chocolate");
    add("butter pecan");
}};

This idiom creates an anonymous inner class with just an instance initializer in it, which "can use any [...] methods in the containing scope".

Main question: Is this as inefficient as it sounds? Should its use be limited to one-off initializations? (And of course showing off!)

Second question: The new HashSet must be the "this" used in the instance initializer... can anyone shed light on the mechanism?

Third question: Is this idiom too obscure to use in production code?

Summary: Very, very nice answers, thanks everyone. On question (3), people felt the syntax should be clear (though I'd recommend an occasional comment, especially if your code will pass on to developers who may not be familiar with it). On question (1), the generated code should run quickly. The extra .class files do cause jar file clutter, and slow program startup slightly (thanks to coobird for measuring that). Thilo pointed out that garbage collection can be affected, and the memory cost for the extra loaded classes may be a factor in some cases. Question (2) turned out to be the most interesting to me. If I understand the answers, what's happening in DBI is that the anonymous inner class extends the class of the object being constructed by the new operator, and hence has a "this" value referencing the instance being constructed. Very neat. Overall, DBI strikes me as something of an intellectual curiosity. Coobird and others point out you can achieve the same effect with Arrays.asList, varargs methods, Google Collections, and the proposed Java 7 collection literals. Newer JVM languages like Scala, JRuby, and Groovy also offer concise notations for list construction and interoperate well with Java. Given that DBI clutters up the classpath, slows down class loading a bit, and makes the code a tad more obscure, I'd probably shy away from it. However, I plan to spring this on a friend who's just gotten his SCJP and loves good-natured jousts about Java semantics! ;-) Thanks everyone!
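For comparison, here is a sketch of the Arrays.asList alternative mentioned in the summary; it yields the same set as the DBI example, minus the anonymous subclass and the extra .class file:

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Same contents as the DBI example, but no hidden subclass of HashSet is created.
Set<String> flavors = new HashSet<String>(
        Arrays.asList("vanilla", "strawberry", "chocolate", "butter pecan"));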

    Read the article

  • How can I override the inbound message validation with CXF?

    - by user1648330
    When I input non-numeric content into an Integer-typed field, I get the fault message 'Not a number: 0.012A'. How can I step in before the JAXB unmarshalling (schema validation) and output custom error messages? I am using CXF 2.6.1, with validation configured in cxf-spring.xml as <entry key="schema-validation-enabled" value="true"></entry>.

java.lang.RuntimeException: Not a number: 0.012A
    at com.jiemai.jmservice.handlers.ValidationEventHandler.handleEvent(ValidationEventHandler.java:19)
    at org.apache.cxf.jaxb.io.DataReaderImpl$WSUIDValidationHandler.handleEvent(DataReaderImpl.java:78)
    at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallingContext.handleEvent(UnmarshallingContext.java:655)
    at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallingContext.handleError(UnmarshallingContext.java:691)
    at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallingContext.handleError(UnmarshallingContext.java:687)
    at com.sun.xml.bind.v2.runtime.unmarshaller.Loader.handleParseConversionException(Loader.java:271)
    at com.sun.xml.bind.v2.runtime.unmarshaller.LeafPropertyLoader.text(LeafPropertyLoader.java:69)
    at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallingContext.text(UnmarshallingContext.java:514)
    at com.sun.xml.bind.v2.runtime.unmarshaller.InterningXmlVisitor.text(InterningXmlVisitor.java:93)
    at com.sun.xml.bind.v2.runtime.unmarshaller.StAXStreamConnector.processText(StAXStreamConnector.java:338)
    at com.sun.xml.bind.v2.runtime.unmarshaller.StAXStreamConnector.handleEndElement(StAXStreamConnector.java:216)
    at com.sun.xml.bind.v2.runtime.unmarshaller.StAXStreamConnector.bridge(StAXStreamConnector.java:185)
    at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal0(UnmarshallerImpl.java:370)
    at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal(UnmarshallerImpl.java:349)
    at org.apache.cxf.jaxb.JAXBEncoderDecoder.doUnmarshal(JAXBEncoderDecoder.java:784)
    at org.apache.cxf.jaxb.JAXBEncoderDecoder.access$100(JAXBEncoderDecoder.java:97)
    at org.apache.cxf.jaxb.JAXBEncoderDecoder$1.run(JAXBEncoderDecoder.java:812)
    at java.security.AccessController.doPrivileged(Native Method)
    at org.apache.cxf.jaxb.JAXBEncoderDecoder.unmarshall(JAXBEncoderDecoder.java:810)
    at org.apache.cxf.jaxb.JAXBEncoderDecoder.unmarshall(JAXBEncoderDecoder.java:644)
    at org.apache.cxf.jaxb.io.DataReaderImpl.read(DataReaderImpl.java:157)
    at org.apache.cxf.interceptor.DocLiteralInInterceptor.handleMessage(DocLiteralInInterceptor.java:108)
    at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:262)
    at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:122)
    at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:211)
    at org.apache.cxf.transport.servlet.ServletController.invokeDestination(ServletController.java:213)
    at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:193)
    at org.apache.cxf.transport.servlet.CXFNonSpringServlet.invoke(CXFNonSpringServlet.java:129)
    at org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(AbstractHTTPServlet.java:187)
    at org.apache.cxf.transport.servlet.AbstractHTTPServlet.doPost(AbstractHTTPServlet.java:110)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:710)
    at org.apache.cxf.transport.servlet.AbstractHTTPServlet.service(AbstractHTTPServlet.java:166)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:263)
    at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:844)
    at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:584)
    at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
    at java.lang.Thread.run(Unknown Source)
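For context, a minimal sketch of the kind of handler shown in the stack trace above; the class name comes from the trace, while the message text and the registration route (e.g. CXF's jaxb-validation-event-handler endpoint property) are assumptions:

import javax.xml.bind.ValidationEvent;

// A custom JAXB ValidationEventHandler (sketch). Throwing here replaces the raw
// "Not a number: ..." text with a message of our own; returning false instead
// would abort unmarshalling with JAXB's default reporting.
public class ValidationEventHandler implements javax.xml.bind.ValidationEventHandler {
    public boolean handleEvent(ValidationEvent event) {
        throw new RuntimeException("Invalid field value in request: " + event.getMessage());
    }
}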

    Read the article

  • EntityManagerFactory error on WebSphere

    - by neverland
    I have a very weird problem with an application using Hibernate and Spring. I have an entity manager factory defined which uses a JNDI lookup. It looks something like this:

<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
    <property name="dataSource" ref="dataSource" />
    <property name="persistenceUnitName" value="ConfigAPPPersist" />
    <property name="jpaVendorAdapter">
        <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
            <property name="showSql" value="true" />
            <property name="generateDdl" value="false" />
            <property name="databasePlatform" value="org.hibernate.dialect.Oracle9Dialect" />
        </bean>
    </property>
</bean>

<bean id="dataSource" class="org.springframework.jdbc.datasource.WebSphereDataSourceAdapter">
    <property name="targetDataSource">
        <bean class="org.springframework.jndi.JndiObjectFactoryBean">
            <property name="jndiName" value="jdbc/pmp" />
        </bean>
    </property>
</bean>

This application runs fine in DEV. But when we move to higher environments, the team that deploys the application does it successfully at first, but after a few restarts of the application the entity manager starts giving this problem:

Caused by: javax.persistence.PersistenceException: [PersistenceUnit: ConfigAPPPersist] Unable to build EntityManagerFactory
    at org.hibernate.ejb.Ejb3Configuration.buildEntityManagerFactory(Ejb3Configuration.java:677)
    at org.hibernate.ejb.HibernatePersistence.createContainerEntityManagerFactory(HibernatePersistence.java:132)
    at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.createNativeEntityManagerFactory(LocalContainerEntityManagerFactoryBean.java:224)
    at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.afterPropertiesSet(AbstractEntityManagerFactoryBean.java:291)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1368)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1334)
    ... 32 more
Caused by: org.hibernate.MappingException: property mapping has wrong number of columns: com.***.***.jpa.marketing.entity.MarketBrands.$performasure_j2eeInfo type: object

Now you would say this is pretty obvious: the entity MarketBrands is incorrect. But it is not; it maps to the table just fine, and the same code works on DEV. Also, the JNDI lookup cannot be incorrect, since the app deploys and works fine initially but throws up this error after a restart. This is weird and not very logical, but if someone has faced this or has any idea what might be causing it, please help! The persistence.xml for the persistence unit has very little in it:

<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"
    version="1.0">
    <persistence-unit name="ConfigAPPPersist">
        <!-- commented code -->
    </persistence-unit>
</persistence>

    Read the article

  • What was "The Next Big Thing" when you were just starting out in programming?

    - by Andrew
    I'm at the beginning of my career and there are lots of things which are being touted as "The Next Big Thing". For example:

- Dependency Injection (Spring, etc.)
- MVC (Struts, ASP.NET MVC)
- ORMs (Linq to SQL, Hibernate)
- Agile Software Development

These things have probably been around for some time, but I've only just started out. And don't get me wrong, I think these things are great! So, what was "The Next Big Thing" when you were starting out? When was it? Were people sceptical of it at first? Why? Did you think it would catch on? Did it pan out and become widely accepted/used? If not, why not?

EDIT: It's been nearly a week since I first posted this question and I can safely say that I did not expect such explosive interest. I asked the question so that I could gain a perspective on what kinds of innovations in programming people thought were most important when they were starting out. At the time of writing this I have read ~95% of all answers. To answer a few questions, the "Next Big Things" I listed are ones that I am currently really excited about and that I had not really been exposed to until I started working. I'm hoping to implement some or all of these in the near future at my current workplace. To many people they are probably old news. In regards to the "is this a real question" debate, I can see that obviously hasn't been settled yet. I feel bad whenever I read a comment saying that these kinds of questions take away from the real meaning of SO. I'm not wholly convinced that they don't. On the other hand, I have seen a lot of comments saying what a great question it is. Anyway, I have chosen "The Internet!" as my answer to this question. I don't think (in my very humble opinion and, it seems, many SOers' opinions) that many things related to programming can compare. Nowadays every business and their dog has a website which can do anything from simply supplying information to purchasing goods halfway around the world to updating your blog. And of course, all these businesses need people like us. Thanks to everyone for all the great answers!

    Read the article

  • Help choosing authentication method

    - by Dima
    I need to choose an authentication method for an application installed and integrated in the customer's environment. There are two types of environments: Windows and Linux/Unix. The application is user-based, no web stuff, pure Java. The requirement is to authenticate the users of my application against a customer-provided user base. Meaning, the customer installs my app, but uses his own users to grant or deny access to it. Typical, right? I have three options to consider, and I need to pick the one which would be a) the most flexible, covering the most common modern environments, and b) the least effort while staying robust and standard.

Option (1) - Authenticate locally, managing user credentials in some local storage, e.g. a file. The customer would then add his users to my application and it would check the passwords. Simple, clumsy, but it would work. Customers would have to punch in every user they want to grant access to my app using some UI we would have to provide. Lots of work for me, a headache for the customer.

Option (2) - Use LDAP authentication. Customers would tell my app where to look for users, and I would walk their directory, resolving entries into user names and trying to bind with the found password. This is a better approach IMO, but more fragile, because I would have to walk an unknown directory structure, and who knows if this will be permitted everywhere. It would also be harder to test, since there are many LDAP implementations out there; the last thing I want is to drown in this voodoo.

Option (3) - Use plain Kerberos authentication. Customers would tell my app what realm (domain) and which KDC (key distribution center) to use. In an ideal world these two parameters would be all I need to set, while customers could use their own administration tools to configure the domain and KDC. My application would simply delegate user credentials to this third party (using JAAS or Spring Security) and consider it a success when the third party is happy with them.

I personally prefer #3, but am not sure what surprises I might face. Would this cover Windows and *nix systems entirely? Is there another option to consider?
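A rough sketch of what option 3 could look like from the Java side; the JAAS entry name "MyAppKrb5" and the surrounding configuration are assumptions, and the realm and KDC would come from the customer as described:

import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;

public class KerberosAuthenticator {
    // "MyAppKrb5" is an assumed JAAS config entry backed by
    // com.sun.security.auth.module.Krb5LoginModule; the customer-supplied realm and
    // KDC go into the java.security.krb5.realm / java.security.krb5.kdc properties.
    public boolean authenticate(final String user, final char[] password) {
        try {
            LoginContext lc = new LoginContext("MyAppKrb5", new CallbackHandler() {
                public void handle(Callback[] callbacks) {
                    for (Callback cb : callbacks) {
                        if (cb instanceof NameCallback) {
                            ((NameCallback) cb).setName(user);
                        } else if (cb instanceof PasswordCallback) {
                            ((PasswordCallback) cb).setPassword(password);
                        }
                    }
                }
            });
            lc.login(); // success means the KDC accepted the credentials
            return true;
        } catch (LoginException le) {
            return false; // bad credentials, unreachable KDC, etc.
        }
    }
}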

    Read the article

  • HttpPost request unsuccessful

    - by The Thom
    I have written a web service and am now writing a tester to perform integration testing from the outside. I am writing my tester using Apache HttpClient 4.3. Based on the code here: http://hc.apache.org/httpcomponents-client-4.3.x/quickstart.html and here: http://hc.apache.org/httpcomponents-client-4.3.x/tutorial/html/fundamentals.html#d5e186 I have written the following code:

Map<String, String> parms = new HashMap<>();
parms.put(AirController.KEY_VALUE, json);
postUrl(SERVLET, parms);

...

protected String postUrl(String servletName, Map<String, String> parms) throws AirException {
    String url = rootUrl + servletName;
    HttpPost post = new HttpPost(url);
    List<NameValuePair> nvps = new ArrayList<NameValuePair>();
    for (Map.Entry<String, String> entry : parms.entrySet()) {
        BasicNameValuePair parm = new BasicNameValuePair(entry.getKey(), entry.getValue());
        nvps.add(parm);
    }
    try {
        post.setEntity(new UrlEncodedFormEntity(nvps));
    } catch (UnsupportedEncodingException use) {
        String msg = "Invalid parameters:" + parms;
        throw new AirException(msg, use);
    }
    CloseableHttpResponse response;
    try {
        response = httpclient.execute(post);
    } catch (IOException ioe) {
        throw new AirException(ioe);
    }
    String result;
    if (HttpStatus.SC_OK == response.getStatusLine().getStatusCode()) {
        result = processResponse(response);
    } else {
        String msg = MessageFormat.format("Invalid status code {0} received from query {1}.",
                new Object[] { response.getStatusLine().getStatusCode(), url });
        throw new AirException(msg);
    }
    return result;
}

This code successfully reaches my servlet. In my servlet, I have (using Spring's AbstractController):

protected ModelAndView post(HttpServletRequest request, HttpServletResponse response) {
    String json = String.valueOf(request.getParameter(KEY_VALUE));
    if (json.equals("null")) {
        log.info("Received null value.");
        response.setStatus(406);
        return null;
    }

And this code always falls into the null-parameter branch and returns a 406. I'm sure I'm missing something simple, but can't see what it is.
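One quick way to narrow this down, as a sketch: dump everything the container actually parsed from the POST body at the top of the servlet method, to confirm whether KEY_VALUE arrives under the expected name (log is the servlet's existing logger):

// Temporary diagnostic: list every parameter the container parsed from the request.
for (Object nameObj : request.getParameterMap().keySet()) {
    String name = (String) nameObj;
    log.info("param '" + name + "' = '" + request.getParameter(name) + "'");
}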

    Read the article
