Search Results

Search found 8830 results on 354 pages for 'enterprise deployment'.


  • Ray Wang: Why engagement matters in an era of customer experience

    - by Michael Snow
    Why engagement matters in an era of customer experience. R "Ray" Wang, Principal Analyst & CEO, Constellation Research. Mobile enterprise, social business, cloud computing, advanced analytics, and unified communications are converging. Armed with the art of the possible, innovators are seeking to apply disruptive consumer technologies to enterprise class uses — call it the consumerization of IT in the enterprise. The likely results include new methods of furthering relationships, crafting longer term engagement, and creating transformational business models. It's part of a shift from transactional systems to engagement systems. These transactional systems have been around since the 1950s. You know them as ERP, finance and accounting systems, or even payroll. These systems are designed for massive computational scale; users find them rigid and techie. Meanwhile, we've moved to new engagement systems such as Facebook and Twitter in the consumer world. The rich usability and intuitive design reflect how users want to work — and now users are coming to expect the same paradigms and designs in their enterprise world. ~~~ Ray is a prolific contributor to his own blog as well as others. For a sneak peek at Ray's thoughts on engagement, take a look at this quick teaser on Avoiding Social Media Fatigue Through Engagement. Or perhaps you might agree with Ray on Dealing With The Real Problem In Social Business Adoption – The People! Check out Ray's post on the Harvard Business Review Blog to get his perspective on "How to Engage Your Customers and Employees." For a daily dose of Ray, follow him on Twitter: @rwang0. But MOST IMPORTANTLY.... don't miss the opportunity to join leading industry analyst R "Ray" Wang of Constellation Research in the latest webcast of the Oracle Social Business Thought Leaders Series as he explains how to apply the 9 C's of Engagement for both your customers and employees.

    Read the article

  • Get Your Enterprise Working With Oracle On Track Communication 1.0

    - by Josh Lannin
    The On Track Development team is very pleased to announce that today On Track is available for our customers to download and evaluate.  To learn more about what On Track does, start with our whitepaper and datasheet.  If you are a developer, take a look at our documentation and samples posted to our OTN page. For this first blog post, I'll be speaking to several notable points about our product. Graceful Escalation via Conversations: On Track addresses the "Collaboration Problem" through a single guiding principle – graceful escalation – within the construct of a Conversation. In On Track, collaboration is based on a context (called a "Conversation") that gracefully escalates in form, structure, and content, as dictated by the particular needs of a given collaboration.  Within that context, On Track provides a rich set of tools to choose from.  These tools provide for communication, coordination, content management, organization, decision making, and analysis -- all essential aspects of collaboration, but not all of them are essential all of the time.  Every collaborative interaction will evolve differently.  Some will evolve to represent work spreading over the course of years and involving a large, distributed team, while others may involve few people and not evolve at all.  Regardless, all collaborative contexts are built from the same parts, utilize the same concepts, and start the same way.  The principle of graceful escalation is that you only use the tools and structure you need, so you only incur the complexity you need. Purposeful Collaboration: Through application integration, On Track Conversations bring enterprise application users the communication and collaboration capabilities required to complete business processes.  By association with specific processes or business objects, conversations extend the possible interactions, broaden participation to internal or external non-application users, and provide a sophisticated interaction experience, all the while enhancing the data set within the owning application.  Purposeful collaboration not only needs to happen in the context of applications; it must also support a full range of real-time and long-running interactions to provide the greatest value. Multi-Client, Multimodal: This On Track 1.0 product release includes same-day availability of multiple clients, including iPhone and iPad applications (now available on the App Store), a fully capable and accessible Outlook add-in, and our browser web client.  With each client we have sought to leverage the strengths of each unique device: our iPhone client supports picture and voice posts, the iPad is optimized for meeting room situations and document viewing, and our Outlook add-in allows you to take emails in context and bring them into On Track.  In addition to supporting a diverse array of clients, On Track provides unified multimodal support, starting with basic messages and moving through to integrated documents with live annotations, snapshots, application sharing, and voice. Next Generation Web Architecture: We believe On Track will help move the bar higher for what users can expect from all web applications, most notably ones that involve real-time activity.  On Track is built from the ground up with an innovative, real-time architecture that leverages the extensive push capabilities of our server.
Whether you are receiving a new message, viewing where crowds of people are collaborating, or doing live annotation on a document with a set of people, that information comes to you immediately without refreshes or moving back and forth between pages.  We've leveraged this core architecture across the product experience and raised the user experience bar for this type of application.  These capabilities are also based on open standards and protocols, and are fully extensible by anyone, enabling sophisticated integrations to be created with a wide variety of both legacy and next-generation applications. Agile Product Development: As a product team we operate using continuous feedback and modified agile development methodologies.  We have thousands of active internal Oracle users who have helped pilot our product for critical business functions, and the On Track product development team uses our product as our primary vehicle for all our collaboration.  Additionally, we have been working with early-access customers who are adopting our technology and providing us valuable feedback, which our process has rapidly turned into improvements to our software.  On Track agility extends to our server as well, which is built to scale, and is very simple to install and configure. We are pleased to make this product announcement and encourage you to join us on Facebook or follow us on Twitter, as well as to check back here for the latest product information.

    Read the article

  • How can I invoke a webservice from another webservice

    - by vinny
    I have two webservices A and B. A needs to invoke one of the webMethods in B. How can I achieve this? I am using maven's wsimport plugin to build A. This is to generate the necessary stubs for B and include them as part of the Webservice A. However, when I try to Invoke the webmethod o f B, I get an Exception. Can anyone please tell me what is going on? Below Is the code and the exception trace: Code: BBeanService bbs = new BBeanService(); BBean bb = bbs.getBBeanPort(); bb.invokeWebService(); // this is throwing exception This is the exception trace: java.lang.NullPointerException at com.sun.xml.ws.fault.SOAP11Fault.getProtocolException(SOAP11Fault.java:188) at com.sun.xml.ws.fault.SOAPFaultBuilder.createException(SOAPFaultBuilder.java:116) at com.sun.xml.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:119) at com.sun.xml.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:89) at com.sun.xml.ws.client.sei.SEIStub.invoke(SEIStub.java:118) at $Proxy175.getCase(Unknown Source) at com.kebok.ais.billing.server.ejb.impl.ChargeManagerBean.generateBillDetails(ChargeManagerBean.java:144) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.sun.enterprise.security.application.EJBSecurityManager.runMethod(EJBSecurityManager.java:1011) at com.sun.enterprise.security.SecurityUtil.invoke(SecurityUtil.java:175) at com.sun.ejb.containers.BaseContainer.invokeTargetBeanMethod(BaseContainer.java:2920) at com.sun.ejb.containers.BaseContainer.intercept(BaseContainer.java:4011) at com.sun.ejb.containers.WebServiceInvocationHandler.invoke(WebServiceInvocationHandler.java:190) at $Proxy173.generateBillDetails(Unknown Source) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.sun.enterprise.webservice.InvokerImpl.invoke(InvokerImpl.java:78) at com.sun.enterprise.webservice.EjbInvokerImpl.invoke(EjbInvokerImpl.java:82) at com.sun.xml.ws.server.InvokerTube$2.invoke(InvokerTube.java:146) at com.sun.xml.ws.server.sei.EndpointMethodHandler.invoke(EndpointMethodHandler.java:257) at com.sun.xml.ws.server.sei.SEIInvokerTube.processRequest(SEIInvokerTube.java:93) at com.sun.xml.ws.api.pipe.Fiber.__doRun(Fiber.java:595) at com.sun.xml.ws.api.pipe.Fiber.doRun(Fiber.java:554) at com.sun.xml.ws.api.pipe.Fiber.doRun(Fiber.java:539) at com.sun.xml.ws.api.pipe.Fiber.runSync(Fiber.java:436) at com.sun.xml.ws.api.pipe.helper.AbstractTubeImpl.process(AbstractTubeImpl.java:106) at com.sun.enterprise.webservice.MonitoringPipe.process(MonitoringPipe.java:147) at com.sun.xml.ws.api.pipe.helper.PipeAdapter.processRequest(PipeAdapter.java:115) at com.sun.xml.ws.api.pipe.Fiber._doRun(Fiber.java:595) at com.sun.xml.ws.api.pipe.Fiber.doRun(Fiber.java:554) at com.sun.xml.ws.api.pipe.Fiber.doRun(Fiber.java:539) at com.sun.xml.ws.api.pipe.Fiber.runSync(Fiber.java:436) at com.sun.xml.ws.api.pipe.helper.AbstractTubeImpl.process(AbstractTubeImpl.java:106) at com.sun.xml.ws.tx.service.TxServerPipe.process(TxServerPipe.java:317) at com.sun.enterprise.webservice.CommonServerSecurityPipe.processRequest(CommonServerSecurityPipe.java:222) at 
com.sun.enterprise.webservice.CommonServerSecurityPipe.process(CommonServerSecurityPipe.java:133) at com.sun.xml.ws.api.pipe.helper.PipeAdapter.processRequest(PipeAdapter.java:115) at com.sun.xml.ws.api.pipe.Fiber._doRun(Fiber.java:595) at com.sun.xml.ws.api.pipe.Fiber.doRun(Fiber.java:554) at com.sun.xml.ws.api.pipe.Fiber.doRun(Fiber.java:539) at com.sun.xml.ws.api.pipe.Fiber.runSync(Fiber.java:436) at com.sun.xml.ws.server.WSEndpointImpl$2.process(WSEndpointImpl.java:243) at com.sun.xml.ws.transport.http.HttpAdapter$HttpToolkit.handle(HttpAdapter.java:444) at com.sun.xml.ws.transport.http.HttpAdapter.handle(HttpAdapter.java:244) at com.sun.xml.ws.transport.http.servlet.ServletAdapter.handle(ServletAdapter.java:135) at com.sun.enterprise.webservice.Ejb3MessageDispatcher.handlePost(Ejb3MessageDispatcher.java:113) at com.sun.enterprise.webservice.Ejb3MessageDispatcher.invoke(Ejb3MessageDispatcher.java:87) at com.sun.enterprise.webservice.EjbWebServiceServlet.dispatchToEjbEndpoint(EjbWebServiceServlet.java:228) at com.sun.enterprise.webservice.EjbWebServiceServlet.service(EjbWebServiceServlet.java:157) at javax.servlet.http.HttpServlet.service(HttpServlet.java:847) at com.sun.enterprise.web.AdHocContextValve.invoke(AdHocContextValve.java:114) at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:648) at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:593) at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:587) at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:87) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:222) at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:648) at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:593) at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:587) at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:1096) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:166) at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:648) at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:593) at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:587) at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:1096) at org.apache.coyote.tomcat5.CoyoteAdapter.service(CoyoteAdapter.java:288) at com.sun.enterprise.web.connector.grizzly.DefaultProcessorTask.invokeAdapter(DefaultProcessorTask.java:647) at com.sun.enterprise.web.connector.grizzly.DefaultProcessorTask.doProcess(DefaultProcessorTask.java:579) at com.sun.enterprise.web.connector.grizzly.DefaultProcessorTask.process(DefaultProcessorTask.java:831) at com.sun.enterprise.web.connector.grizzly.DefaultReadTask.executeProcessorTask(DefaultReadTask.java:341) at com.sun.enterprise.web.connector.grizzly.DefaultReadTask.doTask(DefaultReadTask.java:263) at com.sun.enterprise.web.connector.grizzly.DefaultReadTask.doTask(DefaultReadTask.java:214) at com.sun.enterprise.web.portunif.PortUnificationPipeline$PUTask.doTask(PortUnificationPipeline.java:380) at com.sun.enterprise.web.connector.grizzly.TaskBase.run(TaskBase.java:265) at com.sun.enterprise.web.connector.grizzly.ssl.SSLWorkerThread.run(SSLWorkerThread.java:106) Caused by: javax.xml.ws.WebServiceException: java.lang.NullPointerException at com.sun.enterprise.security.jmac.config.PipeHelper.makeFaultResponse(PipeHelper.java:328) at 
com.sun.enterprise.security.jmac.config.PipeHelper.getFaultResponse(PipeHelper.java:366) at com.sun.enterprise.webservice.CommonServerSecurityPipe.processRequest(CommonServerSecurityPipe.java:227) at com.sun.enterprise.webservice.CommonServerSecurityPipe.process(CommonServerSecurityPipe.java:133) at com.sun.xml.ws.api.pipe.helper.PipeAdapter.processRequest(PipeAdapter.java:115) at com.sun.xml.ws.api.pipe.Fiber._doRun(Fiber.java:595) at com.sun.xml.ws.api.pipe.Fiber._doRun(Fiber.java:554) at com.sun.xml.ws.api.pipe.Fiber.doRun(Fiber.java:539) at com.sun.xml.ws.api.pipe.Fiber.runSync(Fiber.java:436) at com.sun.xml.ws.server.WSEndpointImpl$2.process(WSEndpointImpl.java:243) at com.sun.xml.ws.transport.http.HttpAdapter$HttpToolkit.handle(HttpAdapter.java:444) at com.sun.xml.ws.transport.http.HttpAdapter.handle(HttpAdapter.java:244) at com.sun.xml.ws.transport.http.servlet.ServletAdapter.handle(ServletAdapter.java:135) at com.sun.enterprise.webservice.Ejb3MessageDispatcher.handlePost(Ejb3MessageDispatcher.java:113) at com.sun.enterprise.webservice.Ejb3MessageDispatcher.invoke(Ejb3MessageDispatcher.java:87) at com.sun.enterprise.webservice.EjbWebServiceServlet.dispatchToEjbEndpoint(EjbWebServiceServlet.java:228) at com.sun.enterprise.webservice.EjbWebServiceServlet.service(EjbWebServiceServlet.java:157) at javax.servlet.http.HttpServlet.service(HttpServlet.java:847) at com.sun.enterprise.web.AdHocContextValve.invoke(AdHocContextValve.java:114) at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:648) at org.apache.catalina.
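    A minimal sketch of the calling pattern in question is shown below, using the wsimport-generated artifacts named above (BBeanService, BBean, getBBeanPort, invokeWebService). The endpoint URL, the service QName namespace, and the fault handling are illustrative assumptions, not taken from the actual project. The NullPointerException thrown inside SOAPFaultBuilder while the proxy was invoked suggests the call reached B but came back as a SOAP fault the client could not map, so catching SOAPFaultException and logging the fault detail is one way to surface the real error.

        // Hypothetical JAX-WS client sketch: BBeanService and BBean are the classes
        // generated by the Maven wsimport plugin from B's WSDL, as described above.
        import java.net.URL;
        import javax.xml.namespace.QName;
        import javax.xml.ws.BindingProvider;
        import javax.xml.ws.soap.SOAPFaultException;

        public class CallServiceB {
            public static void main(String[] args) throws Exception {
                // Point the generated service at B's deployed WSDL
                // (URL and namespace below are assumed example values).
                URL wsdl = new URL("http://localhost:8080/BService/BBean?wsdl");
                QName serviceName = new QName("http://example.com/b/", "BBeanService");
                BBeanService service = new BBeanService(wsdl, serviceName);
                BBean port = service.getBBeanPort();

                // Optionally override the endpoint address used by the proxy.
                ((BindingProvider) port).getRequestContext().put(
                        BindingProvider.ENDPOINT_ADDRESS_PROPERTY,
                        "http://localhost:8080/BService/BBean");

                try {
                    port.invokeWebService();
                } catch (SOAPFaultException e) {
                    // The server answered with a SOAP fault; log its detail instead of
                    // letting the raw stack trace hide the underlying cause.
                    System.err.println("Fault from B: " + e.getFault().getFaultString());
                }
            }
        }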

    Read the article

  • Web Application Project Deployment VS2010 - Precompile Views

    - by Malcolm Frexner
    Using Visual Studio 2010 Ultimate, I created an ASP.NET MVC 2.0 Web Application. I read http://msdn.microsoft.com/query/dev10.query?appId=Dev10IDEF1&l=EN-US&k=k%28WEBAPPLICATIONPROJECTS.PACKAGEPUBLISHOVERVIEW%29;k%28TargetFrameworkMoniker-%22.NETFRAMEWORK%2cVERSION%3dV4.0%22%29&rd=true. It's about the new features for Web Application Deployment. I don't see an option to precompile views. I also don't see other options that were available in the previous version of Web Deployment Projects, i.e. whether the web is compiled into a single assembly or into one assembly per page. I had the impression that Web Application Project deployment is the successor of the Web Deployment Project, but maybe I am wrong about that. How should I precompile views now?

    Read the article

  • Avoiding deployment when debugging an ASP.NET MVC application

    - by John Kaster
    Once an ASP.NET MVC project has a web deployment project associated with it, there doesn't seem to be a way to avoid deploying when you build the MVC project. I still want to be able to debug my MVC project locally and avoid deploying new versions of it. How do I configure the web deployment project to skip the deployment step when I'm trying to debug?

    Read the article

  • Why do I have a dependency to gwt?

    - by stacker
    In a seam-gen generated application the following exception is thrown during deployment: ERROR [LoadMgr3] Not resheduling failed loading task, loadTask=org.jboss.mx.loading.ClassLoadingTask@8c5c9c{classname: org.jboss.seam.remoting.gwt.GWT14Service, requestingThread: Thread[ScannerThread,5,jboss], requestingClassLoader: org.jboss.mx.loading.UnifiedClassLoader3@3e4532{ url=f ile:/C:/dev/jboss-4.3.0.GA/server/default/deploy/myapp.ear/ ,addedOrder=50}, loadedClass: nullnull, loadOrder: 2147483647, loadException: java.lang.NoClassDefFoundError: com/google/gwt/user/server/rpc/SerializationPolicyProvider, threadTaskCount: 0, state: 1, #CCE: 1} java.lang.NoClassDefFoundError: com/google/gwt/user/server/rpc/SerializationPolicyProvider at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClass(ClassLoader.java:621) ... at org.jboss.deployment.scanner.URLDeploymentScanner.deploy(URLDeploymentScanner.java:421) at org.jboss.deployment.scanner.URLDeploymentScanner.scan(URLDeploymentScanner.java:610) at org.jboss.deployment.scanner.AbstractDeploymentScanner$ScannerThread.doScan(AbstractDeploymentScanner.java:263) at org.jboss.deployment.scanner.AbstractDeploymentScanner$ScannerThread.loop(AbstractDeploymentScanner.java:274) at org.jboss.deployment.scanner.AbstractDeploymentScanner$ScannerThread.run(AbstractDeploymentScanner.java:225) The problem (and workaround) is described here. Since I don't use gwt, my question is why do I have this dependency when I'm not using gwt at all? Seam version 2.1.2

    Read the article

  • Documentation on System.Deployment

    - by krisnam
    I have a Windows application which is published using ClickOnce deployment (done through the VS IDE). I want to develop another small (web) application to run this deployment process without going through the VS IDE. I heard about the System.Deployment and Microsoft.Build.BuildEngine namespaces, but I couldn't find good documentation to solve my problem. If you have any, please send me references.

    Read the article

  • Talend Enterprise Data Integration overperforms on Oracle SPARC T4

    - by Amir Javanshir
    The SPARC T microprocessor, released in 2005 by Sun Microsystems, and now continued at Oracle, has a good track record in parallel execution and multi-threaded performance. However, it was less suited for pure single-threaded workloads. The new SPARC T4 processor is now filling that gap by offering a 5x better single-thread performance over previous generations. Following our long-term relationship with Talend, a fast-growing ISV positioned by Gartner in the "Visionaries" quadrant of the "Magic Quadrant for Data Integration Tools", we decided to test some of their integration components with the T4 chip, more precisely on a T4-1 system, in order to verify first-hand whether this new processor stands up to its promises. Several tests were performed, mainly focused on: single-thread performance of the new SPARC T4 processor compared to an older SPARC T2+ processor, and overall throughput of the SPARC T4-1 server using multiple threads. The tests consisted of reading large amounts of data (tens of gigabytes), processing it, and writing it back to a file or an Oracle 11gR2 database table. They are CPU-, memory- and IO-bound tests. Given the main focus of this project (CPU performance), bottlenecks were removed as much as possible on the memory and IO subsystems. When possible, the data to process was put into the ZFS filesystem cache, for instance. Also, two external storage devices were directly attached to the servers under test, each one divided into two ZFS pools for read and write operations. Multi-thread: Testing throughput on the Oracle T4-1. The tests were performed with different numbers of simultaneous threads (1, 2, 4, 8, 12, 16, 32, 48 and 64) and using different storage devices: Flash, Fibre Channel storage, two striped internal disks and one single internal disk. All storage devices used ZFS for filesystem and volume management. Each thread read a dedicated 1 GB file containing 12.5M lines with the following structure:
    customerID;FirstName;LastName;StreetAddress;City;State;Zip;Cust_Status;Since_DT;Status_DT
    1;Ronald;Reagan;South Highway;Santa Fe;Montana;98756;A;04-06-2006;09-08-2008
    2;Theodore;Roosevelt;Timberlane Drive;Columbus;Louisiana;75677;A;10-05-2009;27-05-2008
    3;Andrew;Madison;S Rustle St;Santa Fe;Arkansas;75677;A;29-04-2005;09-02-2008
    4;Dwight;Adams;South Roosevelt Drive;Baton Rouge;Vermont;75677;A;15-02-2004;26-01-2007
    […]
    The following graphs present the results of our tests: unsurprisingly, up to 16 threads all files fit in the ZFS cache (a.k.a. L2ARC); once the cache is hot there is no performance difference depending on the underlying storage. From 16 threads upwards, however, it is clear that IO becomes a bottleneck; having a good IO subsystem is thus key. Single-disk performance collapses whereas the Sun F5100 and ST6180 arrays allow the T4-1 to scale quite seamlessly. From 32 to 64 threads, the performance is almost constant with just a slow decline. For the database load tests, only the best IO configuration (using external storage devices) was used, hosting the Oracle tablespaces and redo log files. Using the Sun Storage F5100 array allows the T4-1 server to scale up to 48 parallel JVM processes before saturating the CPU. The final result is a staggering 646K lines per second inserted into an Oracle table using 48 parallel threads. Single-thread: Testing the single-thread performance. Seven different tests were performed on both servers.
Given that only one thread, and thus one file, was read, no IO bottleneck was involved, all data being served from the ZFS cache. Read File -> Filter -> Write File: Read file, filter data, write the filtered data in a new file. The filter is set on the "Status" column: only lines with status set to "A" are selected. This limits each output file to about 500 MB. Read File -> Load Database Table: Read file, insert into a single Oracle table. Average: Read file, compute the average of a numeric column, write the result in a new file. Division & Square Root: Read file, perform a division and square root on a numeric column, write the result data in a new file. Oracle DB Dump: Dump the content of an Oracle table (12.5M rows) into a CSV file. Transform: Read file, transform, write the result data in a new file. The transformations applied are: set the address column to upper case and add an extra column at the end, which is the concatenation of two columns. Sort: Read file, sort a numeric and an alphanumeric column, write the result data in a new file. The following table and graph present the final results of the tests. Throughput unit is thousand lines per second processed (K lines/second). Improvement is the % of improvement between the T5140 and the T4-1.
    Test                     T4-1 (Time s.)   T5140 (Time s.)   Improvement   T4-1 (Throughput)   T5140 (Throughput)
    Read/Filter/Write        125              806               645%          100                 16
    Read/Load Database       195              1111              570%          64                  11
    Average                  96               557               580%          130                 22
    Division & Square Root   161              1054              655%          78                  12
    Oracle DB Dump           164              945               576%          76                  13
    Transform                159              1124              707%          79                  11
    Sort                     251              1336              532%          50                  9
The improvement in single-thread performance is quite dramatic: depending on the test, the T4 is between 5.4 and 7 times faster than the T2+. It seems clear that the SPARC T4 processor has gone a long way toward filling the gap in single-thread performance, without sacrificing multi-threaded capability, as it still shows very impressive scaling on heavy-duty multi-threaded jobs. Finally, as always at Oracle ISV Engineering, we are happy to help our ISV partners test their own applications on our platforms, so don't hesitate to contact us and let's see what the SPARC T4-based systems can do for your application! "As described in this benchmark, Talend Enterprise Data Integration has overperformed on T4. I was generally happy to see that the T4 gave scaling opportunities for many scenarios like complex aggregations. Row by row insertion in Oracle DB is faster with more than 650,000 rows per second without using any bulk Oracle capabilities!" Cedric Carbone, Talend CTO.
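    For readers who want to try a comparable workload outside of Talend, the following is a rough plain-Java sketch of the "Read File -> Filter -> Write File" single-thread test described above: keep only rows whose Cust_Status column is "A" and report throughput in K lines/second. The file names and default charset are assumptions; this illustrates the workload, it is not the code used in the benchmark.

        // Rough sketch of the read -> filter -> write test (not Talend's implementation).
        import java.io.BufferedReader;
        import java.io.BufferedWriter;
        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;

        public class ReadFilterWrite {
            public static void main(String[] args) throws IOException {
                Path in = Paths.get("customers.csv");         // hypothetical input: 12.5M ';'-separated lines, ~1 GB
                Path out = Paths.get("customers_active.csv"); // hypothetical output path

                long kept = 0;
                long total = 0;
                long start = System.nanoTime();

                try (BufferedReader reader = Files.newBufferedReader(in);
                     BufferedWriter writer = Files.newBufferedWriter(out)) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        total++;
                        String[] cols = line.split(";", -1);
                        // Cust_Status is the 8th field (index 7) in the sample record layout above.
                        if (cols.length > 7 && "A".equals(cols[7])) {
                            writer.write(line);
                            writer.newLine();
                            kept++;
                        }
                    }
                }

                double seconds = (System.nanoTime() - start) / 1e9;
                System.out.printf("Kept %d of %d rows in %.1f s (%.0f K lines/s)%n",
                        kept, total, seconds, total / seconds / 1000.0);
            }
        }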

    Read the article

  • Oracle WebCenter at the Enterprise 2.0 Conference

    - by Brian Dirking
    We had a great week at the E20 Conference, presenting in four sessions – Andy MacMillan gave a session titled Today's Successful Enterprises are Social Enterprises and was on a panel that Tony Byrne moderated; Christian Finn spoke on a Unified Communications panel, Unified Communications + Social Computing = Best of Both Worlds?; and Mark Bennett spoke on a panel on The Evolution of Talent Management. The key areas of focus this year were sentiment analysis, adoption and community building, the benefits of failure, and social's role in process applications. Sentiment analysis. This was focused not on external audiences but more on employee sentiment. Tim Young showed his internal "NikoNiko" project, where employees use smilies to report their current mood. The result was a dashboard that showed the company mood by department. Since the goal is to improve productivity, people can see which departments are running into issues and try to address them. A company might otherwise wait until the end-of-quarter financials to find out that there was a problem and product didn't ship. This is a way to identify issues immediately. Tim is great – he had the crowd laughing as soon as he hit the stage, with his proposed hashtag for his session: by making it 138 characters long, people couldn't say much behind his back. And as I tweeted during his session, I loved his comment that complexity diffuses energy - it sounds like something Sun Tzu would say. Another example of employee sentiment analysis was CubeVibe. Founder and CEO Aaron Aycock, in his 3-minute pitch-or-die session, talked about how engaged employees perform better. It was too bad he got gonged, he was just picking up speed, but CubeVibe did win the vote – congratulations to them. Internal adoption, community building, and involvement. On this topic I spoke to Terri Griffith, and she said there is some good work going on at University of Indiana regarding this, and hinted that she might be blogging about it in the near future. This area holds lots of interest for me. Amongst our customers, CPAC stands out as an organization that has successfully built a community. So, I wonder - what are the building blocks? A strong leader? A common or unifying purpose? A certain level of engagement? I imagine someone has created an equation that says "for a community to grow at 30% per month, there must be an engagement level x to the square root of y, where x equals current community size, and y equals the expected growth rate, and the result is how many engagements the average user must contribute to maintain that growth." Does anyone have a framework like that? The net result of everyone's experience is that there is nothing to do but start early and fail often. Kevin Jones made this the focus of his keynote. He talked about the types of failure and what they mean. And he showed his famous kids-at-work video. Kevin's blog also has this post: Social Business Failure #8: Workflow Integration. This is something that we've been working on at Oracle. Since so much of business is based in enterprise applications such as ERP and CRM (and since Oracle offers e-Business Suite, Siebel, PeopleSoft, and JD Edwards, as well as Fusion Applications), it makes sense that the social capabilities of Oracle WebCenter are built right into these applications. There are two types of social collaboration – ad-hoc, and exception handling.
When you are in a business process and encounter an exception, you immediately look for 1) the document that tells you how to handle it, or 2) the person who can tell you how to handle it. With WebCenter built into these processes, people either search their content management system, or engage in expertise location and conversation. The great thing is, THEY DON'T HAVE TO LEAVE THE APPLICATION TO DO IT. Oracle has built the social capabilities right into the applications and business processes. I don't think enough folks were able to see that at the event, but I expect that over the next six months folks will become very aware of it. WebCenter also provides the ability to have ad-hoc collaboration, search, and expertise location that folks need when they are innovating or collaborating. We demonstrated Oracle Social Network. It's built on our Oracle WebCenter product to provide social collaboration inside and outside of your company. When we showed it to people, there were a number of areas that they commented on that were different from the other products being shown at the conference: screenshots from within the product; many authors working on documents simultaneously; flagging people for follow-up; the ability to call out directly to people; and the ability to see presence, not just whether someone is online, but which conversation they are actively in. Great stuff - the conference was full of smart people that we enjoy spending time with. We'll keep up in the meantime, but we look forward to seeing you in Boston.

    Read the article

  • Install Oracle Enterprise Linux 5 on VMWare

    - by TUFEKCIOGLU,FATIH
    In this article, we will try to install Oracle Enterprise Linux 5 on VMWare. To make the installation easier, I will show screenshots of the whole process. [The original post walks through the installation in 62 numbered screenshots, which are not reproduced here.] The installation is completed. Thanks, Fatih Tufekcioglu

    Read the article

  • How do I Integrate Production Database Hot Fixes into Shared Database Development model?

    - by TetonSig
    We are using SQL Source Control 3, SQL Compare, SQL Data Compare from RedGate, Mercurial repositories, TeamCity and a set of 4 environments including production. I am working on getting us to a dedicated environment per developer, but for at least the next 6 months we are stuck with a shared model. To summarize our current system, we have a DEV SQL server where developers first make changes/additions. They commit their changes through SQL Source Control to a local hgdev repository. When they execute an hg push to the main repository, TeamCity listens for that and then (among other things) pushes the hgdev repository to hgrc. Another TeamCity process listens for that, does a pull from hgrc and deploys the latest to a QA SQL Server where regression and integration tests are run. When those pass, a push from hgrc to hgprod occurs. We do a compare of hgprod to our PREPROD SQL Server and generate deployment/rollback scripts for our production release. Separate from the above, we have database hot fixes that need to be applied in between releases. The process there is for our Operations team to make changes on the PREPROD database and then, after testing, to use SQL Source Control to commit their hot fix changes to hgprod from the PREPROD database, then do a compare from hgprod to PRODUCTION, create deployment scripts and run them on PRODUCTION. If we were in a dedicated-database-per-developer model, we could simply push hgprod back to hgdev automatically and merge in the hot fix change (through TeamCity monitoring for hgprod check-ins), and then developers would pick it up and merge it to their local repository and database periodically. However, given that with a shared model the DEV database itself is the source of all changes, this won't work. Hotfixes pushed back to hgdev will show up in SQL Source Control as being different from the DEV SQL Server, and therefore we would need to overwrite the repository with the "change" from the DEV SQL Server. My only workaround so far is to have OPS assign a developer the hotfix ticket with a script attached, and then we run their hotfixes against DEV ourselves to merge them back in. I'm not happy with that solution. Other than working faster to get to a dedicated environment, are there other ways to keep this loop going automatically?

    Read the article

  • What guidelines are best suited for leveraging automatic deployments?

    - by Scott
    We are hoping to leverage a static code analysis tool (Sonar) as part of our continuous integration server, and are hoping to determine some useful guidelines to serve as a base for allowing the deployment to continue. What conditions should we make mandatory before allowing a build to proceed to the next set of testing? The obvious answers include that it compiles and the unit tests are successful. But what are some other things we should require before allowing a build to not be rolled back?

    Read the article

  • Server 2012 AD-DS Setup Fails (Microsoft.Directory.Services.Deployment.DeepTasks.DeepTasks not found)

    - by Daniel Steiner
    Good morning everyone, I am currently trying to promote my 2012 server to a domain controller, but at the first step of the setup I get this error message (German original): [Bereitstellungskonfiguration] Fehler bei der Bestimmung, ob der Zielserver bereits ein Domänencontroller ist: Der Typ [Microsoft.Directory.Services.Deployment.DeepTasks.DeepTasks] wurde nicht gefunden: Vergewissern Sie sich, dass die Assembly, die diesen Typ enthält, geladen ist. (Translated to English): Error while determining if the target server already is a domain controller: The type [Microsoft.Directory.Services.Deployment.DeepTasks.DeepTasks] was not found: Make sure that the assembly that contains this type is loaded. Thus I can neither configure AD DS nor uninstall it via Server Manager. Any help fixing this problem would be greatly appreciated.

    Read the article

  • Hyper-V Deployment Options Best Practices

    - by Erv Walter
    In what circumstances would you choose each of the following deployment options: Hyper-V installed as the bare bones Windows Hyper-V Server 2008 R2 Hyper-V role installed on a Windows Server 2008 R2 Server Core installation Hyper-V role installed on a Windows Server 2008 R2 Full Installation For example, I know there are licensing considerations for each option: With Hyper-V on top of a full installation of Enterprise or Data Center edition, you can use Windows Server as a guest OS without needing additional licenses (4 for Enterprise, unlimited for Data Center) With "Windows Hyper-V Server" you have to obtain licenses for each guest OS. But my real question is, are there technical considerations as well? I understand that the Full Installation doesn't perform as well as the other two options, but is there a significant difference between Server Core and "Windows Hyper-V Server"? What are the pros and cons of Hyper-V on Server Core vs "Windows Hyper-V Server" and when would you choose each?

    Read the article

  • How to install Web Deployment Agent

    - by Jerry
    I am trying to set up TFS automated deploys, but I keep getting the following error message when running the deploy. It appears that I have a service called "Web Management Service", but the error message says that I need the "Web Deployment Agent Service". I tried installing Web Deploy 2.0, but the server said that I already had this installed. What can I do to fix this problem? Error Code: ERROR_DESTINATION_NOT_REACHABLE Could not connect to the destination computer ("myServer"). On the destination computer, make sure that Web Deploy is installed and that the required process ("Web Deployment Agent Service") is started. --Update-- Looks like the Web Deployment Agent is not installed by default. I had to re-install MSDeploy, select Change or Custom, then add the Web Deploy Agent service. Now the deploy works correctly.

    Read the article

  • Remove Applications page on MDT 2012 Deployment Summary

    - by Ben M.
    I am using SCCM 2012 and MDT 2012 for OSD and app deployment. Because of the nature of how some of the apps need to be detected (and which changes as apps get updated) they show up with a warning sign on the Applications tab of the Deployment Summary. There is pretty much no way to get them all green check marked (not realistically in our setup anyway) so I would like to get rid of that page so that I don't have to hear about it from my teams as they deploy. Any help would be appreciated. MDT Summary screen: Applications Installed (the tab I want to remove)

    Read the article

  • Podcast Show Notes: William Ulrich and Neal McWhorter on Business Architecture

    - by Bob Rhubart
    The latest ArchBeat podcast program features a four-part conversation with William Ulrich and Neal McWhorter, the authors of Business Architecture: The Art and Practice of Business Transformation, available from Meghan-Kiffer Press. Listen to Part 1 Bill and Neal cover the basics and discuss the effects of the lack of business architecture on organizations. Listen to Part 2 (Jan 19) What really happens to the billions of dollars annually invested in IT. Listen to Part 3 (Jan 26) Why the IT and business sides of many organizations can't play nice. Listen to Part 4 (Feb 2) How IT architects and business architects can work together to get the ship back on course and keep it there. Connect William Ulrich Website | LinkedIn | Business Architecture Guild Neal McWhorter Website | LinkedIn | Business Architecture Group on OMG Coming Soon Bob Hensle, Director, Oracle Enterprise Architecture Group, discusses the recently launched IT Solutions from Oracle (ITSO) library of documents. Excerpts from a recent OTN Architect Community Virtual Meet-up. Stay tuned.

    Read the article

  • CIO Magazine's State of the CIO and its Impact on Your EA

    - by david.olivencia(at)oracle.com
    CIO Magazine today released its State of the CIO report.  As most Enterprise Architects report to (or report very close to) the CIO, the report provides interesting insights as to where most CIOs' minds and priorities are.  The information will allow Enterprise Architects to better align plans, approaches, models, and strategies. The report's summary can be found here:  http://assets.cio.com/documents/cache/pdfs/2011/dec15_gatefold.pdf   Specifically, the article highlights: * How IT Makes A Difference * Critical Leadership Skills * Business Focused CIOs * Areas of Increasing Responsibility * Plans for 2015   Enterprise Architects, what insights from this report will alter the way you successfully lead in 2011?   David Olivencia | Solution Director, Enterprise Architecture & Exa Services, Oracle Consulting Latin America and Caribbean

    Read the article

  • Help me classify this type of software architecture

    - by Alex Burtsev
    I have read some books about software architecture, since we are using it in our project, but I can't classify our architecture properly. It's some kind of enterprise architecture, but what exactly? SOA, ESB (Enterprise Service Bus), Message Bus, Event-Driven SOA... there are so many terms in enterprise software. The system is based on custom XML message exchanges between services (it's not SOAP, nor any other XML-based standard, just plain XML). These messages represent notifications (state changes) that are applied to the domain model (it's not like CRUD, where you serialize the whole domain object and pass it to a service for persistence). The system is centralized, and system participants use different programming languages and frameworks (C++, C#, Java). Also, messages are not processed at the moment they are received; they are stored first and processing begins on demand. It's called SOA+EDA :-)
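    To make the description above concrete, here is a minimal sketch of that style of interaction: services exchange plain XML state-change notifications, which are stored first and only applied to the domain model when processing is requested. All class names, the XML shape, and the in-memory queue are illustrative assumptions about the pattern, not details of the actual system.

        // Illustrative sketch only: store notifications on receipt, apply them on demand.
        import java.io.ByteArrayInputStream;
        import java.nio.charset.StandardCharsets;
        import java.util.Map;
        import java.util.Queue;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentLinkedQueue;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;

        public class NotificationBus {
            // Domain model kept deliberately simple: order id -> status.
            private final Map<String, String> orders = new ConcurrentHashMap<>();
            // Notifications are stored first...
            private final Queue<String> inbox = new ConcurrentLinkedQueue<>();

            public void receive(String xmlNotification) {
                inbox.add(xmlNotification);          // no processing at receive time
            }

            // ...and applied to the domain model on demand.
            public void processPending() throws Exception {
                String xml;
                while ((xml = inbox.poll()) != null) {
                    Document doc = DocumentBuilderFactory.newInstance()
                            .newDocumentBuilder()
                            .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
                    String orderId = doc.getDocumentElement().getAttribute("orderId");
                    String newStatus = doc.getDocumentElement().getAttribute("status");
                    orders.put(orderId, newStatus);  // apply the state change, not a full object save
                }
            }

            public static void main(String[] args) throws Exception {
                NotificationBus bus = new NotificationBus();
                bus.receive("<orderStatusChanged orderId=\"42\" status=\"SHIPPED\"/>");
                bus.processPending();
                System.out.println(bus.orders);      // {42=SHIPPED}
            }
        }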

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices – STAGE 4: AUTOMATED DEPLOYMENT. If you've been fortunate enough to get to the stage where you've implemented some sort of continuous integration process for your database updates, then hopefully you're seeing the benefits of that investment – constant feedback on changes your devs are making, advanced warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic, so you know it's going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear. Our Database Delivery Learning Program consists of four stages, really three – source controlling a database, running continuous integration processes, then how to set up automated deployment (the middle stage is split in two – basic and advanced continuous integration, making four stages in total). If you've managed to work through the first three of these stages – source control, basic, then advanced CI – then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server. But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn't going to wipe our 4Tb of production data with a single DROP TABLE. But – how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There's a significant gap between your latest version being tested, and it being easily releasable. Just a quick note on terminology – there's a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: "Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users" There's another really useful piece here on Simple-Talk about the need for continuous delivery and how it applies to the database, written by Phil Factor – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app). So, hopefully you're convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or "release management") process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can't I just install one of the many release management tools available and hey presto, I'm ready! If only it were that simple. Below I list some of the areas that it's worth spending a little time on, where a little planning and prep could go a long way.
It’s also worth pointing out, that this should really be an evolving process. Depending on your starting point of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you’ve got a CI mechanism in place, you’re certainly a long way down that path. Nevertheless, we’d recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912 For now, in this post, we’ll look at the following areas for your checklist: You and Your Team Environments The Deployment Process Rollback and Recovery Development Practices You and Your Team It’s a cliché in the DevOps community that “It’s not all about processes and tools, really it’s all about a culture”. As stated in this DevOps report from Puppet Labs: “DevOps processes and tooling contribute to high performance, but these practices alone aren’t enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn’t understood outside of a specific group”. Like most clichés, there’s truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it’s an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as: -2008 to present: overall development costs reduced by 40% -Number of programs under development increased by 140% -Development costs per program down 78% -Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40% But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing. That they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you’re ever struggling to convince someone of the value I’d strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org. I’ve spoken to many customers who have implemented database CI who describe their deployment process as “The point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that’s finished we revert to manual.” This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA “We’re changing everything you do and your toolset next week, to automate most of your role – that’s okay isn’t it?” isn’t likely to go down well. 
There's some work here to bring him/her onside – to explain what you're doing, why there will still be control of the deployment process and so on. Or of course, if you're the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you'd like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them, to find out. As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager's manager too. As mentioned, unless there's buy-in "from the top", you're going to hit problems when the implementation starts to get rocky (and what tool/process implementations don't get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress. Actions: Get your DBA involved (or whoever looks after live deployments) and discuss what you're planning to do or, if you're the DBA yourself, get the dev team up-to-speed with your plans. Get your boss involved too and make sure he/she is bought in to the investment. Environments: Where are you going to deploy to? And really this question is – what environments do you want set up for your deployment pipeline? Assume everyone has "Production", but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I've seen every setup under the sun, and there is often a big difference between "What we want, to do continuous delivery properly" and "What we're currently stuck with". Some of these differences are:
    - What we want: each developer with their own dedicated database environment. What we've got: a single shared "development" environment, used by everyone at once.
    - What we want: an Integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit tests running on that machine. What we've got: in fact, if you have a CI process running, you're likely to have some sort of integration server running (even if you don't call it that!). Whether you have a full suite of unit tests running is a different question…
    - What we want: a separate QA environment used explicitly for manual testing prior to release. What we've got: "We just test on the dev environments, or maybe pre-production."
    - What we want: a proper pre-production (or "staging") box that matches production as closely as possible. What we've got: hopefully a pre-production box of some sort. But does it match production closely!?
    - What we want: a production environment reproducible from source control. What we've got: a production box which has drifted significantly from anything in source control.
The big question is – how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you're going to create and where they'll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you're working on a new, greenfield project, or trying to update an existing, brownfield application.
There’s a world if difference between starting from scratch with 4 or 5 clean environments (reproducible from source control of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have: Dedicated development databases, An Integration server used for testing continuous integration and running unit tests. [NB: This is the point at which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action], QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing, Pre-production. The environment you use to test the production release process, Production. * A note on the use of the word “automatic” – when carrying out automated deployments this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic in that it’s not a person manually running through a checklist or set of actions. The deployment still requires a single-click from a user. Actions: Get your environments set up and ready, Set access permissions appropriately, Make sure everyone understands what the environments will be used for (it’s not a “free-for-all” with all environments to be accessed, played with and changed by development). The Deployment Process As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers “How do your database changes get live? How does your manual process work?” Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring in to pre-prod, Again, use a schema compare tool to find the differences between the latest version of the database ready to go live (i.e. what the team have been developing). This generates a script, User (generally, the DBA), reviews the script. This often involves manually checking updates against a spreadsheet or similar, Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped), If all working, run the script on production.* * this assumes there’s no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something in to the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up here at www.sqllighthouse.com, if you’re interested in testing early versions. There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can’t automate a DBA checking through a script, that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!), before the deployment process keys in and automatically deploys that change to the live box. Not for the faint hearted – and really not something we recommend. 
At the other extreme, you might be more comfortable with a semi-automated process – the pre-production/production matching process is automated (with an error thrown if these environments don’t match), followed by a manual intervention, allowing for script approval by the DBA. One he/she clicks “Okay, I’m happy for that to go live”, the latter stages automatically take the script through to live. And anything in between of course – and other variations. But we’d strongly recommended sitting down with a whiteboard and your team, and spending a couple of hours mapping out “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?” NB: Most of what we’re discussing here is about production deployments. It’s important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes. Actions: Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments – “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?” Repeat for earlier environments (QA and so on). Rollback and Recovery If only every deployment went according to plan! Unfortunately they don’t – and when things go wrong, you need a rollback or recovery plan for what you’re going to do in that situation. Once you move in to a more automated database deployment process, you’re far more likely to be deploying more frequently than before. No longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for. NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem. There are various options, which we’ll explore in subsequent articles, things like: Immediately restore from backup, Have a pre-tested rollback script (remembering that really this is a “roll-forward” script – there’s not really such a thing as a rollback script for a database!) Have fallback environments – for example, using a blue-green deployment pattern. Different options have pros and cons – some are easier to set up, some require more investment in infrastructure; and of course some work better than others (the key issue with using backups, is loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism. Actions: Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment and requirements for a completely failsafe process. Development Practices This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is actually intrinsically linked with the patterns and practices used to develop that database and linked application. 
So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern "Branch by abstraction". Explained nicely here by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner, so that you can always roll back without data loss, by making incremental updates to the database backward compatible. Slides 103-108 of the following slide deck, from Niek Bartholomeus, explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace

As these slides show, making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily).

There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage, is a great read if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515

But the question is – how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there's a difference here between migrating old projects and starting afresh – with the latter it's much easier to instigate best practice from the start.

Actions:

• For your business, work out how far down the path you want to go, amending your database development patterns towards "best practice". It's a trade-off between implementing quality processes, and the necessity to do so (depending on how often you make complex changes).
• Socialise these changes with your development group. No-one likes having "best practice" changes imposed on them, so it's good to introduce these ideas and the rationale behind them early.

Summary

The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning if you want to get the most out of the work, and for the implementation to go smoothly. We've covered some of the checklist of areas to consider – mainly "getting the team ready for the changes that are coming" and "planning out your pipeline, environments, patterns and practices for development" – though there will be more detail, depending on where you're coming from and where you want to get to.

This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles on version control, automated testing, continuous integration and deployment.

    Read the article

  • SQL Server Express Uninstall / Enterprise Install Issue

    - by user19049
I need help installing SQL Server 2005 Enterprise Edition. I really need to remove the current SQL Server 2005 installation, which is no longer in my Add/Remove Programs list but is still installed on the machine. I tried to uninstall SQL Server Express / Developer Edition, but it only removed the entry from my Add/Remove Programs list – the uninstaller returned immediately and did NOT actually remove the product. (I'm now in a bad state.) When I try to install SQL Server 2005 Enterprise, it says I'm blocked because all components are already installed – but they are not. How can I remove all instances of the previous installation and do a clean Enterprise Edition install on my server? Thanks

    Read the article

  • Multi screen RDP in Windows 8.1 Enterprise

    - by bgs264
I have just flattened my machine and installed Windows 8.1 Enterprise Edition. I have used Hyper-V to create a virtual machine for my software development work; on that VM I have also installed Windows 8.1 Enterprise Edition. I want two-screen support when using this VM (not using /span). Both the Hyper-V viewer and Remote Desktop give me a tickbox to "Use all my monitors for the remote session". However, even with it ticked (and even when I tried the /multimon switch on the command line), I only get a single screen. Am I missing something - this should be supported in Enterprise edition, right? Is there some extra config I need to do on the RDP host? Forgive me if it's an obvious question, I'm more a developer and just stumbling through ;-) Cheers! Ben

    Read the article

  • javax.naming.NameAlreadyBoundException: in glassfish server v2

    - by Nila
Hi! I'm implementing a stateless session bean (EJB 3) in GlassFish Server v2 using NetBeans. The first deployment works properly, but on subsequent deployments I get the following exception:

LDR5012: Jndi name conflict found in [SampleEjb3]. Jndi name [Lulu.HellostatelessRemote] for bean [HellostatelessBean] is already in use.
LDR5013: Naming exception while creating EJB container:
javax.naming.NameAlreadyBoundException: Use rebind to override
    at com.sun.enterprise.naming.TransientContext.doBindOrRebind(TransientContext.java:292)
    at com.sun.enterprise.naming.TransientContext.bind(TransientContext.java:232)
    at com.sun.enterprise.naming.SerialContextProviderImpl.bind(SerialContextProviderImpl.java:111)
    at com.sun.enterprise.naming.LocalSerialContextProviderImpl.bind(LocalSerialContextProviderImpl.java:90)
    at com.sun.enterprise.naming.SerialContext.bind(SerialContext.java:461)
    at com.sun.enterprise.naming.SerialContext.bind(SerialContext.java:476)
    at javax.naming.InitialContext.bind(InitialContext.java:404)
    at com.sun.enterprise.naming.NamingManagerImpl.publishObject(NamingManagerImpl.java:237)
    at com.sun.enterprise.naming.NamingManagerImpl.publishObject(NamingManagerImpl.java:190)
    at com.sun.ejb.containers.BaseContainer.initializeHome(BaseContainer.java:1015)
    at com.sun.ejb.containers.StatelessSessionContainer.initializeHome(StatelessSessionContainer.java:232)
    at com.sun.ejb.containers.ContainerFactoryImpl.createContainer(ContainerFactoryImpl.java:654)
    at com.sun.enterprise.server.AbstractLoader.loadEjbs(AbstractLoader.java:536)
    at com.sun.enterprise.server.ApplicationLoader.doLoad(ApplicationLoader.java:188)
    at com.sun.enterprise.server.TomcatApplicationLoader.doLoad(TomcatApplicationLoader.java:126)
    at com.sun.enterprise.server.AbstractLoader.load(AbstractLoader.java:244)
    at com.sun.enterprise.server.AbstractManager.load(AbstractManager.java:225)
    at com.sun.enterprise.server.ApplicationLifecycle.onStartup(ApplicationLifecycle.java:217)
    at com.sun.enterprise.server.ApplicationServer.onStartup(ApplicationServer.java:442)
    at com.sun.enterprise.server.ondemand.OnDemandServer.onStartup(OnDemandServer.java:120)
    at com.sun.enterprise.server.PEMain.run(PEMain.java:411)
    at com.sun.enterprise.server.PEMain.main(PEMain.java:338)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.sun.enterprise.server.PELaunch.main(PELaunch.java:412)

If I remove the EJB module from the GlassFish server and restart the server, it works again. How can I overcome this problem?

    Read the article

  • How to create a new WCF/MVC/jQuery application from scratch

    - by pjohnson
As a corporate developer by trade, I don't get much opportunity to create from-the-ground-up web sites; usually it's tweaks, fixes, and new functionality for existing sites. And with hobby sites, I often don't find the challenges I run into with enterprise systems; usually it's starting from Visual Studio's boilerplate project and adding whatever functionality I want to play around with, rarely deploying outside my own machine. So my experience creating a new enterprise-level site was a bit dated, the technologies to do so have come a long way, and they are much more ready to go out of the box. My intention with this post isn't so much to provide any groundbreaking insights, but to tie together a lot of information in one place to make it easy to create a new site from scratch.

Architecture

One site I created earlier this year had an MVC 3 front end and a WCF 4-driven service layer. Using Visual Studio 2010, these project types are easy enough to add to a new solution. I created a third Class Library project to store common functionality the front end and services layers both needed to access, for example, the DataContract classes that the front end uses to call services in the service layer. By keeping DataContract classes in a separate project, I avoided the need for the front end to have an assembly/project reference directly to the services code, a cleaner and more flexible SOA implementation.

Consuming the service

Even by this point, VS has given you a lot. You have a working web site and a working service, neither of which do much, but both are great starting points. To wire up the front end and the services, I needed to create proxy classes and WCF client configuration information. I decided to use the SvcUtil.exe utility provided as part of the Windows SDK, which you should have installed if you installed VS. VS has also provided an Add Service Reference command since the .NET 1.x ASMX days, which I've never really liked; it creates several .cs/.disco/etc. files, some of which contain hardcoded URLs, adding duplicate files (*1.cs, *2.cs, etc.) without doing a good job of cleaning up after itself. I've found SvcUtil much cleaner, as it outputs one C# file (containing several proxy classes) and a config file with settings; it's easier to use to regenerate the proxy classes when the service changes, and it lets you maintain all your configuration in one place (your Web.config, instead of the Service Reference files). I provided it a reference to a copy of my common assembly so it doesn't try to recreate the data contract classes, had it use the type List<T> for collections, and modified the output files' names and .NET namespace, ending up with a command like:

svcutil.exe /l:cs /o:MyService.cs /config:MyService.config /r:MySite.Common.dll /ct:System.Collections.Generic.List`1 /n:*,MySite.Web.ServiceProxies http://localhost:59999/MyService.svc

I took the generated MyService.cs file and dropped it in the web project, under a ServiceProxies folder, matching the namespace and keeping it separate from classes I coded manually. Integrating the config file took a little more work, but only needed to be done once, as these settings didn't often change. A great thing Microsoft improved with WCF 4 is configuration; namely, you can use all the default settings and not have to specify them explicitly in your config file. Unfortunately, SvcUtil doesn't generate its config file this way.
If you just copy & paste MyService.config's contents into your front end's Web.config, you'll copy a lot of settings you don't need, and this will get unwieldy if you add more services in the future, each with its own custom binding. Really, as the only mandatory settings are the endpoint's ABCs (address, binding, and contract), you can get away with just this:

<system.serviceModel>
  <client>
    <endpoint address="http://localhost:59999/MyService.svc"
              binding="wsHttpBinding"
              contract="MySite.Web.ServiceProxies.IMyService" />
  </client>
</system.serviceModel>

By default, the services project uses basicHttpBinding. As you can see, I switched it to wsHttpBinding, a more modern standard. Using something like netTcpBinding would probably be faster and more efficient, since the client and service are both written in .NET, but it requires additional server setup and open ports, whereas switching to wsHttpBinding is much simpler.

From an MVC controller action method, I instantiated the client and invoked the method for my operation. As with any object that implements IDisposable, I wrapped it in C#'s using() statement, a tidy construct that ensures Dispose gets called no matter what, even if an exception occurs. Unfortunately there are problems with that, as WCF's ClientBase<TChannel> class doesn't implement Dispose according to Microsoft's own usage guidelines. I took an approach similar to Technology Toolbox's fix, except using partial classes instead of a wrapper class to extend the SvcUtil-generated proxy, making the fix more seamless from the controller's perspective, and, theoretically, less code I have to change if and when Microsoft fixes this behavior.

User interface

The MVC 3 project template includes jQuery and some other common JavaScript libraries by default. I updated the ones I used to the latest versions using NuGet, available in VS via Tools > Library Package Manager > Manage NuGet Packages for Solution... > Updates. I also used this dialog to remove packages I wasn't using. Given that it's smart enough to know the difference between the .js and .min.js files, I was hoping it would be smart enough to know which to include during build and publish operations, but this doesn't seem to be the case. I ended up using Cassette to perform the minification and bundling of my JavaScript and CSS files; ASP.NET 4.5 includes this functionality out of the box.

The web client to web server link via jQuery was easy enough. In my JavaScript function, unobtrusively wired up to a button's click event, I called $.ajax, corresponding to an action method that returns a JsonResult, accomplished by passing my model class to the Controller.Json() method, which jQuery helpfully translates from JSON to a JavaScript object.

$.ajax calls weren't perfectly straightforward. I tried using the simpler $.post method instead, but ran into trouble without specifying the contentType parameter, which $.post doesn't have. The url parameter is simple enough, though for flexibility in how the site is deployed, I used MVC's Url.Action method to get the URL, then sent this to JavaScript in a JavaScript string variable. If the request needed input data, I used the JSON.stringify function to convert a JavaScript object with the parameters into a JSON string, which MVC then parses into strongly-typed C# parameters. I also specified "json" for dataType, and "application/json; charset=utf-8" for contentType. For success and error, I provided my success and error handling functions, though success is a bit hairier.
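Before getting into what "success" means here, it's worth seeing the server side of such a call. Below is a minimal sketch, assuming hypothetical names (WidgetController, WidgetModel, widgetId) rather than the article's actual code:

using System.Web.Mvc;

// Hypothetical model returned to the jQuery success handler.
public class WidgetModel
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class WidgetController : Controller
{
    // MVC 3's JsonValueProviderFactory (registered by default) binds the
    // JSON.stringify'd request body to the strongly-typed parameter, and
    // Controller.Json() serializes the model back to JSON for the client.
    [HttpPost]
    public JsonResult GetWidget(int widgetId)
    {
        var model = new WidgetModel { Id = widgetId, Name = "Example widget" };
        return Json(model);
    }
}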
"Success" in this context indicates whether the HTTP request succeeds, not whether what you wanted the AJAX call to do on the web server was successful. For example, if you make an AJAX call to retrieve a piece of data, the success handler will be invoked for any 200 OK response, and the error handler will be invoked for failed requests, e.g. a 404 Not Found (if the server rejected the URL you provided in the url parameter) or 500 Internal Server Error (e.g. if your C# code threw an exception that wasn't caught). If an exception was caught and handled, or if the data requested wasn't found, this would likely go through the success handler, which would need to do further examination to verify it did in fact get back the data for which it asked. I discuss this more in the next section. Logging and exception handling At this point, I had a working application. If I ran into any errors or unexpected behavior, debugging was easy enough, but of course that's not an option on public web servers. Microsoft Enterprise Library 5.0 filled this gap nicely, with its Logging and Exception Handling functionality. First I installed Enterprise Library; NuGet as outlined above is probably the best way to do so. I needed a total of three assembly references--Microsoft.Practices.EnterpriseLibrary.ExceptionHandling, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging, and Microsoft.Practices.EnterpriseLibrary.Logging. VS links with the handy Enterprise Library 5.0 Configuration Console, accessible by right-clicking your Web.config and choosing Edit Enterprise Library V5 Configuration. In this console, under Logging Settings, I set up a Rolling Flat File Trace Listener to write to log files but not let them get too large, using a Text Formatter with a simpler template than that provided by default. Logging to a different (or additional) destination is easy enough, but a flat file suited my needs. At this point, I verified it wrote as expected by calling the Microsoft.Practices.EnterpriseLibrary.Logging.Logger.Write method from my C# code. With those settings verified, I went on to wire up Exception Handling with Logging. Back in the EntLib Configuration Console, under Exception Handling, I used a LoggingExceptionHandler, setting its Logging Category to the category I already had configured in the Logging Settings. Then, from code (e.g. a controller's OnException method, or any action method's catch block), I called the Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicy.HandleException method, providing the exception and the exception policy name I had configured in the Exception Handling Settings. Before I got this configured correctly, when I tried it out, nothing was logged. In working with .NET, I'm used to seeing an exception if something doesn't work or isn't set up correctly, but instead working with these EntLib modules reminds me more of JavaScript (before the "use strict" v5 days)--it just does nothing and leaves you to figure out why, I presume due in part to the listener pattern Microsoft followed with the Enterprise Library. First, I verified logging worked on its own. Then, verifying/correcting where each piece wires up to the next resolved my problem. 
Your C# code calls into the Exception Handling module, referencing the policy you pass to the HandleException method; that policy's configuration contains a LoggingExceptionHandler that references a logCategory; that logCategory should be added in the loggingConfiguration's categorySources section; that category references a listener; and that listener should be added in the loggingConfiguration's listeners section, which specifies the name of the log file.

One final note on error handling, as the proper way to handle WCF and MVC errors is a whole other very lengthy discussion. For AJAX calls to MVC action methods, depending on your configuration, an exception thrown here will result in ASP.NET's Yellow Screen Of Death being sent back as the response, which is at best unnecessarily and uselessly verbose, and at worst a security risk, as the internals of your application are exposed to potential hackers. I mitigated this by overriding my controller's OnException method, passing the exception off to the Exception Handling module as above (a slightly fuller sketch appears below, after the summary). I created an ErrorModel class with as few properties as possible (e.g. an Error string), sending as little information to the client as possible, to both minimize bandwidth use and mitigate risk. I then return an ErrorModel in JSON format for AJAX requests:

if (filterContext.HttpContext.Request.IsAjaxRequest())
{
    filterContext.Result = Json(new ErrorModel(...));
    filterContext.ExceptionHandled = true;
}

My $.ajax calls from the browser get a valid 200 OK response and go into the success handler. Before assuming everything is OK, I check whether the response is an ErrorModel or a model containing what I requested. If it's an ErrorModel, or null, I pass it to my error handler. If the client needs to handle different errors differently, ErrorModel can contain a flag, error code, string, etc. to differentiate, but again, sending as little information back as possible is ideal.

Summary

As any experienced ASP.NET developer knows, this is a far cry from where ASP.NET started when I began working with it 11 years ago. WCF services are far more powerful than ASMX ones, MVC is in many ways cleaner and certainly more unit test-friendly than Web Forms (if you don't consider the code/markup commingling you're doing again), the Enterprise Library makes error handling and logging almost entirely configuration-driven, AJAX makes a responsive UI more feasible, and jQuery makes JavaScript coding much less painful. It doesn't take much work to get a functional, maintainable, flexible application, though having it actually do something useful is a whole other matter.
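As promised above, here is a slightly fuller, hedged sketch of that OnException override. The ErrorModel shape, policy name, and message are illustrative assumptions rather than the article's exact code:

using System.Web.Mvc;
using Microsoft.Practices.EnterpriseLibrary.ExceptionHandling;

// Deliberately sparse error payload for AJAX callers (assumed shape).
public class ErrorModel
{
    public string Error { get; set; }
}

public abstract class BaseController : Controller
{
    protected override void OnException(ExceptionContext filterContext)
    {
        // Hand the exception to the configured policy (the name is an assumption).
        ExceptionPolicy.HandleException(filterContext.Exception, "Default Policy");

        if (filterContext.HttpContext.Request.IsAjaxRequest())
        {
            // Return minimal JSON rather than the Yellow Screen Of Death.
            filterContext.Result = Json(
                new ErrorModel { Error = "An unexpected error occurred." },
                JsonRequestBehavior.AllowGet);
            filterContext.ExceptionHandled = true;
        }
        else
        {
            base.OnException(filterContext);
        }
    }
}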

    Read the article

  • Deploy with rsync(or svn, git, cvs) and ignore inconsistent state during deployment?

    - by zedoo
We are currently talking about deploying a website via rsync. However, while rsync is running the application is left in an inconsistent state, as some files may already be synced while others are still at the old version, right? How do people deal with this issue? I guess the same problem exists when deploying via svn/git/cvs. Should I just take the site down, rsync, and bring it back up again? Or do people simply ignore this inconsistency problem?

    Read the article

< Previous Page | 32 33 34 35 36 37 38 39 40 41 42 43  | Next Page >