Search Results

Search found 31269 results on 1251 pages for 'process management'.


  • Oracle EMEA News Digest - May 2014

    - by Steve Walker
    Systems Oracle introduced a technology preview of an OpenStack® distribution that allows Oracle Linux and Oracle VM users to work with the open source cloud software. This provides customers with additional choices and interoperability while taking advantage of the efficiency, performance, scalability, and security of Oracle Linux and Oracle VM. The distribution is delivered as part of the Oracle Linux and Oracle VM Premier Support offerings, at no additional cost. Oracle plans to work further with the OpenStack community to develop and enhance its enterprise-class capabilities to meet customer demands. Also in the Open Source arena, Oracle announced the general availability of MySQL Fabric. MySQL Fabric provides an integrated system that makes it simpler to manage groups of MySQL databases. It delivers both high availability - via failure detection and failover - and scalability through automated data sharding. Oracle Database, Middleware and Technology The company made two announcements for Oracle Tuxedo, the #1 application server for C, C++, COBOL and Java deployments in private cloud or traditional data center environments. With enhanced management and monitoring features and tighter integration with Oracle technologies, the latest release of Oracle Tuxedo 12c enables organizations to dramatically increase application throughput, while reducing total cost of ownership and time to market for new application development and deployment. Oracle also introduced the latest release of its mainframe application rehosting platform, Oracle Tuxedo ART 12c, to help organizations speed up migration projects and accelerate the adoption of the new environment by current IT staff. It enables organizations to accelerate the rehosting of IBM mainframe applications and greatly enhance management and supportability of the rehosted applications while reducing costs and risk. Applications According to new Oracle studies, B2B and B2C commerce professionals find integrated, omni-channel customer experiences increasingly valuable to their organizations, and are continuing to invest in technologies and digital content strategies to facilitate them. The studies—one for B2B and one for B2C—surveyed e-commerce professionals in business and technology departments from around the world. Although the priorities, success metrics, and technology investments differed between the two groups, customer acquisition and retention emerged as common themes across B2B and B2C. Growing market share and enhancing customer experience are cited as top investment areas for all e-commerce professionals. In product news, Oracle announced the latest release of Oracle Business Intelligence (BI) Applications (version 11.1.1.8.1, in case anyone asks). It includes prebuilt connectors between Oracle Procurement and Spend Analytics and Oracle’s JD Edwards. Additionally, a new Oracle Human Resources Analytics module for developing and maintaining a skilled workforce has been introduced. In use at more than 4,000 companies worldwide, Oracle BI Applications support leading enterprise applications, including Oracle E-Business Suite, Oracle’s PeopleSoft, Oracle's Siebel CRM, Oracle’s JD Edwards EnterpriseOne offering high-performing analytics at a lower cost. Industries For the Communications Industry, Oracle has launched a new release of the Oracle Communications Core Session Manager. 
This gives CSPs a new way to design, deploy and manage complex networking services and embrace next-generation technology. It provides them with an immediate entry point for network function virtualization (NFV) efforts, allowing them to realize the benefits associated with network virtualization – including increased service agility and improved network resource sharing. And for the Utilities Industry, Oracle is releasing solutions with new business features and enhanced technical architecture that help position utilities for success now and into the future. Oracle has provided new releases for its customer information system, meter data management system, customer self-service solution and mobile workforce management solution.

    Read the article

  • It’s official – Red Gate is a great place to work!

    - by red@work
    At a glittering award ceremony last week, we found out that we’re officially the 14th best small company to work for in the whole of the UK! This is no mean feat, considering that about 1,000 companies enter the Sunday Times Top 100 best companies awards each year. Most of these are in the small companies category too. It's the fourth year in a row for us to be in the Top 100 list and we're tickled pink because the results are based on employee opinion. We’re particularly proud to be the best small company in Cambridge (in the whole of East Anglia, in fact) and the best small software development company in the entire UK. So how does it all work? Well, 90% of us took the time to answer over 70 questions on categories such as management, benefits, wellbeing, leadership, giving something back and what we think of Red Gate as a whole. It makes you think about every part of day to day working life and how you feel about it. Do you slightly or strongly agree or disagree that your manager motivates you to do your best every day, or that you have confidence in Red Gate's leaders, or that you’re not spending too much time working? It's great to see that we had one of the best scores in the country for the question "Do you think your company takes advantage of you?" We got particularly high scores for management, wellbeing and for giving something back too. A few of us got dressed up and headed to London for the awards; very excited about where we’d place but slightly nervous about having to get up on stage. There was a last-minute hiccup with a bow tie but the Managing Editor of the Sunday Times kindly stepped in to offer his assistance just before we had our official photo taken. We were nominated for two Special Recognition Awards. Despite not bringing them home this year, we're very proud to be nominated as there are only three nominations in each category. First we were up for the Training and Development award. Best Companies loved that we get together at lunchtimes to teach each other photography, cookery and French, as well as our book clubs and techie talks. And of course they liked our opportunities to go on training courses and to jet off to international conferences. Our other nomination was for the Wellbeing award. Best Companies loved our free food (and let’s face it, so do we). Porridge or bacon sandwiches for breakfast, a three course hot dinner, and free fruit and cereals all day long. If all that has an effect on the waistline then there are plenty of sporty activities for us all to get involved in, such as yoga, running or squash. Or if that’s not your thing then a relaxing massage helps us all to unwind every few months or so. The awards were hosted by news presenter Kate Silverton. She gave us a special mention during the ceremony for having great customer engagement as well as employee engagement, after we told her about Rodney Landrum (a Friend of Red Gate) tattooing our logo on his arm. We showed off our customised dinner jacket (thanks to Dom from Usability) with a flashing Red Gate logo on the back and she seemed suitably impressed. Back in the office the next day, we popped open the champagne and raised a glass to our success. Neil, our joint CEO, talked about how pleased he was with the award because it's based on the opinions of the people that count – us. You can read more about the Sunday Times awards here. By the way, we're still growing and are still hiring.
If you’d like to keep up with our latest vacancies then why not follow us on Twitter at twitter.com/redgatecareers. Right now we're busy hiring in development, test, sales, product management, web development, and project management. Here's a link to our current job opportunities page – we'd love to hear from great people who are looking for a great place to work! After all, we're only great because of the people who work here. Post by: Alice Chapman

    Read the article

  • Rotating WebLogic Server logs to avoid large files using WLST.

    - by adejuanc
    By default, when WebLogic Server instances are started in development mode, the server automatically renames (rotates) its local server log file as SERVER_NAME.log.n. For the remainder of the server session, log messages accumulate in SERVER_NAME.log until the file grows to a size of 500 kilobytes. Each time the server log file reaches this size, the server renames the log file and creates a new SERVER_NAME.log to store new messages. By default, the rotated log files are numbered in order of creation as filenamennnnn, where filename is the name configured for the log file. You can configure a server instance to include a time and date stamp in the file name of rotated log files; for example, server-name-%yyyy%-%mm%-%dd%-%hh%-%mm%.log. By default, when server instances are started in production mode, the server rotates its server log file whenever the file grows to 5000 kilobytes in size. It does not rotate the local server log file when the server is started. For more information about changing the mode in which a server starts, see Change to production mode in the Administration Console Online Help. You can change these default settings for log file rotation. For example, you can change the file size at which the server rotates the log file, or you can configure a server to rotate log files based on a time interval. You can also specify the maximum number of rotated files that can accumulate. After the number of log files reaches this number, subsequent file rotations delete the oldest log file and create a new log file with the latest suffix. Note: WebLogic Server sets a threshold size limit of 500 MB before it forces a hard rotation to prevent excessive log file growth.

    To rotate via WLST:

        # invoke WLST
        C:\> java weblogic.WLST

        # connect WLST to an Administration Server
        wls:/offline> connect('username','password')

        # navigate to the ServerRuntime MBean hierarchy
        wls:/mydomain/serverConfig> serverRuntime()
        wls:/mydomain/serverRuntime> ls()

        # navigate to the server LogRuntimeMBean
        wls:/mydomain/serverRuntime> cd('LogRuntime/myserver')
        wls:/mydomain/serverRuntime/LogRuntime/myserver> ls()
        -r--   Name               myserver
        -r--   Type               LogRuntime
        -r-x   forceLogRotation   java.lang.Void :

        # force the immediate rotation of the server log file
        wls:/mydomain/serverRuntime/LogRuntime/myserver> cmo.forceLogRotation()
        wls:/mydomain/serverRuntime/LogRuntime/myserver>

    The server immediately rotates the file and prints the following message:

        <Mar 2, 2012 3:23:01 PM EST> <Info> <Log Management> <BEA-170017> <The log file C:\diablodomain\servers\myserver\logs\myserver.log will be rotated. Reopen the log file if tailing has stopped. This can happen on some platforms like Windows.>
        <Mar 2, 2012 3:23:01 PM EST> <Info> <Log Management> <BEA-170018> <The log file has been rotated to C:\diablodomain\servers\myserver\logs\myserver.log00001. Log messages will continue to be logged in C:\diablodomain\servers\myserver\logs\myserver.log.>

    To specify the location of the archived log files, the following command sets the directory for rotated log files using the -Dweblogic.log.LogFileRotationDir Java startup option:

        java -Dweblogic.log.LogFileRotationDir=c:\foo -Dweblogic.management.username=installadministrator -Dweblogic.management.password=installadministrator weblogic.Server

    For more information, read the following documentation:
    Using the WebLogic Scripting Tool: http://download.oracle.com/docs/cd/E13222_01/wls/docs103/config_scripting/using_WLST.html
    Configuring WebLogic Logging Services: http://download.oracle.com/docs/cd/E12840_01/wls/docs103/logging/config_logs.html
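    Beyond forcing a one-off rotation, the rotation defaults themselves can also be changed from WLST by editing the server's Log configuration MBean. The following is only a rough sketch (not from the article; the domain, server name, URL and values are placeholders, and attribute names should be verified against your WebLogic version):

        # connect to the Administration Server and start an edit session
        connect('username', 'password', 't3://localhost:7001')
        edit()
        startEdit()

        # navigate to the server's Log configuration MBean
        cd('/Servers/myserver/Log/myserver')
        cmo.setRotationType('bySize')       # rotate by file size (alternative: 'byTime')
        cmo.setFileMinSize(5000)            # rotate once the file reaches 5000 KB
        cmo.setNumberOfFilesLimited(true)   # cap the number of rotated files kept
        cmo.setFileCount(10)                # keep at most 10 rotated files

        # save and activate the changes
        save()
        activate()

    Changes made this way persist in the domain configuration, whereas cmo.forceLogRotation() above only rotates the current file once.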

    Read the article

  • Top 5 Reasons to Invest in Enterprise 2.0 Technologies

    - by kellsey.ruppel(at)oracle.com
    In 2010, Oracle's portal, content management, and collaboration solutions evolved rapidly, supported by increasingly deep integrations across Oracle Fusion Middleware and the entire Oracle stack. In light of these developments, we asked Vince Casarez, vice president of Enterprise 2.0 product management, for his top five reasons to invest in Enterprise 2.0 (E2.0) technologies--including real-world examples of businesses already realizing the benefits of next-generation E2.0 technologies. 1. Provide a modern user experience As E2.0 technologies gain widespread adoption, customers and employees expect intuitive Web experiences that are both interactive and community-based. By partnering with Oracle, Alcatel-Lucent Enterprise Group is already making that happen. With 76,000 employees and operations in more than 100 countries, the company wanted a streamlined, personalized user experience with more relevant content in fewer clicks. Working with Oracle, they created a global support portal that supports personalization and integration with Oracle Business Intelligence Enterprise Edition and Oracle E-Business Suite--and drives collaboration with tools such as wikis, blogs, and forums. Learn more about Alcatel-Lucent Enterprise Group's Global Support Portal in this Webcast. 2. Improve productivity and collaboration As E2.0 technologies mature, Oracle anticipates companies moving beyond the idea of simply creating yet another Facebook-like destination for its employees, and instead shaping work environments around specific business tasks. After rapid growth--both organic and through acquisition--construction and infrastructure services leader Balfour Beatty found itself with multiple homegrown intranet sites with very minimal content-sharing capabilities. Today, thanks to Oracle WebCenter Suite, Oracle WebCenter Spaces, Oracle WebCenter Services, and Oracle Universal Content Management, Balfour Beatty is benefiting from collaborative workspaces, a central place to use and work with documents, and unified search across content. 3. Leverage business processes and applications Modern portals are now able to integrate users, content, and business processes in unprecedented ways. To take advantage of these new possibilities, leading dairy provider Land O'Lakes has implemented a fully integrated ERP solution together with Oracle's ECM platform. As a result, Land O'Lakes has been able to achieve better information management and compliance, increased adoption rates for enterprise tools, and increased business process efficiency thanks to more effective information sharing and collaboration. 4. Enhance customer and supplier relationships Companies have begun to move beyond the idea that E2.0 simply means enabling customer reviews or embedding chat functionality. They are taking E2.0 to the next level and providing interactive experiences for their customers. For example, to enhance customer and supplier relationships, Wind River, a global leader in device software optimization, successfully partnered with Oracle to: Integrate ERP and ECM content to provide customers the latest and most relevant support information for products they own Enable customers to personalize their support experience and receive updates regarding patches, application notes, and other relevant content Enable discussions, wikis, and blogs for more efficient collaboration 5. 
Increase business visibility and responsiveness By strategically embedding collaboration and communication tools into specific business contexts, companies significantly increase visibility into changing business conditions--and can respond much more agilely. Texas A&M University System--one of the largest systems of higher education in the U.S.--partnered with Oracle to create a unified repository that would enable the retrieval of research and grant data from disparate systems via an Enterprise 2.0 user interface. By enabling researchers to customize their own portals with easy-to-use tools, they have also been able to significantly reduce their reliance on the IT department. Learn how other Oracle customers are leveraging Enterprise 2.0 technologies.

    Read the article

  • The Growing Importance of Network Virtualization

    - by user12608550
    The Growing Importance of Network Virtualization We often focus on server virtualization when we discuss cloud computing, but just as often we neglect to consider some of the critical implications of that technology. The ability to create virtual environments (or VEs [1]) means that we can create, destroy, activate and deactivate, and more importantly, MOVE them around within the cloud infrastructure. This elasticity and mobility has profound implications for how network services are defined, managed, and used to provide cloud services. It's not just servers that benefit from virtualization, it's the network as well. Network virtualization is becoming a hot topic, and not just for discussion but for companies like Oracle and others who have recently acquired net virtualization companies [2,3]. But even before this topic became so prominent, Solaris engineers were working on technologies in Solaris 11 to virtualize network services, known as Project Crossbow [4]. And why is network virtualization so important? Because old assumptions about network devices, topology, and management must be re-examined in light of the self-service, elasticity, and resource sharing requirements of cloud computing infrastructures. Static, hierarchical network designs, and inter-system traffic flows, need to be reconsidered and quite likely re-architected to take advantage of new features like virtual NICs and switches, bandwidth control, load balancing, and traffic isolation. For example, traditional multi-tier Web services (Web server, App server, DB server) that share net traffic over Ethernet wires can now be virtualized and hosted on shared-resource systems that communicate within a larger server at system bus speeds, increasing performance and reducing wired network traffic. And virtualized traffic flows can be monitored and adjusted as needed to optimize network performance for dynamically changing cloud workloads. Additionally, as VEs come and go and move around in the cloud, static network configuration methods cannot easily accommodate the routing and addressing flexibility that VE mobility implies; virtualizing the network itself is a requirement. Oracle Solaris 11 [5] includes key network virtualization technologies needed to implement cloud computing infrastructures. It includes features for the creation and management of virtual NICs and switches, and for the allocation and control of the traffic flows among VEs [6]. Additionally it allows for both sharing and dedication of hardware components to network tasks, such as allocating specific CPUs and vNICs to VEs, and even protocol-specific management of traffic. So, have a look at your current network topology and management practices in view of evolving cloud computing technologies. And don't simply duplicate the physical architecture of servers and connections in a virtualized environment…rethink the traffic flows among VEs and how they can be optimized using Oracle Solaris 11 and other Oracle products and services. [1] I use the term "virtual environment" or VE here instead of the more commonly used "virtual machine" or VM, because not all virtualized operating system environments are full OS kernels under the control of a hypervisor…in other words, not all VEs are VMs. In particular, VEs include Oracle Solaris zones, as well as SPARC VMs (previously called LDoms), and x86-based Solaris and Linux VMs running under hypervisors such as OEL, Xen, KVM, or VMware. 
[2] Oracle follows VMware into network virtualization space with Xsigo purchase: http://www.mercurynews.com/business/ci_21191001/oracle-follows-vmware-into-network-virtualization-space-xsigo
[3] Oracle Buys Xsigo: http://www.oracle.com/us/corporate/press/1721421
[4] Oracle Solaris 11 Networking Virtualization Technology: http://www.oracle.com/technetwork/server-storage/solaris11/technologies/networkvirtualization-312278.html
[5] Oracle Solaris 11: http://www.oracle.com/us/products/servers-storage/solaris/solaris11/overview/index.html
[6] For example, the Solaris 11 'dladm' command can be used to limit the bandwidth of a virtual NIC, as follows: dladm create-vnic -l net0 -p maxbw=100M vnic0

    Read the article

  • The Minimalist Approach to Content Governance - Request Phase

    - by Kellsey Ruppel
    Originally posted by John Brunswick. For each project, regardless of size, it is critical to understand the required ownership, business purpose, prerequisite education / resources needed to execute and success criteria around it. Without doing this, there is no way to get a handle on the content life-cycle, resulting in a mass of orphaned material. This lowers the quality of end user experiences. The good news is that by using a simple process in this request phase - we will not have to revisit this phase unless something drastic changes in the project. For each of the elements mentioned above in this stage, the why, how (technically focused) and impact are outlined with the intent of providing the most value to a small team.

    1. Ownership
    Why - Without ownership information it will not be possible to track and manage any of the content and take advantage of many features of enterprise content management technology. To hedge against this, we need to ensure that both an individual and their group or department within the organization are associated with the content.
    How - Apply metadata that indicates the owner and department or group that has responsibility for the content.
    Impact - It is possible to keep the content system optimized by running native reports against the metadata and acting on them based on what has been outlined for success criteria. This will maximize end user experience, as content will be faster to locate and more relevant to the user by virtue of working through a smaller collection.

    2. Business Purpose
    Why - This simple step will weed out requests that have tepid justification, as users will most likely not spend the effort to request resources if they do not have a real need.
    How - Use a simple online form to collect and workflow the request to management native to the content system.
    Impact - Minimizes the amount of user-generated content that is of low value to the organization.

    3. Prerequisite Education / Resources Needed
    Why - If a project cannot be properly staffed the probability of its success is going to be low. By outlining the resources needed - in both skill set and duration - it will cause the requesting party to think critically about the commitment needed to complete their project and what gap must be closed with regard to education of those resources.
    How - In the simple request form outlined above, resources and a commitment to fulfilling any needed education should be included with a brief acceptance clause that outlines the requesting party's commitment.
    Impact - This stage acts as a formal commitment to ensuring that resources are able to execute on the vision for the project.

    4. Success Criteria
    Why - Similar to the business purpose, this is a key element in helping to determine if the project and its respective content should continue to exist if it does not meet its intended goal.
    How - Set a review point for the project content that will check the progress against the originally outlined success criteria and then determine the fate of the content. This can even include logic that will tell the content system to remove items that have not been opened by any users in X amount of time.
    Impact - This ensures that projects and their contents do not live past their useful lifespans. Just as with orphaned content, non-relevant information will slow users' access to relevant materials for their jobs.
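    To make that last retention rule concrete, here is a purely hypothetical Python sketch; the ContentItem class, the 180-day threshold and the sample data are all invented for illustration, and a real content system would expose its own API and use the criteria recorded in the request form:

        from dataclasses import dataclass
        from datetime import datetime, timedelta

        @dataclass
        class ContentItem:
            name: str
            owner: str                     # the owner captured as metadata in step 1
            last_accessed: datetime

        STALE_AFTER = timedelta(days=180)  # illustrative stand-in for "X amount of time"

        def find_stale_items(items, now=None):
            """Return items that no user has opened within the threshold."""
            now = now or datetime.utcnow()
            return [item for item in items if now - item.last_accessed > STALE_AFTER]

        if __name__ == "__main__":
            items = [ContentItem("project-plan.docx", "jsmith",
                                 datetime.utcnow() - timedelta(days=400))]
            for item in find_stale_items(items):
                print("Review or remove: %s (owner: %s)" % (item.name, item.owner))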
Request Phase Summary With a simple form that outlines the ownership of a project and its content, business purpose, education and resources, along with success criteria, we can ensure that an enterprise content management system will stay clean and relevant to end users - allowing it to deliver the most value possible. The key here is to make it straightforward to make the request and let the content management technology manage as much as possible through metadata, retention policies and workflow. Doing these basic steps will allow project content to get off to a great start in the enterprise! Stay tuned for the next installment - the "Create Phase" - covering security access and workflow involved in content creation, enabling a practical layer of governance over our enterprise content repository.

    Read the article

  • Error deploying Spring with JAX-WS application in Jboss 6 server

    - by arlahiru
    I'm getting this error when I deploying spring+JAX-WS application in jboss server 6.1.0. 09:14:38,175 ERROR [org.springframework.web.context.ContextLoader] Context initialization failed: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'com.sun.xml.ws.transport.http.servlet.SpringBinding#0' defined in ServletContext resource [/WEB-INF/applicationContext.xml]: Cannot create inner bean '(inner bean)' of type [org.jvnet.jax_ws_commons.spring.SpringService] while setting bean property 'service'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name '(inner bean)': FactoryBean threw exception on object creation; nested exception is java.lang.LinkageError: loader constraint violation: when resolving field "DATETIME" the class loader (instance of org/jboss/classloader/spi/base/BaseClassLoader) of the referring class, javax/xml/datatype/DatatypeConstants, and the class loader (instance of <bootloader>) for the field's resolved type, loader constraint violation: when resolving field "DATETIME" the class loader at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:230) [:2.5.6] at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:122) [:2.5.6] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1245) [:2.5.6] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1010) [:2.5.6] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:472) [:2.5.6] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$1.run(AbstractAutowireCapableBeanFactory.java:409) [:2.5.6] at java.security.AccessController.doPrivileged(Native Method) [:1.7.0_05] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:380) [:2.5.6] at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:264) [:2.5.6] at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) [:2.5.6] at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:261) [:2.5.6] at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:185) [:2.5.6] at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:164) [:2.5.6] at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:429) [:2.5.6] at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:728) [:2.5.6] at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:380) [:2.5.6] at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:255) [:2.5.6] at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:199) [:2.5.6] at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:45) [:2.5.6] at 
org.apache.catalina.core.StandardContext.contextListenerStart(StandardContext.java:3369) [:6.1.0.Final] at org.apache.catalina.core.StandardContext.start(StandardContext.java:3828) [:6.1.0.Final] at org.jboss.web.tomcat.service.deployers.TomcatDeployment.performDeployInternal(TomcatDeployment.java:294) [:6.1.0.Final] at org.jboss.web.tomcat.service.deployers.TomcatDeployment.performDeploy(TomcatDeployment.java:146) [:6.1.0.Final] at org.jboss.web.deployers.AbstractWarDeployment.start(AbstractWarDeployment.java:476) [:6.1.0.Final] at org.jboss.web.deployers.WebModule.startModule(WebModule.java:118) [:6.1.0.Final] at org.jboss.web.deployers.WebModule.start(WebModule.java:95) [:6.1.0.Final] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [:1.7.0_05] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) [:1.7.0_05] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [:1.7.0_05] at java.lang.reflect.Method.invoke(Method.java:601) [:1.7.0_05] at org.jboss.mx.interceptor.ReflectedDispatcher.invoke(ReflectedDispatcher.java:157) [:6.0.0.GA] at org.jboss.mx.server.Invocation.dispatch(Invocation.java:96) [:6.0.0.GA] at org.jboss.mx.server.Invocation.invoke(Invocation.java:88) [:6.0.0.GA] at org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:271) [:6.0.0.GA] at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:670) [:6.0.0.GA] at org.jboss.system.microcontainer.ServiceProxy.invoke(ServiceProxy.java:206) [:2.2.0.SP2] at $Proxy41.start(Unknown Source) at org.jboss.system.microcontainer.StartStopLifecycleAction.installAction(StartStopLifecycleAction.java:53) [:2.2.0.SP2] at org.jboss.system.microcontainer.StartStopLifecycleAction.installAction(StartStopLifecycleAction.java:41) [:2.2.0.SP2] at org.jboss.dependency.plugins.action.SimpleControllerContextAction.simpleInstallAction(SimpleControllerContextAction.java:62) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.dependency.plugins.action.AccessControllerContextAction.install(AccessControllerContextAction.java:71) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.dependency.plugins.AbstractControllerContextActions.install(AbstractControllerContextActions.java:51) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:379) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.system.microcontainer.ServiceControllerContext.install(ServiceControllerContext.java:301) [:2.2.0.SP2] at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:2044) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:1083) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.dependency.plugins.AbstractController.executeOrIncrementStateDirectly(AbstractController.java:1322) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1246) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1139) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:939) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:654) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.system.ServiceController.doChange(ServiceController.java:671) [:6.1.0.Final (Build SVNTag:JBoss_6.1.0.Final date: 
20110816)] at org.jboss.system.ServiceController.start(ServiceController.java:443) [:6.1.0.Final (Build SVNTag:JBoss_6.1.0.Final date: 20110816)] at org.jboss.system.deployers.ServiceDeployer.start(ServiceDeployer.java:189) [:6.1.0.Final] at org.jboss.system.deployers.ServiceDeployer.deploy(ServiceDeployer.java:102) [:6.1.0.Final] at org.jboss.system.deployers.ServiceDeployer.deploy(ServiceDeployer.java:49) [:6.1.0.Final] at org.jboss.deployers.spi.deployer.helpers.AbstractSimpleRealDeployer.internalDeploy(AbstractSimpleRealDeployer.java:63) [:2.2.2.GA] at org.jboss.deployers.spi.deployer.helpers.AbstractRealDeployer.deploy(AbstractRealDeployer.java:55) [:2.2.2.GA] at org.jboss.deployers.plugins.deployers.DeployerWrapper.deploy(DeployerWrapper.java:179) [:2.2.2.GA] at org.jboss.deployers.plugins.deployers.DeployersImpl.doDeploy(DeployersImpl.java:1832) [:2.2.2.GA] at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1550) [:2.2.2.GA] at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1571) [:2.2.2.GA] at org.jboss.deployers.plugins.deployers.DeployersImpl.install(DeployersImpl.java:1491) [:2.2.2.GA] at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:379) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:2044) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:1083) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.dependency.plugins.AbstractController.executeOrIncrementStateDirectly(AbstractController.java:1322) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1246) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1139) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:939) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:654) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.deployers.plugins.deployers.DeployersImpl.change(DeployersImpl.java:1983) [:2.2.2.GA] at org.jboss.deployers.plugins.deployers.DeployersImpl.process(DeployersImpl.java:1076) [:2.2.2.GA] at org.jboss.deployers.plugins.main.MainDeployerImpl.process(MainDeployerImpl.java:679) [:2.2.2.GA] at org.jboss.system.server.profileservice.deployers.MainDeployerPlugin.process(MainDeployerPlugin.java:106) [:6.1.0.Final] at org.jboss.profileservice.dependency.ProfileControllerContext$DelegateDeployer.process(ProfileControllerContext.java:143) [:0.2.2] at org.jboss.profileservice.plugins.deploy.actions.DeploymentStartAction.doPrepare(DeploymentStartAction.java:98) [:0.2.2] at org.jboss.profileservice.management.actions.AbstractTwoPhaseModificationAction.prepare(AbstractTwoPhaseModificationAction.java:101) [:0.2.2] at org.jboss.profileservice.management.ModificationSession.prepare(ModificationSession.java:87) [:0.2.2] at org.jboss.profileservice.management.AbstractActionController.internalPerfom(AbstractActionController.java:234) [:0.2.2] at org.jboss.profileservice.management.AbstractActionController.performWrite(AbstractActionController.java:213) [:0.2.2] at org.jboss.profileservice.management.AbstractActionController.perform(AbstractActionController.java:150) [:0.2.2] at 
org.jboss.profileservice.plugins.deploy.AbstractDeployHandler.startDeployments(AbstractDeployHandler.java:168) [:0.2.2] at org.jboss.profileservice.management.upload.remoting.DeployHandlerDelegate.startDeployments(DeployHandlerDelegate.java:74) [:6.1.0.Final] at org.jboss.profileservice.management.upload.remoting.DeployHandler.invoke(DeployHandler.java:156) [:6.1.0.Final] at org.jboss.remoting.ServerInvoker.invoke(ServerInvoker.java:967) [:6.1.0.Final] at org.jboss.remoting.transport.socket.ServerThread.completeInvocation(ServerThread.java:791) [:6.1.0.Final] at org.jboss.remoting.transport.socket.ServerThread.processInvocation(ServerThread.java:744) [:6.1.0.Final] at org.jboss.remoting.transport.socket.ServerThread.dorun(ServerThread.java:548) [:6.1.0.Final] at org.jboss.remoting.transport.socket.ServerThread.run(ServerThread.java:234) [:6.1.0.Final] Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name '(inner bean)': FactoryBean threw exception on object creation; nested exception is java.lang.LinkageError: loader constraint violation: when resolving field "DATETIME" the class loader (instance of org/jboss/classloader/spi/base/BaseClassLoader) of the referring class, javax/xml/datatype/DatatypeConstants, and the class loader (instance of <bootloader>) for the field's resolved type, loader constraint violation: when resolving field "DATETIME" the class loader at org.springframework.beans.factory.support.FactoryBeanRegistrySupport$1.run(FactoryBeanRegistrySupport.java:127) [:2.5.6] at java.security.AccessController.doPrivileged(Native Method) [:1.7.0_05] at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:116) [:2.5.6] at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.getObjectFromFactoryBean(FactoryBeanRegistrySupport.java:98) [:2.5.6] at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:223) [:2.5.6] ... 
89 more Caused by: java.lang.LinkageError: loader constraint violation: when resolving field "DATETIME" the class loader (instance of org/jboss/classloader/spi/base/BaseClassLoader) of the referring class, javax/xml/datatype/DatatypeConstants, and the class loader (instance of <bootloader>) for the field's resolved type, loader constraint violation: when resolving field "DATETIME" the class loader at com.sun.xml.bind.v2.model.impl.RuntimeBuiltinLeafInfoImpl.<clinit>(RuntimeBuiltinLeafInfoImpl.java:263) [:2.2] at com.sun.xml.bind.v2.model.impl.RuntimeTypeInfoSetImpl.<init>(RuntimeTypeInfoSetImpl.java:65) [:2.2] at com.sun.xml.bind.v2.model.impl.RuntimeModelBuilder.createTypeInfoSet(RuntimeModelBuilder.java:133) [:2.2] at com.sun.xml.bind.v2.model.impl.RuntimeModelBuilder.createTypeInfoSet(RuntimeModelBuilder.java:85) [:2.2] at com.sun.xml.bind.v2.model.impl.ModelBuilder.<init>(ModelBuilder.java:156) [:2.2] at com.sun.xml.bind.v2.model.impl.RuntimeModelBuilder.<init>(RuntimeModelBuilder.java:93) [:2.2] at com.sun.xml.bind.v2.runtime.JAXBContextImpl.getTypeInfoSet(JAXBContextImpl.java:473) [:2.2] at com.sun.xml.bind.v2.runtime.JAXBContextImpl.<init>(JAXBContextImpl.java:319) [:2.2] at com.sun.xml.bind.v2.runtime.JAXBContextImpl$JAXBContextBuilder.build(JAXBContextImpl.java:1170) [:2.2] at com.sun.xml.bind.v2.ContextFactory.createContext(ContextFactory.java:188) [:2.2] at com.sun.xml.bind.api.JAXBRIContext.newInstance(JAXBRIContext.java:111) [:2.2] at com.sun.xml.ws.developer.JAXBContextFactory$1.createJAXBContext(JAXBContextFactory.java:113) [:2.2.3] at com.sun.xml.ws.model.AbstractSEIModelImpl$1.run(AbstractSEIModelImpl.java:166) [:2.2.3] at com.sun.xml.ws.model.AbstractSEIModelImpl$1.run(AbstractSEIModelImpl.java:159) [:2.2.3] at java.security.AccessController.doPrivileged(Native Method) [:1.7.0_05] at com.sun.xml.ws.model.AbstractSEIModelImpl.createJAXBContext(AbstractSEIModelImpl.java:158) [:2.2.3] at com.sun.xml.ws.model.AbstractSEIModelImpl.postProcess(AbstractSEIModelImpl.java:99) [:2.2.3] at com.sun.xml.ws.model.RuntimeModeler.buildRuntimeModel(RuntimeModeler.java:250) [:2.2.3] at com.sun.xml.ws.server.EndpointFactory.createSEIModel(EndpointFactory.java:343) [:2.2.3] at com.sun.xml.ws.server.EndpointFactory.createEndpoint(EndpointFactory.java:205) [:2.2.3] at com.sun.xml.ws.api.server.WSEndpoint.create(WSEndpoint.java:513) [:2.2.3] at org.jvnet.jax_ws_commons.spring.SpringService.getObject(SpringService.java:333) [:] at org.jvnet.jax_ws_commons.spring.SpringService.getObject(SpringService.java:45) [:] at org.springframework.beans.factory.support.FactoryBeanRegistrySupport$1.run(FactoryBeanRegistrySupport.java:121) [:2.5.6] ... 93 more *** DEPLOYMENTS IN ERROR: Name -> Error vfs:///usr/jboss/jboss-6.1.0.Final/server/default/deploy/SpringWS.war -> org.jboss.deployers.spi.DeploymentException: URL file:/usr/jboss/jboss-6.1.0.Final/server/default/tmp/vfs/automountb159aa6e8c1b8582/SpringWS.war-4ec4d0151b4c7d7/ deployment failed DEPLOYMENTS IN ERROR: Deployment "vfs:///usr/jboss/jboss-6.1.0.Final/server/default/deploy/SpringWS.war" is in error due to the following reason(s): org.jboss.deployers.spi.DeploymentException: URL file:/usr/jboss/jboss-6.1.0.Final/server/default/tmp/vfs/automountb159aa6e8c1b8582/SpringWS.war-4ec4d0151b4c7d7/ deployment failed But this application is workinng correctly in glassfish 3.x server and web service is up and running. I'm using Netbeans IDE in Ubuntu 12.04 to build and deploy the application and I couldn't figure out what is the issue here. 
I guess it's about Spring and JBoss, because it's working smoothly in GlassFish. Please help me to solve this issue. Thanks all in advance.

    Read the article

  • django: error while migrating from 1.1 to 1.2

    - by gruszczy
    I am trying to migrate our django project from 1.1 to 1.2, but I get following error: Traceback (most recent call last): File "./manage.py", line 11, in <module> execute_manager(settings) File "/usr/local/lib/python2.6/dist-packages/django/core/management/__init__.py", line 438, in execute_manager utility.execute() File "/usr/local/lib/python2.6/dist-packages/django/core/management/__init__.py", line 379, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/usr/local/lib/python2.6/dist-packages/django/core/management/base.py", line 191, in run_from_argv self.execute(*args, **options.__dict__) File "/usr/local/lib/python2.6/dist-packages/django/core/management/base.py", line 209, in execute translation.activate('en-us') File "/usr/local/lib/python2.6/dist-packages/django/utils/translation/__init__.py", line 66, in activate return real_activate(language) File "/usr/local/lib/python2.6/dist-packages/django/utils/functional.py", line 55, in _curried return _curried_func(*(args+moreargs), **dict(kwargs, **morekwargs)) File "/usr/local/lib/python2.6/dist-packages/django/utils/translation/__init__.py", line 36, in delayed_loader return getattr(trans, real_name)(*args, **kwargs) File "/usr/local/lib/python2.6/dist-packages/django/utils/translation/trans_real.py", line 193, in activate _active[currentThread()] = translation(language) File "/usr/local/lib/python2.6/dist-packages/django/utils/translation/trans_real.py", line 176, in translation default_translation = _fetch(settings.LANGUAGE_CODE) File "/usr/local/lib/python2.6/dist-packages/django/utils/translation/trans_real.py", line 159, in _fetch app = import_module(appname) File "/usr/local/lib/python2.6/dist-packages/django/utils/importlib.py", line 35, in import_module __import__(name) File "/home/gruszczy/Programy/bozorth/../bozorth/notifications/__init__.py", line 2, in <module> from django.db.models.signals import post_save File "/usr/local/lib/python2.6/dist-packages/django/db/__init__.py", line 75, in <module> connection = connections[DEFAULT_DB_ALIAS] File "/usr/local/lib/python2.6/dist-packages/django/db/utils.py", line 92, in __getitem__ conn = backend.DatabaseWrapper(db, alias) File "/usr/local/lib/python2.6/dist-packages/django/db/backends/sqlite3/base.py", line 154, in __init__ super(DatabaseWrapper, self).__init__(*args, **kwargs) TypeError: __init__() takes exactly 2 arguments (3 given) Anyone encountered this problem and know, how to solve?
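    For reference, the most visible settings change between 1.1 and 1.2 is the new multi-database DATABASES dictionary that replaces the old DATABASE_* settings. A minimal sketch of the new-style sqlite3 configuration follows (the path is a placeholder, and this is background on the 1.1-to-1.2 change rather than a confirmed fix for the traceback above):

        # settings.py (Django 1.2 style)
        DATABASES = {
            'default': {
                'ENGINE': 'django.db.backends.sqlite3',  # dotted backend path, new in 1.2
                'NAME': '/path/to/dev.db',               # placeholder path
            }
        }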

    Read the article

  • Django Error - AttributeError: 'Settings' object has no attribute 'LOCALE_PATHS'

    - by Randy Simon
    I am trying to learn django by following along with this tutorial. I am using django version 1.1.1 I run django-admin.py startproject mysite and it creates the files it should. Then I try to start the server by running python manage.py runserver but here is where I get the following error. Traceback (most recent call last): File "manage.py", line 11, in <module> execute_manager(settings) File "/Library/Python/2.6/site-packages/django/core/management/__init__.py", line 362, in execute_manager utility.execute() File "/Library/Python/2.6/site-packages/django/core/management/__init__.py", line 303, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/Library/Python/2.6/site-packages/django/core/management/base.py", line 195, in run_from_argv self.execute(*args, **options.__dict__) File "/Library/Python/2.6/site-packages/django/core/management/base.py", line 213, in execute translation.activate('en-us') File "/Library/Python/2.6/site-packages/django/utils/translation/__init__.py", line 73, in activate return real_activate(language) File "/Library/Python/2.6/site-packages/django/utils/translation/__init__.py", line 43, in delayed_loader return g['real_%s' % caller](*args, **kwargs) File "/Library/Python/2.6/site-packages/django/utils/translation/trans_real.py", line 205, in activate _active[currentThread()] = translation(language) File "/Library/Python/2.6/site-packages/django/utils/translation/trans_real.py", line 194, in translation default_translation = _fetch(settings.LANGUAGE_CODE) File "/Library/Python/2.6/site-packages/django/utils/translation/trans_real.py", line 172, in _fetch for localepath in settings.LOCALE_PATHS: File "/Library/Python/2.6/site-packages/django/utils/functional.py", line 273, in __getattr__ return getattr(self._wrapped, name) AttributeError: 'Settings' object has no attribute 'LOCALE_PATHS' Now, I can add a LOCAL_PATH atribute set to an empty string to my settings.py file but then it just complains about another setting and so on. What am I missing here?
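    A quick way to confirm which Django installation is actually being imported, and that its default settings define LOCALE_PATHS, is a small diagnostic sketch like this (just a check, not a fix):

        import django
        from django.conf import global_settings

        print(django.get_version())          # should report 1.1.1
        print(global_settings.LOCALE_PATHS)  # the framework default, expected to be ()

    If the printed version is not the one expected, or the attribute is missing from global_settings, the site-packages directory may be mixing files from an older Django release.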

    Read the article

  • Excel export displaying '#####...'

    - by Cypher
    I'm trying to export an Excel database into .txt (Tab Delimited), but some of my cells are quite large. When I export into a txt some of the cells are exported as '#######....' which is surprisingly useless. Has this happened to anyone else? Do you know an easy fix? Data from one cell of my column: Accounting, African Studies, Agricultural/Bioresource Engineering, Agricultural Economics, Agricultural Science, Anatomy/Cell Biology, Animal Biology, Animal Science, Anthropology, Applied Zoology, Architecture, Art History, Atmospheric/Oceanic Science, Biochemistry, Biology, Botanical Sciences, Canadian Studies, Chemical Engineering, Chemistry/Bio-Organic/Environmental/Materials,ChurchMusicPerformance, Civil Engineering/Applied Mechanics, Classics, Composition, Computer Engineering,ComputerScience, ContemporaryGerman Studies, Dietetics, Early Music Performance, Earth/Planetary Sciences, East Asian Studies, Economics, Electrical Engineering, English Literature/ Drama/Theatre/Cultural Studies, Entrepreneurship, Environment, Environmental Biology, Finance, Food Science, Foundations of Computing, French Language/Linguistics/Literature/Translation, Geography, Geography/ Urban Systems, German, German Language/Literature/Culture, Hispanic Languages/Literature/Culture,History,Humanistic Studies, Industrial Relations, Information Systems, International Business, International Development Studies, Italian Studies/Medieval/Renaissance, Jazz Performance, Jewish Studies, Keyboard Studies, Kindergarten/Elementary Education, Kindergarten/Elementary Education/Jewish Studies,Kinesiology, Labor/Management Relations, Latin American/Caribbean Studies, Linguistics, Literature/Translation, Management Science, Marketing, Materials Engineering,Mathematics,Mathematics/Statistics,Mechanical Engineering, Microbiology, Microbiology/Immunology, Middle Eastern Studies, Mining Engineering, Music, Music Education, MusicHistory,Music Technology,Music Theory,North American Studies, Nutrition,OperationsManagement,OrganizationalBehavior/Human Resources Management, Performing Arts, Philosophy, Physical Education, Physics, Physiology, Plant Sciences, Political Science, Psychology, Quebec Studies, Religious Studies/Scriptures/Interpretations/World Religions,ResourceConservation,Russian, Science for Teachers,Secondary Education, Secondary Education/Music, Secondary Education/Science, SocialWork, Sociology, Software Engineering, Soil Science, Strategic Management, Teaching of French/English as a Second Language, Theology, Wildlife Biology, Wildlife Resources, Women’s Studies.

    Read the article

  • python manage.py runserver fails

    - by Randy Simon
    I am trying to learn django by following along with this tutorial. I am using django version 1.1.1 I run django-admin.py startproject mysite and it creates the files it should. Then I try to start the server by running python manage.py runserver but here is where I get the following error. Traceback (most recent call last): File "manage.py", line 11, in <module> execute_manager(settings) File "/Library/Python/2.6/site-packages/django/core/management/__init__.py", line 362, in execute_manager utility.execute() File "/Library/Python/2.6/site-packages/django/core/management/__init__.py", line 303, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/Library/Python/2.6/site-packages/django/core/management/base.py", line 195, in run_from_argv self.execute(*args, **options.__dict__) File "/Library/Python/2.6/site-packages/django/core/management/base.py", line 213, in execute translation.activate('en-us') File "/Library/Python/2.6/site-packages/django/utils/translation/__init__.py", line 73, in activate return real_activate(language) File "/Library/Python/2.6/site-packages/django/utils/translation/__init__.py", line 43, in delayed_loader return g['real_%s' % caller](*args, **kwargs) File "/Library/Python/2.6/site-packages/django/utils/translation/trans_real.py", line 205, in activate _active[currentThread()] = translation(language) File "/Library/Python/2.6/site-packages/django/utils/translation/trans_real.py", line 194, in translation default_translation = _fetch(settings.LANGUAGE_CODE) File "/Library/Python/2.6/site-packages/django/utils/translation/trans_real.py", line 172, in _fetch for localepath in settings.LOCALE_PATHS: File "/Library/Python/2.6/site-packages/django/utils/functional.py", line 273, in __getattr__ return getattr(self._wrapped, name) AttributeError: 'Settings' object has no attribute 'LOCALE_PATHS' Now, I can add a LOCALE_PATH atribute and set to an empty tuple to my settings.py file but then it just complains about another setting and so on. What am I missing here?

    Read the article

  • Software development metrics and reporting

    - by David M
    I've had some interesting conversations recently about software development metrics, in particular how they can be used in a reasonably large organisation to help development teams work better. I know there have been Stack Overflow questions about which metrics are good to use - like this one, but my question is more about which metrics are useful to which stakeholders, and at what level of aggregation. As an example, my view is that code coverage is a useful metric in the following ways (and maybe others): For a team's own internal use when combined with other measurements. For facilitating/enabling/mentoring teams, where it might be instructive when considered on a team-by-team basis as a trend (e.g. if teams A and B have coverage this month of 75 and 50, I'd be more concerned with team A than B if the previous month they'd had 80 and 40). For senior management when presented as an aggregated statistic across a number of teams or a whole department. But I don't think it's useful for senior management to see this on a team-by-team basis, as this encourages artificial attempts to bolster coverage with tests that merely exercise, rather than test, code. I'm in an organisation with a couple of levels in its management hierarchy, but where the vast majority of managers are technically minded and able (with many still getting their hands dirty). Some of the development teams are leading the way in driving towards agile development practices, but others lag, and there is now a serious mandate from the top for this to be the way the organisation works. A couple of us are starting a programme to encourage this. In this sort of an organisation, what sort of metrics do you think are useful, to whom, why, and at what level of aggregation? I don't want people to feel their performance is being assessed based on a metric that they can artificially influence; at the same time, the senior management are going to want some sort of evidence that progress is being made. What advice or caveats can you provide based on experience in your own organisations? EDIT We are definitely wanting to use metrics as a tool for organisational improvement, not as a tool for individual performance measurement.

    Read the article

  • Natural vs surrogate keys on support tables

    - by Bugeo
    I have read many articles about the battle between natural and surrogate primary keys. I agree with using surrogate keys to identify records in tables whose contents are created by the user. But in the case of supporting tables, what should I use? For example, take a hypothetical table "orderStates". If I use a natural key, it would have the following data: TABLE ORDERSTATES {ID: "NEW", NAME: "New"} {ID: "MANAGEMENT", NAME: "Management"} {ID: "SHIPPED", NAME: "Shipped"} If I use a surrogate key, it would have the following data: TABLE ORDERSTATES {ID: 1, CODE: "NEW", NAME: "New"} {ID: 2, CODE: "MANAGEMENT", NAME: "Management"} {ID: 3, CODE: "SHIPPED", NAME: "Shipped"} Now let's take an example: a user enters a new order. In the case where I use natural keys, in the code I can write this: newOrder.StateOrderId = "NEW"; With surrogate keys, instead, every time I have an additional step: stateOrderId_NEW = .... // I retrieve the id corresponding to the record with code "NEW" newOrder.StateOrderId = stateOrderId_NEW; The same will happen every time I have to move the order to a new status. So, in this case, what are the reasons to choose one key type over the other?
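    To illustrate that extra hop in a language-neutral way, here is a small Python sketch; the in-memory table and the state_id helper are invented for illustration, and in a real application this would be a query or a cached lookup against the ORDERSTATES table:

        # Surrogate-key version of the ORDERSTATES table from the example above.
        ORDER_STATES = [
            {"id": 1, "code": "NEW", "name": "New"},
            {"id": 2, "code": "MANAGEMENT", "name": "Management"},
            {"id": 3, "code": "SHIPPED", "name": "Shipped"},
        ]

        def state_id(code):
            # With a surrogate key the stable business value is CODE,
            # so every status change first resolves CODE -> ID.
            return next(row["id"] for row in ORDER_STATES if row["code"] == code)

        class Order(object):
            def __init__(self):
                # Natural key:   self.state_order_id = "NEW" would be enough.
                # Surrogate key: resolve the code first, then assign the id.
                self.state_order_id = state_id("NEW")

        new_order = Order()
        print(new_order.state_order_id)  # 1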

    Read the article

  • Rebuilding old (2010) django project in 2012

    - by birgit
    I am trying to make an old Django project run again. After seemingly having solved issues with old sorl.thumbnail versions and deprecated expressions I now get this error when running python manage.py runserver I also tried to copy & paste my old files into a new Django project and get the exactly same error. Maybe someone here has a clue where the problem lies? Unhandled exception in thread started by <bound method Command.inner_run of <django.contrib.staticfiles.management.commands.runserver.Command object at 0x2a80510>> Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/django/core/management/commands/runserver.py", line 88, in inner_run self.validate(display_num_errors=True) File "/usr/lib/python2.7/dist-packages/django/core/management/base.py", line 249, in validate num_errors = get_validation_errors(s, app) File "/usr/lib/python2.7/dist-packages/django/core/management/validation.py", line 35, in get_validation_errors for (app_name, error) in get_app_errors().items(): File "/usr/lib/python2.7/dist-packages/django/db/models/loading.py", line 146, in get_app_errors self._populate() File "/usr/lib/python2.7/dist-packages/django/db/models/loading.py", line 61, in _populate self.load_app(app_name, True) File "/usr/lib/python2.7/dist-packages/django/db/models/loading.py", line 78, in load_app models = import_module('.models', app_name) File "/usr/lib/python2.7/dist-packages/django/utils/importlib.py", line 35, in import_module __import__(name) File "/home/me/Documents/wdws/wdws/../wdws/cityofwindows/models.py", line 73, in <module> class Image(models.Model): File "/home/me/Documents/wdws/wdws/../wdws/cityofwindows/models.py", line 83, in Image 'large': {'size': (640, 640)}, File "/usr/lib/python2.7/dist-packages/django/db/models/fields/files.py", line 233, in __init__ super(FileField, self).__init__(verbose_name, name, **kwargs) TypeError: __init__() got an unexpected keyword argument 'extra_thumbnails' I need to re-build the project just for visual documentation locally... so also any hints on how to quickly re-run outdated django-projects are very welcome!! Thanks a lot (using Ubuntu 12.04)
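    For what it's worth, the 'extra_thumbnails' keyword in the traceback matches the field API of the legacy sorl.thumbnail package, so one plausible reading is that models.py still uses the old field arguments while the installed library no longer consumes them. A hedged reconstruction of what the relevant model might look like (field names and the smaller size are guesses; only the (640, 640) value comes from the traceback):

        # Hedged guess, assuming the legacy sorl.thumbnail field API.
        from django.db import models
        from sorl.thumbnail.fields import ImageWithThumbnailsField

        class Image(models.Model):
            photo = ImageWithThumbnailsField(
                upload_to="images",
                thumbnail={"size": (150, 150)},
                extra_thumbnails={
                    "large": {"size": (640, 640)},  # matches line 83 of the traceback
                },
            )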

    Read the article

  • Can JMX operations take interfaces as parameters?

    - by Thor84no
    I'm having problems with an MBean that takes a Map<String, Object> as a parameter. If I try to execute it via JMX using a proxy object, I get an Exception: Caused by: javax.management.ReflectionException at org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:231) at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:668) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) Caused by: java.lang.IllegalArgumentException: Unable to find operation updateProperties(java.util.HashMap) It appears that it attempts to use the actual implementation class rather than the interface, and doesn't check if this is a child of the required interface. The same thing happens for extended classes (for example declare HashMap, pass in LinkedHashMap). Does this mean it's impossible to use an interface for such methods? At the moment I'm getting around it by changing the method signature to accept a HashMap, but it seems odd that I wouldn't be able to use interfaces (or extended classes) in my MBeans. Edit: The proxy object is being created by an in-house utility class called JmxInvocationHandler. The (hopefully) relevant parts of it are as follows: public class JmxInvocationHandler implements InvocationHandler { ... public static <T> T createMBean(final Class<T> iface, SFSTestProperties properties, String mbean, int shHostID) { T newProxyInstance = (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class[] { iface }, (InvocationHandler) new JmxInvocationHandler(properties, mbean, shHostID)); return newProxyInstance; } ... private JmxInvocationHandler(SFSTestProperties properties, String mbean, int shHostID) { this.mbeanName = mbean + MBEAN_SUFFIX + shHostID; msConfig = new MsConfiguration(properties.getHost(0), properties.getMSAdminPort(), properties.getMSUser(), properties.getMSPassword()); } ... public Object invoke(Object proxy, Method method, Object[] args) throws Throwable { if (management == null) { management = ManagementClientStore.getInstance().getManagementClient(msConfig.getHost(), msConfig.getAdminPort(), msConfig.getUser(), msConfig.getPassword(), false); } final Object result = management.methodCall(mbeanName, method.getName(), args == null? new Object[] {} : args); return result; } }

    Read the article

  • OpenStack: Keystone service stops immediately after starting

    - by user241618
    When restarting the Keystone service, it starts with a PID but stops again within a fraction of a second. Checking the status immediately afterwards shows a different PID, and on rechecking the service is dead. root@hyper5:~# service keystone restart stop: Unknown instance: keystone start/running, process 37746 root@hyper5:~# service keystone status keystone start/running, process 37750 root@hyper5:~# service keystone status keystone stop/waiting
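    When an upstart job dies this quickly, the real error is normally in Keystone's own log rather than in the service output. A hedged sketch of how to surface it is below; the paths and the keystone-all command are common Ubuntu defaults for that era, so treat them as assumptions for your install.

    # Assumed default Ubuntu paths/commands -- adjust to your installation.
    sudo tail -n 50 /var/log/keystone/keystone.log           # last startup error Keystone logged
    sudo cat /var/log/upstart/keystone.log                   # what upstart captured when the job died
    sudo -u keystone keystone-all --debug                    # run in the foreground to see the traceback directly

    Common culprits at this point are a bad /etc/keystone/keystone.conf (often the database connection string) or permissions on /var/log/keystone and /var/lib/keystone.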

    Read the article

  • TFS 2010 Build Custom Activity for Merging Assemblies

    - by Jakob Ehn
    *** The sample build process template discussed in this post is available for download from here: http://cid-ee034c9f620cd58d.office.live.com/self.aspx/BlogSamples/ILMerge.xaml ***   In my previous post I talked about library builds that we use to build and replicate dependencies between applications in TFS. This is typically used for common libraries and tools that several other application need to reference. When the libraries grow in size over time, so does the number of assemblies. So all solutions that uses the common library must reference all the necessary assemblies that they need, and if we for example do a refactoring and extract some code into a new assembly, all the clients must update their references to reflect these changes, otherwise it won’t compile. To improve on this, we use a tool from Microsoft Research called ILMerge (Download from here). It can be used to merge several assemblies into one assembly that contains all types. If you haven’t used this tool before, you should check it out. Previously I have implemented this in builds using a simple batch file that contains the full command, something like this: "%ProgramFiles(x86)%\microsoft\ilmerge\ilmerge.exe" /target:library /attr:ClassLibrary1.bl.dll /out:MyNewLibrary.dll ClassLibrary1.dll ClassLibrar2.dll ClassLibrary3.dll This merges 3 assemblies (ClassLibrary1, 2 and 3) into a new assembly called MyNewLibrary.dll. It will copy the attributes (file version, product version etc..) from ClassLibrary1.dll, using the /attr switch. For more info on ILMerge command line tool, see the above link. This approach works, but requires a little bit too much knowledge for the developers creating builds, therefor I have implemented a custom activity that wraps the use of ILMerge. This makes it much simpler to setup a new build definition and have the build automatically do the merging. The usage of the activity is then implemented as part of the Library Build process template mentioned in the previous post. For this article I have just created a simple build process template that only performs the ILMerge operation.   Below is the code for the custom activity. To make it compile, you need to reference the ILMerge.exe assembly. /// <summary> /// Activity for merging a list of assembies into one, using ILMerge /// </summary> public sealed class ILMergeActivity : BaseCodeActivity { /// <summary> /// A list of file paths to the assemblies that should be merged /// </summary> [RequiredArgument] public InArgument<IEnumerable<string>> InputAssemblies { get; set; } /// <summary> /// Full path to the generated assembly /// </summary> [RequiredArgument] public InArgument<string> OutputFile { get; set; } /// <summary> /// Which input assembly that the attibutes for the generated assembly should be copied from. /// Optional. If not specified, the first input assembly will be used /// </summary> public InArgument<string> AttributeFile { get; set; } /// <summary> /// Kind of assembly to generate, dll or exe /// </summary> public InArgument<TargetKindEnum> TargetKind { get; set; } // If your activity returns a value, derive from CodeActivity<TResult> // and return the value from the Execute method. 
protected override void Execute(CodeActivityContext context) { string message = InputAssemblies.Get(context).Aggregate("", (current, assembly) => current + (assembly + " ")); TrackMessage(context, "Merging " + message + " into " + OutputFile.Get(context)); ILMerge m = new ILMerge(); m.SetInputAssemblies(InputAssemblies.Get(context).ToArray()); m.TargetKind = TargetKind.Get(context) == TargetKindEnum.Dll ? ILMerge.Kind.Dll : ILMerge.Kind.Exe; m.OutputFile = OutputFile.Get(context); m.AttributeFile = !String.IsNullOrEmpty(AttributeFile.Get(context)) ? AttributeFile.Get(context) : InputAssemblies.Get(context).First(); m.SetTargetPlatform(RuntimeEnvironment.GetSystemVersion().Substring(0,2), RuntimeEnvironment.GetRuntimeDirectory()); m.Merge(); TrackMessage(context, "Generated " + m.OutputFile); } } [Browsable(true)] public enum TargetKindEnum { Dll, Exe } NB: The activity inherits from a BaseCodeActivity class which is an internal helper class which contains some methods and properties useful for moste custom activities. In this case, it uses the TrackeMessage method for writing to the build log. You either need to remove the TrackMessage method calls, or implement this yourself (which is not very hard… ) The custom activity has the following input arguments: InputAssemblies A list with the (full) paths to the assemblies to merge OutputFile The name of the resulting merged assembly AttributeFile Which assembly to use as the template for the attribute of the merged assembly. This argument is optional and if left blank, the first assembly in the input list is used TargetKind Decides what type of assembly to create, can be either a dll or an exe Of course, there are more switches to the ILMerge.exe, and these can be exposed as input arguments as well if you need it. To show how the custom activity can be used, I have attached a build process template (see link at the top of this post) that merges the output of the projects being built (CommonLibrary.dll and CommonLibrary2.dll) into a merged assembly (NewLibrary.dll). The build process template has the following custom process parameters:   The Assemblies To Merge argument is passed into a FindMatchingFiles activity to located all assemblies that are located in the BinariesDirectory folder after the compilation has been performed by Team Build. Here is the complete sequence of activities that performs the merge operation. It is located at the end of the Try, Compile, Test and Associate… sequence: It splits the AssembliesToMerge parameter and appends the full path (using the BinariesDirectory variable) and then enumerates the matching files using the FindMatchingFiles activity. When running the build, you can see that it merges two assemblies into a new one:     And the merged assembly (and associated pdb file) is copied to the drop location together with the rest of the assemblies:
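    The post notes that the activity inherits from an internal BaseCodeActivity whose TrackMessage helper is not included, and that you can either remove those calls or write the helper yourself. For completeness, here is a minimal hedged sketch of such a base class using the TFS 2010 build tracking API; it is an assumption of what the author's internal helper might look like, not the original code.

    using System.Activities;
    using Microsoft.TeamFoundation.Build.Client;
    using Microsoft.TeamFoundation.Build.Workflow.Activities;

    // Hedged sketch of a possible BaseCodeActivity with a TrackMessage helper.
    [BuildActivity(HostEnvironmentOption.All)]
    public abstract class BaseCodeActivity : CodeActivity
    {
        protected void TrackMessage(CodeActivityContext context, string message)
        {
            // Writes the message to the build log via the TFS 2010 tracking extension method.
            context.TrackBuildMessage(message, BuildMessageImportance.Normal);
        }
    }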

    Read the article

  • CodePlex Daily Summary for Friday, February 19, 2010

    CodePlex Daily Summary for Friday, February 19, 2010New ProjectsApplication Management Library: Application Management makes your application life easier. It will automatic do memory management, handle and log unhandled exceptions, profiling y...Audio Service - Play Wave Files From Windows Service: This is a windows service that Check a registry key, when the key is updated with a new wave file path the service plays the wave file.Aviamodels: 3d drawing AviamodelsControl of payment proofs program for Greek citizens: This is a program that is used for Greek citizens who want to keep track of their payment proofs.Cover Creator: Cover Creator gives you the possibility to create and print CD covers. Content of CD is taken from http://www.freedb.org/ or can be added/modyfied ...DevBoard: DevBoard is a webbased scrum tool that helps developers/team get a clear overview of the project progress. It's developed in C# and silverlight.Flex AdventureWorks: The is mostly a skunk-works application to help me get acclimated to CodePlex. The long term goal is to integrate a Flex UI with the AdventureWor...GRE Wordlist: An intuitive and customizable word list for GRE aspirants. Developed in Java using a word list similar to Barron's.Indexer: A desktop file Index and Search tool which allows you to choose a list of folders to index, and then search on later. It is based on Lucene.net an...Project Management Office (PMO) for SharePoint: Sample web part for the Code Mastery event in Boston, February 11, 2010.Restart SQL Audit Policy and Job: Resolve SQL 2008 Audit Network Connectivity Issue.Rounded Corners / DIV Container: The RoundedDiv round corners container is a skin-able, CSS compliant UI control. Select which corners should be rounded, collapse and expand the c...Silverlight Google Search Application: The Silverlight Google Search Application uses Google Search API and behaves like Internet Search Application with option to preview desired page i...Weather Forecast Control: MyWeather forecast control pulls up to date weather forecast information from The Weather Channel for your website.New ReleasesApplication Management Library: ApplicationManagement v1.0: First ReleaseAudio Service - Play Wave Files From Windows Service: Audio Service v1.0: This is a working version of the Audio Service. Please use as you need to.AutoMapper: 1.0.1 for Silverlight 3.0 Alpha: AutoMapper for Silverlight 3.0. Features not supported: IDataReader mapping IListSource mapping All other features are supported.Buzz Dot Net: Buzz Dot Net v.1.10219: Buzz Dot Net Library (Parser & Objects) + WPF Example (using MVVM & Threading)Canvas VSDOC Intellisense: V 1.0.0.0a: This release contains two JavaScript files: canvas-utils.js (can be referenced in both runtime and development environment) canvas-vsdoc.js (must ...Control of payment proofs program for Greek citizens: Payment Proofs: source codeCourier: Beta 2: Added Rx Framework support and re-factored how message registration and un-registration works Blog post explaining the updates and re-factoring c...Cover Creator: Initial release: This is initial stable release. For now only in Polish language.Employee Scheduler: Employee Scheduler 2.2: Small Bug found. Small total hour calculation bug. 
See http://employeescheduler.codeplex.com/WorkItem/View.aspx?WorkItemId=6059 Extract the files...EnhSim: Release v1.9.7.1: Release v1.9.7.1Implemented Dislodged Foreign Object trinket Whispering Fanged Skull now also procs off Flame shock dots You can toggle bloodlust o...Extend SmallBasic: Teaching Extensions v.007: added SimpleSquareTest added Tortoise.Approve() for virtual proctor how to use virtual proctor: change the path in the proctor.txt file (located i...FolderSize: FolderSize.Win32.1.0.1.0: FolderSize.Win32.1.0.1.0 A simple utility intended to be used to scan harddrives for the folders that take most place and display this to the user...GLB Virtual Player Builder: v0.4.0 Beta: Allows for user to import and use archetypes for building players. The archetypes are contained in the file "archetypes.xml". This file is editab...Google Map WebPart from SharePoint List: GMap Stable Release: GMap Stable ReleaseHenge3D Physics Library for XNA: Henge3D Source (2010-02 R2): Fixed a build error related to an assembly attribute in XBOX 360 builds. Tweaked the controls in the sample when targeting the 360. Reduced the...Indexer: Beta Release 1: Just the initial/rough cut.NukeCS: NukeCS 5.2.3 Source Code: update version to 5.2.3ODOS: ODOS STABLE 1.5.0: Thank you for your patience while we develop this version. Not that much has been added, though. Just doing some sub-conscious stuff to make life...PoshBoard: PoshBoard 3.0 Beta 1: Welcome to the first beta release of PoshBoard 3.0 ! IMPORTANT WARNING : this release is absolutly not feature complete and is error-prone. Okay, ...Restart SQL Audit Policy and Job: Restart SQL 2008 Audit Policy and Job: This folder contains three pieces of source code: Server Audit Status (Started).xml - Import this on-schedule policy into your server's Policy-Ba...SAL- Self Artificial Learning: Artificial Learning 2AQV Working Proof Of Concept: This is the Simulation proof of concept version that comes after the 1aq version. AQ stands for Anwering Questions.SharePoint 2010 Word Automation: SP 2010 Word Automation - Workflow Actions 1.1: This release includes two new custom workflow activities for SharePoint designer Convert Folder Convert Library More information about these new...SharePoint Outlook Connector: Version 1.0.1.1: Exception Logging has been improved.Sharpy: Sharpy 1.2 Alpha: This is the third Sharpy release. A change has been made to allow overriding the master page from the controller. The release contains the single ...Silverlight Google Search Application: SL Google Search App Alpha: This is just a first alpha version of the application, as it looks like when I uploaded it to CodePlex. The application works, requires Silverlight...Starter Kit Mytrip.Mvc.Entity: Mytrip.Mvc.Entity 1.0 RC: EF Membership UserManager FileManager Localization Captcha ClientValidation Theme CrossBrowser VS 2010 RC MVC 2 RC db MSSQL2008thinktecture WSCF.blue: WSCF.blue V1 Update (1.0.6): This release is an update for WSCF.blue V1. Below are the bug fixes made since the V1.0.5 release: The data contract type filter was not including...TS3QueryLib.Net: TS3QueryLib.Net Version 0.18.13.0: Changelog Added overloads to all methods of QueryRunenr class handling permission tasks to allow passing of permission name instead of permissionid...Umbraco CMS: Umbraco 4.1 Beta 2: This is the second beta of Umbraco 4.1. Umbraco 4.1 is more advanced than ever, yet faster, lighter and simpler to use than ever. 
We, on behalf of...VCC: Latest build, v2.1.30218.0: Automatic drop of latest buildZack's Fiasco - Code Generated DAL: v1.2.4: Enhancements: SQL Server CRUD Stored Procedures added option for USE <db> added option to create or not create INSERT sprocs added option to cr...Most Popular ProjectsRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)Image Resizer Powertoy Clone for WindowsASP.NETMicrosoft SQL Server Community & SamplesDotNetNuke® Community EditionMost Active ProjectsRawrSharpyDinnerNow.netBlogEngine.NETjQuery Library for SharePoint Web ServicesNB_Store - Free DotNetNuke Ecommerce Catalog Modulepatterns & practices – Enterprise LibraryPHPExcelFacebook Developer ToolkitFluent Ribbon Control Suite

    Read the article

  • How To Skip Commercials in Windows 7 Media Center

    - by DigitalGeekery
    If you use Windows 7 Media Center to record live TV, you’re probably interested in skipping through commercials. After all, a big reason to record programs is to avoid commercials, right? Today we focus on a fairly simple and free way to get you skipping commercials in no time. In Windows 7, the .wtv file format has replaced the dvr-ms file format used in previous versions of Media Center for Recorded TV. The .wtv file format, however, does not work very well with commercial skipping applications.  The Process Our first step will be to convert the recorded .wtv files to the previously used dvr-ms file format. This conversion will be done automatically by WtvWatcher. It’s important to note that this process deletes the original .wtv file after successfully converting to .dvr-ms. Next, we will use DVRMSToolBox with the DTB Addin to handle commercials skipping. This process does not “cut” or remove the commercials from the file. It merely skips the commercials during playback. WtvWatcher Download and install the WTVWatcher (link below). To install WtvWatcher, you’ll need to have Windows Installer 3.1 and .NET Framework 3.5 SP1. If you get the Publisher cannot be verified warning you can go ahead and click Install. We’ve completely tested this app and it contains no malware and runs successfully.   After installing, the WtvWatcher will pop up in the lower right corner of your screen. You will need to set the path to your Recorded TV directory. Click on the button for “Click here to set your recorded TV path…”   The WtvWatcher Preferences window will open…   …and you’ll be prompted to browse for your Recorded TV folder. If you did not change the default location at setup, it will be found at C:\Users\Public\Recorded TV. Click “OK” when finished. Click the “X” to close the Preferences screen. You should now see WtvWatcher begin to convert any existing WTV files.   The process should only take a few minutes per file. Note: If WtvWatcher detects an error during the conversion process, it will not delete the original WTV file.    You will probably want to run WtvWatcher on startup. This will allow WtvWatcher too constantly scan for new .wtv files to convert. There is no setting in the application to run on startup, so you’ll need to copy the WTV icon from your desktop into your Windows start menu “Startup” directory. To do so, click on Start > All Programs, right-click on Startup and click on Open all users. Drag and drop, or cut and paste, the WtvWatcher desktop shortcut into the Startup folder. DVRMSToolBox and DTBAddIn Next, we need to download and install the DVRMSToolBox and the DTBAddIn. These two pieces of software will do the actual commercial skipping. After downloading the DVRMSToolBox zip file, extract it and double-click the setup.exe file.  Click “Next” to begin the installation.   Unless DVRMSToolBox will only be used by Administrator accounts, check the “Modify File Permissions” box. Click “Next.” When you get to the Optional Components window, uncheck Download/Install ShowAnalyzer. We will not be using that application. When the installation is complete, click “Close.”    Next we need to install the DTBAddin. Unzip the download folder and run the appropriate .msi file for your system. It is available in 32 & 64 bit versions. Just double click on the file and take the default options. Click “Finish” when the install is completed. You will then be prompted to restart your computer. 
    After your computer has restarted, open DVRMSToolBox settings by going to Start > All Programs, DVRMSToolBox, and click on DVRMStoMPEGSettings. On the MC Addin tab, make sure that Skip Commercials is checked. It should be by default.   On the Commercial Skip tab, make sure the Auto Skip option is selected. Click “Save.”   If you try to watch recorded TV before the file conversion and commercial indexing process is complete you’ll get the following message pop up in Media Center. If you click Yes, it will start indexing the commercials if WtvWatcher has already converted it to dvr-ms. Now you’re ready to kick back and watch your recorded tv without having to wait through those long commercial breaks. Conclusion The DVRMSToolBox is a powerful and complex application with a multitude of features and utilities. We’ve showed you a quick and easy way to get your Windows Media Center setup to skip commercials. This setup, like virtually all commercial skipping setups, is not perfect. You will occasionally find a commercial that doesn’t get skipped. Need help getting your Windows 7 PC configured for TV? Check out our previous tutorial on setting up live TV in Windows Media Center. Links Download WTV Watcher Download DVRMSToolBox Download DTBAddin
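    Earlier in the walkthrough you copy the WtvWatcher desktop shortcut into the all-users Startup folder by hand. If you prefer to do it from a command prompt, something along these lines should work on Windows 7; the shortcut name and desktop location are assumptions, so adjust them to match what the installer actually created.

    :: Hedged example -- assumes the installer left a WtvWatcher shortcut on your desktop.
    copy "%USERPROFILE%\Desktop\WtvWatcher.lnk" "%ProgramData%\Microsoft\Windows\Start Menu\Programs\Startup\"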

    Read the article

  • Node.js Adventure - Host Node.js on Windows Azure Worker Role

    - by Shaun
    In my previous post I demonstrated about how to develop and deploy a Node.js application on Windows Azure Web Site (a.k.a. WAWS). WAWS is a new feature in Windows Azure platform. Since it’s low-cost, and it provides IIS and IISNode components so that we can host our Node.js application though Git, FTP and WebMatrix without any configuration and component installation. But sometimes we need to use the Windows Azure Cloud Service (a.k.a. WACS) and host our Node.js on worker role. Below are some benefits of using worker role. - WAWS leverages IIS and IISNode to host Node.js application, which runs in x86 WOW mode. It reduces the performance comparing with x64 in some cases. - WACS worker role does not need IIS, hence there’s no restriction of IIS, such as 8000 concurrent requests limitation. - WACS provides more flexibility and controls to the developers. For example, we can RDP to the virtual machines of our worker role instances. - WACS provides the service configuration features which can be changed when the role is running. - WACS provides more scaling capability than WAWS. In WAWS we can have at most 3 reserved instances per web site while in WACS we can have up to 20 instances in a subscription. - Since when using WACS worker role we starts the node by ourselves in a process, we can control the input, output and error stream. We can also control the version of Node.js.   Run Node.js in Worker Role Node.js can be started by just having its execution file. This means in Windows Azure, we can have a worker role with the “node.exe” and the Node.js source files, then start it in Run method of the worker role entry class. Let’s create a new windows azure project in Visual Studio and add a new worker role. Since we need our worker role execute the “node.exe” with our application code we need to add the “node.exe” into our project. Right click on the worker role project and add an existing item. By default the Node.js will be installed in the “Program Files\nodejs” folder so we can navigate there and add the “node.exe”. Then we need to create the entry code of Node.js. In WAWS the entry file must be named “server.js”, which is because it’s hosted by IIS and IISNode and IISNode only accept “server.js”. But here as we control everything we can choose any files as the entry code. For example, I created a new JavaScript file named “index.js” in project root. Since we created a C# Windows Azure project we cannot create a JavaScript file from the context menu “Add new item”. We have to create a text file, and then rename it to JavaScript extension. After we added these two files we should set their “Copy to Output Directory” property to “Copy Always”, or “Copy if Newer”. Otherwise they will not be involved in the package when deployed. Let’s paste a very simple Node.js code in the “index.js” as below. As you can see I created a web server listening at port 12345. 1: var http = require("http"); 2: var port = 12345; 3:  4: http.createServer(function (req, res) { 5: res.writeHead(200, { "Content-Type": "text/plain" }); 6: res.end("Hello World\n"); 7: }).listen(port); 8:  9: console.log("Server running at port %d", port); Then we need to start “node.exe” with this file when our worker role was started. This can be done in its Run method. I found the Node.js and entry JavaScript file name, and then create a new process to run it. Our worker role will wait for the process to be exited. 
If everything is OK once our web server was opened the process will be there listening for incoming requests, and should not be terminated. The code in worker role would be like this. 1: public override void Run() 2: { 3: // This is a sample worker implementation. Replace with your logic. 4: Trace.WriteLine("NodejsHost entry point called", "Information"); 5:  6: // retrieve the node.exe and entry node.js source code file name. 7: var node = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot\node.exe"); 8: var js = "index.js"; 9:  10: // prepare the process starting of node.exe 11: var info = new ProcessStartInfo(node, js) 12: { 13: CreateNoWindow = false, 14: ErrorDialog = true, 15: WindowStyle = ProcessWindowStyle.Normal, 16: UseShellExecute = false, 17: WorkingDirectory = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot") 18: }; 19: Trace.WriteLine(string.Format("{0} {1}", node, js), "Information"); 20:  21: // start the node.exe with entry code and wait for exit 22: var process = Process.Start(info); 23: process.WaitForExit(); 24: } Then we can run it locally. In the computer emulator UI the worker role started and it executed the Node.js, then Node.js windows appeared. Open the browser to verify the website hosted by our worker role. Next let’s deploy it to azure. But we need some additional steps. First, we need to create an input endpoint. By default there’s no endpoint defined in a worker role. So we will open the role property window in Visual Studio, create a new input TCP endpoint to the port we want our website to use. In this case I will use 80. Even though we created a web server we should add a TCP endpoint of the worker role, since Node.js always listen on TCP instead of HTTP. And then changed the “index.js”, let our web server listen on 80. 1: var http = require("http"); 2: var port = 80; 3:  4: http.createServer(function (req, res) { 5: res.writeHead(200, { "Content-Type": "text/plain" }); 6: res.end("Hello World\n"); 7: }).listen(port); 8:  9: console.log("Server running at port %d", port); Then publish it to Windows Azure. And then in browser we can see our Node.js website was running on WACS worker role. We may encounter an error if we tried to run our Node.js website on 80 port at local emulator. This is because the compute emulator registered 80 and map the 80 endpoint to 81. But our Node.js cannot detect this operation. So when it tried to listen on 80 it will failed since 80 have been used.   Use NPM Modules When we are using WAWS to host Node.js, we can simply install modules we need, and then just publish or upload all files to WAWS. But if we are using WACS worker role, we have to do some extra steps to make the modules work. Assuming that we plan to use “express” in our application. Firstly of all we should download and install this module through NPM command. But after the install finished, they are just in the disk but not included in the worker role project. If we deploy the worker role right now the module will not be packaged and uploaded to azure. Hence we need to add them to the project. On solution explorer window click the “Show all files” button, select the “node_modules” folder and in the context menu select “Include In Project”. But that not enough. We also need to make all files in this module to “Copy always” or “Copy if newer”, so that they can be uploaded to azure with the “node.exe” and “index.js”. This is painful step since there might be many files in a module. 
So I created a small tool which can update a C# project file, make its all items as “Copy always”. The code is very simple. 1: static void Main(string[] args) 2: { 3: if (args.Length < 1) 4: { 5: Console.WriteLine("Usage: copyallalways [project file]"); 6: return; 7: } 8:  9: var proj = args[0]; 10: File.Copy(proj, string.Format("{0}.bak", proj)); 11:  12: var xml = new XmlDocument(); 13: xml.Load(proj); 14: var nsManager = new XmlNamespaceManager(xml.NameTable); 15: nsManager.AddNamespace("pf", "http://schemas.microsoft.com/developer/msbuild/2003"); 16:  17: // add the output setting to copy always 18: var contentNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:Content", nsManager); 19: UpdateNodes(contentNodes, xml, nsManager); 20: var noneNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:None", nsManager); 21: UpdateNodes(noneNodes, xml, nsManager); 22: xml.Save(proj); 23:  24: // remove the namespace attributes 25: var content = xml.InnerXml.Replace("<CopyToOutputDirectory xmlns=\"\">", "<CopyToOutputDirectory>"); 26: xml.LoadXml(content); 27: xml.Save(proj); 28: } 29:  30: static void UpdateNodes(XmlNodeList nodes, XmlDocument xml, XmlNamespaceManager nsManager) 31: { 32: foreach (XmlNode node in nodes) 33: { 34: var copyToOutputDirectoryNode = node.SelectSingleNode("pf:CopyToOutputDirectory", nsManager); 35: if (copyToOutputDirectoryNode == null) 36: { 37: var n = xml.CreateNode(XmlNodeType.Element, "CopyToOutputDirectory", null); 38: n.InnerText = "Always"; 39: node.AppendChild(n); 40: } 41: else 42: { 43: if (string.Compare(copyToOutputDirectoryNode.InnerText, "Always", true) != 0) 44: { 45: copyToOutputDirectoryNode.InnerText = "Always"; 46: } 47: } 48: } 49: } Please be careful when use this tool. I created only for demo so do not use it directly in a production environment. Unload the worker role project, execute this tool with the worker role project file name as the command line argument, it will set all items as “Copy always”. Then reload this worker role project. Now let’s change the “index.js” to use express. 1: var express = require("express"); 2: var app = express(); 3:  4: var port = 80; 5:  6: app.configure(function () { 7: }); 8:  9: app.get("/", function (req, res) { 10: res.send("Hello Node.js!"); 11: }); 12:  13: app.get("/User/:id", function (req, res) { 14: var id = req.params.id; 15: res.json({ 16: "id": id, 17: "name": "user " + id, 18: "company": "IGT" 19: }); 20: }); 21:  22: app.listen(port); Finally let’s publish it and have a look in browser.   Use Windows Azure SQL Database We can use Windows Azure SQL Database (a.k.a. WACD) from Node.js as well on worker role hosting. Since we can control the version of Node.js, here we can use x64 version of “node-sqlserver” now. This is better than if we host Node.js on WAWS since it only support x86. Just install the “node-sqlserver” module from NPM, copy the “sqlserver.node” from “Build\Release” folder to “Lib” folder. Include them in worker role project and run my tool to make them to “Copy always”. Finally update the “index.js” to use WASD. 
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:{SERVER NAME}.database.windows.net,1433;Database={DATABASE NAME};Uid={LOGIN}@{SERVER NAME};Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Publish to azure and now we can see our Node.js is working with WASD through x64 version “node-sqlserver”.   Summary In this post I demonstrated how to host our Node.js in Windows Azure Cloud Service worker role. By using worker role we can control the version of Node.js, as well as the entry code. And it’s possible to do some pre jobs before the Node.js application started. It also removed the IIS and IISNode limitation. I personally recommended to use worker role as our Node.js hosting. But there are some problem if you use the approach I mentioned here. The first one is, we need to set all JavaScript files and module files as “Copy always” or “Copy if newer” manually. The second one is, in this way we cannot retrieve the cloud service configuration information. 
For example, we defined the endpoint in worker role property but we also specified the listening port in Node.js hardcoded. It should be changed that our Node.js can retrieve the endpoint. But I can tell you it won’t be working here. In the next post I will describe another way to execute the “node.exe” and Node.js application, so that we can get the cloud service configuration in Node.js. I will also demonstrate how to use Windows Azure Storage from Node.js by using the Windows Azure Node.js SDK.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
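    One of the worker-role benefits listed at the start of this post is that we control the input, output and error streams of node.exe, but the sample Run method simply waits for the process to exit. As a supplementary sketch (my own variation, not part of the original walkthrough), the same method could forward Node.js console output into the Windows Azure trace log like this:

    // Hedged variation of the Run method: redirect node.exe stdout/stderr into the trace log.
    public override void Run()
    {
        var node = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot\node.exe");
        var info = new ProcessStartInfo(node, "index.js")
        {
            UseShellExecute = false,
            CreateNoWindow = true,
            RedirectStandardOutput = true,
            RedirectStandardError = true,
            WorkingDirectory = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot")
        };

        var process = new Process { StartInfo = info };
        process.OutputDataReceived += (s, e) => { if (e.Data != null) Trace.WriteLine(e.Data, "node.js"); };
        process.ErrorDataReceived += (s, e) => { if (e.Data != null) Trace.TraceError(e.Data); };

        process.Start();
        process.BeginOutputReadLine();
        process.BeginErrorReadLine();
        process.WaitForExit();
    }

    Everything written to console.log in index.js then shows up in the role's diagnostics output, which makes debugging a deployed instance considerably easier.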

    Read the article

  • SQL SERVER – Introduction to Adaptive ETL Tool – How adaptive is your ETL?

    - by pinaldave
    I am often reminded by the fact that BI/data warehousing infrastructure is very brittle and not very adaptive to change. There are lots of basic use cases where data needs to be frequently loaded into SQL Server or another database. What I have found is that as long as the sources and targets stay the same, SSIS or any other ETL tool for that matter does a pretty good job handling these types of scenarios. But what happens when you are faced with more challenging scenarios, where the data formats and possibly the data types of the source data are changing from customer to customer?  Let’s examine a real life situation where a health management company receives claims data from their customers in various source formats. Even though this company supplied all their customers with the same claims forms, they ended up building one-off ETL applications to process the claims for each customer. Why, you ask? Well, it turned out that the claims data from various regional hospitals they needed to process had slightly different data formats, e.g. “integer” versus “string” data field definitions.  Moreover the data itself was represented with slight nuances, e.g. “0001124” or “1124” or “0000001124” to represent a particular account number, which forced them, as I eluded above, to build new ETL processes for each customer in order to overcome the inconsistencies in the various claims forms.  As a result, they experienced a lot of redundancy in these ETL processes and recognized quickly that their system would become more difficult to maintain over time. So imagine for a moment that you could use an ETL tool that helps you abstract the data formats so that your ETL transformation process becomes more reusable. Imagine that one claims form represents a data item as a string – acc_no(varchar) – while a second claims form represents the same data item as an integer – account_no(integer). This would break your traditional ETL process as the data mappings are hard-wired.  But in a world of abstracted definitions, all you need to do is create parallel data mappings to a common data representation used within your ETL application; that is, map both external data fields to a common attribute whose name and type remain unchanged within the application. acc_no(varchar) is mapped to account_number(integer) expressor Studio first claim form schema mapping account_no(integer) is also mapped to account_number(integer) expressor Studio second claim form schema mapping All the data processing logic that follows manipulates the data as an integer value named account_number. Well, these are the kind of problems that that the expressor data integration solution automates for you.  I’ve been following them since last year and encourage you to check them out by downloading their free expressor Studio ETL software. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: ETL, SSIS
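    To make the abstraction concrete, here is a hedged T-SQL illustration of the same idea (the table and column names are made up): both claim-form shapes are mapped onto one common attribute, account_number(integer), which is what the expressor mappings do at the metadata layer instead of in hand-written SQL.

    -- Hypothetical staging tables for the two claim forms.
    SELECT CAST(acc_no AS int) AS account_number      -- form 1: acc_no varchar, e.g. '0001124'
    FROM   dbo.ClaimsForm1
    UNION ALL
    SELECT account_no          AS account_number      -- form 2: account_no already integer, e.g. 1124
    FROM   dbo.ClaimsForm2;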

    Read the article

  • Lost all privileges since upgrading to 13.10

    - by Chris Poole
    Since upgrading to 13.10, I no longer have the 'privileges' to do the following things: Mount USB/CDROM drives Run software centre or software updater Press the GUI shut down or restart buttons Unlock my account in the 'settings - user accounts' section (padlock is greyed out) Also, when logging on as a guest user I get error messages relating to Compiz crashing with SIGSEGV and it hangs on a blank wallpaper screen. However, I still am able to use sudo in the terminal. Output of 'groups' is jenchris adm dialout cdrom sudo audio video plugdev lpadmin admin pulse pulse-access sambashare sudo usermod -U username doesn't have any effect Output of sudo dpkg-reconfigure -phigh -a acpid stop/waiting acpid start/running, process 30454 * Starting AppArmor profiles Skipping profile in /etc/apparmor.d/disable: usr.bin.firefox Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd [ OK ] * Reloading AppArmor profiles Skipping profile in /etc/apparmor.d/disable: usr.bin.firefox Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd [ OK ] apport stop/waiting apport start/running gpg: key 437D05B5: "Ubuntu Archive Automatic Signing Key <[email protected]>" not changed gpg: key FBB75451: "Ubuntu CD Image Automatic Signing Key <[email protected]>" not changed gpg: key C0B21F32: "Ubuntu Archive Automatic Signing Key (2012) <[email protected]>" not changed gpg: key EFE21092: "Ubuntu CD Image Automatic Signing Key (2012) <[email protected]>" not changed gpg: Total number processed: 4 gpg: unchanged: 4 atd stop/waiting atd start/running, process 1388 avahi-daemon stop/waiting avahi-daemon start/running, process 1521 Rebuilding /usr/share/applications/bamf-2.index... update-alternatives: using /usr/share/man/man7/bash-builtins.7.gz to provide /usr/share/man/man7/builtins.7.gz (builtins.7.gz) in auto mode update-binfmts: warning: current package is openjdk-7, but binary format already installed by openjdk-6 binfmt-support stop/waiting bluetooth stop/waiting bluetooth start/running, process 4255 update-initramfs: deferring update (trigger activated) /var/lib/dpkg/info/compiz.config: 1: /var/lib/dpkg/info/compiz.config: [general]: not found /var/lib/dpkg/info/compiz.config: 2: /var/lib/dpkg/info/compiz.config: backend: not found /var/lib/dpkg/info/compiz.config: 3: /var/lib/dpkg/info/compiz.config: plugin_list_autosort: not found /var/lib/dpkg/info/compiz.config: 5: /var/lib/dpkg/info/compiz.config: [gnome_session]: not found /var/lib/dpkg/info/compiz.config: 6: /var/lib/dpkg/info/compiz.config: backend: not found /var/lib/dpkg/info/compiz.config: 7: /var/lib/dpkg/info/compiz.config: integration: not found /var/lib/dpkg/info/compiz.config: 8: /var/lib/dpkg/info/compiz.config: plugin_list_autosort: not found /var/lib/dpkg/info/compiz.config: 9: /var/lib/dpkg/info/compiz.config: profile: not found /var/lib/dpkg/info/compiz.config: 11: /var/lib/dpkg/info/compiz.config: [general_ubuntu]: not found /var/lib/dpkg/info/compiz.config: 12: /var/lib/dpkg/info/compiz.config: backend: not found /var/lib/dpkg/info/compiz.config: 13: /var/lib/dpkg/info/compiz.config: integration: not found /var/lib/dpkg/info/compiz.config: 14: /var/lib/dpkg/info/compiz.config: plugin_list_autosort: not found /var/lib/dpkg/info/compiz.config: 15: /var/lib/dpkg/info/compiz.config: profile: not found
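    Symptoms like these on 13.10 (sudo still works in a terminal, but mounting, shutdown and the User Accounts padlock are refused in the GUI) usually point at PolicyKit or session registration rather than group membership. A hedged, read-only diagnostic sketch is below; the package names and the expectation of Active=yes / Remote=no are assumptions about a stock Ubuntu 13.10 desktop.

    # Read-only checks -- nothing here changes the system.
    loginctl list-sessions                                    # is there a logind session for your user?
    loginctl show-session <session-id> -p Active -p Remote    # GUI privileges need Active=yes, Remote=no
    dpkg -l policykit-1 policykit-1-gnome                     # did the PolicyKit packages survive the upgrade?
    groups $USER                                              # sudo, adm, plugdev, lpadmin should still be listed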

    Read the article

  • JMX Based Monitoring - Part Four - Business App Server Monitoring

    - by Anthony Shorten
    In the last blog entry I talked about the Oracle Utilities Application Framework V4 feature for monitoring and managing aspects of the Web Application Server using JMX. In this blog entry I am going to discuss a similar new feature that allows JMX to be used for management and monitoring the Oracle Utilities business application server component. This feature is primarily focussed on performance tracking of the product. In first release of Oracle Utilities Customer Care And Billing (V1.x I am talking about), we used to use Oracle Tuxedo as part of the architecture. In Oracle Utilities Application Framework V2.0 and above, we removed Tuxedo from the architecture. One of the features that some customers used within Tuxedo was the performance tracking ability. The idea was that you enabled performance logging on the individual Tuxedo servers and then used a utility named txrpt to produce a performance report. This report would list every service called, the number of times it was called and the average response time. When I worked a performance consultant, I used this report to identify badly performing services and also gauge the overall performance characteristics of a site. When Tuxedo was removed from the architecture this information was also lost. While you can get some information from access.log and some Mbeans supplied by the Web Application Server it was not at the same granularity as txrpt or as useful. I am happy to say we have not only reintroduced this facility in Oracle Utilities Application Framework but it is now accessible via JMX and also we have added more detail into the performance tracking. Most of this new design was working with customers around the world to make sure we introduced a new feature that not only satisfied their performance tracking needs but allowed for finer grained performance analysis. As with the Web Application Server, the Business Application Server JMX monitoring is enabled by specifying a JMX port number in RMI Port number for JMX Business and initial credentials in the JMX Enablement System User ID and JMX Enablement System Password configuration options. These options are available using the configureEnv[.sh] -a utility. These credentials are shared across the Web Application Server and Business Application Server for authorization purposes. Once this is information is supplied a number of configuration files are built (by the initialSetup[.sh] utility) to configure the facility: spl.properties - contains the JMX URL, the security configuration and the mbeans that are enabled. For example, on my demonstration machine: spl.runtime.management.rmi.port=6750 spl.runtime.management.connector.url.default=service:jmx:rmi:///jndi/rmi://localhost:6750/oracle/ouaf/ejbAppConnector jmx.remote.x.password.file=scripts/ouaf.jmx.password.file jmx.remote.x.access.file=scripts/ouaf.jmx.access.file ouaf.jmx.com.splwg.ejb.service.management.PerformanceStatistics=enabled ouaf.jmx.* files - contain the userid and password. The default configuration uses the JMX default configuration. You can use additional security features by altering the spl.properties file manually or using a custom template. For more security options see JMX Security for more details. Once it has been configured and the changes reflected in the product using the initialSetup[.sh] utility the JMX facility can be used. For illustrative purposes I will use jconsole but any JSR160 complaint browser or client can be used (with the appropriate configuration). 
Once you start jconsole (ensure that splenviron[.sh] is executed prior to execution to set the environment variables or for remote connection, ensure java is in your path and jconsole.jar in your classpath) you specify the URL in the spl.runtime.management.connnector.url.default entry. For example: You are then able to track performance of the product using the PerformanceStatistics Mbean. The attributes of the PerformanceStatistics Mbean are counts of each object type. This is where this facility differs from txrpt. The information that is collected includes the following: The Service Type is captured so you can filter the results in terms of the type of service. For maintenance type services you can even see the transaction type (ADD, CHANGE etc) so you can see the performance of updates against read transactions. The Minimum and Maximum are also collected to give you an idea of the spread of performance. The last call is recorded. The date, time and user of the last call are recorded to give you an idea of the timeliness of the data. The Mbean maintains a set of counters per Service Type to give you a summary of the types of transactions being executed. This gives you an overall picture of the types of transactions and volumes at your site. There are a number of interesting operations that can also be performed: reset - This resets the statistics back to zero. This is an important operation. For example, txrpt is restricted to collecting statistics per hour, which is ok for most people. But what if you wanted to be more granular? This operation allows to set the collection period to anything you wish. The statistics collected will represent values since the last restart or last reset. completeExecutionDump - This is the operation that produces a CSV in memory to allow extraction of the data. All the statistics are extracted (see the Server Administration Guide for a full list). This can be then loaded into a database, a tool or simply into your favourite spreadsheet for analysis. Here is an extract of an execution dump from my demonstration environment to give you an idea of the format: ServiceName, ServiceType, MinTime, MaxTime, Avg Time, # of Calls, Latest Time, Latest Date, Latest User ... CFLZLOUL, EXECUTE_LIST, 15.0, 64.0, 22.2, 10, 16.0, 2009-12-16::11-25-36-932, ASHORTEN CILBBLLP, READ, 106.0, 1184.0, 466.3333333333333, 6, 106.0, 2009-12-16::11-39-01-645, BOBAMA CILBBLLP, DELETE, 70.0, 146.0, 108.0, 2, 70.0, 2009-12-15::12-53-58-280, BPAYS CILBBLLP, ADD, 860.0, 4903.0, 2243.5, 8, 860.0, 2009-12-16::17-54-23-862, LELLISON CILBBLLP, CHANGE, 112.0, 3410.0, 815.1666666666666, 12, 112.0, 2009-12-16::11-40-01-103, ASHORTEN CILBCBAL, EXECUTE_LIST, 8.0, 84.0, 26.0, 22, 23.0, 2009-12-16::17-54-01-643, LJACKMAN InitializeUserInfoService, READ_SYSTEM, 49.0, 962.0, 70.83777777777777, 450, 63.0, 2010-02-25::11-21-21-667, ASHORTEN InitializeUserService, READ_SYSTEM, 130.0, 2835.0, 234.85777777777778, 450, 216.0, 2010-02-25::11-21-21-446, ASHORTEN MenuLoginService, READ_SYSTEM, 530.0, 1186.0, 703.3333333333334, 9, 530.0, 2009-12-16::16-39-31-172, ASHORTEN NavigationOptionDescriptionService, READ_SYSTEM, 2.0, 7.0, 4.0, 8, 2.0, 2009-12-21::09-46-46-892, ASHORTEN ... There are other operations and attributes available. Refer to the Server Administration Guide provided with your product to understand the full et of operations and attributes. This is one of the many features I am proud that we implemented as it allows flexible monitoring of the performance of the product.
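    Because completeExecutionDump produces the statistics as CSV, you do not have to use jconsole; any JSR 160 client can pull the dump on a schedule and archive it. Below is a hedged Java sketch of such a client. The service URL mirrors the spl.runtime.management.connector.url.default value shown above, but the ObjectName, the credentials and the assumption that the operation returns a String are guesses to verify against your environment (the real MBean name is easiest to confirm in jconsole's MBeans tab).

    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.HashMap;
    import java.util.Map;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    // Hedged sketch: fetch the PerformanceStatistics CSV dump without jconsole.
    public class DumpOuafStats {
        public static void main(String[] args) throws Exception {
            JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:6750/oracle/ouaf/ejbAppConnector");
            Map<String, Object> env = new HashMap<>();
            env.put(JMXConnector.CREDENTIALS, new String[] { "jmxUser", "jmxPassword" }); // assumed credentials
            try (JMXConnector connector = JMXConnectorFactory.connect(url, env)) {
                MBeanServerConnection mbs = connector.getMBeanServerConnection();
                ObjectName name = new ObjectName(
                    "com.splwg.ejb.service.management:type=PerformanceStatistics");        // assumed ObjectName
                String csv = (String) mbs.invoke(name, "completeExecutionDump", null, null);
                Files.write(Paths.get("ouaf-performance.csv"), csv.getBytes(StandardCharsets.UTF_8));
            }
        }
    }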

    Read the article

  • Big Data – Is Big Data Relevant to me? – Big Data Questionnaires – Guest Post by Vinod Kumar

    - by Pinal Dave
    This guest post is by Vinod Kumar. Vinod Kumar has worked with SQL Server extensively since joining the industry over a decade ago. Working on various versions of SQL Server 7.0, Oracle 7.3 and other database technologies – he now works with the Microsoft Technology Center (MTC) as a Technology Architect. Let us read the blog post in Vinod’s own voice. I think the series from Pinal is a good one for anyone planning to start on Big Data journey from the basics. In my daily customer interactions this buzz of “Big Data” always comes up, I react generally saying – “Sir, do you really have a ‘Big Data’ problem or do you have a big Data problem?” Generally, there is a silence in the air when I ask this question. Data is everywhere in organizations – be it big data, small data, all data and for few it is bad data which is same as no data :). Wow, don’t discount me as someone who opposes “Big Data”, I am a big supporter as much as I am a critic of the abuse of this term by the people. In this post, I wanted to let my mind flow so that you can also think in the direction I want you to see these concepts. In any case, this is not an exhaustive dump of what is in my mind – but you will surely get the drift how I am going to question Big Data terms from customers!!! Is Big Data Relevant to me? Many of my customers talk to me like blank whiteboard with no idea – “why Big Data”. They want to jump into the bandwagon of technology and they want to decipher insights from their unexplored data a.k.a. unstructured data with structured data. So what are these industry scenario’s that come to mind? Here are some of them: Financials Fraud detection: Banks and Credit cards are monitoring your spending habits on real-time basis. Customer Segmentation: applies in every industry from Banking to Retail to Aviation to Utility and others where they deal with end customer who consume their products and services. Customer Sentiment Analysis: Responding to negative brand perception on social or amplify the positive perception. Sales and Marketing Campaign: Understand the impact and get closer to customer delight. Call Center Analysis: attempt to take unstructured voice recordings and analyze them for content and sentiment. Medical Reduce Re-admissions: How to build a proactive follow-up engagements with patients. Patient Monitoring: How to track Inpatient, Out-Patient, Emergency Visits, Intensive Care Units etc. Preventive Care: Disease identification and Risk stratification is a very crucial business function for medical. Claims fraud detection: There is no precise dollars that one can put here, but this is a big thing for the medical field. Retail Customer Sentiment Analysis, Customer Care Centers, Campaign Management. Supply Chain Analysis: Every sensors and RFID data can be tracked for warehouse space optimization. Location based marketing: Based on where a check-in happens retail stores can be optimize their marketing. Telecom Price optimization and Plans, Finding Customer churn, Customer loyalty programs Call Detail Record (CDR) Analysis, Network optimizations, User Location analysis Customer Behavior Analysis Insurance Fraud Detection & Analysis, Pricing based on customer Sentiment Analysis, Loyalty Management Agents Analysis, Customer Value Management This list can go on to other areas like Utility, Manufacturing, Travel, ITES etc. So as you can see, there are obviously interesting use cases for each of these industry verticals. These are just representative list. Where to start? 
A lot of times I try to quiz customers on a number of dimensions before starting a Big Data conversation. Are you getting the data you need the way you want it and in a timely manner? Can you get in and analyze the data you need? How quickly is IT to respond to your BI Requests? How easily can you get at the data that you need to run your business/department/project? How are you currently measuring your business? Can you get the data you need to react WITHIN THE QUARTER to impact behaviors to meet your numbers or is it always “rear-view mirror?” How are you measuring: The Brand Customer Sentiment Your Competition Your Pricing Your performance Supply Chain Efficiencies Predictive product / service positioning What are your key challenges of driving collaboration across your global business?  What the challenges in innovation? What challenges are you facing in getting more information out of your data? Note: Garbage-in is Garbage-out. Hold good for all reporting / analytics requirements Big Data POCs? A number of customers get into the realm of setting a small team to work on Big Data – well it is a great start from an understanding point of view, but I tend to ask a number of other questions to such customers. Some of these common questions are: To what degree is your advanced analytics (natural language processing, sentiment analysis, predictive analytics and classification) paired with your Big Data’s efforts? Do you have dedicated resources exploring the possibilities of advanced analytics in Big Data for your business line? Do you plan to employ machine learning technology while doing Advanced Analytics? How is Social Media being monitored in your organization? What is your ability to scale in terms of storage and processing power? Do you have a system in place to sort incoming data in near real time by potential value, data quality, and use frequency? Do you use event-driven architecture to manage incoming data? Do you have specialized data services that can accommodate different formats, security, and the management requirements of multiple data sources? Is your organization currently using or considering in-memory analytics? To what degree are you able to correlate data from your Big Data infrastructure with that from your enterprise data warehouse? Have you extended the role of Data Stewards to include ownership of big data components? Do you prioritize data quality based on the source system (that is Facebook/Twitter data has lower quality thresholds than radio frequency identification (RFID) for a tracking system)? Do your retention policies consider the different legal responsibilities for storing Big Data for a specific amount of time? Do Data Scientists work in close collaboration with Data Stewards to ensure data quality? How is access to attributes of Big Data being given out in the organization? Are roles related to Big Data (Advanced Analyst, Data Scientist) clearly defined? How involved is risk management in the Big Data governance process? Is there a set of documented policies regarding Big Data governance? Is there an enforcement mechanism or approach to ensure that policies are followed? Who is the key sponsor for your Big Data governance program? (The CIO is best) Do you have defined policies surrounding the use of social media data for potential employees and customers, as well as the use of customer Geo-location data? How accessible are complex analytic routines to your user base? 
What is the level of involvement with outside vendors and third parties in regard to the planning and execution of Big Data projects? What programming technologies are utilized by your data warehouse/BI staff when working with Big Data? These are some of the important questions I ask each customer who is actively evaluating Big Data trends for their organizations. These questions give you a sense of direction where to start, what to use, how to secure, how to analyze and more. Sign off Any Big data is analysis is incomplete without a compelling story. The best way to understand this is to watch Hans Rosling – Gapminder (2:17 to 6:06) videos about the third world myths. Don’t get overwhelmed with the Big Data buzz word, the destination to what your data speaks is important. In this blog post, we did not particularly look at any Big Data technologies. This is a set of questionnaire one needs to keep in mind as they embark their journey of Big Data. I did write some of the basics in my blog: Big Data – Big Hype yet Big Opportunity. Do let me know if these questions make sense?  Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Upcoming Webcast on June 17: Gain Control Over Your Financial Close

    - by Theresa Hickman
    Accenture and Oracle EPM (Enterprise Performance Management) and GRC (Governance, Risk, and Compliance) will be hosting a live webcast called "Gain Control Over Your Financial Close - Confidence in the Process, Trust in the Numbers." When: Thursday, June 17, 2010 Time: 9:00am PST (Noon EST) Don't miss this chance to find out how you could optimize the financial close process and transform the speed, quality and integrity of your financial reporting. For more information and to register for this event, see this webpage.

    Read the article

< Previous Page | 317 318 319 320 321 322 323 324 325 326 327 328  | Next Page >