Search Results

Search found 1486 results on 60 pages for 'worker'.


  • calling hibernate callback

    - by vrkmurali
    HibernateCallback callback = new HibernateCallback() {
        @Override
        public Object doInHibernate(Session session) throws HibernateException, SQLException {
            Transaction transaction = session.beginTransaction();
            Query query2 = session.createSQLQuery(
                "select user_id,user_name from usermasterdao where user_id not in('select usermaster_id from LoginHistoryDAO where logindate between :fromdate and :todate')");
            ((SQLQuery) query2).addEntity(UserMasterDAO.class);
            //query.setParameter("stockCode", "7277");
            query2.setParameter("todate", toDate);
            query2.setParameter("fromdate", fromDate);
            List result2 = query2.list();
            listofNotUsing = result2;
            System.out.println(result2.size() + "sizeeee");
            transaction.commit();
            return result2;
        }
    };

    While executing this, I get the following error:

        com.vaadin.event.ListenerMethod$MethodException: Invocation of method notUsingButton in com.iton.ioffice.admin.LoginHistory failed.
            at com.vaadin.event.ListenerMethod.receiveEvent(ListenerMethod.java:530)
            at com.vaadin.event.EventRouter.fireEvent(EventRouter.java:164)
            at com.vaadin.ui.AbstractComponent.fireEvent(AbstractComponent.java:1219)
            at com.vaadin.ui.Button.fireClick(Button.java:567)
            at com.vaadin.ui.Button.changeVariables(Button.java:223)
            at com.vaadin.terminal.gwt.server.AbstractCommunicationManager.changeVariables(AbstractCommunicationManager.java:1460)
            at com.vaadin.terminal.gwt.server.AbstractCommunicationManager.handleVariableBurst(AbstractCommunicationManager.java:1404)
            at com.vaadin.terminal.gwt.server.AbstractCommunicationManager.handleVariables(AbstractCommunicationManager.java:1329)
            at com.vaadin.terminal.gwt.server.AbstractCommunicationManager.doHandleUidlRequest(AbstractCommunicationManager.java:761)
            at com.vaadin.terminal.gwt.server.CommunicationManager.handleUidlRequest(CommunicationManager.java:318)
            at com.vaadin.terminal.gwt.server.AbstractApplicationServlet.service(AbstractApplicationServlet.java:501)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
            at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
            at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:602)
            at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
            at java.lang.Thread.run(Thread.java:619)
        Caused by: org.springframework.orm.hibernate3.HibernateQueryException: could not locate named parameter [todate]; nested exception is org.hibernate.QueryParameterException: could not locate named parameter [todate]
            at org.springframework.orm.hibernate3.SessionFactoryUtils.convertHibernateAccessException(SessionFactoryUtils.java:656)
            at org.springframework.orm.hibernate3.HibernateAccessor.convertHibernateAccessException(HibernateAccessor.java:412)
            at org.springframework.orm.hibernate3.HibernateTemplate.doExecute(HibernateTemplate.java:411)
            at org.springframework.orm.hibernate3.HibernateTemplate.execute(HibernateTemplate.java:339)
            at com.iton.ioffice.admin.DAO.impl.UserServiceDAOImpl.outOfProcess(UserServiceDAOImpl.java:304)
            at com.iton.ioffice.admin.LoginHistory.notUsingButton(LoginHistory.java:298)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at com.vaadin.event.ListenerMethod.receiveEvent(ListenerMethod.java:520)
            ... 23 more
        Caused by: org.hibernate.QueryParameterException: could not locate named parameter [todate]
            at org.hibernate.engine.query.ParameterMetadata.getNamedParameterDescriptor(ParameterMetadata.java:99)
            at org.hibernate.engine.query.ParameterMetadata.getNamedParameterExpectedType(ParameterMetadata.java:105)
            at org.hibernate.impl.AbstractQueryImpl.determineType(AbstractQueryImpl.java:437)
            at org.hibernate.impl.AbstractQueryImpl.setParameter(AbstractQueryImpl.java:407)
            at com.iton.ioffice.admin.DAO.impl.UserServiceDAOImpl$4.doInHibernate(UserServiceDAOImpl.java:285)
            at org.springframework.orm.hibernate3.HibernateTemplate.doExecute(HibernateTemplate.java:406)
            ... 31 more

    Please help me.
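    A likely cause, going by the error text: the subquery is wrapped in single quotes, so it is parsed as a string literal and :fromdate/:todate are never registered as named parameters, which is exactly what "could not locate named parameter [todate]" reports. A minimal corrected version (a sketch; variable names are taken from the question itself):

        // Remove the quotes around the subquery so Hibernate can see
        // and bind the named parameters.
        SQLQuery query2 = session.createSQLQuery(
            "select user_id, user_name from usermasterdao "
          + "where user_id not in "
          + "  (select usermaster_id from LoginHistoryDAO "
          + "   where logindate between :fromdate and :todate)");
        query2.addEntity(UserMasterDAO.class);
        query2.setParameter("fromdate", fromDate);
        query2.setParameter("todate", toDate);
        List result2 = query2.list();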

    Read the article

  • Pass variables between separate instances of ruby (without writing to a text file or database)

    - by boulder_ruby
    Let's say I'm running a long worker script in one of several open interactive Rails consoles. The script is updating columns in a very, very, very large table of records. I've muted the ActiveRecord logger to speed up the process, and I have the script print some record of progress so I know roughly how long the process is going to take. What I'm currently doing looks something like this:

        ModelName.all.each_with_index do |r, i|
          puts i if (i % 250).zero?
          # ...runs some process...
          r.save
        end

    Sometimes there are two nested loops running, so there are multiple iterators and other things going at once. Is there a way I could do something like the following and read that variable from a separate Rails console (with the variable being overwritten on every run, and without much slowdown)?

        records = ModelName.all
        $total = records.count
        records.each_with_index do |r, i|
          $i = i
          # ...runs some process...
          r.save
        end

    Meanwhile, mid-process in the other console:

        puts "#{($i / $total * 100).round(2)}% complete"
        #=> 67.43% complete

    I know global variables don't pass from one separate instance of Ruby to the next. I also just tried the shell equivalent, to no effect:

        # unix console 1
        X=5
        echo $X    #=> 5

        # unix console 2
        echo $X    #=> ""

    Lastly, I know using global variables like this is a major software-design no-no. That's reasonable, but I'd still like to know how to break the rule when I want to. Writing to a text file would obviously work, and so would writing to a separate database table or something like it; that's not a bad idea. But the really cool trick would be sharing a variable between two instances without writing to a text file or a database column. What would this even be called? Tunneling? I don't quite know how to tag this question; maybe bad-idea is one of the tags. But honestly, design-patterns isn't what this question is about.
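    What you're describing is inter-process communication (IPC), and Ruby's standard library ships a mechanism for it: DRb (distributed Ruby) lets one process expose a live object that another process can read over a local socket, with no file or database involved. A sketch, assuming the port number and the Progress class are illustrative choices, not requirements:

        # Console 1 (the worker) -- publish a shared progress object.
        require 'drb/drb'

        class Progress
          attr_accessor :done, :total
          def initialize(total)
            @done  = 0
            @total = total
          end
        end

        progress = Progress.new(ModelName.count)
        DRb.start_service('druby://localhost:8787', progress)

        ModelName.all.each_with_index do |r, i|
          # ...runs some process...
          r.save
          progress.done = i + 1
        end

        # Console 2 (the observer) -- attach to the same object and read it.
        require 'drb/drb'

        DRb.start_service
        progress = DRbObject.new_with_uri('druby://localhost:8787')
        puts format('%.2f%% complete', progress.done.to_f / progress.total * 100)

    The observer can re-run that last line whenever it wants a fresh number; each read goes over the wire to the worker's live object.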

    Read the article

  • Apache 2.2 and FastCGI stops responding, warnings, crashes

    - by Brett
    I've seen this question posted a few times in Google searches, with no real answers. I have a multi-threaded FastCGI application running with Apache 2.2 on FreeBSD 7.2. There are a few issues with it, and I am unable to pin down the source of the problem even after poking through a bunch of the mod_fastcgi source code.

    My FastCGI application gets anywhere from 2 to 15 or so hits per second, and mostly services a back-end API (the majority of web server usage is for this, not for serving content). Everything seems to work fine under normal conditions, but recently the problem has been getting worse.

    It starts with the FastCGI process manager apparently trying to close unneeded processes, sending them a SIGTERM signal. I catch the signal, clean up some things, and exit (by calling exit()) with status code 0. This produces three messages in my httpd error log:

        [Tue Jun 01 14:03:31 2010] [warn] FastCGI: (dynamic) server "/home/program/wwwroot/domains/www.mydomain.com/cgi-bin/program.cgi" (pid 98182) termination signaled
        [Tue Jun 01 14:03:31 2010] [warn] FastCGI: (dynamic) server "/home/program/wwwroot/domains/www.mydomain.com/cgi-bin/program.cgi" (pid 98182) terminated by calling exit with status '0'
        [Tue Jun 01 14:03:31 2010] [warn] FastCGI: (dynamic) server "/home/program/wwwroot/domains/www.mydomain.com/cgi-bin/program.cgi" restarted (pid 98294)

    I am not sure why it says it is restarting the process, but in any case no core dump is ever generated, so I do believe it is the FastCGI process manager doing its thing. That makes sense, because it begins to happen after the load increase that follows restarting Apache: since the server is down for a few seconds, it gets hit with a couple of hundred requests over the first few seconds it's running again (sometimes even hitting the MaxClients limit in Apache), and this seems to be the process manager spawning more processes to handle the increased load.

    So far that all seems fine, but here is where things get weird. There are really two problems that I see.

    First, my multi-threaded FastCGI process spawns 25 worker threads, and all seem to be used according to my internal log files (multiple processes are clearly using multiple threads to do work). However, 3 or 4 FastCGI processes are not enough to handle the 5 to 15 hits per second, even though each request takes only about 0.02s to process internally. To be at all responsive, I seem to need 50 or more FastCGI processes, which leads me to believe that FastCGI does not realize my program is multi-threaded. I've read the documentation at http://www.fastcgi.com/mod_fastcgi/docs/mod_fastcgi.html and do not see any option pertaining to multithreading, and my internal code is more or less set up just like the examples provided by the FastCGI library.

    The second problem is that once process termination has happened a bunch of times as above (and seemingly at random), I begin getting a lot of these messages in my error log:

        [Tue Jun 01 14:06:22 2010] [warn] (32)Broken pipe: FastCGI: write() to PM failed (ignore if a restart or shutdown is pending)

    The messages occur for about half the hits to the server, and they completely kill the responsiveness of my application: it seems FastCGI will look for a working "pipe" until it finds one, and fail to realize that whatever application it is trying to contact is dead. It does still work, it's just incredibly unresponsive, sometimes taking up to 40 or so seconds to process a request. I recompiled mod_fastcgi with some extra debugging around the point of the error message, and it appears the error happens when it tries to write() to the application: the call to write() fails with a -1 return code and sets errno to EPIPE. I've noticed the issue happens mostly when either a crash occurs in one of the FastCGI processes or a bunch of them are terminated by the process manager. I haven't had any core dumps, except for one where the backtrace output by gdb is just a single call to free() at address 0x0000000000000000 with nothing else in the stack trace, so I don't really know what to make of that. I'm thinking it happens sometime after the SIGTERM signal is caught, maybe some global variable not being cleaned up properly, or something like that.
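    One observation that may explain the symptom: mod_fastcgi hands a process one request per connection and has no concept of application threads, so a multi-threaded app only pays off if it accepts multiple concurrent connections, and the dynamic process manager's kill/respawn cycle is itself a common source of the broken-pipe storms described above. A sketch worth trying (directive names are real mod_fastcgi options; the numbers are illustrative) is to take the app out of dynamic mode and run it as a statically managed server with a deeper listen queue:

        # httpd.conf -- manage the app statically so the PM stops
        # terminating and respawning it under bursts of load.
        FastCgiServer /home/program/wwwroot/domains/www.mydomain.com/cgi-bin/program.cgi \
            -processes 4 \
            -listen-queue-depth 512 \
            -idle-timeout 60

    With a static server and a deep listen queue, burst requests wait on the socket instead of triggering the spawn/kill churn, which also makes the 25 internal threads easier to exercise through concurrent connections.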

    Read the article

  • Bye Bye Year of the Dragon, Hello BPM

    - by Ajay Khanna
    As 2012 fades and we usher in a New Year, let's look back at some of the hottest BPM trends and those we'll be seeing more of in the coming months.

    BPM is as much about people as it is about technology. As people adopt new ways of engagement, new channels of communication and new devices to interact, the changes are reflected in BPM practices. As social and mobile have become an integral part of our personal and professional lives, we'll see tighter integration of social and mobile with BPM, and more use cases emerging for smarter process management in 2013. And with products and services becoming less differentiated, organizations will strive to differentiate on customer experience. Concepts like Pace Layered Architecture and Dynamic Case Management will provide more flexibility and agility to IT groups and knowledge workers. Take a look at some of these capabilities we showcased (see video) at Oracle OpenWorld 2012. Here are some of the trends that will continue to gain momentum in 2013:

    Social networks and social media have provided a new way for businesses to engage with customers. A prospect is likely to reach out to their social network before making any purchase. Companies are increasingly engaging with customers in social networks to influence their purchasing decisions, as well as listening to customers via tools like sentiment analysis to see what customers think about a particular product or process. These insights are valuable as companies look to improve their processes. Inside organizations, workers are using social tools to engage with each other to design new products and processes, and social collaboration tools are being used to resolve issues where an employee needs consultation to reach a decision. Oracle BPM Suite includes social interaction as an integral part of its process design and work management to empower today's business users.

    Ubiquitous smart mobile devices are trending as a tool of choice for many workers. Many companies are adopting a "Bring Your Own Device" policy, and the device of choice is a tablet. Devices like smartphones and tablets not only provide mobility to workers and customers, they also provide an additional, important piece of information: the context. By integrating the mobile context (location, photos, and preferences) into your processes, organizations can make much more informed decisions, as well as offer more personalized service to customers. Using Oracle ADF Mobile, you can easily create user interfaces for mobile devices and also capture location data for process execution.

    Customer experience was at the forefront of trending topics in 2012. Organizations are trying to understand their customers better and offer them more personalized and differentiated services. Customer experience is paramount when companies design sales and support processes, and companies are looking to BPM to consistently and efficiently orchestrate customer-facing processes across disparate systems, departments and channels of communication. Oracle BPM Suite provides just the right capabilities for organizations to design and deliver an excellent customer experience.

    The Pace Layered Architecture strategy is gaining traction as a way to maximize agility and minimize disruption in organizations. It provides a framework to manage the evolution of your information system when different pieces of it are changing at different rates and need to be updated independently of one another. Oracle Fusion Middleware and Oracle BPM Suite are designed with this in mind: the database layer, integration layer, application layer, and process layer should not be required to change at the same time. Most business changes to policy or process can be made at the process layer without disrupting the whole infrastructure. By understanding the type of change needed at a particular level, organizations can become much more agile and efficient.

    Adaptive Case Management proposes more flexibility to manage processes or cases that do not follow a structured process flow. In such situations, the knowledge worker managing the case needs to evaluate what step should occur next, because the sequence of steps can't be predetermined. Another characteristic is that it requires much more collaboration than a straight-through process. As simple processes become automated and customers adopt more and more self-service, the cases that reach case workers are much more complex and need more investigation. Oracle BPM Suite includes comprehensive adaptive case management capabilities to manage such unstructured and complex processes.

    Smart BPM, or making your BPM intelligent, has been the holy grail for BPM practitioners, who imagined that one day BPM would become one with Business Intelligence, Business Activity Monitoring and Complex Event Processing, making it much more responsive and helpful in organizational decision making. In 2013, organizations will begin to deploy these intelligent BPM solutions. Oracle offers an integrated solution that brings together the powerful functionality of BI, BAM, event processing, and Real-Time Decisions to help organizations create smart, process-based solutions.

    Finally, to help customers reach their BPM goals faster and remove risks associated with BPM initiatives, Oracle has introduced Oracle Process Accelerators: pre-built, best-practice applications built on Oracle BPM Suite that are fully production grade and ready to deploy.

    These are exciting times for BPM practitioners, and there is so much to look forward to in 2013. We wish you a very happy and prosperous New Year 2013. Happy BPMing!

    Read the article

  • DNS lookup failures while accessing my website (proxy errors)

    - by Bond
    Here is the situation: until this morning, everything had been working perfectly fine for me. For the past 6 months many of my domains were accessible as:

        http://site1.myserver.com
        http://site2.myserver.com
        http://site3.myserver.com
        http://site4.myserver.com

    All of these are reverse-proxy configurations, and I have some applications behind each of them. This morning some people reported that http://site1.myserver.com/app1 is not working, although http://site1.myserver.com itself is accessible; http://site2.myserver.com and http://site3.myserver.com are accessible; but http://site4.myserver.com is not accessible. In the past 6 months I have not changed any of these Apache configurations (things were working perfectly as they were).

    The error shown in the browser while accessing http://site1.myserver.com/app1 is:

        Proxy Error
        The proxy server received an invalid response from an upstream server.
        The proxy server could not handle the request GET /app1.
        Reason: DNS lookup failure for: myserver.com

    and the same error appears for http://site4.myserver.com. So what should I check? I have checked all the Apache logs, as far as I could see, and they show:

        192.168.1.25 - - [10/Jan/2011:14:50:48 +0530] "GET /app1 HTTP/1.1" 502 531 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3"
        [Mon Jan 10 14:27:42 2011] [error] (113)No route to host: proxy: HTTP: attempt to connect to 192.168.1.3:80 (192.168.1.3) failed
        [Mon Jan 10 14:27:42 2011] [error] ap_proxy_connect_backend disabling worker for (192.168.1.3)
        [Mon Jan 10 14:27:44 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3)
        (the "disabled connection" line repeats a dozen times between 14:27:44 and 14:27:48)
        [Mon Jan 10 14:35:29 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: myserver.com returned by /app1
        (the DNS lookup failure line repeats at 14:35:30, 14:50:30 and 14:50:48)

    and for site4.myserver.com I get:

        [Mon Jan 10 14:57:40 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: site4.myserver.com returned by /favicon.ico
        [Mon Jan 10 15:02:38 2011] [error] [client <some external IP>] proxy: DNS lookup failure for: site4.myserver.com returned by /
        [Mon Jan 10 15:03:04 2011] [error] [client <some external IP>] proxy: DNS lookup failure for: site4.myserver.com returned by /, referer: http://site4.myserver.com/
        (similar lines repeat between 14:57 and 15:26, from both <some external IP> and 192.168.1.25, for / and /favicon.ico)
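    Going by those log lines, this looks less like a broken Apache configuration and more like two upstream issues: "DNS lookup failure for: myserver.com" means mod_proxy can no longer resolve the backend hostname used in the ProxyPass target (so check /etc/resolv.conf and test resolution from the proxy host itself), and "(113)No route to host ... 192.168.1.3:80" means that particular backend is down or firewalled. One way to take the failing resolver out of the equation, sketched here on the assumption that 192.168.1.3 is the backend the logs point at:

        # /etc/hosts on the proxy host -- pin the names mod_proxy must resolve.
        192.168.1.3   myserver.com  site4.myserver.com

        # Or point ProxyPass at the IP directly in the vhost (illustrative):
        ProxyPass        /app1  http://192.168.1.3/app1
        ProxyPassReverse /app1  http://192.168.1.3/app1

    If the "No route to host" lines persist after that, the problem is connectivity to the backend itself rather than DNS.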

    Read the article

  • Error when installing AppFabric 1.1 on Server 2012 64bit

    - by no9
    I am trying to install AppFabric 1.1 on 64-bit Windows Server 2012 R2.

        All updates have been installed and updates are turned on
        .NET Framework 4.0 is installed
        .NET Framework 3.5 is installed
        IIS is installed
        Windows PowerShell 3.0 (already included in Server 2012)

    I am getting the following error:

        2014-03-21 11:02:34, Information Setup ===== Logging started: 2014-03-21 11:02:34+01:00 =====
        2014-03-21 11:02:34, Information Setup File: c:\6c4006b0b3f6dee1bf616f1967\setup.exe
        2014-03-21 11:02:34, Information Setup InternalName: Setup.exe
        2014-03-21 11:02:34, Information Setup OriginalFilename: Setup.exe
        2014-03-21 11:02:34, Information Setup FileVersion: 1.1.2106.32
        2014-03-21 11:02:34, Information Setup FileDescription: Setup.exe
        2014-03-21 11:02:34, Information Setup Product: Microsoft(R) Windows(R) Server AppFabric
        2014-03-21 11:02:34, Information Setup ProductVersion: 1.1.2106.32
        2014-03-21 11:02:34, Information Setup Debug: False
        2014-03-21 11:02:34, Information Setup Patched: False
        2014-03-21 11:02:34, Information Setup PreRelease: False
        2014-03-21 11:02:34, Information Setup PrivateBuild: False
        2014-03-21 11:02:34, Information Setup SpecialBuild: False
        2014-03-21 11:02:34, Information Setup Language: Language Neutral
        2014-03-21 11:02:34, Information Setup OS Name: Windows Server 2012 R2 Standard
        2014-03-21 11:02:34, Information Setup OS Edition: ServerStandard
        2014-03-21 11:02:34, Information Setup OSVersion: Microsoft Windows NT 6.2.9200.0
        2014-03-21 11:02:34, Information Setup CurrentCulture: sl-SI
        2014-03-21 11:02:34, Information Setup Processor Architecture: AMD64
        2014-03-21 11:02:34, Information Setup Event Registration Source : AppFabric_Setup
        2014-03-21 11:02:34, Information Setup Microsoft.ApplicationServer.Setup.Upgrade.V1UpgradeSetupModule : Initiating V1.0 Upgrade module.
        2014-03-21 11:02:34, Information Setup Microsoft.ApplicationServer.Setup.Upgrade.V1UpgradeSetupModule : V1.0 is not installed.
        2014-03-21 11:02:54, Information Setup Microsoft.ApplicationServer.Setup.Upgrade.V1UpgradeSetupModule : Initiating V1 Upgrade pre-install.
        2014-03-21 11:02:54, Information Setup Microsoft.ApplicationServer.Setup.Upgrade.V1UpgradeSetupModule : V1.0 is not installed, not taking backup.
        2014-03-21 11:02:55, Information Setup Executing C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe with commandline -iru.
        2014-03-21 11:02:55, Information Setup Return code from aspnet_regiis.exe is 0
        2014-03-21 11:02:55, Information Setup Process.Start: C:\Windows\system32\msiexec.exe /quiet /norestart /i "c:\6c4006b0b3f6dee1bf616f1967\Microsoft CCR and DSS Runtime 2008 R3.msi" /l*vx "C:\Users\Administrator\AppData\Local\Temp\AppServerSetup1_1(2014-03-21 11-02-55).log"
        2014-03-21 11:02:57, Information Setup Process.ExitCode: 0x00000000
        2014-03-21 11:02:57, Information Setup Windows features successfully enabled.
        2014-03-21 11:02:57, Information Setup Process.Start: C:\Windows\system32\msiexec.exe /quiet /norestart /i "c:\6c4006b0b3f6dee1bf616f1967\Packages\AppFabric-1.1-for-Windows-Server-64.msi" ADDDEFAULT=Worker,WorkerAdmin,CacheService,CacheAdmin,Setup /l*vx "C:\Users\Administrator\AppData\Local\Temp\AppServerSetup1_1(2014-03-21 11-02-57).log" LOGFILE="C:\Users\Administrator\AppData\Local\Temp\AppServerSetup1_1_CustomActions(2014-03-21 11-02-57).log" INSTALLDIR="C:\Program Files\AppFabric 1.1 for Windows Server" LANGID=en-US
        2014-03-21 11:03:45, Information Setup Process.ExitCode: 0x00000643
        2014-03-21 11:03:45, Error Setup AppFabric installation failed because installer MSI returned with error code : 1603
        2014-03-21 11:03:45, Error Setup AppFabric installation failed because installer MSI returned with error code : 1603
        2014-03-21 11:03:45, Information Setup Microsoft.ApplicationServer.Setup.Core.SetupException: AppFabric installation failed because installer MSI returned with error code : 1603
        2014-03-21 11:03:45, Information Setup    at Microsoft.ApplicationServer.Setup.Installer.WindowsInstallerProxy.GenerateAndThrowSetupException(Int32 exitCode, LogEventSource logEventSource)
        2014-03-21 11:03:45, Information Setup    at Microsoft.ApplicationServer.Setup.Installer.WindowsInstallerProxy.Invoke(LogEventSource logEventSource, InstallMode installMode, String packageIdentity, List`1 updateList, List`1 customArguments)
        2014-03-21 11:03:45, Information Setup    at Microsoft.ApplicationServer.Setup.Installer.MsiInstaller.InstallSelectedFeatures()
        2014-03-21 11:03:45, Information Setup    at Microsoft.ApplicationServer.Setup.Installer.MsiInstaller.Install()
        2014-03-21 11:03:45, Information Setup    at Microsoft.ApplicationServer.Setup.Client.ProgressPage.StartAction()
        2014-03-21 11:03:45, Information Setup === Summary of Actions ===
        2014-03-21 11:03:45, Information Setup Required Windows components : Completed Successfully
        2014-03-21 11:03:45, Information Setup IIS Management Console : Completed Successfully
        2014-03-21 11:03:45, Information Setup Microsoft CCR and DSS Runtime 2008 R3 : Completed Successfully
        2014-03-21 11:03:45, Information Setup AppFabric 1.1 for Windows Server : Failed
        2014-03-21 11:03:45, Information Setup Hosting Services : Failed
        2014-03-21 11:03:45, Information Setup Caching Services : Failed
        2014-03-21 11:03:45, Information Setup Hosting Administration : Failed
        2014-03-21 11:03:45, Information Setup Cache Administration : Failed
        2014-03-21 11:03:45, Information Setup Microsoft Update : Skipped
        2014-03-21 11:03:45, Information Setup Microsoft Update : Skipped
        2014-03-21 11:03:45, Information Setup ===== Logging stopped: 2014-03-21 11:03:45+01:00 =====

    I have tried this solution, but with no success: http://stackoverflow.com/questions/11205927/appfabric-installation-failed-because-installer-msi-returned-with-error-code-1

    My system environment variable PSModulePath has this value: C:\Windows\System32\WindowsPowerShell\v1.0\Modules

    I have also followed this link with no success: http://jefferytay.wordpress.com/2013/12/11/installing-appfabric-on-windows-server-2012/

    Any help would be greatly appreciated!
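    A note on where to dig: MSI error 1603 is a generic "custom action failed" code, and in most AppFabric reports the custom actions fail while importing PowerShell modules, which makes PSModulePath the usual suspect (a customized, quoted, or truncated value breaks the import). The concrete failure will be in the AppServerSetup1_1_CustomActions log named in the output above. A sketch of one commonly suggested workaround, using only standard commands; the value shown is the Windows default module path:

        :: From an elevated command prompt: reset PSModulePath for this
        :: session only, then re-run the AppFabric installer from the
        :: same prompt so setup inherits the clean value.
        set PSModulePath=C:\Windows\System32\WindowsPowerShell\v1.0\Modules\
        setup.exe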

    Read the article

  • IIS 7.5 + Windows Server 2008 R2 + ASP.NET 4.0 HTTP 500 Error?

    - by Dave
    Hi, I'm having an issue I cannot track down, and I have looked through the forums without finding anything that sheds light on it. I have a fresh install of Server 2008 R2 Web, onto which I am trying to load an application I created and tested on a Windows 7 machine running IIS 7.5 and ASP.NET 4.0. Everything works fine on the dev machine, but when I use the Web Deployment Tool to move it to the server, I get an HTTP 500 error without a lot of information:

        Module: AspNetInitClrHostFailureModule
        Notification: BeginRequest
        Handler: StaticFile
        Error Code: 0x80070002
        Requested URL: http://192.168.1.83:80/
        Physical Path: C:\JustStreamIt
        Logon Method: Not yet determined
        Logon User: Not yet determined
        Failed Request Tracing Log Directory: C:\inetpub\logs\FailedReqLogFiles

    In my trace file I get:

        Warning - SET_RESPONSE_ERROR_DESCRIPTION
        ErrorDescription: An error message detailing the cause of this specific request failure can be found in the application event log of the web server. Please review this log entry to discover what caused this error to occur.

        Warning - MODULE_SET_RESPONSE_ERROR_STATUS
        ModuleName: AspNetInitClrHostFailureModule
        Notification: 1
        HttpStatus: 500
        HttpReason: Internal Server Error
        HttpSubStatus: 0
        ErrorCode: 2147942402
        ConfigExceptionInfo
        Notification: BEGIN_REQUEST
        ErrorCode: The system cannot find the file specified. (0x80070002)

    And I get the following in the Application log:

        Log Name: Application
        Source: Microsoft-Windows-IIS-W3SVC-WP
        Date: 5/28/2010 2:08:10 PM
        Event ID: 2299
        Task Category: None
        Level: Error
        Keywords: Classic
        User: N/A
        Computer: win-ltfkdo1dnfp
        Description: An application has reported as being unhealthy. The worker process will now request a recycle. Reason given: An error message detailing the cause of this specific request failure can be found in the application event log of the web server. Please review this log entry to discover what caused this error to occur. The data is the error.

        Event Xml:
        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            <Provider Name="Microsoft-Windows-IIS-W3SVC-WP" Guid="{670080D9-742A-4187-8D16-41143D1290BD}" EventSourceName="W3SVC-WP" />
            <EventID Qualifiers="49152">2299</EventID>
            <Version>0</Version>
            <Level>2</Level>
            <Task>0</Task>
            <Opcode>0</Opcode>
            <Keywords>0x80000000000000</Keywords>
            <TimeCreated SystemTime="2010-05-28T21:08:10.000000000Z" />
            <EventRecordID>1663</EventRecordID>
            <Correlation />
            <Execution ProcessID="0" ThreadID="0" />
            <Channel>Application</Channel>
            <Computer>win-ltfkdo1dnfp</Computer>
            <Security />
          </System>
          <EventData>
            <Data Name="Reason">An error message detailing the cause of this specific request failure can be found in the application event log of the web server. Please review this log entry to discover what caused this error to occur.</Data>
            <Binary>02000780</Binary>
          </EventData>
        </Event>

    Anyone have a suggestion about where I should start looking?
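    One plausible first place to look, given AspNetInitClrHostFailureModule plus 0x80070002 ("the system cannot find the file specified"): IIS is failing to initialize the .NET 4.0 CLR host for the application, which on a fresh server typically means ASP.NET 4.0 was never registered with IIS, or the site's application pool isn't targeting v4.0. A sketch of checks from an elevated command prompt ("DefaultAppPool" is an assumption; substitute the pool your site actually uses):

        :: Re-register ASP.NET 4.0 with IIS (64-bit framework path).
        %windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -iru

        :: Make sure the site's application pool targets the v4.0 runtime.
        %windir%\system32\inetsrv\appcmd.exe set apppool "DefaultAppPool" /managedRuntimeVersion:v4.0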

    Read the article

  • SQL SERVER – Guest Post – Architecting Data Warehouse – Niraj Bhatt

    - by pinaldave
    Niraj Bhatt works as an Enterprise Architect for a Fortune 500 company and has an innate passion for building and studying software systems. He is a top-rated speaker at various technical forums including Tech·Ed, MCT Summit, Developer Summit, and Virtual Tech Days, among others. Having run a successful startup for four years, Niraj enjoys working on IT innovations that can impact an enterprise bottom line, streamlining IT budgets through IT consolidation, architecture and integration of systems, performance tuning, and review of enterprise applications. He has received the Microsoft MVP award for ASP.NET, Connected Systems, and most recently Windows Azure. When he is away from his laptop, you will find him taking deep dives in automobiles, pottery, rafting, photography, cooking and financial statements, though not necessarily in that order. He is also a manager/speaker at BDOTNET, Asia's largest .NET user group. Here is the guest post by Niraj Bhatt.

    As the data in your applications grows, it's the database that usually becomes a bottleneck. It's hard to scale a relational DB, and the preferred approach for large-scale applications is to create separate databases for writes and reads, referred to as the transactional database and the reporting database. Though there are tools and techniques that can create snapshots of your transactional database for reporting purposes, sometimes they don't quite fit the reporting requirements of an enterprise. Those requirements are typically data analytics, an effective schema (one an information worker can self-service against), historical data, and better performance (flat data, no joins). This is where the need for a data warehouse, or an OLAP system, arises.

    A key point to remember is that a data warehouse is mostly a relational database. It's built on top of the same concepts: tables, rows, columns, primary keys, foreign keys, and so on. Before we talk about how data warehouses are typically structured, let's understand the key components that create a data flow between OLTP systems and OLAP systems. There are three major areas:

    a) The OLTP system should be capable of tracking its changes, as all these changes should go back to the data warehouse for historical recording. For example, if an OLTP transaction moves a customer from the silver to the gold category, the OLTP system needs to ensure that this change is tracked and sent to the data warehouse for reporting purposes. A report in this context could be how many customers, divided by geography, moved from the silver to the gold category. In data warehouse terminology this process is called Change Data Capture. There are quite a few systems that leverage database triggers to move these changes to corresponding tracking tables, and some databases offer this out of the box; e.g. SQL Server 2008 provides Change Data Capture and Change Tracking for addressing such requirements.

    b) After we make the OLTP system capable of tracking its changes, we need to provision a batch process that runs periodically, takes these changes from the OLTP system, and loads them into the data warehouse. There are many tools that can fill this gap; SQL Server Integration Services happens to be one of them.

    c) So we have an OLTP system that knows how to track its changes, and we have jobs that run periodically to move those changes to the warehouse. The question that remains is how the warehouse will record these changes. This structural concern in the data warehouse arena is covered by something called a Slowly Changing Dimension (SCD). While we will talk about dimensions in a while, SCD can be applied to pure relational tables too. An SCD enables a database structure to capture historical data; this creates multiple records for a given entity, which is why data warehouses prefer having their own primary key, often known as a surrogate key.

    As I mentioned, a data warehouse is just a relational database, but industry often attributes a specific schema style to data warehouses: the Star Schema or the Snowflake Schema. The motivation behind these styles is to create a flat database structure (as opposed to a normalized one) that is easy to understand and use, easy to query, and easy to slice and dice. A star schema is made up of dimensions and facts. Facts are generally the numbers (sales, quantity, etc.) that you want to slice and dice; fact tables hold these numbers and carry references (foreign keys) to a set of tables that provide context around them. E.g. if you have recorded 10,000 USD in sales, that number goes in a sales fact table and could have foreign keys referring to the sales agent responsible for the sale and to a time table holding the dates between which the sale was made. These agent and time tables are called dimensions; they provide context to the numbers stored in fact tables. This structure, with the fact at the center surrounded by dimensions, is called a star schema. A similar structure in which the dimension tables are normalized is called a snowflake schema.

    This relational structure of facts and dimensions serves as input for another analysis structure called a cube. Though physically a cube is a special structure supported by commercial databases like SQL Server Analysis Services, logically it's a multidimensional structure where dimensions define the sides of the cube and facts define its content; facts are often called measures inside a cube. Dimensions tend to form hierarchies. E.g. a product may be broken into categories, and categories in turn into individual items. Category and item are referred to as levels, their constituents as members, and the overall structure as a hierarchy. Measures are rolled up along the dimensional hierarchy, and these rolled-up measures are called aggregates. This may seem like an overwhelming vocabulary to deal with, but don't worry; it will sink in as you start working with cubes.

    Let's look at a few other terms we run into when talking about data warehouses. ODS, or Operational Data Store, is a frequently misused term. A few users in your organization will want to report on the most current data and can't afford to miss a single transaction in their report. Another set of users, mostly senior-level executives interested in trending, mining, forecasting and strategizing, typically don't care how current the data is or about any one specific transaction. This is where an ODS comes in handy. An ODS can use the same star schema and OLAP cubes we saw earlier; the difference is that the data inside an ODS is short-lived, kept for a few months, and the ODS syncs with the OLTP system every few minutes. The data warehouse can then sync with the ODS daily or weekly, depending on business drivers.

    Data marts are another frequently discussed topic in data warehousing. They are subject-specific data warehouses. Data warehouses that try to span an entire enterprise are normally too big to scope, build, manage and track, so they are often scaled down to data marts that support a specific segment of business such as sales, marketing, or support. Data marts, too, are often designed using the star schema model discussed earlier (see the sketch below). Industry is divided when it comes to the use of data marts. Some experts prefer having data marts alongside a central data warehouse, with the warehouse acting as an information staging and distribution hub and the spokes being data marts connected via data feeds serving summarized data. Others eliminate the need for a centralized data warehouse, citing that most users want to report on detailed data.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Best Practices, Business Intelligence, Data Warehousing, Database, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
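    To make the star-schema discussion above concrete, here is a minimal sketch in SQL. The table and column names are illustrative, following the post's own sales-agent example: two dimension tables provide the context, the fact table holds the measures, and the surrogate key plus validity dates on the agent dimension show how a Type 2 SCD records history:

        -- Dimension: sales agent (SCD Type 2 -- one row per historical version)
        CREATE TABLE dim_agent (
            agent_key   INT PRIMARY KEY,   -- surrogate key owned by the warehouse
            agent_id    INT,               -- business key from the OLTP system
            agent_name  VARCHAR(100),
            valid_from  DATE,
            valid_to    DATE
        );

        -- Dimension: time
        CREATE TABLE dim_date (
            date_key    INT PRIMARY KEY,   -- e.g. 20130115
            full_date   DATE,
            month_name  VARCHAR(20),
            year_no     INT
        );

        -- Fact: the numbers to slice and dice, pointing out to the dimensions
        CREATE TABLE fact_sales (
            agent_key    INT REFERENCES dim_agent(agent_key),
            date_key     INT REFERENCES dim_date(date_key),
            sales_amount DECIMAL(18,2),
            quantity     INT
        );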

    Read the article

  • Nginx + PHP-FPM = "Random" 502 Bad Gateway

    - by david
    I am running Nginx and proxying PHP requests via FastCGI to PHP-FPM for processing. I randomly receive 502 Bad Gateway error pages; I can reproduce the issue by clicking around my PHP websites very rapidly or refreshing a page for a minute or two. When I get the 502 error page, all I have to do is refresh the browser and the page loads properly.

    Here is my setup:

        nginx/0.7.64
        PHP 5.3.2 (fpm-fcgi) (built: Apr 1 2010 06:42:04)
        Ubuntu 9.10 (Latest 2.6 Paravirt)

    I compiled PHP-FPM using this ./configure directive:

        ./configure --enable-fpm --sysconfdir=/etc/php5/conf.d --with-config-file-path=/etc/php5/conf.d/php.ini --with-zlib --with-openssl --enable-zip --enable-exif --enable-ftp --enable-mbstring --enable-mbregex --enable-soap --enable-sockets --disable-cgi --with-curl --with-curlwrappers --with-gd --with-mcrypt --enable-memcache --with-mhash --with-jpeg-dir=/usr/local/lib --with-mysql=/usr/bin/mysql --with-mysqli=/usr/bin/mysql_config --enable-pdo --with-pdo-mysql=/usr/bin/mysql --with-pdo-sqlite --with-pspell --with-snmp --with-sqlite --with-tidy --with-xmlrpc --with-xsl

    My php-fpm.conf looks like this (the relevant parts):

        ...
        <value name="pm">
            <value name="max_children">3</value>
        ...
        <value name="request_terminate_timeout">60s</value>
        <value name="request_slowlog_timeout">30s</value>
        <value name="slowlog">/var/log/php-fpm.log.slow</value>
        <value name="rlimit_files">1024</value>
        <value name="rlimit_core">0</value>
        <value name="chroot"></value>
        <value name="chdir"></value>
        <value name="catch_workers_output">yes</value>
        <value name="max_requests">500</value>
        ...

    I've tried increasing max_children to 10, which makes no difference. I've also tried setting pm to 'dynamic' with max_children at 50 and start_servers at 5, again without any difference. I have tried using both 1 and 5 nginx worker processes.

    My fastcgi_params conf looks like:

        fastcgi_connect_timeout 60;
        fastcgi_send_timeout 180;
        fastcgi_read_timeout 180;
        fastcgi_buffer_size 128k;
        fastcgi_buffers 4 256k;
        fastcgi_busy_buffers_size 256k;
        fastcgi_temp_file_write_size 256k;
        fastcgi_intercept_errors on;
        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        fastcgi_param REDIRECT_STATUS 200;

    Nginx logs the error as:

        [error] 3947#0: *10530 connect() failed (111: Connection refused) while connecting to upstream, client: 68.40.xxx.xxx, server: www.domain.com, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.domain.com"

    PHP-FPM logs the following at the time of the error:

        [NOTICE] pid 17161, fpm_unix_init_main(), line 255: getrlimit(nofile): max:1024, cur:1024
        [NOTICE] pid 17161, fpm_event_init_main(), line 93: libevent: using epoll
        [NOTICE] pid 17161, fpm_init(), line 50: fpm is running, pid 17161
        [DEBUG] pid 17161, fpm_children_make(), line 403: [pool default] child 17162 started
        [DEBUG] pid 17161, fpm_children_make(), line 403: [pool default] child 17163 started
        [DEBUG] pid 17161, fpm_children_make(), line 403: [pool default] child 17164 started
        [NOTICE] pid 17161, fpm_event_loop(), line 111: ready to handle connections

    My CPU usage maxes out around 10-15% when I recreate the issue, and free memory (free -m) shows 130MB. I had this intermittent 502 Bad Gateway issue when I was using php5-cgi to service my PHP requests as well. Does anyone know how to fix this?
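    One thing the FPM log above actually shows: those are startup messages ("fpm is running", children being started), so the master process was restarting at the very moment nginx got "connection refused", and for that window nothing was listening on 127.0.0.1:9000 at all. That suggests two lines of investigation: find out why the master restarts under load (a crashing child, or something externally restarting the service), and give the listener a deeper backlog so bursts queue instead of being refused. A sketch, with the caveat that the key name follows the old XML config format this PHP 5.3.2 build uses and the numbers are illustrative:

        <value name="listen_options">
            <!-- Assumption: the default backlog is small; a larger value lets
                 bursts queue. The kernel cap may also need raising:
                     sysctl -w net.core.somaxconn=1024 -->
            <value name="backlog">1024</value>
        </value>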

    Read the article

  • Avoid the “Social Silo” - Learn Why and How

    - by Brian Dayton
    I'm not going to spend any more real estate than needed on this: social media is big. Facebook hit the billion-user mark in October; that's 1 out of every 7 humans on the planet. This past summer (in the Northern hemisphere), Twitter passed the 400 million tweets per day mark. The list of social properties and data points goes on and on.

    With social, your customer, prospect, or constituent has pervasive access, through mobile, to a global audience and the ability to influence friends, friends of friends, and even people they will never meet. They also have a unique opportunity to forge a deeper relationship with your business: telling you what they like, what they don't like, how you can help, and what they'd like to see more of. Are you listening?

    What's the Bottom Line for Business?

    Businesses need to be where their customers are: on social properties. They need to be available and responsive in those channels, 24x7x365. They need to engage and communicate in new ways, sometimes in less than 140 characters and with empathy, not a one-way megaphone. Finally, businesses need to look at social as an extension of their existing business practices, not as a siloed communication channel limited to marketing.

    Social Can't Be a Silo - Learn Why @ Oracle CloudWorld

    When a business is on social networks, it represents the whole business. That's how a customer, constituent, partner or potential candidate sees it. Organizations that have moved on the opportunity to build closer relationships through social marketing have already made the first step. Social selling, service, eCommerce, and recruiting are external-facing opportunities that leading organizations are moving on right now. This strategy, one of weaving social into and across your business processes and leveraging social concepts and technologies for internal collaboration, is something you can learn about during an Oracle CloudWorld event in a city near you. You'll hear and see social relationship management concepts, best practices, and recommendations woven into topics, discussions, and demonstrations throughout the event, from Marketing and Sales to Service and Human Resources.

    Stay Tuned and Avoid Potholes

    By all indications social is here to stay, but it's moving fast and social business strategies are evolving rapidly. At Oracle CloudWorld you'll also get the opportunity to learn how to avoid some of the potholes on the road to #socialbusiness. Stay tuned to this blog; in future posts I'll cover some of those potholes, including the challenges of Social@Scale and parallel processes. Jump-start your social business strategy, or learn how to refine and expand what you're doing already, at Oracle CloudWorld. Want to learn more about what Oracle is doing in social? Check out www.oracle.com/social or, if you're looking for a quick read, my co-worker Pat Ma has a great post on this blog summarizing some popular social relationship management use cases.

    Read the article

  • The Internet of Things Is Really the Internet of People

    - by HCM-Oracle
    By Mark Hurd - Originally posted on LinkedIn

    As I speak with CEOs around the world, our conversations invariably come down to this central question: Can we change our corporate cultures and the ways we train and reward our people as rapidly as new technology is changing the work we do, the products we make and how we engage with customers? It's a critical consideration given today's pace of disruption, which is already straining traditional management models and HR strategies. Winning companies will bring innovation and vision to their employees and partners by attracting people who will thrive in this emerging world of relentless data, predictive analytics and unlimited what-if scenarios.

    So, where are we going to find employees who are as familiar with complex data as I am with orderly financial statements and business plans? I'm not just talking about high-end data scientists, who most certainly will sit at or near the top of the new decision-making pyramid. Global organizations will need creative and motivated people who will devote their time to manipulating, reviewing, analyzing, sorting and reshaping data to drive business and delight customers. This might seem evident, but my conversations with business people across the globe indicate that only a small number of companies get it.

    In the past few years, executives have been busy keeping pace with seismic upheavals, including the rise of social customer engagement, the rapid acceleration of product-development cycles and the relentless move to mobile-first. But all of that, I think, is just the start of an uphill climb. Today, about 10 billion devices across the globe are connected to the Internet. In a couple of years, that number will probably double, and not because we will have bought 10 billion more computers, smartphones and tablets. This unprecedented explosion of Big Data is being triggered by the Internet of Things, which is another way of saying that the numerous intelligent devices touching our everyday lives are all becoming interconnected. Home appliances, food, industrial equipment, pets, pharmaceutical products, pallets, cars, luggage, packaged goods, athletic equipment, even clothing will be streaming data. Some of that data will provide important information about how to run our businesses and lead healthier lives; much of it will be extraneous. How does a CEO cope with this unimaginable volume and velocity of data, much less harness it to excite and delight customers? Here are three things CEOs must do to tackle this challenge.

    1) Take care of your employees, take care of your customers. Larry Ellison recently noted that the two most important priorities for any CEO today revolve around people: taking care of your employees and taking care of your customers. Companies in today's hypercompetitive business environment simply won't be able to survive unless they've got world-class people at all levels of the organization. CEOs must demonstrate a commitment to employees by becoming champions for HR systems that empower every employee to fully understand his or her job, how it ties into the corporate framework, what's expected of them, what training is available, and how they can use an embedded social network to communicate, collaborate and excel. Over the next several years, many of the world's top industrialized economies will see turnover in the workforce on an unprecedented scale. Across the United States, Europe, China and Japan, the "baby boomer" generation will be retiring and, by 2020, we'll see turnover in those regions ranging from 10 to 30 percent. How will companies replace all that brainpower, experience and know-how? How will CEOs perpetuate the best elements of their corporate cultures in the midst of this profound turnover? The challenge will be daunting, but it can be met with world-class HR technology. As companies begin replacing up to 30 percent of their workforce, they will need thousands of new types of data-native workers to exploit the Internet of Things in the service of the Internet of People. The shift in corporate mindset here can't be overstated. The CEO has to be at the forefront of this new way of recruiting, training, motivating, aligning and developing truly 21st-century talent.

    2) Start thinking today about the Internet of People. Some forward-looking companies have begun pursuing the "democratization of data." This allows more people within a company greater access to data that can help them make better decisions, move more quickly and keep pace with the changing interests and demands of their customers. As a result, we've seen organizations flatten out, growing numbers of well-informed people authorized to make decisions without corporate approval, and a movement of engagement away from headquarters to the point of contact with the customer. These are profound changes, and I'm a huge proponent. As I think about what the next few years will bring as companies become deluged with unprecedented streams of data, I'm convinced that we'll need dramatically different organizational structures, decision-making models, risk-management profiles and reward systems. For example, if a car company's marketing department mines incoming data to determine that customers are shifting rapidly toward neon-green models, how many layers of approval, review, analysis and sign-off will be needed before the factory starts cranking out more neon-green cars? Will we continue to have organizations where too many people are empowered to say "No" and too few are allowed to say "Yes"? If so, how will those companies be able to compete in a world in which customers have more choices, instant access to more information and less loyalty than ever before? That's why I think CEOs need to begin thinking about this problem right now, not in a year or two when competitors are already reshaping their organizations to match the marketplace's new realities.

    3) Partner with universities to help create a new type of highly skilled worker. Several years ago, universities introduced new undergraduate as well as graduate-level programs in analytics and informatics as the business need for deeper insights into the booming world of data began to explode. Today, as the growth rate of data continues to soar, we know that the Internet of Things will only intensify that growth. Moreover, as Big Data fuels insights that can be shaped into products and services that generate revenue, the demand for data scientists and data specialists will go on unabated. Beyond that top-level expertise, companies are going to need data-native thinkers at all levels of the organization. Where will this new type of worker come from? I think it's incumbent on the business community to collaborate with universities to develop new curricula designed to turn out graduates who can capitalize on the data-driven world that the Internet of Things is surely going to create. These new workers will create opportunities to help their companies in fields as diverse as product design, customer service, marketing, manufacturing and distribution. They will become innovative leaders in fashioning an entirely new type of workforce and organizational structure optimized to fully exploit the Internet of Things so that it becomes a high-value enabler of the Internet of People.

    Mark Hurd is President of Oracle Corporation and a member of the company's Board of Directors. He joined Oracle in 2010, bringing more than 30 years of technology industry leadership, computer hardware expertise, and executive management experience to his role with the company. As President, Mr. Hurd oversees the corporate direction and strategy for Oracle's global field operations, including marketing, sales, consulting, alliances and channels, and support. He focuses on strategy, leadership, innovation, and customers.

    Read the article

  • MySQL Memory usage

    - by Rob Stevenson-Leggett
    Our MySQL server seems to be using a lot of memory. I've tried looking for slow queries and queries with no index and have halved the peak CPU usage and Apache memory usage, but the MySQL memory stays constant at 2.2GB (~51% of available memory on the server). Here's the graph from Plesk. Running top in the SSH window shows the same figures. Does anyone have any ideas on why the memory usage is constant like this and not peaks and troughs with usage of the app? Here's the output of the MySQL Tuning Primer script:

    -- MYSQL PERFORMANCE TUNING PRIMER --
    - By: Matthew Montgomery -
    MySQL Version 5.0.77-log x86_64
    Uptime = 1 days 14 hrs 4 min 21 sec
    Avg. qps = 22
    Total Questions = 3059456
    Threads Connected = 13
    Warning: Server has not been running for at least 48hrs. It may not be safe to use these recommendations.
    To find out more information on how each of these runtime variables effects performance visit: http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html
    Visit http://www.mysql.com/products/enterprise/advisors.html for info about MySQL's Enterprise Monitoring and Advisory Service.

    SLOW QUERIES
    The slow query log is enabled. Current long_query_time = 1 sec. You have 6 out of 3059477 that take longer than 1 sec. to complete. Your long_query_time seems to be fine.

    BINARY UPDATE LOG
    The binary update log is NOT enabled. You will not be able to do point in time recovery. See http://dev.mysql.com/doc/refman/5.0/en/point-in-time-recovery.html

    WORKER THREADS
    Current thread_cache_size = 0. Current threads_cached = 0. Current threads_per_sec = 2. Historic threads_per_sec = 0. Threads created per/sec are overrunning threads cached. You should raise thread_cache_size.

    MAX CONNECTIONS
    Current max_connections = 100. Current threads_connected = 14. Historic max_used_connections = 20. The number of used connections is 20% of the configured maximum. Your max_connections variable seems to be fine.

    INNODB STATUS
    Current InnoDB index space = 6 M. Current InnoDB data space = 18 M. Current InnoDB buffer pool free = 0 %. Current innodb_buffer_pool_size = 8 M. Depending on how much space your innodb indexes take up it may be safe to increase this value to up to 2 / 3 of total system memory.

    MEMORY USAGE
    Max Memory Ever Allocated : 2.07 G. Configured Max Per-thread Buffers : 274 M. Configured Max Global Buffers : 2.01 G. Configured Max Memory Limit : 2.28 G. Physical Memory : 3.84 G. Max memory limit seems to be within acceptable norms.

    KEY BUFFER
    Current MyISAM index space = 4 M. Current key_buffer_size = 7 M. Key cache miss rate is 1 : 40. Key buffer free ratio = 81 %. Your key_buffer_size seems to be fine.

    QUERY CACHE
    Query cache is supported but not enabled. Perhaps you should set the query_cache_size.

    SORT OPERATIONS
    Current sort_buffer_size = 2 M. Current read_rnd_buffer_size = 256 K. Sort buffer seems to be fine.

    JOINS
    Current join_buffer_size = 132.00 K. You have had 16 queries where a join could not use an index properly. You should enable "log-queries-not-using-indexes" and then look for non-indexed joins in the slow query log. If you are unable to optimize your queries you may want to increase your join_buffer_size to accommodate larger joins in one pass. Note! This script will still suggest raising the join_buffer_size when ANY joins not using indexes are found.

    OPEN FILES LIMIT
    Current open_files_limit = 1024 files. The open_files_limit should typically be set to at least 2x-3x that of table_cache if you have heavy MyISAM usage. Your open_files_limit value seems to be fine.

    TABLE CACHE
    Current table_cache value = 64 tables. You have a total of 426 tables. You have 64 open tables. Current table_cache hit rate is 1%, while 100% of your table cache is in use. You should probably increase your table_cache.

    TEMP TABLES
    Current max_heap_table_size = 16 M. Current tmp_table_size = 32 M. Of 15134 temp tables, 9% were created on disk. Effective in-memory tmp_table_size is limited to max_heap_table_size. Created disk tmp tables ratio seems fine.

    TABLE SCANS
    Current read_buffer_size = 128 K. Current table scan ratio = 2915 : 1. read_buffer_size seems to be fine.

    TABLE LOCKING
    Current Lock Wait ratio = 1 : 142213. Your table locking seems to be fine.

    The app is a Facebook game with about 50-100 concurrent users. Thanks, Rob
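    Taking the primer's recommendations at face value, a first-pass my.cnf adjustment might look like the sketch below; the values are illustrative starting points, not measured ones (re-run the script after 48 hours of uptime before trusting its numbers):

        # /etc/my.cnf fragment -- hedged starting points from the primer output above
        [mysqld]
        thread_cache_size = 8            # threads created per second are overrunning the cache
        table_cache       = 512          # 426 tables against a 1% hit rate at table_cache = 64
        query_cache_size  = 32M          # query cache is supported but not enabled
        log-queries-not-using-indexes    # surface the 16 non-indexed joins in the slow log

    As to the flat 2.2GB line itself: MySQL allocates its configured global buffers up front and holds them, so resident memory tends to track configuration ("Configured Max Global Buffers: 2.01 G" above) rather than rising and falling with load.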

    Read the article

  • Announcing: Improvements to the Windows Azure Portal

    - by ScottGu
    Earlier today we released a number of enhancements to the new Windows Azure Management Portal. These new capabilities include:

    Service Bus Management and Monitoring
    Support for Managing Co-administrators
    Import/Export support for SQL Databases
    Virtual Machine Experience Enhancements
    Improved Cloud Service Status Notifications
    Media Services Monitoring Support
    Storage Container Creation and Access Control Support

    All of these improvements are now live in production and available to start using immediately. Below are more details on them.

    Service Bus Management and Monitoring
    The new Windows Azure Management Portal now supports Service Bus management and monitoring. Service Bus provides rich messaging infrastructure that can sit between applications (or between cloud and on-premise environments) and allow them to communicate in a loosely coupled way for improved scale and resiliency. With the new Service Bus experience, you can now create and manage Service Bus Namespaces, Queues, Topics, Relays and Subscriptions. You can also get rich monitoring for Service Bus Queues, Topics and Subscriptions. To create a Service Bus namespace, you can now select the “Service Bus” tab in the Windows Azure portal and then simply select the CREATE command. Doing so will bring up a new “Create a Namespace” dialog that allows you to name and create a new Service Bus Namespace. Once created, you can obtain security credentials associated with the Namespace via the ACCESS KEY command. This gives you the ability to obtain the connection string associated with the service namespace. You can copy and paste these values into any application that requires these credentials. It is also now easy to create Service Bus Queues and Topics via the NEW experience in the portal drawer. Simply click the NEW command and navigate to the “App Services” category to create a new Service Bus entity. Once you provision a new Queue or Topic it can be managed in the portal. Clicking on a namespace will display all queues and topics within it. Clicking on an item in the list will allow you to drill down into a dashboard view that allows you to monitor the activity and traffic within it, as well as perform operations on it. For example, below is a view of an “orders” queue – note how we now surface both the incoming and outgoing message flow rate, as well as the total queue length and queue size. To monitor pub/sub subscriptions you can use the ADD METRICS command within a topic and select a specific subscription to monitor.

    Support for Managing Co-Administrators
    You can now add co-administrators for your Windows Azure subscription using the new Windows Azure Portal. This allows you to share management of your Windows Azure services with other users. Subscription co-administrators share the same administrative rights and permissions that the service administrator has, except that a co-administrator cannot change or view billing details about the account, nor remove the service administrator from a subscription. In the SETTINGS section, click on the ADMINISTRATORS tab, and select the ADD button to add a co-administrator to your subscription. To add a co-administrator, you specify the email address for a Microsoft account (formerly Windows Live ID) or an organizational account, and choose the subscription you want to add them to. You can later update the subscriptions that the co-administrator has access to by clicking on the EDIT button, and then selecting or deselecting the subscriptions to which they belong.
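    Tying the Service Bus section above to code: the queues and topics you create in the portal are consumed through the existing client library. Below is a minimal sketch, assuming the classic Microsoft.ServiceBus client library (the WindowsAzure.ServiceBus NuGet package) and the “orders” queue from the example, with the connection string copied from the ACCESS KEY dialog; the payload is purely illustrative:

        // Sketch: send a single message to the "orders" queue created in the portal.
        using Microsoft.ServiceBus.Messaging;

        class SendOrder
        {
            static void Main()
            {
                // Placeholder connection string, as copied from the portal's ACCESS KEY command.
                var connectionString = "Endpoint=sb://mynamespace.servicebus.windows.net/;SharedSecretIssuer=owner;SharedSecretValue=...";
                var client = QueueClient.CreateFromConnectionString(connectionString, "orders");
                client.Send(new BrokeredMessage("example order payload"));
                client.Close();
            }
        }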
    Import/Export Support for SQL Databases
    The Windows Azure administration portal now supports importing and exporting SQL Databases to/from Blob Storage. Databases can be imported/exported to blob storage using the same BACPAC file format that is supported with SQL Server 2012. Among other benefits, this makes it easy to copy and migrate databases between on-premise and cloud environments. SQL Databases now have an EXPORT command in the bottom drawer that, when pressed, will prompt you to save your database to a Windows Azure storage container. The UI allows you to choose an existing storage account or create a new one, as well as the name of the BACPAC file to persist in blob storage. You can also now import and create a new SQL Database by using the NEW command. This will prompt you to select the storage container and file to import the database from. The Windows Azure Portal enables you to monitor the progress of import and export operations. If you choose to log out of the portal, you can come back later and check on the status of all of the operations in the new history tab of the SQL Database server – this shows your entire import and export history and the status (success/fail) of each.

    Enhancements to the Virtual Machine Experience
    One of the common pain-points we have heard from customers using the preview of our new Virtual Machine support has been the inability to delete the associated VHDs when a VM instance (or VM drive) gets deleted. Prior to today’s release the VHDs would continue to sit in your storage account and accumulate storage charges. You can now navigate to the Disks tab within the Virtual Machine extension, select a VM disk to delete, and click the DELETE DISK command. When you click the DELETE DISK button you have the option to delete the disk plus the associated .VHD file (completely clearing it from storage). Alternatively, you can delete the disk but still retain a .VHD copy of it in storage.

    Improved Cloud Service Status Notifications
    The Windows Azure portal now exposes more information about the health status of role instances. If any of the instances are in a non-running state, the status at the top of the dashboard will summarize the status (and update automatically as the role health changes). Clicking the instance hyperlink within this status summary view will navigate you to a detailed role instance view, and allow you to get more detailed health status of each of the instances. The portal has been updated to provide more specific status information within this detailed view – giving you better visibility into the health of your app.

    Monitoring Support for Media Services
    Windows Azure Media Services allows you to create media processing jobs (for example: encoding media files) in your Windows Azure Media Services account. In the Windows Azure Portal, you can now monitor the number of encoding jobs that are queued up for processing as well as active, failed and queued tasks for encoding jobs. On your media services account dashboard, you can visualize the monitoring data for the last 6 hours, 24 hours or 7 days.

    Storage Container Creation and Access Control Support
    You can now create Windows Azure Storage containers from within the Windows Azure Portal.
    After selecting a storage account, you can navigate to the CONTAINERS tab and click the ADD CONTAINER command. This will display a dialog that lets you name the new container and control access to it. You can also update the access setting as well as the container metadata of existing containers by selecting one and then using the new EDIT CONTAINER command. This will then bring up the edit container dialog that allows you to change and save its settings. In addition to creating and editing containers, you can click on them within the portal to drill in and view the blobs within them.

    Summary
    The above features are all now live in production and available to use immediately. If you don’t already have a Windows Azure account, you can sign up for a free trial and start using them today. Visit the Windows Azure Developer Center to learn more about how to build apps with it. We’ll have even more new features and enhancements coming later this month – including support for the recent Windows Server 2012 and .NET 4.5 releases (we will enable new web and worker role images with Windows Server 2012 and .NET 4.5, and support .NET 4.5 with Websites). Keep an eye out on my blog for details as these new features become available.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Cannot get official CentOS 5.4 BIND package to start

    - by Brian Cline
    Yesterday I installed CentOS 5.4 on one of my servers, and it appears that the official BIND/named package has trouble starting for reasons I cannot deduce. Here is what happens:

    [root@hal init.d]# service named start
    Starting named:
    Error in named configuration:
    /etc/named.conf:57: open: named.root.hints: permission denied
    [FAILED]

    The line in question, with the directory option for context:

    // further up in the file:
    directory "/var/named";
    // line 57:
    include "named.root.hints";

    Like you, my first reaction was to check permissions on /var/named/named.root.hints, /var/named, and /var to make sure the named user would be able to read it. Here are the permissions at each level:

    drwxr-xr-x 19 root root  4096 Nov  3 02:05 var
    drwxr-x---  5 root named 4096 Nov  3 02:36 named
    -rw-r--r--  1 named named  524 Mar 29  2006 named.root.hints

    Everything appears to be fine permission-wise. The same error occurs if the /var/named directory is writable by the named user. I've even temporarily allowed the named user to log in via bash, su'ed from root to named, and checked that I was, in fact, able to cat /var/named/named.root.hints successfully. (Yes, don't worry: I changed the shell back to nologin.) My last endeavor showed that BIND is able to run under the named user account and start up just fine, if done so manually:

    [root@hal ~]# named -u named -g
    03-Nov-2009 16:31:02.021 starting BIND 9.3.6-P1-RedHat-9.3.6-4.P1.el5 -u named -g
    03-Nov-2009 16:31:02.021 adjusted limit on open files from 1024 to 1048576
    03-Nov-2009 16:31:02.021 found 2 CPUs, using 2 worker threads
    03-Nov-2009 16:31:02.021 using up to 4096 sockets
    03-Nov-2009 16:31:02.028 loading configuration from '/etc/named.conf'
    03-Nov-2009 16:31:02.030 using default UDP/IPv4 port range: [1024, 65535]
    03-Nov-2009 16:31:02.031 using default UDP/IPv6 port range: [1024, 65535]
    03-Nov-2009 16:31:02.034 listening on IPv4 interface lo, 127.0.0.1#53
    03-Nov-2009 16:31:02.034 listening on IPv4 interface eth0, 10.0.0.5#53
    03-Nov-2009 16:31:02.034 listening on IPv4 interface eth1, ww.xx.yy.zz#53
    03-Nov-2009 16:31:02.040 command channel listening on 127.0.0.1#953
    03-Nov-2009 16:31:02.040 command channel listening on ::1#953
    03-Nov-2009 16:31:02.040 ignoring config file logging statement due to -g option
    03-Nov-2009 16:31:02.041 zone 0.in-addr.arpa/IN/localhost_resolver: loaded serial 42
    03-Nov-2009 16:31:02.042 zone 0.0.127.in-addr.arpa/IN/localhost_resolver: loaded serial 1997022700
    03-Nov-2009 16:31:02.042 zone 255.in-addr.arpa/IN/localhost_resolver: loaded serial 42
    03-Nov-2009 16:31:02.042 zone 0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa/IN/localhost_resolver: loaded serial 1997022700
    03-Nov-2009 16:31:02.043 zone localdomain/IN/localhost_resolver: loaded serial 42
    03-Nov-2009 16:31:02.043 zone localhost/IN/localhost_resolver: loaded serial 42
    03-Nov-2009 16:31:02.043 zone x.y.z.in-addr.arpa/IN/internal: loaded serial 1
    03-Nov-2009 16:31:02.044 zone x.y.z/IN/internal: loaded serial 2
    03-Nov-2009 16:31:02.045 running

    What type and size of firearm should I use to resolve this? I'd prefer something with automatic ammunition, and, at worst, it should be able to fit on my shoulder. Of course I am open to suggestions.
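    One hedged suggestion before resorting to firearms: on CentOS, a "permission denied" that survives correct POSIX permissions is frequently SELinux rather than file modes. The init script starts named in its confined SELinux domain, while a manual start from a root shell may run unconfined, which would explain exactly this difference in behavior. Worth checking:

        # Do the files carry the SELinux context BIND expects?
        ls -Z /var/named/named.root.hints
        getenforce                                   # "Enforcing" means SELinux is active
        restorecon -v /var/named/named.root.hints    # restore the default context if it was lost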

    Read the article

  • DNS server BIND is not working

    - by milad
    I just installed BIND on RHEL 6 and pointed a domain to that server, but when I ping the domain it returns error 1214. Here is my named.conf:

    //
    // named.conf
    //
    // Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
    // server as a caching only nameserver (as a localhost DNS resolver only).
    //
    // See /usr/share/doc/bind*/sample/ for example named configuration files.
    //
    options {
        listen-on port 53 { any; };
        listen-on-v6 port 53 { ::1; };
        directory "/var/named";
        dump-file "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        allow-query { any; };
        recursion yes;
        dnssec-enable yes;
        dnssec-validation yes;
        dnssec-lookaside auto;
        /* Path to ISC DLV key */
        bindkeys-file "/etc/named.iscdlv.key";
        managed-keys-directory "/var/named/dynamic";
    };
    logging {
        channel default_debug {
            file "data/named.run";
            severity dynamic;
        };
    };
    zone "." IN {
        type hint;
        file "named.ca";
    };
    include "/etc/named.rfc1912.zones";
    include "/etc/named.root.key";
    zone "mydomain.com" {
        type master;
        file "/var/named/data/named.mydomain.com";
        allow-update { none; };
    };

    And the content of /var/named/data/named.mydomain.com:

    $TTL 38400
    mydomain.com. IN SOA ns1.mydomain.com. milad.yahoo.com. (
        2012101201 ; serial number YYMMDDNN
        28800      ; Refresh
        7200       ; Retry
        864000     ; Expire
        38400 )    ; Min TTL
    mydomain.com.     IN A  1.2.3.4
    www               IN A  1.2.3.4
    ns1.mydomain.com. IN A  1.2.3.4
    ns2.mydomain.com. IN A  1.2.3.4
    mydomain.com.     IN NS ns1.mydomain.com.
    mydomain.com.     IN NS ns2.mydomain.com.

    And I'm sure the named service is running:

    [root@server ~]# service named status
    version: 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.3
    CPUs found: 8
    worker threads: 8
    number of zones: 20
    debug level: 0
    xfers running: 0
    xfers deferred: 0
    soa queries in progress: 0
    query logging is OFF
    recursive clients: 0/0/1000
    tcp clients: 0/100
    server is up and running
    named (pid 26299) is running...

    Thanks for your answers. I know that ping is not the job of BIND; I use it just to check whether the domain is pointed to the host or not (ping is open on my server, as I get replies when pinging the IP). I use network-tools.com to ping the domain. Here is the output of the dig utility:

    dig mydomain.com
    ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.3 <<>> mydomain.com
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 6806
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
    ;; QUESTION SECTION:
    ;mydomain.com. IN A
    ;; Query time: 321 msec
    ;; SERVER: 5.6.7.8#53(5.6.7.8)   ## note that 5.6.7.8 is my IDC DNS IP
    ;; WHEN: Sun Oct 14 23:53:47 2012
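    A hedged first step before anything else: let BIND's own validators inspect the files quoted above (paths exactly as given in the question):

        named-checkconf /etc/named.conf
        named-checkzone mydomain.com /var/named/data/named.mydomain.com

    If both come back clean, the SERVFAIL in the dig output may instead point at the dnssec-validation settings in named.conf or at the resolver at 5.6.7.8, rather than at the zone itself.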

    Read the article

  • Azure Diagnostics: The Bad, The Ugly, and a Better Way

    - by jasont
    If you’re a .NET web developer today, no doubt you’ve enjoyed watching Windows Azure grow up over the past couple of years. The platform has scaled, stabilized (mostly), and added on a slew of great (and sometimes overdue) features. What was once just an endpoint to host a solution now gives developers tremendous flexibility and options in the platform. Organizations are building new solutions and offerings on the platform, and others have, or are in the process of, migrating existing applications out of their own data centers into the Azure cloud. Whether doing new application development or migrating legacy applications, every development shop and IT organization needs to monitor their applications in the cloud, the same as they do on premises. Azure Diagnostics has some capabilities, but what I constantly hear from users is that it’s either (a) not enough, or (b) too cumbersome to set up. Today, Stackify is happy to announce that we fully support Azure deployments, just the same as your on-premises deployments. Let’s take a look below and compare and contrast the options.

    Azure Diagnostics
    Let’s crack open the Windows Azure documentation on Azure Diagnostics and see just how easy it is to use. The high-level steps are:

    Step 1: Import the Diagnostics
    Oh, I’ve already deployed my app without the diagnostics module. Guess I can’t do anything until I do this and re-deploy.

    Step 2: Configure the Diagnostics (and multiple sub-steps)
    Do I want it all? Or just pieces of it? Whoops, forgot to include a specific performance counter; I guess I’ll have to deploy again. Wait a minute… I have to specifically code these performance counters into my role’s OnStart() method, compile and deploy again? And query and consume it myself?

    Step 3: (Optional) Permanently store diagnostic data
    Lucky for me, Azure storage has gotten pretty cheap. But how often should I move the data into storage? I want to see real-time data, so I guess that’s out now as well.

    Step 4: (Optional) View stored diagnostic data
    Optional? Of course I want to see it. Conveniently, Microsoft recommends 3 tools to do this with. Un-conveniently, none of these are web based, and they all just give you access to raw data and very little charting or real-time intelligence. Just… data. Never mind that one product seems to have gotten stale since a recent acquisition, and doesn’t even have screenshots!

    So, let’s summarize: lots of diagnostics data is available, but think realistically. Think DevOps. What happens when you are in the middle of a major production performance issue and you don’t have the diagnostics you need? You are redeploying an application (and thankfully you have a great branching strategy, so you feel perfectly safe just willy-nilly launching code into prod, don’t you?) to get data, then shipping it to storage, and then digging through that data to find a needle in a haystack. Would you like to be able to troubleshoot a performance issue in the middle of the night, or on a weekend, from your iPad or home computer’s web browser? Forget it: the best you get is a spark line in the Azure portal. If it’s real pointy, you probably have an issue; but since there is no alert based on a threshold, your customers have likely already let you know. And high CPU, Memory, I/O, or Network doesn’t tell you anything about where the problem is.

    The Better Way – Stackify
    Stackify supports application and server monitoring in real time, all through a great web interface.
    All of the things that Azure Diagnostics provides, Stackify provides for your on-premises deployments, and you don’t need to know ahead of time that you’ll need it. It’s always there, it’s always on. Azure deployments are essentially no different than on-premises: it’s a Windows Server (or Linux) box in the cloud, behind a different firewall than your corporate servers. That’s it. Stackify can provide the same powerful tools to your Azure deployments in two simple steps.

    Step 1
    Add a startup task to your web or worker role and deploy. If you can’t deploy and need it right now, no worries! Remote Desktop to the Azure instance and you can execute a PowerShell script to download and install Stackify.

    Step 2
    Log in to your account at www.stackify.com and begin monitoring as much as you want, as often as you want, and see the results instantly.

    WMI? It’s there.
    Event Viewer? You’ve got it.
    File System Access? Yes, please! Would love to make sure my web.config is correct.
    IIS / App Pool Info? Yep. You can even restart it.
    Running Services? All of them. Start and stop them to your heart’s content.
    SQL Database access? You bet’cha.
    Alerts and Notifications? Of course! You should know before your customers let you know.
    … and so much more.

    Conclusion
    Microsoft has shown, consistently, that they love developers, developers, developers. What every developer needs to realize from this is that they’ve given you a canvas, which is exactly what Azure is. It’s great infrastructure that is readily available, easy to manage, and fairly cost effective. However, the tooling is your responsibility. What you get, at best, is bare bones. App and server diagnostics should be available when you need them. While we, as developers, try to plan for and think of everything ahead of time, there will come times when we need to get data that just isn’t available. And having to go through a lot of cumbersome steps to get that data, and then having to find a friendlier way to consume it… well, that just doesn’t make a lot of sense to me. I’d rather spend my time writing and developing features and completing bug fixes for my applications than writing code to monitor and diagnose.
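    For reference, the OnStart() wiring complained about in Step 2 looks roughly like the sketch below, using the classic Microsoft.WindowsAzure.Diagnostics API; the counter and intervals are chosen purely for illustration:

        using System;
        using Microsoft.WindowsAzure.Diagnostics;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public class WebRole : RoleEntryPoint
        {
            public override bool OnStart()
            {
                // Every counter must be coded here, compiled, and redeployed -- the pain point above.
                var config = DiagnosticMonitor.GetDefaultInitialConfiguration();
                config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
                {
                    CounterSpecifier = @"\Processor(_Total)\% Processor Time",
                    SampleRate = TimeSpan.FromSeconds(30)
                });
                // Data only reaches table storage on this schedule; nothing here is real time.
                config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
                DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);
                return base.OnStart();
            }
        }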

    Read the article

  • The Faces in the Crowdsourcing

    - by Applications User Experience
    By Jeff Sauro, Principal Usability Engineer, Oracle

    Imagine having access to a global workforce of hundreds of thousands of people who can perform tasks or provide feedback on a design quickly and almost immediately. Distributing simple tasks not easily done by computers to the masses is called "crowdsourcing" and until recently was an interesting concept, but due to practical constraints wasn't used often. Enter Amazon.com. For five years, Amazon has hosted a service called Mechanical Turk, which provides an easy interface to the crowds. The service has almost half a million registered, global users performing a quarter of a million human intelligence tasks (HITs). HITs are submitted by individuals and companies in the U.S. and pay from $.01 for simple tasks (such as determining if a picture is offensive) to several dollars (for tasks like transcribing audio). What do we know about the people who toil away in this digital crowd? Can we rely on the work done in this anonymous marketplace?

    [Image: A rendering of the actual Mechanical Turk (from Wikipedia)]

    Knowing who is behind Amazon's Mechanical Turk is fitting, considering the history of the actual Mechanical Turk. In the late 1700s, a mechanical chess-playing machine awed crowds as it beat master chess players in what was thought to be a mechanical miracle. It turned out that the creator, Wolfgang von Kempelen, had a small person (also a chess master) hiding inside the machine operating the arms to provide the illusion of automation. The field of human-computer interaction (HCI) is quite familiar with gathering user input and incorporating it into all stages of the design process. It makes sense, then, that Mechanical Turk was a popular discussion topic at the recent Computer Human Interaction usability conference sponsored by the Association for Computing Machinery in Atlanta. It is already being used as a source for input on Web sites (for example, Feedbackarmy.com) and behavioral research studies. Two papers shed some light on the faces in this crowd. One paper tells us about the shifting demographics from mostly stay-at-home moms to young men in India. The second paper discusses the reliability and quality of work from the workers.

    Just who exactly would spend time doing tasks for pennies? In "Who are the crowdworkers?" University of California researchers Ross, Silberman, Zaldivar and Tomlinson conducted a survey of Mechanical Turk worker demographics and compared it to a similar survey done two years before. The initial survey reported workers consisting largely of young, well-educated women living in the U.S. with annual household incomes above $40,000. The more recent survey reveals a shift in demographics largely driven by an influx of workers from India. Indian workers went from 5% to over 30% of the crowd, and this block is largely male (two-thirds) with a higher average education than U.S. workers; 64% report an annual income of less than $10,000 (keeping in mind $1 has a lot more purchasing power in India). This shifting demographic certainly has implications, as language and culture can play critical roles in the outcome of HITs. Of course, the demographic data came from paying Turkers $.10 to fill out a survey, so there is some question about a self-selection bias (characteristics which cause Turkers to take this survey may be unrepresentative of the larger population), not to mention whether we can really trust the data we get from the crowd.
    [Photo: Crowds can perform tasks or provide feedback on a design quickly and almost immediately for usability testing. (Attributed to victoriapeckham, Flickr.)]

    While having immediate access to a global workforce is nice, one major problem with Mechanical Turk is the incentive structure. Individuals and companies that deploy HITs want quality responses for a low price. Workers, on the other hand, want to complete the task and get paid as quickly as possible, so that they can get on to the next task. Since many HITs on Mechanical Turk are surveys, how valid and reliable are these results? How do we know whether workers are just rushing through the multiple-choice responses, haphazardly answering?

    In "Are your participants gaming the system?" researchers at Carnegie Mellon (Downs, Holbrook, Sheng and Cranor) set up an experiment to find out what percentage of their workers were just in it for the money. The authors set up a 30-minute HIT (one of the more lengthy ones for Mechanical Turk) and offered a very high $4 to those who qualified and $.20 to those who did not. As part of the HIT, workers were asked to read an email and respond to two questions that determined whether workers were likely rushing through the HIT and not answering conscientiously. One question was simple and took little effort, while the second question required a bit more work to find the answer. Workers were led to believe other factors than these two questions were the qualifying aspect of the HIT. Of the 2000 participants, roughly 1200 (or 61%) answered both questions correctly. Eighty-eight percent answered the easy question correctly, and 64% answered the difficult question correctly. In other words, about 12% of the crowd were gaming the system, not paying enough attention to the question or making careless errors. Up to about 40% won't put in more than a modest effort to get paid for a HIT. Young men and those that considered themselves in the financial industry tended to be the most likely to try to game the system. There wasn't a breakdown by country, but given the demographic information from the first article, we could infer that many of these young men come from India, which makes language and other cultural differences a factor.

    These articles raise questions about the role of crowdsourcing as a means for getting quick user input at low cost. While compensating users for their time is nothing new, the incentive structure and anonymity of Mechanical Turk raise some interesting questions. How complex a task can we ask of the crowd, and how much should these workers be paid? Can we rely on the information we get from these professional users, and if so, how can we best incorporate it into designing more usable products? Traditional usability testing will still play a central role in enterprise software. Crowdsourcing doesn't replace testing; instead, it makes certain parts of gathering user feedback easier. One can turn to the crowd for simple tasks that don't require specialized skills and get a lot of data fast. As more studies are conducted on Mechanical Turk, I suspect we will see crowdsourcing playing an increasing role in human-computer interaction and enterprise computing.

    References:

    Downs, J. S., Holbrook, M. B., Sheng, S., and Cranor, L. F. 2010. Are your participants gaming the system?: screening Mechanical Turk workers. In Proceedings of the 28th International Conference on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10-15, 2010). CHI '10. ACM, New York, NY, 2399-2402.
    Link: http://doi.acm.org/10.1145/1753326.1753688

    Ross, J., Irani, L., Silberman, M. S., Zaldivar, A., and Tomlinson, B. 2010. Who are the crowdworkers?: shifting demographics in Mechanical Turk. In Proceedings of the 28th International Conference Extended Abstracts on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10-15, 2010). CHI EA '10. ACM, New York, NY, 2863-2872.
    Link: http://doi.acm.org/10.1145/1753846.1753873

    Read the article

  • Why We Should Learn to Stop Worrying and Love Millennials

    - by HCM-Oracle
    By Christine Mellon

    Much is said and written about the new generations of employees entering our workforce, as though they are a strange specimen, a mysterious life form to be “figured out,” accommodated and engaged – at a safe distance, of course. At its worst, this talk takes a critical and disapproving tone, with baby boomer employees adamantly refusing to validate this new breed of worker, let alone determine how to help them succeed and achieve their potential.

    The irony of our baby-boomer resentments and suspicions is that they belie the fact that we created the very vision that younger employees are striving to achieve. From our frustrations with empty careers that did not fulfill us, from our opposition to “the man,” from our sharp memories of our parents’ toiling for 30 years just for the right to retire, from the simple desire not to live our lives in a state of invisibility, came the seeds of hope for something better. One characteristic of Millennial workers that grew from these seeds is the desire to experience as much as possible. They are the “Experiential Employee”, with a passion for growing in diverse ways and expanding personal and professional horizons. Rather than rooting themselves in a single company for a career, or even in a single career path, these employees are committed to building a broad portfolio of experiences and capabilities that will enable them to make a difference and to leave a mark of significance in the world. How much richer is the organization that nurtures and leverages this inclination? Our curmudgeonly ways must be surrendered and our focus redirected toward building the next generation of talent ecosystems, if we are to optimize what future generations have to offer.

    Accelerating Professional Development
    In spite of our Boomer grumblings about Millennials’ “unrealistic” expectations, the truth is that we have a well-matched set of circumstances. We have executives-in-waiting who want to learn quickly and a concurrent, urgent need to ramp up their development time, based on anticipated high levels of retirement in the next 10+ years. Since we need to rapidly skill up these heirs to the corporate kingdom, isn’t it a fortunate coincidence that they are hungry to learn, develop and move fluidly throughout our organizations? So our challenge now is to efficiently operationalize the wisdom we have acquired about effective learning and development. We have already evolved from classroom-based models to diverse instructional methods. The next step is to find the best approaches to help younger employees learn quickly and apply new learnings in an impactful way. Creating temporary or even permanent functional partnerships among Millennial employees is one way to maximize outcomes. This might take the form of 2 or more employees owning aspects of what once fell under a single role. While one might argue this would mean duplication of resources, it could be a short-term cost while employees come up to speed. And the potential benefits would be numerous: leveraging and validating the inherent sense of community of new generations, creating cross-functional skills with broad applicability, yielding additional perspectives and approaches to traditional work outcomes, and accelerating the performance curve for incumbents through Cooperative Learning (Johnson, D. and Johnson R., 1989, 1999).
    This well-researched teaching strategy, where students support each other in the absorption and application of new information, has been shown to deliver faster, more efficient learning, and greater retention. Alternately, short-term contracts with exiting retirees, or former retirees, to help facilitate the development of following generations may have merit. Again, a short-term cost, certainly. However, the gains realized in shortening the learning curve and strengthening engagement are substantial and lasting. Ultimately, creative thinking needs to be applied in each organization on how to accelerate the capabilities of our future leaders in ways that mesh with the current culture. The manner in which performance is evaluated must finally shift as well. Employees will need to be assessed on how well they have developed key skills and capabilities vs. end-to-end mastery of functional positions they have no interest in keeping for an entire career. As we become more comfortable in placing greater and greater weight on competencies vs. tasks, we will realize increased organizational agility via this new generation of workers, which will be further enhanced by their natural flexibility and appetite for change.

    Revisiting Succession
    For many years, organizations have failed to deliver desired succession planning outcomes. According to CEB’s 2013 research, only 28% of current leaders were pre-identified in a succession plan. These disappointing results, along with the entrance of the experiential, Millennial employee into the workforce, may just provide the needed impetus for HR to reinvent succession processes. We have recognized that the best professional development efforts are not always linear, and the time has come to fully adopt this philosophy in regard to succession as well. Paths to specific organizational roles will not look the same for newer generations who seek out unique learning opportunities, without consideration of a singular career destination. Rather than charting particular jobs as precursors for key positions, the experiences and skills behind what makes an incumbent successful must become essential in succession mapping. And the multitude of ways in which those experiences and skills may be acquired must be factored into the process, along with the individual employee’s level of learning agility. While this may seem daunting, it is necessary and long overdue. We have talked about the criticality of competency-based succession; however, we have not lived up to our own rhetoric. Many Boomers have experienced the same frustration in our careers: knowing we are capable of shining in a particular role, but being denied the opportunity due to how our career history lined up, on paper, with documented job requirements. These requirements usually emphasized past jobs/titles and specific tasks, versus capabilities, drive and willingness (let alone determination) to learn new things. How satisfying would it be for us to leave a legacy where such narrow thinking no longer applies and potential is amplified?

    Realizing Diversity
    Another bloom from the seeds we Boomers have tried to plant over the past decades is a completely evolved view of diversity. Millennial employees assume a diverse workforce, and are startled by anything less. Their social tolerance, nurtured by wide and diverse networks, is unprecedented. College graduates expect a similar landscape in the “real world” to what they experienced throughout their lives.
    They appreciate and seek out divergent points of view and experiences without needing any persuasion. The face of our U.S. workforce will likely see dramatic change as Millennials apply their fresh take on hiring and building strong teams, with an inherent sense of inclusion. This wonderful aspect of the Millennial wave should be celebrated and strongly encouraged, as it is the fulfillment of our own aspirations.

    Future Perfect
    The Experiential Employee is operating more as a free agent than a long-term player, and their commitment will essentially last as long as meaningful organizational culture and personal/professional opportunities keep their interest. As Boomers, we have laid the foundation for this new, spirited employment attitude, and we should take pride in knowing that. Generations to come will challenge organizations to excel in how they identify, manage and nurture talent. Let’s support and revel in the future that we’ve helped invent, rather than lament what we think has been lost. After all, the future is always connected to the past. And as so eloquently phrased by Antoine Lavoisier, French nobleman, chemist and politico: “Nothing is Lost, Nothing is Created, and Everything is Transformed.”

    Christine has over 25 years of diverse HR experience. She has held HR consulting and corporate roles, including CHRO positions for Echostar in Denver, a 6,000+ employee global engineering firm, and Aepona, a startup software firm successfully acquired by Intel. Christine is a resource to Oracle clients, to assist in Human Capital Management strategy development and implementation, compensation practices, talent development initiatives, employee engagement, global HR management, and integrated HR systems and processes that support the full employee lifecycle.

    Read the article

  • The Future of Project Management is Social

    - by Natalia Rachelson
    A guest post by Kazim Isfahani, Director, Product Marketing, Oracle

    Rapid Ascent. Breakneck Speed. Lightning Fast. Perhaps even overwhelming. No matter which set of adjectives we use to describe it, social media’s rise into the enterprise mainstream has been unprecedented. Indeed, the big 4 social media powerhouses (Facebook, Google+, LinkedIn, and Twitter) have nearly 2 billion users between them. You may be asking (as you should really): “That’s all well and good for the consumer, but for me at my company, what’s your point? Beyond the fact that I can check and post updates, that is.” Good question, kind sir.

    Impact of Social and Collaboration on Project Management
    I’ll dovetail this discussion to the project management realm, since that’s what I’m writing about. Speed is a big challenge for project-driven organizations. Anything that can help speed up project delivery - be it a new product introduction effort or a geographical expansion project - is a good thing. So where does this whole social thing fit in, particularly since there are already a host of tools to help with traditional project execution? The fact is, companies have seen improvements in their productivity by deploying departmental collaboration and other social-oriented solutions. McKinsey’s survey on social tools shows we have reached critical scale: 72% of respondents report that their companies use at least one, and over 40% say they are using social networks and blogs. We don’t hear as much about the impact of social media technologies at the project and project manager level, but that does not mean there is none. Consider the new hire. The type of individual entering the workforce and executing on projects is a generation of worker expecting visually appealing, easy-to-use and easy-to-understand technology meshing hand-in-hand with business processes. Consider the project manager. The social era has enhanced the role that the project manager must play. Today’s project manager must be a supreme communicator, an influencer, a sympathizer, a negotiator, and still manage to keep all stakeholders in the loop on project progress. Social tools play a significant role in this effort. Now consider the impact to the project team. The way that a project team functions has changed, with newer, social-oriented technologies making the process of information dissemination and team communications much more fluid. It’s clear that a shift is occurring where “social” is intersecting with project management.

    The Rise of Social Project Management
    We refer to the melding of project management and social networking as Social Project Management.
    Social Project Management is based upon the philosophy that the project team is one part of an integrated whole, and that valuable and unique abilities exist within the larger organization. For this reason, Social Project Management systems should be integrated into the collaborative platform(s) of an organization, allowing communication to proceed outside the project boundaries. What makes social project management "social" is an implicit awareness where distributed teams build connected links in ways that were previously restricted to teams that were co-located. Just as critical, Social Project Management embraces the vision of seamless online collaboration within a project team, but also provides for (and enhances) the use of rigorous project management techniques. Social Project Management acknowledges that projects (particularly large projects) are a social activity: people doing work with people, for other people, with commitments to yet other people. The more people (larger projects), the more interpersonal the interactions, and the more social affects the project.

    The Epitome of Social - Fusion Project Portfolio Management
    If I take this one level further to discuss Fusion Project Portfolio Management, the notion of Social Project Management is on full display. With Fusion Project Portfolio Management, project team members have a single place for interaction on projects and access to any other resources working within the Fusion ERP applications. This allows team members the opportunity to be informed with greater participation and to provide better information. The application's visual appeal and highly graphical nature make it easy to navigate information, and the project activity stream adds to the intuitive user experience. The goal of productivity is pervasive throughout Fusion Project Portfolio Management. Field research conducted with Oracle customers and partners showed that users needed a way to stay in the context of their core transactions and yet easily access social networking tools. This is manifested in the application so that when a user executes a business process, they not only have the transactional application at their fingertips, but also have things like e-mail, SMS, text, instant messaging, and chat – a number of different ways to interact with people and/or groups of people, both internal and external to the project and enterprise. But in the end, connecting people is relatively easy. The larger issue is finding a way to serve up relevant, system-generated, actionable information, in real time, which will allow for more streamlined execution of key business processes. Fusion Project Portfolio Management's design concept enables users to create project communities, establish discussion threads, manage event calendars, and deliver project-based work spaces to organize communications within the context of a project – all within a secure business environment. We'd love to hear from you and get your thoughts and ideas about how Social Project Management is impacting your organization. To learn more about Oracle Fusion Project Portfolio Management, please visit this link.

    Read the article

  • Apache webserver unresponsive with server-status showing all child processes waiting for connection

    - by Jeff
    My setup: I have 3 nearly identical webserver machines serving the same high-loaded dynamic website with simple load balancing over DNS. The service has been working for over two years with the same Apache config. apache2, php5, ubuntu 8.04, linux 2.6.24-29-server.

    My problem: for about two weeks I've been experiencing problems with this config. Nearly every day I have one small moment of about 5 minutes in which the website is unreachable. I'm still able to log in to the servers over ssh. If I run htop, I see the machine simply doing nothing. I have about 1000 Apache processes running, but no CPU activity. I've used the Apache mod_status to debug this situation. The process scoreboard looks like this:

    _C.___K_______________________R._______.__K_K____K___C_______.__
    _______C__________.___________________________________.________C
    _.____K__________K___K_WK_____._K_____________________________._
    W______K__________K________.____________________._______C_______
    _C_.__K__K____.._.._____________________________________C_______
    _R___________K___.______C________.C_________.______._____C______
    ____________KKC____K_____K__WC_________________C_____.__.____.__
    _____________________C_________K______.____C______._____________
    _.___C____.___.___________________________.K______.____K________
    W__.___________________C.__.____K________K_______R_._.__._______
    __C__C_.__________C__C_______._____W______________C_.___C_______
    ____.______C_____________C________.____C____________.________._K
    __.__________.K_____________K_________._____C____.K__________KW_
    __K.W________R_________._______.___W___________.____.__K_____W__
    W___.___..________W____K

    Scoreboard Key: "_" Waiting for Connection, "S" Starting up, "R" Reading Request, "W" Sending Reply, "K" Keepalive (read), "D" DNS Lookup, "C" Closing connection, "L" Logging, "G" Gracefully finishing, "I" Idle cleanup of worker, "." Open slot with no current process

    So most of the processes are just waiting for connection. After about 5 minutes the situation returns to normal: I have far fewer processes on every machine, most workers have the "." status (meaning they are open to process a request), and of course the website is reachable! So I'm trying to find something in the logs, but there is simply nothing... the Apache access log is silent for about 4 minutes, and the same goes for the error log. I also cannot figure out anything wrong in other system logs. The situation is the same on all 3 webservers (all of them have this load peak and unresponsiveness at the same time), so I do not think this is hardware related. But I think this might be related to some network (TCP) issue. Any ideas?

    EDIT: some more information that I have just discovered: it has just happened again, and I was able to verify that I'm also not able to connect locally when this problem occurs. I made some connection statistics with the following command after it happened:

    netstat -an | awk '/tcp/ {print $6}' | sort | uniq -c
        109 CLOSE_WAIT
       2652 ESTABLISHED
          2 FIN_WAIT1
         11 LAST_ACK
         12 LISTEN
         91 SYN_RECV
          1 SYN_SENT
         16 TIME_WAIT

    If I execute the same command some time later, I get something like this:

          4 CLOSING
        108 ESTABLISHED
         18 FIN_WAIT1
        182 FIN_WAIT2
         37 LAST_ACK
         12 LISTEN
         50 SYN_RECV
      11276 TIME_WAIT

    So in the normal situation I have only 100-200 open connections from clients being handled by Apache at that moment. When I have this "crash", I have a lot more connections. What is the best way to analyse this?
    EDIT2: the important lines in apache2.conf are:

    KeepAlive On
    MaxKeepAliveRequests 20
    KeepAliveTimeout 1
    <IfModule mpm_prefork_module>
        ServerLimit          920
        StartServers          30
        MinSpareServers       80
        MaxSpareServers      120
        MaxClients           920
        MaxRequestsPerChild  700
    </IfModule>

    It is an apache2 prefork with mod_php. The server has 8GB of RAM and a 4GB swap partition.
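    A hedged way to sanity-check those prefork numbers against the hardware: measure the average resident size of an apache2 child and multiply it by MaxClients. For example:

        # Average RSS per apache2 child, in KB (the non-numeric header row is excluded)
        ps -ylC apache2 | awk '$8 ~ /^[0-9]+$/ { sum += $8; n++ } END { print sum/n " KB avg" }'

    If an average child weighs in at, say, 25MB, then MaxClients 920 allows roughly 23GB of Apache memory in the worst case - nearly three times the 8GB of physical RAM - which would drive the box deep into swap and could produce exactly the stall described above (hundreds of processes, no CPU activity). Lowering MaxClients to what physical memory can actually hold is a common first adjustment in this situation.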

    Read the article

  • Translating Your Customizations

    - by Richard Bingham
    This blog post explains the basics of translating the customizations you can make to Fusion Applications products, covering both composer-based customizations and the generic design-time customizations done via JDeveloper.

    Introduction
    Like most Oracle Applications, Fusion Applications installs on-premise with a US-English base language that is, in Release 7, supported by the option to add up to a total of 22 additional language packs (in Oracle Cloud production environments, languages are already pre-installed). As such, many organizations offer their users the option of working with their local language, and logically that should also apply to any customizations as well.

    Composer-based UI Customizations
    Customizations made in Page Composer take into consideration the session LOCALE, as set in the user preferences screen, during all customization work, and store the customization in the MDS repository accordingly. As such, the actual new or changed values will only apply for the same language under which the customization was made, and text for any other languages requires a separate upload. See the Resource Bundles section below, which incidentally also applies to custom UI changes done in JDeveloper. You may have noticed this when you select the “Select Text Resource” menu option when editing the text on a page. Using this ensures that the resource bundles are used, whereas if you define a static value in Expression Builder it will never be available for translation. Notice in the screenshot below that the “What’s New” custom value I have already defined using the ‘Select Text Resource’ feature is internally using the adfBundle groovy function to pull the custom value for my key (RT_S_1) from the ComposerOverrideBundle.

    Figure 1 – Page Composer showing the override bundle being used.

    Business Objects
    Customizing the Business Objects available in the Applications Composer tool for the CRM products, such as adding additional fields, also operates using the session language. Translating the additional values for these fields into other installed languages requires loading additional resource bundles, again as described below.

    Reports and Analytics
    Most customizations to Reports and BI Analytics are essentially reorganizations and visualizations of existing number and text data from the system, and as such will use the appropriate values based on the user's session language. Where a translated value or string exists for that session language, it will be used without the need for additional work. Extending through the addition of brand new reports and analytics requires another method of loading the translated strings, as part of what is known as ‘localizing’ the BI Catalog and Metadata. This time it is via an export/import of XML data through the BI Administrators console, and is described in the OBIEE Admin Guide. Fusion Applications reports based on BI Publisher are already defined with a template per locale, and in addition provide an extra process for getting the data for translation and reloading. This again uses the standard resource bundle format. Loading a custom report is illustrated in this video from our YouTube channel, which shows the screen for both setting the template locale and running an export for translation.
    Fusion Applications Menus
    Whilst the seeded Navigator and Global Menu values are fully translated when the additional language is installed, if they are customized then the change or new menu item will apply universally, not currently per language. This is set to change in a future release with the new UI Text Editor feature described below.

    More on Resource Bundles
    As mentioned above, to provide translations for most of your customizations you need to add values to a resource bundle. This is an industry open standard (OASIS) format XML file with the extension .xliff, which stores translated values for the strings used by ADF at run-time. The general process is that these values are exported from the MDS repository, manually edited, and then imported back in again. This needs to be done by an administrator, via either WLST commands or through Enterprise Manager as per the screenshot below, and is detailed out in the Fusion Applications Extensibility Guide. For SaaS environments the Cloud Operations team can assist.

    Figure 2 – Enterprise Manager’s MDS export used for getting resource bundles for manual translation, and re-imported on the same screen.

    All customized strings are stored in an override bundle (xliff file) for each locale, suffixed with the language initials, with English ones being saved to the default. As such, each language bundle can be easily identified and updated. Similarly, if you used JDeveloper to create your own applications as extensions to Fusion Applications, you would use the native support for resource bundles and add them into the faces-config.xml file for inclusion in your application. An example is this ADF customization video from our YouTube channel. JDeveloper also supports automatic synchronization between your underlying resource bundles and any translatable strings you add – very handy. For more information see the chapters on “Using Automatic Resource Bundle Integration in JDeveloper” and “Manually Defining Resource Bundles and Locales” in the Oracle Fusion Middleware Web User Interface Developer’s Guide for Oracle Application Development Framework.

    FND Messages and Look-ups
    FND Messages, as defined here, are not used for UI labels (those are known as ‘strings’), but are the responses back to users as a result of an action, such as from a page submit. Each ‘message’ is defined and stored in the related database table (FND_MESSAGES_B), with another (FND_MESSAGES_TL) holding any language-specific values. These come seeded with the additional language installs; however, if you customize the messages via the “Manage Messages” task in Functional Setup Manager, or add new ones, then currently (in Release 7) you’ll need to repeat it for each language.

    Figure 3 – An FND Message defined in an English user session.

    Similarly, Look-ups are stored in a translation table (FND_LOOKUP_VALUES_TL) where appropriate, and can be customized by setting the user's session language and making the change in the Setup and Maintenance task entitled “Manage [Standard|Common] Look-ups”.

    Online Help
    Yes, in fact all the seeded help is applied as part of each language pack install during the post-install provisioning process. If you are editing or adding custom online help, the Create Help screen provides a drop-down of which language your help customization will apply to. This is shown in the video below from our YouTube channel, and obviously you’ll need to do it for each language in use.

    What is Coming for Translations?
    Currently planned for Release 8 is something called the User Interface (UI) Text Editor. This tool will allow the editing of all the text shown on the pages and forms of Fusion Applications. It will provide a search based on a particular term or word, say “Worker”, and will allow it to be adjusted, say to “Employee”, which then updates all the resource bundles that contain it. In multi-language environments, it will use the user's session language (locale) to know which resource bundles to apply the change to. This capability will also support customization sandboxes, to help ensure changes can be tested and approved. It is also interesting to note that the design currently allows any page-specific customizations done using Page Composer or Application Composer to override the global changes done via the UI Text Editor, allowing special context-sensitive values to still be used.

    Further Reading and Resources
    The following short list provides the main resources for digging into more detail on translation support for both Composer and JDeveloper customization projects.
    - There is a dedicated chapter entitled “Translating Custom Text” in the Fusion Applications Extensibility Guide, with good examples and steps for many tasks, especially administering resource bundles.
    - Using localization formatting (numbers, dates, etc.) for design-time changes is well documented in the Fusion Applications Developer Guide.
    - For more guidelines on general design-time globalization, see either the ‘Internationalizing and Localizing Pages’ chapter in the Oracle Fusion Middleware Web User Interface Developer’s Guide for Oracle Application Development Framework (Oracle Fusion Applications Edition) or the general Oracle Database Globalization Support Guide.
    - The Oracle Architecture ‘A-Team’ published a recent post on customizing the user session timeout popup using design-time changes to resource bundles; its detailed step-by-step examples are a useful illustration.

    Read the article

  • How I Work: Staying Productive Whilst Traveling

    - by BuckWoody
    I travel a lot. Not like some folks that are gone every week, mind you, although in the last month I’ve been to: Cambridge, UK; Anchorage, AK; San Jose, CA; Copenhagen, DK; Boston, MA; and I’m currently en-route to Anaheim, CA. While this many places in a month is a bit unusual for me, I would say I travel frequently. I’ve travelled for most of my 28+ years in IT, and at one time was a consultant traveling weekly.

    With that much time away from my primary work location, I have to find ways to stay productive. Some might say “just rest – take a nap!” – but I’m not able to do that. For one thing, I’m a very light sleeper and I’ve never slept on a plane – even on a 30+ hour trip to New Zealand in Business Class – so that just isn’t an option. I’m also not always on the plane, of course. There’s the hotel, the taxi/bus/train, the airport, and then all of that over again when I arrive. Since my regular jobs have many demands, I have to get work done.

    Note: No, I’m not always focused on work. I need downtime just like everyone else. Sometimes I just think, watch a movie or listen to tunes – and I give myself permission to do that anytime – sometimes for the whole trip. I have too few heartbeats left in life to only focus on work – it’s just not that important, and neither am I. Some of these tasks are letters to friends and family, or other personal things. What I’m talking about here is a plan, not some task list I have to follow. When I get to the location I’m traveling to, I always build in as much time as I can to ensure I enjoy the sights and the people I’m with. I would find traveling to be a waste if not for that.

    The Unrealistic Expectation
    When I would evaluate a trip I was taking – say a 6-8 hour flight – I would expect to get 10-12 hours of work done. After all, there’s the time at the airport, the taxi and so on, and then of course the time in the air with all of the room, power, internet and everything else I needed to get my work done. I would pile up tasks at home, pack my bags, and head happily to the magical land of the TSA.

    Right. On return from the trip, I had accomplished little, had more e-mails and other work that had piled up, and I was tired, hungry, and unorganized. This had to change. So, I decided to do three things: segment my work, set realistic expectations, and plan accordingly.

    Segmenting By Available Resources
    The first task was to decide what kind of work I could do in each location – if any. I found that I was dependent on a few things to get work done, such as power, the Internet, and a place to sit down. Before I fly, I take some time at home to segment all of the work I’d like to accomplish while away into these areas, and print that out on paper, which goes in my suit-coat pocket along with a mechanical pencil. I print my tickets, and I’m all set for the adventure ahead. Then I simply do each kind of work whenever I’m in that situation.

    No Power
    There are certain times when I don’t have power available. Not only that, I might not even be able to use most of my electronics. So I now schedule as many phone calls as I can for the taxi/bus/train ride and the airports. I have a paper notebook (Moleskine, of course) and a pencil, and I print out any notes or numbers I need prior to the trip. Once I’m airborne or at the airport, I work on my laptop. I check and respond to e-mails, create slides, write code, do architecture, whatever I can. If I can’t use any electronics, or once the power runs out, I schedule time for reading.
    I can read at the airport or anywhere, actually, even in-flight or on any other transport. I “read with a pencil”, meaning I take a lot of notes, which I like to put in OneNote; but since in most cases I don’t have power, I use the Moleskine for that. Speaking of which, sometimes as I’m thinking I come up with new topics, ideas, blog posts, or things to teach in my classes. Once again I take out the notebook and write it down. All of these notes get a check-mark when I get back to the office and transfer the writing to OneNote. I’ve tried those “smart pens” and so on to automate this, but it just never works out. Pencil and paper are just fine. As I mentioned, sometimes I just need to think. I’ll do nothing, and let my mind wander, thinking of nothing in particular, or of some math problem or science question I’m interested in. My only issue with this is that I communicate to think, and I don’t want to drive people crazy by being that guy who won’t shut up, so I think in a different way.

    Power, but no Internet or Phone
    If I have power but no Internet or phone, I focus on the laptop and the tablet as before, and I also recharge my other gadgets.

    Power, Internet, Phone and a Place to Work
    At first I thought that when I arrived at the hotel or event I could get the same amount of work done that I do at the office. Not so. There are simply too many distractions, things you need, and other issues that prevent it. Of course, I can work on any device, read, think, write or whatever, but I am simply not as productive as I am in my home office. So I plan for about 25-50% as much work getting done in this environment as I think I could really do. I’ve done some measurements, and this holds true almost every time. The key is that I re-set my expectations (and my co-workers’ expectations as well) that this is the case. I use Out-Of-Office notices to let people know that I’m just not going to be 100% available at this time – it’s hard for everyone, but it’s more honest and realistic, and I’d rather they know that – and that I realize that – than to let them think I’m totally available. Because I’m not – I’m traveling. I don’t tend to include too much detail, because after all I don’t necessarily want to let people know when I’m not home :) but I do think it’s important to let the people that depend on me know that I’ll get back to them later.

    I hope this helps you think through your own methodology of staying productive when you travel. Or perhaps you just go offline, and don’t worry about any of this – good for you! That’s completely valid as well.

    (Oh, and yes, I wrote this at 35K feet, on Alaska Airlines, on a trip. :) Practice what you preach, Buck.)

    Read the article

  • nginx server over https using up all available file handles (upd: infinite loop?)

    - by mmr
    Hi all, so I have an nginx server that's working over https with Sinatra. When I try to download a jnlp file in a configuration that works fine over Mongrel and http (no s), the nginx server fails to serve the file with a 504 error. Subsequent checking of the logs shows that this error is due to running out of file handles, i.e. "24: too many open files". Running sudo lsof -p <nginx worker pid> gets me a huge list of files, all looking like:

    nginx 1771 nobody 11u IPv4 10867997 0t0 TCP localhost:44704->localhost:https (ESTABLISHED)
    nginx 1771 nobody 12u IPv4 10868113 0t0 TCP localhost:https->localhost:44704 (ESTABLISHED)
    nginx 1771 nobody 13u IPv4 10868114 0t0 TCP localhost:44705->localhost:https (ESTABLISHED)
    nginx 1771 nobody 14u IPv4 10868191 0t0 TCP localhost:https->localhost:44705 (ESTABLISHED)
    nginx 1771 nobody 15u IPv4 10868192 0t0 TCP localhost:44706->localhost:https (ESTABLISHED)
    nginx 1771 nobody 16u IPv4 10868255 0t0 TCP localhost:https->localhost:44706 (ESTABLISHED)
    nginx 1771 nobody 17u IPv4 10868256 0t0 TCP localhost:44707->localhost:https (ESTABLISHED)
    nginx 1771 nobody 18u IPv4 10868330 0t0 TCP localhost:https->localhost:44707 (ESTABLISHED)
    nginx 1771 nobody 19u IPv4 10868331 0t0 TCP localhost:44708->localhost:https (ESTABLISHED)
    nginx 1771 nobody 20u IPv4 10868434 0t0 TCP localhost:https->localhost:44708 (ESTABLISHED)

    Increasing the number of files that can be opened is no help, because then nginx just blows right past that limit. And no wonder: it looks like it's in some kind of loop, pulling in all available files. Any idea what's going on, and how to fix it?

    EDIT: nginx 0.7.63, ubuntu linux, sinatra 1.0

    EDIT 2: Here's the offending code. It's Sinatra serving the jnlp, which I finally figured out:

    get '/uploader' do
      # read in the launch.jnlp file
      theJNLP = ""
      File.open("/launch.jnlp", "r+") do |file|
        while theTemp = file.gets
          theJNLP = theJNLP + theTemp
        end
      end
      content_type :jnlp
      theJNLP
    end

    If I serve this with Sinatra via Mongrel and http, everything works fine. If I serve this with Sinatra and nginx via https, I get the above error. All other parts of the website appear to behave equivalently.

    EDIT: I have since upgraded to passenger 2.2.14, ruby 1.9.1, nginx 0.8.40, openssl 1.0.0a, and no change.

    EDIT: The culprit appears to be infinite redirects due to using SSL. I don't know how to fix this, other than hosting the jnlp file in the root directory of the server (which I'd rather not do, since it limits me to one jnlp-based app at a time). The relevant lines from nginx.conf:

    # HTTPS server
    #
    server {
        listen 443;
        server_name MyServer.org;
        root /My/Root/Dir;
        passenger_enabled on;
        expires 1d;
        proxy_set_header X-FORWARDED_PROTO https;
        proxy_set_header X_FORWARDED_PROTO https; # the almighty google is not clear on which to use

        location /upload {
            proxy_pass https://127.0.0.1:443;
        }
    }

    The funny thing about this is, first, I was putting the jnlp into a directory called 'uploader', not 'upload', but that still appeared to trigger the problem, since that proxy_pass directive appeared in the logs. Second, again, moving the jnlp into root avoided the problem, because there wasn't any of this proxying due to ssl. So, how can I avoid the infinite proxy_pass loop in nginx?
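    Since Passenger is already serving the Sinatra app directly from this server block, a proxy_pass back to https://127.0.0.1:443 re-enters that very same block, which would explain the pairs of self-connections piling up in the lsof output above. A hedged sketch of a configuration without the self-referential proxy, assuming the jnlp route should simply be handled by Passenger like the rest of the app (ssl certificate directives are omitted here, as in the original snippet):

```nginx
# HTTPS server
server {
    listen 443;
    server_name MyServer.org;
    root /My/Root/Dir;
    passenger_enabled on;
    expires 1d;
    proxy_set_header X-Forwarded-Proto https;

    # No 'location /upload { proxy_pass https://127.0.0.1:443; }' block:
    # proxying a location back to this server's own SSL port makes each
    # request spawn another request into the same block, leaking sockets.
}
```

    If a genuinely separate backend is needed for /upload, proxying to it over plain HTTP on a distinct port (e.g. proxy_pass http://127.0.0.1:8080; with the port being an assumption here) avoids re-entering the SSL listener.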

    Read the article

  • SpringMvc java.lang.NullPointerException When Posting Form To Server [closed]

    - by dev_darin
    I have a form with a user name field on it. When I tab out of the field, I use a RESTful web service that makes a call to a handler method in the controller. The method makes a call to a DAO class that checks the database for whether the user name exists. This works fine. However, when the form is posted to the server, I call the exact same function I would call in the handler method, yet I get a java.lang.NullPointerException when it reaches the class that makes the call to the DAO object. So it does not even reach the DAO object the second time. I have exception handlers around the calls in all my classes that make calls. Any ideas as to what's happening here and why I would get the java.lang.NullPointerException the second time the function is called? Does this have anything to do with Spring instantiating DAO classes using a singleton scope or something to that effect? What can be done to resolve this?

    This is what happens the first time the method is called using the web service (this is supposed to happen):

    13011 [http-8084-2] INFO com.crimetrack.jdbc.JdbcOfficersDAO - Inside jdbcOfficersDAO
    13031 [http-8084-2] DEBUG org.springframework.jdbc.core.JdbcTemplate - Executing prepared SQL query
    13034 [http-8084-2] DEBUG org.springframework.jdbc.core.JdbcTemplate - Executing prepared SQL statement [SELECT userName FROM crimetrack.tblofficers WHERE userName = ?]
    13071 [http-8084-2] DEBUG org.springframework.jdbc.datasource.DataSourceUtils - Fetching JDBC Connection from DataSource
    13496 [http-8084-2] DEBUG org.springframework.jdbc.core.StatementCreatorUtils - Setting SQL statement parameter value: column index 1, parameter value [adminz], value class [java.lang.String], SQL type unknown
    13534 [http-8084-2] DEBUG org.springframework.jdbc.datasource.DataSourceUtils - Returning JDBC Connection to DataSource
    13537 [http-8084-2] INFO com.crimetrack.jdbc.JdbcOfficersDAO - No username was found in exception
    13537 [http-8084-2] INFO com.crimetrack.service.ValidateUserNameManager - UserName :adminz does NOT exist

    The second time, when the form is posted and a validation method handles the form and calls the same method the web service would call:

    17199 [http-8084-2] INFO com.crimetrack.service.OfficerRegistrationValidation - UserName is not null so going to check if its valid for :adminz
    17199 [http-8084-2] INFO com.crimetrack.service.OfficerRegistrationValidation - User Name in try.....catch block is adminz
    17199 [http-8084-2] INFO com.crimetrack.service.ValidateUserNameManager - Inside Do UserNameExist about to validate with username : adminz
    17199 [http-8084-2] INFO com.crimetrack.service.ValidateUserNameManager - UserName :adminz EXCEPTION OCCURED java.lang.NullPointerException

    ValidateUserNameManager.java

    public class ValidateUserNameManager implements ValidateUserNameIFace {

        private OfficersDAO officerDao;
        private final Logger logger = Logger.getLogger(getClass());

        public boolean DoesUserNameExist(String userName) throws Exception {
            logger.info("Inside Do UserNameExist about to validate with username : " + userName);
            try {
                if (officerDao.OfficerExist(userName) == true) {
                    logger.info("UserName :" + userName + " does exist");
                    return true;
                } else {
                    logger.info("UserName :" + userName + " does NOT exist");
                    return false;
                }
            } catch (Exception e) {
                logger.info("UserName :" + userName + " EXCEPTION OCCURED " + e.toString());
                return false;
            }
        }

        /** @return the officerDao */
        public OfficersDAO getOfficerDao() {
            return officerDao;
        }

        /** @param officerDao the officerDao to set */
        public void setOfficerDao(OfficersDAO officerDao) {
            this.officerDao = officerDao;
        }
    }

    JdbcOfficersDAO.java

    public boolean OfficerExist(String userName) {
        String dbUserName;
        try {
            logger.info("Inside jdbcOfficersDAO");
            String sql = "SELECT userName FROM crimetrack.tblofficers WHERE userName = ?";
            try {
                dbUserName = (String) getJdbcTemplate().queryForObject(sql, new Object[]{userName}, String.class);
                logger.info("Just Returned from database");
            } catch (Exception e) {
                logger.info("No username was found in exception");
                return false;
            }
            if (dbUserName == null) {
                logger.info("Database did not find any matching records");
            }
            logger.info("after JdbcTemplate");
            if (dbUserName.equals(userName)) {
                logger.info("User Name Exists");
                return true;
            } else {
                logger.info("User Name Does NOT Exists");
                return false;
            }
        } catch (Exception e) {
            logger.info("Exception Message in JdbcOfficersDAO is " + e.getMessage());
            return false;
        }
    }

    OfficerRegistrationValidation.java

    public class OfficerRegistrationValidation implements Validator {

        private final Logger logger = Logger.getLogger(getClass());
        private ValidateUserNameManager validateUserNameManager;

        public boolean supports(Class<?> clazz) {
            return Officers.class.equals(clazz);
        }

        public void validate(Object target, Errors errors) {
            Officers officer = (Officers) target;
            if (officer.getUserName() == null) {
                errors.rejectValue("userName", "userName.required");
            } else {
                String userName = officer.getUserName();
                logger.info("UserName is not null so going to check if its valid for :" + userName);
                try {
                    logger.info("User Name in try.....catch block is " + userName);
                    if (validateUserNameManager.DoesUserNameExist(userName) == true) {
                        errors.rejectValue("userName", "userName.exist");
                    }
                } catch (Exception e) {
                    logger.info("Error Occured When validating UserName");
                    errors.rejectValue("userName", "userName.error");
                }
            }
            if (officer.getPassword() == null) { errors.rejectValue("password", "password.required"); }
            if (officer.getPassword2() == null) { errors.rejectValue("password2", "password2.required"); }
            if (officer.getfName() == null) { errors.rejectValue("fName", "fName.required"); }
            if (officer.getlName() == null) { errors.rejectValue("lName", "lName.required"); }
            if (officer.getoName() == null) { errors.rejectValue("oName", "oName.required"); }
            if (officer.getEmailAdd() == null) { errors.rejectValue("emailAdd", "emailAdd.required"); }
            if (officer.getDob() == null) { errors.rejectValue("dob", "dob.required"); }
            if (officer.getGenderId().equals("A")) { errors.rejectValue("genderId", "genderId.required"); }
            if (officer.getDivisionNo() == 1) { errors.rejectValue("divisionNo", "divisionNo.required"); }
            if (officer.getPositionId() == 1) { errors.rejectValue("positionId", "positionId.required"); }
            if (officer.getStartDate() == null) { errors.rejectValue("startDate", "startDate.required"); }
            if (officer.getEndDate() == null) { errors.rejectValue("endDate", "endDate.required"); }
            logger.info("The Gender ID is " + officer.getGenderId().toString());
            if (officer.getPhoneNo() == null) { errors.rejectValue("phoneNo", "phoneNo.required"); }
        }

        /** @return the validateUserNameManager */
        public ValidateUserNameManager getValidateUserNameManager() {
            return validateUserNameManager;
        }

        /** @param validateUserNameManager the validateUserNameManager to set */
        public void setValidateUserNameManager(ValidateUserNameManager validateUserNameManager) {
            this.validateUserNameManager = validateUserNameManager;
        }
    }

    Update: error log using logger.error("Message", e):

    39024 [http-8084-2] INFO com.crimetrack.service.OfficerRegistrationValidation - UserName is not null so going to check if its valid for :adminz
    39025 [http-8084-2] INFO com.crimetrack.service.OfficerRegistrationValidation - User Name in try.....catch block is adminz
    39025 [http-8084-2] ERROR com.crimetrack.service.OfficerRegistrationValidation - Message
    java.lang.NullPointerException
        at com.crimetrack.service.OfficerRegistrationValidation.validate(OfficerRegistrationValidation.java:47)
        at org.springframework.validation.DataBinder.validate(DataBinder.java:725)
        at org.springframework.web.bind.annotation.support.HandlerMethodInvoker.doBind(HandlerMethodInvoker.java:815)
        at org.springframework.web.bind.annotation.support.HandlerMethodInvoker.resolveHandlerArguments(HandlerMethodInvoker.java:367)
        at org.springframework.web.bind.annotation.support.HandlerMethodInvoker.invokeHandlerMethod(HandlerMethodInvoker.java:171)
        at org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.invokeHandlerMethod(AnnotationMethodHandlerAdapter.java:436)
        at org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.handle(AnnotationMethodHandlerAdapter.java:424)
        at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:923)
        at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:852)
        at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:882)
        at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:789)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
        at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
        at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:602)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
        at java.lang.Thread.run(Unknown Source)
    39025 [http-8084-2] INFO com.crimetrack.service.OfficerRegistrationValidation - Error Occured When validating UserName
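    The trace shows the NullPointerException inside validate(), which typically means validateUserNameManager is null there: the Validator instance used by the data binder was constructed with new rather than obtained from the Spring context, so Spring never called its setter. One possible wiring is sketched below; the controller class name, constructor injection, and @InitBinder usage are assumptions for illustration, not taken from the question:

```java
// Hypothetical controller sketch: let Spring inject the fully-wired validator
// (officerDao -> validateUserNameManager -> validator) and hand that managed
// instance to the binder, instead of doing 'new OfficerRegistrationValidation()'.
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.WebDataBinder;
import org.springframework.web.bind.annotation.InitBinder;

@Controller
public class OfficerRegistrationController {

    private final OfficerRegistrationValidation officerRegistrationValidation;

    @Autowired
    public OfficerRegistrationController(OfficerRegistrationValidation officerRegistrationValidation) {
        this.officerRegistrationValidation = officerRegistrationValidation;
    }

    // The binder's validator runs on form post; using the container-managed
    // bean ensures setValidateUserNameManager(...) has already been called.
    @InitBinder
    protected void initBinder(WebDataBinder binder) {
        binder.setValidator(officerRegistrationValidation);
    }
}
```

    The same idea applies with XML configuration: declare OfficerRegistrationValidation as a bean with its validateUserNameManager property set, and inject that bean wherever the validation is invoked.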

    Read the article

  • Benefits of Behavior Driven Development

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2013/07/26/benefits-of-behavior-driven-development.aspx

    Continuing my previous article on BDD, I wanted to point out some benefits of BDD, and since BDD is an extension of Test Driven Development (TDD), you get those as well. I’ll add another article on some possible downsides of this approach. There are many articles about the benefits of TDD, and they apply to BDD. I’ve pointed out some here and copied some of the main points from each article, but there are many more, including the book The Art of Unit Testing by Roy Osherove.

    http://geekswithblogs.net/leesblog/archive/2008/04/30/the-benefits-of-test-driven-development.aspx (Lee Brandt)
    - Stability
    - Accountability
    - Design Ability
    - Separated Concerns
    - Progress Indicator

    http://tddftw.com/benefits-of-tdd/
    - Help maintainers understand the intention behind the code
    - Bring validation and proper data handling concerns to the forefront
    - Writing the tests first is fun
    - Better APIs come from writing testable code
    - TDD will make you a better developer

    http://www.slideshare.net/dhelper/benefit-from-unit-testing-in-the-real-world (from Typemock). Take a look at the slides, especially the extra time required for TDD (slide 10) and the next one on the bugs avoided using TDD (slide 11).
    - Fewer bugs (slide 11)
    - About testing and development (slide 13)
    - Increased confidence in code (slide 14)
    - Fearlessly change your code (slide 14)
    - Document requirements (slide 14); also see http://visualstudiomagazine.com/articles/2013/06/01/roc-rocks.aspx
    - Discover usability issues early (slide 14)

    All these points and articles are great, and there are many more. The following are my additions to the benefits of BDD, from using it in real projects for my company.
    - July 2013 on MSDN – Behavior-Driven Design with SpecFlow
    - Scott Allen did a very informative TDD and MVC module, but to me he is doing BDD
    - Compile and Execute Requirements in Microsoft .NET – video from TechEd 2012

    Communication
    I was working through a complicated task whose decision tree kept growing. After writing out the Given, When, Then of the scenario, I was able to tell QA what I had worked through for their initial test cases. They were able to add from there. It is also useful to use this language with other developers, managers, or clients to help make informed decisions on whether something meets the requirements or can be simplified to save time (money).

    Thinking through solutions, before starting to code
    This was the biggest benefit to me. I like to jump into coding to figure out the problem. Many times I don’t understand my path well enough and have to do some parts over. A past supervisor told me several times during reviews that I need to get better at seeing “the forest for the trees”. When I sit down and write out the behavior that I need to implement, I force myself to think things out further and catch scenarios before they get to QA. A co-worker who is new to BDD – we’ve been using it in our new project for the last 6 months – said “It really clarifies things”. It took him a while to understand it all, but now he’s seeing the value of this approach (yes, there are some downsides, but that is a different issue).

    Developers’ Confidence
    This is huge for me. With tests in place, my confidence grows that I won’t break code that I’m not directly changing. In the past, I’ve worked on projects without tests and we would frequently find regression bugs (or worse, the users would find them). That isn’t fun.
    We don’t catch all problems with the tests, but when QA catches one, I can write a test to make sure it doesn’t happen again. It’s also good when releasing code, for telling your manager that it’s good to go. As time goes on and the code gets older, how confident are you that checking in code won’t break something somewhere else?

    Merging code – pre-release confidence
    If you’re merging code a lot, it’s nice to have the tests to help ensure you didn’t merge incorrectly.

    Interrupted work
    I had a task that I started and planned out, then was interrupted for a month because of different priorities. When I started it up again and un-shelved my changes, I had the BDD specs, and they helped me remember what I had figured out and what was left to do. It would have been much more difficult without the specs and tests.

    Testing and verifying complicated scenarios
    Sometimes in the UI there are scenarios that get tricky, because there are a lot of steps involved (click here to open the dialog, enter the information, make sure it’s valid, when I click cancel it should do {x}, when I click ok it should close and do {y}, then do this, etc.). With BDD I can avoid some of the mouse clicking, define the scenarios, and have them re-run quickly, without using a mouse. UI testing is still needed, but this helps a bunch. The same can be true for tricky server logic.

    Documentation of Assumptions and Specifications
    The BDD spec tests (Jasmine or SpecFlow or another tool) also work as documentation and show what the original developer was trying to accomplish. It’s not a separate Word document, so developers will keep it up to date instead of letting it become obsolete. What happens if you leave the project (consulting, new job, etc.) with no specs, or at least good comments in the code? Sometimes I think of a new scenario, so I add a failing spec and continue in the same stream of thought (rather than forgetting it because it was on a piece of paper or in a notepad). Then later I can come back, handle it, and have it documented.

    Jasmine tests and JavaScript – help dealing with the non-typed system
    I like JavaScript, but I also dislike working with JavaScript. I miss C# telling me if a property doesn’t actually exist at build time. I like the idea of TypeScript and hope to use it more in the future. I also use KnockoutJs, which has observables that need to be called with a trailing (), since an observable is a function. It’s hard to remember when to use () or not, and the Jasmine specs/tests help ensure the correct usage.

    This should give you an idea of the benefits that I see in using the BDD approach. I’m sure there are more. It takes a lot of practice, investment and experimentation to figure out how to approach this and to get comfortable with it. I agree with Scott Allen in the video I linked above: “Remember that TDD can take some practice. So if you're not doing test-driven design right now? You can start and practice and get better. And you'll reach a point where you'll never want to get back.”
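    To make the Given/When/Then shape concrete, here is a minimal behavior-style test sketch in plain JUnit. The post's own tooling is SpecFlow (C#) and Jasmine (JavaScript); this Java version, with an invented Account class, only illustrates the structure:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class WithdrawalSpec {

    @Test
    public void givenSufficientBalance_whenWithdrawing_thenBalanceIsReduced() {
        // Given an account with a balance of 100
        Account account = new Account(100);

        // When the user withdraws 30
        account.withdraw(30);

        // Then the remaining balance is 70
        assertEquals(70, account.getBalance());
    }

    // Hypothetical domain class, inlined so the sketch is self-contained.
    static class Account {
        private int balance;
        Account(int openingBalance) { this.balance = openingBalance; }
        void withdraw(int amount) { this.balance -= amount; }
        int getBalance() { return balance; }
    }
}
```

    The test name and comments read as a specification first; tools like SpecFlow or Cucumber take this a step further by making the Given/When/Then text itself executable.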

    Read the article

< Previous Page | 51 52 53 54 55 56 57 58 59 60  | Next Page >