Search Results

Search found 36521 results on 1461 pages for 'aq advanced queue oracle support streams propagation schedule dblink troubleshoo'.


  • Trouble with Ubuntu on Oracle VM

    - by Rosarch
    I'm running Ubuntu on an Oracle VirtualBox virtual machine. I clicked the "Install Ubuntu" button, and it took me through a wizard. I am currently stuck on one of the wizard's steps (the original post included a screenshot of it): the "Forward" button is disabled. What's up with this? I tried entering different data into the form on this step, going back and redoing part of the wizard, and rebooting the machine. Nothing worked. My host OS is Windows 7.

    Read the article

  • Does OS X support Linux-like features?

    - by Xeoncross
    I have been using XP for almost a decade. Contrary to popular belief, it has served me well. In the last 4 years I don't remember it ever crashing on me. It has the most stable GUI I have ever used. However, an OS is only as good as its GUI and command line combined. The Windows command line is awful and totally useless. So I have been using Ubuntu for a couple of years and Debian on my servers. The only problem is that GNOME applications (Ubuntu 6-10) constantly crash on me (Ubuntu Studio was the most unstable OS I ever used). I have high-quality Gigabyte, MSI, and Asus motherboards and CPUs from old Semprons/Athlons to Celerons/Core 2 Quads. What are the odds that every PC I have ever owned can't remain stable with a Linux GUI? Not to mention that the Adobe CSx Suite doesn't work on Linux. Anyway, I am now looking at moving to a Mac in the hope of finding a stable GUI and a feature-packed command line. Does Mac OS have an integrated command line where I can do Linux-like awesomeness like rsync, ssh, wget, cron jobs, package updates, and git without having an unstable GUI? Basically, until Linux GUI applications get a little better, is OS X what I need?
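    For the command-line half of the question, one can simply probe for the tools in OS X's Terminal. A minimal sketch, assuming a stock install (wget is the one assumption here: Apple ships curl instead, and wget typically comes via MacPorts or Homebrew):

        # Probe for the usual Unix tools on the PATH
        for tool in rsync ssh git crontab wget; do
            command -v "$tool" >/dev/null && echo "$tool: present" || echo "$tool: missing"
        done
        # rsync, ssh, and crontab ship with OS X; git arrives with Apple's
        # developer tools; wget is the usual odd one out (curl is built in).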

    Read the article

  • Need reasonably priced router with QoS support [closed]

    - by ULTRA_POROV
    I don't need wireless. I am expecting very heavy traffic, with possibly thousands of TCP connections open at one time, which requires a router with good hardware. I also need to rate-limit the different services I will provide. Let's say I need to guarantee 60% of all the bandwidth to HTTP, 10% to FTP, and 10% to mail, so the router software must have flexible QoS options as well. I don't know which one to choose, because this information is usually not given in router specs.
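    To make "flexible QoS options" concrete, here is a minimal sketch of the requested policy expressed as Linux tc HTB classes, the mechanism used by router firmware such as OpenWrt/DD-WRT; the interface name, 100 Mbit link rate, and port numbers are assumptions:

        # Root HTB qdisc on the WAN interface (assume eth0, 100 Mbit uplink)
        tc qdisc add dev eth0 root handle 1: htb default 40
        tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit
        # Guarantee 60% to HTTP, 10% to FTP, 10% to mail; each may borrow up to the full link
        tc class add dev eth0 parent 1:1 classid 1:10 htb rate 60mbit ceil 100mbit
        tc class add dev eth0 parent 1:1 classid 1:20 htb rate 10mbit ceil 100mbit
        tc class add dev eth0 parent 1:1 classid 1:30 htb rate 10mbit ceil 100mbit
        tc class add dev eth0 parent 1:1 classid 1:40 htb rate 20mbit ceil 100mbit   # everything else
        # Classify traffic into the classes by destination port
        tc filter add dev eth0 protocol ip parent 1: prio 1 u32 match ip dport 80 0xffff flowid 1:10
        tc filter add dev eth0 protocol ip parent 1: prio 1 u32 match ip dport 21 0xffff flowid 1:20
        tc filter add dev eth0 protocol ip parent 1: prio 1 u32 match ip dport 25 0xffff flowid 1:30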

    Read the article

  • Does mod_secdownload in lighttpd support subdirectories for secure streaming?

    - by zomail
    I want to know whether lighttpd's mod_secdownload supports secure streaming from subdirectories. I want to secure the subdirectories within my download-area directory, but it does not seem to work on subdirectories. My configuration is given below:
        secdownload.secret = "MySecretSecurePassword"
        secdownload.document-root = "/home/lighttpd/download-area/"
        secdownload.uri-prefix = "/dl/"
        secdownload.timeout = 3600
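    For reference, mod_secdownload does handle subdirectory paths, as long as the token is computed over the full path relative to secdownload.document-root. A sketch of the classic lighttpd 1.4 URL scheme in Python (the file path is an assumption; verify the exact token formula against the docs for your lighttpd version):

        import hashlib, time

        secret   = "MySecretSecurePassword"
        rel_path = "/subdir/video.mp4"         # path under download-area/, leading slash required
        t_hex    = "%08x" % int(time.time())   # timestamp as 8 lowercase hex digits

        # Classic 1.4.x scheme: token = md5(secret + rel_path + t_hex)
        token = hashlib.md5((secret + rel_path + t_hex).encode()).hexdigest()
        print("/dl/" + token + "/" + t_hex + rel_path)   # e.g. /dl/<md5>/<hex-time>/subdir/video.mp4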

    Read the article

  • Oracle error when logging into database

    - by Bryan
    When I try to log in to my database as a specific user I get the message below; I can log in as SYSTEM just fine. Does anyone know how to figure out what is causing this? Thanks in advance for the help. This is from the alert log:
        ----- Error Stack Dump -----
        ORA-00604: error occurred at recursive SQL level 1
        ORA-01438: value larger than specified precision allowed for this column
        ORA-06512: at line 2
    Oracle 10g, OEL 5.5
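    Since the error occurs at recursive SQL level 1 during logon, the usual suspect is an AFTER LOGON trigger inserting into an audit table whose NUMBER column is too small for a value being written. Two hedged starting points (the data-dictionary query is standard; confirm the event-trace syntax for your patch level before using it on production):

        -- Find logon triggers whose body might be raising ORA-01438
        SELECT owner, trigger_name, table_name
        FROM   dba_triggers
        WHERE  triggering_event LIKE 'LOGON%';

        -- Optionally, capture the failing statement in a trace file
        ALTER SYSTEM SET EVENTS '1438 trace name errorstack level 3';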

    Read the article

  • Oracle export problem

    - by abhiram86
    I use:
        exp username/password@servername owner={ownername} file=backuppath
    The parameters given are correct, but I am still not able to connect. Below is the error trace:
        EXP-00056: ORACLE error 12545 encountered
        ORA-12545: Connect failed because target host or object does not exist
        EXP-00000: Export terminated unsuccessfully
    What can the possible reasons be?
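    ORA-12545 is raised before authentication even starts: the client resolved the connect alias to a host it cannot reach or resolve. A minimal triage sketch from the client machine (alias and host names are placeholders):

        tnsping servername        # does the alias resolve, and to which HOST/PORT?
        ping dbhost.example.com   # does that HOST actually resolve in DNS / the hosts file?
        # Then check the alias entry in $ORACLE_HOME/network/admin/tnsnames.ora, e.g.:
        #   SERVERNAME =
        #     (DESCRIPTION =
        #       (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
        #       (CONNECT_DATA = (SERVICE_NAME = orcl)))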

    Read the article

  • mod_rewrite and Apache questions

    - by John
    We have an interesting situation with some help desk software that we are trying to set up. This is a web-based software application that allows customers and staff to log into it and access tickets, supply updates, etc. The challenge deals with the two different domains that we use and the mod_rewrite rules needed to make it all work with our SSL certificate, which is only bound to one of the domains. I will list the use case scenarios below and the challenges that we are having:
    1. If you access http://support.domain1.com/support then it redirects fine to https://support.domain2.com/support
    2. If you access http://support.domain2.com/support then it redirects fine to https://support.domain2.com/support
    3. If you access https://support.domain1.com/support then it throws a "server cannot be found" error
    4. If you access https://support.domain1.com/support/ after having visited https://support.domain2.com/support then you are presented with a "this connection is untrusted" error about the certificate only being valid for the domain2 domain instead of the domain1 domain name
    I have tried just about every mod_rewrite rule that I can think of to make this work, and I have not been able to locate the correct combination. I was curious if anyone had some ideas on how to make the redirects work correctly. In the end, we need all customers and staff to land at https://support.domain2.com/support regardless of which of the URL combinations above they enter. Thanks in advance for your help with this.
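    One point worth noting: scenarios 3 and 4 cannot be fixed by mod_rewrite at all, because the SSL handshake (and therefore the certificate check) happens before Apache ever sees a request to rewrite. The usual fix is a certificate that also covers domain1 (for example a SAN/UCC certificate), after which plain redirects suffice; a hedged sketch, with file paths as placeholders:

        <VirtualHost *:80>
            ServerName support.domain1.com
            ServerAlias support.domain2.com
            Redirect permanent /support https://support.domain2.com/support
        </VirtualHost>

        <VirtualHost *:443>
            ServerName support.domain1.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/support-san.crt   # must list domain1 as a SAN
            SSLCertificateKeyFile /etc/ssl/private/support-san.key
            Redirect permanent /support https://support.domain2.com/support
        </VirtualHost>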

    Read the article

  • Different behavior for REF CURSOR between Oracle 10g and 11g when unique index present?

    - by wweicker
    Description
    I have an Oracle stored procedure that has been running for 7 or so years both locally on development instances and on multiple client test and production instances running Oracle 8, then 9, then 10, and recently 11. It worked consistently until the upgrade to Oracle 11g. Basically, the procedure opens a reference cursor, updates a table, then completes. In 10g the cursor will contain the expected results, but in 11g the cursor will be empty. No DML or DDL changed after the upgrade to 11g. This behavior is consistent on every 10g or 11g instance I've tried (10.2.0.3, 10.2.0.4, 11.1.0.7, 11.2.0.1, all running on Windows). The specific code is much more complicated, but to explain the issue in a somewhat realistic overview: I have some data in a header table and a bunch of child tables that will be output to PDF. The header table has a boolean (NUMBER(1), where 0 is false and 1 is true) column indicating whether that data has been processed yet. The view is limited to only show rows that have not been processed (the view also joins on some other tables, makes some inline queries and function calls, etc.). So at the time the cursor is opened, the view shows one or more rows; then after the cursor is opened, an update statement runs to flip the flag in the header table, a commit is issued, and the procedure completes. On 10g, the cursor opens, it contains the row, then the update statement flips the flag, and running the procedure a second time would yield no data. On 11g, the cursor never contains the row; it's as if the cursor does not open until after the update statement runs. I'm concerned that something may have changed in 11g (hopefully a setting that can be configured) that might affect other procedures and other applications. What I'd like to know is whether anyone knows why the behavior is different between the two database versions and whether the issue can be resolved without code changes.
    Update 1: I managed to track the issue down to a unique constraint. It seems that when the unique constraint is present in 11g, the issue is reproducible 100% of the time, regardless of whether I'm running the real-world code against the actual objects or the following simple example.
    Update 2: I was able to completely eliminate the view from the equation. I have updated the simple example to show that the problem exists even when querying directly against the table.
    Simple Example
        CREATE TABLE tbl1 (
          col1 VARCHAR2(10),
          col2 NUMBER(1)
        );

        INSERT INTO tbl1 (col1, col2) VALUES ('TEST1', 0);

        /* View is no longer required to demonstrate the problem
        CREATE OR REPLACE VIEW vw1 (col1, col2) AS
          SELECT col1, col2 FROM tbl1 WHERE col2 = 0;
        */

        CREATE OR REPLACE PACKAGE pkg1 AS
          TYPE refWEB_CURSOR IS REF CURSOR;
          PROCEDURE proc1 (crs OUT refWEB_CURSOR);
        END pkg1;

        CREATE OR REPLACE PACKAGE BODY pkg1 IS
          PROCEDURE proc1 (crs OUT refWEB_CURSOR) IS
          BEGIN
            OPEN crs FOR
              SELECT col1
              FROM   tbl1
              WHERE  col1 = 'TEST1'
              AND    col2 = 0;

            UPDATE tbl1 SET col2 = 1 WHERE col1 = 'TEST1';
            COMMIT;
          END proc1;
        END pkg1;
    Anonymous Block Demo
        DECLARE
          crs1 pkg1.refWEB_CURSOR;
          TYPE rectype1 IS RECORD (
            col1 tbl1.col1%TYPE   -- tbl1 rather than vw1, now that the view is commented out
          );
          rec1 rectype1;
        BEGIN
          pkg1.proc1 ( crs1 );
          DBMS_OUTPUT.PUT_LINE('begin first test');
          LOOP
            FETCH crs1 INTO rec1;
            EXIT WHEN crs1%NOTFOUND;
            DBMS_OUTPUT.PUT_LINE(rec1.col1);
          END LOOP;
          DBMS_OUTPUT.PUT_LINE('end first test');
        END;

        /* After creating this index, the problem is seen */
        CREATE UNIQUE INDEX unique_col1 ON tbl1 (col1);

        /* Reset data to initial values */
        TRUNCATE TABLE tbl1;
        INSERT INTO tbl1 (col1, col2) VALUES ('TEST1', 0);

        DECLARE
          crs1 pkg1.refWEB_CURSOR;
          TYPE rectype1 IS RECORD (
            col1 tbl1.col1%TYPE
          );
          rec1 rectype1;
        BEGIN
          pkg1.proc1 ( crs1 );
          DBMS_OUTPUT.PUT_LINE('begin second test');
          LOOP
            FETCH crs1 INTO rec1;
            EXIT WHEN crs1%NOTFOUND;
            DBMS_OUTPUT.PUT_LINE(rec1.col1);
          END LOOP;
          DBMS_OUTPUT.PUT_LINE('end second test');
        END;
    Example of what the output on 10g would be:
        begin first test
        TEST1
        end first test
        begin second test
        TEST1
        end second test
    Example of what the output on 11g would be:
        begin first test
        TEST1
        end first test
        begin second test
        end second test
    Clarification
    I can't remove the COMMIT, because in the real-world scenario the procedure is called from a web application. When the data provider on the front end calls the procedure, it will issue an implicit COMMIT when disconnecting from the database anyway. So if I removed the COMMIT in the procedure, then yes, the anonymous block demo would work, but the real-world scenario would not, because the COMMIT would still happen.
    Question
    Why is 11g behaving differently? Is there anything I can do other than re-write the code?

    Read the article

  • How can I transfer files to a Kindle Fire with a Micro-USB cable?

    - by Jeff
    I'm running Ubuntu 11.10, and when I connect my Kindle Fire to my computer via micro-USB, it is not recognized automatically. Other USB devices, such as my iPod and digital camera, are recognized just fine. It does not appear to be a USB power issue, since the Kindle Fire wakes up from sleeping when it is plugged in. I never get the message on the Kindle telling me it is ready to accept files from the computer, though. Here are the last 15 lines of dmesg after plugging the Kindle in:
        jeff@prime:~$ dmesg | tail -n 15
        [45918.269671] ieee80211 phy0: wl_ops_bss_info_changed: arp filtering: enabled true, count 1 (implement)
        [45929.072149] wlan0: no IPv6 routers present
        [46743.224217] usb 1-1: new high speed USB device number 5 using ehci_hcd
        [46743.364623] scsi8 : usb-storage 1-1:1.0
        [46744.366102] scsi 8:0:0:0: Direct-Access Amazon Kindle 0001 PQ: 0 ANSI: 2
        [46744.366356] scsi: killing requests for dead queue
        [46744.372494] scsi: killing requests for dead queue
        [46744.384510] scsi: killing requests for dead queue
        [46744.392348] scsi: killing requests for dead queue
        [46744.392731] scsi: killing requests for dead queue
        [46744.396853] scsi: killing requests for dead queue
        [46744.397214] scsi: killing requests for dead queue
        [46744.400795] scsi: killing requests for dead queue
        [46744.401589] sd 8:0:0:0: Attached scsi generic sg2 type 0
        [46744.407520] sd 8:0:0:0: [sdb] Attached SCSI removable disk
    And here are my mounted filesystems:
        jeff@prime:~$ df
        Filesystem           1K-blocks      Used Available Use% Mounted on
        /dev/sda1            298594984 174663712 108763480  62% /
        udev                   1407684         4   1407680   1% /dev
        tmpfs                   566924       896    566028   1% /run
        none                      5120         0      5120   0% /run/lock
        none                   1417308       300   1417008   1% /run/shm
        /home/jeff/.Private  298594984 174663712 108763480  62% /home/jeff
    I should note that, since I got Dropbox working on my Kindle, the USB connection is no longer strictly necessary, but as a matter of principle I'd love to get it working.

    Read the article

  • WebLogic Server JMS WLST Script – Who is Connected To My Server

    - by james.bayer
    Ever want to know who was connected to your WebLogic Server instance for troubleshooting? An email exchange about this topic and JMS came up this week, and I've heard it come up once or twice before too. Sometimes it's interesting or helpful to know the list of JMS clients (IP addresses, JMS destinations, message counts) that are connected to a particular JMS server. This can be helpful for troubleshooting. Tom Barnes from the WebLogic Server JMS team provided some helpful advice: the JMS connection runtime MBean has "getHostAddress", which returns the host address of the connecting client JVM as a string. A connection runtime can contain session runtimes, which in turn can contain consumer runtimes. The consumer runtime, in turn, has "getDestinationName" and "getMemberDestinationName". I think that this means you could write a WLST script, for example, to dump all consumers, their destinations, plus their parent session's parent connection's host addresses. Note that the client runtime MBeans (connection, session, and consumer) won't necessarily be hosted on the same JVM as a destination that's in the same cluster (client messages route from their connection host to their ultimate destination in the same cluster).
    Writing the Script
    So armed with this information, I decided to take the challenge and see if I could write a WLST script to do this. It's always helpful to have the WebLogic Server MBean Reference handy for activities like this. This one is focused on JMS consumers, and I only took a subset of the information available, but it could be modified easily to do producers. I haven't tried this on a more complex environment, but it works in my simple sandbox case, so it should give you the general idea.
        # Better to use the Secure Config File approach for login, as shown here:
        # http://buttso.blogspot.com/2011/02/using-secure-config-files-with-weblogic.html
        connect('weblogic','welcome1','t3://localhost:7001')

        # Navigate to the Server Runtime and get the Server Name
        serverRuntime()
        serverName = cmo.getName()

        # Multiple JMS Servers could be hosted by a single WLS server
        cd('JMSRuntime/' + serverName + '.jms')
        jmsServers = cmo.getJMSServers()

        # Find the list of all JMSServers for this server
        namesOfJMSServers = ''
        for jmsServer in jmsServers:
            namesOfJMSServers += jmsServer.getName() + ' '   # += so every name is kept

        # Count the number of connections
        jmsConnections = cmo.getConnections()
        print str(len(jmsConnections)) + ' JMS Connections found for ' + serverName + ' with JMSServers ' + namesOfJMSServers

        # Recurse the MBean tree for each connection and pull out some information about consumers
        for jmsConnection in jmsConnections:
            try:
                print 'JMS Connection:'
                print '  Host Address = ' + jmsConnection.getHostAddress()
                print '  ClientID = ' + str( jmsConnection.getClientID() )
                print '  Sessions Current = ' + str( jmsConnection.getSessionsCurrentCount() )
                jmsSessions = jmsConnection.getSessions()
                for jmsSession in jmsSessions:
                    jmsConsumers = jmsSession.getConsumers()
                    for jmsConsumer in jmsConsumers:
                        print '  Consumer:'
                        print '    Name = ' + jmsConsumer.getName()
                        print '    Messages Received = ' + str(jmsConsumer.getMessagesReceivedCount())
                        print '    Member Destination Name = ' + jmsConsumer.getMemberDestinationName()
            except:
                print 'Error retrieving JMS Consumer Information'
                dumpStack()

        # Cleanup
        disconnect()
        exit()
    Example Output
    I expect the output to look something like this, looping through all the connections; this is just the first one:
        1 JMS Connections found for AdminServer with JMSServers myJMSServer
        JMS Connection:
          Host Address = 127.0.0.1
          ClientID = None
          Sessions Current = 16
          Consumer:
            Name = consumer40
            Messages Received = 1
            Member Destination Name = myJMSModule!myQueue
    Notice that it has the IP address of the client. There are 16 sessions open because I'm using an MDB, which defaults to 16 connections, so this matches what I expect. Let's see what the full output actually looks like:
        D:\Oracle\fmw11gr1ps3\user_projects\domains\offline_domain>java weblogic.WLST d:\temp\jms.py

        Initializing WebLogic Scripting Tool (WLST) ...
        Welcome to WebLogic Server Administration Scripting Shell
        Type help() for help on available commands
        Connecting to t3://localhost:7001 with userid weblogic ...
        Successfully connected to Admin Server 'AdminServer' that belongs to domain 'offline_domain'.
        Warning: An insecure protocol was used to connect to the server. To ensure on-the-wire security, the SSL port or Admin port should be used instead.
        Location changed to serverRuntime tree. This is a read-only tree with ServerRuntimeMBean as the root. For more help, use help(serverRuntime)

        1 JMS Connections found for AdminServer with JMSServers myJMSServer
        JMS Connection:
          Host Address = 127.0.0.1
          ClientID = None
          Sessions Current = 16
          Consumer: Name = consumer40, Messages Received = 2, Member Destination Name = myJMSModule!myQueue
          Consumer: Name = consumer34, Messages Received = 2, Member Destination Name = myJMSModule!myQueue
          Consumer: Name = consumer37, Messages Received = 2, Member Destination Name = myJMSModule!myQueue
          Consumer: Name = consumer16, Messages Received = 2, Member Destination Name = myJMSModule!myQueue
          Consumer: Name = consumer46, Messages Received = 2, Member Destination Name = myJMSModule!myQueue
          Consumer: Name = consumer49, Messages Received = 2, Member Destination Name = myJMSModule!myQueue
          Consumer: Name = consumer43, Messages Received = 1, Member Destination Name = myJMSModule!myQueue
          Consumer: Name = consumer55, Messages Received = 1, Member Destination Name = myJMSModule!myQueue
          Consumer: Name = consumer25, Messages Received = 1, Member Destination Name = myJMSModule!myQueue
          Consumer: Name = consumer22, Messages Received = 1, Member Destination Name = myJMSModule!myQueue
          Consumer: Name = consumer19, Messages Received = 1, Member Destination Name = myJMSModule!myQueue
          Consumer: Name = consumer52, Messages Received = 1, Member Destination Name = myJMSModule!myQueue
          Consumer: Name = consumer31, Messages Received = 1, Member Destination Name = myJMSModule!myQueue
          Consumer: Name = consumer58, Messages Received = 1, Member Destination Name = myJMSModule!myQueue
          Consumer: Name = consumer28, Messages Received = 1, Member Destination Name = myJMSModule!myQueue
          Consumer: Name = consumer61, Messages Received = 1, Member Destination Name = myJMSModule!myQueue

        Disconnected from weblogic server: AdminServer
        Exiting WebLogic Scripting Tool.
    Thanks to Tom Barnes for the hints and the inspiration to write this up. Image of telephone switchboard courtesy of http://www.JoeTourist.net/ JoeTourist InfoSystems.

    Read the article

  • GDC 2012: DXT is NOT ENOUGH! Advanced texture compression for games

    GDC 2012: DXT is NOT ENOUGH! Advanced texture compression for games (pre-recorded GDC content). Tired of fighting to fit your textures on disk? Too many bad reviews about long download times? Fix it! Don't settle for putting your raw DXT files in a ZIP; instead, compress your DXT textures by an extra 50%-70%! This talk will cover various ways to increase the compression of your game textures to allow for smaller distributables without introducing error, while allowing for fast on-demand decompression at run time. We'll cover how to losslessly squeeze your data with Huffman coding, block expansion, and vector quantization, and we'll even take a look at what MegaTexture is doing too. If you've ever fought to fit textures into memory, this is the talk for you. Speaker: Colt McAnlis. From: GoogleDevelopers. Views: 1132 (21 ratings). Time: 33:05.

    Read the article

  • Java - System design with distributed Queues and Locks

    - by sunny
    Looking for input to evaluate a design for a Java system which would have a distributed queue serving several (but not too many) nodes. These nodes would process objects present in the distributed queue and on occasion require a distributed lock across the cluster on arbitrary (distributed) data structures; these data structures could potentially live in a distributed cache. Ruling out Terracotta (DSO), Hazelcast, and Akka, what could the alternative choices be? Currently considering ZooKeeper as the distributed locking mechanism. Since the recommendation is that a znode not exceed 1 MB in size, the understanding is that ZooKeeper should not be used as a distributed queue; see also Netflix Curator tech note 4. So should a distributed cache, say memcached or Redis, be used to emulate the distributed queue? That is, the distributed queue would be stored in the cache and locked cluster-wide via ZooKeeper. Are there potential pitfalls with this high-level approach? The objects don't need to be taken off the queue; each object will pass through a lifecycle which will determine its removal from the queue. There would be about 10k+ objects in the queue at a given time, changing states, and any node could service one stage of an object's lifecycle (although not strictly necessary, i.e. one node could serve the entire lifecycle if that is more efficient). Any suggestions/alternatives? Side note: new to ZooKeeper, Redis, etc.
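    To make the proposed split concrete (queue and object state in a cache, mutual exclusion via ZooKeeper), here is a minimal Python sketch using the redis-py and kazoo client libraries; host addresses, key names, the lock path, and the lifecycle states are all assumptions, and real code would need error handling and timeout tuning:

        import redis
        from kazoo.client import KazooClient

        r = redis.Redis(host="cache-host", port=6379)   # assumed Redis node holding the queue
        zk = KazooClient(hosts="zk-host:2181")          # assumed ZooKeeper ensemble for locking
        zk.start()

        def advance_lifecycle(obj_id, new_state):
            """Move one queued object to its next state under a cluster-wide lock."""
            lock = zk.Lock("/locks/objects/" + obj_id, identifier="node-1")
            with lock:                                  # blocks until this node holds the lock
                r.hset("obj:" + obj_id, "state", new_state)
                if new_state == "DONE":                 # lifecycle complete: drop from the queue
                    r.lrem("queue:objects", 0, obj_id)

        # Producer side: the queue holds object IDs; per-object state lives in a Redis hash
        r.lpush("queue:objects", "42")
        r.hset("obj:42", "state", "NEW")
        advance_lifecycle("42", "PROCESSING")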

    Read the article

  • Create a Social Community of Trust Along With Your Federal Digital Services Governance

    - by TedMcLaughlan
    The Digital Services Governance Recommendations were recently released, supporting the US Federal Government's Digital Government Strategy Milestone Action #4.2 to establish agency-wide governance structures for developing and delivering digital services.
    Figure 1 - From: "Digital Services Governance Recommendations"
    While extremely important from a policy and procedure perspective within an Agency's information management and communications enterprise, these recommendations only very lightly reference perhaps the most important success enabler - the "Trusted Community" required for ultimate usefulness of the services delivered. By "ultimate usefulness", I mean the collection of public, transparent properties around government information and digital services that includes social trust and validation, social reach, expert respect, and comparative, standard measures of relative value. In other words, do the digital services meet the expectations of the public social media ecosystem (people AND machines)? A rigid governance framework, controlling by rules, policies and roles the creation and dissemination of digital services, may meet the expectations of direct end-users and most stakeholders - including the agency information stewards and security officers. All others who may share comments about the services, write about them, swap or review extracts, repackage, visualize or otherwise repurpose the output for use in entirely unanticipated, social ways - these "stakeholders" will not be governed, but may observe guidance generated by a "Trusted Community". As recognized members of the trusted community, these stakeholders may ultimately define the right scope and detail of governance that all other users might observe, promoting and refining the usefulness of the government product as the social ecosystem expects. So, as part of an agency-centric governance framework, it's advised that a flexible governance model be created for stewarding a "Community of Trust" around the digital services. The first steps follow the approach outlined in the Recommendations:
    Step 1: Gather a Core Team
    In addition to the roles and responsibilities described, perhaps a set of characteristics and responsibilities can be developed for the "Trusted Community Steward/Advocate" - i.e. a person or team that (a) is entirely cognizant of and respected within the external social media communities, and (b) is trusted both within the agency and outside as a practical, responsible, non-partisan communicator of useful information. This may seem like a standard Agency PR/Outreach team role - but often an agency or stakeholder subject matter expert with a public, active social persona works even better.
    Step 2: Assess What You Have
    In addition to existing agency or stakeholder decision-making bodies and assets, it's important to take a PR/Marketing view of the social ecosystem. How visible are the services across the social channels utilized by current or desired constituents of your agency? What's the online reputation of your agency and perhaps the service(s)? Is Search Engine Optimization (SEO) a facet of external communications/publishing lifecycles? Who are the public champions, instigators, and value-adders for the digital services, or perhaps just influential "communicators" (i.e. with no stake in the game)? You're essentially assessing your market and social presence, and identifying the actors (including your own agency employees) in the existing community of trust.
    Step 3: Determine What You Want
    The evolving Community of Trust will most readily absorb, support and provide feedback regarding "Core Principles" (Element B of the "six essential elements of a digital services governance structure") shared by your Agency, and will obviously play a large, though probably very unstructured, part in Element D, "Stakeholder Input and Participation". Plan for this, and seek input from the social media community with respect to performance metrics - these should be geared around the outcome and growth of the trusted community's actions. How big and active is this community? What's the influential reach of this community with respect to particular messaging or campaigns generated by the Agency? What's the referral rate TO your digital services, FROM channels owned or operated by members of this community? (This requires governance with respect to content generation, inclusive of "markers" or "tags".) At this point, while your Agency proceeds with steps 4 ("Build/Validate the Governance Structure") and 5 ("Share, Review, Upgrade"), the Community of Trust might as well just get going, and start adding value and usefulness to the existing conversations and existing data services - loosely though directionally stewarded by your trusted advocate(s). Why is this an "Enterprise Architecture" topic? Because it's increasingly apparent that a Public Service "Enterprise" is not wholly contained within Agency facilities, firewalls and job titles - it's also manifested in actual, perceived or representative forms outside the walls, on the social Internet. An Agency's EA model and resulting investments both facilitate and are impacted by the "Social Enterprise". At Oracle, we're very active both within our Enterprise and outside, helping foster social architectures that enable truly useful public services, digital or otherwise.

    Read the article

  • Mixed Emotions: Humans React to Natural Language Computer

    - by Applications User Experience
    There was a big event in Silicon Valley on Tuesday, November 15. Watson, the natural language computer developed at the IBM Watson Research Center in Yorktown Heights, New York, and its inventor and principal research investigator, David Ferrucci, were guests at the Computer History Museum in Mountain View, California for another round of the television game Jeopardy. You may have read about or watched on YouTube how Watson beat Ken Jennings and Brad Rutter, two top Jeopardy competitors, last February. This time, Watson swept the floor with two Silicon Valley high-achievers, one a venture capitalist with a background in math, computer engineering, and physics, and the other a technology and finance writer well-versed in all aspects of culture and humanities. Watson is the product of the DeepQA research project, which attempts to create an artificially intelligent computing system through advances in natural language processing (NLP), among other technologies. NLP is a computing strategy that seeks to provide answers by processing large amounts of unstructured data contained in multiple large domains of human knowledge. There are several ways to perform NLP, but one way to start is by recognizing key words, then processing contextual cues associated with the keyword concepts so that you get many more "smart" (that is, human-like) deductions, rather than a series of "dumb" matches. Jeopardy questions often require more than keyword matching to get the correct answer; typically several pieces of information, often from vastly different categories, must be put together to come up with a satisfactory word-string solution that can be rephrased as a question. Smarter than your average search engine, but is it as smart as a human? Watson was especially fast at descrambling mixed-up state capital names, and at recalling and pairing movie titles where one started and the other ended in the same word (e.g., Billion Dollar Baby Boom, where both titles used the word Baby). David said they had basically removed the variable of how fast Watson hit the buzzer compared to human contestants, but frustration frequently appeared on the faces of the contestants beaten to the punch by Watson. David explained that top Jeopardy winners like Jennings achieved their success with a similar strategy, timing their buzz to the end of the reading of the clue, and "running the board", being first to respond on about 60% of the clues. Similar results for Watson. It made sense that Watson would be good at the technical and scientific stuff, so I figured the venture capitalist was toast. But I thought for sure Watson would lose to the writer in categories such as pop culture, wines and foods, and other humanities. Surprisingly, it held its own. I was amazed it could recognize a word definition of a syllogism in the category of philosophy. So what was the audience reaction to all of this? We started out expecting our formidable human contestants to easily run some of their categories; however, they started off on the wrong foot with the state capitals, which Watson could unscramble so efficiently. By the end of the first round, contestants and the audience were feeling a little bit, well, deflated. Watson was winning by about $13,000, and the humans had gone into negative dollars. The IBM host said he was going to "slow Watson down a bit," and the humans came back with respectable scores in Double Jeopardy.
    This was partially thanks to a very sympathetic audience (and host, also a human) providing "group-think" on many questions, especially baseball's most valuable players, which, by the way, couldn't have been hard because even I knew them. Yes, that's right, the humans cheated. Since Watson could speak but not hear us (it didn't have speech recognition capability), it was probably unaware of this. In Final Jeopardy, the single question had to do with law. I was sure Watson would blow this one, but all contestants were able to answer correctly about a copyright law. In a career devoted to making computers more helpful to people, I think I may have seen how a computer can do too much. I'm not sure I'd want to work side by side with a Watson doing my job. Certainly listening and empathy are important traits we humans still have over Watson. While there was great enthusiasm in the packed room of computer scientists and their friends for this standing-room-only show, I think it made several of us uneasy (especially the poor human contestants whose egos were soundly bashed in the first round). This computer system, by the way, only took 4 years to program. David Ferrucci mentioned several practical uses for Watson, including medical diagnoses and legal strategies. Are you "the expert" in your job? Imagine NLP computing on an Oracle database. This may be the user interface of the future, enabling users to better process big data. How do you think you'd like it? Postscript: There were three little boys sitting in front of me in the very first row. They looked, how shall I say it, unimpressed!

    Read the article

  • PeopleSoft at Alliance 2012 Executive Forum

    - by John Webb
    Guest Posting From Rebekah Jackson
    This week I joined over 4,800 Higher Ed and Public Sector customers and partners in Nashville at our annual Alliance conference. I got lost easily in the hallways of the sprawling Gaylord Opryland Hotel. I carried the resort map with me, and I would still stand for several minutes at a very confusing junction, studying the map and the signage on the walls. Hallways led off in many directions, some with elevators going down here and stairs going up there. When I took a wrong turn I would instantly feel stuck, lose my bearings, and occasionally even have to send out a call for help. It strikes me that the theme for the Executive Forum this year outlines a less tangible but equally disorienting set of challenges that our higher education customers' CIOs are facing: Making Decisions at the Intersection of Business Value, Strategic Investment, and Enterprise Technology. The forces acting upon higher education institutions today are not neat, straightforward decision points, where one can glance to the right, glance to the left, and then quickly choose the best course of action. The operational, technological, and strategic factors that must be considered are complex, interrelated, messy, and the stakes are high. Michael Horn, co-author of "Disrupting Class: How Disruptive Innovation Will Change the Way the World Learns", set the tone for the day. He introduced the model of disruptive innovation, which grew out of the research he and his colleagues have done on "Why Successful Organizations Fail". Highly simplified, the pattern he shared is that things start out decentralized, take a leap to extreme centralization, and then experience progressive decentralization. Using computers as an example, we started with a slide rule, then developed the computer, which centralized in the form of mainframes, and gradually decentralized to mini-computers, desktop computers, laptops, and now mobile devices. According to Michael, you have more computing power in your cell phone than existed on the planet 60 years ago, or was on the first rocket that went to the moon. Applying this pattern to Higher Education means the introduction of expensive and prestigious private universities, followed by the advent of state schools, then community colleges, and now online education. Michael shared statistics that indicate 50% of students will be taking at least one online course by 2014, and by some measures, that's already the case today. The implication is that technology moves from being the backbone of the campus, the IT department's domain, and pushes into the academic core of the institution. Innovative programs are underway at many schools like Bellevue and BYU Idaho, joined by startups and disruptive new players like the Khan Academy. This presents both threat and opportunity for higher education institutions, and means that IT decisions cannot afford to be disconnected from the institution's strategic plan. Subsequent sessions explored this theme. Theo Bosnak, from Attain, discussed the model they use for assessing the complete picture of an institution's financial health. Compounding the issue are the dramatic trends occurring in technology and the vendors that provide it. Ovum analyst Nicole Engelbert shared her insights next and suggested that incremental changes are no longer an option; instead, fundamental changes are affecting the landscape of enterprise technology in higher ed.
    Nicole closed with her recommendation that institutions focus on the trends in higher education with an eye towards the strategic requirements and business value first; technology then is the enabler. The last presentation of the day was from Tom Fisher, Sr. Vice President of Cloud Services at Oracle. Tom runs the delivery arm of the Cloud Services group, and shared his thoughts candidly about his experiences with cloud deployments, as well as key issues around managing costs and security in cloud deployments. Okay, we've covered a lot of ground at this point, from financial planning, business strategy, and cloud computing, to the possibility that half of the institutions in the US might not be around in their current form 10 years from now. Did I forget to mention that that was raised in the morning session? It seems a little hard to believe, and yet Michael Horn made a compelling point. Apparently 100 years ago, 8 of the top 10 education institutions in the world were German. Today, the leading German school is ranked somewhere in the 40s or 50s. What will the landscape be 100 years from now? Will there be an institution from China, India, or Brazil in the top 10? As Nicole suggested, maybe US parents will be sending their children to schools overseas much sooner, faced with the ever-increasing costs of a US-based education. Will corporations begin to view skill-based certification from an online provider as a viable alternative to a 4-year degree from an accredited institution, fundamentally altering the education industry as we know it?

    Read the article

  • Nashorn, the rhino in the room

    - by costlow
    Nashorn is a new runtime within JDK 8 that allows developers to run code written in JavaScript and call back and forth with Java. One advantage of the Nashorn scripting engine is that it allows for quick prototyping of functionality or basic shell scripts that use Java libraries. The previous JavaScript runtime, named Rhino, was introduced in JDK 6 (released 2006, end of public updates Feb 2013). Keeping tradition amongst the global developer community, "Nashorn" is the German word for rhino. The Java platform and runtime is an intentional home to many languages beyond the Java language itself. OpenJDK's Da Vinci Machine helps coordinate work amongst language developers and tool designers, and has helped different languages by introducing the Invoke Dynamic instruction in Java 7 (2011), which resulted in two major benefits: speeding up execution of dynamic code, and providing the groundwork for Java 8's lambda executions. Many of these improvements are discussed at the JVM Language Summit, where language and tool designers get together to discuss experiences and issues related to building these complex components. There are a number of benefits to running JavaScript applications on JDK 8's Nashorn technology beyond writing scripts quickly:
    - Interoperability with Java and JavaScript libraries.
    - Scripts do not need to be compiled.
    - Fast execution and multi-threading of JavaScript running in Java's JRE.
    - The ability to remotely debug applications using an IDE like NetBeans, Eclipse, or IntelliJ (instructions on the Nashorn blog).
    - Automatic integration with Java monitoring tools, such as performance, health, and SIEM.
    In the remainder of this blog post, I will explain how to use Nashorn and how to benefit from those features.
    Nashorn execution environment
    The Nashorn scripting engine is included in all versions of Java SE 8, both the JDK and the JRE. Unlike Java code, scripts written for Nashorn are interpreted and do not need to be compiled before execution. Developers and users can access it in two ways:
    - Users running JavaScript applications can call the binary directly: jre8/bin/jjs. This mechanism can also be used in shell scripts by specifying a shebang like #!/usr/bin/jjs
    - Developers can use the API and obtain a ScriptEngine through: ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");
    When using a ScriptEngine, please understand that it executes code. Avoid running untrusted scripts or passing in untrusted/unvalidated inputs. During compilation, consider isolating access to the ScriptEngine and using Type Annotations to only allow @Untainted String arguments. One noteworthy difference between JavaScript executed in or outside of a web browser is that certain objects will not be available. For example, when run outside a browser, there is no access to a document object or DOM tree. Other than that, all syntax, semantics, and capabilities are present.
    Examples of Java and JavaScript
    The Nashorn script engine allows developers of all experience levels the ability to write and run code that takes advantage of both languages. The specific dialect is ECMAScript 5.1, as identified by the User Guide and its standards definition through ECMA International. In addition to the example below, Benjamin Winterberg has a very well written Java 8 Nashorn Tutorial that provides a large number of code samples in both languages.
    Basic Operations
    A basic Hello World application written to run on Nashorn would look like this:
        #!/usr/bin/jjs
        print("Hello World");
    The first line is a standard script indication, so that Linux or Unix systems can run the script through Nashorn. On Windows, where scripts are not as common, you would run the script like: jjs helloWorld.js.
    Receiving Arguments
    In order to receive program arguments, your jjs invocation needs to use the -scripting flag and a double dash to separate which arguments are for jjs and which are for the script itself: jjs -scripting print.js -- "This will print"
        #!/usr/bin/jjs
        var whatYouSaid = $ARG.length==0 ? "You did not say anything" : $ARG[0];
        print(whatYouSaid);
    Interoperability with Java libraries (including 3rd party dependencies)
    Another goal of Nashorn was to allow for quick scriptable prototypes, allowing access into Java types and any libraries. Resources operate in the context of the script (either in-line with the script or as separate threads), so if you open network sockets and your script terminates, those sockets will be released and available for your next run. Your code can access Java types the same as regular Java classes. The "import statements" are written somewhat differently to accommodate the language. There is a choice of two styles:
        // For standard classes, just name the class:
        var ServerSocket = java.net.ServerSocket
        // For arrays or other items, use Java.type:
        var ByteArray = Java.type("byte[]")
    You could technically use the second form for all of them. The same technique will allow your script to use Java types from any library or 3rd party component to quickly prototype items.
    Building a user interface
    One major difference between JavaScript inside and outside of a web browser is the availability of a DOM object for rendering views. When run outside of the browser, JavaScript has full control to construct the entire user interface with pre-fabricated UI controls, charts, or components. The example below is a variation on the Nashorn and JavaFX guide to show how items work together. Nashorn has a -fx flag to make the user interface components available. With the example script below, just specify: jjs -fx -scripting fx.js -- "My title"
        #!/usr/bin/jjs -fx
        var Button = javafx.scene.control.Button;
        var StackPane = javafx.scene.layout.StackPane;
        var Scene = javafx.scene.Scene;

        var clickCounter = 0;
        $STAGE.title = $ARG.length>0 ? $ARG[0] : "You didn't provide a title";

        var button = new Button();
        button.text = "Say 'Hello World'";
        button.onAction = myFunctionForButtonClicking;

        var root = new StackPane();
        root.children.add(button);
        $STAGE.scene = new Scene(root, 300, 250);
        $STAGE.show();

        function myFunctionForButtonClicking(){
          var text = "Click Counter: " + clickCounter;
          button.setText(text);
          clickCounter++;
          print(text);
        }
    For a more advanced post on using Nashorn to build a high-performing UI, see the JavaFX with Nashorn Canvas example.
    Interoperable with frameworks like Node, Backbone, or Facebook React
    The major benefit of any language is the interoperability gained by people and systems that can read, write, and use it for interactions. Because Nashorn is built to the ECMAScript specification, developers familiar with JavaScript frameworks can write their code and then have system administrators deploy and monitor the applications the same as any other Java application. A number of projects are also running Node applications on Nashorn through Project Avatar and the supported modules.
    In addition to the previously mentioned Nashorn tutorial, Benjamin has also written a post about Using Backbone.js with Nashorn. To show the multi-language power of the Java runtime, there is another interesting example that unites Facebook React and Clojure on JDK 8's Nashorn.
    Summary
    Nashorn provides a simple and fast way of executing JavaScript applications and bridging between the best of each language. By making the full range of Java libraries available to JavaScript applications, and the quick prototyping style of JavaScript available to Java applications, developers are free to work as they see fit. Software architects and system administrators can take advantage of one runtime and leverage any work that they have done to tune, monitor, and certify their systems. Additional information is available within:
    - The Nashorn Users' Guide
    - Java Magazine's article "Next Generation JavaScript Engine for the JVM."
    - The Nashorn team's primary blog, or a very helpful collection of Nashorn links.
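    As a closing example of the embedding route described earlier, here is a small self-contained sketch of the ScriptEngine usage, relying only on the standard javax.script API (the greet function is just an illustration):

        import javax.script.Invocable;
        import javax.script.ScriptEngine;
        import javax.script.ScriptEngineManager;

        public class NashornHello {
            public static void main(String[] args) throws Exception {
                // Obtain the Nashorn engine shipped with JDK 8
                ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");

                // Evaluate JavaScript source in-process
                engine.eval("function greet(name) { return 'Hello, ' + name; }");

                // Call back into the script from Java
                Invocable inv = (Invocable) engine;
                System.out.println(inv.invokeFunction("greet", "Nashorn"));   // Hello, Nashorn
            }
        }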

    Read the article

  • How can we plan projects realistically while accounting for support issues?

    - by Thomas Clayson
    We're having a problem at work: we're trying to schedule work so that we can assess time scales and get deadline dates. The problem is that it's difficult to plan for a project without knowing everything that's going to happen. For instance, right now we've planned all our projects through the start of December; however, in that time we will have various in-house and external meetings, teleconferences, and extra work. It's all well and good to say that a project will take three weeks, but if there is a week's worth of interruption in that time then the date of completion will be pushed back a week. The problem is threefold:
    1. When we schedule projects, the time scales are taken literally. If we estimate three weeks, the deadline is set for three weeks' time, the client is told, and there is no room for extension.
    2. Interim work and such means that we lose productive time working on the project.
    3. Sometimes clients don't have the time that we need to take to do the work, so they'll sometimes come to us and say they need a project done by the end of the month even when we think that the work will take two months, not to mention we already have work to be doing.
    We have a Gantt chart which we are trying to fill in with all the projects we have, and we fill in timesheets, but they're not compared to the Gantt chart at all. This makes it difficult to say "Well, we scheduled 3 weeks for this project, but we've lost a week here, so the deadline has to move back a week." It's also not professional to keep missing deadlines we've communicated to the client. How do other people deal with this type of situation? How do you manage the planning of projects? How much "extra" time do you schedule into a project to account for non-project work that occurs during a project? How do you deal with support issues and bugs and stuff, things you can't account for during planning? UPDATE: Lots of good answers, thank you.

    Read the article

  • Say What? Podcasting As Part of Your Content Marketing

    - by Mike Stiles
    What do you usually do in your car on the way to work? Sing along to the radio? Stream Pandora or iHeartRadio? Talk on the phone? Sit in total silence? Whatever it is you do, you could be using that time to make yourself an expert in any range of topics… using podcasts. We invite you to follow or subscribe to the daily Oracle Social Spotlight podcast, a quick roundup of the day's top stories around social marketing and the social networks. After podcasts arrived in 2004, growth was steady but slow. The concept was strong: anyone with a passion for any subject could make a show for anyone who cared to listen. Enter the smartphone, iTunes, new podcasting platforms, and social, and podcasting became easier than ever and made more sense for both podcasters and listeners. Stats show 1 in 5 smartphone owners are podcast consumers and 29% of Americans have listened to a podcast. The potential audience is also larger than ever. "Baked in" podcast apps on over 200 million devices expose users to volumes of audio content with just a tap. 97 million Americans are driving to work every day by themselves. And 38% of Americans listen to audio on a digital device each week, a number that's projected to double by 2015. Does that mean your brand should be podcasting? That's part of a larger discussion about your overall content strategy, provided you have one. But if you do, and podcasting is a component of it, here are some things to keep in mind:
    - Don't podcast just to do it. Podcast because you thought of a show customers and prospects will like that they can't get anywhere else.
    - Sound quality matters. Good microphones are not expensive. Bad sound is annoying, makes your brand feel cheap, and will turn today's sophisticated ears off.
    - The host matters. Many think they belong on the radio. Few actually do. Your brand's host should be comfortable and likeable. A top advantage of a podcast is that people can bond with a real person. It's a trust opportunity, so don't take it lightly.
    - The content matters. "All killer, no filler" means don't allow babbling just to fill enough time for an episode. Value the listeners' time, because that time is hard to get.
    - Put time, effort and creativity into it. Sure you're a business, but you're competing with content from professional media and showbiz producers. If you can include music, sound effects, and things that amuse the ears, do it.
    - If you start, be consistent. The #1 flaw in podcasting is when listeners can't count on another episode or don't know when it's coming. Don't skip doing shows just because you can. Get committed.
    - Get your cover art right. Podcasting is about audio, but people shop for podcasts by glancing through graphics. Yours has to be professional, cool, and informative to get listeners interested.
    - Cross-promote your podcast on all your channels. The competition for listeners is fierce, so if you have existing audiences you can leverage to launch your show, use them.
    - Optimize it for mobile. Assume that's where most listening will take place. If you're using one of the podcast platform apps, you should be in good shape.
    Frankly, the percentage of brands that are podcasting is quite low, and that's okay. Once you move beyond blogging and start connecting with real voices, poor execution can do damage. But more (32%) marketers want to learn how to use podcasting, and more (23%) were increasing their podcasting throughout this year.
    Bottom line, you want to share your brand's message and stories wherever your audience might be and in whatever way they prefer to take in content. Many prefer to do that while driving or working out, using the eyes- and hands-free medium of audio. @mikestiles
    Photo: stock.xchng

    Read the article

  • 3 Trends for SMBs around Social, Mobile, and Sensor

    - by Socially_Aware_Enterprise
    While I am often talking to big companies or discussing enterprise solutions, there are times when individuals ask me about small and medium-sized business trends. Interestingly, the enterprise Social, Mobile, and Sensor initiatives I regularly discuss are in fact relevant even to the mom-and-pop storefront. The ecosystem of new service players in the Social-Mobile-Sensor space generally emerges by developing partnerships with enterprises as they bring economy of scale to their services for the larger market. And of course Oracle has an entire division dedicated to delivering products and support that help emerging companies compete without the need to open an industrial-strength credit line. So here are some trends that we are helping large enterprises deploy today, but that small and medium businesses should be able to take advantage of by the end of this year and into 2015.
    1) The typical small business is generally "localized", but the ability to be "hyper-localized" will come as location-based services become ubiquitous. Many small businesses have one or several storefronts, typically within a single regional economic footprint. While the internet provides global reach, it will be the businesses that invest in social, mobile and local that will win in the end. Of course I am a huge SoMoLo evangelist. SMBs' content and targeting with platforms for geo-fencing, geo-conquesting and path-matching to HHI are all going to be accessible to them, if not through mobile apps, then via mobile messaging in the social networks that offer it. Expect to be able to target Facebook messaging not by city, but by store or mall… This makes being able to be "hyper-local" even more important. And with new proximity services coming online more than ever before, SMBs will operate and serve customers with pinpoint accuracy, right down to where they stand in an aisle. Geo-conquesting will be huge for small players who place ads when customers pass through competitors' regions; car dealers are doing this now. Also, of course, iBeacons are now very cheap and getting easier to put in retail stores. The ability for sales to happen anywhere in the store via a mobile phone or tablet is huge, as it will give the small shop the flexibility to not have to "guard the register" as more or most transactions become digital. Thus, M-Commerce and T-Commerce will change the job of cashier dramatically.
    2) Intra-brand advocacy: the idea now is that rather than just depending on your trusty social media manager and his team, you are going to push more and more individuals with expertise inside the organization to help manage, reach out, and utilize social channels to handle the incoming questions and answers customers need. While for years CRM was the tool of the enterprise, today CRMs extend this now "Salesforce et al" capability throughout the company. This puts greater pressure on organizing roles, but also flattens out the organization. Internal collaboration around topics and customer needs is going to be the key for SMBs to finally get serious about customer experiences. Their customers are online and in social networks. This includes not just B2C SMBs but also B2B companies. Don't believe me?
    To find the players, just search the hashtag #SocialSelling and you will see…
    3) The visual networks will begin to move from content aggregators to content collaboration platforms, which means Pinterest, Instagram, Vine, and others will begin to add more of the features brands want (marketing platforms first, rather than the one-off brand partnerships they offer today), and this will open ways for SMBs to engage with clear brand messaging and metrics, eventually providing more "collaboration" between brand and consumer. Don't think for a minute Facebook bought Oculus Rift so you could see your timeline in 3-D. The social networks I advise customers to invest in are ones that are intrinsically audio and visual. Players from SoundCloud to Pinterest are deploying ways for brands to harness their interactive visual or audio social networks to sell ad units, aka brand messaging. When the social media revolution began, the emphasis was on the social; today it is more and more about the media in social, which enterprises, and soon small and medium businesses, will be connected to.

    Read the article

  • How Important is Project Team Communication in the Public Sector?

    - by Melissa Centurio Lopes
    By Paul Bender, Director of Public Administration Strategy, Oracle Primavera
    It goes without saying that communication between project team members is a core competency that connects every member of a project team to a common set of strategies, goals and actions. If these components are not effectively shared by project leads and understood by stakeholders, project outcomes can be jeopardized and budgets may incur unnecessary risk. As reported by PMI's 2013 Pulse of the Profession, an organization's ability to meet project timelines, budgets and, especially, goals significantly impacts its ability to survive, and even thrive. The Pulse study revealed that the most crucial success factor in project management is effective communication to all stakeholders, a critical core competency for public agencies. PMI's 2013 Pulse of the Profession report revealed that US$135 million is at risk for every US$1 billion spent on a project. Further research on the importance of effective project team communication uncovers that a startling 56 percent (US$75 million of that US$135 million) is at risk due to ineffective communication. Simply stated: public agencies cannot execute strategic initiatives unless they can effectively communicate their strategic alignment and business benefits. Executives and project managers around the world agree that poor communication between project team members contributes to project failure. A Forbes Insights 2010 Strategic Initiatives Study, "Adapting Corporate Strategy to the Changing Economy," found that nine out of ten CEOs believe that communication is critical to the success of their strategic initiatives, and nearly half of respondents cite communication as an integral and active component of their strategic planning and execution process. Project managers see it similarly from their side as well. According to PMI's Pulse research, 55 percent of project managers agree that effective communication to all stakeholders is the most critical success factor in project management.
As we all know, not all projects succeed. On average, two in five projects do not meet their original goals and business intent, and one-half of those unsuccessful projects are related to ineffective communication. Results reveal that while all aspects of project communication can be challenging for public agencies, the biggest problem areas are:

- A gap in understanding the business benefits.
- The language used to deliver project-related information, which is often unclear and peppered with project management jargon.

Public agencies -- federal, state, and local -- have difficulty communicating at the appropriate levels with clarity and detail. This difficulty is likely exacerbated by the divide between each key audience and its understanding of project-specific, technical language.

For those involved in public sector project and portfolio management, I would be interested to hear your thoughts. Please also visit Primavera EPPM solutions for public sector.

    Read the article

  • Provocative Tweets From the Dachis Social Business Summit

    - by Mike Stiles
On June 20, all who follow social business and how social is changing how we do business and internal business structures gathered in London for the Dachis Social Business Summit. In addition to Oracle SVP Product Development Reggie Bradford, brands and thought leaders posed some thought-provoking ideas and figures. Here are some of the most oft-tweeted points, and the thoughts they provoked.

Tweet: The winners will be those who use data to improve performance.
Thought: Everyone is dwelling on ROI. Why isn't everyone dwelling on the opportunity to make their product or service better (as if that doesn't have an effect on ROI)? Big data can improve you…let it.

Tweet: High performance hinges on integrated teams that interact with each other.
Thought: Team members may work well with each other, but does the team as a whole "get" what other teams are doing? That's the key to an integrated, companywide workforce. (Internal social platforms can facilitate that, by the way.)

Tweet: Performance improvements come from making the invisible visible.
Thought: Many of the factors that drive customer behavior and decisions are invisible. Through social, customers are now showing us what we couldn't see before…if we're paying attention.

Tweet: Games have continuous feedback, which is why they're so engaging. Apply that to business operations.
Thought: You think your employees have an obligation to be 100% passionate and engaged at all times about making you richer. Think again. Like customers, they must be motivated. Visible insight that they're advancing on their goals helps.

Tweet: Who can add value to the data? Data will tend to migrate to where it will be most effective.
Thought: Not everybody needs all the data. One team will be able to make sense of, use, and add value to data that may be irrelevant to another team. Like a strategized football play, the data has to get sent to the spot on the field where it's needed most.

Tweet: The sale isn't the light at the end of the tunnel; it's the start of a new marketing cycle.
Thought: Another reason the ROI question is fundamentally flawed. The sale is not the end of the potential return on investment. After-the-sale service and nurturing begins where the sales "victory" ends.

Tweet: A dead sale is one that's not shared. People must be incentivized to share.
Thought: Guess what, customers now know their value to you as marketers on your behalf. They'll tell people about your product, but you've got to answer, "Why should I?" And you've got to answer it with something substantial, not lame trinkets.

Tweet: Social user motivations are competition, affection, excellence and curiosity.
Thought: Your followers will engage IF they can get something for doing it, love your culture so much they want you to win, are consistently stunned at the perfection and coolness of your products, or have been stimulated enough to want to know more.

Tweet: In Europe, 92% surveyed said they couldn't care less about brands.
Thought: Oh well, so much for loving you or being impressed enough with your products and service that they want you to win. We've got a long way to go.

Tweet: A complaint is a gift.
Thought: Our instinct where complaints are concerned is to a) not listen, b) dismiss the one who complains as a kook, c) make excuses, and d) reassure ourselves with internal group-think that they're wrong and we're right. It's the perfect recipe for how to never, ever grow or get better. In a way, this customer cares more than you do.

Tweet: 78% of consumers think peer recommendation is the best form of advertising. Eventually, engagement is going to eat advertising.
Thought: Why is peer recommendation best? Trust. If a friend tells me how great a movie was, I believe him. He has credibility with me. He's seen it, and he could care less if I buy a ticket. He's telling me it was awesome because he sincerely believes that it was. That's gold.

Tweet: 86% of customers are willing to pay more for a better customer experience.
Thought: This "how mad can we make our customers without losing them" strategy has to end. The customer experience has actual monetary value, money you're probably leaving on the table.

@mikestiles
Photo: stock.xchng

    Read the article

  • BYOD-The Tablet Difference

    - by Samantha.Y. Ma
By Allison Kutz, Lindsay Richardson, and Jennifer Rossbach, Sales Consultants

Less than three years ago, Apple introduced a new concept to the world: the tablet. It's hard to believe that in only 32 months, the iPad induced an entirely new way to do business. Because of their mobility and ease of use, tablets have grown in popularity to keep up with the increasingly "on the go" lifestyle, and their popularity isn't expected to decrease any time soon. In fact, global tablet sales are expected to increase drastically within the next five years, from 56 million tablets to 375 million by 2016.

Tablets have been utilized for every function imaginable in today's world. With over 730,000 active applications available for the iPad, these tablets are educational devices, portable book collections, gateways into social media, entertainment for children when Mom and Dad need a minute on their own, and so much more. It's no wonder that 74% of those who own a tablet use it daily, 60% use it several times a day, and an average of 13.9 hours per week are spent tapping away.

Tablets have become a critical part of a user's personal life; but why stop there? Businesses today are taking major strides in implementing these devices, with the hopes of benefiting from efficiency and productivity gains. Limo and taxi drivers use tablets as payment devices instead of traditional cash transactions. Retail outlets use tablets to find the exact merchandise customers are looking for. Professors use tablets to teach their classes, and business professionals demonstrate solutions and review reports from tablets.

Since an overwhelming majority of tablet users have started to use their personal iPads, PlayBooks, Galaxys, etc. in the workforce, organizations have had to make a change. In many cases, companies are willing to make that change. In fact, 79% of companies are making new investments in mobility this year, and Gartner reported that 90% of organizations are expected to support corporate applications on personal devices by 2014.

It's not just companies that are changing. Business professionals have become accustomed to tablets making their personal lives easier, and want that same effect in the workplace. Professionals no longer want to waste time manually entering data into their computer, or worse yet into a notebook, especially when the data has to be later transcribed to an online system. The response: the Bring Your Own Device (BYOD) phenomenon. According to Gartner, BYOD is "an alternative strategy allowing employees, business partners and other users to utilize a personally selected and purchased client device to execute enterprise applications and access data." Employees whose companies embrace this trend are more efficient because they get to use devices they are already accustomed to.

Tablets change the game when it comes to how sales professionals perform their jobs. Sales reps can easily store and access customer information and analytics using tablet applications, such as Oracle Fusion Tap.
This method is much more enticing for sales reps than spending time logging interactions on their (seemingly outdated) computers. Forrester and IDC reported that on average sales reps spend 65% of their time on activities other than selling, so having a tablet application to use on the go is extremely powerful. In February, Information Week released a list of "9 Powerful Business Uses for Tablet Computers," ranging from "enhancing the customer experience" to "improving data accuracy" to "eco-friendly motivations." Tablets complement the lifestyle of professionals who strive to be effective and efficient, both in the office and on the road.

Three Things Businesses Need to Do to Embrace BYOD:
- Make customer-facing websites tablet-friendly for consistent user experiences
- Develop tablet applications to continue to enhance the customer experience
- Embrace and use the technology that comes with tablets

Almost 55 million people in the U.S. own tablets because they are convenient, easy, and powerful. These are qualities that companies strive to achieve with any piece of technology. The inherent power of the devices, coupled with the growing number of business applications, ensures that tablets will transform the way that companies and employees perform.

    Read the article

  • What do the participants say about the Open Day in South Africa?

    - by Maria Sandu
On the 26th of September, a group of students who were specifically selected to attend an Open Day at Oracle South Africa joined us at our offices in Woodmead, Johannesburg. The conference room was filled with inquisitive minds.

What we had in store for them was a detailed presentation about Oracle, delivered by Zuko, Cluster Leader: Tech GB South Africa. The students' many questions were all answered, especially when we started addressing the opportunities we have, along with detailed information on our Graduate Programme. Our employees then came to talk about their experience, which allowed all the students to have an integrated learning experience. Inviting the students to walk around our Oracle offices allowed them to see, talk, and experience a bit of the culture, and to ask more questions.

Here is some of the feedback from the attendees:

Maxwell Moloi: "The open day truly served its purpose and exceeded expectations in the sense that I got to find out more about Oracle and all the different opportunities it has to offer. The fact that Oracle supplies a full solution to a customer and not just part of it, and how the company manages to set up professional development for their employees, is what entices me to want to join the rapidly growing team of Oracle."

Nqobile Mabaso: "I found the open day to be quite informative and enlightening, because coming from a marketing background I could apply the knowledge I got from varsity to the company. I was able to point out what they do as part of their corporate social responsibility (Oracle recently partnered with the Department of Education to build a school), how Oracle emphasizes relationship building because they know they sell to people and not companies, and how they offer the full stack of solutions, which gives them a competitive advantage over their competitors."

Nondumiso Mvelase: "The Open Day was a wonderful experience for me, especially because I have never been part of an Open Day before, so it was absolutely amazing for me. It gave me a good idea of how it is to be part of Oracle. We were served with lovely breakfast and lunch, which I enjoyed. I wish the Open Day went on for a whole week. Seeing and hearing from 2013 Graduates telling us about their experience within Oracle was very inspiring to me. They were encouraging us to work hard if we ever got the opportunity they had. After hearing this from them I will definitely not take it for granted."

Itumeleng Moraka: "Before I walked into the Oracle offices, all that was in my mind was databases and cloud storage. I was then surrounded by passionate, enthusiastic and welcoming employees. I came across a positive energy within the multinational company. I realized that Oracle is not a company that operates in survival mode. This may sound idealistic, but they operate in a non-traditional way, investing more into innovation; they stay focused on what matters most about where technology is going, and at the same time they are not losing sight of how their products make a difference in the world."

For more information on how to be part of the Oracle Graduate Programme, please follow us on Facebook!
https://www.facebook.com/CampusAtOracle

    Read the article

  • Application Composer: Exposing Your Customizations in BI Analytics and Reporting

    - by Richard Bingham
Introduction

This article explains in simple terms how to ensure the customizations and extensions you have made to your Fusion Applications are available for use in reporting and analytics. It also includes four embedded demo videos from our YouTube channel (if they don't appear, check the browser address bar for a blocking shield icon). If you are new to Business Intelligence, consider first reviewing our getting started article, and you can read more about the topic of custom subject areas in the documentation book Extending Sales.

There are essentially four sections to this post. First we look at how custom fields added to standard objects are made available for reporting. Secondly we look at creating custom subject areas on the standard objects. Next we consider reporting on custom objects, starting with simple standalone objects, then child custom objects, and finally custom objects with relationships. Finally, this article reviews how flexfields are exposed for reporting.

Whilst this article applies to both Cloud/SaaS and on-premises deployments, if you are an on-premises developer then you can also use the BI Administration Tool to customize your BI metadata repository (the RPD) and create new subject areas. Whilst this is not covered here, you can read more in Chapter 8 of the Extensibility Guide for Developers.

Custom Fields on Standard Objects

If you add a custom field to your standard object then it's likely you'll want to include it in your reports. This is very simple, since all new fields are instantly available in the "[objectName] Extension" folder in existing subject areas. The following two-minute video demonstrates this.

Custom Subject Areas for Standard Objects

You can create your own subject areas for use in analytics and reporting via Application Composer. An example use case could be to simplify the seeded subject areas, since they sometimes contain complex data fields and internal values that could confuse business users. One thing to note is that you cannot create subject areas in a sandbox, as that is not supported by BI, so once your custom object is tested and complete you'll need to publish the sandbox before moving forward.

The subject area creation process is essentially two-fold. First, when the request is submitted, the ADF artifacts are generated; second, the related metadata is sent to the BI presentation server APIs to make the updates there. Note that this second step may take up to ten minutes to complete. Once finished, the status of the custom subject area request should show as 'OK' and it is then ready for use.

Within the wizard-like steps of the creation process there are three concepts worth highlighting:

- Date Flattening: permits the roll-up of reports at various date levels, such as data by week, month, quarter, or year. You simply check the box to enable it for that date field.
- Measures: your own functions that you can build into the custom subject area. They are related to the field data type and include min-max for dates, and sum(), avg(), and count() for numeric fields.
- Implicit Facts: used to make the BI metadata join between your object fields and the calculated measure fields. The advice is to choose the most frequently used measure to ensure consistency.

This video shows a simple example, where a simplified subject area is created for the customer 'Contact' standard object, picking just a few fields upon which users can then create reports.
Custom Objects

Custom subject areas support three types of custom objects. First is a simple standalone custom object, for which the same process mentioned above applies. The next is a custom child object created on a standard object parent, and finally a custom object that is related to a parent object, usually through a dynamic choice list. Whilst the steps in each of these last two are mostly the same, there are differences in the way you choose the objects and their fields. This is illustrated in the videos below.

The first video shows the process for creating a custom subject area for a simple standalone custom object. The second video demonstrates how to create custom subject areas for custom objects that are of parent:child type, as well as those with dynamic-choice-list relationships.

Flexfields

Dynamic and Extensible Flexfields satisfy a similar requirement to custom fields (for Application Composer), with flexfields common across the Fusion Financials, Supply Chain and Procurement, and HCM applications. The basic principle is that when you enable and configure your flexfields, the edit page under each segment region (for both global and context segments) has a BI Enabled check box. Once this is checked and you've completed your configuration, you run the Scheduled Process job named 'Import Oracle Fusion Data Extensions for Transactional Business Intelligence' to generate and migrate the related BI artifacts and data. This applies for dynamic, key, and extensible flexfields. Of course there is more to consider in terms of how you wish your flexfields to be implemented and exposed in your reports, and details are given in Chapter 4 of the Extending Applications guide.

    Read the article

  • Chart Filtering

    - by Tim Dexter
Interesting question from a colleague this week: can you add a filter to a chart to show just a specific set of data? In an RTF template, you need to do a little finagling in the chart definition. In an online template, a couple of clicks and you're done.

RTF

Build your chart as you would normally, to include all the data to start with. Now flip to the Advanced tab to see the code behind the chart. It's not very pretty, but with a little effort you can get it looking a little more friendly. Here's my chart showing employees and their salaries.

<Graph depthAngle="50" depthRadius="8" seriesEffect="SE_AUTO_GRADIENT">
 <LegendArea visible="true"/>
 <Title text="Executive Department Only" visible="true" horizontalAlignment="CENTER"/>
 <LocalGridData colCount="{count(.//G_2)}" rowCount="1">
  <RowLabels>
   <Label>SALARY</Label>
  </RowLabels>
  <ColLabels>
   <xsl:for-each select=".//G_2">
    <Label><xsl:value-of select="EMP_NAME"/></Label>
   </xsl:for-each>
  </ColLabels>
  <DataValues>
   <RowData>
    <xsl:for-each select=".//G_2">
     <Cell><xsl:value-of select="SALARY"/></Cell>
    </xsl:for-each>
   </RowData>
  </DataValues>
 </LocalGridData>
</Graph>

Note the select expressions: the count(.//G_2) and the two xsl:for-each selects. They are currently grabbing all values at the G_2 level of the data. We can use an XPath expression to filter the data down to the set we want to see. In my case I want to see only the employees that are in the Executive department. My data is structured thus:

<DATA_DS>
 <G_1>
  <DEPARTMENT_NAME>Accounting</DEPARTMENT_NAME>
  <G_2>
   <MANAGER>Higgins</MANAGER>
   <EMPLOYEE_ID>206</EMPLOYEE_ID>
   <HIRE_DATE>2002-06-07T00:00:00.000-04:00</HIRE_DATE>
   <SALARY>8300</SALARY>
   <JOB_TITLE>Public Accountant</JOB_TITLE>
   <PARAS>11000</PARAS>
   <EMP_NAME>William Gietz</EMP_NAME>
  </G_2>

So the XPath expression I'm going to use to limit the data to the Executive department would be:

.//G_2[../DEPARTMENT_NAME='Executive']

Note the ../ moves the parser up the XML tree to be able to test the DEPARTMENT_NAME value. I added this XPath expression to the three instances that need it: colCount, ColLabels and RowData. It's simple enough to do, and a fully assembled sketch appears at the end of this post. Testing your XPath expression is easier to do using a table of data.

Please note: as soon as you make changes to the chart code and go back to the Builder tab, you'll find that everything is grayed out. I recommend you make all the changes you can via the chart dialog before updating the code.

Online Template

Implementing the filter is much simpler; there is a dialog box to help you out. Add your chart and fill out the various data points you want to show, then hit the Filter item in the ribbon above the chart. That pops the filter dialog box, where you can add a filter to the chart.

You can add multiple filters if needed, and of course you can use the Manage Filters button to re-open and edit the filters. Pretty straightforward stuff!
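To make the three RTF edits concrete, here is a minimal sketch of what the finished chart definition might look like with the filter in place. It simply merges the XPath predicate above into the original Graph snippet at all three select points; the names G_2, DEPARTMENT_NAME, EMP_NAME and SALARY come from the sample data shown earlier, so substitute your own element names.

<Graph depthAngle="50" depthRadius="8" seriesEffect="SE_AUTO_GRADIENT">
 <LegendArea visible="true"/>
 <Title text="Executive Department Only" visible="true" horizontalAlignment="CENTER"/>
 <!-- colCount: count only the G_2 rows whose parent group is the Executive department -->
 <LocalGridData colCount="{count(.//G_2[../DEPARTMENT_NAME='Executive'])}" rowCount="1">
  <RowLabels>
   <Label>SALARY</Label>
  </RowLabels>
  <ColLabels>
   <!-- ColLabels: the same predicate keeps the labels in step with the data cells -->
   <xsl:for-each select=".//G_2[../DEPARTMENT_NAME='Executive']">
    <Label><xsl:value-of select="EMP_NAME"/></Label>
   </xsl:for-each>
  </ColLabels>
  <DataValues>
   <RowData>
    <!-- RowData: and once more on the values themselves -->
    <xsl:for-each select=".//G_2[../DEPARTMENT_NAME='Executive']">
     <Cell><xsl:value-of select="SALARY"/></Cell>
    </xsl:for-each>
   </RowData>
  </DataValues>
 </LocalGridData>
</Graph>

As a quick sanity check before editing the chart code, a plain repeating field in the RTF body, something like <?for-each:.//G_2[../DEPARTMENT_NAME='Executive']?><?EMP_NAME?>: <?SALARY?> <?end for-each?>, should print only the Executive rows if the predicate is right.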

    Read the article
