Search Results

Search found 1121 results on 45 pages for 'driving'.


  • CPU Lockup when loading folder of bookmarks in Firefox

    - by Gary M. Mugford
    I am running Firefox 3.6 on WinXP SP3 on a Core Duo machine with 4 GB of memory. I am also running Avast! free anti-virus and ZoneAlarm free firewall, both latest versions. Within the last month, my service provider basically forced me to upgrade to a DOCSIS 3.0 compliant modem (offered me a deal I couldn't turn down). At that point, I also upgraded to FF3.6. Basically, I am not unhappy with many aspects of this switchover, EXCEPT that when I now load a folder of bookmarks (anywhere from 10 to 38) I get nothing like the load times I experienced a couple of months back. It's taking minutes rather than seconds. And the first bookmark in one of the folders, GMail, rarely loads before timing out. I have used the old trick of powering off the cable modem before my day's work. This used to fix 'load-lag' in the old days. I have switched from my ISP's DNS server to Google, OpenDNS and back. And nothing seems to work currently. It's not my DNS cache. That's been flushed, and secondary computers also have the same issue when loading folders. I have watched the CPU usage, and loading the folder will send VSMon (ZoneAlarm) usage over 40 percent, AvastSvc (Avast!) over 30, and Firefox will then push the needle to 100. There's a brief burst by SVCHost when the others falter in devouring cycles. Then everything subsides to single digits once the last tab is loaded. The only other nominal nastiness is VSMon ALWAYS hitting 50 percent when ANY program starts downloading content from the internet. If I shut down ZoneAlarm (and VSMon with it), the same slow loading takes place, but this time System is running 50% plus, again driving the usage to 100 percent. I have my doubts that FF3.6 vs FF3.5 is an issue, since the other computers are still running 3.5 and suffer the same issue. Those computers are on, but inactive, most of the time, being backups. Obviously, when the CPU hits 100, I can't do much of anything in Firefox OR in other programs. Video play through WMPC or VLC is extremely choppy, although it doesn't seem to affect the audio. Any ideas what I can try next? Thanks, GM
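
    For reference, these are the kinds of Windows XP commands that usually sit behind the DNS steps described above (flushing the resolver cache and pointing the adapter at a public DNS server). This is only an illustrative sketch; the connection name "Local Area Connection" and the 8.8.8.8 address are examples and may differ on your machine:

        rem Flush and inspect the local DNS resolver cache
        ipconfig /flushdns
        ipconfig /displaydns
        rem Point the adapter at a public DNS server (connection name and address are examples)
        netsh interface ip set dns "Local Area Connection" static 8.8.8.8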

    Read the article

  • Strange permission errors with Windows Server 2008

    - by Spirit
    I just don't know a better way to describe my issue that is driving me nuts. I am trying to establish a test domain with virtual machines on a box that has Win7 with VMware Workstation installed. The purpose of this domain is so that we can try and test different situations before they go into the production network. I built a VM with WinSrv2008R2 and I am using that VM as a template to make other servers for the domain by making clones of it. Now I raise a DC with one clone and a member server with another clone - I add the server to the domain. I am following a standard procedure as always (it is not my first domain). Then I make an admin account and I add the admin as a member of the Domain Admins and Enterprise Admins groups. That admin has full privileges on the DC... no problem there. But on the other server that admin has... somewhat half the privileges, and I can't log in via RDP. I tried with another account. Same issues. For example (with half the privileges): I can't open the Event Viewer if I go via Start - Administrative Tools - Event Viewer. But I can open the Event Viewer via the Server Manager. You can see this in the image below. I mean WTF??? I am going crazy; I haven't experienced anything similar in my three years of experience. I have already lost 3 days troubleshooting this. Could this be related to the cloning? Perhaps if I make fresh installs of WinSrv2008 there won't be any problems? I've raised test domains as VMs on other occasions before, and there weren't any problems then. This is VMware Workstation 8. I've made clones before; on Workstation 7 there weren't any problems. Does anyone have any ideas? UPDATE: This is the info from the event log when I try to access via RDP: An account failed to log on. Subject: Security ID: NULL SID Account Name: - Account Domain: - Logon ID: 0x0 Logon Type: 3 Account For Which Logon Failed: Security ID: NULL SID Account Name: pat.coleman Account Domain: lab Failure Information: Failure Reason: Domain sid inconsistent. Status: 0xc000006d Sub Status: 0xc000019b
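
    The "Domain sid inconsistent" failure reason, combined with the fact that both servers were cloned from the same template VM, points at duplicate machine SIDs. A common remedy, sketched here as an assumption rather than a confirmed fix for this case, is to generalize each clone with Sysprep before joining it to the domain:

        rem Run inside the cloned VM, from an elevated prompt, before joining the domain
        rem /generalize resets the machine SID and other unique identifiers
        C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown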

    Read the article

  • Apache won't serve images larger than ~2K

    - by dtbaker
    Hello, Just upgraded an old box to Ubuntu to 10.04.2 LTS. Apache will not display images to a browser that are over about 2K. Small images seem to display fine. Static HTML and PHP continues to works fine as well. Installed: apache2 2.2.14-5ubuntu8.4 apache2-mpm-prefork 2.2.14-5ubuntu8.4 apache2-utils 2.2.14-5ubuntu8.4 apache2.2-bin 2.2.14-5ubuntu8.4 apache2.2-common 2.2.14-5ubuntu8.4 here is an ngrep of an image that doesn't display fine in the browser: T 192.168.0.4:32907 - 192.168.0.54:80 [AP] GET /path/path/logo.png HTTP/1.1..Host: 192.1 68.0.54..Connection: keep-alive..Accept: application/xml,application/xhtml+ xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5..User-Ag ent: Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/534.13 (KHTML, like Gecko) Chrome/9.0.597.98 Safari/534.13..Accept-Enco ding: gzip,deflate,sdch..Accept-Language: en-US,en;q=0.8..Accept- Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3.... T 192.168.0.54:80 - 192.168.0.4:32907 [A] HTTP/1.1 200 OK..Date: Wed, 09 Mar 2011 05:28:38 GMT..Server: Apa che/2.2.14 (Ubuntu)..Last-Modified: Tue, 05 Oct 2010 11:59:17 GMT ..ETag: "17b6f4-15fe-491dd63eb2f40"..Accept-Ranges: bytes..Conten t-Length: 5630..Keep-Alive: timeout=15, max=100..Connection: Keep -Alive..Content-Type: image/png.....PNG........IHDR...!...v...... .%.....sRGB.........bKGD..............pHYs.................tIME.. etc... This looks ok to me! I have tried firefox and chrome, both display small images fine but when a large image is requested the browser prompts to download the file. When the image file is saved to the local computer it is corrupt, it also takes a long time to save which makes me think the browser cannot see the content-length header sent from apache. Also when I look at the saved image file it includes the headers from apache, along with a bit of garbage at the top, like so: vi logo.png: ^@^UÅd^@$^]V^S^H^@E^@^Q,n!@^@@^F^@^@À¨^@6À¨^@^D^@P^Y¬rÇŹéw^P^@Ú^@^@^A^A^H ^@^GÝ^]^@pbSHTTP/1.1 200 OK^M Date: Wed, 09 Mar 2011 04:47:04 GMT^M Server: Apache/2.2.14 (Ubuntu)^M Last-Modified: Tue, 05 Oct 2010 11:59:17 GMT^M ETag: "17b6ff-157c-491dd63eb2f40"^M Accept-Ranges: bytes^M Content-Length: 5500^M Keep-Alive: timeout=15, max=94^M Connection: Keep-Alive^M Content-Type: image/png^M ^M PNG^M etc... Any ideas? It's driving me nuts. There is nothing in apache error logs, and permissions are fine (because the image data is there, it's just somewhat corrupt). There's no proxy or iptables on this ubuntu box either. Thanks heaps!! Dave ps: just tried on IE from a different computer, same problem :( pps: rebooted server, no help.
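
    One thing worth testing, purely as a guess: symptoms like this (small responses arriving intact, larger responses corrupted with packet headers mixed into the saved file) are sometimes caused by NIC checksum/segmentation offload problems rather than by Apache itself. A hedged sketch, assuming the server's interface is eth0:

        # Show the current offload settings for the interface
        sudo ethtool -k eth0
        # Temporarily disable checksum and segmentation offloads, then retest in the browser
        sudo ethtool -K eth0 tx off rx off tso off gso off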

    Read the article

  • http://localhost/ always gives 502 unknown host

    - by Nitesh Panchal
    The World Wide Web Publishing Service started successfully, but whenever I browse to http://localhost/ I always get 502 Unknown host. Also, wampapache is installed side by side, but when I stop the IIS service and start wampapache from services.msc I get an error, and when I view it in the System event log I see this: <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="Service Control Manager" Guid="{555908d1-a6d7-4695-8e1e-26931d2012f4}" EventSourceName="Service Control Manager" /> <EventID Qualifiers="49152">7024</EventID> <Version>0</Version> <Level>2</Level> <Task>0</Task> <Opcode>0</Opcode> <Keywords>0x8080000000000000</Keywords> <TimeCreated SystemTime="2011-06-12T17:43:28.223498400Z" /> <EventRecordID>346799</EventRecordID> <Correlation /> <Execution ProcessID="456" ThreadID="3936" /> <Channel>System</Channel> <Computer>MACHINENAME</Computer> <Security /> </System> <EventData> <Data Name="param1">wampapache</Data> <Data Name="param2">%%1</Data> </EventData> </Event> I am fed up with this error and it is driving me nuts. I feel like banging my head against the laptop. I am really serious. Instead of concentrating on my real application, I have been trying to solve this issue for 3 hours. I googled various threads and a few of them said that the issue could be Reporting Services or Skype. But I have uninstalled Skype and Reporting Services are disabled. What more should I do? I have a hosts file present in the etc directory and it does have a mapping from localhost to 127.0.0.1. What more could I do?
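
    Since IIS and wampapache both want port 80, a first check is usually to see what is actually bound to it. A rough sketch of the commands typically used on Windows (the PID 1234 below is just an example to substitute with whatever netstat reports):

        rem See which process is listening on port 80 (the PID is the last column)
        netstat -ano | findstr :80
        rem Map an example PID back to a process name
        tasklist /FI "PID eq 1234"
        rem On Vista/7, http.sys can also hold port 80 on behalf of IIS; list its registrations
        netsh http show servicestate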

    Read the article

  • libreadline history lines combine

    - by jettero
    This has been driving me crazy for about three years. I don't know how to fully describe the problem, but I think I can finally describe a way to recreate it. Your mileage may vary. I have a mixture of Ubuntu server and desktop machines of various versions and a few Gentoo machines with various states of disrepair. They all seem to kind of do their own thing, although with similarities. Try this and let me know if you see the same thing: 1) pop open two xterms (TERM=xterm) 2) resize one so they're not the same 3) issue screen -R test1 in one (TERM=screen) and screen -x test1 in the other 4) hooray, typing in one shows up in the other; although notice that their different size produces artifacts and things 5) issue a couple commands in your shell 6) hit ^AF in the one that doesn't fit quite right, now it fits!! 7) scroll back over the history a little 8) goto 6. Eventually you'll notice a couple of history lines combine. If you don't, then it's something unique to my setup, which spans various distributions and computers; so that's a confusing concept to me. If you see the thing I'm seeing then this: bash$ ls -al bash$ ps auxfw becomes this: bash$ ls -al; ps auxfw It doesn't happen every time. I have to really play with it — unless I don't want it to happen, then it always does. On some systems (or combinations), I get a line separator like the example above. On some systems, I do not. That I get the line separator on some systems seems to indicate to me that bash supports this behavior. Its history is entirely handled by libreadline and after perusing (i.e., carefully reading) the man pages, I couldn't find a single readline setting for combining two history lines. Nor can I find anything in the bash manpage. So, how can I invoke this on purpose? Or, if I can't do that, how can I disable it completely? I would take either answer as a solution. Currently, I only see it when I don't want it.
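
    For what it's worth, bash does expose shell options that govern whether multi-line commands are collapsed into a single history entry, which at least resembles the combining described above. A sketch of the knobs to experiment with (no claim that they are the cause here):

        # Show the current state of bash's multi-line history options
        shopt cmdhist lithist
        # cmdhist saves a multi-line command as one history entry, joined with semicolons;
        # lithist (together with cmdhist) keeps the embedded newlines instead
        shopt -s cmdhist lithist
        # To try the opposite behaviour, switch them off
        shopt -u cmdhist lithist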

    Read the article

  • Computer freezes +/- 30 seconds, suspicion on SSD

    - by Robert vE
    My computer freezes for about 30 seconds; this happens occasionally. When it happens I can still move the mouse, sometimes even alternate between tabs in Google Chrome. If I try to open Windows Explorer nothing happens. Chrome also reports "waiting for cache". It also happens in StarCraft II, during which the sound loops. I have made a trace as this topic describes: How do I troubleshoot a Windows 7 freeze or slowness? Trace: https://docs.google.com/open?id=0B_VkKdh535p6NklhSDdBLURUMnc I have looked at it, but I couldn't figure it out. My system specs are: AMD Athlon X4 651, Asus Ati HD6670, ADATA SSD sp900, Asus f1a55 mainboard, 4 GB Crucial 1333 RAM, 500 watt ATX PS. I'm running Windows 7, fully updated. Any help is much appreciated. Update: I tried something before your reply that may have helped the problem. I don't know for sure if it has; it's too soon to tell. A bit of history first. I had problems installing Win7 on my SSD from the start. In IDE mode it worked, but I had the same problems as above. AHCI was a total fail, with it on before install as well as turning it on after install (including tweaking the registry). I didn't bother installing the AMD chipset/AHCI driver as it was reported to have no TRIM function and thus make problems worse. Eventually I did install the AMD SMBus driver as the stability issues were driving me crazy. It worked, no more issues, until I installed some extra drivers and software. Audio/LAN/ASUS suite, I don't see the relation, but somehow it screwed up my system again. As a last effort I posted here on this site. After that, the thought occurred to me to turn on AHCI again, as by now I had all necessary drivers installed anyway (plus all Windows updates downloaded/installed in the meantime). I did, and stability didn't seem great the first few reboots, but eventually everything seemed to work great. I tried to play StarCraft II – an almost guaranteed freeze before – and I had no problems. I'm basically crossing my fingers and hope the problem is gone for good. I still think it has something to do with my SSD. In my research into the problem I noticed a lot of these issues with SandForce 2281 firmware, the exact same firmware I have. People reported the same problem that I had, freezes. Additionally they reported that during a freeze the HDD light stayed on; I noticed after I read this that this happened with my computer as well. None of this is conclusive evidence that my SSD is really at fault, but it is suspicious. And why turning on AHCI would fix it I don't know. Thank you Tom for taking a look; if the problem returns I will certainly do what you advised.
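
    For anyone following the same path: switching a Windows 7 install from IDE to AHCI normally involves enabling the built-in AHCI driver in the registry before changing the BIOS setting, which is presumably the registry tweak mentioned above. A sketch of the documented change:

        rem Run from an elevated command prompt, then reboot and switch the SATA mode in the BIOS
        reg add HKLM\SYSTEM\CurrentControlSet\services\msahci /v Start /t REG_DWORD /d 0 /f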

    Read the article

  • Can only bring up one of two interfaces

    - by mstaessen
    I'm having a bizarre issue with my HP Proliant DL 360 G4p server. It has two gigabit ethernet interfaces but I can bring up only one of them. This is starting to freak me out and that's why I turned here. I'm running the x64 ubuntu 11.10 server edition. lshw -c network shows that the second interface is disabled. I have no idea why ans how to enable it. $ sudo lshw -c network *-network:0 description: Ethernet interface product: NetXtreme BCM5704 Gigabit Ethernet vendor: Broadcom Corporation physical id: 2 bus info: pci@0000:02:02.0 logical name: eth0 version: 10 serial: 00:18:71:e3:6d:26 size: 100Mbit/s capacity: 1Gbit/s width: 64 bits clock: 66MHz capabilities: pcix pm vpd msi bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.119 duplex=full firmware=5704-v3.27b, ASFIPMIc v2.36 ip=10.48.8.x latency=64 link=yes mingnt=64 multicast=yes port=twisted pair speed=100Mbit/s resources: irq:25 memory:fdf70000-fdf7ffff *-network:1 DISABLED description: Ethernet interface product: NetXtreme BCM5704 Gigabit Ethernet vendor: Broadcom Corporation physical id: 2.1 bus info: pci@0000:02:02.1 logical name: eth1 version: 10 serial: 00:18:71:e3:6d:25 capacity: 1Gbit/s width: 64 bits clock: 66MHz capabilities: pcix pm vpd msi bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.119 firmware=5704-v3.27b latency=64 link=no mingnt=64 multicast=yes port=twisted pair resources: irq:26 memory:fdf60000-fdf6ffff If I try to ifup eth1, then I get $ sudo ifup eth1 Ignoring unknown interface eth1=eth1. I figured that's what happens when there is no eth1 listed in /etc/network/interfaces. But when I add the configuration for eth1, I still can't ifup. $ sudo ifup eth1 RTNETLINK answers: File exists Failed to bring up eth1. I've also tried ifconfig eth1 up but without any result. For clarity, I have added a masked version of /etc/network/interfaces. I don't think it is the cause of the problem though. $ cat /etc/network/interfaces # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). # The loopback network interface auto lo iface lo inet loopback # The primary network interface auto eth0 iface eth0 inet static address 10.48.8.x netmask 255.255.255.y network 10.48.8.z broadcast 10.48.8.t gateway 10.48.8.u auto eth1 iface eth1 inet static address 193.190.253.x netmask 255.255.255.y network 193.190.253.z broadcast 193.190.253.t gateway 193.190.253.u I really need some help fixing this. It's driving me crazy. Thanks.
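
    "RTNETLINK answers: File exists" on ifup often means the interface already carries a conflicting address or route, for example a second default gateway when eth0 has already installed one. A hedged sketch of how this is commonly investigated, assuming eth1 is the interface in question:

        # Clear any half-configured state on eth1, then try again
        sudo ip addr flush dev eth1
        sudo ip route flush dev eth1
        sudo ifup eth1
        # Check for duplicate default routes; /etc/network/interfaces normally declares
        # only one "gateway" line (eth0 already sets one here)
        ip route show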

    Read the article

  • New Product: Oracle Java ME Embedded 3.2 – Small, Smart, Connected

    - by terrencebarr
    The Internet of Things (IoT) is coming. And, with todays launch of the Oracle Java ME Embedded 3.2 product, Java is going to play an even greater role in it. Java in the Internet of Things By all accounts, intelligent embedded devices are penetrating the world around us – driving industrial processes, monitoring environmental conditions, providing better health care, analyzing and processing data, and much more. And these devices are becoming increasingly connected, adding another dimension of utility. Welcome to the Internet of Things. As I blogged yesterday, this is a huge opportunity for the Java technology and ecosystem. To enable and utilize these billions of devices effectively you need a programming model, tools, and protocols which provide a feature-rich, consistent, scalable, manageable, and interoperable platform.  Java technology is ideally suited to address these technical and business problems, enabling you eliminate many of the typical challenges in designing embedded solutions. By using Java you can focus on building smarter, more valuable embedded solutions faster. To wit, Java technology is already powering around 10 billion devices worldwide. Delivering on this vision and accelerating the growth of embedded Java solutions, Oracle is today announcing a brand-new product: Oracle Java Micro Edition (ME) Embedded 3.2, accompanied by an update release of the Java ME Software Development Kit (SDK) to version 3.2. What is Oracle Java ME Embedded 3.2? Oracle Java ME Embedded 3.2 is a complete Java runtime client, optimized for ARM architecture connected microcontrollers and other resource-constrained systems. The product provides dedicated embedded functionality and is targeted for low-power, limited memory devices requiring support for a range of network services and I/O interfaces.  What features and APIs are provided by Oracle Java ME Embedded 3.2? Oracle Java ME Embedded 3.2 is a Java ME runtime based on CLDC 1.1 (JSR-139) and IMP-NG (JSR-228). The runtime and virtual machine (VM) are highly optimized for embedded use. Also included in the product are the following optional JSRs and Oracle APIs: File I/O API’s (JSR-75)  Wireless Messaging API’s (JSR-120) Web Services (JSR-172) Security and Trust Services subset (JSR-177) Location API’s (JSR-179) XML API’s (JSR-280)  Device Access API Application Management System (AMS) API AccessPoint API Logging API Additional embedded features are: Remote application management system Support for continuous 24×7 operation Application monitoring, auto-start, and system recovery Application access to peripheral interfaces such as GPIO, I2C, SPIO, memory mapped I/O Application level logging framework, including option for remote logging Headless on-device debugging – source level Java application debugging over IP Connection Remote configuration of the Java VM What type of platforms are targeted by Oracle Java ME 3.2 Embedded? The product is designed for embedded, always-on, resource-constrained, headless (no graphics/no UI), connected (wired or wireless) devices with a variety of peripheral I/O.  
The high-level system requirements are as follows: System based on ARM architecture SOCs Memory footprint (approximate) from 130 KB RAM/350KB ROM (for a minimal, customized configuration) to 700 KB RAM/1500 KB ROM (for the full, standard configuration)  Very simple embedded kernel, or a more capable embedded OS/RTOS At least one type of network connection (wired or wireless) The initial release of the product is delivered as a device emulation environment for x86/Windows desktop computers, integrated with the Java ME SDK 3.2. A standard binary of Oracle Java ME Embedded 3.2 for ARM KEIL development boards based on ARM Cortex M-3/4 (KEIL MCBSTM32F200 using ST Micro SOC STM32F207IG) will soon be available for download from the Oracle Technology Network (OTN).  What types of applications can I develop with Oracle Java ME Embedded 3.2? The Oracle Java ME Embedded 3.2 product is a full-featured embedded Java runtime supporting applications based on the IMP-NG application model, which is derived from the well-known MIDP 2 application model. The runtime supports execution of multiple concurrent applications, remote application management, versatile connectivity, and a rich set of APIs and features relevant for embedded use cases, including the ability to interact with peripheral I/O directly from Java applications. This rich feature set, coupled with familiar and best-in class software development tools, allows developers to quickly build and deploy sophisticated embedded solutions for a wide range of use cases. Target markets well supported by Oracle Java ME Embedded 3.2 include wireless modules for M2M, industrial and building control, smart grid infrastructure, home automation, and environmental sensors and tracking. What tools are available for embedded application development for Oracle Java ME Embedded 3.2? Along with the release of Oracle Java ME Embedded 3.2, Oracle is also making available an updated version of the Java ME Software Development Kit (SDK), together with plug-ins for the NetBeans and Eclipse IDEs, to deliver a complete development environment for embedded application development.  OK – sounds great! Where can I find out more? And how do I get started? There is a complete set of information, data sheet, API documentation, “Getting Started Guide”, FAQ, and download links available: For an overview of Oracle Embeddable Java, see here. For the Oracle Java ME Embedded 3.2 press release, see here. For the Oracle Java ME Embedded 3.2 data sheet, see here. For the Oracle Java ME Embedded 3.2 landing page, see here. For the Oracle Java ME Embedded 3.2 documentation page, including a “Getting Started Guide” and FAQ, see here. For the Oracle Java ME SDK 3.2 landing and download page, see here. Finally, to ask more questions, please see the OTN “Java ME Embedded” forum To get started, grab the “Getting Started Guide” and download the Java ME SDK 3.2, which includes the Oracle Java ME Embedded 3.2 device emulation.  Can I learn more about Oracle Java ME Embedded 3.2 at JavaOne and/or Java Embedded @ JavaOne? Glad you asked Both conferences, JavaOne and Java Embedded @ JavaOne, will feature a host of content and information around the new Oracle Java ME Embedded 3.2 product, from technical and business sessions, to hands-on tutorials, and demos. Stay tuned, I will post details shortly. Cheers, – Terrence Filed under: Mobile & Embedded Tagged: "Oracle Java ME Embedded", Connected, embedded, Embedded Java, Java Embedded @ JavaOne, JavaOne, Smart

    Read the article

  • Linux-Containers — Part 1: Overview

    - by Lenz Grimmer
    "Containers" by Jean-Pierre Martineau (CC BY-NC-SA 2.0). Linux Containers (LXC) provide a means to isolate individual services or applications as well as of a complete Linux operating system from other services running on the same host. To accomplish this, each container gets its own directory structure, network devices, IP addresses and process table. The processes running in other containers or the host system are not visible from inside a container. Additionally, Linux Containers allow for fine granular control of resources like RAM, CPU or disk I/O. Generally speaking, Linux Containers use a completely different approach than "classicial" virtualization technologies like KVM or Xen (on which Oracle VM Server for x86 is based on). An application running inside a container will be executed directly on the operating system kernel of the host system, shielded from all other running processes in a sandbox-like environment. This allows a very direct and fair distribution of CPU and I/O-resources. Linux containers can offer the best possible performance and several possibilities for managing and sharing the resources available. Similar to Containers (or Zones) on Oracle Solaris or FreeBSD jails, the same kernel version runs on the host as well as in the containers; it is not possible to run different Linux kernel versions or other operating systems like Microsoft Windows or Oracle Solaris for x86 inside a container. However, it is possible to run different Linux distribution versions (e.g. Fedora Linux in a container on top of an Oracle Linux host), provided it supports the version of the Linux kernel that runs on the host. This approach has one caveat, though - if any of the containers causes a kernel crash, it will bring down all other containers (and the host system) as well. For example, Oracle's Unbreakable Enterprise Kernel Release 2 (2.6.39) is supported for both Oracle Linux 5 and 6. This makes it possible to run Oracle Linux 5 and 6 container instances on top of an Oracle Linux 6 system. Since Linux Containers are fully implemented on the OS level (the Linux kernel), they can be easily combined with other virtualization technologies. It's certainly possible to set up Linux containers within a virtualized Linux instance that runs inside Oracle VM Server for Oracle VM Virtualbox. Some use cases for Linux Containers include: Consolidation of multiple separate Linux systems on one server: instances of Linux systems that are not performance-critical or only see sporadic use (e.g. a fax or print server or intranet services) do not necessarily need a dedicated server for their operations. These can easily be consolidated to run inside containers on a single server, to preserve energy and rack space. Running multiple instances of an application in parallel, e.g. for different users or customers. Each user receives his "own" application instance, with a defined level of service/performance. This prevents that one user's application could hog the entire system and ensures, that each user only has access to his own data set. It also helps to save main memory — if multiple instances of a same process are running, the Linux kernel can share memory pages that are identical and unchanged across all application instances. This also applies to shared libraries that applications may use, they are generally held in memory once and mapped to multiple processes. 
Quickly creating sandbox environments for development and testing purposes: containers that have been created and configured once can be archived as templates and can be duplicated (cloned) instantly on demand. After finishing the activity, the clone can safely be discarded. This allows to provide repeatable software builds and test environments, because the system will always be reset to its initial state for each run. Linux Containers also boot significantly faster than "classic" virtual machines, which can save a lot of time when running frequent build or test runs on applications. Safe execution of an individual application: if an application running inside a container has been compromised because of a security vulnerability, the host system and other containers remain unaffected. The potential damage can be minimized, analyzed and resolved directly from the host system. Note: Linux Containers on Oracle Linux 6 with the Unbreakable Enterprise Kernel Release 2 (2.6.39) are still marked as Technology Preview - their use is only recommended for testing and evaluation purposes. The Open-Source project "Linux Containers" (LXC) is driving the development of the technology behind this, which is based on the "Control Groups" (CGroups) and "Name Spaces" functionality of the Linux kernel. Oracle is actively involved in the Linux Containers development and contributes patches to the upstream LXC code base. Control Groups provide means to manage and monitor the allocation of resources for individual processes or process groups. Among other things, you can restrict the maximum amount of memory, CPU cycles as well as the disk and network throughput (in MB/s or IOP/s) that are available for an application. Name Spaces help to isolate process groups from each other, e.g. the visibility of other running processes or the exclusive access to a network device. It's also possible to restrict a process group's access and visibility of the entire file system hierarchy (similar to a classic "chroot" environment). CGroups and Name Spaces provide the foundation on which Linux containers are based on, but they can actually be used independently as well. A more detailed description of how Linux Containers can be created and managed on Oracle Linux will be explained in the second part of this article. Additional links related to Linux Containers: OTN Article: The Role of Oracle Solaris Zones and Linux Containers in a Virtualization Strategy Linux Containers on Wikipedia - Lenz Grimmer Follow me on: Personal Blog | Facebook | Twitter | Linux Blog |
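
    As a rough idea of what creating and managing a container looks like with the LXC userspace tools, here is a sketch; the container name and template below are examples, and the exact packages and template names differ between distributions:

        sudo lxc-create -n web1 -t oracle   # create a container named "web1" from a template
        sudo lxc-start -n web1              # boot the container
        sudo lxc-console -n web1            # attach to its console
        lxc-ls                              # list existing containers
        sudo lxc-stop -n web1               # shut the container down
        sudo lxc-destroy -n web1            # remove it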

    Read the article

  • Diagnose PC Hardware Problems with an Ubuntu Live CD

    - by Trevor Bekolay
    So your PC randomly shuts down or gives you the blue screen of death, but you can’t figure out what’s wrong. The problem could be bad memory or hardware related, and thankfully the Ubuntu Live CD has some tools to help you figure it out. Test your RAM with memtest86+ RAM problems are difficult to diagnose—they can range from annoying program crashes, or crippling reboot loops. Even if you’re not having problems, when you install new RAM it’s a good idea to thoroughly test it. The Ubuntu Live CD includes a tool called Memtest86+ that will do just that—test your computer’s RAM! Unlike many of the Live CD tools that we’ve looked at so far, Memtest86+ has to be run outside of a graphical Ubuntu session. Fortunately, it only takes a few keystrokes. Note: If you used UNetbootin to create an Ubuntu flash drive, then memtest86+ will not be available. We recommend using the Universal USB Installer from Pendrivelinux instead (persistence is possible with Universal USB Installer, but not mandatory). Boot up your computer with a Ubuntu Live CD or USB drive. You will be greeted with this screen: Use the down arrow key to select the Test memory option and hit Enter. Memtest86+ will immediately start testing your RAM. If you suspect that a certain part of memory is the problem, you can select certain portions of memory by pressing “c” and changing that option. You can also select specific tests to run. However, the default settings of Memtest86+ will exhaustively test your memory, so we recommend leaving the settings alone. Memtest86+ will run a variety of tests that can take some time to complete, so start it running before you go to bed to give it adequate time. Test your CPU with cpuburn Random shutdowns – especially when doing computationally intensive tasks – can be a sign of a faulty CPU, power supply, or cooling system. A utility called cpuburn can help you determine if one of these pieces of hardware is the problem. Note: cpuburn is designed to stress test your computer – it will run it fast and cause the CPU to heat up, which may exacerbate small problems that otherwise would be minor. It is a powerful diagnostic tool, but should be used with caution. Boot up your computer with a Ubuntu Live CD or USB drive, and choose to run Ubuntu from the CD or USB drive. When the desktop environment loads up, open the Synaptic Package Manager by clicking on the System menu in the top-left of the screen, then selecting Administration, and then Synaptic Package Manager. Cpuburn is in the universe repository. To enable the universe repository, click on Settings in the menu at the top, and then Repositories. Add a checkmark in the box labeled “Community-maintained Open Source software (universe)”. Click close. In the main Synaptic window, click the Reload button. After the package list has reloaded and the search index has been rebuilt, enter “cpuburn” in the Quick search text box. Click the checkbox in the left column, and select Mark for Installation. Click the Apply button near the top of the window. As cpuburn installs, it will caution you about the possible dangers of its use. Assuming you wish to take the risk (and if your computer is randomly restarting constantly, it’s probably worth it), open a terminal window by clicking on the Applications menu in the top-left of the screen and then selection Applications > Terminal. Cpuburn includes a number of tools to test different types of CPUs. 
If your CPU is more than six years old, see the full list; for modern AMD CPUs, use the terminal command burnK7, and for modern Intel processors, use the terminal command burnP6. Our processor is an Intel, so we ran burnP6. Once it started up, it immediately pushed the CPU up to 99.7% total usage, according to the Linux utility "top". If your computer is having a CPU, power supply, or cooling problem, then your computer is likely to shutdown within ten or fifteen minutes. Because of the strain this program puts on your computer, we don't recommend leaving it running overnight – if there's a problem, it should crop up relatively quickly. Cpuburn's tools, including burnP6, have no interface; once they start running, they will start driving your CPU until you stop them. To stop a program like burnP6, press Ctrl+C in the terminal window that is running the program. Conclusion The Ubuntu Live CD provides two great testing tools to diagnose a tricky computer problem, or to stress test a new computer. While they are advanced tools that should be used with caution, they're extremely useful and easy enough that anyone can use them.
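
    For readers who prefer the command line to Synaptic, a sketch of roughly the same steps in a terminal (this assumes the universe component has already been enabled, for example via Software Sources or /etc/apt/sources.list):

        sudo apt-get update
        sudo apt-get install cpuburn
        burnP6 &        # start the Intel stress test in the background
        top             # watch CPU usage; press q to quit
        kill %1         # stop burnP6 (or press Ctrl+C if it was started in the foreground)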

    Read the article

  • Interview with Tomas Ulin at the MySQL Innovation Day

    - by Monica Kumar
    MySQL Innovation Day held on June 5, 2012 was a great event for the MySQL engineers, users and customers to gather, share and network. I was able to get a few minutes with Tomas Ulin, Vice President of MySQL Engineering at Oracle, to ask him some questions. Here are the highlights of my interview with Tomas. Monica: This was the first MySQL Innovation Day, correct?  Why now, what was the strategy behind hosting this kind of event? Tomas: In the last year, we have rolled out an incredible number of MySQL events worldwide – some targeted at developers that are new to MySQL and others for the MySQL savvy. At the MySQL Innovation Day, our first event of this kind,, we had a number of our key engineers presenting lightning talks delivering previews of key new features as well as discussing roadmap. Our goal is to keep an open dialogue with the MySQL community. In fact, we are hosting a two-day conference, another first, for the MySQL community called MySQL Connect on Sept. 29-30 in San Francisco. If you attended the MySQL Innovation Day and liked what we did, you are going to love MySQL Connect. We’ll have a lot more of our engineers and many users and community members presenting hour long sessions and hands on labs. Our engineers will be presenting new MySQL features as well offer previews of upcoming enhancements. Monica: What's the big take-away from today's MySQL Innovation Day? Tomas: I hope the most important takeaway for attendees was to see that Oracle has been driving, and continues to drive MySQL innovation with a steady stream of new great GA and Development Milestone releases. Monica: What were attendees most interested in? What feedback did they have? Tomas: Feedback from attendees was incredibly positive and encouraging. In particular, they liked the interaction with the MySQL engineers and were also excited about the new early access features in MySQL 5.6 and MySQL Cluster 7.3. In addition, sessions delivered by MySQL users like Facebook, Pinterest and Twitter were very well received. For example, Pinterest talked about using MySQL to scale from 0 to billions of page views/month, Twitter talked about “Scaling twitter with MySQL” and Facebook discussed the many options to implement MySQL master failover solutions. The presentations are already available for download while some of the session videos will be made available on the MySQL Innovation Day web page shortly. Monica: How would you distinguish the use of MySQL vs. Oracle Database? What key factors should customers consider? Tomas: MySQL and Oracle Database complement each other. They are very different products, best suited to different use cases. Customers can choose world-class solutions from Oracle to fulfill a variety of needs. MySQL is a great choice for enterprise web-based, custom and embedded apps. Oracle Database is the leading choice for enterprise packaged applications such as ERP, CRM as well as high-end data warehousing and business intelligence applications. Monica: What are the highlights of the current MySQL 5.6 Development Milestone Release and early access features for MySQL Cluster 7.3? Tomas: MySQL 5.6 development milestone release builds on MySQL 5.5 by improving: Optimizer for better Performance, Scalability Performance Schema for better instrumentation InnoDB for better transactional throughput Replication for higher availability, data integrity NoSQL options for more flexibility We announced some new early access features in MySQL 5.6, including binary log group commit. 
We also announced early access features in MySQL Cluster 7.3, including support for foreign key constraints. Monica: How do people get these releases? Tomas: You can access development milestone releases by going to http://dev.mysql.com/downloads/mysql and then selecting the "Development Release" tab. The MySQL Cluster 7.3 and other early access features can be downloaded at: http://labs.mysql.com Monica: What's coming up next for MySQL? Tomas: Our development team is working in overdrive, cranking out new features with community feedback. Don't miss the MySQL Connect conference being held in San Francisco on Sept. 29 and 30th. My team and I will be there. I hope you can join us! Monica: Thank you for your time, Tomas. I look forward to seeing you at the MySQL Connect conference. To our followers, I hope you found this interview informative. I welcome your comments. Please stay tuned here for more updates on MySQL. Note: Monica Kumar is Senior Director of product marketing for Linux, Virtualization and MySQL at Oracle.

    Read the article

  • What’s Your Tax Strategy? Automate the Tax Transfer Pricing Process!

    - by tobyehatch
    Does your business operate in multiple countries? Well, whether you like it or not, many local and international tax authorities inspect your tax strategy.  Legal, effective tax planning is perceived as a “moral” issue. CEOs are being asked to testify on their process of tax transfer pricing between multinational legal entities.  Marc Seewald, Senior Director of Product Management for EPM Applications specializing in all tax subjects and Product Manager for Oracle Hyperion Tax Provisioning, and Bart Stoehr, Senior Director of Product Strategy for Oracle Hyperion Profitability and Cost Management joined me for a discussion/podcast on this interesting subject.  So what exactly is “tax transfer pricing”? Marc defined it this way. “Tax transfer pricing is a profit allocation methodology required to be used by multinational corporations. Specifically, the ultimate goal of the transfer pricing is to ensure that the global multinational pays their fair share of income tax in each of their local markets. Specifically, it prevents companies from unfairly moving profit from ‘high tax’ countries to ‘low tax’ countries.” According to Marc, in today’s global economy, profitability can be significantly impacted by goods and services exchanged between the related divisions within a single multinational company.  To ensure that these cost allocations are done fairly, there are rules that govern the process. These rules ensure that intercompany allocations fairly represent the actual nature of the businesses activity- as if two divisions were unrelated - and provide a clear audit trail of how the costs have been allocated to prove that allocations fall within reasonable ranges.  What are the repercussions of improper tax transfer pricing? How important is it? Tax transfer pricing allocations can materially impact the amount of overall corporate income taxes paid by a company worldwide, in some cases by hundreds of millions of dollars!  Since so much tax revenue is at stake, revenue agencies like the IRS, and international regulatory bodies like the Organization for Economic Cooperation and Development (OECD) are pushing to reform and clarify reporting for tax transfer pricing. Most recently the OECD announced an “Action Plan for Base Erosion and Profit Shifting”. As Marc explained, the times are changing and companies need to be responsive to this issue. “It feels like every other week there is another company being accused of avoiding taxes,” said Marc. Most recently, Caterpillar was accused of avoiding billions of dollars in taxes. In the last couple of years, Apple, GE, Ikea, and Starbucks, have all been accused of tax avoidance. It’s imperative that companies like these have a clear and auditable tax transfer process that enables them to justify tax transfer pricing allocations and avoid steep penalties and bad publicity. Transparency and efficiency are what is needed when it comes to the tax transfer pricing process. Bart explained that tax transfer pricing is driving a deeper inspection of profit recognition specifically focused on the tax element of profit.  However, allocations needed to support tax profitability are nearly identical in process to allocations taking place in other parts of the finance organization. For example, the methods and processes necessary to arrive at tax profitability by legal entity are no different than those used to arrive at fully loaded profitability for a product line. 
In fact, there is a great opportunity for alignment across these two different functions.So it seems that tax transfer pricing should be reflected in profitability in general. Bart agreed and told us more about some of the critical sub-processes of an overall tax transfer pricing process within the Oracle solution for tax transfer pricing.  “First, there is a ton of data preparation, enrichment and pre-allocation data analysis that is managed in the Oracle Hyperion solution. This serves as the “data staging” to the next, critical sub-processes.  From here, we leverage the Oracle EPM platform’s ability to re-use dimensions and legal entity driver data and financial data with Oracle Hyperion Profitability and Cost Management (HPCM).  Within HPCM, we manage the driver data, define the legal entity to legal entity allocation rules (like cost plus), and have the option to test out multiple, simultaneous tax transfer pricing what-if scenarios.  Once processed, a tax expert can evaluate the effectiveness of any one scenario result versus another via a variance analysis configured with HPCM’s pre-packaged reporting capability known as Oracle Hyperion SmartView for Office.”   Further, Bart explained that the ability to visibly demonstrate how a cost or revenue has been allocated is really helpful and auditable.  “HPCM’s Traceability Maps are that visual representation of all allocation flows that have been executed and is the tax transfer analyst’s best friend in maintaining clear documentation for tax transfer pricing audits. Simply click and drill as you inspect the chain of allocation definitions and results. Once final, the post-allocated tax data can be compared to the GL to create invoices and journal entries for posting to your GL system of choice.  Of course, there is a framework for overall governance of the journal entries, allocation percentages, and reporting to include necessary approvals.” Lastly, Marc explained that the key value in using the Oracle Hyperion solution for tax transfer pricing is that it keeps everything in alignment in one single place. Specifically, Oracle Hyperion effectively becomes the single book of record for the GAAP, management, and the tax set of books. There are many benefits to having one source of the truth. These include EFFICIENCY, CONTROLS and TRANSPARENCY.So, what’s your tax strategy? Why not automate the tax transfer pricing process!To listen to the entire podcast, click here.To learn more about Oracle Hyperion Profitability and Cost Management (HPCM), click here.

    Read the article

  • On Life and TechEd&hellip;

    - by MOSSLover
    I haven’t been writing here I know.  I am very sorry, but I am just too busy trying to make my personal life just as good as my SharePoint life.  So I was at TechEd again helping with the hands on labs, but I had a very long couple of weeks.  I will have a long week again.  I started out going to my friend, Randy Walker’s, wedding and I ended up at my grandmother’s condo in Fort Lauderdale.  It was a very trying week.  I had to drive 5 1/2 hours to the wedding from St. Louis and back.  So Randy is an awesome person.  He is a great guy and he was the first person ever to nominate me for MVP.  I knew it probably wasn’t going anywhere, but it was the thought that count.  I met Randy in 2008 at Tulsa Techfest.  We had fun jamming out to Rockband.  I knew he was good people back then.  He has let me vent and I have let him vent over countless personal problems.  He has always been a great friend.  So it was a no brainer when I decided to go to his wedding no matter how much driving or stress or lack of sleep it was worth it.  I am incredibly happy for him to finally find a diamond amongst a lot of coal.  To take part in his celebration was so awesome.  I thank him again for letting me participate in this ceremony. Now after Randy’s wedding I drove 5 1/2 hours landed in St. Louis, fed a cat an asthma pill hidden in wet cat food, and slept for 4 hours.  I immediately saw my best friend who dropped me off at the airport and proceeded to TechEd.  I slept 1 hour on each flight, then ended up working a 3 hour shift at TechEd.  The rest of the week was a haze of connecting with people and sleeping very little.  I got to see my friend Tasha and her husband Casey plus a billion other people.  It was a great week and then I got a call from my grandmother.  It turns out her husband was admitted to the hospital. My grandparents on my dad's side have been divorced since the 60s, which means I never got to see them together.  I always felt like they never cliqued.  When I was a kid we would always spend half our time in Chicago at grandma’s and half our time at grandpa’s houses.  We would hang out with my grandpa’s wife Bobbi and my grandma’s husband Leo.  My cousin’s always called Leo by Pappa and my brother and I would use Leo.  My cousins lived in Chicago up until my cousin Gavi was born then they moved to Philadelphia.  I remember complaining to my dad that we never visited anywhere cool just Chicago and Kansas City.  I also remember Leo teaching me and my brother, Sam, how to climb a tree and play tennis on the back of the apartment wall.  My grandfather’s was kind of stuffy and boring, but we always enjoyed my grandmother’s.  She had games and Disney Movies and toys.  Leo always made it a ton of fun to visit.  I’m not sure a lot of people knew this fact, but when I was 16 years old I saw my grandfather on my mother’s side slowly die of congestive heart failure in a nursing home.  When I was 18 I saw my grandmother on my mother’s side slowly die of an infection over a 30 day period.  I hate hospitals and I hate nursing homes, but sometimes you have to suck in your gut and be strong.  I tried to do that this weekend and I hope I did an ok job.  I really hope things get better, but if they don’t I really appreciate everything he has given me in life.  I am a terrible tennis player and I haven’t climbed a tree in ages, but they both encouraged me when encouragement from my parents was lacking.  
It was Father’s Day today and I have to at least pay tribute to Leo Morris, because he has been a good person to me all my life.  Technorati Tags: TechEd,Grandparents,Father's Day

    Read the article

  • &ldquo;My life at Oracle&rdquo;

    - by cristian.condurache(at)oracle.com
    Hello everybody! My name is Eva and I currently work in Oracle Italy as Sales Programs Manager for the Technology Sales organization. Since 2009, I also proudly represent the Oracle Education Foundation within my country as the Ambassador for Italy. My career path in this amazing company began 5 years ago as a fresh graduate: after various years studying abroad, in Germany and Ireland mainly, I was looking for a valuable and concrete opportunity which could fulfill my energetic spirit. I wanted to develop myself inside a stimulating and “fast” business environment.. and here came Oracle and I really couldn’t ask for anything better!  THE PARTNER EXPERIENCE The first department I had the chance to work into was the Alliances and Channels organization, where I had the opportunity to join a brilliant team of great and visionary guys. I began having the responsibility to analyze and rationalize the portfolio of Oracle business partners and to identify potential cross-area solutions, which had to be highlighted both on the local market and internationally: this ended up with the implementation of the “Partner Community” model, a business environment of selected Oracle partners, specialized on the different technology focus areas. This new concept was then recognized as an EMEA Best Practice and replicated internationally. Having the opportunity to strengthen day after day strategic relationships with several business partners and study the market positioning of their technology solutions, I was given the role to develop the “Oracle Partner Network Innovation Award” in Italy: the EMEA competition encouraging and rewarding proven and successful technology innovations, creating high value for our common customers and generating new business potential. Several Italian partner solutions won different prizes and I decided that it was worth collecting all those valuable projects, winners and short-listed, inside two specific books in order also to provide them an international market visibility: OPN Innovation Award Booklet 2007 and OPN Innovation Award Booklet 2008 Inside the Alliances and Channels department I really had the opportunity to do    amazing things, like for example working side-by-side with one of the most exceptional teams in Oracle I have ever worked with: the EMEA Recruitment Team. Together, in fact, we conceived a brand new business initiative for our partners, called “Oracle Campus Joint Program”. This program was awarded as an EMEA Best Practice and acknowledged by both Italian public institutions and press media. Italy   is currently running its 5th edition.   Briefly, the “Oracle Campus Joint Program” aims at facing the growing issue of lack of  technology competences and skills on the market. By identifying a specific technology area and developing an intensive 4-6 week Oracle University training course and by collaborating with important academic institutes, international “gurus” and professionals, our business partners are able to benefit from a pool of brilliant top talented young consultants and offer them a significant career opportunity. BUSINESS BUT NOT ONLY: THE NO-PROFIT EXPERIENCE OF ORACLE Currently my mission in Oracle is to continue driving the implementation of strategic business development and sales programs for the entire Oracle Technology stack, involving both partners and the end-customers. 
But as a completely distinguished role from the day-today business, I’m also honored to represent in Italy the charity global organization founded by Oracle - the Oracle Education Foundation - and drive its corporate citizenship and marketing programs. Oracle Education Foundation is an independent charitable organization funded by Oracle and is dedicated to helping students develop 21st century skills through project learning and the use of technology. It provides “ThinkQuest” as a free program to primary and secondary (K12) schools. Just some significant numbers: today 548,000 students/teachers in 47 countries use ThinkQuest and the Oracle Education Foundation partners with 40+ no-profit or government organizations globally. ABOUT MYSELF AND MY INTERESTS About myself…I’m very enthusiastic and positive, trying always to transform difficult issues in challenging opportunities. My day usually begins very early in the morning with running, swimming or when I need to collect some “zen” energies with a yoga session or better with a long walk with my dog. I definitely love animals and generally speaking I’m very keen on environmental issues and try, as much as I can, to carry out a healthy and “planet respectful” lifestyle. My thirst for knowledge pushed me some time ago to begin a new personal challenge: I decided to enroll, dedicating a good part of my free time, for a second university degree: I chose “Neuroeconomics”, an innovative academic path which combines psychology, economics, and neuroscience and studies how people make decisions and the role of the brain when people evaluate these decisions, categorizing risks and rewards and generally interacting with each other. I’ve been very glad to talk about my experience in this article, as working for Oracle is something very stimulating. This company ensures you the opportunity to face new challenges, work with highly talented people and be professionally highlighted also globally. Motivation, good results and innovation is always pursued, recognized and fully supported. Thanks and wish you all an amazing career! If you have any question please contact [email protected]. For our job opportunities, please look at http://campus.oracle.com.   Technorati Tags: EMEA,Oracle Partners,Oracle Campus,Oracle Education,experience,EMEA Recruitment Team

    Read the article

  • Oracle Day 2013 Istanbul – 14.Kasim.2013

    - by TUFEKCIOGLU,FATIH
    Oracle Day 2013 Istanbul – 14 November 2013. New Technologies. New World. If this page does not display correctly, you can reach the event calendar at the following link: https://blogs.oracle.com/fatihtufekcioglu/resource/Oracle_Day_Istanbul_14.Kasim.2013_blogs.oracle.com.fatihtufekcioglu.html#! Register Now! Oracle Day 2013 Istanbul, Istanbul Kongre Merkezi (Istanbul Congress Center), Taskisla Caddesi, Harbiye 34367 Istanbul / Turkey, Thursday, 14 November 2013, 08:30 - 18:00. RSVP: [email protected] Oracle Day 2013 Istanbul. New Technologies. New World. 14 November 2013. Dear Guest, Technology keeps changing the world. Cloud computing, mobile solutions, big data and the Internet of Things are joining forces to overturn old business models and bring innovation to the fore. To keep pace with this new world, organizations have to sustain continuous change on the one hand, while continuing to run their businesses as well as possible on the other. So how do organizations keep this balance? At Oracle Day 2013 Istanbul you can see how this is possible through the success stories you will hear from many Oracle customers. By attending the event you can: learn how technology ignites new business models and opportunities; hear the success stories of companies that are simplifying their IT in order to put innovation first; meet industry professionals who experience the same challenges as you; and get the chance to follow the latest product news and announcements from Oracle. We would be delighted to have you join us at Oracle Day 2013 Istanbul to meet Oracle and its partners, listen to customer success stories, find opportunities for social and mobile interaction, watch hands-on demos and more. Register Now. Best regards, Oracle Turkey. Session markers: Oracle Partner | Customer Success Story | TROUG | Presentation in English.
    Program: 08:30-09:30 Registration. 09:30-10:00 Welcome - Filiz Dogan, General Manager, Oracle Turkey. 10:00-10:30 Keynote - Andrew Sutherland, SVP, EMEA Technology, Oracle. 10:30-11:00 Yapi Kredi and Oracle - Cahit Erdogan, Assistant General Manager for Information Technologies and Operations Management, Yapi Kredi Bankasi. 11:00-11:30 The Past, Present and Future of Technology at Ziraat in its 150th Year - Yunus Uygur Kocaoglu, General Manager of Ziraat Teknoloji & Assistant General Manager for IT Management, Ziraat Bankasi. 11:30-11:45 A 5th Year at Oracle Day - Sezgin Aslan, Business Development Group Manager, Innova. 11:45-13:00 Lunch.
    Afternoon tracks - Salon 1: Cloud Computing Solutions; Salon 2: Big Data & Analytics Solutions; Salon 3: Middleware Solutions in a Mobile World; Salon 4: Business Applications I; Salon 5: Business Applications II; Salon 6: Modern Data Center; Salon 7: Oracle & Partner Solutions & Success Stories; Salon 8: TROUG (Oracle User Group).
    13:00-13:30: Steer Your Cloud! | Big Data at Work: Transform Your Business with Analytics | Your Blueprint For Driving Enterprise Mobile Strategy | Empowering Modern Business in the Cloud - Business Optimization Is Not a Dream! | Ready for Change Ahead of Time: "The BAT e-Invoice Project" | Panel: "Training Qualified IT Professionals in Turkey" (presenters: Oracle, Oracle, Oracle, Oracle, Oracle, Idea Teknoloji, British American Tobacco, Oracle, TROUG). 13:30-13:40 Coffee Break.
    13:40-14:10: The Database Has Moved to the Cloud with 12c, So What About You? | Keep the Bigness to Yourself | An Application Server Platform for IT Pioneers | Vakif Emeklilik Accounting and Logistics Systems Transformation Project | JD Edwards EnterpriseOne: Comprehensive, User-Friendly and Innovative | Efficient Oracle Servers for Modern Data Centers | Ziraat Bankasi Exadata Success Story | Oracle Database 12c Key Features (DB/DWH) (presenters: Oracle, Etiya, Oracle, Innova, Oracle, Oracle, Ziraat Bankasi, TROUG). 14:10-14:20 Coffee Break.
    14:20-14:50: An Architect's View of Your Oracle Cloud | The Power of Focus: Performance Management with Oracle BI Dashboards | Integrating Enterprise Applications with the Mobile World | A Consistent, Effective and Unified Face to the Customer: Excellent Customer Experience | From the Supply Chain to the "Value Chain" with Effective Planning and Procurement | Storage Systems Designed for Oracle Applications | Platinum Services for Engineered Systems | SQL/PLSQL New Features (presenters: Oracle, ING, Oracle, Oracle, Oracle, Oracle, Oracle, TROUG). 14:50-15:00 Coffee Break.
    15:00-15:30: Oracle Enterprise Manager 12c: How Does It Support the Cloud? | A New Interpretation of Data Analysis, a New Perspective with Endeca | The Journey of Enterprise Information | The Road to Success: Sales, Marketing and Social Media Hand in Hand | Unlock Your Business Potential with Enterprise Performance Management (EPM)! | A Fast Move to Oracle Server and Storage Technologies - the Avea Success Story | SAP Applications Run Better with Oracle Engineered Systems - the Koçtas Success Story | Oracle EBS R12.2 New Features (presenters: Oracle, Gtech, Oracle, Innova, Oracle, Oracle, Avea, Oracle, Koçtas, TROUG). 15:30-15:40 Coffee Break.
    15:40-16:10: Partitioning Operations with Oracle Real Application Testing (RAT), and How Oracle RAT Is Used at Finansbank | Adapting to Changing Market Conditions with Rapid Budget Revision | Access Management and Security for Mobile Devices | Governance, Regulatory Compliance and Process Optimization | From Individual Success to Corporate Success with Taleo, the Leading Recruitment Solution | Create Your Own Cloud Environment with Oracle Engineered Systems | Akçelik's ERP Choice: JD Edwards | Exadata - Maximum Availability Architecture Best Practices (presenters: Avea, Finansbank, IBTECH, Gtech, Oracle, PwC, Oracle, Oracle, Akçelik, Kora, Oracle). 16:10-16:20 Coffee Break.
    16:20-16:50: The Middle Tier Is in the Cloud Too | Fast Data or Big Data? | Oracle BPM for Every Kind of Process Need - Demo | The Present and Future of Siebel, the Seasoned Star of the CRM World | Moving Your HR Strategies to the Cloud with Fusion | Flexible and Dynamic Data Center Infrastructures | The ERP with the Simplest ("En SADE") Setup Time: Jet Speed in ERP with Oracle Business Accelerator! | Delicious: EDQ, OGG and ODI on Exadata (presenters: Oracle, Oracle, Oracle, KRBB, Oracle, Oracle, AWR, Oracle, SADE Organik, Labrys Danismanlik, TROUG). 16:50-18:00 Cocktail & Oracle Infiniband Concert.
    PANELISTS: Esref Adali: Head of the Computer Engineering Department at ITU and Head of the Information Technologies Program at the Informatics Institute; Kemal Ciliz: Chairman of the Board, TÜBISAD; Zekeriya Besiroglu: Chairman of the Board, Oracle Computer Programmers Association. MODERATOR: Cem Satana, Deputy General Manager, Oracle.
    If you are an employee or official of a public institution or organization, please click here for information on the important ethics rules that apply to this event.

    Read the article

  • Learn About Oracle’s Strategy for a Simple, Modern User Experience at OpenWorld 2012

    - by Applications User Experience
    By Kathy Miedema, Oracle Applications User Experience If you’re interested in what the best possible user experience looks like, you’ll want to hear what Oracle’s Applications User Experience team is planning for OpenWorld 2012, Sept. 30-Oct. 4 in San Francisco. This year, we will talk Fusion, Fusion, Fusion. We were among the first to show Oracle Fusion Applications in the last couple of years, and we’ll be showing it again this year so you can see what Oracle is planning for the next generation of enterprise applications. Attend our sessions to learn more about the user experience strategy in which Oracle is investing. Simplicity is the driving force behind the demos that we are unveiling now, which you can see at OpenWorld. We want to create opportunities for productivity and efficiency, and deliver enterprise data across devices to help you do your work in the way best suited to your job and needs, said Jeremy Ashley, Vice President, Oracle Applications User Experience. You can see the new look for Fusion Applications at a general session led by Ashley at 3:30 p.m. on Wednesday, Oct. 3. You’ll also have the chance to learn more about tailoring in Oracle Fusion Applications, and gain a new understanding of the investment in the user experience behind Fusion Applications at our sessions (see session information below). Inside the Oracle Applications User Experience team’s on-site lab at Oracle OpenWorld 2011. Head to the demogrounds to see new demos from the Applications User Experience team, including the new look for Fusion Applications and what we’re building for mobile platforms. Take a spin on our eye tracker, a very cool tool that we use to research the usability of a particular design. Visit the Usable Apps OpenWorld page to find out where our demopods will be located. We are also recruiting participants for our on-site lab, in which we gather feedback on new user experience designs, and taking reservations for a charter bus that will bring you to Oracle headquarters for a lab tour Thursday, Oct. 4, or Friday, Oct. 5. Tours leave at 10 a.m. and 1:45 p.m. from the Moscone Center in San Francisco. You’ll see more of our newest designs at the lab tour, and some of our research tools in action. Can’t participate in a customer feedback session or take a lab tour this time around? Visit Usable Apps to participate or book a tour another time. For more information on any OpenWorld sessions, check the content catalog – also available at www.oracle.com/openworld. For information on Applications User Experience (Apps UX) sessions and activities, go to the Usable Apps OpenWorld page. APPS UX OPENWORLD SESSIONS Oracle’s Roadmap to a Simple, Modern User Experience Presenter: Jeremy Ashley, Vice President Applications User Experience, Oracle; with Debra Lilley, Fujitsu Consulting; Basheer Khan, Innowave; and Edward Roske, InterRelSession ID: CON9467Date: Wednesday, Oct. 3 Time: 3:30 - 4:30 p.m.Location: Moscone West - 3002/3004 Jeremy Ashley Oracle Fusion Applications: Transforming Insight into Action Presenters: Killian Evers and Kristin Desmond, OracleSession ID: CON8718Date: Thursday, Oct. 4Time: 11:15 a.m. - 12:15 p.m.Location: Moscone West - 2008 “FRIENDS OF UX” OPENWORLD SESSIONS Sessions by the Oracle Usability Advisory Board (OUAB) members: Advances in Oracle Enterprise Governance, Risk, and Compliance Manager  Presenters: Koen Delaure, KPMG Advisory NV, and Oracle Usability Advisory Board member; Russell Stohr, Oracle Session ID: CON9389Date: Tuesday, Oct. 
2Time: 1:15 - 2:15 p.m.Location: Palace Hotel - Concert Optimize Oracle E-Business Suite Procure-to-Pay: Cut Inefficiencies/Fraud with Oracle GRC Apps Presenters: Koen Delaure, KPMG Advisory NV, and Solveig Wagner, Seadrill Management AS, both Oracle Usability Advisory Board members; and Swarnali Bag, OracleSession ID: CON9401Date: Monday, Oct. 1Time: 12:15 - 1:15 p.m.Location: Intercontinental - Sutter Showcase of JD Edwards EnterpriseOne Mobility Presenters: Jon Wells, Westmoreland Coal Co., Oracle Usability Advisory Board member; Rob Mills and Liz Davson, Town of Oakville; Keith Sholes and Louise Farner, Oracle Session ID: CON9123Date: Tuesday, Oct. 2Time: 1:15 - 2:15 p.m.Location: InterContinental - Grand Ballroom B Sessions by the Fusion User Experience Advocates (FXA) Usability and Features of Oracle Fusion Applications, Built upon Oracle Fusion Middleware Presenters: Debra Lilley, Fujitsu Consulting and Oracle Usability Advisory Board member; John King, King Training ResourcesSession ID: UGF10371Date: Sunday, Sept. 30Time: 11 a.m. - 11:45 a.m. Location: Moscone West – 2010 Ten Things to Love About Oracle Fusion Project Portfolio Management  Presenter: Floyd Teter, EiS TechnologiesSession ID: CON6021Date: Tuesday, Oct. 2Time: 10:15 - 11:15 a.m.Location: Moscone West – 2003

    Read the article

  • SQL Saturday #44 Huntington Beach Recap

What a great day. It was long and tiring, but rewarding in so many ways. On Sunday morning, I was driving home and I decided to take the Pacific Coast Highway from Huntington Beach.  It was a great chance to exhale and just enjoy the sun and smells of the beach (I really love SoCal sometimes). And for future reference for all you speakers, the beach and ocean are only 5 minutes from the SQL Saturday location.  I just couldn't help noticing the shocking number of high-priced cars on the road (4 Bentleys, 3 Ferraris, 1 Aston Martin, 3 Maseratis, 1 Rolls Royce, and 2 Lamborghinis).  It made me think about this: Price of all those cars: $150,000+.  Impacting the ability of people to learn: Priceless.  We have positively impacted the education, knowledge, and capabilities of not only our attendees, but also all of their companies and the people they might help as well.  That is just staggering and something to be immensely proud of. To all of my fellow community leaders, I salute you. So let's talk about the event. Overall: We had over 220 people register for the event and had 180+ people attend the event. I was shooting for the magical 200 number, but I guess it just gives us more motivation to make it even bigger and better next time. We had a few snags along the way, but what event doesn't? I think everything turned out great. I did not hear any negative comments and heard lots of positive comments, along with people asking when the next one is going to be (more on that later). Location - Golden West College: We could not have asked for a better partner for the event. Herb Cohen from Golden West College was the wizard behind the curtains. From the beginning, he was our advocate to the GWC Board and was instrumental in getting our event approved. On the day of the event, Herb was a HUGE help getting any and all logistics that we needed taken care of. In the craziness of the early morning registration crush it was a big help knowing that he and Bret Stateham (Blog | Twitter) were taking care of testing projectors in all the rooms. Anything we needed, he was there, and he was even proactive in getting some things that I had not even thought of (e.g. a dumpster for all of our garbage). I cannot thank Herb enough, along with other members of the GWC staff including Minnie Higgins of the Career and Technical Education Division office, Jack Taylor, public safety, and Ron Pryor, Tech Services Support. And last, but not least, the wireless on campus was absolutely FANTASTIC! Some lessons learned: Unless you are a glutton for punishment, as I no doubt am, you most certainly want to give yourself more than six weeks to plan the event. I am lucky that I have a very understanding wife and had a wonderful set of co-coordinators helping me out. A big thanks goes out to Phil, Marlon (Blog | Twitter), Nitin (Twitter), Thomas (Blog | Twitter), Bret (Blog | Twitter), Ben, and Laurie. Thankfully, the sponsor and speaker community was hugely supportive and we were able to fill out the entire event with speakers and sponsors. I have to say that there is not a lot that I would change after this year's event. There are obviously going to be some things that we can do better or differently next time, but overall I think it was a great event and I was more than happy with the response we received from the community. Sponsors: We obviously could not have put together our event without our sponsors, so we certainly have to show them some love.
Platinum Sponsors Quest Software http://www.quest.com My Space http://www.myspace.com/ Gold Strategy Companion http://www.strategycompanion.com Silver Fusion-IO http://www.fusionio.com Bronze WestClinTech http://westclintech.com Professional Association For SQL Server http://www.sqlpass.org Attunity http://www.attunity.com Sharepoint 360 http://www.sharepoint360.com Some additional thanks: Andy Warren (Blog | Twitter) - Always there to answer my questions and help out when I had some issues or questions with the website. The amount of work that he and everyone else put into SQL Saturday is amazing. What a great gift to the community! Einstein Bros. Bagels - They were our breakfast vendor and arrived perfectly on time with yummy bagels, sweets and, most importantly, coffee. Luccis Deli (http://www.luccisdeli.com) - Luccis was our lunch vendor. They were great to work with and the food was excellent. They worked with us to give us a great price. Heard lots of great comments about the lunches. Definitely not your ordinary box lunch. Moving forward: Unfortunately, the work does not end after the event. We have a few things to clear up such as surveys, sponsor stuff, presentations uploaded to the website, expense reimbursement, stuff like that. Hopefully, all that should be cleared up within the next couple of weeks. After that, as a group we are going to get together and decide what our next steps are. We definitely want to keep some of the momentum that we are building as a SQL community and channel that into future SQL Saturdays and other types of community events. In the meantime, for additional training be sure to check out your local user group and PASS. San Diego SQL Server Users Group ( http://www.sdsqlug.org/home/index.cfm ) Orange County SQL Server Users Group ( http://www.sqloc.com/ ) L.A. SQL Server Users Group ( http://www.sql.la/ ) SQL PASS ( http://www.sqlpass.org/ ) 24 Hours of PASS ( http://www.sqlpass.org/24hours/2010/ ) So stay tuned, there will be more events to come in SoCal!!

    Read the article

  • Unique Business Value vs. Unique IT

    - by barry.perkins
    When the age of computing started, technology was new, exciting, full of potential and had a long way to grow. Vendor architectures were proprietary, and limited in function at first, growing in capability and complexity over time. There were few if any "standards", let alone "open standards" and the concepts of "open systems", and "open architectures" were far in the future. Companies employed intelligent, talented and creative people to implement the best possible solutions for their company. At first, those solutions were "unique" to each company. As time progressed, standards emerged, companies shared knowledge, business capability supplied by technology grew, and companies continued to expand their use of technology. Taking advantage of change required companies to struggle through periodic "revolutionary" change cycles, struggling through costly changes that were fraught with risk, resulted in solutions with an increasingly shorter half-life, and frequently required altering existing business processes and retraining employees and partner businesses. The pace of technological invention and implementation grew at an ever increasing rate, making the "revolutionary" approach based upon "proprietary" or "closed" architectures or technologies no longer viable. Concurrent with the advancement of technology, the rate of change in business increased, leading us to the incredibly fast paced, highly charged, and competitive global economy that we have today, where the most successful companies are companies that are good at implementing, leveraging and exploiting change. Fast forward to today, a world where dramatic changes in business and technology happen continually, a world where "evolutionary" change is crucial. Companies can no longer afford to build "unique IT", nor can they afford regular intervals of "revolutionary" change, with the associated costs and risks. Human ingenuity was once again up to the task, turning technology into a platform supporting business through evolutionary change, by employing "open": open standards; open systems; open architectures; and open solutions. Employing "open", enables companies to implement systems based upon technology, capability and standards that will evolve over time, providing a solid platform upon which a company can drive business needs, requirements, functions, and processes down into the technology, rather than exposing technology to the business, allowing companies to focus on providing "unique business value" rather than "unique IT". The big question! Does moving from "older" technology that no longer meets the needs of today's business, to new "open" technology require yet another "revolutionary change"? A "revolutionary" change with a short half-life, camouflaging reality with great marketing? The answer is "perhaps". With the endless options available to choose from, it is entirely possible to implement a solution that may work well today, but in 5 years time will become yet another albatross for the company to bear. Some solutions may look good today, solving a budget challenge by reducing cost, or solving a specific tactical challenge, but result in highly complex environments, that may be difficult to manage and maintain and limit the future potential of your business. Put differently, some solutions might push today's challenge into the future, resulting in a more complex and expensive solution. There is no such thing as a "1 size fits all" IT solution for business. 
If all companies implemented business solutions based upon technology that required, or forced, the same business processes across all businesses in an industry, it would be extremely difficult to show competitive advantage through "unique business value". It would be equally difficult to "evolve" to meet or exceed business needs and keep up with today's rapid pace of change. How does one ensure that they do not jump from one trap directly into another? Or, to put it positively, there are solutions available today that can address these challenges and issues. How does one ensure that the buying decision of today will serve the business well for years into the future? Intelligent and informed decisions - "buying right". In a previous blog entry, we discussed the value of linking tactical to strategic. The key is driving the focus to what is best for your business: handling today's tactical issues while also aligning with a roadmap/strategy that is tightly tied to your strategic business objectives. When considering the plethora of possible options that provide various approaches to solving today's complex business problems, it is extremely important to ensure that the vendors supplying those options focus on what is best for your business: supplying sufficient information, providing adequate answers to questions, addressing challenges, issues, concerns and objections honestly and openly, and supplying solutions that are tailored for, and deliver the most business value possible to, your business. Here are a few questions to consider relative to the proposed options that should help ensure that today's solution doesn't become tomorrow's problem. Do the proposed solutions: Solve the problem(s) you are trying to address? Provide a solid foundation upon which to grow/enhance your business? Provide tactical gains that align with and enable your strategic business goals/objectives? Provide an infrastructure that can be leveraged with subsequent projects? Solve problems for the business overall, the lines of business, or just IT? Simplify your current environment? Provide the basis for business efficiency, agility, clarity, governance, risk and compliance, and real-time business visibility and trend analysis? Does your IT staff have the knowledge/experience to successfully manage the proposed systems once they are deployed in production? Done well, you will be presented with options tailored to your business that enable you to drive the "unique business value" necessary to help your business stand out from others, creating a distinct competitive advantage and delivering what your customers need, when they need it, so you can attract new customers, win new business, and grow top-line revenue, all at a cost that provides a strong Return on Investment/Return on Assets. The net result is growth with managed cost, providing significantly improved profit margin and shareholder value.

    Read the article

  • Benefits of PerformancePoint Services Using SharePoint Server 2010

    - by Wayne
What is PerformancePoint Services? More often than not, the metrics that make up your key performance indicators are not simple values from a data source. In SharePoint Server 2007 PerformancePoint Services, you could create two kinds of KPI metrics: simple single-value metrics from any supported data source, or complex multiple-value metrics from a single Analysis Services data source using MDX. Now things are even easier with PerformancePoint Services in SharePoint 2010. Let us take a look at what it is. PerformancePoint Services in SharePoint Server 2010 is a performance management service that you can use to monitor and analyze your business. By providing flexible, easy-to-use tools for building dashboards, scorecards, reports, and key performance indicators (KPIs), PerformancePoint Services can help everyone across an organization make informed business decisions that align with companywide objectives and strategy. Scorecards, dashboards, and KPIs help drive accountability. Integrated analytics help employees move quickly from monitoring information to analyzing it and, when appropriate, sharing it throughout the organization. Prior to the addition of PerformancePoint Services to SharePoint Server, Microsoft Office PerformancePoint Server 2007 functioned as a standalone server. Now PerformancePoint functionality is available as an integrated part of the SharePoint Server Enterprise license, as is the case with Excel Services in Microsoft SharePoint Server 2010. The popular features of earlier versions of PerformancePoint Services are preserved along with numerous enhancements and additional functionality. New PerformancePoint Services features PerformancePoint Services now can utilize SharePoint Server scalability, collaboration, backup and recovery, and disaster recovery capabilities. Dashboards and dashboard items are stored and secured within SharePoint lists and libraries, providing you with a single security and repository framework. New features and enhancements of SharePoint 2010 PerformancePoint Services • With PerformancePoint Services, functioning as a service in SharePoint Server, dashboards and dashboard items are stored and secured within SharePoint lists and libraries, providing you with a single security and repository framework. The new architecture also takes advantage of SharePoint Server scalability, collaboration, backup and recovery, and disaster recovery capabilities. You also can include and link PerformancePoint Services Web Parts with other SharePoint Server Web Parts on the same page. The new architecture also streamlines security models that simplify access to report data. • The Decomposition Tree is a new visualization report type available in PerformancePoint Services. You can use it to quickly and visually break down higher-level data values from a multi-dimensional data set to understand the driving forces behind those values. The Decomposition Tree is available in scorecards and analytic reports and ultimately in dashboards. • You can access more detailed business information with improved scorecards. Scorecards have been enhanced to make it easy for you to drill down and quickly access more detailed information. PerformancePoint scorecards also offer more flexible layout options, dynamic hierarchies, and calculated KPI features. Using this enhanced functionality, you can now create custom metrics that use multiple data sources. You can also sort, filter, and view variances between actual and target values to help you identify concerns or risks.
• Better Time Intelligence filtering capabilities that you can use to create and use dynamic time filters that are always up to date. Other improved filters improve the ability for dashboard users to quickly focus in on information that is most relevant. • Ability to include and link PerformancePoint Services Web Parts together with other PerformancePoint Services Web parts on the same page. • Easier to author and publish dashboard items by using Dashboard Designer. • SQL Server Analysis Services 2008 support. • Increased support for accessibility compliance in individual reports and scorecards. • The KPI Details report is a new report type that displays contextually relevant information about KPIs, metrics, rows, columns, and cells within a scorecard. The KPI Details report works as a Web part that links to a scorecard or individual KPI to show relevant metadata to the end user in SharePoint Server. This Web part can be added to PerformancePoint dashboards or any SharePoint Server page. • Create analytics reports to better understand underlying business forces behind the results. Analytic reports have been enhanced to support value filtering, new chart types, and server-based conditional formatting. To conclude, PerformancePoint Services, by becoming tightly integrated with SharePoint Server 2010, takes advantage of many enterprise-level SharePoint Server 2010 features. Unfortunately, SharePoint Foundation 2010 doesn’t include this feature. There are still many choices in SharePoint family of products that include SharePoint Server 2010, SharePoint Foundation, SharePoint Server 2007 and associated free SharePoint web parts and templates.

    Read the article

  • Personal search – the future of search

    - by jamiet
[Four months ago I wrote a meandering blog post on another blogging site entitled Personal search – the future of search. The points I made therein are becoming more relevant to what I'm reading about and hoping to get involved in in the future, so I'm re-posting here to a wider audience to hopefully get some more feedback and gauge reaction to it. This has been prompted by the book Pull by David Siegel that is forming my current holiday reading (recommended to me by a commenter on my previous post Interesting things – Twitter annotations and your phone as a web server) and in particular by Siegel's notion of us all in the future having a personal online data vault.] My one-time colleague Paul Dawson recently wrote an article called The Future of Search and in it he proposed some interesting ideas. Some choice quotes: The growth of Chinese search giant Baidu is an indicator that fully localised and tailored content and offerings have great traction with local audiences This trend is already driving an increase in the use of specialist searches … Look at how Farecast is now integrated into Bing for example, or how Flightstats is now integrated into Google. Search does not necessarily have to begin with a keyword, but could start instead with a click or a touch. Take a look at Retrievr. Start drawing a picture in the box and see what happens. This is certainly search without the need for typing in keywords search technology has advanced greatly in recent years. The recent launch of Microsoft Live Labs’ Pivot has given us a taste of what we can expect to see in the future This really got me thinking about where search might go in the future and as my mind wandered I realised that as the amount of data that we collect about ourselves increases so too will the need and the desire to search it. The amount of electronic data that exists about each and every person is increasing and in the near future I fully expect that we are going to be able to store personal data such as: A history of our location (in fact Google Latitude already offers this facility) Recordings of all our phone conversations Health information history (weight, blood pressure etc…) Energy usage Spending history What films we watch, what radio stations we listen to Voting history Of course, most of this stuff is already stored somewhere but crucially we don’t have easy access to it. My utilities supplier knows how much electricity I’m using but if I want to know for myself I have to go and dig through my statements (assuming I have kept them). Similarly my doctor probably has ready access to all of my health records, my bank knows exactly what I have spent my money on, my cable supplier knows what I watch on TV and my mobile phone supplier probably knows exactly where I am and where I’ve been for the past few years. Strange then that none of this electronic information is available to me in a way that I can really make use of it; after all, it's MY information. It's MY data. I created it. That is set to change. As technologies mature and customers become more technically cognizant they will demand more access to the data that companies hold about them. The companies themselves will realise the benefit that they derive from giving users what they want and will embrace ways of providing it. 
As a result the amount of data that we store about ourselves is going to increase exponentially and the desire to search and derive value from that data is going to grow with it; we are about to enter the era of the “personal datastore” and we will want, and need, to search through it in order to make sense of it all. It's interesting then that today when we think of search we think of search engines, and yet in these personal datastores we’re referring to data that search engines can’t touch because WE own it and we (hopefully) choose to keep it private. Someone, I know not who, is going to lead in this space by making it easy for us to search our data and retrieve information that we have either forgotten or maybe didn’t even know in the first place. We will learn new things about ourselves and about our habits; we will share these findings with whomever we choose; we will compare what we discover with others; we will collaborate for mutual benefit and, most of all, we will educate ourselves as to how to live our lives better. Search will be the means to that end; it will enable us to make sense of the wealth of information that we will collect day in, day out. The future of search is personal; why would we be interested in anything else? @Jamiet

    Read the article

  • Super Joybox 5 HID 0925:8884 not recognized as joystick in Ubuntu 12.04 LTS

    - by Tim Evans
    Problem: When using the "Super JoyBox 5" 4 port playstation 2 to USB adapter, the device is not recognized as a joystick. there is no js0 created, but instead another input eventX and mouseX are created in /dev/input. When using the directional buttons (up down left right) on a Playstation 1 controller attached to the device, the mouse cursor moves to the top, bottom, left, and right edges of the screen respectively. Buttons are unresponsive. The joypads attached to the device cannot be used in any games or other programs. Attempted remedies: Creating a symlink from the eventX to js0 does not solve the problem. Addl Info: joydev is loaded and running peroperly according to LSMOD. evtest can be run on the created eventX (sudo evtest /dev/input/event14 in my case) and the buttons and axes all register inputs. Here is a paste of EVTEST's diagnostic and the first couple button events. [code] sudo evtest /dev/input/event14 Input driver version is 1.0.1 Input device ID: bus 0x3 vendor 0x925 product 0x8884 version 0x100 Input device name: "HID 0925:8884" Supported events: Event type 0 (EV_SYN) Event type 1 (EV_KEY) Event code 288 (BTN_TRIGGER) Event code 289 (BTN_THUMB) Event code 290 (BTN_THUMB2) Event code 291 (BTN_TOP) Event code 292 (BTN_TOP2) Event code 293 (BTN_PINKIE) Event code 294 (BTN_BASE) Event code 295 (BTN_BASE2) Event code 296 (BTN_BASE3) Event code 297 (BTN_BASE4) Event code 298 (BTN_BASE5) Event code 299 (BTN_BASE6) Event code 300 (?) Event code 301 (?) Event code 302 (?) Event code 303 (BTN_DEAD) Event code 304 (BTN_A) Event code 305 (BTN_B) Event code 306 (BTN_C) Event code 307 (BTN_X) Event code 308 (BTN_Y) Event code 309 (BTN_Z) Event code 310 (BTN_TL) Event code 311 (BTN_TR) Event code 312 (BTN_TL2) Event code 313 (BTN_TR2) Event code 314 (BTN_SELECT) Event code 315 (BTN_START) Event code 316 (BTN_MODE) Event code 317 (BTN_THUMBL) Event code 318 (BTN_THUMBR) Event code 319 (?) Event code 320 (BTN_TOOL_PEN) Event code 321 (BTN_TOOL_RUBBER) Event code 322 (BTN_TOOL_BRUSH) Event code 323 (BTN_TOOL_PENCIL) Event code 324 (BTN_TOOL_AIRBRUSH) Event code 325 (BTN_TOOL_FINGER) Event code 326 (BTN_TOOL_MOUSE) Event code 327 (BTN_TOOL_LENS) Event code 328 (?) Event code 329 (?) Event code 330 (BTN_TOUCH) Event code 331 (BTN_STYLUS) Event code 332 (BTN_STYLUS2) Event code 333 (BTN_TOOL_DOUBLETAP) Event code 334 (BTN_TOOL_TRIPLETAP) Event code 335 (BTN_TOOL_QUADTAP) Event type 3 (EV_ABS) Event code 0 (ABS_X) Value 127 Min 0 Max 255 Flat 15 Event code 1 (ABS_Y) Value 127 Min 0 Max 255 Flat 15 Event code 2 (ABS_Z) Value 127 Min 0 Max 255 Flat 15 Event code 3 (ABS_RX) Value 127 Min 0 Max 255 Flat 15 Event code 4 (ABS_RY) Value 127 Min 0 Max 255 Flat 15 Event code 5 (ABS_RZ) Value 127 Min 0 Max 255 Flat 15 Event code 6 (ABS_THROTTLE) Value 127 Min 0 Max 255 Flat 15 Event code 7 (ABS_RUDDER) Value 127 Min 0 Max 255 Flat 15 Event code 8 (ABS_WHEEL) Value 127 Min 0 Max 255 Flat 15 Event code 9 (ABS_GAS) Value 127 Min 0 Max 255 Flat 15 Event code 10 (ABS_BRAKE) Value 127 Min 0 Max 255 Flat 15 Event code 11 (?) Value 127 Min 0 Max 255 Flat 15 Event code 12 (?) Value 127 Min 0 Max 255 Flat 15 Event code 13 (?) Value 127 Min 0 Max 255 Flat 15 Event code 14 (?) Value 127 Min 0 Max 255 Flat 15 Event code 15 (?) 
Value 127 Min 0 Max 255 Flat 15 Event code 16 (ABS_HAT0X) Value 0 Min -1 Max 1 Event code 17 (ABS_HAT0Y) Value 0 Min -1 Max 1 Event code 18 (ABS_HAT1X) Value 0 Min -1 Max 1 Event code 19 (ABS_HAT1Y) Value 0 Min -1 Max 1 Event code 20 (ABS_HAT2X) Value 0 Min -1 Max 1 Event code 21 (ABS_HAT2Y) Value 0 Min -1 Max 1 Event code 22 (ABS_HAT3X) Value 0 Min -1 Max 1 Event code 23 (ABS_HAT3Y) Value 0 Min -1 Max 1 Event type 4 (EV_MSC) Event code 4 (MSC_SCAN) Testing ... (interrupt to exit) Event: time 1351223176.126127, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90001 Event: time 1351223176.126130, type 1 (EV_KEY), code 288 (BTN_TRIGGER), value 1 Event: time 1351223176.126166, -------------- SYN_REPORT ------------ Event: time 1351223178.238127, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90001 Event: time 1351223178.238130, type 1 (EV_KEY), code 288 (BTN_TRIGGER), value 0 Event: time 1351223178.238167, -------------- SYN_REPORT ------------ Event: time 1351223180.422127, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90002 Event: time 1351223180.422129, type 1 (EV_KEY), code 289 (BTN_THUMB), value 1 Event: time 1351223180.422163, -------------- SYN_REPORT ------------ Event: time 1351223181.558099, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90002 Event: time 1351223181.558102, type 1 (EV_KEY), code 289 (BTN_THUMB), value 0 Event: time 1351223181.558137, -------------- SYN_REPORT ------------ Event: time 1351223182.486137, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90003 Event: time 1351223182.486140, type 1 (EV_KEY), code 290 (BTN_THUMB2), value 1 Event: time 1351223182.486172, -------------- SYN_REPORT ------------ Event: time 1351223183.302130, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90003 Event: time 1351223183.302132, type 1 (EV_KEY), code 290 (BTN_THUMB2), value 0 Event: time 1351223183.302165, -------------- SYN_REPORT ------------ Event: time 1351223184.030133, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90004 Event: time 1351223184.030136, type 1 (EV_KEY), code 291 (BTN_TOP), value 1 Event: time 1351223184.030166, -------------- SYN_REPORT ------------ Event: time 1351223184.558135, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90004 Event: time 1351223184.558138, type 1 (EV_KEY), code 291 (BTN_TOP), value 0 Event: time 1351223184.558168, -------------- SYN_REPORT ------------ [/code] The directional buttons on the pad are being identified as HAT0Y and HAT0X axes (that's zero, not the letter O). Apparently, this device used to work flawlessly on kernel 2.4.x systems, and even as late as Ubuntu 10.04. Perhaps the joydev rules for identifying joypads have changed? Currently, this kind of bug is affecting a few different types of controller adapters, but since this is the one that I PERSONALLY have (and it has been driving me my own special brand of crazy), it's the one I'm documenting. What I think should be happening instead: the device should be registering js0 through js3, one for each port, or a single js0 that handles all of the connected devices with differently numbered axes for each connected joypad. Either way, it should work as a joystick and stop controlling the mouse cursor. Please help!

    Read the article

  • ACORD LOMA Session Highlights Policy Administration Trends

    - by [email protected]
    Helen Pitts, senior product marketing manager for Oracle Insurance, attended and is blogging from the ACORD LOMA Insurance Forum this week. Above: Paul Vancheri, Chief Information Officer, Fidelity Investments Life Insurance Company. Vancheri gave a presentation during the ACORD LOMA Insurance Systems Forum about the key elements of modern policy administration systems and how insurers can mitigate risk during legacy system migrations to safely introduce new technologies. When I had a few particularly challenging honors courses in college my father, a long-time technology industry veteran, used to say, "If you don't know how to do something go ask the experts. Find someone who has been there and done that, don't be afraid to ask the tough questions, and apply and build upon what you learn." (Actually he still offers this same advice today.) That's probably why my favorite sessions at industry events, like the ACORD LOMA Insurance Forum this week, are those that include insight on industry trends and case studies from carriers who share their experiences and offer best practices based upon their own lessons learned. I had the opportunity to attend a particularly insightful session Wednesday as Craig Weber, senior vice president of Celent's Insurance practice, and Paul Vancheri, CIO of Fidelity Life Investments, presented, "Managing the Dynamic Insurance Landscape: Enabling Growth and Profitability with a Modern Policy Administration System." Policy Administration Trends Growing the business is the top issue when it comes to IT among both life and annuity and property and casualty carriers according to Weber. To drive growth and capture market share from competitors, carriers are looking to modernize their core insurance systems, with 65 percent of those CIOs participating in recent Celent research citing plans to replace their policy administration systems. Weber noted that there has been continued focus and investment, particularly in the last three years, by software and technology vendors to offer modern, rules-based, configurable policy administration solutions. He added that these solutions are continuing to evolve with the ongoing aim of helping carriers rapidly meet shifting business needs--whether it is to launch new products to market faster than the competition, adapt existing products to meet shifting consumer and /or regulatory demands, or to exit unprofitable markets. He closed by noting the top four trends for policy administration either in the process of being adopted today or on the not-so-distant horizon for the future: Underwriting and service desktops New business automation Convergence of ultra-configurable and domain content-rich systems Better usability and screen design Mitigating the Risk When Making the Decision to Modernize Third-party analyst research from advisory firms like Celent was a key part of the due diligence process for Fidelity as it sought a replacement for its legacy policy administration system back in 2005, according to Vancheri. The company's business opportunities were outrunning system capability. Its legacy system had not been upgraded in several years and was deficient from a functionality and currency standpoint. This was constraining the carrier's ability to rapidly configure and bring new and complex products to market. The company sought a new, modern policy administration system, one that would enable it to keep pace with rapid and often unexpected industry changes and ahead of the competition. 
A cross-functional team that included representatives from finance, actuarial, operations, client services and IT conducted an extensive selection process. This process included deep documentation review, pilot evaluations, demonstrations of required functionality and complex problem-solving, infrastructure integration capability, and the ability to meet the company's desired cost model. The company ultimately selected an adaptive policy administration system that met its requirements to: Deliver ease of use - eliminating paper and rework, while easing the burden on representatives to sell and service annuities Provide customer parity - offering Web-based capabilities in alignment with the company's focus on delivering a consistent customer experience across its business Deliver scalability, efficiency - enabling automation, while simplifying and standardizing systems across its technology stack Offer desired functionality - supporting Fidelity's product configuration / rules management philosophy, focus on customer service and technology upgrade requirements Meet cost requirements - including implementation, professional services and licenses fees and ongoing maintenance Deliver upon business requirements - enabling the ability to drive time to market for new products and flexibility to make changes Best Practices for Addressing Implementation Challenges Based upon lessons learned during the company's implementation, Vancheri advised carriers to evaluate staffing capabilities and cultural impacts, review business requirements to avoid rebuilding legacy processes, factor in dependent systems, and review policies and practices to secure customer data. His formula for success: upfront planning + clear requirements = precision execution. Achieving a Return on Investment Vancheri said the decision to replace their legacy policy administration system and deploy a modern, rules-based system--before the economic downturn occurred--has been integral in helping the company adapt to shifting market conditions, while enabling growth in its direct channel sales of variable annuities. Since deploying its new policy admin system, the company has reduced its average time to market for new products from 12-15 months to 4.5 months. The company has since migrated its other products to the new system and retired its legacy system, significantly decreasing its overall product development cycle. From a processing standpoint Vancheri noted the company has achieved gains in automation, information, and ease of use, resulting in improved real-time data edits, controls for better quality, and tax handling capability. Plus, with by having only one platform to manage, the company has simplified its IT environment and is well positioned to deliver system enhancements for greater efficiencies. Commitment to Continuing the Investment In the short and longer term future Vancheri said the company plans to enhance business functionality to support money movement, wire automation, divorce processing on payout contracts and cost-based tracking improvements. It also plans to continue system upgrades to remain current as well as focus on further reducing cycle time, driving down maintenance costs, and integrating with other products. Helen Pitts is senior product marketing manager for Oracle Insurance focused on life/annuities and enterprise document automation.

    Read the article

  • Autoscaling in a modern world…. Part 2

    - by Steve Loethen
When we last left off, we had a web application spinning away in the cloud, and a local console application watching it and reacting to changes in demand.  Those reactions were specified by a set of rules.  Let’s talk about those rules. Constraints.  The first set of rules this application answered to were the constraints. Here is what they looked like: <constraintRules> <rule name="default" enabled="true" rank="1" description="The default constraint rule"> <actions> <range min="1" max="4" target="AutoscalingApplicationRole"/> </actions> </rule> </constraintRules> Pretty basic.  We have one role, the “AutoscalingApplicationRole”, and we have decided to have it live within a range of 1 to 4.  This rule does not adjust, but instead sets limits on what other rules can do.  It has a rank, so you can specify other sets of constraints, perhaps based on time or date, to allow for deviations from this set.  But for now, let’s keep it simple.  In the real world, you would probably use the minimum to set a lower-end SLA.  A common value might be 2, to prevent the reactive rules from ever taking you down to 1 role.  The maximum is often used to keep a rule from driving the cost up, setting an upper limit to prevent you from waking up one morning and finding a bill for hundreds of instances you didn’t expect.  So, here we have the range we want our application to live inside.  This is good for our investigation and testing.  Next, let’s take a look at the reactive rules.  These rules are what you use to react (hence reactive rules) to changing demands on your application.  The HOL has two simple rules.  One that looks at a queue depth, and one that looks at a performance counter that reports CPU utilization.  The XML in the rules file looks like this: <reactiveRules> <rule name="ScaleUp" rank="10" description="Scale Up the web role" enabled="true"> <when> <any> <greaterOrEqual operand="Length_05_holqueue" than="10"/> <greaterOrEqual operand="CPU_05_holwebrole" than="65"/> </any> </when> <actions> <scale target="AutoscalingApplicationRole" by="1"/> </actions> </rule> <rule name="ScaleDown" rank="10" description="Scale down the web role" enabled="true"> <when> <all> <less operand="Length_05_holqueue" than="5"/> <less operand="CPU_05_holwebrole" than="40"/> </all> </when> <actions> <scale target="AutoscalingApplicationRole" by="-1"/> </actions> </rule> </reactiveRules> <operands> <performanceCounter alias="CPU_05_holwebrole" performanceCounterName="\Processor(_Total)\% Processor Time" source="AutoscalingApplicationRole" timespan="00:05:00" aggregate="Average" /> <queueLength alias="Length_05_holqueue" queue="hol-queue" timespan="00:05:00" aggregate="Average"/> </operands> These rules are currently contained in a file called rules.xml that is in the root of the console application.  The console app starts up, grabs the rules and starts watching the 2 operands.  When it detects a rule has been satisfied, it performs the desired action (here, scale up or down by 1). But I want to host the autoscaler in the cloud.  For my first trick, I will move the rules (and another file called services.xml) to Azure blob storage.  Look for part 3.
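For readers who want to picture the hosting side described above, here is a minimal C# sketch of what the console host might look like. It assumes the Enterprise Library Autoscaling Application Block (WASABi) is referenced and that app.config already points at the rules store (rules.xml) and the service information store (services.xml); the class and namespace names come from that library, but treat the exact wiring as an illustrative assumption rather than the article's own code.

using System;
using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
using Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling;

class Program
{
    static void Main()
    {
        // Resolve the Autoscaler from the Enterprise Library container.
        // It loads the constraint and reactive rules plus the service
        // information (subscription, roles, storage) named in app.config.
        Autoscaler autoscaler =
            EnterpriseLibraryContainer.Current.GetInstance<Autoscaler>();

        autoscaler.Start();   // begin evaluating rules against the operands
        Console.WriteLine("Autoscaler running. Press Enter to stop.");
        Console.ReadLine();
        autoscaler.Stop();    // stop rule evaluation before exiting
    }
}

The same object can just as easily be started from a worker role's OnStart method, which is one reason moving rules.xml and services.xml into blob storage (so any instance can read them) is the natural next step the article points to for part 3.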

    Read the article

  • Identity R2 - Experts Podcast Series

    - by Tanu Sood
    To follow up on the Identity Management R2 launch, a series of podcasts were recorded with subject matter experts from customer organizations, our partners and Oracle’s PM team to discuss key trends, R2 capabilities, implementation best practices and more. Below is a roll-up of the podcast series that is available on Fusion Middleware radio. R2 Podcasts:   ·         Designing the Next-Generation Identity Platform Vadim Lander, Oracle Highlights: Common architecture model, integration, interoperability and the driving factors behind R2 innovation IT Departments are shifting their Identity Management strategy to be able to support mobile, cloud and social applications. Oracle has anticipated this shift and has built a product roadmap to take advantage of this focus. Join Vadim as he discusses the design strategy behind the latest 11gR2 release and talks about how IDM services have to evolve to meet this new challenge.   ·         BETA Customer Perspective on R2 Ravi Meduri, Kaiser Permanente Highlights: R2 scalability and high availability In this podcast Ravi discusses the new features in 11gR2 that he is most interested in, including High Availability options for Access Management, multi-datacenter architecture, and what it was like working with the Oracle product team during the BETA program.   ·         Partner Perspective on R2 Rex Thexton, PricewaterhouseCoopers Highlights: Usability Enhancements for Users and Administrators A lot of new usability features went into the 11gR2 release making this the most business friendly IDM release to date. In this podcast Rex Thexton, Managing Director from PwC, talks about some of the new UI changes for both end users and administrators, and also about the new connector creation framework.   Access Request Updates in R2 Marc Boroditsky, Oracle Highlights: Access request User Interface innovations A lot of changes have been made to the Access Request user interface in the latest version of Oracle Identity Manager 11gR2. A real focus has been put on making the request process more business user friendly, and a lot of new customization capability has been added for the IT administrators. Hear Marc discuss the updated UI, and explain how administrators will be able to customize OIM to meet their company's requirements   ·         Oracle Optimized System for Oracle Unified Directory (OOS4OUD) Nick Kloski, Oracle Highlights: New Optimized System configuration for Unified Directory One of the new features in 11gR2 is the availability of an Optimized System configuration for Oracle Unified Directory. Oracle engineers installed the OUD software onto off the shelf hardware and then created a performance tuned configuration. Join us as we talk to Nick Kloski, Infrastructure Solutions Manager, all about the testing process and the resulting performance metrics.   Privileged Account Management Mark Wilcox, Oracle Highlights: Oracle Privileged Account Manager key capabilities, use cases The new release of Oracle Identity Management 11g R2 includes the capability to manage privileged accounts. Privileged accounts, if compromised, create a risk for fraud in the enterprise and as a result controlling access to privileged accounts is critical. Hear what Mark Wilcox, Principal Product Manager of Oracle Privileged Account Manager has to say about the capabilities of the offering in this podcast.   
·         Browser-based User Interface (UI) Customization Clayton Donley, Oracle Highlights: Benefits of Durable UI Configuration framework Business users need user interfaces that are not only friendly but also easily customizable. However the downside of any customization project is the cost and complexity involved in developing, testing, deploying and managing custom code. In this podcast, we examine how a new capability in Oracle Identity Management around browser based UI customization can reduce costs and complexity of customization while simplifying self service integration with corporate portal strategies.   ·         Simplifying Mobile and Social Sign-On Dan Killmer, Oracle Highlights: Secure mobile sign-on and consumption of social identities with Oracle Access Management The proliferation of mobile devices has spurred a new trend where employees tend to bring their own mobile devices to work and access corporate applications the same way they would access from a desktop or laptop. In this podcast, we examine how Oracle's latest innovation in Identity Management around Mobile and Social Sign On can simplify security and access management challenges posed by the widespread adoption of mobile devices in the enterprise. ·         Enabling Your Business with IDM R2 Scott Bonnell, Oracle Highlights: Self service, mobile access, personalization Gone are the days when Identity Management was just about stopping unauthorized users in their tracks. Identity Management if done right, can also enable your business. Join Scott Bonnell as he discusses how the IDM 11gR2 release enables the enterprise by providing self service, personalization and mobile access to corporate resources.

    Read the article

< Previous Page | 34 35 36 37 38 39 40 41 42 43 44 45  | Next Page >