Search Results

Search found 21589 results on 864 pages for 'primary key'.

Page 565/864 | < Previous Page | 561 562 563 564 565 566 567 568 569 570 571 572  | Next Page >

  • Troubleshoot broken ZFS

    - by BBK
    I have one zpool called tank: a RAID-Z1 of five 1TB SATA HDDs on Ubuntu Server 11.10 Oneiric (kernel 3.0.0-15-server), with ZFS installed from the PPA; I also use zfs-auto-snapshot. When the zfs module is loaded, the file system hangs my computer. Before this started I had created a few new volumes: zfs create -V 10G tank/iscsi1, zfs create -V 10G tank/iscsi2, zfs create -V 10G tank/iscsi3, and shared them over iSCSI via the /dev/tank/iscsiX paths. The computer then began to hang intermittently while tank/iscsiX was in use over iSCSI; I don't know exactly why. I switched off iSCSI and started removing the volumes with zfs destroy tank/iscsi3. Because zfs-auto-snapshot had created snapshots, the command would not destroy the file system without the -r flag, so I ran zfs destroy -r tank/iscsi3. The tank/iscsi3 FS was empty and was destroyed without issue, but tank/iscsi1 and tank/iscsi2 contained a lot of data. I tried zfs destroy -r tank/iscsi2, and after some time my computer hung. After a reboot it didn't boot quickly: the HDDs thrashed noisily for about 15 minutes before the OS finally came up. Everything then seemed fine: tank/iscsi2 was destroyed, the remaining file systems in the tank were accessible, and zpool status showed no corruption. I then issued zfs destroy -r tank/iscsi1 and the situation repeated: after some time the computer hung. This time ZFS did not seem to heal itself. After power-on the machine starts loading scripts and kernel modules, and as soon as ZFS starts working it hangs the computer. I need to recover the other ZFS file systems that live in the same zpool. A few months ago I backed up the OS to a flash drive; booting from that backup and importing the pool has the same result: the OS starts hanging. How can I recover my data from the ZFS tank?
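    One avenue worth trying, sketched below on the assumption that the hang comes from the interrupted destroy being replayed when the pool is imported: boot a rescue system with ZFS support but no automatic import, then import the pool read-only so the pending destroy cannot resume, and copy the data off.

        # from a rescue environment where tank is not yet imported
        zpool import                          # list importable pools without importing anything
        zpool import -o readonly=on -F tank   # read-only; -F rolls back to the last good txg if needed
        zfs list -r tank                      # see what survived, then copy data off (rsync, zfs send)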

    Read the article

  • "Give with Bing" - Help raise money for Sport Relief while searching for whatever you want

    - by Testas
    While Sport Relief drives fundraising by challenging people to do physical activities such as running a mile, we're introducing the 'Bing Search Mile', which gives people the ability to search using Bing and raise money for charity. For every 10 searches made, Bing.com will donate 5p to Sport Relief 2010, enabling you, and your friends and family, to raise money just by searching with Bing until the end of March. With the average mile taking about 10 minutes to run, in the same time you can make up to 150 searches online - that's 75p raised for a good cause per 'search mile'. And while you're at it, why not step it up a gear and aim to complete a 'Search Mile' each day, or even a 'Search Marathon' over the 5-week campaign with your colleagues, friends and family? How to get involved:
    1. Visit GiveWithBing.com and download the Official Sport Relief Bing Counter. Once downloaded, the Sport Relief counter will count all the searches you do on Bing from that point on.
    2. Now that you're registered (and signed in), invite your friends, family, colleagues or classmates to join in the fundraising with you - GiveWithBing.com automatically generates an email explaining how it works for you to send them - the more people who search with you, the more money you raise. People can also register a school.
    3. Run your 'search mile' every day and watch how your searches turn into life-changing cash for charity, with every 10 searches equalling 5p for Sport Relief. You can check your progress by visiting your individual page (more info here).
    This is such a positive initiative, and I challenge everyone in the UK to invite their key contacts to be part of Give with Bing. Chris

    Read the article

  • Ubuntu Lucid startup events

    - by becomingGuru
    On starting up Ubuntu, I seem to have to do the following every time: enter the password to unlock the keyring so that it connects to Wi-Fi; enable the "Extra" visual effects from the Appearance preferences; and start Skype. How can I automate all of these? Bonus points if I can use the chat client bundled with the system for my Skype account. Also, since I dual boot, I initially get the GRUB menu to select Windows or Ubuntu, which waits 10 seconds for me to choose. How do I make it boot straight into Ubuntu unless I explicitly press a key to boot into Windows? Thanks in advance.
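    For the GRUB part, a minimal sketch assuming GRUB 2 (the default bootloader on Lucid); the values shown are illustrative:

        # edit /etc/default/grub
        GRUB_DEFAULT=0   # index of the Ubuntu entry in the boot menu
        GRUB_TIMEOUT=2   # seconds the menu waits for a key press

        # then regenerate /boot/grub/grub.cfg
        sudo update-grub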

    Read the article

  • How can I get Opera speed-dial and password management features in other browsers?

    - by Howard Guo
    I heavily rely on Opera's speed dial and password management features; the lack of these two features is really stopping me from switching to another web browser such as Chrome or Firefox. Opera's password management has two unique characteristics that I rely on heavily: it saves passwords on all pages, apparently ignoring page metadata that asks for passwords not to be saved; and it offers a keyboard shortcut and a button that automatically fill in the username/password and all other fields in a login form, then submit the form (so I'm only one key or click away from logging into a website). How can I get those functions in other browsers? Thank you! Edit: the reason is that I use other web browsers at work, and I wish they could have those functions.

    Read the article

  • What is the purpose of a boot priority sequence or order in BIOS

    - by rbeede
    A BIOS provides options for specifying an order/priority in which to search for boot devices. Is there really much purpose now in being able to specify more than one possible boot device? It seems to me it is only useful when popping in a CD/DVD to install an OS, after which the common scenario is to always boot from the hard drive unless something is broken. I'm curious why the BIOS doesn't simply offer one boot device setting and expect the user to press a key for an alternate boot instead. Is there still a scenario that calls for the BIOS to try multiple devices in a configured order?

    Read the article

  • WebCenter Marketing and Upcoming Events

    - by rituchhibber
    Events:
    - October 30, 2012: ResCare Solves Content Lifecycle Challenges with Oracle WebCenter (Webcast)
    - November 1, 2012: Paper Burying Your HR Processes? Dig Your Way Out With Oracle WebCenter! (Webcast)
    - November 15, 2012: Social Business Thought Leader Webcast: Three Ways to Fix Your Broken Organization, featuring Christian Finn (Webcast)
    Marketing:
    - WebCenter Sites Sales eVite: Embrace the Base: Create an Exceptional Online Customer Experience with Oracle WebCenter Sites. Directs recipients to the Connected Customer Experience Resource Center to see the latest demos, analyst reports, and customer webcasts promoting WebCenter Sites. For more information click here.
    - WebCenter Social Business Thought Leaders Series: Digital Darwinism: How Brands Can Survive the Rapid Evolution of Society and Technology. Brian Solis, Altimeter Group digital analyst and futurist. December 13, 2012, 10am PDT. Registration available soon; find other content from this speaker here.
    - Webcast: WebCenter Sites for Applications: Disconnected Online Customer Experience? Connect It with Oracle WebCenter. November 8, 2012. eVite | Registration Page.
    - WebCenter in Action customer and partner webcast series: started earlier in FY13, a new webcast series featuring WebCenter customer deployments that are executed by a partner. The next webcast in the series will be November 14th: Los Angeles Department of Building & Safety Lowers Customer Service Costs with Oracle WebCenter. Click here to learn more.
    - OnDemand Webcast: ResCare Solves Content Lifecycle Challenges with Oracle WebCenter. Complex documents must be created, assembled, reviewed, and tracked. To avoid fragmented, chaotic information processes, organizations must adopt an integrated set of strategies, standards, best practices, and technologies for managing information. Attend this webcast to learn how Oracle WebCenter has allowed ResCare to solve content lifecycle challenges, reduce compliance and business risks, and increase adoption of the intranet as a primary business communication tool.
    On-Demand Assets (all webcasts unless noted):
    - Avoid Social Media Fatigue - Learn the 9 C's of Customer Engagement, featuring Ray Wang, Principal Analyst and CEO, Constellation Research
    - WebCenter in Action Series: Hitachi Data Systems Improves Global Web Experience with Oracle WebCenter, presented by Hitachi Data Systems and Lingotek
    - Managing Social Relationships for the Enterprise, featuring Jeremiah Owyang, Industry Analyst, Altimeter Group, and Reggie Bradford, Vice President, Oracle
    - Oracle's Vision for the Social-Enabled Enterprise, presented by Mark Hurd, Thomas Kurian and Reggie Bradford
    - WebCenter in Action Series: Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter, presented by Qualcomm and Keste
    - Social Business Thought Leaders Series: 6 Counterintuitive Best Practices for Social Collaboration Adoption, featuring John Brunswick, Oracle
    - Oracle WebCenter Connects Patients and Researchers in Cancer Control Mission, presented by Canadian Partnership Against Cancer and App-Systems
    - Oracle WebCenter: Modernize, Aggregate and Extend Your Portals
    - Top 10 Technology Trends Driving Business Innovation, featuring Andy Mulholland, CTO, Capgemini
    - Ancestry.com Helps Families Uncover History with Oracle WebCenter
    - Organic Business Networks: Doing Business in a Hyper-Connected World, featuring Mike Fauscette, GVP, IDC
    - Social Business and Innovation, featuring John Mancini, President, AIIM
    - Do More with Oracle WebCenter: Expand Beyond Web Experience Management
    - Race Against the Machine, featuring Andrew McAfee, author and principal scientist at MIT
    - Introducing Oracle WebCenter Sites 11gR1: Transforming the Online Experience
    - Mobile is the New Face of Engagement, featuring Ted Schadler, Vice President & Principal Analyst, Forrester Research Inc.
    - Analyst Report: IDC Research: Oracle Debuts New Release of Oracle WebCenter Sites

    Read the article

  • All traffic is passed through OpenVPN although not requested

    - by BFH
    I have a bash script on an Ubuntu box which searches for the fastest OpenVPN server, connects, and binds one program to the tun0 interface. Unfortunately, all traffic is being passed through the VPN. Does anybody know what's going on? The relevant line follows: openvpn --daemon --config $cfile --auth-user-pass ipvanish.pass --status openvpn-status.log There don't seem to be any entries in iptables when I enter sudo iptables --list. The config files look like this:
        client
        dev tun
        proto tcp
        remote nyc-a04.ipvanish.com 443
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        persist-remote-ip
        ca ca.ipvanish.com.crt
        tls-remote nyc-a04.ipvanish.com
        auth-user-pass
        comp-lzo
        verb 3
        auth SHA256
        cipher AES-256-CBC
        keysize 256
        tls-cipher DHE-RSA-AES256-SHA:DHE-DSS-AES256-SHA:AES256-SHA
    There is nothing in there that would direct everything through tun0, so maybe it's a new vagary of Ubuntu? I don't remember this happening in the past.
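    One thing worth checking, as an assumption since the server's pushed options aren't shown here: VPN providers commonly push a redirect-gateway option, which the client accepts and which rewrites the default route even though nothing in the local config asks for it. A minimal sketch to stop accepting pushed routes:

        # add to the client config ($cfile); with this set, any routes you
        # do want must then be added manually with explicit route directives
        route-nopull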

    Read the article

  • SELinux "allow httpd_t httpd_sys_content_t:dir write;"

    - by alexus
    I'm getting the following messages in my /var/log/audit/audit.log:
        type=AVC msg=audit(1402615093.053:68): avc: denied { write } for pid=799 comm="httpd" name="php" dev="xvda1" ino=8667365 scontext=system_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:httpd_sys_content_t:s0 tclass=dir
        type=SYSCALL msg=audit(1402615093.053:68): arch=c000003e syscall=2 success=no exit=-13 a0=7f7a5ca697a8 a1=241 a2=1b6 a3=1 items=0 ppid=662 pid=799 auid=4294967295 uid=48 gid=48 euid=48 suid=48 fsuid=48 egid=48 sgid=48 fsgid=48 tty=(none) ses=4294967295 comm="httpd" exe="/usr/sbin/httpd" subj=system_u:system_r:httpd_t:s0 key=(null)
    Piping that through audit2allow outputs:
        #============= httpd_t ==============
        #!!!! This avc can be allowed using the boolean 'httpd_unified'
        allow httpd_t httpd_sys_content_t:dir write;
    How do I apply allow httpd_t httpd_sys_content_t:dir write; to my current SELinux policy?
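    Two standard routes, sketched below (the module name httpd_local is arbitrary): either flip the boolean that audit2allow itself points at, or build and load a local policy module from the logged denials.

        # option 1: the boolean suggested in the audit2allow output (-P makes it persistent)
        setsebool -P httpd_unified 1

        # option 2: generate and load a local policy module from the AVC records
        grep httpd /var/log/audit/audit.log | audit2allow -M httpd_local
        semodule -i httpd_local.pp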

    Read the article

  • Tips on combining the right Art Assets with a 2D Skeleton and making it flexible

    - by DevilWithin
    I am on my first attempt at building a skeletal animation system for 2D side-scrollers, so I don't really have experience of what may appear in later stages, and I'm asking advice from anyone who's been there and done that! My approach: I built a tree structure whose root node is like the center of mass of the skeleton, allowing global transformations to be applied to the skeleton. Then I set up the hierarchy of bones, so that when a leg moves, the foot moves too (I also have a Joint, which connects two bones, for utility). I load animations into it as simple key-frame lerps, so movement is smooth. I map the animation hierarchy onto the skeleton, or a part of it, to check that the structures match; otherwise the animation doesn't start. I think this is a pretty standard implementation for such a thing, even if I later want to convert it to a rag doll on the fly. Now to my question: imagine a game like Prototype, where there is one skeleton animation for the main character that can animate every mesh in the game rigged the same way. That way the character can transform into anything without any extra code. That is pretty much what I want to do for a side-scroller. In theory it sounds easy, but I want to know whether it will work well - whether different characters will be decently animated when using the same skeleton-animation pair. I can see this working well with a stickman, but what about actual humans? Will the perspective look right, or will I need to dynamically change the sprites attached to the bones? What do you recommend for such a system?

    Read the article

  • Weirdness After Reinstalling the Windows Operating System

    - by Eka Anggraini
    I want to ask for advice. I reinstalled my OS successfully, and switching it off and restarting a few times was fine. A few hours later, when I turned it on, there was suddenly this error: Windows could not start because the following file is missing or corrupt: \WINDOWS\SYSTEM32\CONFIG\SYSTEM. You can attempt to repair this file by starting Windows Setup using the original Setup CD-ROM. Select 'r' at the first screen to start repair. So I set out to repair and reinstall everything again. I booted the Windows CD ("press any key..."), and with a blue background it showed: Setup is loading files (Windows Executive)... Setup is loading files (Hardware Abstraction Layer)... Then I waited half an hour with no change, and repeating the process several times also achieved nothing. Please advise: where is the problem likely to be - the hardware or the Windows CD?
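    If setup can be coaxed into reaching the Recovery Console, the documented fix for this particular error on XP-era systems (a sketch; it assumes Windows is installed on C: and the \windows\repair folder is intact) is to check the disk and restore the SYSTEM registry hive from the repair copy:

        REM from the Recovery Console (press R at the first setup screen)
        chkdsk /r
        copy c:\windows\repair\system c:\windows\system32\config\system

    That said, setup hanging while loading files points more toward failing RAM, a failing drive, or a damaged CD, so testing those first would be worthwhile.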

    Read the article

  • MySQL Cluster 7.3 - Join This Week's Webinar to Learn What's New

    - by Mat Keep
    The first Development Milestone and Early Access releases of MySQL Cluster 7.3 were announced just a few weeks ago. To provide more detail and demonstrate the new features, Andrew Morgan and I will be hosting a live webinar this coming Thursday, 25th October, at 0900 Pacific Time / 16.00 UTC. Even if you can't make the live webinar, it is still worth registering for the event, as you will receive a notification when the replay is available to view on demand at your convenience. In the webinar we will discuss the enhancements being previewed as part of MySQL Cluster 7.3, including:
    - Foreign Key Constraints: yes, we've looked into the future and decided Foreign Keys are it ;-) You can read more about the implementation of Foreign Keys in MySQL Cluster 7.3 here.
    - Node.js NoSQL API: allowing web, mobile and cloud services to query and receive result sets from MySQL Cluster, natively in JavaScript, enables developers to seamlessly couple high-performance, distributed applications with a high-performance, distributed persistence layer delivering 99.999% availability. You can study the Node.js / MySQL Cluster tutorial here.
    - Auto-Installer: this new web-based GUI makes it simple for DevOps teams to quickly configure and provision highly optimized MySQL Cluster deployments on-premise or in the cloud. You can view a YouTube tutorial on the MySQL Cluster Auto-Installer here.
    So we have a lot to cover in our 45-minute session. It will be time well spent if you want to know more about the future direction of MySQL Cluster and how it can help you innovate faster, with greater simplicity. Registration is open.
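    As a flavour of the Foreign Key support, a hypothetical sketch (the table and column names are made up, not taken from the webinar):

        CREATE TABLE towns (
          id INT NOT NULL PRIMARY KEY,
          town VARCHAR(30)
        ) ENGINE=ndbcluster;

        CREATE TABLE people (
          id INT NOT NULL PRIMARY KEY,
          name VARCHAR(30),
          town_id INT,
          FOREIGN KEY (town_id) REFERENCES towns(id) ON DELETE RESTRICT
        ) ENGINE=ndbcluster;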

    Read the article

  • InfiniBand Enabled Diskless PXE Boot

    - by Neeraj Gupta
    When you want to bring up a compute server in your environment and need InfiniBand connectivity, you usually go through various installation steps: an operating system like Linux, followed by a compatible InfiniBand software distribution, associated dependencies and configurations. But what if you just want to run some InfiniBand diagnostics or troubleshooting tools from a test machine? What if something happened to your primary machine and, while recovering in rescue mode, you also need access to your InfiniBand network? Oftentimes we use small, community-supported opensource Linux distributions, but they don't come with the required InfiniBand support and tools. In this weblog I am going to provide instructions on how to add InfiniBand support to a specific Linux image - Parted Magic, a free-to-use opensource Linux distro often used to recover or rescue machines. The distribution itself will not be changed at all. Yes, you heard that right! I have built an InfiniBand add-on package that is passed to the default kernel and initrd to get this all working.
    Pre-requisites: You will need to have a PXE server ready on your ethernet-based network. The compute server you are trying to PXE boot should have a compatible IB HCA and must be connected to an active IB network.
    Required Downloads: Download the Parted Magic small distribution for PXE from the Parted Magic website. Download the InfiniBand PXE add-on package (right-click and download from here). Do not extract the contents of this file; you need to use it as is.
    Prepare PXE Server: Extract the contents of the downloaded pmagic distribution into a temporary directory. Inside the directory structure, you will see a pmagic directory containing two files - bzImage and initrd.img. Copy this directory into your TFTP server's root directory. This is usually /tftpboot, unless you have a different setup. For example:
        cp pmagic_pxe_2012_2_27_x86_64.zip /tmp
        cd /tmp
        unzip pmagic_pxe_2012_2_27_x86_64.zip
        cd pmagic_pxe_2012_2_27_x86_64
        # ls -l
        total 12
        drwxr-xr-x  3 root root 4096 Feb 27 15:48 boot
        drwxr-xr-x  2 root root 4096 Mar 17 22:19 pmagic
        cp -r pmagic /tftpboot
    As I mentioned earlier, we don't change anything in the default pmagic distro; we simply provide the add-on package via the PXE append options. If you are using a menu-based PXE server, add an entry to your menu. For example, /tftpboot/pxelinux.cfg/default can be appended with the following section:
        LABEL Diskless Boot With InfiniBand Support
        MENU LABEL Diskless Boot With InfiniBand Support
        KERNEL pmagic/bzImage
        APPEND initrd=pmagic/initrd.img,pmagic/ib-pxe-addon.cgz edd=off load_ramdisk=1 prompt_ramdisk=0 rw vga=normal loglevel=9 max_loop=256
        TEXT HELP
        * A Linux Image which can be used to PXE Boot w/ IB tools
        ENDTEXT
    Note: keep the line starting with "APPEND" as a single line. If you use host-specific files in pxelinux.cfg, you can add the above entry to that specific file.
    Boot Computer over PXE: Now boot your desired compute machine over PXE. This does not have to be over InfiniBand; just use your standard ethernet interface and network. If using menus, pick the new entry that you created in the previous section.
    Enable IPoIB: After a few minutes you will be booted into the Parted Magic environment. Open a terminal session and see if InfiniBand is enabled, using commands like:
        ifconfig -a
        ibstat
        ibv_devices
        ibv_devinfo
    If you are connected to an InfiniBand network with an active Subnet Manager, your IB interfaces should have come online by now. You can proceed and assign IP addresses to them. This will enable you at the IPoIB layer.
    Example InfiniBand Diagnostic Tools: I have added several InfiniBand diagnostic tools to this add-on. You can use any from the following list: ibstat, ibstatus, ibv_devinfo, ibv_devices, perfquery, smpquery, ibnetdiscover, iblinkinfo.pl, ibhosts, ibswitches, ibnodes.
    Wrap Up: This concludes this weblog. Here we saw how to bring up a computer with IPoIB and InfiniBand diagnostic tools without installing anything on it. It's almost like running diskless!
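    The "assign IP addresses" step above might look like this (a sketch - the interface name and addresses are illustrative, not from the original post):

        ifconfig ib0 192.168.100.10 netmask 255.255.255.0 up
        ibstat                 # confirm the port state shows Active
        ping 192.168.100.1     # reach another IPoIB host on the fabric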

    Read the article

  • Future of BizTalk

    - by Vamsi Krishna
    The future of BizTalk: the last TechEd conference was a very important one from a BizTalk perspective. Microsoft will continue innovating BizTalk and releasing BizTalk versions "for years to come", so all the investments that clients have made so far will continue giving returns. There are three flavors of BizTalk: BizTalk on-premise, BizTalk as IaaS, and BizTalk as PaaS (Azure Service Bus).
    Windows Azure IaaS and How It Works (June 12, 2012; speaker: Corey Sanders). This session covers the significant investments that Microsoft is making in its Windows Azure Infrastructure-as-a-Service (IaaS) solution and how it works: http://channel9.msdn.com/Events/TechEd/NorthAmerica/2012/AZR201
    TechEd provided two sessions around BizTalk futures:
    - AZR207: Application Integration Futures - The Road Map and What's Next on Windows Azure: http://channel9.msdn.com/Events/TechEd/NorthAmerica/2012/AZR207
    - AZR211: Building Integration Solutions Using Microsoft BizTalk On-Premises and on Windows Azure: http://channel9.msdn.com/Events/TechEd/NorthAmerica/2012/AZR211
    Here are some highlights from the two sessions at TechEd. Bala Sriram, Director of Development for BizTalk, provided the introduction at the road map session and covered some key points around BizTalk:
    - More than 12,000 customers
    - 79% of BizTalk users have already adopted BizTalk Server 2010
    - BizTalk Server will be released for "years to come" after this release, i.e. there will be more releases of BizTalk after BizTalk 2010 R2
    - BizTalk 2010 R2 will be releasing 6 months after Windows 8
    - New Azure (cloud-based) BizTalk scenarios will be available in IaaS and PaaS
    - BizTalk Server on-premises, IaaS, and PaaS are complementary and designed to work together
    - BizTalk investments will be taken forward
    The second session was mainly around drilling in and demonstrating the upcoming capabilities in BizTalk Server on-premise, IaaS, and PaaS:
    - BizTalk IaaS: users will be able to bring their own Azure IaaS template, or choose from one, to quickly provision BizTalk Server virtual machines (VHDs)
    - BizTalk Server 2010 R2: a native adapter to connect to Azure Service Bus Queues, Topics and Relay; a native send port adapter for REST (demonstrated by connecting to Salesforce to get and update Salesforce information); the ESB Toolkit will be incorporated as part of the product and setup
    - BizTalk PaaS: HTTP, FTP, Service Bus, and LOB application connectivity; XML and flat file processing; message properties; transformation, archiving, tracking; content-based routing

    Read the article

  • #altnetseattle – CQRS

    - by GeekAgilistMercenary
    This is a topic I know nothing about, and thus these may be supremely disparate notes. Have fun translating. :) ...and it's cool that the session is well past capacity. CQRS separates things from the UI, and everything that needs to be populated is done through commands. The domain and reports have separate storage. Events populate these stores of data, such as a "sold" event. What it looks like is that the domain handles requests by event, which would be a product order or something similar. Event sourcing is a key element of the logic. DDD (Domain-Driven Design) is part of the core basis for this methodology/structure. The architecture/methodology/structure is perfect for blade-style plug-in hardware as needed. Good blog entries: DDDD: Why I love CQRS, and another, Command and Query Responsibility Segregation (CQRS), more, CQRS à la Greg Young, a bit by Udi Dahan, and there are more. Google, Bing, etc. are there for a reason. It appears the core underpinning architectural element of this is the breakout of uniquely identifiable actions, or I suppose better described as events. Those events then act upon specific pipelines such as read requests, write requests, etc. I will be doing more research on this topic and will have something written up shortly. At this time it seems like nothing new, just a large architectural breakout of the identifiable needs of the entire enterprise system. The reporting is in one segment of the architecture, the domain is in another, hydration is broken out to interfaces, and events are executed to incur events on the reports or, as the description suggests, events on the domain. Anyway, more to come on this later.

    Read the article

  • Announcement: DTrace for Oracle Linux General Availability

    - by Zeynep Koch
    Today we are announcing the general availability of DTrace for Oracle Linux. It is available to download from ULN for Oracle Linux support customers. DTrace is a comprehensive dynamic tracing framework that was initially developed for the Oracle Solaris operating system and is now available to Oracle Linux customers. DTrace is designed to give operational insights that allow users to tune and troubleshoot the operating system. It provides Oracle Linux developers with a tool to analyze performance and increase observability into the systems they own, to see how they work. DTrace enables higher-quality application development, reduced downtime, lower cost, and greater utilization of existing resources. Key benefits and features of DTrace on Oracle Linux include:
    - Designed to help find performance bottlenecks
    - Dynamically enables the kernel with a number of probe points, improving the ability to service software
    - Enables maximum resource utilization and application performance
    - Fast and easy to use, even on complex systems with multiple layers of software
    If you already have Oracle Linux support, you can download DTrace from the ULN channel. We have a dedicated forum for DTrace on Oracle Linux, to discuss your experience and questions.
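    As a taste of what this looks like in practice - a generic DTrace one-liner, assuming the syscall provider is enabled in the release you are running - counting read(2) calls per process:

        dtrace -n 'syscall::read:entry { @[execname] = count(); }'
        # let it run, then press Ctrl+C to print the per-process counts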

    Read the article

  • Integrating Social, Marketing, and Loyalty to Deliver Great Customer Experiences

    - by Charles Knapp
    Eighty-nine percent of consumers quit a brand after one bad experience. With the high cost of acquiring new customers, what can brand leaders do? At the Loyalty World Conference this week in London, global business leaders such as the co-founder of Ben & Jerry's shared the latest thinking on how to retain customers and boost advocacy. Melissa Boxer and Sundar Swaminathan of Oracle shared that by taking an outside-in approach, you can deliver a differentiated, loyalty-building experience throughout the customer lifecycle, from researching and selecting through to using and recommending. To transform customer experiences, you need to integrate your brand's social, marketing, and loyalty functions, moving them out of their commonplace silos. Three key strategies:
    - Know more and understand: unify and capture insights across touch points to better understand who to serve, how to serve, and when to serve.
    - Connect and engage across social and traditional channels, empowering a relationship ecosystem between social communities, customers, and employees.
    - Make the personalized experience easy and rewarding.
    Visit us on twitter.com/oraclecrm to learn more useful highlights from the #lwconf conference.

    Read the article

  • Right-Time Retail Part 3

    - by David Dorf
    This is part three of the three-part series. Read Part 1 and Part 2 first. Right-Time Marketing: Real-time isn't just about executing faster; it extends to interactions with customers as well. As an industry, we've spent many years analyzing all the data that's been collected. Yes, that data has been invaluable in helping us make better decisions like where to open new stores, how to assort those stores, and how to price our products. But the recent advances in technology are now making it possible to analyze and deliver that data very quickly… fast enough to impact a potential sale in near real-time. Let me give you two examples. Salesmen in car dealerships get pretty good at sizing people up. When a potential customer walks in the door, it doesn't take long for the salesman to figure out the revenue at stake. Is this person a real buyer, or just looking for a fun test drive? Will this person buy today or three months from now? Will this person opt for the expensive packages, or go bare bones? While the salesman certainly asks some leading questions, much of the information is discerned through body language. But body language doesn't translate very well over the web. Eloqua, which was acquired by Oracle earlier this year, reads internet body language. By tracking the behavior of the people visiting your web site, Eloqua categorizes visitors based on their propensity to buy. While Eloqua's roots have been in B2B, we've been looking at leveraging the technology with ATG to target B2C. Knowing what sites were previously visited, how often the customer has been to your site recently, and how long they've spent searching can help understand where the customer is in their purchase journey. And knowing that bit of information may be enough to help close the deal with a real-time offer, follow-up email, or online customer service pop-up. This isn't so different from the days gone by when the clerk behind the counter of the corner store noticed you were lingering in a particular aisle, so he walked over to help you compare two products and close the sale. You appreciated the personalized service, and he knew the value of the long-term relationship. Move that same concept into the digital world and you have Oracle's CX Suite, a cloud-based offering of end-to-end customer experience tools, assembled primarily from acquisitions. Those tools are Oracle Marketing (Eloqua), Oracle Commerce (ATG, Endeca), Oracle Sales (Oracle CRM On Demand), Oracle Service (RightNow), Oracle Social (Collective Intellect, Vitrue, Involver), and Oracle Content (Fatwire). We are providing the glue that binds the CIO and CMO together to unleash synergies that drive the top-line higher, and by virtue of the cloud approach, keep costs at bay.
My second example of real-time marketing takes place in the store but leverages the concepts of Web marketing. In 1962 the decline of personalized service in retail began. Anyone know the significance of that year? That's when Target, K-Mart, and Walmart each opened their first stores, and over the succeeding years the industry chose scale over personal service. No longer were you known as "Jane with the snotty kid so make sure we check her out fast," but you suddenly became "time-starved female age 20-30 with kids." I'm not saying that was a bad thing – it was the right thing for our industry at the time, and it enabled a huge amount of growth, cheaper prices, and more variety of products. But scale alone is no longer good enough. Today's sophisticated consumer demands scale, experience, and personal attention. To some extent we've delivered that on websites via the magic of cookies, your willingness to log in, and sophisticated data analytics. What store manager wouldn't love a report detailing all the visitors to his store, where they came from, and which products they examined? People trackers are getting more sophisticated, incorporating infrared, video analytics, and even face recognition. (Next time you walk in front of a mannequin, don't be surprised if it's looking back.) But the ultimate marketing conduit is the mobile phone. Since each mobile phone emits a unique number on WiFi networks, it becomes the cookie of the physical world. Assuming Congress keeps privacy safeguards reasonable, we'll have a win-win situation for both retailers and consumers. Retailers get to know more about the consumer's purchase journey, and consumers get higher levels of service with the retailer. When I call my bank, a couple of things happen before the call is connected. A reverse look-up on my phone number identifies me so my accounts can be retrieved from Siebel CRM. Then the system anticipates why I'm calling based on recent transactions. In this example, it sees that I was just charged a foreign currency fee, so it assumes that's the reason I'm calling. It puts all the relevant information on the customer service rep's screen as it connects the call. When I complain about the fee, the rep immediately sees I'm a great customer and I travel lots, so she suggests switching me to their traveler's card that doesn't have foreign transaction fees. That technology is powered by a product called Oracle Real-Time Decisions, a rules engine built to execute very quickly, basically in the time it takes the phone to ring once. So let's combine the power of that product with our new-found mobile cookie and provide contextual customer interactions in real-time. Our first opportunity comes when a customer crosses a pre-defined geo-fence, typically a boundary around the store. Context is the key to our interaction: that's the customer (known or anonymous), the time of day and day of week, and location. Thomas near the downtown store on a Wednesday at noon means he's heading to lunch. If he were near the mall location on a Saturday morning, that's a completely different context. But on his way to lunch, we'll let Thomas know that we've got a new shipment of ASICS running shoes on display with a simple text message. We used the context to look up Thomas' past purchases and understood he was an avid runner. We used the fact that this was lunchtime to select the type of message, in this case an informational message instead of an offer. Thomas enters the store, phone in hand, and walks to the shoe department.
He scans one of the new ASICS shoes using the convenient QR Codes we provided on the shelf-tags, but then he starts scanning low-end Nikes. Each scan is another opportunity to both learn from Thomas and potentially interact via another message. Since he historically buys low-end Nikes and keeps scanning them, he’s likely falling back into his old ways. Our marketing rules are currently set to move loyal customer to higher margin products. We could have set the dials to increase visit frequency, move overstocked items, increase basket size, or many other settings, but today we are trying to move Thomas to higher-margin products. We send Thomas another text message, this time it’s a personalized offer for 10% off ASICS good for 24 hours. Offering him a discount on Nikes would be throwing margin away since he buys those anyway. We are using our marketing dollars to change behavior that increases the long-term value of Thomas. He decides to buy the ASICS and scans the discount code on his phone at checkout. Checkout is yet another opportunity to interact with Thomas, so the transaction is sent back to Oracle RTD for evaluation. Since Thomas didn’t buy anything with the shoes, we’ll print a bounce-back coupon on the receipt offering 30% off ASICS socks if he returns within seven days. We have successfully started moving Thomas from low-margin to high-margin products. In both of these marketing scenarios, we are able to leverage data in near real-time to decide how best to interact with the customer and lead to an increase in the lifetime value of the customer. The key here is acting at the moment the customer shows interest using the context of the situation. We aren’t pushing random products at haphazard times. We are tailoring the marketing to be very specific to this customer, and it’s the technology that allows this to happen in near real-time. Conclusion As we enable more right-time integrations and interactions, retailers will begin to offer increased service to their customers. Localized and personalized service at scale will drive loyalty and lead to meaningful revenue growth for the retailers that execute well. Our industry needs to support Commerce Anywhere…and commerce anytime as well.

    Read the article

  • Oracle Policy Automation at OpenWorld 2012

    - by jeffrey.waterman
    Oracle Policy Automation (OPA) at OpenWorld 2012. Oracle Policy Automation, the breakthrough policy automation platform, enables organizations to deliver: consistent policy-based decision making throughout the organization across all channels; agile response to policy changes and analysis; and transparency and auditability. This year there will be 8 sessions - a combination of customer panels and product strategy sessions - plus a standalone OPA demo pod (Moscone Center West, W044).
    Key highlights: hear Davin Fifield discuss the product roadmap for OPA (including OPA + RightNow); he will be joined by Sean Haynes from Stewart Title, who will share the success they are having with OPA. The OPA public sector customer panel this year consists of some of OPA's most successful and largest customers; speakers include the Department for Work & Pensions (UK), Toll - Department of Defence (AU), and the Municipality of Sao Paulo (Brazil).
    Schedule highlights:
    Monday October 1, 2012
    - CON9655, 12:15 pm - 1:15 pm PST: Oracle Policy Automation Roadmap: Supercharging the Customer Experience. Davin Fifield, VP OPA Development, Oracle; Sean Haynes, VP, Stewart Title. Westin San Francisco - Metropolitan I.
    - CON9700, 12:15 pm - 1:15 pm PST: Siebel CRM Overview, Strategy, and Roadmap. George Jacob, Group Vice President, CRM Applications / XML, Oracle; Uma Welingkar, Director, Product Management, Oracle. Moscone West - 2009.
    Wednesday October 3, 2012
    - CON8840, 5:00 pm - 6:00 pm PST: Achieving Agility Through Closed-Loop Policy Automation. Customer panel; facilitator: Surend Dayal, Oracle. Panelists: Dept. for Work & Pensions (UK) - Haydn Leary; Municipality of Sao Paulo (Brazil) - Luiz Cesar Michielin Kiel; Toll (AU) - Nigel Maloney. Westin San Francisco - Franciscan I.
    - CON8952, 5:00 pm - 6:00 pm PST: BPM: An Extension Strategy for Enterprise Applications. Harish Gaur, Oracle; Srikant Subramaniam, Oracle. Moscone West - 3003.
    Thursday October 4, 2012
    - CON11515, 2:15 pm - 3:15 pm PST: Oracle Policy Automation + RightNow: Agile Self-Service and Agent Experiences. Davin Fifield, VP OPA Development, Oracle. Westin San Francisco - City.

    Read the article

  • Thinktecture.IdentityServer Beta 1

    - by Your DisplayName here!
    I just uploaded beta 1 to CodePlex. Please test this version and give me feedback. Some quick notes on setup:
    - Watch the intro screencast on the CodePlex site.
    - Use the setup tool to set the signing and SSL certificates. You can now also set the ACLs on the private key for your worker pool account.
    - IIS is required, and SSL for the IIS site the STS runs in is required.
    - Users of the STS must be in the 'IdentityServerUsers' role; admins of the STS must be in the 'IdentityServerAdministrators' role.
    What's new? Mainly smaller bits and pieces and some refactoring. The biggest under-the-cover change is a new authorization model for the STS itself. If, e.g., you don't like the new roles I introduced, you can easily change the behavior in the claims authorization manager in the STS web site project. What's missing? The big one is Azure support. Not that I ran into unforeseeable problems here; I just wanted to wait until the on-premise version was more stabilized. Now with B1 I can start adding Azure support back.

    Read the article

  • Issue 15: SVP Focus

    - by rituchhibber
     SVP FOCUS: Chris Baker, SVP, Oracle Worldwide ISV-OEM-Java Sales. Chris Baker is the Global Head of ISV/OEM Sales, responsible for working with ISV/OEM partners to maximise Oracle's business through those partners, whilst maximising those partners' business to their end users. Chris works with partners, customers, innovators, investors and employees to develop innovative business solutions using Oracle products, services and skills. "By taking part in marketing activities, our partners accelerate their sales cycles."
    Firstly, could you please explain Oracle's current strategy for ISV partners, globally and in EMEA? Oracle customers use independent software vendor (ISV) applications to run their businesses. They use them to generate revenue and to fulfil obligations to their own customers. Our strategy is very straight-forward. We want all of our ISV partners and OEMs to concentrate on the things that they do best: building applications to meet the unique industry and functional requirements of their customer. We want to ensure that we deliver a best-in-class application platform so ISVs are free to concentrate their effort on their application functionality and user experience. We invest over four billion dollars in research and development every year, and we want our ISVs to benefit from all of that investment in operating systems, virtualisation, databases, middleware, engineered systems, and other hardware. By doing this, we help them to reduce their costs, gain more consistency and agility for quicker implementations, and also rapidly differentiate themselves from other application vendors. It's all about simplification because we believe that around 25 to 30 percent of the development costs incurred by many ISVs are caused by customising infrastructure and have nothing to do with their applications. Our strategy is to enable our ISV partners to standardise their application platform using engineered architecture, so they can write once to the Oracle stack and deploy seamlessly in the cloud, on-premise, or in hybrid deployments. It's really important that architecture is the same in order to keep cost and time overheads at a minimum, so we provide standardisation and an environment that enables our ISVs to concentrate on the core business that makes them the most money and brings them success. How do you believe this strategy is helping the ISVs to work hand-in-hand with Oracle to ensure that end customers get the industry-leading solutions that they need? We work with our ISVs not just to help them be successful, but also to help them market themselves. We have something called the 'Oracle Exastack Ready Program', which enables ISVs to publicise themselves as 'Ready' to run the core software platforms that run on Oracle's engineered systems including Exadata and Exalogic. So, for example, they can become 'Database Ready' which means that they use the latest version of Oracle Database and therefore can run their application without modification on Exadata or the Oracle Database Appliance. Alternatively, they can become WebLogic Ready, Oracle Linux Ready and Oracle Solaris Ready which means they run on the latest release and therefore can run their application, with no new porting work, on Oracle Exalogic.
Those 'Ready' logos are important in helping ISVs advertise to their customers that they are using the latest technologies which have been fully tested. We now also have Exadata Ready and Exalogic Ready programmes which allow ISVs to promote the certification of their applications on these platforms. This highlights these partners to Oracle customers as having solutions that run fluently on the Oracle Exadata Database Machine, the Oracle Exalogic Elastic Cloud or one of our other engineered systems. This makes it easy for customers to identify solutions and provides ISVs with an avenue to connect with Oracle customers who are rapidly adopting engineered systems. We have also taken this programme to the next level in the shape of 'Oracle Exastack Optimized' for partners whose applications run best on the Oracle stack and have invested the time to fully optimise application performance. We ensure that Exastack Optimized partner status is promoted and supported by press releases, and we help our ISVs go to market and differentiate themselves through the use of our technology and the standardisation it delivers. To date we have had several hundred organisations successfully work through our Exastack Optimized programme. How does Oracle's strategy of offering pre-integrated open platform software and hardware allow ISVs to bring their products to market more quickly? One of the problems for many ISVs is that they have to think very carefully about the technology on which their solutions will be deployed, particularly in the cloud or hosted environments. They have to think hard about how they secure these environments, whether the concern is, for example, middleware, identity management, or securing personal data. If they don't use the technology that we build-in to our products to help them to fulfil these roles, they then have to build it themselves. This takes time, requires testing, and must be maintained. By taking advantage of our technology, partners will now know that they have a standard platform. They will know that they can confidently talk about implementation being the same every time they do it. Very large ISV applications could once take a year or two to be implemented at an on-premise environment. But it wasn't just the configuration of the application that took the time, it was actually the infrastructure - the different hardware configurations, operating systems and configurations of databases and middleware. Now we strongly believe that it's all about standardisation and repeatability. It's about making sure that our partners can do it once and are then able to roll it out many different times using standard componentry. What actions would you recommend for existing ISV partners that are looking to do more business with Oracle and its customer base, not only to maximise benefits, but also to maximise partner relationships? My team, around the world and in the EMEA region, is available and ready to talk to any of our ISVs and to explore the possibilities together. We run programmes like 'Excite' and 'Insight' to help us to understand how we can help ISVs with architecture and widen their environments. But we also want to work with, and look at, new opportunities - for example, the Machine-to-Machine (M2M) market or 'The Internet of Things'. Over the next few years, many millions, indeed billions of devices will be collecting massive amounts of data and communicating it back to the central systems where ISVs will be running their applications. 
The only way that our partners will be able to provide a single vendor 'end-to-end' solution is to use Oracle integrated systems at the back end and Java on the 'smart' devices collecting the data—a complete solution from device to data centre. So there are huge opportunities to work closely with our ISVs, using Oracle's complete M2M platform, to provide the infrastructure that enables them to extract maximum value from the data collected. If any partners don't know where to start or who to contact, then they can contact me directly at [email protected] or indeed any of our teams across the EMEA region. We want to work with ISVs to help them to be as successful as they possibly can through simplification and speed to market, and we also want all of the top ISVs in the world based on Oracle. What opportunities are immediately opened to new ISV partners joining the OPN? As you know OPN is very, very important. New members will discover a huge amount of content that instantly becomes accessible to them. They can access a wealth of no-cost training and enablement materials to build their expertise in Oracle technology. They can download Oracle software and use it for development projects. They can help themselves become more competent by becoming part of a true community and uncovering new opportunities by working with Oracle and their peers in the Oracle Partner Network. As well as publishing massive amounts of information on OPN, we also hold our global Oracle OpenWorld event, at which partners play a huge role. This takes place at the end of September and the beginning of October in San Francisco. Attending ISV partners have an unrivalled opportunity to contribute to elements such as the OpenWorld / OPN Exchange, at which they can talk to other partners and really begin thinking about how they can move their businesses on and play key roles in a very large ecosystem which revolves around technology and standardisation. Finally, are there any other messages that you would like to share with the Oracle ISV community? The crucial message that I always like to reinforce is architecture, architecture and architecture! The key opportunities that ISVs have today revolve around standardising their architectures so that they can confidently think: "I will I be able to do exactly the same thing whenever a customer is looking to deploy on-premise, hosted or in the cloud". The right architecture is critical to being competitive and to really start changing the game. We want to help our ISV partners to do just that; to establish standard architecture and to seize the opportunities it opens up for them. New market opportunities like M2M are enormous - just look at how many devices are all around you right now. We can help our partners to interface with these devices more effectively while thinking about their entire ecosystem, rather than just the piece that they have traditionally focused upon. With standardised architecture, we can help people dramatically improve their speed, reach, agility and delivery of enhanced customer satisfaction and value all the way from the Java side to their centralised systems. All Oracle ISV partners must take advantage of these opportunities, which is why Oracle will continue to invest in and support them. Oracle OpenWorld 2010 Whether you attended Oracle OpenWorld 2009 or not, don't forget to save the date now for Oracle OpenWorld 2010. The event will be held a little earlier next year, from 19th-23rd September, so please don't miss out. 
With thousands of sessions and hundreds of exhibits and demos already lined up, there's no better place to learn how to optimise your existing systems, get an inside line on upcoming technology breakthroughs, and meet with your partner peers, Oracle strategists and even the developers responsible for the products and services that help you get better results for your end customers. Register Now for Oracle OpenWorld 2010! Perhaps you are interested in learning more about Oracle OpenWorld 2010, but don't wish to register at this time? Great! Please just enter your contact information here and we will contact you at a later date. How to Exhibit at Oracle OpenWorld 2010 | Sponsorship Opportunities at Oracle OpenWorld 2010 | Advertising Opportunities at Oracle OpenWorld 2010

    Read the article

  • How to make HOME or END keys work in mc running on OS X (ssh)

    - by Sorin Sbarnea
    I installed MacPorts on OS X 10.5 and found out that when I connect to the computer using SSH and use mc (Midnight Commander), the HOME and END keys do not work. I should mention that I'm using PuTTY, and I am able to use the keyboard perfectly well on Linux machines like Fedora, Ubuntu, etc. Here is my PuTTY keyboard configuration (a configuration I have found to be optimal over time):
        Backspace key: 127
        Home/End keys: Standard
        Function keys: Xterm R6
        Cursor keys: Normal
        Numpad: Normal
        Terminal type string: xterm-color
    I'm looking for a command-line solution/script that makes the necessary changes on the server side; that would make it much easier to write a preparation script for configuring a new OS.
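    A starting point for scripting a fix (a sketch, not a guaranteed solution: it assumes the mismatch is between the escape sequences PuTTY sends and the terminfo entry named by TERM): first see what the keys actually send, then pick a TERM value whose terminfo matches.

        # press Home and End, then Ctrl+C; ^[[1~ and ^[[4~ are what an
        # xterm-style terminfo entry typically expects for Home/End
        cat -v

        # if the sequences don't match the current TERM, try another entry
        # in the remote shell profile, e.g.:
        export TERM=xterm-color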

    Read the article

  • Why does httpd handle requests for wrong hostnames in SSL mode?

    - by Manuel
    I have an SSL-enabled virtual host for my sites at example.com:10443:
        Listen 10443
        <VirtualHost _default_:10443>
            ServerName example.com:10443
            ServerAdmin [email protected]
            ErrorLog "/var/log/httpd/error_log"
            TransferLog "/var/log/httpd/access_log"
            SSLEngine on
            SSLProtocol all -SSLv2
            SSLCipherSuite HIGH:MEDIUM:!aNULL:!MD5
            SSLCertificateFile "/etc/ssl/private/example.com.crt"
            SSLCertificateKeyFile "/etc/ssl/private/example.com.key"
            SSLCertificateChainFile "/etc/ssl/private/sub.class1.server.ca.pem"
            SSLCACertificateFile "/etc/ssl/private/StartCom.pem"
        </VirtualHost>
    Browsing to https://example.com:10443/ works as expected. However, browsing to https://subdomain.example.com:10443/ (with DNS set) also shows me the same pages (after an SSL certificate warning). I would have expected the directive ServerName example.com:10443 to reject all connection attempts to other server names. How can I tell the virtual host not to serve requests for URLs other than the top-level one?
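    This behaviour is by design: for a given address and port, Apache routes requests whose Host (or SNI) name matches no vhost to the first/default vhost for that port. One way out, sketched below as an assumption about the intended setup rather than a drop-in fix, is to add a catch-all default vhost ahead of the real one and refuse to serve from it:

        # loaded before the real vhost, so unknown hostnames land here
        <VirtualHost _default_:10443>
            ServerName catchall.invalid
            SSLEngine on
            SSLCertificateFile "/etc/ssl/private/example.com.crt"
            SSLCertificateKeyFile "/etc/ssl/private/example.com.key"
            <Location />
                # Apache 2.4 syntax; on 2.2 use "Order deny,allow" + "Deny from all"
                Require all denied
            </Location>
        </VirtualHost>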

    Read the article

  • Password Management for Oracle WebLogic customers

    - by Anthony Shorten
    One of the most common enhancement requests that crosses my desk is that customers wish to allow end users to change their passwords from within our products. Typically, password management is not in the realm of individual applications but is an infrastructure requirement, so we don't usually add this to our roadmaps by default. The issue is that with the vast range of security stores that can be used with our product line, across the web application servers we support, it is almost impossible to come up with a generic enough API to work across all of them. If you have a specific security store on a specific web application server platform, then there are simpler solutions. There are a number of ways of implementing this without product-specific functionality:
    - Oracle sells Identity Management software that offers common APIs to manage passwords. You can purchase those products and link to the password change dialog in those products using Navigation Keys.
    - If you are a customer using Oracle WebLogic, there are sample JSPs that can be linked in to provide this functionality, available on Oracle TechNet (registration required) under Code Samples (project S20). These can be added as a Navigation Key to complete the functionality, allowing end users to manage their own passwords. Obviously these are all samples and should be treated as customizations when you implement them.
    If you wish to understand Navigation Keys, see the Oracle Utilities Application Framework Integration Guidelines (Doc Id: 789060.1), available from My Oracle Support.

    Read the article

  • Connect to LocalDB using SQL Server Management Studio

    - by Magnus Karlsson
    I was trying to find my LocalDB database under localhost etc. in SQL Server Management Studio, but had no luck. The following led me to just connect to it - kind of obvious really when you look at your connection string, but... it's Sunday morning or something. From: http://blogs.msdn.com/b/sqlexpress/archive/2011/07/12/introducing-localdb-a-better-sql-express.aspx
    High-Level Overview: After the lengthy introduction it's time to take a look at LocalDB from the technical side. At a very high level, LocalDB has the following key properties:
    - LocalDB uses the same sqlservr.exe as the regular SQL Express and other editions of SQL Server. The application uses the same client-side providers (ADO.NET, ODBC, PDO and others) to connect to it, and operates on data using the same T-SQL language as provided by SQL Express.
    - LocalDB is installed once on a machine (per major SQL Server version). Multiple applications can start multiple LocalDB processes, but they are all started from the same sqlservr.exe executable file in the same disk location.
    - LocalDB doesn't create any database services; LocalDB processes are started and stopped automatically when needed. The application just connects to "Data Source=(localdb)\v11.0" and the LocalDB process is started as a child process of the application. A few minutes after the last connection to this process is closed, the process shuts down.
    - LocalDB connections support the AttachDbFileName property, which allows developers to specify a database file location. LocalDB will attach the specified database file and the connection will be made to it.
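    So in Management Studio, the server name to connect to is simply (localdb)\v11.0. The same instance can also be inspected from a command prompt (a sketch; v11.0 is the default instance name for SQL Server 2012):

        SqlLocalDB info v11.0
        sqlcmd -S (localdb)\v11.0 -Q "SELECT name FROM sys.databases"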

    Read the article

  • Server Memory with Magento

    - by Mohamed Elgharabawy
    I have a cloud server with the following specifications: 2 vCPUs, 4G RAM, 160GB disk space, 400Mb/s network; system image: Ubuntu 12.04 LTS. I am only running Magento CE 1.7.0.2 on this server, nothing else. Usually the server has a loading time of 4-5 seconds. Recently this has risen to over 30 seconds, and sometimes the server just goes away and I get HTTP error reports to my email stating that HTTP requests took more than 20000ms. Running the top command and sorting the processes returns the following:
        top - 15:29:07 up 3:40, 1 user, load average: 28.59, 25.95, 22.91
        Tasks: 112 total, 30 running, 82 sleeping, 0 stopped, 0 zombie
        Cpu(s): 90.2%us, 9.3%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.3%si, 0.2%st
        PID   USER     PR NI VIRT  RES  SHR  S %CPU %MEM TIME+   COMMAND
        31901 www-data 20 0  360m  71m 5840 R 7    1.8  1:39.51 apache2
        32084 www-data 20 0  362m  72m 5548 R 7    1.8  1:31.56 apache2
        32089 www-data 20 0  348m  59m 5660 R 7    1.5  1:41.74 apache2
        32295 www-data 20 0  343m  54m 5532 R 7    1.4  2:00.78 apache2
        32303 www-data 20 0  354m  65m 5260 R 7    1.6  1:38.76 apache2
        32304 www-data 20 0  346m  56m 5544 R 7    1.4  1:41.26 apache2
        32305 www-data 20 0  348m  59m 5640 R 7    1.5  1:50.11 apache2
        32291 www-data 20 0  358m  69m 5256 R 6    1.7  1:44.26 apache2
        32517 www-data 20 0  345m  56m 5532 R 6    1.4  1:45.56 apache2
        30473 www-data 20 0  355m  66m 5680 R 6    1.7  2:00.05 apache2
        32093 www-data 20 0  352m  63m 5848 R 6    1.6  1:53.23 apache2
        32302 www-data 20 0  345m  56m 5512 R 6    1.4  1:55.87 apache2
        32433 www-data 20 0  346m  57m 5500 S 6    1.4  1:31.58 apache2
        32638 www-data 20 0  354m  65m 5508 R 6    1.6  1:36.59 apache2
        32230 www-data 20 0  347m  57m 5524 R 6    1.4  1:33.96 apache2
        32231 www-data 20 0  355m  66m 5512 R 6    1.7  1:37.47 apache2
        32233 www-data 20 0  354m  64m 6032 R 6    1.6  1:59.74 apache2
        32300 www-data 20 0  355m  66m 5672 R 6    1.7  1:43.76 apache2
        32510 www-data 20 0  347m  58m 5512 R 6    1.5  1:42.54 apache2
        32521 www-data 20 0  348m  59m 5508 R 6    1.5  1:47.99 apache2
        32639 www-data 20 0  344m  55m 5512 R 6    1.4  1:34.25 apache2
        32083 www-data 20 0  345m  56m 5696 R 5    1.4  1:59.42 apache2
        32085 www-data 20 0  347m  58m 5692 R 5    1.5  1:42.29 apache2
        32293 www-data 20 0  353m  64m 5676 R 5    1.6  1:52.73 apache2
        32301 www-data 20 0  348m  59m 5564 R 5    1.5  1:49.63 apache2
        32528 www-data 20 0  351m  62m 5520 R 5    1.6  1:36.11 apache2
        31523 mysql    20 0 3460m 576m 8288 S 5   14.4  2:06.91 mysqld
        32002 www-data 20 0  345m  55m 5512 R 5    1.4  2:01.88 apache2
        32080 www-data 20 0  357m  68m 5512 S 5    1.7  1:31.30 apache2
        32163 www-data 20 0  347m  58m 5512 S 5    1.5  1:58.68 apache2
        32509 www-data 20 0  345m  56m 5504 R 5    1.4  1:49.54 apache2
        32306 www-data 20 0  358m  68m 5504 S 4    1.7  1:53.29 apache2
        32165 www-data 20 0  344m  55m 5524 S 4    1.4  1:40.71 apache2
        32640 www-data 20 0  345m  56m 5528 R 4    1.4  1:36.49 apache2
        31888 www-data 20 0  359m  70m 5664 R 4    1.8  1:57.07 apache2
        32511 www-data 20 0  357m  67m 5512 S 3    1.7  1:47.00 apache2
        32054 www-data 20 0  357m  68m 5660 S 2    1.7  1:53.10 apache2
        1     root     20 0 24452 2276 1232 S 0    0.1  0:01.58 init
    Moreover, running free -m returns the following:
                     total  used  free  shared  buffers  cached
        Mem:          4003  3919    83       0      118     901
        -/+ buffers/cache:  2899  1103
        Swap:            0     0     0
    To investigate this further, I installed Apache Buddy; it recommended reducing the MaxClients setting, which I did. I also installed MySQLTuner, and it suggests setting innodb_buffer_pool_size to 3.0G. However, I cannot do that, since the machine's entire memory is 4G.
    Here is the output from Apache Buddy:
        ### GENERAL REPORT ###
        Settings considered for this report:
        Your server's physical RAM: 4003MB
        Apache's MaxClients directive: 40
        Apache MPM Model: prefork
        Largest Apache process (by memory): 73.77MB
        [ OK ] Your MaxClients setting is within an acceptable range.
        Max potential memory usage: 2950.8 MB
        Percentage of RAM allocated to Apache: 73.72%
    And this is the output of MySQLTuner:
        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 47m 22s (675K q [237.552 qps], 12K conn, TX: 1B, RX: 300M)
        [--] Reads / Writes: 45% / 55%
        [--] Total buffers: 2.1G global + 2.7M per thread (151 max threads)
        [OK] Maximum possible memory usage: 2.5G (64% of installed RAM)
        [OK] Slow queries: 0% (0/675K)
        [OK] Highest usage of available connections: 26% (40/151)
        [OK] Key buffer size / total MyISAM indexes: 36.0M/18.7M
        [OK] Key buffer hit rate: 100.0% (245K cached / 105 reads)
        [OK] Query cache efficiency: 92.5% (500K cached / 541K selects)
        [!!] Query cache prunes per day: 302886
        [OK] Sorts requiring temporary tables: 0% (1 temp sorts / 15K sorts)
        [!!] Joins performed without indexes: 12135
        [OK] Temporary tables created on disk: 25% (8K on disk / 32K total)
        [OK] Thread cache hit rate: 90% (1K created / 12K connections)
        [!!] Table cache hit rate: 17% (400 open / 2K opened)
        [OK] Open file limit used: 12% (123/1K)
        [OK] Table locks acquired immediately: 100% (196K immediate / 196K locks)
        [!!] InnoDB buffer pool / data size: 2.0G/3.5G
        [OK] InnoDB log waits: 0
        -------- Recommendations -----------------------------------------------------
        General recommendations:
        Run OPTIMIZE TABLE to defragment tables for better performance
        MySQL started within last 24 hours - recommendations may be inaccurate
        Enable the slow query log to troubleshoot bad queries
        Adjust your join queries to always utilize indexes
        Increase table_cache gradually to avoid file descriptor limits
        Read this before increasing table_cache over 64: http://bit.ly/1mi7c4C
        Variables to adjust:
        query_cache_size (> 64M)
        join_buffer_size (> 128.0K, or always use indexes with joins)
        table_cache (> 400)
        innodb_buffer_pool_size (>= 3G)
    Last but not least, the server still has more than 60% of its disk space free. Now, based on the above, I have a few questions: Are these numbers normal? Do they make sense? Do I need to upgrade the server? If I don't need to upgrade and my configuration is not correct, how do I optimize it?
    Read the article
