Search Results

Search found 22993 results on 920 pages for 'global load balancing'.


  • CIFS shares do not mount after upgrade to 12.10 from 12.04

    - by Mothball
    I have seen issues close to my problem, but no one seems to have a definitive answer as to what is going on and why the failure occurs. I have a number of NAS devices on my home network, and on a previous install of 12.04 (and versions prior) mounting at login worked using this entry for each in fstab: //servername/sharename /media/windowsshare cifs guest,uid=1000,iocharset=utf8,codepage=cp850,cp850 0 0 Now when I use this, 12.10 reports the standard "cannot mount, bad option..." message, and the kernel log reports that the CIFS option "codepage" is unknown. I changed the entry to "unicode" and received the same error message. There are no other error messages or log entries that would indicate another issue, and this is the statement I used for quite a while with 12.04 and before. Is the codepage option now obsolete in 12.10/CIFS? Is there a codepage support program that I must load, or some kind of helper program required to support the codepage option? A current review of the man pages at samba.org makes no mention of a "codepage" option. Extremely confused - any help/insight would be greatly appreciated.
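
    For reference, a minimal sketch of the same fstab entry with the rejected codepage option simply dropped (iocharset is still a valid cifs mount option; whether this alone fixes the mount is an assumption to verify):

        //servername/sharename /media/windowsshare cifs guest,uid=1000,iocharset=utf8 0 0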


  • Please recommend tools for PC, browser, and home network performance problems

    - by mobibob
    My client has been experiencing some odd response behavior in their browser for the past few days. Classic "nothing has changed", so I am starting at ground zero. Browsing a website will time out or take a ridiculous time to load; other times, the same site and query are immediately responsive. Once a connection is established, video streams are uninterrupted. The home network hosts a website, but Apache's access.log shows no unusual activity. I am using speedtest.net to check whether the connection through the ISP is OK, and it looks typical (average +/-). I have to suspect the home network is beaconing or doing something very abnormal, but I don't know where to start.
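
    One cheap first test that separates DNS stalls from connection stalls is curl's built-in timing variables (a sketch; point it at one of the sites that misbehaves):

        curl -s -o /dev/null -w 'dns:%{time_namelookup}s connect:%{time_connect}s total:%{time_total}s\n' http://example.com/

    If time_namelookup is the part that balloons on the slow runs, the problem is name resolution rather than the network path.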


  • Extract MP3 URL from a SWF file

    - by Charles
    I have a SWF file that basically shows a streamer, and it plays a song (I'm guessing it's MP3) that it links to, somewhere. I know the audio isn't embedded in the file since the SWF's file size is ~370KB. With most flash FLV and MP3 players, Internet Download Manager catches the URL as soon as the video/audio starts to load, and I can easily download it using IDM. In this case, IDM doesn't seem to sense anything - so I thought maybe I could extract the MP3 location myself. I tried decompiling the SWF file, as I'd heard before that with some files, decompiling can help in breaking down a file and getting just the info you need - but I honestly don't know what to look for in this particular file. Suggestions?
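
    If the audio turns out to be embedded after all (370 KB is small for a whole song, but it is worth ruling out), swftools' swfextract can list and pull assets; a sketch, assuming swftools is installed:

        swfextract player.swf                 # lists extractable objects, including any MP3 soundstream
        swfextract -m player.swf -o song.mp3  # -m extracts the MP3 soundstream

    If nothing is embedded, a network capture (e.g. tcpdump or the browser's own network inspector) should show the URL the SWF fetches at playback time.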


  • What can cause peaks in PageTables in /proc/meminfo?

    - by Fuzzy76
    I have a gameserver running Debian Lenny on a VPS host. Even under a fairly low load, the players start experiencing major lag (ping times rise from 50 ms to 150-500 ms) in bursts of 3-10 seconds. I have installed Munin server monitoring, but looking at the graphs the server appears to have plenty of CPU, RAM and bandwidth available. The only weird thing I noticed is some peaks in the memory graph attributed to "page_tables", which maps to PageTables in /proc/meminfo, but I can't find any good information on what this might mean. Any ideas what might be causing this? If you need any more graphs, just let me know. The interrupts/second count is at roughly 400-600 during this period (nearly all from eth0). The drop in committed memory was caused by me trying to lower the allocated memory for the server from 512MB to 256MB, but that didn't seem to help.
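
    To correlate the spikes with the lag bursts in real time, rather than waiting for Munin's coarse samples, the counter can be watched directly (a trivial sketch):

        watch -n 1 'grep -E "PageTables|Committed_AS" /proc/meminfo'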


  • How to prevent mod_cluster from being a single point of failure?

    - by Hitesh
    In my configuration I use Apache + mod_cluster as a front-end (load balancer) server and two JBoss AS 7.1.0 instances as back-end servers. On one system I have installed Apache + mod_cluster together with JBoss AS 7.1.0, and on the other system JBoss AS 7.1.0 only. Both JBoss AS 7.1.0 instances run in domain mode, i.e. clustered. My problem is that if Apache + mod_cluster crashes, clients cannot access any JBoss AS instance. I want to configure Apache + mod_cluster in a master/slave format: if one Apache + mod_cluster goes down, the other becomes active and passes client (browser) requests to the back-end servers without any interruption. Is there any way to make the two Apache + mod_cluster machines communicate with each other, i.e. check each other's health status, so that if one goes down the other takes over its work?
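
    One common pattern for exactly this (not specific to mod_cluster) is to run the second Apache + mod_cluster box as a hot standby behind a floating IP managed by keepalived/VRRP, so clients only ever target the virtual address. A minimal sketch of /etc/keepalived/keepalived.conf; the interface name and addresses are placeholders:

        vrrp_instance VI_1 {
            state MASTER              # BACKUP on the standby box
            interface eth0
            virtual_router_id 51
            priority 101              # use a lower priority on the standby box
            advert_int 1
            virtual_ipaddress {
                192.168.1.100         # the address clients and DNS point at
            }
        }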


  • USDM and Oracle Offer a New Part 11 Compliant Solution for Life Sciences

    - by Michael Snow
    Guest post today provided by Oracle partner, USDM: Regulated Content in WebCenter.

    USDM and Oracle offer a new Part 11 compliant solution for Life Sciences (White Paper). Life science customers now have the ability to take advantage of all of the benefits of Oracle's WebCenter Content, a global leader in Enterprise Content Management. For the past year, USDM has been developing best-practice compliance solutions to meet regulated content management requirements for 21 CFR Part 11 in WebCenter Content. USDM has been an expert in ECM for life sciences since 1999, and in 2011 it certified that WebCenter was a 21 CFR Part 11 compliant content management platform (White Paper). In addition, USDM has built Validation Accelerator Packs for WebCenter to enable life science organizations to quickly and cost-effectively validate this world-class solution.

    With the Part 11 certification, Oracle's WebCenter now provides regulated life science organizations the ability to manage REGULATORY content in WebCenter, as well as the ability to take advantage of ALL of the additional functionality of WebCenter, including a complete, open, and integrated portfolio of portal, web experience management, content management and social networking technology. Here are a few screenshot examples of Part 11 functionality included in the product: E-Sign, E-Sign Render, Metadata History, Audit Trail Report, and Access Reporting.

    Gone are the days when life science companies had to spend millions of dollars a year to implement, maintain, and validate ECM systems that no longer meet ever-changing business and regulatory requirements. Life science companies now have the ability to use WebCenter Content, an ECM system with a substantially lower cost of ownership and unsurpassed functionality.

    Oracle has been #1 in life sciences because of its ability to develop cost-effective, easy-to-use, scalable solutions that help increase insight and efficiency to drive growth for its customers. Adding a world-class ECM solution to this product portfolio gives life science organizations the chance to retire costly ECM systems that no longer meet their needs and to use WebCenter, part of the Oracle Fusion technology stack, alongside their other leading enterprise applications.

    USDM provides:

    • Expertise in life science ECM business processes
    • Prebuilt life science configuration in WebCenter
    • Validation Accelerator Packs for WebCenter

    USDM is very proud to support Oracle's expanding commitment to Life Sciences.

    For more information please contact: [email protected]

    Oracle will be exhibiting at DIA 2012 in Philadelphia on June 25-27.
Stop by our booth (#2825) to learn more about the advantages of a centralized ECM strategy and see the Oracle WebCenter Content solution, our 21 CFR Part 11 compliant content management platform.


  • Building a Redundant / Distributed Application

    - by MattW
    This is more of a "point me in the right direction" question. My team of three and I have built a hosted web app that queues and routes customer chat requests to available customer service agents (it does other things as well, but this is enough background to illustrate the issue). The basic dev architecture today is:

    - a single-page Ajax web UI (ASP.NET MVC) with floating chat windows (think Gmail)
    - a backend Windows service to queue and route the chat requests; this service also logs the chats, calculates service levels, etc.
    - a Comet server product that routes data between the web frontend and the backend Windows service; this also helps us detect which agents are still connected (online)

    And our hardware architecture today is:

    - 2 servers to host the web UI portion of the application
    - a load balancer to route requests to the 2 different web app servers
    - a third server to host the SQL Server DB and the backend Windows service responsible for queuing / delivering chats

    So as it stands today, one of the web app servers could go down and we would be OK. However, if something happened to the SQL Server / Windows service server, we would be boned. My question: how can I make this backend Windows service logic able to be spread across multiple machines (distributed)? The Windows service is written to accept requests from the Comet server, check for available agents, and route the chat to those agents. How can I make it so the work of the backend Windows service can be spread across multiple machines for redundancy and uptime purposes? Will I need to re-write it with distributed computing in mind? I should also note that I am hosting all of this on Rackspace Cloud instances - so maybe it is something I should be less concerned about? Thanks in advance for any help!


  • Installing mod_xsendfile on MAMP

    - by mail4alberto
    Hello, I'm having trouble installing mod_xsendfile on MAMP. I've used some sources to try to help me install it: http://iprog.com/posting/2008/04/compiling_mod_xsendfile_for_mac_os_x and http://groups.google.com/group/phusion-passenger/browse_thread/thread/e6dac9d5ea0de9c1 I ended up installing apache20 via MacPorts and used the apxs command to build the module, then copied it into MAMP's modules folder. The module does at least appear to load, but then I get this error in my Apache logs:

        [Thu May 27 19:08:28 2010] [notice] child pid 68606 exit signal Bus error (10)
        [Thu May 27 19:08:41 2010] [notice] child pid 68607 exit signal Bus error (10)

    Can anyone help me out? :S
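
    A guess worth testing: child bus errors right after a module loads often mean the module was compiled against a different Apache build than the one running it (here, MacPorts apache20 vs. MAMP's bundled Apache). MAMP ships its own apxs, so rebuilding with that toolchain may help; a sketch, assuming a default MAMP install path:

        cd /path/to/mod_xsendfile-source
        /Applications/MAMP/Library/bin/apxs -cia mod_xsendfile.c   # -c compile, -i install, -a add the LoadModule line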


  • Synaptics drivers are not loading on Kubuntu 13.10 on Dell Vostro 2420

    - by Alok Singh Mahor
    I freshly installed Kubuntu 13.10 (32-bit) on a newly purchased Dell Vostro 2420. Everything is working fine except scrolling and multitouch features on the touchpad. I can move the cursor with the touchpad and tap (single click and double click), but scrolling is not working. I tried to find a solution by searching on Google, but could not find a proper way to load the synaptics drivers. Some details:

    Laptop: Dell Vostro 2420
    Linux kernel version and distribution: 3.11.0-12-generic, Kubuntu 13.10

    Output of xinput list:

        ⎡ Virtual core pointer                    id=2   [master pointer  (3)]
        ⎜   ↳ Virtual core XTEST pointer          id=4   [slave  pointer  (2)]
        ⎜   ↳ PS/2 Generic Mouse                  id=12  [slave  pointer  (2)]
        ⎣ Virtual core keyboard                   id=3   [master keyboard (2)]
            ↳ Virtual core XTEST keyboard         id=5   [slave  keyboard (3)]
            ↳ Power Button                        id=6   [slave  keyboard (3)]
            ↳ Video Bus                           id=7   [slave  keyboard (3)]
            ↳ Power Button                        id=8   [slave  keyboard (3)]
            ↳ Sleep Button                        id=9   [slave  keyboard (3)]
            ↳ Laptop_Integrated_Webcam_HD         id=10  [slave  keyboard (3)]
            ↳ AT Translated Set 2 keyboard        id=11  [slave  keyboard (3)]
            ↳ Dell WMI hotkeys                    id=13  [slave  keyboard (3)]

    Output of synclient -l: Couldn't find synaptics properties. No synaptics driver loaded?

    Output of lshw is at http://paste.ubuntu.com/6645687/ The X server log and dmesg have no trace of synaptics. Kindly tell me how to troubleshoot this problem.
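
    Given that synclient reports no driver at all, the first thing worth ruling out is simply a missing or broken synaptics X driver package (a sketch; the package name is the stock one for 13.10):

        sudo apt-get install --reinstall xserver-xorg-input-synaptics
        # then log out and back in (or reboot) and re-check:
        synclient -l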


  • How to change theme in Windows 7 with Powershell script?

    - by Greg McGuffey
    I would like to have a script that would change the current theme of Windows 7. I found the registry entry where this is stored, but I apparently need to take some further action to get Windows to load the theme. Any ideas? Here is the script that I'm trying to use, but it isn't working (registry updated, but theme not changed):

        ######################################
        # Change theme by updating registry. #
        ######################################

        # Define argument which defines which theme to apply.
        param (
            [string] $theme = $(Read-Host -prompt "Theme")
        )

        # Define the themes we know about.
        $knownThemes = @{
            "myTheme" = "mytheme.theme";
            "alien"   = "oem.theme"
        }

        # Identify path to user themes.
        $userThemes = "C:\Users\yoda\AppData\Local\Microsoft\Windows\"

        # Get name of theme file, based on theme provided.
        $themeFile = $knownThemes["$theme"]

        # Build path to theme and set registry.
        $newThemePath = "$userThemes$themeFile"
        $regPath = "HKCU:\Software\Microsoft\Windows\CurrentVersion\Themes\"
        Set-ItemProperty -path $regPath -name CurrentTheme -value $newThemePath

        # Update system with this info...this isn't working!
        rundll32.exe user32.dll, UpdatePerUserSystemParameters

    Thanks!
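
    For what it's worth, one workaround often suggested instead of the rundll32 call is to launch the .theme file itself, which opens the Personalization window and applies the theme (a sketch, not a silent apply; the window then has to be closed):

        # apply the theme by invoking the theme file directly
        Invoke-Item $newThemePath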


  • Rule of thumb for cost vs. savings for code re-use

    - by Styler
    Is it a good rule of thumb to always write code with the intent of re-using it somewhere down the road? Or, depending on the size of the component you are writing, is it better practice to design it for re-use only when doing so makes sense relative to the time spent on it? What is a good rule of thumb for spending extra time on analysis and design of project components that have "some probability" of being needed later down the road for other things that may or may not need that part? For example, suppose project X needs to do things A and B. A definitely needs to be written for re-use, because it just makes sense to do so. B is very project-specific at the moment, and I can hack it all together in a couple of days to finish the project on time and give everyone kudos for being a great team, etc. Or we could spend a whole friggin' two weeks figuring out what projects Y/Z might need this thing for, and spend a load of extra time on part B because someday we might need to use it on project Y/Z (where the savings would be realized). I'd imagine a perfect-world situation would be a nicely crafted combination of project-specific vs. re-use-architected components, given the project. However, some code shops might feel it is a great idea to write everything with the intention of using it at some point down the road.


  • Ubuntu 12.10: Installing proprietary Nvidia driver causes freeze at boot

    - by Greg
    OK, so I just installed Ubuntu on my laptop, and I immediately encountered an issue: the HDMI audio output won't work. Yes, I know about the sound settings panel where you have to select the HDMI option, but even when it's selected I get no sound out of the TV I'm hooking it up to. This is a dealbreaker for me, because my laptop speakers are terrible; it's one of the big reasons I use my TV monitor. So I decided to work on solving the problem by upgrading my Nvidia drivers. I switched to one of the proprietary drivers offered in the software updating utility that comes with the OS, the option that said (tested). Voila, sound over HDMI is now working. Unfortunately, this brings me to my next problem: when I reboot Ubuntu with this or any other proprietary driver installed, it freezes when it tries to load my desktop. I can see my wallpaper, but no icons or options of any kind. The system is totally frozen, and gives me one of those "we've experienced an error, do you want to report it" messages. So there's my bind: I need HDMI audio out, that's a total dealbreaker for me, but installing the drivers that give me that capability crashes the system. Does anyone have any idea what's causing this?
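
    To get back to a usable desktop after such a freeze, one common recovery path is to drop to a console (Ctrl+Alt+F1) or boot into recovery mode and purge the proprietary driver, falling back to nouveau (a sketch of the recovery only, not a fix for the underlying crash):

        sudo apt-get purge nvidia-*
        sudo apt-get install --reinstall xserver-xorg-video-nouveau
        sudo reboot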


  • Slow Starting DHCP Client Service - HP Thin Clients

    - by Ryan
    We have recently begun adding XPe thin clients to our domain in preparation for a new Citrix environment. One thing that has been picked up on in testing is that they appear slow to boot. The issue manifests itself as the classic "Applying Computer Settings..." screen we are all used to seeing. After digging into the issue, it appears the DHCP Client service is taking some time to start on boot; this varies, but I would estimate it can take around 1 minute in some cases. I've eliminated the classic issues: DHCP is responding correctly and quickly, DNS is not the cause, and GPOs are applying promptly. A simple workaround is to assign the client a static IP, which works great, so the TCP/IP services are obviously firing up quickly, just not the DHCP Client. Does anyone have any ideas on how I may be able to improve the service start time? I'm keen to find a better solution before I get my arm twisted into setting up 250 thin clients with static addressing!
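
    One frequent culprit when static IPs work but DHCP stalls at boot is the switch port spending ~30 seconds in spanning-tree listening/learning while the client is already broadcasting its DHCP discover. If the access switches are managed, enabling edge-port behaviour on the client-facing ports is worth testing; a Cisco-IOS-style sketch (an assumption about your switch vendor; adjust the syntax and port range accordingly):

        interface range FastEthernet0/1 - 24
          spanning-tree portfast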


  • Internet Explorer 9 auto "feeling lucky" for gmail bing search

    - by Gareth Jones
    When I'm at school using the school computers, I have to use IE9. When I want to access my Gmail, I type "gmail" in the URL bar, and so IE9 does a Bing search. The page half loads (as in, it loads just about everything but the search results) and then opens my Gmail, kinda like Google's "I'm Feeling Lucky". My question is this: why? IE9 doesn't have the URL of Gmail; I can watch the Bing search load and then the URL change to Gmail. And it only happens for Gmail; I have tried searching for Google and Facebook in the same way. The computer is running Windows 7 with Windows Aero disabled and limited account privileges. While it's a cool thing, I would like to know what causes it to happen. Thanks


  • Apache connection intermittently times out on LAN

    - by jonescb
    I'm using Fedora 12 with Apache 2.2.14, and I was having this error on 2.2.13 as well. Even when I connect to my server over the LAN, Firefox will occasionally time out while connecting. I can't figure out what is causing this. The error log isn't showing anything; I even cleaned out the error log file so that if something happened it'd be a little easier to spot, but I'm still getting timeouts and nothing in the error log. Can anybody help me find what the problem is? Here is my httpd.conf: Pastebin. It's the default Fedora configuration; I've only changed the ServerName, if I remember correctly. I'm pretty sure it's not the Timeout setting, because on the LAN it should never time out. I don't believe it's a load issue either; I'm the only one connecting to it.
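
    When the next stall happens, Apache's own scoreboard is the quickest thing to inspect, and mod_status ships with the stock Fedora Apache. A sketch using the 2.2-style access directives (restrict the allowed address range to taste):

        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 192.168.0.0/16
        </Location>

    If all workers are busy or stuck in one particular state during a timeout, that narrows the cause considerably.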


  • How to treat your genius team-mate [closed]

    - by Shiplu
    I am soon going to be on a team with a very talented programmer. Everyone in the company likes him, as he knows a lot and does a lot of programming. The PM and the CEO like him a lot. I am a fan of his as a programmer. But as a team mate? I always try to avoid him. The reason is that in the very early days of our company, our CEO used to put both of us on the same team, and we worked together. I had many terrible experiences then. Most of the time he is doing others' work: when the team leader breaks the workload down and distributes it, he would work more than a full workday every day, doing my work in addition to his own. The result was duplicate code. And he does not pick up my work after finishing his own; he does it in the middle. How do you treat such team-mates?


  • How do you manage large web farms?

    - by Andrew Katz
    I have a quickly growing web farm running IIS 7 (30+ servers). All servers are identical copies of each other, and all servers are physical. We update the software about once a month, and in the current process we follow these steps:

    1. Disable the server in the pool on the F5 load balancer.
    2. Disable HTTP keep-alives in IIS so connections drop quickly.
    3. Change the default directory of the website to the new folder containing the new binaries.
    4. Test the server.
    5. Enable HTTP keep-alives.
    6. Enable the server in the F5 pool.
    7. Move on to server 2.

    Microsoft used to have Application Center, which was abandoned a while ago. They have made a second attempt with the Web Farm Framework, but this adds as much QA time testing the release package as it saves in deployment. Has anyone seen a commercial off-the-shelf application that is tailored for managing and deploying to large web farms? Thanks!
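
    Absent an off-the-shelf product, the rotation itself is scriptable. A rough PowerShell sketch of the loop; Disable-PoolMember and Enable-PoolMember here are hypothetical wrappers around whatever interface your F5 exposes (not a real F5 cmdlet set), and the paths are placeholders:

        $servers = Get-Content .\servers.txt
        foreach ($s in $servers) {
            Disable-PoolMember $s    # hypothetical F5 drain step
            Invoke-Command -ComputerName $s -ScriptBlock {
                # repoint the site at the new binaries using the stock IIS appcmd tool
                & "$env:windir\system32\inetsrv\appcmd.exe" set vdir "Default Web Site/" -physicalPath:"D:\releases\current"
            }
            Enable-PoolMember $s     # hypothetical F5 re-enable step
        }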


  • When row estimation goes wrong

    - by Dave Ballantyne
    Whilst working at a client site, I hit upon one of those issues where you are not sure if you are looking at something entirely new, a bug, or a gap in your knowledge. The client had a large query that needed optimizing. The query itself looked pretty good: no UDFs, UNION ALL used rather than UNION, and most of the predicates were sargable other than one or two minor ones. There were a few extra joins that could be eradicated, and having fixed up the query I then started to dive into the plan.

    I could see all manner of spills in the hash joins and the sort operations. These are caused when SQL Server has not reserved enough memory and has to write to tempdb, a VERY expensive operation that is generally avoidable. These, however, are a symptom of a bad row estimation somewhere else, and when that bad estimation is combined with other estimation errors, chaos can ensue.

    Working my way back down the plan, I found the cause, and the more I thought about it, the more I became convinced that the optimizer could be making a much more intelligent choice. The first step is to reproduce, and I was able to simplify the query down to a single join between two tables, Product and ProductStatus. From a business point of view this is quite fundamental: find the status of particular products to show whether they are 'active', 'inactive' or whatever. The query itself couldn't be any simpler. In the estimated plan (ignoring the "!" warning, which is a missing index), Products has 27,984 rows and the join outputs 14,000. The actual plan shows how bad that estimation of 14,000 is: every row in Products has a corresponding row in ProductStatus. This is unsurprising; in fact it is guaranteed, as there is a trusted FK relationship between the two columns. There is no way that the actual output of the join can be different from the input.

    The optimizer is already partly aware of the foreign key metadata, and that can be seen in the simplification stage. If we drop the Description column from the query, the join to ProductStatus is optimized out. It serves no purpose to the query: there is no data required from the table, and the optimizer knows that the FK guarantees a matching row will exist, so it has been removed. Surely the same should be applied to the row estimations in the initial example, right? If you think so, please upvote this Connect item.

    So what are our options for fixing this error? Simply changing the join to a left join will cause the optimizer to think that we could allow the rows not to exist, and a subselect would also work. However, this is a client site; I'm not able to change each and every query where this join takes place. But there is a more global switch that will fix this error: trace flag 2301, described, perhaps loosely, as "Enable advanced decision support optimizations". We can test this on the original query in isolation by using the "QUERYTRACEON" option, and lo and behold our estimated plan now has the 'correct' estimation. Many thanks go to Paul White (b|t) for his help and for keeping me sane through this.
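
    The post's plan images and code screenshots are gone, but the shape of the repro is recoverable from the text. A T-SQL sketch of the join and the workarounds described (table names from the post; the column names are assumptions):

        -- the original shape: the FK guarantees a match, yet the estimate drops
        SELECT p.ProductID, ps.Description
        FROM   Product p
        JOIN   ProductStatus ps ON ps.ProductStatusID = p.ProductStatusID;

        -- workaround 1: a left join changes the estimate
        SELECT p.ProductID, ps.Description
        FROM   Product p
        LEFT JOIN ProductStatus ps ON ps.ProductStatusID = p.ProductStatusID;

        -- workaround 2: trace flag 2301, scoped to this query only
        SELECT p.ProductID, ps.Description
        FROM   Product p
        JOIN   ProductStatus ps ON ps.ProductStatusID = p.ProductStatusID
        OPTION (QUERYTRACEON 2301);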


  • APC module causing strange error

    - by clifgriffin
    When I run php -v I get:

        PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/apc.so' - /usr/lib64/php/modules/apc.so: undefined symbol: php_pcre_exec in Unknown on line 0

    This isn't my first rodeo; I've set up APC multiple times. This is a MediaTemple Dedicated Virtual 4.0 with Plesk 11, and Plesk 11 is the only thing essentially different from the other servers I've set this up on. I've verified that pcre-devel is installed. I've compiled APC from source as well as used pecl to install it; no difference. I also tried downgrading to APC 3.0.19, with no love.
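
    An undefined php_pcre_exec symbol usually means the apc.so in play was not built against the PHP binary that is loading it. A few stock checks worth running before the next rebuild (a sketch, with no Plesk-specific assumptions):

        php -i | grep -i pcre            # is PCRE support present in the running PHP?
        php -i | grep extension_dir      # is this the directory your apc.so landed in?
        which php phpize php-config      # do these all belong to the same PHP installation?

    If phpize/php-config point at a different PHP than the one actually running, rebuilding APC with the matching pair is the first thing to try.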


  • De-index URL parameters

    - by Doug Firr
    A heads-up before reading: this question is lengthy, so allow me to provide a one-sentence summary: I need to get Google to de-index URLs that have certain parameters appended.

    I have a website, example.com, with language translations. There used to be many translations, but I deleted them all so that only English (default) and French options remain. When one selects a language option, a parameter is added to the URL. For example, the home page:

        https://example.com (default)
        https://example.com/main?l=fr_FR (French)

    I added a robots.txt to stop Google from crawling any of the language translations:

        # robots.txt generated at http://www.mcanerin.com
        User-agent: *
        Disallow:
        Disallow: /cgi-bin/
        Disallow: /*?l=

    So any pages containing "?l=" should not be crawled. I checked in GWT using the robots testing tool, and it works. But under HTML improvements, the previously crawled language-translation URLs remain indexed. The internet says to return a 404 for the removed URLs so Google knows to de-index them. I checked what my CMS would serve if I visited one of the URLs that should no longer exist. This URL was listed in GWT under duplicate title tags (one of the reasons I want to scrub up my URLs):

        https://example.com/reports/view/884?l=vi_VN&l=hy_AM

    This URL should not exist, since I removed the language translations, yet the page loads when it should not! I played around and typed example.com?whatever123; it seems that parameters always load as long as everything before the question mark is a real URL. So if Google has indexed all these URLs with parameters, how do I remove them? I cannot check whether a 404 is being generated, because the page always loads for any URL with a parameter that needs to be de-indexed.
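
    One detail that may matter here: a robots.txt Disallow only stops crawling; it does not remove URLs that are already indexed, and it actually prevents Google from re-fetching them to discover a 404 or a noindex. One commonly suggested route is to drop the Disallow for those paths and serve a noindex header instead. A sketch for Apache 2.4 with mod_headers (the l= pattern is an assumption matching the URLs above):

        <If "%{QUERY_STRING} =~ /(^|&)l=/">
            Header set X-Robots-Tag "noindex"
        </If>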


  • wifi routers and concurrent devices

    - by Joelio
    We have a Linksys WRT54G WiFi router in our office, which worked great when we had 5-6 folks. Now on peak days we have 10-15 people, each with a computer, smartphone, etc., plus an Ooma VOIP device. On average, 1-2 times a day I need to go hard-reboot the router, and sometimes the border router (the Cox-supplied device) as well. I assume this is just because the router can't handle this many concurrent users. So my question is: can these consumer routers handle this kind of load? If not, would adding more devices solve the problem, and how close together can I put 2 routers without having interference problems (our office area is not that big physically)?


  • How to set the request start time with HAProxy?

    - by Tupy
    I would like to measure the time of the full request stack. New Relic captures the time spent in the middleware (e.g. Java, Python, Ruby) and the request queue time (see https://newrelic.com/docs/features/tracking-front-end-time). For this, I need to set the X-Request-Start header as the request passes through the HAProxy load balancer. The haproxy.cfg should look like:

        backend www
            balance roundrobin
            mode http
            reqadd "X-Request-Start" UNKNOWN_TIME_FUNCTION()
            server servername 192.168.0.1:80 weight 1 check

    Is there a native HAProxy function to replace UNKNOWN_TIME_FUNCTION()?
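
    For what it's worth, reqadd can only append literal strings, but HAProxy 1.5+ can interpolate log-format variables in http-request set-header. The commonly cited New Relic recipe looks like this (a sketch; verify the variable support against your HAProxy version):

        backend www
            balance roundrobin
            mode http
            http-request set-header X-Request-Start t=%Ts%ms
            server servername 192.168.0.1:80 weight 1 check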


  • The Three-Legged Milk Stool - Why Oracle Fusion Incentive Compensation makes the difference!

    - by Richard Lefebvre
    During the London Olympics, we were exposed to dozens of athletes who worked with sports psychologists to maximize their performance. Executives often hire business psychologists to coach their teams to excellence. In the same vein, Fusion Incentive Compensation can be used to get people to change their sales behavior so we can make our numbers. But what about using incentive compensation solutions in a non-sales scenario to drive change? Recently, I was working an opportunity where a company was having a low user adoption rate for Salesforce.com, which was causing problems for them. I suggested they use Fusion Incentive Comp to change the reps' behavior. We tossed around the idea of tracking user adoption by creating a variable bonus for reps based on how well they forecasted revenues in the new system. Another thought was to reward the reps for how often they logged into the system, or for the percentage of leads that became opportunities and turned into revenue. A new twist on a great product.

    Fusion CRM's Sweet Spot: I'm excited about the sales performance management (SPM) tools in Fusion CRM. This trio of Incentive Compensation, Territory Management, and Quota Management sets us apart from the competition, because Oracle is the only vendor that provides all three of these capabilities on a single tech stack, in a single application, and with a single look and feel. The niche vendors offer standalone territory or incentive compensation solutions, but then the customer has to custom-build the other tools and can end up with a Frankenstein-type environment. On average, companies overpay sales commissions by three to eight percent. Calculate that number for a company the size of Oracle for one quarter and it makes a pretty airtight financial case for using SPM tools to figure accurate commissions. Plus, when sales reps get the right compensation, they can be out selling rather than spending precious time figuring out what they didn't get paid or looking for another job.

    And one more thing... Oracle knows incentive comp. We have been a Gartner MarketScope leader in this space for the last five years. Our solution gets high marks because of its scalability and because of its interoperability with other technologies. And now that we're leading with Fusion, our incentive compensation offering includes the innovations that the Fusion team built, plus enhancements from the E-Business Suite Incentive Comp team. It's a case of making a good thing even better. (See product video.)

    The "Wedge" Apps: In a number of accounts that I'm working on, there is a non-Oracle CRM system of record. That gives me the perfect opportunity to introduce the benefits of our SPM tools and to get the customer using Fusion. Then the door is wide open for the company to uptake more of Fusion CRM, especially since all the integrations they need are out of the box. I really believe that implementing this wedge of SPM tools is the ticket to taking market share away from other vendors. It allows us to insert ourselves in an environment where no other CRM solution in the market has the extending capabilities of Fusion.

    Not Just Your Usual Suspects: Usually the stakeholders that I talk to for Territory Management are tightly aligned with the sales management team. When I sell the quota planning tool, I'm talking to finance people on the ERP side of the house who are measuring quotas and forecasting revenue. And then Incentive Comp is of most interest to the sales operations people, and generally these people roll up to either HR or the payroll department. I think of our Fusion SPM tools as a three-legged stool straddling an organization's Sales, Finance, and HR departments. So when you're prospecting for opportunities, yes, people with a CRM perspective will be very interested, but don't limit yourselves to that constituency. You might find stakeholders in accounting, revenue planning, or HR compensation teams. You just might discover, as I did at United Airlines, that the HR organization is spearheading the CRM project because incentive compensation is what they need... and they're the ones with the budget.

    Jason Loh
    Global Solutions Manager, Fusion CRM Sales Planning
    Oracle Corporation


  • IIS not listening over external network, all other traffic working

    - by Beuy
    Hello there, I have a very odd situation. I have a server (let's call it X) running 2008 R2 with two NICs in it: one is connected to the work domain and has a subnet of 192.168.10.0/24, the other is connected to an ADSL connection and has a subnet of 192.168.1.0/24. The server has IIS installed. On the ADSL connection I have set up dynamic DNS and port forwarding to allow external HTTP, HTTPS, FTP and RDP connections. FTP and RDP are working fine; however, neither HTTP nor HTTPS is working at all. I can browse the websites by going to localhost on the machine, but the HTTP and HTTPS ports appear as "Filtered" when I try to scan them using PortQueryUI, and browsers respond with a "server took too long to load or was not responding" error. This was working fine just a few days ago. Windows Firewall is disabled, and I don't have any other software firewall on it. I'm really lost; any help would be great.
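
    Since FTP and RDP traverse the same forwarding fine, the next thing I'd check on the box itself is what HTTP is actually bound to, using only stock tools (a sketch):

        netstat -ano | findstr :80
        netsh http show iplisten

    The first shows whether anything is listening on port 80 and on which addresses; for the second, an empty IP listen list means HTTP.SYS listens on all addresses, while a list naming only the domain-side NIC's address would explain refused connections from the ADSL side even though localhost still works.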


  • How to automatically resume php-fpm?

    - by alfish
    I am using nginx + php-fpm on Debian Squeeze for a busy server, and I have had great difficulty dealing with the maximum number of connections being reached. The problem is that the PHP processes sometimes just die randomly under high load and leave the server with no PHP process at all; then I need to manually restart the php5-fpm service to bring the server back to life. I am wondering how to prevent this from happening, or at least treat the symptoms by restarting php5-fpm automatically whenever there is no PHP process left to listen for incoming requests. My relevant configs are:

        pm = dynamic
        pm.max_children = 1400
        pm.start_servers = 10
        pm.max_spare_servers = 20
        pm.process_idle_timeout = 1s; #not sure it will be useful when pm=dynamic
        pm.max_requests = 100000
        request_terminate_timeout = 30

    I appreciate your suggestions for coping with this nasty problem.
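
    Until the root cause is found, a one-line cron watchdog can at least restore service automatically. A crude sketch for a file in /etc/cron.d (it assumes the stock Debian init script name):

        * * * * * root pgrep php5-fpm > /dev/null || /etc/init.d/php5-fpm restart

    This checks once a minute and restarts the pool only when no php5-fpm process is left; it is a band-aid, not a fix.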

