Search Results

Search found 477 results on 20 pages for 'charles kihe'.

Page 3/20 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • How Can You Get More Productive In Life Sciences Sales?

    - by charles.knapp
    Only half of all doctors will meet with pharmaceutical sales reps, and that percentage continues to decrease. Furthermore, when reps are granted an opportunity to share information, the average interaction is only about a minute and a half. Concurrently, call quotas continue to increase. Why does this matter? Sales reps need to spend less time on traditional planning and after-call reporting, more time making calls, and make more productive use of short presentation times. Fortunately for sales reps, Oracle offers the first life sciences CRM designed to double sales time and halve reporting time. In particular, our new Life Sciences Edition Offline Client is designed so that you can actually turn the screen around, so that your CRM is useful for presentations and not just reporting, whether you are connected to the cloud or working offline, such as in restricted clinical environments. Watch Piers Evans, Industry Strategy Director, show what this looks like in the day of a typical pharmaceutical sales representative.

    Read the article

  • Pricing: Meet or Beat?

    - by charles.knapp
    My home dishwasher started making some really interesting noises. It was time to shop. I heard radio advertisements from two retailers who promised to meet any competitor's price. Then, another retailer promised that their everyday prices beat their competitors'. That got me to thinking about the power of pricing and promotions in the marketing mix (product, price, placement, promotions, and people). What is more powerful to say in a competitive market: your company will meet a similar offer, or your company will beat the others? Will you sell more if you meet or if you beat? I found that the retailer who promised to beat the others really had the best everyday pricing. I was close to making a purchase. Then, another retailer had an exclusive promotional sale for long-term customers. Their loyalty promotion beat the best everyday discounter. In the end, I got the quality and performance I wanted at a tremendous price. So, I have two challenges for marketers. First, where you really have to compete on price as a dominant factor, give people strong reasons to do business with you. If you try to meet others' prices, make the leap to actually beat, not merely meet, competitor prices. Second, upgrade your firm's capabilities where needed. Oracle offers a complete range of great CRM capabilities for loyalty management, marketing promotions, and pricing management that will help you to grow your business.

    Read the article

  • Tellago releases a RESTful API for BizTalk Server business rules

    - by Charles Young
    Jesus Rodriguez has blogged recently on Tellago Devlabs' release of an open source RESTful API for BizTalk Server Business Rules. This is an excellent addition to the BizTalk ecosystem, and I congratulate Tellago on their work. See http://weblogs.asp.net/gsusx/archive/2011/02/08/tellago-devlabs-a-restful-api-for-biztalk-server-business-rules.aspx The Microsoft BRE was originally designed to be used as an embedded library in .NET applications. This is reflected in the implementation of the Rules Engine Update (REU) Service, a TCP/IP service hosted by a Windows service running locally on each BizTalk box. The job of the REU is to distribute rules, managed and held in a central database repository, across the various servers in a BizTalk group. The engine is therefore distributed on each box, rather than exploited behind a central rules service. This model is all very well, but proves quite restrictive in enterprise environments. The problem is that the BRE can only run legally on licensed BizTalk boxes. Increasingly we need to deliver rules capabilities across a more widely distributed environment. For example, in the project I am working on currently, we need to surface decisioning capabilities for use within WF workflow services running under AppFabric on non-BTS boxes. The BRE does not currently offer any centralised rule service facilities out of the box, and hence you have to roll your own (and then run your rules services on BTS boxes, which has raised a few eyebrows on my current project, as all other WCF services run on a dedicated server farm). Tellago's API addresses this by providing a RESTful API for querying the rules repository and executing rule sets against XML passed in the request payload. As Jesus points out in his post, using a RESTful approach hugely increases the reach of BRE-based decisioning, allowing simple invocation from code written in dynamic languages, mobile devices, etc. We developed our own SOAP-based general-purpose rules service to handle scenarios such as the one we face on my current project. SOAP is arguably better suited to enterprise service bus environments (please don't 'flame' me - I refuse to engage in the RESTful vs. SOAP war). For example, on my current project we use claims-based authorisation across the entire service bus and use WIF and WS-Federation for this purpose. We have extended this to the rules service. I can't release the code for commercial reasons :-( but this approach allows us to legally extend the reach of BRE far beyond the confines of the BizTalk boxes on which it runs and to provide general-purpose decisioning capabilities on the bus. So, well done Tellago. I haven't had a chance to play with the API yet, but am looking forward to doing so.
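    To make the idea concrete, here is a minimal C# sketch of the kind of call such a RESTful rules resource invites. The base address, resource path, and payload shape are illustrative assumptions only (the snippet above does not give Tellago's actual route names); the pattern - POST the XML facts, read back the engine's output - is the point:

        using System;
        using System.Net.Http;
        using System.Text;
        using System.Threading.Tasks;

        class RulesServiceClient
        {
            static async Task Main()
            {
                // Hypothetical endpoint; substitute the real Tellago DevLabs resource URI.
                using (var client = new HttpClient { BaseAddress = new Uri("http://rules.example.com/") })
                {
                    // XML facts to evaluate; in practice this matches the rule set's document type.
                    string facts = "<Customer xmlns=\"http://example.com/facts\"><Status>Silver</Status></Customer>";
                    var content = new StringContent(facts, Encoding.UTF8, "application/xml");

                    // Assumed route: execute the named policy against the posted facts.
                    HttpResponseMessage response = await client.PostAsync("rules/CheckoutPolicy/execute", content);
                    response.EnsureSuccessStatusCode();

                    // The engine returns the (possibly modified) facts as XML.
                    Console.WriteLine(await response.Content.ReadAsStringAsync());
                }
            }
        }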

    Read the article

  • What Should You Look for In a CRM Demo?

    - by charles.knapp
    I have helped firms evaluate software demos and delivered demos in diverse industries such as manufacturing, healthcare, life sciences, and travel (to name just a few). Here are a few suggestions. First, which vendor has the best fit for your industry? Make sure that the vendor demo staff tell you clearly throughout the demo (not just in a passing comment) what portion of each business process and screen is standard, what has been configured, what has been custom coded, and what has been provided by a partner. If you don't keep asking, what you buy may be less useful than what you saw. This will lead to added (and unbudgeted) costs and time. Second, what are the roles of the primary users? What are their top-most needs, such as exception-oriented dashboards or rapid data entry? Can you get a demo for each key role, showing how the software fits a typical workday? Have the vendor repeatedly tell you what is standard, configured, custom coded, or provided by a partner. Third, how well does the demo balance ease of use with completeness of business processes? One common approach is to hide needed fields or steps that are of low visual value. Another approach is to focus heavily on a visually appealing capability, while downplaying the fit with your key business processes. Result: despite their business acumen, demo attendees may not focus adequately on gaps in business fit. So, look for complete disclosure and complete CRM. To arrange a demo from Oracle, please visit http://www.oracle.com/crm.

    Read the article

  • How Can You Work Smarter In Life Sciences Sales?

    - by charles.knapp
    One major reason why executives keep choosing Oracle CRM On Demand and Siebel CRM is our ongoing investments that deliver comprehensive business process support, tailored "at the factory" for specific industries. For example, life sciences sales in many cases globally follows an indirect, "influence" model, where a medical clinician uses expert working knowledge to prescribe products that are sold by independent pharmacies. Smarter presentations to clinicians can increase sales. Oracle's life sciences CRM is built for sales reps by sales reps. We worked with representatives at 15 of the top 20 pharmaceutical firms on our latest release. Oracle helps reps work smarter, from planning their day to delivering samples and rapidly presenting details to busy clinicians. Watch Piers Evans, Industry Strategy Director, show what this looks like in the day of a typical pharmaceutical sales rep.

    Read the article

  • Functional Adaptation

    - by Charles Courchaine
    In real life and OO programming we’re often faced with using adapters: DVI to VGA, 1/4” to 1/8” audio connections, 110V to 220V, wrapping an incompatible interface with a new one, and so on.  Where the adapter pattern is generally considered for interfaces and classes, a similar technique can be applied to method signatures.  To be fair, this adaptation is generally used to reduce the number of parameters, but I’m sure there are other clever possibilities to be had.  As Jan questioned in the last post, how can we use a common method to execute an action if the action has a differing number of parameters? Going back to the greeting example, it was suggested to have an AddName method that takes a first and last name as parameters.  This is exactly what we’ll address in this post. Let’s set the stage with some review and some code changes.  First, our method that handles the setup/tear-down infrastructure for our WCF service: 1: private static TResult ExecuteGreetingFunc<TResult>(Func<IGreeting, TResult> theGreetingFunc) 2: { 3: IGreeting aGreetingService = null; 4: try 5: { 6: aGreetingService = GetGreetingChannel(); 7: return theGreetingFunc(aGreetingService); 8: } 9: finally 10: { 11: CloseWCFChannel((IChannel)aGreetingService); 12: } 13: } Our original AddName method: 1: private static string AddName(string theName) 2: { 3: return ExecuteGreetingFunc<string>(theGreetingService => theGreetingService.AddName(theName)); 4: } Our new AddName method: 1: private static int AddName(string firstName, string lastName) 2: { 3: return ExecuteGreetingFunc<int>(theGreetingService => theGreetingService.AddName(firstName, lastName)); 4: } Let’s change the AddName method just a little bit more for this example and have it take the greeting service as a parameter: 1: private static int AddName(IGreeting greetingService, string firstName, string lastName) 2: { 3: return greetingService.AddName(firstName, lastName); 4: } The new signature of AddName using the Func delegate is now Func<IGreeting, string, string, int>, which can’t be used with ExecuteGreetingFunc as is, because it expects Func<IGreeting, TResult>.  Somehow we have to eliminate the two string parameters before we can use this with our existing method.  This is where we need to adapt AddName to match what ExecuteGreetingFunc expects, and we’ll do so in the following progression: 1: Func<IGreeting, string, string, int> -> Func<IGreeting, string, int> 2: Func<IGreeting, string, int> -> Func<IGreeting, int>   For the first step, we’ll create a method using the lambda syntax that will “eliminate” the last name parameter: 1: string lastNameToAdd = "Smith"; 2: //Func<IGreeting, string, string, int> -> Func<IGreeting, string, int> 3: Func<IGreeting, string, int> addName = (greetingService, firstName) => AddName(greetingService, firstName, lastNameToAdd); The new addName method gets us one step closer to the signature we need.
Let’s say we’re going to call this in a loop to add several names; we’ll take the final step from Func<IGreeting, string, int> -> Func<IGreeting, int> inline, as a lambda passed to ExecuteGreetingFunc, like so: 1: List<string> firstNames = new List<string>() { "Bob", "John" }; 2: int aID; 3: foreach (string firstName in firstNames) 4: { 5: //Func<IGreeting, string, int> -> Func<IGreeting, int> 6: aID = ExecuteGreetingFunc<int>(greetingService => addName(greetingService, firstName)); 7: Console.WriteLine(GetGreeting(aID)); 8: } If for some reason you needed to break out the lambda on line 6, you could replace it with 1: aID = ExecuteGreetingFunc<int>(ApplyAddName(addName, firstName)); and use this method: 1: private static Func<IGreeting, int> ApplyAddName(Func<IGreeting, string, int> addName, string firstName) 2: { 3: return greetingService => addName(greetingService, firstName); 4: } Splitting out a lambda into its own method is useful both in this style of coding and in LINQ queries, to improve the debugging experience.  It is not strictly necessary to break apart the steps & functions as was shown above; the lambda in line 6 (of the foreach example) could include both the last name and first name instead of being composed of two functions.  The process demonstrated above is one of partially applying functions; this could also have been done with Currying (also see Dustin Campbell’s excellent post on Currying for the canonical curried add example).  Matthew Podwysocki also has some good posts explaining both Currying and partial application, and a follow-up post that further clarifies the difference between Currying and partial application.  With either technique the ultimate goal is to reduce the number of parameters passed to a function.  Currying makes it a single parameter passed at each step, whereas partial application allows one to use multiple parameters at a time, as we’ve done here.  This technique isn’t for everyone or every problem, but can be extremely handy when you need to adapt a call to something you don’t control.
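    As an aside, the distinction between partial application and Currying that the links above discuss can be shown with the canonical add example in a few self-contained lines of C# (no IGreeting plumbing required); this is just an illustrative sketch:

        using System;

        class CurryingDemo
        {
            static void Main()
            {
                // A plain two-argument function.
                Func<int, int, int> add = (x, y) => x + y;

                // Partial application: fix one argument now, supply the rest later.
                Func<int, int> addFive = y => add(5, y);

                // Currying: turn the two-argument function into a chain of
                // one-argument functions, each returning the next step.
                Func<int, Func<int, int>> curriedAdd = x => y => x + y;

                Console.WriteLine(addFive(3));        // 8
                Console.WriteLine(curriedAdd(5)(3));  // 8
            }
        }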

    Read the article

  • Berkeley DB Java Edition 4.0.103 Available

    - by charles.lamb
    We'd like to let you know that JE 4.0.103 is now available at http://www.oracle.com/technology/software/products/berkeley-db/je/index.html. The patch release contains both small features and bug fixes, many of which were prompted by feedback on this forum. Some items to note: New CacheMode values for more control over cache policies, and new statistics to enable better interpretation of caching behavior. These are just one initial part of our continuing work in progress to make JE caching more efficient. Fixes for proper cache utilization calculations when using the -XX:+UseCompressedOops JVM option. A variety of other bug fixes. There are no file format or API changes. As always, we encourage users to move promptly to this new release.

    Read the article

  • RIF PRD: Presentation syntax issues

    - by Charles Young
    Over Christmas I got to play a bit with the W3C RIF PRD and came across a few issues which I thought I would record for posterity. Specifically, I was working on a grammar for the presentation syntax using a GLR grammar parser tool (I was using the current CTP of ‘M’ (MGrammar) and Intellipad – I do so hope the MS guys don’t kill off M and Intellipad now that they have dropped the other parts of SQL Server Modelling). I realise that the presentation syntax is non-normative and that any issues with it do not therefore compromise the standard. However, presentation syntax is useful in its own right, and it would be great to iron out any issues in a future revision of the standard. The main issues are actually not to do with the grammar at all, but rather with the ‘running example’ in the RIF PRD recommendation. I started with the code provided in Example 9.1. There are several discrepancies when compared with the EBNF rules documented in the standard. Broadly the problems can be categorised as follows: · Parenthesis mismatch – the wrong number of parentheses is used in various places. For example, in GoldRule, the RHS of the rule (the ‘Then’) is nested in the LHS (‘the If’). In NewCustomerAndWidgetRule, the RHS is orphaned from the LHS. Together with additional incorrect parentheses, this leaves UnknownStatusRule orphaned from the entire Document. · Invalid use of parentheses in ‘Forall’ constructs. Parentheses should not be used to enclose formulae. Removal of the invalid parentheses gave me a feeling of inconsistency when comparing formulae in Forall to formulae in If. The use of parentheses is not actually inconsistent in these two contexts, but in an If construct it ‘feels’ as if you are enclosing formulae in parentheses in a LISP-like fashion. In reality, the parentheses are simply being used to group subordinate syntax elements. The fact that an If construct can contain only a single formula as an immediate child adds to this feeling of inconsistency. · Invalid representation of compact URIs (CURIEs) in the context of Frame productions. In several places the URIs are not qualified with a namespace prefix (‘ex1:’). This conflicts with the definition of CURIEs in the RIF Datatypes and Built-Ins 1.0 document. Here are the productions:
    CURIE          ::= PNAME_LN | PNAME_NS
    PNAME_LN       ::= PNAME_NS PN_LOCAL
    PNAME_NS       ::= PN_PREFIX? ':'
    PN_LOCAL       ::= ( PN_CHARS_U | [0-9] ) ((PN_CHARS|'.')* PN_CHARS)?
    PN_CHARS       ::= PN_CHARS_U | '-' | [0-9] | #x00B7 | [#x0300-#x036F] | [#x203F-#x2040]
    PN_CHARS_U     ::= PN_CHARS_BASE | '_'
    PN_CHARS_BASE  ::= [A-Z] | [a-z] | [#x00C0-#x00D6] | [#x00D8-#x00F6] | [#x00F8-#x02FF] | [#x0370-#x037D] | [#x037F-#x1FFF] | [#x200C-#x200D] | [#x2070-#x218F] | [#x2C00-#x2FEF] | [#x3001-#xD7FF] | [#xF900-#xFDCF] | [#xFDF0-#xFFFD] | [#x10000-#xEFFFF]
    PN_PREFIX      ::= PN_CHARS_BASE ((PN_CHARS|'.')* PN_CHARS)?
    The more I look at CURIEs, the more my head hurts! The RIF specification allows prefixes and colons without local names, which surprised me. However, the CURIE Syntax 1.0 working group note specifically states that this form is supported…and then promptly provides a syntactic definition that seems to preclude it! On (much) deeper inspection, though, it appears that ‘ex1:’ (for example) is allowed, but would really represent a ‘fragment’ of the ‘reference’, rather than a prefix! Ouch!
This is so completely ambiguous that it surely calls into question the whole CURIE specification. In any case, RIF does not allow local names without a prefix. · Missing ‘External’ specifiers for built-in functions and predicates. The EBNF specification enforces this for terms within frames, but does not appear to enforce (what I believe is) the correct use of External on built-in predicates. In any case, the running example only specifies ‘External’ once, on the predicate in UnknownStatusRule. External() is required in several other places. · The List used on the LHS of UnknownStatusRule is comma-delimited. This is not supported by the EBNF definition. Similarly, the argument list of pred:list-contains is illegally comma-delimited. · Unnecessary use of conjunction around a single formula in DiscountRule. This is strictly legal in the EBNF, but redundant. All the above issues concern the presentation syntax used in the running example. There are a few minor issues with the grammar itself. Note that Michael Kifer stated in his paper “Rule Interchange Format: The Framework” that: “The presentation syntax of RIF … is an abstract syntax and, as such, it omits certain details that might be important for unambiguous parsing.” · The grammar cannot differentiate unambiguously between strategies and priorities on groups. A processor is forced to resolve this by detecting the use of IRIs and integers. This could easily be fixed in the grammar. · The grammar cannot unambiguously parse the ‘->’ operator in frames. Specifically, ‘-’ characters are allowed in PN_LOCAL names and hence a parser cannot determine if ‘status->’ is (‘status’ ‘->’) or (‘status-’ ‘>’); a short sketch of this tokenisation trap appears after the corrected example below. One way to fix this is to amend the PN_LOCAL production as follows:
    PN_LOCAL ::= ( PN_CHARS_U | [0-9] ) ((PN_CHARS|'.')* ((PN_CHARS)-('-')))?
    However, unilaterally changing the definition of this production, which is defined in the SPARQL Query Language for RDF specification, makes me uncomfortable. · I assume that the presentation syntax is case-sensitive. I couldn’t find this stated anywhere in the documentation, but function/predicate names do appear to be documented as being case-sensitive. · The EBNF does not specify whitespace handling. A couple of productions (RULE and ACTION_BLOCK) are crafted to enforce the use of whitespace. This is not necessary. It seems inconsistent with the rest of the specification and can cause parsing issues. In addition, the Const production exhibits whitespace issues. The intention may have been to disallow the use of whitespace around ‘^^’, but any direct implementation of the EBNF will probably allow whitespace between ‘^^’ and the SYMSPACE. Of course, I am being a little nit-picky about all this. On the whole, the EBNF translated very smoothly and directly to ‘M’ (MGrammar) and proved to be fairly complete. I have encountered far worse issues when translating other EBNF specifications into usable grammars. I can’t imagine there would be any difficulty in implementing the same grammar in Antlr, COCO/R, gppg, XText, Bison, etc. A general observation, which repeats a point made above, is that the use of parentheses in the presentation syntax can feel inconsistent and unintuitive. It isn’t actually inconsistent, but I think the presentation syntax could be improved by adopting braces, rather than parentheses, to delimit subordinate syntax elements in a similar way to so many programming languages.
The familiarity of braces would communicate the structure of the syntax more clearly to people like me.  If braces were adopted, parentheses could be retained around ‘var (frame | ‘new()’) constructs in action blocks. This use of parenthesis feels very LISP-like, and I think that this is my issue. It’s as if the presentation syntax represents the deformed love-child of LISP and C. In some places (specifically, action blocks), parenthesis is used in a LISP-like fashion. In other places it is used like braces in C. I find this quite confusing. Here is a corrected version of the running example (Example 9.1) in compliant presentation syntax: Document(    Prefix( ex1 <http://example.com/2009/prd2> )    (* ex1:CheckoutRuleset *)  Group rif:forwardChaining (     (* ex1:GoldRule *)    Group 10 (      Forall ?customer such that And(?customer # ex1:Customer                                     ?customer[ex1:status->"Silver"])        (Forall ?shoppingCart such that ?customer[ex1:shoppingCart->?shoppingCart]           (If Exists ?value (And(?shoppingCart[ex1:value->?value]                                  External(pred:numeric-greater-than-or-equal(?value 2000))))            Then Do(Modify(?customer[ex1:status->"Gold"])))))      (* ex1:DiscountRule *)    Group (      Forall ?customer such that ?customer # ex1:Customer        (If Or( ?customer[ex1:status->"Silver"]                ?customer[ex1:status->"Gold"])         Then Do ((?s ?customer[ex1:shoppingCart-> ?s])                  (?v ?s[ex1:value->?v])                  Modify(?s [ex1:value->External(func:numeric-multiply (?v 0.95))]))))      (* ex1:NewCustomerAndWidgetRule *)    Group (      Forall ?customer such that And(?customer # ex1:Customer                                     ?customer[ex1:status->"New"] )        (If Exists ?shoppingCart ?item                   (And(?customer[ex1:shoppingCart->?shoppingCart]                        ?shoppingCart[ex1:containsItem->?item]                        ?item # ex1:Widget ) )         Then Do( (?s ?customer[ex1:shoppingCart->?s])                  (?val ?s[ex1:value->?val])                  (?voucher ?customer[ex1:voucher->?voucher])                  Retract(?customer[ex1:voucher->?voucher])                  Retract(?voucher)                  Modify(?s[ex1:value->External(func:numeric-multiply(?val 0.90))]))))      (* ex1:UnknownStatusRule *)    Group (      Forall ?customer such that ?customer # ex1:Customer        (If Not(Exists ?status                       (And(?customer[ex1:status->?status]                            External(pred:list-contains(List("New" "Bronze" "Silver" "Gold") ?status)) )))         Then Do( Execute(act:print(External(func:concat("New customer: " ?customer))))                  Assert(?customer[ex1:status->"New"]))))  ) )   I hope that helps someone out there :-)
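    As promised above, here is a tiny C# sketch of the ‘->’ ambiguity. The pattern is a deliberately simplified stand-in for PN_LOCAL (not the full production), but it shows how a greedy longest-match tokenizer swallows the ‘-’ that was meant to begin the ‘->’ operator:

        using System;
        using System.Text.RegularExpressions;

        class FrameTokenIllustration
        {
            static void Main()
            {
                // Simplified stand-in for PN_LOCAL: a leading letter/underscore, then
                // letters, digits, underscores or '-' (the character causing the trouble).
                var pnLocal = new Regex(@"^[A-Za-z_][A-Za-z0-9_\-]*");

                string input = "status->\"Gold\"";

                // Greedy longest-match consumes the '-' as part of the name,
                // leaving only '>' for the operator.
                Match m = pnLocal.Match(input);
                Console.WriteLine("name token: " + m.Value);                   // status-
                Console.WriteLine("remainder : " + input.Substring(m.Length)); // >"Gold"
            }
        }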

    Read the article

  • Consumer Electronics Show (CES) Summit:Best Practices in Transforming Channels and Partnerships

    - by charles.knapp
    Expanding consumer demand is driving the entire high technology industry, accompanied by product lifecycles as short as a few months, continued pricing and promotion pressures, and increased globalization. Unifying global channel management, operations, and execution flow will increase efficiency and growth. IT can help, but one must think beyond generic ERP and CRM. Please join Oracle and IBM at the Bellagio Hotel in Las Vegas, Wednesday January 5, 1-7 pm. Learn from IBM, VTech, Plantronics, Cisco, Symantec and Oracle High Tech Product Strategy how to improve:
    - Channel sales, marketing, and operations management - enhance NPI, sales, forecasts, training, promotion planning, execution and settlement
    - Winning the deal - determining the right price for the right deal for the "perfect quote", capturing the order and order management
    - Collaborative and rapid supply chain planning - improve agility, inventory turns, and profits
    Register now for this FREE event. We hope you'll join us for our Oracle High Technology CES Summit and networking reception with your peers.

    Read the article

  • Webinar: Integrated Sales & Marketing - An Impossible Dream?

    - by charles.knapp
    Are you making the most of the latest B2B marketing thinking? Are your marketing tactics, your outbound email campaigns and your SEO generating enough of the prospects and leads that your sales teams need? Are your sales and marketing functions aligned and working together with optimised results? In this Webinar with MarketingWeek Magazine, find out how:
    - To ensure your marketers create and deliver consistently effective, targeted campaigns
    - To triple the customer intelligence your marketers gather, ensuring your sales teams are better informed and qualified than ever before
    - To generate up to 200% growth in lead volume and start measuring marketing effectiveness against increase in sales and size of an average deal
    - And hear how BPI OnDemand has delivered integrated sales & marketing across industries, with results such as 100% ROI on system cost for Heal's after just one campaign

    Read the article

  • Problem upgrading from 13.04 to 13.10

    - by Charles
    Partway through upgrading from 13.04 to 13.10, the process ground to a halt with an error message. Now, on retrying via 'Check for updates', I get the following: Failed to load the package list This is a serious problem. Try again later. If this problem appears again, please report an error to the developers. E:Encountered a section with no Package: header, E:Problem with MergeList /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_saucy_universe_i18n_Translation-en%%5fGB, E:The package lists or status file could not be parsed or opened. I have reported the problem, but my questions are: what can I do now? Do I have to do a fresh install? If so, will settings etc. in my Home folder (on its own partition) be saved? 13.04 still seems to be working perfectly. While upgrading I had a terrible internet connection, varying between 'dead slow' and 'dead stop'; I'm not sure if that caused the problem.

    Read the article

  • Webcast: Navigating the Future of Customer Service

    - by Charles Knapp
    Customer service is set to change dramatically over the next five years – and now is the time to ensure you have the tools to help you succeed. On Wednesday, June 13, join Oracle and Forrester Research to discover what the future holds and learn how you can:
    - Empower your agents
    - Delight your customers
    - Shape your customer service future
    Our speakers are Kate Leggett, Senior Analyst, Forrester Research, and John Perez, Customer Experience Strategist, Oracle RightNow. Kate is a leading expert on customer service strategies, as well as a published author on customer service trends and best practices. Her research focuses on helping organizations establish customer service strategies and deliver successful customer service projects. John has extensive experience of working on customer experience programs with organizations across a range of industries. He works with Oracle RightNow clients to build customer experience strategies that improve efficiency and productivity, increase sales, and drive customer loyalty.

    Read the article

  • How Do Top Performing High Tech Companies Measure Online Marketing Success?

    - by Charles Knapp
    You might expect a focus on Net Promoter scores, open rates, and click metrics. The real answers from top performers may surprise you. I've been working for a few months with Aberdeen Group and colleagues from IBM and Oracle to survey high technology firms worldwide on best practices in marketing and channel sales effectiveness. Now, we will share the results of our original customer research in a new white paper and webcast. Register today to learn how leading High Tech companies are increasing their Return on Marketing Investment (ROMI) and growing channel sales revenue. Discover how top performing high tech companies manage and use customer data, measure marketing spend effectiveness, and support internal and channel sales. Learn how best-in-class high tech companies use enterprise data throughout their customer lifecycle -- messaging to leads, selling to prospects, and serving customers. Our speakers will be:
    - Peter Ostrow, Research Director - Sales Effectiveness, Aberdeen Group
    - David Lasher, Global Business Services Partner, IBM
    - Jonathan Oomrigar, Vice President, Global High Technology Business Unit, Oracle
    Reserve your place now! This global webinar is on Tuesday, November 15, 10-11 am PST / 1-2 pm EST / 6-7 GMT / 7-8 CET

    Read the article

  • Do you have to be good at math to be a good programmer?

    - by Charles Roper
    It seems that conventional wisdom suggests that good programmers are also good at math. Or that the two are somehow intrinsically linked. Many programming books I have read provide many examples that are solutions to math problems, or are somehow related to math as if these examples are what make sense to most people. So the question I would like to float is: do you have to be good at math to be a good programmer?

    Read the article

  • 11/15 Webinar: How Top High Tech Companies Grow Channel Revenue and ROMI

    - by Charles Knapp
    See the results of recent Aberdeen research on best practices in sales and marketing effectiveness. Discover how top performing high tech companies manage and use enterprise customer data, measure marketing spend effectiveness, and support internal and channel sales throughout their customer lifecycle -- messaging to leads, selling to prospects, and serving customers. Our speakers will be:
    - Peter Ostrow, Research Director - Sales Effectiveness, Aberdeen Group
    - David Lasher, Global Business Services Partner, IBM
    - Jonathan Oomrigar, Vice President, Global High Technology Business Unit, Oracle
    Reserve your place now! This global webinar is on Tuesday, November 15, 10-11 am PST / 1-2 pm EST / 6-7 GMT / 7-8 CET

    Read the article

  • SFTP permission denied on files owned by www-data

    - by Charles Roper
    I have a pretty standard server set up running Apache and PHP. An app I am running creates files and these are owned by the Apache user www-data. Files that I upload via SFTP are owned by my own user charlesr. All files are part of the www-data group. My problem is that I cannot modify or overwrite any of the files via SFTP which are owned by www-data, even though charlesr is part of the www-data group. I can modify the files no problem via a SSH session. So I'm not sure what to do. How do I give my SFTP session permissions to modify www-data owned files? For a bit of background, these are the notes I wrote for myself when setting-up the server: Now set up permissions on `/var/www` where your files are served from by default: $ sudo adduser $USER www-data $ sudo chgrp -R www-data /var/www $ sudo chmod -R g+rw /var/www $ sudo chmod -R g+s /var/www Now log out and log in again to make the changes take hold. The previous set of commands does the following: 1. adds the current user ($USER) to the `www-data` group; 2. changes `/var/www` to belong to the `www-data` group; 3. adds read/write permissions to the group that `/var/www` belongs to; 4. sets the SGID bit on `/var/www`; this final point bears some explaining. And then I go on to explain to myself what setting the SGID bit means (i.e. all files created in /var/www become part of the www-data group automatically). Btw, nothing feels sweeter than going back and reading your own detailed notes on the what, how and why of your own server set up when trying to troubleshoot like this - I recommend it highly to all beginners like myself :-)

    Read the article

  • 12.04 making BCM4313 card work with aircrack-ng?

    - by Charles Forest
    I'm a real Linux Noob, just started using it (this month) and until now i had no issues. now i'm trying to set-up aircrack-ng on my laptop, but it seems like it's using the worst card possible (or almost) there is a TON of tutorial on this card (seems to be hell to set-up) i have tryed some, but i ended up uninstalling my drivers, messing with my desktops, and ended by having no more "X" to close my windows (i have no clue how i ended there) i just re-installed my linux (took me 2 hours to setup everything again), but now i'm a bit "Scared" to try tutorials randomly again. Right now it says the driver is wl, wich is not the one i want (AFAIK it's not supported) i'm not sure what kind of informations are needed, but here's what i think could be usefull. lspci -knn 00:00.0 Host bridge [0600]: Intel Corporation 2nd Generation Core Processor Family DRAM Controller [8086:0104] (rev 09) Subsystem: Samsung Electronics Co Ltd Device [144d:c0a5] Kernel driver in use: agpgart-intel 00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port [8086:0101] (rev 09) Kernel driver in use: pcieport Kernel modules: shpchp 00:02.0 VGA compatible controller [0300]: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller [8086:0116] (rev 09) Subsystem: Samsung Electronics Co Ltd Device [144d:c0a5] Kernel driver in use: i915 Kernel modules: i915 00:16.0 Communication controller [0780]: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 [8086:1c3a] (rev 04) Subsystem: Samsung Electronics Co Ltd Device [144d:c0a5] Kernel driver in use: mei Kernel modules: mei 00:1a.0 USB controller [0c03]: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 [8086:1c2d] (rev 04) Subsystem: Samsung Electronics Co Ltd Device [144d:c0a5] Kernel driver in use: ehci_hcd 00:1b.0 Audio device [0403]: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller [8086:1c20] (rev 04) Subsystem: Samsung Electronics Co Ltd Device [144d:c0a5] Kernel driver in use: snd_hda_intel Kernel modules: snd-hda-intel 00:1c.0 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 [8086:1c10] (rev b4) Kernel driver in use: pcieport Kernel modules: shpchp 00:1c.3 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 4 [8086:1c16] (rev b4) Kernel driver in use: pcieport Kernel modules: shpchp 00:1c.4 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 5 [8086:1c18] (rev b4) Kernel driver in use: pcieport Kernel modules: shpchp 00:1d.0 USB controller [0c03]: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 [8086:1c26] (rev 04) Subsystem: Samsung Electronics Co Ltd Device [144d:c0a5] Kernel driver in use: ehci_hcd 00:1f.0 ISA bridge [0601]: Intel Corporation HM65 Express Chipset Family LPC Controller [8086:1c49] (rev 04) Subsystem: Samsung Electronics Co Ltd Device [144d:c0a5] Kernel modules: iTCO_wdt 00:1f.2 SATA controller [0106]: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller [8086:1c03] (rev 04) Subsystem: Samsung Electronics Co Ltd Device [144d:c0a5] Kernel driver in use: ahci 00:1f.3 SMBus [0c05]: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller [8086:1c22] (rev 04) Subsystem: Samsung Electronics Co Ltd Device [144d:c0a5] Kernel modules: i2c-i801 01:00.0 3D controller 
[0302]: NVIDIA Corporation GF108 [GeForce GT 540M] [10de:0df4] (rev a1) Subsystem: Samsung Electronics Co Ltd Device [144d:c0a5] Kernel driver in use: nouveau Kernel modules: nouveau, nvidiafb WIRELESS CARD 02:00.0 Network controller [0280]: Broadcom Corporation BCM4313 802.11b/g/n Wireless LAN Controller [14e4:4727] (rev 01) Subsystem: Wistron NeWeb Corp. Device [185f:051a] Kernel driver in use: wl Kernel modules: wl, bcma, brcmsmac REST... 03:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller [10ec:8168] (rev 06) Subsystem: Samsung Electronics Co Ltd Device [144d:c0a5] Kernel driver in use: r8169 Kernel modules: r8169 04:00.0 USB controller [0c03]: NEC Corporation uPD720200 USB 3.0 Host Controller [1033:0194] (rev 04) Subsystem: Samsung Electronics Co Ltd Device [144d:c0a5] Kernel driver in use: xhci_hcd Also, if i'm "screwed" with my hardware, just tell me.

    Read the article

  • What Can You Do When You Need More Than Just CRM?

    - by charles.knapp
    Sometimes a company needs more than just CRM to grow profitably. What if you also need ERP for streamlining the rest of your operations? Unlike CRM-only companies, Oracle can help you - today. For example, Myriad Genetics was an early pioneer and is currently a global life sciences leader in the exciting field of molecular diagnostic products. To keep pace with company growth, Myriad needed to integrate disparate systems and automate paper-based processes. Furthermore, Myriad needed to increase sales pipeline visibility to maximize customer service. Myriad selected Oracle CRM On Demand and E-Business Suite ERP applications. As a result, Myriad standardized sales processes, ensured greater visibility into the pipeline, and improved customer service. Read more here about Myriad and their business results.

    Read the article

  • Oracle NoSQL Database Exceeds 1 Million Mixed YCSB Ops/Sec

    - by Charles Lamb
    We ran a set of YCSB performance tests on Oracle NoSQL Database using SSD cards and Intel Xeon E5-2690 CPUs with the goal of achieving 1M mixed ops/sec on a 95% read / 5% update workload. We used the standard YCSB parameters: 13 byte keys and 1KB data size (1,102 bytes after serialization). The maximum database size was 2 billion records, or approximately 2 TB of data. We sized the shards to ensure that this was not an "in-memory" test (i.e. the data portion of the B-Trees did not fit into memory). All updates were durable and used the "simple majority" replica ack policy, effectively 'committing to the network'. All read operations used the Consistency.NONE_REQUIRED parameter, allowing reads to be performed on any replica. In the past we have achieved 100K ops/sec using SSD cards on a single shard cluster (replication factor 3), so for this test we used 10 shards on 15 Storage Nodes with each SN carrying 2 Rep Nodes and each RN assigned to its own SSD card. After correcting a scaling problem in YCSB, we blew past the 1M ops/sec mark with 8 shards and proceeded to hit 1.2M ops/sec with 10 shards.
    Hardware Configuration
    We used 15 servers, each configured with two 335 GB SSD cards. We did not have homogeneous CPUs across all 15 servers available to us, so 12 of the 15 were Xeon E5-2690, 2.9 GHz, 2 sockets, 32 threads, 193 GB RAM, and the other 3 were Xeon E5-2680, 2.7 GHz, 2 sockets, 32 threads, 193 GB RAM. There might have been some upside in having all 15 machines configured with the faster CPU, but since CPU was not the limiting factor we don't believe the improvement would be significant. The client machines were Xeon X5670, 2.93 GHz, 2 sockets, 24 threads, 96 GB RAM. Although the clients had 96 GB of RAM, neither the NoSQL Database nor the YCSB clients require anywhere near that amount of memory, and the test could have just as easily been run with much less. Networking was all 10GigE.
    YCSB Scaling Problem
    We made three modifications to the YCSB benchmark. The first was to allow the test to accommodate more than 2 billion records (effectively ints vs. longs). To keep the key size constant, we changed the code to use base 32 for the user ids. The second change was to the way we run the YCSB client, in order to make the test itself horizontally scalable. The basic problem has to do with the way the YCSB test creates its Zipfian distribution of keys, which is intended to model "real" loads by generating clusters of key collisions. Unfortunately, the percentage of collisions on the most contentious keys remains the same even as the number of keys in the database increases. As we scale up the load, the number of collisions on those keys increases as well, eventually exceeding the capacity of the single server used for a given key. This is not a workload that is realistic or amenable to horizontal scaling. YCSB does provide alternate key distribution algorithms, so this is not a shortcoming of YCSB in general. We decided that a better model would be for the key collisions to be limited to a given YCSB client process. That way, as additional YCSB client processes (i.e. additional load) are added, they each maintain the same number of collisions they encounter themselves, but do not increase the number of collisions on a single key in the entire store. We added client processes proportionally to the number of records in the database (and therefore the number of shards).
This change to the use of YCSB better models a use case where new groups of users are likely to access either just their own entries, or entries within their own subgroups, rather than all users showing the same interest in a single global collection of keys. If an application finds every user having the same likelihood of wanting to modify a single global key, that application has no real hope of getting horizontal scaling. Finally, we used read/modify/write (also known as "Compare And Set") style updates during the mixed phase. This uses versioned operations to make sure that no updates are lost. This mode of operation provides better application behavior than the way we have typically run YCSB in the past, and is only practical at scale because we eliminated the shared key collision hotspots. It is also a more realistic testing scenario. To reiterate, all updates used a simple majority replica ack policy, making them durable.
    Scalability Results
    In the table below, the "KVS Size" column is the number of records with the number of shards and the replication factor. Hence, the first row indicates 400m total records in the NoSQL Database (KV Store), 2 shards, and a replication factor of 3. The "Clients" column indicates the number of YCSB client processes. "Threads" is the number of threads per process with the total number of threads. Hence, 90 threads per YCSB process for a total of 360 threads. The client processes were distributed across 10 client machines.
    Shards | KVS Size (records) | Clients | Threads    | Mixed Overall Throughput (ops/sec) | Read Latency av/95%/99% (ms) | Write Latency av/95%/99% (ms)
    2      | 400m (2x3)         | 4       | 90 (360)   | 302,152                            | 0.76/1/3                     | 3.08/8/35
    4      | 800m (4x3)         | 8       | 90 (720)   | 558,569                            | 0.79/1/4                     | 3.82/16/45
    8      | 1600m (8x3)        | 16      | 90 (1440)  | 1,028,868                          | 0.85/2/5                     | 4.29/21/51
    10     | 2000m (10x3)       | 20      | 90 (1800)  | 1,244,550                          | 0.88/2/6                     | 4.47/23/53
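    For readers who want to see what "read/modify/write with versioned operations" means in practice, here is a small generic sketch of the retry loop involved. It is deliberately written against a made-up IVersionedStore interface rather than the actual Oracle NoSQL Database or YCSB APIs; only the pattern - read the value with its version, change it, and write back only if the version is unchanged - is the point:

        using System;

        // Made-up stand-in for a versioned key/value store, purely for illustration;
        // this is not the Oracle NoSQL Database client API.
        interface IVersionedStore
        {
            (byte[] Value, long Version) Get(string key);
            bool PutIfVersion(string key, byte[] newValue, long expectedVersion);
        }

        static class ReadModifyWrite
        {
            // Apply 'modify' to the stored value without losing concurrent updates.
            public static void Update(IVersionedStore store, string key, Func<byte[], byte[]> modify)
            {
                while (true)
                {
                    // Read the current value together with its version.
                    var (current, version) = store.Get(key);

                    // Compute the new value locally.
                    byte[] updated = modify(current);

                    // Write back only if nobody else changed the record in the
                    // meantime; otherwise loop and retry against the fresh value.
                    if (store.PutIfVersion(key, updated, version))
                        return;
                }
            }
        }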

    Read the article

  • The Bing Sting - an alternative opinion

    - by Charles Young
    I know I'm a bit of an MS fanboy at times, but please, am I missing something here? Microsoft, with permission of users, exploits clickstream data gathered by observing user behaviour. One use for this data is to improve Bing queries. Google equips twenty of its engineers with laptops and installs the widgets required to provide Microsoft with clickstream data. It then gets their engineers to repeatedly (I assume) type in 'synthetic' queries which bring back 'doctored' hits. It asks its engineers to then click these results (think about this!). So, the behaviour of the engineers is observed and the resulting clickstream data goes off to Microsoft. It is processed and 'improves' Bing results accordingly.   What exactly did Microsoft do wrong here?   Google's so-called 'Bing sting' is clearly a very effective attack from a propaganda perspective, but is poor practice from a company that claims to do no evil. Generating and sending clickstream data deliberately so that you can then subsequently claim that your competitor 'copied' that data from you is neither fair nor reasonable, and suggests to me a degree of desperation in the face of real competition.   Monopolies are undesirable, whether they are Microsoft monopolies or Google monopolies.    Personally, I'm glad Microsoft has technology in place to observe user behaviour (with permission, of course) and improve their search results using such data. I can only assume Google doesn't implement similar capabilities. Sounds to me as if, at least in this respect, Microsoft may offer the better technology.

    Read the article

  • Create a Loyalty Program That Sticks - Thursday 30 Minute Webcast

    - by Charles Knapp
    Loyalty programs don't necessarily translate into loyal or profitable customers. What are market leaders doing to retain customers? Webcast Alert: Live complimentary webcast, Creating a Holistic Loyalty Program That Sticks, on Thursday, 11/15 at 1:00-1:30 pm EST. Southwest Airlines joins 1to1 Media to share insights on developing loyalty programs that are focused on customer needs and preferences. Hope to see you there! 

    Read the article

  • Oracle NoSQL Database Using FusionIO ioDrive2

    - by Charles Lamb
    We ran some benchmarks using FusionIO ioDrive2 SSD drives and Oracle NoSQL Database. FusionIO has published a whitepaper with the results of the benchmarks. "Results of testing showed that using an ioDrive2 for data delivered nearly 30 times more operations per second than a 300GB 10k SAS disk on a 90 percent read and 10 percent write workload and nearly eight times more operations per second on a 50 percent read and 50 percent write workload. Equally impressive, an ioDrive2 reduced latency over 700 percent (seven times) on inserts in a 90 percent read and 10 percent write workload and over 5800 percent (58 times) on reads in a 50 percent read and 50 percent write workload."

    Read the article

  • How can I deal with actor translations and other "noise" in third-party motion capture data?

    - by Charles
    I'm working on a game, and I've run into a problem with motion capture data. My team is using 3DS Max 2011 and trying to put free motion capture files on our models. The problem we're having is it has become extremely hard to find motion capture data that stays in place. We've found some great motion captures of things like walking and jumping but the actors themselves move within the data, so when we attach these animations to our models and bring them into XNA, the models walk forward even when they should technically be standing still (and then there's also the problem of them resetting at the end of the animation). How can we clean up, at runtime or asset-processing time, the animation in these motion capture files?
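    One common approach, sketched below in hedged form, is to strip the horizontal root motion out of the animation at asset-processing time so the character animates in place, and let gameplay code move the model through the world. The keyframe and clip types differ between content pipelines (for example, the XNA Skinned Model sample has its own), so here the root bone's keyframes are simply treated as a list of XNA matrices; treat the class and method names as illustrative assumptions:

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        static class RootMotionStripper
        {
            // Pins the root bone to its starting position on the ground plane,
            // keeping vertical motion (jumps, crouches) intact.
            public static void RemoveHorizontalDrift(IList<Matrix> rootKeyframes)
            {
                if (rootKeyframes.Count == 0)
                    return;

                Vector3 origin = rootKeyframes[0].Translation;

                for (int i = 0; i < rootKeyframes.Count; i++)
                {
                    Matrix transform = rootKeyframes[i];
                    Vector3 translation = transform.Translation;

                    // Remove the drift baked into the capture; gameplay code now
                    // decides how far the character actually moves each frame.
                    translation.X = origin.X;
                    translation.Z = origin.Z;

                    transform.Translation = translation;
                    rootKeyframes[i] = transform;
                }
            }
        }

    Because the root no longer carries the displacement, the snap-back at the end of the clip disappears as well; if you do want the captured motion, record the per-frame delta before pinning it and feed that to your movement code instead.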

    Read the article
