Search Results

Search found 4272 results on 171 pages for 'processes'.


  • What to do when 'dpkg --configure -a' fails with too many errors?

    - by rudivonstaden
    During an upgrade from lucid (10.04) to precise (12.04), the X session froze, and I have been trying to recover the upgrade to get a stable system. I have performed the following steps:
    - Used ssh to log in to the stalled system over the network.
    - Checked the contents of the /var/log/dist-upgrade directory. There was no activity in main.log, apt.log or term.log.
    - top showed that the 'precise' process was using about 3% CPU, but I could find no evidence that the upgrade process was still doing anything. 'dpkg' did not show up in top, but it came up with pgrep dpkg | xargs ps.
    - Killed the 'dpkg' and 'precise' processes.
    - Tried to recover the upgrade by running sudo fuser -vki /var/lib/dpkg/lock; sudo dpkg --configure -a. This was partially successful (some packages were configured), but it failed with the message "Processing was halted because there were too many errors." I ran the same command a few times, and each time some packages were configured but others failed.
    - Tried running sudo apt-get -f install. It fails with errors similar to those from dpkg.

    The current situation is that dpkg --configure -a and sudo apt-get -f install fail with two kinds of error:

    Dependency issues, e.g.:
        dpkg: dependency problems prevent configuration of cifs-utils:
         cifs-utils depends on samba-common; however:
          Package samba-common is not configured yet.
        dpkg: error processing cifs-utils (--configure):
         dependency problems - leaving unconfigured

    Resource conflicts, e.g.:
        debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable

    Additionally, there are references to potential boot problems, so I'm not keen to reboot without fixing the install first:
        dpkg: too many errors, stopping
        Processing triggers for initramfs-tools ...
        update-initramfs: Generating /boot/initrd.img-3.2.0-25-generic
        cryptsetup: WARNING: failed to detect canonical device of /dev/sda1
        cryptsetup: WARNING: could not determine root device from /etc/fstab

    So my question is: how do I get a working install when dpkg --configure -a fails?
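    A minimal sketch of the usual recovery loop for this situation, assuming the failures really are ordering and lock problems rather than corrupted package archives (the exact command sequence is a suggestion, not from the original post):

        # Make sure nothing is still holding the dpkg and debconf locks
        sudo fuser -vki /var/lib/dpkg/lock /var/cache/debconf/config.dat

        # Re-run configuration; each pass configures whatever it can
        sudo dpkg --configure -a

        # Let apt try to resolve the remaining dependency chain
        sudo apt-get -f install

        # If one package keeps blocking the others, reinstall it explicitly
        sudo apt-get install --reinstall samba-common

    Repeating the dpkg/apt pair until no new packages get configured is the standard approach; the debconf "config.dat is locked" error usually just means a leftover dpkg or debconf frontend process is still alive and needs to be killed first.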

    Read the article

  • Ubuntu 12.04 - syslog showing "SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled"

    - by Tom G
    I have been seeing these random logs in syslog on our production system. There is no XFS setup. fstab only shows local partitions, only EXT3. There is nothing in crontabs either. The only file-system-related package I have installed is 'nfs-kernel-server'. Kernel version is 3.2.0-31-generic.
        kernel: [601730.795990] SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled
        kernel: [601730.798710] SGI XFS Quota Management subsystem
        kernel: [601730.828493] JFS: nTxBlock = 8192, nTxLock = 65536
        kernel: [601730.897024] NTFS driver 2.1.30 [Flags: R/O MODULE].
        kernel: [601730.964412] QNX4 filesystem 0.2.3 registered.
        kernel: [601731.035679] Btrfs loaded
        os-prober: debug: running /usr/lib/os-probes/mounted/10freedos on mounted /dev/vda1
        10freedos: debug: /dev/vda1 is not a FAT partition: exiting
        os-prober: debug: running /usr/lib/os-probes/mounted/10qnx on mounted /dev/vda1
        10qnx: debug: /dev/vda1 is not a QNX4 partition: exiting
        os-prober: debug: running /usr/lib/os-probes/mounted/20macosx on mounted /dev/vda1
        macosx-prober: debug: /dev/vda1 is not an HFS+ partition: exiting
        os-prober: debug: running /usr/lib/os-probes/mounted/20microsoft on mounted /dev/vda1
        20microsoft: debug: /dev/vda1 is not a MS partition: exiting
        os-prober: debug: running /usr/lib/os-probes/mounted/30utility on mounted /dev/vda1
        30utility: debug: /dev/vda1 is not a FAT partition: exiting
        os-prober: debug: running /usr/lib/os-probes/mounted/40lsb on mounted /dev/vda1
        debug: running /usr/lib/os-probes/mounted/70hurd on mounted /dev/vda1
        debug: running /usr/lib/os-probes/mounted/80minix on mounted /dev/vda1
        debug: running /usr/lib/os-probes/mounted/83haiku on mounted /dev/vda1
        83haiku: debug: /dev/vda1 is not a BeFS partition: exiting
        os-prober: debug: running /usr/lib/os-probes/mounted/90bsd-distro on mounted /dev/vda1
        os-prober: debug: running /usr/lib/os-probes/mounted/90linux-distro on mounted /dev/vda1
        os-prober: debug: running /usr/lib/os-probes/mounted/90solaris on mounted /dev/vda1
        os-prober: debug: /dev/vda2: is active swap
    Why would this randomly show up? This also spawns multiple "jfsCommit" processes.
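    These module-load messages line up with an os-prober run: os-prober probes every partition for other operating systems and pulls in the xfs, jfs, ntfs, qnx4 and btrfs drivers while doing so, and it is normally triggered by update-grub (for example after a kernel or grub package update) rather than by cron. A quick sketch for confirming that and, if this machine never dual-boots, stopping it (commands are a suggestion, assuming a stock Ubuntu grub/os-prober setup):

        # When did os-prober last run, and is it installed?
        grep -a os-prober /var/log/syslog | tail
        dpkg -l os-prober grub-common

        # update-grub is usually what invokes it; kernel postinst hooks call update-grub
        ls -l /etc/kernel/postinst.d/

        # If nothing else boots on this box, either remove os-prober ...
        sudo apt-get remove os-prober

        # ... or keep it installed but tell grub not to run it
        # (set GRUB_DISABLE_OS_PROBER=true in /etc/default/grub, then:)
        sudo update-grub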

    Read the article

  • Announcing Two Papers Addressing the RPAS Fusion Client

    - by Oracle Retail Documentation Team
    Oracle Retail has published two documents to My Oracle Support addressing the Retail Predictive Application Server (RPAS) Fusion Client, a web-based rich client developed using the latest Oracle Application Development Framework (ADF). The Fusion Client provides an enhanced user experience for communicating with the RPAS server.

    Oracle Retail Predictive Application Server Fusion Client Getting Started Guide (Doc ID 1492759.1)
    The Retail Predictive Application Server (RPAS) is a configurable platform that provides capabilities such as a multidimensional database structure, batch and online processing, a configurable user interface, a configurable calculation engine, user security, and utility functions such as importing and exporting, all on a highly scalable technical environment that can be deployed on a variety of hardware. This paper addresses typical questions that arise during setting up and deploying the Fusion Client, provides performance recommendations, and highlights the differences between the Classic Client and the Fusion Client.

    Oracle Retail RPAS Fusion Client Performance Issue Report (Doc ID 1493747.1)
    Performance issues can be frustrating for customers, and Oracle Retail will strive to assist you as you attempt to enhance the performance of your systems. To ensure the timeliest processing of your issue, retailers and partners are encouraged to respond as thoroughly as possible to each question in this document, which can be sent back for analysis by logging a Service Request and following typical Customer Support processes. The sections of the document solicit information about the following:
    - Performance Issue Description
    - Performance Issue Details
    - System Configuration Data
    - Application Configuration Data
    - Performance Log Files

    Read the article

  • Turn O&M Operations into Optimized Projects with Oracle Primavera

    - by mark.kromer
    Oracle enterprise project portfolio management with Primavera is about much more than optimizing project performance and eliminating project failure on new projects, capital programs, and the like. A very common use case we see is small-scale, frequent and recurring projects based on ongoing operations and maintenance. Rather than only assigning resources to activities when you are building a new network infrastructure, for example, Oracle has teamed up the Primavera and E-Business Suite teams to provide direct integration, so that work orders from Oracle's Enterprise Asset Management (eAM) system populate Primavera P6 project schedules. Once your network infrastructure build-out project is complete, planners and operations managers can use the what-if and scheduling capabilities in the Primavera tools to assign work orders, maximize resource utilization, and reuse templates for typical O&M work, then share the results back to the operations teams using eAM for maintenance. Large-scale maintenance operations on major assets later in the asset lifecycle (phase-outs, shutdowns and turnarounds) are classic maintenance projects, as opposed to building something new, and Oracle Primavera together with Oracle E-Business Suite covers them as well, helping you optimize the ALM processes in your business. Read more about these new capabilities in the Oracle eAM data sheet.

    Read the article

  • What's the canonical process for backing up a website?

    - by Walkerneo
    This is going to sound terrible, but bear with me. I currently have a cron job that does a mysql dump, a git add-all and commit, and a git push to Bitbucket. I set this up almost a year ago, when I didn't know much about git, backups, and general web development and administration. I haven't had the time to fix this and do it properly, but the repo has now grown quite big from accumulating large temporary files from my forum, so now I have to do something, and I want to do it properly this time around. What processes do semi-large websites and personal site admins use for backing up server content? Based on what I've learned since I set this up, what I'm currently thinking of doing is:
    - Making changes on a development domain and committing the code frequently
    - Archiving the entire site after a successful deployment from the development domain
    - Having automatic daily database and user-content backups
    I still like the idea of backing up sqldumps with git, though. I know git isn't a backup tool and that this is beyond its purpose, but the textual queries that are exported would be easily managed by git and would save a lot of space in archives.
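    As one hedged sketch of the "automatic daily database and user-content backups" step, the paths, database name and retention below are placeholders, not details from the post:

        #!/bin/bash
        # Nightly backup: database dump plus user-content archive, keeping 14 days
        set -euo pipefail

        STAMP=$(date +%F)
        BACKUP_DIR=/var/backups/site      # hypothetical destination (ideally synced off-box)
        SITE_DIR=/var/www/site            # hypothetical web root

        # Database dump; credentials come from ~/.my.cnf rather than the command line
        mysqldump --single-transaction forumdb | gzip > "$BACKUP_DIR/db-$STAMP.sql.gz"

        # Archive only user-generated content; code is already versioned in git
        tar -czf "$BACKUP_DIR/uploads-$STAMP.tar.gz" -C "$SITE_DIR" uploads

        # Rotate old archives
        find "$BACKUP_DIR" -name '*.gz' -mtime +14 -delete

    Keeping plain-text SQL dumps in git still works for the "diffable history" aspect; the archives above are the restore path, and the git history stays small as long as large binary uploads are kept out of the repo.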

    Read the article

  • Quantifying the Value Derived from Your PeopleSoft Implementation

    - by Mark Rosenberg
    As product strategists, we often receive the question, "What's the value of implementing your PeopleSoft software?" Prospective customers and existing customers alike are compelled to justify the cost of new tools, business process changes, and the business impact associated with adopting the new tools. In response to this question, we have been working with many of our customers and implementation partners during the past year to obtain metrics that demonstrate the value obtained from an investment in PeopleSoft applications. The great news is that as a result of our quest to identify value achieved, many of our customers began to monitor their businesses differently and more aggressively than in the past, and a number of them informed us that they have some great achievements to share. For this month, I'll start by pointing out that we have collaborated with one of our implementation partners, Huron Consulting Group, Inc., to articulate the levers for extracting value from implementing the PeopleSoft Grants solution. Typically, education and research institutions, healthcare organizations, and non-profit organizations are the types of enterprises that seek to facilitate and automate research administration business processes with the PeopleSoft Grants solution. If you are interested in understanding the ways in which you can look for value from an implementation, please consider registering for the webcast scheduled for Friday, December 14th at 1pm Central Time in which you'll get to see and hear from our team, Huron Consulting, and one of our leading customers. In the months ahead, we'll plan to post more information about the value customers have measured and reported to us from their implementations and upgrades. If you have a great story about return on investment and want to share it, please contact either [email protected]  or [email protected]. We'd love to hear from you.

    Read the article

  • Necessary Infrastructure for large project with many components communicating through IPCs

    - by jluzwick
    I have a fairly in-depth question which probably doesn't have an exact answer. As a software engineer, I am usually tasked with working on a program or project with minimal understanding of how the other components or programs in the project interact with each other. When one program fails in a sea of multiple components and processes, what infrastructure elements are necessary to ensure that the problem can be accurately tracked to the offending application? More specifically, which infrastructure elements are necessary for such a large project, and which are optional but very helpful? One example I can think of is some form of common logging infrastructure that allows a developer or tester to easily browse through a log covering numerous components, looking for messages that might point to the culprit program along with a "trail" of what happened before the issue occurred, similar to Android's alogcat tool. These infrastructure elements should be language-agnostic. While they should be understood by all engineers on the team in question, which elements should be understood in great detail by the technical systems engineers, and what should the individual software engineers be responsible for adding to their tools to allow such infrastructure to take hold? Please feel free to ask for clarification if something does not make sense; I understand this question is very broad and needs some refinement. I will refine as necessary from the answers and comments I receive. Thanks for any help!
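    One deliberately low-tech sketch of such a shared logging convention: every component writes tagged, structured lines to the same syslog, so a single merged log can be filtered per component or per request id. The tag and field names here are illustrative, not prescribed by the question:

        # Component A records an event with a tag and a correlation id
        logger -t componentA -p local0.info "req=42 stage=parse status=ok"

        # Component B records the failure for the same request
        logger -t componentB -p local0.err "req=42 stage=dispatch status=timeout"

        # A developer or tester reconstructs the trail for request 42 across all components
        grep "req=42" /var/log/syslog

    The essential pieces are a shared sink, a component tag, and a correlation id that crosses process boundaries; the same idea scales up to centralized collectors (rsyslog forwarding and the like) without changing what each program has to emit.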

    Read the article

  • Trade In, Trade Up Promotion: SPARC Consolidation Now Through May 31st

    - by swalker
    Dear Partner,
    Installed Base Business (IBB) technology refresh is one of the most important activities for Oracle, for you and for your customers. It allows your existing customers to benefit from the most up-to-date, best-of-breed Oracle products. And it’s an exciting time to perform a technology refresh: a new SPARC promotion is available now, closing 31st May 2012. Customers trading in older SPARC systems and upgrading to a new SPARC SuperCluster T4-4 or SPARC Enterprise M8000/M9000 can get $4,000 per CPU. The discount is pre-approved and upfront (maximum discounts apply). The major highlights are as follows:
    - Targeted systems: upgrade to SPARC M8000, M9000, SuperCluster
    - Qualified installed base upgrade from: all older generations of SPARC systems
    - Promotional offer: trade-in value of $4K per CPU; pre-approved maximum discount (including trade-in) not to exceed 60% on M8/9000 systems and 25% on SuperCluster; no-cost dock-to-dock shipping, and environmentally safe disposal of the returned hardware through Oracle best-of-class recycling processes.
    Recommendations: we recommend that you take the following actions:
    - As usual, please register your opportunities in OMM.
    - When you do so, please make sure you place the following Campaign Name in the “Marketing Initiative” field of OMM: EMEA_Tech Refresh-IBB Campaign_12H1_Follow Up_O
    For all the details, please view the rules and FAQs. For more information, please visit the Promo Partner Site here. For more information on IBB and the Oracle Upgrade Advantage Program (UAP):
    http://www.oracle.com/us/products/servers-storage/upgrade-advantage-program/index.html
    http://www.oracle.com/partners/secure/sales/oracle-ibb-program-for-partners-184291.html
    Contacts: for questions, please contact your favorite Oracle Partner Account Manager.

    Read the article

  • How to model the components of a non Information System?

    - by Adel C Kod
    So I am working on a project that's related to kernel code (specifically the TCP/IP stack of the kernel). I need to build some models to describe the functionality and components of my system. Initially I thought about a class diagram: it can describe the general architecture of my system, but it doesn't make much sense since my code is very much structured rather than object-oriented (written in standard C). I also thought about DFDs: they'd describe the processes of my system and how the data flows, but they contain something that doesn't really fit here, namely data stores. I have no databases here at all. For the functionality, other team members suggested using activity and sequence diagrams, which is kind of okay with me, but what about the system components? So basically my question is: I want to describe the components of my system; what do you suggest as a meaningful diagram to follow? (Again, the project is a research, low-level, systems-oriented project with almost no user interface at all.)

    Read the article

  • Have you Been Missing the 'About This Record' Functionality on the Customer Form...?

    - by MargaretW
    Do you have fond memories of the 'Help -> About This Record' functionality that used to be available in the old Customer form - when it was a form, and not a Java HTML screen? Back in Release 11i, we had the ability to identify when the customer record had last been updated and by whom. When some forms were replaced by Java HTML screens, you could identify some of this information via the 'About this Page' hyperlink at the bottom left-hand corner of the HTML page. You could enable this via the FND: Diagnostics profile option, but many customers found this had an adverse effect on performance and additionally was not user-friendly. Our customers tell us that this feature was widely used to identify owner/update information in many business processes, including auditing, customer entry/update, research and testing. There have been various efforts to bring this feature back by customizing Java pages, but this was not fully successful in some cases. Oracle Support is happy to announce that this functionality has now been included in the Customer screens in Release 12.2 onwards. You will be able to query the record history at customer level, at site level, at site address level, and for all tabs relating to the customer. Simply click on the 'Record History' icon, available in the Record History column on a summary screen, or via the same icon on the individual detail screen, to display the following information:
    - Last Updated Date
    - Last Updated By
    - Creation Date
    - Created By
    - Last Update Login

    Read the article

  • .NET app - Should we use SQL Server and duplicate some reference data from an external Oracle DB? Or use Oracle and have a DB link?

    - by Daventry
    We're looking to migrate some existing Excel/Access processes into a new system which will provide the users with a Silverlight frontend to run and view the reports instead of using MS Access. The initial idea was to have SQL Server 2008 as the RDBMS. The problem is that we've got some static data such as country codes, counterparties, etc., which live in an existing Oracle DB. Since we do not want to duplicate that data (if possible), we were thinking of having a DB link between SQL Server and Oracle, but our firm does not allow that. So the options are either to duplicate the data or to use Oracle as the RDBMS - surprise, the firm does allow DB links between Oracle databases. The initial idea was also to use WCF RIA Services, Entity Framework, etc., which we're not sure play well with Oracle; that's why it was decided to go with SQL Server in the first place. Would you advise going for Oracle so that we can just link the static data? Or using SQL Server 2008 and replicating it because it's "safer" to stay within the Microsoft land? And should we use Entity Framework and WCF RIA Services at all? Regards.
    UPDATE: Thanks everyone for your answers. Nothing is set in stone yet. We'll try to import the data instead of linking, so that if the other DB goes down our system can still carry on. We're likely to use SQL Server just because most developers are more experienced with it. Even if we use RIA Services, we can swap out the data access layer and use other frameworks such as those mentioned below.

    Read the article

  • Why have we got so many Linux distributions? [closed]

    - by nebukadnezzar
    Pointed to it from an answer to another question, I came across this graphic, and I'm shocked at how many Linux distributions currently exist. However, it seems that most of these distributions are forks of already popular distributions with minimal changes, usually limited to themes, wallpapers and buttons. It would still seem easier to create a sub-distribution with the required changes, such as Xubuntu with XFCE4, Kubuntu with KDE4, Fluxbuntu with Fluxbox, etc. In my mind there are a number of problems with having so many distributions - perhaps less security/stability due to smaller groups of developers, and also the confusingly vast range of choice for newcomers to Linux. Some reasons that developers might decide to fork are:
    - Specializing in a particular topic (a work-related topic, e.g. for a hospital, etc.)
    - An exceptional architecture that requires a special set of software
    - Use of non-FOSS, proprietary technology, and such
    So what other reasons have caused so many people to decide to create their own distributions? What are the thought processes that have led to this? And are these "valid" reasons - do we need so many distributions? If you can back your experiences up with references, that would be great.

    Read the article

  • A Web Service to collect data from local servers every hour

    - by anilerduran
    I'm trying to find a way to collect data from different servers around the world. Here are the details:
    - There is only a single PowerShell script on the servers, which encrypts data (a simple csv file) and sends it with the preferred method (an HTTP/HTTPS POST could work).
    - There is no further control over those servers. I can't install any service, process etc. All I can do is configure the script to execute every hour.
    - The script will also have an encrypted username/password/license key for every server, and it will compress the data and send it to me along with this information.
    So I need a service (I'm not sure if a web service is the right solution) in the cloud that will:
    - Receive the data that is sent from the servers.
    - Authenticate the request, recognizing the sender by license key/username/password.
    - Most importantly, redirect/send this file to my SQL Server in the cloud (Azure), separating the data according to the customer information in the license key, so that each customer's data is stored in dedicated DB/tables on my SQL Server.
    All of the processes above should be completed automatically: no manual steps.
    Question: is a web service (SOAP or RESTful) the right solution for this?
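    A hedged sketch of the transport half of this, assuming a plain authenticated HTTPS POST with the license key in a header and the compressed CSV as the request body; the endpoint, header name and file path below are made up for illustration:

        # Hypothetical hourly upload from one server (the same request is easy to issue from PowerShell)
        curl --fail --silent \
             -u "$UPLOAD_USER:$UPLOAD_PASS" \
             -H "X-License-Key: $LICENSE_KEY" \
             -H "Content-Type: application/gzip" \
             --data-binary @/var/tmp/report.csv.gz \
             https://collector.example.com/api/upload

    On the receiving side, a small RESTful endpoint hosted next to the Azure SQL database can validate the credentials, map the license key to a customer, and bulk-insert the rows into that customer's tables; for a one-way push like this, REST is usually a simpler fit than SOAP.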

    Read the article

  • IIS 7 SSO stops working during high CPU load? [migrated]

    - by DanB
    On our IIS7 site (Windows 2008 Server), we have set up single sign-on (SSO). It seems to work fine most of the time, but when the CPU load becomes high, SSO authentication completely stops working. I did some research and tried this suggestion to increase the max number of worker processes in the default app pool, but the increase did not help. Some details:
    - The site is a WordPress blog.
    - The server has plenty of RAM (2 GB) and free disk space.
    - SSO is achieved by putting a copy of the WordPress login page (wp-login.php) into a subfolder below the root that has anonymous authentication disabled, and then redirecting the browser to it. This was the recommendation of Microsoft given to our consultants.
    - To increase CPU load for testing, I have three scripts hit the home page simultaneously, over and over. This drives CPU to 100%. When these scripts are running, SSO authentication simply doesn't happen. As soon as I stop the scripts, SSO works again. (I should mention that the SSO problem also happens when many users visit the site at once.)
    - The WordPress database process (mysqld) is not stressed at all by the scripts.
    I would be happy to provide further diagnostics. Any help appreciated!

    Read the article

  • Deal Registration Moves to Oracle Partner Store (OPS)- The Four Action Items for Partners

    - by Richard Lefebvre
    In November 2012, Oracle’s partner deal registration process will move to the Oracle Partner Store (OPS). During this time, OPS will become the single source for partners to register deals, obtain deal status, and place orders. What will partners need to do?
    1. Request an OPS Account – If your company is new to OPS, the first thing you need to do is request an account (if your company already has an OPS account, go to step 2). It’s important to have the person who will be managing your OPS account make this request as soon as possible. They will be set up as your company’s primary administrator.
    2. Set Up Users in OPS – Setup of users can start immediately, and will be handled by the primary OPS administrator at your company. The process is simple, but all existing users of Global PRM (Partner Relationship Management) deal registration will need to be set up in OPS before November 14, 2012.
    3. Review/Action Any Registrations Pending Submission in PRM – Prior to November 14, 2012, all pending registrations should be submitted in the existing PRM system. It is important that this step is complete so registrations will not need to be re-entered when the system is moved to OPS on November 17, 2012. Registrations pending submission are easily identified on the registration listing screen with either “Incomplete” or “Returned to Partner” in the status column.
    4. Attend Training – Oracle will offer multiple VAD and VAR training sessions beginning October 29, 2012. It is recommended that all users attend one of these important sessions.
    Detailed instructions on each of these tasks can be found on the OPS Information Page. OPS will offer several enhancements to the deal registration process, including:
    - Simplified Registration Form
    - Easier Product Selection
    - Expanded Browser Support
    - Shared Registration Visibility Between VAD and VAR
    - Pre-set Customer Selection From Partner Ordering Base
    Best Regards,
    Titina Ott
    Vice President, Worldwide A&C Systems and Business Processes

    Read the article

  • PeopleSoft CRM 9.2 Release Value Proposition

    - by Race Bannon
    Oracle's PeopleSoft Customer Relationship Management (CRM) delivers solutions that have been tailored to fit your industry business processes, your customer strategies, and your success criteria. With PeopleSoft CRM 9.2, organizations will be able to deploy a solution that delivers built-in best practices specific to their industry on a highly configurable, tightly integrated platform, ensuring that solutions will be fast to implement. The result is less configuration, less customization, and less integration. PeopleSoft Customer Relationship Management (CRM) is a world-class solution for organizations of every size, and Oracle’s planned product roadmap for PeopleSoft applications is to deliver valuable, needed features for all of an organization’s constituents along three design principles — Simplicity, Productivity, and Lowered Total Cost of Ownership — as well as new application functionality as prioritized by our customers. The upcoming 9.2 release of PeopleSoft Customer Relationship Management focuses on these themes of Simplicity, Productivity, and Lower Total Cost of Ownership while also delivering robust new functionality to help your organization succeed. The recently published PeopleSoft CRM 9.2 Release Value Proposition provides overviews of the new features and enhancements planned for these applications in Release 9.2. This document offers customers a road map intended to help them assess the business benefits of upgrading to the 9.2 release while also helping them plan their IT projects and investments. (The link is to a My Oracle Support page, available to customers and partners.) Oracle continues to deliver enterprise-wide features that enhance our customer ownership experience and help customers run their businesses more efficiently and profitably. With the CRM 9.2 release, we continue to abide by the firm commitment we’ve made to our customers.

    Read the article

  • 11/28 Thought Leaders Webinar: Marketing Strategies for Great Customer Experiences

    - by Charles Knapp
    With the growing use of mobile and social, it's tempting to bolt on these new channels to existing processes. However, that piecemeal approach may not lead to satisfying customer experiences or solid returns on investments. Furthermore, the volume of information businesses have access to is growing exponentially. Is this leading to better business insight and customer experiences? Join the Internet Marketing Association, The University of California at Irvine, and Oracle as we discuss marketing strategies that will help your customers have better experiences with your brand. You'll learn effective strategies for harnessing the power of "big data" to know more and understand your customers better, empowering customers and employees to make every interaction easy and rewarding, and adapting the customer experience to connect and engage effectively with each customer. Our speakers are Melissa Boxer, Vice President of Product Strategy, Oracle Cloud and CX Applications, who is a conference keynote speaker on integrated social marketing and loyalty analytics, and Dean Abbott, CEO of Abbott Analytics, who is a thought leader in commercial predictive analytics. This learning opportunity takes place on Wednesday, November 28, 11 am to 12 pm Pacific. Register today to learn from these thought leaders.

    Read the article

  • Updated Business Activity Monitoring (BAM) Class

    - by Gary Barg
    We have just completed an extensive upgrade to the Business Activity Monitoring course, bringing it up to the PS5 level and doing some major rework of content and topic flow. This should be a GREAT course for anyone needing to learn to use BAM effectively to analyze their SOA data.
    Details of the Course
    This course explains how to use Oracle BAM to monitor business activities across an enterprise in real time. You can measure your key performance indicators (KPIs), determine whether you are meeting service-level agreements (SLAs), and take corrective action in real time. Learn to:
    - Create dashboards and alerts using a business-friendly, wizard-based design environment
    - Monitor BPM and BPEL processes
    - Configure drilling, driving, and time-based filtering
    - Create alerts
    - Build applications with a dynamic user interface
    - Manage BAM users and roles
    In addition to learning Oracle BAM architecture, you learn how to perform administrative tasks related to Oracle BAM. You create and work with the different types of message sources that send data into Oracle BAM. You build interactive, real-time, actionable dashboards, and you configure alerts on abnormal conditions. You learn how to monitor both BPEL and BPM composite applications with Oracle BAM. Lastly, you create and use Oracle BAM data control to build applications with a dynamic user interface that changes based on real-time business events.
    Registration
    The Oracle University course page, with more course details and registration information, is here. The next scheduled class:
    - Date: 5-Dec-2012
    - Duration: 3 days
    - Hours: 9:00 AM – 5:00 PM CT
    - Location: Chicago, IL
    - Class ID: 3325708

    Read the article

  • 724% Return on an SFA project with Oracle Sales Cloud and Marketing Cloud combined!

    - by Richard Lefebvre
    Oracle Sales Cloud and Marketing Cloud customer Apex IT gained just that: a 724% return on investment (ROI) when it implemented these Oracle Cloud solutions in its fast-moving, rapidly-growing business. Apex IT was just announced as a winner of the Nucleus Research 11th annual Technology ROI Awards. The award, given by the analyst firm, highlights organizations that have successfully leveraged IT deployments to maximize value per dollar spent.
    Fast Facts:
    - Return on Investment – 724%
    - Payback – 2 months
    - Average annual benefit – $91,534
    - Cost : Benefit Ratio – 1:48
    Business Benefits
    In addition to the ROI and cost metrics, the award calls out improvements in Apex IT’s business operations, across both Sales and Marketing teams:
    - Improved ability to identify new opportunities and focus sales resources on higher-probability deals
    - Reduced administration and manual lead tracking, resulting in more time selling and a net new client increase of 46%
    - Increased campaign productivity for both Marketing and Sales, including Oracle Marketing Cloud’s automation of campaign tracking and nurture programs
    - Improved margins with more structured and disciplined sales processes, resulting in more effective deal negotiations
    Read the full Apex IT ROI Case Study. You also can learn more about Apex IT’s business, including the company’s work with Oracle Sales and Marketing Cloud on behalf of its clients. You can point your prospects and customers to the CX blog for a similar recap of the Apex IT award and a link to the Case Study.

    Read the article

  • Static / Shared Helper Functions vs Built-In Methods

    - by Nathan
    This is a simple question, but a design consideration that I often run across in my day-to-day development work. Let's say that you have a class that represents some kind of collection:
        Public Class ModifiedCustomerOrders
            Public Property Orders As List(Of ModifiedOrders)
        End Class
    Within this class you do all kinds of important work, such as combining many different information sources and, eventually, building the modified customer orders. Now, you have different processes that consume this class, each of which needs a slightly different slice of the ModifiedCustomerOrders items. To enable this, you want to add filtering functionality. How do you go about this? Do you:
    1. Add filtering calls to the ModifiedCustomerOrders class so that you can say: MyOrdersClass.RemoveCanceledOrders()
    2. Create a Static / Shared "tooling" class that allows you to call: OrdersFilters.RemoveCanceledOrders(MyOrders)
    3. Create an extension method to accomplish the same feat as #2 but with less typing: MyOrders.RemoveCanceledOrders()
    4. Create a "Service" method that handles getting the orders as appropriate to the calling function, while using one of the previous approaches "under the hood": OrdersService.GetOrdersForProcessA()
    5. Others?
    I tend to prefer the tooling / extension-method approaches, as they make testing a little bit simpler. Although I dependency-inject all my sourcing data into ModifiedCustomerOrders, having the filtering as part of the class makes it a little bit more complicated to test. Typically, I choose to use extension methods where I am doing parameterless transformations / filters. As they get more complex, I will move the logic into a static class instead. Thoughts on this approach? How would you approach it?

    Read the article

  • Quadcopters Play Catch [Video]

    - by Jason Fitzpatrick
    Working like a group of hive-minded bees, these quadcopters come off as almost playful with their ball-throwing antics. Courtesy of the folks at the Swiss Federal Institute of Technology in Zurich's Institute for Dynamic Systems and Control, we're treated to a video of three quadcopters playing catch in the research facility's Flying Machine Arena. They explain the processes demonstrated in the video: This video shows three quadrocopters cooperatively tossing and catching a ball with the aid of an elastic net. To toss the ball, the quadrocopters accelerate rapidly outward to stretch the net tight between them and launch the ball up. Notice in the video that the quadrocopters are then pulled forcefully inward by the tension in the elastic net, and must rapidly stabilize in order to avoid a collision. Once recovered, the quadrotors cooperatively position the net below the ball in order to catch it. Because they are coupled to each other by the net, the quadrocopters experience complex forces that push the vehicles to the limits of their dynamic capabilities. To exploit the full potential of the vehicles under these circumstances requires several novel algorithms.

    Read the article

  • Why is my HDD coming back from standby?

    - by Pablo
    My hard drives, connected to an Ubuntu server, are producing the following log entries exactly every 5 minutes:
        Nov 1 14:10:50 localhost kernel: [ 1602.884936] ata2.00: hard resetting link
        Nov 1 14:10:51 localhost kernel: [ 1603.226804] ata2.01: hard resetting link
        Nov 1 14:10:52 localhost kernel: [ 1604.274533] ata2.00: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        Nov 1 14:10:52 localhost kernel: [ 1604.274548] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        Nov 1 14:10:52 localhost kernel: [ 1604.356669] ata2.00: configured for UDMA/133
        Nov 1 14:10:52 localhost kernel: [ 1604.375247] ata2.01: configured for UDMA/133
        Nov 1 14:10:52 localhost kernel: [ 1604.375265] ata2: EH complete
    I don't think this is related to hard drive failure, because it happens for ALL hard drives connected, and ONLY when I set spindown_time = 12 in /etc/hdparm.conf. The reason I add this value is to put the disks into standby mode after 60 seconds, which does happen (checked with hdparm -C). My first thought was that smartd was running and spinning up the drives, but I couldn't find it in ps -aux | grep smart. Additionally, iostat shows that nothing accessed those drives, since Blk_read and Blk_wrtn remain unchanged. I also killed all processes that might be doing something with the HDDs (e.g. Samba). So I guess the problem lies solely with hdparm... I have no clue where that 5-minute value comes from.
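    A hedged sketch of how one might narrow down what is touching the disks. The block_dump switch exists on 3.x-era kernels like this one but is noisy and has been removed from much newer kernels, so treat this as a debugging aid rather than a fixed recipe:

        # Log every block-level read/write, with the responsible process name, to the kernel log
        echo 1 | sudo tee /proc/sys/vm/block_dump
        sleep 360                      # cover one full 5-minute cycle
        dmesg | tail -n 200            # look for READ/WRITE lines around the link reset
        echo 0 | sudo tee /proc/sys/vm/block_dump

        # Double-check that no SMART poller is installed as a service or cron job
        service smartmontools status 2>/dev/null
        grep -r smart /etc/cron.* 2>/dev/null

        # Current power state and APM setting of one of the drives
        sudo hdparm -C /dev/sdb
        sudo hdparm -B /dev/sdb

    If nothing shows up as an actual read or write, one possible explanation is that the periodic "hard resetting link / EH complete" pattern comes from the SATA error-handling or link power-management layer re-probing drives that have gone to sleep, rather than from a userspace process waking them.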

    Read the article

  • 11.10 system crashes for no apparent reason [closed]

    - by varanoid
    I'm relatively new to Linux, and while I know a fair amount about computers and programming, it certainly isn't my specialty. I have a dual boot with Windows 7, and it has been working very well for me until recently. Just randomly the computer will freeze. The last time I was smart enough to keep the System Monitor open when it happened, and it looks like at the time of freezing "bash" and a bunch of other processes that I don't recognize seem to have flooded the memory. So it looks like my memory is getting overloaded and this is what is crashing the system, but I honestly have no idea what could be doing it. Generally I have a bunch of programs running, but they don't take much RAM or CPU: Transmission, Libre Office Writer, Firefox, Empathy, and Banshee. Sometimes I also have Text Editor and Terminal open, but it crashes regardless. When it crashes, it seems that all of the programs are working fine but things like windows, the taskbar, and operations like Alt+Tab just stop working properly or at all. Sometimes the mouse and keyboard freeze and I have to power off manually. Other than that I don't know what the problem is. The only irregularity I've experienced is that I can't download "Debian package management system" even though other updates download fine.
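    A few low-risk things worth capturing around the next freeze, sketched here as suggestions (they assume you can still reach the box locally or over SSH; file names are the stock Ubuntu ones):

        # Was the kernel's OOM killer involved? (check after a freeze or reboot)
        grep -i -e oom -e "out of memory" /var/log/kern.log /var/log/syslog

        # Snapshot the top memory consumers every minute so there is a record of the build-up
        while true; do ps aux --sort=-%mem | head -n 15 >> ~/mem-snapshots.log; sleep 60; done

        # Current memory and swap headroom
        free -m

    If many identical bash processes really are accumulating, the snapshots (or ps -ef and its PPID column) usually reveal their common parent, which narrows the culprit down to a particular program or cron job spawning them.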

    Read the article

  • Touchpad and Keyboard both stop working after long uptime

    - by Sepero
    I have an Asus N53SM laptop that I leave running for several weeks at a time. I never put it in hibernate or suspend; I only close the lid when I'm not using it. After a few days or weeks of running, the touchpad and keyboard will both lock up (at the same time) for no apparent reason. I could be just surfing the internet when it happens. The touchpad and keyboard seem to only lock up when I'm actively using the laptop (not when idle), which may mean it's related to something I press, but I'm not sure. The touchpad never locks or unlocks when pressing Fn and the designated touchpad-lock key (that combination does not seem to work on Linux). While the touchpad and keyboard are locked, I am able to plug in my USB mouse and successfully use it to control the screen cursor. I can also remotely get into the system with VNC and SSH, and everything seems to run fine there as well. No processes appear out of control. It's just the laptop's physical touchpad and keyboard that are locking up. How might I go about diagnosing this problem? Which system logs should I look at (and is there anything specific to look for in them)? Perhaps I should try reloading some modules? Anything else I should inspect?
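    Since the question of which logs and which modules comes up directly, here is a hedged sketch of the usual checks when the internal (i8042) keyboard and touchpad die while USB input keeps working; it assumes a stock X server with the psmouse driver:

        # Kernel-side complaints around the time of the lockup
        dmesg | grep -i -e i8042 -e psmouse -e atkbd | tail

        # What X currently sees as input devices
        xinput list
        grep -i -e synaptics -e touchpad /var/log/Xorg.0.log | tail

        # Reload the touchpad driver without rebooting (often brings the touchpad back)
        sudo modprobe -r psmouse && sudo modprobe psmouse

    The keyboard half is harder to reload because atkbd is usually built into the kernel, so if both devices drop at once, the dmesg output around the i8042 controller is the most telling place to look.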

    Read the article

  • Get the Latest Security Inside Out Newsletter, October Edition

    - by Troy Kitch
    The latest October edition of the Security Inside Out newsletter is now available and covers the following important security news:
    Securing Oracle Database 12c: A Technical Primer
    The new multitenant architecture of Oracle Database 12c calls for adopting an updated approach to database security. In response, Oracle security experts have written a new book that is expected to become a key resource for database administrators. Find out how to get a complimentary copy. Read More
    HIPAA Omnibus Rule Is in Effect: Are You Ready?
    On September 23, 2013, the HIPAA Omnibus Rule went into full effect. To help Oracle’s healthcare customers ready their organizations for the new requirements, law firm Ballard Spahr LLP and the Oracle Security team hosted a webcast titled “Addressing the Final HIPAA Omnibus Rule and Securing Protected Health Information.” Find out three key changes affecting Oracle customers. Read More
    The Internet of Things: A New Identity Management Paradigm
    By 2020, it’s predicted there will be 50 billion devices wirelessly connected to the internet, from consumer products to highly complex industrial and manufacturing equipment and processes. Find out the key challenges of protecting identity and data for the new paradigm called the Internet of Things. Read More

    Read the article
