Search Results

Search found 23098 results on 924 pages for 'multiple processes'.


  • Are there any concrete examples of where a parallelizing compiler would provide a value-adding benefit?

    - by jamie
    Paul Graham argues that: It would be great if a startup could give us something of the old Moore's Law back, by writing software that could make a large number of CPUs look to the developer like one very fast CPU. ... The most ambitious is to try to do it automatically: to write a compiler that will parallelize our code for us. There's a name for this compiler, the sufficiently smart compiler, and it is a byword for impossibility. But is it really impossible? Can someone provide a concrete example where a parallelizing compiler would solve a pain point? Web apps don't appear to be a problem: just run a bunch of Node processes. Real-time raytracing isn't a problem: the programmers are writing multi-threaded, SIMD assembly language quite happily (indeed, some might complain if we make it easier!). The holy grail is to be able to accelerate any program, be it MySQL, GarageBand, or Quicken. I'm looking for a middle ground: is there a real-world problem that you have experienced where a "smart enough" compiler would have provided a real benefit, i.e., one that someone would pay for? A good answer is one where there is a process where the computer runs at 100% CPU on a single core for a painful period of time. That time might be 10 seconds, if the task is meant to be quick. It might be 500 ms if the task is meant to be interactive. It might be 10 hours. Please describe such a problem. Really, that's all I'm looking for: candidate areas for further investigation. (Hence, raytracing is off the list because all the low-hanging fruit has been feasted upon.) I am not interested in why it cannot be done. There are a million people willing to point to the sound reasons why it cannot be done. Such answers are not useful.
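    For calibration, here is the shape of candidate being asked for: a loop that pins one core at 100% for a painful stretch even though its iterations are independent, so a parallelizing compiler could in principle split it. A minimal Python sketch; the workload itself is invented:

        import time

        # Invented stand-in for a "painful" single-core hotspot: every
        # iteration is independent, so the loop is parallelizable in
        # principle, yet a naive implementation pins exactly one core.
        def score(i):
            acc = 0
            for j in range(2000):
                acc = (acc + i * j) % 1_000_003
            return acc

        start = time.perf_counter()
        results = [score(i) for i in range(20_000)]   # one core at 100%
        print(f"{time.perf_counter() - start:.1f}s on a single core")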

    Read the article

  • How to visualize the design of a program in order to communicate it to others

    - by Joris Meys
    I am (re-)designing some packages for R, and I am currently working out the necessary functions and objects, both internal and for the interface with the user. I have documented the individual functions and objects, so I have the description of all the little parts. Now I need to give an overview of how the parts fit together; the scheme of the motor, so to say. I started by making some flowchart-like graphs in Visio, but that quickly became a clumsy and useless collection of boxes, arrows, and what-not. Hence the question: Is there specific software you can use for visualizing the design of your program? If so, care to share some tips on how to do this most efficiently? If not, how do other designers create the scheme of their programs and communicate that to others? Edit: I am NOT asking how to explain complex processes to somebody, nor how to illustrate programming logic. I am asking how to communicate the design of a program/package, i.e.: the objects (with key features and representation if possible), the related functions (with arguments if possible), and the interrelation between the functions at the interface and the internal functions (I'm talking about an extension package for a scripting language, keep that in mind). So something like the graph shown, but better; that graph is (part of) the interrelations between functions in the old package that I'm now redesigning for obvious reasons :-) PS: I made that graph myself, using code extraction tools on the source and feeding the interrelation matrix to the yEd Graph Editor.
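    One way to automate the last step the poster describes, turning a function-interrelation matrix into something yEd can open, is to emit the plain-text TGF format that yEd imports. A minimal Python sketch; funcs and calls stand in for the output of your own code-extraction step:

        # Sketch: write a function-call adjacency list to TGF for yEd.
        # `funcs` and `calls` are illustrative placeholders for whatever
        # your code-extraction step produces.
        funcs = ["plot_data", "read_data", "validate"]   # node labels
        calls = [(0, 1), (0, 2), (1, 2)]                 # caller -> callee

        with open("interrelations.tgf", "w") as f:
            for i, name in enumerate(funcs, start=1):
                f.write(f"{i} {name}\n")                 # node section
            f.write("#\n")                               # node/edge separator
            for a, b in calls:
                f.write(f"{a + 1} {b + 1}\n")            # edge section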

    Read the article

  • Building Enterprise Smartphone App – Part 4: Application Development Considerations

    - by Tim Murphy
    This is the final part in a series of posts based on a talk I gave recently at the Chicago Information Technology Architects Group. Feel free to leave feedback.

    Application Development Considerations. Now we get to the actual building of your solutions. What are the skills and resources that will be needed in order to develop a smartphone application in the enterprise?

    Language Knowledge. One of the first things you need to consider when picking a platform is its language: which do you have the strongest in-house skill base for, and which can you most easily acquire? If you already have developers who know Java or C#, you may want to use Android or Windows Phone. You should also take into consideration the market availability of developers: if your key developer leaves, how easy is it to find a knowledgeable replacement? A second consideration when it comes to programming languages is the qualities exposed by the languages of a particular platform. How well do the development language and its associated frameworks support things like security and access to the features of the smartphone hardware? This will play into your overall cost of ownership if you have to create this infrastructure on your own.

    Manage Limited Resources. Everything is limited on a smartphone: battery, memory, processing power, network bandwidth. When developing your applications you will have to keep your footprint as small as possible in every way. This means not running unnecessary processes in the background that will drain the battery, and not pulling more data over the airwaves than you have to. You also want to keep your on-device data in as compact a format as possible.

    Mobile Design Patterns. There are a number of design patterns that have either come to life because of smartphone development or have been adapted for this use. The main pattern in the Windows Phone environment is MVVM (Model-View-ViewModel). This is great for overall application structure and separation of concerns; the fun part is trying to keep that separation as pure as possible. Many of the other patterns may or may not have strict definitions, but some that you need to be concerned with are push notification, asynchronous communication, and offline data storage. Real estate is limited on smartphones and even tablets, and you are also limited in the type of controls that can be represented in the UI. This means rethinking how you modularize your application. Typing is also much harder to do, so you want to reduce it as much as possible. This leads to UI patterns. While not what we would traditionally think of as design patterns, the guidance each platform publishes for UI design is critical to the success of your application. If users find the application difficult to navigate, they will not use it.

    Development Process. Because of the differences in development tools required, test devices, and certification and deployment processes, your teams will need to learn new ways of working together. This will include the need to integrate service contracts of back-end systems with mobile applications. You will also want to make sure that you present consistency across different access points to corporate data. Your web site may have more functionality than your smartphone application, but it should have a consistent core set of functionality. This all requires greater communication between sub-teams of your developers.

    Testing Process. Testing of smartphone apps has a lot more to do with what happens when you lose connectivity or the user navigates away from your application. There are many more opportunities for the user or the device to perform disruptive acts, and this should be your main testing concentration aside from the main business requirements. You will need to do things like setting the phone to airplane mode and seeing what the application does, in order to weed out any gaps in how you handle communication interruptions.

    Need for Outside Experts. Since this is a development area that is new to most companies, the need for experts is much greater. Whether these are consultants, vendor representatives, or just development community forums, you will need to establish expert contacts. Nothing is more dangerous for your project timelines than a lack of knowledge, so make sure you know who to call to avoid lengthy delays caused by knowledge gaps.

    Security. Security has to be a major concern for enterprise applications. You aren't dealing with just someone's game standings; you are dealing with a company's intellectual property and competitive advantage. As such you need to start by limiting access to the application itself. Once the user is in the app you need to ensure that the data is secure at all times, both in local storage and across the wire. If a platform doesn't natively support encryption for these functions, you will need to find alternatives to secure your data. You also need to keep secret (encryption) keys obfuscated or locked away outside of the application; otherwise people can disassemble the code and break your encryption.

    Offline Capabilities. As discussed earlier, one of your biggest concerns is not having connectivity, so a good portion of your code may be dedicated to handling loss of connection and reconnection. What do you do if you lose the network? Queue all your transactions and store any supporting data so that operations can continue offline. To support this you will need to determine the available flat-file or local database capabilities of the platform. Any failed transactions will need a retry mechanism, whether automatic or user initiated; this also includes your services, since they will need to be able to roll back partially completed transactions. Whatever you do, don't ignore this area when you are designing your system. (A minimal sketch of such a retry queue appears below.)

    Deployment. Each platform has different deployment capabilities, and some are more suited to enterprise situations than others. Apple's approach is probably the most mature at the moment; prior to the current generation of smartphone platforms it would have been Windows CE. Windows Phone 7 has the limitation that apps have to be distributed through the same network as public-facing applications. You can mark them as private, which means they are only accessible by a direct URL; unfortunately this does not make them undiscoverable (although it is very difficult). This will change with Windows Phone 8, where companies will be able to certify their own applications and distribute them. Given this, Windows Phone applications need to be more diligent with application access in order to keep them restricted to the company's employees. My understanding of the Android deployment schemes is that they are much less standardized than either iOS or Windows Phone; someone would have to confirm or deny that for me, though, since I have not yet put the time into researching this platform further. Given my limited exposure to the iOS and Android platforms I have not been able to confirm this, but there are varying degrees of user involvement in installing and keeping applications updated. At one extreme the user just goes to a website to do the install; in other cases they may need to download files and perform steps to install them.

    Future. Bluetooth: today we use Bluetooth for keyboards, mice, and headsets. In the future it could be used by service techs to interrogate car computers, manufacturing systems, or possibly retail machines. This would open smartphones to greater use as almost a Star Trek tricorder: you would get all your data as well as being able to use it as a universal remote for just about any device or machine. Better corporate-controlled deployment: at least in the Windows Phone world, the upcoming release of Windows Phone 8 will include a private certification and deployment option that is currently not available with Windows Phone 7 (Mango); we currently have to run apps through the Marketplace certification process and use a targeted distribution method. Platform-independent approaches: HTML5 and JavaScript with web services have become a popular topic lately, not only for creating flexible web sites but also for creating cross-platform mobile applications. I'm not yet convinced that this lowest-common-denominator approach is viable in most cases, but it does have its place and seems to be growing. Be sure to keep an eye on it.

    Summary. From my perspective, enterprise smartphone applications can offer a great competitive advantage to many companies. They are not cheap to build and should be approached cautiously. Understand the factors I have outlined in this series, do your due diligence, and see if there is a portion of your business that can benefit from the mobile experience.

    del.icio.us Tags: Architecture, Smartphones, Windows Phone, iOS, Android
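    Following up on the Offline Capabilities section: a minimal, platform-agnostic sketch of the queue-and-retry pattern described there. It is written in Python purely for illustration (a real Windows Phone app would use C# and the platform's local storage APIs), and the flat-file name is an arbitrary example:

        import json, os

        QUEUE_FILE = "pending_tx.json"   # hypothetical local flat-file store

        def load_queue():
            if os.path.exists(QUEUE_FILE):
                with open(QUEUE_FILE) as f:
                    return json.load(f)
            return []

        def save_queue(queue):
            with open(QUEUE_FILE, "w") as f:
                json.dump(queue, f)

        def submit(tx, send):
            """Try to send a transaction; persist it locally on failure."""
            try:
                send(tx)                   # network call supplied by caller
            except ConnectionError:
                queue = load_queue()
                queue.append(tx)           # keep it for a later retry
                save_queue(queue)

        def retry_pending(send):
            """Replay queued transactions once connectivity returns."""
            remaining = []
            for tx in load_queue():
                try:
                    send(tx)
                except ConnectionError:
                    remaining.append(tx)   # still offline; keep it queued
            save_queue(remaining)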

    Read the article

  • Website attack: form submissions triggering emails, related questions

    - by IberoMedia
    We are experiencing website attacks that trigger the submission of a form and send alert emails. The normal form-submission process is to fill in a couple of text fields, and when the user is redirected, the next page processes $_POST. If $_POST exists, the email to the intended form recipients is triggered. What is happening right now is that we are receiving the form-submission emails three at a time with the same information. The information within each batch is the same, but not all of the spam emails contain the same information; each batch of triggered emails has unique information. The form has no CAPTCHA, and if possible we would like to keep it this way. The website has worked fine and had no spamming problems until today. We have monitoring software for the website, but whoever is submitting this form over and over is not being recorded by the tracking software. WHY IS THIS? IS THE PERSON ACTUALLY VISITING THE WEBSITE? The only suspicious visit tracked was on November 10th, and this record ALSO shows three forms submitted (this is how I identified the attacker's possible first visit). Then no incidents until today. WHAT IS THE GOAL of the spam attack? Is the attacker expecting us to respond to the bogus emails? What can they achieve with repeated submission of the form? Why are three emails triggered in a row? Is this indicative that they may be using a script? This is a PHP website. Is there a way for a client to view the PHP code of a page? Thank you
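    One common CAPTCHA-free mitigation for this pattern is a hidden "honeypot" field that scripts fill in but humans never see. The site in question is PHP, but the logic is language-agnostic; this sketch uses Python for illustration, and the field names are invented examples:

        def should_send_email(post):
            """Decide whether a form submission looks human.

            `post` is a dict of submitted fields (the equivalent of PHP's
            $_POST). `trap_url` is a field hidden from humans via CSS;
            scripts that blindly fill every field will populate it and
            give themselves away.
            """
            if post.get("trap_url"):      # honeypot tripped: likely a bot
                return False
            if not post.get("name") or not post.get("message"):
                return False              # required fields missing
            return True

        # Usage sketch:
        # if should_send_email(form_data):
        #     send_alert_email(form_data)   # hypothetical mailer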

    Read the article

  • Fixed Assets Recommended Patch Collections

    - by Cindy A B-Oracle
    After the introduction of Recommended Patch Collections (RPCs) in late 2012, Fixed Assets development has released an RPC about every six months. You may recall that an RPC is a collection of recommended patches consolidated into a single, downloadable patch, ready to be applied. The RPCs are created with the following goals in mind:
    Stability: Address issues that occur often and interfere with the normal completion of crucial business processes, such as period close, as observed by Oracle Development and Global Customer Support.
    Root Cause Fixes: Deliver root cause fixes for data corruption issues that delay period close, normal transaction flow actions, performance, and other issues.
    Compact: While bundling a large number of important corrections, the file footprint is kept as small as possible to facilitate uptake and minimize testing.
    Reliable: Reliable code with multiple customer downloads and comprehensive testing by QA, Support, and Proactive Support.
    There has been a revision to the RPC release process for spring 2014. Instead of releasing product-specific RPCs, development has released a 12.1.3 RPC that is EBS-wide. This EBS RPC includes all product-recommended patches along with their dependencies. To find out more about this EBS-wide RPC, please review Oracle E-Business Suite Release 12.1.3+ Recommended Patch Collection 1 (RPC1) (Doc ID 1638535.1).

    Read the article

  • Why The Athene Group Chose Fusion CRM

    - by Tony Berk
    A guest post by Vikas Bhambri, Managing Partner, The Athene Group. This year, The Athene Group (www.theathenegroup.com) celebrated our tenth anniversary. The company has accomplished a lot in ten years, overcoming a number of hurdles and challenges to grow organically into a 150+ person global company with offices in the US, UK, and India and customers in the US, Canada, and Europe. Now more than ever, given the current global economic and competitive landscape, it was vital that we make some changes to remain successful for the next ten years. There were two key initiatives we discussed internally that would enable us to accomplish this: collaboration and the concept of "insight to action". With our existing Oracle CRM On Demand platform we had components of this, but not the full depth and breadth we were looking for. When we started to discuss Fusion CRM we immediately saw several next-generation tools that would embrace these two objectives. For a consulting and development organization, the collaboration required between business development and consulting delivery is as important as the collaboration required during projects between the project delivery and account management teams. The Activity Streams functionality in Fusion CRM immediately addressed the communication of key discussion topics and exchanges around our clients. Of course, when we saw the Oracle Social Network (which is part of our Fusion CRM roadmap) we were blown away. The combination of OSN and our CRM is going to make us more effective as we discuss and work cohesively on client engagements, ensuring mutual success for both Athene and our clients. When we looked at "insight to action" we saw that we had a great platform when folks were at their desks; unfortunately, a lot of our business development and consulting folks are on the road. Fusion Mobile Sales and Fusion Outlook Desktop provide information to our teams when they are on the go, so that they can provide real-time information and react to real-time information provided by their peers. We are in the early stages of our transformative experience with Fusion CRM, but we believe the platform, along with our people and processes, is going to help us achieve our goals in the future.

    Read the article

  • links for 2010-06-15

    - by Bob Rhubart
    You're invited: Oracle Solaris Day, June 28th, Herzliya - Openomics: How open innovation and technology adoption translates to business value, with stories from our developer support work at Sun ISV Engineering (tags: ping.fm)
    Edwin Biemond: Enriching and Forwarding your data with the Spring Component in SOA Suite 11g PS2 - Oracle ACE Edwin Biemond describes "how easy it is to use Java in the Spring Component, how you can wire this Component to other Components, Services or References adapters." (tags: oracleace soa oracle middleware)
    Venkatakrishnan J: Oracle BI EE 10.1.3.4.1 - Currency Conversions & FX Translations - Part 1 - "As part of the BI EE setup we need to ensure that such local currency transactions are converted to a common reporting currency," says Rittman Mead's Venkatakrishnan. (tags: oracle businessintelligence)
    Richard Veryard: Ecosystem SOA 2 - "What are the problems of large complex sociotechnical systems?" asks Richard Veryard. "How far do SOA and enterprise architecture help to address this problem space, and what else might we need?" (tags: soa entarch)
    Khanderao Kand: Oracle BPM Suite... unified engine - "This Suite is based on the unified process foundation of Oracle Business Process Management Suite 11g. It has the same engine that executes both BPEL and BPMN processes," says Kand. (tags: bpel soa bpm oracle)
    Webcast: Revealing the Secrets that will Re-Energize your Services Strategies - Oracle's Peter Heller and Robert Covington discuss how to overcome the many unforeseen technical and organizational barriers in order to meet the high expectations of dynamic business requirements in this live webcast, July 14, 2010, 9:00 AM PDT / Noon EDT (tags: entarch oracle webcast)

    Read the article

  • Does immutability entirely eliminate the need for locks in multi-processor programming?

    - by GlenPeterson
    Part 1: Clearly, immutability minimizes the need for locks in multi-processor programming, but does it eliminate that need, or are there instances where immutability alone is not enough? It seems to me that you can only defer processing and encapsulate state so long before most programs have to actually DO something. If a program performs actions on multiple processors, something needs to collect and aggregate the results. All this involves multi-process communication before, after, and possibly during some transformations. The start and end states of the machines are different. Can this always be done with no locks, just by throwing out each object and creating a new one instead of changing the original (a crude view of immutability)? What cases still require locking? I'm interested in both the theoretical/academic answer and the practical/real-world answer. I know a lot of functional programmers like to talk about "no side effects", but in the "real world" everything has a side effect: every processor cycle takes time and electricity and machine resources away from other processes. So I understand that there may be more than one perspective to answer this question from. If immutability is safe, given certain bounds or assumptions, I want to know exactly what the borders of the "safety zone" are. Some examples of possible boundaries: I/O; exceptions/errors; interfaces with programs written in other languages; interfaces with other machines (physical, virtual, or theoretical). Special thanks to @JimmaHoffa for his comment, which started this question!
    Part 2: Multi-processor programming is often used as an optimization technique, to make some code run faster. When is it faster to use locks vs. immutable objects? Given the limits set out in Amdahl's Law, when can you achieve better overall performance (with or without the garbage collector taken into account) with immutable objects vs. locking mutable ones?
    Summary: I'm combining these two questions into one to try to get at where the bounding box is for immutability as a solution to threading problems.
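    One concrete illustration of the aggregation point raised above: even when worker threads compute only on immutable inputs and produce immutable results, collecting those results is a communication step that still needs synchronization. In this minimal Python sketch the locking has not disappeared; it has moved inside queue.Queue:

        import threading
        from queue import Queue

        def worker(data, results):
            # `data` is an immutable tuple; no shared state is mutated...
            total = sum(x * x for x in data)
            # ...but publishing the result is a communication step that
            # must be synchronized. Queue does this with an internal lock.
            results.put(total)

        inputs = [tuple(range(i, i + 1000)) for i in range(8)]  # immutable chunks
        results = Queue()

        threads = [threading.Thread(target=worker, args=(chunk, results))
                   for chunk in inputs]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

        # Aggregation: the start and end states of the machine differ, and
        # the collection point is where locking (inside Queue) was still needed.
        grand_total = sum(results.get() for _ in inputs)
        print(grand_total)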

    Read the article

  • Player sprite moving slower on iPhone 4

    - by nvillec
    I just finished getting movement/jump animation working for a player sprite in Xcode using Cocos2D. The basic movement algorithm is a timer that updates every 0.01 s, changing the sprite position to (sprite.position.x + xVel, sprite.position.y + yVel). Each time a movement button is tapped, the appropriate velocity (initialized to 0) is changed to whatever speed I choose; a stop-movement button then returns the velocity to 0. It's not an ideal solution, but I'm very new at this and stoked to at least have it working with little help from the internet. So I may not have explained that perfectly, but it is in fact working to my satisfaction in Xcode's iPhone Simulator. However, when I build it for my device and run it on my phone, the sprite's movement speed is noticeably slower than in Xcode. At first I thought it must have to do with the resolution of the iPhone 4, making the sprite's movement path twice as long, but I found that if I pull up the multitask bar and then return to the app, the speed will sometimes jump back to normal. My second theory was that the code is just inefficient and is bogging things down, but I would see that reflected in the frame rate, wouldn't I? It stays at 59-60 the whole time, and the spritesheet animation runs at the correct speed. Has anyone experienced this? Is this a really obvious issue that I'm completely missing? Any help (or tips for optimizing my approach to movement) would be much appreciated!
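    For what it's worth, the usual cure for movement speed that differs between simulator and device is to scale movement by the real elapsed time instead of assuming the timer fires exactly every 0.01 s (in Cocos2D, schedule an update that receives a dt parameter). A sketch of the idea in Python pseudocode rather than Objective-C:

        # Frame-rate-independent movement: position changes by velocity *
        # elapsed time, so a late-firing timer moves the sprite farther
        # on that frame to compensate.
        import time

        class Sprite:
            def __init__(self):
                self.x, self.y = 0.0, 0.0
                self.vx, self.vy = 120.0, 0.0   # points per SECOND, not per tick

            def update(self, dt):
                self.x += self.vx * dt          # dt = seconds since last frame
                self.y += self.vy * dt

        sprite = Sprite()
        last = time.monotonic()
        for _ in range(100):                    # stand-in for the game loop
            now = time.monotonic()
            sprite.update(now - last)           # pass real elapsed time
            last = now
            time.sleep(0.01)                    # sleep may overshoot; dt absorbs it
        print(round(sprite.x, 1))               # ~ elapsed seconds * 120 either way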

    Read the article

  • Introduction to Oracle ADF

    - by Arda Eralp
    The Oracle Application Development Framework (Oracle ADF) is an end-to-end application framework that builds on Java Platform, Enterprise Edition (Java EE) standards and open-source technologies. You can use Oracle ADF to implement enterprise solutions that search, display, create, modify, and validate data using web, wireless, desktop, or web services interfaces. Because of its declarative nature, Oracle ADF simplifies and accelerates development by allowing users to focus on the logic of application creation rather than coding details. Used in tandem, Oracle JDeveloper 11g and Oracle ADF give you an environment that covers the full development lifecycle from design to deployment, with drag-and-drop data binding, visual UI design, and team development features built in. In line with community best practices, applications you build using the Fusion web technology stack achieve a clean separation of business logic, page navigation, and user interface by adhering to a model-view-controller architecture:
    - The model layer represents the data values related to the current page
    - The view layer contains the UI pages used to view or modify that data
    - The controller layer processes user input and determines page navigation
    - The business service layer handles data access and encapsulates business logic
    Each ADF module fits in the Fusion web application architecture. The core module in the framework is ADF Model, a data binding facility. The ADF Model layer enables a unified approach to bind any user interface to any business service, without the need to write code. The other modules that make up a Fusion web application technology stack are:
    - ADF Business Components, which simplifies building business services.
    - ADF Faces rich client, which offers a rich library of AJAX-enabled UI components for web applications built with JavaServer Faces (JSF).
    - ADF Controller, which integrates JSF with ADF Model. The ADF Controller extends the standard JSF controller by providing additional functionality, such as reusable task flows that pass control not only between JSF pages, but also between other activities, for instance method calls or other task flows.
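    To make the layer separation concrete, here is a schematic sketch of the same model-view-controller split in plain Python. It illustrates the generic pattern described above, not ADF's actual declarative implementation; all names are invented:

        # Schematic MVC separation; names are illustrative only.

        class EmployeeService:                   # business service layer
            """Handles data access and encapsulates business logic."""
            _rows = [{"id": 1, "name": "King"}]
            def find(self, emp_id):
                return next(r for r in self._rows if r["id"] == emp_id)

        class EmployeeModel:                     # model layer
            """Binds the UI to the business service; the view never sees it."""
            def __init__(self, service):
                self.service = service
            def value(self, emp_id):
                return self.service.find(emp_id)

        def employee_view(data):                 # view layer
            return f"Employee: {data['name']}"

        def controller(request, model):          # controller layer
            """Processes user input and decides which view to render."""
            emp = model.value(request["id"])
            return employee_view(emp)

        print(controller({"id": 1}, EmployeeModel(EmployeeService())))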

    Read the article

  • Now Available: Profit November 2012

    - by user462779
    The November 2012 issue of Profit is now available. In the five years I've worked on Profit, there has been measurable interest in content related to project management. Stories featuring project management as a key component have resulted in extra clicks, likes, and RTs (for you Twitter users) from our readers. I've chatted about this with Oracle customers, partners, and experts and received an assortment of ideas about why this might be. This issue of Profit is a bit of a culmination of those conversations and of the trends that are driving interest in project management best practices. Also, two online developments for Profit: check out my newly relaunched blog, Editor's Notebook, at blogs.oracle.com/profit, where readers can get a peek at the development of each issue of Profit as it happens. We've also launched a new LinkedIn group for our social media-inclined readers. In this issue:
    - Three Keys to Project Management: What can organizations with world-class project management teach the rest of us?
    - Strong Medicine: Gilead Sciences simplifies business processes to establish a foundation for continued growth.
    - Architects of Reform: Enterprise architecture plays an essential role in establishing Oregon as a leader in healthcare reform.
    - Answering the Call: Turkcell CIO Ilker Kuruoz finds IT-powered growth and innovation to be the calling card for success.
    - Projected Results: Sound project management practices and technology can have an immediate impact on the bottom line.
    - Preparing for Impact: Plans for dealing with enterprise information will define the big data winners.
    Is one issue of Profit not enough to get you through to February? Visit the Profit archives, or follow @OracleProfit on Twitter for a daily dose of enterprise technology news from Profit.

    Read the article

  • Report from OpenWorld Shanghai

    - by jmorourke
    Oracle OpenWorld Shanghai 2013 was held July 22nd-25th at the International Expo Center in Shanghai, China. The conference drew over 19,000 attendees from 44 countries. In addition, 580 CxOs attended the Executive Edge program, and 430+ partners attended the Oracle Partner Network Exchange. The conference included a number of sessions on Big Data, Business Analytics, Business Intelligence, and Enterprise Performance Management delivered by Oracle, our partners, and customers. I had the pleasure of attending the conference and delivered three sessions focused on Oracle's Hyperion Enterprise Performance Management (EPM) applications. Each of my sessions was well attended, and in a few cases standing room only, so there is clearly a lot of interest in the China market in EPM. The EPM and BI demo pods in the DemoGrounds at the conference also received a lot of traffic. In addition to the conference sessions I delivered, I had several meetings with customers and partners in Shanghai. These sessions and meetings made clear the interest that customers in China have in improving their planning, management reporting, financial reporting, and profitability management processes. In fact, with the China Ministry of Finance now standardizing on XBRL for annual reporting across multiple agencies in China, there is a great opportunity here for our disclosure management application. One interesting finding is that the China market may not be ready for cloud-based applications, as many companies are state-owned and have security concerns, so on-premise applications are likely to see continued demand. For more information about the Oracle OpenWorld China 2013 conference, please check the web site: http://www.oracle.com/events/apac/cn/en/openworld/index.html. And don't forget, Oracle OpenWorld San Francisco 2013 is just around the corner in September 2013. Please check the web site for registration and content information: http://www.oracle.com/openworld/index.html

    Read the article

  • Online Media Daily: Oracle Takes Social Marketing Seriously

    - by Kathryn Perry
    In the article published on Nov 12, 2012, titled "Oracle Integrates Social Marketing Into Enterprise To Gain Marketing Revs," Online Media Daily explores Oracle's approach to social marketing. The publication says that Oracle is focused on showing marketers how to integrate social data into corporate business processes and how to "socialize" the corporate world. The article goes on to state: "Enterprise software companies like Oracle, SAP, IBM, Salesforce and Microsoft have been slowly building up an expertise in social marketing to integrate the data into traditional enterprise resource planning, and customer relationship management tools into social marketing tools." Meg Bear, VP of cloud social platform at Oracle, sees the integration with ERP systems as a differentiator for the company. Oracle Social Relationship Management launched last month. It integrates social data into traditional enterprise applications like Oracle Fusion Marketing, Oracle Fusion Sales Catalog, Oracle ATG Web Commerce, and Oracle ERP. The post goes on to quote a Forrester analyst: "There's room for any process-driven application to run more efficiently, especially if they're socially enabled," said Rob Koplowitz, VP and principal analyst at Forrester Research. "It takes the human part of the process not generally captured today to provide better access to content, information and collective actions." Koplowitz said several acquisitions support Oracle's long-term vision: to layer social on top of other enterprise apps, like its ERP platform. With many great acquisitions under our belt and organically grown social tools, the market recognizes that Oracle is poised to seize the moment in socially enabled business apps. Continue reading the full article here.

    Read the article

  • How can I discourage the use of Access?

    - by Greg Buehler
    Let's pretend that a very large company (revenue numbers with more than 8 figures) is looking to do a refresh on a software system, particularly the dashboard used by employees. This system was originally put together in the early 1990s to handle inventory tracking and storage across a variety of facilities (10+). Since this large company is now in the process of implementing some of these inventory processes with SAP, they are in need of a major refresh. The existing system:
    - A Microsoft Access project performs dashboard duties
    - Unique shipping/receiving configurations at different facilities require unique forms and queries within the Access project
    - 3rd-party libraries referenced by Access directly interface with a control system (read: motors, conveyors, and counters)
    - Individual SQL Server 2000 instances (some traces of pre-update SQL Server 6.0 documents) at each facility
    The issue: this system started as a home-brewed inventory tracking scheme with a single internal sponsor, who is still in charge of the technical direction and is prescribing the desired deliverables called for in the current RFP. The RFP describes a system based around a single Access project. Any suggestion that Access is ill suited for a project of this scope is shot down under the reasoning that "it works for the scope now". Are there any case studies, notices, or statements that can be used to dissuade this potential customer from repeating their mistake? Does Microsoft make any statements directly about when it is highly recommended to ditch Access?

    Read the article

  • Customer Experience in the Year Ahead

    - by Christina McKeon
    With 2012 coming to an end soon, we find ourselves reflecting on the year behind us and the year ahead. Now is a good time to reflect on your customer experience initiatives, to see how far you have come and where you need to go. Looking back on your customer experience efforts this year, were you able to accomplish the following?
    - Map customer journeys
    - Align processes across the entire customer lifecycle (buying and owning)
    - Connect all functional areas to the same customer data
    - Deliver consistent and personal experiences across all customer touchpoints
    - Make it easy and rewarding to be your customer
    - Hire and develop talent that drives better customer experiences
    - Tie key performance indicators (KPIs) to each of your customer experience objectives
    This is by no means a complete checklist for your customer experience strategy, but it does help you determine whether you have moved in the right direction for delivering great customer experiences. If you are just getting started with customer experience planning, or were not able to get to everything on your list this year, consider focusing on customer journey mapping in 2013. This exercise really helps your organization put the customer in the center and understand how everything you do affects that customer. At Oracle, we see organizations in various stages of customer experience maturity learn a lot when they go through journey mapping. Companies just starting out with customer experience get a complete understanding of what it is like to be a customer and how everything they do affects that customer. And organizations that are further along with customer experience often find journey mapping helps provide perspective when revisiting their customer experience strategy. Happy holidays and best wishes for delivering great customer journeys in 2013!

    Read the article

  • Implementing Service Level Agreements in Enterprise Manager 12c for Oracle Packaged Applications

    - by Anand Akela
    Contributed by Eunjoo Lee, Product Manager, Oracle Enterprise Manager.
    Service Level Management, or SLM, is a key tool in the proactive management of any Oracle Packaged Application (e.g., E-Business Suite, Siebel, PeopleSoft, JD Edwards E1, Fusion Apps, etc.). The benefit of SLM is that administrators can utilize representative application transactions, which constantly and automatically run behind the scenes, to verify that all of the key application and technology components of an application are available and performing to expectations. A single transaction can verify the availability and performance of the underlying application tech stack in a much more efficient manner than monitoring the same underlying targets individually. In this article, we'll be demonstrating SLM using Siebel Applications, but the same tools and processes apply to any of the packaged applications mentioned above. In this demonstration, we will log into the Siebel application, navigate to the Contacts view, update a contact phone record, and then log out. This transaction exposes availability and performance metrics of multiple Siebel servers, multiple components and component groups, and the Siebel database, in a single unified manner. We can then monitor and manage these transactions like any other target in EM 12c, including placing proactive alerts on them if a transaction is either unavailable or not performing to required levels.
    The first step in the SLM process is recording the Siebel transaction. The following screenwatch demonstrates how to record a Siebel transaction using an EM tool called "OpenScript". A completed recording is called a "synthetic transaction".
    The second step is uploading the synthetic transaction into EM 12c and creating Generic Service Tests. We can create a Generic Service Test to execute our synthetic transactions at regular intervals to evaluate the performance of various business flows. As these transactions run periodically, it is possible to monitor the performance of the Siebel application by evaluating the performance of the synthetic transactions. The process of creating a Generic Service Test is detailed in the next screenwatch. EM 12c provides a guided workflow for all of the key creation steps, including configuring the Service Test, uploading the synthetic test, determining the frequency of the Service Test, establishing beacons, and selecting performance and usage metrics, just to name a few.
    The third and final step is the creation of Service Level Agreements (SLAs). Service Level Agreements allow administrators to utilize the previously created Service Tests to specify expected service levels for application availability, performance, and usage. SLAs can be created for different time periods and for different Service Tests. This last screenwatch demonstrates the process of creating an SLA, as well as highlighting the dashboards and reports that administrators can use to monitor Service Test results.
    Hopefully, this article provides you with a good starting point for creating Service Level Agreements for your E-Business Suite, Siebel, PeopleSoft, JD Edwards E1, or Fusion Applications. Enterprise Manager Cloud Control 12c, with the Application Management Suites, represents a quick and easy way to implement Service Level Management capabilities at customer sites.
    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Google+ | Newsletter

    Read the article

  • Standard Network Tiers in a Distributed N-Tier System

    Distributed N-tier client/server architecture allows segments of an application to be broken up and distributed across multiple locations on a network. Listed below are the standard tiers in a distributed N-tier system.
    End-User Client Tier: The end-user client is responsible for sending requests to and receiving responses from web servers and other application servers, and for translating the responses so that the end user can interpret the data effectively. Business-specific functions: validate data; display data; send data to the web server.
    Web Server Tier: The web server tier processes incoming requests on the HTTP and HTTPS ports. It primarily handles the generation of user interfaces, and it calls the application server when it needs data or business logic. Business-specific functions: send data to the application server; format data for display; validate data.
    Application Server Tier: The application server stores and executes predefined business logic that is applied to various pieces of data as the business determines. The processed data is then returned to the web server. Additionally, this server directly calls the database to obtain and store any data used by the system. Business-specific functions: validate data; process data; send data to the database server.
    Database Server Tier: The database server is responsible for storing and returning all data needed by the calling applications. The primary role of this server is storage: data is stored as needed and can be recalled at any later point. Business-specific functions: insert data; delete data; return data to the application server.
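    A schematic sketch of a request flowing down through the four tiers, with each tier reduced to a single function. Purely illustrative; the data and names are invented:

        # Each tier only talks to the one directly below it.

        DATABASE = {"sku-42": {"qty": 7}}                 # database server tier

        def db_return_data(key):
            return DATABASE[key]

        def app_process(key):                             # application server tier
            record = db_return_data(key)
            record = dict(record, in_stock=record["qty"] > 0)  # business logic
            return record

        def web_format(key):                              # web server tier
            data = app_process(key)
            return f"<p>{key}: qty={data['qty']}, in stock={data['in_stock']}</p>"

        def client_display(key):                          # end-user client tier
            html = web_format(key)                        # "HTTP" call, simplified
            print(html)                                   # translate for the user

        client_display("sku-42")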

    Read the article

  • How Security Products Are Made; An Interview with BitDefender

    - by Jason Fitzpatrick
    Most of us use anti-virus and malware scanners without giving the processes behind their construction and deployment much thought. Get an inside look at security product development with this BitDefender interview. Over at 7Tutorials they took a trip to the home offices of BitDefender for an interview with Catalin Coșoi, BitDefender's Chief Security Researcher. While it's notably BitDefender-centric, it's also an interesting look at the methodology employed by a company specializing in virus/malware protection. Here's an excerpt from the discussion about data gathering techniques: "Honeypots are systems we distribute across our network that act as victims. Their role is to look like vulnerable targets which have valuable data on them. We monitor these honeypots continuously and collect all kinds of malware and information about black-hat activities. Another thing we do is broadcast fake e-mail addresses that are automatically collected by spammers from the Internet. Then, they use these addresses to distribute spam, malware, or phishing e-mails. We collect all the messages we receive at these addresses, analyze them, and extract the required data to update our products and keep our users secure and spam free." Hit up the link below for the full interview.
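    To make the quoted technique concrete, here is a toy sketch of the idea behind a low-interaction honeypot: a listener on an otherwise unused port that records who knocked. Real honeypots emulate vulnerable services and collect samples; this only logs contact attempts, and the port number is arbitrary:

        import socket
        from datetime import datetime, timezone

        # Toy low-interaction honeypot: listen on an unused port, log probes.
        PORT = 2222   # arbitrary; real deployments mimic interesting services

        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", PORT))
        srv.listen(5)

        while True:
            conn, (ip, port) = srv.accept()
            stamp = datetime.now(timezone.utc).isoformat()
            print(f"{stamp} probe from {ip}:{port}")   # collect for analysis
            conn.close()                               # no service is offered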

    Read the article

  • Oracle OpenWorld 2012 is Around the Corner - Discover AutoVue Activities

    - by Pam Petropoulos
    Planning to attend Oracle OpenWorld 2012? If so, be sure to check out the various AutoVue Enterprise Visualization activities that you can take advantage of while in San Francisco.
    AutoVue sessions:
    - CON8381 - Streamline PLM Design-to-Manufacturing Processes with AutoVue Visualization (click here for full session description). Date: Monday, October 1, 2012. Time: 10:45 a.m. - 11:45 a.m. Location: Intercontinental Hotel, Telegraph Hill. Customer speaker: Siew Yeow Loye, Global Foundries.
    - CON8385 - Optimize Asset Performance and Reliability with AutoVue Visualization (click here for full session description). Date: Thursday, October 4, 2012. Time: 2:15 p.m. - 3:15 p.m. Location: Palace Hotel, Gold Ballroom.
    AutoVue demo pods:
    - Demo ID 3122: AutoVue: PLM & Enterprise Visualization. Moscone West, Workstation W-082.
    - Demo ID 3001: Oracle E-Business Suite Enterprise Asset Management and AutoVue Visualization Solutions. Palace Hotel, Level 2, HPU-008.
    Customers are also invited to attend the Oracle OpenWorld 2012 Supply Chain Management Customer Reception on Tuesday, October 2, 2012. This year's event is being held at ROE Lounge, located just 2 blocks from Moscone Center, and offers a casual and upbeat atmosphere so you can mix and mingle with friends and colleagues. This event sold out last year and space is again limited, so register today. Date: Tuesday, October 2, 2012. Time: 6:00 p.m. - 8:00 p.m. Location: ROE Lounge, 651 Howard Street, San Francisco.
    For additional information regarding AutoVue sessions, demos, and activities, be sure to review the AutoVue FocusOn document. Join us at Oracle OpenWorld, September 30-October 4, 2012, and discover new products, solutions, and practices to make you even more successful in your job and in your industry.

    Read the article

  • Why can't I install from software center?

    - by user64720
    There was a problem upgrading to Firefox 13. This error kept returning: /var/cache/apt/archives/firefox_13.0+build1-0ubuntu0.12.04.1_i386.deb W: Waited for dpkg --assert-multi-arch but was not there - dpkgGo (10: There are no "child" processes). Now it seems that there is some problem with dpkg, and I can't install anything from the software center. I already tried to clean the previous package lists with sudo rm /var/lib/apt/lists/* -vf and then sudo apt-get update; it didn't work. When running sudo dpkg --configure -a, I get this: dpkg: problems with dependencies prevent the configuration of firefox-globalmenu: firefox-globalmenu depends on firefox (= 13.0+build1-0ubuntu0.12.04.1); however: the package is not installed. dpkg: error while processing firefox-globalmenu (--configure): problems with dependencies - leaving unconfigured. Errors were encountered while processing: firefox-globalmenu. What should I do to fix this?
    EDIT: I don't have the necessary expertise to understand why what I did worked, or what was causing the conflict, but anyway: since there was a problem with firefox-globalmenu, I went to Synaptic Package Manager, removed this particular package, and reinstalled it. After that, I was able to install Firefox from Synaptic and also any other applications from the software center. However, there was still a problem: when running sudo apt-get update, the following kept returning: Failed to get gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages Verification code hash doesn't match. E: Some archives index failed at being downloaded. They have been ignored, or older copies are used instead. So I typed sudo rm /var/lib/apt/lists/* -vf in the terminal and then sudo apt-get update again, and everything is fine now. I did this before an answer was posted; anyway, I agree the problem was that particular package and its removal, so I'll mark the answer below as accepted.

    Read the article

  • Apache DVB http video Streaming bandwidth or priority problem

    - by igino manfre'
    I'm streaming a few precompressed DVB videos from the cloud. The streams are generated by VLC on "impossible" ports (such as 64085, 64086, etc.) and reverse proxied by Apache on ports 80 and 8080. All the generated streams are listed in "http://95.110.164.61/indexv.html". From an ADSL connection with enough downlink bandwidth, requesting the stream generated by VLC (such as "http://95.110.164.61:64087/mpg2_6.4") plays fluently. Requesting the same stream proxied by Apache ("http://95.110.164.61/mpg2_6.4"), the stream stops and goes. The only situation in which the Apache-proxied streams flow regularly is from a site connected through 64 Mbps of guaranteed bandwidth with an RTT to the server of less than 10 ms. Please note that streams below 2 Mbps are proxied fluently. The system is a single-core Xeon running Windows 2008 R2 with 4 GB of RAM and 1 Gbps of network bandwidth. The drain on computational and bandwidth resources is negligible, with RAM usage always below 50%. On the system I run many VLC streamers; each of them uses a variable amount of RAM (from about 25 to 70 MB). By contrast, the couple of httpd.exe processes use no more than 7 MB. Using Wireshark (on the server) I see that VLC sends the client many more packets directly than Apache does, and the stream is fragmented across many frames. I'm not a programmer, and a newbie with Apache. Can anyone please point me to a specific portion of Apache's huge documentation? Thank you. igino

    Read the article

  • Register to Attend the AutoVue 20.2 Webcast on April 3, 2012

    - by Pam Petropoulos
    Want to learn more about the latest AutoVue 20.2 release? Discover what this latest major release of AutoVue can do for you. Join Celine Beck, AutoVue Product Management and Strategy Manager, during this live webcast to discover how the new release can transform your business processes and extend the value of your visualization investment. Hear how customers and partners are improving their workflows and creating differentiated offerings thanks to AutoVue enterprise visualization.
    Date: Tuesday, April 3, 2012
    Time: 11:00 a.m. EST
    Click here to register for this event.
    For complete details about the new release, also check out the What's New in AutoVue 20.2 datasheet, available here.

    Read the article

  • How does one find out which application is associated with an indicator icon in Ubuntu 12.04?

    - by Amos Annoy
    It is trivial to do this in Ubuntu 10.04; the question is specific to Ubuntu 12.04, and has nothing to do (does it?) with right-click. How can an indicator's icon in Ubuntu 12.04 be matched with the program responsible for its manifestation on the top panel? System Monitor can list all running processes, but how is the correct process found for a given indicator? (Examining System Monitor points out a rather poignant factor in the faster battery depletion and shortened run time: the ambient quiescent CPU rate in 12.04 is now well over 20%, when previously in 10.04 it was well under 10%, between 5% and 7%!) (I have a problem with the battery indicator: it sometimes shows % and other times hh:mm, and it is necessary to know the app and version to get more info on controlling it. Ditto: there are issues with other indicator apps.) Details from "How can I find Application Indicator ID's?" suggest looking at file:///usr/share/indicator-application/ordering-override.keyfile:
    [Ordering Index Overrides]
    nm-applet=1
    gnome-power-manager=2
    ibus=3
    gst-keyboard-xkb=4
    gsd-keyboard-xkb=5
    This solves the battery identification, and presumably nm is NetworkManager for the RF icon, but the envelope, bluetooth, and speaker indicator apps are still a mystery. (Also, the ordering is not correlated.) Mind you, it was simple in the past to just right-click to get the About option and find the app and version. Browsing around and about file:///usr/share/indicator-application/ordering-override.keyfile, I also examined file:///usr/share/indicators and file:///usr/share/indicators/messages/applications/ ... perhaps/presumably the information sought may be buried in file:///usr/share/indicators.
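    One programmatic angle on the question: application indicators register names on the D-Bus session bus, and the bus can report the PID behind each name. A sketch using the python-dbus bindings (assuming the python-dbus package is installed); mapping a given panel icon to its bus name may still take some guesswork:

        import dbus

        # Map D-Bus session-bus names to the processes that own them, which
        # is usually enough to identify the app behind a panel indicator.
        bus = dbus.SessionBus()
        dbus_obj = bus.get_object("org.freedesktop.DBus", "/org/freedesktop/DBus")
        dbus_iface = dbus.Interface(dbus_obj, "org.freedesktop.DBus")

        for name in dbus_iface.ListNames():
            if name.startswith(":"):      # skip anonymous unique names
                continue
            try:
                pid = dbus_iface.GetConnectionUnixProcessID(name)
                with open(f"/proc/{pid}/cmdline") as f:
                    cmd = f.read().replace("\0", " ").strip()
                print(f"{name}  ->  pid {pid}: {cmd}")
            except dbus.DBusException:
                pass                      # name vanished or query not permitted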

    Read the article

  • Ubuntu Slow - What architecture does the Windows Installer install?

    - by Benjamin Yep
    I feel absolutely limited by using Windows, and I need to switch to a Unix environment. I once installed Red Hat on my lappie (screen + external monitor setup; 4 GB RAM; x64; runs fast) and it worked fine, but I saw that the computer cluster that is the birthplace of my Unix knowledge switched to Ubuntu, so naturally I followed. To the point: when I installed Ubuntu onto my machine via the Windows installer, it ran quite slowly. Opening Firefox takes about 8-9 seconds, and it freezes up often, unable to handle its own background processes. I saw in a thread that, perhaps, it is running slowly because the Windows installer is installing an x64 version. Of course, my computer has had no performance issues in the past (except that time with the trojans, but you know, no one is perfect ;) ). Anyway, I uninstalled Ubuntu, freeing up the max allocated memory it took up, and continue to be sad, trapped in my MS world with only a buggy Cygwin. Any assistance is greatly appreciated! :) Thanks ~Ben

    Read the article

  • Joining a company to get experience vs. going alone [closed]

    - by daniels
    My goal is to build a successful web startup, say the next Digg or Twitter, and I am unsure which is the best route to follow as a programmer. I see basically two options:
    - Get an internship/job with an established online company, so that I could get a mentor and learn from more experienced programmers, their processes, methodology, and so on. I could do this for 1-2 years and then quit to start working on my own stuff.
    - Start working on my own projects right away, starting with small ones and moving up gradually. This would give me more control over the things I would be working on, but I would lack contact with more experienced people, so I would need to figure out basic things on my own.
    Doing both is not an option in my opinion, because I would need to put a lot of effort and time into each if I were to learn and improve as a programmer. So is one route definitely better than the other? Is there a third one I am not considering? Background: I already work for myself developing content-based websites and doing SEO, and I am decent at it, so money is not a problem. Last year I started learning to program, first by myself, and now I am enrolled in a CS degree at a good university.

    Read the article
