Search Results

Search found 5859 results on 235 pages for 'customer metrics'.

Page 15 of 235

  • WebCenter Customer Spotlight: Hyundai Motor Company

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary
    Hyundai Motor Company is one of the world’s fastest-growing car manufacturers, ranked fifth-largest in 2011. The company also operates the world’s largest integrated automobile manufacturing facility in Ulsan, Republic of Korea, which can produce 1.6 million units per year. Hyundai undertook a project to improve business efficiency and reinforce data security by centralizing the company’s sales, financial, and car manufacturing documents into a single repository. It chose Oracle Exalogic, Oracle Exadata, Oracle WebLogic Server, and Oracle WebCenter Content 11g, as they provided better performance, stability, storage, and scalability than competing products. As a result, Hyundai cut the overall time spent each day on document-related work by around 85%, saved more than US$1 million in paper and printing costs, laid the foundation for a smart work environment, and supported its future growth in the competitive car industry.

    Company Overview
    Hyundai Motor Company strives to enhance its brand image and market recognition by continuously improving the quality and design of its cars.

    Business Challenges
    To maximize its growth potential, Hyundai Motor Company undertook a project to improve business efficiency and reinforce data security by centralizing the company’s sales, financial, and car manufacturing documents into a single repository. Specifically, the company wanted to:
    - Introduce a smart work environment to improve staff productivity and efficiency, and take advantage of rapid company growth due to new, enhanced car designs
    - Replace a legacy document system managed by individual staff to improve collaboration, the visibility of corporate documents, and the sharing of work-related files between employees
    - Improve the security and storage of documents containing corporate intellectual property, and prevent intellectual property loss when staff leave the company
    - Eliminate delays when downloading files from the central server to a PC
    - Build a large, single document repository to more efficiently manage and share data between 30,000 staff at the company’s headquarters
    - Establish a scalable system that can be extended to Hyundai offices around the world

    Solution Deployed
    After conducting a large-scale benchmark test, Hyundai Motor Company chose Oracle Exalogic, Oracle Exadata, Oracle WebLogic Server, and Oracle WebCenter Content 11g, as they provided better performance, stability, storage, and scalability than competing products.

    Business Results
    - Lowered the overall time spent each day on all document-related work by approximately 85%, from 4.5 hours to around 42 minutes on an average day
    - Saved more than US$1 million per year in printer, paper, and toner costs, and laid the foundation for a completely paperless environment
    - Reduced by 50% the time staff spent requesting and receiving documents about car sales or designs from supervisors, by storing and managing all documents across the corporation in a single repository
    - Cut the time required to draft new-car manufacturing, sales, and design documents by 20%, by allowing employees to reference high-quality data, such as marketing strategy and product planning documents already in the system
    - Enhanced staff productivity at company headquarters by 9% by reducing the document-related tasks of 30,000 administrative and research and development staff
    - Ensured the system could scale to hold 3 petabytes of car sales, manufacturing, and design data by 2013 and be deployed at branches worldwide

    “We chose Oracle Exalogic, Oracle Exadata, and Oracle WebCenter Content to support our new document-centralization system over their competitors as Oracle offers stable storage for petabytes of data and high processing speeds. We have cut the overall time spent each day on document-related work by around 85%, saved more than US$1 million in paper and printing costs, laid the foundation for a smart work environment, and supported our future growth in the competitive car industry.” - Kang Tae-jin, Manager, General Affairs Team, Hyundai Motor Company

    Additional Information
    - Hyundai Motor Company Customer Snapshot
    - Oracle WebCenter Content

    Read the article

  • Rails: The Law of Demeter [duplicate]

    - by user2158382
    This question already has an answer here: Rails: Law of Demeter Confusion (4 answers)

    I am reading a book called Rails AntiPatterns, and it talks about using delegation to avoid breaking the Law of Demeter. Here is its prime example. The authors believe that calling something like this in the controller is bad (and I agree):

        @street = @invoice.customer.address.street

    Their proposed solution is to do the following:

        class Customer
          has_one :address
          belongs_to :invoice

          def street
            address.street
          end
        end

        class Invoice
          has_one :customer

          def customer_street
            customer.street
          end
        end

        @street = @invoice.customer_street

    They state that since you only use one dot, you are not breaking the Law of Demeter here. I think this is incorrect, because you are still going through customer, and through address, to get the invoice's street. I primarily got this idea from a blog post I read: http://www.dan-manges.com/blog/37

    In the blog post, the prime example is:

        class Wallet
          attr_accessor :cash
        end

        class Customer
          has_one :wallet

          # attribute delegation
          def cash
            @wallet.cash
          end
        end

        class Paperboy
          def collect_money(customer, due_amount)
            if customer.cash < due_amount
              raise InsufficientFundsError
            else
              customer.cash -= due_amount
              @collected_amount += due_amount
            end
          end
        end

    The blog post states that although there is only one dot (customer.cash instead of customer.wallet.cash), this code still violates the Law of Demeter:

        "Now in the Paperboy collect_money method, we don't have two dots, we just have one in 'customer.cash'. Has this delegation solved our problem? Not at all. If we look at the behavior, a paperboy is still reaching directly into a customer's wallet to get cash out."

    EDIT: I completely understand and agree that this is still a violation, and that I need to create a method in Wallet called withdraw that handles the payment for me, and that I should call that method inside the Customer class. What I don't get is that, according to this reasoning, my first example still violates the Law of Demeter, because Invoice is still reaching directly into Customer to get the street. Can somebody help me clear up the confusion? I have been trying for the past two days to let this topic sink in, but it is still confusing.
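    The refactoring described in the EDIT can be sketched in a few lines. The example below is written in Python purely to illustrate the language-agnostic principle (tell, don't ask); the class and method names are illustrative, not taken from the book or the blog post:

        class InsufficientFundsError(Exception):
            pass

        class Wallet:
            def __init__(self, cash=0):
                self._cash = cash  # private: only Wallet touches the balance

            def withdraw(self, amount):
                # The wallet enforces its own invariants.
                if amount > self._cash:
                    raise InsufficientFundsError(amount)
                self._cash -= amount
                return amount

        class Customer:
            def __init__(self, wallet):
                self._wallet = wallet

            def pay(self, amount):
                # The customer decides how to pay; callers never see the wallet.
                return self._wallet.withdraw(amount)

        class Paperboy:
            def __init__(self):
                self.collected = 0

            def collect_money(self, customer, due_amount):
                # One message to the customer; no reaching into its internals.
                self.collected += customer.pay(due_amount)

    One common way to see the difference between the two earlier examples: Invoice#customer_street still asks the customer for data that lives two objects away, whereas Paperboy here tells the customer to do something and never learns that a wallet exists.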

    Read the article

  • Database for managing large volumes of (system) metrics

    - by symcbean
    Hi, I'm looking at building a system for managing and reporting stats on web page performance. I'll be collecting a lot more stats than are available in the standard log formats (approximately 20 metrics), but compared to most types of database applications, the base data structure will be very simple. My problem is that I'll be accumulating a lot of data: in the region of 100,000 records (i.e. sets of metrics) per hour. Of course, resources are very limited!

    So that it's possible to sensibly interact with the data, I'd need to consolidate each metric into one-minute bins, broken down by URL; then, for anything more than 1 day old, consolidate into 10-minute bins; then, at 1 week, into hourly bins. At the front end, I want to provide a view (preferably as plots) of the last hour of data, with the facility for users to drill up/down through defined hierarchies of URLs (which do not always map directly to the hierarchy expressed in the path of the URL) and to view different time frames.

    Rather than coding all this myself on top of a relational database, I was wondering if there were tools available which would facilitate both the management of the data and the reporting. I had a look at Mondrian, however I can't see from the documentation whether it's possible to drop the more granular information while maintaining the consolidated views of the data. RRDTool looks promising in terms of managing the data consolidation, but seems rather limited in terms of querying the dataset as a multi-dimensional/relational database. What else should I be looking at?
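    For anyone prototyping the consolidation scheme above before committing to a tool, here is a minimal sketch. It assumes records arrive as (timestamp, url, metric, value) tuples and uses the tiers from the question (1-minute bins, 10-minute bins after a day, hourly bins after a week); keeping a sum and a count per bin means averages stay exact when fine bins are later merged into coarser ones:

        from collections import defaultdict
        import time

        # Age thresholds (seconds) -> bin width (seconds), per the question.
        TIERS = [
            (0,           60),     # fresh data        -> 1-minute bins
            (86_400,      600),    # older than 1 day  -> 10-minute bins
            (7 * 86_400,  3_600),  # older than 1 week -> hourly bins
        ]

        def bin_width(age_seconds):
            """Pick the widest tier whose age threshold has been passed."""
            width = TIERS[0][1]
            for min_age, w in TIERS:
                if age_seconds >= min_age:
                    width = w
            return width

        def consolidate(records, now=None):
            """records: iterable of (timestamp, url, metric, value).
            Returns {(bin_start, url, metric): [sum, count]}."""
            now = now or time.time()
            bins = defaultdict(lambda: [0.0, 0])
            for ts, url, metric, value in records:
                width = bin_width(now - ts)
                bin_start = int(ts // width) * width
                acc = bins[(bin_start, url, metric)]
                acc[0] += value
                acc[1] += 1
            return bins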

    Read the article

  • Displaying performance metrics in a modern web app?

    - by Charles
    We're updating our ancient internal PHP application at work. Right now, we gather extensive performance measurements on every pageview and log them to the database. Additionally, users requested that some of the metrics be displayed at the bottom of the page. This worked out pretty well for us, because the last thing the application does on every request is include the file containing the HTML footer.

    The updated parts of the application use an MVC framework and a Dispatch/Request/Response loop. The page footer is no longer the last thing done; in fact, it could very well be the first thing done, before the rest of the page is created. Because we can grab the Response before it's returned to the user, we could try to include placeholders for the performance metrics in the footer and simply replace them with the actual numbers, but this strikes me as a bad idea somehow.

    How do you handle this in your modern web app? While we're using PHP, I'm curious how it's done in a Ruby/Rails app, and in your favorite Python framework.
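    For the Python case the poster asks about, the placeholder idea can be sketched as WSGI middleware: time the whole request around the wrapped application, then substitute the number into the already-rendered footer. The placeholder token is hypothetical, and a production version would also need to correct the Content-Length header and skip streamed or non-HTML responses:

        import time

        class TimingMiddleware:
            """Wrap a WSGI app and stamp the elapsed request time into a
            placeholder token that the footer template is assumed to emit."""
            PLACEHOLDER = b"<!--RENDER_TIME-->"  # hypothetical footer token

            def __init__(self, app):
                self.app = app

            def __call__(self, environ, start_response):
                start = time.perf_counter()
                # Buffer the body so it can be rewritten after timing stops.
                body = b"".join(self.app(environ, start_response))
                elapsed_ms = (time.perf_counter() - start) * 1000
                stamp = f"{elapsed_ms:.1f} ms".encode()
                return [body.replace(self.PLACEHOLDER, stamp)]

    The same pattern exists natively in most frameworks: an after-request hook in Flask, middleware in Django, or Rack middleware in Rails, all of which see the finished Response object the poster describes.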

    Read the article

  • Stuck with the first record while parsing an XML in Java

    - by Ritwik G
    I am parsing the following XML:

        <table ID="customer">
        <T><C_CUSTKEY>1</C_CUSTKEY><C_NAME>Customer#000000001</C_NAME><C_ADDRESS>IVhzIApeRb ot,c,E</C_ADDRESS><C_NATIONKEY>15</C_NATIONKEY><C_PHONE>25-989-741-2988</C_PHONE><C_ACCTBAL>711.56</C_ACCTBAL><C_MKTSEGMENT>BUILDING</C_MKTSEGMENT><C_COMMENT>regular, regular platelets are fluffily according to the even attainments. blithely iron</C_COMMENT></T>
        <T><C_CUSTKEY>2</C_CUSTKEY><C_NAME>Customer#000000002</C_NAME><C_ADDRESS>XSTf4,NCwDVaWNe6tEgvwfmRchLXak</C_ADDRESS><C_NATIONKEY>13</C_NATIONKEY><C_PHONE>23-768-687-3665</C_PHONE><C_ACCTBAL>121.65</C_ACCTBAL><C_MKTSEGMENT>AUTOMOBILE</C_MKTSEGMENT><C_COMMENT>furiously special deposits solve slyly. furiously even foxes wake alongside of the furiously ironic ideas. pending</C_COMMENT></T>
        <T><C_CUSTKEY>3</C_CUSTKEY><C_NAME>Customer#000000003</C_NAME><C_ADDRESS>MG9kdTD2WBHm</C_ADDRESS><C_NATIONKEY>1</C_NATIONKEY><C_PHONE>11-719-748-3364</C_PHONE><C_ACCTBAL>7498.12</C_ACCTBAL><C_MKTSEGMENT>AUTOMOBILE</C_MKTSEGMENT><C_COMMENT>special packages wake. slyly reg</C_COMMENT></T>
        <T><C_CUSTKEY>4</C_CUSTKEY><C_NAME>Customer#000000004</C_NAME><C_ADDRESS>XxVSJsLAGtn</C_ADDRESS><C_NATIONKEY>4</C_NATIONKEY><C_PHONE>14-128-190-5944</C_PHONE><C_ACCTBAL>2866.83</C_ACCTBAL><C_MKTSEGMENT>MACHINERY</C_MKTSEGMENT><C_COMMENT>slyly final accounts sublate carefully. slyly ironic asymptotes nod across the quickly regular pack</C_COMMENT></T>
        <T><C_CUSTKEY>5</C_CUSTKEY><C_NAME>Customer#000000005</C_NAME><C_ADDRESS>KvpyuHCplrB84WgAiGV6sYpZq7Tj</C_ADDRESS><C_NATIONKEY>3</C_NATIONKEY><C_PHONE>13-750-942-6364</C_PHONE><C_ACCTBAL>794.47</C_ACCTBAL><C_MKTSEGMENT>HOUSEHOLD</C_MKTSEGMENT><C_COMMENT>blithely final instructions haggle; stealthy sauternes nod; carefully regu</C_COMMENT></T>
        </table>

    with the following Java code:

        package xmlparserformining;

        import java.util.List;

        import org.dom4j.Document;
        import org.dom4j.DocumentException;
        import org.dom4j.Node;
        import org.dom4j.io.SAXReader;

        public class XmlParserForMining {

            public static Document getDocument(final String xmlFileName) {
                Document document = null;
                SAXReader reader = new SAXReader();
                try {
                    document = reader.read(xmlFileName);
                } catch (DocumentException e) {
                    e.printStackTrace();
                }
                return document;
            }

            public static void main(String[] args) {
                String xmlFileName = "/home/r/javaCodez/parsing in java/customer.xml";
                String xPath = "//table/T/C_ADDRESS";
                Document document = getDocument(xmlFileName);
                List<Node> nodes = document.selectNodes(xPath);
                System.out.println(nodes.size());
                for (Node node : nodes) {
                    String customer_address = node.valueOf(xPath);
                    System.out.println("Customer address: " + customer_address);
                }
            }
        }

    However, instead of getting all the various customer records, I am getting the following output:

        1500
        Customer address: IVhzIApeRb ot,c,E
        Customer address: IVhzIApeRb ot,c,E
        Customer address: IVhzIApeRb ot,c,E
        ... (the first customer's address, repeated for every node)

    and so on. What is wrong here? Why is it printing only the first record?

    Read the article

  • Three Global Telecoms Soar With Siebel

    - by michael.seback
    Deutsche Telekom Group Selects Oracle's Siebel CRM to Underpin Next-Generation CRM Strategy
    The Deutsche Telekom Group (DTAG), one of the world's leading telecommunications companies and an Oracle customer since 2001, has invested in Oracle's Siebel CRM as the standard platform for its Next-Generation CRM strategy, a move to lower the cost of managing its 120 million customers across its European businesses. Oracle's Siebel CRM is planned to be deployed in Germany and all of the company's European businesses within five years. "...Our Next-Generation strategy is a significant move to lower our operating costs and enhance customer service for all our European customers. Not only is Oracle underpinning this strategy, but it is also shaping the way our company operates and sells to customers. We look forward to working with Oracle over the coming years as the technology is extended across Europe," said Dr. Steffen Roehn, CIO, Deutsche Telekom AG. "The telecommunications industry is currently undergoing some major changes. As a result, companies like Deutsche Telekom need to be more intelligent about the way they use technology, particularly when it comes to customer service. Deutsche Telekom is a great example of how organisations can use CRM not just to improve services, but also to drive more commercial opportunities through the ability to offer highly tailored offers while the customer is engaged online or on the phone," said Steve Fearon, vice president CRM, EMEA. Read more.

    Telecom Argentina S.A. Accelerates Time-to-Market for New Communications Products and Services
    Telecom Argentina S.A. offers basic telephone, urban landline, and national and international long-distance services. "With Oracle's Siebel CRM and Oracle Communications Billing and Revenue Management, we started a technological transformation that allows us to satisfy our critical business needs, such as improving customer service and quickly launching new phone and internet products and services." - Saba Gooley, Chief Information Officer, Wire Line and Internet Services, Telecom Argentina S.A. Read more.

    Türk Telekom Develops Benefits-Driven CRM Roadmap
    Türk Telekom Group provides integrated telecommunication services, from public switched telephone network (PSTN) and global system for mobile communications (GSM) technology to broadband internet. "Oracle Insight provided us with a structured deployment approach that makes sense for our business. It quantified the benefits of the CRM solution, allowing us to engage with the relevant business owners; essential for a successful transformation program." - Paul Taylor, VP Commercial Transformation, Türk Telekom. Read more.

    Read the article

  • What are the quality metrics for RAM?

    - by Hi-Tech KitKat Android
    I have been searching for RAM and found modules of the same capacity sold with different specifications. What are the differences, and how do they compare in performance?

    RAM 1
    - Brand: Transcend
    - Memory Type: 2 GB (8 x 128 MB) DDR2 DIMM
    - Memory Standard: DDR2-800/PC2-6400
    - Compatible Device: PC
    - Pins: 240-pin
    - Burst Length: 4, 8
    - Buffered/Unbuffered: Unbuffered Memory
    - Memory Clock: 400 MHz
    - Technology: DDR2 SDRAM
    - Memory CAS Latency: 4, 5, 6

    RAM 2
    - Brand: Transcend
    - Memory Type: 2 GB (8 x 128 MB) DDR2 DIMM
    - Memory Standard: DDR2-667/PC2-5300
    - Compatible Device: PC
    - Pins: 240-pin
    - Burst Length: 4, 8
    - Buffered/Unbuffered: Unbuffered Memory
    - Memory Clock: 333 MHz
    - Technology: DDR2 SDRAM
    - Memory CAS Latency: 3, 4, 5

    RAM 3
    - Brand: Kingston
    - Memory Type: 2 GB (64 x 256 MB) 800 MHz DDR2 DIMM
    - Compatible Device: PC
    - Pins: 240-pin
    - Error Check: Non-ECC
    - Buffered/Unbuffered: Unbuffered Memory
    - Memory Clock: 200 MHz
    - Technology: DDR2 SDRAM
    - Memory CAS Latency: 6

    What is the effect of each of the following?
    - Memory Type (given as 8 x 128 MB)
    - Memory Clock (given in MHz)
    - CAS Latency (given as 4, 5, 6)

    My requirement is 2 GB of DDR2 for a desktop PC. Please help.
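    A rough way to compare these listings: absolute first-word latency is the CAS latency (in bus cycles) divided by the I/O bus clock. For DDR2, the number in the name (800, 667) is transfers per second, and the I/O bus clock is half of that, which is why the listings above quote clocks inconsistently (400 MHz is the I/O clock and 200 MHz the core clock of the same DDR2-800 speed grade). A back-of-the-envelope comparison, picking one supported CAS value from each listing (an assumption, since each module supports a range):

        def cas_latency_ns(cas_cycles, bus_clock_mhz):
            # First-word latency: CAS cycles divided by the I/O bus clock.
            return cas_cycles / bus_clock_mhz * 1000

        modules = [
            ("RAM 1: DDR2-800, CL5", 5, 400),
            ("RAM 2: DDR2-667, CL4", 4, 333),
            ("RAM 3: DDR2-800, CL6", 6, 400),
        ]
        for name, cl, clock in modules:
            print(f"{name}: {cas_latency_ns(cl, clock):.1f} ns")
        # RAM 1: 12.5 ns, RAM 2: 12.0 ns, RAM 3: 15.0 ns

    So the faster-clocked DDR2-800 parts offer more bandwidth, but the DDR2-667 module at CL4 actually reaches the first word slightly sooner; a loose CAS latency can cancel a clock advantage for latency-bound work. The "8 x 128 MB" style figures describe the module's internal chip organization and matter mainly for motherboard compatibility rather than speed.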

    Read the article

  • Fragmented Log files could be slowing down your database

    - by Fatherjack
    Something that a lot of DBAs sometimes forget is that database log files get fragmented in the same way as data files. The cause is very different, but the effect is the same: too much effort reading and writing data. Data files get fragmented as data is changed through normal system activity; INSERTs, UPDATEs, and DELETEs cause fragmentation, and most experienced DBAs monitor their indexes for fragmentation and deal with it accordingly. However, you don't hear about so many working on their log files.

    How can a log file get fragmented? I'm glad you asked. When you create a database, at least two files are created on the disk storage: an mdf for the data and an ldf for the log file (you can also have ndf files for extra data storage, but that's off topic for now). It is wholly possible to have more than one log file, but in most cases there is little point in creating more than one, as the log file is written to in a wrap-around method (more on that later).

    When a log file is created at the time that a database is created, the file is actually subdivided into a number of virtual log files (VLFs). The number and size of these VLFs depend on the size chosen for the log file. VLFs are also created in the space added to a log file when a log file growth event takes place. Do you have your log files set to auto grow? Then you have potentially been introducing many VLFs into your log file. Let's see how many VLFs we have in a brand new database:

        USE master
        GO
        CREATE DATABASE VLF_Test ON
        (
            NAME = VLF_Test,
            FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test.mdf',
            SIZE = 100,
            MAXSIZE = 500,
            FILEGROWTH = 50
        )
        LOG ON
        (
            NAME = VLF_Test_Log,
            FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
            SIZE = 5MB,
            MAXSIZE = 250MB,
            FILEGROWTH = 5MB
        );
        GO
        USE VLF_Test
        GO
        DBCC LOGINFO;

    This creates a new database with the specified file sizes and returns the DBCC LOGINFO results to the script editor. The DBCC LOGINFO results contain plenty of interesting information, but first note that there are 4 rows, which means 4 VLFs have been created in the log file. The values in the FileSize column are the sizes of each VLF in bytes; you will see that the last one to be created is slightly larger than the others. So, a 5MB log file has 4 VLFs of roughly 1.25 MB each. Let's alter the CREATE DATABASE script to create a bigger log file and see what happens. Replace the log file details with:

        LOG ON
        (
            NAME = VLF_Test_Log,
            FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
            SIZE = 1GB,
            MAXSIZE = 25GB,
            FILEGROWTH = 1GB
        );

    With a bigger log file specified, we get more VLFs. What if we make it bigger again?

        LOG ON
        (
            NAME = VLF_Test_Log,
            FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
            SIZE = 5GB,
            MAXSIZE = 250GB,
            FILEGROWTH = 5GB
        );

    This time even more VLFs are created within our log file: the 5GB log file now comprises 16 VLFs of 320MB each. In fact, these sizes fall into all the ranges that control the VLF creation criteria. What a coincidence! The rules followed when a log file is created, or has its size increased, are pretty basic:

    - If the file growth is lower than 64MB then 4 VLFs are created
    - If the growth is between 64MB and 1GB then 8 VLFs are created
    - If the growth is greater than 1GB then 16 VLFs are created

    The potential for chaos comes if the default values and settings for log file growth are used. By default a database gets a 1MB log file with unlimited growth in steps of 10%. The database we just created is 6MB; let's add some data and see what happens:

        USE vlf_test
        GO
        -- we need somewhere to put the data, so a table is in order
        IF OBJECT_ID('A_Table') IS NOT NULL
            DROP TABLE A_Table
        GO
        CREATE TABLE A_Table
        (
            Col_A int IDENTITY,
            Col_B CHAR(8000)
        )
        GO
        -- Let's check the state of the log file
        -- 4 VLFs found
        EXECUTE ('DBCC LOGINFO');
        GO
        -- We can go ahead and insert some data and then check the state of the log file again
        INSERT A_Table (col_b)
        SELECT TOP 500 REPLICATE('a',2000)
        FROM sys.columns AS sc, sys.columns AS sc2
        GO
        -- insert 500 rows and we get 22 VLFs
        EXECUTE ('DBCC LOGINFO');
        GO
        -- Let's insert more rows
        INSERT A_Table (col_b)
        SELECT TOP 2000 REPLICATE('a',2000)
        FROM sys.columns AS sc, sys.columns AS sc2
        GO 10
        -- insert 2000 rows, in 10 batches, and we suddenly have 107 VLFs
        EXECUTE ('DBCC LOGINFO');

    Well, that escalated quickly! Our log file is split, internally, into 107 fragments after a few thousand inserts. The same happens with any logged transactions; I just chose to illustrate this with INSERTs. Having too many VLFs can cause performance degradation at times of database start up, log backup, and log restore operations, so it's well worth keeping a check on this property.

    How do we prevent excessive VLF creation? Create the database with larger files and larger growth steps, and actively choose to grow your databases rather than leaving it to the Auto Grow event, so that growths are made with an optimal size.

    How do we resolve a situation of a database with too many VLFs? This process needs to be done when the database is under little or no stress so that you don't affect system users. The steps are:

    1. Back up the log:

        BACKUP LOG YourDBName TO YourBackupDestinationOfChoice

    2. Shrink the log file to its smallest possible size:

        DBCC SHRINKFILE(FileNameOfTLogHere, TRUNCATEONLY)

    3. Re-size the log file to the size you want, taking into account your expected needs for the coming months or year:

        ALTER DATABASE YourDBName
        MODIFY FILE
        (
            NAME = FileNameOfTLogHere,
            SIZE = TheSizeYouWantItToBeIn_MB
        )

    If you don't know the file name of your log file, run sp_helpfile while connected to the database that you want to work on and you will get the details you need. The resize step can take quite a while. This is already detailed far better than I can explain it by Kimberly Tripp in her blog, 8-Steps-to-better-Transaction-Log-throughput.aspx. The result of this will be a log file with a VLF count according to the bullet list above.

    Knowing when VLFs are being created
    By complete coincidence, while I have been writing this blog (it's been quite some time from its inception to going live), Jonathan Kehayias from SQLSkills.com has written a great article on how to track database file growth using Event Notifications and Service Broker. I strongly recommend taking a look at it, as this is going to catch any sneaky auto grows that take place and let you know about them right away.

    Hassle-free monitoring of VLFs
    If you are lucky or wise enough to be using SQL Monitor or another monitoring tool that lets you write your own custom metrics, then you can keep an eye on this very easily. There is a custom metric for VLFs (written by Stuart Ainsworth) already on the site, and there are some others there that are very useful, so take a moment or two to look around while you are there.

    Resources
    - MSDN: http://msdn.microsoft.com/en-us/library/ms179355(v=sql.105).aspx
    - Kimberly Tripp from SQLSkills.com: http://www.sqlskills.com/BLOGS/KIMBERLY/post/8-Steps-to-better-Transaction-Log-throughput.aspx
    - Thomas LaRock at Simple-Talk.com: http://www.simple-talk.com/sql/database-administration/monitoring-sql-server-virtual-log-file-fragmentation/

    Disclosure
    I am a Friend of Red Gate. This means that I am more than likely to say good things about Red Gate DBA and Developer tools. No matter how awesome I make them sound, take the time to compare them with other products before you contact the Red Gate sales team to make your order.

    Read the article

  • Package to compare LSA, TFIDF, Cosine metrics and Language Models

    - by gouwsmeister
    Hi, I'm looking for a package (any language, really) that I can use on a corpus of 50 documents to perform inter-document similarity testing with various metrics, like TF-IDF, Okapi BM25, language models, LSA, etc. As a result I want a document similarity matrix, i.e. doc1 is x% similar to doc2, and so on. This is for research purposes, not for production. I specifically want the document similarity matrix because I want to correlate it with human ratings. Thank you in advance!
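    As a point of reference, two of the requested metrics take only a few lines in scikit-learn (one possible package choice, assuming a Python stack is acceptable); docs stands in for the 50-document corpus:

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.decomposition import TruncatedSVD
        from sklearn.metrics.pairwise import cosine_similarity

        docs = ["text of document one ...", "text of document two ..."]  # corpus here

        # TF-IDF + cosine: sim[i, j] is the cosine similarity of doc i and doc j.
        tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
        tfidf_sim = cosine_similarity(tfidf)

        # LSA: truncated SVD of the TF-IDF matrix, then cosine in the latent space.
        k = max(1, min(100, min(tfidf.shape) - 1))  # number of latent dimensions
        lsa = TruncatedSVD(n_components=k).fit_transform(tfidf)
        lsa_sim = cosine_similarity(lsa)

        print(tfidf_sim.shape, lsa_sim.shape)  # each is (n_docs, n_docs)

    Okapi BM25 and query-likelihood language models are better served by IR toolkits (e.g. Lemur/Indri or Terrier), since they are retrieval scores rather than symmetric similarities and need a little extra work to be turned into a matrix.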

    Read the article

  • [SQL] Select 3 latest orders for each customer

    - by Ratiug
    Hi. Here is my table CusOrder, which collects customer orders:

        OrderID  Cus_ID      Product_ID  NumberOrder  OrderDate
        1        0000000001  9           1            6/5/2553 0:00:00
        2        0000000001  10          1            6/5/2553 0:00:00
        3        0000000004  9           2            13/4/2553 0:00:00
        4        0000000004  9           1            17/3/2553 0:00:00
        5        0000000002  9           1            22/1/2553 0:00:00
        7        0000000005  9           1            16/12/2552 0:00:00
        8        0000000003  9           3            13/12/2552 0:00:00
        10       0000000001  9           2            19/11/2552 0:00:00
        11       0000000003  9           2            10/11/2552 0:00:00
        12       0000000002  9           1            23/11/2552 0:00:00

    I need to select the 3 latest orders for each customer, and I need all customers, so it will show each customer and his/her 3 latest orders. How can I do it? Sorry for my bad English.
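    One common shape for this query uses a ROW_NUMBER() window function. The sketch below runs via Python's sqlite3 (SQLite 3.25+ for window functions) with a few of the sample rows, and with the dates normalized to a sortable ISO form, which the original dd/mm strings would also need; the same construction carries over, with minor syntax changes, to SQL Server, Oracle, PostgreSQL, and MySQL 8+:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE CusOrder (
                OrderID INTEGER PRIMARY KEY,
                Cus_ID TEXT,
                Product_ID INTEGER,
                NumberOrder INTEGER,
                OrderDate TEXT  -- ISO dates so text order matches date order
            );
            INSERT INTO CusOrder VALUES
                (1,  '0000000001', 9,  1, '2553-05-06'),
                (2,  '0000000001', 10, 1, '2553-05-06'),
                (10, '0000000001', 9,  2, '2552-11-19'),
                (3,  '0000000004', 9,  2, '2553-04-13'),
                (4,  '0000000004', 9,  1, '2553-03-17'),
                (5,  '0000000002', 9,  1, '2553-01-22');
        """)

        rows = conn.execute("""
            SELECT OrderID, Cus_ID, OrderDate
            FROM (
                SELECT *,
                       ROW_NUMBER() OVER (
                           PARTITION BY Cus_ID       -- number per customer
                           ORDER BY OrderDate DESC   -- newest first
                       ) AS rn
                FROM CusOrder
            )
            WHERE rn <= 3                            -- keep the 3 latest each
            ORDER BY Cus_ID, rn
        """).fetchall()

        for row in rows:
            print(row)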

    Read the article

  • Looking for a recommendation on measuring a high availability app that is using a CDN.

    - by T Reddy
    I work for a Fortune 500 company that struggles with accurately measuring performance and availability for high-availability applications (i.e., apps that are up 99.5% with 5-second page-to-page navigation). We factor in both scheduled and unscheduled downtime to determine this availability number. However, we recently added a CDN into the mix, which complicates our metrics a bit. The CDN now handles about 75% of our traffic, while sending the remainder to our own servers.

    We attempt to measure what we call a "true user experience" (i.e., our testing scripts emulate a typical user clicking through the application). These monitoring scripts sit outside of our network, which means we're hitting the CDN about 75% of the time. Management has decided that we take the worst-case scenario to measure availability: if our origin servers are having problems but the CDN is serving content just fine, we still take a hit on availability, and the same is true the other way around. My thought is that as long as the "user experience" is successful, we should not unnecessarily punish ourselves. After all, a CDN is there to improve performance and availability!

    I'm just wondering if anyone has any knowledge of how other Fortune 500 companies calculate their availability numbers? Look at apple.com, for instance: a storefront that uses a CDN and never seems to be down (unless there is about to be a major product announcement). It would be great to have some hard, factual data, because I don't believe that we need to unnecessarily hurt ourselves on these metrics. We are making business decisions based on these numbers.

    I can say, however, that since these metrics are visible to management, issues get addressed and resolved pretty fast (read: we cut through the red tape pretty quickly). Unfortunately, as a developer, I don't want management to think that the application is up or down because some external factor (i.e., the CDN) is influencing the numbers. Thoughts? (I mistakenly posted this question on StackOverflow; sorry in advance for the cross-post.)
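    The gap between the two policies is easy to quantify. A toy model, using the 75/25 traffic split from the question, hypothetical monthly availabilities for each tier, and the simplifying assumption that requests route independently to CDN or origin:

        def worst_case(origin_avail, cdn_avail):
            # Management's policy: either component down counts as downtime.
            return min(origin_avail, cdn_avail)

        def user_experience(origin_avail, cdn_avail, cdn_share=0.75):
            # Availability as the share of user requests that succeed.
            return cdn_share * cdn_avail + (1 - cdn_share) * origin_avail

        origin, cdn = 0.990, 0.999  # hypothetical figures
        print(f"worst case:      {worst_case(origin, cdn):.3%}")       # 99.000%
        print(f"user experience: {user_experience(origin, cdn):.3%}")  # 99.675%

    Under the worst-case rule the weaker tier sets the number outright; under the user-experience rule an origin problem is diluted by the 75% of traffic the CDN keeps serving, which is exactly the discrepancy the poster is arguing about.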

    Read the article

  • How to secure a VM while allowing customer RDS (or equivalent) access to its desktop

    - by ChrisA
    We have a Windows client/(SQL) server application which is normally installed at the customer's premises. We now need to provide a hosted solution, and browser-based isn't feasible in the short term. We're considering hosting the database ourselves, and also hosting the client in a VM. We can set all this up easily enough, so we need to:

    - ensure that the customer can connect easily, and
    - ensure that we suitably restrict access to the VM (and its host, of course).

    We already access the host and guest machines across the internet via RDS, but we restrict access to only our own internal, very small set of static IPs, and of course there's the 2 (or 3?) user limit on RDS connections to a remote server. So I'd greatly appreciate ideas on how to manage the security and the multi-user aspect. We're hoping to do this initially without a large investment in virtualisation infrastructure; it would be one customer only to start with, with perhaps two remote users. Thanks!

    Read the article

  • People, Process & Engagement: WebCenter Partner Keste

    - by Michael Snow
    Within the WebCenter group here at Oracle, discussions about people, process and engagement cross over many vertical industries and products. Amidst our growing partner ecosystem, the community provides us insight into great customer use cases every day. Such is the case with our partner, Keste, who provides us a guest post on our blog today with an overview of their innovative solution for a customer in the transportation industry. Keste is an Oracle software solutions and development company headquartered in Dallas, Texas. As a Platinum member of the Oracle PartnerNetwork, Keste designs, develops and deploys custom solutions that automate complex business processes.

    Seamless Customer Self-Service Experience in the Trucking Industry with Oracle WebCenter Portal
    Keste, Oracle Platinum Partner

    Customer Overview
    Omnitracs, Inc., a Qualcomm company, provides mobility solutions for trucking fleets to companies in the transportation industry. Omnitracs' mobility services include basic communications such as text, as well as advanced monitoring services such as GPS tracking, temperature tracking of perishable goods, load tracking and weight distribution, and many others.

    Customer Business Needs
    Already the leading provider of mobility solutions for large trucking fleets, Omnitracs chose to target smaller trucking fleets as new customers. However, its existing high-touch customer support method would not be a cost-effective or scalable way to manage and service these smaller customers. Omnitracs needed to provide several self-service features to make customer support more scalable while keeping customer satisfaction levels high and the costs manageable. The solution also had to be very intuitive and easy to use. The systems that Omnitracs sells to these trucking customers require professional installation, and smaller customers need to track and schedule the installation, so information captured in Oracle E-Business Suite needed to be readily available for new customers to track these purchases and delivery details. Omnitracs wanted a high-impact user interface to significantly improve customer experience, with the ability to integrate with EBS, provisioning systems, and the CRM systems that were already implemented. Omnitracs also wanted to build an architecture platform that could potentially be extended to other portals. Omnitracs' stated goal was to deliver an "eBay-like" or "Amazon-like" experience for all of its customers so that it could reach a much broader market beyond its large-company customer base.

    Solution Overview
    In order to manage the increased complexity and the growing support needs of global customers, and to improve overall product time-to-market in a cost-effective manner, IT began to deliver a self-service model. This self-service model not only transformed numerous business processes but is also allowing the business to keep up with the growing demands of internal and external customers. The solution was a customer service portal that provides self-service capabilities for large and small customers alike: activation of mobility products, managing add-on applications for the devices (much like the Apple App Store), transferring services when trucks are sold to other companies, as well as deactivation, all without the involvement of a call service agent or sending multiple emails to different Omnitracs contacts.

    [Figure: a conceptual view of the Customer Portal, showing the components that make up the solution.]

    The portal application for transactions was entirely built using ADF 11g R2. Omnitracs' business had a pressing requirement to have a portal available 24/7 for its customers. Since there were interactions with EBS in the back end, downtime on EBS would negate this availability, so Omnitracs devised a decoupling strategy at the database side for the EBS data. The decoupling of the database was done using Oracle Data Guard and completely insulated the solution from any E-Business Suite downtime; the customer has no knowledge of whether EBS is running or not.

    [Figure: two sample screenshots of the portal application built in Oracle ADF.]

    Customer Benefits
    The Customer Portal not only provided the scalability to grow the business but also provided seamless integration with other disparate applications. Some of the key benefits are:
    - Improved customer experience: with a modern look and feel and a portal that has the aspects of an App Store, the customer experience was significantly improved. Page response times went from several seconds to sub-second for all of the pages.
    - Enabled new product launches: after successfully dominating the large-fleet market, Omnitracs now has a scalable solution to sell and manage smaller fleet customers, giving it a huge advantage over its nearest competitors. Dozens of new customers have been acquired via this portal through an onboarding process that now takes minutes.
    - Seamless integrations improve customer support: ADF 11g R2 allowed Omnitracs to bring a diverse list of applications into one integrated solution. This provided a seamless experience for customers, routing them from a marketing-focused application to a customer-oriented portal. Internally, it also gave sales representatives an integrated flow for taking a prospect through the various steps of onboarding them as a customer. Key integrations included: Unity Core; Salesforce.com; Merchant e-Solution for credit cards; custom Omnitracs applications like CUPS and AUTO; security utilizing OID and OVD; and back-end integration with EBS (via Data Guard) and the iQ database.

    Business Impact
    Significant business impacts were realized through the launch of the customer portal. It not only allows the business to push into underserved segments, but also reduces the time spent on customer support, allowing the business to focus more on sales and on identifying the market for new products. Some of the immediate benefits are:
    - The entire onboarding process is now completely automated and completes in minutes. This represents an 85% productivity improvement over the previous process, and it was 160 times faster!
    - With the success of this self-service solution, the business is now targeting about 3X customer growth in the next five years. This represents a tripling of the overall customer base and significant downstream revenue for the ongoing services.
    - 90%+ improvement of the customer onboarding and management process, achieved through single sign-on integration using an OID/OAM solution, performance improvements, and new self-service functionality.
    - Unified login for all customers, partners and internal users enables login to a common portal and seamless access to all other integrated applications targeted at the respective audience.
    - Significantly improved customer experience, with a better look and feel and more user-experience-focused portal screens.
    - Helped sales of the new product by providing an easy way of ordering and activating the product.
    - Data Guard helped increase availability of the portal to 99%+ and made it independent of EBS downtime, giving customers the feel of a highly available portal application.

    Some of the anticipated longer-term benefits are:
    - A platform that can be leveraged to launch any new product introduction and enable all product teams to reach new customers and new markets
    - Easy integration with content management to allow business owners more control of the product catalog
    - Overall reduced TCO through standardization on the Oracle platform
    - Managed IT support cost savings through optimization of the technology skills needed to support and modify this solution

    Read the article

  • Magento - Change Customer Group on purchase of specific product

    - by Gaurav
    Hi, I am developing a website that sells videos in different categories. It needs membership functionality: if a customer purchases a membership, he/she gets all videos free for one year. For that, I can create a membership product, and when any customer purchases this membership product, he/she should fall/switch into a 'Members' customer group; I can then give a 100% discount on all products for customers in that group. But I have no idea how to achieve the switch. Can anyone please guide me on how I can change the customer group on purchase of the membership product? Thanks

    Read the article

  • Oracle UPK Customer Roundtable - Featuring Medtronic's Journey To Support Global Systems Implementations

    - by [email protected]
    Hear about Medtronic's journey of adopting Oracle UPK globally across its SAP, Siebel, and PeopleSoft applications. Register now for this free webinar: Thursday, April 29, 2010, 9:00 am PT. Medtronic's success story highlights how Oracle UPK improved workforce effectiveness, addressed compliance, and ensured end-user adoption. From starting out with a small group of developers using Oracle UPK to having 35 developers creating 18,000 topics, Oracle UPK has become part of Medtronic's learning infrastructure, with multi-language support, help menu integration and much more.

    Read the article

  • John Hitchcock of Pace Describes the Oracle Agile PLM Customer Experience

    John Hitchcock, Senior Manager of Configuration Management at Pace (formerly 2Wire, Inc.), sat down for an interview during Oracle's Innovation Summit with Kerrie Foy, Manager of PLM Product Marketing at Oracle. Learn why his organization upgraded to the latest version of Agile and expanded the footprint to achieve impressive savings and productivity gains across the global, networked product value-chain.

    Read the article

