Search Results

Search found 59295 results on 2372 pages for 'lord of time'.


  • Oracle on Oracle: Is that all?

    - by Darin Pendergraft
    On October 17th, I posted a short blog and a podcast interview with Chirag Andani, talking about how Oracle IT uses its own IDM products. Blog link here. In response, I received a comment from reader Jaime Cardoso ([email protected]) who posted:

    "- You could have talked about how, by deploying Oracle's open-standards-based technology, you were able to integrate any new system in your infrastructure in days. - You could have talked about how, by deploying federation, you were enabling the business side to keep all their options open in terms of companies to buy and sell while maintaining a perfect single view of employees and customers. - You could have talked about how you are now able to cut response times to your audit and security teams to 1/10th of your former times. Instead you spent 6 minutes talking about single sign-on and self-provisioning? If I didn't know your IDM offer so well, I would now be wondering what its differences from Microsoft's offer were. Sorry for not giving a positive comment here, but please: your IDM suite is very good and you simply aren't promoting it well enough."

    So I decided to send Jaime a note asking him about his experience, and to get his perspective on what makes the Oracle products great. What I found out is that Jaime is a very experienced IDM architect with several major projects under his belt.

    Darin Pendergraft: Can you tell me a bit about your experience? How long have you worked in IT, and what is your IDM experience?

    Jaime Cardoso: I started working in "serious" IT in 1998 when I became Netscape's technical specialist in Portugal. Netscape Portugal didn't exist, so I was working for their VAR here. Most of my work at the time was with Netscape's mail server and LDAP server. Since then I've been bouncing between the systems side (Sun resellers, Solaris work, and even a spell with Sun's engineering in the making of a hierarchical storage product, Sun CIS if you know it) and the applications side, mostly LDAP and IDM. Over the years I've done support, service delivery and pre-sales / architecture design of IDM solutions for most of the big customers in Portugal. To name a few projects: the first European deployment of Sun Access Manager (SAPO, Portugal Telecom); the identity repository for all five of the biggest Portuguese banks; and the Portuguese government's federation of services project.

    DP: OK, in your blog response you mentioned three topics. First: using Oracle's standards-based architecture, you were able to integrate any new system in days. Can you give an example? What systems, how long did it take, how many apps/users/accounts/roles, and so on?

    JC: It's relatively easy to design a user management strategy for a static environment, or if you simply assume that you're an <insert vendor here> shop and all your systems will bow to that vendor's will. We've all seen that path, the use of proprietary technologies in interoperability solutions, but then reality kicks in. At an ISP, I recall making the technical decision to use Active Directory as the central authentication system for the entire IT infrastructure. Clients, systems, apps: everything was there. Since a good part of the systems and apps were running on UNIX, a connector was needed so that the UNIX boxes could authenticate against AD. That strategy worked, but each new machine required the component to be installed, the component had to be monitored, and each new app had to be independently certified.

    A self-care user portal was an ongoing project, and AD access assumes the client is inside the domain, something the ISP's customers (and UNIX boxes) weren't, nor had any intention of ever being. When the Windows 2008 rollout was done, Microsoft changed the Active Directory interface. The Windows administrators didn't have enough know-how about directories and the way systems outside the MS world behaved, so at go-live things weren't properly tested and a general outage followed. Several hours and one rollback later, everything was back working. But the ISP still had to change all of its applications to work with the new access methods, and the effort spent on the self-service user portal was reset. To keep the same strategy, they would also have to trust Microsoft not to change interfaces again.

    Simply by putting up an Oracle LDAP server in the middle and replicating the user info from AD into LDAP, most of the problems went away. Even systems for which no AD connector existed had PAM, so integration was done at the OS level, fully supported by the OS supplier. Sun Identity Manager already had a self-care portal combined with a user workflow, so all the clearances had to be given before an account was created or updated. Adding a new system as a client of these authentication services became simply a new checkbox in the OS installer, and even Tru64 systems were integrated for the first time, with five minutes of work by a junior system admin. True, all the Windows clients and MS apps still went to AD for their authentication needs, so from the start everybody knew they weren't 100% free of migration pains, but now they had a single point of problems to look at. If you're looking for numbers: 500K directory entries (users) and 200-300 systems. After the initial setup, I personally integrated about 20 systems/apps against LDAP in one day while being watched by the different IT teams. The internal IT staff did the rest.

    DP: Second: using federation allows the business to keep options open for buying and selling companies, and yet maintain a single view of both employees and customers. What do you mean by this? Can you give an example?

    JC: The market is dynamic. The company that's being bought today will be sold again tomorrow. Companies that spread across different markets may see the regulator force the sale of part of the business for monopoly reasons, and companies present in multiple countries have to comply with different legislation. Our job as IT architects, when addressing customer and employee authentication services, is quite hard, and quite contradictory. On one hand, we need to give all of our employees access to the relevant systems, apps and resources, and marketing is already talking to us, trying to find out who is a customer of the bought company but not of ours, so they can be addressed. On the other hand, we have to do that while keeping in mind that we may have to break up all that effort, and that different countries' legislation may become a problem for a full integration plan. That's a job for user federation. You don't want to be the one telling your president that he will sell that business unit without its customer database (making the deal worth a lot less), or that the buyer will take a copy of your entire customer database with him. Federation enables you to start controlling permissions for users outside of your traditional authentication realm. So what if the people of that company you just bought are keeping their old logins?

    Do you want, because of that, to have a dedicated system for their expense reports? And do you want to keep their sales (and pre-sales) people out of the loop in terms of your group's direction? Control the information flow, establish a federation trust circle, and give access to your apps to users who haven't (yet?) been brought into your internal login systems. You still see your users in a unified view, and you obviously control whether a user has access to any particular application, whether that user is in your local database or stored in a directory on the other side of the world.

    DP: Third: cutting response times for audit and security teams to 1/10th. Is this a real number? Can you give an example?

    JC: No, I don't have any backing for this number. One of the companies I did system administration for has a SOX compliance policy in place (I remind you that I live in Portugal, so this definition of SOX may be somewhat different from what you're used to), and every time the audit team says they'll do another audit, we have to negotiate the size of the sample with them, and we spend about 15 man-days gathering all the required info they ask for. I did some work with Sun's Identity Auditor and, from what I've been seeing, Oracle's product is even better; most of the information they ask for would have been provided in a few hours with the help of this tool. I do stand by what I said here but, to be honest, someone from the Identity Auditor team would do a much better job than me of explaining these time savings.

    Jaime is right: the Oracle IDM products have a lot of business value, and Oracle IT is using them for a lot more than I was able to cover in the short podcast that I posted. I want to thank Jaime for his comments and perspective. We want these blog posts to be informative and honest, so if you have feedback for the Oracle IDM team on any topic discussed here, please post your comments below.
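    The OS-level integration Jaime describes comes down to every box asking the same central directory the same question. Purely as an illustration (this code is not from the original post, and the host name, base DN, and attribute names are hypothetical), a minimal smoke test in Python with the ldap3 library might look like the sketch below: resolve a user in the directory, then try to bind as that user, which is essentially what a PAM LDAP module does at login.

```python
# Minimal sketch, assuming a central LDAP directory replicated from AD.
# The host, base DN and attribute names below are hypothetical.
from ldap3 import ALL, Connection, Server

LDAP_HOST = "ldap.example.com"           # hypothetical directory host
BASE_DN = "ou=people,dc=example,dc=com"  # hypothetical user container

def can_authenticate(uid: str, password: str) -> bool:
    """Return True if the user can bind against the central directory."""
    server = Server(LDAP_HOST, get_info=ALL)
    # First, resolve the user's DN with an anonymous search.
    lookup = Connection(server, auto_bind=True)
    lookup.search(BASE_DN, f"(uid={uid})", attributes=["cn"])
    if not lookup.entries:
        return False
    user_dn = lookup.entries[0].entry_dn
    # Then attempt to bind as that user, which is what PAM's LDAP
    # module effectively does when someone logs in to the box.
    return Connection(server, user=user_dn, password=password).bind()

if __name__ == "__main__":
    print(can_authenticate("jdoe", "not-my-real-password"))
```

    Once every system delegates authentication this way, adding a new client really is configuration rather than integration work, which is what made the five-minute junior-admin setup plausible.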

    Read the article

  • High CPU load - Ubuntu 14.04

    - by watt
    I noticed that sometimes when browsing (with other processes in the background), I get very high CPU load for the browser process (over 100%) and the computer becomes really slow. I tried switching from Firefox (with just a few extensions) to Chromium, but the same thing happens without me visiting graphics-intensive sites, Flash sites or anything like that. I also noticed that python or node (when running "make") produce the same high CPU load from time to time, so this is not necessarily browser-related. When I only have a browser open, it doesn't seem to happen, and everything is fine in Windows 7. I switched from Unity to GNOME 3 with no effect. Specs: Lenovo W510 (4 GB RAM, i7 Q820 @ 1.73 GHz) + up-to-date Ubuntu 14.04 64-bit. Screenshot: http://imgur.com/8MZJNKC Do you guys have any idea why this might happen? Please let me know if there's other info you need. Thanks!

    Read the article

  • Still no keyboard after uninstalling Ubuntu

    - by Muhammad Rushdi Ibrahim
    I installed Ubuntu 11.04 for the first time yesterday. After rebooting for the first time, I couldn't log in because I couldn't type anything using the keyboard. After another reboot, the keyboard failed completely; I could only boot into Windows automatically, since I couldn't choose Ubuntu. Then the problem got worse: I had to use the On-Screen Keyboard to log into Windows. Still no keyboard. When I rebooted, my laptop couldn't reboot at all, and I had to hard reboot. I decided to uninstall Ubuntu using Add/Remove Programs in the Control Panel, and uninstalled it successfully. My laptop now boots automatically into Windows, without the Ubuntu option. However, I still don't have the keyboard! Please help me. Acer Aspire 4935, Windows 7 Ultimate. Thanks.

    Read the article

  • Should I be looking for an alternative to Zen Cart as my business grows?

    - by MarkS
    I created a business website for a family business which is growing. It's my family, and I'm a software developer, but I don't want to reinvent wheels or become a shopping cart programmer. For this business, I need the web store to "just work", but... it gets complicated... There are two parts to this business website. One of them is driven by WordPress, and I use the awesome Thesis theme. This is modern and flexible, and saves me a lot of time on custom coding and styling. I couldn't be more pleased with this arrangement. The other part of the site is a Zen Cart store. Its administration and its flexibility are frustrating, archaic Web 1.0. For the past few years, I keep hearing that the developers are working on a 2.0 version of Zen Cart, but they haven't communicated anything significant in that time other than to say, "When it's ready, we'll let you know." To get what I'm looking for in a cart, I would need to install 6-10 additional mods and do a lot of custom coding. I'm now willing to pay for a top-notch e-commerce solution for a small business that we can grow into a larger business over time. Requirements:

    - Extremely flexible shipping that lets us set up rules per product/category, tables of rates, calculated rates, max package weights, etc. (flexibility like that available with the CEON Advance Shipping Module for Zen Cart)
    - Coupons and gift certificates
    - Manual order entry for phone orders
    - Multi-channel support (we also sell on Amazon and eBay, use Google Base, and want to maintain one set of inventory and keep it current)
    - Decent SEO features
    - Reviews and star ratings on products
    - Easy social networking features for sharing, following, liking, etc.
    - Easy integration with AdWords and analytics tracking
    - Modern and very usable product and store administration (like I was saying, I'm spoiled by WordPress and Thesis)

    At the end of the day, I don't care if it's a hosted solution or if I have to host it myself. I just want something that is going to stay up-to-date, regularly be maintained and improved, and, if I have to update it, something with one-click updates like those in WordPress. Professional webmasters: if you had to run a store/website, but had to spend your time focusing on your sales and marketing efforts rather than diffing PHP files and copying and tweaking them to change even the slightest details of your site, what would you choose?

    Read the article

  • T-SQL Tuesday #21 - Crap!

    - by Most Valuable Yak (Rob Volk)
    Adam Machanic's (blog | twitter) ever-popular T-SQL Tuesday series is being held on Wednesday this time, and the topic is… CRAP. No, not fecal material. But crap code. Crap SQL. Crap ideas that you thought were good at the time, or were forced to do due (doo-doo?) to lack of time.

    The challenge for me is to look back on my SQL Server career and find something that WASN'T crap. Well, there's a lot that wasn't, but for some reason I don't remember those that well. So the additional challenge is to pick one particular turd that I really wish I hadn't squeezed out. Let's see if this outline fits the bill:

    - An ETL process on text files;
    - That had to interface between SQL Server and an AS/400 system;
    - That didn't use SSIS (should have) or BizTalk (ummm, no), but command-line scripting, using Unix utilities(!) via xp_cmdshell;
    - That had to email reports and financial data, some of it sensitive.

    Yep, the smell is coming back to me now, as if it were yesterday…

    As to why SSIS and BizTalk were not options: basically, I didn't know either of them well enough to get the job done (and I still don't). I also had a strict deadline of 3 days, in addition to all the other responsibilities I had, so there was no time to learn them. And seeing how screwed up the rest of the process was:

    - Payment files from multiple vendors in multiple formats;
    - Sent via FTP, PGP-encrypted email, or some other wizardry;
    - Manually opened/downloaded and saved to a particular set of folders (couldn't change this);
    - Once processed, had to be placed BACK in the same folders with the original archived;
    - Two divisions that had to run separately;
    - Plus an additional vendor file in another format on a completely different schedule;
    - So that they could be MANUALLY uploaded into the AS/400 system (couldn't change this either, even if it was technically possible).

    I didn't feel so bad about the solution I came up with, which was naturally:

    - Copy the payment files to the local SQL Server drives, using xp_cmdshell
    - Run batch files (via xp_cmdshell) to parse the different formats using sed, a Unix utility (this was before PowerShell)
    - Use other Unix utilities (join, split, grep, wc) to process the parsed files and generate metadata (size, date, checksum, line count)
    - Run sqlcmd to execute a stored procedure that passed the parsed file names so it would bulk load the data to do a comparison
    - bcp the compared data out to ANOTHER text file so that I could grep that data out of the original file
    - Run another stored procedure to import the matched data into SQL Server so it could process the payments, including file metadata
    - Process payment batches and log which division and vendor they belong to
    - Email the payment details to the finance group (since it was too hard for them to run a web report with the same data… which they ran anyway to compare against the emailed file… which always matched, surprisingly)
    - Email another report showing unmatched payments so they could manually void them… about 3 months afterward
    - All in "Excel" format, using xp_sendmail (SQL 2000 system)
    - Copy the unmatched data back to the original folder locations, making sure to match the file format exactly (if you've ever worked with ACH files, you'll understand why this sucked)

    If you're one of the 10 people who have read my blog before, you know that I love the DOS "for" command. Like, passionately. Like fairy-tale love. So my batch files were riddled with for loops, nested within other for loops, that called other batch files containing for loops. I think there was one section that had 4 or 5 nested for commands. It was wrong, disturbed, and completely unmaintainable by anyone, even myself. Months, even a year, after I left the company, I got calls from someone who had to make a minor change to it, and they called me to talk them out of spraying the office with an AK-47 (a phaser, for you Star Trek TOS fans) after looking at this code.

    The funniest part of this, well, one of the funniest, is that I made the deadline… sort of, I was only a day late… and the DAMN THING WORKED practically unchanged for 3 years. Most of the problems came from the manual parts of the overall process, like forgetting to decrypt the files, or missing/late files, or files saved to the wrong folders. I'm definitely not trying to toot my own horn here, because this was truly one of the dumbest, crappiest solutions I ever came up with. Fortunately, as far as I know, it's no longer in use and someone has written a proper replacement. Today I would knuckle down and do it in SSIS or PowerShell, even if it took me weeks to get it right.

    The real lesson from this crap code is to make things MAINTAINABLE and UNDERSTANDABLE. sed scripting with regular expressions doesn't fit that criterion in any way. If you ever find yourself under pressure to do something fast at all costs, DON'T DO IT. Stop and consider long-term maintainability, not just for yourself but for others on your team. If you can't explain the basic approach in under 5 minutes, it ultimately won't succeed. And while you may love to leave all that crap behind, it may follow you anyway, and you'll step in it again.

    P.S. - if you're wondering about all the manual stuff that couldn't be changed, it was because the entire process had gone through Six Sigma and was deemed the best possible way. Phew! Talk about stink!
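    Rob says that today he would redo this in SSIS or PowerShell. Just to make one step concrete, here is a rough Python sketch (mine, not his original batch code) of the metadata-generation stage that join/split/grep/wc handled: size, date, checksum, and line count for each parsed file, written to a manifest. The folder and file naming are assumptions.

```python
# A rough sketch (not the original batch code) of the metadata step the
# Unix utilities handled: size, modified date, checksum and line count
# for each parsed payment file, written to a manifest a loader could use.
import csv
import hashlib
from datetime import datetime
from pathlib import Path

def file_metadata(path: Path) -> dict:
    """Collect the per-file facts the old pipeline derived with wc et al."""
    stat = path.stat()
    return {
        "name": path.name,
        "size_bytes": stat.st_size,
        "modified": datetime.fromtimestamp(stat.st_mtime).isoformat(),
        "md5": hashlib.md5(path.read_bytes()).hexdigest(),
        "line_count": sum(1 for _ in path.open("rb")),
    }

def write_manifest(folder: str, manifest: str = "manifest.csv") -> None:
    rows = [file_metadata(p) for p in sorted(Path(folder).glob("*.txt"))]
    if not rows:
        return  # nothing parsed yet; leave no empty manifest behind
    with open(manifest, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=list(rows[0]))
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    write_manifest("parsed_payments")  # hypothetical folder name
```

    Whatever the tool, keeping a step like this small and self-describing is exactly the maintainability lesson the post closes with.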

    Read the article

  • New SSIS features and enhancements in Denali – a webinar on 28th June in association with Pragmatic Works

    - by jamiet
    Tomorrow I shall be presenting a webinar entitled “New SSIS features and enhancements in Denali”. The webinar is being hosted by Pragmatic Works and you can sign up for it at Pragmatic Works webinars. The webinar will start at 19:30 BST, and you can check the start time for your timezone at this link: http://www.timeanddate.com/worldclock/fixedtime.html?msg=New+SSIS+features+and+enhancements+in+Denali&iso=20110628T1830 The webinar was arranged a few months ago, and at that time we were hoping that the next Community Technology Preview (CTP) of SQL Server Denali would be available for public consumption; unfortunately it transpires that that is not yet the case, and hence I will be presenting the new features of CTP1, which was released at the start of this year. If you’re not yet familiar with the new features of SSIS that are coming in the next release of SQL Server then please do come and join the webinar. @Jamiet

    Read the article

  • Oracle WebCenter: Social Networking & Collaboration

    - by kellsey.ruppel(at)oracle.com
    We’ve talked in previous weeks about how the key goals of the new release of WebCenter are providing a modern user experience, unparalleled application integration, converging the best of the existing portal platforms into WebCenter, and delivering a Common User Experience Architecture. We’ve provided an overview of Oracle WebCenter and discussed some of the other key goals in previous weeks; this week, we’ll focus on how the new release of Oracle WebCenter provides unprecedented social networking and collaboration.

    We recently talked with Carin Chan, Principal Product Manager at Oracle, on the topic of social networking and collaboration. In today’s work environment, employees have come to expect social and collaborative services to augment their work environment. Whether it is to post a blog or to poll fellow coworkers, employees expect and demand access to highly integrated, collaborative work environments that allow them to quickly contribute at work, whether it is to make informed decisions, contribute on projects, or share knowledge.

    Social and collaborative services from Oracle WebCenter add an immeasurable amount of value to achieving a modern user experience. Oracle WebCenter Services provides rich and comprehensive social computing services, such as wikis, blogs, instant messaging, presence, activity streams and graphs, and polls/surveys, that offer employees access to rich collaborative services to work efficiently. Employees can create pages or spaces that mix and match collaborative services while bringing in data from other applications to share with groups, teams, or organizations. These out-of-the-box social and collaborative services include:

    - People Connections and Activity Streams enable users to quickly assemble and visualize their social business networks and track user activities.
    - Activity Graphs track all user activities in real time and gather intelligence about these users, their connections, and the way they use information, to make educated recommendations and provide on-the-spot information discovery.
    - Wikis and blogs enable the community authoring of documents and sharing of ideas, and also allow for the gathering of feedback and comments on those ideas.
    - Tags and links allow users to easily mark, connect, and share information with others.
    - RSS feeds are available to track new or changed information related to discussion forums, processes, or activities in an Oracle WebCenter environment.
    - Discussion forums enable sharing of group knowledge and easy creation of communities around specific topics.
    - Announcements allow you to manage and publish important news to your user base.
    - Instant Messaging and Presence enable real-time awareness and communication with available users in the context of a business task.
    - Web and Voice Conferencing enables real-time communication with internal and external business users.
    - Lists provide a way to manage list data directly on the web, as well as export and import it from and to Microsoft Excel.
    - Oracle WebCenter Analytics provides comprehensive reporting metrics on activity and content usage within portals or composite applications.
    - Activity Streams allow you to track activities and visualize your business networks.

    While being able to integrate into your portal deployment, these services are also integrated into how users are already working. This includes integration with software such as Microsoft Outlook and Microsoft Office, and with mobile devices such as the Apple iPhone.

    These services are just the tip of the iceberg regarding the social and collaborative services that Oracle WebCenter has to offer your employees. Be sure to keep checking back this week; in future posts, we’ll delve deeper into a few of these collaborative services and discuss how a combination of collaborative services offers a better portal deployment to empower business users. Technorati Tags: UXP, collaboration, enterprise 2.0, modern user experience, oracle, portals, webcenter, social, activity streams, blogs, wikis

    Read the article

  • What's new in the RightNow November 2012 release?

    - by Richard Lefebvre
    What's new in the RightNow November 2012 release? To find out, please watch this tutorial with an embedded demonstration, or read the November 2012 release notes.

    News Facts:

    - The November 2012 release of Oracle’s RightNow CX Cloud Service marks the completion of development efforts for 2012 and continues Oracle’s commitment to enhancing the Oracle RightNow offering following the acquisition.
    - The new release delivers key capabilities designed to help organizations improve customer experiences in order to increase customer acquisition and retention, while reducing total cost of ownership.
    - Part of the Oracle Cloud, Oracle RightNow CX Cloud Service now integrates Oracle RightNow Chat Cloud Service with Oracle Engagement Engine Cloud Service, helping organizations intelligently and proactively engage with customers through the right channel at the right time.
    - Chat solutions have emerged as an important component of a cross-channel customer experience strategy. According to Forrester Research, Inc., chat adoption rose dramatically between 2009 and 2011, from 19% to 37%, and it has the highest satisfaction level of all customer service channels, at 62% satisfaction. (*)
    - To help companies deliver enhanced customer experiences, Oracle has made significant investments in Oracle RightNow Chat Cloud Service throughout 2012. With the addition of rules-based engagement to existing capabilities such as co-browse, mobile chat, and cross-channel knowledge integration with the contact center, all delivered via the cloud, Oracle RightNow Chat Cloud Service is differentiated as the industry-leading chat solution.
    - The Oracle Cloud offers a broad portfolio of software-as-a-service applications, including Oracle Customer Service and Support Cloud Service, which is based on the Oracle RightNow CX Cloud Service.

    New Capabilities. Key Oracle RightNow Chat Cloud Service and other cross-channel capabilities include:

    - Chat Business Rules, with over 70 built-in rule conditions, leverage the Oracle Engagement Engine to help organizations capture rich visitor data and invoke complex actions and triggers. Chat Business Rules allow granular control over when to engage a customer via the chat channel, based on customer behavior, customer profile information, and operational information.
    - Click-to-Call provides the option for a customer to engage with a live agent over the phone during the web browsing experience.
    - Chat Availability Controls provide organizations with the ability to throttle volume through the chat channel based on real-time agent availability and wait-time thresholds. This ability to manage the channel more efficiently allows organizations to provide a better experience to customers using the chat channel.
    - Strategic and Operational Chat Channel Analytics provide better insight into channel and agent productivity, utilization, and effectiveness, with both out-of-the-box reports and ad hoc reports. The new chat channel analytics provide comprehensive metrics with full data transparency.
    - Background Service Updates improve high-availability metrics for Oracle RightNow Chat Cloud Service during service update periods, setting the industry-leading standard for sales and service delivery to customers via the chat channel.

    Additional capabilities include:

    - Improved web developer tools for more efficient self-service user interface design
    - Improved administration for enhanced user session management
    - Increased cross-channel community collaboration
    - Enhanced extensibility widgets and syndication management
    - Streamlined content management and analytics capabilities

    Read the full announcement here.

    Read the article

  • How to track events or e-commerce sales that occur later using Google Analytics?

    - by Anton
    Here's my problem: I have a static site with the Google Analytics tracking code. To buy one of my services, users call me, and when their order is ready (many days later), I send them an e-mail link to a special page (download.php) where I have GA tracking code that is executed the first time they visit, so I can track a "sale". The issue is that GA treats that "sale" as a separate visit, and erroneously shows that only direct visits to my site result in sales. I don't understand how I can view stats (Pages/Visit, Avg. Time on Site, etc.) about the users who eventually bought something. I've tried events and e-commerce tracking with no luck. Please help!
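    One common workaround, offered here as an illustration rather than a confirmed fix for this exact setup, is to tag the e-mailed link with Google Analytics campaign (utm) parameters, so the return visit to download.php is attributed to the order e-mail instead of being counted as a fresh direct visit. A small Python sketch of building such a link follows; the parameter values are hypothetical.

```python
# Sketch: build a campaign-tagged download link for the order e-mail so
# Google Analytics attributes the later visit to the e-mail campaign
# instead of logging it as a new direct visit. Values are hypothetical.
from urllib.parse import urlencode

def tagged_download_link(base_url: str, order_id: str) -> str:
    params = {
        "utm_source": "order-email",    # where the click originates
        "utm_medium": "email",
        "utm_campaign": "order-ready",  # hypothetical campaign name
        "utm_content": order_id,        # distinguishes individual orders
    }
    return f"{base_url}?{urlencode(params)}"

print(tagged_download_link("http://www.example.com/download.php", "A1234"))
# -> http://www.example.com/download.php?utm_source=order-email&...
```

    With the visit attributed to a campaign, an e-commerce transaction recorded on download.php should then roll up under that campaign rather than under direct traffic, which gets closer to the per-buyer stats being asked about.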

    Read the article

  • SPARC T4-4 Delivers World Record First Result on PeopleSoft Combined Benchmark

    - by Brian
    Oracle's SPARC T4-4 servers running Oracle's PeopleSoft HCM 9.1 combined online and batch benchmark achieved a world record of 18,000 concurrent users while executing a PeopleSoft Payroll batch job for 500,000 employees in 43.32 minutes and maintaining online user response times under 2 seconds. This world record is the first result to run online and batch workloads concurrently. The result was obtained with a SPARC T4-4 server running Oracle Database 11g Release 2, a SPARC T4-4 server running the PeopleSoft HCM 9.1 application server, and a SPARC T4-2 server running Oracle WebLogic Server in the web tier. The SPARC T4-4 server running the application tier used Oracle Solaris Zones, which provide a flexible, scalable and manageable virtualization environment. The average CPU utilization was 17% on the SPARC T4-2 server in the web tier, 59% on the SPARC T4-4 server in the application tier, and 35% on the SPARC T4-4 server in the database tier (online and batch), leaving significant headroom for additional processing across all three tiers. The SPARC T4-4 server used for the database tier hosted Oracle Database 11g Release 2, using Oracle Automatic Storage Management (ASM) for database file management, with I/O performance equivalent to raw devices. This is the first three-tier mixed workload (online and batch) PeopleSoft benchmark that also processes a PeopleSoft payroll batch workload.

    Performance Landscape

    PeopleSoft HR Self-Service and Payroll Benchmark:
    - Systems: SPARC T4-2 (web), SPARC T4-4 (app), SPARC T4-4 (db)
    - Users: 18,000
    - Average response time, search: 0.944 sec
    - Average response time, save: 0.503 sec
    - Batch time: 43.32 min
    - Streams: 64

    Configuration Summary

    Application configuration: 1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz; 512 GB memory; 5 x 300 GB SAS internal disks; 1 x 100 GB and 2 x 300 GB internal SSDs; 2 x 10 GbE HBAs; Oracle Solaris 11 11/11; PeopleTools 8.52; PeopleSoft HCM 9.1; Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 031; Java Platform, Standard Edition Development Kit 6 Update 32.

    Database configuration: 1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz; 256 GB memory; 3 x 300 GB SAS internal disks; Oracle Solaris 11 11/11; Oracle Database 11g Release 2.

    Web tier configuration: 1 x SPARC T4-2 server with 2 x SPARC T4 processors, 2.85 GHz; 256 GB memory; 2 x 300 GB SAS internal disks; 1 x 100 GB internal SSD; Oracle Solaris 11 11/11; PeopleTools 8.52; Oracle WebLogic Server 10.3.4; Java Platform, Standard Edition Development Kit 6 Update 32.

    Storage configuration: 1 x Sun Server X2-4 as a COMSTAR head for data (4 x Intel Xeon X7550, 2.0 GHz; 128 GB memory), with 1 x Sun Storage F5100 Flash Array (80 flash modules) and 1 x Sun Storage F5100 Flash Array (40 flash modules); 1 x Sun Fire X4275 as a COMSTAR head for redo logs, with 12 x 2 TB SAS disks and a Niwot RAID controller.

    Benchmark Description

    This benchmark combines the PeopleSoft HCM 9.1 HR Self-Service online workload and the PeopleSoft Payroll batch workload, running against a unified database deployed on Oracle Database 11g Release 2. The PeopleSoft HR Self-Service benchmark kit is an Oracle standard benchmark kit run by all platform vendors to measure performance. It's an OLTP benchmark in which the database SQL statements are moderately complex. The results are certified by Oracle and a white paper is published. PeopleSoft HR Self-Service defines a business transaction as a series of HTML pages that guide a user through a particular scenario. Users are defined as corporate employees, managers, and HR administrators.

    The benchmark consists of 14 scenarios which emulate users performing typical HCM transactions, such as viewing a paycheck, promoting and hiring employees, updating an employee profile, and other typical HCM application transactions. All these transactions are well defined in the PeopleSoft HR Self-Service 9.1 benchmark kit. The benchmark metric is the weighted average search/save response time across all the transactions. The PeopleSoft 9.1 Payroll (North America) benchmark demonstrates system performance for a range of processing volumes in a specific configuration. This workload represents large batch runs typical of an ERP environment during a mass update. The benchmark measures the run times of five application business processes for a database representing a large organization: Paysheet Creation, Payroll Calculation, Payroll Confirmation, Print Advice Forms, and Create Direct Deposit File. The benchmark metric is the cumulative elapsed time taken to complete the Paysheet Creation, Payroll Calculation, and Payroll Confirmation business application processes. The metrics are taken for each respective benchmark while running simultaneously against the same database back end. Specifically, the payroll batch processes are started when the online workload reaches steady state (the maximum number of online users) and overlap with online transactions for the duration of the steady state.

    Key Points and Best Practices

    Two Oracle PeopleSoft domain sets with 200 application servers each were hosted in two separate Oracle Solaris Zones on a SPARC T4-4 server to demonstrate consolidation of multiple application servers, ease of administration, and performance tuning. Each Oracle Solaris Zone was bound to a separate processor set, each containing 15 cores (120 threads in total). The default set (one core from the first and third processor sockets, 16 threads in total) was used for network and disk interrupt handling. This improved performance by reducing memory access latency, using the physical memory closest to the processors, and by offloading I/O interrupt handling to the default set's threads, freeing up CPU resources for application server threads and balancing the application workload across 240 threads.

    See Also: Oracle PeopleSoft Benchmark White Papers (oracle.com); SPARC T4-2 Server (oracle.com, OTN); SPARC T4-4 Server (oracle.com, OTN); PeopleSoft Enterprise Human Capital Management (oracle.com, OTN); PeopleSoft Enterprise Human Capital Management (Payroll) (oracle.com, OTN); Oracle Solaris (oracle.com, OTN); Oracle Database 11g Release 2 Enterprise Edition (oracle.com, OTN).

    Disclosure Statement: Oracle's PeopleSoft HR and Payroll combined benchmark, www.oracle.com/us/solutions/benchmark/apps-benchmark/peoplesoft-167486.html, results 09/30/2012.

    Read the article

  • Gnome panel not found

    - by emilbochnik
    Hi, I installed Ubuntu 10.10 on my laptop; I'm a first-time Ubuntu user. After a successful installation there is only a panel on top, with a small Ubuntu logo on the left and the system/connections, time, keyboard, and volume icons on the right. There is no menu, and I am not able to create a menu; right-clicking on the panel gives no options. I have tried everything, but it could be the most basic thing, as I have no experience with Ubuntu. Please can you help me resolve this issue? Thank you, bochnik

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices. Stage 4: Automated Deployment.

    If you’ve been fortunate enough to get to the stage where you’ve implemented some sort of continuous integration process for your database updates, then hopefully you’re seeing the benefits of that investment: constant feedback on changes your devs are making, advance warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic so you know it’s going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear.

    Our Database Delivery Learning Program consists of four stages, really three: source controlling a database, running continuous integration processes, and then setting up automated deployment (the middle stage is split in two, basic and advanced continuous integration, making four stages in total). If you’ve managed to work through the first three of these stages (source control, basic CI, then advanced CI), you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), that change gets fully tested automatically by your CI server. But this is only part of the story. Great, we know that our updates work, that the upgrade process works, and that the upgrade isn’t going to wipe our 4 TB of production data with a single DROP TABLE. But how do you get this (fully tested) release live?

    Continuous delivery means being always ready to release your software at any point in time. There’s a significant gap between your latest version being tested and it being easily releasable. Just a quick note on terminology: there’s a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: “Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users.”

    There’s another really useful piece here on Simple-Talk, written by Phil Factor, about the need for continuous delivery and how it applies to the database; specifically, the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app). So, hopefully you’re convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or “release management”) process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can’t I just install one of the many release management tools available and, hey presto, I’m ready? If only it were that simple. Below I list some of the areas that it’s worth spending a little time on, where a little planning and prep could go a long way.

    It’s also worth pointing out that this should really be an evolving process. Depending on your starting point, of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you’ve got a CI mechanism in place, you’re certainly a long way down that path. Nevertheless, we’d recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912

    For now, in this post, we’ll look at the following areas for your checklist:

    - You and Your Team
    - Environments
    - The Deployment Process
    - Rollback and Recovery
    - Development Practices

    You and Your Team

    It’s a cliché in the DevOps community that “it’s not all about processes and tools, really it’s all about culture”. As stated in this DevOps report from Puppet Labs: “DevOps processes and tooling contribute to high performance, but these practices alone aren’t enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn’t understood outside of a specific group.” Like most clichés, there’s truth in there: if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it’s an investment with the benefits coming way down the line. But the benefits are huge. For HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as:

    - 2008 to present: overall development costs reduced by 40%
    - Number of programs under development increased by 140%
    - Development costs per program down 78%
    - Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40%)

    But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing; that they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you’re ever struggling to convince someone of the value, I’d strongly recommend just buying them a copy of this book: a great read, and a very practical guide to how it can really work at a large org.

    I’ve spoken to many customers who have implemented database CI and who describe their deployment process as “the point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that’s finished we revert to manual.” This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA “We’re changing everything you do and your toolset next week, to automate most of your role - that’s okay, isn’t it?” isn’t likely to go down well.

    There’s some work here to bring him/her onside: to explain what you’re doing, why there will still be control of the deployment process, and so on. Or, of course, if you’re the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you’d like to change your methodology to start automating your painful release process, but does the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? It’s worth talking to them to find out. As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager’s manager too. As mentioned, unless there’s buy-in “from the top”, you’re going to hit problems when the implementation starts to get rocky (and what tool/process implementations don’t get rocky?!). You need support from someone senior in your organisation: someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress.

    Actions: Get your DBA involved (or whoever looks after live deployments) and discuss what you’re planning to do, or, if you’re the DBA yourself, get the dev team up to speed with your plans. Get your boss involved too, and make sure he/she is bought in to the investment.

    Environments

    Where are you going to deploy to? Really this question is: what environments do you want set up for your deployment pipeline? Assume everyone has “Production”, but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I’ve seen every setup under the sun, and there is often a big difference between “what we want, to do continuous delivery properly” and “what we’re currently stuck with”. Some of these differences are:

    - What we want: each developer with their own dedicated database environment. What we’ve got: a single shared “development” environment, used by everyone at once.
    - What we want: an integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit tests running on that machine. What we’ve got: if you have a CI process running, you’re likely to have some sort of integration server running (even if you don’t call it that!); whether you have a full suite of unit tests running is a different question…
    - What we want: a separate QA environment used explicitly for manual testing prior to release. What we’ve got: “we just test on the dev environments, or maybe pre-production”.
    - What we want: a proper pre-production (or “staging”) box that matches production as closely as possible. What we’ve got: hopefully a pre-production box of some sort. But does it match production closely!?
    - What we want: a production environment reproducible from source control. What we’ve got: a production box which has drifted significantly from anything in source control.

    The big question is: how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you’re going to create and where they’ll be hosted. VMs? Cloud-based? What about size and data issues: what data are you going to include on dev environments, and does it need to be masked to protect access to production data? Often the amount of work here really depends on whether you’re working on a new, greenfield project, or trying to update an existing, brownfield application. There’s a world of difference between starting from scratch with 4 or 5 clean environments (reproducible from source control, of course!) and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have:

    - Dedicated development databases;
    - An integration server used for testing continuous integration and running unit tests [NB: this is the point at which deployments are automatic, without human intervention; each deployment after this point is a one-click (but human) action];
    - QA, where QA engineers use a one-click deployment process to automatically* deploy chosen releases for testing;
    - Pre-production, the environment you use to test the production release process;
    - Production.

    * A note on the use of the word “automatic”: when carrying out automated deployments, this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic, in that it’s not a person manually running through a checklist or set of actions. The deployment still requires a single click from a user.

    Actions: Get your environments set up and ready. Set access permissions appropriately. Make sure everyone understands what the environments will be used for (it’s not a “free-for-all”, with all environments to be accessed, played with and changed by development).

    The Deployment Process

    As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers “How do your database changes get live? How does your manual process work?”:

    1. Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring it into pre-prod.
    2. Again using a schema compare tool, find the differences between the latest version of the database ready to go live (i.e. what the team have been developing) and pre-production. This generates a script.
    3. A user (generally, the DBA) reviews the script. This often involves manually checking updates against a spreadsheet or similar.
    4. Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped).
    5. If all is working, run the script on production.*

    * This assumes there’s no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something into the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem; sign up here at www.sqllighthouse.com if you’re interested in testing early versions.

    There are several variations on this process, some better, some much worse! How do you automate this? In particular, step 3: surely you can’t automate a DBA checking through a script, that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment: whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!) before the deployment process kicks in and automatically deploys that change to the live box. Not for the faint-hearted, and really not something we recommend. At the other extreme, you might be more comfortable with a semi-automated process: the pre-production/production matching process is automated (with an error thrown if these environments don’t match), followed by a manual intervention allowing for script approval by the DBA. Once he/she clicks “Okay, I’m happy for that to go live”, the latter stages automatically take the script through to live (a minimal sketch of such a gate appears at the end of this article). And anything in between, of course, and other variations. But we’d strongly recommend sitting down with a whiteboard and your team, and spending a couple of hours mapping out “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, while still maintaining some sort of continuous control over the process?”

    NB: Most of what we’re discussing here is about production deployments. It’s important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes.

    Actions: Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments: “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, while still maintaining some sort of continuous control over the process?” Repeat for earlier environments (QA and so on).

    Rollback and Recovery

    If only every deployment went according to plan! Unfortunately they don’t, and when things go wrong you need a rollback or recovery plan for what you’re going to do in that situation. Once you move to a more automated database deployment process, you’re far more likely to be deploying more frequently than before: no longer once every 6 months, but maybe once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for. NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem. There are various options, which we’ll explore in subsequent articles, such as:

    - Immediately restore from backup;
    - Have a pre-tested rollback script (remembering that really this is a “roll-forward” script; there’s not really such a thing as a rollback script for a database!);
    - Have fallback environments, for example using a blue-green deployment pattern.

    Different options have pros and cons: some are easier to set up, some require more investment in infrastructure, and of course some work better than others (the key issue with using backups is the loss of the interim transaction data added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism.

    Actions: Work out an appropriate rollback strategy based on how your application and business work, your appetite for investment, and your requirements for a completely failsafe process.

    Development Practices

    This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is intrinsically linked with the patterns and practices used to develop that database and the linked application. So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern “branch by abstraction”. Explained nicely here by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back without data loss, by making incremental updates to the database backward compatible. Slides 103-108 of the following slide deck, from Niek Bartholomeus, explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace

    As these slides show, by making a significant schema change in multiple steps, where each step can be rolled back without any loss of new data, the release team has the opportunity to deliver zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily). There are plenty more great patterns that can be implemented; the book Refactoring Databases, by Scott Ambler and Pramod Sadalage, is a great read if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515

    But the question is: how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there’s a difference here between migrating old projects and starting afresh; with the latter it’s much easier to instigate best practice from the start.

    Actions: For your business, work out how far down the path you want to go, amending your database development patterns towards “best practice”. It’s a trade-off between implementing quality processes and the necessity of doing so (depending on how often you make complex changes). Socialise these changes with your development group; no one likes having “best practice” changes imposed on them, so it’s good to introduce these ideas and the rationale behind them early.

    Summary

    The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning if you want to get the most out of the work and have the implementation go smoothly. We’ve covered some of the checklist of areas to consider, mainly “getting the team ready for the changes that are coming” and “planning out your pipeline, environments, patterns and practices for development”, though there will be more detail, depending on where you’re coming from and where you want to get to. This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.
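    To make the semi-automated gate discussed above concrete, here is a minimal sketch in Python. It is purely illustrative: the schemacompare/schemadeploy commands and the environment names are hypothetical stand-ins, not any particular vendor's tooling. The gate blocks the release if production has drifted from pre-production, rehearses the upgrade on pre-production, and takes the script live only after a human approval.

```python
# Minimal sketch of a semi-automated deployment gate, assuming a
# compare step backed by your schema compare tool of choice. The
# 'schemacompare'/'schemadeploy' CLIs and environment names are
# hypothetical stand-ins for whatever your team actually uses.
import subprocess
import sys

def compare_schemas(source: str, target: str) -> bool:
    """Return True if the two environments' schemas match."""
    result = subprocess.run(
        ["schemacompare", "--source", source, "--target", target],
        capture_output=True,
    )
    return result.returncode == 0

def deploy(script: str, target: str) -> None:
    """Apply an upgrade script to an environment (stub for illustration)."""
    subprocess.run(
        ["schemadeploy", "--script", script, "--target", target],
        check=True,
    )

def gated_release(upgrade_script: str) -> None:
    # Step 1: fail loudly if production has drifted from pre-production.
    if not compare_schemas("preprod", "production"):
        sys.exit("Drift detected: production no longer matches pre-prod.")
    # Step 2: rehearse the release on pre-production.
    deploy(upgrade_script, "preprod")
    # Step 3: the one manual intervention - DBA approval of the script.
    if input("Happy for this to go live? [y/N] ").strip().lower() != "y":
        sys.exit("Release aborted by reviewer.")
    # Step 4: the remaining stages run automatically.
    deploy(upgrade_script, "production")

if __name__ == "__main__":
    gated_release("upgrade_v42.sql")  # hypothetical script name
```

    Even a toy gate like this moves the checklist out of the DBA's notebook and into code, which is the point of the whole exercise.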

    Read the article

  • Game development: “Play Now” via website vs. download & install

    - by Inside
    Heyo, I've spent some time looking over the various threads here on gamedev and also on the regular Stack Overflow, and while I saw a lot of posts and threads regarding various engines that could be used in game development, I haven't seen much discussion of the various platforms they can be used on. In particular, I'm talking about browser games vs. desktop games. I want to develop a simple 3D networked multiplayer game - roughly on the graphics level of Paper Mario, with roughly the same level of interaction as a hack & slash action/adventure game - and I'm having a hard time deciding what platform to target with it. I have some experience with C++/Ogre3D and Python/Panda3D (and also some synchronized/networked programming), but I'm wondering if it's worth the extra time to learn another language and another engine/toolkit just so that the game can be played in a browser window (I'm looking at jMonkeyEngine right now). For simple and short games, the Newgrounds approach (go to the site, click "play now", instant gratification) seems to work well. What about for more complex games? Is there a point where the complexity of a game is enough for people to say "OK, I'm going to download and play that"? Is it worth it to go with engines that are less mature, have less documentation, have fewer features, and have smaller communities* just so that a (possibly?) larger audience can be reached? Does it even make sense to go with a web environment for the kind of game that I want to make? Does anyone have any experience with decisions like this? Thanks! (* With the exception of Flash-based engines, it seems like most of the other approaches have these downsides when compared to what is available for desktop-based environments. I'd go with Flash, but I'm worried that Flash's 3D capabilities aren't mature enough right now to do what I want easily. There's also Unity3D, but I'm not sure how I feel about that at all. It seems highly polished, but requires a plugin to be downloaded for the game to be played; at that rate I might as well have players download my game.)

    Read the article

  • Ask the Readers: Are You A Second Screen Multi-tasker?

    - by Jason Fitzpatrick
    Television watchers are no longer keeping their eyes continuously glued to the screen: increasingly, smartphone, tablet, and laptop users have merged their mobile device and television time. Are you one of the second screen multi-taskers? Image courtesy of Umani, a TV-companion application for iPad. According to Nielsen user surveys, at least 80% of mobile device owners have used their device while watching television in the past month, and 27% said they use their mobile device alongside the television multiple times a day. What the survey results are light on, however, is an in-depth look at what users are doing with their second screen. This week we want to hear whether or not you're one of the second screen multi-taskers and what you use your mobile device for during your television/movie time. Sound off in the comments and then check back in on Friday for the What You Said roundup.

    Read the article

  • Introducing UPK 3.6 Simulation Help (You Say It and We Do It!)

    - by kathryn.lustenberger(at)oracle.com
    We would like to thank everyone who participated in the recent documentation survey conducted over the last several months. Your feedback is valuable, and we appreciate the time you took to provide it. Many of you commented that you would like to have "UPKs for UPK" in the documentation. In response, we are pleased to announce the availability of Simulation Help. This unique help system is a blend of the text-based Developer help and a collection of approximately 200 simulations that show authors how to create, record, refine, localize, and publish content using the Developer. You can access Simulation Help at any time using the following link: http://download.oracle.com/technology/products/upk/index.html Save this link as a favorite or bookmark in your browser for easy access anytime. We have also provided a link to a short one-question survey so you can tell us what you think of the new Simulation Help. http://www.surveymonkey.com/s/BJT7LV6 Thanks again for your valuable feedback on the product documentation!

    Read the article

  • Is Python worth learning? Is it a useful tool?

    - by Kenneth
    I recently had a discussion with a professor of mine on the topic of web development. I had recently decided to learn Python to expand my arsenal of web tools, which I mentioned to him at the time. He almost immediately asked why I would waste my time on that. I'm not certain, but I think he has only recently started researching and studying web development, so that he can pick up the web development classes that haven't been taught since the previous professor who taught them left. I've heard a lot about Python and thought maybe he was mistaken about its usefulness. Is Python a useful tool to have? What applications can it be used for? Is it better than other similar alternatives? Does it have useful applications outside of web development as well?

    Read the article

  • System displays "File system maintenance error, press ctrl+d" while booting

    - by user3215
    In my office I have an Ubuntu 8.10 desktop that has been installed and running for a long time. Whenever the system is started, I get a file system maintenance error and am prompted for the root password (or to press Ctrl+D to continue). After pressing Ctrl+D the system boots up normally. I have not been able to resolve this issue for a long time, and I think something needs to be changed in the fstab file. I'm not sure what to do, and I'm hoping the experts here can help me fix this properly. Any help is appreciated. Thanks!

    Read the article

  • How do I point a new domain to start on a page that's not index.html on separate hosting?

    - by Owen Campbell-Moore
    I'm using a service (CMS/host) called SquareSpace to host my site, and today I'm registering the domain for it. Basically, how do I make it so that when somebody types www.tedxoxford.com it points at http://www.tedxoxford.com/landing (currently http://tedxoxford.squarespace.com/landing) instead of the default index? Is this possible? SquareSpace is quite a restricted CMS, which means that your logos etc. all point to the index, so I don't want people ending up on my landing/splash page every time they want the home page - only the first time they type in the URL. A dirty hack would be to check the referrer and redirect anyone hitting the index to the landing page, but that's a lot of loading overhead I'd rather avoid...

    Read the article

  • What's the difference between Scala and Red Hat's Ceylon language?

    - by John Bryant
    Red Hat's Ceylon language has some interesting improvements over Java: the overall vision (learn from Java's mistakes, keep the good, ditch the bad); the focus on readability and ease of learning/use; static typing (find errors at compile time, not run time); no "special" types - everything is an object; named and optional parameters (C# 4.0); nullable types (C# 2.0); no need for explicit getters/setters until you are ready for them (C# 3.0); type inference via the "local" keyword (C# 3.0 "var"); sequences (arrays) and their accompanying syntactic sugariness (C# 3.0); and a straightforward implementation of higher-order functions. I don't know Scala but have heard it offers some similar advantages over Java. How would Scala compare to Ceylon in this respect?

    Read the article

  • How to Calculate TCP Socket Buffer Sizes for Data Guard Environments

    - by alejandro.vargas
    The MAA best practices contain an example of how to calculate the optimal TCP socket buffer sizes, which is quite important for very busy Data Guard environments. This document, Formula to Calculate TCP Socket Buffer Sizes.pdf, contains an example of using the instructions provided in the best practices document. In order to execute the calculation you need to know the bandwidth of your network interface - usually this will be 1Gb; in my example it is a 10Gb network - and the round-trip time (RTT), which is the time it takes for a packet to travel to the other end of the network and come back. In my example the RTT was provided by the network administrator and was 3 ms (thousandths of a second).
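    As a rough illustration of the arithmetic such a formula is based on, here is a minimal sketch in Python of the bandwidth-delay product (BDP) calculation, using the figures from this example (10Gb/s bandwidth, 3 ms RTT). The tcp_buffer_size helper is a hypothetical name, and the 3x multiplier applied to the BDP is an assumption on my part; check the linked PDF for the authoritative formula and values.

    ```python
    # Hypothetical helper; the 3x multiplier is an assumption, not the
    # authoritative MAA formula -- see the linked PDF for that.
    def tcp_buffer_size(bandwidth_bits_per_sec: float, rtt_ms: float, multiplier: float = 3) -> int:
        """Suggested TCP socket buffer size in bytes: multiplier * BDP."""
        bdp_bytes = (bandwidth_bits_per_sec / 8) * (rtt_ms / 1000.0)  # bits -> bytes, ms -> s
        return int(bdp_bytes * multiplier)

    # The example from this article: a 10Gb network with a 3 ms round-trip time.
    print(tcp_buffer_size(10_000_000_000, 3))  # 11250000 bytes, i.e. roughly 11 MB
    ```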

    Read the article

  • Tips or techniques to use when you don't know how to code something?

    - by janoChen
    I have a background as a UI designer, and I've realized that it is a bit hard for me to write pieces of logic. Sometimes I get it right, but most of the time I end up with something hacky (and it usually takes a lot of time). And it's not that I don't like programming - in fact, I'm starting to like it as much as design. It's just that sometimes I think I'm better at dealing with colors and shapes than with numbers and logic (but I want to change that). What I usually do is search for the solution on the Internet, copy the example, and insert it into my app (I know this is not a very good practice). I've heard that one tip is to write the logic in plain English as a comment before writing the actual code. What other tips and techniques can I use?

    Read the article

< Previous Page | 349 350 351 352 353 354 355 356 357 358 359 360  | Next Page >