Search Results

Search found 2214 results on 89 pages for 'significant figures'.


  • Can Microsoft Build Appliances?

    - by andrewbrust
    Billy Hollis, my Visual Studio Live! colleague and fellow Microsoft Regional Director, said recently, and I am paraphrasing, that the computing world, especially on the consumer side, has shifted from one of building hardware and software that makes things possible to do, to building products and technologies that make things easy to do.  Billy crystallized things perfectly, as he often does. In this new world of “easy to do,” Apple has done very well and Microsoft has struggled.  In the old world, customers wanted a Swiss Army Knife, with the most gimmicks and gadgets possible.  In the new world, people want elegant cutlery.  They may want cake cutters and utility knives too, but they don’t want one device that works for all three tasks.  People don’t want tools, they want utensils.  People don’t want machines.  They want appliances. Microsoft Appliances: They Do Exist Microsoft has built a few appliance-like devices.  I would say Xbox 360 is an appliance.  It’s versatile, mind you, but it’s the kind of thing you plug in, turn on and use, as opposed to set up, tune, and open up to upgrade the internals.  Windows Phone 7 is an appliance too.  It’s a true smartphone, unlike Windows Mobile, which was a handheld computer with a radio stack.  Zune is an appliance too, and a nice one.  It hasn’t attained much traction in the market, but that’s probably because the seminal consumer computing appliance -- the iPod -- got there so much more quickly. In the embedded world, Mediaroom, Microsoft’s set-top product for the cable industry (used by AT&T U-Verse and others), is an appliance.  So is Microsoft’s Sync technology, used in Ford automobiles.  Even on the enterprise side, Microsoft has an appliance: SQL Server Parallel Data Warehouse Edition (PDW) combines Microsoft software with select OEMs’ server, networking and storage hardware.  You buy the appliance units from the OEMs, plug them in, connect them and go. I would even say that Bing is an appliance.  Not in the hardware sense, mind you.  But from the software perspective, it’s a single-purpose product that you visit or run, use and then move on.  You don’t have to install it (except the iOS and Android native apps, where it’s pretty straightforward), you don’t have to customize it, you don’t have to program it.  Basically, you just use it. Microsoft Appliances that Should Exist But Microsoft builds a bunch of things that are not appliances.  Media Center is not an appliance, and it most certainly should be.  Instead, it’s an app that runs on Windows 7.  It runs full-screen and you can use this configuration to conceal the fact that Windows is under it, but eventually something will cause you to abandon that masquerade (like Patch Tuesday). The next version of Windows Home Server won’t, in my opinion, be an appliance either.  Now that the Drive Extender technology is gone, and users can’t just add and remove drives into and from a single storage pool, the product is much more like an IT server and less like an appliance.  Much has been written about this decision by Microsoft.  I’ll just sum it up in one word: pity. Microsoft doesn’t have anything remotely appliance-like in the tablet category, either.  Until it does, it likely won’t have much market share in that space either.  And of course, the bulk of Microsoft’s product catalog on the business side is geared to enterprise machines and not personal appliances. Appliance DNA: They Gotta Have It. The consumerization of IT is real, because businesspeople are consumers too.
They appreciate the fit and finish of appliances at home, and they increasingly feel entitled to have it at work too.  Secure and reliable push email in a smartphone is necessary, but it isn’t enough.  People want great apps and a pleasurable user experience too.  The full Microsoft Office product is needed at work, but a PC with a keyboard and mouse, or maybe a touch screen that uses a stylus (or requires really small fingers), to run Office isn’t enough either.  People want a flawless touch experience available for the times they want to read and take quick notes.  Until Microsoft realizes this fully and internalizes it, it will suffer defeats in the consumer market and even setbacks in the business market.  Think about how slow the Office upgrade cycle is…now imagine if the next version of Office had a first-class alternate touch UI and consider the possible acceleration in adoption rates. Can Microsoft make the appliance switch?  Can the appliance mentality become pervasive at the company?  Can Microsoft hasten its release cycles dramatically and shed the “some assembly required” paradigm upon which many of its products are based?  Let’s face it, the chances that Microsoft won’t make this transition are significant. But there are also encouraging signs, and they should not be ignored.  The appliances we have already discussed, especially Xbox, Zune and Windows Phone 7, are the most obvious in this regard.  The fact that SQL Server has an appliance SKU now is a more subtle but perhaps also more significant outcome, because that product sits so smack in the middle of Microsoft’s enterprise stack.  Bing is encouraging too, especially given its integrated travel, maps and augmented reality capabilities.  As Bing gains market share, Microsoft has tangible proof that it can transform and win, even when everyone outside the company, and many within it, would bet otherwise. That Great Big Appliance in the Sky Perhaps the most promising (and evolving) proof points toward the appliance mentality, though, are Microsoft’s cloud offerings -- Azure and BPOS/Office 365.  While the cloud does not represent a physical appliance (quite the opposite in fact) its ability to make acquisition, deployment and use of technology simple for the user is absolutely an embodiment of the appliance mentality and spirit.  Azure is primarily a platform as a service offering; it doesn’t just provide infrastructure.  SQL Azure does likewise for databases.  And Office 365 does likewise for SharePoint, Exchange and Lync. You don’t administer, tune and manage servers; instead, you create databases or site collections or mailboxes and start using them. Upgrades come automatically, and it seems like releases will come more frequently.  Fault tolerance and content distribution is just there.  No muss.  No fuss.  You use these services; you don’t have to set them up and think about them.  That’s how appliances work.  To me, these signs point out that Microsoft has the full capability of transforming itself.  But there’s a lot of work ahead.  Microsoft may say they’re “all in” on the cloud, but the majority of the company is still oriented around its old products and models.  There needs to be a wholesale cultural transformation in Redmond.  It can happen, but product management, program management, the field and executive ranks must unify in the effort. So must partners, and even customers.  New leaders must rise up and Microsoft must be able to see itself as a winner.  
If Microsoft does this, it could lock in decades of new success, and be a standard business school case study for doing so.  If not, the company will have missed an opportunity, and may see its undoing.

    Read the article

  • Error in Bind9 named.conf file. Bind won't start.

    - by tj111
    I'm trying to set up a DNS server on an Ubuntu Server machine (10.04). I configured an entry in named.conf.local to test it, but when trying to restart bind9 I get the following error:
      * Starting domain name service... bind9 [fail]
    So I checked the output of syslog and this is what I get:
      May 20 18:11:13 empression-server1 named[4700]: starting BIND 9.7.0-P1 -u bind
      May 20 18:11:13 empression-server1 named[4700]: built with '--prefix=/usr' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--sysconfdir=/etc/bind' '--localstatedir=/var' '--enable-threads' '--enable-largefile' '--with-libtool' '--enable-shared' '--enable-static' '--with-openssl=/usr' '--with-gssapi=/usr' '--with-gnu-ld' '--with-dlz-postgres=no' '--with-dlz-mysql=no' '--with-dlz-bdb=yes' '--with-dlz-filesystem=yes' '--with-dlz-ldap=yes' '--with-dlz-stub=yes' '--with-geoip=/usr' '--enable-ipv6' 'CFLAGS=-fno-strict-aliasing -DDIG_SIGCHASE -O2' 'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS='
      May 20 18:11:13 empression-server1 named[4700]: adjusted limit on open files from 1024 to 1048576
      May 20 18:11:13 empression-server1 named[4700]: found 4 CPUs, using 4 worker threads
      May 20 18:11:13 empression-server1 named[4700]: using up to 4096 sockets
      May 20 18:11:13 empression-server1 named[4700]: loading configuration from '/etc/bind/named.conf'
      May 20 18:11:13 empression-server1 named[4700]: /etc/bind/named.conf:10: missing ';' before 'include'
      May 20 18:11:13 empression-server1 named[4700]: loading configuration: failure
      May 20 18:11:13 empression-server1 named[4700]: exiting (due to fatal error)
    So it thinks I have an error in the default named.conf file, which is pretty ridiculous. I went through it and deleted a blank line just for the hell of it, but I can't see how it figures there's an error in there. Note that before this I did have an error in named.conf.local, but it showed up properly in syslog and I fixed it, so it is reporting the correct file. Here are the contents of named.conf:
      // This is the primary configuration file for the BIND DNS server named.
      //
      // Please read /usr/share/doc/bind9/README.Debian.gz for information on the
      // structure of BIND configuration files in Debian, *BEFORE* you customize
      // this configuration file.
      //
      // If you are just adding zones, please do that in /etc/bind/named.conf.local
      include "/etc/bind/named.conf.options";
      include "/etc/bind/named.conf.local";
      include "/etc/bind/named.conf.default-zones";
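    For anyone hitting the same class of error: in BIND's configuration grammar every statement and every closing brace must be terminated with a semicolon, and when a terminator is missing from a statement parsed just before an include (including the last statement of a previously included file such as named.conf.options), the parser will often report the error against the next keyword it reaches in named.conf rather than in the file that actually contains the problem. A minimal, hypothetical zone stanza showing where the semicolons belong (the zone name, file path and address are placeholders, not taken from the question):

      zone "example.com" {
          type master;
          file "/etc/bind/db.example.com";
          allow-transfer { 192.0.2.2; };
      };   // the statements, the entry in the inner list, and the closing brace each end with ';'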

    Read the article

  • Is Financial Inclusion an Obligation or an Opportunity for Banks?

    - by tushar.chitra
    Why should banks care about financial inclusion? First, the statistics, I think this will set the tone for this blog post. There are close to 2.5 billion people who are excluded from the banking stream and out of this, 2.2 billion people are from the continents of Africa, Latin America and Asia (McKinsey on Society: Global Financial Inclusion). However, this is not just a third-world phenomenon. According to Federal Deposit Insurance Corp (FDIC), in the US, post 2008 financial crisis, one family out of five has either opted out of the banking system or has been moved out (American Banker). Moving this huge unbanked population into mainstream banking is both an opportunity and a challenge for banks. An obvious opportunity is the significant untapped customer base that banks can target, so is the positive brand equity a bank can build by fulfilling its social responsibilities. Also, as banks target the cost-conscious unbanked customer, they will be forced to look at ways to offer cost-effective products and services, necessitating technology upgrades and innovations. However, cost is not the only hurdle in increasing the adoption of banking services. The potential users need to be convinced of the benefits of banking and banks will also face stiff competition from unorganized players. Finally, the banks will have to believe in the viability of this business opportunity, and not treat financial inclusion as an obligation. In what ways can banks target the unbanked For financial inclusion to be a success, banks should adopt innovative business models to develop products that address the stated and unstated needs of the unbanked population and also design delivery channels that are cost effective and viable in the long run. Through business correspondents and facilitators In rural and remote areas, one of the major hurdles in increasing banking penetration is connectivity and accessibility to banking services, which makes last mile inclusion a daunting challenge. To address this, banks can avail the services of business correspondents or facilitators. This model allows banks to establish greater connectivity through a trusted and reliable intermediary. In India, for instance, banks can leverage the local Kirana stores (the mom & pop stores) to service rural and remote areas. With a supportive nudge from the central bank, the commercial banks can enlist these shop owners as business correspondents to increase their reach. Since these neighborhood stores are acquainted with the local population, they can help banks manage the KYC norms, besides serving as a conduit for remittance. Banks also have an opportunity over a period of time to cross-sell other financial products such as micro insurance, mutual funds and pension products through these correspondents. To exercise greater operational control over the business correspondents, banks can also adopt a combination of branch and business correspondent models to deliver financial inclusion. Through mobile devices According to a 2012 world bank report on financial inclusion, out of a world population of 7 billion, over 5 billion or 70% have mobile phones and only 2 billion or 30% have a bank account. What this means for banks is that there is scope for them to leverage this phenomenal growth in mobile usage to serve the unbanked population. Banks can use mobile technology to service the basic banking requirements of their customers with no frills accounts, effectively bringing down the cost per transaction. 
As I had discussed in my earlier post on mobile payments, though non-traditional players have taken the lead in P2P mobile payments, banks still hold an edge in terms of infrastructure and reliability. Through crowd-funding According to the Crowdfunding Industry Report by Massolution, the global crowdfunding industry raised $2.7 billion in 2012, and is projected to grow to $5.1 billion in 2013. With credit policies becoming tighter and banks becoming more circumspect in terms of loan disbursals, crowdfunding has emerged as an alternative channel for lending. Typically, these initiatives target the unbanked population by offering small loans that are unviable for larger banks. Though a significant proportion of crowdfunding initiatives globally are run by non-banking institutions, banks are also venturing into this space. The next step towards inclusive finance Banks by themselves cannot make financial inclusion a success. There is a need for a whole ecosystem that is supportive of this mission. The policy makers, that include the regulators and government bodies, must be in sync, the IT solution providers must put on their thinking caps to come out with innovative products and solutions, communication channels such as internet and mobile need to expand their reach, and the media and the public need to play an active part. The other challenge for financial inclusion is from the banks themselves. While it is true that financial inclusion will unleash a hitherto hugely untapped market, the normal banking model may be found wanting because of issues such as flexibility, convenience and reliability. The business will be viable only when there is a focus on increasing the usage of existing infrastructure and that is possible when the banks can offer the entire range of products and services to the large number of users of essential banking services. Apart from these challenges, banks will also have to quickly master and replicate the business model to extend their reach to the remotest regions in their respective geographies. They will need to ensure that the transactions deliver a viable business benefit to the bank. For tapping cross-sell opportunities, banks will have to quickly roll-out customized and segment-specific products. The bank staff should be brought in sync with the business plan by convincing them of the viability of the business model and the need for a business correspondent delivery model. Banks, in collaboration with the government and NGOs, will have to run an extensive financial literacy program to educate the unbanked about the benefits of banking. Finally, with the growing importance of retail banking and with many unconventional players eyeing the opportunity in payments and other lucrative areas of banking, banks need to understand the importance of micro and small branches. These micro and small branches can help banks increase their presence without a huge cost burden, provide bankers an opportunity to cross sell micro products and offer a window of opportunity for the large non-banked population to transact without any interference from intermediaries. These branches can also help diminish the role of the unorganized financial sector, such as local moneylenders and unregistered credit societies. This will also help banks build a brand awareness and loyalty among the users, which by itself has a cascading effect on the business operations, especially among the rural and un-banked centers. 
In conclusion, with the increasingly competitive banking sector facing frequent slowdowns and downturns, the unbanked population presents a huge opportunity for banks to enhance their customer base and fulfill their social responsibility.

    Read the article

  • People, Process & Engagement: WebCenter Partner Keste

    - by Michael Snow
    Within the WebCenter group here at Oracle, discussions about people, process and engagement cross over many vertical industries and products. Amidst our growing partner ecosystem, the community provides us insight into great customer use cases every day. Such is the case with our partner, Keste, who provides us a guest post on our blog today with an overview of their innovative solution for a customer in the transportation industry. Keste is an Oracle software solutions and development company headquartered in Dallas, Texas. As a Platinum member of the Oracle® PartnerNetwork, Keste designs, develops and deploys custom solutions that automate complex business processes. Seamless Customer Self-Service Experience in the Trucking Industry with Oracle WebCenter Portal  Keste, Oracle Platinum Partner Customer Overview Omnitracs, Inc., a Qualcomm company, provides mobility solutions for trucking fleets to companies in the transportation industry. Omnitracs’ mobility services include basic communications such as text as well as advanced monitoring services such as GPS tracking, temperature tracking of perishable goods, load tracking and weight distribution, and many others. Customer Business Needs Already the leading provider of mobility solutions for large trucking fleets, they chose to target smaller trucking fleets as new customers. However, their existing high-touch customer support method would not be a cost effective or scalable method to manage and service these smaller customers. Omnitracs needed to provide several self-service features to make customer support more scalable while keeping customer satisfaction levels high and the costs manageable. The solution also had to be very intuitive and easy to use. The systems that Omnitracs sells to these trucking customers require professional installation and smaller customers need to track and schedule the installation. Information captured in Oracle eBusiness Suite needed to be readily available for new customers to track these purchases and delivery details. Omnitracs wanted a high impact User Interface to significantly improve customer experience with the ability to integrate with EBS, provisioning systems as well as CRM systems that were already implemented. Omnitracs also wanted to build an architecture platform that could potentially be extended to other Portals. Omnitracs’ stated goal was to deliver an “eBay-like” or “Amazon-like” experience for all of their customers so that they could reach a much broader market beyond their large company customer base. Solution Overview In order to manage the increased complexity, the growing support needs of global customers and improve overall product time-to-market in a cost-effective manner, IT began to deliver a self-service model. This self service model not only transformed numerous business processes but is also allowing the business to keep up with the growing demands of the (internal and external) customers. This solution was a customer service Portal that provided self service capabilities for large and small customers alike for Activation of mobility products, managing add-on applications for the devices (much like the Apple App Store), transferring services when trucks are sold to other companies as well as deactivation all without the involvement of a call service agent or sending multiple emails to different Omnitracs contacts.
This is a conceptual view of the Customer Portal showing the details of the components that make up the solution. 12.00 The portal application for transactions was entirely built using ADF 11g R2. Omnitracs’ business had a pressing requirement to have a portal available 24/7 for its customers. Since there were interactions with EBS in the back-end, the downtimes on the EBS would negate this availability. Omnitracs devised a decoupling strategy at the database side for the EBS data. The decoupling of the database was done using Oracle Data Guard and completely insulated the solution from any eBusiness Suite down time. The customer has no knowledge whether eBS is running or not. Here are two sample screenshots of the portal application built in Oracle ADF. Customer Benefits The Customer Portal not only provided the scalability to grow the business but also provided the seamless integration with other disparate applications. Some of the key benefits are: Improved Customer Experience: With a modern look and feel and a Portal that has the aspects of an App Store, the customer experience was significantly improved. Page response times went from several seconds to sub-second for all of the pages. Enabled new product launches: After successfully dominating the large fleet market, Omnitracs now has a scalable solution to sell and manage smaller fleet customers giving them a huge advantage over their nearest competitors. Dozens of new customers have been acquired via this portal through an onboarding process that now takes minutes Seamless Integrations Improves Customer Support: ADF 11gR2 allowed Omnitracs to bring a diverse list of applications into one integrated solution. This provided a seamless experience for customers to route them from Marketing focused application to a customer-oriented portal. Internally, it also allowed Sales Representatives to have an integrated flow for taking a prospect through the various steps to onboard them as a customer. Key integrations included: Unity Core Salesforce.com Merchant e-Solution for credit card Custom Omnitracs Applications like CUPS and AUTO Security utilizing OID and OVD Back end integration with EBS (Data Guard) and iQ Database Business Impact Significant business impacts were realized through the launch of customer portal. It not only allows the business to push through in underserved segments, but also reduces the time it needs to spend on customer support—allowing the business to focus more on sales and identifying the market for new products. Some of the Immediate Benefits are The entire onboarding process is now completely automated and now completes in minutes. This represents an 85% productivity improvement over their previous processes. And it was 160 times faster! With the success of this self-service solution, the business is now targeting about 3X customer growth in the next five years. This represents a tripling of their overall customer base and significant downstream revenue for the ongoing services. 90%+ improvement of customer onboarding and management process by utilizing, single sign on integration using OID/OAM solution, performance improvements and new self-service functionality Unified login for all Customers, Partners and Internal Users enables login to a common portal and seamless access to all other integrated applications targeted at the respective audience Significantly improved customer experience with a better look and feel with a more user experience focused Portal screens. 
Helped sales of the new product by having an easy way of ordering and activating the product. Data Guard helped increase availability of the Portal to 99%+ and make it independent of EBS downtime. This gave customers the feel of high availability of the portal application. Some of the anticipated longer term Benefits are: Platform that can be leveraged to launch any new product introduction and enable all product teams to reach new customers and new markets Easy integration with content management to allow business owners more control of the product catalog Overall reduced TCO with standardization of the Oracle platform Managed IT support cost savings through optimization of technology skills needed to support and modify this solution

    Read the article

  • The SSIS tuning tip that everyone misses

    - by Rob Farley
    I know that everyone misses this, because I’m yet to find someone who doesn’t have a bit of an epiphany when I describe this. When tuning Data Flows in SQL Server Integration Services, people see the Data Flow as moving from the Source to the Destination, passing through a number of transformations. What people don’t consider is the Source, getting the data out of a database. Remember, the source of data for your Data Flow is not your Source Component. It’s wherever the data is, within your database, probably on a disk somewhere. You need to tune your query to optimise it for SSIS, and this is what most people fail to do. I’m not suggesting that people don’t tune their queries – there’s plenty of information out there about making sure that your queries run as fast as possible. But for SSIS, it’s not about how fast your query runs. Let me say that again, but in bolder text: The speed of an SSIS Source is not about how fast your query runs. If your query is used in a Source component for SSIS, the thing that matters is how fast it starts returning data. In particular, those first 10,000 rows to populate that first buffer, ready to pass down the rest of the transformations on its way to the Destination. Let’s look at a very simple query as an example, using the AdventureWorks database: We’re picking the different Weight values out of the Product table, and it’s doing this by scanning the table and doing a Sort. It’s a Distinct Sort, which means that the duplicates are discarded. It'll be no surprise to see that the data produced is sorted. Obvious, I know, but I'm making a comparison to what I'll do later. Before I explain the problem here, let me jump back into the SSIS world... If you’ve investigated how to tune an SSIS flow, then you’ll know that some SSIS Data Flow Transformations are known to be Blocking, some are Partially Blocking, and some are simply Row transformations. Take the SSIS Sort transformation, for example. I’m using a larger data set for this, because my small list of Weights won’t demonstrate it well enough. Seven buffers of data came out of the source, but none of them could be pushed past the Sort operator, just in case the last buffer contained the data that would be sorted into the first buffer. This is a blocking operation. Back in the land of T-SQL, we consider our Distinct Sort operator. It’s also blocking. It won’t let data through until it’s seen all of it. If you weren’t okay with blocking operations in SSIS, why would you be happy with them in an execution plan? The source of your data is not your OLE DB Source. Remember this. The source of your data is the NCIX/CIX/Heap from which it’s being pulled. Picture it like this... the data flowing from the Clustered Index, through the Distinct Sort operator, into the SELECT operator, where a series of SSIS Buffers are populated, flowing (as they get full) down through the SSIS transformations. Alright, I know that I’m taking some liberties here, because the two queries aren’t the same, but consider the visual. The data is flowing from your disk and through your execution plan before it reaches SSIS, so you could easily find that a blocking operation in your plan is just as painful as a blocking operation in your SSIS Data Flow. Luckily, T-SQL gives us a brilliant query hint to help avoid this. OPTION (FAST 10000) This hint means that it will choose a query which will optimise for the first 10,000 rows – the default SSIS buffer size. And the effect can be quite significant. 
First let’s consider a simple example, then we’ll look at a larger one. Consider our weights. We don’t have 10,000, so I’m going to use OPTION (FAST 1) instead. You’ll notice that the query is more expensive, using a Flow Distinct operator instead of the Distinct Sort. This operator is consuming 84% of the query, instead of the 59% we saw from the Distinct Sort. But the first row could be returned quicker – a Flow Distinct operator is non-blocking. The data here isn’t sorted, of course. It’s in the same order that it came out of the index, just with duplicates removed. As soon as a Flow Distinct sees a value that it hasn’t come across before, it pushes it out to the operator on its left. It still has to maintain the list of what it’s seen so far, but by handling it one row at a time, it can push rows through quicker. Overall, it’s a lot more work than the Distinct Sort, but if the priority is the first few rows, then perhaps that’s exactly what we want. The Query Optimizer seems to do this by optimising the query as if there were only one row coming through: This 1 row estimation is caused by the Query Optimizer imagining the SELECT operation saying “Give me one row” first, and this message being passed all the way along. The request might not make it all the way back to the source, but in my simple example, it does. I hope this simple example has helped you understand the significance of the blocking operator. Now I’m going to show you an example on a much larger data set. This data was fetching about 780,000 rows, and these are the Estimated Plans. The data needed to be Sorted, to support further SSIS operations that needed that. First, without the hint. ...and now with OPTION (FAST 10000): A very different plan, I’m sure you’ll agree. In case you’re curious, those arrows in the top one are 780,000 rows in size. In the second, they’re estimated to be 10,000, although the Actual figures end up being 780,000. The top one definitely runs faster. It finished several times faster than the second one. With the amount of data being considered, these numbers were in minutes. Look at the second one – it’s doing Nested Loops, across 780,000 rows! That’s not generally recommended at all. That’s “Go and make yourself a coffee” time. In this case, it was about six or seven minutes. The faster one finished in about a minute. But in SSIS-land, things are different. The particular data flow that was consuming this data was significant. It was being pumped into a Script Component to process each row based on previous rows, creating about a dozen different flows. The data flow would take roughly ten minutes to run – ten minutes from when the data first appeared. The query that completes faster – chosen by the Query Optimizer with no hints, based on accurate statistics (rather than pretending the numbers are smaller) – would take a minute to start getting the data into SSIS, at which point the ten-minute flow would start, taking eleven minutes to complete. The query that took longer – chosen by the Query Optimizer pretending it only wanted the first 10,000 rows – would take only ten seconds to fill the first buffer. Despite the fact that it might have taken the database another six or seven minutes to get the data out, SSIS didn’t care. Every time it wanted the next buffer of data, it was already available, and the whole process finished in about ten minutes and ten seconds. When debugging SSIS, you run the package, and sit there waiting to see the Debug information start appearing. 
You look for the numbers on the data flow and watch for operators going Yellow and Green. Without the hint, I’d sit there for a minute. With the hint, just ten seconds. You can imagine which one I preferred. By adding this hint, it felt like a magic wand had been waved across the query, to make it run several times faster. It wasn’t the case at all – but it felt like it to SSIS.
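    To make the hint concrete, here is a minimal sketch of the kind of source query described above, written against the AdventureWorks sample used in the post (the table and column are just the Weight example; the hint value matches the default buffer size discussed):

      SELECT DISTINCT Weight
      FROM Production.Product
      OPTION (FAST 10000); -- optimise the plan for returning the first 10,000 rows quickly

    In a Data Flow, this statement would simply be the SQL command text of the OLE DB Source; the hint changes nothing about the results, only which plan the optimizer prefers.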

    Read the article

  • Master Data

    - by david.butler(at)oracle.com
Let's take a deeper look at what we mean when we talk about 'Master' data. In its most general sense, master data is data that exists in more than one operational application. These are the applications that automate business processes. These applications require significant amounts of data to function correctly.  This includes data about the objects that are involved in transactions, as well as the transaction data itself.  For example, when a customer buys a product, the transaction is managed by a sales application.  The objects of the transaction are the Customer and the Product.  The transactional data is the time, place, price, discount, payment methods, etc. used at the point of sale. Many thousands of transactional data attributes are needed within the application. These important data elements are local to the applications and have no bearing on other applications. Harmonization and synchronization across applications is not necessary. The Customer and Product objects of the transaction also have a large number of attributes. Customer, for example, includes hierarchies, hierarchical and matrixed relationships, contacts, classifications, preferences, accounts, identifiers, profiles, and addresses galore for 'ship to', 'mail to', 'service at', etc.  Dozens of attributes exist for individuals, hundreds for organizations, and thousands for products. This data has meaning beyond any particular application. It exists in many applications and drives the vital cross-application enterprise business processes. These are the processes that define and differentiate the organization. At every decision point, information about the objects of the process determines the direction of the process flow. This is the nature of the data that exists in more than one application, and this is why we call it 'master data'. Let me elaborate. Parties Oracle has developed a party schema to model all participants in your daily business operations. It models people, organizations, groups, customers, contacts, employees, and suppliers. It models their accounts, locations, classifications, and preferences.  And most importantly, it models the vast array of hierarchical and matrixed relationships that exist between all the participants in your real world operations.  The model logically separates people and organizations from their relationships and accounts.  This separation creates flexibility unmatched in the industry and accounts for the fact that the Oracle schema for Customers, Suppliers, and Accounts is a true superset of the wide variety of commercial and homegrown customer models in existence. Sites Sites are places where business is conducted. They can be addresses, clusters such as retail malls, locations within a cluster, floors within a building, places where meters are located, rooms on floors, etc.  Fully understanding all attributes of a site is key to many business processes. Attributes such as 'noise abatement policy' at a point of delivery, or the size of an oven in a business kitchen, drive day-to-day activities such as delivery schedules or food promotions. Typically this kind of data is siloed in departments and scattered across applications and spreadsheets.  This leads to conflicting information and poor operational efficiencies. Oracle's Global Single Schema can hold all site attributes in one place and enables a single version of authoritative site information across the enterprise.
Products and Services The Oracle Global Single Schema also includes a number of entities that define the products and services a company creates and offers for sale. Key entities include Items organized into Catalogs and Price Lists. The Catalog structures provide for the ability to capture different views of a product such as engineering, manufacturing, and service which are based on a unified product model. As a result, designers, manufacturing engineers, purchasers and partners can work simultaneously on a common product definition. The Catalog schema allows for unlimited attributes, combines them into meaningful groups, and maps them to catalog categories to track these different types of information. The model also maps an unlimited number of functional structures for each item. For example, multiple Bills of Material (BOMs) can be constructed representing requirements BOM, features BOM, and packaging BOM for an item. The Catalog model also supports hierarchical information about each item and all standard Global Data Synchronization attributes. Business Processes Utilizing Linked Data Entities Each business entity codified into a centralized master data environment significantly improves the efficiency of the automated business processes that use the consolidated data.  When all the key business entities used by an organization's process are so consolidated, the advantages are multiplied.  The primary reason for business process breakdowns (i.e. data errors across application boundaries) is eliminated. All processes are positively impacted and business process automation is itself automated.  I like to use the "Call to Resolution" business process as an example to help illustrate this important point. It involves call center applications, service applications, RMA applications, transportation applications, inventory applications, etc. Customer, Site, Product and Supplier master data must all be correct and consistent across these applications.  What's more, the data relationships between customer and product, and product and suppliers must be right. This is the minimum quality needed to insure the business process flows without error. But that is not the end of the story. Critical master data attributes such as customer loyalty, profitability, credit worthiness, and propensity to buy can optimize the call center point of contact component of the process. Critical product information such as alternative parts or equivalent products can optimize the resolution selected by the process. A comprehensive understanding of the 'service at' location can help insure multiple trips are avoided in the process. Full supplier information on reliability, delivery delays, and potential alternates can prevent supplier exceptions and play a significant role in optimizing the process.  In other words, these master data attributes enable the optimization of the "Call to Resolution" enterprise business process. Master data supports and guides business process flows. Thus the phrase 'Master Data' is indeed appropriate. MDM is the software that houses, manages, and governs the master data that resides in all applications and controls the enterprise business processes. A complete master data solution takes a data model that holds fully attributed master data entities and their inter-relationships. Oracle has this model. 
Oracle, with its deep understanding of application data is the logical choice for managing all your master data within the enterprise whether or not your organization actually runs any Oracle Applications.

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx] I’m wrapping up a bit of the work we’ve been doing on data movement optimizations for cloud computing and the latest set of data yielded some interesting points I thought I’d share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting. Summary: for those who don’t like to read detailed posts or don’t have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains… upwards of 10x-24x and a reduction in overall file transfer time of upwards of 90% (eg, uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards). Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central as it is physically closest to us) and do not represent intra-cloud results… we have performed intra-cloud tests and the overall results are similar in notion but the data rates are significantly different as well as the tipping points for the various block sizes… this will be detailed separately). We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files for the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here. The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially and thereby spreading the affects of periodic Internet delays across the collection of results.  We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps will yield a sufficiently balanced set of results. 
Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see linked source for specific implementation details in Program.cs, line 173 and following… less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is as follows: We then tested the effects of blocking & parallelizing the transfers by running the updated application against the same source set and did a parameter sweep on the block size including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn’t worth the trouble and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers and the results were encouraging. The Excel version of the results is available here. Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size you will end up with a “negative optimization” due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and this is supported in the raw data provided in the linked worksheet) the charts and discussion below ignore source file sizes less than 1MB. (click chart for full size image) The chart above illustrates some interesting points about the results:
    - When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size)
    - For some of the moderately-sized source files, small blocks (256KB) are best
    - As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs)
    - Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant
    - The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file.
    (click chart for full size image) The above is another view of the same data as the prior chart, just with the axis changed (x-axis represents file size and plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size but highlights the benefits of some of the other block sizes at different source file sizes.
This last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here other than this view of the data highlights the negative affects of poorly choosing a block size for smaller files.   Summary What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) make short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.   Related Resources Source code for upload test application Source code for random file generator ODatas feed of raw data from non-optimized transfer tests Experiment Metadata Experiment Datasets 2KB Uploads 32KB Uploads 64KB Uploads 128KB Uploads 256KB Uploads 512KB Uploads 1MB Uploads 5MB Uploads 10MB Uploads 25MB Uploads 50MB Uploads 100MB Uploads 250MB Uploads 500MB Uploads 750MB Uploads 1GB Uploads Raw Data OData feeds of raw data from blocked/parallelized transfer tests Experiment Metadata Experiment Datasets Raw Data 256KB Blocks 512KB Blocks 1MB Blocks 2MB Blocks 4MB Blocks Excel worksheet showing summarizations and comparisons
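    For readers who would rather see the shape of the approach than download the linked source, below is a heavily simplified sketch of the blocked, parallel upload described in this post. It assumes the v1.x StorageClient API mentioned above (CloudBlockBlob.PutBlock / PutBlockList); the class, method and variable names are mine, the file is read fully into memory for brevity, and the MD5 and retry logic are omitted:

      using System;
      using System.Collections.Generic;
      using System.IO;
      using System.Linq;
      using System.Threading.Tasks;
      using Microsoft.WindowsAzure.StorageClient;

      static class BlockedUploader
      {
          // Split a local file into fixed-size blocks, upload the blocks in parallel,
          // then commit the block list to assemble the blob (per the process described above).
          public static void Upload(CloudBlockBlob blob, string path, int blockSize)
          {
              byte[] data = File.ReadAllBytes(path);   // fine for a sketch; stream blocks in real code
              int blockCount = (data.Length + blockSize - 1) / blockSize;

              // Block IDs must be Base64 strings of equal length within a blob.
              List<string> blockIds = Enumerable.Range(0, blockCount)
                  .Select(i => Convert.ToBase64String(BitConverter.GetBytes(i)))
                  .ToList();

              Parallel.For(0, blockCount, i =>
              {
                  int offset = i * blockSize;
                  int length = Math.Min(blockSize, data.Length - offset);
                  using (var block = new MemoryStream(data, offset, length))
                  {
                      blob.PutBlock(blockIds[i], block, null);   // third argument is the optional MD5
                  }
              });

              blob.PutBlockList(blockIds);   // commit: the uploaded blocks become the blob's contents
          }
      }

    Calling Upload(blob, sourceFile, 1024 * 1024) would correspond to the 1MB block size that gave the best average improvement in the tests above.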

    Read the article

  • Probation is Over: PASS Board Year 1, Q2

    - by Denise McInerney
    Though it's not always official, every job begins with a probation period. You start out with lots of questions and every day you find out how much more you have to learn. Usually after a few months you discover that you can actually answer some questions and have at least an idea of what you are supposed to be doing. Now at the end of my second quarter on the "job" of serving on the PASS Board I have reached that point. My probation period is over. The last three months were busy for the entire Board with the budget process, an in-person meeting and moving forward with PASS Global Growth plans. I had also set a specific goal for myself for my 2nd quarter: to see the Board adopt a Code of Conduct for the PASS Summit. Code of Conduct When I ran for the Board I included my desire to see PASS establish a code of conduct in my campaign platform.  I was motivated to do this for a few reasons. Other technical conferences have had incidents of harassment. Most of these did not have a policy in place prior to having a problem, though several conference organizers have since adopted anti-harassment policies or codes of conduct. I felt it would be in PASS' interest to establish a policy so we would be prepared should there be an incident.   "This is Community" Adopting a code of conduct would reinforce our community orientation and send a message about the positive character of the Summit. PASS is a leader among technical organizations for its promotion and support of women. Adopting a code of conduct would further demonstrate our leadership in this area. After researching similar policies from other organizations I published a first draft in April. I solicited feedback from the Board, HQ staff and some PASS members. Incorporating that feedback, I presented version 4 at the May Board meeting, where we had a good discussion. You can read the meeting minutes for details. I incorporated points from the Board discussion as well as feedback from a legal review to produce a final version which has been submitted to the Board. It will be discussed at the Board meeting July 12. You can read the full text at the end of this post. Virtual Chapters In the first quarter we started ramping up marketing support for the Virtual Chapters. Since then each edition of the Connector has highlighted a different VC to help get out the message about the variety of educational opportunities that are offered. These VC profiles will continue in the coming months. I was very pleased to welcome the new DBA Fundamentals VC which is geared toward new DBAs, people who are considering entering the field and those transitioning from a different IT role. Thanks to the contributions of Erin Stellato, Michelle Nalliah and Karla Landrum, we published a "Virtual Chapter Guidebook". This document includes great advice on how to build and promote a VC. It's also a reference for how things work, from budgets to webinar hosting. I think this document will be extremely valuable to all our VC leaders and am grateful to those who put it together. Board Meeting/SQL Rally The Board met in May in Dallas. Among the items discussed were Global Growth, the budget, future events and the upcoming elections. We covered a lot of ground in two days and I will again refer you to the meeting minutes for details. The meeting schedule allowed us to participate in the SQL Rally networking events and one full day of the conference. I enjoyed having the opportunity to meet and talk with many PASS members.
And my hat is off to the SQL Rally organizers who put on an outstanding event. Global Growth PASS has undertaken a major intitiative to reach and engage SQL Server professionals around the world. This Global Growth plan is ambitious and will have a significant impact on the strategic direction of the organization. We have been reaching out to the community for feedback, including hosting Twitter chats and live Town Hall meetings. I co-hosted two of these events and appreciated hearing the different perspectives of the people who participated If you have not done so I encourage you to read about the Global Growth vision and proposed governance changes  and submit your feedback. FY13 Budget July 1 is the beginning of PASS' fiscal year, which makes the end of June the deadline for approving a budget. Each director submits a budget for his or her portfolio. For the Virtual Chapter portfolio I focused on how we can allocate resources to grow the VCs. Budgeting is a give-and-take process, and while I didn't get everything I asked for I'm pleased the FY13 budget includes a significant increase in financial support for the Virtual Chapters. Many people put a lot of work into the budget, but no two people deserve credit more than VP of Finance Douglas McDowell and Accounting Manager Sandy Cherry. Thanks to both of them for getting us across the goal line on time. SQL Saturday I attended SQL Saturdays in Orange Co. CA and Phoenix. It's always inspiring to see the enthusiasm in the community for learning and networking. These events are successful due to the hard work of many volunteers. Thanks to the organizers in both cities for all your efforts. Next Up This quarter we'll be gearing up plans for the VCs at the Summit and exploring ways the VCs can best support PASS' Global Growth work. I'll also be wrapping up work on the Code of Conduct and attending a Board meeting in September. And I will be at SQL Saturday #144 in Sacramento later this month. Here is the language of the Code of Conduct I have submitted to the Board for consideration: PASS Code of Conduct The PASS Summit provides database professionals from a variety of backgrounds with an opportunity to connect, share and learn.  We value the strong sense of community that characterizes this event and we seek to foster an inclusive, professional atmosphere. We are dedicated to providing a harassment-free conference experience for everyone, regardless of gender, race, sexual orientation, disability, physical appearance, religion or any other protected classification.  Everyone at the Summit is expected to follow the Code of Conduct. This includes but is not limited to: PASS Staff, Exhibitors, Speakers, Attendees and anyone affiliated with the event. Participants are expected to follow the Code of Conduct at all Summit events, including PASS-sponsored social events. Participant behavior Harassment includes, but is not limited to, offensive verbal comments related to gender, race, sexual orientation, disability, physical appearance, religion, or any other protected classification.  Intimidation, threats, stalking, harassing photography or recording, sustained disruption of talks or other events, inappropriate physical contact and unwelcome attention will also be considered harassment. Similarly, sexual, racist, derogatory, threatening or other inappropriate language and imagery are not appropriate for any conference venue, including sessions.  
Recourse If a participant engages in any conduct that is prohibited under this Code of Conduct, the conference organizers may take any action they deem appropriate, including warning the offender or expelling the offender from the conference. No refunds will be granted to attendees expelled from the Summit due to violations of the Code of Conduct. If you are being harassed, witness harassment, or have any other concerns, please contact a member of conference staff immediately. Conference staff can be identified by their “Headquarters/Staff” shirts and are trained to handle the situation appropriately. A Code of Conduct Committee (CCC) made up of the Executive Manager and three members of the Board of Directors designated by the President will be authorized to take action in response to an incident or behavior that violates the Code of Conduct.

    Read the article

  • Load balanced IIS. Should I use NLB, or linux-based reverse proxy, or something else?

    - by growse
    What would be the best approach for load-balancing at least 2-3 Windows 2008 R2 IIS webservers running a multitude of .NET applications? My choices appear to be: 1) Hardware-based network device load balancer, like a Cisco CSS 2) Windows NLB 3) Some sort of linux based proxy, either haproxy or other The three servers sit as VMs on a vSphere farm, so I have the ability to clone to up the instance count in times of high load. I control the switch that the vSphere hosts are plugged into (Cisco 3750), but don't control the switching/routing infrastructure beyond that to the clients. (1) Is too expensive, and probably overkill for my needs. I've included this in case someone figures out a cunning way to do it on my existing network kit, which I doubt. (2) would seem to be the obvious "built-in" option, but seems to be quite fiddly messing around with network interfaces, multicast, and generally other things that seem to be needlessly complex. It's also fairly stupid, in that it can't remove hosts from the pool if they start throwing 500 errors or otherwise go wrong (3) is the most interesting option, as it would appear to offer the most flexibility and customizability, but without having to mess around with the network. However, while I'm familiar with the reverse-proxy capabilities of lighttpd etc, I'm not that well read on other options like HAProxy, which might be able to offer a lot more. Which would you go for, and is there anything I've not thought of?
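    For option (3), a minimal HAProxy sketch of the health-check behaviour mentioned above, where a pool member that starts failing (500s or timeouts) is marked down and dropped from rotation automatically; the addresses, names and check URL are placeholders rather than the asker's real values:

      frontend http_in
          bind *:80
          default_backend iis_pool

      backend iis_pool
          balance roundrobin
          option httpchk GET /health.aspx
          # a server that fails 3 consecutive checks (non-2xx/3xx response or no answer) is taken out of the pool
          server iis1 10.0.0.11:80 check inter 5000 fall 3 rise 2
          server iis2 10.0.0.12:80 check inter 5000 fall 3 rise 2
          server iis3 10.0.0.13:80 check inter 5000 fall 3 rise 2

    Cloned vSphere instances could then be added or removed simply by editing the server lines and reloading the proxy.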

    Read the article

  • filter / directing URLs coming onto a network

    - by Jon
    I am not sure if this is possible, but what I would like to do is as follows. I have one IP address (dynamic, using zoneedit.com to keep it up to date). I have one web server running my main site, which is an Ubuntu machine running Apache. I also have a Windows 2008 server running another site. Just to confuse things, I also run part of my Apache site on the Windows server, currently using ProxyPassReverse to get the information from it. So it looks something like this: IP 1.2.3.4 maps to mydomain.com as well as myotherdomain.com. All requests that come in on port 80 are forwarded to the Apache box, and I use VirtualHost settings to proxy the Windows sites where needed. So:
    - mydomain.com is an Apache site
    - mydomain.com/mywindowssection is the Apache server using ProxyPassReverse to get part of the site from the Windows server
    - myotherdomain.com uses Apache and ProxyPassReverse to get the whole site
    What I would like to be able to do is forward all HTTP requests that come into my network to one machine that figures out who should be serving that content, so that mydomain.com goes to the Apache machine and myotherdomain.com goes to the Windows machine. I am just in the process of setting up an Astaro gateway (never done this before, so it's taking a while to configure) as my firewall, DNS, DHCP etc., and I don't know if it can handle this. I also have the capacity to run a VM on the network if a separate box is needed for this. Thanks for any and all feedback. Jon
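
    What is being described is essentially name-based virtual hosting plus a reverse proxy, which the existing Apache box can already do on its own: the routing decision is made on the Host header, so a separate routing machine is not strictly required. A minimal sketch, assuming the Windows server sits at 192.168.0.20 (hypothetical address) and that mod_proxy and mod_proxy_http are enabled:

        # Apache httpd sketch - hostnames and addresses are assumptions
        NameVirtualHost *:80    # needed on Apache 2.2; not required on 2.4

        <VirtualHost *:80>
            ServerName mydomain.com
            DocumentRoot /var/www/mydomain
            # Part of this site is served from the Windows box
            ProxyPass        /mywindowssection http://192.168.0.20/mywindowssection
            ProxyPassReverse /mywindowssection http://192.168.0.20/mywindowssection
        </VirtualHost>

        <VirtualHost *:80>
            ServerName myotherdomain.com
            # Whole site served from the Windows box
            ProxyPass        / http://192.168.0.20/
            ProxyPassReverse / http://192.168.0.20/
        </VirtualHost>

    If the Astaro gateway ends up in front instead, the same idea applies there: whatever sits at the edge needs to be HTTP-aware and route on the Host header, since plain port forwarding cannot distinguish between the two domains on a single IP.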

    Read the article

  • Performance degrades for more than 2 threads on Xeon X5355

    - by zoolii
    I am writing an application using Boost threads, with Boost barriers to synchronize them. I have two machines to test the application on.
    Machine 1 is a Core 2 Duo (T8300) machine (Windows XP Professional, 4GB RAM), where I get the following performance figures:
    - 1 thread: 21 TPS
    - 2 threads: 35 TPS (66% improvement)
    Further increases in the number of threads decrease the TPS, but that is understandable as the machine has only two cores.
    Machine 2 is a dual quad-core (Xeon X5355) machine (Windows Server 2003, 4GB RAM) with 8 cores:
    - 1 thread: 21 TPS
    - 2 threads: 27 TPS (28% improvement)
    - 4 threads: 25 TPS
    - 8 threads: 24 TPS
    As you can see, performance degrades after 2 threads even though the machine has 8 cores. If the program had some bottleneck, it should have degraded at 2 threads as well. Any ideas or explanations? Does the OS have some role in the performance? It seems like the Core 2 Duo (2.4GHz) scales better than the Xeon X5355 (2.66GHz), even though the Xeon has the higher clock speed. Thank you -Zoolii

    Read the article

  • Solaris 11 SRU / Update relationship explained, and blackout period on delivery of new bug fixes eliminated

    - by user12244672
    Relationship between SRUs and Update releases
    As you may know, Support Repository Updates (SRUs) for Oracle Solaris 11 are released monthly and are available to customers with an appropriate support contract. SRUs primarily deliver bug fixes. They may also deliver low-risk feature enhancements.
    Oracle Solaris Updates are typically released once or twice a year, containing support for new hardware, new software feature enhancements, and all bug fixes available at the time the Update content was finalized. They also contain a significant number of new bug fixes, for issues found internally in Oracle and for complex customer bug fixes which require significant "soak" time to ensure their efficacy prior to release.
    Changes to SRU and Update Naming Conventions
    We're changing the naming convention of Update releases from a date-based format such as Oracle Solaris 10 8/11 to a simpler "dot" version numbering, e.g. Oracle Solaris 11.1. Oracle Solaris 11 11/11 (i.e. the initial Oracle Solaris 11 release) may be referred to as 11.0. SRUs will simply be named as "dot.dot" releases, e.g. Oracle Solaris 11.1.1 for SRU1 after Oracle Solaris 11.1. Many Oracle products and infrastructure tools such as BugDB and MOS are tailored towards this "dot.dot" style of release naming, so these name changes align Oracle Solaris with those conventions.
    No Blackout Periods on Bug Fix Releases
    The Oracle Solaris 11 release process has been enhanced to eliminate blackout periods on the delivery of new bug fixes to customers.
    Previously, Oracle Solaris Updates were a superset of all preceding bug fix deliveries. This made for a very simple update message: that which releases later is always a superset of that which was delivered previously. However, it had a downside. Once the contents of an Update release were frozen prior to release, the release of new bug fixes for customer issues was also frozen to maintain the Update's superset relationship. Since the amount of change allowed into the final internal builds of an Update release is reduced to mitigate risk, this throttling back also impacted the release of new bug fixes to customers. This meant that there was effectively a 6 to 9 week hiatus on the release of new bug fixes prior to the release of each Update. That wasn't good for customers awaiting critical bug fixes.
    We've eliminated this hiatus on the delivery of new bug fixes in Oracle Solaris 11 by allowing new bug fixes to continue to be released in SRUs even after the contents of the next Update release have been frozen. The release of SRUs will remain contiguous, with the first SRU released after the Update release effectively being a superset of both the Update release and all preceding SRUs*. That is, later SRUs are supersets of the content of previous SRUs. Therefore, the progression path from the final SRUs prior to the Update release is to the first SRU after the Update release, rather than to the Update release itself.
    The timeline / logical sequence of releases can be shown as follows:

        Updates:  11.0                                       11.1                               11.2   etc.
                      \                                          \                                  \
        SRUs:          11.0.1, 11.0.2, ..., 11.0.12, 11.0.13,     11.1.1, 11.1.2, ..., 11.1.x,       11.2.1, etc.

    For example, for systems with Oracle Solaris 11 11/11 SRU12.4 or later installed, the recommended update path is to Oracle Solaris 11.1.1 (i.e.
SRU1 after Solaris 11.1) or later rather than to the Solaris 11.1 release itself.  This will ensure no bug fixes are "lost" during the update. If for any reason you do wish to update from SRU12.4 or later to the 11.1 release itself - for example to update a test system - the instructions to do so are in the SRU12.4 README, https://updates.oracle.com/Orion/Services/download?type=readme&aru=15564533 For systems with Oracle Solaris 11 11/11 SRU11.4 or earlier installed, customers can update to either the 11.1 release or any 11.1 SRU as both will be supersets of their current version. Please do read the README of the SRU you are updating to, as it will contain important installation instructions which will save you time and effort. *Nerdy details: SRUs only contain the latest change delta relative to the Update on which they are based.  Their dependencies will, however, effectively pull in the Update content.  Customers maintaining a local Repo (e.g. behind their firewall), need to add both the 11.1 content and the relevant SRU content to their Repo, to enable the SRU's dependencies to be resolved.  Both will be available from the standard Support Repo and from MOS.  This is no different to existing SRUs for Oracle Solaris 11.0, whereby you may often get away with using just the SRU content to update, but the original 11.0 content may be needed in the Repo to resolve dependencies.
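
A rough sketch of the local-repository point above. The support repository URL is the standard one, but the SSL key/cert paths and the entire@ version string are placeholders; check the relevant SRU README for the exact FMRI and procedure for your system.

    # Create a local repository and pull both the Update content and the SRU content into it (sketch)
    pkgrepo create /export/localrepo
    pkgrecv -s https://pkg.oracle.com/solaris/support/ \
            --key /path/to/pkg.oracle.com.key.pem --cert /path/to/pkg.oracle.com.certificate.pem \
            -d /export/localrepo -r 'entire@0.5.11-0.175.1.1.0.4.0'   # placeholder version
    # Point a client at the local repository and update
    pkg set-publisher -G '*' -g file:///export/localrepo solaris
    pkg update --accept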

    Read the article

  • PASS: The Budget Process

    - by Bill Graziano
    Every fiscal year PASS creates a detailed budget.  This helps us set priorities and communicate to our members what we’re going to do in the upcoming year.  You can review the current budget on the PASS Governance page.  That page currently requires you to login but I’m talking with HQ to see if there are any legal issues with opening that up.

    The Accounting Team
    The PASS accounting team is two people: the Executive Vice-President of Finance (“EVP”) and the PASS Accounting Manager.  Sandy Cherry is the accounting manager and works at PASS HQ.  Sandy has been with PASS since we switched management companies in 2007.  Throughout this document when I talk about any actual work related to the budget that’s all Sandy :)  She’s the glue that gets us through this process.  Last year we went through 32 iterations of the budget before the Board approved it, so it’s a pretty busy time for us – well, mostly her.

    Fiscal Year
    The PASS fiscal year runs from July 1st through June 30th the following year.  Right now we’re in fiscal year 2011.  Our 2010 Summit actually occurred in FY2011.  We switched to this schedule from a calendar year in 2006.  Our goal was to have the Summit occur early in our fiscal year.  That gives us the rest of the year to handle any significant financial impact from the Summit.  If registrations are down we can reduce spending.  If registrations are up we can decide how much to increase our reserves and how much to spend.  Keep in mind that the Summit is budgeted to generate 82% of our revenue this year.  How it performs has a significant impact on our financials.  The other benefit of this fiscal year is that it matches the Microsoft fiscal year.  We sign an annual sponsorship agreement with Microsoft and it’s very helpful that our fiscal years match.  This year our budget process will probably start in earnest in March or April.  I’d like to be done in early June so we can publish before July 1st.  I was late publishing it this year and I’m trying not to repeat that.

    Our Budget
    Our actual budget is an Excel spreadsheet with 36 sheets.  We remove some of those when we publish it since they include salary information.  The budget is broken up into various portfolios or departments.  We have 20 portfolios.  They include chapters, virtual chapters, marketing, etc.  Ideally each portfolio is assigned to a Board member.  Each portfolio also typically has a staff person assigned to it.  Portfolios that aren’t assigned to a Board member are monitored by HQ and the ExecVP-Finance (me).  These are typically smaller portfolios such as deferred membership or Summit futures.  (More on those in a later post.)  All portfolios are reviewed by all Board members during the budget approval process, when interim financials are released internally and at year-end.

    The Process
    Our first step is to budget revenues.  The Board determines a target attendee number.  We have formulas based on historical performance that convert that to an overall attendee revenue number.  Other revenue projections (such as vendor sponsorships) come from different parts of the organization.  I hope to have another post with more details on how we project revenues.  The next step is to budget expenses.  Board members fill out a sample spreadsheet with their budget for the year.  They can add line items and notes describing what the amounts are for.  Each Board portfolio typically has from 10 to 30 line items.  Any new initiatives they want to pursue need to be budgeted.
    The Summit operations budget is managed by HQ.  It includes the cost for food, electrical, internet, etc.  Most of these come from our estimate of attendees and our contract with the convention center.  During this process the Board can ask for more or less to be spent on various line items.  For example, if we weren’t happy with the Internet at the last Summit we can ask them to look into different options and/or increase the budget.  HQ will also make adjustments to these numbers based on what they see at the events and the feedback we receive on the surveys.

    After we have all the initial estimates we start reviewing the entire budget.  It is sent out to the Board and we can see what each portfolio requested and what the overall profit and loss number is.  We usually start with too much in expenses and need to cut.  In years past the Board started haggling over these numbers as a group.  This past year they decided I should take a first cut and present them with a reasonable budget and a list of what I changed.  That worked well and I think we’ll continue to do that in the future.

    We go through a number of iterations on the budget.  If I remember correctly, we went through 32 iterations before we passed the budget.  At each iteration various revenue and expense numbers can change.  Keep in mind that the PASS budget has 200+ line items spread over 20 portfolios.  Many of these depend on other numbers.  For example, if we decide to increase the projected attendees, that change cascades through our budget.  At each iteration we list what changed and the impact.  Ideally these discussions take place at a face-to-face Board meeting; many of them also take place over the phone.  Board members explain any increase they are asking for while performing due diligence on other budget requests.  Eventually a budget emerges and is passed.

    Publishing
    After the budget is passed we create a version without the formulas and salaries for posting on the web site.  Sandy also creates some charts to help our members understand the budget.  The EVP writes a nice little letter describing some of the changes from last year’s budget.  You can see my letter and our budget on the PASS Governance page.  And then, eight months later, we start all over again.

    Read the article

  • Network latency and speed of light

    - by James
    This was kind of covered by the following: Is minimum latency fixed by the speed of light?, but I would like to follow it up a bit. The scenario is as follows: we have two competing sites, one on the West Coast of the US and one in Ireland. The customer is in central Europe, and has requested a latency test. Ireland gives responses of ~65-70ms. However, the West Coast guys claim to be faster, with a response of 60ms. Now a quick check says that light in fiber would take about 42ms to make the trip to the States and 8.5ms to Ireland. Obviously this is a single hop and does not include routers, switches, firewalls, protocol overhead etc. Would I be right to call BS on their figures? As a final note, I tested a ping to a Google IP address that was allegedly on the West Coast, from a site that covered a similar distance, and was amazed to get a response time of 20ms - suggesting ICMP packets that travel at twice the speed of light. So A) what am I missing, and B) am I right to suspect shenanigans?
    UPDATE: Thanks so far for your help; I have been reading various previous questions on this. About 5 years ago I had an issue where the hop from the UK to Ireland added 10ms of latency no matter what we did. In the end I moved the servers. So imagine my surprise when I have guys that claim they are 5ms faster with a transatlantic trip. So again, should I call BS? Oh, and assume both sites are normal mortals that don't have access to Google's magical routing, warp drives or flux capacitors. :)
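
    A rough back-of-the-envelope check, using assumed great-circle distances rather than real fiber routes: light in fiber propagates at roughly two-thirds of c, i.e. about 200,000 km/s. Central Europe to the US West Coast is on the order of 9,000 km, and to Ireland roughly 1,500 km, so the physical floor on round-trip time works out to:

        West Coast:  2 x 9,000 km / 200,000 km/s  =  90 ms RTT minimum
        Ireland:     2 x 1,500 km / 200,000 km/s  =  15 ms RTT minimum

    On those assumptions a 60 ms round trip from central Europe to the US West Coast is below the physical minimum, and a 20 ms "transatlantic" ping almost certainly means the reply came from somewhere much closer (for example an anycast or regional front end), not from a West Coast host.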

    Read the article

  • DNS to \\Server\ wrong - \\Server.company.local\ works fine

    - by JimmyClif
    I had a little network glitch, and since then one of my servers shows up wrong at some workstations when typing in \\server\. Example: on workstationA I go to Explorer and type \\server\, and it brings me to our copier at 192.168.2.101. \\server.company.local\ gets me to the right place at 192.168.2.252. Pinging server returns 192.168.2.252 - the same correct result as pinging server.company.com. nslookup also shows the correct result with both, and reverse lookup by IP is correct as well. I flush the DNS on the workstation and the error still occurs; a reboot gives the same result. At that point I give up and start remapping the shares to \\server.company.local\share just to get the user back working... The DNS server has correct entries for that server, and I can access the server via \\server\ on the DNS server itself, so all looks fine there. Eventually the workstation figures it out by itself and \\server\ works again, but my life wouldn't be as stressful if I had a clue what happened or how to fix it myself. Thanks for your time looking and answering.
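
    One thing worth keeping in mind is that a single-label name like \\server can be resolved by more than just DNS on a Windows workstation (DNS suffix search list, NetBIOS broadcast/WINS, LLMNR, and local caches), so the wrong address may not be coming from DNS at all. A few standard checks to run on an affected workstation while the problem is happening, as a sketch:

        ipconfig /displaydns     (what the DNS resolver cache thinks "server" is)
        nbtstat -c               (the NetBIOS name cache; a stale entry here bypasses DNS)
        nbtstat -R               (purge and reload the NetBIOS name cache)
        ipconfig /flushdns       (purge the DNS resolver cache)
        net use                  (existing SMB sessions that may still point at the old address)

    If the copier's address shows up in the NetBIOS cache or via WINS/broadcast, that would explain why the fully qualified name works while the short name does not.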

    Read the article

  • Any dangers in using DDR memory with a higher frequency than the FSB?

    - by raw_noob
    I'm looking to upgrade the memory in an older motherboard. The processor is an AMD Sempron 2500+ with a maximum speed of 333/166MHz. The motherboard is an MSI MS-7061 (KV3M-V), which accepts up to 2GB of DDR memory (maximum PC2700) in 2 slots and has a maximum FSB of 333MHz. The board does not have dual-channel support. Existing memory includes a stick of 512MB PC3200, which seems to be running OK (presumably at PC2700 speed) but is rated at 200MHz, which is below the FSB speed. The other stick is 256MB PC2100/133MHz, again below the FSB speed. (All figures from CPU-Z.) I have a chance to acquire a single used stick of PC3200/400MHz memory very cheaply. Crucial's system scanner seems to suggest that this will be OK with my system, but other sites have suggested that running memory with a higher frequency than the FSB can cause instability. Is this true? Would I be better off waiting until I can buy the correct PC2700/333MHz stick? I'm assuming that the mixed memory I have at present is running as 768MB at 133MHz. Is this a reasonable assumption? If so, would you expect the performance difference between 768MB/133MHz and 1GB/333MHz to be very noticeable? If I install the new 1GB/400 or 333MHz stick in slot 1, am I right in thinking that adding back the existing 512MB/200MHz stick in slot 2 would pull the whole 1.5GB of system memory down to 200MHz? If so, which would be better: 1.5GB/200MHz, or the single 1GB stick at the full 333MHz that the FSB permits? Is more headroom more important than extra speed? Any help - or even opinions - gratefully received. I can't find reliable information, and I can't afford to make expensive mistakes.

    Read the article

  • How to view / enumerate / obtain a list of all effective rights / permissions on an Active Directory object?

    - by Laura
    I am new to Server Fault and was hoping to find an answer to a question that I have been struggling with for the past week or so. I have recently been asked by my management to furnish a list of all the effective rights / permissions delegated on the Active Directory object for our Domain Admins group. I initially figured I'd use the Effective Permissions tab in Active Directory Users and Computers, but had two problems with it. The first was that it doesn't seem very accurate, and the second was that it requires me to enter the name of a specific user, and it only shows me what it figures are the effective permissions for that user. We have more than 1000 users in our environment, so there's no way I can possibly enter 1000 user names one by one. Plus, there is no way to export that information either. I also looked at dsacls from MS, but it doesn't do effective permissions. Someone pointed me to a tool called ADUCAdmin, but that seems to falsely claim to do effective permissions. Could someone kindly help me find a way to obtain this listing? Basically, I need to generate a list of all the modify effective permissions granted on the Domain Admins group object, along with the list of all the admins to which these permissions are granted. In case it helps, I don't need a fancy listing - simple text / CSV output would be enough. I would be grateful for any assistance since this is time and security sensitive for us.
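
    A possible starting point - not true effective permissions, which would require evaluating group nesting and inheritance, but a CSV dump of the write-capable ACEs delegated on the object - is a short PowerShell sketch like the one below. It assumes the ActiveDirectory module and its AD: drive are available, and the distinguished name is hypothetical and must be adjusted for the real domain:

        # Sketch: dump write-capable ACEs on the Domain Admins object to CSV
        # (delegated permissions, not computed "effective" permissions)
        Import-Module ActiveDirectory
        $dn = "AD:\CN=Domain Admins,CN=Users,DC=company,DC=local"   # hypothetical DN
        (Get-Acl -Path $dn).Access |
            Where-Object { $_.ActiveDirectoryRights -match 'Write|GenericAll|WriteDacl|WriteOwner' } |
            Select-Object IdentityReference, ActiveDirectoryRights, AccessControlType, IsInherited |
            Export-Csv -Path DomainAdminsACEs.csv -NoTypeInformation

    Since an ACE granted to a group applies to everyone nested inside it, expanding each IdentityReference (for example with Get-ADGroupMember) would still be a separate step if the per-admin listing is required.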

    Read the article

  • ADSL2+ - High sync-rate, good line attenuation, but low noise margin and slow speeds

    - by Mark Pim
    I've been with my ISP (IdNet) for a few months and have been getting some good speeds, but in the last week the speed has dramatically decreased (from 15 Mbps+ to around 0.2 Mbps). This happens at all times of day, not just peak periods. Obviously I've done all I can to isolate problems at my end - only one PC is connected to the router (via ethernet cable), no other background programs are using the network, etc. I've raised the issue with the ISP and they've suggested trying a new ADSL filter to see if that is causing the problem, but I thought it would also be good to get the opinion of superuser on possible causes or other troubleshooting I can do. Here are the juicy stats :)
    My router (Netgear DGN1000) reports:
                              Downstream    Upstream
        Connection Speed      17602 kbps    1062 kbps
        Line Attenuation      17.9 dB       8.6 dB
        Noise Margin          6.0 dB        6.1 dB
    I used RouterStats and those figures seem to stay fairly consistent all the time. I ran the BT speedtest and it reported:
    - download speed of 164 kbps, out of a max achievable of 21000 kbps
    - upload speed of 859 kbps, out of 1048 kbps
    - DSL connection rate of 17719 kbps down and 1048 kbps up
    - IP Profile of 15000 kbps
    Is there any more troubleshooting I can do? Does this look like a problem with my equipment / wiring or with BT's line? Any advice would be great :)

    Read the article

  • Creating dynamic map graphs

    - by Mehper C. Palavuzlar
    I need software (one tool or several) to achieve the following. I'm not sure if it can be done, but I'd like to hear suggestions from super users. The data I want to graph consists of sales figures for magazines by province. There are 81 provinces in Turkiye, and I want the computer to automatically paint / write on a map according to the sales magnitude of each province. Since there are loads of magazines with loads of issues, the process must be executed automatically just after selecting the relevant magazine and issue. The result would be graphs showing the sales weight across the whole country, with some nice illustrations. Those graphs might be used as part of a decision support mechanism to help field teams. Is it possible? I have all the data and base maps of Turkiye to be filled/painted. I'm sure this is not easy. If there is a way to do it, it will probably involve more than one piece of software. Thanks in advance for any valuable comments and answers.

    Read the article

  • Can't attach EC2 instance to Network Interface

    - by Ian Warburton
    When trying to attach a network interface, it says: "No instances were found for this availability zone." My instance is in us-east-1c and my network interface is in us-east-1b. Is that significant? If so, how do I create the VPC in the same zone, and if not, why am I getting this error? EDIT: I've re-created the VPC, and the Network Interface is now in us-east-1c and the EC2 instance is also in us-east-1c. Same error message, though!

    Read the article

  • Hyper-V Deployment Options Best Practices

    - by Erv Walter
    In what circumstances would you choose each of the following deployment options:
    - Hyper-V installed as the bare-bones Windows Hyper-V Server 2008 R2
    - The Hyper-V role installed on a Windows Server 2008 R2 Server Core installation
    - The Hyper-V role installed on a Windows Server 2008 R2 Full Installation
    For example, I know there are licensing considerations for each option:
    - With Hyper-V on top of a full installation of Enterprise or Data Center edition, you can use Windows Server as a guest OS without needing additional licenses (4 for Enterprise, unlimited for Data Center)
    - With "Windows Hyper-V Server" you have to obtain licenses for each guest OS
    But my real question is, are there technical considerations as well? I understand that the Full Installation doesn't perform as well as the other two options, but is there a significant difference between Server Core and "Windows Hyper-V Server"? What are the pros and cons of Hyper-V on Server Core vs "Windows Hyper-V Server", and when would you choose each?

    Read the article

  • PHP FastCGI HTTP Error 500 on Windows 7

    - by CJM
    I've just installed PHP (5.3.1) and MySQL (5.1.44) on my development machine, then used the Web Platform Installer to install a copy of Joomla and Drupal. However, when I try to browse either application, I get an HTTP Error 500:
        Module:        FastCgiModule
        Notification:  ExecuteRequestHandler
        Handler:       PHP_via_FastCGI
        Error Code:    0x00000000
        Requested URL: http://localhost:808/drupal/index.php
        Physical Path: D:\Projects\drupal\index.php
        Logon Method:  Anonymous
        Logon User:    Anonymous
    PHPInfo.php reports that FastCGI is configured (not sure if that is significant). Surely the fact that PHPInfo.php reports anything at all is an indication that PHP itself is working...? I'm struggling to know where to look for a solution... Each application appears to be configured similarly to my other [ASP/ASP.NET] applications.
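
    Not a definitive fix, but the php.ini settings commonly recommended for running PHP under IIS FastCGI are worth double-checking, since IIS tends to return a generic 500 when the FastCGI process writes to stderr or exits unexpectedly. A sketch of those settings; the extension_dir path is an assumption based on a default install location:

        ; php.ini settings commonly recommended for PHP under IIS FastCGI (sketch)
        fastcgi.impersonate = 1       ; use the IIS security token for the request
        cgi.fix_pathinfo    = 1       ; correct PATH_INFO handling under FastCGI
        cgi.force_redirect  = 0       ; must be off when running under IIS
        fastcgi.logging     = 0       ; stop notices on stderr being treated as request failures
        display_errors      = On      ; temporarily, so the real PHP error replaces the generic 500
        extension_dir       = "C:\Program Files (x86)\PHP\ext"    ; assumed install path
        extension           = php_mysql.dll                       ; Joomla/Drupal need MySQL support

    Turning display_errors on (or checking the PHP error log) is usually the quickest way to see whether the 500 is really a PHP error - for example a missing MySQL extension - rather than an IIS handler problem.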

    Read the article

  • Netbook performance - 1.33 GHz vs 1.6/1.66 GHz Atom

    - by Imran
    All new 11" netbooks seem to carry 1.33 GHz Atom Z520 CPU instead of 1.6/1.66 GHz Atom N270/N280. The screen resolution of 11" netbooks make them very appealing, but I'm a bit concerned about their performance as they carry a slower CPU than the 1.6GHz Atom, which isn't a great performer in the first place. Is there any significant difference in performance between 1.33 GHz and 1.6/1.66 GHz Atom processors in day to day usage? Are any of those fast enough to decode 720p x264 video? (When paired with typical Intel GMA platform and software decoder like ffdshow/CoreAVC of course, not with Nvidia Ion platform)

    Read the article

  • disparity between `top`'s given CPU % and process CPU usage total

    - by intuited
    I've noticed that there are sometimes (large) differences between the reported total CPU usage and a summation of the per-process CPU utilization given by apps like top and wmtop. As an example: I recently ran a git filter-branch --index-filter on a fairly large repo, with the index-filter command piping git ls-files through a grep filter and into xargs git rm --cached. This took a few minutes to run; while it was going I noticed that both wmtop and top were displaying a high (above 50% on my 2-core machine) total CPU usage, but that neither showed any individual processes which were using a significant amount of CPU time. Are some processes not shown in the process list? What sorts of processes are these, and is there a way to find out how much CPU time they are using?
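
    A few standard ways to see where the "missing" CPU time is going, assuming a Linux system with the sysstat package installed for mpstat/pidstat. With a pipeline like git ls-files | grep | xargs git rm, much of the work is done by many short-lived child processes, each of which may never accumulate enough CPU within one refresh interval to stand out in top's default view, even though collectively they keep the machine busy; kernel time and iowait also count toward the totals without being attributed to an obvious process.

        top -H                                           (show individual threads rather than whole processes)
        ps -eLo pcpu,pid,tid,comm --sort=-pcpu | head    (per-thread CPU usage, highest first)
        mpstat -P ALL 1                                  (per-CPU breakdown: user, system, iowait, irq)
        pidstat 1                                        (per-process CPU sampled every second)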

    Read the article

  • lsass.exe memory leak on windows 2003 server

    - by thelsdj
    In the past month or so I've noticed that lsass.exe has started to leak memory, getting to 500MB+ of RAM in under a week after a reboot. Before this I had never noticed it using any significant amount of memory compared to other processes on the system. This is happening on 2 identical servers, neither of which has anything to do with Active Directory. Maybe a recent Windows Update has caused this? Any thoughts on things to check? As a side question, is there some way to recycle the memory usage of lsass.exe without rebooting? Edit: Here is what I'm seeing in Process Monitor: there are thousands of registry open/query/close operations a minute from lsass.exe. How can I track down what is triggering these?

    Read the article
