Search Results

Search found 5528 results on 222 pages for 'offsite storage'.


  • Incomplete upgrade 12.04 to 12.10

    - by David
    Everything was running smoothly. Everything had been downloaded from Internet, packages had been installed and a prompt asked for some obsolete programs/files to be removed or kept. After that the computer crashed and and to manually force a shutdown. I turned it on again and surprise I was on 12.10! Still the upgrade was not finished! How can I properly finish that upgrade? Here's the output I got in the command line after following posted instructions: i astrill - Astrill VPN client software i dayjournal - Simple, minimal, digital journal. i gambas2-gb-form - A gambas native form component i gambas2-gb-gtk - The Gambas gtk component i gambas2-gb-gtk-ext - The Gambas extended gtk GUI component i gambas2-gb-gui - The graphical toolkit selector component i gambas2-gb-qt - The Gambas Qt GUI component i gambas2-gb-settings - Gambas utilities class i A gambas2-runtime - The Gambas runtime i google-chrome-stable - The web browser from Google i google-talkplugin - Google Talk Plugin i indicator-keylock - Indicator for Lock Keys i indicator-ubuntuone - Indicator for Ubuntu One synchronization s i A language-pack-kde-zh-hans - KDE translation updates for language Simpl i language-pack-kde-zh-hans-base - KDE translations for language Simplified C i libapt-inst1.4 - deb package format runtime library idA libattica0.3 - a Qt library that implements the Open Coll idA libbabl-0.0-0 - Dynamic, any to any, pixel format conversi idA libboost-filesystem1.46.1 - filesystem operations (portable paths, ite idA libboost-program-options1.46.1 - program options library for C++ idA libboost-python1.46.1 - Boost.Python Library idA libboost-regex1.46.1 - regular expression library for C++ i libboost-serialization1.46.1 - serialization library for C++ idA libboost-signals1.46.1 - managed signals and slots library for C++ idA libboost-system1.46.1 - Operating system (e.g. 
diagnostics support idA libboost-thread1.46.1 - portable C++ multi-threading i libcamel-1.2-29 - Evolution MIME message handling library i libcmis-0.2-0 - CMIS protocol client library i libcupsdriver1 - Common UNIX Printing System(tm) - Driver l i libdconf0 - simple configuration storage system - runt i libdvdcss2 - Simple foundation for reading DVDs - runti i libebackend-1.2-1 - Utility library for evolution data servers i libecal-1.2-10 - Client library for evolution calendars i libedata-cal-1.2-13 - Backend library for evolution calendars i libedataserver-1.2-15 - Utility library for evolution data servers i libexiv2-11 - EXIF/IPTC metadata manipulation library i libgdu-gtk0 - GTK+ standard dialog library for libgdu i libgdu0 - GObject based Disk Utility Library idA libgegl-0.0-0 - Generic Graphics Library idA libglew1.5 - The OpenGL Extension Wrangler - runtime en i libglew1.6 - OpenGL Extension Wrangler - runtime enviro i libglewmx1.6 - OpenGL Extension Wrangler - runtime enviro i libgnome-bluetooth8 - GNOME Bluetooth tools - support library i libgnomekbd7 - GNOME library to manage keyboard configura idA libgsoap1 - Runtime libraries for gSOAP i libgweather-3-0 - GWeather shared library i libimobiledevice2 - Library for communicating with the iPhone i libkdcraw20 - RAW picture decoding library i libkexiv2-10 - Qt like interface for the libexiv2 library i libkipi8 - library for apps that want to use kipi-plu i libkpathsea5 - TeX Live: path search library for TeX (run i libmagickcore4 - low-level image manipulation library i libmagickwand4 - image manipulation library i libmarblewidget13 - Marble globe widget library idA libmusicbrainz4-3 - Library to access the MusicBrainz.org data i libnepomukdatamanagement4 - Basic Nepomuk data manipulation interface i libnux-2.0-0 - Visual rendering toolkit for real-time app i libnux-2.0-common - Visual rendering toolkit for real-time app i libpoppler19 - PDF rendering library i libqt3-mt - Qt GUI Library (Threaded runtime version), i librhythmbox-core5 - support library for the rhythmbox music pl i libusbmuxd1 - USB multiplexor daemon for iPhone and iPod i libutouch-evemu1 - KernelInput Event Device Emulation Library i libutouch-frame1 - Touch Frame Library i libutouch-geis1 - Gesture engine interface support i libutouch-grail1 - Gesture Recognition And Instantiation Libr idA libx264-120 - x264 video coding library i libyajl1 - Yet Another JSON Library i linux-headers-3.2.0-29 - Header files related to Linux kernel versi i linux-headers-3.2.0-29-generic - Linux kernel headers for version 3.2.0 on i linux-image-3.2.0-29-generic - Linux kernel image for version 3.2.0 on 64 i mplayerthumbs - video thumbnail generator using mplayer i myunity - Unity configurator i A openoffice.org-calc - office productivity suite -- spreadsheet i A openoffice.org-writer - office productivity suite -- word processo i python-brlapi - Python bindings for BrlAPI i python-louis - Python bindings for liblouis i rts-bpp-dkms - rts-bpp driver in DKMS format. i system76-driver - Universal driver for System76 computers. 
i systemconfigurator - Unified Configuration API for Linux Instal i systemimager-client - Utilities for creating an image and upgrad i systemimager-common - Utilities and libraries common to both the i systemimager-initrd-template-am - SystemImager initrd template for amd64 cli i touchpad-indicator - An indicator for the touchpad i ubuntu-tweak - Ubuntu Tweak i A unity-lens-utilities - Unity Utilities lens i A unity-scope-calculator - Calculator engine i unity-scope-cities - Cities engine i unity-scope-rottentomatoes - Unity Scope Rottentomatoes

    Read the article

  • Clever memory usage through the years

    - by Ben Emmett
    A friend and I were recently talking about the really clever tricks people have used to get the most out of memory. I thought I’d share my favorites, and would love to hear yours too!

    Interleaving on drum memory
    Back in ye olde days before I’d been born (we’re talking the 50s / 60s here), working memory commonly took the form of rotating magnetic drums. These would spin at a constant speed, and a fixed head would read from memory when the correct part of the drum passed it by, a bit like a primitive platter disk. Because each revolution took a few milliseconds, programmers took to manually arranging information non-sequentially on the drum, timing when an instruction or memory address would need to be accessed, then spacing information accordingly around the edge of the drum, thus reducing the access delay. Similar techniques were still used on hard disks and floppy disks into the 90s, but have become irrelevant with modern disk technologies.

    The Hashlife algorithm
    Conway’s Game of Life has attracted numerous implementations over the years, but Bill Gosper’s Hashlife algorithm is particularly impressive. Taking advantage of the repetitive nature of many cellular automata, it uses a quadtree structure to store the hashes of pieces of the overall grid. Over time there are fewer and fewer new structures which need to be evaluated, so it starts to run faster with larger grids, drastically outperforming other algorithms both in terms of speed and the size of grid which can be simulated. The actual amount of memory used is huge, but it’s used in a clever way, so it makes the list.

    Elite’s procedural generation
    OK, so this isn’t exactly a memory optimization – more a storage optimization – but it gets an honorable mention anyway. When writing Elite, David Braben and Ian Bell wanted to build a rich world which gamers could explore, but their 22K of memory was something of a limitation (for comparison, that’s about the size of my avatar picture at the top of this page). They procedurally generated all the characteristics of the 2048 planets in their virtual universe, including the names, which were stitched together using a lookup table of parts of names. In fact the original plans were for 2^52 planets, but it was decided that that was probably too many. Oh, and they did all that in assembly language. Other games of the time used similar techniques too – The Sentinel’s landscape generation algorithm being another example.

    Modern garbage collectors
    Garbage collection in managed languages like Java and .NET ensures that most of the time, developers stop needing to care about how they use and clean up memory, as the garbage collector handles it automatically. Achieving this without killing performance is a near-miraculous feat of software engineering. Much like when learning chemistry, you find that every time you think you understand how the garbage collector works, it turns out to be a mere simplification; that there are yet more complexities and heuristics to help it run efficiently. Of course, introducing memory problems is still possible (and there are tools like our memory profiler to help if that happens to you) but they’re much, much rarer.

    A cautionary note
    In the examples above, there were good and well understood reasons for the optimizations, but cunningly optimized code has usually had to trade away readability and maintainability to achieve its gains. Trying to optimize memory usage without being pretty confident that there’s actually a problem is doing it wrong.

    So what have I missed? Tell me about the ingenious (or stupid) tricks you’ve seen people use. Ben
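
    The Elite trick is easy to appreciate with a toy model: if every name is derived deterministically from a seed and a lookup table of name parts, the whole catalogue costs almost nothing to store. The sketch below is only a loose Python illustration of that idea; the syllable table, constants and mixing function are invented for the example and bear no relation to the original assembly routine.

        # Illustrative sketch only: deterministic, seed-driven name generation in
        # the spirit of Elite's approach. The syllable table and mixing scheme
        # are invented for this example, not the original 6502 algorithm.
        def planet_name(index, seed=0x5A4A):
            syllables = ["le", "xe", "ge", "za", "ce", "bi", "so", "us",
                         "es", "ar", "ma", "in", "di", "re", "a", "er"]
            # Derive a per-planet state from the galaxy seed and planet index
            # only, so no name ever needs to be stored.
            state = (seed ^ (index * 2654435761)) & 0xFFFFFFFF
            parts = []
            for _ in range(2 + state % 3):          # 2 to 4 syllables per name
                parts.append(syllables[state & 0xF])
                state = (state * 1103515245 + 12345) & 0xFFFFFFFF
            return "".join(parts).capitalize()

        if __name__ == "__main__":
            # The same index always yields the same name, however many planets exist.
            print([planet_name(i) for i in range(8)])

    Because the generator is pure (same seed and index in, same name out), storage shrinks to the table plus one seed, which is the whole point of the technique.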

    Read the article

  • SharePoint: Numeric/Integer Site Column (Field) Types

    - by CharlesLee
    What field type should you use when creating number based site columns as part of a SharePoint feature? Windows SharePoint Services 3.0 provides you with an extensible and flexible method of developing and deploying Site Columns and Content Types (both of which are required for most SharePoint projects requiring list or library based data storage) via the feature framework (more on this in my next full article.) However there is an interesting behaviour when working with a column or field which is required to hold a number, which I thought I would blog about today. When creating Site Columns in the browser you get a nice rich UI in order to choose the properties of this field: However when you are recreating this as a feature defined in CAML (Collaborative Application Mark-up Language), which is a type of XML (more on this in my article) then you do not get such a rich experience.  You would need to add something like this to the element manifest defined in your feature: <Field SourceID="http://schemas.microsoft.com/sharepoint/3.0"        ID="{C272E927-3748-48db-8FC0-6C7B72A6D220}"        Group="My Site Columns"        Name="MyNumber"        DisplayName="My Number"        Type="Numeric"        Commas="FALSE"        Decimals="0"        Required="FALSE"        ReadOnly="FALSE"        Sealed="FALSE"        Hidden="FALSE" /> OK, its not as nice as the browser UI but I can deal with this. Hang on. Commas="FALSE" and yet for my number 1234 I get 1,234.  That is not what I wanted or expected.  What gives? The answer lies in the difference between a type of "Numeric" which is an implementation of the SPFieldNumber class and "Integer" which does not correspond to a given SPField class but rather represents a positive or negative integer.  The numeric type does not respect the settings of Commas or NegativeFormat (which defines how to display negative numbers.)  So we can set the Type to Integer and we are good to go.  Yes? Sadly no! You will notice at this point that if you deploy your site column into SharePoint something has gone wrong.  Your site column is not listed in the Site Column Gallery.  The deployment must have failed then?  But no, a quick look at the site columns via the API reveals that the column is there.  What new evil is this?  Unfortunately the base type for integer fields has this lovely attribute set on it: UserCreatable = FALSE So WSS 3.0 accordingly hides your field in the gallery as you cannot create fields of this type. However! You can use them in content types just like any other field (except not in the browser UI), and if you add them to the content type as part of your feature then they will show up in the UI as a field on that content type.  Most of the time you are not going to be too concerned that your site columns are not listed in the gallery as you will know that they are there and that they are still useable. So not as bad as you thought after all.  Just a little quirky.  But that is SharePoint for you.

    Read the article

  • Future Of F# At Jazoon 2011

    - by Alois Kraus
    I was at Jazoon 2011 in Zurich (Switzerland). It was a really cool event and it had many top notch speakers, not only from the Microsoft universe. One of the most interesting talks was from Don Syme, with the title: F# Today/F# Tomorrow. He did show how to use F# scripting to browse through open databases, OData Web Services, SharePoint, … interactively. It looked really easy with the help of F# Type Providers, which is the next big language feature in a future F# version. The object returned by a Type Provider is used to access the data like in a usual strongly typed object model. No guessing how the property of an object is called: Intellisense will show it just as you expect. There exists a range of Type Providers for various data sources where the schema of the stored data can somehow be dynamically extracted. Let's use e.g. a free database; it would then be let data = DbProvider(http://.....); and data is the object which contains all data from e.g. a chemical database. It has an elements collection which contains an element which has the properties: Name, AtomicMass, Picture, …. You can browse the object returned by the Type Provider with full Intellisense because the returned object is strongly typed, which makes this happen.

    The same can be achieved of course with code generators that take as input the schema of the input data (OData Web Service, database, SharePoint, JSON serialized data, …) and spit out the necessary strongly typed objects as an assembly. This does work, but it has the downside that if the schema of your data source is huge you will quickly run against a wall with traditional code generators, since the generated “deserialization” assembly could easily become several hundred MB.

    *** The following part contains my guesses about how this exactly works, based on asking Don two questions ***

    Q: Can I use Type Providers within C#?
    D: No.
    Q: F# is after all a library. Can I reference the F# assemblies and use the contained Type Providers?
    D: F# does annotate the generated types in a special way at runtime which is not a static type that C# could use.

    The F# Type Providers seem to use a hybrid approach. At compilation time the Type Provider is instantiated with the URL of your input data. The obtained schema information is used by the compiler to generate static types as usual, but only for a small subset (the top level classes up to a certain nesting level would make sense to me). To make this work you need to access the actual data source at compile time, which could be a problem if you want to keep the actual URL in a config file. OK, so this explains why it does work at all. But in the demo we did see full Intellisense support down to the deepest object level. It looks like, as you navigate deeper into the object hierarchy, the type provider is instantiated in the background and attaches to a true static type the properties determined at run time while you were typing. So this type is not really static at all. It is static only if you define “static” as meaning that its properties show up in Intellisense. Since this type information is determined while you are typing, it is not used to generate a true static type, and you cannot use these “intellistatic” types from C#. Nonetheless this is a very cool language feature. With the plotting libraries you can generate expressive charts from any data source within seconds to quickly get an overview of any structured data storage.

    Although my favorite programming language C# will not get such features in the near future, there is hope. If you restrict yourself to OData sources you can use LINQPad to query any OData-enabled data source with LINQ with ease. There you can query Stackoverflow, and the output is also nicely rendered, which makes it a very good tool to explore OData sources today.
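
    For readers who think in dynamic languages, the contrast between up-front code generation and on-demand, schema-driven member access can be sketched in a few lines of Python. This is only a loose analogy under invented names: it shows the “ask the schema when a member is touched” idea, but it has none of the compile-time static typing that makes F# Type Providers interesting.

        # Loose Python analogy (not F#): expose members driven by a schema that
        # is consulted on demand, instead of pre-generating one class per
        # schema element. The schema dict and fetch callable are placeholders.
        class SchemaBackedRecord:
            def __init__(self, schema, fetch):
                self._schema = schema      # e.g. {"Name": str, "AtomicMass": float}
                self._fetch = fetch        # callable that retrieves a raw field value

            def __dir__(self):             # lets tooling "see" the provided members,
                return sorted(self._schema)  # a bit like Intellisense listing them

            def __getattr__(self, name):
                if name in self._schema:
                    return self._schema[name](self._fetch(name))
                raise AttributeError(name)

        if __name__ == "__main__":
            fake_row = {"Name": "Helium", "AtomicMass": "4.0026"}
            element = SchemaBackedRecord({"Name": str, "AtomicMass": float},
                                         fake_row.__getitem__)
            print(dir(element))            # ['AtomicMass', 'Name']
            print(element.Name, element.AtomicMass)

    The huge-schema problem the talk mentions is avoided the same way here: nothing is generated for parts of the schema that are never touched.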

    Read the article

  • Five Key Trends in Enterprise 2.0 for 2011

    - by kellsey.ruppel(at)oracle.com
    We recently sat down with Andy MacMillan, an industry veteran and vice president of product management for Enterprise 2.0 at Oracle, to get his take on the year ahead in Enterprise 2.0 (E2.0). He offered us his five predictions about the ways he believes E2.0 technologies will transform business in 2011. 1. Forward-thinking organizations will achieve an unprecedented level of organizational awareness. Enterprise 2.0 and Web 2.0 technologies have already transformed the ways customers, employees, partners, and suppliers communicate and stay informed. But this year we are anticipating that organizations will go to the next step and integrate social activities with business applications to deliver rich contextual "activity streams." Activity streams are a new way for enterprise users to get relevant information as quickly as it happens, by navigating to that information in context directly from their portal. We don't mean syndicating social activities limited to a single application. Instead, we believe back-office systems will be combined with social media tools to drive how users make informed business decisions in brand new ways. For example, an account manager might log into the company portal and automatically receive notification that colleagues are closing business around a certain product in his market segment. With a single click, he can reach out instantly to these colleagues via social media and learn from their successes to drive new business opportunities in his own area. 2. Online customer engagement will become a high priority for CMOs. A growing number of chief marketing officers (CMOs) have created a new direct report called "head of online"--a senior marketing executive responsible for all engagements with customers and prospects via the Web, mobile, and social media. This new field has been dubbed "Web experience management" or "online customer engagement" by firms and analyst organizations. It is likely to rapidly increase demand for a host of new business objectives and metrics from Web content management solutions. As companies interface with customers more and more over the Web, Web experience management solutions will help deliver more targeted interactions to ensure increased customer loyalty while meeting sales and business objectives. 3. Real composite applications will be widely adopted. We expect organizations to move from the concept of a single "uber-portal" that encompasses all the necessary features to a more modular, component-based concept for composite applications. This approach is now possible as IT and power users are empowered to assemble new, purpose-built composite applications quickly from existing components. 4. Records management will drive ECM consolidation. We continue to see a significant shift in the approach to records management. Several years ago initiatives were focused on overlaying records management across a set of electronic repositories and physical storage locations. We believe federated records management will continue, but we also expect to see records management driving conversations around single-platform content management consolidation. 5. Organizations will demand ECM at extreme scale. We have already seen a trend within IT organizations to provide a common, highly scalable infrastructure to consolidate and support content and information needs. But as data sizes grow exponentially, ECM at an extreme scale is likely to spread at unprecedented speeds this year. 
This makes sense as regulations and transparency requirements rise. The model in which ECM and lightweight CMS systems provide basic content services such as check-in, update, delete, and search has converged around a set of industry best practices and has even been coded into new industry standards such as content management interoperability services. As these services converge and the demand for them accelerates, organizations are beginning to rationalize investments into a single, highly scalable infrastructure. Is your organization ready for Enterprise 2.0 in 2011? Learn more.

    Read the article

  • Oracle MAA Part 1: When One Size Does Not Fit All

    - by JoeMeeks
    The good news is that Oracle Maximum Availability Architecture (MAA) best practices combined with Oracle Database 12c (see video) introduce first-in-the-industry database capabilities that truly make unplanned outages and planned maintenance transparent to users. The trouble with such good news is that Oracle’s enthusiasm in evangelizing its latest innovations may leave some to wonder if we’ve lost sight of the fact that not all database applications are created equal. Afterall, many databases don’t have the business requirements for high availability and data protection that require all of Oracle’s ‘stuff’. For many real world applications, a controlled amount of downtime and/or data loss is OK if it saves money and effort. Well, not to worry. Oracle knows that enterprises need solutions that address the full continuum of requirements for data protection and availability. Oracle MAA accomplishes this by defining four HA service level tiers: BRONZE, SILVER, GOLD and PLATINUM. The figure below shows the progression in service levels provided by each tier. Each tier uses a different MAA reference architecture to deploy the optimal set of Oracle HA capabilities that reliably achieve a given service level (SLA) at the lowest cost.  Each tier includes all of the capabilities of the previous tier and builds upon the architecture to handle an expanded fault domain. Bronze is appropriate for databases where simple restart or restore from backup is ‘HA enough’. Bronze is based upon a single instance Oracle Database with MAA best practices that use the many capabilities for data protection and HA included with every Oracle Enterprise Edition license. Oracle-optimized backups using Oracle Recovery Manager (RMAN) provide data protection and are used to restore availability should an outage prevent the database from being able to restart. Silver provides an additional level of HA for databases that require minimal or zero downtime in the event of database instance or server failure as well as many types of planned maintenance. Silver adds clustering technology - either Oracle RAC or RAC One Node. RMAN provides database-optimized backups to protect data and restore availability should an outage prevent the cluster from being able to restart. Gold raises the game substantially for business critical applications that can’t accept vulnerability to single points-of-failure. Gold adds database-aware replication technologies, Active Data Guard and Oracle GoldenGate, which synchronize one or more replicas of the production database to provide real time data protection and availability. Database-aware replication greatly increases HA and data protection beyond what is possible with storage replication technologies. It also reduces cost while improving return on investment by actively utilizing all replicas at all times. Platinum introduces all of the sexy new Oracle Database 12c capabilities that Oracle staff will gush over with great enthusiasm. These capabilities include Application Continuity for reliable replay of in-flight transactions that masks outages from users; Active Data Guard Far Sync for zero data loss protection at any distance; new Oracle GoldenGate enhancements for zero downtime upgrades and migrations; and Global Data Services for automated service management and workload balancing in replicated database environments. Each of these technologies requires additional effort to implement. 
But they deliver substantial value for your most critical applications where downtime and data loss are not an option. The MAA reference architectures are inherently designed to address conflicting realities. On one hand, not every application has the same objectives for availability and data protection – the Not One Size Fits All title of this blog post. On the other hand, standard infrastructure is an operational requirement and a business necessity in order to reduce complexity and cost. MAA reference architectures address both realities by providing a standard infrastructure optimized for Oracle Database that enables you to dial-in the level of HA appropriate for different service level requirements. This makes it simple to move a database from one HA tier to the next should business requirements change, or from one hardware platform to another – whether it’s your favorite non-Oracle vendor or an Oracle Engineered System. Please stay tuned for additional blog posts in this series that dive into the details of each MAA reference architecture. Meanwhile, more information on Oracle HA solutions and the Maximum Availability Architecture can be found at: Oracle Maximum Availability Architecture - Webcast Maximize Availability with Oracle Database 12c - Technical White Paper

    Read the article

  • My collection of favourite TFS utilities

    - by Aaron Kowall
    So, you’re in charge of your company or team’s Team Foundation Server.  Wish it was easier to manage, administer, extend?  Well, here are a few utilities that I highly recommend looking at. I’ve recently had need to rebuild my laptop and upgrade my local TFS environment to TFS 2012 Update 1.  This gave me cause to enumerate some of the utilities I like to have on hand.

    One of the reasons I love to use TFS on projects is that it’s basically a complete ALM toolkit.  Everything from Task Management, Version Control, Build Management, Test Management, Metrics and Reporting is there ‘in the box’.  However, no matter how complete a product set is, there are always ways to make it better.  Here is a list of utilities and libraries that are pretty generally useful.  This is not intended to be an exhaustive list of TFS extensions but rather a set that I recommend you look at.  There are many more out there that may be applicable in one scenario or another.  This set of tools should work with TFS 2012 or 2010 if you grab the right version. Most of these tools (and more) are available from the Visual Studio Gallery or CodePlex.

    General
    - TFS Power Tools – This is ‘the’ collection of utilities and extensions delivered by the Product Group.  Highly recommended from here are the Best Practice Analyzer, for ensuring your TFS implementation is healthy, and the Team Foundation Server Backups, to ensure your TFS databases are backed up correctly.
    - TFS Administrators Toolkit – helps make updates to work item types and reports across many team projects.  Also provides visibility of disk usage by finding large files in version control or test attachments to assist in managing storage utilization.

    Version Control
    - Git-TF – a set of cross-platform, command line tools that facilitate sharing of changes between TFS and Git. These tools allow a developer to use a local Git repository, and configure it to share changes with a TFS server.  Great for all Git lovers who must integrate into a TFS repository.

    Testing
    - TFS 2012 Tester Power Tool – A utility for bulk copying test cases which assists in an approach for managing test cases across multiple releases.  A little plug that this utility was written and maintained by Anna Russo of Imaginet, where I also work.
    - Test Scribe – A documentation power tool designed to construct documents directly from TFS for test plan and test run artifacts for the purpose of discussion, reporting etc.

    Reporting
    - Community TFS Report Extensions – a single repository of SQL Server Reporting Services reports for Team Foundation 2010 (and above).  Check out the Test Plan Status report by Imaginet’s Steve St. Jean.  Very valuable for your test managers.

    Builds
    - TFS Build Manager – A great utility if you are build manager over a complex build environment with many TFS build definitions.
    - Community TFS Build Extensions – contains many custom build activities.  Current release binaries are for TFS 2010 but many of the activities can be recompiled for use with TFS 2012.

    While compiling this list, I was surprised by the number of TFS utilities and extensions I no longer use/need in TFS 2012 because of the great work by the TFS team addressing many gaps since the 2010 release. Are there any utilities you depend on that I’ve missed?  I’d love to hear about them in the comments!

    Read the article

  • Cannot Mount USB 3.0 Hard Disk ?!!

    - by Tenken
    Hi, I have a USB 3.0 external hard disk which I am unable to mount. The entry appears in the "lsusb" command, but I do not exactly understand how to mount it. This is the output for my lsusb command. "ASMedia Technology Inc." is the USB 3.0 device. I would appreciate some help in mounting and accessing the hard disk. This the relevant output of my "lsusb -v" : Bus 009 Device 002: ID 174c:5106 ASMedia Technology Inc. Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.10 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 idVendor 0x174c ASMedia Technology Inc. idProduct 0x5106 bcdDevice 0.01 iManufacturer 2 ASMedia iProduct 3 AS2105 iSerial 1 00000000000000000000 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 32 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xc0 Self Powered MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 2 bInterfaceClass 8 Mass Storage bInterfaceSubClass 6 SCSI bInterfaceProtocol 80 Bulk (Zip) iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x02 EP 2 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.10 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 bNumConfigurations 1 Device Status: 0x0001 Self Powered Bus 009 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 3.00 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 3 bMaxPacketSize0 9 idVendor 0x1d6b Linux Foundation idProduct 0x0003 3.0 root hub bcdDevice 2.06 iManufacturer 3 Linux 2.6.35-28-generic xhci_hcd iProduct 2 xHCI Host Controller iSerial 1 0000:04:00.0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0004 1x 4 bytes bInterval 12 Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 4 wHubCharacteristic 0x0009 Per-port power switching Per-port overcurrent protection TT think time 8 FS bits bPwrOn2PwrGood 10 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff Hub Port Status: Port 1: 0000.0100 power Port 2: 0000.0100 power Port 3: 0000.0503 highspeed power enable connect Port 4: 0000.0503 highspeed power enable connect Device Status: 0x0003 Self Powered Remote Wakeup Enabled This is the error given when I try to mount the hard drive: shinso@shinso-IdeaPad:~$ sudo mount /dev/sdb /mnt [sudo] password for shinso: mount: /dev/sdb: unknown device This the output of "dmesg|tail": [30062.774178] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. 
Plaintext passthrough mode is not enabled; returning -EIO [30535.800977] usb 9-4: USB disconnect, address 3 [30659.237342] Valid eCryptfs headers not found in file header region or xattr region [30659.237351] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO [31259.268310] Valid eCryptfs headers not found in file header region or xattr region [31259.268313] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO [31860.059058] Valid eCryptfs headers not found in file header region or xattr region [31860.059062] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO [32465.220590] Valid eCryptfs headers not found in file header region or xattr region [32465.220593] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO I am using Ubuntu 10.10 (64 bit). Any help is appreciated.

    Read the article

  • Good DBAs Do Baselines

    - by Louis Davidson
    One morning, you wake up and feel funny. You can’t quite put your finger on it, but something isn’t quite right. What now? Unless you happen to be a hypochondriac, you likely drag yourself out of bed, get on with the day and gather more “evidence”. You check your symptoms over the next few days; do you feel the same, better, worse? If better, then great, it was some temporal issue, perhaps caused by an allergic reaction to some suspiciously spicy chicken. If the same or worse, then you go to the doctor for some health advice, but armed with some data to share, and having ruled out certain possible causes that are fixed with a bit of rest and perhaps an antacid. Whether you realize it or not, in comparing how you feel one day to the next, you have taken baseline measurements.

    In much the same way, a DBA uses baselines to gauge the health of their database servers. Of course, while SQL Server is very willing to share data regarding its health and activities, it has almost no idea of the difference between good and bad. Over time, experienced DBAs develop “mental” baselines with which they can gauge the health of their servers almost as easily as their own body. They accumulate knowledge of the daily, natural state of each part of their database system, and so know instinctively when one of their databases “feels funny”. Equally, they know when an “issue” is just a passing tremor. They see their SQL Server with all of its four CPU cores running close to 100% and don’t panic anymore. Why? It’s 5PM and every day the same thing occurs when the end-of-day reports, which are very CPU intensive, are running. Equally, they know when they need to respond in earnest when it is the first time they have heard about an issue, even if it has been happening every day.

    Nevertheless, no DBA can retain mental baselines for every characteristic of their systems, so we need to collect physical baselines too. In my experience, surprisingly few DBAs do this very well. Part of the problem is that SQL Server provides a lot of instrumentation. If you look, you will find an almost overwhelming amount of data regarding user activity on your SQL Server instances, and use and abuse of the available CPU, I/O and memory. It seems like a huge task even to work out which data you need to collect, let alone start collecting it on a regular basis, managing its storage over time, and performing detailed comparative analysis. Without baselines, though, it is very difficult to pinpoint what ails a server just by looking at a single snapshot of the data, or to spot retrospectively what caused the problem by examining aggregated data for the server, collected over many months.

    It isn’t as hard as you think to get started. You’ve probably already established some troubleshooting queries of the type SELECT Value FROM SomeSystemTableOrView. Capturing a set of baseline values for such a query can be as easy as changing it as follows:

        INSERT INTO BaseLine.SomeSystemTable (value, captureTime)
        SELECT Value, SYSDATETIME()
        FROM SomeSystemTableOrView;

    Of course, there are monitoring tools that will collect and manage this baseline data for you, automatically, and allow you to perform comparison of metrics over different periods. However, to get yourself started and to prove to yourself (or perhaps the person who writes the checks for tools) the value of baselines, stick something similar to the above query into an agent job, running every hour or so, and you are on your way with no excuses!

    Then, the next time you investigate a slow server, and see x open transactions, y users logged in, and z rows added per hour in the Orders table, compare to your baselines and see immediately what, if anything, has changed!
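
    If SQL Server Agent isn’t available to you (on an Express edition, for instance), any external scheduler can drive the same capture. The sketch below is a minimal Python variant using pyodbc; the connection string, schema and table names are placeholder assumptions carried over from the example query, not prescriptions.

        # Minimal sketch: capture one baseline sample per run, scheduled
        # externally (cron, Task Scheduler). Connection string and object
        # names are placeholders, not from the article.
        import pyodbc

        CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
                    "SERVER=myserver;DATABASE=DBA;Trusted_Connection=yes")

        def capture_baseline():
            conn = pyodbc.connect(CONN_STR)
            try:
                cur = conn.cursor()
                # Same idea as the INSERT ... SELECT above: stamp each sample
                # with its capture time so later comparisons can line up periods.
                cur.execute("""
                    INSERT INTO BaseLine.SomeSystemTable (value, captureTime)
                    SELECT Value, SYSDATETIME()
                    FROM SomeSystemTableOrView;
                """)
                conn.commit()
            finally:
                conn.close()

        if __name__ == "__main__":
            capture_baseline()

    Either way, the important design choice is the same: every sample carries its capture time, because the comparisons are always “now versus the same time on a normal day”.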

    Read the article

  • How to get faster graphics in KVM? VNC is painfully slow with Haiku OS guest, Spice won't install and SDL doesn't work

    - by Don Quixote
    I've been coming up to speed on the Haiku operating system, an Open Source clone of BeOS 5 Pro. I'm using an Apple MacBook Pro as my development machine. Apple's BootCamp BIOS does not support more than four partitions on the internal hard drive. While I can set up extended and logical partitions, doing so will prevent any of the installed operating systems from booting. To run Haiku directly on the iron, I boot it off a USB stick. Using external storage is also helpful because I am perpetually out of filesystem space. While VirtualBox is documented to allow access to physical drives, I could not actually get it to work. Also VirtualBox can only use one of the host CPU's cores. While VB guests can be configured for more than one CPU, they are only emulated. A full build of the Haiku OS takes 4.5 under VB. I had the hope of reducing build times by using KVM instead, but it's not working nearly as well as VirtualBox did. The Linux Kernel Virtual Machine is broken in all manner of fundamental ways as seen from Haiku. But I'm a coder; maybe I could contribute to fixing some of those problems. The first problem I've got is that Haiku's video in virt-manager is quite painfully slow. When I drag Haiku windows around the desktop, they lag quite far behind where my mouse is. It's quite difficult to move a window to a precise position on the screen. Just imagine that the mouse was connected to the window title bar with a really stretchy spring. Also Haiku's mouse lags quite far behind where I have moved it. I found lots of Personal Package Archives that enable Spice from QEMU / KVM at the Ubuntu Personal Package Arhives. I tried a few of the PPAs but none of them worked; with one of them, the command "add-apt-repository" crashed with a traceback. There is a Wiki page about Spice, but it says that it only works on 64-bit. My Early 2006 MacBook Pro is 32-bit. Its Apple Model Identifier is MacBookPro1,1; these use Core Duos NOT Core 2 Duos. I don't mind building a source deb for 32-bit if I can expect it to work. Is there some reason that Spice should be 64-bit only? Does it need features of the x86_64 Instruction Set Architecture that x86 does not have? When I try using SDL from virt-manager, the configuration for Local SDL Window says "Xauth: /home/mike/.Xauthority". When I try to start my guest, virt-manager emits an error. When I Googled the error message, the usual solution was to make ~/.Xauthority readible. However, .Xauthorty does not exist in my home directory. Instead I have a $XAUTHORITY environment variable. There is no way to configure SDL in virt-manager to use $XAUTHORITY instead of ~/.Xauthority. Neither does it work to copy the value of $XAUTHORITY into the file. I am ready to scream, because I've been five fscking days trying to make KVM work for Haiku development. There is a whole lot more that is broken than the slow video. All I really want to do for now is speed up my full builds of Haiku by using "jam -j2" to use both cores in my CPU. I may try Xen next, but the last time I monkeyed with Xen it was far, far more broken than I am finding KVM to be. Just for now, I would be satisfied if there were some way to use my USB stick as a drive in VirtualBox. VB does allow me to configure /dev/sdb as a drive, but it always causes a fatal error when I try to launch the guest. Thank You For Any Advice You Can Give Me. -

    Read the article

  • Heterogeneous Datacenter Management with Enterprise Manager 12c

    - by Joe Diemer
    The following is a Guest Blog, contributed by Bryce Kaiser, Product Manager at Blue Medora.

    When I envision a perfect datacenter, it would consist of technologies acquired from a single vendor across the entire server, middleware, application, network, and storage stack - Apps to Disk - that meets your organization’s every IT requirement with absolute best-of-breed solutions in every category.   To quote a familiar motto, your datacenter would consist of "Hardware and Software, Engineered to Work Together".  In almost all cases, practical realities dictate something far less than the IT Utopia mentioned above.   You may wish to leverage multiple vendors to keep licensing costs down, a single vendor may not have an offering in the IT category you need, or your preferred vendor may quite simply not have the solution that meets your needs.    In other words, your IT needs dictate a heterogeneous IT environment.  Heterogeneity, however, comes with additional complexity. The following are two pretty typical challenges:

    1) No End-to-End Visibility into the Enterprise-Wide Application Deployment. Each vendor solution which is added to an infrastructure may bring its own tooling, creating different consoles for different vendor applications and platforms.
    2) No Visibility into Performance Bottlenecks. When multiple management tools operate independently, you lose diagnostic capabilities, including identifying cross-tier issues with database, hung requests, slowness, memory leaks and hardware errors/failures causing DB/MW issues.

    As adoption of Oracle Enterprise Manager (EM) has increased, especially since the release of Enterprise Manager 12c, Oracle has seen an increase in the number of customers who want to leverage their investments in EM to manage non-Oracle workloads.  Enterprise Manager provides a single pane of glass view into their entire datacenter.  By creating a highly extensible framework via the Oracle EM Extensibility Development Kit (EDK), Oracle has provided the tooling for business partners such as my company Blue Medora, as well as customers, to easily fill gaps in the ecosystem and enhance existing solutions.  As mentioned in the previous post on the Enterprise Manager Extensibility Exchange, customers have access to an assortment of Oracle and Partner provided solutions through this Exchange, which is accessed at http://www.oracle.com/goto/emextensibility.  Currently, there are over 80 Oracle and partner provided plug-ins across the EM 11g and EM 12c versions.  Blue Medora is one of those contributing partners, for which you will find 3 of our solutions, including our flagship plugin for VMware.

    Let's look at Blue Medora’s VMware plug-in as an example of what I'm trying to convey.  Here is a common situation solved by true visibility into your entire stack:

    Symptoms
    - My database is bogging down, however the database appears okay internally.  Maybe it’s starved for resources?
    - My OS tooling is showing everything is “OK”.  Something doesn’t add up.
    Root cause
    - Through the VMware plugin we can see the problem is actually on the virtualization layer.
    Solution
    - From within Enterprise Manager -- the same tool you use for all of your database tuning -- we can overlay the data of the database target, host target, and virtual machine target for a true picture of the true root cause.

    Here is the console view:

    Perhaps your monitoring conditions are more specific to your environment.  No worries, Enterprise Manager still has you covered.  With Metric Extensions you have the “Next Generation” of User-Defined Metrics, which easily bring the power of your existing management scripts into a single console while leveraging the proven Enterprise Manager framework. Simply put, Oracle Enterprise Manager boasts a growing ecosystem that provides the single pane of glass for your entire datacenter from the database and beyond.  Bryce can be contacted at [email protected]

    Read the article

  • Caching factory design

    - by max
    I have a factory class XFactory that creates objects of class X. Instances of X are very large, so the main purpose of the factory is to cache them, as transparently to the client code as possible. Objects of class X are immutable, so the following code seems reasonable:

        # module xfactory.py
        import x

        class XFactory:
            _registry = {}

            def get_x(self, arg1, arg2, use_cache=True):
                if use_cache:
                    hash_id = hash((arg1, arg2))
                    if hash_id in self._registry:
                        return self._registry[hash_id]
                    obj = x.X(arg1, arg2)
                    self._registry[hash_id] = obj
                    return obj
                return x.X(arg1, arg2)

        # module x.py
        class X:
            ...  # large, immutable object

    Is it a good pattern? (I know it's not the actual Factory Pattern.) Is there anything I should change?

    Now, I find that sometimes I want to cache X objects to disk. I'll use pickle for that purpose, and store as values in the _registry the filenames of the pickled objects instead of references to the objects. Of course, _registry itself would have to be stored persistently (perhaps in a pickle file of its own, in a text file, in a database, or simply by giving pickle files the filenames that contain hash_id).

    Except now the validity of the cached object depends not only on the parameters passed to get_x(), but also on the version of the code that created these objects. Strictly speaking, even a memory-cached object could become invalid if someone modifies x.py or any of its dependencies, and reloads it while the program is running. So far I ignored this danger, since it seems unlikely for my application. But I certainly cannot ignore it when my objects are cached to persistent storage. What can I do?

    I suppose I could make the hash_id more robust by calculating the hash of a tuple that contains arguments arg1 and arg2, as well as the filename and last modified date for x.py and every module and data file that it (recursively) depends on. To help delete cache files that won't ever be useful again, I'd add to the _registry the unhashed representation of the modified dates for each record. But even this solution isn't 100% safe, since theoretically someone might load a module dynamically, and I wouldn't know about it from statically analyzing the source code. If I go all out and assume every file in the project is a dependency, the mechanism will still break if some module grabs data from an external website, etc. In addition, the frequency of changes in x.py and its dependencies is quite high, leading to heavy cache invalidation.

    Thus, I figured I might as well give up some safety, and only invalidate the cache when there is an obvious mismatch. This means that class X would have a class-level cache validation identifier that should be changed whenever the developer believes a change happened that should invalidate the cache. (With multiple developers, a separate invalidation identifier is required for each.) This identifier is hashed along with arg1 and arg2 and becomes part of the hash keys stored in _registry. Since developers may forget to update the validation identifier, or may not realize that they invalidated existing cache, it would seem better to add another validation mechanism: class X can have a method that returns all the known "traits" of X. For instance, if X is a table, I might add the names of all the columns. The hash calculation will include the traits as well.

    I can write this code, but I am afraid that I'm missing something important; and I'm also wondering if perhaps there's a framework or package that can do all of this stuff already. Ideally, I'd like to combine in-memory and disk-based caching.
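
    The "hash the code version along with the arguments" idea can be sketched concretely. The helper below is only an illustration of that scheme under invented names (it assumes a hand-maintained list of dependency files), not a drop-in answer, and it inherits the same caveat about dynamically loaded dependencies.

        # Illustrative sketch of a version-aware cache key: the key covers the
        # call arguments plus the path and mtime of each file the result
        # depends on. DEPENDENCY_FILES is a placeholder maintained by hand.
        import hashlib
        import os
        import pickle

        DEPENDENCY_FILES = ["x.py"]   # assumption: every module/data file X needs

        def cache_key(arg1, arg2, extra_traits=()):
            # repr(...) rather than hash(...) so keys stay stable across
            # interpreter runs (str hashing is randomized per process).
            parts = [repr(arg1), repr(arg2), *map(repr, extra_traits)]
            for path in DEPENDENCY_FILES:
                parts.append(path)
                parts.append(str(os.path.getmtime(path)))  # invalidates on edit
            return hashlib.sha1("\n".join(parts).encode("utf-8")).hexdigest()

        def disk_path(key, cache_dir="cache"):
            os.makedirs(cache_dir, exist_ok=True)
            return os.path.join(cache_dir, key + ".pickle")

        def load_or_build(arg1, arg2, build, traits=()):
            """Return a cached object if its key matches, else build and store it."""
            path = disk_path(cache_key(arg1, arg2, traits))
            if os.path.exists(path):
                with open(path, "rb") as f:
                    return pickle.load(f)
            obj = build(arg1, arg2)
            with open(path, "wb") as f:
                pickle.dump(obj, f)
            return obj

    Keys built this way survive interpreter restarts, which matters once the registry lives on disk; anything not captured in the key (a dynamically imported module, an external website) still silently invalidates nothing, exactly as the question warns.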

    Read the article

  • Investigating on xVelocity (VertiPaq) column size

    - by Marco Russo (SQLBI)
    In January I published an article about how to optimize high cardinality columns in VertiPaq. In the meantime, VertiPaq has been rebranded to xVelocity: the official name is now “xVelocity in-memory analytics engine (VertiPaq)”, but using xVelocity and VertiPaq when we talk about Analysis Services has the same meaning. In this post I’ll show how to investigate the column sizes of an existing Tabular database so that you can find the most important columns to be optimized.

    A first approach can be looking in the DataDir of Analysis Services and looking for the folder containing the database. Then, look for the biggest files in all subfolders and you will find the name of a file that contains the name of the most expensive column. However, this heuristic process is not very optimized. A better approach is using a DMV that provides the exact information. For example, by using the following query (open SSMS, open an MDX query on the database you are interested in and execute it) you will see all database objects sorted by used size in a descending way.

        SELECT * FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS
        ORDER BY used_size DESC

    You can look at the first rows in order to understand what the most expensive columns in your tabular model are. The interesting data provided are:
    - TABLE_ID: the name of the object – it can be also a dictionary or an index
    - COLUMN_ID: the column name the object belongs to – you can also see ID_TO_POS and POS_TO_ID in case they refer to internal indexes
    - RECORDS_COUNT: the number of rows in the column
    - USED_SIZE: the used memory for the object

    By looking at the ratio between USED_SIZE and RECORDS_COUNT you can understand what you can do in order to optimize your tabular model. Your options are:
    - Remove the column. Yes, if it contains data you will never use in a query, simply remove the column from the tabular model.
    - Change granularity. If you are tracking time and you included milliseconds but seconds would be enough, round the data source column to the nearest second. If you have a floating point number but two decimals are good enough (i.e. the temperature), round the number to the nearest decimal that is relevant to you.
    - Split the column. Create two or more columns that have to be combined together in order to produce the original value. This technique is described in the VertiPaq optimization article.
    - Sort the table by that column. When you read the data source, you might consider sorting data by this column, so that the compression will be more efficient. However, this technique works better on columns that don’t have too many distinct values, and you will probably move the problem to another column. Sorting data starting from the lower density columns (those with a few distinct values) and going to higher density columns (those with high cardinality) is the technique that provides the best compression ratio.

    After the optimization you should be able to reduce the used size and improve the count/size ratio you measured before. If you are interested in a longer discussion about internal storage in VertiPaq and you want to understand why this approach can save you space (and time), you can attend my 24 Hours of PASS session “VertiPaq Under the Hood” on March 21 at 08:00 GMT.
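
    If you prefer to rank the worst offenders outside SSMS, the DMV result can be post-processed with a few lines of Python. The sketch below assumes you have already exported the DMV output to a CSV file (the file name and column headers are assumptions for the example); it simply computes the USED_SIZE to RECORDS_COUNT ratio described above.

        # Rank columns from an exported DMV result by total size and bytes/row.
        # Assumes a CSV export of $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS
        # containing TABLE_ID, COLUMN_ID, RECORDS_COUNT and USED_SIZE columns.
        import csv

        def worst_columns(path="column_segments.csv", top=10):
            rows = []
            with open(path, newline="", encoding="utf-8") as f:
                for r in csv.DictReader(f):
                    records = int(r["RECORDS_COUNT"] or 0)
                    used = int(r["USED_SIZE"] or 0)
                    rows.append({
                        "table": r["TABLE_ID"],
                        "column": r["COLUMN_ID"],
                        "used_size": used,
                        # bytes per row is the figure to watch for optimization
                        "bytes_per_row": used / records if records else 0.0,
                    })
            return sorted(rows, key=lambda x: x["used_size"], reverse=True)[:top]

        if __name__ == "__main__":
            for r in worst_columns():
                print(f'{r["table"]}.{r["column"]}: '
                      f'{r["used_size"]:,} bytes, {r["bytes_per_row"]:.1f} B/row')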

    Read the article

  • Essbase BSO Data Fragmentation

    - by Ann Donahue
    Data fragmentation naturally occurs in Essbase Block Storage (BSO) databases where there are a lot of end user data updates, incremental data loads, many lock and send operations, and/or many calculations executed.  If an Essbase database starts to experience performance slow-downs, this is an indication that there may be too much fragmentation.  See Chapter 54, Improving Essbase Performance, in the Essbase DBA Guide for more details on measuring and eliminating fragmentation: http://docs.oracle.com/cd/E17236_01/epm.1112/esb_dbag/daprcset.html

    Fragmentation is likely to occur in the following situations:
    - Read/write databases where users are constantly updating data
    - Databases that execute calculations around the clock
    - Databases that frequently update and recalculate dense members
    - Data loads that are poorly designed
    - Databases that contain a significant number of Dynamic Calc and Store members
    - Databases that use an isolation level of uncommitted access with commit block set to zero

    There are two types of data block fragmentation:
    - Free space tracking, which is measured using the Average Fragmentation Quotient statistic.
    - Block order on disk, which is measured using the Average Cluster Ratio statistic.

    Average Fragmentation Quotient
    The Average Fragmentation Quotient ratio measures free space in a given database.  As you update and calculate data, empty spaces occur when a block can no longer fit in its original space and will either append at the end of the file or fit in another empty space that is large enough.  These empty spaces take up space in the .PAG files.  The higher the number, the more empty spaces you have; therefore, the bigger the .PAG file and the longer it takes to traverse through the .PAG file to get to a particular record.  An Average Fragmentation Quotient value of 3.174765 means the database is 3% fragmented with free space.

    Average Cluster Ratio
    The Average Cluster Ratio describes the order in which the blocks actually exist in the database. An Average Cluster Ratio of 1 means all the blocks are ordered in the correct sequence, in the order of the Outline.  As you load data and calculate data blocks, the sequence can start to be out of order.  This is because when you write to a block it may not be able to be placed back in the exact same spot in the database where it existed before.  The lower this number, the more out of order it becomes and the more it affects performance.  An Average Cluster Ratio value of 1 means no fragmentation.  Any value lower than 1 (e.g. 0.01032828) means the data blocks are getting further out of order from the outline order.

    Eliminating Data Block Fragmentation
    Both types of data block fragmentation can be removed by doing a dense restructure or an export/clear/import of the data.  There are two types of dense restructure:

    1. Implicit Restructures
    An implicit dense restructure happens when outline changes are made using the EAS Outline Editor or Dimension Build. Essbase restructures create new .PAG files, restructuring the data blocks in the .PAG files. When Essbase restructures the data blocks, it regenerates the index automatically so that index entries point to the new data blocks. Empty blocks are NOT removed with implicit restructures.

    2. Explicit Restructures
    An explicit dense restructure happens when a manual initiation of the database restructure is executed. An explicit dense restructure is a full restructure, which comprises a dense restructure as outlined above plus the removal of empty blocks.

    Empty Blocks vs. Fragmentation
    The existence of empty blocks is not considered fragmentation.  Empty blocks can be created through calc scripts or formulas.  An empty block will add to an existing database block count and will be included in the block counts of the database properties.  There are no statistics for empty blocks.  The only way to determine if empty blocks exist in an Essbase database is to record your current block count, export the entire database, clear the database, then import the exported data.  If the block count decreased, the difference is the number of empty blocks that had existed in the database.
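
    As a purely illustrative aid for interpreting the two statistics above, the tiny helper below flags the conditions discussed in this post. The thresholds are placeholders for the example, not values from the Essbase DBA Guide; tune them against your own baselines.

        # Toy helper for reading the two fragmentation statistics. Thresholds
        # are illustrative placeholders, not documented Essbase guidance.
        def restructure_advice(avg_fragmentation_quotient, avg_cluster_ratio,
                               frag_threshold=20.0, cluster_threshold=0.8):
            reasons = []
            if avg_fragmentation_quotient > frag_threshold:
                reasons.append(
                    f"{avg_fragmentation_quotient:.1f}% free-space fragmentation "
                    f"exceeds {frag_threshold}%")
            if avg_cluster_ratio < cluster_threshold:
                reasons.append(
                    f"cluster ratio {avg_cluster_ratio:.2f} is well below 1, so "
                    f"blocks are out of outline order")
            return reasons or ["no action suggested by these two statistics"]

        if __name__ == "__main__":
            # Values echo the examples in the post: ~3% fragmented, badly clustered.
            for line in restructure_advice(3.174765, 0.01032828):
                print(line)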

    Read the article

  • Cloud Computing Pricing - It's like a Hotel

    - by BuckWoody
    I normally don't go into the economics or pricing side of Distributed Computing, but I've had a few friends that have been surprised by a bill lately and I wanted to quickly address at least one aspect of it.

    Most folks are used to buying software and owning it outright - like buying a car. We pay a lot for the car, and then we use it whenever we want. We think of the "cloud" services as a taxi - we'll just pay for the ride we take and no more. But it's not quite like that. It's actually more like a hotel.

    When you subscribe to Azure using a free offering like the MSDN subscription, you don't have to pay anything for the service. But when you create an instance of a Web or Compute Role, Storage, that sort of thing, you can think of it as checking into a hotel room. You get the key, you pay for the room. For Azure, using bandwidth, CPU and so on is billed just like it states in the Azure Portal. So in effect there is a cost for the service and then a cost to use it, like water or power or any other utility.

    Where this bit some folks is that they created an instance, played around with it, and then left it running. No one was using it, no one was on - so they thought they wouldn't be charged. But they were. It wasn't much, but it was a surprise. They had the hotel room key, but they weren't in the room, so to speak. To add to their frustration, they had to talk to someone on the phone to cancel the account. I understand the frustration. Although we have all this spelled out in the sign up area, not everyone has the time to read through all that. I get that. So why not make this easier?

    As an explanation, we bill for that time because the instance is still running, and we have to tie up resources to be available the second you want them, and that costs money. As far as being able to cancel from the portal, that's also something that needs to be clearer. You may not be aware that you can spin up instances using code - and so cancelling from the Portal would allow you to do the same thing. Since a mistake in code could erase all of your instances and the account, we make you call to make sure you're you and you really want to take it down. Not a perfect system by any means, but we'll evolve this as time goes on. For now, I wanted to make sure you're aware of what you should do.

    By the way, you don't have to cancel your whole account in order not to be billed. Just delete the instance from the portal and you won't be charged. You don't have to call anyone for that. And just FYI - you can download the SDK for Azure and never even hit the online version at all for learning and playing around. No sign-up, no credit card, PO, nothing like that. In fact, that's how I demo Azure all the time. Everything runs right on your laptop in an emulated environment.

    Read the article

  • Know your Data Lineage

    - by Simon Elliston Ball
    An academic paper without the footnotes isn't an academic paper. Journalists wouldn't base a news article on facts that they can't verify. So why would anyone publish reports without being able to say where the data has come from and be confident of its quality, in other words, without knowing its lineage (sometimes referred to as 'provenance' or 'pedigree')? The number and variety of data sources, both traditional and new, increase inexorably. Data comes clean or dirty, processed or raw, unimpeachable or entirely fabricated. On its journey from its source to our report, the data can travel through a network of interconnected pipes, passing through numerous distinct systems, each managed by different people. At each point along the pipeline, it can be changed, filtered, aggregated and combined. When the data finally emerges, how can we be sure that it is right? How can we be certain that no part of the data collection was based on incorrect assumptions, that key data points haven't been left out, or that the sources are good? Even when we're using data science to give us an approximate or probable answer, we cannot have any confidence in the results without confidence in the data from which they came. You need to know what has been done to your data, where it came from, and who is responsible for each stage of the analysis. This information represents your data lineage; it is your stack-trace. If you're an analyst, suspicious of a number, it tells you why the number is there and how it got there. If you're a developer, working on a pipeline, it provides the context you need to track down the bug. If you're a manager, or an auditor, it lets you know the right things are being done. Lineage tracking is part of good data governance. Most audit and lineage systems require you to buy into their whole structure. If you are using Hadoop for your data storage and processing, then tools like Falcon allow you to track lineage, as long as you are using Falcon to write and run the pipeline. It can mean learning a new way of running your jobs (or using some sort of proxy), and even a distinct way of writing your queries. Other Hadoop tools provide a lot of operational and audit information, spread throughout the many logs produced by Hive, Sqoop, MapReduce and all the various moving parts that make up the eco-system. To get a full picture of what's going on in your Hadoop system you need to capture both the Falcon lineage and the data-exhaust of the other tools that Falcon can't orchestrate. However, the problem is bigger even than that. Often, Hadoop is just one piece in a larger processing workflow. The next step of the challenge is how you bind together the lineage metadata describing what happened before and after Hadoop, where 'after' could be a data analysis environment like R, an application, or even an end-user tool such as Tableau or Excel. One possibility is to push as much as you can of your key analytics into Hadoop, but would you give up the power and familiarity of your existing tools in return for a reliable way of tracking lineage? Lineage and auditing should work consistently, automatically and quietly, allowing users to access their data with whatever tool they need to use.
    The real solution, therefore, is to create a consistent method by which to bring lineage data from these various disparate sources into the data analysis platform that you use, rather than being forced to use the tool that manages the pipeline for the lineage and a different tool for the data analysis. The key is to keep your logs and your audit data from every source, bring them together, and use your data analysis tools to trace the paths from raw data to the answer that the analysis provides.

    Read the article

  • How much is a subscriber worth?

    - by Tom Lewin
    This year at Red Gate, we’ve started providing a way to back up SQL Azure databases and Azure storage. We decided to sell this as a service, instead of a product, which means customers only pay for what they use. Unfortunately for us, it makes figuring out revenue much trickier. With a product like SQL Compare, a customer pays for it, and it’s theirs for good. Sure, we offer support and upgrades, but, fundamentally, the sale is a simple, upfront transaction: we’ve made this product, you need this product, we swap product for money and everyone is happy. With software as a service, it isn’t that easy. The money and product don’t change hands up front. Instead, we provide a service in exchange for a recurring fee. We know someone buying SQL Compare will pay us $X, but we don’t know how long service customers will stay with us, or how much they will spend. How do we find this out? We use lifetime value analysis. What is lifetime value? Lifetime value, or LTV, is how much a customer is worth to the business. For Entrepreneurs has a brilliant write up that we followed to conduct our analysis. Basically, it all boils down to this equation: LTV = ARPU x ALC To make it a bit less of an alphabet-soup and a bit more understandable, we can write it out in full: The lifetime value of a customer equals the average revenue per customer per month, times the average time a customer spends with the service Simple, right? A customer is worth the average spend times the average stay. If customers pay on average $50/month, and stay on average for ten months, then a new customer will, on average, bring in $500 over the time they are a customer! Average spend is easy to work out; it’s revenue divided by customers. The problem comes when we realise that we don’t know exactly how long a customer will stay with us. How can we figure out the average lifetime of a customer, if we only have six months’ worth of data? The answer lies in the fact that: Average Lifetime of a Customer = 1 / Churn Rate The churn rate is the percentage of customers that cancel in a month. If half of your customers cancel each month, then your average customer lifetime is two months. The problem we faced was that we didn’t have enough data to make an estimate of one month’s cancellations reliable (because barely anybody cancels)! To deal with this data problem, we can take data from the last three months instead. This means we have more data to play with. We can still use the equation above, we just need to multiply the final result by three (as we worked out how many three month periods customers stay for, and we want our answer to be in months). Now these estimates are likely to be fairly unreliable; when there’s not a lot of data it pays to be cautious with inference. That said, the numbers we have look fairly consistent, and it’s super easy to revise our estimates when new data comes in. At the very least, these numbers give us a vague idea of whether a subscription business is viable. As far as Cloud Services goes, the business looks very viable indeed, and the low cancellation rates are much more than just data points in LTV equations; they show that the product is working out great for our customers, which is exactly what we’re looking for!
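    To make the arithmetic above concrete, here is a small C# sketch of the calculation. The revenue, customer count, and churn figures are invented for illustration only, and the three-month churn window is converted back to months exactly as described in the post.

        using System;

        class LtvEstimate
        {
            static void Main()
            {
                // Hypothetical inputs - not real Red Gate numbers.
                double monthlyRevenue = 5000.0;   // total subscription revenue per month
                double customerCount = 100.0;     // active subscribers

                double arpu = monthlyRevenue / customerCount;   // average revenue per user per month

                // Churn measured over three-month periods because monthly data is too sparse.
                double threeMonthChurn = 0.15;                  // 15% cancel per three-month period

                // Average lifetime = 1 / churn (in three-month periods), times three to get months.
                double avgLifetimeMonths = (1.0 / threeMonthChurn) * 3.0;

                // LTV = ARPU x ALC
                double ltv = arpu * avgLifetimeMonths;

                Console.WriteLine("ARPU: {0:F2}/month, average lifetime: {1:F1} months, LTV: {2:F2}",
                                  arpu, avgLifetimeMonths, ltv);
            }
        }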

    Read the article

  • Investigating on xVelocity (VertiPaq) column size

    - by Marco Russo (SQLBI)
    In January I published an article about how to optimize high cardinality columns in VertiPaq. In the meantime, VertiPaq has been rebranded to xVelocity: the official name is now "xVelocity in-memory analytics engine (VertiPaq)", but using xVelocity or VertiPaq when we talk about Analysis Services has the same meaning. In this post I'll show how to investigate the column sizes of an existing Tabular database so that you can find the most important columns to be optimized. A first approach is to look in the DataDir of Analysis Services for the folder containing the database, and then look for the biggest files in all the subfolders; their names contain the name of the most expensive columns. However, this heuristic process is not very practical. A better approach is to use a DMV that provides the exact information. For example, by using the following query (open SSMS, open an MDX query on the database you are interested in, and execute it) you will see all database objects sorted by used size in descending order.
    SELECT * FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS ORDER BY used_size DESC
    You can look at the first rows to understand which are the most expensive columns in your tabular model. The interesting data provided are:
    - TABLE_ID: the name of the object - it can also be a dictionary or an index
    - COLUMN_ID: the name of the column the object belongs to - you may also see ID_TO_POS and POS_TO_ID, which refer to internal indexes
    - RECORDS_COUNT: the number of rows in the column
    - USED_SIZE: the memory used for the object
    By looking at the ratio between USED_SIZE and RECORDS_COUNT you can understand what you can do in order to optimize your tabular model. Your options are:
    - Remove the column. Yes, if it contains data you will never use in a query, simply remove the column from the tabular model.
    - Change granularity. If you are tracking time and you included milliseconds but seconds would be enough, round the data source column to the nearest second. If you have a floating point number but two decimals are good enough (e.g. a temperature), round the number to the nearest decimal that is relevant to you.
    - Split the column. Create two or more columns that have to be combined together in order to produce the original value. This technique is described in the VertiPaq optimization article.
    - Sort the table by that column. When you read the data source, you might consider sorting data by this column so that the compression will be more efficient. However, this technique works better on columns that don't have too many distinct values, and you will probably move the problem to another column. Sorting data starting from the lower-density columns (those with a small number of distinct values) and moving to the higher-density columns (those with high cardinality) is the technique that provides the best compression ratio.
    After the optimization you should be able to reduce the used size and improve the count/size ratio you measured before. If you are interested in a longer discussion about internal storage in VertiPaq and you want to understand why this approach can save you space (and time), you can attend my 24 Hours of PASS session "VertiPaq Under the Hood" on March 21 at 08:00 GMT.

    Read the article

  • Fast Data - Big Data's achilles heel

    - by thegreeneman
    At OOW 2013, in Mark Hurd and Thomas Kurian's keynote, they discussed Oracle's Fast Data software solution stack and a number of customers deploying Oracle's Big Data / Fast Data solutions, in particular Oracle's NoSQL Database. Since that time, there have been a large number of requests seeking clarification on how the Fast Data software stack works together to deliver on the promise of real-time Big Data solutions. Fast Data is a software solution stack that deals with one aspect of Big Data: high velocity. The software in the Fast Data solution stack involves three key pieces and their integration: Oracle Event Processing, Oracle Coherence, and Oracle NoSQL Database. All three of these technologies address a high-throughput, low-latency data management requirement. Oracle Event Processing enables continuous query to filter the Big Data fire hose, enables intelligent chained events to drive real-time service invocation, and augments the data stream to provide Big Data enrichment. Extended SQL syntax allows the definition of sliding windows of time so that SQL statements can look for triggers on events, such as the breach of a weighted moving average on a real-time data stream. Oracle Coherence is a distributed grid caching solution which is used to provide very low latency access to cached data when the data is too big to fit into a single process; the data is spread around a grid architecture to provide memory-speed access. It also has some special capabilities to deploy remote behavioral execution for "near data" processing. The Oracle NoSQL Database is designed to ingest simple key-value data at a controlled throughput rate while providing data redundancy in a cluster to facilitate highly concurrent, low-latency reads - for example, when large sensor networks are generating data that needs to be captured while analysts are simultaneously extracting the data using range-based queries for upstream analytics. Another example might be storing cookies from user web sessions for ultra-low-latency user profile management, also leveraging that data with holistic MapReduce operations on your Hadoop cluster to do segmented site analysis. Understand how NoSQL plays a critical role in Big Data capture and enrichment while simultaneously providing a low-latency and scalable data management infrastructure through clustered, always-on, parallel processing in a shared-nothing architecture. Learn how easily a NoSQL cluster can be deployed to provide essential services in industry-specific Fast Data solutions. See these technologies work together in a demonstration highlighting the salient features of these Fast Data enabling technologies in a location-based personalization service. The question then becomes: how do these things work together to deliver an end-to-end Fast Data solution? The answer is that while different applications will exhibit unique requirements that may drive the need for one or the other of these technologies, often when it comes to Big Data you may need to use them together. You may need the memory latencies of the Coherence cache but have too much data to cache, so you use a combination of Coherence and Oracle NoSQL to handle extreme-speed cache overflow and retrieval. Here is a great reference on how these two technologies are integrated and work together: Coherence & Oracle NoSQL Database. On the stream processing side, it is similar to the Coherence case.
    As your sliding windows get larger, holding all the data in the stream can become difficult, and out-of-band data may need to be offloaded into persistent storage. OEP needs an extreme-speed database like Oracle NoSQL Database to help it continue to perform for the real-time loop while dealing with persistent spill in the data stream. Here is a great resource to learn more about how OEP and Oracle NoSQL Database are integrated and work together: OEP & Oracle NoSQL Database.

    Read the article

  • DBA Best Practices - A Blog Series: Episode 2 - Password Lists

    - by Argenis
    Digital World, Digital Locks
    One of the biggest digital assets that any company has is its secrets. These include passwords, key rings, certificates, and any other digital asset used to protect another asset from tampering or unauthorized access. As a DBA, you are very likely to manage some of these assets for your company - and your employer trusts you with keeping them safe. Probably one of the most important of these assets are passwords. As you well know, they can be used anywhere: for service accounts, credentials, proxies, linked servers, DTS/SSIS packages, symmetrical keys, private keys, etc., etc. Have you given some thought to what you're doing to keep these passwords safe? Are you backing them up somewhere? Who else besides you can access them?
    Good-Ol' Post-It Notes Under Your Keyboard
    If you have a password-protected Excel sheet for your passwords, I have bad news for you: Excel's level of encryption is good for your grandma's budget spreadsheet, not for a list of enterprise passwords. I will try to summarize the main point of this best practice in one sentence: You should keep your passwords in an encrypted, access- and version-controlled, backed-up, well-known shared location that every DBA on your team is aware of, and maintain copies of this password "database" on your DBAs' workstations. Now I have to break down that statement for you:
    - Encrypted: what's the point of saving your passwords in a file that any Windows admin with enough privileges can read?
    - Access controlled: This one is pretty much self-explanatory.
    - Version controlled: Passwords change (and I'm really hoping you do change them), and version control lets you track what a previous password was if the utility you've chosen doesn't handle that for you.
    - Backed-up: You want a safe copy of the password list to be kept offline, preferably in long-term storage, with relative ease of restoring.
    - Well-known shared location: This is critical for teams: what good is a password list if only one person in the team knows where it is?
    I have seen multiple examples of this that work well. They all start with an encrypted database. Certainly you could leverage SQL Server's native encryption solutions like cell encryption for this. I have found such implementations to be impractical, for the most part.
    Enter The World Of Utilities
    There are a myriad of open source/free software solutions to help you here. One of my favorites is KeePass, which creates encrypted files that can be saved to a network share, Sharepoint, etc. KeePass has UIs for most operating systems, including Windows, MacOS, iOS, Android and Windows Phone. Other solutions I've used before worth mentioning include PasswordSafe and 1Password, the latter being a paid solution - but wildly popular on mobile devices. There are, of course, even more "enterprise-level" solutions available from 3rd party vendors. The truth is that most of the customers that I work with don't need that level of protection for their digital assets, and something like a KeePass database on Sharepoint suits them very well. What are you doing to safeguard your passwords? Leave a comment below, and join the discussion! Cheers, -Argenis

    Read the article

  • trying to setup wireless

    - by JohnMerlino
    I'm trying to set up wireless on vostro 1520 dell laptop, with latest Ubuntu install. Here's the output of some of the commands that I was told to run: lshw -C network viggy@ubuntu:~$ lshw -C network WARNING: you should run this program as super-user. *-network description: Ethernet interface product: RTL8111/8168B PCI Express Gigabit Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:08:00.0 logical name: eth0 version: 03 serial: 00:24:e8:da:84:25 size: 100Mbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8168d-1.fw ip=192.168.2.6 latency=0 multicast=yes port=MII speed=100Mbit/s resources: irq:47 ioport:3000(size=256) memory:f6004000-f6004fff memory:f6000000-f6003fff memory:f6020000-f603ffff *-network description: Network controller product: BCM4312 802.11b/g LP-PHY vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:0e:00.0 version: 01 width: 64 bits clock: 33MHz capabilities: bus_master cap_list configuration: driver=b43-pci-bridge latency=0 resources: irq:18 memory:fa000000-fa003fff *-network DISABLED description: Wireless interface physical id: 1 logical name: wlan0 serial: 0c:60:76:05:ee:74 capabilities: ethernet physical wireless configuration: broadcast=yes driver=b43 driverversion=3.2.0-29-generic firmware=N/A multicast=yes wireless=IEEE 802.11bg lspci 00:00.0 Host bridge: Intel Corporation Mobile 4 Series Chipset Memory Controller Hub (rev 07) 00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07) 00:02.1 Display controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07) 00:1a.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 03) 00:1a.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 03) 00:1a.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 03) 00:1a.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 03) 00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 03) 00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 03) 00:1c.1 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 2 (rev 03) 00:1c.2 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 3 (rev 03) 00:1c.3 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 4 (rev 03) 00:1c.4 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 5 (rev 03) 00:1c.5 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 6 (rev 03) 00:1d.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 03) 00:1d.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 03) 00:1d.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 03) 00:1d.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 03) 00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev 93) 00:1f.0 ISA bridge: Intel Corporation ICH9M LPC Interface Controller (rev 03) 00:1f.2 SATA controller: Intel Corporation 82801IBM/IEM (ICH9M/ICH9M-E) 4 port SATA Controller [AHCI mode] (rev 03) 00:1f.3 
SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 03) 08:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 03) 0e:00.0 Network controller: Broadcom Corporation BCM4312 802.11b/g LP-PHY (rev 01) 1a:00.0 FireWire (IEEE 1394): O2 Micro, Inc. Device 10f7 (rev 01) 1a:00.1 SD Host controller: O2 Micro, Inc. Device 8120 (rev 01) 1a:00.2 Mass storage controller: O2 Micro, Inc. Device 8130 (rev 01) iwconfig lo no wireless extensions. wlan0 IEEE 802.11bg ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=0 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:on eth0 no wireless extensions. At this point in time, I don't have wireless.

    Read the article

  • New PeopleSoft HCM 9.1 On Demand Standard Edition provides a complete set of IT services at a low, predictable monthly cost

    - by Robbin Velayedam
    At Oracle Open World last month, Oracle announced that we are extending our On Demand offerings with the general availability of PeopleSoft On Demand Standard Edition. Standard Edition represents Oracle’s commitment to providing customers a choice of solutions, technology, and deployment options commensurate with their business needs and future growth. The Standard Edition offering complements the traditional On Demand offerings (Enterprise and Professional Editions) by focusing on a low, predictable monthly cost model that scales with the size of your business.   As part of Oracle's open cloud strategy, customers can freely move PeopleSoft licensed applications between on premise and the various  on demand options as business needs arise.    In today’s business climate, aggressive and creative business objectives demand more of IT organizations. They are expected to provide technology-based solutions to streamline business processes, enable online collaboration and multi-tasking, facilitate data mining and storage, and enhance worker productivity. As IT budgets remain tight in a recovering economy, the challenge becomes how to meet these demands with limited time and resources. One way is to eliminate the variable costs of projects so that your team can focus on the high priority functions and better predict funding and resource needs two to three years out. Variable costs and changing priorities can derail the best laid project and capacity plans. The prime culprits of variable costs in any IT organization include disaster recovery, security breaches, technical support, and changes in business growth and priorities. Customers have an immediate need for solutions that are cheaper, predictable in cost, and flexible enough for long-term growth or capacity changes. The Standard Edition deployment option fulfills that need by allowing customers to take full advantage of the rich business functionality that is inherent to PeopleSoft HCM, while delegating all application management responsibility – such as future upgrades and product updates – to Oracle technology experts, at an affordable and expected price. Standard Edition provides the advantages of the secure Oracle On Demand hosted environment, the complete set of PeopleSoft HCM configurable business processes, and timely management of regular updates and enhancements to the application functionality and underlying technology. Standard Edition has a convenient monthly fee that is scalable by number of employees, which helps align the customer’s overall cost of ownership with its size and anticipated growth and business needs. In addition to providing PeopleSoft HCM applications' world class business functionality and Oracle On Demand's embassy-grade security, Oracle’s hosted solution distinguishes itself from competitors by offering customers the ability to transition between different deployment and service models at any point in the application ownership lifecycle. As our customers’ business and economic climates change, they are free to transition their applications back to on-premise at any time. HCM On Demand Standard Edition is based on configurability options rather than customizations, requiring no additional code to develop or maintain. This keeps the cost of ownership low and time to production less than a month on average. Oracle On Demand offers the highest standard of security and performance by leveraging a state-of-the-art data center with dedicated databases, servers, and secured URL all within a private cloud. 
    Customers will not share databases, environments, platforms, or access portals with other customers, because we value how mission critical your data are to your business. Oracle On Demand also provides a full breadth of disaster recovery services, giving customers the peace of mind that their data are secure and that backup operations are in place to keep their businesses up and running in the case of an emergency. Currently we have over 50 PeopleSoft customers who have entrusted the management of their applications to us through Oracle On Demand. If you are a customer interested in learning more about PeopleSoft HCM 9.1 Standard Edition and how it can help your organization minimize your variable IT costs and free up your resources to work on other business initiatives, contact Oracle or your Account Services Representative today.

    Read the article

  • 4 Key Ingredients for the Cloud

    - by Kellsey Ruppel
    It's a short week here with the US Thanksgiving Holiday. So, before we put on our stretch pants and get ready to belly up to the dinner table for turkey, stuffing and mashed potatoes, let's spend a little time this week talking about the Cloud (kind of like the feathery whipped goodness that tops the infamous Thanksgiving pumpkin pie!) But before we dive into the Cloud, let's do a side by side comparison of the key ingredients for each.
    Cloud                     Whipped Cream
    Application Integration   1 cup heavy cream
    Security                  1/4 cup sugar
    Virtual I/O               1 teaspoon vanilla
    Storage                   Chilled Bowl
    It's no secret that millions of people are connected to the Internet. And it also probably doesn't come as a surprise that a lot of those people are connected on social networking sites. Social networks have become an excellent platform for sharing and communication that reflects real world relationships and they play a major part in the everyday lives of many people. Facebook, Twitter, Pinterest, LinkedIn, Google+ and hundreds of others have transformed the way we interact and communicate with one another. Social networks are becoming more than just an online gathering of friends. They are becoming a destination for ideation, e-commerce, and marketing. But it doesn't just stop there. Some organizations are utilizing social networks internally, integrated with their business applications and processes, and the possibility of social media and cloud integration is compelling. Forrester alone estimates enterprise cloud computing to grow to over $240 billion by 2020. It's hard to find any current IT project today that is NOT considering cloud-based deployments. Security and quality of service concerns are no longer at the forefront; rather, it's about focusing on the right mix of capabilities for the business. Cloud vs. On-Premise? Policies & governance models? Social in the cloud? Cloud's increasing sophistication, security in applications, mobility, transaction processing and social capabilities make it an attractive way to manage information. And Oracle offers all of this through the Oracle Cloud and Oracle Social Network. Oracle Social Network is a secure private network that provides a broad range of social tools designed to capture and preserve information flowing between people, enterprise applications, and business processes. By connecting you with your most critical applications, Oracle Social Network provides contextual, real-time communication within and across enterprises. With Oracle Social Network, you and your teams have the tools you need to collaborate quickly and efficiently, while leveraging the organization's collective expertise to make informed decisions and drive business forward. Oracle Social Network is available as part of a portfolio of application and platform services within the Oracle Cloud. Oracle Cloud offers self-service business applications delivered on an integrated development and deployment platform with tools to rapidly extend and create new services. Oracle Social Network is pre-integrated with the Fusion CRM Cloud Service and the Fusion HCM Cloud Service within the Oracle Cloud. If you are looking for something to watch as you veg on the couch in a post-turkey dinner hangover, you might consider watching these how-to videos! And yes, it is perfectly ok to have that 2nd piece of pie.

    Read the article

  • Asynchrony in C# 5 (Part II)

    - by javarg
    This article is a continuation of the series on the asynchronous features included in the new Async CTP preview for the next versions of C# and VB. Check out Part I for more information. The series covers: asynchronous functions, TPL Dataflow, and the task-based asynchronous pattern. So, let's continue with TPL Dataflow.
    Part II: TPL Dataflow
    Definition (quoting the Async CTP documentation): "TPL Dataflow (TDF) is a new .NET library for building concurrent applications. It promotes actor/agent-oriented designs through primitives for in-process message passing, dataflow, and pipelining. TDF builds upon the APIs and scheduling infrastructure provided by the Task Parallel Library (TPL) in .NET 4, and integrates with the language support for asynchrony provided by C#, Visual Basic, and F#." This means: data manipulation processed asynchronously. "TPL Dataflow is focused on providing building blocks for message passing and parallelizing CPU- and I/O-intensive applications". Data manipulation is another hot area when designing asynchronous and parallel applications: how do you synchronize data access in a parallel environment? How do you avoid concurrency issues? How do you get notified when data is available? How do you control how much data is waiting to be consumed? Etc.
    Dataflow Blocks
    TDF provides data and action processing blocks. Imagine having preconfigured data processing pipelines to choose from, depending on the type of behavior you want. The most basic block is the BufferBlock<T>, which provides storage for some kind of data (instances of <T>). So, let's review the available data processing blocks. Blocks are categorized into three groups: buffering blocks, executor blocks, and joining blocks. Think of them as electronic circuitry components :)
    1. BufferBlock<T>: a FIFO (First In, First Out) queue. You can Post data to it and then Receive it synchronously or asynchronously. It synchronizes data consumption for only one receiver at a time (you can have many receivers, but only one will actually process each item).
    2. BroadcastBlock<T>: the same FIFO queue for messages (instances of <T>), but it links the receiving event to all consumers (it makes the data available for consumption to N consumers). The developer can provide a function to make a copy of the data if necessary.
    3. WriteOnceBlock<T>: stores only one value, and once it has been set it can never be replaced or overwritten (immutable after being set). As with BroadcastBlock<T>, all consumers can obtain a copy of the value.
    4. ActionBlock<TInput>: an executor block that lets us define an operation to be executed when posting data to the queue, so we must pass in a delegate/lambda when creating the block. Posting data results in an execution of the delegate for each item in the queue. You can also specify how many parallel executions to allow (degree of parallelism).
    5. TransformBlock<TInput, TOutput>: an executor block designed to transform each input; that is why it defines an output parameter. It ensures messages are processed and delivered in order.
    6. TransformManyBlock<TInput, TOutput>: similar to TransformBlock, but produces one or more outputs from each input.
    7. BatchBlock<T>: combines N single items into one batch item (it buffers and batches inputs).
    8. JoinBlock<T1, T2, …>: generates tuples from all inputs (it aggregates inputs). Inputs can be of any type you want (T1, T2, etc.).
    9. BatchJoinBlock<T1, T2, …>: aggregates tuples of collections. It generates a collection for each type of input and then creates a tuple to contain the collections (Tuple<IList<T1>, IList<T2>>).
    Next time I will show some examples of usage for each TDF block.
    * Images taken from Microsoft's Async CTP documentation.
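    The usage examples are promised for a follow-up post; in the meantime, here is a small, hedged C# sketch of how two of these blocks compose, written against the released System.Threading.Tasks.Dataflow API (so details may differ slightly from the CTP bits): a TransformBlock feeding an ActionBlock with a bounded degree of parallelism.

        using System;
        using System.Threading.Tasks.Dataflow;

        class TdfSketch
        {
            static void Main()
            {
                // Executor block that produces one output per input (here: the square of each number).
                var square = new TransformBlock<int, int>(n => n * n);

                // Executor block that consumes the squared values, at most 2 at a time.
                var print = new ActionBlock<int>(
                    n => Console.WriteLine("Got {0}", n),
                    new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 2 });

                // Wire the pipeline together and propagate completion downstream.
                square.LinkTo(print, new DataflowLinkOptions { PropagateCompletion = true });

                for (int i = 1; i <= 5; i++)
                {
                    square.Post(i);          // push data into the head of the pipeline
                }

                square.Complete();           // signal that no more input is coming
                print.Completion.Wait();     // wait until every item has been processed
            }
        }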

    Read the article

  • PHP-FPM stops responding and dies [migrated]

    - by user12361
    I'm running Drupal 6 with Nginx 1.5.1 and PHP-FPM (PHP 5.3.26) on a 1GB single core VPS with 3GB of swap space on SSD storage. I just switched from shared hosting to this unmanaged VPS because my site was getting too heavy, so I'm still learning the ropes. I have moderately high traffic, I don't really monitor it closely but Google Adsense usually record close to 30K page views/day. I usually have 50 to 80 authenticated users logged in and a few hundred more anonymous users hitting the Boost static HTML cache at any given moment. The problem I'm having is that PHP-FPM frequently stops responding, resulting in Nginx 502 or 504 errors. I swear I have read every page on the internet about this issue, which seems fairly common, and I've tried endless combinations of configurations, and I can't find a good solution. After restarting Nginx and PHP-FPM, the site runs really fast for a while, and then without warning it simply stops responding. I get a white screen while the browser waits on the server, and after about 30 seconds to a minute it throws an Nginx 502 or 504 error. Sometimes it runs well for 2 minutes, sometimes 5 minutes, sometimes 5 hours, but it always ends up hanging. When I find the server in this state, there is still plenty of free memory (500MB or more) and no major CPU usage, the control and worker PHP-FPM processes are still present, and the server is still pingable and usable via SSH. A reload of PHP-FPM via the init script revives it again. The hangups don't seem to correspond to the amount of traffic, because I observed this behavior consistently when I was testing this configuration on a development VPS with no traffic at all. I've been constantly tweaking the settings, but I can't definitively eliminate the problem. I set Nginx workers to just 1. In the PHP-FPM config I have tried all three of the process managers. "Dynamic" is definitely the least reliable, consistently hanging up after only a few minutes. "Static" also has been unreliable and unpredictable. The least buggy has been "ondemand", but even that is failing me, sometimes after as much as 12 to 24 hours. But I can't leave the server unattended because PHP-FPM dies and never comes back on its own. I tried adjusting the pm.max_children value from as low as 3 to as high as 50, doesn't make a lot of difference, but I currently have it at 10. Same thing for the spare servers values. I also have set pm.max_requests anywhere from 30 to unlimited, and it doesn't seem to make a difference. According to the logs, the PHP-FPM processes are not exiting with SIGSEGV or SIGBUS, but rather with SIGTERM. I get a lot of lines like: WARNING: [pool www] child 3739, script '/var/www/drupal6/index.php' (request: "GET /index.php") execution timed out (38.739494 sec), terminating and: WARNING: [pool www] child 3738 exited on signal 15 (SIGTERM) after 50.004380 seconds from start I actually found several articles that recommend doing a graceful reload of PHP-FPM via cron every few minutes or hours to circumvent this issue. So that's what I did, "/etc/init.d/php-fpm reload" every 5 minutes. So far, it's keeping the lights on. But it feels like a dreadful hack. Is PHP-FPM really that unreliable? Is there anything else I can do? Thanks a lot!
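    For readers hitting the same symptoms, the directives discussed above all live in the PHP-FPM pool configuration. The following is only an illustrative sketch of the kind of conservative pool settings described in the post - the file path and the values are assumptions to adapt to your own workload, not a known fix:

        ; e.g. /etc/php5/fpm/pool.d/www.conf (path varies by distribution) - illustrative values only
        [www]
        pm = ondemand                     ; spawn workers only when requests arrive
        pm.max_children = 10              ; hard cap on concurrent PHP workers
        pm.process_idle_timeout = 10s     ; recycle idle workers (used by the ondemand manager)
        pm.max_requests = 500             ; restart each worker after N requests to limit leaks
        request_terminate_timeout = 60s   ; the timeout behind the "execution timed out ... terminating" warnings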

    Read the article
