Search Results

Search found 9318 results on 373 pages for 'rescue disk'.


  • ArchBeat Link-o-Rama Top 10 for August 19-26, 2012

    - by Bob Rhubart
    The Top 10 most popular items shared via the OTN ArchBeat Facebook page for the week of August 19-26, 2012. Now Available: Oracle SQL Developer 3.2 (3.2.09.23) The latest release of Oracle SQL Developer includes UI enhancements, 12c database support, and bug fixes. ADF Tutorial Chapter 3: Creating a Master-Detail taskflow | Yannick Ongena Oracle ACE Yannick Ongena continues his ADF tutorial with a chapter devoted to the view layer and using the data control to build pages that allow users to update reference data. GlassFish Community Event at JavaOne 2012 Don't miss out on this exclusive GlassFish Community Event on Sunday, September 30th from 11:00 a.m. – 1:00 p.m. in Moscone South. Register Now! Part of JavaOne 2012. Oracle BI 11g Book Authors – Podcast #9 | Art of Business Intelligence In this home-grown podcast, authors Christian Screen, Haroun Khan, and Adrian Ward talk about their new book, "Oracle Business Intelligence Enterprise Edition 11g: A Hands-On Tutorial," about their sessions at Oracle OpenWorld, and about their ORACLENERD t-shirts. Oracle Service Bus duplicate message check using Coherence | Jan van Zoggel "Given the fact that every message on our ESB has a unique messageID element in the SOAP header we could store this on disk, database or in memory," says Jan van Zoggel. "With the help of Oracle Coherence this last option, in memory, is relatively simple." Even simpler with Jan's detailed instructions. Oracle Technology Network Architect Day - Boston - Sept 12 There are easier ways to increase your IT brainpower. Skip the electrodes and register for Oracle Technology Network Architect Day in Boston, September 12, 2012. This free event includes 8 technical sessions, panel Q&A, roundtable discussions—and a free lunch. 8:00 a.m. – 5:00 p.m. at the Boston Marriott Burlington, One Burlington Mall Road, Burlington, MA 01803. Oracle BPM enable BAM | Peter Paul van de Beek "BAM enables you to make decisions based on real-time information gathered from your running processes," says Peter Paul van de Beek. "With BPMN processes you can use the standard Business Indicators that the BPM Suite offers you and use them with BAM without much extra effort." Sample Application for Switching Application Module Data Sources | Andrejus Baranovskis A sample application and how-to guide from Oracle ACE Director and ADF expert Andrejus Baranovskis. ORCLville: Some Basic BI Thoughts "If we'd stop to consider what business intelligence really is, many of us might grow a different perspective about how we implement enterprise apps," says Oracle ACE Director Floyd Teter. "What if we implemented with an eye to what kind of information we'd like to get from our enterprise apps?" Oracle VM VirtualBox 4.1.20 released | Oracle's Virtualization Blog Oracle VM VirtualBox 4.1.20 was just released at the community and Oracle download sites, reports the Fat Bloke. This is a maintenance release containing bug fixes and stability improvements. Thought for the Day "The programmer, like the poet, works only slightly removed from pure thought-stuff. He builds his castles in the air, from air, creating by exertion of the imagination. Few media of creation are so flexible, so easy to polish and rework, so readily capable of realizing grand conceptual structures." — Frederick P. Brooks Source: SoftwareQuotes

    Read the article

  • Keeping an Eye on Your Storage

    - by Fatherjack
    There are plenty of resources that advise you about looking for signs that your storage hardware is having problems. SQL Server Alerts for 823, 824 and 825 are covered here by Paul Randal of SQL Skills: http://www.sqlskills.com/blogs/paul/a-little-known-sign-of-impending-doom-error-825/ and here by me: https://www.simple-talk.com/blogs/2011/06/27/alerts-are-good-arent-they/. Now, until very recently I wasn’t aware that there was a different way to track the 823 and 824 errors. It was by complete chance that I happened to be searching about in the msdb database when I found the suspect_pages table. Running a query against it I got zero rows. This, as it turns out, is a good thing. Highlighting the table name and pressing F1 got me nowhere – is it just me or does Books Online fail to load properly for no obvious reason sometimes? So I typed the table name into the search bar and got my local version of http://msdn.microsoft.com/en-us/library/ms174425.aspx. From that we get the following description: Contains one row per page that failed with a minor 823 error or an 824 error. Pages are listed in this table because they are suspected of being bad, but they might actually be fine. When a suspect page is repaired, its status is updated in the event_type column. So, in the table we would, on healthy hardware, expect to see zero rows, but on disks that are having problems the event_type column would show us what is going on. Where there are suspect pages on the disk the rows would have an event_type value of 1, 2 or 3; where those suspect pages have been restored, repaired or deallocated by DBCC, the value would be 4, 5 or 7. Having this table means that we can set up SQL Monitor to check the status of our hardware, as we can create a custom metric based on the query below: USE [msdb] go SELECT COUNT(*) FROM [dbo].[suspect_pages] AS sp All we need to do is set the metric to collect this value and set an alert to email when the value is not 0, and we are then able to let SQL Monitor take care of our storage. Note that the suspect_pages table does not have any updates concerning Error 825, which the links at the top of the page cover in more detail. I would suggest that you set SQL Monitor to alert on the suspect_pages table in addition to taking other measures to look after your storage hardware, and not have it as your only precaution. Microsoft actually pass ownership and administration of the suspect_pages table over to the database administrator (Manage the suspect_pages Table (SQL Server)) and in a surprising move (to me at least) advise DBAs to actively update and archive data in it. The table will only ever contain a maximum of 1000 rows and once full, new rows will not be added. Keeping an eye on this table is pretty important, although in my opinion, if you get to 1000 rows in this table and are not already waiting for new disks to be added to your server you are doing something wrong; but if you have 1000 rows in there then you need to move data out quickly, because you may be missing some important events on your server.
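    For reference, here is the metric query in a more readable form, together with a hedged illustration of a breakdown by event_type based on the Books Online description quoted above (the detail columns are those documented for msdb.dbo.suspect_pages):

      USE msdb;
      GO

      -- Value for the custom metric: 0 on healthy storage
      SELECT COUNT(*) AS suspect_page_count
      FROM dbo.suspect_pages;

      -- Optional detail: event_type 1-3 = suspect, 4/5/7 = restored/repaired/deallocated
      SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
      FROM dbo.suspect_pages
      ORDER BY last_update_date DESC;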

    Read the article

  • questions about dual-boot install Ubuntu 10.04 and Windows 7 on same hard drive

    - by Tim
    I'd like to dual-boot install Ubuntu 10.04 on the same hard drive as Windows 7, which has already been installed. As to sources on the internet: I found a website, iinet, about dual-boot installation of Ubuntu 10.10 and Windows 7 on the same hard drive, which I think is more specific than the one on the Ubuntu Community wiki, which does not name specific versions of the OSes. Since I am installing Ubuntu 10.04 instead of 10.10, my question is whether their installers are the same or almost the same, and whether I can follow iinet for my dual-boot installation? Or are there better websites for information about dual-boot installation of Ubuntu 10.04 and Windows 7? As to shrinking Windows partitions to make free space for Ubuntu partitions: iinet uses the partition software in Ubuntu's installer to shrink the Windows partition. But I saw on many websites that the partition software in Ubuntu's installer cannot guarantee shrinking Windows 7 partitions successfully, so they recommended in general to shrink Windows partitions under Windows itself using its own tools. For example, in Ubuntu Community, it says: Some people think that the Windows partition must be resized only from within Windows Vista and Windows 7 using the shrink/resize option. ... If you use GParted Partition Editor in the Ubuntu Live CD be careful. So I was wondering which way to go in my situation? As to a partition for bootloader files: In iinet, I don't see a partition created and dedicated to boot files (i.e. Grub files). However, I saw many websites strongly suggesting using a boot partition for Grub files, especially for the purpose of separation and protection from installed OS files. I was wondering which way I should choose and why? As to installing the bootloader Grub: in iinet, I see that to install Grub it only needs to specify the hard drive device for bootloader installation. However, in ubuntuguide (for more than 2 OSes and Ubuntu 9.04), some commands need to be run in order to put Grub configuration files in the MBR and the OS partition, for the chain-load process (where to find the files for the next stage). In Ubuntu Community, there are some related sentences which I don't quite understand how to apply in practice: the only thing in your computer outside of Ubuntu that needs to be changed is a small code in the MBR (Master Boot Record) of the first hard disk. The MBR code is changed to point to the boot loader in Ubuntu. If you have a problem with changing the MBR code, you might prefer to just install the code for pointing to GRUB to the first sector of your Ubuntu partition instead. If you do that during the Ubuntu installation process, then Ubuntu won't boot until you configure some other boot manager to point to Ubuntu's boot sector. Windows Vista no longer utilizes boot.ini, ntdetect.com, and ntldr when booting. Instead, Vista stores all data for its new boot manager in a boot folder. Windows Vista ships with a command line utility called bcdedit.exe, which requires administrator credentials to use. You may want to read http://go.microsoft.com/fwlink/?LinkId=112156 about it. Using a command line utility always has its learning curve, so a more productive and better job can be done with a free utility called EasyBCD, developed and matured since the Vista beta days. EasyBCD is user friendly and many Vista users highly recommend EasyBCD. Given what is quoted above, I was wondering how exactly I should change the MBR code to point to the bootloader in Ubuntu? And if I fail to change the MBR code, are the suggested alternative boot managers bcdedit.exe and EasyBCD in Windows? With the three sources above, which one shall I follow? Thanks and regards
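    As a hedged illustration of the two options the quoted passage describes (this is not taken from any of the three guides, and the device names /dev/sda and /dev/sda5 are placeholders to adapt to your own layout, e.g. after checking with sudo fdisk -l):

      # Option 1: install GRUB to the MBR of the first disk (the usual dual-boot setup)
      sudo grub-install /dev/sda
      sudo update-grub

      # Option 2: install GRUB only to the boot sector of the Ubuntu partition
      # (--force is required because GRUB discourages partition installs); a
      # Windows-side boot manager such as EasyBCD must then chain-load that partition
      sudo grub-install --force /dev/sda5
      sudo update-grub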

    Read the article

  • How to remove synaptic without installing all the unwanted packages?

    - by Jay
    I am trying to uninstall synaptic. I prefer using apt-get and other command line tools to manage my packages. So I do not need synaptic and the software manager. I'm trying to remove both of them using apt-get. Its a new box. Recently installed Linux Mint mate 15. After installation, the only thing I did was, sudo apt-get update and sudo apt-get dist-upgrade After that, I did this command for removing synaptic, sudo apt-get remove --purge synaptic But this gives me a very weird output, Reading package lists... Done Building dependency tree Reading state information... Done The following packages were automatically installed and are no longer required: apturl-kde icoutils kate-data katepart kde-runtime kde-runtime-data kdelibs-bin kdelibs5-data kdelibs5-plugins kdesudo kdoctools kubuntu-debug-installer libattica0.4 libdlrestrictions1 libkactivities-bin libkactivities-models1 libkactivities6 libkatepartinterfaces4 libkcmutils4 libkde3support4 libkdeclarative5 libkdecore5 libkdesu5 libkdeui5 libkdewebkit5 libkdnssd4 libkemoticons4 libkfile4 libkhtml5 libkidletime4 libkio5 libkjsapi4 libkjsembed4 libkmediaplayer4 libknewstuff3-4 libknotifyconfig4 libkntlm4 libkparts4 libkpty4 libkrosscore4 libktexteditor4 libkxmlrpcclient4 libnepomuk4 libnepomukcore4abi1 libnepomukquery4a libnepomukutils4 libntrack-qt4-1 libntrack0 libphonon4 libplasma3 libpolkit-qt-1-1 libpoppler-qt4-4 libqapt2 libqapt2-runtime libqca2 libqt4-qt3support libsolid4 libsoprano4 libstreamanalyzer0 libstreams0 libthreadweaver4 libvirtodbc0 nepomuk-core nepomuk-core-data ntrack-module-libnl-0 odbcinst odbcinst1debian2 oxygen-icon-theme phonon phonon-backend-gstreamer plasma-scriptengine-javascript qapt-batch shared-desktop-ontologies soprano-daemon virtuoso-minimal virtuoso-opensource-6.1-bin virtuoso-opensource-6.1-common Use 'apt-get autoremove' to remove them. 
The following extra packages will be installed: apturl-kde icoutils kate-data katepart kde-runtime kde-runtime-data kdelibs-bin kdelibs5-data kdelibs5-plugins kdesudo kdoctools kubuntu-debug-installer libattica0.4 libdlrestrictions1 libkactivities-bin libkactivities-models1 libkactivities6 libkatepartinterfaces4 libkcmutils4 libkde3support4 libkdeclarative5 libkdecore5 libkdesu5 libkdeui5 libkdewebkit5 libkdnssd4 libkemoticons4 libkfile4 libkhtml5 libkidletime4 libkio5 libkjsapi4 libkjsembed4 libkmediaplayer4 libknewstuff3-4 libknotifyconfig4 libkntlm4 libkparts4 libkpty4 libkrosscore4 libktexteditor4 libkxmlrpcclient4 libnepomuk4 libnepomukcore4abi1 libnepomukquery4a libnepomukutils4 libntrack-qt4-1 libntrack0 libphonon4 libplasma3 libpolkit-qt-1-1 libpoppler-qt4-4 libqapt2 libqapt2-runtime libqca2 libqt4-qt3support libsolid4 libsoprano4 libstreamanalyzer0 libstreams0 libthreadweaver4 libvirtodbc0 libxml2-utils nepomuk-core nepomuk-core-data ntrack-module-libnl-0 odbcinst odbcinst1debian2 oxygen-icon-theme phonon phonon-backend-gstreamer plasma-scriptengine-javascript qapt-batch shared-desktop-ontologies soprano-daemon virtuoso-minimal virtuoso-opensource-6.1-bin virtuoso-opensource-6.1-common Suggested packages: libterm-readline-gnu-perl libterm-readline-perl-perl djvulibre-bin finger hspell libqca2-plugin-cyrus-sasl libqca2-plugin-gnupg libqca2-plugin-ossl phonon-backend-vlc phonon-backend-xine phonon-backend-mplayer The following packages will be REMOVED: aptoncd* apturl* mintupdate* mintwelcome* synaptic* The following NEW packages will be installed: apturl-kde icoutils kate-data katepart kde-runtime kde-runtime-data kdelibs-bin kdelibs5-data kdelibs5-plugins kdesudo kdoctools kubuntu-debug-installer libattica0.4 libdlrestrictions1 libkactivities-bin libkactivities-models1 libkactivities6 libkatepartinterfaces4 libkcmutils4 libkde3support4 libkdeclarative5 libkdecore5 libkdesu5 libkdeui5 libkdewebkit5 libkdnssd4 libkemoticons4 libkfile4 libkhtml5 libkidletime4 libkio5 libkjsapi4 libkjsembed4 libkmediaplayer4 libknewstuff3-4 libknotifyconfig4 libkntlm4 libkparts4 libkpty4 libkrosscore4 libktexteditor4 libkxmlrpcclient4 libnepomuk4 libnepomukcore4abi1 libnepomukquery4a libnepomukutils4 libntrack-qt4-1 libntrack0 libphonon4 libplasma3 libpolkit-qt-1-1 libpoppler-qt4-4 libqapt2 libqapt2-runtime libqca2 libqt4-qt3support libsolid4 libsoprano4 libstreamanalyzer0 libstreams0 libthreadweaver4 libvirtodbc0 libxml2-utils nepomuk-core nepomuk-core-data ntrack-module-libnl-0 odbcinst odbcinst1debian2 oxygen-icon-theme phonon phonon-backend-gstreamer plasma-scriptengine-javascript qapt-batch shared-desktop-ontologies soprano-daemon virtuoso-minimal virtuoso-opensource-6.1-bin virtuoso-opensource-6.1-common 0 upgraded, 78 newly installed, 5 to remove and 0 not upgraded. Need to get 60.9 MB of archives. After this operation, 146 MB of additional disk space will be used. Do you want to continue [Y/n]? n Abort. As you can see, apt-get is trying to install the same packages that it is asking me to autoremove. Could someone please tell me, how to uninstall synaptic properly? Or am I missing something? Just for the record, I also did, sudo apt-get autoremove --purge like it asked me to ... and this is what I got, Reading package lists... Done Building dependency tree Reading state information... Done 0 upgraded, 0 newly installed, 0 to remove and 6 not upgraded.
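    Not from the original question, but a hedged note on the behaviour shown above: the output indicates that removing synaptic makes apt drop apturl and pull in apturl-kde, which is what drags in the KDE libraries. A rough sketch of how to inspect and limit that (whether --no-install-recommends actually avoids the pull-in depends on how hard the dependency is):

      # Dry run: show what would be removed/installed without changing anything
      sudo apt-get -s remove --purge synaptic

      # Try the removal without letting apt install recommended replacements
      sudo apt-get remove --purge --no-install-recommends synaptic

      # Afterwards, clean up anything left behind that is no longer required
      sudo apt-get autoremove --purge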

    Read the article

  • Detect Unicode Usage in SQL Column

    One optimization you can make to a SQL table that is overly large is to change from nvarchar (or nchar) to varchar (or char).  Doing so will cut the size used by the data in half, from 2 bytes per character (+ 2 bytes of overhead for varchar) to only 1 byte per character.  However, you will lose the ability to store Unicode characters, such as those used by many non-English alphabets.  If the tables are storing user input, and your application is or might one day be used internationally, it's likely that using Unicode for your characters is a good thing.  However, if instead the data is being generated by your application itself or your development team (such as lookup data), and you can be certain that Unicode character sets are not required, then switching such columns to varchar/char can be an easy improvement to make. Avoid Premature Optimization If you are working with a lookup table that has a small number of rows, and is only ever referenced in the application by its numeric ID column, then you won't see any benefit to using varchar vs. nvarchar.  More generally, for small tables, you won't see any significant benefit.  Thus, if you have a general policy in place to use nvarchar/nchar because it offers more flexibility, do not take this post as a recommendation to go against this policy anywhere you can.  You really only want to act on measurable evidence that suggests that using Unicode is resulting in a problem, and that you won't lose anything by switching to varchar/char. Obviously the main reason to make this change is to reduce the amount of space required by each row.  This in turn affects how many rows SQL Server can page through at a time, and can also impact index size and how much disk I/O is required to respond to queries, etc.  If for example you have a table with 100 million records in it and this table has a column of type nchar(5), this column will use 5 * 2 = 10 bytes per row, and with 100M rows that works out to 10 bytes * 100 million = 1000 MBytes or 1GB.  If it turns out that this column only ever stores ASCII characters, then changing it to char(5) would reduce this to 5 * 1 = 5 bytes per row, and only 500MB.  Of course, if it turns out that it only ever stores the values true and false then you could go further and replace it with a bit data type which uses only 1 byte per row (100MB total). Detecting Whether Unicode Is In Use So by now you think that you have a problem and that it might be alleviated by switching some columns from nvarchar/nchar to varchar/char, but you're not sure whether you're currently using Unicode in these columns.  By definition, you should only be thinking about this for a column that has a lot of rows in it, since the benefits just aren't there for a small table, so you can't just eyeball it and look for any non-ASCII characters.  Instead, you need a query.  It's actually very simple: SELECT DISTINCT(CategoryName) FROM Categories WHERE CategoryName <> CONVERT(varchar, CategoryName) Summary Thanks to Gregg Stark for the tip.
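    A slightly expanded sketch of that detection query (Categories/CategoryName are the article's example names; the explicit length on CONVERT is an addition, since CONVERT(varchar, ...) without a length defaults to 30 characters and could truncate longer values):

      -- Any row returned contains characters that a non-Unicode type cannot represent
      SELECT DISTINCT CategoryName
      FROM Categories
      WHERE CategoryName <> CONVERT(varchar(4000), CategoryName);

      -- If nothing comes back, the switch is safe to consider, e.g.:
      -- ALTER TABLE Categories ALTER COLUMN CategoryName varchar(15);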

    Read the article

  • Why Is Hibernation Still Used?

    - by Jason Fitzpatrick
    With the increased prevalence of fast solid-state hard drives, why do we still have system hibernation? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. The Question SuperUser reader Moses wants to know why he should use hibernate on a desktop machine: I’ve never quite understood the original purpose of the Hibernation power state in Windows. I understand how it works, what processes take place, and what happens when you boot back up from Hibernate, but I’ve never truly understood why it’s used. With today’s technology, most notably with SSDs, RAM and CPUs becoming faster and faster, a cold boot on a clean/efficient Windows installation can be pretty fast (for some people, mere seconds from pushing the power button). Standby is even faster, sometimes instantaneous. Even SATA drives from 5-6 years ago can accomplish these fast boot times. Hibernation seems pointless to me [on desktop computers] when modern technology is considered, but perhaps there are applications that I’m not considering. What was the original purpose behind hibernation, and why do people still use it? Quite a few people use hibernate, so what is Moses missing in the big picture? The Answer SuperUser contributor Vignesh4304 writes: Normally hibernate mode saves your computer’s memory (this includes, for example, open documents and running applications) to your hard disk and shuts down the computer; it then uses zero power. Once the computer is powered back on, it will resume everything where you left off. You can use this mode if you won’t be using the laptop/desktop for an extended period of time, and you don’t want to close your documents. Simple Usage And Purpose: Save electric power and resume your documents. In simple terms: you will sleep, but your memories are still present. Why it’s used: Let me describe one sample scenario. Imagine your battery is low on power in your laptop, and you are working on important projects on your machine. You can switch to hibernate mode – it will result in your documents being saved, and when you power on, the actual state of your applications gets restored. Its main usage is like an emergency shutdown with an auto-resume of your documents. MagicAndre1981 highlights the reason we use hibernate every day: Because it saves the status of all running programs. I leave all my programs open and can resume working the next day very easily. Doing a real boot would require starting all programs again, loading all the same files into those programs, getting to the same place that I was at before, and putting all my windows in exactly the same place. Hibernating saves a lot of work pulling these things back up again. It’s not unusual to find computers around the office here that have been hibernated day in and day out for months without an actual full system shutdown and restart. It’s enormously convenient to freeze your work space at the exact moment you stopped working and to turn right around and resume there the next morning. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • apt-get upgrade stuck at the same package

    - by decibyte
    Current status I've started to suspect this is not an Ubuntu issue, but related to the internet connection here at my work. Until I'm sure, Im leaving my question below: Original question I'm stuck, can't upgrade my system. Running sudo apt-get upgrade gives me the following: mmm@alalunga:~$ sudo apt-get upgrade Reading package lists... Done Building dependency tree Reading state information... Done The following packages have been kept back: ginn libgrip0 linux-generic-pae linux-headers-generic-pae linux-image-generic-pae The following packages will be upgraded: apport apport-gtk bind9-host build-essential dhcp3-client dhcp3-common dnsutils eog evince evince-common firefox firefox-branding firefox-dbg firefox-globalmenu firefox-gnome-support firefox-locale-en gimp gimp-data gir1.2-totem-1.0 glib-networking glib-networking-common glib-networking-services gnupg gpgv icedtea-6-jre-cacao icedtea-6-jre-jamvm icedtea-6-plugin icedtea-netx icedtea-netx-common icedtea-plugin isc-dhcp-client isc-dhcp-common libapache2-mod-php5 libart-2.0-2 libbind9-80 libdns81 libevince3-3 libgimp2.0 libisc83 libisccc80 libisccfg82 liblwres80 libssl-dev libssl-doc libssl1.0.0 libtotem0 linux-firmware linux-libc-dev openjdk-6-jre openjdk-6-jre-headless openjdk-6-jre-lib openssl php-pear php5-cli php5-common php5-curl php5-dev php5-gd php5-mysql php5-xsl policykit-1-gnome python-apport python-django python-gst0.10 python-problem-report resolvconf thunderbird thunderbird-globalmenu thunderbird-gnome-support totem totem-common totem-mozilla totem-plugins xserver-xorg-input-synaptics 74 upgraded, 0 newly installed, 0 to remove and 5 not upgraded. Need to get 317 MB/327 MB of archives. After this operation, 1.481 kB of additional disk space will be used. Do you want to continue [Y/n]? Get:1 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB] Get:2 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB] Get:3 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB] Get:4 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB] Get:5 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB] Get:6 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB] Get:7 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB] 9% [7 openjdk-6-jre-headless 27,3 MB/27,3 MB 100%] It keeps downloading the package openjdk-6-jre-headless, then does nothing for a while (hanging on what's the last line above), then download the package again. It's at its 13th download attempt at the moment of writing. The actual downloads seem to be done just fine, but whatever it does after downloading seems to be failing. I tried removing openjdk-6, but then it wanted to install openjdk-7 instead, with the same result, hanging at openjdk-7-jre-headless instead. I also tried changing servers from my local (Danish) to the main server. No luck. It's also keeping me from upgrading alle the other packages. What to do? Update After following instructions in the answer by @lpanebr, it is now stuck at the linux-firmware package. So, maybe it's a more general problem than being related to specific package(s)? 
Although it did download some packages without problems before getting stuck at linux-firmware.
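    Not from the thread, but a few hedged things worth trying when the same archive keeps re-downloading; the pipelining option is only a guess at a proxy/connection problem, which would fit the suspicion above about the work network:

      # Throw away partial/corrupt downloads and refresh the package lists
      sudo apt-get clean
      sudo apt-get update

      # Retry with HTTP pipelining disabled, which sometimes helps behind proxies
      # that mishandle pipelined requests
      sudo apt-get -o Acquire::http::Pipeline-Depth=0 upgrade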

    Read the article

  • My collection of favourite TFS utilities

    - by Aaron Kowall
    So, you’re in charge of your company or team’s Team Foundation Server.  Wish it was easier to manage, administer, extend?  Well, here are a few utilities that I highly recommend looking at. I’ve recently had need to rebuild my laptop and upgrade my local TFS environment to TFS 2012 Update 1.  This gave me cause to enumerate some of the utilities I like to have on hand. One of the reasons I love to use TFS on projects is that it’s basically a complete ALM toolkit.  Everything from Task Management, Version Control, Build Management, Test Management, Metrics and Reporting are all there ‘in the box’.  However, no matter how complete a product set it, there are always ways to make it better.  Here are a list of utilities and libraries that are pretty generally useful.  this is not intended to be an exhaustive list of TFS extensions but rather a set that I recommend you look at.  There are many more out there that may be applicable in one scenario or another.  This set of tools should work with TFS 2012 or 2010 if you grab the right version. Most of these tools (and more) are available from the Visual Studio Gallery or CodePlex. General TFS Power Tools – This is ‘the’ collection of utilities and extensions delivered by the Product Group.  Highly recommended from here are the Best Practice Analyzer for ensuring your TFS implementation is healthy and the Team Foundation Server Backups to ensure your TFS databases are backed up correctly. TFS Administrators Toolkit – helps make updates to work item types and reports across many team projects.  Also provides visibility of disk usage by finding large files in version control or test attachments to assist in managing storage utilization. Version Control Git-TF - a set of cross-platform, command line tools that facilitate sharing of changes between TFS and Git. These tools allow a developer to use a local Git repository, and configure it to share changes with a TFS server.  Great for all Git lovers who must integrate into a TFS repository. Testing TFS 2012 Tester Power Tool – A utility for bulk copying test cases which assists in an approach for managing test cases across multiple releases.  A little plug that this utility was written and maintained by Anna Russo of Imaginet where I also work. Test Scribe - A documentation power tool designed to construct documents directly from the TFS for test plan and test run artifacts for the purpose of discussion, reporting etc. Reporting Community TFS Report Extensions - a single repository of SQL Server Reporting Services report for Team Foundation 2010 (and above).  Check out the Test Plan Status report by Imaginet’s Steve St. Jean.  Very valuable for your test managers. Builds TFS Build Manager – A great utility if you are build manager over a complex build environment with many TFS build definitions. Community TFS Build Extensions – contains many custom build activities.  Current release binaries are for TFS 2010 but many of the activities can be recompiled for use with TFS 2012. While compiling this list, I was surprised by the number of TFS utilities and extensions I no longer use/need in TFS 2012 because of the great work by the TFS team addressing many gaps since the 2010 release. Are there any utilities you depend on that I’ve missed?  I’d love to hear about them in the comments!

    Read the article

  • disks not ready in array cause mdadm to force initramfs shell

    - by RaidPinata
    Okay, this is starting to get pretty frustrating. I've read most of the other answers on this site that have anything to do with this issue but I'm still not getting anywhere. I have a RAID 6 array with 10 devices and 1 spare. The OS is on a completely separate device. At boot only three of the 10 devices in the raid are available, the others become available later in the boot process. Currently, unless I go through initramfs I can't get the system to boot - it just hangs with a blank screen. When I do boot through recovery (initramfs), I get a message asking if I want to assemble the degraded array. If I say no and then exit initramfs the system boots fine and my array is mounted exactly where I intend it to. Here are the pertinent files as near as I can tell. Ask me if you want to see anything else. # mdadm.conf # # Please refer to mdadm.conf(5) for information about this file. # # by default (built-in), scan all partitions (/proc/partitions) and all # containers for MD superblocks. alternatively, specify devices to scan, using # wildcards if desired. #DEVICE partitions containers # auto-create devices with Debian standard permissions # CREATE owner=root group=disk mode=0660 auto=yes # automatically tag new arrays as belonging to the local system HOMEHOST <system> # instruct the monitoring daemon where to send mail alerts MAILADDR root # definitions of existing MD arrays # This file was auto-generated on Tue, 13 Nov 2012 13:50:41 -0700 # by mkconf $Id$ ARRAY /dev/md0 level=raid6 num-devices=10 metadata=1.2 spares=1 name=Craggenmore:data UUID=37eea980:24df7b7a:f11a1226:afaf53ae Here is fstab # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). # # <file system> <mount point> <type> <options> <dump> <pass> # / was on /dev/sdc2 during installation UUID=3fa1e73f-3d83-4afe-9415-6285d432c133 / ext4 errors=remount-ro 0 1 # swap was on /dev/sdc3 during installation UUID=c4988662-67f3-4069-a16e-db740e054727 none swap sw 0 0 # mount large raid device on /data /dev/md0 /data ext4 defaults,nofail,noatime,nobootwait 0 0 output of cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid6 sda[0] sdd[10](S) sdl[9] sdk[8] sdj[7] sdi[6] sdh[5] sdg[4] sdf[3] sde[2] sdb[1] 23441080320 blocks super 1.2 level 6, 512k chunk, algorithm 2 [10/10] [UUUUUUUUUU] unused devices: <none> Here is the output of mdadm --detail --scan --verbose ARRAY /dev/md0 level=raid6 num-devices=10 metadata=1.2 spares=1 name=Craggenmore:data UUID=37eea980:24df7b7a:f11a1226:afaf53ae devices=/dev/sda,/dev/sdb,/dev/sde,/dev/sdf,/dev/sdg,/dev/sdh,/dev/sdi,/dev/sdj,/dev/sdk,/dev/sdl,/dev/sdd Please let me know if there is anything else you think might be useful in troubleshooting this... I just can't seem to figure out how to change the boot process so that mdadm waits until the drives are ready to build the array. Everything works just fine if the drives are given enough time to come online. edit: changed title to properly reflect situation
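    Not part of the original question, but for reference a hedged sketch of the checks above plus one common workaround (the rootdelay value is an assumption; it only gives slow controllers more time before the initramfs tries to assemble the array, and may not apply to every setup):

      # Once booted, confirm the array state and reassemble manually if needed
      cat /proc/mdstat
      sudo mdadm --assemble --scan --verbose

      # Possible workaround: delay the early boot stage so slow disks can appear.
      # In /etc/default/grub, for example:
      #   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash rootdelay=90"
      # then apply the change with:
      sudo update-grub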

    Read the article

  • Heterogeneous Datacenter Management with Enterprise Manager 12c

    - by Joe Diemer
    The following is a Guest Blog, contributed by Bryce Kaiser, Product Manager at Blue Medora. When I envision a perfect datacenter, it would consist of technologies acquired from a single vendor across the entire server, middleware, application, network, and storage stack - Apps to Disk - that meets your organization’s every IT requirement with absolute best-of-breed solutions in every category.   To quote a familiar motto, your datacenter would consist of "Hardware and Software, Engineered to Work Together".  In almost all cases, practical realities dictate something far less than the IT Utopia mentioned above.   You may wish to leverage multiple vendors to keep licensing costs down, a single vendor may not have an offering in the IT category you need, or your preferred vendor may quite simply not have the solution that meets your needs.    In other words, your IT needs dictate a heterogeneous IT environment.  Heterogeneity, however, comes with additional complexity. The following are two pretty typical challenges: 1) No End-to-End Visibility into the Enterprise Wide Application Deployment. Each vendor solution which is added to an infrastructure may bring its own tooling, creating different consoles for different vendor applications and platforms. 2) No Visibility into Performance Bottlenecks. When multiple management tools operate independently, you lose diagnostic capabilities including identifying cross-tier issues with database, hung-requests, slowness, memory leaks and hardware errors/failures causing DB/MW issues. As adoption of Oracle Enterprise Manager (EM) has increased, especially since the release of Enterprise Manager 12c, Oracle has seen an increase in the number of customers who want to leverage their investments in EM to manage non-Oracle workloads.  Enterprise Manager provides a single pane of glass view into their entire datacenter.  By creating a highly extensible framework via the Oracle EM Extensibility Development Kit (EDK), Oracle has provided the tooling for business partners such as my company Blue Medora as well as customers to easily fill gaps in the ecosystem and enhance existing solutions.  As mentioned in the previous post on the Enterprise Manager Extensibility Exchange, customers have access to an assortment of Oracle and Partner provided solutions through this Exchange, which is accessed at http://www.oracle.com/goto/emextensibility.  Currently, there are over 80 Oracle and partner provided plug-ins across the EM 11g and EM 12c versions.  Blue Medora is one of those contributing partners, for which you will find 3 of our solutions including our flagship plugin for VMware.  Let's look at Blue Medora’s VMware plug-in as an example of what I'm trying to convey.  Here is a common situation solved by true visibility into your entire stack: Symptoms: • My database is bogging down, however the database appears okay internally.  Maybe it’s starved for resources? • My OS tooling is showing everything is “OK”.  Something doesn’t add up. Root cause: • Through the VMware plugin we can see the problem is actually on the virtualization layer. Solution: • From within Enterprise Manager -- the same tool you use for all of your database tuning -- we can overlay the data of the database target, host target, and virtual machine target for a true picture of the true root cause. Here is the console view: Perhaps your monitoring conditions are more specific to your environment.  No worries, Enterprise Manager still has you covered.
With Metric Extensions you have the “Next Generation” of User-Defined Metrics, which easily bring the power of your existing management scripts into a single console while leveraging the proven Enterprise Manager framework. Simply put, Oracle Enterprise manager boasts a growing ecosystem that provides the single pane of glass for your entire datacenter from the database and beyond.  Bryce can be contacted at [email protected]

    Read the article

  • Virtualized data centre – Part three: Architecture

    - by marc dekeyser
    Having the basics (like discussed in the previous articles) is all good and well, but how do we get started on this?! It can be quite daunting after all!   From my own point of view I can absolutely confirm your worries and concerns, but also tell you that it is not as hard as it seems! Deciding on what kind of motherboard to buy, processor and how much memory is an activity you will spend quite some time doing research on. And that is not even mentioning storage! All in all it comes down to setting your expectations and your budget. Probably adjusting your expectations according to your budget :). Processors As a rule of thumb you want VT-D (virtualization) technology built in to the processor allowing you to have 64 bit machines running on your host. Memory The more the better! If you are building a home lab don’t bother with ECC unless you are going to run machines that absolutely should be on all the time and your comfort depends on it! Motherboard Depends on what you are going to do with storage: If you are going the NAS way then the number of SATA ports/RAID capabilities does not really matter. If you decide to have a single server with lots of dedicated storage it obviously matters how many SATA ports you will have, alternatively you could use a RAID controller (but these set you back a pretty penny if you want one. DELL 6i’s are usually available for a good bargain if you can find one!). Easiest is to get one with a built-in graphics card (on-board), as a separate card just adds more heat, power usage and possible points of failure. Networking Just like your choice of motherboard the networking side tends to depend on how you want to go. A single virtualization host with local storage can usually get away with having a single network card, a cluster or server which uses iSCSI storage tends to have more than one teamed up :). Storage The dreaded beast from the dark! The horror which lives in the forest! The most difficult decision you are going to make in the building of your lab. Why you might ask? Simple my friend, having the right choice of storage can make or break your virtualization solution. The performance of your storage choice will have an important impact on the responsiveness of your virtual machines and the deployment of new machines. It also makes a run at your budget! If you decide to go the NAS route you will be dropping a lot more money than if you just had a bunch of disks sitting in a server and manually distributed the virtual machines over the disks. Platform I’m a Microsoftee so Hyper-V is a dead giveaway for me. If you are interested in using VMware I won’t stop you but the rest of my posts will be oriented on Server 2012 Hyper-V (aka 3.0)! What did I use? Before someone asks me this in the comments I’ll give you a quick run down of what I am using. - Intel 2.4 quad core processors (i something something) - 24 GB DDR3 Memory - Single disk in each server (might look at this as I move the servers to 2012) - Synology DS1812+ NAS - 3 network interfaces where possible - HP1800 procurve managed switch I decided to spring for the NAS as I will also be using it for backups and media storage (which is working out quite nicely with my Xbox 360 I must say). At the time of building my 2 boxes (over a year and a half ago) these set me back about 900 euros each so I can imagine you can build the same or better for a lower price. The next article will be diagramming what I want to achieve and starting a build on the Hyper-V 3.0 cluster!

    Read the article

  • Columnstore Case Study #1: MSIT SONAR Aggregations

    - by aspiringgeek
    Preamble This is the first in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014.  Many of these can be found in this deck along with details such as internals, best practices, caveats, etc.  The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative. Why Columnstore? If we’re looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we’re asking a question which by design needs to hit lots of rows—DW, reporting, aggregations, grouping, scans, etc., SQL Server has never had a good mechanism—until columnstore. Columnstore indexes were introduced in SQL Server 2012. However, they're still largely unknown. Some adoption blockers existed; yet columnstore was nonetheless a game changer for many apps.  In SQL Server 2014, potential blockers have been largely removed & they're going to profoundly change the way we interact with our data.  The purpose of this series is to share the performance benefits of columnstore & documenting columnstore is a compelling reason to upgrade to SQL Server 2014. App: MSIT SONAR Aggregations At MSIT, performance & configuration data is captured by SCOM. We archive much of the data in a partitioned data warehouse table in SQL Server 2012 for reporting via an application called SONAR.  By definition, this is a primary use case for columnstore—report queries requiring aggregation over large numbers of rows.  New data is refreshed each night by an automated table partitioning mechanism—a best practices scenario for columnstore. The Win Compared to performance using classic indexing which resulted in the expected query plan selection including partition elimination vs. SQL Server 2012 nonclustered columnstore, query performance increased significantly.  Logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more.  Other than creating the columnstore index, no special modifications or tweaks to the app or database schema were necessary to achieve the performance improvements.  Existing nonclustered indexes were rendered superfluous & were deleted, thus mitigating maintenance challenges such as defragging as well as conserving disk capacity. Details The table below provides the raw data & summarizes the performance deltas.

                                        Logical Reads (8K pages)    CPU (ms)    Durn (ms)
      Columnstore                       160,323                     20,360      9,786
      Conventional Table & Indexes      9,053,423                   549,608     193,903
      Improvement (Δ)                   x56                         x27         x20

    The charts provide additional perspective of this data.  "Conventional vs. Columnstore Metrics" documents the raw data.  Note on this linear display the magnitude of the conventional index performance vs. columnstore.  The “Metrics (Δ)” chart expresses these values as a ratio. Summary For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing.  I have documented here, the first in a series of reports on columnstore implementations, results from an initial implementation at MSIT in which logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more.  Subsequent features in this series document performance enhancements that are even more significant.
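    As a hedged illustration only (the case study does not show its schema, so the table and column names below are invented), the kind of index described takes a single statement in SQL Server 2012:

      -- Hypothetical DW fact table; list the columns the report aggregations touch
      CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_FactPerfCounter
      ON dbo.FactPerfCounter (ServerId, CounterId, SampleDate, SampleValue);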

    Read the article

  • How do I configure WakeOnUSB properly?

    - by wishi
    How do I configure Wake-On-USB properly on a 10.04 or 10.10 Ubuntu (2.6.36 and higher if needed)? (Wake-on-USB is when the computer is asleep and for example a USB Keyboard event wakes up the machine!) The notebook is an Acer Aspire Timeline X 1830T. I don't know in which way the Linux Kernel supports the controllers. There are different ways to approach this, for example /proc/acpi/wakeup... or UDEV... or something with HAL? /proc/acpi/wakeup shows every device in S4, but I need S3. Device S-state Status Sysfs node P0P2 S4 *disabled PEGP S4 *disabled P0P1 S0 *disabled pci:0000:00:1e.0 EHC1 S4 *disabled pci:0000:00:1d.0 USB1 S4 *enabled USB2 S4 *disabled USB3 S4 *disabled USB4 S4 *disabled EHC2 S4 *disabled pci:0000:00:1a.0 USB5 S4 *disabled USB6 S4 *disabled USB7 S4 *disabled HDEF S0 *disabled pci:0000:00:1b.0 RP01 S5 *disabled pci:0000:00:1c.0 PXSX S5 *disabled pci:0000:01:00.0 RP02 S0 *disabled pci:0000:00:1c.1 PXSX S5 *disabled pci:0000:02:00.0 RP03 S0 *disabled PXSX S5 *disabled RP04 S0 *disabled PXSX S5 *disabled RP05 S0 *disabled PXSX S5 *disabled RP07 S0 *disabled PXSX S5 *disabled RP08 S0 *disabled PXSX S5 *disabled GLAN S0 *disabled PEG3 S4 *disabled PEG5 S4 *disabled PEG6 S4 *disabled SLPB S3 *enabled S4, which is Suspend-To-Disk afaik... doesn't seem to work either if I echo USB1 into the wakeup table. It just sets an S4 flag. can I get the USB ports in S3? I want to make the machine wakeup from Suspend-To-Ram (S3, ACPI standard) in case a key on my external keyboard is pressed. It only wakes up if a key on the internal Laptop keyboard is pressed... from Suspend To Ram. It seems if I plug in a USB mouse, that the USB port isn't even powered. I have no BIOS option to change this. Further specific information regarding the device: usb-devices T: Bus=01 Lev=02 Prnt=02 Port=01 Cnt=01 Dev#= 13 Spd=1.5 MxCh= 0 D: Ver= 1.10 Cls=00(>ifc ) Sub=00 Prot=00 MxPS= 8 #Cfgs= 1 P: Vendor=04d9 ProdID=1603 Rev=03.10 S: Manufacturer= S: Product=USB Keyboard C: #Ifs= 2 Cfg#= 1 Atr=a0 MxPwr=100mA I: If#= 0 Alt= 0 #EPs= 1 Cls=03(HID ) Sub=01 Prot=01 Driver=usbhid I: If#= 1 Alt= 0 #EPs= 1 Cls=03(HID ) Sub=00 Prot=00 Driver=usbhid root@underwater-laptop:/# lsusb [...] Bus 001 Device 013: ID 04d9:1603 Holtek Semiconductor, Inc. Bus 001 Device 004: ID 0bda:0138 Realtek Semiconductor Corp. Bus 001 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub [...] If this doesn't work I have to properly explain why :( - but I think it is very hard to research this kernel internal. Any hints for good information here? I hope it's possible... I'm just looking for any solution. edit: this, waking up on USB, works on Windows! Thanks a lot, Marius
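    Not a confirmed answer, but a hedged sketch of the two knobs usually involved, using names from the output above (EHC1 is the controller listed against pci:0000:00:1d.0; the sysfs path 1-1.X is a placeholder for the keyboard's actual position on bus 1, which you can look up under /sys/bus/usb/devices/):

      # Toggle the ACPI wakeup flag for the USB controller (each write flips the state)
      echo EHC1 | sudo tee /proc/acpi/wakeup

      # Also enable remote wakeup on the keyboard device itself via sysfs
      echo enabled | sudo tee /sys/bus/usb/devices/1-1.X/power/wakeup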

    Read the article

  • Essbase BSO Data Fragmentation

    - by Ann Donahue
    Essbase BSO Data Fragmentation Data fragmentation naturally occurs in Essbase Block Storage (BSO) databases where there are a lot of end user data updates, incremental data loads, many lock and send, and/or many calculations executed.  If an Essbase database starts to experience performance slow-downs, this is an indication that there may be too much fragmentation.  See Chapter 54 Improving Essbase Performance in the Essbase DBA Guide for more details on measuring and eliminating fragmentation: http://docs.oracle.com/cd/E17236_01/epm.1112/esb_dbag/daprcset.html Fragmentation is likely to occur in the following situations: Read/write databases that users are constantly updating data Databases that execute calculations around the clock Databases that frequently update and recalculate dense members Data loads that are poorly designed Databases that contain a significant number of Dynamic Calc and Store members Databases that use an isolation level of uncommitted access with commit block set to zero There are two types of data block fragmentation Free space tracking, which is measured using the Average Fragmentation Quotient statistic. Block order on disk, which is measured using the Average Cluster Ratio statistic. Average Fragmentation Quotient The Average Fragmentation Quotient ratio measures free space in a given database.  As you update and calculate data, empty spaces occur when a block can no longer fit in its original space and will either append at the end of the file or fit in another empty space that is large enough.  These empty spaces take up space in the .PAG files.  The higher the number the more empty spaces you have, therefore, the bigger the .PAG file and the longer it takes to traverse through the .PAG file to get to a particular record.  An Average Fragmentation Quotient value of 3.174765 means the database is 3% fragmented with free space. Average Cluster Ratio Average Cluster Ratio describes the order the blocks actually exist in the database. An Average Cluster Ratio number of 1 means all the blocks are ordered in the correct sequence in the order of the Outline.  As you load data and calculate data blocks, the sequence can start to be out of order.  This is because when you write to a block it may not be able to place back in the exact same spot in the database that it existed before.  The lower this number the more out of order it becomes and the more it affects performance.  An Average Cluster Ratio value of 1 means no fragmentation.  Any value lower than 1 i.e. 0.01032828 means the data blocks are getting further out of order from the outline order. Eliminating Data Block Fragmentation Both types of data block fragmentation can be removed by doing a dense restructure or export/clear/import of the data.  There are two types of dense restructure: 1. Implicit Restructures Implicit dense restructure happens when outline changes are done using EAS Outline Editor or Dimension Build. Essbase restructures create new .PAG files restructuring the data blocks in the .PAG files. When Essbase restructures the data blocks, it regenerates the index automatically so that index entries point to the new data blocks. Empty blocks are NOT removed with implicit restructures. 2. Explicit Restructures Explicit dense restructure happens when a manual initiation of the database restructure is executed. An explicit dense restructure is a full restructure which comprises of a dense restructure as outlined above plus the removal of empty blocks Empty Blocks vs. 
Fragmentation The existence of empty blocks is not considered fragmentation.  Empty blocks can be created through calc scripts or formulas.  An empty block will add to an existing database block count and will be included in the block counts of the database properties.  There are no statistics for empty blocks.  The only way to determine if empty blocks exist in an Essbase database is to record your current block count, export the entire database, clear the database then import the exported data.  If the block count decreased, the difference is the number of empty blocks that had existed in the database.

    Read the article

  • Can't finish upgrade from 11.10 to 12.04 on VPS based on Parallels Virtuozzo Containers, due to libc6

    - by Carmageddon
    I was stuck with this problem near the end of an upgrade: WARNING: this version of the GNU libc requires kernel version 2.6.24 or later. Please upgrade your kernel before installing glibc. The installation of a 2.6 kernel could ask you to install a new libc first, this is NOT a bug, and should NOT be reported. In that case, please add lenny sources to your /etc/apt/sources.list and run: apt-get install -t lenny linux-image-2.6 Their suggested stepds dont work on VPS, and after googling, I came up to this: Why did my upgrade to 12.04 fail with "glibc not found" or "libc6" or "requires kernel 2.6.24" error? There is comment by izx which explains my problem and proposes a workaround (might take a while to convince the guys to upgrade the kernel..). However, when I follow his instructions, I get error: # apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: libc-dev-bin libc6 libc6-dev libnih1 Suggested packages: glibc-doc The following packages will be upgraded: libc-dev-bin libc6 libc6-dev libnih1 4 upgraded, 0 newly installed, 0 to remove and 394 not upgraded. 1 not fully installed or removed. Need to get 0 B/7737 kB of archives. After this operation, 233 kB disk space will be freed. Do you want to continue [Y/n]? y locale: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.15' not found (required by locale) locale: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.14' not found (required by locale) Preconfiguring packages ... (Reading database ... 35175 files and directories currently installed.) Preparing to replace libc6-dev 2.13-20ubuntu5.2 (using .../libc6-dev_2.15-0ubuntu10.3_amd64.deb) ... Unpacking replacement libc6-dev ... Preparing to replace libc-dev-bin 2.13-20ubuntu5.2 (using .../libc-dev-bin_2.15-0ubuntu10.3_amd64.deb) ... Unpacking replacement libc-dev-bin ... Preparing to replace libc6 2.13-20ubuntu5.2 (using .../libc6_2.15-0ubuntu10.3_amd64.deb) ... locale: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.15' not found (required by locale) locale: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.14' not found (required by locale) Checking for services that may need to be restarted... Checking init scripts... runlevel:/var/run/utmp: No such file or directory Checking for services that may need to be restarted... Checking init scripts... runlevel:/var/run/utmp: No such file or directory WARNING: init script for samba not found. Stopping some services possibly affected by the upgrade (will be restarted later): cron: stopping...done. WARNING: this version of the GNU libc requires kernel version 2.6.24 or later. Please upgrade your kernel before installing glibc. The installation of a 2.6 kernel _could_ ask you to install a new libc first, this is NOT a bug, and should *NOT* be reported. In that case, please add lenny sources to your /etc/apt/sources.list and run: apt-get install -t lenny linux-image-2.6 Then reboot into this new kernel, and proceed with your upgrade dpkg: error processing /var/cache/apt/archives/libc6_2.15-0ubuntu10.3_amd64.deb (--unpack): subprocess new pre-installation script returned error exit status 1 Processing triggers for man-db ... 
locale: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.15' not found (required by locale) locale: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.14' not found (required by locale) Errors were encountered while processing: /var/cache/apt/archives/libc6_2.15-0ubuntu10.3_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) I also attempted to manually grab the .deb package and install it using dpkg -i, but getting: locale: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.15' not found (required by locale) Even though the file is: libc-bin_2.15-0ubuntu10+openvz0_amd64.deb

    Read the article

  • Why won't my Broadcom BCM4312 LP-PHY work with the STA driver?

    - by Jackson Taylor
    I tried the steps here for a 4312: https://help.ubuntu.com/community/WifiDocs/Driver/bcm43xx

    Both of these:

        sudo modprobe -r b43 ssb wl
        sudo modprobe wl

    return:

        FATAL: Module wl not found.
        FATAL: Error running install command for wl

    (the second error appears only for the second command). I tried the broadcom-sta package; it didn't work. What's confusing is that further down, in the steps for STA with internet access, it says to use the bcmwl package. So I installed that, and it succeeds but with some errors:

        sudo apt-get install bcmwl-kernel-source
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following package was automatically installed and is no longer required:
          module-assistant
        Use 'apt-get autoremove' to remove it.
        The following NEW packages will be installed:
          bcmwl-kernel-source
        0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
        Need to get 0 B/1,181 kB of archives.
        After this operation, 3,609 kB of additional disk space will be used.
        Selecting previously unselected package bcmwl-kernel-source.
        (Reading database ... 168005 files and directories currently installed.)
        Unpacking bcmwl-kernel-source (from .../bcmwl-kernel-source_5.100.82.112+bdcom-0ubuntu3_amd64.deb) ...
        Setting up bcmwl-kernel-source (5.100.82.112+bdcom-0ubuntu3) ...
        Loading new bcmwl-5.100.82.112+bdcom DKMS files...
        Building only for 3.5.0-21-generic
        Building for architecture x86_64
        Module build for the currently running kernel was skipped since the
        kernel source for this kernel does not seem to be installed.
        ERROR: Module b43 does not exist in /proc/modules
        ERROR: Module b43legacy does not exist in /proc/modules
        ERROR: Module ssb does not exist in /proc/modules
        ERROR: Module bcm43xx does not exist in /proc/modules
        ERROR: Module brcm80211 does not exist in /proc/modules
        ERROR: Module brcmfmac does not exist in /proc/modules
        ERROR: Module brcmsmac does not exist in /proc/modules
        ERROR: Module bcma does not exist in /proc/modules
        FATAL: Module wl not found.
        FATAL: Error running install command for wl
        update-initramfs: deferring update (trigger activated)
        Processing triggers for initramfs-tools ...
        update-initramfs: Generating /boot/initrd.img-3.5.0-21-generic
        jtaylor991@jtaylor991-whiteHP:~$ sudo modprobe wl
        FATAL: Module wl not found.
        FATAL: Error running install command for wl

    Then I run the modprobe wl commands listed above and they give the errors shown above. It didn't work with the broadcom-sta driver either. I installed the b43 packages, but nothing happened, and I don't know why, so those are still installed: firmware-b43legacy-installer, b43-fwcutter and firmware-b43-lpphy-installer (yes, it is an LP-PHY) are currently installed.

    If I go into System Settings > Software Sources > Additional Drivers, it says "Using Broadcom 802.11 Linux STA wireless driver source from bcmwl-kernel-source (proprietary)", but bcmwl-kernel-source isn't installed. I could try again, but I remember rebooting and it still said this.

    What's funny is that it found wireless networks during the Ubuntu setup/installation; I don't remember if I got it to connect or not, though. I think it kept asking for a password when I put it in (yes, it was right; I showed the password and checked it), so I just ignored it. But right now the Enable Wireless option in the top right is simply gone; there is only Enable Networking, and I'm on ethernet on this HP Pavilion dv4-1435dx. If I run rfkill list it shows:

        0: hp-wifi: Wireless LAN
            Soft blocked: no
            Hard blocked: no

    It was hard blocked at the beginning, but unblocking it makes no change. Also, it's a touch-sensitive button, and it appears to stay orange whether wireless is enabled or not, because when I touch it the hard-blocked state toggles between yes and no in rfkill list. I think it was blue for a minute at one point. What is going on?! Help me! Thanks for any and all of your time. Oh yeah, this is a fresh install of Ubuntu 12.10.
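
    For what it's worth, the DKMS line in that log ("Module build for the currently running kernel was skipped since the kernel source for this kernel does not seem to be installed") is the usual reason wl never appears: the module is never compiled because the headers for 3.5.0-21-generic are missing. A hedged sketch of the usual recovery sequence, assuming the packages below are available in the 12.10 repositories and that only one driver stack (STA or b43) should be left installed at the end:

        sudo apt-get install linux-headers-$(uname -r)        # give DKMS the headers it needs
        sudo apt-get install --reinstall bcmwl-kernel-source  # re-run the DKMS build of the wl module
        sudo modprobe -r b43 b43legacy ssb brcmsmac bcma      # unload the open drivers if present
        sudo modprobe wl                                      # load the STA driver
        rfkill list                                           # confirm nothing is soft/hard blocked

    Since the card is reported as an LP-PHY BCM4312, the open b43 route (firmware-b43-lpphy-installer plus modprobe b43) is the other common path; whichever is chosen, having both the STA and b43 stacks installed at once tends to cause exactly this kind of confusion.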

    Read the article

  • I am not able to delete a corrupt NTFS partition on my pen drive. How can I force its deletion?

    - by yesuraj
    I formatted my 16GB pen drive with the NTFS file system in Windows Vista. After that I started copying some files, but only a few were copied before the copy operation hung, so I cancelled it. Now I am unable to use the pen drive. I don't need any of the files I copied to it; I just want to be able to use the pen drive again.

    I have tried using Ubuntu to format the pen drive, but when I use fdisk to delete the partition, it looks like it works, yet the partition is not actually deleted. I am also unable to format it with any other file system. When I try gparted, it throws the following error:

        Error mounting: mount exited with exit code 14: The disk contains an unclean file system (0, 0).
        The file system wasn't safely closed on Windows. Fixing.
        ntfs_attr_pread_i: ntfs_pread failed: Input/output error
        Failed to read NTFS $Bitmap: Input/output error
        NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware.
        In the first case run chkdsk /f on Windows then reboot into Windows twice.
        The usage of the /f parameter is very important! If the device is a SoftRAID/FakeRAID then first
        activate it and mount a different device under the /dev/mapper directory,
        (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the dmraid documentation for more details.

    When I searched the Internet I found help on how to recover data, but I don't want to recover anything; I want to format the drive again. When I pressed w after deleting the partition, it took longer than before. After that I removed the pen drive and re-inserted it, but the partition I had deleted was still present. If I simply run fdisk /dev/sdb again without removing the pen drive after the partition is deleted, it returns the error message "Unable to open /dev/sdb". Here are the steps that I followed:

        root@yesuraj-ubuntu:~# fdisk /dev/sdb
        Command (m for help): d
        Selected partition 1
        Command (m for help): w
        The partition table has been altered!
        Calling ioctl() to re-read partition table.
        Syncing disks.

    The dmesg output is as follows:

        [ 6139.774753] usb 2-1.3: reset high speed USB device number 4 using ehci_hcd
        [ 6154.816941] usb 2-1.3: device descriptor read/64, error -110
        [ 6169.968908] usb 2-1.3: device descriptor read/64, error -110
        [ 6170.158427] usb 2-1.3: reset high speed USB device number 4 using ehci_hcd
        [ 6185.200638] usb 2-1.3: device descriptor read/64, error -110
        [ 6200.352572] usb 2-1.3: device descriptor read/64, error -110
        [ 6200.542093] usb 2-1.3: reset high speed USB device number 4 using ehci_hcd
        [ 6205.559460] usb 2-1.3: device descriptor read/8, error -110

    I used the dd command and it erased the partition table, but now when I connect the pen drive, dmesg contains this message:

        [88143.437001] sdb: unknown partition table

    I am not able to create a partition using fdisk /dev/sdb; the error message says that it is unable to find the node. Other messages from dmesg follow below:

        [87100.531596] usb 2-1.3: new high speed USB device number 39 using ehci_hcd
        [87130.915257] usb 2-1.3: new high speed USB device number 40 using ehci_hcd
        [87135.932647] usb 2-1.3: device descriptor read/8, error -110
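
    Since nothing on the stick needs to be preserved, the usual way forward is to wipe the start of the device and lay down a fresh partition table. However, the repeated "device descriptor read ..., error -110" resets in dmesg often indicate the flash drive (or the USB port/cable) is failing at the hardware level, in which case no re-format will survive a replug. A hedged sketch, assuming the stick still appears as /dev/sdb (check with lsblk first, since dd on the wrong device is destructive):

        sudo dd if=/dev/zero of=/dev/sdb bs=1M count=16   # zero the old partition table and NTFS boot sector
        sudo partprobe /dev/sdb                           # ask the kernel to re-read the (now empty) table
        sudo fdisk /dev/sdb                               # n, p, 1, accept defaults, then w to write one new partition
        sudo mkfs.vfat -F 32 /dev/sdb1                    # or mkfs.ntfs -f /dev/sdb1 if NTFS is required

    If fdisk still cannot open /dev/sdb afterwards, or the -110 errors return as soon as the drive is re-inserted, the hardware itself is the likely culprit and the drive should be replaced.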

    Read the article

  • Free hosting solution for a very low-traffic website [duplicate]

    - by user966939
    This question already has an answer here: How to find web hosting that meets my requirements? (4 answers)

    I run a very low-traffic website (about 40 users, basically all of whom are active on the site daily). I don't see that changing anytime soon either, as there is no way to sign up on the site right now. Until now I have been using a sub-directory on a friend's shared host to host the web site, but in a few weeks his subscription will end and he has no plans to renew it. So of course this means I'll have to move on to something else. I don't think I'll find someone willing to share a shared host with me again, and besides, the software on that server is ancient (PHP 4.4.9 + MySQL 4.1.22).

    There's one obvious solution that comes to mind: choose a better host and pay for it myself. The problem is that I have no fixed income, as I'm only a student. So even if the pricing is dirt cheap, I can't be certain I will be able to afford it, every single month, for at least two years or so.

    So I've looked at free hosting solutions instead. My minimum requirement was that the host be completely free of ads, but no matter where I look, I always find something in a corner or two ("what can you expect from a free host?" - yeah, I know, but I figured it was worth a shot). For example, on Byethost (one of the free hosts I tried), if you trigger a PHP error while error reporting is set to E_ALL, you spawn a hidden ad. Besides Byethost, I've tried 000Webhost, x10Hosting, 2Freehosting/1Freehosting and Wink.ws, and they are only worse.

    Okay, I'm running low on ideas. But what if I just hosted the site myself, on my own computer? That could work; my computer is on practically 24/7. But not really: sometimes I need to reboot it, and sometimes we even have power outages. And what if the hardware needs an upgrade? It's not such a big deal for me if the site goes down, because I know what's going on - but what about the users? If I do decide to host it myself, is there some way to show users an alternate page instead of a generic "server not found" page in the browser when the site is not accessible?

    Or is there something I have been missing? Is there a different kind of web hosting solution out there that I haven't heard of? Here is what I'm really looking for:

        - Free (as in, no costs)
        - No ads
        - Bandwidth enough for a low-traffic forum with roughly 40 users
        - (Semi-)up-to-date PHP and MySQL (at least not older than a year)
        - No standard (non-extension) PHP functions turned off, such as sleep()
        - The mbstring extension enabled
        - Disk space: at least 5 MB
        - At least one MySQL database

    Some bonus points would be:

        - Max execution time of PHP scripts can be set
        - Remote access to the MySQL database

    What would be the best solution for me? Is there one?

    Read the article

  • Process for Securing Web Sites and Applications

    - by Aamir Hasan
    The following quick-start guide provides a detailed overview of how to configure security for IIS 6.0.

    Reduce the Attack Surface of the Web Server
        1. Enable only essential Windows Server 2003 components and services.
        2. Enable only essential IIS 6.0 components and services.
        3. Enable only essential Web service extensions.
        4. Enable only essential Multipurpose Internet Mail Extensions (MIME) types.
        5. Configure Windows Server 2003 security settings.

    Prevent Unauthorized Access to Web Sites and Applications
        1. Store content on a dedicated disk volume.
        2. Set IIS Web site permissions.
        3. Set IP address and domain name restrictions.
        4. Set the NTFS file system permissions.

    Isolate Web Sites and Applications
        1. Evaluate the effects of impersonation on application compatibility:
           - Identify the impersonation behavior for ASP applications.
           - Select the impersonation behavior for ASP.NET applications.
        2. Configure Web sites and applications for isolation.

    Configure User Authentication
        1. Configure Web site authentication:
           - Select the Web site authentication method.
           - Configure the Web site authentication method.
        2. Configure File Transfer Protocol (FTP) site authentication.

    Encrypt Confidential Data Exchanged with Clients
        1. Use Secure Sockets Layer (SSL) to encrypt confidential data.
        2. Use Internet Protocol security (IPSec) or a virtual private network (VPN) for remote administration.

    Maintain Web Site and Application Security
        1. Obtain and apply current security patches.
        2. Enable Windows Server 2003 security logs.
        3. Enable file access auditing for Web site content.
        4. Configure IIS logs.
        5. Review security policies, processes, and procedures.

    Note: To secure the Web sites and applications in a Web farm, use this process to configure security on each server in the farm.
    Link: http://www.studentacad.com/post/2010/04/28/Process-for-Securing-Web-Sites-and-Applications.aspx

    Read the article

  • Cloud – the forecast is improving

    - by Rob Farley
    There is a lot of discussion about "the cloud", and how that affects people's data stories. Today the discussion enters the realm of T-SQL Tuesday, hosted this month by Jorge Segarra.

    Over the years, companies have invested a lot in making sure that their data is good, and I mean every aspect of it - the quality of it, the security of it, the performance of it, and more. Experts such as those of us at LobsterPot Solutions have helped these companies with this, and continue to work with clients to make sure that data is a strong part of their business, not an oversight. Whether business intelligence systems are being utilised or not, every business needs to be able to rely on its data, and have confidence in it. Data should be a foundation upon which a business is built.

    In the past, data was stored in paper-based systems. Filing cabinets stored vital information. Today, people have server rooms with storage of various kinds, recognising that filing cabinets don't necessarily scale particularly well. It's easy to 'lose' data in a filing cabinet, when you have people who need to make sure that the sheets of paper are in the right spot, and that you know how things are stored. Databases help solve that problem, but still the idea of a large filing cabinet continues, it just doesn't involve paper.

    If something happens to the physical 'filing cabinet', then the problems are larger still. Then the data itself is under threat. Many clients have generators in case the power goes out, redundant cables in case the connectivity dies, and spare servers in other buildings just in case they're required. But still they're maintaining filing cabinets.

    You see, people like filing cabinets. There's something to be said for having your data 'close'. Even if the data is not in readable form, living as bits on a disk somewhere, the idea that its home is 'in the building' is comforting to many people. They simply don't want to move their data anywhere else.

    The cloud offers an alternative to this, and the human element is an obstacle. By leveraging the cloud, companies can have someone else look after their filing cabinet. A lot of people really don't like the idea of this, partly because the administrators of the data - those people who could potentially log in with escalated rights and see more than they should be allowed to, who need to be trusted to respond if there's a problem - are now a faceless entity in the cloud. But this doesn't mean that the cloud is bad; this is simply a concern that some people may have.

    In new functionality that's on its way, we see other hybrid mechanisms that mean that people can leverage parts of the cloud with less fear. Companies can use cloud storage to hold their backup data, for example - backups that have been encrypted and are therefore not able to be read by anyone (including administrators) who doesn't have the right password. Companies can have a database instance that runs locally, but which has its data files in the cloud, complete with Transparent Data Encryption if needed. There can be a higher level of control, making the change easier to accept.

    Hybrid options allow people who have had fears (potentially very justifiable) to take a new look at the cloud, and to start embracing some of the benefits of the cloud (such as letting someone else take care of storage, high availability, and more) without losing the feeling of the data being close.

    @rob_farley

    Read the article

  • Ubuntu 12.04 Hp G72 Problem Installing proprietary wireless driver

    - by user69402
    I have a fresh Ubuntu 12.04 install on an HP G72 machine. For my wireless to work I need the proprietary driver installed - the Broadcom STA wireless driver. Trying to install it from System Settings gives me the error: "Sorry, installation of this driver failed. Please have a look at the log file for details: /var/log/jockey.log". So far I suspect the error is caused by a bad "bcmwl-kernel-source" installation. What I tried:

        1. remove "bcmwl-kernel-source"
        2. install "bcmwl-kernel-source" through the terminal - the installation ends with "error code (1)"

    I would greatly appreciate any help. Here is everything that the terminal returns:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following NEW packages will be installed:
          bcmwl-kernel-source
        0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
        Need to get 0 B/1,151 kB of archives.
        After this operation, 3,514 kB of additional disk space will be used.
        Selecting previously unselected package bcmwl-kernel-source.
        (Reading database ... 170331 files and directories currently installed.)
        Unpacking bcmwl-kernel-source (from .../bcmwl-kernel-source_5.100.82.38+bdcom-0ubuntu6.1_amd64.deb) ...
        Setting up bcmwl-kernel-source (5.100.82.38+bdcom-0ubuntu6.1) ...
        Loading new bcmwl-5.100.82.38+bdcom DKMS files...
        /usr/sbin/dkms: line 467: unset: POST_REMOVE$PRE_BUMLD': not a valid identifier
        /usr/sbin/dkms: line 467: unset: BUILD_E\CLUWIVE_ARCH': not a valid identifier
        /usr/sbin/dkms: line 467: unset: $': not a valid identifier
        /usr/sbin/dkms: line 467: unset: $': not a valid identifier
        /usr/sbin/dkms: line 467: unset: modules_conf_arra}': not a valid identifier
        /usr/sbin/dkms: line 467: unset: $': not a valid identifier
        /usr/sbin/dkms: line 467: unset: $': not a valid identifier
        /usr/sbin/dkms: line 467: unset: $': not a valid identifier
        /usr/sbin/dkms: line 467: unset: $': not a valid identifier
        /usr/sbin/dkms: line 467: unset: $': not a valid identifier
        /usr/sbin/dkms: line 467: unset: `$': not a valid identifier
        /usr/sbin/dkms: line 419: ${!POST_REMOVE$PRE_BUMLD[@]}: bad substitution
        /usr/sbin/dkms: line 419: ${!BUILD_E\CLUWIVE_ARCH[@]}: bad substitution
        /usr/sbin/dkms: line 419: ${!$[@]}: bad substitution
        /usr/sbin/dkms: line 419: ${!$[@]}: bad substitution
        /usr/sbin/dkms: line 419: ${!$[@]}: bad substitution
        /usr/sbin/dkms: line 419: ${!$[@]}: bad substitution
        /usr/sbin/dkms: line 419: ${!$[@]}: bad substitution
        /usr/sbin/dkms: line 419: ${!$[@]}: bad substitution
        /usr/sbin/dkms: line 419: ${!$[@]}: bad substitution
        /usr/sbin/dkms: line 419: ${!$[@]}: bad substitution
        malloc: ../bash/subst.c:3671: assertion botched
        free: start and end chunk sizes differ
        Aborting.../tmp/tmp.pEXTnftUfI: line 4: modules_conf_arra}[[@]}]=[[@]}]}: command not found
        dkms.conf: Error! No 'DEST_MODULE_LOCATION' directive specified for record #0.
        dkms.conf: Error! Directive 'DEST_MODULE_LOCATION' does not begin with '/kernel', '/updates', or '/extra' in record #0.
        dkms.conf: Error! No 'PACKAGE_VERSION' directive specified.
        Error! Bad conf file.
        File: /usr/src/bcmwl-5.100.82.38+bdcom/dkms.conf does not represent a valid dkms.conf file.
        dpkg: error processing bcmwl-kernel-source (--configure):
         subprocess installed post-installation script returned error exit status 8
        Errors were encountered while processing:
         bcmwl-kernel-source
        E: Sub-process /usr/bin/dpkg returned an error code (1)
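
    The mangled identifiers in that output (POST_REMOVE$PRE_BUMLD, BUILD_E\CLUWIVE_ARCH, plus bash aborting inside its own allocator) suggest that /usr/sbin/dkms or the unpacked driver source on disk is corrupted, rather than that the packaged driver itself is broken. A hedged first attempt, assuming the corruption is confined to those files and the archives can simply rewrite them:

        sudo apt-get purge bcmwl-kernel-source               # remove the broken sources under /usr/src
        sudo apt-get install --reinstall dkms                 # rewrite /usr/sbin/dkms from the archive
        sudo apt-get install linux-headers-$(uname -r)        # DKMS needs headers for the running kernel
        sudo apt-get install bcmwl-kernel-source              # retry the module build

    If freshly reinstalled files come back corrupted again, it is worth booting memtest86+ from the GRUB menu and checking the disk before fighting the driver any further, since this kind of on-disk corruption usually points at failing hardware.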

    Read the article

  • MySQL Server 5.6 defaults changes

    - by user12626240
    We're improving the MySQL Server defaults, as announced by Tomas Ulin at MySQL Connect. Here's what we're changing (setting: old default -> new default, with notes):

        back_log: 50 -> 50 + (max_connections / 5), capped at 900
        binlog_checksum: off -> CRC32 (new variable in 5.6)
        binlog_row_event_max_size: 1k -> 8k
        flush_time: 1800 -> 0 on Windows (was already 0 on other platforms)
        host_cache_size: 128 -> 128 + 1 for each of the first 500 max_connections + 1 for every 20 max_connections over 500, capped at 2000 (new variable in 5.6)
        innodb_autoextend_increment: 8 -> 64 (now affects *.ibd files; 64 means 64 megabytes)
        innodb_buffer_pool_instances: 0 -> 8 (on 32-bit Windows only, if innodb_buffer_pool_size is greater than 1300M, the default is innodb_buffer_pool_size / 128M)
        innodb_concurrency_tickets: 500 -> 5000
        innodb_file_per_table: off -> on
        innodb_log_file_size: 5M -> 48M (InnoDB will always change the file size to match the my.cnf value; also see innodb_log_compressed_pages and binlog_row_image)
        innodb_old_blocks_time: 0 -> 1000 (1 second)
        innodb_open_files: 300 -> 300, or, if innodb_file_per_table is ON, the higher of table_open_cache and 300
        innodb_purge_batch_size: 20 -> 300
        innodb_purge_threads: 0 -> 1
        innodb_stats_on_metadata: on -> off
        join_buffer_size: 128k -> 256k
        max_allowed_packet: 1M -> 4M
        max_connect_errors: 10 -> 100
        open_files_limit: 0 -> 5000 (see note 1)
        query_cache_size: 0 -> 1M
        query_cache_type: on/1 -> off/0
        sort_buffer_size: 2M -> 256k
        sql_mode: none -> NO_ENGINE_SUBSTITUTION (see a later post about the default my.cnf for STRICT_TRANS_TABLES)
        sync_master_info: 0 -> 10000 (recommend: master_info_repository=table)
        sync_relay_log: 0 -> 10000
        sync_relay_log_info: 0 -> 10000 (recommend: relay_log_info_repository=table; also see Replication Relay and Status Logs)
        table_definition_cache: 400 -> 400 + table_open_cache / 2, capped at 2000
        table_open_cache: 400 -> 2000 (also see table_open_cache_instances)
        thread_cache_size: 0 -> 8 + max_connections/100, capped at 100

    Note 1: In 5.5 there was already a rule to make open_files_limit 10 + max_connections + table_cache_size * 2 if that was higher than the user-specified value. 5.6 now uses the higher of that and (5000 or whatever you specify).

    We are also adding a new default my.cnf file and guided instructions on the key settings to adjust. More on this in a later post. We're also providing a page with suggestions for settings to improve backwards compatibility. The old example files like my-huge.cnf are obsolete.

    Some of the improvements are present from 5.6.6 and the rest are coming. These are ideas, and until they are in an official GA release, they are subject to change.

    As part of this work I reviewed every old server setting, plus many hundreds of emails of feedback and testing results from inside and outside Oracle's MySQL Support team, and the many excellent blog entries and comments from others over the years, including from many MySQL gurus out there, like Baron, Sheeri, Ronald, Schlomi, Giuseppe and Mark Callaghan.

    With these changes we're trying to make it easier to set up the server by adjusting only a few settings that will cause others to be set. This happens only at server startup and only applies to variables where you haven't set a value. You'll see a similar approach used for the Performance Schema. The gurus don't need this, but for many newcomers the defaults will be very useful.

    Possibly the most unusual change is the way we vary the setting for innodb_buffer_pool_instances on 32-bit Windows. This is because we've found that DLLs with specified load addresses often fragment the limited four-gigabyte 32-bit address space and make it impossible to allocate more than about 1300 megabytes of contiguous address space for the InnoDB buffer pool. The smaller requests for many pools are more likely to succeed.

    If you change the value of innodb_log_file_size in my.cnf you will see a message like this in the error log file at the next restart, instead of the old error message:

        [Warning] InnoDB: Resizing redo log from 2*64 to 5*128 pages, LSN=5735153

    One of the biggest challenges for the defaults is the millions of installations on a huge range of systems, from point-of-sale terminals and routers through shared hosting or end-user systems and on to major servers with lots of CPU cores, hundreds of gigabytes of RAM and terabytes of fast disk space. Our past defaults were aimed at the smaller systems; these changes shift that towards larger shared hosting or shared end-user systems, still with a bias towards the smaller end. There is a bias in favour of OLTP workloads, so reporting systems may need more changes. Where there is a conflict between the best settings for benchmarks and normal use, we've favoured production, not benchmarks.

    We're very interested in your feedback, comments and suggestions.
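
    For anyone who wants to see how far an existing 5.5-style configuration sits from these new defaults, a quick, hedged check from the shell (assuming a local server and a client login that can read global variables; the variable list below is just a sample drawn from the table above):

        # Compare a few of the changed settings on a running server with the new 5.6 defaults.
        mysql -u root -p -e "SHOW GLOBAL VARIABLES WHERE Variable_name IN
            ('innodb_file_per_table','innodb_log_file_size','query_cache_type',
             'sort_buffer_size','max_allowed_packet','table_open_cache','sql_mode');"

        # What the server binary would use with no my.cnf at all (prints the compiled-in defaults).
        mysqld --no-defaults --verbose --help | grep -E 'innodb-file-per-table|query-cache-type|table-open-cache'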

    Read the article

  • Gnome Install Error (1)

    - by Guy1984
    I'm trying to install Gnome on my Ubuntu 12.04 Precise Pangolin and getting the following errors:

        root@***:~# sudo apt-get install gnome-core gnome-session-fallback
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        gnome-core is already the newest version.
        gnome-session-fallback is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 4 not upgraded.
        5 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Setting up bluez (4.98-2ubuntu7) ...
        start: Job failed to start
        invoke-rc.d: initscript bluetooth, action "start" failed.
        dpkg: error processing bluez (--configure):
         subprocess installed post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of gnome-bluetooth:
         gnome-bluetooth depends on bluez (>= 4.36); however:
          Package bluez is not configured yet.
        dpkg: error processing gnome-bluetooth (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of gnome-shell:
         gnome-shell depends on gnome-bluetooth (>= 3.0.0); however:
          Package gnome-bluetooth is not configured yet.
        dpkg: error processing gnome-shell (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of gnome-user-share:
         gnome-user-share depends on gnome-bluetooth; however:
          Package gnome-bluetooth is not configured yet.
        dpkg: error processing gnome-user-share (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of gnome-core:
         gnome-core depends on gnome-bluetooth (>= 3.0); however:
          Package gnome-bluetooth is not configured yet.
         gnome-core depends on gnome-shell (>= 3.0); however:
          Package gnome-shell is not configured yet.
         gnome-core depends on gnome-user-share (>= 3.0); however:
          Package gnome-user-share is not configured yet.
        dpkg: error processing gnome-core (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        No apport report written because the error message indicates its a followup error from a previous failure.
        No apport report written because MaxReports is reached already
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
         bluez
         gnome-bluetooth
         gnome-shell
         gnome-user-share
         gnome-core
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Syslog:

        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Bluetooth daemon 4.98
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Starting SDP server
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: opening L2CAP socket: Address family not supported by protocol
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Server initialization failed
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Failed to init alert plugin
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Failed to init time plugin
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Failed to init proximity plugin
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Failed to open control socket: Address family not supported by protocol (97)
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Can't init bnep module
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Failed to init network plugin
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Unable to start SCO server socket
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Failed to init audio plugin
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Failed to init gatt_example plugin
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: Can't open HCI socket: Address family not supported by protocol (97)
        Oct 5 16:04:17 ks34900 bluetoothd[5176]: adapter_ops_setup failed
        Oct 5 16:04:17 ks34900 kernel: init: bluetooth main process (5176) terminated with status 1
        Oct 5 16:04:17 ks34900 kernel: init: bluetooth main process ended, respawning
        Oct 5 16:04:17 ks34900 bluez: Stopping uarts
        Oct 5 16:04:17 ks34900 bluez: Stopping rfcomm

    Any thoughts?
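
    The syslog lines show bluetoothd dying with "Address family not supported by protocol", which is typical when the running kernel has no Bluetooth support at all (the ks34900 hostname looks like a hosted server), and that single failing postinst is what drags gnome-bluetooth, gnome-shell and gnome-core down with it. A hedged workaround, assuming Bluetooth genuinely isn't needed on this machine, is to stop services from being started during package configuration via the standard policy-rc.d mechanism so the bluez postinst can finish:

        # Temporarily tell invoke-rc.d not to start any services during package configuration.
        sudo sh -c 'printf "#!/bin/sh\nexit 101\n" > /usr/sbin/policy-rc.d && chmod +x /usr/sbin/policy-rc.d'

        # Finish configuring the half-installed packages; bluez's postinst no longer tries to
        # start bluetoothd, so gnome-bluetooth, gnome-shell, gnome-user-share and gnome-core can follow.
        sudo dpkg --configure -a
        sudo apt-get install -f

        # Remove the override again so services start normally from now on.
        sudo rm /usr/sbin/policy-rc.d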

    Read the article

  • Webcast On-Demand: Building Java EE Apps That Scale

    - by jeckels
    With some awesome work by one of our architects, Randy Stafford, we recently completed a webcast on scaling Java EE apps efficiently. Did you miss it? No problem. We have a replay available on-demand for you. Just hit the '+' sign drop-down for access.

    Topics include:

        - Domain object caching
        - Service response caching
        - Session state caching
        - JSR-107
        - HotCache, and more!

    Further, we had several interesting questions asked by our audience, and we thought we'd share a sampling of those here - just in case you had the same queries yourself. Enjoy!

    What is the largest Coherence deployment out there?
    We have seen deployments with over 500 JVMs in the Coherence cluster, and deployments with over 1000 JVMs using the Coherence jar file, in one system. On the management side there is an ecosystem of monitoring tools from Oracle and third parties with dashboards graphing values from Coherence's JMX instrumentation. For lifecycle management we have seen a lot of custom scripting over the years, but we've also integrated closely with WebLogic to leverage its management ecosystem for deploying Coherence-based applications and managing process life cycles. That integration introduces a new Java EE archive type, the Grid Archive or GAR, which embeds in an EAR and can be seen by a WAR in WebLogic. That integration also doesn't require any extra WebLogic licensing if Coherence is licensed.

    How is Coherence different from a NoSQL database like MongoDB?
    Coherence can be considered a NoSQL technology. It pre-dates the NoSQL movement, having been first released in 2001, whereas the term "NoSQL" was coined in 2009. Coherence has primarily a key-value data model but can also be used for document data models. Coherence manages data in memory currently, though disk persistence is in a future release currently in beta testing. Where the data is managed yields a few differences from the most well-known NoSQL products: access latency is faster with Coherence, though well-known NoSQL databases can manage more data. Coherence also has features that well-known NoSQL databases lack, such as grid computing, eventing, and data source integration. Finally, Coherence has had 15 years of maturation and hardening from usage in mission-critical systems across a variety of industries, particularly financial services.

    Can I use Coherence for local caching?
    Yes, and you get additional features beyond just a java.util.Map: expiration capabilities, size-limitation capabilities, eventing capabilities, and so on.

    Are there APIs available for GoldenGate HotCache?
    It's mostly a black box: you configure it, and it just puts objects into your caches. However, you can treat it as a glass box and use Coherence event interceptors to enhance its behavior - and there are use cases for that.

    Are Coherence caches updated transactionally?
    Coherence provides several mechanisms for concurrency control. If a project insists on full-blown JTA / XA distributed transactions, Coherence caches can participate as resources - but nobody does that, because it's a performance and scalability anti-pattern. At finer granularity, Coherence guarantees strict ordering of all operations (reads and writes) against a single cache key if the operations are done using Coherence's "EntryProcessor" feature. And Coherence has a unique feature called "partition-level transactions" which guarantees atomic writes of multiple cache entries (even in different caches) without requiring JTA / XA distributed transaction semantics.

    Read the article

  • Mounting NAS share: Bad Address

    - by Korben
    I've run into a problem that I can't solve; I hope you can help me with it. I have a QNAP TS-459U storage unit, with its own Linux and a shared folder 'massive1', which I need to mount on my Debian server. They are connected by a regular patch cord. The Debian server has two network interfaces - eth0 and eth1. eth0 is for the Internet, eth1 is for the QNAP. So I'm running this:

        mount -t cifs //169.254.100.100/massive1/ /mnt/storage -o user=admin

    where 169.254.100.100 is the IP of the QNAP's interface. The result I get (after entering the password):

        mount error(14): Bad address
        Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

    I've tried mount.cifs, smbmount, with and without the trailing '/' on the network share, and many other variations of that command, and it's always:

        mount error(14): Bad address

    The funny thing is that when I was in the data center, I connected my netbook to the QNAP the same way (running Fedora 16), and it connected without any problems - I could read and write files on the QNAP's NAS share! So I'm really stuck with Debian. I can't work out what differs from Fedora to cause this error. Yes, I've used Google and couldn't find any useful info. Ping to the QNAP's IP works, I can log into the QNAP's Linux over ssh, and telnet to port 139 works. This is the network interface configuration I use on Debian:

        IP: 169.254.100.1
        Netmask: 255.255.0.0

    The only difference between the Fedora and Debian setups is that on Fedora I added a gateway - 169.254.100.129 - but ping to that IP doesn't work, so I think it's not necessary at all.

    P.S.

        ~# cat /etc/debian_version
        wheezy/sid
        ~# uname -a
        Linux host 2.6.32-5-openvz-amd64 #1 SMP Mon Mar 7 22:25:57 UTC 2011 x86_64 GNU/Linux
        ~# smbtree
        WORKGROUP
          \\HOST                     host server
            \\HOST\IPC$              IPC Service (host server)
            \\HOST\print$            Printer Drivers
        NAS
          \\MASSIVE1                 NAS Server
            \\MASSIVE1\IPC$          IPC Service (NAS Server)
            \\MASSIVE1\massive1
            \\MASSIVE1\Network Recycle Bin 1   [RAID5 Disk Volume: Drive 1 2 3 4]
            \\MASSIVE1\Public        System default share
            \\MASSIVE1\Usb           System default share
            \\MASSIVE1\Web           System default share
            \\MASSIVE1\Recordings    System default share
            \\MASSIVE1\Download      System default share
            \\MASSIVE1\Multimedia    System default share

    Please help me solve this strange issue. Thanks in advance.
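
    mount error(14) from cifs usually means the kernel CIFS module rejected something in the request rather than a network failure (the ping, ssh and smbtree results all look healthy). Two common culprits on a wheezy/sid userland running on an older 2.6.32 OpenVZ kernel are a missing cifs-utils helper and an option mismatch between the newer mount.cifs and the old kernel module. A hedged set of things to try, assuming the share name and credentials from the question:

        # Make sure the userspace helper is actually installed; mount -t cifs behaves
        # differently without it on some setups.
        sudo apt-get install cifs-utils

        # Retry without the trailing slash and with explicit, older-style options.
        sudo mount -t cifs //169.254.100.100/massive1 /mnt/storage \
            -o user=admin,sec=ntlm,nounix,noserverino

        # If it still fails, check dmesg for the CIFS VFS error reported right after the attempt.
        dmesg | tail -n 20

    On an OpenVZ guest it is also worth confirming that the host actually exposes the cifs kernel module to the container; without it, no combination of mount options will help.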

    Read the article

< Previous Page | 345 346 347 348 349 350 351 352 353 354 355 356  | Next Page >