Search Results

Search found 10463 results on 419 pages for 'required'.

Page 284/419 | < Previous Page | 280 281 282 283 284 285 286 287 288 289 290 291  | Next Page >

  • Triple boot Windows 7, Windows 8, and Mountain Lion on a MacBook Pro

    - by Nathan
    OK, so I have a bit of a unique situation here I could use some help on. I've modded my summer 2011 MacBook Pro to have two hard drives by replacing the optical drive. OS X Mountain Lion is installed on a single partition of a 120GB SSD. The second drive is 750GB, partitioned as 550GB, 150GB, and ~50GB. I've set the 550GB partition to act as my OS X home folder, but I'd like to install Windows 7 and Windows 8 on the remaining partitions.

    It took a while, but I eventually found a way to install Windows without a CD/DVD drive by following this guide: http://huguesval.com/blog/2012/02/installing-windows-7-on-a-mac-without-superdrive-with-virtualbox/ It worked flawlessly for creating both Windows 7 and Windows 8 images that I could clone onto FAT32 partitions. However, I have run into a problem when trying to triple boot. After I put Windows 8 onto the ~50GB partition and try to boot into Windows 7, I get an error that says something like:

        error: 0x0000000e
        The boot selection failed because the required device is inaccessible.

    If I re-clone the Windows 7 image onto the drive and select the option to replace the BCD file for the drive, Windows 7 will boot, but Windows 8 now gives me the exact same error. I realize this is a pretty extensive setup, but if anyone has some insight I'd love to hear it.
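
    The symptom (whichever image last wrote its BCD boots, and the other one doesn't) suggests the two installs are fighting over a single boot store. Below is a hedged sketch, not from the original post, of giving one BCD store an entry for each install; the {guid} placeholder and the D: drive letter for the Windows 8 partition are assumptions:

        rem Run from an elevated prompt inside the Windows install that currently boots.
        rem Copy the existing boot entry as a starting point for the second install.
        bcdedit /copy {current} /d "Windows 8"
        rem Point the new entry (use the GUID printed by the previous command) at the other partition.
        bcdedit /set {guid} device partition=D:
        bcdedit /set {guid} osdevice partition=D: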

    Read the article

  • Why is datacenter water cooling not widespread?

    - by MainMa
    From what I read and hear about datacenters, not many server rooms use water cooling, and none of the largest datacenters use it (correct me if I'm wrong). Also, it's relatively easy to buy ordinary PC components that use water cooling, while water-cooled rack servers are nearly nonexistent. On the other hand, using water could possibly (IMO):

    - Reduce the power consumption of large datacenters, especially if it is possible to build directly cooled facilities (i.e. the facility is located near a river or the sea).
    - Reduce noise, making it less painful for humans to work in datacenters.
    - Reduce the space needed for the servers:
      - At the server level, I imagine that in both rack and blade servers it's easier to route water cooling tubes than to leave space for air to pass through.
      - At the datacenter level, even if the aisles between servers still need to be kept for maintenance access, the empty space under the floor and at ceiling level used for air could be removed.

    So why are water cooling systems not widespread, either at the datacenter level or at the rack/blade server level? Is it because:

    - Water cooling is hardly redundant at the server level?
    - The direct cost of a water-cooled facility is too high compared to an ordinary datacenter?
    - Such a system is difficult to maintain (regularly cleaning a water cooling system that uses water from a river is of course much more complicated and expensive than just vacuuming the fans)?

    Read the article

  • Getting Perl DBD::mysql working on OS X 10.7?

    - by Bart B
    I can't seem to get Perl & MySQL to talk to each other on OS X 10.7 Lion. I did all the installs by the book: I used Oracle's PKG installer for the latest MySQL Community Server, and I installed DBI and DBD::mysql via CPAN. There were no problems at all during the install, but when I try to use DBD::mysql to connect to my local DB server I get the following error:

        install_driver(mysql) failed: Can't load '/Library/Perl/5.12/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle' for module DBD::mysql:
        dlopen(/Library/Perl/5.12/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle, 1): Library not loaded: /usr/local/mysql/lib/libmysqlclient.16.dylib
          Referenced from: /Library/Perl/5.12/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle
          Reason: image not found at /System/Library/Perl/5.12/darwin-thread-multi-2level/DynaLoader.pm line 204.
        at (eval 3) line 3
        Compilation failed in require at (eval 3) line 3.
        Perhaps a required shared library or dll isn't installed where expected

    After a lot of googling, all I could find were suggested hacks, so I gave this one a go: http://arkoftech.wordpress.com/2011/02/10/fixing-dbdmysql-for-mysql-5-5-89-under-macos-10-6-x/ I had to update some of the paths in the instructions, since on Lion it's Perl 5.12, not 5.10. After doing that I got a new error:

        dyld: lazy symbol binding failed: Symbol not found: _mysql_init
          Referenced from: /Library/Perl/5.12/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle
          Expected in: flat namespace
        dyld: Symbol not found: _mysql_init
          Referenced from: /Library/Perl/5.12/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle
          Expected in: flat namespace
        Trace/BPT trap: 5

    There must be a simple way to get MySQL & Perl working on OS X? HELP!
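
    A hedged diagnostic sketch, not from the original question: the first error means the dynamic linker can't find the MySQL client library at the path the bundle was linked against, so it is worth confirming what the bundle expects and, as a commonly suggested workaround, exposing the dylib at a standard location. The paths below are the ones quoted in the errors above, assuming the library really does live under /usr/local/mysql/lib:

        # Show which shared libraries the DBD::mysql bundle was linked against.
        otool -L /Library/Perl/5.12/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle

        # If the dylib exists but dyld still can't find it, a frequently suggested
        # workaround is to symlink it onto the default search path.
        sudo ln -s /usr/local/mysql/lib/libmysqlclient.16.dylib /usr/lib/libmysqlclient.16.dylib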

    Read the article

  • SQL Server 2008 R2 transactional replication over VPN

    - by enashnash
    I'm having difficulty setting up replication over a VPN. I have a SQL Server 2008 R2 Enterprise Edition database on a Windows Server 2008 R2 machine. SQL Server is running on a non-standard port. I have set it up to act as its own distributor and have configured a publisher on this server. It is set as an updatable transactional publication (yes, this is necessary). On this server, I have Routing and Remote Access enabled in order to be able to establish VPN connections. It is configured with a static IP address pool, of which the first in the range is always assigned to the server. I have assigned a test user a static address within this range (I don't know if this is necessary or not). All clients will be 2008 R2 versions, but could be SQL Express or standalone developer instances of the full product.

    I can establish a VPN connection from the client without problems and can see that the correct IP addresses are allocated. After connecting to the database to test that I could establish a connection, I realised that I needed to connect using the server name rather than an IP address (required for replication), which wouldn't work initially. I created an entry in the hosts file on the client for the server, using the NETBIOS name of the server, and now I can connect to the server from the client using the SERVER\INSTANCE, PORT syntax over the VPN. As it is the default instance on the server, I can also connect with the plain SERVER, PORT syntax. After all that, I still get the following dreaded error:

        SQL Server replication requires the actual server name to make a connection to the server.
        Connections through a server alias, IP address, or any other alternate name are not supported.
        Specify the actual server name, 'SERVER\INSTANCE'. (Replication.Utilities)

    What have I missed? How do I get this to work? TIA
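
    A hedged check, not part of the original question: replication compares the name the client connects with against the name the instance believes it has, so it can help to confirm what @@SERVERNAME returns and then connect using exactly that value. The sqlcmd connection details below are placeholders:

        rem Ask the instance for its own idea of its name and for the locally registered server entry.
        sqlcmd -S SERVER\INSTANCE,PORT -E -Q "SELECT @@SERVERNAME; SELECT name FROM sys.servers WHERE server_id = 0;"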

    Read the article

  • How do I install OpenStack on a single Ubuntu 12.04 node?

    - by Sam Edwards
    I'm having trouble installing OpenStack on Ubuntu 12.04, for various reasons:

    - The official Ubuntu website recommends Juju and MAAS. However, this is a single node I am trying to get OpenStack installed on, and MAAS requires "two or more nodes" according to the docs. Additionally, I don't have any experience with MAAS and Juju and would rather stick to technologies I am more familiar with, so that I can debug problems that arise.
    - I have tried StackGeek, but this fails because the node only has a single Ethernet port. The node does, however, have the second hard drive required for Nova storage.
    - I have tried DevStack, but cannot log into the dashboard. The login form appears fine, but as soon as I submit the page, my browser loads indefinitely.
    - I have tried installing straight from packages, but I get an Internal Server Error in the dashboard when trying to log in, with no helpful logs anywhere in sight to aid me in debugging the issue.

    Each of these attempts was on a fresh Ubuntu 12.04 LTS setup; I find it really strange that no matter what I try, I cannot get OpenStack installed. Is this even a stable/mature project? Why am I encountering so many bugs?

    Read the article

  • Problems installing Trac using apt-get on Ubuntu Jaunty

    - by Ben Waine
    Hi, I'm having some issues getting apt to install Trac correctly on my Ubuntu Jaunty box. Using the command 'apt-get install trac' I get the following output:

        root@myserver:~# apt-get install trac
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have requested
        an impossible situation or if you are using the unstable distribution that
        some required packages have not yet been created or been moved out of Incoming.
        Since you only requested a single operation it is extremely likely that the
        package is simply not installable and a bug report against that package should be filed.
        The following information may help to resolve the situation:
        The following packages have unmet dependencies:
          trac: Depends: python-setuptools (> 0.5) but it is not installable
                Depends: python-pysqlite2 (>= 2.3.2) but it is not going to be installed
                Depends: python-subversion but it is not installable
                Depends: libjs-jquery but it is not installable
                Recommends: python-pygments (= 0.6) but it is not installable or
                            enscript but it is not installable
                Recommends: python-tz but it is not installable
        E: Broken packages

    I have successfully used the command on my Karmic Koala desktop machine and am able to create new projects etc. I thought I might be able to solve the problem by installing all Python-related extensions, but this produced very similar output. I have the main, universe and multiverse repositories enabled. It's a remote machine and I have no access to the GUI. Hope someone can help; googling failed to turn up a solution! Thanks, Ben
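
    A hedged first step, not from the original post: when apt reports direct dependencies as "not installable", it usually means apt cannot see them in any enabled repository, so checking what apt knows about those packages and refreshing the indexes can narrow things down. These are standard apt commands; the package names are taken from the error above:

        # Refresh the package indexes in case they are stale or were never fetched.
        apt-get update
        # Show which repositories (if any) offer the missing dependencies.
        apt-cache policy python-setuptools python-subversion libjs-jquery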

    Read the article

  • How can I access a WebDAV folder as a UNC share?

    - by Amar
    First of all, I am just getting to know WebDAV, so I appreciate your patience. I have a virtual directory on IIS 6 (Windows 2003) that is based on a network share on a file server different from the web server: something like www.mysite.com/myreports, where myreports is based on \\myfileserver\reports. I have been asked by the network folks not to use the UNC path like that, for security reasons, and to explore using WebDAV instead. What I have been told is that WebDAV can give me a UNC path without having to open the ports required for a file-server UNC path. Then I can use the new UNC path to map my virtual directory, and my ASP.NET code will not require any change.

    Being new to WebDAV, I did some research on the web. I can now create a web folder on the file server (I put IIS on the file server as well) and can browse the content as http://fileserver/mywebDAVreports, which is based on the reports folder. But I don't know how to get a UNC path for this web folder so I can map my virtual directory on the web server. I appreciate any help on this. Regards, Amar
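
    A hedged sketch, not from the original question, of the UNC-style syntax the Windows WebDAV redirector understands; it assumes the WebClient service is running on the machine doing the mapping, and it reuses the hostname and folder from the question:

        rem Map a drive letter through the WebDAV redirector (WebClient service).
        net use Z: http://fileserver/mywebDAVreports
        rem The UNC-style form accepted by the redirector for the same location:
        net use Z: \\fileserver\mywebDAVreports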

    Read the article

  • Making the iPhone work with a stripped-down Windows XP

    - by Gabriel
    Hi, this is my first time posting here and I have a really specific question. I have an ASUS Eee PC 901 running Windows XP Home. I had everything working well, but then I decided to improve performance by moving Windows to the smaller but faster internal SSD. I used nLite to strip down Windows, following the instructions here: http://wiki.eeeuser.com/howto:nlitexp I now have a very lightweight installation of XP Home with SP3 and all the current updates, and almost everything is working really well. I have installed iTunes and I CAN sync with no problems. However, each time I plug in my iPhone 3GS (latest firmware), Windows tries and fails to install drivers. The Found New Hardware Wizard launches, but nothing I do will make it complete successfully, with the result that the iPhone does not show up in Windows as removable storage or as a camera. When I launch the Camera and Scanner Wizard, it shows only my webcam, not the iPhone. I have verified that I have the following files in place:

    - Windows\System32\ptpusb.dll (regsvr32 successful)
    - Windows\System32\ptpusd.dll (entry point not found, cannot be registered)
    - Windows\System32\usbaaplrc.dll (entry point not found, cannot be registered)
    - Windows\System32\drivers\usbaapl.sys
    - Windows\System32\drivers\usbscan.sys
    - Windows\System32\drivers\usbstor.sys

    Does anyone know if some other file is required, or if there's some other element preventing this from working? Edit (from a posted answer): I did select Cameras & Camcorders, and my webcam is working fine for video & still capture.

    Read the article

  • Resizing a LUKS encrypted volume

    - by mgorven
    I have a 500GiB ext4 filesystem on top of LUKS on top of an LVM LV. I want to resize the LV to 100GiB. I know how to resize ext4 on top of an LVM LV, but how do I deal with the LUKS volume?

        mgorven@moab:~% sudo lvdisplay /dev/moab/backup
          --- Logical volume ---
          LV Name                /dev/moab/backup
          VG Name                moab
          LV UUID                nQ3z1J-Pemd-uTEB-fazN-yEux-nOxP-QQair5
          LV Write Access        read/write
          LV Status              available
          # open                 1
          LV Size                500.00 GiB
          Current LE             128000
          Segments               1
          Allocation             inherit
          Read ahead sectors     auto
          - currently set to     2048
          Block device           252:3

        mgorven@moab:~% sudo cryptsetup status backup
        /dev/mapper/backup is active and is in use.
          type:    LUKS1
          cipher:  aes-cbc-essiv:sha256
          keysize: 256 bits
          device:  /dev/mapper/moab-backup
          offset:  3072 sectors
          size:    1048572928 sectors
          mode:    read/write

        mgorven@moab:~% sudo tune2fs -l /dev/mapper/backup
        tune2fs 1.42 (29-Nov-2011)
        Filesystem volume name:   backup
        Last mounted on:          /srv/backup
        Filesystem UUID:          63877e0e-0549-4c73-8535-b7a81eb363ed
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean with errors
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              32768000
        Block count:              131071616
        Reserved block count:     0
        Free blocks:              112894078
        Free inodes:              32044830
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Reserved GDT blocks:      992
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         8192
        Inode blocks per group:   512
        RAID stride:              128
        RAID stripe width:        128
        Flex block group size:    16
        Filesystem created:       Sun Mar 11 19:24:53 2012
        Last mount time:          Sat May 19 13:29:27 2012
        Last write time:          Fri Jun 1 11:07:22 2012
        Mount count:              0
        Maximum mount count:      100
        Last checked:             Fri Jun 1 11:03:50 2012
        Check interval:           31104000 (12 months)
        Next check after:         Mon May 27 11:03:50 2013
        Lifetime writes:          118 GB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        Default directory hash:   half_md4
        Directory Hash Seed:      383bcbc5-fde9-4720-b98e-2d6224713ecf
        Journal backup:           inode blocks
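
    A hedged sketch of one possible shrink sequence, not from the original question: shrink from the top of the stack downwards, then let each layer grow back to fit. It assumes the data currently on the filesystem fits comfortably inside 100GiB and that there is a backup first; the intermediate 90G figure is just a safety margin below the final LV size.

        sudo umount /srv/backup
        sudo e2fsck -f /dev/mapper/backup             # required before an ext4 shrink
        sudo resize2fs /dev/mapper/backup 90G         # shrink ext4 to below the target LV size
        sudo lvreduce -L 100G /dev/moab/backup        # shrink the LV underneath the LUKS volume
        sudo cryptsetup resize backup                 # refit the open dm-crypt mapping to the new LV size
        sudo resize2fs /dev/mapper/backup             # grow ext4 back to fill the mapping
        sudo e2fsck -f /dev/mapper/backup             # final consistency check before remounting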

    Read the article

  • Python easy_install confused on Mac OS X

    - by slf
    environment info:

        $ echo $PATH
        /opt/local/bin:/opt/local/sbin:/sw/bin:/sw/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/usr/X11R6/bin:/opt/local/bin:/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin:~/.utility_scripts
        $ which easy_install
        /usr/bin/easy_install

    specifically, let's try the simplejson module (I know it's the same thing as import json in 2.6, but that isn't the point)

        $ sudo easy_install simplejson
        Searching for simplejson
        Reading http://pypi.python.org/simple/simplejson/
        Reading http://undefined.org/python/#simplejson
        Best match: simplejson 2.1.0
        Downloading http://pypi.python.org/packages/source/s/simplejson/simplejson-2.1.0.tar.gz#md5=3ea565fd1216462162c6929b264cf365
        Processing simplejson-2.1.0.tar.gz
        Running simplejson-2.1.0/setup.py -q bdist_egg --dist-dir /tmp/easy_install-Ojv_yS/simplejson-2.1.0/egg-dist-tmp-AypFWa
        The required version of setuptools (>=0.6c11) is not available, and can't be installed
        while this script is running. Please install a more recent version first, using
        'easy_install -U setuptools'.
        (Currently using setuptools 0.6c9 (/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python))
        error: Setup script exited with 2

    ok, so I'll update setuptools...

        $ sudo easy_install -U setuptools
        Searching for setuptools
        Reading http://pypi.python.org/simple/setuptools/
        Best match: setuptools 0.6c11
        Processing setuptools-0.6c11-py2.6.egg
        setuptools 0.6c11 is already the active version in easy-install.pth
        Installing easy_install script to /usr/local/bin
        Installing easy_install-2.6 script to /usr/local/bin
        Using /Library/Python/2.6/site-packages/setuptools-0.6c11-py2.6.egg
        Processing dependencies for setuptools
        Finished processing dependencies for setuptools

    I'm not going to speculate, but this could have been caused by any number of environment changes like the Leopard - Snow Leopard upgrade, MacPorts or Fink updates, or multiple Google App Engine updates.
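
    A hedged observation, not from the original post: in the $PATH above, /usr/bin comes before /usr/local/bin, and the update installed its new easy_install script into /usr/local/bin, so the shell may still be picking up Apple's older copy with setuptools 0.6c9. A quick way to check, and to invoke the updated script explicitly:

        # List every easy_install on the PATH, in lookup order.
        which -a easy_install
        # Clear the shell's command cache in case it remembers the old location.
        hash -r
        # Run the freshly installed copy explicitly to see whether it picks up setuptools 0.6c11.
        sudo /usr/local/bin/easy_install simplejson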

    Read the article

  • To clone or to automate a system installation?

    - by Shtééf
    Let's say you're setting up a cluster of servers performing the same task, or just a bunch of different servers on which you expect to use a common base configuration. Would it be better practice to create a base image and clone it, or to automate the installation and configuration? I occasionally end up in this argument with my boss, in situations where we're time-pressed. When he sees me struggle with perfecting the automation, his suggestion is often to clone the entire disk to the other machines. But my instinct has always been to avoid cloning. This is mostly from an Ubuntu perspective, but the question is fairly general. My reasons for avoiding cloning are:

    - On a typical install, even a fresh one, there are already several unique identifiers in place: filesystem UUIDs, SSH host keys, among others. These would have to be regenerated.
    - The network needs to be reconfigured for each clone. This would need to be done offline, of course, or the settings would conflict with other machines on the network.

    On the other hand, some of the advantages of cloning are quite clear as well:

    - (Initially?) less effort required than automating the configuration.
    - Tools exist to quickly address some of the above disadvantages.

    (I can see right through my own bias there.)
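
    For what the first bullet involves in practice, here is a hedged sketch (my assumption, not part of the original question) of regenerating the per-machine identifiers on an Ubuntu clone; the device name is a placeholder, and the UUID change is best done with the filesystem unmounted, e.g. from a live environment:

        # Regenerate SSH host keys so clones don't share an identity.
        sudo rm -f /etc/ssh/ssh_host_*key*
        sudo dpkg-reconfigure openssh-server
        # Give the filesystem a fresh UUID (then update /etc/fstab and the bootloader config to match).
        sudo tune2fs -U random /dev/sda1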

    Read the article

  • Error while detecting start profile for instance with ID _u

    - by Techboy
    I am upgrading an SAP Java instance and am getting the error shown in the log below. There are no backup profile files in the profile directory, and the string _u does not exist in the default, start or instance profile for instance SCS01. Can you please suggest what the issue might be?

        Mar 13, 2010 12:09:01 PM [Info]: com.sap.sdt.ucp.phases.AbstractPhaseType.initializexAbstractPhaseType.javax758x [Thread[main,5,main]]: Phase PREPARE/INIT/DETERMINE_PROFILES has been started.
        Mar 13, 2010 12:09:01 PM [Info]: com.sap.sdt.ucp.phases.AbstractPhaseType.initializexAbstractPhaseType.javax759x [Thread[main,5,main]]: Phase type is com.sap.sdt.j2ee.phases.PhaseTypeDetermineProfiles.
        Mar 13, 2010 12:09:01 PM [Info]: ...ap.sdt.j2ee.phases.PhaseTypeDetermineProfiles.checkVariablesxPhaseTypeDetermineProfiles.javax284x [Thread[main,5,main]]: All 4 required variables exist in variable handler.
        Mar 13, 2010 12:09:01 PM [Info]: ....j2ee.tools.sysinfo.AbstractInfoController.updateProfileVariablexAbstractInfoController.javax302x [Thread[main,5,main]]: Parameter /J2EE/StandardSystem/DefaultProfilePath has been detected. Parameter value is \server1\SAPMNT\PI7\SYS\profile\DEFAULT.PFL.
        Mar 13, 2010 12:09:01 PM [Info]: com.sap.sdt.j2ee.tools.sysinfo.ProfileDetector.detectInstancexProfileDetector.javax340x [Thread[main,5,main]]: Instance SCS01 with profile \server1\SAPMNT\PI7\SYS\profile\PI7_SCS01_server1, start profile \server1\SAPMNT\PI7\SYS\profile\START_SCS01_server1, and host server1 has been detected.
        Mar 13, 2010 12:09:01 PM [Error]: com.sap.sdt.ucp.phases.AbstractPhaseType.doExecutexAbstractPhaseType.javax862x [Thread[main,5,main]]: Exception has occurred during the execution of the phase.
        Mar 13, 2010 12:09:01 PM [Error]: com.sap.sdt.j2ee.tools.sysinfo.ProfileDetector.detectInstancexProfileDetector.javax297x [Thread[main,5,main]]: Error while detecting start profile for instance with ID _u.
        Mar 13, 2010 12:09:01 PM [Info]: com.sap.sdt.ucp.phases.AbstractPhaseType.cleanupxAbstractPhaseType.javax905x [Thread[main,5,main]]: Phase PREPARE/INIT/DETERMINE_PROFILES has been completed.
        Mar 13, 2010 12:09:01 PM [Info]: com.sap.sdt.ucp.phases.AbstractPhaseType.cleanupxAbstractPhaseType.javax906x [Thread[main,5,main]]: Start time: 2010/03/13 12:09:01.
        Mar 13, 2010 12:09:01 PM [Info]: com.sap.sdt.ucp.phases.AbstractPhaseType.cleanupxAbstractPhaseType.javax908x [Thread[main,5,main]]: End time: 2010/03/13 12:09:01.
        Mar 13, 2010 12:09:01 PM [Info]: com.sap.sdt.ucp.phases.AbstractPhaseType.cleanupxAbstractPhaseType.javax909x [Thread[main,5,main]]: Duration: 0:00:00.078.
        Mar 13, 2010 12:09:01 PM [Info]: com.sap.sdt.ucp.phases.AbstractPhaseType.cleanupxAbstractPhaseType.javax910x [Thread[main,5,main]]: Phase status is error.

    Read the article

  • Not Able To Connect to Windows Server 2008 R2 using FileZilla Externally

    - by obautista
    I configured the FTP service/role on my Windows Server 2008 R2 machine. I am able to connect from the inside, but not from the outside. On the inside I tested using the command prompt and IE FTP. On the outside, I am testing with FileZilla and IE FTP. From the outside, IE FTP prompts me to enter my username/password, but nothing happens; the page eventually times out and I get "Internet Explorer cannot display the page". Using FileZilla, I get the messages below. Note that FileZilla resolves the domain name and authenticates. I did not configure FTP Firewall Support on the FTP site; I am not sure if I need to do this. I set up basic authentication, non-SSL, not allowing anonymous. I tested with Windows Firewall turned off and on (I added a Windows Firewall rule for port 21). On my network firewall (Cisco), I added a rule to forward port 21 traffic to the FTP server.

        Status:   Resolving address of ftp.technologyblends.com
        Status:   Connecting to 75.149.66.201:21...
        Status:   Connection established, waiting for welcome message...
        Response: 220 Microsoft FTP Service
        Command:  USER *
        Response: 331 Password required for .
        Command:  PASS ***
        Response: 230 User logged in.
        Command:  SYST
        Response: 215 Windows_NT
        Command:  FEAT
        Response: 211-Extended features supported:
        Response:  LANG EN*
        Response:  UTF8
        Response:  AUTH TLS;TLS-C;SSL;TLS-P;
        Response:  PBSZ
        Response:  PROT C;P;
        Response:  CCC
        Response:  HOST
        Response:  SIZE
        Response:  MDTM
        Response:  REST STREAM
        Response: 211 END
        Command:  OPTS UTF8 ON
        Response: 200 OPTS UTF8 command successful - UTF8 encoding now ON.
        Status:   Connected
        Status:   Retrieving directory listing...
        Command:  PWD
        Response: 257 "/" is current directory.
        Command:  TYPE I
        Response: 200 Type set to I.
        Command:  PASV
        Error:    Connection timed out
        Error:    Failed to retrieve directory listing
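
    A hedged reading of the trace, not from the original post: the control connection on port 21 works, and the failure happens right after PASV, which is the classic sign that the passive-mode data ports are not reachable from outside. A possible sketch, assuming a placeholder data-port range of 50000-50100 that would also need to be configured under FTP Firewall Support in IIS (along with the external IP) and forwarded on the Cisco firewall:

        rem Allow the passive data-port range through Windows Firewall (range is a placeholder).
        netsh advfirewall firewall add rule name="FTP passive data ports" dir=in action=allow protocol=TCP localport=50000-50100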

    Read the article

  • Where should CentOS users get /usr/share/virtio-win/drivers for virt-v2v?

    - by Philip Durbin
    I need to migrate a number of virtual machines from VMware ESX to CentOS 6 KVM hypervisors. Ultimately, I wrote an RPM spec file that solved my problem at https://github.com/fasrc/virtio-win/blob/master/virtio-win.spec but I'm not sure if there's another RPM in base CentOS or EPEL (something standard) I should be using instead. Originally, I was getting this "No root device found in this operating system image" error when attempting to migrate a Windows 2008 VM:

        [root@kvm01b ~]# virt-v2v -ic 'esx://my-vmware-hypervisor.example.com/' \
            -os transferimages --network default my-vm
        virt-v2v: No root device found in this operating system image.

    I solved this with a simple yum install libguestfs-winsupport, since the docs say: "If you attempt to convert a virtual machine using NTFS without the libguestfs-winsupport package installed, the conversion will fail." Next I got an error about missing drivers for Windows 2008:

        [root@kvm01b ~]# virt-v2v -ic 'esx://my-vmware-hypervisor.example.com/' \
            -os transferimages --network default my-vm
        my-vm_my-vm: 100% [====================================]D
        virt-v2v: Installation failed because the following files referenced in the
        configuration file are required, but missing:
        /usr/share/virtio-win/drivers/amd64/Win2008

    I resolved this by grabbing an ISO from Fedora at http://alt.fedoraproject.org/pub/alt/virtio-win/latest/ as recommended by http://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers and building an RPM from it with this spec file: https://github.com/fasrc/virtio-win/blob/master/virtio-win.spec Now virt-v2v exits without error:

        [root@kvm01b ~]# virt-v2v -ic 'esx://my-vmware-hypervisor.example.com/' \
            -os transferimages --network default my-vm
        my-vm_my-vm: 100% [====================================]D
        virt-v2v: my-vm configured with virtio drivers.
        [root@kvm01b ~]#

    Now, my question is: rather than the virtio-win RPM from the spec file I wrote, is there some other, more standard RPM in base CentOS or EPEL that will resolve the error above? Here's a bit more detail about my setup:

        [root@kvm01b ~]# cat /etc/redhat-release
        CentOS release 6.2 (Final)
        [root@kvm01b ~]# rpm -q virt-v2v
        virt-v2v-0.8.3-5.el6.x86_64

    See also Bug 605334 – VirtIO driver for windows does not show specific OS: Windows 7, Windows 2003

    Read the article

  • Automatically check for Security Updates on CentOS or Scientific Linux?

    - by Stefan Lasiewski
    We have machines running RedHat-based distros such as CentOS or Scientific Linux. We want the systems to automatically notify us if there are any known vulnerabilities in the installed packages. FreeBSD does this with the ports-mgmt/portaudit port. RedHat provides yum-plugin-security, which can check for vulnerabilities by their Bugzilla ID, CVE ID or advisory ID. In addition, Fedora recently started to support yum-plugin-security; I believe this was added in Fedora 16. Scientific Linux 6 did not support yum-plugin-security as of late 2011. It does ship with /etc/cron.daily/yum-autoupdate, which updates RPMs daily, but I don't think this handles security updates only. CentOS does not support yum-plugin-security. I monitor the CentOS and Scientific Linux mailing lists for updates, but this is tedious and I want something which can be automated. For those of us who maintain CentOS and SL systems, are there any tools which can:

    - Automatically (programmatically, via cron) inform us if there are known vulnerabilities in the currently installed RPMs.
    - Optionally, automatically install the minimum upgrade required to address a security vulnerability, which would probably be yum update-minimal --security on the command line?

    I have considered using yum-plugin-changelog to print out the changelog for each package and then parse the output for certain strings. Are there any tools which do this already?

    Read the article

  • Please insert a disk into SD/MMC - Vista problem [closed]

    - by Naunidh
    Possible Duplicate: Please insert a disk into SD/MMC - Vista problem

    Hi, I tried inserting my 2GB microSD card using the built-in card reader. On clicking the drive I get "Please insert a disk into SD/MMC". This problem is really frustrating. The card works fine in other computers, and so does the microSD-to-SD adapter. I have done the following to try to fix it:

    - Updated Vista and installed SP1.
    - Updated the TI drivers for FlashDrive.
    - Checked the Vaio site for updates (none required).
    - Added a new registry entry HKLM\SYSTEM\ControlSet001\Services\tifm21\Parameters\SDParam = 1, taking the hint from http://tinyurl.com/nk33tp

    I have restarted the PC multiple times. As soon as I put the card in, the SD/MMC device icon blips, so it seems the hardware is at least detecting something. The card reader was working fine a few days back; I guess some Windows update has broken something. Does anyone have any idea how to proceed? My laptop is a VGN-N365E.

    Read the article

  • Postfix installation error on Ubuntu

    - by kgpdeveloper
    How do I fix this error on Ubuntu 10.04?

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        postfix is already the newest version.
        The following packages were automatically installed and are no longer required:
          libaprutil1-dbd-sqlite3 libcap2 apache2.2-bin libapr1 libaprutil1-ldap libaprutil1 php5-common
        Use 'apt-get autoremove' to remove them.
        0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
        1 not fully installed or removed.
        After this operation, 0B of additional disk space will be used.
        Setting up postfix (2.7.0-1) ...
        Postfix configuration was not changed. If you need to make changes, edit
        /etc/postfix/main.cf (and others) as needed. To view Postfix configuration
        values, see postconf(1).
        After modifying main.cf, be sure to run '/etc/init.d/postfix reload'.
        Running newaliases
        newaliases: warning: valid_hostname: numeric hostname: 202002
        newaliases: fatal: file /etc/postfix/main.cf: parameter myhostname: bad parameter value: 202002
        dpkg: error processing postfix (--configure):
         subprocess installed post-installation script returned error exit status 75
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place
        Errors were encountered while processing:
         postfix
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Even if I reboot, the same error shows up. Thanks for the help.
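
    A hedged sketch of a fix, not from the original post: the log shows newaliases rejecting myhostname because the machine's hostname is purely numeric (202002), so giving Postfix a syntactically valid fully qualified name and letting the interrupted configuration finish should clear the dpkg error. mail.example.com is a placeholder:

        # Set a valid FQDN for Postfix (placeholder value).
        sudo postconf -e 'myhostname = mail.example.com'
        # Let the interrupted package configuration run to completion.
        sudo dpkg --configure -a
        sudo /etc/init.d/postfix restart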

    Read the article

  • OCZ Vertex 3 SSD boot failure

    - by Col Zero
    OK, so I just purchased and received a 120GB OCZ Vertex 3. I had Windows 7 as an .ISO file on my computer, so I formatted a USB stick and configured it to be read as a CD so that I could install Windows onto my SSD from it. I started my computer with the boot priority set to the USB stick and started installing Windows 7 onto the SSD. Out of nowhere (I wasn't watching), my computer restarted and brought me back to the beginning of the Windows 7 setup. So I turned my computer off and booted it from the SSD to see if Windows had been installed. On the first two attempts I got a disk boot failure. I then plugged my hard drive back in, started my computer, turned it off, plugged the SSD back in, and it booted up fine. I finalized Windows, got the internet set up, and Windows had updates that required a restart. After restarting I had another disk boot failure, and now I get a disk boot failure every time I try to start the computer from the SSD.

    Extra info: my SSD has never been detected in my BIOS unless my hard drive was unplugged (and even then the BIOS didn't always detect it). The SSD wasn't detected in the BIOS the first and only time it successfully booted. The SSD boots successfully at random (only once, unfortunately) and is detected in the BIOS at random. I've tried switching cables, and nothing has worked. I just want to get this thing running so I can see what it's like. I finally found a way to get the OS onto it, and now it won't even boot. Any help appreciated.

    Read the article

  • ProFTPD / PAM issues with a new CentOS/Virtualmin install

    - by iamthewit
    I just installed CentOS 5.4 on a Rackspace Cloud server and installed Virtualmin, which all seemed to go fine. The only problem I have is that I cannot access the virtual servers' directories via FTP. I get the following from FileZilla:

        Status:   Connecting to 1.1.1.1:21...
        Status:   Connection established, waiting for welcome message...
        Response: 220 FTP Server ready.
        Command:  USER username
        Response: 331 Password required for username.
        Command:  PASS ***************
        Response: 230 User username logged in.
        Status:   Connected
        Status:   Retrieving directory listing...
        Command:  PWD
        Response: 257 "/" is current directory.
        Command:  TYPE I
        Response: 200 Type set to I
        Command:  PASV
        Response: 227 Entering Passive Mode (1,1,1,1,216,214)
        Command:  LIST
        Error:    Connection timed out
        Error:    Failed to retrieve directory listing

    and I get this from /var/log/secure:

        Sep 22 19:40:42 stickeeserver proftpd: pam_unix(proftpd:session): session opened for user username by (uid=0)
        Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - USER nastypasty: Login successful.
        Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - Preparing to chroot to directory '/home/username'
        Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - mod_delay/0.5: delaying for 728 usecs
        Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - error setting IPV6_V6ONLY: Protocol not available

    Any help would be greatly appreciated. I'm not totally new to Linux, but it's not my strongest subject. I do like to know exactly why problems occur and how exactly to fix them, so the more detail the better! Cheers
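
    A hedged reading, not part of the original question: authentication and the PAM session succeed, and the failure only appears at LIST after PASV, which usually points at blocked passive data ports rather than at PAM. One possible sketch, with a placeholder port range that would need to match between proftpd.conf and the firewall:

        # In /etc/proftpd.conf (placeholder range), pin passive data connections to a known range:
        #   PassivePorts 49152 50000
        # Then allow that range through iptables and restart ProFTPD:
        sudo iptables -I INPUT -p tcp --dport 49152:50000 -j ACCEPT
        sudo /etc/init.d/proftpd restart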

    Read the article

  • Netcat (nc) traditional package for RHEL 6.x?

    - by HTTP500
    I'm trying to use the Percona Apache Monitoring [Cacti] template for memcached. They do indeed warn that you can't use the OpenBSD version of the package and provide a solution for Ubuntu/Debian users, i.e.:

        You need nc on the server. Some versions of nc accept different command-line options. You can
        change the options used by configuring the PHP script. If you don't want to do this for some
        reason, then you can install a version of nc that conforms to the expectations coded in the
        script's default configuration instead. On Debian/Ubuntu, netcat-openbsd does not work, so you
        need the netcat-traditional package, and you need to switch to /bin/nc.traditional...

    Since the RHEL 6.x version indeed comes from OpenBSD (confirmed by rpm -qi nc), how does one go about getting this installed on RHEL/CentOS? Is anyone else running these Percona templates on RHEL/CentOS? What did you do? alien the Debian package?

    Update 1: FWIW, I tried to use GNU netcat by compiling it from source, but it doesn't seem to have the exact options required by the Cacti template either (i.e. there is no analogue of -C or -q1, so it seems).

    Update 2: I alien[ed] the netcat-traditional_1.10-38_amd64.deb package to make a .tgz, and it does produce a binary "nc.traditional"; that version has the -q option but no -C. Cheers
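
    A small hedged check, not from the original post, for confirming which nc variant a given box has and whether it understands the options the template expects:

        # Which package owns the nc on the PATH?
        rpm -qf "$(which nc)"
        # Does this nc's usage text mention the -q and -C options the Cacti script wants?
        nc -h 2>&1 | grep -E -- '-q|-C'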

    Read the article

  • If spaces in filenames are possible, why do some of us still avoid using them?

    - by Chris W. Rea
    Somebody I know expressed irritation today regarding those of us who tend not to use spaces in our filenames, e.g. NamingThingsLikeThis.txt -- despite most modern operating systems supporting spaces in filenames. Non-technical people must look at filenames created by geeks and wonder where we learned English. So, what are the reasons that spaces in filenames are avoided or discouraged? The most obvious reason I could think of, and why I typically avoid them, is the extra quoting required on the command line when dealing with such files. Are there any other significant reasons, other than the practice being a vestigial preference?

    UPDATE: Thanks for all your answers! I'm surprised how popular this was. So, here's a summary. Six reasons why geeks prefer filenames without spaces in them:

    1. It's irritating to put quotes around them when referencing them on the command line (or elsewhere).
    2. Some older operating systems didn't support them, and us old dogs are used to that.
    3. Some tools still don't support spaces in filenames at all, or not very well. (But they should.)
    4. It's irritating to escape spaces where spaces must be escaped, such as in URLs.
    5. Certain unenlightened services (e.g. file hosting, webmail) remove or replace spaces anyway!
    6. Names without spaces can be shorter, which is sometimes desirable as path lengths are limited.
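
    A tiny illustration of reason 1 (my example, not from the original post), showing how an unquoted name with a space splits into two words in a POSIX shell:

        touch "My File.txt"
        for f in My File.txt;   do echo "$f"; done   # prints "My" and "File.txt" as two separate words
        for f in "My File.txt"; do echo "$f"; done   # prints the single filename "My File.txt"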

    Read the article

  • Verifying SMTP “MAIL FROM:” Matches “From:” Header in DATA

    - by dkovacevic
    Is there ever a legitimate reason for the SMTP “MAIL FROM:” field not to match the “From:” field in the DATA section of a message, besides mailing lists?

    From http://stackoverflow.com/questions/1750194/smtp-why-does-email-needs-envelope-and-what-does-the-envelope-mean: “But, to continue your snail mail metaphor, most professional letters will contain the sender's and recipient's addresses printed on the letter itself. Those addresses are not necessary for the postman, but are instead a courtesy to the recipient. So it's sensible that email would work the same way.” The problem with this line of logic lies in “courtesy to the recipient”: including the “From:” address in an email via SMTP is not a courtesy; it is required if the recipient is to be able to send a reply.

    From “How to limit the From header to match MAIL FROM in postfix?”: “But if you really want to ensure From: and MAIL FROM then you have to apply header_checks so that Return-Path: matches From:”. What are the implications of doing this? Mailing lists would obviously be a problem. Are there any other legitimate uses of differing “MAIL FROM:” and “From:” header information?

    Read the article

  • Favorite tricks with linux kernel boot parameters?

    - by ~drpaulbrewer
    Most Linux bootloaders let you edit the kernel boot command line before booting. There are often lots of parameters available -- Knoppix, for instance, has a list on their Knoppix Cheat Codes page -- but most are applicable only to compatibility and special situations. A few are hidden gems. Common uses of these codes are to boot to single-user mode, alter the screen mode or drivers, or specify an alternative root directory. Other more exotic uses are possible: some Linux distributions let you copy the boot CD into RAM; others (e.g. Ubuntu) let you use preseed files to clone installs when setting up multiple systems, which is useful when installing a lab full of computers without having to babysit each install. What other tricks have you found useful in system installs, repairs, backups, restores, establishing temporary servers, or other tasks?

    To add your favorite trick to the list: since much of the code for these options lives either in the initrd or in a service handler that detects the kernel parameters, please list (1) the kernel boot line parameter, (2) what it does, and (3) the Linux distribution and any packages required to activate the feature. Thanks.

    Read the article

  • Setting up AJP with JBoss 7

    - by purlogic
    I have two different versions of JBoss on a server, JBoss 6.0 Final and JBoss 7.0.2. I can run one or the other by switching a couple of symlinks and issuing a "service jboss start" command. I am by no means an expert in JBoss, but JBoss 6.0 appears to have AJP running on port 8009 out of the box, with no initial configuration required. With JBoss 7, however, I had to edit "standalone/configuration/standalone.xml" and add a few entries. In the <subsystem /> tag I added:

        <connector name="ajp" protocol="AJP/1.3" socket-binding="ajp" />

    In the <socket-binding-group /> tag I added:

        <socket-binding name="ajp" port="8009"/>

    Then in Apache's configuration file (httpd.conf) I added:

        <Proxy *>
            AddDefaultCharset Off
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyPass /app ajp://localhost:8009/app
        ProxyPassReverse /app ajp://localhost:8009/app

    AJP proxying works with 6 but not with 7. I assume it's because I haven't properly set up AJP in JBoss 7, and I'm not entirely sure how to do that. I have searched the documentation on their site without finding many specifics. Any help or insight into setting up AJP with JBoss 7 would be much appreciated!
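
    A couple of hedged checks, not from the original post, to narrow down which side is failing after restarting JBoss 7 with the edits above; the Apache log path is an assumption for a typical Linux install:

        # Is anything actually listening on the AJP port after the restart?
        netstat -tlnp | grep 8009
        # Watch Apache's error log for mod_proxy_ajp messages while requesting /app.
        tail -f /var/log/httpd/error_log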

    Read the article

  • RSA keys - virtual hosts

    - by Bosworth99
    Pardon my noobness, but I just got started with VPS (Linux) hosting, and setting up passwordless SSH for multiple users has proved to be kind of a pain. Currently I'm the single user of this Ubuntu 10.04 LTS VPS (linode.com). I was able to set up a single RSA key under my home/user/.ssh/authorized_keys location. Fine: PuTTY works as expected, and FileZilla (SFTP) connects as required. I've been working on a single site that this user owns, and that's not been a problem.

    Now I want to set up some other sites, and I've chosen Webmin with the Virtualmin plugin to make this work. I made another user (or rather, Virtualmin did), but I've been unable to get FileZilla to connect as this new user. Could anyone with experience here explain what the setup is supposed to look like? I.e., can I use a single RSA key pair for all accounts (if, for example, I give ownership of files to the original user)? Or is it standard practice to create a separate key pair for each user and establish a separate PuTTY/FileZilla login for each? I've spent enough time dinking around with this to be frustrated; the "Server rejected the provided key" error sucks after the fifth hour. I'm about to set up an FTP server and call it a day. Any thoughts would be most welcome.
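
    A hedged sketch of one common answer to the single-vs-multiple question (my assumption, not from the original post): the same public key can simply be authorized for each account, since authorized_keys is per-user. The username and host are placeholders:

        # From the machine that already holds the key pair: authorize the existing public key
        # for the new user (ssh-copy-id appends it to that user's ~/.ssh/authorized_keys).
        ssh-copy-id -i ~/.ssh/id_rsa.pub newuser@your-vps.example.com
        # Afterwards, check ownership and permissions, which sshd is strict about:
        ssh newuser@your-vps.example.com 'chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'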

    Read the article
