Search Results

Search found 7061 results on 283 pages for 'target'.


  • Installing VirtualBox on BackTrack 5

    - by m0skit0
    I'm getting this error when running VirtualBox's installation script:

        $ sudo ~/Downloads/VirtualBox-4.1.14-77440-Linux_x86.run
        Verifying archive integrity... All good.
        Uncompressing VirtualBox for Linux installation...........
        VirtualBox Version 4.1.14 r77440 (2012-04-12T16:20:44Z) installer
        Removing previous installation of VirtualBox 4.1.14 r77440 from /opt/VirtualBox
        Installing VirtualBox to /opt/VirtualBox
        tar: Record size = 8 blocks
        Python found: python, installing bindings...
        Building the VirtualBox kernel modules
        Error! Bad return status for module build on kernel: 3.2.6 (i686)
        Consult the make.log in the build directory /var/lib/dkms/vboxhost/4.1.14/build/ for more information.
        ERROR: binary package for vboxhost: 4.1.14 not found

    Here's the log:

        $ cat /var/lib/dkms/vboxhost/4.1.14/build/make.log
        DKMS make.log for vboxhost-4.1.14 for kernel 3.2.6 (i686)
        Sun May 13 14:32:52 CEST 2012
        make: Entering directory `/usr/src/linux-headers-3.2.6'
        /usr/src/linux-headers-3.2.6/arch/x86/Makefile:39: /usr/src/linux-headers-3.2.6/arch/x86/Makefile_32.cpu: No such file or directory
        make: *** No rule to make target `/usr/src/linux-headers-3.2.6/arch/x86/Makefile_32.cpu'. Stop.
        make: Leaving directory `/usr/src/linux-headers-3.2.6'

    Contents of the /usr/src/linux-headers-3.2.6/arch/x86/ directory:

        $ ls /usr/src/linux-headers-3.2.6/arch/x86/
        Kconfig        Makefile  ia32    lguest    mm        pci       tools  video
        Kconfig.cpu    boot      kernel  lib       net       platform  um     xen
        Kconfig.debug  crypto    kvm     math-emu  oprofile  power     vdso

    Makefile references to "cpu":

        $ cat /usr/src/linux-headers-3.2.6/arch/x86/Makefile | grep cpu
        include $(srctree)/arch/x86/Makefile_32.cpu
        # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)

    Before upgrading to 3.X I didn't have this problem; the script would install VB correctly. Any ideas on what might be causing this? Thanks in advance!
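
    A hedged workaround sketch (my own, not from the original post): the make.log points at a headers package that is missing arch/x86/Makefile_32.cpu. Assuming the file simply needs restoring from the matching mainline kernel source (the URL and the DKMS retry below are that assumption), something like:

        # assumption: the headers package is merely missing Makefile_32.cpu;
        # fetch it from a matching v3.2 kernel source tree
        cd /usr/src/linux-headers-3.2.6/arch/x86/
        sudo wget https://raw.githubusercontent.com/torvalds/linux/v3.2/arch/x86/Makefile_32.cpu
        # then retry the DKMS build the installer attempted
        sudo dkms build -m vboxhost -v 4.1.14
        sudo dkms install -m vboxhost -v 4.1.14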

    Read the article

  • GNOME 2 + Compiz equivalent?

    - by virtualeyes
    Running Fedora 14 and realize I need to either change distros or find an alternative to GNOME 3 in Fedora 17. Based on what I have read to date, XFCE and KDE are the go-to WMs if I want to avoid GNOME 3. I tried KDE 4 and wasn't impressed; I like the simplicity of GNOME 2 with Compiz and Emerald. Can't stay on Fedora 14 forever, however, so... where to turn? Basically I'm looking for these features in my desktop environment:

    - GNOME Do or equivalent.
    - Snap to grid/window tiling: a must-have; the ability to hot-key the focused window to a monitor grid region is a huge productivity win.
    - Zoom window to cursor: in a multi-monitor setup it's sometimes nice to, say, GNOME Do a terminal on one monitor and then hot-key the opened window to the other monitor just by zipping the mouse cursor anywhere on the target monitor (followed by, of course, the snap-to-grid hotkey, all without a single mouse click).
    - Polarization: at night a white background hurts the eyes, so I prefer to hot-key polarize to black.
    - Multi-monitor support.

    I'm partial to Fedora given that I've worked with CentOS for years and have little experience with any other Linux distro; however, if the difference between Fedora and Arch, Mint, etc. is fairly subtle, I'll make the leap. I just need a distro & desktop environment that allows me to be productive with keyboard hot keys and provides the above basic features. Any suggestions?

    Read the article

  • Can't get Mac Mini to turn on - no screen, no beep, only the fan, power light, and optical drive noise

    - by pibyers
    I have an Intel-based Mac mini, mid-2007 (Model A1176). This is the computer my kids use, so I don't use it regularly. The computer had been working fine until one day my kids told me that it no longer works. The computer will not boot up. When I turn it on, the fan turns, the white power light on the front turns on, and there is a sound that appears to be from the optical drive (rather than the hard drive). I don't get anything on the monitor, nor do I get any dings or other startup sounds from the computer. Here is what I've tried thus far, to no avail:

    1. Swapped out the monitors early on, since I figured that was my weak link - no change.
    2. Reset the PMU - no change.
    3. Tried to boot up from the System Disk - the mini loaded the DVD into the drive, but nothing else (I can't eject the disc so I can put it back).
    4. Started up the computer in target mode connected to another Mac - I tried this too, but I never got a chime, nor did the disk show up on the other Mac.

    I'm about out of ideas apart from scrapping the computer. Does anyone have any ideas that I can try? Again, nothing has been done to the computer in at least 6 months, when I upgraded the RAM. I'm also still on Leopard. Thanks.

    Read the article

  • Read Only Domain Controllers and DNS zone updates

    - by Mike M
    I have a Windows 2003 domain and just added a new DC that runs 2008 R2. I updated the schema accordingly at both forest and domain levels, and made sure to run /rodcprep at the time I did this. I have a branch office with a 2008 R2 file/print server that is a read-only domain controller (RODC). The one problem I have been having is with AD-integrated DNS record updates.

    In the data center, we had to make an IP address change on a particular server. All our other sites' DCs (2003) updated the record fine. The 2008 R2 DC in the data center also updated its record fine. However, the RODC in the branch office did not. So if I nslookup the target server on a 2003 DC, the IP address is correct. Same with the 2008 R2 DC in the data center. But an nslookup on the branch office RODC still pulls in the old IP address. Moreover, any new records we've created (e.g., we just added a new terminal server) do not get updated on the branch RODC either.

    Is there something simple I'm missing? How do I get the RODC to sync its AD-integrated DNS records with the rest of my world? Thank you in advance for your responses. Mike
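
    Not part of the original question, just a hedged first-pass check, assuming a hypothetical RODC name of BRANCH-RODC: force the RODC to pull from its partners and clear its DNS server cache, then see whether the records catch up:

        REM run from an elevated prompt; BRANCH-RODC is a hypothetical name
        repadmin /replsummary
        repadmin /syncall BRANCH-RODC /A /e
        REM clear the DNS server cache on the RODC
        dnscmd BRANCH-RODC /clearcache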

    Read the article

  • How to write re-usable puppet definitions?

    - by Oliver Probst
    I'd like to write a puppet manifest to install and configure an application on target servers. Parts of this manifest shall be re-usable, so I used define for the re-usable functionality. Doing so, I always have the problem that there are parts of the definition which are not re-usable. A simple example is a bunch of configuration files to be created. These files must be placed in the same directory, and that directory must be created only once.

    Example, nodes.pp:

        node 'myNode.in.a.domain' {
          mymodule::addconfig {'configfile1.xml':
            param => 'somevalue',
          }
          mymodule::addconfig {'configfile2.xml':
            param => 'someothervalue',
          }
        }

    mymodule.pp:

        define mymodule::addconfig ($param) {
          $config_dir = "/the/directory/"

          # ensure that directory exists:
          file { $config_dir:
            ensure => directory,
          }

          # create the configuration file:
          file { $name:
            path    => "${config_dir}/${name}",
            content => template('a_template.erb'),
            require => File[$config_dir],
          }
        }

    This example will fail, because now the resource file { $config_dir: is defined twice. As far as I understood, it is required to extract these parts into a class. Then it looks like this:

        node 'myNode.in.a.domain' {
          class { 'mymodule::createConfigurationDirectory': }

          mymodule::addconfig {'configfile1.xml':
            param   => 'somevalue',
            require => Class['mymodule::createConfigurationDirectory'],
          }
          mymodule::addconfig {'configfile2.xml':
            param   => 'someothervalue',
            require => Class['mymodule::createConfigurationDirectory'],
          }
        }

    But this makes my interface hard to use. Every user of my module has to know that there is an additionally required class. For this simple use case the extra class might be acceptable, but with growing module complexity (lots of definitions) I'm a bit afraid of confusing the module's users. So I'd like to know: is there a better way to handle these dependencies? Ideally, classes like createConfigurationDirectory are hidden from the user of the module's API. Or are there some other "Best Practices"/patterns for handling such dependencies?
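
    One hedged pattern (an assumption of mine, not something from the original post): because include is idempotent, the define can pull in the directory class itself, which keeps the dependency out of the caller's view. A minimal sketch reusing the question's own names:

        # directory management moved into a class the define includes itself
        class mymodule::config_dir {
          file { '/the/directory':
            ensure => directory,
          }
        }

        define mymodule::addconfig ($param) {
          include mymodule::config_dir    # safe to evaluate any number of times

          file { $name:
            path    => "/the/directory/${name}",
            content => template('a_template.erb'),
            require => Class['mymodule::config_dir'],
          }
        }

    With this, nodes.pp only ever declares mymodule::addconfig resources.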

    Read the article

  • What's with the accesses to $random_existing_file/cache/df.php?

    - by Bernd Jendrissek
    Occasionally I eyeball Apache's access_log and lately I've been noticing these accesses to URLs that I don't serve. They're correctly 404'ed, but I'd like to know just who and what is involved here. "Obviously" it's some sort of vulnerability probing; I'd like to know which. (Not that it affects me, but I like to know the score.) Here's an example: 69.89.31.206 - - [28/Nov/2012:17:36:34 +0200] "GET /cvfull.pdf/cache/df.php HTTP/1.1" 404 489 "-" "-" Oddly, all 26 attempts are to either /cache/df.php, or to /cvfull.pdf/cache/df.php - they come in pairs. A few weeks ago it was zx.php, now it's df.php - I'm assuming the target is the same. Perhaps I should be flattered that a script is thinking of hiring me. Seriously, my CV is one of only two PDF files on my site, so I can only guess that non-PDF URLs aren't interesting? I've tried Googling for "cache df php", but my Google-fu is weak at the best of times, so I can only find a few reports of other script attacks. What's the vulnerability being scanned for here?

    Read the article

  • Windows Server 2008 R2 install reboots unexpectedly during "Completing installation" phase

    - by knda
    I am attempting to install Windows Server 2008 R2 onto a Cisco UCS C201 M2 rack-mounted server but am having major difficulties, and am wondering if anyone has some insight or items they could recommend for me to look at to get this one resolved. Installation is being attempted via the Cisco remote console (using CIMC's virtual DVD-ROM). Following the first phase of Setup, where the installation files are copied to the target hard drive, a reboot occurs to load Setup from the HDD; mid-way through the "Completing installation" phase the system then reboots unexpectedly.

    System configuration:

    - Cisco UCS C201 M2 (2RU rack-mounted server)
    - 16GB RAM, 2x 73GB 15K SAS, 4x 300GB 10K SAS
    - Add-on cards - Intel quad-port GigE card (no fibre channel cards)
    - Storage - LSI MegaRAID SAS 9261-8i; onboard SATA is disabled (no SATA drives connected)
    - KVM - Belkin
    - No physical DVD-ROM... :(

    I have:

    - Run memtest86+, no RAM faults
    - Disabled/enabled SATA support (BIOS)
    - Attempted install from USB DVD-ROM, no effect
    - Attempted unattended install scripted via the Cisco Configuration Manager DVD provided
    - Removed the Belkin KVM in case that was causing drama
    - Discovered that the Cisco website is "awesome" for searching for PDFs/drivers *cough*, reverted back to Google
    - Downloaded the latest LSI drivers from LSI's site and used them during the Server 2008 install
    - Checked the Windows ISO against checksums from the MS site
    - Checked the Windows ISO by using it for an install in a VM

    Running out of ways to troubleshoot this, as I am not sure how to enable any sort of 'verbose' mode during the setup process. The next step I have planned is to remove the Intel NIC and try the installation again.

    Edit: Problem was the "Cisco INTEL QUAD PT GBE" (1000/PT)... will have to see if this card is faulty or if it's just drivers. Thanks for the help.

    Read the article

  • Cheapest iSCSI SAN for Windows 2008/SQL Server clustering?

    - by MichaelGG
    Are there any production-quality iSCSI SANs suitable for use with Windows Server 2008/SQL Server for failover clustering? So far, I've only seen Dell's MD3000i, and HP's MSA 2000 (2012i), which both are around $6K with a minimal disk configuration. Buffalo (yea, I know), has a $1000 device with iSCSI support, but they say it will not work for 2008 failover clustering. I'm interested in seeing something suitable for failover in a production environment, but with very low IO requirements. (Clustering, say, a 30GB DB.) As for using software: On Windows, StarWind seems to have a great solution. But it's actually more money than buying a hardware SAN. (As I understand, only the enterprise edition supports having replicas, and that's $3000 a license.) I was thinking I could use Linux, something like DRBD + an iSCSI target would be fine. However, I haven't seen any free or low-cost iSCSI software that supports SCSI-3 persistent reservations, which Windows 2008 needs for failover clustering. I know $6K isn't much at all, just curious to see if there are practical cheaper solutions out there. And finally, yes, the software is expensive, but many small business get MS BizSpark, so the Windows 2008 Enterprise / SQL 2008 licenses are completely free.

    Read the article

  • Switch Windows 8 from a hybrid MBR/GPT => GPT only on Macbook Pro Retina

    - by Sid
    I used Disk Utility + the Boot Camp wizard to set up my hard drive for Windows 8 (final MSDN). Somewhere in that process, the Apple tools turned my GPT disk into a hybrid MBR/GPT. All my 4 primary MBR partitions are used up, so when I try turning on BitLocker in Windows 8, it complains about not finding a System drive. I know that on Windows 8 the BitLocker setup tries to create the 200(?)MB system partition if it's missing. However, with all 4 partitions filled I suspect it can't create the system drive = it can't find it = throws back an error like "BitLocker Setup could not find a target system drive. You may need to manually prepare your drive for BitLocker". I've already tried disabling hibernation, the swap file, etc.

    Now I'm thinking that if I were to get rid of the MBR scheme altogether, perhaps I could be alright within the GPT world without MBR's 4-primary-partitions limit. So, how can I get rid of the MBR tables in the hybrid scheme in a manner that still leaves Mac OS and Windows 8 in working condition?

    Details: hardware is the MacBook Pro Retina. The primary MBR partitions are consumed as follows:

    1. EFI partition
    2. HFS+ partition (=encrypted, therefore ="Apple_CoreStorage")
    3. HFS+ partition (Recovery partition, contains the unencrypted Mac bootloader)
    4. NTFS partition (Windows 8 all-in-one partition)

    diskutil list output:

        sid-mbpr:~ sid$ diskutil list
        /dev/disk0
           #:                      TYPE NAME          SIZE        IDENTIFIER
           0:     GUID_partition_scheme               *251.0 GB   disk0
           1:                       EFI                209.7 MB   disk0s1
           2:         Apple_CoreStorage                160.0 GB   disk0s2
           3:                Apple_Boot Recovery HD     650.0 MB   disk0s3
           4:      Microsoft Basic Data Win8            90.1 GB   disk0s4

    GPT vs MBR addresses:

        sid-mbpr:~ sid$ sudo gptsync /dev/rdisk0
        Password:
        Current GPT partition table:
        #      Start LBA      End LBA  Type
        1             40       409639  EFI System (FAT)
        2         409640    312909639  Unknown
        3      312909640    314179175  Mac OS X Boot
        4      314179584    490233855  Basic Data

        Current MBR partition table:
        # A    Start LBA      End LBA  Type
        1              1       409639  ee  EFI Protective
        2         409640    312909639  ac  Apple RAID
        3      312909640    314179175  ab  Mac OS X Boot
        4 *    314179584    490233855  07  NTFS/HPFS

        Status: GPT partition of type 'Unknown' found, will not touch this disk.**

    **: Ignore this message; the gptsync tool is old and doesn't understand the UUID for "Apple_CoreStorage"/FileVault2 partitions. Since the LBA addresses are alright, it's safe to ignore.
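
    A heavily hedged aside (mine, not the poster's): gdisk's expert menu can replace a hybrid MBR with a fresh protective MBR, but on this setup Windows 8 most likely boots via BIOS/CSM from that hybrid MBR, so wiping it without first moving Windows to EFI boot would probably leave Windows unbootable. A sketch only, back up first:

        # a sketch, NOT a tested recipe for this machine
        sudo gdisk /dev/disk0
        # x  -> expert menu
        # n  -> create a new protective MBR (discards the hybrid entries)
        # w  -> write the table and exit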

    Read the article

  • What is causing apache2 proxy error when forwarding to tomcat?

    - by Dark Star1
    I set up Apache to proxy for Tomcat, but I am getting the following errors when I request the page. I sometimes get a blank page or a 503:

        [Mon Dec 03 04:58:16 2012] [error] proxy: ap_get_scoreboard_lb(2) failed in child 29611 for worker proxy:reverse
        [Mon Dec 03 04:58:16 2012] [error] proxy: ap_get_scoreboard_lb(1) failed in child 29611 for worker https://localhost:8443/
        [Mon Dec 03 04:58:16 2012] [error] proxy: ap_get_scoreboard_lb(0) failed in child 29611 for worker http://localhost:8080/

    I have two vhosts configured on the VM as follows.

    HTTP vhost:

        <VirtualHost *:80>
            ServerName www.mysite.net
            ServerAlias mysite.net
            ProxyRequests Off
            ProxyPreserveHost On
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass / http://localhost:8080/ retry=0
            ProxyPassReverse / http://localhost:8080/ retry=0
        </VirtualHost>

    SSL vhost:

        <VirtualHost *:443>
            ServerName www.mysite.net
            ServerAlias mysite.net
            ErrorLog /var/log/apache2/error.log
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            ServerSignature On
            SSLEngine on
            SSLProxyEngine on
            SSLCertificateFile /etc/apache2/ssl/server.crt
            SSLCertificateKeyFile /etc/apache2/ssl/server.key
            ProxyRequests Off
            ProxyPreserveHost On
            ProxyPass / https://localhost:8443/ retry=0
            ProxyPassReverse / https://localhost:8443/ retry=0
        </VirtualHost>

    My system details are: Apache/2.2.22 (Ubuntu), mod_jk/1.2.32, mod_ssl/2.2.22, OpenSSL/1.0.1. mod_proxy_http is also enabled.

    Read the article

  • Cisco QoS Guidence

    - by Kyle Brandt
    I have a 10M connection to the internet that is hooked into a 100M port. I am getting started with QoS and am hoping for a little guidance on setting it up on a Cisco 3825 router. Right now I am going forward with the idea that I have to implement it on my router, and the provider can't provide QoS for me. How I envision it working is that QoS will drop or queue packets on my router, and that will help prevent a situation where the provider has to start dropping a lot of packets. Right now all I am tasked with is making sure that one of the 3 LANs gets a certain slice (say 3M for Gig Lan1) of the 10M internet connection (but ideally this will be more flexible in the future).

                  10M Internet on 100M port on HWIC-4ESW
                              |
                  +-----------------------+
                  |                       |
       Gig Lan1 --|      Cisco 3825       |-- Lan3 on HWIC-4ESW
                  |                       |
                  +-----------------------+
                              |
                           Gig Lan2

    I need to learn more about QoS, but having a target technology and maybe an example configuration will help me wrap my head around the reading I am doing a little more. Which Cisco QoS technology do you recommend for this particular situation? Have a basic sample config of how this might work? Right now the 10M line is not congested, so this is more to have something in place in case it starts to become mildly congested in the future.
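
    Not an answer from this page, just a hedged sketch of the usual approach (hierarchical MQC: shape the 100M port down to the 10M contract, then guarantee per-LAN bandwidth inside the shaper). The interface and ACL numbers are hypothetical:

        class-map match-any LAN1-TRAFFIC
          match access-group 101        ! hypothetical ACL matching Gig Lan1's subnet
        policy-map INTERNET-CHILD
          class LAN1-TRAFFIC
            bandwidth 3000              ! ~3 Mbps guaranteed during congestion
        policy-map INTERNET-SHAPE
          class class-default
            shape average 10000000      ! shape to the 10 Mbps contract
            service-policy INTERNET-CHILD
        interface GigabitEthernet0/1    ! hypothetical internet-facing interface
          service-policy output INTERNET-SHAPE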

    Read the article

  • nginx rewrite regex for API versioning

    - by MSpreij
    What I want is for the first to be turned into the second:

        /widget               =>  /widget/index.php
        /widget/              =>  /widget/index.php
        /widget?act=list      =>  /widget/index.php?act=list
        /widget/?act=list     =>  /widget/index.php?act=list
        /widget/list          =>  /widget/index.php?act=list
        /widget/v2?act=list   =>  /widget/v2.php?act=list
        /widget/v2/?act=list  =>  /widget/v2.php?act=list
        /widget/v2/list       =>  /widget/v2.php?act=list

    v2 could also be v45; basically "v\d+". act ("list" in this case) can have many values, and more will be added. Any additional query parameters would just be passed on with $args, I guess. Basically, URLs not specifying the version will go to index.php, which can then decide which specific version file to include. What I am afraid of happening is loops - this should sit in location /widget { right? (As for putting the version of the API in the URL: I'm not trying to be RESTful, and the target audience is small.) Resources on how to do this entirely in index.php using "routers" are also welcome :-/
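
    A hedged, untested sketch of one way to write it (mine, not from the page). nginx appends the original query string after any new arguments automatically, and since \w doesn't match a dot, the rules can't re-match the rewritten .php URIs, which should avoid loops:

        location /widget {
            rewrite ^/widget/?$             /widget/index.php         last;
            rewrite ^/widget/(v\d+)/?$      /widget/$1.php            last;
            rewrite ^/widget/(v\d+)/(\w+)$  /widget/$1.php?act=$2     last;
            rewrite ^/widget/(\w+)$         /widget/index.php?act=$1  last;
            # hand *.php off to the usual fastcgi/php block
        }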

    Read the article

  • Error when attempting to do a differential or incremental backup of Exchange using ntbackup

    - by voon
    Hi folks, we're running Small Business Server 2003 here. I was reviewing our backup procedures lately and noticed in the ntbackup logs that the differential backups of Exchange were failing with the error:

        (SERVERNAME)\Microsoft Information Store\First Storage Group is not a valid drive, or you do not have access.

    A quick search of Google found this MS KB article: http://support.microsoft.com/kb/555613. However, neither of the suggested fixes appears to apply to our problem. The first solution is to make sure the backup media is formatted and has adequate space. Well, our backup target is a 1 TB external hard drive with about 600 gigs of free space (a full backup of our Exchange DB is currently around 5 GB). The second suggested fix is to "perform a full backup before trying to do incremental". And again, that can't be it, because we are doing full backups twice a week. There are no errors in the application log, just entries for ntbackup starting and ending. I've also tested doing a differential & incremental backup onto the server's internal drive, which unsurprisingly still did not work. I could get around this problem by always doing a full backup of Exchange, but I kind of like the idea of being space-efficient with differential backups. Anyone got any ideas?

    Read the article

  • Need clear steps on how to convert a Windows 2000 Server to a XenServer VM

    - by Jay
    The source system is not local. The target host running XenServer is not local. The source system is running Windows 2000 Server SP4 and has 1 disk split into 6 partitions, all NTFS:

        C:  6 GB (boot)
        D: 15 GB
        E:  6 GB
        F:  6 GB
        G:  5 GB
        H: 26 GB

    Most of the partitions are mostly full (> 60%). What is the most straightforward way to do a P2V migration of the server? I can do minor database & data syncs after the P2V is successful & running as a VM within XenServer; it's just getting to that point which is not clear. The option of installing a Windows 2000 Server from scratch is not available; I need to convert the existing physical server as-is into a VM to be hosted within a XenServer environment. I've looked at XenConvert, but it maxes out at converting only 4 partitions in one shot, and I'm not certain how to account for the 2 extra partitions. I'm not familiar with XenServer, but it's my only option right now to go P2V.

    Read the article

  • Connecting to IPv6 hosts when mobile and on a Surface?

    - by Cerebrate
    Specifically, at my usual location, I have an IPv6 network which connects to the Internet via a static tunnel set up to Hurricane Electric's tunnel broker ( http://www.tunnelbroker.net/ ). This works essentially perfectly, allowing inbound and outbound connectivity. Now, however, I need to connect back to host(s) on that network over IPv6 from mobile tablet(s); meaning the conditions are such that there is no guarantee or even likelihood of native IPv6 support where it happens to be at any given time, and the IPv4 address of the tablet will change on a fairly regular basis. The native Teredo support, as configured by default, functions well enough to let me ping my target hosts, but appears to have neither the reliability nor the throughput to support anything else; I have been unable to make any actual connections (trying a number of TCP-based protocols) using it. I had considered setting up an independent tunnel for the tablet(s), and using scripts to update the client endpoint IP address when it changes, but since both (a) many of the locations will be behind NAT devices over which I have no control, and (b) the option over which I do have control is an AT&T Unite hotspot which does not offer protocol 41 forwarding or respond to ICMP on its public address, this approach does not seem viable. I am additionally constrained as the mobile tablet(s) in question are Surface RTs, and as such are incapable of running, for example, AICCU client software. What is my best option to pursue to obtain IPv6 connectivity in this scenario?

    Read the article

  • Slow parity initialization of RAID-5 array on HP Smart Array 411 controller

    - by Rob Nicholson
    On 29th October 2011, I built a RAID-5 array using 4 x 146.8GB Seagate SAS ST3146855SS drives running at 15k, connected to a PowerEdge R515 with an HP Smart Array P411 controller running Windows 2008 (so nothing particularly unusual). I know that parity initialisation of a RAID-5 array can take some time, but it's still running after 2.5 weeks, which seems a little unusual.

    I'd previously built another array on the same controller using 4 x 2TB SATA-2 drives and that did take a while to complete, but a) I'm sure it was less than 2.5 weeks, b) that array was ~12 times bigger, and c) during initialization, the percentage slowly increased each day. At the moment, the status display for this new 2nd array simply says "Parity Initialization Status: In Progress", and it has said that since the start. It's this lack of change in the status that worries me the most - it feels like it's not actually doing anything. Do you think something has gone wrong, or am I being impatient and the status not increasing is, for some reason, normal? I kind of expected a much smaller array on faster drives (15k SAS versus 7.5k SATA-2) to build in a few days.

    This is our primary SAN running StarWind, so my "have a play" options are very limited. This 2nd array is currently in use for one small virtual disk, so I could shut the target machine down, move the virtual disk to another drive, and try rebuilding.

    Read the article

  • ProCurve 1800 switch issue

    - by user98651
    I recently deployed ProCurve 1800-24G switches in place of some older ProCurve 2424M switches in my network. However, I'm having a serious problem with the switch connected to the router. It seems that every night, when our Windows 2008 R2 server (off site) runs a backup to an iSCSI target (on site) [facilitated through a PPTP tunnel], the LAN loses connectivity with the router. To clarify, there is only one router, which is connected to the switch affected by this problem. The only way to resolve the issue is to either reboot the router or pull the ethernet cable that goes to the router and plug it back in. During the outage, clients cannot get DHCP leases, resolve DNS, ping, or do anything else with the router in this state.

    Now, neither the switch nor the router is configured extensively, and the issue only seems to have surfaced with the new switch in place. I have tried a number of things, including replacing cables, rebooting, and checking the switch configuration (it is literally as basic as you can get at this point - flat LAN, no trunking). Interestingly, the router shows (accessed externally) no changes in configuration or status during this state, but similarly cannot ping or access other hosts on the network. This issue occurs at different stages of the backup (i.e., different amounts transferred). I've also dumped packets from the switch into Wireshark but cannot seem to find any anomaly yet (I'm looking at packets around the time the issue appeared and at the time when I reset the NIC).

    Any suggestions for what to look for? Ideas on what could be causing this? I'm seeing some transmit/receive errors on the NIC from both the router and switch side, but nothing serious when compared to the total packet counts. I'm seriously doubting hardware at this point, as I have tried another switch, different cables, and a different NIC on the router.

    Read the article

  • Trying to use a SmartHost with my Exchange 2010 server

    - by Pure.Krome
    Hi folks, I'm trying to use a SmartHost with my Exchange 2010 server. SmartHost details:

        Secure SMTPS: securemail.internode.on.net 465   <-- Note: that's port 465

    Configure your existing SMTP settings (in your email program) to:

    - use authentication (enter your Internode username and password; enter your username as [email protected]).
    - enable SSL for sending email (SMTPS).

    So I've added the smart host details to my Org Config -> Hub Transport. I then used PowerShell to add the port:

        Set-SendConnector "securemail.internode.on.net" -Port 465

    I've then added my username/password (as suggested above) to the SmartHost as Basic Authentication (with no TLS). Then I try sending an email and I get the following error message:

        451 4.4.0 Primary target IP address responded with: "421 4.4.2 Connection dropped due to ConnectionReset."

    So I'm not sure how to continue. I also tried ticking the TLS box, but still I get the same error. If I don't use SMTPS (secure SMTP, on port 465) and use basic SMTP on port 25 with no authentication, email gets sent. Any ideas?

    EDIT: Btw, I can telnet to that server on port 465 from my mail server... just to make sure I'm not getting firewall'd, etc.
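
    A hedged observation (mine, not the thread's): port 465 is implicit SSL from the first byte, and as far as I know Exchange send connectors only speak STARTTLS, never implicit SSL - which would explain the immediate ConnectionReset. If the provider also offers STARTTLS on the standard submission port, a sketch like this might work (the port, and the provider's support for it, are assumptions):

        # assumes the smart host accepts STARTTLS on 587
        Set-SendConnector "securemail.internode.on.net" -Port 587
        Set-SendConnector "securemail.internode.on.net" -SmartHostAuthMechanism BasicAuthRequireTLS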

    Read the article

  • LSI MegaRAID LINUX got Optimal after degradation but strange POST message

    - by kesrut
    A Linux server box with an LSI MegaRAID controller got degraded, but after some time the RAID status changed to Optimal:

        Adapter 0 -- Virtual Drive Information:
        Virtual Drive: 0 (Target Id: 0)
        Name                :
        RAID Level          : Primary-1, Secondary-0, RAID Level Qualifier-0
        Size                : 2.727 TB
        Mirror Data         : 2.727 TB
        State               : Optimal
        Strip Size          : 256 KB
        Number Of Drives per span:2
        Span Depth          : 3
        Default Cache Policy: WriteBack, ReadAdaptive, Cached, No Write Cache if Bad BBU
        Current Cache Policy: WriteThrough, ReadAdaptive, Cached, No Write Cache if Bad BBU
        Default Access Policy: Read/Write
        Current Access Policy: Read/Write
        Disk Cache Policy   : Disk's Default
        Encryption Type     : None
        Is VD Cached: No

    But now I'm getting this RAID BIOS POST message (image: http://cl.ly/image/1h1O093b1i2d):

        Your battery is either charging, bad or missing, and you have VDs configured
        for write-back mode. Because the battery is not currently usable, these VDs
        will actually run in write-through mode until the battery is fully charged
        or replaced if it is bad or missing.

    So could a battery issue have caused the problem? Here is the information about the battery:

        BatteryType: iBBU
        Voltage: 4001 mV
        Current: 0 mA
        Temperature: 22 C
        Battery State : Operational
        BBU Firmware Status:
          Charging Status              : None
          Voltage                      : OK
          Temperature                  : OK
          Learn Cycle Requested        : No
          Learn Cycle Active           : No
          Learn Cycle Status           : OK
          Learn Cycle Timeout          : No
          I2c Errors Detected          : No
          Battery Pack Missing         : No
          Battery Replacement required : No
          Remaining Capacity Low       : No
          Periodic Learn Required      : No
          Transparent Learn            : No
          No space to cache offload    : No
          Pack is about to fail & should be replaced : No
          Cache Offload premium feature required     : No
          Module microcode update required           : No

    Where could the problem be? I've disabled the alarms, but I get them if they're enabled. I don't know how to find the root of the problem.
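
    Not from the original post - a hedged sketch of interrogating the BBU with MegaCli (the binary name/path varies by distro; a manual learn cycle is a common next step if the charge state looks stuck, though note it temporarily forces write-through):

        # check battery state and capacity details
        MegaCli64 -AdpBbuCmd -GetBbuStatus -a0
        MegaCli64 -AdpBbuCmd -GetBbuCapacityInfo -a0
        # kick off a manual learn cycle
        MegaCli64 -AdpBbuCmd -BbuLearn -a0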

    Read the article

  • Increasing Java's heapspace in Tomcat startup script

    - by Ankur
    I want to increase my heap size when using Tomcat. I was told to add this line:

        export CATALINA_OPTS=-Xms16m -Xmx256m;

    into the startup.sh script. I did so (at the beginning) but got the error:

        export: 24: -Xmx256m: bad variable name

    Where am I supposed to add it? Am I doing something else wrong?

        export CATALINA_OPTS=-Xms16m -Xmx256m;

        # Better OS/400 detection: see Bugzilla 31132
        os400=false
        darwin=false
        case "`uname`" in
        CYGWIN*) cygwin=true;;
        OS400*) os400=true;;
        Darwin*) darwin=true;;
        esac

        # resolve links - $0 may be a softlink
        PRG="$0"

        while [ -h "$PRG" ] ; do
          ls=`ls -ld "$PRG"`
          link=`expr "$ls" : '.*-> \(.*\)$'`
          if expr "$link" : '/.*' > /dev/null; then
            PRG="$link"
          else
            PRG=`dirname "$PRG"`/"$link"
          fi
        done

        PRGDIR=`dirname "$PRG"`
        EXECUTABLE=catalina.sh

        # Check that target executable exists
        if $os400; then
          # -x will Only work on the os400 if the files are:
          # 1. owned by the user
          # 2. owned by the PRIMARY group of the user
          # this will not work if the user belongs in secondary groups
          eval
        else
          if [ ! -x "$PRGDIR"/"$EXECUTABLE" ]; then
            echo "Cannot find $PRGDIR/$EXECUTABLE"
            echo "This file is needed to run this program"
            exit 1
          fi
        fi

        exec "$PRGDIR"/"$EXECUTABLE" start "$@"
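
    A hedged aside (mine): the "bad variable name" error is ordinary shell word-splitting - without quotes, -Xmx256m is parsed as a second argument to export rather than as part of the value. Quoting fixes it, and the conventional home for such settings is a setenv.sh next to catalina.sh, which catalina.sh sources automatically if present:

        # quote the value so it stays a single assignment
        export CATALINA_OPTS="-Xms16m -Xmx256m"

        # preferably put that line in $CATALINA_HOME/bin/setenv.sh
        # instead of editing startup.sh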

    Read the article

  • copying an lvm partition to a smaller disk, and renaming volume groups.

    - by dlamblin
    I was trying to shrink a vmdk (VMware disk image) file to be as small as possible and found two recommendations. The first is to cat /dev/zero into the fs, then delete the file, and run VMware Tools' shrink. This works okay. The second is to copy everything into a new vmdk. I went the second route. I did not use dd because I actually want to use as few blocks as possible, instead of having a block-by-block copy; any unlinked files will still have blocks that aren't zeroed out.

    Secondly, the CentOS image was mostly LVM, except for the boot partition, and my target was going to be 4GB instead of 8GB. I did use dd for the first 40MB to get the boot blocks and partition copied. I then used parted to create an identical primary boot and a smaller primary LVM. Then I used pvcreate on that device (sdb2), vgcreate, and lvcreate to create a root and swap. I used mkfs.ext3fs on the root partition and then rsync -av / /2root, excluding /proc /sys /2root /dev. So far everything went fine. My problem is that:

    1. The result is 2.7 GB while the source was 2.1 GB. This is weird to me.
    2. The second vgroup is called VolGroup01, while the original was called VolGroup00.

    How can I rename VolGroup01 to VolGroup00 and swap it out after all this?
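
    A hedged sketch for the rename half (the VG names are the question's own; the surrounding steps assume LVM2 on CentOS and are only safe while the old VolGroup00 isn't also visible, e.g. when booted from the new disk alone or from a rescue system):

        vgrename VolGroup01 VolGroup00
        vgscan
        vgchange -ay VolGroup00
        # then point /etc/fstab and the GRUB config at /dev/VolGroup00/...
        # and rebuild the initrd, since it may embed the old VG name:
        mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)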

    Read the article

  • Sun T5120: Memory issue with 8GB DIMMs, "Unsupported memory configuration"

    - by watain
    I'm trying to upgrade the RAM in a Sun T5120 server, replacing 2GB DIMMs (Sun P/N: 501-7953-01) with 8GB DIMMs (Sun P/N: 511-1262-01). When bringing up the host system, I get the following errors on the ILOM:

        -> show faulty
        Target              | Property          | Value
        --------------------+-------------------+---------------------------------
        /SP/faultmgmt/0     | fru               | /SYS/MB
        /SP/faultmgmt/0/    | timestamp         | Dec 14 15:29:42
         faults/0           |                   |
        /SP/faultmgmt/0/    | sp_detected_fault | /SYS/MB/CMP0/MCU3 Forced fail
         faults/0           |                   |  (IBIST)
        /SP/faultmgmt/0/    | timestamp         | Dec 14 15:29:28
         faults/1           |                   |
        /SP/faultmgmt/0/    | sp_detected_fault | /SYS/MB/CMP0/MCU2 Forced fail
         faults/1           |                   |  (IBIST)
        /SP/faultmgmt/0/    | timestamp         | Dec 14 15:29:13
         faults/2           |                   |
        /SP/faultmgmt/0/    | sp_detected_fault | /SYS/MB/CMP0/MCU1 Forced fail
         faults/2           |                   |  (IBIST)
        /SP/faultmgmt/0/    | timestamp         | Dec 14 15:28:59
         faults/3           |                   |
        /SP/faultmgmt/0/    | sp_detected_fault | /SYS/MB/CMP0/MCU0 Forced fail
         faults/3           |                   |  (IBIST)
        /SP/faultmgmt/1     | fru               | /SYS
        /SP/faultmgmt/1/    | timestamp         | Dec 14 15:29:42
         faults/0           |                   |
        /SP/faultmgmt/1/    | sp_detected_fault | Dec 14 15:29:42 ERROR:
         faults/0           |                   |  Unsupported memory
                            |                   |  configuration

    As you can see, the only real error message I get is "Unsupported memory configuration". Note that I'm absolutely sure that I placed the DIMMs in the correct slots. Might this issue be related to the voltage of the DIMMs? Any suggestions on how to troubleshoot it?

    This issue seems to be similar to the one explained at "Inserted disabled" while upgrading Sun Sparc t5120 memory. However, the given link http://docs.sun.com/source/820-4445-10/chapter1.html seems to point to a nonexistent page...

    Read the article

  • A tiered approach to cloning linux partitons

    - by Djurdjura
    I'm looking at a strategy for cloning Linux (root) partitions without having to use a live CD. The literature rightly suggests that the source and target partitions must be unmounted to get a clean clone, which assumes that you need to use a live CD. I was wondering whether, instead of requiring a live CD, a third partition that emulates the live-CD functionality couldn't achieve the same thing. In other words, at a high level, a system with 3 partitions (all bootable):

    1. Rescue partition (live-CD emulation)
    2. Running partition (source)
    3. Backup partition (destination)

    All 3 partitions are LVMs. When it's time to clone the source partition to the backup (destination) partition, we would boot to the rescue partition, unmount the other 2 partitions (is that required?), run a disk check on the source, copy to the destination (dd, or a simple copy to avoid replicating the fragmentation from the source), run a disk check on the destination partition, update the GRUB menu list to force boot from either partition, and reboot into that partition. A sketch of these steps follows below.

    My question: is this an approach that you'd recommend? What about the MBR in all this? Any gotchas or extra checks required? Thanks, D.

    PS. On recommendation from members, posting here instead of stackoverflow.com.
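
    A hedged sketch of the clone step as I read it (device and mount names are hypothetical; assumes ext3 on LVM2 and that we're booted into the rescue partition with the other two unmounted):

        e2fsck -f /dev/vg0/running            # check the source
        mount -o ro /dev/vg0/running /mnt/src
        mkfs.ext3 /dev/vg0/backup             # a fresh fs avoids copying fragmentation
        mount /dev/vg0/backup /mnt/dst
        rsync -aHAXx /mnt/src/ /mnt/dst/      # file-level copy instead of dd
        umount /mnt/src /mnt/dst
        e2fsck -f /dev/vg0/backup             # check the destination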

    Read the article

  • Setup.exe called from a batch file crashes with error 0x0000006

    - by Alex
    We're going to be installing some new software on pretty much all of our computers, and I'm trying to set up a GPO to do it. We're running a Windows Server 2008 R2 domain controller, and all of our machines are Windows 7. The GPO calls the following script, which sits on a network share on our file server. The script itself calls an executable that sits on another network share on another server. The executable immediately crashes with an error 0x0000006. The event log just says this:

        Windows cannot access the file for one of the following reasons: there is a
        problem with the network connection, the disk that the file is stored on, or
        the storage drivers installed on this computer; or the disk is missing.
        Windows closed the program Setup.exe because of this error.

    Here's the script (which is stored on \\WIN2K8R2-F-01\Remote Applications):

        @ECHO OFF
        IF DEFINED ProgramFiles(x86) (
            ECHO DEBUG: 64-bit platform
            SET _path="C:\Program Files (x86)\Canam"
        ) ELSE (
            ECHO DEBUG: 32-bit platform
            SET _path="C:\Program Files\Canam"
        )
        IF NOT EXIST %_path% (
            ECHO DEBUG: Folder does not exist
            PUSHD \\WIN2K8R2-PSA-01\PSA Data\Client
            START "" "Setup.exe" "/q"
            POPD
        ) ELSE (
            ECHO DEBUG: Folder exists
        )

    Running the script manually as administrator also results in the same error. Setting up a shortcut with the same target and parameters works perfectly. Manually calling the executable also works. Not sure if it matters, but the installer is based on dotNETInstaller. I don't know what version, though. I'd appreciate any suggestions on fixing this. Thanks in advance!

    UPDATE: I highly doubt this matters, but the network share the script is hosted on is a shared drive, while the network share the script references for the executable is a shared folder. Also, both shares have Domain Computers listed with full access on both the sharing and security tabs. And PUSHD works without wrapping the path in quotes.
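
    A hedged guess of mine, not the thread's: 0xC0000006 is STATUS_IN_PAGE_ERROR, which typically means the executable's backing file became unreachable over the network while it was running. Copying the installer locally before launching sidesteps that; the UNC path below is the question's own, the temp folder is hypothetical:

        @ECHO OFF
        REM copy the installer locally, then run it from disk
        SET _local=%SystemDrive%\Windows\Temp\CanamSetup
        XCOPY "\\WIN2K8R2-PSA-01\PSA Data\Client\*" "%_local%\" /E /Y /Q
        "%_local%\Setup.exe" /q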

    Read the article

  • Script errors when run by launchd at startup, but not when run in Terminal

    - by Mechcozmo
    Hello. I'm attempting to create a RAM disk that loads the previous contents when the system starts up and every six hours writes the contents to a disk image. Currently, when you run the script from the terminal ("sudo bash LogToRAM.sh"), everything works fine, but when run from launchd during startup, it doesn't. Here are the lines from the log; the first line just gives some idea as to where in the boot process we are:

        SecurityAgent[202] Showing Login Window
        com.mechcozmo.LogToRAM[51] + /Developer/usr/bin/SetFile -a V /Volumes/LogfileRAMdisk
        com.mechcozmo.LogToRAM[51] ERROR: File Not Found. (-43) on file: /Volumes/LogfileRAMdisk
        com.mechcozmo.LogToRAM[51] + /usr/sbin/asr -source '/Library/Application Support/LogToRAM/RAMdisk_store.dmg' -target /Volumes/LogfileRAMdisk/ -noverify

    Here is the script and plist file in question. Note that 'set -vx' is up at the top of the script; it gives a lot of information about what is happening in the script. My current theory is that the /Volumes directory does not exist at this stage of the boot process, but that seems unlikely, to be honest.
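
    A hedged sketch (mine) of making the script create the RAM disk itself if it isn't there yet, since at that point in boot the volume may simply not have been attached; the names follow the log, the size is an assumption:

        #!/bin/bash
        # create and mount the RAM disk if missing
        if [ ! -d "/Volumes/LogfileRAMdisk" ]; then
            # 1048576 512-byte sectors = a 512 MB RAM disk (assumed size)
            dev=$(hdiutil attach -nomount ram://1048576)
            diskutil erasevolume HFS+ "LogfileRAMdisk" $dev
        fi
        # then restore the previous contents, as the log shows
        /usr/sbin/asr -source '/Library/Application Support/LogToRAM/RAMdisk_store.dmg' \
                      -target /Volumes/LogfileRAMdisk/ -noverify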

    Read the article
