Search Results

Search found 17976 results on 720 pages for 'old versions'.


  • "Modern" Ethernet over coax

    - by Electrons_Ahoy
    So, I've just bought a house. It's reasonably new - built in the early '00s. One of the features that got built in was a cable TV drop in every room. The cabling is gorgeous - there's even a wiring cabinet of sorts in a closet where the cables all tie together at the splitter to the outside line. Of course, my problem is that I only own the one TV. I do, however, own a few computers.

    What I would love to be able to do is drop a switch in the wiring closet and run 100/1000BASE-T Ethernet over the coax in the walls I wouldn't otherwise be using. My fantasy would be some kind of adapter-plug-thing that takes a coax plug on one side and a cat5/RJ45 plug on the other. Has anyone else done this? Any suggestions?

    (There are a few other options that suggest themselves - first, I could just use the existing cabling channels and re-run cat5 or 6 through the walls. While tempting, that sounds like more work than I really want to put in, so I'm calling that Plan B. Also, I could just scare up a mess of old 10BASE2 cards and run the house on thinnet, all mid-90s style. While I think I'd get major style points for that, I don't think I can get a 10BASE2 adapter for the new laptop. Also, I have all these super-snazzy gigabit adapters I'd like to be using. And so forth.)

    Read the article

  • Self-hosting vs. Budget hosting - What are the economics?

    - by cdonner
    My current hosting provider (shared Linux, unlimited domains, < $10 per month, with about 20 sites) has been giving me a lot of grief lately. I am contemplating just ditching them, repurposing the old Sun V20z that is sitting in my basement rack, and moving the hosting in-house, literally. My math goes as follows:

    - My company pays up to $80 a month for my home internet service, which would cover the upgrade from the current Fios to Comcast business internet with 5 static IPs. So this comes free.
    - Running the server will cost me about $180/year at the current rate of approx. $0.20/kWh.
    - My time is free.

    So it seems that my net cost of doing this would be about $80 annually (the $180 in electricity, less the roughly $100/year the shared host charges), plus the work that goes into setup and maintenance. I will have to get email hosting somewhere, which I do not want to do myself. On the other side of the balance sheet, I'd likely get better uptime than my provider based on recent stats, will not get suspended, and won't have to spend hours with customer support.

    Overall, I am not convinced. Has anybody actually done this? What was your experience, and did it pay off?
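
    For reference, a quick back-of-the-envelope check of the electricity figure. The ~100 W average draw is an assumption (the V20z's real consumption varies with load and PSU efficiency), but it lands close to the asker's $180/year estimate:

        # rough yearly power cost: kW * hours per year * price per kWh
        watts=100; rate=0.20
        echo "scale=2; $watts / 1000 * 24 * 365 * $rate" | bc
        # prints 175.20 (dollars/year), in line with the ~$180 estimate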

    Read the article

  • Exchange 2007 restore - Backup Exec Unable to Attach to a resource

    - by Andy
    I have been struggling with this one for months! Grateful for any advice. The setup is a Windows 2003 server network, 4 servers on the domain. Two Exchange 2007 servers (only one with mailboxes still on it). Backup Exec (12.5) is on a non-Exchange server with agents on the others. Backup Exec runs a full backup of Exchange across the network well, at pretty reasonable speeds. However, when you try any kind of restore (individual emails, mailboxes or whole system restore - all to the same location or to an alternate server, RSG etc.) the following message is received within about 10-15 secs of starting the job:

        Job ended: 24 December 2010 at 13:28:32
        Completed status: Failed
        Final error: 0xe000848c - Unable to attach to a resource. Make sure that all selected resources exist and are online, and then try again. If the server or resource no longer exists, remove it from the selection list. Edit the selection list properties, click the View Selection Details tab, and then remove the resource.
        Final error category: Resource Errors
        For additional information regarding this error refer to link V-79-57344-33932

    Things I have already tried:

    - Changed the account to the main administrator account (with all permissions)
    - Checked versions of ese.dll on both servers - both the same
    - Checked all VSS writers on both servers are stable / normal
    - Restoring to different locations

    Any advice anyone could give would be much appreciated. Many thanks, Andy

    Read the article

  • Cannot connect to my EC2 instance because of "Permission denied (publickey)"

    - by Burak
    In the AWS console, I saw that my key pair was deleted. I created a new one with the same name. Then I tried to connect with:

        ssh -v -i sohoKey.pem ec2-user@******.compute-1.amazonaws.com

    Here's the output:

        macs-MacBook-Air:~ mac$ ssh -v -i sohoKey.pem ec2-user@******.compute-1.amazonaws.com
        OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011
        debug1: Reading configuration data /etc/ssh_config
        debug1: Applying options for *
        debug1: Connecting to ********.compute-1.amazonaws.com [*****] port 22.
        debug1: Connection established.
        debug1: identity file sohoKey.pem type -1
        debug1: identity file sohoKey.pem-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3
        debug1: match: OpenSSH_5.3 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.6
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host '*******.compute-1.amazonaws.com' is known and matches the RSA host key.
        debug1: Found key in /Users/mac/.ssh/known_hosts:3
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: sohoKey.pem
        debug1: Authentications that can continue: publickey
        debug1: Trying private key: sohoKey.pem
        debug1: read PEM private key done: type RSA
        debug1: Authentications that can continue: publickey
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    Update: I detached my old EBS volume and attached it to the new instance. Now, how can I mount it?
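
    Not the asker's own commands, but a minimal sketch of mounting a secondary EBS volume on a Linux instance. The device name /dev/xvdf1 and the mount point are assumptions - check dmesg or fdisk -l for the actual device the volume was attached as:

        # find the device the attached volume was given
        sudo fdisk -l
        # mount its first partition read-only somewhere temporary
        sudo mkdir -p /mnt/old-ebs
        sudo mount -o ro /dev/xvdf1 /mnt/old-ebs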

    Read the article

  • AsteriskNow Migration / Shared Extension Space

    - by Aaron C. de Bruyn
    I am testing the possibility of migrating from an old Avaya phone system to AsteriskNow. The migration would cover several hundred phones - but spread out over several years. (Management wants to move buildings to the new phone system one by one as cables get cut or time permits.) Two other directives are that extensions must not change and that they want a GUI that other admins (non-Linux geeks) can manage. They currently use 9XXX for all extensions.

    We linked the Avaya and Asterisk boxes via a PRI card and they are both communicating. From the Avaya side, if we move (for example) extension 9001 to Asterisk, we forward the call over the PRI to the AsteriskNow box and the SIP phone rings. In AsteriskNow we have an outgoing rule '_9XXX' that routes all 4-digit extensions starting with 9 back to Avaya.

    Here's the trouble. Dialing 9001 (the extension moved over to AsteriskNow) causes the call to be routed out the PRI to the Avaya box, then the Avaya box routes the call back to Asterisk, and Asterisk routes it to the SIP phone. As we get more and more users switched over, it will use up more and more channels on the PRI card.

    Is there a way I can ask Asterisk to check its local extensions first, then forward off to the Avaya system if it matches '_9XXX'? (I know how I can do it when editing the raw config files; I'm just looking for a way to do it in the GUI so other admins can manage it if necessary.) As a last-ditch plan, I know I can specifically add '_9001' as an outgoing call rule and send it directly to extension 9001 - but I'd really hate to do that for several hundred phones.
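
    For reference, a rough sketch of the raw-dialplan version of "try local first, then the PRI" that the asker says they already know how to do. The context name, the SIP-peer-named-after-extension convention and DAHDI group 0 for the PRI are all assumptions; the open question of how to express this in the AsteriskNow GUI remains:

        [from-internal-custom]
        ; ring a local SIP peer named after the dialed extension first
        exten => _9XXX,1,Dial(SIP/${EXTEN},20)
        ; only fall back to the Avaya over the PRI if no such local peer exists
        exten => _9XXX,n,GotoIf($["${DIALSTATUS}" = "CHANUNAVAIL"]?avaya:done)
        exten => _9XXX,n(avaya),Dial(DAHDI/g0/${EXTEN})
        exten => _9XXX,n(done),Hangup()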

    Read the article

  • Best practice for removing a DC from a site that no longer connects via VPN in another city

    - by dasko
    Hi, I am looking for a recap of what I have done already, to see if I missed anything.

    I had two cities connected by WAN using a persistent IPsec tunnel between gateways. I had one DC (domain controller) in each city that was a global catalog server (GC). They were set up to replicate, and I had them configured under Sites and Services with their own subnets etc. About 6 months ago the one city was removed and I was not able to gracefully remove, through dcpromo, the server that was there. It is no longer used and cannot be brought back. The company went from two sites down to a single site.

    The problem is that I had a whole bunch of KCC errors and replication bugs in the event viewer. I wanted to clean up my Active Directory and decided to use the ntdsutil metadata cleanup commands. I removed the server from the specified site based on a procedure from the Petri website. I then removed the instances of the old DC and site from Sites and Services. Then I went and cleaned up DNS by removing the Host A records and the NS server name from both the local DNS forward lookup zone and _msdcs. I also removed the reverse lookup zone for the subnet that no longer exists.

    Is there anything I missed? Thanks in advance for any help. gd
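
    For anyone comparing notes, this is roughly the interactive ntdsutil sequence the asker describes (Windows Server 2003 syntax; the server name and the domain/site/server numbers are placeholders you pick from the lists ntdsutil prints):

        ntdsutil
        metadata cleanup
        connections
        connect to server <surviving-DC-name>
        quit
        select operation target
        list domains
        select domain 0
        list sites
        select site <n>
        list servers in site
        select server <n>
        quit
        remove selected server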

    Read the article

  • Uninstall php5 installed from source.

    - by diegomichel
    I tried to install PHP 5 from source, and it worked... Then for some reason I needed to install the official packages, so I tried a make uninstall - and to my surprise there is no such make uninstall target... so I tried to delete all the installed files by hand. Then I installed the official Debian packages and it worked fine... until I needed to install the sqlite module, which gives me the following error:

        php --version
        PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/lib/php5/20090626/pdo_sqlite.so' - /usr/lib/php5/20090626/pdo_sqlite.so: undefined symbol: php_pdo_register_driver in Unknown on line 0
        PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/lib/php5/20090626/sqlite.so' - /usr/lib/php5/20090626/sqlite.so: undefined symbol: php_pdo_register_driver in Unknown on line 0
        PHP 5.3.1-5 with Suhosin-Patch (cli) (built: Feb 22 2010 22:46:05)
        Copyright (c) 1997-2009 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2009 Zend Technologies

    So I remembered the manual install I did, and I think there is some old lib installed causing that problem. The bad thing is that there is no make uninstall in the PHP 5 source tree:

        php-5.2.13 > make uninstall
        make: *** No rule to make target `uninstall'.  Stop.

    I have tried reinstalling and purging all PHP-related packages via aptitude, with no success.

    OS: Debian Squeeze.

        uname -a
        Linux desktop 2.6.32-trunk-amd64 #1 SMP Sun Jan 10 22:40:40 UTC 2010 x86_64 GNU/Linux

    Any idea how to fix that?
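
    A sketch of how one might hunt down leftovers from the source install. The extension directory path is taken from the warning above; dpkg -S simply tells you whether a file belongs to a Debian package or was left behind by the manual build:

        # where does the packaged PHP look for extensions, and what is actually in there?
        php -i | grep ^extension_dir
        ls -l /usr/lib/php5/20090626/
        # files not owned by any package are likely leftovers from the source build
        dpkg -S /usr/lib/php5/20090626/*.so
        # the source install usually lands under /usr/local - check for a stray PHP there too
        ls -l /usr/local/bin/php* /usr/local/lib/php 2>/dev/null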

    Read the article

  • solr php extension fails to run on newest Debian Wheezy

    - by hijarian
    I'm trying to use the Solr PHP extension on the recently-upgraded Debian Wheezy. It installs both from PECL and from source flawlessly, but instead of giving me the expected functionality it gives me this on every PHP run:

        PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525/solr.so' - /usr/lib/php5/20100525/solr.so: undefined symbol: curl_easy_getinfo in Unknown on line 0

    Also, scripts which use the extension throw an error:

        PHP Error[2]: include(SolrClient.php): failed to open stream: No such file or directory in file <...path to my autoloader...>

    My main point is that it was set up before and worked like a charm. In the upgrade, among the relevant packages only the versions of PHP and libcurl changed. The Solr instance itself was left as is. I have all possible libcurl libraries:

        $ locate libcurl
        ...
        /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.3
        /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4
        /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.2.0
        /usr/lib/x86_64-linux-gnu/libcurl.a
        /usr/lib/x86_64-linux-gnu/libcurl.la
        /usr/lib/x86_64-linux-gnu/libcurl.so
        /usr/lib/x86_64-linux-gnu/libcurl.so.3
        /usr/lib/x86_64-linux-gnu/libcurl.so.4
        /usr/lib/x86_64-linux-gnu/libcurl.so.4.2.0
        ...
        /usr/lib32/libcurl.so.3
        /usr/lib32/libcurl.so.4
        /usr/lib32/libcurl.so.4.2.0
        ...

    I have installed the php5-curl package version 5.4.4-2 with aptitude. I installed the Solr extension both with

        sudo pecl install solr

    (with various combinations of the -f and -n flags, and tried solr-beta too) and with

        wget ...
        cd ...
        phpize
        ./configure
        make
        make install

    I'm installing the 1.0.2 version of the extension because it worked before the upgrade from Squeeze to Wheezy. As I said earlier, the extension installs without any errors. I have already added the extension=solr.so incantation to /etc/php5/mods-available/solr.ini.

    What magic should I do to make the solr extension work? Is it true that the only solution I have is to downgrade libcurl to the version it was before the upgrade?
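
    A small diagnostic sketch (not from the original post) to see which libcurl the module actually links against; if ldd reports the gnutls flavour or "not found", rebuilding the extension against the current libcurl development headers is the usual next step:

        # confirm the extension dir, then inspect the module's curl linkage
        php -i | grep ^extension_dir
        ldd /usr/lib/php5/20100525/solr.so | grep -i curl
        # compare with the symbols the installed libcurl actually exports
        nm -D /usr/lib/x86_64-linux-gnu/libcurl.so.4 | grep curl_easy_getinfo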

    Read the article

  • Intel Core i7 QuadCore on HP Pavilion dv7 Overheating Issues

    - by kellax
    I bought a brand-new HP notebook: an HP Pavilion dv7-6b21em BeatsAudio edition. The notebook is about 2 months old and has a pretty nasty overheating problem. I mainly use it for development; however, I do play some games. The disturbing thing is that the computer is loud on pretty simple tasks. Here are the specs:

    - CPU: Intel Core i7-2670QM QuadCore (8 threads) @ 2.20 GHz
    - RAM: (8GB) 2x 4GB @ 1066
    - HDD: 1TB 7200
    - GPU: ATI Radeon HD 6770M 1GB Dedicated DDR
    - OS: Windows 7 64bit Enterprise

    I have an external monitor running on the VGA port, a 22" Samsung SyncMaster S24B300.

    CPU heat statistics: Platform: rPGA 988B (Socket G2), Frequency: cca. 3000 MHz, VID: 1.1809 - 1.2059 V, Revision: D2, CPUID: 0x206A7, TDP: 45.0 Watts, Lithography: 32 nm. Heat: Tj. Max: 100°C, Power 4.5 - 5.9 Watts. Core #0: 63°C, Core #1: 65°C, Core #2: 66°C, Core #3: 67°C (load on all is about 0 to 2%).

    I opened the notebook; the fan is working fine and there is no dust, but still, right now the fan is pretty loud even though all I have open is Firefox. When I run a game the heat jumps to a whopping 90-97°C. It has not shut down due to overheating yet, but the loud fan is pretty annoying considering I'm not really doing anything stressful. Is there anything I can do to fix this - is it maybe a BIOS issue? I have all drivers updated to the latest. I have very few background processes running, consuming barely 2GB of RAM and about 2% of CPU. I had it serviced and they said there is nothing wrong with it. But I feel that a notebook that costs 1.2k Euros can't be like this.

    Read the article

  • Can I change the "From" name and address without going into Mail.app's preferences?

    - by Arjan van Bentem
    Can I somehow add an editable "From" field to Mail.app, a bit like Virtual Identity for Thunderbird, just like I can change the "Reply To" address on the fly?

    In Mail.app, one can set up multiple email addresses for a single account by just entering a comma-separated list like [email protected], [email protected]. Next, when composing a message, Mail offers a dropdown to select a "From" address. And when replying†, it automatically selects the right address if it can find a match. Nice, but I'd like to be able to change the "From" on the fly, without going into the Account Information.

    Also, in previous versions of Mail one could even specify multiple Full Names for a single account: Email Address: Arjan <[email protected]>, Arjan on SU <[email protected]>. But nowadays, Mail only uses the setting of Full Name, and even ignores names in the Email Address field when no value for Full Name is set at all. Hence, it would be great if I could change the Full Name on the fly as well. I have had no luck finding a plugin for Mail yet.

    † When using sub-addressing, anything that is sent to [email protected] is simply delivered to [email protected]. I'd then like to reply with the same full address, rather than just [email protected] if Mail cannot find a match, without going into the Account Information first. I sometimes also want to compose a new message with a new sub-addressing address.

    Read the article

  • SQL Server 2008 Logshipping not Restoring

    - by Nai
    I am getting the following errors during the restore part of the log shipping process on my secondary server:

        2010-04-01 10:00:01.85  Error: The file 'F:\UK_20100327090001.trn' is too recent to apply to the secondary database 'UK_Backup'.(Microsoft.SqlServer.Management.LogShipping)
        2010-04-01 10:00:01.85  Error: The log in this backup set begins at LSN 55408000007387500001, which is too recent to apply to the database. An earlier log backup that includes LSN 55147000001788900001 can be restored. RESTORE LOG is terminating abnormally.(.Net SqlClient Data Provider)
        2010-04-01 10:00:01.87  Searching for an older log backup file. Secondary Database: 'UK_Backup'
        2010-04-01 10:00:01.90  Skipped log backup file. Secondary DB: 'UK_Backup', File: 'F:\UK_20100324090000.trn'
        2010-04-01 10:00:01.93  Error: Could not find a log backup file that could be applied to secondary database 'UK_Backup'.(Microsoft.SqlServer.Management.LogShipping)
        2010-04-01 10:00:01.93  Deleting old log backup files. Primary Database: 'UK'
        2010-04-01 10:00:01.96  The restore operation completed with errors. Secondary ID: 'c066bb63-930c-4b73-861c-f59f0a38c12c'

    It was happily humming along until I checked it this morning. Some additional details: in the log shipping folder there is one file, UK_20100324090001.trn, dated 2009-3-24. The next most recent .trn file is UK_20100374090001.trn, which is the file that was applied during the restore. Why is there an older trn file seemingly on its own? How can I fix this problem? It'll be a real pain to restart the entire log shipping process. x_x

    Read the article

  • IPCop Packet Mangling

    - by Zenham
    I've found myself in a pickle replacing an old firewall for a client this afternoon. I'm configuring their new IPCop firewall (1.4.21); the Zerina OpenVPN addon is installed.

    What I need to do: there are three network interfaces, currently set up as red (WAN), green (LAN, 192.168.20.0/24) and orange (remote network 10.1.20.0/24). The orange interface is a direct fiber link to another organization.

    Simple description: traffic and networks appear to be properly configured at this point, but I have many (150+) specific IPs on the LAN which, when accessing the resources on the 10.1.20.x network, need to be mangled to appear to be coming from the 10.1.20.0/24 network (and return traffic properly delivered). The routing on the far side was configured earlier and should be fine, but I need to redirect any packets coming across destined for those IPs to end up at their proper destination. The addressing is fixed and predictable (i.e. 192.168.20.125 -> 10.1.20.125).

    I need to insert whatever rules I have into the IPCop ruleset through /etc/rc.local - I know, I'm just not sure how I should structure this. There are CUSTOMOUTPUT and CUSTOMINPUT targets, both of which currently just consist of the single rule redirecting packets to the OVPNOUTPUT/OVPNINPUT targets, so I'm guessing I should insert a rule matching outbound packets destined for the 10.1.20.x network and redirecting to a new target (maybe called TO-ORANGE), and a rule at the top of CUSTOMINPUT which redirects to a FROM-ORANGE target. Under those targets, I would have rules which do the IP matching and mangling.

    Am I approaching this right? If so, I'm not very familiar with mangle, and would appreciate seeing examples of how to write that source-IP rewrite (see the sketch below). If not, how would you suggest doing this? TIA!

    Edit: I notice additionally that the nat table has CUSTOMPREROUTING and CUSTOMPOSTROUTING targets; I guess I could alternatively put the rules in there....
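
    A rough sketch of one way to do the 1:1 source rewrite the asker describes, using the NETMAP target in the nat table rather than the mangle table. The interface name eth2 for orange and the use of the CUSTOMPOSTROUTING chain are assumptions based on the question; adjust to the actual IPCop layout:

        # rewrite 192.168.20.x sources to 10.1.20.x (same last octet) when heading out the orange interface
        iptables -t nat -A CUSTOMPOSTROUTING -o eth2 -s 192.168.20.0/24 -d 10.1.20.0/24 \
            -j NETMAP --to 10.1.20.0/24
        # replies are translated back automatically by connection tracking, so no explicit inbound rule is needed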

    Read the article

  • Tomcat 6 Windows Server 64 Redirect Connector Fails

    - by Rafe
    So, is there some problem with running the Tomcat connectors under a 64-bit Windows OS? Here's my configuration:

    - Windows Server 2003 64 bit
    - Intel Xeon
    - Tomcat 6.0.26
    - JVM 1.6.0 (64bit)
    - ISAPI Redirect Connector 1.2.30.0 (64 bit)

    Calling the IP address of the site with :8080 brings up the Tomcat page, so I know that's running, and the examples all work, so it's obviously not having a problem with the JVM. Calling the site IP on port 80, however, gives me error 324 - looking at the application log on Windows shows "Could not load all ISAPI filters for site/service. Therefore startup aborted". The ISAPI filter page under the web site properties shows the status of this filter to be down, with a red arrow. The ISAPI filter name is jakarta and there is a corresponding virtual directory set up in the root of the site pointing to the same directory as the filter. The jakarta web service extension is also pointing to the required dll (c:\program files\apache software foundation\jakarta isapi redirector\bin\isapi_redirect.dll).

    Incidentally, this same problem occurs when trying to use Tomcat 5.5. I've also tried swapping out various redirector versions. It's really odd, because I got it to work once with a version of the redirector that came with Plesk, but I've since uninstalled everything to do with Plesk, and even trying to use the Plesk-compiled dll doesn't work now. I am pulling my hair out on this - any ideas?

    Read the article

  • Open Office crashes, recovers, crashes again

    - by Daniel R Hicks
    After completely reinstalling my laptop due to apparent registry corruption, I've encountered a problem with OpenOffice: I open a simple Calc spreadsheet, it comes up normally, but then after anywhere from 5 seconds to several minutes (without even touching the Calc window) OO crashes, then comes back up through recovery. If I let it "recover" it will do so and bring the spreadsheet up again, only to repeat the crash scenario. If I kept clicking "OK" it would apparently do this all day.

    I reinstalled OO once and the problem went away for a while, but it came back. I then attempted to "reset" my profile (i.e., rename the OO user directory in App Data), but OO crashed during the first startup after that, then resumed the original behavior. If I open the same file using Excel it complains of errors in the file and "recovers" them, but the "error report" it generates contains no details. If I save the "recovered" file then OO Calc will open it, but the problem returns after saving again. Any ideas? (The system is Vista SP2, running OO 3.4.1.)

    How to reproduce:

    1. Start OpenOffice Calc.
    2. Save the workspace as "CrashTest.ods".
    3. From Task Manager kill OpenOffice (soffice.exe/bin - one of each).
    4. Double click on the saved "CrashTest.ods" in Explorer.
    5. OO puts up a message that recovery will occur - allow it.
    6. When the Calc window comes up, don't touch it - just wait about 10 seconds.
    7. The Calc window closes and OO puts up a message that recovery will occur - from now on the sequence will repeat.

    I suspect this behavior is limited to a few (recent) versions of OO, and very possibly only Calc. Reported as OpenOffice Bug 1211094. Sigh!! As much as it irritates me, I'm having to switch over to Excel for several things I used to do with Calc. Excel has a miserable UI, but at least it stays up for longer than 10 seconds.

    Read the article

  • 3d Studio Max and 2+CPUs - Core limit ?

    - by FreekOne
    Hi guys, I am scouting for parts to put in a new machine, and in the process, while looking at different benchmarks, I stumbled upon this benchmark and it got me a bit worried. Quote from it:

        Noticably absent from this review is an old-time favorite, 3ds Max. I did attempt to run our custom 3ds Max benchmark on both the 2009 and 2010 versions of the software, but the application would simply not load on the Westmere box with hyper-threading enabled. Evidently Autodesk didn't plan far enough ahead to write their software for more than 16 threads. Once there is an update that addresses this issue, I will happily add 3ds Max back into the benchmarking mix.

    Since I was looking at dual hexa-core Xeons (X5650), that would put my future machine at 24 logical cores, which (duh) is well over 16, and since I'm mostly building this for 3ds Max work, you can see how this would seriously spoil my plans. I tried looking for additional information on this potential issue, but the above article seems to be the only one that mentions it. Could anyone who has access to a 16-plus-core machine or has in-depth knowledge about 3ds Max please confirm this? Any help would be much appreciated!

    Read the article

  • Windows Authentication behaves oddly when VPN'd

    - by Dan F
    Hi all. We've got a few apps that rely on Windows authentication - a couple of web apps with AD auth turned on, and we usually connect to our SQL servers with Windows auth. This normally runs without a hitch. It doesn't work so well if we're VPN'd to a client site, though.

    SSMS: Opening SSMS normally from the start menu, then picking a server that normally accepts Windows auth, results in a message saying:

        Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. (.Net SqlClient Data Provider)

    If I drop to a command prompt and use runas /user:domain\user to launch SSMS, I can successfully Windows auth to our SQL server instances with that ssms process. If I look in Task Manager, both copies of ssms.exe (start menu vs runas) have the same user, and I can see no discernible differences between the processes in procexp.

    AD auth websites: If I open IE and browse to any of our websites that require an authenticated Windows user, I get the "who are you" prompt, and that dialog thinks I'm whoever the VPN user is. I can click "Use another account" and authenticate that way, though.

    Outlook: Even Outlook prompts for a username when we are VPN'd!

    It's affecting our Win7 and Vista machines. It's been a while since we had an XP box, but I don't recall having this issue on XP, for what it's worth. The VPN connections are just using the built-in Windows VPN connections; they're not fancy Cisco VPNs or anything of that nature.

    Does anyone know how to tell Windows that I'd like to be my normal old primary domain user rather than the VPN user when authenticating to resources in our domain? Heck, I'd be happy with a solution that prompted me with the "who are you" dialog whenever I try to access resources requiring Windows auth while on the client's VPN. Thanks!

    Apologies if this is more a superuser question; I wasn't sure which site it best suited. It's about networking and infrastructure and plagues all of our developers here, so I hope it's a serverfault Q.

    Read the article

  • Windows NT from vmware to kvm

    - by Luca Rossi
    I'm trying to convert a couple of old Windows NT virtual servers from VMware to KVM. I tried almost all the guidelines and how-tos I found around the web, but with no luck. I have the VMware virtual disk Dlc1.vmdk, a partitioned image. I converted the vmdk into a qcow2 image with the qemu utility and tried to use it with KVM:

        kvm -hda test.qemu -vnc :1 -m 750

    but I receive "error loading operating system". I also tried with raw partitions I can mount through losetup and kpartx, but nothing changed. I also tried to create a brand new image file with:

        qemu-img create -f qcow2 test.qcow2 2G

    I partitioned the new image file and copied the original partition 1 to the new partition 1 with dd:

        dd if=/dev/mapper/loop1p1 of=/dev/mapper/loop0p1 bs=128M

    No luck again. I also tried with a single unpartitioned file:

        qemu-img create -f qcow2 test.qcow2 2G

    and copied partition 1 to the new image file:

        dd if=/dev/mapper/loop0p1 of=test.img bs=128M

    but when booting, I receive a black screen and the virtual machine hangs. The bootloader is loaded successfully, because I also tried with a GRUB live ISO and I receive the same screens and errors. Note that GRUB sees the Windows setup and gives me the boot choice.

    I suspect the problem is that the VMware machine is probably a SCSI guest, and in CentOS 6 (my system) that SCSI emulation is no longer supported. But in that case, what do I change in Windows? I'm not so skilled with MS systems.

    Thank you for the help,
    Luca Rossi
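
    Not from the original post, but for comparison, a minimal sketch of converting the whole VMware disk and presenting it to the guest on an explicit emulated IDE bus (file names follow the question; whether the old NT guest has a matching IDE driver installed is exactly the open question):

        # convert the VMware disk image to qcow2
        qemu-img convert -f vmdk -O qcow2 Dlc1.vmdk test.qcow2
        # boot it, defining the disk explicitly instead of relying on -hda defaults
        kvm -m 750 -vnc :1 -drive file=test.qcow2,if=ide,index=0,media=disk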

    Read the article

  • Recent ImageMagick on CentOS 6.3

    - by organicveggie
    I'm having a terrible time trying to get a recent version of ImageMagick installed on a CentOS 6.3 x86_64 server. First, I downloaded the RPM from the ImageMagick site and tried to install it. That failed due to missing dependencies:

        error: Failed dependencies:
            libHalf.so.4()(64bit) is needed by ImageMagick-6.8.0-4.x86_64
            libIex.so.4()(64bit) is needed by ImageMagick-6.8.0-4.x86_64
            libIlmImf.so.4()(64bit) is needed by ImageMagick-6.8.0-4.x86_64
            libImath.so.4()(64bit) is needed by ImageMagick-6.8.0-4.x86_64
            libltdl.so.3()(64bit) is needed by ImageMagick-6.8.0-4.x86_64

    I have libtool-ltdl installed, but that includes libltdl.so.7, not libltdl.so.4. I have a similar problem with libHalf, libIex, libIlmImf and libImath. Typically, you can install OpenEXR to get those dependencies. Unfortunately, CentOS 6.3 includes OpenEXR 1.6.1, which includes ilmbase-devel 1.0.1. And that release of ilmbase-devel includes newer versions of those dependencies:

        libHalf.so.6
        libIex.so.6
        libIlmImf.so.6
        libImath.so.6

    I next tried following the instructions for installing ImageMagick from source. No luck there either. I get a build error:

        RPM build errors:
            File not found by glob: /home/sean/rpmbuild/BUILDROOT/ImageMagick-6.8.0-4.x86_64/usr/lib64/ImageMagick-6.8.0/modules-Q16/coders/djvu.*

    I even re-ran configure to explicitly exclude djvu and I still get the same error. At this point, I'm pulling my hair out. What's the easiest way to get a relatively recent version of ImageMagick (> 6.7) installed on CentOS 6.3? Does someone offer RPMs with dependencies somewhere?
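
    Not part of the original question, but a sketch of the plain autoconf-style source build, skipping the SRPM/rpmbuild route where the djvu glob fails. The tarball name follows the version in the question and --without-djvu / --without-perl are standard generated configure switches; whether this sidesteps the asker's packaging goal is a separate question:

        # build ImageMagick from the release tarball without the DjVu coder
        tar xzf ImageMagick-6.8.0-4.tar.gz && cd ImageMagick-6.8.0-4
        ./configure --prefix=/usr/local --without-djvu --without-perl
        make && sudo make install
        # make sure the freshly installed libraries are found at runtime
        echo /usr/local/lib | sudo tee /etc/ld.so.conf.d/imagemagick.conf && sudo ldconfig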

    Read the article

  • Performance variation

    - by Ree
    During my time spent working with multiple machines, I have noticed that the performance of the same machine doing the same tasks in the same order differs, and sometimes the difference is big enough to be noticeable. This applies to all the machines I've owned and/or maintained (old and modern). Some examples (many of which you may have noticed yourself) that sometimes complete in different time frames:

    - POST
    - OS installation
    - Hardware tests and operations (usually executed within a customized OS such as one of the many DOS variants), HDD tests and "low level" formats
    - Software installation or other tasks (such as benchmarks) within a general purpose OS (Windows, Linux, etc.)

    I can imagine this is caused by the fact that a machine is built from many components having to communicate as a whole, and since the mechanical and electronic parts aren't perfect, overhead occurs. In the last example, I assume the OS complexity and the multiple concurrently running processes have some additional effect as well. However, I'm wondering if this hardware imperfection and overhead is really large enough to be humanly noticeable? Maybe there are other factors that are just as influential, or even more so? So, in short - why?

    To emphasize: the difference is noticeable on the same machine performing the same tasks, and this applies to ANY machine in my experience. I'm not comparing machine-to-machine performance.

    Read the article

  • External Hard Drive Won't Mount - MAC OSX

    - by dtj
    I have a Western Digital hard drive that's about 4 or 5 years old. It's 500 GB, USB. I use it to back up my Mac every so often. I had it partitioned: one side for full backups, and the other side for general storage of music, installers, etc.

    I decided to get rid of the partition today and dump all the data. So I opened Disk Utility and hit 'erase'. It started thinking and then Disk Utility crashed. After the crash, the hard drive won't mount; however, Disk Utility still sees the drive, just not the individual volume within. I tried booting up DiskWarrior and had no luck there either. It sees the drive as an "unknown drive". When I hit rebuild, it goes through all its steps and then stops because of this error:

        The drive "unknown" is severely damaged and DiskWarrior is unable to determine its case sensitivity

    What can I do at this point? There isn't any physical damage to the drive. It's never been dropped or anything.
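
    Since the asker says the data was going to be wiped anyway, one command-line avenue worth sketching. disk2 is a placeholder - run diskutil list first and be certain you have the external WD drive before erasing anything:

        # identify the external drive and see whether the OS can verify/repair it
        diskutil list
        diskutil verifyDisk disk2
        diskutil repairDisk disk2
        # if repair is hopeless and the data really is disposable, re-partition from scratch
        diskutil eraseDisk JHFS+ Backup GPT disk2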

    Read the article

  • ZFS + FreeBSD + virtualbox

    - by John
    Hi, I'm configuring a FreeBSD server hosting VirtualBox, serving half a dozen mission-critical, busy mail servers. I just learned about ZFS and I'm quite attracted, but I have a few questions:

    1. What is the CPU overhead of ZFS? I googled and found little (or no) benchmarking for that.
    2. From what I learned, when ZFS updates files, it keeps the old file as a snapshot and writes the updated part for the new version. However, that would mean each snapshot it keeps requires significant storage overhead. How much is this storage overhead? For example, suppose I have 2TB of usable space; how much space can actually be used for the latest version of the files one year later?
    3. Is FreeBSD with ZFS hosting VirtualBox, serving half a dozen busy, mission-critical guest mail servers, a reasonable combination? Anything in particular to be careful with? And can I still choose ZFS for the guest OSs? This is because I may build another identical box for redundancy, and will need to do some mirroring between each pair of the guest systems across the boxes.
    4. I'm trying to configure a Dell R710 for this. From what I learned, I shouldn't choose any RAID at all - is that true? In that case, do the drives still arrive hot-swappable?
    5. This may sound a bit pathetic, but since I have no experience with ZFS at all, and this is a mission-critical server, I'll ask just in case: I'm choosing twin Intel L5630 processors and 6 x 600GB 15K RPM Serial-Attached SCSI drives. If I need more space in the future, I would just hot-swap some drives with larger capacity to expand the storage. There is no problem with that, right?
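
    Purely as an illustration of the "no hardware RAID" point in question 4: with the controller passing the six disks through individually, ZFS handles the redundancy itself. The device names and pool layout below are assumptions, not a recommendation for this specific workload:

        # six raw disks handed to ZFS as a double-parity pool
        # (three mirror pairs is the common alternative for faster rebuilds and better random I/O)
        zpool create tank raidz2 da0 da1 da2 da3 da4 da5
        zpool status tank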

    Read the article

  • MSI Installer error 2203; how to force permissions on installer directory?

    - by goober
    [Cross-posted on StackOverflow.com as well, because the question relates to development. Feel free to let me know where it best belongs.]

    Hi all, I'll try to bullet-point to keep it short:

    Background / Issue

    - Trying to install ASP.NET MVC 3 RC on my Windows 7 machine.
    - Uninstalled other versions of MVC (2 and 3 Beta 1).
    - Ran the installer -- got a generic error, 2203.
    - Log files said that it was a permissions error on C:\Windows\Installer.
    - Checked C:\Windows\Installer -- sure enough, it's marked as read-only.
    - I un-checked "Read-Only" in the folder properties and applied. It appears to open the dialog and apply to all files. However, when clicking properties again, the read-only box is back to checked.
    - Checked the security tab of the folder -- both SYSTEM and the Administrators group have full access.
    - Checked ownership -- the Administrators group is listed as an owner.
    - Verified that I'm in the system as an Administrator (in fact, the only account in the Administrators group besides Administrator).

    So, what gives? Thanks in advance for any help you can provide!
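
    Not from the original post - a sketch of forcing ownership and permissions on the folder from an elevated command prompt, in case the GUI keeps reverting. (The read-only checkbox on folders is largely cosmetic in Explorer, so the ACLs are the part worth forcing; whether touching this system folder is wise in a given environment is a judgment call.)

        takeown /f C:\Windows\Installer /r /d y
        icacls C:\Windows\Installer /grant "Administrators:(OI)(CI)F" /t
        rem note: this changes ownership of a system folder; consider restoring SYSTEM ownership afterwards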

    Read the article

  • GNU screen, how to get the current session name programmatically

    - by Jimm Chen
    [This can be considered step 2 of my previous question, "Is it possible to change GNU screen session name after created?"]

    Actually, I'd like to write a script that can display the current screen session name and change the current session name. For example:

        sren armcross

    would change the session name to armcross (ARM gcc cross compiler) and output something like:

        screen session name changed from '25278.pts-15.linux-ic37' to 'armcross'

    So, the key question now is how to get the current session name. Not only to display the old session name; according to "Is it possible to change GNU screen session name after created?", I have to know it (to pass to -d -r) before I can change it to something else.

    Can we use $STY for the current session name? No. $STY will not change after you have changed the session name to a user-defined one. However, for the command

        screen -d -r <oldsessname> -X sessionname armcross

    <oldsessname> should be the user-defined name (if one was ever defined) instead of $STY, otherwise screen spouts the error "No screen session found."

    Maybe there is a verbose way: use screen -list to list all sessions (user-defined names included), then match the pid part of $STY against those listed sessions, and we will find the current session's user-defined name. It should not be so verbose for such a straightforward question, don't you think?

    The -d -D and -r -R options seem to expose too much implementation detail to screen's users. It seems that, to rename a session, you have to detach it, then do the rename, then reattach it. Right?

    My env: openSUSE 11.3, GNU screen 4.00.03 (FAU) 23-Oct-06

    Thank you.
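
    A sketch of what such a script might look like, built on exactly the screen -ls matching the asker calls "verbose". Addressing the session by the PID portion of $STY (which never changes) is the assumption that makes it work for the usual pid.tty.host naming:

        #!/bin/sh
        # sren: rename the current GNU screen session from inside it
        pid=${STY%%.*}                        # the PID part of $STY is stable across renames
        old=$(screen -ls | awk -v p="$pid" '$1 ~ "^"p"\\." {print $1}')
        screen -S "$pid" -X sessionname "$1"  # address the session by PID, not by name
        new=$(screen -ls | awk -v p="$pid" '$1 ~ "^"p"\\." {print $1}')
        echo "screen session name changed from '$old' to '$new'"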

    Read the article

  • memcached append() php ubuntu - bad protocol

    - by awongh
    I am running Ubuntu Gutsy (7.10), PHP 5, and I am trying to get memcached running locally. I installed everything as per the docs: memcached daemon, PHP PECL extension, libevent, etc. But now I can only run half of the example script for Memcached::append():

        <?php
        $m = new Memcached();
        $m->addServer('localhost', 11211);
        $m->setOption(Memcached::OPT_COMPRESSION, false);
        $m->set('foo', 'abc');
        $m->append('foo', 'def');
        var_dump($m->get('foo'));
        ?>

    The script terminates at append() with an RES_BAD_PROTOCOL error message. It still runs the get(). I don't know why memcached would otherwise be working fine (connect, set, get - with the correct value of 'abc') and not work for append. It also doesn't work with prepend. I believe I have the setup correct, but I am not sure. Maybe there are compatibility problems between the versions of the dependencies? Thanks much
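
    One way to rule the server out (not from the original post): talk the plain memcached text protocol directly and see whether append works there. The append/prepend commands were only added to memcached itself in 1.2.4, so an older daemon from Gutsy-era packages would explain a protocol error from the client:

        # set a key, try to append to it, then read it back over the raw text protocol
        printf 'set foo 0 0 3\r\nabc\r\nappend foo 0 0 3\r\ndef\r\nget foo\r\nquit\r\n' | nc localhost 11211
        # an up-to-date server answers STORED / STORED / VALUE foo 0 6 / abcdef
        # an ERROR after the append line points at the memcached daemon, not the PHP extension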

    Read the article

  • Asus WL-520GU conflicting subnet (and/or IP) with 2Wire DSL

    - by Paula
    I have an Asus wireless router, a WL-520GU, and an AT&T 2Wire for my DSL connection. When I try to browse anywhere, I just get an odd message from the Asus router (in the common Asus broken English, with bad formatting and awful spelling): http://postimage.org/image/upxrjflcj

    I guess it's trying to say: "Your Asus router and your 2Wire have the same subnet mask." (It doesn't say if that's good or bad, but it sounds like they must be different.) But for the "solution" it looks like it's trying to say: "Your Asus router and your 2Wire have the same IP address."

    My Asus has the defaults: 192.168.1.1 and 255.255.255.0. My 2Wire has: 192.168.1.66. I'm not seeing where the conflict(s) could be. The Asus firmware is v3.0.0.14. None of these problems occur with the old v3.0.0.8 firmware.

    Any ideas on how to fix this? (PLEASE don't say to run a totally different DD/Tomato firmware because it's "better". I need to fix THIS one problem, not try to convince my company to switch everything to an entirely different set of problems.)

    Read the article
