Search Results

Search found 10661 results on 427 pages for 'move semantics'.

  • Computer Comparison - which is "better"

    - by David Murdoch
    A company I work with recently replaced their old server and gave it to me. Their old server is a Dell PowerEdge 2600. I've been playing with the machine and even installed Windows Server 2008 on it... and it seems to run it pretty well. Here are the specs for the two machines:

    Dev machine:
    - AMD Athlon64 3000+ @ 2.38 GHz (overclocked from 1.8 GHz [@ 280x8.5] - it is stable-ish)
    - Memory (RAM): 1x1GB OCZ PC3200 (dual-channel)
    - 300 GB HD
    - OS: Windows XP Pro (32-bit)
    - SuperPi 1M-digit test: 40 seconds

    Dell PowerEdge 2600 server:
    - Intel Xeon CPU @ 2.8 GHz
    - Memory (RAM): 2x512MB (PC2700, not dual-channel)
    - 68 GB HD (RAID 5)
    - OS: Windows Server 2008 (32-bit)
    - SuperPi 1M-digit test: 56 seconds [using 1 processor]

    (Themes and the Aero Glass UI turned off, of course.) I use my computer to regularly run Photoshop CS5, Illustrator CS5, Flash CS5, 5 browsers (Chrome, FF, IE, Safari, Opera), iTunes, Visual Studio 2010, and Kaspersky Internet Security 2010 [sometimes simultaneously :-)]. The SuperPi test has my dev machine coming in about 30% faster than the server machine, though this could be due to the server running "Vista" with background processes prioritized. Do you think it would be realistic/advantageous for me to move from my dev machine to the Dell PowerEdge 2600? Is it possible to install additional DVD drives/burners in the server? Can I install my internal 300 GB hard drive in the server? Can I add some USB 2.0 ports? Note: I'll probably install Win XP Pro on the dev machine if I do switch. If not, are there any creative and useful ways for me to take advantage of this server (with the goal of faster computing)?

  • SQL 2008 disk layout on a budget (this is for database mirroring)

    - by user22215
    Guys, I'm rolling out a SQL database server that will be used to back SharePoint 2007. Right now I need some advice on my disk layout. I have two Dell servers that are configured a little differently in terms of storage. The principal server will be using a combination of local storage and SAN storage. I have to work with what I have; the organization's SAN storage is currently all allocated, and it was like pulling teeth to even get what I have to work with now. My disk setup on the principal is as follows:

    - RAID 1 for OS
    - RAID 10 for logs
    - RAID 10 fiber on SAN for high-I/O databases
    - RAID 10 SATA on SAN for content databases

    My question in regards to the principal server is: where should I place tempdb? I thought about placing it on the fiber RAID 10, which will be hosting my high-I/O SharePoint SSP databases; my only other choice is to move it to the RAID 1 OS partition, which I'm sure you guys will be against. Now let's talk about the mirror server: it is not connected to the SAN; it is all local, 6 15k SAS drives. Now my question is the same: do I put tempdb on the OS partition, or do I leave the OS partition alone and use a single RAID 10 for everything else? Any help you can provide is much appreciated.

  • Installation of Active Directory on separate VM from DNS does not entirely work - not sure why

    - by René Kåbis
    Not sure what I am doing wrong here. I have a moderately midrange server (16 cores, 2 GHz, 32GB ECC REG RAM, 6TB storage, nothing too extreme) where I am running Hyper-V (Server 2012 R2 Enterprise) in order to provision virtual machines. So why an AD separate from DNS? I want redundancy. I want to be able to move VMs and back them up individually, and not have too many services on any one VM. I have already provisioned a VM with DNS, and have set it up right -- essentially, I have:

    - Set up static IPs for everyone involved.
    - Installed the DNS service on the DNS VM.
    - Created a forward lookup zone and a reverse lookup zone (primary zone) xyz.ca.
    - Configured the zones to use nonsecure and secure dynamic updates (I will change this to secure later, after the domain controller is online).
    - Created an A record for the DC in the forward lookup zone (and a reverse PTR).
    - Changed the DC's DNS server (network settings) to the new DNS server.
    - Checked that I can ping the DNS server from the new DC by hostname.

    When I went ahead and did a dcpromo on the DC, and unchecked the "install DNS" option, everything seemed to go well (no error messages), but I saw no changes on the DNS server whatsoever (no additional settings). Plus, the DNS server seems to be unable to join the domain, as it claims that the domain is not discoverable. As a final note, I do run Symantec Endpoint Protection, which includes a firewall, with most settings left at their defaults. I have not yet tried turning this off, but my experience has been that if a service would open up a port on the Windows firewall, it would do the same through Symantec. There is pretty tight integration these days between corporate-class AV and Windows. I have a template vhdx fully set up (just short of any special roles and features) that I can use to replace the current AD VM with, so doing this all over again is not too much skin off of my nose.
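
    For what it's worth, two quick checks from the new DC might narrow down where the promotion stopped short -- a minimal sketch, assuming the zone really is xyz.ca as above (dcdiag ships with the AD DS tools):

        rem Did the DC manage to register its SRV records in the new zone?
        nslookup -type=SRV _ldap._tcp.dc._msdcs.xyz.ca

        rem Built-in health tests for DNS registration and DC advertising
        dcdiag /test:dns /v
        dcdiag /test:advertising

    If the SRV lookup comes back empty, the "domain is not discoverable" error is just the DNS VM failing to find a domain controller record, which would point at dynamic registration rather than at Symantec.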

  • Batch deletion of smaller files from group of files via unix command line

    - by artlung
    I have a large number (more than 400) of directories full of photos. What I want to do is to keep the larger sizes of these photos. Each directory has 31 to 66 files in it. Each directory has thumbnails and larger versions, plus a file called example.jpg. I dispatched the example.jpg files easily with:

        rm */example.jpg

    I initially thought that it would be easy to delete the thumbnails too, but the problem is they are not consistently named. The typical pattern was photo1.jpg and photo1s.jpg. I did:

        rm */photo*s.jpg

    but it turned out some of the files named photoXs.jpg were actually the larger ones, not the smaller. Argh. So what I want to do is scan each directory for file size and delete (or move) the thumbnails. I initially thought I'd just ls -R every file, extract the size of each, and delete those under a threshold. The problem? In one directory the large version will be 1.1 MB and the thumb 200k; in another the large is 200k and the small 30k. Even worse, the files really are mostly named photo1.jpg - so simply putting them all in the same folder, sorting by size, and deleting in groups would not work without renaming them first, and if it's possible I'd prefer to keep them in their folders. I was almost resolved to just doing this all manually, but then thought I'd ask here. How would you do this task?
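
    One way to sidestep the threshold problem entirely is to compare each suffixed file against its partner in the same directory and keep whichever of the pair is bigger. A minimal sketch, assuming GNU stat and that every thumbnail candidate pairs photoN.jpg with photoNs.jpg as described above:

        #!/bin/bash
        # For each photoNs.jpg with a matching photoN.jpg in the same
        # directory, delete the smaller of the pair (keeps the larger
        # image regardless of which name it ended up under).
        for small in */photo*s.jpg; do
            big="${small%s.jpg}.jpg"          # photo1s.jpg -> photo1.jpg
            [ -f "$big" ] || continue         # no partner file, skip
            if [ "$(stat -c%s "$small")" -lt "$(stat -c%s "$big")" ]; then
                rm -- "$small"
            else
                rm -- "$big"                  # names were swapped here
            fi
        done

    Swapping rm for mv into a per-directory holding folder would make for a safer dry run first.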

  • I need an admin toolset for Windows 2003 and 2008

    - by eugeneK
    I know this is a way too general question, but anyway: I need a few tools. I'll write down my tasks as sysadmin, and if you have anything to automate this job I would be glad to hear about it. I don't mind paying for the software needed unless it is way too expensive.

    First off, I back up all files on the server to local/office storage. I 7zip all SQL backup files and then move them over the network to a centralized location, and then FTP them from an office PC which has no FTP server installed and cannot have one. Backups happen at 4 AM in the morning, thus I need to set a time for compressing and, afterward, FTPing. Then I FTP all IIS web applications as a differential backup; same goes for VOD movies.

    The second tool I need is a system monitor which will watch all servers both from themselves and from an external location for CPU/memory/hard disk and other basic failures. This tool should be able to request a website address with parameters, which will send me an email with a full report on the failure.

    The third tool I need is a way to get all Event Logs from 10 Windows-based servers without accessing each of them manually. If you know any solutions, thanks in advance.
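
    For the first task, the compress-then-upload step can be scripted with the 7-Zip command-line client and Windows' built-in scripted ftp mode; a rough sketch, where every path, host name, and credential is a placeholder (and %DATE% formatting, if you add it to the archive name, varies by locale):

        @echo off
        rem ship-backups.cmd -- compress last night's SQL backups, then upload
        "C:\Program Files\7-Zip\7z.exe" a -t7z D:\staging\sqlbackups.7z D:\backups\sql\*.bak

        rem Build a one-shot command script for the stock Windows ftp client
        echo user backupuser secretpass> ftpcmds.txt
        echo binary>> ftpcmds.txt
        echo put D:\staging\sqlbackups.7z>> ftpcmds.txt
        echo bye>> ftpcmds.txt
        ftp -s:ftpcmds.txt ftp.example.com

    Scheduling it for after the 4 AM backup window is then one schtasks line, e.g. schtasks /create /tn ShipBackups /tr D:\scripts\ship-backups.cmd /sc daily /st 04:30.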

  • SQL Server transaction log backups

    - by krimerd
    Hi there, I have a question regarding transaction log backups in SQL Server 2008. I am currently taking full backups once a week (Sunday) and transaction log backups daily. I put the full backup in folder1 on Sunday, and then on Monday I also put the 1st transaction log backup in the same folder. On Tuesday, before I take the 2nd transaction log backup, I move the 1st transaction log backup from folder1 into folder2, and then I take the 2nd transaction log backup and put it in folder1. Same thing on Wed, Thu and so on. Basically, folder1 always holds the latest full backup and the latest transaction log backup, while the older transaction log backups are in folder2. My question is: when SQL Server is about to take, let's say, the 4th (Thursday) transaction log backup, does it look for the previous transaction log backups (1st, 2nd, and 3rd) so that the new backup will only include the transactions since the last backup, or does it have some other way of knowing whether there are other transaction log backups? Basically, I am asking this because all my transaction log backups seem to be about the same size, and I thought that their size would depend on the amount of transactions since the last transaction log backup. Can anyone please explain if my assumptions are right? Thanks...

  • Object Not found - Apache Rewrite issue

    - by Chris J. Lee
    I'm pretty new to setting up Apache locally with XAMPP. I'm trying to develop locally with XAMPP for Linux 1.7.4 (Ubuntu 11.04) for a Drupal site. I've actually git-pulled an exact copy of this Drupal site from another testing server hosted at MediaTemple.

    Issue: I'll visit my local development environment virtual host (http://bbk.loc) and the front page renders correctly, with no errors from Drupal or Apache. The issue is that subsequent pages return an "Object not found" error from Apache. What is more bizarre is that when I add various query strings, the pages are found (like http://bbk.loc?p=user).

    VHost file:

        NameVirtualHost bbk.loc:*
        <Directory "/home/chris/workspace/bbk/html">
            Options Indexes Includes execCGI
            AllowOverride None
            Order Allow,Deny
            Allow From All
        </Directory>
        <VirtualHost bbk.loc>
            DocumentRoot /home/chris/workspace/bbk/html
            ServerName bbk.loc
            ErrorLog logs/bbk.error
        </VirtualHost>

    bbk.error log file:

        [Mon Jun 27 10:08:58 2011] [error] [client 127.0.0.1] File does not exist: /home/chris/workspace/bbk/html/node, referer: http://bbk.loc/
        [Mon Jun 27 10:21:48 2011] [error] [client 127.0.0.1] File does not exist: /home/chris/workspace/bbk/html/sites/all/themes/bbk/logo.png, referer: http://bbk.$
        [Mon Jun 27 10:21:51 2011] [error] [client 127.0.0.1] File does not exist: /home/chris/workspace/bbk/html/node, referer: http://bbk.loc/

    Actions I've taken:

    - Moved rewrite module loading to load before the cache module (http://drupal.org/node/43545)
    - Verified mod_rewrite works with an .htaccess file

    Any ideas why mod_rewrite might not be working?

  • Need Routing help (tagged/untagged)

    - by TheCleaner
    I really need some help trying to figure out some "basic" routing. My brain is fried from being sick for a week and I'm not thinking clearly. The picture below describes my "setup". I'm trying to accomplish routing a user from their workstation to the Juniper SSG520 and then "OUT" through the internet connection. I can't move the connection, as it is physically located where the user's switch is.

    Here's what I CAN do at this point:

    - I can ping from the Juniper SSG520 eth3/3 to 6x.xxx.253.116 from 6x.xxx.253.114
    - I can ping from the x450 in the top right to 6x.xxx.253.112 from 6x.xxx.253.116

    What I CANNOT do:

    - I cannot ping from the SSG520 eth3/3 to 6x.xxx.253.112 from 6x.xxx.253.114 (basically from the Juniper box to the gateway)

    I've tried changing port 1 on the x450 to tagged in VLAN 666, but when I do that I can't even ping from the Juniper SSG520 eth3/3 to the VLAN on the x450 (6x.xxx.253.116). I need to route traffic out the eth3/3 interface on the SSG520 THROUGH the two x450 switches and out the internet connection. The caveat is that the two x450 switches are connected via fiber over distance and have tagged VLANs in them for the routing. Thoughts?

    http://img251.imageshack.us/img251/7752/drawing1.jpg

  • What is a good client for handling large amounts of mail?

    - by ldigas
    Although the title sums it up nicely, I'll repeat and explain. What would be a good email client for handling large amounts of mail? A large portion of the mail I receive comes with attachments (zip, rar, pdf, dwg, etc.), and within a month I usually have another 1.5-2 GB of new mail. I've noticed that the 'standard' Outlook Express (with whose interface I've been very happy) gets awfully slow after a while. Archiving helps, but not much. Then I usually take the files, move them onto a DVD, delete all messages I can do without, and start anew. The thing is, I would love to have them all in the email client, since I often go after some old mails (slow projects). So, what would be good alternatives? If it is portable, that would also be nice, but I can also live without that. Post scriptum: I love @gmail, but cannot use it for work. I know I could theoretically forward all of it there, and back, but that approach doesn't make my boss very happy (email handling policies and similar).

  • CMS/Wiki to use for an HTML5 video site

    - by Clinton Blackmore
    Greetings. I want to put up a website with instructive screencasts and allow people to add comments on them. I would like to use the Video for Everybody technique, partly because I dislike Flash and because it helps in a small way to move the web forward [while being backwards compatible]. I recognize that HTML5 is still in draft, and that support for it varies. I do have some hosting space, and can run Perl, PHP, and Ruby on Rails applications, with a MySQL backend. I should mention that part of my working job involves running some web servers, and that I am a programmer by training (with only a limited familiarity with Perl and PHP, and none with Ruby). I should also mention why I don't particularly want to go with a video hosting site (like YouTube or Vimeo):

    - Flash
    - Video resolution and quality [I'd like to put up 800x600 videos]
    - The videos promote a club that is not strictly non-profit [i.e. they may fall afoul of the Terms of Service]
    - I'm already paying for web hosting, and free video hosting comes with time and bandwidth limits
    - I don't want there to be two locations where you can comment on a video

    Now, having said all that, I'd be quite comfortable putting up my own HTML pages, except:

    - that's so web 1.0! :) [i.e. it does not allow for comments]
    - I also want to do some blogging and possibly put up a wiki; the site will not be entirely screencasts

    So, can anyone recommend a CMS (or wiki, or similar application) that I can customise for this purpose?

  • How to extract a Vorbis stream from a WAVE file?

    - by H.B.
    I would like to move the Vorbis stream into an Ogg container, but ffmpeg does not seem to recognize the stream, even though MPlayer gives this output upon playback:

        Opening audio decoder: [acm] Win32/ACM decoders
        Loading codec DLL: 'vorbis.acm'
        Loaded DLL driver vorbis.acm at 10000000
        Warning! ACM codec reports srcsize=0
        AUDIO: 44100 Hz, 2 ch, s16le, 128.0 kbit/9.07% (ratio: 16000-176400)
        Selected audio codec: [vorbisacm] afm: acm (OggVorbis ACM)

    ffmpeg:

        ffmpeg -i Source.wav -acodec copy Target.ogg
        Input #0, wav, from 'Source.wav':
          Duration: 00:02:15.17, bitrate: 128 kb/s
            Stream #0.0: Audio: qg[0][0] / 0x6771, 44100 Hz, 2 channels, 128 kb/s
        [ogg @ 00000000003096C0] Unsupported codec id in stream 0
        Output #0, ogg, to 'Target.ogg':
          Metadata:
            encoder         : Lavf53.6.0
            Stream #0.0: Audio: qg[0][0] / 0x6771, 44100 Hz, 2 channels, 128 kb/s
        Stream mapping:
          Stream #0.0 -> #0.0
        Could not write header for output file #0 (incorrect codec parameters ?)

    Of course this does not necessarily need to be done via ffmpeg; any method that is workable would be fine... I have cut down one of the files to 512KB: sample.wav (I changed two chunk-size fields in the WAVE header to account for this; the embedded stream is cut "without notice").

  • Need to Remove Exchange 2003 Server That Crashed During Transition to 2010

    - by ThaKidd
    As the title states, we were running an Exchange 2003 server that we knew was going down soon, so we purchased a second server and installed Exchange 2010 into the AD. We managed to move all of the mailboxes off of 2003 and also managed to get the Offline Address Book set up on 2010. At this point the 2003 server bit the dust and will no longer boot. Therefore we were unable to properly uninstall Exchange and remove the last 2003 server, so it still exists in AD. As far as the clients are concerned, everything is working properly. However, when I run the Microsoft Exchange Profile Analyzer, I still see the old server and its Administrative Group. I am going to guess that since the old server is showing up in AD, I will not be able to raise the Exchange or AD functional levels (the 2003 server was also the only AD DC). I have since forced the 2003 DC out of AD, so that is no longer an issue.

    Old setup: Windows 2003 Server Enterprise & Exchange 2003 Standard
    New setup: Windows 2010 Server Enterprise & Exchange 2010 Standard

    Two questions:

    - How do you go about manually forcing the 2003 server and its administrative group out of AD?
    - When that is finished, where do you raise the Exchange mode (can't find this for the life of me)?

  • What could be causing SVCHost to leak handles?

    - by Goz
    I have a problem that has been causing me all sorts of grief recently. SVCHost appears to be leaking resources all over the shop. This is the SVCHost instance run with the arguments "-k netsvcs". At the moment it is sitting at around 5,700 handles in use. Before I rebooted the machine it was sitting at around 33,000 handles! This higher number has been causing me large problems, as my software then fails to obtain the handles it needs (the software tries to create around 2,000 handles). I'm totally at a loss as to what is going wrong. If anyone could help me stop this happening it would be much appreciated. I'm running on XP with SP3.

    Edit: I tracked this problem down to the WMI system. I'm not sure why or how the problem was occurring. Basically, I used "sc config" to move the service into its own process, and suddenly everything seems to be fine. I'm not entirely sure what is going on...
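
    For anyone wanting to reproduce the workaround, a sketch of the relevant commands -- this assumes the WMI service name winmgmt, and note that sc's syntax requires the space after type= :

        sc config winmgmt type= own
        net stop winmgmt
        net start winmgmt

    Running WMI in its own svchost also makes any future handle growth attributable to one service instead of the whole netsvcs group.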

  • How to use Zune software to listen to podcasts with generic MP3 player?

    - by Bevan
    I listen to a bunch of podcasts - a great way to fill the otherwise mindless space of a daily commute. My MP3 player is a Transonic brand and appears on my computer as a generic storage device. I've been using iTunes to download the podcasts and manually moving the files out of the disk folder onto my player, but this is pretty tedious. iTunes also fails to recognise that the files are gone and leaves them in the list. (Actually, iTunes for Windows is pretty much a dog, but that's a different rant.) The Zune software is 99% of what I want in a podcast downloader - it performs well, looks nice, downloads reliably and so on. Some features - like only downloading the next five unheard episodes of a podcast - are superb. However, if I manually move the files across to the MP3 player, the Zune software concludes that a file has never been downloaded and downloads it again. This leads me to my question: what is a good way to use the Zune software to download podcasts for listening on a generic MP3 player?

    - Are there any add-ons for the Zune software to make this easier? Registry hacks?
    - Can I configure the Zune software to not download the same episode multiple times?
    - Is there a way for the Zune software to populate my MP3 player directly, instead of my having to copy files?

  • PostgreSQL 8.4 - Tablespace Optimization

    - by FloE
    I'm currently running a PostgreSQL database with about 1.5 billion rows / 500 GB of data (including indices). There are several schemata: one for the (read-only, irregular changes/updates) 'core model' and one for every user (about 20 people). The users can access the core and store data in their own schema, so everything is located in one database. The server runs CentOS and PostgreSQL 8.4, is used for scientific studies, exploration etc., and is running quite well. These days an upgrade of the DB storage hard disks is arriving - all with the same performance as the old ones. I'm looking for the best way to distribute the data across these disks. It would be possible to separate frequently used objects (the core data) from the user schemata, but I'm not sure if this is really worth the effort. It seems to be a much better idea to move the WAL files (the pg_xlog directory) to their own partition. http://www.postgresql.org/docs/8.4/static/wal-internals.html What are your opinions? Are there any tablespace- or partitioning-related performance documentations / benchmarks?
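
    For the WAL relocation specifically, the usual recipe on 8.4 is a symlink swap; a minimal sketch, assuming a stock data directory at /var/lib/pgsql/data and the new disk mounted at /mnt/wal (both paths are assumptions):

        # Never move pg_xlog while the server is running
        service postgresql stop

        # Relocate the WAL directory and symlink it back into place
        mv /var/lib/pgsql/data/pg_xlog /mnt/wal/pg_xlog
        ln -s /mnt/wal/pg_xlog /var/lib/pgsql/data/pg_xlog
        chown -h postgres:postgres /var/lib/pgsql/data/pg_xlog

        service postgresql start

    Since WAL writes are sequential and fsync-heavy, moving them off the spindles that serve random-access data files is usually the easiest win from new disks.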

  • Getting Windows (VMware) to load from OSX's localhost without an Internet Connection

    - by Jonah Goldstein
    I'm using MAMP to host my local sites, and VirtualHostX so that I can access sites during local development via a convenient URL like mysite.dev. I'm also running Windows XP via VirtualBox, and it would be great to be able to load up any of my local sites within Windows while offline, as I'm currently often working without access while on the move, unfortunately. I know that I can append my IP and a nice domain name to the hosts file in C:/WINDOWS/system32/drivers/etc ... and I can find my IP simply through Terminal with "ifconfig" while I'm online. The problem is that when I'm not online, there's no IP. Even if there is an IP (when I have a connection), I still have to grab it and update the Windows hosts file all the time, since I'm developing from a laptop and have a new IP at the drop of a dime. I found a tutorial where the author is able to get a permanent IP. He uses VMware Fusion as his virtual machine, which is the only difference between his setup and mine. By running the terminal command "ifconfig vmnet1" he gets a secret IP the virtual machine uses to talk to OS X - and that doesn't change, which is awesome. I'm assuming it exists even if he's offline. His tutorial is here: http://bit.ly/U2lq It would be pretty fantabulous if I could replicate this with VirtualBox. Anyone have ideas? Thanks :)
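
    VirtualBox's rough equivalent of vmnet1 is a host-only network; a sketch of setting one up from Terminal (the vboxnet0 name and the 192.168.56.1 address are VirtualBox's usual defaults, and the VM name is a placeholder):

        # Create a host-only interface on the OS X side with a fixed IP
        VBoxManage hostonlyif create
        VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1

        # Attach a second adapter of the XP VM to that interface
        VBoxManage modifyvm "WinXP" --nic2 hostonly --hostonlyadapter2 vboxnet0

    Because the interface is purely virtual, 192.168.56.1 stays reachable from the guest with no internet connection, so it can go into the XP hosts file next to mysite.dev once and stay valid.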

  • Suggestions for Windows 8 migration [closed]

    - by Big Endian
    I'm thinking of migrating to Windows 8. At first I hated it, but I'm pretty sure the Windows 8 model is the future, and I don't particularly want to end up hating the future like my parents, frustrated and bewildered by anything past Windows XP. I'm currently running Windows 7, and my system has been accumulating some problems. It's probably an accumulation of issues from installing too much software, changing firewall settings, installing Ubuntu alongside Windows, and... well, I'm not sure, but my computer has been buggy in unexpected ways lately (freezing and unfreezing, the display driver crashing and recovering, and what I call a "deep freeze/thaw cycle" where the mouse won't even move for a while). I'm good at solving computer problems, but I can't seem to get to the root of these, and my best idea for fixing them is to make sure I've backed up every file and then re-install the entire OS. Luckily for me, a new OS is just around the corner, so this would be a good time to get two things out of the way at once. The problem I see is that the upgrade options I see are all "seamless". I don't want a seamless upgrade; I want to wipe the slate clean and start all over. Does this mean I will have to buy a full, new copy of Windows 8 rather than one of the cheaper upgrade options? Or does it not make sense for me to go to Windows 8, given that I have a laptop, not a tablet? Maybe I should just re-install Windows 7, or even call "good enough" good enough, try to eliminate the bugs, and start with a fresh slate in 2-3 years after this computer eventually dies entirely from (inevitable) hardware failure. What would be the advantages, disadvantages, and costs of each option? How would I go about upgrading to Windows 8 if that's the option I choose, and what is your personal opinion about my situation?

  • Why can't I specify the executable that opens files with a given extension on Windows?

    - by Glen S. Dalton
    I am on Windows Server 2003, but I guess it is the same on Windows XP. This is a Super User question, because it is definitely desktop, so do not move or close it.

    Question: I copied some movable (portable) applications - the kind people usually create for USB sticks - to locations like c:\bin\app1\app1.exe. app1.exe can open files of type *.ap1. When I right-click file.ap1 and choose "Open with...", the "Open with" dialog appears. But it is not working how I expect in this situation. I can choose c:\bin\app1\app1.exe with the "Browse" button, but:

    - app1.exe will not appear in the dialog's program list after I click OK on it in the browse dialog, like I am used to.
    - app1.exe will not open the file when I click OK in the "Open with" dialog; the application that was assigned until then will still open it.

    What could be the reason?

    Edit: Additional information: my account is a member of the Administrators group. I just changed the permissions on the folder c:\bin\app1\ and made sure that the group Administrators has all rights, and I also applied this manually to all subfolders and files.

  • Migrating 10-15 Websites Running Linux, LAMP, RoR, WordPress

    - by Michael
    The task is to move 10-15 websites running Linux to new servers hosted by Amazon. These sites are currently on dedicated servers. Some sites are running WordPress, some have a custom CMS, and others might have RoR applications. Unfortunately, there is sparse documentation regarding each site and how services/files depend on each other, which means there is a lot of detective work to be done. My goal is to properly document each site, what makes it work, etc., so future admins have at least something to work with. Currently my strategy is to download each site so I have a backup of the files, then scan through them looking for configuration files - db connections, Apache configs, etc. - and then build a nice spreadsheet with these findings and migrate everything out to the new server. My question to Server Fault is this: are there things you would look out for? Easier ways to handle this task that I'm missing? Points will be awarded to answers that help with efficiency. Thanks in advance.
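
    For the config-hunting step, a grep pass over each downloaded tree will surface most of the usual suspects; a sketch (the patterns and the /srv/site-backup path are assumptions covering WordPress, generic PHP, and Rails conventions):

        # Files that look like they hold credentials or connection strings
        grep -rIl -e 'mysql_connect' -e 'DB_PASSWORD' /srv/site-backup

        # Well-known config files for WordPress, Rails, and Drupal
        find /srv/site-backup \( -name wp-config.php -o -name database.yml \
            -o -name settings.php -o -name '.htaccess' \) -print

    Diffing the listening ports and running services on each old box (netstat -tlnp, chkconfig --list) against what the files suggest is a cheap way to catch undocumented dependencies.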

  • Creating basic, redundant gigE or IB storage network for Xen?

    - by StaringSkyward
    With only a modest budget, I want to move my 4 Xen servers over to network storage - either NFS or iSCSI, which will be determined by how well it performs when we test it (we need good throughput, and it must continue to work through link- and switch-failure tests). We may add another couple of Xen servers at some point when this is done. I don't know much about the design and operation of storage networks, so I would really appreciate some hints from those with experience. The budget is around $3,800, excluding the storage appliance. I am currently thinking these are my options to remain on budget:

    1) Go for used InfiniBand hardware and aim for 10Gb performance.
    2) Stick with gigabit Ethernet and buy some new switches (Cisco or ProCurve) to create a storage-only Ethernet LAN. Upgrade to 10GbE later, but try to use hardware capable of it where possible to reduce upgrade costs.

    I have seen used, warrantied InfiniBand switches at reasonable prices (presumably because big companies are converging on 10Gb Ethernet?) and the promise of cheap 10Gb is attractive. I know nothing about IB, so here come the questions:

    - Can I buy 2 switches and have multiple HBAs in my Xen and storage nodes to get redundancy and increased performance, without complexity or expensive management software costs? If so, can you point me to some examples?
    - Do NFS and iSCSI work just the same regardless?
    - Is IB a sensible choice, or could/should I use Ethernet or FC on the same budget? I'm keen not to get boxed into a corner for future upgrades, however.

    For the storage I am likely to build a storage server using NexentaStor, with the intention that I can later add more disks and SSDs, and add another server to provide a failover option at the storage level. An HP LeftHand starter SAN is under consideration, too. Thanks in advance.

  • Compiled ruby fails to find curses

    - by Hamish Downer
    I'm attempting to install the sup MUA, but I'm having trouble. When I try to run it, it can't find curses:

        /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require': no such file to load -- curses (LoadError)
        ...

    I am installing on a server running CentOS 5. I have compiled Ruby and RubyGems from source, and then installed sup using RubyGems. I followed this article to compile Ruby. I have found a report of a similar problem on Ubuntu. The fix suggested there is to install libcurses-ruby, but I can't find a similarly named package in CentOS. I have installed the ncurses-devel package, as that was required for installing sup using gem. I have also installed the ncurses, cursesx and rbcurse gems, but none of these have fixed the problem. The article above about compiling Ruby said you had to recompile the zlib extension after doing:

        cd ext/zlib
        sudo ruby extconf.rb --with-zlib-include=/usr/include --with-zlib-lib=/usr/lib
        cd ../..
        sudo make
        sudo make install

    So I've tried a few variants in ext/curses. The top few lines of ext/curses/extconf.rb are:

        require 'mkmf'
        dir_config('curses')
        dir_config('ncurses')
        dir_config('termcap')

    So I've tried a few variants of setting paths:

        sudo ruby extconf.rb --with-curses-include=/usr/include --with-curses-lib=/usr/lib --with-ncurses-include=/usr/include --with-ncurses-lib=/usr/lib --with-termcap-lib=/lib
        sudo ruby extconf.rb --with-curses-include=/usr/include --with-curses-lib=/usr/lib --with-termcap-lib=/lib

    and re-done the make, but to no avail as yet. Any ideas to move this forward are welcome.
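
    One variant worth trying, mirroring the zlib recipe above but building inside the extension directory itself so that make actually compiles curses (a sketch, assuming the same Ruby source tree and CentOS's stock package names):

        # Headers first -- without them extconf.rb quietly skips the build
        sudo yum install ncurses-devel

        # Build and install just the curses extension in place
        cd ext/curses
        ruby extconf.rb
        make
        sudo make install

        # Verify with the same ruby that sup will use
        ruby -rcurses -e 'puts "curses loaded"'

    If extconf.rb was originally run before ncurses-devel was installed, the top-level make would have skipped the extension entirely, which would explain the LoadError despite an otherwise successful Ruby build.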

  • Virtual machines interconnection inside Proxmox 2.1 Cluster

    - by Anton
    We have 3 physical servers (each with 1 NIC) in different datacentres, all of them interconnected by an OpenVPN-bridged private network (10.x.x.x). Inside this network we have a fully functional 3-node Proxmox 2.1 cluster. So, the actual question is: is there any "proper" way to make a "global" local network (172.16.x.x) for all VMs inside the cluster, so that even if we move a VM from one node to another we could reach it by a static IP regardless of its physical location? BTW, we can't add a dedicated NIC to each server. Thanks in advance.

    EDIT: I have tried to make a separate OpenVPN bridge for 172.16.x.x. Now I have two interfaces on each server:

    SRV1:
    - openvpnbr1 - 172.16.13.1
    - vmbr0 - 172.16.1.1

    SRV2:
    - openvpnbr1 - 172.16.13.2
    - vmbr0 - 172.16.2.1

    But now there is no connection between those ifaces:

        SRV1: ping 172.16.13.2
        From 172.16.1.1 icmp_seq=2 Destination Host Unreachable

        SRV2: ping 172.16.13.1
        From 172.16.2.1 icmp_seq=2 Destination Host Unreachable

    If I shut down the vmbr0 interfaces, then there is a connection between the servers over OpenVPN, but vmbr0 is used by Proxmox... Where am I wrong?

  • Why do most songs in my media collection play twice? - Corrupt media?

    - by Dean
    Problem: Whether I'm playing the media with Rhythmbox on Ubuntu, Winamp on Windows, or my Nokia N95's media player, most of my audio files (OK, maybe only 40%) play twice.

    Info: I have a 500GB external 2.5" WD HDD with a 150GB primary FAT32 partition labeled MUSIC. Inside this, I have about 500 folders containing about 10,000 MP3/WMA/M4A/WAV files. I manage the drive using Ubuntu 9.10, and frequently copy data to/from it using rsync, or, on Windows, TotalCopy. The visual output is different in each media player, but each behaves as if the one MP3 has the same song on it twice: as soon as it ends, it begins again. Winamp shows the song going for twice as long as it should. The N95's media player shows the progress bar off the right-hand side of the screen when it begins playing (then it jumps back to the left, then continues along...). Rhythmbox doesn't show me how long the song is, nor does the progress bar move along the screen.

    Plea: It seems to me that somewhere along the line my collection has become corrupt... but where? And how? And please, someone tell me I can fix it!! TIA, Dean.

  • Is there a high-quality natural text reader for the Mac?

    - by Another Registered User
    I'm reading about 150 pages of text on screen every day. I will have to read about 15,000 in the upcoming months. No joke. Well, the problem is this: I suffer from a sort of attention deficit hyperactivity disorder which forces me to read every sentence up to 10 times until I really get it. Mac OS X Snow Leopard has a built-in text reader with the name "Alex". Although it is already pretty good quality, I know there are far better natural-sounding voices out there. I have already heard voices that are absolutely amazing compared to Alex. They're so good that you can't tell the difference between a real person and a computer anymore. Alex still has this metallic quality in his voice, which makes my ears hurt after 8 hours of listening. The next problem with Alex is that he never pauses after a sentence. Also, it's not possible to think about a sentence and then continue reading, or to have him repeat a sentence, without tedious text selection and shortcut usage. Actually, the best tool I can imagine would have the option to read a sentence and move on to the next one after pressing a special key, OR repeat the previous one after pressing a special key. That would help so much! And if that came with one of those Bell Labs / AT&T / whatever super-natural voices, even better! But it would already be a great relief if there were just a better tool to control Alex - to make him pause after sentences, or have him speak big chunks of text sentence by sentence with fine-grained control over repetition and moving on. Is there anything?
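
    For what it's worth, OS X's built-in say command uses the same voices, so the read-one-sentence-per-keypress behaviour can be approximated in a few lines of shell; a rough sketch (splitting on ". " is a crude stand-in for real sentence detection):

        #!/bin/bash
        # Speak a text file sentence by sentence with the Alex voice:
        # press 'r' to repeat the current sentence, any other key to advance.
        tr '\n' ' ' < "$1" | perl -pe 's/\. /.\n/g' | while IFS= read -r sentence; do
            while true; do
                say -v Alex "$sentence"
                read -n 1 -s key < /dev/tty
                [ "$key" = "r" ] || break
            done
        done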

  • Windows 7: Setting up backup to an external hard drive on another computer on the network

    - by seansand
    I have an external hard drive connected to a Windows 7 (Home Edition) computer. I have another computer (with Windows 7 Ultimate), and I want the Ultimate machine to back up to the same external hard drive without my having to disconnect the drive and move it from the Home Edition PC. When I get to the "Set up backup" dialog in Windows 7, it asks me where to save the backup, and I select "Save on a network". However, when I enter "\\computername\harddrivename" under Browse, the "OK" button remains grayed out. The button stays grayed out unless I also enter a username and password under "Network credentials" - but the account I have on the other computer doesn't have a password. To un-gray the button I must enter a fake password, which lets me click "OK", but then obviously I get a "bad password" error. Does anyone know how to get around this problem? (Seems kind of ridiculous.) I made sure that the security settings for the external hard drive on the other computer give full access to Everyone, so permissions are not the problem. I also thought about using HomeGroup instead of the regular security settings, but there is no obvious way to go about it that way, either.
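
    One thing worth knowing when testing this: by default, Windows refuses network logons for local accounts with blank passwords, which may be the real blocker here. A quick way to test a set of credentials against the share from a command prompt, before fighting the backup wizard (names are placeholders):

        rem Map the share explicitly with the remote account's credentials
        net use Z: \\computername\harddrivename /user:computername\accountname *

        rem Remove the test mapping afterwards
        net use Z: /delete

    If the mapping only succeeds after setting a (non-blank) password on the remote account, the backup dialog should accept those same credentials.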
