Search Results

Search found 2346 results on 94 pages for 'dedicated'.

  • Web based KVM management for Ubuntu

    - by Tim
    We've got a single Ubuntu 9.10 root server on which we want to run multiple KVM virtual machines. To administer these virtual machines I'd like a web-based KVM management tool, but I don't know which one to choose from the list of tools mentioned on linux-kvm.org. I've used virsh & virt-manager on my desktop, but would like a web interface for the server. I tested ConVirt on my desktop, but it failed to pick up KVM machines from virsh / virt-manager, and I could not get KVM virtual machine import to work (only Xen). oVirt looks good, but I can't find out if and how I can install it on Ubuntu 9.10. (And I'd really rather not waste another few days testing stuff that might not work in the end.) Can anyone recommend a good web-based KVM management tool that is easy to install on Ubuntu 9.10? I'm looking for something that will also allow me to run other services like Apache and PostgreSQL besides hosting virtual machines, so preferably fairly lightweight & no dedicated OS installs. We don't need any professional clustering / migration or anything, just something that will let us create, start, inspect, administer & stop virtual machines from a web page. Best regards, Tim

    Update: Anyone have any suggestions? It's awfully quiet here.
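
    Whatever tool gets picked, the operations it has to expose are small. Below is a minimal sketch of the list/start/stop plumbing such a web front end would wrap, using the libvirt Python bindings (python-libvirt) and assuming the default qemu:///system URI; the guest name is a placeholder.

        # Minimal sketch of the list/start/inspect/stop operations a web front
        # end would wrap, using the libvirt Python bindings (python-libvirt).
        # Assumes the default qemu:///system URI; the guest name is a placeholder.
        import libvirt

        conn = libvirt.open("qemu:///system")

        # Running guests are listed by numeric ID, defined-but-stopped guests by name.
        for dom_id in conn.listDomainsID():
            dom = conn.lookupByID(dom_id)
            state, maxmem, mem, vcpus, cputime = dom.info()
            print("running: %s  vcpus=%d  mem=%d KiB" % (dom.name(), vcpus, mem))

        for name in conn.listDefinedDomains():
            print("stopped: %s" % name)

        # Start or stop a guest by name.
        dom = conn.lookupByName("guest01")
        if dom.isActive() == 0:
            dom.create()        # boot it
        else:
            dom.shutdown()      # ask the guest OS to shut down cleanly

        conn.close()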

  • Consulting: Organizing site/environment documentation for customers?

    - by ewwhite
    Over time, I've taken on consulting and contract engineering work for various clients. More recently, customers are asking for certain types of documentation. These are small businesses that typically do not have dedicated technical staff. Within a single company, a wiki, Confluence, SharePoint, etc. all make sense as a central repository for documentation and environment information. I struggle with finding a consistent method to deliver the following information to discrete customers; I'm shooting for a process that's more portable, secure and elegant than a simple spreadsheet or the dreaded binder full of outdated information:
    - Important IP addresses, DHCP scope, etc.
    - Network diagram (if needed)
    - Administrative usernames, passwords and management URLs
    - Software license keys
    - Support contracts and warranty information
    - Vendor support contacts and instructions
    I know there are other consultants here. Any suggestions or tips on maintaining documentation across multiple environments in a customer-friendly format? How do you do it?
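
    One portable way to keep those items consistent from customer to customer is to treat each site as one small structured record that can live in an encrypted archive or a password manager and be exported for the client. A hedged sketch is below; every field name and value is illustrative only, not a recommendation of a particular schema.

        # Illustrative sketch only: one structured record per customer site, so
        # every engagement captures the same fields. All names/values are made up.
        import json

        site_record = {
            "customer": "Example Dental Group",
            "last_reviewed": "2012-06-01",
            "network": {
                "wan_ip": "203.0.113.10",                 # documentation-range address
                "lan_subnet": "192.168.10.0/24",
                "dhcp_scope": "192.168.10.100-192.168.10.199",
            },
            "management_urls": ["https://192.168.10.1"],
            "credentials_ref": "customer KeePass file",   # never store secrets inline
            "licenses": [{"product": "Example AV", "key_ref": "license sheet, row 3"}],
            "support_contracts": [{"vendor": "Example ISP", "phone": "+1-555-0100",
                                   "expires": "2013-01-31"}],
        }

        print(json.dumps(site_record, indent=2))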

  • How do I get started with The-M-Project, a mobile HTML5 JavaScript framework, on Windows?

    - by Bruce Whealton
    The website for this great tool, called The-M-Project, says that I will need to add a doskey macro like this:

        doskey espresso=node C:\Path\To\Espresso\bin\espresso.js $1 $2 $3 $4

    (It is a tool for creating native mobile apps with the PhoneGap/Cordova library, and it seems to be something that would be very helpful in this process.) If I enter that at a command prompt in Windows 7 or 8, it's not going to stick around or persist. Is it an environment variable? Then it says at this page: http://www.the-m-project.org/ that it will work on Windows with some additional tools installed. The next line says that Node.js is needed, so I don't know if that is the additional tooling mentioned above. Also, in an old discussion I read that one could just install Cygwin. What would that do? It doesn't actually install any of the Linux distributions. I did install Ubuntu 12.04 Server in VirtualBox because I thought it would be good to learn more about using Linux, as I manage websites that are on a dedicated host. Anyway, the suggestion to install Cygwin did not go into any details... I guess it would allow one to create a bash profile, which would only work in a Cygwin command-line window? Is that right? Isn't there a similar file one could use in Windows, or an environment variable one could set, to achieve the same result? Thanks, Bruce
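
    For what it's worth, a doskey macro is not an environment variable; it only exists for the cmd.exe session in which it was defined. The usual way to make one persist is to put the doskey line in a small macro file and point cmd.exe's AutoRun registry value at it, so every new prompt loads it. A hedged Python sketch of that setup is below; the macro-file location is an assumption.

        # Sketch, not a definitive recipe: persist the espresso doskey macro by
        # writing it to a macro file and registering that file in cmd.exe's AutoRun
        # value (HKCU\Software\Microsoft\Command Processor). The path is assumed.
        import os
        import winreg

        macro_file = os.path.expandvars(r"%USERPROFILE%\doskey-macros.cmd")
        with open(macro_file, "w") as f:
            f.write("@echo off\n")
            f.write(r"doskey espresso=node C:\Path\To\Espresso\bin\espresso.js $1 $2 $3 $4" + "\n")

        key = winreg.CreateKey(winreg.HKEY_CURRENT_USER,
                               r"Software\Microsoft\Command Processor")
        winreg.SetValueEx(key, "AutoRun", 0, winreg.REG_SZ, '"%s"' % macro_file)
        winreg.CloseKey(key)
        print("New cmd.exe windows will load the espresso macro automatically.")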

  • Faster, secure protocol/client/server required for long-distance transfer

    - by Chopper3
    I've run into a problem and I'm looking for a new secure protocol/client/server that's faster over a 1 Gb/s fibre link - let me tell you the story... I have a pair of redundant, diversely-routed, 1 Gb/s links over a distance of around 250 miles or so (not dark fibre but a dedicated point-to-point link, not a mesh). At the 'client' end I have an HP DL380 G5 (2 x dual-core 2.66 GHz Xeons, 4 GB, Windows 2003 EE 32-bit); at the 'server' end I have an HP BL460c G6 (2 x quad-core 2.53 GHz Xeons, 48 GB, Oracle Linux 5.3 64-bit). I need to transfer around 500 x 2 GB files per week from the client to the server machine - but the transfer NEEDS to be secure. Using either iperf or regular FTP I can get ~80 MB/s of transfer pretty consistently, which is great. Using WinSCP or Windows SFTP I can't seem to get more than ~3-4 MB/s; at this point the server's CPU is 3% busy while CPU0 of the client goes to ~30% utilised. We've tried editing various TCP window sizes with little success. Both ends are connected to quite low-usage Cisco Cat6509s with Sup720s. I can replace the client machine with a newer machine and/or move it to Linux - but this will take time. Clearly these single-threaded secure Windows clients are introducing too much latency doing their encryption. So a few questions/thoughts:
    - Are there any higher-performing secure protocols or client software for Windows that I could try? I'm pretty protocol-agnostic as long as it'll work between Windows and Linux.
    - Should I be using hardware to do the encryption, either in the client or the network parts? If so, what would you recommend?
    - I'm not convinced that just swapping the server would be that much faster; the CPU was only at 30%, but then again that's higher than I'd have expected given the load - moving to Linux at the client end may be a better idea but would be quite disruptive.
    - Am I missing a trick?
    Thanks in advance.
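
    For what it's worth, the figures quoted are consistent with a per-stream window limit rather than raw bandwidth or CPU: a single windowed stream can never move more than its window size per round trip. A rough sketch of that arithmetic is below; the 10 ms round-trip time is an assumed figure for a ~250-mile path, not a measurement from this link.

        # Rough throughput ceiling for a single windowed stream: window / RTT.
        # The 10 ms RTT is an assumed figure for a ~250-mile path, not a measurement.
        link_bps = 1_000_000_000            # 1 Gb/s
        rtt_s = 0.010                       # assumed round-trip time

        bdp_bytes = link_bps / 8 * rtt_s
        print("bandwidth-delay product: %.2f MB" % (bdp_bytes / 2**20))
        # ~1.2 MB must be "in flight" at all times to fill the pipe.

        for window_kib in (64, 256, 2048):
            ceiling = window_kib * 1024 / rtt_s
            print("%4d KiB window -> at most %.1f MB/s" % (window_kib, ceiling / 2**20))
        # A 64 KiB window caps out around 6 MB/s, the same ballpark as the 3-4 MB/s
        # seen with WinSCP/SFTP, while ~80 MB/s needs a megabyte-plus window or
        # several parallel streams.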

  • KVM Hosting: How to efficiently replicate guests

    - by javano
    I have three KVM servers, each with one guest VM running directly on its local storage (so they are essentially getting a dedicated box worth of computing power each). In the event of a host failure I would like each guest replicated to at least one of the other hosts, so I can spin it up there until the failing host is fixed. I am curious about KVM cloning. I can clone a VM live or when it's suspended/shut down. Obviously suspended VMs will naturally be quicker to clone, but these three VMs comprise three parts of a single solution, so I don't ever want any one of them shut down. How can I efficiently clone these VMs between servers? I have had a couple of ideas, but are these insane, or is there a better method I have missed for my scenario?
    - Set up a DRBD partition between box 1 and 2 where VM 1 runs from, so it is replicated between box 1 and box 2; repeat between box 2 & 3, and box 3 & 1. (This could be insane; I have never used DRBD, only read about it.)
    - Just use standard KVM CLI clone options to perform live clones. (I'm dubious about this because I don't know how long it will take and what the performance impact will be while it runs.)
    - Run a copy of each VM on at least one other host, and have the guest on one host export its data to the matching guest on another host, where it can import that data (scripting this on the guest).
    - Some other way? Ideas welcome!
    Side note: these servers have 4 x 15k SAS drives in a RAID 10, so they aren't rocketing fast, and as I mentioned, each VM runs from the host's local storage - no NAS or SAN etc. That is why I am asking this question about guest replication. Also, this isn't about disaster recovery. Guests will be exporting their data to a NAS over a VPN, so I am looking at how I can have them quickly spun up in a host-failure situation.
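
    On the second idea above, one way to bound the impact before committing to it is to pause the guest only long enough to take a consistent copy of its disk image, then resume it while the data is shipped to the peer host. A hedged sketch using the libvirt Python bindings follows; the domain name and paths are placeholders, and unless the copy is taken from a snapshot (LVM, qcow2) the guest stays paused for the whole copy, which is exactly the cost worth measuring here.

        # Sketch (names and paths are placeholders): pause a guest, copy its disk
        # image, resume it, and report how long the guest was actually paused.
        import shutil
        import time
        import libvirt

        DOMAIN = "vm1"                                  # placeholder guest name
        SRC_IMG = "/var/lib/libvirt/images/vm1.img"     # placeholder image path
        DST_IMG = "/mnt/peer-host/vm1.img"              # e.g. an NFS mount of the other box

        conn = libvirt.open("qemu:///system")
        dom = conn.lookupByName(DOMAIN)

        t0 = time.time()
        dom.suspend()                    # freeze vCPUs so the disk stops changing
        try:
            shutil.copyfile(SRC_IMG, DST_IMG)
        finally:
            dom.resume()                 # always resume, even if the copy failed
        conn.close()

        print("guest paused for %.1f seconds" % (time.time() - t0))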

  • Migrating WebLogic 10.3.0 to new host. Slow managed server startup times

    - by wadevondoom
    We are migrating our Blue Martini Commerce application (only supported on WebLogic 10.3.0) to a new host (Red Hat 6.3 on a VMware ESX VM). We are seeing extremely slow startup times for our managed server(s), basically 20x slower than our current production. For instance, the Publish managed server takes ~30-45 seconds in current production, and in the new environment it takes ~10 minutes. The setup uses the same domain structure and JVM as the current production environment, and the same setup files are used. We use jdk1.6.0_33 on a 64-bit architecture. We used the generic 64-bit WebLogic installer and used the pack / unpack utilities to migrate the domain. The JAVA_OPTS to start this server are:

        -d64 -Xms256m -Xmx512m -XX:PermSize=48m -XX:MaxPermSize=256m

    The sysadmins have checked /etc/sysctl.conf and /etc/limits.conf to ensure we were not hitting some kind of process limit. As I am not sure what this managed server does from a Blue Martini perspective during startup, I also had the DBA check that Oracle RAC (11.2.0.3) wasn't hitting some kind of process limit and that there wasn't a TNS listener issue. The new host is quite a bit stricter with their server lockdowns, so there are a few differences:
    - Red Hat 6.3 in the new environment, RHEL 5.7 in the current one
    - SELinux is targeted in the new environment and disabled in the current one
    - VM in the new environment, dedicated hardware in the current one
    - iptables disabled in the current environment; it was enabled in the new one, but I had them disable it just in case
    I apologize for not being more specific; I am mostly hoping for some tips. I do not have the typical root access I would normally have in this environment; I am just hoping for a path forward. I did a few 'kill -3's to see if there are blocked threads and got nada. The service works for all intents and purposes, it is just painfully slow. Thank you all in advance for reading, and best regards. Wade

  • virtualized windows 2003 domain with CentOS 5.3 and poor connectivity

    - by Chris Gow
    I have a test lab set up running a virtualized Windows 2003 domain on a CentOS 5.3 (Xen) host, and am experiencing connectivity problems with guests running on other hosts that are part of the same domain. Here's the setup:

    On Computer A I have CentOS 5.3 running as the host, with virtualized Windows 2003 servers for a primary domain controller, a backup domain controller and an Exchange server. The primary domain controller also acts as a WINS and DNS server. The Windows domain is on a separate subnet from my company's corporate network. Connectivity to any of the virtualized guests on Computer A is fine (remote desktop, ping, what have you).

    I have another host computer (Computer B) that also has a virtualized Windows 2003 server guest that is part of the same domain. However, connectivity to that guest is flaky at best. I continuously get at least 60% packet loss when I try to ping the guest, and due to that flakiness I cannot access any of the services that it runs (remote desktop, web).

    Now here's the interesting part: it seems to affect only machines in the same domain that are running on a different computer than the domain controller. On Computer B there is another Windows 2003 guest that is not part of the test domain and is on my corporate network; there are no connectivity issues with that guest. The problem does not seem to be specific to Computer B either. I created a test VM on my local computer within the test domain and it exhibits the same behaviour as the guest on Computer B.

    A couple of items to note:
    - The host OS on both Computer A and B is the same CentOS 5.3 64-bit
    - The guest OS is Windows 2003, 64-bit and 32-bit (the guest on Computer B is 32-bit)
    - The guest OSes are all up to date (as of Monday)
    - The host OS on Computer A was upgraded from CentOS 5.2 to 5.3

    Update: Sorry I did not follow up with the comments below. Computer A and B have been moved to their own dedicated switch and the problem has gone away. I'm not sure what the underlying problem(s) were, though.

  • Processes slow down after some time of actively running

    - by Yervand Aghababyan
    I have several cron jobs running on an Ubuntu machine, and each one does some pretty heavy-load stuff. The cron jobs are parsing files, and the bigger the file the longer it takes them to parse it. The strange thing is that if I make the files too big (like 30 MB) the script kind of hangs. It starts processing them really enthusiastically, but after some time (something like 5-10 minutes) the CPU usage of the process drops a lot and it gets into some "zombie" state. If prior to this the process in htop was using 70-80% of the CPU, then after this drop occurs it slows down to something like 5-10%. The load average drops as well. The status of the processes sometimes changes to D in htop, which stands for uninterruptible sleep (usually waiting on I/O), not zombie. Today I noticed the same behavior from MySQL processes when executing heavy queries (a query took something like 4 hours to execute). The cron jobs are mostly PHP, and during their processing most of the CPU is eaten by the PHP process and not MySQL, so I think the issue is not with a specific language/program but with the way the processes are "managed". The only other place I've seen similar behavior was on my Amazon EC2 micro instance, when after some aggressive use of CPU the CPU quota kicked in and everything slowed down dramatically. This is a dedicated machine running Ubuntu. What may be the cause?
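
    Since D means the process is stuck in uninterruptible sleep (typically waiting on disk or network storage I/O) rather than being throttled, a quick look at its /proc entry while it is in that state usually shows what it is waiting for. A minimal sketch, assuming you pass the PID that htop shows for the stuck PHP process:

        # Minimal sketch: inspect a process stuck in state D via /proc.
        # A meaningful wchan plus rising iowait points at storage, not the CPU.
        import sys

        pid = sys.argv[1] if len(sys.argv) > 1 else "12345"   # placeholder PID

        with open("/proc/%s/status" % pid) as f:
            for line in f:
                if line.startswith(("Name:", "State:")):
                    print(line.strip())

        # Kernel function the process is sleeping in (needs suitable permissions).
        with open("/proc/%s/wchan" % pid) as f:
            print("wchan: %s" % (f.read().strip() or "-"))

        # Cumulative I/O counters, if the kernel was built with task I/O accounting.
        try:
            with open("/proc/%s/io" % pid) as f:
                print(f.read().strip())
        except IOError:
            print("/proc/%s/io not available" % pid)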

  • How does one skip “Windows did not shut down successfully” in Win7-64?

    - by XenonofArcticus
    Migrating an app from an expensive and unreliable dedicated embedded x86 box running WinXP Embedded to COTS hardware (a Dell E6410 laptop) running normal Win7-64. At this time, it's not feasible to deploy using Windows 7 Embedded. The problem is that the system is still sort of "embedded": the power could shut off at virtually any time without prior warning. We've stripped the OS down and removed the battery capability so that it will power down as desired. The app never writes to the disk, so it's not like we're going to corrupt anything terribly. The system is essentially idle after our app is up and running (with the exception of some computation, graphics, and TCP/IP and serial communications), so the OS enters a pretty stable state rather quickly. After a power loss, however, it rightly complains that Windows did not shut down successfully and presents the user with the Windows Error Recovery text screen. If left alone, it does eventually move on and boot just fine, but we'd like to skip that step if possible. WinXP Embedded is designed to do this automatically, so I know it's possible. I've looked at the kernel switches but I didn't see anything documented for "Skip Windows Error Recovery". I've also read extensively on the startup process: http://homepage.ntlworld.com./jonathan.deboynepollard/FGA/windows-nt-6-boot-process.html I know I can disable the auto chkdsk in the registry, but that's not the same thing either. So, how do I streamline the boot process so it doesn't hassle the user about a situation that will be the regular, normal state of affairs?

  • Platform to allow users to suggest and vote for ideas

    - by Simon
    I head up a support team for a software product. I am in the process of setting up a blog (WordPress.org), a forum (phpBB or maybe Vanilla) and a means to view bugs (probably exposing Jira), but would also like to allow our customers to suggest enhancement requests. Currently we have this as a 'contact form' on the blog, which feeds into a dedicated inbox that the product manager reviews, and he may include some of these ideas in the product. However, I would like to give the clients more power/visibility over this. Have a look at ideas.arcgis.com; there are plenty of other similar sites as well:
    - Users can suggest new ideas
    - Other users can vote these ideas up or down (similar to Stack Exchange sites)
    - Ideas that are voted highly will be given priority over lower ones
    - We can report back on the number of ideas we implement, and potentially reject ideas with reasoning
    Has anyone seen any platforms (ideally free) which would replicate something similar to this? I was half thinking of embedding Vanilla within a WordPress page, but need to look into it more.

  • exim4 seems to have stopped listening

    - by trakos
    Hey, I have a strange problem with my exim4 configuration. I have a dedicated server that has been running Debian for quite a long time now, but I'm not really using it actively these days, so everything just worked due to lack of changes ;) Recently, however, my exim4 SMTP stopped answering on port 25. It does not respond through localhost either, even though it's set to listen on any available interface. Some things I've checked:

        ks:/home/trakos/Maildir/new# netstat -ap | grep exim
        tcp        0      0 *:smtp        *:*        LISTEN      12521/exim4
        ks:/home/trakos/Maildir/new# exiwhat
        12521 daemon: -q30s, listening for SMTP on port 25 (IPv4)
        ks:/home/trakos/Maildir/new# cat /var/log/exim4/rejectlog
        ks:/home/trakos/Maildir/new# cat /var/log/exim4/paniclog

    (The queue interval is set to 30s only because I was running it in non-daemon mode to see any output.) Strangely enough, no suspicious output is given, and netstat even shows it is listening on port 25, but trying to telnet to it still times out. The only things that may have changed recently are:
    - I've got a second IP for my server
    - I remember that a few days ago my SpamAssassin crashed, and I started it up again
    So yeah, I'm really clueless about this one now :P I mean, I don't even know what could be failing here. Could someone give me some ideas on what I should check next? PS: it has an uptime of 442 days, so I haven't really tried rebooting it yet ^^
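
    Given that netstat and exiwhat both say the daemon is listening while telnet times out, the first thing worth pinning down is whether a TCP connection to port 25 can be opened at all, locally and on the public address: a connect that times out against a listening daemon usually points at a packet filter rather than at Exim itself. A small sketch (the second address is a placeholder for the server's public IP):

        # Small sketch: try to open TCP port 25 and read the SMTP banner, first on
        # localhost and then on the public address (replace the placeholder IP).
        # "timed out" despite a listening daemon suggests a firewall dropping
        # packets; "refused" would mean nothing is listening on that address.
        import socket

        for host in ("127.0.0.1", "203.0.113.5"):      # second address is a placeholder
            try:
                s = socket.create_connection((host, 25), timeout=5)
                print("%s: connected, banner: %r" % (host, s.recv(512)))
                s.close()
            except socket.timeout:
                print("%s: connection timed out (suspect a packet filter)" % host)
            except socket.error as exc:
                print("%s: %s" % (host, exc))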

  • High-performance Academic Server [closed]

    - by PHPsmith
    Suppose I want to build a server for the university's academic needs. The server is dedicated to a single site, where users (students and lecturers) just view and fill in academic data. But at certain times (e.g. once a semester), about 12,000 students will access the site simultaneously. Due to limited resources, I have to build the server using free software (except for the operating system, Windows 7, which the university has already provided). The hardware is also limited to the usual 4-core computers (e.g. an Ivy Bridge Intel Core i7-3770) with approximately 16 GB of memory (DDR3 1600 MHz), equipped with a single RJ-45 port (Intel 82579 Gigabit Ethernet). With all these limitations, I have to choose software (web server, database, etc.) that is appropriate for this purpose. I decided to create the site in PHP. Please help me by answering the following questions based on your expertise (my prime candidates after googling are in parentheses):
    - Which web server is fastest, most stable and most secure when set up and optimized for PHP, and why? (nginx)
    - Which PHP accelerator is fastest, most stable and most compatible with the selected web server, and why? (APC with Zend Optimizer+)
    - Which database is fastest, most stable and most secure when set up and optimized for the selected web server and PHP accelerator? (MySQL)
    - Are there any mistakes in my assumptions so far? If so, please enlighten me.
    - Is there anything else I need to know in order to achieve this goal? If so, please enlighten me.
    I understand that performance also depends on how the source code is written, so assume the site will be implemented as efficiently as possible (e.g. using AJAX).
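
    Before weighing web servers, it may be worth doing the arithmetic on the single gigabit port, since with 12,000 simultaneous users the NIC can become the first ceiling regardless of the PHP stack. A rough sketch using the stated numbers; the 200 KB average page weight is an assumption, not a figure from the question.

        # Back-of-the-envelope capacity check for one gigabit NIC and 12,000
        # simultaneous users. The 200 KB page weight is an assumed figure.
        concurrent_users = 12000
        nic_bps = 1_000_000_000                  # one 1 Gb/s Ethernet port

        per_user_bps = nic_bps / concurrent_users
        print("fair share per user: %.0f kb/s (~%.1f KB/s)" %
              (per_user_bps / 1000, per_user_bps / 8 / 1024))

        page_bytes = 200 * 1024                  # assumed average page weight
        print("time to push one page to everyone at line rate: %.0f s" %
              (concurrent_users * page_bytes * 8 / nic_bps))
        # Roughly 83 kb/s per user, and about 20 s to deliver a 200 KB page to all
        # 12,000 at once, so caching, small pages (AJAX partial updates) and
        # spreading the login burst matter at least as much as the server choice.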

  • Subdomain is preventing my search results from rising as expected in page rank

    - by culov
    My problem is that I have a site which requires a dedicated page for every city I choose to support. Early on, I decided to use subdomains rather than paths directly after my domain (i.e. I used la.truxmap.com rather than truxmap.com/la). I realize now that this was a major mistake, because Google seems to treat la.truxmap.com as a completely different site from ny.truxmap.com. So, for instance, if I search "la food truck map" my site will be near the top; however, if I search "nyc food truck map" I'm nowhere in sight, because ny.truxmap.com wouldn't rank very highly by itself and it doesn't get the boost that it ought to from the better-known la.truxmap.com. So a mistake I made a year ago is now haunting my page rank. I'd like to know what the most painless way of resolving my dilemma might be. I have received so much press at la.truxmap.com that I can't just kill the site, but could I redirect all requests at la.truxmap.com to truxmap.com/la, and do the same for all supported cities, without trashing the current, satisfactory ranking I'm getting from la.truxmap.com?

    EDIT: I left out some critical information. I am using Google Apps to manage my domain (that is, to add the subdomains) and Google App Engine to host my site. Google Apps provides a simple mechanism to mask truxmap.appspot.com (the App Engine domain) as la.truxmap.com, but I don't see how I can mask it as truxmap.com/la. If I can get this done, then I can just 301 redirect la.truxmap.com to truxmap.com/la as suggested below. Thanks so much!
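
    Since the App Engine application already answers on every hostname that Google Apps maps to it, the redirect does not need any masking of truxmap.com/la at the domain level: the app can keep serving truxmap.com and simply issue a permanent (301) redirect whenever a request arrives on one of the city subdomains. A hedged Python sketch assuming the webapp2 framework is below; the class name, route and city list are illustrative, not taken from the real app.

        # Sketch (class, route and city list are illustrative): if the app also
        # answers on la.truxmap.com, 301-redirect every request on a city
        # subdomain to the truxmap.com/<city>/... form.
        import webapp2

        CITY_SUBDOMAINS = ("la", "ny", "sf")              # assumed list of city hosts

        class CanonicalHostRedirect(webapp2.RequestHandler):
            def get(self, *args, **kwargs):
                host = self.request.host.split(":")[0]    # e.g. la.truxmap.com
                city = host.split(".")[0]
                if city in CITY_SUBDOMAINS:
                    target = "http://truxmap.com/%s%s" % (city, self.request.path_qs)
                    return self.redirect(target, permanent=True)   # HTTP 301
                self.response.write("normal page rendering goes here")

        app = webapp2.WSGIApplication([
            webapp2.Route("/<:.*>", CanonicalHostRedirect),
        ])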

  • "External component has thrown an exception" on ASP.NET ASPX page POST

    - by Brandon
    I have an ASPX page that communicates with a web service I have. It connects to a SQL Server database on my virtual dedicated server. With just a little usage, I get this error:

        External component has thrown an exception.
        Description: An unhandled exception occurred during the execution of the current web
        request. Please review the stack trace for more information about the error and where
        it originated in the code.
        Exception Details: System.Runtime.InteropServices.SEHException: External component has
        thrown an exception.
        Source Error: An unhandled exception was generated during the execution of the current
        web request. Information regarding the origin and location of the exception can be
        identified using the exception stack trace below.

        Stack Trace:
        [SEHException (0x80004005): External component has thrown an exception.]
           Luxand.FSDK.Initialize(String DataFilesPath) +0
           WebService.onLoad() +70
           WebService..ctor() +91
           facematch.btn_submit_Click(Object sender, EventArgs e) +218
           System.Web.UI.WebControls.Button.OnClick(EventArgs e) +105
           System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) +107
           System.Web.UI.WebControls.Button.System.Web.UI.IPostBackEventHandler.RaisePostBackEvent(String eventArgument) +7
           System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) +11
           System.Web.UI.Page.RaisePostBackEvent(NameValueCollection postData) +33
           System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +1746

  • File system that allows specifying a different RAID level per directory and changing it afterward

    - by Adam Ryczkowski
    I have 5 hard drives on which I want to keep my data. Some of my files are more important and some of them are less so, so some of them I wish to put on RAID-6, and for some RAID-5 is sufficient. It is difficult to predict at the moment of creating the arrays how much space of each type to declare.

    What I would do if I hadn't heard about ZFS is partition the hard drives into identical 100 GB partitions and, as my needs grow, assemble those partitions into md devices using linux-raid. Then I'd combine those devices using LVM into logical volumes where I'd put my data. So when I needed more space of, e.g., RAID-6, I'd take a 100 GB partition from each hard drive, assemble them into another RAID-6 md device, and use it as physical storage for the logical volume group dedicated to RAID-6 data. Then I could grow the file system on that logical volume. On top of the RAID-6 and RAID-5 volume groups (managed by LVM) would sit completely independent file systems, which I'd later merge with multiple mount --bind calls into a single directory structure reflecting the logical structure of the data rather than that of the storage.

    But now that I have heard about ZFS, with all its performance, data-healing and compression capabilities, I cannot stop thinking about whether it can help me. If so, what do you think would be the best setup?

  • Force delivery retry without restarting the SMTP Service on Windows Server 2008 R2

    - by Mathias R. Jessen
    I have a Windows Server 2008 R2 box hosting 3 virtual SMTP servers: vSMTP01, vSMTP02 and vSMTP03. The first two are configured to deliver all messages to dedicated smarthosts, while the last is set to just deliver the messages on its own. All other delivery settings are at their defaults.

                                                    ----(vSMTP01)-----> {SMARTHST01}
                                                   /
        ----Inbound mail--->---SMTPSRV01---[----(vSMTP02)-----> {SMARTHST02}
                                                   \
                                                    ----(vSMTP03)-----> { Internet }

    Now I want to take SMARTHST01 out for maintenance, but I don't want to reject submissions to vSMTP01 while doing so, so I just let it continue running. When SMARTHST01 is no longer responding, vSMTP01 queues the messages and waits for the first retry interval to pass (15 minutes). So far so good. Let's say SMARTHST01 comes back online after 20 minutes. The first interval has passed, and I'll have to wait another 25 minutes for the second retry interval to pass. If I stop and start the SMTP Service (Services.msc - Simple Mail Transfer Protocol service - Stop), the server will retry all deliveries, but that would cause a service interruption for ALL virtual SMTP servers on the machine, which is highly undesirable. How can I manually force vSMTP01 to retry delivery of all queued messages without interrupting the service of vSMTP02 and vSMTP03?

  • VPN service into 192 network

    - by tophersmith116
    I'm thinking about setting up a security testing lab. I work on a switched network, and that just makes for unnecessary headaches when doing testing. I'd like to create a 192 network with a few machines inside for DBs and AppServers etc. I will need a pivot machine that connects to both the outer network and the 192 (for automation purposes). But I'd like to be able to connect into the 192 network with my own machine from the outer network as the "attacking" machine (rather than have dedicated attack machines inside the 192 network). Therefore, I'd like to have the pivot server be a VPN server as well, so that my machine can VPN into the 192 network from the outer network. First off, is this even possible? Can I have a single computer with two NICs where a VPN service allows remote connections into the 192? Secondly, I'd like to have multiple outer clients connect to the VPN. Does anyone have any suggestions? I've used Hamachi well before, but I've also seen some good stuff from OpenVPN.

  • What is the best hosting option for Flash web-widget?

    - by par
    Our Flash web widget has become highly popular. It is downloaded around 100,000 times per day, and that is the problem: our server bandwidth is too narrow to deliver the widget to clients quickly. The widget now loads very slowly, probably 20 times slower than before (at peak times). I have probably chosen the wrong hoster for my task of delivering a 1 MB Flash widget to 100,000 users per day. What is the best hosting solution in my case? I'm not good at server administration, so forgive me if I sound naive. The details are as follows. Our hoster's plan:
    - Dedicated server, Ubuntu
    - 10 Mbit connection
    - monthly bandwidth limit: 2000 GB
    The widget size is 1 MB; it consists of the main SWF and a number of loaded SWF and data files. This is part of an Apache status report taken right now:

        Server uptime: 1 hour 2 minutes 38 seconds
        Total accesses: 74865 - Total Traffic: 5.8 GB
        CPU Usage: u28 s7.78 cu0 cs0 - .952% CPU load
        19.9 requests/sec - 1.6 MB/second - 81.1 kB/request
        200 requests currently being processed, 0 idle workers

        WWWWWWWWWWWWWWWWWWWWWWWWCWWWWWWWWWWWWWWWWWWWWWWWWWWCWWWWWWWCWWWW
        WWWWWCWWWWWWWWWWWWWWCWWWWWWWWWWWWCWWWWWWWWCWWCWWWWWWWWWWWWWWWWWW
        WWWWWWWWWWWWWWWWCWWWWWWWWWWWWWWWWWWWWWWWWWWWCWCWWWWWWWWWWWWWWWCW
        WWWWWWWW........................................................
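
    For what it's worth, the stated numbers already place the bottleneck in the 10 Mbit port and the traffic cap rather than in Apache itself; a quick sketch of the arithmetic (all figures are taken from the question):

        # Back-of-the-envelope check of the stated numbers: daily/monthly volume
        # for 100,000 x 1 MB downloads versus a 10 Mbit/s port and a 2,000 GB cap.
        downloads_per_day = 100_000
        widget_mb = 1.0
        port_mbps = 10
        monthly_cap_gb = 2000

        daily_gb = downloads_per_day * widget_mb / 1024
        print("daily volume: ~%.0f GB, monthly: ~%.0f GB (cap: %d GB)" %
              (daily_gb, daily_gb * 30, monthly_cap_gb))

        # A 10 Mbit/s port moves ~1.25 MB/s, i.e. roughly one 1 MB widget per second.
        port_widgets_per_day = port_mbps / 8.0 / widget_mb * 86400
        print("10 Mbit/s port can serve at most ~%.0f widgets/day at full tilt"
              % port_widgets_per_day)
        # ~98 GB/day (~2,900 GB/month) against a 2,000 GB cap, and ~108,000
        # widgets/day only if the port runs flat out around the clock, so the
        # connection and the cap are what need to grow, not the server tuning.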

  • Losing SQL connections

    - by john pavelka
    SQL Server 2005 Standard; one dedicated SQL Server machine (a VM); Windows Server 2003; small databases. About once a week we lose all SQL connections. It seems to fix itself after about 5-10 minutes. The error is:

        System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown.
        --- System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to
        completion of the operation or the server is not responding.

    We don't have a fully qualified DBA; it's kind of a joint effort here. Can somebody give me some general ideas for troubleshooting the network side and the application side? We have already run a few tuning profiles and gone through the Database Tuning Advisor to apply indexing recommendations. It would sure be nice if there were a way to take a snapshot of what was running on SQL Server when these 100% CPU spikes occurred, but sometimes we're not around. Is it common to throttle CPU for certain processes? Can this be done on Windows Server 2003? For example, if security apps were making the CPU spike to 100%, is there a way to limit their CPU usage? Any advice is appreciated. Thanks.

  • What may be the reason for the slowness (see details in message body)?

    - by Ivan
    I've got a really weird situation I'm struggling to solve: a performance problem that looks very much like an idle wait loop in the code (though it probably isn't). I've got a pretty powerful dedicated server (10 GB RAM, eight Xeon cores, etc.) running Ubuntu 10.04, with all the functional services (except the OpenVPN server used to provide secure access to clients) deployed in separate VirtualBox (vboxheadless) machines: one for the company e-mail server, one for the web server and one for the accounting/CRM server (Firebird plus a proprietary app server working with Delphi-made clients). CPU load (as "top" says) is almost always near zero. Host system RAM is close to 100% usage, but not overloaded (very little swap gets used, and memory freed by stopping one of the VMs doesn't get reused quickly). Approximately 50% of the guests' RAM is used. iostat usually shows near-zero %util. Network bandwidth seems to be underused. But the accounting/CRM client software (a Win32 Delphi application run on WinXP machines) works painfully slowly against this server (and works much better against an inside-LAN Windows server). I just can't imagine what can make it so slow when there are plenty of CPU, RAM, HDD and bandwidth resources available on the clients and the server even at their busiest moments. When I say bandwidth is underused, I not only know that the clients and the server are connected to the Internet with bigger channels than are actually used (which still leaves a chance of some bottleneck on the route between them); I've also tested the bandwidth between clients and the server by copying files among them.

  • Need advice on setting up a server: FastCGI, suEXEC, speed, security, etc.

    - by lewisqic
    I am running my own dedicated server with CentOS 5 and WHM/cPanel. I would like to configure my server to meet my needs, but I need a little help. Only my own websites will be run on this server. I'm still a little green when it comes to server administration, so please forgive my ignorance.

    What I would like to have:
    - I need some public directories to be writable (for user image uploads and things like that), but I don't want those directories to have 777 permissions.
    - I need individual accounts to have the ability to set custom PHP settings for their own account without affecting other accounts, whether through a php.ini file, .htaccess or any other method.
    - I would like things to run as fast as possible, whether that means using a PHP optimizer or cacher such as eAccelerator or XCache, or anything else.
    - I need things to be as secure as possible.

    Here are my questions:
    - What should I use for my PHP handler? DSO? CGI? FastCGI? suPHP? Something else?
    - Should I be using suEXEC? What are the benefits or downsides of this?
    - Which PHP optimizer/cacher is best to use?
    - Are there any other security tips I need to know about all of this?
    I'd appreciate any advice or direction that can be offered. Thanks!

  • glusterfs mounts get unmounted when 1 of the 2 bricks goes offline

    - by Shiquemano
    I have an odd case where one of the two replicated GlusterFS bricks will go offline and take all of the client mounts down with it. As I understand it, this should not be happening; it should fail over to the brick that is still online, but this hasn't been the case. I suspect that this is due to a configuration issue. Here is a description of the system:
    - 2 gluster servers on dedicated hardware (gfs0, gfs1)
    - 8 client servers on VMs (client1, client2, client3, ..., client8)
    Half of the client servers are mounted with gfs0 as the primary, and the other half are pointed at gfs1. Each of the clients is mounted with the following entry in /etc/fstab:

        /etc/glusterfs/datavol.vol /data glusterfs defaults 0 0

    Here is the content of /etc/glusterfs/datavol.vol:

        volume datavol-client-0
          type protocol/client
          option transport-type tcp
          option remote-subvolume /data/datavol
          option remote-host gfs0
        end-volume

        volume datavol-client-1
          type protocol/client
          option transport-type tcp
          option remote-subvolume /data/datavol
          option remote-host gfs1
        end-volume

        volume datavol-replicate-0
          type cluster/replicate
          subvolumes datavol-client-0 datavol-client-1
        end-volume

        volume datavol-dht
          type cluster/distribute
          subvolumes datavol-replicate-0
        end-volume

        volume datavol-write-behind
          type performance/write-behind
          subvolumes datavol-dht
        end-volume

        volume datavol-read-ahead
          type performance/read-ahead
          subvolumes datavol-write-behind
        end-volume

        volume datavol-io-cache
          type performance/io-cache
          subvolumes datavol-read-ahead
        end-volume

        volume datavol-quick-read
          type performance/quick-read
          subvolumes datavol-io-cache
        end-volume

        volume datavol-md-cache
          type performance/md-cache
          subvolumes datavol-quick-read
        end-volume

        volume datavol
          type debug/io-stats
          option count-fop-hits on
          option latency-measurement on
          subvolumes datavol-md-cache
        end-volume

    The config above is the latest attempt at making this behave properly. I have also tried the following entry in /etc/fstab:

        gfs0:/datavol /data glusterfs defaults,backupvolfile-server=gfs1 0 0

    This was the entry for half of the clients, while the other half had:

        gfs1:/datavol /data glusterfs defaults,backupvolfile-server=gfs0 0 0

    The results were exactly the same as with the above configuration. Both configs connect everything just fine; they just don't fail over. Any help would be appreciated.

  • Update GRUB on Squeeze - kernel downgrade due to VMware Server

    - by vodoo_boot
    I keep running into various problems regarding GRUB and kernels. I don't really care about the kernel internals; all I want is VMware Server on that dedicated root server.

    1.) What is a bzImage vs. a vmlinuz?

        kaze:~# ls /boot/
        System.map-2.6.32-5-amd64  bzImage-2.6.33.2       config-2.6.33.2  initrd.img-2.6.32-5-amd64
        System.map-2.6.33.2        bzImage-2.6.35.6       config-2.6.35.6  vmlinuz-2.6.32-5-amd64
        System.map-2.6.35.6        config-2.6.32-5-amd64  grub

    I updated my menu.lst (grub2):

        timeout 5
        default 0
        fallback 1

        title 2.6.32.5
        kernel (hd0,1)/boot/vmlinuz-2.6.32-5-amd64 root=/dev/sda2 panic=60 noapic acpi=off

        title 2.6.35.6
        kernel (hd0,1)/boot//bzImage-2.6.35.6 root=/dev/sda2 panic=60 noapic acpi=off

        title 2.6.32.3
        kernel (hd0,1)/boot//bzImage-2.6.33.2 root=/dev/sda2 panic=60 noapic acpi=off

    That doesn't work well... I think the vmlinuz entry is missing its initrd or something; dunno. In fact I don't care much about kernel boot voodoo as long as it works. update-grub(2) does not work. Does anybody know what magical trick there is to get 2.6.32-5 booting?

    2.) I tried to follow the Debian wiki. I cannot get header files for the installed 35.6 or 33.2 kernels from the repositories, and I cannot build foreign headers because they will not match the running kernel. So how does one deal with that situation? I'd prefer not to have to downgrade the kernel. Thanks for any answers!

  • Software RAID 0 performance with six disks

    - by user134880
    I have some problems with disk performance. I have 6 x WD 500 GB RE4 disks; each disk gives about 135 MB/s throughput. All measurements are made with hdparm using the "-tT" options (I know it is just a synthetic test, but I need some starting point for measurements). I have a controller with a Sil3124 chip (4 ports) on a PCI Express x1 slot. So:
    - RAID 0 on the controller with 2 disks gives 200 MB/s - OK, PCIe limit.
    - RAID 0 on the motherboard with 2 disks gives 270 MB/s - nice :)
    - RAID 0 on the controller with 4 disks gives 200 MB/s - OK, PCIe limit.
    - RAID 0 on the controller with 4 disks + 1 disk on the motherboard = 340 MB/s ... :(
    - RAID 0 on the controller with 4 disks + 2 disks on the motherboard = 300 MB/s ... why?
    Any ideas? Maybe I need more CPU power? Right now it's a Pentium D dual-core 2.8 GHz with 4 GB RAM. It is a dedicated box for storage, with no other activity.
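
    As a sanity check on the "PCIe limit" reasoning: a PCIe 1.x x1 link carries 250 MB/s of raw signalling per direction, which after protocol overhead leaves roughly 200 MB/s of usable payload, so the plateaus on the card are expected. A small sketch of the arithmetic (the ~200 MB/s usable figure is an approximation):

        # Compare raw disk throughput against the PCIe 1.x x1 link the Sil3124 card
        # sits on (250 MB/s signalling per direction, ~200 MB/s usable payload).
        disk_mb_s = 135
        pcie_x1_usable = 200          # approximate usable payload rate

        for disks_on_card in (2, 4):
            raw = disks_on_card * disk_mb_s
            print("%d disks on card: %d MB/s raw, capped at ~%d MB/s by the x1 link"
                  % (disks_on_card, raw, min(raw, pcie_x1_usable)))

        # Adding motherboard ports raises the ceiling by their own bandwidth, so the
        # interesting question is why card + 2 onboard disks measured lower (300 MB/s)
        # than card + 1 onboard (340 MB/s): at 300+ MB/s the software RAID is pushing
        # everything through one CPU and memory bus, so the Pentium D is a plausible
        # next bottleneck to watch (%sys and %iowait during the test).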

  • KVM virtual machine unable to access internet

    - by peachykeen
    I have KVM set up to run a virtual machine (Windows Home Server 2011 acting as a build agent) on a dedicated server (CentOS 6.3). Recently, I ran updates on the host, and the virtual machine is now unable to connect to the internet. The virtual network is running through NAT, the host has an interface (eth0:0) set up with a static IP (virt-manager shows the network and its IP correctly), and all connections to that IP should be sent to the guest.

    The host and guest can ping one another, but the guest cannot ping anything above the host, nor can I ping the guest from anywhere else (I can ping the host). Results from the guest to another server under my control and from an external system to the guest both return "Destination port unreachable". Running tcpdump on the host and destination shows the host replying to the ping, but the destination never sees it (it doesn't even look like the host is bothering to send it on at all, which leads me to suspect iptables). The ping output matches that, listing replies from 192.168.100.1. The guest can resolve DNS, however, which I find rather odd. The guest's network settings (connection TCP/IPv4 properties) are set up with a static local IP (192.168.100.128), mask of 255.255.255.0, and gateway and DNS at 192.168.100.1.

    When originally setting up the vm/net, I had set up some iptables rules to enable bridging, but after my hosting company complained about the bridge, I set up a new virtual net using NAT and believe I removed all the rules. The VM's network was working perfectly fine for the last few months, until yesterday. I haven't heard anything from the hosting company, didn't change anything on the guest, so as far as I know, nothing else has changed (unfortunately the list of packages updated has since fallen off scrollback and I didn't note it down).
