Search Results

Search found 16560 results on 663 pages for 'high tech resources'.

  • Clustering/load balancing for cluster unaware applications

    - by AaronLS
    Forgive me if I use any of these terms incorrectly. I am wondering if there is any kind of software that would allow me to "join" two computers together such that a cluster-unaware application could utilize their combined computing resources. By "cluster unaware" I mean an application that isn't designed to share work across multiple servers. My understanding is that clustering is enabled by the specific application's architecture, such that messaging between multiple instances of the application coordinates the sharing of work. Instead I am looking for something that enables clustering at the OS or virtualization level, so that any application could essentially be clustered. Failing that, I am also wondering about the following scenario: we have 3 different applications we will call A, B, and C, and 2 single-core computers. At any given time, let's say that any combination of those applications may be CPU intensive. In cases where only 2 of those apps are very active, I'd like one of them moved over to the other server. In a nutshell, some sort of dynamic, automatic shuffling of application load. I have heard of virtual machines that can be migrated across physical machines while live, but I am wondering if this can be done automatically in response to an application's or VM's CPU activity.

  • I keep losing wireless connection

    - by posfan12
    I have a WRT54GL v1.1 wireless router and a WUSB54G v4 wireless adapter, both made by Linksys. The router is in the living room by the TV and my computer is in the bedroom. My ISP is Brighthouse.

        Operating System: Microsoft Windows 7 Home Premium 64-bit SP1
        CPU: Intel Core 2 Duo E6600 @ 2.40GHz, 36 °C (Conroe, 65nm)
        RAM: 3.00GB Single-Channel DDR2 @ 333MHz (5-4-4-14)
        Motherboard: eMachines EMCP73VT-PM (CPU 1), 26 °C
        Graphics: ASUS VS247 (1920x1080@60Hz), 767MB GeForce GTX 460 (nVidia), 43 °C
        Hard Drives: 466GB Seagate ST350041 8AS SCSI Disk Device (SATA), 35 °C
        Optical Drives: HL-DT-ST DVDRAM GH41N SCSI CdRom Device
        Audio: High Definition Audio Device

    The problem is that my Internet connection will work fine for 15 minutes or so. Then the data will just stop flowing. Windows says I am still connected, and the systray icon still shows five bars. But Comodo Firewall will stop showing up and down traffic, and another of my systray applications complains about a lack of connection. What I usually do is either disconnect from the network manually, or unplug and re-plug the USB adapter, at which point the connection will work properly for another 15 minutes. I've tried unplugging my router for 30 seconds and letting it reboot. I've also tried looking for a newer driver for my adapter, but I seem to have the latest version, 3.1.3.0. This is a recent problem, starting about a week ago. For the previous several months things were working just fine. I haven't made any changes to my system that I am aware of. The only thing I did was open my case to blow the dust out of it, then put everything back together. How do I fix this issue?

  • How can a Linux Administrator improve their shell scripting and automation skills?

    - by ewwhite
    In my organization, I work with a group of NOC staff, budding junior engineers and a handful of senior engineers; all with a focus on Linux. One interesting step in the way the company grows talent is that there's a path from the NOC to the senior engineering ranks. Viewing the talent pool as a relative newcomer, I see that there's a split in the skill sets that tends to grow over time... There are engineers who know one or several particular technologies well and are constantly immersed... e.g. MySQL, firewalls, SAN storage, load balancers... There are others who are generalists and can navigate multiple technologies. All learn enough Linux (commands, processes) to do what they need and use on a daily basis. A differentiating factor between some of the staff is how well they embrace scripting, automation and configuration management methodologies. For instance, we have two engineers who do the bulk of Amazon AWS CloudFormation work, and another who handles most of the Puppet infrastructure. Perhaps a quarter of the engineers are adept at BASH shell scripting. Looking at this in the context of the incredibly high demand for DevOps skills in the job market, I'm curious how other organizations foster the development of these skills and grow their internal talent. Scripting doesn't seem like a particularly-teachable concept. How does a sysadmin improve their shell scripting? Is there still a place for engineers who do not/cannot keep up in the DevOps paradigm? Are we simply to assume that some people will be left behind as these technologies evolve? Is that okay?

  • Best way to mount 3-4 monitors like this?

    - by jasondavis
    I just purchased 2 HP 2009m widescreen monitors. They are not the biggest thing on the block, at around 19-20" and only $150-200 each, so I think they are perfect. I bought 2 of them just to make sure I like them, with the full intention of purchasing more to make either a triple or quad display. Now I am stuck trying to decide. If I purchase 1 more to have a triple display, I would just wrap the third monitor around to either the right or left side; I could most likely do that pretty easily without a mount. If I decide to go with 2 more monitors to make a quad display, then I would like to add the 2 new monitors directly above the 2 that I have now, so it would make a grid 2 wide and 2 high. I have posted a few photos below of the 2 I have now; you will notice that I have them tilted inwards to make more of a "V" shape instead of them being side by side and straight. Now if I decide to make the grid of 4, then I will need to buy or build a stand to hold them all tightly together (no whitespace or gap between the grid of monitors), but I would still like both rows to angle inwards to keep the slight "V". Do you know of any existing stands I could purchase that would hold all 4 monitors without forcing them to be straight, losing the "V" shape? Any tips appreciated. They do have VESA holes in the back. A few photos... (they are from an iPhone and the lighting made them not very good, but you can see what I am working with here)

  • Rails/Mongo across multiple different geo-regions

    - by wmarbut
    I have a system that by necessity requires physical presence in three or more different locations, and I need advice on structuring it in such a way that my database stays replicated in a timely manner without horrible latency. I've seen MySQL access and replication be incredibly slow when the application server was trying to talk to a node that wasn't physically co-located. In this case I am using MongoDB. The stack is Linux/Passenger/Ruby/Rails/MongoDB. The database is write-heavy and read-light. The infrastructure is Amazon EC2. The application layer must be physically located in 3 or more different locations; I can't justify this requirement further than it is a requirement. The database, however, needn't be located in more than one location if it can be written to quickly from the other locations. From reading Mongo's documentation, replication seems like more of a candidate than sharding because my datastore is not huge. However, I don't see anything that addresses the issue of speed for servers communicating across large distances with potentially high latency.
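
    As a starting point for measuring whether a single replica set can actually tolerate the inter-region latency, a minimal sketch (the host name is a placeholder and it assumes the mongod instances are already joined into one replica set):

        # member states, optimes, and how far each secondary is behind the primary
        mongo --host db-us-east.example.com --eval 'printjson(rs.status())'
        mongo --host db-us-east.example.com --eval 'rs.printSlaveReplicationInfo()'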

  • "postgres blocked for more than 120 seconds" - is my db still consistent?

    - by nn4l
    I am using an iSCSI volume on an Open-E storage system for several virtual machines running on a XenServer host. Occasionally, when there is very high disk I/O load on the virtual machines (and therefore also on the storage system), I get this error message on the VM consoles:

        [2594520.161701] INFO: task kjournald:117 blocked for more than 120 seconds.
        [2594520.161787] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [2594520.162194] INFO: task flush-202:0:229 blocked for more than 120 seconds.
        [2594520.162274] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [2594520.162801] INFO: task postgres:1567 blocked for more than 120 seconds.
        [2594520.162882] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.

    I understand this message is the kernel informing me that these processes haven't run for 120 seconds, most likely because a disk access to the storage system has not yet been processed. But what is the effect on the processes? For example, will the postgres process eventually write its data when the storage system is idle again after a few minutes, so that all data is still consistent? Or will it abort the write, leaving some tables in an inconsistent state? I certainly expect the former to be the case: if disk access is slow, postgres (or any other affected process) should just wait as long as it takes. I can live with the application hanging for a few minutes. But if there is a chance of data corruption, then any of these errors is really bad news. Please advise what to do here.
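
    Two commands that may help when the warning fires, offered as a hedged sketch rather than a verdict on consistency: the affected tasks are sitting in uninterruptible sleep ("D" state) waiting for the I/O to complete, and the tunable named in the message only controls the reporting, not the stall itself.

        # list tasks currently stuck in uninterruptible sleep and what they are waiting on
        ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /D/'

        # raise the warning threshold (or set 0 to silence it); purely cosmetic
        sysctl -w kernel.hung_task_timeout_secs=300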

  • Way to speed up load-balanced ssl using nginx?

    - by paulnsorensen
    So the setup for our website is 4 nodes running Rails 3 and nginx 1 that all use the same GoDaddy certificate. Because we are a paid site, we have to maintain PCI-DSS compliance and thus have to use the more expensive SSL ciphers; we also force SSL using Rack. I've recently switched over to Linode's NodeBalancer (which I've read is an HA cluster), and we're not getting the performance we'd ideally like. From what I've read, it looks like terminating the SSL on the nodes using the stronger ciphers is what is causing the poor performance, but I'd like to be thorough. Is there anything I can do? I've read about other ways to terminate the SSL before the NodeBalancer (like using stud), but I don't know enough about these solutions. We certainly don't want to do anything experimental or anything that has a single point of failure. If there really isn't anything I can do to speed up the SSL handshake, my alternative would be to serve certain pages on Rails from a secure subdomain and the rest from an insecure one. I've found a few guides that walk through that, but my resulting question is: in this situation, would it be better to have nginx handle forcing SSL on the secure subdomain instead of Rails? Thanks!
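
    One quick, non-invasive check before changing the architecture, as a sketch (the host name is a placeholder): whether SSL session resumption actually survives the balancer, since handshakes that cannot be resumed make every request pay the full asymmetric-crypto cost.

        # one full handshake followed by five reconnects; lines starting with "Reused"
        # mean resumption works and repeat clients skip the expensive part of the handshake
        openssl s_client -connect www.example.com:443 -reconnect < /dev/null 2>/dev/null | grep -E '^(New|Reused)'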

  • Amazon EC2 Reserved Instances: "Heavy Utilization" clarification

    - by gravyface
    Should be another easy one here, but I need clarification on what they define as "heavy utilization" for Reserved Instance types. From their website:

        "Heavy Utilization RIs – Heavy Utilization RIs offer the most absolute savings of any Reserved Instance type. They’re most appropriate for steady-state workloads where you’re willing to commit to always running these instances in exchange for our lowest hourly usage fee. With this RI, you pay a little higher upfront payment than Medium Utilization RIs, a significantly lower hourly usage fee, and you’re charged that lower hourly rate for every hour in the Reserved Instance term you purchase. Using Heavy Utilization RIs, you can save up to 41% for a 1-year term and 58% for a 3-year term vs. running On-Demand Instances. If you’re trying to find a break-even utilization, you’re economically advantaged using Heavy Utilization RIs (vs. On-Demand Instances) if you plan to use your instance more than 43% of a 1-year term or 79% of a 3-year term."

    I'm assuming that, if I'm planning on running a 24/7 web server, then regardless of how many resources I consume (bandwidth, CPU cycles, memory), I would want to go with a Heavy Utilization Reserved Instance? This one web server in particular will likely barely budge the CPU, but it needs to be up and running 24/7. I'm not 100% sure what they're defining as "heavy".
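
    A back-of-the-envelope check of that break-even figure (simple arithmetic, nothing AWS-specific): a 24/7 server uses every hour of the term, so it is well past the threshold where the Heavy Utilization RI wins, regardless of how lightly it uses the CPU.

        echo $(( 24 * 365 ))          # 8760 hours in a 1-year term
        echo $(( 8760 * 43 / 100 ))   # 3766 hours: the quoted 43% break-even point
                                      # an always-on server runs 8760 of 8760 hours (100%)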

  • Using bind (named) as a public proxy server

    - by TrentDavis
    We have a Python DNS server that does a bunch of stuff to figure out the values it should return for various DNS records. This works nicely; however, as it is Python, the performance under high load won't be great. What I would like to do is have a "proxy" BIND server sit in front of it to return results to the public internet. This would cache the results (typically 15 minutes, a few seconds for some records), so the load on the Python server would be greatly reduced, as it would only see one lookup per domain (only about 100 domains) every 15 minutes. The data in these domains changes a lot, so a conventional master/slave setup won't work, as the zones would constantly be changing. I have something set up that looked like it would work great (using a forwarder for the zone), and tested it with dig etc., all going great. However, when we went to go live with it, things weren't working, and we figured out that named is not setting the "Authoritative" bit (fair enough, it is a forwarder). So my question is: can we tell BIND to set the Authoritative bit for forwarded domains? I have looked at all the documentation I can find, and can't find anything about doing things this way. Most of the documentation about using it as a proxy is for a LAN to the internet. Ideally I would like to use BIND as it is already there and installed (CentOS 5 servers), but at a pinch we could look at a different name server to do the work if it just can't be done with BIND. Thanks.
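
    For checking what the proxy is actually returning, a small sketch (host and zone names are placeholders): the aa flag in dig's flags line is what downstream resolvers key off, and its absence is what broke things here.

        # ask the forwarding proxy directly; if the ";; flags:" line lacks "aa",
        # the answer is being returned non-authoritatively
        dig @ns-proxy.example.com www.example.com A +norecurse | grep ';; flags:'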

  • SQL Server 2008 cluster hangs when a heavy load is run

    - by Billy OT
    We have a SQL Server 2008 active/active cluster running on a Windows 2008 R2 OS, with 14GB RAM and 4 CPUs. We have set a ceiling of 12GB for SQL Server. We're running an agent job which loads 3 million records into a database. During this load the job fails and the cluster seems to attempt to fail over to the other node, but unsuccessfully, i.e. the cluster address is no longer accessible. We have to manually fail the cluster node back. Watching Task Manager during the load, we can see that memory usage hits a max of 12.5GB and CPU at times hits 100% on all 4 CPUs, but for the most part fluctuates at an average of about 60%. I suppose my question is: will a cluster try to fail over if memory or CPU are taking a heavy hit? Or am I barking up the wrong tree? Also, any ideas why it wouldn't fully fail over? We've crawled through logs, of which there are a lot, and can't find anything useful. We've also tried recreating the issue, but it ran successfully at a later time. Also, 3 million rows doesn't seem like a lot, but in terms of resources should 14GB RAM and 4 CPUs not be sufficient? Further information on this: we ran the load again today and corrupted the database! We received the error message: LogWriter: Operating system error 170. It looks like, under the heavy load, the SQL cluster attempted to fail over and in doing so migrated a LUN (or drive), which meant the disk was no longer reachable (this is just our theory). The database is now 'suspect' and requires restoration. The 170 error above also indicates that on failing over to the other node, the SQL service could not start as the resource was already in use, therefore it couldn't fail over fully? But I'm wondering why it would need to fail over in the first place. My assumptions could be completely wrong on this, so any ideas would be appreciated.

  • Troubleshooting iptables and configuring it to drop the priority of long-term connections

    - by intuited
    I'm somewhat familiar with the general concepts of iptables, and would like to learn it in more detail. I'm hoping that my learning experience can also be useful. The situation: I'm running dd-wrt on my router. Despite its purported QoS skills, I'm still seeing connection latency shoot up hugely whenever there's an ongoing http connection, e.g. some large download. Under such conditions, it can take 10 seconds or more to load a basic webpage; sometimes the connections are dropped entirely. I've tried adjusting the parameters, dropping the allotted bandwidth for upload and download to well under my limit, but nothing seems to work. dd-wrt is configured to use HTB as the QoS algorithm; HFSC, although presented as an option, seems to cause the router to crash, and is rumoured to not actually work on any Linux system. I'd like to be able to troubleshoot this issue and hopefully improve the settings that dd-wrt is using, but I'm finding the learning curve a bit overwhelming. For starters, I am not sure what HTB actually specifies: is this a set of iptables commands, or do some of those commands specify how HTB is to be used? I would like it to prioritize based on protocol the way it is already supposed to, and in addition I'd like to have it drop the priority of connections which have a high total byte count, say over 400KB. Tips on utilities that can be run under dd-wrt to get more info on what's going on in there are also appreciated. I've tried to get iftop to work but there were issues running curses. I'm leaning towards replacing dd-wrt with openwrt; comments on this strategy are also welcome. I suspect that I would be well advised to get a second router as a stand-in before trying that. It may be worth noting that my total bandwidth is pretty limited (256Kbit/s).
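
    To make the HTB/iptables relationship concrete, a minimal hand-rolled sketch (not what dd-wrt's QoS GUI generates; the interface name, rates and the 400KB threshold are assumptions to adjust): tc builds the HTB class tree and does the actual shaping, while iptables only marks packets so tc can steer them into a class. The last two commands push connections that have already moved more than ~400KB into the low-priority class.

        WAN=vlan1                                     # WAN interface on many dd-wrt builds (assumption)
        tc qdisc add dev $WAN root handle 1: htb default 10
        tc class add dev $WAN parent 1:  classid 1:1  htb rate 230kbit ceil 230kbit
        tc class add dev $WAN parent 1:1 classid 1:10 htb rate 180kbit ceil 230kbit prio 1
        tc class add dev $WAN parent 1:1 classid 1:20 htb rate  50kbit ceil 230kbit prio 2

        iptables -t mangle -A POSTROUTING -o $WAN -m connbytes --connbytes 400000: \
          --connbytes-dir both --connbytes-mode bytes -j MARK --set-mark 20
        tc filter add dev $WAN parent 1: protocol ip handle 20 fw flowid 1:20

    Keeping the parent class rate a little below the real 256Kbit/s uplink is what keeps the queue on the router, where HTB can reorder it, instead of in the modem.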

  • Laptop recommendation - Portable Gaming

    - by ivan
    So, I'm looking for a new laptop (http://superuser.com/questions/116869/toshiba-satellite-u500-totally-damaged-lcd). My requirements for a new laptop are:

    - a good keyboard (illuminated) and touchpad (multimedia keys included; should be better than the Toshiba U500's)
    - a good graphics card, with a system rating of 6.3 and up for gaming graphics (my Toshiba U500 has 6.3). I used to run some heavy games on my Toshiba U500 with an ATI Mobility Radeon 4570 with 512MB VRAM, but the framerates are not that nice on high settings.
    - a decent CPU, but I think all the new Core i3, i5, i7 chips can run most recent resource-intensive games (my Toshiba U500 has a Core 2 Duo T6500, 2.13GHz)

    I'm also looking for long-term reliability, good sound quality, lots of fast RAM (4GB DDR3-1066MHz and up), and a clear-looking LED screen with a decent resolution. (I can accommodate a laptop with a screen size of 13 inches up to 15.6 inches, and I don't want it to be heavy because I might be taking it outdoors.) I'm actually impressed with the HP Pavilion DV6t, but the screen resolution seems a little too low for 15.6 inches. The Pavilion DV3 is also good, but I want to know if there are other options. Looking for some opinions... Thanks. :D

  • (solved) `ssh foo "<command/>"` not loading remote aliases?

    - by TomRoche
    summary: Why does this fail

        $ ssh foo 'R --version | head -n 1'
        bash: R: command not found

    but this succeeds?

        $ ssh foo 'grep -nHe 'bashrc' ~/.bash_profile'
        /home/me/.bash_profile:3:# source the users .bashrc if it exists
        /home/me/.bash_profile:4:if [ -f "${HOME}/.bashrc" ] ; then
        /home/me/.bash_profile:5: source "${HOME}/.bashrc"
        $ ssh foo 'grep -nHe "\WR\W" ~/.bashrc'
        /home/me/.bashrc:118:alias R='/share/linux86_64/bin/R'
        $ ssh foo '/share/linux86_64/bin/R --version | head -n 1'
        R version 2.14.1 (2011-12-22)

    details: I am a (rootless) user on 2 clusters. One uses environment modules, so any given server on that cluster can provide (via module add) pretty much the same resources. The other cluster, on which I must also unfortunately work, has servers managed individually, so I got in the habit of doing, e.g.,

        EXEC_NAME='whatever'
        for S in 'foo' 'bar' 'baz' ; do
            ssh ${S} "${EXEC_NAME} --version"
        done

    This works fine for packages installed normally/consistently, but often (for reasons unknown to me) packages are not, e.g. (compare the alias below to the alias above):

        $ ssh bar 'R --version | head -n 1'
        bash: R: command not found
        $ ssh bar 'grep -nHe 'bashrc' ~/.bash_profile'
        /home/me/.bash_profile:3:# source the users .bashrc if it exists
        /home/me/.bash_profile:4:if [ -f "${HOME}/.bashrc" ] ; then
        /home/me/.bash_profile:5: source "${HOME}/.bashrc"
        $ ssh bar 'grep -nHe "\WR\W" ~/.bashrc'
        /home/me/.bashrc:118:alias R='/share/linux/bin/R'
        $ ssh bar '/share/linux86_64/bin/R --version | head -n 1'
        R version 2.14.1 (2011-12-22)

    Using aliases copes well with these install differences when I interactively shell into the server, but fails when I try to script ssh commands (as above); i.e., interactively

        $ ssh foo
        ...
        foo> R --version

    calls my alias for R on remote host=foo, but scripting

        $ ssh foo 'R --version'

    doesn't. What do I need to do to make ssh foo "<command/>" load my aliases on the remote host?
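
    Since the post is marked solved, just a note on the mechanism plus two hedged workarounds: ssh host 'cmd' runs a non-interactive shell, and bash only expands aliases (and fully reads ~/.bashrc) in interactive shells, which is why the alias works after a real login but not in the one-liner.

        # force an interactive remote shell so ~/.bashrc is read and aliases expand
        ssh foo 'bash -ic "R --version | head -n 1"'

        # or sidestep aliases entirely: put both per-cluster install dirs on PATH for the one command
        ssh foo 'PATH=/share/linux86_64/bin:/share/linux/bin:$PATH R --version | head -n 1'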

  • Resource reference passing in puppet

    - by paweloque
    Is it possible to pass Puppet resource references to other resources? My use case is to build a Jenkins build pipeline with Puppet. To chain Jenkins jobs into a pipeline I need to pass each job its successor job. A subset of the definition is:

        jobs::build { "Build ${release_name}":
            release           => $release_name,
            jenkins_jobs_path => $jenkins_jobs_path,
            successors        => 'Deploy',
        }

        jobs::deploy { "Deploy ${release_name}":
            release           => $release_name,
            jenkins_jobs_path => $jenkins_jobs_path,
            successors        => 'Smoke Test',
        }

    In the definition you see that I specify the successors by name, i.e. 'Deploy' and, in the case of the second job, 'Smoke Test'. What I'd like to do is pass a reference to a resource and extract the name from it:

        jobs::build { "Build ${release_name}":
            release           => $release_name,
            jenkins_jobs_path => $jenkins_jobs_path,
            successors        => Jobs::Deploy["Deploy ${release_name}"],
        }

        jobs::deploy { "Deploy ${release_name}":
            release           => $release_name,
            jenkins_jobs_path => $jenkins_jobs_path,
            successors        => Jobs::Smoke_test["Smoke Test ${release_name}"],
        }

    And then within the jobs::deploy and jobs::build definitions I'd access the resource by reference and query for its type, etc. Is it possible to achieve this in Puppet?

  • Easy solution to monitoring & blocking connections to non-malicious services, IP's, and tracking companies

    - by binarybunny
    Our family lives in the middle of nowhere, so the only high-speed internet available is Verizon's 3G mobile broadband. We have the highest package available, yet we continually go over the 10GB limit and get charged $10 for every 1GB we go over. We run a business from home, so stopping when we hit the limit is not an option. I've found that the majority of connections are to Google, Microsoft, Akamai, Facebook, and other web service companies (mainly Google). I know these are harmless connections, but when it costs money for them to monitor our web activity it becomes a serious problem. Here are some things I've done, but I'm sure there's something else that could help before blocking a huge set of IP ranges:

    - stopped using Windows (on my machine)
    - use the MVPS hosts file on all computers
    - use Firefox on all computers (with the "don't track me" option)
    - ad block plugin in all browsers
    - blocking Google updates
    - blocking Windows updates
    - block images in browsers (when possible)
    - use Comodo (paranoia-level style of blocking...)
    - virus-free computers with ESET NOD32
    - bought a router and installed dd-wrt in an attempt to block connections more diligently (and throttle bandwidth if it comes to that)

    Anything I'm missing? I know Google Analytics is on almost all websites, as well as FB Like buttons, but I would like to be able to stop these connections without blocking the use of Google services like Gmail, etc. Any ideas?
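
    One hedged idea in that direction, since dd-wrt's resolver is dnsmasq: null-route the tracker domains at the DNS level, so only those names are affected and Gmail and other wanted Google services keep working. The domain list below is illustrative, not exhaustive, and on dd-wrt these lines would normally go into the "Additional DNSMasq Options" box in the web UI rather than a file on disk.

        # answer 0.0.0.0 for known tracking domains; clients then never open the connection
        address=/google-analytics.com/0.0.0.0
        address=/doubleclick.net/0.0.0.0
        address=/googlesyndication.com/0.0.0.0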

  • How to stop DW20.exe running on Win 2003 Server

    - by Laurence
    Periodically my ASP.NET application crashes (usually because memory consumption exceeds the maximum allowed by the application pool) and DW20.exe starts up. This is a big problem because it uses huge amounts of memory and CPU for minutes at a time. I want to know how to stop DW20.exe from running. Please note, I have already tried these often-mentioned solutions:

    - Disabling error reporting in Control Panel > System > Advanced > Error Reporting
    - Disabling the Error Reporting Service
    - Modifying the registry as in http://support.microsoft.com/kb/841477 (however, I might have done this wrong: the doc says "add a DWReportee value of 1", and what I did was add a DWORD entry with a hexadecimal value of 1; is this right? Also, only 2 of the 4 keys were present, so I only modified those; e.g. there was no HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\PCHealth key at all)

    So, zero points for suggesting any of the above (unless you can see I have modified the registry incorrectly)! Also zero points for suggesting I resolve whatever is causing the application crashes :) I am figuring this out; I just want something in the meantime to stop DW20.exe eating up all the server resources. By the way, this is a Windows 2003 SP1 server, with IIS 6 and SQL 2005 installed. There is no MS Office.

  • New keyboard for linux: Adesso Tru-Form or MS Natural Keyboard 4000?

    - by Andrea
    Hi folks! I'm going to buy a new ergonomic keyboard for my laptop. In the following, keep in mind I live in Italy. I considered the following models:

    Adesso PCK-308UB - Adesso Tru-Form™ Pro - Contoured Ergonomic Keyboard with TouchPad-PS2

    Pros:
    - has a built-in touchpad in the same position as my laptop's
    - somewhat cheaper than the alternative below

    Cons:
    - the surface doesn't seem to be bowl-shaped; the keys seem to lie on a straight, slightly inclined surface. It seems an idea used extensively in other ergonomic keyboards
    - according to a few comments on the net, new Adesso keyboards seem to lack robustness; they're likely to lose small parts after a few weeks or months. Other users, instead, seem to never have had any problem in years and swear by their quality and comfort. Those who did have problems, however, lamented a lack of responsiveness from the manufacturer
    - I'm not sure whether the keyboard (at least the standard keys) and the touchpad will both be recognized correctly under Linux distros (I mostly use FC, btw)
    - last time I checked, Adesso didn't have local resellers in my country

    Microsoft Natural Ergonomic Keyboard

    Pros:
    - recognized as one of the most comfortable keyboards
    - reliable customer service operating in my country
    - AFAIK there are several documented ways to get the extra buttons to work with Linux

    Cons:
    - it doesn't have a built-in touchpad, and it has a numeric keypad wasting space on the way to the mouse

    But there could be other keyboards I haven't considered yet, so here follows my ideal keyboard wishlist, ordered by priority:

    - Linux compatible
    - basic ergonomic design, which entails a split, tilted keyboard and pads
    - advanced ergonomic design, like true-ergonomic's or Kinesis', where special keys (like enter, caps-lock...) are placed symmetrically in the middle to be used by the thumbs
    - a built-in touchpad/trackball placed under the keyboard. I just love this on my notebook; I think it's pretty effective, since it allows my hand to rest naturally every time I use it. Any opinion on this?
    - high-quality switches, like Cherry's (unsure about this one)
    - additional programmable keys placed near the usual ones, to simplify typing shortcuts

    TIA
    Andrea

  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    I submitted this to Stack Overflow (here) but realised it should really be on Server Fault, so apologies for the incorrect and duplicate posting. OK, so I've just been on a SQL Server course where we discussed the usage scenarios of multiple filegroups and files over local RAID and local disks, but we didn't touch on SAN scenarios, so my question is as follows. I currently have a 250 GB database running on SQL Server 2005 where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file. The log file is also on the same volume. My interpretation is that separate data files should be used across different disks to lessen disk contention, and that filegroups should be used for partitioning of data. However, with a SAN you obviously don't really have the same issue of disk contention that you do with a small RAID setup (or at least we don't at the moment), and Standard Edition doesn't support partitioning. So in order to improve parallelism, what should I do? My understanding of various Microsoft publications is that if I increase the number of data files, separate threads can act on each file separately. Which leads to the question: how many files should I have? One per core? Should I be putting tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you

  • How can I get more low memory with the following setup?

    - by user539484
    Modules using memory below 1 MB:

        Name         Total        =  Conventional   +  Upper Memory
        --------  ----------------  ----------------  ----------------
        MSDOS       14 317   (14K)    14 317   (14K)         0    (0K)
        HIMEM        1 120    (1K)     1 120    (1K)         0    (0K)
        EMM386       3 120    (3K)     3 120    (3K)         0    (0K)
        OAKCDROM    36 064   (35K)    36 064   (35K)         0    (0K)
        POWER           80    (0K)        80    (0K)         0    (0K)
        NLSFUNC      2 784    (3K)     2 784    (3K)         0    (0K)
        COMMAND      2 928    (3K)     2 928    (3K)         0    (0K)
        MSCDEX      15 712   (15K)    15 712   (15K)         0    (0K)
        SMARTDRV    30 384   (30K)    13 984   (14K)    16 400   (16K)
        KEYB         6 752    (7K)     6 752    (7K)         0    (0K)
        MOUSE       17 296   (17K)    17 296   (17K)         0    (0K)
        DISPLAY      8 336    (8K)         0    (0K)     8 336    (8K)
        SETVER         512    (1K)         0    (0K)       512    (1K)
        DOSKEY       4 144    (4K)         0    (0K)     4 144    (4K)
        POWER        4 672    (5K)         0    (0K)     4 672    (5K)
        Free       552 944  (540K)   539 088  (526K)    13 856   (14K)

    Memory Summary:

        Type of Memory       Total   =       Used   +       Free
        ----------------  ----------   ----------   ----------
        Conventional         653 312      114 224      539 088
        Upper                 47 920       34 064       13 856
        Reserved                   0            0            0
        Extended (XMS)*   64 898 256    2 671 824   62 226 432
        ----------------  ----------   ----------   ----------
        Total memory      65 599 488    2 820 112   62 779 376

        Total under 1 MB     701 232      148 288      552 944

        Total Expanded (EMS)      33 947 648  (33 152K)
        Free Expanded (EMS)*      33 538 048  (32 752K)

        * EMM386 is using XMS memory to simulate EMS memory as needed.
          Free EMS memory may change as free XMS memory changes.

        Largest executable program size        538 976  (526K)
        Largest free upper memory block          7 488    (7K)
        MS-DOS is resident in the high memory area.

    I'm running MS-DOS 6.22 on VMware virtual hardware. This is the memory state after a MemMaker pass, so I'm looking for optimization beyond MemMaker. Note: the NLS drivers (DISPLAY, KEYB, NLSFUNC) are essential for me. Thanks to @mtone for the valuable reminder about MSCDEX /E, which gave me 16KiB of low memory (see the diff)!

  • Apache2 - 500 internal server error

    - by Lucio Coire Galibone
    I'm running a VPS with CentOS 6, 4 GB of RAM, 10 GB of disk and 2 virtual CPUs (Intel(R) Xeon(R) CPU L5640 @ 2.27GHz). As my host says, each virtual CPU must be at least 0.5 physical CPU. At certain times of the day, those with more traffic, when accessing my PHP script I intermittently receive "500 internal server error". I turned Apache logging up to debug level, and also enabled PHP logging with E_ALL, but I can't find any reference to the 500 error in any logs (I checked the right logs!). I haven't got any .htaccess file in the script's path. The strange thing is that the error starts at the first PHP line in the script (the preceding HTML displays correctly, but at the first PHP line the script sends a 500 error). The CPU load is always good (max 0.15 0.08 0.01) and RAM is close to 95%, but it has only hit swap twice in a month, by 2-5 MB. Apache runs with prefork with these values:

        <IfModule prefork.c>
        StartServers          8
        MinSpareServers       5
        MaxSpareServers      20
        ServerLimit         280
        MaxClients          280
        MaxRequestsPerChild 4000
        </IfModule>

    Everything works correctly and I don't get any errors in quiet times, but I start receiving errors when traffic rises (6-9000 visits per hour). Can I solve the problem by increasing resources? (I can upgrade RAM up to 16 GB.) Could it be caused by reaching MaxClients (but Apache would write that to the log, right?)? If I upgrade RAM to 6 or 8 GB, do I calculate the MaxClients value with this? MaxClients = Total RAM dedicated to the web server / Max child process size. Max child process size is around 20M. What else could the problem be? Thanks in advance
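
    A back-of-the-envelope check of that formula as a shell sketch; the 20 MB child size is the figure from the post, while the 1 GB reserved for the OS and other services is an assumption to adjust:

        ram_mb=6144          # total RAM after an upgrade to 6 GB
        reserved_mb=1024     # headroom for the OS, MySQL, etc. (assumption)
        child_mb=20          # average Apache child size, per the post
        echo $(( (ram_mb - reserved_mb) / child_mb ))   # 256: a reasonable MaxClients ceiling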

  • MySQL -- enable connection to remote server via local /tmp/mysql.sock

    - by Kevin
    Hey all, I run a shared hosting provider and we're looking to move to a high-availability (replicated across multiple datacenters) setup for our hosting. We have created a replicated MySQL setup with failover that works wonderfully, and we'd like to move all of our clients' databases to it. The only trouble is that we have many, many customers, all of whom have configured their WordPress, Drupal, etc. installations to connect to MySQL via a local socket, not to the address of the remote server. I would hate to have to go through manually and change the connection settings in all of our clients' sites. What I'd ideally love to see is a program that listens on /tmp/mysql.sock and forwards connections there to the remote server I specify. I've seen SQL Relay, but it seems to require that I hardcode all of the database names, usernames and passwords into its configuration file. This is not going to work for me because our users add new databases dynamically all of the time, and I'd rather not have to write code to update SQL Relay's config file every time. Does anyone have an idea on how to do this? Alternatively, I'd accept ideas on how to handle this at the PHP level (i.e. redirect any attempted calls to mysql_connect() to use that hostname rather than localhost). Thanks, Kevin
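
    For what it's worth, a generic socket-to-TCP relay can do exactly this without knowing anything about database names or credentials; a minimal sketch with socat (the remote host name is a placeholder, and the permissive socket mode is just for illustration):

        # listen on the local MySQL socket and forward every connection to the remote master
        socat UNIX-LISTEN:/tmp/mysql.sock,fork,unlink-early,mode=777 \
              TCP:db-master.example.com:3306

    Clients that connect to "localhost" still use the socket path, so nothing in WordPress or Drupal needs to change; the trade-off is that the relay itself becomes a dependency on every web node.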

  • How small (spec wise) can a virtual machine be and still boot up and run some sort of OS?

    - by IllvilJa
    One of the advantages of virtual machines is that you can be very flexible with their sizes. If the host system permits it, you can have a very large virtual machine with a lot of virtual RAM and disk. You can also decide to go the other way around, give the virtual machine a very modest amount of RAM and disk, and then choose and configure the OS appropriately. The question is: how small a virtual machine have people managed to set up (and get to both boot up and run)? Virtual machines doing something useful are preferable, even if I know "useful" in this context is awfully subjective, but laboratory cases with a configuration stripped beyond common sense could be interesting as well, just to see what people manage to boot and run. It's quite an open-ended and academic question, but think of it: an extremely small VM (which still does something useful) takes very little memory and disk and can be quite quickly saved to and restored from disk. If it's also gentle on CPU resources, one might consider having a huge number of such VMs up and running on a host. (Imagine a VM running just an old Commodore 64 or Commodore Amiga in it. OK, way wrong CPU architecture for modern virtualization software running on an x86-based PC, but still an interesting thought. You could have quite a few such small VMs running on a modern PC.)

  • Calendar sharing between unrelated Domains

    - by vlannoob
    I have a 'request' from one of the 'big guys' that has me scratching my head a bit. He is one of our executives who is also a board member of several other organisations, so he floats around between 4 different unrelated companies, each with their own domains, Exchange setup, etc. He has domain accounts in each organisation. He has an iPad with multiple Exchange accounts so he can see all his calendars, which works OK for him, Apple calendaring bugs/flaws aside. What he wants is the ability for 'reception staff' at each organisation to 'see' all his calendars, as they are booking things for him in their respective organisations' calendars without knowing about bookings made in his calendars by the other organisations... you with me? So, for example: Company A books a meeting into his Company A calendar at 9am Monday, and Company B books him a meeting in his Company B calendar at 9:15am Monday on the other side of town, and of course Company C has him booked in all day Monday on their Company C calendar. He gets all of those on his iPad, but he would like either a 'global' calendar all can see and book into, or the ability for receptionists at Companies A, B and C to see all the calendars, to avoid these kinds of conflicts. I told him to 'go away' straight off the bat; I don't control anything to do with the other companies or know their infrastructure, and quite frankly I don't want any part of it... but he's whining, and he's high enough up the food chain that I can't ignore him forever. I'm open to suggestions. Is there any third-party software/service that can facilitate this kind of setup? I really don't want to be creating users in my AD structure for people not in our organisation so they can get access to his calendar, and I'm sure their sysadmins feel the same. As usual, any advice is greatly appreciated ;)

  • RAID 5 Install on Ubuntu Server 12.04 [closed]

    - by tarabyte
    Environment: Ubuntu Server 12.04, installing from a bootable flash drive. Error: "No root file system is defined. Please correct this from the partitioning menu." I'm trying to set up a personal file server with software RAID 5. I just got three hard drives for this, but haven't found any solid documentation. I'm unsure what the basic way to partition my hard drives is. Can someone upload a screenshot of their "partition disks" screen so that I can compare it with mine (attached)? Should I set the bootable flag? Do I need a /home partition? A /boot partition? Should I "Use [my partition] as: Ext4 journaling file system"? Or make that field "physical volume for RAID"? I am an engineer, but I have only a cursory knowledge of all things Linux. If you know of any good learning resources I'd be happy to hear about those too (that way I don't have to blindly follow deprecated tutorials online). (Well, the image would be here, but I don't have a high enough reputation yet; please vote up :)) Thank you.

    References I've looked into:
    https://help.ubuntu.com/community/Installation/SoftwareRAID
    https://help.ubuntu.com/12.04/serverguide/advanced-installation.html
    http://forevergeeks.com/setup-ubuntu-server-with-raid-5/
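
    For reference, the non-installer route, as a hedged sketch (device names are assumptions; in the installer itself the equivalent is marking each disk's partition as "physical volume for RAID", assembling them under "Configure software RAID", and then putting the root filesystem on the resulting md device, which is what clears the "No root file system is defined" complaint):

        # build a 3-disk RAID 5 array, format it, and make it persist across reboots
        mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
        mkfs.ext4 /dev/md0
        mdadm --detail --scan >> /etc/mdadm/mdadm.conf
        update-initramfs -u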

  • About load average in htop: how to decide if it's still doing OK?

    - by Joe Huang
    I use 'htop' to monitor my web server. It's recently quite loaded and the Load average is showing something like this: Load average: 3.10 2.56 1.63 I searched the web about these numbers and I found an article about it: http://blog.scoutapp.com/articles/2009/07/31/understanding-load-averages In the article, it says if I have 2 CPUs, 2.0 means 100% CPU utilization. And my VPS has two CPUs, so what does 3.1 mean? How could it exceed 100% CPU utilization? And from these numbers, does it mean I should be wary about the loading now? But the performance seems totally fine, and this is a managed VPS, the hosting company has not notified me any warning about it. During day time, Load average always show these high numbers... here is another snapshot while writing. Load average: 3.03 2.77 1.97 Load average: 0.41 1.29 1.60 <---- 5 more minutes later So I am wondering how much room left for this site to grow in current configurations? What kind of proactive actions I should take in advance? I don't want to wait until the server bursts. Thanks.
