Search Results

Search found 9056 results on 363 pages for 'world'.

Page 283/363 | < Previous Page | 279 280 281 282 283 284 285 286 287 288 289 290  | Next Page >

  • How to configure default text selection behavior in Windows XP, 7? (e.g. mouse click selects entire word vs. mouse click inserts an active cursor)

    - by Mouse of Fury
    I find the mouse click behavior of Windows XP and Windows 7 annoying and intrusive. I don't remember Windows NT being quite this bad, or Mac OS 7 - 10 which I used in the nineties. When I'm using a browser and I click on a text field - for example, the address bar, or a search box - the first thing that happens is that the entire field is selected. Subsequent clicks seem to select parts of words, often deciding arbitrarily to exclude or include adjacent punctuation. The same happens in Excel and other apps, and when trying to rename files, so I'm assuming this behavior comes from a system-wide text handling routine. I frequently want to edit text, cut out or replace odd parts of the insides of words or chunks of sentences, and often find that to get a simple insertion cursor I have to click the mouse up to 4 times in succession. I've had to do a lot of this recently and it has been driving me insane. Is there a place at the system level where this can be configured? In a perfect world, I'd like a single click on a new text area to insert a cursor point, and a rapid double click to select the entire area. Words or text within the area could be selected by inserting a cursor, holding down the mouse button and dragging to the exact point where I want the selection to end - even if that's in the middle of a word. No, I don't need or want Windows to "smart select" a word or sentence for me. I've looked in the Mouse and Accessibility Options control panels (Windows XP) and haven't found anything even close. Thanks -

    Read the article

  • Microsoft Excel not graphing

    - by SmartLemon
    I'm not sure if this is a math question or an SU question. The experiment measured the period of one "bounce" when you hang a weight on a spring and let it bounce. I have the data here, one series being mass and one being time. The time is an average of 5 trials, each one being an average of 20 bounces, to minimize human error.
    t: 0.3049s 0.3982s 0.4838s 0.5572s 0.6219s 0.6804s 0.7362s 0.7811s 0.8328s 0.869s
    The mass is the mass that was used in each trial (they aren't going up in exact differences because each weight has a slight difference; nothing is perfect in the real world).
    m: 50.59g 100.43g 150.25g 200.19g 250.89g 301.16g 351.28g 400.79g 450.43g 499.71g
    My problem is that I need to find the relationship between them. I know m = (k/4PI^2)*T^2, so I can work out k like that, but we need to graph it. I can assume that the relationship is a sqrt relation, but I'm not sure on that one; it appears to be the reverse of a square. Should it be 1/x^2 then? Either way my problem is still present: I have tried 1/x, 1/x^2, sqrt and x^2, and none of them produce a straight line. The problem for SU is that when I go to graph the data in Excel I set the y axis data (which is the weights), and then when I go to set the x axis (which is the time) it just replaces the y axis with what I want to be the x axis; this only happens when I have the sqrt of "m" as the y axis and I try to set the x axis as the time. The problem of math is: am I even using the right thing? To get a straight line it would need to be x = y^1/2, right? I thought I was doing the right thing, it is what we were told to do. I'm just not getting anything that looks right.
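    For reference, the linearisation hiding in that formula, assuming the ideal spring relation T = 2*PI*sqrt(m/k) (which is where m = (k/4PI^2)*T^2 comes from):
      T = 2*PI*sqrt(m/k)   =>   T^2 = (4*PI^2/k) * m
    So plotting T^2 against m (or, equivalently, T against sqrt(m)) should give a straight line, through the origin in the ideal case, with slope 4*PI^2/k, i.e. k = 4*PI^2 / slope. Plotting T or m raw against each other, or trying 1/x or 1/x^2, will not come out straight.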

    Read the article

  • Ubuntu on VPS becomes unresponsive: BUG: soft lockup - CPU#0 stuck for 22s

    - by Bhante Nandiya
    We have a VPS running Ubuntu, on Xen. The problem is this: about once a day, for about 20-50 minutes, at a random time, the server becomes completely unresponsive to the outside world. After this period it becomes responsive again, as if nothing had happened; it doesn't lose uptime, it doesn't restart. It just starts responding again as if it had been in suspended animation. These outages occur under conditions of non-exceptional memory and CPU, for example 70% mem, 5% cpu. I have stopped all non-essential services so the usage is very even. These outages don't particularly occur during times of increased memory/cpu use (during daily tasks); they sometimes occur at times of very low cpu use (<2%), but in the past have also occurred during swapping. These blackouts have been occurring both under Ubuntu 12.04 LTS and Ubuntu 14.04 LTS - no change at all (I upgraded Ubuntu specifically to see if it helped this problem). It is possible to log into our web host's site and use their administration console to see error messages from during this time. Presumably these messages come through the Xen virtualization; the main one goes like this:
    BUG: soft lockup - CPU#0 stuck for 22s! [ksoftirqd/0:3] (repeats many times)
    SysRq : Emergency Sync
    (Sometimes this is the only message in the console.) Others seen previously under different load situations include:
    BUG: soft lockup - CPU#0 stuck for 22s! [swapper/0:0] (repeated many times)
    or:
    INFO: rcu_sched detected stall on CPU 0 (t=15000 jiffies) (repeated many times with t getting bigger)
    From googling around I've tried various kernel parameters such as nohz=off and acpi=off, to no avail. All tech support has said is that other Ubuntu installations are not suffering the same problem. Anyone got any ideas or experience with this problem?

    Read the article

  • SFTP access without hassle

    - by enobayram
    I'm trying to provide access to a local folder for someone over the internet. After googling around a bit, I've come to the conclusion that SFTP is the safest thing to expose through the firewall to the chaotic and evil world of the Internet. I'm planning to use the openssh-server to this end. Even though I trust that openssh will stop a random attacker, I'm not so sure about the security of my computer once someone is connected through ssh. In particular, even if I don't give that person's user account any privileges whatsoever, he might just be able to "su" to, say, "nobody". And since I was never worried about such things before, I might have given some moderate privileges to nobody at some point (not sudo rights surely!). I would of course value your comments about giving privileges to nobody in the first place, but that's not the point, really. My aim is to give SFTP access to someone in such a sandboxed state that I shouldn't need to worry about such things (at least not more so than I should have done before). Is this really possible? Am I speaking nonsense or worried in vain?
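    For what it's worth, OpenSSH can provide exactly this kind of sandbox on its own via a chrooted, SFTP-only account. A minimal sketch for /etc/ssh/sshd_config, assuming a dedicated group called sftponly and a chroot tree under /srv/sftp (both names purely illustrative):
      Match Group sftponly
          ChrootDirectory /srv/sftp/%u
          ForceCommand internal-sftp
          AllowTcpForwarding no
          X11Forwarding no
    The chroot directory itself must be owned by root and not writable by the user (sshd refuses to chroot otherwise), so the usual pattern is a root-owned /srv/sftp/<user> containing a user-owned upload subdirectory. Because ForceCommand internal-sftp replaces the login shell, the account never gets an interactive shell at all, so there is no opportunity to su to nobody or anyone else.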

    Read the article

  • HP/IBM alternative to Buffalo iSCSI TeraStation?

    - by Robin Day
    I'm looking at virtualising some of our infrastructure in order to allow for more resilience and future expandability. We have successfully virtualised on single servers with Direct Attached Storage and are now looking for a more future-proof solution using a high-powered host (or two) and a SAN (or two). I'm thinking that the host machine will probably be an HP ProLiant DL360 G7 (all of our existing infrastructure is HP). Unfortunately, I am new to the world of SANs. From what I can see, the Buffalo TeraStation III is all I would need in order to set up an iSCSI SAN for VMware to use. However, I'm a little hesitant to go that way as it's a bit too "entry level" for my liking. In particular I would be very keen on more redundancy in power, networking, etc. I'm also very aware that you "get what you pay for". Therefore, can anyone recommend equivalents from the big boys, HP/IBM? I have searched high and low on the HP site and seen many options, but am struggling to work out whether any of them is all the hardware I would need. Some options appear to need separate controllers or disk enclosures, etc.

    Read the article

  • Productive Writing Software

    - by Nick Retallack
    What do you guys use to write a personal journal, notes, and reference information that you want to group and search through later? I'm not one of those crazy people who like to share their journal with the world. Some things I like to keep to myself, and it's quite nice to have them as a reference. Preference goes to cross-platform stuff (Windows, Mac, Linux). Some background: for Windows, there's a program called Yeah Write, which really changed my expectations about how easy it should be to start writing something. You don't open or close files -- just click an empty slot and start writing, or click a filled slot to work on another file. And you can organize things into categories by creating tabs. Now that I carry a MacBook, I've just been using TextEdit. I like it because I can't lose my work when my computer crashes: it auto-saves everything and restores it when I launch TextEdit again. But I make a mess, leave thirty open files, and save everything to one directory with no tags for easy grouping. Saving files and selecting the directory they should be in is too clunky to do in the middle of a meeting. And organizing files into folders in the Finder is a pain, since there's no tree view like on Windows. Sure, I'm lazy, but I miss Yeah Write. I'm not going to get a Windows laptop just to use it though. Since the laptop I take my notes on is a Mac, I'm gonna be biased toward Mac solutions.

    Read the article

  • Cisco configuration for public library internet

    - by AlternateZ
    I'm a C/C++ programmer turned IT support guy working for a public library. My day is usually spent helping random grandparents learn how to use email, so my networking knowledge is limited to what I can glean from Google. Here's the situation. We have a public library with 20 PCs on a LAN and also public wifi access. Previously we were running all of this on one ADSL connection and people complained about low speeds. We hired a networking company to set up a Cisco dual-WAN router for us, and purchased an additional ADSL connection. The intention was to give the LAN PCs a guaranteed amount of bandwidth each, and then let the wifi users split the rest. The results were far worse than what we expected, and all we got from the company was excuses; they've since washed their hands of us. During busy periods, net performance on the LAN PCs is so poor that attaching files to Gmail etc. often times out and fails - far from the "guaranteed amount of bandwidth each" that we hoped for! Sometimes it feels like performance is worse than before, when we had one ADSL link and an unconfigured router. Anyway, surely this is a problem encountered a million times over across the world (sharing internet across many users effectively)? What are standard solutions for something like this? I admit to not even knowing the right jargon to google for (load balancing?). I'd appreciate any links to resources/guides that might help me get a better understanding of the problem and its solutions, and perhaps some stories of your own experience in solving similar problems. This will help us evaluate and negotiate with network consultants in the future. If it's relevant, our router config contains a section "policy-map" with "bandwidth percent" for each class of user (LAN, wifi), and "fair queue".
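    For orientation rather than as a drop-in fix, the config the consultants left behind usually has the shape sketched below. The important detail is that "bandwidth percent" is only a minimum guarantee that engages during congestion, and congestion is judged against the configured or shaped rate of the interface the policy hangs off; if that is a Fast Ethernet port facing the ADSL modem, the queueing may never kick in unless the class-based policy sits under a parent shaper set to the real uplink speed. Class names, ACL name and the shaping rate here are all illustrative:
      class-map match-any LAN-PCS
       match access-group name LAN-SUBNET
      !
      policy-map WAN-CHILD
       class LAN-PCS
        bandwidth percent 60
       class class-default
        fair-queue
      !
      policy-map WAN-PARENT
       class class-default
        shape average 800000
        service-policy WAN-CHILD
      !
      interface FastEthernet0/0
       service-policy output WAN-PARENT
    Comparing that structure against the running config (and against the actual sync rate of each ADSL line) is a reasonable first step before bringing in another consultant.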

    Read the article

  • Is there a limit to how many sites can be hosted on a single IP address when using HTTP Host Headers on Windows 2008?

    - by Kev
    For reasons that are lost in the mists of time, our older Windows (2000, 2003) servers have been configured with an "Administrative" IP address and three further "Hosting" IP addresses. There are also additional IPs for sites with SSL certificates. The "Administrative" IP address is where all our internal provisioning, monitoring and other such apps are bound. We lock this down and don't permit access to it from the outside world (other than over our VPN). The three "Hosting" IP addresses are used for IIS website hosting (in conjunction with host headers). Historically, new site IP address allocations have been rotated through these three addresses. I'm not really sure why. I'm building a new batch of servers and I'm considering just having a single hosting IP address. Our servers can host up to 1200 sites on a single machine. Is there a technical limit to the number of IIS sites that can bind to a single IP address? Our Linux platform seems to do just fine with a single shared IP plus host headers. I initially thought this might be an SEO thing, but given that IPv4 address space conservation is paramount, I hardly think Google or other search engines could reasonably penalise site rankings just because hundreds of sites hang off the same IP.

    Read the article

  • Managed LAMP platform for maximizing availability and global reach, not scalability

    - by user66819
    Assume a Linux/Apache/MySQL/PHP application for a small base of registered users. With small userbase, there are no traffic peaks so the scalability that cloud platforms offer is not imperative. But the system is mission-critical, so availability is the primary goal. Users are also distributed across Asia, Europe, and US, so multiple server locations that minimize users' network hops would be highly desirable. The dream: a managed VPS platform where we would configure a single server (uploading PHP and other files, manipulating database, etc.), and the platform would automatically mirror the server in a handful of key places around the world (say one on each US coast, one in Europe, one in east Asia). File system synchronization and MySQL replication would happen automatically. Core operating system is managed, so we don't need to do full system administration and security, and low-level backups are also done by service provider, though we also do our own backups as well. Couple this with some sort of DNS geo-detection, so users are routed to the nearest operational server... with support for https, of course. Does such a dream exist? If not, what are some approaches to accomplish the same end with minimal time investment and minimal monthly hosting costs?

    Read the article

  • backup an existing linux server to a virtualbox virtual machine

    - by user146526
    I have some servers and VPSs with many companies across the world. I want to back them up locally. I have some backup solutions running to remote hosts, but I also want a local backup on a computer at home. What I am thinking is: 1) Create a VirtualBox virtual machine and install the same version of Linux as the server. 2) Use rsync to back up the server to the local VirtualBox machine (something like rsync -av --delete --progress --exclude '/dev/' --exclude '/proc/' root@server_ip:// / ). 3) Repeat the command every few days to update the files. 4) In case of a hard disk failure, or any other bad event, reverse the rsync command, get the files back and continue my business. I tried it with 2 OpenVZ VPSs, one being a backup of the other. I also tried to transfer a normal Linux server host to an OpenVZ machine and it worked great. That way looks pretty clean and easy to me; this is the kind of solution I am looking for. However I need to be sure that this will work if I am going to do it. The question is, will that work ok? Does anyone see any problem with that? Do you have any other suggestions? Thanks
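    A slightly more defensive variant of the rsync in step 2, assuming it is run from inside the VirtualBox guest and that server_ip stands for the real host: the extra excludes keep pseudo-filesystems and mount points out of the copy, and -AX preserves ACLs and extended attributes:
      rsync -aAXv --delete --numeric-ids \
        --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} \
        root@server_ip:/ /
    The restore in step 4 is then the same command with source and destination swapped, run from the VM (or from a rescue environment on the replacement hardware).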

    Read the article

  • WAN Optimization for Small Office/Home Office

    - by TiernanO
    I have been reading up on WAN optimization for the last while, mostly out of interest in speeding up my own internet connections, but also to speed up the office internet connection. At home, I have 2 cable modems plugged into a RouterBoard RB750, which load balances the connections. In the office, we have a single connection into a NetGear router. Most of the WAN optimization products I have seen seem prohibitively expensive, and also seem to be based on the idea of having multiple branches around the world. What I am looking for, ideally, is as follows: software install - I am "guessing" I need to install it in 2 places: one in the office or house, and one in "the cloud"; any connections going to, say, the US (we are in Europe, but our backups live in the US currently, which would be something important to speed up) would be "tunnelled" through the optimizer; if downloading or uploading large files, open multiple connections between "the cloud" and the optimizer - this is where a lot of speed could be gained; finally, items not already compressed would be compressed on the cloud side of things, and items that are already on the optimizer would not be sent again, kind of like rsync or proxy servers. So, is there something that can be done? Is it available using off-the-shelf components (some magic script with SSH, Squid, Linux and duct tape), or is it something that needs to be purchased? Or even an open source project that does 90% of what I am asking?
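    On the duct-tape end of the spectrum, part of the "tunnel through an optimizer in the cloud" idea can be mocked up with a cheap VPS near the backup provider and OpenSSH's built-in compression and SOCKS proxy; a sketch, with the host name being an assumption:
      # compressed SOCKS tunnel to a VPS sitting near the US backup target
      ssh -C -N -D 1080 user@us-vps.example.com
      # then point the backup client (or a proxifier) at localhost:1080
    That covers on-the-fly compression and keeps one warm TCP connection across the ocean, but it does no deduplication or multi-connection transfer, which is where the commercial WAN optimizers earn their price tags.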

    Read the article

  • Ubuntu inside VirtualBox is slow

    - by Kapsh
    I am running an Ubuntu instance on VirtualBox inside XP. Here are the details:
    Host: Windows XP Pro
    Guest: Ubuntu 8.10
    Total RAM: 3GB
    RAM for VM: 1GB
    Total video memory: 128MB
    Video memory for VM: 40MB
    Hard drive: 200GB
    Hard drive for VM: 30GB
    Processor: 2.80GHz Core Duo
    The problem is that whenever I am inside the virtual machine, things seem so much slower in general. For example, Firefox and Eclipse take longer to load, dragging windows shows a lag, etc. I have tried running Ubuntu before (not inside a VM) and it seemed fantastically fast, so I am disappointed to have to deal with this situation. But I need access to the XP partition without having to reboot, hence the attempt. I am surprised by the perceived slowness, since the whole world seems to be doing virtualization and I cannot imagine everyone works on slow systems knowingly. My question is - is there something I should be doing to boost performance? Am I doing something wrong? This is my home machine and I am not sure if this is the right forum to ask. Thanks.
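    Two things worth checking, as a sketch (the VM name is whatever VBoxManage list vms reports, and the VM must be powered off for modifyvm): that hardware virtualization is actually enabled for the guest, and that the video memory isn't starving the desktop. Installing the Guest Additions inside Ubuntu matters just as much, since they provide the accelerated video and mouse drivers.
      VBoxManage list vms
      VBoxManage modifyvm "Ubuntu 8.10" --hwvirtex on --ioapic on
      VBoxManage modifyvm "Ubuntu 8.10" --vram 64
    The flags are standard VBoxManage options; the values are only a starting point.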

    Read the article

  • In Outlook 2007 Rules and Alerts, EXACTLY what does "my name" mean?

    - by Cornan The Iowan
    I can't find any definition of "my name" in the Outlook 2007 Rules and Alerts or on the Internet. In this case our email system presents two email addresses for me to the outside world. I'd like BOTH of these addresses to be recognized as being "me". I thought that perhaps if I understood the definition of "my name" in the rules, I could set up my mailbox(es) appropriately. Of course if "my name" actually means a single email address, then I won't be able to do so, but if it means "any email address on my account" or "any account meeting [some criteria]", then I might be successful. I'd like to note a subtlety in the rule definitions: while there is a rule named "where my name is in the To or Cc box", the only rule for explicit addresses is "sent to people or distribution list" (I'm assuming that "sent to" means "in the To: list" rather than "in the To: or Cc: lists"). Summing up, my preference is: 1) understanding the precise definition of "my name" so that I can use "where my name is in the To or Cc box" to capture both email addresses on my account; 2) learning that "sent to people or distribution list" actually includes Cc: entries (I can test this myself of course); 3) any other solution that will let me define a rule where my secondary email address will be detected in EITHER the To: or Cc: boxes.

    Read the article

  • What kind of server configuration is best for a chatting app? [closed]

    - by mohabitar
    I'm just now starting to go deeper into the world of cloud hosting and databases, and am getting overwhelmed by how deep this information goes. It's all a little too much to consume in a short amount of time. I get a lot of pricing information, but I'm unable to determine what that means to me. I'm making what you might compare to an email app. Users can send messages to one another. I just don't understand, out of the several options, what would be ideal for an app like this, where users would be constantly sending and receiving text data. With Amazon DynamoDB, I have to specify a pre-defined throughput with number of reads and writes per second. Sure I can just type 50, but I'm not exactly sure what 50 writes per second represents. I'm trying to determine what would be the most cost efficient solution, and I want to know what a throughput of 50 reads/writes/second compares to. Is that a high number? What is a good throughput number for a message sending app with say 50,000 daily users? I'm just providing specific numbers so I can understand what these throughput numbers represent. 100 transactions/second to me seems like a small number since I'm not familiar with this stuff, so I'm just looking to bring everything in context. What would 100 read/write/second be useful for? Are there any average example values available? And I'm not sure what each service is good for. For a message sending app, is there any reason I'd want to choose say Amazon DynamoDB over Google App Engine? Any insight would be greatly appreciated.
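    To put the units in context, provisioned throughput is just operations per second averaged over whatever window you size for, so a back-of-the-envelope sketch (every number below is a made-up assumption) looks like this:
      50,000 daily users x 20 messages each    = 1,000,000 writes/day
      1,000,000 writes / 86,400 seconds        ≈ 12 writes/second flat average
      x 10 as a peak-hour factor               ≈ 120 writes/second provisioned
    So 50 writes/second is not a large figure for tens of thousands of active users; what moves the number around is the peak-to-average ratio and the item size, since DynamoDB meters write capacity in roughly 1 KB units (larger items consume multiple units per write).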

    Read the article

  • BackupExec 12 + RALUS - VERY slow backups

    - by LVDave
    We use Backup Exec 12 and the Remote Agent for Linux/Unix Servers (RALUS) to back up a large RHEL5 system. For various reasons we need to do a daily working-set job. These working-set jobs run abysmally slowly. The link between the target machine and the BE server is gigabit, and any other type of job runs at 1-3GB/min. These working-set jobs start out at perhaps 40MB/min and over the course of the backup job slowly drop down so low that the BE job rate display in "current jobs" goes blank. Since we usually are only doing changed files for one day, the job is usually small and finishes overnight, and we don't worry about the slowness, but we had some issues with the backup server and missed about 6 days of fairly heavy work on the Linux box, so this working-set job will be a doozy. We have support with Symantec, and I've pestered them a lot about this; they've had me run RALUS in debug mode and send them that log plus a VXgather from the BE host, and they had no fix or workaround. To give an idea, the mentioned working-set job has been running for the last 3 1/2 hours and has backed up just under 10 MEGAbytes. I'm posting this here to see if anybody in the "real world" has seen this and/or has any ideas what might be causing these abysmally slow jobs, since Symantec seems to be clueless...

    Read the article

  • Log connections to program

    - by Zac
    Besides using iptables to log incoming connections, is there a way to log established inbound connections to a service that you don't have the source to (suppose the service doesn't log stuff like this on its own)? What I'm wanting to do is gather some information based on who's connecting, to be able to tell things like what times of day the service is used the most, where in the world the main user base is, etc. I am aware I can use netstat and just hook it up to a cron script, but that might not be accurate, since the script could only run as frequently as once a minute. Here is what I am thinking right now: write a program that constantly polls netstat, looking for established connections that didn't appear in the previous poll - although this seems like a waste of CPU time, since there may not be a new connection; or write a wrapper program that accepts inbound connections on whatever port the service runs on, but then I wouldn't know how to pass that connection along to the real service. Edit: Just occurred to me that this question might be better for Stack Overflow, though I am not certain. Sorry if this is the wrong place.
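    The wrapper idea is essentially a small TCP proxy, and that much can be mocked up without writing code. A sketch with socat, assuming the real service can be moved to listen on another local port (8081 here) while the proxy takes over the public port (8080); both ports are illustrative:
      # log each accepted connection (peer address and timestamp) and relay it
      socat -d -d TCP-LISTEN:8080,fork,reuseaddr TCP:127.0.0.1:8081 \
        2>> /var/log/myservice-connections.log
    At -d -d verbosity socat writes a notice line per connection open and close, which is enough raw material for time-of-day and source-address statistics; it does add a hop, so it is more a stopgap than a permanent fixture.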

    Read the article

  • Cannot connect to MySQL on RDS (Amazon Web Services) from my laptop

    - by Bruno Reis
    I'm having some trouble connecting to a MySQL 5.1 server on an RDS instance on AWS from my laptop. The detailed description of the problem is here: https://forums.aws.amazon.com/thread.jspa?messageID=323397 In short: I have 2 MySQL servers, both with the same db configuration and firewall (security group) configuration. One of them works fine: I can connect to it from my EC2 instances (i.e., from inside the AWS cloud) and from my laptop. The other one doesn't: I can connect from my EC2 instances but not from my laptop. The symptom: a connection attempt from my laptop just hangs, and then times out, as if there were a firewall blocking me (i.e., silently dropping my SYN packets). I must say that everything had been working fine for a very long time, and this problem began suddenly, 3 days ago, without any modifications to DB parameters or the security groups. My current analysis of the situation: The firewall (i.e., security group) cannot be the problem: both MySQL servers share the same firewall configuration -- I can connect to one of them but not to the other. Later on, I even added a rule to allow inbound connections from 0.0.0.0/0 (i.e., I turned off the firewall), and nothing. Oh, I also created a new, fresh security group and changed this instance's SG to the new one (to which I first added my IP address, and then 0.0.0.0/0), but still nothing. The credentials cannot be the problem: I use the same ones from my laptop and from my EC2 instances -- and the user (which is what Amazon calls the master user) has a host of '%' in the database. MySQL is not blocking my IP due to, say, too many failed connection attempts: I've run FLUSH HOSTS on the database, and I also tried to connect using many different source IP addresses, even from all around the world through a VPN proxy service. What could I be missing? I'm asking here because it's been about 36 hours since I posted on the AWS forums but got no answer at all over there... someone here might have a solution! Any input is really appreciated, I'm out of ideas. Thanks!
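    One low-tech check from the laptop is whether a bare TCP handshake to port 3306 completes on both instances, which takes MySQL and credentials out of the picture entirely; a sketch, with the endpoint names as placeholders:
      # working instance vs. problem instance, 5 second timeout each
      nc -vz -w 5 working-db.xxxxxxxx.us-east-1.rds.amazonaws.com 3306
      nc -vz -w 5 problem-db.xxxxxxxx.us-east-1.rds.amazonaws.com 3306
    If the second probe also hangs and times out, the fault is in the network path (routing, NAT, MTU, an upstream filter) rather than in MySQL or the security groups already ruled out above.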

    Read the article

  • Routing WIFI and LAN for specific traffic

    - by jakebird451
    I have two network devices on my MacBook Pro:
    WIFI (en1): Used for general traffic. Connects to an IP of 192.168.19.* via DHCP.
    LAN (en0): Used for specific traffic. Connects to an IP of 192.168.2.10 as a static IP. Does not connect to a router, only to a switch, for a direct connection.
    I have 4 IP addresses I need to access on the LAN: 192.168.2.1, 192.168.2.21, 192.168.2.20, 192.168.2.30. The rest of the traffic needs to go over WIFI. I have tried setting up a routing table for the specific IP addresses, but I only managed to mess up my network. I do not venture out into the world of networking too often, but this was the latest command I had been trying:
    sudo route add -host 192.168.2.30 -interface en0
    This command killed my ability to use ping. It told me that ping could not allocate memory (is that even possible?). It also killed my wifi access. Logging out and back in fixed the issue. I really do not mind making this solution permanent, so I am fine with a temporary route.
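    For the record, the usual shape of that routing setup on OS X is one host route per LAN address (or a single route for the whole subnet) pointed at en0, while Wi-Fi keeps the default route; the other half is making sure en0's network settings contain no router/gateway entry, so it never competes for default traffic. A sketch, with the addresses taken from the question:
      sudo route -n add -host 192.168.2.1  -interface en0
      sudo route -n add -host 192.168.2.20 -interface en0
      sudo route -n add -host 192.168.2.21 -interface en0
      sudo route -n add -host 192.168.2.30 -interface en0
      # or cover the whole lab subnet in one go:
      sudo route -n add -net 192.168.2.0 -netmask 255.255.255.0 -interface en0
    Routes added this way do not survive a reboot, which suits the "temporary route" requirement above.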

    Read the article

  • Cannot change power button or lid close action

    - by Mark Henderson
    I have a Samsung 900X laptop and I want to change it so that when I close the lid, nothing happens (I often close the lid to carry it somewhere 10 seconds away, and by putting it into suspend it cancels any active downloads etc.). Easy, right? Go to Power Options and change it there, just like on every other laptop in the world. Not so fast: Say what?! That message only shows up for the nodes for Lid Close Action, Power Button and Sleep Button. I can change every other setting except for those three. I'm definitely an Administrator on the computer, and I've googled the error and found dozens of hits on other crappy forums, but of course nothing on those worked (otherwise, I wouldn't be here). And as usual the "Why can't..." hyperlink gives no useful information whatsoever (just a generic Help document). So - how can I change what closing the lid does? I will modify the registry directly if I have to.
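    When the GUI refuses, the same settings can often still be set from an elevated command prompt with powercfg, which writes the value underneath whatever the vendor's power software has hidden. A sketch using the built-in aliases (0 means "do nothing"; the second line covers battery power), with the usual caveat that OEM utilities sometimes overwrite these again:
      powercfg /setacvalueindex SCHEME_CURRENT SUB_BUTTONS LIDACTION 0
      powercfg /setdcvalueindex SCHEME_CURRENT SUB_BUTTONS LIDACTION 0
      powercfg /setactive SCHEME_CURRENT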

    Read the article

  • Over gigabit connection, Teracopy does 31MB/s, but Windows 8 does it at ~109MB per second?

    - by Gaurang
    I got my brain-melting first taste of gigabit networking today, between my 2011 Mac Mini and Windows 8 Pro desktop, connected via Cat 5e to a Linksys WRT320N (sporting DD-WRT). After making sure that the line speed on both systems showed 1Gbps, I proceeded to copy a 2.4GB MP4 from the Mini to the Win 8 desktop (SMB sharing). Although satisfied with the 30-34 MB/s that Teracopy was showing (that was a proper step up for me from 10 MB/s), I was still curious about this massive difference between the advertised and real-world speed. 2 hours of Google had me believing that there were other factors that resulted in less speed, SMB being one. So just for the sake of doing it, I iperf'd both systems and guess what that showed - around 875 Mbps on both systems! I then stumbled upon this little piece of info, after which I turned off Teracopy and copied the same file through Windows 8's regular copier. 109 MB/s. Molten brains :) What exactly is causing this? And can I enable such speeds via Teracopy? I really dig the extra features that Teracopy has, and will surely miss them now :D
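    For what it's worth, the two measurements agree with each other once the units line up, since iperf reports bits and the copy dialog reports bytes:
      875 Mbit/s / 8  ≈ 109 MB/s
    So the plain Windows 8 copy is essentially saturating the link, and the 31 MB/s figure points at something on the Teracopy side (its buffering or verification options) rather than at the cabling or the router.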

    Read the article

  • Need a helpful/managed VPS to help transition from shared hosting

    - by Xeoncross
    I am looking for a VPS that can help me transition out of a shared hosting environment. My main OS is Ubuntu, although I am still new to the Linux world. I spend most of my day programming PHP applications using a git-over-SSH workflow. I want PHP, SSH, git, MySQL/PostgreSQL and Apache to work well. Someday, after I figure out server management, I'll move on to http://nginx.org/ or something. I don't really understand 1) Linux firewalls, 2) mail servers, or 3) a proper daily package/lib update flow. I need a host that can help with these so I don't get hit with a security hole. (I monitor Apache access logs, so I think I can take it from there.) I want to know if there is a sub-$50/m VPS that can help me learn (or do for me) these three main things I need to run a server. I can't leave my shared hosts (the plural shows my need!) until I am sure my sites will be safe despite my incompetence. To clarify again, I need the most helpful, supportive, walk-me-through, check-up-on-me, be-there-when-I-need-you VPS I can get. Learning isn't a problem when there is someone to turn to. ;)
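    Whichever provider it ends up being, items 1 and 3 on that list have fairly gentle Ubuntu-native answers worth knowing even on a managed box. A sketch, assuming Apache stays the web server:
      # 1) firewall: deny inbound by default, allow SSH and web
      sudo ufw default deny incoming
      sudo ufw allow OpenSSH
      sudo ufw allow "Apache Full"
      sudo ufw enable
      # 3) security updates applied automatically
      sudo apt-get install unattended-upgrades
      sudo dpkg-reconfigure -plow unattended-upgrades
    Item 2 (mail) is the one most worth delegating; a common pattern for a single box that only needs to send is a null-client MTA relaying through an external provider, which keeps the attack surface small.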

    Read the article

  • MSMQ on Win2008 R2 won't receive messages from older clients

    - by Graffen
    I'm battling a really weird problem here. I have a Windows 2008 R2 server with Message Queuing installed. Another machine, running Windows 2003, hosts a service that is set up to send messages to a public queue on the 2008 server. However, messages never show up on the server. I've written a small console app that just sends a "Hello World" message to a test queue on the 2008 machine. Running this app on XP or 2003 results in absolutely nothing. However, when I try running the app on my Windows 7 machine, a message is delivered just fine. I've been through all sorts of security settings, disabled firewalls on all machines, etc. The event log shows nothing of interest, and no exceptions are being thrown on the clients. Running a packet sniffer (Wireshark) on the server reveals only a little. When trying to send a message from XP or 2003 I only see an ICMP "Port Unreachable" error on port 3527 (which I gather is an MQPing packet?). After that, silence. Wireshark shows a nice little stream of packets when I try from my Win7 client (as expected - messages get delivered just fine from Win7). I've enabled MSMQ End2End logging on the server, but only entries from the messages sent from my Win7 machine are appearing in the log. So somehow it seems that messages are being dropped silently somewhere along the route from XP or 2003 to my 2008 server. Does anyone have any clues as to what might be causing this mysterious behaviour? -- Jesper

    Read the article

  • vSphere education - What are the downsides of configuring virtual machines with *too* much RAM?

    - by ewwhite
    VMware memory management seems to be a tricky balancing act. With cluster RAM, Resource Pools, VMware's management techniques (TPS, ballooning, host swapping), in-guest RAM utilization, swapping, reservations, shares and limits, there are a lot of variables. I'm in a situation where clients are using dedicated vSphere cluster resources. However, they are configuring the virtual machines as though they were on physical hardware. In turn, this means a standard VM build may have 4 vCPUs and 16GB or more of RAM. I come from the school of starting small (1 vCPU, minimal RAM), checking real-world use and adjusting up as necessary. Some examples from a "problem" cluster:
    Resource pool summary - looks almost 4:1 overcommitted. Note the high amount of ballooned RAM.
    Resource allocation - the Worst Case Allocation column shows that these VMs would have access to less than 50% of their configured RAM under constrained conditions.
    The real-time memory utilization graph of the top VM in the listing above - 4 vCPU and 64GB RAM allocated, averaging under 9GB of use.
    Summary of the same VM.
    What are the downsides of overcommitting and overconfiguring resources (specifically RAM) in vSphere environments? Assuming that the VMs can run in less RAM, is it fair to say that there's overhead to configuring virtual machines with more RAM than they need? What is the counter-argument to "if a VM has 16GB of RAM allocated, but only uses 4GB, what's the problem?"? E.g. do customers need to be educated? What specific metric should be used to meter RAM usage? Tracking the peaks of "Active" versus time?

    Read the article

  • Fake demonstration software for the command line

    - by Joe
    I'm looking for some software that would be useful for giving demonstrations. I regularly have to show the effects of scripts etc. to classes while talking about their effects, and equally regularly I have finger trouble and have to rewrite various commands - wasting class time and general energy. I'd like to be able to record a sequence of commands in advance, and then play them back at the speed of my choosing. So I might have a file that contains the commands:
    echo "hello world!"
    ls
    ls -l
    ls -l | sort
    I'd like to be able to play these commands back by typing similar ones in. So I'd have a blinking command prompt, and if I typed 'echo "hxxx' the command prompt would read home$ echo "hell and if I typed any other letters the terminal would fill up with the remainder of the command until I press Enter, when it executes the command. The point is that even if I screw up the command when typing it, the command that I'd prepared in advance would be executed. My question is - does similar software exist for giving demonstrations? Or even, is this an easy thing to script up...? EDIT - two quick things: first of all I'm on OS X - but it would be nice to get a general solution for other people who arrive here from Google. And second, a lot of the comments/answers are concentrating on, in effect, making it fast and easy to enter long commands by means of hotkeys and the like. Actually I'd like it to at least look like I'm typing live - that's why I put in the bit about the one-to-one keymapping, but I don't think I explained that quite as well as I could have...
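    There are ready-made tools in this space (doitlive is one that behaves roughly this way), but the core behaviour is small enough to sketch in shell. In the sketch below, demo.txt (name purely illustrative) holds one prepared command per line; any keypress reveals the next character of the current command, and Enter prints the rest and actually runs it:
      #!/usr/bin/env bash
      # fake-typing player - a sketch, not a polished tool
      while IFS= read -r cmd; do
        printf 'home$ '                          # fake prompt
        shown=0
        while IFS= read -rsn1 key < /dev/tty; do
          if [[ -z $key ]]; then                 # Enter: print the rest, run it
            printf '%s\n' "${cmd:shown}"
            eval "$cmd" < /dev/tty
            break
          fi
          if (( shown < ${#cmd} )); then         # any other key reveals one char
            printf '%s' "${cmd:shown:1}"
            shown=$((shown + 1))
          fi
        done
      done < demo.txt
    It deliberately ignores which keys are actually pressed, which is exactly the one-to-one illusion described above; quoting and pipes go through eval, so the prepared commands are best kept simple.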

    Read the article

  • Understanding ulimit -u

    - by tripleee
    I'd like to understand what's going on here.
    linvx$ ( ulimit -u 123; /bin/echo nst )
    nst
    linvx$ ( ulimit -u 122; /bin/echo nst )
    -bash: fork: Resource temporarily unavailable
    Terminated
    linvx$ ( ulimit -u 123; /bin/echo one; /bin/echo two; /bin/echo three )
    one
    two
    three
    linvx$ ( ulimit -u 123; /bin/echo one & /bin/echo two & /bin/echo three )
    -bash: fork: Resource temporarily unavailable
    Terminated
    one
    I speculate that the first 122 processes are consumed by Bash itself, and that the remaining ulimit governs how many concurrent processes I am allowed to have. The documentation is not very clear on this. Am I missing something? More importantly, for a real-world deployment, how can I know what sort of ulimit is realistic? It's a long-running daemon which spawns worker threads on demand, and reaps them when the load decreases. I've had it spin the server to its death a few times. The most important limit is probably memory, which I have now limited to 200M per process, but I'd like to figure out how I can enforce a limit on the number of children (the program does allow me to configure a maximum, but how do I know there are no bugs in that part of the code?)
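    One way to test that speculation: ulimit -u (RLIMIT_NPROC on Linux) is checked against everything the same user already owns, threads included, not just children of the current shell, so the headroom left for new forks is the limit minus that existing count. A quick sketch of the count:
      # processes currently owned by this user (threads also count toward the
      # nproc limit on Linux, so treat this as a lower bound)
      pgrep -cu "$USER"
    A realistic ulimit -u for the daemon is then roughly that baseline plus the configured worker maximum plus some margin; setting it barely above the baseline reproduces exactly the fork failures in the transcript above.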

    Read the article
