Search Results

Search found 2821 results on 113 pages for 'curious jo'.


  • WAMP starts Apache or MySQL, but not both?

    - by ladenedge
    When I install WAMP, the Apache and MySQL services are set to run as the LocalService user and all works well. However, because I need to access remote UNC paths in my PHP code, I need to run at least Apache as a user that exists on both the local host and the remote host - I'll call him WampUser. When both Apache and MySQL are set to start as WampUser, I cannot start both at the same time. If both are stopped, I can start either successfully; when I attempt to start the other, I get Error 1053: "The service did not respond to the start or control request in a timely fashion." This error appears immediately - there is no timeout. When at least one of the services is set to start as LocalService, both start fine. I can, therefore, solve my problem by setting Apache to WampUser and MySQL to LocalService, but I'm more interested in why this is happening in the first place. I'm especially curious because this situation does not occur on other servers - something I've done to this server has made these two services mutually exclusive when running as the same user. Some miscellaneous data points:

      - I am using Windows Server 2003.
      - I've given WampUser recursive Full Control over the C:\wamp directory.
      - Nothing appears in the event log after the service fails.
      - No entries appear in either the MySQL log or the Apache error log.
      - Neither application appears in the process list when the appropriate service is stopped.

    Any ideas?

    Read the article

  • Determine the time difference between two Linux servers

    - by Paul
    I am troubleshooting a network latency issue. It is probably a NIC or cabling issue, but while I was going through the process of figuring it out, I was looking at the timings of a ping packet leaving one network card and arriving at another server. Both are Linux boxes. So I have tcpdump running on both, I issue a ping from one to the other and back again, and looking at the timing differences might shed light on where the latency is coming from. It is an academic exercise now, as I need to eliminate some more fundamental causes first, but I was curious how this could be achieved. Given that ntpd is installed and running on both servers, how can I confirm the current time discrepancy between the two servers, to whatever level of accuracy is possible - given that we are talking about latency on a local LAN, which is ideally a millisecond or so? NTP itself is accurate to a couple of ms under good conditions, and as both servers are in the same environment, they should (presumably) achieve a similar level of accuracy, and so should have a time discrepancy between them of only a few ms - but how can I check this?
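
    One way to check: query each daemon's idea of the time from a third machine and compare the reported offsets. A minimal sketch, assuming Python with the third-party ntplib package installed and NTP reachable on both hosts (the hostnames are placeholders):

        import ntplib

        # Ask each server's NTP daemon for its time and record the offset
        # relative to this machine's clock; the difference between the two
        # offsets approximates the discrepancy between the servers themselves.
        client = ntplib.NTPClient()

        offsets = {}
        for host in ("server-a.example.com", "server-b.example.com"):
            response = client.request(host, version=3)
            offsets[host] = response.offset  # seconds

        a, b = offsets.values()
        print("offsets: %+.6f s / %+.6f s, difference: %+.6f s" % (a, b, a - b))

    Running ntpq -p on each host also shows the offset each daemon currently believes it has against its peers, which gives a sanity check on the same number.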

    Read the article

  • What do the different patterns in the Windows 8 file copy dialog mean?

    - by MainMa
    When copying or extracting files, Windows 8 shows a chart of the operation's speed. I noticed several recurring patterns:

      1. Randomness/nice mountains.
      2. High speed at the beginning, then low speed for most of the operation.
      3. Low speed at the beginning, then high speed for most of the operation (similar to the previous pattern, but inverted).
      4. Mostly constant speed (like the previous pattern, but without the fast start).

    I'm curious what each of those patterns means. Do some of them indicate a problem with hard disk performance? And why is nearly constant speed so rare, even when copying a single large file to and from a spinning drive, or when copying a single large file or a bunch of small files to and from an SSD?

    Read the article

  • How do I determine the cause of a sustained spike in MySQL queries/activity?

    - by mattmcmanus
    So this is more of an "I'm trying to learn how this works" question rather than a "there is a serious problem I can't figure out!" question. I'm setting up a VPS and have been tweaking and changing things here and there. I installed munin about two days ago, and yesterday I noticed a significant increase in MySQL activity. So now my curiosity is going crazy. How do I set up and access MySQL's query log? I have about 5 databases on the server, so I want to see which one is getting all the action. Is there anything else I can do to keep a better eye on what's going on? Here are the graphs: as you can tell, it's not that much activity at all, but I'm just curious about the change. The sites on the server right now do not get a lot of traffic. It's running a couple of Drupal sites, only one of which is live. The live one hasn't had a spike in traffic, and its last spike was 250 visitors, so it's barely a spike at all.
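
    On the query-log question: on MySQL 5.1 and later, the general query log can be switched on at runtime, and it records every statement along with the connection it came from, which shows which database is getting the action. A sketch, assuming shell access to the mysql client and a privileged account (the log path is a placeholder):

        import subprocess

        # Enable the general query log at runtime (MySQL 5.1+). Every
        # statement is written to the file below; turn it back off with
        # SET GLOBAL general_log = 'OFF' -- it grows quickly.
        sql = ("SET GLOBAL general_log_file = '/var/log/mysql/general.log'; "
               "SET GLOBAL general_log = 'ON';")
        subprocess.check_call(["mysql", "-u", "root", "-p", "-e", sql])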

    Read the article

  • Cheapest iSCSI SAN for Windows 2008/SQL Server clustering?

    - by MichaelGG
    Are there any production-quality iSCSI SANs suitable for use with Windows Server 2008/SQL Server failover clustering? So far, I've only seen Dell's MD3000i and HP's MSA 2000 (2012i), which are both around $6K with a minimal disk configuration. Buffalo (yeah, I know) has a $1,000 device with iSCSI support, but they say it will not work for 2008 failover clustering. I'm interested in something suitable for failover in a production environment, but with very low IO requirements (clustering, say, a 30GB DB). As for software: on Windows, StarWind seems to have a great solution, but it's actually more money than buying a hardware SAN (as I understand it, only the enterprise edition supports replicas, and that's $3,000 a license). I was thinking I could use Linux - something like DRBD plus an iSCSI target would be fine - but I haven't seen any free or low-cost iSCSI software that supports the SCSI-3 persistent reservations Windows 2008 needs for failover clustering. I know $6K isn't much at all; I'm just curious to see if there are practical cheaper solutions out there. And finally, yes, the software is expensive, but many small businesses get MS BizSpark, so the Windows 2008 Enterprise / SQL 2008 licenses are completely free.

    Read the article

  • ssh timeout issue connecting to an EC2 instance on OS X

    - by mamusr
    I am new to AWS and not a networking expert, but curious to know more about it. I created a VPC with a public subnet only. Then I created an EC2 instance from an Ubuntu 14.04 64-bit PV AMI (ami-e84d8480), as well as generating the key pair needed to connect to it through ssh. I followed Amazon's instructions to connect to an EC2 instance via ssh, which did not work. Here is my attempted input and debug log, running on OS X 10.9.4:

        user$ ssh -vvv -i key.pem [email protected]
        OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
        debug1: Reading configuration data /etc/ssh_config
        debug1: /etc/ssh_config line 20: Applying options for *
        debug1: /etc/ssh_config line 102: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to xxx.xxx.xxx.xxx [xxx.xxx.xxx.xxx] port 22.
        debug1: connect to address xxx.xxx.xxx.xxx port 22: Operation timed out
        ssh: connect to host xxx.xxx.xxx.xxx port 22: Operation timed out

    To attempt to resolve the issue:

      - I enabled the SSH port.
      - I tried usernames other than ubuntu, like ec2-user and root.
      - I initially set an inbound ssh rule in the security group to allow connections only from my IP address. When that did not work, I changed it to allow any IP to connect.

    But those actions did not fix the problem. Here are my guesses as to what I am missing:

      - My /etc/ssh_config file may be preventing the connection from taking place.
      - I may have missed an important networking detail when setting up the VPC.
      - I do not have a public IP address specified for the instance; I am connecting through the private IP address.

    My question for the community: am I going about it the wrong way by connecting to the instance through the private IP address? If so, do I need to specify a public IP address for it to connect, or is there some other method?
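
    On the last guess: a private VPC address is not routable from the public internet at all, which is consistent with the timeout; you would need a public or Elastic IP associated with the instance (plus a route to an Internet gateway) to reach it from outside. A quick probe, sketched in Python with a placeholder address, distinguishes a routing problem from a key/username problem:

        import socket

        # If TCP/22 cannot even be opened (as in the ssh -vvv output above),
        # the packets never reach the instance: a routing/public-IP/security
        # group issue, not an authentication one.
        host = "203.0.113.10"  # placeholder for the instance's public IP
        try:
            socket.create_connection((host, 22), timeout=5).close()
            print("port 22 reachable - look at keys/username instead")
        except OSError as exc:
            print("port 22 unreachable: %s" % exc)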

    Read the article

  • Exchange Out of Office Reply reset

    - by Richard West
    I have a question. We have an employee who is going to be on maternity leave for the next 8 weeks. As I understand it, Outlook/Exchange is designed to send one out-of-office message to each person that emails my user, for the duration of the out-of-office reply. Meaning that if someone emails my user each week, they are only going to receive one out-of-office message - the first time they send her an e-mail. My concern is that over time people might forget that she is out of the office, since they receive no reply to their subsequent emails. Does anyone know if Exchange ever resets the out-of-office notification after a certain amount of time, like a week or so? I'm not looking for every message to get an out-of-office reply, but more than one over the course of 8 weeks would seem appropriate. I know I can turn the Out of Office Assistant off and back on to "reset" the replies, but I'm curious whether Exchange performs a reset automatically after a certain period of time.

    Read the article

  • Odd behavior of setting REMOTE_ADDR between Apache, Nginx, and AWS ELB

    - by Chris Drumgoole
    I have encountered a strange issue and am curious whether others have encountered it as well, and if there is anything that can be done about it. We have a setup with multiple AWS EC2 Linux machines running Nginx sitting behind an ELB. Let's refer to these as my production machines (because they are!). I also have a completely separate Rackspace cloud machine running Apache; let's call this the test server. Now, there's an ISP here in Singapore that seems to be funneling traffic through a transparent proxy or something, and when you do an IP check, the IP often changes. In fact, I noticed that on http://www.whatismyip.com the IP seems to be stable (doesn't change) across refreshes, but on http://www.whatismyipaddress.com the IP changes on refreshing! (So my ISP is doing weird stuff.) Now, back to my setup, I noticed a couple of things:

      - Checking the REMOTE_ADDR variable from PHP when connecting to a single Nginx production machine (bypassing the load balancer), it is set to the stable IP that doesn't change.
      - Checking the REMOTE_ADDR variable from PHP when connecting to the test Apache server, it is set to the IP that does change on refreshes.
      - Checking the headers when connecting to the Nginx production machines through the ELB, the ELB sets HTTP_X_FORWARDED_FOR to the stable IP.

    Has anyone experienced this odd behavior? Is there anything I can do? And which IP should I "trust" - the one Apache gives, or the one the ELB and Nginx give? Thanks! Chris
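
    For what it's worth, REMOTE_ADDR is by design the address of the last hop, so behind a load balancer the usual approach is to trust X-Forwarded-For only when the request actually arrived from one of your own proxies. A minimal sketch (Python/WSGI style for illustration; the trusted-proxy list is an assumption for your network):

        # Derive the client IP the way proxy-aware apps usually do.
        TRUSTED_PROXIES = {"10.0.0.5"}  # placeholder: your ELB/Nginx hop(s)

        def client_ip(environ):
            remote = environ.get("REMOTE_ADDR", "")
            forwarded = environ.get("HTTP_X_FORWARDED_FOR", "")
            if remote in TRUSTED_PROXIES and forwarded:
                # the left-most entry is the client as seen by the first proxy
                return forwarded.split(",")[0].strip()
            return remote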

    Read the article

  • Load Testing a Security/Gateway Appliance

    - by Joel Coel
    In a couple of weeks I will be load testing a security/gateway appliance. We're a small residential college, and that "residential" means the traffic moving through the appliance is a bit like the Wild West. We have everything from Facebook to World of Warcraft, BitTorrent to Netflix, Halo to YouTube... basically anything you might find in the home of a high-school or college-aged person. Somewhere in there some real academic work gets done as well. We rely on our current appliance for traffic shaping, antivirus, malware filtering, intrusion detection on our servers, logging and abuse reporting, and even some content filtering. All this puts a decent load on it when we have students around, and I'm concerned about the ability of the new candidate to keep up. On paper it should handle things, but I'm worried; prior experience is that vendors greatly over-report what an appliance can handle. The product also includes a licensed session limit, and I'm worried that just a few misbehaving students could unwittingly bring us to that limit and cause service disruptions. I need to know this will work for our campus before committing to it, because going a performance level higher in that product takes the pricing way out of line with what we expect and have done in the past. What I need is a good way to load test this thing. My problem is that our current level of summer traffic is less than one percent of what it will be when students come back just six weeks from now. Any ideas on how to really stress this thing, in a way that will give me some clear idea of how it will scale for our campus? For the curious, I'm looking at a WatchGuard 515, but it could be anything; if I were evaluating a competitor, I'd ask the same question.
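
    For raw HTTP throughput at least, a crude load generator is easy to improvise. A sketch in Python (the URL list is a placeholder, and this exercises only plain HTTP - not the P2P/streaming/game mix a residential network actually sees):

        import concurrent.futures
        import urllib.request

        # Push many concurrent HTTP fetches through the gateway and count
        # how many complete; raise WORKERS until the appliance (or its
        # licensed session limit) becomes the bottleneck.
        URLS = ["http://example.com/"] * 1000  # placeholder target list
        WORKERS = 200

        def fetch(url):
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    return resp.status
            except OSError:
                return None

        with concurrent.futures.ThreadPoolExecutor(max_workers=WORKERS) as pool:
            results = list(pool.map(fetch, URLS))

        print("completed:", sum(r is not None for r in results), "of", len(URLS))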

    Read the article

  • Web API, JavaScript, Chrome & Cross-Origin Resource Sharing

    - by Brian Lanham
    The team spent much of the week working through issues related to Chrome, running on Windows 8, consuming cross-origin resources from Web API. We thought it was resolved on day 2, but it resurfaced the next day. We definitely resolved it today, though. I don't claim to fully understand the situation, but I am going to explain what I know in an effort to help you avoid and/or resolve a similar issue.

    References

    We consulted many sources during our trial-and-error troubleshooting. These are the links we referenced, in order of applicability to the solution:

      - Zoiner Tejada (JavaScript and other material) -> http://www.devproconnections.com/content1/topic/microsoft-azure-cors-141869/catpath/windows-azure-platform2/page/3
      - WebDAV (where I learned about "Accept") -> http://www-jo.se/f.pfleger/cors-and-iis?
      - IT Hit (tells about NOT using '*') -> http://www.webdavsystem.com/ajax/programming/cross_origin_requests
      - Carlos Figueira (sample back-end code, newer) -> http://code.msdn.microsoft.com/windowsdesktop/Implementing-CORS-support-a677ab5d (older version) -> http://code.msdn.microsoft.com/CORS-support-in-ASPNET-Web-01e9980a

    Background

    As a measure of protection, Web designers (W3C) and implementers (Google, Microsoft, Mozilla) made it so that a request, especially a JSON request (but really any URL), sent from one domain to another will only work if the requestee "knows" about the requester and allows requests from it. So, for example, if you write an ASP.NET MVC Web API service and try to consume it from multiple apps, the browsers used may (will?) indicate that you are not allowed, by showing an "Access-Control-Allow-Origin" error indicating the requester is not allowed to make requests.

    Internet Explorer (big surprise) is the odd-hair-colored step-child in this mix. It seems that, running locally at least, IE allows this for development purposes. Chrome and Firefox do not. In fact, Chrome is quite restrictive: in our tests IE shows data (a tabular view with one row for each day of a week) while Chrome does not (trust me, neither does Firefox), and the Chrome developer console shows an XmlHttpRequest (XHR) error.

    [Screen captures from IE (left) and Chrome (right): Chrome does not display the data, and its console shows the XHR error.]

    Why does this happen? The Web browser submits these requests and processes the responses, and each browser is different. Okay, so IE is probably the only one that's truly different. However, Chrome has a specific process of performing a "pre-flight" check to make sure the service can respond to an "Access-Control-Allow-Origin", or Cross-Origin Resource Sharing (CORS), request. So basically, the sequence is, if I understand correctly:

      1. Page loads
      2. JavaScript request is processed by the browser
      3. Browser prepares to submit the request
      4. [Chrome] Browser submits a pre-flight request
      5. Server responds with HTTP 200
      6. Browser submits the request
      7. Server responds with data
      8. Page shows the data

    This situation occurs for both GET and POST methods. Typically, GET methods are called with query string parameters, so there is no data posted; the requesting domain needs to be permitted to request data, but generally nothing more is required. POSTs, on the other hand, send form data, so more configuration is required (you'll see the configuration below). AJAX requests are not friendly with this (POSTs) either, because they don't post in a form.

    How to fix it.
    The team went through many iterations of self-hair removal, and we think we finally have a working solution. The trial-and-error approach eventually worked, and we referenced many sources for the information (indicated above). There are basically three (3) tasks needed to make this work.

    Assumptions: you are using Visual Studio, Web API, and JavaScript, have cross-origin resources to share, and are testing in several browsers.

    1. Configure the client

    Joel Cochran centralized our "cors-oriented" JavaScript (from here). There are two calls, one for POST and one for GET.

    The POST CORS JavaScript function (credit to Zoiner Tejada):

        function (url, data, callback) {
            console.log(data);
            $.support.cors = true;
            var jqxhr = $.post(url, data, callback, "json")
                .error(function (jqXhHR, status, errorThrown) {
                    if ($.browser.msie && window.XDomainRequest) {
                        var xdr = new XDomainRequest();
                        xdr.open("post", url);
                        xdr.onload = function () {
                            if (callback) {
                                callback(JSON.parse(this.responseText), 'success');
                            }
                        };
                        xdr.send(data);
                    } else {
                        console.log(">" + jqXhHR.status);
                        alert("corsAjax.post error: " + status + ", " + errorThrown);
                    }
                });
        };

    The GET CORS JavaScript function (credit to Zoiner Tejada):

        function (url, callback) {
            $.support.cors = true;
            var jqxhr = $.get(url, null, callback, "json")
                .error(function (jqXhHR, status, errorThrown) {
                    if ($.browser.msie && window.XDomainRequest) {
                        var xdr = new XDomainRequest();
                        xdr.open("get", url);
                        xdr.onload = function () {
                            if (callback) {
                                callback(JSON.parse(this.responseText), 'success');
                            }
                        };
                        xdr.send();
                    } else {
                        alert("CORS is not supported in this browser or from this origin.");
                    }
                });
        };

    Now you need to call these functions to get and post your data (instead of, say, using $.ajax). Here is a GET example:

        corsAjax.get(url, function (data) {
            if (data !== null && data.length !== undefined) {
                // do something with data
            }
        });

    And here is a POST example:

        corsAjax.post(url, item);

    Simple... except you're not done yet.
[HttpHeader("Access-Control-Allow-Origin", "http://localhost:51234")] [HttpHeader("Access-Control-Allow-Credentials", "true")] [HttpHeader("Access-Control-Allow-Methods", "ACCEPT, PROPFIND, PROPPATCH, COPY, MOVE, DELETE, MKCOL, LOCK, UNLOCK, PUT, GETLIB, VERSION-CONTROL, CHECKIN, CHECKOUT, UNCHECKOUT, REPORT, UPDATE, CANCELUPLOAD, HEAD, OPTIONS, GET, POST")] [HttpHeader("Access-Control-Allow-Headers", "Accept, Overwrite, Destination, Content-Type, Depth, User-Agent, X-File-Size, X-Requested-With, If-Modified-Since, X-File-Name, Cache-Control")] [HttpHeader("Access-Control-Max-Age", "3600")] public abstract class BaseApiController : ApiController {     [HttpGet]     [HttpOptions]     public IEnumerable<foo> GetFooItems(int id)     {         return foo.AsEnumerable();     }     [HttpPost]     [HttpOptions]     public void UpdateFooItem(FooItem fooItem)     {         // NOTE: The fooItem object may or may not         // (probably NOT) be set with actual data.         // If not, you need to extract the data from         // the posted form manually.         if (fooItem.Id == 0) // However you check for default...         {             // We use NewtonSoft.Json.             string jsonString = context.Request.Form.GetValues(0)[0].ToString();             Newtonsoft.Json.JsonSerializer js = new Newtonsoft.Json.JsonSerializer();             fooItem = js.Deserialize<FooItem>(new Newtonsoft.Json.JsonTextReader(new System.IO.StringReader(jsonString)));         }         // Update the set fooItem object.     } } Please note a few specific additions here: * The header attributes at the class level are required.  Note all of those methods and headers need to be specified but we find it works this way so we aren’t touching it. * Web API will actually deserialize the posted data into the object parameter of the called method on occasion but so far we don’t know why it does and doesn’t. * [HttpOptions] is, again, required for the pre-flight check. * The “Access-Control-Allow-Origin” response header should NOT NOT NOT contain an ‘*’. 3. Headers and Methods and Such We had most of this code in place but found that Chrome and Firefox still did not render the data.  Interestingly enough, Fiddler showed that the GET calls succeeded and the JSON data is returned properly.  We learned that among the headers set at the class level, we needed to add “ACCEPT”.  Note that I accidentally added it to methods and to headers.  Adding it to methods worked but I don’t know why.  We added it to headers also for good measure. [HttpHeader("Access-Control-Allow-Methods", "ACCEPT, PROPFIND, PROPPA... [HttpHeader("Access-Control-Allow-Headers", "Accept, Overwrite, Destin... Next Steps That should do it.  If it doesn’t let us know.  What to do next?  * Don’t hardcode the allowed domains.  Note that port numbers and other domain name specifics will cause problems and must be specified.  If this changes do you really want to deploy updated software?  Consider Miguel Figueira’s approach in the following link to writing a custom HttpHeaderAttribute class that allows you to specify the domain names and then you can do it dynamically.  There are, of course, other ways to do it dynamically but this is a clean approach. http://code.msdn.microsoft.com/windowsdesktop/Implementing-CORS-support-a677ab5d

    Read the article

  • Slow Citrix connection related to mapped network drives

    - by George
    I have this weird issue with Citrix being slow, and maybe users are just being a little dramatic, but I am curious as to why it happens. Let me give you a little background. Citrix is running off of a Windows 2003 server; the TS profiles and the file server were located on the same server until recently, when we moved our file server over to a new server with tons of space. So now we have Citrix on one server, TS profiles on another, and the file server on a third. We use logon scripts to map home drives, a shared drive, and so on. Up until we made the file server move, the logon process took several seconds, and most users couldn't even notice the logon script being executed as they logged on. Now it takes upwards of several minutes, and users can see the logon script being executed at a slow pace, one line at a time. The only new variable in this whole scenario is the new file server. All the servers are physically located in the same location and on the same subnet. So, I guess my question is: can anyone explain the sudden sluggishness, and what tools can I use to troubleshoot the issue? Thanks!

    Read the article

  • Running Upstart user jobs on startup

    - by dgel
    I am running Ubuntu Server 11.04. I have created an Upstart user job as described here. I have the following file at /home/myuser/.init/sensors.conf:

        start on started mysql
        stop on stopping mysql
        chdir /home/myuser/mydir/project
        exec /home/myuser/mydir/env/bin/python /home/myuser/mydir/project/manage.py sensors
        respawn
        respawn limit 10 90

    As myuser I can start, stop, and reload the job fine - it works perfectly:

        $ start sensors
        sensors start/running, process 1332
        $ stop sensors
        sensors stop/waiting

    The problem is that the job is not starting automatically at boot when mysql starts. After a fresh boot, mysql is running but my sensors job is not. What's strange is that although the job doesn't begin on bootup, if I use sudo to restart mysql it does indeed start my job. The following commands are run as myuser from a fresh startup:

        $ status sensors
        sensors stop/waiting
        $ sudo restart mysql
        mysql start/running, process 1209
        $ status sensors
        sensors start/running, process 1229

    The documentation for Upstart user jobs is pretty limited. What is the correct technique to have a user job start automatically on startup of the system? I know I could just throw something in rc.local to start it, or I could move my sensors.conf to /etc/init, but I'm curious whether there is a way to do it using just Upstart.

    Read the article

  • Midnight Commander Woes: Output while panels are active, and tab completion.

    - by Eddie Parker
    I'm trying out Midnight Commander (loved Norton back in the day!) and I'm finding two things hard to work out. I'm curious whether there are ways around them.

    1) If the panels are active and I issue a command that has a lot of output, it appears to be lost forever. That is, if the panels are visible and I cat something (e.g., cat /proc/cpuinfo), that info is gone forever once the panels get redrawn. Is there any way to see the output? I've tried Ctrl-O, but it appears to just give me a fresh subshell and wipes the previous output away. Pausing after every invocation is a bit irritating, so I'd rather not use that option.

    2) Tab completion for commands. When mc is running, it consumes the Tab character for switching panels. Is there any way to get around this so I can still complete paths and commands on the command line? I'm running Cygwin, if that matters at all.

    Read the article

  • How can I safely close this window and forever avoid seeing similar pop-ups from MacKeeper/Zeobit's malware and spyware?

    - by Michael Prescott
    The attached image shows a window that just popped up, and the only button available is the OK button. I could force-quit Safari, but I've got several sites open right now and don't want to try to find my place again. Besides, I've seen similar hacks in the past and I'd like to learn how to handle them in a way better than a brute force-quit. I've never heard of MacKeeper or Zeobit, so I opened Firefox and did a few searches while Safari is obviously still stuck, waiting for me to click the sneaky OK button in the dialog window. Anyhow, at least the first few pages of most search results contain lots of blabbering from questionable witnesses about how MacKeeper saved them from some malware or spyware. However, any company that hacks the browser to maliciously install its product is itself the criminal and is not providing a true security application. So, there are three questions here:

      1. How can I close this window?
      2. Can I do something to Safari to avoid these hacks in the future?
      3. (Just curious) Is MacKeeper or Zeobit somehow gaming the search results so that no information about their application being malware or spyware is listed? (I can't be the only person in the world offended by their tactics, even though it appears I am.)

    Read the article

  • How to configure VirtualBox server for performance at home

    - by BluJai
    I currently have two physical Ubuntu Server 10.10 servers at home: one serves as our firewall/router/DHCP/VPN server and the other performs double-duty as a file server and a VirtualBox host for an Ubuntu Desktop 10.10 machine which I use from remote connections (via NoMachine) for many thin-client purposes which are irrelevant to my question.

    What I'd like to accomplish is to consolidate the two physical machines into one which is a dedicated VirtualBox host (most likely running Ubuntu Server 10.10). Note that I'd like to stick with VirtualBox (if possible) because I'm most comfortable with it and use it on a daily basis at both home and work. Specifically, I plan to have one VM set up as the file server, another as the firewall/router/DHCP/VPN (or possibly split those a bit), and a third, which is the only current VM (already VirtualBox), which is the thin-client host.

    My question comes down to performance and/or recommendations about the file server VM. The file server hosts about 6 terabytes of data across 4 drives. What I'd like to do is use raw disk access from the VM directly to the existing disks. However, I'm curious what performance advantage/disadvantage that would have as compared to using shared folders from the VM host - basically having the whole drive served as a shared folder to the VM, which would then serve it to the other machines on the network. I don't know if virtual disks would even work in this scenario, and I certainly wouldn't want a drive to be filled with just a single 1.5 TB file (a disk image).

    To add understanding of context, but not to get additional advice: I want to virtualize these machines because I intend to regularly use the snapshot capabilities of VirtualBox for the system disks (which will be virtual drives) of the VMs, and I have some physical space/power needs to address (as I mentioned, this is at home).
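
    For reference on the raw-disk option: VirtualBox does support passing a whole physical disk through to a VM via a raw VMDK descriptor created with VBoxManage. A sketch (the device path and filename are placeholders, and the user running the VM needs read/write access to the raw device):

        import subprocess

        # Create a VMDK descriptor that passes the physical disk itself
        # through to the VM, instead of a file-backed virtual disk.
        subprocess.check_call([
            "VBoxManage", "internalcommands", "createrawvmdk",
            "-filename", "/home/vbox/disks/data-raw.vmdk",  # placeholder
            "-rawdisk", "/dev/sdb",                         # placeholder
        ])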

    Read the article

  • Portable, battery-powered, wireless access point, ethernet adapter

    - by Jed
    I am in need of an adapter that will convert an ethernet port into a wireless access point. I have found a handful of devices, but I'm unable to find one that is battery powered. Does a battery-powered wireless access point even exist? The particular scenario I will be using the device for is not your typical computer/PC scenario. For the curious, here's a bit of background on the problem I'm trying to solve: I make devices (controllers) that monitor water systems. Our controllers have a Web server that serves out web pages so that users can configure the controller's settings. Typically, the user will use a crossover cable to connect their laptop directly to the controller's ethernet port to gain access to the controller's web pages. Now that tablets (devices that don't have an ethernet port - the iPad, for example) are becoming more common, I need to find a device that will convert the controller's ethernet port into a wireless access point so that the user can connect to the controller's web pages via Wi-Fi or Bluetooth. It's worth noting that this wireless device will NOT be permanently installed on the controller; it will be a portable device that the user will use on any of his controllers when he needs to make a connection. If you know of a device that solves this scenario, please share your info.

    Read the article

  • Speeding up Outlook Express on Windows XP over satellite

    - by John
    My brother is in the field with Doctors Without Borders; I'm posting this question on his behalf. We use Outlook Express (on a PC running Windows XP) and a 9600-baud dial-up satellite phone modem to get our email directly from the server in Paris. As this is a very expensive way to communicate (our satellite bill is $50K a year, no joke), it seems like trying to streamline it is a good idea. Here's the question. When we connect, the sequence goes:

      1. Send outbox mails. This goes pretty quickly, probably 10-15 seconds for each email, up to maybe a couple of minutes for an email of 150K or so. The status bar moves pretty quickly, according to the emails sent.
      2. The system then says "Checking for new messages on (our account name)" and "Receiving list of messages from server". This takes a long time - like 10-15 minutes - and the status bar crawls along.
      3. Then it receives the messages ("Receiving messages from server"). Again, each message takes 10-15 seconds, and this part moves along reasonably fast.

    I'm curious as to what is going on in the second part. It takes forever and doesn't seem to be part of sending or receiving the messages themselves. Is there a way to speed up the process by changing a preference in how it communicates with the server or something? Does anyone have any advice for speeding up what Outlook Express is doing? Obviously his software is ancient, and adding more software is not realistic based on the connection speed. Thanks!

    Read the article

  • Are Windows Domain Service Accounts Really Necessary?

    - by Zach Bonham
    One of the biggest problems we have in automating application deployments is the idea that running IIS app pools and Windows services under domain service accounts is a 'best practice'. Unfortunately, this best practice sometimes causes deployment headaches: either we need to provision a new domain-level service account quickly, or, once we have the account, we need to manage the account credentials. I had a great conversation about not making domain-level service accounts a requirement and instead taking one of two approaches:

      1. Secure at the node level using the machine account (domain\machine$) and add the node to the appropriate Active Directory/SQL groups/roles.
      2. Create local, app-specific accounts on each machine (machine\myapp) and add that account to the appropriate Active Directory/SQL groups/roles (the password here can change per deployment; it doesn't need to be stored).

    In both cases, it seems easier to manage adding an account to the appropriate group/role, or even standing up a new local account, than it is to provision a new domain-level account and manage those credentials. This would hopefully ease the management burden on the Active Directory, SQL Server, and operations teams, as there would be no more password management. We've not actually been able to implement this in practice yet. I come from a development background, so I'm curious: in how many ways could this approach go wrong? Can we really get rid of domain-level service accounts with this direction? I'd appreciate any thoughts from anyone who has taken this path! Thanks! Zach

    Read the article

  • Bad sectors, S.M.A.R.T., SpinRite, firmware on platter and drive id questions.

    - by Christopher Galpin
    1. Is it possible for S.M.A.R.T. to give false readings (say, if I was fiddling with lots of recovery programs, transfers, and so on), or is it absolutely a read-only, direct correlation to the physical status of a drive?

    2. Does SpinRite's level 5 "recover bad sectors" operate on sectors marked bad at the factory? Are they on the same level as your generic bad sector, with SpinRite thus having full access? (I'm also curious whether S.M.A.R.T.'s bad sector count is zeroed afterward, or whether it includes factory-marked sectors.)

    3. The main firmware of some drives, like a WD Passport, is stored on the platter. How is it protected? Is it by marking those sectors as bad? If so, I wonder whether SpinRite's sector recovery could bring about firmware corruption on these drives.

    4. Is the failure of a drive to report valid identity information (hdparm -I /dev/xx) consistent with corrupted firmware, or just general disk failure? I may be misunderstanding the role of firmware here. I feel I've read that a drive's identity information is on the platter, just like the partition tables and so on. Is this true?

    (Apologies if this is more appropriate for SuperUser.)
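
    On checking the counters yourself: smartmontools reads the attribute table straight from the drive, so you can snapshot the sector-health counters before and after a SpinRite run and compare. A sketch, assuming smartctl is installed (the device path is a placeholder):

        import subprocess

        # Dump the SMART attribute table and pull out the sector-health
        # counters. These values are maintained by the drive's own firmware,
        # not by the host.
        out = subprocess.check_output(["smartctl", "-A", "/dev/sda"], text=True)
        for line in out.splitlines():
            if ("Reallocated_Sector_Ct" in line
                    or "Current_Pending_Sector" in line):
                print(line)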

    Read the article

  • ubuntu - Best way of repartitioning a (running) production server

    - by egarcia
    I've got an (externally hosted) production server running Ubuntu LTS. It serves web pages (Rails) and has an svn repository accessible through Apache, and a PostgreSQL db. I've got ssh access to the server and root privileges. Most of the "interesting" stuff is located in /var: svn repositories are inside /var/svn, web pages under /var/www, etc. Yesterday I was curious about how much disk space it had left, so I did the following:

        $ df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/md1              950M  402M  500M  45% /
        varrun                990M   64K  990M   1% /var/run
        varlock               990M     0  990M   0% /var/lock
        udev                  990M   76K  989M   1% /dev
        devshm                990M     0  990M   0% /dev/shm
        /dev/md5              4.7G  668M  4.1G  15% /usr
        /dev/md6              4.7G  1.4G  3.4G  29% /var
        /dev/md7              221G   28M  221G   1% /home
        none                  990M  4.0K  990M   1% /tmp

    My /var partition, which holds most of the interesting stuff, is only 4.7G. The /home partition, on the other hand, is 221G but mostly unused. I should have checked the disk layout before starting to install things. Ideally I need /var and /home to be "switched": /home should be the one with 4.7G, and /var the one with 221G. Is there a way to solve this without having to reinstall the whole thing?

    Read the article

  • How long will a USB key with an OS installed on it last?

    - by Xananax
    I've heard numerous times that installing an OS on a USB key is a bad thing to do, as USB keys typically support a certain number of writes before dying, and installing an OS on one will wear it out (unless it's used sporadically, for rescue purposes). Nonetheless, I am very tempted to install some flavour of Linux (Ubuntu or Arch, I haven't decided yet) on a small, transportable USB key. My problem is that although you read a lot that it's "bad", you are never told how bad. How long would it last (provided, say, a PC that is on 24/7)? A month? A year? Five years? Are there recipes to make it last longer? Is there any reason besides wear that should prevent me from attempting this? I mean, if it can be calculated, then I could theoretically shield myself by doing regular backups onto another key when the deadline gets close (for example). Notes:

      - I am not talking about using the USB key as a live CD, but about actually installing the OS on it.
      - When I say "USB key", I refer to the little keys with flash memory, not an external USB hard drive.
      - For the curious, my reason is that I work in a lot of different places, on different PCs, and I have a very customized session, with my own WM, my own key bindings, my own scripts, a selection of plugins for Firefox and Chrome, etc., and currently I am synchronizing all this through a mix of Dropbox, git, and transporting files on USB keys, and it's becoming a chore. It would be much simpler for me to just plug in the USB key, mount the hard disk of the PC I am using, and use its processing power without actually needing to install any OS on it.

    Read the article

  • Where in the stack are Software Restriction Policies implemented?

    - by Knox
    I am a big fan of Software Restriction Policies for Microsoft Windows and was recently updating our settings for this. I became curious as to where Microsoft implemented this technology in the stack. I can imagine a very naive implementation being in Windows Explorer where when you double click on an exe or other blocked file type, that Explorer would check against the policy. I call this naive because obviously this wouldn't protect against someone typing something in a CMD window. Or worse, Adobe Reader running an external application. On the other hand, I can imagine that software restriction policies could be implemented deep in the stack almost at the metal. In this case, the low level loader would load into memory the questionable file, but mark the memory in the memory manager as non-executable data. I'm pretty sure that Microsoft did not do the most naive implementation, because if I block Java using a path block, Internet Explorer will crash if it attempts to load Java. Which is what I want. But I'm not sure how deep in the stack it's implemented and any insight would be appreciated.

    Read the article

  • Name one good reason for immediately failing on an SMTP 4xx code

    - by Avery Payne
    I'm really curious about this. The question (highlighted in bold): can someone name ONE GOOD REASON to have their email server permanently set up to auto-fail/immediately fail on 4xx codes? Because frankly, it sounds like "their" setups are broken out of the box. SMTP is not instant messaging. Stop treating it like IRC or Jabber or MSN or insert-IM-technology-here. I don't know what possesses people to have the "IMMEDIATE DELIVERY OR FAIL" mentality with SMTP setups, but they need to stop doing that. It just plain breaks things. Every two or three years, I stumble into this. Someone, somewhere, has decided in their infinite wisdom that 4xx codes are immediate failures, and suddenly it's OMGWTFBBQ THE INTARNETZ ARE BORKEN, HALP SKY IS FALLING instead of "oh, it'll re-attempt delivery in about 30 minutes". It amazes me how it suddenly becomes "my" problem that a message won't go through because someone else misconfigured "their" SMTP service. IF there is a legitimate reason for having your server permanently set up in this manner, then the first good answer will get the check. IF there is no good reason (and I suspect there isn't), then the first good-sounding-if-still-logically-flawed answer will get the check.

    Read the article

  • Slow File Copy observed copying 40GB files across network to iSCSI device

    - by Rick
    Here's a curious one for the gurus.

    Setup:

      - Source machine: Windows Server 2003 R2 with a local hard drive holding a 40GB VHD file; 1 x 1Gbps network card, Cat6 cable, switch.
      - Target machine: Windows Server 2008 R2 with an iSCSI connection to an iSCSI target on a separate machine (1TB, RAID5); 1 x 1Gbps network card, Cat6 cable, connected to the same switch as the source machine, plus a second 1Gbps network card, Cat6 cable, connected via an isolated switch to the iSCSI target.
      - Switches are Netgear JGS524 models (web managed).

    Results:

      - Copying from the Win2003R2 machine to the Win2008R2 machine's local drive: 40GB in 45 minutes, 36 seconds.
      - Copying from the Win2008R2 machine's local drive to the iSCSI target: 40GB in 37 minutes, 56 seconds.
      - Copying from the Win2003R2 machine to the iSCSI target via the Win2008R2 machine: 40GB in 3 hours, 50 minutes, 24 seconds.

    All copies were done via the following command issued on the Win2008R2 box:

        XCOPY <source> <target> /J

    (/J copies using unbuffered I/O, which is recommended for very large files.)

    So, what's the bit I'm missing here? Why does a back-to-back copy take 1 hour, 23 minutes, 32 seconds in total, while a "straight through" copy takes almost three times as long? The switches show no errors, and the network hovers around the 3% utilisation mark for the duration of the copy (whereas the back-to-back copies are around the 25% utilisation mark). What have I missed?
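
    For scale, here is the effective throughput of each copy, worked out in Python (taking 40GB as 40 x 1024 MB):

        # Effective throughput of each copy.
        size_mb = 40 * 1024

        runs = [
            ("2003 -> 2008 local disk", 45 * 60 + 36),            # 45m36s
            ("2008 local -> iSCSI",     37 * 60 + 56),            # 37m56s
            ("2003 -> iSCSI via 2008",  3 * 3600 + 50 * 60 + 24), # 3h50m24s
        ]

        for label, seconds in runs:
            print("%-24s %5.1f MB/s" % (label, size_mb / seconds))

    That works out to roughly 15 and 18 MB/s for the two individual legs versus about 3 MB/s straight through, whereas even the fully serialized back-to-back copy averages about 8 MB/s overall.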

    Read the article
