Search Results

Search found 1668 results on 67 pages for 'andrew arnott'.


  • Programs keep waiting for external disk to spin up - how to ignore disk?

    - by Andrew J. Brehm
    Like many Mac users I have an external FireWire disk hooked up to my Mac for use by Time Machine. This works very well, backup-wise. The problem is that very often, when I use a Mac application and try to open a file, the file-selection dialogue hangs until the external disk has spun up. I never ever want to open a file on the external disk. Sometimes this happens even when I just want to save a file I have already saved (e.g. I type something and press meta-s). Is there anything I can do about this?
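
    One workaround worth testing (my untested sketch; "TM Backup" and disk2s1 are placeholder names): keep the backup volume unmounted except while backing up, so Open/Save dialogs never touch it. Whether Time Machine reliably remounts a local volume on schedule is an assumption to verify.

        # unmount the backup volume so file dialogs stop waiting for it to spin up
        diskutil unmount "/Volumes/TM Backup"
        # remount by device id when needed (see 'diskutil list' for the real id)
        diskutil mount disk2s1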

    Read the article

  • Firefox only shows a black window over RDP

    - by Andrew Aylett
    I have a Windows 7 desktop running Firefox 4. If I remote-desktop (RDP) from another PC into my already-running session, the Firefox window turns black. Flash applications are still visible if I'm on a tab with one, and I can apparently interact with Firefox normally; I just can't see anything in the window. I'm after an explanation, if possible, and (even better!) a workaround or fix that doesn't involve a different web browser :).
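
    A hunch worth testing rather than a confirmed fix: this class of black-window symptom is often tied to hardware acceleration, which isn't available inside an RDP session. Firefox 4 exposes the relevant switches in about:config, or via a user.js in the profile directory:

        // assumption: the GPU-accelerated rendering paths are what break under RDP
        user_pref("gfx.direct2d.disabled", true);
        user_pref("layers.acceleration.disabled", true);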

    Read the article

  • Cannot install grub to RAID1 (md0)

    - by Andrew Answer
    I have a RAID1 array on my Ubuntu 12.04 LTS server, and my /dev/sda HDD was replaced several days ago. I used these commands to do the replacement:

        # go to superuser
        sudo bash
        # see RAID state; should be "clean, degraded"
        mdadm -Q -D /dev/md0
        # remove broken disk from RAID
        mdadm /dev/md0 --fail /dev/sda1
        mdadm /dev/md0 --remove /dev/sda1
        # see partitions
        fdisk -l
        # shut down the computer
        shutdown now
        # physically replace the old disk with the new one, start the system again
        # see partitions
        fdisk -l
        # copy the partition table from sdb to sda
        sfdisk -d /dev/sdb | sfdisk /dev/sda
        # restore the RAID partition id for sda1
        sfdisk --change-id /dev/sda 1 fd
        # add sda1 to the RAID
        mdadm /dev/md0 --add /dev/sda1
        # see RAID state; should be "clean, degraded, recovering"
        mdadm -Q -D /dev/md0
        # to watch the progress
        cat /proc/mdstat

    This is my mdadm output after the sync:

        /dev/md0:
                Version : 0.90
          Creation Time : Wed Feb 17 16:18:25 2010
             Raid Level : raid1
             Array Size : 470455360 (448.66 GiB 481.75 GB)
          Used Dev Size : 470455360 (448.66 GiB 481.75 GB)
           Raid Devices : 2
          Total Devices : 2
        Preferred Minor : 0
            Persistence : Superblock is persistent
            Update Time : Thu Nov 1 15:19:31 2012
                  State : clean
         Active Devices : 2
        Working Devices : 2
         Failed Devices : 0
          Spare Devices : 0
                   UUID : 92e6ff4e:ed3ab4bf:fee5eb6c:d9b9cb11
                 Events : 0.11049560

            Number   Major   Minor   RaidDevice State
               0       8        1        0      active sync   /dev/sda1
               1       8       17        1      active sync   /dev/sdb1

    After the rebuild completed, fdisk -l says that /dev/md0 does not have a valid partition table. This is my fdisk -l output:

        Disk /dev/sda: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00057d19

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *          63   940910984   470455461   fd  Linux raid autodetect
        /dev/sda2       940910985   976768064    17928540    5  Extended
        /dev/sda5       940911048   976768064    17928508+  82  Linux swap / Solaris

        Disk /dev/sdb: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000667ca

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *          63   940910984   470455461   fd  Linux raid autodetect
        /dev/sdb2       940910985   976768064    17928540    5  Extended
        /dev/sdb5       940911048   976768064    17928508+  82  Linux swap / Solaris

        Disk /dev/md0: 481.7 GB, 481746288640 bytes
        2 heads, 4 sectors/track, 117613840 cylinders, total 940910720 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/md0 doesn't contain a valid partition table

    This is my grub install output:

        root@answe:~# grub-install /dev/sda
        /usr/sbin/grub-setup: warn: Attempting to install GRUB to a disk with multiple partition labels or both partition label and filesystem. This is not supported yet..
        /usr/sbin/grub-setup: error: embedding is not possible, but this is required for cross-disk install.
        root@answe:~# grub-install /dev/sdb
        Installation finished. No error reported.

    So:
    1) "update-grub" finds only the /dev/sda and /dev/sdb Linux installs, not /dev/md0
    2) "dpkg-reconfigure grub-pc" says "GRUB failed to install to the following devices: /dev/md0"

    I cannot boot my system except from /dev/sdb1 or /dev/sda1, and then only in DEGRADED mode... Can anybody resolve this issue? It is giving me a big headache.
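
    A possible direction (a hedged sketch, not a confirmed answer): grub-setup's "multiple partition labels or both partition label and filesystem" warning usually means stale filesystem or RAID metadata is still visible on the raw /dev/sda device, e.g. left over from the factory. One commonly suggested fix is to wipe the new disk's first sectors, redo the partition copy and re-add, then install GRUB to both RAID members so either disk can boot:

        # DANGER: zeroes the MBR and post-MBR gap of /dev/sda -- only do this on the new, empty disk
        dd if=/dev/zero of=/dev/sda bs=1M count=1
        sfdisk -d /dev/sdb | sfdisk /dev/sda
        mdadm /dev/md0 --add /dev/sda1
        # after the resync finishes, put GRUB on both disks
        grub-install /dev/sda
        grub-install /dev/sdb
        update-grub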

    Read the article

  • TCP Server Memory management: #Connections Vs. #Requests

    - by Andrew
    Given that there is no theoretical limit to the number of concurrent TCP connections a Windows 2008 server can handle, the only thing that happens is that each connection consumes memory on the server. Unfortunately, memory is not unlimited (and I want to use only physical memory). Say the server has 2 GB of memory; there are two extreme cases:

    Case 1: If we allocate a 64 KB buffer for each connection (only to receive the incoming request), then 32768 connections consume all 2 GB of memory, leaving nothing with which to queue or process incoming requests from those connections.

    Case 2: On the other hand, if a single connection (or very few) continuously keeps sending request buffers (for example, video streaming from one connection to another) and the server cannot process them in time, those buffers pile up on the server and eventually occupy most of its memory, leaving no room for new connections thereafter.

    This is the dilemma in server design that has been bugging me badly for many days. If I can decide on the maximum request-buffer size per connection and the maximum number of requests to queue per connection, then, based on available server memory, a limit on the maximum number of concurrent connections follows automatically. How do I decide on these limits to achieve the best performance and throughput? I am simply looking for perfect utilization of server resources. Are there any standard guidelines, or empirical data that someone could share?
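
    To make the sizing concrete, here is that arithmetic as a quick shell calculation; the queue depth and request size are illustrative assumptions, not recommendations:

        # 2 GiB budget; per connection: one 64 KiB receive buffer plus a queue of four 16 KiB requests
        echo $(( (2*1024*1024*1024) / (64*1024 + 4*16*1024) ))    # => 16384 concurrent connections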

    Read the article

  • Ubuntu 9.04 cannot connect to visible open wifi AP (reason 6)

    - by Andrew Bolster
    I'm travelling currently, so the last network I connected to successfully was my home WPA-PSK network. I hadn't tried anything until I got to my accommodation, which has an open network (the one I'm on now via the Win7 partition on my laptop). The network (and a similar archetypical 'linksys' open network, as well as some protected local networks) is correctly displayed in network-manager, and upon selection it happily spins around to its heart's content for a while before saying 'no chance boy'. /var/log/syslog spills out the usual combination of wpa_supplicant and kernel messages, the most interesting of which is the kernel's deauthentication "reason 6" response. Reason 6 apparently means class2FrameFromNonAuthStation: the client attempted to transfer data before it was authenticated. Anyone seen anything like this? I've already tried moving closer to the router, to no avail. I don't remember seeing this any other time I've connected to an open AP, even a distant one. (Signal strength for this AP is good; kismet says it's around -57 dBm, well above the threshold of -80 dBm, and I've tried all the suggestions from the 'Related Questions'.)
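
    A diagnostic sketch that may help narrow it down (assumes the interface is wlan0 and Ubuntu 9.04's init scripts; not from the thread): take network-manager out of the loop and associate by hand while watching the log.

        sudo /etc/init.d/network-manager stop
        sudo iwconfig wlan0 essid "linksys"
        sudo dhclient wlan0
        tail -f /var/log/syslog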

    Read the article

  • Using cURL through SSH tunnel or VPN

    - by Andrew
    Hello, I would like to set up cURL to use an SSH tunnel for certain domains. How can I accomplish that? I can also set up a VPN or SOCKS proxy or whatever, but I need to run cURL on the local machine while using the remote machine's IP for those connections.
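
    One way to do this (a sketch; the host names and port are placeholders) is to open a SOCKS5 tunnel with ssh and point curl at it per request, leaving all other traffic untouched:

        # background a dynamic (SOCKS5) forward on local port 1080
        ssh -f -N -D 1080 user@remote-host
        # route this one request through the tunnel; socks5h also resolves DNS remotely
        curl --proxy socks5h://localhost:1080 http://example.com/

    Older curl builds spell the second step as curl --socks5-hostname localhost:1080 http://example.com/ instead.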

    Read the article

  • Multiple problems: Windows 7 MBR, others

    - by Andrew
    After buying a new laptop (XPS L501X) with Windows 7 preinstalled, I decided to make it dual-boot with Windows Vista. I now know about the issue with installing previous versions of Windows, and the overwriting of the MBR. Normally this wouldn't be a problem, but Microsoft no longer ships install disks with its computers, so there is no way to simply re-install Windows 7. If anyone can help it would be much appreciated. Data preservation is not an issue, as the laptop is new.
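
    For what it's worth, the usual repair path (a sketch, assuming you can boot into some Windows 7 recovery environment, e.g. a borrowed install DVD or the vendor's recovery partition) is to rebuild the boot records from the recovery console:

        bootrec /fixmbr
        bootrec /fixboot
        bootrec /rebuildbcd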

    Read the article

  • Automatically mount a remote folder on boot

    - by Andrew
    I'm trying to mount a Windows folder on my Ubuntu machine on start-up. I've tried following one guide by modifying /etc/fstab and appending

        sshfs#my_user@remote_host:/path/to/directory <local_mount_point> fuse user 0 0

    to it, but it fails; on start-up, I get an error saying that the mounting failed, and I can press S to skip or M to recover manually. I also tried another guide's approach, appending

        /usr/bin/sshfs -o idmap=user my_user@remote_host:/path/to/directory <local_mount_point>

    to the /etc/rc.local file, but this doesn't help either; Ubuntu just boots up normally without mounting. I have Cygwin installed on my Windows machine, and I can run everything smoothly, such as sshing without passwords and mounting manually. I've also tried running the modified rc.local file by hand ($ /etc/rc.local), and it works perfectly; I just can't seem to get the folder mounted on start-up. Can someone help me?
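
    A guess at the root cause, not confirmed: at boot, both methods run as root before the network is fully up, and root doesn't have your user's passwordless SSH key. A hedged fstab sketch addressing both, assuming your sshfs and mount versions honor these options:

        sshfs#my_user@remote_host:/path/to/directory <local_mount_point> fuse _netdev,delay_connect,IdentityFile=/home/my_user/.ssh/id_rsa,allow_other 0 0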

    Read the article

  • Sync desktop Mac environment to laptop

    - by Andrew Vit
    I spend the majority of my time working at my desktop Mac, which I have configured for my web development environment. My spouse has a MacBook for casual use, and I occasionally steal it back when I need to work off-site, or when travelling. The question is how to best synchronize the two so I can switch between them more readily. I've solved a few obvious things by using online services: Email is hosted on IMAP. Working files are in Dropbox. Source code is managed in git. However, the following are things I always miss when jumping on the laptop: Installed Applications (current versions) Installed libraries & utilities (/usr/local) Apache VirtualHosts & other configurations (/etc) Disk image files for VMs My current method is to connect the MacBook via Firewire target mode and rsync the /Users/me home directory, and then cherry-pick the other items I need from Applications, /etc and /usr/local. The problem with this method is that it can be very time consuming due to things like my virtual machine image files, cached emails, etc. How can I make this faster & easier? Can you recommend a solution for configuration management (so I can repeatably install & configure the same software on both), or synchronization (so I can bring the MacBook up to date nightly, over our home network)?
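
    For the nightly-sync half, a starting-point sketch (the paths and host name are placeholders; -E preserves extended attributes on Apple's bundled rsync) that skips the heavyweight items called out above:

        rsync -avE --delete \
          --exclude 'Library/Caches/' \
          --exclude 'Documents/VMs/' \
          /Users/me/ me@macbook.local:/Users/me/

    For the repeatable-install half, a package manager along the lines of MacPorts or Homebrew gets the same software into /usr/local on both machines from one list.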

    Read the article

  • Transferring an SQL Processor License to a virtual hosted environment

    - by Andrew Shepherd
    My company is currently hosting a service in-house, and we want to move to an externally hosted environment. We would then be using a virtual server. I understand that this might be spread across multiple machines, but from my perspective as a customer, that layer is abstracted away; I shouldn't know or care about the hardware the OS is hosted on. We have a licensed edition of SQL Server 2008, with one Processor license. Would it be a violation of the licensing agreement to use this in a virtual environment? The reference guide says: "When licensed Per Processor: With Workgroup, Web, and Standard editions, for each server to which you have assigned the required number of per processor licenses, you may run, at any one time, any number of instances of the server software in physical and virtual operating system environments on the licensed server. However, the total number of physical and virtual processors used by those operating system environments cannot exceed the number of software licenses assigned to that server." For Enterprise edition there is an added option: if all physical processors in a machine have been licensed, then you may run unlimited instances of SQL Server 2008 in one physical and an unlimited number of virtual operating environments on that same machine. I'm having trouble getting my head around this. Would I theoretically have to get a license for every processor in this virtual environment (which is effectively impossible, because I have no way of knowing how many processors there actually are)? Or can I just say that it's hosted on one "virtual" server, and that's OK?

    Read the article

  • How do I set up different configurations on an Ubuntu laptop based on different physical locations?

    - by Andrew Larned
    I'm looking for a way to have a couple (or three) of configurations set up on my laptop, and to switch easily between them. To be more specific: when my laptop is at work, it's plugged into a second monitor and has a specific set of network configurations. At home, the second monitor is gone and the network configurations differ. At a public wireless access point there are other configurations to set, and so on. I know I can go into my preferences and turn the monitor on or off, and fiddle with the networking preferences, but I'm looking for a way to change a bunch of preferences all at once, and if that could happen automatically, maybe based on the wireless APs in the vicinity, even better.
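
    As an illustration of the detect-by-visible-APs idea, a minimal sketch; the SSID names, the xrandr output names, and the nmcli field syntax are all assumptions that depend on your hardware and NetworkManager version:

        #!/bin/sh
        # pick a profile based on the first SSID currently visible
        ssid=$(nmcli -t -f SSID dev wifi list | head -n1)
        case "$ssid" in
          OfficeNet) xrandr --output VGA1 --auto --right-of LVDS1 ;;  # desk: second monitor on
          HomeNet)   xrandr --output VGA1 --off ;;                    # home: laptop panel only
          *)         xrandr --output VGA1 --off ;;                    # unknown/public: safe defaults
        esac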

    Read the article

  • Is it possible to download extremely large files intelligently or in parts via SSH from Linux to Windows?

    - by Andrew
    I have a ~35 GB file on a remote Linux Ubuntu server. Locally, I am running Windows XP, so I connect to the remote Linux server using SSH (specifically, a Windows program called SSH Secure Shell Client, version 3.3.2). Although my broadband internet connection is quite good, my download of the large file often fails with a Connection Lost error message. I suspect it fails because my internet connection drops for a second or two every several hours. Since the file is so large, the download may take 4.5 to 5 hours, and the connection probably drops at some point during that long window. I think this because I have successfully downloaded files of this size using the same internet connection and the same SSH software on the same computer; in other words, sometimes I get lucky and the download finishes before the connection drops. Is there any way I can download the file intelligently, so that the operating system or software "knows" where it left off and can resume from that point if the connection breaks? Perhaps it is possible to download the file in sections? Though I do not know if I can conveniently split my file into multiple files; I think this would be very difficult, since the file is binary and not human-readable. As it is now, if the entire ~35 GB download doesn't finish before the break in the connection, I have to start over and overwrite the ~5-20 GB chunk downloaded locally so far. Do you have any advice? Thanks.
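
    The standard answer here is rsync over SSH, which resumes a partial transfer where it broke off; the only assumption is an rsync build on the Windows side (Cygwin provides one):

        # --partial keeps the partly-downloaded file; after a drop, re-run the same command
        rsync --partial --progress -e ssh user@remote-host:/path/to/bigfile .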

    Read the article

  • Chrome 33 shows ugly, blocky, pixelated fonts in Linux

    - by Andrew Mao
    After updating to the latest version of Chrome (33) on my Gentoo Linux box, certain sites such as GitHub have started rendering with ugly, pixelated, non-antialiased fonts. Small text is now basically impossible to read. Before this, GitHub had looked the same to me on Windows, Linux, and Mac computers. So what has happened here and how can it be fixed? EDIT: Appears to be fixed on the stable release of Chrome 34.

    Read the article

  • Joining new DC to AD - DNS name does not exist

    - by Andrew Connell
    I had a DC fail on me recently, and I'm trying to add a new one to my domain, although I sense I might have other issues in the domain. I'm a dev at heart and know just enough about AD to be dangerous, so I'm looking for some assistance. My working DC is RIVERCITY-DC12. I'm trying to promote RIVERCITY-DC14 as a DC in the RIVERCITY domain, but when I run DCPROMO, at the NETWORK CREDENTIALS step where I point to the name of the domain (rivercity.local), I get "An AD DC for the domain rivercity.local cannot be contacted", and in the details I see "The error was DNS name does not exist". Looking at RIVERCITY-DC12, I can see DNS is working: I've been able to query it from other machines in my domain, and no errors are reported in the DNS category within the Event Viewer. When I checked the FSMO roles, RIVERCITY-DC12 holds all of them. Not sure what I should do next or how to troubleshoot/investigate after searching around for a solution... ideas?

    Environment:
    Domain: rivercity (rivercity.local)
    Forest functional level: Windows 2000 (I'm more than happy to raise this)
    Windows Server 2008
    All servers are Windows Server 2008 R2 SP1 (fully patched)
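
    A first diagnostic step (a sketch; run from RIVERCITY-DC14, and the record name assumes the default AD DNS layout): make sure the new machine's NIC lists RIVERCITY-DC12 as its only DNS server, then check that the domain controller SRV records actually resolve:

        nslookup -type=SRV _ldap._tcp.dc._msdcs.rivercity.local
        dcdiag /test:dns /v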

    Read the article

  • How to reliably receive a message from AWS that my instance was rebooted / terminated / stopped?

    - by Andrew Smith
    I have Nagios, and I want it to stop monitoring instances when they are stopped from the console. The requirements are:

    The message passed from AWS is 100% reliable: e.g. when Nagios is down and the message cannot be delivered, it is re-delivered promptly once Nagios is back up
    The message arrives quickly
    There is no need to scan the status of all instances via the EC2 API all the time, only once in a while

    Many thanks!
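
    Lacking a push feed with those guarantees, a common fallback (a sketch; assumes a configured AWS command-line client) is a cheap periodic reconciliation that a cron job feeds into Nagios, e.g. disabling checks for anything not in the "running" state:

        # list instance ids with their current state
        aws ec2 describe-instances \
          --query 'Reservations[].Instances[].[InstanceId,State.Name]' \
          --output text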

    Read the article

  • LAMP Setup, PHP's session_start permission denied

    - by Andrew
    I'm trying to set up a development environment for a legacy system that runs CentOS 4.8, PHP 4.3.9, and MySQL 4.1.22. I'm matching OS and software versions to keep the development server as close to the production server as possible. When I fire up phpMyAdmin's setup script (version 2.11.10.1, of course) the installation errors out, and I see these errors in my error log:

        [client 172.18.141.74] PHP Warning: session_start(): open(/var/lib/php/session/sess_b5b90f86bd3dcfad315ff24cb7483a79, O_RDWR) failed: Permission denied (13) in /home/www/intranet/phpmyadmin/libraries/session.inc.php on line 87
        [client 172.18.141.74] PHP Warning: Unknown(): open(/var/lib/php/session/sess_b5b90f86bd3dcfad315ff24cb7483a79, O_RDWR) failed: Permission denied (13) in Unknown on line 0
        [client 172.18.141.74] PHP Warning: Unknown(): Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/var/lib/php/session) in Unknown on line 0

    I've done some searching on ServerFault and on teh Googles, and I see that a common reason for this error is that the session.save_path isn't writable by the www user. I also found where this path is set in /etc/php.ini; my session.save_path is set to:

        session.save_path = /var/lib/php/session

    I've since changed the owner and the group of /var/lib/php/session and still have the same error. Here's the result of ls -la for /var/lib/php:

        [root@localhost php]# ls -la
        total 24
        drwxrwxr-x   3 www  www  4096 Oct 23 20:21 .
        drwxr-xr-x  17 root root 4096 Oct 23 20:31 ..
        drwxrwx---   2 www  www  4096 Jun  1  2009 session

    ...But I'm still getting the same error. Is there another possibility for why I'm getting this error?
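
    One thing worth ruling out (a guess, not from the thread): on CentOS, Apache typically runs as user apache, not www, so www:www ownership wouldn't let it write sessions. A quick check-and-fix sketch:

        # see which user the httpd workers actually run as
        ps aux | egrep '[h]ttpd|[a]pache'
        # if it's 'apache', hand it the session directory
        chown -R apache:apache /var/lib/php/session
        chmod 770 /var/lib/php/session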

    Read the article

  • 2Wire USB Network Adapter Not Working With Windows 7

    - by Andrew
    I installed Windows 7 on a separate drive, and everything works fine, but I can't get the above network adapter to work properly. It's displayed in Device Manager as a USB device, and I have the driver that makes it function in Vista, but when I run the Driver Wizard and direct it to directory where I have the Vista driver, it almost immediately says "Can't install driver." Is there any work around to this?

    Read the article

  • Difficult to point-and-click target with Apple Magic Trackpad?

    - by Andrew Swift
    My Magic Trackpad is great for most things involving dragging and gestures, but for simple point-and-click use it is starting to get really annoying. For example, my bank has a numeric code that I need to enter from a visual grid on the screen, by clicking on the six numbers in order. Doing this with the Magic Trackpad is quite difficult. Idea 1: it may be because when you start dragging on the trackpad, there is a brief delay before the cursor starts to move; the delay might be enough to throw off my anticipation of where the cursor is going to go. Idea 2: it may have to do with the cursor acceleration. If I move it a little, the cursor moves a little; if I move it a lot, it goes much faster. I have tried various settings in the preference pane, and although the cursor does move more or less quickly, no setting seems to make it any easier to hit a given target. As I am writing this post, I tried a brief experiment: I chose a spot on the page (the bracket in the superuser logo) and tried to move the cursor there in one quick movement. It's extremely difficult, even though I've been using Macs and touchpads since 1999. Something about the behavior of the trackpad is making it impossible for me to learn how to accurately predict where the cursor will end up. Does anyone else have this problem?
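
    If the acceleration curve is the culprit, one knob the preference pane doesn't expose (a hedged sketch; the key is a global default and takes effect after re-login) is setting the tracking speed directly:

        # set a fixed, lowish tracking speed; reportedly a value of -1 disables acceleration entirely
        defaults write -g com.apple.trackpad.scaling -float 0.7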

    Read the article

  • Avoiding spam filters on my CentOS 5.5 64bit server?

    - by Andrew Fashion
    I run a social network on my web server, with about 15,000 members right now. My administration section lets me mass-email all my users. Currently it uses the built-in PHP mail function. What is the best way to configure my server so the mail doesn't get caught by spam filters? Can I install anything on the server, or should I just make the social network use SMTP? The admin panel lets me choose SMTP or the built-in mail function. I'm not too familiar with mailing from servers, as I usually use Aweber for my mailing, but I cannot use Aweber for this, as they will not let me simply import 15,000 emails. Let me know, thanks.
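
    Whichever transport you pick, the biggest single win is usually DNS-level authentication, so receiving servers can verify yours. A sketch of an SPF record in zone-file syntax (the domain and IP are placeholders); pairing it with DKIM signing and a matching reverse-DNS (PTR) record covers most common filter checks:

        ; authorize the server's IP to send mail for the domain, reject all others
        example.com.  IN  TXT  "v=spf1 ip4:203.0.113.10 -all"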

    Read the article

  • nginx redirect proxy

    - by andrew
    I have a web app running on an nginx server on local IP 192.168.0.30:80. I have this in my /etc/hosts:

        127.0.0.1 w.myapp.in

    If someone accesses my app using the "w" subdomain, it shows a webdav interface; otherwise it runs normally (for example, http://myapp.in goes into the app, and http://w.myapp.in goes into the webdav interface; this is done within the app, nginx has nothing to do with it). Because I don't have a DNS entry or anything like that, users must access the app by IP. A problem appears if someone wants to access the webdav interface, because you cannot reach the app by a subdomain (unless you write a line in your local hosts file, which is not a solution). A possible solution: if it's possible to set up the nginx server so that a call to http://192.168.0.30 (on port 80) goes into the app normally, but a call to, say, http://192.168.0.30:81 (another defined port) redirects internally to w.myapp.in, the app would see the subdomain. Given the app, can this be done? If yes, what should I put in the nginx config file? And if you guys think of a better solution, I'm open to any.
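
    Something along these lines might work (an untested sketch): keep the app on port 80 and add a second server block on port 81 that proxies to the app locally while forcing the Host header, so the app believes it was reached as w.myapp.in:

        server {
            listen 81;
            location / {
                proxy_pass http://127.0.0.1:80;
                proxy_set_header Host w.myapp.in;
            }
        }

    Users would then open http://192.168.0.30:81 to reach the webdav interface.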

    Read the article

  • PC doesn't power up anymore

    - by Andrew
    I will tell you the story behind the problem. The computer needed to be reinstalled; it has 2 HDDs, so I had to unplug one to see which was which, to save as much data as possible. After unplugging the first one, it booted with the reinstallation CD; it wasn't the HDD I was looking for, so I turned the machine off and unplugged the other one instead. After that, on power-up, right after the boot test the HDD turned off and only the CPU fan kept working. I had to turn the machine off with the power supply's switch; holding the power button didn't do it. And now it doesn't want to turn on any more. I tried another test power supply, but the result is the same: it doesn't want to turn on. Any idea?

    Read the article

  • How to extract jpegs from a video file using ffmpeg

    - by Andrew Simpson
    I am using C# and ffmpeg. In this scenario I have 279 individual JPEGs, and I have used ffmpeg to create an AVI file from those images on my client. CMD line:

        -f image2 -r 10 -i "C:\000EC902F17F\img%05d.jpg" -s 352x288 -y "C:\1\test.avi"

    I then upload it to my server and extract JPEGs from the AVI file. CMD line:

        -i c:\1\1.avi c:\1\img-%05d.jpg

    I get 265 JPEGs back, so ffmpeg is evidently dropping frames, most probably when the AVI is first created. Is there a way to force it to encode using ALL the images I have? Thanks. PS: I did not specify any command-line options other than the size of the video output. As far as I am aware, if none are specified, ffmpeg automatically chooses the best ones?
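
    A hedged guess at the mechanism: the AVI was written at -r 10, but extraction runs at ffmpeg's default output rate, so frames get resampled rather than dumped one-to-one. Two variants worth trying (sketches):

        rem match the extraction rate to the encoding rate
        ffmpeg -i "c:\1\1.avi" -r 10 "c:\1\img-%05d.jpg"
        rem or disable frame-rate conversion: exactly one image per stored frame
        ffmpeg -i "c:\1\1.avi" -vsync 0 "c:\1\img-%05d.jpg"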

    Read the article

  • Why does Windows share permissions change file permissions?

    - by Andrew Rump
    When you create a (file system) share on Windows 2008 R2 with access for specific users, does it change the access rights on the files to match the access rights on the share? We just killed our intranet web site by sharing the INetPub folder (to a few specified users): it removed the file access rights for Authenticated Users, i.e., users could no longer log in using single sign-on (using IE & AD)! Could someone please tell me why it behaves like this? We now have to reapply the access rights every time we change the users on the share, killing the site in the process every time!
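
    The likely culprit: the simple "File Sharing" wizard rewrites the NTFS ACLs to match the share, while the "Advanced Sharing" dialog sets only share-level permissions and leaves the file ACLs alone. If the ACL does get clobbered again, re-granting the lost entry from a shell is quicker than the GUI (a sketch; the path and the exact rights you need are assumptions):

        icacls C:\inetpub /grant "Authenticated Users:(OI)(CI)RX"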

    Read the article
