Search Results

Search found 1683 results on 68 pages for 'andrew barber'.

  • Lynx web browser usage

    - by Andrew
    Does anyone still use the Lynx text-only web browser? It would seem useful for certain classes of low-end mobile devices, especially if one is billed per KB of data transfer.

  • Do data sources travel with a particular mail merge document?

    - by Andrew
    Do data sources that you set up (particularly to SQL Server) travel with a mail merge document? In other words, if I set up data sources in a mail merge document on my machine and then save and send that document to a co-worker and she opens it on her machine, will the data sources still be there when she opens it? Or, will she have to set them up again herself?

  • Win7 - Some pinned program's icons are corrupt, show default

    - by Andrew Backer
    I have both FF 3.6 & Chrome pinned to my taskbar in Win7. The icons for these two programs show up as the ugly default icon from yesteryear. Strange. Is there some way to force an icon refresh for pinned programs? When I first added them they showed properly, but several days later they reverted to this state. Un-pinning a program causes its icon to show up properly, and re-pinning it causes it to break again. These other programs show up fine: Media Center, Media Player Classic HC, Hulu Desktop, WMP, and the folders. I have 2 user accounts on this box, and both show this behavior. I have tried changing the taskbar icon size to 'small' and back, but to no effect. Edit (add): The icons show up as broken in the Start menu too, but I can navigate to the EXE directly. When I click "Change Icon" in the properties for the Start menu entry I get the error: Can not find %ProgramFiles%\Google\Chrome...\chrome.exe.
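
    A common way to force the kind of icon refresh being asked about is to rebuild Explorer's per-user icon cache. A minimal sketch from an elevated command prompt (this clears the cache for the current user only, and may or may not survive the several-day reversion described above):

        :: rebuild the per-user icon cache (the cache file is hidden, hence /a)
        taskkill /f /im explorer.exe
        del /a "%localappdata%\IconCache.db"
        start explorer.exe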

  • Copying files within a Workgroup

    - by Andrew La Grange
    I have three boxes operating in a Windows Server workgroup within a closed network (no domain / no AD). There are several variations of the scenario I'm about to outline, but I'm sure I'll be able to retool the solution as and when I need to. Essentially the boxes are: 2 x Windows Server 2008 R2 x64 Standard and 1 x Windows Server 2000 Standard. I need to be able to schedule copying and/or moving of files between various directories on each of the boxes. Each box has a different username and password for the administrator. I have PowerShell 2.0 on the two Win2K8 boxes (obviously). Previously I have used mapped network drives and command-line batch files to copy the files, but I'd much rather use PowerShell if possible (with shares and/or $ notation). However, the Copy-Item cmdlet doesn't seem to be processing the credential correctly. Perhaps some PowerShell gurus out there might be able to help me. Essentially I'd like to schedule a PowerShell script to push backup files onto my Win2K box (the old file server) periodically.
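
    For what it's worth, Copy-Item has a -Credential parameter but the built-in FileSystem provider does not support it, which would explain the credential apparently being ignored. A minimal sketch of one workaround: map the remote share with New-PSDrive and explicit credentials, then copy through that drive (the server, share and path names below are placeholders):

        # Map the old file server's share with the remote administrator account, then push the backups.
        $cred = Get-Credential                       # prompts; an unattended task would need a stored credential instead
        New-PSDrive -Name Backup -PSProvider FileSystem -Root \\WIN2K-SERVER\Backups -Credential $cred | Out-Null
        Copy-Item -Path D:\Backups\*.bak -Destination Backup:\ -Force
        Remove-PSDrive -Name Backup

    For a scheduled run, the credential could be saved once with Export-Clixml (it is encrypted per user and machine) and re-imported instead of prompting.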

  • What are "Excess Fragments" in defragmenting a hard drive?

    - by Andrew Swift
    I'm defragmenting my hard drive (XP SP3) with PerfectDisk 7.0, and it finds 816,659 excess fragments when I ask for an analysis. [Update] Specifically, it shows that the 1TB disk is 14% fragmented, with 19,693 fragments and 816,659 excess fragments. About 20% of the disk is still free space. What does "excess fragments" refer to? What is the difference between fragments and excess fragments? I have had problems in the past where I defragmented a fragmented disk and many files were corrupted. It seemed as though "excess fragments" referred to orphan pieces, where the program couldn't find out where to put them. If that was true, then defragmenting a disk resulted in many incomplete files; in fact I defragmented a disk full of MP3s and got a lot of corrupted files as a result. Instead, I started to simply format a separate disk and copy everything from one to the other. That way there were no orphan bits, and no file corruption. Does anybody know what "excess fragments" really are?

  • How to establish real-time communication between a shopping cart running MySQL and an internal system running PostgreSQL [closed]

    - by Andrew
    I am thinking about ways of establishing some sort of real-time connection between a MySQL-powered shopping cart and an internal system that runs on PostgreSQL. Could you give me some insight on this topic? For example, I could write some sort of CSV export application, then enable remote MySQL connections over the internet and import the CSV into MySQL directly from a PC; or upload the CSV and run a cron job on the server. But this kind of import-export causes delays, so I would like to link the databases (or something of that sort). I have never done it before and would like to hear some opinions about this. Another option (just a thought) might be to implement triggers that would initiate the update process via CSV, but again, I would like to avoid CSV. Do you have any good advice? Maybe some specific examples?
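
    As an illustration of the "link the databases" idea without CSV in the middle, a small sync job can read new rows straight out of MySQL and write them into PostgreSQL. A rough sketch in Python (the hosts, table, columns and the pymysql/psycopg2 packages are all assumptions for the example, not part of the original setup):

        import pymysql    # MySQL driver (assumed installed)
        import psycopg2   # PostgreSQL driver (assumed installed)

        # Hypothetical connection details and schema, purely for illustration.
        cart = pymysql.connect(host="cart-host", user="sync", password="secret",
                               database="shop", cursorclass=pymysql.cursors.DictCursor)
        internal = psycopg2.connect(host="internal-host", user="sync",
                                    password="secret", dbname="erp")

        with cart.cursor() as src, internal, internal.cursor() as dst:
            # Pull rows not yet transferred, push them to PostgreSQL, then mark them as exported.
            src.execute("SELECT id, customer, total FROM orders WHERE exported = 0")
            for row in src.fetchall():
                dst.execute("INSERT INTO orders (id, customer, total) VALUES (%s, %s, %s)",
                            (row["id"], row["customer"], row["total"]))
                src.execute("UPDATE orders SET exported = 1 WHERE id = %s", (row["id"],))
        cart.commit()

    Run from cron every minute this is close to real-time; a MySQL trigger writing to a queue table could drive the same script for lower latency.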

  • Public key always asking for password and passphrase

    - by Andrew Atkinson
    I am trying to SSH from a NAS to a webserver using a public key. The NAS user is 'root' and the webserver user is 'backup'. I have all permissions set correctly, and when I debug the SSH connection I get (last little bit of the debug):

        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
        debug1: Next authentication method: publickey
        debug1: Offering DSA public key: /root/.ssh/id_dsa.pub
        debug1: Server accepts key: pkalg ssh-dss blen 433
        debug1: key_parse_private_pem: PEM_read_PrivateKey failed
        debug1: read PEM private key done: type <unknown>
        Enter passphrase for key '/root/.ssh/id_dsa.pub':

    I am using the command:

        ssh -v -i /root/.ssh/id_dsa.pub [email protected]

    The fact that it is asking for a passphrase is a good sign, surely, but I do not want it to prompt for this or for a password (which comes afterwards if I press 'return' at the passphrase prompt).
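
    The "key_parse_private_pem: PEM_read_PrivateKey failed" line suggests ssh is being handed the public key file where it expects the private key, hence the passphrase prompt against id_dsa.pub. A minimal sketch of the usual setup (the hostname is a placeholder; if ssh-copy-id is not available on the NAS, append the .pub contents to ~backup/.ssh/authorized_keys by hand):

        # one-time: install the public half on the webserver
        ssh-copy-id -i /root/.ssh/id_dsa.pub backup@webserver

        # then authenticate with the private half (not the .pub) -- no prompt, provided the key itself has no passphrase
        ssh -i /root/.ssh/id_dsa backup@webserver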

  • Compiling Gearman PHP Library for CentOS 5.8

    - by Andrew Ellis
    I've been trying to get Gearman compiled on CentOS 5.8 all afternoon. Unfortunately I am restricted to this version of CentOS by my CTO and how he has our entire network configured. I think it's simply because we don't have enough resources to upgrade our network... But anyway, the problem at hand. I have searched through Server Fault, Stack Overflow, and Google, and am unable to locate a working solution. What I have below is stuff I have pieced together from my searching. Searches have said to install the following via yum:

        yum -y install --enablerepo=remi boost141-devel libgearman-devel e2fsprogs-devel e2fsprogs gcc44 gcc-c++

    To get the Boost headers working correctly I did this:

        cp -f /usr/lib/boost141/* /usr/lib/
        cp -f /usr/lib64/boost141/* /usr/lib64/
        rm -f /usr/include/boost
        ln -s /usr/include/boost141/boost /usr/include/boost

    With all of the dependencies installed and paths set up, I then download and compile gearmand-1.1.2 just fine:

        wget -O /tmp/gearmand-1.1.2.tar.gz https://launchpad.net/gearmand/1.2/1.1.2/+download/gearmand-1.1.2.tar.gz
        cd /tmp && tar zxvf gearmand-1.1.2.tar.gz
        ./configure && make -j8 && make install

    That works correctly. So now I need to install the Gearman library for PHP. I have attempted it through PECL and by downloading the source directly; both result in the same error:

        checking whether to enable gearman support... yes, shared
        not found
        configure: error: Please install libgearman

    What I don't understand is that I installed the libgearman-devel package, which also installed the core libgearman. The installation installs libgearman-devel-0.14-3.el5.x86_64, libgearman-devel-0.14-3.el5.i386, libgearman-0.14-3.el5.x86_64, and libgearman-0.14-3.el5.i386. Is it possible the package version is lower than what is required? I'm still poking around with this, but figured I'd throw this up to see if anyone has a solution while I continue to research a fix. Thanks!
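
    Note that building gearmand 1.1.2 from source also installs a current libgearman under /usr/local, while the yum libgearman is 0.14, which is much older than recent versions of the PHP extension expect. One possible way past the "Please install libgearman" check is to build the extension by hand and point it at the source-built copy; a sketch (the extension version and paths are assumptions):

        pecl download gearman
        tar zxvf gearman-1.1.2.tgz && cd gearman-1.1.2
        phpize
        ./configure --with-gearman=/usr/local
        make && make install
        echo "extension=gearman.so" > /etc/php.d/gearman.ini

    If configure still picks up the old 0.14 headers first, removing the libgearman-devel RPMs (or running ldconfig after adding /usr/local/lib to the linker path) may also be needed.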

  • How do I configure permissions for a cluster share using Powershell on 2008?

    - by Andrew J. Brehm
    I have a cluster resource of type "file share", but when I try to configure the "security" parameter I get the following error (excerpt):

        Set-ClusterParameter : Parameter 'security' does not exist on the cluster object

    Using cluster.exe I get a better result, namely the usual nothing when the command worked. But when I check in Failover Cluster Manager the permissions have not changed. In Server 2003 the cluster.exe method worked. Any ideas?

    Update: Entire command and error.

        PS C:\> $resource=get-clusterresource testshare
        PS C:\> $resource

        Name                State               Group               ResourceType
        ----                -----               -----               ------------
        testshare           Offline             Test                File Share

        PS C:\> $resource|set-clusterparameter security "domain\account,grant,f"
        Set-ClusterParameter : Parameter 'security' does not exist on the cluster object 'testshare'. If you are trying to
        update an existing parameter, please make sure the parameter name is specified correctly. You can check for the
        current parameters by passing the .NET object received from the appropriate Get-Cluster* cmdlet to
        "| Get-ClusterParameter". If you are trying to update a common property on the cluster object, you should set the
        property directly on the .NET object received by the appropriate Get-Cluster* cmdlet. You can check for the current
        common properties by passing the .NET object received from the appropriate Get-Cluster* cmdlet to "| fl *". If you
        are trying to create a new unknown parameter, please use -Create with this Set-ClusterParameter cmdlet.
        At line:1 char:31
        + $resource|set-clusterparameter <<<< security "domain\account,grant,f"
            + CategoryInfo          : NotSpecified: (:) [Set-ClusterParameter], ClusterCmdletException
            + FullyQualifiedErrorId : Set-ClusterParameter,Microsoft.FailoverClusters.PowerShell.SetClusterParameterCommand
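
    One thing worth checking, as the error text itself hints, is which private properties the resource actually exposes; and if 'security' genuinely isn't among them on 2008, granting the permission on the share from the owning node is a possible fallback. A sketch (the account name is a placeholder, and the net share approach assumes the share is online on that node):

        PS C:\> Get-ClusterResource testshare | Get-ClusterParameter
        PS C:\> net share testshare /grant:domain\account,FULL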

  • Windows Server 2012 Metro shortcut icons do not show for other users

    - by Andrew
    I have installed SQL Server 2012 and SharePoint 2013 on my Windows Server 2012 machine using a dedicated domain install account. When I log into the same machine with a user account, all the icons for these applications are missing! I can still access the applications by finding them in 'Program Files', but it is very annoying. (For example, I'm not exactly sure where the SharePoint PowerShell is located, and frankly I don't want to know either.) In previous versions of Windows Server, the icons always showed up in the Start menu. Does anyone know how I can copy the shortcuts from one account to another?
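
    If the installers dropped their shortcuts under the install account's profile rather than the all-users location, copying the relevant Start menu folders across usually makes them visible to everyone. A sketch (the account name and folder names are guesses and will differ per install):

        # run elevated; source is the install account's Start menu, destination is the all-users Start menu
        $from = 'C:\Users\sp_install\AppData\Roaming\Microsoft\Windows\Start Menu\Programs'
        $to   = 'C:\ProgramData\Microsoft\Windows\Start Menu\Programs'
        Copy-Item "$from\Microsoft SQL Server 2012" $to -Recurse
        Copy-Item "$from\Microsoft SharePoint 2013 Products" $to -Recurse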

  • Easiest way to replace preinstalled Windows 8 with Windows 7 on a new hard drive

    - by Andrew
    There are all kinds of questions and answers relevant to moving Windows 8 to a new hard drive. I'm not seeing anything quite applicable to my situation. I have a new, unopened, unbooted notebook with pre-installed Windows 8. I will be replacing the hard drive before ever booting, unless that is not possible for some reason. I want to "downgrade" to Windows 7 Pro, and I want a clean installation. To do so legitimately, I apparently either need to upgrade Windows 8 to Windows 8 Pro using the Windows 8 Pro Pack and then downgrade, or just install a newly-licensed copy of Windows 7 Pro. (Let me know if I've missed an option.) Installation media is likely not a problem, though if I need something vendor-specific that I cannot otherwise download, that could present an issue (Asus notebook, if that matters). If I could, I would just buy the Pro Pack upgrade, swap the hard drive (without ever booting), then install Windows 7 Pro directly on the new hard drive, using the Pro Pack key for activation. Will this work? Are there any activation issues?

    Edited to clarify, as some comments and answers indicate confusion. Here is, ideally, what I want to do:

        1. Before ever powering on the notebook, remove the current hard drive.
        2. Replace this hard drive with a new, blank hard drive.
        3. Install a clean copy of Windows 7 Pro on this new, blank hard drive.

    Unless I have no choice to accomplish the end result (a clean install of Win7 Pro on the newly-installed, previously-blank hard drive), I am not wanting to:

        - Install Windows 7 "over" the current Windows 8 install (after upgrading to Win8 Pro). That would involve using the currently-installed hard drive; I want to use a new, different hard drive.
        - Copy the Win8 install to the new hard drive, then install Windows 7 "over" that installation.
        - Install Windows 7 "over" the current Windows 8 install (after upgrading to Win8 Pro), then copy the installation to the new hard drive.

    If I have to use one of those three options, I will, but only if there is no other choice. Please note that this question is not about licensing: I will purchase the necessary license(s) to accomplish this procedure legally (apparently either the Win8 Pro Pack or Win7 Pro -- the former currently appears less expensive).

  • Is surge protection actually needed?

    - by andrew
    Am I an idiot for not using a surge-protected power board? Does this mean my computer gets fried in a power outage? Which particular parts of the computer are most vulnerable to damage if I get a 'surge'? Sorry for being a newb.

  • Cannot find Power Management Tab in XP

    - by Andrew Heath
    I have the problem that when I send my computer to sleep, it wakes if you bump the table or the floor, burp, etc. I have read many threads that say to go to Device Manager, open the mouse's Properties, and uncheck the wake option on the Power Management tab. My problem is I do not have a Power Management tab! Does anyone know how to enable the tab, or otherwise stop the mouse from waking my machine? And no, turning it upside down doesn't work either!

  • Recommendations for USB flash drive fast at writing small files

    - by Andrew Bainbridge
    I want a drive that can be used as my work drive, storing a Subversion repo and sandbox for a small project. I'd also like it to be able to store a DVD rip. At the moment I've got a Super Talent Pico-C 8GB. It's fast at reading and writing DVD rips, but the performance on small files (i.e. less than 4KB) is utterly terrible (we're talking floppy disk speeds here). This Ars review measured a similar Super Talent drive and pretty much confirmed my measurements (take a look at the random write speeds on page 5). So I'm looking for an 8GB or bigger drive that doesn't suck at reading and writing small files and still has acceptable performance for very large files.

  • How to install PHP, Pear, PECL, and APC with Homebrew on Mac OS X?

    - by Andrew
    I'm trying to install APC for PHP 5.3 in the easiest way possible. I love Homebrew so I started down that route. I was able to install PHP 5.3.6 with this command:

        brew install https://github.com/adamv/homebrew-alt/raw/master/duplicates/php.rb --with-mysql

    I think this is supposed to install PHP, PEAR, and PECL, and it seems to install these just fine. Now when I try to install APC:

        $ pecl install apc
        downloading APC-3.1.9.tgz ...
        Starting to download APC-3.1.9.tgz (155,540 bytes)
        .................................done: 155,540 bytes

        Warning: require_once(Archive/Tar.php): failed to open stream: No such file or directory in PackageFile.php on line 305

        Warning: require_once(Archive/Tar.php): failed to open stream: No such file or directory in /usr/local/Cellar/php/5.3.6/lib/php/PEAR/PackageFile.php on line 305

        Fatal error: require_once(): Failed opening required 'Archive/Tar.php' (include_path='/usr/local/Cellar/php/5.3.6/lib/php') in /usr/local/Cellar/php/5.3.6/lib/php/PEAR/PackageFile.php on line 305

    How can I fix this?
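
    The fatal error is PEAR failing to load its own Archive_Tar package rather than anything APC-specific, so the PEAR php_dir for the Homebrew PHP is worth checking. A sketch of one way to put Archive_Tar back and retry (the paths come from the error message; the exact fix depends on how Homebrew laid out PEAR):

        pear config-get php_dir            # where PEAR expects its packages; Archive/Tar.php should live under here
        pear channel-update pear.php.net
        pear install --force Archive_Tar   # reinstall the missing piece
        pecl install apc

    If pear itself trips over the same missing file, reinstalling the Homebrew PHP (which bundles PEAR) or unpacking the Archive_Tar tarball into that php_dir by hand is the fallback.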

  • Ubuntu Device-mapper seems to be invincible!

    - by Andrew Bolster
    I'm working on a hopefully unrelated question and I've got to a strange situation. First: I know very little about the very low-level hardware/kernel storage driver magic, so I'm hoping a) someone can help and b) someone can explain it to me better. I've been trying a dozen different configurations of my 2x500GB SATA drives over the past few hours, involving switching between AHCI/IDE/RAID in my BIOS. After each attempt I've reset the BIOS option, booted into a live CD, and deleted partitions and rewritten the partition tables left on the drives. Now, however, I'm sitting with a /dev/mapper/nvidia_XXXXXXX1 that seems to be impossible to kill! It's the only 'partition' that I see in the Ubuntu installer (though I can see the others in parted), but it is only the size of one of the drives, and I know I did not set any RAID levels other than RAID0. Anyone have any ideas how I can kill this and get back to just two independent IDE drives? Or can anyone convince me of a reason to go the AHCI route? Many thanks in advance.
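
    That /dev/mapper/nvidia_* node is typically a leftover fakeraid (dmraid) mapping: the BIOS RAID mode writes metadata at the end of the disks, which survives ordinary repartitioning, and the installer keeps assembling it. A sketch of clearing it from the live CD (the device names are assumed; double-check them against the dmraid -r output first):

        sudo dmraid -r                        # list disks carrying RAID metadata
        sudo dmsetup remove_all               # tear down the /dev/mapper/nvidia_* mappings
        sudo dmraid -r -E /dev/sda /dev/sdb   # erase the fakeraid metadata from both drives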

  • Ubuntu Nvidia Xorg TwinView doesn't like my monitors

    - by Andrew Bolster
    Basically, using the latest available Ubuntu drivers (195.36.15) I cannot for the life of me get my two monitors to operate at suitable resolutions. When not using the drivers at all and going single-screen, both monitors support 1680x1050, but this option is only shown for one monitor in nvidia-settings, and when I manually add a metamode to xorg.conf, it just gives up initialising the second screen.

        (**) Mar 25 15:49:47 NVIDIA(0): TwinView enabled
        (II) Mar 25 15:49:47 NVIDIA(0): Assigned Display Devices: CRT-0, CRT-1
        (II) Mar 25 15:49:47 NVIDIA(0): Validated modes:
        (II) Mar 25 15:49:47 NVIDIA(0):     "1680x1050,1680x1050"
        (II) Mar 25 15:49:47 NVIDIA(0): Virtual screen size determined to be 1680 x 1050

    Any ideas?
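
    The log shows the validated metamode collapsing to a 1680 x 1050 virtual screen, i.e. both CRTs cloned rather than placed side by side. One thing to try is spelling out per-device modes with offsets in the MetaModes option so the virtual screen becomes 3360 x 1050; a sketch of the Screen section (the Identifier/Device names are assumptions, while CRT-0/CRT-1 come from the log):

        Section "Screen"
            Identifier "Screen0"
            Device     "Device0"
            Option     "TwinView"  "true"
            Option     "MetaModes" "CRT-0: 1680x1050 +0+0, CRT-1: 1680x1050 +1680+0"
        EndSection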

  • Can I get "disk utilization" from a NetApp filer via SNMP?

    - by Andrew
    On a NetApp filer's command line I'm running "sysstat -u" to show disk utilization (actually the utilization of the single busiest disk). By disk utilization, I mean "percent of time the disk is busy", not "how much space on the disk is being used to store data/metadata". Is there a way to get disk utilization info through SNMP? The netapp.mib file doesn't appear to expose this. It does have CPU utilization, disk usage and capacity information, etc., but not disk utilization. MIB-II (RFC 1213) seems to be the only other information exposed by the filer through SNMP. I hope I am missing something. The "CP (consistency point) time" metric is exposed through the NETAPP-MIB in SNMP, but this seems to only partially correlate with disk utilization under write load, and not really at all under read load.

  • How do I back up Credential Manager passwords (Windows 7)?

    - by Andrew J. Brehm
    I am trying to create a backup of my stored passwords in Credential Manager. But after Windows switches to the secure desktop to get the password for the backup file, it simply announces that "Your stored logon credentials could not be backed up" and gives as explanation "Element not found", neither of which is helpful. (In fact I hate the "X could not Y" type of error message.) I am an administrator on the machine and there is only one password in Credential Manager. The sole point of the backup is to create a nearly empty Credential Manager so that I don't have to delete manually hundreds of password entries every time I have to change my domain password. (I think Microsoft haven't thought this through properly. There appears to be no way to delete more than one entry at a time.) Any ideas?
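
    For the underlying goal of clearing out many entries at once, the built-in cmdkey tool can list and delete stored credentials from a command prompt, which is easier to script than clicking through the Control Panel. A sketch (the target name is only a placeholder):

        :: show every stored credential and its target name
        cmdkey /list

        :: delete a single entry by target name
        cmdkey /delete:TERMSRV/someserver

    Wrapping the /list output in a for /f loop (or a short PowerShell loop) would delete them in bulk, though parsing the target names out of the listing needs a little care.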

  • Apache Process question about RAM usage

    - by Andrew Fashion
    So every time I load a new page, I notice a new httpd process opens, and each process says it's using anywhere from 2-4.5% of memory. Does that mean every single process is running at that time using 2-4% of RAM? It's a brand new server and I'm the only one on it at the moment. Or does it mean all the other processes are dying, and only the new one is active? Because 4% of my 2048MB of RAM is already 82MB for just one process!?!? Let me know, because I am trying to determine what I need to beef my server up with in order to handle high loads of traffic. I expect to get 20,000 uniques per day on launch. I am currently running a dual quad-core Xeon server with only 2GB of RAM; I will upgrade to 8GB or more shortly. Let me know what you suggest! Thank you.

        [root@D18634 log]# top | grep 'httpd'
        11315 apache    15   0  362m  82m  24m R 12.3  4.1  0:03.00 httpd
        11310 apache    16   0  322m  41m  21m S  5.7  2.1  0:02.98 httpd
        11315 apache    15   0  362m  83m  25m S 24.3  4.1  0:03.73 httpd
        11319 apache    16   0  324m  42m  20m R  1.0  2.1  0:01.85 httpd
        11319 apache    16   0  362m  82m  23m R 78.5  4.1  0:04.21 httpd
        11321 apache    16   0  323m  44m  23m S 35.3  2.2  0:04.13 httpd
        11319 apache    15   0  361m  82m  23m S  8.3  4.1  0:04.46 httpd
        11321 apache    15   0  323m  44m  23m S 35.9  2.2  0:05.21 httpd
        11313 apache    15   0  324m  41m  19m S 48.6  2.1  0:03.23 httpd
        11322 apache    16   0  354m  72m  20m R 11.0  3.6  0:05.11 httpd
        11322 apache    16   0  354m  72m  20m S 23.9  3.6  0:05.83 httpd
        11314 apache    16   0  355m  75m  22m R 18.3  3.7  0:04.64 httpd
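
    Part of what top reports per worker is memory shared between all the httpd processes (libraries and pages inherited from the parent), so multiplying 82 MB by the worker count overstates the real usage. Summing resident sizes gives a rough upper bound, and free shows what is actually left; a quick sketch:

        # rough upper bound on Apache memory use (RSS double-counts shared pages)
        ps -C httpd -o rss= | awk '{ sum += $1 } END { printf "%.0f MB\n", sum/1024 }'
        free -m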

  • How does NMap decide to print a progress line?

    - by Andrew Bolster
    Checking a larger subnet than I normally do; I'm mapping out a cluster suite in a university for a traffic mapping project (permission obtained), and I was wondering something. Nmap usually prints its progress periodically, but I'm unclear about what that 'periodically' is, because the current scan printed a line for basically every 100th of a percent up to 1% done, then one at 1.5%, and has said nothing since. I suspect that it changes at different 'levels', but does anyone have an actual answer?
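
    If the goal is predictable output rather than the details of the heuristic, Nmap can simply be told when to report: pressing almost any key during a scan prints an immediate status line, and newer releases accept a fixed reporting interval. A sketch (the target range is a placeholder, and --stats-every assumes a reasonably recent Nmap):

        nmap -v --stats-every 30s 10.1.0.0/16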

  • Using multiple accounts on Gmail on a Blackberry

    - by Andrew G. Johnson
    I love the whole "one inbox" concept on my BlackBerry. Facebook updates, SMS/MMS messages, and email all land in the same list. The problem is it assumes I have a simple Gmail account, not one with multiple accounts behind it. I have my Gmail account set up to receive email from ~15 different accounts. I assume a lot of tech people do something like this. How have you gotten around it?
