Search Results

Search found 22481 results on 900 pages for 'andy may'.


  • Can't install xclip on Ubuntu 10.10

    - by wildster
    I'm trying to load an SSH key into GitHub from a new machine, and this command is not working:

        sudo apt-get install xclip
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package xclip is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or
        is only available from another source
        E: Package xclip has no installation candidate

    When I try aptitude instead:

        sudo aptitude install xclip
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Reading extended state information
        Initializing package states... Done
        No candidate version found for xclip
        No candidate version found for xclip
        The following partially installed packages will be configured:
          synaptics-dkms
        0 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        Need to get 0B of archives. After unpacking 0B will be used.
        Writing extended state information... Done
        Setting up synaptics-dkms (1.1.1) ...
        Loading new synaptics-1.1.1 DKMS files...
        Error! Cannot locate /usr/src/synaptics-1.1.1.dkms.tar.gz.
        File does not exist.
        dpkg: error processing synaptics-dkms (--configure):
         subprocess installed post-installation script returned error exit status 2
        Errors were encountered while processing:
         synaptics-dkms
        E: Sub-process /usr/bin/dpkg returned an error code (1)
        A package failed to install. Trying to recover:
        Setting up synaptics-dkms (1.1.1) ...
        Loading new synaptics-1.1.1 DKMS files...
        Error! Cannot locate /usr/src/synaptics-1.1.1.dkms.tar.gz.
        File does not exist.
        dpkg: error processing synaptics-dkms (--configure):
         subprocess installed post-installation script returned error exit status 2
        Errors were encountered while processing:
         synaptics-dkms
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Reading extended state information
        Initializing package states... Done

    Any idea how I can install this? Mucho thanks in advance.
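    One plausible fix, assuming xclip is simply in a repository component that is not enabled (on Ubuntu 10.10 it ships in "universe") and that the half-configured synaptics-dkms package is blocking dpkg:

        # enable the universe component, then refresh the package index
        # (or edit /etc/apt/sources.list by hand if add-apt-repository is unavailable)
        sudo add-apt-repository "deb http://archive.ubuntu.com/ubuntu maverick universe"
        sudo apt-get update

        # clear out the broken synaptics-dkms package so dpkg can finish configuring
        sudo apt-get remove --purge synaptics-dkms
        sudo apt-get -f install

        # now try xclip again
        sudo apt-get install xclip

    The archive URL and the decision to purge synaptics-dkms are assumptions; check the existing sources.list entries first.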

    Read the article

  • How to install rmagick on Ubuntu 10.04?

    - by Andrew
    Here's what I've done so far:

        sudo apt-get install imagemagick libmagickcore-dev

    This did not throw any errors, so I think ImageMagick is installed fine. Then I tried installing the gem:

        sudo gem install rmagick

    This resulted in the following error:

        ERROR: Error installing rmagick:
        ERROR: Failed to build gem native extension.
        /usr/bin/ruby1.8 extconf.rb
        checking for Ruby version >= 1.8.5... yes
        checking for gcc... yes
        checking for Magick-config... yes
        checking for ImageMagick version >= 6.4.9... yes
        checking for HDRI disabled version of ImageMagick... yes
        checking for stdint.h... yes
        checking for sys/types.h... yes
        checking for wand/MagickWand.h... no
        Can't install RMagick 2.13.1. Can't find MagickWand.h.
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of necessary
        libraries and/or headers. Check the mkmf.log file for more details. You may
        need configuration options.
        Provided configuration options:
          --with-opt-dir
          --without-opt-dir
          --with-opt-include
          --without-opt-include=${opt-dir}/include
          --with-opt-lib
          --without-opt-lib=${opt-dir}/lib
          --with-make-prog
          --without-make-prog
          --srcdir=.
          --curdir
          --ruby=/usr/bin/ruby1.8
        Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/rmagick-2.13.1 for inspection.
        Results logged to /usr/lib/ruby/gems/1.8/gems/rmagick-2.13.1/ext/RMagick/gem_make.out

    What do I need to do to install rmagick on Ubuntu 10.04?
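    The missing wand/MagickWand.h header suggests the MagickWand development package is not installed. A likely fix on Ubuntu 10.04, assuming the stock repositories:

        # MagickWand.h ships in the MagickWand development package
        sudo apt-get install libmagickwand-dev

        # then retry the gem build
        sudo gem install rmagick

    Package naming varies between releases (older Ubuntus used names like libmagick9-dev), so treat the exact package name as an assumption and confirm it with apt-cache search magickwand.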

    Read the article

  • Strange battery behavior on laptop

    - by EpsilonVector
    My laptop is behaving rather strangely lately, and I was hoping to get some idea as to what may be causing these symptoms.

    The problem: when charging, every minute or so it loses the connection to the AC adapter for a split second, then immediately regains it. When this happens, the little light that indicates the computer is plugged in flickers off and back on. I checked the adapter by swapping in a different battery, and that indeed made the problem go away, so it is probably the battery that is at fault, not the adapter (I also moved the adapter's wire around just to make sure it had nothing to do with torn wires).

    I suppose the obvious solution is to get a new battery, but as far as battery defects go, this is a rather strange one: it loses the connection to the adapter yet still powers the computer, and changing the power setting to a balanced plan (it was on maximum performance) seems to have solved the problem too. Is there a chance this is not simply the battery, but some other kind of electronic defect? And if not, what can cause it to behave so strangely?

    PS: I tried recalibrating the battery; it didn't help.

    Read the article

  • What is the 'best practice' for installing perl modules on Solaris/OpenSolaris?

    - by AndrewR
    I'm currently in the process of writing setup instructions for some software I've written that is implemented as a set of Perl modules. Having done this for various flavours of Linux, I'm now doing the same for Solaris/OpenSolaris (v10 only).

    Part of the setup process is to make sure that dependent Perl modules are installed. This has been pretty easy on Linux, as the Perl modules I require tend to be within the distro's packaging system (e.g. yum install perl-Cache-Cache). This is not the case on Solaris, so I'm working on setup instructions that use the CPAN module to fetch dependent modules, e.g.:

        perl -MCPAN -e 'install Cache::Cache'

    This works OK, but there are known problems with modules that require things to be built with a C compiler: the generated Makefile assumes you're using Sun's compiler and uses command-line options not understood by gcc, which you may be using instead. Consulting teh Internetz has thrown up a number of solutions to this:

      • Install and use Sun's compiler
      • Use the perlgcc wrapper script
      • Edit the makefiles by hand (yuk)

    All of these work. My question to those more familiar with Solaris than me: is one of these the 'best' or 'most commonly used' method?
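    If the perlgcc wrapper is the route taken, the setup instructions might look something like the following sketch (the path to perlgcc and the module directory name are assumptions; they vary by Solaris release and module version):

        # perlgcc rewrites the compiler settings in the generated Makefile for gcc
        cd Cache-Cache-x.yy/
        /usr/perl5/bin/perlgcc Makefile.PL
        make
        make test
        make install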

    Read the article

  • Same netmask or /32 for secondary IP on Linux

    - by derobert
    There appear to be (at least) two ways to add a secondary IP address to an interface on Linux. By secondary, I mean that it'll accept traffic to the IP address, and responses to connections made to that IP will use it as a source, but any traffic the box originates (e.g., an outgoing TCP connection) will not use the secondary address.

    Both ways start with adding the primary address, e.g.:

        ip addr add 172.16.8.10/24 dev lan

    Then I can add the secondary address with either a netmask of /24 (matching the primary) or /32. If I add it with a /24, it gets flagged secondary, so it will not be used as the source of outgoing packets, but that leaves a risk of the two addresses being added in the wrong order by mistake. If I add it with /32, the wrong order can't happen, but it doesn't get flagged as secondary, and I'm not sure what the bad effects of that may be.

    So I'm wondering: which approach is least likely to break? (If it matters, the main service on this machine is MySQL, but it also runs NFSv3. I'm adding a second machine as a warm standby, and hope to switch between them by changing which machine owns the secondary IP.)
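    For concreteness, the two variants look like this (172.16.8.11 is an assumed example address):

        # variant 1: same netmask; the kernel flags it "secondary" automatically
        ip addr add 172.16.8.11/24 dev lan

        # variant 2: host route only; order-independent, but not flagged secondary
        ip addr add 172.16.8.11/32 dev lan

        # inspect the result; the first variant shows "secondary" in this output
        ip addr show dev lan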

    Read the article

  • VPN Client solution

    - by realtek
    I have several VPNs that I need to establish on a daily basis, but from multiple workstations. What I would like is to have either a server or a VPN router that can make these connections itself, so that I can route traffic through this device or server depending on the subnet I am trying to reach. The issue is that I only use VPN clients to connect, so I am basically trying to achieve a site-to-site VPN built on top of a VPN-client-type connection from my network.

    The main VPN client I use is the SonicWall Global VPN Client, where I initially use a preshared key and am then always prompted for a username and password (not an RSA key). My question is: is there any type of Linux distro, or even a hardware VPN router, that can do this and connect to a SonicWall device as if it were a client?

    I have tried pfSense, which is very good, but it fails to connect, probably due to a mismatch of settings. I have tried many others, even DD-WRT on my router, but it does not support whatever protocol SonicWall uses (I thought L2TP/IPsec, but it appears it may not be that). The other thing I have thought of but have not tried yet is Windows Server Routing and Remote Access, though I have a feeling that won't work either.

    Any advice would be great! Thanks

    Read the article

  • What ways are there to set permissions on an Exchange 2003 mailbox?

    - by HopelessN00b
    I'm having a difficult/impossible time tracking down a permissions issue on an Exchange 2003 mailbox, and I was wondering if I'm missing any technical possibilities here. The basic question: what ways are there to set a user's permissions to access a mailbox in Exchange 2003? I know of two — permissions on the mailbox itself (Mailbox Rights) and delegated rights. And then, if it's possible, how would one view all the permissions (including delegated permissions) on the mailbox?

    The situation is that a new user who's been set up "exactly like all the others" in his department (pretty sure he was copied via the right-click option in ADUC, in fact) can't access a specific shared mailbox, which I've been assured about a dozen other people do have access to and use on a regular basis. As to how they got permissions to the mailbox, no one knows, so it must have been granted by a white wizard whose spell has since worn off, so now IT has to handle it instead. Anyway...

    This mailbox is a normal AD user, created as a service account, for which no one knows the password (of course), so it's probably not the case that this service account was being used to delegate permissions. Examining the Mailbox Rights directly (the original post includes a screenshot of the permissions here) leads me to believe that one of two things is happening: the managers have been delegating full mailbox permissions to the rest of the department, or everyone's logging in using... not their own account.

    But before I get too excited about the prospect of busting out the LART and strolling over to that department, I want to make sure I'm not missing another possible explanation. Like most of the rest of the world, I ditched Exchange 2003 at the earliest possible opportunity and had been looking forward to never seeing it again, so I'm a bit rusty on the intricacies of how it [mostly, sort of] works. Anyone see any other possibilities, or things I may have missed, or does the LART get to come out and play?

    Read the article

  • Apache HTTPd FollowSymLinks path permission

    - by apast
    Hi, I'm configuring my development environment with a basic Apache HTTPd configuration. But, to avoid a common problem, I want to map my test URL to my development folder. I'm using Ubuntu.

    My development path is located under the following example path:

        /home/myusername/myworkspace/hptargetpath/src/pages

    Consider the following symbolic link mapping:

        # ls -l /opt/share/www/mydevelopmentrootpath
        lrwxrwxrwx 1 root root 77 2011-02-13 18:53 /opt/share/www/mydevelopmentrootpath -> /home/myusername/myworkspace/hptargetpath/src/pages

    With this folder mapping, I configured Apache HTTPd as follows:

        <VirtualHost *:*>
            ServerName local.server.com
            ServerAdmin [email protected]
            DirectoryIndex index.html
            DocumentRoot /opt/share/www/mydevelopmentrootpath
            <Directory /opt/share/www/mydevelopmentrootpath/ >
                Options +Indexes
                Options +FollowSymLinks
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    But I'm receiving a 403 Forbidden error when I try to access index.html at http://local.server.com/index.html:

        403 Forbidden
        You don't have permission to access /index.html on this server.

    In the httpd debug log, I found the following message:

        [Sun Feb 13 19:34:47 2011] [error] [client 127.0.1.1] Symbolic link not allowed or link target not accessible: /opt/share/www/mydevelopmentrootpath

    I'm thinking that this problem is generated by some path permission — not a direct permission on the directory, but on some intermediate directory in the path. There's a directive in httpd core Options, SymLinksIfOwnerMatch ("The server will only follow symbolic links for which the target file or directory is owned by the same user id as the link"), but I tested it without effect.

    Can somebody help me? I think this is a trivial configuration for a development environment. Best regards, And Past
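    A likely cause, assuming the Apache worker runs as www-data (the Ubuntu default): every directory along the link target's path needs the execute (search) bit for that user, and home directories often lack it. A minimal sketch of a check and a fix:

        # show the permission bits on every component of the target path
        namei -m /home/myusername/myworkspace/hptargetpath/src/pages

        # grant search permission on the home directory (the usual culprit)
        chmod o+x /home/myusername

    Whether o+x on the home directory is acceptable is a judgment call; an alternative is moving the tree outside /home.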

    Read the article

  • How do I Install Intermediate Certificates (in AWS)?

    - by getmizanur
    I have installed a private key (PEM encoded) and a public key certificate (PEM encoded) on an Amazon Load Balancer. However, when I check the SSL with a site test tool, I get the following error:

        Error while checking the SSL Certificate!!
        Unable to get the local issuer of the certificate. The issuer of a locally
        looked up certificate could not be found. Normally this indicates that not
        all intermediate certificates are installed on the server.

    I converted the crt file to PEM using these commands from this tutorial:

        openssl x509 -in input.crt -out input.der -outform DER
        openssl x509 -in input.der -inform DER -out output.pem -outform PEM

    During setup of the Amazon Load Balancer, the only option I left out was the certificate chain (PEM encoded); however, this was optional. Could this be the cause of my issue? And if so, how do I create the certificate chain?

    UPDATE: If you make a request to VeriSign, they will give you a certificate chain. This chain includes the public crt, the intermediate crt and the root crt. Make sure to remove the public crt from your certificate chain (it is the topmost certificate) before adding it to the certificate chain box of your Amazon Load Balancer.

    If you are making HTTPS requests from an Android app, the above instruction may not work on older Android OS versions such as 2.1 and 2.2. To make it work on older Android OS: go here, click on the "retail ssl" tab, then click on "secure site" "CA Bundle for Apache Server", and copy and paste those intermediate certs into the certificate chain box. Just in case you have not found it, here is the direct link. If you are using GeoTrust certificates, the solution is much the same for Android devices; however, you need to copy and paste their intermediate certs for Android.
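    A minimal sketch of assembling the chain, assuming the CA supplied its intermediate and root certificates already PEM-encoded as intermediate.pem and root.pem (placeholder names for whatever was actually delivered):

        # chain order: issuing intermediate first, root last; the site certificate
        # itself is NOT included in the chain
        cat intermediate.pem root.pem > chain.pem

        # sanity check: verify the site certificate (output.pem) against the chain
        openssl verify -CAfile chain.pem output.pem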

    Read the article

  • Can't seem to get chassis fans running

    - by TK Kocheran
    I've got an ASUS ROG Maximus V Extreme and I'm trying to connect my fans to the chassis-fan pins so they run under motherboard control. I know for sure that my fans work: when I test them through a Molex connector, they all happily power on.

    The original post includes photos of two of the chassis fan connectors (there are 3-4) and of an adapter cable that came with either the motherboard or the PSU — I can't remember which. I've never seen one of these strange cables before.

    All I know is that if I plug the 4-pin mobo connector into either of these fan plugs, the fans don't come on and don't show up in the BIOS. (The motherboard has a crazy awesome UEFI BIOS and shows you whether it sees the fans.) If I plug the 4-pin connector into the mobo and the other side into the PSU, I can't POST. If I plug in the PSU connector without the mobo connector, the fans come on.

    What could I be doing wrong here? Is it a problem with the cable I'm using? Is there something I may have missed in the build?

    Read the article

  • NetBackup's bplist doesn't get user/group info for Windows files

    - by Gnustavo
    I'm trying to get information about storage consumption from NetBackup's bplist output. I'm running NBU 6.0MP5 on a RHEL 3 server. The server is backing up several Solaris, Linux, and Windows machines.

    When I use bplist to get information about files backed up on any UNIX machine, I get something like this:

        # bplist -C unixclient -R 99 -l -s 01/28/2006 -e 01/29/2006 /
        drwxr-xr-x test ccase     0 Nov 16 09:28 /l/home2/test/
        -rw------- test ccase  4737 Jan 06 17:54 /l/home2/test/.bash_history
        -rw-rw-r-- test ccase   104 Nov 11  2004 /l/home2/test/.bashrc

    However, when I use it to list files backed up on any Windows client, I can't get the user and group information; they both always appear as 'root'. Like this:

        # bplist -C winclient -t 13 -R 99 -l -s 02/20/2006 /
        drwx------ root root     0 Feb 20 14:26 /C/temp/
        -rwx------ root root    41 Feb 20 14:26 /C/temp/asdf.txt
        drwx------ root root     0 May 25  2004 /C/temp/CTRMNGR/

    Does anyone know why bplist doesn't show the correct user/group for Windows files? If it can't, is there a way to get that information using another command? Thanks. Gustavo.

    Read the article

  • Where is '/host' declared for mount in Wubi (Ubuntu 9.10)?

    - by Pedro
    Hi! I'm using Wubi (Ubuntu 9.10), and I couldn't find where the '/host' mountpoint is declared for mounting. There's no entry in fstab, but it's listed in /proc/mounts and mounted at boot time. Any ideas?

        pedroel@ubuntu:~$ cat /proc/mounts
        rootfs / rootfs rw 0 0
        none /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
        none /proc proc rw,nosuid,nodev,noexec,relatime 0 0
        udev /dev tmpfs rw,relatime,mode=755 0 0
        /dev/sda1 /host fuseblk rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other,blksize=4096 0 0
        /dev/loop0 / ext4 rw,relatime,errors=remount-ro,barrier=1,data=ordered 0 0
        none /sys/kernel/security securityfs rw,relatime 0 0
        none /sys/fs/fuse/connections fusectl rw,relatime 0 0
        none /sys/kernel/debug debugfs rw,relatime 0 0
        none /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
        none /dev/shm tmpfs rw,nosuid,nodev,relatime 0 0
        none /var/run tmpfs rw,nosuid,relatime,mode=755 0 0
        none /var/lock tmpfs rw,nosuid,nodev,noexec,relatime 0 0
        none /lib/init/rw tmpfs rw,nosuid,relatime,mode=755 0 0
        /dev/loop1 /home/pedroel/Downloads ext4 rw,relatime,errors=remount-ro,barrier=1,data=ordered 0 0
        binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,nosuid,nodev,noexec,relatime 0 0
        gvfs-fuse-daemon /home/pedroel/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0
        /dev/mapper/isw_efhafcifi_RAID_Volume01 /media/RAID_D fuseblk rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096 0 0

        pedroel@ubuntu:~$ cat /etc/fstab
        # /etc/fstab: static file system information.
        #
        # Use 'blkid -o value -s UUID' to print the universally unique identifier
        # for a device; this may be used with UUID= as a more robust way to name
        # devices that works even if disks are added and removed. See fstab(5).
        #
        proc                           /proc                    proc  defaults                0 0
        /host/ubuntu/disks/root.disk   /                        ext4  loop,errors=remount-ro  0 1
        /host/ubuntu/disks/pedro.disk  /home/pedroel/Downloads  ext4  loop,errors=remount-ro  0 1
        /host/ubuntu/disks/swap.disk   none                     swap  loop,sw                 0 0
        /dev/fd0                       /media/floppy0           auto  rw,user,noauto,exec,utf8 0 0

    Thanks in advance, Pedro
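    For what it's worth, in a Wubi install the /host mount has to exist before the root filesystem can be loop-mounted from root.disk, so it cannot appear in fstab: the initramfs (via Wubi's lupin scripts) mounts the host NTFS partition first. A hedged way to confirm this on a given system — the script names are an assumption:

        # list the initramfs contents and look for the lupin boot scripts
        zcat /boot/initrd.img-$(uname -r) | cpio -it | grep -i lupin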

    Read the article

  • Can't access one directory via HTTPS + public FQDN

    - by Justin James
    Hello — I have the strangest IIS error that I've ever seen in my life. I have an application/directory on an IIS server that throws an error 500 when accessing ANY of the content in it, including HTML documents, when accessed via HTTPS AND the machine's FQDN. When I access it with "localhost" it works fine. When I added a bogus entry for the NIC's IP in the hosts file, it worked fine. When I access it with the machine's name over HTTP, it works fine.

    Here's a chart (the machine's name is "lofn.titaniumcrowbar.com"):

        http  - lofn.titaniumcrowbar.com: works
        https - lofn.titaniumcrowbar.com: broken
        https - localhost: works
        https - temp.titaniumcrowbar.com (put into hosts file): works

    I set up tracing, and I got some useless information: "The I/O operation has been aborted because of either a thread exit or an application request. (0x800703e3)" This would make sense, except it happens when pulling up static content. While the directory may be an "application", all the content in it is static.

    Any/all suggestions, no matter how strange, are VERY appreciated. Thanks! J.Ja

    Read the article

  • Need help trying to diagnose Symmetrix SAN performance issues

    - by arcain
    I am helping to benchmark hardware for a new SQL Server instance, and the volume presented to the OS for the data files is carved from a set of spindles on a Symmetrix SAN. The server has yet to have SQL Server installed, so the only activity on the box is our benchmarking.

    Now, our storage engineers say that this volume and its resources are dedicated to our new server (I don't have access to see the actual SAN config); however, the performance benchmarks are troubling. For example, the numbers look good until suddenly, and randomly, we see wait times of 100 seconds in our IO benchmarking tool, and disk queue lengths of 255 in perfmon.

    This SAN has an 8 GB cache, and there are other applications besides ours that use the SAN. I'm wondering if (even though the spindles for our volumes should be dedicated to us) the cache may be getting hammered during the performance testing, or perhaps the spindles our volumes are on aren't really dedicated to us.

    We're not getting much traction from our storage engineers in helping us track down the problem, so if anybody has experience diagnosing a problem like this and would like to share insights and troubleshooting methodologies, I'd appreciate it.

    Read the article

  • How do you use VIM to edit tabular data (tables)? Specifically, BIND (named) DNS db files.

    - by Richard Bronosky
    I'm usually a purist when it comes to vimming. I don't like remapping keys or learning to rely on a bunch of plugins; I like to feel just as powerful on foreign boxen as I do on my own dev box. I do, however, believe in syntax files. Even though the solution may not be a syntax file (bindzone.vim is what I use), I want this badly enough to do whatever it takes.

    I regularly view or edit tab-delimited (or comma-delimited, but that would be a bonus) data. I hate having to set my tabstop to some ridiculous number in order to have everything line up. Example: the columns in a BIND zone file are roughly 40+, 6, 2, 5, and 15+ characters wide. So even though I could view the file on a single screen, if I set ts=40 I cannot. I have been searching for a "dynamic tab size" solution for years, but no luck.

    I hate that my only good way of editing or even visualizing tabular data is to scp it to my workstation and open it in Open Office. There has to be a better way.
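    One possible approach, sketched here without claiming it is the canonical answer: pipe the buffer through column(1) to pad fields into visually aligned columns, and undo when done viewing.

        " align whitespace-delimited fields into padded columns (needs column(1) on the box)
        :%!column -t

        " for comma-delimited data, name the separator explicitly
        :%!column -t -s,

        " press u to restore the original layout afterwards

    This rewrites the buffer rather than just the display, so it suits viewing more than round-trip editing; newer Vims also offer the 'vartabstop' option for per-column tab widths, though it is not available on older installations.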

    Read the article

  • Five stars of open data - example and review

    - by Joe
    (There may be a more suited SE site for this question, so feel free to shift it.) I have some data I'd like to make open to the public: a synthesis of related data retrieved from freedom of information requests over the last year. The data itself is at http://www.cs.rhul.ac.uk/home/joseph/domesday/Domesday-Scotland.csv or, for fans of Excel, at http://www.cs.rhul.ac.uk/home/joseph/domesday/Domesday-Scotland.xlsx . It's no more than a table with about five columns.

    I'd like to make this properly open data, so I was looking at the 5-star deployment scheme for Open Data. Much of it is fine, but I get confused towards the end, and I could do with an explanation from people who know the answers. To achieve the star levels I need to:

      • "make your stuff available on the Web (whatever format) under an open license" — trivial; all I have to do is put up notes on the page giving the provenance of the data.
      • "make it available as structured data (e.g., Excel instead of image scan of a table)" — done.
      • "use non-proprietary formats (e.g., CSV instead of Excel)" — done.
      • "use URIs to identify things, so that people can point at your stuff" — this is where I start to get a bit hazy: does this mean there should be a URI for every line in the table?
      • "link your data to other data to provide context" — this isn't massively clear to me either: does this mean giving the provenance of the data? One column of the data I've put out is a link to where the data came from; is that the sort of thing we're looking at?

    Any and all information and answers welcome. EDIT: or if anyone wants to recommend a place, SE or otherwise, to ask the question, that would be cool...
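    On the fourth star, the usual reading (not something the scheme spells out for this particular dataset) is that each record gets a stable, dereferenceable identifier. Two hypothetical forms, both illustrative assumptions rather than existing identifiers:

        # a fragment identifier into the published CSV (RFC 7111 style)
        http://www.cs.rhul.ac.uk/home/joseph/domesday/Domesday-Scotland.csv#row=42

        # or a minted URI per record that the publisher commits to keeping stable
        http://example.org/domesday/record/42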

    Read the article

  • How do I stop VLC from stealing my volume buttons?

    - by MGOwen
    When I press the volume buttons on my laptop, usually the system volume is changed. However, when I do this with VLC running, it "steals" the presses and adjusts its own volume; the system volume is also changed. I can't find any way to turn this off in VLC. Does anyone know?

    Update — some more details I should have included originally:

        VLC version: 1.1.4 (and a few previous releases, back to about 1.1.0 or so, I think)
        OS:          Windows Vista Pro 32-bit
        Hardware:    Dell 1720 laptop (the volume buttons are little buttons on the front of
                     the unit; they may work something like "media" keyboard volume buttons)

    Update: The buttons seem to map to Ctrl+Alt+B and Ctrl+Alt+C (according to the shortcut key box in Windows shortcut properties), but VLC's advanced preferences hotkeys screen doesn't list these as the keys it uses for volume. I changed it so there are no volume hotkeys in VLC's settings — no luck, it still steals the presses and adjusts the volume. Also, pressing Ctrl+Alt+B or C doesn't change my system volume, so who knows what Windows or VLC are doing to recognise those volume buttons. :(

    Read the article

  • What's the issue with this Samba setup?

    - by Dan Nestor
    I asked this on Super User, but I realized that may be the wrong place. I am duplicating the question here; I hope this is allowed.

    I am trying to share a directory through Samba. In smb.conf I have the following:

        [global]
            workgroup = WORKGROUP
            security = user
            passdb backend = tdbsam
            netbios name = <hostname>

        [share_name]
            path = </path/to/share>
            writable = yes
            valid users = <username>

    <username>, the user in question, is the owner of the directory /path/to/share. Permissions on the directory are 755. If I try to connect from another computer, the connection attempt is unsuccessful (I assume it's an authentication error, because it re-prompts me for the password). The client requires a domain name for authentication; I tried both WORKGROUP and the hostname/netbios name of the Samba server. The Samba logs on the server have no mention of the failed connection attempt. The firewall on the server is down. What am I doing wrong?

    Update: I have since run smbpasswd -a <username> and now I am getting a clear error message: "not enough permissions to view contents of share".
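    Given the update, a couple of follow-up checks worth running on the server (standard Samba tooling, nothing specific to this setup):

        # validate smb.conf syntax and show the effective share definition
        testparm -s

        # test authentication and listing locally, bypassing the network path
        smbclient //localhost/share_name -U <username>

        # confirm the permission bits Samba will see along the whole path
        namei -m /path/to/share

    If an intermediate directory in /path/to/share lacks the execute bit for others, that alone can produce a "not enough permissions" error; that is an assumption to verify, not a diagnosis from the config above.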

    Read the article

  • Android webbrowser returns code 500 for webpage on Nginx webserver

    - by Paxxil
    Hey! I've run into some very weird behavior of the web browser on Android mobile phones (I've tried HTC Wildfire and HTC Desire phones). I have a web server running Nginx v0.8.54. When I try to open a web page on the phone, it shows me the error:

        The requested item could not be loaded! (Status code: 500)

    BUT it only happens when I am requesting the page over the mobile network. Over WiFi it works just fine... but there is more: if I stop Nginx and start Apache instead, it works just fine over both the mobile network and WiFi. I've also tried another mobile network, and the behavior is the same.

    Some server stats:

      • Firewall is off
      • SELinux is off
      • The web page (served by Nginx) opens normally in any other browser (IE, FF, Opera, Chrome, Safari) on a laptop or PC
      • Nothing in Nginx's error.log
      • This is the only entry in access.log when the page is requested:

        xxx.xxx.xxx.xxx - - [17/Mar/2011:11:19:49 -0500] 200 "GET / HTTP/1.1" 27405 "-" "Mozilla/5.0 (Linux; U; Android 2.2; en-gb; Desire_A8181 Build/FRF91) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1" "-"

      • index.html has only a "Hello World" string in it; there is no fishy JavaScript or anything else

    If anyone has any idea what may be going on, I would really appreciate it if you let me know. Thanks!

    EDIT: I forgot to mention that the page opens OK on an iPhone and a Nokia.

    Read the article

  • Why do hosts prefer Linux to Windows Server?

    - by iconiK
    So far I see a HUGE majority of hosts providing only Linux shared hosting, offering Windows only on VPS plans (or even only on dedicated servers). Why is that?

    While Windows is a lot more expensive than Linux (though it depends on a lot of factors, not just the initial and support license cost), it also provides ASP.NET, IIS and, of course, Microsoft SQL Server. I know in the past it might have been because cPanel was Linux-only, but now it has a Windows version. So why is Linux still predominant in shared hosting?

    PHP works on both systems. IIS can be (and probably is) faster. MySQL runs on both systems as well. cPanel has a Windows version. Python, Perl, and Ruby all run on Windows too. You even have MS SQL Server Express, which I find superior to MySQL in both speed and features. Access is there for low-usage requirements, as is SQLite (which is so great for quick small stuff). And with PowerShell you have a good alternative to the Unix shell.

    EDIT: I am looking for common reasons; I realize each hosting company (and/or its clients) may have different needs. This becomes very important when you get to VPS or cloud offerings, which give you a full operating system to use.

    Read the article

  • How to play 24 fps video smoothly on a 60Hz display?

    - by netvope
    I use mpc-hc to play videos on Win7 x64. With the default settings (#1), video playback is great most of the time, but on panning shots playback is not smooth. I stepped through the video frame by frame and found that the panning movement itself is smooth (e.g. each frame shifts horizontally by 10 pixels), so the problem is how the 23.976 fps video is interpolated to 60 Hz. The judder looks like what would be caused by a "2:3 pulldown", where the frames are played unevenly, like: frame 1, 1, 2, 2, 2, 3, 3, 4, 4, 4, etc. (#2)

    Using "optimal renderer settings" (#3) instead of the default disables the Aero theme and causes tearing. Setting my LCD display to 50 Hz may have improved the judder slightly (but I can't really tell). My display does not support 24 Hz or 48 Hz, and forcing them in the Nvidia control panel gives a blurry screen. I've tried other video players (VLC and KMPlayer), the ReClock DirectShow filter, video files from different sources (#4), turning DXVA on and off, and a computer with a different GPU, but the judder in playback is similar. None of them solved the problem.

    So, how can I play 23.976 or 24 fps video smoothly on a 60 Hz display? I think a video player could make the video smoother by doing linear interpolation, such as:

        1. 100% frame 1
        2.  60% frame 1 + 40% frame 2
        3.  20% frame 1 + 80% frame 2
        4.  80% frame 2 + 20% frame 3
        5.  40% frame 2 + 60% frame 3
        6. 100% frame 3
        7.  60% frame 3 + 40% frame 4
        ... etc

    Can any existing video player do this?

    Footnotes:
    (#1) Video renderer: EVR Custom Pres.
    (#2) This example converts a 24 fps video into 30 fps.
    (#3) View > Renderer settings > Reset > Reset to optimal renderer settings
    (#4) The files I have are all H.264 mkv files, but I don't think the file format/encoding matters.

    Read the article

  • How do I create a read only MySQL user for backup purposes with mysqldump?

    - by stickmangumby
    I'm using the automysqlbackup script to dump my MySQL databases, but I want a read-only user to do this with, so that I'm not storing my root database password in a plaintext file. I've created a user like so:

        grant select, lock tables on *.* to 'username'@'localhost' identified by 'password';

    When I run mysqldump (either through automysqlbackup or directly), I get the following warning:

        mysqldump: Got error: 1044: Access denied for user 'username'@'localhost' to database 'information_schema' when using LOCK TABLES

    Am I doing it wrong? Do I need additional grants for my read-only user? Or can only root lock the information_schema tables? What's going on?

    Edit: GAH, and now it works. I may not have run FLUSH PRIVILEGES previously. As an aside, how often does this occur automatically?

    Edit: No, it doesn't work. Running mysqldump -u username -p --all-databases > dump.sql manually doesn't generate an error, but doesn't dump information_schema. automysqlbackup does raise an error.
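    One possible workaround, assuming the tables are InnoDB (this sidesteps LOCK TABLES entirely rather than granting more privileges):

        # --single-transaction takes a consistent snapshot instead of locking tables
        mysqldump -u username -p --single-transaction --all-databases > dump.sql

    For automysqlbackup, the equivalent would be adding --single-transaction to the mysqldump options in its config file; whether that option is exposed depends on the script version, so treat that as an assumption. Note also that information_schema is a virtual schema, and mysqldump versions of this era skip it by design.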

    Read the article

  • ADSL Modem/Router sometimes hands out incorrect IP addresses

    - by Peter Keevill
    My setup is as follows: the main ADSL modem/router (switch) is configured as a DHCP server with the address range 192.168.0.25-60. The office machines are configured with fixed IPs (not in the same address pool, of course) and hard-wired to this router. A wireless access point (router) is connected to provide Internet access for guests in a separate area. This router is NOT configured as a DHCP server, and wireless authentication is turned off. IP address lease times are set to 4 hours.

    Sometimes guests are able to connect to the wireless access point but are not given a valid IP; they get 169.x.x.x addresses. Rebooting their machines does not resolve the problem. The only way to resolve it is to reboot the main ADSL router, which is often frustrating for other users who are successfully connected with a valid IP and default gateway.

    The problem seems to occur more frequently with Apple/Mac guests, although it also sometimes happens with Windows machines. I personally use Ubuntu on my laptop and thus far have never had any problem connecting and getting a valid IP address in the guest area.

    One further point of note, which may give a clue: certain guests (always Apple/Mac) get lease times of 90 days. However, this does not exhaust the pool of available addresses, and of course rebooting the router clears them until the next time those guests log in.

    Read the article

  • Setting cfengine3 class based on command output

    - by gnomie
    This question is very similar to "How can I use the output of a command in cfengine3", but the answer does not apply in my case, I believe. I want to update a git repository via "git pull" and, based on whether that led to changes, trigger some follow-up action. Simplified: if there were something like "match output and set class" via some body if_output_matches, I would want to use something like this:

        bundle agent updateRepo
        {
        commands:
            "/usr/bin/git pull"
                contain => setuidgiddir_sh("$(globals.user)","$(globals.group)","$(target)"),
                classes => if_output_matches("Already up-to-date.","no_update");

        reports:
            no_update::
                "nothing updated";
        }

        body contain setuidgiddir_sh(owner,group,folder)
        {
            exec_owner => "$(owner)";
            exec_group => "$(group)";
            useshell   => "true";
            chdir      => "$(folder)";
        }

    So, is it possible to use the output of a (possibly expensive) command and base some decision on that? The execresult function is no good choice for me, because a) the pull may become expensive at times (not recommended, following the cfengine3 reference) and b) it does not allow specifying user, group, or working dir — which is important in my case, since the repository is in user space and not owned by root.

    Read the article

  • Word Document Turns to Read-Only

    - by Psycho Bob
    I am running into an issue with a user whose Word document somehow turns itself into read-only. The user is using Word 2003 and is accessing a document on a Server 2008 share. The document starts out as a normal, editable document (the user has Full Control permissions), and the user is able to save and do the 'normal' things you would do to a document. However, after a couple of saves, the document turns read-only (according to the title bar), even though the read-only attribute is not checked in the document's properties.

    Here is some additional information about the situation:

      • The user has approximately 5-8 Word documents open at a time
      • The user saves the document frequently (sometimes as often as once per minute)
      • Once the document is closed, it will open as a normal document if reopened
      • When the document does turn read-only, the user does a "Save As" and saves it as FILENAME #, where # is an increment of how many times this has happened (some documents are up to their 30th iteration)

    I understand that there is probably some room for user education here, and that they could just be copying the read-only document to a new one, closing and reopening the original, then copying all the information back. However, I would like to get to the root cause of the problem and try to stop it from happening in the first place.

    UPDATE: Apparently the reinstall did not fix the issue. I researched the issue a bit more and found that disabling the background save may take care of it, but I haven't had a chance to try it yet. Does anyone else have any other ideas?

    Read the article
