Search Results

Search found 11982 results on 480 pages for 'non greedy'.

Page 79/480

  • How can I make non-anti-aliased text look good in Firefox on Mac OS X?

    - by cosmic.osmo
    After being a Windows user for the last 10 years, I got a MacBook Pro, which I'm configuring to my liking. I find small anti-aliased text blurry and hard to read, so I typically disable anti-aliasing. I've found the settings in the General control panel, and used TinkerTool to increase the anti-aliasing threshold size to 18pt. Mac OS X and other applications appear to respect these settings. The problem appears when I use Firefox: by default, it's configured to ignore the Mac OS anti-aliasing settings. This can be changed by going to about:config and setting gfx.use_text_smoothing_setting = true (the default is false). However, even with this setting, Firefox still appears to render fonts under the assumption that they will be anti-aliased, which results in very odd and uneven spacing, as the example screenshots (with and without anti-aliasing) show; pay attention to the placement of the "s" in "Disable". How can I configure Firefox to both not use anti-aliasing and use correct font spacing? I'm using Mac OS X Lion and Firefox 5.
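
    Since that about:config entry is just a boolean preference, it can also be pinned in a user.js file in the Firefox profile directory so it survives upgrades. A minimal sketch, using the pref name from the question (the profile directory path varies per installation):

        // user.js: force Firefox to honor the OS-wide text smoothing settings
        user_pref("gfx.use_text_smoothing_setting", true);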

    Read the article

  • What is the cheapest non-colocation way to serve about 10 static files at a rate of 100 megabits per second?

    - by Mark Maunder
    I've looked at Amazon S3, and it costs roughly $4,746 per month for 100 megabits/s, which translates into 31,640 GB of data transferred at a rate of $0.15 per GB. I haven't found a cheaper "cloud" option. I'm curious whether there's any other cloud hosting option out there cheaper than S3. Uptime is not an issue because I can build failover for most things into the browser, e.g. I can use JavaScript to say "if the image didn't load, then go to this other URL instead." FYI, I'm currently using a colocation facility which is about 30% cheaper than S3, and I'm familiar with colo prices, so this question is really about "cloud" services, by which I mean services where I don't have to worry about the infrastructure.
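
    For reference, the arithmetic behind those figures checks out for a 30-day month of sustained 100 Mbit/s (using 1 GB = 1024 MB):

        100 Mbit/s / 8                    = 12.5 MB/s
        12.5 MB/s * 86,400 s/day * 30 days = 32,400,000 MB ~= 31,640 GB
        31,640 GB * $0.15/GB              ~= $4,746/month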

    Read the article

  • Hiding an HTTP auth realm by sending 404 to unknown IPs?

    - by zhenech
    I have Apache (2.2) serving a web app on example.com. That web app has a debug page reachable via example.com/debug. /debug is currently protected with HTTP basic auth. As only a very small user base has access to the debug page, I would like to hide it based on IP address and return 404 to clients not accessing from our VPN. Serving a 404 based on IP address alone is easy and is described in http://serverfault.com/a/13071. But as soon as I add authentication, the users see a 401 instead of a 404. Basically, what I need is:

        if ($REMOTE_ADDR ~ 10.11.12.*):
            do_basic_auth  (aka return 401)
        else:
            return 404
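
    One way to get that behavior in Apache 2.2 is to let mod_rewrite answer before the authentication phase runs, since URL rewriting happens earlier in the request cycle than auth. A sketch, assuming mod_rewrite is enabled and reusing the VPN range from the pseudocode (the auth file path is hypothetical):

        # in the vhost for example.com: non-VPN clients get a 404
        # before basic auth ever issues its 401 challenge
        RewriteEngine On
        RewriteCond %{REMOTE_ADDR} !^10\.11\.12\.
        RewriteRule ^/debug - [R=404,L]

        <Location /debug>
            AuthType Basic
            AuthName "debug"
            AuthUserFile /etc/apache2/htpasswd-debug
            Require valid-user
        </Location>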

    Read the article

  • Is it possible to temporarily disable non-global zones?

    - by Gary
    I frequently need to install a package in the global zone for a quick test on a development box. When there are multiple prompts for one package, I have to answer them for each zone. If a zone is not running, then I need to wait for it to start up, answer the prompts, and so on. This is particularly annoying when I'm getting packages from http://www.sunfreeware.com and using the pkg-get utility, which nicely pulls in dependencies for you. Can I disable the zones temporarily? I haven't found a way to do this. Thanks.
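
    The zone tooling can at least stop the non-global zones and keep them from coming back at boot, which may be close enough to "temporarily disabled". A sketch with a hypothetical zone name (halting is abrupt, so quiesce anything important inside the zone first):

        # list zones and their current states
        zoneadm list -cv
        # stop a running zone
        zoneadm -z myzone halt
        # keep it from starting again at the next reboot
        zonecfg -z myzone set autoboot=false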

    Read the article

  • College network - can I point non-domain student computers to our SUS server?

    - by Joel Coel
    Since I started here 3 months ago, one of the things that's really bothered me about the way this network is set up is something that shows up on the daily bandwidth consumption report. I get a list of top-visited sites by hits and by size, and invariably the top site (to the point that it's bigger than all the other top sites combined) is au.download.windowsupdate.com. We're pulling in ~30 GB/day in Windows updates. This is every day, not just after a patch Tuesday. After a patch day, it jumps closer to 40 GB for a couple of days. The key here is that almost none of it is from machines that I'm responsible for. My machines are for the most part fully patched, and when they're not, they'll pull from a SUS server, so new updates are downloaded only once. It used to be closer to 50 GB/day because most of the machines in our computer labs use DeepFreeze and weren't applying updates correctly, but that's fixed now. So the problem is definitely student-owned machines in the dorms, some of which are re-downloading the same updates in the background each day, over and over. I'd love to have these machines start pulling from our SUS server. Then, even if they don't ever actually install the updates, at least they're not leeching bandwidth from our public internet connection. Any ideas on how to resolve the situation?
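
    For machines that aren't domain members, the Windows Update client can still be pointed at an internal update server through the local policy registry keys, no Group Policy required. A sketch of a .reg file students could be asked to apply, with a hypothetical server URL:

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate]
        "WUServer"="http://sus.example.edu"
        "WUStatusServer"="http://sus.example.edu"

        [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU]
        "UseWUServer"=dword:00000001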

    Read the article

  • Method to integrate PowerShell scripts with a non-Windows workflow?

    - by Matt Simmons
    I love the smell of new machines in the morning. I'm automating a machine-creation workflow that involves several separate systems across my infrastructure, some of which involve 15-year-old Perl scripts on Solaris hosts, PXE-booting Linux systems, and PowerShell on Windows Server 2008. I can script each of the individual parts, and integrating the Linux and Unix automation is fairly straightforward, but I'm at a loss as to how to reliably tie the PowerShell scripts to the rest of the processes. I would prefer that the process begin on a Linux host, since I imagine it will end up as a web application living on an Apache server, but if it needs to begin on Windows, I am hesitantly okay with that. I would ideally like something along the lines of psexec for Linux to run against Windows, but the answer in that direction appears to be Cygwin, and as much as I appreciate all the hard work they put in, it has never felt right, if you know what I mean. It's great for a desktop and gives a lot of functionality, but I feel like Windows servers should be treated like Windows servers and not bastardized Unix machines (which, incidentally, is my argument against OS X servers, too, and they're actually Unix). Anyway, I don't want to go with Cygwin unless that's the last and only option. So I guess what I'm asking is whether there is a way to execute jobs on Windows machines from Linux. Without Cygwin. I'm open to ideas and suggestions, including "Look idiot, everyone uses Cygwin, so suck it up and deal with it". Thanks in advance!
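
    One Cygwin-free route is winexe, an open-source psexec-style client that runs on Linux and executes commands on Windows over SMB. A sketch with hypothetical host, account, and script names (winexe must be installed on the Linux side, and the account needs admin rights on the target):

        # run a PowerShell script on a Windows host from a Linux shell
        winexe -U 'MYDOMAIN\deploy' //winserver01 \
            'powershell.exe -ExecutionPolicy Bypass -File C:\scripts\new-machine.ps1'

    WinRM is the other direction worth investigating, since it's the native Windows remoting protocol and has non-Windows client implementations.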

    Read the article

  • What methods are available for updating a non-Internet-connected VMware ESXi host?

    - by romandas
    I have a stand-alone installation of VMware vSphere Essentials, with a vCenter Server and 3 ESXi 4.0 host servers. The environment is intended to remain a stand-alone network, with the exception that I can "float" a workstation or server between the 'net and the VMware network for patches and maintenance. With other installations, where the Internet is available, I've used the vSphere Host Update utility to connect to VMware and then apply the patches to the ESXi hosts. My problem is that this utility does not seem to function unless it can connect to both VMware and the ESXi host at the same time, as the scan-for-patches function will not scan the server without first connecting to VMware's site to sync its repository. Even if I sync it, disconnect from the 'net, and connect to the VMware network, it still won't scan hosts for required patches: it prompts for syncing with VMware, and if you click No to syncing, the scan does not occur. Does anyone know of other options for updating the ESXi hosts in some automated fashion? I believe I can manually pull down required patches and apply them, but this will not scale well, and in the future I'm sure I'll want something a bit more scalable.
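
    For ESXi 4.0, the offline-bundle route can be driven from the vSphere CLI without the Host Update utility ever touching the Internet: download the patch bundle zip on the connected side, carry it over, and apply it. A sketch, assuming the vSphere CLI is installed and using a hypothetical bundle name (the host should be in maintenance mode for the install):

        # check the host against the bundle's bulletins
        vihostupdate.pl --server esxi01 --username root \
            --scan --bundle /patches/ESXi400-201001001.zip
        # install the bundle
        vihostupdate.pl --server esxi01 --username root \
            --install --bundle /patches/ESXi400-201001001.zip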

    Read the article

  • Why am I seeing zero errors in non-ECC RAM?

    - by Alexander Shcheblikin
    According to sources, memory errors are a very probable event: some say the probability of a DRAM error is 95% in just 3 days of operation of a computer with just 4 GB of RAM; others say 32% of servers experience at least one error in a month, with 8% of DIMMs being at fault. Contrary to those horrors, in my more than 10 years of personal computer use I have seen exactly zero memory errors. I admit I never paid special attention to the subject. However, I have ventured multi-hour memtest86 runs a couple of times and never seen an error either. Some factors that in my opinion should aggravate memory problems: I build my computers out of the most "bulk commodity" parts (mainstream budget motherboards and the next-to-cheapest memory); I usually max out the technology available, e.g. in the times of 32-bit OSes I used 4 GB of RAM, and with current desktop CPUs and the newer 64-bit OSes I use 32 GB; and memory usage is moderately heavy, with lots of virtual machines up and running small and big tasks 24/7/365. But nevertheless, no memory-related problems ever found! How's that?

    Read the article

  • How can I configure apache2 to use a non-exportable SSL certificate managed by Windows?

    - by Samuel Rossille
    On Windows Server 2008 R2, my IT administrator has installed a certificate using the Windows certificate management tool. The certificate is for *.thedomain.com. He set it up as non-exportable for security reasons: I'm not supposed to be able to put my hands on the certificate itself. This configuration allows me to use the certificate with Microsoft products, but not to walk away with it. Q: Is there a way to configure Apache 2 to use this certificate "the Windows way"?

    Read the article

  • How to determine non-movable files in Windows 7?

    - by David
    Is there a way to determine which unmovable files are preventing the Shrink Volume tool from releasing the full potential free space? Background: I have a 90 GB partition with Windows 7 on it, with 60 GB free. I want to shrink it down to about 40 GB and use the reclaimed 50 GB for a separate data partition. The Shrink Volume tool in Disk Management is only willing to give me 8 GB back. My understanding is that this is because of immovable files. I've followed the instructions found here, which involved disabling hibernation, the page file, System Restore, and the kernel dump, making sure all related files were deleted, and defragmenting. I successfully followed those same instructions before on this same drive, when I partitioned the original 150 GB of space into 90 GB and 60 GB, but I'm not so lucky this time.
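
    For reference, the steps described above map onto a handful of commands from an elevated prompt. A sketch, with the caveat that the wmic page file calls behave differently if the page file was configured manually:

        :: disable hibernation (removes hiberfil.sys)
        powercfg -h off
        :: stop Windows from managing the page file, then remove it
        wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False
        wmic pagefileset delete
        :: consolidate movable files before shrinking
        defrag C: /U /V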

    Read the article

  • How to send Bluetooth audio to non-Bluetooth speakers?

    - by wonsungi
    In short, I am looking for a Bluetooth-to-3.5mm mini stereo converter. What is this type of device called, and what are some of the best models (is there a difference in audio quality or lag)? I wish to connect some speakers (Altec Lansing inMotion IM7), which do not support Bluetooth, to my laptop (Lenovo X301) wirelessly. Currently, I connect my laptop's headphone jack to the AUX jack on my speakers via a mini stereo cable. How do I replace this cable with some type of Bluetooth setup? I am not sure what this Bluetooth device is called. I thought I found something, but it actually does the opposite of what I need (3.5mm mini stereo to Bluetooth). (My OS is Vista Enterprise, if that matters.)

    Read the article

  • Both the nginx and php5-fpm init.d startup scripts are non-functional and return no errors...? But they used to work perfectly

    - by Ollie Treend
    I have been using nginx and php5-fpm on my Ubuntu box for a while now. Everything had been configured and set up correctly, and it ran like a charm. I have been keeping the packages updated and upgraded as usual, but haven't touched the nginx or php5-fpm config files at all (thus I'm pretty sure this isn't my fault...). Basically, I noticed nginx wasn't running as it should be. I ran the command sudo service nginx start, and the script did nothing. The same thing happens when trying to do anything: start, stop, restart, or reload. This also happens for the php5-fpm init script, although all other init scripts seem to be functioning correctly. When trying to start nginx or php5-fpm, this is what happens:

        root@HAL:/etc# service php5-fpm start
        root@HAL:/etc#

    I can't understand what is going wrong. The script isn't returning errors, but similarly it isn't starting the daemon or reporting success as usual. For reference, both installations are from the official nginx and php5-fpm PPAs. The fact that both started doing this at the same time has thrown me, since they are otherwise unrelated packages. I have since purged both sets of packages from my system with apt-get purge ... and also apt-get remove --purge ..., both of which successfully removed the packages, their config files, and their init.d startup scripts. After reinstalling nginx, I now have a functioning startup script again and can start the web server as usual. However, php5-fpm is still experiencing the strange premature exiting of the startup script, and I really can't figure out what's causing it. I have no idea what caused this to occur initially, but I have managed to fix nginx and now need to fix the php5-fpm startup script. If anybody could shed some light on this situation, I would be very grateful! The chances are both these issues are related and were caused by me doing something stupid. But now I need to fix it. This time I was lucky, because these problems are just on my development server. But I have 2 other live servers which are configured in a similar way, and I am worried the same thing will happen to them as well! Has anybody else come across this? Do you have any words of advice? Thank you
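
    When an init script exits silently like that, tracing it usually shows which early check is bailing out. A sketch of the kind of debugging that narrows it down (paths are common Ubuntu defaults and may differ on the PPA build):

        # trace the init script to see exactly where it returns
        sh -x /etc/init.d/php5-fpm start
        # a stale pid file is a frequent culprit for silent no-ops
        ls -l /var/run/php5-fpm.pid
        # test the daemon and its config directly, bypassing the script
        /usr/sbin/php5-fpm -t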

    Read the article

  • "Service Unavailable" when browsing to static HTML page in non-application IIS website on Windows 2003 (possibly SharePoint WSS 2.0 related?)

    - by Jordan Rieger
    Background: My client has an old Pentium III Windows 2003 server whose 16/36 GB disks are dying. On it he has a database-driven web site and email application that needs further customization by a developer (me). First we need to get it working on the new server. The original developer is no longer available to provide a system setup guide, so my client got a tech who imaged the old drives over to the new server and managed to get it booting. But the IIS-driven site no longer works. In fact, it seems that IIS itself does not work. Problem: "Service Unavailable" when attempting to browse from the server itself to the URL for a local web site called "test", which I set up in IIS to serve a single static index.htm file. I did this to isolate the problem and eliminate the client's application from the equation. The site is set up on port 80 with the host header "test.myclientsdomain.com", and I used the etc\hosts file to point that host at the local IP. I know the host entry took effect because I can ping it. When doing an iisreset, I get:

        Attempting start...
        Restart attempt failed.
        IIS Admin Service or a service dependent on IIS Admin is not active.
        It most likely failed to start, which may mean that it's disabled.

    Despite this message, the services all stay in the Started state. The only relevant System event logs I found are:

        Event Type: Error
        Event Source: W3SVC
        Event Category: None
        Event ID: 1002
        Date: 11/4/2012  Time: 11:04:47 PM
        User: N/A
        Computer: ALPHA1
        Description: Application pool 'DefaultAppPool' is being automatically
        disabled due to a series of failures in the process(es) serving that
        application pool.

        Event Type: Error
        Event Source: W3SVC
        Event Category: None
        Event ID: 1039
        Date: 11/4/2012  Time: 11:13:12 PM
        User: N/A
        Computer: ALPHA1
        Description: A process serving application pool 'DefaultAppPool' reported
        a failure. The process id was '5636'. The data field contains the error
        number.
        Data: 0000: 7e 00 07 80

    And one Application event log:

        Event Type: Error
        Event Source: Windows SharePoint Services 2.0
        Event Category: None
        Event ID: 1000
        Date: 11/4/2012  Time: 11:34:04 PM
        User: N/A
        Computer: ALPHA1
        Description: #50070: Unable to connect to the database STS_Config on
        ALPHA2\SharePoint. Check the database connection information and make
        sure that the database server is running.

    That last log tells me the tech may have initially tried to have both the old and the new server running, by renaming the new server from ALPHA1 to ALPHA2. Perhaps SharePoint grabbed onto that change and now can't tell that the machine name has been switched back to the old ALPHA1. But why would SharePoint interfere with a static IIS web site serving a single HTML file? The test site is not even within an application pool (I clicked the Remove button). What I have tried/eliminated: No relevant services seem to be disabled (IIS Admin, WWW Publishing, SharePoint Timer). Giving Full Control to All Users/Everyone on the c:\inetpub\test folder serving my test site. I can connect to and query the local SharePoint config database (ALPHA1\SHAREPOINT\STS_CONFIG) from SSMS. But when I try to do stsadm -o setconfigdb -connect -databaseserver ALPHA1\SHAREPOINT, it tells me "The SharePoint administration port does not exist. Please use stsadm.exe to create it." And when I do that, using the port 9487 specified in the IIS SharePoint Admin site config, it tells me the port is already in use. Needless to say, simply browsing to the admin site gives me a similar error about being unable to reach the config database.
    I didn't want to go further down the SharePoint path, as it may be completely unrelated to my IIS issue, and I don't even know yet if SharePoint is required for this application to work. The app itself is ASP.NET/C#/Silverlight with a little MS Word integration (maybe that's where the SharePoint stuff comes in).
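
    One small decoding note on that 1039 event: the four data bytes are a little-endian 32-bit HRESULT, so they can be read back into a standard Windows error code. A sketch of the conversion (the lookup is the certain part here; what the missing module actually is would still need diagnosing):

        bytes 7e 00 07 80  (little-endian)  ->  0x8007007E
        0x8007007E = HRESULT wrapping Win32 error 126 (ERROR_MOD_NOT_FOUND,
        "The specified module could not be found"), i.e. the worker process
        failed to load some DLL, such as an ISAPI filter carried over in the
        imaged configuration.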

    Read the article

  • How to convert non-key/value Java arguments to applet params? (args like -Xmx64m)

    - by bwizzy
    I'm trying to use xvpviewer (based on TightVNC) to VNC into my VMs running on Citrix XenServer. There are a couple of caveats required around trusting the certificate from XenServer, which I've got working. Essentially, I'm trying to convert the java command below (which works on the command line to launch VncViewer) for use in an applet that can be accessed via an HTML page.

        java -Djavax.net.ssl.trustStore=/tmp/kimo.jks -Xmx64m -jar VncViewer.jar \
            HOST "/console?ref=OpaqueRef:141f4204-2240-4627-69c6-a0c7d9898e6a&session_id=OpaqueRef:91a483c4-bc40-3bb0-121c-93f2f89acc3c" \
            PORT 443 PROXYHOST1 192.168.0.5 PROXYPORT1 443 \
            SocketFactory "HTTPSConnectSocketFactory"

    I know I can put the HOST, PORT, etc. arguments into param tags for the applet, but I'm not sure how to apply the two initial arguments.
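
    With the next-generation Java plugin (6u10 and later), JVM-level flags can be passed through the java_arguments applet parameter. A sketch assuming that plugin generation on the client, reusing values from the command above (whether an unsigned applet is allowed to pass a given -D property through depends on the plugin's security rules):

        <applet archive="VncViewer.jar" code="VncViewer.class" width="800" height="600">
          <!-- JVM arguments go here; ordinary key/value args stay as params -->
          <param name="java_arguments"
                 value="-Xmx64m -Djavax.net.ssl.trustStore=/tmp/kimo.jks">
          <param name="PORT" value="443">
          <param name="PROXYHOST1" value="192.168.0.5">
          <param name="PROXYPORT1" value="443">
          <param name="SocketFactory" value="HTTPSConnectSocketFactory">
        </applet>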

    Read the article

  • How to set up Pegasus Mail in non-admin account?

    - by thursdaysgeek
    I'm using Pegasus Mail for my mail client, and on my old computer (XP Pro), I could log in as the guest or admin account and it would still work. I recently got a new computer, with XP Pro, and I've set the mail client to work fine when I log in to the admin account, but when I log into the guest account, it always wants the SMTP, POP, and other connection information. How do I get it to remember that? I tried making an admin account, setting it up, and then downgrading that account, but that didn't work either. Does anyone even use PMail?

    Read the article

  • Why some "non-profit" hoax and spam are created? [closed]

    - by naxa
    Many spam and hoax messages have no direct link to any ripoff site or similar; they just make sure people spread them ("forward this to at least 10 people or else"). Some of those may be created in good faith; I'm not interested in those. But for the rest, I have long suspected that there is some reason for making them other than making fun of people (without getting much feedback). Why are these created?

    Read the article

  • Let CGI-PHP load a non-default shared library.

    - by ralle
    In Apache 2, I configured PHP as CGI in a virtual host:

        SetEnv PHPRC "/usr/local/php5.3"
        ScriptAlias /php5.3 "/usr/local/php5.3/bin"
        Action application/php5.3 /php5.3/php-cgi
        AddType application/php5.3 .php

    Everything works fine. Now I have some issues with the standard version of the GD library, because it restricts the hinting and anti-aliasing settings I can use for fonts. Therefore I want to modify the GD source and create a new shared library. Since I don't want a modified library in my system, I want only PHP to use that library. My question: how can I change the Apache configuration so that PHP uses this new version of the library? Something like this does not work:

        ScriptAlias /php5.3 "LD_LIBRARY_PATH=/path/to/my/lib:$LD_LIBRARY_PATH /usr/local/php5.3/bin"
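
    Since ScriptAlias only maps a URL prefix to a filesystem path and cannot inject environment variables, one common workaround is to point the Action at a small wrapper script that sets the linker path and execs the real binary. A sketch using the paths from the question (the wrapper name is hypothetical, and the file must be executable):

        #!/bin/sh
        # /usr/local/php5.3/bin/php-wrapper
        # prepend the custom GD build so the dynamic linker finds it first
        LD_LIBRARY_PATH=/path/to/my/lib:$LD_LIBRARY_PATH
        export LD_LIBRARY_PATH
        exec /usr/local/php5.3/bin/php-cgi "$@"

    Then, in the vhost, keep the ScriptAlias but change the handler line to:

        Action application/php5.3 /php5.3/php-wrapper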

    Read the article

  • Server 2012 intermittently fails to respond to pings from a single host, even with the firewall disabled, but responds to non-ICMP requests fine

    - by James Westbury
    This one is kind of weird. I've got the following machines involved:

        DC01    - 10.1.2.42, Server 2012, domain controller & DNS server, physical machine
        nagiosv - 10.1.2.35, CentOS 6.4, Nagios, virtual machine
        CB01    - 10.1.3.81, Ubuntu 12.04 LTS, Couchbase server, virtual machine

    So, I noticed something was wrong while configuring this new Nagios VM: I started seeing DC01's state flapping. I logged into nagiosv when I saw this happening and attempted to ping DC01, both by FQDN and by IP address. Neither worked. I tried pinging the machine from CB01, which is another VM on the same virtual switch/physical NIC as nagiosv, and that worked fine. Pings were still failing from nagiosv at this time. DC01 is also an internal DNS server, so I ran dig google.com from nagiosv, and was able to run a query against DC01 just fine:

        ;; Query time: 1 msec
        ;; SERVER: 10.1.2.42#53(10.1.2.42)
        ;; WHEN: Fri Nov  1 07:53:51 2013
        ;; MSG SIZE  rcvd: 204

    Pings were still failing from nagiosv, though. I can ping from DC01 to nagiosv, and that works, and I can still ping DC01 from other VMs on the same physical NIC, and that works. I should mention at this point that I've disabled the firewall on DC01 for testing purposes, and it doesn't make a damned bit of difference. (Even with the firewall enabled, I have a blanket exception for ICMP from the local subnet, so it shouldn't make a difference, but I figured I should test it anyway.) I loaded up Wireshark on DC01 and pinged it from nagiosv again. What I see is a bunch of echo requests coming in and not a single reply going back out. Filtered results here, showing all ICMP traffic during a 15-second period. A few more bits of info: there are no IP conflicts on the network; MAC addresses on the incoming pings match the MAC on the VM; and there are no duplicate MACs on the network, as far as I can see. I have absolutely no idea why DC01 is failing to respond here. Any ideas?
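
    One check that can pin down flaky single-host ICMP like this is confirming which MAC actually answers ARP for the target, while watching the echo traffic on the wire at the same time. A sketch run from nagiosv (the interface name is hypothetical; arping comes from the iputils package on CentOS):

        # does more than one MAC ever answer for DC01's IP?
        arping -I eth0 -c 4 10.1.2.42
        # watch requests and replies leave/arrive while pinging
        tcpdump -n -i eth0 icmp and host 10.1.2.42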

    Read the article

  • How to let a non-root user run chown and chmod on any user's or group's files?

    - by user1877716
    I would like to make a user super-powerful, with almost all root rights, but unable to touch the root account itself (e.g., to change root's password). My goal is for user "B" to manage my web server. The problem is that user B needs to be able to run the chown and chmod commands on files belonging to other users. I tried putting B in the root group and using visudo, but it's not enough. I'm working on a CentOS 6 system. If somebody has ideas!
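
    sudo can grant exactly those two commands without handing over a full root shell. A minimal sudoers sketch, added via visudo (caveat: unrestricted chown/chmod still lets B take ownership of root-owned files, so this is a convenience boundary, not a hard security one):

        # let user B run chown and chmod as root without a password
        B ALL=(root) NOPASSWD: /bin/chown, /bin/chmod

    B would then run, for example: sudo chown apache:apache /var/www/html/somefile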

    Read the article

  • I'll be setting up a dedicated web server at work soon, my first non-hobby server. What should I know?

    - by Rogue Coder
    I've been running my own dedicated server with CentOS and a LAMP stack for 2-3 years now, but it only hosts my own websites, which aren't super important. However, I will soon be setting up a Linux web server and a Linux database server at work, and I'm wondering what important things I should be doing. It's an internal server only, so only people in the company can access it. Should I get a slave server for both of my servers for backups? If I do this, how many backups should I keep, and how often should those backups be done? Right now on my current server, I run a nightly cron job to back up my MySQL databases (usually 40 MB files once compressed) and bi-weekly cron jobs to back up my web root. I just store these files on my local computer via FTP. Also, for an internal server like this, should I look at using lighttpd or nginx to increase performance, or will Apache be fine?
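
    For reference, the kind of nightly and weekly backup cron jobs described above look roughly like this. A sketch with hypothetical paths, assuming MySQL credentials live in ~/.my.cnf so no password sits in the crontab (note that % must be escaped as \% inside crontab entries):

        # m h dom mon dow  command
        # 02:00 nightly MySQL dump, compressed, one file per weekday (7-day rotation)
        0 2 * * * mysqldump --all-databases | gzip > /backup/mysql-$(date +\%a).sql.gz
        # 03:00 Sunday web root archive (cron can't express "bi-weekly" directly)
        0 3 * * 0 tar czf /backup/webroot-$(date +\%F).tar.gz /var/www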

    Read the article
