Search Results

Search found 4634 results on 186 pages for 'oscar red carpet'.

Page 134/186

  • Using secure proxies with Google Chrome

    - by cYrus
    Whenever I use a secure proxy with Google Chrome I get ERR_PROXY_CERTIFICATE_INVALID. I have tried a lot of different scenarios and versions.

    The certificate: I'm using a self-signed certificate, generated with:

        openssl genrsa -out key.pem 1024
        openssl req -new -key key.pem -out request.pem
        openssl x509 -req -days 30 -in request.pem -signkey key.pem -out certificate.pem

    Note: this certificate works (with a warning, since it's self-signed) when I use it to set up a simple HTTPS server.

    The proxy: I then start a secure proxy on localhost:8080. There are several ways to accomplish this; I tried a custom Node.js script, stunnel, and node-spdyproxy (OK, this involves SPDY too, but later... the problem is the same); [...]

    The browser: then I run Google Chrome with:

        google-chrome --proxy-server=https://localhost:8080 http://superuser.com

    to load, say, http://superuser.com.

    The issue: all I get is:

        Error 136 (net::ERR_PROXY_CERTIFICATE_INVALID): Unknown error.

    in the window, and something like:

        [13633:13639:1017/182333:ERROR:cert_verify_proc_nss.cc(790)] CERT_PKIXVerifyCert for localhost failed err=-8179

    in the console. Note: this is not the big red warning that complains about insecure certificates. Now, I have to admit that I'm quite a n00b where certificates and such are concerned; if I'm missing some fundamental points, please let me know.
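    One avenue worth trying (an editor's suggestion, not part of the original question): on Linux, Chrome validates the proxy's certificate against the per-user NSS store, so importing the self-signed certificate as a trusted CA may clear the ERR_PROXY_CERTIFICATE_INVALID error. A minimal sketch, assuming libnss3-tools is installed and certificate.pem is the file generated above:

        # import the self-signed cert into Chrome's per-user NSS database and
        # mark it trusted for server authentication ("C" = trusted CA for SSL)
        certutil -d sql:$HOME/.pki/nssdb -A -t "C,," -n "local-proxy" -i certificate.pem

    After the import, restart Chrome so it re-reads the certificate database.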

    Read the article

  • Do registry issues with Win7 persist through a recovery from a system image?

    - by user59089
    So I need a bit of advice, please; here's my situation: I have 1) a system image on a brand new external 1 TB SATA drive, which I managed to capture before 2) my primary system drive went down. I realize this is a fairly simple matter of buying a new primary drive and performing the recovery to the fresh disk... however, the issue is that I believe Win7 was also having some significant issues of its own: basically, Update was unable to install updates, and Backup kept ditching the auto backup schedule. I'd been trying to address those issues while my system was still working, but it was so fruitless that I'm convinced a Win7 re-install would be best. Now I'm concerned that if I was in fact having what I believe are likely registry-related issues before, these will persist through a recovery; would that likely be correct? I'm mainly worried about recovering my files, so if I did a full recovery from the image, should I be able to access my individual files and copy them manually to an external drive, so I can then do a full re-install of Win7? Sorry if this seems obvious, but I've never done a recovery before, and I'm just trying to make sure there are no red flags with what I have in mind...
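    On the file-recovery part specifically, one hedged alternative (an editor's addition; the path below is made up): a Windows 7 system image is stored as a VHD file under WindowsImageBackup, and another Windows 7 machine can attach it read-only with diskpart, after which individual files can be copied off as from any other drive:

        diskpart
        REM the path is hypothetical; point it at the .vhd inside WindowsImageBackup
        select vdisk file="E:\WindowsImageBackup\MyPC\Backup 2012-01-01 120000\disk.vhd"
        attach vdisk readonly

    This avoids restoring the possibly troubled Windows installation at all: copy the files out, then do the clean re-install.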

    Read the article

  • What's the state of the art in image upscaling?

    - by monov
    I like to collect cool pics and use them as wallpapers or for other things. Often, artists publish only low-res versions, probably for fear of theft. Example: Gabriel Pulecio's BIRDS. Now, if I want to use that as a wallpaper, I'd have to upscale it, and obviously that would make it look blurry because of the bicubic interpolation. I realize there's no real way to get a high-res version from a low-res pic, because the information simply isn't there. That said, I'm wondering whether heuristics have been developed for upscaling with less apparent loss of quality. Those would probably be optimized for specific image types: for photorealistic pictures, for cartoons with large flat areas, for pixel art... One algorithm I'm aware of is seam carving. It works for some kinds of pics, especially ones with a plain, undetailed, or uninteresting background and a subject that strongly stands out. But it's far from general-purpose. Applying it to the above pic produces this: it looks quite sharp, but the proportions are horribly distorted because the algorithm is not designed for this kind of pic. Another family is pixel-art scaling algorithms. Those are completely unfit for anything other than actual pixel art that's pixelized to begin with. For example, I tried the scale2x Windows binary on my pic, but its output was nearly indistinguishable from nearest-neighbour scaling, because the algorithm didn't detect any isolated pixely fragments to work from. Something else I tried: I enlarged the image in Photoshop with bicubic interpolation, then applied an unsharp mask. The result looks pretty bad. The red blotch is actually resized reasonably well, but the dove is far from it. What I'm looking for is some app that makes a best-effort attempt at upscaling any input image while minimizing blurriness. If you know of any, I'll be thankful. Note that the subjective prettiness and sharpness of the result is what matters; the result doesn't need to be completely faithful to the original small image.
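    As a quick scriptable baseline to compare any smarter upscaler against (an editor's addition, not from the question), ImageMagick can do a Lanczos resample followed by a mild unsharp mask; the scale factor and unsharp parameters here are just starting points, not recommendations:

        # 4x upscale with Lanczos resampling, then light sharpening
        convert input.png -filter Lanczos -resize 400% -unsharp 0x1 output.png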

    Read the article

  • uWSGI cannot find "application" using Flask and Virtualenv

    - by skyler
    Using uWSGI to serve a simple WSGI app (a simple "Hello, World"), my configuration works, but when I try to run a Flask app, I get this in uWSGI's error log:

        current working directory: /opt/python-env/coefficient/lib/python2.6/site-packages
        writing pidfile to /var/run/uwsgi.pid
        detected binary path: /opt/uwsgi/uwsgi
        setuid() to 497
        your memory page size is 4096 bytes
        detected max file descriptor number: 1024
        lock engine: pthread robust mutexes
        uwsgi socket 0 bound to TCP address 127.0.0.1:3031 fd 3
        Python version: 2.6.6 (r266:84292, Jun 18 2012, 14:18:47) [GCC 4.4.6 20110731 (Red Hat 4.4.6-3)]
        Set PythonHome to /opt/python-env/coefficient/
        *** Python threads support is disabled. You can enable it with --enable-threads ***
        Python main interpreter initialized at 0xbed3b0
        your server socket listen backlog is limited to 100 connections
        *** Operational MODE: single process ***
        added /opt/python-env/coefficient/lib/python2.6/site-packages/ to pythonpath.
        unable to find "application" callable in file /var/www/coefficient/flask.py
        unable to load app 0 (mountpoint='') (callable not found or import error)
        *** no app loaded. going in full dynamic mode ***
        *** uWSGI is running in multiple interpreter mode ***

    Note in particular this part of the log:

        unable to find "application" callable in file /var/www/coefficient/flask.py
        unable to load app 0 (mountpoint='') (callable not found or import error)
        *** no app loaded. going in full dynamic mode ***

    This is my Flask app:

        from flask import Flask
        app = Flask(__name__)

        @app.route("/")
        def hello():
            return "Hello, World, from Flask!"

    Before I added my virtualenv's pythonpath to my configuration file, I was getting an ImportError for Flask. I believe I have solved that (I'm not receiving errors about it anymore). Here is my complete configuration file:

        uwsgi:
            #socket: /tmp/uwsgi.sock
            socket: 127.0.0.1:3031
            daemonize: /var/log/uwsgi.log
            pidfile: /var/run/uwsgi.pid
            master: true
            vacuum: true
            #wsgi-file: /var/www/coefficient/coefficient.py
            wsgi-file: /var/www/coefficient/flask.py
            processes: 1
            virtualenv: /opt/python-env/coefficient/
            pythonpath: /opt/python-env/coefficient/lib/python2.6/site-packages

    This is how I start uWSGI, from an rc script:

        /opt/uwsgi/uwsgi --yaml /etc/uwsgi/conf.yaml --uid uwsgi

    And if I try to view the Flask app in a browser, I get this:

        uWSGI Error
        Python application not found

    Any help is appreciated.
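    Two things stand out here (an editor's reading, not from the question): uWSGI looks for a callable named "application" by default, while this Flask file names it "app", and a file called flask.py can shadow the flask package it tries to import. A sketch of the relevant config change; renaming the file is an assumption:

        uwsgi:
            socket: 127.0.0.1:3031
            # renamed from flask.py so the file no longer shadows the "flask" package
            wsgi-file: /var/www/coefficient/coefficient_app.py
            # tell uWSGI the WSGI callable is "app", not the default "application"
            callable: app

    Alternatively, adding a line "application = app" at the bottom of the Flask file satisfies the default callable name without touching the config.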

    Read the article

  • Optimal file system type and mount options for an rsnapshot dedicated drive

    - by Nimmy Lebby
    We have an external USB 2 drive that we are using as a backup drive for our configuration. We use rsnapshot for the backups. It uses a few standard commands for managing snapshots:

        rm -rf: deletes expired snapshots
        mv: moves older snapshots down a slot
        cp -al: duplicates the last snapshot to a new slot
        rsync -a --delete --numeric-ids --relative: synchronizes the new snapshot

    As you can see from the log below, the majority of the time is spent on the rm -rf and cp -al steps:

        [25/Dec/2010:14:00:02] rsnapshot hourly: started
        [25/Dec/2010:14:00:02] echo 21012 > /var/run/rsnapshot.pid
        [25/Dec/2010:14:00:02] rm -rf /mnt/extdrive/snapshots/hourly.5/
        [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.4/ /mnt/extdrive/snapshots/hourly.5/
        [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.3/ /mnt/extdrive/snapshots/hourly.4/
        [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.2/ /mnt/extdrive/snapshots/hourly.3/
        [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.1/ /mnt/extdrive/snapshots/hourly.2/
        [25/Dec/2010:14:15:48] cp -al /mnt/extdrive/snapshots/hourly.0 /mnt/extdrive/snapshots/hourly.1
        [25/Dec/2010:14:23:32] rsync -a --delete --numeric-ids --relative /etc /mnt/extdrive/snapshots/hourly.0/sm4/
        [25/Dec/2010:14:23:52] touch /mnt/extdrive/snapshots/hourly.0/
        [25/Dec/2010:14:23:52] rm -f /var/run/rsnapshot.pid
        [25/Dec/2010:14:23:52] rsnapshot hourly: completed successfully

    My questions: I'm currently using ext4 for the filesystem; maybe this is not the best choice of those available in Red Hat. Does anyone have any recommendations that would speed up the process? The partition's mount options are "sync,dirsync 1 2". Is there a way to optimize this, since the drive is used solely for rsnapshot? Of course, reasoning would be greatly appreciated.
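    One lever worth measuring (an observation added in editing, not from the original post): sync and dirsync force every metadata update to disk synchronously, and rm -rf / cp -al on a snapshot tree are almost pure metadata churn, so dropping them in favor of the async defaults plus noatime may cut those two steps substantially. A hedged /etc/fstab sketch; the device name is hypothetical:

        # /etc/fstab: async metadata and no access-time updates; the data is
        # re-creatable from the source, so the sync guarantee buys little here
        /dev/sdb1  /mnt/extdrive  ext4  defaults,noatime  0 2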

    Read the article

  • Integration of SharePoint 2010 with TFS2010

    - by Kabir Rao
    We have performed the following steps so far:

        - Installed TFS 2010 10.0.30319.1 (RTM) on Windows Server 2008 R2 Enterprise (app tier)
        - Installed SQL 2008 SP1 with Cumulative Update 2 on Windows Server 2008 R2 Enterprise (data tier)
        - Reporting Services is installed on the app tier

    After this installation worked fine, we installed SharePoint 2010 on the app tier. After the installation we followed http://blogs.msdn.com/b/team_foundation/archive/2010/03/06/configuring-sharepoint-server-2010-beta-for-dashboard-compatibility-with-tfs-2010-beta2-rc.aspx for configuration. We are not able to perform the last step described in the link, as the following error occurred:

        TF249063: The following Web service is not available: http://apptier:31254/_vti_bin/TeamFoundationIntegrationService.asmx. This Web service is used for the Team Foundation Server Extensions for SharePoint Products. The underlying error is: The remote server returned an error: (404) Not Found. Verify that the following URL points to a valid SharePoint Web application and that the application is available: http://apptier:31254. If the URL is correct and the Web application is operating normally, verify that a firewall is not blocking access to the Web application.

    We have also noticed that the Documents folder in the team project has a red X. Please help. Thanks in advance.

    Read the article

  • Need to boot into chkdsk from USB on Windows netbook

    - by Gaz Davidson
    While attempting to install Ubuntu on a 32-bit Windows XP netbook, the partition resize operation failed due to inconsistencies in the NTFS filesystem (lesson learned: run chkdsk /f in Windows before trying to resize a partition from Linux). Now the installer only gives the option to replace Windows with Ubuntu, and the partition can't be resized in GParted, which displays a red exclamation mark and an error log when you click it. To make matters worse, we're also unable to reboot into Windows to get at chkdsk: we get a BSoD when choosing any of the boot options (including the DOS recovery console thing). The netbook has no CD-ROM drive, contains no recovery image, and our only connection to the Internet is via the hotspot on my mobile device. We don't have Windows recovery CDs, but we do have a USB flash drive. We also have a 64-bit laptop running Ubuntu 12.04 and Windows 7 (both 64-bit). So, on to the question: is anyone aware of a way to get into a DOS recovery console and run chkdsk from a USB disk drive, without having to pirate Windows XP or download hundreds and hundreds of megabytes of crap? If it was my device I'd just flatten it, but it isn't. Please help!
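    One low-bandwidth option to try first (an editor's suggestion, not from the post): the Ubuntu live USB already in hand ships ntfs-3g, whose ntfsfix tool repairs several common NTFS inconsistencies and schedules a chkdsk for the next Windows boot. A sketch, assuming the XP system partition is /dev/sda1:

        # from the Ubuntu live session: fix common inconsistencies and mark the
        # volume dirty so Windows runs chkdsk automatically on the next boot
        sudo ntfsfix /dev/sda1

    If that clears the inconsistency, GParted may then agree to resize the partition without booting Windows at all.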

    Read the article

  • Enable CPU fan always on

    - by Gundars Meness
    I am using a three-year-old overheating laptop, and I want my CPU fan to be spinning 24/7, regardless of the consequences. How do I make it spin? The problem is that the CPU and GPU heat up to 68°C (154°F) right after boot and never cool down, because the CPU fan is not spinning at full throttle. It starts spinning faster when the temperature goes over 70°C and stops when it drops to seventy again. When doing heavy work on databases, it goes from 70 to 90 in no time, and the machine automatically powers off. The BIOS does not contain any "fan spin 100%" option, just "spin slowly all the time" and "auto", which is even more useless than the first one since my fan doesn't have a PWM wire. Currently I'm working around this with a cooling stand (3x5V), but it isn't much help. I would rather use the CPU fan, since it is the only fan directly responsible for cooling the CPU/GPU. But how do I make it spin at 100% all the time? Should I attach its red power wire to the motherboard to get a constant 5V (is there such an option?), or is there a way to control it via software?

    Laptop: Samsung R528, 2.3 GHz Intel i3 with Nvidia GeForce 310M
    BIOS: Phoenix 03KT.M003.20100622.KSJ (and that is the latest update)
    OS: Ubuntu 12.04.2 LTS with the 3.2.0.51 kernel
    CPU fan: Image/Description. Rated 5V 0.4A, with only 3 pins, no PWM.

    P.S. Yes, I did clean everything with alcohol, freed the air vents, changed the thermal paste, etc.; that reduced the temperature by 4 degrees.
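    If the embedded controller exposes the fan at all, the stock lm-sensors toolchain is worth trying before any hardware surgery (an editor's suggestion; since the question says the fan has no PWM wire, pwmconfig may simply find nothing it can control):

        # detect sensor chips, then interactively probe for controllable fan
        # outputs; if one is found, /etc/fancontrol can pin it at maximum
        sudo apt-get install lm-sensors fancontrol
        sudo sensors-detect
        sudo pwmconfig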

    Read the article

  • Which Linux distributions work on IBM's JS20 PowerPC blade?

    - by Matthew Rankin
    Which Linux distributions have people successfully gotten working on an IBM JS20 blade, which has the PowerPC 970 processor? Specifically, I'm interested in distributions other than RHEL and SLES. What gotchas need to be watched out for when installing a particular distribution of Linux on the JS20?

    Non-distribution-specific information:

        - IBM's Linux on BladeCenter JS20
        - IBM's eServer BladeCenter JS20 whitepaper: "The JS20 blade supports all popular Linux distributions including Red Hat®, Inc., and SUSE LINUX."
        - PenguinPPC distributions list
        - PPCLinux: "This project is a repository of information on how to run GNU/Linux on PowerPC architectures."

    Ubuntu-specific information:

        - Ubuntu PowerPC FAQ: "Ubuntu 6.10 was the last officially supported PowerPC version of Ubuntu."
        - Ubuntu PowerPC download
        - Ubuntu 8.04 PowerPC supported hardware
        - Ubuntu 9.10 ports: Mac (PowerPC) and IBM-PPC (POWER5) server install CD, for Apple Macintosh G3, G4, and G5 computers (including iBooks and PowerBooks) as well as IBM OpenPower machines.

    Debian-specific information:

        - Debian on PowerPC: "We may have a 64bit port in the future." It looks like only a 32-bit version is currently available.
        - Debian on JS20 blades
        - Installing Debian Etch on IBM JS20s

    Gentoo-specific information:

        - Gentoo Linux

    Crux PPC-specific information:

        - Crux PPC

    Read the article

  • Network bridge does not work on Windows 7

    - by D. Strout
    I am trying to use Windows 7 to bridge my wireless and wired connections together to get wireless on a second Windows XP computer (fresh install). The network bridge is created successfully, but when I connect the cable from the first computer (with the bridged connections) to the second one (with just a wired connection), nothing happens. The second computer doesn't connect, and the first computer shows no sign of anything different. I tried bridging the connections of a third computer and connecting it to the second computer. This worked (the third computer bridged the wireless to the second computer). Thus, the problem must be with the first (Win 7) computer. However, I have no idea what the problem could be. All Internet Connection Sharing is turned off. HomeGroup is disabled (it was originally enabled; I thought that might be a problem, so I disabled it). Also, I had VMware Fusion installed, and that created extra items in the "This connection uses the following items" box in the connection's properties dialog. Thinking this might be causing issues, I uninstalled that too. Still, with everything I tried, I can't get it to work. Any suggestions?

    Edit: Another thing I noticed that might be worth mentioning: the network icon in the Win7 taskbar has a red X on it, meaning it's not connected, but when I click the icon, it says "connected" next to my wireless connection, and I am able to access the Internet.
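    One Windows-specific knob that sometimes matters here (an editor's note, not from the question): bridging silently fails on adapters that cannot enter promiscuous mode unless compatibility mode is forced on them. From an elevated command prompt:

        rem list the bridged adapters with their index numbers
        netsh bridge show adapter
        rem force compatibility (MAC address translation) mode on adapter 1
        netsh bridge set adapter 1 forcecompatmode=enable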

    Read the article

  • Immediately tell which output was sent to stderr

    - by Clinton Blackmore
    When automating a task, it is sensible to test it first manually. It would be helpful, though, if any data going to stderr were immediately recognizable as such, and distinguishable from the data going to stdout, and to have all the output together so it is obvious what the sequence of events is. One last touch that would be nice is if, at program exit, it printed its return code. All of these things would aid in automating. Yes, I can echo the return code when a program finishes, and yes, I can redirect stdout and stderr; what I'd really like is some shell, script, or easy-to-use redirector that shows stdout in black, shows stderr interleaved with it in red, and prints the exit code at the end. Is there such a beast? [If it matters, I'm using Bash 3.2 on Mac OS X.]

    Update: Sorry it has been months since I've looked at this. I've come up with a simple test script:

        #!/usr/bin/env python
        import sys
        print "this is stdout"
        print >> sys.stderr, "this is stderr"
        print "this is stdout again"

    In my testing (probably due to the way things are buffered), rse and hilite display everything from stdout and then everything from stderr. The fifo method gets the order right but appears to colourize everything following the stderr line. ind complained about my stdout and stderr lines, and then put the output from stderr last. Most of these solutions are workable, as it is not atypical for only the last output to go to stderr, but still, it'd be nice to have something that worked slightly better.
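    For comparison, a bare-bones Bash 3.2-compatible sketch of the same idea (an editor's addition; buffering can still reorder lines, the same caveat the update describes):

        # run a command, painting its stderr red and reporting the exit code
        colorize() {
            "$@" 2> >(while IFS= read -r line; do
                printf '\033[31m%s\033[0m\n' "$line"
            done)
            local rc=$?
            echo "exit code: $rc"
            return $rc
        }

        colorize python test.py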

    Read the article

  • Setup staging with multiple SVN

    - by Kapil Sharma
    We are a startup, setting up new environments for a product to be released soon. The planned server structure, with the planned release flow, is shown in the image below. Ideally it has a local server (or staging server, shown in green) in the local office, without a public IP address, and a production server (red) at Amazon EC2. Both the local and production servers have their own SVN copy. Management here wants to update the production server from the production SVN without giving developers (including freelancers/contract employees) access to it. So, for developers, there is a local SVN on the local server. Another purpose of the local SVN is to keep a copy of the code on a local server, which is under our direct control. There are some technical concerns, like how code on the local server will be updated from the local SVN and committed to the production SVN, but the bigger question is: is that structure correct? The major requirement remains: don't give developers access to the production SVN. What are other possible options to achieve that? Another minor question, if suitable here: if the above structure is correct, is it possible for an SVN checkout to get updated from one SVN (the local SVN) but commit to the other (the production SVN)? If yes, how?

    Edit: An answer has been accepted, but for the bounty I'm still looking for an answer to: is that structure correct? What are its pros and cons? A technical solution is already provided by the accepted answer.
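    On the minor question, a hedged sketch (an editor's addition): a working copy can only commit to the repository it is bound to, but if the production repository is a mirror sharing the local repository's UUID (e.g. one maintained with svnsync), the working copy can be re-pointed at either end with svn switch --relocate. The URLs below are hypothetical:

        # update from the local repository
        svn update
        # rebind the working copy to the production repository and push the commit
        svn switch --relocate http://local-server/svn/repo http://prod-server/svn/repo
        svn commit -m "release"
        # rebind back to the local repository for day-to-day work
        svn switch --relocate http://prod-server/svn/repo http://local-server/svn/repo

    If the two repositories have different UUIDs (separate histories), --relocate will refuse to run, and a copy-and-commit release script is the usual fallback.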

    Read the article

  • Scanning website for vulnerabilities

    - by Kristen
    I have found that the local school's website has a Perl calendar installed. This was years ago; it has not been used for ages, but Google has it indexed (which is how I found it), and it is full of Viagra links and the like... The program was by Matt Kruse; here are details of the exploit: http://www.securiteam.com/exploits/5IP040A1QI.html. I've got the school to remove that, but I think they also have MySQL installed, and I'm aware that out of the box there have been some exploits of the admin tools / login in old versions. For all I know they also have phpBB and the like installed... The school is just using some cheap shared hosting; the HTTP response header I get is:

        Apache/1.3.29 (Unix) (Red-Hat/Linux) Chili!Soft-ASP/3.6.2 mod_ssl/2.8.14 OpenSSL/0.9.6b PHP/4.4.9 FrontPage/5.0.2.2510

    I'm looking for some means of checking whether they have other junk installed (quite possibly from way back, and now unused) that might put the site at risk. I'm more interested in something that can scan for things like the MySQL admin exploit than open ports etc. My guess is that they have little control over the hosting space that they have; but I'm a Windows dev, so this *nix stuff is all Greek to me. I found http://www.beyondsecurity.com/, which looks like it might do what I want (within their evaluation :) ), but I worry about how to find out whether they are well known / honest; otherwise I would be tipping someone the wink about a domain name that may be at risk! Many thanks.
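    For the "old known-vulnerable scripts" angle specifically (an editor's suggestion, and of course only with the school's permission), Nikto is a free scanner aimed at exactly this: it checks a web server for outdated software versions and thousands of known-dangerous CGI/admin scripts rather than scanning ports. The hostname is a placeholder:

        # scan the site for outdated server software and known-vulnerable scripts
        nikto -h http://www.example-school.org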

    Read the article

  • Firewall for internal networks

    - by Cylindric
    I have a virtualised infrastructure here, with separated networks (some physically, some just by VLAN) for iSCSI traffic, VMware management traffic, production traffic, etc. The recommendations are, of course, not to allow access from the LAN to the iSCSI network, for obvious security and performance reasons, and the same between DMZ/LAN, etc. The problem I have is that in reality, some services do need access across the networks from time to time:

        - The system monitoring server needs to see the ESX hosts and the SAN, for SNMP
        - vSphere guest console access needs direct access to the ESX host the VM is running on
        - VMware Converter wants access to the ESX host the VM will be created on
        - The SAN email notification system wants access to our mail server

    Rather than wildly opening up the entire network, I'd like to place a firewall spanning these networks, so I can allow just the access required. For example:

        - SAN -> SMTP server, for email
        - Management -> SAN, for monitoring via SNMP
        - Management -> ESX, for monitoring via SNMP
        - Target server -> ESX, for VMware Converter

    Can someone recommend a free firewall that will allow this kind of thing without too much low-level tinkering with config files? I've used products such as IPCop before, and it seems possible to achieve this with that product if I re-purpose its ideas of "WAN" and "WLAN" (the red/green/orange/blue interfaces), but I was wondering if there were any other accepted products for this sort of thing. Thanks.
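    For reference, the same four allowances can be expressed as plain iptables FORWARD rules on a Linux box with an interface in each segment (an illustrative sketch, not a product recommendation; every address and port below is a placeholder):

        # SAN -> mail server (SMTP)
        iptables -A FORWARD -s 10.0.3.10 -d 10.0.1.25 -p tcp --dport 25 -j ACCEPT
        # management -> SAN and ESX (SNMP)
        iptables -A FORWARD -s 10.0.2.0/24 -d 10.0.3.10 -p udp --dport 161 -j ACCEPT
        iptables -A FORWARD -s 10.0.2.0/24 -d 10.0.4.0/24 -p udp --dport 161 -j ACCEPT
        # target server -> ESX (VMware Converter)
        iptables -A FORWARD -s 10.0.1.50 -d 10.0.4.0/24 -p tcp --dport 443 -j ACCEPT
        # everything else between the segments is dropped
        iptables -P FORWARD DROP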

    Read the article

  • Problems installing MySQL-python via yum / missing dependency / incompatibility problem?

    - by bs0
    I have come up against problems installing MySQL-python via yum. Our server is running CentOS 5.5 and MySQL version 5.1.45; python-devel is installed. Yum complains about the missing dependency libmysqlclient_r.so.15:

        Missing Dependency: libmysqlclient_r.so.15()(64bit) is needed by package MySQL-python-1.2.1-1.x86_64 (base)

    The server is up to date, and the packages mysql, mysql-devel, and python-devel are installed. The missing dependency is nowhere on the system:

        $ locate libmysqlclient
        /usr/lib64/libmysqlclient.so
        /usr/lib64/libmysqlclient.so.15
        /usr/lib64/libmysqlclient.so.16
        /usr/lib64/libmysqlclient.so.16.0.0
        /usr/lib64/libmysqlclient_r.so
        /usr/lib64/libmysqlclient_r.so.16
        /usr/lib64/libmysqlclient_r.so.16.0.0
        /usr/lib64/mysql/libmysqlclient.a
        /usr/lib64/mysql/libmysqlclient.la
        /usr/lib64/mysql/libmysqlclient.so
        /usr/lib64/mysql/libmysqlclient_r.a
        /usr/lib64/mysql/libmysqlclient_r.la
        /usr/lib64/mysql/libmysqlclient_r.so
        /usr/local/cpanel/lib64/libmysqlclient.so.14

        $ rpm -qa | grep -i mysql
        MySQL-devel-5.1.45-0.glibc23
        MySQL-bench-5.0.89-0.glibc23
        MySQL-shared-5.1.45-0.glibc23
        MySQL-server-5.1.45-0.glibc23
        MySQL-test-5.1.45-0.glibc23
        MySQL-client-5.1.45-0.glibc23

    The Python version is python-2.4.3-27.el5.x86_64:

        Python 2.4.3 (#1, Sep 3 2009, 15:37:37)
        [GCC 4.1.2 20080704 (Red Hat 4.1.2-46)] on linux2

    Any suggestions would be greatly appreciated.
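    A reading of the symptom (an editor's note): the installed MySQL 5.1 packages ship libmysqlclient_r.so.16, while CentOS 5's MySQL-python RPM was built against the 5.0-era .so.15, and only the non-threaded libmysqlclient.so.15 is present. One commonly suggested route, assuming MySQL's own repository supplied the MySQL-* packages above and also offers the package, is the compatibility RPM that installs the older client libraries side by side:

        # ships libmysqlclient(_r).so.15 alongside the 5.1 libraries
        yum install MySQL-shared-compat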

    Read the article

  • Coffee spilled and went inside CPU...computer not starting

    - by Harpreet
    Today coffee got spilled over my table, and some of it (very little) reached the CPU case placed under the table. I think a little bit of it got inside the case through the front face. As that happened, the fan started running very fast and making noise. I tried to restart to see if it would become fine, but the computer didn't start again. First it gave the error "Alert! Air temperature sensor not detected" and didn't start. Next I tried starting the computer multiple times, but then it gave some memory error. I was not able to start the computer. In case there's a problem with the hard disk or something memory-related, is there any way we can extract our work or data? I am scared that I will not be able to extract my work if some problem like that occurs. What options would I have? Help!

    EDIT: I have attached the photo here; you can see the spilt area in the red circle. The hard drive electronics have been affected, and the internal speaker may also have been affected. Any advice on cleaning, and on whether the hard drive can work?

    EDIT 2: Are there any professional services offered to extract data from a blemished hard disk, like this one, in case I am not able to recover it personally?
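    Whatever else turns out to be damaged, a common first move for the data (an editor's suggestion; the device and paths are placeholders): connect the drive to a working machine, or boot this one from a Linux live USB, and image it with GNU ddrescue before any cleaning or repeated power-ons; files are then recovered from the image rather than the failing disk:

        # image the whole disk; the log file lets ddrescue resume and
        # retry bad areas later without re-reading the good ones
        sudo ddrescue -f -n /dev/sda /mnt/usb/disk.img /mnt/usb/rescue.log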

    Read the article

  • Internal/External Moodle - DNS

    - by Chief17
    Network diagram: I have a Moodle (a VLE) setup that I want to be internally and externally accessible. The green route on the diagram below is the route I would like the traffic to take when the user is inside the LAN, and the red route is the one it seemingly does take. The website has a domain name (like most websites do). From the user PC, if I ping the domain name, I get the internal IP of the webserver (because of a hosts-file entry); if I nslookup the domain name, I also get the internal IP of the webserver (because of an A record on my DNS server). Running the same two commands on the webserver gives me the webserver's external IP. (Going well so far.) If I use PHP's gethostbyname() on the Moodle website with the domain name as a parameter (getting PHP/Apache to resolve the hostname), it returns the external IP of the webserver (good news: DNS seems to be doing what I want it to). All things so far seem to be going well. The only thing confusing me, and preventing the Moodle single sign-on from working, is the fact that if I get Moodle to show my IP address, it says it is an external one (outside my NATting firewall) when it should show an internal IP. This is the issue. Any ideas on how to go about resolving it? Any ideas on tests I can perform (I have also tried a tracert, and the request goes directly to the webserver)? Anything? Thanks all!
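    A test worth adding to the list (an editor's suggestion; the interface name is a placeholder): watch the request arrive on the webserver itself, to see whether the client's source address is being NATted on the way in. If tcpdump shows the firewall's address instead of the user PC's internal address, the traffic is hairpinning through the firewall even though the name resolves internally:

        # on the webserver: show the real source address of incoming HTTP SYNs
        sudo tcpdump -n -i eth0 'tcp port 80 and tcp[tcpflags] & tcp-syn != 0'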

    Read the article

  • How to fix display on external Samsung Syncmaster shifted to the right when connected to Macbook Pro?

    - by joe larson
    Is there something special I need to do to be able to use external LCD displays with my new MacBook Pro? Do I need extra software, or possibly a different cable? I'm attempting to use an external display with my MBP. I've got a "Mini DisplayPort to VGA Female Adapter for Mac" plugged into the Thunderbolt port on my MBP, which I understood should accept Mini DisplayPort adapters. I've tried this with three different SyncMaster models: a B2330 (21.5"), an EX2220 (22"), and a third (also roughly 22") whose model number I don't have; all are 1920x1080 resolution. I also tried an additional HP monitor of similar size and resolution. In all four cases, the MBP recognizes the screen and chooses the correct resolution. However, the display is shifted over about 1 inch. This is true no matter which screen resolution I choose. The monitor's controls for horizontal position don't help. Also, sometimes (especially if I drag an app over onto the second screen), the screen starts skipping left to right and showing bands of fuzz. Additionally, the monitor periodically blinks off for a moment, trying to switch from Digital to Analog and back (the SyncMaster shows text on the screen to tell you it's trying to do this). Often when it comes back from one of these blank-outs it will look OK (no skipping or fuzz) but still shifted right; then after a few seconds it goes wrong again, skipping and fuzzy. This photo shows the worst of it. I've added red rectangles to show the physical edge of the screen, and a yellow rectangle to show the empty space on the left of the screen. (Sorry for the awful quality and lighting!) Also, it's worth noting I am on Mac OS X 10.6.7, and yes, I have the 1.4 update installed.

    Read the article

  • Optical Audio out stuck on on a MacBook

    - by Clinton Blackmore
    Apple have made an interesting headphone port for the MacBook (and some other Intel Mac models). It works like a standard jack:

        - nothing plugged in: audio comes out of the built-in speakers
        - headphones/external speakers plugged in: audio plays through them

    But you can also use a special adapter (which trips a tiny microswitch) to get an optical audio-out signal (which you can presumably plug into a nice surround-sound system). This is all well and good except when, like auto-tracking, it doesn't work, and you are left with nothing to adjust. Users report that they get no sound when nothing is plugged in and that a red light emanates from the headphone port. If you go to System Preferences > Sound > Output, it will say (IIRC) "Optical Out" instead of "Internal Speakers". The only solution I'm aware of is to try to reset the switch by inserting and removing a set of headphones or a toothpick, perhaps wiggling it inside the port, and hoping that you luck out and get it. Are there other ways to fix this problem? Does anyone know where the microswitch is, or have a good technique for resetting it?

    Read the article

  • How to fix Quicktime video color problems on nVidia chipset and Windows XP?

    - by Matthew Glidden
    My laptop frequently plays video as if in a very low-color mode. Though the sound remains clear, the picture looks terrible, showing only a few shades of red, blue, or yellow. (It's even worse than 8-bit color.) The problem doesn't happen consistently, so I'm looking for troubleshooting advice or known solutions. I use a Dell Latitude D620 laptop with Windows XP, an on-board nVidia video chipset, QuickTime, and multiple monitors (laptop screen + VGA-connected LCD). The color problems happen in every application I tried: iTunes, a browser, and the standalone QuickTime player. It doesn't happen right after a reboot, so it could be triggered by a sleep-wake cycle, or at least by being on for an extended period. Google results suggest reinstalling the nVidia drivers, which I've done several times with no change. I have found two workarounds:

        1. Reboot, sacrificing significant time and disrupting work.
        2. In the nVidia control panel, change the color depth to 16-bit, and then back to 32-bit.

    This happens with all video playback, so it's definitely not one corrupt file. I use workaround #2 consistently, but would love a longer-term solution.

    Read the article

  • How to fake an IP at localhost without a loopback adapter

    - by sexer
    How can I fake an IP on my own PC? For example, say there is an IP address like 201.91.81.71; that host is somewhere outside of my network and is hosting a webserver. How can I set up a website on my own PC so that when I go to the browser and try to open 201.91.81.71, it actually opens the website on my own PC?

    PS: I need this to work with IP addresses, not domain names, since I need to implement it for a non-web service.

    My first guess was installing a loopback adapter with 201.91.81.71 as its IP, but sometimes the subnet works and sometimes it doesn't, so it isn't a stable solution. My second guess was adding a route to the route table:

        route add 201.91.81.71 mask 255.255.255.255 192.168.1.2

    where 192.168.1.2 is the IP address of my NIC. If I could add this route it would work, but Windows doesn't let me do it. I also tried:

        route add 201.91.81.71 mask 255.255.255.255 127.0.0.1

    but it doesn't let me set 127.0.0.1 as the gateway if 201.91.81.71 isn't set on a NIC. That's why I sometimes set up the loopback adapter, and then this route add happens automatically, but it needs a subnet mask that doesn't match the IP, and I cannot set 255.255.255.255. I'm in real trouble here. Can I get some help? Thanks.
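    A hedged sketch of the usual workaround (an editor's addition): instead of fighting the route table, add the address directly to the Microsoft Loopback Adapter with a /32 host mask from the command line, which may accept the mask the GUI refused; "Loopback" is whatever that adapter is named in Network Connections:

        rem bind the external address to the loopback adapter with a host mask
        netsh interface ip add address "Loopback" 201.91.81.71 255.255.255.255

    With the address bound locally, connections from this PC to 201.91.81.71 terminate on the local machine, for any service, not just the web.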

    Read the article

  • Looking for a recommendation on measuring a high availability app that is using a CDN.

    - by T Reddy
    I work for a Fortune 500 company that struggles with accurately measuring performance and availability for its high-availability applications (i.e., apps that are up 99.5% of the time with 5-second page-to-page navigation). We factor in both scheduled and unscheduled downtime to determine this availability number. However, we recently added a CDN into the mix, which complicates our metrics a bit. The CDN now handles about 75% of our traffic, while sending the remainder to our own servers. We attempt to measure what we call a "true user experience" (i.e., our testing scripts emulate a typical user clicking through the application). These monitoring scripts sit outside of our network, which means we're hitting the CDN about 75% of the time. Management has decided that we take the worst-case scenario to measure availability. So if our origin servers are having problems but the CDN is serving content just fine, we still take a hit on availability. The same is true the other way around. My thought is that as long as the "user experience" is successful, we should not unnecessarily punish ourselves. After all, a CDN is there to improve performance and availability! I'm just wondering if anyone has any knowledge of how other Fortune 500 companies calculate their availability numbers. Look at apple.com, for instance: a storefront that uses a CDN and never seems to be down (unless there is about to be a major product announcement). It would be great to have some hard, factual data, because I don't believe we need to unnecessarily hurt ourselves on these metrics; we are making business decisions based on these numbers. I can say, however, that since these metrics are visible to management, issues get addressed and resolved pretty fast (read: we cut through the red tape pretty quickly). Unfortunately, as a developer, I don't want management to think that the application is up or down because some external factor (i.e., the CDN) is influencing the numbers. Thoughts? (I mistakenly posted this question on Stack Overflow; sorry in advance for the cross-post.)

    Read the article

  • No headphone or speakers plugged in - Windows 7 issue

    - by Amit Ranjan
    I am facing a weird issue between my sound card driver and Windows 7 (any edition). I have a Sony Vaio notebook (VPCEB24EN). Two days ago, on startup, wifi, charging, and the speakers were all disabled. I restarted the PC and everything worked fine. Later on I restarted my machine again, and then I found that my speakers were not working. I thought a restart would fix it; I restarted many more times, but it didn't help. I searched Google and found that it might be due to: 1) hardware not switched on from the BIOS; 2) no hardware; 3) overriding 64-bit drivers with 32-bit drivers. To get it working, I restored my laptop from scratch, but while restoring the PC, the Realtek HD drivers gave me an error 505. I then formatted the drive and installed Win7 Ultimate 32-bit (the PC came with Win7 64-bit Home Basic). I got lots of yellow exclamation marks in Device Manager, and installed all the drivers thinking this would resolve my issue. But even after installing all drivers on a fresh installation, I am still in the same position: a red cross on the speaker icon, "No Speakers or Headphone plugged in".

    Please note: my laptop is a Vaio E Series, VPCEB24EN.
    Audio: Intel® High Definition Audio compatible, but it accepts Realtek audio. Using BIOS Agent, I found an Intel Chipset 5 Series audio adapter and an ATI RV370 audio adapter on my board.
    Installed OS: Win7 32-bit Ultimate (factory default was Win7 64-bit Home Basic)
    Memory: 3 GB RAM / 320 GB HDD
    Display: ATI Mobility Radeon™ HD 5145 graphics

    Read the article

  • Not able to find scripts present in /etc/profile.d directory [on hold]

    - by priya
    I am using Red Hat Linux 6.0 on a DaVinci board. I have to change the system clock resolution, so I am changing the HZ environment variable. For this I have written a script that sets HZ=1000, put it in /etc/profile.d, and added a loop in /etc/profile so that, as usual, /etc/profile can load the scripts present in /etc/profile.d. But when I log into the system as root, it shows the error:

        -bash: ./etc/profile.d/resolution.sh: No such file or directory

    (resolution.sh is my script name). Also, why is it showing ./etc and not /etc here? Is something related to that? I also tried adding the script to /etc/init.d, but still no change in the value of HZ takes place. Please tell me where to make the change so that this environment variable gets changed.

    The script (resolution.sh) contains:

        #!/bin/bash
        export HZ=1000

    The content I added to /etc/profile is:

        if [ -d /etc/profile.d ]; then
            for i in /etc/profile.d/*.sh; do
                if [ -r $i ]; then
                    .$i
                fi
            done
            unset i
        fi
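    The error message itself points at the likely bug (an editor's observation): in the loop above, .$i expands to the single word ./etc/profile.d/resolution.sh, because the dot (source) command and the filename are not separated. A corrected sketch of the same loop:

        if [ -d /etc/profile.d ]; then
            for i in /etc/profile.d/*.sh; do
                # note the space: "." (source) is the command, "$i" its argument
                if [ -r "$i" ]; then
                    . "$i"
                fi
            done
            unset i
        fi

    Note also that a sourced profile script sets HZ only for login-shell environments; it does not change the kernel's timer tick.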

    Read the article

