Search Results

Search found 5169 results on 207 pages for 'lcd displays'.

Page 140 of 207

  • Apache2 - 500 internal server error

    - by Lucio Coire Galibone
    I'm running a VPS with CentOS 6 Linux, 4 GB of RAM, 10 GB of disk and 2 virtual CPUs (Intel Xeon L5640 @ 2.27GHz; my host says each virtual CPU is at least 0.5 physical CPU). At the busier times of day, trying to access my PHP script I intermittently receive "500 Internal Server Error". I turned Apache logging up to debug level and set PHP logging to E_ALL, but I can't find any reference to the 500 error in any log (and yes, I checked the right logs). There is no .htaccess file in the script's path. The strange thing is that the error starts at the first PHP line in the script: the preceding HTML displays correctly, but at the first PHP line the script returns a 500. CPU load is always fine (max 0.15 0.08 0.01) and RAM sits close to 95%, but it has only hit swap twice in a month, by 2-5 MB. Apache runs with prefork using these values:

        <IfModule prefork.c>
        StartServers          8
        MinSpareServers       5
        MaxSpareServers      20
        ServerLimit         280
        MaxClients          280
        MaxRequestsPerChild 4000
        </IfModule>

    Everything works correctly and I get no errors in quiet periods, but the errors start when traffic rises (6,000-9,000 visits per hour). Can I solve the problem by adding resources? (I can upgrade RAM up to 16 GB.) Could it be caused by hitting MaxClients (but Apache would log that, right)? If I upgrade RAM to 6 or 8 GB, should I calculate the MaxClients value as

        MaxClients = Total RAM dedicated to the web server / Max child process size

    where the max child process size is around 20 MB? What else could the problem be? Thanks in advance
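    As a rough sanity check on that formula, here is a sketch; the 6 GB figure and the 20 MB child size below are assumptions for illustration, not measurements from this server:

        # MaxClients ~ RAM available to Apache / average prefork child size.
        # Measure the real child size first, e.g.:  ps -ylC httpd --sort=rss
        apache_ram_mb=6144     # e.g. 8 GB total minus ~2 GB reserved for the OS, MySQL, etc.
        child_size_mb=20       # assumed average resident size of one prefork child
        echo $(( apache_ram_mb / child_size_mb ))   # -> 307, so ~280-300 is a sane ceiling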

    Read the article

  • Windows won't sleep after booting from grub

    - by mkasberg
    I recently added a second hard drive to my computer and am using it to dual-boot Linux (Ubuntu 12.04) with Windows 7. Both hard drives are SATA. I am using the default GRUB bootloader on the second hard drive; the Windows drive is unmodified. To get to GRUB, I changed the hard disk boot priority in my BIOS (P35-DS3L) to boot from the second drive. The problem I'm having is that when I boot to Windows 7 (on sda) from GRUB (on sdb), Windows 7 will not go to sleep (from the Start menu). The display shuts off momentarily as if it were going to sleep, then comes back on and displays the switch-user screen. powercfg -lastwake does not show anything. I am sure this is related to booting from GRUB on sdb, because when I change the hard disk boot priority in the BIOS to boot from my (unmodified) Windows hard disk, the computer goes to sleep fine. It occurred to me that installing GRUB on sda might solve the problem, but I'd rather not, since I like keeping the Windows hard disk unmodified so that booting to it from the BIOS goes directly to Windows. A possible workaround is to use the BIOS as a boot selector by pressing F8 to choose the boot device. Still, I'd like to know why the problem is happening in the first place.

    Read the article

  • needing storage integrity (write/read) test - for BASH

    - by Mr. Bash
    In need of shell scripts / bash commands to verify the data integrity of local hard drives, USB drives, etc. -- something like the well-known h2testw (www.heise.de/download/h2testw), or at least something common within the repositories. (h2testw writes a specific data string over and over onto the medium, then reads it again to verify it was written correctly, and reports write/read time and speed.) Please don't suggest

        dd if=/dev/random of=/dev/sdx bs=1k && dd if=/dev/sdx of=/dev/null bs=1k

    since it won't verify whether everything was written correctly; it only tests whether reads and writes to the device succeed. So far I'm not too happy with badblocks -w -v /dev/sdx1 either, since it seems rather slow, I don't know exactly what it writes, and I don't know whether it accounts for wear-levelling on flash media. There is also a program named F3 (http://oss.digirati.com.br/f3/) that needs to be compiled. Modelled after h2testw, the concept sounds interesting; I'd just rather have it as a ready-to-go bash script.
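    For what it's worth, a minimal sketch of the h2testw idea in plain bash: write files of reproducible pseudo-random data, drop the page cache, re-read, and compare checksums. The mount point, file size and count are placeholders, it only exercises free space, and it accounts for flash wear-levelling no better than badblocks does.

        #!/usr/bin/env bash
        # h2testw-style fill-and-verify sketch; destructive only to free space on $target.
        set -eu
        target=/media/usbdrive     # assumption: mount point of the drive under test
        chunk_mb=64                # size of each test file in MB
        count=10                   # number of test files

        write_sums=$(mktemp); read_sums=$(mktemp)
        for i in $(seq 1 "$count"); do
            # Reproducible pseudo-random data: an AES-CTR keystream seeded from the index.
            openssl enc -aes-128-ctr -nosalt -pass pass:"seed$i" </dev/zero 2>/dev/null \
                | dd of="$target/testfile.$i" bs=1M count="$chunk_mb" status=none
            md5sum "$target/testfile.$i" >> "$write_sums"
        done
        sync
        echo 3 | sudo tee /proc/sys/vm/drop_caches >/dev/null   # force re-reads from the medium
        for i in $(seq 1 "$count"); do
            md5sum "$target/testfile.$i" >> "$read_sums"
        done
        diff "$write_sums" "$read_sums" && echo "verify OK" || echo "MISMATCH - drive is suspect"
        rm -f "$target"/testfile.* "$write_sums" "$read_sums"   # optional cleanup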

    Read the article

  • Linux Mint reset display resolution from console

    - by wullxz
    I have Linux Mint 13 Xfce in a VMware Workstation 8 VM. I changed the resolution from 800x600 to 1280x768 and now I get logged out immediately every time I try to log in. I knew how to restore my old resolution back in the xorg.conf days, but Linux Mint now uses xrandr, which won't show any displays when I run xrandr, because X is not running (of course not -- I can't log in through the GUI). I know there are configuration files in /etc/X11/Xsession.d/ because I once configured a Debian-based thin client's resolution in /etc/X11/Xsession.d/91configure_display, but that file doesn't exist in my Linux Mint VM. So, how do I reset my X screen resolution from the console?

    Edit: I forgot to mention that I can't change the resolution from the console:

        # xrandr -s 800x600
        Can't open display

    This message appears every time I use xrandr or xrandr -s <resolution>.

    Update: I tried what bWowk suggested:

        # export DISPLAY=:0.0
        # xrandr -s 800x600
        No protocol specified
        No protocol specified
        Can't open display :0.0

    So that doesn't work either. Isn't there a configuration file that is executed every time X starts? X is running, by the way -- ps aux | grep X shows one /usr/bin/X process.
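    For what it's worth, a sketch of one way back in (the username, display number and config path are assumptions for a default Mint 13 Xfce install): xrandr needs both DISPLAY and the session owner's X authority cookie, and Xfce stores per-user display settings in its xfconf channel, so deleting that file makes the next login fall back to defaults.

        # Run from the console as root, pointing at the session owner's cookie:
        export DISPLAY=:0
        export XAUTHORITY=/home/youruser/.Xauthority
        xrandr -s 800x600

        # If that still can't open the display, drop the saved Xfce display settings:
        rm /home/youruser/.config/xfce4/xfconf/xfce-perchannel-xml/displays.xml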

    Read the article

  • All browsers refusing to load a specific image on a webpage?

    - by Johnson
    Out of nowhere today, all three of my browsers (Firefox/Chrome/IE, OS = Windows 7 x64) are refusing to load the homepage of interfacelift.com correctly. It works fine on other PCs in the house (on the same network), so it is definitely related to this one PC. The browser won't load the main image on the page correctly (even though the source code looks fine), yet if I point the browser at the exact location of that image, it displays fine. So I can get the HTML index (which locates the resource) and I can get to the resource -- why on earth isn't it displaying properly on the index page? It's almost as if the HTML rendering engine has gone bad in all three browsers at once. I've browsed to a bunch of other sites (including sites heavy on JS, with HTML far more complex than the one in question here) and see nothing funny. The only wonky thing I've done to this PC in the past several hours was replacing the system file Magnifier.exe with a copy of cmd.exe while playing around with some of the ideas mentioned in this guide. However, I've since restored the files to their previous state, and I don't see how Magnifier would be related to this even if I hadn't. Any ideas? I'm stumped! EDIT: Here is what the broken page looks like in Chrome. And here is the image loaded correctly by itself.

    Read the article

  • How can I set Thunderbird's "Recipient" column to display my email address rather than a friendly name?

    - by Howiecamp
    After configuring a single, unified inbox for multiple email accounts in Outlook 2007, I found Thunderbird 3's Smart Folders feature. It works great, providing individual inboxes for each of your email accounts plus a unified, virtual view across them. Thunderbird is smart enough that when I reply to an email addressed to a specific account, the reply is "From" that account's address. To know which of my accounts an inbound email was sent to, I added the "Recipient" column to the inbox Smart Folder. What's displayed in the Recipient column depends on how the sender (or their email client) addresses the email. If they send it to just "[email protected]" without specifying a friendly name, the Recipient column displays "[email protected]" and there's no ambiguity about which account the email was sent to. However, if the sender has me in their address book (likely stored with a friendly name), the message is addressed as "Howard Camp [[email protected]]" and the Recipient column shows just "Howard Camp". The problem is that if someone emails me with a friendly name at another of my accounts (e.g. "Howard Camp [[email protected]]"), the Recipient column also displays "Howard Camp" and I can't tell which account it went to without opening the message and/or looking at the details. How can I configure Thunderbird to always display my email address, rather than the sender-specified friendly name, in the Recipient column?

    Read the article

  • How to install a private user script in Chrome 21+?

    - by Mathias Bynens
    In Chrome 20 and older, you could simply open any .user.js file in Chrome and it would prompt you to install the user script. In Chrome 21 and up, however, it downloads the file instead and displays a warning at the top saying "Extensions, apps, and user scripts can only be added from the Chrome Web Store". The "Learn More" link points to http://support.google.com/chrome_webstore/bin/answer.py?hl=en&answer=2664769, but that page doesn't say anything about user scripts, only about extensions in .crx format, apps, and themes. This part sounded interesting:

        Enterprise Administrators: You can specify URLs that are allowed to install
        extensions, apps, and themes directly through the ExtensionInstallSources policy.

    So I ran the following commands, then restarted Chrome and Chrome Canary:

        defaults write com.google.Chrome ExtensionInstallSources -array "https://gist.github.com/*"
        defaults write com.google.Chrome.canary ExtensionInstallSources -array "https://gist.github.com/*"

    Sadly, these settings only seem to affect extensions, apps, and themes (as the text says), not user scripts. (I've filed a bug asking to make this setting cover user scripts as well.) Any ideas on how to install a private user script (one I don't want to add to the Chrome Web Store) in Chrome 21+?

    Update: The problem was that gist.github.com's raw URLs redirect to a different domain. So, use these commands instead:

        # Allow installing user scripts via GitHub or Userscripts.org
        defaults write com.google.Chrome ExtensionInstallSources -array "https://*.github.com/*" "http://userscripts.org/*"
        defaults write com.google.Chrome.canary ExtensionInstallSources -array "https://*.github.com/*" "http://userscripts.org/*"

    This works!
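    If it helps, a quick way to confirm the policy actually took effect (a sketch using the same keys as above) is to read it back and restart Chrome before retrying the install:

        defaults read com.google.Chrome ExtensionInstallSources
        defaults read com.google.Chrome.canary ExtensionInstallSources
        # Both should print the array of allowed URL patterns written above.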

    Read the article

  • Nvidia GTS 360M in Asus laptop shuts off when gaming on external display

    - by mr odus
    I have had this problem on my Asus G51j (Intel i5, Nvidia GTS 360M graphics), with the drivers just updated to the latest from Nvidia's site. With most games I play, if the laptop is hooked up to an external monitor (whether over HDMI or VGA), it hard shuts down after about 20 minutes, give or take. This tends to happen with more graphically intense games like the Call of Duty titles and BioShock. I'm running Windows 7 with the latest Nvidia drivers. Games work fine on the laptop's own screen, and movies and general computing work fine on the external displays. The laptop always sits on a cooling pad, and the most recent time it was in front of my AC unit; I ran HeatFan (or whatever the heat-tracking software was called) and my temperatures stayed normal right through a shutdown. This has been happening for the life of the laptop. I don't play very many games, and even fewer on an external monitor, so the issue doesn't come up much. Is it possible I have a faulty graphics card? Is there anything else I can try?

    Read the article

  • How to connect to a computer that is in Sleep mode over the internet

    - by Gerhard Weiss
    How can I connect to a computer that is in sleep mode over the internet? I am using LogMeIn to connect to another computer offsite. I just installed Windows 7 RC on that system and found that sleep mode actually works. Currently LogMeIn does not connect when the system is in sleep or hibernate mode (that is what its error message says when you try). Is there a way to get LogMeIn to connect to a system in sleep mode? Is there other software with similar functionality (RDP, etc.) that could be used on Windows 7 instead? I only use LMI for connecting and nothing else (no printing or file transfers), and an inexpensive option (such as free) would be better. I have seen websites mentioning "Wake on LAN" -- does anyone have good links on how to set this up so it can be triggered over the internet? Edited: It looks like LogMeIn BETA might be the solution: https://beta.logmein.com/welcome/nextgen/ Has anyone tried this beta yet?
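    The usual Wake-on-LAN recipe, sketched with placeholder values: enable WoL in the target's BIOS and NIC driver, forward a UDP port (commonly 7 or 9) on the remote router to the LAN broadcast address, then send a magic packet from anywhere. The wakeonlan utility named below is a common, freely available Perl tool; the hostname and MAC address are assumptions.

        # From any machine on the internet: send a magic packet for the sleeping PC's
        # MAC address to the remote router's public address on the forwarded UDP port.
        wakeonlan -i office.example.com -p 9 00:11:22:33:44:55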

    Read the article

  • Is there a small business router that shows bandwidth usage graphs in the admin panel?

    - by Robert Drake
    I support a large number of public libraries that are having their networks upgraded in response to a grant application. These libraries generally have between 6 and 15 computers and little or no technical support, either onsite or contracted remotely. To justify current and future purchases, a number of the libraries have asked for routers that can produce bandwidth-usage graphs they can show to their managing boards. Is there a small business router that displays traffic graphs in its administration web interface? The router needs to support DHCP and basic firewalling; no other features are required. Further, the reports just need to show overall trends -- it is not necessary to break traffic down by IP, by protocol/application, or by time of day, just an overall week-to-week, month-to-month trend line. I'm familiar with MRTG/PRTG and other tools that collect SNMP data from the router, but the libraries don't have the expertise for the configuration. I've considered installing the Tomato firmware on some cheap home/home-office routers, but a commercial product that can simply be purchased would be significantly simpler, and the library boards would be much more likely to approve the purchase of a commercial product over a 'hacked' one. Any assistance would be appreciated.

    Read the article

  • Diagnosing linux issues with ipod syncing in Ubuntu

    - by alexpotato
    Issue: I am currently using Ubuntu 9.10 with a 5th-generation 60 GB black iPod video/classic. In general, Ubuntu always detects the USB hard drive and displays it on my desktop. However, some applications detect the iPod (e.g. Rhythmbox and gtkpod do, but Banshee does not) and some don't. I narrowed the Banshee issue down to a bug that requires Nautilus to be restarted (although it would be nice not to have to do this). Also, whenever I sync with these applications, everything appears to work fine during the sync, but when I disconnect the iPod and browse it, all of the songs seem to be there while the playlists are not. If I reconnect the iPod, Banshee in particular reports the space usage as "other". What I am looking for is some way to at least understand what is and is not working, or directions to somewhere that can help me learn what's going on. I have already tried: IRC -- either the channel is too general (e.g. #ubuntu) or no one is ever on (e.g. #banshee); and the web -- most of what I've found is either too specific to one particular bug or too general. Any thoughts?

    Read the article

  • How do I get netcat to accept connections from outside the LAN?

    - by Chris
    I'm using netcat as a backend to shovel data back and forth for a program I'm making. I tested the program on the local network, and once it worked I thought it would simply be a matter of forwarding a port on my router to make it work over the internet. Alas! This seems not to be the case. If I start netcat listening on port 6666 with nc -vv -l -p 6666 and then go to 127.0.0.1:6666 in a browser, I see, as expected, an HTTP GET request come through netcat (and the browser sits waiting in vain). If I go to my.external.ip.address:6666, however, nothing comes through at all and the browser displays 'could not connect to my.external.ip.address:6666'. I know the port is correctly forwarded, as www.canyouseeme.org says port 6666 is open (and, when netcat is not listening, that it is closed). If I run netcat with -g my.adslmodem's.local.address to set the gateway address, I get the same behavior. Am I using that command-line option correctly? Any insight into what I'm doing wrong?
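    One thing worth ruling out first (a sketch, not a diagnosis): many consumer routers don't do NAT loopback, so hitting your own external IP from inside the LAN can fail even when the port forward is fine. Test from a genuinely external host and watch whether the packets reach the box at all:

        # On the server: listen, and in a second terminal watch for incoming packets.
        nc -vv -l -p 6666
        sudo tcpdump -ni any tcp port 6666

        # From a host OUTSIDE the LAN (phone hotspot, remote shell, etc.):
        nc -vv my.external.ip.address 6666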

    Read the article

  • Serving only certain files from a directory to users on IIS7

    - by HarbingTarbl
    I need to show the most up-to-date version of a certain file in a directory to users who access a folder on my site (let's call this folder logs). I can't just move the file into the folder, as another process relies on being able to find and edit the file while it is running. At first I thought I could create a folder on my site, give it the correct permissions and then create a symbolic link to the file; however, it seems IIS7 does not follow symlinks. Another solution would be a PHP script that pulls the correct file and displays it, but that felt like over-engineering. I know this would be simple on Apache, but I can't figure out how to do it with IIS7. To give an idea of the folder structure I'm working with, the directory looks like this:

        Root
        --File I need to serve
        --File containing plain-text passwords
        --Other folders/files

    I can't move any of these files. If I just serve the entire directory using virtual directories in IIS, I'll also be sharing files and folders containing configuration and other sensitive information.

    Read the article

  • Drive Mapped On Amazon EC2 Instance Startup Disconnected After Logon

    - by jsn
    I am launching an Amazon EC2 instance (Windows Server 2012), and on startup (through the User Data field) it runs this PowerShell script:

        <powershell>
        NET CONFIG SERVER /AUTODISCONNECT:-1
        # clear all prior connections
        net use * /delete /y > C:\delLog.txt 2>&1
        # mount new drive
        net use R: \\dbHost\share /user:username pass /persistent:yes > C:\useLog.txt 2>&1
        ipconfig /all > C:\ipLog.txt
        </powershell>

    When it launches and I connect to it through RDP, Explorer shows "Disconnected Network Drive (R:)". If I double-click it, the error message "R:\ is not accessible. Access is denied" is displayed. Normally, it would ask me for credentials to reconnect. I need this drive to stay connected for the duration of the instance.

    delLog.txt contents:

        You have these remote connections:
            T:    \\dbHost\share
        Continuing will cancel the connections.
        The command completed successfully.

    useLog.txt contents:

        The command completed successfully.

    ipLog.txt contents are as expected. The net use command works fine by itself; it connects. Anyone have any idea what could be wrong? There is only one account on these machines -- Administrator. It is a Samba share to a Linux server on a private network.

    Read the article

  • Leopard mail.app quoted-printable weirdness

    - by pehrs
    I am not sure whether this is a bug in Mail.app or a configuration option I just can't find; it might also be a strange side effect of GPGMail. Mail.app correctly displays all e-mails on my IMAP server except for those in my "Sent Messages" folder. In the sent messages folder it messes up åäö, in typical quoted-printable-with-wrong-charset fashion: they become ‰ˆ. Looking at the source of the e-mails, the header generated by Mail.app seems correct:

        Message-Id: <>
        From:
        To:
        In-Reply-To: <>
        Content-Type: multipart/signed; protocol="application/pgp-signature"; micalg=pgp-sha1; boundary="Apple-Mail-4--741321197"
        X-Smtp-Server: smtp.example.com
        Content-Transfer-Encoding: 7bit
        Mime-Version: 1.0 (Apple Message framework v936)
        Subject: Example subject
        Date: Fri, 26 Mar 2010 10:14:14 +0100
        References: <>
        X-Pgp-Agent: GPGMail 1.2.0 (v56)

        This is an OpenPGP/MIME signed message (RFC 2440 and 3156)
        --Apple-Mail-4--741321197
        Content-Type: text/plain; charset=ISO-8859-1; format=flowed; delsp=yes
        Content-Transfer-Encoding: quoted-printable

        <Text here with =E5=E4=F6>

        --Apple-Mail-4--741321197
        content-type: application/pgp-signature; x-mac-type=70674453; name=PGP.sig
        content-description: This is a digitally signed message part
        content-disposition: inline; filename=PGP.sig
        content-transfer-encoding: 7bit

        -----BEGIN PGP SIGNATURE-----
        Version: GnuPG/MacGPG2 v2.0.12 (Darwin)

        iEYEARECAAYFAkus62kACgkQlIRLofxhDjYnnwCcDmCXuMGsKlh3a418s12coJgn
        36sAoKMdkP3+g/OMK+Ps7AbjQq4Nbqzv
        =XMko
        -----END PGP SIGNATURE-----
        --Apple-Mail-4--741321197--

    Thunderbird has no problem displaying the messages. So, how can I get Mail.app to use the correct charset?
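    A quick sanity check, as a sketch: decode the quoted-printable escapes from the body above as ISO-8859-1 and confirm the stored bytes really are åäö, i.e. the data on the server is fine and only the charset used for display is wrong.

        # =E5 =E4 =F6 are the ISO-8859-1 code points for å ä ö
        printf '\xe5\xe4\xf6' | iconv -f ISO-8859-1 -t UTF-8    # prints: åäö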

    Read the article

  • Mac OS X software always orders files alphabetically rather than by type

    - by george
    I have noticed that many Mac applications sort files alphabetically rather than by type. A good example is Coda by panic.com: the files in its file menu are organized alphabetically. I asked them to add a feature to organize files by type, and they said it's a Finder thing. So I looked at other applications to see whether they organize by type, and noticed Dreamweaver CS4 has the same problem, as does Dreamweaver CS5. There must be something in the Mac that does this and that I can modify. I played with Spotlight, and it now displays its files by type (thinking that was what I could do), but it didn't take effect in other applications. What library are these applications using to display a file menu for their files? Here is an example -- the file menu layout of Coda by panic.com (I couldn't post another link because it wouldn't let me): 1) http://www.iaddesign.com/coda.png Can you see how everything is organized alphabetically rather than by folder? I just want the file menu to show all folders first, then all the files. There must be a way to modify the Mac to let me do this.

    Read the article

  • OS X Hard drive recovery

    - by Adam
    I am trying to recover data from a bad Seagate 1 TB hard drive in a 2010 iMac. One day the iMac wouldn't boot (stuck at the gray screen on startup). I removed the hard drive from the iMac and connected it to a MacBook using a 3.5" HDD-to-USB adapter. The hard drive wouldn't mount, but Disk Utility did show two partitions on the disk. I tried running DiskWarrior; it showed thousands of errors but the drive still wouldn't mount, and by then Disk Utility showed only one partition. Next I put the hard drive in a desktop PC and ran SpinRite, which gave me several division-overflow errors (even when running SpinRite with a newer version of DOS). The SMART status reports that the drive has had failures, and HD Tune showed it had once hit 59 degrees Celsius. Disk Utility gives me the following message when running a repair: "Error: Disk Utility can't repair this disk. Back up as many of your files as possible, reformat the disk, and restore your backed-up files." Overall, the hard drive spins up and sounds OK -- there are no clicking noises -- but it won't mount and displays as a light gray "Macintosh HD" in Disk Utility. Any tips or advice on how to recover data from this drive would be GREATLY appreciated! Are there any other tools I can try before calling it quits on this drive? Thank you
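    Before any further repair attempts, it is usually worth imaging the failing drive and working on the copy; a minimal GNU ddrescue sketch (ddrescue is installable via Homebrew or MacPorts; the device node and destination paths are assumptions, so check them with diskutil list first):

        diskutil unmountDisk /dev/disk2            # keep OS X from touching the failing drive
        sudo ddrescue -r3 /dev/rdisk2 /Volumes/Backup/imac-hd.img /Volumes/Backup/imac-hd.map
        # -r3 retries bad sectors three times; the .map file lets an interrupted run resume.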

    Read the article

  • Tool to modify properties/metadata of a PDF? i.e. Change "Title", "Author"? Sony Reader showing som

    - by Chris W. Rea
    I own a Sony Reader PRS-600 ebook reader and recently bought a ton of DRM-free Manning Publications ebooks. Many of the books are PDFs, since not all the ones I wanted are available in epub format. The problem: some of the PDF books I purchased have incorrect or missing metadata. Making things worse, the Sony Reader only displays the "Title" from the PDF metadata when listing books in the Reader's collection -- it doesn't display the filename. So even though I have a PDF informatively named "Windows PowerShell In Action.pdf", it shows up as "untitled" in the Reader. Imagine how useful the Reader's list of titles becomes when many are just "untitled" or "unnamed document"! Yes, it is maddening. So -- short of expecting the publisher to fix the files or Sony to add a filename-based list instead -- I'm looking for a way to fix the PDF metadata myself. I can view the metadata with Adobe Reader, but it doesn't permit modification of the properties. Leading to the question: is there a tool -- free, or cheap -- for either PC or Mac that can modify the properties/metadata of a DRM-free PDF document? I want to correct the "Title" and "Author" fields, specifically.
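    One candidate that fits the "free, PC or Mac" requirement is exiftool, a Perl-based command-line tool that can write PDF metadata. A sketch, with the title and author values below being illustrative guesses rather than values taken from the file:

        exiftool -Title="Windows PowerShell in Action" -Author="Bruce Payette" "Windows PowerShell In Action.pdf"
        exiftool -Title -Author "Windows PowerShell In Action.pdf"   # read the fields back to verify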

    Read the article

  • Question about Displaying Documents and the CQWP in MOSS 2007

    - by Psycho Bob
    My organization is in the process of converting our intranet over to a SharePoint solution. Part of this intranet will be the movement and organization of all our internal documents. Currently we have 11 pages of document links, each with its own subheadings. So far I have set it up so each document has a custom field called "Page" with a checkbox list of all the document pages on the intranet site. On each individual page I have set up a Content Query Web Part that displays the documents whose Page value matches (i.e. if a document's Page value has "HR" checked, it will appear on the HR page). The goal of this setup is to allow the nontechnical personnel who will be responsible for maintaining the documents to upload new documents to the document list and note which pages they should appear on, without having to manually update the pages themselves. The problem is that I cannot find a good way to sort the documents into their subheadings once they are on the appropriate page. I could create individual checkboxes for each page/subheading combination, but that would create a list of approximately 50-75 items. Does anyone have any ideas on how I could accomplish this, either via the CQWP or by other means?

    Goals/requirements of the installation:
        - Allow intranet documents to be maintained by nontechnical personnel
        - Display documents on the appropriate pages without the user having to edit the actual page or web part
        - Denote a document's page location using user-settable document attributes (if possible)
        - Maintain the current intranet organization and workflow
        - Use only one document list without subdirectories

    NOTE: I am aware that this is not the most efficient or elegant way to do things, but these are the requirements I have been given for the project.

    Read the article

  • None of the USB controllers are working after a crash

    - by Cray
    So I have a GA-P35-DS4 motherboard with a Q6600, running Windows 7 x64. After a random crash with a bluescreen (not caused by anything in particular, as I recall), none of the USB controllers are working, and neither are any of the devices connected to those USB ports. All the controllers show up in Device Manager, but every one has a warning icon (they are not functioning properly). Windows identifies them correctly -- it shows the exact model of each controller and says the driver is installed. Now, when I try to reinstall the driver ("Update driver" in Device Manager), it finds the driver but then quits, saying the driver was found but could not be installed; additionally, it displays "This operation requires an interactive window station", whatever that means. Now get this: the same thing happens with a new PCI USB controller card! It is found (in fact, each time the machine starts it is found as new hardware), but trying to install its drivers leads nowhere; I get the same message about an interactive window station (I tried installing the drivers from the accompanying CD). I have tried deleting those devices from Device Manager and letting Windows find them again, but that leads to the same results, and none of the ports on the extension card work either. This should not be a hardware problem: a USB keyboard connected to the built-in controller works in the BIOS, and even in another OS on the same computer (an old XP installation). What to do? Will I have to reinstall? Can deleting infcache.1 help? Is there some way to make Windows remove all the old drivers and detect all the hardware again, this time looking for drivers only on a Windows install disk or something similar?

    Read the article

  • Fix overscan in Linux with Intel graphics Vizio HDTV

    - by Padenton
    I am connecting my server to my HDTV so that I can conveniently display things there, but my VIZIO HDTV cuts off all four edges of the picture. I realize it is not optimal to be running a GUI on a server; this server will not have much external traffic, so I prefer it for convenience. I have already spent countless hours searching for a fix, but everything I could find either required an ATI or NVIDIA graphics card or simply didn't work. In Windows, the Intel driver has a setting for underscan, though it only seems to be reachable through a glitch.

    My specs:
        - Ubuntu Linux 12.10 (Quantal), likely to switch to Arch; this is a home server with KDE for managing (for now, at least)
        - Graphics: Intel HD Graphics 4000 (Ivy Bridge)
        - Motherboard: ASRock Z77 Extreme4
        - CPU: Intel Core i5-3450
        - Monitors: a Dell LCD monitor, plus the Vizio VX37L_HDTV10A 37" on HDMI input

    I have tried all of the following, over both HDMI-to-HDMI and DVI-to-HDMI cables connected to the ports on my motherboard: setting properties in xrandr, making sure all drivers are up to date, and trying several different modes. The TV was "cheap" (max resolution 1080i), yet I can get a 1920x1080 modeline without difficulty in both GNU/Linux and Windows. There is no setting in the TV's menu to fix the overscan (I have tried all of them; I realize it's not always called overscan), and I have been in the service menu, which also has no option for it -- no aspect-ratio settings, etc. The TV has a VGA connector, but I am unsure whether that would fix it, as I don't have a VGA cable long enough and am not sure it would give me the 1920x1080 resolution I want. Using another resolution does not fix the problem, and when I tried custom modelines with the dimensions of my screen's viewable area, it wouldn't let me use them.

    Ubuntu apparently doesn't generate an xorg.conf file by default. I read somewhere that modifying it may help, so I tried X -configure several times (with reboots, etc.), but it consistently gave the following error messages. In the log file:

        ... (WW) Falling back to old probe method for vesa
        Number of created screens does not match number of detected devices.
        Configuration failed.

    In the output:

        ... (++) Using config file: "/root/xorg.conf.new"
        (==) Using system config directory "/usr/share/X11/xorg.conf.d"
        Number of created screens does not match number of detected devices.
        Configuration failed.
        Server terminated with error (2). Closing log file.

    I also tried the 'overscan' property in xrandr:

        root@xxx:/home/xxx# xrandr --output HDMI1 --set overscan off
        X Error of failed request:  BadName (named color or font does not exist)
          Major opcode of failed request:  140 (RANDR)
          Minor opcode of failed request:  11 (RRQueryOutputProperty)
          Serial number of failed request:  42
          Current serial number in output stream:  42

    'overscan on', 'underscan off' and 'underscan on' were all tried as well. I originally tried this on Ubuntu 12.04, failed, and updated to 12.10 when it was released. All software is up to date. I am not opposed to reinstalling my OS; I likely will anyway (my preference being Arch).
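    For what it's worth, a quick check (a sketch): the BadName error above usually just means the intel driver doesn't expose an 'overscan' or 'underscan' property on that output at all, so listing the RandR properties first shows whether the --set route can work with this driver:

        # List every output's RandR properties; if nothing overscan-related appears
        # under HDMI1, --set overscan/underscan is a dead end with this driver.
        xrandr --properties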

    Read the article

  • TV not detected by Windows/VGA - when there is a WHDI device in the signal chain

    - by ashwalk
    I'm at my wit's end with this one... I had an EVGA GTS 250, and I used to plug its HDMI out into a WHDI sender, which transmitted to its corresponding WHDI receiver 15 ft away, which then connected to a Samsung LN40D LCD TV through another HDMI cable:

        PC/VGA > [hdmi cable] > WHDI sender > [air] > WHDI receiver > [hdmi cable] > TV

    It was perfect and stable, with no perceivable latency. I just plugged everything in the first time and it worked instantly: it carried 5.1 audio, and Windows/the NVIDIA Control Panel detected the TV by its name. The WHDI device is this one: http://goo.gl/Q8iWI5

    Now I have bought an EVGA GTX 650, and WHDI doesn't work anymore. Neither Windows nor the NVIDIA Control Panel will detect the TV, only the monitor connected via DVI. The TV screen shows "TX202913 connected. Check video signal." on top of a black screen. The WHDI device is not the problem itself; it is just not allowing a direct connection between PC and TV -- I would bet that if I put an AVR in its place I'd have the same issue. The HDMI output on the new card works with other monitors, and if I put the older card back, WHDI works normally. I have googled this for five months, on and off. Once I found a page showing how to force a display device to always-on through a registry edit; after restarting Windows, the TV (through WHDI) displayed my extended or duplicated desktop at 1024x768 ONLY, and listed the display as a "digital display". I could not change the resolution, and it wouldn't play back audio (the option was available in the NVIDIA Control Panel HDMI audio settings, but it did not work). This proves there is no conflict between the devices, except that, software-wise, Windows cannot for the life of it understand that there's a TV there to send video and audio to. Since this wouldn't do (no audio, poor video), I reverted the registry edit. It is also not an EDID problem in the TV, since it works when connected directly. The last weird bit of this saga is that today I remembered Windows' "Add a device" dialog and gave it a go; a "Samsung Generic UPNP TV" showed up, and I promptly installed the drivers for it, rising to a climax of... NOTHING HAPPENING. As far as I can tell it really didn't change anything, other than using up a few KB on my main disk. I should also say that I have looked a LOT into handshake problems and nothing applied either. Do any of you have an idea of what may be going on? I can't stand the thought of a US$200 device not working because of the addition of a newer graphics card, when the much older one had no issues. There is absolutely NO REASON for this to happen. There is NO documentation on WHDI online -- apparently no one buys this stuff. For the same reason, no one responded to this same plea for help on the NVIDIA and EVGA forums. Worst case, this can serve as a warning about this setup for people in the future. Thanks in advance.

    Read the article

  • Outlook 2007/2010 autodiscovering old Exchange info

    - by Dan
    I currently have an Exchange setup as follows: two Exchange 2003 servers clustered together as the current mailbox stores, one Exchange 2003 server set up as a front end, one Exchange 2007 server set up as a front end (installed for testing by my predecessor and never really used intentionally), and now four Exchange 2010 servers -- two mailbox servers in a DAG and two with the Hub/CAS roles. Everything seems to be working fine with one exception: Outlook 2007/2010 clients are still autodiscovering the test 2007 front end rather than the 2010 CAS array. I know this because there's an expired certificate on the 2007 box, so the client displays a certificate error when it attempts to autocreate the Outlook profile. From what I've read, there is an SCP (Service Connection Point) in AD that points to the old server and is returned first, causing Outlook to try it first. How can I prevent Outlook from even attempting to connect to this 2007 box from now on? As http://www.msexchange.org/articles_tutorials/exchange-server-2010/management-administration/exchange-autodiscover.html puts it: "When Outlook 2007 is installed on a domain-joined workstation, the Outlook client will query Active Directory for the Autodiscover information. Active Directory will return a list of SCPs and the Outlook client will automatically select the first SCP in this list. Using the information found in the SCP, the Outlook client will contact the Client Access Server for its configuration information and the Outlook client will be configured automatically."

    Read the article

  • Massive SQL issue shutting down our site.

    - by Pselus
    Our website started timing out like crazy today; all of our clients are finding it unusable. The only error we can trace down as a potential problem is this:

        SQLAllocHandle on SQL_HANDLE_DBC failed
        Error Category: Microsoft OLE DB Provider for ODBC Drivers

    I have no idea what it means or how to go about fixing it. Has anyone encountered this error before? Currently you can log in to our site, but once you go to do anything else you find yourself logged out, or nothing happens. We have a lot of Ajax going on, so the "nothing happens" probably means the Ajax pages aren't loading properly because of the logouts, so nothing displays to the user. Like I said, I'm at a loss -- any advice on this error? EDIT: I realize this isn't strictly a programming question, but we are a small startup that only yesterday started talking about how we need a backup server running. Apparently we talked about it too late. We don't have a DBA, just two mid-level programmers trying their hardest to keep our clients happy. So please, if you have any assistance, give it, but please don't close my question right now. EDIT 2: It turns out we had something running on our server called "ServerMask", which makes our IIS server look like Apache to the outside world. Shutting it down fixed our issue. Still no idea why it was causing trouble, but it was apparently the problem. Thanks to everyone who tried to help.

    Read the article

  • Subdomains and address bar

    - by Priednis
    I have a fairly noob question about how subdomains work. As I understand it, the DNS server first specifies that a request for subdomain.domain.com goes to the IP address of domain.com, and the web server at domain.com then processes the request and serves the appropriate subdomain page. It is not entirely clear to me how the server (Apache, for example) does this. As I understand it, there can be entries in the vhosts.conf file that specify the folders containing the subdomain content, something like:

        <VirtualHost *>
        ServerName www.domain.com
        DocumentRoot /home/httpd/htdocs/
        </VirtualHost>
        <VirtualHost *>
        ServerName subdomain.domain.com
        DocumentRoot /home/httpd/htdocs/subdomain/
        </VirtualHost>

    and there can also be redirect entries in .htaccess files, like:

        rewritecond %{http_host} ^subdomain.domain.com [nc]
        rewriterule ^(.*)$ http://www.domain.com/subdomain/ [r=301,nc]

    In the latter case, however, the user gets directed to the directory containing the subdomain content but ends up "outside" the subdomain. What I would like to know is: when going to subdomain.domain.com, how does subdomain.domain.com remain visible at the beginning of the address in the browser's address bar? Can it be done with an alternative .htaccess entry? And if a VirtualHost entry is specified in vhosts.conf, does that mean a new user account has to be created for access to that directory?

    Read the article
