Search Results

Search found 15249 results on 610 pages for 'tv shows'.

Page 468/610

  • nginx won't serve an error_page in a subdirectory of the document root

    - by Brandan
    (Cross-posted from Stack Overflow; could possibly be migrated from there.) Here's a snippet of my nginx configuration: server { error_page 500 /errors/500.html; } When I cause a 500 in my application, Chrome just shows its default 500 page (Firefox and Safari show a blank page) rather than my custom error page. I know the file exists because I can visit http://server/errors/500.html and I see the page. I can also move the file to the document root and change the configuration to this: server { error_page 500 /500.html; } and nginx serves the page correctly, so it doesn't seem like something else is misconfigured on the server. I've also tried: server { error_page 500 $document_root/errors/500.html; } and: server { error_page 500 http://$http_host/errors/500.html; } and: server { error_page 500 /500.html; location = /500.html { root /path/to/errors/; } } with no luck. Is this expected behavior? Do error pages have to exist at the document root, or am I missing something obvious? Update 1: This also fails: server { error_page 500 /foo.html; } when foo.html does indeed exist in the document root. It almost seems like something else is overriding my configuration, but this block is the only place anywhere in /etc/nginx/* that references the error_page directive. Is there any other place that could set nginx configuration?
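
    A minimal sketch of one configuration that is commonly suggested for this situation, assuming the 500 is generated by a FastCGI or proxied backend rather than by nginx itself (the *_intercept_errors directives are off by default, so backend-generated errors are passed through untouched and error_page never fires):

        server {
            error_page 500 /errors/500.html;

            # only needed when the 500 comes from an upstream application;
            # without these, nginx relays the backend's own error body
            proxy_intercept_errors on;      # for proxy_pass backends
            fastcgi_intercept_errors on;    # for FastCGI backends

            location ^~ /errors/ {
                # served from the document root; set a root here if the
                # error pages live elsewhere
                internal;   # optional: hide the error pages from direct requests
            }
        }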

    Read the article

  • Mercurial confusion - commit / push, backouts

    - by Madmanguruman
    I'm trying to set up a repository on a shared filesystem. I'm using Mercurial 2.1.2 on a Windows-based architecture. I start with an empty folder on the shared filesystem and create a repository in it. After this, I dump in the baseline files, and add them to versioning, then commit the changes. I then clone the repository to my local hard drive. I then make a change in my local repository, commit it, then push back to the shared filesystem repository. The shared repo graph I get in TortoiseHg looks strange (to me). On the shared repo, the working directory always shows up at the top, then the graph goes 'down' to rev. 0 then back 'up' again through various revisions. It looks to me like I have two different branches, even though everything is on the default branch. Also, that 'top' revision always says "* Working Directory * Not a head revision!" I noticed that in my local repository, I don't get that dangling working directory at the top of the list - everything is in one branch. I also noticed that on my local repository, I can back out the tip revision with no problem. On the shared filesystem repository, I cannot, since I get an error ("Cannot backout change on a different branch"). How can this be? Aren't they supposed to be identical to each other? Am I fundamentally doing something wrong?
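
    One hedged observation rather than a definitive diagnosis: pushing into a repository never updates that repository's working directory, so the shared repo's working directory stays parked on the old revision and TortoiseHg flags it as "not a head revision". Something along these lines, run inside the shared repo, would normally bring the two views back into line:

        cd /path/to/shared/repo
        hg heads            # confirm there is only one head on the default branch
        hg update default   # move the working directory to the tip of default
        hg summary          # the working directory should now report the head revision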

    Read the article

  • Robocopy (without /l flag) running for hours making log file huge, but not actually copying anything

    - by Mickster
    Here's my command: Robocopy C: C:\C_root /FP /BYTES /TEE /S /E /COPYALL /DCOPY:T /MOVE /Z /ETA /XJ /R:2 /W:30 /XF pagefile.sys /XD /LOG:C:\robocopy.log. BTW, notice the /XD option above. After that option I listed a few directories that I wanted to omit. I showed these between angle brackets like this: (left angle bracket) a few dirs I wanted to exclude (right angle bracket). Amongst these dirs was the C_root dir itself, so that it did not get into an "infinite" recursion. (This part was stripped from the post because angle brackets apparently have a special meaning to superuser.com for hyperlinks.) The command window this was running in listed a few "EXTRA" files and then "hung". By this, I mean no more output, no command prompt, and if I tried to scroll it up, it would immediately scroll right back to the bottom. After about six hours, it finally finished, although I never got a command prompt back in the window I started it in. DIR shows the log file at more than 1.3GB, but when I try to do a MORE on it, I get "Cannot access file". C:\C_root never grew larger. Does anyone have an idea what is going on here?
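
    For reference, a hedged reconstruction of the command with the excluded directories spelled out after /XD (the destination has to be listed there or the copy recurses into itself) and with /NP added, which suppresses the per-file progress percentages that tend to balloon the log during restartable (/Z) copies; the excluded directory names are placeholders:

        robocopy C:\ C:\C_root /S /E /COPYALL /DCOPY:T /MOVE /Z /ETA /XJ /R:2 /W:30 ^
            /XF pagefile.sys ^
            /XD C:\C_root "C:\System Volume Information" ^
            /NP /FP /BYTES /TEE /LOG:C:\robocopy.log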

    Read the article

  • How to stop my wireless adapter from receiving DHCP DNS settings from the router (Windows)

    - by baobeiii
    Hi, I have a Windows 7 computer which is connected via VPN to an OpenVPN server which happens to be in another country. I have all internet traffic being routed from my computer through the VPN to the server. However, DNS queries are not going through the VPN, but are instead going directly to my ISP's DNS via a route outside of the VPN tunnel. This is happening because my wireless adapter is configured to obtain the DNS server address automatically. The router that stands between my computer and the internet happens to have a DHCP server running on it that is assigning my computer the DNS addresses of the ISP. The issue is, I haven't been able to stop the wireless adapter on my computer from receiving the DNS settings from the router. I've tried selecting 'use the following DNS server addresses' and then just leaving them blank, but ipconfig /all shows me that this hasn't worked and I'm still getting DNS from the router. So is there any way to completely stop my Windows wireless adapter from receiving these settings from the router? I have the OpenVPN server pushing to my computer's tun adapter the DNS that it should be using. I'd rather solve this in a way that doesn't involve disabling the DHCP server on the router or fiddling with the router. The reason is I'm on a laptop and I want my VPN not to leak DNS even when I'm out, for example in wireless hotspots. I know if I could just force the wireless adapter to ignore the router's DHCP server, then my DNS queries would go through the tunnel to the DNS address pushed by the OpenVPN server. Sorry, I know that's long-winded; if you have any ideas please do tell me. Thanks and merry Xmas.
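
    One approach that usually achieves this (sketched with placeholder names and addresses): leave the adapter on DHCP for its IP address but give it static DNS servers pointing at the address the OpenVPN server pushes, so the router-supplied DNS is simply never used. Leaving the fields blank falls back to DHCP, which is why that attempt had no effect.

        rem assumes the adapter is named "Wireless Network Connection" and the
        rem VPN-pushed DNS server is 10.8.0.1 - both are placeholders
        netsh interface ipv4 set dnsservers name="Wireless Network Connection" source=static address=10.8.0.1
        ipconfig /flushdns
        ipconfig /all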

    Read the article

  • Problem with Amiga 1200 accelerator board

    - by cc0
    I just recently walked past a dump, where out of the corner of my eye I spotted something that looked like a huge keyboard. I went to take a closer look, and found out that it was an Amiga 1200 with a 030 accelerator board and Scala dongle. Jackpot! So anyway; I dried it, cleaned it, it works, but the floppy was not powering on and same with the hard drive. I am using an old Amiga 1200 PSU that was making some strange high-pitched noise when I tried to boot the Amiga with the hard drive installed in it. I removed the hard drive and it booted fine with the PSU not emitting any detectable noise. However, when I have the 030 installed it sometimes reboots and shows a red "Software Error" screen. I tried removing the memory on the board, same effect. Sometimes it does not boot at all, just gives a black screen. Someone suggested the card had problems with 3.1 ROMs, but this Amiga has only 3.0 ROMs installed. Does anyone have any theories as to why it seems unstable? I don't have any other Amiga parts to cross-swap with to test a lot of things, so I'd really appreciate some sound input here so I'd know what to look for in order to try to fix it. And merry Christmas everyone :]

    Read the article

  • Why am I getting programs stuck in log_wait_commit under Linux?

    - by staticsan
    There is something subtly wrong with my Linux install that I just can't locate. It is Ubuntu Lucid Lynx (10.04) 64-bit. Hardware is a Dell Optiplex 960: Intel Core 2 Quad CPU, 8GB of RAM, 2x 300GB HDDs. /home is ext2 on one disk and everything else is on the other (/ is ext3). I have VirtualBox running a 64-bit Vista image for Outlook calendaring, but the heavyweight apps are IntelliJ, NetBeans, MySQL and Opera. Opera also loads my mail (IMAP), of which there are over 10,000 messages. The problem is that Opera stalls for a few seconds from time to time. Watching the process list shows it's in log_wait_commit, which means (as far as I have figured out) the filesystem is holding things up. Sometimes I can make this happen by doing a subversion update, but usually it happens for no reason I can see. It usually happens to Opera, but I've seen NetBeans go under, too. It doesn't make the app crash - it's just completely unresponsive for a few seconds. Googling has not helped. The closest I got was to remove the sync attribute in the file system. This achieved nothing. On the advice of a Linux guru friend, I lowered /proc/sys/vm/dirty_writeback_centisecs to 300, but that didn't do anything, either. And it was all he could think of. What is going on and can I fix it? (And how?)
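
    A hedged diagnostic step that often narrows this down; it changes nothing, it just asks the kernel to list tasks stuck in uninterruptible sleep so you can see what the journal commit is waiting behind:

        # requires the magic SysRq key to be enabled
        echo 1 | sudo tee /proc/sys/kernel/sysrq
        echo w | sudo tee /proc/sysrq-trigger    # dump blocked (D-state) tasks
        dmesg | tail -n 100                      # the stack traces show where log_wait_commit is stuck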

    Read the article

  • How do you permanently disable the 'This Connection is Untrusted' page on Firefox

    - by TheIronChef9
    I'm going insane. Can someone please help me to COMPLETELY DISABLE the 'This Connection is Untrusted' page in Firefox? Facts: I am running Firefox 23.0 on an Ubuntu machine (downloaded and installed Ubuntu today). It is a work computer and I have to use my employer's proxy. Visiting webpages/webapps like Gmail or Google brings up the 'This Connection is Untrusted' page, and I have to go through the whole tedious task of selecting 'I Understand the Risks', adding exceptions, etc. The fact is, I don't care about the risks. I would rather this computer melt into the ground than have to see that page ever again. I want to dance naked in untrusted pages and not give a damn about the consequences. I just never want to see that page again. Ever. For some sites (e.g. Wikipedia), the CSS doesn't load and I end up seeing them in plain text. As a result these sites are completely useless. I've wasted hours trying to solve this for stackoverflow.com. These issues happen in Firefox on my Windows XP machine as well (also using the same proxy). I don't want to export/import certificates or create exceptions for every site that shows this bloody page. I just want this page gone. I don't want Firefox to tell me what's safe and what's not. Also, my system time and date are correct. I've also tried the lies on this page, with no good results. Edit: I've also tried going into the Advanced - Certificates - Validation setup page and unchecking the 'Use the Online Certificate Status Protocol (OCSP) to confirm the current validity of certificates' checkbox. Nothing happened, even after restarting Firefox or rebooting. I need help. Thanks.
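
    Since the warnings here come from an intercepting proxy re-signing HTTPS traffic, the usual fix is to import the proxy's CA certificate into Firefox's trust store once, rather than suppress the page. A hedged sketch using NSS certutil (package libnss3-tools on Ubuntu); the certificate path and profile directory are placeholders:

        sudo apt-get install libnss3-tools
        certutil -A -n "Employer proxy CA" -t "C,," \
            -i /path/to/proxy-ca.pem \
            -d ~/.mozilla/firefox/xxxxxxxx.default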

    Read the article

  • Getting PAM/user info into php - something like Net_Finger instead of a db?

    - by digitaltoast
    I've got a very small user group who just need to log in, upload, check and then move specific files to a different area when ready. Right now, I use the nginx PAM auth module to log them in against their Unix accounts. As their login is their home directory, I've already got the info to send the uploads to the right area - one line of PHP and no database needed. But I'm maintaining a separate DB just so PHP can welcome them, grab their email and send them an email when processed. Yes, sure, I could use NoSQL or SQLite instead so as to not need a whole MySQL install. But it occurred to me that, as I've got all these blank user fields for phone numbers that I could populate with any data, I could use something like PHP's Net_Finger. Which failed for me with: sudo pear install Net_Finger Starting to download Net_Finger-1.0.1.tgz (1,618 bytes) ....done: 1,618 bytes could not extract the package.xml file from "/build/buildd/php5-5.5.9+dfsg/pear-build-download/Net_Finger-1.0.1.tgz" Download of "pear/Net_Finger" succeeded, but it is not a valid package archive Error: cannot download "pear/Net_Finger" At which point I thought I'd stop and take a ServerFault reality check - is this a really bad/dangerous/stupid idea just to stop me having to maintain details in two places rather than one? Is there a better way? Googling shows that it's not an oft-asked thing, so perhaps with good reason?
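
    If the goal is simply "no second copy of user details", the finger route may be overkill: PHP's posix extension can read the same passwd/GECOS data the PAM accounts already carry. A minimal sketch, assuming the web server runs on the same box as the accounts and that you adopt the convention of stashing the email address in a spare GECOS subfield (that convention is the assumption here):

        <?php
        // hypothetical helper: look the user up straight from the system accounts
        $pw = posix_getpwnam($username);                        // name, uid, gecos, dir, ...
        $gecos = array_pad(explode(',', $pw['gecos']), 4, '');  // full name, office, phones, other
        $fullName = $gecos[0] !== '' ? $gecos[0] : $pw['name'];
        $homeDir  = $pw['dir'];                                 // the upload target you already use
        echo 'Welcome, ' . htmlspecialchars($fullName);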

    Read the article

  • Why is my browser using so much memory?

    - by Steve
    Hi. I've recently had problems with Firefox running very slowly when I have many tabs open; say 20 tabs. My whole system would slow down. I decided to give Google Chrome a try, and it started out fine. But lately I am finding that it, too, slows down my whole system. Looking at Task Manager, chrome.exe is using about 250MB of memory across about 6 different entries. However, when I shut Chrome down, memory usage is reduced by about 600MB. How can this be? (A screenshot showed the drop in memory usage after ending Chrome.) When my system locks up with Chrome having many tabs open, it takes 10 seconds to load the Start Menu, 10 seconds to expand All Programs, and each folder and subfolder, and 30 seconds for the program to be highlighted under my mouse. It also takes 10 seconds to switch to Notepad. Why is Chrome appearing to use so much more memory than Task Manager indicates? Why is my pagefile being used when I have around 1.1GB of memory? Can I set Chrome to run in RAM and not in the pagefile? How can 20 tabs use 600MB? That's 30MB per tab. Thanks for your help.

    Read the article

  • Constant crashes in Windows 7 64-bit when playing games

    - by yx.
    I've tried everything I can possibly think of in trying to fix this problem and I'm totally out of ideas, so any help would be appreciated: The problem: whenever I fire up a game, it works for a short while with no problems and then it crashes. Either it's a hard crash, forcing me to reboot, or Windows reports that the display driver has stopped working and recovered. Here is a list of things I've already tried: Drivers - tried the latest drivers (Catalyst 9.12) as well as the stock drivers that came with the video card. Also have the latest BIOS/chipset. Memtest - Ran Memtest86+ overnight, had no problems; the Windows diagnostic tool also does not find any problems. Overheating - Video card/CPU temperatures are well below peak (42 and 31 degrees Celsius respectively). PSU Voltage - CPUID shows that the voltage levels are all above what they should be. The PSU itself is only roughly 16 months old and is a good model. HDD - No errors when checked. GPU - Brand new (replaced the previous card since I thought it was the problem, apparently not). Overclocking - Everything is at stock levels, memory voltage is set to the manufacturer's standard. Specs: Motherboard: ASUS P5Q Pro CPU: Core 2 Duo E8400 3.0 GHz OS: Windows 7 Home Premium 64-bit Memory: Mushkin Enhanced 4GB DDR2 GPU: Sapphire HD 5850 1GB PSU: SeaSonic M12 600W ATX12V DirectX: DX11 Event Viewer after a crash always has these logged: A fatal hardware error has occurred. Reported by component: Processor Core Error Source: Machine Check Exception Error Type: Bus/Interconnect Error Processor ID: 1 The details view of this entry contains further information. A fatal hardware error has occurred. Reported by component: Processor Core Error Source: Machine Check Exception Error Type: Bus/Interconnect Error Processor ID: 0 The details view of this entry contains further information. A previous card that I had (4850x2) also had these errors, so I changed video cards, but the same thing is happening.

    Read the article

  • Using multiple USB webcams in Linux

    - by rachelderp
    Running more than one USB webcam in Debian/Linux results in the following error: libv4l2: error turning on stream: No space left on device VIDIOC_STREAMON: No space left on device What initially seemed to be a programming issue in OpenCV turned into a quest for a mysterious hardware/software problem after the same errors were produced by running cheese and xawtv. Apparently it's caused by webcams requesting all the available bandwidth on the USB host controller. With that in mind I decided to run Wireshark and capinfos to see just how much bandwidth a single camera used. 4 megabits per second at 320x240 14 megabits per second at 640x480 32 megabits per second at 1920x1080 Interesting! That might explain why two cameras at 320x240 work but any higher resolution fails. It's as if my USB controller is only operating at USB 1 speeds, yet lsusb shows both webcams belonging to a device which supposedly supports 480 megabits per second. One solution proposed forcing the webcams to calculate their bandwidth usage instead of requesting their maximum by running the following commands: sudo rmmod uvcvideo sudo modprobe uvcvideo quirks=128 Unfortunately that made no difference, so I decided to try another solution. A post on Stack Overflow suggested telling my webcams to use a lower FPS or a compressed video format like MJPEG, but after running v4lctl list it doesn't appear that either of my webcams supports changing their video mode. And that's where I'm stuck. Why would two webcams operating well below the maximum speed of USB 2 produce this error? ps: It's not a disk space issue, df displays no change when the webcams are started. pps: If it makes a difference, here's the output of lsusb
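
    For completeness, a hedged sketch of forcing a compressed format and a lower frame rate per device with v4l2-ctl (from the v4l-utils package); whether it helps depends on the cameras actually exposing an MJPEG mode, which the v4lctl output suggests they may not:

        v4l2-ctl -d /dev/video0 --list-formats        # look for MJPG in the list
        v4l2-ctl -d /dev/video0 --set-fmt-video=width=640,height=480,pixelformat=MJPG
        v4l2-ctl -d /dev/video0 --set-parm=15         # request 15 fps to shrink the isochronous reservation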

    Read the article

  • .htaccess with addondomain and https ssl

    - by admon
    I have a main domain and an addon domain. Question 1: When surfing to ftp.addondomain.com or mail.addondomain.com, for some reason it goes to the main domain. (Normally this should not be a problem, but I still want complete separation.) Do you know the syntax to redirect this in the .htaccess file: (.*).addondomain.com - addondomain.com, and where do I put the code: in the addon domain's .htaccess or in the main domain's .htaccess? I.e. any_words.addondomain.com should be forwarded to addondomain.com, so these: dsdhf.addondomain.com, ftp.addondomain.com, mail.addondomain.com ... all will be forwarded to addondomain.com (i.e. without the prefix). Question 2: Same question for https://. The main domain has SSL; the addon domain does not have SSL. For some reason, when surfing to https://addondomain.com you get http://maindomain.com (the address bar shows https://addondomain.com but the page you see is the page of the main domain). I would like that if a user surfs to https://addondomain.com then (since there is no SSL for the addon domain) the user will get to http://addondomain.com, or alternatively the user will get an error message. I do not want them to be redirected to the main domain. Please, if you can, write what I should add to the .htaccess and I will add it. Please also let me know where to put the code, i.e. in the addon domain's .htaccess or in the main domain's .htaccess. Thanks.
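
    A hedged sketch of mod_rewrite rules covering both parts, placed in the .htaccess at the addon domain's document root (cPanel normally maps an addon domain to a subfolder of the main account, so that file also sees the subdomain requests); adjust the domain names to the real ones. Note that the https variant will still show a certificate warning before the redirect fires, because the TLS handshake completes first; that part cannot be fixed from .htaccess.

        RewriteEngine On

        # 1) anything.addondomain.com -> addondomain.com
        RewriteCond %{HTTP_HOST} ^(.+)\.addondomain\.com$ [NC]
        RewriteRule ^(.*)$ http://addondomain.com/$1 [R=301,L]

        # 2) https://addondomain.com -> http://addondomain.com (no certificate for the addon)
        RewriteCond %{HTTPS} on
        RewriteCond %{HTTP_HOST} ^(www\.)?addondomain\.com$ [NC]
        RewriteRule ^(.*)$ http://addondomain.com/$1 [R=301,L]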

    Read the article

  • Did Adobe Photoshop just kill my graphics card for good?

    - by user6004
    I was working with Adobe Photoshop, just some regular work, and came to edit a PSD file and change the text of some layer, when all of a sudden the PC froze. No mouse, screen frozen, keyboard strokes aren't getting me anything, no Task Manager, nada. So I rebooted my PC, and then something quite terrifying appeared before my eyes. It was not the Checkdisk utility that launched that terrified me (by the way, that reboot damaged the partition table of an external HDD that was connected to my PC at the time, but that's another story). It was the screen itself, covered in dots. So after Checkdisk finished and Windows loaded, I noticed that the resolution was not right. Instead of the 1440x900 which I had set, it was 1280x1024. When I went to change it back, I had no option to change back to my old resolution, and had only 3 other generic resolution options, as if my video card (GeForce 8800 GTS, btw) was not recognized. And what do you know, in the Device Manager it appeared with an exclamation mark. Inside the hardware properties, it said this: Windows has stopped this device because it has reported problems. (Code 43) Uninstalling the drivers, downloading the newest drivers from NVIDIA and installing them did not work. It always comes back to this. So, do you have any advice before I go out and buy a new graphics card? I thought this was the only option left, but maybe the experts at Super User can help me out. By the way, the dotted screen appears after every reboot, and I see the dots when the ASUS motherboard screen shows up at boot. Thanks in advance.

    Read the article

  • How to create a WHM/cPanel account, without creating a new sub-domain?

    - by Cyclops
    I have a basic VPS (full root access), with WHM/cPanel, and am learning the ropes. I'm trying to create a new account for an existing domain (mysite.com), and so far WHM won't let me - it either wants a sub-domain or fake domain, but won't allow two accounts for one domain. In the beginning, there was only the root account, and it wouldn't let me login to cPanel - a quick chat with tech support, and I am informed that I need to create a second account, which I did. So now I have an account, call it ns1me, for domain mysite.com. Now I want to create a django account. I go through the same process, but WHM won't allow me to use mysite.com as the domain for django. The docs recommend a sub-domain, so I fill the box in with django.mysite.com. I then realize that has actually created a sub-domain - going to django.mysite.com shows me its home directory, along with helpful information about what version of Apache, Python, and other mods its running (thanks, Apache). I really don't want a sub-domain, so that's out. Another chat with tech support, and they recommend a fake domain name, as it won't create anything. Sure enough, using a domain of djangomysite.com works, and WHM allows me to create a django account. But of course, I can't send email to [email protected] (where I could to [email protected]). What I want, is to be able to create a second account, associated with mysite.com (so I can run cPanel logged in as django, send email to [email protected], etc) - without creating a whole new sub-domain, or fake domain.

    Read the article

  • Strange Domain name under the same IP Address

    - by Mike Chip
    There's something really weird happening on my server. But first things first: I wanted to have my website and chose the domain name "myowndomain.com". At my domain registrar I point "myowndomain.com" to the address of my recently set-up VPS, let's say 50.50.50.50. So I installed everything I needed to run my website, and I started to notice strange queries coming from different IP addresses, like these: [client 123.123.123.123] File does not exist: /var/www/html/api, referer: http://www.strangedomain.com/api/manyou/my.php [client 456.456.456.456] File does not exist: /var/www/html/api, referer: http://www.strangedomain.com/api/manyou/my.php or like this (really a long line; I cut some things): GET /?s=vod-show-id-22-area-%E5%85%B6%E4%BB%96-language-%E9%9F%A9%E8%AF%AD.html HTTP/1.1" 301 295 "http://v.strangedomain.com/?s=vod-s ...[cut]... spider" The one above is happening the most. 'strangedomain.com' resolves to the same IP address as my VPS, which my website is hosted on. The whois for that domain shows it's registered to someone in China, but the street name didn't look right (like one huge single word), so I think all of that info might be fake, though the registrant might still be Chinese. I also noticed that all the 'clients' trying to access 'strangedomain.com' are coming from China. If I type 'strangedomain.com' in the browser, I see my website. I'm worried, because my website is actually an e-commerce site. I don't know if 'strangedomain.com' WAS a website on 50.50.50.50 in the not-so-distant past, or if it's something else.
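
    The behaviour itself is normal: anyone can point any domain at any IP, and the web server answers whatever Host header arrives unless told otherwise. A hedged sketch of a catch-all default virtual host (Apache 2.4 syntax assumed, since /var/www/html looks like a stock Apache layout) that refuses hostnames you do not serve, so the shop is only reachable under its own name:

        # loaded before the real vhost, so it becomes the default for unknown Host headers
        <VirtualHost *:80>
            ServerName catchall.invalid
            <Location "/">
                Require all denied    # on Apache 2.2 use "Order deny,allow" / "Deny from all"
            </Location>
        </VirtualHost>

        <VirtualHost *:80>
            ServerName myowndomain.com
            ServerAlias www.myowndomain.com
            DocumentRoot /var/www/html
        </VirtualHost>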

    Read the article

  • Printer deployment via Group Policy not working on a single system

    - by Aron Rotteveel
    One of my coworkers just got a new laptop running Windows 7 Pro x64. We use a GPO to deploy the printers to every system, but for some reason it is not working on this system. I have been banging my head against this for the past 3 hours now without any result. The strange thing is that gpresult /H seems to indicate that the GPO did run. The hardware: Laptop: Windows 7 Professional x64 Print server: Windows Server 2008 x64 R1 HP Color LaserJet 2605dn HP LaserJet P2015 Driver packages on server: HP universal printer driver PCL5, both x86 and x64 Oddities and other info: GPO working flawlessly on every other system, including my own Windows 7 Ultimate x64 laptop; gpresult /H shows the GPO being run; Windows Firewall completely disabled on the new laptop. Below is the output from gpresult /H (originally in Dutch; translated here): Policies - Windows Settings - Printer Connections: Path / Winning GPO: \\Server2008\HP Color LaserJet 2605dn / Printers; \\Server2008\HP LaserJet P2015 / Printers. Administrative Templates - Policy definitions (ADMX files) retrieved from the local computer. Control Panel/Printers: Policy / Setting / Winning GPO: Point and Print Restrictions / Disabled / Printers. Like I said, I have been trying to figure this out for the past few hours or so without any result, so you are my last hope. Any help is appreciated.

    Read the article

  • Win 7 64-bit with 8GB of RAM, getting 'running low on memory' errors

    - by John
    I have a new Dell laptop running Win7 64 with 8GB of RAM. If I leave the system running overnight I start getting low memory errors the next day. Task Manager shows 6.27GB used, but looking at the processes list, the totals don't add up to nearly that much. I am showing all processes from all users. I have also looked at the processes with Process Explorer and see the same results. Using Resource Monitor I see 4165 MB In Use, 2328 MB Modified and 1352 MB Standby, with only about 345 MB Free. These numbers don't seem to add up to what I have running. I have Visual Studio 2010 running along with a number of IE8 sessions. I have run the same set of apps on XP SP2 32-bit with 4GB of memory and never had this sort of problem. What is Modified memory? What is Standby memory? Any suggestions on what might be the issue and what might be a fix? TIA J

    Read the article

  • Duplicate forwarded messages in Blackberry when using BIS

    - by Avery Payne
    Our setup: External email arrives at a Postfix server, is scanned, and then forwarded via settings in transport (using RELAY:[{ip-address}] for a given address) to an Exchange 2007 server. Some users are on Exchange, but a few are still on the Postfix server (they will be moved in the near future). IMAPS is provided for external connections via Dovecot; in-house, IMAP is provided for the gateway and native MAPI is used for Exchange/Outlook. Blackberries are connected via BIS, which uses Dovecot as a reverse-proxy IMAPS service to connect to Exchange (when the mailbox exists on Exchange, otherwise it connects to the mailbox on the gateway). The issue: We have a user who, when they forward an email in their Outlook client, gets a duplicate of the original message on their Blackberry. When I say duplicate, I mean that they have a copy of the forwarded version of the message (i.e. their version of the message that they obtained by hitting the forward button), and a copy of the original message that shows up at the same time. The expected behavior is to see just the forwarded message, not the forwarded message and a second copy of the original message. We've only seen this with Outlook users that also have a Blackberry. Other IMAP clients, such as OS X Mail or Thunderbird, do not exhibit this behavior when connecting to the Exchange server; forwarded messages work as expected. The questions: What is causing this to happen? Why does it only affect Outlook/Blackberry setups, and not TBird/Blackberry or OSX-Mail/Blackberry? How do we get it to stop, before people go insane and never forward messages again?

    Read the article

  • How does KMS (Windows Server 2008 R2) differentiate clients?

    - by Joe Taylor
    I have recently installed a KMS server in our domain and deployed 75 new Windows 7 machines using an image I made with Acronis True Image. There are 2 variations of this image rolled out currently. When I go to activate the machines, it returns that the KMS count is not sufficient. On the server, slmgr /dlv shows: Key Management Service is enabled on this machine. Current count: 2 Listening on Port: 1688 DNS publishing enabled KMS Priority: Normal KMS cumulative requests received from clients: 366 Failed requests received: 2 Requests with License status unlicensed: 0 Requests with License status licensed: 0 Requests with License status Initial Grace period: 1 Requests with License status License expired or hardware out of tolerance: 0 Requests with License status Non genuine grace period: 0 Requests with License status Notification: 363 Is it to do with the fact that I've used the same image for all the PCs? If so, how do I get around this? Would changing the SID help? OK, knowing I've been thick, what's the best way to rectify the situation? Can I sysprep each individual machine to OOBE? Or would NewSID work?
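
    For reference, KMS tells clients apart by the Client Machine ID (CMID), which is regenerated during sysprep /generalize; cloning an image that was never generalized gives every machine the same CMID, so the KMS count never rises. A hedged sketch of checking and fixing this on the cloned machines (the CMID should appear in the client's /dlv output):

        rem on a client: look for "Client Machine ID (CMID)" in the output
        cscript //nologo %windir%\system32\slmgr.vbs /dlv

        rem regenerate the CMID by generalizing (or sysprep the master image next time)
        %windir%\system32\sysprep\sysprep.exe /generalize /oobe /reboot

        rem then activate against the KMS host
        cscript //nologo %windir%\system32\slmgr.vbs /ato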

    Read the article

  • Shortcuts that make using Windows easier? [closed]

    - by ekaj
    Over the years I have picked up a good number of shortcuts, all of which make using Windows that much faster and easier. I was wondering if I had missed any important ones, or ones that just save time. I tried to keep these relevant to Windows only, as programs such as Mozilla, Photoshop, etc. bring in hundreds of other shortcuts, which do not apply to all users. These are the current ones I know: For the mouse: Scroll wheel (click) - opens a link in IE in a new tab; closes a tab in IE when clicked on it Right click-and-drag - nifty menu to make a copy of a file or create a shortcut For the keyboard: Ctrl + Alt + Del - need this be explained? WinKey + R - opens 'Run..' WinKey + D - shows the desktop WinKey + E - opens 'My Computer' WinKey + F - opens 'Find..' WinKey + Tab - cool way to switch between windows WinKey + L - locks the computer Alt + Tab - switches windows with a simple interface Alt + F4 - closes windows Alt + F - opens 'File' (handy for things like IE if you have the menu bar disabled) Ctrl + F - find text in the current document Ctrl + W - closes a window, or the current active tab in IE (or IE itself if only one tab) Ctrl + C - copies selected text / image Ctrl + V - pastes selected text / image So does anyone know of any more shortcuts that just make life easier? I would much appreciate any, and I am sure that some other users would like to know some too =]

    Read the article

  • 3 Monitor PCI-e Graphics card (without tremendous pain)?

    - by N Rahl
    As we are all painfully aware, the only way to get multiple monitors AND compositing (Compiz) on Linux is to use a single graphics card that can drive both (or in my case all three) screens. I bought a Radeon 5750 specifically because it claims to be able to drive 3 monitors. I can plug in 3 monitors (2 DVI, 1 HDMI) and the Catalyst Control Center shows all 3, but only 2 can be enabled at a time. The exact message is: The current settings cannot be applied. Possible issues may include: - Display(s) cannot be enabled. - Setting(s) cannot be applied due to insufficient video memory. So I'm going to assume that either the 5750 doesn't support 3 monitors, OR, more likely, ATI couldn't be bothered to add that support to their Linux drivers. So this is a multipart question: First, can anyone suggest a PCI Express graphics card that can run 3 screens on Linux without tremendous pain? I'm looking for something where you install the driver and all three screens "just work". Does such a card exist? Second, if you have a 5750, have you been able to get it to do 3 monitors? I'm running Ubuntu 10.04 at the moment.
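
    For what it's worth, the HD 5000-series cards only drive a third head when one of the displays is on DisplayPort (or an active DisplayPort adapter); the DVI and HDMI outputs share two clock generators, which matches the "only 2 at a time" symptom and is not something a driver setting can work around. If a DisplayPort path is available, a hedged xrandr sketch for arranging three screens (the output names are assumptions; run xrandr with no arguments first to see the real ones):

        xrandr                       # list the actual output names first
        xrandr --output DVI-0 --auto \
               --output DVI-1 --auto --right-of DVI-0 \
               --output DisplayPort-0 --auto --right-of DVI-1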

    Read the article

  • Designing a persistent asynchronous TCP protocol

    - by dogglebones
    I have got a collection of web sites that need to send time-sensitive messages to host machines all over my metro area, each on its own generally dynamic IP. Until now, I have been doing this the way of the script kiddie: Each host machine runs an (s)FTP server, or an HTTP(s) server, and correspondingly has a certain port opened up by its gateway. Each host machine runs a program that watches a certain folder and automatically opens or prints or exec()s when a new file of a given extension shows up. Dynamic IP addresses are accommodated using a dynamic DNS service. Each web site does cURL or fsockopen or whatever and communicates directly with its recipient as-needed. This approach has been surprisingly reliable; however, obvious issues have come up and the situation needs to be addressed. As stated, these messages are time-sensitive and failures need to be detected within minutes of submission by end-users. What I'm doing is building a messaging protocol. It will run on a machine and connection in my control. As far as the service is concerned, there is no distinction between web site and host machine -- there is only one device sending a message to another device. So that's where I'm at right now. I've got a skeleton server and a skeleton client. They can negotiate high-quality authentication and encryption. The (TCP) connection is persistent and asynchronous, and can handle delimited (i.e., read until \r\n or whatever) as well as length-prefixed (i.e., read exactly n bytes) messages. Unless somebody gives me a better idea, I think I'll handle messages as byte arrays. So I'm looking for suggestions on how to model the protocol itself -- at the application level. I'll mostly be transferring XML and DLM type files, as well as control messages for things like "handshake" and "is so-and-so online?" and so forth. Is there anything really stupid in my train of thought? Or anything I should read about before I get started? Stuff like that -- please and thanks.
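
    Since the framing question (delimited vs. length-prefixed byte arrays) is the part that tends to bite later, here is a minimal, hedged sketch of length-prefixed framing in Python, chosen only for brevity; the field layout, a 4-byte big-endian length followed by the payload, is the point. On top of this, message meaning usually goes in a small header of its own (a type code, a message id, a reply-to id), so handshakes and "is so-and-so online?" become just another type code rather than special cases.

        import socket
        import struct

        def send_msg(sock: socket.socket, payload: bytes) -> None:
            # 4-byte big-endian length prefix followed by the payload
            sock.sendall(struct.pack("!I", len(payload)) + payload)

        def recv_exact(sock: socket.socket, n: int) -> bytes:
            buf = b""
            while len(buf) < n:
                chunk = sock.recv(n - len(buf))
                if not chunk:
                    raise ConnectionError("peer closed connection mid-message")
                buf += chunk
            return buf

        def recv_msg(sock: socket.socket) -> bytes:
            (length,) = struct.unpack("!I", recv_exact(sock, 4))
            return recv_exact(sock, length)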

    Read the article

  • Exchange Server 2010 with multiple domains

    - by air
    I have one Exchange Server 2010, which is working fine with one domain. My Exchange works as follows: a POP3 collector collects emails from one master catchall account and then delivers them to the Exchange server; this works perfectly. Now I want to add another domain to the same Exchange. I have added the new domain as an accepted domain and in the email address policy, and this new domain's email account works fine with internal emails. Then I again forwarded the new email account to the same catchall account, but if I send an email from any other external email address, the email bounces; I can see the email received by the POP3 collector but bounced by the Exchange server. To make it clearer, let me explain the logic I am working on. I have 2 domains: 1. domain1.com ([email protected]) 2. domain2.com ([email protected] --> [email protected]) Now on my machine with the Exchange server I have a POP3 collector which collects all emails from [email protected] and forwards them to the Exchange 2010 server. All emails to domain1.com work perfectly, but when I send an email to [email protected], the email is redirected to [email protected] perfectly; when the Exchange server receives this email, though, it bounces. I have also studied the linked page and followed the whole process, but no success. I have also checked that my DNS/MX is working fine, as the bounce message is coming from my Exchange server. EDIT: The only problem is with the accepted domain, as the email comes to the Exchange server and then bounces back. I just tried this today: I created one user called test, then I went to his properties -- email; there was only one email account, [email protected]. I tried to send an email to [email protected] from the internet (email bounced). Then again I went to the test user's properties -- email and added one email, [email protected]. Again I tried to send an email to [email protected] from the internet (email received). I think the only problem is with the accepted domain, but in Hub Transport it shows accepted. Is there any way to check whether the domain is properly accepted or not in the Exchange 2010 server? Thanks
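
    A hedged first check from the Exchange Management Shell: confirm that the second domain really is an accepted domain of type Authoritative and that the recipient carries the address, since a domain that is missing from the list (or set to a relay type without a matching connector) can bounce mail from external senders exactly like this. The names below are the placeholders from the question:

        Get-AcceptedDomain | Format-Table Name, DomainName, DomainType
        Get-Mailbox test | Format-List EmailAddresses
        # only if the domain turns out to be missing:
        New-AcceptedDomain -Name "domain2" -DomainName domain2.com -DomainType Authoritative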

    Read the article

  • Bind9 not doing anything with forwarded query responses?

    - by Rykaro
    I have a Bind DNS server that is the local production DNS server and a Windows 2008 R2 domain controller which provides DNS for a lab environment with the domain xyz.lab. I've configured the Bind DNS to forward DNS requests for the domain xyz.lab to the Windows DNS server with this config: zone "xyz.lab" { type forward; forward only; forwarders { x.x.x.x; }; }; zone "x.x.x.in-addr.arpa" { type forward; forward only; forwarders { x.x.x.x; }; }; And Bind options are (the all_internal acl includes the subnets of both the production and lab networks as well as the loopback of the bind server): allow-query { all_internal; }; allow-recursion { all_internal; }; allow-transfer { none; }; notify no; minimal-responses yes; version "unknown"; Unfortunately, when I do an nslookup or dig on the bind server for a host on the lab domain, the request times out. The logs on the Windows 2008 DNS server show it receiving the query and responding to it and a network packet trace shows the query responses arriving at the Bind DNS server. The servers reside on the same switch with a router providing connectivity between the layer 3 subnets (production and lab are on different subnets) and there is a round trip time of between 3ms and 5ms on pings between the two servers, so I don't think there is an issue with latency causing a timeout of the query. In summary a query-response arrives back at the Bind server and the nslookup/dig times-out. Why does the Bind DNS not seem to be doing anything with the query responses when it receives them?
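
    A hedged way to see which side is dropping the answer: query the Windows server directly, then query through BIND, and watch named's log while doing it. If the direct query works but the forwarded one returns SERVFAIL, it is also worth checking named.conf for dnssec-validation, which on some 9.x builds silently discards answers for a made-up TLD like .lab because the signed root proves the TLD does not exist. The host names and addresses below are placeholders:

        dig @<windows-dns-ip> host.xyz.lab A    # does the lab DNS answer correctly on its own?
        dig @127.0.0.1 host.xyz.lab A           # the same query through BIND
        rndc querylog on                        # log queries while re-testing
        tail -f /var/log/syslog | grep named    # or wherever named logs on this system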

    Read the article

  • cpusets not working - threads aren't running in the cpuset I specified?

    - by lori
    I have used cpuset to shield some cpus for exclusive use by some realtime threads. Displaying the cpuset config with the test app RealtimeTest1 running and its tasks moved into the cpusets: $ cset set --list -r cset: Name CPUs-X MEMs-X Tasks Subs Path ------------ ---------- - ------- - ----- ---- ---------- root 0-23 y 0-1 y 279 2 / system 0,2,4,6,8,10 n 0 n 202 0 /system shield 1,3,5,7,9,11 n 1 n 0 2 /shield RealtimeTest1 1,3,5,7 n 1 n 0 4 /shield/RealtimeTest1 thread1 3 n 1 n 1 0 /shield/RealtimeTest1/thread1 thread2 5 n 1 n 1 0 /shield/RealtimeTest1/thread2 main 1 n 1 n 1 0 /shield/RealtimeTest1/main I can interrogate the cpuset filesystem to show that my tasks are supposedly pinned to the cpus I requested: /cpusets/shield/RealtimeTest1 $ for i in `find -name tasks`; do echo $i; cat $i; echo "------------"; done ./thread1/tasks 17651 ------------ ./main/tasks 17649 ------------ ./thread2/tasks 17654 ------------ Further, if I use sched_getaffinity, it reports what cpuset does - that thread1 is on cpu 3 and thread2 is on cpu 5. However, if I run top -p 17649 -H with f,j to bring up the last used cpu, it shows that thread 1 is running on thread 2's cpu, and main thread is running on a cpu in the system cpuset (Note that thread 17654 is running FIFO, hence thread 17651 is blocked) PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ P COMMAND 17654 root -2 0 54080 35m 7064 R 100 0.4 5:00.77 3 RealtimeTest 17649 root 20 0 54080 35m 7064 S 0 0.4 0:00.05 2 RealtimeTest 17651 root 20 0 54080 35m 7064 R 0 0.4 0:00.00 3 RealtimeTest Also, looking at /proc/17649/task to find the last_cpu each of its tasks ran on: /proc/17649/task $ for i in `ls -1`; do cat $i/stat | awk '{print $1 " is on " $(NF - 5)}'; done 17649 is on 2 17651 is on 3 17654 is on 3 So cpuset and sched_getaffinity reports one thing, but reality is another I would say that cpuset is not working? My machine configuration is: $ cat /etc/SuSE-release SUSE Linux Enterprise Server 11 (x86_64) VERSION = 11 PATCHLEVEL = 1 $ uname -a Linux foobar 2.6.32.12-0.7-default #1 SMP 2010-05-20 11:14:20 +0200 x86_64 x86_64 x86_64 GNU/Linux
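
    A hedged, simpler cross-check: ask the scheduler directly for each thread's last-used CPU and affinity, and read the cpuset each thread actually sits in, rather than parsing /proc/<tid>/stat by field position. The TIDs are the ones from the question:

        ps -eLo pid,tid,psr,comm | grep RealtimeTest    # psr = CPU each thread last ran on
        for tid in 17649 17651 17654; do
            taskset -cp "$tid"          # affinity list the scheduler enforces
            cat /proc/"$tid"/cpuset     # cpuset the thread is actually a member of
        done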

    Read the article
