Search Results

Search found 1872 results on 75 pages for 'tom lilletveit'.

Page 25/75 | < Previous Page | 21 22 23 24 25 26 27 28 29 30 31 32  | Next Page >

  • Set up dual screen with Catalyst Control Center on a notebook so that nothing changes when closing the lid?

    - by Tom
    I have set up a dual-screen configuration for my notebook, which has an ATI video card (ATI Mobility Radeon HD 5470), using Catalyst Control Center. The monitor extends the desktop of the notebook's screen, and this works great. I am running Windows 7 and have set it to do nothing when I close my laptop's lid other than dim the notebook's display. However, when I do close the lid, the second monitor suddenly changes and I believe it starts behaving as the primary monitor. I do not want this to happen. So, how do I set up my dual-screen configuration so that it won't make the secondary monitor the primary one when I close my laptop's lid? UPDATE: this video shows exactly what my problem is: http://www.youtube.com/watch?v=Q1ygNHbqI2g
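
    For the "do nothing on lid close" half of this setup, the power-plan setting can also be applied from an elevated command prompt; a minimal sketch (0 = do nothing, for the on-AC case):

        powercfg -setacvalueindex SCHEME_CURRENT SUB_BUTTONS LIDACTION 0
        powercfg -setactive SCHEME_CURRENT

    This only controls the sleep behaviour, not which monitor Windows treats as primary, so it complements the Catalyst settings rather than fixing the switching itself.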

    Read the article

  • Does Intel Smart Response provide any statistics on the cache usage?

    - by Tom Seddon
    I've set up my Z68-based Core i7 PC with a 60GB SSD dedicated as a Smart Response cache drive. Is there any way I can get any statistics out of it? It would be nice to have some information on how much cache space is actually being used, maybe how much of it was actually accessed recently, and how many reads in general are coming from the SSD rather than from the mechanical disk. These statistics might help to quickly provide some evidence for or against the use of Smart Response, without my having to reinstall Windows on the SSD (etc.) to find out. The Windows ReadyBoost feature has some performance counters you can access via the Windows 7 perfmon tool, for example, which is the kind of thing I'm hoping is somehow available. Smart Response provides no perfmon counters, though, and the Intel Rapid Storage Utility tells you pretty much nothing except that Smart Response is switched on.
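
    For comparison, the ReadyBoost counters mentioned above can be listed from a command prompt; a hedged sketch (counter-set name as it appears on Windows 7):

        typeperf -q "ReadyBoost Cache"

    Smart Response exposes no equivalent counter set, which is essentially the problem described here.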

    Read the article

  • Ubuntu 12.04 3.2 Kernel showing incorrect "cache size" in cpuinfo?

    - by Tom G
    2x Xeon E5620, 16 logical CPUs altogether. /proc/cpuinfo shows the cache as only 4096 KB, but according to Intel this CPU should have 12 MB of "smart cache". Searching for E5620 cpuinfo output shows the correct number (cache size : 12288 KB); however, mine shows this:

        processor       : 15
        vendor_id       : GenuineIntel
        cpu family      : 6
        model           : 44
        model name      : Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
        stepping        : 2
        microcode       : 0x1
        cpu MHz         : 2400.104
        cache size      : 4096 KB
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 11
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush mmx
        bogomips        : 4800.20
        clflush size    : 64
        cache_alignment : 64
        address sizes   : 40 bits physical, 48 bits virtual

    Any idea why it shows as if 8 MB of CPU cache is missing?
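
    A quick way to confirm the value is consistent across all 16 logical CPUs (a single line of output means every core reports the same size):

        grep 'cache size' /proc/cpuinfo | sort | uniq -c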

    Read the article

  • External Dell Display doesn't work with MacBook Pro (2011) after Thunderbolt Firmware Update (1.0 and 1.2)

    - by tom
    Today two Thunderbolt Firmware Updates (1.0 and 1.2) became available for my MacBook Pro (Early 2011). After installing both, my external monitor, a Dell U2713HM, no longer works. The system detects the display, but the display shows only black. An Apple Thunderbolt Display works fine, and a MacBook Air can use the Dell monitor without problems. My MacBook Pro can also use the Dell monitor just fine when I boot from a USB stick, so the Thunderbolt Firmware Update clearly seems to be the problem. Does anyone have the same problem? Any solutions or workarounds? I guess there is no way to remove a Thunderbolt Firmware Update once it's installed, right? Update 24.10.2013: Is there no one else with this problem? In the meantime I have tried three different cables; none worked. My colleague with the same generation MacBook Pro also can't use my display after installing the firmware update. All colleagues with MacBook Airs and newer MacBook Pros (none of which received the firmware update) can use the display. Update 29.10.2013: Wow, OK, today my new MacBook Pro Retina 13" (Late 2013) arrived. Guess what, I cannot use the display with it either. Only HDMI works, and not at the full resolution.

    Read the article

  • Windows 8 not booting from DVD while trying to install in an Ubuntu 14.04 system

    - by Tom
    My system currently runs Ubuntu 14.04. Yesterday I deleted and formatted as NTFS the partition (C drive) in which Windows 7 was installed, because Windows 7 had not been booting for more than a week. I have a Windows 8 disc, and the system was able to boot from it back when Windows 7 was still installed. After formatting the C drive yesterday, I tried to install Windows 8 by booting from the disc; unfortunately, this time the disc did not boot at all. So I pressed F2 during system start-up and checked the Boot Device Priority: the optical drive has first priority there. So why didn't Windows 8 boot from the disc? I need to install Windows 8 on my system without doing any damage to Ubuntu 14.04. How can I do it?

    Read the article

  • Firewall still blocking port 53 despite listing otherwise?

    - by Tom
    I have 3 nodes with virtually the same iptables rules loaded from a bash script, but one particular node is blocking traffic on port 53 despite its rules saying it accepts it:

        $ iptables --list -v
        Chain INPUT (policy DROP 8886 packets, 657K bytes)
         pkts bytes target prot opt in   out source        destination
            0     0 ACCEPT all  --  lo   any anywhere      anywhere
            2   122 ACCEPT icmp --  any  any anywhere      anywhere      icmp echo-request
        20738 5600K ACCEPT all  --  any  any anywhere      anywhere      state RELATED,ESTABLISHED
            0     0 ACCEPT tcp  --  eth1 any anywhere      node1.com     multiport dports http,smtp
            0     0 ACCEPT udp  --  eth1 any anywhere      ns.node1.com  udp dpt:domain
            0     0 ACCEPT tcp  --  eth1 any anywhere      ns.node1.com  tcp dpt:domain
            0     0 ACCEPT all  --  eth0 any node2.backend anywhere
           21  1260 ACCEPT all  --  eth0 any node3.backend anywhere
            0     0 ACCEPT all  --  eth0 any node4.backend anywhere

        Chain FORWARD (policy DROP 0 packets, 0 bytes)
         pkts bytes target prot opt in out source destination

        Chain OUTPUT (policy ACCEPT 15804 packets, 26M bytes)
         pkts bytes target prot opt in out source destination

    Scanning from a remote server:

        $ nmap -sV -p 53 ns.node1.com
        Starting Nmap 4.11 ( http://www.insecure.org/nmap/ ) at 2011-02-24 11:44 EST
        Interesting ports on ns.node1.com (1.2.3.4):
        PORT   STATE    SERVICE VERSION
        53/tcp filtered domain
        Nmap finished: 1 IP address (1 host up) scanned in 0.336 seconds

    Any ideas? Thanks
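
    One way to narrow this down (assuming eth1 is the public-facing interface, as the rules suggest) is to watch for DNS packets on the wire while repeating the scan; if nothing shows up, the traffic is being dropped before iptables ever sees it:

        tcpdump -ni eth1 port 53

    It can also help to view the rules numerically, since the reverse-DNS names in the hostname-based listing can silently mismatch the addresses actually being matched:

        iptables -L INPUT -v -n --line-numbers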

    Read the article

  • Why are some UDP packets getting blocked?

    - by Tom
    In our organization we have two test machines running Windows XP. While testing a roll-my-own UDP message server, I found that both could receive small messages (under 2 KB) just fine. However, when I send large packets to these machines, one receives them fine while the other can't receive them at all. Both machines have SP3 and both have Windows Firewall turned off, but one still isn't working. Can anyone tell me where to look for anything that might be blocking or limiting the packet size on a Windows machine? Thanks.
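
    UDP datagrams larger than roughly 1,472 bytes get fragmented on a standard Ethernet MTU, and dropped fragments are a common cause of exactly this small-works/large-fails split. A quick hedged test from another machine (Windows ping; -l sets a payload size well above one MTU):

        ping -l 4000 problem-machine

    If the large ping fails while a small one succeeds, something on the path or host is discarding fragmented traffic.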

    Read the article

  • RDP: allow client reconnect without password prompt after several hours

    - by Tom
    Let me describe the setup first: a client PC with several RDP sessions to local servers, all opened from saved RDP files with stored passwords, using the standard Windows RDP client; and several Windows servers on the LAN with varying server OSes: Windows Server 2003, 2008, and even 2012 now. When I log onto my PC I open RDP sessions to all those servers and keep them open all the time for various reasons. Overnight the client PC is put into sleep or hibernate mode, thereby breaking the RDP connections. The next day, when I wake the client PC and log in again, the RDP sessions automatically try to reconnect to the servers, and this leads to the question: starting with Server 2008 something apparently changed in the RDP server configuration, as all servers with 2008, 2008 R2 and 2012 will prompt for the password in the RDP session, whereas the 2003 server RDP connections re-establish without the password prompt. Apparently there is a timeout setting on 2008+ that, when exceeded, requires reauthentication. Is there any way to set up the 2008+ servers to behave like 2003 did? I'd like the RDP sessions to reconnect without a password prompt even after a disconnect of several hours.

    Read the article

  • Connecting to SVN server from a computer outside of my LAN

    - by Tom Auger
    I've got a Fedora server running Subversion and svnserve on port 3690. My repo is at /var/svn/project_name. I have my router forwarding port 3690 to the local server (as well as ports 80, 21, 22 and a few others). When I connect locally to svn://192.168.0.2/project_name it works great. When I connect from an external server to svn://my.static.ip/project_name I get a timeout connecting to the host. However, if I browse to http://my.static.ip there is no problem, so port forwarding is working (at least for port 80). I don't want to run WebDAV or svn over HTTP/S; I'd like it to work using svnserve, as documented in the svn book. What have I misconfigured? EDIT: Here is the last part of my iptables dump. I'm not an expert, but it looks OK to me:

        ACCEPT tcp  -- anywhere anywhere state NEW tcp dpt:svn
        ACCEPT udp  -- anywhere anywhere state NEW udp dpt:svn
        ACCEPT tcp  -- anywhere anywhere state NEW tcp dpts:6680:6699
        ACCEPT udp  -- anywhere anywhere state NEW udp dpts:6680:6699
        REJECT all  -- anywhere anywhere reject-with icmp-host-prohibited

    EDIT 2: Results from sudo netstat -tulpn:

        tcp   0   0 0.0.0.0:3690   0.0.0.0:*   LISTEN   1455/svnserve
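
    A simple way to isolate whether the router or the server is at fault (assuming nc is available on the external machine) is to test the raw TCP connection from outside, bypassing the svn client entirely:

        nc -vz my.static.ip 3690

    If that times out while port 80 connects, the forwarding rule or an upstream filter for 3690 is the likely culprit. Note too that iptables rules are order-sensitive, so the ACCEPT for dpt:svn must come before the final REJECT.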

    Read the article

  • "Countersigning" a CA with openssl

    - by Tom O'Connor
    I'm pretty used to creating the PKI used for X.509 authentication for whatever reason, SSL client verification being the main reason for doing it. I've just started to dabble with OpenVPN (which I suppose is doing the same things as Apache would do with the Certificate Authority (CA) certificate). We've got a whole bunch of subdomains and appliances which currently all present their own self-signed certificates. We're tired of having to accept exceptions in Chrome, and we think it must look pretty rough for our clients having our address bar come up red. For that, I'm comfortable buying an SSL wildcard certificate, CN=*.mycompany.com. That's no problem. What I don't seem to be able to find out is: can we have our internal CA root signed as a child of our wildcard certificate, so that installing that cert into guest devices/browsers/whatever doesn't present anything about an untrusted root? Also, on a bit of a side point, why does the addition of a wildcard double the cost of the certificate purchase?
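
    Whether a certificate can sign children at all is governed by its Basic Constraints extension; commercial wildcard certificates are issued with CA:FALSE, which a quick inspection shows (file path is illustrative):

        openssl x509 -in wildcard.crt -noout -text | grep -A1 'Basic Constraints'

    A chain ending in a CA:FALSE certificate will be rejected by clients, which is why CAs sell intermediate-signing capability separately (and expensively).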

    Read the article

  • Relayout LVM Disk

    - by Tom
    I have an Ubuntu 11.10 system with two 500 GB disks. The partition tables look like this:

        /dev/sda1    primary    465.52 GB
        /dev/sda2    extended   243.17 MB
          -> /dev/sda5   logical   243.14 MB
        /dev/sdb1    primary    465.76 GB

    sda1 and sdb1 are in a single LVM volume group containing a single logical volume containing a single filesystem, which is mounted as /. sda5 is mounted as /boot. The problem comes when I want to upgrade to Ubuntu 12.04, which requires at least 247 MB free on /boot. So I need to reduce the size of sda1 so that I can increase the size of sda2 and sda5. How the heck do I do that? I can find how to shrink the logical volume, but I'm not at all clear on how to clear out the end part of sda1 so that I can shrink the physical volume. Does pvresize just deal with this automagically? Or is that wild wishful thinking? I guess the alternatives are to back everything up onto something or other and recreate the thing from scratch, or to find out whether GRUB2 supports using LVM for /boot.
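
    pvresize will not move data by itself; it only shrinks a PV whose tail extents are already free. A rough sketch of the usual sequence (extent range and target size are illustrative; back everything up first):

        pvs -v --segments /dev/sda1        # see which physical extents are allocated, and where
        pvmove /dev/sda1:119000-119234     # migrate just the tail extents elsewhere (e.g. to sdb1)
        pvresize --setphysicalvolumesize 460G /dev/sda1

    After that the partition itself can be shrunk with parted/fdisk to match, and sda2/sda5 grown into the freed space.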

    Read the article

  • Force request to miss cache but still store the response

    - by Tom Marthenal
    I have a slow web app that I've placed Varnish in front of. All of the pages are static (they don't vary for different users), but they need to be updated every 5 minutes so they contain recent data. I have a simple script (wget --mirror) that crawls the entire website every 15 minutes. Each crawl takes about 5 minutes. The point of the crawl is to update every page in the Varnish cache so that a user never has to wait for a page to generate (since all pages have been generated recently thanks to the spider). The timeline looks like this:

        00:00:00: Cache flushed
        00:00:00: Spider starts crawling to update the cache with new pages
        00:05:00: Spider finishes crawling; all pages are updated until 1:15

    A request that comes in between 00:00:00 and 00:05:00 might hit a page that hasn't been updated yet and will be forced to wait a few seconds for a response. This isn't acceptable. What I'd like to do, perhaps using some VCL magic, is always forward requests from the spider to the backend, but still store the response in the cache. This way a user will never have to wait for a page to generate, since there is no 5-minute window in which parts of the cache are empty (except perhaps at server startup). How can I do this?
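
    Varnish has a knob for exactly this: hash_always_miss forces the lookup to miss (so the backend is hit) while the fetched object still replaces the cached copy. A sketch for Varnish 3.x VCL, assuming the spider is identifiable by its wget User-Agent:

        sub vcl_recv {
            # the spider always fetches fresh from the backend,
            # and the response is inserted into the cache as usual
            if (req.http.User-Agent ~ "Wget") {
                set req.hash_always_miss = true;
            }
        }

    With this in place there is no need to flush the cache before each crawl, which removes the empty-cache window entirely.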

    Read the article

  • I just deleted "/bin". What's the best way to recover?

    - by Tom Marthenal
    I just ran (not on purpose!) rm -rf /bin. I've shut the computer down and am using Finnix to try to recover. I have succeeded in mounting the drive and confirmed that, yes, the entire /bin folder is deleted. Is it possible to recover from this without reinstalling the OS? I'm thinking that I could set up a VM with the same OS and architecture (Ubuntu Server 11.10 alpha release, x86), install all the packages I had installed on the server, and then just copy the /bin folder over. Will this work? Am I better off just starting over?
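
    A minimal sketch of that plan from the Finnix rescue shell, assuming the damaged root is mounted at /mnt and a matching Ubuntu 11.10 x86 VM is reachable as 'donor' (both names are illustrative):

        rsync -av root@donor:/bin/ /mnt/bin/

    Since everything in /bin on Ubuntu comes from packages, the copied files should match what dpkg expects as long as the donor has the same package versions installed.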

    Read the article

  • Adding a file to /etc/cron.d doesn't make it run (Ubuntu 10.04)

    - by tom
    If I scp a cron file into /etc/cron.d, it doesn't run unless I edit the file and change the command; then crond seems to pick up the cron file. How can I make cron reload its cron files on Ubuntu 10.04? 'touch'ing the file doesn't work, nor does 'restart cron' or 'reload cron'. My cron file is set to run every minute and logs to a file. Nothing ends up in the log file until I edit the command, and there's no entry for it in /var/log/syslog. I'm stumped. Here's my cron file, saved to /etc/cron.d/runscript:

        # Runs the script every minute. This is safe because it will exit with success if it's already running
        * * * * * www-data if [ -f /usr/local/bin/thing ]; then exec /usr/bin/php /usr/local/bin/thing mode:prod -a 14 -d >> /var/log/thing/mything.log 2>&1; else echo `date +'[%D %T]'` "Thing not deployed. Command not run\n" >> /var/log/thing/mything.log; fi &
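
    Two things worth checking, since cron silently ignores /etc/cron.d entries it doesn't like, and scp can carry over surprising ownership and permissions (a hedged suggestion, not a confirmed diagnosis):

        ls -l /etc/cron.d/runscript     # should be owned by root and not writable by group/other
        grep -i cron /var/log/syslog    # cron usually logs a reason (e.g. WRONG FILE OWNER) when it ignores a file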

    Read the article

  • System lags/freezes when under high usage

    - by tom
    I am not sure if it's my GPU, memory or hard drive that's failing. For example, if I'm running more than one instance of Chrome alongside an application that takes up a lot of resources, my system will start to lag and freeze. When I launch Photoshop, the GPU feature disables automatically, and it also lags when I click on menus and when working on documents in Photoshop. I really don't know where to start: should I buy a new graphics card, or test the memory, or could it be my OS drive? System: Windows 7 64-bit, ATI Radeon 5850, Corsair 2x4GB. http://i.stack.imgur.com/qqkLZ.jpg

    Read the article

  • Explorer.exe - No such interface supported and other issues

    - by tom
    The problem started when I uninstalled my ATI graphics driver and used a driver cleaner. After I restarted Windows 7, at POST the screen just went completely blank, so I used System Restore from the repair menu to get back on. I found out that if I run chkdsk, the display goes black after the restart on startup and I have to use System Restore to get back into Windows 7; I'm not sure why this is. When I right-click the desktop and choose 'Screen Resolution', it says "No such interface supported". Also, when I click items such as Device Manager in Windows Explorer, nothing happens. I have tried re-registering the DLLs, but that did not do anything.
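
    When shell actions start failing with "No such interface supported" and DLL re-registration hasn't helped, the usual next step is to let Windows verify its own system files; this is a standard built-in tool, run from an elevated command prompt:

        sfc /scannow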

    Read the article

  • Can Squid 2.7 proxy gzipped content?

    - by Tom Styles
    We have a forward proxy for our network, which is Squid 2.7, managed for us by a third party. We noticed recently that HTTP requests going from our network to the web were having the Accept-Encoding header removed. This was resulting in all web traffic across our network (approx. 8000+ PCs) being uncompressed, even though the browsers and servers on each end were capable. We have asked the third party to look into this, and they have said it is because Squid 2.7 does not support compression. I understand this to be true, but I was under the impression that the compression happened on the webserver rather than the proxy. So... Can Squid 2.7 proxy and/or cache content that is gzipped? If it can, how/why might it be configured such that the Accept-Encoding header is being removed?
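
    Squid 2.7 does pass gzipped responses through (it just cannot compress anything itself), and it does not strip Accept-Encoding by default; a header_access rule in squid.conf can, though, so that is worth asking the third party to check for. A hypothetical example of the kind of line that would produce exactly this behaviour:

        header_access Accept-Encoding deny all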

    Read the article

  • SSD performance

    - by Tom
    I recently upgraded to a Kingston HyperX 120 GB SSD. When I run CrystalDiskMark my scores look really slow. My motherboard (a socket-775 Gigabyte board) does not have an option for AHCI in the BIOS, and I'm wondering if that's the issue. The scores were:

        Seq:      read 233,  write 176.8
        512K:     read 224,  write 175.8
        4K:       read 25,   write 80
        4K QD32:  read 23,   write 102

    This drive is rated for over 500. Any help or input would be greatly appreciated.

    Read the article

  • Set up shared internet connection in VirtualBox with a fixed IP

    - by Tom
    I am a web developer, and until recently I was using Ubuntu as my OS. For many reasons I have switched back to Windows. I still want to keep my server on a Linux platform, so I set up my local server as a virtual machine. Everything works great, but I struggle a little with the networking. Since I work in different places and go around to clients, I connect to all sorts of networks with different settings. That means the possible IP range is very dynamic, which causes issues when I work on my local server. At the moment I have a dynamic IP on my host and a static IP on my guest. That way I can access the server from my host (by adding a record to the hosts file), and I also have an internet connection on the guest. But once I change networks, it no longer works (assuming the network has a different configuration). My question is: how do I set up host-guest networking so that, no matter what network I connect to, I keep the static IP on the guest, which is registered in the hosts file on my host, so that I can access the webserver and also have an internet connection on the guest? Hope it makes sense. Thank you
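
    The usual pattern for this is two adapters: NAT for internet access (which follows whatever network the host is on) plus a host-only adapter for a stable host-to-guest address. A sketch with VBoxManage, assuming the VM is named "server" (192.168.56.x is VirtualBox's host-only default; the interface is called vboxnet0 on Linux hosts and gets a longer name on Windows):

        VBoxManage hostonlyif create
        VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1
        VBoxManage modifyvm "server" --nic1 nat --nic2 hostonly --hostonlyadapter2 vboxnet0

    Inside the guest, the second interface gets a static address such as 192.168.56.10, which goes into the host's hosts file and never changes regardless of the external network.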

    Read the article

  • Ubuntu's GUI "Open With" command is not remembering the application

    - by Tom Brito
    In Ubuntu, when we right-click an icon and use "Open With", there is an option to remember that application for that file type. For some reason, my system no longer memorizes the application (I think it was after some update). And even when I have already used the "Open With" command, the application does not show up in the "Open With" list, so every time I have to go to "Open With" > "Other Application". Any hint to solve this?
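
    As a workaround, the association can also be set from a terminal; a sketch using xdg-mime (the file, .desktop name and MIME type are illustrative):

        xdg-mime query filetype some-file.xyz          # find the file's MIME type
        xdg-mime default someapp.desktop text/x-foo    # bind that type to an application

    The resulting entry lands in ~/.local/share/applications/mimeapps.list, which is also worth checking for corruption if the GUI keeps forgetting choices.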

    Read the article

  • How to detect bots programmatically

    - by Tom
    We have a situation where we log visits and visitors on page hits, and bots are clogging up our database. We can't use CAPTCHA or other techniques like that, because this is before we even ask for human input; basically we are logging page hits, and we would like to log only page hits by humans. Is there a list of known bot IPs out there? Does checking known bot user agents work?
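
    User-agent checks catch the well-behaved crawlers but are trivially spoofed; the major search engines therefore recommend verifying by double reverse DNS. A sketch from a shell (the IP is illustrative, from a published Googlebot range):

        host 66.249.66.1                        # reverse lookup should end in googlebot.com
        host crawl-66-249-66-1.googlebot.com    # forward lookup should return the original IP

    Only when both lookups agree is the hit genuinely from that crawler; the same check works for Bing (search.msn.com) and others.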

    Read the article

  • Can't register a softphone to Asterisk 11

    - by Tom
    I have a VM (on Oracle VirtualBox) running Fedora 17. I've installed Asterisk 11 on it from source, following the wiki for installation (https://wiki.asterisk.org/wiki/display/AST/Creating+SIP+Accounts) to the letter. The IP of the VM running Fedora is 192.168.1.7, and I can ping it from the host machine (Ubuntu 12.04), which is at 192.168.1.2. I've tried registering with Ekiga with the following settings: user: demo-alice@192.168.1.7, password: verysecretpassword, registrar: 192.168.1.7. But I'm getting the error "transport fail". Also, while trying to register I'm logged in to the Asterisk CLI with verbose level 3 and debug level 4, and nothing appears. Some more relevant data: I've added the following code to the end of my sip.conf.sample file:

        [demo-alice]
        type=friend
        host=dynamic
        secret=verysecretpassword
        context=users
        deny=0.0.0.0/0
        permit=192.168.1.0/255.255.255.0

        [demo-bob]
        type=friend
        host=dynamic
        secret=othersecretpassword
        context=users
        deny=0.0.0.0/0
        permit=192.168.1.0/255.255.255.0

    After I changed the sip.conf.sample file, I created a copy of it named sip.conf, then logged in to the Asterisk CLI and typed sip reload. Then I tried to register an Ekiga client from my host machine at 192.168.1.2, but it doesn't work, and nothing appears on the Asterisk CLI while in verbose mode level 3. BTW, if there is missing information about my question, please don't close it; comment about what you need to know and I'll edit it into the question. Tnx.
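
    Since nothing shows up even at verbosity 3, the first question is whether REGISTER packets reach Asterisk at all. Two hedged checks, one on each side:

        # on the Fedora VM -- watch for incoming SIP on the default port:
        tcpdump -ni any udp port 5060

        # in the Asterisk CLI -- print raw SIP messages as they arrive:
        sip set debug on

    If tcpdump stays silent too, Fedora 17's firewall is the prime suspect; UDP 5060 needs to be open before Ekiga's REGISTER can ever arrive.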

    Read the article

  • Would a PHP application benefit from being served from a RAM drive?

    - by Tom Marthenal
    I am in charge of hosting a PHP application that is large and slow, but easy to scale. The application is entirely static, with no writable disk storage needed. We've profiled the application, and the main bottleneck appears to come from loading the application, not from the work the application does. The application is not CPU-intensive, although it does use a fair amount of memory (think Magento). Currently we distribute it by having a series of servers with the same PHP files on their hard drives and a load balancer in front of them. Easy but expensive. I've been reading about RAM disks and the I/O benefits they offer, and was wondering if they would be well suited to PHP applications. Since PHP applications are loaded from disk for every request and often involve lots of different files (as opposed to being kept in memory like a Java application), I would figure that disk performance can be a severe bottleneck. Would placing the PHP files on a RAM disk and using the mount point as Apache's document root offer performance benefits? A startup script could create the RAM drive and then copy the files (which are plain text and small) from a permanent location to the temporary RAM drive. Does this make sense, or should I just trust the Linux kernel to cache the appropriate files in memory by itself?
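
    The startup script described would only be a few lines; a minimal sketch using tmpfs (paths and size are illustrative):

        mount -t tmpfs -o size=512m tmpfs /var/www/ramdisk
        cp -a /var/www/app/. /var/www/ramdisk/
        # then point Apache's DocumentRoot at /var/www/ramdisk

    That said, once the files have been read a few times the kernel's page cache keeps them in RAM anyway, so an opcode cache (e.g. APC), which skips re-parsing the PHP rather than just re-reading it, usually buys far more.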

    Read the article

  • Which program is locking all my executable files?

    - by Tom Wijsman
    When updating any software product, as well as when manually trying to replace .exe files, Windows says that access to the file is denied, and in fact the System process is holding a handle to the file when I check with Process Explorer. "This must be a driver or something that is malfunctioning" was my first thought, but now I wonder how I can figure out which driver/program is doing this and why. Unlocker doesn't seem to be working for me, unless someone can tell me how to use it properly other than making it appear as a magic wand in the notification area... This is what Unlocker puts in my event log:

        The description for Event ID 1060 from source Application Popup cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer. If the event originated on another computer, the display information had to be saved with the event. The following information was included with the event: \??\C:\Program Files (x86)\Unlocker\UnlockerDriver5.sys the message resource is present but the message is not found in the string/message table

    Searching for event 1060 turns up: "<file name> has been blocked from loading due to incompatibility with this system." Perhaps it is because I have 64-bit Windows?
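
    When it is the System process (PID 4) holding file handles, the culprit is usually a kernel-mode filter driver rather than a normal program. Windows ships a tool that lists the minifilters currently attached; running it from an elevated prompt and scanning the output for non-Microsoft entries (antivirus, backup, anti-tamper tools) is a reasonable next step:

        fltmc filters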

    Read the article

  • Can I restore Windows 8/install Windows 7 from BIOS?

    - by Tom
    I recently got an ASUS K55A series laptop with Windows 8 on it and have been trying for days to load Windows 7 on it, to no avail. I recently discovered how to get my Windows 7 install DVD to boot from the BIOS, but I had deleted all of my Windows 8 system information from both partitions of my HDD, and Windows 7 setup says it cannot install to the disk because of a partition format issue. I did not delete the Windows 8 recovery partition on the HDD, but I can't get the HDD to show up in my boot menu in the BIOS, and none of the F keys get me into recovery mode (only DEL and F2 work, and they take me into the BIOS).

    Read the article
