Search Results

Search found 8605 results on 345 pages for 'general dynamics'.


  • Stack-based keyboard delay using Logitech MX3100 keyboard

    - by Mark S. Rasmussen
    I've been using a Logitech Cordless Desktop MX3100 keyboard for quite a while. I've never really had any problems, except for the occasional typo. I noticed, however, that I tended to make the typo "Laod" instead of "Load" quite a bit more often than any other typo. As it started to get on my nerves, I decided to do some testing. What I found out was that when I write lowercase "load", I never make the typo. All uppercase, or just an uppercase L, and I make the typo quite often. My actual (very scientific) testing is probably best described by showing the output:

        moatmoatmoat MoatMoatMoat
        loatloatloat LaotLaotLaot
        loafloafloaf LaofLaofLaof
        hoathoathoat HoatHoatHoat
        hoadhoadhoad HoadHoadHoad
        lortlortlort LrotLrotLrot

    What I found out was that whenever shift was depressed, typing an uppercase "L" would induce a significant lag if the next character was an "o", compared to the lag of any other key:

        High "o" lag:                 LoLoLoLoLoLo
        No "a" lag:                   LaLaLaLaLaLa
        No lag for either "o" or "a": lolololololo lalalalalala

    By realizing this I regained a slight bit of sanity, as I knew I wasn't coming down with a case of Parkinson's. I was actually typing correctly; the lag just caused my input to be interpreted wrongly. Now, what really bugs me is that I can't fathom how this is occurring. What I'm actually typing, in physical order, is this: L - o - a - d, and yet the "a" is output before the "o", even though "o" was pressed before "a". So while the keyboard is processing the "Lo" combo, the "a" gets prioritized and is inserted before the "o" is done processing, resulting in "Laod" instead of "Load". And this only happens when typing "Lo", not when typing lowercase "lo". This problem could stem from the keyboard hardware, the receiver hardware or the keyboard software driver. No matter where the fault lies, however, I can't imagine how this could be implemented as anything but a FIFO queue. A general delay, sure, I could live with that, albeit irritated. But a lag that affects different keys differently, and even results in an unpredictable ordering - that just doesn't make any sense. I've solved the problem by switching to a wired keyboard. I just can't shake it off me, though: what kind of bug/error/scenario would result in a case like this? Edit: It's been suggested that I stop drinking Red Bull and stick to water instead. While that may actually help solve the issue, I'm really not looking for a solution as such. I'm more interested in an explanation of how this could happen, as I can't imagine any viable technical implementation that could result in this behavior.
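
    One way to see how such reordering could arise at all is to model the key events as being released by completion time rather than by arrival (i.e., not a FIFO). The Python sketch below is purely illustrative - the event structure and per-key "processing cost" numbers are invented assumptions, not anything known about the MX3100's firmware - but it shows how a slow-to-process key plus out-of-order dispatch turns the physical sequence L-o-a-d into "Laod":

        def dispatch(presses, cost):
            """presses: list of (press_time_ms, key); cost: per-key processing ms.

            Each event is emitted when press_time + processing_cost elapses;
            sorting by completion time models a non-FIFO event buffer.
            """
            done = sorted((t + cost[k], t, k) for t, k in presses)
            return "".join(k for _, _, k in done)

        # Invented costs standing in for whatever makes a shifted "Lo" slow:
        cost = {"L": 1, "o": 15, "a": 1, "d": 1}
        # Physical typing order: L, o, a, d, pressed 10 ms apart.
        print(dispatch([(0, "L"), (10, "o"), (20, "a"), (30, "d")], cost))  # -> Laod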

    Read the article

  • varnish3, mod_geoip with apache2 using mod_rewrite and mod_rpaf

    - by mursalat
    I am maintaining a website with 3 different versions of the site, in 3 different languages, handled by a single system written in PHP, which takes in environment variables based on the domain name that is being accessed. These are the three sites:

        myshop.com : English international version
        myshop.eu  : European version of the site
        myshop.ru  : Russian version of the site

    When myshop.com is accessed from Russia, it is to be redirected to myshop.ru; when any country from Europe accesses myshop.com, it is redirected to myshop.eu; and international visitors stay at myshop.com, although they can go to the country-specific site. All these country redirections are done using the GeoIP apache2 mod in order to determine the country code, which is used in a RewriteCond to state a RewriteRule. There are some exception IPs for which the rewrite is not done, basically the IPs of the developers' PCs. The site had been doing just fine, until we decided to set up Varnish to give the site a boost. It really did give it a great boost, but the country-specific rewrites have become buggy. What started to happen is that a Russian visitor can go to myshop.com and won't be redirected, until he clicks a random link (perhaps a link not cached by Varnish yet) and is then redirected to his specific country. To deal with the proxying I set up mod_rpaf, and for exceptions to the rewrite rule (for the developers' IPs) I used this: RewriteCond %{HTTP:X-FORWARDED-FOR} !^43\.43\.43\.43. I restarted Varnish and apache2; it worked for a while, then it messed up again. I have been making changes all day, but I have little to no clue as to what's going on - sometimes it works, sometimes it doesn't, and sometimes it half works. As for GeoIP, I used a PHP script to check the $_SERVER variable, and here is the general idea of the output:

        [HTTP_X_FORWARDED_FOR] => 43.43.43.44
        [HTTP_X_VARNISH] => 1705675599
        [SERVER_ADDR] => 127.0.0.1
        [SERVER_PORT] => 80
        [REMOTE_ADDR] => 43.43.43.44
        [GEOIP_ADDR] => 43.43.43.44
        [GEOIP_CONTINENT_CODE] => EU
        [GEOIP_COUNTRY_CODE] => FR
        [GEOIP_COUNTRY_NAME] => France

    Now, thanks to the "random" redirects, I hardly have a clue as to what is going on, so can you please give me some ideas as to what tools to use to debug this? I have tried looking at the redirect logs, but they really don't show much, and varnishlog isn't helping much either - although I must admit I am no professional at Varnish. I believe the problem is that Varnish caches the URL, and thus the Apache redirects are not being done properly: whoever visits the site first gets a redirect, and based on that cached response other users are served the same content, depending on where the first user was when the cache was last updated. Is that correct? If so, how can I solve the problem? Also, I have the option of doing the GeoIP redirects in Varnish 3 instead of using apache2: is that the best practice? Any suggestion as to debugging this or fixing it would be helpful! Thanks!
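
    Your reading of the failure is the usual one: Varnish caches one object per URL, so the first visitor's country decides what everyone else gets. One common approach is to make the country part of the cache key so each country gets its own cached copy. A minimal Varnish 3 VCL sketch, assuming some GeoIP-aware layer (e.g. a geoip vmod in vcl_recv) has already put the visitor's country code into a request header named X-Country-Code - both the vmod and the header name are assumptions for illustration, not part of the setup above:

        sub vcl_hash {
            hash_data(req.url);
            hash_data(req.http.host);
            # Assumed header: country code injected before the cache lookup;
            # without it, all countries share one cached object per URL.
            if (req.http.X-Country-Code) {
                hash_data(req.http.X-Country-Code);
            }
            return (hash);
        }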

    Read the article

  • Getting Internet connection sharing working in a slightly more complicated configuration

    - by tirichitirca t
    I have the following configuration:

        Computer A - Mac OS X 10.8.4, wireless & wired adapters
        Computer B - Windows 7 (64 bit), wireless & wired adapters, has internet connection via the wired adapter (ethernet)
        A d-link wired/wireless router

    Problem to solve: connect from computer A to the internet through the wired connection of computer B. I tried the following: I set up a local network between A and B using the d-link router. The configuration is this:

        D-link router - 192.168.0.1
        A - wired connection to the d-link router, static 192.168.0.101 (I could have used the wireless, but I preferred the wired connection)
        B - wireless connection to the d-link router, DHCP 192.168.0.102 (but I made sure it always gets the same address)
        B - wired connection to the internet using some address that begins with 10.x.y.z

    In this configuration A can see B. I enabled ICS on the wired adapter of B. I set up the gateway of A to point to B, and the DNS servers to point to the DNS servers specified for the 10.x.y.z address. It doesn't work: A gets only as far as B. It can ping the 10.x.y.z address of B, though. I then found this article: http://terrybritton.com/windows-internet-connection-sharing-ics-not-working-with-linux-bridging-is-the-solution-916/. Terry suggests that a bridge should be defined on B between the two connections. I tried that, but basically computer B is screwed as soon as I create the bridge: it can't connect to the internet anymore. It is as if the network bridge thinks the traffic to the internet should go from the wired connection to the wireless and not the other way around. The other thing that puzzles me is the router itself. In general the router needs an internet address; in a normal configuration it is the router that gets the IP address and the internet traffic goes through the router. In my case I am not interested in that. So, any suggestions to get this working? I wouldn't shy away from using commercial software, but I would think Windows 7 should allow me to do it. Thanks
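
    One detail worth checking, sketched below: when ICS is enabled, Windows 7 normally forces the sharing-facing adapter to 192.168.137.1 and expects clients to use that address as both gateway and DNS - so pointing A's gateway at B's 192.168.0.x address may be the problem. The commands are standard, but the service name "Ethernet" on the Mac and the client address 192.168.137.2 are assumptions to adapt:

        REM On computer B (Windows 7): list adapter addresses; the adapter
        REM facing A should show 192.168.137.1 once ICS is enabled on the
        REM internet-facing (10.x.y.z) adapter.
        netsh interface ipv4 show addresses

        # On computer A (OS X): point the wired interface at B's ICS address.
        sudo networksetup -setmanual "Ethernet" 192.168.137.2 255.255.255.0 192.168.137.1
        sudo networksetup -setdnsservers "Ethernet" 192.168.137.1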

    Read the article

  • Troubleshooting an unstable internet connection

    - by Konrad Rudolph
    My MacBook Pro running OS X (10.9, but I had the same problem before) is connected to a Belkin router via WiFi and, using Virgin Media as the ISP, to the internet. The connection is extremely unstable - on some days, I get a ping timeout every few seconds. In addition, some domains seem to suffer general connectivity issues. For instance, I often find that while the youtube.com website loads, none of the videos (which are hosted on a separate domain) do. At other times, videos load but always fail to buffer, even though the actual connection speed is OK and even though I've disabled DASH playback. Since I'm living in a rented room and the ISP contract isn't actually mine, I've got only limited possibilities of addressing the problem. In particular, I have no access to the router configuration, and my non-tech-savvy landlady, while sympathetic, is not in a great hurry to hand the problem over to the ISP's customer support. What's more, I seem to be the only person in the house experiencing these problems - but I can imagine that this is simply because I'm the only one who's using the internet continuously. I'm searching for specific tests that might be able to pinpoint - and ideally solve - the problem. So far all I've managed to do is establish that Virgin is routing my traffic in mysterious ways. Here's an excerpt from traceroute google.co.uk. It's worth mentioning that the host name doesn't seem to matter a lot; the trace route is always the same.

        traceroute: Warning: google.co.uk has multiple addresses; using 62.254.36.148
        traceroute to google.co.uk (62.254.36.148), 64 hops max, 52 byte packets
         1  (192.168.2.1)  1.112 ms  1.300 ms  2.359 ms
         2  10.100.32.1 (10.100.32.1)  11.926 ms  10.217 ms  24.987 ms
         3  cmbg-core-1a-ae3-610.network.virginmedia.net (80.1.202.93)  28.809 ms  *  66.653 ms
         4  popl-bb-1b-ae16-0.network.virginmedia.net (212.43.163.141)  13.759 ms  126.504 ms  20.472 ms
         5  nrth-bb-1b-et-010-0.network.virginmedia.net (62.253.175.57)  28.357 ms  16.398 ms  42.387 ms
         6  nrth-bb-1c-ae1-0.network.virginmedia.net (62.253.174.110)  27.441 ms  15.622 ms  12.044 ms
         7  lutn-icdn-1-ae0-0.network.virginmedia.net (62.253.175.82)  16.678 ms  28.463 ms  28.253 ms
         8  * * *
         9  * * *
        10  * * *
        ^C

    If I let it, this goes on until the end of time; it never seems to reach a destination. Is this normal? A friend living in the same town who is also with Virgin Media has a more conventional traceroute output: 7 hops to google.co.uk, all of which send the ICMP TIME_EXCEEDED response. The obvious fix - rebooting the router - doesn't seem to help. As far as I can tell, the WiFi connection is stable (I can always ping the router), so the problem is further downstream. I've tried using an alternative DNS before (OpenDNS), but if anything, this made things worse. In fact, it made all Google services nigh unreachable.
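
    For pinpointing where along the path the loss occurs, a long-running mtr report is usually more informative than one-off traceroutes, because it shows per-hop loss percentages over many probes - loss that starts at one hop and persists all the way to the end points at that hop or its return path. A sketch (any hostname works; 600 cycles at the default one-second interval is ten minutes of data):

        # Continuous path probing with per-hop loss statistics:
        mtr --report --report-cycles 600 google.co.uk

        # Separate the DNS question from the connectivity question by
        # pinging the raw IP from the traceroute above (no DNS involved):
        ping -c 100 62.254.36.148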

    Read the article

  • How can I centralise MySQL data between 3 or more geographically separate servers?

    - by Andy Castles
    To explain the background to the question: we have a home-grown PHP application (for running online language-learning courses) running on a Linux server and using MySQL on localhost for saving user data (e.g. results of tests taken, marks of submitted work, time spent on different pages in the courses, etc). As we have students from different geographic locations, we currently have 3 virtual servers hosted close to those locations (Spain, UK and Hong Kong), and users are added to the server closest to them (they access via different URLs, e.g. europe.domain.com, uk.domain.com and asia.domain.com). This works, but is an administrative nightmare, as we have to remember which server a particular user is on, and users can only connect to one server. We would like to somehow centralise the information so that all users are visible on any of the servers and users could connect to any of the 3 servers.

    The question is, what method should we use to implement this? It must be an issue that lots of people have encountered, but I haven't found anything conclusive after a fair bit of Googling around. The closest things I have seen to solutions are:

        - something like master-master replication, but I have read so many posts suggesting that this is not a good idea, as things like auto_increment fields can break;
        - circular replication, which sounded perfect, but to quote from O'Reilly's High Performance MySQL, "In general, rings are brittle and best avoided".

    We're not against rewriting code in the application to make it work with whatever solution is required, but I am not sure if replication is the correct thing to use. Thanks, Andy. P.S. I should add that we experimented with writes to a central database and then reads from a local database, but the response time between the different servers for writing was pretty bad, and it's also important that written data is available immediately for reading, so if replication is too slow this could cause out-of-date data to be returned.

    Edit: I have been thinking about writing my own rudimentary replication script. It would involve giving each user a server ID to say which is his "home server"; e.g. users in Asia would be marked as having the Hong Kong server as their own server. Then the replication scripts (a PHP script set to run as a cron job reasonably frequently, e.g. every 15 minutes or so) would run independently on each of the servers in the system. They would go through the database and distribute any information about users whose "home server" is the server the script is running on to all of the other databases in the system. They would also need to pull in any new information added to the other databases in the system where the "home server" flag is the server where the script is running. I would need to work out the details and build in the logic to deal with conflicts, but I think it would be possible. However, I wanted to make sure that there is not a correct solution for this already out there, as it seems like it must be a problem that many people have already come across.
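
    For what it's worth, the auto_increment breakage in multi-master setups (and in the home-server scheme above) can be avoided with MySQL's built-in auto_increment_increment/auto_increment_offset variables, which give each server a disjoint key space. A sketch for the three servers:

        # /etc/my.cnf on the Spain server; UK and Hong Kong get offsets 2 and 3:
        [mysqld]
        auto_increment_increment = 3   # step = number of servers in the ring
        auto_increment_offset    = 1   # unique per server: 1, 2 or 3
        # Resulting keys never collide:
        #   Spain: 1, 4, 7, ...   UK: 2, 5, 8, ...   Hong Kong: 3, 6, 9, ...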

    Read the article

  • Varnish + Plesk : vhost broken

    - by Raphaël
    I have an e-commerce site with 300,000 products and 20,000 categories. It is slow and currently in production. I decided to install Varnish to speed it up. The trouble is that during installation, I got a Guru Meditation. Since the site is in production, I couldn't afford to leave this error up for more than a second, and I was afraid I had made an enormous blunder. I followed this tutorial: http://www.euperia.com/linux/setting-up-varnish-with-apache-tutorial and I'm sure I followed it all without error. I suspect there may be a specific configuration needed with Plesk. Has anyone already installed Varnish on an Ubuntu 11.04 server with Plesk 10? Does anyone have a better resource? I know "Guru Meditation" is very vague as an error, but maybe some of you have had this problem.

    Edit 24/11/2011: I continued to work on Varnish + Plesk... but it still does not work.

    1) I changed the port for Apache in Plesk:

        # mysql -uadmin -p`cat /etc/psa/.psa.shadow` -D psa -e'replace into misc (param, val) values ("http_port", 8008)'

    1.1) I rebuilt the server conf:

        # /usr/local/psa/admin/bin/httpdmng --reconfigure-all

    2) I changed the Apache conf files (in case Plesk had not already taken care of them):

        vim /etc/apache2/ports.conf
        NameVirtualHost *:8008
        Listen 8008

    2.1) I did the same with /etc/apache2/sites-enabled/000-default.

    3) I changed the port of my vhost (a single server):

        vim /var/www/vhosts/MYDOMAIN.COM/conf/XXXXXXXXX.http.include

    I replaced port 80 with the one I want, then rebuilt the vhost conf, with and without www (see my question on Server Fault: Edit vhost port in plesk 10.3):

        /usr/local/psa/admin/sbin/websrvmng --reconfigure-vhost --vhost-name=<domain_name>

    4) I installed Varnish by following this tutorial: http://www.euperia.com/linux/setting-up-varnish-with-apache-tutorial

    5) I restarted Apache 2 and Varnish:

        service apache2 restart
        service varnish restart

    When I go to my site, I come across the default Apache page: "It works! This is the default web page for this server. The web server software is running but no content has been added, yet." This means that my vhost does not point to the right place. Why? What to do? How? Can somebody help me?
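
    Landing on the default "It works!" page usually means the request on the new port isn't matching any name-based vhost, so Apache serves the first/default one. A quick way to check how Apache mapped the vhosts after the port change, and whether the right one answers when asked directly (using the placeholder domain from above):

        # Dump the parsed vhost configuration - look for *:8008 entries:
        apache2ctl -S

        # Ask Apache on 8008 directly, bypassing Varnish, with the right Host:
        curl -s -H "Host: MYDOMAIN.COM" http://127.0.0.1:8008/ | head

        # Confirm which process listens on which port:
        netstat -tlnp | grep -E ':(80|8008) '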

    Read the article

  • Virtual Machine with Bridged Adapter to Centos not accepting ssh from host machine

    - by javadba
    I have a bridged connection on VirtualBox from an OS X 10.8.5 host to a CentOS 5.8 guest. But I suspect this is more of a general issue than one specific to the host and the precise version of Linux. Shown below is the networking info from within the guest (the VirtualBox adapter settings were screenshots, omitted here). sshd is running on port 22:

        [root@oracle-linux ~]# ps -ef | grep sshd | grep -v grep
        root  3103     1  0 20:22 ?  00:00:00 /usr/sbin/sshd
        root 14994  3103  0 21:23 ?  00:00:00 sshd: root@pts/1

    Port 22 is listening:

        [root@oracle-linux ~]# netstat -an | grep 22 | grep tcp | grep LIST
        tcp   0   0 0.0.0.0:22        0.0.0.0:*   LISTEN
        tcp   0   0 127.0.0.1:2207    0.0.0.0:*   LISTEN
        tcp   0   0 127.0.0.1:2208    0.0.0.0:*   LISTEN
        tcp   0   0 :::22             :::*        LISTEN

    Here are the IP addresses, still on the guest:

        [root@oracle-linux ~]# ip addr
        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever
        2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
            link/ether 08:00:27:b9:e5:79 brd ff:ff:ff:ff:ff:ff
            inet 10.0.15.100/24 brd 10.0.15.255 scope global eth0
            inet6 fe80::a00:27ff:feb9:e579/64 scope link
               valid_lft forever preferred_lft forever
        3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
            link/ether 08:00:27:b4:86:8a brd ff:ff:ff:ff:ff:ff
            inet 10.0.3.15/24 brd 10.0.3.255 scope global eth1
            inet6 fe80::a00:27ff:feb4:868a/64 scope link
               valid_lft forever preferred_lft forever

    I can ssh to the guest from the guest:

        [root@oracle-linux ~]# ssh 10.0.3.15
        The authenticity of host '10.0.3.15 (10.0.3.15)' can't be established.
        RSA key fingerprint is ef:08:19:72:95:4d:e5:28:af:f3:6f:54:07:84:ba:04.
        Are you sure you want to continue connecting (yes/no)? yes
        Warning: Permanently added '10.0.3.15' (RSA) to the list of known hosts.
        root@10.0.3.15's password:
        Last login: Mon Oct 21 21:24:12 2013 from 10.0.15.100

    But I can NOT ssh from the host to the guest:

        18:27:04/shared:11 $ssh root@10.0.15.100
        ssh: connect to host 10.0.15.100 port 22: Operation timed out
        lost connection

    Here eth0 is the bridged connection; adapter 2 (eth1) is a NAT. BTW, I looked into other answers, and one of them mentioned doing service iptables stop. That did not help. In case NAT was causing any issues, I shut it down and restarted networking:

        [root@oracle-linux ~]# /etc/init.d/network restart
        Shutting down interface eth0:  [ OK ]
        Shutting down interface eth1:

    Still no joy:

        18:27:04/shared:11 $ssh root@10.0.15.100
        ssh: connect to host 10.0.15.100 port 22: Operation timed out
        lost connection
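
    A sketch of how one might narrow this down from both ends (interface names as in the output above):

        # On the OS X host: is the guest reachable at all, and is it just
        # port 22 that's blocked?
        ping -c 3 10.0.15.100
        nc -vz -w 3 10.0.15.100 22

        # On the guest: watch whether the host's SYN packets ever arrive on
        # the bridged NIC. If nothing shows up here during the ssh attempt,
        # the problem is the bridge itself, not sshd or the firewall.
        tcpdump -n -i eth0 tcp port 22

    If tcpdump stays silent, it is worth double-checking which host interface the bridged adapter is attached to; bridging over a Wi-Fi host interface in particular is a common source of exactly this one-way behavior.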

    Read the article

  • My server is slower than the average user's computer, should I still offload Access queries to SQL Server? [closed]

    - by andrewb
    Possible Duplicate: How do you do Load Testing and Capacity Planning for Databases

    I have a database set up with MS Access 2007 front ends and an SQL Server 2005 back end. At the moment, all the queries are saved in the front end, as I've only recently moved to an SQL Server back end. I'm wondering how many of those queries I should save as stored procedures/views on SQL Server.

    About the system: the number of concurrent users is only a handful, though it could be as high as 25 at one time (very unlikely). The average computer has an Intel i3-2120 CPU running at 3.3 GHz, which gets a PassMark score of 3,987, whilst the server has an Intel Xeon E5335 running at 2.0 GHz, which gets a PassMark score of 2,637. Always an awkward situation when an i3 outperforms a Xeon... though the i3 is from Q1 2011 and the Xeon is from Q2 2009. There is potential for a server upgrade in the future, though it wouldn't come easily. I'm inclined to move the queries to the back end, as they are beginning to take noticeable time, and I figure that is a better way of doing things. I like the idea of throwing everything at the server, then pushing for a server upgrade. It makes more sense in my mind to upgrade one server rather than 30 PCs. Or am I being overzealous?

    Why my question isn't a duplicate: it seems that my question has been misinterpreted and labelled a duplicate of quite a different question, one about testing and capacity planning. I'll try to explain how my question is very different from the linked question. The crux of my question is something like "Even though my server is technically slower, is it better to have it doing more of the queries?" There are two ways that people could answer this:

        1. I agree the server is going to be slower, but the extra benefits of such and such (like the less Access the better) mean you should move most queries to the server anyway (or: no, it doesn't outweigh the benefit, keep them in Access).
        2. Actually the server will be faster, because of such and such.

    I'm hoping that people out there can provide some answers like this, and the question in the dupe link doesn't really provide either of these answers. OK sure, I suppose I could do extensive performance testing to compare Access queries running on a local machine to SQL Server queries running on the server, but that sounds like a very hard task (particularly performance testing of Access) compared to someone giving some quick general guidance. And again, my question is looking for a lot more than immediate performance benefit.
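
    For context, "moving a query to the back end" usually means replacing a saved Access query with a server-side view or stored procedure that Access then links to or calls via a pass-through query. A minimal T-SQL sketch - the table and column names are invented for illustration:

        -- A saved Access query like "qryOpenOrders" becomes a server view:
        CREATE VIEW dbo.vwOpenOrders AS
        SELECT o.OrderID, o.CustomerID, o.OrderDate
        FROM dbo.Orders AS o
        WHERE o.Status = 'Open';

    The usual argument for doing this even on slower server hardware is that the filtering happens on the server, so far fewer rows cross the network to each of the 30 PCs - often a bigger win than raw CPU speed.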

    Read the article

  • How to move a ruby on rails application to a new server

    - by ManiacZX
    I have a Rails app on an old Ubuntu server that I need to move onto a new machine. I haven't worked with Ruby on Rails, so I don't really know anything about the structure of the app. I want to load this onto an Ubuntu 8.04 AMI on Amazon EC2 and am looking for any information regarding the migration process, such as:

        - Do I copy over the entire folder defined as the application root in the mongrel config (for ex: /u/apps/myapp/current) or just certain folders?
        - Am I looking for trouble if I go with the latest versions of ruby and the various gems?
        - Any general gotchas to look out for in the process.

    Current server information:

        root@webnode001:/# cat /proc/version
        Linux version 2.6.15-27-server (buildd@terranova) (gcc version 4.0.3 (Ubuntu 4.0.3-1ubuntu5)) #1 SMP Fri Dec 8 18:43:54 UTC 2006
        root@webnode001:/# rails -v
        Rails 1.2.3
        root@webnode001:/# mongrel_rails cluster::configure --version
        Version 1.0.1
        root@webnode001:/# gem -v
        0.9.0
        root@webnode001:/# gem list -l

        *** LOCAL GEMS ***

        actionmailer (1.3.3, 1.2.5)
            Service layer for easy email delivery and testing.
        actionpack (1.13.3, 1.12.5)
            Web-flow and rendering framework putting the VC in MVC.
        actionwebservice (1.2.3, 1.1.6)
            Web service support for Action Pack.
        activerecord (1.15.3, 1.15.2, 1.14.4)
            Implements the ActiveRecord pattern for ORM.
        activesupport (1.4.2, 1.4.1, 1.3.1)
            Support and utility classes used by the Rails framework.
        cgi_multipart_eof_fix (2.1)
            Fix an exploitable bug in CGI multipart parsing which affects Ruby <= 1.8.5 when multipart boundary attribute contains a non-halting regular expression string.
        daemons (1.0.7, 1.0.5, 1.0.4, 1.0.2)
            A toolkit to create and control daemons in different ways
        eventmachine (0.7.2, 0.7.0)
            Ruby/EventMachine socket engine library
        fastercsv (1.2.0, 1.1.0)
            FasterCSV is CSV, but faster, smaller, and cleaner.
        fastthread (1.0)
            Optimized replacement for thread.rb primitives
        ferret (0.11.4)
            Ruby indexing library.
        gem_plugin (0.2.2, 0.2.1)
            A plugin system based only on rubygems that uses dependencies only
        mongrel (1.0.1, 0.3.13.4)
            A small fast HTTP library and server that runs Rails, Camping, Nitro and Iowa apps.
        mongrel_cluster (0.2.1)
            Mongrel plugin that provides commands and Capistrano tasks for managing multiple Mongrel processes.
        mysql (2.7)
            MySQL/Ruby provides the same functions for Ruby programs that the MySQL C API provides for C programs.
        piston (1.3.3)
            Piston is a utility that enables merge tracking of remote repositories.
        rails (1.2.3, 1.1.6)
            Web-application framework with template engine, control-flow layer, and ORM.
        rake (0.7.3, 0.7.1)
            Ruby based make-like utility.
        sources (0.0.1)
            This package provides download sources for remote gem installation
        swiftiply (0.5.1)
            A fast clustering proxy for web applications.
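
    Given how old this stack is (Rails 1.2.3, RubyGems 0.9.0), a common approach - sketched below, with paths taken from the question and a placeholder hostname - is to copy the whole app directory and pin the exact gem versions on the new machine rather than jumping to the latest releases; Rails 1.x apps generally will not run on Rails 2+ without code changes:

        # Copy the deployed app tree (including shared assets) to the new server:
        rsync -az /u/apps/myapp/ user@new-host:/u/apps/myapp/

        # On the new machine, install the versions the old box reports,
        # not the latest:
        gem install rails -v 1.2.3
        gem install mongrel -v 1.0.1
        gem install mongrel_cluster -v 0.2.1
        gem install mysql -v 2.7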

    Read the article

  • Customizing Fantasy Remote .INI file

    - by karthik
    I am using Fantasy Remote to remotely view other machines. I have attached the default .INI file that Fantasy Remote uses. When I connect to a machine, the client user should not have mouse and keyboard access to the remote machine; it should be a view-only remote connection. I also want the remote viewer screen to be in full-screen mode, because I don't want the user to do anything with the menu bars of Fantasy Remote - this is a kiosk application. What should I change in the configuration (.ini) file in order to achieve the above? If anyone has used this software before, kindly help.

        [APP]
        iVersion= 101
        pcVersion=1.01a
        pcBuildDate=Mar 27 2009

        [MAIN]
        iFirstSetup= 0
        rcMain.rcLeft= 676
        rcMain.rcTop= 378
        rcMain.rcRight= 1004
        rcMain.rcBottom= 672
        iShowLog= 0
        iMode= 1

        [GENERAL]
        iTips= 1
        iTrayAnimation= 1
        iCheckColor= 1
        iPriority= 1
        iSsememcpy= 1
        iAutoOpenRecv= 1
        pcRecvPath=C:\Documents and Settings\karthikeyan\My Documents\Downloads\fremote101a\FantasyRemote101a\recv
        pcFileName=FantasyRemote
        iLanguage= 1

        [SERVER]
        iAcceptVideo= 1
        iAcceptAudio= 1
        iAcceptInput= 1
        iAutoAccept= 1
        iAutoTray= 0
        iConnectSound= 1
        iEnablePassword= 0
        pcPassword=
        pcPort=7902

        [CLIENT]
        iAutoConnect= 0
        pcPassword=
        pcDefaultPort=7902

        [NETWORK]
        pcConnectAddr=192.168.1.1
        pcPort=7902

        [VIDEO]
        iEnable= 1
        pcFcc=AMV3
        pcFccServer=
        pcDiscription=
        pcDiscriptionServer=
        iFps= 30
        iMouse= 2
        iHalfsize= 0
        iCapturblt= 0
        iShared= 0
        iSharedTime= 5
        iVsync= 1
        iCodecSendState= 1
        iCompress= 2
        pcPlugin=
        iPluginScan= 0
        iPluginAspectW= 16
        iPluginAspectH= 9
        iPluginMouse= 1
        iActiveClient= 0
        iDesktop1= 1
        iDesktop2= 2
        iDesktop3= 0
        iDesktop4= 3
        iScan= 1
        iFixW= 16
        iFixH= 8

        [AUDIO]
        iEnable= 1
        iFps= 30
        iVolume= 6
        iRecDevice= 0
        iPlayDevice= 0
        pcSamplesPerSec=44100Hz
        pcChannels=2ch:Stereo
        pcBitsPerSample=16bit
        iRecBuffNum= 150
        iPlayBuffNum= 4

        [INPUT]
        iEnable= 1
        iFps= 30
        iMoe= 0
        iAtlTab= 1

        [MENU]
        iAlwaysOnTop= 0
        iWindowMode= 0
        iFrameSize= 4
        iSnap= 1

        [HOTKEY]
        iEnable= 1
        key_IDM_HELP=0x00000070
        mod_IDM_HELP=0x00000000
        key_IDM_ALWAYSONTOP=0x00000071
        mod_IDM_ALWAYSONTOP=0x00000000
        key_IDM_CONNECT=0x00000072
        mod_IDM_CONNECT=0x00000000
        key_IDM_DISCONNECT=0x00000073
        mod_IDM_DISCONNECT=0x00000000
        key_IDM_CONFIG=0x00000000
        mod_IDM_CONFIG=0x00000000
        key_IDM_CODEC_SELECT=0x00000000
        mod_IDM_CODEC_SELECT=0x00000000
        key_IDM_CODEC_CONFIG=0x00000000
        mod_IDM_CODEC_CONFIG=0x00000000
        key_IDM_SIZE_50=0x00000074
        mod_IDM_SIZE_50=0x00000000
        key_IDM_SIZE_100=0x00000075
        mod_IDM_SIZE_100=0x00000000
        key_IDM_SIZE_200=0x00000076
        mod_IDM_SIZE_200=0x00000000
        key_IDM_SIZE_300=0x00000000
        mod_IDM_SIZE_300=0x00000000
        key_IDM_SIZE_400=0x00000000
        mod_IDM_SIZE_400=0x00000000
        key_IDM_CAPTUREWINDOW=0x00000077
        mod_IDM_CAPTUREWINDOW=0x00000004
        key_IDM_REGION=0x00000077
        mod_IDM_REGION=0x00000000
        key_IDM_DESKTOP1=0x00000078
        mod_IDM_DESKTOP1=0x00000000
        key_IDM_ACTIVE_MENU=0x00000079
        mod_IDM_ACTIVE_MENU=0x00000000
        key_IDM_PLUGIN=0x0000007A
        mod_IDM_PLUGIN=0x00000000
        key_IDM_PLUGIN_SCAN=0x00000000
        mod_IDM_PLUGIN_SCAN=0x00000000
        key_IDM_DESKTOP2=0x00000078
        mod_IDM_DESKTOP2=0x00000004
        key_IDM_DESKTOP3=0x00000079
        mod_IDM_DESKTOP3=0x00000004
        key_IDM_DESKTOP4=0x0000007A
        mod_IDM_DESKTOP4=0x00000004
        key_IDM_WINDOW_NORMAL=0x0000000D
        mod_IDM_WINDOW_NORMAL=0x00000004
        key_IDM_WINDOW_NOFRAME=0x0000000D
        mod_IDM_WINDOW_NOFRAME=0x00000002
        key_IDM_WINDOW_FULLSCREEN=0x0000000D
        mod_IDM_WINDOW_FULLSCREEN=0x00000001
        key_IDM_MINIMIZE=0x00000000
        mod_IDM_MINIMIZE=0x00000000
        key_IDM_MAXIMIZE=0x00000000
        mod_IDM_MAXIMIZE=0x00000000
        key_IDM_REC_START=0x00000000
        mod_IDM_REC_START=0x00000000
        key_IDM_REC_STOP=0x00000000
        mod_IDM_REC_STOP=0x00000000
        key_IDM_SCREENSHOT=0x0000002C
        mod_IDM_SCREENSHOT=0x00000002
        key_IDM_AUDIO_MUTE=0x00000073
        mod_IDM_AUDIO_MUTE=0x00000004
        key_IDM_AUDIO_VOLUME_DOWN=0x00000074
        mod_IDM_AUDIO_VOLUME_DOWN=0x00000004
        key_IDM_AUDIO_VOLUME_UP=0x00000075
        mod_IDM_AUDIO_VOLUME_UP=0x00000004
        key_IDM_CTRLALTDEL=0x00000023
        mod_IDM_CTRLALTDEL=0x00000003
        key_IDM_QUIT=0x00000000
        mod_IDM_QUIT=0x00000000
        key_IDM_MENU=0x0000007B
        mod_IDM_MENU=0x00000000

        [OVERLAY]
        iIndicator= 1
        iAlphaBlt= 1
        iEnterHide= 0
        pcFont=MS UI Gothic

        [AVI]
        iSound= 1
        iFileSizeLimit= 100000
        iPool= 4
        iBuffSize= 32
        iStartDiskSpaceCheck= 1
        iStartDiskSpace= 1000
        iRecDiskSpaceCheck= 1
        iRecDiskSpace= 100
        iCache= 0
        iAutoOpen= 1
        pcPath=C:\Documents and Settings\karthikeyan\My Documents\Downloads\fremote101a\FantasyRemote101a\avi

        [SCREENSHOT]
        iSound= 1
        iAutoOpen= 1
        pcPath=C:\Documents and Settings\karthikeyan\My Documents\Downloads\fremote101a\FantasyRemote101a\ss
        pcPlugin=BMP

        [CDLG_SERVER]
        mrcWnd.rcLeft= 667
        mrcWnd.rcTop= 415
        mrcWnd.rcRight= 1013
        mrcWnd.rcBottom= 634

        [CWND_CLIENT]
        miShowLog= 0
        m_iOverlayLock= 0
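
    I haven't used Fantasy Remote, so the following is a guess based purely on the key names in the file above - treat both values as assumptions to test, not documented behavior:

        [INPUT]
        iEnable= 0       ; guess: stop sending mouse/keyboard input (view-only)

        [MENU]
        iWindowMode= 2   ; guess: 0/1/2 = normal/no-frame/full-screen, judging by
                         ; the IDM_WINDOW_NORMAL/NOFRAME/FULLSCREEN hotkey names

    If the client still accepts input after that, iAcceptInput= 0 in the [SERVER] section on the remote side would be the other obvious candidate to try.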

    Read the article

  • Optimal setup for ASUS P6X58D Premium BIOS (no OC)?

    - by rumtscho
    Normally, I'd trust the mainboard manufacturer to choose the best options as defaults. But I've had trouble with this board: even with Quick Boot enabled, it booted twice as slowly as a Pentium 4 Celeron. Then I changed lots of options at once (most of them weren't explained in the manual, just mentioned with a single sentence), and the boot time is now only marginally worse than the Pentium 4 (54 sec against 46 sec from button to the password-entry screen). Now I don't know if I have turned something off which should have stayed on. I suspect I won't even be able to boot from a CD now, because even though it is present in the boot sequence, I removed a timeout that I think it needs to check whether there is a disk in the drive.

    The second reason I ask is that I don't have an internal HDD, only an SSD. I forget my sources (blush), but I am under the impression that today's BIOS and OS options are geared toward booting from an HDD, which is often less than optimal when one boots from an SSD, especially where there are functions which cause avoidable write cycles, as an SSD wears out after too many write cycles. Most of the things I've read concern the OS, but there are some BIOS-relevant options too. I am especially confused about the disk mode: the board supports AHCI, IDE simulation and RAID, but among the different articles I've read there is a proponent for each and no clear arguments for any. So can someone tell me which options are important in general, and which are important for an SSD-only system? I don't want to overclock the CPU, so you don't have to say anything about that (yes, I know the board is meant for OC :)). I am thinking of overclocking the RAM, since they sold me 1600er heatsinked modules which are running at 1066 now, but I'm not sure yet about that.

    The rest of the system: i7-930, Intel X25-M G2, 6 GB RAM, GTS 250, some no-name Blu-ray ROM, 2 external HDDs over USB 2.0, lots of other USB-connected hardware (12 devices, I think), no SATA 3 drives (will disabling the controller have an impact on performance?), no LAN, only WiFi. Lucid Lynx 64-bit, no dual boot, no virtual installations. The main uses of the system are: managing and playing/showing all the media stored on the external disks, lots of image manipulation, some video editing, a bit of (non-demanding) gaming, and rarely development. Lots of internet surfing too, but this shouldn't have much impact on performance.
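
    As a general rule of thumb (not board-specific advice): AHCI is the usual choice for an SSD-only system, since it enables NCQ and is what TRIM support on Linux expects; RAID buys nothing with a single drive, and IDE emulation exists mainly for old OS compatibility. Once booted into Lucid, a quick sketch for checking what mode actually took effect and whether the X25-M advertises TRIM:

        # Does the drive advertise TRIM? (X25-M G2 does with recent firmware)
        sudo hdparm -I /dev/sda | grep -i trim

        # Which driver bound the SATA controller? "ahci" vs "ata_piix" (IDE mode)
        lspci -k | grep -A 2 -i sata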

    Read the article

  • IE and Google Chrome timeout on an IIS6 hosted SSL page that Firefox handles well.

    - by Thomas
    Ok, here's the scenario: up until a few weeks ago, none of us noticed anything wrong with the corporate website. People were using it without complaint. Then a client complained that a specific page on the site was timing out for him, and only when he committed a POST action on a form filled with data. I checked it out, and it timed out for me, too. But it only timed out in Google Chrome and IE, not in Firefox. Additionally, the same page, on the same server, but served from a different domain name (one not under the protection of SSL, either) does not time out under any browser. To clarify: https://www.mysite.com/changes.php times out on POST, but the same page over http works fine. That distinction (SSL vs. non-SSL) seems to be important, as nothing else has changed. Our certificate is valid, and Firefox detects no errors thrown by the page. I've looked at the request and response headers from the page, and they all follow the correct formats. Then, after wandering through the site, I noticed a few other things. Both IE and Chrome will frequently time out on any page that is PHP-based; they never time out on static images or HTML files. I've looked at the site from a variety of different servers, my home and work workstations, and my netbook. Because of that, I've discounted a viral infection, as I highly doubt a virus is going to hit every one of the machines to which I have access in exactly the same manner. My setup is:

        Server:  Win2k3, IIS6, PHP 5.2.9-1.
        Clients: IE7, IE8, Chrome (regular and dev channel): frequent timeouts on PHP pages.
                 Firefox 2, Firefox 3: no timeouts; Firebug shows no errors or even lengthy periods serving the pages.

    I've spent 2 days searching for any tech knowledge that I can find, and my search parameters are all too general. Everyone has problems loading SSL pages in IE and Chrome, for a wide variety of reasons. The infrequent nature of the timeouts and the fact that there are no errors being reported anywhere is starting to drive me insane. Does anyone have any insight on a problem like this?
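
    One way to take the browsers out of the equation is to replay the failing POST against the SSL endpoint directly and watch where it stalls - handshake, request, or response. A sketch using the URL from the question (the form field names are invented placeholders):

        # Inspect the TLS handshake step by step:
        openssl s_client -connect www.mysite.com:443 -state

        # Script the POST; -v shows exactly which phase hangs:
        curl -v --max-time 60 \
             -d "field1=test&field2=test" \
             https://www.mysite.com/changes.php -o /dev/null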

    Read the article

  • Solution to route/proxy SNMP Traps (or Netflow, generic UDP, etc) for network monitoring?

    - by Christopher Cashell
    I'm implementing a network monitoring solution for a very large network (approximately 5000 network devices). We'd like to have all devices on our network send SNMP traps to a single box (technically this will probably be an HA pair of boxes) and then have that box pass the SNMP traps on to the real processing boxes. This will allow us to have multiple back-end boxes handling traps, and to distribute load among those back-end boxes. One key feature that we need is the ability to forward the traps to a specific box depending on the source address of the trap. Any suggestions for the best way to handle this? Among the things we've considered are:

        - Using snmptrapd to accept the traps, and have it pass them off to a custom-written perl handler script to rewrite the trap and send it to the proper processing box
        - Using some sort of load balancing software running on a Linux box to handle this (we're having some difficulty finding many load balancing programs that will handle UDP)
        - Using a load balancing appliance (F5, etc.)
        - Using IPTables on a Linux box to route the SNMP traps with NATing

    We've currently implemented and are testing the last solution, with a Linux box with IPTables configured to receive the traps and then, depending on the source address of the trap, rewrite it with a destination NAT (DNAT) so the packet gets sent to the proper server. For example:

        # Range: 10.0.0.0/19 Site: abc01 Destination: foo01
        iptables -t nat -A PREROUTING -p udp --dport 162 -s 10.0.0.0/19 -j DNAT --to-destination 10.1.2.3
        # Range: 10.0.33.0/21 Site: abc01 Destination: foo01
        iptables -t nat -A PREROUTING -p udp --dport 162 -s 10.0.33.0/21 -j DNAT --to-destination 10.1.2.3
        # Range: 10.1.0.0/16 Site: xyz01 Destination: bar01
        iptables -t nat -A PREROUTING -p udp --dport 162 -s 10.1.0.0/16 -j DNAT --to-destination 10.3.2.1

    This should work with excellent efficiency for basic trap routing, but it leaves us limited to whatever we can match and filter on with IPTables, so we're concerned about flexibility for the future. Another feature that we'd really like, but isn't quite a "must have", is the ability to duplicate or mirror the UDP packets. Being able to take one incoming trap and route it to multiple destinations would be very useful. Has anyone tried any of the possible solutions above for SNMP trap (or Netflow, general UDP, etc.) load balancing? Or can anyone think of any other alternatives to solve this?
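
    On the mirroring point: netfilter does have a packet-duplication target, TEE (xt_TEE, in mainline since kernel 2.6.35; earlier it lived in patch-o-matic), which clones a matching packet to an extra gateway while the original continues through the DNAT rules above. A sketch along the same lines (10.9.9.9 is a placeholder for a second collector, which must be on a locally reachable subnet):

        # Clone traps from 10.1.0.0/16 to a second collector; the original
        # packet still traverses the nat PREROUTING DNAT rules unchanged.
        iptables -t mangle -A PREROUTING -p udp --dport 162 -s 10.1.0.0/16 \
                 -j TEE --gateway 10.9.9.9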

    Read the article

  • Losing internet connectivity on server after installing LogMeIn Hamachi (with server set as gateway node)

    - by Kim Jong-Un
    Our domain controller (SBS 2003) completely lost internet and network connectivity yesterday after I remotely installed LogMeIn Hamachi on it and set it to be a gateway node, in an attempt to create a VPN link between the server and a remote site. I had to go in to the office to resolve the problem as, unsurprisingly, my own remote access to the server was also lost. I was only able to restore network connectivity by deleting a virtual network adapter Hamachi created when making the server the gateway node (called "Hamachi bridge", I believe), then rebooting the server. This is a repeatable problem: every time I try to get this to work, it just takes the server offline. Why would this bridge affect regular TCP/IP connectivity on the NIC in this way?

    I have tried a "hub-and-spoke" configuration between the server and our PC at a remote site (server set as hub, remote site as spoke). This caused no such problems with general internet connectivity, and file transfer worked well between the two computers. However, there was a DNS issue with the VPN between the two sites, resulting in Active Directory not being able to communicate between them (I could not log on using domain user accounts at the remote site if they were not already cached on that machine). I only tried a "gateway" network because LogMeIn support told me:

        If you can get the Active Directory to work it would only be through a "Gateway" network type with the server acting as the Gateway Node. You would configure the gateway settings on the server in the Hamachi client on that machine to push whatever IPs/DNS settings you prefer, and at that point AD would be able (all things being equal) to communicate to the client node when it attaches. We do not have any Active Directory configuration info as that's outside the scope of our support. I hope this helps.

    It would be fantastic if I could get Active Directory to work over a Hamachi VPN connection without worrying about the server going offline in this way. Does anyone have any ideas how I should proceed, or any theories as to what is going on when I try to use the "gateway" network type? I want to try to narrow down what is going on here.

    Read the article

  • What are the pros and cons of AWS Elastic Beanstalk compared with other deployment strategies?

    - by James van Dyke
    I'm pretty new to the whole Netflix OSS stack and deployments in general. As background for my current level of knowledge ops-wise, my main role is as a front-end application engineer. However, I enjoy the operations side of things, so I'm attempting to set up a new deployment strategy and the tooling for a new project.

    Our goals:

        - Super easy deploys (we want to push a button to update production)
        - Automated deploys to test environments (using Jenkins)
        - Ease of maintenance (we have an app to write, and don't want to spend our time fiddling with production issues)
        - Ability to handle a service-oriented architecture (many small apps, various languages and data stores)
        - Enough flexibility to ensure we won't have to change strategies any time soon (we're already trying to get away from RightScale)

    We're OK with a little more initial setup time if doing so will save us some headaches in the future. So, along these lines, I've been listening to podcasts, watching ops talks, and reading tons of blog posts, and based on our goals and what I've taken to be some forming best practices, we started forming a plan using Asgard, rolling our package into a jar and rolling that into an AMI. We had this all planned out and liked the advantages of that process versus using a Chef server and converging instances on the fly (we felt this was error-prone given our limited timeline and lack of understanding of a Chef server workflow). However, a coworker did a little looking around on his own and felt like Elastic Beanstalk met our needs. I've looked into it and spun up a test environment with a WAR file and an attached RDS database. Things seem to work, and I believe that we can automate deploys to a testing environment using Jenkins via the AWS API. Seems simple enough... perhaps too simple. What I'm wondering is, what's the catch? If Elastic Beanstalk is so simple and effective, why isn't it talked about more? I'm having a hard time finding enough objective opinions and facts about the two different deployment strategies, so I thought I'd ask around:

        - Do you use Elastic Beanstalk? If so, why, and what factors led to that decision? What do you like and dislike?
        - If you don't use Elastic Beanstalk but considered it, what do you use, and why didn't you use Elastic Beanstalk?
        - What are the advantages and disadvantages of an Elastic Beanstalk based deployment strategy for an SOA? That is, will Elastic Beanstalk work well with many small applications that rely on each other to work?

    Read the article

  • Showing Directory Root When Launching Rails App Using Apache2 and Passenger

    - by LightBe Corp
    I have done the following in an attempt to host a Rails 3.2.3 application using Apache 2.2.21 and Passenger 3.0.13:

        1. Installed the Passenger gem and ran: rvmsudo passenger-install-apache2-module
        2. Added the website info to /etc/apache2/extra/httpd-vhosts.conf
        3. Added a line to /etc/hosts (not sure if this was needed; it's not mentioned in the Passenger documentation)
        4. Uncommented the line in /etc/apache2/httpd.conf to Include /etc/apache2/extra/httpd-vhosts.conf
        5. Restarted Apache

    When I try to pull up my website, the following displays:

        Index of /
        Name    Last modified    Size    Description
        Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.8r DAV/2 PHP/5.3.10 with Suhosin-Patch Phusion_Passenger/3.0.13 Server at lightbesandbox2.com Port 443

    Here is my /etc/hosts entry for the website:

        127.0.0.1 www.lightbesandbox2.com

    Here is my /etc/apache2/extra/httpd-vhosts.conf entry for the website:

        NameVirtualHost *:80
        <VirtualHost *:80>
          ServerName www.lightbesandbox2.com
          ServerAlias lightbesandbox2.com
          PassengerAppRoot /Users/server1/Sites/iktusnetlive_RoR/
          DocumentRoot /Users/server1/Sites/iktusnetlive_RoR/public
          <Directory /Users/server1/Sites/iktusnetlive_RoR/public>
            AllowOverride all
            Options -MultiViews
          </Directory>
        </VirtualHost>

    When I run rvmsudo passenger-status, I get the following output:

        ----------- General information -----------
        max      = 6
        count    = 1
        active   = 0
        inactive = 1
        Waiting on global queue: 0

        ----------- Application groups -----------
        /Users/server1/Sites/iktusnetlive_RoR/:
          App root: /Users/server1/Sites/iktusnetlive_RoR/
          * PID: 8140   Sessions: 0   Processed: 2   Uptime: 20m 51s

    None of my assets are in the public folder in my Rails app. I have written an application using the template presented in Michael Hartl's Ruby on Rails Tutorial; the home page is in /app/views/static_pages/home.html.erb. I copied an index.html file into the public folder to see if it would display, and it displayed as I had hoped. Is there a way to get Passenger to find my assets without me having to rewrite my application? Any help would be appreciated.

    Update 6/23/2012 10:00 am CDT GMT-6: I corrected the problems with my file and have successfully executed the rake assets:precompile command. I still get the index page as before. I have made no other changes. I ran passenger-status again and the application is still loaded. Restarting Apache did nothing.
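
    One thing the directory listing above hints at: the footer says "Port 443", while the vhost only defines *:80 - so the request may not be matching the Passenger vhost at all, and Apache falls back to the default DocumentRoot. A couple of checks worth running (a sketch, not a guaranteed fix):

        # How does Apache map address:port combinations to vhosts? Anything
        # not listed here falls through to the default server.
        sudo apachectl -S

        # Ask on port 80 with the exact ServerName; if this returns the app,
        # the problem is the https (443) side lacking a matching vhost.
        curl -H "Host: www.lightbesandbox2.com" http://127.0.0.1/ | head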

    Read the article

  • How Do I Properly Run OfflineIMAP in a Crontab

    - by alharaka
    Installed Fedora:

        # cat /etc/redhat_release
        Fedora release 14 (Laughlin)

    Installed offlineimap from yum, cuz I'm lazy these days:

        # yum info offlineimap
        Loaded plugins: langpacks, presto, refresh-packagekit
        Adding en_US to language list
        Installed Packages
        Name        : offlineimap
        Arch        : noarch
        Version     : 6.2.0
        Release     : 2.fc14
        Size        : 611 k
        Repo        : installed
        From repo   : fedora
        Summary     : Powerful IMAP/Maildir synchronization and reader support
        URL         : http://software.complete.org/offlineimap/
        License     : GPLv2+
        Description : OfflineIMAP is a tool to simplify your e-mail reading. With
                    : OfflineIMAP, you can read the same mailbox from multiple
                    : computers. You get a current copy of your messages on each
                    : computer, and changes you make one place will be visible on all
                    : other systems. For instance, you can delete a message on your home
                    : computer, and it will appear deleted on your work computer as
                    : well. OfflineIMAP is also useful if you want to use a mail reader
                    : that does not have IMAP support, has poor IMAP support, or does
                    : not provide disconnected operation.

    And, lo and behold, every time I run offlineimap and try to redirect output in a crontab, it does not work. Below is my .offlineimaprc:

        [general]
        ui = TTY.TTYUI
        accounts = Personal, Work
        maxsyncaccounts = 3

        [Account Personal]
        localrepository = Local.Personal
        remoterepository = Remote.Personal

        [Account Work]
        localrepository = Local.Work
        remoterepository = Remote.Work

        [Repository Local.Personal]
        type = Maildir
        localfolders = ~/mail/gmail

        [Repository Local.Work]
        type = Maildir
        localfolders = ~/mail/companymail

        [Repository Remote.Personal]
        type = IMAP
        remotehost = imap.gmail.com
        remoteuser = [email protected]
        remotepass = password
        ssl = yes
        maxconnections = 4
        # Otherwise "deleting" a message will just remove any labels and
        # retain the message in the All Mail folder.
        realdelete = no

        [Repository Remote.Work]
        type = IMAP
        remotehost = server.company.tld
        remoteuser = username
        remotepass = password
        ssl = yes
        maxconnections = 4

    I have tried TTY.TTYUI, NonInteractive.Quiet and NonInteractive.Basic with different variations. With or without redirection, the crontab entries I try cause problems:

        $ crontab -l
        */5 * * * * offlineimap >> ~/mail/logs/offlineimap.log 2>&1
        */5 * * * * offlineimap

    I always get the same damn error: ERROR: No UIs were found usable!. What am I doing wrong!?
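
    For what it's worth, that error typically means the TTY/curses UI can't start without a terminal, which is exactly the cron situation. One commonly suggested fix is to force a non-interactive UI on the command line just for the cron run - a sketch, with the UI name spelled as OfflineIMAP 6.x usually expects it (check `offlineimap --help` for the exact capitalization your build lists):

        */5 * * * * offlineimap -u Noninteractive.Basic >> $HOME/mail/logs/offlineimap.log 2>&1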

    Read the article

  • Windows Setup could not configure Windows to run on this computer's hardware

    - by Hello71
    The whole installation goes smoothly up to the point of "Completing installation ...". The monitor changes resolution, after which a standard dialog box pops up saying:

        Windows Setup could not configure Windows to run on this computer's hardware

    Then, in a few seconds, the whole machine powers down. Trying to restart produces the message:

        STOP: c000021a {Fatal System Error} 0x00000000 (0xc0000001 0x00100448)

    OR it boots into Setup and comes up with the message "Windows Setup encountered an unexpected error..." (this is not the actual error, just paraphrasing). I tried using the OEM restore instead of a regular install, but it fails with the same error (even though it worked before...). General specs:

        HP Pavilion Elite e9262f
        Intel Core i5-750 Processor
        ATI Radeon HD 4650
        Hitachi HDT721010SLA360 ATA Device
        6GB DDR3 RAM
        SuperMulti DVD Burner with LightScribe
        Some built-in Wi-Fi module
        http://h10025.www1.hp.com/ewfrf/wc/document?docname=c01916917

    I've tried disconnecting the wireless card and disabling the built-in Ethernet and FireWire via the BIOS, and replacing the wireless keyboard and mouse with wired USB ones. Didn't work. I've also tried changing the SATA controller settings in the BIOS to RAID, AHCI, and IDE, reinstalling each time I changed. Still not working. I think the reason why it is showing the Fatal System Error is that it didn't finish installing before it errored out and shut down, so the system is left in an inconsistent state. I've tried 3 different copies (including the OEM restore) of Windows 7 now, and they all fail at the same point, with the same error message. I've tried to install Windows 7 maybe 10 times already, with the exact same error message at the exact same location. Hm... Interestingly, the 32-bit version of Windows 7 works, but the 64-bit version doesn't. Perhaps it was a badly burned disk? Reburning the 64-bit version still comes up with the same error. (Picture omitted: the side of the case clearly says it came with Windows 7 64-bit, along with the model number and CPU.) Output of sudo fdisk -l:

        Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0009896f

        Device Boot      Start       End      Blocks  Id  System
        /dev/sda1   *        1        13      104391   7  HPFS/NTFS
        /dev/sda2           14     94119   755906445   7  HPFS/NTFS
        /dev/sda3       119922    121602    13492224   7  HPFS/NTFS
        /dev/sda4        94120    119922   207257740+  5  Extended
        /dev/sda5       119527    119922     3170769  82  Linux swap / Solaris
        /dev/sda6       107174    119526    99225441  83  Linux
        /dev/sda7        94120    107173   104856192   7  HPFS/NTFS

        Partition table entries are not in disk order

    Read the article

  • Custom built machine has much higher power consumption than expected

    - by foraidt
    I built a machine according to the specs of a computer magazine (c't, Germany). According to the magazine, the power consumption should be at around 10 W. I don't want to go into the specifics of the hardware, but rather ask for general advice on where to look. I updated the BIOS/UEFI to the latest version, installed all the recommended drivers and unplugged all hardware that's not necessary to boot into Windows. All that was left was the power supply, mainboard, CPU, CPU cooler and one SSD drive. But still I measured a power consumption of 50 W, which is 40 W more than it should be. I tried booting Linux Mint from a USB stick, so I don't think it's some Windows-related problem. Where else could I look?

    Update 1: I didn't want the question to get closed for being too localized, but if more details are necessary, here they are. The system is a desktop PC. The power consumption is measured using a Brennenstuhl PM 231 device, which was also tested by c't, and they found it quite accurate. The PSU is an Enermax ETL300AWT, the mainboard an Intel DH87RL (socket 1150) and the CPU an Intel G3220 (Haswell).

    Update 2: There is no online version of the article (you can pay for downloadable PDFs, however). The most details I found can be read on its project page (in German, though...). An English translation of that project page is available.

    Update 3: Regarding the sceptics: it may sound ridiculous, but apparently 10 W idle consumption is possible with Intel's Haswell architecture. As a kind of proof, there's an additional blog article explicitly listing the steps needed to reduce the idle consumption to 10 W. Additional hardware: I measured the consumption without the HDD, and as expected the usage dropped by around 10 W. I have no chassis fans, and the CPU fan is a "Scythe Mugen 4" model. It runs at around 600 rpm, so I think it won't draw much. When stripping off all my extra components I should be at 10 W, but I'm not getting anywhere near that. I would be happy to see "just" 15 W in the stripped-down version, but currently I'm not getting below 50 W no matter which component I remove. As I see it, this cannot be explained by the PSU being less efficient at lower consumption. I also waited half an hour or so (and checked that no Windows updates were running in the background), and the consumption didn't drop by more than a few watts.

    Read the article

  • Sent items listed by sender's name (my name!) not recipient in Outlook for Mac 2011

    - by user568458
    Outlook 2011 on a Mac (OS X 10.7 Lion), on a (mostly PC) office network using Microsoft Exchange Server. In Outlook 2011, in the network "Sent Items" folder, all emails are listed showing the sender's name (my name), not the recipients' names. That means every single email is headed with my own name, like this:

        My Name  08/09/2012  Some subject line          [flag]
        My Name  08/09/2012  Re: Another subject line   [flag]
        My Name  07/09/2012  Re: different subject line [flag]
        My Name  07/09/2012  different subject line     [flag]

    ...and so on. There's no clue at all as to who these emails were to, until I open each and every one. I guess it's reassuring to be told that every email I sent was sent by me... and if I were ever to forget my own name while browsing my sent emails folder, it'd be super useful... But it's not very helpful for navigating sent emails. Is this normal for Office for Mac 2011? How can I fix this, so that the list shows the recipients' names instead of endlessly reminding me of my own name? Things I've found while researching this:

        - This can also happen in Outlook for Windows. On Windows, it's easily fixed by resetting the field. That method doesn't work on the Mac, because the fields can't be selected separately.
        - It seems this can also happen in Apple Mail, and a lot of people seem stumped by this. I can't find anything on this specific to Outlook 2011 or Outlook for Mac in general.

    A simple guide on how to fix this would be the best answer, but I'd also welcome any knowledgeable thoughts from Microsoft Exchange Server people on whether this sounds like an Outlook settings issue which I can fix on my machine, or some issue relating to how the Mac gets data from the server and network. The fact that Apple Mail users have encountered the same issue with no apparent fix makes me think the problem might be in the network rather than the mail client - but that's way beyond my limited knowledge of these things. I don't know whether the local ("On my computer") Sent Items folder has the same problem, as it's configured so that no emails except drafts are ever stored in these local folders. Drafts saved locally are listed by recipient as expected.

    Read the article

  • Ubuntu: Multiple NICs, one used only for Wake-On-LAN

    - by jcwx86
    This is similar to some other questions, but I have a specific need which is not covered in them. I have an Ubuntu server (11.10) with two NICs. One is built into the motherboard and the other is a PCI Express card. I want to have my server connected to the internet via my NAT router and also have it able to wake from suspend using a Magic Packet (henceforth referred to as Wake-on-LAN, WOL). I can't do this with just one of the NICs, because each has an issue: the built-in NIC will crash the system if it is placed under heavy load (typically downloading data), whilst the PCI Express NIC will crash the system if it is used for WOL. I have spent some time investigating these individual problems, to no avail. My plan is thus: use the built-in NIC solely for WOL, and use the PCI Express card for all other network communication except WOL. Since I send the WOL Magic Packet to a specific MAC address, there is no danger of hitting the wrong NIC, but there is a danger of using the built-in NIC for general network access, overloading it and crashing the system. Both NICs are wired to the same LAN with address space 192.168.0.0/24. The built-in ethernet card is set to have interface name eth1 and the PCI Express card is eth0 in Ubuntu's udev persistent rules (so they stay the same upon reboot). I have been trying to set this up with the /etc/network/interfaces file. Here is where I am currently:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 192.168.0.3
            netmask 255.255.255.0
            network 192.168.0.0
            broadcast 192.168.0.255
            gateway 192.168.0.1

        auto eth1
        iface eth1 inet static
            address 192.168.0.254
            netmask 255.255.255.0

    I think that by not specifying a gateway for eth1, I prevent it being used for outgoing requests. I don't mind if it can be reached on 192.168.0.254 on the LAN, i.e. via SSH - its IP is irrelevant to WOL, which is based on MAC addresses - I just don't want it to be used to access internet resources. My kernel routing table (from route -n) is:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        0.0.0.0         192.168.0.1     0.0.0.0         UG    100    0        0 eth0
        169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 eth0
        192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
        192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1

    My question is this: is this sufficient for what I want to achieve? My research has thrown up the idea of using static routing to specify that eth1 should only be used for WOL on the local network, but I'm not sure this is necessary. I have been monitoring the activity of the interfaces using iptraf, and it seems like eth0 takes the vast majority of the packets, though I am not sure that this will be consistent given my configuration. Given that if I mess up the configuration my system will likely crash, it is important to me to have this set up correctly!
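
    Two things one might add to this, sketched below: making sure WOL actually stays armed on the built-in NIC, and (optionally) giving eth1's duplicate LAN route a worse metric so the kernel never picks it for ordinary locally-originated traffic. Both use standard tools; the metric trick is an extra safeguard, not a requirement for WOL itself:

        # Check and arm Wake-on-LAN ("g" = MagicPacket) on the built-in NIC:
        sudo ethtool eth1 | grep Wake-on
        sudo ethtool -s eth1 wol g

        # Optional: demote eth1's 192.168.0.0/24 route below eth0's so LAN
        # traffic from this host always leaves via eth0:
        sudo ip route del 192.168.0.0/24 dev eth1
        sudo ip route add 192.168.0.0/24 dev eth1 src 192.168.0.254 metric 100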

    Read the article

  • CUPS Web Admin Error 500 Unknown

    - by Floyd Resler
    I keep getting a 500 Unknown error whenever I navigate off the home page of my CUPS web admin. I'm sure I have something misconfigured, but I'm not sure what. Here's my configuration:

        # "$Id: cupsd.conf.in 8805 2009-08-31 16:34:06Z mike $"
        #
        # Sample configuration file for the CUPS scheduler. See "man cupsd.conf" for a
        # complete description of this file.

        # Log general information in error_log - change "warn" to "debug"
        # for troubleshooting...
        LogLevel warn

        # Administrator user group...
        SystemGroup lpadmin sys root

        # Only listen for connections from the local machine.
        Listen 192.168.6.101:631
        Listen /var/run/cups/cups.sock
        ServerName 192.168.6.101

        # Show shared printers on the local network.
        Browsing On
        BrowseOrder allow,deny
        BrowseAllow all
        BrowseLocalProtocols CUPS
        BrowseAddress 192.168.6.255

        # Default authentication type, when authentication is required...
        DefaultAuthType Basic

        # Restrict access to the server...
        <Location />
          Order allow,deny
          Allow From All
          Allow From 127.0.0.1
        </Location>

        # Restrict access to the admin pages...
        <Location /admin>
          AuthType Default
          Require user @SYSTEM
          Order allow,deny
          Allow From All
          Allow From 127.0.0.1
        </Location>

        # Restrict access to configuration files...
        <Location /admin/conf>
          AuthType Default
          Require user @SYSTEM
          Order allow,deny
          Allow From All
          Allow From 127.0.0.1
        </Location>

        # Set the default printer/job policies...
        <Policy default>
          # Job-related operations must be done by the owner or an administrator...
          <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job CUPS-Move-Job>
            Require user @OWNER @SYSTEM
            Order deny,allow
          </Limit>

          # All administration operations require an administrator to authenticate...
          <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default CUPS-Get-Devices>
            AuthType Default
            Require user @SYSTEM
            Order deny,allow
          </Limit>

          # All printer operations require a printer operator to authenticate...
          <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After CUPS-Accept-Jobs CUPS-Reject-Jobs>
            AuthType Default
            Require user @SYSTEM
            Order deny,allow
          </Limit>

          # Only the owner or an administrator can cancel or authenticate a job...
          <Limit Cancel-Job CUPS-Authenticate-Job>
            Require user @OWNER @SYSTEM
            Order deny,allow
          </Limit>

          <Limit All>
            Order deny,allow
          </Limit>
        </Policy>

        # Set the authenticated printer/job policies...
        <Policy authenticated>
          # Job-related operations must be done by the owner or an administrator...
          <Limit Create-Job Print-Job Print-URI>
            AuthType Default
            Order deny,allow
          </Limit>

          <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job CUPS-Move-Job>
            AuthType Default
            Require user @OWNER @SYSTEM
            Order deny,allow
          </Limit>

          # All administration operations require an administrator to authenticate...
          <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default>
            AuthType Default
            Require user @SYSTEM
            Order deny,allow
          </Limit>

          # All printer operations require a printer operator to authenticate...
          <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After CUPS-Accept-Jobs CUPS-Reject-Jobs>
            AuthType Default
            Require user @SYSTEM
            Order deny,allow
          </Limit>

          # Only the owner or an administrator can cancel or authenticate a job...
          <Limit Cancel-Job CUPS-Authenticate-Job>
            AuthType Default
            Require user @OWNER @SYSTEM
            Order deny,allow
          </Limit>

          <Limit All>
            Order deny,allow
          </Limit>
        </Policy>

        # End of "$Id: cupsd.conf.in 8805 2009-08-31 16:34:06Z mike $".
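
    Not part of the original post, but a quick sketch of how one might get CUPS to say what is actually failing, assuming Debian/Ubuntu-style paths for the config, init script, and log:

        # Syntax-check the configuration file first
        sudo cupsd -t

        # Raise the log level, restart, reproduce the 500, and watch the log
        sudo sed -i 's/^LogLevel warn/LogLevel debug/' /etc/cups/cupsd.conf
        sudo /etc/init.d/cups restart
        tail -f /var/log/cups/error_log

    With LogLevel debug, each admin-page request logs the Location and policy checks it goes through, which usually narrows a "500 Unknown" down to the offending directive.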

    Read the article

  • IE and Google Chrome timeout on an IIS6 hosted SSL page that Firefox handles well

    - by Thomas
    Ok, here's the scenario: up until a few weeks ago, none of us noticed anything wrong with the corporate website. People were using it without complaint. Then a client complained that a specific page on the site was timing out for him, and only when he committed a POST action on a form filled with data. I checked it out, and it timed out for me too - but only in Google Chrome and IE, not in Firefox. Additionally, the same page, on the same server, but served from a different domain name (one not under the protection of SSL, either) does not time out under any browser. To clarify: https://www.mysite.com/changes.php times out on POST, but the same page over http works fine.

    That distinction (SSL vs. non-SSL) seems to be important, as nothing else has changed. Our certificate is valid, and Firefox detects no errors thrown by the page. I've looked at the Request and Response headers from the page, and they all follow the correct formats.

    Then, after wandering through the site, I noticed a few other things. Both IE and Chrome will frequently time out on any page that is PHP-based; they never time out on static images or HTML files. I've looked at the site from a variety of different servers, from my home and work workstations, and from my netbook. Because of that, I've discounted a viral infection, as I highly doubt a virus would hit every one of the machines I have access to in exactly the same manner.

    My setup is: Server: Win2k3, IIS6, PHP 5.2.9-1. Clients: IE7, IE8, Chrome (regular and dev channel): frequent timeouts on PHP pages. Firefox 2, Firefox 3: no timeouts, and Firebug shows no errors or even lengthy periods serving the pages.

    I've spent two days searching for any tech knowledge I can find, and my search parameters are all too general - everyone has problems loading SSL pages in IE and Chrome, for a wide variety of reasons. The infrequent nature of the timeouts and the fact that no errors are being reported anywhere is starting to drive me insane. Does anyone have any insight on a problem like this?
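
    As an aside (not from the original post): one way to take the browsers out of the equation is to time the same POST from the command line on any machine with curl; the form field name here is a placeholder:

        # Time the POST over SSL and over plain HTTP
        curl -k -s -o /dev/null -w "https POST: %{time_total}s\n" \
             -d "field1=value1" https://www.mysite.com/changes.php
        curl -s -o /dev/null -w "http POST:  %{time_total}s\n" \
             -d "field1=value1" http://www.mysite.com/changes.php

    If curl is consistently fast on both, the stall is on the client side (for instance, in how IE and Chrome reuse or renegotiate the SSL connection) rather than in PHP or the page itself.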

    Read the article

  • Command-line connect to wired network for Ubuntu

    - by Tim
    I'd like to know how to use the command line to connect to a wired network in general, on Ubuntu 8.10. In my case, I've connected a cable to my laptop but it doesn't work with WICD, so I'd like to try the command-line method. Here is the ifconfig output for my network adapters:

        $ ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:c0:9f:8d:23:74
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Interrupt:19 Base address:0x1800

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:4457 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:4457 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:493002 (493.0 KB)  TX bytes:493002 (493.0 KB)

        wlan0     Link encap:Ethernet  HWaddr 00:0e:9b:ab:56:19
                  UP BROADCAST NOTRAILERS PROMISC ALLMULTI  MTU:576  Metric:1
                  RX packets:1508929 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:768144 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:806027375 (806.0 MB)  TX bytes:78834873 (78.8 MB)

        wlan0:avahi Link encap:Ethernet  HWaddr 00:0e:9b:ab:56:19
                  inet addr:169.254.5.92  Bcast:169.254.255.255  Mask:255.255.0.0
                  UP BROADCAST NOTRAILERS PROMISC ALLMULTI  MTU:576  Metric:1

        wmaster0  Link encap:UNSPEC  HWaddr 00-0E-9B-AB-56-19-00-00-00-00-00-00-00-00-00-00
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    Thanks and regards!

    UPDATE: Tried what oyvindio suggested. Here is the failing output:

        $ sudo dhclient3 eth0
        There is already a pid file /var/run/dhclient.pid with pid 18279
        killed old client process, removed PID file
        Internet Systems Consortium DHCP Client V3.1.1
        Copyright 2004-2008 Internet Systems Consortium.
        All rights reserved.
        For info, please visit http://www.isc.org/sw/dhcp/

        mon0: unknown hardware address type 803
        wmaster0: unknown hardware address type 801
        mon0: unknown hardware address type 803
        wmaster0: unknown hardware address type 801
        Listening on LPF/eth0/00:c0:9f:8d:23:74
        Sending on   LPF/eth0/00:c0:9f:8d:23:74
        Sending on   Socket/fallback
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 5
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 10
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 12
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 15
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 11
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 8
        No DHCPOFFERS received.
        No working leases in persistent database - sleeping.
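
    A rough command-line sequence for this situation (not from the original post; the 192.168.1.x addresses are examples). Note that eth0 above shows UP but not RUNNING, which usually means no link/carrier is detected on the port:

        # Check whether the NIC sees a link on the cable at all
        sudo mii-tool eth0        # or, if ethtool is installed:
        sudo ethtool eth0 | grep "Link detected"

        # Bring the interface up and try DHCP
        sudo ifconfig eth0 up
        sudo dhclient3 eth0

        # Or configure it statically if there is no DHCP server
        sudo ifconfig eth0 192.168.1.10 netmask 255.255.255.0
        sudo route add default gw 192.168.1.1 eth0
        # Overwrites resolv.conf with a single nameserver entry
        echo "nameserver 192.168.1.1" | sudo tee /etc/resolv.conf

    Given the missing RUNNING flag and the unanswered DHCPDISCOVERs in the update, a bad cable, switch port, or driver is at least as likely as a configuration problem.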

    Read the article

  • Changing Physical Path gives blank homepage

    - by Julie
    I have two ASP Classic websites: www.company.com and www.companytesting.com. At this time of year, company.com points to a folder called website2012 and companytesting.com points to a folder called website2013. The contents of those two folders are almost identical, with just minor changes for our season change (which I was supposed to do today - lol).

    Up until a couple of weeks ago, I was running Windows Server 2003. To update the "live" website, I'd make a copy of the test site folder, rename it website2013R1, point the test site there, then point the live site at website2013. We now have Windows Server 2008 R2 x64 (I had someone migrate the websites to the new server for me).

    The companytesting.com site, when I pointed it at website2013R1, worked fine. The company.com site, when I pointed it at website2013 (which worked just before, for the companytesting.com site), gives an empty page (i.e. view source = nothing there). There is nothing in the failed request log when this happens. I can use the Explore button/link (upper right) in IIS 7.5 and see all of the files there. If I use the Browse button (either on the site in general or on the index.asp page), I get the blank page again.

    One weirdness about how these are set up is that companytesting.com uses a login (which I think is Windows authentication - it's simply a single username and password for staff, and to keep the GoogleBots out of it), while company.com obviously does not. Pointing the test site at website2013R1 kept the login in place, so I'm not absolutely clear whether the login is attached to the folder or to the site. Hitting the company.com site after changing its path did not produce a password request.

    The permissions on the folders all seem to be the same, but obviously I'm missing something. Why isn't changing the physical path working? As is probably obvious, I'm not knowledgeable about servers. I did OK on 2003, but since it's not my main task and I'm buried right now, I have barely looked at 2008 - so I may have really stupid questions when you ask me to check something.
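
    As a side note (not from the original question): a blank classic ASP page with an empty view-source on IIS 7.5 is often a script error with detailed output switched off. A sketch of how to surface the real error with appcmd, run from an elevated command prompt and assuming the site is named "company.com" in IIS Manager:

        cd %windir%\system32\inetsrv

        rem Send classic ASP script errors to the browser instead of hiding them
        appcmd set config "company.com" -section:system.webServer/asp /scriptErrorSentToBrowser:"True"

        rem Replace blank/generic error pages with detailed ones
        appcmd set config "company.com" -section:system.webServer/httpErrors /errorMode:"Detailed"

    If the page then reports an ASP error (a missing include path, a COM component absent on the new server, and so on), that would explain the empty response the browser was showing.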

    Read the article
