Search Results

Search found 2891 results on 116 pages for 'amazon ecs'.


  • Apache taking up too much CPU

    - by andrewtweber
    I'm trying to manage a server on Amazon for a network of sites that receives about 100 million pageviews per month. Unfortunately, nobody out of my team of 5 developers has much server admin experience. Right now we have the MaxClients set to 1400. Currently our traffic is about average, and we have 1150 total Apache processes running, which use about 2% CPU each! Out of those 1150, 800 of them are currently sleeping, but still taking up CPU. I'm sure there are ways to optimize this. I have a few thoughts: It appears Apache is creating a new process for every single connection. Is this normal? Is there a way to more quickly kill the sleeping processes? Should we turn KeepAlive on? Each page loads about 15-20 medium-sized graphics and a lot of javascript/css. So, here's our Apache setup. We do plan on contracting a server admin asap, but I would really appreciate some advice until we can find someone. Timeout 25 KeepAlive Off MaxKeepAliveRequests 200 KeepAliveTimeout 5 <IfModule prefork.c> StartServers 100 MinSpareServers 20 MaxSpareServers 50 ServerLimit 1400 MaxClients 1400 MaxRequestsPerChild 5000 </IfModule> <IfModule worker.c> StartServers 4 MaxClients 400 MinSpareThreads 25 MaxSpareThreads 75 ThreadsPerChild 25 MaxRequestsPerChild 0 </IfModule> Full top output: top - 23:44:36 up 1 day, 6:43, 4 users, load average: 379.14, 379.17, 377.22 Tasks: 1153 total, 379 running, 774 sleeping, 0 stopped, 0 zombie Cpu(s): 71.9%us, 26.2%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 1.9%si, 0.0%st Mem: 70343000k total, 23768448k used, 46574552k free, 527376k buffers Swap: 0k total, 0k used, 0k free, 10054596k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1756 mysql 20 0 10.2g 1.8g 5256 S 19.8 2.7 904:41.13 mysqld 21515 apache 20 0 396m 18m 4512 R 2.1 0.0 0:34.42 httpd 21524 apache 20 0 396m 18m 4032 R 2.1 0.0 0:32.63 httpd 21544 apache 20 0 394m 16m 4084 R 2.1 0.0 0:36.38 httpd 21643 apache 20 0 396m 18m 4360 R 2.1 0.0 0:34.20 httpd 21817 apache 20 0 396m 17m 4064 R 2.1 0.0 0:38.22 httpd 22134 apache 20 0 395m 17m 4584 R 2.1 0.0 0:35.62 httpd 22211 apache 20 0 397m 18m 4104 R 2.1 0.0 0:29.91 httpd 22267 apache 20 0 396m 18m 4636 R 2.1 0.0 0:35.29 httpd 22334 apache 20 0 397m 18m 4096 R 2.1 0.0 0:34.86 httpd 22549 apache 20 0 395m 17m 4056 R 2.1 0.0 0:31.01 httpd 22612 apache 20 0 397m 19m 4152 R 2.1 0.0 0:34.34 httpd 22721 apache 20 0 396m 18m 4060 R 2.1 0.0 0:32.76 httpd 22932 apache 20 0 396m 17m 4020 R 2.1 0.0 0:37.34 httpd 22933 apache 20 0 396m 18m 4060 R 2.1 0.0 0:34.77 httpd 22949 apache 20 0 396m 18m 4060 R 2.1 0.0 0:34.61 httpd 22956 apache 20 0 402m 24m 4072 R 2.1 0.0 0:41.45 httpd
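    As a stopgap until a server admin is on board, one hedged starting point is to shrink the prefork pool and turn KeepAlive back on with a short timeout, since with 15-20 images per page each visitor otherwise opens a fresh connection (and occupies a separate httpd process) for every asset. The numbers below are assumptions to adjust while watching load and memory, not a definitive configuration:

      Timeout 25
      KeepAlive On
      MaxKeepAliveRequests 100
      KeepAliveTimeout 2
      <IfModule prefork.c>
          StartServers          50
          MinSpareServers       25
          MaxSpareServers       75
          ServerLimit          800
          MaxClients           800
          MaxRequestsPerChild 4000
      </IfModule>

    Lowering MaxClients below the point where the box runs out of CPU trades some queued connections for a server that stays responsive; the right ceiling depends on what each request actually costs, so raise or lower it in small steps.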

    Read the article

  • Encrypted off-site data storage

    - by Dan
    My business has a rather unique problem. We work in China and we want to implement a file server paradigm which does not store any files locally, but rather in a server overseas. Applications would be saved onto our local machines, but data would be loaded directly into memory from the cloud, e.g. I load a docx into word at the beginning of the day, saving periodically to the cloud as I work on it, and turn off my computer at night, with nothing saved locally. Considering recent events, we worry about being raided by the Chinese authorities, and although all our data is encrypted, it would not be hard for the authorities to force us to give up the keys. So the goal is not to have anything compromising physically in China. We have about 20 computers, and we need an authenticated, encrypted connection with this overseas file server. A system with Active-Directory-like permissions would be best, so that only management can read or write to certain files, or workers can only access files that relate to their projects, and to which all access can be cut off should the need arise. The file server itself would also need to be encrypted. And for convenience, it would be nice if this system was integrated with each computer's file explorer (like skydrive or dropbox does, but, again, without saving a copy locally), rather than through a browser. I can't find any solution online. Does anyone know of a service that does this? Otherwise I'll have to do it myself (which kinda sounds fun, but I don't really have the time), and I'm not sure where to start. Amazon maybe. But the protocols that offices would use on their intranet typically aren't encrypted; we need all traffic securely tunneled out of the country. Each computer already has a VPN to a server in California, but I'm unsure whether it would be efficient to pipe file transfers through it. Let me know if anyone has any ideas. And this is my first post; feel free say whether this question is inappropriate/needs to be posted elsewhere.

    Read the article

  • How can I prevent an unintentional DDOS running ColdFusion 8 with IIS 6?

    - by Eric Belair
    We had an interesting outage today on one of our client's websites. Out of nowhere, the website was inaccessible. The website runs by itself on a dedicated physical Windows 2000 server (probably overkill, I know, but that's a discussion for a different day). After restarting IIS and the ColdFusion Application Service, the problem came back several times. My initial thought was that it was a DNS issue, which happens occasionally - the last time it happened was after Hurricane Sandy when our ISP was out, and we had to make some network config changes. But it was not a DNS issue. My second thought was that it was a DDOS attack, but there's very little reason anyone would want to take this site down. When we called our ISP, the operator on the other end noted that traffic was spiking significantly. As it turned out, the client had unintentionally caused a DDOS on the website after they FTPed a very large video file and then mass-emailed a link to it. Hundreds of people clicked the link and brought the site to its knees. I am primarily a website programmer, but I often have to contribute to server administration. Sadly, I'm the resident ColdFusion and IIS expert, but I don't have a lot of experience with this issue. What are some basic steps that I can take to prevent this from happening in the future, since we cannot always control what files the client posts to the website? Here are some ideas I had, but I'm unsure of the impact: Limit the number of connections in IIS. Put media files on a separate server (like an Amazon site, etc.). File requests of this type currently go through a server-side script (e.g. /www.site.com/viewFile.cfm?fileId=1424545, where the fileId references a file off the webroot) that logs requests and pushes the file to the browser using CFCONTENT. I could edit this script to reject requests when they exceed a certain amount in a given time-frame (e.g. a 5 MB file can be accessed globally 10 times in an hour). This may cause some users frustration, but if hundreds of users are attempting to view the file, the site is going to crash anyway, as it did today, which is way more frustrating, since there is no "pretty" message explaining why they can't get to the file. I'm open to any suggestions, as I'm continuing my research to report to the CTO with the best options, so that we can put a solution into effect. Thank you.
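    For the first idea (capping connections in IIS), IIS 6 exposes this through the metabase, so a rough sketch with the stock adsutil.vbs admin script would look like the following. The site identifier 1 and the limit of 500 are placeholders, not recommendations; check the real site ID in IIS Manager and pick a ceiling the server has been shown to handle:

      cd %SystemDrive%\Inetpub\AdminScripts
      cscript adsutil.vbs SET W3SVC/1/MaxConnections 500
      REM some metabase properties only take effect after an iisreset

    The ColdFusion-level throttle you describe (counting hits per file in the APPLICATION scope and rejecting over-limit requests) is complementary: the IIS cap protects the box, while the script-level cap is what lets you return the "pretty" explanation instead of an outright failure.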

    Read the article

  • Web Site Serving, Cloud-Computing, oh, my

    - by Frank
    I'm planning a software-based service. To give it a bit of context (type of traffic), assume it is similar to Facebook in nature (with a little GitHub thrown in). I've been trying to understand my different hosting options. I've been using a shared host with GoDaddy for years just fine. I currently host a WordPress web site there and I've not had any problems. Quite frankly, they've taken good care of me. However, a shared hosting environment is limited by nature. For example, I can't do anything but host a web site there; I cannot even run a Mercurial server. The last time I attempted to build a web application with the intention of eventually launching it via GoDaddy, I ran into all sorts of trouble because it was shared-hosted. Assembly issues, etc. At the time, the cost and time sank my project. (The lack of direct access was also frustrating.) (To be fair to GoDaddy, this was over 3 years ago.) I've been looking at Rackspace or Amazon as a possible cloud solution, but it seems to be just processing power and bandwidth (and an OS). From what I understand, I'd need to get Apache and MySQL working on my own. The way cloud hosting is priced, however, seems appealing. I figure my final option might be to use a virtual private host. I think this would be more flexible than a shared-host site but less scalable than a cloud-based server. So, I guess my question is: what is an appropriate solution for someone who intends to build a web application service? I figure that I need to establish a hosting environment now rather than later so I can plan to use the environment effectively. I'd prefer to be fairly economical to start out with. I really can't afford to pay $999 (or even $99) while I build up the site and get the core functionality online, but at the same time, I'd like to have the selected environment grow as needed. Thank you.

    Read the article

  • How to move a ruby on rails application to a new server

    - by ManiacZX
    I have a rails app on an old Ubuntu server I need to move onto a new machine. I haven't worked with ruby on rails so I don't really know anything about the structure of the app. I want to load this onto an Ubuntu 8.04 AMI on Amazon EC2 and am looking for any information regarding the migration process such as: Do I copy over the entire folder defined as the application root in the mongrel config (for ex: /u/apps/myapp/current) or just certain folders? Am I looking for trouble if I go with the latest versions of ruby and the various gems? Any general gotchas to look out for in the process. Current server information: root@webnode001:/# cat /proc/version Linux version 2.6.15-27-server (buildd@terranova) (gcc version 4.0.3 (Ubuntu 4.0.3-1ubuntu5)) #1 SMP Fri Dec 8 18:43:54 UTC 2006 root@webnode001:/# rails -v Rails 1.2.3 root@webnode001:/# mongrel_rails cluster::configure --version Version 1.0.1 root@webnode001:/# gem -v 0.9.0 root@webnode001:/# gem list -l *** LOCAL GEMS *** actionmailer (1.3.3, 1.2.5) Service layer for easy email delivery and testing. actionpack (1.13.3, 1.12.5) Web-flow and rendering framework putting the VC in MVC. actionwebservice (1.2.3, 1.1.6) Web service support for Action Pack. activerecord (1.15.3, 1.15.2, 1.14.4) Implements the ActiveRecord pattern for ORM. activesupport (1.4.2, 1.4.1, 1.3.1) Support and utility classes used by the Rails framework. cgi_multipart_eof_fix (2.1) Fix an exploitable bug in CGI multipart parsing which affects Ruby <= 1.8.5 when multipart boundary attribute contains a non-halting regular expression string. daemons (1.0.7, 1.0.5, 1.0.4, 1.0.2) A toolkit to create and control daemons in different ways eventmachine (0.7.2, 0.7.0) Ruby/EventMachine socket engine library fastercsv (1.2.0, 1.1.0) FasterCSV is CSV, but faster, smaller, and cleaner. fastthread (1.0) Optimized replacement for thread.rb primitives ferret (0.11.4) Ruby indexing library. gem_plugin (0.2.2, 0.2.1) A plugin system based only on rubygems that uses dependencies only mongrel (1.0.1, 0.3.13.4) A small fast HTTP library and server that runs Rails, Camping, Nitro and Iowa apps. mongrel_cluster (0.2.1) Mongrel plugin that provides commands and Capistrano tasks for managing multiple Mongrel processes. mysql (2.7) MySQL/Ruby provides the same functions for Ruby programs that the MySQL C API provides for C programs. piston (1.3.3) Piston is a utility that enables merge tracking of remote repositories. rails (1.2.3, 1.1.6) Web-application framework with template engine, control-flow layer, and ORM. rake (0.7.3, 0.7.1) Ruby based make-like utility. sources (0.0.1) This package provides download sources for remote gem installation swiftiply (0.5.1) A fast clustering proxy for web applications.
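    A hedged sketch of the mechanical part of the move, assuming a Capistrano-style layout like the /u/apps/myapp/current path in the mongrel config and a MySQL database named in config/database.yml (the archive, database, and config names below are placeholders). On the version question, the safer answer is usually to install the same gem versions the old box reports rather than jumping to the latest, since Rails 1.2.3 predates many breaking changes:

      # on the old server: archive the app and dump its database
      tar czf myapp.tar.gz /u/apps/myapp
      mysqldump -u root -p myapp_production > myapp_production.sql

      # on the new server: pin the gem versions reported by the old box
      gem install rails -v 1.2.3
      gem install mongrel -v 1.0.1
      gem install mongrel_cluster -v 0.2.1
      gem install mysql -v 2.7

      # unpack, load the data, then start mongrel_cluster as before
      tar xzf myapp.tar.gz -C /
      mysql -u root -p myapp_production < myapp_production.sql
      mongrel_rails cluster::start -C /u/apps/myapp/current/config/mongrel_cluster.yml

    Copying the whole application root (rather than cherry-picking folders) also brings along vendored plugins and the config directory, which is where most of the environment-specific surprises live.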

    Read the article

  • Tomcat can't talk to MySQL after outage

    - by gav
    I missed a payment for my server and they suspended my account for a day or so. When they brought the server back up all my data was intact, but for some reason Tomcat can't make a JDBC connection to my MySQL server. They both run on the same machine and hence I have a bind address of 127.0.0.1. It's strange because I have reset the machine of my own accord before without issue, but clearly something has been reset in the downtime. I followed this guide (just the bits which don't concern S3; I am not on Amazon infrastructure) originally and everything worked as expected. I'm very new to being a sysadmin and I'm not sure what to try; how would you go about diagnosing this issue? The stack trace I get is as follows: INFO: Deploying web application archive myapp-1.1.war 2010-05-26 22:07:22,221 [main] ERROR context.ContextLoader - Context initialization failed org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'messageSource': Initialization of bean failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'transactionManager': Cannot resolve reference to bean 'sessionFactory' while setting bean property 'sessionFactory'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'sessionFactory': Cannot resolve reference to bean 'hibernateProperties' while setting bean property 'hibernateProperties'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'hibernateProperties': Cannot resolve reference to bean 'dialectDetector' while setting bean property 'properties' with key [hibernate.dialect]; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'dialectDetector': Invocation of init method failed; nested exception is org.springframework.jdbc.support.MetaDataAccessException: Could not get Connection for extracting meta data; nested exception is org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519) at org.codehaus.groovy.grails.commons.spring.ReloadAwareAutowireCapableBeanFactory.doCreateBean(ReloadAwareAutowireCapableBeanFactory.java:129) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:450) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:290) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:287) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:193) at org.springframework.context.support.AbstractApplicationContext.initMessageSource(AbstractApplicationContext.java:714) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:404) at org.codehaus.groovy.grails.commons.spring.GrailsWebApplicationContext.refresh(GrailsWebApplicationContext.java:153) ... I get this error for a number of 'beans'. If I type mysql at my command prompt then I can easily login with the same credentials as my grails app which uses GORM and Hibernate to persist objects to the DB. I might not have given enough info to start with but I'm really interested to learn and will certainly provide it if asked, I just really don't know where to start on this one. Thanks, Gav
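    One thing worth knowing before digging further: the mysql command-line client connects over the local unix socket by default, so being able to log in at the prompt does not prove that TCP on 127.0.0.1:3306 is reachable, and TCP is what the JDBC driver uses. A hedged first round of checks (the username is a placeholder, and the init script may be called mysql rather than mysqld on your distribution):

      # is the server actually listening on TCP, and on which address?
      netstat -tlnp | grep 3306
      grep -E 'bind-address|skip-networking|port' /etc/my.cnf /etc/mysql/my.cnf 2>/dev/null

      # force a TCP connection the same way the JDBC driver would
      mysql -h 127.0.0.1 -P 3306 -u myappuser -p

      # and confirm the service came back after the outage
      /etc/init.d/mysql status

    If skip-networking has crept into my.cnf, or bind-address no longer matches what the JDBC URL expects, you would see exactly this "communications link failure" while socket logins keep working.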

    Read the article

  • StrongSwan + xl2tpd client timeout between 2-5 minutes

    - by Howard Guo
    I run CentOS 6.4 on Amazon EC2, using xl2tpd-1.3.1 from EPEL repository together with StrongSwan 5.0.4. I setup a simple IPSec connection: conn l2tp type=transport keyexchange=ikev1 rekey=no authby=psk leftsubnet=0.0.0.0/0 rightsubnet=0.0.0.0/0 compress=yes auto=add And here is xl2tpd.conf: [global] ipsec saref = yes [lns default] ip range = 192.168.0.2-192.168.0.250 local ip = 192.168.0.1 ppp debug = yes pppoptfile = /etc/ppp/options.xl2tpd length bit = yes Here is options.xl2tpd: ms-dns 8.8.4.4 auth lock debug proxyarp There is only one client - Android 4.2 Android connects successfully: Oct 27 19:45:02 ip-172-31-17-30 xl2tpd[2706]: Connection established to x.x.x.x, 59578. Local: 18934, Remote: 29291 (ref=0/0). LNS session is 'default' Oct 27 19:45:02 ip-172-31-17-30 xl2tpd[2706]: Call established with x.x.x.x, Local: 36452, Remote: 29845, Serial: -1369754322 Oct 27 19:45:02 ip-172-31-17-30 pppd[2709]: pppd 2.4.5 started by howard, uid 0 Oct 27 19:45:02 ip-172-31-17-30 pppd[2709]: Using interface ppp0 Oct 27 19:45:02 ip-172-31-17-30 pppd[2709]: Connect: ppp0 <--> /dev/pts/0 Oct 27 19:45:02 ip-172-31-17-30 pppd[2709]: peer from calling number x.x.x.x authorized Oct 27 19:45:02 ip-172-31-17-30 pppd[2709]: Deflate (15) compression enabled Oct 27 19:45:03 ip-172-31-17-30 pppd[2709]: Cannot determine ethernet address for proxy ARP Oct 27 19:45:03 ip-172-31-17-30 pppd[2709]: local IP address 192.168.0.1 Oct 27 19:45:03 ip-172-31-17-30 pppd[2709]: remote IP address 192.168.0.2 Oct 27 19:45:03 ip-172-31-17-30 charon: 06[KNL] 192.168.0.1 appeared on ppp0 Oct 27 19:45:03 ip-172-31-17-30 charon: 06[KNL] 192.168.0.1 disappeared from ppp0 Oct 27 19:45:03 ip-172-31-17-30 charon: 06[KNL] 192.168.0.1 appeared on ppp0 Oct 27 19:45:03 ip-172-31-17-30 charon: 06[KNL] interface ppp0 activated In the meanwhile, Internet works perfectly on the Android client, the VPN connection is stable and fast. However, it always happens that within 2-5 minutes after the connection is established: Oct 27 19:47:07 ip-172-31-17-30 xl2tpd[2706]: Maximum retries exceeded for tunnel 18934. Closing. Oct 27 19:47:07 ip-172-31-17-30 xl2tpd[2706]: Connection 29291 closed to 95.91.227.224, port 59578 (Timeout) Oct 27 19:47:07 ip-172-31-17-30 charon: 06[KNL] interface ppp0 deactivated Oct 27 19:47:07 ip-172-31-17-30 charon: 06[KNL] interface ppp0 deleted Then the VPN connection is broken. So what might have gone wrong? The same L2TP service works flawlessly on iOS 7, MacOS 10.8, and Windows 7, there is no disconnection issue on those OSes. Thank you!
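    Not a definitive fix, but one variable worth ruling out: with the Android client behind NAT, an idle IKE SA can let the NAT mapping expire, after which xl2tpd's control-channel retransmissions (the "Maximum retries exceeded" above) never reach the phone. strongSwan's dead peer detection doubles as a keepalive, and the options below are standard ipsec.conf keywords; whether they help depends on where the L2TP packets are actually being dropped:

      conn l2tp
          ...
          dpddelay=30s
          dpdtimeout=150s
          dpdaction=clear

    If the drops continue at the same 2-5 minute mark regardless, capturing UDP 1701 and 4500 on the server during a disconnect would show whether the Hello traffic stops arriving or stops being answered.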

    Read the article

  • Ubuntu unattended-upgrades stops apache

    - by Robbie
    This morning i was alerted to the fact that both apache instances serving my app were not responding to requests from my load balancer. I attempted apachectl restart and it said apache was not running. So, i started apache on both instances and got the service up again. I then followed the logs and worked out that both had performed upgrades via the unattended-upgrades package moments before they stopped responding. /var/log/unattended-upgrades/unattended-upgrades.log 2013-07-02 06:30:51,875 INFO Starting unattended upgrades script 2013-07-02 06:30:51,875 INFO Allowed origins are: ['o=Ubuntu,a=precise-security'] 2013-07-02 06:33:57,771 INFO Packages that are upgraded: accountsservice apache2 apache2-mpm-prefork apache2-utils apache2.2-bin apache2.2-common apparmor apport apt apt-transport-https apt-utils bind9-host binutils dbus dnsutils gnupg gpgv isc-dhcp-client isc-dhcp-common krb5-locales libaccountsservice0 libapt-inst1.4 libapt-pkg4.12 libbind9-80 libc-bin libc-dev-bin libc6 libc6-dev libcurl3-gnutls libdbus-1-3 libdbus-glib-1-2 libdns81 libdrm-intel1 libdrm-nouveau1a libdrm-radeon1 libdrm2 libexpat1 libfreetype6 libgc1c2 libgnutls-dev libgnutls-openssl27 libgnutls26 libgnutlsxx27 libisc83 libisccc80 libisccfg82 liblwres80 libruby1.8 libx11-6 libx11-data libxcb1 libxext6 libxml2 linux-firmware linux-image-virtual linux-libc-dev linux-virtual multiarch-support openssl perl perl-base perl-modules python-apport python-crypto python-keyring python-problem-report python-software-properties ri1.8 ruby1.8 ruby1.8-dev sudo tzdata update-manager-core 2013-07-02 06:33:57,772 INFO Writing dpkg log to '/var/log/unattended-upgrades/unattended-upgrades-dpkg_2013-07-02_06:33:57.772399.log' 2013-07-02 06:36:10,584 INFO All upgrades installed I'm running Ubuntu 12.04 on Amazon EC2 servers. I have unattended-upgrades installed and configured as follows: /etc/apt/apt.conf.d/50unattended-upgrades // Automatically upgrade packages from these (origin:archive) pairs Unattended-Upgrade::Allowed-Origins { "${distro_id}:${distro_codename}-security"; // "${distro_id}:${distro_codename}-updates"; // "${distro_id}:${distro_codename}-proposed"; // "${distro_id}:${distro_codename}-backports"; }; // List of packages to not update Unattended-Upgrade::Package-Blacklist { }; /etc/apt/apt.conf.d/20auto-upgrades APT::Periodic::Update-Package-Lists "1"; APT::Periodic::Unattended-Upgrade "1"; I've struggled to find documentation about what happens to running processes during an upgrade. - Is this expected behaviour? Or should unattended-upgrades restart apache after upgrading it? - What can I do to ensure apache is restarted correctly? Should I just blacklist the apache package?
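    If blacklisting turns out to be the lesser evil, the syntax goes straight into the 50unattended-upgrades file you already have open; the trade-off is that Apache security updates then have to be applied by hand during a maintenance window. A sketch using the package names from the log above:

      Unattended-Upgrade::Package-Blacklist {
          "apache2";
          "apache2-mpm-prefork";
          "apache2.2-bin";
          "apache2.2-common";
      };

    The gentler alternative is to leave the upgrades on and have your monitoring (or a post-upgrade hook) confirm that apache2 is running, since the package's own maintainer scripts are what decide whether the service comes back up after the upgrade.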

    Read the article

  • Backing Up vs. Redundancy

    - by TK Kocheran
    I'm currently in stage 2 of 3 of building my home workstation. What this means is that my RAID-0 array of solid state disks will be backed up nightly to a RAID-5 or RAID-6 array of traditional spinning hard disks. However, it recently dawned on me that redundancy is not backup. The main reason for setting up a RAID array with redundancy was to protect myself in the event of a drive failure to serve as an effective backup solution. Wait. What if a bolt of lightning finds a way to travel into my house, through my surge-protector, into my power supply and physically destroys all of my hard disks and SSDs? Well, in that case, I guess I'd be fine because I generally keep most important files (music, pictures, videos) stored in multiple places like on my laptop, my wife's laptop, and an encrypted USB hard drive. Wait. What if a giant hedgehog meteor attacks my house from space traveling at mach 3 and all machines and hard disks are blown to smithereens. Well, I guess I could find a way to do ridiculously slow and cumbersome rsyncs or backups to Amazon's Glacier. Wait. What if there's a nuclear apocalypse... and at this point I start laughing hysterically. At what point does backing up become irrelevant? I completely understand situation one (mechanical drive failure), situation two (workstation compromised or destroyed somehow), possibly even situation three (all machines and disks destroyed), but situation four? There's no questioning the need for backups. None. However, there are three questions I'd really like addressed: To what level should one backup? I definitely understand the merits of physical disk redundancy. I also believe in keeping important files on multiple machines and thinning out the possibility of losing all of my files. Online backups make sense, but they beg the following question. What should I be backing up remotely and how often? It's no problem storage-wise to back up important files (music, pictures, videos) and even configuration and temporal data for all of the machines in my network (all Linux based)... albeit locally. Transferring to the cloud is another story. Worst-case scenario, if I lost all of my configuration for my individual computers, the reality is that I probably lost the machines too. The cloud is a long way away from here; I can run backups over CAT-6 here and see 100MB/s easily, but I'm afraid that I'm only going to see 2MB/s at best when transferring up to the cloud.
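    For the "what and how often" question, a pattern that keeps the cloud leg realistic on a roughly 2MB/s uplink is to send only the irreplaceable tier (photos, music, documents) offsite on a nightly, bandwidth-capped rsync over SSH, and let the RAID plus the local copies cover everything that is merely inconvenient to lose. A sketch with placeholder paths and host (--bwlimit is in KB/s):

      rsync -az --delete --bwlimit=1500 \
          /home/you/Pictures /home/you/Music /home/you/Documents \
          you@offsite.example.com:backups/workstation/

    Run from cron nightly, the first pass is slow, but every pass after that only ships the deltas, which is what makes an offsite copy tolerable even on a thin uplink.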

    Read the article

  • Is it possible to configure a CDN so that it will step out of the way for a subset of regional IPs?

    - by rwired
    We have a website which targets customers in China, both expat and local Chinese. We have an ICP license which allows us to host in a datacenter inside China. Internet in China is actually as fast as anywhere else (faster than most places actually), so long as the content is served-up within the boundaries of the Great-Firewall. Anything that crosses the wall is horribly slow. The problem is that most expats have some sort of VPN installed so that they can access all the blocked stuff. What this means is that when they access our site, the traffic first has to go out of China through the firewall to their VPN, and then back in. The performance is terrible, worse than if we were just hosting outside of China directly (which we used to do before the ICP was issued). So I want to use a global CDN to mirror the site automatically, but I only want to deliver the content via the CDN if the user's request IP address is outside of China. Inside China I would like the content to be served by our own server. I also want to be careful with the domain names. We currently use www.xxx.com and www.xxx.cn for language selection purposes, as these perform well in SEO on Google (which the expats use), and Baidu (which the locals use). If possible I would like to avoid having one domain on the outside, and the other on the inside since not all expats use a VPN, and some Chinese speakers also use VPNs. Also some of our legitimate customers in both languages are from outside of China. I also don't want to resort to using something like www2.xxx.com/cn for the outside connection if at all possible, since I have worries about duplicate content and canonical URLs ruining our SEO (unless you know of a quick fix for that). CDNs I'm considering are: Google PageSpeed, CloudFlare, Amazon CloudFront. None of which have datacenters inside China. I have complete control of the .com DNS zone records, but the .cn zones are under the control of the domain issuing body in China. I'm not sure at this time if they would allow even a CNAME to point to an IP outside of China (although I don't see why not). They no longer allow outside registrars like they used to.
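    If you control the authoritative DNS for the .com zone (or can move it somewhere that you do), the usual way to make a CDN "step aside" for a region is split-horizon DNS: Chinese resolvers get an answer pointing at the in-China server, everyone else gets the CNAME the CDN hands you, and the URL stays identical, so there is no duplicate-content or canonical-URL problem. A hedged BIND sketch; the ACL contents and zone file names are placeholders, and the .cn zone would simply stay pointed at the Chinese datacenter:

      acl "cn-resolvers" {
          // published CN prefix lists go here
      };

      view "china" {
          match-clients { cn-resolvers; };
          zone "xxx.com" { type master; file "db.xxx.com.china"; };
      };

      view "world" {
          match-clients { any; };
          zone "xxx.com" { type master; file "db.xxx.com.world"; };
      };

    The caveat is that the split happens on the resolver's address, not the visitor's, so an expat whose VPN also carries their DNS lookups will be steered to the CDN copy, which in their case is what you want anyway.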

    Read the article

  • Need advice on which PCI SATA Controller Card to Purchase

    - by Matt1776
    I have a major issue with the build of a machine I am trying to get up and running. My goal is to create a file server that will service the needs of my software development, personal media storage and streaming/media server needs, as well as provide a strong platform for backing up all this data in a routine, cron-job oriented German efficiency sort of way. The issue is a simple one - all my drives are SATA drives and my motherboard controller only contains 4 ports. Solving the issue has proven to be an unmitigated nightmare. I would like advice on the purchase of the following: 4 Port internal SATA / 2 Port external eSATA PCI SATA Controller Card that has the following features and/or advantages: It must function. If I plug it in and attach drives, I expect my system to still make it to the Operating System login screen. It must function on CentOS, and I mean it must function WELL and with MINIMAL hassle. If hassle is unavoidable, there shall be CLEAR CUT and EASY TO FOLLOW instructions on how to install drivers and other supporting software. I do not need nor want fakeRAID - I will be setting up any RAID configurations from within the operating system. Now, if I am able to find such a mythical device, I would be eternally grateful to whomever would be able to point me in the right direction, a direction which I assume will be paved with yellow bricks. I am prepared to pay a considerable sum of money (as SATA controller cards go) and so paying anywhere between 60 to 120 dollars will not be an issue whatsoever. Does such a magical device exist? The following link shows an "example" of the type of thing I am looking for, however, I have no way of verifying that once I plug this baby in that my system will still continue to function once I've attached the drives, or that once I've made it to the OS, I will be able to install whatever drivers or software programs I need to make it work with relative ease. It doesn't have to be dog-shit simple, but it cannot involve kernels or brain surgery. http://www.amazon.com/gp/product/B00552PLN4/ref=pd_lpo_k2_dp_sr_1?pf_rd_p=486539851&pf_rd_s=lpo-top-stripe-1&pf_rd_t=201&pf_rd_i=B003GSGMPU&pf_rd_m=ATVPDKIKX0DER&pf_rd_r=1HJG60XTZFJ48Z173HKY So does anyone have a suggestion regarding the subject I am asking about? PCI SATA Controller Cards? It would help if you've had experience with the component before - that is after all why I am asking here - for those who have had experience that I do not have. Bear in mind that this is for a home setup and that I do not have a company credit card. I have a budget with a 'relative' upper limit of about $150.00.

    Read the article

  • Nginx, proxy passing to Apache, and SSL

    - by Vic
    I have Nginx and Apache set up with Nginx proxy-passing everything to Apache except static resources. I have a server set up for port 80 like so: server { listen 80; server_name *.example1.com *.example2.com; [...] location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ { access_log off; expires max; add_header Pragma public; add_header Cache-Control "public, must-revalidate, proxy-revalidate"; add_header Vary: Accept-Encoding; } location / { proxy_pass http://127.0.0.1:8080; include /etc/nginx/conf.d/proxy.conf; } } And since we have multiple ssl sites (with different ssl certificates) I have a server{} block for each of them like so: server { listen 443 ssl; server_name *.example1.com; [...] location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ { access_log off; expires max; add_header Pragma public; add_header Cache-Control "public, must-revalidate, proxy-revalidate"; add_header Vary: Accept-Encoding; } location / { proxy_pass https://127.0.0.1:8443; include /etc/nginx/conf.d/proxy.conf; proxy_set_header X-Forwarded-Port 443; proxy_set_header X-Forwarded-Proto https; } } server { listen 443 ssl; server_name *.example2.com; [...] location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ { access_log off; expires max; add_header Pragma public; add_header Cache-Control "public, must-revalidate, proxy-revalidate"; add_header Vary: Accept-Encoding; } location / { proxy_pass https://127.0.0.1:8445; include /etc/nginx/conf.d/proxy.conf; proxy_set_header X-Forwarded-Port 443; proxy_set_header X-Forwarded-Proto https; } } First of all, I think there is a very obvious problem here, which is that I'm double-encrypting everything, first at the nginx level and then again by Apache. To make everything worse, I just started using Amazon's Elastic Load Balancer, so I added the certificate to the ELB and now SSL encryption is happening three times. That's gotta be horrible for performance. What is the sane way to handle this? Should I be forwarding https on the ELB - http on nginx - http on apache? Secondly, there is so much duplication above. Is the best method to not repeat myself to put all of the static asset handling in an include file and just include it in the server?
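    On the duplication question: yes, the static-asset location block can live in its own file and be pulled into every server block with include, the same way proxy.conf already is. One caveat: give the file a name that is not matched by any wildcard include in nginx.conf (many setups pull in conf.d/*.conf at the http level, where a bare location block is not valid), so something like /etc/nginx/static-assets.inc is a safer assumption:

      # /etc/nginx/static-assets.inc
      location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ {
          access_log off;
          expires max;
          add_header Pragma public;
          add_header Cache-Control "public, must-revalidate, proxy-revalidate";
          # add_header takes the header name and value as two arguments; the colon
          # after Vary in the original config ends up inside the header name
          add_header Vary Accept-Encoding;
      }

    referenced from each server block with include /etc/nginx/static-assets.inc;. On the triple-encryption point, the usual pattern is to terminate TLS once at the outermost hop (the ELB, now that it holds the certificate) and speak plain HTTP to nginx and Apache, forwarding X-Forwarded-Proto so the application still knows the request arrived over HTTPS; whether that is acceptable depends on how much you trust the network between the ELB and your instances.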

    Read the article

  • git private server error: "Permission denied (publickey)."

    - by goddfree
    I followed the instructions here in order to set up a private git server on my Amazon EC2 instance. However, I am having problems when trying to SSH into the git account. Specifically, I get the error "Permission denied (publickey)." Here are the permissions of my files/folders on the EC2 server: drwx------ 4 git git 4096 Aug 13 19:52 /home/git/ drwx------ 2 git git 4096 Aug 13 19:52 /home/git/.ssh -rw------- 1 git git 400 Aug 13 19:51 /home/git/.ssh/authorized_keys Here are the permissions of my files/folders on my own computer: drwx------ 5 CYT staff 170 Aug 13 14:51 .ssh -rw------- 1 CYT staff 1679 Aug 13 13:53 .ssh/id_rsa -rw-r--r-- 1 CYT staff 400 Aug 13 13:53 .ssh/id_rsa.pub -rw-r--r-- 1 CYT staff 1585 Aug 13 13:53 .ssh/known_hosts When checking my logs in /var/log/secure, I used to get the following error message every time I tried to SSH: Authentication refused: bad ownership or modes for file /home/git/.ssh/authorized_keys However, after making a few permission changes, I no longer get this error message. Despite this, I am still getting the "Permission denied (publickey)." message every time I try to SSH. The command I am using to SSH is ssh -T git@my-ip. Here is the full log I get when I run ssh -vT [email protected]: OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011 debug1: Reading configuration data /etc/ssh_config debug1: /etc/ssh_config line 20: Applying options for * debug1: Connecting to my-ip [my-ip] port 22. debug1: Connection established. debug1: identity file /Users/CYT/.ssh/id_rsa type -1 debug1: identity file /Users/CYT/.ssh/id_rsa-cert type -1 debug1: identity file /Users/CYT/.ssh/id_dsa type -1 debug1: identity file /Users/CYT/.ssh/id_dsa-cert type -1 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_6.2 debug1: Remote protocol version 2.0, remote software version OpenSSH_6.2 debug1: match: OpenSSH_6.2 pat OpenSSH* debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr [email protected] none debug1: kex: client->server aes128-ctr [email protected] none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: RSA 08:ad:8a:bc:ab:4d:5f:73:24:b2:78:69:46:1a:a5:5a debug1: Host 'my-ip' is known and matches the RSA host key. debug1: Found key in /Users/CYT/.ssh/known_hosts:1 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Trying private key: /Users/CYT/.ssh/id_rsa debug1: Trying private key: /Users/CYT/.ssh/id_dsa debug1: No more authentication methods to try. Permission denied (publickey). I have spent a few hours going through threads on various sites, including SO and SF, looking for a solution. It seems that the permissions for my files are all okay, but I just can't figure out the problem. Any help would be greatly appreciated. Edit: EEAA: Here are the outputs you requested: $ getent passwd git git:x:503:504::/home/git:/bin/bash $ grep ssh ~git/.ssh/authorized_keys | wc -l grep: /home/git/.ssh/authorized_keys: Permission denied 0
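    The listing above already shows the modes sshd usually insists on (700 on the home directory and .ssh, 600 on authorized_keys), so the usual remaining suspects are ownership and the file's contents; the grep shown at the end only failed because it was run as a user other than git. A hedged re-check, run as root on the instance:

      chown -R git:git /home/git/.ssh
      ls -la /home/git/.ssh                              # every entry should be owned by git:git
      sudo -u git wc -l /home/git/.ssh/authorized_keys
      sudo -u git cat /home/git/.ssh/authorized_keys     # one unbroken "ssh-rsa AAAA..." line per key

      # then watch sshd's side while the Mac retries
      tail -f /var/log/secure

    A key that was pasted with line wraps, or an authorized_keys owned by root, produces exactly this client-side "Permission denied (publickey)" even though the modes look right.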

    Read the article

  • AGENT: The World's Smartest Watch

    - by Rob Chartier
    AGENT: The World's Smartest Watch by Secret Labs + House of Horology Disclaimer: Most if not all of this content has been gleaned from the comments on the Kickstarter project page and comments section. Any discrepancies between this post and any documentation on agentwatches.com, kickstarter.com, etc.., those official sites take precedence. Overview The next generation smartwatch with brand-new technology. World-class developer tools, unparalleled battery life, Qi wireless charging. Kickstarter Page, Comments Funding period : May 21, 2013 - Jun 20, 2013 MSRP : $249 Other Urls http://www.agentwatches.com/ https://www.facebook.com/agentwatches http://twitter.com/agentwatches http://pinterest.com/agentwatches/ http://paper.li/robchartier/1371234640 Developer Story The first official launch of the preview SDK and emulator will happen on 20-Jun-2013.  All development will be done in Visual Studio 2012, using the .NET Micro Framework SDK 2.3.  The SDK will ship with the first round of the expected API for developers along with an emulator. With that said, there is no need to wait for the SDK.  You can download the tooling now and get started with Apps and Faces immediately.  The only thing that you will not be able to work with is the API; but for example, watch faces, you can start building the basic face rendering with the Bitmap graphics drawing in the .NET Micro Framework.   Does it look good? Before we dig into any more of the gory details, here are a few photos of the current available prototype models.   The watch on the tiny QI Charter   If you wander too far away from your phone, your watch will let you know with a vibration and a message, all but one button will dismiss the message.   An app showing the premium weather data!   Nice stitching on the straps, leather and silicon will be available, along with a few lengths to choose from (short, regular, long lengths). On to those gory details…. Hardware Specs Processor 120MHz ARM Cortex-M4 processor (ATSAM4SD32) with secondary AVR co-processor Flash & RAM 2MB of onboard flash and 160KB of RAM 1/4 of the onboard flash will be used by the OS The flash is permanent (non-volatile) storage. Bluetooth Bluetooth 4.0 BD/EDR + LE Bluetooth 4.0 is backwards compatible with Bluetooth 2.1, so classic Bluetooth functions (BD/EDR, SPP/AVRCP/PBAP/etc.) will work fine. Sensors 3D Accelerometer (Motion) ST LSM303DLHC Ambient Light Sensor Hardware power metering Vibration Motor (You can pulse it to create vibration patterns, not sure about the vibration strength - driven with PWM) No piezo/speaker or microphone. Other QI Wireless Charging, no NFC, no wall adapter included Custom LED Backlight No GPS in the watch. It uses the GPS in your phone. AGENT watch apps are deployed and debugged wirelessly from your PC via Bluetooth. RoHS, Pb-free Battery Expected to use a CR2430-sized rechargeable battery – replaceable (Mouser, Amazon) Estimated charging time from empty is 2 hours with provided charger 7 Days typical with Bluetooth on, 30 days with Bluetooth off (watch-face only mode) The battery should last at least 2 years, with 100s of charge cycles. Physical dimensions Roughly 38mm top-to-bottom on the front face 35mm left-to-right on the front face and around 12mm in depth 22mm strap Two ~1/16" hex screws to attach the watch pin The top watchcase material candidates are PVD stainless steel, brushed matte ceramic, and high-quality polycarbonate (TBD). 
The glass lens is mineral glass, Anti-glare glass lens Strap options Leather and silicon straps will be available Expected to have three sizes Display 1.28" Sharp Memory Display The display stays on 100% of the time. Dimensions: 128x128 pixels Buttons Custom "Pusher" buttons, they will not make noise like a mouse click, and are very durable. The top-left button activates the backlight; bottom-left changes apps; three buttons on the right are up/select/down and can be used for custom purposes by apps. Backup reset procedure is currently activated by holding the home/menu button and the top-right user button for about ten seconds Device Support Android 2.3 or newer iPhone 4S or newer Windows Phone 8 or newer Heart Rate monitors - Bluetooth SPP or Bluetooth LE (GATT) is what you'll want the heart monitor to support. Almost limitless Bluetooth device support! Internationalization & Localization Full UTF8 Support from the ground up. AGENT's user interface is in English. Your content (caller ID, music tracks, notifications) will be in your native language. We have a plan to cover most major character sets, with Latin characters pre-loaded on the watch. Simplified Chinese will be available Feature overview Phone lost alert Caller ID Music Control (possible volume control) Wireless Charging Timer Stopwatch Vibrating Alarm (possibly custom vibrations for caller id) A few default watch faces Airplane mode (by demand or low power) Can be turned off completely Customizable 3rd party watch faces, applications which can be loaded over bluetooth. Sample apps that maybe installed Weather Sample Apps not installed Exercise App Other Possible Skype integration over Bluetooth. They will provide an AGENT app for your smartphone (iPhone, Android, Windows Phone). You'll be able to use it to load apps onto the watch.. You will be able to cancel phone calls. With compatible phones you can also answer, end, etc. They are adopting the standard hands-free profile to provide these features and caller ID.

    Read the article

  • How To Switch Back to Outlook 2007 After the 2010 Beta Ends

    - by Matthew Guay
    Are you switching back to Outlook 2007 after trying out Office 2010 beta?  Here’s how you can restore your Outlook data and keep everything working fine after the switch. Whenever you install a newer version of Outlook, it will convert your profile and data files to the latest format.  This makes them work the best in the newer version of Outlook, but may cause problems if you decide to revert to an older version.  If you installed Outlook 2010 beta, it automatically imported and converted your profile from Outlook 2007.  When the beta expires, you will either have to reinstall Office 2007 or purchase a copy of Office 2010. If you choose to reinstall Office 2007, you may notice an error message each time you open Outlook. Outlook will still work fine and all of your data will be saved, but this error message can get annoying.  Here’s how you can create a new profile, import all of your old data, and get rid of this error message. Banish the Error Message with a New Profile To get rid of this error message, we need to create a new Outlook profile.  First, make sure your Outlook data files are backed up.  Your messages, contacts, calendar, and more are stored in a .pst file in your appdata folder.  Enter the following in the address bar of an Explorer window to open your Outlook data folder, and replace username with your user name: C:\Users\username\AppData\Local\Microsoft\Outlook Copy the Outlook Personal Folders (.pst) files that contain your data. Its name is usually your email address, though it may have a different name.  If in doubt, select all of the Outlook Personal Folders files, copy them, and save them in another safe place (such as your Documents folder). Now, let’s remove your old profile.  Open Control Panel, and select Mail.  In Windows Vista or 7, simply enter “Mail” in the search box and select the first entry. Click the “Show Profiles…” button. Now, select your Outlook profile, and click Remove.  This will not delete your data files, but will remove them from Outlook. Press Yes to confirm that you wish to remove this profile. Open Outlook, and you will be asked to create a new profile.  Enter a name for your new profile, and press Ok. Now enter your email account information to setup Outlook as normal. Outlook will attempt to automatically configure your account settings.  This usually works for accounts with popular email systems, but if it fails to find your information you can enter it manually.  Press finish when everything’s done. Outlook will now go ahead and download messages from your email account.  In our test, we used a Gmail account that still had all of our old messages online.  Those files are backed up in our old Outlook data files, so we can save time and not download them.  Click the Send/Receive button on the bottom of the window, and select “Cancel Send/Receive”. Restore Your Old Outlook Data Let’s add our old Outlook file back to Outlook 2007.  Exit Outlook, and then go back to Control Panel, and select Mail as above.  This time, click the Data Files button. Click the Add button on the top left. Select “Office Outlook Personal Folders File (.pst)”, and click Ok. Now, select your old Outlook data file.  It should be in the folder that opens by default; if not, browse to the backup copy we saved earlier, and select it. Press Ok at the next dialog to accept the default settings. Now, select the data file we just imported, and click “Set as Default”. 
Now, all of your old messages, appointments, contacts, and everything else will be right in Outlook ready for you. Click Ok, and then open Outlook to see the change. All of the data that was in Outlook 2010 is now ready to use in Outlook 2007. You won’t have to wait to re-download all of your emails from the server since everything’s still here ready to be used. And when you open Outlook, you won’t see any error messages, either! Conclusion Migrating your Outlook profile back to Outlook 2007 is fairly easy, and with these steps, you can avoid seeing an error message every time you open Outlook. With all your data intact, you’re ready to get back to work instead of getting frustrated with Outlook. Many of us use webmail and keep all of our messages in the cloud, but even on broadband connections it can take a long time to download several gigabytes of emails.

    Read the article

  • Ask How-To Geek: Dropbox in the Start Menu, Understanding Symlinks, and Ripping TV Series DVDs

    - by Jason Fitzpatrick
    This week we take a look at how to incorporate Dropbox into your Windows Start Menu, understanding and using symbolic links, and how to rip your TV series DVDs right to unique and high-quality episode files. Once a week we dip into our reader mailbag and help readers solve their problems, sharing the useful solutions with you in the process. Read on to see our fixes for this week’s reader dilemmas. Add Drobox to Your Start Menu Dear How-To Geek, I use Dropbox all the time and would like to add it right onto my start menu along side the other major shortcuts like Documents, Pictures, etc. It seems like adding Dropbox into the menu should be part of the Dropbox installation package! Sincerely, Dropboxing in Des Moines Dear Dropboxing, We agree, it would be a nice installation option. As it stands you’re going to have to do a little simple hacking to get Dropbox nestled neatly into your start menu. The hack isn’t super elegant but when you’re done you’ll have the link you want and it’ll look like it was there all along. Check out this step-by-step guide here in order to take an existing Library shortcut and rework it to be a Dropbox link. Understanding and Using Symbolic Links Dear How-To Geek, I was talking to a coworker the other day about an issue I’d been having with a media center application I’m running. He suggested using symbolic links to better organize my media and make it easier for the application to access my collection. I had no idea what he was talking about and never got a chance to bug him about it later. Can you clear up this whole symbolic links business for me? I’ve been using computers for years and I’ve never even heard of it! Sincerely, Symbolic Who? Dear Symbolic, Symbolic links aren’t commonly used by many Windows users which is why you likely haven’t run into the concept. Symbolic links are essentially supercharged shortcuts—the newly introduced Windows library system is really just a type of symbolic link system. You can use symbolic links to do all sorts of neat stuff like link folders to your Dropbox folder, organize media, and more. The concept of symbolic links is pretty simple but the execution can be really tricky. We’d suggest reading over our guide to creating symbolic links in Windows 7, Windows XP, and Ubunutu to get a clearer idea what you’re getting into. Rip Your TV DVDs into Handy Episode Files Dear How-To Geek, My wife got me an iPod for Christmas and I still haven’t got around to filling it up. I have tons of entire TV show seasons on DVD and would like to get them on the iPod but I have absolutely no idea where to start. How do I get the shows off the discs? I thought it would be as easy to import the TV shows into iTunes as it is to import tracks off a CD but I was totally wrong. I tried downloading some applications to rip them but those didn’t work at all. Very frustrating! Surely there is an easy and/or automated way to do this, right? Sincerely, Free My DVDs Dear DVDs, Oh man is this a frustration we can relate to. It’s inordinately difficult to get movies and TV shows off physical media and into digital (and portable media player-friendly) formats. There are a multitude of ways to rip DVDs and quite a few applications out there (some good, some mediocre, and some outright malware). We’d recommend a two-part punch to solve your ripping woes. You’ll need a copy of DVDFab to strip away the protections on the discs and rip the disc and Handbrake to load the disc image and convert the files. 
It’s not quite as smooth as the CD-to-iTunes workflow but it’s still pretty easy. Check out all the steps and settings you’ll want to toggle here. Have a question you want to put before the How-To Geek staff? Shoot us an email at [email protected] and then keep an eye out for a solution in the Ask How-To Geek column.

    Read the article

  • Why Software Sucks...and What You Can Do About It – book review

    - by DigiMortal
    How do our users see the products we write for them, and how happy are they with our work? Are they able to get their work done without fighting with cool features and crashes, or are they just switching off the resistance part of their brain to survive our software? Yeah, the overall software usability landscape is not very nice. Okay, it is not even nice. But, fortunately, Why Software Sucks...and What You Can Do About It by David S. Platt explains everything. Why Software Sucks… is a book for software users, but I consider it a must-read for developers as well, and especially for their managers, whose politics often kill usability topics as soon as they appear. For managers, usability is a soft topic that can be manipulated to suit the current state of the project. Although developers are not UI designers or usability experts, they are still very often forced to deal with these topics, and this is how usability problems start (of course, designers can also produce designs that are stupid and too hard for users to use, but this post is about development). I found this book to be a very interesting and funny read. It is not a humor book, but it explains things in a way that you will remember well later. It took me about three evenings to go through this book, and I am still enjoying what I found and how the author explains our weird young field to end users. I suggest this book to all developers: while you are demanding that your management hire or outsource a usability expert, you will at least be causing less pain to end users. So, go and buy this book, just like I did. And… thanks to Mr. Platt :) There is one more book I suggest you read if you are interested in usability: Don't Make Me Think: A Common Sense Approach to Web Usability, 2nd Edition by Steve Krug. Editorial review from Amazon Today’s software sucks. There’s no other good way to say it. It’s unsafe, allowing criminal programs to creep through the Internet wires into our very bedrooms. It’s unreliable, crashing when we need it most, wiping out hours or days of work with no way to get it back. And it’s hard to use, requiring large amounts of head-banging to figure out the simplest operations. It’s no secret that software sucks. You know that from personal experience, whether you use computers for work or personal tasks. In this book, programming insider David Platt explains why that’s the case and, more importantly, why it doesn’t have to be that way. And he explains it in plain, jargon-free English that’s a joy to read, using real-world examples with which you’re already familiar. In the end, he suggests what you, as a typical user, without a technical background, can do about this sad state of our software—how you, as an informed consumer, don’t have to take the abuse that bad software dishes out. As you might expect from the book’s title, Dave’s expose is laced with humor—sometimes outrageous, but always dead on. You’ll laugh out loud as you recall incidents with your own software that made you cry. You’ll slap your thigh with the same hand that so often pounded your computer desk and wished it was a bad programmer’s face. But Dave hasn’t written this book just for laughs. He’s written it to give long-overdue voice to your own discovery—that software does, indeed, suck, but it shouldn’t. Table of contents Acknowledgments xiii Introduction Chapter 1: Who’re You Calling a Dummy?
Where We Came From Why It Still Sucks Today Control versus Ease of Use I Don’t Care How Your Program Works A Bad Feature and a Good One Stopping the Proceedings with Idiocy Testing on Live Animals Where We Are and What You Can Do Chapter 2: Tangled in the Web Where We Came From How It Works Why It Still Sucks Today Client-Centered Design versus Server-Centered Design Where’s My Eye Opener? It’s Obvious—Not! Splash, Flash, and Animation Testing on Live Animals What You Can Do about It Chapter 3: Keep Me Safe The Way It Was Why It Sucks Today What Programmers Need to Know, but Don’t A Human Operation Budgeting for Hassles Users Are Lazy Social Engineering Last Word on Security What You Can Do Chapter 4: Who the Heck Are You? Where We Came From Why It Still Sucks Today Incompatible Requirements OK, So Now What? Chapter 5: Who’re You Looking At? Yes, They Know You Why It Sucks More Than Ever Today Users Don’t Know Where the Risks Are What They Know First Milk You with Cookies? Privacy Policy Nonsense Covering Your Tracks The Google Conundrum Solution Chapter 6: Ten Thousand Geeks, Crazed on Jolt Cola See Them in Their Native Habitat All These Geeks Who Speaks, and When, and about What Selling It The Next Generation of Geeks—Passing It On Chapter 7: Who Are These Crazy Bastards Anyway? Homo Logicus Testosterone Poisoning Control and Contentment Making Models Geeks and Jocks Jargon Brains and Constraints Seven Habits of Geeks Chapter 8: Microsoft: Can’t Live With ’Em and Can’t Live Without ’Em They Run the World Me and Them Where We Came From Why It Sucks Today Damned if You Do, Damned if You Don’t We Love to Hate Them Plus ça Change Growing-Up Pains What You Can Do about It The Last Word Chapter 9: Doing Something About It 1. Buy 2. Tell 3. Ridicule 4. Trust 5. Organize Epilogue About the Author

    Read the article

  • Hack Extension Files to Make Them Version-Compatible for Firefox

    - by Asian Angel
    A well known drawback in using Firefox is the problem with extension compatibility when a new major version is released. Whether it is for a new extension that you are trying for the first time or an old favorite we have a way to get those extensions working for you again. There are multiple reasons why you might want to choose this method to fix a non-compatible extension: You are uncomfortable with tweaking the “about:config” settings You prefer to maintain the original “about:config” settings in a pristine state and like having compatibility checking active You are looking to gain some “geek cred” Keep in mind that most extensions will work perfectly well with a new version of Firefox and simply have the “version compatibility number” problem. But once in a while there may be one that needs to have some work done on it by the extension’s author. The Problem Here is a perfect example of everyone’s least favorite “extension message”. This is the last thing that you need when all that you want is for your favorite extension (or a new one) to work on a fresh clean install. Note: This works nicely to “replace” non-compatible extensions already present in your browser if you are simply upgrading. Hacking the XPI File For this procedure you will need to manually download the extension to your hard-drive (right click on the extension’s “Install Button” and select “Save As”). Once you have done that you are ready to start hacking the extension. For our example we chose the “GCal Popup Extension”. The best thing to do is place the extension in a new folder (i.e. the Desktop or other convenient location) then unzip it just the same way that you would with any regular zip file. Once it is unzipped you will see the various folders and files that were in the “xpi file” (we had four files here but depending on the extension the number may vary). There is only one file that you need to focus on…the “install.rdf” file. Note: At this point you should move the original extension file to a different location (i.e. outside of the folder) so that it is no longer present. Open the file in “Notepad” so that you can change the number for the “maxVersion”. Here the number is listed as “3.5.*” but we needed to make it higher… Replacing the “5” with a “7” is all that we needed to do. Once you have entered your new “maxVersion” number save the file. At this point you will need to re-zip all of the files back into a single file. Make certain that you “create” a file with the “.zip file extension” otherwise this will not work. Once you have the new zip file created you will need to rename the entire file including the “file extension”. For our example we copied and pasted the original extension name. Once you have changed the name click outside of the “text area”. You will see a small message window like this asking for confirmation…click “Yes” to finish the process. Now your modified/updated extension is ready to install. Drag the extension into your browser to install it and watch that wonderful “Restart to complete the installation.” message appear. As soon as your browser starts you can check the “Add-ons Manager Window” and see the version compatibility numbers for the extension. Looking very very nice! And just like that your extension should be up and running without any problems. 
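    For reference, the maxVersion value being edited sits inside the extension's em:targetApplication block in install.rdf; a representative fragment is shown below. The GUID identifies Firefox itself, and the minVersion here is only an example; the bump described above touches nothing but the em:maxVersion line:

      <em:targetApplication>
        <Description>
          <em:id>{ec8030f7-c20a-464f-9b0e-13a3a9e97384}</em:id>  <!-- Firefox -->
          <em:minVersion>3.0</em:minVersion>
          <em:maxVersion>3.7.*</em:maxVersion>
        </Description>
      </em:targetApplication>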
Conclusion If you are looking to try something new, gain some geek cred, or just want to keep your Firefox install as close to the original condition as possible, this method should get those extensions working nicely for you again.

    Read the article

  • Posting from ASP.NET WebForms page to another URL

    - by hajan
    Few days ago I had a case when I needed to make FORM POST from my ASP.NET WebForms page to an external site URL. More specifically, I was working on implementing Simple Payment System (like Amazon, PayPal, MoneyBookers). The operator asks to make FORM POST request to a given URL in their website, sending parameters together with the post which are computed on my application level (access keys, secret keys, signature, return-URL… etc). So, since we are not allowed nesting another form inside the <form runat=”server”> … </form>, which is required because other controls in my ASPX code work on server-side, I thought to inject the HTML and create FORM with method=”POST”. After making some proof of concept and testing some scenarios, I’ve concluded that I can do this very fast in two ways: Using jQuery to create form on fly with the needed parameters and make submit() Using HttpContext.Current.Response.Write to write the form on server-side (code-behind) and embed JavaScript code that will do the post Both ways seemed fine. 1. Using jQuery to create FORM html code and Submit it. Let’s say we have ‘PAY NOW’ button in our ASPX code: <asp:Button ID="btnPayNow" runat="server" Text="Pay Now" /> Now, if we want to make this button submit a FORM using POST method to another website, the jQuery way should be as follows: <script src="http://ajax.aspnetcdn.com/ajax/jquery/jquery-1.5.1.js" type="text/javascript"></script> <script type="text/javascript">     $(function () {         $("#btnPayNow").click(function (event) {             event.preventDefault();             //construct htmlForm string             var htmlForm = "<form id='myform' method='POST' action='http://www.microsoft.com'>" +                 "<input type='hidden' id='name' value='hajan' />" +             "</form>";             //Submit the form             $(htmlForm).appendTo("body").submit();         });     }); </script> Yes, as you see, the code fires on btnPayNow click. It removes the default button behavior, then creates htmlForm string. After that using jQuery we append the form to the body and submit it. Inside the form, you can see I have set the htttp://www.microsoft.com URL, so after clicking the button you should be automatically redirected to the Microsoft website (just for test, of course for Payment I’m using Operator's URL). 2. Using HttpContext.Current.Response.Write to write the form on server-side (code-behind) and embed JavaScript code that will do the post The C# code behind should be something like this: public void btnPayNow_Click(object sender, EventArgs e) {     string Url = "http://www.microsoft.com";     string formId = "myForm1";     StringBuilder htmlForm = new StringBuilder();     htmlForm.AppendLine("<html>");     htmlForm.AppendLine(String.Format("<body onload='document.forms[\"{0}\"].submit()'>",formId));     htmlForm.AppendLine(String.Format("<form id='{0}' method='POST' action='{1}'>", formId, Url));     htmlForm.AppendLine("<input type='hidden' id='name' value='hajan' />");     htmlForm.AppendLine("</form>");     htmlForm.AppendLine("</body>");     htmlForm.AppendLine("</html>");     HttpContext.Current.Response.Clear();     HttpContext.Current.Response.Write(htmlForm.ToString());     HttpContext.Current.Response.End();             } So, with this code we create htmlForm string using StringBuilder class and then just write the html to the page using HttpContext.Current.Response.Write. 
The interesting part here is that we submit the form using JavaScript: document.forms["myForm1"].submit(). This code runs on the body load event, which means that once the body is loaded the form is automatically submitted. Note: In order to test both solutions, create two applications on your web server and post the form from the first to the second website, then read the values in the second website using Request.Form[“input-field-id”]. I hope this was a useful post for you. Regards, Hajan
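One last practical note on the receiving side: here is a rough sketch of what the second test site's page-behind might look like. The page and class names are invented for illustration, and the only value assumed is the "name" field used in the samples above. A small caveat: browsers post form fields by their name attribute, not their id, so the hidden inputs in the samples above also need a name='name' attribute for Request.Form["name"] to see the value.

// Code-behind for a hypothetical Receiver.aspx page in the second test site.
using System;
using System.Web.UI;

public partial class Receiver : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Request.Form is keyed by the name attribute of the posted fields.
        string postedName = Request.Form["name"];

        if (!String.IsNullOrEmpty(postedName))
        {
            Response.Write("Received value: " + Server.HtmlEncode(postedName));
        }
        else
        {
            Response.Write("No 'name' value was posted.");
        }
    }
}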

    Read the article

  • Change the User Interface Language in Vista or Windows 7

    - by Matthew Guay
    Would you like to change the user interface language in any edition of Windows 7 or Vista on your computer?  Here’s a free app that can help you do this quickly and easily. If your native language is not the one most spoken in your area, you’ve likely purchased a PC with Windows preinstalled with a language that is difficult or impossible for you to use.  Windows 7 and Vista Ultimate include the ability to install multiple user interface languages and switch between them. However, all other editions are stuck with the language they shipped with.  With the free Vistalizator app, you can add several different interface languages to any edition of Vista or Windows 7 and easily switch between them. Note:  In this test, we used an US English copy of both Windows 7 Home Premium and Windows Vista Home Premium, and it works the same on any edition. The built-in language switching in the Ultimate Editions lets you set a user interface language for each user account, but this will only switch it for all users.  Add a User Interface Language to Windows To add an interface language to any edition of Windows 7 and Vista, first download Vistalizator (link below).  Then, from the same page, download the language pack of your choice.  The language packs are specific for each service pack of Windows, so make sure to choose the correct version and service pack you have installed. Once the downloads are finished, launch the Vistalizator program. You do not need to install it; simply run it and you’re ready to go.  Click the Add languages button to add a language to Windows. Select the user interface language pack you downloaded, and click Open. Depending on the language you selected, it may not automatically update with Windows Update when a service pack is released.  If so, you will have to remove the language pack and reinstall the new one for that service pack at that time.  Click Ok to continue. Make sure you’ve selected the correct language, and click Install language. Vistalizator will extract and install the language pack.  This took around 5 to 10 minutes in our test. Once the language pack is installed, click Yes to make it the default display language. Now, you have two languages installed in Windows.  You may be prompted to check for updates to the language pack; if so, click Update languages and Vistalizator will automatically check for and install any updates. When finished, exit Vistalizator to finish switching the language.  Click Yes to automatically reboot and apply the changes. When you computer reboots, it will show your new language, which in our test is Thai.  Here’s our Windows 7 Home Premium machine with the Thai language pack installed and running. You can even add a right to left language, such as Arabic, to Windows.  Simply repeat the steps to add another language pack.    Vistalizator was originally designed for Windows Vista, and works great with Windows 7 too.  The language packs for Vista are larger downloads than their Windows 7 counterparts.  Here’s our Vista Home Premium in English… And here’s how it looks after installing the Simplified Chinese language pack with Vistalizator. Revert to Your Original Language If you wish to return to the language that your computer shipped with, or want to switch to another language you’ve installed, run Vistalizator again.  Select the language you wish to use, and click Change language.   When you close Vistalizator, you will again be asked to reboot.  Once you’ve rebooted, you’ll see your new (or original) language ready to use.  
Here’s our Windows 7 Home Premium desktop, back in its original English interface. Conclusion This is a great way to change your computer’s language into your own native language, and is especially useful for expatriates around the world. Also, if you’d like to simply change or add an input language instead of changing the language throughout your computer, check out our tutorial on How to Add Keyboard Languages to XP, Vista, and Windows 7. Download Vistalizator

    Read the article

  • AWS: setting up auto-scale for EC2 instances

    - by Elton Stoneman
Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/16/aws-setting-up-auto-scale-for-ec2-instances.aspx With Amazon Web Services, there’s no direct equivalent to Azure Worker Roles – no Elastic Beanstalk-style application for .NET background workers. But you can get the auto-scale part by configuring an auto-scaling group for your EC2 instance. This is a step-by-step guide that shows you how to create the auto-scaling configuration, which for EC2 you need to do with the command line, and then link your scaling policies to CloudWatch alarms in the Web console. I’m using queue size as my metric for CloudWatch, which is a good fit if your background workers are pulling messages from a queue and processing them. If the queue is getting too big, the “high” alarm will fire and spin up a new instance to share the workload. If the queue is draining down, the “low” alarm will fire and shut down one of the instances. To start with, you need to manually set up your app in an EC2 VM; for a background worker, that would mean hosting your code in a Windows Service (I always use Topshelf). If you’re dual-running Azure and AWS, then you can isolate your logic in one library, with a generic entry point that has Start() and Stop() functions, so your Worker Role and Windows Service are essentially using the same code. When you have your instance set up with the Windows Service running automatically, and you’ve tested it starts up and works properly from a reboot, shut the machine down and take an image of the VM, using Create Image (EBS AMI) from the Web Console: When that completes, you’ll have your own AMI which you can use to spin up new instances, and you’re ready to create your auto-scaling group. You need to dip into the command-line tools for this, so follow this guide to set up the AWS autoscale command line tool. Now we’re ready to go. 1. Create a launch configuration This launch configuration tells AWS what to do when a new instance needs to be spun up. You create it with the as-create-launch-config command, which looks like this: as-create-launch-config sc-xyz-launcher # name of the launch config --image-id ami-7b9e9f12 # id of the AMI you extracted from your VM --region eu-west-1 # which region the new instance gets created in --instance-type t1.micro # size of the instance to create --group quicklaunch-1 # security group for the new instance 2. Create an auto-scaling group The auto-scaling group links to the launch config, and defines the overall configuration of the collection of instances: as-create-auto-scaling-group sc-xyz-asg # auto-scaling group name --region eu-west-1 # region to create in --launch-configuration sc-xyz-launcher # name of the launch config to invoke for new instances --min-size 1 # minimum number of nodes in the group --max-size 5 # maximum number of nodes in the group --default-cooldown 300 # period to wait (in seconds) after each scaling event, before checking if another scaling event is required --availability-zones eu-west-1a eu-west-1b eu-west-1c # which availability zones you want your instances to be allocated in – multiple entries means EC2 will use any of them 3. Create a scale-up policy The policy dictates what will happen in response to a scaling event being triggered from a “high” alarm being breached.
It links to the auto-scaling group; this sample results in one additional node being spun up: as-put-scaling-policy scale-up-policy # policy name -g sc-psod-woker-asg # auto-scaling group the policy works with --adjustment 1 # size of the adjustment --region eu-west-1 # region --type ChangeInCapacity # type of adjustment, this specifies a fixed number of nodes, but you can use PercentChangeInCapacity to make an adjustment relative to the current number of nodes, e.g. increasing by 50% 4. Create a scale-down policy The policy dictates what will happen in response to a scaling event being triggered from a “low” alarm being breached. It links to the auto-scaling group; this sample results in one node from the group being taken offline: as-put-scaling-policy scale-down-policy -g sc-psod-woker-asg "--adjustment=-1" # in Windows, use double-quotes to surround a negative adjustment value --type ChangeInCapacity --region eu-west-1 5. Create a “high” CloudWatch alarm We’re done with the command line now. In the Web Console, open up the CloudWatch view and create a new alarm. This alarm will monitor your metrics and invoke the scale-up policy from your auto-scaling group when the group is working too hard. Configure your metric – this example will fire the alarm if there are more than 10 messages in my queue for over a minute: Then link the alarm to the scale-up policy in your group: 6. Create a “low” CloudWatch alarm The opposite of step 5, this alarm will trigger when the instances in your group don’t have enough work to do (e.g. fewer than 2 messages in the queue for 1 minute), and will invoke the scale-down policy. And that’s it. You don’t need your original VM as the auto-scale group has a minimum number of nodes connected. You can test out the scaling by flexing your CloudWatch metric – in this example, filling up a queue from a stub publisher – and watching AWS create new nodes as required, then stopping the publisher and watching AWS kill off the spare nodes.
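As a footnote on the worker itself: the post mentions wrapping the background code in a Windows Service with Topshelf, exposing generic Start() and Stop() functions so the same library can also run in an Azure Worker Role. A minimal sketch of that host, with made-up class and service names, might look like this:

using Topshelf;

// Shared entry point: the same class can be driven by an Azure Worker Role
// or by this Windows Service host, as described in the post.
public class QueueWorker
{
    public void Start()
    {
        // start polling the queue and processing messages
    }

    public void Stop()
    {
        // stop polling and let in-flight work finish
    }
}

public class Program
{
    public static void Main()
    {
        HostFactory.Run(host =>
        {
            host.Service<QueueWorker>(svc =>
            {
                svc.ConstructUsing(name => new QueueWorker());
                svc.WhenStarted(worker => worker.Start());
                svc.WhenStopped(worker => worker.Stop());
            });

            host.RunAsLocalSystem();
            host.StartAutomatically();           // so the service comes back after an instance reboot
            host.SetServiceName("ScXyzWorker");  // hypothetical names, pick your own
            host.SetDisplayName("SC XYZ Background Worker");
        });
    }
}

Once compiled, Topshelf lets you register it from an elevated prompt with something like ScXyzWorker.exe install, which covers the "Windows Service running automatically" step before you take the AMI.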

    Read the article

  • Independence Day for Software Components &ndash; Loosening Coupling by Reducing Connascence

    - by Brian Schroer
    Today is Independence Day in the USA, which got me thinking about loosely-coupled “independent” software components. I was reminded of a video I bookmarked quite a while ago of Jim Weirich’s “Grand Unified Theory of Software Design” talk at MountainWest RubyConf 2009. I finally watched that video this morning. I highly recommend it. In the video, Jim talks about software connascence. The dictionary definition of connascence (con-NAY-sense) is: 1. The common birth of two or more at the same time 2. That which is born or produced with another. 3. The act of growing together. The brief Wikipedia page about Connascent Software Components says that: Two software components are connascent if a change in one would require the other to be modified in order to maintain the overall correctness of the system. Connascence is a way to characterize and reason about certain types of complexity in software systems. The term was introduced to the software world in Meilir Page-Jones’ 1996 book “What Every Programmer Should Know About Object-Oriented Design”. The middle third of that book is the author’s proposed graphical notation for describing OO designs. UML became the standard about a year later, so a revised version of the book was published in 1999 as “Fundamentals of Object-Oriented Design in UML”. Weirich says that the third part of the book, in which Page-Jones introduces the concept of connascence “is worth the price of the entire book”. (The price of the entire book, by the way, is not much – I just bought a used copy on Amazon for $1.36, so that was a pretty low-risk investment. I’m looking forward to getting the book and learning about connascence from the original source.) Meanwhile, here’s my summary of Weirich’s summary of Page-Jones writings about connascence: The stronger the form of connascence, the more difficult and costly it is to change the elements in the relationship. Some of the connascence types, ordered from weak to strong are: Connascence of Name Connascence of name is when multiple components must agree on the name of an entity. If you change the name of a method or property, then you need to change all references to that method or property. Duh. Connascence of name is unavoidable, assuming your objects are actually used. My main takeaway about connascence of name is that it emphasizes the importance of giving things good names so you don’t need to go changing them later. Connascence of Type Connascence of type is when multiple components must agree on the type of an entity. I assume this is more of a problem for languages without compilers (especially when used in apps without tests). I know it’s an issue with evil JavaScript type coercion. Connascence of Meaning Connascence of meaning is when multiple components must agree on the meaning of particular values, e.g that “1” means normal customer and “2” means preferred customer. The solution to this is to use constants or enums instead of “magic” strings or numbers, which reduces the coupling by changing the connascence form from “meaning” to “name”. Connascence of Position Connascence of positions is when multiple components must agree on the order of values. This refers to methods with multiple parameters, e.g.: eMailer.Send("[email protected]", "[email protected]", "Your order is complete", "Order completion notification"); The more parameters there are, the stronger the connascence of position is between the component and its callers. 
In the example above, it’s not immediately clear when reading the code which email addresses are sender and receiver, and which of the final two strings are subject vs. body. Connascence of position could be improved to connascence of type by replacing the parameter list with a struct or class. This “introduce parameter object” refactoring might be overkill for a method with 2 parameters, but would definitely be an improvement for a method with 10 parameters. This points out two “rules” of connascence:  The Rule of Degree: The acceptability of connascence is related to the degree of its occurrence. The Rule of Locality: Stronger forms of connascence are more acceptable if the elements involved are closely related. For example, positional arguments in private methods are less problematic than in public methods. Connascence of Algorithm Connascence of algorithm is when multiple components must agree on a particular algorithm. Be DRY – Don’t Repeat Yourself. If you have “cloned” code in multiple locations, refactor it into a common function.   Those are the “static” forms of connascence. There are also “dynamic” forms, including… Connascence of Execution Connascence of execution is when the order of execution of multiple components is important. Consumers of your class shouldn’t have to know that they have to call an .Initialize method before it’s safe to call a .DoSomething method. Connascence of Timing Connascence of timing is when the timing of the execution of multiple components is important. I’ll have to read up on this one when I get the book, but assume it’s largely about threading. Connascence of Identity Connascence of identity is when multiple components must reference the entity. The example Weirich gives is when you have two instances of the “Bob” Employee class and you call the .RaiseSalary method on one and then the .Pay method on the other does the payment use the updated salary?   Again, this is my summary of a summary, so please be forgiving if I misunderstood anything. Once I get/read the book, I’ll make corrections if necessary and share any other useful information I might learn.   See Also: Gregory Brown: Ruby Best Practices Issue #24: Connascence as a Software Design Metric (That link is failing at the time I write this, so I had to go to the Google cache of the page.)
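To make the "introduce parameter object" refactoring concrete, here is a small C# sketch based on the eMailer.Send example above. The EmailMessage class and its property names are invented for illustration; the point is that callers now depend on property names rather than on argument positions, trading connascence of position for the weaker connascence of name.

// Before: connascence of position - callers have to remember the argument order,
// and it isn't obvious which string is the subject and which is the body.
// eMailer.Send("sender address", "receiver address",
//              "Your order is complete", "Order completion notification");

// After: a parameter object turns positions into names.
public class EmailMessage
{
    public string From { get; set; }
    public string To { get; set; }
    public string Subject { get; set; }
    public string Body { get; set; }
}

public class Emailer
{
    public void Send(EmailMessage message)
    {
        // ... hand the message off to your mail infrastructure ...
    }
}

public class OrderNotifier
{
    private readonly Emailer eMailer = new Emailer();

    public void NotifyOrderComplete()
    {
        eMailer.Send(new EmailMessage
        {
            From = "sender address",
            To = "receiver address",
            Subject = "Order completion notification",
            Body = "Your order is complete"
        });
    }
}

Whether a wrapper object is worth it for two or three parameters is a judgment call (the Rule of Degree again), but for the ten-parameter case it almost certainly is.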

    Read the article

  • HTG Explains: Are You Using IPv6 Yet? Should You Even Care?

    - by Chris Hoffman
    IPv6 is extremely important for the long-term health of the Internet. But is your Internet service provider providing IPv6 connectivity yet? Does your home network support it? Should you even care if you’re using IPv6 yet? Switching from IPv4 to IPv6 will give the Internet a much larger pool of IP addresses. It should also allow every device to have its own public IP address, rather than be hidden behind a NAT router. IPv6 is Important Long-Term IPv6 is very important for the long-term health of the Internet. There are only about 3.7 billion public IPv4 addresses. This may sound like a lot, but it isn’t even one IP address for each person on the planet. Considering people have more and more Internet-connected devices — everything from light bulbs to thermostats are starting to become network-connected — the lack of IP addresses is already proving to be a serious problem. This may not affect those of us in well-off developed countries just yet, but developing countries are already running out of IPv4 addresses. So, if you work at an Internet service provider, manage Internet-connected servers, or develop software or hardware — yes, you should care about IPv6! You should be deploying it and ensuring your software and hardware works properly with it. It’s important to prepare for the future before the current IPv4 situation becomes completely unworkable. But, if you’re just typical user or even a typical geek with a home Internet connection and a home network, should you really care about your home network just yet? Probably not. What You Need to Use IPv6 To use IPv6, you’ll need three things: An IPv6-Compatible Operating System: Your operating system’s software must be capable of using IPv6. All modern desktop operating systems should be compatible — Windows Vista and newer versions of Windows, as well as modern versions of Mac OS X and Linux. Windows XP doesn’t have IPv6 support installed by default, but you shouldn’t be using Windows XP anymore, anyway. A Router With IPv6 Support: Many — maybe even most — consumer routers in the wild don’t support IPv6. Check your router’s specifications details to see if it supports IPv6 if you’re curious. If you’re going to buy a new router, you’ll probably want to get one with IPv6 support to future-proof yourself. If you don’t have an IPv6-enabled router yet, you don’t need to buy a new one just to get it. An ISP With IPv6 Enabled:  Your Internet service provider must also have IPv6 set up on their end. Even if you have modern software and hardware on your end, your ISP has to provide an IPv6 connection for you to use it. IPv6 is rolling out steadily, but slowly — there’s a good chance your ISP hasn’t enabled it for you yet. How to Tell If You’re Using IPv6 The easiest way to tell if you have IPv6 connectivity is to visit a website like testmyipv6.com. This website allows you to connect to it in different ways — click the links near the top to see if you can connect to the website via different types of connections. If you can’t connect via IPv6, it’s either because your operating system is too old (unlikely), your router doesn’t support IPv6 (very possible), or because your ISP hasn’t enabled it for you yet (very likely). Now What? If you can connect to the test website above via IPv6, congratulations! Everything is working as it should. Your ISP is doing a good job of rolling out IPv6 rather than dragging its feet. There’s a good chance you won’t have IPv6 working properly, however. 
So what should you do about this — should you head to Amazon and buy a new IPv6-enabled router or switch to an ISP that offers IPv6? Should you use a “tunnel broker,” as the test site recommends, to tunnel into IPv6 via your IPv4 connection? Well, probably not. Typical users shouldn’t have to worry about this yet. Connecting to the Internet via IPv6 shouldn’t be perceptibly faster, for example. It’s important for operating system vendors, hardware companies, and Internet service providers to prepare for the future and get IPv6 working, but you don’t need to worry about this on your home network. IPv6 is all about future-proofing. You shouldn’t be racing to implement this at home yet or worrying about it too much — but, when you need to buy a new router, try to buy one that supports IPv6. Image Credit: Adobe of Chaos on Flickr, hisperati on Flickr, Vox Efx on Flickr     
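One last practical aside: if you prefer to check from code rather than from a test website, a quick C# sketch like the one below shows whether the local network stack supports IPv6 and whether a host name resolves to any IPv6 addresses. The host name is just an example (any site that publishes an AAAA record will do), and resolving an address only proves DNS is working; an actual connection over IPv6 is still the real end-to-end test of your router and ISP.

using System;
using System.Linq;
using System.Net;
using System.Net.Sockets;

class IPv6Check
{
    static void Main()
    {
        // Does the local operating system's network stack support IPv6 at all?
        Console.WriteLine("OS supports IPv6: " + Socket.OSSupportsIPv6);

        // Example host only - swap in any name that publishes an AAAA record.
        string host = "ipv6.google.com";

        var v6Addresses = Dns.GetHostAddresses(host)
            .Where(a => a.AddressFamily == AddressFamily.InterNetworkV6)
            .ToArray();

        if (v6Addresses.Length == 0)
        {
            Console.WriteLine("No IPv6 addresses resolved for " + host);
            return;
        }

        foreach (var address in v6Addresses)
        {
            Console.WriteLine("IPv6 address for " + host + ": " + address);
        }

        // Resolving an AAAA record only proves DNS is working; opening a
        // connection over one of these addresses is the real end-to-end test.
    }
}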

    Read the article

  • SSH error: Permission denied, please try again

    - by Kamal
    I am new to ubuntu. Hence please forgive me if the question is too simple. I have a ubuntu server setup using amazon ec2 instance. I need to connect my desktop (which is also a ubuntu machine) to the ubuntu server using SSH. I have installed open-ssh in ubuntu server. I need all systems of my network to connect the ubuntu server using SSH (no need to connect through pem or pub keys). Hence opened SSH port 22 for my static IP in security groups (AWS). My SSHD-CONFIG file is: # Package generated configuration file # See the sshd_config(5) manpage for details # What ports, IPs and protocols we listen for Port 22 # Use these options to restrict which interfaces/protocols sshd will bind to #ListenAddress :: #ListenAddress 0.0.0.0 Protocol 2 # HostKeys for protocol version 2 HostKey /etc/ssh/ssh_host_rsa_key HostKey /etc/ssh/ssh_host_dsa_key HostKey /etc/ssh/ssh_host_ecdsa_key #Privilege Separation is turned on for security UsePrivilegeSeparation yes # Lifetime and size of ephemeral version 1 server key KeyRegenerationInterval 3600 ServerKeyBits 768 # Logging SyslogFacility AUTH LogLevel INFO # Authentication: LoginGraceTime 120 PermitRootLogin yes StrictModes yes RSAAuthentication yes PubkeyAuthentication yes #AuthorizedKeysFile %h/.ssh/authorized_keys # Don't read the user's ~/.rhosts and ~/.shosts files IgnoreRhosts yes # For this to work you will also need host keys in /etc/ssh_known_hosts RhostsRSAAuthentication no # similar for protocol version 2 HostbasedAuthentication no # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication #IgnoreUserKnownHosts yes # To enable empty passwords, change to yes (NOT RECOMMENDED) PermitEmptyPasswords no # Change to yes to enable challenge-response passwords (beware issues with # some PAM modules and threads) ChallengeResponseAuthentication no # Change to no to disable tunnelled clear text passwords #PasswordAuthentication yes # Kerberos options #KerberosAuthentication no #KerberosGetAFSToken no #KerberosOrLocalPasswd yes #KerberosTicketCleanup yes # GSSAPI options #GSSAPIAuthentication no #GSSAPICleanupCredentials yes X11Forwarding yes X11DisplayOffset 10 PrintMotd no PrintLastLog yes TCPKeepAlive yes #UseLogin no #MaxStartups 10:30:60 #Banner /etc/issue.net # Allow client to pass locale environment variables AcceptEnv LANG LC_* Subsystem sftp /usr/lib/openssh/sftp-server # Set this to 'yes' to enable PAM authentication, account processing, # and session processing. If this is enabled, PAM authentication will # be allowed through the ChallengeResponseAuthentication and # PasswordAuthentication. Depending on your PAM configuration, # PAM authentication via ChallengeResponseAuthentication may bypass # the setting of "PermitRootLogin without-password". # If you just want the PAM account and session checks to run without # PAM authentication, then enable this but set PasswordAuthentication # and ChallengeResponseAuthentication to 'no'. UsePAM yes Through webmin (Command shell), I have created a new user named 'senthil' and added this new user to 'sudo' group. sudo adduser -y senthil sudo adduser senthil sudo I tried to login using this new user 'senthil' in 'webmin'. I was able to login successfully. When I tried to connect ubuntu server from my terminal through SSH, ssh senthil@SERVER_IP It asked me to enter password. After the password entry, it displayed: Permission denied, please try again. On some research I realized that, I need to monitor my server's auth log for this. 
I got the following error in my auth log (/var/log/auth.log) Jul 2 09:38:07 ip-192-xx-xx-xxx sshd[3037]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=MY_CLIENT_IP user=senthil Jul 2 09:38:09 ip-192-xx-xx-xxx sshd[3037]: Failed password for senthil from MY_CLIENT_IP port 39116 ssh2 When I tried to debug using: ssh -v senthil@SERVER_IP OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: Applying options for * debug1: Connecting to SERVER_IP [SERVER_IP] port 22. debug1: Connection established. debug1: identity file {MY-WORKSPACE}/.ssh/id_rsa type 1 debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048 debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048 debug1: identity file {MY-WORKSPACE}/.ssh/id_rsa-cert type -1 debug1: identity file {MY-WORKSPACE}/.ssh/id_dsa type -1 debug1: identity file {MY-WORKSPACE}/.ssh/id_dsa-cert type -1 debug1: identity file {MY-WORKSPACE}/.ssh/id_ecdsa type -1 debug1: identity file {MY-WORKSPACE}/.ssh/id_ecdsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.8p1 Debian-7ubuntu1 debug1: match: OpenSSH_5.8p1 Debian-7ubuntu1 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: sending SSH2_MSG_KEX_ECDH_INIT debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug1: Server host key: ECDSA {SERVER_HOST_KEY} debug1: Host 'SERVER_IP' is known and matches the ECDSA host key. debug1: Found key in {MY-WORKSPACE}/.ssh/known_hosts:1 debug1: ssh_ecdsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: password debug1: Next authentication method: password senthil@SERVER_IP's password: debug1: Authentications that can continue: password Permission denied, please try again. senthil@SERVER_IP's password: For password, I have entered the same value which I normally use for 'ubuntu' user. Can anyone please guide me where the issue is and suggest some solution for this issue?

    Read the article

  • User Experience Fundamentals

    - by ultan o'broin
    Understanding what user experience means in the modern work environment is central to building great-looking usable applications on the desktop or mobile devices. What better place to start a series of blog posts on such Applications User Experience team enablement for customers and partners than by sharing what the term really means, writes team member Karen Scipi. Applications UX have gained valuable insights into developing a user experience that reflects the experience of today’s worker. We have observed real workers performing real tasks in real work environments, and we have developed a set of new standards of application design that have been scientifically proven to be beneficial to enable today’s workers. We share such expertise to enable our customers and partners to benefit from our insights and to further their return on investment when building Oracle applications. So, What is User Experience? ?The user interface (UI) is about the on-screen user context provided by the layout of widgets (such as icons, fields, and buttons and more) and the visual impact of colors, typographic choices, and so on. The UI comprises the “look and feel” of the application that users interact with, and reflects, in essence, the most immediate aspects of usability we can now all relate to.  User experience, on the other hand, is about understanding the whole context of the world of work, how workers go about completing tasks, crossing all sorts of boundaries along the way. It is a study of how business processes and workers goals coincide, how users work with technology or other tools to get their jobs done, their interactions with other users, and their response to the technical, physical, and cultural environment around them. User experience is all about how users work—their work environments, office layouts, desk tools, types of devices, their working day, and more. Even their job aids, such as sticky notes, offer insight for UX innovation. User experience matters because businesses needs to be efficient, work must be productive, and users now demand to be satisfied by the applications they work with. In simple terms, tasks finished quickly and accurately for a business evokes organization and worker satisfaction, which in turn makes workers feel good and more than willing to use the application again tomorrow. Design Principles for the Enterprise Worker The consumerization of information technology has raised the bar for enterprise applications. Applications must be consistent, simple, intuitive, but above all contextual, reflecting how and when workers work, in the office or on the go. For example, the Google search experience with its type-ahead keyword-prompting feature is how workers expect to be able to discover enterprise information, too. Type-ahead in PeopleSoft 9.1 To build software that enables workers to be productive, our design principles meet modern work requirements about consistency, with well-organized, context-driven information, geared for a working world of discovery and collaboration. Our applications must also behave in a simple, web-like way just like Amazon, Google, and Apple products that workers use at home or on the go. Our user experience must also reflect workers’ needs for flexibility and well-loved enterprise practices such as using popular desktop tools like Microsoft Excel or Outlook as required. Building User Experience Productively The building blocks of Oracle Fusion Applications are the user experience design patterns. 
Based on the Oracle Fusion Middleware technology used to build Oracle Fusion Applications, the patterns are reusable solutions to common usability challenges that ADF developers typically face as they build applications, extensions, and integrations. Developers use the patterns as part of their Oracle toolkits to realize great usability consistently and in a productive way. Our design pattern creation process is informed by user experience research and science, an understanding of our technology’s capabilities, the demands for simplification and intuitiveness from users, and the best of Oracle’s acquisitions strategy (an injection of smart people and smart innovation). The patterns are supported by usage guidelines and are tested in our labs and assembled into a library of proven resources we used to build own Oracle Fusion Applications and other Oracle applications user experiences. The design patterns library is now available to the ADF community and to our partners and customers, for free. Developers with ADF skills and other technology skills can now offer more than just coding and functionality and still use the best in enterprise methodologies to ensure that a great user experience is easily applied, scaled, and maintained, whether it be for SaaS or on-premise deployments for Oracle Fusion Applications, for applications coexistence, or for partner integrations scenarios.  Oracle partners and customers already using our design patterns to build solutions and win business in smart and productive ways are now sharing their experiences and insights on pattern use to benefit your entire business. Applications UX is going global with the message and the means. Our hands-on user experience enablement through ADF  is expanding. So, stay tuned to Misha Vaughan's Voice of User Experience (VOX) blog and follow along on Twitter at @usableapps for news of outreach events and other learning opportunities. Interested in Learning More? Oracle Fusion Applications User Experience Patterns and Guidelines Library Shout-outs for Oracle UX Design Patterns Oracle Fusion Applications User Experience Design Patterns: Productivity Realized

    Read the article

< Previous Page | 107 108 109 110 111 112 113 114 115 116  | Next Page >