Search Results

Search found 2853 results on 115 pages for 'amazon cloudfront'.


  • MySQL query paralyzes site

    - by nute
    Once in a while, at random intervals, our website gets completely paralyzed. Looking at SHOW FULL PROCESSLIST;, I've noticed that when this happens there is a specific query that sits in "Copying to tmp table" for a very long time (sometimes 350 seconds), and almost all the other queries are "Locked". The part I don't understand is that 90% of the time this query runs fine: I see it going through in the process list and it finishes quickly. It is issued by an AJAX call on our homepage to display product recommendations based on your browsing history (a la Amazon). Just sometimes, randomly (but too often), it gets stuck at "Copying to tmp table". Here is a caught instance of the query that had been running for 109 seconds when I looked:

        SELECT DISTINCT product_product.id, product_product.name, product_product.retailprice,
               product_product.imageurl, product_product.thumbnailurl, product_product.msrp
        FROM product_product, product_xref, product_viewhistory
        WHERE (
                (product_viewhistory.productId = product_xref.product_id_1 AND product_xref.product_id_2 = product_product.id)
             OR (product_viewhistory.productId = product_xref.product_id_2 AND product_xref.product_id_1 = product_product.id)
              )
          AND product_product.outofstock = 'N'
          AND product_viewhistory.cookieId = '188af1efad392c2adf82'
          AND product_viewhistory.productId IN (24976, 25873, 26067, 26073, 44949, 16209, 70528, 69784, 75171, 75172)
        ORDER BY product_xref.hits DESC
        LIMIT 10

    Of course the cookieId and the list of productId values change dynamically with each request. I use PHP with PDO.
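    One common line of attack for a query shaped like this, sketched below with PHP/PDO: the OR across the two product_xref directions tends to force MySQL to materialise a temporary table, and splitting it into a UNION of two straight joins often lets each half use an index on product_xref instead. The table and column names come from the question, but the rewrite is illustrative only and is not guaranteed to cure the intermittent stall on its own.

        <?php
        // Sketch: the same recommendation lookup expressed as a UNION of two indexed joins.
        // Connection details and the hard-coded sample values are placeholders.
        $pdo = new PDO('mysql:host=localhost;dbname=shop;charset=utf8', 'user', 'pass');

        $cookieId   = '188af1efad392c2adf82';
        $productIds = array(24976, 25873, 26067, 26073, 44949);
        $in = implode(',', array_fill(0, count($productIds), '?'));

        $sql = "
            (SELECT DISTINCT p.id, p.name, p.retailprice, p.imageurl, p.thumbnailurl, p.msrp, x.hits
               FROM product_viewhistory v
               JOIN product_xref x    ON x.product_id_1 = v.productId
               JOIN product_product p ON p.id = x.product_id_2
              WHERE p.outofstock = 'N' AND v.cookieId = ? AND v.productId IN ($in))
            UNION
            (SELECT DISTINCT p.id, p.name, p.retailprice, p.imageurl, p.thumbnailurl, p.msrp, x.hits
               FROM product_viewhistory v
               JOIN product_xref x    ON x.product_id_2 = v.productId
               JOIN product_product p ON p.id = x.product_id_1
              WHERE p.outofstock = 'N' AND v.cookieId = ? AND v.productId IN ($in))
            ORDER BY hits DESC
            LIMIT 10";

        $stmt = $pdo->prepare($sql);
        $stmt->execute(array_merge(array($cookieId), $productIds, array($cookieId), $productIds));
        $recommendations = $stmt->fetchAll(PDO::FETCH_ASSOC);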

  • Insert only adds up to 1000 records and ignores all records after that.

    - by user560559
    I have a large database where the client stores personal messages and fires email notifications [if allowed by the users]. Certain users have the option of sending messages to their entire network of friends. Some users have over 5000 friends in their network, so if they select the whole network they'll be sending messages to over 5000 friends, and the system will store all the messages in a table. The problem is that it does not insert more than 1000 records and ignores all inserts after the first 1000. I have increased the packet size and bulk_insert_buffer_size, but still no luck. Since the system stores some of the info in another table for reports, every insert has to return its new message id; because of this I can not use the multi-row form "INSERT INTO table (column1, column2) VALUES (value1, value2), (value1, value2), ... etc.". The table engine is InnoDB, the MySQL version is 5.1.3, and it is hosted on Amazon Web Services. All I want is to fix this issue of inserting more than 1000 records at a time. As mentioned earlier, it works fine but only up to 1000 records and simply ignores all the records after that. I'm using a PHP foreach(){} loop to insert a message for each friend and, if an email address is available, send a notification to the user. This foreach(){} also inserts the same record into another table [with only 3 columns] for generating reports.
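    For what it's worth, MySQL itself has no 1000-row cap on inserts, so the cutoff is more likely in the application layer, e.g. max_execution_time cutting the foreach short or insert errors being silently ignored. Below is a minimal sketch of the same loop with PDO exceptions enabled, one reused prepared statement and a transaction; every table and column name in it is invented for illustration.

        <?php
        // Sketch only: hypothetical schema and sample data.
        set_time_limit(0);   // a 5000-iteration loop can outlive the default max_execution_time

        $pdo = new PDO('mysql:host=localhost;dbname=app;charset=utf8', 'user', 'pass');
        $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);   // surface failed inserts instead of skipping them

        $senderId = 42;
        $body     = 'Hello, network!';
        $friends  = array(array('id' => 7, 'email' => 'friend@example.org'));   // normally loaded from the friends table

        $insertMsg    = $pdo->prepare('INSERT INTO messages (sender_id, recipient_id, body) VALUES (?, ?, ?)');
        $insertReport = $pdo->prepare('INSERT INTO message_reports (message_id, recipient_id, sent_at) VALUES (?, ?, NOW())');

        $notifyQueue = array();
        $pdo->beginTransaction();
        foreach ($friends as $friend) {
            $insertMsg->execute(array($senderId, $friend['id'], $body));
            $messageId = $pdo->lastInsertId();              // still available per row, as the reports table needs it

            $insertReport->execute(array($messageId, $friend['id']));

            if (!empty($friend['email'])) {
                $notifyQueue[] = $friend['email'];          // send the emails after the commit, not inside the loop
            }
        }
        $pdo->commit();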

  • Can't store UTF-8 in RDS despite setting up new Parameter Group using Rails on Heroku

    - by Lail
    I'm setting up a new instance of a Rails(2.3.5) app on Heroku using Amazon RDS as the database. I'd like to use UTF-8 for everything. Since RDS isn't UTF-8 by default, I set up a new Parameter Group and switched the database to use that one, basically per this. Seems to have worked: SHOW VARIABLES LIKE '%character%'; character_set_client utf8 character_set_connection utf8 character_set_database utf8 character_set_filesystem binary character_set_results utf8 character_set_server utf8 character_set_system utf8 character_sets_dir /rdsdbbin/mysql-5.1.50.R3/share/mysql/charsets/ Furthermore, I've successfully setup Heroku to use the RDS database. After rake db:migrate, everything looks good: CREATE TABLE `comments` ( `id` int(11) NOT NULL AUTO_INCREMENT, `commentable_id` int(11) DEFAULT NULL, `parent_id` int(11) DEFAULT NULL, `content` text COLLATE utf8_unicode_ci, `child_count` int(11) DEFAULT '0', `created_at` datetime DEFAULT NULL, `updated_at` datetime DEFAULT NULL, PRIMARY KEY (`id`), KEY `commentable_id` (`commentable_id`), KEY `index_comments_on_community_id` (`community_id`), KEY `parent_id` (`parent_id`) ) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci; In the markup, I've included: <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> Also, I've set: production: encoding: utf8 collation: utf8_general_ci ...in the database.yml, though I'm not very confident that anything is being done to honor any of those settings in this case, as Heroku seems to be doing its own config when connecting to RDS. Now, I enter a comment through the form in the app: "Úbe® ƒåiL", but in the database I've got "Úbe® Æ’Ã¥iL" It looks fine when Rails loads it back out of the database and it is rendered to the page, so whatever it is doing one way, it's undoing the other way. If I look at the RDS database in Sequel Pro, it looks fine if I set the encoding to "UTF-8 Unicode via Latin 1". So it seems Latin-1 is sneaking in there somewhere. Somebody must have done this before, right? What am I missing?

  • Large Product catalog with statistics - alternatives to Sql Server?

    - by Eric P
    I am building the UI for a large product catalog (millions of products). I am using SQL Server, FreeText search and ASP.NET MVC. Tables are normalized and indexed. Most queries take less than a second to return. The issue is this: say the user does a search by keyword. On the search results page I need to display/query for:

    - the first 20 matching products (paged, sorted),
    - the total count of matching products for paging,
    - the list of stores of the matching products only,
    - the list of brands of the matching products only,
    - the list of colors of the matching products only.

    Each query takes about 0.5 to 1 second, so altogether it is about 5 seconds. I would like to get the whole page to load in under 1 second. There are several approaches:

    1. Optimize the queries even more. I already spent a lot of time on this one, so I am not sure it can be pushed much further.
    2. Load the products first, then load the rest of the information using AJAX. More of a workaround, and it will need a UI revision.
    3. Re-organize the data to be more report-friendly. I have already aggregated a lot of fields.

    I checked out several similar sites, for example zappos.com. Not only do they display the same information as I would like in under 1 second, they also include statistics (the number of results in each category). The following is the search for the keyword "white": http://www.zappos.com/white How do sites like Zappos and Amazon make their results, filters and stats appear almost instantly?

  • How do you program a custom WordPress plug-in?

    - by James
    I have seen several WordPress plug-ins for adding a "quote of the day" feature (or something similar) to your blog. How do you create a customized one? I'm looking for something that will pull a daily entry from a list/database of my creation. I apologize if my question is not detailed enough. Still a newbie with WordPress. PART 2: Thanks for your prompt and on-point responses. With your responses and some additional research, I'm able to fine-tune my question. What I wish to accomplish is something similar to Amazon's Deal of the Day widget. Except, in my case, it will simply be a title and a corresponding link. My presumption is that I will set up a database and (using php or something similar) have the information drawn from the database and displayed in my WP sidebar. Additionally, I forgot to mention the time element. I want the displayed info to update once a day, at or around the same time each day. Any ideas? Thanks again. I'm so glad I found stackoverflow.
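    For a rough idea of the shape such a plugin can take, the sketch below registers a sidebar widget that pulls one row from a custom table keyed on the current date, which also covers the once-a-day rollover without any cron job. The table name and columns are invented for illustration; the WordPress pieces used ($wpdb and the widget API) are standard.

        <?php
        /*
        Plugin Name: Daily Link (sketch)
        Description: Illustrative only - shows one title/link per day from a custom table.
        */

        class Daily_Link_Widget extends WP_Widget {

            public function __construct() {
                parent::__construct('daily_link_widget', 'Daily Link');
            }

            public function widget($args, $instance) {
                global $wpdb;

                // Hypothetical table: wp_daily_links(show_on DATE, title TEXT, url TEXT).
                // Keying the row on today's date makes the item roll over once a day.
                $row = $wpdb->get_row($wpdb->prepare(
                    "SELECT title, url FROM {$wpdb->prefix}daily_links WHERE show_on = %s LIMIT 1",
                    current_time('Y-m-d')
                ));

                if ($row) {
                    echo $args['before_widget'];
                    echo '<a href="' . esc_url($row->url) . '">' . esc_html($row->title) . '</a>';
                    echo $args['after_widget'];
                }
            }
        }

        add_action('widgets_init', function () {
            register_widget('Daily_Link_Widget');
        });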

  • C# and Excel best practices

    - by rlp
    I am doing a lot of MS Excel interop in C# (Visual Studio 2012) using Microsoft.Office.Interop.Excel. It requires a lot of tiresome manual code to insert Excel formulas, format text and numbers, and make graphs. I would very much appreciate any input on how to do the task better. I have been looking at Visual Studio Tools for Office, but I am uncertain about its capabilities. I understand it is required for making Excel add-ins, but does it help with Excel automation? I have desperately been trying to find information on working with Excel in Visual Studio 2012 using C#. I did find some good but short tutorials; however, I would really like a book on the subject to learn the field more in depth regarding functionality and best practices. Searching Amazon with my limited knowledge only gives me books on VSTO using older versions of Visual Studio. I would not like to use VBA. My applications use Excel mainly for visualizing data compiled from different sources. I also do data processing where Excel is not required. Furthermore, I can write C# but not VB.

  • License For (Mostly) Open Source Website / Service

    - by Ryan Sullivan
    I have an interesting setup and am not sure how to license a website. I know this is not legal advice, and I am not asking for any. There are so many different Open Source Licenses and I do not have the time to read every last one to see which best fits my situation. Really, I am looking for suggestions and a nudge in the right direction. My setup is: I give away for free version of my web service with a clean website interface. The implementation I use in the actual web site is (almost) identical to what I give away. The main service works the exactly the same way, but the website interface to manage features in the service is fairly different. Really the web interfaces have the same exact backend, and the front ends accomplish the same tasks, but the service I offer on my site is very rich and uses a good deal of javascript, where I kept the interface in the version I give away as simple and javascript-less as possible. Mostly so it is easy to understand and integrate into other sites. I am not entirely sure how I should license this. It is more like I develop an open source service but have a separate site built upon it. I like the GPLv3 but I am not sure if I can use it in this case especially since I am making some money off of google ad's on the site and plan on using amazon affiliates as well. Any help would be greatly appreciated. I do want to open it up as much as possible. But I still want to be able to continue with my own implementation. Thanks in advance for any information or help anyone can provide.

  • How to increase my "advanced" knowledge of PHP further? (quickly)

    - by Kerry
    I have been working with PHP for years and have gotten a very good grasp of the language, and I have created many advanced and not-so-advanced systems that are working very well. The problem I'm running into is that I only learn when I find a need for something that I haven't learned before. This causes me to look up solutions and other code that handles the problem, and in that way I learn about a new function or structure that I hadn't seen before. This is how I have learned many of my better techniques (such as studying classes put out by Amazon, Google or other major companies). The main problem with this is that you can't learn something if you don't know it exists. For instance, it took me several months of programming to learn about the empty() function; until then I would simply check the string length with strlen() to test for empty values. I'm now getting into building bigger and bigger systems, and I've started to read blogs like highscalability.com and to research MySQL replication and server data for scaling. I know that the structure of your code is very important to make full systems work. Reading a recent blog post about reddit's structure made me wonder whether there is some standard or "accepted system" out there. I have looked into frameworks (I've used Kohana, which I regretted, and decided that PHP frameworks were not for me) and I prefer my own library of functions to a framework. My current structure is a mix of WordPress, Kohana and my own knowledge. The ways I can see as being potentially beneficial are:

    - read blogs,
    - read tutorials,
    - work with someone else,
    - read a book.

    What would be the best way(s) to "get to the next level", the level of being a very good system developer?
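    A small aside on the empty()/strlen() example above: the two checks are not interchangeable, which is exactly the kind of detail that is easy to miss.

        <?php
        // strlen() only catches the empty string...
        $value = '0';
        var_dump(strlen($value) === 0);   // bool(false)

        // ...while empty() also treats '0', 0, null, false and unset variables as empty,
        // and it does not raise a notice for an undefined variable.
        var_dump(empty($value));          // bool(true)
        var_dump(empty($undefined));      // bool(true)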

  • Infrastructure for high transactional system (language & hosting suggestion help)

    - by RPS
    Some of our friends (university students) are trying to develop a Twitter-type application, and I want to plan for at least 1000 transactions per second (I know it's wishful thinking) for the initial launch. This involves several people connecting, getting updates, and posting (text + images) to the site. In the back end, the database will serve the data and also calculate, on the fly and in real time, rankings of what to push to each user based on a complex algorithm. Our group is familiar with Java and Tomcat/MySQL. We can also easily learn/code in PHP/MySQL. What is the best-suited platform for our purpose? Though Java seems easy for us to implement, I am afraid that hosting will be a bit difficult. I could find cloud-based PHP hosting services (like Rackspace Cloud Sites) at reasonable cost; Amazon EC2 is a bit over our heads to manage day-to-day. Also, any recommendation on hosting (PHP or Java)? We don't have millions in seed money, but about $20K to start with. Any advice on the above, or on the general approach, is much appreciated.

  • Looking for home networking hardware and software advice

    - by phobos7
    Note: I originally wrote this up in a blog post. I've removed any affiliate links that I put in my original post to ensure I don't annoy anybody. I've recently moved home and now need to go to the trouble of sorting out my home network yet again. We had Virgin broadband in Hertford, but you can't get Virgin in the street we've moved to, so I've had to go with O2 Broadband. Normally I prefer to use my own hardware, and I previously used the D-Link DIR-655 router, which was great, but in this situation I am using the O2 Wireless Box III, since I only have an old Netgear DG834PN Wireless G modem router and I'd rather be using Wireless N. Anyway, the place we have moved into has only one phone point, in the hallway; the best TV point is in one room; and the best place to put the TV and other entertainment kit is in yet another room. So, networking the house up for Internet and TV is required. The diagram below shows the things that I'll have in my home network, but there are three points where I'm not quite sure what hardware to use:

    1. A Wireless Access Point/Bridge that acts only as a wireless-to-wired bridge (not an AP) and links a Media Centre/PC and a couple of consoles to the network. I'm pretty much settled on an Acer Aspire Revo R3600 as my media PC, probably with Ubuntu or Windows and XBMC installed.
    2. A Wireless Access Point/Bridge that acts only as a wireless-to-wired bridge (not an AP) and links up a device that can decode and stream TV from a TV aerial across the network.
    3. The device that is connected to 2). At the moment I'm considering an HDHomeRun by SiliconDust.

    For the access point/bridge itself, I'm currently considering either the TP-LINK TL-WA701ND 150Mbps Wireless Lite N Access Point (very cheap at Amazon) or the Netgear 5 GHz Wireless-N HD Access Point/Bridge. I'd love to get some insight into what you would do in my situation. What Wireless Access Point/Bridge should I put at points 1) and 2)? What device should I choose for point 3) that can decode and stream a TV signal? Is the Acer Aspire Revo R3600 a good choice?

    [network diagram omitted]

    Note 2: I've also posted this question on AVForums.

  • SSL connection errors from Apache

    - by Yang
    I'm running a (self-signed) SSL cert site on Apache/2.2.14 on Ubuntu 10.04, but various browsers are giving errors on half the connection attempts. Just now saw this transient error from Chrome: "Error 126 (net::ERR_SSL_BAD_RECORD_MAC_ALERT): Unknown error." Hit refresh and the problem goes away for a while. wget too: $ wget --no-check-certificate https://dev.foo.com/deps/ --2010-09-08 19:30:26-- https://dev.foo.com/deps/ Resolving dev.foo.com... 184.72.53.220 Connecting to dev.foo.com|184.72.53.220|:443... connected. OpenSSL: error:0407006A:rsa routines:RSA_padding_check_PKCS1_type_1:block type is not 01 OpenSSL: error:04067072:rsa routines:RSA_EAY_PUBLIC_DECRYPT:padding check failed OpenSSL: error:1408D07B:SSL routines:SSL3_GET_KEY_EXCHANGE:bad signature Unable to establish SSL connection. Run it right away again and it works: $ wget --no-check-certificate https://dev.foo.com/deps/ --2010-09-08 19:30:29-- https://dev.foo.com/deps/ Resolving dev.foo.com... 184.72.53.220 Connecting to dev.foo.com|184.72.53.220|:443... connected. WARNING: cannot verify dev.foo.com's certificate, issued by `/CN=dev.foo.com': Self-signed certificate encountered. HTTP request sent, awaiting response... 200 OK Length: 3157 (3.1K) [text/html] Saving to: `index.html' 100%[======================================>] 3,157 --.-K/s in 0s 2010-09-08 19:30:29 (48.6 MB/s) - `index.html' saved [3157/3157] In my sites-enabled/default-ssl: SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key The cert: -----BEGIN CERTIFICATE----- MIIBszCCARwCCQCa0TzNwqLgsTANBgkqhkiG9w0BAQUFADAeMRwwGgYDVQQDExNk ZXYucGFydHlvbmRhdGEuY29tMB4XDTEwMDgyNzA2MzA1N1oXDTIwMDgyNDA2MzA1 N1owHjEcMBoGA1UEAxMTZGV2LnBhcnR5b25kYXRhLmNvbTCBnzANBgkqhkiG9w0B AQEFAAOBjQAwgYkCgYEAzXDEULpCUqIc9hV/ESFapkckR2uoYINA81DvG2aQZ9Ot Q30OwX2ae2CC4bSzJEIVlahU8vjVrWpmpa28NEhQbqh4ywwbl1XDrEVYI6Gkfimf snJhOKyaVrEhlwutYtBjmsz3ZIqwymMPm/6smVcSS5dJIynlSmtltxX6ivPcO8UC AwEAATANBgkqhkiG9w0BAQUFAAOBgQBGxHVkpSSOnZjzuySRepjhAlV/yhe9Fx23 fh12WrjQMEi98B7JEuNSLXDWckUN7O6XRc3RzKmazcGHJqzhn0Ov6gAmAE2XjZ/x VW21xmaLwk+KgYKFJbJJaP3jMSpU7I3aa11wqAkR2Zd4Nkm9N0YXYIzcBdfztTVI Et8mEHBFdg== -----END CERTIFICATE----- The cert is in turn generated via: $ make-ssl-cert generate-default-snakeoil --force-overwrite Apache version. $ apache2 -V Server version: Apache/2.2.14 (Ubuntu) Server built: Apr 13 2010 20:22:19 Server's Module Magic Number: 20051115:23 Server loaded: APR 1.3.8, APR-Util 1.3.9 Compiled using: APR 1.3.8, APR-Util 1.3.9 Architecture: 64-bit Server MPM: Worker threaded: yes (fixed thread count) forked: yes (variable process count) Server compiled with.... -D APACHE_MPM_DIR="server/mpm/worker" -D APR_HAS_SENDFILE -D APR_HAS_MMAP -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled) -D APR_USE_SYSVSEM_SERIALIZE -D APR_USE_PTHREAD_SERIALIZE -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT -D APR_HAS_OTHER_CHILD -D AP_HAVE_RELIABLE_PIPED_LOGS -D DYNAMIC_MODULE_LIMIT=128 -D HTTPD_ROOT="" -D SUEXEC_BIN="/usr/lib/apache2/suexec" -D DEFAULT_PIDLOG="/var/run/apache2.pid" -D DEFAULT_SCOREBOARD="logs/apache_runtime_status" -D DEFAULT_ERRORLOG="logs/error_log" -D AP_TYPES_CONFIG_FILE="/etc/apache2/mime.types" -D SERVER_CONFIG_FILE="/etc/apache2/apache2.conf" I don't administer the network, hardware, etc. - this is all running on Amazon EC2. I'm not running a load-balancer or anything else in front of the server. I'm making direct TCP connections to that host (AFAIK). Any ideas? Thanks in advance for any help.

  • Will this RAID5 setup work (3TB Seagate Barracudas + Adaptec RAID 6405)?

    - by Slayer537
    As the title states, will this RAID combo work, and if not what needs to be changed? Overall opinions would be most helpful. I currently run a small file server of about 5TB or so. I keep outgrowing my needs and need to build a RAID setup that will allow me to expand as needed. I am new to RAID setups, especially one of the scale I have currently planned out, but I have being doing some research for the past couple of weeks and have come up with a build. Ideally, I'd have the setup completely built, but I'd like to keep the total cost around $1k and can't afford to go above $1.5k, so unfortunately that's not an option. 2 of my current drives are WD Caviar Blacks 2TB; however, I have recently learned that due to the lack of TLER those drives are awful for any RAID setup other than 0 or 1. That being said, my third drive is a Seagate Barracuda 3TB (ST300DM001) and I have found a RAID controller that states it supports it, so I'd like to use this same type of drive, if possible. Have any of you had any experience using this drive or a similar one in a RAID5 configuration? The manufacturer states that it supports it, but knowing that it is not an enterprise drive, I am slightly concerned that it could drop out of the array. I would just go with enterprise drives, but those are about double in cost... Parts list: Storage rack: http://www.ebay.com/itm/SGI-3U-Media-Storage-Server-16-Hard-Drive-Bay-SATA-SAS-Expander-Omnistor-SE3016-/140735776937?pt=LH_DefaultDomain_0&hash=item20c48188a9 3 more HDs (for now..): http://www.amazon.com/Seagate-Barracuda-3-5-Inch-Internal-ST3000DM001/dp/B005T3GRLY/ref=dp_return_2?ie=UTF8&n=172282&s=electronics Adaptec RAID 6405: http://www.newegg.com/Product/Product.aspx?Item=N82E16816103224 here's a link to the compatibility sheet if that helps: http://download.adaptec.com/pdfs/compatibility_report/arc-sas_cr_03-27-12_series6.pdf SAS expander cable: http://www.pc-pitstop.com/sas_cables_adapters/8887-2M.asp My plan is to install the RAID card in my computer and then route the SAS cable to the rack. Setup a RAID5 on 3 drives, transfer my data over from my other drive, and then add that drive to the array. Eventually, I'd like to get a 2U unit and run the file server on that and move the RAID card over to there, but that will have to happen later on. Side note: The computer the card would be going into will be running Windows 7 Pro with 24GB of DDR3-1600 and an i7-930.

  • Apache taking up too much CPU

    - by andrewtweber
    I'm trying to manage a server on Amazon for a network of sites that receives about 100 million pageviews per month. Unfortunately, nobody out of my team of 5 developers has much server admin experience. Right now we have the MaxClients set to 1400. Currently our traffic is about average, and we have 1150 total Apache processes running, which use about 2% CPU each! Out of those 1150, 800 of them are currently sleeping, but still taking up CPU. I'm sure there are ways to optimize this. I have a few thoughts: It appears Apache is creating a new process for every single connection. Is this normal? Is there a way to more quickly kill the sleeping processes? Should we turn KeepAlive on? Each page loads about 15-20 medium-sized graphics and a lot of javascript/css. So, here's our Apache setup. We do plan on contracting a server admin asap, but I would really appreciate some advice until we can find someone. Timeout 25 KeepAlive Off MaxKeepAliveRequests 200 KeepAliveTimeout 5 <IfModule prefork.c> StartServers 100 MinSpareServers 20 MaxSpareServers 50 ServerLimit 1400 MaxClients 1400 MaxRequestsPerChild 5000 </IfModule> <IfModule worker.c> StartServers 4 MaxClients 400 MinSpareThreads 25 MaxSpareThreads 75 ThreadsPerChild 25 MaxRequestsPerChild 0 </IfModule> Full top output: top - 23:44:36 up 1 day, 6:43, 4 users, load average: 379.14, 379.17, 377.22 Tasks: 1153 total, 379 running, 774 sleeping, 0 stopped, 0 zombie Cpu(s): 71.9%us, 26.2%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 1.9%si, 0.0%st Mem: 70343000k total, 23768448k used, 46574552k free, 527376k buffers Swap: 0k total, 0k used, 0k free, 10054596k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1756 mysql 20 0 10.2g 1.8g 5256 S 19.8 2.7 904:41.13 mysqld 21515 apache 20 0 396m 18m 4512 R 2.1 0.0 0:34.42 httpd 21524 apache 20 0 396m 18m 4032 R 2.1 0.0 0:32.63 httpd 21544 apache 20 0 394m 16m 4084 R 2.1 0.0 0:36.38 httpd 21643 apache 20 0 396m 18m 4360 R 2.1 0.0 0:34.20 httpd 21817 apache 20 0 396m 17m 4064 R 2.1 0.0 0:38.22 httpd 22134 apache 20 0 395m 17m 4584 R 2.1 0.0 0:35.62 httpd 22211 apache 20 0 397m 18m 4104 R 2.1 0.0 0:29.91 httpd 22267 apache 20 0 396m 18m 4636 R 2.1 0.0 0:35.29 httpd 22334 apache 20 0 397m 18m 4096 R 2.1 0.0 0:34.86 httpd 22549 apache 20 0 395m 17m 4056 R 2.1 0.0 0:31.01 httpd 22612 apache 20 0 397m 19m 4152 R 2.1 0.0 0:34.34 httpd 22721 apache 20 0 396m 18m 4060 R 2.1 0.0 0:32.76 httpd 22932 apache 20 0 396m 17m 4020 R 2.1 0.0 0:37.34 httpd 22933 apache 20 0 396m 18m 4060 R 2.1 0.0 0:34.77 httpd 22949 apache 20 0 396m 18m 4060 R 2.1 0.0 0:34.61 httpd 22956 apache 20 0 402m 24m 4072 R 2.1 0.0 0:41.45 httpd
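    Under the prefork MPM, one Apache process per concurrent connection is normal, so the process count above is not a bug in itself; the levers are how many processes are allowed to pile up and how long idle ones linger. A rough sketch of the kind of prefork tuning being discussed follows. The figures are illustrative guesses rather than values measured for this workload, and MaxClients in particular should be derived from observed per-process memory (mod_status and top help here). Turning KeepAlive on with a short timeout lets each visitor fetch the 15-20 graphics per page over one connection instead of opening a new one per request.

        # Illustrative prefork tuning only - the numbers are guesses, not measurements.
        Timeout              15
        KeepAlive            On     # one connection serves the 15-20 assets per page view
        MaxKeepAliveRequests 100
        KeepAliveTimeout     2      # short, so idle keep-alive processes are released quickly

        <IfModule prefork.c>
            StartServers          50
            MinSpareServers       20
            MaxSpareServers       80
            ServerLimit          512
            MaxClients           512     # cap at what RAM/CPU can actually sustain
            MaxRequestsPerChild 2000     # recycle children periodically to curb memory creep
        </IfModule>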

  • Encrypted off-site data storage

    - by Dan
    My business has a rather unique problem. We work in China and we want to implement a file server paradigm which does not store any files locally, but rather in a server overseas. Applications would be saved onto our local machines, but data would be loaded directly into memory from the cloud, e.g. I load a docx into word at the beginning of the day, saving periodically to the cloud as I work on it, and turn off my computer at night, with nothing saved locally. Considering recent events, we worry about being raided by the Chinese authorities, and although all our data is encrypted, it would not be hard for the authorities to force us to give up the keys. So the goal is not to have anything compromising physically in China. We have about 20 computers, and we need an authenticated, encrypted connection with this overseas file server. A system with Active-Directory-like permissions would be best, so that only management can read or write to certain files, or workers can only access files that relate to their projects, and to which all access can be cut off should the need arise. The file server itself would also need to be encrypted. And for convenience, it would be nice if this system was integrated with each computer's file explorer (like skydrive or dropbox does, but, again, without saving a copy locally), rather than through a browser. I can't find any solution online. Does anyone know of a service that does this? Otherwise I'll have to do it myself (which kinda sounds fun, but I don't really have the time), and I'm not sure where to start. Amazon maybe. But the protocols that offices would use on their intranet typically aren't encrypted; we need all traffic securely tunneled out of the country. Each computer already has a VPN to a server in California, but I'm unsure whether it would be efficient to pipe file transfers through it. Let me know if anyone has any ideas. And this is my first post; feel free say whether this question is inappropriate/needs to be posted elsewhere.

  • How can I prevent an unintentional DDOS running ColdFusion 8 with IIS 6?

    - by Eric Belair
    We had an interesting outage today on one of our client's websites. Out of nowhere, the website was inaccessible. The website runs by itself on a dedicated physical Windows 2000 server (probably overkill, I know, but that's a discussion for a different day). After restarting IIS and ColdFusion Application Service, the problem came back several times. My initial thought was that it was a DNS issue, which happens occasionally - the last time it happened was after Hurricane Sandy when we our ISP was out, and we had to make some network config changes. But, it was not a DNS issue. My second thought was that it was a DDOS attack, but, there's very little reason anyone would want to take this site down. When we called our ISP, the operator on the other end noted that traffic was spiking significantly. As it turned out, the client had unintentionally caused a DDOS on the website, after they FTPed a very large video file, and then mass emailed a link to it. Hundreds of people clicked the link and brought the site to its knees. I am primarily a Website Programmer, but I often have to contribute to server administration at times. Sadly, I'm the resident ColdFusion and IIS expert, but I don't have a lot of experience with this issue. What are some basic steps that I can take to prevent this from happening in the future, since we cannot always control what files the client posts to the website. Here are some ideas I had, but I'm unsure of the impact: Limit the number of connections in IIS. Put media files on a separate server (like an Amazon site, etc.). File requests of this type currently behind a server-script (i.e. /www.site.com/viewFile.cfm?fileId=1424545, where the fileId references a file off the webroot) that logs requests, and pushes the file to the browser using CFCONTENT. I could edit this script to reject requests when they exceed a certain amount in a given time-frame (i.e. a 5MB can be accessed globally 10 times in an hour). This may cause some users frustration, but, if hundreds of users are attempting to view the file, the site is going to crash anyways, as it did today, which is way more frustrating, since there is no "pretty" message explaining why they can't get to the file. I'm open to any suggestions, as I'm continuing my research to report to the CTO with the best options, so that we can put a solution into effect. Thank you.
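    Idea 3 above, throttling the download script itself, amounts to a fixed-window counter keyed on the file id. The site in question runs ColdFusion 8, so the snippet below is only a sketch of that counting logic, written in PHP for illustration; the table, limits and endpoint name are all made up.

        <?php
        // Fixed-window throttle sketch for a "viewFile"-style endpoint.
        // Hypothetical table: file_hits(file_id, window_start, hits) with a UNIQUE key on (file_id, window_start).
        $pdo = new PDO('mysql:host=localhost;dbname=site;charset=utf8', 'user', 'pass');

        $fileId      = (int) $_GET['fileId'];
        $maxPerHour  = 10;                        // e.g. a large file may be served 10 times per hour, site-wide
        $windowStart = date('Y-m-d H:00:00');     // the current one-hour window

        // Count this request and read back the running total for the window.
        $pdo->prepare(
            'INSERT INTO file_hits (file_id, window_start, hits) VALUES (?, ?, 1)
             ON DUPLICATE KEY UPDATE hits = hits + 1'
        )->execute(array($fileId, $windowStart));

        $stmt = $pdo->prepare('SELECT hits FROM file_hits WHERE file_id = ? AND window_start = ?');
        $stmt->execute(array($fileId, $windowStart));

        if ((int) $stmt->fetchColumn() > $maxPerHour) {
            header('HTTP/1.1 503 Service Unavailable');
            header('Retry-After: 3600');
            exit('This file is temporarily unavailable because of heavy demand. Please try again later.');
        }

        // ...otherwise log the request and stream the file to the browser as before.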

  • Web Site Serving, Cloud-Computing, oh, my

    - by Frank
    I'm planning a software based service. To give it a bit of context (type of traffic), assume it similar to facebook in nature (with a little GitHub thrown in). I've been trying to understand my different hosting options. I've been using a shared host with GoDaddy for years just fine. I currently host a Wordpress web site there and I've not had any problems. Quite frankly, they've taken good care of me. However, the nature of a shared hosting environment is limited in nature. For example, I can't do anything but host a web site there. For example, I can not run a Mercurial server. Last time I attempted to build a web application with the intention of eventually launching it via GoDaddy, I ran in to all sorts of troubles because it was shared-hosted. Assembly issues, etc. At the time, the cost and time sank my project. (The lack of direct access was also frustrating.) (to be fair to godaddy, this was over 3 years ago) I've been looking at Rackspace or Amazon as a possible cloud solution but it seems to be just processing power and bandwidth (and an OS). From what I understand, I'd need to get Apache and MySQL Working on my own. The way cloud hosting is priced, however, seems appealing. I figure my final option might be to use a virtual private host. I think this would be more flexible than a shared-host site but less scalable than a cloud based server. So, I guess my question is what is an appropriate solution for someone who intends to build a web application service? I figure that I need to establish a hosting environment now rather than later so I can plan to effectively use the environment. I'd prefer to be fairly economical to start out with. I really can't afford to pay $999 (or even $99) while I build up the site and get the core functionality online but at the same time, I'd like to have the selected environment grow as needed. Thank you.

  • How to move a ruby on rails application to a new server

    - by ManiacZX
    I have a rails app on an old Ubuntu server I need to move onto a new machine. I haven't worked with ruby on rails so I don't really know anything about the structure of the app. I want to load this onto an Ubuntu 8.04 AMI on Amazon EC2 and am looking for any information regarding the migration process such as: Do I copy over the entire folder defined as the application root in the mongrel config (for ex: /u/apps/myapp/current) or just certain folders? Am I looking for trouble if I go with the latest versions of ruby and the various gems? Any general gotchas to look out for in the process. Current server information: root@webnode001:/# cat /proc/version Linux version 2.6.15-27-server (buildd@terranova) (gcc version 4.0.3 (Ubuntu 4.0.3-1ubuntu5)) #1 SMP Fri Dec 8 18:43:54 UTC 2006 root@webnode001:/# rails -v Rails 1.2.3 root@webnode001:/# mongrel_rails cluster::configure --version Version 1.0.1 root@webnode001:/# gem -v 0.9.0 root@webnode001:/# gem list -l *** LOCAL GEMS *** actionmailer (1.3.3, 1.2.5) Service layer for easy email delivery and testing. actionpack (1.13.3, 1.12.5) Web-flow and rendering framework putting the VC in MVC. actionwebservice (1.2.3, 1.1.6) Web service support for Action Pack. activerecord (1.15.3, 1.15.2, 1.14.4) Implements the ActiveRecord pattern for ORM. activesupport (1.4.2, 1.4.1, 1.3.1) Support and utility classes used by the Rails framework. cgi_multipart_eof_fix (2.1) Fix an exploitable bug in CGI multipart parsing which affects Ruby <= 1.8.5 when multipart boundary attribute contains a non-halting regular expression string. daemons (1.0.7, 1.0.5, 1.0.4, 1.0.2) A toolkit to create and control daemons in different ways eventmachine (0.7.2, 0.7.0) Ruby/EventMachine socket engine library fastercsv (1.2.0, 1.1.0) FasterCSV is CSV, but faster, smaller, and cleaner. fastthread (1.0) Optimized replacement for thread.rb primitives ferret (0.11.4) Ruby indexing library. gem_plugin (0.2.2, 0.2.1) A plugin system based only on rubygems that uses dependencies only mongrel (1.0.1, 0.3.13.4) A small fast HTTP library and server that runs Rails, Camping, Nitro and Iowa apps. mongrel_cluster (0.2.1) Mongrel plugin that provides commands and Capistrano tasks for managing multiple Mongrel processes. mysql (2.7) MySQL/Ruby provides the same functions for Ruby programs that the MySQL C API provides for C programs. piston (1.3.3) Piston is a utility that enables merge tracking of remote repositories. rails (1.2.3, 1.1.6) Web-application framework with template engine, control-flow layer, and ORM. rake (0.7.3, 0.7.1) Ruby based make-like utility. sources (0.0.1) This package provides download sources for remote gem installation swiftiply (0.5.1) A fast clustering proxy for web applications.

  • Tomcat can't talk to MySql after outage

    - by gav
    I missed a payment for my server and hey suspended my account for a day or so. When they brought the server back up all my data was in tact but for some reason Tomcat can't make a JDBC connection to my MySql server. They both run on the same machine and hence I have a bind address of 127.0.0.1. It's strange because I have reset the machine of my own accord before without issue but clearly something has been reset in the downtime. I followed this guide (Just the bits which don't concern S3, I am not on Amazon infrastructure) originally and everything worked as expected. I'm very new to being a SysAdmin and I'm not sure what to try, how would you go about diagnosing this issue? The stack trace I get is as follows; INFO: Deploying web application archive myapp-1.1.war 2010-05-26 22:07:22,221 [main] ERROR context.ContextLoader - Context initialization failed org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'messageSource': Initialization of bean failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'transactionManager': Cannot resolve reference to bean 'sessionFactory' while setting bean property 'sessionFactory'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'sessionFactory': Cannot resolve reference to bean 'hibernateProperties' while setting bean property 'hibernateProperties'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'hibernateProperties': Cannot resolve reference to bean 'dialectDetector' while setting bean property 'properties' with key [hibernate.dialect]; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'dialectDetector': Invocation of init method failed; nested exception is org.springframework.jdbc.support.MetaDataAccessException: Could not get Connection for extracting meta data; nested exception is org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.) 
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519) at org.codehaus.groovy.grails.commons.spring.ReloadAwareAutowireCapableBeanFactory.doCreateBean(ReloadAwareAutowireCapableBeanFactory.java:129) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:450) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:290) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:287) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:193) at org.springframework.context.support.AbstractApplicationContext.initMessageSource(AbstractApplicationContext.java:714) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:404) at org.codehaus.groovy.grails.commons.spring.GrailsWebApplicationContext.refresh(GrailsWebApplicationContext.java:153) ... I get this error for a number of 'beans'. If I type mysql at my command prompt then I can easily login with the same credentials as my grails app which uses GORM and Hibernate to persist objects to the DB. I might not have given enough info to start with but I'm really interested to learn and will certainly provide it if asked, I just really don't know where to start on this one. Thanks, Gav

  • StrongSwan + xl2tpd client timeout between 2-5 minutes

    - by Howard Guo
    I run CentOS 6.4 on Amazon EC2, using xl2tpd-1.3.1 from EPEL repository together with StrongSwan 5.0.4. I setup a simple IPSec connection: conn l2tp type=transport keyexchange=ikev1 rekey=no authby=psk leftsubnet=0.0.0.0/0 rightsubnet=0.0.0.0/0 compress=yes auto=add And here is xl2tpd.conf: [global] ipsec saref = yes [lns default] ip range = 192.168.0.2-192.168.0.250 local ip = 192.168.0.1 ppp debug = yes pppoptfile = /etc/ppp/options.xl2tpd length bit = yes Here is options.xl2tpd: ms-dns 8.8.4.4 auth lock debug proxyarp There is only one client - Android 4.2 Android connects successfully: Oct 27 19:45:02 ip-172-31-17-30 xl2tpd[2706]: Connection established to x.x.x.x, 59578. Local: 18934, Remote: 29291 (ref=0/0). LNS session is 'default' Oct 27 19:45:02 ip-172-31-17-30 xl2tpd[2706]: Call established with x.x.x.x, Local: 36452, Remote: 29845, Serial: -1369754322 Oct 27 19:45:02 ip-172-31-17-30 pppd[2709]: pppd 2.4.5 started by howard, uid 0 Oct 27 19:45:02 ip-172-31-17-30 pppd[2709]: Using interface ppp0 Oct 27 19:45:02 ip-172-31-17-30 pppd[2709]: Connect: ppp0 <--> /dev/pts/0 Oct 27 19:45:02 ip-172-31-17-30 pppd[2709]: peer from calling number x.x.x.x authorized Oct 27 19:45:02 ip-172-31-17-30 pppd[2709]: Deflate (15) compression enabled Oct 27 19:45:03 ip-172-31-17-30 pppd[2709]: Cannot determine ethernet address for proxy ARP Oct 27 19:45:03 ip-172-31-17-30 pppd[2709]: local IP address 192.168.0.1 Oct 27 19:45:03 ip-172-31-17-30 pppd[2709]: remote IP address 192.168.0.2 Oct 27 19:45:03 ip-172-31-17-30 charon: 06[KNL] 192.168.0.1 appeared on ppp0 Oct 27 19:45:03 ip-172-31-17-30 charon: 06[KNL] 192.168.0.1 disappeared from ppp0 Oct 27 19:45:03 ip-172-31-17-30 charon: 06[KNL] 192.168.0.1 appeared on ppp0 Oct 27 19:45:03 ip-172-31-17-30 charon: 06[KNL] interface ppp0 activated In the meanwhile, Internet works perfectly on the Android client, the VPN connection is stable and fast. However, it always happens that within 2-5 minutes after the connection is established: Oct 27 19:47:07 ip-172-31-17-30 xl2tpd[2706]: Maximum retries exceeded for tunnel 18934. Closing. Oct 27 19:47:07 ip-172-31-17-30 xl2tpd[2706]: Connection 29291 closed to 95.91.227.224, port 59578 (Timeout) Oct 27 19:47:07 ip-172-31-17-30 charon: 06[KNL] interface ppp0 deactivated Oct 27 19:47:07 ip-172-31-17-30 charon: 06[KNL] interface ppp0 deleted Then the VPN connection is broken. So what might have gone wrong? The same L2TP service works flawlessly on iOS 7, MacOS 10.8, and Windows 7, there is no disconnection issue on those OSes. Thank you!

  • Ubuntu unattended-upgrades stops apache

    - by Robbie
    This morning i was alerted to the fact that both apache instances serving my app were not responding to requests from my load balancer. I attempted apachectl restart and it said apache was not running. So, i started apache on both instances and got the service up again. I then followed the logs and worked out that both had performed upgrades via the unattended-upgrades package moments before they stopped responding. /var/log/unattended-upgrades/unattended-upgrades.log 2013-07-02 06:30:51,875 INFO Starting unattended upgrades script 2013-07-02 06:30:51,875 INFO Allowed origins are: ['o=Ubuntu,a=precise-security'] 2013-07-02 06:33:57,771 INFO Packages that are upgraded: accountsservice apache2 apache2-mpm-prefork apache2-utils apache2.2-bin apache2.2-common apparmor apport apt apt-transport-https apt-utils bind9-host binutils dbus dnsutils gnupg gpgv isc-dhcp-client isc-dhcp-common krb5-locales libaccountsservice0 libapt-inst1.4 libapt-pkg4.12 libbind9-80 libc-bin libc-dev-bin libc6 libc6-dev libcurl3-gnutls libdbus-1-3 libdbus-glib-1-2 libdns81 libdrm-intel1 libdrm-nouveau1a libdrm-radeon1 libdrm2 libexpat1 libfreetype6 libgc1c2 libgnutls-dev libgnutls-openssl27 libgnutls26 libgnutlsxx27 libisc83 libisccc80 libisccfg82 liblwres80 libruby1.8 libx11-6 libx11-data libxcb1 libxext6 libxml2 linux-firmware linux-image-virtual linux-libc-dev linux-virtual multiarch-support openssl perl perl-base perl-modules python-apport python-crypto python-keyring python-problem-report python-software-properties ri1.8 ruby1.8 ruby1.8-dev sudo tzdata update-manager-core 2013-07-02 06:33:57,772 INFO Writing dpkg log to '/var/log/unattended-upgrades/unattended-upgrades-dpkg_2013-07-02_06:33:57.772399.log' 2013-07-02 06:36:10,584 INFO All upgrades installed I'm running Ubuntu 12.04 on Amazon EC2 servers. I have unattended-upgrades installed and configured as follows: /etc/apt/apt.conf.d/50unattended-upgrades // Automatically upgrade packages from these (origin:archive) pairs Unattended-Upgrade::Allowed-Origins { "${distro_id}:${distro_codename}-security"; // "${distro_id}:${distro_codename}-updates"; // "${distro_id}:${distro_codename}-proposed"; // "${distro_id}:${distro_codename}-backports"; }; // List of packages to not update Unattended-Upgrade::Package-Blacklist { }; /etc/apt/apt.conf.d/20auto-upgrades APT::Periodic::Update-Package-Lists "1"; APT::Periodic::Unattended-Upgrade "1"; I've struggled to find documentation about what happens to running processes during an upgrade. - Is this expected behaviour? Or should unattended-upgrades restart apache after upgrading it? - What can I do to ensure apache is restarted correctly? Should I just blacklist the apache package?
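    On the closing question: blacklisting is the bluntest fix, and it means Apache security updates then have to be applied by hand, but it only uses the hook that already exists in the 50unattended-upgrades file quoted above. A second option is to have apt make sure Apache is running after any package run. Both are sketched below as assumptions to adapt rather than tested drop-ins; note that the blacklist entries are matched as regular expressions against package names.

        // /etc/apt/apt.conf.d/50unattended-upgrades - keep the Apache packages out of unattended runs.
        // "apache2" also matches apache2-mpm-prefork, apache2.2-bin, apache2.2-common, etc.
        Unattended-Upgrade::Package-Blacklist {
            "apache2";
            "libapache2-";
        };

        // Alternative sketch, e.g. in /etc/apt/apt.conf.d/99ensure-apache:
        // start Apache back up if any apt/dpkg activity has left it stopped.
        DPkg::Post-Invoke { "service apache2 status > /dev/null || service apache2 start"; };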

  • Backing Up vs. Redundancy

    - by TK Kocheran
    I'm currently in stage 2 of 3 of building my home workstation. What this means is that my RAID-0 array of solid state disks will be backed up nightly to a RAID-5 or RAID-6 array of traditional spinning hard disks. However, it recently dawned on me that redundancy is not backup. The main reason for setting up a RAID array with redundancy was to protect myself in the event of a drive failure to serve as an effective backup solution. Wait. What if a bolt of lightning finds a way to travel into my house, through my surge-protector, into my power supply and physically destroys all of my hard disks and SSDs? Well, in that case, I guess I'd be fine because I generally keep most important files (music, pictures, videos) stored in multiple places like on my laptop, my wife's laptop, and an encrypted USB hard drive. Wait. What if a giant hedgehog meteor attacks my house from space traveling at mach 3 and all machines and hard disks are blown to smithereens. Well, I guess I could find a way to do ridiculously slow and cumbersome rsyncs or backups to Amazon's Glacier. Wait. What if there's a nuclear apocalypse... and at this point I start laughing hysterically. At what point does backing up become irrelevant? I completely understand situation one (mechanical drive failure), situation two (workstation compromised or destroyed somehow), possibly even situation three (all machines and disks destroyed), but situation four? There's no questioning the need for backups. None. However, there are three questions I'd really like addressed: To what level should one backup? I definitely understand the merits of physical disk redundancy. I also believe in keeping important files on multiple machines and thinning out the possibility of losing all of my files. Online backups make sense, but they beg the following question. What should I be backing up remotely and how often? It's no problem storage-wise to back up important files (music, pictures, videos) and even configuration and temporal data for all of the machines in my network (all Linux based)... albeit locally. Transferring to the cloud is another story. Worst-case scenario, if I lost all of my configuration for my individual computers, the reality is that I probably lost the machines too. The cloud is a long way away from here; I can run backups over CAT-6 here and see 100MB/s easily, but I'm afraid that I'm only going to see 2MB/s at best when transferring up to the cloud.

  • Need advice on which PCI SATA Controller Card to Purchase

    - by Matt1776
    I have a major issue with the build of a machine I am trying to get up and running. My goal is to create a file server that will service the needs of my software development, personal media storage and streaming/media server needs, as well as provide a strong platform for backing up all this data in a routine, cron-job oriented German efficiency sort of way. The issue is a simple one - all my drives are SATA drives and my motherboard controller only contains 4 ports. Solving the issue has proven to be an unmitigated nightmare. I would like advice on the purchase of the following: 4 Port internal SATA / 2 Port external eSATA PCI SATA Controller Card that has the following features and/or advantages: It must function. If I plug it in and attach drives, I expect my system to still make it to the Operating System login screen. It must function on CentOS, and I mean it must function WELL and with MINIMAL hassle. If hassle is unavoidable, there shall be CLEAR CUT and EASY TO FOLLOW instructions on how to install drivers and other supporting software. I do not need nor want fakeRAID - I will be setting up any RAID configurations from within the operating system. Now, if I am able to find such a mythical device, I would be eternally grateful to whomever would be able to point me in the right direction, a direction which I assume will be paved with yellow bricks. I am prepared to pay a considerable sum of money (as SATA controller cards go) and so paying anywhere between 60 to 120 dollars will not be an issue whatsoever. Does such a magical device exist? The following link shows an "example" of the type of thing I am looking for, however, I have no way of verifying that once I plug this baby in that my system will still continue to function once I've attached the drives, or that once I've made it to the OS, I will be able to install whatever drivers or software programs I need to make it work with relative ease. It doesn't have to be dog-shit simple, but it cannot involve kernels or brain surgery. http://www.amazon.com/gp/product/B00552PLN4/ref=pd_lpo_k2_dp_sr_1?pf_rd_p=486539851&pf_rd_s=lpo-top-stripe-1&pf_rd_t=201&pf_rd_i=B003GSGMPU&pf_rd_m=ATVPDKIKX0DER&pf_rd_r=1HJG60XTZFJ48Z173HKY So does anyone have a suggestion regarding the subject I am asking about? PCI SATA Controller Cards? It would help if you've had experience with the component before - that is after all why I am asking here - for those who have had experience that I do not have. Bear in mind that this is for a home setup and that I do not have a company credit card. I have a budget with a 'relative' upper limit of about $150.00.

  • Nginx, proxy passing to Apache, and SSL

    - by Vic
    I have Nginx and Apache set up with Nginx proxy-passing everything to Apache except static resources. I have a server set up for port 80 like so: server { listen 80; server_name *.example1.com *.example2.com; [...] location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ { access_log off; expires max; add_header Pragma public; add_header Cache-Control "public, must-revalidate, proxy-revalidate"; add_header Vary: Accept-Encoding; } location / { proxy_pass http://127.0.0.1:8080; include /etc/nginx/conf.d/proxy.conf; } } And since we have multiple ssl sites (with different ssl certificates) I have a server{} block for each of them like so: server { listen 443 ssl; server_name *.example1.com; [...] location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ { access_log off; expires max; add_header Pragma public; add_header Cache-Control "public, must-revalidate, proxy-revalidate"; add_header Vary: Accept-Encoding; } location / { proxy_pass https://127.0.0.1:8443; include /etc/nginx/conf.d/proxy.conf; proxy_set_header X-Forwarded-Port 443; proxy_set_header X-Forwarded-Proto https; } } server { listen 443 ssl; server_name *.example2.com; [...] location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ { access_log off; expires max; add_header Pragma public; add_header Cache-Control "public, must-revalidate, proxy-revalidate"; add_header Vary: Accept-Encoding; } location / { proxy_pass https://127.0.0.1:8445; include /etc/nginx/conf.d/proxy.conf; proxy_set_header X-Forwarded-Port 443; proxy_set_header X-Forwarded-Proto https; } } First of all, I think there is a very obvious problem here, which is that I'm double-encrypting everything, first at the nginx level and then again by Apache. To make everything worse, I just started using Amazon's Elastic Load Balancer, so I added the certificate to the ELB and now SSL encryption is happening three times. That's gotta be horrible for performance. What is the sane way to handle this? Should I be forwarding https on the ELB - http on nginx - http on apache? Secondly, there is so much duplication above. Is the best method to not repeat myself to put all of the static asset handling in an include file and just include it in the server?
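    On the first point: triple encryption really is wasted work. The usual arrangement with ELB is to terminate SSL exactly once, at the load balancer, and to speak plain HTTP from the ELB to nginx and from nginx to Apache, letting the ELB's X-Forwarded-Proto header tell the application which requests originally arrived over HTTPS. A minimal sketch of the nginx side under that assumption (the ELB listener would forward 443/HTTPS to the instance's port 80):

        # Sketch: SSL already terminated at the ELB, so nginx listens on plain HTTP only.
        server {
            listen 80;
            server_name *.example1.com *.example2.com;

            location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ {
                access_log off;
                expires max;
                # (cache-control headers as in the existing config)
            }

            location / {
                proxy_pass http://127.0.0.1:8080;                            # plain HTTP to Apache as well
                include /etc/nginx/conf.d/proxy.conf;
                proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;  # relay the scheme the ELB saw
                proxy_set_header X-Forwarded-Port  $http_x_forwarded_port;
            }
        }

    As for the duplication, the repeated static-asset location block is a natural candidate for an include file pulled into each server block, the same way proxy.conf already is.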

  • git private server error: "Permission denied (publickey)."

    - by goddfree
    I followed the instructions here in order to set up a private git server on my Amazon EC2 instance. However, I am having problems when trying to SSH into the git account. Specifically, I get the error "Permission denied (publickey)." Here are the permissions of my files/folders on the EC2 server: drwx------ 4 git git 4096 Aug 13 19:52 /home/git/ drwx------ 2 git git 4096 Aug 13 19:52 /home/git/.ssh -rw------- 1 git git 400 Aug 13 19:51 /home/git/.ssh/authorized_keys Here are the permissions of my files/folders on my own computer: drwx------ 5 CYT staff 170 Aug 13 14:51 .ssh -rw------- 1 CYT staff 1679 Aug 13 13:53 .ssh/id_rsa -rw-r--r-- 1 CYT staff 400 Aug 13 13:53 .ssh/id_rsa.pub -rw-r--r-- 1 CYT staff 1585 Aug 13 13:53 .ssh/known_hosts When checking my logs in /var/log/secure, I used to get the following error message every time I tried to SSH: Authentication refused: bad ownership or modes for file /home/git/.ssh/authorized_keys However, after making a few permission changes, I no longer get this error message. Despite this, I am still getting the "Permission denied (publickey)." message every time I try to SSH. The command I am using to SSH is ssh -T git@my-ip. Here is the full log I get when I run ssh -vT [email protected]: OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011 debug1: Reading configuration data /etc/ssh_config debug1: /etc/ssh_config line 20: Applying options for * debug1: Connecting to my-ip [my-ip] port 22. debug1: Connection established. debug1: identity file /Users/CYT/.ssh/id_rsa type -1 debug1: identity file /Users/CYT/.ssh/id_rsa-cert type -1 debug1: identity file /Users/CYT/.ssh/id_dsa type -1 debug1: identity file /Users/CYT/.ssh/id_dsa-cert type -1 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_6.2 debug1: Remote protocol version 2.0, remote software version OpenSSH_6.2 debug1: match: OpenSSH_6.2 pat OpenSSH* debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr [email protected] none debug1: kex: client->server aes128-ctr [email protected] none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: RSA 08:ad:8a:bc:ab:4d:5f:73:24:b2:78:69:46:1a:a5:5a debug1: Host 'my-ip' is known and matches the RSA host key. debug1: Found key in /Users/CYT/.ssh/known_hosts:1 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Trying private key: /Users/CYT/.ssh/id_rsa debug1: Trying private key: /Users/CYT/.ssh/id_dsa debug1: No more authentication methods to try. Permission denied (publickey). I have spent a few hours going through threads on various sites, including SO and SF, looking for a solution. It seems that the permissions for my files are all okay, but I just can't figure out the problem. Any help would be greatly appreciated. Edit: EEAA: Here are the outputs you requested: $ getent passwd git git:x:503:504::/home/git:/bin/bash $ grep ssh ~git/.ssh/authorized_keys | wc -l grep: /home/git/.ssh/authorized_keys: Permission denied 0

  • AGENT: The World's Smartest Watch

    - by Rob Chartier
    AGENT: The World's Smartest Watch by Secret Labs + House of Horology Disclaimer: Most if not all of this content has been gleaned from the comments on the Kickstarter project page and comments section. Any discrepancies between this post and any documentation on agentwatches.com, kickstarter.com, etc.., those official sites take precedence. Overview The next generation smartwatch with brand-new technology. World-class developer tools, unparalleled battery life, Qi wireless charging. Kickstarter Page, Comments Funding period : May 21, 2013 - Jun 20, 2013 MSRP : $249 Other Urls http://www.agentwatches.com/ https://www.facebook.com/agentwatches http://twitter.com/agentwatches http://pinterest.com/agentwatches/ http://paper.li/robchartier/1371234640 Developer Story The first official launch of the preview SDK and emulator will happen on 20-Jun-2013.  All development will be done in Visual Studio 2012, using the .NET Micro Framework SDK 2.3.  The SDK will ship with the first round of the expected API for developers along with an emulator. With that said, there is no need to wait for the SDK.  You can download the tooling now and get started with Apps and Faces immediately.  The only thing that you will not be able to work with is the API; but for example, watch faces, you can start building the basic face rendering with the Bitmap graphics drawing in the .NET Micro Framework.   Does it look good? Before we dig into any more of the gory details, here are a few photos of the current available prototype models.   The watch on the tiny QI Charter   If you wander too far away from your phone, your watch will let you know with a vibration and a message, all but one button will dismiss the message.   An app showing the premium weather data!   Nice stitching on the straps, leather and silicon will be available, along with a few lengths to choose from (short, regular, long lengths). On to those gory details…. Hardware Specs Processor 120MHz ARM Cortex-M4 processor (ATSAM4SD32) with secondary AVR co-processor Flash & RAM 2MB of onboard flash and 160KB of RAM 1/4 of the onboard flash will be used by the OS The flash is permanent (non-volatile) storage. Bluetooth Bluetooth 4.0 BD/EDR + LE Bluetooth 4.0 is backwards compatible with Bluetooth 2.1, so classic Bluetooth functions (BD/EDR, SPP/AVRCP/PBAP/etc.) will work fine. Sensors 3D Accelerometer (Motion) ST LSM303DLHC Ambient Light Sensor Hardware power metering Vibration Motor (You can pulse it to create vibration patterns, not sure about the vibration strength - driven with PWM) No piezo/speaker or microphone. Other QI Wireless Charging, no NFC, no wall adapter included Custom LED Backlight No GPS in the watch. It uses the GPS in your phone. AGENT watch apps are deployed and debugged wirelessly from your PC via Bluetooth. RoHS, Pb-free Battery Expected to use a CR2430-sized rechargeable battery – replaceable (Mouser, Amazon) Estimated charging time from empty is 2 hours with provided charger 7 Days typical with Bluetooth on, 30 days with Bluetooth off (watch-face only mode) The battery should last at least 2 years, with 100s of charge cycles. Physical dimensions Roughly 38mm top-to-bottom on the front face 35mm left-to-right on the front face and around 12mm in depth 22mm strap Two ~1/16" hex screws to attach the watch pin The top watchcase material candidates are PVD stainless steel, brushed matte ceramic, and high-quality polycarbonate (TBD). 
The glass lens is mineral glass, Anti-glare glass lens Strap options Leather and silicon straps will be available Expected to have three sizes Display 1.28" Sharp Memory Display The display stays on 100% of the time. Dimensions: 128x128 pixels Buttons Custom "Pusher" buttons, they will not make noise like a mouse click, and are very durable. The top-left button activates the backlight; bottom-left changes apps; three buttons on the right are up/select/down and can be used for custom purposes by apps. Backup reset procedure is currently activated by holding the home/menu button and the top-right user button for about ten seconds Device Support Android 2.3 or newer iPhone 4S or newer Windows Phone 8 or newer Heart Rate monitors - Bluetooth SPP or Bluetooth LE (GATT) is what you'll want the heart monitor to support. Almost limitless Bluetooth device support! Internationalization & Localization Full UTF8 Support from the ground up. AGENT's user interface is in English. Your content (caller ID, music tracks, notifications) will be in your native language. We have a plan to cover most major character sets, with Latin characters pre-loaded on the watch. Simplified Chinese will be available Feature overview Phone lost alert Caller ID Music Control (possible volume control) Wireless Charging Timer Stopwatch Vibrating Alarm (possibly custom vibrations for caller id) A few default watch faces Airplane mode (by demand or low power) Can be turned off completely Customizable 3rd party watch faces, applications which can be loaded over bluetooth. Sample apps that maybe installed Weather Sample Apps not installed Exercise App Other Possible Skype integration over Bluetooth. They will provide an AGENT app for your smartphone (iPhone, Android, Windows Phone). You'll be able to use it to load apps onto the watch.. You will be able to cancel phone calls. With compatible phones you can also answer, end, etc. They are adopting the standard hands-free profile to provide these features and caller ID.
