Search Results

Search found 63877 results on 2556 pages for 'mysql error 1452'.


  • Dedicated server automatic backup solution

    - by Luigi
    I have a dedicated Ubuntu web server in a cloud environment, and I am looking for a nice way to do automated backups. I would like to back up some directories with web apps, and all my MySQL databases. As for destinations: make snapshots every two hours locally, and every six hours to a remote FTP server. Also delete backup archives older than seven days (locally and on the FTP server), and notify me of any problems by email. To achieve some of this functionality I currently use cron + a shell script, and http://www.mysqldumper.net/, but that doesn't really answer my needs. Mysqldumper doesn't automatically know about new databases, and the shell script does not notify me of problems. It's something I have to check on from time to time, and I don't trust it. I googled for a while, and it seems most people solve this stuff with shell scripts. Is this a method you can trust? Are there any web-GUI tools I'm missing? Maybe there is a smarter strategy for doing this? I'm a little bit confused.
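
    For reference, the kind of shell script most people end up trusting is one that discovers databases by itself (so new ones are picked up), rotates by age, and mails on any failure. A minimal sketch, with placeholder paths, credentials, and addresses:

        #!/bin/bash
        # Hypothetical backup sketch: dump every database, archive the web
        # dirs, rotate 7-day-old copies locally and remotely, mail on error.
        # Assumes MySQL credentials in ~/.my.cnf and lftp + mailx installed.
        set -e
        trap 'echo "Backup failed on $(hostname)" | mail -s "Backup FAILED" admin@example.com' ERR

        BACKUP_DIR=/var/backups/site
        STAMP=$(date +%Y%m%d-%H%M)
        mkdir -p "$BACKUP_DIR"

        # Dump each database individually; new databases are found automatically.
        for db in $(mysql -N -e 'SHOW DATABASES' | grep -Ev '^(information_schema|performance_schema)$'); do
            mysqldump "$db" | gzip > "$BACKUP_DIR/$db-$STAMP.sql.gz"
        done

        # Archive the web app directories.
        tar czf "$BACKUP_DIR/webapps-$STAMP.tar.gz" /var/www

        # Delete local archives older than seven days.
        find "$BACKUP_DIR" -type f -mtime +7 -delete

        # Mirror to the FTP server; --delete prunes remote copies of the
        # files the find(1) above just removed locally.
        lftp -u backupuser,secret ftp.example.com \
            -e "mirror -R --delete $BACKUP_DIR /backups; exit"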

    Read the article

  • Apache crashes every 5min

    - by Simon
    I'm relatively new to server issues, having a site of mine that I started early in the year grow beyond my capabilities of managing it. I need help. I recently moved out of my shared hosting environment onto a dedicated virtual server from Mediatemple. Each week, I run a script that fetches data from my DB, fetches data from last.fm's API, and then tweets information to Twitter. My server uses Virtuozzo, and when the script runs, Apache crashes every 5 minutes. I checked and saw that the 'kmemsize' parameter reaches its cap (it's 13MB). I realise my problem: the MySQL process stays open for a long time while Apache needs to handle lots of incoming requests (about 200,000 pageviews for that day according to my previous host's AWSTATS). Yes, I'm quite inexperienced in this, and I'm clearly killing the server with too many incoming requests while it has to manage the updating of the DB. So with that background, I want a few answers. 1) Why did my shared hosting environment not crash Apache every 5 minutes? It ran fine; the site only slowed a lot. Clearly, it must be the virtual container and the kmemsize limit? 2) Where do I go from here? Would a physical server (not a virtual container) encounter the same problems? I sent a support request to Mediatemple as well. I need all the help I can get.
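
    For what it's worth, on a Virtuozzo/OpenVZ container you can watch the kmemsize limit being hit yourself: the failcnt column in /proc/user_beancounters increments every time an allocation is refused. A quick check from inside the container:

        # Columns are: held, maxheld, barrier, limit, failcnt.
        grep kmemsize /proc/user_beancounters

        # A nonzero, growing failcnt confirms Apache is being killed by the
        # container's memory limit rather than by the script itself.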

    Read the article

  • Why does gcc think that I am trying to make a function call in my template function signature?

    - by nieldw
    GCC seems to think that I am trying to make a function call in my template function signature. Can anyone please tell me what is wrong with the following?

        227 template<class edgeDecor, class vertexDecor, bool dir>
        228 vector<Vertex<edgeDecor,vertexDecor,dir>> Graph<edgeDecor,vertexDecor,dir>::vertices()
        229 {
        230     return V;
        231 };

    GCC is giving the following:

        graph.h:228: error: a function call cannot appear in a constant-expression
        graph.h:228: error: template argument 3 is invalid
        graph.h:228: error: template argument 1 is invalid
        graph.h:228: error: template argument 2 is invalid
        graph.h:229: error: expected unqualified-id before '{' token

    Thanks a lot.
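
    The culprit is almost certainly the >> closing the nested template arguments: before C++11, GCC lexes >> as the right-shift operator, so it tries to evaluate dir>>Graph<...> as a constant expression -- hence "a function call cannot appear in a constant-expression". Separating the brackets with a space fixes it (the stray semicolon after the function body can go too):

        // Pre-C++11, '>>' is parsed as right-shift; write '> >' instead.
        template<class edgeDecor, class vertexDecor, bool dir>
        vector<Vertex<edgeDecor,vertexDecor,dir> > Graph<edgeDecor,vertexDecor,dir>::vertices()
        {
            return V;
        }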

    Read the article

  • Webserver python update script

    - by ThePyCoder
    So I have made this website on which you can trade stocks based on real stock quotes with virtual money. The stock quotes are in a MySQL database and are updated using a Python script which runs every minute or so. Now, this works fine on my local machine with XAMPP, but how about moving the project to a commercial web server? Basically I want my page hosted by a professional company, but do those kinds of servers support Python scripts running in the background? A dedicated server would be too expensive, and the script does some other SQL tasks too, so it can't be replaced by PHP or so... So, are there any good web hosting services out there which give me the possibility of running a script in the background and hosting a website in the foreground? What server specifications do I have to look for? Thanks in advance! PS: I've done some research, and I found a Python-supporting web host WITH SSH support. Is that what I need? Or is the SSH account not allowed to start processes?
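
    On most hosts that offer SSH, you wouldn't keep a process running in the background at all: you'd let cron start the script once a minute, which also survives server restarts. A hypothetical crontab entry (the script path is an assumption):

        # Run the quote updater every minute; append output to a log for debugging.
        * * * * * /usr/bin/python /home/youruser/update_quotes.py >> /home/youruser/update.log 2>&1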

    Read the article

  • Only variables can be passed by reference

    - by zaf
    I had the bright idea of using a custom error handler, which led me down a rabbit hole. The following code gives (with and without a custom error handler):

        Fatal error: Only variables can be passed by reference

        function foo(){
            $b = array_pop(array("a","b","c"));
            return $b;
        }
        print_r(foo());

    The following code gives (only with a custom error handler):

        (2048) Only variables should be passed by reference

        function foo(){
            $a = explode('/', 'a/b/c');
            $c = array_pop(array_slice($a, -2, 1));
            return $c;
        }
        print_r(foo());

    The second one worries me since I have a lot of 'compact' code. So, I either ditch the bright idea of using a custom error handler (to improve my logging module) or expand all my code. Anyone with better ideas? Also, WTF?
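
    The underlying rule is that array_pop() takes its argument by reference, so it must be handed a real variable, not the temporary returned by another function call. The fix is unfortunately exactly the 'expansion' feared above: one temporary per call. A sketch of both functions rewritten (names are illustrative):

        function fooPop() {
            $tmp = array("a", "b", "c");      // bind to a variable first
            return array_pop($tmp);
        }

        function fooSlice() {
            $a = explode('/', 'a/b/c');
            $slice = array_slice($a, -2, 1);  // temporary variable satisfies the reference
            return array_pop($slice);
        }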

    Read the article

  • A few tables are still out of sync after running mk-table-sync

    - by smusumeche
    I have 1 master and 2 slaves. I am using MySQL 5.1.42 on all servers. I am attempting to use mk-table-checksum to verify that their data is in sync, but I am getting unexpected results on one of the slaves. First, I generate the checksums on the master like this:

        mk-table-checksum h=localhost --databases MYDB --tables {$table_list} --replicate=MYDB.mk_checksum --chunk-size=10M

    My understanding is that this runs the checksum queries on the master, which then propagate via normal replication to the slaves. So no locking is needed, because the slaves will be at the same logical point in time when they run the checksum queries on themselves. Is this correct? Next, to verify that the checksums match, I run this on the master:

        mk-table-checksum --databases MYDB --replicate=IRC.mk_checksum --replicate-check 1 h=localhost,u=maatkit,p=xxxx

    If there are any differences, I repair the slaves like this:

        mk-table-sync --execute --verbose --replicate IRC.mk_checksum h=localhost,u=maatkit,p=xxxx

    After doing all of this, I repaired both slaves with mk-table-sync. However, every time I run this sequence (after everything has already been repaired), one slave is perfectly in sync but one slave always has a few tables out of sync. I am 99.999% sure that the data on the slaves matches, since I repaired everything and the tables were not even updated on the master between runs of the checksum script. What would cause a few tables to always show out of sync on only one of the slaves? I am stuck. Here is the output:

        Differences on h=x.x.x.x,p=...,u=maatkit
        DB   TBL                CHUNK  CNT_DIFF  CRC_DIFF  BOUNDARIES
        IRC  product            10     0         1         product_id >= 147377 AND product_id < 162085
        IRC  post_order_survey  0      0         1         1=1
        IRC  mk_heartbeat       0      0         1         1=1
        IRC  mailing_list       0      0         1         1=1
        IRC  honey_pot_log      0      0         1         1=1
        IRC  product            12     0         1         product_id >= 176793 AND product_id < 191501
        IRC  product            18     0         1         product_id >= 265041
        IRC  orders             26     0         1         order_id >= 694472
        IRC  orders_product     6      0         1         op_id >= 935375

    Read the article

  • Server configuration for our website [duplicate]

    - by Varun Varunesh
    This question already has an answer here: Can you help me with my capacity planning? 2 answers We are a start-up, and 6 months back we launched our beta version website. Now we are in the phase of building our website and web services for the final product. This website will be based on PHP, Python, a MySQL database, and the WAMP stack. Right now in the beta version we are using an Azure VM for hosting, with 786MB RAM and a shared CPU. We have 200 users on average coming to our website daily. Now we are trying to increase the number of daily users from 200 to 1500, and I am thinking our server should be able to handle at least 100 concurrent users. We have also developed web services for our mobile apps, which can further increase the load on the server. So here are the questions that bring me here: I am pretty confused about whether to go with shared hosting or VM-based hosting. If a VM, then what configuration will be best for our requirements (as discussed above)? Currently our VM is a Windows-based server and it's very simple to manage, so other than the cost factor, why should I go for a Linux-based server? What other factors should I keep in mind while choosing the server for our requirements?

    Read the article

  • mysqldump skip one table

    - by danneth
    I'm running a cronjob to back up our system using mysqldump. The database contains 90 or so tables. One of these tables is HUGE and every once in a while causes the dump to fail. From the manual I see that you can specify specific tables to dump:

        shell> mysqldump [options] db_name [tbl_name ...]

    This got me thinking: what if I have two jobs, one for dumping the huge table, and one for all the others? To accomplish this it would be nice if I could do something like:

        shell> mysqldump -u backupuser -p database huge_table > db_huge_table.sql
        shell> mysqldump -u backupuser -p database --skip huge_table > db_rest.sql

    Unfortunately I'm not seeing such an option. I could of course explicitly state the 90 tables, but that just seems like a mess. Another option would be a script of some sort, but before checking that route I'll try this resource. MySQL is 5.1.61 on CentOS 6.2
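
    mysqldump does in fact ship with an option for exactly this: --ignore-table=db_name.tbl_name. It must be given the fully qualified name, and it can be repeated to skip several tables. So the hypothetical --skip line becomes:

        # Dump only the huge table.
        mysqldump -u backupuser -p database huge_table > db_huge_table.sql

        # Dump everything else, skipping it.
        mysqldump -u backupuser -p --ignore-table=database.huge_table database > db_rest.sql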

    Read the article

  • Web Hosting: Any web host that supports more than 50,000 files?

    - by Devner
    Hi all, for my PHP & MySQL based application, I am trying to buy website hosting from a host that does not place a limit on the number of files I carry in my hosting account. Almost all hosts have a common limit of 50,000 files (some call it 50,000 nodes); the rest (to the extent of my search) are not even close. I have gone through various websites, Googled a lot of information, and spoken with the customer service of the hosting companies, and they said that they have a limit of 50,000 files and that's why they call it the LIMIT. Now, my application is a kind of social networking website where people can upload various files of varying sizes. So if 50,000 users were to join the website and upload 1 file each, the limit of 50,000 would be reached very easily, and my 50,001st customer would start facing file upload problems (and so would my account). So I would like to know if there are any website hosting services that do NOT levy such restrictions. In summary, I need the following:

    - No maximum file limit (more than 50,000 files in the account).
    - No maximum file upload size limit in the server settings (10MB, 12MB, 15MB, 20MB, etc.).
    - Ability to upload files of various types (zip, flv, jpg, png, etc.).
    - Ability to stream audio and video (live audio & video not necessary).
    - Access to .htaccess.
    - Access to php.ini, my.cnf or my.ini (this would be a plus).
    - Supports SSL.
    - Provides dedicated hosting (& IP) as well.
    - Monthly payments without contracts are a plus.

    If you know of any such website hosting services, please post a reply (a link will be appreciated). Thank you.

    Read the article

  • C++ MTL Library dimension.h bug?

    - by avanwieringen
    I've installed MTL on my Fedora Core 12 x64 system, but when building an application I get the following error:

        In file included from /usr/local/include/mtl/matrix.h:41,
                         from /usr/local/include/mtl/mtl.h:40,
                         from ltiSystem.hxx:4,
                         from strTools.hxx:4,
                         from ff.cxx:3:
        /usr/local/include/mtl/envelope2D.h:72: error: declaration of 'typedef struct mtl::twod_tag mtl::envelope2D<T>::dimension'
        /usr/local/include/mtl/dimension.h:19: error: changes meaning of 'dimension' from 'class mtl::dimension<typename mtl::dense1D<T, 0>::size_type, 0, 0>'
        make[1]: *** [ff.o] Error 1

    This would imply an error in MTL. I have switched to different MTL versions and the problem persists, but on Google there is no proper answer. I use the g++ compiler. Does anyone have a clue?
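
    This is g++ enforcing the C++ rule that a name used inside a class must mean the same thing throughout the class: envelope2D.h evidently uses mtl::dimension (the class template from dimension.h) and then declares a member typedef with the same name, which "changes the meaning". A minimal illustration of the diagnostic, unrelated to MTL itself:

        // Hypothetical minimal repro of the 'changes meaning' error.
        class dimension { };

        struct envelope2D {
            dimension d;             // here 'dimension' means ::dimension
            typedef int dimension;   // error: changes meaning of 'dimension'
        };

    Older compilers let this slide, which would explain why the same MTL source built elsewhere; patching the header to rename or qualify the typedef is the usual way out.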

    Read the article

  • Developing high-performance and scalable zend framework website [on hold]

    - by Daniel
    We are going to develop an ads website like http://www.gumtree.com/ (it will not be exactly like it, but just to give you an idea), and we are having some concerns regarding performance and scalability. We are planning on using Zend Framework for this project, but that is all I'm sure of at this point. I don't think a classic approach like Zend Framework (PHP) + MySQL + Memcache + jQuery (and I would throw Doctrine 2 in there too) will result in a high-performance application. I was thinking of making this a RESTful application (with Zend Framework) + NGINX (or maybe MongoDB) + Memcache (or eAccelerator -- I understand this will create problems with scalability on multiple servers) + jQuery, or maybe throw Backbone.js in there, plus a CDN for static content, a server for images, and a scalable server for the requests and the rest. My questions are:

    - What do you think about my approach?
    - What solutions would you recommend for developing a high-performance, scalable application expected to have a lot of traffic using PHP (Zend Framework 2)? I would be interested in your approach.

    I should note that I'm a Zend developer; I've been working with Zend for over 3 years, which is why I'm choosing it.

    Read the article

  • choosing hosting for custom ecommerce site, shared, dedicated, what to look for?

    - by spirytus
    Hi, I have (almost) finished developing a website for my client and now need to decide on hosting. Most of the users of the site will be located in Australia, and so am I and my client. I want to consider everything before deciding on a host, and a few questions come to mind:

    - I cannot afford the website being down, and all hosts say something like "99% uptime guaranteed". Should that alone be enough, or shall I ask hosts for some stats?
    - Does it make any difference if the servers and the whole hosting company are located in Australia or overseas? I've been hosting a few sites with JustHost.com on shared hosting (cheapest plan, servers in the US I believe) and never seen any delays, but could that be an issue? I would prefer an Australian company so I can actually go to them and give them a piece of my mind if something goes wrong, but US servers seem cheaper.
    - Would shared hosting do? It's a custom-built PHP/MySQL ecommerce application, and I know there are security issues with sessions etc. on shared hosting. I will take precautions of course, but could shared hosting be an issue?
    - Would dedicated be a worthwhile option, considering that my knowledge of servers is very limited?

    I need to run PHP/MySQL, preferably with unlimited bandwidth, as with my experience I cannot tell what amount of traffic would be sufficient. Please let me know if I haven't provided enough information to answer my questions; I will gladly explain further. Thanks in advance for any answers :)

    Read the article

  • I'm an idiot/blind and I can't find why I'm getting a list index error. Care to take a look at these 20 or so lines?

    - by Meff
    Basically it's supposed to take a set of coordinates and return a list of coordinates of its neighbors. However, when it hits here:

        if result[i][0] < 0 or result[i][0] >= board.dimensions:
            result.pop(i)

    when i is 2, it gives me an out-of-index error. I can manage to have it print result[2][0], but at the if statement it throws the error. I have no clue how this is happening, and if anyone could shed any light on this problem I'd be forever in debt.

        def neighborGen(row, col, board):
            """ returns lists of coords of neighbors, in order of up, down, left, right """
            result = []
            result.append([row-1, col])
            result.append([row+1, col])
            result.append([row, col-1])
            result.append([row, col+1])
            # prune off invalid neighbors (such as (0,-1), etc etc)
            for i in range(len(result)):
                if result[i][0] < 0 or result[i][0] >= board.dimensions:
                    result.pop(i)
                if result[i][1] < 0 or result[i][1] >= board.dimensions:
                    result.pop(i)
            return result
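
    The crash comes from mutating the list while indexing over its original length: range(len(result)) is fixed at 4, so after a pop() the list is shorter and result[i] eventually points past the end (the second if can also re-test an index the first pop just shifted). Building a filtered list avoids both problems. A sketch:

        def neighborGen(row, col, board):
            """Return coords of valid neighbors, in order: up, down, left, right."""
            candidates = [[row-1, col], [row+1, col], [row, col-1], [row, col+1]]
            # Keep only coordinates that fall inside the board.
            return [[r, c] for r, c in candidates
                    if 0 <= r < board.dimensions and 0 <= c < board.dimensions]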

    Read the article

  • High-performance Academic Server [closed]

    - by PHPsmith
    Suppose I want to build a server for the university's academic needs. The server is dedicated to a single site, where users (students and lecturers) just view and fill in academic data. But at peak times (e.g. once a semester), about 12,000 students will access the site simultaneously. Due to limited resources, I have to build the server using free software (except for the operating system, Windows 7, which the university has already prepared). The hardware is also limited to an ordinary 4-core computer (e.g. an Ivy Bridge Intel Core i7-3770) with approximately 16GB of memory (DDR3 1600 MHz), equipped with an RJ-45 port (Intel 82579 Gigabit Ethernet). With all these limitations, I have to choose the software (web server, database, etc.) appropriate for achieving this goal. I have decided to build the site in PHP. Please help me by answering the following questions based on your expertise (my prime candidates after googling are in parentheses):

    - Which web server is fastest, most stable, and most secure when deployed and optimized for PHP, and why? (nginx)
    - Which PHP accelerator is fastest, most stable, and compatible with the selected web server, and why? (APC with Zend Optimizer+)
    - Which database is fastest, most stable, and most secure when deployed and optimized for the selected web server and PHP accelerator? (MySQL)
    - Are there any mistakes in my assumptions so far? If so, please enlighten me.
    - Is there anything else I need to know in order to achieve this goal? If so, please enlighten me.

    I understand that performance also depends on the implementation of the source code, so assume the site will be built as efficiently as possible (e.g. using AJAX).

    Read the article

  • Multivalue Mysql Inserts using HibernateTemplate

    - by Langali
    I am using Spring's HibernateTemplate and need to insert hundreds of records into a MySQL database every second. I'm not sure what the most performant way of doing it is, but I am trying to see how multi-value MySQL inserts do under Hibernate:

        final String query = "insert into user(age, name, birth_date) values(24, 'Joe', '2010-05-19 14:33:14'), (25, 'Joe1', '2010-05-19 14:33:14')";

        getHibernateTemplate().execute(new HibernateCallback(){
            public Object doInHibernate(Session session) throws HibernateException, SQLException {
                return session.createSQLQuery(query).executeUpdate();
            }
        });

    But I get this error: 'could not execute native bulk manipulation query. Please check your query...' Any idea how I can use a multi-value MySQL insert with Hibernate? Or is my query incorrect? Are there any other ways I can improve the performance? I did try the saveOrUpdateAll() method, and that wasn't good enough!
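
    One route that sidesteps Hibernate's native-SQL parser entirely is to drop to plain JDBC inside the same session and use statement batching. This is only a sketch: it assumes Hibernate 3.2+ (for Session.doWork and org.hibernate.jdbc.Work) and a hypothetical users collection holding the rows to insert.

        getHibernateTemplate().execute(new HibernateCallback() {
            public Object doInHibernate(Session session) throws HibernateException, SQLException {
                session.doWork(new Work() {   // org.hibernate.jdbc.Work, Hibernate 3.2+
                    public void execute(Connection connection) throws SQLException {
                        // One parameterized statement, many batched rows.
                        PreparedStatement ps = connection.prepareStatement(
                                "insert into user(age, name, birth_date) values (?, ?, ?)");
                        for (User u : users) {   // 'users' is a hypothetical final local/field
                            ps.setInt(1, u.getAge());
                            ps.setString(2, u.getName());
                            ps.setTimestamp(3, u.getBirthDate());
                            ps.addBatch();
                        }
                        ps.executeBatch();   // one round trip per batch
                        ps.close();
                    }
                });
                return null;
            }
        });

    Separately, with MySQL Connector/J, adding rewriteBatchedStatements=true to the JDBC URL makes the driver rewrite such a batch into exactly the multi-value INSERT form attempted above, which is usually where the big speedup comes from.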

    Read the article

  • Why is my server performance degrading to the point of stopping, periodically?

    - by Pascal Aschwanden
    So, once in a while, I see in Firebug that a request takes over 15 or even 60 seconds to respond, and sometimes never does. Here is what I've ruled out:

    - It's not the CPU, because every time I check the server load all 3 numbers are less than 6.
    - It's not the memory, because that's fairly low too, less than 50%.
    - It's not the I/O either, because I've seen the graphs that Joyent sent back to me when I requested them, and they show less than 3MB of I/O (mostly all reads).
    - It's not the SQL performance; I've profiled every last SQL command that runs, and they're all (99.9% of them anyway) running in less than 30ms; most run in less than 5ms.
    - I've also been profiling all the script execution times, and even when the problem occurs, the script always manages to finish in 50ms or less (that's 1/20th of a second).

    Now, I do run a lot of ajax calls: 1 every 2 seconds per user, and I have 300+ DAU. But even if all 300 are playing simultaneously, that's still only 150 calls per second max. The only other thing I can think of is that one of my neighbors is funky. The problem is highly intermittent: 99% of the time it works perfectly and there's excellent performance, but 99%+ is not good enough. Eventually the performance gets so bad I have to restart the server, at which point everything is fine again. I've done this about 4 times now. Any ideas? Note: this is on Joyent, VPS, intro package, 256MB of RAM with bursting. Here is the MySQL status info:

        Traffic      Total     ø per hour
        Received     18 MiB    29 MiB
        Sent         134 MiB   221 MiB
        Total        151 MiB   251 MiB

        Connections                  Total   ø per hour   %
        max. concurrent connections  5       ---          ---
        Failed attempts              0       0.00         0.00%
        Aborted                      0       0.00         0.00%
        Total                        9,418   15.59 k      100.00%

    Read the article

  • mysqldump --where with = operator doesn't get all rows - Help!

    - by JonathanLIVE
    I have a situation with a particular table that now thinks it contains 4 petabytes of data. I know that sounds cool, but I assure you, it is only on a 60GB partition. This table has 9 fields in it. One of them is a domain_id field. It is the best field to identify the rows by, as there are only approximately 6300 distinct values. The only other candidate field has over 2 million distinct values, and that's just more difficult. I cannot do a straight mysqldump because it will attempt to output all 4PB of data and fill the drive long before it gets close to that, so I need to surgically remove the good stuff, destroy the db, and recreate it. I believe if I can do a dump for each domain_id value, then I will get most of the usable data out of it. This is what I am trying to use:

        mysqldump -u root --skip-opt -q --no-create-info --skip-add-drop-table --max_allowed_packet=1000000000 database table --where="domain_id=10" > domains10.sql

    Using this I expect every row with domain_id 10 to be exported. However, when I check the export, I am only getting 1 row, even though when I look at the db there are many, many rows. It is as though the operator just finds one, then gives up. I have tried various operators. Using < or > I am able to get more of the data, but the export stops short at certain rows where the data has been compromised. With over 6000 ids to go through, I can't narrow down which rows are being affected in the export easily enough. So, what I need is an operator that will basically do what I thought = would do: simply give me an export of all records that match the specific field. Also note, the only way I got this DB even accessible is through an innodb force recovery of 3. So I need to get this right, because after this is done, I have to drop the db in order to make mysql functional again. Looking forward to any helpful answers.
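
    For the dump-per-domain_id plan, a shell loop keeps each id in its own file, so one corrupted range poisons only one dump instead of the whole export. A sketch reusing the flags above (whether SELECT DISTINCT survives on this table under force recovery is an open question; 'database' and 'table' are placeholders as in the command above):

        # Hypothetical loop: one dump file per domain_id; failures are logged
        # so the bad ids can be identified afterwards.
        for id in $(mysql -N -e 'SELECT DISTINCT domain_id FROM database.table'); do
            mysqldump -u root --skip-opt -q --no-create-info --skip-add-drop-table \
                database table --where="domain_id=$id" > "domain_$id.sql" \
                || echo "domain_id $id failed" >> failed_ids.txt
        done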

    Read the article

  • Good Hosting Providers With Zend Framework Support [closed]

    - by manyxcxi
    I currently use ixwebhosting for my hosting services. They're cheap and work (most of the time). The databases are horribly slow, the servers are horribly slow, and their support (though usually prompt) is tough to deal with. That being said, they're cheap, I've got around 20 domains hosted in my account, none of them are high volume, and they worked JUST well enough -- until today. This isn't meant to be a condemnation of ixwebhosting though; their prices are very low for what they do offer, and most things work just fine, most of the time. I need to be able to host web apps written with Zend Framework in a fairly easy fashion. The server performance can't be worse than what I've already had (a pretty low hurdle to clear), and I don't want to spend $30/mo. These are not money-making websites; they're projects. My requirements are PHP 5.3, ZF support, MySQL databases, multiple domains -- not much. Who should I look at, and who should I look out for?

    Read the article

  • Errors not being redirected to an Http handler if redirectMode="ResponseRewrite"

    - by Luk
    I see similar questions, but it looks like they were due to an unrelated issue. In ASP.NET 3.5, I have a custom error handler that logs errors and redirects users. My web.config is set up like this:

        <httpHandlers>
          <add path="error.ashx" type="MySite.Tools.WebErrorLogger, MySite.Tools" verb="*"/>
        </httpHandlers>
        <customErrors mode="On" defaultRedirect="error.ashx" redirectMode="ResponseRewrite">
        </customErrors>

    When redirectMode is set to "ResponseRedirect", everything works fine (apart from Server.GetLastError() being null, but that seems to be intended). However, when using ResponseRewrite, my handler is not called and I see the ASP.NET default error pages. Any idea how I could do this? (Unfortunately, I can't use an aspx page or do my error handling in global.asax due to other constraints.)
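
    For what it's worth, ResponseRewrite is implemented with Server.Execute rather than an HTTP redirect, and Server.Execute resolves its target as a page or static file path rather than going through the <httpHandlers> registrations, which would explain why the .ashx handler never fires. One workaround that avoids both an .aspx page and global.asax is an IHttpModule hooked to the application's Error event. A hedged sketch (the WebErrorLogger call is assumed to match the existing logging class):

        // Hypothetical module: reuses the existing logging logic, registered
        // under <httpModules> instead of <httpHandlers>.
        public class ErrorLoggingModule : System.Web.IHttpModule
        {
            public void Init(System.Web.HttpApplication app)
            {
                app.Error += (sender, e) =>
                {
                    var ctx = ((System.Web.HttpApplication)sender).Context;
                    var ex = ctx.Server.GetLastError();   // still non-null at this point
                    // MySite.Tools.WebErrorLogger.Log(ex);  // assumed hand-off
                };
            }

            public void Dispose() { }
        }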

    Read the article

  • Server specification recommendation

    - by foo
    To cut a long story short, I can't buy an item (server/CPU/motherboard/RAM) that costs more than USD 330. However, I can combine them: I can buy a CPU that costs USD 330 and a motherboard that costs USD 330. With this limitation, I can't buy a powerful 1U server, which would definitely cost more than USD 330. With that in mind, I was hoping to build a powerful desktop PC to be used as a database server. However, in my experience desktop PCs don't last very long; usually the motherboard just dies by itself after 1 or 2 years. So, what would you recommend I buy with this kind of budget? Every item must be <= USD 330. It will be used as a MySQL server. RAID would be nice. 1TB is big enough for my data. I do not need an external graphics card (onboard would do just fine), mouse, keyboard, or monitor. Linux-friendly. One Ethernet port is good enough. It's important that the hardware is made of components that will last long (at least 3 years or so). The server will be placed in an air-conditioned room, but good ventilation for the server is always preferred. I won't overclock it. An Intel processor is preferred. Thanks in advance.

    Read the article

  • 5 year old server upgrade

    - by rizzo0917
    I am looking to upgrade a server for a web app. Currently the application is running very sluggishly. We've made some adjustments to MySQL (that's another issue in itself) and arranged for the heaviest queries to run on a copy of the database on another server that we keep as a backup; however, this will not last much longer, and we are looking to upgrade. Currently the server's CPUs are (4) Intel(R) XEON(TM) CPU 2.00GHz, with 1 gig of RAM. The database is 442.5 MiB, with about 1,743,808 records. There are two parts to the program: one, side A, inserts and updates most of the data; side B reads the data and does some minor updates. Currently our biggest day for side A is 800 users (of 40,000 users all year) entering data into the system, and our side B load is currently unknown, but we have a total of 1000 clients. The system will most likely cap out at 5000 side B clients, with about 300,000 side A users a year. The current database is 5 years old, so we can expect it to grow pretty rapidly, possibly doubling each year (we can most likely archive older records if it comes to that). So with that being said, should we get a server for each side of the app, side A being the master and side B the slave, with any updates made on side B routed to side A? So the question is: should I get 2 of these or 1?

        2 x Intel Nehalem Xeon E5520 2.26Ghz (8 Cores)
        12GB DDRIII Memory
        500GB SATAII HDD
        100Mbps Port Speed

    And naturally I would need a redundant backup, so it could potentially be 4 of them.

    Read the article

  • please take a look at my server's ram usage

    - by user66779
    Hi, I am a noob with servers. I have a CentOS 5.5 VPS with 512MB RAM. My goal is to have it host just one Magento store. I've installed Magento on the server without any control panel, by just installing LAMP myself plus whatever PHP extensions were necessary to get Magento to install. As soon as I visit my Magento store, suddenly the RAM on the VPS is almost completely used, with only about 100MB left. Please see this screenshot of htop, taken after just one visit (mine) to the website: http://img714.imageshack.us/img714/1944/screenouv.png As you can see there's only around 100MB left. Is that normal? I'm wondering if I might have done something stupid with the server that makes it very resource hungry. I installed Apache from the CentOS base repo, PHP 5.3 from the IUS repository, and MySQL 5.1 also from the IUS repo. I haven't changed any of the default config files for any of these, except to set memory_limit to 256M in php.ini. Is there anything I can do to free more RAM? I'm clueless, but I see each Apache daemon is using 8% of available RAM, and AFAIK each visitor needs one Apache daemon, so I would run out of RAM with just a handful of visitors. Thanks for your advice.
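
    That htop reading is fairly typical for an untuned LAMP stack; the bigger risk is Apache spawning its default maximum of children and pushing the box into swap. The usual first step on a 512MB VPS is to cap the prefork MPM in httpd.conf. The numbers below are a hedged starting point sized for children using roughly 8% of RAM each, not a tested recommendation:

        <IfModule prefork.c>
            # ~10 children at ~40MB each stays comfortably under 512MB.
            StartServers        2
            MinSpareServers     2
            MaxSpareServers     4
            MaxClients          10
            # Recycle children periodically so PHP memory bloat doesn't build up.
            MaxRequestsPerChild 500
        </IfModule>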

    Read the article

  • Windows 7 update failed (language pack update)

    - by yihang
    I am using Windows 7 Ultimate and have tried many times to install the language pack for Chinese (Simplified) from Windows Update, and it keeps failing. The error code was 8007065B: Windows Update encountered an unknown error. What does the code mean? I have tried to Google it but cannot find anything. What should I do to solve this problem?

    Read the article

  • Nginx+Passenger: 502 Bad Gateway from Nginx when passing urlencoded URLs in GET vars

    - by jimeh
    Here are examples of URLs that don't work:

        http://domain/do?url=http%3A%2F%2Fwww.linkedin.com%2Fin%2Fperson
        http://domain/do?url=http%3A%2F%2Fwww.linkedin.com%2F

    However, the following URL does work:

        http://domain/do?url=http%3A%2F%2Fwww.linkedin.com

    Also, this only happens with Nginx; using Passenger with Apache it works fine, but we use Nginx on our production machines. Here's the entry in Nginx's error log:

        2009/12/01 09:30:51 [error] 6407#0: *136 upstream prematurely closed connection while reading response header from upstream, client: xxx.xxx.xxx.xxx, server: domain, request: "GET /do?url=http%3A%2F%2Fwww.linkedin.com%2F HTTP/1.1", upstream: "passenger://unix:/tmp/passenger.6335/master/helper_server.sock:", host: "domain"

    Read the article

  • Choose Default Program does not work (is broken) on Windows

    - by Piotr Dobrogost
    For some time now, when I click Open with... | Choose Default Program from Windows Explorer's context menu, I get this error:

        This file does not have a program associated with it to perform this action. Create an association in the Set Associations control panel.

    I get this error no matter what the extension of the selected file is. Any ideas how to fix this?

    Read the article
