Search Results

Search found 2856 results on 115 pages for 'amazon beanstalk'.

Page 104/115

  • Cannot properly read files on the local server

    - by Andrew Bestic
    I'm running a RedHat 6.2 Amazon EC2 instance using stock Apache and IUS PHP53u+MySQL (+mbstring, +mysqli, +mcrypt), and phpMyAdmin from git. All configuration is near-vanilla, assuming the described installation procedure. I've been trying to import SQL files into the database using phpMyAdmin to read them from a directory on my server. phpMyAdmin lists the files fine in the drop-down, but returns a "File could not be read" error when actually trying to import. Furthermore, when trying to execute file_get_contents(); on the file, it also returns a "failed to open stream: Permission denied" error. In fact, when my brother was attempting to import the SQL files using MySQL "SOURCE" as an authenticated MySQL user with ALL PRIVILEGES, he was getting an error reading the file. It seems that we are unable to read/import these files with ANY method other than root under SSH (although I can't say I've tried every possible method). I have never had this issue under regular CentOS (5, 6, 6.2) installations with the same LAMP stack configuration. Some things I've tried after searching Google and StackExchange:
    - CHMOD 0777 both directory and files
    - CHOWN root, apache (the only two users I can think of that PHP would use)
    - Importing SQL files with total size under both upload_max_filesize and post_max_size
    - PHP open_basedir commented out, or = "/var/www" (my sites are using Apache VirtualHosts within that directory, and all the SQL files are deep within that directory)
    - PHP safe mode is OFF (it was never ON)
    At the moment I have solved this issue with the smaller files by using the FILE UPLOAD method directly to phpMyAdmin, but this will not be suitable for uploading my 200+ MiB SQL files as I don't have a stable Internet connection. Any light you could shed on this situation would be greatly appreciated. I'm fair with Linux, and for the things that do stump me, Google usually has an answer. Not this time, though!
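
    For reference, one common culprit on RHEL-family systems that plain chmod/chown cannot fix is SELinux: if the SQL files carry the wrong security context, httpd (and therefore PHP) is denied read access no matter what the POSIX permissions say. A hedged diagnostic sketch (the path is hypothetical):

        getenforce                              # "Enforcing" means SELinux is active
        ls -Z /var/www/example.com/sql-dumps    # inspect the files' security context
        # relabel so httpd is allowed to read the files:
        sudo chcon -R -t httpd_sys_content_t /var/www/example.com/sql-dumps
        # or, purely to confirm the theory, disable enforcement temporarily:
        sudo setenforce 0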

    Read the article

  • Enabling mod_wsgi in Apache for a Django app on Gentoo

    - by hobbes3
    I installed Apache, Django, and mod_wsgi on Gentoo using emerge (on Amazon EC2). I know that mod_wsgi is configured in /etc/apache2/modules.d/70_mod_wsgi.conf:

        <IfDefine WSGI>
        LoadModule wsgi_module modules/mod_wsgi.so
        </IfDefine>
        # vim: ts=4 filetype=apache

    So in my /etc/conf.d/apache I added the WSGI module:

        APACHE2_OPTS="-D DEFAULT_VHOST -D INFO -D SSL -D SSL_DEFAULT_VHOST -D LANGUAGE -D WSGI"

    But when I try to list the loaded modules, mod_wsgi isn't listed:

        root ~ # apache2 -M | grep wsgi
        Syntax OK

    I also know that mod_wsgi isn't loading properly because the Apache configuration file doesn't recognize WSGIScriptAlias. By the way, for Django to work I need to include a custom Apache configuration file. Where should I insert the line below?

        Include "/var/www/localhost/htdocs/mysite/apache/apache_django_wsgi.conf"

    I currently have that in the httpd.conf file, but I feel like that file will get reset whenever I upgrade Gentoo or a related package. EDIT: it seems the mod_wsgi file is located in /usr/lib64/apache2/modules/mod_wsgi.so. Here are my detailed Apache settings:

        root@ip-99-99-99-99 /usr/portage/eclass # apache2 -V
        Server version: Apache/2.2.21 (Unix)
        Server built: Mar 7 2012 06:52:30
        Server's Module Magic Number: 20051115:30
        Server loaded: APR 1.4.5, APR-Util 1.3.12
        Compiled using: APR 1.4.5, APR-Util 1.3.12
        Architecture: 64-bit
        Server MPM: Prefork
          threaded: no
          forked: yes (variable process count)
        Server compiled with....
          -D APACHE_MPM_DIR="server/mpm/prefork"
          -D APR_HAS_SENDFILE
          -D APR_HAS_MMAP
          -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
          -D APR_USE_SYSVSEM_SERIALIZE
          -D APR_USE_PTHREAD_SERIALIZE
          -D APR_HAS_OTHER_CHILD
          -D AP_HAVE_RELIABLE_PIPED_LOGS
          -D DYNAMIC_MODULE_LIMIT=128
          -D HTTPD_ROOT="/usr"
          -D SUEXEC_BIN="/usr/sbin/suexec"
          -D DEFAULT_PIDLOG="/var/run/httpd.pid"
          -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
          -D DEFAULT_LOCKFILE="/var/run/accept.lock"
          -D DEFAULT_ERRORLOG="logs/error_log"
          -D AP_TYPES_CONFIG_FILE="/etc/apache2/mime.types"
          -D SERVER_CONFIG_FILE="/etc/apache2/httpd.conf"
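
    For reference, a hedged guess: apache2 -M run by hand does not read APACHE2_OPTS from /etc/conf.d/apache, so the -D WSGI define never reaches it; passing the define manually should show the module if the init script is in fact loading it:

        apache2 -D WSGI -M | grep wsgi

    As for the Include line, a sketch that should survive upgrades on Gentoo is a dedicated vhost file (file name hypothetical), since /etc/apache2/vhosts.d/ is not overwritten by emerge:

        # /etc/apache2/vhosts.d/10_django.conf
        <IfDefine WSGI>
            Include "/var/www/localhost/htdocs/mysite/apache/apache_django_wsgi.conf"
        </IfDefine>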

    Read the article

  • Insufficient channel capacity of 1GBit

    - by Roman S
    There is a caching server (Varnish): it receives data from Amazon S3 on request, saves it for some time, and serves it to clients. We have hit the limit of our 1GBit channel: peak load over a 4-hour window completely saturates it. Server performance is sufficient for now. Approximately 4.5TB of data is transmitted per day; more than 100TB accumulates per month. The first thought that comes to mind is simply to add one more 1GBit port and sleep peacefully until 2GBit is no longer enough (which may happen quite quickly) or until one server can no longer handle the load; then we just add new caching servers. But then we need a load balancer that always sends requests for a given URL to one and the same server (to avoid multiple copies of the same cached objects). Here are the questions:
    - Does the balancer need bandwidth equal to the sum of the bandwidth of all the caching servers?
    - What should we do if the balancer runs out of ports: add more balancers, or solve the problem with round-robin DNS?
    - What are the standard approaches to such problems?
    - Can anyone recommend hosting companies that can solve this problem? We are interested in the American and European markets.
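
    For reference, a hedged sketch of the URL-affinity balancing described above, using HAProxy (names and addresses hypothetical); hashing the request URI rather than the client keeps each object on exactly one cache node, and consistent hashing limits reshuffling when nodes are added or removed:

        backend varnish_pool
            balance uri             # hash the request URI, not the client IP
            hash-type consistent    # keep the URL-to-node mapping stable as the pool grows
            server cache1 10.0.0.11:80 check
            server cache2 10.0.0.12:80 check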

    Read the article

  • Installing ffmpeg + dependencies on AWS Linux AMI (repo issues)

    - by HdN8
    I'm installing ffmpeg to run on an Amazon Linux AMI, and have added the rpmforge repo and the dag repo. Here are some guidelines I'm using for reference: TWoZaO and Razuna. The rpmforge repo has ffmpeg, but if you try to install it then it will complain that it is missing dependencies (for me, libSDL-1.2.so.0()(64bit)). Regardless, I will install ffmpeg from svn so I can be sure to enable the options I want (namely libx264). It seems strange to me, though, that SDL is not in rpmforge or dag; according to both of my references above, it should be there. I tried to grab it manually from here, but it needs these dependencies, so no-go:

        error: Failed dependencies:
            SDL = 1.2.10-8.el5 is needed by SDL-devel-1.2.10-8.el5.x86_64
            alsa-lib-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libGL-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libGLU-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libSDL-1.2.so.0()(64bit) is needed by SDL-devel-1.2.10-8.el5.x86_64
            libX11-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libXext-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libXrandr-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libXrender-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libXt-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
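
    For reference, a hedged outline of the source build mentioned above (repository URLs and tool versions are as they stood at the time and may differ now); x264 is built first so ffmpeg's configure can find it:

        yum install -y gcc make nasm
        git clone git://git.videolan.org/x264.git
        cd x264 && ./configure --enable-shared && make && sudo make install && cd ..
        svn checkout svn://svn.ffmpeg.org/ffmpeg/trunk ffmpeg
        cd ffmpeg && ./configure --enable-gpl --enable-libx264
        make && sudo make install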

    Read the article

  • Wireless card on HP laptop not working

    - by D. Strout
    I just bought an HP Envy m6-1125dx online from Best Buy. When I got it home and started it up, the wireless card did not work well - at all. I could connect, but any real usage would cause the connection to start dropping every 30 seconds or so, and it would be really slow. Taking another look at the reviews on the Best Buy site, it seems only a few others had this problem, so I took it to my local Best Buy and exchanged it for another unit. Got it home again and the card had the same issues. Which leads to my dilemma. First: does this model come with several different cards? Mine is a Ralink RT5390R (on both units I received). If it does, then I can keep exchanging until I get a unit with a different card. I wouldn't ask this, except it seems weird that only a few people mentioned this issue, so I thought that might be one possibility. I looked into replacing the card with a different one myself, but it seems that HP blocks certain wireless cards. However, some people reported success in replacing the card, and this site said it was only an issue on "older HP computer[s]". Can anyone confirm this? Finally, if that fails/will not work, does anyone know what I can get through Best Buy? I am concerned that they will not put in any card other than the Ralink, and after two of those, I don't want that. Can I ask Best Buy support to use a different card? Can they even get another card from HP? I guess the base question is: should I attempt to replace the card myself (two days via Amazon to get a new card), should I try to get the laptop repaired through Best Buy (two to four weeks), should I go for a different model laptop from Best Buy, or should I try a different unit of the same model (three's the charm?).

    Read the article

  • Cannot Access Shared Folder From IIS

    - by Tim Scott
    From IIS I need to access a folder on another computer. Both servers are Windows 2008 SP2, and they live in a Virtual Private Cloud on Amazon EC2. They reach one another by private IP -- they are in WORKGROUP, not a domain. I can access the shared folder manually when logged in to the client as Administrator. But IIS gets "access denied." Here's what I have done:
    - Set File Sharing = ON
    - Set Password Protected Sharing = OFF
    - Set Public Folder Sharing = ON
    - Shared the folder
    - Added permission to the share: Everyone, Full Control
    - Added permission to the share: NETWORK SERVICE, Full Control
    - Verified that File & Printer Sharing is checked in Windows Firewall
    - Opened port 445 to inbound traffic from local sources
    I tried adding <remote-machine-name>\NETWORK SERVICE to the share, but it says it does not recognize the machine, which makes sense, I guess. As I said, from the other computer I have no trouble accessing the shared folder from my user account, but IIS is shut out. How does the file server even know the difference? I would assume that with Everyone given full control and password-protected sharing turned off, it would not matter what the client user account is. In any case, how to solve? UPDATE: To clarify, I am not trying to serve up files on the share directly through IIS. Rather I am writing files to the share from my code (System.IO).
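
    For reference, a standard workaround in a workgroup (no domain) is to create the same local user with the same password on both machines, grant it rights on the shared folder, and run the site's application pool as that account. A hedged sketch with hypothetical names and paths:

        :: on BOTH machines, create a matching local account
        net user svcshare Str0ngPass#1 /add

        :: on the file server, grant it modify rights on the shared folder
        icacls D:\Shares\Uploads /grant svcshare:(OI)(CI)M

        :: on the IIS box, run the application pool as that account
        %windir%\system32\inetsrv\appcmd set apppool "DefaultAppPool" ^
            /processModel.identityType:SpecificUser ^
            /processModel.userName:svcshare /processModel.password:Str0ngPass#1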

    Read the article

  • Wireless very slow on one laptop on network, all other machines normal?

    - by th3dude19
    My new laptop (Acer Aspire Timeline 3810TZ running Windows 7 Home Premium 64bit) is acting very strange on my wireless network. Below are the issues I'm noticing:
    - The connection frequently drops. I see the icon change from 'full bars' to 'empty bars with yellow star (meaning no connection)' occasionally.
    - Almost every website I visit (Firefox) hangs for a long time on 'Looking up www.amazon.com', for example. After a long pause, it finally starts loading the website.
    Neither of these problems exists on any other machine on my network. I also have a desktop running the same OS wirelessly and it works fine. I've run several Speedtest.net tests and the speeds are great (20MBit down/4 up). Results from pingtest.net are as follows:
    - Line quality: D
    - Ping: 46ms
    - Jitter: 65ms
    - Packet Loss: 9%
    These results are to a server that is less than 10 miles from my residence. The results on the other machines in my house are normal. Any suggestions? This is becoming very annoying, as I purchased this machine primarily for browsing.
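
    For reference, since the hangs happen at the 'Looking up ...' (DNS) stage, a hedged first test is to time name resolution directly and, if it is slow, point this one machine at a different resolver (the resolver address is just an example):

        nslookup www.amazon.com
        netsh interface ip set dns "Wireless Network Connection" static 8.8.8.8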

    Read the article

  • Having Trouble Ripping Some CDs

    - by James
    Hi. When I buy CDs I tend to rip them to FLAC right away. When ripping I use Foobar2000 or Exact Audio Copy and enable secure ripping, which uses error correction. Recently I bought a 2-CD compilation album brand new, but when I tried to rip the second CD on my laptop using Foobar2000 it struggled with the last 2 tracks and was unable to finish. EAC was also unable to get an accurate rip and reports read errors. Ripping in fast mode results in audible errors in the output track. I have tried another computer and am having similar problems. I cannot see any damage to the disc and it has not been dropped or anything. The weird thing is that I had similar problems with a different album and different PC a while back. That other CD was also a compilation disc, so it was also right up to the CD capacity limit, and again it was the last few tracks that would not rip. Dozens of other discs have ripped fine. So I am wondering if the CD is simply defective, or whether it is something else. How common are defective CDs? Do some CD drives struggle with CDs of this capacity? Or is this some kind of copy protection? I'm thinking of asking Amazon for a replacement, but it would be annoying if I get the same problem again.

    Read the article

  • Server Clustering (Django, Apache, Nginx, Postgres)

    - by system-matrix
    I have a project deployed with Django, Apache, Nginx and Postgres. The project has a requirement of live data viewable to customers. The project's main points are:
    1. Devices in the field send data to the server (devices are also like website users) after login.
    2. A background import process imports the uploaded data into Postgres.
    3. The web users of the system use this data and can send commands to the devices, which the devices read when they log in.
    4. There are also background analysis routines running on the data.
    All of the above is deployed on one Amazon EC2 cloud machine. The project currently supports over 600 devices and 400 users, but as the number of devices increases over time, the performance of the server is going down. We want to extend this project so that it can support more and more devices. My initial thinking is: we will create one more server like the current one and divide the devices between these two servers. But again, we need a central user and device management point through the Django admin. Any ideas? What are the best possible ways to create a scalable architecture? How can I create a Postgres cluster and use it with Django, if possible?
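
    For reference, one hedged intermediate step before a full Postgres cluster is Django's built-in multi-database support (Django 1.2+): writes stay on the primary while reads go to a streaming-replica Postgres box, and the admin remains a single central management point. A sketch with hypothetical names:

        # settings.py (sketch)
        DATABASES = {
            'default': {'ENGINE': 'django.db.backends.postgresql_psycopg2',
                        'NAME': 'app', 'HOST': 'primary.internal'},
            'replica': {'ENGINE': 'django.db.backends.postgresql_psycopg2',
                        'NAME': 'app', 'HOST': 'replica.internal'},
        }
        DATABASE_ROUTERS = ['myapp.routers.ReplicaRouter']

        # myapp/routers.py (sketch)
        class ReplicaRouter(object):
            def db_for_read(self, model, **hints):
                return 'replica'   # send SELECTs to the standby
            def db_for_write(self, model, **hints):
                return 'default'   # keep all writes on the primary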

    Read the article

  • Why cache static files with Varnish, why not pass?

    - by Saif Bechan
    I have a system running nginx / php-fpm / varnish / WordPress and Amazon S3. Now I have looked at a lot of configuration files while setting up the system, and in all of them I found something like this:

        /* If the request is for pictures, javascript, css, etc */
        if (req.url ~ "\.(jpg|jpeg|png|gif|css|js)$") {
            /* Remove the cookie and make the request static */
            unset req.http.cookie;
            return (lookup);
        }

    I do not understand why this is done. Most of the examples also run NginX as a webserver. Now the question is, why would you use the Varnish cache to cache these static files? It makes much more sense to me to only cache the dynamic files so that php-fpm / mysql don't get hit that much. Am I correct, or am I missing something here? UPDATE: I want to add some info to the question based on the answer given. If you have a dynamic website, where the content actually changes a lot, caching does not make sense. But if you use WordPress for a static website, for example, this can be cached for long periods of time. That said, more important to me is static content. I have found a link with some tests and benchmarks on different cache apps and webserver apps: http://nbonvin.wordpress.com/2011/03/14/apache-vs-nginx-vs-varnish-vs-gwan/ NginX is actually faster in getting your static content, so it makes more sense to just let it pass. NginX works great with static files. Apart from that, most of the time static content is not even in the webserver itself. Most of the time this content is stored on a CDN somewhere, maybe AWS S3, something like that. I think the Varnish cache is the last place where you want to have your static content stored.
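
    For reference, a hedged sketch of the alternative being argued for: let nginx serve the static extensions itself with long-lived cache headers, so those requests never reach Varnish or php-fpm at all:

        location ~* \.(jpg|jpeg|png|gif|css|js)$ {
            expires 30d;                    # far-future Expires header
            add_header Cache-Control public;
        }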

    Read the article

  • Alternative Bluetooth Stacks for Windows 7 64bit

    - by Martin
    I have a notebook with an inbuilt Broadcom BCM2046 bluetooth adapter and several bluetooth HID devices (mice, keyboards, etc.). The operating system is Windows 7 64 bit Professional. The HID devices all work perfectly with other computers, but on the system mentioned above, problems with some power-saving features inside the HID devices occur (see e.g. Amazon reviews for the Microsoft Mobile Keyboard 6000 not waking up). I have tried the bluetooth drivers supplied by Windows Update and the latest Broadcom drivers directly from the Broadcom updater software. The problems persist (I can rule out any further configuration issues or alternative device drivers; I have tried every possibility). I have tried a trial version of the BlueSoleil bluetooth stack and it solved the wake-up problem. However, the BlueSoleil stack causes some other problems, is relatively expensive, and I would prefer not to use it. My question: are there any other alternative bluetooth stacks available for Windows 7 64bit? To my knowledge there used to be a Toshiba bluetooth stack for non-Toshiba hardware, but the older versions I have found on the internet do not install; they do not seem to recognize the bluetooth hardware when installing the driver.

    Read the article

  • Guide To Setting Up And Accessing PhpMyAdmin On NGINX, Ubuntu 11.04, EC2 (Remote MySQL Instance)

    - by darkAsPitch
    I have set up a domain name to run on Amazon EC2 running Ubuntu 11.04, nginx and php5-fpm. The domain name works great; I have set up its own sites-available configuration file and sym-linked it to sites-enabled. I installed phpMyAdmin via sudo apt-get install phpmyadmin and followed the instructions. I then added this server block to my nginx configuration and restarted nginx:

        server {
            listen 80;
            server_name phpmyadmin.domain.com;
            location / {
                root /usr/share/phpmyadmin;
                index index.php;
            }
            # make sure all php files are processed by fast_cgi
            location ~ \.php {
                # try_files $uri =404;
                fastcgi_index index.php;
                fastcgi_pass 127.0.0.1:9000;
                include fastcgi_params;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_param PATH_INFO $fastcgi_path_info;
                fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }

    I have also added the appropriate DNS A records for phpmyadmin.domain.com, but phpmyadmin.domain.com just shows a 404 error code. All other subdomains do not respond at all, so at least something is working here. FYI, I have edited the /etc/phpmyadmin/config.inc.php file so that I can connect to a remote MySQL database. What else do I need to do?
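
    For reference, a hedged guess at the 404: with root declared only inside location /, $document_root is not set for the PHP location, so SCRIPT_FILENAME points php-fpm at the wrong path. A sketch with root hoisted to the server level (domain name as in the question):

        server {
            listen 80;
            server_name phpmyadmin.domain.com;
            root /usr/share/phpmyadmin;
            index index.php;

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }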

    Read the article

  • Converting PDF eBooks into a Kindle format

    - by Ender
    Over the past couple of years I've amassed quite a collection of guides, tutorials and ebooks in PDF format. A lot of these are quite useful for work, especially PDF documentation, and rather than have to be at a computer every time I want to read how to do something in Sitecore or to read through a software testing ebook, I'd like to do it on my brand-spanking-new Kindle. However, even though there is now a native PDF reader on the Kindle, due to the nature of PDFs they are practically unreadable. The text doesn't wrap because of how PDFs are sized, and so far, after a bunch of Google searches, I've yet to find a viable solution to get my PDFs converted into a readable Kindle format. Sometimes these books have code or pictures/tables in them, but most of the time they're text-heavy, and to be honest I'd be surprised if there wasn't a free tool to handle converting PDF to one of the (seemingly many) Kindle formats. So, can anyone help me out with this? EDIT: I've tried Calibre, and have checked their forums to play with some of the advanced settings, yet the solutions available seem to be extremely poor, especially if the book you're attempting to read contains equations, code, or anything outside of plain text. I've also tried Amazon's conversion service, which wasn't much help with such documents. The best way I have found so far is to build the entire thing over again in ePub or RTF format and convert to MOBI from there. This works for text-heavy books with tables, but anything technical still isn't covered. Can anyone help with this?
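
    For reference, a hedged sketch of the Calibre command-line route (file names hypothetical); heuristic processing attempts to un-wrap the hard line breaks that make converted PDFs unreadable, though, as noted above, results on code and equations will still vary:

        ebook-convert book.pdf book.mobi --enable-heuristics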

    Read the article

  • Windows 7 ICS client web failure

    - by n8wrl
    I have several Windows 7 PCs connected on a LAN via a hub. One has a Verizon 3G connection and works great. I have Internet Connection Sharing enabled on it, which automagically set the LAN connection to 192.168.137.1 and enabled DHCP. I am trying to get the client PCs working one at a time; the others are off. The client is able to:
    - Get an IP via DHCP with correct settings.
    - Ping any web address I can throw at it, so DNS and routing are working.
    - Use Windows Update successfully.
    But web sites hang in IE - all but google.com! I type www.msn.com, microsoft.com, amazon.com, etc. They all ping via a cmd window, but IE just hangs: it says web site found, but the green progress bar just slowly creeps and no content displays. www.google.com comes up even after clearing the browser and DNS caches. I am pulling my hair out - what am I missing? EDIT: After some more gyrations with a router, I'm back to ICS. Same symptoms, only now I have an answer to Andrew's question: YES, I can do Google searches, but clicking on any of the result links hangs! I let one sit for half an hour with no timeout or error.
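
    For reference, these symptoms (ping and DNS fine, tiny pages load, larger pages hang forever) are a hedged but classic match for an MTU black hole over a 3G/ICS link; one test is to find the largest unfragmented payload and pin the client's MTU just below it (interface name hypothetical):

        ping -f -l 1400 www.msn.com     :: lower -l until the ping succeeds; MTU = that value + 28
        netsh interface ipv4 set subinterface "Local Area Connection" mtu=1400 store=persistent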

    Read the article

  • Implementing emailing (bulk & event-based) features for my website

    - by Kabeer
    Hello. For my upcoming social networking website, I am looking for suggestions on the best way to implement emailing. Here are some of my requirements and constraints.
    Requirements:
    - Should be able to send emails based on events (new registrations, change password, etc.), promotions (advertisements based on user consent), bulk mails (newsletters), reminders (profile updates), etc. I hope I got the point through.
    - Should be able to process faults (incorrect email address, mail-box full, etc.)
    - User-initiated invites (inviting friends to connect)
    Constraints:
    - As of now I am looking at GoDaddy for hosting. Subsequently I shall move to, maybe, Amazon Cloud. GoDaddy seems to be excruciatingly conservative (not bad always) when it comes to the ability to send email.
    - My tests on GoDaddy so far have been discouraging. There is a limit to the number of emails I can send, and sometimes if an email carries special characters it throws strange exceptions, like claiming there was a virus-affected attachment (even though I hadn't attached a thing). The replies from GoDaddy support have been equally funny.
    My intent is not to portray GoDaddy as wrong, but I am looking for a work-around that frees me from the said constraints. I am looking for a mechanism / service that is either free or very cost effective. I wonder how other sites address this. Mine is a .Net / Windows based application.
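
    For reference, a common pattern for a .NET application is to route all mail through an external SMTP relay rather than the web host's own mail service, which sidesteps the host's sending limits; a hedged web.config sketch (.NET 4+; host and credentials are hypothetical):

        <system.net>
          <mailSettings>
            <smtp from="no-reply@example.com" deliveryMethod="Network">
              <network host="smtp.relay-provider.example" port="587"
                       userName="smtp-user" password="smtp-pass" enableSsl="true" />
            </smtp>
          </mailSettings>
        </system.net>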

    Read the article

  • Nginx proxy to IIS Connection Timeout

    - by MitMaro
    I am having an issue with random timeouts with an Nginx proxy connecting to an IIS machine. I have been watching a packet capture between the two servers, and it seems that the IIS machine is receiving a SYN packet but is not responding with what I think should be a SYN-ACK. Before the timeout occurs there seems to be a slower response from the IIS server. There is no unusual memory or processor usage on the IIS or Nginx machine. Some information on the servers and setup:
    Nginx machine:
    - Ubuntu 10.04 64bit
    - Nginx 0.7.65
    - Amazon EC2
    Windows machine:
    - Windows Server 2008
    - IIS 7
    - ASP.net application in Integrated mode
    Nginx error:

        2011/01/10 17:57:40 [error] 8297#0: *30 connect() failed (110: Connection timed out) while connecting to upstream, client: 209.***.***.***, server: secure.example.com, request: "GET /a/path/deliver.aspx HTTP/1.1", upstream: "http://***.***.***.****:****//another/path/deliver.aspx", host: "secure.example.com"

    WireShark packets:

        6521.449528 10.***.***.*** -> 174.***.***.*** TCP 38695 > us-cli [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=477422103 TSER=0 WS=7
        6524.443239 10.***.***.*** -> 174.***.***.*** TCP 38695 > us-cli [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=477422403 TSER=0 WS=7
        6530.443241 10.***.***.*** -> 174.***.***.*** TCP 38695 > us-cli [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=477423003 TSER=0 WS=7
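
    For reference, repeated SYNs with no SYN-ACK (as in the capture above) usually point at the Windows side dropping or rate-limiting new connections rather than at nginx. On the nginx side, a hedged mitigation is to give up on a stalled connection attempt sooner instead of waiting out the full default timeout (all three directives exist in nginx 0.7):

        proxy_connect_timeout 10s;
        proxy_read_timeout    60s;
        proxy_next_upstream   error timeout;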

    Read the article

  • Can Internet data be used by malware when the PC is off?

    - by Val
    I have noticed over the last month that my off-peak data has been used at a rate of approx 350MB per hour - this has meant that I have gone over my quota and been slowed down to 256k by my ISP. There is no one in the house using the connection at that time (2am-8am is my ISP's off-peak window). My PC and other wireless devices (iPad and iPhone) are turned off. I have changed the wireless password on my modem 3 times and it is now 30 digits long, so I don't think someone else is using my wireless access between 2am and 8am. It has been suggested by my ISP that I may have malware/spyware on my computer. Sorry for my ignorance, but can malware still run if the PC is off? I did look at my modem's log and followed an IP address to a service called Amazon Simple Storage Service (S3). Could this company possibly be the culprit? I am not too tech savvy, so any assistance is appreciated. I have run a barrage of spyware-cleaning software, e.g. Malwarebytes, Spybot, etc. Cheers, Val

    Read the article

  • Industrial-strength cloud file storage

    - by ArthurG
    I'm looking for an industrial-strength cloud file storage system. It will be used by multiple people in a startup. Our requirements:
    - Transparent file system access: files and folders in the file system must be able to transparently access (read and write) files in the cloud; files must be synchronized whenever network access is available and buffered otherwise. The system must be usable by non-technical people.
    - Access control: we need to control who can access which files, at least on a very coarse basis. E.g., the developers will be able to access the system design documents, only the corporate folks can access recruiting documents, and only management can access certain corporate documents. Dropbox provides this via Sharing folders, but that's not adequate, if I understand it correctly, because there's no authentication of the sharing user. So the cloud service should have a notion of an account (our startup) with multiple users with distinct credentials and rights for each user.
    - Clients: it must be accessible from Macs and PCs; I would hope that it supports Linux (e.g., Ubuntu) too.
    - Security: it must provide robust security.
    - Backup: the cloud service must reliably back up the files.
    - Versioning: change version history is a big plus, but not required.
    - Not free: we're willing to pay for the service.
    So far, we've reviewed the following, albeit not completely thoroughly:
    - Dropbox: has all except (1) access control, for the reason given above, and (2) security, as discussed here http://www.economist.com/blogs/babbage/2011/05/internet_security and here http://blog.dropbox.com/?p=821.
    - Windows Live Mesh: has all except (1) clients, only supporting Windows 7 and OS X.
    - SpiderOak: has all except (1) transparent file system access, which is only available for 1 user.
    - Amazon Cloud: doesn't offer (1) transparent file system access.
    - Rackspace Cloud Drive: has all except (1) access control and (2) versioning.
    I'll gladly include any clarifications or additional systems the community provides. Arthur

    Read the article

  • Processes slow after some time of actively running

    - by Yervand Aghababyan
    I have several cron jobs running on an Ubuntu machine. Each one does some pretty heavy-load stuff. The cron jobs are parsing files, and the bigger the file, the longer it takes to parse it. The strange thing is that if I make the files too big (like 30MB) the script kind of hangs. It starts processing them really enthusiastically, but after some time (something like 5-10 minutes) the CPU usage of the process drops a lot and it gets into some "zombie" state. If prior to this the process in htop was using 70-80% of the CPU, then after this drop occurs it slows down to something like 5-10%. The load average drops down as well. The status of the processes sometimes changes to D in htop, which I had remembered as meaning zombie, though it actually indicates uninterruptible sleep (usually waiting on I/O). Today I noticed the same behavior in mysql processes when executing heavy queries (a query took something like 4 hours to execute). The cron jobs are mostly PHP, and during their processing most of the CPU is eaten by the php process and not mysql, so I think the issue is not with a specific language/program but with the way the processes are "managed". The only other place I've seen similar behavior was on my Amazon EC2 micro instance, when after some aggressive use of CPU the CPU quota was taking effect and everything was slowing down dramatically. This is a dedicated machine running Ubuntu. What may be the cause?
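
    For reference, D state plus falling CPU usage usually means the jobs have stopped being CPU-bound and are stuck waiting on the disk; a hedged way to confirm is to watch disk utilisation while a job degrades:

        iostat -x 5    # from the sysstat package; sustained high %util implicates the disk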

    Read the article

  • Which is the best smart automatic file replication solution for cloud-storage-based systems?

    - by TORr0t
    I am looking for a solution for a project I am working on. We are developing a web system where people can upload their files and other people can download them (similar to the rapidshare.com model). The problem is that some files are demanded much more than others. The scenario is like this: I have uploaded my birthday video and shared it with all of my friends. I uploaded it to myproject.com and it was stored on one of the storage nodes, which has a 100mbit connection. The problem is that once all of my friends want to download the file, they can't, since the bottleneck here is the 100mbit connection, which is 15MB per second, but I have 1000 friends and they can each only download at 15KB per second. I am not taking into account that the hdd is serving the same files. My network infrastructure is as follows: a 1gbit (client) server connected to 4 storage nodes that have 100mbit connections. The 1gbit server can handle the 1000 users' traffic if one storage node can stream more than 15MB per second to my 1gbit (client) server, and visitors will stream directly from the client server instead of the storage nodes. I can do that by replicating the file onto 2 nodes. But I don't want to replicate all files uploaded to my network, since that costs much more. So I need a cloud-based system that will push files onto replicated nodes automatically when demand for those files is high, and, when the demand is low, delete them from the other nodes so each file stays on only 1 node. I have looked at Gluster and asked in their IRC channel; Gluster can't do such a thing. It is only able to replicate all of the files or none of them. But I need the cluster software to do it automatically. Any solutions? (instead of recommending me Amazon S3)

    Read the article

  • Windows 2008 Server can't connect to FTP

    - by stivlo
    I have Windows 2008 Server R2, and I am trying to install FTP services. My problem is that I can't connect from outside; FileZilla complains with:

        Error: Connection timed out
        Error: Could not connect to server

    Here is what I did. With the Server Manager, I installed the roles FTP Server, FTP Service and FTP Extensibility. In Internet Information Services version 7.5, I chose Add FTP Site, enabled Basic Authentication, and allowed a user to connect with Read and Write. In FTP Firewall Support on the main server, just after the start page, I set the Data Channel Port Range to 49100-49250 and set the external IP address to the one I see from outside. If I click on FTP IPv4 Address and Domain Restrictions, and click on Edit Feature Settings, I see that access for unspecified clients is set to Allow, so I click OK without changing those defaults. In FTP SSL Policy, I have set Require SSL connection; the certificate is self-signed. I tried to connect with FileZilla from the same host and it works; however, it doesn't work remotely, as I said above. I've enabled pfirewall.log, but apparently nothing gets logged. The server is in Amazon EC2, and in the security group inbound firewall rules, I've set that port 21 and ports 49100-49250 accept connections from everywhere. What else should I be checking to solve the problem?
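
    For reference, two hedged checks that commonly bite this exact setup: Windows Firewall on the instance must allow the same passive range that IIS advertises (the EC2 security group alone is not enough), and the FTP service needs a restart before data-channel changes take effect:

        netsh advfirewall firewall add rule name="FTP passive data" dir=in action=allow protocol=TCP localport=49100-49250
        net stop ftpsvc && net start ftpsvc

    Also worth noting: with Require SSL, the control channel is encrypted, so stateful firewalls and their FTP helpers cannot inspect it to open data ports dynamically; the explicit port range plus matching rules is the only path.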

    Read the article

  • Measuring performance indicators on a cluster

    - by Aditya Singh
    My architecture is based on Amazon. An ELB load balancer balances POST requests among m1.large instances. Every instance has an nginx server on port 80 which distributes the requests to 4 python-tornado servers on the backend which handle the request. These tornado servers are taking about 5-10ms to respond to one request, but this is the internal compute time of every request. I want to put this thing under test: I want to measure the response time from the ELB to upstream and back, how it varies as the QPS throughput is increased, and plot a graph of time vs. QPS vs. latency and other factors like CPU and memory. Is there software to do that, or should I log everything somewhere with latency checks and then analyze the whole log to get the stuff out? I would also need to write a self-monitor which keeps checking the whole response time. Is it possible to do it with a script from within the server? If so, will it be accurate?
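
    For reference, a hedged sketch of driving stepped load at the ELB from an outside box with ApacheBench (URL and payload file hypothetical); rerunning with increasing -c and recording the percentile table gives the latency-vs-QPS curve, while CPU and memory are sampled on the instances in parallel:

        ab -n 20000 -c 100 -p payload.json -T 'application/json' http://my-elb-dns-name/endpoint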

    Read the article

  • Are ZFS snapshots + S3 a viable backup system for several VMs and general fileserver storage?

    - by AllanA
    I've been tasked with setting up a backup system for my small office (around 12 people). Most of our production stuff is on the AWS cloud, so what I need to back up are some small office/development files (under 100G right now), plus our operational VMs and development, which round out to a bit under 1T. I just need something reliable, convenient, and straightforward. I'm comfortable with Linux, FreeBSD, and to some extent Solaris 10, so I'm leaning toward a full server rather than an appliance system ala Openfiler or FreeNAS. What I'm contemplating is a small fileserver for general storage and nightly backups of the virtual machines, followed up by an offsite backup to Amazon's S3 storage service. It'd be the usual incremental backups nightly and full backup weekly. My question is if using ZFS snapshots, both locally and dumped to S3 via 'zfs send [-i]', is a viable backup tool? Or should I stick to using Duplicity, or some other method entirely? ZFS snapshots on the internal fileserver/backup machine sound like a perfect way to provide quick and convenient data recovery, so I'm likely to go with that for local redundancy. (If you folks see scenarios where relying on ZFS snapshots would be worse than a more traditional archiving backup, feel free to convince me.) But are snapshots flexible enough to lean on for recovery from the loss of my backup server? Or am I better off with something more traditional? (feel free to recommend free or commercial backup solutions you favor.)
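
    For reference, a hedged sketch of the snapshot-plus-send cycle under consideration (pool, dataset and bucket names hypothetical); incremental sends carry only the blocks changed between two snapshots, and restoring requires zfs receive into a pool rather than pulling individual files out of the stream:

        zfs snapshot tank/vms@2012-06-08
        zfs send -i tank/vms@2012-06-01 tank/vms@2012-06-08 | gzip > /backup/vms-incr.zfs.gz
        s3cmd put /backup/vms-incr.zfs.gz s3://my-backup-bucket/
        # restore path: gunzip -c vms-incr.zfs.gz | zfs receive tank/vms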

    Read the article

  • Apache suddenly very slow on http and faster on https

    - by hsnm
    Background: I have Apache 2 running on Ubuntu. There is low usage on it, and it is mostly accessed for a web service URL from mobile apps. It was working fine until I installed SSL certificates. I now have both http and https. When I access the server using https, I get a fairly quick response (but probably not as fast as before). When I use http, it's so slow. What I tried:
    - From this post: I curl localhost from the host and it takes some time, meaning there is no routing issue.
    - The server runs on an Amazon EC2 instance and is managed by me only.
    - I see that Apache, once running, creates the maximum number of processes it is allowed to, which was not the case before. I lowered MaxClients to 20 and I think I'm getting faster responses, but it still takes over a minute and I always have MaxClients Apache processes.
    - dmesg returns many: [ 1953.655703] TCP: Possible SYN flooding on port 80. Sending cookies.
    - When I netstat I get many entries with SYN_RECV. Possibly a DDoS attack?
    - From EC2's monitoring diagrams I see a pattern of high "Maximum Network In (Bytes)" since 2 days ago. By the way, the server is still being tested; the actual traffic is very low and not consistent.
    - I tried to go with this solution to limit incoming connections using iptables; still no luck, but I'm trying (see the sketch below).
    Question: What could be the problem? Is this a DDoS attack?
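
    For reference, a hedged sketch of the rate-limiting being attempted, plus a check that SYN cookies are actually on (the threshold is a guess to tune):

        sysctl net.ipv4.tcp_syncookies        # 1 means cookies are enabled, matching the dmesg line
        iptables -I INPUT -p tcp --syn --dport 80 -m connlimit \
                 --connlimit-above 20 -j DROP  # cap concurrent connections per source IP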

    Read the article

  • Drowning in documents - recommend doc management solutions?

    - by Martin Day
    I've been researching document management lately. I want to organise my docs at home and also at the office. Finding affordable solutions one can actually test drive is quite hard. Some that I've downloaded just don't seem to work (testing on a brand new Vista PC). I've seen some software on Amazon like PaperPort, but I'm not really sure what they're like. For home I'd like something to organise files, full text search, good scanner integration, nice interface, etc. But for the office it seems harder. I need something that does proper workflow and keeps versions. It will have an audit trail. Documents can be approved, checked in/out, etc. I know a few clients who would like something similar. It would be great just to import thousands of documents from a shared drive and get them indexed with dupes killed. I'd like to be super clear about how/where the documents are being stored so that maintenance and backups are clear. My Google/Twitter searches lead back to the same tired and vague webpages pushing what look like expensive and custom-made solutions. Some might be very good, I suppose, but it's darn hard to tell. I don't mind a hosted package, but all in all I don't think something like Google Docs, as good as it is now, will work. There are too many quirks and missing features (as compared to Office). Being able to work directly with the common Office file formats is important. I've noted a similar-sounding question asked here back in August, but it didn't seem to turn up too many solutions that I could easily and quickly apply. Also there could have been some changes since then, so I feel it's worth asking.

    Read the article
