Search Results

Search found 9847 results on 394 pages for 'cloud backup'.

Page 367 of 394

  • WPF: closing child window closes parent window

    - by Thomas Spranger
    I have pretty much the same situation as the sample mentioned here. In short: the MainWindow closes when the last child window is closed. My problem: I couldn't solve it with the described solutions, and I can't reproduce it in a small test program; it only happens in one of my bigger programs. Maybe someone has an idea or knows any further steps. Thanks for reading - Thomas
    As requested, here's a bit of code. This is the part in the MainWindow:

        bool editAfterSearch = false;
        Movie selectedMovie = (Movie)this.listView.SelectedItem;
        Movie backup = (Movie)selectedMovie.Clone();
        if (new OnlineSearchWindow().EditMovieViaOnlineSearch(ref selectedMovie, out editAfterSearch))
        {
            this.coverFlow.Update(selectedMovie);
        }

    And this is the part of the ChildWindow:

        public bool EditMovieViaOnlineSearch(ref Movie preset, out bool editAfter)
        {
            this.exitWithOk = false;
            this.editMovieAfterSearch = false;
            this.tbx_SearchTerm.Text = preset.Title;
            this.linkedMovie = preset;
            this.ShowDialog();
            editAfter = editMovieAfterSearch;
            if (this.exitWithOk)
            {
                this.linkedMovie.CloneOnlineInformation(ref preset);
                preset.Bitmap = this.linkedMovie.Bitmap;
                return true;
            }
            else
            {
                return false;
            }
        }
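
    One setting worth checking, purely as a guess at the cause: WPF's default ShutdownMode is OnLastWindowClose, and "closing a window takes the whole application down" problems are often tied to it. A minimal sketch, assuming access to the application object; this is not a confirmed fix for the code above:

        // e.g. in App.xaml.cs, before any child dialogs are shown:
        // keep the application (and thus the MainWindow) alive until the
        // main window itself is closed, not merely the last open window.
        Application.Current.ShutdownMode = ShutdownMode.OnMainWindowClose;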

    Read the article

  • Editing a remote file on-the-fly with PHP

    - by user275074
    Hi, I have a requirement to edit a remote text file on the fly; its content currently stands at ~1MB. I have tried a couple of approaches and both seem to be clunky or hog memory, which I can't rely on. Thinking it out logically, what I'm trying to achieve is:
    1. FTP to a remote server.
    2. Download a copy of the file for backup purposes and store it somewhere locally.
    3. Open the remote file and add the necessary lines.
    4. Remove lines from the remote file as per an array of un-required data generated from the local server.
    Is this possible? I've managed to code steps 1 and 2 but I'm having difficulty with 3 and 4. The way I'm doing it now is to use fgets and return the whole string. Really, I don't want to do this, as it involves manipulating and re-generating the whole string (and it's large) and then re-inserting it between two markers in the remote file. Is there no way of manipulating the lines of text in the file on the fly?
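
    For what it's worth, a minimal sketch of steps 3 and 4 done on a local copy and pushed back up, assuming the ~1MB file is small enough to hold in memory as an array of lines; the host, credentials, paths and the $unwanted array are placeholders:

        <?php
        // Hypothetical connection details.
        $conn = ftp_connect('ftp.example.com');
        ftp_login($conn, 'user', 'pass');
        ftp_pasv($conn, true);

        // Step 2: download a backup copy of the remote file.
        ftp_get($conn, '/tmp/backup.txt', '/remote/path/file.txt', FTP_ASCII);

        // Steps 3 and 4: edit the local copy line by line.
        $lines    = file('/tmp/backup.txt', FILE_IGNORE_NEW_LINES);
        $unwanted = array('line to remove', 'another line');   // built on the local server
        $lines    = array_diff($lines, $unwanted);              // step 4: drop un-required lines
        $lines[]  = 'new line to append';                       // step 3: add required lines

        // Write the result out and replace the remote file with it.
        file_put_contents('/tmp/edited.txt', implode("\n", $lines) . "\n");
        ftp_put($conn, '/remote/path/file.txt', '/tmp/edited.txt', FTP_ASCII);
        ftp_close($conn);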

    Read the article

  • MySQL Need some help with a query

    - by Jules
    I'm trying to fix some data by adding a new field. I have a backup from a few months ago and I have restored this database to my server. I'm looking at a table called pads; its primary key is PadID and the field of importance is called RemoveMeDate. In my restored (older) database there are fewer records with an actual date set in RemoveMeDate. My control date is 2001-01-01 00:00:00, meaning that the record is not hidden, i.e. visible. What I need to do is select all the records from the older database/table with the control date and join them with those from the newer db/table where the control date is not set. I hope I've explained that correctly. I'll try again, with numbers: I have 80,000 visible records in the older table (with the control date set) and 30,000 in the newer db/table. I need to select the 50,000 from the old database, to perform an update query. Here's my query, which I can't get to work as I'd like. jules-fix-reasons is the old database, jules is the newer one:

        SELECT p.padid
        FROM `jules-fix-reasons`.`pads` p
        JOIN `jules`.`pads` ON p.padid = `jules`.`pads`.`PadID`
        WHERE p.RemoveMeDate <> '2001-01-01 00:00:00'
          AND `jules`.`pads`.RemoveMeDate = '2001-01-01 00:00:00'
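
    Reading the description literally (visible = RemoveMeDate equal to the control date), the 50,000 wanted rows are the ones still visible in the old backup but no longer visible in the new data, which is the reverse of the two conditions in the query above. A hedged guess at the intended query:

        SELECT p.padid
        FROM `jules-fix-reasons`.`pads` p
        JOIN `jules`.`pads` n ON p.padid = n.PadID
        WHERE p.RemoveMeDate = '2001-01-01 00:00:00'    -- visible in the old backup
          AND n.RemoveMeDate <> '2001-01-01 00:00:00'   -- hidden in the current data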

    Read the article

  • Which MySQL Fork/Version to Pick??

    - by Drew
    As most of you know, Sun acquired MySQL (and later Oracle acquired Sun), and during these acquisitions there was a lot of FUD in the MySQL community, which resulted in the creation of various forks. Today we have MySQL from MySQL, Percona (XtraDB) MySQL, OurDelta MySQL, MariaDB and Drizzle, to name a few. Which brings us to the source of the problem. We are in the process of upgrading our databases (hardware/software) and I would like to know which one of the forks I should go with. Each has its own set of pros/cons. We are currently using MySQL 5.0.x from MySQL/Linux on an 8-core machine. Our new hardware is a monster with 32 cores and 32GB of memory connecting to fast NetApp storage via FC. I would like to stick with MySQL from MySQL, but I have heard horror stories about how badly MySQL 5.1 performs on many cores. I have also heard that MySQL 5.4 performs better on multi-core machines, but that's still not production-ready. In addition, I have also heard a lot of good things about Percona builds. This is what I know so far:
    MySQL 5.1 from MySQL: reliable choice, but doesn't scale well on a big machine
    Percona: scales well, good backing company; I don't have much experience with it
    MariaDB: don't know much about it besides that it was founded by original MySQL developers (including Monty)
    OurDelta: don't know much
    Drizzle: mostly optimized for cloud computing
    I would like to know the general notion about this problem. Which build/version should I go with? How are you guys picking your builds/versions? Thanks!

    Read the article

  • active record relations – who needs it?

    - by M2_
    Well, I'm confused about Rails queries. For example, given:

        Affiche belongs_to :place
        Place has_many :affiches

    We can do this now:

        @affiches = Affiche.all( :joins => :place )

    or

        @affiches = Affiche.all( :include => :place )

    and we will get a lot of extra SELECTs if there are many affiches:

        Place Load (0.2ms) SELECT "places".* FROM "places" WHERE "places"."id" = 3 LIMIT 1
        Place Load (0.3ms) SELECT "places".* FROM "places" WHERE "places"."id" = 3 LIMIT 1
        Place Load (0.8ms) SELECT "places".* FROM "places" WHERE "places"."id" = 444 LIMIT 1
        Place Load (1.0ms) SELECT "places".* FROM "places" WHERE "places"."id" = 222 LIMIT 1
        ...and so on...

    And (sic!) with :joins used, every SELECT is doubled! Technically we could just write this:

        @affiches = Affiche.all( )

    and the result is totally the same (because we have the relations declared). The way out that keeps all the data in one query is removing the relations and writing a big string with "LEFT OUTER JOIN", but then there is still the problem of grouping the data into a multi-dimensional array and the problem of similar column names, such as id. What is done wrong? Or what am I doing wrong?
    UPDATE: Well, I do get that one string

        Place Load (2.5ms) SELECT "places".* FROM "places" WHERE ("places"."id" IN (3,444,222,57,663,32,154,20))

    and then a list of selects, one by one id. Strange, but I get these separate selects when I'm doing this in each scope:

        <%= link_to a.place.name, **a.place**( :id => a.place.friendly_id ) %>

    The marked a.place is the spot that produces these extra queries.
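
    A hedged guess at those per-row queries: in old-style Rails the generated belongs_to reader has the signature place(force_reload = false), so calling a.place( :id => ... ) passes a truthy argument and forces the association to be reloaded from the database each time, bypassing the records that the IN (...) query already eager-loaded. If the hash was meant for URL generation, it belongs to a route helper rather than the association reader, for example (assuming a named route for places exists):

        <%# hypothetical named route; the point is that a.place itself takes no arguments %>
        <%= link_to a.place.name, place_path( :id => a.place.friendly_id ) %>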

    Read the article

  • Restoring dev db from production: Running a set of SQL scripts based on a list stored in a table?

    - by mattley
    I need to restore a backup from a production database and then automatically reapply SQL scripts (e.g. ALTER TABLE, INSERT, etc.) to bring that db schema back to what was under development. There will be lots of scripts, from a handful of different developers, and they won't all be in the same directory. My current plan is to list the scripts, with their full filesystem paths, in a table in a pseudo-system database. Then I'd create a stored procedure in this database which will first run RESTORE DATABASE and then run a cursor over the list of scripts, creating a command string for SQLCMD for each script, and then executing that SQLCMD string for each script using xp_cmdshell. The sequence of cursor-sqlstring-xp_cmdshell-sqlcmd feels clumsy to me. Also, it requires turning on xp_cmdshell. I can't be the only one who has done something like this. Is there a cleaner way to run a set of scripts that are scattered around the filesystem on the server? Especially, a way that doesn't require xp_cmdshell?
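
    One alternative that keeps the script list in the table but avoids the cursor/xp_cmdshell chain is to drive everything from outside the database, for example with a small PowerShell wrapper around sqlcmd. A rough sketch; the server, database, table and column names are placeholders:

        # Restore the dev copy from the production backup.
        $server = "localhost"
        sqlcmd -S $server -Q "RESTORE DATABASE DevDb FROM DISK = 'C:\backups\prod.bak' WITH REPLACE"

        # Pull the ordered list of script paths out of the pseudo-system database.
        $scripts = sqlcmd -S $server -d AdminDb -h -1 -W `
            -Q "SET NOCOUNT ON; SELECT ScriptPath FROM dbo.SchemaScripts ORDER BY RunOrder"

        # Replay each script against the freshly restored database; -b stops on errors.
        foreach ($path in $scripts) {
            if ($path.Trim()) { sqlcmd -S $server -d DevDb -i $path.Trim() -b }
        }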

    Read the article

  • What are SharePoint (MOSS 2007) development/deployment best practices?

    - by Satish
    We are deploying SharePoint MOSS 2007 at our work. I'm trying to come up with a SharePoint development and deployment methodology. We have Dev/QA/Prod environments and I need a way, preferably automated, to deploy changes from Dev to QA and from there to Prod. We are creating site collections, web parts, etc. Some of it is done directly within SharePoint, some through SharePoint Designer or Visual Studio. I'm looking for a way to extract this and deploy it to the other environments. I tried stsadm backup/restore and import/export, but they all move the data along with it as well; I just need the structure deployed. Content deployment paths and jobs do the same thing. We use MSBuild and CruiseControl.NET for other .NET projects to automate the build/deployment process, and I'm looking for something similar with SharePoint if possible. What are your best practices for this? Since my team is learning, we don't have a defined process and we are open to changing our development process if needed.
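
    For the structure-only part, the usual MOSS 2007 route is to package artifacts (features, web parts, site definitions) into a solution package (.wsp) and push it through stsadm, which slots into an MSBuild/CruiseControl.NET pipeline. A rough sketch; the solution, feature and URL names are placeholders:

        REM Add the packaged solution to the target farm and deploy it.
        stsadm -o addsolution -filename IntranetArtifacts.wsp
        stsadm -o deploysolution -name IntranetArtifacts.wsp -immediate -allowgacdeployment
        stsadm -o execadmsvcjobs

        REM Activate the feature on the target site collection.
        stsadm -o activatefeature -name IntranetArtifacts_WebParts -url http://qa-portal/sites/intranet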

    Read the article

  • Problem with SQL Server "EXECUTE AS"

    - by Vilx-
    I've got the following setup: there is a SQL Server DB with several tables that have triggers set on them (which collect history data). These triggers are CLR stored procedures with EXECUTE AS 'HistoryUser'. The HistoryUser user is a simple user in the database without a login. It has enough permissions to read from all tables and write to the history table. When I back up the DB and then restore it to another machine (a virtual machine in this case, but that does not matter), the triggers don't work anymore. In fact, no impersonation of the user works anymore. Even a simple statement such as this

        exec ('select 3') as user='HistoryUser'

    produces an error:

        Cannot execute as the database principal because the principal "HistoryUser" does not exist, this type of principal cannot be impersonated, or you do not have permission.

    I read in MSDN that this can occur if the DB owner is a domain user, but it isn't. And even if I change it to anything else (their recommended solution), the problem remains. If I create another user without a login, I can use it for impersonation just fine. That is, this works:

        create user TestUser without login
        go
        exec ('select 3') as user='TestUser'

    I do not want to recreate all those triggers, so is there any way I can make the existing HistoryUser work?
    Bump: Sorry, but this is kinda urgent...
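
    One thing that commonly breaks impersonation of pre-existing users after a restore onto another server is the database owner SID no longer matching any login on the new machine. The question mentions changing the owner already, but it may still be worth making sure it points at a login that actually exists on the target server; a hedged thing to try (the database name is a placeholder and sa is just an example of a valid login):

        -- Re-point the restored database at a login that exists on this server.
        ALTER AUTHORIZATION ON DATABASE::[MyRestoredDb] TO [sa];
        GO
        -- Then re-test the impersonation.
        EXECUTE AS USER = 'HistoryUser';
        SELECT 3;
        REVERT;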

    Read the article

  • jQuery class selectors with nested div's

    - by mboles57
    This is part of some HTML from which I need to retrieve a piece of data. The HTML is assigned to a variable called fullDescription.

        <p>testing</p>
        <div class="field field-type-text field-field-video-short-desc">
          <div class="field-label">Short Description:&nbsp;</div>
          <div class="field-items">
            <div class="field-item odd">
              Demonstrates the basics of using the Content section of App Cloud Studio
            </div>
          </div>
        </div>
        <div class="field field-type-text field-field-video-id">
          <div class="field-label">Video ID:&nbsp;</div>
          <div class="field-items">
            <div class="field-item odd">
              1251462871001
            </div>
          </div>
        </div>

    I wish to retrieve the video ID number (1251462871001). I was thinking something like this:

        var videoID = $(fullDescription).find(".field.field-type-text.field-field-video-id").find(".field-item.odd").html();

    Although it does not generate any syntax errors, it does not retrieve the number. Thanks for helping out a jQuery noob! -Matt
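
    A likely reason it comes back empty: when a raw HTML string is passed to $(), the top-level elements (the <p> and the two outer <div class="field ..."> blocks) become the jQuery set itself, and .find() only searches their descendants, so a selector targeting those top-level divs matches nothing. One hedged way around it is to wrap the fragment first, assuming fullDescription is exactly the markup shown:

        // Wrap the fragment so the outer divs become descendants, then drill down.
        var videoID = $.trim(
            $('<div/>').html(fullDescription)
                       .find('.field-field-video-id .field-item')
                       .text()
        );   // "1251462871001"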

    Read the article

  • Is There a Better Way to Feed Different Parameters into Functions with If-Statements?

    - by FlowofSoul
    I've been teaching myself Python for a little while now, and I've never programmed before. I just wrote a basic backup program that writes out the progress of each individual file while it is copying. I wrote a function that determines the buffer size so that smaller files are copied with a smaller buffer and bigger files are copied with a bigger buffer. The way I have the code set up now doesn't seem very efficient, as there is an if statement that then leads to further if statements, creating four branches, and they all just call the same function with different parameters.

        import os
        import sys

        def smartcopy(filestocopy, dest_path, show_progress = False):
            """Determines what buffer size to use with copy()
            Setting show_progress to True calls back display_progress()"""
            #filestocopy is a list of dictionaries for the files needed to be copied
            #dictionaries are used as the fullpath, st_mtime, and size are needed
            if len(filestocopy.keys()) == 0:
                return None
            #Determines average file size for which buffer to use
            average_size = 0
            for key in filestocopy.keys():
                average_size += int(filestocopy[key]['size'])
            average_size = average_size/len(filestocopy.keys())
            #Smaller buffer for smaller files
            if average_size < 1024*10000:
                #Buffer sizes determined by informal tests on my laptop
                if show_progress:
                    for key in filestocopy.keys():
                        #dest_path+key is the destination path, as the key is the relative path
                        #and the dest_path is the top level folder
                        copy(filestocopy[key]['fullpath'], dest_path+key,
                             callback = lambda pos, total: display_progress(pos, total, key))
                else:
                    for key in filestocopy.keys():
                        copy(filestocopy[key]['fullpath'], dest_path+key, callback = None)
            #Bigger buffer for bigger files
            else:
                if show_progress:
                    for key in filestocopy.keys():
                        copy(filestocopy[key]['fullpath'], dest_path+key, 1024*2600,
                             callback = lambda pos, total: display_progress(pos, total, key))
                else:
                    for key in filestocopy.keys():
                        copy(filestocopy[key]['fullpath'], dest_path+key, 1024*2600)

        def display_progress(pos, total, filename):
            percent = round(float(pos)/float(total)*100,2)
            if percent <= 100:
                sys.stdout.write(filename + ' - ' + str(percent)+'% \r')
            else:
                percent = 100
                sys.stdout.write(filename + ' - Completed \n')

    Is there a better way to accomplish what I'm doing? Sorry if the code is commented poorly or hard to follow. I didn't want to ask someone to read through all 120 lines of my poorly written code, so I just isolated the two functions. Thanks for any help.
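
    A hedged sketch of one way to collapse the four branches: decide the buffer size once, build the callback once per file, and loop a single time. It keeps the same copy() and display_progress() interfaces the code above assumes:

        def smartcopy(filestocopy, dest_path, show_progress=False):
            """Copy files, choosing the buffer size once and looping once."""
            if not filestocopy:
                return None

            average_size = sum(int(f['size']) for f in filestocopy.values()) / len(filestocopy)
            # None means "let copy() use its default (small-file) buffer".
            buffer_size = 1024 * 2600 if average_size >= 1024 * 10000 else None

            for key, info in filestocopy.items():
                # name=key binds the current filename for the progress callback.
                callback = (lambda pos, total, name=key: display_progress(pos, total, name)) \
                           if show_progress else None
                if buffer_size is None:
                    copy(info['fullpath'], dest_path + key, callback=callback)
                else:
                    copy(info['fullpath'], dest_path + key, buffer_size, callback=callback)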

    Read the article

  • Git repos over multiple machines - backups and keeping in sync

    - by a-or-b
    I'm new to git so please feel free to RTFM me... I have multiple development sites (none of which can communicate with each other over a network) and am working on a few projects (with a few people) at any one time. What I would ideally have is, at each site, a centralized repository that can be pulled from, while development happens in our own (personal) repos. Then I would like to be able to sync the centralized repos across sites (via USB key, for example). I want a centralized repo at each location because (1) I'm new to git and do break my (personal) local repo by playing around, and (2) some projects get put on hold, so I want to be able to free up disk space by deleting them. This is the "backup" part of my question.
    I was also hoping to be able to use 'git clone --bare' for my centralized repos (and the USB key repos too?) as we don't need the full checkout, just the git benefits. However, I can't seem to get a bare repo to work as a repo I can push from. I've used 'git remote' to set up a remote origin (similar to http://toolmantim.com/thoughts/setting_up_a_new_remote_git_repository) but I can't get 'git push' to work; it seems I need a checked-out repo.
    Does anyone else use this sort of repo/development structure, or is there something fundamental about git usage that I'm missing?
    A solution that I thought about that might not work: if I had a 'git clone --bare' at each site and then a git repo on my removable media with remotes set up for each site, then I could ('pull') sync my USB key with each repo. But then can I update the site repo from my USB key? Could I push from USB?
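
    For reference, a minimal sketch of the layout this seems to be aiming at: a bare repository at each site as the shared push/pull target, personal clones for day-to-day work, and a bare repo on the USB key acting purely as a courier between sites (all paths are placeholders):

        # At each site: a bare central repo that everyone pushes to and pulls from.
        git init --bare /srv/git/project.git

        # Personal working repo at that site.
        git clone /srv/git/project.git ~/work/project
        cd ~/work/project
        # ... edit, commit ...
        git push origin master

        # On the USB key: another bare repo used only to carry history around.
        git clone --bare /srv/git/project.git /media/usb/project.git

        # At another site, update its central repo from the key, and send local work back out.
        cd /srv/git/project.git
        git fetch /media/usb/project.git master:master   # bring changes in (fast-forward)
        git push  /media/usb/project.git master          # push this site's changes to the key

    Pushing works from bare and non-bare repositories alike; it is checking out and committing that needs a working tree, so the bare central and USB repos only ever exchange commits via fetch and push.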

    Read the article

  • Ubuntu + virtualenv = a mess? virtualenv hates dist-packages, wants site-packages

    - by lostincode
    Can someone please explain to me what is going on with Python in Ubuntu 9.04? I'm trying to spin up virtualenv, and the --no-site-packages flag seems to do nothing on Ubuntu. I installed virtualenv 1.3.3 with easy_install (which I've upgraded to setuptools 0.6c9) and everything seems to be installed to /usr/local/lib/python2.6/dist-packages. I assume that when installing a package using apt-get, it's placed in /usr/lib/python2.6/dist-packages/? The issue is, there is a /usr/local/lib/python2.6/site-packages as well that just sits there being empty. It would seem (by looking at the path in a virtualenv) that this is the folder virtualenv uses as a backup. Thus, even though I omit --no-site-packages, I can't access my local system's packages from any of my virtualenvs. So my questions are:
    1. How do I get virtualenv to point to one of the dist-packages? Which dist-packages should I point it to: /usr/lib/python2.6/dist-packages or /usr/local/lib/python2.6/dist-packages/?
    2. What is the point of /usr/local/lib/python2.6/site-packages? There's nothing in there!
    3. Is it first come, first served on the path? If I have a newer version of package XYZ installed in /usr/local/lib/python2.6/dist-packages/ and an older one (from the Ubuntu repos/apt-get) in /usr/lib/python2.6/dist-packages, which one gets imported when I import xyz? I'm assuming this is based on the path list, yes?
    4. Why the hell is this so confusing? Is there something I'm missing here? Where is it defined that easy_install should install to /usr/local/lib/python2.6/dist-packages? Will this affect pip as well?
    Thanks to anyone who can clear this up!
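
    One quick way to see which of these directories actually take part in imports, inside and outside the virtualenv (a sketch; "env" and the package name xyz are placeholders):

        # Outside the virtualenv: the directories Python will search, in order.
        python -c "import sys; print '\n'.join(sys.path)"

        # Inside a virtualenv created with --no-site-packages:
        virtualenv --no-site-packages env
        ./env/bin/python -c "import sys; print '\n'.join(sys.path)"

        # Which file a given import actually resolves to:
        ./env/bin/python -c "import xyz; print xyz.__file__"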

    Read the article

  • Largest triangle from a set of points

    - by Faken
    I have a set of random points from which I want to find the largest triangle by area whose vertices are each on one of those points. So far I have figured out that the largest triangle's vertices will only lie on the outside points of the cloud of points (the convex hull), so I have programmed a function to compute just that (using a Graham scan in n log n time). However, that's where I'm stuck. The only way I can figure out how to find the largest triangle from these points is to use brute force at n^3 time, which is still acceptable in the average case as the convex hull algorithm usually kicks out the vast majority of points. However, in a worst-case scenario where the points are all on a circle, this method would fail miserably. Does anyone know an algorithm to do this more efficiently?
    Note: I know that CGAL has this algorithm, but they do not go into any details on how it's done. I don't want to use libraries; I want to learn this and program it myself (and also be able to tweak it to exactly the way I want it to operate, just like the Graham scan, where other implementations pick up collinear points that I don't want).
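
    For reference, a hedged sketch of the O(h^3) baseline the question describes, run over the h hull vertices and using the cross product for twice the triangle area (this is the brute force to beat, not the faster algorithm being asked about):

        def triangle_area2(a, b, c):
            """Twice the unsigned area of triangle abc, via the cross product."""
            return abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))

        def largest_triangle(hull):
            """Brute-force search over convex-hull vertices; O(h^3)."""
            best_area2, best_tri = 0, None
            n = len(hull)
            for i in range(n):
                for j in range(i + 1, n):
                    for k in range(j + 1, n):
                        a2 = triangle_area2(hull[i], hull[j], hull[k])
                        if a2 > best_area2:
                            best_area2, best_tri = a2, (hull[i], hull[j], hull[k])
            return best_tri, best_area2 / 2.0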

    Read the article

  • Subversion freaking out on me!

    - by Malfist
    I have two copies of a site: one is the production copy and the other is the development copy. I recently added everything in production to a Subversion repository hosted on our Linux backup server. I created a tag of the current version and I was done. I then copied the development copy over the top of the production copy (on my local machine, where I have everything checked out). There are only 10-20 changed files; however, when I use TortoiseSVN to do a commit, it says every file has changed. The diff file generated shows Subversion removing everything and replacing it with the new version (which is exactly the same). What is going on? How do I fix it? An example diff:

        Index: C:/Users/jhollon/Documents/Visual Studio 2008/Projects/saloon/trunk/components/index.html
        ===================================================================
        --- C:/Users/jhollon/Documents/Visual Studio 2008/Projects/saloon/trunk/components/index.html (revision 5)
        +++ C:/Users/jhollon/Documents/Visual Studio 2008/Projects/saloon/trunk/components/index.html (working copy)
        @@ -1,4 +1,4 @@
        -<html>
        -<body bgcolor="#FFFFFF">
        -</body>
        +<html>
        +<body bgcolor="#FFFFFF">
        +</body>
         </html>
        \ No newline at end of file
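
    When removed and added lines look byte-for-byte identical like this, the difference is usually invisible, most often line endings (the development copy using LF where the committed files used CRLF, or vice versa). A hedged way to confirm, and to calm it down, using one of the files shown:

        # Does the "change" disappear when end-of-line differences are ignored?
        svn diff -x --ignore-eol-style components/index.html

        # If so, normalising the EOL property on the text files usually stops the churn.
        svn propset svn:eol-style native components/index.html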

    Read the article

  • Ruby on Rails: Uploading a modified site.

    - by Califer
    I'm having a heck of a time getting a site I modified to work correctly. I didn't set the site up originally, and since the person who set it up no longer works with me, I had to learn Ruby just to make some changes. I made all the changes on the development server and everything worked fine. Then I did a diff between production and development and moved only my changes over. Unfortunately, when I loaded my changes onto the production server I got a lot of errors. I've changed all of the permissions to 755, which took care of not being able to access anything at all, but after that I started getting a lot of 500 errors. Nothing showed up in the production.log file. I really have no clue what's going wrong, except that perhaps the server is not noticing the new changes. I moved the old site to a backup folder, and the new site crashes whenever it goes to anything that I've changed. In particular, I added a link to a new setup with an extra controller/model/view group. It works fine on development, but in production it gives me a 404. Yes, I did copy all the files up. I even put everything back how it was, but the website is still showing the broken version. I checked the tmp/cache folder but it was empty. Running dispatch.fcgi shows the old site (which I expected), but it still shows the flawed new site when I connect through a browser. I've been tearing my hair out trying to get this to work. Any ideas as to how I can get this to work?
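
    A hedged guess, given that running dispatch.fcgi by hand reflects what is on disk while the browser does not: in production mode Rails caches classes, and long-running FastCGI processes keep serving whatever code they loaded at startup, so they have to be restarted after each deploy. For example (the exact mechanism depends on the hosting setup, so these are assumptions):

        # FastCGI: kill the long-running dispatchers so the web server respawns them with the current code.
        pkill -f dispatch.fcgi

        # Or, if the site is actually served by Phusion Passenger:
        touch tmp/restart.txt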

    Read the article

  • How to design authentication in a thick client, to be fail safe?

    - by Jay
    Here's a use case: I have a desktop application (built using Eclipse RCP) which, on start, pops open a dialog box with 'UserName' and 'Password' fields in it. Once the end user inputs his UserName and Password, a server is contacted (a Spring remote servlet, with the client side being a Spring HttpClient, similar to the approaches here), and authentication is performed on the server side. A few questions related to the above scenario: If this authentication service were to go down, what would be the best way to handle further proceedings? Authentication is something that I cannot do away with. Would running the desktop client in a "limited" mode be a good idea? For instance, important features/menus/views would be disabled and the rest of the application would be accessible. Should I have a backup authentication service running on a different machine? What are the general best practices in this scenario? I remember reading about Google Gears and how it would let you edit and do stuff offline; should something like this be designed? Please let me know your design/architectural comments/suggestions. Appreciate your help.

    Read the article

  • Are there any less costly alternatives to Amazon's Relational Database Services (RDS)?

    - by swapnonil
    Hi All, I have the following requirement. I have with me a database containing the contact and address details of at least 2000 members of my school alumni organization. We want to store all that information in a relational model so that:
    1. This data can be created and edited on demand.
    2. This data is always backed up and should be simple to restore in case the master copy becomes unusable.
    3. All sensitive personal information residing in this database is guaranteed to be available only to authorized users.
    4. This database won't be online in the first 6 months. It will become online only after a website is built on top of it.
    I am not a DBA and I don't want to spend time doing things like backups. I thought Amazon's RDS, with its automatic backup facility, was the perfect solution for our needs. The only problem is that, being a voluntary organization, we cannot spare the monthly $100 to $150 fees this service demands. So my question is: are there any less costly alternatives to Amazon's RDS?

    Read the article

  • How can I prevent double file uploading with Amazon S3?

    - by Tony
    I decided to use Amazon S3 for document storage for an app I am creating. One issue I run into is that while I need to upload the files to S3, I also need to create a document object in my app so my users can perform CRUD actions. One solution is to allow for a double upload: a user uploads a document to the server my Rails app lives on; I validate and create the object, then pass it on to S3. One issue with this is that progress indicators become more complicated. Most out-of-the-box plugins would show the client that the file has finished uploading because it is on my server, but then there would be a decent delay while the file goes from my server to S3. This also introduces unnecessary bandwidth (at least it does not seem necessary). The other solution I am thinking about is to upload the file directly to S3 with one AJAX request, and when that is successful, make a second AJAX request to store the object in my database. One issue here is that I would have to validate the file after it is uploaded, which means I have to run some clean-up code in S3 if the validation fails. Both seem equally messy. Does anyone have something more elegant working that they would not mind sharing? I would imagine this is a common situation with "cloud storage" being quite popular today. Maybe I am looking at this wrong.

    Read the article

  • VisualSVN: How to roll back the revision number?

    - by Ita
    The company I work for has suffered a major server failure, during which the SVN repository was lost. But there is still hope! We have an old backup of the repository, which I've managed to successfully restore using VisualSVN. The problem I'm facing now is that I can't update/commit folders that were checked out before the failure. The reason for this problem is that, for instance, a local folder has a revision number of 2361, while the repository itself holds a revision number of 2290, which is older. Is there a way to deal with this issue? Can I somehow change the revision numbers on either the local copy or the server copy? A few points:
    - I'm using TortoiseSVN 1.6.6.
    - I can check out folders from the repo and the connection is active.
    - I've picked one of my folders and used the Relocate option on it. This helped me see that there is something wrong with the revision number.
    - I've experimented a bit with the merge option but this led me nowhere special. (I'm open to suggestions.)
    Thank you for your time, Ita
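
    If the revision numbers in the old working copies simply can't be reconciled with the restored (older) repository, one hedged way to salvage the local edits is to take a clean checkout at the restored HEAD and lay the modified files from the old working copy over it; svn export strips the stale .svn metadata that carries the too-new revision numbers (the URL and paths are placeholders):

        # Clean working copy against the restored repository (HEAD is r2290 here).
        svn checkout https://server/svn/repo/trunk repo-clean

        # Copy the files from the old, pre-failure working copy over it, without the old .svn folders.
        svn export --force old-working-copy repo-clean

        # Review and commit the differences as a fresh revision.
        svn status repo-clean
        svn commit repo-clean -m "Reapply local changes lost in the server failure"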

    Read the article

  • MySQL to SQL Server ODBC Connector?

    - by Scott C.
    My boss wants to have data in the MySQL DBs used for our website "linked and synced" with a financial server that has its DB in SQL Server. Sooooo... even though I have no idea how to accomplish this, it just sounds like an absolute nightmare, especially since the MySQL DB is most likely going to be hosted in the cloud and not on a machine next to the financial server. Any ideas how to accomplish this? (Within reason?) Also, his big thing is that he wants to basically pull up the data from any record a user enters, and using data pulled from that, do all sorts of calculations using ANOTHER program that stores its data (apparently) in SQL Server. Thinking of all the data I might have to convert makes me very uneasy. Please tell me an ODBC connector eliminates complicated junk like this. :/ I'm trying to talk him into just having MySQL do a nightly dump into a CSV file or something and using that (rather than a connector) to update the SQL Server DBs. I guess I'm just not that comfortable with a server and/or program I have no say over being connected DIRECTLY to my MySQL DB for the website. If there's no good answer for this, can anyone offer a suggestion as to what I can say to talk him out of this? (I'm a low-level IT guy with a decent grasp of programming... but I'm no expert; should I try to push this off to a seasoned IT pro?) Thanks in advance.
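
    For what it's worth, a hedged sketch of the nightly CSV hand-off mentioned above; the table and column names, file paths and delimiters are all placeholders:

        -- On the MySQL side: dump the relevant rows to a CSV file on disk.
        SELECT id, customer, amount, created_at
        INTO OUTFILE '/var/dumps/orders.csv'
        FIELDS TERMINATED BY ',' ENCLOSED BY '"'
        LINES TERMINATED BY '\n'
        FROM orders;

        -- On the SQL Server side, after the file has been copied across:
        BULK INSERT dbo.Orders FROM 'C:\dumps\orders.csv'
        WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');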

    Read the article

  • WCF + Azure = Nightmare!

    - by lsb
    Hi! I've spent the past week trying to get a secure form of WCF to work on Azure, but all to no avail! My use case is pretty simple: I want to call a WCF endpoint in the cloud and pass it messages to be queued for a worker role. Beyond that, I want to limit access to pre-authorized users, authenticated via username and password. I've tried to get this working with Transport, TransportWithMessageCredential and Message security, but nothing seems to work. Indeed, I've worked through every example and snippet that I could find, most recently the "Service using binary HTTP binding with transport security and message credentials and Silverlight client" example on the http://code.msdn.microsoft.com/wcfazure page. I'm pretty sure that I'm being knocked down by small bugs and beta changes, but the end result is that I'm totally stuck. This is a critical path item for me, so any suggestions would be greatly appreciated. A complete working example or a walkthrough would be even better!
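
    Not a walkthrough, but for concreteness, a hedged sketch of the service-side binding the question describes (HTTPS transport with username/password message credentials); the binding name is a placeholder, and the matching endpoint and serviceCredentials configuration still has to be supplied:

        <bindings>
          <basicHttpBinding>
            <binding name="SecureQueueBinding">
              <security mode="TransportWithMessageCredential">
                <message clientCredentialType="UserName" />
              </security>
            </binding>
          </basicHttpBinding>
        </bindings>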

    Read the article

  • Scalability 101: How can I design a scalable web application using PHP?

    - by Legend
    I am building a web application and have a couple of quick questions. From what I've learnt, one should not worry about scalability when initially building the app and should only start worrying when the traffic increases. However, this being my first web application, I am not quite sure if I should take an approach where I design things in an ad hoc manner and later "fix" them. I have been reading stories about how people start off with an app that gets millions of users in a week or two. Not that I will face the same situation, but I can't help but wonder: how do these people do it? Currently, I bought a shared hosting account on Lunarpages and that got me started in building and testing the application. However, I am interested in learning how to build the same application in a scalable manner using the cloud, for instance Amazon's EC2. From my understanding, I can see a couple of components:
    1. There is a load balancer that first receives requests and then decides where to route each request.
    2. The request is then handled by a server replica that processes it, updates the database (if required) and sends back the response to the client.
    3. If a similar request comes in, then a caching mechanism like memcached kicks in and returns objects from the cache.
    4. A black box that handles database replication.
    Specifically, I am trying to do the following:
    - Setting up a load balancer (my homework revealed that HAProxy is one such load balancer)
    - Setting up replication so that databases can be synchronized
    - Using memcached
    - Configuring Apache to work with multiple web servers
    - Partitioning the application to use Amazon EC2 and Amazon S3 (my application is something that will need a great deal of storage)
    Finally, how can I avoid burning myself when using Amazon services? Because this is just a learning phase, I can probably do with 2-3 servers and a simple load balancer with replication, but I want to avoid accidentally paying loads of money. I am able to find resources on individual topics but am unable to find something that starts off from the big picture. Can someone please help me get started?
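
    As a tiny illustration of the caching layer in component 3 above, a hedged PHP sketch using the Memcached extension; the host, key, TTL and the helper that hits the database are all placeholders:

        <?php
        // Check the cache first; fall back to the database and prime the cache on a miss.
        $cache = new Memcached();
        $cache->addServer('127.0.0.1', 11211);

        $key  = 'user:42:profile';
        $data = $cache->get($key);

        if ($data === false && $cache->getResultCode() === Memcached::RES_NOTFOUND) {
            $data = load_profile_from_db(42);   // hypothetical helper that queries MySQL
            $cache->set($key, $data, 300);      // cache the result for five minutes
        }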

    Read the article

  • Exporting database from external server (without SSH access)

    - by Derek Carlisle
    Our current website is hosted by the design agency who originally built it; however, we are bringing development of the website in house and therefore need to export the database from their server and import it into ours. We have FTP and phpMyAdmin access but don't have SSH access to the server. I was hoping to run a PHP script that would mysqldump the database, compress it and then copy it across to our server using scp:

        $backupFile = $_SERVER['DOCUMENT_ROOT'].'/backup' . date("Y-m-d-H-i-s") . '.gz';
        system("mysqldump -h DB_HOST -u DB_USER -pDB_PASS DB_NAME | gzip > $backupFile");
        exec("sshpass -p PASSWORD scp -r -P PORT_NUMBER $backupFile [email protected]:/path/to/directory/");

    I have run this locally from the command line and it worked fine, although I had to install sshpass (the hosting server might not have this installed). Also, I was hoping to run it from the browser, as I don't have command-line access on the hosting server; however, it didn't work and no errors were produced. Can anyone recommend how I can export from a server that I don't have SSH access to and import it to my server? Thanks
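
    If sshpass (or scp itself) turns out not to be available on the hosting server, one hedged fallback that only needs what is already there is to let the script write the .gz somewhere non-public (or export it from phpMyAdmin), pull it down over HTTP(S) or FTP, and import it locally. A sketch of the receiving side; the URL, credentials and database name are placeholders:

        # Fetch the dump produced by the script (or exported via phpMyAdmin).
        curl -u exportuser:secret -o backup.sql.gz https://www.example.com/private/backup.sql.gz

        # Import it into the local MySQL server.
        gunzip < backup.sql.gz | mysql -u root -p our_database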

    Read the article

  • Google App Engine: 404 when accessing /_ah/api

    - by jfu
    I'm trying to build a very simple GAE application, using Eclipse and the Google Plugin for Eclipse. I've generated some Endpoints from an @Entity class, then generated the Cloud Endpoints client library. After that I started the App Engine project (within Eclipse, on the embedded Jetty server). When I try to access /_ah/api I get the following issue:

        HTTP ERROR 500
        Problem accessing /_ah/api/. Reason:
            Failed to retrieve API configs with status: 404
        Caused by:
        java.io.IOException: Failed to retrieve API configs with status: 404
            at com.google.api.server.spi.tools.devserver.ApiServlet.getApiConfigSources(ApiServlet.java:102)
            at com.google.api.server.spi.tools.devserver.ApiServlet.initConfigsIfNecessary(ApiServlet.java:67)
            at com.google.api.server.spi.tools.devserver.RestApiServlet.service(RestApiServlet.java:117)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
            at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1166)
            at com.google.appengine.api.socket.dev.DevSocketFilter.doFilter(DevSocketFilter.java:74)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
            at com.google.appengine.tools.development.ResponseRewriterFilter.doFilter(ResponseRewriterFilter.java:123)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
            at com.google.appengine.tools.development.HeaderVerificationFilter.doFilter(HeaderVerificationFilter.java:34)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
            at com.google.appengine.api.blobstore.dev.ServeBlobFilter.doFilter(ServeBlobFilter.java:63)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
            at com.google.apphosting.utils.servlet.TransactionCleanupFilter.doFilter(TransactionCleanupFilter.java:43)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
            at com.google.appengine.tools.development.StaticFileFilter.doFilter(StaticFileFilter.java:125)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
            at com.google.appengine.tools.development.DevAppServerModulesFilter.doDirectRequest(DevAppServerModulesFilter.java:368)
            at com.google.appengine.tools.development.DevAppServerModulesFilter.doDirectModuleRequest(DevAppServerModulesFilter.java:351)
            at com.google.appengine.tools.development.DevAppServerModulesFilter.doFilter(DevAppServerModulesFilter.java:116)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
            at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:388)
            at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
            at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
            at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765)
            at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:418)

    What am I doing wrong?
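
    A hedged guess at a common cause of "Failed to retrieve API configs with status: 404" on the dev server: the endpoint class is not registered with the SystemServiceServlet in war/WEB-INF/web.xml, so the /_ah/spi backend that the /_ah/api proxy asks for its configs returns 404. A sketch of the registration worth checking (the endpoint class name is a placeholder):

        <servlet>
          <servlet-name>SystemServiceServlet</servlet-name>
          <servlet-class>com.google.api.server.spi.SystemServiceServlet</servlet-class>
          <init-param>
            <param-name>services</param-name>
            <param-value>com.example.MyEntityEndpoint</param-value>
          </init-param>
        </servlet>
        <servlet-mapping>
          <servlet-name>SystemServiceServlet</servlet-name>
          <url-pattern>/_ah/spi/*</url-pattern>
        </servlet-mapping>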

    Read the article
