Search Results

Search found 7545 results on 302 pages for 'backup and restore'.

Page 281/302 | < Previous Page | 277 278 279 280 281 282 283 284 285 286 287 288  | Next Page >

  • Suggestions for a Cron-like scheduler in Python?

    - by jamesh
    I'm looking for a library in Python which will provide at- and cron-like functionality. I'd quite like to have a pure Python solution, rather than relying on tools installed on the box; this way I can run on machines with no cron. For those unfamiliar with cron: you can schedule tasks based upon an expression like:

        0 2 * * 7 /usr/bin/run-backup          # run the backups at 02:00 every Sunday
        0 9-17/2 * * 1-5 /usr/bin/purge-temps  # run the purge-temps command every 2 hours between 9am and 5pm, Monday to Friday

    The cron time expression syntax is less important, but I would like to have something with this sort of flexibility. If there isn't something that does this for me out of the box, any suggestions for the building blocks to make something like this would be gratefully received.

    Edit: I'm not interested in launching processes, just "jobs" also written in Python - Python functions. By necessity I think this would be a different thread, but not a different process. To this end, I'm looking for the expressivity of the cron time expression, but in Python. Cron has been around for years, but I'm trying to be as portable as possible; I cannot rely on its presence.
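
    Not an off-the-shelf answer, but a minimal standard-library sketch of the kind of in-process building block the question describes: jobs are plain Python callables run from a background thread, and the "when" test is itself a callable (a real cron-expression parser could be dropped in where the lambda sits). All names below, including run_backup, are illustrative.

        import threading
        from datetime import datetime

        class SimpleScheduler(threading.Thread):
            """Runs registered Python callables in-process; no cron binary needed."""

            def __init__(self, poll_seconds=30):
                super().__init__(daemon=True)
                self.jobs = []                  # list of (should_run(dt) -> bool, callable)
                self.poll_seconds = poll_seconds
                self._last_minute = None
                self._stop = threading.Event()

            def add_job(self, should_run, func):
                self.jobs.append((should_run, func))

            def stop(self):
                self._stop.set()

            def run(self):
                while not self._stop.wait(self.poll_seconds):
                    now = datetime.now()
                    minute = now.replace(second=0, microsecond=0)
                    if minute == self._last_minute:
                        continue                # never fire twice in the same minute
                    self._last_minute = minute
                    for should_run, func in self.jobs:
                        if should_run(now):
                            func()

        # "0 2 * * 7" (02:00 every Sunday) expressed as a plain predicate:
        # scheduler = SimpleScheduler()
        # scheduler.add_job(lambda dt: dt.weekday() == 6 and dt.hour == 2 and dt.minute == 0,
        #                   run_backup)
        # scheduler.start()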

    Read the article

  • Sharing storage between servers

    - by El Yobo
    I have a PHP-based web application which is currently only using one webserver but will shortly be scaling up to another. In most regards this is pretty straightforward, but the application also stores a lot of files on the filesystem. It seems that there are many approaches to sharing the files between the two servers, from the very simple to the reasonably complex. These are the options that I'm aware of:

        Simple network storage: NFS, SMB/CIFS
        Clustered filesystems: Lustre, GFS/GFS2, GlusterFS, Hadoop DFS, MogileFS

    What I want is for a file uploaded via one webserver to be immediately available if accessed through the other. The data is extremely important and absolutely cannot be lost, so whatever is implemented needs to a) never lose data and b) have very high availability (as good as, or better than, a local filesystem). It seems like the clustered filesystems will also provide faster data access than local storage (for large files), but that isn't of vital importance at the moment. What would you recommend? Do you have any suggestions to add, or anything specifically to look out for with the above options? Any suggestions on how to manage backup of data on the clustered filesystems?

    Read the article

  • Cleaning up a sparsebundle with a script

    - by nickg
    I'm using Time Machine to back up some servers to a sparse disk image bundle and I'd like to have a script to clean up the old backups and re-size the image once space has been freed up. I'm fairly certain the data is protected, because if I delete the old backups by right-clicking, I have to enter a password to be able to delete them. To allow my script to be able to delete them, I've been running it as root. For some reason, it just won't run, and for every file it tries to delete I get rm: /file/: Operation not permitted. Here is what I have as my script:

        #!/bin/bash
        for server in servername; do
            /usr/bin/hdiutil attach -mountpoint /path/to/mountpoint /path/to/sparsebundle/$server.sparsebundle/;
            /bin/sleep 10;
            /usr/bin/find /path/to/mountpoint -type d -mtime +7 -exec /bin/rm -rf {} \;
            /usr/bin/hdiutil unmount /path/to/mountpoint;
            /bin/sleep 10;
            /usr/bin/hdiutil compact /path/to/sparsebundle/$server.sparsebundle/;
        done
        exit;

    One of the problems I thought was causing this was that it needed to have a mountpoint specified, since the default mount was to /Volumes/Time\ Machine\ Backups/; that's why I created a mountpoint. I also thought that it was trying to delete the files too quickly after mounting and it wasn't actually mounted yet; that's why I added the sleep. I've also tried using the -delete option for find instead of -exec, but it didn't make any difference. Any help on this would be greatly appreciated because I'm out of ideas as to why this won't work.

    Read the article

  • How can I use JSONP to download client-side javascript objects?

    - by Alex Mcp
    I'm trying to get client-side JavaScript objects saved as a file locally. I'm not sure if this is possible. The basic architecture is this: ping an external API to get back a JSON object; work client-side with that object, and eventually have a "download me" link; this link sends the data to my server, which processes it and sends it back with a MIME type of application/json, which (should) prompt the user to download the file locally. Right now here are my pieces.

    Server-side code:

        <?php
        $data = array('zero', 'one', 'two', 'testing the encoding');
        $json = json_encode($data);
        //$json = json_encode($_GET['']); //eventually I'll encode their data, but I'm testing
        header("Content-type: application/json");
        header('Content-Disposition: attachment; filename="backup.json"');
        echo $_GET['callback'] . ' (' . $json . ');';
        ?>

    Relevant client-side code:

        $("#download").click(function(){
            var json = JSON.stringify(collection); //serializes their object
            $.ajax({
                type: "GET",
                url: "http://www.myURL.com/api.php?callback=?", //this is the above script
                dataType: "jsonp",
                contentType: 'jsonp',
                data: json,
                success: function(data){
                    console.log( "Data Received: " + data[3] );
                }
            });
            return false;
        });

    Right now when I visit the api.php site with Firefox, it prompts a download of download.json, and that results in this text file, as expected:

        (["zero","one","two","testing the encoding"]);

    And when I click #download to run the AJAX call, it logs in Firebug: Data Received: testing the encoding, which is almost what I'd expect. I'm receiving the JSON string and serializing it, which is great. I'm missing two things - the actual questions: 1) What do I need to do to get the same prompt-to-download behavior that I get when I visit the page in a browser (much simpler)? 2) How do I access, server-side, the JSON object being sent to the server, to serialize it? I don't know what index it is in the GET array (silly, I know, but I've tried almost everything).

    Read the article

  • Creating a tar file with checksums included

    - by wazoox
    Here's my problem: I need to archive a lot (up to 60 TB) of big files (usually 30 to 40 GB each) to tar files. I would like to make checksums (md5, sha1, whatever) of these files before archiving; however, not reading every file twice (once for checksumming, twice for tar'ing) is more or less a necessity to achieve a very high archiving performance (LTO-4 wants 120 MB/s sustained, and the backup window is limited). So I'd need some way to read a file, feeding a checksumming tool on one side, and building a tar to tape on the other side, something along the lines of:

        tar cf - files | tee tarfile.tar | md5sum -

    Except that I don't want the checksum of the whole archive (this sample shell code does just that), but a checksum for each individual file in the archive. I've studied the GNU tar, Pax, and Star options. I've looked at the source of Archive::Tar. I see no obvious way to achieve this. It looks like I'll have to hand-build something in C or similar to achieve what I need. Perl/Python/etc. simply won't cut it performance-wise, and the various tar programs miss the necessary "plugin architecture". Does anyone know of any existing solution to this before I start code-churning?
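
    For what it's worth, a rough sketch of the read-once idea in Python (which the asker suspects won't sustain LTO-4 throughput, so treat it purely as an illustration of the plumbing rather than a solution): wrap each file object so that every block tar pulls for the archive is also fed to a digest.

        import hashlib
        import tarfile

        class HashingReader:
            """File-like wrapper that updates a digest as tar consumes the data."""

            def __init__(self, path):
                self._f = open(path, "rb")
                self.md5 = hashlib.md5()

            def read(self, size=-1):
                data = self._f.read(size)
                self.md5.update(data)
                return data

            def close(self):
                self._f.close()

        def tar_with_checksums(paths, out_path):
            """Builds the archive in one pass and returns {path: md5hex}."""
            checksums = {}
            with tarfile.open(out_path, "w") as tar:
                for path in paths:
                    reader = HashingReader(path)
                    info = tar.gettarinfo(path)        # size/mode/mtime from the file itself
                    tar.addfile(info, fileobj=reader)  # streams the file exactly once
                    reader.close()
                    checksums[path] = reader.md5.hexdigest()
            return checksums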

    Read the article

  • Can't see *all* databases in a remote SQL Server instance

    - by George
    Yesterday I posted a related question on StackOverflow. This problem involved not being able to see a SQL Server 2008 instance on another PC. I am not sure why adding the port number enabled me to see a SQL Server that I could not otherwise see, since the port number that I specified was, after all, the default port. Now I notice that I have another problem. While I can connect to the remote SQL 2008 Server instance, I cannot see all the databases in the instance.

    I am trying to connect to the 2008 instance from another PC using SQL Server 2008 Management Studio. I am connecting from a Windows 7 Ultimate PC to a Windows XP Pro PC. I suspect that my problem has something to do with not all databases in the remote instance having the same version. For example, I "upgraded" a SQL 2005 database to 2008 by doing a backup from 2005 and importing it into 2008. When I realized that this was not one of the databases that I could see from my other PC, I noticed that the compatibility level of the imported database was still 2005, so I changed it to 2008. Still I could not see the database.

    I am sure that this is relevant: I just noticed that on my remote server, the SQL instance node, named "sql2008", says "version 10" when I am on the remote server, but when I connect to the sql2008 remote instance from my local PC, the connection is shown locally as being a "SQL Server version 8.0" instance. I suspect that locally, I am only being shown databases that are somehow in the remote 2008 instance but have not been upgraded. I guess I don't know what constitutes an upgraded database, and I do not know how to connect so as to see all the databases, even if this requires multiple connections from the source PC.

    Read the article

  • Editing a remote file on-the-fly with PHP

    - by user275074
    Hi, I have a requirement to edit a remote text file on-the-fly; its content currently stands at ~1Mb. I have tried a couple of approaches and both seem to be clunky or hog memory, which I can't rely on. Thinking it out logically, what I'm trying to achieve is:

        1. FTP to a remote server.
        2. Download a copy of the file for backup purposes and store it somewhere locally.
        3. Open the remote file and add the necessary lines required.
        4. Remove lines from the remote file as per an array of un-required data generated from the local server.

    Is this possible? I've managed to code steps 1 and 2 but I'm having difficulty with 3 and 4. The way I'm doing it now is to use fgets and return the whole string. Really, I don't want to do this, as it involves manipulating and re-generating the whole string (and it's large) and then re-inserting it in between two markers in the remote file. Is there no way of manipulating the lines of text in the file on-the-fly?
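
    The line-oriented filtering in steps 3 and 4 can be done as a stream rather than one big string; a small sketch of the idea (shown in Python purely for brevity, the same pattern applies in PHP with fgets/fwrite; all names are placeholders):

        # Stream the backed-up copy line by line, drop the un-required lines and
        # append the new ones, then re-upload the edited copy over FTP.
        unwanted = {"obsolete entry 1", "obsolete entry 2"}   # the "un-required data" array
        lines_to_add = ["new line A", "new line B"]

        with open("backup_copy.txt") as src, open("edited_copy.txt", "w") as dst:
            for line in src:
                if line.rstrip("\n") not in unwanted:
                    dst.write(line)
            dst.writelines(l + "\n" for l in lines_to_add)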

    Read the article

  • WPF: closing child window closes parent window

    - by Thomas Spranger
    I have pretty much the same issue as mentioned here. Quickly summarized: the MainWindow closes when the last child window is closed. My problem: I couldn't solve it with the described solutions, and I can't produce a small program where it also takes place - only in one of my bigger programs. Maybe someone has an idea or knows any further steps. Thanks for reading - Thomas.

    As requested, here's a bit of code. This is the part in the MainWindow:

        bool editAfterSearch = false;
        Movie selectedMovie = (Movie)this.listView.SelectedItem;
        Movie backup = (Movie)selectedMovie.Clone();
        if (new OnlineSearchWindow().EditMovieViaOnlineSearch(ref selectedMovie, out editAfterSearch))
        {
            this.coverFlow.Update(selectedMovie);
        }

    And that's the part of the ChildWindow:

        public bool EditMovieViaOnlineSearch(ref Movie preset, out bool editAfter)
        {
            this.exitWithOk = false;
            this.editMovieAfterSearch = false;
            this.tbx_SearchTerm.Text = preset.Title;
            this.linkedMovie = preset;
            this.ShowDialog();
            editAfter = editMovieAfterSearch;
            if (this.exitWithOk)
            {
                this.linkedMovie.CloneOnlineInformation(ref preset);
                preset.Bitmap = this.linkedMovie.Bitmap;
                return true;
            }
            else
            {
                return false;
            }
        }

    Read the article

  • MySQL Need some help with a query

    - by Jules
    I'm trying to fix some data by adding a new field. I have a backup from a few months ago and I have restored this database to my server. I'm looking at a table called pads; its primary key is PadID and the field of importance is called RemoveMeDate. In my restored (older) database there are fewer records with an actual date set in RemoveMeDate. My control date is 2001-01-01 00:00:00, meaning that the record is not hidden, i.e. visible.

    What I need to do is select all the records from the older database/table with the control date and join with those from the newer db/table where the control date is not set. I hope I've explained that correctly. I'll try again, with numbers: I have 80,000 visible records in the older table (with the control date set) and 30,000 in the newer db/table. I need to select the 50,000 from the old database, to perform an update query.

    Here's my query, which I can't get to work as I'd like. jules-fix-reasons is the old database, jules is the newer one:

        select p.padid
        from `jules-fix-reasons`.`pads` p
        JOIN `jules`.`pads` ON p.padid = `jules`.`pads`.`PadID`
        where p.RemoveMeDate <> '2001-01-01 00:00:00'
          AND `jules`.`pads`.RemoveMeDate = '2001-01-01 00:00:00'

    Read the article

  • Ubuntu + virtualenv = a mess? virtualenv hates dist-packages, wants site-packages

    - by lostincode
    Can someone please explain to me what is going on with Python in Ubuntu 9.04? I'm trying to spin up virtualenv, and the --no-site-packages flag seems to do nothing with Ubuntu. I installed virtualenv 1.3.3 with easy_install (which I've upgraded to setuptools 0.6c9) and everything seems to be installed to /usr/local/lib/python2.6/dist-packages. I assume that when installing a package using apt-get, it's placed in /usr/lib/python2.6/dist-packages/? The issue is, there is a /usr/local/lib/python2.6/site-packages as well that just sits there being empty. It would seem (by looking at the path in a virtualenv) that this is the folder virtualenv uses as a backup. Thus even though I omit --no-site-packages, I can't access my local system's packages from any of my virtualenvs.

    So my questions are:

        1. How do I get virtualenv to point to one of the dist-packages? Which dist-packages should I point it to: /usr/lib/python2.6/dist-packages or /usr/local/lib/python2.6/dist-packages/?
        2. What is the point of /usr/lib/python2.6/site-packages? There's nothing in there!
        3. Is it first come, first served on the path? If I have a newer version of package XYZ installed in /usr/local/lib/python2.6/dist-packages/ and an older one (from Ubuntu repos/apt-get) in /usr/lib/python2.6/dist-packages, which one gets imported when I import xyz? I'm assuming this is based on the path list, yes?
        4. Why the hell is this so confusing? Is there something I'm missing here? Where is it defined that easy_install should install to /usr/local/lib/python2.6/dist-packages? Will this affect pip as well?

    Thanks to anyone who can clear this up!
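
    A quick way to see which copy actually wins (this answers question 3 directly): directories on sys.path are searched in order, so printing the path and the imported module's location shows whether the /usr/local or the /usr copy was picked up. "xyz" is a placeholder package name.

        import sys
        from pprint import pprint

        pprint(sys.path)                                  # search order; first hit wins

        import xyz                                        # substitute the real package
        print(xyz.__file__)                               # /usr/local/... or /usr/lib/...
        print(getattr(xyz, "__version__", "unknown"))     # which version got imported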

    Read the article

  • Is There a Better Way to Feed Different Parameters into Functions with If-Statements?

    - by FlowofSoul
    I've been teaching myself Python for a little while now, and I've never programmed before. I just wrote a basic backup program that writes out the progress of each individual file while it is copying. I wrote a function that determines buffer size so that smaller files are copied with a smaller buffer, and bigger files are copied with a bigger buffer. The way I have the code set up now doesn't seem very efficient, as there is an if statement that then leads to other if statements, creating four options, and they all just call the same function with different parameters.

        import os
        import sys

        def smartcopy(filestocopy, dest_path, show_progress = False):
            """Determines what buffer size to use with copy()
            Setting show_progress to True calls back display_progress()"""
            #filestocopy is a list of dictionaries for the files needed to be copied
            #dictionaries are used as the fullpath, st_mtime, and size are needed
            if len(filestocopy.keys()) == 0:
                return None
            #Determines average file size for which buffer to use
            average_size = 0
            for key in filestocopy.keys():
                average_size += int(filestocopy[key]['size'])
            average_size = average_size/len(filestocopy.keys())
            #Smaller buffer for smaller files
            if average_size < 1024*10000:
                #Buffer sizes determined by informal tests on my laptop
                if show_progress:
                    for key in filestocopy.keys():
                        #dest_path+key is the destination path, as the key is the relative path
                        #and the dest_path is the top level folder
                        copy(filestocopy[key]['fullpath'], dest_path+key,
                             callback = lambda pos, total: display_progress(pos, total, key))
                else:
                    for key in filestocopy.keys():
                        copy(filestocopy[key]['fullpath'], dest_path+key, callback = None)
            #Bigger buffer for bigger files
            else:
                if show_progress:
                    for key in filestocopy.keys():
                        copy(filestocopy[key]['fullpath'], dest_path+key, 1024*2600,
                             callback = lambda pos, total: display_progress(pos, total, key))
                else:
                    for key in filestocopy.keys():
                        copy(filestocopy[key]['fullpath'], dest_path+key, 1024*2600)

        def display_progress(pos, total, filename):
            percent = round(float(pos)/float(total)*100,2)
            if percent <= 100:
                sys.stdout.write(filename + ' - ' + str(percent)+'% \r')
            else:
                percent = 100
                sys.stdout.write(filename + ' - Completed \n')

    Is there a better way to accomplish what I'm doing? Sorry if the code is commented poorly or hard to follow. I didn't want to ask someone to read through all 120 lines of my poorly written code, so I just isolated the two functions. Thanks for any help.
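
    One possible refactoring, sketched only (copy() and display_progress() are the asker's existing functions, and the buffer thresholds are kept as-is): decide the buffer size and the progress callback once, then use a single loop instead of four near-identical branches.

        def smartcopy(filestocopy, dest_path, show_progress=False):
            """Same behaviour, but the buffer size and callback are decided once."""
            if not filestocopy:
                return None

            average_size = sum(int(v['size']) for v in filestocopy.values()) / len(filestocopy)
            buffer_size = None if average_size < 1024 * 10000 else 1024 * 2600

            for key, info in filestocopy.items():
                if show_progress:
                    # the default argument pins the current key inside the lambda
                    callback = lambda pos, total, name=key: display_progress(pos, total, name)
                else:
                    callback = None

                if buffer_size is None:
                    copy(info['fullpath'], dest_path + key, callback=callback)
                else:
                    copy(info['fullpath'], dest_path + key, buffer_size, callback=callback)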

    Read the article

  • Git repos over multiple machines - backups and keeping in sync

    - by a-or-b
    I'm new to git so please feel free to RTFM me... I have multiple development sites (none of which can communicate via a network with each other) and am working on a few projects (with a few people) at any one time. What I would ideally have is, at each site, a centralized repository that can be pulled from, but development would occur in our own (personal) repos. Then I would like to be able to sync across the centralized repos (via USB key, for example). I want a centralized repo at each location as (1) I'm new to git and do break my (personal) local repo by playing around and (2) some projects get put on hold, so I want to be able to free up disk space by deleting them. This is the "backup" part of my question.

    I was also hoping to be able to use 'git clone --bare' for my centralized repos (and the USB key repos too?) as we don't need the full checkout, just the git benefits. However I can't seem to get a bare repo to work as a repo I can push from. I've used 'git remote' to set up a remote origin (similar to http://toolmantim.com/thoughts/setting_up_a_new_remote_git_repository) but I can't get 'git push' to work - it seems I need a checked-out repo.

    Does anyone else use this sort of repo/development structure, or is there something fundamental about git usage that I'm missing?

    A solution that I thought about that might not work: if I had a 'git clone --bare' at each site and then used a git repo on my removable media which has remotes set up for each site, then I could ('pull') sync my USB key with each repo. But then can I update the site repo from my USB key? Could I push from USB?

    Read the article

  • Subversion freaking out on me!

    - by Malfist
    I have two copies of a site: one is the production copy, and the other is the development copy. I recently added everything in production to a Subversion repository hosted on our Linux backup server. I created a tag of the current version and I was done. I then copied the development copy over the top of the production copy (on my local machine, where I have everything checked out). There are only 10-20 files changed; however, when I use TortoiseSVN to do a commit, it says every file has changed. The diff file generated shows Subversion removing everything and replacing it with the new version (which is the exact same). What is going on? How do I fix it? An example diff:

        Index: C:/Users/jhollon/Documents/Visual Studio 2008/Projects/saloon/trunk/components/index.html
        ===================================================================
        --- C:/Users/jhollon/Documents/Visual Studio 2008/Projects/saloon/trunk/components/index.html (revision 5)
        +++ C:/Users/jhollon/Documents/Visual Studio 2008/Projects/saloon/trunk/components/index.html (working copy)
        @@ -1,4 +1,4 @@
        -<html>
        -<body bgcolor="#FFFFFF">
        -</body>
        +<html>
        +<body bgcolor="#FFFFFF">
        +</body>
         </html>
        \ No newline at end of file

    Read the article

  • How to design authentication in a thick client, to be fail safe?

    - by Jay
    Here's a use case: I have a desktop application (built using Eclipse RCP) which, on start, pops open a dialog box with 'UserName' and 'Password' fields in it. Once the end user inputs his UserName and Password, a server is contacted (a Spring remote servlet, with the client side being a Spring HttpClient, similar to the approaches here), and authentication is performed on the server side.

    A few questions related to the above-mentioned scenario: If this authentication service were to go down, what would be the best way to handle further proceedings? Authentication is something that I cannot do away with. Would running the desktop client in a "limited" mode be a good idea - for instance, important features/menus/views would be disabled, and the rest of the application would be accessible? Should I have a backup authentication service running on a different machine, working as a backup? What are the general best practices in this scenario? I remember reading about Google Gears and how it would let you edit and do stuff offline - should something like this be designed? Please let me know your design/architectural comments/suggestions. Appreciate your help.
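
    One common pattern for the first two questions is to try a primary authentication endpoint, fall back to a secondary, and only drop into a "limited" mode when nothing is reachable. A minimal sketch (in Python for brevity rather than the asker's Eclipse RCP/Spring stack; the URLs and launch_limited_mode are hypothetical):

        import urllib.error
        import urllib.parse
        import urllib.request

        AUTH_ENDPOINTS = [
            "https://auth-primary.example.com/login",   # hypothetical endpoints
            "https://auth-backup.example.com/login",
        ]

        def authenticate(username, password, timeout=5):
            """True/False on a definitive answer, None if no service responded."""
            data = urllib.parse.urlencode({"user": username, "pass": password}).encode()
            for url in AUTH_ENDPOINTS:
                try:
                    with urllib.request.urlopen(url, data=data, timeout=timeout) as resp:
                        return resp.status == 200        # service answered: accepted
                except urllib.error.HTTPError:
                    return False                         # service answered: rejected
                except (urllib.error.URLError, OSError):
                    continue                             # unreachable, try the next one
            return None                                  # nothing reachable

        # result = authenticate("alice", "secret")
        # if result is None:
        #     launch_limited_mode()   # disable sensitive features until auth is back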

    Read the article

  • Ruby on Rails: Uploading a modified site.

    - by Califer
    I'm having a heck of a time getting a site I modified to work correctly. I didn't set the site up originally, and since the person that set it up no longer works with me, I had to learn Ruby just to make some changes. I made all the changes on the development server and everything worked fine. Then I did a diff on the production and development versions and moved only my changes over. Unfortunately, when I loaded my changes onto the production server I got a lot of errors. I've changed all of the permissions to 755, which took care of being able to access anything at all, but after that I started getting a lot of 500 errors. Nothing showed up in the production.log file. I really have no clue what's going wrong, except that perhaps things are not noticing the new changes.

    I moved the old site to a backup folder, and the new site crashes whenever it goes to anything that I've changed. In particular, I added a link to a new setup with an extra controller/model/view group. It works fine on development but in production it gives me a 404. Yes, I did copy all the files up. I even put everything back how it was, but the website is still showing the broken version of it. I checked the tmp/cache folder but it was empty. Running dispatch.fcgi shows the old site (which I expected) but it still shows the flawed new site when I connect through a browser. I've been tearing my hair out trying to get this to work. Any ideas as to how I can get this to work?

    Read the article

  • Exporting database from external server (without SSH access)

    - by Derek Carlisle
    Our current website is hosted by the design agency who originally built it; however, we are bringing development of the website in-house and therefore need to export the database from their server and import it to ours. We have FTP and phpMyAdmin access but don't have SSH access to the server. I was hoping to run a PHP script that would mysqldump the database, compress it and then copy it across to our server using scp:

        $backupFile = $_SERVER['DOCUMENT_ROOT'].'/backup' . date("Y-m-d-H-i-s") . '.gz';
        system("mysqldump -h DB_HOST -u DB_USER -pDB_PASS DB_NAME | gzip > $backupFile");
        exec("sshpass -p PASSWORD scp -r -P PORT_NUMBER $backupFile [email protected]:/path/to/directory/");

    I have run this locally from the command line and it worked fine, although I had to install sshpass (the hosting server might not have this installed). Also, I was hoping to run it from the browser, as I don't have command line access on the hosting server; however, it didn't work - no errors produced, though. Can anyone recommend how I can export from the server that I don't have SSH access to and import to my server? Thanks

    Read the article

  • Setting up padding for websites in mobile devices

    - by ambrelasweb
    I had finished this website a while ago, but something happened to it and I've now spent all day fixing it and getting it back from scratch, as my backup wasn't done correctly. I don't quite understand what it's doing, as I've used this technique on many other websites with no trouble - maybe I've looked at this website too long? Here is the website. I'm wanting to put some space on the left and right hand side; however, I don't just have one container, as I needed the dark grey bar at 100% of the screen and always under the banner no matter where it was. So there are 4 "containing" divs that I want to have the space. I've placed some CSS3 media queries in, but now I'm getting a gap to the right. I was thinking it was because my background images are going all the way across, but they're set at 100%, so I'm just not understanding what's going on. It's something simple; I'm just not seeing it right now. This is what I have for the media queries:

        /* Smartphones (portrait and landscape) ----------- */
        @media only screen and (min-device-width : 320px) and (max-device-width : 480px) {
            #header, #banner, #main, #footer-widget-area {
                padding: 0 2em 0 2em;
            }
        }

    This is what it looks like on my iPhone. Any advice is helpful and appreciated.

    Read the article

  • Is it possible to create a Mac OS specific CSS to fix font difference ?

    - by Gabriel
    I'm working on a project with a designer and he insisted on using a specific font for titles and various elements in the page. So we're using a font kit embedded with @font-face. It's working perfectly on PC (Firefox, IE 7 and 8, Chrome, Safari), but on Mac OS (Safari and Firefox) the fonts are not vertically aligned the same way. After looking on the web, I didn't find any solution for this except "there have always been differences between browsers and platforms, live with it". I know that fonts are never rendered exactly the same across platforms, but this time it's not something like the font looking more bold or something like that. The font looks as if its baseline is completely different between Windows and Mac OS X. On Mac OS, the font, at a size of 16px, is 3px higher than on PC. So I'm looking for a backup solution: is there a way to create a CSS specifically for Mac OS users? I do not want to target only Safari, because Safari on PC is fine, and Firefox on Mac is not. Or if you have a solution to fix the baseline difference that does not require a specific CSS file, I'd be happy to hear it. Thanks!

    Read the article

  • Can I split a single SQL 2008 DB Table into multiple filegroups, based on a discriminator column?

    - by Pure.Krome
    Hi folks, I've got a SQL Server 2008 R2 database which has a number of tables. Two of these tables contain a lot of large data, mainly because one of them is VARBINARY(MAX) and the sister table is GEOGRAPHY. (Why two tables? Read below if you're interested.***) The data in these tables are geospatial shapes, such as zipcode boundaries. Now, the first 70K-odd rows are for DataType = 1; the remaining 5 million rows are for DataType = 2. Now, is it possible to split the table data into two files, so that all rows for DataType != 2 go into File_A and DataType = 2 goes into File_B? This way, when I back up the DB, I can skip adding File_B, so my download is waaaaay smaller? Is this possible? I'm guessing you might be thinking - why not keep them as TWO extra tables? Mainly because in the code, the data is conceptually the same; it just happens that I want to split the storage of this model data. It really messes up my model if I now have two aggregates in my model instead of one.

    ***Entity Framework doesn't like tables with GEOGRAPHY, so I have to create a new table which transforms the GEOGRAPHY to VARBINARY, and then drop that into EF.

    Read the article

  • Why isn't obliterate an essential feature of Subversion?

    - by Dimitri C.
    For some years now, I've been waiting for Subversion to feature a "delete permanently" (obliterate) function. I hesitate to make the transition to Subversion (coming from Visual SourceSafe :p), because I think this is an essential feature, as otherwise I'd expect the repository to grow unstoppably. However, for one reason or another, the feature gets postponed over and over again. So I'm beginning to wonder if there is some other feature or workaround which makes the obliterate function dispensable. What do you do when you want to shrink the SVN central repository?

    Example 1: I check in a large third-party library, and after a few weeks I realize it is not suited to my needs. I don't want to store and back up that large amount of data forever.
    Example 2: I have 10 versions of 10 big third-party libraries in the repository, but I only use the latest versions.
    Example 3: I accidentally checked in sensitive information (as suggested by John).
    Example 4: I accidentally checked in some big files that were never meant to be put in the repository.

    Read the article

  • Is there a way to create subdatabases as a kind of subfolders in sql server?

    - by user193655
    I am creating an application where there is a main DB and where other data is stored in secondary databases. The secondary databases follow a "plugin" approach. I use SQL Server. A simple installation of the application will just have the main DB, while as an option one can activate more "plug-ins", and for every plug-in there will be a new database. Now, why I made this choice: I have to work with an existing legacy system, and this is the smartest way I could figure out to implement the plugin system. The main DB and the plugin DBs have exactly the same schema (basically the plugin DBs have some "special content" - some important data that one can use as a kind of template, think of a letter template for example, in the application). Plugin DBs are thus used in read-only mode; they are "repositories of content". The "smart" thing is that the main application can also be used by "plugin writers": they just write a DB, inserting content, and by making a backup of the database they have created a potential plugin (this is why all DBs have the same schema). Those plugin DBs are downloaded from the internet; when there is a content upgrade available, the full plugin DB is destroyed every time and a new one with the same name is created. This is for simplicity, and also because the size of these DBs is generally small.

    Now this works; anyway, I would prefer to organize the DBs in a kind of tree structure, so that I can force the plugin DBs to be "sub-DBs" of the main application DB. As a workaround I am thinking of using naming rules, like:

        ApplicationDB (for the main application DB)
        ApplicationDB_PlugIn_N (for the N-th plugin DB)

    When I search for plugin 1, I try to connect to ApplicationDB_PlugIn_1; if I don't find the DB, I raise an error. This situation can happen, for example, if some DBA renamed ApplicationDB_Plugin_1. Since those plugin DBs are really dependent on ApplicationDB only, I was trying to "do the subfolder trick". Can anyone suggest a way to do this? Can you comment on this self-made plugin approach I described above?

    Read the article

  • php import large table to phpmyadmin database

    - by safaali
    Hi, I am so worried :( I dropped one of the tables from the database accidentally. Fortunately, I have a backup (I have used "Auto backup for MySQL"). The backup of the table is stored as a .txt file (56 megabytes) on my PC. I tried to import it with phpMyAdmin and the import failed because the file is too large. Then I uploaded the file to the /home/tablebk directory. I have some experience in PHP. I know that I could import it with this code, but I don't know the SQL statement for this import. What do I have to put as the $line variable? Please help me :( :(

        <?php
        $dbhost = 'localhost';
        $dbuser = 'mysite';
        $dbpw = 'password';
        $dbname = 'databasename';
        $file = @fopen('country.txt', 'r');
        if ($file) {
            while (!feof($file)) {
                $line = trim(fgets($file));
                $flag = mysql_query($line);
                if (isset($flag)) {
                    echo 'Insert Successfully<br />';
                } else {
                    echo mysql_error() . '<br/>';
                }
                flush();
            }
            fclose($file);
        }
        echo '<br />End of File';
        ?>

    Read the article

  • List all foreign key constraints that refer to a particular column in a specific table

    - by Sid
    I would like to see a list of all the tables and columns that refer (either directly or indirectly) to a specific column in the 'main' table via a foreign key constraint that is missing the ON DELETE CASCADE setting. The tricky part is that there could be indirect relationships buried up to 5 levels deep (example: ... great-grandchild = FK3 = grandchild = FK2 = child = FK1 = main table). We need to dig up the leaf tables/columns, not just the very first level. The 'good' part about this is that execution speed isn't a concern; it'll be run on a backup copy of the production DB to fix any relational issues for the future.

    I did SELECT * FROM sys.foreign_keys, but that gives me the name of the constraint - not the names of the child/parent tables and the columns in the relationship (the juicy bits). Plus, the previous designer used short, non-descriptive/random names for the FK constraints, unlike our practice below. The way we're adding constraints into SQL Server:

        ALTER TABLE [dbo].[UserEmailPrefs] WITH CHECK ADD CONSTRAINT [FK_UserEmailPrefs_UserMasterTable_UserId] FOREIGN KEY([UserId])
        REFERENCES [dbo].[UserMasterTable] ([UserId])
        ON DELETE CASCADE
        GO
        ALTER TABLE [dbo].[UserEmailPrefs] CHECK CONSTRAINT [FK_UserEmailPrefs_UserMasterTable_UserId]
        GO

    The comments in this SO question inspire this question.

    Read the article

  • Best way to implement some type of ITaggable interface

    - by Jack
    I've got a program I'm creating that reports on another program's backup XML files. I've gotten to the point where I need to implement some type of ITaggable interface, but am unsure how to go about it code-wise. My idea is that each item (BackupClient, BackupVersion, and BackupFile) should implement an ITaggable interface for highlighting old, out-of-date, or non-existent files in their HTML or Excel report. The user will be able to specify tags in the settings. My question is this: how can a user dynamically specify a tag such as "File date 3 days old" with "Background color = red"? Actually, I guess my question is more: how can I, the programmer, implement this dynamically? I was thinking expression trees, but am unsure this is the way to go, as I haven't studied them much. I know my ITaggable interface would have methods such as AddTag(T tag) and RemoveTag(T tag), but what exactly specifies the criteria for the tag to be added? I realize this may be subjective, and can be marked as wiki if need be, but I truly am stuck. Any input would be greatly helpful!
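
    One way to think about "what specifies the criteria" is to make the tag itself carry a predicate: the user-configured rule becomes data (a condition plus a style) that the report writer evaluates against each BackupClient/BackupVersion/BackupFile. A small sketch of the idea in Python rather than C# (in C# the predicate could be a Func<T, bool> or a compiled expression tree); all names are illustrative:

        from datetime import datetime, timedelta

        class Tag:
            """A user-defined rule: a name, a predicate over an item, and a style."""

            def __init__(self, name, predicate, style):
                self.name = name
                self.predicate = predicate
                self.style = style

            def applies_to(self, item):
                return self.predicate(item)

        # "File date 3 days old -> background color red", expressed as data plus a function:
        stale_tag = Tag(
            name="stale",
            predicate=lambda f: datetime.now() - f["mtime"] > timedelta(days=3),
            style={"background-color": "red"},
        )

        backup_file = {"name": "clients.xml", "mtime": datetime(2010, 1, 1)}
        if stale_tag.applies_to(backup_file):
            print(stale_tag.name, stale_tag.style)   # the report writer turns this into HTML/Excel formatting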

    Read the article

  • detection of 'flush tables with read lock' in php

    - by theduke0
    I would like to know from my application if a MyISAM table can accept writes (i.e. is not locked). If an exception is thrown, everything is fine, as I can catch this and log the failed statement to a file. However, if a FLUSH TABLES WITH READ LOCK command has been issued (possibly for a backup), the query I send will pretty much hang forever. If one table is locked at a time, INSERT DELAYED works well. But when this global lock is applied, my query just waits. The query I run is an insert statement. If this statement fails or hangs, user experience is degraded. I need a way to send the query to the server and forget about it (pretty much). Does anyone have any suggestions on how to deal with this?

        - set a query timeout?
        - run an asynchronous request and allow the lock to expire while the application continues?
        - fork my PHP process?

    Please let me know if I can provide any clarification or details.

    Read the article
