Search Results

Search found 4296 results on 172 pages for 'git clone'.


  • Running bundle install fails trying to remote fetch from rubygems.org/quick/Marshal...

    - by dreeves
    I'm getting a strange error when doing bundle install:

        $ bundle install
        Fetching source index for http://rubygems.org/
        rvm/rubies/ree-1.8.7-2010.02/lib/ruby/site_ruby/1.8/rubygems/remote_fetcher.rb:304:in `open_uri_or_path':
        bad response Not Found 404 (http://rubygems.org/quick/Marshal.4.8/resque-scheduler-1.09.7.gemspec.rz) (Gem::RemoteFetcher::FetchError)

    I've tried bundle update, gem source -c, gem update --system, gem cleanup, etc. Nothing seems to solve this. I notice that the URL beginning with http://rubygems.org/quick does seem to be a 404. I don't think that's a problem with my network, though if it's reachable for anyone else then that would be a simple explanation for my problem. More hints: if I just gem install resque-scheduler it works fine:

        $ gem install resque-scheduler
        Successfully installed resque-scheduler-1.9.7
        1 gem installed
        Installing ri documentation for resque-scheduler-1.9.7...
        Installing RDoc documentation for resque-scheduler-1.9.7...

    And here's my Gemfile:

        source 'http://rubygems.org'
        gem 'json'
        gem 'rails', '>=3.0.0'
        gem 'mongo'
        gem 'mongo_mapper', :git => 'git://github.com/jnunemaker/mongomapper', :branch => 'rails3'
        gem 'bson_ext', '1.1'
        gem 'bson', '1.1'
        gem 'mm-multi-parameter-attributes', :git => 'git://github.com/rlivsey/mm-multi-parameter-attributes.git'
        gem 'devise', '~>1.1.3'
        gem 'devise_invitable', '~> 0.3.4'
        gem 'devise-mongo_mapper', :git => 'git://github.com/collectiveidea/devise-mongo_mapper'
        gem 'carrierwave', :git => 'git://github.com/rsofaer/carrierwave.git', :branch => 'master'
        gem 'mini_magick'
        gem 'jquery-rails', '>= 0.2.6'
        gem 'resque'
        gem 'resque-scheduler'
        gem 'SystemTimer'
        gem 'capistrano'
        gem 'will_paginate', '3.0.pre2'
        gem 'twitter', '~> 1.0.0'
        gem 'oauth', '~> 0.4.4'
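
    A hedged workaround sketch, not a confirmed fix: the index entry Bundler is fetching (resque-scheduler-1.09.7) does not match the gem that actually installs (1.9.7), so pinning the version that works, or bypassing the gem index with a git source (the repository URL below is an assumption), may sidestep the stale index:

        # Hypothetical Gemfile tweaks; pick one of the two lines
        gem 'resque-scheduler', '1.9.7'
        # gem 'resque-scheduler', :git => 'git://github.com/bvandenbos/resque-scheduler.git'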

    Read the article

  • Bash: Quotes getting stripped when a command is passed as argument to a function

    - by Shoaibi
    I am trying to implement a dry-run mechanism for my script and am facing the issue of quotes getting stripped off when a command is passed as an argument to a function, resulting in unexpected behavior.

        dry_run () {
            echo "$@"
            #printf '%q ' "$@"
            if [ "$DRY_RUN" ]; then
                return 0
            fi
            "$@"
        }

        email_admin() {
            echo " Emailing admin"
            dry_run su - $target_username -c "cd $GIT_WORK_TREE && git log -1 -p|mail -s '$mail_subject' $admin_email"
            echo " Emailed"
        }

    Output is:

        su - webuser1 -c cd /home/webuser1/public_html && git log -1 -p|mail -s 'Git deployment on webuser1' [email protected]

    Expected:

        su - webuser1 -c "cd /home/webuser1/public_html && git log -1 -p|mail -s 'Git deployment on webuser1' [email protected]"

    With printf enabled instead of echo:

        su - webuser1 -c cd\ /home/webuser1/public_html\ \&\&\ git\ log\ -1\ -p\|mail\ -s\ \'Git\ deployment\ on\ webuser1\'\ [email protected]

    Result:

        su: invalid option -- 1

    That shouldn't be the case if the quotes remained where they were inserted. I have also tried using eval, without much difference. If I remove the dry_run call in email_admin and run the script, it works great.
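
    A hedged sketch only, not necessarily the poster's final fix: the outer quotes are consumed by the calling shell before dry_run ever runs, which is expected; "$@" still hands su the whole -c payload as a single word, so only the echoed preview loses the quoting. Quoting the preview with printf %q keeps it copy-pasteable without resorting to eval:

        dry_run() {
            # print a shell-quoted preview of the command, then run it verbatim
            printf 'would run:'; printf ' %q' "$@"; printf '\n'
            if [ -n "$DRY_RUN" ]; then
                return 0
            fi
            "$@"
        }

        email_admin() {
            echo " Emailing admin"
            dry_run su - "$target_username" -c \
                "cd $GIT_WORK_TREE && git log -1 -p | mail -s '$mail_subject' $admin_email"
            echo " Emailed"
        }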

    Read the article

  • Android Source code download error

    - by user351850
    Hi all, I have followed the instructions on the Android website on how to download the latest Android source code, but I get errors when I run this command:

        repo init -u git://android2.git.kernel.org/platform/manifest.git

    It gives the following error:

        Getting repo ...
           from git://android.git.kernel.org/tools/repo.git
        android.git.kernel.org[0: 199.6.1.176]: errno=Connection refused
        android.git.kernel.org[0: 130.239.17.12]: errno=Connection refused
        fatal: unable to connect a socket (Connection refused)

    On checking forums for a resolution, I was told that port 9418 was being blocked. I use Ubuntu 10.04 and made sure the firewall wasn't blocking the port, and I also allowed the port and the above IP addresses. I also spoke to the networking people, who assured me that no traffic from the Internet is being blocked. I would be glad to get directions on how to proceed next. Many thanks as you respond. Saheed.
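
    A hedged diagnostic sketch: before changing anything else, check from the same Ubuntu box whether the git protocol port is reachable at all (the host and addresses are taken from the error above):

        nc -zv android.git.kernel.org 9418
        nc -zv 199.6.1.176 9418
        # If port 9418 is blocked somewhere upstream, an http(s) manifest URL avoids the
        # git protocol entirely; the exact mirror URL to use is an assumption here:
        # repo init -u http://android.git.kernel.org/platform/manifest.git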

    Read the article

  • SSH automatic logon works for one user but not the other

    - by tinmaru
    I want to enable automatic SSH login using the .ssh/config file for my git user. Here is my .ssh/config file:

        Host test
            HostName myserver.net
            User test
            IdentityFile ~/.ssh/id_rsa

        Host git
            HostName myserver.net
            User git
            IdentityFile ~/.ssh/id_rsa

    It works for my test user but not for my git user, so my global SSH configuration is correct. The two configurations are exactly the same as far as I can tell. It used to work for the git user, but I'm unable to tell what change has broken the automatic logon. When I type ssh -v git I get the following log:

        ...
        debug1: Authentications that can continue: publickey,password
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: /Users/mylocalusername/.ssh/id_rsa
        debug1: Authentications that can continue: publickey,password
        debug1: Next authentication method: password
        [email protected]'s password: _

    Does anyone know what the difference could be?
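
    A hedged server-side checklist sketch, since a silent fall-back from publickey to password for only one account usually points at permissions or key installation on that account rather than at the client (paths assume a conventional /home/git layout):

        ls -ld ~git ~git/.ssh ~git/.ssh/authorized_keys        # no group/other write bits anywhere
        sudo chown -R git:git ~git/.ssh
        sudo chmod 700 ~git/.ssh && sudo chmod 600 ~git/.ssh/authorized_keys
        sudo grep -c ssh-rsa ~git/.ssh/authorized_keys         # is the key actually installed for git?
        grep -E 'AllowUsers|PubkeyAuthentication|AuthorizedKeysFile' /etc/ssh/sshd_config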

    Read the article

  • Install NPM Packages Automatically for Node.js on Windows Azure Web Site

    - by Shaun
    In one of my previous posts I described and demonstrated how to use NPM packages in Node.js on Windows Azure Web Site (WAWS). In that post I used the NPM command to install packages, then used Git for Windows to commit my changes and sync them to the WAWS git repository, which triggered a new deployment to host my Node.js application. Someone may notice that an NPM package can contain many files and can be fairly large. For example, the "azure" package, which is the Windows Azure SDK for Node.js, is about 6 MB, and another popular package, "express", a rich MVC framework for Node.js, is about 1 MB. When I first push my code to Windows Azure, all of them must be uploaded to the cloud. Is it possible to let Windows Azure download and install these packages for us? In this post I will introduce how to make WAWS install all required packages for us when deploying.

    Let's Start with Demo

    A demo is most straightforward. Let's create a new WAWS and clone it to my local disk, then drag the folder into Git for Windows so that it can help us commit and push. Please refer to this post if you are not familiar with how to use Windows Azure Web Site, Git deployment, git clone and Git for Windows. Then open a command window and install a package in our code folder; let's say I want to install "express". Next, create a new Node.js file named "server.js" and paste in the code below.

        var express = require("express");
        var app = express();

        app.get("/", function(req, res) {
            res.send("Hello Node.js and Express.");
        });

        console.log("Web application opened.");
        app.listen(process.env.PORT);

    If we switch to Git for Windows right now we will find that it detected the changes we made, which include "server.js" and all files under the "node_modules" folder. What we need to upload should only be our source code, but the huge package files would have to be uploaded as well. Now I will show you how to exclude them and let Windows Azure install the packages in the cloud.

    First we need to add a special file named ".gitignore". It seems this cannot be done directly from the file explorer, since the file name consists only of an extension, so we need to do it from the command line. Navigate to the local repository folder and execute the command below to create an empty file named ".gitignore"; if the command window asks for input, just press Enter.

        echo > .gitignore

    Now open this file, copy in the content below and save:

        node_modules

    If we switch to Git for Windows we will find that the packages under "node_modules" are no longer in the change list, so if we commit and push, the "express" package will not be uploaded to Windows Azure.

    Second, let's tell Windows Azure which packages it needs to install when deploying. Create another file named "package.json", copy the content below into it and save:

        {
            "name": "npmdemo",
            "version": "1.0.0",
            "dependencies": {
                "express": "*"
            }
        }

    Now back in Git for Windows, commit our changes and push them to WAWS. If we then open the WAWS in the developer portal, we will see that a new deployment has finished. Clicking the arrow on the right side of this deployment we can see how WAWS handled it; in particular we can see that WAWS executed NPM. If we open the log we can review which command WAWS executed to install the packages and the installation output messages. As you can see, WAWS installed "express" for me on the cloud side, so I didn't need to upload the whole package to Azure.

    Open the website and we can see the result, which proves that "express" was installed successfully.

    What's Happened Under the Hood

    Now let's explain a bit about what ".gitignore" and "package.json" mean. The ".gitignore" file is an ignore configuration file for a git repository: all files and folders listed in it are skipped by git push. In the example above I put "node_modules" into this file in my local repository, which means: do not track or upload any files under the "node_modules" folder. So by using ".gitignore" I excluded all packages from being uploaded to Windows Azure. ".gitignore" can contain files and folders, and it can also list files and folders that we do NOT want to ignore; in the next section we will see how to use the un-ignore syntax to include the SQL package.

    The "package.json" file is the package definition file for a Node.js application. We can define the application name, version, description, author, etc. in it in JSON format, and we can also list the dependent packages this Node.js application needs. In WAWS, the name and version are required. When a deployment happens, WAWS looks into this file, finds the dependent packages and executes the NPM command to install them one by one. So in the demo above I put "express" into this file so that WAWS would install it for me automatically.

    I updated the dependencies section of the "package.json" file manually, but this can be done partly automatically. If we have a valid "package.json" in our local repository, then when installing a package we can add the "--save" parameter to the "npm install" command so that NPM updates the dependencies section for us. For example, when I wanted to install the "azure" package I would execute the command below; note the "--save" at the end.

        npm install azure --save

    Once it finishes, my "package.json" is updated automatically. Each dependent package is listed there: the JSON key is the package name while the value is the version range. Below is a brief list of the version range formats; for more information about "package.json" please refer here.

        Format       Description                                                                Example
        version      Must match the version exactly.                                            "azure": "0.6.7"
        >=version    Must be equal to or greater than the version.                              "azure": ">=0.6.0"
        1.2.x        Must start with the supplied digits; any digit may be used in place of x.  "azure": "0.6.x"
        ~version     At least as high as the range, and less than the next major revision.      "azure": "~0.6.7"
        *            Matches any version.                                                       "azure": "*"

    WAWS will install the proper version of each package based on what is defined here. That is the process of WAWS git deployment and NPM installation.

    But Some Packages…

    As we know, when we specify dependencies in "package.json", WAWS downloads and installs them in the cloud, and for most packages this works very well. But some special packages may not work: if the package installation needs a special environment, it can fail. For example, the SQL Server Driver for Node.js package needs "node-gyp", Python and C++ 2010 installed on the target machine during the NPM installation. If we just put "msnodesql" in the "package.json" file and push it to WAWS, the deployment will fail, since there is no "node-gyp", Python or C++ 2010 in the WAWS virtual machine. For example, here is the "server.js" file:

        var express = require("express");
        var app = express();

        app.get("/", function(req, res) {
            res.send("Hello Node.js and Express.");
        });

        var sql = require("msnodesql");
        var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:tqy4c0isfr.database.windows.net,1433;Database=msteched2012;Uid=shaunxu@tqy4c0isfr;Pwd=P@ssw0rd123;Encrypt=yes;Connection Timeout=30;";
        app.get("/sql", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        console.log("Web application opened.");
        app.listen(process.env.PORT);

    And the "package.json" file:

        {
            "name": "npmdemo",
            "version": "1.0.0",
            "dependencies": {
                "express": "*",
                "msnodesql": "*"
            }
        }

    This failed to deploy to WAWS; from the NPM log we can see it is because "msnodesql" cannot be installed on WAWS. The solution is to ignore all packages in ".gitignore" except "msnodesql", and upload that package ourselves. This can be done with the content below: we first un-ignore the "node_modules" folder, then ignore all of its sub-folders while still letting git check inside them, and finally un-ignore the one sub-folder named "msnodesql", which is the SQL Server Node.js driver.

        !node_modules/

        node_modules/*
        !node_modules/msnodesql

    For more information about the ".gitignore" syntax please refer to this thread. Now if we go to Git for Windows we will find that "msnodesql" is included in the uncommitted set while "express" is not. I also need to remove the "msnodesql" dependency from "package.json". Commit and push to WAWS; the deployment now completes successfully, and the Node.js application can use Windows Azure SQL Database through the "msnodesql" package we uploaded.

    Summary

    In this post I demonstrated how to leverage the deployment process of Windows Azure Web Site to install NPM packages during publishing. With the ".gitignore" and "package.json" files we can keep the dependent packages out of our Node.js repository and let Windows Azure Web Site download and install them while deploying. For special packages that cannot be installed by Windows Azure Web Site, such as "msnodesql", we can put them into the publish payload instead. The combination of Windows Azure Web Site, Node.js and NPM makes it even easier and quicker to develop and deploy our Node.js applications to the cloud.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Can Foswiki be used as a distributed Redmine replacement? [closed]

    - by Tobias Kienzler
    I am quite familiar with and love using git, among other reasons due to its distributed nature. Now I'd like to set up some similarly distributed (FOSS) project management software with features similar to what Redmine offers, such as:

    - issue & time tracking, milestones
    - Gantt charts, calendar
    - git integration, maybe some automatic linking of commits and issues
    - wiki (preferably with MathJax support)
    - forum, news, notifications
    - multiple projects

    However, I am looking for a solution that does not require a permanently accessible server, i.e. as in git, each user should have their own copy which can be easily synchronized with the others, though it should be possible not to have a copy of every project on every machine. Since trac uses multiple instances for multiple projects anyway, I was considering using that, but I neither know how well it adapts to simply putting its database under git (which would be the easiest way to handle the distribution, git being used anyway), nor does it include all of Redmine's features. After checking http://www.wikimatrix.org for wikis with an integrated tracking system and RCS support, and filtering out seemingly stale projects, the choices basically boil down to Foswiki, TWiki and Ikiwiki. The latter doesn't seem to offer as many usability features, and in the TWiki vs Foswiki question I tend towards the latter. Finally, there is Fossil, which starts from the other end by attempting to replace git entirely and tracking itself. I am, however, not too comfortable with the thought of replacing git, and Fossil's non-SCM features don't seem to be as developed. Now, before I invest too much time when someone else might already have tried this, I basically have two questions: Are there crucial features of project management software like Redmine that Foswiki does not provide, even with all the extensions available? How do I set Foswiki up to use git instead of the Perl RcsLite?

    Read the article

  • How to make a non-English clone of CoffeeScript?

    - by Ans
    I want to make a non-English programming language that is to JavaScript what CoffeeScript is. What I mean is that I don't want to design my own syntax; I just want a non-English programming language that compiles to JavaScript. I want to follow everything CoffeeScript does, so I don't really want to make any design decisions. For example, this is CoffeeScript:

        number = 42
        opposite = true
        number = -42 if opposite

    I want my language to be something like:

        ??? = 42
        ??? = ????
        ??? = -42 ??? ???

    which gets compiled to:

        var number, opposite;
        number = 42;
        opposite = true;
        if (opposite) {
          number = -42;
        }
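
    A hedged sketch of one way to stay "identical to CoffeeScript": leave the CoffeeScript compiler untouched and translate the localized keywords back to the English ones in a preprocessing pass before compiling. The keyword map below is entirely hypothetical example data; require('coffee-script').compile() is the compiler's real API.

        // naive keyword-translation pass in front of the stock CoffeeScript compiler
        var coffee = require('coffee-script');

        var keywordMap = {        // localized word -> CoffeeScript keyword (example entries only)
          'si': 'if',
          'sinon': 'else',
          'vrai': 'true',
          'faux': 'false'
        };

        function translate(source) {
          // note: this naive tokenizer also rewrites matches inside strings and comments
          return source.replace(/[^\s()\[\]{},:=]+/g, function (word) {
            return keywordMap.hasOwnProperty(word) ? keywordMap[word] : word;
          });
        }

        function compileLocalized(source) {
          return coffee.compile(translate(source));
        }

        // usage: prints the JavaScript produced from a localized snippet
        console.log(compileLocalized('nombre = 42\noppose = vrai\nnombre = -42 si oppose'));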

    Read the article

  • How can I clone or mirror a site without SEO penalties for duplicate content?

    - by Amanda
    I am a web developer and I want to create clones of the sites I've developed for clients, so that I have an "original copy" on a subdomain of my own website to showcase my work to new clients. What is the best way to avoid getting my clients' original websites penalised for duplicate content? I am planning to have a robots.txt file that disallows all robots, as well as using <link href="http://www.client-canonical-site.com/" rel="canonical" /> in the <head> of the pages. Is that sufficient? Should I use rel=nofollow on all the links as well?
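
    For reference, a minimal sketch of the robots.txt described above, served from the root of the showcase subdomain (robots.txt is per-host, so it does not affect the client's own domain):

        User-agent: *
        Disallow: /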

    Read the article

  • Which Git-based MIS to track workflow like Trac/Redmine but on console, minimalistically?

    - by hhh
    Definitions: MIS = management information system. There are lists of console-based solutions here and some GUI hacks here. I've been fed up with installing all those dependencies and GUI things with no make files, so which console-based MIS would you suggest for a game-development team with a graphics repo, an animation repo, a code repo, a stories repo, etc.? P.S. I do use git submodules, and the reason for the repo fragmentation is roles and size: certain repos, such as the graphics repos, tend to be quite large, so it is better to keep them separate. Perhaps useful to readers interested in this: http://stackoverflow.com/questions/5881578/trac-vs-redmine https://github.com/jchris/sofa
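
    A hedged sketch of the submodule layout mentioned above, in case it is useful to readers; the URLs and paths are examples only:

        git submodule add git@example.com:game/graphics.git assets/graphics
        git submodule add git@example.com:game/animation.git assets/animation
        git submodule update --init --recursive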

    Read the article

  • The PHP project migrates to Git; tags for future releases will be signed by the development team itself

    The Git version control system has just gained a new "member": the PHP project has completely migrated its source code to the platform, as can be read on their website. It is therefore now possible to clone or "fork" the PHP sources from its GitHub mirror, and pull requests made via GitHub are now supported as well. The source code is also available via the following link, and all instructions for cloning the PHP source tree can be found at the following link.

    Read the article

  • Is Mac OS X a licensed Unix or Unix-like clone that conforms to Unix specification?

    - by KMC
    Is Mac OS X developed on a licensed Unix, or is it a Unix-like clone that, unlike Linux, conforms to the Unix specification well enough to be registered as a Unix OS? Mac OS X did not gain Unix certification until Leopard, yet in Leopard, Terminal still prints:

        GNU bash, version 3.2.48(1)-release (x86_64-apple-darwin10.0)

    But GNU is "GNU's Not Unix", and Mac OS X is registered as Unix. That gets me confused about whether OS X is Unix or Unix-like. In other words, is OS X written on top of Unix, or is it a rewrite of Unix that is as Unix as it can possibly be? Maybe along with the answer someone can provide lineage or other background information. I would also recommend reading "How Unix is Mac OS X".

    Read the article

  • What is the best way to clone Win7 machines?

    - by John Hoge
    I'm looking to buy 5 new Win7 boxes and would like to ease deployment by cloning the OS. What I would like to do is install a fresh OS (Dell doesn't seem to sell machines without preinstalled crapware anymore) and then install a few apps on the first one. Once it is just right, I want to clone the OS, install the image on the other four machines and just change the machine name. Is this possible to do without any extra third-party software? What I am thinking of doing is backing up the disk image of the first machine to a network share, and then booting the others from the Windows install DVD and restoring the same image on each machine. Has anyone had any luck with this technique?
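
    A hedged sketch using only Microsoft tooling rather than third-party imaging software: sysprep ships with Windows 7 itself, while ImageX comes from the free Windows AIK download and runs from a WinPE boot; the share paths and image names below are examples, not a prescription.

        rem 1. On the reference machine, generalize it so each clone gets a fresh SID/name prompt:
        C:\Windows\System32\sysprep\sysprep.exe /generalize /oobe /shutdown
        rem 2. Boot the reference machine into WinPE and capture the system volume to the share:
        imagex /capture C: \\server\images\win7-base.wim "Win7 base"
        rem 3. Boot each new box into WinPE, partition and format C:, then apply the image:
        imagex /apply \\server\images\win7-base.wim 1 C: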

    Read the article

  • How do I know if 'hg clone' is doing the work remotely?

    - by jjfine
    I've got a very simple Windows install of Mercurial on my machine. The 'central' repository is located at //mymachine/hg-repos/central. I want remote (VPN) users to be able to create clones of this repository in the hg-repos directory, because that directory gets daily backups. I have given these users full control of the hg-repos directory. My question is this: if I'm on a remote machine and I run the command

        hg clone //mymachine/hg-repos/central //mymachine/hg-repos/central-copy

    ...is the remote machine doing most of the work? I don't want the client to have to download all of the central repository and then upload it all back, because people are going to be using this from across the country. But I suspect this is what's happening here, since it works so easily.
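
    A hedged sketch of two alternatives: with a plain file share, hg runs entirely on the client, so the data really does cross the wire in both directions; running the clone on mymachine itself (via ssh or whatever remote-exec mechanism is available) keeps the copy local to the server, and hg's built-in web server lets clients pull efficiently when they want a working copy. Paths below are examples:

        # run the backup-style clone on the server itself:
        ssh user@mymachine "hg clone C:/hg-repos/central C:/hg-repos/central-copy"

        # or publish the repository read-only and let clients clone over HTTP:
        hg serve -R C:/hg-repos/central --port 8000        # on mymachine
        hg clone http://mymachine:8000/ central-copy       # on the client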

    Read the article

  • Is there a performance difference between Windows 7 on SSD installed from scratch versus it using a recent ghost/clone drive image from a harddisk?

    - by therobyouknow
    I'm planning to upgrade a notebook PC to a solid-state drive (SSD) soon. I want to use the notebook before that and am considering installing Windows 7 on the hard disk (spinning variety, 5400 rpm) before I get the SSD. To save time I am wondering if I can ghost/clone the installation of Windows 7 from the hard drive and put it on the SSD. Would the performance of this clone from the hard disk onto the SSD be different from starting again and reinstalling Windows 7 from scratch on the SSD? (Windows 7 Professional, 32-bit)

    Read the article

  • ssh-agent is broken after running Meerkat - can connect to git in terminal but not in Tower - no keychain access

    - by marblegravy
    My Mac running Snow Leopard 10.6.8 is having trouble handling its ssh keys. I could previously access all my git repos via Tower without an issue. The other day I ran Meerkat to see what it was about, and it looks like it has broken the way ssh works. Terminal doesn't seem to have a problem and can still connect to Git, but it can't access the keychain. Tower doesn't seem to be able to access anything. The Tower support crew have been super helpful, but I wanted to float this here and see if anyone has any ideas on how to fix my problem. The only hints I have are:

        $ which ssh
        /usr/bin/ssh

    and

        $ echo $SSH_AUTH_SOCK
        /tmp/ssh-nBhRYVEg8t/agent.199

    (The latter seems wrong, as I think it's supposed to point to a Listener, but I have no idea how to fix it.) Additional: Keychain First Aid finds no problems. The problem seems to be that ssh-agent is not being run properly... but that's just a guess.
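
    A hedged diagnostic sketch for Snow Leopard, where launchd normally owns ssh-agent and SSH_AUTH_SOCK should point at a launchd "Listeners" socket rather than a /tmp/ssh-* path like the one above:

        launchctl list | grep ssh-agent
        launchctl load -w /System/Library/LaunchAgents/org.openbsd.ssh-agent.plist
        # open a brand-new Terminal window (or log out and back in), then:
        echo $SSH_AUTH_SOCK        # expect something like /tmp/launch-XXXXXX/Listeners
        ssh-add -K ~/.ssh/id_rsa   # -K on OS X stores the passphrase in the Keychain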

    Read the article

  • Use msysgit/"Git for Windows" to navigate Windows shortcuts?

    - by Darthfett
    I use msysgit on Windows to use git, but I often want to navigate through a Windows-style *.lnk shortcut. I typically manage my file structure through Windows Explorer, so using a different type of shortcut (such as creating a hard or soft link in git) isn't feasible. How would I navigate through this type of shortcut? For example:

        PCUser@PCName ~
        $ cd Desktop

        PCUser@PCName ~/Desktop
        $ ls
        Scripts.lnk

        PCUser@PCName ~/Desktop
        $ cd Scripts.lnk
        sh.exe": cd: Scripts.lnk: Not a directory

    Is it possible to change this behavior so that, instead of getting an error, it just goes to the location of the directory? Alternatively, is there a command to get the path stored in a *.lnk file?

    Read the article

  • zsh auto-complete event designator

    - by simont
    (See my previous question for additional context.) I'm migrating to zsh from bash, and using oh-my-zsh. When my zsh history looks something like the following:

        git status
        git add -A
        git commit

    I want to be able to re-run git add -A. To do that, I could use !?git add, which should:

        !?str[?]
        Refer to the most recent command containing str. The trailing ‘?’ is necessary if this
        reference is to be followed by a modifier or followed by any text that is not to be
        considered part of str.

    The link for zsh event designators is here. Unfortunately, I can't do this: as I'm typing !?git add, the moment I hit the space, it auto-completes the command to the most recent command matching git (i.e., it auto-completes to git commit). I can't use the event designator properly because of this auto-completion on space. I assume this is an oh-my-zsh feature, but I have no idea where to look; grepping for 'complet' in the oh-my-zsh source doesn't get me anywhere. My question: how do I turn off this feature? Or, if that's not something that's known, where should I be looking? If I were going to implement this auto-complete-on-whitespace behaviour, where would be a logical place to do so in the oh-my-zsh framework?
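
    A hedged answer sketch: at least some oh-my-zsh versions bind the space bar to zsh's magic-space widget (in lib/key-bindings.zsh), which performs history expansion of designators like !?git the moment space is typed. Rebinding space in ~/.zshrc after oh-my-zsh has been sourced restores plain insertion, so the designator only expands when the line is actually submitted:

        bindkey ' ' self-insert     # disable expand-on-space
        # bindkey ' ' magic-space   # put the behaviour back if you decide you want it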

    Read the article

  • Update a bootable OS X drive clone with rsync?

    - by Joe
    The question: is it possible to keep a bootable backup drive clone of OS X updated with rsync? If rsync is not a viable option, are there alternatives?

    The setup: one internal Samsung 840 SSD [120 GB] in use as my OS X 10.8 boot disk on a recent-model Mac Mini. I have successfully cloned that drive with Disk Utility to a 125 GB partition of another HDD in an external USB 3 enclosure, and at that point I am able to boot from it.

    The goal: as my last system went out in a fiery blaze, taking much valuable data with it, I have a new respect for a proper backup solution and really want to do this right. My goal is to achieve an automated differential backup/update from disk A to disk B while, most importantly, maintaining bootability on the external drive. I would prefer to do this differentially to minimize stress on the drives; hence rsync was the first thing to come to mind.

    What I have tried: following along with Jamie Zawinski's differential Mac bootable backup solution, running it manually initially worked; I tested it with only a very minuscule file change and everything was fine, the external booted and all. Now, after subsequent passes, rsync fails, throwing errors particularly relating to updating 'boot.efi' (I'm not at the machine currently; I will add the precise log message once I return home). Is this a drive partition size issue? Does rsync require more space? If it can't be done, are there any alternatives? I've heard whispers of dd.
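
    A hedged sketch only, assuming the stock Apple rsync 2.6.9 on 10.8 and an already-partitioned, mounted HFS+ destination (the volume name is an example): -E copies extended attributes and resource forks, -x stays on the boot volume, and re-blessing afterwards keeps the clone bootable.

        sudo rsync -xavE --delete \
            --exclude '/Volumes/*' --exclude '/dev/*' --exclude '/private/tmp/*' \
            --exclude '/.Spotlight-V100' --exclude '/.Trashes' \
            / /Volumes/CloneHD/
        # refresh the blessed boot files on the clone (this is where boot.efi lives):
        sudo bless --folder /Volumes/CloneHD/System/Library/CoreServices --bootefi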

    Read the article

  • What is the fastest way to clone an INNODB table within the same server?

    - by Vic
    Our development server is a replication slave of our production server. We have a script that developers use if they want to run their applications/bug fixes against fresh data. That script looks like this:

        dbs=( analytics auth logs users )
        server=localhost
        conn="-h ${server} -u ${username} --password=${password}"

        # Stop the replication client so we don't encounter weird data.
        echo "STOP SLAVE" | mysql ${conn}

        # Bunch of bulk insert optimizations
        echo "SET autocommit=0" | mysql ${conn}
        echo "SET unique_checks=0" | mysql ${conn}
        echo "SET foreign_key_checks=0" | mysql ${conn}

        # Restore all databases and tables.
        for sourcedb in ${dbs[*]}
        do
            destdb=${prefix}${sourcedb}
            echo "Dropping database ${destdb}..."
            echo "DROP DATABASE IF EXISTS ${destdb}" | mysql ${conn}
            echo "CREATE DATABASE ${destdb}" | mysql ${conn}

            # First, all the tables.
            for table in `echo "SHOW FULL TABLES WHERE Table_type <> 'VIEW'" | mysql $conn $sourcedb | tail -n +2`; do
                if [[ "${table}" != 'BASE' && "${table}" != 'TABLE' && "${table}" != 'VIEW' ]] ; then
                    createTable=`echo "SHOW CREATE TABLE ${table}" | mysql -B -r $conn $sourcedb | tail -n +2 | cut -f 2-`
                    echo "Restoring ${destdb}/${table}..."
                    echo "$createTable ;" | mysql $conn $destdb
                    insertData="INSERT INTO ${destdb}.${table} SELECT * FROM ${sourcedb}.${table}"
                    echo "$insertData" | mysql $conn $destdb
                fi
            done
        done

        echo "SET foreign_key_checks=1" | mysql ${conn}
        echo "SET unique_checks=1" | mysql ${conn}
        echo "COMMIT" | mysql ${conn}

        # Restart the replication client
        echo "START SLAVE" | mysql ${conn}

    All of these operations are, as I mentioned, within the same server. Is there a faster way to clone the tables that I'm not seeing? They're all INNODB tables. Thanks!

    Read the article

  • CakePHP, CodeIgniter or Rails for multi-user Tumblr clone?

    - by Jordan
    I'm about to start building a Tumblr clone that handles multiple users (so premade clones like Gelato won't cut it), and I'm not sure which framework I'd like to build this in. Right now I'm only intending to build a prototype: something I can get a dozen friends onto to test the concept, and grow to maybe a couple hundred users to prove the market, so I'm not worried about long-term scale. My biggest concern right now is quick deployment. I'd like to get from zero to signups in as short a time as possible, with as little customization to the framework of choice as possible. I have experience with PHP but not Ruby; however, I don't think the learning curve would be too steep, so I'm not ruling out Rails. I just want the framework that is most appropriate for a system like a multi-user Tumblr clone, so that I can build it with as little hassle, and as quickly, as possible. If anyone has experience with a similar project, or with these frameworks, and can offer an insightful perspective, I'd be very appreciative. Thanks for taking the time to read. Cheers, ~Jordan Feldstein

    Read the article

  • Can't clone file-input element in Safari and Chrome. FF and Opera are OK

    - by Christian Fazzini
    This is very strange. I've got a simple form, and a file input element outside this form. The user clicks the file input element and selects a file. I then clone the file input using this code:

        $('input[name="song[attachment]"]').clone(true).appendTo('form')

    In all browsers (FF, Opera, Safari, Chrome), when I inspect the form element I see the cloned file input element inside the form. However, when I submit the form, it works in FF and Opera, while Safari and Chrome submit the form with an empty file input. I notice that when the file input element is cloned and appended to the form element, its value is not copied over; only an empty file input element is cloned. Is this normal? Is there something wrong with my jQuery code? Or is this a security issue, and that's why Safari and Chrome are not allowing me to do this? If the latter, why do FF and Opera allow it?
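
    A hedged workaround sketch: WebKit will not carry a chosen file across clone(), so instead of cloning the value into the form, move the original input (which keeps its selection) into the form and leave the clone behind as the visible picker:

        var $original = $('input[name="song[attachment]"]');
        $original.clone(true).insertBefore($original);  // an empty copy takes the original's place
        $original.appendTo('form');                     // the original, selection intact, gets submitted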

    Read the article

  • haml with rails3 (git master) and devise: form_for syntax change breaks haml -- suggestions?

    - by z3cko
    I am trying to get Haml working with a Rails 3 project. Since I am quite far along in the modeling, I wanted to move on to the Haml views now, but it seems that the current Haml (git master) does not work together with the current Rails 3 git master because of some syntax changes in the Rails 3 form_for. Does anyone have more information on the syntax changes? Is there a temporary workaround to use Haml with Rails 3? (I am on a deadline) :( See also: http://j.mp/9EYraQ Thanks!
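
    For reference, a hedged sketch of the Rails 3 change as it shows up in Haml: block helpers such as form_for now return their markup instead of writing it out, so the call is rendered with '=' rather than '-' (the model and attribute names here are just examples):

        = form_for @post do |f|
          = f.label :title
          = f.text_field :title
          = f.submit 'Save'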

    Read the article

  • How to get path to the installed GIT in Python?

    - by Vladimir Prudnikov
    I need to get the path to git on Mac OS X 10.6, using Python 2.6.1, into a script variable. I use this code for that:

        r = subprocess.Popen(shlex.split("which git"), stdout=subprocess.PIPE)
        print r.stdout.read()

    but the problem is that the output is empty (I tried stderr too). It works fine with other commands such as pwd or ls. Can anyone help me with that?
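
    A hedged sketch assuming two common causes: reading r.stdout before the process has finished, and git living in a directory (for example /usr/local/git/bin) that is not on the PATH the script inherits. communicate() waits for the process, and the PATH additions below are illustrative guesses, not known locations:

        import os
        import subprocess

        # extend the inherited PATH with likely git locations (assumptions, not facts)
        env = dict(os.environ)
        env["PATH"] = env.get("PATH", "") + ":/usr/local/bin:/usr/local/git/bin"

        proc = subprocess.Popen(["which", "git"], stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE, env=env)
        out, err = proc.communicate()
        git_path = out.strip()
        print git_path or err.strip()   # Python 2.6 print statement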

    Read the article
