Search Results

Search found 3715 results on 149 pages for 'git rm'.

  • Why am I not able to run Rails tests

    - by dorelal
    This is what I did:

        > git clone git://github.com/rails/rails.git
        > cd rails
        > cd railties
        > rake

    And I got the following error:

        (in /Users/dorelal/dev/scratch/rails/railties)
        ./test/isolation/abstract_unit.rb:236:in `initialize': No such file or directory - /Users/dorelal/dev/scratch/rails/railties/tmp/app_template/config/boot.rb (Errno::ENOENT)
            from ./test/isolation/abstract_unit.rb:236:in `open'
            from ./test/isolation/abstract_unit.rb:236
            from ./test/isolation/abstract_unit.rb:222:in `initialize'
            from ./test/isolation/abstract_unit.rb:222:in `new'
            from ./test/isolation/abstract_unit.rb:222
            from test/application/configuration_test.rb:1:in `require'
            from test/application/configuration_test.rb:1
        rake aborted!

    I checked ~/railties/tmp and this directory is empty. I know Rails is not broken, so what am I missing?
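
    One way to probe this (a sketch, assuming the checkout above): the isolation tests appear to build the missing tmp/app_template app themselves on first run, so clearing any half-built state and re-running with tracing can show which step dies first:

        cd ~/dev/scratch/rails/railties
        rm -rf tmp/app_template          # discard any partially built template
        rake --trace 2>&1 | head -40     # --trace names the first failing task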

  • Capistrano 3, Rails 4, database configuration does not specify adapter

    - by Kazmin
    When I start cap production deploy it fails like this:

        DEBUG [4ee8fa7a] Command: cd /home/deploy/myapp/releases/releases/20131025212110 && (RVM_BIN_PATH=~/.rvm/bin RAILS_ENV= ~/.rvm/bin/myapp_rake assets:precompile )
        DEBUG [4ee8fa7a] rake aborted!
        DEBUG [4ee8fa7a] database configuration does not specify adapter

    You can see that RAILS_ENV= is actually empty, and I'm wondering why that might be happening. I assume this is the reason for the subsequent error, that I don't have a database configuration. The deploy.rb file is below:

        set :application, 'myapp'
        set :repo_url, '[email protected]:developer/myapp.git'
        set :branch, :master
        set :deploy_to, '/home/deploy/myapp/releases'
        set :scm, :git
        set :devpath, "/home/deploy/myapp_development"
        set :user, "deploy"
        set :use_sudo, false
        set :default_env, { rvm_bin_path: '~/.rvm/bin' }
        set :keep_releases, 5

        namespace :deploy do
          desc 'Restart application'
          task :restart do
            on roles(:app), in: :sequence, wait: 5 do
              # Your restart mechanism here, for example:
              within release_path do
                execute "bundle exec thin restart -O -C config/thin/production.yml"
              end
            end
          end

          after :restart, :clear_cache do
            on roles(:web), in: :groups, limit: 3, wait: 10 do
              within release_path do
              end
            end
          end

          after :finishing, 'deploy:cleanup'
        end
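
    Since the generated command shows a bare RAILS_ENV=, a quick audit for where the stage should be setting it (assuming a standard Capistrano 3 config/ layout) would be:

        grep -rn "rails_env\|RAILS_ENV" config/deploy.rb config/deploy/
        cap production deploy --trace    # --trace shows the failing task in context
        # if nothing sets it, adding `set :rails_env, 'production'` to
        # config/deploy/production.rb is the usual capistrano-rails convention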

  • Why is Capistrano acting up like this?

    - by Matt
    I am having an issue with my deploy. I ran cap deploy and got this:

        Warning: Permanently added 'github.com,207.97.227.239' (RSA) to the list of known hosts.
        ** [174.143.150.79 :: out] Permission denied (publickey).
        ** fatal: The remote end hung up unexpectedly
        command finished
        *** [deploy:update_code] rolling back
        * executing "rm -rf /home/deploy/transprint/releases/20110105034446; true"
        servers: ["174.143.150.79"]
        [174.143.150.79] executing command

    Here is my deploy.rb:

        set :application, "transprint"
        set :domain, "174.149.150.79"
        set :user, "deploy"
        set :use_sudo, false
        set :scm, :git
        set :deploy_via, :remote_cache
        set :app_path, "production"
        set :rails_env, 'production'
        set :repository, "[email protected]:myname/something.git"
        set :scm_username, 'deploy'
        set :deploy_to, "/home/deploy/#{application}"

        role :app, domain
        role :web, domain
        role :db, domain, :primary => true

    Please help.
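
    The Permission denied (publickey) comes from the server failing to authenticate to GitHub, since :remote_cache clones on the remote host. A hedged way to confirm it is an agent-forwarding gap (IP taken from the question):

        ssh -T git@github.com            # locally: should greet you by username
        ssh -A deploy@174.143.150.79     # -A forwards your key agent; then there:
        ssh -T git@github.com            # denied here => forwarding is the issue
        # Capistrano 2 can request it with: ssh_options[:forward_agent] = true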

  • GitHub SVN interface

    - by Fabio
    I recently moved a project to GitHub, and I still use this library in some SVN projects as an external. However, I can't get the GitHub SVN interface working at this time. If I run

        svn list http://svn.github.com/fabn/zle.git

    I obtain this error:

        svn: Server sent unexpected return value (500 Internal Server Error) in response to PROPFIND request for '/fabn/zle.git/!svn/bc/0'

    If I run the same command against a fork of my project, the list (and also the checkout) works fine and I obtain what I need (this is the command: svn list http://svn.github.com/JellyBelly/zle.git). Is there anything I can do to resolve this issue, or is it a GitHub problem?
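
    Since the fork responds and the original does not, one low-effort diagnostic (just a sketch) is to compare the raw HTTP responses from both endpoints outside of svn:

        curl -sI http://svn.github.com/fabn/zle.git | head -5
        curl -sI http://svn.github.com/JellyBelly/zle.git | head -5
        # matching headers but a 500 only for one repo would suggest the
        # bridge's index for that repository is broken server-side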

  • Makefile - Dependency generation

    - by Profetylen
    I am trying to create a makefile that automatically compiles and links my .cpp files into an executable via .o files. What I can't get working is automated (or even manual) dependency generation. When I uncomment the commented code below, nothing is recompiled when I run make build. All I get is make: Nothing to be done for 'build'., even if x.h (or any .h file) has changed. I've been trying to learn from this question: Makefile, header dependencies (dmckee's answer, especially). Why isn't this makefile working?

    Clarification: I can compile everything, but when I modify any header file, the .cpp files that depend on it aren't updated. So if I, for instance, compile my entire source, then change a #define in a header file and run make build, I get Nothing to be done for 'build'. (when I have uncommented either commented chunk of the code below).

        CC=gcc
        CFLAGS=-O2 -Wall
        LDFLAGS=-lSDL -lstdc++
        SOURCES=$(wildcard *.cpp)
        OBJECTS=$(patsubst %.cpp, obj/%.o,$(SOURCES))
        TARGET=bin/test.bin

        # Nothing happens when I uncomment the following. (automated attempt)
        #depend: .depend
        #
        #.depend: $(SOURCES)
        #    rm -f ./.depend
        #    $(CC) $(CFLAGS) -MM $^ >> ./.depend;
        #
        #include .depend

        # And nothing happens when I uncomment the following.
        # x.cpp and x.h are files in my project. (manual attempt)
        #x.o: x.cpp x.h

        clean:
            rm -f $(TARGET)
            rm -f $(OBJECTS)

        run: build
            ./$(TARGET)

        debug: build
            nm $(TARGET)
            gdb $(TARGET)

        build: $(TARGET)

        $(TARGET): $(OBJECTS)
            @mkdir -p $(@D)
            $(CC) $(LDFLAGS) $(OBJECTS) -o $@

        obj/%.o: %.cpp
            @mkdir -p $(@D)
            $(CC) -c $(CFLAGS) $< -o $@

        include $(DEPENDENCIES)
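
    As a sanity check on the dependency generation itself, the compiler can print the exact rules the commented-out .depend target would produce; if header changes are being missed, this output (run in the source directory) shows whether the .h files are listed at all:

        gcc -MM -O2 -Wall *.cpp          # prints rules like "x.o: x.cpp x.h"
        gcc -MM *.cpp > .depend          # what the .depend recipe would write
        # note: an "include .depend" line must be active (not commented out)
        # for make to read those rules; inside a comment it never takes effect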

  • Capistrano update causes C: to be placed in the current directory (cygwin)

    - by user321775
    When I run cap deploy:update in a directory on my local machine (via Cygwin), "C:" magically appears in the directory. Sure enough, I can cd to it and it's my Windows C: drive. Now I'm afraid to delete it, but I definitely don't want it in this directory (a Rails project under /home/username/blah/blah). Here's my config/deploy.rb file:

        # custom options
        set :application, "xyz.com"
        set :repository, "ssh://[email protected]:yyyy/home/git/xxx"
        set :user, "myname"
        set :runner, user
        set :use_sudo, false
        server "xxx.xxx.xxx.xxx:yyyy", :app, :web, :db, :primary => true

        # deploy to
        set :deploy_to, "/home/myname/public_html/xyz"

        # repository
        set :scm, :git
        set :deploy_via, :copy

        # ssh options
        default_run_options[:pty] = true
        ssh_options[:paranoid] = false
        ssh_options[:port] = yyyy

        # start passenger
        namespace :deploy do
          task :start do ; end
          task :stop do ; end
          task :restart, :roles => :app, :except => { :no_release => true } do
            run "#{try_sudo} touch #{File.join(current_path,'tmp','restart.txt')}"
          end
        end

    Anyone see the problem? And does anyone know a safe way of getting rid of the C: drives that have already shown up (this has happened in a few directories)?
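
    Before deleting anything, it seems worth checking what the entry really is; NTFS normally forbids a colon in file names, so this may be Cygwin path translation rather than a real directory (a hedged diagnostic, not a fix):

        ls -la | cat -A | grep 'C:'      # cat -A exposes odd characters in the name
        cygpath -u 'C:'                  # what Cygwin maps "C:" to (/cygdrive/c)
        rm -ri './C:'                    # if it is a real entry, -i prompts per item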

  • How to debug a .bash_profile

    - by Blankman
    I was updating my .bash_profile, and unfortunately I made a few updates and now I am getting:

        env: bash: No such file or directory
        env: bash: No such file or directory
        env: bash: No such file or directory
        env: bash: No such file or directory
        env: bash: No such file or directory
        -bash: tar: command not found
        -bash: grep: command not found
        -bash: cat: command not found
        -bash: find: command not found
        -bash: dirname: command not found
        -bash: /preexec.sh.lib: No such file or directory
        -bash: preexec_install: command not found
        -bash: sed: command not found
        -bash: git: command not found

    My .bash_profile actually pulls in (sources) other .sh files, so I am not exactly sure which modification caused this. Now if I even try to do a listing of files, I get:

        > ls
        -bash: ls: command not found
        -bash: sed: command not found
        -bash: git: command not found

    Any tips on how to trace the source of the error, and how to be able to use the terminal for basic things like listing files?
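
    A recovery sketch: the symptoms (every external command missing) almost always mean PATH was clobbered, so restoring a sane PATH first makes the shell usable, and tracing the profile then pinpoints the bad line:

        export PATH=/bin:/usr/bin:/sbin:/usr/sbin   # temporary sane PATH
        bash --noprofile --norc                     # clean shell, nothing sourced
        bash -x ~/.bash_profile 2>&1 | less         # -x echoes each line as it runs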

  • Wildcards not being substituted

    - by user21463
        #!/bin/bash
        loc=`echo ~/.gvfs/*/DCIM/100_FUJI`
        rm -f /mnt/fujifilmA100
        ln -s "$loc" /mnt/fujifilmA100

    For some reason the * doesn't get substituted with the only possible value, and loc gets the literal value /home/chris/.gvfs/*/DCIM/100_FUJI. Does anyone have an idea why? Please note: if glob expansion fails, the pattern is not substituted. I ran the commands:

        chris@comp2008:~$ loc=`echo ~/.gvfs/*/DCIM/100_FUJI`
        chris@comp2008:~$ echo $loc
        /home/chris/.gvfs/gphoto2 mount on usb%3A001,008/DCIM/100_FUJI

    So we can see the expansion should work. I have now switched to using:

        loc=`find ~/.gvfs -name 100_FUJI`

    I am just curious why it doesn't work as is. Debugging output using sh -x:

        echo /home/chris/.gvfs/*/DCIM/100_FUJI
        loc=/home/chris/.gvfs/*/DCIM/100_FUJI
        rm -f /mnt/fujifilmA100
        ln -s /home/chris/.gvfs/*/DCIM/100_FUJI /mnt/fujifilmA100
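
    One hedged explanation: if the script is run via sudo (plausible, given it writes to /mnt), the glob fails because FUSE mounts under ~/.gvfs are readable only by the owning user, so root sees no match and the pattern stays literal:

        sudo ls ~/.gvfs        # "Permission denied" here would confirm it
        ls ~/.gvfs             # works as the normal user
        # workaround sketch: expand the glob as the user, escalate only for ln
        loc=$(echo ~/.gvfs/*/DCIM/100_FUJI)
        sudo ln -sfn "$loc" /mnt/fujifilmA100   # -f replaces any stale link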

  • Windows Azure Evolution - Web Sites (aka Antares) Part 1

    - by Shaun
    This is the 3rd post of my Windows Azure Evolution series, focused on the new features and enhancements which came along with the Windows Azure Platform Upgrade June 2012, announced at the MEET Windows Azure event on 7th June. In the first post I introduced the new preview developer portal and how it works with the existing features such as cloud services, storage and SQL databases. In the second one I talked about the Windows Azure .NET SDK 1.7 on the latest Visual Studio 2012 RC on Windows 8. From this one I will begin to introduce some new features. Now let's have a look at the first of them, Windows Azure Web Sites.

    Overview

    Windows Azure Web Sites (WAWS), also known as Antares, is a new feature still in its preview stage in this upgrade. It allows people to quickly and easily deploy websites to a highly scalable cloud environment, using the languages and open source apps of their choice, deploying through FTP, Git or TFS. It can also be integrated easily with Windows Azure services like SQL Database, Caching, CDN and Storage. After reading its introduction we may have a question: since we can deploy a website from both a cloud service web role and a web site, what's the difference between them? Let's have a quick comparison.

                            CLOUD SERVICE                        WEB SITE
        OS                  Windows Server                       Windows Server
        Virtualization      Windows Azure Virtual Machine        Windows Azure Virtual Machine
        Host                IIS                                  IIS
        Platform            ASP.NET WebForm, ASP.NET MVC, WCF    ASP.NET WebForm, ASP.NET MVC, PHP
        Language            C#, VB.NET                           C#, VB.NET, PHP
        Database            SQL Database                         SQL Database, MySQL
        Architecture        Multi-layered, background worker,    Simple website with backend
                            message queuing, etc.                database.
        VS Project          Windows Azure Cloud Service          ASP.NET Web Form, ASP.NET MVC, etc.
        Out-of-box Gallery  (none)                               Drupal, DotNetNuke, WordPress, etc.
        Deployment          Package upload, VS publish           FTP, Git, TFS, WebMatrix
        Compute Mode        Dedicated VM                         Shared across VMs, dedicated VM
        Scale               Scale up, scale out                  Scale up, scale out

    As you can see, there are many differences between the cloud service and the web site, but the main point is that the cloud service focuses on complex, multi-tier web applications. For example, if you want to build a website with a frontend layer, a middle business layer and a data access layer, with some background worker process connected through a message queue, then you had better use a cloud service, since it provides full control of your code and application. But if you just want to build a personal blog or a business portal, then you can use the web site. Since the web site has many galleries, you can create one even without any coding and configuration. David Pallmann has an awesome figure that explains the trade-offs between the cloud service, web site and virtual machine.

    Create a Personal Blog in Web Site from Gallery

    As I mentioned above, one of the big features in WAWS is building a website from an existing gallery, which means we don't need to code or configure anything. What we need to do is open the Windows Azure developer portal, click the NEW button, and select WEB SITE and FROM GALLERY. In the pop-up window there are many websites we can choose to use. For example, for a personal blog there are Orchard CMS and WordPress; for a CMS there are DotNetNuke, Drupal 7 and mojoPortal. Let's select WordPress and click the next button. The next step is to configure the web site: we need to specify the DNS name and select the subscription and region. Since WordPress uses MySQL as its backend database, we need to create a MySQL database as well. Windows Azure Web Sites utilizes ClearDB to host the MySQL databases; you cannot create a MySQL database directly from the SQL Databases section. Finally, since we selected to create a new MySQL database, we need to specify the database name and region in the last step, and accept ClearDB's terms as well. Then the Windows Azure platform downloads the WordPress code and deploys the MySQL database and website, and it is ready to use. Select the website and click the BROWSE button, and the WordPress administration page will be shown. After configuring WordPress, here is my personal blog in the cloud. It took me no more than 10 minutes to establish, without any coding.

    Monitor, Configure, Scale and Linked Resources

    Let's click into the website I just created in the portal and have a look at what we can do. In the website details page there are five sections.

    - Dashboard: the overall information about this website, such as basic usage status, public URL, compute mode, FTP address and subscription, plus links where we can specify the deployment credentials, TFS and Git publish settings, etc.

    - Monitor: status information such as CPU usage, memory usage, errors, etc. We can add more metrics by clicking the ADD METRICS button at the bottom as well.

    - Configure: here we can set the configuration of our website, such as the .NET and PHP runtime versions, diagnostics settings, application settings and the IIS default documents.

    - Scale: this is something interesting. In WAWS there are two compute modes, also called web site modes. One is "shared", which means our website is shared with other web sites on a group of Windows Azure virtual machines. Each web site has its own process (w3wp.exe) with some sandbox technology to isolate it from the others. When we scale out a web site in shared mode, we actually increase the worker process count. Hence in shared mode we cannot specify the virtual machine size, since the machines are shared across all web sites. This is a little bit different from the scaling mode of the cloud service (hosted service web role and worker role). The other mode is called "dedicated", which means our web site uses a whole Windows Azure virtual machine. This is the same hosting behavior as a cloud service web role: the application is deployed on the virtual machines we specified, and all of them are used only by us. When we scale out in dedicated mode we use more virtual machines, each hosting only our own website, and we can specify the virtual machine size. In the developer portal we can select which mode we are using from the scale section. In shared mode we can only specify the instance count, but in dedicated mode we can specify the instance size as well as the instance count.

    - Linked Resources: the MySQL database created along with our WordPress web site is a linked resource. We can add more linked resources in this section.

    Pricing

    For the web site itself, since this feature is in its preview period, you get up to 10 web sites for free in shared mode. In dedicated mode, the price is that of the virtual machines you are using; for example, two medium-size virtual machines in dedicated mode cost $230.40 per month. Any SQL Database linked to your web site is charged separately based on the Pay-As-You-Go price. For example a 1 GB web edition database costs $9.99 per month. Bandwidth is charged as well; for example, 10 GB of outbound data transfer costs $1.20 per month. For more information about the pricing, please have a look at the Windows Azure pricing page.

    Summary

    Windows Azure Web Sites gives us an easier and quicker way to create, develop and deploy websites on the Windows Azure platform. Compared with the cloud service web role, WAWS has many out-of-the-box galleries we can use directly. So if you just want to build a blog, CMS or business portal, you don't need to learn ASP.NET, you don't need to learn how to configure DotNetNuke, and you don't need to learn how to prepare PHP and MySQL: by using a WAWS gallery you can establish a website within 10 minutes without a single line of code. But in some cases we do need to code ourselves. We may need to tweak the layout of our pages, or we may have a traditional ASP.NET or PHP web application which needs to be migrated to the cloud. Besides the gallery, WAWS also provides many features to download and upload code, integration with version control services such as TFS and Git, and deployment approaches through FTP and Web Deploy. In the next post I will demonstrate how to use WebMatrix to download and modify the website, and how to use TFS and Git to deploy automatically once our code changes are committed.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
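
    The Git publishing flow described above looks like this from a developer's shell (the remote URL is a placeholder; the real one comes from the portal's "Set up Git publishing" page):

        git init && git add -A && git commit -m "first deploy"
        git remote add azure https://user@mysite.scm.azurewebsites.net/mysite.git
        git push azure master    # the service builds and activates the site on push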

  • How to detect APC UPS battery usage and run a script when on battery

    - by Andy Arismendi
    I have a couple of APC UPSes:

        Smart-UPS RT 6000 RM XL
        Smart-UPS RT 5000 RM XL

    Unfortunately the power in my office likes to go out (out of my control), and hence the equipment powered by these UPSes shuts down. They power a VMware infrastructure environment (VMware Lab Manager), and what I'd like to do is detect when one is on battery (say, has been for x amount of time or has x percentage left) and run a script on that event. What software do I need to detect an on-battery event and have it run a script? Thanks!
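
    One common approach (an assumption, not the only option): apcupsd talks to APC units, exposes the battery state, and runs hook scripts on power events, which fits the "run a script when on battery" requirement:

        apcaccess status | grep -E 'STATUS|BCHARGE|TIMELEFT'   # poll the UPS
        # apcupsd's apccontrol invokes /etc/apcupsd/onbattery when the unit
        # switches to battery; VM shutdown logic can live in that script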

  • mysql: job failed to start; mysqld.sock is missing

    - by Freefri
    How can I fix this and start mysql-server? After /etc/init.d/mysql start or service mysql start I get the message:

        start: Job failed to start

    And after running mysqld directly I get this:

        121123 11:33:33 [ERROR] Can't find messagefile '/usr/share/mysql/errmsg.sys'
        121123 11:33:33 [Note] Plugin 'FEDERATED' is disabled.
        mysqld: Unknown error 1146
        121123 11:33:33 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
        121123 11:33:33 InnoDB: The InnoDB memory heap is disabled
        121123 11:33:33 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        121123 11:33:33 InnoDB: Compressed tables use zlib 1.2.3.4
        121123 11:33:33 InnoDB: Initializing buffer pool, size = 128.0M
        121123 11:33:33 InnoDB: Completed initialization of buffer pool
        121123 11:33:33 InnoDB: highest supported file format is Barracuda.
        121123 11:33:33 InnoDB: Waiting for the background threads to start
        121123 11:33:34 InnoDB: 1.1.8 started; log sequence number 1595675
        121123 11:33:34 [ERROR] Aborting
        121123 11:33:34 InnoDB: Starting shutdown...
        121123 11:33:35 InnoDB: Shutdown completed; log sequence number 1595675
        121123 11:33:35 [Note]

    I tried to do what mysql told me to do, mysql_upgrade:

        Looking for 'mysql' as: mysql
        Looking for 'mysqlcheck' as: mysqlcheck
        Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock'
        mysqlcheck: Got error: 2002: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) when trying to connect
        FATAL ERROR: Upgrade failed

    And yes, /var/run/mysqld is empty; running mysql_upgrade again gives the same output. This is the socket configuration in my /etc/mysql/my.cnf:

        # cat /etc/mysql/my.cnf | grep sock
        # Remember to edit /etc/mysql/debian.cnf when changing the socket location.
        socket = /var/run/mysqld/mysqld.sock
        socket = /var/run/mysqld/mysqld.sock
        socket = /var/run/mysqld/mysqld.sock

    Then I tried to reinstall mysql from scratch:

        apt-get purge mysql-client mysql-common mysql-server
        rm -R /var/lib/mysql
        rm -R /etc/mysql
        rm -R /var/run/mysqld
        userdel mysql
        apt-get install mysql-server mysql-client

    Then, after typing my root password for mysql, I get this error:

        Unable to set password for the MySQL "root" user
        An error occurred while setting the password for the MySQL administrative user. This may have happened because the account already has a password, or because of a communication problem with the MySQL server.
        You should check the account's password after the package installation.
        Please read the /usr/share/doc/mysql-server-5.5/README.Debian file for more information.

    And again I can't start mysql, getting the same messages.
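
    The very first error (missing errmsg.sys) suggests /usr/share/mysql is gone while dpkg still considers the server packages installed, so one repair sketch before another purge cycle (package names from the Ubuntu 5.5-era archive) is:

        dpkg -l | grep mysql                 # see what dpkg thinks is installed
        sudo apt-get install --reinstall mysql-server-core-5.5 mysql-common
        ls /usr/share/mysql/errmsg.sys       # should exist before retrying start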

  • Is /dev in Linux virtual?

    - by user973917
    Today at work a client ran rm -rf /dev and ended up deleting two files in /dev/shm, which forced his site to stop working. From what I had learned previously, /dev is not virtual, but a fellow technician suggested rebooting the server because /dev is virtual like /proc. Sure enough, I rebooted the server and the files the client rm -rf'd were there. So, my question is: is /dev virtual? Is it the same kind of virtual as /proc? Is there more documentation on this? How can I restore the /dev files without a server reboot?
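
    A quick way to see the answer on any given box, plus a reboot-free recovery sketch for udev-based systems (device nodes specifically; files under /dev/shm are ordinary tmpfs files that applications recreate on demand):

        mount | grep ' /dev '    # devtmpfs/udev here => virtual, rebuilt at boot
        sudo udevadm trigger     # replay kernel events to recreate device nodes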

  • "php: command not found" after changing PHP system files in OS X

    - by Aurelien Porte
    I wanted to install Symfony on Mac OS X Lion. Apparently, as MAMP was already installed on my computer, there was a problem with the "timezone" field in the php.ini file. I can't remember the error exactly, but basically the Symfony installation required a timezone like "Europe/Paris", and MAMP had apparently changed that part. Well, it's very vague, but I've seen on the web that other people have had the same issue. So I tried one of the solutions I found, without success:

    - It didn't work.
    - I cannot use the php command anymore ("-bash: php: command not found").
    - I cannot remember the exact commands I ran to get back.

    Here are some potentially relevant commands I found in my history that correspond with the beginning of my problem, in this order:

        sudo mv /usr/bin/php /usr/bin/php-old
        sudo ln -s /Applications/MAMP/bin/php5/bin/php /usr/bin/php
        rm /usr/bin/php-old
        sudo cp php.ini.default /etc/php.ini
        rm php.ini    # but I don't know anymore in which directory I was
        sudo mv /usr/bin/php-old /usr/bin/php
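
    A reconstruction sketch of the undo, assuming the MAMP binary still exists (note the history shows php-old was rm'd, so the final mv probably failed):

        ls -l /Applications/MAMP/bin/php5/bin/php    # confirm a real target
        sudo ln -s /Applications/MAMP/bin/php5/bin/php /usr/bin/php
        hash -r && php -v                            # reset bash's lookup cache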

  • Bash Script help required

    - by Sunil J
    I am trying to get this bash script that I found on a forum to work. I copied it into a text editor, saved it as script.sh, ran chmod 700 on it, and tried to run it.

        rootdir="/usr/share/malware"
        day=`date +%Y%m%d`
        url=`echo "wget -qO - http://lists.clean-mx.com/pipermail/viruswatch/$day/thread.html |\
        awk '/\[Virus/'|tail -n 1|sed 's:\": :g' |\
        awk '{print \"http://lists.clean-mx.com/pipermail/viruswatch/$day/\"$3}'"|sh`
        filename=`wget -qO - http://lists.clean-mx.com/pipermail/viruswatch/$day/thread.html |\
        awk '/\[Virus/'|tail -n 1|sed 's:": :g' |awk '{print $3}'`
        links -dump $url$filename | awk '/Up/'|grep "TR\|exe" | awk '{print $2,$8,$10,$11,$12"\n"}' > $rootdir/>$filename
        dirname=`wget -qO - http://lists.clean-mx.com/pipermail/viruswatch/$day/thread.html |\
        awk '/\[Virus/'|tail -n 1|sed 's:": :g' |awk '{print $3}'|sed 's:.html::g'`
        rm -rf $rootdir/$dirname
        mkdir $rootdir/$dirname
        cd $rootdir
        grep "exe$" $filename |awk '{print "wget \""$5"\""}' | sh
        ls *.exe | xargs md5 >> checksums
        mv *.exe $dirname
        rm -r $rootdir/*exe*
        mv checksums $rootdir/$dirname
        mv $filename $rootdir/$dirname

    I get the following messages:

        script.sh: line 11: /usr/share/malware/: Is a directory
        script.sh: line 11: links: command not found
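
    The two reported failures map onto fixes like these (assumptions about the script's intent): the redirection target '> $rootdir/>$filename' contains a stray '>', so bash redirects into the directory itself, and the links browser simply isn't installed:

        sudo apt-get install links       # or the distribution's equivalent
        links -dump "$url$filename" | awk '/Up/' > "$rootdir/$filename"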

  • Can't delete a directory on external drive (OS X)

    - by Martin Tóth
    I have a brand new Transcend StoreJet 25M3 (external HDD) mounted on a MacBook (Leopard 10.5.8) at /Volumes/Transcend. I copied some data onto it from my old Windows (XP) machine, and now, after cleaning some stuff up, I wanted to delete some directories, but this is what happened:

        $ rmdir My\ Pictures/
        rmdir: My Pictures/: Operation not permitted

    Using Finder just asks for a password but does not delete the directory (the "moved to Trash" sound is played). I thought it was some permission "thing", but:

        $ ls -l
        drwxrwxrwx  1 martin staff 32768  5 jan 16:11 My Pictures/
        $ sudo rm -rf My\ Pictures
        rm: My Pictures: Operation not permitted

    I re-mounted and rebooted (thinking there was some file lock), but that did not help. What might have happened here? How do I delete it?
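
    "Operation not permitted" despite rwxrwxrwx usually points at BSD file flags or the mount itself rather than POSIX permissions; a hedged check on OS X:

        ls -ldO "My Pictures"              # the -O column shows flags like uchg
        chflags -R nouchg "My Pictures"    # clear the immutable flag, retry rm
        mount | grep Transcend             # confirm the volume isn't read-only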

  • Error when running debuild on package source

    - by Chris Wilson
    I'm attempting to build the squeak-vm source but am getting an error every time I do so. The output is:

        dpkg-buildpackage -rfakeroot -D -us -uc
        dpkg-buildpackage: export CFLAGS from dpkg-buildflags (origin: vendor): -g -O2
        dpkg-buildpackage: export CPPFLAGS from dpkg-buildflags (origin: vendor):
        dpkg-buildpackage: export CXXFLAGS from dpkg-buildflags (origin: vendor): -g -O2
        dpkg-buildpackage: export FFLAGS from dpkg-buildflags (origin: vendor): -g -O2
        dpkg-buildpackage: export LDFLAGS from dpkg-buildflags (origin: vendor): -Wl,-Bsymbolic-functions
        dpkg-buildpackage: source package squeak-vm
        dpkg-buildpackage: source version 1:4.0.3.2202-2
        dpkg-buildpackage: source changed by José L. Redrejo Rodríguez <[email protected]>
        dpkg-source --before-build squeak-vm-4.0.3.2202
        dpkg-buildpackage: host architecture i386
        fakeroot debian/rules clean
        dh_testdir
        dh_testroot
        rm -f build-stamp configure-stamp
        rm -f unix/cmake/config.sub unix/cmake/config.guess
        /usr/bin/make -f debian/rules unpatch
        make[1]: Entering directory `/home/notgary/Projects/squeak/squeak-vm-4.0.3.2202'
        QUILT_PATCHES=debian/patches \
        quilt --quiltrc /dev/null pop -a -R || test $? = 2
        Patch linex.patch does not remove cleanly (refresh it or enforce with -f)
        make[1]: *** [unpatch] Error 1
        make[1]: Leaving directory `/home/notgary/Projects/squeak/squeak-vm-4.0.3.2202'
        make: *** [clean] Error 2
        dpkg-buildpackage: error: fakeroot debian/rules clean gave error exit status 2
        debuild: fatal error at line 1337: dpkg-buildpackage -rfakeroot -D -us -uc failed
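
    The failure is quilt unapplying patches during the clean step, not the compile. Reproducing it outside make (a sketch, run from the source tree with quilt installed) shows the conflicting hunks and lets the stale patch be refreshed:

        QUILT_PATCHES=debian/patches quilt pop -a -R   # the exact failing step
        QUILT_PATCHES=debian/patches quilt push -a     # apply all patches, then
        QUILT_PATCHES=debian/patches quilt refresh     # regenerate the stale one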

  • When running a shell script, how can you protect it from overwriting or truncating files?

    - by Joseph Garvin
    If, while an application is running, one of the shared libraries it uses is written to or truncated, the application will crash. Moving the file or removing it wholesale with rm will not cause a crash, because the OS (Solaris in this case, but I assume this is true on Linux and other *nixes as well) is smart enough not to delete the inode associated with the file while any process has it open.

    I have a shell script that performs installation of shared libraries. Sometimes it may be used to reinstall versions of shared libraries that were already installed, without an uninstall first. Because applications may be using the already installed shared libraries, it's important that the script is smart enough to rm the files or move them out of the way (e.g. to a 'deleted' folder that cron could empty at a time when we know no applications will be running) before installing the new ones, so that they're not overwritten or truncated. Unfortunately, an application recently crashed just after an install. Coincidence? It's difficult to tell. The real solution here is to switch over to a more robust installation method than an old gigantic shell script, but it'd be nice to have some extra protection until the switch is made.

    Is there any way to wrap a shell script to protect it from overwriting or truncating files (and ideally failing loudly), while still allowing them to be moved or rm'd? Standard UNIX file permissions won't do the trick because you can't distinguish moving/removing from overwriting/truncating. Aliases could work, but I'm not sure which commands would need to be aliased in their entirety. I imagine something like truss/strace, except that before each action it checks a filter to decide whether to actually do it. I don't need a perfect solution that would work even against an intentionally malicious script. Ideas I have so far (the last one is sketched below):

    - Alias cp to GNU cp (not the default since I'm on Solaris) and use the --remove-destination option.
    - Alias install to GNU install and use the --backup option. It might be smart enough to move the existing file to the backup file name rather than making a copy, thus preserving the inode.
    - set noclobber in ~/.bashrc so that I/O redirection won't overwrite files.
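
    A concrete demo of the noclobber idea (bash; the file names are illustrative):

        set -o noclobber
        echo v1 > /tmp/libdemo.so       # first write succeeds
        echo v2 > /tmp/libdemo.so       # fails: cannot overwrite existing file
        echo v2 >| /tmp/libdemo.so      # explicit override is still available
        mv /tmp/new.so /tmp/libdemo.so  # mv still allowed: the inode is
                                        # replaced, never truncated in place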

  • Meet the New Windows Azure

    - by ScottGu
    Today we are releasing a major set of improvements to Windows Azure. Below is a short summary of just a few of them.

    New Admin Portal and Command Line Tools

    Today's release comes with a new Windows Azure portal that will enable you to manage all features and services offered on Windows Azure in a seamless, integrated way. It is very fast and fluid, supports filtering and sorting (making it much easier to use for large deployments), works on all browsers, and offers a lot of great new features, including built-in VM, Web site, Storage, and Cloud Service monitoring support. The new portal is built on top of a REST-based management API within Windows Azure, and everything you can do through the portal can also be programmed directly against this Web API. We are also today releasing command-line tools (which, like the portal, call the REST management APIs) to make it even easier to script and automate your administration tasks. We are offering both a PowerShell (for Windows) and Bash (for Mac and Linux) set of tools to download. Like our SDKs, the code for these tools is hosted on GitHub under an Apache 2 license.

    Virtual Machines

    Windows Azure now supports the ability to deploy and run durable VMs in the cloud. You can easily create these VMs using a new Image Gallery built into the new Windows Azure Portal, or alternatively upload and run your own custom-built VHD images. Virtual Machines are durable (meaning anything you install within them persists across reboots) and you can use any OS with them. Our built-in image gallery includes both Windows Server images (including the new Windows Server 2012 RC) as well as Linux images (including Ubuntu, CentOS, and SUSE distributions). Once you create a VM instance you can easily Terminal Server or SSH into it in order to configure and customize the VM however you want (and optionally capture your own image snapshot of it to use when creating new VM instances). This provides you with the flexibility to run pretty much any workload within Windows Azure. The new Windows Azure Portal provides a rich set of management features for Virtual Machines, including the ability to monitor and track resource utilization within them. Our new Virtual Machine support also enables the ability to easily attach multiple data disks to VMs (which you can then mount and format as drives). You can optionally enable geo-replication support on these, which will cause Windows Azure to continuously replicate your storage to a secondary data center at least 400 miles away from your primary data center as a backup. We use the same VHD format that is supported with Windows virtualization today (and which we've released as an open spec), which enables you to easily migrate existing workloads you might already have virtualized into Windows Azure. We also make it easy to download VHDs from Windows Azure, which provides the flexibility to easily migrate cloud-based VM workloads to an on-premise environment: all you need to do is download the VHD file and boot it up locally, with no import/export steps required.

    Web Sites

    Windows Azure now supports the ability to quickly and easily deploy ASP.NET, Node.js and PHP web sites to a highly scalable cloud environment that allows you to start small (and for free) and then scale up as your traffic grows. You can create a new web site in Azure and have it ready to deploy to in under 10 seconds. The new Windows Azure Portal provides built-in administration support for web sites, including the ability to monitor and track resource utilization in real time. You can deploy to web sites in seconds using FTP, Git, TFS and Web Deploy. We are also releasing tooling updates today for both Visual Studio and WebMatrix that enable developers to seamlessly deploy ASP.NET applications to this new offering. The VS and WebMatrix publishing support includes the ability to deploy SQL databases as part of web site deployment, as well as the ability to incrementally update the database schema with a later deployment. You can integrate web application publishing with source control by selecting the "Set up TFS publishing" or "Set up Git publishing" links on a web site's dashboard. Doing so will enable integration with our new TFS online service (which enables a full TFS workflow, including elastic build and testing support), or create a Git repository that you can reference as a remote and push deployments to. Once you push a deployment using TFS or Git, the deployments tab will keep track of the deployments you make and enable you to select an older (or newer) deployment and quickly redeploy your site to that snapshot of the code. This provides a very powerful DevOps workflow experience. Windows Azure now allows you to deploy up to 10 web sites into a free, shared/multi-tenant hosting environment (where a site you deploy will be one of multiple sites running on a shared set of server resources). This provides an easy way to get started on projects at no cost. You can then optionally upgrade your sites to run in a "reserved mode" that isolates them so that you are the only customer within a virtual machine, and you can elastically scale the amount of resources your sites use, allowing you to increase your reserved instance capacity as your traffic scales. Windows Azure automatically handles load balancing traffic across VM instances, and you get the same, super fast deployment options (FTP, Git, TFS and Web Deploy) regardless of how many reserved instances you use. With Windows Azure you pay for compute capacity on a per-hour basis, which allows you to scale your resources up and down to match only what you need.

    Cloud Services and Distributed Caching

    Windows Azure also supports the ability to build cloud services that support rich multi-tier architectures, automated application management, and scaling to extremely large deployments. Previously we referred to this capability as "hosted services"; with this week's release we are now referring to it as "cloud services". We are also enabling a bunch of new features with them. One of the really cool new features is a distributed cache capability that enables you to set up and use a low-latency, in-memory distributed cache within your applications. This cache is isolated for use just by your applications, and does not have any throttling limits. It can dynamically grow and shrink elastically (without you having to redeploy your app or make code changes), and supports the full richness of the AppFabric Cache Server API (including regions, high availability, notifications, local cache and more). In addition to supporting the AppFabric Cache Server API, it also now supports the Memcached protocol, allowing you to point code written against Memcached at it with no code changes required. The new distributed cache can be set up to run in one of two ways:

    1) Using a co-located approach. In this option you allocate a percentage of memory in your existing web and worker roles to be used by the cache, and the cache joins that memory into one large distributed cache. Any data put into the cache by one role instance can be accessed by other role instances in your application, regardless of whether the cached data is stored on it or another role. The big benefit of the co-located option is that it is free (you don't have to pay anything to enable it) and it allows you to use what might otherwise have been unused memory within your application VMs.

    2) Alternatively, you can add "cache worker roles" to your cloud service that are used solely for caching. These will also be joined into one large distributed cache ring that other roles within your application can access. You can use these roles to cache tens or hundreds of GBs of data in memory very effectively, and the cache can be elastically increased or decreased at runtime within your application.

    New SDKs and Tooling Support

    We have updated all of the Windows Azure SDKs with today's release to include new features and capabilities. Our SDKs are now available for multiple languages, and all of the source in them is published under an Apache 2 license and maintained in GitHub repositories. The .NET SDK for Azure has in particular seen a bunch of great improvements with today's release, and now includes tooling support for both VS 2010 and the VS 2012 RC. We are also now shipping Windows, Mac and Linux SDK downloads for languages that are offered on all of these systems, allowing developers to build Windows Azure applications using any development operating system.

    Much, Much More

    The above is just a short list of some of the improvements shipping in either preview or final form today; there is a LOT more in today's release. These include new Virtual Private Networking capabilities, new Service Bus runtime and tooling support, the public preview of the new Azure Media Services, new data centers, significantly upgraded network and storage hardware, SQL Reporting Services, new identity features, support within 40+ new countries and territories, and much, much more. You can learn more about Windows Azure and sign up to try it for free at http://windowsazure.com. You can also watch a live keynote I'm giving at 1pm June 7th (later today) where I'll walk through all of the new features. We will be opening up the new features I discussed above for public usage a few hours after the keynote concludes. We are really excited to see the great applications you build with them.

    Hope this helps,
    Scott
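
    The cross-platform command-line tools mentioned above looked like this in use (2012-era "azure" CLI; treat the exact subcommands as illustrative):

        azure site create mysite --location "West US"   # create a new web site
        azure site list                                 # what's deployed
        azure vm image list                             # browse gallery images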

  • Ubuntu Desktop does not load

    - by Niklas
    When I log in on my Ubuntu 14.04, the desktop does not load correctly. This weird behavior appeared after I executed sudo apt-get update && sudo apt-get upgrade and restarted my computer; I don't know why. I have tried the following on my Ubuntu (nothing seems to work so far).

    Fix any broken packages:

        sudo apt-get update
        sudo apt-get autoclean
        sudo apt-get clean
        sudo apt-get autoremove

    Locate any broken packages and reinstall them:

        sudo apt-get install debsums
        sudo apt-get clean
        sudo debsums_init
        sudo debsums -cs
        sudo apt-get install --reinstall $(sudo dpkg -S $(sudo debsums -c) | cut -d : -f 1 | sort -u)

    Removing some compiz files:

        rm -r ~/.cache/compizconfig-1
        rm -r ~/.compiz

    Purging NVIDIA and installing nvidia-prime:

        sudo apt-get install --reinstall ubuntu-desktop
        sudo apt-get install unity
        sudo apt-get purge nvidia* bumblebee*
        sudo apt-get install nvidia-prime
        sudo shutdown -r now

    CompizConfig Settings Manager (then back in the UI, enabling the Unity plugin):

        sudo apt-get install compizconfig-settings-manager
        export DISPLAY=:0
        ccsm

    Unity replace, which stalled for a while and then did nothing:

        unity --replace

    Some dconf resets:

        dconf reset -f /org/compiz/
        unity --reset-icons &disown

    Actually dconf did not work, and I got this error:

        error: Cannot autolaunch D-Bus without X11 $DISPLAY

    Can anybody help me with this? This is my hardware (hope it helps in any way):

        Intel® Core™ i7-3770
        ASUS GTX660TI-DC2-OG-2GD5 (NVIDIA driver is/was installed)
        ASUS P8Z77-V LX
        Corsair DIMM 8 GB DDR3-1600 Kit
        Samsung 830 series 2.5" 256 GB (Windows is installed here)
        Seagate ST31000524AS 1 TB (3/4 is reserved for files; 1/4 is for Ubuntu, 16 GB swap included)
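
    On the dconf error specifically: dconf needs a D-Bus session and a display, so from a text console one workaround sketch is to supply both explicitly:

        export DISPLAY=:0
        dbus-launch --exit-with-session dconf reset -f /org/compiz/
        setsid unity --replace &    # relaunch the shell detached from this TTY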

  • Best solution for a team home server

    - by aliasbody
    I created a home server with Ubuntu 12.04 Server (using an old netbook with an Atom CPU and 512 MB). The idea is for it to be used by a small team (a maximum of 10 people) that will have constant SSH access to the main projects and can add features with Git; each member will also have their own directory (with a VirtualHost configured) for their own personal projects. Everything is configured and running, but my question is: what is the best setup here for everyone to work? Is it to have them all in the http group, with access as normal users to the /var/www folder (which also contains GitWeb and Drupal), or would it be to create a new user named after the project (as an example) where only those with the password could work (configured with a VirtualHost)? Notice: the idea is to have one person directly responsible for the server (since he is the one hosting it), two more people with root access from their homes in order to configure anything, plus anyone else who joins the group without any root access, just the access necessary to create personal work and collaborate with Git.
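
    For the shared-directory option, a common pattern (names are examples, not a recommendation specific to this setup) is a dedicated group plus a setgid project directory, so files stay group-writable without a shared account:

        sudo groupadd webteam
        sudo usermod -aG webteam alice     # repeat per member
        sudo chgrp -R webteam /var/www
        sudo chmod -R 2775 /var/www        # leading 2 = setgid: new files keep the group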

  • Accidentally deleted /opt/local/bin without backup. Any help?

    - by Aaron
    Hi all, I'm on Mac OS X 10.5.8. I was recently uninstalling mysql5 from /opt/local/bin. I typed:

        rm -rf /opt/local/bin mysql*

    instead of:

        rm -rf /opt/local/bin/mysql*

    This deleted my entire /opt/local/bin directory, which puts me in a bit of a bind. Is there any way to recover these files? If not, I have a friend who is using a similar set of programs; would it be possible to use the contents of his folder? If I end up needing to re-install everything in this folder, what is the best way to go about doing it? Thanks in advance!
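
    Since /opt/local is MacPorts territory, reinstalling the installed ports can regenerate the lost binaries (a sketch; the port command itself lives in /opt/local/bin, so MacPorts may need reinstalling from its installer first):

        port -qv installed > myports.txt            # inventory, if port still runs
        sudo port -nf upgrade --force installed     # force re-activation of files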

  • Identify my terminal session that started a particular process

    - by Sam
    I'm using GNOME on Ubuntu. I often have 8-20 terminal sessions open, and in some of them I have su'd to a different user. The specific problem that caused me to write this query happens when using git status, but it is a more general issue. git status will tell me I have an uncontrolled file .foo.java.swp. This means that in one of my terminal sessions I have vi open on foo.java. I need a script or tool that would tell me in which terminal session that vi is running. I can do a ps aux | grep vi to pretty easily find the pid of the particular vi. It would be nice if the tool highlighted the terminal on my task bar in some way. Thanks. -Sam
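
    Given a pid from ps, its controlling terminal is one column away, and from there the owning shell can be found (Linux; pid and pts values are examples):

        ps -C vi -o pid=,tty=,args=    # e.g. "1234 pts/3 vi foo.java"
        ps -t pts/3                    # shells and jobs attached to that pts
        ls -l /proc/1234/cwd           # the directory that vi was started in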

  • How to install a custom C library?

    - by arijit
    I just wanted to add a C library to Ubuntu which was created by Harvard University for the CS50 course. They provided instructions for how to install the library, listed below.

    Debian, Ubuntu. First become root, as with:

        sudo su -

    Then install the CS50 Library as follows:

        apt-get install gcc
        wget http://mirror.cs50.net/library/c/cs50-library-c-3.1.zip
        unzip cs50-library-c-3.1.zip
        rm -f cs50-library-c-3.1.zip
        cd cs50-library-c-3.1
        gcc -c -ggdb -std=c99 cs50.c -o cs50.o
        ar rcs libcs50.a cs50.o
        chmod 0644 cs50.h libcs50.a
        mkdir -p /usr/local/include
        chmod 0755 /usr/local/include
        mv -f cs50.h /usr/local/include
        mkdir -p /usr/local/lib
        chmod 0755 /usr/local/lib
        mv -f libcs50.a /usr/local/lib
        cd ..
        rm -rf cs50-library-c-3.1

    I did exactly as directed, but the compiler reported "undefined reference" to a function: the function was GetString. So I searched for a solution and found one; it said to use the -l switch. Now when I compile I use something like:

        gcc -o hello hello.c -lcs50

    (I don't remember the exact command.) However, I cannot use the make command, which is easier to use. I understand that there is some problem with linking the library. What is a good solution to this problem?
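
    On the make half of the question: make's built-in rule links with $(LDLIBS), so the library can be supplied without writing a Makefile at all (the file name is an example):

        make hello LDLIBS=-lcs50    # roughly expands to: cc hello.c -lcs50 -o hello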

  • Why can't I install from software center?

    - by user64720
    There was a problem upgrading to Firefox 13. This error kept returning:

        /var/cache/apt/archives/firefox_13.0+build1-0ubuntu0.12.04.1_i386.deb
        W: Waited for dpkg --assert-multi-arch but was not there - dpkgGo (10: There are no "child" processes).

    Now it seems that there is some problem with dpkg, and I can't install anything from the software center. I already tried to clean the previous packages with sudo rm /var/lib/apt/lists/* -vf and then sudo apt-get update; it didn't work. When running sudo dpkg --configure -a, I get this:

        dpkg: problems with dependencies prevent the configuration of firefox-globalmenu:
         firefox-globalmenu depends on firefox (= 13.0+build1-0ubuntu0.12.04.1); however:
          The package is not installed.
        dpkg: error while processing firefox-globalmenu (--configure):
         problems with dependencies - leaving unconfigured
        Errors were encountered while processing:
         firefox-globalmenu

    What should I do to fix this?

    EDIT: I don't have the necessary expertise to understand why what I did worked or what was causing the conflict, but anyway: since there was a problem with firefox-globalmenu, I went to the Synaptic package manager, removed this particular package and reinstalled it. After that, I was able to install Firefox from Synaptic and also any other applications from the software center. However, there was still a problem: when running sudo apt-get update, the following kept returning:

        Failed to get gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages Verification code hash doesn't match
        E: Some archives index failed at being downloaded. They have been ignored, or older copies are used instead.

    So I typed sudo rm /var/lib/apt/lists/* -vf in the terminal and then sudo apt-get update again, and everything is fine now. I did this before an answer was posted; anyway, I agree the problem was that particular package and its removal. So I'll mark the below answer as accepted.
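
    The usual repair sequence for a half-configured package set like this (a sketch; -f asks apt to resolve the broken dependency on its own) is:

        sudo dpkg --configure -a
        sudo apt-get install -f          # pulls in the missing firefox dependency
        sudo apt-get clean && sudo apt-get update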
