Search Results

Search found 6054 results on 243 pages for 'git extensions'.


  • Office 2013: details of the Windows RT tablet version, an edition that will have some limitations, but not too many

    Office 2013 for Windows RT is said to be limited in functionality: Microsoft has reportedly removed support for macros, extensions, and VBA. Microsoft had announced that the ARM tablets running Windows RT would ship with the Office 2013 suite by default. According to unofficial sources, it appears the firm has decided that this version of Office will lack a certain number of features. According to TheVerge, features such as macros, third-party extensions, VBA support, and a small number of other capabilities have been removed. As with the Metro version of Internet Explorer (in which plugins are not allowed), Micro...

    Read the article

  • Can't get packages after installing Faience

    - by ccrama
    I installed the Faience theme (sudo apt-get install Faience) and it installed fine. Then I tried installing another package and it said this...

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run 'apt-get -f install' to correct these:
        The following packages have unmet dependencies:
         faience : Depends: faenza but it is not going to be installed
         gnome-shell-extensions-user-theme : Depends: gnome-shell-extensions-common but it is not going to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    Please help :O!
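    A minimal recovery sketch for this kind of dependency breakage, assuming the missing packages are available from the configured repositories:

        # let apt pull in the dependencies it already knows are missing
        sudo apt-get -f install

        # if that doesn't finish the job, install the named packages explicitly
        sudo apt-get install faenza gnome-shell-extensions-common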

    Read the article

  • Release Note for 3/30/2012

    We have been pretty busy working on a new UI for CodePlex, I will have a preview post coming shortly. Here are the notes from today's release:

    - Updated source code tab to show Author and Committer for Git (Thanks to Brad Wilson for reporting)
    - Fixed issue where pagination did not work correctly in topic view
    - Fixed issue where additional comments on a given line of code would get overridden for Git project

    Have ideas on how to improve CodePlex? Visit our ideas page! Vote for your favorite ideas or submit a new one. Got Twitter? Follow us and keep apprised of the latest releases and service status at @codeplex.

    Read the article

  • Polymorphism problem: How to check type of derived class?

    - by malymato
    Hi, this is my first question here :) I know that I should not check for object type but instead use dynamic_cast, but that would not solve my problem. I have a class called Extension and interfaces called IExtendable and IInitializable, IUpdatable, ILoadable, IDrawable (the last four are basically the same). If Extension implements the IExtendable interface, it can extend itself with different Extension objects. The problem is that I want to allow an Extension which implements IExtendable to extend itself only with an Extension that implements the same interfaces as the original Extension. You probably don't understand that mess, so I'll try to explain it with code:

        class IExtendable
        {
        public:
            IExtendable(void);
            void AddExtension(Extension*);
            void RemoveExtensionByID(unsigned int);
            vector<Extension*>* GetExtensionPtr(){return &extensions;};
        private:
            vector<Extension*> extensions;
        };

        class IUpdatable
        {
        public:
            IUpdatable(void);
            ~IUpdatable(void);
            virtual void Update();
        };

        class Extension
        {
        public:
            Extension(void);
            virtual ~Extension(void);
            void Enable(){enabled=true;};
            void Disable(){enabled=false;};
            unsigned int GetIndex(){return ID;};
        private:
            bool enabled;
            unsigned int ID;
            static unsigned int _indexID;
        };

    Now imagine the case that I create an Extension like this:

        class MyExtension : public Extension, public IExtendable, public IUpdatable, public IDrawable
        {
        public:
            MyExtension(void);
            virtual ~MyExtension(void);
            virtual void AddExtension(Extension*);
            virtual void Update();
            virtual void Draw();
        };

    And I want to allow this class to extend itself only with Extensions that implement the same interfaces (or fewer). For example, I want it to be able to take an Extension which implements IUpdatable, or both IUpdatable and IDrawable, but e.g. not an Extension which implements ILoadable. I want to do this because when e.g. Update() is called on some Extension which implements IExtendable and IUpdatable, it will also be called on the Extensions which extend that Extension. So when I'm adding some Extension to an Extension which implements IExtendable and some of IUpdatable, ILoadable, etc., I'm forced to check whether the Extension that is going to be added implements these interfaces too. So in IExtendable::AddExtension(Extension*) I would need to do something like this:

        void IExtendable::AddExtension(Extension* pEx)
        {
            bool ok = true;
            // check whether this extension can take pEx
            // do this with every interface
            if ((*pEx is IUpdatable) && (*this is_not IUpdatable))
                ok = false;
            if (ok)
                this->extensions.push_back(pEx);
        }

    But how? Any ideas what would be the best solution? I don't want to use dynamic_cast and see if it returns null... thanks

    Read the article

  • evaluating cost/benefits of using extension methods in C# >= 3.0

    - by BillW
    Hi, in what circumstances (usage scenarios) would you choose to write an extension method rather than sub-classing an object?

    Full disclosure: I am not an MS employee; I do not know Mitsu Furota personally; I do know the author of the open-source Componax library mentioned here, but I have no business dealings with him whatsoever; I am not creating, or planning to create, any commercial product using extensions. In sum: this post is from pure intellectual curiosity related to my trying to (continually) become aware of "best practices".

    I find the idea of extension methods "cool," and obviously you can do "far-out" things with them, as in the many examples in Mitsu Furota's (MS) blog posts. A personal friend wrote the open-source Componax library, and there are some remarkable facilities in there; but he is in complete command of his small company, with total control over code guidelines, and every line of code "passes through his hands." While this is speculation on my part: I think/guess other issues might come into play in a medium-to-large software team situation regarding the use of extensions. Looking at MS's guidelines, you find:

        In general, you will probably be calling extension methods far more often than implementing your own. ... In general, we recommend that you implement extension methods sparingly and only when you have to. Whenever possible, client code that must extend an existing type should do so by creating a new type derived from the existing type. For more information, see Inheritance (C# Programming Guide). ... When the compiler encounters a method invocation, it first looks for a match in the type's instance methods. If no match is found, it will search for any extension methods that are defined for the type, and bind to the first extension method that it finds.

    And in MS's documentation on security:

        Extension methods present no specific security vulnerabilities. They can never be used to impersonate existing methods on a type, because all name collisions are resolved in favor of the instance or static method defined by the type itself. Extension methods cannot access any private data in the extended class.

    Factors that seem obvious to me would include:

    - I assume you would not write an extension unless you expected it to be used very generally and very frequently. On the other hand: couldn't you say the same thing about sub-classing?
    - Knowing we can compile them into a separate dll, add the compiled dll, reference it, and then use the extensions is "cool," but does that "balance out" the cost inherent in the compiler first having to check whether instance methods are defined, as described above? Or the cost, in case of a "name clash," of using static invocation to make sure your extension is invoked rather than the instance definition?
    - How frequent use of extensions would affect run-time performance or memory use: I have no idea.

    So, I'd appreciate your thoughts, or knowing about how/when you do, or don't, use extensions compared to sub-classing. Thanks, Bill

    Read the article

  • Unable to install ffmpeg-php

    - by matt74tm
    I followed the instructions on http://www.mysql-apache-php.com/ffmpeg-install.htm but ffmpeg-php does not show up in my phpinfo(). The commands I ran (in order):

        #yum install ffmpeg ffmpeg-devel
        ...
        Public key for faac-1.26-1.el5.rf.x86_64.rpm is not installed

        #rpm -Uhv http://apt.sw.be/redhat/el5/en/i386/rpmforge/RPMS/rpmforge-release-0.3.6-1.el5.rf.i386.rpm
        ...
        1:rpmforge-release ########################################### [100%]

        #yum install ffmpeg
        ...
        Complete!

        #wget http://space.dl.sourceforge.net/project/ffmpeg-php/ffmpeg-php/0.6.0/ffmpeg-php-0.6.0.tbz2
        ...
        #tar -xjf ffmpeg-php-0.6.0.tbz2
        #cd ffmpeg-php-0.6.0
        #phpize
        ...
        configure: error: ffmpeg headers not found. Make sure ffmpeg is compiled as shared libraries using the --enable-shared option

        #yum install ffmpeg-devel
        ...
        Complete!

        #./configure
        ...
        config.status: creating config.h

        #make
        ...
        Build complete.
        Don't forget to run 'make test'.

        #make install
        Installing shared extensions: /usr/local/lib/php/extensions/no-debug-non-zts-20090626/

        #ls -al /usr/local/lib/php/extensions/no-debug-non-zts-20090626/
        ...
        -rwxr-xr-x 1 root root 185285 Sep 20 03:36 ffmpeg.so*
        ...

        #nano /usr/local/lib/php.ini

    In which I put these two lines at the end of the php.ini file:

        [ffmpeg]
        extension=ffmpeg.so

    Then:

        #service httpd restart

    But phpinfo() still does not show any 'ffmpeg' section. This is the correct php.ini because:

        #php -i | grep php\.ini
        Configuration File (php.ini) Path => /usr/local/lib
        Loaded Configuration File => /usr/local/lib/php.ini
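    A few checks that usually narrow this kind of thing down (a sketch; the paths are the ones from the transcript above):

        # does the CLI load the module at all?
        php -m | grep -i ffmpeg

        # which extension_dir is PHP actually using? it must contain ffmpeg.so
        php -i | grep extension_dir

    A common cause is mod_php reading a different php.ini than the CLI, or extension_dir pointing somewhere other than /usr/local/lib/php/extensions/no-debug-non-zts-20090626/. Comparing "Loaded Configuration File" in a phpinfo() page served by Apache against the CLI output above settles the first question.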

    Read the article

  • IIS6 Wildcard Mapping to ASP.NET - no file extension results in IIS 404

    - by Ian Robinson
    I'm trying to perform what I understand to be a relatively simple task. I'd like to remove the extensions from the URLs on my website. I have the proper setup in my application to handle and rewrite the URLs - the trouble is I can't get past IIS to actually reach my application without the extensions. The details: I'm running IIS6 on Windows Server 2003. I've gone into the website for my application, gone to the home directory tab, clicked "Configuration", and added a wildcard map to the following file:

        c:\windows\microsoft.net\framework\v2.0.50727\aspnet_isapi.dll

    which I verified is the same as what is used above in the application extensions portion by .ascx, etc. If I navigate to http://mywebsite.com/Blogs the result is as follows:

        HTTP/1.1 404 Not Found
        Content-Length: 1635
        Content-Type: text/html
        Server: Microsoft-IIS/6.0
        X-Powered-By: ASP.NET
        Date: Thu, 14 Jan 2010 15:04:49 GMT

    which seems to be a standard IIS 404 message. If I navigate to http://mywebsite.com/Blogs.aspx I get my ASP.NET app. How can I troubleshoot this? I feel like I've double-checked everything a dozen times, but to no avail. I must be missing something obvious.

    Update: Here are the exact instructions given by the ASP.NET URL rewriter that I'm using:

        IIS 6.0 - Windows 2003 Server
        1) open property page for website / virtual directory
        2) click the 'home directory' tab
        3) click the 'configuration' button, select the 'mappings' tab
        4) click 'insert' next to the 'Wildcard application maps' section
        5) browse to the aspnet_isapi.dll (normally at c:\windows\microsoft.net\framework\v2.0.50727\aspnet_isapi.dll)
        6) Ensure that 'check that file exists' is unchecked
        7) Click OK, OK, OK to close and apply changes

    Update 2: I have yet to find a resolution for this. The application does not seem to be receiving the request from IIS. Any further ideas?

    Read the article


  • Mediawiki extension error

    - by vinylguitar
    I'm running the latest version of MediaWiki using MoWeS Portable II from my desktop. I just installed this extension on the wiki: http://www.mediawiki.org/wiki/Extension:MsUpload It adds an option to upload files (to be embedded in an article) to the edit screen of an article. After installing it, when I try to edit an article I get the following error:

        Fatal error: Call to undefined method OutputPage::addModules() in C:\Users\User\Desktop\knowledge mapedia 10 25 13 copy\mowes_portable\www\mediawiki\extensions\MsUpload\msupload.php on line 65

    Also, here is what I put in the LocalSettings.php file (at the end of LocalSettings.php, if it makes a difference):

        # Start --------------------------------------- MsUpload
        $wgMSU_ShowAutoKat = false;        #autocategorisation
        $wgMSU_CheckedAutoKat = false;     #checkbox for autocategorisation checked
        $wgMSU_debug = false;              #debug mode
        $wgMSU_ImgParams = '400px';        #default max-size for inserted image
        $wgMSU_UseDragDrop = true;         #show drag&drop area
        require_once "$IP/extensions/MsUpload/msupload.php";
        # End --------------------------------------- MsUpload

        require_once "$IP/extensions/msupload/msupload.php";

    At line 65 in the LocalSettings.php file there is the following:

        line 64  ## Database settings
        line 65  $wgDBtype = "mysql";
        line 66  $wgDBserver = "localhost";
        line 67  $wgDBname = "mediawiki";
        line 68  $wgDBuser = "root";
        line 69  $wgDBpassword = "";

    Any idea what I'm doing wrong?

    Read the article

  • Can't install NPM after installing Node on EC2 Linux instance?

    - by frequent
    I'm trying my first attempt at getting a Node server set up on an Amazon EC2 Linux instance. I think I made it quite far. The first problem I ran into was that when trying to make Node the connection timed out after a while, so I needed three attempts until I got this:

        LINK(target) /home/ec2-user/node/out/Release/node: Finished
        touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_header.stamp
        touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_provider.stamp
        touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_ustack.stamp
        touch /home/ec2-user/node/out/Release/obj.target/node_etw.stamp
        make[1]: Leaving directory `/home/ec2-user/node/out'
        ln -fs out/Release/node node

    which tells me "Node is done", although I'm not sure it is also working as it should. Following this, this and this tutorial, I'm now stuck at installing npm. I think I first cloned into the wrong folder, which always gave me error 127, but even if I'm doing this:

        cd ~
        git clone git://github.com/isaacs/npm.git
        cd npm
        sudo -s PATH=/usr/local/bin:$PATH make install

    I'm still getting this:

        #after cloning#
        make[1]: Entering directory `/root/npm'
        node cli.js install
        bash: node: command not found
        make[1]: *** [node_modules/.bin/ronn] Error 127
        make[1]: Leaving directory `/root/npm'
        make: *** [man/man3/start.3] Error 2

    Question: since I'm pretty much a newbie at everything I'm trying here, can someone please tell me what I'm doing wrong and how to get npm to install? Also, in case I cloned into the wrong folder, is there a way to remove the "false clone", or is this not written to disk until I call make install and I don't need to worry? Thanks for helping out!
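    The "bash: node: command not found" line is the giveaway: make install runs node cli.js as root, and root's PATH does not include the freshly built binary. A sketch of the usual fix (the /home/ec2-user/node path comes from the build log above; installing into /usr/local is an assumption about the preferred prefix):

        # install the built node into /usr/local (its default prefix)
        cd ~/node
        sudo make install

        # or just symlink the binary the build produced
        sudo ln -s /home/ec2-user/node/node /usr/local/bin/node

        # verify root can see it before retrying the npm build
        sudo which node && sudo node --version

    As for the "false clone": git clone writes everything under the target directory at clone time, so deleting that directory with rm -rf removes it completely.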

    Read the article

  • How should secret files be pushed to an EC2 (on AWS) Ruby on Rails application?

    - by nikc
    How should secret files be pushed to an EC2 Ruby on Rails application using Amazon Web Services with their Elastic Beanstalk? I add the files to a git repository, and I push to GitHub, but I want to keep my secret files out of the git repository. I'm deploying to AWS using:

        git aws.push

    The following files are in the .gitignore:

        /config/database.yml
        /config/initializers/omniauth.rb
        /config/initializers/secret_token.rb

    Following this link I attempted to add an S3 file to my deployment: http://docs.amazonwebservices.com/elasticbeanstalk/latest/dg/customize-containers.html Quoting from that link:

        Example Snippet
        The following example downloads a zip file from an Amazon S3 bucket and unpacks it into /etc/myapp:

        sources:
          /etc/myapp: http://s3.amazonaws.com/mybucket/myobject

    Following those directions, I uploaded a file to an S3 bucket and added the following to a private.config file in the .elasticbeanstalk .ebextensions directory:

        sources:
          /var/app/current/: https://s3.amazonaws.com/mybucket/config.tar.gz

    That config.tar.gz file will extract to:

        /config/database.yml
        /config/initializers/omniauth.rb
        /config/initializers/secret_token.rb

    However, when the application is deployed, the config.tar.gz file on the S3 host is never copied or extracted. I still receive errors that the database.yml couldn't be located, and the EC2 log has no record of the config file. Here is the error message:

        Error message: No such file or directory - /var/app/current/config/database.yml
        Exception class: Errno::ENOENT
        Application root: /var/app/current

    Read the article

  • Dash (-) in directory listing

    - by Mazzy
    I've Googled around for this to no avail. I'm sure it's just something simple, but I have not been able to figure this out, perhaps because searching in Google or SF for a "-" can be problematic. I had a strange directory listing show up the other day in my git repository within Drupal. Listing my sites directory looks like this:

        -sh-4.1$ ls -al
        total 52
        drwxr-xr-x  5 (hide) (hide)  4096 Dec  6 16:15 .
        drwxr-xr-x 24 (hide) (hide)  4096 Dec 11 16:22 ..
        -rw-rw-r--  1 (hide) (hide) 24271 Dec  6 15:57 –
        drwxrwxr-x  4 (hide) (hide)  4096 Sep 17 11:53 all
        drwxr-xr-x  3 (hide) (hide)  4096 Sep 17 11:54 default
        drwxrwxr-x  8 (hide) (hide)  4096 Dec 11 17:40 .git
        -rw-rw-r--  1 (hide) (hide)   476 Sep 17 11:53 .gitignore
        -rw-rw-r--  1 (hide) (hide)    81 Sep 17 11:53 README.md

    This "–" file cannot be opened and does not appear to be a symlink, although when I execute "cd -" I get this:

        -sh-4.1$ cd -
        /home/sites/dev1.(hide).com

    That is, coincidentally or not, the user's home directory and the site's root directory. The other strange thing is this entry does not show up for any other user browsing this same directory, nor does it show up for other users, period, in their git directories. The entry cannot be removed via rm. Running CentOS 6.2, by the way...
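    Two details worth noting before reaching for the usual answers: cd - does not touch the file at all (in bash it switches to $OLDPWD, the previous working directory, which is why it prints the site root), and the listing shows the name is an en dash (–), not the ASCII hyphen, which is why plain rm attempts keep missing it. A sketch of the standard ways to delete such a file:

        # end option parsing, then give the literal name (paste the en dash)
        rm -- –

        # or give it as a path so it cannot be parsed as an option
        rm ./–

        # or, most robust: look up the inode and delete by inode number
        ls -lib
        find . -maxdepth 1 -inum 123456 -delete   # 123456 = the inode shown by ls -i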

    Read the article

  • Specify default group and permissions for new files in a certain directory

    - by mislav
    I have a certain directory in which there is a project shared by multiple users. These users use SSH to gain access to this directory and modify/create files. This project should only be writeable to a certain group of users: let's call it "mygroup". During an SSH session, all files/directories created by the current user should by default be owned by group "mygroup" and have group-writeable permissions. I can solve the permissions problem with umask:

        $ cd project
        $ umask 002
        $ touch test.txt

    File "test.txt" is now group-writeable, but still belongs to my default group ("mislav", same as my username) and not to "mygroup". I can chgrp recursively to set the desired group, but I wanted to know whether there is a way to set some group implicitly, like umask changes default permissions during a session. This specific directory is a shared git repo with a working copy, and I want git checkout and git reset operations to set the correct mask and group for new files created in the working copy. The OS is Ubuntu Linux. Update: a colleague suggests I should look into getfacl/setfacl of POSIX ACL, but the solution below combined with umask 002 in the current session is good enough for me and is much more simple.
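    The "solution below" that the update refers to is presumably the setgid bit on directories: files created inside a setgid directory inherit that directory's group instead of the creator's primary group. A minimal sketch, using the names from the question:

        # hand the tree to the shared group and set setgid on every directory
        sudo chgrp -R mygroup project
        sudo find project -type d -exec chmod g+ws {} +

        # new files now inherit mygroup; umask 002 keeps them group-writeable
        cd project
        umask 002
        touch test.txt
        ls -l test.txt   # the group column should now read mygroup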

    Read the article

  • Why does this preseed for gitolite fail?

    - by troutwine
    I'm installing gitolite on a Debian Squeeze box with the following preseed:

        gitolite gitolite/gituser  string git
        gitolite gitolite/adminkey string ssh-rsa AAAAB3ECT
        gitolite gitolite/gitdir   string /var/lib/git

    On installation:

        # debconf-set-selections /var/cache/debconf/gitolite.preseed
        # apt-get install gitolite
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Suggested packages:
          git-daemon-run gitweb
        The following NEW packages will be installed:
          gitolite
        0 upgraded, 1 newly installed, 0 to remove and 26 not upgraded.
        Need to get 0 B/114 kB of archives.
        After this operation, 348 kB of additional disk space will be used.
        Preconfiguring packages ...
        Selecting previously deselected package gitolite.
        (Reading database ... 24715 files and directories currently installed.)
        Unpacking gitolite (from .../gitolite_1.5.4-2+squeeze1_all.deb) ...
        Setting up gitolite (1.5.4-2+squeeze1) ...
        adduser: The home dir must be an absolute path.
        dpkg: error processing gitolite (--configure):
         subprocess installed post-installation script returned error exit status 1
        configured to not write apport reports
        Errors were encountered while processing:
         gitolite
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Why? The preseed was extracted from a manually configured installation, per here, and exists without issue on another machine.
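    A hedged debugging sketch: comparing what debconf actually recorded against the preseed file often exposes a mis-named or mis-typed key (debconf-get-selections ships in the debconf-utils package):

        # show the values the gitolite package ended up with
        debconf-show gitolite

        # dump current selections for side-by-side comparison with the preseed
        debconf-get-selections | grep gitolite

        # re-apply the preseed and retry the failed configure step
        debconf-set-selections /var/cache/debconf/gitolite.preseed
        dpkg --configure -a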

    Read the article

  • Which version control should I use for my configuration files?

    - by rakete
    I want to store some of my configuration files (~/.emacs.d/, .Xdefaults, etc. - linux $HOME stuff) in version control so I can easily sync them with my notebook/workplace, see my past changes, and revert to them should the need arise. So far it seems to me that there are quite a few people using git for this, and I think that I too want to use a distributed vcs for this (if only to get more used to them), but I can't say that I am very experienced with all things dvcs. I did use darcs and git briefly, and so far I can say that I really like the way git handles branches, and I think the possibility of having different branches within the same directory is especially useful for my use case. Darcs, on the other hand, has cherry-picking of patches, which is also quite a convenient feature when managing configuration files (at least I assume it is). So, what would you recommend to use? And what would be your reasoning for your recommendation? What other vcs with nice features that I haven't mentioned exist and would make a good vcs to store configuration files, and why?
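    For what it's worth, one common git-based arrangement is a bare repository with $HOME as the work tree, which avoids symlink farms entirely. A sketch - the ~/.cfg location and the config alias name are arbitrary choices:

        # one-time setup
        git init --bare "$HOME/.cfg"
        alias config='git --git-dir=$HOME/.cfg/ --work-tree=$HOME'
        config config status.showUntrackedFiles no   # hide the rest of $HOME

        # day-to-day use
        config add ~/.Xdefaults ~/.emacs.d/init.el
        config commit -m "track X and emacs configs"
        config push origin master   # assumes a remote named origin is set up

    Per-machine branches then work the way the question hopes: config checkout -b notebook keeps machine-specific tweaks on their own branch.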

    Read the article

  • Setting cfengine3 class based on command output

    - by gnomie
    This question is very similar to "How can I use the output of a command in cfengine3", but the answer does not apply in my case, I believe. I want to update a git repository via "git pull" and, based on whether that led to changes, trigger some follow-up action. Simplified, if there were something like "match output and set class" via some body if_output_matches, I would want to use something like this:

        bundle agent updateRepo
        {
        commands:
            "/usr/bin/git pull"
                contain => setuidgiddir_sh("$(globals.user)","$(globals.group)","$(target)"),
                classes => if_output_matches("Already up-to-date.","no_update");

        reports:
            no_update::
                "nothing updated";
        }

        body contain setuidgiddir_sh(owner,group,folder)
        {
            exec_owner => "$(owner)";
            exec_group => "$(group)";
            useshell   => "true";
            chdir      => "$(folder)";
        }

    So, is it possible to use the output of a - possibly expensive - command and base some decision on that? The execresult function is no good choice for me, as a) the pull may become expensive at times (which is not recommended, following the cfengine3 reference) and b) it does not allow specifying the user, group, or working dir - which is important in my case. The repository is in user space and not owned by root.
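    One workaround, sketched here under the assumption that no if_output_matches body exists: encode "changed or not" in the exit status of a small wrapper script, then let cfengine map return codes to classes via the kept_returncodes/repaired_returncodes attributes of a classes body. The 0-means-up-to-date convention below is an arbitrary choice:

        #!/bin/sh
        # git-pull-wrapper: exit 0 if already up to date, 1 if the pull moved HEAD
        cd "$1" || exit 2
        before=$(git rev-parse HEAD)
        git pull --quiet || exit 2
        after=$(git rev-parse HEAD)
        [ "$before" = "$after" ] && exit 0 || exit 1

    The commands promise can keep its contain body unchanged, and a classes body with kept_returncodes => { "0" } and repaired_returncodes => { "1" } then raises a class only when something was actually pulled.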

    Read the article

  • How can I switch from a custom linux network namespace back to the default one?

    - by Martin
    With ip netns exec you can execute a command in a custom network namespace - but is there also a way to execute a command in the default namespace? For example, after executing these two commands:

        sudo ip netns add test_ns
        sudo ip netns exec test_ns bash

    how can the newly created bash execute programs in the default network namespace? There is no ip netns exec default or anything similar, as far as I've found. My scenario is: I want to run an SSH server in a separate network namespace (to keep the rest of the system unaware of the network connection, as the system is used for network testing), but I want to be able to execute programs in the default network namespace via the SSH connection. What I've found out so far:

    - Created network namespaces are listed as files under /var/run/netns (but there is no file for the default namespace).
    - The ip netns exec code can be found here: http://git.kernel.org/cgit/linux/kernel/git/shemminger/iproute2.git/tree/ip/ipnetns.c#n132 - I haven't grasped everything that it is doing yet, but it doesn't look very promising.
    - ip netns identify $$, as suggested by "Howto query and change network namespace on linux?", returns nothing when in the default network namespace.
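    A sketch of one approach that works on most systems: PID 1 virtually always runs in the default network namespace, so its namespace handle at /proc/1/ns/net can be borrowed, either directly via nsenter (from util-linux) or by bind-mounting it to give the default namespace a name:

        # run one command in init's (i.e. the default) network namespace
        sudo nsenter --target 1 --net ip addr show

        # or name the default namespace so ip netns exec can address it
        sudo touch /var/run/netns/default
        sudo mount --bind /proc/1/ns/net /var/run/netns/default
        sudo ip netns exec default ip addr show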

    Read the article

  • How can I avoid permission denied errors when attempting to deploy a rails app with capistrano?

    - by joshee
    Total noob here. I'm attempting to deploy an app through Capistrano. I'm getting relentless permission denied errors when I attempt to run cap deploy:update. Seemingly at least some of these errors are due to missing directories that trigger a "Permission Denied" error. (I'm doing setup on root just temporarily.)

        set :user, 'root'
        set :domain, 'domainname.com'
        set :application, 'appname'

        # adjust if you are using RVM, remove if you are not
        $:.unshift(File.expand_path('./lib', ENV['rvm_path']))
        require "rvm/capistrano"
        set :rvm_ruby_string, '1.9.2'

        # file paths
        set :repository, "ssh://[email protected]/~/git/appname.git"
        set :deploy_to, "/var/rails/appname"

        # distribute your applications across servers (the instructions below put them
        # all on the same server, defined above as 'domain', adjust as necessary)
        role :app, domain
        role :web, domain
        role :db, domain, :primary => true

        set :deploy_via, :remote_cache
        set :scm, 'git'
        set :branch, 'master'
        set :scm_verbose, true
        set :use_sudo, false
        set :rails_env, :production

        namespace :deploy do
          desc "cause Passenger to initiate a restart"
          task :restart do
            run "touch #{current_path}/tmp/restart.txt"
          end
          desc "reload the database with seed data"
          task :seed do
            run "cd #{current_path}; rake db:seed RAILS_ENV=#{rails_env}"
          end
        end

        after "deploy:update_code", :bundle_install
        desc "install the necessary prerequisites"
        task :bundle_install, :roles => :app do
          run "cd #{release_path} && bundle install"
        end

    Here's my result:

        ** [domainname.com :: out] Cloning into '/var/rails/appname/shared/cached-copy'...
        ** [domainname.com :: err] Permission denied, please try again.
        ** [domainname.com :: err] Permission denied, please try again.
        ** [domainname.com :: err] Permission denied (publickey,gssapi-with-mic,password).
        ** [domainname.com :: err] fatal: The remote end hung up unexpectedly

    I'm able to ssh without a password, so not sure about that publickey error. By the way, if I run cap deploy:update without set :deploy_via, :remote_cache, here's my result:

        ** [domainname.com :: out] Cloning into '/var/rails/appname/releases/20120326204237'...
        ** [domainname.com :: err] Permission denied, please try again.
        ** [domainname.com :: err] Permission denied, please try again.
        ** [domainname.com :: err] Permission denied (publickey,gssapi-with-mic,password).
        ** [domainname.com :: err] fatal: The remote end hung up unexpectedly
        command finished

    Thanks a lot for your help with this.
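    One hedged observation: the failing ssh is the second hop - the deploy server cloning from the repository host - not your workstation reaching the deploy server, so a passwordless login from your machine proves nothing about that leg. A way to test it (the repo URL below is a placeholder, since the real one is redacted above):

        # can the deploy server itself read the repository?
        ssh root@domainname.com 'git ls-remote ssh://git@repohost/~/git/appname.git'

        # retry with agent forwarding, lending the server your local key
        ssh -A root@domainname.com 'git ls-remote ssh://git@repohost/~/git/appname.git'

    If only the forwarded attempt succeeds, the usual Capistrano fix is ssh_options[:forward_agent] = true in the recipe.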

    Read the article


  • Determine which user initiated call in Asterisk

    - by adaptive
    I had the following code in my extensions.conf file:

        [local]
        exten => _NXXNXXXXXX,1,Set(CALLERID(name)=${OUTGOING_NAME})
        exten => _NXXNXXXXXX,n,Set(CALLERID(num)=${OUTGOING_NUMBER})

    Now I want to change this code to set the caller ID name and number based on the user/extension that is making the call. In fact I have four (4) users/extensions in my sip.conf, and only one of them (the one I use for business) is supposed to send a different caller ID/number. Everything is in the same context (for simplicity) since all lines need to be able to pick up an incoming call. The only difference is that when line1 needs to make a call, it has to send a different caller ID/number and use a different provider. This is what I have so far:

        [local]
        exten => _NXXNXXXXXX,1,Set(line=${SIP_HEADER(From)})
        exten => _NXXNXXXXXX,n,Verbose(line variable is <${line}>)
        exten => _NXXNXXXXXX,n,Set(CALLERID(name)=${IF($[ ${line} = line1 ]?${COMPANY_NAME}:${FAMILY_NAME})})
        exten => _NXXNXXXXXX,n,Set(CALLERID(num)=${IF($[ ${line} = line1 ]?${COMPANY_NUMBER}:${FAMILY_NUMBER})})
        exten => _NXXNXXXXXX,n,Dial(${IF($[ ${line} = line1]?SIP/${EXTEN}@${COMPANY_PROVIDER}:SIP/${EXTEN}@${FAMILY_PROVIDER})})

    I really don't know if this is correct, and I'm afraid to commit these changes to my extensions.conf before validating. Any help will be greatly appreciated.

    Read the article

  • How to decouple development server from Internet?

    - by intoxicated.roamer
    I am working in a small set-up where there are 4 developers (might grow to 6 or 8 in a couple of years). I want to set up an environment in which developers get internet access but cannot share any data from the company on the internet. I have thought of the following plan:
    1) Set up a centralized git server (Debian). The server will have internet access. A developer will only have a git account on that server, and won't have any other account on it.
    2) Do not give internet access to developers' individual machines (Windows XP/Windows 7).
    3) Run a virtual machine (any multi-user OS) on the centralized server (the same one on which git is hosted). A developer will have an account on this virtual machine and can access the internet via it. Any data movement between this virtual machine and the underlying server, as well as any of the developers' machines, is prohibited.
    4) All developers require a USB port on their local machine so that they can burn their code into a microcontroller. This port will be made available only to the associated software that dumps the code into a microcontroller (MPLAB in the current case). All other software will be prohibited from accessing the port.
    As more developers get added, providing internet support for them will become difficult with this plan, as it will slow down the virtual machine running on the server. Can anyone suggest an alternative? Are there any obvious flaws in the above plan? Some key details of the server are as below:
    1) OS: Debian
    2) RAM: 8GB
    3) CPU: Intel Xeon E3-1220v2 4C/4T

    Read the article

  • How can I document and automate a system's configuration?

    - by Diomidis Spinellis
    Having a system's configuration represented by its current state is risky, inefficient, and opaque. At some point you may be left with an unsupported system and no upgrade path. Then configuring a new system compatible with the old is a process of trial and error. Furthermore, if at some point the system is damaged, the only option is to go back to the most recent full backup and try to remember what changes followed from that point. Also, the only way to create a system compatible with the original is through a complete dump/restore. Finally, in such a setup there's no way to know how you solved a particular problem; the only thing you can do is to look at the corresponding configuration files and try to guess what you changed to achieve the desired effect.

    Currently, for each system I maintain, I keep a log file where I record all system administration activity, starting from the installation: installation options, added packages, changes in configuration files, updates, problem fixes, etc. In theory this allows me to (manually) replay all changes to arrive at the current state, or to unroll an erroneous change by executing the reverse commands. However, this process is also inefficient, error-prone, and relies on human judgment. Another thing I've tried is to put /etc configuration files under version control with git. This helps me document the changes automatically and also apply them on a clean setup. But it's not without problems: git has to run under sudo, passwords and private keys may be stored in the repository, installed packages can't be meaningfully tracked, and git will have a fit if I try to extend this approach to all the system's directories. I've also thought about performing all changes through shell scripts or makefiles, but I think this process will require a lot of effort and will be fragile. Are there some better methods or tools that I'm missing?
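    For the /etc-under-git part specifically, etckeeper is an existing tool aimed at exactly the problems listed: it runs as root, records the file metadata git ignores, and hooks the package manager so installs and upgrades are committed automatically. A minimal sketch on a Debian-family system:

        # install and initialize (apt hooks are set up automatically)
        sudo apt-get install etckeeper
        sudo etckeeper init
        sudo etckeeper commit "initial import of /etc"

        # later, manual changes are committed like any git repo
        sudoedit /etc/ssh/sshd_config
        sudo etckeeper commit "tighten sshd settings"

    It does not solve the secrets-in-the-repository concern, which still needs .gitignore entries or an encryption layer on top.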

    Read the article

  • Where is android.os.SystemProperties

    - by Travis
    I'm looking at the Android Camera code, and when I try importing android.os.SystemProperties it cannot be found. Here is the file I'm looking at: http://android.git.kernel.org/?p=platform/packages/apps/Camera.git;a=blob;f=src/com/android/camera/VideoCamera.java;h=8effd3c7e2841eb1ccf0cfce52ca71085642d113;hb=refs/heads/eclair I created a new 2.1 project and tried importing this namespace again, but it still cannot be found. I checked developer.android.com and SystemProperties was not listed. Did I miss something?

    Read the article

  • AutoIt script runs without error but I can't see the archive?

    - by Scott
    #include <File.au3>
    #include <Zip.au3>

    ; bad file extensions
    Local $extData = "ade|adp|app|asa|ashx|asp|bas|bat|cdx|cer|chm|class|cmd|com|cpl|crt|csh|der|exe|fxp|gadget|hlp|hta|htr|htw|ida|idc|idq|ins|isp|its|jse|ksh|lnk|mad|maf|mag|mam|maq|mar|mas|mat|mau|mav|maw|mda|mdb|mde|mdt|mdw|mdz|msc|msh|msh1|msh1xml|msh2|msh2xml|mshxml|msi|msp|mst|ops|pcd|pif|prf|prg|printer|pst|reg|rem|scf|scr|sct|shb|shs|shtm|shtml|soap|stm|url|vb|vbe|vbs|ws|wsc|wsf|wsh"
    Local $extensions = StringSplit($extData, "|")

    ; What is the root directory?
    $rootDirectory = InputBox("Root Directory", "Please enter the root directory...")
    archiveDir($rootDirectory)

    Func archiveDir($dir)
        $goDirs = True
        $goFiles = True
        ; Get all the files under the current dir
        $allOfDir = _FileListToArray($dir)
        Local $countDirs = 0
        Local $countFiles = 0
        $imax = UBound($allOfDir)
        For $i = 0 to $imax - 1
            If StringInStr(FileGetAttrib($dir & "\" & $allOfDir[$i]), "D") Then
                $countDirs = $countDirs + 1
            ElseIf StringInStr(($allOfDir[$i]), ".") Then
                $countFiles = $countFiles + 1
            EndIf
        Next
        MsgBox(0, "Value of $countDirs in " & $dir, $countDirs)
        MsgBox(0, "Value of $countFiles in " & $dir, $countFiles)
        If ($countDirs > 0) Then
            Local $allDirs[$countDirs]
            $goDirs = True
        Else
            $goDirs = False
        EndIf
        If ($countFiles > 0) Then
            Local $allFiles[$countFiles]
            $goFiles = True
        Else
            $goFiles = False
        EndIf
        $dirCount = 0
        $fileCount = 0
        For $i = 0 to $imax - 1
            If (StringInStr(FileGetAttrib($dir & "\" & $allOfDir[$i]), "D")) And ($goDirs == True) Then
                $allDirs[$dirCount] = $allOfDir[$i]
                $dirCount = $dirCount + 1
            ElseIf (StringInStr(($allOfDir[$i]), ".")) And ($goFiles == True) Then
                $allFiles[$fileCount] = $allOfDir[$i]
                $fileCount = $fileCount + 1
            EndIf
        Next
        ; Zip them if need be in current spot using 'ext_zip.zip' as file name, loop through each file ext.
        If ($goFiles == True) Then
            $emax = UBound($extensions)
            $fmax = UBound($allFiles)
            For $e = 0 to $emax - 1
                For $f = 0 to $fmax - 1
                    $currentExt = getExt($allFiles[$f])
                    If ($currentExt == $extensions[$e]) Then
                        $zip = _Zip_Create($dir & "\" & $currentExt & "_zip.zip")
                        _Zip_AddFile($zip, $allFiles[$f])
                    EndIf
                Next
            Next
        EndIf
        ; Get all dirs under current DirCopy
        ; For each dir, recursive call from step 2
        If ($goDirs == True) Then
            $dmax = UBound($allDirs)
            $rootDirectory = $rootDirectory & "\"
            For $d = 0 to $dmax - 1
                archiveDir($rootDirectory & $allDirs[$d])
            Next
        EndIf
    EndFunc

    Func getExt($filename)
        $pos = StringInStr($filename, ".")
        $retval = StringTrimLeft($filename, $pos + 1)
        Return $retval
    EndFunc

    This should output the .zip archives in the directories where it finds the files that need zipping, but it doesn't. Is there something I have to do after I create the archive and add files to it within the code to put the created archive in the directory?

    Read the article

  • uninitialized constant Active Scaffold rails 2.3.5

    - by Kiva
    Hi guys, I updated my Rails application from 2.0.2 to 2.3.5. I use ActiveScaffold for the administration part. I changed nothing in my code, but a problem came with the update. I have a controller 'admin/user_controller' to manage users. Here is the code of the controller:

        class Admin::UserController < ApplicationController
          layout 'admin'

          active_scaffold :user do |config|
            config.columns.exclude :content, :historique_content, :user_has_objet, :user_has_arme, :user_has_entrainement, :user_has_mission, :mp, :pvp, :user_salt, :tchat, :notoriete_by_pvp, :invitation
            config.list.columns = [:user_login, :user_niveau, :user_mail, :user_bloc, :user_valide, :group_id] #:user_description, :race, :group, :user_lastvisited, :user_nextaction, :user_combats_gagner, :user_combats_perdu, :user_combats_nul, :user_password, :user_salt, :user_combats, :user_experience, :user_mana, :user_vie
            config.create.link.page = true
            config.update.link.page = true
            config.create.columns.add :password, :password_confirmation
            config.update.columns.add :password, :password_confirmation
            config.create.columns.exclude :user_password, :user_salt
            config.update.columns.exclude :user_password, :user_salt
            config.list.sorting = {:user_login => 'ASC'}
            config.subform.columns = []
          end
        end

    This code hasn't changed with the update, but when I go to this page, I get this error:

        uninitialized constant Users

        /Users/Kiva/.gem/ruby/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:443:in `load_missing_constant'
        /Users/Kiva/.gem/ruby/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:80:in `const_missing'
        /Users/Kiva/.gem/ruby/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:92:in `const_missing'
        /Users/Kiva/.gem/ruby/1.8/gems/activesupport-2.3.5/lib/active_support/inflector.rb:361:in `constantize'
        /Users/Kiva/.gem/ruby/1.8/gems/activesupport-2.3.5/lib/active_support/inflector.rb:360:in `each'
        /Users/Kiva/.gem/ruby/1.8/gems/activesupport-2.3.5/lib/active_support/inflector.rb:360:in `constantize'
        /Users/Kiva/.gem/ruby/1.8/gems/activesupport-2.3.5/lib/active_support/core_ext/string/inflections.rb:162:in `constantize'
        /Users/Kiva/Documents/Projet-rpg/jeu/vendor/plugins/active_scaffold/lib/extensions/reverse_associations.rb:28:in `reverse_matches_for'
        /Users/Kiva/Documents/Projet-rpg/jeu/vendor/plugins/active_scaffold/lib/extensions/reverse_associations.rb:24:in `each'
        /Users/Kiva/Documents/Projet-rpg/jeu/vendor/plugins/active_scaffold/lib/extensions/reverse_associations.rb:24:in `reverse_matches_for'
        /Users/Kiva/Documents/Projet-rpg/jeu/vendor/plugins/active_scaffold/lib/extensions/reverse_associations.rb:11:in `reverse'
        /Users/Kiva/Documents/Projet-rpg/jeu/vendor/plugins/active_scaffold/lib/active_scaffold/data_structures/column.rb:117:in `autolink?'
        /Users/Kiva/Documents/Projet-rpg/jeu/vendor/plugins/active_scaffold/lib/active_scaffold.rb:107:in `links_for_associations'
        /Users/Kiva/Documents/Projet-rpg/jeu/vendor/plugins/active_scaffold/lib/active_scaffold/data_structures/columns.rb:62:in `each'
        /Users/Kiva/Documents/Projet-rpg/jeu/vendor/plugins/active_scaffold/lib/active_scaffold/data_structures/columns.rb:62:in `each'
        /Users/Kiva/Documents/Projet-rpg/jeu/vendor/plugins/active_scaffold/lib/active_scaffold.rb:106:in `links_for_associations'
        /Users/Kiva/Documents/Projet-rpg/jeu/vendor/plugins/active_scaffold/lib/active_scaffold.rb:59:in `active_scaffold'
        /Users/Kiva/Documents/Projet-rpg/jeu/app/controllers/admin/user_controller.rb:11

    I have searched for 2 days but I can't find the problem. Can you help me please?

    Read the article
