Search Results

Search found 3833 results on 154 pages for 'git noob'.

Page 87/154 | < Previous Page | 83 84 85 86 87 88 89 90 91 92 93 94  | Next Page >

  • How to translate along Z axis in OpenTK

    - by JeremyJAlpha
    I am playing around with an OpenGL sample application I downloaded for Xamarin-Android. The sample application produces a rotating colored cube; I would simply like to edit it so that the rotating cube is translated along the Z axis and disappears into the distance. I modified the code by: adding a cumulative variable to store my Z distance; adding GL.Enable(All.DepthBufferBit) - unsure if I put it in the right place; and adding GL.Translate(0.0f, 0.0f, Depth) before the rotate calls. Result: the cube rotates a couple of times then disappears; it seems to be getting clipped out of the frustum. So my question is: what is the correct way to use and initialize the Z buffer and get the cube to travel along the Z axis? I am sure I am missing some function calls but am unsure of what they are and where to put them. I apologise in advance as this is very basic stuff, but I am still learning :P. I would appreciate it if anyone could show me the best way to get the cube to still rotate but also move along the Z axis. I have commented all my modifications in the code:

        // This gets called when the drawing surface is ready
        protected override void OnLoad (EventArgs e)
        {
            // this call is optional, and meant to raise delegates
            // in case any are registered
            base.OnLoad (e);

            // UpdateFrame and RenderFrame are called
            // by the render loop. This takes effect
            // when we use 'Run ()', like below
            UpdateFrame += delegate (object sender, FrameEventArgs args)
            {
                // Rotate at a constant speed
                for (int i = 0; i < 3; i ++)
                    rot [i] += (float) (rateOfRotationPS [i] * args.Time);
            };

            RenderFrame += delegate { RenderCube (); };

            GL.Enable(All.DepthBufferBit); //Added by Noob
            GL.Enable(All.CullFace);
            GL.ShadeModel(All.Smooth);
            GL.Hint(All.PerspectiveCorrectionHint, All.Nicest);

            // Run the render loop
            Run (30);
        }

        void RenderCube ()
        {
            GL.Viewport(0, 0, viewportWidth, viewportHeight);

            GL.MatrixMode (All.Projection);
            GL.LoadIdentity ();
            if ( viewportWidth > viewportHeight ) {
                GL.Ortho(-1.5f, 1.5f, 1.0f, -1.0f, -1.0f, 1.0f);
            } else {
                GL.Ortho(-1.0f, 1.0f, -1.5f, 1.5f, -1.0f, 1.0f);
            }

            GL.MatrixMode (All.Modelview);
            GL.LoadIdentity ();
            Depth -= 0.02f; //Added by Noob
            GL.Translate(0.0f,0.0f,Depth); //Added by Noob
            GL.Rotate (rot[0], 1.0f, 0.0f, 0.0f);
            GL.Rotate (rot[1], 0.0f, 1.0f, 0.0f);
            GL.Rotate (rot[2], 0.0f, 1.0f, 0.0f);

            GL.ClearColor (0, 0, 0, 1.0f);
            GL.Clear (ClearBufferMask.ColorBufferBit);

            GL.VertexPointer(3, All.Float, 0, cube);
            GL.EnableClientState (All.VertexArray);
            GL.ColorPointer (4, All.Float, 0, cubeColors);
            GL.EnableClientState (All.ColorArray);
            GL.DrawElements(All.Triangles, 36, All.UnsignedByte, triangles);

            SwapBuffers ();
        }

    Read the article

  • Detect fails in setup script

    - by Lai Yu-Hsuan
    I wrote a setup script to install my preferred programs and settings after I got a new server. apt-get install git git clone http://[email protected] .vim ln -s .vimrc .vim/vimrc ... But if something goes wrong during setup, how can I interrupt the setup script and log the error(s)? For example, if the GitHub server is down, it is obviously useless to create a symbolic link to a non-existent vimrc. (Or do you have a better approach to initializing a server?)
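    A minimal sketch of one way to stop on the first failure and log it, assuming a plain bash script (the repository URL and the log file name below are only illustrative placeholders, not the ones from the question):

        #!/bin/bash
        # Abort on the first failing command, on unset variables, and on failures inside pipes.
        set -euo pipefail

        # Record the line number of any failing command before the script exits.
        trap 'echo "setup failed at line $LINENO" >> setup.log' ERR

        apt-get install -y git
        git clone http://example.com/dotvim.git .vim   # hypothetical repository URL
        ln -s .vim/vimrc .vimrc

    With set -e the symlink step is never reached if the clone fails; alternatively, each critical command can be written as cmd || { echo "cmd failed" >> setup.log; exit 1; } for per-step messages.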

    Read the article

  • Version control with no server installation

    - by Francisco Garcia
    I have ssh access to many servers where I have no root privileges. Do you know of any version control utility that can work with remote ssh repositories without installing anything on the remote server? I have tried a bare git repository folder, but it seems to demand some script/binary/installation on the server. I also don't like git because it is not very portable; the portable versions are made of too many files.
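    One approach that needs nothing on the server beyond SSH itself is to mount the remote directory locally (for example with sshfs) and treat a bare repository on that mount as a plain filesystem remote, so every git command runs on the local machine. A rough sketch, assuming sshfs is available locally; the host and paths are only examples:

        # Mount the remote home directory locally; git never has to run on the server.
        mkdir -p ~/mnt/server
        sshfs user@server:/home/user ~/mnt/server

        # Create a bare repository on the mounted path and use it as an ordinary remote.
        git init --bare ~/mnt/server/repos/project.git
        git remote add origin ~/mnt/server/repos/project.git
        git push origin master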

    Read the article

  • How do I open files from the command line in Mac

    - by Michael
    I'm following a video tutorial where the author (who uses TextMate) can open files by using "mate"; for example, mate .git/config will open that config file. I'm using TextWrangler, however, so I don't have that option. I did try edit .README once when I tried to open the README file of an application, but it opened a blank README file in TextWrangler instead of the file with the text in it. So, any idea how I can open this .git/config file (or any other file) using TextWrangler? I'm using Mac OS X Snow Leopard.
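    A couple of ways that should work with TextWrangler on Snow Leopard (the second assumes TextWrangler's optional command-line tool has been installed from its own menu):

        # Use the stock macOS launcher to hand a specific file to a specific application.
        open -a TextWrangler .git/config

        # TextWrangler ships an 'edit' command-line tool you can install from the app;
        # once installed it behaves much like TextMate's 'mate'.
        edit .git/config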

    Read the article

  • Rails 3 / RVM - Acts_as_list compiled locally - Why Can't Ruby See This Gem?

    - by rabbit on rails
    I cannot figure out why rails/ruby cannot see this gem, despite each telling me that the gem is visible. I compiled this gem locally from a github branch since the main version seems to be broken in Rails 3. Or perhaps I am missing something else entirely. Ovid:lightserve dlipa$ gem list *** LOCAL GEMS *** .. acts_as_list (0.2.1) .. And Ovid:lightserve dlipa$ cat Gemfile ... gem "acts_as_list", "0.2.1" ... And Ovid:lightserve dlipa$ bundle install ... Using acts_as_list (0.2.1) Your bundle is updated! Use `bundle show [gemname]` to see where a bundled gem is installed But Ovid:lightserve dlipa$ r c RubyGems Environment: - RUBYGEMS VERSION: 1.6.1 - RUBY VERSION: 1.9.2 (2011-02-18 patchlevel 180) [x86_64-darwin10.6.0] - INSTALLATION DIRECTORY: /Users/dlipa/.rvm/gems/ruby-1.9.2-p180 - RUBY EXECUTABLE: /Users/dlipa/.rvm/rubies/ruby-1.9.2-p180/bin/ruby - EXECUTABLE DIRECTORY: /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/bin - RUBYGEMS PLATFORMS: - ruby - x86_64-darwin-10 - GEM PATHS: - /Users/dlipa/.rvm/gems/ruby-1.9.2-p180 - /Users/dlipa/.rvm/gems/ruby-1.9.2-p180@global - GEM CONFIGURATION: - :update_sources => true - :verbose => true - :benchmark => false - :backtrace => false - :bulk_threshold => 1000 - :sources => ["http://rubygems.org/", "http://gems.github.com"] - REMOTE SOURCES: - http://rubygems.org/ - http://gems.github.com Loading development environment (Rails 3.0.5) ruby-1.9.2-p180 :001 > require 'acts_as_list' LoadError: no such file to load -- acts_as_list from /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/gems/activesupport-3.0.5/lib/active_support/dependencies.rb:239:in `require' from /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/gems/activesupport-3.0.5/lib/active_support/dependencies.rb:239:in `block in require' from /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/gems/activesupport-3.0.5/lib/active_support/dependencies.rb:225:in `block in load_dependency' from /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/gems/activesupport-3.0.5/lib/active_support/dependencies.rb:596:in `new_constants_in' from /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/gems/activesupport-3.0.5/lib/active_support/dependencies.rb:225:in `load_dependency' from /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/gems/activesupport-3.0.5/lib/active_support/dependencies.rb:239:in `require' from (irb):1 from /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/gems/railties-3.0.5/lib/rails/commands/console.rb:44:in `start' from /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/gems/railties-3.0.5/lib/rails/commands/console.rb:8:in `start' from /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/gems/railties-3.0.5/lib/rails/commands.rb:23:in `<top (required)>' from script/rails:6:in `require' from script/rails:6:in `<main>' ruby-1.9.2-p180 :002 > And Ovid:lightserve dlipa$ irb ruby-1.9.2-p180 :001 > require 'acts_as_list' LoadError: no such file to load -- acts_as_list from /Users/dlipa/.rvm/rubies/ruby-1.9.2-p180/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require' from /Users/dlipa/.rvm/rubies/ruby-1.9.2-p180/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require' from (irb):1 from /Users/dlipa/.rvm/rubies/ruby-1.9.2-p180/bin/irb:16:in `<main>' ruby-1.9.2-p180 :002 > Can anyone explain why this might be happening? I'd really appreciate it! ** UPDATE -- Response to Andrew Marshall's suggestion** I changed Gemfile to read the gem directly from git, but it did not resolve the problem. Does this mean that there is a problem with this gem? 
The error message is not very helpful ;-) Removed: Ovid:lightserve dlipa$ bundle show acts_as_list Could not find gem 'acts_as_list' in the current bundle. Then added back via: gem "acts_as_list", :git => "git://github.com/vpereira/acts_as_list.git" Ovid:lightserve dlipa$ bundle install Updating git://github.com/vpereira/acts_as_list.git ... Same problem even though bundle show matches the commit on that page: Ovid:lightserve dlipa$ bundle show acts_as_list /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/bundler/gems/acts_as_list-4cb76a8b198c Ovid:lightserve dlipa$ irb ruby-1.9.2-p180 :001 > require 'acts_as_list' LoadError: no such file to load -- acts_as_list from /Users/dlipa/.rvm/rubies/ruby-1.9.2-.. I just looked in the gem and it appears there is no file called 'acts_as_list' in the gem. So it appears to be idiosyncratic, albeit poorly reported by Rails/Ruby. The API appears to have changed to: ruby-1.9.2-p180 :003 > require 'active_record/acts/list' => nil ruby-1.9.2-p180 :004 > ActiveRecord::Acts::List => ActiveRecord::Acts::List
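    Part of the confusion here is that a gem installed by Bundler from a :git source only lands on the load path when Bundler sets it up — a bare irb or gem never sees it — and, as discovered above, the require name has to match a file the gem actually ships. A small sketch for checking both, assuming you run it from the project directory (the project path is the hypothetical part):

        # Run Ruby through Bundler so git-sourced gems are on the load path.
        bundle exec ruby -e "require 'acts_as_list'; puts 'loaded'"

        # See which files the git checkout actually provides under lib/ --
        # that is what the require name has to match.
        ls "$(bundle show acts_as_list)/lib"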

    Read the article

  • Nunit test project won't compile under mono

    - by Quandary
    Question: I am implementing an open-source version of the Microsoft Sync Framework, [http://www.microsoft.com/downloads/d...isplaylang=en] (Checkout: git clone [email protected]:quandary/SyncFramework.git), but I ran into a unit-test problem. I added an NUnit test project to the C# project, but it's not compiling... I created the project on Windows, and there it compiled fine. On Linux, the project itself compiles fine, but when compiling the unit test I always get: Assembly 'nunit.framework, Version=2.5.5.10112, Culture=neutral, PublicKeyToken=96d09 My problem now is: I installed nunit and added a reference to it (on Linux with mono), but I still get this message. I took out 'Require Specific Version', and I still get this message... I also removed the Windows reference to nunit, but still no change.
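    One thing worth checking is which nunit.framework builds Mono actually has registered, since the project reference is pinned to the Windows 2.5.5.10112 assembly; a quick, hedged look (assuming gacutil and pkg-config are installed alongside mono):

        # List every nunit.framework assembly version registered in Mono's GAC.
        gacutil -l nunit.framework

        # Many distributions also expose Mono's bundled NUnit through pkg-config.
        pkg-config --list-all | grep -i nunit

    If the versions differ from the one in the project file, the reference on Linux has to point at an assembly that actually exists there.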

    Read the article

  • Pushing app to heroku problem

    - by Ryan Max
    Hi, I am trying to push my app to heroku and I get the following message:

        $ heroku create
        Creating electric-meadow-15..... done
        Created http://electric-meadow-15.heroku.com/ | [email protected]:electric-meadow-15.git
        $ git push heroku master
        ! No such app as fierce-fog-63
        fatal: The remote end hung up unexpectedly

    It's weird that I am getting this now; I have pushed the app to heroku many times without issue. The especially weird thing is that fierce-fog-63 is an old app that I made and deleted a long time ago. Why is heroku now trying to push to an app that doesn't exist anymore, especially when I have created a new one? Any suggestions?
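    The app that git push heroku talks to comes from the git remote named heroku, not from the most recent heroku create, so a stale remote will still point at the deleted app. A hedged sketch of checking and repointing it (the URL below uses the classic git@heroku.com:<app>.git form that heroku create prints; use exactly what your own output shows):

        # See which app each remote actually points at.
        git remote -v

        # Drop the stale remote and point it at the newly created app.
        git remote rm heroku
        git remote add heroku git@heroku.com:electric-meadow-15.git
        git push heroku master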

    Read the article

  • Linux periodically "losing" ability to connect to server via SSH?

    - by gct
    I know this isn't exactly a programming question, but it popped up in my use of git for programming projects at least. I've got a web server that I use to host my git repos on, but my ubuntu box seems to "lose" the ability to connect to it via SSH. I'll get a "connection refused" error when I try to ssh or use git. Rebooting my local machine will fix the problem, but only temporarily. I can still connect to the web interface just fine, and the problem manifests with other servers as well. I've been working around it by pulling my changes over to my laptop and pushing from there, but that's sub-optimal as you can imagine. Has anyone seen something like this? I'd be tempted to say it's some kind of IP caching problem, but I can't connect even using the IP address of the server directly... Running Ubuntu 9.04
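    A "connection refused" that clears up after a local reboot points at something on the client side (firewall state, a stale route, a proxy or VPN) rather than at sshd, so it helps to capture more detail the next time it happens. A few hedged diagnostics; the hostname and IP below are placeholders:

        # Verbose SSH output shows exactly where the connection attempt dies.
        ssh -vvv user@myserver.example.com

        # Check whether anything answers on port 22 at all, by name and by IP.
        nc -zv myserver.example.com 22
        nc -zv 203.0.113.10 22

        # Compare the route the kernel picks before and after a reboot clears the problem.
        ip route get 203.0.113.10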

    Read the article

  • Why am I not able to run rails tests

    - by dorelal
    This is what I did:

        > git clone git://github.com/rails/rails.git
        > cd rails
        > cd railties
        > rake

    And I got the following error:

        (in /Users/dorelal/dev/scratch/rails/railties)
        ./test/isolation/abstract_unit.rb:236:in `initialize': No such file or directory - /Users/dorelal/dev/scratch/rails/railties/tmp/app_template/config/boot.rb (Errno::ENOENT)
        from ./test/isolation/abstract_unit.rb:236:in `open'
        from ./test/isolation/abstract_unit.rb:236
        from ./test/isolation/abstract_unit.rb:222:in `initialize'
        from ./test/isolation/abstract_unit.rb:222:in `new'
        from ./test/isolation/abstract_unit.rb:222
        from test/application/configuration_test.rb:1:in `require'
        from test/application/configuration_test.rb:1
        rake aborted!

    I checked ~/railties/tmp and this directory is empty. I know rails is not broken. So what am I missing?

    Read the article

  • Capistrano 3, Rails 4, database configuration does not specify adapter

    - by Kazmin
    When I start cap production deploy it fails like this:

        DEBUG [4ee8fa7a] Command: cd /home/deploy/myapp/releases/releases/20131025212110 && (RVM_BIN_PATH=~/.rvm/bin RAILS_ENV= ~/.rvm/bin/myapp_rake assets:precompile )
        DEBUG [4ee8fa7a] rake aborted!
        DEBUG [4ee8fa7a] database configuration does not specify adapter

    You can see that "RAILS_ENV=" is actually empty, and I'm wondering why that might be happening. I assume this is the reason for the subsequent error that I don't have a database configuration. The deploy.rb file is below:

        set :application, 'myapp'
        set :repo_url, '[email protected]:developer/myapp.git'
        set :branch, :master
        set :deploy_to, '/home/deploy/myapp/releases'
        set :scm, :git
        set :devpath, "/home/deploy/myapp_development"
        set :user, "deploy"
        set :use_sudo, false
        set :default_env, { rvm_bin_path: '~/.rvm/bin' }
        set :keep_releases, 5

        namespace :deploy do
          desc 'Restart application'
          task :restart do
            on roles(:app), in: :sequence, wait: 5 do
              # Your restart mechanism here, for example:
              within release_path do
                execute " bundle exec thin restart -O -C config/thin/production.yml"
              end
            end
          end

          after :restart, :clear_cache do
            on roles(:web), in: :groups, limit: 3, wait: 10 do
              within release_path do
              end
            end
          end

          after :finishing, 'deploy:cleanup'
        end

    Read the article

  • Why is capistrano acting up like this?

    - by Matt
    I am having an issue with my deploy. I ran cap deploy and got this:

        Warning: Permanently added 'github.com,207.97.227.239' (RSA) to the list of known hosts.
        ** [174.143.150.79 :: out] Permission denied (publickey).
        ** fatal: The remote end hung up unexpectedly
        command finished
        *** [deploy:update_code] rolling back
        * executing "rm -rf /home/deploy/transprint/releases/20110105034446; true"
        servers: ["174.143.150.79"]
        [174.143.150.79] executing command

    Here is my deploy.rb:

        set :application, "transprint"
        set :domain, "174.149.150.79"
        set :user, "deploy"
        set :use_sudo, false
        set :scm, :git
        set :deploy_via, :remote_cache
        set :app_path, "production"
        set :rails_env, 'production'
        set :repository, "[email protected]:myname/something.git"
        set :scm_username, 'deploy'
        set :deploy_to, "/home/deploy/#{application}"
        role :app, domain
        role :web, domain
        role :db, domain, :primary => true

    Please help.
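    The "Permission denied (publickey)" comes from the deploy server failing to authenticate to GitHub, not from your workstation, so the usual remedies are a deploy key on the server or SSH agent forwarding. A rough way to confirm which one you need (the IP is taken from the output above, the GitHub test address is the standard one):

        # Does your local agent hold a key GitHub accepts?
        ssh-add -l
        ssh -T git@github.com

        # With agent forwarding, the deploy host should be able to reach GitHub too.
        ssh -A deploy@174.143.150.79 'ssh -T git@github.com'

    If the forwarded test works, adding ssh_options[:forward_agent] = true to deploy.rb (Capistrano 2 syntax) lets the deploy reuse your local key; otherwise generate a key on the server and add it to the repository as a deploy key.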

    Read the article

  • Github svn interface

    - by Fabio
    I recently moved a project to github, and I still use this library in some svn projects as an external. However, I can't get the github svn interface working at the moment. If I run svn list http://svn.github.com/fabn/zle.git I get this error:

        svn: Server sent unexpected return value (500 Internal Server Error) in response to PROPFIND request for '/fabn/zle.git/!svn/bc/0'

    If I run the same command against a fork of my project, the list (and also the checkout) works fine and I get what I need (this is the command: svn list http://svn.github.com/JellyBelly/zle.git). Is there anything I can do to resolve this issue, or is it a github problem?

    Read the article

  • Capistrano update causes C: to be placed in the current directory (cygwin)

    - by user321775
    When I run cap deploy:update in a directory on my local machine (via cygwin), "C:" magically appears in the directory. Sure enough, I can cd to it and it's my windows C: drive. Now I'm afraid to delete it, but I definitely don't want it in this directory (a rails project under /home/username/blah/blah). Here's my config/deploy.rb file:

        # custom options
        set :application, "xyz.com"
        set :repository, "ssh://[email protected]:yyyy/home/git/xxx"
        set :user, "myname"
        set :runner, user
        set :use_sudo, false
        server "xxx.xxx.xxx.xxx:yyyy", :app, :web, :db, :primary => true

        # deploy to
        set :deploy_to, "/home/myname/public_html/xyz"

        # repository
        set :scm, :git
        set :deploy_via, :copy

        # ssh options
        default_run_options[:pty] = true
        ssh_options[:paranoid] = false
        ssh_options[:port] = yyyy

        # start passenger
        namespace :deploy do
          task :start do ; end
          task :stop do ; end
          task :restart, :roles => :app, :except => { :no_release => true } do
            run "#{try_sudo} touch #{File.join(current_path,'tmp','restart.txt')}"
          end
        end

    Anyone see the problem? And does anyone know a safe way of getting rid of the C: drives that have already shown up (this has happened in a few directories)?

    Read the article

  • How to debug a .bash_profile

    - by Blankman
    I was updating my .bash_profile, and unfortunately I made a few updates and now I am getting:

        env: bash: No such file or directory
        env: bash: No such file or directory
        env: bash: No such file or directory
        env: bash: No such file or directory
        env: bash: No such file or directory
        -bash: tar: command not found
        -bash: grep: command not found
        -bash: cat: command not found
        -bash: find: command not found
        -bash: dirname: command not found
        -bash: /preexec.sh.lib: No such file or directory
        -bash: preexec_install: command not found
        -bash: sed: command not found
        -bash: git: command not found

    My bash_profile actually pulls in other .sh files (sources them), so I am not exactly sure which modification may have caused this. Now if I even try to list files, I get:

        >ls
        -bash: ls: command not found
        -bash: sed: command not found
        -bash: git: command not found

    Any tips on how to trace the source of the error, and how to be able to use the terminal for basic things like listing files?
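    Those errors are what a clobbered PATH looks like, so the first step is getting a usable shell back and then replaying the profile with tracing to find the offending sourced file. A minimal sketch, assuming a stock bash on macOS or Linux:

        # Absolute paths still work even when PATH is broken.
        /bin/ls
        export PATH=/usr/bin:/bin:/usr/sbin:/sbin   # temporary sane PATH for this session

        # Start a shell that skips the startup files entirely, then replay the
        # profile with an execution trace to see which sourced file breaks PATH.
        /bin/bash --noprofile --norc
        bash -x ~/.bash_profile 2>&1 | /usr/bin/less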

    Read the article

  • Every command fails with "command not found" after changing .bash_profile?

    - by Blankman
    I was updating my .bash_profile, and unfortunately I made a few updates and now I am getting:

        env: bash: No such file or directory
        env: bash: No such file or directory
        env: bash: No such file or directory
        env: bash: No such file or directory
        env: bash: No such file or directory
        -bash: tar: command not found
        -bash: grep: command not found
        -bash: cat: command not found
        -bash: find: command not found
        -bash: dirname: command not found
        -bash: /preexec.sh.lib: No such file or directory
        -bash: preexec_install: command not found
        -bash: sed: command not found
        -bash: git: command not found

    My bash_profile actually pulls in other .sh files (sources them), so I am not exactly sure which modification may have caused this. Now if I even try to list files, I get:

        >ls
        -bash: ls: command not found
        -bash: sed: command not found
        -bash: git: command not found

    Any tips on how to trace the source of the error, and how to be able to use the terminal for basic things like listing files?

    Read the article

  • Convert your Hash keys to object properties in Ruby

    - by kerry
    Being a Ruby noob (and having a background in Groovy), I was a little surprised that you cannot access hash values using dot notation. I am writing an application that relies heavily on XML and JSON data. This data will need to be displayed, and I would rather use book.author.first_name over book['author']['first_name']. A quick search on google yielded this post on the subject. So, taking the DRYOO (Don't Repeat Yourself Or Others) concept, I came up with this:

        class ::Hash

          # add keys to hash
          def to_obj
            self.each do |k,v|
              if v.kind_of? Hash
                v.to_obj
              end
              k = k.gsub(/\.|\s|-|\/|\'/, '_').downcase.to_sym
              self.instance_variable_set("@#{k}", v) ## create and initialize an instance variable for this key/value pair
              self.class.send(:define_method, k, proc{self.instance_variable_get("@#{k}")}) ## create the getter that returns the instance variable
              self.class.send(:define_method, "#{k}=", proc{|v| self.instance_variable_set("@#{k}", v)}) ## create the setter that sets the instance variable
            end
            return self
          end
        end

    This works pretty well. It converts each of your keys to properties of the Hash. However, it doesn't sit very well with me because I probably will not use 90% of the properties most of the time. Why should I go through the performance overhead of creating instance variables for all of the unused ones? Enter the 'magic method' #method_missing:

        class ::Hash
          def method_missing(name)
            return self[name] if key? name
            self.each { |k,v| return v if k.to_s.to_sym == name }
            super.method_missing name
          end
        end

    This is a much cleaner method for my purposes. Quite simply, it checks to see if there is a key with the given symbol, and if not, loops through the keys and attempts to find one. I am a Ruby noob, so if there is something I am overlooking, please let me know.

    Read the article

  • Windows Azure Evolution - Web Sites (aka Antares) Part 1

    - by Shaun
    This is the 3rd post in my Windows Azure Evolution series, focusing on the new features and enhancements that came along with the Windows Azure Platform Upgrade June 2012, announced at the MEET Windows Azure event on 7th June. In the first post I introduced the new preview developer portal and how it works with the existing features such as cloud services, storage and SQL databases. In the second one I talked about the Windows Azure .NET SDK 1.7 on the latest Visual Studio 2012 RC on Windows 8. From this one I will begin to introduce some new features. Now let's have a look at the first of them, Windows Azure Web Sites.

    Overview

    Windows Azure Web Sites (WAWS), also known as Antares, is a new feature still in the preview stage in this upgrade. It allows people to quickly and easily deploy websites to a highly scalable cloud environment, using the languages and open source apps of their choice, and to deploy through FTP, Git and TFS. It can also be integrated easily with Windows Azure services like SQL Database, Caching, CDN and Storage. After reading its introduction we may have a question: since we can deploy a website from both a cloud service web role and a web site, what's the difference between them? So, let's have a quick comparison.

        |                    | CLOUD SERVICE                                            | WEB SITE                              |
        | OS                 | Windows Server                                           | Windows Server                        |
        | Virtualization     | Windows Azure Virtual Machine                            | Windows Azure Virtual Machine         |
        | Host               | IIS                                                      | IIS                                   |
        | Platform           | ASP.NET WebForm, ASP.NET MVC, WCF                        | ASP.NET WebForm, ASP.NET MVC, PHP     |
        | Language           | C#, VB.NET                                               | C#, VB.NET, PHP                       |
        | Database           | SQL Database                                             | SQL Database, MySQL                   |
        | Architecture       | Multi-layered, background worker, message queuing, etc.  | Simple website with backend database  |
        | VS Project         | Windows Azure Cloud Service                              | ASP.NET Web Form, ASP.NET MVC, etc.   |
        | Out-of-box Gallery | (none)                                                   | Drupal, DotNetNuke, WordPress, etc.   |
        | Deployment         | Package upload, Visual Studio publish                    | FTP, Git, TFS, WebMatrix              |
        | Compute Mode       | Dedicated VM                                             | Shared across VMs, Dedicated VM       |
        | Scale              | Scale up, scale out                                      | Scale up, scale out                   |

    As you can see, there are many differences between the cloud service and the web site, but the main point is that the cloud service focuses on web applications with a complex architecture. For example, if you want to build a website with a frontend layer, a middle business layer and a data access layer, with some background worker processes connected through a message queue, then you had better use a cloud service, since it provides full control of your code and application. But if you just want to build a personal blog or a business portal, then you can use a web site. Since the web site has many galleries, you can create them even without any coding and configuration. David Pallmann has an awesome figure that explains the benefits of the cloud service, web site and virtual machine.

    Create a Personal Blog in Web Site from Gallery

    As I mentioned above, one of the big features in WAWS is building a website from an existing gallery, which means we don't need to code or configure anything. What we need to do is open the windows azure developer portal, click the NEW button, then select WEB SITE and FROM GALLERY. In the pop-up window there are many websites we can choose from. For example, for a personal blog there are Orchard CMS and WordPress; for a CMS there are DotNetNuke, Drupal 7 and mojoPortal. Let's select WordPress and click the next button. The next step is to configure the web site. We need to specify the DNS name and select the subscription and region. Since WordPress uses MySQL as its backend database, we need to create a MySQL database as well.
    Windows Azure Web Sites utilizes ClearDB to host the MySQL databases; you cannot create a MySQL database directly from the SQL Databases section. Finally, since we selected to create a new MySQL database, we need to specify the database name and region in the last step, and we need to accept ClearDB's terms as well. Then the windows azure platform will download the WordPress code and deploy the MySQL database and website, and it will be ready to use. Select the website and click the BROWSE button, and the WordPress administration page will be shown. After configuring WordPress, here is my personal blog on the cloud. It took me no more than 10 minutes to establish, without any coding.

    Monitor, Configure, Scale and Linked Resources

    Let's click into the website I had just created in the portal and have a look at what we can do. In the website details page there are five sections.

    - Dashboard: The overall information about this website, such as the basic usage status, public URL, compute mode, FTP address, subscription, and links where we can specify the deployment credentials, TFS and Git publish settings, etc.

    - Monitor: Status information such as CPU usage, memory usage, errors, etc. We can add more metrics by clicking the ADD METRICS button at the bottom as well.

    - Configure: Here we can set the configuration of our website, such as the .NET and PHP runtime versions, diagnostics settings, application settings and the IIS default documents.

    - Scale: This is something interesting. In WAWS there are two compute modes, also called web site modes. One is "shared", which means our website will be shared with other web sites in a group of windows azure virtual machines. Each web site has its own process (w3wp.exe) with some sandbox technology to isolate it from the others. When we need to scale out our web site in shared mode, we actually increase the worker process count. Hence in shared mode we cannot specify the virtual machine size, since the machines are shared across all web sites. This is a little bit different from the scaling mode of the cloud service (hosted service web role and worker role). The other mode is called "dedicated", which means our web site uses the whole windows azure virtual machine. This is the same hosting behavior as a cloud service web role: in a web role the application is deployed on the virtual machines we specified and all of them are used only by us; in the web site dedicated mode it's the same. In this mode, when we scale out our web site we use more virtual machines, and each of them will only host our own website. And we can specify the virtual machine size in this mode. In the developer portal we can select which mode we are using from the scale section. In shared mode we can only specify the instance count, but in dedicated mode we can specify the instance size as well as the instance count.

    - Linked Resources: The MySQL database created along with our WordPress web site is a linked resource. We can add more linked resources in this section.

    Pricing

    For the web site itself, since this feature is in its preview period, if you are using shared mode you get up to 10 web sites for free. But if you are using dedicated mode, the price is that of the virtual machines you are using. For example, if you are using dedicated mode and configured two medium-size virtual machines, then you will pay $230.40 per month. If there is a SQL Database linked to your web site, it will be charged separately based on the Pay-As-You-Go price.
    For example, a 1GB web edition database costs $9.99 per month. And the bandwidth will be charged as well; for example, 10GB of outbound data transfer costs $1.20 per month. For more information about the pricing please have a look at the windows azure pricing page.

    Summary

    Windows Azure Web Sites gives us an easier and quicker way to create, develop and deploy websites to the windows azure platform. Compared with the cloud service web role, WAWS has many out-of-the-box galleries we can use directly. So if you just want to build a blog, CMS or business portal, you don't need to learn ASP.NET, you don't need to learn how to configure DotNetNuke, and you don't need to learn how to prepare PHP and MySQL. By using a WAWS gallery you can establish a website within 10 minutes without a single line of code. But in some cases we do need to code ourselves. We may need to tweak the layout of our pages, or we may have a traditional ASP.NET or PHP web application which needs to be migrated to the cloud. Besides the gallery, WAWS also provides many features to download and upload code. It provides integration with version control services such as TFS and Git, and it provides deployment approaches through FTP and Web Deploy. In the next post I will demonstrate how to use WebMatrix to download and modify the website, and how to use TFS and Git to deploy automatically once our code changes are committed.

    Hope this helps, Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
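    For reference, the Git publishing flow mentioned above boils down to pushing at the remote URL the portal hands out once "Set up Git publishing" is done for the site. A rough sketch of that flow from a local working copy; the remote URL shape and credentials below are placeholders, so use exactly what the site's dashboard shows:

        cd mysite
        git init
        git add .
        git commit -m "first deploy"

        # The real remote URL comes from the site's dashboard after Git publishing is set up.
        git remote add azure https://deployuser@mysite.scm.azurewebsites.net/mysite.git
        git push azure master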

    Read the article

  • Meet the New Windows Azure

    - by ScottGu
    Today we are releasing a major set of improvements to Windows Azure. Below is a short summary of just a few of them.

    New Admin Portal and Command Line Tools

    Today's release comes with a new Windows Azure portal that will enable you to manage all features and services offered on Windows Azure in a seamless, integrated way. It is very fast and fluid, supports filtering and sorting (making it much easier to use for large deployments), works on all browsers, and offers a lot of great new features – including built-in VM, Web site, Storage, and Cloud Service monitoring support. The new portal is built on top of a REST-based management API within Windows Azure – and everything you can do through the portal can also be programmed directly against this Web API. We are also today releasing command-line tools (which like the portal call the REST Management APIs) to make it even easier to script and automate your administration tasks. We are offering both a Powershell (for Windows) and Bash (for Mac and Linux) set of tools to download. Like our SDKs, the code for these tools is hosted on GitHub under an Apache 2 license.

    Virtual Machines

    Windows Azure now supports the ability to deploy and run durable VMs in the cloud. You can easily create these VMs using a new Image Gallery built into the new Windows Azure Portal, or alternatively upload and run your own custom-built VHD images. Virtual Machines are durable (meaning anything you install within them persists across reboots) and you can use any OS with them. Our built-in image gallery includes both Windows Server images (including the new Windows Server 2012 RC) as well as Linux images (including Ubuntu, CentOS, and SUSE distributions). Once you create a VM instance you can easily Terminal Server or SSH into it in order to configure and customize the VM however you want (and optionally capture your own image snapshot of it to use when creating new VM instances). This provides you with the flexibility to run pretty much any workload within Windows Azure. The new Windows Azure Portal provides a rich set of management features for Virtual Machines – including the ability to monitor and track resource utilization within them. Our new Virtual Machine support also enables the ability to easily attach multiple data-disks to VMs (which you can then mount and format as drives). You can optionally enable geo-replication support on these – which will cause Windows Azure to continuously replicate your storage to a secondary data-center at least 400 miles away from your primary data-center as a backup. We use the same VHD format that is supported with Windows virtualization today (and which we've released as an open spec), which enables you to easily migrate existing workloads you might already have virtualized into Windows Azure. We also make it easy to download VHDs from Windows Azure, which also provides the flexibility to easily migrate cloud-based VM workloads to an on-premise environment. All you need to do is download the VHD file and boot it up locally, no import/export steps required.

    Web Sites

    Windows Azure now supports the ability to quickly and easily deploy ASP.NET, Node.js and PHP web-sites to a highly scalable cloud environment that allows you to start small (and for free) and then scale up as your traffic grows.
    You can create a new web site in Azure and have it ready to deploy to in under 10 seconds. The new Windows Azure Portal provides built-in administration support for Web sites – including the ability to monitor and track resource utilization in real-time. You can deploy to web-sites in seconds using FTP, Git, TFS and Web Deploy. We are also releasing tooling updates today for both Visual Studio and Web Matrix that enable developers to seamlessly deploy ASP.NET applications to this new offering. The VS and Web Matrix publishing support includes the ability to deploy SQL databases as part of web site deployment – as well as the ability to incrementally update database schema with a later deployment. You can integrate web application publishing with source control by selecting the "Set up TFS publishing" or "Set up Git publishing" links on a web-site's dashboard. Doing so will enable integration with our new TFS online service (which enables a full TFS workflow – including elastic build and testing support), or create a Git repository that you can reference as a remote and push deployments to. Once you push a deployment using TFS or Git, the deployments tab will keep track of the deployments you make, and enable you to select an older (or newer) deployment and quickly redeploy your site to that snapshot of the code. This provides a very powerful DevOps workflow experience.

    Windows Azure now allows you to deploy up to 10 web-sites into a free, shared/multi-tenant hosting environment (where a site you deploy will be one of multiple sites running on a shared set of server resources). This provides an easy way to get started on projects at no cost. You can then optionally upgrade your sites to run in a "reserved mode" that isolates them so that you are the only customer within a virtual machine. And you can elastically scale the amount of resources your sites use – allowing you to increase your reserved instance capacity as your traffic scales. Windows Azure automatically handles load balancing traffic across VM instances, and you get the same, super fast, deployment options (FTP, Git, TFS and Web Deploy) regardless of how many reserved instances you use. With Windows Azure you pay for compute capacity on a per-hour basis – which allows you to scale up and down your resources to match only what you need.

    Cloud Services and Distributed Caching

    Windows Azure also supports the ability to build cloud services that support rich multi-tier architectures, automated application management, and scale to extremely large deployments. Previously we referred to this capability as "hosted services" – with this week's release we are now referring to this capability as "cloud services". We are also enabling a bunch of new features with them.

    Distributed Cache

    One of the really cool new features being enabled with cloud services is a new distributed cache capability that enables you to use and set up a low-latency, in-memory distributed cache within your applications. This cache is isolated for use just by your applications, and does not have any throttling limits. This cache can dynamically grow and shrink elastically (without you having to redeploy your app or make code changes), and supports the full richness of the AppFabric Cache Server API (including regions, high availability, notifications, local cache and more).
    In addition to supporting the AppFabric Cache Server API, it also now supports the Memcached protocol – allowing you to point code written against Memcached at it (no code changes required). The new distributed cache can be set up to run in one of two ways:

    1) Using a co-located approach. In this option you allocate a percentage of memory in your existing web and worker roles to be used by the cache, and then the cache joins the memory into one large distributed cache. Any data put into the cache by one role instance can be accessed by other role instances in your application – regardless of whether the cached data is stored on it or another role. The big benefit with the "co-located" option is that it is free (you don't have to pay anything to enable it) and it allows you to use what might have been otherwise unused memory within your application VMs.

    2) Alternatively, you can add "cache worker roles" to your cloud service that are used solely for caching. These will also be joined into one large distributed cache ring that other roles within your application can access. You can use these roles to cache 10s or 100s of GBs of data in-memory very effectively – and the cache can be elastically increased or decreased at runtime within your application.

    New SDKs and Tooling Support

    We have updated all of the Windows Azure SDKs with today's release to include new features and capabilities. Our SDKs are now available for multiple languages, and all of the source in them is published under an Apache 2 license and maintained in GitHub repositories. The .NET SDK for Azure has in particular seen a bunch of great improvements with today's release, and now includes tooling support for both VS 2010 and the VS 2012 RC. We are also now shipping Windows, Mac and Linux SDK downloads for languages that are offered on all of these systems – allowing developers to develop Windows Azure applications using any development operating system.

    Much, Much More

    The above is just a short list of some of the improvements that are shipping in either preview or final form today – there is a LOT more in today's release. These include new Virtual Private Networking capabilities, new Service Bus runtime and tooling support, the public preview of the new Azure Media Services, new Data Centers, significantly upgraded network and storage hardware, SQL Reporting Services, new Identity features, support within 40+ new countries and territories, and much, much more. You can learn more about Windows Azure and sign up to try it for free at http://windowsazure.com. You can also watch a live keynote I'm giving at 1pm June 7th (later today) where I'll walk through all of the new features. We will be opening up the new features I discussed above for public usage a few hours after the keynote concludes. We are really excited to see the great applications you build with them.

    Hope this helps, Scott

    Read the article

  • Best solution for a team home server

    - by aliasbody
    I created a home server with Ubuntu 12.04 Server (using an old netbook with an Atom CPU and 512MB of RAM). The idea is for it to be used by a small team (maximum 10 people) that will have constant SSH access to the main projects and can add features with Git, and each member will also have their own directory (with a VirtualHost configured) for their own personal projects. Everything is configured and running, but my question is: what is the best setup here for everyone to work? Is it to have them all in the http group, so they have access as normal users to the /var/www folder (which also contains GitWeb and Drupal), or would it be better to create a new user named after the project (as an example), where only those with the password could work on it (configured with a VirtualHost)? Notice: the idea is to have 1 person directly responsible for the server (since he is the one hosting it), 2 more people who will have root access from their homes in order to configure anything remotely, plus anyone else who joins the group without any root access, just the access necessary to create personal work and to work with Git.
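    A common middle ground is a shared Unix group plus setgid directories, so everyone on the team can write to the shared tree and to group-shared Git repositories without extra root access. A minimal sketch; the group name, user name and paths are only examples:

        # One group for the whole team; add each member to it.
        sudo groupadd devteam
        sudo usermod -aG devteam alice

        # Shared web tree: group-owned, setgid so new files inherit the group.
        sudo chgrp -R devteam /var/www
        sudo chmod -R g+rwX /var/www
        sudo find /var/www -type d -exec chmod g+s {} \;

        # A central repository that all group members can push to.
        sudo install -d -g devteam -m 2775 /srv/git
        sudo -u alice git init --bare --shared=group /srv/git/project.git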

    Read the article

  • Help me set up samba and apache on my Ubuntu VM from Vista, starting from ping

    - by avastreg
    Ok, the title is not so clear after all, so let's start with the problem description, posting some points:
    - I'm on Win Vista
    - I have a VirtualBox Ubuntu 9.04 server (Virtual Machine) installed in Windows
    - I'm under Active Directory (maybe that helps), with network 192.168.2.x
    After the Ubuntu installation (LAMP), I have:
    - Ubuntu IP set to 10.0.2.15 (dhcp)
    - Vista pings Ubuntu and Ubuntu pings Vista (only IPs, not names)
    - Can't connect to Apache (default install ubuntu server) at the url h**p://10.0.2.15/
    - On Ubuntu, testing Apache by doing 'wget http://10.0.2.15/' works
    - Tried to set up samba, writing a share definition, but nothing: I can't access Ubuntu from Vista
    My goal is:
    - Setting up samba to work on files from Windows
    - Reaching apache to test web pages
    Ok, I'm not a complete noob (but I'm on the noob path anyway) and I've tried many solutions, so please try to help me; let's look together at what went wrong :)
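    The 10.0.2.15 address is VirtualBox's NAT network, which Vista cannot reach directly; switching the VM's adapter to Bridged (so it picks up a 192.168.2.x address) is the simplest route, or NAT port forwarding can be added instead. A hedged sketch of the port-forwarding route, assuming a reasonably recent VBoxManage and a VM named "Ubuntu" (adjust the name and ports to your setup):

        # Forward host ports to the guest's Apache and Samba ports (run with the VM powered off).
        VBoxManage modifyvm "Ubuntu" --natpf1 "http,tcp,,8080,,80"
        VBoxManage modifyvm "Ubuntu" --natpf1 "smb,tcp,,4445,,445"

        # Then browse http://localhost:8080/ from Vista and point the SMB client at port 4445.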

    Read the article

  • Identify my terminal session that started a particular process

    - by Sam
    I'm using Gnome on Ubuntu. I often have 8-20 terminal sessions open, and in some of them I have su'd to a different user. The specific problem that caused me to write this query happens when using git status, but this is a more general issue. git status will tell me I have an untracked file .foo.java.swp. This means that in one of my terminal sessions I have vi open on foo.java. I need a script or tool that would tell me in which terminal session that vi is running. I can do a "ps aux | grep vi" to pretty easily find the pid of the particular vi. It would be nice if the tool highlighted the terminal on my task bar in some way. Thanks. -Sam
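    Rather than grepping ps output, one sketch of a workflow is to ask which process holds the swap file open and then map that PID to its controlling terminal (the file name is the one from the question, the PID is a placeholder):

        # Which process has the swap file open?
        lsof .foo.java.swp            # or: fuser .foo.java.swp

        # Map that PID to its controlling terminal, user and command line.
        ps -o pid,tty,user,cmd -p 12345    # 12345 is the placeholder PID reported above

    The tty column (e.g. pts/3) identifies which terminal session owns the vi; highlighting that terminal on the task bar would still need a small script around wmctrl or similar.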

    Read the article

  • Jumping around to work on different features when you get stuck, is it a source of project failures?

    - by codecompleting
    On personal projects (or at work), if you get stuck on a problem, or are waiting to figure out a solution, and you jump to another section of your code, don't you think that's a good reason your application will be buggy or, worse yet, never get completed? Assuming you are not using git and coding each feature on a specific branch, things can get out of hand, since you have 3 different features you are working on and unresolved issues in each. So when you get down to work, you get stressed out because you have these hanging issues and half-baked code lingering about. What's the best way to avoid this problem (if you have it)? I'm guessing that using something like git and creating a branch per feature is the safest way to avoid this bad habit. Any other suggestions?
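    For what it's worth, the branch-per-feature habit the question guesses at is cheap to adopt; a minimal sketch with git (branch names are only examples):

        # Park the half-finished work before switching focus.
        git stash save "blocked on API question"

        # Each feature lives on its own branch, so unresolved work never mixes.
        git checkout -b feature/payments
        # ...hack, commit...
        git checkout master
        git checkout -b feature/search

    git stash list and git stash pop bring the parked work back when the blocking question is resolved.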

    Read the article

  • Is there a version control system that can show changes to a specific method or function?

    - by chesles
    Sometimes it would be nice to be able to say something like:

        (git|svn|hg|etc) diff Foo.c:main
        (git|svn|hg|etc) log Foo.c:main

    to see the changes made to a specific function within a source file since the last commit, or the complete history of changes. My question is two-fold: Does something exist that does this? Would such a tool be practical? It would have to do some simple parsing of the code at each revision in order to compare different versions of the function; would the overhead be too much for it to be efficient?
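    Git can already do a useful chunk of this: the -L option to git log follows a line range or a function (using its hunk-header heuristics) through history, and the pickaxe option finds commits whose diffs touch a given string. A sketch using the names from the question (the -L form needs git 1.8.4 or newer):

        # Full history of the main function in Foo.c.
        git log -L :main:Foo.c

        # Commits whose diffs add or remove occurrences of "main" in Foo.c.
        git log -p -S"main" -- Foo.c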

    Read the article

  • Release Note for 3/30/2012

    We have been pretty busy working on a new UI for CodePlex; I will have a preview post coming shortly. Here are the notes from today's release:
    - Updated source code tab to show Author and Committer for Git (thanks to Brad Wilson for reporting)
    - Fixed issue where pagination did not work correctly in topic view
    - Fixed issue where additional comments on a given line of code would get overridden for Git projects
    Have ideas on how to improve CodePlex? Visit our ideas page! Vote for your favorite ideas or submit a new one. Got Twitter? Follow us and keep apprised of the latest releases and service status at @codeplex.

    Read the article
