Search Results

Search found 2950 results on 118 pages for 'git submodules'.


  • Announcing: Great Improvements to Windows Azure Web Sites

    - by ScottGu
    I’m excited to announce some great improvements to the Windows Azure Web Sites capability we first introduced earlier this summer. Today’s improvements include: a new low-cost shared mode scaling option, support for custom domains with shared and reserved mode web-sites using both CNAMEs and A-records (the latter enabling naked domains), continuous deployment support using both CodePlex and GitHub, and FastCGI extensibility. All of these improvements are now live in production and available to start using immediately.

    New “Shared” Scaling Tier

    Windows Azure allows you to deploy and host up to 10 web-sites in a free, shared/multi-tenant hosting environment. You can start out developing and testing web sites at no cost using this free shared mode, and it supports the ability to run web sites that serve up to 165MB/day of content (5GB/month). All of the capabilities we introduced in June with this free tier remain the same with today’s update.

    Starting with today’s release, you can now elastically scale up your web-site beyond this capability using a new low-cost “shared” option (which we are introducing today) as well as using a “reserved instance” option (which we’ve supported since June). Scaling to either of these modes is easy: simply click on the “scale” tab of your web-site within the Windows Azure Portal, choose the scaling option you want to use with it, and then click the “save” button. Changes take only seconds to apply and do not require any code to be changed, nor the app to be redeployed.

    Below are some more details on the new “shared” option, as well as the existing “reserved” option.

    Shared Mode

    With today’s release we are introducing a new low-cost “shared” scaling mode for Windows Azure Web Sites. A web-site running in shared mode is deployed in a shared/multi-tenant hosting environment. Unlike the free tier, though, a web-site in shared mode has no quota/upper limit on the amount of bandwidth it can serve. The first 5 GB/month of bandwidth you serve with a shared web-site is free, and then you pay the standard “pay as you go” Windows Azure outbound bandwidth rate for outbound bandwidth above 5 GB.

    A web-site running in shared mode also now supports the ability to map multiple custom DNS domain names, using both CNAMEs and A-records, to it. The new A-record support we are introducing with today’s release provides the ability for you to support “naked domains” with your web-sites (e.g. http://microsoft.com in addition to http://www.microsoft.com). We will also in the future enable SNI-based SSL as a built-in feature with shared mode web-sites (this functionality isn’t supported with today’s release – but will be coming later this year to both the shared and reserved tiers).

    You pay for a shared mode web-site using the standard “pay as you go” model that we support with other features of Windows Azure (meaning no up-front costs, and you pay only for the hours that the feature is enabled). A web-site running in shared mode costs only 1.3 cents/hr during the preview (so on average $9.36/month).

    Reserved Instance Mode

    In addition to running sites in shared mode, we also support scaling them to run within a reserved instance mode. When running in reserved instance mode your sites are guaranteed to run isolated within your own Small, Medium or Large VM (meaning no other customers run within it). You can run any number of web-sites within a VM, and there are no quotas on CPU or memory limits.

    You can run your sites using either a single reserved instance VM, or scale up to have multiple instances of them (e.g. 2 medium-sized VMs, etc). Scaling up or down is easy – just select the “reserved” instance VM within the “scale” tab of the Windows Azure Portal, choose the VM size you want and the number of instances of it you want to run, and then click save. Changes take effect in seconds.

    Unlike shared mode, there is no per-site cost when running in reserved mode. Instead you pay only for the reserved instance VMs you use – and you can run any number of web-sites you want within them at no extra cost (e.g. you could run a single site within a reserved instance VM or 100 web-sites within it for the same cost). Reserved instance VMs start at 8 cents/hr for a small reserved VM.

    Elastic Scale-up/down

    Windows Azure Web Sites allows you to scale your capacity up or down within seconds. This allows you to deploy a site using the shared mode option to begin with, and then dynamically scale up to the reserved mode option only when you need to – without having to change any code or redeploy your application.

    If your site traffic starts to drop off, you can scale back down the number of reserved instances you are using, or scale down to the shared mode tier – all within seconds and without having to change code, redeploy, or adjust DNS mappings. You can also use the “Dashboard” view within the Windows Azure Portal to easily monitor your site’s load in real-time (it shows not only requests/sec and bandwidth but also stats like CPU and memory usage).

    Because of Windows Azure’s “pay as you go” pricing model, you only pay for the compute capacity you use in a given hour. So if your site is running most of the month in shared mode (at 1.3 cents/hr), but there is a weekend when it gets really popular and you decide to scale it up into reserved mode to have it run in your own dedicated VM (at 8 cents/hr), you only have to pay the additional pennies/hr for the hours it is running in the reserved mode. There is no upfront cost you need to pay to enable this, and once you scale back down to shared mode you return to the 1.3 cents/hr rate. This makes it super flexible and cost effective.

    Improved Custom Domain Support

    Web sites running in either “shared” or “reserved” mode support the ability to associate custom host names to them (e.g. www.mysitename.com). You can associate multiple custom domains to each Windows Azure Web Site. With today’s release we are introducing support for A-records (a big ask by many users).

    With the A-record support, you can now associate ‘naked’ domains to your Windows Azure Web Sites – meaning instead of having to use www.mysitename.com you can instead just have mysitename.com (with no sub-name prefix). Because you can map multiple domains to a single site, you can optionally enable both a www and a naked domain for a site (and then use a URL rewrite rule/redirect to avoid SEO problems).

    We’ve also enhanced the UI for managing custom domains within the Windows Azure Portal as part of today’s release. Clicking the “Manage Domains” button in the tray at the bottom of the portal now brings up custom UI that makes it easy to manage/configure them. As part of this update we’ve also made it significantly smoother/easier to validate ownership of custom domains, and made it easier to switch existing sites/domains to Windows Azure Web Sites with no downtime.
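
    As a concrete illustration of the DNS setup described above (record values are placeholders; the real CNAME target and virtual IP come from the portal's "Manage Domains" dialog), the two records and a quick shell check might look like:

        # Records to create at your DNS registrar (placeholder values):
        #   CNAME   www.mysitename.com  ->  mysitename.azurewebsites.net
        #   A       mysitename.com      ->  <virtual IP shown in the portal>
        dig +short www.mysitename.com CNAME    # verify the CNAME is in place
        dig +short mysitename.com A            # verify the naked-domain A record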
    Continuous Deployment Support with Git and CodePlex or GitHub

    One of the more popular features we released earlier this summer was support for publishing web sites directly to Windows Azure using source control systems like TFS and Git. This provides a really powerful way to manage your application deployments using source control, and it is really easy to enable from a web site’s dashboard page.

    The TFS option we shipped earlier this summer provides a very rich continuous deployment solution that enables you to automate builds and run unit tests every time you check in your web-site, and then, if they are successful, automatically publish to Azure. With today’s release we are expanding our Git support to also enable continuous deployment scenarios and integrate with projects hosted on CodePlex and GitHub. This support is enabled with all web-sites (including those using the “free” scaling mode).

    Starting today, when you choose the “Set up Git publishing” link on a web site’s “Dashboard” page you’ll see two additional options show up when Git-based publishing is enabled for the web-site. You can click on either the “Deploy from my CodePlex project” link or the “Deploy from my GitHub project” link to walk through a simple workflow that configures a connection between your web site and a source repository you host on CodePlex or GitHub. Once this connection is established, CodePlex or GitHub will automatically notify Windows Azure every time a check-in occurs. This will then cause Windows Azure to pull the source and compile/deploy the new version of your app automatically. The two videos below walk through how easy it is to enable this workflow and deploy both an initial app and then make a change to it:

    Enabling Continuous Deployment with Windows Azure Websites and CodePlex (2 minutes)
    Enabling Continuous Deployment with Windows Azure Websites and GitHub (2 minutes)

    This approach enables a really clean continuous deployment workflow, and makes it much easier to support a team development environment using Git. Note: today’s release supports establishing connections with public GitHub/CodePlex repositories. Support for private repositories will be enabled in a few weeks.

    Support for multiple branches

    Previously, we only supported deploying from the Git ‘master’ branch. Often, though, developers want to deploy from alternate branches (e.g. a staging or future branch). This is now a supported scenario – both with standalone Git-based projects, as well as ones linked to CodePlex or GitHub.

    This enables a variety of useful scenarios. For example, you can now have two web-sites – a “live” and a “staging” version – both linked to the same repository on CodePlex or GitHub. You can configure one of the web-sites to always pull whatever is in the master branch, and the other to pull what is in the staging branch. This enables a really clean way to enable final testing of your site before it goes live. This 1 minute video demonstrates how to configure which branch to use with a web-site.
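
    For standalone Git publishing, the workflow described above reduces to a handful of commands. A minimal sketch, with a placeholder remote URL (the real one is shown on the site's Dashboard once Git publishing is enabled):

        git init
        git add -A
        git commit -m "initial version"
        git remote add azure https://mysite.scm.azurewebsites.net/mysite.git   # placeholder URL
        git push azure master              # deploys the master branch
        git push azure staging:master      # hypothetical: deploy the contents of a local 'staging' branch instead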
    Summary

    The above features are all now live in production and available to use immediately. If you don’t already have a Windows Azure account, you can sign up for a free trial and start using them today. Visit the Windows Azure Developer Center to learn more about how to build apps with it.

    We’ll have even more new features and enhancements coming in the weeks ahead – including support for the recent Windows Server 2012 and .NET 4.5 releases (we will enable new web and worker role images with Windows Server 2012 and .NET 4.5 next month). Keep an eye out on my blog for details as these new features become available.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Version control for game development - issues and solutions?

    - by Cyclops
    There are a lot of version control systems available, including open-source ones such as Subversion, Git, and Mercurial, plus commercial ones such as Perforce. How well do they support the process of game development? What are the issues with using a VCS with regard to non-text (binary) files, large projects, and so on? What are the solutions to these problems, if any? To keep the Answers organized, let's try one per package; update each package's Answer with your results. Also, please list some brief details in your answer about whether your VCS is free or commercial, distributed versus centralized, etc.

    Update: Found a nice article comparing two of the VCSs below - apparently, Git is MacGyver and Mercurial is Bond. Well, I'm glad that's settled... And the author has a nice quote at the end: "It’s OK to proselytize to those who have not switched to a distributed VCS yet, but trying to convert a Git user to Mercurial (or vice-versa) is a waste of everyone’s time and energy. Especially since Git and Mercurial's real enemy is Subversion." Dang, it's a code-eat-code world out there in FOSS-land...

    Read the article

  • Does (should?) changing the URI scheme name change the semantics?

    - by Doug
    If we take: http://example.com/foo is it fair to say that: ftp://example.com/foo .. points to the same resource, just using a different mechanism for resolving it (and of course possibly a different representation, but perhaps not)? This came to light in a discussion we were having surrounding some internal tooling with Git. We have to process some Git repositories, and they come to us as "git@{authority}/{path}", however the library we're using to interface with them doesn't support the git protocol. I suggested that we should make the service robust, in that it tries to use HTTP or SSH, in essence discovering what protocols/schemes are supported for resolving the repository at {path} under each {authority}. This was met with some criticism: "We don't know if that's the same repository." My response was: "It had better be!"

    Looking at RFC 3986, I see this excerpt:

        URI "resolution" is the process of determining an access mechanism and the appropriate parameters necessary to dereference a URI; this resolution may require several iterations. To use that access mechanism to perform an action on the URI's resource is to "dereference" the URI.

    Which makes me think that the resolution process is permitted to try different protocols, because:

        Although many URI schemes are named after protocols, this does not imply that use of these URIs will result in access to the resource via the named protocol.

    The only concern I have, I guess, is that I only see reference to the notion of changing protocols when it comes to traversing relationships:

        it is possible for a single set of hypertext documents to be simultaneously accessible and traversable via each of the "file", "http", and "ftp" schemes if the documents refer to each other with relative references.

    I'm inclined to think I'm wrong in my initial beliefs, because the Normalization and Comparison section of said RFC doesn't mention any way of treating two URIs as equivalent if they use different schemes. It seems like schemes named/based on IP protocols ought to have this notion, at least?

    Read the article

  • VCS for single user using file sync service

    - by StackUnder
    I'm trying to set up version control for my one-man project. My project files are kept in sync by Live Mesh (though I could just as well be using Dropbox) between my laptop, my home PC and my office PC. I'm now using NetBeans with local file history. Sometimes it helps to revert to a previous state of one file, but imagine a situation where multiple files have problems. Correct me if I'm wrong, but I would have to go to every file and revert it to a previous "safe" state. I don't like this approach, so I'm considering setting up real version control, choosing between SVN and Git. I have some previous experience with SVN (TortoiseSVN), and I know that I can create a file:// repo. So what I want to do is set up a VCS inside my synced folder, just to have the ability to "revert" to a previous version if something goes wrong. Since everything is synced to all computers, I would never need to run an update. The file tree organization would be the following: C:...\SyncedFolder\MyProject\ Inside the MyProject folder are all the project files, plus a directory that holds the SVN or Git info of my project (the repo/master). What VCS is best for this situation: SVN or Git? Does SVN need to store all files from the HEAD revision, thus "duplicating" my whole project inside my synced folder? Does Git eliminate this problem? Is this the best approach?
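
    A minimal sketch of what that could look like with Git (assuming the synced folder itself is the working copy; Git keeps a single .git directory at the repository root, so there are no per-directory copies the way older SVN working copies had):

        cd ~/SyncedFolder/MyProject        # or C:\...\SyncedFolder\MyProject on Windows
        git init
        git add -A
        git commit -m "baseline"
        # later, when several files go wrong at once, roll them all back together:
        git checkout -- .                  # discard uncommitted changes to all tracked files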

    Read the article

  • Unable to use factory girl with Cucumber and rails 3 (bundler problem)

    - by jbpros
    Hi there, I'm trying to run Cucumber features with factory_girl factories on a fresh Rails 3 application. Here is my Gemfile:

        source "http://gemcutter.org"
        gem "rails", "3.0.0.beta"
        gem "pg"
        gem "factory_girl", :git => "git://github.com/thoughtbot/factory_girl.git", :branch => "rails3"
        gem "rspec-rails", ">= 2.0.0.beta.4"
        gem "capybara"
        gem "database_cleaner"
        gem "cucumber-rails", :require => false

    The bundle install command then runs smoothly:

        $ bundle install
        /usr/lib/ruby/gems/1.8/gems/bundler-0.9.3/lib/bundler/installer.rb:81:Warning: Gem::Dependency#version_requirements is deprecated and will be removed on or after August 2010. Use #requirement
        Updating git://github.com/thoughtbot/factory_girl.git
        Fetching source index from http://gemcutter.org
        Resolving dependencies
        Installing abstract (1.0.0) from system gems
        Installing actionmailer (3.0.0.beta) from system gems
        Installing actionpack (3.0.0.beta) from system gems
        Installing activemodel (3.0.0.beta) from system gems
        Installing activerecord (3.0.0.beta) from system gems
        Installing activeresource (3.0.0.beta) from system gems
        Installing activesupport (3.0.0.beta) from system gems
        Installing arel (0.2.1) from system gems
        Installing builder (2.1.2) from system gems
        Installing bundler (0.9.13) from system gems
        Installing capybara (0.3.6) from system gems
        Installing cucumber (0.6.3) from system gems
        Installing cucumber-rails (0.3.0) from system gems
        Installing culerity (0.2.9) from system gems
        Installing database_cleaner (0.5.0) from system gems
        Installing diff-lcs (1.1.2) from system gems
        Installing erubis (2.6.5) from system gems
        Installing factory_girl (1.2.3) from git://github.com/thoughtbot/factory_girl.git (at rails3)
        Installing ffi (0.6.3) from system gems
        Installing i18n (0.3.6) from system gems
        Installing json_pure (1.2.3) from system gems
        Installing mail (2.1.3) from system gems
        Installing memcache-client (1.7.8) from system gems
        Installing mime-types (1.16) from system gems
        Installing nokogiri (1.4.1) from system gems
        Installing pg (0.9.0) from system gems
        Installing polyglot (0.3.0) from system gems
        Installing rack (1.1.0) from system gems
        Installing rack-mount (0.4.7) from system gems
        Installing rack-test (0.5.3) from system gems
        Installing rails (3.0.0.beta) from system gems
        Installing railties (3.0.0.beta) from system gems
        Installing rake (0.8.7) from system gems
        Installing rspec (2.0.0.beta.4) from system gems
        Installing rspec-core (2.0.0.beta.4) from system gems
        Installing rspec-expectations (2.0.0.beta.4) from system gems
        Installing rspec-mocks (2.0.0.beta.4) from system gems
        Installing rspec-rails (2.0.0.beta.4) from system gems
        Installing selenium-webdriver (0.0.17) from system gems
        Installing term-ansicolor (1.0.5) from system gems
        Installing text-format (1.0.0) from system gems
        Installing text-hyphen (1.0.0) from system gems
        Installing thor (0.13.4) from system gems
        Installing treetop (1.4.4) from system gems
        Installing tzinfo (0.3.17) from system gems
        Installing webrat (0.7.0) from system gems
        Your bundle is complete!

    When I run cucumber, here is the error I get:

        $ rake cucumber
        (in /home/jbpros/projects/deorbitburn)
        /usr/lib/ruby/gems/1.8/gems/bundler-0.9.3/lib/bundler/resolver.rb:97:Warning: Gem::Dependency#version_requirements is deprecated and will be removed on or after August 2010. Use #requirement
        NOTICE: CREATE TABLE will create implicit sequence "posts_id_seq" for serial column "posts.id"
        NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "posts_pkey" for table "posts"
        /usr/bin/ruby1.8 -I "/usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/lib:lib" "/usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/bin/cucumber" --profile default
        Using the default profile...
        git://github.com/thoughtbot/factory_girl.git (at rails3) is not checked out. Please run `bundle install` (Bundler::PathError)
        /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/source.rb:282:in `load_spec_files'
        /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/source.rb:190:in `local_specs'
        /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:36:in `runtime_gems'
        /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:35:in `each'
        /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:35:in `runtime_gems'
        /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/index.rb:5:in `build'
        /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:34:in `runtime_gems'
        /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:14:in `index'
        /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/index.rb:5:in `build'
        /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:13:in `index'
        /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:55:in `resolve_locally'
        /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:28:in `specs'
        /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:65:in `specs_for'
        /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:23:in `requested_specs'
        /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/runtime.rb:18:in `setup'
        /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler.rb:68:in `setup'
        /home/jbpros/projects/deorbitburn/config/boot.rb:7
        /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
        /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `polyglot_original_require'
        /usr/lib/ruby/gems/1.8/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
        /home/jbpros/projects/deorbitburn/config/application.rb:1
        /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
        /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `polyglot_original_require'
        /usr/lib/ruby/gems/1.8/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
        /home/jbpros/projects/deorbitburn/config/environment.rb:2
        /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
        /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `polyglot_original_require'
        /usr/lib/ruby/gems/1.8/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
        /home/jbpros/projects/deorbitburn/features/support/env.rb:8
        /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
        /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `polyglot_original_require'
        /usr/lib/ruby/gems/1.8/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
        /usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/rb_support/rb_language.rb:124:in `load_code_file'
        /usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/step_mother.rb:85:in `load_code_file'
        /usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/step_mother.rb:77:in `load_code_files'
        /usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/step_mother.rb:76:in `each'
        /usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/step_mother.rb:76:in `load_code_files'
        /usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/cli/main.rb:48:in `execute!'
        /usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/cli/main.rb:20:in `execute'
        /usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/bin/cucumber:8
        rake aborted!
        Command failed with status (1): [/usr/bin/ruby1.8 -I "/usr/lib/ruby/gems/1....]
        (See full trace by running task with --trace)

    Do I have to do something special for Bundler to check out factory_girl's repository on GitHub?
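
    Worth noting: the trace shows two Bundler versions in play (0.9.3 and 0.9.13), and the git-sourced gem is found at install time but not at run time. A commonly suggested workaround for this class of error (an assumption, not a verified fix) is to run the task inside the bundled environment so the locked gem paths, including git checkouts, are visible:

        gem install bundler            # make sure a single, up-to-date Bundler wins
        bundle install
        bundle exec rake cucumber      # run the task within the bundle's environment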

    Read the article

  • Is there a major downside to using .htaccess files in your svn/git repository?

    - by Rob
    If our .htaccess files are purely for mod rewrites, is there a security / development downside to committing .htaccess files alongside other files in your repository? For various reasons (our SEO optimisers like to add pretty urls as new promotions occur, etc) we need a fair few rewrite rules inside these files. Would I be better off pushing the routing into php-land and dealing with it there? Or is reading from a .htaccess via apache fine? The .htaccess files are not exposed via the web server, so that's not a security risk.
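
    For reference, a rewrite-only .htaccess of the kind described is tiny and diffs well, which is one argument for keeping it in the repository. A sketch with an illustrative rule:

        # illustrative pretty-URL rule of the kind SEO folks tend to request
        cat > .htaccess <<'EOF'
        RewriteEngine On
        RewriteRule ^promo/([a-z0-9-]+)/?$ index.php?route=promo&slug=$1 [L,QSA]
        EOF
        git add .htaccess
        git commit -m "Track rewrite rules alongside the code"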

    Read the article

  • Correct way to protect a private API key when versioning a python application on a public git repo

    - by systempuntoout
    I would like to open-source a Python project on GitHub, but it contains an API key that should not be distributed. I guess there's something better than removing the key each time a push is committed to the repo. Imagine a simplified foomodule.py:

        import urllib2

        API_KEY = 'XXXXXXXXX'
        urllib2.urlopen("http://example.com/foo?id=123%s" % API_KEY).read()

    What I'm thinking is:

    1. Move API_KEY into a second key.py module, import it from foomodule.py, and then add key.py to the .gitignore file.
    2. Same as 1, but using ConfigParser.

    Do you know a good programmatic way to handle this scenario?
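
    A minimal sketch of option 1 from the version-control side (key.py is the hypothetical module name from the question):

        echo "key.py" >> .gitignore        # never commit the real key module
        git rm --cached key.py             # stop tracking it if it was already committed
        cat > key.py.example <<'EOF'       # commit a placeholder template instead
        API_KEY = 'PUT-YOUR-KEY-HERE'
        EOF
        git add .gitignore key.py.example
        git commit -m "Keep the real API key out of the repo"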

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure. Today’s new capabilities include:

    • Storage: Import/Export Hard Disk Drives to your Storage Accounts
    • HDInsight: General Availability of our Hadoop Service in the cloud
    • Virtual Machines: New VM Gallery, ACL support for VIPs
    • Web Sites: WebSocket and Remote Debugging Support
    • Notification Hubs: Segmented customer push notification support with tag expressions
    • TFS & Git: Continuous Delivery Support for Web Sites + Cloud Services
    • Developer Analytics: New Relic support for Web Sites + Mobile Services
    • Service Bus: Support for partitioned queues and topics
    • Billing: New Billing Alert Service that sends email notifications when your bill hits a threshold you define

    All of these improvements are now available to use immediately (note that some features are still in preview). Below are more details about them.

    Storage: Import/Export Hard Disk Drives to Windows Azure

    I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we’ll automatically transfer the data to or from your Windows Azure Storage account. This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth).

    Encrypted Transport

    Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send them, and not have to worry about the data being compromised even if a disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it). The drive preparation tool we are shipping today makes setting up BitLocker encryption on these hard drives easy.

    How to Import/Export your first Hard Drive of Data

    You can read our Getting Started Guide to learn more about how to begin using the import/export service. You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs.

    It is really easy to create a new import or export job using the Windows Azure Management Portal. Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don’t have this tab make sure to sign up for the Import/Export preview). Then click the “Create Import Job” or “Create Export Job” commands at the bottom of it. This will launch a wizard that easily walks you through the steps required.

    For more comprehensive information about Import/Export, refer to the Windows Azure Storage team blog. You can also send questions and comments to the [email protected] email address.

    We think you’ll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects. We hope you like it.

    HDInsight: 100% Compatible Hadoop Service in the Cloud

    Last week we announced the general availability release of Windows Azure HDInsight. HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure. This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios.

    HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running, this provides a super cost-effective way to use them). You can create Hadoop clusters using either the Windows Azure Management Portal or our PowerShell and cross-platform command line tools.

    The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data. We’ve also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs. You can find out more about how to get started with HDInsight here.

    Virtual Machines: VM Gallery Enhancements

    Today’s update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud. You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal. The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use:

    • Search: You can now easily search and filter images using the search box in the top-right of the dialog. For example, simply type “SQL” and we’ll filter to show those images in the gallery that contain that substring.
    • Category Tree-view: Each month we add more built-in VM images to the gallery. You can continue to browse these using the “All” view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog. For example, by selecting “Oracle” in the tree-view you can quickly filter to see the official Oracle-supplied images.
    • MSDN and Supported checkboxes: With today’s update we are also introducing filters that make it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing – you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners.
    • Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we’re providing some additional sort options, like “Newest,” to customize the image list for what suits you best.
    • Pricing information: We now provide additional pricing information about images and options on how to cost-effectively run them directly within the VM Gallery.

    The above improvements make it even easier to use the VM Gallery and quickly create, launch and run Virtual Machines in the cloud.

    Virtual Machines: ACL Support for VIPs

    A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today’s release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance.

    This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM’s network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren’t on a virtual network you could alternatively limit traffic from public IPs that can access your workloads.

    Here are the default behaviors for ACLs in Windows Azure:

    • By default (i.e. no rules specified), all traffic is permitted.
    • When using only Permit rules, all other traffic is denied.
    • When using only Deny rules, all other traffic is permitted.
    • When there is a combination of Permit and Deny rules, all other traffic is denied.

    Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level. So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or the REST API, be sure to also configure your guest VM firewall appropriately as well.

    Web Sites: Web Sockets Support

    With today’s release you can now use Web Sockets with Windows Azure Web Sites. This feature enables you to easily integrate real-time communication scenarios within your web-based applications, and is available at no extra charge (it even works with the free tier). Higher-level programming libraries like SignalR and socket.io are also now supported with it.

    You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to “on”. Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications. Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it.

    Web Sites: Remote Debugging Support

    The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today’s Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud.

    Remote Debugging of a Windows Azure Web Site using VS 2013

    Enabling remote debugging of a Windows Azure Web Site using VS 2013 is really easy. Start by opening up your web application’s project within Visual Studio. Then navigate to the “Server Explorer” tab within Visual Studio, and click on the deployed web site you want to debug that is running within Windows Azure, using the Windows Azure->Web Sites node in the Server Explorer. Then right-click and choose the “Attach Debugger” option on it.

    When you do this, Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure. The debugger will then stop the web site’s execution when it hits any breakpoints that you have set within your web application’s project inside Visual Studio. For example, below I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template. When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio.

    Note that we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property. When we click the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser. This makes it super easy to debug web apps remotely.

    Tips for Better Debugging

    To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site, which will enable a richer debug experience within Visual Studio. You can find this option on the Web Publish dialog on the Settings tab.

    When you ultimately deploy/run the application in production we recommend using the “Release” configuration setting – the release configuration is memory-optimized and will provide the best production performance. To learn more about diagnosing and debugging Windows Azure Web Sites, read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide.

    Notification Hubs: Segmented Push Notification support with tag expressions

    In August we announced the General Availability of Windows Azure Notification Hubs – a powerful Mobile Push Notifications service that makes it easy to send high-volume push notifications with low latency from any mobile app back-end. Notification Hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises.

    Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call respectively.

    New support for using tag expressions to enable advanced customer segmentation

    With today’s release we are adding support for even more advanced customer targeting. You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step further and reach more granular segments. This opens up a variety of scenarios, for example:

    • Offers based on multiple preferences – e.g. send a game-day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian
    • Push content to multiple segments in a single message – e.g. rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinals fan
    • Avoid presenting subsets of a segment with irrelevant content – e.g. a season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder

    To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the Red Sox or the Cardinals. This can be done directly in your server back-end send logic using the code below:

        var notification = new WindowsNotification(messagePayload);
        hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston");

    In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!). Some other cool use cases for tag expressions that are now supported include:

    • Social: to “all my group except me” – group:id && !user:id
    • Events: a touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 …
    • Hours: send notifications at specific times, e.g. tag devices with a time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood
    • Versions and platforms: send a reminder to people still using your first version for Android – version:1.0 && platform:Android

    For help on getting started with Notification Hubs, visit the Notification Hubs documentation center. Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions. They are really powerful and enable a bunch of great new scenarios.

    TFS & Git: Continuous Delivery Support for Web Sites + Cloud Services

    With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services. Team Foundation Services is a cloud-based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support. It makes it really easy to set up a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio.

    With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git-based repositories hosted using Team Foundation Services. This enables a workflow where, when code is checked in, built successfully on an automated build server, and all tests pass on it, the app can automatically be deployed on Windows Azure with zero manual intervention or work required. The walkthrough below demonstrates how to quickly set up a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services.

    Enabling Continuous Delivery to Windows Azure with Team Foundation Services

    The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services. I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it. I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks). Using VS 2013 I can also set up automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository.

    The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings. This build server (and automated testing) support now works with both TFS and Git-based source control repositories.

    Connecting a Team Foundation Services project to Windows Azure

    Once I have a source repository hosted in Team Foundation Services with automated builds and testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the build + tests pass). Enabling this is now really easy.

    To set this up with a Windows Azure Web Site, simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal. I gave the web site a name and then made sure the “Publish from source control” checkbox was selected. When we click next we’ll be prompted for the location of the source repository, and we’ll select “Team Foundation Services”. Once we do this we’ll be prompted for the Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”).

    When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permission to connect to the Team Foundation Services account. Once we do this we’ll be prompted to pick the source repository we want to connect to. Starting with today’s Windows Azure release you can now connect to both TFS and Git-based source repositories. This new support allows me to connect to the “SimpleContinuousDeploymentTest” repository we created earlier.

    Clicking the finish button will then create the Web Site with the continuous delivery hooks set up with Team Foundation Services. Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution, and, if they pass, the app will be automatically deployed to our Web Site in Windows Azure. You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site. This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way.

    Developer Analytics: New Relic support for Web Sites + Mobile Services

    With today’s Windows Azure release we are making it really easy to enable developer analytics and monitoring support with both Windows Azure Web Sites and Windows Azure Mobile Services. We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this – and we have updated the Windows Azure Management Portal to make it really easy to configure.

    Enabling New Relic with a Windows Azure Web Site

    Enabling New Relic support with a Windows Azure Web Site is now really easy. Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it. Clicking the “add-on” button will display some additional UI. If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain a subscription (note: New Relic has a perpetually free tier, so you can enable it even without paying anything).

    Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal. You can use this to browse a variety of great add-on services – including New Relic. Select “New Relic” within the dialog, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase. For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever.

    Once we’ve signed up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s Configure tab and choose to use the New Relic add-on with our Windows Azure Web Site. We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated within it once we have a New Relic subscription in our account). Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site.

    Deploying the New Relic Agent as part of a Web Site

    The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app. We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu. This will bring up the NuGet package manager. You can search for “New Relic” within it to find the New Relic agent. Note that there is both a 32-bit and a 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal).

    Once we install the NuGet package we are all set to go. We’ll simply re-publish the web site to Windows Azure, and New Relic will then automatically start monitoring the application.

    Monitoring a Web Site using New Relic

    Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring its health. We can do this by clicking on the “Add Ons” tab in the left-hand side of the Windows Azure Management Portal, then selecting the New Relic add-on we signed up for within it. The Windows Azure Management Portal will provide some default information about the add-on when we do this. Clicking the “Manage” button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account.

    We can now see insights into how our app is performing – without having written a single line of monitoring code. The New Relic service provides a ton of great built-in monitoring features, allowing us to quickly see:

    • Performance times (including browser rendering speed) for the overall site and individual pages. You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify.
    • Information about where in the world your customers are hitting the site from (and how performance varies by region)
    • Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc)
    • Error information, including call stack details for exceptions that have occurred at runtime
    • SQL Server profiling information – including which queries executed against your database and what their performance was
    • And a whole bunch more…

    The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more). The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary. This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort.

    If you haven’t tried New Relic out yet with Windows Azure, I recommend you do so – I think you’ll find it helps you build even better cloud applications. Following the above steps will help you get started and deliver a really good application monitoring solution in only minutes.

    Service Bus: Support for partitioned queues and topics

    With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning enables you to achieve higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic; the multiple messaging stores also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic. Read this article to learn more about partitioned queues and topics and how to take advantage of them today.

    Billing: New Billing Alert Service

    Today’s Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure. This makes it easier to manage your bill and avoid potential surprises at the end of the month.

    With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total. To set up an alert, first sign up for the free Billing Alert Service Preview. Then visit the account management page, click on a subscription you have set up, and then navigate to the new Alerts tab that is available. The Alerts tab allows you to set up email alerts that will be sent automatically once a certain threshold is hit. For example, by clicking the “add alert” button I can set up a rule to send myself email anytime my Windows Azure bill goes above $100 for the month.

    The Billing Alert Service will evolve to support additional aspects of your bill, as well as multiple forms of alerts such as SMS. Try out the new Billing Alert Service Preview today and give us feedback.

    Summary

    Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Taglist: Failed to generate tags for macvim [migrated]

    - by Mohit Jain
    Whenever I try to open a file in my Rails project using MacVim, I get an error: "Taglist: Failed to generate tags for ...". But it works perfectly in terminal Vim. Why is this happening? I am a newbie and just installed everything using this dotvim repo. I installed ctags using these commands that I got from this git:

        $ ctags -R --exclude=.git --exclude=log *
        ctags: illegal option -- R
        usage: ctags [-BFadtuwvx] [-f tagsfile] file ...
        # you need to get new ctags, i recommend homebrew but anything will work
        $ brew install ctags
        # alias ctags if you used homebrew
        $ alias ctags="`brew --prefix`/bin/ctags"
        # try again!
        ctags -R --exclude=.git --exclude=log *

    Running which ctags in the terminal returns the same path as running it from vim or gvim using ! (bang):

        /usr/bin/ctags

    Can anyone help me?
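
    One likely cause (an assumption, since GUI apps on OS X don't inherit shell aliases) is that MacVim is still invoking the BSD /usr/bin/ctags rather than the Homebrew Exuberant Ctags. A diagnostic and workaround sketch; Tlist_Ctags_Cmd is taglist's option for pointing at a specific ctags binary:

        type ctags                        # in the shell: shows whether the alias is in effect
        ls "$(brew --prefix)/bin/ctags"   # confirm the Homebrew ctags actually exists
        # point taglist at the Homebrew binary so MacVim uses it too:
        echo "let Tlist_Ctags_Cmd = '$(brew --prefix)/bin/ctags'" >> ~/.vimrc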

    Read the article

  • I am trying to build libmtp 1.1.14 but I cannot fix this error

    - by Kristoffer
    I have run this in a terminal:

        git clone git://libmtp.git.sourceforge.net/gitroot/libmtp/libmtp
        cd libmtp
        ./autogen.sh     (answering yes to all questions)

    But when I try to run ./configure --prefix=/usr/ I get this error:

        checking whether to build static libraries... yes
        ./configure: line 11739: AC_LIB_PREPARE_PREFIX: command not found
        ./configure: line 11740: AC_LIB_RPATH: command not found
        ./configure: line 11745: syntax error near unexpected token `iconv'
        ./configure: line 11745: ` AC_LIB_LINKFLAGS_BODY(iconv)'

    I have built and installed libiconv from here. I do not know what to do; I've been trying for a few hours, but I am pretty new to Linux. How can I fix this? Lines 11739 to 11745 in the configure file look like this:

        AC_LIB_PREPARE_PREFIX
        AC_LIB_RPATH
        AC_LIB_LINKFLAGS_BODY(iconv)
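
    The AC_LIB_PREPARE_PREFIX and AC_LIB_RPATH macros come from gettext's m4 files, so "command not found" here usually means autogen.sh ran without those macros installed. A plausible fix (an assumption, not verified against this exact tree) is to install the missing autotools pieces and regenerate configure:

        sudo apt-get install gettext autopoint libtool automake autoconf
        autoreconf -fvi              # regenerate ./configure with the gettext m4 macros present
        ./configure --prefix=/usr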

    Read the article

  • cd command not working anymore

    - by dystroy
    I just did two things: installed scm_breeze and Mercurial:

        git clone git://github.com/ndbroadbent/scm_breeze.git ~/.scm_breeze
        ~/.scm_breeze/install.sh
        sudo apt-get install mercurial

    And now my cd command seems to be gone:

        dys@dys-tour:~/prog> cd ~
        bash: /home/dys : is a folder
        dys@dys-tour:~/prog> cd ..
        .. : command not found

    The terminals I opened before installing scm_breeze and Mercurial are fine. The terminals I open now have the problem. I uninstalled scm_breeze, with no result. What can I do to diagnose the problem and fix it?
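
    A diagnostic sketch, assuming the scm_breeze installer appended a line to the shell startup file that redefines cd and the uninstall left it behind:

        type cd                          # is cd still the shell builtin, or now an alias/function?
        grep -n scm_breeze ~/.bashrc     # find whatever the installer appended
        # comment out or delete that line, then start a clean login shell:
        exec bash -l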

    Read the article

  • What's the best version control/QA workflow for a legacy system?

    - by John Cromartie
    I am struggling to find a good balance with our development and testing process. We use Git right now, and I am convinced that ReinH's Git Workflow For Agile Teams is not just great for capital-A Agile, but for pretty much any team on a DVCS. That's what I've tried to implement, but it's just not catching on. We have a large legacy system with a complex environment, hundreds of outstanding and undiscovered defects, and no good way to set up a test environment with realistic data. It's also hard to release updates without disrupting users. Most of all, it's hard to do thorough QA with this process... and we need thorough testing with this legacy system. I feel like we can't really pull off anything as slick as the Git workflow outlined in the link. What's the way to do it?

    Read the article

  • Lenovo Y570 nVidia drivers are giving me problems. (GT 555)

    - by Joe
    So, I've tried the bumblebee "hack" listed here: https://github.com/Bumblebee-Project/bbswitch/tree/hack-lenovo. I copied every line below into the terminal – is that what I was supposed to do? I'm a Linux noob.

        git clone http://github.com/Bumblebee-Project/bbswitch.git -b hack-lenovo
        cd bbswitch
        mkdir /usr/src/acpi-handle-hack-0.0.1
        cp Makefile acpi-handle-hack.c /usr/src/acpi-handle-hack-0.0.1
        cp dkms/acpi-handle-hack.conf /usr/src/acpi-handle-hack-0.0.1/dkms.conf
        dkms add acpi-handle-hack/0.0.1
        dkms build acpi-handle-hack/0.0.1
        dkms install acpi-handle-hack/0.0.1
        echo acpi-handle-hack | sudo tee -a /etc/modules
        sudo update-initramfs -u

    (I had to use http:// instead of git:// at my uni.) It doesn't change anything; I get:

        joe@ubuntu:~$ optirun glxspheres
        [ 4447.830749] [ERROR]Cannot access secondary GPU - error: Could not load GPU driver
        [ 4447.830844] [ERROR]Aborting because fallback start is disabled.
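
    One thing that stands out in the transcript: the mkdir/cp/dkms steps write to /usr/src and install a kernel module, so they normally need root. A re-run with sudo (an assumption about the failure, not a confirmed fix) would look like:

        cd bbswitch
        sudo mkdir /usr/src/acpi-handle-hack-0.0.1
        sudo cp Makefile acpi-handle-hack.c /usr/src/acpi-handle-hack-0.0.1
        sudo cp dkms/acpi-handle-hack.conf /usr/src/acpi-handle-hack-0.0.1/dkms.conf
        sudo dkms add acpi-handle-hack/0.0.1
        sudo dkms build acpi-handle-hack/0.0.1
        sudo dkms install acpi-handle-hack/0.0.1
        echo acpi-handle-hack | sudo tee -a /etc/modules
        sudo update-initramfs -u
        # reboot, then retry: optirun glxspheres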

    Read the article

  • NHibernate 3.0 and FluentNHibernate, how to get up and running…

    - by DesigningCode
    First up: it's actually really easy. I'm not very religious about my DB tech; I don't really care, I just want something that works. So I'm happy to consider all options if they provide an advantage, and recently I was considering jumping from NHibernate to EF 4.0. However, before ditching NHibernate and jumping to EF 4.0, I thought I should try the head version of NHibernate's trunk and the head version of FluentNHibernate. I currently have a "Repository / Unit of Work" framework built up around these two techs. All up, it makes my life pretty simple for dealing with databases. The problem is the current release of NHibernate plus the LINQ provider wasn't too hot for our purposes, especially trying to plug it into older VB.NET code. The LINQ provider spat the dummy with VB.NET lambdas, mainly because in C#

        Query().Where(l => l.Name.Contains("x") || l.Name.Contains("y")).ToList();

    is not the same as the VB.NET

        Query().Where(Function(l) l.Name.Contains("x") Or l.Name.Contains("y")).ToList

    VB.NET seems to spit out... well... something different :-) So anyway: compiling your own version of NHibernate and FluentNHibernate. It's actually pretty easy!

    First you'll need to install TortoiseSVN, NAnt and Git if you don't already have them.

    NHibernate first. Step one: get the Subversion trunk from https://nhibernate.svn.sourceforge.net/svnroot/nhibernate/trunk/ into a directory somewhere, e.g. \thirdparty\nhibernate. Then use NAnt to build it. (If you open the .sln it will show errors, in that AssemblyInfo.cs doesn't exist.) To build it, there is a .txt document with sample command-line build instructions; I simply used:

        NAnt -D:project.config=release clean build > output-release-build.log

    *wait* *wait* *wait* and ta-da, you will have a bin directory with all the release DLLs.

    FluentNHibernate: this was pretty simple. There are instructions here: http://wiki.fluentnhibernate.org/Getting_started#Installation. Basically, with Git, create a directory and issue the command

        git clone git://github.com/jagregory/fluent-nhibernate.git

    and wait, and soon enough you have the source. Now, from the bin directory that NHibernate spat out, take everything and dump it into the subdirectory "fluent-nhibernate\tools\NHibernate". To build, you can use rake, which is a Ruby build system; however, you can also just open the solution and build, which is what I did. I had a few problems with the references, which I simply re-added using the new ones. Once built, I just took all the NHibernate DLLs and the Fluent ones, replaced my existing NHibernate / Fluent assemblies, and killed off the old LINQ project. All I had to change was the places that used .Linq<T>, replacing them with .Query<T> (which was easy, as I had wrapped it already to isolate my code from such changes), and hey presto, everything worked, even the VB.NET LINQ calls. I need to do some more testing as I've only done basic smoke tests, but it's all looking pretty good, so for now I will stick to NHibernate!
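
    For anyone following along, the whole sequence above condenses to a few commands. A minimal sketch, assuming svn, NAnt and Git are on PATH and using the paths from the post (the directory names are otherwise arbitrary):

        # 1. grab and build NHibernate trunk
        svn checkout https://nhibernate.svn.sourceforge.net/svnroot/nhibernate/trunk/ nhibernate
        cd nhibernate
        NAnt -D:project.config=release clean build > output-release-build.log

        # 2. grab FluentNHibernate and hand it the freshly built NHibernate
        cd ..
        git clone git://github.com/jagregory/fluent-nhibernate.git
        cp nhibernate/build/*/bin/* fluent-nhibernate/tools/NHibernate/

        # 3. build FluentNHibernate (rake, or open the .sln and build)

    The exact output path under nhibernate/build varies by NHibernate revision, so the cp source above is an assumption; copy from whichever bin directory the NAnt build reports.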

    Read the article

  • Notes from a short presentation on NodeJs

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2014/05/30/notes-from-a-short-presentation-on-nodejs.aspx

    I volunteered myself to give a short 30-minute presentation at a work lunch and learn on NodeJs. With my limited experience I see using Node as a great tool for build process improvement, scaffolding with Yeoman, and running tests with Karma. I haven't looked into using it as a full server or development stack; I guess I'm too stuck on IIS and Visual Studio :-). Here are my notes. They aren't very well formatted, but I wanted to share them anyway.

    What is it? "Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices."

    Why should you be interested? It's another popular tool that can help you get the job done; you can use the command prompt; and it can be run at build or release time to automate tasks.

    What are some uses?

    https://www.npmjs.org/ - NuGet for Node packages
    http://bower.io/ - NuGet for UI JavaScript libraries (jQuery, Bootstrap, Angular, etc.)
    http://yeoman.io/ - "Our workflow is comprised of three tools for improving your productivity and satisfaction when building a web app: yo (the scaffolding tool), grunt (the build tool) and bower (for package management)." Yeoman asks which components you want.
    Alternative: http://joakimbeng.eu01.aws.af.cm/slush-replacing-yeoman-with-gulp/
    https://www.npmjs.org/package/generator-cg-angular - PhantomJS, LESS, ...

    Git is needed for bower (http://git-scm.com/): run the installer in Windows before you can use bower, select "Run Git from the Windows Command Prompt" in the installer, and reboot. See http://stackoverflow.com/questions/20069297/bower-git-not-in-the-path-error.

        npm install -g git
        npm install -g yo
        npm install -g generator-cg-angular
        mkdir myapp
        cd myapp
        yo cg-angular
        npm install -g bower
        npm install -g grunt-cli
        yo bower
        grunt serve
        grunt test
        grunt build

    There are many generators (generator-angular is another one); I like the NuGet HotTowel-Angular from John Papa myself. I needed IIS Node for Express (prompt from WebMatrix).

    Other uses: a bat file to start up Karma (see below); image compression (https://www.npmjs.org/search?q=optimize+images, https://github.com/heldr/node-smushit - do it from the command line); LESS compiling; JS and CSS combining and minification at build with Gulp for RequireJS apps; a quick lightweight HTTP server ("Express").

    Build pipeline with Grunt or Gulp: http://www.johnpapa.net/gulp-and-grunt-at-anglebrackets/. Gulp is the newer one and an improvement over Grunt; it is supposed to be easier to use, but Grunt is more established. See https://github.com/johnpapa/ng-demos/tree/master/grunt-gulp and https://github.com/assetgraph/assetgraph-builder, which does a lot of the minimizing, combining, image optimization etc. using Node. Looks interesting...

    More links: http://nodejs.org, http://nodeschool.io/, http://sub.watchmecode.net/getting-started-with-nodejs-installing-and-writing-your-first-code/, https://stormpath.com/blog/build-a-killer-node-dot-js-client-for-your-rest-plus-json-api/, https://codio.com/, http://www.hanselman.com/blog/ItsJustASoftwareIssueEdgejsBringsNodeAndNETTogetherOnThreePlatforms.aspx

    Run unit tests (Karma) from MSBuild - karma-start.bat:

        @echo off
        cd %~dp0\..
        REM 604800 is to make sure we only update once every 7 days
        call npm install --cache-min 604800 -g grunt-cli
        call npm install --cache-min 604800
        call npm install --cache-min 604800 -g karma-cli
        karma start UnitTests\karma.conf.js
        REM karma start UnitTests\karma.conf.js --single-run
        REM see karma-start.bat and karma.conf.js
        REM jsHint comes from NuGet

    Read the article

  • DotNetOpenAuth: Message signature was incorrect

    - by Shawn Miller
    I'm getting a "Message signature was incorrect" exception when trying to authenticate with MyOpenID and Yahoo. I'm using pretty much the ASP.NET MVC sample code that came with DotNetOpenAuth 3.4.2:

        public ActionResult Authenticate(string openid)
        {
            var openIdRelyingParty = new OpenIdRelyingParty();
            var authenticationResponse = openIdRelyingParty.GetResponse();

            if (authenticationResponse == null)
            {
                // Stage 2: User submitting identifier
                Identifier identifier;
                if (Identifier.TryParse(openid, out identifier))
                {
                    var realm = new Realm(Request.Url.Root() + "openid");
                    var authenticationRequest = openIdRelyingParty.CreateRequest(openid, realm);
                    authenticationRequest.RedirectToProvider();
                }
                else
                {
                    return RedirectToAction("login", "home");
                }
            }
            else
            {
                // Stage 3: OpenID provider sending assertion response
                switch (authenticationResponse.Status)
                {
                    case AuthenticationStatus.Authenticated:
                    {
                        // TODO
                    }
                    case AuthenticationStatus.Failed:
                    {
                        throw authenticationResponse.Exception;
                    }
                }
            }
            return new EmptyResult();
        }

    Working fine with Google, AOL and others. However, Yahoo and MyOpenID fall into the AuthenticationStatus.Failed case with the following exception:

        DotNetOpenAuth.Messaging.Bindings.InvalidSignatureException: Message signature was incorrect.
           at DotNetOpenAuth.OpenId.ChannelElements.SigningBindingElement.ProcessIncomingMessage(IProtocolMessage message) in c:\Users\andarno\git\dotnetopenid\src\DotNetOpenAuth\OpenId\ChannelElements\SigningBindingElement.cs:line 139
           at DotNetOpenAuth.Messaging.Channel.ProcessIncomingMessage(IProtocolMessage message) in c:\Users\andarno\git\dotnetopenid\src\DotNetOpenAuth\Messaging\Channel.cs:line 992
           at DotNetOpenAuth.OpenId.ChannelElements.OpenIdChannel.ProcessIncomingMessage(IProtocolMessage message) in c:\Users\andarno\git\dotnetopenid\src\DotNetOpenAuth\OpenId\ChannelElements\OpenIdChannel.cs:line 172
           at DotNetOpenAuth.Messaging.Channel.ReadFromRequest(HttpRequestInfo httpRequest) in c:\Users\andarno\git\dotnetopenid\src\DotNetOpenAuth\Messaging\Channel.cs:line 386
           at DotNetOpenAuth.OpenId.RelyingParty.OpenIdRelyingParty.GetResponse(HttpRequestInfo httpRequestInfo) in c:\Users\andarno\git\dotnetopenid\src\DotNetOpenAuth\OpenId\RelyingParty\OpenIdRelyingParty.cs:line 540

    It appears that others are having the same problem: http://trac.dotnetopenauth.net:8000/ticket/172. Does anyone have a workaround?

    Read the article

  • A good machine learning technique to weed out good URLs from bad

    - by git-noob
    Hi, I have an application that needs to discriminate between good HTTP GET requests and bad ones. For example:

        http://somesite.com?passes=dodgy+parameter       # BAD
        http://anothersite.com?passes=a+good+parameter   # GOOD

    My system can make a binary decision about whether a URL is good or bad, but ideally I would like it to predict whether a previously unseen URL is good or bad:

        http://some-new-site.com?passes=a+really+dodgy+parameter # BAD

    I feel the need for a support vector machine (SVM)... but I need to learn machine learning. Some questions:

    1) Is an SVM appropriate for this task?
    2) Can I train it with the raw URLs, without explicitly specifying 'features'?
    3) How many URLs will I need for it to be good at predictions?
    4) What kind of SVM kernel should I use?
    5) After I train it, how do I keep it up to date?
    6) How do I test unseen URLs against the SVM to decide whether they are good or bad?

    Read the article

  • Executing javascript during redirect without changing original referrer

    - by git-noob
    I need to test whether a click-through is valid by using some JavaScript client-side tests (e.g., browser window dimensions). However, I would like the original click referrer to remain the same. Is there a way I can do a redirect, execute some JavaScript, capture the browser details, and then continue the click-through while keeping the original referrer value the same?

    Read the article

  • Storing a remote path by name in Hg

    - by Erik Vold
    In Git I can do git remote add x http://... and then git pull x. How can I do this in hg? I was told to add the following to .hgrc:

        [paths]
        x = http://...

    so I added the above to /path/to/repo/.hgrc, then tried hg pull x and got the following error:

        abort: repository x not found!

    where x was mozilla and http://... was http://hg.mozilla.org/labs/jetpack-sdk/
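
    One likely cause, assuming a standard Mercurial setup: per-repository configuration lives in .hg/hgrc inside the repository, not in a .hgrc file at the repository root, so a [paths] section in /path/to/repo/.hgrc is never read. A minimal sketch of the fix, using the names from the question:

        # append the named path to the repository's own config file
        cat >> /path/to/repo/.hg/hgrc <<'EOF'
        [paths]
        mozilla = http://hg.mozilla.org/labs/jetpack-sdk/
        EOF

        # verify Mercurial sees it, then pull by name
        cd /path/to/repo
        hg paths
        hg pull mozilla

    (~/.hgrc is also read, but it applies to every repository, so the per-repo .hg/hgrc is the closer analogue of git remote add.)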

    Read the article

  • Problem compiling x264 with MP4 support

    - by johnas
    I'm trying to compile x264 with MP4 output support. I downloaded the latest version from their Git repository by typing:

        git clone git://git.videolan.org/x264.git

    When I run ./configure it configures and I'm able to make it. But when I configure it with ./configure --enable-mp4-output and then try to make it, it returns a compilation error. The error message looks like:

        ... Lots of similar errors ...
        output/mp4.c:297: warning: implicit declaration of function ‘gf_isom_add_sample’
        output/mp4.c:297: error: ‘mp4_hnd_t’ has no member named ‘p_file’
        output/mp4.c:297: error: ‘mp4_hnd_t’ has no member named ‘i_track’
        output/mp4.c:297: error: ‘mp4_hnd_t’ has no member named ‘i_descidx’
        output/mp4.c:297: error: ‘mp4_hnd_t’ has no member named ‘p_sample’
        output/mp4.c:299: error: ‘mp4_hnd_t’ has no member named ‘p_sample’
        output/mp4.c:300: error: ‘mp4_hnd_t’ has no member named ‘i_numframe’

    I've tried different releases, I've installed GPAC and FFmpeg, and I've tried numerous tips from the net, but I still can't get it to work. The reason I want MP4 output is that I want to use FFmpeg to create MP4 files encoded with x264. I'm running Ubuntu Server 9.10, 32-bit.
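
    Errors like "‘mp4_hnd_t’ has no member named ..." plus the implicit declaration of gf_isom_add_sample usually mean x264's MP4 muxer is being compiled against GPAC headers that are too old or that don't match the installed library, since --enable-mp4-output pulls in GPAC's isomedia API. A minimal sketch of a clean rebuild, assuming the Ubuntu package name libgpac-dev and that the packaged GPAC is recent enough for your x264 snapshot (if it isn't, GPAC has to be built from source first):

        # install GPAC development headers and libraries
        sudo apt-get install libgpac-dev

        # rebuild x264 from a clean tree so configure re-detects GPAC
        cd x264
        make distclean
        ./configure --enable-mp4-output
        make

    If the same errors persist, a stale GPAC built from source under /usr/local may be shadowing the packaged headers; checking which gpac.h the compile actually picks up is the next step.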

    Read the article

  • Error when running bundler install

    - by lmumar
    Hi all, I tried running bundle install on our production server, but I encountered this problem:

        Updating git://github.com/collectiveidea/delayed_job.git
        fatal: Refusing to fetch into current branch refs/heads/master of non-bare repository
        An error has occurred in git. Cannot complete bundling.

    I have bundler version 0.9.25 installed. Thanks
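
    The "Refusing to fetch into current branch" message comes from Git itself: Bundler 0.9.x kept full, non-bare clones of git-sourced gems in its cache, and fetching into the checked-out branch of such a clone fails on newer Git versions. A minimal sketch of two common ways out (the cache path is an assumption; check where your Bundler version actually stores git gems):

        # option 1: remove the cached clone so bundler re-clones it fresh
        rm -rf ~/.bundle/ruby/1.8/bundler/gems/delayed_job-*
        bundle install

        # option 2: upgrade to Bundler 1.x, which reworked how git-sourced
        # gems are fetched and avoids this fetch pattern
        gem install bundler
        bundle install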

    Read the article

  • Editing remote files over SSH, using TextMate?

    - by Zachary Burt
    I LOVE using TextMate on my MacBook; it's great. Unfortunately, I want to edit some files directly on my dev server, since it's difficult to recreate the environment locally. I'm using Git, so one alternative is to just edit locally, git commit, git push, and then git merge, but that's kind of complicated every time I want to make a simple change. I'd rather just use another solution. One thing I tried is mounting a hard drive via MacFusion and then loading that in an editor, but that's so freaking laggy/slow! Has anyone cooked up a better solution?
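
    If the MacFusion mount felt slow, one thing worth trying, offered as a sketch rather than a known fix, is mounting over plain sshfs with the caching options turned up, and opening only the project subtree you need rather than the whole remote disk (the host, path, and option values below are assumptions):

        # mount just the project directory, with aggressive caching
        mkdir -p ~/mnt/devserver
        sshfs user@devserver:/var/www/myapp ~/mnt/devserver \
            -o cache_timeout=115 -o attr_timeout=115 -o compression=yes

        # open it in TextMate from the command line
        mate ~/mnt/devserver

        # unmount when done
        umount ~/mnt/devserver

    TextMate's project-wide directory scanning is what usually makes network mounts feel laggy, so pointing mate at the narrowest possible directory helps as much as the mount options do.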

    Read the article

  • Is bundler ready for prime time?

    - by schof
    I'm considering using Bundler for deploying a Spree app on Heroku. My question is: is Bundler ready for prime time? I know there are some rough edges, but I'd like to know more about what the current limitations are and figure out whether this is an option for us. Specifically, I'd like to use the git repository support:

        git "git://github.com/indirect/rails3-generators.git"
        gem "rails3-generator

    Does anyone want to encourage/discourage me from this course of action? Does anybody have experience with this on Heroku in particular?

    Read the article

  • Heroku push rejected, failed to install gems via Bundler

    - by ismaelsow
    Hi everybody! I am struggling to push my code to Heroku, and after searching Google and Stack Overflow questions I have not been able to find the solution. Here is what I get when I try git push heroku master:

        Heroku receiving push
        -----> Rails app detected
        -----> Detected Rails is not set to serve static_assets
               Installing rails3_serve_static_assets... done
        -----> Gemfile detected, running Bundler version 1.0.3
               Unresolved dependencies detected; Installing...
               Fetching source index for http://rubygems.org/
        /usr/ruby1.8.7/lib/ruby/site_ruby/1.8/rubygems/remote_fetcher.rb:300:in `open_uri_or_path': bad response Not Found 404 (http://rubygems.org/quick/Marshal.4.8/mail-2.2.6.001.gemspec.rz) (Gem::RemoteFetcher::FetchError)
                from /usr/ruby1.8.7/lib/ruby/site_ruby/1.8/rubygems/remote_fetcher.rb:172:in `fetch_path'
        . ....

    And finally:

        FAILED: http://docs.heroku.com/bundler
        ! Heroku push rejected, failed to install gems via Bundler
        error: hooks/pre-receive exited with error code 1
        To [email protected]:myapp.git
        ! [remote rejected] master -> master (pre-receive hook declined)
        error: failed to push some refs to '[email protected]:myapp.git'

    Thanks for your help!
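
    The 404 in the middle of the output is the real failure: the push dies while fetching mail-2.2.6.001, a gem version that is no longer published on rubygems.org, and the stale reference is almost certainly pinned in the app's Gemfile.lock. A minimal sketch of the usual recovery, assuming the mail gem is a transitive dependency (the version numbers are assumptions):

        # locally: re-resolve the mail dependency to a version that still exists
        bundle update mail

        # commit the refreshed lockfile and push again
        git add Gemfile.lock
        git commit -m "Update mail gem to a published version"
        git push heroku master

    If bundle update cannot find a compatible version, pinning one explicitly in the Gemfile (e.g. gem "mail", "~> 2.2.5") and re-running bundle install achieves the same thing.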

    Read the article

< Previous Page | 73 74 75 76 77 78 79 80 81 82 83 84  | Next Page >