Search Results


  • Public static IP for a Vagrant box

    - by Numbata
    I have a server (Debian Squeeze) with 1 ethernet card and 2 public static IPs (188.120.245.4 and 188.120.244.5).

    What I want: set up a VirtualBox VM (Ubuntu) reachable via the static IP 188.120.244.5.

    What I have tried:

    - config.vm.forward_port - good idea: set up interface "eth1:1" with 188.120.244.5 on the host machine, and add to the Vagrantfile "config.vm.forward_port = hmm..?"
    - config.vm.network :hostonly, "188.120.244.5" - not working. A new interface was created on the host machine with IP "188.120.244.1". Of course the 188.120.244.1 IP isn't mine and I can't access my server via this IP.
    - config.vm.network :bridged - I'm confused how this works :)

    What I have now is a non-working configuration:

        Debian-host-machine# cat Vagrantfile
        Vagrant::Config.run do |config|
          config.vm.define :gitlab do |box_config|
            box_config.vm.box = "ubuntu"
            box_config.vm.host_name = "ubuntu"
            box_config.vm.network :bridged
            box_config.vm.network :hostonly, "188.120.244.5", :auto_config => false
          end
        end

        Debian-host-machine# ifconfig
        eth1      Link encap:Ethernet  HWaddr 00:15:17:69:71:bb
                  inet addr:188.120.245.4  Bcast:188.120.247.255  Mask:255.255.248.0
        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
        vboxnet0  Link encap:Ethernet  HWaddr 0a:00:27:00:00:00
                  inet addr:188.120.244.1  Bcast:188.120.246.255  Mask:255.255.255.0

        Ubuntu-virtual-machine# ifconfig
        eth0      Link encap:Ethernet  HWaddr 08:00:27:ee:8d:0c
                  inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
        eth1      Link encap:Ethernet  HWaddr 08:00:27:45:71:87
        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0

    How can I access the virtual box via a public static IP from the network? I'm using Oracle VM VirtualBox Manager 4.1.18 and Vagrant version 1.0.3. Thanks in advance for your feedback.
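
    A hedged sketch of one possible direction (not a verified fix): drop the host-only network entirely, bridge the guest onto the host's physical NIC, and give the guest the public address itself. The gateway address below is an assumption and must be replaced with the real upstream gateway.

        # Vagrantfile (Vagrant 1.0.x syntax, as in the question)
        Vagrant::Config.run do |config|
          config.vm.define :gitlab do |box_config|
            box_config.vm.box       = "ubuntu"
            box_config.vm.host_name = "ubuntu"
            # Bridge the guest NIC onto the host's physical network so the
            # guest can carry a public address of its own.
            box_config.vm.network :bridged
          end
        end

    Then configure the bridged interface statically inside the Ubuntu guest:

        # /etc/network/interfaces (guest); netmask copied from the host's eth1
        auto eth1
        iface eth1 inet static
            address 188.120.244.5
            netmask 255.255.248.0
            gateway 188.120.240.1   # assumption: replace with the real gateway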

  • "unrecognized options" while installing php

    - by user1692333
    I want to compile PHP 5.4.8 on my Mac (10.8.2), but I get some errors I can't solve by myself, so I need your help. First I got the default PHP options with php -i | head, then I ran this command:

        ./configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info \
          --disable-dependency-tracking --sysconfdir=/private/etc --with-apxs2=/usr/sbin/apxs \
          --enable-cli --with-config-file-path=/etc --with-libxml-dir=/usr --with-openssl=/usr \
          --with-kerberos=/usr --with-zlib=/usr --enable-bcmath --with-bz2=/usr --enable-calendar \
          --disable-cgi --with-curl=/usr --enable-dba --enable-ndbm=/usr --enable-exif --enable-fpm \
          --enable-ftp --with-gd \
          --with-freetype-dir=/BinaryCache/apache_mod_php/apache_mod_php-79~4/Root/usr/local \
          --with-jpeg-dir=/BinaryCache/apache_mod_php/apache_mod_php-79~4/Root/usr/local \
          --with-png-dir=/BinaryCache/apache_mod_php/apache_mod_php-79~4/Root/usr/local \
          --enable-gd-native-ttf --with-icu-dir=/usr --with-iodbc=/usr --with-ldap=/usr \
          --with-ldap-sasl=/usr --with-libedit=/usr --enable-mbstring --enable-mbregex \
          --with-mysql=mysqlnd --with-mysqli=mysqlnd --without-pear --with-pdo-mysql=mysqlnd \
          --with-mysql-sock=/var/mysql/mysql.sock --with-readline=/usr --enable-shmop \
          --with-snmp=/usr --enable-soap --enable-sockets --enable-sqlite-utf8 --enable-suhosin \
          --enable-sysvmsg --enable-sysvsem --enable-sysvshm --with-tidy --enable-wddx \
          --with-xmlrpc --with-iconv-dir=/usr --with-xsl=/usr --enable-zend-multibyte \
          --enable-zip --with-pcre-regex --with-pgsql=/usr --with-pdo-pgsql=/usr

    But I get this error:

        config.status: creating Makefile
        config.status: creating jconfig.h
        config.status: jconfig.h is unchanged
        config.status: executing depfiles commands
        config.status: executing libtool commands
        configure: WARNING: unrecognized options: --enable-cli, --with-config-file-path,
          --with-libxml-dir, --with-openssl, --with-kerberos, --with-zlib, --enable-bcmath,
          --with-bz2, --enable-calendar, --disable-cgi, --with-curl, --enable-dba, --enable-ndbm,
          --enable-exif, --enable-fpm, --enable-ftp, --with-gd, --with-freetype-dir, --with-jpeg-dir,
          --with-png-dir, --enable-gd-native-ttf, --with-icu-dir, --with-iodbc, --with-ldap,
          --with-ldap-sasl, --with-libedit, --enable-mbstring, --enable-mbregex, --with-mysql,
          --with-mysqli, --without-pear, --with-pdo-mysql, --with-mysql-sock, --with-readline,
          --enable-shmop, --with-snmp, --enable-soap, --enable-sockets, --enable-sqlite-utf8,
          --enable-suhosin, --enable-sysvmsg, --enable-sysvsem, --enable-sysvshm, --with-tidy,
          --enable-wddx, --with-xmlrpc, --with-iconv-dir, --with-xsl, --enable-zend-multibyte,
          --enable-zip, --with-pcre-regex, --with-pgsql, --with-pdo-pgsql

    Maybe someone has some suggestions on this?
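
    A hedged observation: jconfig.h is the header that libjpeg's configure generates, not PHP's, which suggests the configure script actually being run belongs to a bundled library rather than to the PHP source root. A minimal sanity check (the source path is hypothetical):

        # Make sure you are in the PHP source root, not a sub-library directory
        cd ~/src/php-5.4.8            # hypothetical path to the unpacked source
        ls configure.in main/php.h    # these files exist only in the PHP tree
        # Ask the script which options it really supports before re-running:
        ./configure --help | grep -e enable-cli -e with-openssl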

  • OpenWRT + OpenVPN client forwarding from lan to vpn not working

    - by Dariusz Górecki
    I have an OpenWRT router with Backfire 10.03.1-rc3 (arch: brcm, 2.6 kernel). I've set up an OpenVPN client connecting my router to the workplace LAN, and it works nicely: I can connect from the router to the (several) networks at work. My OpenVPN client UCI config looks like:

        config 'openvpn' 'stream_client'
            option 'nobind' '1'
            option 'float' '1'
            option 'client' '1'
            option 'reneg_sec' '0'
            option 'management' '127.0.0.1 31194'
            option 'explicit_exit_notify' '1'
            option 'verb' '3'
            option 'persist_tun' '1'
            option 'persist_key' '1'
            list 'remote' 'remote.address.cutted'
            option 'ca' '/lib/uci/upload/cbid.openvpn.stream_client.ca'
            option 'key' '/lib/uci/upload/cbid.openvpn.stream_client.key'
            option 'cert' '/lib/uci/upload/cbid.openvpn.stream_client.cert'
            option 'enable' '1'
            option 'dev' 'tun1'

    I've set the STREAM_VPN zone to allow in/out traffic, and I've added zone-to-zone forwarding rules for lan<-vpn and vpn<-lan:

        config 'zone'
            option 'name' 'stream_vpn'
            option 'network' 'stream_vpn'
            option 'input' 'ACCEPT'
            option 'output' 'ACCEPT'
            option 'forward' 'REJECT'

        config 'forwarding'
            option 'src' 'lan'
            option 'dest' 'stream_vpn'

        config 'forwarding'
            option 'src' 'stream_vpn'
            option 'dest' 'lan'

    And the interface config:

        config 'interface' 'stream_vpn'
            option 'proto' 'none'
            option 'ifname' 'tun1'
            option 'defaultroute' '0'
            option 'peerdns' '0'

    Now, from my router everything works nicely; the problem is that I cannot connect from a computer inside the LAN to hosts in the networks provided by the VPN connection :/ What have I missed, or what am I doing wrong? And how can I force clients to use a specified DNS server when connected to the VPN? (I know the server should use the PUSH DNS option, but it PUSHes only routes.)
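
    A hedged sketch of one common fix, under two assumptions: the remote site has no route back to your LAN subnet, and 10.0.0.1 stands in for the workplace DNS server. Masquerading LAN traffic behind the router's tun1 address sidesteps the missing return route, and dnsmasq can forward just the work domain to the remote resolver:

        # /etc/config/firewall - NAT LAN clients behind the tunnel address
        config 'zone'
            option 'name'    'stream_vpn'
            option 'network' 'stream_vpn'
            option 'input'   'ACCEPT'
            option 'output'  'ACCEPT'
            option 'forward' 'REJECT'
            option 'masq'    '1'

        # /etc/config/dhcp - send queries for the work domain over the VPN
        config 'dnsmasq'
            list 'server' '/workplace.lan/10.0.0.1'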

  • CKEditor inside jQuery Dialog, how do I build it?

    - by Ben Dauphinee
    So, I'm working with CKEditor and jQuery, trying to build a pop-out editor. Below is what I have coded so far, and I can't seem to get it working the way I want it to. Basically: click the 'Edit' link, a dialog box pops up with the content to edit loaded into the CKEditor. Also, not required, but helpful if you can suggest how to do it: I can't seem to find out how to make the save button work in CKEditor (though I think the form will do it). Thanks in advance for any help.

        $(document).ready(function(){
            var config = new Array();
            config.height = "350px";
            config.resize_enabled = false;
            config.tabSpaces = 4;
            config.toolbarCanCollapse = false;
            config.width = "700px";
            config.toolbar_Full = [["Save","-","Cut","Copy","Paste","-","Undo","Redo","-",
                "Bold","Italic","-","NumberedList","BulletedList","-","Link","Unlink","-",
                "Image","Table"]];

            $("a.opener").click(function(){
                var editid = $(this).attr("href");
                var editwin = '<form><div id="header"><input type="text"></div>' +
                              '<div id="content"><textarea id="content"></textarea></div></form>';
                var $dialog = $("<div>"+editwin+"</div>").dialog({
                    autoOpen: false,
                    title: "Editor",
                    height: 360,
                    width: 710,
                    buttons: {
                        "Ok": function(){
                            var data = $(this).val();
                        }
                    }
                });
                //$(this).dialog("close");
                $.getJSON("ajax/" + editid, function(data){
                    alert("datagrab");
                    $dialog.("textarea#content").html(data.content).ckeditor(config);
                    alert("winset");
                    $dialog.dialog("open");
                });
                return false;
            });
        });
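
    A hedged note on two likely culprits in the code above: $dialog.("textarea#content") is a syntax error (it needs .find()), and the wrapper div and the textarea share id="content", which makes id lookups ambiguous. A minimal corrected sketch, assuming the CKEditor jQuery adapter is loaded and ajax/<id> returns {"content": "..."}:

        $.getJSON("ajax/" + editid, function (data) {
            // find() scopes the lookup to the dialog's detached markup
            $dialog.find("textarea#content")
                   .val(data.content)   // seed the textarea before attaching
                   .ckeditor(config);   // jQuery adapter wires up the editor
            $dialog.dialog("open");
        });

    For the Ok button, reading the edited HTML back out would look something like var html = $dialog.find("textarea#content").ckeditorGet().getData(); (ckeditorGet() comes from the same adapter).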

  • How to run autoconf in OSX linking to specified OS SDK

    - by kroko
    Hello! I have written a small command line tool that includes some GPL code. Everything runs smoothly using OS 10.6. The external code used has a config.h header file, made by calling autoconf. I'd like to deploy the tool to different OS versions, thus config.h could look like:

        // config.h
        #if MAC_OS_X_VERSION_MAX_ALLOWED == MAC_OS_X_VERSION_10_4
        // autoconf created config.h content for 10.4 comes here
        #elif MAC_OS_X_VERSION_MAX_ALLOWED == MAC_OS_X_VERSION_10_5
        // autoconf created config.h content for 10.5 comes here
        #elif MAC_OS_X_VERSION_MAX_ALLOWED == MAC_OS_X_VERSION_10_6
        // autoconf created config.h content for 10.6 comes here
        #else
        #error "muahahaha"
        #endif

    What is the way to tell autoconf to use /Developer/SDKs/MacOSX10.XXXX.sdk/usr/ while generating the config.h? In order to test it I have run:

        #!/bin/bash
        # for 10.6
        export CC="/usr/bin/gcc-4.2"
        export CXX="/usr/bin/g++-4.2"
        export MACOSX_DEPLOYMENT_TARGET="10.6"
        export OSX_SDK="/Developer/SDKs/MacOSX10.6.sdk"
        export OSX_CFLAGS="-isysroot $OSX_SDK -arch x86_64 -arch i386"
        export OSX_LDFLAGS="-Wl,-syslibroot,$OSX_SDK -arch x86_64 -arch i386"
        export CFLAGS=$OSX_CFLAGS
        export CXXFLAGS=$OSX_CFLAGS
        export LDFLAGS=$OSX_LDFLAGS

    before calling ./configure on OS 10.6. I know that the configure script looks for libintl.h, which is not in the "out of the box" 10.6 SDK, but is present on the local machine under /usr/local. The config.h header file produced with the method described above says that libintl.h is on the system - thus "linking" autoconf only to the SDK has failed. Is it happening because... "we don't have a crystal ball"? :) Or is it an incorrect setup/flag export before running autoconf, which I hope is the case? If so, what would be the correct way to set up the environment variables? Many thanks in advance.
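
    A hedged guess at the flag problem: autoconf's header checks (AC_CHECK_HEADERS) run the preprocessor with CPPFLAGS, which the script above never sets, so the libintl.h probe still searches the default include paths, /usr/local/include among them. A minimal sketch:

        # CPPFLAGS drives configure's header checks; point it at the SDK too,
        # so default include paths are remapped under the sysroot
        export CPPFLAGS="-isysroot $OSX_SDK"
        ./configure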

  • log4net from embedded XML?

    - by sanjeev40084
    I have two projects in Visual Studio. One is a console project, while the other is a regular C# project. In the regular C# project, I have added a config file (i.e. Test.config) with a log4net section. This file is embedded.

        <configSections>
            <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
        </configSections>
        <log4net>
            <appender name="fileAppender" type="log4net.Appender.RollingFileAppender">
                <file value="log//testapp.log" />
                <appendToFile value="true" />
                <rollingStyle value="Size" />
                <maxSizeRollBackups value="10" />
                <maximumFileSize value="100MB" />
                <staticLogFileName value="true" />
                <layout type="log4net.Layout.PatternLayout,log4net">
                    <param name="ConversionPattern" value="%d{ISO8601} [%t] [%-5p] %c - %m%n" />
                </layout>
            </appender>
            <!-- Setup the root category, add the appenders and set the default priority -->
            <root>
                <priority value="ALL" />
                <appender-ref ref="fileAppender" />
            </root>
        </log4net>

    Now in my console project, I want to tell log4net to load its configuration from Test.config, which is in the other project. This is what I did in the constructor of the console project:

        Assembly asm = Assembly.GetExecutingAssembly();
        Stream xmlStream = asm.GetManifestResourceStream("Northwind.Participant.Config.Test.config");
        ILog log = LogManager.GetLogger(typeof(ConsoleStart));

    (Northwind.Participant is the full namespace; Config is the folder where the Test.config file is situated.) Does anyone know how I can do that?
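
    A hedged sketch of the missing steps: the stream is never handed to log4net, and Assembly.GetExecutingAssembly() in the console app is not the assembly that carries the resource. log4net's XmlConfigurator.Configure has a Stream overload; ParticipantMarker below is a hypothetical public type from the Northwind.Participant project, used only to reach its assembly:

        using System.IO;
        using System.Reflection;
        using log4net;
        using log4net.Config;

        // Resolve the assembly that actually embeds Test.config
        Assembly asm = typeof(ParticipantMarker).Assembly;
        using (Stream xmlStream = asm.GetManifestResourceStream(
                   "Northwind.Participant.Config.Test.config"))
        {
            // Feed the embedded XML straight to log4net
            XmlConfigurator.Configure(xmlStream);
        }
        ILog log = LogManager.GetLogger(typeof(ConsoleStart));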

  • Basic Subversion questions

    - by Epitaph
    I've just started using Subversion, and have read the official documentation (the SVN book), a cheat sheet, and a couple of guides. I know how to install Subversion (on Linux), create a repository (svnadmin create), import my Eclipse project into the repository (svn import), and view the repository files (svn list). But I am unable to understand some of the other terminology.

    For example, after importing my Eclipse project into the newly created repository, I have made changes to my Eclipse project (more than 1 file). Now, how should I update the repository with these added files/changes? The svn update command brings changes from the repository into your working copy - which is the opposite of what I want, i.e. bring the changes I made in my Eclipse project into the previously imported project in the repository. If I am correct, you update the repository more often (as you keep extending your project implementation) than your current project (with update).

    Also, I do not understand when you would use svn merge. The SVN book states it applies the differences between 2 sources to a working copy. Is there a scenario which would explain this?

    Finally, can I have more than 1 project checked into the repository? Or is it better to create a new repository for each project?
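
    A hedged sketch of the commit workflow (repository URL and file names are hypothetical): svn import copies files into the repository but does not turn the imported folder into a working copy, so the usual next step is to check one out and commit from there:

        # one-time: get a working copy of the imported project
        svn checkout file:///var/svn/repos/myproject/trunk ~/work/myproject
        cd ~/work/myproject

        # day-to-day: record local edits back to the repository
        svn add src/NewClass.java        # only needed for brand-new files
        svn commit -m "Describe the change"

    svn commit (not update) is the operation that sends your changes to the repository; svn merge typically comes into play when porting changes between branches, e.g. merging a feature branch into a trunk working copy.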

  • Ruby on Rails: undefined method 'version_requirements' when attempting to start server after new install

    - by ezabak
    Hi there, I had to do a fresh install of Ruby on Rails recently. When I attempted to start the server for a project I had already been working on before this new install, I received the following error:

        $ ruby script/server
        => Booting WEBrick...
        ./script/../config/../vendor/rails/railties/lib/rails/gem_dependency.rb:107:in `requirement': undefined method `version_requirements' for #<Gem::Dependency:0xb74bf764> (NoMethodError)
            from ./script/../config/../vendor/rails/railties/lib/initializer.rb:292:in `check_gem_dependencies'
            from ./script/../config/../vendor/rails/railties/lib/initializer.rb:292:in `map'
            from ./script/../config/../vendor/rails/railties/lib/initializer.rb:292:in `check_gem_dependencies'
            from ./script/../config/../vendor/rails/railties/lib/initializer.rb:165:in `process'
            from ./script/../config/../vendor/rails/railties/lib/initializer.rb:112:in `send'
            from ./script/../config/../vendor/rails/railties/lib/initializer.rb:112:in `run'
            from /media/78C0-455B/bidmc/schedule/config/environment.rb:13
            from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:36:in `gem_original_require'
            from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:36:in `require'
            from /media/78C0-455B/bidmc/schedule/vendor/rails/activesupport/lib/active_support/dependencies.rb:153:in `require'
            from /media/78C0-455B/bidmc/schedule/vendor/rails/activesupport/lib/active_support/dependencies.rb:521:in `new_constants_in'
            from /media/78C0-455B/bidmc/schedule/vendor/rails/activesupport/lib/active_support/dependencies.rb:153:in `require'
            from /media/78C0-455B/bidmc/schedule/vendor/rails/railties/lib/commands/servers/webrick.rb:59
            from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:36:in `gem_original_require'
            from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:36:in `require'
            from /media/78C0-455B/bidmc/schedule/vendor/rails/activesupport/lib/active_support/dependencies.rb:153:in `require'
            from /media/78C0-455B/bidmc/schedule/vendor/rails/activesupport/lib/active_support/dependencies.rb:521:in `new_constants_in'
            from /media/78C0-455B/bidmc/schedule/vendor/rails/activesupport/lib/active_support/dependencies.rb:153:in `require'
            from /media/78C0-455B/bidmc/schedule/vendor/rails/railties/lib/commands/server.rb:49
            from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:36:in `gem_original_require'
            from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:36:in `require'
            from script/server:3

    I have the latest versions of ruby, rubygems, and rails. Any suggestions? Thanks.
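
    A hedged note: this looks like the known clash between newer RubyGems (1.5+, where Gem::Dependency#version_requirements was renamed to #requirement) and the vendored Rails 2.3.x initializer. Two commonly suggested workarounds - a sketch, not a verified patch:

        # Option 1: pin RubyGems to a pre-1.5 release
        #   gem update --system 1.4.2

        # Option 2: config/environment.rb, above Rails::Initializer.run
        if Gem::VERSION >= "1.5"
          module Gem
            class Dependency
              # Restore the name the old Rails gem_dependency.rb expects
              def version_requirements
                requirement
              end
            end
          end
        end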

  • How to get database table header information into a CSV file

    - by Rachel
    I am trying to connect to the database, get the current state of a table, and write that information to a CSV file. With the piece of code below I am able to get the data into the CSV file, but not the header information from the database table. So my question is: how can I get the database table header information into a CSV file?

        $config['database'] = 'sakila';
        $config['host'] = 'localhost';
        $config['username'] = 'root';
        $config['password'] = '';

        $d = new PDO('mysql:dbname='.$config['database'].';host='.$config['host'],
                     $config['username'], $config['password']);

        $query = "SELECT * FROM actor";
        $stmt = $d->prepare($query);
        // Execute the statement
        $stmt->execute();
        var_dump($stmt->fetch(PDO::FETCH_ASSOC));

        $data = fopen('file.csv', 'w');
        while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
            echo "Hi";
            // Export every row to a file
            fputcsv($data, $row);
        }

    By header information I mean: for a table like

        Vehicle  Build  Model
        car      2009   Toyota
        jeep     2007   Mahindra

    the header information would be Vehicle, Build, Model. Any guidance would be highly appreciated.
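
    A hedged sketch: with PDO::FETCH_ASSOC the array keys of any fetched row are the column names, so writing array_keys() of the first row yields the header line. (Note the stray var_dump fetch in the original also swallows the first data row; it is dropped below.)

        <?php
        $stmt->execute();
        $data = fopen('file.csv', 'w');
        $first = true;
        while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
            if ($first) {
                fputcsv($data, array_keys($row)); // column names as header row
                $first = false;
            }
            fputcsv($data, $row); // then the data itself
        }
        fclose($data);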

  • Cucumber can't find installed gems

    - by artemave
    environment/cucumber.rb:

        ...
        # gem dependencies
        config.gem 'cucumber-rails', :lib => false, :version => '>=0.3.0' unless File.directory?(File.join(Rails.root, 'vend
        config.gem 'database_cleaner', :lib => false, :version => '>=0.5.0' unless File.directory?(File.join(Rails.root, 'vend
        config.gem 'webrat', :lib => false, :version => '>=0.7.0' unless File.directory?(File.join(Rails.root, 'vend
        config.gem 'spork', :lib => false, :version => '>=0.7.5' unless File.directory?(File.join(Rails.root, 'vend
        config.gem 'factory_girl', :source => 'http://gemcutter.org'
        config.gem 'selenium-client', :lib => false
        config.gem 'Selenium', :lib => false
        config.gem 'rspec', :lib => 'spec'
        config.gem 'rspec-rails', :lib => 'spec/rails'
        config.gem 'test-unit', :lib => false

    Running cucumber gives a missing gems error:

        artem:~/projects/food4feed (master)$ cucumber
        ...
        no such file to load -- test-unit
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:158:in `require'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/rails-2.3.5/lib/rails/gem_dependency.rb:208:in `load'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/rails-2.3.5/lib/initializer.rb:307:in `block in load_gems'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/rails-2.3.5/lib/initializer.rb:307:in `each'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/rails-2.3.5/lib/initializer.rb:307:in `load_gems'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/rails-2.3.5/lib/initializer.rb:169:in `process'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/rails-2.3.5/lib/initializer.rb:113:in `run'
        /home/artem/projects/food4feed/config/environment.rb:9:in `<top (required)>'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
        /home/artem/projects/food4feed/features/support/env.rb:12:in `block in <top (required)>'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/spork-0.8.1/lib/spork.rb:23:in `prefork'
        /home/artem/projects/food4feed/features/support/env.rb:10:in `<top (required)>'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/lib/cucumber/rb_support/rb_language.rb:124:in `load_code_file'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/lib/cucumber/step_mother.rb:85:in `load_code_file'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/lib/cucumber/step_mother.rb:77:in `block in load_code_files'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/lib/cucumber/step_mother.rb:76:in `each'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/lib/cucumber/step_mother.rb:76:in `load_code_files'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/lib/cucumber/cli/main.rb:48:in `execute!'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/lib/cucumber/cli/main.rb:20:in `execute'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/bin/cucumber:8:in `<top (required)>'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/bin/cucumber:19:in `load'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/bin/cucumber:19:in `<main>'
        Missing these required gems:
          selenium-client
          Selenium
          rspec-rails
          test-unit

        You're running:
          ruby 1.9.1.378 at /home/artem/.rvm/rubies/ruby-1.9.1-p378/bin/ruby
          rubygems 1.3.5 at /home/artem/.rvm/gems/ruby-1.9.1-p378, /home/artem/.rvm/gems/ruby-1.9.1-p378%global

    All gems are obviously there:

        artem:~/projects/food4feed (master)$ gem list | egrep "elenium|rspec|test-unit"
        rspec (1.3.0)
        rspec-rails (1.3.2)
        Selenium (1.1.14)
        selenium-client (1.2.18)
        test-unit (2.0.7)

    The even more confusing part is that it only complains about certain gems; factory_girl and rspec don't cause problems. Any idea what is going on?

    My environment: Rails 2.3.5, cucumber (0.6.3), cucumber-rails (0.3.0).
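
    A hedged observation: the flagged gems are largely those whose gem name differs from their require path (the test-unit gem loads as test/unit, selenium-client as selenium/client), which can trip Rails 2.3's config.gem check on Ruby 1.9 even though gem list sees the gems. One commonly suggested workaround is to point :lib at the real require path - a sketch, not a verified fix:

        config.gem 'test-unit',       :lib => 'test/unit'
        config.gem 'selenium-client', :lib => 'selenium/client'

    It is also worth confirming that the cucumber binary and the app resolve to the same RVM gemset, since the output above mixes the per-ruby and %global paths.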

  • Built-in GZip/Deflate Compression on IIS 7.x

    - by Rick Strahl
    IIS 7 improves internal compression functionality dramatically, making it much easier than previous versions to take advantage of compression that's built into the Web server. IIS 7 also supports dynamic compression, which allows automatic compression of content created in your own applications (ASP.NET or otherwise!). The scheme is based on content-type sniffing, so it works with any kind of Web application framework.

    While static compression on IIS 7 is super easy to set up and turned on by default for most text content (text/*, which includes HTML and CSS, as well as for JavaScript, Atom, XAML, XML), setting up dynamic compression is a bit more involved, mostly because the various default compression settings are set in multiple places down the IIS -> ASP.NET hierarchy. Let's take a look at each of the two approaches available:

    Static Compression: compresses static content from the hard disk. IIS can cache this content by compressing the file once, storing the compressed file on disk, and serving the compressed alias whenever static content is requested and it hasn't changed. The overhead for this is minimal and it should be aggressively enabled.

    Dynamic Compression: works against application-generated output from applications like your ASP.NET apps. Unlike static content, dynamic content must be compressed every time a page that requests it regenerates its content. As such, dynamic compression has a much bigger impact than static caching.

    How Compression is configured
    Compression in IIS 7.x is configured with two .config file elements in the <system.webServer> space. The elements can be set anywhere in the IIS/ASP.NET configuration pipeline, all the way from ApplicationHost.config down to the local web.config file. The following is from the default setting in ApplicationHost.config (in the %windir%\System32\inetsrv\config folder) on IIS 7.5, with a couple of small adjustments (added json output and enabled dynamic compression):

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files">
              <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />
              <dynamicTypes>
                <add mimeType="text/*" enabled="true" />
                <add mimeType="message/*" enabled="true" />
                <add mimeType="application/x-javascript" enabled="true" />
                <add mimeType="application/json" enabled="true" />
                <add mimeType="*/*" enabled="false" />
              </dynamicTypes>
              <staticTypes>
                <add mimeType="text/*" enabled="true" />
                <add mimeType="message/*" enabled="true" />
                <add mimeType="application/x-javascript" enabled="true" />
                <add mimeType="application/atom+xml" enabled="true" />
                <add mimeType="application/xaml+xml" enabled="true" />
                <add mimeType="*/*" enabled="false" />
              </staticTypes>
            </httpCompression>
            <urlCompression doStaticCompression="true" doDynamicCompression="true" />
          </system.webServer>
        </configuration>

    You can find documentation on the httpCompression and urlCompression keys here, respectively:
    http://msdn.microsoft.com/en-us/library/ms690689%28v=vs.90%29.aspx
    http://msdn.microsoft.com/en-us/library/aa347437%28v=vs.90%29.aspx

    The httpCompression Element - What and How to compress
    Basically httpCompression configures what types to compress and how to compress them. It specifies the DLL that handles gzip encoding and the types of documents that are to be compressed. Types are set up based on mime types, which looks at returned Content-Type headers in HTTP responses. For example, I added the application/json mime type to my dynamic compression types above to allow that content to be compressed as well, since I have quite a bit of AJAX content that gets sent to the client.

    The urlCompression Element - Enables and Disables Compression
    The urlCompression element is a quick way to turn compression on and off. By default static compression is enabled server wide, and dynamic compression is disabled server wide. This might be a bit confusing because the httpCompression element also has a doDynamicCompression attribute which is set to true by default, but the urlCompression attribute by the same name actually overrides it. The urlCompression element only has three attributes: doStaticCompression, doDynamicCompression and dynamicCompressionBeforeCache. The doCompression attributes are the final determining factor for whether compression is enabled, so it's a good idea to be explicit! The default is doDynamicCompression="false", but doStaticCompression="true"!

    Static Compression is enabled by Default, Dynamic Compression is not
    Because static compression is very efficient in IIS 7 it's enabled by default server wide, and there probably is no reason to ever change that setting. Dynamic compression however, since it's more resource intensive, is turned off by default. If you want to enable dynamic compression there are a few quirks you have to deal with, namely that enabling it in ApplicationHost.config doesn't work. Setting:

        <urlCompression doDynamicCompression="true" />

    in ApplicationHost.config appears to have no effect, and I had to move this element into my local web.config to make dynamic compression work. This is actually a smart choice because you're not likely to want dynamic compression in every application on a server. Rather, dynamic compression should be applied selectively where it makes sense. However, nowhere is it documented that the setting in ApplicationHost.config doesn't work (or more likely is overridden somewhere and disabled lower in the configuration hierarchy). So: remember to set doDynamicCompression="true" in web.config!!!

    How Static Compression works
    Static compression works against static content loaded from files on disk. Because this content is static and not bound to change frequently - such as .js, .css and static HTML content - it's fairly easy for IIS to compress and then cache the compressed content. The way this works is that IIS compresses the files into a special folder on the server's hard disk and then reads the content from this location if already-compressed content is requested and the underlying file resource has not changed. The semantics of serving an already compressed file are very efficient - IIS still checks for file changes, but otherwise just serves the already compressed file from the compression folder. The compression folder is located at:

        %windir%\inetpub\temp\IIS Temporary Compressed Files\ApplicationPool\

    If you look into the subfolders you'll find the pre-compressed files, which IIS serves directly to the client until the underlying files are changed. As I mentioned before - static compression is on by default and there's very little reason to turn that functionality off, as it is efficient and just works out of the box. The one tweak you might want to make is to set the compression level to maximum. Since IIS only compresses content very infrequently it would make sense to apply maximum compression. You can do this with the staticCompressionLevel setting on the scheme element:

        <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />

    Other than that, the default settings are probably just fine.

    Dynamic Compression - not so fast!
    By default dynamic compression is disabled, and that's actually quite sensible - you should use dynamic compression very carefully and think about what content you want to compress. In most applications it wouldn't make sense to compress *all* generated content, as it would generate a significant amount of overhead. Scott Forsyth has a great post that details some of the performance numbers and how much impact dynamic compression has. Depending on how busy your server is you can play around with compression and see what impact it has on your server's performance.

    There are also a few settings you can tweak to minimize the overhead of dynamic compression. Specifically the httpCompression key has a couple of CPU related keys that can help minimize the impact of dynamic compression on a busy server:

    - dynamicCompressionDisableCpuUsage
    - dynamicCompressionEnableCpuUsage

    By default these are set to 90 and 50, which means that when the CPU hits 90% compression will be disabled until CPU utilization drops back down to 50%. Again this is actually quite sensible, as it utilizes CPU power for compression when available and falls off when the threshold has been hit. It's a good way to use some of that extra CPU power on your big servers when utilization is low. Again, these settings are something you likely have to play with. I would probably set the upper limit a little lower than 90%, maybe around 70%, to make this a feature that kicks in only if there's lots of power to spare. I'm not really sure how accurate the CPU readings that IIS uses are, as CPU usage on Web servers can spike drastically even during low loads. Don't trust settings - do some load testing or monitor your server in a live environment to see what values make sense for your environment.

    Finally, for dynamic compression I tend to add one mime type for JSON data, since a lot of my applications send large chunks of JSON data over the wire. You can do that with the application/json content type:

        <add mimeType="application/json" enabled="true" />

    What about Deflate Compression?
    The default compression is GZip. The documentation hints that you can use a different compression scheme and mentions Deflate compression. And sure enough, you can change the compression settings to:

        <scheme name="deflate" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />

    to get deflate-style compression. The deflate algorithm produces slightly more compact output, so I tend to prefer it over GZip, but more HTTP clients (other than browsers) support GZip than Deflate, so be careful with this option if you build Web APIs. I also had some issues with the above value actually being applied right away. Changing the scheme in ApplicationHost.config didn't show up on the site right away; it required a full IISReset before I saw the change over to deflate-compressed content. Content was slightly more compressed with deflate - not sure if it's worth the slightly less common compression type, but the option at least is available.

    IIS 7 finally makes GZip Easy
    In summary, IIS 7 finally makes GZip easy, even if the configuration settings are a bit obtuse and the documentation is seriously lacking. But once you know the basic settings I've described here, and the fact that you can override all of this in your local web.config, it's pretty straightforward to configure GZip support and tweak it exactly to your needs. Static compression is a total no-brainer, as it adds very little overhead compared to direct static file serving and provides solid compression. Dynamic compression is a little more tricky, as it does add some overhead to servers, so it probably will require some tweaking to get the right balance of CPU load vs. compression ratios. Looking at large sites like Amazon, Yahoo, NewEgg etc. - they all use GZip compression.

    © Rick Strahl, West Wind Technologies, 2005-2011
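
    For quick reference, a consolidated local web.config sketch pulling together the settings the article recommends (the explicit flags and the json mime type mirror the snippets above; if the httpCompression section is locked on your server, the mime type addition belongs in ApplicationHost.config instead):

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <!-- per the article, the dynamic flag only takes effect here,
                 not in ApplicationHost.config -->
            <urlCompression doStaticCompression="true" doDynamicCompression="true" />
            <httpCompression>
              <dynamicTypes>
                <!-- compress AJAX/JSON responses as well -->
                <add mimeType="application/json" enabled="true" />
              </dynamicTypes>
            </httpCompression>
          </system.webServer>
        </configuration>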

  • Dynamically loading Assemblies to reduce Runtime Dependencies

    - by Rick Strahl
    I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy to use code-first approach to configuration: you create a class that holds the configuration data and inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is stored. Currently the library supports .NET Configuration stores (web.config/app.config), XML files, SQL records and string storage.

    About once a week somebody asks me about JSON support, and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, with JSON using JSON delimiters for properties and property formatting rules. Sure, JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in.

    Hard Link Issues in a Component Library
    Another reason I've been hesitant is that I really didn't want to pull a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere, I don't want a user to have to take a hard dependency on JSON.NET unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to V4.4 in your project but the host application has a reference to version 4.5, you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change. So hard linking the DLL can be problematic for a number of reasons, but the primary reason is to not force loading of JSON.NET unless you actually need it when you use the JSON configuration features of the library.

    Enter Dynamic Loading
    So rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access the various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future. But there are also a couple of downsides:

    - No assembly reference means only dynamic access - no compiler type checking or Intellisense
    - The host application must have a reference to JSON.NET or else you get runtime errors

    The former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with it. If you want to use JSON configuration settings, JSON.NET needs to be loaded in the project. If this is a Web project, it'll likely be there already.

    So there are a few things that are needed to make this work:

    - Dynamically create an instance and optionally attempt to load an Assembly (if not loaded)
    - Load types into dynamic variables
    - Use Reflection for a few tasks like statics/enums

    The dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is, it doesn't handle all aspects of Reflection. Specifically, it doesn't deal with object activation, truly dynamic (string based) member activation, or accessing of non-instance members, so there's still a little bit of work left to do with Reflection.

    Dynamic Object Instantiation
    The first step in getting the process rolling is to instantiate the type you need to work with. This might be a two step process - loading the instance from a string value, since we don't have a hard type reference, and potentially having to load the assembly. Although the host project might have a reference to JSON.NET, that assembly might not have been loaded yet since it hasn't been accessed yet. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable projects, assemblies are just-in-time loaded only when they are accessed. Instantiating a type is a two step process: finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this:

        /// <summary>
        /// Creates an instance of a type based on a string. Assumes that the type's
        /// </summary>
        /// <param name="typeName">Common name of the type</param>
        /// <param name="args">Any constructor parameters</param>
        /// <returns></returns>
        public static object CreateInstanceFromString(string typeName, params object[] args)
        {
            object instance = null;
            Type type = null;
            try
            {
                type = GetTypeFromName(typeName);
                if (type == null)
                    return null;
                instance = Activator.CreateInstance(type, args);
            }
            catch
            {
                return null;
            }
            return instance;
        }

        /// <summary>
        /// Helper routine that looks up a type name and tries to retrieve the
        /// full type reference in the actively executing assemblies.
        /// </summary>
        /// <param name="typeName"></param>
        /// <returns></returns>
        public static Type GetTypeFromName(string typeName)
        {
            Type type = null;

            // Let default name binding find it
            type = Type.GetType(typeName, false);
            if (type != null)
                return type;

            // look through assembly list
            var assemblies = AppDomain.CurrentDomain.GetAssemblies();

            // try to find manually
            foreach (Assembly asm in assemblies)
            {
                type = asm.GetType(typeName, false);
                if (type != null)
                    break;
            }
            return type;
        }

    To use this for loading JSON.NET I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading up the assembly when it fails, since that's the main reason the load would fail. Finally it also caches the loaded instance for reuse (according to James the JSON.NET instance is thread safe and quite a bit faster when cached). Here's what the factory function looks like in JsonSerializationUtils:

        /// <summary>
        /// Dynamically creates an instance of JSON.NET
        /// </summary>
        /// <param name="throwExceptions">If true throws exceptions otherwise returns null</param>
        /// <returns>Dynamic JsonSerializer instance</returns>
        public static dynamic CreateJsonNet(bool throwExceptions = true)
        {
            if (JsonNet != null)
                return JsonNet;

            lock (SyncLock)
            {
                if (JsonNet != null)
                    return JsonNet;

                // Try to create instance
                dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");

                if (json == null)
                {
                    try
                    {
                        var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json");
                        json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");
                    }
                    catch (Exception ex)
                    {
                        if (throwExceptions)
                            throw;
                        return null;
                    }
                }

                if (json == null)
                    return null;

                json.ReferenceLoopHandling =
                    (dynamic) ReflectionUtils.GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore");

                // Enums as strings in JSON
                dynamic enumConverter = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.Converters.StringEnumConverter");
                json.Converters.Add(enumConverter);

                JsonNet = json;
            }

            return JsonNet;
        }

    This code's purpose is to return a fully configured JsonSerializer instance. As you can see, the code tries to create an instance and when it fails tries to load the assembly, and then re-tries loading.

    Once the instance is loaded some configuration occurs on it. Specifically, I set the ReferenceLoopHandling option to not blow up immediately when circular references are encountered. There are a host of other small config settings that might be useful to set, but the defaults seem to be good enough in recent versions. Note that I'm setting ReferenceLoopHandling, which requires an Enum value to be set. There's no real easy way (short of using the cardinal numeric value) to set a property or pass parameters from static values or enums. This means I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me. The function looks up the type and then uses Type.InvokeMember() to read the static property.

    Another feature I need is to have Enum values serialized as strings rather than numeric values, which is the default. To do this I can use the StringEnumConverter to convert enums to strings by adding it to the Converters collection. As you can see, there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless.

    Doing the actual JSON Conversion
    Finally I need to actually do my JSON conversions. For the Utility class I need serialization that works for both strings and files, so I created four methods that handle these tasks, two each for serialization and deserialization for string and file. Here's what the file serialization looks like:

        /// <summary>
        /// Serializes an object instance to a JSON file.
        /// </summary>
        /// <param name="value">the value to serialize</param>
        /// <param name="fileName">Full path to the file to write out with JSON.</param>
        /// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param>
        /// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param>
        /// <returns>true or false</returns>
        public static bool SerializeToFile(object value, string fileName,
                                           bool throwExceptions = false,
                                           bool formatJsonOutput = false)
        {
            dynamic writer = null;
            FileStream fs = null;
            try
            {
                Type type = value.GetType();
                var json = CreateJsonNet(throwExceptions);
                if (json == null)
                    return false;

                fs = new FileStream(fileName, FileMode.Create);
                var sw = new StreamWriter(fs, Encoding.UTF8);

                writer = Activator.CreateInstance(JsonTextWriterType, sw);

                if (formatJsonOutput)
                    writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented");

                writer.QuoteChar = '"';
                json.Serialize(writer, value);
            }
            catch (Exception ex)
            {
                Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message);
                if (throwExceptions)
                    throw;
                return false;
            }
            finally
            {
                if (writer != null)
                    writer.Close();
                if (fs != null)
                    fs.Close();
            }
            return true;
        }

    You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier, which returns a dynamic. I then create a JsonTextWriter and configure a couple of enum settings on it, and then call Serialize() on the serializer instance with the JsonTextWriter that writes the output to disk. Although this code is dynamic, it's still fairly short and readable.

    For full circle operation, here's the DeserializeFromFile() version:

        /// <summary>
        /// Deserializes an object from file and returns a reference.
        /// </summary>
        /// <param name="fileName">name of the file to serialize to</param>
        /// <param name="objectType">The Type of the object. Use typeof(yourobject class)</param>
        /// <param name="binarySerialization">determines whether we use Xml or Binary serialization</param>
        /// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param>
        /// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns>
        public static object DeserializeFromFile(string fileName, Type objectType, bool throwExceptions = false)
        {
            dynamic json = CreateJsonNet(throwExceptions);
            if (json == null)
                return null;

            object result = null;
            dynamic reader = null;
            FileStream fs = null;
            try
            {
                fs = new FileStream(fileName, FileMode.Open, FileAccess.Read);
                var sr = new StreamReader(fs, Encoding.UTF8);
                reader = Activator.CreateInstance(JsonTextReaderType, sr);
                result = json.Deserialize(reader, objectType);
                reader.Close();
            }
            catch (Exception ex)
            {
                Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message);
                if (throwExceptions)
                    throw;
                return null;
            }
            finally
            {
                if (reader != null)
                    reader.Close();
                if (fs != null)
                    fs.Close();
            }
            return result;
        }

    This code is a little more compact since there are no prettifying options to set. Here JsonTextReader is created dynamically, and it receives the output from the Deserialize() operation on the serializer. You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations, but the string operations are very similar - the code is fairly repetitive. These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded.

    Using the JsonSerializationUtils Wrapper
    The final consumer of the SerializationUtils wrapper is an actual ConfigurationProvider that is responsible for handling reading and writing JSON values to and from files. The provider is simply a small wrapper around the SerializationUtils component, and there's very little code to make this work now. The whole provider looks like this:

        /// <summary>
        /// Reads and Writes configuration settings in .NET config files and
        /// sections. Allows reading and writing to default or external files
        /// and specification of the configuration section that settings are
        /// applied to.
        /// </summary>
        public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration>
            where TAppConfiguration : AppConfiguration, new()
        {
            /// <summary>
            /// Optional - the Configuration file where configuration settings are
            /// stored in. If not specified uses the default Configuration Manager
            /// and its default store.
            /// </summary>
            public string JsonConfigurationFile
            {
                get { return _JsonConfigurationFile; }
                set { _JsonConfigurationFile = value; }
            }
            private string _JsonConfigurationFile = string.Empty;

            public override bool Read(AppConfiguration config)
            {
                var newConfig = JsonSerializationUtils.DeserializeFromFile(
                                    JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration;
                if (newConfig == null)
                {
                    if (Write(config))
                        return true;
                    return false;
                }

                DecryptFields(newConfig);
                DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage");
                return true;
            }

            /// <summary>
            /// Return
            /// </summary>
            /// <typeparam name="TAppConfig"></typeparam>
            /// <returns></returns>
            public override TAppConfig Read<TAppConfig>()
            {
                var result = JsonSerializationUtils.DeserializeFromFile(
                                 JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig;
                if (result != null)
                    DecryptFields(result);
                return result;
            }

            /// <summary>
            /// Write configuration to XmlConfigurationFile location
            /// </summary>
            /// <param name="config"></param>
            /// <returns></returns>
            public override bool Write(AppConfiguration config)
            {
                EncryptFields(config);
                bool result = JsonSerializationUtils.SerializeToFile(
                                  config, JsonConfigurationFile, false, true);
                // Have to decrypt again to make sure the properties are readable afterwards
                DecryptFields(config);
                return result;
            }
        }

    This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component. Simply implementing 3 methods will do in most cases. Note this code doesn't have any dynamic dependencies - all that's abstracted away in JsonSerializationUtils(). From here on, serializing JSON is just a matter of calling the static methods on the SerializationUtils class.

    Already there are several other places in some other tools where I use JSON serialization, and this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use, replacing quite a bit of code that was previously in use. And for any other manual JSON operations (in a couple of apps I use JSON serialization for 'blob' like document storage) this is also going to be handy.

    Performance?
    Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right… In performing some informal testing it looks like the performance of the native code is nearly twice as fast as the dynamic code. Most of the slowness is attributable to type lookups. To test, I created a native class that uses an actual reference to JSON.NET, and performance was consistently around 85-90% faster with the referenced code. That being said though - I serialized 10,000 objects in 80ms vs. 45ms, so this is hardly slouchy. For the configuration component speed is not that important, because both read and write operations typically happen once on first access and then every once in a while. But for other operations - say a serializer trying to handle AJAX requests on a Web server - one would be well served to create a hard dependency.

    Dynamic Loading - Worth it?
    On occasion dynamic loading makes sense. But there's a price to be paid in added code complexity and a performance hit. For some operations that are not pivotal to a component or application and only used under certain circumstances, dynamic loading can be beneficial to avoid having to ship extra files and loading down distributions. These days, when you create new projects in Visual Studio with 30 assemblies before you even add your own code, trying to keep file counts under control seems a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful tool. Hopefully some of you find this information useful…

    © Rick Strahl, West Wind Technologies, 2005-2013
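
    For reference, a hedged usage sketch of the two utility methods shown above; CustomerConfig is a hypothetical POCO, and the parameter order follows the signatures in the listings:

        // hypothetical settings class
        public class CustomerConfig
        {
            public string Name { get; set; }
            public int Timeout { get; set; }
        }

        // write: pretty-printed JSON, swallow errors (returns false instead)
        var cfg = new CustomerConfig { Name = "West Wind", Timeout = 30 };
        bool ok = JsonSerializationUtils.SerializeToFile(cfg, "customer.json", false, true);

        // read: returns object, cast back to the concrete type
        var loaded = JsonSerializationUtils.DeserializeFromFile(
                         "customer.json", typeof(CustomerConfig)) as CustomerConfig;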

    Read the article

  • Dynamically loading Assemblies to reduce Runtime Dependencies

    - by Rick Strahl
    I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy to use code-first approach to configuration: You create a class that holds the configuration data that inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is store. Currently the library supports .NET Configuration stores (web.config/app.config), XML files, SQL records and string storage.About once a week somebody asks me about JSON support and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, with JSON using JSON delimiters for properties and property formatting rules. Sure JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in.Hard Link Issues in a Component LibraryAnother reason I've been hesitant is that I really didn't want to pull in a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere I don't want a user to have to require a hard dependency on JSON.NET unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to V4.4 in your project but the host application has a reference to version 4.5 you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change.  So hard linking the DLL can be problematic for a number reasons, but the primary reason is to not force loading of JSON.NET unless you actually need it when you use the JSON configuration features of the library.Enter Dynamic LoadingSo rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future.But there are also a couple of downsides:No assembly reference means only dynamic access - no compiler type checking or IntellisenseRequirement for the host application to have reference to JSON.NET or else get runtime errorsThe former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with this. If you want to use JSON configuration settings JSON.NET needs to be loaded in the project. If this is a Web project, it'll likely be there already.So there are a few things that are needed to make this work:Dynamically create an instance and optionally attempt to load an Assembly (if not loaded)Load types into dynamic variablesUse Reflection for a few tasks like statics/enumsThe dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is it doesn't handle all aspects of Reflection. 
Specifically it doesn't deal with object activation, truly dynamic (string based) member activation or accessing of non instance members, so there's still a little bit of work left to do with Reflection.Dynamic Object InstantiationThe first step in getting the process rolling is to instantiate the type you need to work with. This might be a two step process - loading the instance from a string value, since we don't have a hard type reference and potentially having to load the assembly. Although the host project might have a reference to JSON.NET, that instance might have not been loaded yet since it hasn't been accessed yet. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable project, assemblies are just in time loaded only when they are accessed.Instantiating a type is a two step process: Finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this:/// <summary> /// Creates an instance of a type based on a string. Assumes that the type's /// </summary> /// <param name="typeName">Common name of the type</param> /// <param name="args">Any constructor parameters</param> /// <returns></returns> public static object CreateInstanceFromString(string typeName, params object[] args) { object instance = null; Type type = null; try { type = GetTypeFromName(typeName); if (type == null) return null; instance = Activator.CreateInstance(type, args); } catch { return null; } return instance; } /// <summary> /// Helper routine that looks up a type name and tries to retrieve the /// full type reference in the actively executing assemblies. /// </summary> /// <param name="typeName"></param> /// <returns></returns> public static Type GetTypeFromName(string typeName) { Type type = null; // Let default name binding find it type = Type.GetType(typeName, false); if (type != null) return type; // look through assembly list var assemblies = AppDomain.CurrentDomain.GetAssemblies(); // try to find manually foreach (Assembly asm in assemblies) { type = asm.GetType(typeName, false); if (type != null) break; } return type; } To use this for loading JSON.NET I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading up the assembly when it fails since that's the main reason the load would fail. Finally it also caches the loaded instance for reuse (according to James the JSON.NET instance is thread safe and quite a bit faster when cached). 
Here's what the factory function looks like in JsonSerializationUtils:/// <summary> /// Dynamically creates an instance of JSON.NET /// </summary> /// <param name="throwExceptions">If true throws exceptions otherwise returns null</param> /// <returns>Dynamic JsonSerializer instance</returns> public static dynamic CreateJsonNet(bool throwExceptions = true) { if (JsonNet != null) return JsonNet; lock (SyncLock) { if (JsonNet != null) return JsonNet; // Try to create instance dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); if (json == null) { try { var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json"); json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); } catch (Exception ex) { if (throwExceptions) throw; return null; } } if (json == null) return null; json.ReferenceLoopHandling = (dynamic) ReflectionUtils.GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore"); // Enums as strings in JSON dynamic enumConverter = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.Converters.StringEnumConverter"); json.Converters.Add(enumConverter); JsonNet = json; } return JsonNet; }This code's purpose is to return a fully configured JsonSerializer instance. As you can see the code tries to create an instance and when it fails tries to load the assembly, and then re-tries loading.Once the instance is loaded some configuration occurs on it. Specifically I set the ReferenceLoopHandling option to not blow up immediately when circular references are encountered. There are a host of other small config setting that might be useful to set, but the default seem to be good enough in recent versions. Note that I'm setting ReferenceLoopHandling which requires an Enum value to be set. There's no real easy way (short of using the cardinal numeric value) to set a property or pass parameters from static values or enums. This means I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me. The function looks up the type and then uses Type.InvokeMember() to read the static property.Another feature I need is have Enum values serialized as strings rather than numeric values which is the default. To do this I can use the StringEnumConverter to convert enums to strings by adding it to the Converters collection.As you can see there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless.Doing the actual JSON ConversionFinally I need to actually do my JSON conversions. For the Utility class I need serialization that works for both strings and files so I created four methods that handle these tasks two each for serialization and deserialization for string and file.Here's what the File Serialization looks like:/// <summary> /// Serializes an object instance to a JSON file. 
Doing the actual JSON Conversion

Finally I need to actually do my JSON conversions. For the Utility class I need serialization that works for both strings and files, so I created four methods that handle these tasks: two each for serialization and deserialization, one for strings and one for files.

Here's what the file serialization looks like:

    /// <summary>
    /// Serializes an object instance to a JSON file.
    /// </summary>
    /// <param name="value">the value to serialize</param>
    /// <param name="fileName">Full path to the file to write out with JSON.</param>
    /// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param>
    /// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param>
    /// <returns>true or false</returns>
    public static bool SerializeToFile(object value, string fileName,
                                       bool throwExceptions = false, bool formatJsonOutput = false)
    {
        dynamic writer = null;
        FileStream fs = null;
        try
        {
            Type type = value.GetType();

            var json = CreateJsonNet(throwExceptions);
            if (json == null)
                return false;

            fs = new FileStream(fileName, FileMode.Create);
            var sw = new StreamWriter(fs, Encoding.UTF8);

            writer = Activator.CreateInstance(JsonTextWriterType, sw);

            if (formatJsonOutput)
                writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented");

            writer.QuoteChar = '"';
            json.Serialize(writer, value);
        }
        catch (Exception ex)
        {
            Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message);
            if (throwExceptions)
                throw;
            return false;
        }
        finally
        {
            if (writer != null)
                writer.Close();
            if (fs != null)
                fs.Close();
        }

        return true;
    }

You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier, which returns a dynamic. I then create a JsonTextWriter, configure a couple of enum settings on it, and then call Serialize() on the serializer instance with the JsonTextWriter that writes the output to disk. Although this code is dynamic it's still fairly short and readable.

For full-circle operation here's the DeserializeFromFile() version:

    /// <summary>
    /// Deserializes an object from file and returns a reference.
    /// </summary>
    /// <param name="fileName">name of the file to deserialize from</param>
    /// <param name="objectType">The Type of the object. Use typeof(yourobject class)</param>
    /// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param>
    /// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns>
    public static object DeserializeFromFile(string fileName, Type objectType, bool throwExceptions = false)
    {
        dynamic json = CreateJsonNet(throwExceptions);
        if (json == null)
            return null;

        object result = null;
        dynamic reader = null;
        FileStream fs = null;
        try
        {
            fs = new FileStream(fileName, FileMode.Open, FileAccess.Read);
            var sr = new StreamReader(fs, Encoding.UTF8);
            reader = Activator.CreateInstance(JsonTextReaderType, sr);
            result = json.Deserialize(reader, objectType);
            reader.Close();
        }
        catch (Exception ex)
        {
            Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message);
            if (throwExceptions)
                throw;
            return null;
        }
        finally
        {
            if (reader != null)
                reader.Close();
            if (fs != null)
                fs.Close();
        }

        return result;
    }

This code is a little more compact since there are no prettifying options to set. Here the JsonTextReader is created dynamically, and it receives the output from the Deserialize() operation on the serializer.
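As a quick usage sketch of the two file methods - my own example; the Person class and file name are hypothetical:

    // Hypothetical POCO used only for illustration
    public class Person
    {
        public string Name { get; set; }
        public DateTime Entered { get; set; }
    }

    // Write formatted JSON to disk, then read it back
    var person = new Person { Name = "Rick", Entered = DateTime.Now };

    bool ok = JsonSerializationUtils.SerializeToFile(person, @".\person.json",
                                                     throwExceptions: false,
                                                     formatJsonOutput: true);

    // DeserializeFromFile returns object, so cast back to the target type
    var loaded = JsonSerializationUtils.DeserializeFromFile(@".\person.json",
                                                            typeof(Person)) as Person;
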
You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations; the string operations are very similar and the code is fairly repetitive.

These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded.

Using the JsonSerializationUtils Wrapper

The final consumer of the SerializationUtils wrapper is an actual ConfigurationProvider that is responsible for reading and writing JSON values to and from files. The provider is simply a small wrapper around the SerializationUtils component, and there's very little code needed to make this work now. The whole provider looks like this:

    /// <summary>
    /// Reads and Writes configuration settings in .NET config files and
    /// sections. Allows reading and writing to default or external files
    /// and specification of the configuration section that settings are
    /// applied to.
    /// </summary>
    public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration>
        where TAppConfiguration : AppConfiguration, new()
    {
        /// <summary>
        /// Optional - the Configuration file where configuration settings are
        /// stored in. If not specified uses the default Configuration Manager
        /// and its default store.
        /// </summary>
        public string JsonConfigurationFile
        {
            get { return _JsonConfigurationFile; }
            set { _JsonConfigurationFile = value; }
        }
        private string _JsonConfigurationFile = string.Empty;

        public override bool Read(AppConfiguration config)
        {
            var newConfig = JsonSerializationUtils.DeserializeFromFile(
                                JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration;
            if (newConfig == null)
            {
                if (Write(config))
                    return true;
                return false;
            }

            DecryptFields(newConfig);
            DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage");
            return true;
        }

        /// <summary>
        /// Return
        /// </summary>
        /// <typeparam name="TAppConfig"></typeparam>
        /// <returns></returns>
        public override TAppConfig Read<TAppConfig>()
        {
            var result = JsonSerializationUtils.DeserializeFromFile(
                             JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig;
            if (result != null)
                DecryptFields(result);

            return result;
        }

        /// <summary>
        /// Write configuration to XmlConfigurationFile location
        /// </summary>
        /// <param name="config"></param>
        /// <returns></returns>
        public override bool Write(AppConfiguration config)
        {
            EncryptFields(config);

            bool result = JsonSerializationUtils.SerializeToFile(config, JsonConfigurationFile, false, true);

            // Have to decrypt again to make sure the properties are readable afterwards
            DecryptFields(config);

            return result;
        }
    }

This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component: simply implementing three methods will do in most cases.
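Here's a hypothetical sketch of what consuming such a provider could look like. The MyAppConfiguration class and file name are made up for illustration, and only the Read()/Write() members shown above are used:

    // Hypothetical settings class - any POCO deriving from AppConfiguration
    // with a parameterless constructor satisfies the generic constraint
    public class MyAppConfiguration : AppConfiguration
    {
        public string ApplicationName { get; set; }
        public int MaxItems { get; set; }
    }

    // Wire up the JSON file provider and read settings from disk.
    // Per the Read() implementation above, a missing file is written
    // out with the current (default) values.
    var provider = new JsonFileConfigurationProvider<MyAppConfiguration>
    {
        JsonConfigurationFile = @".\myapp-config.json"
    };

    var config = new MyAppConfiguration();
    provider.Read(config);
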
Note this code doesn't have any dynamic dependencies - all of that's abstracted away in JsonSerializationUtils. From here on, serializing JSON is just a matter of calling the static methods on the SerializationUtils class.

Already, there are several other places in some other tools where I use JSON serialization where this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use, replacing quite a bit of code that was previously in use. And for any other manual JSON operations (in a couple of apps I use JSON serialization for 'blob'-like document storage) this is also going to be handy.

Performance?

Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right… In some informal testing it looks like the native code is nearly twice as fast as the dynamic code. Most of the slowness is attributable to type lookups. To test, I created a native class that uses an actual reference to JSON.NET, and performance was consistently around 85-90% faster with the referenced code. This will change though depending on the size of the objects serialized: the larger the object, the more processing time is spent inside the actual dynamically activated components and the less difference there will be. Dynamic code is always slower, but how much it really affects your application primarily depends on how frequently the dynamic code is called in relation to the non-dynamic code executing. In most situations where dynamic code is used 'to get the process rolling' as I do here, the overhead is small enough not to matter.

All that being said though - I serialized 10,000 objects in 80ms vs. 45ms, so this is hardly slouchy performance. For the configuration component speed is not that important, because both read and write operations typically happen once on first access and then only every once in a while. But for other operations - say a serializer trying to handle AJAX requests on a Web Server - one would be well served to create a hard dependency.

Dynamic Loading - Worth it?

Dynamic loading is not something you need very often, but on occasion it makes sense. There's a price to be paid in added code, and a performance hit that depends on how frequently the dynamic code is accessed. But for some operations that are not pivotal to a component or application and are only used under certain circumstances, dynamic loading can be beneficial: it avoids having to ship extra files, adding dependencies, and loading down distributions. These days, when new projects in Visual Studio start out with 30 assemblies before you even add your own code, trying to keep file counts under control seems like a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful option in your toolset…

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in .NET  C#

    Read the article

  • Spurious alleged file corruption on Windows 7

    - by Johannes Rössel
    Recently my laptop sometimes warns about corrupted files on the hard drive (Samsung SSD PB22-JS3 TM). This has only happened so far when updating (or checking out) an SVN repository with either TortoiseSVN or the command-line Subversion client. The fun thing is that the corrupted file has always been a .svn directory (although the directory entry may contain files in that directory too, if they're small enough, which should be the case with SVN). However, when looking into the warned-about directory I notice nothing strange or unusual and don't get any more warnings about it, and another try at updating the working copy works (SVN stops updating once that error occurs; TortoiseSVN even shows an appropriate error message) - well, mostly; sometimes it happens again, albeit with a different directory. Since the laptop is only a few months old I doubt the SSD is failing already; five months of normal usage shouldn't be too surprising. Also, it has (so far) occurred only with SVN updates on a large repository. Maybe that's too many writes in a short time, and some part between the software and the hardware doesn't quite catch up fast enough? I don't know enough about this to make an informed guess here. Does anyone know what's up here? ETA: I've run chkdsk (it seems to schedule itself anyway when this happens) and it didn't find anything out of the ordinary.

    Read the article

  • Subversion problem, repo has moved

    - by Rudiger
    Hi, I've set up Subversion on a fresh CentOS install. The web view works fine, gives no errors, and requests a password, but when I try to access the repository through an svn client (Xcode) it gives error 175011 (Repository has been moved). I've tried some of the solutions out there but with no success. My subversion.conf:

        <Location /repos>
           DAV svn
           SVNParentPath /var/www/html/repos
           # Limit write permission to list of valid users.
           # Require SSL connection for password protection.
           SSLRequireSSL
           AuthType Basic
           AuthName "Authorization Realm"
           AuthUserFile /etc/svn-auth-conf
           Require valid-user
        </Location>

    My Apache DocumentRoot is /var/www/html. I've only set up one svn repository so far, so there shouldn't be any conflicts there. If you need any more info let me know. Thanks

    Read the article

  • Apache - Include conf Files Relative to ServerConfigFile (-f arg)

    - by Synetech inc.
    Hi, I want to use the -f command-line option for the Apache server so that I can store the conf files in a separate place (a data directory) from the server binaries. The problem is that I use the Include directive to separate and organize the configurations, but when I use a directive like Include "addons/SVN.conf", it fails because Apache looks for addons/SVN.conf relative to the ServerRoot directory instead of the ServerConfigFile directory. I can work around this by using absolute paths (e.g. Include "e:\foo\bar\baz\Apache\conf\addons\svn.conf"), but I don't like that, since it means I would have to change each and every Include directive if I move the conf folder, as opposed to simply changing the -f option. Does anyone know of a way to get the Include directive to work relative to the conf file that Apache is passed? I tried Include "./addons/SVN.conf", but that too was relative to the ServerRoot. This forced relative-to-ServerRoot Include behavior kind of defeats the whole purpose of specifying an alternate config file to the one in ServerRoot/conf. Thanks.

    Read the article

  • LDAP installed, running, but can't connect remotely [Ubuntu 10.10]

    - by Casey Jordan
    Hi all, I installed LDAP on my Ubuntu 10.10 system, using the tutorial found here: https://help.ubuntu.com/10.10/serverguide/C/openldap-server.html Everything seems to be working well; when logged into the server via ssh I can run commands like:

        > ldapsearch -xLLL -b "dc=easydita,dc=com" uid=john sn givenName cn
        dn: uid=john,ou=people,dc=easydita,dc=com
        sn: Doe
        givenName: John
        cn: John Doe

    So I think that's a good sign that things are working well. However, I have had zero luck connecting to the server remotely, via either GUI tools or the command line. I have tried JXplorer and an LDAP administration tool, running commands like this:

        > ldapsearch -xLLL -W -H ldap://ice.rit.edu -d1 "dc=easydita,dc=com"
        ldap_url_parse_ext(ldap://ice.rit.edu)
        ldap_create
        ldap_url_parse_ext(ldap://ice.rit.edu:389/??base)
        Enter LDAP Password:
        ldap_sasl_bind
        ldap_send_initial_request
        ldap_new_connection 1 1 0
        ldap_int_open_connection
        ldap_connect_to_host: TCP ice.rit.edu:389
        ldap_new_socket: 3
        ldap_prepare_socket: 3
        ldap_connect_to_host: Trying 127.0.0.1:389
        ldap_pvt_connect: fd: 3 tm: -1 async: 0
        ldap_open_defconn: successful
        ldap_send_server_request
        ber_scanf fmt ({it) ber:
        ber_scanf fmt ({i) ber:
        ber_flush2: 34 bytes to sd 3
        ldap_result ld 0xb8940170 msgid 1
        wait4msg ld 0xb8940170 msgid 1 (infinite timeout)
        wait4msg continue ld 0xb8940170 msgid 1 all 1
        ** ld 0xb8940170 Connections:
        * host: ice.rit.edu  port: 389  (default)
          refcnt: 2  status: Connected
          last used: Thu Mar 17 19:42:29 2011
        ** ld 0xb8940170 Outstanding Requests:
         * msgid 1,  origid 1, status InProgress
           outstanding referrals 0, parent count 0
          ld 0xb8940170 request count 1 (abandoned 0)
        ** ld 0xb8940170 Response Queue:
           Empty
          ld 0xb8940170 response count 0
        ldap_chkResponseList ld 0xb8940170 msgid 1 all 1
        ldap_chkResponseList returns ld 0xb8940170 NULL
        ldap_int_select
        read1msg: ld 0xb8940170 msgid 1 all 1
        ber_get_next
        ber_get_next: tag 0x30 len 16 contents:
        read1msg: ld 0xb8940170 msgid 1 message type bind
        ber_scanf fmt ({eAA) ber:
        read1msg: ld 0xb8940170 0 new referrals
        read1msg: mark request completed, ld 0xb8940170 msgid 1
        request done: ld 0xb8940170 msgid 1
        res_errno: 49, res_error: <>, res_matched: <>
        ldap_free_request (origid 1, msgid 1)
        ldap_parse_result
        ber_scanf fmt ({iAA) ber:
        ber_scanf fmt (}) ber:
        ldap_msgfree
        ldap_err2string
        ldap_bind: Invalid credentials (49)

    I am pretty sure that I set up the admin password correctly, but the tutorial was not very specific about that. (Also, I could not find instructions on how to reset the admin password.) Additional info: I was told that this file might hold important information, so I will post it. /etc/ldap/slapd.d/cn=config/olcDatabase={0}config.ldif:

        dn: olcDatabase={0}config
        objectClass: olcDatabaseConfig
        olcDatabase: {0}config
        olcAccess: {0}to * by dn.exact=cn=localroot,cn=config manage by * break
        olcRootDN: cn=admin,cn=config
        structuralObjectClass: olcDatabaseConfig
        entryUUID: eca09490-e524-102f-87c5-17d7a82e8985
        creatorsName: cn=config
        createTimestamp: 20110317205733Z
        entryCSN: 20110317205733.193089Z#000000#000#000000
        modifiersName: cn=config
        modifyTimestamp: 20110317205733Z

    Given that it seems I have this almost set up correctly, are there any steps I can take to correct this? Thanks, Casey

    Read the article

  • Puppet Directory and File ownership ignored

    - by Phil Sturgeon
    Puppet seems to be lying to me, which is not very nice. I am trying to set some files and directories included in /vagrant/src to be 666 and 777, and set the ownership group to the correct Apache user (using the PuppetLabs Apache module). Output from Puppet says yes:

        [default] Running provisioner: Vagrant::Provisioners::Puppet...
        [default] Running Puppet with /tmp/vagrant-puppet/manifests/default.pp...
        stdin: is not a tty
        No LSB modules are available.
        warning: require is a metaparam; this value will inherit to all contained resources
        warning: notify is a metaparam; this value will inherit to all contained resources
        notice: /Stage[main]//File[/vagrant/src/addons/]/owner: owner changed 'vagrant' to 'www-data'
        notice: /Stage[main]//File[/vagrant/src/addons/]/group: group changed 'vagrant' to 'www-data'
        notice: /Stage[main]//File[/vagrant/src/addons/]/mode: mode changed '0755' to '0777'
        notice: /Stage[main]//Package[curl]/ensure: ensure changed 'purged' to 'present'
        notice: /Stage[main]//File[/vagrant/src/system/cms/config/]/owner: owner changed 'vagrant' to 'www-data'
        notice: /Stage[main]//File[/vagrant/src/system/cms/config/]/group: group changed 'vagrant' to 'www-data'
        notice: /Stage[main]//File[/vagrant/src/system/cms/config/]/mode: mode changed '0755' to '0777'
        notice: /Stage[main]//File[/vagrant/src/system/cms/config/config.php]/owner: owner changed 'vagrant' to 'www-data'
        notice: /Stage[main]//File[/vagrant/src/system/cms/config/config.php]/group: group changed 'vagrant' to 'www-data'
        notice: /Stage[main]//File[/vagrant/src/system/cms/cache/]/owner: owner changed 'vagrant' to 'www-data'
        notice: /Stage[main]//File[/vagrant/src/system/cms/cache/]/group: group changed 'vagrant' to 'www-data'
        notice: /Stage[main]//File[/vagrant/src/system/cms/cache/]/mode: mode changed '0755' to '0777'
        notice: /Stage[main]//File[/vagrant/src/uploads/]/owner: owner changed 'vagrant' to 'www-data'
        notice: /Stage[main]//File[/vagrant/src/uploads/]/group: group changed 'vagrant' to 'www-data'
        notice: /Stage[main]//File[/vagrant/src/uploads/]/mode: mode changed '0755' to '0777'
        notice: /Stage[main]/Apache/Service[httpd]/ensure: ensure changed 'stopped' to 'running'
        notice: /Stage[main]//File[/vagrant/src/assets/cache/]/owner: owner changed 'vagrant' to 'www-data'
        notice: /Stage[main]//File[/vagrant/src/assets/cache/]/group: group changed 'vagrant' to 'www-data'
        notice: /Stage[main]//File[/vagrant/src/assets/cache/]/mode: mode changed '0755' to '0777'
        notice: Finished catalog run in 2.29 seconds

    Output from ls -lah says no:

        $ ls -lah /vagrant/src/
        total 36K
        drwxr-xr-x 1 vagrant vagrant  510 2012-07-03 00:11 .
        drwxr-xr-x 1 vagrant vagrant  340 2012-07-03 08:08 ..
        drwxr-xr-x 1 vagrant vagrant  136 2012-07-03 00:11 addons
        drwxr-xr-x 1 vagrant vagrant  102 2012-07-03 00:11 assets
        drwxr-xr-x 1 vagrant vagrant  510 2012-07-03 07:45 .git
        -rw-r--r-- 1 vagrant vagrant 1.3K 2012-07-03 00:11 .gitignore
        -rwxr-xr-x 1 vagrant vagrant 1.4K 2012-07-03 00:11 .htaccess
        -rwxr-xr-x 1 vagrant vagrant 8.8K 2012-07-03 00:11 index.php
        drwxr-xr-x 1 vagrant vagrant  442 2012-07-03 00:11 installer
        -rwxr-xr-x 1 vagrant vagrant 2.8K 2012-07-03 00:11 LICENSE
        -rw-r--r-- 1 vagrant vagrant 1.1K 2012-07-03 00:11 phpdoc.dist.xml
        -rw-r--r-- 1 vagrant vagrant 3.3K 2012-07-03 00:11 README.md
        drwxr-xr-x 1 vagrant vagrant  204 2012-07-03 00:11 system
        -rw-r--r-- 1 vagrant vagrant   42 2012-07-03 00:11 .travis.yml
        drwxr-xr-x 1 vagrant vagrant  102 2012-07-03 00:11 uploads

    What's up with that? My entire config can be found here.

    Read the article

  • Issues with softlink

    - by Ajay
    I am trying to create a softlink with the Unix command ln -s /ajay/i4sites/i4gear.com/docs/includes /ajay/i4sites/i4content.com/docs/includes, but it always gives a message that this is not a directory. I want to know only one thing: is the svn command required for creating a softlink, or not? Is it compulsory? I have executed the command svn status and got svn: warning: '.' is not a working copy. Please clarify.

    Read the article

  • Subversion multi checkout post-commit hook?

    - by FLX
    The title must sound strange, but I'm trying to achieve the following. SVN repo location: /home/flx/svn/flxdev. SVN repo "flxdev" structure:

        + Project1
          ++ files
        + Project2
        + Project3
        + Project4

    I'm trying to set up a post-commit hook that automatically checks out on the other end when I do a commit. The post-commit doc explicitly lists the following:

        # POST-COMMIT HOOK
        #
        # The post-commit hook is invoked after a commit. Subversion runs
        # this hook by invoking a program (script, executable, binary, etc.)
        # named 'post-commit' (for which this file is a template) with the
        # following ordered arguments:
        #
        #   [1] REPOS-PATH   (the path to this repository)
        #   [2] REV          (the number of the revision just committed)

    So I made the following command to test:

        REPOS="$1"
        REV="$2"

        echo "Updated project $REPOS to $REV"

    However, when I edit files in Project1 for example, this outputs "Updated project /home/flx/svn/flxdev to 1016". I'd like this to be: "Updated project Project1 to 1016". Having this variable would allow me to specify different actions per project post-commit. How can I get the project parameter? Thanks! Dennis

    Read the article

  • Writing to Samba share as different user?

    - by Hamid Elaosta
    I have a Samba share on my NAS drive, mounted as follows:

        mount -t smbfs -o username=backup,password=backups_password //sharebox/SVNBackup /mnt/SVNBackup

    I am then trying to run:

        sudo svnadmin dump /usr/local/svn/repos/testrepo > /mnt/SVNBackup/test1.svn

    but I get:

        bash: /mnt/SVNBackup/test1.svn: Permission Denied

    The backup location is set up to accept access only from the user "backup" (who doesn't exist on the local system). How do I go about solving this problem? Thanks

    Read the article

  • s3fs not maintaining uid/gid

    - by Publiccert
    I'm mounting S3 using s3fs with the following command:

        sudo /usr/local/bin/s3fs -o use_cache=/tmp,uid=1000,gid=1000,allow_other svn.domain.com /svn

    allow_other is confirmed to be working, along with the cache. However, no variation or different placement of the uid/gid options is having any effect on the metadata in S3; both the uid and gid come up as '0' in S3. I have created a user called svn with a uid of 1000 to see if that would fix the problem. No luck.

    Read the article

  • localhost name error with linux machines

    - by coderex
    Hi. Case 1: I have an Ubuntu machine with the hostname midhun.local. I can access it at http://midhun.local/svn, but it can't be accessed from other machines (both Windows and Linux) through this hostname; it does work with http://192.168.1.192/svn. Case 2: I have another machine (Windows) with the hostname myname:555. In this case I can access https://myname:555/svn from other Windows machines with the same URL, but if I try to access it from a Linux machine the same URL does not work; https://192.1.168.111:555/svn works instead. How can I solve this problem? I need to access each machine via the same name from any machine on the LAN. How is that possible? Thanks in advance!!

    Read the article

  • IIS httpTracing setting has no effect

    - by digahill
    I'm trying to troubleshoot some performance issues we are having on a specific ASP.NET page with Microsoft's Perfecto tool on IIS 7.5. Perfecto uses the ETW hooks built into IIS to report on specific HTTP requests, and is working quite well. However, I only want IIS to emit traces for one specific page, say "Default.aspx" in my TestApp web application. Following the instructions on the httpTracing man page, I should be able to add the traceUrls element to the root web.config file for TestApp. This doesn't seem to affect tracing whatsoever when I do so. For example, I've used the following settings in the web.config file (in the system.webServer section), and every request that hits the IIS server is sending tracing messages that are in turn picked up by Perfecto:

        <httpTracing>
          <traceUrls>
            <add value="/Default.aspx" />
          </traceUrls>
        </httpTracing>

    I then found that the applicationHost.config file on the server had an empty element. I tried removing this element, as well as the httpTracing element in the web.config. After a machine reboot, I was still getting tracing messages! My understanding is that the presence of the httpTracing element is what controls whether ETW tracing is on or not. I ensured there was no reference to httpTracing in the machine.config, too. At a loss, I decided to remove the IIS Tracing feature with Server Manager. After a reboot, I no longer got ETW tracing. I then reinstalled the IIS Tracing feature with Server Manager. As expected, the httpTracing element reappeared in the applicationHost.config file, and tracing messages began sending again for all sites and pages. I then tried to use the traceUrls element at the applicationHost.config level. This also didn't filter out any traces. I must be misunderstanding something key about how httpTracing works, and there aren't many resources on the web to help me. Can anyone tell me if what I'm trying to do should work? Has anyone else had success filtering tracing messages per page with traceUrls? I should note that I also tried changing the following setting in applicationHost.config to "allow"; it didn't seem to help:

        <section name="httpTracing" overrideModeDefault="Allow" />

    Read the article
