Search Results

Search found 14406 results on 577 pages for 'virtual terminal'.


  • Need Advice on Server configuration

    - by user45324
    I have a lab of 80 computers in my school. I need to run a terminal server on each PC using the Remote Desktop Services of Microsoft Server 2003. I need to run applications like Java, .NET, WAMP, Turbo C and other low-memory applications like MS Office and a web browser. Could you please recommend a configuration for the motherboard and processor? If you need any other information, feel free to ask.

    Read the article

  • GIT not functional on Mac OS X Lion?

    - by user1187727
    I am trying to use Git to manage my programming projects, but every Git command hangs in my terminal. For example, if I type git --version and press Enter, a blank line appears and the command waits forever. If I press Enter again, the prompt comes back, but nothing is printed. The same thing happens for every Git command I type. Do you have any solution or explanation for this?
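    A few generic diagnostics can help narrow down a hang like this (these are suggestions of mine, not steps from the linked answers): check which git binary the shell actually resolves, and whether calling it by its full path behaves differently.

        which -a git             # list every git on the PATH, in order
        type git                 # make sure git is not shadowed by an alias or shell function
        echo $PATH               # see which directories are searched first
        /usr/bin/git --version   # call the system binary directly, bypassing the PATH

    If the full-path call works while the bare git command hangs, the PATH is picking up a broken or stub installation.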

    Read the article

  • Why do I get this error when I try to push my SQLite3 to Postgresql (via Taps) on Cedar Stack?

    - by rhodee
    I've done quite a bit of research on the Heroku Dev Center and I am now looking to the community for help. Here is my problem: I cannot push my database to the Heroku Cedar stack. I am trying to migrate a SQLite database to PostgreSQL via the Taps gem. When I am ready to deploy I run:

        bundle install --without production
        heroku run db:push

    I get the following result:

        Running db:seed attached to terminal... up, run.17
        sh: db:seed: not found
        heroku run rake db:migrate

    And when I run the migration:

        heroku run rake db:migrate

    I get the following:

        Running rake db:migrate attached to terminal... up, run.18
        rake aborted!
        No Rakefile found (looking for: rakefile, Rakefile, rakefile.rb, Rakefile.rb)
        /usr/local/lib/ruby/1.9.1/rake.rb:2367:in `raw_load_rakefile'
        /usr/local/lib/ruby/1.9.1/rake.rb:2007:in `block in load_rakefile'
        /usr/local/lib/ruby/1.9.1/rake.rb:2058:in `standard_exception_handling'
        /usr/local/lib/ruby/1.9.1/rake.rb:2006:in `load_rakefile'
        /usr/local/lib/ruby/1.9.1/rake.rb:1991:in `run'
        /usr/local/bin/rake:31:in `<main>'

    Every time I push to Heroku (git push heroku master) it fails because my Gemfile is attempting to install the sqlite3 gem, even though it is inside the development and test groups in my Gemfile. My database.yml production environment still points to the sqlite adapter even after I have run the following command successfully:

        heroku config:add BUNDLE_WITHOUT="test development" --app app_name_on_heroku

    Out of ideas. Please help. If it's useful I can post the results of my Gemfile, heroku ps and logs. Cheers.

    UPDATE: After following @John's direction I now receive the following terminal message:

        Sending schema
        Schema:        100% |==========================================| Time: 00:00:07
        Sending indexes
        schema_migrat: 100% |==========================================| Time: 00:00:00
        Sending data
        4 tables, 6 records
        schema_migrat:   0% |                                          | ETA:  --:--:--
        Saving session to push_201111070749.dat.. !!!
Caught Server Exception HTTP CODE: 500 Taps Server Error: LoadError: no such file to load -- sequel/adapters/ And the following warnings: ["/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:249:in require'", "/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:249:inblock in tsk_require'", "/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:72:in block in check_requiring_thread'", "<internal:prelude>:10:insynchronize'", "/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:69:in check_requiring_thread'", "/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:249:intsk_require'", "/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/database/connecting.rb:25:in adapter_class'", "/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/database/connecting.rb:54:inconnect'", "/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:119:in connect'", "/app/lib/taps/db_session.rb:14:inconn'", "/app/lib/taps/server.rb:91:in block in <class:Server>'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:865:incall'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:865:in block in route'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:521:ininstance_eval'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:521:in route_eval'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:500:inblock (2 levels) in route!'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:497:in catch'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:497:inblock in route!'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:476:in each'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:476:inroute!'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:601:in dispatch!'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:411:inblock in call!'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:566:in instance_eval'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:566:inblock in invoke'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:566:in catch'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:566:ininvoke'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:411:in call!'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:399:incall'", "/app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.1/lib/rack/auth/basic.rb:25:in call'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:979:inblock in call'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:1005:in synchronize'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:979:incall'", "/home/heroku_rack/lib/static_assets.rb:9:in call'", "/home/heroku_rack/lib/last_access.rb:15:incall'", "/app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.1/lib/rack/urlmap.rb:47:in block in call'", "/app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.1/lib/rack/urlmap.rb:41:ineach'", "/app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.1/lib/rack/urlmap.rb:41:in call'", "/home/heroku_rack/lib/date_header.rb:14:incall'", "/app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.1/lib/rack/builder.rb:77:in call'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:76:inblock in pre_process'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:74:in catch'", 
"/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:74:inpre_process'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:57:in process'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:42:inreceive_data'", "/app/.bundle/gems/ruby/1.9.1/gems/eventmachine-0.12.10/lib/eventmachine.rb:256:in run_machine'", "/app/.bundle/gems/ruby/1.9.1/gems/eventmachine-0.12.10/lib/eventmachine.rb:256:inrun'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/backends/base.rb:57:in start'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/server.rb:156:instart'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/controllers/controller.rb:80:in start'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/runner.rb:177:inrun_command'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/runner.rb:143:in run!'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/bin/thin:6:in'", "/usr/ruby1.9.2/bin/thin:19:in load'", "/usr/ruby1.9.2/bin/thin:19:in'"]

    Read the article

  • Why "menus" unit is finalized too early?

    - by Harriv
    I tested my application with FastMM and FullDebugMode turned on, since I had some shutdown problems. After solving bunch of my own problems FastMM started to complain about calling virtual method on a freed object in TPopupList. I tried to move the menus unit as early as possible in uses so that it would be finalized last, but it didn't help. Is this real problem, a bug in vcl or false alarm from FastMM? Here's the full report from FastMM: FastMM has detected an attempt to call a virtual method on a freed object. An access violation will now be raised in order to abort the current operation. Freed object class: TPopupList Virtual method: Offset +16 Virtual method address: 4714E4 The allocation number was: 220 The object was allocated by thread 0x1CC0, and the stack trace (return addresses) at the time was: 403216 [sys\system.pas][System][System.@GetMem][2654] 404A4F [sys\system.pas][System][System.TObject.NewInstance][8807] 404E16 [sys\system.pas][System][System.@ClassCreate][9472] 404A84 [sys\system.pas][System][System.TObject.Create][8822] 7F2602 [Menus.pas][Menus][Menus.Menus][4223] 40570F [sys\system.pas][System][System.InitUnits][11397] 405777 [sys\system.pas][System][System.@StartExe][11462] 40844F [SysInit.pas][SysInit][SysInit.@InitExe][663] 7F6368 [PCCSServer.dpr][PCCSServer][PCCSServer.PCCSServer][148] 7C90DCBA [ZwSetInformationThread] 7C817077 [Unknown function at RegisterWaitForInputIdle] The object was subsequently freed by thread 0x1CC0, and the stack trace (return addresses) at the time was: 403232 [sys\system.pas][System][System.@FreeMem][2699] 404A6D [sys\system.pas][System][System.TObject.FreeInstance][8813] 404E61 [sys\system.pas][System][System.@ClassDestroy][9513] 428D15 [common\Classes.pas][Classes][Classes.TList.Destroy][2914] 404AB3 [sys\system.pas][System][System.TObject.Free][8832] 472091 [Menus.pas][Menus][Menus.Finalization][4228] 4056A7 [sys\system.pas][System][System.FinalizeUnits][11256] 4056BF [sys\system.pas][System][System.FinalizeUnits][11261] 7C9032A8 [RtlConvertUlongToLargeInteger] 7C90327A [RtlConvertUlongToLargeInteger] 7C92AA0F [Unknown function at towlower] The current thread ID is 0x1CC0, and the stack trace (return addresses) leading to this error is: 4714B8 [Menus.pas][Menus][Menus.TPopupList.MainWndProc][3779] 435BB2 [common\Classes.pas][Classes][Classes.StdWndProc][11583] 7E418734 [Unknown function at GetDC] 7E418816 [Unknown function at GetDC] 7E428EA0 [Unknown function at DefWindowProcW] 7E428EEC [Unknown function at DefWindowProcW] 7C90E473 [KiUserCallbackDispatcher] 7E42B1A8 [DestroyWindow] 47CE31 [Controls.pas][Controls][Controls.TWinControl.DestroyWindowHandle][6857] 493BE4 [Forms.pas][Forms][Forms.TCustomForm.DestroyWindowHandle][4564] 4906D9 [Forms.pas][Forms][Forms.TCustomForm.Destroy][2929] Current memory dump of 256 bytes starting at pointer address 7FF9CFF0: 2C FE 82 00 80 80 80 80 80 80 80 80 80 80 80 80 80 80 80 80 C4 A3 2D 0C 00 00 00 00 B1 D0 F9 7F 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 C0 00 00 00 16 32 40 00 9D 5B 40 00 C8 5B 40 00 CE 82 40 00 3C 40 91 7C B0 B1 94 7C 0A 77 92 7C 84 77 92 7C 7C F0 96 7C 94 B3 94 7C 84 77 92 7C C0 1C 00 00 32 32 40 00 12 5B 40 00 EF 69 40 00 BA 20 47 00 A7 56 40 00 BF 56 40 00 A8 32 90 7C 7A 32 90 7C 0F AA 92 7C 0A 77 92 7C 84 77 92 7C C0 1C 00 00 0E 00 00 00 00 00 00 00 C7 35 65 59 2C FE 82 00 80 80 80 80 80 80 80 80 80 80 38 CA 9A A6 80 80 80 80 80 80 00 00 00 00 51 D1 F9 7F 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 C1 00 00 00 16 32 40 00 9D 5B 40 00 C8 5B 40 00 CE 82 40 00 
3C 40 91 7C B0 B1 94 7C 0A 77 92 7C 84 77 92 7C 7C F0 96 7C 94 B3 94 7C 84 77 92 7C , þ ‚ . € € € € € € € € € € € € € € € € Ä £ - . . . . . ± Ð ù . . . . . . . . . . . . . . . . À . . . . 2 @ . [ @ . È [ @ . Î ‚ @ . < @ ‘ | ° ± ” | . w ’ | „ w ’ | | ð – | ” ³ ” | „ w ’ | À . . . 2 2 @ . . [ @ . ï i @ . º G . § V @ . ¿ V @ . ¨ 2 | z 2 | . ª ’ | . w ’ | „ w ’ | À . . . . . . . . . . . Ç 5 e Y , þ ‚ . € € € € € € € € € € 8 Ê š ¦ € € € € € € . . . . Q Ñ ù . . . . . . . . . . . . . . . . Á . . . . 2 @ . [ @ . È [ @ . Î ‚ @ . < @ ‘ | ° ± ” | . w ’ | „ w ’ | | ð – | ” ³ ” | „ w ’ | I'm using Delphi 2007 and FastMM 4.97.

    Read the article

  • Completing install of Ruby 1.9.3 with RVM for Mac OS X 10.7.5 (Lion), Xcode 4.5.2 -- problems with rvm pkg install openssl

    - by user1848361
    First, many thanks in advance for any help. I'm a complete novice with programming and I'm trying to get started with this Ruby on Rails tutorial (http://ruby.railstutorial.org/ruby-on-rails-tutorial-book?version=3.2) I have been trying figure this out for about 7 hours now and since I don't have any hair left to pull out I'm turning to these hallowed pages. I have searched for solutions here again and again. System: Mac OS X 10.7.5 Leopard, Xcode 4.5.2 I installed homebrew and have updated it multiple times I used homebrew to install rvm and have updated it multiple times I installed git The standard ruby on the system (checking with $ ruby -v) is 1.8.7 My problem is that every time I try to use rvm to install a new version of Ruby ($ rvm install 1.9.3) I get the following error: Ruby (and needed base gems) for your selection will be installed shortly. Before it happens, please read and execute the instructions below. Please use a separate terminal to execute any additional commands. Notes for Mac OS X 10.7.5, Xcode 4.5.2. For JRuby: Install the JDK. See http://developer.apple.com/java/download/ # Current Java version "1.6.0_26" For IronRuby: Install Mono >= 2.6 For Ruby 1.9.3: Install libksba # If using Homebrew, 'brew install libksba' For Opal: Install Nodejs with NPM. See http://nodejs.org/download/ To use an RVM installed Ruby as default, instead of the system ruby: rvm install 1.8.7 # installs patch 357: closest supported version rvm system ; rvm gemset export system.gems ; rvm 1.8.7 ; rvm gemset import system.gems # migrate your gems rvm alias create default 1.8.7 And reopen your terminal windows. Xcode and gcc: : I have performed $ brew install libksba and when I try to do it again it tells me that libksba is installed already. When I type "$ rvm requirements" I get: Notes for Mac OS X 10.7.5, Xcode 4.5.2. For JRuby: Install the JDK. See http://developer.apple.com/java/download/ # Current Java version "1.6.0_26" For IronRuby: Install Mono >= 2.6 For Ruby 1.9.3: Install libksba # If using Homebrew, 'brew install libksba' For Opal: Install Nodejs with NPM. See http://nodejs.org/download/ To use an RVM installed Ruby as default, instead of the system ruby: rvm install 1.8.7 # installs patch 357: closest supported version rvm system ; rvm gemset export system.gems ; rvm 1.8.7 ; rvm gemset import system.gems # migrate your gems rvm alias create default 1.8.7 And reopen your terminal windows. Xcode and gcc: Right now Ruby requires gcc to compile, but Xcode 4.2 and later no longer ship with gcc. Instead they ship with llvm-gcc (to which gcc is a symlink) and clang, neither of which are supported for building Ruby. Xcode 4.1 was the last version to ship gcc, which was /usr/bin/gcc-4.2. Xcode 4.1 and earlier: - Ruby will build fine. Xcode 4.2 and later (including Command Line Tools for Xcode): - If you have gcc-4.2 (and friends) from an earlier Xcode version, Ruby will build fine. - If you don't have gcc-4.2, you have two options to get it: * Install apple-gcc42 from Homebrew * Install osx-gcc-installer Homebrew: If you are using Homebrew, you can install the apple-gcc42 and required libraries from homebrew/dupes: brew update brew tap homebrew/dupes brew install autoconf automake apple-gcc42 rvm pkg install openssl Xcode 4.2+ install or/and Command Line Tools for Xcode is required to provide make and other tools. osx-gcc-installer: If you don't use Homebrew, you can download and install osx-gcc-installer: https://github.com/kennethreitz/osx-gcc-installer. 
Warning: Installing osx-gcc-installer on top of a recent Xcode is known to cause problems, so you must uninstall Xcode before installing osx-gcc-installer. Afterwards you may install Xcode 4.2+ or Command Line Tools for Xcode if you desire. ** NOTE: Currently, Node.js is having issues building with osx-gcc-installer. The only fix is to install Xcode over osx-gcc-installer.

So I assume I have to do something with:

        brew update
        brew tap homebrew/dupes
        brew install autoconf automake apple-gcc42
        rvm pkg install openssl

Everything seemed to work fine until "$ rvm pkg install openssl", which returns:

        Fetching openssl-1.0.1c.tar.gz to /Users/thierinvestmentservices/.rvm/archives
        Extracting openssl to /Users/thierinvestmentservices/.rvm/src/openssl-1.0.1c
        Configuring openssl in /Users/thierinvestmentservices/.rvm/src/openssl-1.0.1c.
        Compiling openssl in /Users/thierinvestmentservices/.rvm/src/openssl-1.0.1c.
        Error running 'make', please read /Users/thierinvestmentservices/.rvm/log/openssl/make.log
        Please note that it's required to reinstall all rubies: rvm reinstall all --force
        Updating openssl certificates
        Error running 'update_openssl_certs', please read /Users/thierinvestmentservices/.rvm/log/openssl.certs.log
        Johns-MacBook-Pro:~ thierinvestmentservices$ rvm pkg install openssl
        Fetching openssl-1.0.1c.tar.gz to /Users/thierinvestmentservices/.rvm/archives
        Extracting openssl to /Users/thierinvestmentservices/.rvm/src/openssl-1.0.1c
        Configuring openssl in /Users/thierinvestmentservices/.rvm/src/openssl-1.0.1c.
        Compiling openssl in /Users/thierinvestmentservices/.rvm/src/openssl-1.0.1c.
        Error running 'make', please read /Users/thierinvestmentservices/.rvm/log/openssl/make.log
        Please note that it's required to reinstall all rubies: rvm reinstall all --force
        Updating openssl certificates
        Error running 'update_openssl_certs', please read /Users/thierinvestmentservices/.rvm/log/openssl.certs.log

make.log reads:

        [2012-11-23 13:15:28] make
        /Users/thierinvestmentservices/.rvm/scripts/functions/utility: line 116: make: command not found

and openssl.certs.log reads:

        [2012-11-23 14:04:04] update_openssl_certs
        update_openssl_certs () { ( chpwd_functions="" builtin cd $rvm_usr_path/ssl && command curl -O http://curl.haxx.se/ca/cacert.pem && mv cacert.pem cert.pem ) }
        current path: /Users/thierinvestmentservices
        command(1): update_openssl_certs
        /Users/thierinvestmentservices/.rvm/scripts/functions/pkg: line 205: cd: /Users/thierinvestmentservices/.rvm/usr/ssl: No such file or directory

At this point the letters might as well be wingdings; I have no idea what is going on. I have tried to install make via rvm following something I saw in one forum post, but I got a bunch of warnings. If anyone has any suggestions I would be deeply grateful, I am completely in over my head.
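    The make.log excerpt above ("make: command not found") suggests the build tools themselves are missing rather than anything rvm-specific: nothing provides make until the Command Line Tools (or a full Xcode that includes them) are installed. A rough sketch of that direction, assuming Homebrew is present and following the steps the rvm notes already list (the final rvm flag may vary between rvm versions):

        # 1. Install "Command Line Tools for Xcode" (Xcode > Preferences > Downloads on 10.7,
        #    or the standalone download from developer.apple.com) so make and gcc exist.
        # 2. Then the Homebrew steps quoted in the rvm output:
        brew update
        brew tap homebrew/dupes
        brew install autoconf automake apple-gcc42
        # 3. Rebuild openssl and Ruby against the Apple gcc that was just installed:
        rvm pkg install openssl
        rvm install 1.9.3 --with-gcc=gcc-4.2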

    Read the article

  • Capturing and Transforming ASP.NET Output with Response.Filter

    - by Rick Strahl
    During one of my Handlers and Modules session at DevConnections this week one of the attendees asked a question that I didn’t have an immediate answer for. Basically he wanted to capture response output completely and then apply some filtering to the output – effectively injecting some additional content into the page AFTER the page had completely rendered. Specifically the output should be captured from anywhere – not just a page and have this code injected into the page. Some time ago I posted some code that allows you to capture ASP.NET Page output by overriding the Render() method, capturing the HtmlTextWriter() and reading its content, modifying the rendered data as text then writing it back out. I’ve actually used this approach on a few occasions and it works fine for ASP.NET pages. But this obviously won’t work outside of the Page class environment and it’s not really generic – you have to create a custom page class in order to handle the output capture. [updated 11/16/2009 – updated ResponseFilterStream implementation and a few additional notes based on comments] Enter Response.Filter However, ASP.NET includes a Response.Filter which can be used – well to filter output. Basically Response.Filter is a stream through which the OutputStream is piped back to the Web Server (indirectly). As content is written into the Response object, the filter stream receives the appropriate Stream commands like Write, Flush and Close as well as read operations although for a Response.Filter that’s uncommon to be hit. The Response.Filter can be programmatically replaced at runtime which allows you to effectively intercept all output generation that runs through ASP.NET. A common Example: Dynamic GZip Encoding A rather common use of Response.Filter hooking up code based, dynamic  GZip compression for requests which is dead simple by applying a GZipStream (or DeflateStream) to Response.Filter. The following generic routines can be used very easily to detect GZip capability of the client and compress response output with a single line of code and a couple of library helper routines: WebUtils.GZipEncodePage(); which is handled with a few lines of reusable code and a couple of static helper methods: /// <summary> ///Sets up the current page or handler to use GZip through a Response.Filter ///IMPORTANT:  ///You have to call this method before any output is generated! 
/// </summary>
public static void GZipEncodePage()
{
    HttpResponse Response = HttpContext.Current.Response;
    if (IsGZipSupported())
    {
        string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
        if (AcceptEncoding.Contains("deflate"))
        {
            Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter,
                                    System.IO.Compression.CompressionMode.Compress);
            Response.AppendHeader("Content-Encoding", "deflate");
        }
        else
        {
            Response.Filter = new System.IO.Compression.GZipStream(Response.Filter,
                                    System.IO.Compression.CompressionMode.Compress);
            Response.AppendHeader("Content-Encoding", "gzip");
        }
    }
    // Allow proxy servers to cache encoded and unencoded versions separately
    Response.AppendHeader("Vary", "Content-Encoding");
}

/// <summary>
/// Determines if GZip is supported
/// </summary>
/// <returns></returns>
public static bool IsGZipSupported()
{
    string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
    if (!string.IsNullOrEmpty(AcceptEncoding) &&
        (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
        return true;
    return false;
}

GZipStream and DeflateStream are streams that are assigned to Response.Filter and by doing so apply the appropriate compression on the active Response.

Response.Filter content is chunked

So to implement a Response.Filter effectively requires only that you implement a custom stream and handle the Write() method to capture Response output as it's written. At first blush this seems very simple – you capture the output in Write, transform it and write out the transformed content in one pass. And that indeed works for small amounts of content. But you see, the problem is that output is written in small buffer chunks (a little less than 16k it appears) rather than in a single Write() call into the stream, which makes perfect sense for ASP.NET: streaming data back to IIS in smaller chunks minimizes memory usage en route. Unfortunately this also makes it more difficult to implement any filtering routines, since you don't directly get access to all of the response content. That is problematic especially if those filtering routines require you to look at the ENTIRE response in order to transform or capture the output, as is needed for the solution the gentleman in my session asked for. So in order to address this a slightly different approach is required that basically captures all the Write() buffers passed into a cached stream and then makes the stream available only when it's complete and ready to be flushed. As I was thinking about the implementation I also started thinking about the few instances when I've used Response.Filter implementations. Each time I had to create a new Stream subclass and create my custom functionality, but in the end each implementation did the same thing – capturing output and transforming it. I thought there should be an easier way to do this by creating a reusable Stream class that can handle stream transformations that are common to Response.Filter implementations.

Creating a semi-generic Response Filter Stream Class

What I ended up with is a ResponseFilterStream class that provides a handful of events that allow you to capture and/or transform Response content.
The class implements a subclass of Stream and then overrides Write() and Flush() to handle capturing and transformation operations. By exposing events it’s easy to hook up capture or transformation operations via single focused methods. ResponseFilterStream exposes the following events: CaptureStream, CaptureString Captures the output only and provides either a MemoryStream or String with the final page output. Capture is hooked to the Flush() operation of the stream. TransformStream, TransformString Allows you to transform the complete response output with events that receive a MemoryStream or String respectively and can you modify the output then return it back as a return value. The transformed output is then written back out in a single chunk to the response output stream. These events capture all output internally first then write the entire buffer into the response. TransformWrite, TransformWriteString Allows you to transform the Response data as it is written in its original chunk size in the Stream’s Write() method. Unlike TransformStream/TransformString which operate on the complete output, these events only see the current chunk of data written. This is more efficient as there’s no caching involved, but can cause problems due to searched content splitting over multiple chunks. Using this implementation, creating a custom Response.Filter transformation becomes as simple as the following code. To hook up the Response.Filter using the MemoryStream version event: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformStream += filter_TransformStream; Response.Filter = filter; and the event handler to do the transformation: MemoryStream filter_TransformStream(MemoryStream ms) { Encoding encoding = HttpContext.Current.Response.ContentEncoding; string output = encoding.GetString(ms.ToArray()); output = FixPaths(output); ms = new MemoryStream(output.Length); byte[] buffer = encoding.GetBytes(output); ms.Write(buffer,0,buffer.Length); return ms; } private string FixPaths(string output) { string path = HttpContext.Current.Request.ApplicationPath; // override root path wonkiness if (path == "/") path = ""; output = output.Replace("\"~/", "\"" + path + "/").Replace("'~/", "'" + path + "/"); return output; } The idea of the event handler is that you can do whatever you want to the stream and return back a stream – either the same one that’s been modified or a brand new one – which is then sent back to as the final response. The above code can be simplified even more by using the string version events which handle the stream to string conversions for you: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformString += filter_TransformString; Response.Filter = filter; and the event handler to do the transformation calling the same FixPaths method shown above: string filter_TransformString(string output) { return FixPaths(output); } The events for capturing output and capturing and transforming chunks work in a very similar way. By using events to handle the transformations ResponseFilterStream becomes a reusable component and we don’t have to create a new stream class or subclass an existing Stream based classed. By the way, the example used here is kind of a cool trick which transforms “~/” expressions inside of the final generated HTML output – even in plain HTML controls not HTML controls – and transforms them into the appropriate application relative path in the same way that ResolveUrl would do. 
So you can write plain old HTML like this: <a href=”~/default.aspx”>Home</a>  and have it turned into: <a href=”/myVirtual/default.aspx”>Home</a>  without having to use an ASP.NET control like Hyperlink or Image or having to constantly use: <img src=”<%= ResolveUrl(“~/images/home.gif”) %>” /> in MVC applications (which frankly is one of the most annoying things about MVC especially given the path hell that extension-less and endpoint-less URLs impose). I can’t take credit for this idea. While discussing the Response.Filter issues on Twitter a hint from Dylan Beattie who pointed me at one of his examples which does something similar. I thought the idea was cool enough to use an example for future demos of Response.Filter functionality in ASP.NET next I time I do the Modules and Handlers talk (which was great fun BTW). How practical this is is debatable however since there’s definitely some overhead to using a Response.Filter in general and especially on one that caches the output and the re-writes it later. Make sure to test for performance anytime you use Response.Filter hookup and make sure it' doesn’t end up killing perf on you. You’ve been warned :-}. How does ResponseFilterStream work? The big win of this implementation IMHO is that it’s a reusable  component – so for implementation there’s no new class, no subclassing – you simply attach to an event to implement an event handler method with a straight forward signature to retrieve the stream or string you’re interested in. The implementation is based on a subclass of Stream as is required in order to handle the Response.Filter requirements. What’s different than other implementations I’ve seen in various places is that it supports capturing output as a whole to allow retrieving the full response output for capture or modification. The exception are the TransformWrite and TransformWrite events which operate only active chunk of data written by the Response. For captured output, the Write() method captures output into an internal MemoryStream that is cached until writing is complete. So Write() is called when ASP.NET writes to the Response stream, but the filter doesn’t pass on the Write immediately to the filter’s internal stream. The data is cached and only when the Flush() method is called to finalize the Stream’s output do we actually send the cached stream off for transformation (if the events are hooked up) and THEN finally write out the returned content in one big chunk. Here’s the implementation of ResponseFilterStream: /// <summary> /// A semi-generic Stream implementation for Response.Filter with /// an event interface for handling Content transformations via /// Stream or String. /// <remarks> /// Use with care for large output as this implementation copies /// the output into a memory stream and so increases memory usage. 
/// </remarks> /// </summary> public class ResponseFilterStream : Stream { /// <summary> /// The original stream /// </summary> Stream _stream; /// <summary> /// Current position in the original stream /// </summary> long _position; /// <summary> /// Stream that original content is read into /// and then passed to TransformStream function /// </summary> MemoryStream _cacheStream = new MemoryStream(5000); /// <summary> /// Internal pointer that that keeps track of the size /// of the cacheStream /// </summary> int _cachePointer = 0; /// <summary> /// /// </summary> /// <param name="responseStream"></param> public ResponseFilterStream(Stream responseStream) { _stream = responseStream; } /// <summary> /// Determines whether the stream is captured /// </summary> private bool IsCaptured { get { if (CaptureStream != null || CaptureString != null || TransformStream != null || TransformString != null) return true; return false; } } /// <summary> /// Determines whether the Write method is outputting data immediately /// or delaying output until Flush() is fired. /// </summary> private bool IsOutputDelayed { get { if (TransformStream != null || TransformString != null) return true; return false; } } /// <summary> /// Event that captures Response output and makes it available /// as a MemoryStream instance. Output is captured but won't /// affect Response output. /// </summary> public event Action<MemoryStream> CaptureStream; /// <summary> /// Event that captures Response output and makes it available /// as a string. Output is captured but won't affect Response output. /// </summary> public event Action<string> CaptureString; /// <summary> /// Event that allows you transform the stream as each chunk of /// the output is written in the Write() operation of the stream. /// This means that that it's possible/likely that the input /// buffer will not contain the full response output but only /// one of potentially many chunks. /// /// This event is called as part of the filter stream's Write() /// operation. /// </summary> public event Func<byte[], byte[]> TransformWrite; /// <summary> /// Event that allows you to transform the response stream as /// each chunk of bytep[] output is written during the stream's write /// operation. This means it's possibly/likely that the string /// passed to the handler only contains a portion of the full /// output. Typical buffer chunks are around 16k a piece. /// /// This event is called as part of the stream's Write operation. /// </summary> public event Func<string, string> TransformWriteString; /// <summary> /// This event allows capturing and transformation of the entire /// output stream by caching all write operations and delaying final /// response output until Flush() is called on the stream. /// </summary> public event Func<MemoryStream, MemoryStream> TransformStream; /// <summary> /// Event that can be hooked up to handle Response.Filter /// Transformation. Passed a string that you can modify and /// return back as a return value. The modified content /// will become the final output. 
/// </summary> public event Func<string, string> TransformString; protected virtual void OnCaptureStream(MemoryStream ms) { if (CaptureStream != null) CaptureStream(ms); } private void OnCaptureStringInternal(MemoryStream ms) { if (CaptureString != null) { string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray()); OnCaptureString(content); } } protected virtual void OnCaptureString(string output) { if (CaptureString != null) CaptureString(output); } protected virtual byte[] OnTransformWrite(byte[] buffer) { if (TransformWrite != null) return TransformWrite(buffer); return buffer; } private byte[] OnTransformWriteStringInternal(byte[] buffer) { Encoding encoding = HttpContext.Current.Response.ContentEncoding; string output = OnTransformWriteString(encoding.GetString(buffer)); return encoding.GetBytes(output); } private string OnTransformWriteString(string value) { if (TransformWriteString != null) return TransformWriteString(value); return value; } protected virtual MemoryStream OnTransformCompleteStream(MemoryStream ms) { if (TransformStream != null) return TransformStream(ms); return ms; } /// <summary> /// Allows transforming of strings /// /// Note this handler is internal and not meant to be overridden /// as the TransformString Event has to be hooked up in order /// for this handler to even fire to avoid the overhead of string /// conversion on every pass through. /// </summary> /// <param name="responseText"></param> /// <returns></returns> private string OnTransformCompleteString(string responseText) { if (TransformString != null) TransformString(responseText); return responseText; } /// <summary> /// Wrapper method form OnTransformString that handles /// stream to string and vice versa conversions /// </summary> /// <param name="ms"></param> /// <returns></returns> internal MemoryStream OnTransformCompleteStringInternal(MemoryStream ms) { if (TransformString == null) return ms; //string content = ms.GetAsString(); string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray()); content = TransformString(content); byte[] buffer = HttpContext.Current.Response.ContentEncoding.GetBytes(content); ms = new MemoryStream(); ms.Write(buffer, 0, buffer.Length); //ms.WriteString(content); return ms; } /// <summary> /// /// </summary> public override bool CanRead { get { return true; } } public override bool CanSeek { get { return true; } } /// <summary> /// /// </summary> public override bool CanWrite { get { return true; } } /// <summary> /// /// </summary> public override long Length { get { return 0; } } /// <summary> /// /// </summary> public override long Position { get { return _position; } set { _position = value; } } /// <summary> /// /// </summary> /// <param name="offset"></param> /// <param name="direction"></param> /// <returns></returns> public override long Seek(long offset, System.IO.SeekOrigin direction) { return _stream.Seek(offset, direction); } /// <summary> /// /// </summary> /// <param name="length"></param> public override void SetLength(long length) { _stream.SetLength(length); } /// <summary> /// /// </summary> public override void Close() { _stream.Close(); } /// <summary> /// Override flush by writing out the cached stream data /// </summary> public override void Flush() { if (IsCaptured && _cacheStream.Length > 0) { // Check for transform implementations _cacheStream = OnTransformCompleteStream(_cacheStream); _cacheStream = OnTransformCompleteStringInternal(_cacheStream); OnCaptureStream(_cacheStream); 
OnCaptureStringInternal(_cacheStream); // write the stream back out if output was delayed if (IsOutputDelayed) _stream.Write(_cacheStream.ToArray(), 0, (int)_cacheStream.Length); // Clear the cache once we've written it out _cacheStream.SetLength(0); } // default flush behavior _stream.Flush(); } /// <summary> /// /// </summary> /// <param name="buffer"></param> /// <param name="offset"></param> /// <param name="count"></param> /// <returns></returns> public override int Read(byte[] buffer, int offset, int count) { return _stream.Read(buffer, offset, count); } /// <summary> /// Overriden to capture output written by ASP.NET and captured /// into a cached stream that is written out later when Flush() /// is called. /// </summary> /// <param name="buffer"></param> /// <param name="offset"></param> /// <param name="count"></param> public override void Write(byte[] buffer, int offset, int count) { if ( IsCaptured ) { // copy to holding buffer only - we'll write out later _cacheStream.Write(buffer, 0, count); _cachePointer += count; } // just transform this buffer if (TransformWrite != null) buffer = OnTransformWrite(buffer); if (TransformWriteString != null) buffer = OnTransformWriteStringInternal(buffer); if (!IsOutputDelayed) _stream.Write(buffer, offset, buffer.Length); } } The key features are the events and corresponding OnXXX methods that handle the event hookups, and the Write() and Flush() methods of the stream implementation. All the rest of the members tend to be plain jane passthrough stream implementation code without much consequence. I do love the way Action<t> and Func<T> make it so easy to create the event signatures for the various events – sweet. A few Things to consider Performance Response.Filter is not great for performance in general as it adds another layer of indirection to the ASP.NET output pipeline, and this implementation in particular adds a memory hit as it basically duplicates the response output into the cached memory stream which is necessary since you may have to look at the entire response. If you have large pages in particular this can cause potentially serious memory pressure in your server application. So be careful of wholesale adoption of this (or other) Response.Filters. Make sure to do some performance testing to ensure it’s not killing your app’s performance. Response.Filter works everywhere A few questions came up in comments and discussion as to capturing ALL output hitting the site and – yes you can definitely do that by assigning a Response.Filter inside of a module. If you do this however you’ll want to be very careful and decide which content you actually want to capture especially in IIS 7 which passes ALL content – including static images/CSS etc. through the ASP.NET pipeline. So it is important to filter only on what you’re looking for – like the page extension or maybe more effectively the Response.ContentType. Response.Filter Chaining Originally I thought that filter chaining doesn’t work at all due to a bug in the stream implementation code. But it’s quite possible to assign multiple filters to the Response.Filter property. 
So the following actually works to both compress the output and apply the transformed content:

    WebUtils.GZipEncodePage();
    ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
    filter.TransformString += filter_TransformString;
    Response.Filter = filter;

However the following does not work, resulting in invalid content encoding errors:

    ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
    filter.TransformString += filter_TransformString;
    Response.Filter = filter;
    WebUtils.GZipEncodePage();

In other words, multiple Response filters can work together, but it depends entirely on the implementation whether they can be chained and in which order they can be chained. In this case the GZip/Deflate stream filters apparently rely on the original content length of the output and choke when the content is modified. But if the compression is attached first it works fine, as unintuitive as that may seem.

Resources

Download example code
Capture Output from ASP.NET Pages

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in ASP.NET
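    As a companion to the note above that a Response.Filter can also be assigned from inside a module, here is a minimal sketch of my own (not code from this article) of an IHttpModule that hooks ResponseFilterStream up only for .aspx requests and injects a marker before the closing body tag; it would be registered in the usual way under the modules section of web.config.

        public class ResponseTransformModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.BeginRequest += (sender, e) =>
                {
                    HttpContext context = ((HttpApplication)sender).Context;

                    // Only hook ASP.NET pages - in IIS 7 integrated mode static files
                    // pass through here as well (Response.ContentType is another way to filter).
                    if (!context.Request.Path.EndsWith(".aspx", StringComparison.OrdinalIgnoreCase))
                        return;

                    var filter = new ResponseFilterStream(context.Response.Filter);
                    filter.TransformString += html =>
                        html.Replace("</body>", "<!-- filtered --></body>");
                    context.Response.Filter = filter;
                };
            }

            public void Dispose() { }
        }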

    Read the article

  • A first look at ConfORM - Part 1

    - by thangchung
    All source code for this post can be found here. Have you ever heard of ConfORM? I first read about it three months ago when I wrote a post about NHibernate and Autofac. At that time the project had only just started and was still in beta, so I did not pay much attention to it. But recently, while reading Jason Dentler's book NHibernate 3.0 Cookbook, I started to look at it again. The author mentions quite a lot of OSS projects in his book, and so I reviewed ConfORM once more. I have since joined the ConfORM development group on Google and read some articles about it. Fabio Maulo has put a lot of work into this OSS project, and I hope it will adapt well to NHibernate (he is an NHibernate contributor, after all). So what is ConfORM? It stands for "Configuration ORM", and it uses a set of heuristics to identify entities from C# code.

    Development today is mostly model-first, so the first thing to do is build the entity model. This is really important; we can see it as the heart of the business software. Then we have to tell the database about the entities of this model. The database schema will be generated from the entity model, and from that point on we are no longer interested in the database at all; we only focus on the entity model. Fluent NHibernate is really good, and I like that OSS project. Sharp Architecture has integrated Fluent NHibernate very well, and the multiple-database technique in Sharp Architecture is truly awesome: it can take a configuration, a connection string and a DLL containing the entity model, create a SessionFactory from them, and cache it in memory. Because the number of SessionFactories can grow large and fill up memory, it also provides a way of caching a SessionFactory to a file.

    I do not hope to completely explain building a multiple-database model in this post; I have just tried to collect a number of posts from the community and apply some of my own knowledge to build a session management model for ConfORM. Like Fluent NHibernate, ConfORM also supports mapping by interfaces; see this to understand it. So the first thing is to build the entity model, and here is the model I will use for this article: a simple model for managing news and polls. It will be very easy for many people, but I hope that keeps the post from getting too complex. Next, some code to build the supertype for the entity model.
public interface IEntity<TId>
{
    TId Id { get; set; }
}

public abstract class EntityBase<TId> : IEntity<TId>
{
    public virtual TId Id { get; set; }

    public override bool Equals(object obj)
    {
        return Equals(obj as EntityBase<TId>);
    }

    private static bool IsTransient(EntityBase<TId> obj)
    {
        return obj != null && Equals(obj.Id, default(TId));
    }

    private Type GetUnproxiedType()
    {
        return GetType();
    }

    public virtual bool Equals(EntityBase<TId> other)
    {
        if (other == null)
            return false;
        if (ReferenceEquals(this, other))
            return true;
        if (!IsTransient(this) && !IsTransient(other) && Equals(Id, other.Id))
        {
            var otherType = other.GetUnproxiedType();
            var thisType = GetUnproxiedType();
            return thisType.IsAssignableFrom(otherType) ||
                   otherType.IsAssignableFrom(thisType);
        }
        return false;
    }

    public override int GetHashCode()
    {
        if (Equals(Id, default(TId)))
            return base.GetHashCode();
        return Id.GetHashCode();
    }
}

The database schema will be created based on this model. The next step is to build the ConfORM builder that creates an NHibernate Configuration. Patrick has an excellent article about it here. Its contract is below:

public interface IConfigBuilder
{
    Configuration BuildConfiguration(string connectionString, string sessionFactoryName);
}

The idea here is that I pass in a connection string and a set of DLLs containing the entity model, and it gives me back an NHibernate Configuration (I admit I stole this idea from Sharp Architecture).
And here is its code: public abstract class ConfORMConfigBuilder : RootObject, IConfigBuilder    {        private static IConfigurator _configurator;         protected IEnumerable<Type> DomainTypes;         private readonly IEnumerable<string> _assemblies;         protected ConfORMConfigBuilder(IEnumerable<string> assemblies)            : this(new Configurator(), assemblies)        {            _assemblies = assemblies;        }         protected ConfORMConfigBuilder(IConfigurator configurator, IEnumerable<string> assemblies)        {            _configurator = configurator;            _assemblies = assemblies;        }         public abstract void GetDatabaseIntegration(IDbIntegrationConfigurationProperties dBIntegration, string connectionString);         protected abstract HbmMapping GetMapping();         public Configuration BuildConfiguration(string connectionString, string sessionFactoryName)        {            Contract.Requires(!string.IsNullOrEmpty(connectionString), "ConnectionString is null or empty");            Contract.Requires(!string.IsNullOrEmpty(sessionFactoryName), "SessionFactory name is null or empty");            Contract.Requires(_configurator != null, "Configurator is null");             return CatchExceptionHelper.TryCatchFunction(                () =>                {                    DomainTypes = GetTypeOfEntities(_assemblies);                     if (DomainTypes == null)                        throw new Exception("Type of domains is null");                     var configure = new Configuration();                    configure.SessionFactoryName(sessionFactoryName);                     configure.Proxy(p => p.ProxyFactoryFactory<ProxyFactoryFactory>());                    configure.DataBaseIntegration(db => GetDatabaseIntegration(db, connectionString));                     if (_configurator.GetAppSettingString("IsCreateNewDatabase").ConvertToBoolean())                    {                        configure.SetProperty("hbm2ddl.auto", "create-drop");                    }                     configure.Properties.Add("default_schema", _configurator.GetAppSettingString("DefaultSchema"));                    configure.AddDeserializedMapping(GetMapping(),                                                     _configurator.GetAppSettingString("DocumentFileName"));                     SchemaMetadataUpdater.QuoteTableAndColumns(configure);                     return configure;                }, Logger);        }         protected IEnumerable<Type> GetTypeOfEntities(IEnumerable<string> assemblies)        {            var type = typeof(EntityBase<Guid>);            var domainTypes = new List<Type>();             foreach (var assembly in assemblies)            {                var realAssembly = Assembly.LoadFrom(assembly);                 if (realAssembly == null)                    throw new NullReferenceException();                 domainTypes.AddRange(realAssembly.GetTypes().Where(                    t =>                    {                        if (t.BaseType != null)                            return string.Compare(t.BaseType.FullName,                                          type.FullName) == 0;                        return false;                    }));            }             return domainTypes;        }    } I do not want to dependency on any RDBMS, so I made a builder as an abstract class, and so I will create a concrete instance for SQL Server 2008 as follows: public class SqlServerConfORMConfigBuilder : ConfORMConfigBuilder    {        public 
SqlServerConfORMConfigBuilder(IEnumerable<string> assemblies)            : base(assemblies)        {        }         public override void GetDatabaseIntegration(IDbIntegrationConfigurationProperties dBIntegration, string connectionString)        {            dBIntegration.Dialect<MsSql2008Dialect>();            dBIntegration.Driver<SqlClientDriver>();            dBIntegration.KeywordsAutoImport = Hbm2DDLKeyWords.AutoQuote;            dBIntegration.IsolationLevel = IsolationLevel.ReadCommitted;            dBIntegration.ConnectionString = connectionString;            dBIntegration.LogSqlInConsole = true;            dBIntegration.Timeout = 10;            dBIntegration.LogFormatedSql = true;            dBIntegration.HqlToSqlSubstitutions = "true 1, false 0, yes 'Y', no 'N'";        }         protected override HbmMapping GetMapping()        {            var orm = new ObjectRelationalMapper();             orm.Patterns.PoidStrategies.Add(new GuidPoidPattern());             var patternsAppliers = new CoolPatternsAppliersHolder(orm);            //patternsAppliers.Merge(new DatePropertyByNameApplier()).Merge(new MsSQL2008DateTimeApplier());            patternsAppliers.Merge(new ManyToOneColumnNamingApplier());            patternsAppliers.Merge(new OneToManyKeyColumnNamingApplier(orm));             var mapper = new Mapper(orm, patternsAppliers);             var entities = new List<Type>();             DomainDefinition(orm);            Customize(mapper);             entities.AddRange(DomainTypes);             return mapper.CompileMappingFor(entities);        }         private void DomainDefinition(IObjectRelationalMapper orm)        {            orm.TablePerClassHierarchy(new[] { typeof(EntityBase<Guid>) });            orm.TablePerClass(DomainTypes);             orm.OneToOne<News, Poll>();            orm.ManyToOne<Category, News>();             orm.Cascade<Category, News>(Cascade.All);            orm.Cascade<News, Poll>(Cascade.All);            orm.Cascade<User, Poll>(Cascade.All);        }         private static void Customize(Mapper mapper)        {            CustomizeRelations(mapper);            CustomizeTables(mapper);            CustomizeColumns(mapper);        }         private static void CustomizeRelations(Mapper mapper)        {        }         private static void CustomizeTables(Mapper mapper)        {        }         private static void CustomizeColumns(Mapper mapper)        {            mapper.Class<Category>(                cm =>                {                    cm.Property(x => x.Name, m => m.NotNullable(true));                    cm.Property(x => x.CreatedDate, m => m.NotNullable(true));                });             mapper.Class<News>(                cm =>                {                    cm.Property(x => x.Title, m => m.NotNullable(true));                    cm.Property(x => x.ShortDescription, m => m.NotNullable(true));                    cm.Property(x => x.Content, m => m.NotNullable(true));                });             mapper.Class<Poll>(                cm =>                {                    cm.Property(x => x.Value, m => m.NotNullable(true));                    cm.Property(x => x.VoteDate, m => m.NotNullable(true));                    cm.Property(x => x.WhoVote, m => m.NotNullable(true));                });             mapper.Class<User>(                cm =>                {                    cm.Property(x => x.UserName, m => m.NotNullable(true));                    cm.Property(x => x.Password, m => m.NotNullable(true));                });        }    } As you 
can see that we can do so many things in this class, such as custom entity relationships, custom binding on the columns, custom table name, ... Here I only made two so-Appliers for OneToMany and ManyToOne relationships, you can refer to it here public class ManyToOneColumnNamingApplier : IPatternApplier<PropertyPath, IManyToOneMapper>    {        #region IPatternApplier<PropertyPath,IManyToOneMapper> Members         public void Apply(PropertyPath subject, IManyToOneMapper applyTo)        {            applyTo.Column(subject.ToColumnName() + "Id");        }         #endregion         #region IPattern<PropertyPath> Members         public bool Match(PropertyPath subject)        {            return subject != null;        }         #endregion    } public class OneToManyKeyColumnNamingApplier : OneToManyPattern, IPatternApplier<PropertyPath, ICollectionPropertiesMapper>    {        public OneToManyKeyColumnNamingApplier(IDomainInspector domainInspector) : base(domainInspector) { }         #region Implementation of IPattern<PropertyPath>         public bool Match(PropertyPath subject)        {            return Match(subject.LocalMember);        }         #endregion Implementation of IPattern<PropertyPath>         #region Implementation of IPatternApplier<PropertyPath,ICollectionPropertiesMapper>         public void Apply(PropertyPath subject, ICollectionPropertiesMapper applyTo)        {            applyTo.Key(km => km.Column(GetKeyColumnName(subject)));        }         #endregion Implementation of IPatternApplier<PropertyPath,ICollectionPropertiesMapper>         protected virtual string GetKeyColumnName(PropertyPath subject)        {            Type propertyType = subject.LocalMember.GetPropertyOrFieldType();            Type childType = propertyType.DetermineCollectionElementType();            var entity = subject.GetContainerEntity(DomainInspector);            var parentPropertyInChild = childType.GetFirstPropertyOfType(entity);            var baseName = parentPropertyInChild == null ? subject.PreviousPath == null ? entity.Name : entity.Name + subject.PreviousPath : parentPropertyInChild.Name;            return GetKeyColumnName(baseName);        }         protected virtual string GetKeyColumnName(string baseName)        {            return string.Format("{0}Id", baseName);        }    } Everyone also can download the ConfORM source at google code and see example inside it. Next part I will write about multiple database factory. Hope you enjoy about it. happy coding and see you next part.
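    To round this out, here is a small usage sketch of my own (not from the post) showing how the builder above might be consumed; the assembly name and connection string are placeholders:

        var builder = new SqlServerConfORMConfigBuilder(new[] { "News.Domain.dll" });
        Configuration configuration = builder.BuildConfiguration(
            "Data Source=.;Initial Catalog=NewsDb;Integrated Security=True",
            "NewsSessionFactory");

        // Standard NHibernate from here on: build the session factory once and cache it.
        ISessionFactory sessionFactory = configuration.BuildSessionFactory();

        using (ISession session = sessionFactory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            session.Save(new Category { Name = "General", CreatedDate = DateTime.Now });
            tx.Commit();
        }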

    Read the article

  • Postfix Submission port issue

    - by RevSpot
    I have setup postfix+mailman on my debian server and i have an issue with postfix submission port. My ISP blocks SMTP on port 25 to prevent *spams and i must to use submission port (587). I have uncomment the following line from master.cf (/etc/postfix/) but nothing happens. submission inet n - - - - smtpd This is my mail logs file when i try to invite a user to mailman list Nov 6 00:35:34 myhostname postfix/qmgr[1763]: C90BF1060D: from=<[email protected]>, size=1743, nrcpt=1 (queue active) Nov 6 00:35:34 myhostname postfix/qmgr[1763]: DF54B10608: from=<[email protected]>, size=488, nrcpt=1 (queue active) Nov 6 00:35:34 myhostname postfix/qmgr[1763]: 80F0D10609: from=<[email protected]>, size=483, nrcpt=1 (queue active) Nov 6 00:35:55 myhostname postfix/smtp[2269]: connect to gmail-smtp-in.l.google.com[173.194.70.27]:25: Connection timed out Nov 6 00:35:55 myhostname postfix/smtp[2270]: connect to gmail-smtp-in.l.google.com[173.194.70.27]:25: Connection timed out Nov 6 00:35:55 myhostname postfix/smtp[2271]: connect to gmail-smtp-in.l.google.com[173.194.70.27]:25: Connection timed out Nov 6 00:36:16 myhostname postfix/smtp[2269]: connect to alt1.gmail-smtp-in.l.google.com[74.125.143.26]:25: Connection timed out Nov 6 00:36:16 myhostname postfix/smtp[2270]: connect to alt1.gmail-smtp-in.l.google.com[74.125.143.26]:25: Connection timed out Nov 6 00:36:16 myhostname postfix/smtp[2271]: connect to alt1.gmail-smtp-in.l.google.com[74.125.143.26]:25: Connection timed out Nov 6 00:36:37 myhostname postfix/smtp[2269]: connect to alt2.gmail-smtp-in.l.google.com[74.125.141.26]:25: Connection timed out Nov 6 00:36:37 myhostname postfix/smtp[2270]: connect to alt2.gmail-smtp-in.l.google.com[74.125.141.26]:25: Connection timed out Nov 6 00:36:37 myhostname4 postfix/smtp[2271]: connect to alt2.gmail-smtp-in.l.google.com[74.125.141.26]:25: Connection timed out Nov 6 00:36:58 myhostname postfix/smtp[2269]: connect to alt3.gmail-smtp-in.l.google.com[173.194.64.26]:25: Connection timed out Nov 6 00:36:58 myhostname postfix/smtp[2270]: connect to alt3.gmail-smtp-in.l.google.com[173.194.64.26]:25: Connection timed out Nov 6 00:36:58 myhostname postfix/smtp[2271]: connect to alt3.gmail-smtp-in.l.google.com[173.194.64.26]:25: Connection timed out Nov 6 00:37:19 myhostname postfix/smtp[2269]: connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out Nov 6 00:37:19 myhostname postfix/smtp[2270]: connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out Nov 6 00:37:19 myhostname postfix/smtp[2269]: C90BF1060D: to=<[email protected]>, relay=none, delay=23711, delays=23606/0.03/105/0, dsn=4.4.1, status=deferred (connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out) Nov 6 00:37:19 myhostname postfix/smtp[2271]: connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out Nov 6 00:37:19 myhostname postfix/smtp[2270]: DF54B10608: to=<[email protected]>, relay=none, delay=23882, delays=23777/0.03/105/0, dsn=4.4.1, status=deferred (connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out) Nov 6 00:37:19 myhostname postfix/smtp[2271]: 80F0D10609: to=<[email protected]>, relay=none, delay=23875, delays=23770/0.04/105/0, dsn=4.4.1, status=deferred (connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out) main.cf smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU) biff = no append_dot_mydomain = no readme_directory = no 
smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key smtpd_use_tls=yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache myhostname = mail.mydomain.com alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = mail.mydomain.com, localhost.mydomain.com,localhost relayhost = relay_domains = $mydestination, mail.mydomain.com relay_recipient_maps = hash:/var/lib/mailman/data/virtual-mailman transport_maps = hash:/etc/postfix/transport mailman_destination_recipient_limit = 1 mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mailbox_command = procmail -a "$EXTENSION" mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all local_recipient_maps = master.cf smtp inet n - - - - smtpd submission inet n - - - - smtpd # -o smtpd_tls_security_level=encrypt # -o smtpd_sasl_auth_enable=yes # -o smtpd_client_restrictions=permit_sasl_authenticated,reject # -o milter_macro_daemon_name=ORIGINATING #smtps inet n - - - - smtpd # -o smtpd_tls_wrappermode=yes # -o smtpd_sasl_auth_enable=yes # -o smtpd_client_restrictions=permit_sasl_authenticated,reject # -o milter_macro_daemon_name=ORIGINATING #628 inet n - - - - qmqpd pickup fifo n - - 60 1 pickup cleanup unix n - - - 0 cleanup qmgr fifo n - n 300 1 qmgr #qmgr fifo n - - 300 1 oqmgr tlsmgr unix - - - 1000? 1 tlsmgr rewrite unix - - - - - trivial-rewrite bounce unix - - - - 0 bounce defer unix - - - - 0 bounce trace unix - - - - 0 bounce verify unix - - - - 1 verify flush unix n - - 1000? 0 flush proxymap unix - - n - - proxymap proxywrite unix - - n - 1 proxymap smtp unix - - - - - smtp # When relaying mail as backup MX, disable fallback_relay to avoid MX loops relay unix - - - - - smtp -o smtp_fallback_relay= # -o smtp_helo_timeout=5 -o smtp_connect_timeout=5 showq unix n - - - - showq error unix - - - - - error retry unix - - - - - error discard unix - - - - - discard local unix - n n - - local virtual unix - n n - - virtual lmtp unix - - - - - lmtp anvil unix - - - - 1 anvil scache unix - - - - 1 scache # # ==================================================================== # # maildrop. See the Postfix MAILDROP_README file for details. # Also specify in main.cf: maildrop_destination_recipient_limit=1 # maildrop unix - n n - - pipe flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient} # # ==================================================================== # # See the Postfix UUCP_README file for configuration details. # uucp unix - n n - - pipe flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient) # # Other external delivery methods. # ifmail unix - n n - - pipe flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient) bsmtp unix - n n - - pipe flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient scalemail-backend unix - n n - 2 pipe flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension} mailman unix - n n - - pipe flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py ${nexthop} ${user}
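    A minimal sketch of one way around the port-25 block, assuming the ISP (or another mail provider) offers an authenticated relay on the submission port. The host name, credentials and file paths below are placeholders, not taken from the post:
      # /etc/postfix/main.cf : route all outbound mail through an authenticated smarthost
      relayhost = [smtp.example.net]:587
      smtp_sasl_auth_enable = yes
      smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
      smtp_sasl_security_options = noanonymous
      smtp_tls_security_level = may
      # /etc/postfix/sasl_passwd (then run: postmap /etc/postfix/sasl_passwd && postfix reload)
      # [smtp.example.net]:587    username:password
    The uncommented submission line in master.cf only makes this server listen on 587 for its own clients; it does not change the port Postfix uses when relaying out to Gmail, which is why the log still shows outbound connections to port 25 timing out.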

    Read the article

  • What steps can you take to ensure sane build environments when compiling software?

    - by Chris Adams
    Hi guys, I've been stuck with a compilation problem when building a standardised virtual machine on CentOS 5.4, and I'm in the dark here as to a) why this error is occurring, and b) how to fix it, and in the hope that someone else stumbles across this problem too, I'm hoping someone can help me find the solution here. I'm getting a configure: error: newly created file is older than distributed files! error when trying to compile Ruby Enterprise like below when I try to run the installer, and the solutions offered to on the forums (of checking the tine, and touching the files to update the time associated with them) don't seem to be helping here. What steps can I take to work out what the cause of this problem? [vagrant@vagrant-centos-5 ruby-enterprise-1.8.7-2009.10]$ sudo ./installer Welcome to the Ruby Enterprise Edition installer This installer will help you install Ruby Enterprise Edition 1.8.7-2009.10. Don't worry, none of your system files will be touched if you don't want them to, so there is no risk that things will screw up. You can expect this from the installation process: 1. Ruby Enterprise Edition will be compiled and optimized for speed for this system. 2. Ruby on Rails will be installed for Ruby Enterprise Edition. 3. You will learn how to tell Phusion Passenger to use Ruby Enterprise Edition instead of regular Ruby. Press Enter to continue, or Ctrl-C to abort. Checking for required software... * C compiler... found at /usr/bin/gcc * C++ compiler... found at /usr/bin/g++ * The 'make' tool... found at /usr/bin/make * Zlib development headers... found * OpenSSL development headers... found * GNU Readline development headers... found -------------------------------------------- Target directory Where would you like to install Ruby Enterprise Edition to? (All Ruby Enterprise Edition files will be put inside that directory.) [/opt/ruby-enterprise] : -------------------------------------------- Compiling and optimizing the memory allocator for Ruby Enterprise Edition In the mean time, feel free to grab a cup of coffee. ./configure --prefix=/opt/ruby-enterprise --disable-dependency-tracking checking build system type... i686-pc-linux-gnu checking host system type... i686-pc-linux-gnu checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... configure: error: newly created file is older than distributed files! Check your system clock This is a virtual machine running on virtualbox, and the time of the host and the virtual machine are identical, and up to date. I've also tried running this after updating time with an ntp-client, so no avail. I tried this after reading this post here of someone having a similar problem [vagrant@vagrant-centos-5 ruby-enterprise-1.8.7-2009.10]$ date Tue Apr 27 08:09:05 BST 2010 The other approach I've tried is to touch the top level the files in the build folder like suggested here, but this hasn't worked either (an to be honest, I'm not sure why it would have worked either) [vagrant@vagrant-centos-5 ruby-enterprise-1.8.7-2009.10]$ sudo touch ruby-enterprise-1.8.7-2009.10/* I'm not sure what I can do next here - the problem seems to be the bash configure script that returns this error error: newly created file is older than distributed files!, at line :2214 { echo "$as_me:$LINENO: checking whether build environment is sane" >&5 echo $ECHO_N "checking whether build environment is sane... 
$ECHO_C" >&6; } # Just in case sleep 1 echo timestamp > conftest.file # Do `set' in a subshell so we don't clobber the current shell's # arguments. Must try -L first in case configure is actually a # symlink; some systems play weird games with the mod time of symlinks # (eg FreeBSD returns the mod time of the symlink's containing # directory). if ( set X `ls -Lt $srcdir/configure conftest.file 2> /dev/null` if test "$*" = "X"; then # -L didn't work. set X `ls -t $srcdir/configure conftest.file` fi rm -f conftest.file if test "$*" != "X $srcdir/configure conftest.file" \ && test "$*" != "X conftest.file $srcdir/configure"; then # If neither matched, then we have a broken ls. This can happen # if, for instance, CONFIG_SHELL is bash and it inherits a # broken ls alias from the environment. This has actually # happened. Such a system could not be considered "sane". { { echo "$as_me:$LINENO: error: ls -t appears to fail. Make sure there is not a broken alias in your environment" >&5 echo "$as_me: error: ls -t appears to fail. Make sure there is not a broken alias in your environment" >&2;} { (exit 1); exit 1; }; } fi ### PROBLEM LINE #### # this line is the problem line - this is returned true, sometimes it isn't and I can't # see a pattern that that determines when this will test will pass or not. test "$2" = conftest.file ) then # Ok. : else { { echo "$as_me:$LINENO: error: newly created file is older than distributed files! Check your system clock" >&5 echo "$as_me: error: newly created file is older than distributed files! Check your system clock" >&2;} { (exit 1); exit 1; }; } fi the thing that makes this really frustrating is that this script works sometimes, when the VM has been running for an hour or so it works, but not at boot. There's nothing I see in the crontab that suggests any hourly tasks are run that might change the state of the system enough make a difference to this script working. I'm totally at a loss when it comes to debugging beyond here. What's the best approach to take here? Thanks
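    A small sketch of the usual workaround, on the assumption that the guest clock (not the host's) is behind the timestamps on the unpacked source tree. The NTP server is just an example, and the touch only affects the extracted directory:
      # bring the guest clock forward before configuring
      sudo ntpdate pool.ntp.org && sudo hwclock --systohc
      # make every file in the tree older than "now", not just the top level
      find ruby-enterprise-1.8.7-2009.10 -print0 | xargs -0 touch
      sudo ./installer
    The intermittent behaviour (works after an hour of uptime) also fits a clock-skew explanation: once the guest clock catches up with the newest file timestamp, the conftest.file comparison in configure starts passing.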

    Read the article

  • 500 Internal Server Error with PHP application

    - by James
    I have written a PHP application using Windows and XAMPP. I've been trying to run it on Ubuntu 10.10 with Lighttpd 1.4.26. Parts of the application work fine, but whenever I try to log in, I get a 500 - Internal Server Error page. The only thing that shows up in /var/log/lighttpd/error.log is 2011-02-25 13:43:13: (mod_fastcgi.c.2582) unexpected end-of-file (perhaps the fastcgi process died): pid: 1169 socket: unix:/tmp/php.socket-0 2011-02-25 13:43:13: (mod_fastcgi.c.3367) response not received, request sent: 1596 on socket: unix:/tmp/php.socket-0 for /~denton/customer-facing-portal/index.php?, closing connection If I had any output whatsoever from PHP, this would be a lot easier to debug. Any ideas on how to get some? Here is my /etc/lighttpd/lighttpd.conf file: # Debian lighttpd configuration file # ############ Options you really have to take care of #################### ## modules to load server.modules = ( "mod_alias", "mod_compress", # "mod_rewrite", # "mod_redirect", # "mod_usertrack", # "mod_expire", # "mod_flv_streaming", # "mod_evasive", "mod_setenv" ) ## a static document-root, for virtual-hosting take look at the ## server.virtual-* options server.document-root = "/var/www/" ## where to upload files to, purged daily. server.upload-dirs = ( "/var/cache/lighttpd/uploads" ) ## where to send error-messages to server.errorlog = "/var/log/lighttpd/error.log" ## files to check for if .../ is requested index-file.names = ( "index.php", "index.html", "index.htm", "default.htm", "index.lighttpd.html" ) ## Use the "Content-Type" extended attribute to obtain mime type if possible # mimetype.use-xattr = "enable" ## # which extensions should not be handle via static-file transfer # # .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" ) ######### Options that are good to be but not neccesary to be changed ####### ## Use ipv6 only if available. 
(disabled for while, check #560837) #include_shell "/usr/share/lighttpd/use-ipv6.pl" ## bind to port (default: 80) # server.port = 81 ## bind to localhost only (default: all interfaces) ## server.bind = "localhost" ## error-handler for status 404 #server.error-handler-404 = "/error-handler.html" #server.error-handler-404 = "/error-handler.php" ## to help the rc.scripts server.pid-file = "/var/run/lighttpd.pid" ## ## Format: <errorfile-prefix><status>.html ## -> ..../status-404.html for 'File not found' #server.errorfile-prefix = "/var/www/" ## virtual directory listings dir-listing.encoding = "utf-8" server.dir-listing = "enable" ### only root can use these options # # chroot() to directory (default: no chroot() ) #server.chroot = "/" ## change uid to <uid> (default: don't change) server.username = "www-data" ## change gid to <gid> (default: don't change) server.groupname = "www-data" #### compress module compress.cache-dir = "/var/cache/lighttpd/compress/" compress.filetype = ("text/plain", "text/html", "application/x-javascript", "text/css") #### url handling modules (rewrite, redirect, access) # url.rewrite = ( "^/$" => "/server-status" ) # url.redirect = ( "^/wishlist/(.+)" => "http://www.123.org/$1" ) #### expire module # expire.url = ( "/buggy/" => "access 2 hours", "/asdhas/" => "access plus 1 seconds 2 minutes") #### external configuration files ## mimetype mapping include_shell "/usr/share/lighttpd/create-mime.assign.pl" ## load enabled configuration files, ## read /etc/lighttpd/conf-available/README first include_shell "/usr/share/lighttpd/include-conf-enabled.pl" ## Set environment variables setenv.add-environment = ( "DB_URL__DEMO" => "192.168.1.231", "DB_NAME_DEMO" => "demo", "DB_USER_DEMO" => "user", "DB_PASS_DEMO" => "password", "DB_AGENCY_DEMO" => "demo" ) Here is my /etc/php5/cgi/php.ini file (sans 1641 lines of comments): [PHP] register_long_arrays = Off short_open_tag = Off engine = On short_open_tag = Off asp_tags = Off precision = 14 y2k_compliance = On output_buffering = 4096 zlib.output_compression = Off implicit_flush = Off unserialize_callback_func = serialize_precision = 100 allow_call_time_pass_reference = Off safe_mode = Off safe_mode_gid = Off safe_mode_include_dir = safe_mode_exec_dir = safe_mode_allowed_env_vars = PHP_ safe_mode_protected_env_vars = LD_LIBRARY_PATH disable_functions = disable_classes = expose_php = On max_execution_time = 30 max_input_time = 60 memory_limit = 128M error_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT display_errors = On display_startup_errors = On log_errors = On log_errors_max_len = 1024 ignore_repeated_errors = Off ignore_repeated_source = Off report_memleaks = On track_errors = On html_errors = On variables_order = "GPCS" request_order = "GP" register_globals = Off register_long_arrays = Off register_argc_argv = Off auto_globals_jit = On post_max_size = 8M magic_quotes_gpc = Off magic_quotes_runtime = Off magic_quotes_sybase = Off auto_prepend_file = auto_append_file = default_mimetype = "text/html" doc_root = user_dir = enable_dl = Off cgi.fix_pathinfo=1 file_uploads = On upload_max_filesize = 2M max_file_uploads = 20 allow_url_fopen = On allow_url_include = Off default_socket_timeout = 60 [Date] date.timezone = "America/Chicago" [filter] [iconv] [intl] [sqlite] [sqlite3] [Pcre] [Pdo] [Pdo_mysql] pdo_mysql.cache_size = 2000 pdo_mysql.default_socket= [Phar] [Syslog] define_syslog_variables = Off [mail function] SMTP = localhost smtp_port = 25 mail.add_x_header = On [SQL] sql.safe_mode = Off [ODBC] odbc.allow_persistent = On 
odbc.check_persistent = On odbc.max_persistent = -1 odbc.max_links = -1 odbc.defaultlrl = 4096 odbc.defaultbinmode = 1 [Interbase] ibase.allow_persistent = 1 ibase.max_persistent = -1 ibase.max_links = -1 ibase.timestampformat = "%Y-%m-%d %H:%M:%S" ibase.dateformat = "%Y-%m-%d" ibase.timeformat = "%H:%M:%S" [MySQL] mysql.allow_local_infile = On mysql.allow_persistent = On mysql.cache_size = 2000 mysql.max_persistent = -1 mysql.max_links = -1 mysql.default_port = mysql.default_socket = mysql.default_host = mysql.default_user = mysql.default_password = mysql.connect_timeout = 60 mysql.trace_mode = Off [MySQLi] mysqli.max_persistent = -1 mysqli.allow_persistent = On mysqli.max_links = -1 mysqli.cache_size = 2000 mysqli.default_port = 3306 mysqli.default_socket = mysqli.default_host = mysqli.default_user = mysqli.default_pw = mysqli.reconnect = Off [mysqlnd] mysqlnd.collect_statistics = On mysqlnd.collect_memory_statistics = Off [OCI8] [PostgresSQL] pgsql.allow_persistent = On pgsql.auto_reset_persistent = Off pgsql.max_persistent = -1 pgsql.max_links = -1 pgsql.ignore_notice = 0 pgsql.log_notice = 0 [Sybase-CT] sybct.allow_persistent = On sybct.max_persistent = -1 sybct.max_links = -1 sybct.min_server_severity = 10 sybct.min_client_severity = 10 [bcmath] bcmath.scale = 0 [browscap] [Session] session.save_handler = files session.use_cookies = 1 session.use_only_cookies = 1 session.name = PHPSESSID session.auto_start = 0 session.cookie_lifetime = 0 session.cookie_path = / session.cookie_domain = session.cookie_httponly = session.serialize_handler = php session.gc_probability = 1 session.gc_divisor = 1000 session.gc_maxlifetime = 1440 session.bug_compat_42 = Off session.bug_compat_warn = Off session.referer_check = session.entropy_length = 0 session.cache_limiter = nocache session.cache_expire = 180 session.use_trans_sid = 0 session.hash_function = 0 session.hash_bits_per_character = 5 url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=fakeentry" [MSSQL] mssql.allow_persistent = On mssql.max_persistent = -1 mssql.max_links = -1 mssql.min_error_severity = 10 mssql.min_message_severity = 10 mssql.compatability_mode = Off mssql.secure_connection = Off [Assertion] [COM] [mbstring] [gd] [exif] [Tidy] tidy.clean_output = Off [soap] soap.wsdl_cache_enabled=1 soap.wsdl_cache_dir="/tmp" soap.wsdl_cache_ttl=86400 soap.wsdl_cache_limit = 5 [sysvshm] [ldap] ldap.max_links = -1 [mcrypt] [dba] Update: here is /etc/lighttpd/conf-enabled/15-fastcgi-php.conf As far as I know, it's just the default config file the Ubuntu package installed. ## FastCGI programs have the same functionality as CGI programs, ## but are considerably faster through lower interpreter startup ## time and socketed communication ## ## Documentation: /usr/share/doc/lighttpd-doc/fastcgi.txt.gz ## http://redmine.lighttpd.net/projects/lighttpd/wiki/Docs:ConfigurationOptions#mod_fastcgi-fastcgi ## Start an FastCGI server for php (needs the php5-cgi package) fastcgi.server += ( ".php" => (( "bin-path" => "/usr/bin/php-cgi", "socket" => "/tmp/php.socket", "max-procs" => 1, "idle-timeout" => 20, "bin-environment" => ( "PHP_FCGI_CHILDREN" => "4", "PHP_FCGI_MAX_REQUESTS" => "10000" ), "bin-copy-environment" => ( "PATH", "SHELL", "USER" ), "broken-scriptfilename" => "enable" )) )
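    One way to get some PHP output to debug with, sketched on the assumption that the FastCGI backend is dying silently. The log path is arbitrary, and the ini file is the /etc/php5/cgi/php.ini already shown above:
      ; add to /etc/php5/cgi/php.ini
      log_errors = On
      error_log = /var/log/php_errors.log

      # make the file writable by the web server user, restart lighttpd, and watch it
      sudo touch /var/log/php_errors.log
      sudo chown www-data:www-data /var/log/php_errors.log
      sudo service lighttpd restart
      tail -f /var/log/php_errors.log    # reproduce the 500 while watching this
    If nothing ever reaches that log, the php-cgi process is probably crashing before PHP-level error handling runs, and /var/log/syslog (or running php-cgi on the failing script from the command line) is the next place to look.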

    Read the article

  • SSH not working over IPSec tunnel (Strongswan)

    - by PattPatel
    I configured a small network on a cloud virtual machine. This virtual machine has a static IP address assigned to eth0 interface that I'll call $EXTIP. mydomain.com points to $EXTIP. Inside, I have some linux containers, that get their ip through DHCP in the Subnet 10.0.0.0/24 (i called the virtual interface nat ). They run some services that can be reached through DNAT. Then I wanted to connect to these containers through an IPSec tunnel, so I configured StrongSwan. ipsec.conf: conn %default dpdaction=none rekey=no conn remote keyexchange=ikev2 ike=######## left=[$EXTIP] leftsubnet=10.0.1.0/24,10.0.0.0/24 leftauth=pubkey lefthostaccess=yes leftcert=########.pem leftfirewall=yes leftid="#########" right=%any rightsourceip=10.0.1.0/24 rightauth=######## rightid=%any rightsendcert=never eap_identity=%any auto=add type=tunnel Everything works fine, IPSec clients get IPs of the 10.0.1.0/24 subnet and can reach the containers subnet. My problem is that I'm not able to get SSH connections over the tunnel. It simply does not work, ssh client does not produce any output. Sniffing with tcpdump gives: tcpdump: 09:50:29.648206 ARP, Request who-has 10.0.0.1 tell mydomain.com, length 28 09:50:29.648246 ARP, Reply 10.0.0.1 is-at 00:ff:aa:00:00:01 (oui Unknown), length 28 09:50:29.648253 IP mydomain.com.54869 > 10.0.0.1.ssh: Flags [S], seq 4007849772, win 29200, options [mss 1460,sackOK,TS val 1151153 ecr 0,nop,wscale 7], length 0 09:50:29.648296 IP 10.0.0.1.ssh > 10.0.1.2.54869: Flags [S.], seq 2809522632, ack 4007849773, win 14480, options [mss 1460,sackOK,TS val 11482992 ecr 1151153,nop,wscale 6], length 0 09:50:29.677225 IP mydomain.com.54869 > 10.0.0.1.ssh: Flags [.], ack 2809522633, win 229, options [nop,nop,TS val 1151162 ecr 11482992], length 0 09:50:29.679370 IP mydomain.com.54869 > 10.0.0.1.ssh: Flags [P.], seq 0:23, ack 1, win 229, options [nop,nop,TS val 1151162 ecr 11482992], length 23 09:50:29.679403 IP 10.0.0.1.ssh > 10.0.1.2.54869: Flags [.], ack 24, win 227, options [nop,nop,TS val 11483002 ecr 1151162], length 0 09:50:29.684337 IP 10.0.0.1.ssh > 10.0.1.2.54869: Flags [P.], seq 1:32, ack 24, win 227, options [nop,nop,TS val 11483003 ecr 1151162], length 31 09:50:29.685471 IP 10.0.0.1.ssh > 10.0.1.2.54869: Flags [.], seq 32:1480, ack 24, win 227, options [nop,nop,TS val 11483003 ecr 1151162], length 1448 09:50:29.685519 IP mydomain.com > 10.0.0.1: ICMP mydomain.com unreachable - need to frag (mtu 1422), length 556 09:50:29.685567 IP 10.0.0.1.ssh > 10.0.1.2.54869: Flags [.], seq 32:1402, ack 24, win 227, options [nop,nop,TS val 11483003 ecr 1151162], length 1370 09:50:29.685572 IP 10.0.0.1.ssh > 10.0.1.2.54869: Flags [.], seq 1402:1480, ack 24, win 227, options [nop,nop,TS val 11483003 ecr 1151162], length 78 09:50:29.714601 IP mydomain.com.54869 > 10.0.0.1.ssh: Flags [.], ack 32, win 229, options [nop,nop,TS val 1151173 ecr 11483003], length 0 09:50:29.714642 IP 10.0.0.1.ssh > 10.0.1.2.54869: Flags [P.], seq 1480:1600, ack 24, win 227, options [nop,nop,TS val 11483012 ecr 1151173], length 120 09:50:29.723649 IP mydomain.com.54869 > 10.0.0.1.ssh: Flags [P.], seq 1393:1959, ack 32, win 229, options [nop,nop,TS val 1151174 ecr 11483003], length 566 09:50:29.723677 IP 10.0.0.1.ssh > 10.0.1.2.54869: Flags [.], ack 24, win 227, options [nop,nop,TS val 11483015 ecr 1151173,nop,nop,sack 1 {1394:1960}], length 0 09:50:29.725688 IP mydomain.com.54869 > 10.0.0.1.ssh: Flags [.], ack 1480, win 251, options [nop,nop,TS val 1151177 ecr 11483003], length 0 09:50:29.952394 IP 10.0.0.1.ssh > 
10.0.1.2.54869: Flags [P.], seq 1480:1600, ack 24, win 227, options [nop,nop,TS val 11483084 ecr 1151173,nop,nop,sack 1 {1394:1960}], length 120 09:50:29.981056 IP mydomain.com.54869 > 10.0.0.1.ssh: Flags [.], ack 1600, win 251, options [nop,nop,TS val 1151253 ecr 11483084,nop,nop,sack 1 {1480:1600}], length 0 If you need it this is my iptables configuration file: iptables: *filter :INPUT ACCEPT [144:9669] :FORWARD DROP [0:0] :OUTPUT ACCEPT [97:15649] :interfacce-trusted - [0:0] :porte-trusted - [0:0] -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A FORWARD -j interfacce-trusted -A FORWARD -j porte-trusted -A FORWARD -j REJECT --reject-with icmp-host-unreachable -A FORWARD -d 10.0.0.1/32 -p tcp -m tcp --dport 80 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT -A FORWARD -d 10.0.0.1/32 -p tcp -m tcp --dport 443 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT -A FORWARD -d 10.0.0.3/32 -p tcp -m tcp --dport 1234 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT -A interfacce-trusted -i nat -j ACCEPT -A porte-trusted -d 10.0.0.1/32 -p tcp -m tcp --dport 80 -j ACCEPT -A porte-trusted -d 10.0.0.1/32 -p tcp -m tcp --dport 443 -j ACCEPT -A porte-trusted -d 10.0.0.3/32 -p tcp -m tcp --dport 1234 -j ACCEPT COMMIT *nat :PREROUTING ACCEPT [10:600] :INPUT ACCEPT [10:600] :OUTPUT ACCEPT [4:268] :POSTROUTING ACCEPT [18:1108] -A PREROUTING -d [$EXTIP] -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.0.0.1:80 -A PREROUTING -d [$EXTIP] -p tcp -m tcp --dport 443 -j DNAT --to-destination 10.0.0.1:443 -A PREROUTING -d [$EXTIP] -p tcp -m tcp --dport 8069 -j DNAT --to-destination 10.0.0.3:1234 -A POSTROUTING -s 10.0.0.0/24 -o eth0 -m policy --dir out --pol ipsec -j ACCEPT -A POSTROUTING -s 10.0.1.0/24 -o nat -j MASQUERADE -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE COMMIT Probably I'm missing something stupid... Thanks in advance for helping :))
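    The "need to frag (mtu 1422)" ICMP in the capture points at a path-MTU problem: the large SSH key-exchange packets never fit through the tunnel. A commonly used mitigation is to clamp TCP MSS on forwarded traffic; this is a sketch only, and the exact table/chain placement may need adapting to the ruleset above:
      # clamp the MSS of forwarded TCP connections to the discovered path MTU
      iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
      # or pin it explicitly below the 1422-byte tunnel MTU seen in tcpdump
      # iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1360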

    Read the article

  • Citrix Xen VM's lose networking

    - by Ash
    My client has a XenServer 6.0.2 installation with 2 Window Server 2008 R2 virtual machines. Whenever the virtual machines are rebooted they lose their IP settings (IP address, subnet, gateway). Each time after a reboot I need to login to each VM via XenCenter and re-apply the required static IP settings. This causes issues with connected iSCSI drives within each VM - drives need to be reconnected after each reboot. For example, a network adapter has the following settings pre-reboot: Description . . . . . . . . . . . : Citrix PV Ethernet Adapter #0 Physical Address. . . . . . . . . : C6-FB-A2-4F-2C-F3 IPv4 Address. . . . . . . . . . . : 10.101.0.101(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : 10.101.0.10 DNS Servers . . . . . . . . . . . : 10.101.0.100 NetBIOS over Tcpip. . . . . . . . : Enabled Post-reboot: Description . . . . . . . . . . . : Citrix PV Ethernet Adapter #0 Physical Address. . . . . . . . . : C6-FB-A2-4F-2C-F3 Autoconfiguration IPv4 Address. . : 169.254.153.174(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.0.0 Default Gateway . . . . . . . . . : DNS Servers . . . . . . . . . . . : 10.101.0.100 NetBIOS over Tcpip. . . . . . . . : Enabled Under XenCenter -- Virtual Network Interfaces, each adapter is set to a static MAC address (i.e. "Use this MAC address"). I have tried the following commands within one VM but this had no effect: netsh winsock reset catalog netsh int ip reset Can someone please help?
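    Until the underlying XenServer/PV-driver issue is found, a stopgap is to re-apply the addresses from inside each guest at boot. A sketch using netsh; the adapter name is an assumption, the addresses are taken from the ipconfig output above, and it would run as a startup scheduled task with admin rights:
      rem fix-ip.cmd : reapply static IP settings after boot
      netsh interface ipv4 set address name="Local Area Connection" static 10.101.0.101 255.255.255.0 10.101.0.10
      netsh interface ipv4 set dnsservers name="Local Area Connection" static 10.101.0.100 primary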

    Read the article

  • How do you automap List<float> or float[] with Fluent NHibernate?

    - by Tom Bushell
    Having successfully gotten a sample program working, I'm now starting to do Real Work with Fluent NHibernate - trying to use Automapping on my project's class heirarchy. It's a scientific instrumentation application, and the classes I'm mapping have several properties that are arrays of floats e.g. private float[] _rawY; public virtual float[] RawY { get { return _rawY; } set { _rawY = value; } } These arrays can contain a maximum of 500 values. I didn't expect Automapping to work on arrays, but tried it anyway, with some success at first. Each array was auto mapped to a BLOB (using SQLite), which seemed like a viable solution. The first problem came when I tried to call SaveOrUpdate on the objects containing the arrays - I got "No persister for float[]" exceptions. So my next thought was to convert all my arrays into ILists e.g. public virtual IList<float> RawY { get; set; } But now I get: NHibernate.MappingException: Association references unmapped class: System.Single Since Automapping can deal with lists of complex objects, it never occured to me it would not be able to map lists of basic types. But after doing some Googling for a solution, this seems to be the case. Some people seem to have solved the problem, but the sample code I saw requires more knowledge of NHibernate than I have right now - I didn't understand it. Questions: 1. How can I make this work with Automapping? 2. Also, is it better to use arrays or lists for this application? I can modify my app to use either if necessary (though I prefer lists). Edit: I've studied the code in Mapping Collection of Strings, and I see there is test code in the source that sets up an IList of strings, e.g. public virtual IList<string> ListOfSimpleChildren { get; set; } [Test] public void CanSetAsElement() { new MappingTester<OneToManyTarget>() .ForMapping(m => m.HasMany(x => x.ListOfSimpleChildren).Element("columnName")) .Element("class/bag/element").Exists(); } so this must be possible using pure Automapping, but I've had zero luck getting anything to work, probably because I don't have the requisite knowlege of manually mapping with NHibernate. Starting to think I'm going to have to hack this (by encoding the array of floats as a single string, or creating a class that contains a single float which I then aggregate into my lists), unless someone can tell me how to do it properly. End Edit Here's my CreateSessionFactory method, if that helps formulate a reply... private static ISessionFactory CreateSessionFactory() { ISessionFactory sessionFactory = null; const string autoMapExportDir = "AutoMapExport"; if( !Directory.Exists(autoMapExportDir) ) Directory.CreateDirectory(autoMapExportDir); try { var autoPersistenceModel = AutoMap.AssemblyOf<DlsAppOverlordExportRunData>() .Where(t => t.Namespace == "DlsAppAutomapped") .Conventions.Add( DefaultCascade.All() ) ; sessionFactory = Fluently.Configure() .Database(SQLiteConfiguration.Standard .UsingFile(DbFile) .ShowSql() ) .Mappings(m => m.AutoMappings.Add(autoPersistenceModel) .ExportTo(autoMapExportDir) ) .ExposeConfiguration(BuildSchema) .BuildSessionFactory() ; } catch (Exception e) { Debug.WriteLine(e); } return sessionFactory; }
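    One direction worth trying, sketched as an automapping override rather than a full manual map. The table and column names here are made up, and this is untested against the real model:
      using FluentNHibernate.Automapping;
      using FluentNHibernate.Automapping.Alterations;

      public class DlsAppOverlordExportRunDataMap : IAutoMappingOverride<DlsAppOverlordExportRunData>
      {
          public void Override(AutoMapping<DlsAppOverlordExportRunData> mapping)
          {
              // Map IList<float> RawY as a collection of simple elements, not as an
              // association to an entity (which is what "Association references
              // unmapped class: System.Single" is complaining about).
              mapping.HasMany(x => x.RawY)
                     .Table("RawYValue")
                     .Element("Value");
          }
      }
    The override assembly is then registered in CreateSessionFactory with something like AutoMap.AssemblyOf<DlsAppOverlordExportRunData>(...).UseOverridesFromAssemblyOf<DlsAppOverlordExportRunDataMap>().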

    Read the article

  • Make interchangeable class types via pointer casting only, without having to allocate any new objects?

    - by HostileFork
    UPDATE: I do appreciate "don't want that, want this instead" suggestions. They are useful, especially when provided in context of the motivating scenario. Still...regardless of goodness/badness, I've become curious to find a hard-and-fast "yes that can be done legally in C++11" vs "no it is not possible to do something like that". I want to "alias" an object pointer as another type, for the sole purpose of adding some helper methods. The alias cannot add data members to the underlying class (in fact, the more I can prevent that from happening the better!) All aliases are equally applicable to any object of this type...it's just helpful if the type system can hint which alias is likely the most appropriate. There should be no information about any specific alias that is ever encoded in the underlying object. Hence, I feel like you should be able to "cheat" the type system and just let it be an annotation...checked at compile time, but ultimately irrelevant to the runtime casting. Something along these lines: Node<AccessorFoo>* fooPtr = Node<AccessorFoo>::createViaFactory(); Node<AccessorBar>* barPtr = reinterpret_cast< Node<AccessorBar>* >(fooPtr); Under the hood, the factory method is actually making a NodeBase class, and then using a similar reinterpret_cast to return it as a Node<AccessorFoo>*. The easy way to avoid this is to make these lightweight classes that wrap nodes and are passed around by value. Thus you don't need casting, just Accessor classes that take the node handle to wrap in their constructor: AccessorFoo foo (NodeBase::createViaFactory()); AccessorBar bar (foo.getNode()); But if I don't have to pay for all that, I don't want to. That would involve--for instance--making a special accessor type for each sort of wrapped pointer (AccessorFooShared, AccessorFooUnique, AccessorFooWeak, etc.) Having these typed pointers being aliased for one single pointer-based object identity is preferable, and provides a nice orthogonality. So back to that original question: Node<AccessorFoo>* fooPtr = Node<AccessorFoo>::createViaFactory(); Node<AccessorBar>* barPtr = reinterpret_cast< Node<AccessorBar>* >(fooPtr); Seems like there would be some way to do this that might be ugly but not "break the rules". According to ISO14882:2011(e) 5.2.10-7: An object pointer can be explicitly converted to an object pointer of a different type.70 When a prvalue v of type "pointer to T1" is converted to the type "pointer to cv T2", the result is static_cast(static_cast(v)) if both T1 and T2 are standard-layout types (3.9) and the alignment requirements of T2 are no stricter than those of T1, or if either type is void. Converting a prvalue of type "pointer to T1" to the type "pointer to T2" (where T1 and T2 are object types and where the alignment requirements of T2 are no stricter than those of T1) and back to its original type yields the original pointer value. The result of any other such pointer conversion is unspecified. 
Drilling into the definition of a "standard-layout class", we find: has no non-static data members of type non-standard-layout-class (or array of such types) or reference, and has no virtual functions (10.3) and no virtual base classes (10.1), and has the same access control (clause 11) for all non-static data members, and has no non-standard-layout base classes, and either has no non-static data member in the most-derived class and at most one base class with non-static data members, or has no base classes with non-static data members, and has no base classes of the same type as the first non-static data member. Sounds like working with something like this would tie my hands a bit with no virtual methods in the accessors or the node. Yet C++11 apparently has std::is_standard_layout to keep things checked. Can this be done safely? Appears to work in gcc-4.7, but I'd like to be sure I'm not invoking undefined behavior.
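    As a compile-time guard for the scheme described, a sketch along these lines may help; NodeBase is a stand-in here, and the real factory and accessor machinery is omitted:
      #include <type_traits>

      struct NodeBase {        // no virtuals; all data lives here
          int data;
      };

      template<typename Accessor>
      struct Node : NodeBase { // adds behaviour only, never data members
          static Node* createViaFactory() {
              static_assert(std::is_standard_layout<Node>::value,
                            "aliasing casts rely on Node<Accessor> staying standard-layout");
              return reinterpret_cast<Node*>(new NodeBase());
          }
      };
    The static_assert documents and enforces the constraints quoted from 5.2.10-7 above, but it cannot by itself make the round-trip cast well-defined; it only catches the case where someone later adds a virtual function or an extra data member and silently breaks the layout assumption.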

    Read the article

  • Problems with passing an anonymous temporary function-object to a templatized constructor.

    - by Akanksh
    I am trying to attach a function-object to be called on destruction of a templatized class. However, I can not seem to be able to pass the function-object as a temporary. The warning I get is (if the comment the line xi.data = 5;): warning C4930: 'X<T> xi2(writer (__cdecl *)(void))': prototyped function not called (was a variable definition intended?) with [ T=int ] and if I try to use the constructed object, I get a compilation error saying: error C2228: left of '.data' must have class/struct/union I apologize for the lengthy piece of code, but I think all the components need to be visible to assess the situation. template<typename T> struct Base { virtual void run( T& ){} virtual ~Base(){} }; template<typename T, typename D> struct Derived : public Base<T> { virtual void run( T& t ) { D d; d(t); } }; template<typename T> struct X { template<typename R> X(const R& r) { std::cout << "X(R)" << std::endl; ptr = new Derived<T,R>(); } X():ptr(0) { std::cout << "X()" << std::endl; } ~X() { if(ptr) { ptr->run(data); delete ptr; } else { std::cout << "no ptr" << std::endl; } } Base<T>* ptr; T data; }; struct writer { template<typename T> void operator()( const T& i ) { std::cout << "T : " << i << std::endl; } }; int main() { { writer w; X<int> xi2(w); //X<int> xi2(writer()); //This does not work! xi2.data = 15; } return 0; }; The reason I am trying this out is so that I can "somehow" attach function-objects types with the objects without keeping an instance of the function-object itself within the class. Thus when I create an object of class X, I do not have to keep an object of class writer within it, but only a pointer to Base<T> (I'm not sure if I need the <T> here, but for now its there). The problem is that I seem to have to create an object of writer and then pass it to the constructor of X rather than call it like X<int> xi(writer(); I might be missing something completely stupid and obvious here, any suggestions?
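    For what it's worth, the commented-out line that "does not work" looks like the most vexing parse rather than anything about temporaries and templates: X<int> xi2(writer()); declares a function named xi2 taking a writer(*)() and returning X<int>, which is exactly what warning C4930 is hinting at. Two ways to force an object definition, reusing the X and writer types from the code above (the second needs a C++11 compiler):
      X<int> xi2((writer()));   // extra parentheses: now it's a variable definition
      X<int> xi3{writer{}};     // C++11 uniform initialization avoids the parse entirely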

    Read the article

  • How to reserve public API to internal usage in .NET?

    - by mark
    Dear ladies and sirs. Let me first present the case, which will explain my question. This is going to be a bit long, so I apologize in advance :-). I have objects and collections, which should support the Merge API (it is my custom API, the signature of which is immaterial for this question). This API must be internal, meaning only my framework should be allowed to invoke it. However, derived types should be able to override the basic implementation. The natural way to implement this pattern as I see it, is this: The Merge API is declared as part of some internal interface, let us say IMergeable. Because the interface is internal, derived types would not be able to implement it directly. Rather they must inherit it from a common base type. So, a common base type is introduced, which would implement the IMergeable interface explicitly, where the interface methods delegate to respective protected virtual methods, providing the default implementation. This way the API is only callable by my framework, but derived types may override the default implementation. The following code snippet demonstrates the concept: internal interface IMergeable { void Merge(object obj); } public class BaseFrameworkObject : IMergeable { protected virtual void Merge(object obj) { // The default implementation. } void IMergeable.Merge(object obj) { Merge(obj); } } public class SomeThirdPartyObject : BaseFrameworkObject { protected override void Merge(object obj) { // A derived type implementation. } } All is fine, provided a single common base type suffices, which is usually true for non collection types. The thing is that collections must be mergeable as well. Collections do not play nicely with the presented concept, because developers do not develop collections from the scratch. There are predefined implementations - observable, filtered, compound, read-only, remove-only, ordered, god-knows-what, ... They may be developed from scratch in-house, but once finished, they serve wide range of products and should never be tailored to some specific product. Which means, that either: they do not implement the IMergeable interface at all, because it is internal to some product the scope of the IMergeable interface is raised to public and the API becomes open and callable by all. Let us refer to these collections as standard collections. Anyway, the first option screws my framework, because now each possible standard collection type has to be paired with the respective framework version, augmenting the standard with the IMergeable interface implementation - this is so bad, I am not even considering it. The second option breaks the framework as well, because the IMergeable interface should be internal for a reason (whatever it is) and now this interface has to open to all. So what to do? My solution is this. make IMergeable public API, but add an extra parameter to the Merge method, I call it a security token. The interface implementation may check that the token references some internal object, which is never exposed to the outside. If this is the case, then the method was called from within the framework, otherwise - some outside API consumer attempted to invoke it and so the implementation can blow up with a SecurityException. 
Here is the modified code snippet demonstrating this concept: internal static class InternalApi { internal static readonly object Token = new object(); } public interface IMergeable { void Merge(object obj, object token); } public class BaseFrameworkObject : IMergeable { protected virtual void Merge(object obj) { // The default implementation. } public void Merge(object obj, object token) { if (!object.ReferenceEquals(token, InternalApi.Token)) { throw new SecurityException("bla bla bla"); } Merge(obj); } } public class SomeThirdPartyObject : BaseFrameworkObject { protected override void Merge(object obj) { // A derived type implementation. } } Of course, this is less explicit than having an internally scoped interface and the check is moved from the compile time to run time, yet this is the best I could come up with. Now, I have a gut feeling that there is a better way to solve the problem I have presented. I do not know, may be using some standard Code Access Security features? I have only vague understanding of it, but can LinkDemand attribute be somehow related to it? Anyway, I would like to hear other opinions. Thanks.
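    Since the standard collections are described as in-house code, one alternative worth weighing against the runtime token check is InternalsVisibleTo, which keeps the interface internal and moves the check back to compile time. The assembly names below are examples only:
      // in the framework assembly's AssemblyInfo.cs
      [assembly: System.Runtime.CompilerServices.InternalsVisibleTo("Company.Collections")]
      [assembly: System.Runtime.CompilerServices.InternalsVisibleTo("Company.Collections.Observable")]

      // IMergeable can then stay internal, yet the listed assemblies may still
      // implement it on their collection types.
      internal interface IMergeable
      {
          void Merge(object obj);
      }
    This does not help genuinely third-party collections, but neither does the token approach once the token value leaks; for external code the protected-virtual base class remains the open part of the API.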

    Read the article

  • Unity 3D won't load - Choosing "Ubuntu" during login still loads Unity 2D

    - by Shasteriskt
    I just installed Ubuntu 11.10 on my ASUS N43SL laptop, and Unity 3D loaded right after installation. But upon reboot, I noticed that Ubuntu loaded Unity 2D instead of Unity 3D. I logged off and made sure I had "Ubuntu" selected during login, but after many attempts Unity 3D just won't load. Just to make it definite, I ran echo $DESKTOP_SESSION in the terminal and it says ubuntu-2d, which is not expected. My laptop runs on an i5 with a 1 GB NVIDIA GT540M, so I don't think it's a case of lacking graphics capability. And yes, I have installed the proprietary driver using "Additional Drivers" in "System Settings". What can be the cause of this problem? How can it be fixed?
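    A couple of quick checks that usually narrow this down (unity_support_test ships with the nux-tools package on 11.10; glxinfo comes from mesa-utils):
      # does the session consider the GPU capable of Unity 3D?
      /usr/lib/nux/unity_support_test -p
      # which driver is really rendering? (Optimus laptops often fall back to software)
      glxinfo | grep -i "opengl renderer"
    The N43SL is an NVIDIA Optimus machine, and on 11.10 the discrete GPU is typically unused without Bumblebee, so the test above reporting software rendering would explain the silent fall-back to Unity 2D.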

    Read the article

  • Cannot find winemenubuilder.exe when trying to run StarCraft 2

    - by Gernot
    Recently I installed StarCraft 2 via PlayOnLinux. The installation went without any problem; everything was fine. But when I want to start the game now, it crashes. If I start it from the terminal I get the following error:
      optirun /usr/share/playonlinux/playonlinux --run "StarCraft II Wings of Liberty"
      [POL_Wine_SetVersionEnv] Message: Setting wine version path: 1.3.27, amd64
      [POL_Wine_SetVersionEnv] Message: "/home/gernot/.PlayOnLinux//wine/linux-amd64/1.3.27" exists
      [POL_Wine] Message: Running wine-1.3.27 StarCraft II.exe
      wine: cannot find L"C:\\windows\\system32\\winemenubuilder.exe"
      rm: Cannot remove »*“ : Can't find directory or file.
    Does anybody have an idea what to do?
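    The winemenubuilder message itself is normally harmless (it only creates desktop menu entries), so the crash is likely elsewhere; still, it can be ruled out with a DLL override, sketched here for the same PlayOnLinux launch command:
      WINEDLLOVERRIDES="winemenubuilder.exe=d" optirun /usr/share/playonlinux/playonlinux --run "StarCraft II Wings of Liberty"
    If the game still dies, running it once without optirun (or via the PlayOnLinux debug console) should show whether the failure is on the Bumblebee side or the Wine side.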

    Read the article

  • Custom Live CD using UCK

    - by phifer2088
    I am trying to create a custom live ISO that I can place onto thumb drives for the purpose of using PuTTY to program Cisco switches and routers. I have Ubuntu 14.04 LTS installed on a laptop and have also installed UCK. I used the desktop image for 14.04 and created a custom ISO. I was able to open a terminal, install the PuTTY application and add a new user. I added the new user to the dialout group so they could then access the serial port. I am not exactly where I want to be yet, but it is a start. I have loaded the ISO onto a USB drive using UNetbootin, but it will not boot into the live CD for me to test that everything in the image works the way I need. Is this because of the image I used? Any help would be great. Thank you.
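    Two things worth trying before blaming the image itself: testing the ISO in a VM takes the USB stick out of the equation, and a raw dd write sidesteps UNetbootin's bootloader entirely. The ISO filename and device name below are examples; double-check /dev/sdX before writing:
      # boot the ISO directly in a VM to confirm the image itself is fine (needs qemu installed)
      qemu-system-x86_64 -m 1024 -cdrom custom-livecd.iso
      # Ubuntu 14.04 desktop ISOs are isohybrid, so they can usually be written raw to the stick
      sudo dd if=custom-livecd.iso of=/dev/sdX bs=4M && sync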

    Read the article

  • How to debug a .bash_profile

    - by Blankman
    I was updating my .bash_profile, and unfortunately I made a few changes and now I am getting:
      env: bash: No such file or directory
      env: bash: No such file or directory
      env: bash: No such file or directory
      env: bash: No such file or directory
      env: bash: No such file or directory
      -bash: tar: command not found
      -bash: grep: command not found
      -bash: cat: command not found
      -bash: find: command not found
      -bash: dirname: command not found
      -bash: /preexec.sh.lib: No such file or directory
      -bash: preexec_install: command not found
      -bash: sed: command not found
      -bash: git: command not found
    My .bash_profile actually pulls in other .sh files (sources them), so I am not exactly sure which modification may have caused this. Now if I even try to list files, I get:
      >ls
      -bash: ls: command not found
      -bash: sed: command not found
      -bash: git: command not found
    Any tips on how to trace the source of the error, and how to be able to use the terminal for basic things like listing files etc.?
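    A sketch of how to get a usable shell back and find the offending line, assuming the usual cause (PATH being overwritten instead of appended to somewhere in the sourced files):
      # restore a sane PATH for the current shell only
      export PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
      # trace the profile line by line to see where PATH goes wrong
      bash -x ~/.bash_profile 2>&1 | less
    In the trace, look for an assignment like PATH=/something rather than PATH=/something:$PATH.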

    Read the article

  • Every command fails with "command not found" after changing .bash_profile?

    - by Blankman
    I was updating my .bash_profile, and unfortunately I made a few changes and now I am getting:
      env: bash: No such file or directory
      env: bash: No such file or directory
      env: bash: No such file or directory
      env: bash: No such file or directory
      env: bash: No such file or directory
      -bash: tar: command not found
      -bash: grep: command not found
      -bash: cat: command not found
      -bash: find: command not found
      -bash: dirname: command not found
      -bash: /preexec.sh.lib: No such file or directory
      -bash: preexec_install: command not found
      -bash: sed: command not found
      -bash: git: command not found
    My .bash_profile actually pulls in other .sh files (sources them), so I am not exactly sure which modification may have caused this. Now if I even try to list files, I get:
      >ls
      -bash: ls: command not found
      -bash: sed: command not found
      -bash: git: command not found
    Any tips on how to trace the source of the error, and how to be able to use the terminal for basic things like listing files etc.?
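    (Same problem as the previous entry.) If even tracing is awkward, a shell that skips the profile entirely gives a clean environment to edit from, for example:
      env PATH=/usr/bin:/bin /bin/bash --noprofile --norc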

    Read the article

  • 12.04 LTS: unity --reset hangs

    - by Gregory R. Pace
    Nearly each time I reboot my machine, the system panel and integrated app menus fail to load. At a terminal, when issuing 'unity --reset', I get the following errors: ... Initializing widget options...done Initializing winrules options...done Initializing wobbly options...done ERROR 2012-11-05 04:36:48 unity.glib-gobject <unknown>:0 g_object_unref: assertion `G_IS_OBJECT (object)' failed ERROR 2012-11-05 04:36:48 unity.gtk <unknown>:0 gtk_window_resize: assertion `width > 0' failed WARN 2012-11-05 04:37:14 unity <unknown>:0 Unable to fetch children: No such interface `org.ayatana.bamf.view' on object at path /org/ayatana/bamf/application885622223 ERROR 2012-11-05 04:37:21 unity.glib-gobject <unknown>:0 g_object_set_qdata: assertion `G_IS_OBJECT (object)' failed Setting Update "main_menu_key" Setting Update "run_key" WARN 2012-11-05 04:38:06 unity.iconloader IconLoader.cpp:438 Unable to load icon stock-person at size 24 WARN 2012-11-05 04:38:26 unity.glib.dbusproxy GLibDBusProxy.cpp:182 Unable to connect to proxy: Error calling StartServiceByName for com.canonical.Unity.Lens.Applications: Timeout was reached WARN 2012-11-05 04:38:26 unity.glib.dbusproxy GLibDBusProxy.cpp:182 Unable to connect to proxy: Error calling StartServiceByName for com.canonical.Unity.Lens.Applications: Timeout was reached The procedure hangs at this point. Any ideas how to solve these problems ? Thanks in advance.
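    On 12.04 unity --reset is known to be unreliable; resetting the Compiz/Unity keys in dconf and restarting the shell is the usual substitute. A sketch, to be run from a terminal inside the session (the desktop will blank briefly):
      dconf reset -f /org/compiz/
      setsid unity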

    Read the article

  • How to fix the nautilus desktop on Linux Mint

    - by user59530
    so I'm using Linux Mint 13 with Cinnamon and suddenly there are no icons on the desktop and the right click doesn't work, it's like the desktop doesn't start up at all, but the Cinnamon interface and everything else are working just fine. This happens only when I open the session with Cinnamon, if I start the session on the classic Gnome or MATE the desktop works. I tried to re-install Cinnamon but nothing changed. Then, I noticed that there are some little problems in Nautilus (sometimes menus aren't the color they're supposed to be), so I'm convinced that Nautilus might be the problem, but I don't know how to fix this, I've tried a few thing but I'm starting to fear that I'm only making it worse. Also, when I open the terminal and type in nautilus here's what's shows up, any help? (nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:85:17: Not using units is deprecated. Assuming 'px'. (nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:192:17: Not using units is deprecated. Assuming 'px'. (nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:228:17: Not using units is deprecated. Assuming 'px'. (nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:275:17: Not using units is deprecated. Assuming 'px'. (nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:310:17: Not using units is deprecated. Assuming 'px'. (nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:389:17: Not using units is deprecated. Assuming 'px'. (nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:737:17: Not using units is deprecated. Assuming 'px'. (nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:1095:17: Not using units is deprecated. Assuming 'px'. (nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:1137:17: Not using units is deprecated. Assuming 'px'. (nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:1755:17: Not using units is deprecated. Assuming 'px'. (nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:1856:17: Not using units is deprecated. Assuming 'px'. (nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:1873:18: Not using units is deprecated. Assuming 'px'. (nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:1889:17: Not using units is deprecated. Assuming 'px'. (nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:1947:17: Not using units is deprecated. Assuming 'px'. (nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:1954:17: Not using units is deprecated. Assuming 'px'. (nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:1967:17: Not using units is deprecated. Assuming 'px'. (nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:2025:17: Not using units is deprecated. Assuming 'px'. (nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:2075:17: Not using units is deprecated. Assuming 'px'. (nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:2090:17: Not using units is deprecated. Assuming 'px'. (nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:2195:17: Not using units is deprecated. Assuming 'px'. (nautilus:2906): Gtk-WARNING **: Theme parsing error: gnome-panel.css:92:17: Not using units is deprecated. Assuming 'px'. (nautilus:2906): Gtk-WARNING **: Theme parsing error: nautilus.css:15:15: Not using units is deprecated. Assuming 'px'. 
(nautilus:2906): Gtk-WARNING **: Theme parsing error: nautilus.css:15:17: Not using units is deprecated. Assuming 'px'. (nautilus:2906): Gtk-WARNING **: Theme parsing error: nautilus.css:79:17: Not using units is deprecated. Assuming 'px'. (nautilus:2906): Gtk-WARNING **: Theme parsing error: nautilus.css:84:17: Not using units is deprecated. Assuming 'px'. (nautilus:2906): Gtk-WARNING **: Theme parsing error: nautilus.css:113:17: Not using units is deprecated. Assuming 'px'. (nautilus:2906): Gtk-WARNING **: Theme parsing error: nautilus.css:118:18: Not using units is deprecated. Assuming 'px'. (nautilus:2906): Gtk-WARNING **: Theme parsing error: nemo.css:15:15: Not using units is deprecated. Assuming 'px'. (nautilus:2906): Gtk-WARNING **: Theme parsing error: nemo.css:15:17: Not using units is deprecated. Assuming 'px'. (nautilus:2906): Gtk-WARNING **: Theme parsing error: nemo.css:79:17: Not using units is deprecated. Assuming 'px'. (nautilus:2906): Gtk-WARNING **: Theme parsing error: nemo.css:84:17: Not using units is deprecated. Assuming 'px'. (nautilus:2906): Gtk-WARNING **: Theme parsing error: nemo.css:113:17: Not using units is deprecated. Assuming 'px'. (nautilus:2906): Gtk-WARNING **: Theme parsing error: nemo.css:118:18: Not using units is deprecated. Assuming 'px'. (nautilus:2906): Gtk-WARNING *: Theme parsing error: unity.css:21:18: Not using units is deprecated. Assuming 'px'. Initializing nautilus-dropbox 1.4.0 Initializing nautilus-open-terminal extension * Message: Initializing gksu extension...
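    On Mint 13 the desktop icons are still drawn by Nautilus, and the Gtk theme warnings above are usually cosmetic. One thing worth checking is whether desktop drawing has simply been switched off; a sketch, assuming the GNOME 3.4 gsettings key that Mint 13 is based on:
      gsettings get org.gnome.desktop.background show-desktop-icons
      gsettings set org.gnome.desktop.background show-desktop-icons true
      nautilus -q    # quit the running instance; the Cinnamon session should restart it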

    Read the article

  • Ubuntu tweak not showing all the menus

    - by Gaurav Butola
    Ubuntu Tweak doesn't have the Startup option, which includes Session Manager and Session Control, and a few other options are missing too. I am running the latest version available for download. I remember having all those menus in Lucid. For a clearer comparison, see the menus on the Ubuntu Tweak homepage (http://ubuntu-tweak.com/) versus mine. How can I get these options back? Here is the error I get when I run Ubuntu Tweak from a terminal:
      ERROR:dbus.proxies:Introspect error on :1.142:/com/ubuntu_tweak/daemon: dbus.exceptions.DBusException: org.freedesktop.DBus.Error.AccessDenied: Rejected send message, 1 matched rules; type="method_call", sender=":1.141" (uid=1000 pid=16550 comm="/usr/bin/python) interface="org.freedesktop.DBus.Introspectable" member="Introspect" error name="(unset)" requested_reply=0 destination=":1.142" (uid=0 pid=16560 comm="/usr/bin/python))
    Update: I installed the same deb on another computer and nothing is wrong there; all the menus are listed fine.

    Read the article

  • /sbin/getty process causing 100% CPU utilization

    - by scrrr
    I have an instance of Ubuntu 12.04 LTS (GNU/Linux 3.2.0-25-virtual i686) running as a KVM-VM on a host-machine that runs one more VM beside it. I deploy a Ruby on Rails application using the Capistrano deployment-gem. However, if I deploy twice in a row in a short time, the CPU usage jumps to 100% because of the /sbin/getty process. How can this be? I believe getty is a rather simple program that passes a login-name from a terminal to a login-process. Also: In my Capfile (Capistrano configuration file) I am running certain commands after the Rails application is deployed including a call to sudo /sbin/restart <APPNAME> which is an upstart task. Could this be related somehow? I can always kill the getty process and the problem is gone until the next deployment, but I would rather understand and fix the problem. Any help is appreciated. Attached is a screenshot of my problem.
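    Before digging into the Capistrano side, it helps to confirm which getty is spinning and which upstart job keeps it alive. A short diagnostic sketch using standard Ubuntu 12.04 tools, nothing app-specific:
      # which terminal is the runaway getty attached to, and how long has it been running?
      ps -o pid,ppid,tty,etime,args -C getty
      # which tty jobs does upstart manage (and respawn)?
      initctl list | grep tty
    If the busy process turns out to be attached to the same console that the deploy's sudo /sbin/restart call touches, that points at the restart step (or the pty Capistrano allocates for sudo) rather than at getty itself.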

    Read the article

< Previous Page | 275 276 277 278 279 280 281 282 283 284 285 286  | Next Page >