Search Results


  • Suspend not working after kernel update on an HP Envy14 1050

    - by leoxweb
    I just updated Ubuntu 12.04 to kernel 3.2.0-30, and together with it a lot of packages on the system. I am running it on an HP Envy14 1050. When I first made a fresh install of clean Ubuntu 12.04 I had the problem that, when resuming from suspend, the screen was black, although there was some backlight. I could fix that by following https://bugs.launchpad.net/ubuntu/+source/linux/+bug/989674/comments/20. Now, after the update, the black screen after suspend has reappeared, but with no backlight at all and with the LED in the Caps Lock key blinking. The laptop is using an ATI Radeon 5600 with the proprietary drivers. During the update process there was an error with depmod. You can see the /var/log/dist-upgrade/apt-term.log file at the end.
    UPDATE: suspend is not working at all. The problem is not when resuming from suspend; rather, when I try to suspend, the system gets blocked with the fan still running. The only option is to press the power button.
    Log started: 2012-09-12 00:46:46
    (Reading database ... 198909 files and directories currently installed.)
    Removing icedtea-7-jre-cacao ...
    Selecting previously unselected package linux-image-3.2.0-30-generic.
    (Reading database ... 198901 files and directories currently installed.)
    Unpacking linux-image-3.2.0-30-generic (from .../linux-image-3.2.0-30-generic_3.2.0-30.48_amd64.deb) ...
    Done.
    Preparing to replace icedtea-7-jre-jamvm 7~u3-2.1.1~pre1-1ubuntu3 (using .../icedtea-7-jre-jamvm_7u7-2.3.2-1ubuntu0.12.04.1_amd64.deb) ...
    Unpacking replacement icedtea-7-jre-jamvm ...
    Preparing to replace openjdk-7-jre-lib 7~u3-2.1.1~pre1-1ubuntu3 (using .../openjdk-7-jre-lib_7u7-2.3.2-1ubuntu0.12.04.1_all.deb) ...
    Unpacking replacement openjdk-7-jre-lib ...
    Preparing to replace openjdk-7-jre-headless 7~u3-2.1.1~pre1-1ubuntu3 (using .../openjdk-7-jre-headless_7u7-2.3.2-1ubuntu0.12.04.1_amd64.deb) ...
    Unpacking replacement openjdk-7-jre-headless ...
    Preparing to replace python-problem-report 2.0.1-0ubuntu12 (using .../python-problem-report_2.0.1-0ubuntu13_all.deb) ...
    Unpacking replacement python-problem-report ...
    Preparing to replace python-apport 2.0.1-0ubuntu12 (using .../python-apport_2.0.1-0ubuntu13_all.deb) ...
    Unpacking replacement python-apport ...
    Preparing to replace apport 2.0.1-0ubuntu12 (using .../apport_2.0.1-0ubuntu13_all.deb) ...
    apport stop/waiting
    Unpacking replacement apport ...
    Preparing to replace apport-gtk 2.0.1-0ubuntu12 (using .../apport-gtk_2.0.1-0ubuntu13_all.deb) ...
    Unpacking replacement apport-gtk ...
    Preparing to replace firefox-globalmenu 15.0+build1-0ubuntu0.12.04.1 (using .../firefox-globalmenu_15.0.1+build1-0ubuntu0.12.04.1_amd64.deb) ...
    Unpacking replacement firefox-globalmenu ...
    Preparing to replace firefox 15.0+build1-0ubuntu0.12.04.1 (using .../firefox_15.0.1+build1-0ubuntu0.12.04.1_amd64.deb) ...
    Unpacking replacement firefox ...
    Preparing to replace firefox-gnome-support 15.0+build1-0ubuntu0.12.04.1 (using .../firefox-gnome-support_15.0.1+build1-0ubuntu0.12.04.1_amd64.deb) ...
    Unpacking replacement firefox-gnome-support ...
    Preparing to replace firefox-locale-en 15.0+build1-0ubuntu0.12.04.1 (using .../firefox-locale-en_15.0.1+build1-0ubuntu0.12.04.1_amd64.deb) ...
    Unpacking replacement firefox-locale-en ...
    Preparing to replace firefox-locale-es 15.0+build1-0ubuntu0.12.04.1 (using .../firefox-locale-es_15.0.1+build1-0ubuntu0.12.04.1_amd64.deb) ...
    Unpacking replacement firefox-locale-es ...
    Preparing to replace firefox-locale-zh-hans 15.0+build1-0ubuntu0.12.04.1 (using .../firefox-locale-zh-hans_15.0.1+build1-0ubuntu0.12.04.1_amd64.deb) ...
    Unpacking replacement firefox-locale-zh-hans ...
    Preparing to replace totem-mozilla 3.0.1-0ubuntu21 (using .../totem-mozilla_3.0.1-0ubuntu21.1_amd64.deb) ...
    Unpacking replacement totem-mozilla ...
    Preparing to replace libtotem0 3.0.1-0ubuntu21 (using .../libtotem0_3.0.1-0ubuntu21.1_amd64.deb) ...
    Unpacking replacement libtotem0 ...
    Preparing to replace totem-plugins 3.0.1-0ubuntu21 (using .../totem-plugins_3.0.1-0ubuntu21.1_amd64.deb) ...
    Unpacking replacement totem-plugins ...
    Preparing to replace totem 3.0.1-0ubuntu21 (using .../totem_3.0.1-0ubuntu21.1_amd64.deb) ...
    Unpacking replacement totem ...
    Preparing to replace totem-common 3.0.1-0ubuntu21 (using .../totem-common_3.0.1-0ubuntu21.1_all.deb) ...
    Unpacking replacement totem-common ...
    Preparing to replace gir1.2-totem-1.0 3.0.1-0ubuntu21 (using .../gir1.2-totem-1.0_3.0.1-0ubuntu21.1_amd64.deb) ...
    Unpacking replacement gir1.2-totem-1.0 ...
    Preparing to replace glib-networking-common 2.32.1-1ubuntu1 (using .../glib-networking-common_2.32.1-1ubuntu2_all.deb) ...
    Unpacking replacement glib-networking-common ...
    Preparing to replace glib-networking 2.32.1-1ubuntu1 (using .../glib-networking_2.32.1-1ubuntu2_amd64.deb) ...
    Unpacking replacement glib-networking ...
    Preparing to replace glib-networking-services 2.32.1-1ubuntu1 (using .../glib-networking-services_2.32.1-1ubuntu2_amd64.deb) ...
    Unpacking replacement glib-networking-services ...
    Preparing to replace linux-firmware 1.79 (using .../linux-firmware_1.79.1_all.deb) ...
    Unpacking replacement linux-firmware ...
    Preparing to replace linux-generic 3.2.0.29.31 (using .../linux-generic_3.2.0.30.32_amd64.deb) ...
    Unpacking replacement linux-generic ...
    Preparing to replace linux-image-generic 3.2.0.29.31 (using .../linux-image-generic_3.2.0.30.32_amd64.deb) ...
    Unpacking replacement linux-image-generic ...
    Selecting previously unselected package linux-headers-3.2.0-30.
    Unpacking linux-headers-3.2.0-30 (from .../linux-headers-3.2.0-30_3.2.0-30.48_all.deb) ...
    Selecting previously unselected package linux-headers-3.2.0-30-generic.
    Unpacking linux-headers-3.2.0-30-generic (from .../linux-headers-3.2.0-30-generic_3.2.0-30.48_amd64.deb) ...
    Preparing to replace linux-headers-generic 3.2.0.29.31 (using .../linux-headers-generic_3.2.0.30.32_amd64.deb) ...
    Unpacking replacement linux-headers-generic ...
    Preparing to replace linux-libc-dev 3.2.0-29.46 (using .../linux-libc-dev_3.2.0-30.48_amd64.deb) ...
    Unpacking replacement linux-libc-dev ...
    Preparing to replace openjdk-7-jre 7~u3-2.1.1~pre1-1ubuntu3 (using .../openjdk-7-jre_7u7-2.3.2-1ubuntu0.12.04.1_amd64.deb) ...
    Unpacking replacement openjdk-7-jre ...
    Preparing to replace openjdk-7-jdk 7~u3-2.1.1~pre1-1ubuntu3 (using .../openjdk-7-jdk_7u7-2.3.2-1ubuntu0.12.04.1_amd64.deb) ...
    Unpacking replacement openjdk-7-jdk ...
    Preparing to replace policykit-1-gnome 0.105-1ubuntu3 (using .../policykit-1-gnome_0.105-1ubuntu3.1_amd64.deb) ...
    Unpacking replacement policykit-1-gnome ...
    Preparing to replace xserver-xorg-input-synaptics 1.6.2-1ubuntu1~precise1 (using .../xserver-xorg-input-synaptics_1.6.2-1ubuntu1~precise2_amd64.deb) ...
    Unpacking replacement xserver-xorg-input-synaptics ...
    Processing triggers for ureadahead ...
    ureadahead will be reprofiled on next reboot
    Processing triggers for hicolor-icon-theme ...
    Processing triggers for shared-mime-info ...
    Unknown media type in type 'all/all'
    Unknown media type in type 'all/allfiles'
    Unknown media type in type 'uri/mms'
    Unknown media type in type 'uri/mmst'
    Unknown media type in type 'uri/mmsu'
    Unknown media type in type 'uri/pnm'
    Unknown media type in type 'uri/rtspt'
    Unknown media type in type 'uri/rtspu'
    Processing triggers for man-db ...
    Processing triggers for bamfdaemon ...
    Rebuilding /usr/share/applications/bamf.index...
    Processing triggers for desktop-file-utils ...
    Processing triggers for gnome-menus ...
    Processing triggers for gconf2 ...
    Processing triggers for libglib2.0-0:i386 ...
    Processing triggers for libglib2.0-0 ...
    Setting up linux-image-3.2.0-30-generic (3.2.0-30.48) ...
    Running depmod.
    update-initramfs: deferring update (hook will be called later)
    Examining /etc/kernel/postinst.d.
    run-parts: executing /etc/kernel/postinst.d/dkms 3.2.0-30-generic /boot/vmlinuz-3.2.0-30-generic
    Error! Problems with depmod detected. Automatically uninstalling this module.
    DKMS: Install Failed (depmod problems). Module rolled back to built state.
    run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.2.0-30-generic /boot/vmlinuz-3.2.0-30-generic
    update-initramfs: Generating /boot/initrd.img-3.2.0-30-generic
    run-parts: executing /etc/kernel/postinst.d/pm-utils 3.2.0-30-generic /boot/vmlinuz-3.2.0-30-generic
    run-parts: executing /etc/kernel/postinst.d/update-notifier 3.2.0-30-generic /boot/vmlinuz-3.2.0-30-generic
    run-parts: executing /etc/kernel/postinst.d/zz-update-grub 3.2.0-30-generic /boot/vmlinuz-3.2.0-30-generic
    Generating grub.cfg ...
    Found linux image: /boot/vmlinuz-3.2.0-30-generic
    Found initrd image: /boot/initrd.img-3.2.0-30-generic
    Found linux image: /boot/vmlinuz-3.2.0-29-generic
    Found initrd image: /boot/initrd.img-3.2.0-29-generic
    Found linux image: /boot/vmlinuz-3.2.0-23-generic
    Found initrd image: /boot/initrd.img-3.2.0-23-generic
    Found memtest86+ image: /boot/memtest86+.bin
    Found Windows 7 (loader) on /dev/sda1
    Found Windows Recovery Environment (loader) on /dev/sda2
    Found Windows Recovery Environment (loader) on /dev/sda3
    done
    Setting up python-problem-report (2.0.1-0ubuntu13) ...
    Setting up python-apport (2.0.1-0ubuntu13) ...
    Setting up apport (2.0.1-0ubuntu13) ...
    apport start/running
    Setting up apport-gtk (2.0.1-0ubuntu13) ...
    Setting up firefox (15.0.1+build1-0ubuntu0.12.04.1) ...
    Please restart all running instances of firefox, or you will experience problems.
    Setting up firefox-globalmenu (15.0.1+build1-0ubuntu0.12.04.1) ...
    Setting up firefox-gnome-support (15.0.1+build1-0ubuntu0.12.04.1) ...
    Setting up firefox-locale-en (15.0.1+build1-0ubuntu0.12.04.1) ...
    Setting up firefox-locale-es (15.0.1+build1-0ubuntu0.12.04.1) ...
    Setting up firefox-locale-zh-hans (15.0.1+build1-0ubuntu0.12.04.1) ...
    Setting up libtotem0 (3.0.1-0ubuntu21.1) ...
    Setting up totem-common (3.0.1-0ubuntu21.1) ...
    Setting up totem (3.0.1-0ubuntu21.1) ...
    Setting up totem-mozilla (3.0.1-0ubuntu21.1) ...
    Setting up gir1.2-totem-1.0 (3.0.1-0ubuntu21.1) ...
    Setting up totem-plugins (3.0.1-0ubuntu21.1) ...
    Setting up glib-networking-common (2.32.1-1ubuntu2) ...
    Setting up glib-networking-services (2.32.1-1ubuntu2) ...
    Setting up glib-networking (2.32.1-1ubuntu2) ...
    Setting up linux-firmware (1.79.1) ...
    Setting up linux-image-generic (3.2.0.30.32) ...
    Setting up linux-generic (3.2.0.30.32) ...
    Setting up linux-headers-3.2.0-30 (3.2.0-30.48) ...
    Setting up linux-headers-3.2.0-30-generic (3.2.0-30.48) ...
    Examining /etc/kernel/header_postinst.d.
    run-parts: executing /etc/kernel/header_postinst.d/dkms 3.2.0-30-generic /boot/vmlinuz-3.2.0-30-generic
    Setting up linux-headers-generic (3.2.0.30.32) ...
    Setting up linux-libc-dev (3.2.0-30.48) ...
    Setting up policykit-1-gnome (0.105-1ubuntu3.1) ...
    Setting up xserver-xorg-input-synaptics (1.6.2-1ubuntu1~precise2) ...
    Setting up openjdk-7-jre-headless (7u7-2.3.2-1ubuntu0.12.04.1) ...
    Installing new version of config file /etc/java-7-openjdk/security/java.security ...
    Installing new version of config file /etc/java-7-openjdk/jvm-amd64.cfg ...
    Setting up openjdk-7-jre-lib (7u7-2.3.2-1ubuntu0.12.04.1) ...
    Setting up icedtea-7-jre-jamvm (7u7-2.3.2-1ubuntu0.12.04.1) ...
    Setting up openjdk-7-jre (7u7-2.3.2-1ubuntu0.12.04.1) ...
    Setting up openjdk-7-jdk (7u7-2.3.2-1ubuntu0.12.04.1) ...
    update-alternatives: using /usr/lib/jvm/java-7-openjdk-amd64/bin/jcmd to provide /usr/bin/jcmd (jcmd) in auto mode.
    Processing triggers for libc-bin ...
    ldconfig deferred processing now taking place
    Log ended: 2012-09-12 00:49:16
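    Since apt-term.log shows DKMS failing with "Error! Problems with depmod detected" while installing the new kernel (most likely the proprietary ATI module on this machine), one diagnostic avenue is to rebuild the DKMS modules by hand after booting 3.2.0-30. A hedged sketch, assuming stock dkms/kmod tooling; the module name is whatever dkms status reports:

    # show registered DKMS modules and their build state
    dkms status
    # regenerate the module dependency files for the new kernel
    sudo depmod -a 3.2.0-30-generic
    # rebuild and install DKMS modules for that kernel
    # (on older dkms releases, use "dkms build"/"dkms install" per module instead)
    sudo dkms autoinstall -k 3.2.0-30-generic
    # refresh the initramfs so the rebuilt modules are picked up at boot
    sudo update-initramfs -u -k 3.2.0-30-generic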

  • Loosely coupled .NET Cache Provider using Dependency Injection

    - by Rhames
    I have recently been reading the excellent book "Dependency Injection in .NET", written by Mark Seemann. I do not generally buy software development related books, as I never seem to have the time to read them, but I found the time to read Mark's book, and it was time well spent, I think. Reading the ideas around Dependency Injection made me realise that the Cache Provider code I wrote about earlier (see http://geekswithblogs.net/Rhames/archive/2011/01/10/using-the-asp.net-cache-to-cache-data-in-a-model.aspx) could be refactored to use Dependency Injection, which should produce cleaner code. The goals are to:
    - Separate the cache provider implementation (using the ASP.NET data cache) from the consumers (loose coupling). This also means that the cache provider's dependency on System.Web does not ripple down into the layers where it is consumed (such as the domain layer).
    - Provide a decorator pattern to allow a consumer of the cache provider to be implemented separately from the base consumer (i.e. if we have a base repository, we can decorate it with a caching version). Although I used the term repository, in reality the cache consumer could be just about anything.
    - Use constructor injection to provide the Dependency Injection, with a suitable DI container (I use Castle Windsor).
    The sample code for this post is available on github: https://github.com/RobinHames/CacheProvider.git

    ICacheProvider
    In the sample code, the key interface is ICacheProvider, which is in the domain layer.

    using System;
    using System.Collections.Generic;

    namespace CacheDiSample.Domain
    {
        public interface ICacheProvider<T>
        {
            T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry);
            IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry);
        }
    }

    This interface contains two methods to retrieve data from the cache, either as a single instance or as an IEnumerable. The second parameter is of type Func<T>; this is the method used to retrieve the data if nothing is found in the cache. The ASP.NET implementation of the ICacheProvider interface needs to live in a project that has a reference to System.Web; typically this will be the root UI project, or it could be a separate project. The key thing is that the domain and data access layers do not need System.Web references added to them. In my sample MVC application, the CacheProvider is implemented in the UI project, in a folder called "CacheProviders":

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;
    using System.Web.Caching;
    using CacheDiSample.Domain;

    namespace CacheDiSample.CacheProvider
    {
        public class CacheProvider<T> : ICacheProvider<T>
        {
            public T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
            {
                return FetchAndCache<T>(key, retrieveData, absoluteExpiry, relativeExpiry);
            }

            public IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
            {
                return FetchAndCache<IEnumerable<T>>(key, retrieveData, absoluteExpiry, relativeExpiry);
            }

            #region Helper Methods

            private U FetchAndCache<U>(string key, Func<U> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
            {
                U value;
                if (!TryGetValue<U>(key, out value))
                {
                    value = retrieveData();
                    if (!absoluteExpiry.HasValue)
                        absoluteExpiry = Cache.NoAbsoluteExpiration;

                    if (!relativeExpiry.HasValue)
                        relativeExpiry = Cache.NoSlidingExpiration;

                    HttpContext.Current.Cache.Insert(key, value, null, absoluteExpiry.Value, relativeExpiry.Value);
                }
                return value;
            }

            private bool TryGetValue<U>(string key, out U value)
            {
                object cachedValue = HttpContext.Current.Cache.Get(key);
                if (cachedValue == null)
                {
                    value = default(U);
                    return false;
                }
                else
                {
                    try
                    {
                        value = (U)cachedValue;
                        return true;
                    }
                    catch
                    {
                        value = default(U);
                        return false;
                    }
                }
            }

            #endregion
        }
    }

    The FetchAndCache helper method checks whether the specified cache key exists; if it does not, the Func<U> retrieveData method is called and the results are added to the cache.

    Using Castle Windsor to register the cache provider
    In the MVC UI project (my application root), Castle Windsor is used to register the CacheProvider implementation, using a Windsor installer:

    using Castle.MicroKernel.Registration;
    using Castle.MicroKernel.SubSystems.Configuration;
    using Castle.Windsor;

    using CacheDiSample.Domain;
    using CacheDiSample.CacheProvider;

    namespace CacheDiSample.WindsorInstallers
    {
        public class CacheInstaller : IWindsorInstaller
        {
            public void Install(IWindsorContainer container, IConfigurationStore store)
            {
                container.Register(
                    Component.For(typeof(ICacheProvider<>))
                        .ImplementedBy(typeof(CacheProvider<>))
                        .LifestyleTransient());
            }
        }
    }

    Note that the cache provider is registered as an open generic type.

    Consuming a Repository
    I have an existing couple of repository interfaces defined in my domain layer:

    IRepository.cs

    using System;
    using System.Collections.Generic;

    using CacheDiSample.Domain.Model;

    namespace CacheDiSample.Domain.Repositories
    {
        public interface IRepository<T>
            where T : EntityBase
        {
            T GetById(int id);
            IList<T> GetAll();
        }
    }

    IBlogRepository.cs

    using System;
    using CacheDiSample.Domain.Model;

    namespace CacheDiSample.Domain.Repositories
    {
        public interface IBlogRepository : IRepository<Blog>
        {
            Blog GetByName(string name);
        }
    }

    These two repositories are implemented in the DataAccess layer, using Entity Framework to retrieve data (this is not important though). One important point is that in the BaseRepository implementation of IRepository the methods are virtual, so that they can be overridden; a sketch of what such a base class might look like follows. The BlogRepository itself is registered in a RepositoriesInstaller, again in the MVC UI project, shown after the sketch.
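    The original post links to the full source on github but does not inline the BaseRepository class, so the following is only a minimal hypothetical sketch, assuming a plain Entity Framework DbContext (the constructor parameter mirrors the nameOrConnectionString dependency registered by the installer):

    using System.Collections.Generic;
    using System.Data.Entity;
    using System.Linq;

    using CacheDiSample.Domain.Model;
    using CacheDiSample.Domain.Repositories;

    namespace CacheDiSample.DataAccess
    {
        public abstract class BaseRepository<T> : IRepository<T>
            where T : EntityBase
        {
            private readonly DbContext context;

            protected BaseRepository(string nameOrConnectionString)
            {
                context = new DbContext(nameOrConnectionString);
            }

            // virtual, so derived repositories (or test doubles) can override;
            // the caching decorator wraps these through the IRepository interface
            public virtual T GetById(int id)
            {
                return context.Set<T>().Find(id);
            }

            public virtual IList<T> GetAll()
            {
                return context.Set<T>().ToList();
            }
        }
    }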
    using Castle.MicroKernel.Registration;
    using Castle.MicroKernel.SubSystems.Configuration;
    using Castle.Windsor;

    using CacheDiSample.Domain.CacheDecorators;
    using CacheDiSample.Domain.Repositories;
    using CacheDiSample.DataAccess;

    namespace CacheDiSample.WindsorInstallers
    {
        public class RepositoriesInstaller : IWindsorInstaller
        {
            public void Install(IWindsorContainer container, IConfigurationStore store)
            {
                container.Register(Component.For<IBlogRepository>()
                    .ImplementedBy<BlogRepository>()
                    .LifestyleTransient()
                    .DependsOn(new
                    {
                        nameOrConnectionString = "BloggingContext"
                    }));
            }
        }
    }

    Now I can inject a dependency on the IBlogRepository into a consumer, such as a controller in my sample code:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;
    using System.Web.Mvc;

    using CacheDiSample.Domain.Repositories;
    using CacheDiSample.Domain.Model;

    namespace CacheDiSample.Controllers
    {
        public class HomeController : Controller
        {
            private readonly IBlogRepository blogRepository;

            public HomeController(IBlogRepository blogRepository)
            {
                if (blogRepository == null)
                    throw new ArgumentNullException("blogRepository");

                this.blogRepository = blogRepository;
            }

            public ActionResult Index()
            {
                ViewBag.Message = "Welcome to ASP.NET MVC!";

                var blogs = blogRepository.GetAll();

                return View(new Models.HomeModel { Blogs = blogs });
            }

            public ActionResult About()
            {
                return View();
            }
        }
    }

    Consuming the Cache Provider via a Decorator
    I used a decorator pattern to consume the cache provider. This means my repositories follow the open/closed principle, as they do not require any modifications to implement the caching. It also means that my controllers do not have any knowledge of the caching taking place, as the DI container will simply inject the decorator instead of the root implementation of the repository. The first step is to implement a BlogRepository decorator, with the caching logic in it. Note that this can reside in the domain layer, as it does not require any knowledge of the data access methods.

    BlogRepositoryWithCaching.cs

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;

    using CacheDiSample.Domain.Model;
    using CacheDiSample.Domain;
    using CacheDiSample.Domain.Repositories;

    namespace CacheDiSample.Domain.CacheDecorators
    {
        public class BlogRepositoryWithCaching : IBlogRepository
        {
            // The generic cache provider, injected by DI
            private ICacheProvider<Blog> cacheProvider;
            // The decorated blog repository, injected by DI
            private IBlogRepository parentBlogRepository;

            public BlogRepositoryWithCaching(IBlogRepository parentBlogRepository, ICacheProvider<Blog> cacheProvider)
            {
                if (parentBlogRepository == null)
                    throw new ArgumentNullException("parentBlogRepository");

                this.parentBlogRepository = parentBlogRepository;

                if (cacheProvider == null)
                    throw new ArgumentNullException("cacheProvider");

                this.cacheProvider = cacheProvider;
            }

            public Blog GetByName(string name)
            {
                string key = string.Format("CacheDiSample.DataAccess.GetByName.{0}", name);
                // hard code 5 minute expiry!
                TimeSpan relativeCacheExpiry = new TimeSpan(0, 5, 0);
                return cacheProvider.Fetch(key, () =>
                    {
                        return parentBlogRepository.GetByName(name);
                    },
                    null, relativeCacheExpiry);
            }

            public Blog GetById(int id)
            {
                string key = string.Format("CacheDiSample.DataAccess.GetById.{0}", id);

                // hard code 5 minute expiry!
                TimeSpan relativeCacheExpiry = new TimeSpan(0, 5, 0);
                return cacheProvider.Fetch(key, () =>
                    {
                        return parentBlogRepository.GetById(id);
                    },
                    null, relativeCacheExpiry);
            }

            public IList<Blog> GetAll()
            {
                string key = string.Format("CacheDiSample.DataAccess.GetAll");

                // hard code 5 minute expiry!
                TimeSpan relativeCacheExpiry = new TimeSpan(0, 5, 0);
                return cacheProvider.Fetch(key, () =>
                    {
                        return parentBlogRepository.GetAll();
                    },
                    null, relativeCacheExpiry)
                    .ToList();
            }
        }
    }

    The key things in this caching repository are:
    - I inject the ICacheProvider<Blog> implementation into the repository, via the constructor. This makes the cache provider functionality available to the repository.
    - I inject the parent IBlogRepository implementation (which has the actual data access code), via the constructor. This allows the methods implemented in the parent to be called if nothing is found in the cache.
    - I override each of the methods implemented in the repository, including those implemented in the generic BaseRepository. Each override follows the same pattern: it calls the CacheProvider.Fetch method and passes in the parentBlogRepository implementation of the method as the retrieval method, to be used if nothing is present in the cache.

    Configuring the Caching Repository in the DI Container
    The final piece of the jigsaw is to tell Castle Windsor to use the BlogRepositoryWithCaching implementation of IBlogRepository, but to inject the actual data access implementation into this decorator. This is easily achieved by modifying the RepositoriesInstaller to use Windsor's implicit decorator wiring:

    using Castle.MicroKernel.Registration;
    using Castle.MicroKernel.SubSystems.Configuration;
    using Castle.Windsor;

    using CacheDiSample.Domain.CacheDecorators;
    using CacheDiSample.Domain.Repositories;
    using CacheDiSample.DataAccess;

    namespace CacheDiSample.WindsorInstallers
    {
        public class RepositoriesInstaller : IWindsorInstaller
        {
            public void Install(IWindsorContainer container, IConfigurationStore store)
            {
                // Use Castle Windsor implicit wiring for the blog repository decorator
                // Register the outermost decorator first
                container.Register(Component.For<IBlogRepository>()
                    .ImplementedBy<BlogRepositoryWithCaching>()
                    .LifestyleTransient());
                // Next register the IBlogRepository implementation to inject into the outer decorator
                container.Register(Component.For<IBlogRepository>()
                    .ImplementedBy<BlogRepository>()
                    .LifestyleTransient()
                    .DependsOn(new
                    {
                        nameOrConnectionString = "BloggingContext"
                    }));
            }
        }
    }

    This is all that is needed. Now if the consumer of the repository calls the repository's methods, the call is routed via the caching mechanism. You can test this by stepping through the code and seeing that the DataAccess.BlogRepository code is only called if there is no data in the cache, or if it has expired. The next step is to add SQL Cache Dependency support to this pattern; that will be a future post.
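    To see the decorator wiring end to end, here is a minimal hypothetical composition-root sketch using the two installers above (note that the sample's CacheProvider relies on HttpContext.Current, so outside an ASP.NET request an alternative ICacheProvider registration would be needed; inside the MVC application the controller simply receives the decorated repository as shown in HomeController):

    using Castle.Windsor;
    using CacheDiSample.Domain.Repositories;
    using CacheDiSample.WindsorInstallers;

    public static class CompositionRootSketch
    {
        public static void Demo()
        {
            var container = new WindsorContainer();
            container.Install(new CacheInstaller(), new RepositoriesInstaller());

            // Windsor resolves the first (outermost) IBlogRepository registration,
            // BlogRepositoryWithCaching, and implicitly injects the EF-backed
            // BlogRepository as its parent.
            var repository = container.Resolve<IBlogRepository>();

            var blogs = repository.GetAll();  // first call hits the database, then caches
            var again = repository.GetAll();  // within 5 minutes, served from the cache
        }
    }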

  • Applying/rolling back a single patch on RAC in a rolling fashion

    - by JaneZhang
    When patching a RAC environment, if the patch allows it, you can install it in a rolling fashion: the nodes are patched one at a time, so while one node is down for patching, the instances on the other nodes keep running and the database remains available to the application. Only the node currently being patched is out of service, and a full outage of the cluster is avoided. Note, however, that not every patch can be installed this way: before you start, check whether the patch is a rolling patch, and only use this method if it is.
    In general, a rolling patch installation goes like this:
    1. Download the patch.
    2. Unzip the patch and read the accompanying Readme.
    3. Check whether the patch supports rolling installation.
    4. Patch the cluster one node at a time, keeping the instances on the other nodes running.
    5. Follow the patch Readme for the details. The usual sequence is:
    1). As the oracle user, copy the patch to the patch directory on the node.
    2). Unzip the patch:
    3). On node 1, stop the database instance and the ASM instance (if any) running out of this ORACLE_HOME;
    4). Apply the patch on node 1:
    $cd $ORACLE_HOME/OPatch/10082277
    $opatch apply
    5). When opatch has patched the local node, it stops at a prompt before touching the next node; leave it waiting there:
    6). On node 1, start the database instance and the ASM instance (if any) running out of this ORACLE_HOME;
    7). On node 2, stop the database instance and the ASM instance (if any) running out of this ORACLE_HOME;
    8). Answer the opatch prompt, so that the patch is propagated to the remote node;
    9). On node 2, start the database instance and the ASM instance (if any) running out of this ORACLE_HOME;
    10). Verify that the patch was installed correctly.

    Below is an example of installing patch 8575528 on a 10.2.0.4 RAC in a rolling fashion:
    1). As the oracle user, copy the patch to the patch directory, for example $ORACLE_HOME/OPatch:
    $ pwd
    /u01/app/oracle/OPatch
    $ ls
    docs  emdpatch.pl  jlib  opatch  opatch.ini  opatch.pl  opatchprereqs  p8575528_10204_Linux-x86.zip
    2). Unzip the patch:
    su - oracle
    $ unzip p8575528_10204_Linux-x86.zip
    Archive:  p8575528_10204_Linux-x86.zip
      creating: 8575528/
      creating: 8575528/files/
      creating: 8575528/files/lib/
      creating: 8575528/files/lib/libserver10.a/
     inflating: 8575528/files/lib/libserver10.a/kks1.o
     inflating: 8575528/files/lib/libserver10.a/kksc.o
     inflating: 8575528/files/lib/libserver10.a/kksh.o
     inflating: 8575528/files/lib/libserver10.a/ksmp.o
      creating: 8575528/etc/
      creating: 8575528/etc/config/
     inflating: 8575528/etc/config/inventory
     inflating: 8575528/etc/config/actions
      creating: 8575528/etc/xml/
     inflating: 8575528/etc/xml/GenericActions.xml
     inflating: 8575528/etc/xml/ShiphomeDirectoryStructure.xml
     inflating: 8575528/README.txt
    $ ls
    8575528  docs  emdpatch.pl  jlib  opatch  opatch.ini  opatch.pl  opatchprereqs  p8575528_10204_Linux-x86.zip
    3). Check that the patch can be applied to RAC in a rolling fashion:
    $ $ORACLE_HOME/OPatch/opatch query -all /u01/app/oracle/OPatch/8575528|grep rolling
    Patch is a rolling patch: true <===== it is a rolling patch
    4). On node 1, stop the database instance running out of this ORACLE_HOME (and the ASM instance, if there is one):
    $srvctl stop instance -d <dbname> -i <instance_name>
    $srvctl stop asm -n <nodename>
    For example:
    $srvctl stop instance -d ONEPIECE -i ONEPIECE1
    $srvctl stop asm -n nascds14
    $ crs_stat -t
    Name           Type           Target    State     Host
    ------------------------------------------------------------
    ora....E1.inst application    OFFLINE   OFFLINE
    ora....SM1.asm application    OFFLINE   OFFLINE
    5). Apply the patch on node 1. For example:
    $su - oracle
    $cd /u01/app/oracle/OPatch/8575528
    $opatch apply
    Invoking OPatch 10.2.0.4.2
    Oracle Interim Patch Installer version 10.2.0.4.2
    Copyright (c) 2007, Oracle Corporation.  All rights reserved.
    Oracle Home       : /u01/app/oracle
    Central Inventory : /home/oracle/oraInventory
      from           : /etc/oraInst.loc
    OPatch version    : 10.2.0.4.2
    OUI version       : 10.2.0.4.0
    OUI location      : /u01/app/oracle/oui
    Log file location : /u01/app/oracle/cfgtoollogs/opatch/opatch2012-06-13_01-27-38AM.log
    ApplySession applying interim patch '8575528' to OH '/u01/app/oracle'
    Running prerequisite checks...
    OPatch detected the node list and the local node from the inventory.  OPatch will patch the local system then propagate the patch to the remote nodes.
    This node is part of an Oracle Real Application Cluster.
    Remote nodes: 'nascds15'
    Local node: 'nascds14'
    Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
    (Oracle Home = '/u01/app/oracle')
    Is the local system ready for patching? [y|n]
    y <====== answer y
    User Responded with: Y
    Backing up files and inventory (not for auto-rollback) for the Oracle Home
    Backing up files affected by the patch '8575528' for restore. This might take a while...
    Backing up files affected by the patch '8575528' for rollback. This might take a while...
    Patching component oracle.rdbms, 10.2.0.4.0...
    Updating archive file "/u01/app/oracle/lib/libserver10.a"  with "lib/libserver10.a/kks1.o"
    Updating archive file "/u01/app/oracle/lib/libserver10.a"  with "lib/libserver10.a/kksc.o"
    Updating archive file "/u01/app/oracle/lib/libserver10.a"  with "lib/libserver10.a/kksh.o"
    Updating archive file "/u01/app/oracle/lib/libserver10.a"  with "lib/libserver10.a/ksmp.o"
    Running make for target ioracle
    ApplySession adding interim patch '8575528' to inventory
    Verifying the update...
    Inventory check OK: Patch ID 8575528 is registered in Oracle Home inventory with proper meta-data.
    Files check OK: Files from Patch ID 8575528 are present in Oracle Home.
    The local system has been patched.  You can restart Oracle instances on it.
    Patching in rolling mode.
    The node 'nascds15' will be patched next.
    Please shutdown Oracle instances running out of this ORACLE_HOME on 'nascds15'.
    (Oracle Home = '/u01/app/oracle')
    Is the node ready for patching? [y|n]
    6). Leave the opatch session waiting at this prompt; the remote node has to be stopped before you answer it.
    7). On node 1, start the ASM instance and the database instance:
    $srvctl start asm -n <nodename>
    $srvctl start instance -d <dbname> -i <instance_name>
    For example:
    $srvctl start asm -n nascds14
    $srvctl start instance -d ONEPIECE -i ONEPIECE1
    $crs_stat -t
    ora....E1.inst application    ONLINE    ONLINE    nascds14
    ora....SM1.asm application    ONLINE    ONLINE    nascds14
    8). On node 2, stop the database instance and the ASM instance:
    $srvctl stop instance -d <dbname> -i <instance_name>
    $srvctl stop asm -n <nodename>
    $srvctl stop instance -d ONEPIECE -i ONEPIECE2
    $srvctl stop asm -n nascds15
    $crs_stat
    ora....E2.inst application    OFFLINE   OFFLINE
    ora....SM2.asm application    OFFLINE   OFFLINE
    9). Go back to the waiting opatch session and answer the prompt, so that the remote node is patched:
    Is the node ready for patching? [y|n]
    y <==== answer y
    User Responded with: Y
    Updating nodes 'nascds15'
      Apply-related files are:
        FP = "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/copy_files.txt"
        DP = "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/copy_dirs.txt"
        MP = "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/make_cmds.txt"
        RC = "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/remote_cmds.txt"
    Instantiating the file "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/copy_files.txt.instantiated" by replacing $ORACLE_HOME in "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/copy_files.txt" with actual path.
    Propagating files to remote nodes...
    Instantiating the file "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/copy_dirs.txt.instantiated" by replacing $ORACLE_HOME in "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/copy_dirs.txt" with actual path.
    Propagating directories to remote nodes...
    Instantiating the file "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/make_cmds.txt.instantiated" by replacing $ORACLE_HOME in "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/make_cmds.txt" with actual path.
    Running command on remote node 'nascds15':
    cd /u01/app/oracle/rdbms/lib; /usr/bin/make -f ins_rdbms.mk ioracle ORACLE_HOME=/u01/app/oracle || echo REMOTE_MAKE_FAILED::>&2
    The node 'nascds15' has been patched.  You can restart Oracle instances on it.
    There were relinks on remote nodes.
    Remember to check the binary size and timestamp on the nodes 'nascds15'.
    The following make commands were invoked on remote nodes:
    'cd /u01/app/oracle/rdbms/lib; /usr/bin/make -f ins_rdbms.mk ioracle ORACLE_HOME=/u01/app/oracle'
    OPatch succeeded.
    10). On node 2, start the ASM instance and the database instance:
    $srvctl start asm -n <nodename>
    $srvctl start instance -d <dbname> -i <instance_name>
    For example:
    $srvctl start asm -n nascds15
    $srvctl start instance -d ONEPIECE -i ONEPIECE2
    11). On each node, verify that the patch was installed correctly: $ORACLE_HOME/OPatch/opatch lsinventory
    [oracle@nascds14 8575528]$ $ORACLE_HOME/OPatch/opatch lsinventory
    Invoking OPatch 10.2.0.4.2
    Oracle Interim Patch Installer version 10.2.0.4.2
    Copyright (c) 2007, Oracle Corporation.  All rights reserved.
    Oracle Home       : /u01/app/oracle
    Central Inventory : /home/oracle/oraInventory
      from           : /etc/oraInst.loc
    OPatch version    : 10.2.0.4.2
    OUI version       : 10.2.0.4.0
    OUI location      : /u01/app/oracle/oui
    Log file location : /u01/app/oracle/cfgtoollogs/opatch/opatch2012-06-13_01-44-11AM.log
    Lsinventory Output file location : /u01/app/oracle/cfgtoollogs/opatch/lsinv/lsinventory2012-06-13_01-44-11AM.txt
    --------------------------------------------------------------------------------
    Installed Top-level Products (2):
    Oracle Database 10g                                                  10.2.0.1.0
    Oracle Database 10g Release 2 Patch Set 3                            10.2.0.4.0
    There are 2 products installed in this Oracle Home.
    Interim patches (1) :
    Patch  8575528      : applied on Wed Jun 13 01:28:24 CST 2012 <<<<<<<<<<<<<<<<<<<
      Created on 17 Aug 2010, 07:56:36 hrs PST8PDT
      Bugs fixed:
        8575528
    Rac system comprising of multiple nodes
      Local node = nascds14
      Remote node = nascds15
    --------------------------------------------------------------------------------
    OPatch succeeded.

    Below is an example of rolling back patch 8575528 on a 10.2.0.4 RAC in a rolling fashion:
    1). On node 1, stop the database instance running out of this ORACLE_HOME (and the ASM instance, if there is one):
    $srvctl stop instance -d <dbname> -i <instance_name>
    $srvctl stop asm -n <nodename>
    For example:
    $srvctl stop instance -d ONEPIECE -i ONEPIECE1
    $srvctl stop asm -n nascds14
    $crs_stat -t
    Name           Type           Target    State     Host
    ------------------------------------------------------------
    ora....E1.inst application    OFFLINE   OFFLINE
    ora....SM1.asm application    OFFLINE   OFFLINE
    2). Roll the patch back on node 1. For example:
    $su - oracle
    $cd $ORACLE_HOME/OPatch/8575528
    $opatch rollback -id 8575528
    Invoking OPatch 10.2.0.4.2
    Oracle Interim Patch Installer version 10.2.0.4.2
    Copyright (c) 2007, Oracle Corporation.  All rights reserved.
    Oracle Home       : /u01/app/oracle
    Central Inventory : /home/oracle/oraInventory
      from           : /etc/oraInst.loc
    OPatch version    : 10.2.0.4.2
    OUI version       : 10.2.0.4.0
    OUI location      : /u01/app/oracle/oui
    Log file location : /u01/app/oracle/cfgtoollogs/opatch/opatch2012-06-13_18-22-10PM.log
    RollbackSession rolling back interim patch '8575528' from OH '/u01/app/oracle'
    Running prerequisite checks...
    OPatch detected the node list and the local node from the inventory.  OPatch will patch the local system then propagate the patch to the remote nodes.
    This node is part of an Oracle Real Application Cluster.
    Remote nodes: 'nascds15'
    Local node: 'nascds14'
    Please shut down Oracle instances running out of this ORACLE_HOME on all the nodes.
    (Oracle Home = '/u01/app/oracle')
    Are all the nodes ready for patching? [y|n]
    y <========= answer y
    User Responded with: Y
    Backing up files affected by the patch '8575528' for restore. This might take a while...
    Patching component oracle.rdbms, 10.2.0.4.0...
    Updating archive file "/u01/app/oracle/lib/libserver10.a"  with "lib/libserver10.a/kks1.o"
    Updating archive file "/u01/app/oracle/lib/libserver10.a"  with "lib/libserver10.a/kksc.o"
    Updating archive file "/u01/app/oracle/lib/libserver10.a"  with "lib/libserver10.a/kksh.o"
    Updating archive file "/u01/app/oracle/lib/libserver10.a"  with "lib/libserver10.a/ksmp.o"
    Running make for target ioracle
    RollbackSession removing interim patch '8575528' from inventory
    Patching in rolling mode.
    The node 'nascds15' will be patched next.
    Please shutdown Oracle instances running out of this ORACLE_HOME on 'nascds15'.
    (Oracle Home = '/u01/app/oracle')
    Is the node ready for patching? [y|n]
    3). Leave the opatch session waiting at this prompt; the remote node has to be stopped before you answer it.
    4). On node 1, start the ASM instance and the database instance:
    $srvctl start asm -n <nodename>
    $srvctl start instance -d <dbname> -i <instance_name>
    For example:
    $srvctl start asm -n nascds14
    $srvctl start instance -d ONEPIECE -i ONEPIECE1
    $crs_stat -t
    ora....E1.inst application    ONLINE    ONLINE    nascds14
    ora....SM1.asm application    ONLINE    ONLINE    nascds14
    5). On node 2, stop the database instance and the ASM instance:
    $srvctl stop instance -d <dbname> -i <instance_name>
    $srvctl stop asm -n <nodename>
    $srvctl stop instance -d ONEPIECE -i ONEPIECE2
    $srvctl stop asm -n nascds15
    $crs_stat -t
    ora....E2.inst application    OFFLINE   OFFLINE
    ora....SM2.asm application    OFFLINE   OFFLINE
    6). Go back to the waiting opatch session and answer the prompt, so that the rollback continues on the remote node:
    The node 'nascds15' will be patched next.
    Please shutdown Oracle instances running out of this ORACLE_HOME on 'nascds15'.
    (Oracle Home = '/u01/app/oracle')
    Is the node ready for patching? [y|n]
    y <========= answer y
    User Responded with: Y
    Updating nodes 'nascds15'
      Rollback-related files are:
        FR = "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/remove_files.txt"
        DR = "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/remove_dirs.txt"
        FP = "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/copy_files.txt"
        MP = "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/make_cmds.txt"
        RC = "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/remote_cmds.txt"
    Instantiating the file "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/remove_dirs.txt.instantiated" by replacing $ORACLE_HOME in "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/remove_dirs.txt" with actual path.
    Removing directories on remote nodes...
    Instantiating the file "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/copy_files.txt.instantiated" by replacing $ORACLE_HOME in "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/copy_files.txt" with actual path.
    Propagating files to remote nodes...
    Instantiating the file "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/copy_dirs.txt.instantiated" by replacing $ORACLE_HOME in "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/copy_dirs.txt" with actual path.
    Propagating directories to remote nodes...
    Instantiating the file "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/make_cmds.txt.instantiated" by replacing $ORACLE_HOME in "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/make_cmds.txt" with actual path.
    Running command on remote node 'nascds15':
    cd /u01/app/oracle/rdbms/lib; /usr/bin/make -f ins_rdbms.mk ioracle ORACLE_HOME=/u01/app/oracle || echo REMOTE_MAKE_FAILED::>&2
    The node 'nascds15' has been patched.  You can restart Oracle instances on it.
    There were relinks on remote nodes.
    Remember to check the binary size and timestamp on the nodes 'nascds15'.
    The following make commands were invoked on remote nodes:
    'cd /u01/app/oracle/rdbms/lib; /usr/bin/make -f ins_rdbms.mk ioracle ORACLE_HOME=/u01/app/oracle'
    OPatch succeeded.
    7). On node 2, start the ASM instance and the database instance:
    $srvctl start asm -n <nodename>
    $srvctl start instance -d <dbname> -i <instance_name>
    For example:
    $srvctl start asm -n nascds15
    $srvctl start instance -d ONEPIECE -i ONEPIECE2
    8). On each node, verify that the patch was removed correctly:
    $ $ORACLE_HOME/OPatch/opatch lsinventory
    Invoking OPatch 10.2.0.4.2
    Oracle Interim Patch Installer version 10.2.0.4.2
    Copyright (c) 2007, Oracle Corporation.  All rights reserved.
    Oracle Home       : /u01/app/oracle
    Central Inventory : /home/oracle/oraInventory
      from           : /etc/oraInst.loc
    OPatch version    : 10.2.0.4.2
    OUI version       : 10.2.0.4.0
    OUI location      : /u01/app/oracle/oui
    Log file location : /u01/app/oracle/cfgtoollogs/opatch/opatch2012-06-13_19-40-41PM.log
    Lsinventory Output file location : /u01/app/oracle/cfgtoollogs/opatch/lsinv/lsinventory2012-06-13_19-40-41PM.txt
    --------------------------------------------------------------------------------
    Installed Top-level Products (2):
    Oracle Database 10g                                                  10.2.0.1.0
    Oracle Database 10g Release 2 Patch Set 3                            10.2.0.4.0
    There are 2 products installed in this Oracle Home.
    There are no Interim patches installed in this Oracle Home.
    Rac system comprising of multiple nodes
      Local node = nascds14
      Remote node = nascds15
    --------------------------------------------------------------------------------
    OPatch succeeded.
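    Because the whole procedure hinges on the rolling check in step 3, it is worth scripting that pre-flight test so it runs before any instance is stopped. A minimal sketch, using the patch id and paths from the example above:

    #!/bin/sh
    # Pre-flight check: confirm a patch supports rolling installation
    # before taking any RAC instance down.
    PATCH_ID=8575528
    PATCH_DIR=$ORACLE_HOME/OPatch/$PATCH_ID   # unzipped patch from step 2
    $ORACLE_HOME/OPatch/opatch query -all "$PATCH_DIR" | grep -i rolling
    # Expected output: "Patch is a rolling patch: true".
    # Anything else means rolling apply is not supported and the patch
    # requires a full cluster outage, per its Readme.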

  • start apache2 in chroot environment

    - by xero
    This is my first time trying to install the Apache2 HTTP server in a chroot environment, so I decided to follow this procedure: http://www.symantec.com/connect/articles/securing-apache-2-step-step
    My web server starts successfully:
    root@ubuntu:/usr/local/apache2/bin/apachectl start
    [Tue Oct 29 01:49:15.879868 2013] [core:warn] [pid 10835] AH00117: Ignoring deprecated use of DefaultType in line 60 of /usr/local/apache2/conf/httpd.conf.
    AH00548: NameVirtualHost has no effect and will be removed in the next release /usr/local/apache2/conf/httpd.conf:81
    AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message
    root@ubuntu:/chroot/httpd/etc# netstat -antu
    Active Internet connections (servers and established)
    Proto Recv-Q Send-Q Local Address Foreign Address State
    tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN
    But at the end of the "Chrooting the server" part I always hit the same problem. When I try to start Apache2 inside the chroot, I always get this error:
    root@ubuntu:/chroot/httpd/etc# chroot /chroot/httpd /usr/local/apache2/bin/apachectl
    chroot: failed to run command `/usr/local/apache2/bin/apachectl': No such file or directory
    However, my apachectl file exists:
    root@ubuntu:/chroot/httpd/etc# ls -l /chroot/httpd/usr/local/apache2/bin/apachectl
    -rwxr-xr-x 1 root root 3437 Oct 29 02:28 /chroot/httpd/usr/local/apache2/bin/apachectl
    When I use strace to debug, there are errors with coreutils.mo and libc.mo:
    root@ubuntu:/chroot/httpd/etc# chroot /chroot/httpd /usr/local/apache2/bin/httpd
    group hosts nsswitch.conf passwd passwords resolv.conf
    root@ubuntu:/chroot/httpd/etc# strace chroot /chroot/httpd /usr/local/apache2/bin/apachectl
    execve("/usr/sbin/chroot", ["chroot", "/chroot/httpd", "/usr/local/apache2/bin/apachectl"], [/* 18 vars */]) = 0
    brk(0) = 0x1e46000
    access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
    mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fe89563b000
    access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
    open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
    fstat(3, {st_mode=S_IFREG|0644, st_size=18263, ...}) = 0
    mmap(NULL, 18263, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fe895636000
    close(3) = 0
    access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
    open("/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
    read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\200\30\2\0\0\0\0\0"..., 832) = 832
    fstat(3, {st_mode=S_IFREG|0755, st_size=1815224, ...}) = 0
    mmap(NULL, 3929304, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fe89505b000
    mprotect(0x7fe895210000, 2097152, PROT_NONE) = 0
    mmap(0x7fe895410000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1b5000) = 0x7fe895410000
    mmap(0x7fe895416000, 17624, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7fe895416000
    close(3) = 0
    mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fe895635000
    mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fe895634000
    mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fe895633000
    arch_prctl(ARCH_SET_FS, 0x7fe895634700) = 0
    mprotect(0x7fe895410000, 16384, PROT_READ) = 0
    mprotect(0x606000, 4096, PROT_READ) = 0
    mprotect(0x7fe89563d000, 4096, PROT_READ) = 0
    munmap(0x7fe895636000, 18263) = 0
    brk(0) = 0x1e46000
    brk(0x1e67000) = 0x1e67000
    open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
    fstat(3, {st_mode=S_IFREG|0644, st_size=2919792, ...}) = 0
    mmap(NULL, 2919792, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fe894d92000
    close(3) = 0
    chroot("/chroot/httpd") = 0
    chdir("/") = 0
    execve("/usr/local/apache2/bin/apachectl", ["/usr/local/apache2/bin/apachectl"], [/* 18 vars */]) = -1 ENOENT (No such file or directory)
    open("/usr/share/locale/locale.alias", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
    open("/usr/share/locale/en_US.UTF-8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
    open("/usr/share/locale/en_US.utf8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
    open("/usr/share/locale/en_US/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
    open("/usr/share/locale/en.UTF-8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
    open("/usr/share/locale/en.utf8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
    open("/usr/share/locale/en/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
    open("/usr/share/locale-langpack/en_US.UTF-8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
    open("/usr/share/locale-langpack/en_US.utf8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
    open("/usr/share/locale-langpack/en_US/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
    open("/usr/share/locale-langpack/en.UTF-8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
    open("/usr/share/locale-langpack/en.utf8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
    open("/usr/share/locale-langpack/en/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
    write(2, "chroot: ", 8chroot: ) = 8
    write(2, "failed to run command `/usr/loca"..., 56failed to run command `/usr/local/apache2/bin/apachectl') = 56
    open("/usr/share/locale/en_US.UTF-8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
    open("/usr/share/locale/en_US.utf8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
    open("/usr/share/locale/en_US/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
    open("/usr/share/locale/en.UTF-8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
    open("/usr/share/locale/en.utf8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
    open("/usr/share/locale/en/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
    open("/usr/share/locale-langpack/en_US.UTF-8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
    open("/usr/share/locale-langpack/en_US.utf8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
    open("/usr/share/locale-langpack/en_US/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
    open("/usr/share/locale-langpack/en.UTF-8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
    open("/usr/share/locale-langpack/en.utf8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
    open("/usr/share/locale-langpack/en/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
    write(2, ": No such file or directory", 27: No such file or directory) = 27
    write(2, "\n", 1
    ) = 1
    close(1) = 0
    close(2) = 0
    exit_group(127) = ?
    Following the tutorial, I did not find the following libraries on my server, so I could not copy them (I suppose they are not related to the coreutils.mo and libc.mo errors):
    /usr/libexec/ld-elf.so.1
    /var/run/ld-elf.so.hints
    I don't understand which files I forgot to copy into my chroot environment to be able to start Apache2. Any ideas?
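    For what it's worth, an execve() returning ENOENT inside a chroot even though the file exists usually means the script's interpreter or the dynamic loader is missing from the chroot, not the file itself; apachectl is a shell script, so /bin/sh must exist in the chroot too (the ld-elf.so paths in the tutorial are FreeBSD-specific and do not apply on Ubuntu). A hedged sketch of populating the chroot, assuming the amd64 Ubuntu layout visible in the strace output:

    # the apachectl script needs a shell inside the chroot
    mkdir -p /chroot/httpd/bin /chroot/httpd/lib64
    cp /bin/sh /chroot/httpd/bin/
    # the ELF dynamic loader for amd64 binaries
    cp /lib64/ld-linux-x86-64.so.2 /chroot/httpd/lib64/
    # copy every shared library the binaries need, preserving paths
    for bin in /usr/local/apache2/bin/httpd /bin/sh; do
      ldd "$bin" | awk '/\//{print $(NF-1)}' | while read lib; do
        mkdir -p "/chroot/httpd$(dirname "$lib")"
        cp -v "$lib" "/chroot/httpd$lib"
      done
    done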

  • WD MBWE II (White Strip Light) 2TB - unable to access data

    - by user210477
    I have a WD MBWE II (White Strip Light) 2TB (WD20000H2NC-00). It was working fine until a few days ago. I guess there was a power failure, and since then I am unable to access the 'Public' or 'Download' folders any more. I have been searching for answers everywhere but came up empty-handed. The web GUI still works, and SSH works. I hooked up both drives to my PC, and UFS Explorer sees the drive, but so far I have been unable to retrieve any of my data. I do not remember what RAID setting I used when I first got the drive; I can see from the GUI that it is set as "Stripe". The drive contains 10 years of family pictures which I really do not want to lose. Sadly and stupidly, I didn't even keep a backup of this drive. Can somebody please help or point me in the right direction? Thank you in advance for your help. Disk Utility on Ubuntu reports 1405 bad sectors on one drive. How can I retrieve my data? Please help. Logs below:
    ~ # mdadm --detail /dev/md[012345678]
    /dev/md0:
    Version : 0.90
    Creation Time : Wed Jul 15 08:36:17 2009
    Raid Level : raid1
    Array Size : 1959872 (1914.26 MiB 2006.91 MB)
    Used Dev Size : 1959872 (1914.26 MiB 2006.91 MB)
    Raid Devices : 2
    Total Devices : 2
    Preferred Minor : 0
    Persistence : Superblock is persistent
    Update Time : Fri Nov 1 13:53:29 2013
    State : clean
    Active Devices : 2
    Working Devices : 2
    Failed Devices : 0
    Spare Devices : 0
    UUID : 04f7a661:98983b3b:26b29e4f:9b646adb
    Events : 0.266
    Number Major Minor RaidDevice State
    0 8 1 0 active sync /dev/sda1
    1 8 17 1 active sync /dev/sdb1
    /dev/md1:
    Version : 0.90
    Creation Time : Wed Jul 15 08:36:18 2009
    Raid Level : raid1
    Array Size : 256896 (250.92 MiB 263.06 MB)
    Used Dev Size : 256896 (250.92 MiB 263.06 MB)
    Raid Devices : 2
    Total Devices : 2
    Preferred Minor : 1
    Persistence : Superblock is persistent
    Update Time : Wed Oct 30 22:08:21 2013
    State : clean
    Active Devices : 2
    Working Devices : 2
    Failed Devices : 0
    Spare Devices : 0
    UUID : aaa7b859:c475312d:efc5a766:6526b867
    Events : 0.10
    Number Major Minor RaidDevice State
    0 8 2 0 active sync /dev/sda2
    1 8 18 1 active sync /dev/sdb2
    /dev/md2:
    Version : 0.90
    Creation Time : Sat Sep 25 10:01:26 2010
    Raid Level : raid0
    Array Size : 1947045760 (1856.85 GiB 1993.77 GB)
    Raid Devices : 2
    Total Devices : 2
    Preferred Minor : 2
    Persistence : Superblock is persistent
    Update Time : Fri Nov 1 13:30:53 2013
    State : active
    Active Devices : 2
    Working Devices : 2
    Failed Devices : 0
    Spare Devices : 0
    Chunk Size : 64K
    UUID : 01dae60a:6831077b:77f74530:8680c183
    Events : 0.97
    Number Major Minor RaidDevice State
    0 8 4 0 active sync /dev/sda4
    1 8 20 1 active sync /dev/sdb4
    /dev/md3:
    Version : 0.90
    Creation Time : Wed Jul 15 08:36:18 2009
    Raid Level : raid1
    Array Size : 987904 (964.91 MiB 1011.61 MB)
    Used Dev Size : 987904 (964.91 MiB 1011.61 MB)
    Raid Devices : 2
    Total Devices : 2
    Preferred Minor : 3
    Persistence : Superblock is persistent
    Update Time : Fri Nov 1 13:26:33 2013
    State : clean
    Active Devices : 2
    Working Devices : 2
    Failed Devices : 0
    Spare Devices : 0
    UUID : 3f4099f2:72e6171b:5ba962fd:48464a62
    Events : 0.54
    Number Major Minor RaidDevice State
    0 8 3 0 active sync /dev/sda3
    1 8 19 1 active sync /dev/sdb3
    mdadm: md device /dev/md4 does not appear to be active.
    mdadm: md device /dev/md5 does not appear to be active.
    mdadm: md device /dev/md6 does not appear to be active.
    mdadm: md device /dev/md7 does not appear to be active.
    mdadm: md device /dev/md8 does not appear to be active.
    ~ # cat /etc/mtab
    securityfs /sys/kernel/security securityfs rw 0 0
    /dev/md2 /DataVolume xfs rw,usrquota 0 0
    /dev/md4 /ExtendVolume xfs rw,usrquota 0 0
    ~ # df -k
    Filesystem 1k-blocks Used Available Use% Mounted on
    /dev/md0 1929044 145092 1685960 8% /
    /dev/md3 972344 123452 799500 13% /var
    /dev/ram0 63412 20 63392 0% /mnt/ram
    ~ # mdadm -D /dev/md2
    /dev/md2:
    Version : 0.90
    Creation Time : Sat Sep 25 10:01:26 2010
    Raid Level : raid0
    Array Size : 1947045760 (1856.85 GiB 1993.77 GB)
    Raid Devices : 2
    Total Devices : 2
    Preferred Minor : 2
    Persistence : Superblock is persistent
    Update Time : Fri Nov 1 13:30:53 2013
    State : active
    Active Devices : 2
    Working Devices : 2
    Failed Devices : 0
    Spare Devices : 0
    Chunk Size : 64K
    UUID : 01dae60a:6831077b:77f74530:8680c183
    Events : 0.97
    Number Major Minor RaidDevice State
    0 8 4 0 active sync /dev/sda4
    1 8 20 1 active sync /dev/sdb4
    ~ # mdadm -D /dev/md4
    mdadm: md device /dev/md4 does not appear to be active.
    ~ # mount
    /dev/root on / type ext3 (rw,noatime,data=ordered)
    proc on /proc type proc (rw)
    sys on /sys type sysfs (rw)
    /dev/pts on /dev/pts type devpts (rw)
    securityfs on /sys/kernel/security type securityfs (rw)
    /dev/md3 on /var type ext3 (rw,noatime,data=ordered)
    /dev/ram0 on /mnt/ram type tmpfs (rw)
    ~ # cat /var/log/messages
    Oct 29 18:04:50 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down.
    Oct 29 18:04:59 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex.
    Oct 29 18:04:59 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102
    Oct 29 18:17:45 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down.
    Oct 29 18:17:53 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex.
    Oct 29 18:17:53 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102
    Oct 30 00:50:11 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down.
    Oct 30 00:50:19 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex.
    Oct 30 00:50:19 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102
    Oct 30 16:29:47 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down.
    Oct 30 16:30:00 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex.
    Oct 30 16:30:00 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102
    Oct 30 18:27:22 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down.
    Oct 30 18:27:30 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex.
    Oct 30 18:27:30 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102
    Oct 30 19:06:03 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down.
    Oct 30 19:06:10 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex.
    Oct 30 19:06:10 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102
    Oct 30 19:14:58 shmotashNAS daemon.warn wixEvent[3462]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed.
    Oct 30 19:20:05 shmotashNAS daemon.alert wixEvent[3462]: Thermal Alarm - System temperature exceeded threshold.(66 degrees)
    Oct 30 19:58:29 shmotashNAS daemon.alert wixEvent[3462]: HDD SMART - HDD 1 SMART Health Status: Failed.
    Oct 30 22:05:39 shmotashNAS daemon.info init: Starting pid 13043, console /dev/null: '/usr/bin/killall'
    Oct 30 22:05:39 shmotashNAS syslog.info System log daemon exiting.
    Oct 30 22:08:09 shmotashNAS syslog.info syslogd started: BusyBox v1.1.1
    Oct 30 22:08:09 shmotashNAS daemon.warn wixEvent[3557]: Network Link - NIC 1 link is down.
    Oct 30 22:08:19 shmotashNAS daemon.info wixEvent[3557]: Network Link - NIC 1 link is up 100 Mbps full duplex.
    Oct 30 22:08:25 shmotashNAS daemon.warn wixEvent[3557]: Network Link - NIC 1 link is down.
    Oct 30 22:08:37 shmotashNAS daemon.info wixEvent[3557]: Network Link - NIC 1 link is up 100 Mbps full duplex.
    Oct 30 22:08:44 shmotashNAS daemon.warn wixEvent[3557]: Network Link - NIC 1 link is down.
    Oct 30 22:08:46 shmotashNAS syslog.info miocrawler: +++++++++++++++ START OF ./miocrawler at 2013:10:30 - 22:08:46 [Version 01.09.00.96] ++++++++++++++
    Oct 30 22:08:46 shmotashNAS syslog.info miocrawler: mc_db_init ...
    Oct 30 22:08:46 shmotashNAS syslog.info miocrawler: ****** database does not exist. ret = -1, creating path
    Oct 30 22:08:49 shmotashNAS syslog.info miocrawler: === mc_db_init ...Done.
    Oct 30 22:08:50 shmotashNAS syslog.info miocrawler: mcUtilsInit() Creating free queue pool
    Oct 30 22:08:51 shmotashNAS syslog.info miocrawler: === mcUtilsInit() Done.
    Oct 30 22:08:51 shmotashNAS syslog.info miocrawler: === inotify init done.
    Oct 30 22:08:51 shmotashNAS syslog.info miocrawler: mc_trans_updater_init() ...
    Oct 30 22:08:52 shmotashNAS syslog.info miocrawler: === mc_trans_updater_init() ...Done.
    Oct 30 22:08:52 shmotashNAS syslog.info miocrawler: === Walking directory done.
    Oct 30 22:08:57 shmotashNAS daemon.info wixEvent[3557]: Network Link - NIC 1 link is up 100 Mbps full duplex.
    Oct 30 22:08:57 shmotashNAS daemon.info wixEvent[3557]: Network IP Address - NIC 1 use static IP address 192.168.1.102
    Oct 30 22:08:57 shmotashNAS daemon.info wixEvent[3557]: Network IP Address - NIC 1 use static IP address 192.168.1.102
    Oct 30 22:08:57 shmotashNAS daemon.info wixEvent[3557]: Network IP Address - NIC 1 use static IP address 192.168.1.102
    Oct 30 22:09:10 shmotashNAS daemon.info init: Starting pid 4605, console /dev/null: '/bin/touch'
    Oct 30 22:09:10 shmotashNAS daemon.info init: Starting pid 4607, console /dev/ttyS0: '/sbin/getty'
    Oct 30 22:09:10 shmotashNAS daemon.info wixEvent[3557]: System Startup - System startup.
    Oct 30 22:09:16 shmotashNAS daemon.warn wixEvent[3557]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed.
    Oct 30 22:14:14 shmotashNAS daemon.warn wixEvent[3557]: Network Link - NIC 1 link is down.
    Oct 30 22:14:21 shmotashNAS daemon.info wixEvent[3557]: Network Link - NIC 1 link is up 100 Mbps full duplex.
    Oct 30 22:14:21 shmotashNAS daemon.info wixEvent[3557]: Network IP Address - NIC 1 use static IP address 192.168.1.102
    Oct 30 22:29:36 shmotashNAS daemon.warn wixEvent[3557]: System Reboot - System will reboot.
    Oct 30 22:29:40 shmotashNAS daemon.info init: Starting pid 5974, console /dev/null: '/usr/bin/killall'
    Oct 30 22:29:40 shmotashNAS syslog.info System log daemon exiting.
Oct 30 22:47:56 shmotashNAS syslog.info syslogd started: BusyBox v1.1.1 Oct 30 22:47:56 shmotashNAS daemon.warn wixEvent[3461]: Network Link - NIC 1 link is down. Oct 30 22:48:02 shmotashNAS daemon.info wixEvent[3461]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 22:48:02 shmotashNAS daemon.info wixEvent[3461]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 22:48:09 shmotashNAS syslog.info miocrawler: +++++++++++++++ START OF ./miocrawler at 2013:10:30 - 22:48:09 [Version 01.09.00.96] ++++++++++++++ Oct 30 22:48:09 shmotashNAS syslog.info miocrawler: mc_db_init ... Oct 30 22:48:09 shmotashNAS syslog.info miocrawler: ++++++++ database exists: ret = 0 Oct 30 22:48:10 shmotashNAS syslog.info miocrawler: === mc_db_init ...Done. Oct 30 22:48:10 shmotashNAS syslog.info miocrawler: mcUtilsInit() Creating free queue pool Oct 30 22:48:11 shmotashNAS syslog.info miocrawler: === mcUtilsInit() Done. Oct 30 22:48:11 shmotashNAS syslog.info miocrawler: === inotify init done. Oct 30 22:48:11 shmotashNAS syslog.info miocrawler: mc_trans_updater_init() ... Oct 30 22:48:11 shmotashNAS syslog.info miocrawler: === mc_trans_updater_init() ...Done. Oct 30 22:48:11 shmotashNAS syslog.info miocrawler: === Walking directory done. Oct 30 22:48:27 shmotashNAS daemon.info init: Starting pid 4079, console /dev/null: '/bin/touch' Oct 30 22:48:27 shmotashNAS daemon.info init: Starting pid 4080, console /dev/ttyS0: '/sbin/getty' Oct 30 22:48:28 shmotashNAS daemon.info wixEvent[3461]: System Startup - System startup. Oct 30 22:49:01 shmotashNAS daemon.warn wixEvent[3461]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed. Oct 30 23:51:11 shmotashNAS daemon.warn wixEvent[3461]: System Reboot - System will reboot. Oct 30 23:51:16 shmotashNAS daemon.info init: Starting pid 6498, console /dev/null: '/usr/bin/killall' Oct 30 23:51:16 shmotashNAS syslog.info System log daemon exiting. Oct 30 23:54:19 shmotashNAS syslog.info syslogd started: BusyBox v1.1.1 Oct 30 23:55:37 shmotashNAS daemon.info wixEvent[3476]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 23:55:37 shmotashNAS daemon.info wixEvent[3476]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 23:55:44 shmotashNAS syslog.info miocrawler: +++++++++++++++ START OF ./miocrawler at 2013:10:30 - 23:55:44 [Version 01.09.00.96] ++++++++++++++ Oct 30 23:55:44 shmotashNAS syslog.info miocrawler: mc_db_init ... Oct 30 23:55:44 shmotashNAS syslog.info miocrawler: ++++++++ database exists: ret = 0 Oct 30 23:55:45 shmotashNAS syslog.info miocrawler: === mc_db_init ...Done. Oct 30 23:55:45 shmotashNAS syslog.info miocrawler: mcUtilsInit() Creating free queue pool Oct 30 23:55:46 shmotashNAS syslog.info miocrawler: === mcUtilsInit() Done. Oct 30 23:55:46 shmotashNAS syslog.info miocrawler: === inotify init done. Oct 30 23:55:46 shmotashNAS syslog.info miocrawler: mc_trans_updater_init() ... Oct 30 23:55:46 shmotashNAS syslog.info miocrawler: === mc_trans_updater_init() ...Done. Oct 30 23:55:46 shmotashNAS syslog.info miocrawler: === Walking directory done. 
Oct 30 23:55:58 shmotashNAS daemon.info init: Starting pid 4115, console /dev/null: '/bin/touch' Oct 30 23:55:58 shmotashNAS daemon.info init: Starting pid 4116, console /dev/ttyS0: '/sbin/getty' Oct 30 23:55:58 shmotashNAS daemon.info wixEvent[3476]: System Startup - System startup. Oct 30 23:56:33 shmotashNAS daemon.warn wixEvent[3476]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed. Oct 31 00:29:14 shmotashNAS auth.info sshd[5409]: Server listening on 0.0.0.0 port 22. Oct 31 00:31:25 shmotashNAS auth.info sshd[5486]: Accepted password for root from 192.168.1.100 port 50785 ssh2 Oct 31 00:33:44 shmotashNAS auth.info sshd[5565]: Accepted password for root from 192.168.1.100 port 50817 ssh2 Oct 31 00:36:39 shmotashNAS daemon.info init: Starting pid 5680, console /dev/null: '/usr/bin/killall' Oct 31 00:36:39 shmotashNAS syslog.info System log daemon exiting. Oct 31 00:40:44 shmotashNAS syslog.info syslogd started: BusyBox v1.1.1 Oct 31 00:40:51 shmotashNAS daemon.info wixEvent[3464]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 31 00:40:51 shmotashNAS daemon.info wixEvent[3464]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 31 00:41:00 shmotashNAS syslog.info miocrawler: +++++++++++++++ START OF ./miocrawler at 2013:10:31 - 00:41:00 [Version 01.09.00.96] ++++++++++++++ Oct 31 00:41:00 shmotashNAS syslog.info miocrawler: mc_db_init ... Oct 31 00:41:00 shmotashNAS syslog.info miocrawler: ++++++++ database exists: ret = 0 Oct 31 00:41:00 shmotashNAS syslog.info miocrawler: === mc_db_init ...Done. Oct 31 00:41:01 shmotashNAS syslog.info miocrawler: mcUtilsInit() Creating free queue pool Oct 31 00:41:02 shmotashNAS syslog.info miocrawler: === mcUtilsInit() Done. Oct 31 00:41:02 shmotashNAS syslog.info miocrawler: === inotify init done. Oct 31 00:41:02 shmotashNAS syslog.info miocrawler: mc_trans_updater_init() ... Oct 31 00:41:02 shmotashNAS syslog.info miocrawler: === mc_trans_updater_init() ...Done. Oct 31 00:41:02 shmotashNAS syslog.info miocrawler: === Walking directory done. Oct 31 00:41:14 shmotashNAS daemon.info init: Starting pid 4101, console /dev/null: '/bin/touch' Oct 31 00:41:14 shmotashNAS daemon.info init: Starting pid 4102, console /dev/ttyS0: '/sbin/getty' Oct 31 00:41:15 shmotashNAS daemon.info wixEvent[3464]: System Startup - System startup. Oct 31 00:41:47 shmotashNAS daemon.warn wixEvent[3464]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed. Oct 31 01:13:19 shmotashNAS daemon.info init: Starting pid 5385, console /dev/null: '/usr/bin/killall' Oct 31 01:13:19 shmotashNAS syslog.info System log daemon exiting. Nov 1 13:26:25 shmotashNAS syslog.info syslogd started: BusyBox v1.1.1 Nov 1 13:26:32 shmotashNAS daemon.info wixEvent[3471]: Network Link - NIC 1 link is up 100 Mbps full duplex. Nov 1 13:26:32 shmotashNAS daemon.info wixEvent[3471]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Nov 1 13:26:38 shmotashNAS syslog.info miocrawler: +++++++++++++++ START OF ./miocrawler at 2013:11:01 - 13:26:38 [Version 01.09.00.96] ++++++++++++++ Nov 1 13:26:38 shmotashNAS syslog.info miocrawler: mc_db_init ... 
Nov 1 13:26:38 shmotashNAS syslog.info miocrawler: ++++++++ database exists: ret = 0 Nov 1 13:26:39 shmotashNAS syslog.info miocrawler: === mc_db_init ...Done. Nov 1 13:26:39 shmotashNAS syslog.info miocrawler: mcUtilsInit() Creating free queue pool Nov 1 13:26:40 shmotashNAS syslog.info miocrawler: === mcUtilsInit() Done. Nov 1 13:26:40 shmotashNAS syslog.info miocrawler: === inotify init done. Nov 1 13:26:40 shmotashNAS syslog.info miocrawler: mc_trans_updater_init() ... Nov 1 13:26:40 shmotashNAS syslog.info miocrawler: === mc_trans_updater_init() ...Done. Nov 1 13:26:40 shmotashNAS syslog.info miocrawler: === Walking directory done. Nov 1 13:26:52 shmotashNAS daemon.info init: Starting pid 4078, console /dev/null: '/bin/touch' Nov 1 13:26:52 shmotashNAS daemon.info init: Starting pid 4079, console /dev/ttyS0: '/sbin/getty' Nov 1 13:26:52 shmotashNAS daemon.info wixEvent[3471]: System Startup - System startup. Nov 1 13:27:28 shmotashNAS daemon.warn wixEvent[3471]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed. Nov 1 13:44:48 shmotashNAS auth.info sshd[5375]: Accepted password for root from 192.168.1.103 port 50217 ssh2 Nov 1 13:51:08 shmotashNAS auth.info sshd[5894]: Accepted password for root from 192.168.1.103 port 50380 ssh2

    Read the article

  • Installing multiple php versions plus extensions on freebsd

    - by jgtumusiime
    I'm currently learning how to work with FreeBSD. Lately I have been trying to run multiple PHP versions along with their respective packages. However, I seem to be running into issues while installing them. The default location for my PHP installation is /usr/local/etc/; however, I want to be able to install php5.2, php5.3 and php5.4 in /usr/local/etc/php52, /usr/local/etc/php53 and /usr/local/etc/php54 respectively. Using ports I simply achieved this by doing cd /usr/ports/lang/php5x && make PREFIX="/usr/local/etc/php5x" install clean. The problem now is: how do I do the same for the extensions of all my PHP versions? When I try installing php-extensions like so: cd /usr/ports/lang/php5x-extension && make PREFIX="/usr/local/etc/php5x/lib/php" install clean, I get this error ... ===> PHPizing for php53-bcmath-5.3.17 env: /usr/local/bin/phpize: No such file or directory *** Error code 127 Stop in /usr/ports/math/php53-bcmath. *** Error code 1 Stop in /usr/ports/lang/php53-extensions. My phpize is located in /usr/local/etc/php5x/bin/phpize. So how do I get make (or whatever) to look for phpize in the right path? Is there a cleaner, maybe simpler, way of maintaining multiple PHP installations? I need to achieve this because of compatibility issues in some legacy code that runs on 5.2 and breaks on 5.3. Thank you. ================= So I successfully installed and configured a FreeBSD jail, and I would like to install software within my jail, but I cannot connect to the network. Here is my rc.conf: jail_enable="YES" # Set to NO to disable starting of any jails jail_list="mambo2" # Space separated list of names of jails jail_mambo2_rootdir="/usr/jails/j01" # jail's root directory jail_mambo2_hostname="mambo2.ug" # jail's hostname jail_mambo2_ip="192.168.100.174" # jail's IP address jail_mambo2_devfs_enable="YES" # mount devfs in the jail jail_mambo2_devfs_ruleset="mambo2_ruleset" # devfs ruleset to apply to jail Here is my jail's ifconfig output: mambo2# ifconfig rl0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 options=8<VLAN_MTU> ether 00:c1:28:00:48:db media: Ethernet autoselect (100baseTX <full-duplex>) status: active plip0: flags=108810<POINTOPOINT,SIMPLEX,MULTICAST,NEEDSGIANT> metric 0 mtu 1500 lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384 mambo2# I created an /etc/resolv.conf for nameservers: mambo2# cat /etc/resolv.conf nameserver 192.168.100.251 nameserver 8.8.8.8 mambo2# Here is a list of running jails: [root@mambo /usr/home/jtumusiime]# jls JID IP Address Hostname Path 5 192.168.100.174 mambo2.ug /usr/jails/j01 My host has 4 IP addresses, 3 public and one private: 192.168.100.173. I tried creating a jail using ezjail and this did not work out. [root@mambo /usr/home/jtumusiime]# ezjail-admin update -p -i Error: Cannot find your copy of the FreeBSD source tree in . Consider using 'ezjail-admin install' to create the base jail from an ftp server. [root@mambo /usr/home/jtumusiime]# I have an updated copy of the FreeBSD 7.1 source in /usr/src/ and I did #make buildworld while building the first jail, mambo2. Here is an excerpt of the output of ezjail-admin install ... 221 Goodbye. Trying 193.162.146.4... Connected to ftp.freebsd.org. 220 ftp.beastie.tdk.net FTP server (Version 6.00LS) ready. 331 Guest login ok, send your email address as password. 230 Guest login ok, access restrictions apply. Remote system type is UNIX. Using binary mode to transfer files. 200 Type set to I. 550 pub/FreeBSD-Archive/old-releases/i386/7.1-RELEASE/base: No such file or directory. 
221 Goodbye. Could not fetch base from ftp.freebsd.org. Maybe your release (7.1-RELEASE) is specified incorrectly or the host ftp.freebsd.org does not provide that release build. Use the -r option to specify an existing release or the -h option to specify an alternative ftp server. Querying your ftp-server... The ftp server you specified (ftp.freebsd.org) seems to provide the following builds: Trying 193.162.146.4... total 10 drwxrwxr-x 13 1006 1006 512 Feb 20 2011 8.2-RELEASE drwxrwxr-x 13 1006 1006 512 Apr 10 2012 8.3-RELEASE lrwxr-xr-x 1 1006 1006 16 Jan 7 2012 9.0-RELEASE -> i386/9.0-RELEASE drwxrwxr-x 7 1006 1006 1024 Feb 19 2012 ISO-IMAGES -rw-rw-r-- 1 1006 1006 637 Nov 23 2005 README.TXT drwxrwxr-x 5 1006 1006 512 Nov 2 02:59 i386 I do not want to upgrade my FreeBSD installation. I have googled around, but all in vain.

    Read the article

  • Cyrus on CentOS with sasl / pam / ldap

    - by Oscar
    SASL/PAM/LDAP is driving me crazy... that's what I read a lot when googling for problems in this area, and what I experience myself :-S I'm trying to get Cyrus imap working for virtual hosting on CentOS with this authorisation backend and really don't know what's happening. In saslauthd I configured the LDAP search filter to use, but it looks like pam completely ignores it. Here's what I do for testing (done more tests but all with similar results): [root@testserv ~]# imtest -u [email protected] -a [email protected] WARNING: no hostname supplied, assuming localhost S: * OK [CAPABILITY IMAP4 IMAP4rev1 LITERAL+ ID STARTTLS] testserv. Cyrus IMAP4 v2.3.7-Invoca-RPM-2.3.7-7.el5_6.4 server ready C: C01 CAPABILITY S: * CAPABILITY IMAP4 IMAP4rev1 LITERAL+ ID STARTTLS ACL RIGHTS=kxte QUOTA MAILBOX-REFERRALS NAMESPACE UIDPLUS NO_ATOMIC_RENAME UNSELECT CHILDREN MULTIAPPEND BINARY SORT SORT=MODSEQ THREAD=ORDEREDSUBJECT THREAD=REFERENCES ANNOTATEMORE CATENATE CONDSTORE IDLE LISTEXT LIST-SUBSCRIBED X-NETSCAPE URLAUTH S: C01 OK Completed Please enter your password: C: L01 LOGIN [email protected] {6} S: + go ahead C: <omitted> S: L01 NO Login failed: authentication failure Authentication failed. generic failure Security strength factor: 0 C: Q01 LOGOUT * BYE LOGOUT received Q01 OK Completed Connection closed. The LDAP entry does exist (and so does the mailbox in Cyrus): [root@testserv ~]# ldapsearch -WxD cn=Manager,o=mydomain,c=com [email protected] Enter LDAP Password: # extended LDIF # # LDAPv3 # base <> with scope subtree # filter: [email protected] # requesting: ALL # # myuser, accounts, testserv.mydomain.com, mydomain, com dn: uid=myuser,ou=accounts,dc=testserv.mydomain.com,o=mydomain,c=com objectClass: top objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: posixAccount objectClass: shadowAccount uidNumber: 16 uid: myuser gidNumber: 5 givenName: My sn: Name mail: [email protected] cn: My Name userPassword:: dYN5ebB0fXhNRn1pZllhRnJX7Uk= shadowLastChange: 15176 homeDirectory: /dev/null # search result search: 2 result: 0 Success # numResponses: 2 # numEntries: 1 This is what I get in /var/log/messages Aug 2 04:00:11 testserv cyrus/imap[12514]: auxpropfunc error invalid parameter supplied Aug 2 04:00:19 testserv saslauthd[5926]: do_auth : auth failure: [[email protected]] [service=imap] [realm=testserv.mydomain.com] [mech=pam] [reason=PAM auth error] ... /var/adm/auth.log Aug 2 04:00:11 testserv cyrus/imap[12514]: auxpropfunc error invalid parameter supplied Aug 2 04:00:11 testserv cyrus/imap[12514]: _sasl_plugin_load failed on sasl_auxprop_plug_init for plugin: ldapdb Aug 2 04:00:19 testserv saslauthd[5926]: DEBUG: auth_pam: pam_authenticate failed: User not known to the underlying authentication module Aug 2 04:00:19 testserv saslauthd[5926]: do_auth : auth failure: [[email protected]] [service=imap] [realm=testserv.mydomain.com] [mech=pam] [reason=PAM auth error] (AFAIK I can ignore the auxprop msg) ... 
and /var/log/slapd.log: Aug 2 04:00:19 testserv slapd[5968]: conn=61 fd=27 ACCEPT from IP=127.0.0.1:51403 (IP=0.0.0.0:389) Aug 2 04:00:19 testserv slapd[5968]: conn=61 op=0 BIND dn="" method=128 Aug 2 04:00:19 testserv slapd[5968]: conn=61 op=0 RESULT tag=97 err=0 text= Aug 2 04:00:19 testserv slapd[5968]: conn=61 op=1 SRCH base="o=mydomain,c=com" scope=2 deref=0 filter="([email protected])" Aug 2 04:00:19 testserv slapd[5968]: conn=61 op=1 SEARCH RESULT tag=101 err=0 nentries=0 text= Aug 2 04:00:19 testserv slapd[5968]: conn=61 op=2 UNBIND Aug 2 04:00:19 testserv slapd[5968]: conn=61 fd=27 closed These are the settings in /etc/imapd.conf: sasl_mech_list: PLAIN LOGIN sasl_pwcheck_method: saslauthd ## sasl_auxprop_plugin: sasldb sasl_auto_transition: no and my sasl config: [root@testserv ~]# cat /etc/sysconfig/saslauthd # Directory in which to place saslauthd's listening socket, pid file, and so # on. This directory must already exist. SOCKETDIR=/var/run/saslauthd # Mechanism to use when checking passwords. Run "saslauthd -v" to get a list # of which mechanism your installation was compiled with the ablity to use. MECH=pam # Additional flags to pass to saslauthd on the command line. See saslauthd(8) # for the list of accepted flags. FLAGS="-c -r -O /etc/saslauthd.conf" [root@testserv ~]# cat /etc/saslauthd.conf ldap_servers: ldap://127.0.0.1/ ldap_search_base: dc=%d,o=mydomain,c=com ldap_auth_method: bind #ldap_filter: (|(uid=%u)((&(mail=%u@%d)(accountStatus=active))) ldap_filter: (&(mail=%u@%d)(accountStatus=active)) ldap_debug: 1 ldap_version: 3 The accountStatus=active is not in LDAP yet, but that doesn't make a difference since I don't see it in the filter... that's not the reason for the failure. The weird thing is, I do get an error when I rename or remove /etc/saslauthd.conf, but when the file exists it seems happily ignored... The filter in slapd.log seems to be taken from /etc/ldap.conf. Apart from some timers, that only contains: host 127.0.0.1 base o=mydomain,c=com pam_login_attribute mail Commenting out the pam_login_attribute results in this filter in slapd.log: filter="([email protected])" Pam-imap looks like this: [root@testserv ~]# cat /etc/pam.d/imap auth required pam_ldap.so debug account required pam_ldap.so debug #auth sufficient pam_unix.so likeauth nullok #auth sufficient pam_ldap.so use_first_pass #auth required pam_deny.so #account sufficient pam_unix.so #account sufficient pam_ldap.so The commented-out lines are there because I don't have the cyrus admin user in LDAP; that's a Linux user. That works fine when uncommented, but I still need to play around with that a little, and first I want to get IMAP working. Finally nsswitch: [root@testserv ~]# cat /etc/nsswitch.conf # # /etc/nsswitch.conf # # An example Name Service Switch config file. This file should be # sorted with the most-used services at the beginning. # # The entry '[NOTFOUND=return]' means that the search for an # entry should stop if the search in the previous entry turned # up nothing. Note that if the search failed due to some other reason # (like no NIS server responding) then the search continues with the # next entry. 
# # Legal entries are: # # nisplus or nis+ Use NIS+ (NIS version 3) # nis or yp Use NIS (NIS version 2), also called YP # dns Use DNS (Domain Name Service) # files Use the local files # db Use the local database (.db) files # compat Use NIS on compat mode # hesiod Use Hesiod for user lookups # [NOTFOUND=return] Stop searching if not found so far # # To use db, put the "db" in front of "files" for entries you want to be # looked up first in the databases # # Example: #passwd: db files nisplus nis #shadow: db files nisplus nis #group: db files nisplus nis passwd: compat ldap group: compat ldap shadow: compat ldap hosts: files dns bootparams: nisplus [NOTFOUND=return] files ethers: files netmasks: files networks: files protocols: files rpc: files services: files netgroup: nisplus publickey: nisplus automount: files nisplus aliases: files nisplus Any info where to start looking will be greatly appreciated! Thnx in advance

    Read the article

  • Virtualhost entries get overwritten when apache httpd.conf is rebuilt

    - by Amitabh
    Background: We have been trying to get a wildcard SSL certificate working on multiple sub domains on a single dedicated address. We have two sub domains, next.my-personal-website.com and blog.my-personal-website.com. Part of our strategy has been to edit the httpd.conf and add the NameVirtualHost xx.xx.144.72:443 directive and the virtualhost entries for port 443 for the subdomains there. This works well if we just edit the httpd.conf, add the entries, save it and restart Apache. The problem: But if we add a new sub domain from cpanel, or we run # /usr/local/cpanel/bin/apache_conf_distiller --update # /scripts/rebuildhttpdconf, the virtualhost entries that we added manually are no longer there in the newly generated httpd.conf file. Only the virtualhost entry for the main domain for port 443 that was there before we made edits to the httpd.conf is there (assuming we are not discussing virtualhost entries for port 80). I understand we need to put the new virtualhost entries in some include files as mentioned here in the cpanel documentation, but I am not sure where. So the question would be: where do I put the NameVirtualHost xx.xx.144.72:443 directive and the two virtualhost directives for port 443, so that they are not overwritten when httpd.conf is rebuilt/regenerated later. Virtualhost entries: The two virtualhost entries for the subdomains are: <VirtualHost xx.xx.144.72:443> ServerName next.my-personal-website.com ServerAlias www.next.my-personal-website.com DocumentRoot /home/myguardi/public_html/next.my-personal-website.com ServerAdmin [email protected] UseCanonicalName On CustomLog /usr/local/apache/domlogs/next.my-personal-website.com combined CustomLog /usr/local/apache/domlogs/next.my-personal-website.com-bytes_log "%{%s}t %I .\n%{%s}t %O ." ## User myguardi # Needed for Cpanel::ApacheConf <IfModule mod_suphp.c> suPHP_UserGroup myguardi myguardi </IfModule> <IfModule !mod_disable_suexec.c> SuexecUserGroup myguardi myguardi </IfModule> ScriptAlias /cgi-bin/ /home/myguardi/public_html/next.my-personal-website.com/cgi-bin/ SSLEngine on SSLCertificateFile /etc/ssl/certs/my-personal-website.com.crt SSLCertificateKeyFile /etc/ssl/private/my-personal-website.com.key SSLCACertificateFile /etc/ssl/certs/my-personal-website.com.cabundle CustomLog /usr/local/apache/domlogs/next.my-personal-website.com-ssl_log combined SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown <Directory "/home/myguardi/public_html/cgi-bin"> SSLOptions +StdEnvVars </Directory> and <VirtualHost xx.xx.144.72:443> ServerName blog.my-personal-website.com ServerAlias www.blog.my-personal-website.com DocumentRoot /home/myguardi/public_html/blog.my-personal-website.com ServerAdmin [email protected] UseCanonicalName On CustomLog /usr/local/apache/domlogs/blog.my-personal-website.com combined CustomLog /usr/local/apache/domlogs/blog.my-personal-website.com-bytes_log "%{%s}t %I .\n%{%s}t %O ." 
## User myguardi # Needed for Cpanel::ApacheConf <IfModule mod_suphp.c> suPHP_UserGroup myguardi myguardi </IfModule> <IfModule !mod_disable_suexec.c> SuexecUserGroup myguardi myguardi </IfModule> ScriptAlias /cgi-bin/ /home/myguardi/public_html/blog.my-personal-website.com/cgi-bin/ SSLEngine on SSLCertificateFile /etc/ssl/certs/my-personal-website.com.crt SSLCertificateKeyFile /etc/ssl/private/my-personal-website.com.key SSLCACertificateFile /etc/ssl/certs/my-personal-website.com.cabundle CustomLog /usr/local/apache/domlogs/blog.my-personal-website.com-ssl_log combined SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown <Directory "/home/myguardi/public_html/cgi-bin"> SSLOptions +StdEnvVars </Directory> and the automatically generated virtualhost entry for the main domain for port 443 is <VirtualHost xx.xx.144.72:443> ServerName my-personal-website.com ServerAlias www.my-personal-website.com DocumentRoot /home/myguardi/public_html ServerAdmin [email protected] UseCanonicalName Off CustomLog /usr/local/apache/domlogs/my-personal-website.com combined CustomLog /usr/local/apache/domlogs/my-personal-website.com-bytes_log "%{%s}t %I .\n%{%s}t %O ." ## User myguardi # Needed for Cpanel::ApacheConf <IfModule mod_suphp.c> suPHP_UserGroup myguardi myguardi </IfModule> <IfModule !mod_disable_suexec.c> SuexecUserGroup myguardi myguardi </IfModule> ScriptAlias /cgi-bin/ /home/myguardi/public_html/cgi-bin/ SSLEngine on SSLCertificateFile /etc/ssl/certs/my-personal-website.com.crt SSLCertificateKeyFile /etc/ssl/private/my-personal-website.com.key SSLCACertificateFile /etc/ssl/certs/my-personal-website.com.cabundle CustomLog /usr/local/apache/domlogs/my-personal-website.com-ssl_log combined SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown <Directory "/home/myguardi/public_html/cgi-bin"> SSLOptions +StdEnvVars </Directory> # To customize this VirtualHost use an include file at the following location # Include "/usr/local/apache/conf/userdata/ssl/2/myguardi/my-personal-website.com/*.conf" I would really appreciate it if somebody could tell me how to proceed with this. Thank you. Update: The Include directives present are: Include "/usr/local/apache/conf/includes/pre_main_global.conf" Include "/usr/local/apache/conf/includes/pre_main_2.conf" Include "/usr/local/apache/conf/php.conf" Include "/usr/local/apache/conf/includes/errordocument.conf" Include "/usr/local/apache/conf/modsec2.conf" Include "/usr/local/apache/conf/includes/pre_virtualhost_global.conf" Include "/usr/local/apache/conf/includes/pre_virtualhost_2.conf" These are the entries that are generated before any virtualhost entry is defined. Towards the end of the httpd.conf file, the following two entries are added: Include "/usr/local/apache/conf/includes/post_virtualhost_global.conf" Include "/usr/local/apache/conf/includes/post_virtualhost_2.conf" The older httpd.conf file before we added the virtualhost entries for sub domains for port 443 can be viewed here

    Read the article

  • Handling WCF Service Paths in Silverlight 4 – Relative Path Support

    - by dwahlin
    If you’re building Silverlight applications that consume data, then you’re probably making calls to Web Services. We’ve been successfully using WCF along with Silverlight for several client Line of Business (LOB) applications and passing a lot of data back and forth. Due to the pain involved with updating the ServiceReferences.ClientConfig file generated by a Silverlight service proxy (see Tim Heuer’s post on that subject to see different ways to deal with it) we’ve been using our own technique to figure out the service URL. Going that route makes it a piece of cake to switch between development, staging and production environments. To start, we have a ServiceProxyBase class that handles identifying the URL to use based on the XAP file’s location (this assumes that the service is in the same Web project that serves up the XAP file). The GetServiceUrlBase() method handles this work: public class ServiceProxyBase { public ServiceProxyBase() { if (!IsDesignTime) { ServiceUrlBase = GetServiceUrlBase(); } } public string ServiceUrlBase { get; set; } public static bool IsDesignTime { get { return (Application.Current == null) || (Application.Current.GetType() == typeof (Application)); } } public static string GetServiceUrlBase() { if (!IsDesignTime) { string url = Application.Current.Host.Source.OriginalString; return url.Substring(0, url.IndexOf("/ClientBin", StringComparison.InvariantCultureIgnoreCase)); } return null; } } Silverlight 4 now supports relative paths to services, which greatly simplifies things.  We changed the code above to the following: public class ServiceProxyBase { private const string ServiceUrlPath = "../Services/JobPlanService.svc"; public ServiceProxyBase() { if (!IsDesignTime) { ServiceUrl = ServiceUrlPath; } } public string ServiceUrl { get; set; } public static bool IsDesignTime { get { return (Application.Current == null) || (Application.Current.GetType() == typeof (Application)); } } public static string GetServiceUrl() { if (!IsDesignTime) { return ServiceUrlPath; } return null; } } Our ServiceProxy class derives from ServiceProxyBase and handles creating the ABCs (Address, Binding, Contract) needed for a WCF service call. Looking through the code (mainly the constructor) you’ll notice that the service URI is created by supplying the base path to the XAP file along with the relative path defined in ServiceProxyBase:   public class ServiceProxy : ServiceProxyBase, IServiceProxy { private const string CompletedEventargs = "CompletedEventArgs"; private const string Completed = "Completed"; private const string Async = "Async"; private readonly CustomBinding _Binding; private readonly EndpointAddress _EndPointAddress; private readonly Uri _ServiceUri; private readonly Type _ProxyType = typeof(JobPlanServiceClient); public ServiceProxy() { _ServiceUri = new Uri(Application.Current.Host.Source, ServiceUrl); var elements = new BindingElementCollection { new BinaryMessageEncodingBindingElement(), new HttpTransportBindingElement { MaxBufferSize = 2147483647, MaxReceivedMessageSize = 2147483647 } }; // order of entries in collection is significant: dumb _Binding = new CustomBinding(elements); _EndPointAddress = new EndpointAddress(_ServiceUri); } #region IServiceProxy Members /// <summary> /// Used to call a WCF service operation. 
/// </summary> /// <typeparam name="T">The type of EventArgs that will be returned by the service operation.</typeparam> /// <param name="callback">The method to call once the WCF call returns (the callback).</param> /// <param name="parameters">Any parameters that the service operation expects.</param> public void CallService<T>(EventHandler<T> callback, params object[] parameters) where T : EventArgs { try { var proxy = new JobPlanServiceClient(_Binding, _EndPointAddress); string action = typeof (T).Name.Replace(CompletedEventargs, String.Empty); _ProxyType.GetEvent(action + Completed).AddEventHandler(proxy, callback); _ProxyType.InvokeMember(action + Async, BindingFlags.InvokeMethod, null, proxy, parameters); } catch (Exception exp) { MessageBox.Show("Unable to use ServiceProxy.CallService to retrieve data: " + exp.Message); } } #endregion } The relative path support for calling services in Silverlight 4 definitely simplifies code and is yet another good reason to move from Silverlight 3 to Silverlight 4.   For more information about onsite, online and video training, mentoring and consulting solutions for .NET, SharePoint or Silverlight please visit http://www.thewahlingroup.com.
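    For reference, calling the proxy from a ViewModel then reduces to a single line. A minimal sketch, assuming a hypothetical GetJobPlans operation (and its generated GetJobPlansCompletedEventArgs) on the JobPlanServiceClient shown above:

    public class JobPlanViewModel
    {
        private readonly IServiceProxy _proxy = new ServiceProxy();

        public void LoadJobPlans()
        {
            // CallService reflects over the proxy type: it wires GetJobPlansCompleted
            // to the callback and then invokes GetJobPlansAsync().
            _proxy.CallService<GetJobPlansCompletedEventArgs>(OnJobPlansLoaded);
        }

        private void OnJobPlansLoaded(object sender, GetJobPlansCompletedEventArgs e)
        {
            if (e.Error == null)
            {
                // bind e.Result to the view here
            }
        }
    }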

    Read the article

  • Juju stuck in "pending" state when using LXC

    - by Andre
    So I'm trying to get started with Juju, and tried to do this locally using LXC. I followed the instructions here: How do I configure juju for local usage? Unfortunately this doesn't seem to work for me. status shows the following: $ juju status machines: 0: agent-state: running dns-name: localhost instance-id: local instance-state: running services: mysql: charm: cs:precise/mysql-1 relations: db: - wordpress units: mysql/0: agent-state: pending machine: 0 public-address: null wordpress: charm: cs:precise/wordpress-0 exposed: true relations: db: - mysql units: wordpress/0: agent-state: pending machine: 0 open-ports: [] public-address: null 2012-05-10 14:09:38,155 INFO 'status' command finished successfully As you can see the agent-state is 'pending' and there is no public address where I'm able to access the newly created site. Am I missing something here? UPDATE: Tried destroying the environment and doing everything again (multiple times). This is the output of debug-log: ~$ juju debug-log 2012-05-11 08:50:23,790 INFO Enabling distributed debug log. 2012-05-11 08:50:23,806 INFO Tailing logs - Ctrl-C to stop. 2012-05-11 08:50:42,338 Machine:0: juju.agents.machine DEBUG: Units changed old:set([]) new:set(['mysql/0']) 2012-05-11 08:50:42,339 Machine:0: juju.agents.machine DEBUG: Starting service unit: mysql/0 ... 2012-05-11 08:50:42,459 Machine:0: unit.deploy DEBUG: Downloading charm cs:precise/mysql-1 to /home/andre/.juju/data/andre-local/charms 2012-05-11 08:50:42,620 Machine:0: unit.deploy DEBUG: Using <juju.machine.unit.UnitContainerDeployment object at 0x9c54b6c> for mysql/0 in /home/andre/.juju/data/andre-local 2012-05-11 08:50:42,648 Machine:0: unit.deploy DEBUG: Starting service unit mysql/0... 2012-05-11 08:50:42,649 Machine:0: unit.deploy DEBUG: Creating master container... 2012-05-11 08:54:33,992 Machine:0: unit.deploy DEBUG: Created master container andre-local-0-template 2012-05-11 08:54:33,993 Machine:0: unit.deploy INFO: Creating container mysql-0... 2012-05-11 08:56:18,760 Machine:0: unit.deploy INFO: Container created for mysql/0 2012-05-11 08:56:19,466 Machine:0: unit.deploy DEBUG: Charm extracted into container 2012-05-11 08:56:19,569 Machine:0: unit.deploy DEBUG: Starting container... 2012-05-11 08:56:22,707 Machine:0: unit.deploy INFO: Started container for mysql/0 2012-05-11 08:56:22,707 Machine:0: unit.deploy INFO: Started service unit mysql/0 2012-05-11 08:56:23,012 Machine:0: juju.agents.machine DEBUG: Units changed old:set(['mysql/0']) new:set(['wordpress/0', 'mysql/0']) 2012-05-11 08:56:23,039 Machine:0: juju.agents.machine DEBUG: Starting service unit: wordpress/0 ... 2012-05-11 08:56:23,154 Machine:0: unit.deploy DEBUG: Downloading charm cs:precise/wordpress-0 to /home/andre/.juju/data/andre-local/charms 2012-05-11 08:56:23,396 Machine:0: unit.deploy DEBUG: Using <juju.machine.unit.UnitContainerDeployment object at 0x9c519cc> for wordpress/0 in /home/andre/.juju/data/andre-local 2012-05-11 08:56:23,620 Machine:0: unit.deploy DEBUG: Starting service unit wordpress/0... 2012-05-11 08:56:23,621 Machine:0: unit.deploy INFO: Creating container wordpress-0... 2012-05-11 08:58:24,739 Machine:0: unit.deploy INFO: Container created for wordpress/0 2012-05-11 08:58:25,163 Machine:0: unit.deploy DEBUG: Charm extracted into container 2012-05-11 08:58:25,397 Machine:0: unit.deploy DEBUG: Starting container... 
2012-05-11 08:58:27,982 Machine:0: unit.deploy INFO: Started container for wordpress/0 2012-05-11 08:58:27,983 Machine:0: unit.deploy INFO: Started service unit wordpress/0 This is the result for the status command (with verbose flag): ~$ juju -v status 2012-05-11 08:51:53,464 DEBUG Initializing juju status runtime 2012-05-11 08:51:53,625:4030(0xb7345b00):ZOO_INFO@log_env@658: Client environment:zookeeper.version=zookeeper C client 3.3.5 2012-05-11 08:51:53,625:4030(0xb7345b00):ZOO_INFO@log_env@662: Client environment:host.name=andre-ufo 2012-05-11 08:51:53,625:4030(0xb7345b00):ZOO_INFO@log_env@669: Client environment:os.name=Linux 2012-05-11 08:51:53,625:4030(0xb7345b00):ZOO_INFO@log_env@670: Client environment:os.arch=3.2.0-24-generic-pae 2012-05-11 08:51:53,625:4030(0xb7345b00):ZOO_INFO@log_env@671: Client environment:os.version=#37-Ubuntu SMP Wed Apr 25 10:47:59 UTC 2012 2012-05-11 08:51:53,626:4030(0xb7345b00):ZOO_INFO@log_env@679: Client environment:user.name=andre 2012-05-11 08:51:53,626:4030(0xb7345b00):ZOO_INFO@log_env@687: Client environment:user.home=/home/andre 2012-05-11 08:51:53,626:4030(0xb7345b00):ZOO_INFO@log_env@699: Client environment:user.dir=/home/andre 2012-05-11 08:51:53,626:4030(0xb7345b00):ZOO_INFO@zookeeper_init@727: Initiating client connection, host=192.168.122.1:41779 sessionTimeout=10000 watcher=0xb7780620 sessionId=0 sessionPasswd=<null> context=0x9242ee8 flags=0 2012-05-11 08:51:53,627:4030(0xb6b90b40):ZOO_INFO@check_events@1585: initiated connection to server [192.168.122.1:41779] 2012-05-11 08:51:53,649:4030(0xb6b90b40):ZOO_INFO@check_events@1632: session establishment complete on server [192.168.122.1:41779], sessionId=0x1373ae057d90007, negotiated timeout=10000 2012-05-11 08:51:53,651 DEBUG Environment is initialized. machines: 0: agent-state: running dns-name: localhost instance-id: local instance-state: running services: mysql: charm: cs:precise/mysql-1 relations: db: - wordpress units: mysql/0: agent-state: pending machine: 0 public-address: null wordpress: charm: cs:precise/wordpress-0 relations: db: - mysql units: wordpress/0: agent-state: pending machine: 0 public-address: null

    Read the article

  • Using progress dialog in Visual Studio extensions

    - by Utkarsh Shigihalli
    Originally posted on: http://geekswithblogs.net/onlyutkarsh/archive/2014/05/23/using-progress-dialog-in-visual-studio-extensions.aspx As a Visual Studio extension developer you are required to keep the aesthetics of Visual Studio intact when you integrate your extension with Visual Studio. Your extension looks odd when you try to use Windows controls and dialogs in your extension. The Visual Studio SDK exposes many interfaces so that your extension looks as integrated with Visual Studio as possible. When your extension is performing a long running task, you have many options for notifying the user of progress. One such option is through the Visual Studio status bar; I have previously blogged about displaying progress through the Visual Studio status bar. In this blog post I am going to highlight another way, using the IVsThreadedWaitDialog2 interface. One thing to note: as the IVsThreadedWaitDialog2 interface name suggests, it shows a dialog, so the user cannot perform any other action while the dialog is being shown. Since the dialog is threaded, however, Visual Studio seems responsive to the user even while the task is being performed. Visual Studio itself makes heavy use of this interface. One example is when you are loading a solution (.sln) with a lot of projects: Visual Studio displays a dialog implemented by this interface (screenshot below). So the first step is to get an instance of the IVsThreadedWaitDialog2 interface using the IServiceProvider interface. var dialogFactory = _serviceProvider.GetService(typeof(SVsThreadedWaitDialogFactory)) as IVsThreadedWaitDialogFactory; IVsThreadedWaitDialog2 dialog = null; if (dialogFactory != null) { dialogFactory.CreateInstance(out dialog); } So if you have the package initialized properly, the out object dialog will not be null and will contain an instance of the IVsThreadedWaitDialog2 interface. Once you have the instance, you call its methods to manage the dialog. I will cover 3 methods, StartWaitDialog, EndWaitDialog and HasCanceled, in this blog post. You show the progress dialog as below. if (dialog != null && dialog.StartWaitDialog( "Threaded Wait Dialog", "VS is Busy", "Progress text", null, "Waiting status bar text", 0, false, true) == VSConstants.S_OK) { Thread.Sleep(4000); } As you can see from the method syntax, it is very similar to a standard Windows message box. If you pass true as the 7th parameter to the StartWaitDialog method, you will also see a cancel button, allowing the user to cancel the running task. You can react when the user cancels the task as below. bool isCancelled; dialog.HasCanceled(out isCancelled); if (isCancelled) { MessageBox.Show("Cancelled"); } Finally, you can close the dialog when the task completes, as below. int usercancel; dialog.EndWaitDialog(out usercancel); To help you quickly experience the above code, I have created a sample. It is available for download from GitHub. The sample creates a tool window with two buttons to demo the scenarios explained above. The tool window can be accessed by clicking View –> Other Windows -> ProgressDialogDemo Window
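    Putting the three methods together, a typical pattern is to wrap the work in try/finally so the dialog is always dismissed. A minimal sketch combining the snippets above (the captions and the unit of work are placeholders):

    var dialogFactory = _serviceProvider.GetService(typeof(SVsThreadedWaitDialogFactory)) as IVsThreadedWaitDialogFactory;
    IVsThreadedWaitDialog2 dialog = null;
    if (dialogFactory != null) { dialogFactory.CreateInstance(out dialog); }
    // 7th argument true => show the cancel button
    if (dialog != null && dialog.StartWaitDialog("My Extension", "Working...", null, null, "Working...", 0, true, true) == VSConstants.S_OK)
    {
        try
        {
            for (int step = 0; step < 10; step++)
            {
                bool isCancelled;
                dialog.HasCanceled(out isCancelled);
                if (isCancelled) break;   // user pressed Cancel
                Thread.Sleep(500);        // placeholder for one unit of real work
            }
        }
        finally
        {
            int usercancel;
            dialog.EndWaitDialog(out usercancel); // always dismiss the dialog
        }
    }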

    Read the article

  • Triangle Line-Segment Intersection - detecting near misses

    - by Will
    A ray is a very poor approximation of a player! I think approximating a player with a sphere traveling a straight line each game tick will solve my problems of the player intersecting edges of scenery because their line segment missed it yet their own model is not infinitely thin... I have a 3D triangle and a line segment. I have the normal triangle-line-segment intersection code, which I admit I have only a woolly grasp of. To model movement and compute collisions of the player I have to determine if a line passes within sphere-radius of a triangle. But I can find no convenient line near-miss intersection code! Here's the classic triangle intersection ### commented ### code with my starting assumptions: function triangle_ray_intersection(a,b,c,ray_origin,ray_dir,ray_radius) { // http://softsurfer.com/Archive/algorithm_0105/algorithm_0105.htm#intersect_RayTriangle%28%29 // get triangle edge vectors and plane normal var u = vec3_sub(b,a); var v = vec3_sub(c,a); var n = vec3_cross(u,v); if(n[0]==0 && n[1]==0 && n[2]==0) return null; // triangle is degenerate var w0 = vec3_sub(ray_origin,a); var j = vec3_dot(n,ray_dir); if(Math.abs(j) < 0.00000001) { //### if parallel, might still pass within ray_radius of it return null; // parallel, disjoint or on plane } var i = -vec3_dot(n,w0); // get intersect point of ray with triangle plane var k = i / j; if(k < 0.0) return null; // ray goes away from triangle //### as it's a line segment, k > 1+ray_radius means no intersect var hit = vec3_add(ray_origin,vec3_scale(ray_dir,k)); // intersect point of ray and plane // is I inside T? //### here I'm a bit lost; this is presumably computing barycentric coordinates? var uu = vec3_dot(u,u); var uv = vec3_dot(u,v); var vv = vec3_dot(v,v); var w = vec3_sub(hit,a); var wu = vec3_dot(w,u); var wv = vec3_dot(w,v); var D = uv * uv - uu * vv; var s = (uv * wv - vv * wu) / D; //### therefore, compute if its within ray_radius scaled to the 0..1 of barycentric coordinates? if(s<0.0 || s>1.0) return null; // I is outside T var t = (uv * wu - uu * wv) / D; if(t<0.0 || (s+t)>1.0) return null; // I is outside T //### finally, if it passes a barycentric test it might still be too far //### to a point; must check that its distance from a corner is within ray_radius too if more than one barycentric coord is >1 //### so we have rounded corners... return [hit,n]; // I is in T } Given the distance between the point of plane intersection and each corner, I ought to be able to determine distance at world scale of how far beyond the edge - beyond 1.0 in barycentric coordinates for each axis - that point is... At this point my head explodes! Is this the right track? What's the actual code? UPDATE: you can earn 100 pts on SO if you answer this question there...! How can you determine if a line segment passes within some distance of a triangle?
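    One way to sidestep the extended barycentric bookkeeping entirely is to compute the closest point on the triangle to a point (the region-based test from Ericson's Real-Time Collision Detection) and compare it against the sphere radius at points sampled along the swept segment. Below is a rough C# sketch using System.Numerics, offered as a brute-force approximation of the segment-to-triangle distance rather than an exact test (an exact test would take the minimum over segment-vs-edge and endpoint-vs-face distances):

    using System.Numerics;

    static class SweptSphereVsTriangle
    {
        // Closest point on triangle abc to point p (Ericson, Real-Time Collision Detection, 5.1.5).
        static Vector3 ClosestPointOnTriangle(Vector3 p, Vector3 a, Vector3 b, Vector3 c)
        {
            Vector3 ab = b - a, ac = c - a, ap = p - a;
            float d1 = Vector3.Dot(ab, ap), d2 = Vector3.Dot(ac, ap);
            if (d1 <= 0f && d2 <= 0f) return a;                       // vertex region a
            Vector3 bp = p - b;
            float d3 = Vector3.Dot(ab, bp), d4 = Vector3.Dot(ac, bp);
            if (d3 >= 0f && d4 <= d3) return b;                       // vertex region b
            float vc = d1 * d4 - d3 * d2;
            if (vc <= 0f && d1 >= 0f && d3 <= 0f)
                return a + ab * (d1 / (d1 - d3));                     // edge region ab
            Vector3 cp = p - c;
            float d5 = Vector3.Dot(ab, cp), d6 = Vector3.Dot(ac, cp);
            if (d6 >= 0f && d5 <= d6) return c;                       // vertex region c
            float vb = d5 * d2 - d1 * d6;
            if (vb <= 0f && d2 >= 0f && d6 <= 0f)
                return a + ac * (d2 / (d2 - d6));                     // edge region ac
            float va = d3 * d6 - d5 * d4;
            if (va <= 0f && d4 - d3 >= 0f && d5 - d6 >= 0f)
                return b + (c - b) * ((d4 - d3) / ((d4 - d3) + (d5 - d6))); // edge region bc
            float denom = 1f / (va + vb + vc);                        // inside the face
            return a + ab * (vb * denom) + ac * (vc * denom);
        }

        // True if any sampled point of segment p0->p1 comes within 'radius' of triangle abc.
        // Brute force on purpose: good enough to validate the idea before writing the exact test.
        public static bool SegmentNearTriangle(Vector3 p0, Vector3 p1,
                                               Vector3 a, Vector3 b, Vector3 c,
                                               float radius, int samples = 16)
        {
            float r2 = radius * radius;
            for (int i = 0; i <= samples; i++)
            {
                Vector3 p = Vector3.Lerp(p0, p1, i / (float)samples);
                Vector3 q = ClosestPointOnTriangle(p, a, b, c);
                if ((p - q).LengthSquared() <= r2) return true;
            }
            return false;
        }
    }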

    Read the article

  • SQL Server Transaction Marks: Restoring multiple databases to a common relative point

    - by Mladen Prajdic
    We’re all familiar with the ability to restore a database to a point in time using the RESTORE WITH STOPAT statement. But what if we have multiple databases that are accessed from one application or are modifying each other? And over multiple instances? And all databases have different workloads? And we want to restore all of the databases to some known common relative point? The catch here is that this common relative point isn’t the same point in time for all databases. This common relative point in time might be now in DB1, now-1 hour in DB2 and yesterday in DB3. And we don’t know the exact times. Let me introduce you to Transaction Marks. When we run a marked transaction using the WITH MARK option, a flag is set in the transaction log and a row is added to the msdb..logmarkhistory table. When restoring a transaction log backup we can restore to either before or after that marked transaction. The best thing is that we don’t even need to have one database modifying another database. All we have to do is use a marked transaction with the same name in each database. Let’s see how this works with an example. The code comments say what’s going on.

    USE master
    GO
    CREATE DATABASE TestTxMark1
    GO
    USE TestTxMark1
    GO
    CREATE TABLE TestTable1(
        ID INT,
        VALUE UNIQUEIDENTIFIER)
    -- insert some data into the table so we can have a starting point
    INSERT INTO TestTable1
    SELECT ROW_NUMBER() OVER(ORDER BY number) AS RN, NULL
    FROM master..spt_values
    ORDER BY RN
    SELECT *
    FROM TestTable1
    GO
    -- TAKE A FULL BACKUP of the database
    BACKUP DATABASE TestTxMark1 TO DISK = 'c:\TestTxMark1.bak'
    GO
    USE master
    GO
    CREATE DATABASE TestTxMark2
    GO
    USE TestTxMark2
    GO
    CREATE TABLE TestTable2(
        ID INT,
        VALUE UNIQUEIDENTIFIER)
    -- insert some data into the table so we can have a starting point
    INSERT INTO TestTable2
    SELECT ROW_NUMBER() OVER(ORDER BY number) AS RN, NEWID()
    FROM master..spt_values
    ORDER BY RN
    SELECT *
    FROM TestTable2
    GO
    -- TAKE A FULL BACKUP of our database
    BACKUP DATABASE TestTxMark2 TO DISK = 'c:\TestTxMark2.bak'
    GO
    -- start a marked transaction that modifies both databases
    BEGIN TRAN TxDb WITH MARK
        -- update values from NULL to random value
        UPDATE TestTable1 SET VALUE = NEWID();
        -- update first 100 values from random value
        -- to NULL in different DB
        UPDATE TestTxMark2.dbo.TestTable2 SET VALUE = NULL WHERE ID <= 100;
    COMMIT
    GO

    -- some time goes by here
    -- with various database activity...

    -- We see two entries for marks in each database. 
    -- This is just informational and has no bearing on the restore itself.
    SELECT * FROM msdb..logmarkhistory

    USE master
    GO
    -- create a log backup to restore to mark point
    BACKUP LOG TestTxMark1 TO DISK = 'c:\TestTxMark1.trn'
    GO
    -- drop the database so we can restore it back
    DROP DATABASE TestTxMark1
    GO
    USE master
    GO
    -- create a log backup to restore to mark point
    BACKUP LOG TestTxMark2 TO DISK = 'c:\TestTxMark2.trn'
    GO
    -- drop the database so we can restore it back
    DROP DATABASE TestTxMark2
    GO

    -- RESTORE THE DATABASE BACK BEFORE OUR TRANSACTION
    -- restore the full backup
    RESTORE DATABASE TestTxMark1 FROM DISK = 'c:\TestTxMark1.bak' WITH NORECOVERY;
    -- restore the log backup to the transaction mark
    RESTORE LOG TestTxMark1 FROM DISK = 'c:\TestTxMark1.trn' WITH RECOVERY,
        -- recover to state before the transaction
        STOPBEFOREMARK = 'TxDb';
        -- recover to state after the transaction
        -- STOPATMARK = 'TxDb';
    GO

    -- RESTORE THE DATABASE BACK BEFORE OUR TRANSACTION
    -- restore the full backup
    RESTORE DATABASE TestTxMark2 FROM DISK = 'c:\TestTxMark2.bak' WITH NORECOVERY;
    -- restore the log backup to the transaction mark
    RESTORE LOG TestTxMark2 FROM DISK = 'c:\TestTxMark2.trn' WITH RECOVERY,
        -- recover to state before the transaction
        STOPBEFOREMARK = 'TxDb';
        -- recover to state after the transaction
        -- STOPATMARK = 'TxDb';
    GO

    USE TestTxMark1
    -- we restored to time before the transaction
    -- so we have NULL values in our table
    SELECT * FROM TestTable1

    USE TestTxMark2
    -- we restored to time before the transaction
    -- so we DON'T have NULL values in our table
    SELECT * FROM TestTable2

    Transaction marks can be used like a crude sync mechanism for cross database operations. With them we can mark our databases with a common “restore to” point so we know we have a valid state between all databases to restore to.
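    Since a marked transaction is plain T-SQL, an application can issue one through ordinary ADO.NET as well; nothing special is needed on the client side. A small C# sketch (the connection string is a placeholder, and the batch mirrors the marked transaction above):

    using System.Data.SqlClient;

    using (var conn = new SqlConnection(@"Server=.;Database=TestTxMark1;Integrated Security=true"))
    {
        conn.Open();
        const string sql = @"
    BEGIN TRAN TxDb WITH MARK
        UPDATE TestTable1 SET VALUE = NEWID();
        UPDATE TestTxMark2.dbo.TestTable2 SET VALUE = NULL WHERE ID <= 100;
    COMMIT";
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.ExecuteNonQuery(); // the mark 'TxDb' lands in msdb..logmarkhistory
        }
    }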

    Read the article

  • Thoughts on C# Extension Methods

    - by Damon
    I'm not a huge fan of extension methods.  When they first came out, I remember seeing a method on an object that was fairly useful, but when I went to use it in another piece of code, that method wasn't available.  Turns out it was an extension method and I hadn't included the appropriate assembly and imports statement in my code to use it.  I remember being a bit confused at first about how the heck that could happen (hey, extension methods were new, cut me some slack) and it took a bit of time to track down exactly what it was that I needed to include to get that method back.  I just imagined a new developer trying to figure out why a method was missing and fruitlessly searching on MSDN for a method that didn't exist and it just didn't sit well with me. I am of the opinion that if you have an object, then you shouldn't have to include additional assemblies to get additional instance-level methods out of that object.  That opinion applies to namespaces as well - I do not like it when the contents of a namespace are split out into multiple assemblies.  I prefer to have static utility classes instead of extension methods to keep things nicely packaged into a cohesive unit.  It also makes it abundantly clear where utility methods are used in code.  I will concede, however, that it can make code a bit more verbose and lengthy.  There is always a trade-off. Some people harp on extension methods because they break the tenets of object oriented development and allow you to add methods to sealed classes.  Whatever.  Extension methods are just utility methods that you can tack onto an object after the fact.  Extension methods do not give you any more access to an object than the developer of that object allows, so I say that those who cry OO foul on extension methods really don't have much of an argument on which to stand.  In fact, I have to concede that my dislike of them is really more about style than anything of great substance. One interesting thing that I found regarding extension methods is that you can call them on null objects. Take a look at this extension method: namespace ExtensionMethods {   public static class StringUtility   {     public static int WordCount(this string str)     {       if(str == null) return 0;       return str.Split(new char[] { ' ', '.', '?' },         StringSplitOptions.RemoveEmptyEntries).Length;     }   }   } Notice that the extension method checks to see if the incoming string parameter is null.  I was worried that the runtime would perform a check on the object instance to make sure it was not null before calling an extension method, but that is apparently not the case.  So, if you call the following code it runs just fine. string s = null; int words = s.WordCount(); I am a big fan of things working, but this seems to go against everything I've come to know about instance-level methods.  However, an extension method is really a static method masquerading as an instance-level method, so I suppose it would be far more frustrating if it failed since there is really no reason it shouldn't succeed. Although I'm not a fan of extension methods, I will say that if you ever find yourself at an impasse with a die-hard fan of either the utility class or extension method approach, then there is a common ground.  Extension methods are defined in static classes, and you call them from those static classes as well as directly from the objects they extend.  
So if you build your utility classes using extension methods, then you can have it your way and they can have it theirs. 
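    To make that common ground concrete: an extension method is just a static method with some syntactic sugar, so both call styles below compile to the same thing against the WordCount extension above.

    using ExtensionMethods;

    class Demo
    {
        static void Main()
        {
            string s = "The quick brown fox.";
            int viaInstance = s.WordCount();                    // extension-method syntax
            int viaStatic = StringUtility.WordCount(s);         // plain static call
            System.Console.WriteLine(viaInstance == viaStatic); // True; both return 4
        }
    }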

    Read the article

  • Undefined fireball movement behavior

    - by optimisez
    Demonstration video. What I am trying to do: after the player shoots 10 fireballs, delete all the fireball objects and recreate a new set of 10 fireball objects. I got it working, but there is a weird bug: after shooting a few times, a fireball will sometimes come out from the top of the screen and move to the right. All 10 fireballs should follow the player at all times, and every fireball should come out from the player, even after a new set of fireballs is recreated. Any ideas how to fix it? Variables typedef struct gameObject{ float X; float Y; int length; int height; bool action; }; // Fireball #define FIREBALL_NUM 10 LPDIRECT3DTEXTURE9 fireball = NULL; RECT fireballRect; gameObject *fireballDest = new gameObject[FIREBALL_NUM]; int iFireBallAnimation; int fireballCount = 0; Set up Fireball void setUpFireBall() { // Initialize destination rectangle, rectangle height and length for (int i = 0; i < FIREBALL_NUM; i++) { fireballDest[i].X = 0; fireballDest[i].Y = 0; fireballDest[i].length = fireballRect.right - fireballRect.left; fireballDest[i].height = fireballRect.bottom - fireballRect.top; } iFireBallAnimation = fireballRect.right - fireballRect.left; // Initialize boolean for (int i = 0; i < FIREBALL_NUM; i++) { fireballDest[i].action = false; } } Initialize fireball void initFireball() { hr = D3DXCreateTextureFromFileEx(d3dDevice, "fireball.png", 512, 512, D3DX_DEFAULT, NULL, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, D3DX_DEFAULT, D3DX_DEFAULT, D3DCOLOR_XRGB(255, 255, 0), NULL, NULL, &fireball); // Initialize source rectangle fireballRect.left = 0; fireballRect.top = 256; fireballRect.right = 64; fireballRect.bottom = 320; setUpFireBall(); } Update fireball void update() { updateAnimation(); updateAI(); updatePhysics(); updateGameState(); } void updatePhysics() { motion(); collison(); } void motion() { playerMove(); playerJump(); playerGravity(); shootFireball(); fireballFollowPlayer(); } void shootFireball() { if (keyArr['Z']) fireballDest[fireballCount].action = true; if (fireballDest[fireballCount].action) { fireballDest[fireballCount].X += 10; if (fireballDest[fireballCount].X > 800) fireballCount++; } } void fireballFollowPlayer() { for (int i = 0; i < FIREBALL_NUM; i++) { if (fireballDest[i].action == false) { fireballDest[i].X = playerDest.X - 30; fireballDest[i].Y = playerDest.Y - 14; } } } void updateGameState() { // When no more fireball left, rearm fireball if (fireballCount == FIREBALL_NUM) { delete[] fireballDest; fireballDest = new gameObject[10]; fireballCount = 0; setUpFireBall(); } } Render fireball void renderFireball() { for (int i = 0; i < FIREBALL_NUM; i++) { if (fireballDest[i].action) sprite->Draw(fireball, &fireballRect, NULL, &D3DXVECTOR3(fireballDest[i].X, fireballDest[i].Y, 0), D3DCOLOR_XRGB(255,255, 255)); } }

    Read the article

  • dynamic? I'll never use that ... or then again, maybe it could ...

    - by adweigert
    So, I don't know about you, but I was highly skeptical of the dynamic keyword when it was announced. I thought to myself, oh great, just another move towards VB compliance. Well, after seeing it being used in things like DynamicXml (which I use for this example), I was then working with an MVC controller and wanted to move some things, like the operation timeout of an action, to a configuration file. Thinking big picture, it'd be really nice to have configuration for all my controllers like that. Ugh, I don't want to have to create all those ConfigurationElement objects... So, I started thinking self, use what you know and do something cool ... Well after a bit of zoning out, self came up with 'use a dynamic object', duh! I was thinking of a config like this ...<controllers> <add type="MyApp.Web.Areas.ComputerManagement.Controllers.MyController, MyApp.Web"> <detail timeout="00:00:30" /> </add> </controllers> So, I ended up with a couple of configuration classes like this ... public abstract class DynamicConfigurationElement : ConfigurationElement { protected DynamicConfigurationElement() { this.DynamicObject = new DynamicConfiguration(); } public DynamicConfiguration DynamicObject { get; private set; } protected override bool OnDeserializeUnrecognizedAttribute(string name, string value) { this.DynamicObject.Add(name, value); return true; } protected override bool OnDeserializeUnrecognizedElement(string elementName, XmlReader reader) { this.DynamicObject.Add(elementName, new DynamicXml((XElement)XElement.ReadFrom(reader))); return true; } } public class ControllerConfigurationElement : DynamicConfigurationElement { [ConfigurationProperty("type", Options = ConfigurationPropertyOptions.IsRequired | ConfigurationPropertyOptions.IsKey)] public string TypeName { get { return (string)this["type"]; } } public Type Type { get { return Type.GetType(this.TypeName, true); } } } public class ControllerConfigurationElementCollection : ConfigurationElementCollection { protected override ConfigurationElement CreateNewElement() { return new ControllerConfigurationElement(); } protected override object GetElementKey(ConfigurationElement element) { return ((ControllerConfigurationElement)element).Type; } } And then I had to create the meat of the DynamicConfiguration class, which looks like this ... public class DynamicConfiguration : DynamicObject { private Dictionary<string, object> properties = new Dictionary<string, object>(StringComparer.CurrentCultureIgnoreCase); internal void Add<T>(string name, T value) { this.properties.Add(name, value); } public override bool TryGetMember(GetMemberBinder binder, out object result) { var propertyName = binder.Name; result = null; if (this.properties.ContainsKey(propertyName)) { result = this.properties[propertyName]; } return true; } } So all being said, I made a base controller class like a good little MVC-itizen ... public abstract class BaseController : Controller { protected BaseController() : base() { var configuration = ManagementConfigurationSection.GetInstance(); var controllerConfiguration = configuration.Controllers.ForType(this.GetType()); if (controllerConfiguration != null) { this.Configuration = controllerConfiguration.DynamicObject; } } public dynamic Configuration { get; private set; } } And used it like this ... public class MyController : BaseController { static readonly string DefaultDetailTimeout = TimeSpan.MaxValue.ToString(); public MyController() { this.DetailTimeout = TimeSpan.Parse(this.Configuration.Detail.Timeout ?? 
DefaultDetailTimeout); } public TimeSpan DetailTimeout { get; private set; } } And there I have an actual use for the dynamic keyword ... never thought I'd see the day when I first heard of it, as I don't do much COM work ... oh, don't forget these little helper extension methods to find the controller configuration by the controller type. public static ControllerConfigurationElement ForType<T>(this ControllerConfigurationElementCollection collection) { Contract.Requires(collection != null); return ForType(collection, typeof(T)); } public static ControllerConfigurationElement ForType(this ControllerConfigurationElementCollection collection, Type type) { Contract.Requires(collection != null); Contract.Requires(type != null); return collection.Cast<ControllerConfigurationElement>().Where(element => element.Type == type).SingleOrDefault(); } Sure, it isn't perfect and I'm sure I can tweak it over time, but I thought it was a pretty cool way to take advantage of the dynamic keyword functionality. Just remember, it only validates you did it right at runtime, which isn't that bad ... is it? And yes, I did make it case-insensitive so my code didn't have to look like my XML objects; tweak it to your liking if you dare to use this creation.
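One design note on the code above: because TryGetMember always returns true and leaves result as null for unknown names, a missing attribute never throws; it simply surfaces as null, which is what makes the null-coalescing style in MyController work. A tiny sketch illustrating that behavior, assuming the DynamicConfiguration class above:

dynamic cfg = new DynamicConfiguration(); // intentionally empty
// No exception for an unknown member; the lookup just yields null,
// so the ?? operator supplies the default.
string timeout = cfg.Timeout ?? "00:00:30";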

    Read the article

  • SQL SERVER – Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – T-SQL Example – Part 2 of 2

    - by pinaldave
    Yesterday I wrote a real-world story about a friend who thought they had an intrusion or virus issue, whereas the issue was really in the code. I strongly suggest you read my earlier blog post Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – Part 1 of 2 before continuing this blog post, as this is the second part. Let me reproduce the simple scenario in T-SQL.
Building Sample Data USE [TestDB] GO -- Creating Table Products CREATE TABLE [dbo].[Products]( [ProductID] [int] NOT NULL, [ProductDesc] [varchar](50) NOT NULL, CONSTRAINT [PK_Products] PRIMARY KEY CLUSTERED ( [ProductID] ASC )) ON [PRIMARY] GO -- Creating Table ProductDetails CREATE TABLE [dbo].[ProductDetails]( [ProductDetailID] [int] NOT NULL, [ProductID] [int] NOT NULL, [Total] [int] NOT NULL, CONSTRAINT [PK_ProductDetails] PRIMARY KEY CLUSTERED ( [ProductDetailID] ASC )) ON [PRIMARY] GO ALTER TABLE [dbo].[ProductDetails] WITH CHECK ADD CONSTRAINT [FK_ProductDetails_Products] FOREIGN KEY([ProductID]) REFERENCES [dbo].[Products] ([ProductID]) ON UPDATE CASCADE ON DELETE CASCADE GO -- Insert Data into Table USE TestDB GO INSERT INTO Products (ProductID, ProductDesc) SELECT 1, 'Bike' UNION ALL SELECT 2, 'Car' UNION ALL SELECT 3, 'Books' GO INSERT INTO ProductDetails ([ProductDetailID],[ProductID],[Total]) SELECT 1, 1, 200 UNION ALL SELECT 2, 1, 100 UNION ALL SELECT 3, 1, 111 UNION ALL SELECT 4, 2, 200 UNION ALL SELECT 5, 3, 100 UNION ALL SELECT 6, 3, 100 UNION ALL SELECT 7, 3, 200 GO
Select Data from Tables -- Selecting Data SELECT * FROM Products SELECT * FROM ProductDetails GO
Delete Data from Products Table -- Deleting Data DELETE FROM Products WHERE ProductID = 1 GO
Select Data from Tables Again -- Selecting Data SELECT * FROM Products SELECT * FROM ProductDetails GO
Clean up Data -- Clean up DROP TABLE ProductDetails DROP TABLE Products GO
My friend was confused: no DELETE was ever fired against the ProductDetails table, yet rows were disappearing from it. The reason is the foreign key created between the Products and ProductDetails tables with the keywords ON DELETE CASCADE. When ON DELETE CASCADE is specified and data from table A is deleted, any rows referencing that data in another table through the foreign key are deleted as well.
Workaround 1: Design Changes – 3 Tables Change the design to have more than two tables. Create one Product Master table with all the products. It should historically store the complete product list; no product should ever be removed from it. Add another table called Current Product containing only the products which should be visible in the product catalogue. A third table should be the ProductHistory table. There should be no use of the CASCADE keyword among them.
Workaround 2: Design Changes – Column IsVisible You can keep the same two tables: 1) Products and 2) ProductDetails. Add a BIT column to Products and name it IsVisible. Now change your application code to display the catalogue based on this column. There should be no need to delete anything.
Workaround 3: Bad Advice (bad advice begins here) I call this bad advice because these suggestions are bad for sure. You should make the necessary design changes and not use poor workarounds which can damage the system and database integrity further. Here are the examples: 1) Do not delete the data – well, this is not a real solution, but it can buy time to implement design changes.
2) Do not use ON DELETE CASCADE – in this case, you will end up with entries in ProductDetails that have no corresponding ProductID, and later on there will be lots of confusion. 3) Duplicate data – you can move all the data of the Products table into the ProductDetails table and repeat it on each row, then remove the CASCADE code. This will let you delete the Products table rows without any issue. There are so many things wrong with this suggestion that I will not even start here. (Bad advice ends here) Well, did I miss anything? Please help me with your suggestions. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
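    For what it's worth, a minimal sketch of what Workaround 2 could look like in T-SQL against the sample schema above — the column name and values are illustrative, not prescribed:

    -- Add the visibility flag to Products (the default populates existing rows)
    ALTER TABLE dbo.Products ADD IsVisible BIT NOT NULL DEFAULT (1)
    GO
    -- "Delete" a product by hiding it; ProductDetails rows survive untouched
    UPDATE dbo.Products SET IsVisible = 0 WHERE ProductID = 1
    GO
    -- The catalogue query then filters on the flag
    SELECT p.ProductID, p.ProductDesc, d.Total
    FROM dbo.Products p
    INNER JOIN dbo.ProductDetails d ON d.ProductID = p.ProductID
    WHERE p.IsVisible = 1
    GO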

    Read the article

  • BlockingCollection having issues with byte arrays

    - by MJLaukala
    I am having an issue where an object with a byte[20] is passed into a BlockingCollection on one thread, while another thread gets the same object back with a byte[0] from BlockingCollection.Take(). I think this is a threading issue, but I do not know where or why this is happening, considering that BlockingCollection is a concurrent collection. Sometimes on thread2, myclass2.mybytes equals byte[0]. Any information on how to fix this is greatly appreciated. MessageBuffer.cs public class MessageBuffer : BlockingCollection<Message> { } In the class that has Listener() and ReceivedMessageHandler(object messageProcessor) private MessageBuffer RecievedMessageBuffer; On Thread1 private void Listener() { while (this.IsListening) { try { Message message = Message.ReadMessage(this.Stream, this); if (message != null) { this.RecievedMessageBuffer.Add(message); } } catch (IOException ex) { if (!this.Client.Connected) { this.OnDisconnected(); } else { Logger.LogException(ex.ToString()); this.OnDisconnected(); } } catch (Exception ex) { Logger.LogException(ex.ToString()); this.OnDisconnected(); } } } Message.ReadMessage(NetworkStream stream, iTcpConnectClient client) public static Message ReadMessage(NetworkStream stream, iTcpConnectClient client) { int ClassType = -1; Message message = null; try { ClassType = stream.ReadByte(); if (ClassType == -1) { return null; } if (!Message.IDTOCLASS.ContainsKey((byte)ClassType)) { throw new IOException("Class type not found"); } message = Message.GetNewMessage((byte)ClassType); message.Client = client; message.ReadData(stream); if (message.Buffer.Length < message.MessageSize + Message.HeaderSize) { return null; } } catch (IOException ex) { Logger.LogException(ex.ToString()); throw ex; } catch (Exception ex) { Logger.LogException(ex.ToString()); //throw ex; } return message; } On Thread2 private void ReceivedMessageHandler(object messageProcessor) { if (messageProcessor != null) { while (this.IsListening) { Message message = this.RecievedMessageBuffer.Take(); message.Reconstruct(); message.HandleMessage(messageProcessor); } } else { while (this.IsListening) { Message message = this.RecievedMessageBuffer.Take(); message.Reconstruct(); message.HandleMessage(); } } } PlayerStateMessage.cs public class PlayerStateMessage : Message { public GameObject PlayerState; public override int MessageSize { get { return 12; } } public PlayerStateMessage() : base() { this.PlayerState = new GameObject(); } public PlayerStateMessage(GameObject playerState) { this.PlayerState = playerState; } public override void Reconstruct() { this.PlayerState.Poisiton = this.GetVector2FromBuffer(0); this.PlayerState.Rotation = this.GetFloatFromBuffer(8); base.Reconstruct(); } public override void Deconstruct() { this.CreateBuffer(); this.AddToBuffer(this.PlayerState.Poisiton, 0); this.AddToBuffer(this.PlayerState.Rotation, 8); base.Deconstruct(); } public override void HandleMessage(object messageProcessor) { ((MessageProcessor)messageProcessor).ProcessPlayerStateMessage(this); } } Message.GetVector2FromBuffer(int bufferlocation) This is where the exception is thrown, because this.Buffer is byte[0] when it should be byte[20]. public Vector2 GetVector2FromBuffer(int bufferlocation) { return new Vector2( BitConverter.ToSingle(this.Buffer, Message.HeaderSize + bufferlocation), BitConverter.ToSingle(this.Buffer, Message.HeaderSize + bufferlocation + 4)); }
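    Worth noting while debugging: BlockingCollection hands references across threads, it never copies element state, so if anything on the producer side reassigns the message's buffer after Add(), Take() on the consumer side observes the change. A self-contained C# sketch of that aliasing and the defensive-copy fix (Holder is a stand-in, not the poster's Message class):

    using System;
    using System.Collections.Concurrent;

    class Holder { public byte[] Buffer = new byte[20]; }

    class HandoffDemo
    {
        static void Main()
        {
            var queue = new BlockingCollection<Holder>();

            var reused = new Holder();
            queue.Add(reused);                 // hands over a reference, not a copy
            reused.Buffer = new byte[0];       // producer "recycles" the object...
            Console.WriteLine(queue.Take().Buffer.Length); // ...consumer prints 0

            var fresh = new Holder();
            var snapshot = new Holder { Buffer = (byte[])fresh.Buffer.Clone() };
            queue.Add(snapshot);               // queued state is now isolated
            fresh.Buffer = new byte[0];        // no effect on the queued copy
            Console.WriteLine(queue.Take().Buffer.Length); // prints 20
        }
    }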

    Read the article

  • Problem with sprite direction and rotation

    - by user2236165
    I have a sprite called Tool that moves with a speed represented as a float and in a direction represented as a Vector2. When I click the mouse on the screen, the sprite changes its direction and starts to move towards the mouse click. In addition to that, I rotate the sprite so that it is facing in the direction it is heading. However, when I add a camera that is supposed to follow the sprite so that the sprite is always centered on the screen, the sprite won't move in the given direction and the rotation isn't accurate anymore. This only happens when I add the Camera.View in the spriteBatch.Begin(). I was hoping someone could shed some light on what I am missing in my code; that would be highly appreciated. Here is the camera class I use: public class Camera { private const float zoomUpperLimit = 1.5f; private const float zoomLowerLimit = 0.1f; private float _zoom; private Vector2 _pos; private int ViewportWidth, ViewportHeight; #region Properties public float Zoom { get { return _zoom; } set { _zoom = value; if (_zoom < zoomLowerLimit) _zoom = zoomLowerLimit; if (_zoom > zoomUpperLimit) _zoom = zoomUpperLimit; } } public Rectangle Viewport { get { int width = (int)((ViewportWidth / _zoom)); int height = (int)((ViewportHeight / _zoom)); return new Rectangle((int)(_pos.X - width / 2), (int)(_pos.Y - height / 2), width, height); } } public void Move(Vector2 amount) { _pos += amount; } public Vector2 Position { get { return _pos; } set { _pos = value; } } public Matrix View { get { return Matrix.CreateTranslation(new Vector3(-_pos.X, -_pos.Y, 0)) * Matrix.CreateScale(new Vector3(Zoom, Zoom, 1)) * Matrix.CreateTranslation(new Vector3(ViewportWidth * 0.5f, ViewportHeight * 0.5f, 0)); } } #endregion public Camera(Viewport viewport, float initialZoom) { _zoom = initialZoom; _pos = Vector2.Zero; ViewportWidth = viewport.Width; ViewportHeight = viewport.Height; } } And here are my Update and Draw methods: protected override void Update (GameTime gameTime) { float elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds; TouchCollection touchCollection = TouchPanel.GetState (); foreach (TouchLocation tl in touchCollection) { if (tl.State == TouchLocationState.Pressed || tl.State == TouchLocationState.Moved) { //direction the tool shall move towards direction = touchCollection [0].Position - toolPos; if (direction != Vector2.Zero) { direction.Normalize (); } //change the direction the tool is moving and find the rotationangle the texture must rotate to point in given direction toolPos += (direction * speed * elapsed); RotationAngle = (float)Math.Atan2 (direction.Y, direction.X); } } if (direction != Vector2.Zero) { direction.Normalize (); } //move tool in given direction toolPos += (direction * speed * elapsed); //change cameracentre to the tools position Camera.Position = toolPos; base.Update (gameTime); } protected override void Draw (GameTime gameTime) { graphics.GraphicsDevice.Clear (Color.Blue); spriteBatch.Begin (SpriteSortMode.BackToFront, BlendState.AlphaBlend, null, null, null, null, Camera.View); spriteBatch.Draw (tool, new Vector2 (toolPos.X, toolPos.Y), null, Color.White, RotationAngle, originOfToolTexture, 1, SpriteEffects.None, 1); spriteBatch.End (); base.Draw (gameTime); }
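    One hedged thing to check, rather than a confirmed diagnosis: once the batch is drawn with Camera.View, toolPos lives in world space while the TouchPanel coordinates are still in screen space, so the subtraction mixes two coordinate systems. Transforming the touch point through the inverse of the camera's view matrix keeps both operands in world space — a sketch meant to slot into the touch handling above:

    // Inside the touch handling, before computing the direction:
    Vector2 screenPos = touchCollection[0].Position;
    Vector2 worldPos = Vector2.Transform(screenPos, Matrix.Invert(Camera.View));

    direction = worldPos - toolPos; // both sides in world space now
    if (direction != Vector2.Zero)
    {
        direction.Normalize();
        RotationAngle = (float)Math.Atan2(direction.Y, direction.X);
    }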

    Read the article

  • nagios NRPE: Unable to read output

    - by user555854
    I have set up a script to restart my HTTP servers + php5-fpm but can't get it to work. From what Googling suggests, permissions are the usual cause of this error, but I can't figure it out. I start my script using /usr/lib/nagios/plugins/check_nrpe -H bart -c restart_http This is the output in my syslog on the node I want to restart Jun 27 06:29:35 bart nrpe[8926]: Connection from 192.168.133.17 port 25028 Jun 27 06:29:35 bart nrpe[8926]: Host address is in allowed_hosts Jun 27 06:29:35 bart nrpe[8926]: Handling the connection... Jun 27 06:29:35 bart nrpe[8926]: Host is asking for command 'restart_http' to be run... Jun 27 06:29:35 bart nrpe[8926]: Running command: /usr/bin/sudo /usr/lib/nagios/plugins/http-restart Jun 27 06:29:35 bart nrpe[8926]: Command completed with return code 1 and output: Jun 27 06:29:35 bart nrpe[8926]: Return Code: 1, Output: NRPE: Unable to read output Jun 27 06:29:35 bart nrpe[8926]: Connection from 192.168.133.17 closed. If I run the command myself as the nagios user it runs fine (but asks for a password). These are the script permissions and the script contents. -rwxrwxrwx 1 nagios nagios 142 Jun 26 21:41 /usr/lib/nagios/plugins/http-restart #!/bin/bash echo "ok" /etc/init.d/nginx stop /etc/init.d/nginx start /etc/init.d/php5-fpm stop /etc/init.d/php5-fpm start echo "done" I also added this line via visudo nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/ My local nagios nrpe.cfg ############################################################################# # Sample NRPE Config File # Written by: Ethan Galstad ([email protected]) # # # NOTES: # This is a sample configuration file for the NRPE daemon. It needs to be # located on the remote host that is running the NRPE daemon, not the host # from which the check_nrpe client is being executed. ############################################################################# # LOG FACILITY # The syslog facility that should be used for logging purposes. log_facility=daemon # PID FILE # The name of the file in which the NRPE daemon should write it's process ID # number. The file is only written if the NRPE daemon is started by the root # user and is running in standalone mode. pid_file=/var/run/nagios/nrpe.pid # PORT NUMBER # Port number we should wait for connections on. # NOTE: This must be a non-priviledged port (i.e. > 1024). # NOTE: This option is ignored if NRPE is running under either inetd or xinetd server_port=5666 # SERVER ADDRESS # Address that nrpe should bind to in case there are more than one interface # and you do not want nrpe to bind on all interfaces. # NOTE: This option is ignored if NRPE is running under either inetd or xinetd #server_address=127.0.0.1 # NRPE USER # This determines the effective user that the NRPE daemon should run as. # You can either supply a username or a UID. # # NOTE: This option is ignored if NRPE is running under either inetd or xinetd nrpe_user=nagios # NRPE GROUP # This determines the effective group that the NRPE daemon should run as. # You can either supply a group name or a GID. # # NOTE: This option is ignored if NRPE is running under either inetd or xinetd nrpe_group=nagios # ALLOWED HOST ADDRESSES # This is an optional comma-delimited list of IP address or hostnames # that are allowed to talk to the NRPE daemon. # # Note: The daemon only does rudimentary checking of the client's IP # address. I would highly recommend adding entries in your /etc/hosts.allow # file to allow only the specified host to connect to the port # you are running this daemon on.
# # NOTE: This option is ignored if NRPE is running under either inetd or xinetd allowed_hosts=127.0.0.1,192.168.133.17 # COMMAND ARGUMENT PROCESSING # This option determines whether or not the NRPE daemon will allow clients # to specify arguments to commands that are executed. This option only works # if the daemon was configured with the --enable-command-args configure script # option. # # *** ENABLING THIS OPTION IS A SECURITY RISK! *** # Read the SECURITY file for information on some of the security implications # of enabling this variable. # # Values: 0=do not allow arguments, 1=allow command arguments dont_blame_nrpe=0 # COMMAND PREFIX # This option allows you to prefix all commands with a user-defined string. # A space is automatically added between the specified prefix string and the # command line from the command definition. # # *** THIS EXAMPLE MAY POSE A POTENTIAL SECURITY RISK, SO USE WITH CAUTION! *** # Usage scenario: # Execute restricted commmands using sudo. For this to work, you need to add # the nagios user to your /etc/sudoers. An example entry for alllowing # execution of the plugins from might be: # # nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/ # # This lets the nagios user run all commands in that directory (and only them) # without asking for a password. If you do this, make sure you don't give # random users write access to that directory or its contents! command_prefix=/usr/bin/sudo # DEBUGGING OPTION # This option determines whether or not debugging messages are logged to the # syslog facility. # Values: 0=debugging off, 1=debugging on debug=1 # COMMAND TIMEOUT # This specifies the maximum number of seconds that the NRPE daemon will # allow plugins to finish executing before killing them off. command_timeout=60 # CONNECTION TIMEOUT # This specifies the maximum number of seconds that the NRPE daemon will # wait for a connection to be established before exiting. This is sometimes # seen where a network problem stops the SSL being established even though # all network sessions are connected. This causes the nrpe daemons to # accumulate, eating system resources. Do not set this too low. connection_timeout=300 # WEEK RANDOM SEED OPTION # This directive allows you to use SSL even if your system does not have # a /dev/random or /dev/urandom (on purpose or because the necessary patches # were not applied). The random number generator will be seeded from a file # which is either a file pointed to by the environment valiable $RANDFILE # or $HOME/.rnd. If neither exists, the pseudo random number generator will # be initialized and a warning will be issued. # Values: 0=only seed from /dev/[u]random, 1=also seed from weak randomness #allow_weak_random_seed=1 # INCLUDE CONFIG FILE # This directive allows you to include definitions from an external config file. #include=<somefile.cfg> # INCLUDE CONFIG DIRECTORY # This directive allows you to include definitions from config files (with a # .cfg extension) in one or more directories (with recursion). #include_dir=<somedirectory> #include_dir=<someotherdirectory> # COMMAND DEFINITIONS # Command definitions that this daemon will run. Definitions # are in the following format: # # command[<command_name>]=<command_line> # # When the daemon receives a request to return the results of <command_name> # it will execute the command specified by the <command_line> argument. # # Unlike Nagios, the command line cannot contain macros - it must be # typed exactly as it should be executed. 
# # Note: Any plugins that are used in the command lines must reside # on the machine that this daemon is running on! The examples below # assume that you have plugins installed in a /usr/local/nagios/libexec # directory. Also note that you will have to modify the definitions below # to match the argument format the plugins expect. Remember, these are # examples only! # The following examples use hardcoded command arguments... command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10 command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20 command[check_hda1]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /dev/hda1 command[check_zombie_procs]=/usr/lib/nagios/plugins/check_procs -w 5 -c 10 -s Z command[check_total_procs]=/usr/lib/nagios/plugins/check_procs -w 150 -c 200 # The following examples allow user-supplied arguments and can # only be used if the NRPE daemon was compiled with support for # command arguments *AND* the dont_blame_nrpe directive in this # config file is set to '1'. This poses a potential security risk, so # make sure you read the SECURITY file before doing this. #command[check_users]=/usr/lib/nagios/plugins/check_users -w $ARG1$ -c $ARG2$ #command[check_load]=/usr/lib/nagios/plugins/check_load -w $ARG1$ -c $ARG2$ #command[check_disk]=/usr/lib/nagios/plugins/check_disk -w $ARG1$ -c $ARG2$ -p $ARG3$ #command[check_procs]=/usr/lib/nagios/plugins/check_procs -w $ARG1$ -c $ARG2$ -s $ARG3$ command[restart_http]=/usr/lib/nagios/plugins/http-restart # # local configuration: # if you'd prefer, you can instead place directives here include=/etc/nagios/nrpe_local.cfg # # you can place your config snipplets into nrpe.d/ include_dir=/etc/nagios/nrpe.d/ My Sudoers files # /etc/sudoers # # This file MUST be edited with the 'visudo' command as root. # # See the man page for details on how to write a sudoers file. # Defaults env_reset # Host alias specification # User alias specification # Cmnd alias specification # User privilege specification root ALL=(ALL) ALL nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/ # Allow members of group sudo to execute any command # (Note that later entries override this, so you might need to move # it further down) %sudo ALL=(ALL) ALL # #includedir /etc/sudoers.d Hopefully someone can help!
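    Two hedged things worth ruling out, not a confirmed diagnosis: NRPE only captures stdout, so when sudo itself fails, its complaint goes to stderr and NRPE reports "Unable to read output". One common cause is a "Defaults requiretty" line in the sudoers file (NRPE runs without a tty); another is the path rule not matching the exact command NRPE executes. A sketch of sudoers entries to try via visudo:

    # Only needed if a "Defaults requiretty" line is in effect on this system
    Defaults:nagios !requiretty

    # Grant the exact script NRPE runs (the trailing-slash directory grant
    # above should also match, but an exact rule removes one variable)
    nagios ALL=(root) NOPASSWD: /usr/lib/nagios/plugins/http-restart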

    Read the article

  • solved: puppet master REST API returns 403 when running under passenger, but works when the master runs from the command line

    - by Anadi Misra
    I am using the standard auth.conf provided in the puppet install for the puppet master, which is running through Passenger under Nginx. However, for most of the catalog, file and certificate requests I get a 403 response. ### Authenticated paths - these apply only when the client ### has a valid certificate and is thus authenticated # allow nodes to retrieve their own catalog path ~ ^/catalog/([^/]+)$ method find allow $1 # allow nodes to retrieve their own node definition path ~ ^/node/([^/]+)$ method find allow $1 # allow all nodes to access the certificates services path ~ ^/certificate_revocation_list/ca method find allow * # allow all nodes to store their reports path /report method save allow * # unconditionally allow access to all file services # which means in practice that fileserver.conf will # still be used path /file allow * ### Unauthenticated ACL, for clients for which the current master doesn't ### have a valid certificate; we allow authenticated users, too, because ### there isn't a great harm in letting that request through. # allow access to the master CA path /certificate/ca auth any method find allow * path /certificate/ auth any method find allow * path /certificate_request auth any method find, save allow * path /facts auth any method find, search allow * # this one is not stricly necessary, but it has the merit # of showing the default policy, which is deny everything else path / auth any The puppet master, however, does not seem to be following this, as I get this error on the client [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com [sudo] password for amisr1: Starting Puppet client version 3.0.1 Warning: Unable to fetch my node definition, but the agent run will continue: Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110 Info: Retrieving plugin Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110 Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110 Using cached catalog Error: Could not retrieve catalog; skipping run Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110 and the server logs show XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins?
HTTP/1.1" 403 93 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby" The fileserver conf file is as follows (and, going by what they say on the puppet site, it is better to regulate access in auth.conf for reaching the file server and then let the file server serve everything) [files] path /apps/puppet/files allow * [private] path /apps/puppet/private/%H allow * [modules] allow * I am using server and client version 3. Nginx has been compiled using the following options nginx version: nginx/1.3.9 built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) TLS SNI support enabled configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/ and the standard nginx puppet master conf server { ssl on; listen 8140 ssl; server_name _; passenger_enabled on; passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn; passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify; passenger_min_instances 5; access_log logs/puppet_access.log; error_log logs/puppet_error.log; root /apps/nginx/html/rack/public; ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem; ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem; ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem; ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem; ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA; ssl_prefer_server_ciphers on; ssl_verify_client optional; ssl_verify_depth 1; ssl_session_cache shared:SSL:128m; ssl_session_timeout 5m; } Puppet is picking up the correct settings from the files mentioned, because the config print command points to /etc/puppet [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf async_storeconfigs = false authconfig = /etc/puppet/namespaceauth.conf autosign = /etc/puppet/autosign.conf catalog_cache_terminus = store_configs confdir = /etc/puppet config = /etc/puppet/puppet.conf config_file_name = puppet.conf config_version = "" configprint = all configtimeout = 120 dblocation = /var/lib/puppet/state/clientconfigs.sqlite3 deviceconfig = /etc/puppet/device.conf fileserverconfig = /etc/puppet/fileserver.conf genconfig = false hiera_config = /etc/puppet/hiera.yaml localconfig = /var/lib/puppet/state/localconfig name = config rest_authconfig = /etc/puppet/auth.conf storeconfigs = true storeconfigs_backend = puppetdb tagmap = /etc/puppet/tagmail.conf thin_storeconfigs = false I checked the firewall rules on this VM; 80, 443, 8140, 3000 are allowed. Do I still have to tweak any specifics in auth.conf to get this to work?
Update I added verbose logging to the puppet master and restarted nginx; here's the additional info I see in the logs Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Could not resolve 10.209.47.31: no name for 10.209.47.31 Mon Dec 10 18:19:15 +0530 2012 access[/] (info): defaulting to no access for 10.209.47.31 Mon Dec 10 18:19:15 +0530 2012 Puppet (warning): Denying access: Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111 Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111 10.209.47.31 - - [10/Dec/2012:18:19:15 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby" On the agent machine, facter fqdn and hostname both return a fully qualified host name [amisr1@blramisr195602 ~]$ sudo facter fqdn blramisr195602.XXXXXXX.com I then updated the agent configuration to add dns_alt_names = 10.209.47.31, cleaned all certificates on master and agent, and regenerated and signed the certificates on the master using the option --allow-dns-alt-names [amisr1@bangvmpllDA02 ~]$ sudo puppet cert sign blramisr195602.XXXXXX.com Error: CSR 'blramisr195602.XXXXXX.com' contains subject alternative names (DNS:10.209.47.31, DNS:blramisr195602.XXXXXX.com), which are disallowed. Use `puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com` to sign this request. [amisr1@bangvmpllDA02 ~]$ sudo puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com Signed certificate request for blramisr195602.XXXXXX.com Removing file Puppet::SSL::CertificateRequest blramisr195602.XXXXXX.com at '/var/lib/puppet/ssl/ca/requests/blramisr195602.XXXXXX.com.pem' However, that doesn't help either; I get the same errors as before. I am not sure why the logs show access rules being compared by IP rather than hostname. Is there any Nginx configuration to change this behavior?
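    The verbose log looks telling: the master could not reverse-resolve 10.209.47.31, fell back to treating the agent as a bare IP, and the name-based allow rules then matched nothing. If fixing reverse DNS (or the DN headers passed by Nginx) isn't practical, Puppet 3's auth.conf also accepts IP-based rules via allow_ip — a hedged sketch, with a hypothetical agent subnet:

    # auth.conf fragment (Puppet 3+): keep the name rule, add an IP rule
    path ~ ^/catalog/([^/]+)$
    method find
    allow $1
    allow_ip 10.209.47.0/24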

    Read the article

  • Configuring OpenLDAP and SSL

    - by Stormshadow
    I am having trouble trying to connect to a secure OpenLDAP server which I have set up. On running my LDAP client code java -Djavax.net.debug=ssl LDAPConnector I get the following exception trace (java version 1.6.0_17) trigger seeding of SecureRandom done seeding SecureRandom %% No cached client session *** ClientHello, TLSv1 RandomCookie: GMT: 1256110124 bytes = { 224, 19, 193, 148, 45, 205, 108, 37, 101, 247, 112, 24, 157, 39, 111, 177, 43, 53, 206, 224, 68, 165, 55, 185, 54, 203, 43, 91 } Session ID: {} Cipher Suites: [SSL_RSA_WITH_RC4_128_MD5, SSL_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, SSL_RSA_W ITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, SSL_RSA_WITH_DES_CBC_SHA, SSL_DHE_RSA_WITH_DES_CBC_SHA, SSL_DHE_DSS_WITH_DES_CBC_SH A, SSL_RSA_EXPORT_WITH_RC4_40_MD5, SSL_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA] Compression Methods: { 0 } *** Thread-0, WRITE: TLSv1 Handshake, length = 73 Thread-0, WRITE: SSLv2 client hello message, length = 98 Thread-0, received EOFException: error Thread-0, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake Thread-0, SEND TLSv1 ALERT: fatal, description = handshake_failure Thread-0, WRITE: TLSv1 Alert, length = 2 Thread-0, called closeSocket() main, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake javax.naming.CommunicationException: simple bind failed: ldap.natraj.com:636 [Root exception is javax.net.ssl.SSLHandshakeException: Remote host closed connection during hands hake] at com.sun.jndi.ldap.LdapClient.authenticate(Unknown Source) at com.sun.jndi.ldap.LdapCtx.connect(Unknown Source) at com.sun.jndi.ldap.LdapCtx.<init>(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(Unknown Source) at javax.naming.spi.NamingManager.getInitialContext(Unknown Source) at javax.naming.InitialContext.getDefaultInitCtx(Unknown Source) at javax.naming.InitialContext.init(Unknown Source) at javax.naming.InitialContext.<init>(Unknown Source) at javax.naming.directory.InitialDirContext.<init>(Unknown Source) at LDAPConnector.CallSecureLDAPServer(LDAPConnector.java:43) at LDAPConnector.main(LDAPConnector.java:237) Caused by: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(Unknown Source) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(Unknown Source) at com.sun.net.ssl.internal.ssl.AppInputStream.read(Unknown Source) at java.io.BufferedInputStream.fill(Unknown Source) at java.io.BufferedInputStream.read1(Unknown Source) at java.io.BufferedInputStream.read(Unknown Source) at com.sun.jndi.ldap.Connection.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Caused by: java.io.EOFException: SSL peer shut down incorrectly at com.sun.net.ssl.internal.ssl.InputRecord.read(Unknown Source) ... 
9 more I am able to connect to the same secure LDAP server, however, if I use another version of Java (1.6.0_14). I have created and installed the server certificates in the cacerts of both JREs, as mentioned in this guide -- OpenLDAP with SSL. When I run ldapsearch -x on the server I get # extended LDIF # # LDAPv3 # base <dc=localdomain> (default) with scope subtree # filter: (objectclass=*) # requesting: ALL # # localdomain dn: dc=localdomain objectClass: top objectClass: dcObject objectClass: organization o: localdomain dc: localdomain # admin, localdomain dn: cn=admin,dc=localdomain objectClass: simpleSecurityObject objectClass: organizationalRole cn: admin description: LDAP administrator # search result search: 2 result: 0 Success # numResponses: 3 # numEntries: 2 On running openssl s_client -connect ldap.natraj.com:636 -showcerts, I obtain the self-signed certificate. My slapd.conf file is as follows ####################################################################### # Global Directives: # Features to permit #allow bind_v2 # Schema and objectClass definitions include /etc/ldap/schema/core.schema include /etc/ldap/schema/cosine.schema include /etc/ldap/schema/nis.schema include /etc/ldap/schema/inetorgperson.schema # Where the pid file is put. The init.d script # will not stop the server if you change this. pidfile /var/run/slapd/slapd.pid # List of arguments that were passed to the server argsfile /var/run/slapd/slapd.args # Read slapd.conf(5) for possible values loglevel none # Where the dynamically loaded modules are stored modulepath /usr/lib/ldap moduleload back_hdb # The maximum number of entries that is returned for a search operation sizelimit 500 # The tool-threads parameter sets the actual amount of cpu's that is used # for indexing. tool-threads 1 ####################################################################### # Specific Backend Directives for hdb: # Backend specific directives apply to this backend until another # 'backend' directive occurs backend hdb ####################################################################### # Specific Backend Directives for 'other': # Backend specific directives apply to this backend until another # 'backend' directive occurs #backend <other> ####################################################################### # Specific Directives for database #1, of type hdb: # Database specific directives apply to this databasse until another # 'database' directive occurs database hdb # The base of your directory in database #1 suffix "dc=localdomain" # rootdn directive for specifying a superuser on the database. This is needed # for syncrepl. rootdn "cn=admin,dc=localdomain" # Where the database file are physically stored for database #1 directory "/var/lib/ldap" # The dbconfig settings are used to generate a DB_CONFIG file the first # time slapd starts. They do NOT override existing an existing DB_CONFIG # file. You should therefore change these settings in DB_CONFIG directly # or remove DB_CONFIG and restart slapd for changes to take effect. # For the Debian package we use 2MB as default but be sure to update this # value if you have plenty of RAM dbconfig set_cachesize 0 2097152 0 # Sven Hartge reported that he had to set this value incredibly high # to get slapd running at all. See http://bugs.debian.org/303057 for more # information. # Number of objects that can be locked at the same time.
dbconfig set_lk_max_objects 1500 # Number of locks (both requested and granted) dbconfig set_lk_max_locks 1500 # Number of lockers dbconfig set_lk_max_lockers 1500 # Indexing options for database #1 index objectClass eq # Save the time that the entry gets modified, for database #1 lastmod on # Checkpoint the BerkeleyDB database periodically in case of system # failure and to speed slapd shutdown. checkpoint 512 30 # Where to store the replica logs for database #1 # replogfile /var/lib/ldap/replog # The userPassword by default can be changed # by the entry owning it if they are authenticated. # Others should not be able to see it, except the # admin entry below # These access lines apply to database #1 only access to attrs=userPassword,shadowLastChange by dn="cn=admin,dc=localdomain" write by anonymous auth by self write by * none # Ensure read access to the base for things like # supportedSASLMechanisms. Without this you may # have problems with SASL not knowing what # mechanisms are available and the like. # Note that this is covered by the 'access to *' # ACL below too but if you change that as people # are wont to do you'll still need this if you # want SASL (and possible other things) to work # happily. access to dn.base="" by * read # The admin dn has full write access, everyone else # can read everything. access to * by dn="cn=admin,dc=localdomain" write by * read # For Netscape Roaming support, each user gets a roaming # profile for which they have write access to #access to dn=".*,ou=Roaming,o=morsnet" # by dn="cn=admin,dc=localdomain" write # by dnattr=owner write ####################################################################### # Specific Directives for database #2, of type 'other' (can be hdb too): # Database specific directives apply to this databasse until another # 'database' directive occurs #database <other> # The base of your directory for database #2 #suffix "dc=debian,dc=org" ####################################################################### # SSL: # Uncomment the following lines to enable SSL and use the default # snakeoil certificates. #TLSCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem #TLSCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key TLSCipherSuite TLS_RSA_AES_256_CBC_SHA TLSCACertificateFile /etc/ldap/ssl/server.pem TLSCertificateFile /etc/ldap/ssl/server.pem TLSCertificateKeyFile /etc/ldap/ssl/server.pem My ldap.conf file is # # LDAP Defaults # # See ldap.conf(5) for details # This file should be world readable but not world writable. HOST ldap.natraj.com PORT 636 BASE dc=localdomain URI ldaps://ldap.natraj.com TLS_CACERT /etc/ldap/ssl/server.pem TLS_REQCERT allow #SIZELIMIT 12 #TIMELIMIT 15 #DEREF never Why is it that I can connect to the same server using one version of the JRE while I cannot with another?
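    One detail in the debug trace worth chasing: the ClientHello from 1.6.0_17 offers only 128-bit AES suites, while slapd.conf pins TLSCipherSuite to a single 256-bit suite — and a JRE without the unlimited-strength JCE policy files tops out at 128-bit, with defaults varying between update levels. A small hedged check to run under both JREs and compare:

    import javax.net.ssl.SSLSocketFactory;

    public class CipherCheck {
        public static void main(String[] args) {
            // Prints the suites this JRE offers in its ClientHello; if no
            // AES-256 suite is listed, a server restricted to
            // TLS_RSA_AES_256_CBC_SHA can never complete the handshake.
            SSLSocketFactory factory =
                (SSLSocketFactory) SSLSocketFactory.getDefault();
            for (String suite : factory.getDefaultCipherSuites()) {
                System.out.println(suite);
            }
        }
    }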

    Read the article

  • ProFTPd server on Ubuntu getting access denied message when successfully authenticated?

    - by exxoid
    I have an Ubuntu box with a ProFTPD 1.3.4a server. When I try to log in via my FTP client I cannot do anything, as it does not allow me to list directories; I have tried logging in as root and as a regular user, and tried accessing different paths within the FTP server. The error I get in my FTP client is: Status: Retrieving directory listing... Command: CDUP Response: 250 CDUP command successful Command: PWD Response: 257 "/var" is the current directory Command: PASV Response: 227 Entering Passive Mode (172,16,4,22,237,205). Command: MLSD Response: 550 Access is denied. Error: Failed to retrieve directory listing Any idea? Here is the config of my proftpd: # # /etc/proftpd/proftpd.conf -- This is a basic ProFTPD configuration file. # To really apply changes, reload proftpd after modifications, if # it runs in daemon mode. It is not required in inetd/xinetd mode. # # Includes DSO modules Include /etc/proftpd/modules.conf # Set off to disable IPv6 support which is annoying on IPv4 only boxes. UseIPv6 off # If set on you can experience a longer connection delay in many cases. IdentLookups off ServerName "Drupal Intranet" ServerType standalone ServerIdent on "FTP Server ready" DeferWelcome on # Set the user and group that the server runs as User nobody Group nogroup MultilineRFC2228 on DefaultServer on ShowSymlinks on TimeoutNoTransfer 600 TimeoutStalled 600 TimeoutIdle 1200 DisplayLogin welcome.msg DisplayChdir .message true ListOptions "-l" DenyFilter \*.*/ # Use this to jail all users in their homes # DefaultRoot ~ # Users require a valid shell listed in /etc/shells to login. # Use this directive to release that constrain. # RequireValidShell off # Port 21 is the standard FTP port. Port 21 # In some cases you have to specify passive ports range to by-pass # firewall limitations. Ephemeral ports can be used for that, but # feel free to use a more narrow range. # PassivePorts 49152 65534 # If your host was NATted, this option is useful in order to # allow passive tranfers to work. You have to use your public # address and opening the passive ports used on your firewall as well. # MasqueradeAddress 1.2.3.4 # This is useful for masquerading address with dynamic IPs: # refresh any configured MasqueradeAddress directives every 8 hours <IfModule mod_dynmasq.c> # DynMasqRefresh 28800 </IfModule> # To prevent DoS attacks, set the maximum number of child processes # to 30. If you need to allow more than 30 concurrent connections # at once, simply increase this value. Note that this ONLY works # in standalone mode, in inetd mode you should use an inetd server # that allows you to limit maximum number of processes per service # (such as xinetd) MaxInstances 30 # Set the user and group that the server normally runs at. # Umask 022 is a good standard umask to prevent new files and dirs # (second parm) from being group and world writable. Umask 022 022 # Normally, we want files to be overwriteable. AllowOverwrite on # Uncomment this if you are using NIS or LDAP via NSS to retrieve passwords: # PersistentPasswd off # This is required to use both PAM-based authentication and local passwords AuthPAMConfig proftpd AuthOrder mod_auth_pam.c* mod_auth_unix.c # Be warned: use of this directive impacts CPU average load! # Uncomment this if you like to see progress and transfer rate with ftpwho # in downloads. That is not needed for uploads rates.
# UseSendFile off TransferLog /var/log/proftpd/xferlog SystemLog /var/log/proftpd/proftpd.log # Logging onto /var/log/lastlog is enabled but set to off by default #UseLastlog on # In order to keep log file dates consistent after chroot, use timezone info # from /etc/localtime. If this is not set, and proftpd is configured to # chroot (e.g. DefaultRoot or <Anonymous>), it will use the non-daylight # savings timezone regardless of whether DST is in effect. #SetEnv TZ :/etc/localtime <IfModule mod_quotatab.c> QuotaEngine off </IfModule> <IfModule mod_ratio.c> Ratios off </IfModule> # Delay engine reduces impact of the so-called Timing Attack described in # http://www.securityfocus.com/bid/11430/discuss # It is on by default. <IfModule mod_delay.c> DelayEngine on </IfModule> <IfModule mod_ctrls.c> ControlsEngine off ControlsMaxClients 2 ControlsLog /var/log/proftpd/controls.log ControlsInterval 5 ControlsSocket /var/run/proftpd/proftpd.sock </IfModule> <IfModule mod_ctrls_admin.c> AdminControlsEngine off </IfModule> # # Alternative authentication frameworks # #Include /etc/proftpd/ldap.conf #Include /etc/proftpd/sql.conf # # This is used for FTPS connections # #Include /etc/proftpd/tls.conf # # Useful to keep VirtualHost/VirtualRoot directives separated # #Include /etc/proftpd/virtuals.con # A basic anonymous configuration, no upload directories. # <Anonymous ~ftp> # User ftp # Group nogroup # # We want clients to be able to login with "anonymous" as well as "ftp" # UserAlias anonymous ftp # # Cosmetic changes, all files belongs to ftp user # DirFakeUser on ftp # DirFakeGroup on ftp # # RequireValidShell off # # # Limit the maximum number of anonymous logins # MaxClients 10 # # # We want 'welcome.msg' displayed at login, and '.message' displayed # # in each newly chdired directory. # DisplayLogin welcome.msg # DisplayChdir .message # # # Limit WRITE everywhere in the anonymous chroot # <Directory *> # <Limit WRITE> # DenyAll # </Limit> # </Directory> # # # Uncomment this if you're brave. # # <Directory incoming> # # # Umask 022 is a good standard umask to prevent new files and dirs # # # (second parm) from being group and world writable. # # Umask 022 022 # # <Limit READ WRITE> # # DenyAll # # </Limit> # # <Limit STOR> # # AllowAll # # </Limit> # # </Directory> # # </Anonymous> # Include other custom configuration files Include /etc/proftpd/conf.d/ UseReverseDNS off <Global> RootLogin on UseFtpUsers on ServerIdent on DefaultChdir /var/www DeleteAbortedStores on LoginPasswordPrompt on AccessGrantMsg "You have been authenticated successfully." </Global> Any idea what could be wrong? Thanks for your help!
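    Since MLSD is served by mod_facts, one hedged workaround while the 550 is investigated: stop advertising MLST/MLSD so clients quietly fall back to a plain LIST (which the ListOptions directive above already shapes). This sidesteps the denial rather than explains it:

    # proftpd.conf fragment
    <IfModule mod_facts.c>
      FactsAdvertise off
    </IfModule>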

    Read the article

  • Suspended Laptop Cannot Wake Up - Ubuntu

    - by Zack
    I've got an ASUS G73JH, and whenever I suspend it or hibernate it, it will not wake up. The screen's backlight stays on but the screen itself is black. The fan keeps running, but the HDD does not; no disk activity is audible (it's not an SSD). I can't:
- Awaken it with the keyboard
- Awaken it with the mouse
- Soft power-off by pressing the power button
- Change virtual screens by pressing Ctrl-Alt-#
- Restart X by pressing Ctrl-Alt-Backspace
I have to hold down the power button and shut it down that way, and this seems a little unreasonable. Is there a place I could look for more detail as to what's causing this? Is there a known quick fix to this issue? Nothing is logged as happening while the system is in "suspend" mode. Here's what happened immediately before and after the suspend; note the time gap: May 4 17:46:13 tofu NetworkManager: <info> (eth0): carrier now OFF (device state 1) May 4 17:48:57 tofu kernel: imklog 4.2.0, log source = /proc/kmsg started. This one's kinda long; here's what happened immediately before the suspend. I'm not sure if it'll help, but maybe you can find a use for it: May 4 17:46:10 tofu anacron[3353]: Anacron 2.3 started on 2010-05-04 May 4 17:46:10 tofu anacron[3353]: Normal exit (0 jobs run) May 4 17:46:10 tofu kernel: [ 2241.775927] CPU0 attaching NULL sched-domain. May 4 17:46:10 tofu kernel: [ 2241.775958] CPU1 attaching NULL sched-domain. May 4 17:46:10 tofu kernel: [ 2241.775987] CPU2 attaching NULL sched-domain. May 4 17:46:10 tofu kernel: [ 2241.776138] CPU3 attaching NULL sched-domain. May 4 17:46:10 tofu kernel: [ 2241.776168] CPU4 attaching NULL sched-domain. May 4 17:46:10 tofu kernel: [ 2241.776197] CPU5 attaching NULL sched-domain. May 4 17:46:10 tofu kernel: [ 2241.776200] CPU6 attaching NULL sched-domain. May 4 17:46:10 tofu kernel: [ 2241.776229] CPU7 attaching NULL sched-domain.
May 4 17:46:10 tofu kernel: [ 2241.919611] CPU0 attaching sched-domain: May 4 17:46:10 tofu kernel: [ 2241.919668] domain 0: span 0,4 level SIBLING May 4 17:46:10 tofu kernel: [ 2241.919699] groups: 0 (cpu_power = 589) 4 (cpu_power = 589) May 4 17:46:10 tofu kernel: [ 2241.919733] domain 1: span 0-7 level MC May 4 17:46:10 tofu kernel: [ 2241.919762] groups: 0,4 (cpu_power = 1178) 1,5 (cpu_power = 1178) 2,6 (cpu_power = 1178) 3,7 (cpu_power = 1178) May 4 17:46:10 tofu kernel: [ 2241.919850] CPU1 attaching sched-domain: May 4 17:46:10 tofu kernel: [ 2241.919852] domain 0: span 1,5 level SIBLING May 4 17:46:10 tofu kernel: [ 2241.919881] groups: 1 (cpu_power = 589) 5 (cpu_power = 589) May 4 17:46:10 tofu kernel: [ 2241.919912] domain 1: span 0-7 level MC May 4 17:46:10 tofu kernel: [ 2241.919915] groups: 1,5 (cpu_power = 1178) 2,6 (cpu_power = 1178) 3,7 (cpu_power = 1178) 0,4 (cpu_power = 1178) May 4 17:46:10 tofu kernel: [ 2241.920003] CPU2 attaching sched-domain: May 4 17:46:10 tofu kernel: [ 2241.920005] domain 0: span 2,6 level SIBLING May 4 17:46:10 tofu kernel: [ 2241.920033] groups: 2 (cpu_power = 589) 6 (cpu_power = 589) May 4 17:46:10 tofu kernel: [ 2241.920065] domain 1: span 0-7 level MC May 4 17:46:10 tofu kernel: [ 2241.920093] groups: 2,6 (cpu_power = 1178) 3,7 (cpu_power = 1178) 0,4 (cpu_power = 1178) 1,5 (cpu_power = 1178) May 4 17:46:10 tofu kernel: [ 2241.920155] CPU3 attaching sched-domain: May 4 17:46:10 tofu kernel: [ 2241.920157] domain 0: span 3,7 level SIBLING May 4 17:46:10 tofu kernel: [ 2241.920185] groups: 3 (cpu_power = 589) 7 (cpu_power = 589) May 4 17:46:10 tofu kernel: [ 2241.920217] domain 1: span 0-7 level MC May 4 17:46:10 tofu kernel: [ 2241.920245] groups: 3,7 (cpu_power = 1178) 0,4 (cpu_power = 1178) 1,5 (cpu_power = 1178) 2,6 (cpu_power = 1178) May 4 17:46:10 tofu kernel: [ 2241.920307] CPU4 attaching sched-domain: May 4 17:46:10 tofu kernel: [ 2241.920335] domain 0: span 0,4 level SIBLING May 4 17:46:10 tofu kernel: [ 2241.920337] groups: 4 (cpu_power = 589) 0 (cpu_power = 589) May 4 17:46:10 tofu kernel: [ 2241.920368] domain 1: span 0-7 level MC May 4 17:46:10 tofu kernel: [ 2241.920397] groups: 0,4 (cpu_power = 1178) 1,5 (cpu_power = 1178) 2,6 (cpu_power = 1178) 3,7 (cpu_power = 1178) May 4 17:46:10 tofu kernel: [ 2241.920459] CPU5 attaching sched-domain: May 4 17:46:10 tofu kernel: [ 2241.920487] domain 0: span 1,5 level SIBLING May 4 17:46:10 tofu kernel: [ 2241.920489] groups: 5 (cpu_power = 589) 1 (cpu_power = 589) May 4 17:46:10 tofu kernel: [ 2241.920520] domain 1: span 0-7 level MC May 4 17:46:10 tofu kernel: [ 2241.920549] groups: 1,5 (cpu_power = 1178) 2,6 (cpu_power = 1178) 3,7 (cpu_power = 1178) 0,4 (cpu_power = 1178) May 4 17:46:10 tofu kernel: [ 2241.920611] CPU6 attaching sched-domain: May 4 17:46:10 tofu kernel: [ 2241.920639] domain 0: span 2,6 level SIBLING May 4 17:46:10 tofu kernel: [ 2241.920641] groups: 6 (cpu_power = 589) 2 (cpu_power = 589) May 4 17:46:10 tofu kernel: [ 2241.920699] domain 1: span 0-7 level MC May 4 17:46:10 tofu kernel: [ 2241.920701] groups: 2,6 (cpu_power = 1178) 3,7 (cpu_power = 1178) 0,4 (cpu_power = 1178) 1,5 (cpu_power = 1178) May 4 17:46:10 tofu kernel: [ 2241.920762] CPU7 attaching sched-domain: May 4 17:46:10 tofu kernel: [ 2241.920791] domain 0: span 3,7 level SIBLING May 4 17:46:10 tofu kernel: [ 2241.920793] groups: 7 (cpu_power = 589) 3 (cpu_power = 589) May 4 17:46:10 tofu kernel: [ 2241.920851] domain 1: span 0-7 level MC May 4 17:46:10 tofu kernel: [ 2241.920853] groups: 3,7 (cpu_power = 1178) 
0,4 (cpu_power = 1178) 1,5 (cpu_power = 1178) 2,6 (cpu_power = 1178) May 4 17:46:12 tofu NetworkManager: <info> Sleeping... May 4 17:46:12 tofu NetworkManager: <info> (wlan0): now unmanaged May 4 17:46:12 tofu NetworkManager: <info> (wlan0): device state change: 8 -> 1 (reason 37) May 4 17:46:12 tofu NetworkManager: <info> (wlan0): deactivating device (reason: 37). May 4 17:46:12 tofu NetworkManager: <info> (wlan0): canceled DHCP transaction, dhcp client pid 1984 May 4 17:46:12 tofu kernel: [ 2244.084515] wlan0: deauthenticating from 68:7f:74:23:02:ae by local choice (reason=3) May 4 17:46:12 tofu avahi-daemon[1176]: Withdrawing address record for 192.168.1.2 on wlan0. May 4 17:46:12 tofu avahi-daemon[1176]: Leaving mDNS multicast group on interface wlan0.IPv4 with address 192.168.1.2. May 4 17:46:12 tofu avahi-daemon[1176]: Interface wlan0.IPv4 no longer relevant for mDNS. May 4 17:46:12 tofu NetworkManager: <info> Policy set 'Auto eth0' (eth0) as default for routing and DNS. May 4 17:46:12 tofu NetworkManager: <info> (wlan0): cleaning up... May 4 17:46:12 tofu NetworkManager: <info> (wlan0): taking down device. May 4 17:46:12 tofu avahi-daemon[1176]: Withdrawing address record for 2002:4c6e:638a:0:1e4b:d6ff:fe78:951d on wlan0. May 4 17:46:12 tofu wpa_supplicant[1212]: CTRL-EVENT-DISCONNECTED - Disconnect event - remove keys May 4 17:46:13 tofu NetworkManager: <info> (eth0): now unmanaged May 4 17:46:13 tofu NetworkManager: <info> (eth0): device state change: 8 -> 1 (reason 37) May 4 17:46:13 tofu NetworkManager: <info> (eth0): deactivating device (reason: 37). May 4 17:46:13 tofu NetworkManager: <info> (eth0): canceled DHCP transaction, dhcp client pid 1559 May 4 17:46:13 tofu NetworkManager: <WARN> check_one_route(): (eth0) error -34 returned from rtnl_route_del(): Sucess#012 May 4 17:46:13 tofu avahi-daemon[1176]: Withdrawing address record for 192.168.1.3 on eth0. May 4 17:46:13 tofu avahi-daemon[1176]: Leaving mDNS multicast group on interface eth0.IPv4 with address 192.168.1.3. May 4 17:46:13 tofu avahi-daemon[1176]: Interface eth0.IPv4 no longer relevant for mDNS. May 4 17:46:13 tofu NetworkManager: <info> (eth0): cleaning up... May 4 17:46:13 tofu NetworkManager: <info> (eth0): taking down device. May 4 17:46:13 tofu avahi-daemon[1176]: Withdrawing address record for 2002:4c6e:638a:0:4a5b:39ff:fe0b:325d on eth0. May 4 17:46:13 tofu NetworkManager: <info> (eth0): carrier now OFF (device state 1)
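    As for where to look: Ubuntu of this vintage routes suspend through pm-utils, which logs every suspend/resume hook it runs, and the kernel exposes which devices are allowed to wake the machine. A few hedged starting points from a terminal:

    sudo pm-suspend                  # suspend outside the desktop stack
    cat /var/log/pm-suspend.log      # per-hook log of the last suspend/resume
    cat /proc/acpi/wakeup            # devices enabled/disabled for wakeup
    dmesg | tail -n 50               # kernel messages from the resume path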

    Read the article

  • sp_send_dbmail attach files stored as varbinary in database

    - by Mindstorm Interactive
    I have a two-part question about sending query results as attachments using sp_send_dbmail. Problem 1: Only basic .txt files will open; any other format, like .pdf or .jpg, arrives corrupted. Problem 2: When attempting to send multiple attachments, I receive one file with all the file names glued together. I'm running SQL Server 2005 and I have a table storing uploaded documents: CREATE TABLE [dbo].[EmailAttachment]( [EmailAttachmentID] [int] IDENTITY(1,1) NOT NULL, [MassEmailID] [int] NULL, -- foreign key [FileData] [varbinary](max) NOT NULL, [FileName] [varchar](100) NOT NULL, [MimeType] [varchar](100) NOT NULL I also have a MassEmail table with standard email stuff. Here is the SQL send mail script. For brevity, I've excluded DECLARE statements. while ( (select count(MassEmailID) from MassEmail where status = 20 )>0) begin select @MassEmailID = Min(MassEmailID) from MassEmail where status = 20 select @Subject = [Subject] from MassEmail where MassEmailID = @MassEmailID select @Body = Body from MassEmail where MassEmailID = @MassEmailID set @query = 'set nocount on; select cast(FileData as varchar(max)) from Mydatabase.dbo.EmailAttachment where MassEmailID = '+ CAST(@MassEmailID as varchar(100)) select @filename = '' select @filename = COALESCE(@filename+ ',', '') +FileName from EmailAttachment where MassEmailID = @MassEmailID exec msdb.dbo.sp_send_dbmail @profile_name = 'MASS_EMAIL', @recipients = '[email protected]', @subject = @Subject, @body =@Body, @body_format ='HTML', @query = @query, @query_attachment_filename = @filename, @attach_query_result_as_file = 1, @query_result_separator = '; ', @query_no_truncate = 1, @query_result_header = 0; update MassEmail set status = 30, SendDate = GetDate() where MassEmailID = @MassEmailID end I am able to read files from the database successfully, so I know the binary data is not corrupted. .txt files only open correctly when I cast FileData to varchar, but clearly the original headers are lost. It's also worth noting that the attachment file sizes differ from the original files; that is most likely due to improper encoding as well. So I'm hoping there's a way to create file headers using the stored MimeType, or some way to include the file headers in the binary data? I'm also not confident in the values of the last few parameters, and I know the COALESCE is not quite right, because it prepends the first file name with a comma. But good documentation is nearly impossible to find. Please help!
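    Both symptoms follow from how @query attachments work: the query output is rendered as text (hence binary formats corrupt, and no cast can restore them), and one query yields one result set, hence a single attachment named with the entire @query_attachment_filename string. A hedged sketch of the usual alternative — stage the varbinary rows out to real files first (bcp, a CLR procedure, or SSIS; not shown here), then hand their paths to @file_attachments, which does accept a semicolon-separated list (the staging folder is hypothetical):

    DECLARE @files varchar(max) -- starts NULL, so COALESCE adds no leading separator
    SELECT @files = COALESCE(@files + ';', '') + 'C:\Staging\' + [FileName]
    FROM dbo.EmailAttachment
    WHERE MassEmailID = @MassEmailID

    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'MASS_EMAIL',
        @recipients = '[email protected]',
        @subject = @Subject,
        @body = @Body,
        @body_format = 'HTML',
        @file_attachments = @files -- real files keep their headers intact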

    Read the article
