Search Results

Search found 9133 results on 366 pages for 'global interpreter lock'.


  • Setting up a local AI server - easy with Solaris 11

    - by Stefan Hinker
    Many things are new in Solaris 11, and Autoinstall is one of them.  If, like me, you've known Jumpstart for the last 2 centuries or so, you'll have to start from scratch.  Well, almost, as the concepts are similar, and it's not all that difficult.  Just new. I wanted to have an AI server that I could use for demo purposes, on the train if need be.  That answers the question of hardware requirements: portable.  But let's start at the beginning.

    First, you need an OS image, of course.  In the new world of Solaris 11, it is now called a repository.  The original can be downloaded from the Solaris 11 page at Oracle.  What you want is the "Oracle Solaris 11 11/11 Repository Image", which comes in two parts that can be combined using cat.  MD5 checksums for these (and all other downloads from that page) are available closer to the top of the page.  With that, building the repository is quick and simple:

    # zfs create -o mountpoint=/export/repo rpool/ai/repo
    # zfs create rpool/ai/repo/sol11
    # mount -o ro -F hsfs /tmp/sol-11-1111-repo-full.iso /mnt
    # rsync -aP /mnt/repo /export/repo/sol11
    # umount /mnt
    # pkgrepo rebuild -s /export/repo/sol11/repo
    # zfs snapshot rpool/ai/repo/sol11@fcs
    # pkgrepo info -s /export/repo/sol11/repo
    PUBLISHER PACKAGES STATUS UPDATED
    solaris   4292     online 2012-03-12T20:47:15.378639Z

    That's all there's to it.  Let's make a snapshot, just to be on the safe side.  You never know when one will come in handy.  To use this repository, you could just add it as a file-based publisher:

    # pkg set-publisher -g file:///export/repo/sol11/repo solaris

    In case I'd want to access this repository through a (virtual) network, I'll now quickly activate the repository service:

    # svccfg -s application/pkg/server \
      setprop pkg/inst_root=/export/repo/sol11/repo
    # svccfg -s application/pkg/server setprop pkg/readonly=true
    # svcadm refresh application/pkg/server
    # svcadm enable application/pkg/server

    That's all you need - now point your browser to http://localhost/ to view your beautiful repository server. Step 1 is done.  All of this, by the way, is nicely documented in the README file that's contained in the repository image.

    Of course, we already have updates to the original release.  You can find them in MOS in the Oracle Solaris 11 Support Repository Updates (SRU) Index.  You can simply add these to your existing repository or create separate repositories for each SRU.  The individual SRUs are self-sufficient and cumulative - SRU4 includes all updates from SRU2 and SRU3.  With ZFS, you can also get both: a full repository with all updates and, at the same time, the state of the repository at each earlier update:

    # mount -o ro -F hsfs /tmp/sol-11-1111-sru4-05-incr-repo.iso /mnt
    # pkgrecv -s /mnt/repo -d /export/repo/sol11/repo '*'
    # umount /mnt
    # pkgrepo rebuild -s /export/repo/sol11/repo
    # zfs snapshot rpool/ai/repo/sol11@sru4
    # zfs set snapdir=visible rpool/ai/repo/sol11
    # svcadm restart svc:/application/pkg/server:default

    The normal repository is now updated to SRU4.  Thanks to the ZFS snapshots, there is also a valid repository of Solaris 11 11/11 without the update located at /export/repo/sol11/.zfs/snapshot/fcs .  If you like, you can also create another repository service for each update, running on a separate port.

    But now let's continue with the AI server.  Just a little bit of reading in the documentation makes it clear that we will need to run a DHCP server for this.
    Since I already have one active (for my SunRay installation) and since it's a good idea to have these kinds of services separate anyway, I decided to create this in a Zone.  So, let's create one first:

    # zfs create -o mountpoint=/export/install rpool/ai/install
    # zfs create -o mountpoint=/zones rpool/zones
    # zonecfg -z ai-server
    zonecfg:ai-server> create
    create: Using system default template 'SYSdefault'
    zonecfg:ai-server> set zonepath=/zones/ai-server
    zonecfg:ai-server> add dataset
    zonecfg:ai-server:dataset> set name=rpool/ai/install
    zonecfg:ai-server:dataset> set alias=install
    zonecfg:ai-server:dataset> end
    zonecfg:ai-server> commit
    zonecfg:ai-server> exit
    # zoneadm -z ai-server install
    # zoneadm -z ai-server boot ; zlogin -C ai-server

    Give it a hostname and IP address at first boot, and there's the Zone.  As a publisher for Solaris packages, it will be bound to the "System Publisher" from the Global Zone.  The /export/install filesystem, of course, is intended to be used by the AI server.  Let's configure it now:

    # zlogin ai-server
    root@ai-server:~# pkg install install/installadm
    root@ai-server:~# installadm create-service -n x86-fcs -a i386 \
      -s pkg://solaris/install-image/[email protected],5.11-0.175.0.0.0.2.1482 \
      -d /export/install/fcs -i 192.168.2.20 -c 3

    With that, the core AI server is already done.  What happened here?  First, I installed the AI server software.  IPS makes that nice and easy.  If necessary, it'll also pull in the required DHCP server and anything else that might be missing.  Watch out for that DHCP server software.  In Solaris 11, there are two different versions.  There's the one you might know from Solaris 10 and earlier, and then there's a new one from ISC.  The latter is the one we need for AI.  The SMF service names of both are very similar.  The "old" one is "svc:/network/dhcp-server:default".  The ISC server comes with several SMF services; we at least need "svc:/network/dhcp/server:ipv4".

    The command "installadm create-service" creates the installation service.  It's called "x86-fcs", serves the "i386" architecture and gets its boot image from the repository of the system publisher, using version 5.11,5.11-0.175.0.0.0.2.1482, which is Solaris 11 11/11.  (The option "-a i386" in this example is optional, since the install server itself runs on an x86 machine.)  The boot environment for clients is created in /export/install/fcs and the DHCP server is configured for 3 IP addresses starting at 192.168.2.20.  This configuration is stored in a very human-readable form in /etc/inet/dhcpd4.conf.  An AI service for SPARC systems could be created in the very same way, using "-a sparc" as the architecture option.

    Now we would be ready to register and install the first client.  It would be installed with the default "solaris-large-server" using the publisher "http://pkg.oracle.com/solaris/release" and would query its configuration interactively at first boot.  This makes it very clear that an AI server is really only a boot server.  The true source of packages to install can be different.  Since I don't like these defaults for my demo setup, I did some extra config work for my clients.

    The configuration of a client is controlled by manifests and profiles.  The manifest controls which packages are installed and how the filesystems are laid out.  In that, it's very much like the old "rules.ok" file in Jumpstart.  Profiles contain additional configuration like root passwords, primary user account, IP addresses, keyboard layout etc.
    Hence, profiles are very similar to the old sysidcfg file. The easiest way to get your hands on a manifest is to ask the AI server we just created to give us its default one.  Then modify that to our liking and give it back to the install server to use:

    root@ai-server:~# mkdir -p /export/install/configs/manifests
    root@ai-server:~# cd /export/install/configs/manifests
    root@ai-server:~# installadm export -n x86-fcs -m orig_default \
      -o orig_default.xml
    root@ai-server:~# cp orig_default.xml s11-fcs.small.local.xml
    root@ai-server:~# vi s11-fcs.small.local.xml
    root@ai-server:~# more s11-fcs.small.local.xml
    <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
    <auto_install>
      <ai_instance name="S11 Small fcs local">
        <target>
          <logical>
            <zpool name="rpool" is_root="true">
              <filesystem name="export" mountpoint="/export"/>
              <filesystem name="export/home"/>
              <be name="solaris"/>
            </zpool>
          </logical>
        </target>
        <software type="IPS">
          <destination>
            <image>
              <!-- Specify locales to install -->
              <facet set="false">facet.locale.*</facet>
              <facet set="true">facet.locale.de</facet>
              <facet set="true">facet.locale.de_DE</facet>
              <facet set="true">facet.locale.en</facet>
              <facet set="true">facet.locale.en_US</facet>
            </image>
          </destination>
          <source>
            <publisher name="solaris">
              <origin name="http://192.168.2.12/"/>
            </publisher>
          </source>
          <!-- By default the latest build available, in the specified IPS repository, is installed.
               If another build is required, the build number has to be appended to the 'entire'
               package in the following form: <name>pkg:/[email protected]#</name> -->
          <software_data action="install">
            <name>pkg:/[email protected],5.11-0.175.0.0.0.2.0</name>
            <name>pkg:/group/system/solaris-small-server</name>
          </software_data>
        </software>
      </ai_instance>
    </auto_install>
    root@ai-server:~# installadm create-manifest -n x86-fcs -d \
      -f ./s11-fcs.small.local.xml
    root@ai-server:~# installadm list -m -n x86-fcs
    Manifest             Status   Criteria
    --------             ------   --------
    S11 Small fcs local  Default  None
    orig_default         Inactive None

    The major points in this new manifest are: Install "solaris-small-server".  Install fewer locales than the default - I'm not that fluent in French or Japanese...  Use my own package service as publisher, running on IP address 192.168.2.12.  Install the initial release of Solaris 11: pkg:/[email protected],5.11-0.175.0.0.0.2.0.

    Using a similar approach, I'll create a default profile interactively and use it as a template for a few customized building blocks, each defining a part of the overall system configuration.
The modular approach makes it easy to configure numerous clients later on: root@ai-server:~# mkdir -p /export/install/configs/profiles root@ai-server:~# cd /export/install/configs/profiles root@ai-server:~# sysconfig create-profile -o default.xml root@ai-server:~# cp default.xml general.xml; cp default.xml mars.xml root@ai-server:~# cp default.xml user.xml root@ai-server:~# vi general.xml mars.xml user.xml root@ai-server:~# more general.xml mars.xml user.xml :::::::::::::: general.xml :::::::::::::: <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1"> <service_bundle type="profile" name="sysconfig"> <service version="1" type="service" name="system/timezone"> <instance enabled="true" name="default"> <property_group type="application" name="timezone"> <propval type="astring" name="localtime" value="Europe/Berlin"/> </property_group> </instance> </service> <service version="1" type="service" name="system/environment"> <instance enabled="true" name="init"> <property_group type="application" name="environment"> <propval type="astring" name="LANG" value="C"/> </property_group> </instance> </service> <service version="1" type="service" name="system/keymap"> <instance enabled="true" name="default"> <property_group type="system" name="keymap"> <propval type="astring" name="layout" value="US-English"/> </property_group> </instance> </service> <service version="1" type="service" name="system/console-login"> <instance enabled="true" name="default"> <property_group type="application" name="ttymon"> <propval type="astring" name="terminal_type" value="vt100"/> </property_group> </instance> </service> <service version="1" type="service" name="network/physical"> <instance enabled="true" name="default"> <property_group type="application" name="netcfg"> <propval type="astring" name="active_ncp" value="DefaultFixed"/> </property_group> </instance> </service> <service version="1" type="service" name="system/name-service/switch"> <property_group type="application" name="config"> <propval type="astring" name="default" value="files"/> <propval type="astring" name="host" value="files dns"/> <propval type="astring" name="printer" value="user files"/> </property_group> <instance enabled="true" name="default"/> </service> <service version="1" type="service" name="system/name-service/cache"> <instance enabled="true" name="default"/> </service> <service version="1" type="service" name="network/dns/client"> <property_group type="application" name="config"> <property type="net_address" name="nameserver"> <net_address_list> <value_node value="192.168.2.1"/> </net_address_list> </property> </property_group> <instance enabled="true" name="default"/> </service> </service_bundle> :::::::::::::: mars.xml :::::::::::::: <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1"> <service_bundle type="profile" name="sysconfig"> <service version="1" type="service" name="network/install"> <instance enabled="true" name="default"> <property_group type="application" name="install_ipv4_interface"> <propval type="astring" name="address_type" value="static"/> <propval type="net_address_v4" name="static_address" value="192.168.2.100/24"/> <propval type="astring" name="name" value="net0/v4"/> <propval type="net_address_v4" name="default_route" value="192.168.2.1"/> </property_group> <property_group type="application" name="install_ipv6_interface"> <propval type="astring" name="stateful" value="yes"/> <propval type="astring" name="stateless" value="yes"/> <propval type="astring" 
name="address_type" value="addrconf"/> <propval type="astring" name="name" value="net0/v6"/> </property_group> </instance> </service> <service version="1" type="service" name="system/identity"> <instance enabled="true" name="node"> <property_group type="application" name="config"> <propval type="astring" name="nodename" value="mars"/> </property_group> </instance> </service> </service_bundle> :::::::::::::: user.xml :::::::::::::: <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1"> <service_bundle type="profile" name="sysconfig"> <service version="1" type="service" name="system/config-user"> <instance enabled="true" name="default"> <property_group type="application" name="root_account"> <propval type="astring" name="login" value="root"/> <propval type="astring" name="password" value="noIWillNotTellYouMyPasswordNotEvenEncrypted"/> <propval type="astring" name="type" value="role"/> </property_group> <property_group type="application" name="user_account"> <propval type="astring" name="login" value="stefan"/> <propval type="astring" name="password" value="noIWillNotTellYouMyPasswordNotEvenEncrypted"/> <propval type="astring" name="type" value="normal"/> <propval type="astring" name="description" value="Stefan Hinker"/> <propval type="count" name="uid" value="12345"/> <propval type="count" name="gid" value="10"/> <propval type="astring" name="shell" value="/usr/bin/bash"/> <propval type="astring" name="roles" value="root"/> <propval type="astring" name="profiles" value="System Administrator"/> <propval type="astring" name="sudoers" value="ALL=(ALL) ALL"/> </property_group> </instance> </service> </service_bundle> root@ai-server:~# installadm create-profile -n x86-fcs -f general.xml root@ai-server:~# installadm create-profile -n x86-fcs -f user.xml root@ai-server:~# installadm create-profile -n x86-fcs -f mars.xml \ -c ipv4=192.168.2.100 root@ai-server:~# installadm list -p Service Name Profile ------------ ------- x86-fcs general.xml mars.xml user.xml root@ai-server:~# installadm list -n x86-fcs -p Profile Criteria ------- -------- general.xml None mars.xml ipv4 = 192.168.2.100 user.xml None Here's the idea behind these files: "general.xml" contains settings valid for all my clients.  Stuff like DNS servers, for example, which in my case will always be the same. "user.xml" only contains user definitions.  That is, a root password and a primary user.Both of these profiles will be valid for all clients (for now). "mars.xml" defines network settings for an individual client.  This profile is associated with an IP-Address.  For this to work, I'll have to tweak the DHCP-settings in the next step: root@ai-server:~# installadm create-client -e 08:00:27:AA:3D:B1 -n x86-fcs root@ai-server:~# vi /etc/inet/dhcpd4.conf root@ai-server:~# tail -5 /etc/inet/dhcpd4.conf host 080027AA3DB1 { hardware ethernet 08:00:27:AA:3D:B1; fixed-address 192.168.2.100; filename "01080027AA3DB1"; } This completes the client preparations.  I manually added the IP-Address for mars to /etc/inet/dhcpd4.conf.  This is needed for the "mars.xml" profile.  Disabling arbitrary DHCP-replies will shut up this DHCP server, making my life in a shared environment a lot more peaceful ;-)Now, I of course want this installation to be completely hands-off.  For this to work, I'll need to modify the grub boot menu for this client slightly.  You can find it in /etc/netboot.  "installadm create-client" will create a new boot menu for every client, identified by the client's MAC address.  
The template for this can be found in a subdirectory with the name of the install service, /etc/netboot/x86-fcs in our case.  If you don't want to change this manually for every client, modify that template to your liking instead. root@ai-server:~# cd /etc/netboot root@ai-server:~# cp menu.lst.01080027AA3DB1 menu.lst.01080027AA3DB1.org root@ai-server:~# vi menu.lst.01080027AA3DB1 root@ai-server:~# diff menu.lst.01080027AA3DB1 menu.lst.01080027AA3DB1.org 1,2c1,2 < default=1 < timeout=10 --- > default=0 > timeout=30 root@ai-server:~# more menu.lst.01080027AA3DB1 default=1 timeout=10 min_mem64=0 title Oracle Solaris 11 11/11 Text Installer and command line kernel$ /x86-fcs/platform/i86pc/kernel/$ISADIR/unix -B install_media=htt p://$serverIP:5555//export/install/fcs,install_service=x86-fcs,install_svc_addre ss=$serverIP:5555 module$ /x86-fcs/platform/i86pc/$ISADIR/boot_archive title Oracle Solaris 11 11/11 Automated Install kernel$ /x86-fcs/platform/i86pc/kernel/$ISADIR/unix -B install=true,inst all_media=http://$serverIP:5555//export/install/fcs,install_service=x86-fcs,inst all_svc_address=$serverIP:5555,livemode=text module$ /x86-fcs/platform/i86pc/$ISADIR/boot_archive Now just boot the client off the network using PXE-boot.  For my demo purposes, that's a client from VirtualBox, of course.  That's all there's to it.  And despite the fact that this blog entry is a little longer - that wasn't that hard now, was it?


  • Developing web apps using ASP.NET MVC 3, Razor and EF Code First - Part 1

    - by shiju
    In this post, I will demonstrate web application development using ASP.NET MVC 3, Razor and EF Code First. This post will also cover Dependency Injection using Unity 2.0 and a generic Repository and Unit of Work for EF Code First. The following frameworks will be used for this step-by-step tutorial: ASP.NET MVC 3, EF Code First CTP 5, Unity 2.0.

    Define Domain Model  Let's create the domain model for our simple web application.

    Category class
    public class Category
    {
        public int CategoryId { get; set; }
        [Required(ErrorMessage = "Name Required")]
        [StringLength(25, ErrorMessage = "Must be less than 25 characters")]
        public string Name { get; set; }
        public string Description { get; set; }
        public virtual ICollection<Expense> Expenses { get; set; }
    }

    Expense class
    public class Expense
    {
        public int ExpenseId { get; set; }
        public string Transaction { get; set; }
        public DateTime Date { get; set; }
        public double Amount { get; set; }
        public int CategoryId { get; set; }
        public virtual Category Category { get; set; }
    }

    We have two domain entities - Category and Expense. A single category contains a list of expense transactions, and every expense transaction should have a Category. In this post, we will focus on CRUD operations for the Category entity and will work on the Expense entity with a View Model object in a later post. The source code for this application will be refactored over time.

    The above entities are very simple POCO (Plain Old CLR Object) classes, and the Category entity is decorated with validation attributes from the System.ComponentModel.DataAnnotations namespace. Now we want to use these entities to define model objects for Entity Framework 4. Using the Code First approach of Entity Framework, we can first define the entities by simply writing POCO classes without any coupling to any API or database library. This approach lets you focus on the domain model, which enables Domain-Driven Development for applications. EF Code First support currently ships as a separate API that runs on top of Entity Framework 4. EF Code First has reached CTP 5 as I write this article.

    Creating Context Class for Entity Framework  We have created our domain model; now let's create a class to work with Entity Framework Code First. For this, you have to download EF Code First CTP 5 and add a reference to the assembly EntityFramework.dll. You can also use NuGet to download and add a reference to EF Code First.

    public class MyFinanceContext : DbContext
    {
        public MyFinanceContext() : base("MyFinance") { }
        public DbSet<Category> Categories { get; set; }
        public DbSet<Expense> Expenses { get; set; }
    }

    The above class MyFinanceContext derives from DbContext, which can connect your model classes to a database. The MyFinanceContext class maps our Category and Expense classes to the database tables Categories and Expenses using DbSet<TEntity>, where TEntity is any POCO class. When we run the application for the first time, it will automatically create the database. EF Code First looks for a connection string in web.config or app.config that has the same name as the DbContext class. If it does not find a connection string matching that convention, it will automatically create a database in the local SQL Express instance by default, and the name of the database will be the same as the DbContext class. You can also define the name of the database in the constructor of the DbContext class.
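    As a small, hedged illustration of that last point (my own sketch, not code from the post): the database or connection-string name can be supplied through the DbContext base constructor, so a caller could point the same context at a different database, for example a hypothetical "MyFinanceDev". This simply repeats the context class above with an extra constructor added.

    using System.Data.Entity;

    public class MyFinanceContext : DbContext
    {
        // Convention: look for a connection string named "MyFinance";
        // if none exists, create a SQL Express database with that name.
        public MyFinanceContext() : base("MyFinance") { }

        // Alternative: let the caller choose the connection string or
        // database name, e.g. new MyFinanceContext("MyFinanceDev").
        public MyFinanceContext(string nameOrConnectionString)
            : base(nameOrConnectionString) { }

        public DbSet<Category> Categories { get; set; }
        public DbSet<Expense> Expenses { get; set; }
    }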
    Unlike NHibernate, we don't have to use any XML-based mapping files or a fluent interface for mapping between our model and the database. The Code First model classes work on the basis of conventions, and we can also use a fluent API to refine the model. The convention for the primary key is 'Id' or '<class name>Id'.  If primary key properties are detected with type 'int', 'long' or 'short', they will automatically be registered as identity columns in the database by default. Primary key detection is not case sensitive. We can define our model classes with validation attributes from the System.ComponentModel.DataAnnotations namespace, and validation rules are automatically enforced when a model object is updated or saved.

    Generic Repository for EF Code First  We have created the model classes and the DbContext class. Now we have to create a generic repository for data persistence with EF Code First. If you don't know about the repository pattern, check out Martin Fowler's article on Repository. Let's create a generic repository to work with DbContext and DbSet generics.

    public interface IRepository<T> where T : class
    {
        void Add(T entity);
        void Delete(T entity);
        T GetById(long Id);
        IEnumerable<T> All();
    }

    RepositoryBase - generic repository class

    public abstract class RepositoryBase<T> where T : class
    {
        private MyFinanceContext database;
        private readonly IDbSet<T> dbset;

        protected RepositoryBase(IDatabaseFactory databaseFactory)
        {
            DatabaseFactory = databaseFactory;
            dbset = Database.Set<T>();
        }

        protected IDatabaseFactory DatabaseFactory
        {
            get; private set;
        }

        protected MyFinanceContext Database
        {
            get { return database ?? (database = DatabaseFactory.Get()); }
        }

        public virtual void Add(T entity)
        {
            dbset.Add(entity);
        }

        public virtual void Delete(T entity)
        {
            dbset.Remove(entity);
        }

        public virtual T GetById(long id)
        {
            return dbset.Find(id);
        }

        public virtual IEnumerable<T> All()
        {
            return dbset.ToList();
        }
    }

    DatabaseFactory class

    public class DatabaseFactory : Disposable, IDatabaseFactory
    {
        private MyFinanceContext database;
        public MyFinanceContext Get()
        {
            return database ?? (database = new MyFinanceContext());
        }
        protected override void DisposeCore()
        {
            if (database != null)
                database.Dispose();
        }
    }

    Unit of Work  If you are new to the Unit of Work pattern, check out Fowler's article on Unit of Work. According to Martin Fowler, the Unit of Work pattern "maintains a list of objects affected by a business transaction and coordinates the writing out of changes and the resolution of concurrency problems." Let's create a class for handling Unit of Work.

    public interface IUnitOfWork
    {
        void Commit();
    }

    UnitOfWork class

    public class UnitOfWork : IUnitOfWork
    {
        private readonly IDatabaseFactory databaseFactory;
        private MyFinanceContext dataContext;

        public UnitOfWork(IDatabaseFactory databaseFactory)
        {
            this.databaseFactory = databaseFactory;
        }

        protected MyFinanceContext DataContext
        {
            get { return dataContext ?? (dataContext = databaseFactory.Get()); }
        }

        public void Commit()
        {
            DataContext.Commit();
        }
    }

    The Commit method of UnitOfWork calls the Commit method of the MyFinanceContext class, which in turn executes the SaveChanges method of the DbContext class.
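    Two pieces that the listings above reference but do not show are the Disposable base class (which DatabaseFactory derives from) and the Commit member on MyFinanceContext (which UnitOfWork.Commit calls). The following is a minimal sketch of what they might look like, assuming Commit is a thin wrapper over SaveChanges; it is my reading of the post, not the author's exact code (which is in the downloadable source).

    using System;

    // Assumed base class: provides a standard IDisposable implementation and
    // lets derived types plug in their own cleanup via DisposeCore.
    public abstract class Disposable : IDisposable
    {
        private bool isDisposed;

        ~Disposable()
        {
            Dispose(false);
        }

        public void Dispose()
        {
            Dispose(true);
            GC.SuppressFinalize(this);
        }

        private void Dispose(bool disposing)
        {
            if (!isDisposed && disposing)
            {
                DisposeCore(); // DatabaseFactory overrides this to dispose the context
            }
            isDisposed = true;
        }

        protected virtual void DisposeCore()
        {
        }
    }

    // Assumed addition to MyFinanceContext (shown as a comment to avoid
    // re-declaring the class from the earlier listing):
    //
    //     public void Commit()
    //     {
    //         SaveChanges(); // DbContext.SaveChanges writes all pending changes
    //     }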
    Repository class for Category  In this post, we will focus on persistence for the Category entity and will work on the other entities in a later post. Let's create a repository for handling CRUD operations for Category by deriving from the generic RepositoryBase<T>.

    public class CategoryRepository : RepositoryBase<Category>, ICategoryRepository
    {
        public CategoryRepository(IDatabaseFactory databaseFactory)
            : base(databaseFactory)
        {
        }
    }

    public interface ICategoryRepository : IRepository<Category>
    {
    }

    If we need methods beyond what the generic repository provides for Category, we can define them in CategoryRepository (a small sketch of such an extension appears at the end of this post).

    Dependency Injection using Unity 2.0  If you are new to Inversion of Control / Dependency Injection or Unity, please have a look at my articles at http://weblogs.asp.net/shijuvarghese/archive/tags/IoC/default.aspx. I want to create a custom lifetime manager for Unity that stores resolved instances in the current HttpContext.

    public class HttpContextLifetimeManager<T> : LifetimeManager, IDisposable
    {
        public override object GetValue()
        {
            return HttpContext.Current.Items[typeof(T).AssemblyQualifiedName];
        }
        public override void RemoveValue()
        {
            HttpContext.Current.Items.Remove(typeof(T).AssemblyQualifiedName);
        }
        public override void SetValue(object newValue)
        {
            HttpContext.Current.Items[typeof(T).AssemblyQualifiedName] = newValue;
        }
        public void Dispose()
        {
            RemoveValue();
        }
    }

    Let's create a controller factory for Unity in the ASP.NET MVC 3 application.

    public class UnityControllerFactory : DefaultControllerFactory
    {
        IUnityContainer container;
        public UnityControllerFactory(IUnityContainer container)
        {
            this.container = container;
        }
        protected override IController GetControllerInstance(RequestContext reqContext, Type controllerType)
        {
            IController controller;
            if (controllerType == null)
                throw new HttpException(
                        404, String.Format(
                            "The controller for path '{0}' could not be found " +
                            "or it does not implement IController.",
                        reqContext.HttpContext.Request.Path));

            if (!typeof(IController).IsAssignableFrom(controllerType))
                throw new ArgumentException(
                        string.Format(
                            "Type requested is not a controller: {0}",
                            controllerType.Name),
                            "controllerType");
            try
            {
                controller = container.Resolve(controllerType) as IController;
            }
            catch (Exception ex)
            {
                throw new InvalidOperationException(String.Format(
                                        "Error resolving controller {0}",
                                        controllerType.Name), ex);
            }
            return controller;
        }
    }

    Configure contract and concrete types in Unity  Let's configure our contract and concrete types in Unity for resolving our dependencies.
    private void ConfigureUnity()
    {
        //Create UnityContainer
        IUnityContainer container = new UnityContainer()
            .RegisterType<IDatabaseFactory, DatabaseFactory>(new HttpContextLifetimeManager<IDatabaseFactory>())
            .RegisterType<IUnitOfWork, UnitOfWork>(new HttpContextLifetimeManager<IUnitOfWork>())
            .RegisterType<ICategoryRepository, CategoryRepository>(new HttpContextLifetimeManager<ICategoryRepository>());
        //Set container for Controller Factory
        ControllerBuilder.Current.SetControllerFactory(
            new UnityControllerFactory(container));
    }

    In the above ConfigureUnity method, we are registering our types with the Unity container using the custom lifetime manager HttpContextLifetimeManager. Let's call the ConfigureUnity method in Global.asax.cs to set the controller factory and configure the types with Unity.

    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();
        RegisterGlobalFilters(GlobalFilters.Filters);
        RegisterRoutes(RouteTable.Routes);
        ConfigureUnity();
    }

    Developing the web application using ASP.NET MVC 3  We have created the domain model for our web application and have also created repositories and configured dependencies with the Unity container. Now we have to create controller classes and views for doing CRUD operations against the Category entity. Let's create the controller class for Category.

    Category Controller

    public class CategoryController : Controller
    {
        private readonly ICategoryRepository categoryRepository;
        private readonly IUnitOfWork unitOfWork;

        public CategoryController(ICategoryRepository categoryRepository, IUnitOfWork unitOfWork)
        {
            this.categoryRepository = categoryRepository;
            this.unitOfWork = unitOfWork;
        }

        public ActionResult Index()
        {
            var categories = categoryRepository.All();
            return View(categories);
        }

        [HttpGet]
        public ActionResult Edit(int id)
        {
            var category = categoryRepository.GetById(id);
            return View(category);
        }

        [HttpPost]
        public ActionResult Edit(int id, FormCollection collection)
        {
            var category = categoryRepository.GetById(id);
            if (TryUpdateModel(category))
            {
                unitOfWork.Commit();
                return RedirectToAction("Index");
            }
            else return View(category);
        }

        [HttpGet]
        public ActionResult Create()
        {
            var category = new Category();
            return View(category);
        }

        [HttpPost]
        public ActionResult Create(Category category)
        {
            if (!ModelState.IsValid)
            {
                return View("Create", category);
            }
            categoryRepository.Add(category);
            unitOfWork.Commit();
            return RedirectToAction("Index");
        }

        [HttpPost]
        public ActionResult Delete(int id)
        {
            var category = categoryRepository.GetById(id);
            categoryRepository.Delete(category);
            unitOfWork.Commit();
            var categories = categoryRepository.All();
            return PartialView("CategoryList", categories);
        }
    }

    Creating Views in Razor  Now we are going to create views in Razor for our ASP.NET MVC 3 application.  Let's create a partial view CategoryList.cshtml for listing category information and providing links for the Edit and Delete operations.
    CategoryList.cshtml

    @using MyFinance.Helpers;
    @using MyFinance.Domain;
    @model IEnumerable<Category>
    <table>
        <tr>
            <th>Actions</th>
            <th>Name</th>
            <th>Description</th>
        </tr>
    @foreach (var item in Model) {
        <tr>
            <td>
                @Html.ActionLink("Edit", "Edit", new { id = item.CategoryId })
                @Ajax.ActionLink("Delete", "Delete", new { id = item.CategoryId }, new AjaxOptions { Confirm = "Delete Expense?", HttpMethod = "Post", UpdateTargetId = "divCategoryList" })
            </td>
            <td>
                @item.Name
            </td>
            <td>
                @item.Description
            </td>
        </tr>
    }
    </table>
    <p>
        @Html.ActionLink("Create New", "Create")
    </p>

    The Delete link provides Ajax functionality using Ajax.ActionLink. This issues an Ajax request to the Delete action method in the CategoryController class. The Delete action method returns the partial view CategoryList after deleting the record. We use the CategoryList view both for the Ajax functionality and in the Index view for displaying the list of category information. Let's create the Index view using the partial view CategoryList.

    Index.cshtml

    @model IEnumerable<MyFinance.Domain.Category>
    @{
        ViewBag.Title = "Index";
    }
    <h2>Category List</h2>
    <script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.min.js")" type="text/javascript"></script>
    <div id="divCategoryList">
        @Html.Partial("CategoryList", Model)
    </div>

    We can render the partial view using the Html.Partial helper method. Now we are going to create view pages for the insert and update functionality for the Category. Both view pages share a common user interface for entering the category information, so I want to create an EditorTemplate for the Category information. We have to create the EditorTemplate with the same name as the entity object so that we can refer to it on the view pages using @Html.EditorFor(model => model). So let's create a template with the name Category. Let's create the view page for inserting Category information.

    @model MyFinance.Domain.Category
    @{
        ViewBag.Title = "Save";
    }
    <h2>Create</h2>
    <script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script>
    <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script>
    @using (Html.BeginForm()) {
        @Html.ValidationSummary(true)
        <fieldset>
            <legend>Category</legend>
            @Html.EditorFor(model => model)
            <p>
                <input type="submit" value="Create" />
            </p>
        </fieldset>
    }
    <div>
        @Html.ActionLink("Back to List", "Index")
    </div>

    ViewStart file  In Razor views, we can add a file named _ViewStart.cshtml in the Views directory, and it will be shared among all views within the Views directory. The code below in _ViewStart.cshtml sets the layout page for every view in the Views folder.

    @{
        Layout = "~/Views/Shared/_Layout.cshtml";
    }

    Source Code  You can download the source code from http://efmvc.codeplex.com/ . The source will be refactored over time.

    Summary  In this post, we have created a simple web application using ASP.NET MVC 3 and EF Code First. We have discussed technologies and practices such as ASP.NET MVC 3, Razor, EF Code First, Unity 2, the generic Repository and Unit of Work. In later posts, I will modify the application and discuss more topics.
    Stay tuned to my blog for more posts on step-by-step application building.
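    As a footnote to the repository section above, here is the extension sketch mentioned there: if Category needs queries beyond the generic CRUD members, they can be added to CategoryRepository and its interface. The GetByName method below is my own hypothetical example, not code from the post; it relies on the protected Database property exposed by RepositoryBase.

    using System.Linq;

    public interface ICategoryRepository : IRepository<Category>
    {
        // Hypothetical extra query, declared here so callers coding against
        // the interface can use it too.
        Category GetByName(string name);
    }

    public class CategoryRepository : RepositoryBase<Category>, ICategoryRepository
    {
        public CategoryRepository(IDatabaseFactory databaseFactory)
            : base(databaseFactory)
        {
        }

        // Database is the protected MyFinanceContext from RepositoryBase,
        // so LINQ queries can go straight against the Categories DbSet.
        public Category GetByName(string name)
        {
            return Database.Categories.FirstOrDefault(c => c.Name == name);
        }
    }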


  • Tips on Migrating from AquaLogic .NET Accelerator to WebCenter WSRP Producer for .NET

    - by user647124
    This year I embarked on a journey to migrate a group of ASP.NET web applications that had been developed to integrate with WebLogic Portal 9.2 via the AquaLogic® Interaction .NET Application Accelerator 1.0 so that they instead use the Oracle WebCenter WSRP Producer for .NET, integrated with WebLogic Portal 10.3.4. It has been a very winding path, and this blog entry is intended to share both the lessons learned and the relevant approaches that led to those lessons. Like most journeys of discovery, it was not a direct path, and there are notes to let you know when it is practical to skip a section if you are in a hurry to get from here to there.

    For the Curious  From the perspective of necessity, this section would be better at the end. If it were there, though, it would probably be read by far fewer people, including those that are actually interested in these types of sections. Those in a hurry may skip past it and be none the worse for it in dealing with the hands-on bits of performing a migration from .NET Accelerator to WSRP Producer. For others who want to talk about why they did what they did after they did it, or just want to know for themselves, enjoy.

    A Brief (and edited) History of the WSRP for .NET Technologies (as Relevant to this Post)  Note: This section is for those who are curious about why the migration path is not as simple as many other Oracle technologies. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it.

    The currently deployed architecture that was to be migrated and upgraded achieved initial integration between .NET and J2EE over the WSRP protocol through the use of the AquaLogic Interaction .NET Application Accelerator. The .NET Accelerator allowed the applications that were written in ASP.NET and deployed on a Microsoft Internet Information Server (IIS) to interact with a WebLogic Portal application deployed on a WebLogic (J2EE application) Server (both version 9.2, the state of the art at the time of its creation). At the time this architectural decision for the application was made, both the AquaLogic and WebLogic brands were owned by BEA Systems. The AquaLogic brand included products acquired by BEA through the acquisition of Plumtree, whose flagship product was a portal platform available in both J2EE and .NET versions. As part of this dual technology support, an adapter was created to facilitate the use of WSRP as a communication protocol where customers wished to integrate components from both versions of the Plumtree portal. The adapter evolved over several product generations to include a broad array of both standard and proprietary WSRP integration capabilities. Later, BEA Systems was acquired by Oracle. Over the course of several years Oracle has acquired a large number of portal applications and has taken the strategic direction to migrate users of these myriad (and formerly competitive) products to the Oracle WebCenter technology stack. As part of Oracle's strategic technology roadmap, older portal products are being scheduled for end of life, including the portal products that were part of the BEA acquisition. The .NET Accelerator has been modified over a very long period of time, with features driven by users of the product and developed under three different vendors (each a direct competitor in the same solution space prior to merger).
    The Oracle WebCenter WSRP Producer for .NET was introduced much more recently, with the key objective of specifically addressing the needs of WebCenter customers developing solutions accessible through both J2EE and .NET platforms utilizing the WSRP specifications. The Oracle Product Development Team also provides these insights on the drivers for developing the WSRP Producer:

    *****************************************
    Support for ASP.NET AJAX. Controls using the ASP.NET AJAX script manager do not function properly in the Application Accelerator for .NET.
    Support 2-way SSL in WLP. This was not possible with the proxy/bridge set up in the existing Application Accelerator for .NET.
    Allow developers to code portlets (Web Parts) using the .NET framework rather than a proprietary framework. Developers had to use the Application Accelerator for .NET plug-ins to Visual Studio to manage preferences and profile data. This is now replaced with the .NET Framework Personalization (for preferences) and Profile providers.
    The WSRP Producer for .NET was created as a new way of developing .NET portlets. It was never designed to be an upgrade path for the Application Accelerator for .NET. .NET developers would create new .NET portlets with the WSRP Producer for .NET and leave any existing .NET portlets running in the Application Accelerator for .NET.
    *****************************************

    The advantage of creating a new solution for WSRP is a product that is far easier for Oracle to maintain and support, which in turn improves quality, reliability and maintainability for its customers. No changes to the J2EE applications consuming the WSRP portlets previously rendered by the .NET Accelerator are required to migrate from the AquaLogic WSRP solution. For some customers using the .NET Accelerator, the challenge is adapting their current .NET applications to work with the WSRP Producer (or any other WSRP adapter, as they are proprietary by nature). Part of this adaptation is the need to deploy the .NET applications as children of the WSRP Producer web application, which acts as the root.

    Differences between .NET Accelerator and WSRP Producer  Note: This section is for those who are curious about why the migration is not as pluggable as something such as changing security providers in WebLogic Server. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it.

    The basic terminology used to describe the participating applications in a WSRP environment is the same when applied to either the .NET Accelerator or the WSRP Producer: Producer and Consumer. In both cases the .NET application serves in the role referred to in a WSRP environment as the Producer. The difference lies in how the two adapters create the WSRP translation of the .NET application. The .NET Accelerator, as the name implies, is meant to serve as a quick way of adding WSRP capability to a .NET application. As such, at a high level, the .NET Accelerator behaves as a proxy for requests between the .NET application and the WSRP Consumer. A WSRP request is sent from the Consumer to the .NET Accelerator, the .NET Accelerator transforms this request into an ASP.NET request, receives the response, then transforms the response into a WSRP response. The .NET Accelerator is deployed as a stand-alone application on IIS. The WSRP Producer is deployed as a parent application on IIS, and all ASP.NET modules that will be made available over WSRP are deployed as children of the WSRP Producer application.
    In this manner, the WSRP Producer acts more as a request filter than a proxy in the WSRP transactions between Producer and Consumer.

    Highly Recommended  Enabling Logging  Note: You can skip this section now, but you will most likely want to come back to it later, so why not just read it now? Logging is very helpful in tracking down the causes of any anomalies during testing of migrated portlets. To enable WSRP Producer logging, update the Application_Start method in the Global.asax.cs of your .NET application by adding log4net.Config.XmlConfigurator.Configure();. IIS logs will usually (in a standard configuration) be in a subfolder under C:\WINDOWS\system32\LogFiles\W3SVC. WSRP Producer logs will be found at C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\Logs\WSRPProducer.log. InputTrace.webinfo and OutputTrace.webinfo are located under C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault and can be useful in debugging issues related to markup transformations.

    Things You Must Do  Merge Web.Config  Note: If you have been skipping all the sections that you can, now is the time to stop and pay attention :-)  Because the existing .NET application will become a sub-application of the WSRP Producer, you will want to merge required settings from the existing Web.Config into the one in the WSRP Producer.

    Use the WSRP Producer Master Page  The Master Page installed for the WSRP Producer provides common hidden form fields and JavaScript to facilitate portlet instance management and display configuration when the child page is being rendered over WSRP. You add the Master Page by including it in the <%@ Page declaration with MasterPageFile="~/portlets/Resources/MasterPages/WSRP.Master" . You then replace:

    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" > <HTML> <HEAD>

    With

    <asp:Content ID="ContentHead1" ContentPlaceHolderID="wsrphead" Runat="Server">

    And

    </HEAD> <body> <form id="theForm" method="post" runat="server">

    With

    </asp:Content> <asp:Content ID="ContentBody1" ContentPlaceHolderID="Main" Runat="Server">

    And finally

    </form> </body> </HTML>

    With

    </asp:Content>

    In the event you already use Master Pages, adapt your existing Master Pages to be sub-masters. See Nested ASP.NET Master Pages for a detailed reference on how to do this.

    It Happened to Me, It Might Happen to You…Or Not  Watch for Use of Session or Request in OnInit  In the event the .NET application being modified has pages developed to assume the user has been authenticated in an earlier page request, there may be direct or indirect references in the OnInit method to request or session objects that may not have been created yet. This will vary from application to application, so the recommended approach is to test first. If there is an issue with a page running as a WSRP portlet, then check for potential references in the OnInit method (including references by methods called within OnInit) to session or request objects. If there are, the simplest solution is to create a new method and then call that method once the necessary object(s) are fully available. I find doing this at the start of the Page_Load method to be the simplest solution (a minimal sketch of this deferral appears at the end of this article).

    Case Sensitivity  .NET markup is not consistent about case, but the Java side is case sensitive. This means it is possible to have many variations of SRC= and src= or .JPG and .jpg. The preferred solution is to make these markup instances all lower case in your .NET application. This will allow the default Rewriter rules in wsrp-producer.xml to work as is.
    If this is not practical, then make duplicates of any rules where an issue is occurring due to upper- or mixed-case usage in the .NET application markup, and match the case in use in the duplicate rule. For example:

    <RewriterRule> <LookFor>(href=\"([^\"]+)</LookFor> <ChangeToAbsolute>true</ChangeToAbsolute> <ApplyTo>.axd,.css</ApplyTo> <MakeResource>true</MakeResource> </RewriterRule>

    May need to be duplicated as:

    <RewriterRule> <LookFor>(HREF=\"([^\"]+)</LookFor> <ChangeToAbsolute>true</ChangeToAbsolute> <ApplyTo>.axd,.css</ApplyTo> <MakeResource>true</MakeResource> </RewriterRule>

    While it is possible to write a regular expression that will handle mixed-case usage, it would be long and strenuous to test and maintain, so the recommendation is to use duplicate rules.

    Is it Still Relative?  Some .NET applications base relative paths on a fixed root location. With the introduction of the WSRP Producer, the root has moved up one level. References to ~/ will need to be updated to ~/portlets, and many ../ paths will need another ../ in front.

    I Can See You But I Can't Find You  This issue was first discovered while debugging modules with code that referenced the form on a page from the code-behind by name and/or id. The initial error presented itself as a run-time error that was difficult to interpret over WSRP but seemed clear when run as straight ASP.NET, as it indicated that the object with the form name did not exist. Since the form name was no longer valid after implementing the WSRP Master Page, the likely fix seemed to be simply updating the references in the code. However, as the WSRP Master Page is external to the code, a compile-time error resulted:

    Error      155         The name 'form1' does not exist in the current context                C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\portlets\legacywebsite\module\Screens\Reporting.aspx.cs                51           52           legacywebsite.module

    Much hair-pulling research later, it was discovered that the use of the FindControl method was causing the issue. FindControl doesn't work quite as expected once a Master Page has been introduced: the controls become nested inside other controls, and finding them requires a recursion that FindControl does not perform. In code where the page form is referenced by name, there are two steps to the solution. First, the form needs to be referenced in code generically with Page.Form. For example, this:

    ToggleControl ctrl = new ToggleControl(frmManualEntry, FunctionLibrary.ParseArrayLst(userObj.Roles));

    Becomes this:

    ToggleControl ctrl = new ToggleControl(Page.Form, FunctionLibrary.ParseArrayLst(userObj.Roles));

    Generally, the form id is referenced in most ASP.NET applications as a path to a control on the form. Reaching the control once a Master Page has been added requires an additional method to recurse through the control collections within the form and find the control ID. The following method (found at Rick Strahl's Web Log) corrects this very nicely:

    public static Control FindControlRecursive(Control Root, string Id)
    {
        if (Root.ID == Id)
            return Root;
        foreach (Control Ctl in Root.Controls)
        {
            Control FoundCtl = FindControlRecursive(Ctl, Id);
            if (FoundCtl != null)
                return FoundCtl;
        }
        return null;
    }

    Where the form name is not referenced, simply using the FindControlRecursive method in place of FindControl will be all that is necessary.
Following the second part of the example referenced earlier, the method called with Page.Form changes its value extraction code block from this: Label lblErrMsg = (Label)frmRef.FindControl("lblBRMsg" To this: Label lblErrMsg = (Label) FunctionLibrary.FindControlRecursive(frmRef, "lblBRMsg" The Master That Won’t Step Aside In most migrations it is preferable to make as few changes as possible. In one case I ran across an existing Master Page that would not function as a sub-Master Page. While it would probably have been educational to trace down why, the expedient process of updating it to take the place of the WSRP Master Page is the route I took. The changes are highlighted below: … <asp:ContentPlaceHolder ID="wsrphead" runat="server"></asp:ContentPlaceHolder> </head> <body leftMargin="0" topMargin="0"> <form id="TheForm" runat="server"> <input type="hidden" name="key" id="key" value="" /> <input type="hidden" name="formactionurl" id="formactionurl" value="" /> <input type="hidden" name="handle" id="handle" value="" /> <asp:ScriptManager ID="ScriptManager1" runat="server" EnablePartialRendering="true" > </asp:ScriptManager> This approach did not work for all existing Master Pages, but fortunately all of the other existing Master Pages I have run across worked fine as a sub-Master to the WSRP Master Page. Moving On In Enterprise Portals, even after you get everything working, the work is not finished. Next you need to get it where everyone will work with it. Migration Planning Providing that the server where IIS is running is adequately sized, it is possible to run both the .NET Accelerator and the WSRP Producer on the same server during the upgrade process. The upgrade can be performed incrementally, i.e., one portlet at a time, if server administration processes support it. Those processes would include the ability to manage a second producer in the consuming portal and to change over individual portlet instances from one provider to the other. If processes or requirements demand that all portlets be cut over at the same time, it needs to be determined if this cut over should include a new producer, updating all of the portlets in the consumer, or if the WSRP Producer portlet configuration must maintain the naming conventions used by the .NET Accelerator and simply change the WSRP end point configured in the consumer. In some enterprises it may even be necessary to maintain the same WSDL end point, at which point the IIS configuration will be where the updates occur. The downside to such a requirement is that it makes rolling back very difficult, should the need arise. Location, Location, Location Not everyone wants the web application to have the descriptively obvious wsrpdefault location, or needs to create a second WSRP site on the same server. The instructions below are from the product team and, while targeted towards making a second site, will work for creating a site with a different name and then remove the old site. You can also change just the name in IIS. Manually Creating a WSRP Producer Site Instructions (NOTE: all executables used are the same ones used by the installer and “wsrpdev” will be the name of the new instance): 1. Copy C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault to C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev. 2. Bring up a command window as an administrator 3. 
Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\IISAppAccelSiteCreator.exe install WSRPProducers wsrpdev "C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev" 8678 2.0.50727 4. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev "NETWORK SERVICE" 3 1 5. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev EVERYONE 1 1 6. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\1.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev 7. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\2.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev Tests: 1. Bring up a browser on the host itself and go to http://localhost:8678/wsrpdev/wsdl/1.0/WSRPService.wsdl and make sure that the URLs in the XML returned include the wsrpdev changes you made in step 6. 2. Bring up a browser on the host itself and see if the default sample comes up: http://localhost:8678/wsrpdev/portlets/ASPNET_AJAX_sample/default.aspx 3. Register the producer in WLP and test the portlet. Changing the Port used by WSRP Producer The pre-configured port for the WSRP Producer is 8678. You can change this port by updating both the IIS configuration and C:\Oracle\Middleware\WSRPProducerForDotNet\[WSRP_APP_NAME]\wsdl\1.0\WSRPService.wsdl. Do You Need to Migrate? Oracle Premier Support ended in November of 2010 for AquaLogic Interaction .NET Application Accelerator 1.x and Extended Support ends in November 2012 (see http://www.oracle.com/us/support/lifetime-support/lifetime-support-software-342730.html for other related dates). This means that integration with products released after November of 2010 is not supported. If having such support is the policy within your enterprise, you do indeed need to migrate. If changes in your enterprise cause your current solution with the .NET Accelerator to no longer function properly, you may need to migrate. Migration is a choice, and if the goals of your enterprise are to take full advantage of newer technologies then migration is certainly one activity you should be planning for.
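    To close, here is the minimal sketch promised in the "Watch for Use of Session or Request in OnInit" section above. It is my own illustration of the deferral idea, assuming a page that used to read a session value in OnInit; the page name, field and session key are made up for the example.

    using System;
    using System.Web.UI;

    public partial class Reporting : Page
    {
        private string userRole;

        protected override void OnInit(EventArgs e)
        {
            base.OnInit(e);
            // Old approach (problematic over WSRP): Session may not be
            // available yet at this point in the page life cycle.
            // userRole = (string)Session["UserRole"];
        }

        protected void Page_Load(object sender, EventArgs e)
        {
            // Deferred: by Page_Load the request and session objects are
            // fully available, so do the initialization here instead.
            InitializeFromSession();
        }

        // Hypothetical helper holding the logic that used to live in OnInit.
        private void InitializeFromSession()
        {
            userRole = Session["UserRole"] as string;
        }
    }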


  • help with fixing fwts errors log

    - by jasmines
    Here is an extract of results.log: MTRR validation. Test 1 of 3: Validate the kernel MTRR IOMEM setup. FAILED [MEDIUM] MTRRIncorrectAttr: Test 1, Memory range 0xc0000000 to 0xdfffffff (PCI Bus 0000:00) has incorrect attribute Write-Combining. FAILED [MEDIUM] MTRRIncorrectAttr: Test 1, Memory range 0xfee01000 to 0xffffffff (PCI Bus 0000:00) has incorrect attribute Write-Protect. ==================================================================================================== Test 1 of 1: Kernel log error check. Kernel message: [ 0.208079] [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored ADVICE: This is not exactly a failure mode but a warning from the kernel. The _OSI() method has implemented a match to the 'Linux' query in the DSDT and this is redundant because the ACPI driver matches onto the Windows _OSI strings by default. FAILED [HIGH] KlogACPIErrorMethodExecutionParse: Test 1, HIGH Kernel message: [ 3.512783] ACPI Error : Method parse/execution failed [\_SB_.PCI0.GFX0._DOD] (Node f7425858), AE_AML_PACKAGE_LIMIT (20110623/psparse-536) ADVICE: This is a bug picked up by the kernel, but as yet, the firmware test suite has no diagnostic advice for this particular problem. Found 1 unique errors in kernel log. ==================================================================================================== Check if system is using latest microcode. ---------------------------------------------------------------------------------------------------- Cannot read microcode file /usr/share/misc/intel-microcode.dat. Aborted test, initialisation failed. ==================================================================================================== MSR register tests. FAILED [MEDIUM] MSRCPUsInconsistent: Test 1, MSR SYSENTER_ESP (0x175) has 1 inconsistent values across 2 CPUs for (shift: 0 mask: 0xffffffffffffffff). MSR CPU 0 -> 0xf7bb9c40 vs CPU 1 -> 0xf7bc7c40 FAILED [MEDIUM] MSRCPUsInconsistent: Test 1, MSR MISC_ENABLE (0x1a0) has 1 inconsistent values across 2 CPUs for (shift: 0 mask: 0x400c51889). MSR CPU 0 -> 0x850088 vs CPU 1 -> 0x850089 ==================================================================================================== Checks firmware has set PCI Express MaxReadReq to a higher value on non-motherboard devices. ---------------------------------------------------------------------------------------------------- Test 1 of 1: Check firmware settings MaxReadReq for PCI Express devices. MaxReadReq for pci://00:00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 03) is low (128) [Audio device]. MaxReadReq for pci://00:02:00.0 Network controller: Intel Corporation PRO/Wireless 5100 AGN [Shiloh] Network Connection is low (128) [Network controller]. FAILED [LOW] LowMaxReadReq: Test 1, 2 devices have low MaxReadReq settings. Firmware may have configured these too low. ADVICE: The MaxReadRequest size is set too low and will affect performance. It will provide excellent bus sharing at the cost of bus data transfer rates. Although not a critical issue, it may be worth considering setting the MaxReadRequest size to 256 or 512 to increase throughput on the PCI Express bus. Some drivers (for example the Brocade Fibre Channel driver) allow one to override the firmware settings. Where possible, this BIOS configuration setting is worth increasing it a little more for better performance at a small reduction of bus sharing. ==================================================================================================== PCIe ASPM check. 
---------------------------------------------------------------------------------------------------- Test 1 of 2: PCIe ASPM ACPI test. PCIE ASPM is not controlled by Linux kernel. ADVICE: BIOS reports that Linux kernel should not modify ASPM settings that BIOS configured. It can be intentional because hardware vendors identified some capability bugs between the motherboard and the add-on cards. Test 2 of 2: PCIe ASPM registers test. WARNING: Test 2, RP 00h:1Ch.01h L0s not enabled. WARNING: Test 2, RP 00h:1Ch.01h L1 not enabled. WARNING: Test 2, Device 02h:00h.00h L0s not enabled. WARNING: Test 2, Device 02h:00h.00h L1 not enabled. PASSED: Test 2, PCIE aspm setting matched was matched. WARNING: Test 2, RP 00h:1Ch.05h L0s not enabled. WARNING: Test 2, RP 00h:1Ch.05h L1 not enabled. WARNING: Test 2, Device 85h:00h.00h L0s not enabled. WARNING: Test 2, Device 85h:00h.00h L1 not enabled. PASSED: Test 2, PCIE aspm setting matched was matched. ==================================================================================================== Extract and analyse Windows Management Instrumentation (WMI). Test 1 of 2: Check Windows Management Instrumentation in DSDT Found WMI Method WMAA with GUID: 5FB7F034-2C63-45E9-BE91-3D44E2C707E4, Instance 0x01 Found WMI Event, Notifier ID: 0x80, GUID: 95F24279-4D7B-4334-9387-ACCDC67EF61C, Instance 0x01 PASSED: Test 1, GUID 95F24279-4D7B-4334-9387-ACCDC67EF61C is handled by driver hp-wmi (Vendor: HP). Found WMI Event, Notifier ID: 0xa0, GUID: 2B814318-4BE8-4707-9D84-A190A859B5D0, Instance 0x01 FAILED [MEDIUM] WMIUnknownGUID: Test 1, GUID 2B814318-4BE8-4707-9D84-A190A859B5D0 is unknown to the kernel, a driver may need to be implemented for this GUID. ADVICE: A WMI driver probably needs to be written for this event. It can checked for using: wmi_has_guid("2B814318-4BE8-4707-9D84-A190A859B5D0"). One can install a notify handler using wmi_install_notify_handler("2B814318-4BE8-4707-9D84-A190A859B5D0", handler, NULL). http://lwn.net/Articles/391230 describes how to write an appropriate driver. Found WMI Object, Object ID AB, GUID: 05901221-D566-11D1-B2F0-00A0C9062910, Instance 0x01, Flags: 00 Found WMI Method WMBA with GUID: 1F4C91EB-DC5C-460B-951D-C7CB9B4B8D5E, Instance 0x01 Found WMI Object, Object ID BC, GUID: 2D114B49-2DFB-4130-B8FE-4A3C09E75133, Instance 0x7f, Flags: 00 Found WMI Object, Object ID BD, GUID: 988D08E3-68F4-4C35-AF3E-6A1B8106F83C, Instance 0x19, Flags: 00 Found WMI Object, Object ID BE, GUID: 14EA9746-CE1F-4098-A0E0-7045CB4DA745, Instance 0x01, Flags: 00 Found WMI Object, Object ID BF, GUID: 322F2028-0F84-4901-988E-015176049E2D, Instance 0x01, Flags: 00 Found WMI Object, Object ID BG, GUID: 8232DE3D-663D-4327-A8F4-E293ADB9BF05, Instance 0x01, Flags: 00 Found WMI Object, Object ID BH, GUID: 8F1F6436-9F42-42C8-BADC-0E9424F20C9A, Instance 0x00, Flags: 00 Found WMI Object, Object ID BI, GUID: 8F1F6435-9F42-42C8-BADC-0E9424F20C9A, Instance 0x00, Flags: 00 Found WMI Method WMAC with GUID: 7391A661-223A-47DB-A77A-7BE84C60822D, Instance 0x01 Found WMI Object, Object ID BJ, GUID: DF4E63B6-3BBC-4858-9737-C74F82F821F3, Instance 0x05, Flags: 00 ==================================================================================================== Disassemble DSDT to check for _OSI("Linux"). ---------------------------------------------------------------------------------------------------- Test 1 of 1: Disassemble DSDT to check for _OSI("Linux"). 
This is not strictly a failure mode, it just alerts one that this has been defined in the DSDT and probably should be avoided since the Linux ACPI driver matches onto the Windows _OSI strings { If (_OSI ("Linux")) { Store (0x03E8, OSYS) } If (_OSI ("Windows 2001")) { Store (0x07D1, OSYS) } If (_OSI ("Windows 2001 SP1")) { Store (0x07D1, OSYS) } If (_OSI ("Windows 2001 SP2")) { Store (0x07D2, OSYS) } If (_OSI ("Windows 2006")) { Store (0x07D6, OSYS) } If (LAnd (MPEN, LEqual (OSYS, 0x07D1))) { TRAP (0x01, 0x48) } TRAP (0x03, 0x35) } WARNING: Test 1, DSDT implements a deprecated _OSI("Linux") test. ==================================================================================================== 0 passed, 0 failed, 1 warnings, 0 aborted, 0 skipped, 0 info only. ==================================================================================================== ACPI DSDT Method Semantic Tests. ACPICA Exception AE_AML_INFINITE_LOOP during execution of method COMP Failed to install global event handler. Test 22 of 93: Check _PSR (Power Source). ACPICA Exception AE_AML_INFINITE_LOOP during execution of method COMP WARNING: Test 22, Detected an infinite loop when evaluating method '\_SB_.AC__._PSR'. ADVICE: This may occur because we are emulating the execution in this test environment and cannot handshake with the embedded controller or jump to the BIOS via SMIs. However, the fact that AML code spins forever means that lockup conditions are not being checked for in the AML bytecode. PASSED: Test 22, \_SB_.AC__._PSR correctly acquired and released locks 16 times. Test 35 of 93: Check _TMP (Thermal Zone Current Temp). ACPICA Exception AE_AML_INFINITE_LOOP during execution of method COMP WARNING: Test 35, Detected an infinite loop when evaluating method '\_TZ_.DTSZ._TMP'. ADVICE: This may occur because we are emulating the execution in this test environment and cannot handshake with the embedded controller or jump to the BIOS via SMIs. However, the fact that AML code spins forever means that lockup conditions are not being checked for in the AML bytecode. PASSED: Test 35, \_TZ_.DTSZ._TMP correctly acquired and released locks 14 times. ACPICA Exception AE_AML_INFINITE_LOOP during execution of method COMP WARNING: Test 35, Detected an infinite loop when evaluating method '\_TZ_.CPUZ._TMP'. ADVICE: This may occur because we are emulating the execution in this test environment and cannot handshake with the embedded controller or jump to the BIOS via SMIs. However, the fact that AML code spins forever means that lockup conditions are not being checked for in the AML bytecode. PASSED: Test 35, \_TZ_.CPUZ._TMP correctly acquired and released locks 10 times. ACPICA Exception AE_AML_INFINITE_LOOP during execution of method COMP WARNING: Test 35, Detected an infinite loop when evaluating method '\_TZ_.SKNZ._TMP'. ADVICE: This may occur because we are emulating the execution in this test environment and cannot handshake with the embedded controller or jump to the BIOS via SMIs. However, the fact that AML code spins forever means that lockup conditions are not being checked for in the AML bytecode. PASSED: Test 35, \_TZ_.SKNZ._TMP correctly acquired and released locks 10 times. PASSED: Test 35, _TMP correctly returned sane looking value 0x00000b4c (289.2 degrees K) PASSED: Test 35, \_TZ_.BATZ._TMP correctly acquired and released locks 9 times. PASSED: Test 35, _TMP correctly returned sane looking value 0x00000aac (273.2 degrees K) PASSED: Test 35, \_TZ_.FDTZ._TMP correctly acquired and released locks 7 times. 
Test 46 of 93: Check _DIS (Disable). FAILED [MEDIUM] MethodShouldReturnNothing: Test 46, \_SB_.PCI0.LPCB.SIO_.COM1._DIS returned values, but was expected to return nothing. Object returned: INTEGER: 0x00000000 ADVICE: This probably won't cause any errors, but it should be fixed as the AML code is not conforming to the expected behaviour as described in the ACPI specification. FAILED [MEDIUM] MethodShouldReturnNothing: Test 46, \_SB_.PCI0.LPCB.SIO_.LPT0._DIS returned values, but was expected to return nothing. Object returned: INTEGER: 0x00000000 ADVICE: This probably won't cause any errors, but it should be fixed as the AML code is not conforming to the expected behaviour as described in the ACPI specification. Test 61 of 93: Check _WAK (System Wake). Test _WAK(1) System Wake, State S1. ACPICA Exception AE_AML_INFINITE_LOOP during execution of method COMP WARNING: Test 61, Detected an infinite loop when evaluating method '\_WAK'. ADVICE: This may occur because we are emulating the execution in this test environment and cannot handshake with the embedded controller or jump to the BIOS via SMIs. However, the fact that AML code spins forever means that lockup conditions are not being checked for in the AML bytecode. Test _WAK(2) System Wake, State S2. ACPICA Exception AE_AML_INFINITE_LOOP during execution of method COMP WARNING: Test 61, Detected an infinite loop when evaluating method '\_WAK'. ADVICE: This may occur because we are emulating the execution in this test environment and cannot handshake with the embedded controller or jump to the BIOS via SMIs. However, the fact that AML code spins forever means that lockup conditions are not being checked for in the AML bytecode. Test _WAK(3) System Wake, State S3. ACPICA Exception AE_AML_INFINITE_LOOP during execution of method COMP WARNING: Test 61, Detected an infinite loop when evaluating method '\_WAK'. ADVICE: This may occur because we are emulating the execution in this test environment and cannot handshake with the embedded controller or jump to the BIOS via SMIs. However, the fact that AML code spins forever means that lockup conditions are not being checked for in the AML bytecode. Test _WAK(4) System Wake, State S4. ACPICA Exception AE_AML_INFINITE_LOOP during execution of method COMP WARNING: Test 61, Detected an infinite loop when evaluating method '\_WAK'. ADVICE: This may occur because we are emulating the execution in this test environment and cannot handshake with the embedded controller or jump to the BIOS via SMIs. However, the fact that AML code spins forever means that lockup conditions are not being checked for in the AML bytecode. Test _WAK(5) System Wake, State S5. ACPICA Exception AE_AML_INFINITE_LOOP during execution of method COMP WARNING: Test 61, Detected an infinite loop when evaluating method '\_WAK'. ADVICE: This may occur because we are emulating the execution in this test environment and cannot handshake with the embedded controller or jump to the BIOS via SMIs. However, the fact that AML code spins forever means that lockup conditions are not being checked for in the AML bytecode. Test 87 of 93: Check _BCL (Query List of Brightness Control Levels Supported). Package has 2 elements: 00: INTEGER: 0x00000000 01: INTEGER: 0x00000000 FAILED [MEDIUM] Method_BCLElementCount: Test 87, Method _BCL should return a package of more than 2 integers, got just 2. Test 88 of 93: Check _BCM (Set Brightness Level). 
ACPICA Exception AE_AML_PACKAGE_LIMIT during execution of method _BCM FAILED [CRITICAL] AEAMLPackgeLimit: Test 88, Detected error 'Package limit' when evaluating '\_SB_.PCI0.GFX0.DD02._BCM'. ==================================================================================================== ACPI table settings sanity checks. ---------------------------------------------------------------------------------------------------- Test 1 of 1: Check ACPI tables. PASSED: Test 1, Table APIC passed. Table ECDT not present to check. FAILED [MEDIUM] FADT32And64BothDefined: Test 1, FADT 32 bit FIRMWARE_CONTROL is non-zero, and X_FIRMWARE_CONTROL is also non-zero. Section 5.2.9 of the ACPI specification states that if the FIRMWARE_CONTROL is non-zero then X_FIRMWARE_CONTROL must be set to zero. ADVICE: The FADT FIRMWARE_CTRL is a 32 bit pointer that points to the physical memory address of the Firmware ACPI Control Structure (FACS). There is also an extended 64 bit version of this, the X_FIRMWARE_CTRL pointer that also can point to the FACS. Section 5.2.9 of the ACPI specification states that if the X_FIRMWARE_CTRL field contains a non zero value then the FIRMWARE_CTRL field *must* be zero. This error is also detected by the Linux kernel. If FIRMWARE_CTRL and X_FIRMWARE_CTRL are defined, then the kernel just uses the 64 bit version of the pointer. PASSED: Test 1, Table HPET passed. PASSED: Test 1, Table MCFG passed. PASSED: Test 1, Table RSDT passed. PASSED: Test 1, Table RSDP passed. Table SBST not present to check. PASSED: Test 1, Table XSDT passed. ==================================================================================================== Re-assemble DSDT and find syntax errors and warnings. ---------------------------------------------------------------------------------------------------- Test 1 of 2: Disassemble and reassemble DSDT FAILED [HIGH] AMLAssemblerError4043: Test 1, Assembler error in line 2261 Line | AML source ---------------------------------------------------------------------------------------------------- 02258| 0x00000000, // Range Minimum 02259| 0xFEDFFFFF, // Range Maximum 02260| 0x00000000, // Translation Offset 02261| 0x00000000, // Length | ^ | error 4043: Invalid combination of Length and Min/Max fixed flags 02262| ,, _Y0E, AddressRangeMemory, TypeStatic) 02263| DWordMemory (ResourceProducer, PosDecode, MinFixed, MaxFixed, Cacheable, ReadWrite, 02264| 0x00000000, // Granularity ==================================================================================================== ADVICE: (for error #4043): This occurs if the length is zero and just one of the resource MIF/MAF flags are set, or the length is non-zero and resource MIF/MAF flags are both set. These are illegal combinations and need to be fixed. See section 6.4.3.5 Address Space Resource Descriptors of version 4.0a of the ACPI specification for more details. 
FAILED [HIGH] AMLAssemblerError4050: Test 1, Assembler error in line 2268 Line | AML source ---------------------------------------------------------------------------------------------------- 02265| 0xFEE01000, // Range Minimum 02266| 0xFFFFFFFF, // Range Maximum 02267| 0x00000000, // Translation Offset 02268| 0x011FEFFF, // Length | ^ | error 4050: Length is not equal to fixed Min/Max window 02269| ,, , AddressRangeMemory, TypeStatic) 02270| }) 02271| Method (_CRS, 0, Serialized) ==================================================================================================== ADVICE: (for error #4050): The minimum address is greater than the maximum address. This is illegal. FAILED [HIGH] AMLAssemblerError1104: Test 1, Assembler error in line 8885 Line | AML source ---------------------------------------------------------------------------------------------------- 08882| Method (_DIS, 0, NotSerialized) 08883| { 08884| DSOD (0x02) 08885| Return (0x00) | ^ | warning level 0 1104: Reserved method should not return a value (_DIS) 08886| } 08887| 08888| Method (_SRS, 1, NotSerialized) ==================================================================================================== FAILED [HIGH] AMLAssemblerError1104: Test 1, Assembler error in line 9195 Line | AML source ---------------------------------------------------------------------------------------------------- 09192| Method (_DIS, 0, NotSerialized) 09193| { 09194| DSOD (0x01) 09195| Return (0x00) | ^ | warning level 0 1104: Reserved method should not return a value (_DIS) 09196| } 09197| 09198| Method (_SRS, 1, NotSerialized) ==================================================================================================== FAILED [HIGH] AMLAssemblerError1127: Test 1, Assembler error in line 9242 Line | AML source ---------------------------------------------------------------------------------------------------- 09239| CreateWordField (CRES, \_SB.PCI0.LPCB.SIO.LPT0._CRS._Y21._MAX, MAX2) 09240| CreateByteField (CRES, \_SB.PCI0.LPCB.SIO.LPT0._CRS._Y21._LEN, LEN2) 09241| CreateWordField (CRES, \_SB.PCI0.LPCB.SIO.LPT0._CRS._Y22._INT, IRQ0) 09242| CreateWordField (CRES, \_SB.PCI0.LPCB.SIO.LPT0._CRS._Y23._DMA, DMA0) | ^ | warning level 0 1127: ResourceTag smaller than Field (Tag: 8 bits, Field: 16 bits) 09243| If (RLPD) 09244| { 09245| Store (0x00, Local0) ==================================================================================================== FAILED [HIGH] AMLAssemblerError1128: Test 1, Assembler error in line 18682 Line | AML source ---------------------------------------------------------------------------------------------------- 18679| Store (0x01, Index (DerefOf (Index (Local0, 0x02)), 0x01)) 18680| If (And (WDPE, 0x40)) 18681| { 18682| Wait (\_SB.BEVT, 0x10) | ^ | warning level 0 1128: Result is not used, possible operator timeout will be missed 18683| } 18684| 18685| Store (BRID, Index (DerefOf (Index (Local0, 0x02)), 0x02)) ==================================================================================================== ADVICE: (for warning level 0 #1128): The operation can possibly timeout, and hence the return value indicates an timeout error. However, because the return value is not checked this very probably indicates that the code is buggy. A possible scenario is that a mutex times out and the code attempts to access data in a critical region when it should not. This will lead to undefined behaviour. This should be fixed. Table DSDT (0) reassembly: Found 2 errors, 4 warnings. 
Test 2 of 2: Disassemble and reassemble SSDT PASSED: Test 2, SSDT (0) reassembly, Found 0 errors, 0 warnings. FAILED [HIGH] AMLAssemblerError1104: Test 2, Assembler error in line 60 Line | AML source ---------------------------------------------------------------------------------------------------- 00057| { 00058| Store (CPDC (Arg0), Local0) 00059| GCAP (Local0) 00060| Return (Local0) | ^ | warning level 0 1104: Reserved method should not return a value (_PDC) 00061| } 00062| 00063| Method (_OSC, 4, NotSerialized) ==================================================================================================== FAILED [HIGH] AMLAssemblerError1104: Test 2, Assembler error in line 174 Line | AML source ---------------------------------------------------------------------------------------------------- 00171| { 00172| Store (\_PR.CPU0.CPDC (Arg0), Local0) 00173| GCAP (Local0) 00174| Return (Local0) | ^ | warning level 0 1104: Reserved method should not return a value (_PDC) 00175| } 00176| 00177| Method (_OSC, 4, NotSerialized) ==================================================================================================== FAILED [HIGH] AMLAssemblerError1104: Test 2, Assembler error in line 244 Line | AML source ---------------------------------------------------------------------------------------------------- 00241| { 00242| Store (\_PR.CPU0.CPDC (Arg0), Local0) 00243| GCAP (Local0) 00244| Return (Local0) | ^ | warning level 0 1104: Reserved method should not return a value (_PDC) 00245| } 00246| 00247| Method (_OSC, 4, NotSerialized) ==================================================================================================== FAILED [HIGH] AMLAssemblerError1104: Test 2, Assembler error in line 290 Line | AML source ---------------------------------------------------------------------------------------------------- 00287| { 00288| Store (\_PR.CPU0.CPDC (Arg0), Local0) 00289| GCAP (Local0) 00290| Return (Local0) | ^ | warning level 0 1104: Reserved method should not return a value (_PDC) 00291| } 00292| 00293| Method (_OSC, 4, NotSerialized) ==================================================================================================== Table SSDT (1) reassembly: Found 0 errors, 4 warnings. PASSED: Test 2, SSDT (2) reassembly, Found 0 errors, 0 warnings. PASSED: Test 2, SSDT (3) reassembly, Found 0 errors, 0 warnings. ==================================================================================================== 3 passed, 10 failed, 0 warnings, 0 aborted, 0 skipped, 0 info only. ==================================================================================================== Critical failures: 1 method test, at 1 log line: 1449: Detected error 'Package limit' when evaluating '\_SB_.PCI0.GFX0.DD02._BCM'. 
High failures: 11 klog test, at 1 log line: 121: HIGH Kernel message: [ 3.512783] ACPI Error: Method parse/execution failed [\_SB_.PCI0.GFX0._DOD] (Node f7425858), AE_AML_PACKAGE_LIMIT (20110623/psparse-536) syntaxcheck test, at 1 log line: 1668: Assembler error in line 2261 syntaxcheck test, at 1 log line: 1687: Assembler error in line 2268 syntaxcheck test, at 1 log line: 1703: Assembler error in line 8885 syntaxcheck test, at 1 log line: 1716: Assembler error in line 9195 syntaxcheck test, at 1 log line: 1729: Assembler error in line 9242 syntaxcheck test, at 1 log line: 1742: Assembler error in line 18682 syntaxcheck test, at 1 log line: 1766: Assembler error in line 60 syntaxcheck test, at 1 log line: 1779: Assembler error in line 174 syntaxcheck test, at 1 log line: 1792: Assembler error in line 244 syntaxcheck test, at 1 log line: 1805: Assembler error in line 290 Medium failures: 9 mtrr test, at 1 log line: 76: Memory range 0xc0000000 to 0xdfffffff (PCI Bus 0000:00) has incorrect attribute Write-Combining. mtrr test, at 1 log line: 78: Memory range 0xfee01000 to 0xffffffff (PCI Bus 0000:00) has incorrect attribute Write-Protect. msr test, at 1 log line: 165: MSR SYSENTER_ESP (0x175) has 1 inconsistent values across 2 CPUs for (shift: 0 mask: 0xffffffffffffffff). msr test, at 1 log line: 173: MSR MISC_ENABLE (0x1a0) has 1 inconsistent values across 2 CPUs for (shift: 0 mask: 0x400c51889). wmi test, at 1 log line: 528: GUID 2B814318-4BE8-4707-9D84-A190A859B5D0 is unknown to the kernel, a driver may need to be implemented for this GUID. method test, at 1 log line: 1002: \_SB_.PCI0.LPCB.SIO_.COM1._DIS returned values, but was expected to return nothing. method test, at 1 log line: 1011: \_SB_.PCI0.LPCB.SIO_.LPT0._DIS returned values, but was expected to return nothing. method test, at 1 log line: 1443: Method _BCL should return a package of more than 2 integers, got just 2. acpitables test, at 1 log line: 1643: FADT 32 bit FIRMWARE_CONTROL is non-zero, and X_FIRMWARE_CONTROL is also non-zero. Se
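For reference, a results.log like the one above can generally be regenerated by pointing fwts at the individual tests named in its failure summary. A minimal sketch, assuming the standard fwts command line (the test names below are taken from the summary above; option spellings can vary between fwts releases, so treat this as an illustration rather than an exact recipe):

    # run only the tests that show up in the summary; fwts writes its results to results.log in the current directory by default
    sudo fwts mtrr klog msr wmi method syntaxcheck acpitables

    # or simply run the whole default batch of firmware tests
    sudo fwts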

    Read the article

  • Virtual host is not working in Ubuntu 14 VPS using XAMPP 1.8.3

    - by viral4ever
    I am using XAMPP as server in ubuntu 14.04 VPS of digitalocean. I tried to setup virtual hosts. But it is not working and I am getting 403 error of access denied. I changed files too. My files with changes are /opt/lampp/etc/httpd.conf # # This is the main Apache HTTP server configuration file. It contains the # configuration directives that give the server its instructions. # See <URL:http://httpd.apache.org/docs/trunk/> for detailed information. # In particular, see # <URL:http://httpd.apache.org/docs/trunk/mod/directives.html> # for a discussion of each configuration directive. # # Do NOT simply read the instructions in here without understanding # what they do. They're here only as hints or reminders. If you are unsure # consult the online docs. You have been warned. # # Configuration and logfile names: If the filenames you specify for many # of the server's control files begin with "/" (or "drive:/" for Win32), the # server will use that explicit path. If the filenames do *not* begin # with "/", the value of ServerRoot is prepended -- so 'log/access_log' # with ServerRoot set to '/www' will be interpreted by the # server as '/www/log/access_log', where as '/log/access_log' will be # interpreted as '/log/access_log'. # # ServerRoot: The top of the directory tree under which the server's # configuration, error, and log files are kept. # # Do not add a slash at the end of the directory path. If you point # ServerRoot at a non-local disk, be sure to specify a local disk on the # Mutex directive, if file-based mutexes are used. If you wish to share the # same ServerRoot for multiple httpd daemons, you will need to change at # least PidFile. # ServerRoot "/opt/lampp" # # Mutex: Allows you to set the mutex mechanism and mutex file directory # for individual mutexes, or change the global defaults # # Uncomment and change the directory if mutexes are file-based and the default # mutex file directory is not on a local disk or is not appropriate for some # other reason. # # Mutex default:logs # # Listen: Allows you to bind Apache to specific IP addresses and/or # ports, instead of the default. See also the <VirtualHost> # directive. # # Change this to Listen on specific IP addresses as shown below to # prevent Apache from glomming onto all bound IP addresses. # #Listen 12.34.56.78:80 Listen 80 # # Dynamic Shared Object (DSO) Support # # To be able to use the functionality of a module which was built as a DSO you # have to place corresponding `LoadModule' lines at this location so the # directives contained in it are actually available _before_ they are used. # Statically compiled modules (those listed by `httpd -l') do not need # to be loaded here. 
# # Example: # LoadModule foo_module modules/mod_foo.so # LoadModule authn_file_module modules/mod_authn_file.so LoadModule authn_dbm_module modules/mod_authn_dbm.so LoadModule authn_anon_module modules/mod_authn_anon.so LoadModule authn_dbd_module modules/mod_authn_dbd.so LoadModule authn_socache_module modules/mod_authn_socache.so LoadModule authn_core_module modules/mod_authn_core.so LoadModule authz_host_module modules/mod_authz_host.so LoadModule authz_groupfile_module modules/mod_authz_groupfile.so LoadModule authz_user_module modules/mod_authz_user.so LoadModule authz_dbm_module modules/mod_authz_dbm.so LoadModule authz_owner_module modules/mod_authz_owner.so LoadModule authz_dbd_module modules/mod_authz_dbd.so LoadModule authz_core_module modules/mod_authz_core.so LoadModule authnz_ldap_module modules/mod_authnz_ldap.so LoadModule access_compat_module modules/mod_access_compat.so LoadModule auth_basic_module modules/mod_auth_basic.so LoadModule auth_form_module modules/mod_auth_form.so LoadModule auth_digest_module modules/mod_auth_digest.so LoadModule allowmethods_module modules/mod_allowmethods.so LoadModule file_cache_module modules/mod_file_cache.so LoadModule cache_module modules/mod_cache.so LoadModule cache_disk_module modules/mod_cache_disk.so LoadModule socache_shmcb_module modules/mod_socache_shmcb.so LoadModule socache_dbm_module modules/mod_socache_dbm.so LoadModule socache_memcache_module modules/mod_socache_memcache.so LoadModule dbd_module modules/mod_dbd.so LoadModule bucketeer_module modules/mod_bucketeer.so LoadModule dumpio_module modules/mod_dumpio.so LoadModule echo_module modules/mod_echo.so LoadModule case_filter_module modules/mod_case_filter.so LoadModule case_filter_in_module modules/mod_case_filter_in.so LoadModule buffer_module modules/mod_buffer.so LoadModule ratelimit_module modules/mod_ratelimit.so LoadModule reqtimeout_module modules/mod_reqtimeout.so LoadModule ext_filter_module modules/mod_ext_filter.so LoadModule request_module modules/mod_request.so LoadModule include_module modules/mod_include.so LoadModule filter_module modules/mod_filter.so LoadModule substitute_module modules/mod_substitute.so LoadModule sed_module modules/mod_sed.so LoadModule charset_lite_module modules/mod_charset_lite.so LoadModule deflate_module modules/mod_deflate.so LoadModule mime_module modules/mod_mime.so LoadModule ldap_module modules/mod_ldap.so LoadModule log_config_module modules/mod_log_config.so LoadModule log_debug_module modules/mod_log_debug.so LoadModule logio_module modules/mod_logio.so LoadModule env_module modules/mod_env.so LoadModule mime_magic_module modules/mod_mime_magic.so LoadModule cern_meta_module modules/mod_cern_meta.so LoadModule expires_module modules/mod_expires.so LoadModule headers_module modules/mod_headers.so LoadModule usertrack_module modules/mod_usertrack.so LoadModule unique_id_module modules/mod_unique_id.so LoadModule setenvif_module modules/mod_setenvif.so LoadModule version_module modules/mod_version.so LoadModule remoteip_module modules/mod_remoteip.so LoadModule proxy_module modules/mod_proxy.so LoadModule proxy_connect_module modules/mod_proxy_connect.so LoadModule proxy_ftp_module modules/mod_proxy_ftp.so LoadModule proxy_http_module modules/mod_proxy_http.so LoadModule proxy_fcgi_module modules/mod_proxy_fcgi.so LoadModule proxy_scgi_module modules/mod_proxy_scgi.so LoadModule proxy_ajp_module modules/mod_proxy_ajp.so LoadModule proxy_balancer_module modules/mod_proxy_balancer.so LoadModule proxy_express_module 
modules/mod_proxy_express.so LoadModule session_module modules/mod_session.so LoadModule session_cookie_module modules/mod_session_cookie.so LoadModule session_dbd_module modules/mod_session_dbd.so LoadModule slotmem_shm_module modules/mod_slotmem_shm.so LoadModule ssl_module modules/mod_ssl.so LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so LoadModule lbmethod_bytraffic_module modules/mod_lbmethod_bytraffic.so LoadModule lbmethod_bybusyness_module modules/mod_lbmethod_bybusyness.so LoadModule lbmethod_heartbeat_module modules/mod_lbmethod_heartbeat.so LoadModule unixd_module modules/mod_unixd.so LoadModule dav_module modules/mod_dav.so LoadModule status_module modules/mod_status.so LoadModule autoindex_module modules/mod_autoindex.so LoadModule info_module modules/mod_info.so LoadModule suexec_module modules/mod_suexec.so LoadModule cgi_module modules/mod_cgi.so LoadModule cgid_module modules/mod_cgid.so LoadModule dav_fs_module modules/mod_dav_fs.so LoadModule vhost_alias_module modules/mod_vhost_alias.so LoadModule negotiation_module modules/mod_negotiation.so LoadModule dir_module modules/mod_dir.so LoadModule actions_module modules/mod_actions.so LoadModule speling_module modules/mod_speling.so LoadModule userdir_module modules/mod_userdir.so LoadModule alias_module modules/mod_alias.so LoadModule rewrite_module modules/mod_rewrite.so <IfDefine JUSTTOMAKEAPXSHAPPY> LoadModule php4_module modules/libphp4.so LoadModule php5_module modules/libphp5.so </IfDefine> <IfModule unixd_module> # # If you wish httpd to run as a different user or group, you must run # httpd as root initially and it will switch. # # User/Group: The name (or #number) of the user/group to run httpd as. # It is usually good practice to create a dedicated user and group for # running httpd, as with most system services. # User root Group www </IfModule> # 'Main' server configuration # # The directives in this section set up the values used by the 'main' # server, which responds to any requests that aren't handled by a # <VirtualHost> definition. These values also provide defaults for # any <VirtualHost> containers you may define later in the file. # # All of these directives may appear inside <VirtualHost> containers, # in which case these default settings will be overridden for the # virtual host being defined. # # # ServerAdmin: Your address, where problems with the server should be # e-mailed. This address appears on some server-generated pages, such # as error documents. e.g. [email protected] # ServerAdmin [email protected] # # ServerName gives the name and port that the server uses to identify itself. # This can often be determined automatically, but we recommend you specify # it explicitly to prevent problems during startup. # # If your host doesn't have a registered DNS name, enter its IP address here. # #ServerName www.example.com:@@Port@@ # XAMPP ServerName localhost # # Deny access to the entirety of your server's filesystem. You must # explicitly permit access to web content directories in other # <Directory> blocks below. # <Directory /> AllowOverride none Require all denied </Directory> # # Note that from this point forward you must specifically allow # particular features to be enabled - so if something's not working as # you might expect, make sure that you have specifically enabled it # below. # # # DocumentRoot: The directory out of which you will serve your # documents. 
By default, all requests are taken from this directory, but # symbolic links and aliases may be used to point to other locations. # DocumentRoot "/opt/lampp/htdocs" <Directory "/opt/lampp/htdocs"> # # Possible values for the Options directive are "None", "All", # or any combination of: # Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews # # Note that "MultiViews" must be named *explicitly* --- "Options All" # doesn't give it to you. # # The Options directive is both complicated and important. Please see # http://httpd.apache.org/docs/trunk/mod/core.html#options # for more information. # #Options Indexes FollowSymLinks # XAMPP Options Indexes FollowSymLinks ExecCGI Includes # # AllowOverride controls what directives may be placed in .htaccess files. # It can be "All", "None", or any combination of the keywords: # Options FileInfo AuthConfig Limit # #AllowOverride None # since XAMPP 1.4: AllowOverride All # # Controls who can get stuff from this server. # Require all granted </Directory> # # DirectoryIndex: sets the file that Apache will serve if a directory # is requested. # <IfModule dir_module> #DirectoryIndex index.html # XAMPP DirectoryIndex index.html index.html.var index.php index.php3 index.php4 </IfModule> # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. # <Files ".ht*"> Require all denied </Files> # # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. # ErrorLog "logs/error_log" # # LogLevel: Control the number of messages logged to the error_log. # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. # LogLevel warn <IfModule log_config_module> # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common <IfModule logio_module> # You need to enable mod_logio.c to use %I and %O LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio </IfModule> # # The location and format of the access logfile (Common Logfile Format). # If you do not define any access logfiles within a <VirtualHost> # container, they will be logged here. Contrariwise, if you *do* # define per-<VirtualHost> access logfiles, transactions will be # logged therein and *not* in this file. # CustomLog "logs/access_log" common # # If you prefer a logfile with access, agent, and referer information # (Combined Logfile Format) you can use the following directive. # #CustomLog "logs/access_log" combined </IfModule> <IfModule alias_module> # # Redirect: Allows you to tell clients about documents that used to # exist in your server's namespace, but do not anymore. The client # will make a new request for the document at its new location. # Example: # Redirect permanent /foo http://www.example.com/bar # # Alias: Maps web paths into filesystem paths and is used to # access content that does not live under the DocumentRoot. # Example: # Alias /webpath /full/filesystem/path # # If you include a trailing / on /webpath then the server will # require it to be present in the URL. 
You will also likely # need to provide a <Directory> section to allow access to # the filesystem path. # # ScriptAlias: This controls which directories contain server scripts. # ScriptAliases are essentially the same as Aliases, except that # documents in the target directory are treated as applications and # run by the server when requested rather than as documents sent to the # client. The same rules about trailing "/" apply to ScriptAlias # directives as to Alias. # ScriptAlias /cgi-bin/ "/opt/lampp/cgi-bin/" </IfModule> <IfModule cgid_module> # # ScriptSock: On threaded servers, designate the path to the UNIX # socket used to communicate with the CGI daemon of mod_cgid. # #Scriptsock logs/cgisock </IfModule> # # "/opt/lampp/cgi-bin" should be changed to whatever your ScriptAliased # CGI directory exists, if you have that configured. # <Directory "/opt/lampp/cgi-bin"> AllowOverride None Options None Require all granted </Directory> <IfModule mime_module> # # TypesConfig points to the file containing the list of mappings from # filename extension to MIME-type. # TypesConfig etc/mime.types # # AddType allows you to add to or override the MIME configuration # file specified in TypesConfig for specific file types. # #AddType application/x-gzip .tgz # # AddEncoding allows you to have certain browsers uncompress # information on the fly. Note: Not all browsers support this. # #AddEncoding x-compress .Z #AddEncoding x-gzip .gz .tgz # # If the AddEncoding directives above are commented-out, then you # probably should define those extensions to indicate media types: # AddType application/x-compress .Z AddType application/x-gzip .gz .tgz # # AddHandler allows you to map certain file extensions to "handlers": # actions unrelated to filetype. These can be either built into the server # or added with the Action directive (see below) # # To use CGI scripts outside of ScriptAliased directories: # (You will also need to add "ExecCGI" to the "Options" directive.) # #AddHandler cgi-script .cgi # XAMPP, since LAMPP 0.9.8: AddHandler cgi-script .cgi .pl # For type maps (negotiated resources): #AddHandler type-map var # # Filters allow you to process content before it is sent to the client. # # To parse .shtml files for server-side includes (SSI): # (You will also need to add "Includes" to the "Options" directive.) # # XAMPP AddType text/html .shtml AddOutputFilter INCLUDES .shtml </IfModule> # # The mod_mime_magic module allows the server to use various hints from the # contents of the file itself to determine its type. The MIMEMagicFile # directive tells the module where the hint definitions are located. # #MIMEMagicFile etc/magic # # Customizable error responses come in three flavors: # 1) plain text 2) local redirects 3) external redirects # # Some examples: #ErrorDocument 500 "The server made a boo boo." #ErrorDocument 404 /missing.html #ErrorDocument 404 "/cgi-bin/missing_handler.pl" #ErrorDocument 402 http://www.example.com/subscription_info.html # # # MaxRanges: Maximum number of Ranges in a request before # returning the entire resource, or one of the special # values 'default', 'none' or 'unlimited'. # Default setting is to accept 200 Ranges. #MaxRanges unlimited # # EnableMMAP and EnableSendfile: On systems that support it, # memory-mapping or the sendfile syscall may be used to deliver # files. This usually improves server performance, but must # be turned off when serving from networked-mounted # filesystems or if support for these functions is otherwise # broken on your system. 
# Defaults: EnableMMAP On, EnableSendfile Off # EnableMMAP off EnableSendfile off # Supplemental configuration # # The configuration files in the etc/extra/ directory can be # included to add extra features or to modify the default configuration of # the server, or you may simply copy their contents here and change as # necessary. # Server-pool management (MPM specific) #Include etc/extra/httpd-mpm.conf # Multi-language error messages Include etc/extra/httpd-multilang-errordoc.conf # Fancy directory listings Include etc/extra/httpd-autoindex.conf # Language settings #Include etc/extra/httpd-languages.conf # User home directories #Include etc/extra/httpd-userdir.conf # Real-time info on requests and configuration #Include etc/extra/httpd-info.conf # Virtual hosts Include etc/extra/httpd-vhosts.conf # Local access to the Apache HTTP Server Manual #Include etc/extra/httpd-manual.conf # Distributed authoring and versioning (WebDAV) #Include etc/extra/httpd-dav.conf # Various default settings Include etc/extra/httpd-default.conf # Configure mod_proxy_html to understand HTML4/XHTML1 <IfModule proxy_html_module> Include etc/extra/proxy-html.conf </IfModule> # Secure (SSL/TLS) connections <IfModule ssl_module> # XAMPP <IfDefine SSL> Include etc/extra/httpd-ssl.conf </IfDefine> </IfModule> # # Note: The following must must be present to support # starting without SSL on platforms with no /dev/random equivalent # but a statically compiled-in mod_ssl. # <IfModule ssl_module> SSLRandomSeed startup builtin SSLRandomSeed connect builtin </IfModule> # XAMPP Include etc/extra/httpd-xampp.conf Include "/opt/lampp/apache2/conf/httpd.conf" I used command shown in this example. I used below lines to change and add group Add group "groupadd www" Add user to group "usermod -aG www root" Change htdocs group "chgrp -R www /opt/lampp/htdocs" Change sitedir group "chgrp -R www /opt/lampp/htdocs/mysite" Change htdocs chmod "chmod 2775 /opt/lampp/htdocs" Change sitedir chmod "chmod 2775 /opt/lampp/htdocs/mysite" And then I changed my vhosts.conf file # Virtual Hosts # # Required modules: mod_log_config # If you want to maintain multiple domains/hostnames on your # machine you can setup VirtualHost containers for them. Most configurations # use only name-based virtual hosts so the server doesn't need to worry about # IP addresses. This is indicated by the asterisks in the directives below. # # Please see the documentation at # <URL:http://httpd.apache.org/docs/2.4/vhosts/> # for further details before you try to setup virtual hosts. # # You may use the command line option '-S' to verify your virtual host # configuration. # # VirtualHost example: # Almost any Apache directive may go into a VirtualHost container. # The first VirtualHost section is used for all requests that do not # match a ServerName or ServerAlias in any <VirtualHost> block. 
# <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot "/opt/lampp/docs/dummy-host.example.com" ServerName dummy-host.example.com ServerAlias www.dummy-host.example.com ErrorLog "logs/dummy-host.example.com-error_log" CustomLog "logs/dummy-host.example.com-access_log" common </VirtualHost> <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot "/opt/lampp/docs/dummy-host2.example.com" ServerName dummy-host2.example.com ErrorLog "logs/dummy-host2.example.com-error_log" CustomLog "logs/dummy-host2.example.com-access_log" common </VirtualHost> NameVirtualHost * <VirtualHost *> ServerAdmin [email protected] DocumentRoot "/opt/lampp/htdocs/mysite" ServerName mysite.com ServerAlias mysite.com ErrorLog "/opt/lampp/htdocs/mysite/errorlogs" CustomLog "/opt/lampp/htdocs/mysite/customlog" common <Directory "/opt/lampp/htdocs/mysite"> Options Indexes FollowSymLinks Includes ExecCGI AllowOverride All Order Allow,Deny Allow from all Require all granted </Directory> </VirtualHost> But it is still not working: I am getting a 403 error on my IP and domain, although I can access phpMyAdmin. If anyone can help me, please help me.
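A short diagnostic sketch that usually narrows a 403 like this one down. The binary and log paths are assumptions based on the stock /opt/lampp layout shown in the configuration above:

    sudo /opt/lampp/bin/httpd -S               # dump how Apache actually parsed the <VirtualHost> blocks
    sudo namei -l /opt/lampp/htdocs/mysite     # every directory on the path needs execute permission for the Apache user
    sudo tail -n 20 /opt/lampp/logs/error_log  # the error log states the precise reason for each 403
    sudo /opt/lampp/lampp restart              # reload after editing httpd-vhosts.conf

One further note: the <Directory> block above mixes 2.2-style access control (Order Allow,Deny / Allow from all) with the 2.4-style Require all granted. The Apache documentation discourages combining the two, so it may be worth keeping only the Require directive and re-testing.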

    Read the article

  • Understanding G1 GC Logs

    - by poonam
    The purpose of this post is to explain the meaning of GC logs generated with some tracing and diagnostic options for G1 GC. We will take a look at the output generated with PrintGCDetails, which is a product flag and provides the most detailed level of information. Along with that, we will also look at the output of two diagnostic flags that get enabled with the -XX:+UnlockDiagnosticVMOptions option: G1PrintRegionLivenessInfo, which prints the occupancy and the amount of space used by live objects in each region at the end of the marking cycle, and G1PrintHeapRegions, which provides detailed information on the heap regions being allocated and reclaimed. We will be looking at the logs generated with JDK 1.7.0_04 using these options. Option -XX:+PrintGCDetails Here's a sample log of a G1 collection generated with PrintGCDetails. 0.522: [GC pause (young), 0.15877971 secs] [Parallel Time: 157.1 ms] [GC Worker Start (ms): 522.1 522.2 522.2 522.2 Avg: 522.2, Min: 522.1, Max: 522.2, Diff: 0.1] [Ext Root Scanning (ms): 1.6 1.5 1.6 1.9 Avg: 1.7, Min: 1.5, Max: 1.9, Diff: 0.4] [Update RS (ms): 38.7 38.8 50.6 37.3 Avg: 41.3, Min: 37.3, Max: 50.6, Diff: 13.3] [Processed Buffers : 2 2 3 2 Sum: 9, Avg: 2, Min: 2, Max: 3, Diff: 1] [Scan RS (ms): 9.9 9.7 0.0 9.7 Avg: 7.3, Min: 0.0, Max: 9.9, Diff: 9.9] [Object Copy (ms): 106.7 106.8 104.6 107.9 Avg: 106.5, Min: 104.6, Max: 107.9, Diff: 3.3] [Termination (ms): 0.0 0.0 0.0 0.0 Avg: 0.0, Min: 0.0, Max: 0.0, Diff: 0.0] [Termination Attempts : 1 4 4 6 Sum: 15, Avg: 3, Min: 1, Max: 6, Diff: 5] [GC Worker End (ms): 679.1 679.1 679.1 679.1 Avg: 679.1, Min: 679.1, Max: 679.1, Diff: 0.1] [GC Worker (ms): 156.9 157.0 156.9 156.9 Avg: 156.9, Min: 156.9, Max: 157.0, Diff: 0.1] [GC Worker Other (ms): 0.3 0.3 0.3 0.3 Avg: 0.3, Min: 0.3, Max: 0.3, Diff: 0.0] [Clear CT: 0.1 ms] [Other: 1.5 ms] [Choose CSet: 0.0 ms] [Ref Proc: 0.3 ms] [Ref Enq: 0.0 ms] [Free CSet: 0.3 ms] [Eden: 12M(12M)->0B(13M) Survivors: 0B->2048K Heap: 14M(64M)->9739K(64M)] [Times: user=0.59 sys=0.02, real=0.16 secs] This is the typical log of an Evacuation Pause (G1 collection) in which live objects are copied from one set of regions (young OR young+old) to another set. It is a stop-the-world activity and all the application threads are stopped at a safepoint during this time. This pause is made up of several sub-tasks indicated by the indentation in the log entries. Here is the topmost line that gets printed for the Evacuation Pause. 0.522: [GC pause (young), 0.15877971 secs] This is the highest level information, telling us that it is an Evacuation Pause that started at 0.522 secs from the start of the process, in which all the regions being evacuated are young, i.e. Eden and Survivor regions. This collection took 0.15877971 secs to finish. Evacuation Pauses can be mixed as well, in which case the set of regions selected includes all of the young regions as well as some old regions. 1.730: [GC pause (mixed), 0.32714353 secs] Let's take a look at all the sub-tasks performed in this Evacuation Pause. [Parallel Time: 157.1 ms] Parallel Time is the total elapsed time spent by all the parallel GC worker threads. The following lines correspond to the parallel tasks performed by these worker threads in this total parallel time, which in this case is 157.1 ms. [GC Worker Start (ms): 522.1 522.2 522.2 522.2 Avg: 522.2, Min: 522.1, Max: 522.2, Diff: 0.1] The first line tells us the start time of each of the worker threads in milliseconds. 
The start times are ordered with respect to the worker thread ids – thread 0 started at 522.1ms and thread 1 started at 522.2ms from the start of the process. The second line tells the Avg, Min, Max and Diff of the start times of all of the worker threads. [Ext Root Scanning (ms): 1.6 1.5 1.6 1.9 Avg: 1.7, Min: 1.5, Max: 1.9, Diff: 0.4] This gives us the time spent by each worker thread scanning the roots (globals, registers, thread stacks and VM data structures). Here, thread 0 took 1.6ms to perform the root scanning task and thread 1 took 1.5 ms. The second line clearly shows the Avg, Min, Max and Diff of the times spent by all the worker threads. [Update RS (ms): 38.7 38.8 50.6 37.3 Avg: 41.3, Min: 37.3, Max: 50.6, Diff: 13.3] Update RS gives us the time each thread spent in updating the Remembered Sets. Remembered Sets are the data structures that keep track of the references that point into a heap region. Mutator threads keep changing the object graph and thus the references that point into a particular region. We keep track of these changes in buffers called Update Buffers. The Update RS sub-task processes the update buffers that could not be processed concurrently, and updates the corresponding remembered sets of all regions. [Processed Buffers : 2 2 3 2 Sum: 9, Avg: 2, Min: 2, Max: 3, Diff: 1] This tells us the number of Update Buffers (mentioned above) processed by each worker thread. [Scan RS (ms): 9.9 9.7 0.0 9.7 Avg: 7.3, Min: 0.0, Max: 9.9, Diff: 9.9] These are the times each worker thread spent in scanning the Remembered Sets. The Remembered Set of a region contains cards that correspond to the references pointing into that region. This phase scans those cards looking for the references pointing into all the regions of the collection set. [Object Copy (ms): 106.7 106.8 104.6 107.9 Avg: 106.5, Min: 104.6, Max: 107.9, Diff: 3.3] These are the times spent by each worker thread copying live objects from the regions in the Collection Set to the other regions. [Termination (ms): 0.0 0.0 0.0 0.0 Avg: 0.0, Min: 0.0, Max: 0.0, Diff: 0.0] Termination time is the time spent by the worker thread offering to terminate. But before terminating, it checks the work queues of other threads, and if there are still object references in other work queues, it tries to steal object references; if it succeeds in stealing a reference, it processes that and offers to terminate again. [Termination Attempts : 1 4 4 6 Sum: 15, Avg: 3, Min: 1, Max: 6, Diff: 5] This gives the number of times each thread has offered to terminate. [GC Worker End (ms): 679.1 679.1 679.1 679.1 Avg: 679.1, Min: 679.1, Max: 679.1, Diff: 0.1] These are the times in milliseconds at which each worker thread stopped. [GC Worker (ms): 156.9 157.0 156.9 156.9 Avg: 156.9, Min: 156.9, Max: 157.0, Diff: 0.1] These are the total lifetimes of each worker thread. [GC Worker Other (ms): 0.3 0.3 0.3 0.3 Avg: 0.3, Min: 0.3, Max: 0.3, Diff: 0.0] These are the times that each worker thread spent performing some other tasks that we have not accounted for above in the total Parallel Time. [Clear CT: 0.1 ms] This is the time spent in clearing the Card Table. This task is performed in serial mode. [Other: 1.5 ms] Time spent in some other tasks, listed below. The following sub-tasks (which individually may be parallelized) are performed serially. [Choose CSet: 0.0 ms] Time spent in selecting the regions for the Collection Set. [Ref Proc: 0.3 ms] Total time spent in processing Reference objects. 
[Ref Enq: 0.0 ms] Time spent in enqueuing references to the ReferenceQueues. [Free CSet: 0.3 ms] Time spent in freeing the collection set data structure. [Eden: 12M(12M)->0B(13M) Survivors: 0B->2048K Heap: 14M(64M)->9739K(64M)] This line gives the details on the heap size changes with the Evacuation Pause. This shows that Eden had an occupancy of 12M and its capacity was also 12M before the collection. After the collection, its occupancy got reduced to 0 since everything is evacuated/promoted from Eden during a collection, and its target size grew to 13M. The new Eden capacity of 13M is not reserved at this point. This value is the target size of Eden. Regions are added to Eden as demand is made, and when the added regions reach the target size, we start the next collection. Similarly, Survivors had an occupancy of 0 bytes, which grew to 2048K after the collection. The total heap occupancy and capacity were 14M and 64M respectively before the collection, and they became 9739K and 64M after the collection. Apart from the evacuation pauses, G1 also performs concurrent marking to build the live data information of regions. 1.416: [GC pause (young) (initial-mark), 0.62417980 secs] …... 2.042: [GC concurrent-root-region-scan-start] 2.067: [GC concurrent-root-region-scan-end, 0.0251507] 2.068: [GC concurrent-mark-start] 3.198: [GC concurrent-mark-reset-for-overflow] 4.053: [GC concurrent-mark-end, 1.9849672 sec] 4.055: [GC remark 4.055: [GC ref-proc, 0.0000254 secs], 0.0030184 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 4.088: [GC cleanup 117M->106M(138M), 0.0015198 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 4.090: [GC concurrent-cleanup-start] 4.091: [GC concurrent-cleanup-end, 0.0002721] The first phase of a marking cycle is Initial Marking, where all the objects directly reachable from the roots are marked, and this phase is piggy-backed on a fully young Evacuation Pause. 2.042: [GC concurrent-root-region-scan-start] This marks the start of a concurrent phase that scans the set of root regions which are directly reachable from the survivors of the initial marking phase. 2.067: [GC concurrent-root-region-scan-end, 0.0251507] End of the concurrent root region scan phase; it lasted for 0.0251507 seconds. 2.068: [GC concurrent-mark-start] Start of the concurrent marking at 2.068 secs from the start of the process. 3.198: [GC concurrent-mark-reset-for-overflow] This indicates that the global marking stack had become full and there was an overflow of the stack. Concurrent marking detected this overflow and had to reset the data structures to start the marking again. 4.053: [GC concurrent-mark-end, 1.9849672 sec] End of the concurrent marking phase; it lasted for 1.9849672 seconds. 4.055: [GC remark 4.055: [GC ref-proc, 0.0000254 secs], 0.0030184 secs] This corresponds to the remark phase, which is a stop-the-world phase. It completes the left-over marking work (SATB buffer processing) from the previous phase. In this case, this phase took 0.0030184 secs, of which 0.0000254 secs were spent on Reference processing. 4.088: [GC cleanup 117M->106M(138M), 0.0015198 secs] This is the Cleanup phase, which is again a stop-the-world phase. It goes through the marking information of all the regions, computes the live data information of each region, resets the marking data structures and sorts the regions according to their gc-efficiency. In this example, the total heap size is 138M and after the live data counting it was found that the total live data size dropped from 117M to 106M. 
4.090: [GC concurrent-cleanup-start] This concurrent cleanup phase frees up the regions that were found to be empty (didn't contain any live data) during the previous stop-the-world phase. 4.091: [GC concurrent-cleanup-end, 0.0002721] The concurrent cleanup phase took 0.0002721 secs to free up the empty regions. Option -XX:+G1PrintRegionLivenessInfo Now, let's look at the output generated with the flag G1PrintRegionLivenessInfo. This is a diagnostic option and gets enabled with -XX:+UnlockDiagnosticVMOptions. G1PrintRegionLivenessInfo prints the live data information of each region during the Cleanup phase of the concurrent-marking cycle. 26.896: [GC cleanup ### PHASE Post-Marking @ 26.896 ### HEAP committed: 0x02e00000-0x0fe00000 reserved: 0x02e00000-0x12e00000 region-size: 1048576 The Cleanup phase of the concurrent-marking cycle started at 26.896 secs from the start of the process, and this live data information is being printed after the marking phase. The committed G1 heap ranges from 0x02e00000 to 0x0fe00000 and the total G1 heap reserved by the JVM is from 0x02e00000 to 0x12e00000. Each region in the G1 heap is of size 1048576 bytes. ### type address-range used prev-live next-live gc-eff ### (bytes) (bytes) (bytes) (bytes/ms) This is the header of the output that tells us about the type of the region, the address-range of the region, the used space in the region, the live bytes in the region with respect to the previous marking cycle, the live bytes in the region with respect to the current marking cycle, and the GC efficiency of that region. ### FREE 0x02e00000-0x02f00000 0 0 0 0.0 This is a Free region. ### OLD 0x02f00000-0x03000000 1048576 1038592 1038592 0.0 An Old region with address-range from 0x02f00000 to 0x03000000. The total used space in the region is 1048576 bytes, live bytes as per the previous marking cycle are 1038592, and live bytes with respect to the current marking cycle are also 1038592. The GC efficiency has been computed as 0. ### EDEN 0x03400000-0x03500000 20992 20992 20992 0.0 This is an Eden region. ### HUMS 0x0ae00000-0x0af00000 1048576 1048576 1048576 0.0 ### HUMC 0x0af00000-0x0b000000 1048576 1048576 1048576 0.0 ### HUMC 0x0b000000-0x0b100000 1048576 1048576 1048576 0.0 ### HUMC 0x0b100000-0x0b200000 1048576 1048576 1048576 0.0 ### HUMC 0x0b200000-0x0b300000 1048576 1048576 1048576 0.0 ### HUMC 0x0b300000-0x0b400000 1048576 1048576 1048576 0.0 ### HUMC 0x0b400000-0x0b500000 1001480 1001480 1001480 0.0 This is a contiguous set of regions, called Humongous regions, used for storing a large object. HUMS (Humongous starts) marks the start of the set of humongous regions and HUMC (Humongous continues) tags the subsequent regions of the humongous region set. ### SURV 0x09300000-0x09400000 16384 16384 16384 0.0 This is a Survivor region. ### SUMMARY capacity: 208.00 MB used: 150.16 MB / 72.19 % prev-live: 149.78 MB / 72.01 % next-live: 142.82 MB / 68.66 % At the end, a summary is printed listing the capacity, the used space and the change in the liveness after the completion of concurrent marking. In this case, the G1 heap capacity is 208MB, the total used space is 150.16MB which is 72.19% of the total heap size, the live data in the previous marking was 149.78MB which was 72.01% of the total heap size, and the live data as per the current marking is 142.82MB which is 68.66% of the total heap size. Option -XX:+G1PrintHeapRegions The G1PrintHeapRegions option logs region-related events when regions are committed, allocated into, or reclaimed. 
COMMIT/UNCOMMIT events G1HR COMMIT [0x6e900000,0x6ea00000] G1HR COMMIT [0x6ea00000,0x6eb00000] Here, the heap is being initialized or expanded and the region (with bottom: 0x6eb00000 and end: 0x6ec00000) is being freshly committed. COMMIT events are always generated in order, i.e. the next COMMIT event will always be for the uncommitted region with the lowest address. G1HR UNCOMMIT [0x72700000,0x72800000] G1HR UNCOMMIT [0x72600000,0x72700000] Opposite to COMMIT. The heap got shrunk at the end of a Full GC and the regions are being uncommitted. Like COMMIT, UNCOMMIT events are also generated in order, i.e. the next UNCOMMIT event will always be for the committed region with the highest address. GC Cycle events G1HR #StartGC 7 G1HR CSET 0x6e900000 G1HR REUSE 0x70500000 G1HR ALLOC(Old) 0x6f800000 G1HR RETIRE 0x6f800000 0x6f821b20 G1HR #EndGC 7 This shows the start and end of an Evacuation Pause. These events are followed by a GC counter tracking both evacuation pauses and Full GCs. Here, this is the 7th GC since the start of the process. G1HR #StartFullGC 17 G1HR UNCOMMIT [0x6ed00000,0x6ee00000] G1HR POST-COMPACTION(Old) 0x6e800000 0x6e854f58 G1HR #EndFullGC 17 This shows the start and end of a Full GC. These events are also followed by the same GC counter as above. This is the 17th GC since the start of the process. ALLOC events G1HR ALLOC(Eden) 0x6e800000 The region with bottom 0x6e800000 just started being used for allocation. In this case it is an Eden region and is allocated into by a mutator thread. G1HR ALLOC(StartsH) 0x6ec00000 0x6ed00000 G1HR ALLOC(ContinuesH) 0x6ed00000 0x6e000000 Regions being used for the allocation of a Humongous object. The object spans over two regions. G1HR ALLOC(SingleH) 0x6f900000 0x6f9eb010 A single region being used for the allocation of a Humongous object. G1HR COMMIT [0x6ee00000,0x6ef00000] G1HR COMMIT [0x6ef00000,0x6f000000] G1HR COMMIT [0x6f000000,0x6f100000] G1HR COMMIT [0x6f100000,0x6f200000] G1HR ALLOC(StartsH) 0x6ee00000 0x6ef00000 G1HR ALLOC(ContinuesH) 0x6ef00000 0x6f000000 G1HR ALLOC(ContinuesH) 0x6f000000 0x6f100000 G1HR ALLOC(ContinuesH) 0x6f100000 0x6f102010 Here, a Humongous object allocation request could not be satisfied by the free committed regions that existed in the heap, so the heap needed to be expanded. Thus new regions are committed and then allocated into for the Humongous object. G1HR ALLOC(Old) 0x6f800000 An Old region started being used for allocation during a GC. G1HR ALLOC(Survivor) 0x6fa00000 A region being used for copying old objects into during a GC. Note that Eden and Humongous ALLOC events are generated outside the GC boundaries and Old and Survivor ALLOC events are generated inside the GC boundaries. Other Events G1HR RETIRE 0x6e800000 0x6e87bd98 Retire and stop using the region having bottom 0x6e800000 and top 0x6e87bd98 for allocation. Note that most regions are full when they are retired and we omit those events to reduce the output volume. A region is retired when another region of the same type is allocated or we reach the start or end of a GC (depending on the region). So for Eden regions, for example: 1. ALLOC(Eden) Foo 2. ALLOC(Eden) Bar 3. StartGC At point 2, Foo has just been retired and it was full. At point 3, Bar was retired and it was full. If they were not full when they were retired, we will have a RETIRE event: 1. ALLOC(Eden) Foo 2. RETIRE Foo top 3. ALLOC(Eden) Bar 4. StartGC G1HR CSET 0x6e900000 The region (bottom: 0x6e900000) is selected for the Collection Set. The region might have been selected for the collection set earlier (i.e. when it was allocated). 
However, we generate the CSET events for all regions in the CSet at the start of a GC to make sure there's no confusion about which regions are part of the CSet.

G1HR POST-COMPACTION(Old) 0x6e800000 0x6e839858

A POST-COMPACTION event is generated for each non-empty region in the heap after a full compaction. A full compaction moves objects around, so we don't know what the resulting shape of the heap is (which regions were written to, which were emptied, etc.). To deal with this, we generate a POST-COMPACTION event for each non-empty region with its type (old/humongous) and the heap boundaries. At this point we should only have Old and Humongous regions, as we have collapsed the young generation, so we should not have Eden and Survivor regions. POST-COMPACTION events are generated within the Full GC boundary.

G1HR CLEANUP 0x6f400000
G1HR CLEANUP 0x6f300000
G1HR CLEANUP 0x6f200000

These regions were found empty after the remark phase of concurrent marking and are reclaimed shortly afterwards.

G1HR #StartGC 5
G1HR CSET 0x6f400000
G1HR CSET 0x6e900000
G1HR REUSE 0x6f800000

At the end of a GC we retire the old region we are allocating into. Given that it's not full, we will carry on allocating into it during the next GC. This is what REUSE means. In the above case, 0x6f800000 should have been the last region with an ALLOC(Old) event during the previous GC and should have been retired before the end of the previous GC.

G1HR ALLOC-FORCE(Eden) 0x6f800000

A specialization of ALLOC which indicates that we have reached the max desired number of the particular region type (in this case: Eden), but we decided to allocate one more. Currently it's only used for Eden regions when we extend the young generation because we cannot do a GC as the GC-Locker is active.

G1HR EVAC-FAILURE 0x6f800000

During a GC, we have failed to evacuate an object from the given region as the heap is full and there is no space left to copy the object. This event is generated within GC boundaries and exactly once for each region from which we failed to evacuate objects.

When are heap regions reclaimed?

It is also worth mentioning when the heap regions in the G1 heap are reclaimed. All regions that are in the CSet (the ones that appear in CSET events) are reclaimed at the end of a GC. The exception to that are regions with EVAC-FAILURE events. All regions with CLEANUP events are reclaimed. After a Full GC, some regions get reclaimed (the ones from which we moved the objects out), but that is not shown explicitly; instead, the non-empty regions that are left in the heap are printed out with the POST-COMPACTION events.
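For completeness, a minimal sketch of a launch command that turns on the two options discussed above; MyApp.jar is just a hypothetical placeholder, -XX:+PrintGCDetails is the usual companion flag for the basic GC log, and depending on the JDK build G1PrintHeapRegions may also need to sit behind the diagnostic unlock:

# enable G1 plus the two region-level diagnostic logs discussed above
java -XX:+UseG1GC -XX:+PrintGCDetails \
     -XX:+UnlockDiagnosticVMOptions \
     -XX:+G1PrintRegionLivenessInfo \
     -XX:+G1PrintHeapRegions \
     -jar MyApp.jar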

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

The Long Road To Stubs

This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

4631488 lib/Makefile is too patient: .WAITs should be reduced

This CR encapsulates a number of chronic issues with Solaris builds: We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it, because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand written rules described above. The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach.

Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot.

In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game-changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of their publicly available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.

We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match.

In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

# DATA(i386) __iob 0x3c0
# DATA(amd64,sparcv9) __iob 0xa00
# DATA(sparc) __iob 0x140
iob;

A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan: A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another perl script, used after both objects have been built, compares the real and stub objects using data from elfdump, and validates that they present the same linking interface. By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues: The use of stylized comments was fine for a prototype, but not close to professional enough for a shipping product. The idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one.

At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is a massive one, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years.

Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not, have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had it in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

6916788 ld version 2 mapfile syntax
PSARC/2009/688 Human readable and extensible ld mapfile syntax

In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

6916796 OSnet mapfiles should use version 2 link-editor syntax

That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this: We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

% ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
    -ztext -zdefs -Bdirect ...

real        0.019708910
user        0.010101680
sys         0.008528431

In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished.

Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it.

Without stubs, the following gives a simplified high level view of how Solaris is built: An initially empty directory, known as the proto and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution. A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint and do packaging.

Given this structure, the additions to use stub objects are: A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
A new target, called stub, is added to library Makefiles. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target, called stubinstall, is added to the Makefiles; it delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.

There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them.

After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub-library-enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down: Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel. It was suggested that we might be I/O bound, and so the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timing between the stub and non-stub cases was just too suspiciously identical. Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up.

Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with

6993877 ld should produce stub objects
PSARC/2010/397 ELF Stub Objects

followed by the work to convert the ON consolidation in snv_161 (February 2011) with

7009826 OSnet should use stub objects
4631488 lib/Makefile is too patient: .WAITs should be reduced

This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

Conclusions and Looking Forward

Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.
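For readers who want to see what the stub and stubinstall rules boil down to, here is a minimal sketch of the command pair involved; libfoo, the object files, and the mapfile name are hypothetical placeholders, and the flag set simply mirrors the libc example shown earlier, so treat it as an illustration rather than the exact ON makefile plumbing:

# Build the real shared object from its input objects and its version 2 mapfile.
ld -64 -G -hlibfoo.so.1 -M mapfile-vers -ztext -zdefs -Bdirect \
    -o libfoo.so.1 foo1.o foo2.o -lc

# The same command line plus -z stub builds the stub. Only the mapfile is
# consulted for the linking interface, so this step needs none of the real
# dependencies and completes almost instantly.
ld -64 -z stub -G -hlibfoo.so.1 -M mapfile-vers -ztext -zdefs -Bdirect \
    -o stubs/libfoo.so.1 foo1.o foo2.o -lc

In a Makefile, the stub rule would deliver stubs/libfoo.so.1 into the stub proto via stubinstall, while install continues to deliver the real libfoo.so.1 into the proto as before.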

    Read the article

  • Installed SQL Server 2008 and now TFS is broken.

    - by johnnycakes
    Hi, My W2K3 server was running TFS 2008 SP1, SQL Server 2005 Development edition. I installed SQL Server 2008 Standard. I installed it while leaving SQL Server 2005 alone. Upgrading was not possible due to the differences in editions of the SQL Servers. Now TFS is broken. On a client computer, if I go Team - Connect to Team Foundation Server, I get this error: Team Foundation services are not available from server myserver. Technical information (for administrator): TF30059: Fatal error while initializing web service. So I head on over to my event viewer on the server. Under Application, I see one warning and two errors. First, the warning: Source: SQLSERVERAGENT Event ID: 208 Description: SQL Server Scheduled Job 'TfsWorkItemTracking Process Identities Job' (0x21F395C1F444CA499A63EBF05D717749) - Status: Failed - Invoked on: 2010-04-26 13:30:00 - Message: The job failed. The Job was invoked by Schedule 9 (ProcessIdentitiesSchedule). The last step to run was step 1 (Process Identities). Then the first error: Source: TFS Services Event ID: 3017 Description: TF53010: The following error has occurred in a Team Foundation component or extension: Date (UTC): 4/26/2010 5:36:29 PM Machine: myserver Application Domain: /LM/W3SVC/799623628/Root/Services-2-129167769888923968 Assembly: Microsoft.TeamFoundation.Server, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a; v2.0.50727 Process Details: Process Name: w3wp Process Id: 4008 Thread Id: 224 Account name: DOMAIN\TFSService Detailed Message: TF53013: A crash report is being prepared for Microsoft. The following information is included in that report: System Values OS Version Information=Microsoft Windows NT 5.2.3790 Service Pack 2 CLR Version Information=2.0.50727.3053 Machine Name=myserver Processor Count=1 Working Set=34897920 System Directory=C:\WINDOWS\system32 Process Values ExitCode=0 Interactive=False Has Shutdown Started=False Process Environment Variables Path = C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\Program Files\Microsoft SQL Server\80\Tools\Binn\;C:\Program Files\Microsoft SQL Server\90\Tools\binn\;C:\Program Files\Microsoft SQL Server\90\DTS\Binn\;C:\Program Files\Microsoft SQL Server\90\Tools\Binn\VSShell\Common7\IDE\;C:\Program Files\Microsoft Visual Studio 8\Common7\IDE\PrivateAssemblies\;C:\Program Files\Microsoft SQL Server\100\Tools\Binn\;C:\Program Files\Microsoft SQL Server\100\DTS\Binn\;C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\;C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\PrivateAssemblies\;C:\WINDOWS\system32\WindowsPowerShell\v1.0 PATHEXT = .COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.PSC1 PROCESSOR_ARCHITECTURE = x86 SystemDrive = C: windir = C:\WINDOWS TMP = C:\WINDOWS\TEMP USERPROFILE = C:\Documents and Settings\Default User ProgramFiles = C:\Program Files FP_NO_HOST_CHECK = NO COMPUTERNAME = myserver APP_POOL_ID = Microsoft Team Foundation Server Application Pool NUMBER_OF_PROCESSORS = 1 PROCESSOR_IDENTIFIER = x86 Family 16 Model 5 Stepping 2, AuthenticAMD ClusterLog = C:\WINDOWS\Cluster\cluster.log SystemRoot = C:\WINDOWS ComSpec = C:\WINDOWS\system32\cmd.exe CommonProgramFiles = C:\Program Files\Common Files PROCESSOR_LEVEL = 16 PROCESSOR_REVISION = 0502 lib = C:\Program Files\SQLXML 4.0\bin\ ALLUSERSPROFILE = C:\Documents and Settings\All Users TEMP = C:\WINDOWS\TEMP OS = Windows_NT Request Details Url=http://myserver.domain.local:8080/Services/v1.0/Registration.asmx [method = POST] User Agent=Team Foundation (devenv.exe, 
10.0.30128.1) Headers=Content-Length=390&Content-Type=text%2fxml%3b+charset%3dutf-8&Accept-Encoding=gzip%2cgzip%2cgzip&Accept-Language=en-US&Authorization=NTLM+TlRMTVNTUAADAAAAGAAYAIQAAABAAUABnAAAABAAEABYAAAADAAMAGgAAAAQABAAdAAAAAAAAADcAQAABYKIogYBsB0AAAAPN9gzQTXfZIiIFnXDlQrxjUgAWQBQAEUAUgBJAE8ATgBKAG8AaABuAG4AeQBQAEwAQQBUAFkAUABVAFMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUrL79KzznBHCSJi2wVjn5QEBAAAAAAAAuhQoBGflygEImxiHPrhZoAAAAAACABAASABZAFAARQBSAEkATwBOAAEACgBUAEkAVABBAE4ABAAcAEgAeQBwAGUAcgBpAG8AbgAuAGwAbwBjAGEAbAADACgAdABpAHQAYQBuAC4ASAB5AHAAZQByAGkAbwBuAC4AbABvAGMAYQBsAAUAHABIAHkAcABlAHIAaQBvAG4ALgBsAG8AYwBhAGwACAAwADAAAAAAAAAAAAAAAAAwAACg0XxPlP8uXycSFhksBJWiwp8oW7iVDqf%2f6h5U30CEXgoAEAAAAAAAAAAAAAAAAAAAAAAACQAyAEgAVABUAFAALwB0AGkAdABhAG4ALgBoAHkAcABlAHIAaQBvAG4ALgBsAG8AYwBhAGwAAAAAAAAAAAA%3d&Expect=100-continue&Host=myserver.domain.local%3a8080&User-Agent=Team+Foundation+(devenv.exe%2c+10.0.30128.1)&X-TFS-Version=1.0.0.0&X-TFS-Session=b7e7fdec-e7ee-48fc-92e8-537d1cd87ea4&SOAPAction=%22http%3a%2f%2fschemas.microsoft.com%2fTeamFoundation%2f2005%2f06%2fServices%2fRegistration%2f03%2fGetRegistrationEntries%22 Path=/Services/v1.0/Registration.asmx Local Request=False User Host Address=10.0.5.78 User=DOMAIN\Johnny [auth = NTLM] Application Provided Information Team Foundation Application Information Event Log Source = TFS Services Configured Team Foundation Server = http://myserver:8080 License Type = WorkgroupLicense Server Culture = en-US Activity Logging Name = Integration Component Name = CS Initialized = No Requests Processed = 0 Exception: TypeInitializationException Message: The type initializer for 'Microsoft.TeamFoundation.Server.IntegrationResourceComponent' threw an exception. Stack Trace: at Microsoft.TeamFoundation.Server.IntegrationResourceComponent.RegisterExceptions() at Microsoft.TeamFoundation.Server.Global.Initialize() at Microsoft.TeamFoundation.Server.TeamFoundationApplication.Init() Inner Exception Details Exception: ReflectionTypeLoadException Message: Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information. 
Stack Trace: at System.Reflection.Module._GetTypesInternal(StackCrawlMark& stackMark) at System.Reflection.Assembly.GetTypes() at Microsoft.TeamFoundation.Server.SqlResourceComponent.RegisterExceptions(Assembly assembly) at Microsoft.TeamFoundation.Server.IntegrationResourceComponent.RegisterExceptions() at Microsoft.TeamFoundation.Server.IntegrationResourceComponent..cctor() Application Domain Information Assembly Name=mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 Assembly CLR Version=v2.0.50727 Assembly Version=2.0.0.0 Assembly Location=C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\mscorlib.dll Assembly File Version: File: C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\mscorlib.dll InternalName: mscorlib.dll OriginalFilename: mscorlib.dll FileVersion: 2.0.50727.3053 (netfxsp.050727-3000) FileDescription: Microsoft Common Language Runtime Class Library Product: Microsoft® .NET Framework ProductVersion: 2.0.50727.3053 Debug: False Patched: False PreRelease: False PrivateBuild: False SpecialBuild: False Language: English (United States) Assembly Name=System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a Assembly CLR Version=v2.0.50727 Assembly Version=2.0.0.0 Assembly Location=C:\WINDOWS\assembly\GAC_32\System.Web\2.0.0.0__b03f5f7f11d50a3a\System.Web.dll Assembly File Version: File: C:\WINDOWS\assembly\GAC_32\System.Web\2.0.0.0__b03f5f7f11d50a3a\System.Web.dll InternalName: System.Web.dll OriginalFilename: System.Web.dll FileVersion: 2.0.50727.3053 (netfxsp.050727-3000) FileDescription: System.Web.dll Product: Microsoft® .NET Framework ProductVersion: 2.0.50727.3053 Debug: False Patched: False PreRelease: False PrivateBuild: False SpecialBuild: False Language: English (United States) Assembly Name=System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 Assembly CLR Version=v2.0.50727 Assembly Version=2.0.0.0 Assembly Location=C:\WINDOWS\assembly\GAC_MSIL\System\2.0.0.0__b77a5c561934e089\System.dll Assembly File Version: File: C:\WINDOWS\assembly\GAC_MSIL\System\2.0.0.0__b77a5c561934e089\System.dll InternalName: System.dll OriginalFilename: System.dll FileVersion: 2.0.50727.3053 (netfxsp.050727-3000) FileDescription: .NET Framework Product: Microsoft® .NET Framework ProductVersion: 2.0.50727.3053 Debug: False Patched: False PreRelease: False PrivateBuild: False SpecialBuild: False Language: English (United States) Assembly Name=System.Xml, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 Assembly CLR Version=v2.0.50727 Assembly Version=2.0.0.0 Assembly Location=C:\WINDOWS\assembly\GAC_MSIL\System.Xml\2.0.0.0__b77a5c561934e089\System.Xml.dll Assembly File Version: File: C:\WINDOWS\assembly\GAC_MSIL\System.Xml\2.0.0.0__b77a5c561934e089\System.Xml.dll InternalName: System.Xml.dll OriginalFilename: System.Xml.dll FileVersion: 2.0.50727.3053 (netfxsp.050727-3000) FileDescription: .NET Framework Product: Microsoft® .NET Framework ProductVersion: 2.0.50727.3053 Debug: False Patched: False PreRelease: False PrivateBuild: False SpecialBuild: False Language: English (United States) Assembly Name=System.Configuration, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a Assembly CLR Version=v2.0.50727 Assembly Version=2.0.0.0 Assembly Location=C:\WINDOWS\assembly\GAC_MSIL\System.Configuration\2.0.0.0__b03f5f7f11d50a3a\System.Configuration.dll Assembly File Version: File: C:\WINDOWS\assembly\GAC_MSIL\System.Configuration\2.0.0.0__b03f5f7f11d50a3a\System.Configuration.dll InternalName: 
System.Configuration.dll OriginalFilename: System.Configuration.dll FileVersion: 2.0.50727.3053 (netfxsp.050727-3000) FileDescription: System.Configuration.dll Product: Microsoft® .NET Framework ProductVersion: 2.0.50727.3053 Debug: False Patched: False PreRelease: False PrivateBuild: False SpecialBuild: False Language: English (United States) Assembly Name=Microsoft.JScript, Version=8.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a Assembly CLR Version=v2.0.50727 Assembly Version=8.0.0.0 Assembly Location=C:\WINDOWS\assembly\GAC_MSIL\Microsoft.JScript\8.0.0.0__b03f5f7f11d50a3a\Microsoft.JScript.dll Assembly File Version: File: C:\WINDOWS\assembly\GAC_MSIL\Microsoft.JScript\8.0.0.0__b03f5f7f11d50a3a\Microsoft.JScript.dll InternalName: Microsoft.JScript.dll OriginalFilename: Microsoft.JScript.dll FileVersion: 8.0.50727.3053 FileDescription: Microsoft.JScript.dll Product: Microsoft (R) Visual Studio (R) 2005 ProductVersion: 8.0.50727.3053 Debug: False Patched: False PreRelease: False PrivateBuild: False SpecialBuild: False Language: Language Neutral Assembly Name=App_global.asax.4nq_g1xi, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null Assembly CLR Version=v2.0.50727 Assembly Version=0.0.0.0 Assembly Location=C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\services\87e24ff8\921625fe\App_global.asax.4nq_g1xi.dll Assembly File Version: File: C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\services\87e24ff8\921625fe\App_global.asax.4nq_g1xi.dll InternalName: App_global.asax.4nq_g1xi.dll OriginalFilename: App_global.asax.4nq_g1xi.dll FileVersion: 0.0.0.0 FileDescription: Product: ProductVersion: 0.0.0.0 Debug: False Patched: False PreRelease: False PrivateBuild: False SpecialBuild: False Language: Language Neutral Assembly Name=Microsoft.TeamFoundation.Server, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a Assembly CLR Version=v2.0.50727 Assembly Version=9.0.0.0 Assembly Location=C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\services\87e24ff8\921625fe\assembly\dl3\9051eeb6\603ea9a2_d822c801\Microsoft.TeamFoundation.Server.DLL Assembly File Version: File: C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\services\87e24ff8\921625fe\assembly\dl3\9051eeb6\603ea9a2_d822c801\Microsoft.TeamFoundation.Server.DLL InternalName: Microsoft.TeamFoundation.Server.dll OriginalFilename: Microsoft.TeamFoundation.Server.dll FileVersion: 9.0.21022.8 FileDescription: Microsoft.TeamFoundation.Server.dll Product: Microsoft (R) Visual Studio (R) 2008 ProductVersion: 9.0.21022.8 Debug: False Patched: False PreRelease: False PrivateBuild: False SpecialBuild: False Language: Language Neutral Assembly Name=Microsoft.TeamFoundation.Common, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a Assembly CLR Version=v2.0.50727 Assembly Version=9.0.0.0 Assembly Location=C:\WINDOWS\assembly\GAC_32\Microsoft.TeamFoundation.Common\9.0.0.0__b03f5f7f11d50a3a\Microsoft.TeamFoundation.Common.dll Assembly File Version: File: C:\WINDOWS\assembly\GAC_32\Microsoft.TeamFoundation.Common\9.0.0.0__b03f5f7f11d50a3a\Microsoft.TeamFoundation.Common.dll InternalName: Microsoft.TeamFoundation.Common.dll OriginalFilename: Microsoft.TeamFoundation.Common.dll FileVersion: 9.0.30729.1 FileDescription: Microsoft.TeamFoundation.Common.dll Product: Microsoft (R) Visual Studio (R) 2008 ProductVersion: 9.0.30729.1 Debug: False Patched: False PreRelease: False PrivateBuild: False SpecialBuild: False Language: Language 
Neutral Assembly Name=Microsoft.TeamFoundation, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a Assembly CLR Version=v2.0.50727 Assembly Version=9.0.0.0 Assembly Location=C:\WINDOWS\assembly\GAC_32\Microsoft.TeamFoundation\9.0.0.0__b03f5f7f11d50a3a\Microsoft.TeamFoundation.dll Assembly File Version: File: C:\WINDOWS\assembly\GAC_32\Microsoft.TeamFoundation\9.0.0.0__b03f5f7f11d50a3a\Microsoft.TeamFoundation.dll InternalName: Microsoft.TeamFoundation.dll OriginalFilename: Microsoft.TeamFoundation.dll FileVersion: 9.0.30729.1 FileDescription: Microsoft.TeamFoundation.dll Product: Microsoft (R) Visual Studio (R) 2008 ProductVersion: 9.0.30729.1 Debug: False Patched: False PreRelease: False PrivateBuild: False SpecialBuild: False Language: Language Neutral Assembly Name=System.Security, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a Assembly CLR Version=v2.0.50727 Assembly Version=2.0.0.0 Assembly Location=C:\WINDOWS\assembly\GAC_MSIL\System.Security\2.0.0.0__b03f5f7f11d50a3a\System.Security.dll Assembly File Version: File: C:\WINDOWS\assembly\GAC_MSIL\System.Security\2.0.0.0__b03f5f7f11d50a3a\System.Security.dll InternalName: System.Security.dll OriginalFilename: System.Security.dll FileVersion: 2.0.50727.3053 (netfxsp.050727-3000) FileDescription: System.Security.dll Product: Microsoft® .NET Framework ProductVersion: 2.0.50727.3053 Debug: False Patched: False PreRelease: False PrivateBuild: False SpecialBuild: False Language: English (United States) Assembly Name=System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 Assembly CLR Version=v2.0.50727 Assembly Version=2.0.0.0 Assembly Location=C:\WINDOWS\assembly\GAC_32\System.Data\2.0.0.0__b77a5c561934e089\System.Data.dll Assembly File Version: File: C:\WINDOWS\assembly\GAC_32\System.Data\2.0.0.0__b77a5c561934e089\System.Data.dll InternalName: system.data.dll OriginalFilename: system.data.dll FileVersion: 2.0.50727.3053 (netfxsp.050727-3000) FileDescription: .NET Framework Product: Microsoft® .NET Framework ProductVersion: 2.0.50727.3053 Debug: False Patched: False PreRelease: False PrivateBuild: False SpecialBuild: False Language: English (United States) Assembly Name=Microsoft.TeamFoundation.Common.Library, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a Assembly CLR Version=v2.0.50727 Assembly Version=9.0.0.0 Assembly Location=C:\WINDOWS\assembly\GAC_32\Microsoft.TeamFoundation.Common.Library\9.0.0.0__b03f5f7f11d50a3a\Microsoft.TeamFoundation.Common.Library.dll Assembly File Version: File: C:\WINDOWS\assembly\GAC_32\Microsoft.TeamFoundation.Common.Library\9.0.0.0__b03f5f7f11d50a3a\Microsoft.TeamFoundation.Common.Library.dll InternalName: Microsoft.TeamFoundation.Common.Library.dll OriginalFilename: Microsoft.TeamFoundation.Common.Library.dll FileVersion: 9.0.30729.1 FileDescription: Microsoft.TeamFoundation.Common.Library.dll Product: Microsoft (R) Visual Studio (R) 2008 ProductVersion: 9.0.30729.1 Debug: False Patched: False PreRelease: False PrivateBuild: False SpecialBuild: False Language: Language Neutral Assembly Name=System.Web.Mobile, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a Assembly CLR Version=v2.0.50727 Assembly Version=2.0.0.0 Assembly Location=C:\WINDOWS\assembly\GAC_MSIL\System.Web.Mobile\2.0.0.0__b03f5f7f11d50a3a\System.Web.Mobile.dll As And finally, the second error: Source: Team Foundation Error Reporting Event ID: 5000 Description: EventType teamfoundationue, P1 1.0.0.0, P2 tfs, P3 9.0.30729.1, P4 9.0.0.0, P5 general, P6 
typeinitializationexcept, P7 4758b22a940fe6d9, P8 d15c14bb, P9 NIL, P10 NIL. Any ideas? Thanks.

    Read the article

  • Slow NFS and GFS2 performance

    - by Tiago
Recently I've designed and configured a 4 node cluster for a webapp that does lots of file handling. The cluster has been broken down into 2 main roles, webserver and storage. Each role is replicated to a second server using drbd in active/passive mode. The webserver does an NFS mount of the data directory of the storage server, and the latter also has a webserver running to serve files to browser clients. In the storage servers I've created a GFS2 FS to hold the data, which is wired to drbd. I chose GFS2 mainly because of the announced performance and also because of the volume size, which has to be pretty high. Since we entered production I've been facing two problems that I think are deeply connected. First of all, the NFS mount on the webservers keeps hanging for a minute or so and then resumes normal operations. By analyzing the logs I've found out that NFS stops answering for a while and outputs the following log lines:

Oct 15 18:15:42 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:44 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:46 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:48 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:48 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:51 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:52 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:52 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:55 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:55 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:58 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK

In this case, the hang lasted for 16 seconds, but sometimes it takes 1 or 2 minutes to resume normal operations. My first guess was that this was happening due to heavy load on the NFS mount and that by increasing RPCNFSDCOUNT to a higher value, this would become stable.
I've increased it several times and apparently, after a while, the log messages started appearing less often. The value is now at 32. After further investigating the issue, I came across a different hang, even though the NFS messages still appear in the logs. Sometimes, the GFS2 FS simply hangs, which causes both the NFS mount and the storage webserver to stop serving files. Both stay hung for a while and then they resume normal operations. These hangs leave no trace on the client side (they also leave no NFS ... not responding messages) and, on the storage side, the log system appears to be empty, even though rsyslogd is running. The nodes connect through a 10Gbps non-dedicated connection, but I don't think this is an issue because the GFS2 hang is confirmed by connecting directly to the active storage server. I've been trying to solve this for a while now, and I've tried different NFS configuration options before I found out that the GFS2 FS is also hanging. The NFS mount is exported as such:

/srv/data/ <ip_address>(rw,async,no_root_squash,no_all_squash,fsid=25)

And the NFS client mounts with:

mount -o "async,hard,intr,wsize=8192,rsize=8192" active.storage.vlan:/srv/data /srv/data

After some tests, these were the configurations that yielded the most performance for the cluster. I am desperate to find a solution for this, as the cluster is already in production mode, I need to fix this so that these hangs won't happen in the future, and I don't really know for sure what and how I should be benchmarking. What I can tell is that this is happening due to heavy loads, as I tested the cluster earlier and these problems weren't happening at all. Please tell me if you need me to provide configuration details of the cluster, and which ones you want me to post. As a last resort I can migrate the files to a different FS, but I need some solid pointers on whether this will solve these problems, as the volume size is extremely large at this point. The servers are being hosted by a third-party enterprise and I don't have physical access to them. Best regards.

EDIT 1: The servers are physical servers and their specs are:

Webservers: Intel Bi Xeon E5606 2x4 2.13GHz, 24GB DDR3, Intel SSD 320 2 x 120GB Raid 1
Storage: Intel i5 3550 3.3GHz, 16GB DDR3, 12 x 2TB SATA

Initially there was a VRack setup between the servers, but we've upgraded one of the storage servers to have more RAM and it wasn't inside the VRack. They connect through a shared 10Gbps connection between them. Please note that it is the same connection that is used for public access. They use a single IP (using IP Failover) to connect between them and to allow for a graceful failover. NFS is therefore over a public connection and not under any private network (it was before the upgrade, where the problem still existed). The firewall was configured and tested thoroughly, but I disabled it for a while to see if the problem still occurred, and it did. From my knowledge the hosting provider isn't blocking or limiting the connections between the servers or to the public domain (at least under a given bandwidth consumption threshold that hasn't been reached yet). Hope this helps figuring out the problem.
EDIT 2: Relevant software versions: CentOS 2.6.32-279.9.1.el6.x86_64 nfs-utils-1.2.3-26.el6.x86_64 nfs-utils-lib-1.1.5-4.el6.x86_64 gfs2-utils-3.0.12.1-32.el6_3.1.x86_64 kmod-drbd84-8.4.2-1.el6_3.elrepo.x86_64 drbd84-utils-8.4.2-1.el6.elrepo.x86_64 DRBD configuration on storage servers: #/etc/drbd.d/storage.res resource storage { protocol C; on <server1 fqdn> { device /dev/drbd0; disk /dev/vg_storage/LV_replicated; address <server1 ip>:7788; meta-disk internal; } on <server2 fqdn> { device /dev/drbd0; disk /dev/vg_storage/LV_replicated; address <server2 ip>:7788; meta-disk internal; } } NFS Configuration in storage servers: #/etc/sysconfig/nfs RPCNFSDCOUNT=32 STATD_PORT=10002 STATD_OUTGOING_PORT=10003 MOUNTD_PORT=10004 RQUOTAD_PORT=10005 LOCKD_UDPPORT=30001 LOCKD_TCPPORT=30001 (can there be any conflict in using the same port for both LOCKD_UDPPORT and LOCKD_TCPPORT?) GFS2 configuration: # gfs2_tool gettune <mountpoint> incore_log_blocks = 1024 log_flush_secs = 60 quota_warn_period = 10 quota_quantum = 60 max_readahead = 262144 complain_secs = 10 statfs_slow = 0 quota_simul_sync = 64 statfs_quantum = 30 quota_scale = 1.0000 (1, 1) new_files_jdata = 0 Storage network environment: eth0 Link encap:Ethernet HWaddr <mac address> inet addr:<ip address> Bcast:<bcast address> Mask:<ip mask> inet6 addr: <ip address> Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:957025127 errors:0 dropped:0 overruns:0 frame:0 TX packets:1473338731 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:2630984979622 (2.3 TiB) TX bytes:1648430431523 (1.4 TiB) eth0:0 Link encap:Ethernet HWaddr <mac address> inet addr:<ip failover address> Bcast:<bcast address> Mask:<ip mask> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 The IP addresses are statically assigned with the given network configurations: DEVICE="eth0" BOOTPROTO="static" HWADDR=<mac address> ONBOOT="yes" TYPE="Ethernet" IPADDR=<ip address> NETMASK=<net mask> and DEVICE="eth0:0" BOOTPROTO="static" HWADDR=<mac address> IPADDR=<ip failover> NETMASK=<net mask> ONBOOT="yes" BROADCAST=<bcast address> Hosts file to allow for a graceful NFS failover in conjunction with NFS option fsid=25 set on both storage servers: #/etc/hosts <storage ip failover address> active.storage.vlan <webserver ip failover address> active.service.vlan As you can see, packet errors are down to 0. I've also ran ping for a long time without any packet loss. MTU size is the normal 1500. As there is no VLan by now, this is the MTU used to communicate between servers. The webservers' network environment is similar. One thing I forgot to mention is that the storage servers handle ~200GB of new files each day through the NFS connection, which is a key point for me to think this is some kind of heavy load problem with either NFS or GFS2. If you need further configuration details please tell me. EDIT 3: Earlier today we had a major filesystem crash on the storage server. I couldn't get the details of the crash right away because the server stop responding. After the reboot, I noticed the filesystem was extremely slow, and I was not being able to serve a single file through either NFS or httpd, perhaps due to cache warming or so. Nevertheless, I've been monitoring the server closely and the following error came up in dmesg. The source of the problem is clearly GFS, which is waiting for a lock and ends up starving after a while. INFO: task nfsd:3029 blocked for more than 120 seconds. 
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. nfsd D 0000000000000000 0 3029 2 0x00000080 ffff8803814f79e0 0000000000000046 0000000000000000 ffffffff8109213f ffff880434c5e148 ffff880624508d88 ffff8803814f7960 ffffffffa037253f ffff8803815c1098 ffff8803814f7fd8 000000000000fb88 ffff8803815c1098 Call Trace: [<ffffffff8109213f>] ? wake_up_bit+0x2f/0x40 [<ffffffffa037253f>] ? gfs2_holder_wake+0x1f/0x30 [gfs2] [<ffffffff814ff42e>] __mutex_lock_slowpath+0x13e/0x180 [<ffffffff814ff2cb>] mutex_lock+0x2b/0x50 [<ffffffffa0379f21>] gfs2_log_reserve+0x51/0x190 [gfs2] [<ffffffffa0390da2>] gfs2_trans_begin+0x112/0x1d0 [gfs2] [<ffffffffa0369b05>] ? gfs2_dir_check+0x35/0xe0 [gfs2] [<ffffffffa0377943>] gfs2_createi+0x1a3/0xaa0 [gfs2] [<ffffffff8121aab1>] ? avc_has_perm+0x71/0x90 [<ffffffffa0383d1e>] gfs2_create+0x7e/0x1a0 [gfs2] [<ffffffffa037783f>] ? gfs2_createi+0x9f/0xaa0 [gfs2] [<ffffffff81188cf4>] vfs_create+0xb4/0xe0 [<ffffffffa04217d6>] nfsd_create_v3+0x366/0x4c0 [nfsd] [<ffffffffa0429703>] nfsd3_proc_create+0x123/0x1b0 [nfsd] [<ffffffffa041a43e>] nfsd_dispatch+0xfe/0x240 [nfsd] [<ffffffffa025a5d4>] svc_process_common+0x344/0x640 [sunrpc] [<ffffffff810602a0>] ? default_wake_function+0x0/0x20 [<ffffffffa025ac10>] svc_process+0x110/0x160 [sunrpc] [<ffffffffa041ab62>] nfsd+0xc2/0x160 [nfsd] [<ffffffffa041aaa0>] ? nfsd+0x0/0x160 [nfsd] [<ffffffff81091de6>] kthread+0x96/0xa0 [<ffffffff8100c14a>] child_rip+0xa/0x20 [<ffffffff81091d50>] ? kthread+0x0/0xa0 [<ffffffff8100c140>] ? child_rip+0x0/0x20

    Read the article

  • Macbook OSX applications crashing on startup

    - by Ryan Doom

  • SQL Server Express 2008 R2 Installation error at Windows 7

    - by Shai Sherman
Hello, I created an install script that will install SQL Server 2008 R2 on Windows XP SP3, Windows Vista and Windows 7. One of the commands that I use in the installation performs a silent installation of SQL Server 2008 R2. When I install it on Windows XP everything works just fine, but when I try to install it on Windows 7 I get an error. What am I doing wrong? Here is the command line that I use: "Setup.exe /ConfigurationFile=Mysetup.ini" Mysetup.ini file: -------------------------------------Start of ini file --------------------------------- ;SQL SERVER 2008 R2 Configuration File ;Version 1.0, 5 May 2010 ; [SQLSERVER2008] ; Specify the Instance ID for the SQL Server features you have specified. SQL Server directory structure, registry structure, and service names will reflect the instance ID of the SQL Server instance. INSTANCEID="MSSQLSERVER" ; Specifies a Setup work flow, like INSTALL, UNINSTALL, or UPGRADE. This is a required parameter. ACTION="Install" ; Specifies features to install, uninstall, or upgrade. The list of top-level features include SQL, AS, RS, IS, and Tools. The SQL feature will install the database engine, replication, and full-text. The Tools feature will install Management Tools, Books online, Business Intelligence Development Studio, and other shared components. FEATURES=SQLENGINE ; Displays the command line parameters usage HELP="False" ; Specifies that the detailed Setup log should be piped to the console. INDICATEPROGRESS="False" ; Setup will not display any user interface. QUIET="False" ; Setup will display progress only without any user interaction. QUIETSIMPLE="True" ; Specifies that Setup should install into WOW64. This command line argument is not supported on an IA64 or a 32-bit system. ;X86="False" ; Specifies the path to the installation media folder where setup.exe is located. ;MEDIASOURCE="z:\" ; Detailed help for command line argument ENU has not been defined yet. ENU="True" ; Parameter that controls the user interface behavior. Valid values are Normal for the full UI, and AutoAdvance for a simplied UI. ; UIMODE="Normal" ; Specify if errors can be reported to Microsoft to improve future SQL Server releases. Specify 1 or True to enable and 0 or False to disable this feature. ERRORREPORTING="False" ; Specify the root installation directory for native shared components. ;INSTALLSHAREDDIR="D:\Program Files\Microsoft SQL Server" ; Specify the root installation directory for the WOW64 shared components. ;INSTALLSHAREDWOWDIR="D:\Program Files (x86)\Microsoft SQL Server" ; Specify the installation directory. ;INSTANCEDIR="D:\Program Files\Microsoft SQL Server" ; Specify that SQL Server feature usage data can be collected and sent to Microsoft. Specify 1 or True to enable and 0 or False to disable this feature. SQMREPORTING="False" ; Specify a default or named instance. MSSQLSERVER is the default instance for non-Express editions and SQLExpress for Express editions. This parameter is required when installing the SQL Server Database Engine (SQL), Analysis Services (AS), or Reporting Services (RS). INSTANCENAME="SQLEXPRESS" SECURITYMODE=SQL SAPWD=SystemAdmin ; Agent account name AGTSVCACCOUNT="NT AUTHORITY\NETWORK SERVICE" ; Auto-start service after installation. AGTSVCSTARTUPTYPE="Manual" ; Startup type for Integration Services. ;ISSVCSTARTUPTYPE="Automatic" ; Account for Integration Services: Domain\User or system account. ;ISSVCACCOUNT="NT AUTHORITY\NetworkService" ; Controls the service startup type setting after the service has been created. 
;ASSVCSTARTUPTYPE="Automatic" ; The collation to be used by Analysis Services. ;ASCOLLATION="Latin1_General_CI_AS" ; The location for the Analysis Services data files. ;ASDATADIR="Data" ; The location for the Analysis Services log files. ;ASLOGDIR="Log" ; The location for the Analysis Services backup files. ;ASBACKUPDIR="Backup" ; The location for the Analysis Services temporary files. ;ASTEMPDIR="Temp" ; The location for the Analysis Services configuration files. ;ASCONFIGDIR="Config" ; Specifies whether or not the MSOLAP provider is allowed to run in process. ;ASPROVIDERMSOLAP="1" ; A port number used to connect to the SharePoint Central Administration web application. ;FARMADMINPORT="0" ; Startup type for the SQL Server service. SQLSVCSTARTUPTYPE="Automatic" ; Level to enable FILESTREAM feature at (0, 1, 2 or 3). FILESTREAMLEVEL="0" ; Set to "1" to enable RANU for SQL Server Express. ENABLERANU="1" ; Specifies a Windows collation or an SQL collation to use for the Database Engine. SQLCOLLATION="SQL_Latin1_General_CP1_CI_AS" ; Account for SQL Server service: Domain\User or system account. SQLSVCACCOUNT="NT Authority\System" ; Default directory for the Database Engine user databases. ;SQLUSERDBDIR="K:\Microsoft SQL Server\MSSQL\Data" ; Default directory for the Database Engine user database logs. ;SQLUSERDBLOGDIR="L:\Microsoft SQL Server\MSSQL\Data\Logs" ; Directory for Database Engine TempDB files. ;SQLTEMPDBDIR="T:\Microsoft SQL Server\MSSQL\Data" ; Directory for the Database Engine TempDB log files. ;SQLTEMPDBLOGDIR="T:\Microsoft SQL Server\MSSQL\Data\Logs" ; Provision current user as a Database Engine system administrator for SQL Server 2008 R2 Express. ADDCURRENTUSERASSQLADMIN="True" ; Specify 0 to disable or 1 to enable the TCP/IP protocol. TCPENABLED="1" ; Specify 0 to disable or 1 to enable the Named Pipes protocol. NPENABLED="0" ; Startup type for Browser Service. BROWSERSVCSTARTUPTYPE="Automatic" ; Specifies how the startup mode of the report server NT service. When ; Manual - Service startup is manual mode (default). ; Automatic - Service startup is automatic mode. ; Disabled - Service is disabled ;RSSVCSTARTUPTYPE="Automatic" ; Specifies which mode report server is installed in. ; Default value: “FilesOnly” ;RSINSTALLMODE="FilesOnlyMode" ; Accept SQL Server 2008 R2 license terms IACCEPTSQLSERVERLICENSETERMS="TRUE" ;setup.exe /CONFIGURATIONFILE=Mysetup.ini /INDICATEPROGRESS --------------------------- End of ini file ------------------------------------- And i get this error: 2010-08-31 18:05:53 Slp: Error result: -2068119551 2010-08-31 18:05:53 Slp: Result facility code: 1211 2010-08-31 18:05:53 Slp: Result error code: 1 2010-08-31 18:05:53 Slp: Sco: Attempting to create base registry key HKEY_LOCAL_MACHINE, machine 2010-08-31 18:05:53 Slp: Sco: Attempting to open registry subkey 2010-08-31 18:05:53 Slp: Sco: Attempting to open registry subkey Software\Microsoft\PCHealth\ErrorReporting\DW\Installed 2010-08-31 18:05:53 Slp: Sco: Attempting to get registry value DW0200 2010-08-31 18:05:53 Slp: Submitted 1 of 1 failures to the Watson data repository What the meaning of this? What do i need to do to fix that problem? Here is the Summary file: Overall summary: Final result: SQL Server installation failed. To continue, investigate the reason for the failure, correct the problem, uninstall SQL Server, and then rerun SQL Server Setup. Exit code (Decimal): -2068119551 Exit facility code: 1211 Exit error code: 1 Exit message: SQL Server installation failed. 
To continue, investigate the reason for the failure, correct the problem, uninstall SQL Server, and then rerun SQL Server Setup. Start time: 2010-08-31 18:03:44 End time: 2010-08-31 18:05:51 Requested action: Install Log with failure: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20100831_180236\Detail.txt Exception help link: http%3a%2f%2fgo.microsoft.com%2ffwlink%3fLinkId%3d20476%26ProdName%3dMicrosoft%2bSQL%2bServer%26EvtSrc%3dsetup.rll%26EvtID%3d50000%26ProdVer%3d10.50.1600.1%26EvtType%3d0x6121810A%400xC24842DB Machine Properties: Machine name: NVR Machine processor count: 2 OS version: Windows 7 OS service pack: OS region: United States OS language: English (United States) OS architecture: x86 Process architecture: 32 Bit OS clustered: No Product features discovered: Product Instance Instance ID Feature Language Edition Version Clustered Package properties: Description: SQL Server Database Services 2008 R2 ProductName: SQL Server 2008 R2 Type: RTM Version: 10 SPLevel: 0 Installation location: C:\Disk1\setupsql\x86\setup\ Installation edition: EXPRESS User Input Settings: ACTION: Install ADDCURRENTUSERASSQLADMIN: True AGTSVCACCOUNT: NT AUTHORITY\NETWORK SERVICE AGTSVCPASSWORD: * AGTSVCSTARTUPTYPE: Disabled ASBACKUPDIR: Backup ASCOLLATION: Latin1_General_CI_AS ASCONFIGDIR: Config ASDATADIR: Data ASDOMAINGROUP: ASLOGDIR: Log ASPROVIDERMSOLAP: 1 ASSVCACCOUNT: ASSVCPASSWORD: * ASSVCSTARTUPTYPE: Automatic ASSYSADMINACCOUNTS: ASTEMPDIR: Temp BROWSERSVCSTARTUPTYPE: Automatic CONFIGURATIONFILE: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20100831_180236\ConfigurationFile.ini CUSOURCE: ENABLERANU: True ENU: True ERRORREPORTING: False FARMACCOUNT: FARMADMINPORT: 0 FARMPASSWORD: * FEATURES: SQLENGINE FILESTREAMLEVEL: 0 FILESTREAMSHARENAME: FTSVCACCOUNT: FTSVCPASSWORD: * HELP: False IACCEPTSQLSERVERLICENSETERMS: True INDICATEPROGRESS: False INSTALLSHAREDDIR: C:\Program Files\Microsoft SQL Server\ INSTALLSHAREDWOWDIR: C:\Program Files\Microsoft SQL Server\ INSTALLSQLDATADIR: INSTANCEDIR: C:\Program Files\Microsoft SQL Server\ INSTANCEID: MSSQLSERVER INSTANCENAME: SQLEXPRESS ISSVCACCOUNT: NT AUTHORITY\NetworkService ISSVCPASSWORD: * ISSVCSTARTUPTYPE: Automatic NPENABLED: 0 PASSPHRASE: * PCUSOURCE: PID: * QUIET: False QUIETSIMPLE: True ROLE: AllFeatures_WithDefaults RSINSTALLMODE: FilesOnlyMode RSSVCACCOUNT: NT AUTHORITY\NETWORK SERVICE RSSVCPASSWORD: * RSSVCSTARTUPTYPE: Automatic SAPWD: * SECURITYMODE: SQL SQLBACKUPDIR: SQLCOLLATION: SQL_Latin1_General_CP1_CI_AS SQLSVCACCOUNT: NT Authority\System SQLSVCPASSWORD: * SQLSVCSTARTUPTYPE: Automatic SQLSYSADMINACCOUNTS: SQLTEMPDBDIR: SQLTEMPDBLOGDIR: SQLUSERDBDIR: SQLUSERDBLOGDIR: SQMREPORTING: False TCPENABLED: 1 UIMODE: AutoAdvance X86: False Configuration file: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20100831_180236\ConfigurationFile.ini Detailed results: Feature: Database Engine Services Status: Failed: see logs for details MSI status: Passed Configuration status: Failed: see details below Configuration error code: 0x0A2FBD17@1211@1 Configuration error description: The process cannot access the file because it is being used by another process. Configuration log: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20100831_180236\Detail.txt Rules with failures: Global rules: Scenario specific rules: Rules report file: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20100831_180236\SystemConfigurationCheck_Report.htm What should I do and why does this problem occur? 
    Thanks, Shai.
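    The configuration error reported above ("The process cannot access the file because it is being used by another process") usually means that some other process still holds a file Setup wants to touch — often a leftover SQL Server or SQL Browser service from an earlier attempt, or a second setup instance. A hedged sketch of what one could try from an elevated command prompt before re-running the install; the service names below are assumptions derived from INSTANCENAME="SQLEXPRESS", so adjust them to whatever actually exists on the machine:

    :: stop services that may be holding SQL Server files (names are assumptions)
    net stop MSSQL$SQLEXPRESS
    net stop SQLBrowser
    :: check that no SQL-related process or half-finished installer is still running
    tasklist | find /i "sql"
    :: re-run the silent install, piping the detailed setup log to the console
    :: (INDICATEPROGRESS is described in the .ini comments above)
    Setup.exe /ConfigurationFile=Mysetup.ini /INDICATEPROGRESS

    The Detail.txt referenced in the summary (under ...\Setup Bootstrap\Log\<timestamp>\) should then show more precisely which file was locked when configuration failed.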

    Read the article

  • MacBook OS X applications crashing on startup

    - by Ryan Doom
    Closed my computer last night, went home. Opened it and it had restarted. Now when I open a couple programs such as Adobe Fireworks or Appcelerator Titanium they throw up a nasty error like below. Other programs (Chrome, Firefox, Textmate, Versions) work fine. Any thoughts on this? I haven't owned my MacBook long so I'm not even aware of the right tools or places to look to track this down. Any help would be most appreciated. It's making it hard to get my work done :] If it helps at all both those programs were probably open when it restarted. From the look of it I'm not sure if it's a permissions error or something? I completely re-installed one of the applications (Appcelerator Titanium). Didn't seem to help. Process: Adobe Fireworks CS5 [1044] Path: /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/MacOS/Adobe Fireworks CS5 Identifier: com.macromedia.fireworks Version: Adobe Fireworks CS5 version 11.0.0.484 (11.0.0) Code Type: X86 (Native) Parent Process: launchd [87] Date/Time: 2011-02-18 09:45:47.689 -0500 OS Version: Mac OS X 10.6.6 (10J567) Report Version: 6 Interval Since Last Report: 12983 sec Crashes Since Last Report: 6 Per-App Interval Since Last Report: 325365 sec Per-App Crashes Since Last Report: 4 Anonymous UUID: D16EAFE7-2F04-44D4-A984-5902A6EF8943 Exception Type: EXC_BAD_ACCESS (SIGBUS) Exception Codes: KERN_PROTECTION_FAILURE at 0x00000000b0327ff8 Crashed Thread: 7 Thread 0: Dispatch queue: com.apple.main-thread 0 libSystem.B.dylib 0x97dd0142 semaphore_wait_signal_trap + 10 1 libSystem.B.dylib 0x97dd5c46 pthread_mutex_lock + 490 2 libstdc++.6.dylib 0x91887559 __gnu_cxx::__recursive_mutex::lock() + 17 3 libstdc++.6.dylib 0x918874e6 __cxa_guard_acquire + 68 4 libTrueTypeScaler.dylib 0x91c92ab3 TTScalerInfo() + 50 5 libFontParser.dylib 0x9979a5f1 TTrueTypeScaler::CreateTrueTypeScaler() + 43 6 libSystem.B.dylib 0x97dee900 pthread_once + 82 7 libFontParser.dylib 0x9979a575 TTrueTypeScaler::GetTrueTypeScaler() + 47 8 libFontParser.dylib 0x9979a520 TTrueTypeScaler::TTrueTypeScaler(TScalerStrike const&) + 26 9 libFontParser.dylib 0x9979a4be TFontScaler::CreateFontScaler(TScalerStrike const&) + 52 10 libFontParser.dylib 0x9979bd93 FPFontGetGlyphsForUnichars + 344 11 com.apple.CoreText 0x98255cfe TBaseFont::CalculateFontMetrics(bool) const + 342 12 com.apple.CoreText 0x98255b55 TBaseFont::InitFontMetrics() const + 51 13 com.apple.CoreText 0x98255959 TBaseFont::GetStrikeMetrics(float, CGAffineTransform const*, bool) const + 81 14 com.apple.CoreText 0x982558cd TFont::InitStrikeMetrics() const + 55 15 com.apple.CoreText 0x982592cf CTFontGetAscent + 49 16 com.apple.AppKit 0x989f5d08 __NSFontInstanceInfoInitializeMetricsInfo + 48 17 com.apple.AppKit 0x989f5cbc -[__NSSharedFontInstanceInfo _defaultLineHeight:] + 40 18 com.apple.AppKit 0x98f3c5e8 +[NSStringDrawingTextStorage _fastDrawString:attributes:length:inRect:graphicsContext:baselineRendering:usesFontLeading:usesScreenFont:typesetterBehavior:paragraphStyle:lineBreakMode:boundingRect:padding:scrollable:] + 2041 19 com.apple.AppKit 0x98abd2d9 _NSStringDrawingCore + 1555 20 com.apple.AppKit 0x98abca8b _NSDrawTextCell + 3465 21 com.apple.AppKit 0x98ac6185 -[NSTextFieldCell drawInteriorWithFrame:inView:] + 764 22 com.apple.AppKit 0x98ac5d26 -[NSTextFieldCell drawWithFrame:inView:] + 816 23 com.apple.AppKit 0x98ac03de -[NSControl drawRect:] + 589 24 com.apple.AppKit 0x98ab882a -[NSView _drawRect:clip:] + 3510 25 com.apple.AppKit 0x98ab74c8 -[NSView _recursiveDisplayAllDirtyWithLockFocus:visRect:] + 1600 26 
com.apple.AppKit 0x98ab77fd -[NSView _recursiveDisplayAllDirtyWithLockFocus:visRect:] + 2421 27 com.apple.AppKit 0x98ab77fd -[NSView _recursiveDisplayAllDirtyWithLockFocus:visRect:] + 2421 28 com.apple.AppKit 0x98ab59e7 -[NSView _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 711 29 com.apple.AppKit 0x98b54aa3 -[NSNextStepFrame _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 311 30 com.apple.AppKit 0x98ab1ea2 -[NSView _displayRectIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:] + 3309 31 com.apple.AppKit 0x98a12a57 -[NSView displayIfNeeded] + 818 32 com.apple.AppKit 0x989c6661 -[NSNextStepFrame displayIfNeeded] + 98 33 com.apple.AppKit 0x98b55390 -[NSWindow display] + 75 34 com.macromedia.fireworks 0x00bade98 0x1000 + 12242584 35 com.macromedia.fireworks 0x0089f778 0x1000 + 9037688 36 libPowerPlant2.dylib 0x08109722 FW_PowerPlant::LCarbonApp::Run() + 54 37 com.macromedia.fireworks 0x008a138c 0x1000 + 9044876 38 com.macromedia.fireworks 0x00003596 0x1000 + 9622 Thread 1: Dispatch queue: com.apple.libdispatch-manager 0 libSystem.B.dylib 0x97df6982 kevent + 10 1 libSystem.B.dylib 0x97df709c _dispatch_mgr_invoke + 215 2 libSystem.B.dylib 0x97df6559 _dispatch_queue_invoke + 163 3 libSystem.B.dylib 0x97df62fe _dispatch_worker_thread2 + 240 4 libSystem.B.dylib 0x97df5d81 _pthread_wqthread + 390 5 libSystem.B.dylib 0x97df5bc6 start_wqthread + 30 Thread 2: 0 libSystem.B.dylib 0x97df5a12 __workq_kernreturn + 10 1 libSystem.B.dylib 0x97df5fa8 _pthread_wqthread + 941 2 libSystem.B.dylib 0x97df5bc6 start_wqthread + 30 Thread 3: 0 libSystem.B.dylib 0x97dd015a semaphore_timedwait_signal_trap + 10 1 libSystem.B.dylib 0x97dfdce5 _pthread_cond_wait + 1066 2 libSystem.B.dylib 0x97e2cac8 pthread_cond_timedwait_relative_np + 47 3 ...ple.CoreServices.CarbonCore 0x97af4ecd TSWaitOnConditionTimedRelative + 242 4 ...ple.CoreServices.CarbonCore 0x97af4c0b TSWaitOnSemaphoreCommon + 511 5 ...ple.CoreServices.CarbonCore 0x97b18e33 TimerThread + 97 6 libSystem.B.dylib 0x97dfd85d _pthread_start + 345 7 libSystem.B.dylib 0x97dfd6e2 thread_start + 34 Thread 4: 0 libSystem.B.dylib 0x97dd00fa mach_msg_trap + 10 1 libSystem.B.dylib 0x97dd0867 mach_msg + 68 2 ...ple.CoreServices.CarbonCore 0x97b9e0d0 YieldToThread + 446 3 ...ple.CoreServices.CarbonCore 0x97b9e1d3 SetThreadState + 134 4 ...ple.CoreServices.CarbonCore 0x97b9e28e SetThreadStateEndCritical + 111 5 libPowerPlant2.dylib 0x0811ab51 FW_PowerPlant::LThread::SemWait(FW_PowerPlant::LSemaphore*, long, QHdr&, unsigned char&) + 119 6 libPowerPlant2.dylib 0x08119b07 FW_PowerPlant::LSemaphore::BlockThread(long) + 61 7 libPowerPlant2.dylib 0x08119b6d FW_PowerPlant::LSemaphore::Wait(long) + 71 8 libPowerPlant2.dylib 0x0811af70 FW_PowerPlant::LThread::Cleanup::Run() + 32 9 libPowerPlant2.dylib 0x0811b94e FW_PowerPlant::LThread::DoEntry(void*) + 30 10 ...ple.CoreServices.CarbonCore 0x97b9e85f CooperativeThread + 309 11 libSystem.B.dylib 0x97dfd85d _pthread_start + 345 12 libSystem.B.dylib 0x97dfd6e2 thread_start + 34 Thread 5: 0 libSystem.B.dylib 0x97dd0142 semaphore_wait_signal_trap + 10 1 libSystem.B.dylib 0x97dfdcfc _pthread_cond_wait + 1089 2 libSystem.B.dylib 0x97e4646f pthread_cond_wait + 48 3 com.adobe.amt.services 0x1dd73126 AMTConditionLock::LockWhenCondition(int) + 46 4 com.adobe.amt.services 0x1dd6bdb0 _AMTThreadedPCDService::PCDThreadWorker(_AMTThreadedPCDService*) + 116 5 com.adobe.amt.services 0x1dd7318c AMTThread::Worker(void*) + 24 6 libSystem.B.dylib 
0x97dfd85d _pthread_start + 345 7 libSystem.B.dylib 0x97dfd6e2 thread_start + 34 Thread 6: 0 libSystem.B.dylib 0x97dfe0a6 __semwait_signal + 10 1 libSystem.B.dylib 0x97dfdd62 _pthread_cond_wait + 1191 2 libSystem.B.dylib 0x97dff9f8 pthread_cond_wait$UNIX2003 + 73 3 ...ple.CoreServices.CarbonCore 0x97b0951e TSWaitOnCondition + 126 4 ...ple.CoreServices.CarbonCore 0x97af4ea5 TSWaitOnConditionTimedRelative + 202 5 ...ple.CoreServices.CarbonCore 0x97af0873 MPWaitOnQueue + 250 6 com.macromedia.fireworks 0x00ae43cf 0x1000 + 11416527 7 ...ple.CoreServices.CarbonCore 0x97ad485a PrivateMPEntryPoint + 68 8 libSystem.B.dylib 0x97dfd85d _pthread_start + 345 9 libSystem.B.dylib 0x97dfd6e2 thread_start + 34 Thread 7 Crashed: 0 libstdc++.6.dylib 0x9184e00c std::basic_ofstream<char, std::char_traits<char> >::open(char const*, std::_Ios_Openmode) + 16 1 libstdc++.6.dylib 0x9184fe9b std::basic_ifstream<char, std::char_traits<char> >::basic_ifstream(char const*, std::_Ios_Openmode) + 211 2 ...pdaterNotificationFramework 0x1e824779 ESDifstream::ESDifstream(std::string const&, char const*, std::_Ios_Openmode) + 73 3 ...pdaterNotificationFramework 0x1e821b6a esd::ExpatDOMBuilder<esd::XMLDocumentNode>::ParseFile(std::string const&, bool) + 96 4 ...pdaterNotificationFramework 0x1e822da4 esd::PrefsWriter::SetPrefsPath(std::string const&) + 206 5 ...pdaterNotificationFramework 0x1e8449b3 AdobeUpdaterPrefs::AdobeUpdaterPrefs() + 8609 6 ...pdaterNotificationFramework 0x1e8459f4 AdobeUpdaterPrefs::GetAdobeUpdaterPrefs() + 68 7 ...pdaterNotificationFramework 0x1e820728 UpdaterNotificationsImpl::InitLogFile() + 48 8 ...pdaterNotificationFramework 0x1e820d49 UpdaterNotificationsImpl::Instance() + 53 9 ...pdaterNotificationFramework 0x1e823638 UpdaterNotificationsIsUpdaterEnabled + 22 10 com.adobe.amt.services 0x1dd69d15 _AMTAUMService::IsUpdaterEnabled(T_CSUStatusMajor*, int*) + 359 11 com.adobe.amtlib 0x01f5501c AMTAUMServiceIsUpdaterEnabled + 290 12 com.adobe.amtlib 0x01f1f789 AMTImpl::CallMenuEnablers() + 71 13 com.adobe.amtlib 0x01f260fa AMTImpl::DoLaunchWorkflow(AMTImpl::LaunchSequence) + 1664 14 com.adobe.amtlib 0x01f26a5d AMTImpl::DoValidateWorkflow(AMTImpl::LaunchSequence) + 293 15 com.adobe.amtlib 0x01f26cf5 AMTImpl::DoPreValidateWorkflow() + 119 16 com.adobe.amtlib 0x01f26e71 AMTImpl::ServiceLoaderThread(void*) + 45 17 com.adobe.amtlib 0x01f54c48 AMTThread::Worker(void*) + 24 18 libSystem.B.dylib 0x97dfd85d _pthread_start + 345 19 libSystem.B.dylib 0x97dfd6e2 thread_start + 34 Thread 7 crashed with X86 Thread State (32-bit): eax: 0x00000016 ebx: 0x098c9a00 ecx: 0xa013dfc0 edx: 0x00000003 edi: 0x098c9a08 esi: 0x098c9c0c ebp: 0xb03a7448 esp: 0xb0327ff0 ss: 0x0000001f efl: 0x00010202 eip: 0x9184e00c cs: 0x00000017 ds: 0x0000001f es: 0x0000001f fs: 0x0000001f gs: 0x00000037 cr2: 0xb0327ff8 Binary Images: 0x1000 - 0x1448ff1 +com.macromedia.fireworks Adobe Fireworks CS5 version 11.0.0.484 (11.0.0) <38213EBD-FDB0-FC20-40E8-87935A5386BB> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/MacOS/Adobe Fireworks CS5 0x1e76000 - 0x1ec9ffb +com.adobe.headlights.LogSessionFramework ??? 
(2.0.1.011) <4F2BFF03-01D2-A07D-E5E2-7F88D4C2DEC4> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/LogSession.framework/Versions/A/LogSession 0x1f11000 - 0x1f77ffb +com.adobe.amtlib amtlib 3.0.0.64 (3.0.0.64) <DD471011-9120-1BC2-F1B5-D6FF09D0859F> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/amtlib.framework/Versions/A/amtlib 0x1fa7000 - 0x2146fe7 +com.adobe.owl AdobeOwl version 3.0.81 (3.0.81) <9C261D9E-9BD7-5DE6-5676-AEEF4828D17B> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/AdobeOwl.framework/Versions/A/AdobeOwl 0x21af000 - 0x22e7fe7 +WRServices ??? (???) <52CE5B97-1E6A-92A2-EA70-93511AB7EA2E> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/WRServices.framework/Versions/A/WRServices 0x232d000 - 0x239afef +FileInfo ??? (???) <4A4C74F9-CA83-B174-F56D-F7671DC61389> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/FileInfo.framework/Versions/A/FileInfo 0x23b5000 - 0x23dbff6 +AdobeAXE8SharedExpat ??? (???) <5848BBCE-3A3E-66EE-5527-97A96F0CA4CC> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/AdobeAXE8SharedExpat.framework/Versions/A/AdobeAXE8SharedExpat 0x23ec000 - 0x2407fff +AdobeBIB ??? (???) <3B3092DC-A296-9D1C-1922-D20E6A5A7D7E> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/AdobeBIB.framework/Versions/A/AdobeBIB 0x2411000 - 0x2469ff7 +AdobeXMP ??? (???) <73329999-C364-2451-6574-4D0277057D19> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/AdobeXMP.framework/Versions/A/AdobeXMP 0x2478000 - 0x2aa6fe7 +AdobeAGM ??? (???) <91D37E54-E985-47E1-2696-0BD7E4183132> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/AdobeAGM.framework/Versions/A/AdobeAGM 0x2c04000 - 0x2d18fff +AdobeACE ??? (???) <DD291A17-ECF4-FE20-5837-AC1F5BC76940> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/AdobeACE.framework/Versions/A/AdobeACE 0x2d3b000 - 0x302dff7 +AdobeCoolType ??? (???) <9FDD596D-9824-2BB9-5DA2-25DACAB6A324> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/AdobeCoolType.framework/Versions/A/AdobeCoolType 0x30b5000 - 0x30d6ff7 +AdobeBIBUtils ??? (???) <E1FAA7A3-E807-DE5A-1F68-7A53780E8202> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/AdobeBIBUtils.framework/Versions/A/AdobeBIBUtils 0x30e2000 - 0x311efff +AdobeARE ??? (???) <76851E91-2381-5D05-742C-BB24E4BAD276> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/AdobeARE.framework/Versions/A/AdobeARE 0x3127000 - 0x34ffff7 +AdobeMPS ??? (???) <13614867-4D80-EB74-FA7F-6136492478BA> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/AdobeMPS.framework/Versions/A/AdobeMPS 0x362e000 - 0x3c62feb +AdobePDFL ??? (???) 
<49D6D58A-1EBB-424A-4CB0-8F9691E0991D> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/AdobePDFL.framework/Versions/A/AdobePDFL 0x3d8e000 - 0x4ad1fff +com.adobe.psl AdobePSL 12.0.0.7524 (12.0.0.7524) <CFBCB19A-03F7-D095-1F48-8D68F05A25C5> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/AdobePSL.framework/Versions/A/AdobePSL 0x4e10000 - 0x4e9aff7 +com.adobe.AdobeScCore ScCore 4.1.7 (4.1.7.5522) <053A109E-3E3E-D3EE-7186-4920D927D2AD> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/AdobeScCore.framework/Versions/A/AdobeScCore 0x4edd000 - 0x4fc0fef +AdobePDFPort ??? (???) <A2E6DCF7-283F-09E9-53AE-D5D84D020469> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/AdobePDFPort.framework/Versions/A/AdobePDFPort 0x4ff5000 - 0x4ff8ff8 +com.adobe.ape.shim adbeape version 3.1.65.7508 (3.1.65.7508) <FFDDAB7A-220F-7344-F12B-010CA0C41DAB> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/adbeape.framework/Versions/A/adbeape 0x4ffe000 - 0x508fff7 +libicucnv.dylib.36.0 36.0.0 (compatibility 36.0.0) <581475CC-C039-1B42-49BA-71811D8B4E15> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/ICUConverter.framework/Versions/3.6/libicucnv.dylib.36.0 0x50ae000 - 0x5a5efff +libicudata.dylib.36.0 36.0.0 (compatibility 36.0.0) <02108DEA-3DD2-14BE-DAEB-BE522B619C1D> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/ICUData.framework/Versions/3.6/libicudata.dylib.36.0 0x5a61000 - 0x5b2eff3 +libicui18n.dylib.36.0 36.0.0 (compatibility 36.0.0) <08F15219-7F35-574E-7725-1ACAA1B18A00> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/ICUInternationalization.framework/Versions/3.6/libicui18n.dylib.36.0 0x5b91000 - 0x5c6bfef +libicuuc.dylib.36.0 36.0.0 (compatibility 36.0.0) <5EE72009-40B3-7FB7-3A49-576AEDE0D400> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/ICUUnicode.framework/Versions/3.6/libicuuc.dylib.36.0 0x5cab000 - 0x6a36fe7 +com.adobe.illustrator 382 (15.0.0) <64F68532-0311-6BBA-1F50-246CAF917549> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/AILib.framework/Versions/A/AILib 0x781b000 - 0x785ffff +com.adobe.illustrator.aiport AIPort version 1.0 (1.0) <69EDC44E-D7BB-A259-282D-C42725AE0E26> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/AIPort.framework/Versions/A/AIPort 0x78c2000 - 0x7908fff +FilterPort ??? (???) <23FAE9D1-9376-1E71-21F7-D3EB2BFD50EE> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/FilterPort.framework/Versions/A/FilterPort 0x797d000 - 0x797dfff +SPBasic ??? (???) <5D1760D8-C910-C641-0BC9-CF74A1A5190D> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/SPBasic.framework/Versions/A/SPBasic 0x7981000 - 0x7b67ff7 +com.adobe.linguistic.LinguisticManager 5.0.0 (11309) <CA1D50A3-F965-F8B2-76B9-007F290C5791> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/AdobeLinguistic.framework/Versions/3/AdobeLinguistic 0x7bf5000 - 0x7cc2fe7 +AdobeAXEDOMCore ??? (???) 
<F76D74DC-FD5A-9783-C447-2E58773DA7E1> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/AdobeAXEDOMCore.framework/Versions/A/AdobeAXEDOMCore 0x7d31000 - 0x7ea9ffb +com.adobe.PlugPlug 2.0.0.746 (2.0.0.746) <08AD22E3-34C0-6749-E497-616C66A246AD> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/PlugPlug.framework/Versions/A/PlugPlug 0x7f4d000 - 0x7f6afef +libCurl.dylib ??? (???) <1BA6E2DE-EF14-D50A-4697-035AE07875D7> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/MacOS/libCurl.dylib 0x7f72000 - 0x7f88ff4 +libChar16.dylib ??? (???) <19B0479C-72B1-EE14-6385-7F655DEC0F02> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/MacOS/libChar16.dylib 0x7f90000 - 0x7fb3fe0 +libCoreTypes.dylib ??? (???) <F5306147-FFBD-2826-D356-B26258DBFA09> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/MacOS/libCoreTypes.dylib 0x7fc3000 - 0x7fcaffc com.apple.carbonframeworktemplate 1.0 (1.0) <0D270CC7-B715-943E-2B4F-5C9B5775505A> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/NetIO.framework/Versions/A/NetIO 0x7fd6000 - 0x7fd9fff +Dioxide.dylib ??? (???) <BCE94F23-4CCA-20FB-79A8-DE7925879DCD> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/Dioxide.dylib 0x7fe1000 - 0x7fe7ffc +libfwutility.dylib ??? (???) <6A723D9E-A60B-56EE-2B8D-B91991793749> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/libfwutility.dylib 0x7fee000 - 0x803efff +com.macromedia.javascript Javascript version 1.0 (1.0) <540CB029-3946-8E41-BD91-AED6F73C86B7> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/Javascript.framework/Versions/A/Javascript 0x8053000 - 0x8060fff +com.macromedia.moa Moa version 1.0 (1.0) <3C4B7F42-5A5D-78E7-B1DC-DAA06A99CCB2> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/Moa.framework/Versions/A/Moa 0x8069000 - 0x8070fff +com.macromedia.morefiles MoreFiles version 1.0 (1.0) <36115C66-79A3-5DB9-B36B-8D655B46FC76> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/MoreFiles.framework/Versions/A/MoreFiles 0x8077000 - 0x815bfe3 +libPowerPlant2.dylib ??? (???) <964FB3D7-B7EE-94EB-FD95-4AE90C657A4A> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/libPowerPlant2.dylib 0x828e000 - 0x8294ffb +com.macromedia.testframework 1.0 (1.0) <ED14FA00-1C6F-D433-1EEB-833BB4402B2B> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/uwchar.framework/Versions/A/uwchar 0x8298000 - 0x829cffc +com.adobe.AdobeCrashReporter 3.0 (3.0.20100302) <E6437929-0E69-8A56-E69F-F64305E82DD9> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/AdobeCrashReporter.framework/Versions/A/AdobeCrashReporter 0x82a3000 - 0x82bbfef +libgiff.dylib ??? (???) <8F90552B-3D11-2B1E-D1BA-A109FEB99969> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/libgiff.dylib 0x82c3000 - 0x82e1fe7 +com.macromedia.png LibPNG version 1.0 (1.0) <2DBA0A3F-4F01-7474-0FED-3021382D635F> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/LibPNG.framework/Versions/A/LibPNG 0x82e9000 - 0x82f7feb +com.macromedia.zlib ZLib version 1.0 (1.0) <EEA4CFAF-A748-FA72-91F0-ADE7A1BE9FA7> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/ZLib.framework/Versions/A/ZLib 0x82fc000 - 0x8300ffd +com.yourcompany.yourcocoaframework ??? 
(1.0) <7EF7A82E-0AAE-0022-3B15-7C50F1C550C1> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/ASEFramework.framework/Versions/A/ASEFramework 0x8305000 - 0x830cff2 +com.adobe.boost_threads.framework boost_threads version 5.0.0 (5.0.0.0) <F966C78A-3CC1-8678-B3B7-B0A2B118343A> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/boost_threads.framework/Versions/A/boost_threads 0x831c000 - 0x8322fef +com.adobe.boost_date_time.framework boost_date_time version 5.0.0 (5.0.0.0) <8837A972-1EBE-CAA9-473A-CD157F17163D> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/boost_date_time.framework/Versions/A/boost_date_time 0x8333000 - 0x83b0fff +AdobeOwlCanvas ??? (???) <65B2E680-4F43-BE46-2290-3500758D1BF7> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/AdobeOwlCanvas.framework/Versions/A/AdobeOwlCanvas 0x83cc000 - 0x83d7ff3 +com.adobe.boost_filesystem.framework boost_filesystem version 5.0.0 (5.0.0.0) <90B8B4E3-6C44-D110-1545-1A34EB14B22D> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/boost_filesystem.framework/Versions/A/boost_filesystem 0x83eb000 - 0x83edffb +com.adobe.boost_system.framework boost_system version 5.0.0 (5.0.0.0) <0C4D56E8-9593-4C4A-4A7E-BEAEDE1CA131> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/boost_system.framework/Versions/A/boost_system ... E86745B94A4B> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ATS.framework/Versions/A/Resources/libFontParser.dylib 0x9984a000 - 0x9989aff7 com.apple.framework.familycontrols 2.0.2 (2020) <AF7F86F1-F7BF-CBA8-7A4A-D8F7A19F9601> /System/Library/PrivateFrameworks/FamilyControls.framework/Versions/A/FamilyControls 0x99a6e000 - 0x99a6fff7 com.apple.audio.units.AudioUnit 1.6.5 (1.6.5) <BE4C2495-B758-AD22-DCC0-56A6791E948E> /System/Library/Frameworks/AudioUnit.framework/Versions/A/AudioUnit 0x99a72000 - 0x99a86ffb com.apple.speech.synthesis.framework 3.10.35 (3.10.35) <9F5CE4F7-D05C-8C14-4B76-E43D07A8A680> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/SpeechSynthesis.framework/Versions/A/SpeechSynthesis 0xb0000000 - 0xb000fff8 +com.adobe.ahclientframework 1.5.0.30 (1.5.0.30) <24B39C2F-79B0-BDE3-C6D0-1F0E943070C7> /Applications/Adobe Fireworks CS5/Adobe Fireworks CS5.app/Contents/Frameworks/ahclient.framework/Versions/A/ahclient 0xffff0000 - 0xffff1fff libSystem.B.dylib ??? (???) <62291026-D016-705D-DC1E-FC2B09D47DE5> /usr/lib/libSystem.B.dylib If you prefer, Here are the crashes on Pastebin: Crash 1 (Fireworks) Crash 2 (Appcelerator Titanium)
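    In both backtraces the crashing thread (Thread 7) dies inside Adobe's updater code while it is opening its preferences file (ESDifstream / AdobeUpdaterPrefs), which points at something on disk rather than at the applications themselves — plausibly ownership, permissions, or a file left in a bad state by the forced restart. A hedged first-aid sketch for Mac OS X 10.6; the Adobe path and the idea of moving the updater data aside are assumptions, not a confirmed fix:

    # check whether your user still owns its preference folders
    ls -ld ~/Library/Preferences ~/Library/"Application Support"/Adobe
    # if the listing shows files not owned by you, reclaim them
    sudo chown -R "$USER" ~/Library/Preferences ~/Library/"Application Support"/Adobe
    # move the Adobe updater data aside so it is recreated on next launch (path is an assumption)
    mv ~/Library/"Application Support"/Adobe/AAMUpdater ~/Desktop/AAMUpdater.bak
    # verify the startup volume and, on 10.6, repair permissions after the unclean restart
    diskutil verifyVolume /
    sudo diskutil repairPermissions /

    Console.app (Applications > Utilities) collects these crash reports, typically under ~/Library/Logs/DiagnosticReports, which is a reasonable place to watch for further occurrences.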

    Read the article

  • How to automatically remove Flash history/privacy trail? Or stop Flash from storing it?

    - by Arjan van Bentem
    Many people have heard about third-party cookies, and some browsers even block those by default. Some people may even be using Private Browsing modes. However, only few seem to realise that Adobe's Flash player also leaves a cross-browser trail on your local hard drive, and allows for sending cookie-like information back to the server, including third-party sites. And because it is a plugin, Flash does not take any of the browser's privacy settings into account. Sorry for the long post, but first some details about why using Flash raises a privacy concern, followed by the results of my tests: The Flash player keeps a cross-browser history of the domain names of the Flash-sites your computer has visited. Unlike your browser's history, this history is not limited to a certain number of days. History is also recorded while using so-called Private Browsing modes. It is stored on your hard drive (though, as described below, without going to Adobe's site you won't know what is stored). I am not sure if any date and time information is kept about each visit, but to see the domain names: right-click on some Flash content, open the settings dialog, and click the Help icon or click the Advanced button within the Privacy tab. This opens a browser to the help pages on Adobe.com, where one can click through to the Website Storage Settings panel. One can clear the existing list, but one cannot stop it from being recorded again. Flash allows for storing data on your local hard drive, using so-called Local Shared Objects (aka "Flash Cookies"). Just like HTTP cookies, this data can be sent back to the server, for tracking purposes. They are cross-browser, have no expiration date, and no user defined maximum lifetime can be set in the Flash preferences either. These not being HTTP cookies, they are (of course) not blocked by a browser's cookies preferences and are not removed when the normal HTTP cookies are deleted. Adobe has announced that version 10.1 will obey Private Browsing in most popular browsers, but unfortunately no word about also removing the data whenever normal cookies are deleted manually. And its implementation might be confusing: [..] if the browser is in normal browsing mode when the Flash Player instance is created, then that particular instance will forever be in normal browsing mode (private browsing is turned off). Accordingly, toggling private browsing on or off without refreshing the page or closing the private browsing window will not impact Flash Player. Local Shared Objects are not limited to the site you visit, and third-party storage is enabled by default. At the Global Storage Settings panel one can deselect the default Allow third-party Flash content to store data on your computer. Because of the cross-browser and expiration-less nature (and the fact that few people know about it), I feel that the cross-browser third-party Flash Cookies are more dangerous for visitor tracking than third-party normal HTTP cookies. They are even used to restore plain HTTP cookies that the user tried to delete: "All advertisers, websites and networks use cookies for targeted advertising, but cookies are under attack. According to current research they are being erased by 40% of users creating serious problems," says Mookie Tenembaum, founder of United Virtualities. "From simple frequency capping to the more sophisticated behavioral targeting, cookies are an essential part of any online ad campaign. 
PIE ["Persistent Identification Element"] will give publishers and third-party providers a persistent backup to cookies effectively rendering them unassailable", adds Tenembaum. [..] To justify this tracking mechanism, UV's Tenembaum said, "The user is not proficient enough in technology to know if the cookie is good or bad, or how it works." When selecting None (zero KB) for Specify the amount of disk space that website websites that you haven't yet visited can use to store information on your computer, and checking Never ask again then some sites do not work. However, the same site might work when setting it to None but without selecting Never ask again, and then choose Deny whenever prompted. Both options would result in zero KB of data being allowed, but the behaviour differs. The plugin also provides a Flash Player cache for Adobe-signed files. I guess these files are not an issue. So: how to automatically delete that information? On a Mac, one can find a settings.sol file and a folder for each visited Flash-website in: $HOME/Library/Preferences/Macromedia/Flash Player/macromedia.com/support/flashplayer/sys/ Deleting the settings.sol file and all the folders in sys, removes the trail from the settings panels. However, the actual Local Shared Ojects are elsewhere (see Wikipedia for locations on other operating systems), in a randomly named subfolder of: $HOME/Library/Preferences/Macromedia/Flash Player/#SharedObjects But then: how to remove this automatically? Simply removing the folders and the settings.sol file every now and then (like by using launchd or Windows' Task Scheduler) may interfere with active browsers. Or is it safe to assume that, given the cross-browser nature, the plugin would not care if things are removed while it is active? Only clearing during log-off may not work for those who hibernate all the time. Firefox users can install BetterPrivacy or Objection to delete the Local Shared Objects (for all others browsers as well). I don't know if that also deletes the trail of website domain names. Or: how to stop Flash from storing a history trail? Change of plans: I'm currently testing prohibiting Flash to write to its own sys and #SharedObjects folders. So far, Flash has not tried to restore permissions (though, when deleting the folders, Flash will of course recreate them). I've not encountered any problems but this may take some while to validate, using multiple browsers and sites. I've not yet found a log that reports errors. On a Mac: cd "$HOME/Library/Preferences/Macromedia/Flash Player/macromedia.com/support/flashplayer" rm -r sys/* chmod u-w sys cd "$HOME/Library/Preferences/Macromedia/Flash Player" # preserve the randomly named subfolders (only preserving the latest would suffice; see below) rm -r \#SharedObjects/*/* chmod -R u-w \#SharedObjects I guess the above chmods cannot be achieved on an old Windows system (I'm not sure about XP and Vista?). Though maybe on Windows one could replace the folders sys and #SharedObjects with dummy files with the same names? Anyone? Obviously, keeping Flash from storing those Local Shared Objects for all sites may cause problems. Some test results (Flash 10 on Mac OS X): When blocking the sys folder (even when leaving the #SharedObjects folder writable) then YouTube won't remember your volume settings while viewing multiple videos. Temporarily allowing write access to the blocked folders while visiting trusted sites (to only create folders for domains you like, maybe including references in settings.sol) solves that. 
This way, for YouTube, Flash could be allowed to write to sys/#s.ytimg.com and #SharedObjects/s.ytimg.com, while Flash could not create new folders for other domains. One may also need to make settings.sol read-only afterwards, or delete it again. When blocking both the sys and #SharedObjects folders, YouTube and Vimeo work fine (though they might not remember any settings). However, Bits on the Run refuses to even show the video player. This is solved by temporarily unblocking the #SharedObjects folder, to allow Flash to create a subfolder with some random name. Within this folder, it would create yet another folder for the current Flash website (content.bitsontherun.com). Removing that website-specific folder, and blocking both #SharedObjects and the randomly named subfolder, still seems to allow Bits on the Run to operate, even though it still cannot write anything to disk. So: the existence of the randomly named subfolder (even when write-protected) is important for some sites. When I first found the #SharedObjects folder, it held many subfolders with random names, some created on the very same day. I wonder when Flash decides it wants a new folder, and how it determines (and remembers) that random name. For a moment I considered not blocking write access for sys and #SharedObjects, but explicitly creating read-only folders for well-known third-party tracking domains (based, for example, on a list from AdBlock Plus). That way, any other domain could still create Local Shared Objects. But the list would be long, and the domains from AdBlock Plus are probably all third-party domains anyway, so disabling Allow third-party Flash content to store data on your computer might have the very same result. Does anyone have experience with this? (Final notes: if the above links to the settings panels do not work in the future, then use the URL that is known to Flash player as a starting point: www.adobe.com/go/settingsmanager. See also "You Deleted Your Cookies? Think Again" at Wired.com -- which uses Flash cookies itself as well... For the very suspicious using Time Machine: you may want to exclude both folders, for each user, and remove the trace that is already on your backup.)
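    To make the cleanup above automatic, here is a minimal shell sketch of the idea, assuming the Mac paths used earlier in this post and that it runs (e.g. from a nightly launchd or cron job) while no browser with an active Flash instance is open; whether Flash tolerates removal while running is exactly the open question above, so treat this as an experiment, not a fix:

        #!/bin/sh
        # Clean the Flash Player history trail and Local Shared Objects on Mac OS X.
        FLASH_DIR="$HOME/Library/Preferences/Macromedia/Flash Player"
        # temporarily restore write permission in case the folders were write-blocked earlier
        chmod -R u+w "$FLASH_DIR/macromedia.com/support/flashplayer/sys" "$FLASH_DIR/#SharedObjects" 2>/dev/null
        # wipe the domain-name trail kept under sys/ (settings.sol plus one folder per domain)
        rm -rf "$FLASH_DIR/macromedia.com/support/flashplayer/sys/"*
        # wipe the Local Shared Objects, but keep the randomly named top-level folder(s),
        # since some sites (see the Bits on the Run note above) seem to need them to exist
        find "$FLASH_DIR/#SharedObjects" -mindepth 2 -maxdepth 2 -exec rm -rf {} +
        # optional: re-apply the write blocks from the experiment above
        chmod u-w "$FLASH_DIR/macromedia.com/support/flashplayer/sys"
        chmod -R u-w "$FLASH_DIR/#SharedObjects"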

    Read the article

  • I need advice: small memory footprint Linux mail server with spam filtering

    - by petermolnar
    I have a VPS which was originally intended to be a webserver, but some minimal mail capabilities need to be deployed as well, including sending and receiving as a standalone server. The current setup is the following: Postfix receives the mail; the users are in virtual tables, stored in MySQL; on connection, all servers are tested with the policyd-weight service against some DNSBLs; all mail runs through SpamAssassin's spamd with the help of the spamc client; the mail is then delivered with Dovecot 2's LDA (local delivery agent), with virtual users as well. As you saw... there's no virus scanner running, and that's for a reason: ClamAV eats all the memory it can, and besides, virus mails are all filtered out with this setup (I've tested the same with ClamAV enabled for 1.5 years; no virus mail ever even got to ClamAV). I don't use amavisd and I really don't want to. You only need that monster if you have plenty of memory and lots of simultaneous scanners. It's also a nightmare to fine-tune by hand. I run policyd-weight instead of policyd and native DNSBLs in Postfix. I don't like to turn someone away just because a single service listed them. Important statement: everything works fine. I receive a very small amount of spam, nearly never get a false positive, and most of the bad mail is stopped by policyd-weight. The only "problem" is that I feel the services use a bit too much memory altogether. I've already cut down SpamAssassin's modules (see below), but I'd really like to hear some advice on how to cut the memory footprint as low as possible, mostly: which plugins SpamAssassin really needs and which are more or less useless, given my current Postfix & policyd-weight setup? SpamAssassin rules are also compiled with sa-compile (sa-update runs once a week from cron, compile runs right after that). These are some of the current configurations that may matter; please tell me if you need anything more. postfix/master.cf (parts only) dovecot unix - n n - - pipe flags=DRhu user=vmail:vmail argv=/usr/bin/spamc -e /usr/lib/dovecot/deliver -d ${recipient} -f {sender} postfix/main.cf (parts only) smtpd_helo_required = yes smtpd_helo_restrictions = permit_mynetworks, reject_invalid_hostname, permit smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_invalid_hostname, reject_non_fqdn_hostname, reject_non_fqdn_recipient, reject_unknown_recipient_domain, reject_unauth_pipelining, reject_unauth_destination, check_policy_service inet:127.0.0.1:12525, permit policyd-weight.conf (parts only) $REJECTMSG = "550 Mail appeared to be SPAM or forged. Ask your Mail/DNS-Administrator to correct HELO and DNS MX settings or to get removed from DNSBLs"; $REJECTLEVEL = 4; $DEFER_STRING = 'IN_SPAMCOP= BOGUS_MX='; $DEFER_ACTION = '450'; $DEFER_LEVEL = 5; $DNSERRMSG = '450 No DNS entries for your MTA, HELO and Domain. 
Contact YOUR administrator'; # 1: ON, 0: OFF (default) # If ON request that ALL clients are only checked against RBLs $dnsbl_checks_only = 0; # 1: ON (default), 0: OFF # When set to ON it logs only RBLs which affect scoring (positive or negative) $LOG_BAD_RBL_ONLY = 1; ## DNSBL settings @dnsbl_score = ( # host, hit, miss, log name 'dnsbl.ahbl.org', 3, -1, 'dnsbl.ahbl.org', 'dnsbl.njabl.org', 3, -1, 'dnsbl.njabl.org', 'dnsbl.sorbs.net', 3, -1, 'dnsbl.sorbs.net', 'bl.spamcop.net', 3, -1, 'bl.spamcop.net', 'zen.spamhaus.org', 3, -1, 'zen.spamhaus.org', 'pbl.spamhaus.org', 3, -1, 'pbl.spamhaus.org', 'cbl.abuseat.org', 3, -1, 'cbl.abuseat.org', 'list.dsbl.org', 3, -1, 'list.dsbl.org', ); # If Client IP is listed in MORE DNSBLS than this var, it gets REJECTed immediately $MAXDNSBLHITS = 3; # alternatively, if the score of DNSBLs is ABOVE this level, reject immediately $MAXDNSBLSCORE = 9; $MAXDNSBLMSG = '550 Az levelezoszerveruk IP cime tul sok spamlistan talahato, kerjuk ellenorizze! / Your MTA is listed in too many DNSBLs; please check.'; ## RHSBL settings @rhsbl_score = ( 'multi.surbl.org', 4, 0, 'multi.surbl.org', 'rhsbl.ahbl.org', 4, 0, 'rhsbl.ahbl.org', 'dsn.rfc-ignorant.org', 4, 0, 'dsn.rfc-ignorant.org', # 'postmaster.rfc-ignorant.org', 0.1, 0, 'postmaster.rfc-ignorant.org', # 'abuse.rfc-ignorant.org', 0.1, 0, 'abuse.rfc-ignorant.org' ); # skip a RBL if this RBL had this many continuous errors $BL_ERROR_SKIP = 2; # skip a RBL for that many times $BL_SKIP_RELEASE = 10; ## cache stuff # must be a directory (add trailing slash) $LOCKPATH = '/var/run/policyd-weight/'; # socket path for the cache daemon. $SPATH = $LOCKPATH.'/polw.sock'; # how many seconds the cache may be idle before starting maintenance routines #NOTE: standard maintenance jobs happen regardless of this setting. $MAXIDLECACHE = 60; # after this number of requests do following maintenance jobs: checking for config changes $MAINTENANCE_LEVEL = 5; # negative (i.e. SPAM) result cache settings ################################## # set to 0 to disable caching for spam results. To this level the cache will be cleaned. $CACHESIZE = 2000; # at this number of entries cleanup takes place $CACHEMAXSIZE = 4000; $CACHEREJECTMSG = '550 temporarily blocked because of previous errors'; # after NTTL retries the cache entry is deleted $NTTL = 1; # client MUST NOT retry within this seconds in order to decrease TTL counter $NTIME = 30; # positve (i.,e. HAM) result cache settings ################################### # set to 0 to disable caching of HAM. To this number of entries the cache will be cleaned $POSCACHESIZE = 1000; # at this number of entries cleanup takes place $POSCACHEMAXSIZE = 2000; $POSCACHEMSG = 'using cached result'; #after PTTL requests the HAM entry must succeed one time the RBL checks again $PTTL = 60; # after $PTIME in HAM Cache the client must pass one time the RBL checks again. #Values must be nonfractal. Accepted time-units: s, m, h, d $PTIME = '3h'; # The client must pass this time the RBL checks in order to be listed as hard-HAM # After this time the client will pass immediately for PTTL within PTIME $TEMP_PTIME = '1d'; ## DNS settings # Retries for ONE DNS-Lookup $DNS_RETRIES = 1; # Retry-interval for ONE DNS-Lookup $DNS_RETRY_IVAL = 5; # max error count for unresponded queries in a complete policy query $MAXDNSERR = 3; $MAXDNSERRMSG = 'passed - too many local DNS-errors'; # persistent udp connection for DNS queries. #broken in Net::DNS version 0.51. 
Works with Net::DNS 0.53; DEFAULT: off $PUDP= 0; # Force the usage of Net::DNS for RBL lookups. # Normally policyd-weight tries to use a faster RBL lookup routine instead of Net::DNS $USE_NET_DNS = 0; # A list of space separated NS IPs # This overrides resolv.conf settings # Example: $NS = '1.2.3.4 1.2.3.5'; # DEFAULT: empty $NS = ''; # timeout for receiving from cache instance $IPC_TIMEOUT = 2; # If set to 1 policyd-weight closes connections to smtpd clients in order to avoid too many #established connections to one policyd-weight child $TRY_BALANCE = 0; # scores for checks, WARNING: they may manipulate eachother # or be factors for other scores. # HIT score, MISS Score @client_ip_eq_helo_score = (1.5, -1.25 ); @helo_score = (1.5, -2 ); @helo_score = (0, -2 ); @helo_from_mx_eq_ip_score= (1.5, -3.1 ); @helo_numeric_score= (2.5, 0 ); @from_match_regex_verified_helo= (1,-2 ); @from_match_regex_unverified_helo = (1.6, -1.5 ); @from_match_regex_failed_helo = (2.5, 0 ); @helo_seems_dialup = (1.5, 0 ); @failed_helo_seems_dialup= (2, 0 ); @helo_ip_in_client_subnet= (0,-1.2 ); @helo_ip_in_cl16_subnet = (0,-0.41 ); #@client_seems_dialup_score = (3.75, 0 ); @client_seems_dialup_score = (0, 0 ); @from_multiparted = (1.09, 0 ); @from_anon= (1.17, 0 ); @bogus_mx_score = (2.1, 0 ); @random_sender_score = (0.25, 0 ); @rhsbl_penalty_score = (3.1, 0 ); @enforce_dyndns_score = (3, 0 ); spamassassin/init.pre (I've put the .pre files together) loadplugin Mail::SpamAssassin::Plugin::Hashcash loadplugin Mail::SpamAssassin::Plugin::SPF loadplugin Mail::SpamAssassin::Plugin::Pyzor loadplugin Mail::SpamAssassin::Plugin::Razor2 loadplugin Mail::SpamAssassin::Plugin::AutoLearnThreshold loadplugin Mail::SpamAssassin::Plugin::MIMEHeader loadplugin Mail::SpamAssassin::Plugin::ReplaceTags loadplugin Mail::SpamAssassin::Plugin::Check loadplugin Mail::SpamAssassin::Plugin::HTTPSMismatch loadplugin Mail::SpamAssassin::Plugin::URIDetail loadplugin Mail::SpamAssassin::Plugin::Bayes loadplugin Mail::SpamAssassin::Plugin::BodyEval loadplugin Mail::SpamAssassin::Plugin::DNSEval loadplugin Mail::SpamAssassin::Plugin::HTMLEval loadplugin Mail::SpamAssassin::Plugin::HeaderEval loadplugin Mail::SpamAssassin::Plugin::MIMEEval loadplugin Mail::SpamAssassin::Plugin::RelayEval loadplugin Mail::SpamAssassin::Plugin::URIEval loadplugin Mail::SpamAssassin::Plugin::WLBLEval loadplugin Mail::SpamAssassin::Plugin::VBounce loadplugin Mail::SpamAssassin::Plugin::Rule2XSBody spamassassin/local.cf (parts) use_bayes 1 bayes_auto_learn 1 bayes_store_module Mail::SpamAssassin::BayesStore::MySQL bayes_sql_dsn DBI:mysql:db:127.0.0.1:3306 bayes_sql_username user bayes_sql_password pass bayes_ignore_header X-Bogosity bayes_ignore_header X-Spam-Flag bayes_ignore_header X-Spam-Status ### User settings user_scores_dsn DBI:mysql:db:127.0.0.1:3306 user_scores_sql_password user user_scores_sql_username pass user_scores_sql_custom_query SELECT preference, value FROM _TABLE_ WHERE username = _USERNAME_ OR username = '$GLOBAL' OR username = CONCAT('%',_DOMAIN_) ORDER BY username ASC # for better speed score DNS_FROM_AHBL_RHSBL 0 score __RFC_IGNORANT_ENVFROM 0 score DNS_FROM_RFC_DSN 0 score DNS_FROM_RFC_BOGUSMX 0 score __DNS_FROM_RFC_POST 0 score __DNS_FROM_RFC_ABUSE 0 score __DNS_FROM_RFC_WHOIS 0 UPDATE 01 As adaptr advised I remove policyd-weight and configured postfix postscreen, this resulted approximately -15-20 MB from RAM usage and a lot faster work. I'm not sure it's working at full capacity but it seems promising.
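    To make "a bit too much memory" measurable while experimenting with plugin sets, a rough shell sketch like the following could sum resident memory per service before and after a change (the process names are assumptions based on the setup above; on some systems spamd children show up as "perl", and Dovecot spawns several differently named processes):

        #!/bin/sh
        # Sum RSS per mail-related service (Linux, procps ps).
        for name in master smtpd qmgr spamd policyd-weight dovecot imap; do
            ps -C "$name" -o rss= | awk -v n="$name" \
                '{ sum += $1 } END { printf "%-16s %8.1f MB (%d procs)\n", n, sum/1024, NR }'
        done
        # sanity-check the SpamAssassin configuration after removing plugins from the .pre files
        spamassassin --lint && echo "SpamAssassin config OK"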

    Read the article

  • Bacula & Multiple Tape Devices, and so on

    - by Tom O'Connor
    Bacula won't make use of 2 tape devices simultaneously. (Search for #-#-# for the TL;DR) A little background, perhaps. In the process of trying to get a decent working backup solution (backing up 20TB ain't cheap, or easy) at $dayjob, we bought a bunch of things to make it work. Firstly, there's a Spectra Logic T50e autochanger, 40 slots of LTO5 goodness, and that robot's got a pair of IBM HH5 Ultrium LTO5 drives, connected via FibreChannel Arbitrated Loop to our backup server. There's the backup server.. A Dell R715 with 2x 16 core AMD 62xx CPUs, and 32GB of RAM. Yummy. That server's got 2 Emulex FCe-12000E cards, and an Intel X520-SR dual port 10GE NIC. We were also sold Commvault Backup (non-NDMP). Here's where it gets really complicated. Spectra Logic and Commvault both sent respective engineers, who set up the library and the software. Commvault was running fine, in so far as the controller was working fine. The Dell server has Ubuntu 12.04 server, and runs the MediaAgent for CommVault, and mounts our BlueArc NAS as NFS to a few mountpoints, like /home, and some stuff in /mnt. When backing up from the NFS mountpoints, we were seeing ~= 290GB/hr throughput. That's CRAP, considering we've got 20-odd TB to get through, in a <48 hour backup window. The rated maximum on the BlueArc is 700MB/s (2460GB/hr), the rated maximum write speed on the tape devices is 140MB/s, per drive, so that's 492GB/hr (or double it, for the total throughput). So, the next step was to benchmark NFS performance with IOzone, and it turns out that we get epic write performance (across 20 threads), and it's like 1.5-2.5TB/hr write, but read performance is fecking hopeless. I couldn't ever get higher than 343GB/hr maximum. So let's assume that the 343GB/hr is a theoretical maximum for read performance on the NAS, then we should in theory be able to get that performance out of a) CommVault, and b) any other backup agent. Not the case. Commvault seems to only ever give me 200-250GB/hr throughput, and out of experimentation, I installed Bacula to see what the state of play there is. If, for example, Bacula gave consistently better performance and speeds than Commvault, then we'd be able to say "**$.$ Refunds Plz $.$**" #-#-# Alas, I found a different problem with Bacula. Commvault seems pretty happy to read from one part of the mountpoint with one thread, and stream that to a Tape device, whilst reading from some other directory with the other thread, and writing to the 2nd drive in the autochanger. I can't for the life of me get Bacula to mount and write to two tape drives simultaneously. Things I've tried: Setting Maximum Concurrent Jobs = 20 in the Director, File and Storage Daemons Setting Prefer Mounted Volumes = no in the Job Definition Setting multiple devices in the Autochanger resource. Documentation seems to be very single-drive centric, and we feel a little like we've strapped a rocket to a hamster, with this one. The majority of example Bacula configurations are for DDS4 drives, manual tape swapping, and FreeBSD or IRIX systems. I should probably add that I'm not too bothered if this isn't possible, but I'd be surprised. I basically want to use Bacula as proof to stick it to the software vendors that they're overpriced ;) I read somewhere that @KyleBrandt has done something similar with a modern Tape solution.. 
Configuration Files: *bacula-dir.conf* # # Default Bacula Director Configuration file Director { # define myself Name = backuphost-1-dir DIRport = 9101 # where we listen for UA connections QueryFile = "/etc/bacula/scripts/query.sql" WorkingDirectory = "/var/lib/bacula" PidDirectory = "/var/run/bacula" Maximum Concurrent Jobs = 20 Password = "yourekiddingright" # Console password Messages = Daemon DirAddress = 0.0.0.0 #DirAddress = 127.0.0.1 } JobDefs { Name = "DefaultFileJob" Type = Backup Level = Incremental Client = backuphost-1-fd FileSet = "Full Set" Schedule = "WeeklyCycle" Storage = File Messages = Standard Pool = File Priority = 10 Write Bootstrap = "/var/lib/bacula/%c.bsr" } JobDefs { Name = "DefaultTapeJob" Type = Backup Level = Incremental Client = backuphost-1-fd FileSet = "Full Set" Schedule = "WeeklyCycle" Storage = "SpectraLogic" Messages = Standard Pool = AllTapes Priority = 10 Write Bootstrap = "/var/lib/bacula/%c.bsr" Prefer Mounted Volumes = no } # # Define the main nightly save backup job # By default, this job will back up to disk in /nonexistant/path/to/file/archive/dir Job { Name = "BackupClient1" JobDefs = "DefaultFileJob" } Job { Name = "BackupThisVolume" JobDefs = "DefaultTapeJob" FileSet = "SpecialVolume" } #Job { # Name = "BackupClient2" # Client = backuphost-12-fd # JobDefs = "DefaultJob" #} # Backup the catalog database (after the nightly save) Job { Name = "BackupCatalog" JobDefs = "DefaultFileJob" Level = Full FileSet="Catalog" Schedule = "WeeklyCycleAfterBackup" # This creates an ASCII copy of the catalog # Arguments to make_catalog_backup.pl are: # make_catalog_backup.pl <catalog-name> RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog" # This deletes the copy of the catalog RunAfterJob = "/etc/bacula/scripts/delete_catalog_backup" Write Bootstrap = "/var/lib/bacula/%n.bsr" Priority = 11 # run after main backup } # # Standard Restore template, to be changed by Console program # Only one such job is needed for all Jobs/Clients/Storage ... # Job { Name = "RestoreFiles" Type = Restore Client=backuphost-1-fd FileSet="Full Set" Storage = File Pool = Default Messages = Standard Where = /srv/bacula/restore } FileSet { Name = "SpecialVolume" Include { Options { signature = MD5 } File = /mnt/SpecialVolume } Exclude { File = /var/lib/bacula File = /nonexistant/path/to/file/archive/dir File = /proc File = /tmp File = /.journal File = /.fsck } } # List of files to be backed up FileSet { Name = "Full Set" Include { Options { signature = MD5 } File = /usr/sbin } Exclude { File = /var/lib/bacula File = /nonexistant/path/to/file/archive/dir File = /proc File = /tmp File = /.journal File = /.fsck } } Schedule { Name = "WeeklyCycle" Run = Full 1st sun at 23:05 Run = Differential 2nd-5th sun at 23:05 Run = Incremental mon-sat at 23:05 } # This schedule does the catalog. 
It starts after the WeeklyCycle Schedule { Name = "WeeklyCycleAfterBackup" Run = Full sun-sat at 23:10 } # This is the backup of the catalog FileSet { Name = "Catalog" Include { Options { signature = MD5 } File = "/var/lib/bacula/bacula.sql" } } # Client (File Services) to backup Client { Name = backuphost-1-fd Address = localhost FDPort = 9102 Catalog = MyCatalog Password = "surelyyourejoking" # password for FileDaemon File Retention = 30 days # 30 days Job Retention = 6 months # six months AutoPrune = yes # Prune expired Jobs/Files } # # Second Client (File Services) to backup # You should change Name, Address, and Password before using # #Client { # Name = backuphost-12-fd # Address = localhost2 # FDPort = 9102 # Catalog = MyCatalog # Password = "i'mnotjokinganddontcallmeshirley" # password for FileDaemon 2 # File Retention = 30 days # 30 days # Job Retention = 6 months # six months # AutoPrune = yes # Prune expired Jobs/Files #} # Definition of file storage device Storage { Name = File # Do not use "localhost" here Address = localhost # N.B. Use a fully qualified name here SDPort = 9103 Password = "lalalalala" Device = FileStorage Media Type = File } Storage { Name = "SpectraLogic" Address = localhost SDPort = 9103 Password = "linkedinmakethebestpasswords" Device = Drive-1 Device = Drive-2 Media Type = LTO5 Autochanger = yes } # Generic catalog service Catalog { Name = MyCatalog # Uncomment the following line if you want the dbi driver # dbdriver = "dbi:sqlite3"; dbaddress = 127.0.0.1; dbport = dbname = "bacula"; DB Address = ""; dbuser = "bacula"; dbpassword = "bbmaster63" } # Reasonable message delivery -- send most everything to email address # and to the console Messages { Name = Standard mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r" operatorcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r" mail = root@localhost = all, !skipped operator = root@localhost = mount console = all, !skipped, !saved # # WARNING! the following will create a file that you must cycle from # time to time as it will grow indefinitely. However, it will # also keep all your messages if they scroll off the console. # append = "/var/lib/bacula/log" = all, !skipped catalog = all } # # Message delivery for daemon messages (no job). 
Messages { Name = Daemon mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula daemon message\" %r" mail = root@localhost = all, !skipped console = all, !skipped, !saved append = "/var/lib/bacula/log" = all, !skipped } # Default pool definition Pool { Name = Default Pool Type = Backup Recycle = yes # Bacula can automatically recycle Volumes AutoPrune = yes # Prune expired volumes Volume Retention = 365 days # one year } # File Pool definition Pool { Name = File Pool Type = Backup Recycle = yes # Bacula can automatically recycle Volumes AutoPrune = yes # Prune expired volumes Volume Retention = 365 days # one year Maximum Volume Bytes = 50G # Limit Volume size to something reasonable Maximum Volumes = 100 # Limit number of Volumes in Pool } Pool { Name = AllTapes Pool Type = Backup Recycle = yes AutoPrune = yes # Prune expired volumes Volume Retention = 31 days # one Moth } # Scratch pool definition Pool { Name = Scratch Pool Type = Backup } # # Restricted console used by tray-monitor to get the status of the director # Console { Name = backuphost-1-mon Password = "LastFMalsostorePasswordsLikeThis" CommandACL = status, .status } bacula-sd.conf # # Default Bacula Storage Daemon Configuration file # Storage { # definition of myself Name = backuphost-1-sd SDPort = 9103 # Director's port WorkingDirectory = "/var/lib/bacula" Pid Directory = "/var/run/bacula" Maximum Concurrent Jobs = 20 SDAddress = 0.0.0.0 # SDAddress = 127.0.0.1 } # # List Directors who are permitted to contact Storage daemon # Director { Name = backuphost-1-dir Password = "passwordslinplaintext" } # # Restricted Director, used by tray-monitor to get the # status of the storage daemon # Director { Name = backuphost-1-mon Password = "totalinsecurityabound" Monitor = yes } Device { Name = FileStorage Media Type = File Archive Device = /srv/bacula/archive LabelMedia = yes; # lets Bacula label unlabeled media Random Access = Yes; AutomaticMount = yes; # when device opened, read it RemovableMedia = no; AlwaysOpen = no; } Autochanger { Name = SpectraLogic Device = Drive-1 Device = Drive-2 Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d" Changer Device = /dev/sg4 } Device { Name = Drive-1 Drive Index = 0 Archive Device = /dev/nst0 Changer Device = /dev/sg4 Media Type = LTO5 AutoChanger = yes RemovableMedia = yes; AutomaticMount = yes; AlwaysOpen = yes; RandomAccess = no; LabelMedia = yes } Device { Name = Drive-2 Drive Index = 1 Archive Device = /dev/nst1 Changer Device = /dev/sg4 Media Type = LTO5 AutoChanger = yes RemovableMedia = yes; AutomaticMount = yes; AlwaysOpen = yes; RandomAccess = no; LabelMedia = yes } # # Send all messages to the Director, # mount messages also are sent to the email address # Messages { Name = Standard director = backuphost-1-dir = all } bacula-fd.conf # # Default Bacula File Daemon Configuration file # # # List Directors who are permitted to contact this File daemon # Director { Name = backuphost-1-dir Password = "hahahahahaha" } # # Restricted Director, used by tray-monitor to get the # status of the file daemon # Director { Name = backuphost-1-mon Password = "hohohohohho" Monitor = yes } # # "Global" File daemon configuration specifications # FileDaemon { # this is me Name = backuphost-1-fd FDport = 9102 # where we listen for the director WorkingDirectory = /var/lib/bacula Pid Directory = /var/run/bacula Maximum Concurrent Jobs = 20 #FDAddress = 127.0.0.1 FDAddress = 0.0.0.0 } # Send all messages except skipped files back to Director Messages { Name = 
Standard director = backuphost-1-dir = all, !skipped, !restored }
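    One thing that might be worth doing before blaming the Bacula configuration is to rule out the changer/OS layer by driving both drives in parallel outside Bacula. A rough sketch (device paths taken from bacula-sd.conf above; the slot numbers and block size are assumptions, and zeros compress on LTO5, so treat the rates as an upper bound):

        #!/bin/sh
        # Can the OS feed both LTO5 drives at the same time, without Bacula in the picture?
        # Use scratch tapes -- this overwrites whatever is at the start of them.
        mtx -f /dev/sg4 status        # see which slots hold tapes
        mtx -f /dev/sg4 load 1 0      # slot 1 -> drive 0 (/dev/nst0)
        mtx -f /dev/sg4 load 2 1      # slot 2 -> drive 1 (/dev/nst1)
        # write ~10GB to each drive concurrently; GNU dd reports the transfer rate when done
        dd if=/dev/zero of=/dev/nst0 bs=256k count=40960 &
        dd if=/dev/zero of=/dev/nst1 bs=256k count=40960 &
        wait
        mt -f /dev/nst0 rewind
        mt -f /dev/nst1 rewind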

    Read the article

  • UAC being turned off once a day on Windows 7

    - by Mehper C. Palavuzlar
    I have strange problem on my HP laptop. This began to happen recently. Whenever I start my machine, Windows 7 Action Center displays the following warning: You need to restart your computer for UAC to be turned off. Actually, this does not happen if it happened once on a specific day. For example, when I start the machine in the morning, it shows up; but it never shows up in the subsequent restarts within that day. On the next day, the same thing happens again. I never disable UAC, but obviously some rootkit or virus causes this. As soon as I get this warning, I head for the UAC settings, and re-enable UAC to dismiss this warning. This is a bothersome situation as I can't fix it. First, I have run a full scan on the computer for any probable virus and malware/rootkit activity, but TrendMicro OfficeScan said that no viruses have been found. I went to an old Restore Point using Windows System Restore, but the problem was not solved. What I have tried so far (which couldn't find the rootkit): TrendMicro OfficeScan Antivirus AVAST Malwarebytes' Anti-malware Ad-Aware Vipre Antivirus GMER TDSSKiller (Kaspersky Labs) HiJackThis RegRuns UnHackMe SuperAntiSpyware Portable Tizer Rootkit Razor (*) Sophos Anti-Rootkit SpyHunter 4 There are no other strange activities on the machine. Everything works fine except this bizarre incident. What could be the name of this annoying rootkit? How can I detect and remove it? EDIT: Below is the log file generated by HijackThis: Logfile of Trend Micro HijackThis v2.0.4 Scan saved at 13:07:04, on 17.01.2011 Platform: Windows 7 (WinNT 6.00.3504) MSIE: Internet Explorer v8.00 (8.00.7600.16700) Boot mode: Normal Running processes: C:\Windows\system32\taskhost.exe C:\Windows\system32\Dwm.exe C:\Windows\Explorer.EXE C:\Program Files\CheckPoint\SecuRemote\bin\SR_GUI.Exe C:\Windows\System32\igfxtray.exe C:\Windows\System32\hkcmd.exe C:\Windows\system32\igfxsrvc.exe C:\Windows\System32\igfxpers.exe C:\Program Files\Hewlett-Packard\HP Wireless Assistant\HPWAMain.exe C:\Program Files\Synaptics\SynTP\SynTPEnh.exe C:\Program Files\Hewlett-Packard\HP Quick Launch Buttons\QLBCTRL.exe C:\Program Files\Analog Devices\Core\smax4pnp.exe C:\Program Files\Hewlett-Packard\HP Quick Launch Buttons\VolCtrl.exe C:\Program Files\LightningFAX\LFclient\lfsndmng.exe C:\Program Files\Common Files\Java\Java Update\jusched.exe C:\Program Files\Microsoft Office Communicator\communicator.exe C:\Program Files\Iron Mountain\Connected BackupPC\Agent.exe C:\Program Files\Trend Micro\OfficeScan Client\PccNTMon.exe C:\Program Files\Microsoft LifeCam\LifeExp.exe C:\Program Files\Hewlett-Packard\Shared\HpqToaster.exe C:\Program Files\Windows Sidebar\sidebar.exe C:\Program Files\mimio\mimio Studio\system\aps_tablet\atwtusb.exe C:\Program Files\Microsoft Office\Office12\OUTLOOK.EXE C:\Program Files\Babylon\Babylon-Pro\Babylon.exe C:\Program Files\Mozilla Firefox\firefox.exe C:\Users\userx\Desktop\HijackThis.exe R1 - HKCU\Software\Microsoft\Internet Explorer\Main,Search Page = http://go.microsoft.com/fwlink/?LinkId=54896 R0 - HKCU\Software\Microsoft\Internet Explorer\Main,Start Page = about:blank R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Default_Page_URL = http://go.microsoft.com/fwlink/?LinkId=69157 R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Default_Search_URL = http://go.microsoft.com/fwlink/?LinkId=54896 R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Search Page = http://go.microsoft.com/fwlink/?LinkId=54896 R0 - HKLM\Software\Microsoft\Internet Explorer\Main,Start Page = 
http://go.microsoft.com/fwlink/?LinkId=69157 R0 - HKLM\Software\Microsoft\Internet Explorer\Search,SearchAssistant = R0 - HKLM\Software\Microsoft\Internet Explorer\Search,CustomizeSearch = R1 - HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings,AutoConfigURL = http://www.yaysat.com.tr/proxy/proxy.pac R0 - HKCU\Software\Microsoft\Internet Explorer\Toolbar,LinksFolderName = O2 - BHO: AcroIEHelperStub - {18DF081C-E8AD-4283-A596-FA578C2EBDC3} - C:\Program Files\Common Files\Adobe\Acrobat\ActiveX\AcroIEHelperShim.dll O2 - BHO: Babylon IE plugin - {9CFACCB6-2F3F-4177-94EA-0D2B72D384C1} - C:\Program Files\Babylon\Babylon-Pro\Utils\BabylonIEPI.dll O2 - BHO: Java(tm) Plug-In 2 SSV Helper - {DBC80044-A445-435b-BC74-9C25C1C588A9} - C:\Program Files\Java\jre6\bin\jp2ssv.dll O4 - HKLM\..\Run: [IgfxTray] C:\Windows\system32\igfxtray.exe O4 - HKLM\..\Run: [HotKeysCmds] C:\Windows\system32\hkcmd.exe O4 - HKLM\..\Run: [Persistence] C:\Windows\system32\igfxpers.exe O4 - HKLM\..\Run: [hpWirelessAssistant] C:\Program Files\Hewlett-Packard\HP Wireless Assistant\HPWAMain.exe O4 - HKLM\..\Run: [SynTPEnh] C:\Program Files\Synaptics\SynTP\SynTPEnh.exe O4 - HKLM\..\Run: [QlbCtrl.exe] C:\Program Files\Hewlett-Packard\HP Quick Launch Buttons\QlbCtrl.exe /Start O4 - HKLM\..\Run: [SoundMAXPnP] C:\Program Files\Analog Devices\Core\smax4pnp.exe O4 - HKLM\..\Run: [Adobe Reader Speed Launcher] "C:\Program Files\Adobe\Reader 9.0\Reader\Reader_sl.exe" O4 - HKLM\..\Run: [Adobe ARM] "C:\Program Files\Common Files\Adobe\ARM\1.0\AdobeARM.exe" O4 - HKLM\..\Run: [lfsndmng] C:\Program Files\LightningFAX\LFclient\LFSNDMNG.EXE O4 - HKLM\..\Run: [SunJavaUpdateSched] "C:\Program Files\Common Files\Java\Java Update\jusched.exe" O4 - HKLM\..\Run: [Communicator] "C:\Program Files\Microsoft Office Communicator\communicator.exe" /fromrunkey O4 - HKLM\..\Run: [AgentUiRunKey] "C:\Program Files\Iron Mountain\Connected BackupPC\Agent.exe" -ni -sss -e http://localhost:16386/ O4 - HKLM\..\Run: [OfficeScanNT Monitor] "C:\Program Files\Trend Micro\OfficeScan Client\pccntmon.exe" -HideWindow O4 - HKLM\..\Run: [Babylon Client] C:\Program Files\Babylon\Babylon-Pro\Babylon.exe -AutoStart O4 - HKLM\..\Run: [LifeCam] "C:\Program Files\Microsoft LifeCam\LifeExp.exe" O4 - HKCU\..\Run: [Sidebar] C:\Program Files\Windows Sidebar\sidebar.exe /autoRun O4 - Global Startup: mimio Studio.lnk = C:\Program Files\mimio\mimio Studio\mimiosys.exe O8 - Extra context menu item: Microsoft Excel'e &Ver - res://C:\PROGRA~1\MICROS~1\Office12\EXCEL.EXE/3000 O8 - Extra context menu item: Translate this web page with Babylon - res://C:\Program Files\Babylon\Babylon-Pro\Utils\BabylonIEPI.dll/ActionTU.htm O8 - Extra context menu item: Translate with Babylon - res://C:\Program Files\Babylon\Babylon-Pro\Utils\BabylonIEPI.dll/Action.htm O9 - Extra button: Research - {92780B25-18CC-41C8-B9BE-3C9C571A8263} - C:\PROGRA~1\MICROS~1\Office12\REFIEBAR.DLL O9 - Extra button: Translate this web page with Babylon - {F72841F0-4EF1-4df5-BCE5-B3AC8ACF5478} - C:\Program Files\Babylon\Babylon-Pro\Utils\BabylonIEPI.dll O9 - Extra 'Tools' menuitem: Translate this web page with Babylon - {F72841F0-4EF1-4df5-BCE5-B3AC8ACF5478} - C:\Program Files\Babylon\Babylon-Pro\Utils\BabylonIEPI.dll O16 - DPF: {00134F72-5284-44F7-95A8-52A619F70751} (ObjWinNTCheck Class) - https://172.20.12.103:4343/officescan/console/html/ClientInstall/WinNTChk.cab O16 - DPF: {08D75BC1-D2B5-11D1-88FC-0080C859833B} (OfficeScan Corp Edition Web-Deployment SetupCtrl Class) - 
https://172.20.12.103:4343/officescan/console/html/ClientInstall/setup.cab O17 - HKLM\System\CCS\Services\Tcpip\Parameters: Domain = yaysat.com O17 - HKLM\Software\..\Telephony: DomainName = yaysat.com O17 - HKLM\System\CS1\Services\Tcpip\Parameters: Domain = yaysat.com O17 - HKLM\System\CS2\Services\Tcpip\Parameters: Domain = yaysat.com O18 - Protocol: qcom - {B8DBD265-42C3-43E6-B439-E968C71984C6} - C:\Program Files\Common Files\Quest Shared\CodeXpert\qcom.dll O22 - SharedTaskScheduler: FencesShellExt - {1984DD45-52CF-49cd-AB77-18F378FEA264} - C:\Program Files\Stardock\Fences\FencesMenu.dll O23 - Service: Andrea ADI Filters Service (AEADIFilters) - Andrea Electronics Corporation - C:\Windows\system32\AEADISRV.EXE O23 - Service: AgentService - Iron Mountain Incorporated - C:\Program Files\Iron Mountain\Connected BackupPC\AgentService.exe O23 - Service: Agere Modem Call Progress Audio (AgereModemAudio) - LSI Corporation - C:\Program Files\LSI SoftModem\agrsmsvc.exe O23 - Service: BMFMySQL - Unknown owner - C:\Program Files\Quest Software\Benchmark Factory for Databases\Repository\MySQL\bin\mysqld-max-nt.exe O23 - Service: Com4QLBEx - Hewlett-Packard Development Company, L.P. - C:\Program Files\Hewlett-Packard\HP Quick Launch Buttons\Com4QLBEx.exe O23 - Service: hpqwmiex - Hewlett-Packard Development Company, L.P. - C:\Program Files\Hewlett-Packard\Shared\hpqwmiex.exe O23 - Service: OfficeScanNT RealTime Scan (ntrtscan) - Trend Micro Inc. - C:\Program Files\Trend Micro\OfficeScan Client\ntrtscan.exe O23 - Service: SMS Task Sequence Agent (smstsmgr) - Unknown owner - C:\Windows\system32\CCM\TSManager.exe O23 - Service: Check Point VPN-1 Securemote service (SR_Service) - Check Point Software Technologies - C:\Program Files\CheckPoint\SecuRemote\bin\SR_Service.exe O23 - Service: Check Point VPN-1 Securemote watchdog (SR_Watchdog) - Check Point Software Technologies - C:\Program Files\CheckPoint\SecuRemote\bin\SR_Watchdog.exe O23 - Service: Trend Micro Unauthorized Change Prevention Service (TMBMServer) - Trend Micro Inc. - C:\Program Files\Trend Micro\OfficeScan Client\..\BM\TMBMSRV.exe O23 - Service: OfficeScan NT Listener (tmlisten) - Trend Micro Inc. - C:\Program Files\Trend Micro\OfficeScan Client\tmlisten.exe O23 - Service: OfficeScan NT Proxy Service (TmProxy) - Trend Micro Inc. - C:\Program Files\Trend Micro\OfficeScan Client\TmProxy.exe O23 - Service: VNC Server Version 4 (WinVNC4) - RealVNC Ltd. - C:\Program Files\RealVNC\VNC4\WinVNC4.exe -- End of file - 8204 bytes As suggested in this very similar question, I have run full scans (+boot time scans) with RegRun and UnHackMe, but they also did not find anything. I have carefully examined all entries in the Event Viewer, but there's nothing wrong. Now I know that there is a hidden trojan (rootkit) on my machine which seems to disguise itself quite successfully. Note that I don't have the chance to remove the HDD, or reinstall the OS as this is a work machine subjected to certain IT policies on a company domain. Despite all my attempts, the problem still remains. I strictly need a to-the-point method or a pukka rootkit remover to remove whatever it is. I don't want to monkey with the system settings, i.e. disabling auto runs one by one, messing the registry, etc. EDIT 2: I have found an article which is closely related to my trouble: Malware can turn off UAC in Windows 7; “By design” says Microsoft. Special thanks(!) to Microsoft. 
In the article, a VBScript code is given to disable UAC automatically: '// 1337H4x Written by _____________ '// (12 year old) Set WshShell = WScript.CreateObject("WScript.Shell") '// Toggle Start menu WshShell.SendKeys("^{ESC}") WScript.Sleep(500) '// Search for UAC applet WshShell.SendKeys("change uac") WScript.Sleep(2000) '// Open the applet (assuming second result) WshShell.SendKeys("{DOWN}") WshShell.SendKeys("{DOWN}") WshShell.SendKeys("{ENTER}") WScript.Sleep(2000) '// Set UAC level to lowest (assuming out-of-box Default setting) WshShell.SendKeys("{TAB}") WshShell.SendKeys("{DOWN}") WshShell.SendKeys("{DOWN}") WshShell.SendKeys("{DOWN}") '// Save our changes WshShell.SendKeys("{TAB}") WshShell.SendKeys("{ENTER}") '// TODO: Add code to handle installation of rebound '// process to continue exploitation, i.e. place something '// evil in Startup folder '// Reboot the system '// WshShell.Run "shutdown /r /f" Unfortunately, that doesn't tell me how I can get rid of this malicious code running on my system. EDIT 3: Last night, I left the laptop open because of a running SQL task. When I came in the morning, I saw that UAC was turned off. So, I suspect that the problem is not related to startup. It is happening once a day for sure no matter if the machine is rebooted.

    Read the article

  • DNSSEC - Ad Flag not activated

    - by Arancha
    Hi all, I have some doubts regarding DNSSEC. I have one server acting as an Authoritative Name Server and another one as a Cache/Resolver. I'm using Bind 9.7.1-P2 and these are my configuration files: Named.conf (Authoritative Server) // Opciones de configuracion del servidor include "/etc/rndc.key"; controls { inet 127.0.0.1 allow { localhost; } keys { rndc-key; }; }; options{ version "Peticion no permitida/Query not allowed"; hostname "Peticion no permitida/Query not allowed"; server-id "Peticion no permitida/Query not allowed"; directory "/etc/DNS_RIMA"; pid-file "named.pid"; notify yes; #files 65535; dnssec-enable yes; dnssec-validation yes; allow-transfer { 172.23.2.37; 172.23.3.39; }; transfer-format many-answers; transfers-per-ns 5; transfers-in 10; max-transfer-time-in 120; check-names master ignore; listen-on {172.23.2.57; 80.58.102.13; 80.58.102.103; 127.0.0.1; }; }; zone "test.dnssec" { type master; key-directory "keys"; file "db.test.dnssec.signed"; also-notify { 172.23.2.37 ; 172.23.3.39 ; }; allow-transfer { 172.23.2.37 ; 172.23.3.39 ; }; }; test.dnssec zone test.dnssec. 86400 IN SOA ns.test.dnssec. mxadmin.test.dnssec. ( 2010090902 ; serial 21600 ; refresh (6 hours) 3600 ; retry (1 hour) 1814400 ; expire (3 weeks) 172800 ; minimum (2 days) ) 86400 RRSIG SOA 5 2 86400 20101009062248 ( 20100909062248 40665 test.dnssec. eY99laB6PrtETaXLdCS+G8Uq1lIK7d5vxUB1 pAQ9npv/YbvX1pdWZKGojDgPGw8V65Q0zKQo YW1VuBzvwfSRKax+yrjJzvHQGfCZPJWARehK hgLxHOfXLVH7tyndvLD49ZKcWtrop+Tuy4n9 apWWfSJZxCOngwS7zUi0zCTKfPs= ) 86400 NS ns1.test.dnssec. 86400 RRSIG NS 5 2 86400 20101009062248 ( 20100909062248 40665 test.dnssec. lmlP/Mb2qEXPSlajgSDn/CqWk/jokVCmqjeo idNuytxbiFnbCOunzvaYpgvDpEr0CPrwXaDL TSnb/w53tZl7GHRImJo50vwwNZljLzNT6CFw aaQXFc3rDLsXjCi+WF0/Z7meteM4jYdx5nrV Qx9pgur7VPbP88bJOqWCPBev2Ho= ) 172800 NSEC a.test.dnssec. NS SOA RRSIG NSEC DNSKEY 172800 RRSIG NSEC 5 2 172800 20101009062248 ( 20100909062248 40665 test.dnssec. E76ayamsAAz8Zcj7060KY0nTFzHPztM/Pkc5 OM0EcP7C5+ocn4L8M2J0rmR3jxfYvCpOk0BQ Zniqn9Aw41Qk068yJ2dfDPwV5zT0+te0nzwC /awJGPMXLzMj4JejYTlTiKfspGDJCG44F+lb lHXdcUhbjXf3loqMQadZFQ/eSn0= ) 86400 DNSKEY 256 3 5 ( AwEAAbQ8qrNN5vetx/7E1VOgXZ7fLqwG1y/i 55hWGCeLbcS95ratT9A6UospOvPSwPTlrFgF RWP67Pubzbsy7/damS1F1+p4GgBQway52Hd1 8HjdHKKC6kIxna9pOJBRfhCdzAsv9LnpRvrw mDpcFAqhdn5k5RqwcUF1eOZrKjxXjAOr ) ; key id = 40665 86400 DNSKEY 257 3 5 ( AwEAAcd4dxWyTgOuqha0DJADUH0pk5jvnwdM ZhgZaqnayUdeTh8U9WOjOUHdVCGywZS6NTVp xXqhcegWzh2ZR5VN6thuhezt7kbzLNWbPe7m YF29/ZTXB6nmdSxruQlSvYhzkWTaPNtfrUnI UlbDRxUFWQkSHj9LA1TG76FpR6uqOj1sNrWX nPb/Hwp1Sb2Ik4FlifKb/Vu1+/UnclRJgfPm p2HGTeNYpfk15JHBPSYxJ1TuedXQIdkPGlQX ISmAeV1evGomCC/x9DNleDHCszJOptwurzRP Z7wRXcWnbXz1BU8rAqvUZL3M4UgdNRR5LLTz CkRnrlvXYJpgzDtgmQxE9Bs= ) ; key id = 59647 86400 RRSIG DNSKEY 5 2 86400 20101009062248 ( 20100909062248 40665 test.dnssec. sa4W3tvl6n0TkIcq3xzhG17C2O0lRhllrpUd n5Hs6yVo8r7stewP6tm2XscQiAeseDgmv28w s6Mtiz8uPUbrgFRb6SJk7coH2n/2Y3//S9YP NldDFv3luPnnU1TBb3jDsBKIZWHU9yl/cLNA OKUhlMDd40txk+fQi3iiV5Ls9K8= ) 86400 RRSIG DNSKEY 5 2 86400 20101009062248 ( 20100909062248 59647 test.dnssec. b5fz0dEp2co2pVO7biY896XmsJanjQIR69vC MvSF104/9iZk6eGVFi6hsa4aZcXutEjUDESB ynPkDjMWWIIhN6K1jYKGIc/sFKv1IUONRYHF KXGgZhC6aI0B1E4NA9AXLjlBVF60nHdc3iw8 5gTLDjypP3qAZrnzMvdiBopLnVdB25UZYKn8 mGpOuzKqX02TGMCFMlEVtMX4FP/XKAE8UjiQ 5ehC1JvIKIyg/2zM+ot3nmcqqtUfzp/Hweyc aIkl/9wPJPwMedfTqOjfUKFdB+GiZ0Zz16HZ 5MfJui5IGh5Y6Q04kMrnap2V5U7mByTzx/ud V/eFYhmSHGtAXzBjMA== ) a.test.dnssec. 
86400 IN A 1.1.1.1 86400 RRSIG A 5 3 86400 20101009062248 ( 20100909062248 40665 test.dnssec. P52N9ypCrYsgS4CFcUmII0xjyE6KNL9ndhzH oU63fHJHQHeQV+fc0Rx8cCmZSzuqk1lSBelV 3Gcl9UNNuCAQ4ORQ/yJkiZ1zn7h93Mep9qsg YEUQJMfk4FLjYW67DHNcuoCnKbDJhZS0ndVf I474k7ZEZJsGslwk/vcIoFnTa4o= ) 172800 NSEC b.test.dnssec. A RRSIG NSEC 172800 RRSIG NSEC 5 3 172800 20101009062248 ( 20100909062248 40665 test.dnssec. TCduf7xPSrWvEAzBO7Kx5haR85yA/lbsswkQ v0QxlskqAqo+9YedGQV+wGblbCIOmkomrYcq u/rXQ5yoQ3SDXd/bw6EFdoQmH8UJOjMc7SdR xY93MjawPB6XXlJsSlbBFPWJwEpILVRhdBFX czdS5VCa1KmhAYZYQp1FY9rMelA= ) b.test.dnssec. 86400 IN A 2.2.2.2 86400 RRSIG A 5 3 86400 20101009062248 ( 20100909062248 40665 test.dnssec. f0M6Tcqe6B09ctaN3BGAit4u4cJE8x3Ik8sh gyMu0GN/lMv/Bo7PB6hgylLam3HXtF1pPAzX oYudXmhU8afPapHMXfUitC1lFQB5ZW052ZC7 JXV9MnGULydz1blj2EdN+JL3Za8SJKM0LrLB XdQ+QUV+A/6N7hUV6usz5YmdBeI= ) 172800 NSEC ns1.test.dnssec. A RRSIG NSEC 172800 RRSIG NSEC 5 3 172800 20101009062248 ( 20100909062248 40665 test.dnssec. sc6v19dcOFVa295/Xf1pKxBhbdpEErY8CTDQ fw2fjJf0Y3wL1Y1Mlr5zi5ShceQwgua+6YHE DWNbAPcXrJ0lLMU4DU5r0sAyBiBCgCavngGk i59W+nv11zuIpPMnlaMHpJVfJrQ+c4z7H9MH 77B0fMRFTUnvAXoq6ag8Q5POITI= ) ns1.test.dnssec. 86400 IN A 3.3.3.3 86400 RRSIG A 5 3 86400 20101009062248 ( 20100909062248 40665 test.dnssec. UQ3hR/++ta1GokxGz8Yh+GomMcA+xhd3z2Ke z0tdFiNfxvGbm85XyCtSqJIo2S/ZLVJUv/mG nGJbicTfJSziKzYZsD7dp0WJiUK3l7lQ/HpP 5FL8SbjlovVYYAG5woW4p3+os28mmCAJA8gP JTywbcREEhFB4cir2M/QVP+9h+Y= ) 172800 NSEC test.dnssec. A RRSIG NSEC 172800 RRSIG NSEC 5 3 172800 20101009062248 ( 20100909062248 40665 test.dnssec. i7F/ezGl/pGXCC6JyVDaxuwdZMAgv9QLxwzi PTgjCG8Sj6pTIxaQkSLwXsoB9gF77WWBANow R2SWdz0Zai2vWnv/NYoNm9ZfRJEQ9NuExeYp rvX/+lLOHvZXN6tUerIQbWAxO2GwdzHoejSn wReUNVr9MxzZUvuJ33Z7X/7s9VQ= ) Named.conf (Cache/Resolver) include "/etc/rndc.key"; controls { inet 127.0.0.1 allow { localhost; } keys { rndc-key; }; }; options{ version "Peticion no permitida/Query not allowed"; hostname "Peticion no permitida/Query not allowed"; server-id "Peticion no permitida/Query not allowed"; directory "/etc/DNS_RIMA"; pid-file "named.pid"; recursion yes; notify no; #DNSSEC dnssec-enable yes; dnssec-validation yes; listen-on {127.0.0.1; 172.23.2.87; 80.58.102.37; 80.58.102.115; }; #listen-on {127.0.0.1; 80.58.102.37; 80.58.102.115; }; allow-query { telefonica; }; allow-transfer { none; }; recursive-clients 40000; max-cache-size 838860800; rrset-order { order fixed;}; max-ncache-ttl 600; }; trusted-keys { "test.dnssec." 257 3 5 "AwEAAcd4dxWyTgOuqha0DJADUH0pk5jvnwdMZhgZaqnayUdeTh8U9WOjOUHdVCGywZS6NTVpxXqhcegWzh2ZR5VN6thuhezt7kbzLNWbPe7mYF29/ZT XB6nmdSxruQlSvYhzkWTaPNtfrUnIUlbDRxUFWQkSHj9LA1TG76FpR6uqOj1sNrWXnPb/Hwp1Sb2Ik4FlifKb/Vu1+/UnclRJgfPmp2HGTeNYpfk15JHBPSYxJ1TuedXQIdkPGlQXIS mAeV1evGomCC/x9DNleDHCszJOptwurzRPZ7wRXcWnbXz1BU8rAqvUZL3M4UgdNRR5LLTzCkRnrlvXYJpgzDtgmQxE9Bs="; }; I have configured a secure zone (test.dnssec) and I'm trying to perform some queries from the resolver to the Name server (172.23.2.57): /usr/local/bin/dig @172.23.2.57 a.test.dnssec +dnssec ; <<>> DiG 9.7.1-P2 <<>> @172.23.2.57 a.test.dnssec +dnssec ; (1 server found) ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 2654 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 2, ADDITIONAL: 3 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags: do; udp: 4096 ;; QUESTION SECTION: ;a.test.dnssec. IN A ;; ANSWER SECTION: a.test.dnssec. 86400 IN A 1.1.1.1 a.test.dnssec. 86400 IN RRSIG A 5 3 86400 20101009062248 20100909062248 40665 test.dnssec. 
P52N9ypCrYsgS4CFcUmII0xjyE6KNL9ndhzHoU63fHJHQHeQV+ fc0Rx8 cCmZSzuqk1lSBelV3Gcl9UNNuCAQ4ORQ/yJkiZ1zn7h93Mep9qsgYEUQ JMfk4FLjYW67DHNcuoCnKbDJhZS0ndVfI474k7ZEZJsGslwk/vcIoFnT a4o= ;; AUTHORITY SECTION: test.dnssec. 86400 IN NS ns1.test.dnssec. test.dnssec. 86400 IN RRSIG NS 5 2 86400 20101009062248 20100909062248 40665 test.dnssec. lmlP/Mb2qEXPSlajgSDn/CqWk/jokVCmqjeoidNuytxbiFnbCOunzvaY pgvDpEr0CPrwXaDLTSnb/w53tZl7GHRImJo50vwwNZljLzNT6CFwaaQX Fc3rDLsXjCi+WF0/Z7meteM4jYdx5nrVQx9pgur7VPbP88bJOqWCPBev 2Ho= ;; ADDITIONAL SECTION: ns1.test.dnssec. 86400 IN A 3.3.3.3 ns1.test.dnssec. 86400 IN RRSIG A 5 3 86400 20101009062248 20100909062248 40665 test.dnssec. UQ3hR/++ta1GokxGz8Yh+GomMcA+xhd3z2Kez0tdFiNfxvGbm85XyCtS qJIo2S/ZLVJUv/mGnGJbicTfJSziKzYZsD7dp0WJiUK3l7lQ/HpP5FL8 SbjlovVYYAG5woW4p3+os28mmCAJA8gPJTywbcREEhFB4cir2M/QVP+9 h+Y= ;; Query time: 1 msec ;; SERVER: 172.23.2.57#53(172.23.2.57) ;; WHEN: Thu Sep 9 09:47:14 2010 ;; MSG SIZE rcvd: 605 I obtain the right answer along with the RRSIG records, but the problem is that I'm not seeing the ad flag activated. Any idea about what is wrong????
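    For what it's worth, the dig above goes straight to the authoritative server (172.23.2.57), and as far as I know the AD bit is only ever set by a validating resolver, never copied from an authoritative answer. So a check like the following against the caching/resolver box might be more telling (the address is assumed from the listen-on list in the resolver's named.conf):

        # query the validating resolver and look for "ad" in the flags line of the header
        dig @172.23.2.87 a.test.dnssec A +dnssec +multiline
        # fetch the DNSKEY RRset through the resolver too; a broken trust anchor or
        # signature problem would typically show up as SERVFAIL rather than a missing ad flag
        dig @172.23.2.87 test.dnssec DNSKEY +dnssec +multiline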

    Read the article

  • Unstable DNS with bind

    - by yasser abd
    we have a Centos machine called jupiter, on which I have installed bind9, On every other machine the DNS is set to be the IP address of jupiter (192.168.2.101), as you can see in the output of the following command in windows >ipconfig /all Windows IP Configuration Host Name . . . . . . . . . . . . : mypcs Primary Dns Suffix . . . . . . . : Node Type . . . . . . . . . . . . : Hybrid IP Routing Enabled. . . . . . . . : No WINS Proxy Enabled. . . . . . . . : No Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Broadcom NetXtreme 57xx Gigabit Controller Physical Address. . . . . . . . . : 00-1A-A0-AC-E4-CC DHCP Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes Link-local IPv6 Address . . . . . : fe80::c16d:3ae4:5907:30c4%8(Preferred) IPv4 Address. . . . . . . . . . . : 192.168.2.98(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 Lease Obtained. . . . . . . . . . : Thursday, September 20, 2012 10:26:11 AM Lease Expires . . . . . . . . . . : Sunday, September 23, 2012 10:26:10 AM Default Gateway . . . . . . . . . : 192.168.2.1 DHCP Server . . . . . . . . . . . : 192.168.2.1 DHCPv6 IAID . . . . . . . . . . . : 201333408 DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-16-3A-50-01-00-1A-A0-AC-E4-CC DNS Servers . . . . . . . . . . . : 192.168.2.101 192.168.2.1 192.168.2.1 NetBIOS over Tcpip. . . . . . . . : Enabled All machines can always nslookup one of the domain (mydomain.com) that is set in the jupiter's DNS server, you can see that in the output of nslookup on the same windows machine: >nslookup mydomain.com Server: UnKnown Address: 192.168.2.101 Name: mydomain.com Address: 192.168.2.100 The problem is, sometimes mydomain.com can not be pinged, here is the output of the ping on the same windows machine >ping mydomain.com Ping request could not find host mydomain.com. Please check the name and try again. This looks very random, and happens once in a while, so the machine can lookup the DNS records but can't ping it, nor can browse the website that is hosted on mydomain.com, which should resolve to 192.168.2.100 On a linux machine that has the same DNS settings, the output of dig command for mydomain is as follows: $ dig mydomain.com ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.2 <<>> mydomain.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 36090 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1 ;; QUESTION SECTION: ;mydomain.com. IN A ;; ANSWER SECTION: mydomain.com. 86400 IN A 192.168.2.100 ;; AUTHORITY SECTION: mydomain.com. 86400 IN NS jupiter. ;; ADDITIONAL SECTION: jupiter. 86400 IN A 192.168.2.101 ;; Query time: 1 msec ;; SERVER: 192.168.2.101#53(192.168.2.101) ;; WHEN: Thu Sep 20 16:32:14 2012 ;; MSG SIZE rcvd: 83 We've never had the same problem on MACs, they always resolve mydomain.com Here is how I have defined mydomain.com on Bind9's configs on Jupiter, notice that the name of the machine on 192.168.2.100 is venus, so I have this file: /var/named/named.venus: $TTL 1D @ IN SOA jupiter. admin.ourcompany.com. ( 2003052800 ; serial 86400 ; refresh 300 ; retry 604800 ; expire 3600 ; minimum ) @ IN NS jupiter. 
@ IN A 192.168.2.100 * IN A 192.168.2.100 /var/named/zones/named.venus.zone zone "mydomain.com" IN {type master;file "/var/named/named.venus";allow-update {none;};}; One thing to note is that I haven't defined reverse DNS lookups, only the forward DNS lookups are defined in Bind9 configs, not sure if that's relevant or not. So my question is, why is this being so unstable? what could be the cause?
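    To pin down whether the lookups themselves ever fail (as opposed to the Windows side behaving differently -- nslookup queries the DNS server directly, while ping goes through the Windows DNS Client cache and suffix search), a simple watcher on one of the clients could log every failure with a timestamp; a rough sketch, with an arbitrary 2-second interval:

        #!/bin/sh
        # Log failed lookups of mydomain.com against both configured nameservers.
        while true; do
            for ns in 192.168.2.101 192.168.2.1; do
                if ! dig @"$ns" mydomain.com A +short +time=2 +tries=1 | grep -q '^192\.168\.2\.100$'; then
                    echo "$(date '+%F %T') lookup of mydomain.com via $ns FAILED" >> /tmp/dns-watch.log
                fi
            done
            sleep 2
        done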

    Read the article

  • MVC2 EditorTemplate for DropDownList

    - by tschreck
    I've spent the majority of the past week knee deep in the new templating functionality baked into MVC2. I had a hard time trying to get a DropDownList template working. The biggest problem I've been working to solve is how to get the source data for the drop down list to the template. I saw a lot of examples where you can put the source data in the ViewData dictionary (ViewData["DropDownSourceValuesKey"]) then retrieve them in the template itself (var sourceValues = ViewData["DropDownSourceValuesKey"];) This works, but I did not like having a silly string as the lynch pin for making this work. Below is an approach I've come up with and wanted to get opinions on this approach: here are my design goals: The view model should contain the source data for the drop down list Limit Silly Strings Not use ViewData dictionary Controller is responsible for filling the property with the source data for the drop down list Here's my View Model: public class CustomerViewModel { [ScaffoldColumn(false)] public String CustomerCode{ get; set; } [UIHint("DropDownList")] [DropDownList(DropDownListTargetProperty = "CustomerCode"] [DisplayName("Customer Code")] public IEnumerable<SelectListItem> CustomerCodeList { get; set; } public String FirstName { get; set; } public String LastName { get; set; } public String PhoneNumber { get; set; } public String Address1 { get; set; } public String Address2 { get; set; } public String City { get; set; } public String State { get; set; } public String Zip { get; set; } } My View Model has a CustomerCode property which is a value that the user selects from a list of values. I have a CustomerCodeList property that is a list of possible CustomerCode values and is the source for a drop down list. I've created a DropDownList attribute with a DropDownListTargetProperty. DropDownListTargetProperty points to the property which will be populated based on the user selection from the generated drop down (in this case, the CustomerCode property). Notice that the CustomerCode property has [ScaffoldColumn(false)] which forces the generator to skip the field in the generated output. My DropDownList.ascx file will generate a dropdown list form element with the source data from the CustomerCodeList property. The generated dropdown list will use the value of the DropDownListTargetProperty from the DropDownList attribute as the Id and the Name attributes of the Select form element. So the generated code will look like this: <select id="CustomerCode" name="CustomerCode"> <option>... </select> This works out great because when the form is submitted, MVC will populate the target property with the selected value from the drop down list because the name of the generated dropdown list IS the target property. I kinda visualize it as the CustomerCodeList property is an extension of sorts of the CustomerCode property. I've coupled the source data to the property. 
Here's my code for the controller: public ActionResult Create() { //retrieve CustomerCodes from a datasource of your choosing List<CustomerCode> customerCodeList = modelService.GetCustomerCodeList(); CustomerViewModel viewModel= new CustomerViewModel(); viewModel.CustomerCodeList = customerCodeList.Select(s => new SelectListItem() { Text = s.CustomerCode, Value = s.CustomerCode, Selected = (s.CustomerCode == viewModel.CustomerCode) }).AsEnumerable(); return View(viewModel); } Here's my code for the DropDownListAttribute: namespace AutoForm.Attributes { public class DropDownListAttribute : Attribute { public String DropDownListTargetProperty { get; set; } } } Here's my code for the template (DropDownList.ascx): <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<IEnumerable<SelectListItem>>" %> <%@ Import Namespace="AutoForm.Attributes"%> <script runat="server"> DropDownListAttribute GetDropDownListAttribute() { var dropDownListAttribute = new DropDownListAttribute(); if (ViewData.ModelMetadata.AdditionalValues.ContainsKey("DropDownListAttribute")) { dropDownListAttribute = (DropDownListAttribute)ViewData.ModelMetadata.AdditionalValues["DropDownListAttribute"]; } return dropDownListAttribute; } </script> <% DropDownListAttribute attribute = GetDropDownListAttribute();%> <select id="<%= attribute.DropDownListTargetProperty %>" name="<%= attribute.DropDownListTargetProperty %>"> <% foreach(SelectListItem item in ViewData.Model) {%> <% if (item.Selected == true) {%> <option value="<%= item.Value %>" selected="true"><%= item.Text %></option> <% } %> <% else {%> <option value="<%= item.Value %>"><%= item.Text %></option> <% } %> <% } %> </select> I tried using the Html.DropDownList helper, but it would not allow me to change the Id and Name attributes of the generated Select element. NOTE: you have to override the CreateMetadata method of the DataAnnotationsModelMetadataProvider for the DropDownListAttribute. Here's the code for that: public class MetadataProvider : DataAnnotationsModelMetadataProvider { protected override ModelMetadata CreateMetadata(IEnumerable<Attribute> attributes, Type containerType, Func<object> modelAccessor, Type modelType, string propertyName) { var metadata = base.CreateMetadata(attributes, containerType, modelAccessor, modelType, propertyName); var additionalValues = attributes.OfType<DropDownListAttribute>().FirstOrDefault(); if (additionalValues != null) { metadata.AdditionalValues.Add("DropDownListAttribute", additionalValues); } return metadata; } } Then you have to make a call to the new MetadataProvider in Application_Start of Global.asax.cs: protected void Application_Start() { RegisterRoutes(RouteTable.Routes); ModelMetadataProviders.Current = new MetadataProvider(); } Well, I hope this makes sense and I hope this approach may save you some time. I'd like some feedback on this approach please. Is there a better approach?

    Read the article

  • How and where to implement basic authentication in Kibana 3

    - by Jabb
    I have put my elasticsearch server behind a Apache reverse proxy that provides basic authentication. Authenticating to Apache directly from the browser works fine. However, when I use Kibana 3 to access the server, I receive authentication errors. Obviously because no auth headers are sent along with Kibana's Ajax calls. I added the below to elastic-angular-client.js in the Kibana vendor directory to implement authentication quick and dirty. But for some reason it does not work. $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); What is the best approach and place to implement basic authentication in Kibana? /*! elastic.js - v1.1.1 - 2013-05-24 * https://github.com/fullscale/elastic.js * Copyright (c) 2013 FullScale Labs, LLC; Licensed MIT */ /*jshint browser:true */ /*global angular:true */ 'use strict'; /* Angular.js service wrapping the elastic.js API. This module can simply be injected into your angular controllers. */ angular.module('elasticjs.service', []) .factory('ejsResource', ['$http', function ($http) { return function (config) { var // use existing ejs object if it exists ejs = window.ejs || {}, /* results are returned as a promise */ promiseThen = function (httpPromise, successcb, errorcb) { return httpPromise.then(function (response) { (successcb || angular.noop)(response.data); return response.data; }, function (response) { (errorcb || angular.noop)(response.data); return response.data; }); }; // check if we have a config object // if not, we have the server url so // we convert it to a config object if (config !== Object(config)) { config = {server: config}; } // set url to empty string if it was not specified if (config.server == null) { config.server = ''; } /* implement the elastic.js client interface for angular */ ejs.client = { server: function (s) { if (s == null) { return config.server; } config.server = s; return this; }, post: function (path, data, successcb, errorcb) { $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); console.log($http.defaults.headers); path = config.server + path; var reqConfig = {url: path, data: data, method: 'POST'}; return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb); }, get: function (path, data, successcb, errorcb) { $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); path = config.server + path; // no body on get request, data will be request params var reqConfig = {url: path, params: data, method: 'GET'}; return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb); }, put: function (path, data, successcb, errorcb) { $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); path = config.server + path; var reqConfig = {url: path, data: data, method: 'PUT'}; return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb); }, del: function (path, data, successcb, errorcb) { $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); path = config.server + path; var reqConfig = {url: path, data: data, method: 'DELETE'}; return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb); }, head: function (path, data, successcb, errorcb) { $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); path = config.server + path; // no body on HEAD request, data will be request params var reqConfig = {url: path, params: data, method: 'HEAD'}; return 
$http(angular.extend(reqConfig, config)) .then(function (response) { (successcb || angular.noop)(response.headers()); return response.headers(); }, function (response) { (errorcb || angular.noop)(undefined); return undefined; }); } }; return ejs; }; }]);
    UPDATE 1: I implemented Matt's suggestion. However, the server returns a strange response; it seems the Authorization header is not being applied. Could it have to do with the fact that I am running Kibana on port 81 and Elasticsearch on 8181?
        OPTIONS /solar_vendor/_search HTTP/1.1
        Host: 46.252.46.173:8181
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:25.0) Gecko/20100101 Firefox/25.0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
        Accept-Encoding: gzip, deflate
        Origin: http://46.252.46.173:81
        Access-Control-Request-Method: POST
        Access-Control-Request-Headers: authorization,content-type
        Connection: keep-alive
        Pragma: no-cache
        Cache-Control: no-cache
    This is the response:
        HTTP/1.1 401 Authorization Required
        Date: Fri, 08 Nov 2013 23:47:02 GMT
        WWW-Authenticate: Basic realm="Username/Password"
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 346
        Connection: close
        Content-Type: text/html; charset=iso-8859-1
    UPDATE 2: Updated all instances with the modified headers in these Kibana files:
        root@localhost:/var/www/kibana# grep -r 'ejsResource(' .
        ./src/app/controllers/dash.js: $scope.ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}});
        ./src/app/services/querySrv.js: var ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}});
        ./src/app/services/filterSrv.js: var ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}});
        ./src/app/services/dashboard.js: var ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}});
    And modified my vhost conf for the reverse proxy like this:
        <VirtualHost *:8181>
            ProxyRequests Off
            ProxyPass / http://127.0.0.1:9200/
            ProxyPassReverse / https://127.0.0.1:9200/
            <Location />
                Order deny,allow
                Allow from all
                AuthType Basic
                AuthName "Username/Password"
                AuthUserFile /var/www/cake2.2.4/.htpasswd
                Require valid-user
                Header always set Access-Control-Allow-Methods "GET, POST, DELETE, OPTIONS, PUT"
                Header always set Access-Control-Allow-Headers "Content-Type, X-Requested-With, X-HTTP-Method-Override, Origin, Accept, Authorization"
                Header always set Access-Control-Allow-Credentials "true"
                Header always set Cache-Control "max-age=0"
                Header always set Access-Control-Allow-Origin *
            </Location>
            ErrorLog ${APACHE_LOG_DIR}/error.log
        </VirtualHost>
    Apache sends back the new response headers, but the request header still seems to be wrong somewhere. Authentication just doesn't work.
    Request Headers:
        OPTIONS /solar_vendor/_search HTTP/1.1
        Host: 46.252.26.173:8181
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:25.0) Gecko/20100101 Firefox/25.0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
        Accept-Encoding: gzip, deflate
        Origin: http://46.252.26.173:81
        Access-Control-Request-Method: POST
        Access-Control-Request-Headers: authorization,content-type
        Connection: keep-alive
        Pragma: no-cache
        Cache-Control: no-cache
    Response Headers:
        HTTP/1.1 401 Authorization Required
        Date: Sat, 09 Nov 2013 08:48:48 GMT
        Access-Control-Allow-Methods: GET, POST, DELETE, OPTIONS, PUT
        Access-Control-Allow-Headers: Content-Type, X-Requested-With, X-HTTP-Method-Override, Origin, Accept, Authorization
        Access-Control-Allow-Credentials: true
        Cache-Control: max-age=0
        Access-Control-Allow-Origin: *
        WWW-Authenticate: Basic realm="Username/Password"
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 346
        Connection: close
        Content-Type: text/html; charset=iso-8859-1
    SOLUTION: After doing some more research, I found out that this is definitely a CORS configuration issue. There are quite a few posts on that topic, but it appears that solving it along those lines would require some very granular Apache configuration, plus making sure the browser sends exactly the right headers. So I reconsidered the strategy and found a much simpler solution: just modify the vhost reverse-proxy config to serve the Elasticsearch server AND Kibana on the same HTTP port. This also adds even better security to Kibana. This is what I did:
        <VirtualHost *:8181>
            ProxyRequests Off
            ProxyPass /bigdatadesk/ http://127.0.0.1:81/bigdatadesk/src/
            ProxyPassReverse /bigdatadesk/ http://127.0.0.1:81/bigdatadesk/src/
            ProxyPass / http://127.0.0.1:9200/
            ProxyPassReverse / https://127.0.0.1:9200/
            <Location />
                Order deny,allow
                Allow from all
                AuthType Basic
                AuthName "Username/Password"
                AuthUserFile /var/www/.htpasswd
                Require valid-user
            </Location>
            ErrorLog ${APACHE_LOG_DIR}/error.log
        </VirtualHost>
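    One detail worth noting about the quick-and-dirty patch above: Base64Encode() is not a browser built-in, so unless such a helper is defined elsewhere the patched vendor file would fail before the header is ever set; browsers expose btoa() for exactly this. A minimal sketch of setting the header once on Angular's $http defaults instead of editing every client method (the module name 'kibana' and the 'user:Password' credentials are assumptions, not taken from the post):
        // Assumption: Kibana 3's main Angular module is named 'kibana'.
        // btoa() base64-encodes the credentials, which is what HTTP Basic auth expects.
        angular.module('kibana').run(['$http', function ($http) {
          $http.defaults.headers.common['Authorization'] = 'Basic ' + btoa('user:Password');
        }]);
    With the header applied globally, the per-method edits in post/get/put/del/head become unnecessary; what remains is the CORS preflight problem described in the updates above, which the final single-vhost solution sidesteps entirely.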

    Read the article

  • Android WebView not loading a JavaScript file, but Android Browser loads it fine.

    - by Justin
    I'm writing an application which connects to a back office site. The backoffice site contains a whole slew of JavaScript functions, at least 100 times the average site. Unfortunately it does not load them, and causes much of the functionality to not work properly. So I am running a test. I put a page out on my server which loads the FireBugLite javascript text. Its a lot of javascript and perfect to test and see if the Android WebView will load it. The WebView loads nothing, but the browser loads the Firebug Icon. What on earth would make the difference, why can it run in the browser and not in my WebView? Any suggestions. More background information, in order to get the stinking backoffice application available on a Droid (or any other platform except windows) I needed to trick the bakcoffice application to believe what's accessing the website is Internet Explorer. I do this by modifying the WebView User Agent. Also for this application I've slimmed my landing page, so I could give you the source to offer me aid. package ksc.myKMB; import android.app.Activity; import android.app.AlertDialog; import android.app.Dialog; import android.app.ProgressDialog; import android.content.DialogInterface; import android.graphics.Bitmap; import android.os.Bundle; import android.view.Menu; import android.view.MenuInflater; import android.view.MenuItem; import android.view.Window; import android.webkit.WebChromeClient; import android.webkit.WebView; import android.webkit.WebSettings; import android.webkit.WebViewClient; import android.widget.Toast; public class myKMB extends Activity { /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); /** Performs base set up */ /** Create a Activity of this Activity, IE myProcess */ myProcess = this; /*** Create global objects and web browsing objects */ HideDialogOnce = true; webview = new WebView(this) { }; webChromeClient = new WebChromeClient() { public void onProgressChanged(WebView view, int progress) { // Activities and WebViews measure progress with different scales. 
// The progress meter will automatically disappear when we reach 100% myProcess.setProgress((progress * 100)); //CreateMessage("Progress is : " + progress); } }; webViewClient = new WebViewClient() { public void onReceivedError(WebView view, int errorCode, String description, String failingUrl) { Toast.makeText(myProcess, MessageBegText + description + MessageEndText, Toast.LENGTH_SHORT).show(); } public void onPageFinished (WebView view, String url) { /** Hide dialog */ try { // loadingDialog.dismiss(); } finally { } //myProcess.setProgress(1000); /** Fon't show the dialog while I'm performing fixes */ //HideDialogOnce = true; view.loadUrl("javascript:document.getElementById('JTRANS011').style.visibility='visible';"); } public void onPageStarted(WebView view, String url, Bitmap favicon) { if (HideDialogOnce == false) { //loadingDialog = ProgressDialog.show(myProcess, "", // "One moment, the page is laoding...", true); } else { //HideDialogOnce = true; } } }; getWindow().requestFeature(Window.FEATURE_PROGRESS); webview.setWebChromeClient(webChromeClient); webview.setWebViewClient(webViewClient); setContentView(webview); /** Load the Keynote Browser Settings */ LoadSettings(); webview.loadUrl(LandingPage); } /** Get Menu */ @Override public boolean onCreateOptionsMenu(Menu menu) { MenuInflater inflater = getMenuInflater(); inflater.inflate(R.menu.menu, menu); return true; } /** an item gets pushed */ @Override public boolean onOptionsItemSelected(MenuItem item) { switch (item.getItemId()) { // We have only one menu option case R.id.quit: System.exit(0); break; case R.id.back: webview.goBack(); case R.id.refresh: webview.reload(); case R.id.info: //IncludeJavascript(""); } return true; } /** Begin Globals */ public WebView webview; public WebChromeClient webChromeClient; public WebViewClient webViewClient; public ProgressDialog loadingDialog; public Boolean HideDialogOnce; public Activity myProcess; public String OverideUserAgent_IE = "Mozilla/5.0 (Windows; MSIE 6.0; Android 1.6; en-US) AppleWebKit/525.10+ (KHTML, like Gecko) Version/3.0.4 Safari/523.12.2 myKMB/1.0"; public String LandingPage = "http://kscserver.com/main-leap-slim.html"; public String MessageBegText = "Problem making a connection, Details: "; public String MessageEndText = " For Support Call: (xxx) xxx - xxxx."; public void LoadSettings() { webview.getSettings().setUserAgentString(OverideUserAgent_IE); webview.getSettings().setJavaScriptEnabled(true); webview.getSettings().setBuiltInZoomControls(true); webview.getSettings().setSupportZoom(true); } /** Creates a message alert dialog */ public void CreateMessage(String message) { AlertDialog.Builder builder = new AlertDialog.Builder(this); builder.setMessage(message) .setCancelable(true) .setNegativeButton("Close", new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int id) { dialog.cancel(); } }); AlertDialog alert = builder.create(); alert.show(); } } My Application is running in the background, and as you can see no Firebug in the lower right hand corner. However the browser (the emulator on top) has the same page but shows the firebug. What am I doing wrong? I'm assuming its either not enough memory allocated to the application, process power allocation, or a physical memory thing. I can't tell, all I know is the results are strange. I get the same thing form my android device, the application shows no firebug but the browser shows the firebug.
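    A note on debugging this: the stock browser surfaces JavaScript errors in its own console, while a bare WebView swallows them, so a script that dies early simply looks like it "did not load". One way to see what is happening is to forward the page's console output to logcat. A minimal sketch (the log tag is a placeholder; this could equally be merged into the existing webChromeClient; it needs android.util.Log and android.webkit.ConsoleMessage, and this overload requires API level 8 or later):
        webview.setWebChromeClient(new WebChromeClient() {
            @Override
            public boolean onConsoleMessage(ConsoleMessage cm) {
                // Every console.log/error raised by the page ends up in logcat.
                Log.d("myKMB", cm.message() + " -- line " + cm.lineNumber() + " of " + cm.sourceId());
                return true;
            }
        });
    Whatever error shows up there is a more likely explanation than memory or CPU limits for the Firebug Lite icon never rendering inside the WebView.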

    Read the article

  • Use Glassfish JMS from remote client

    - by James
    Hi, I have GlassFish installed on a server with a JMS ConnectionFactory set up as jms/MyConnectionFactory, with a resource type of javax.jms.ConnectionFactory. I now want to access this from a client application on my local machine; for this I have the following:
        public static void main(String[] args) {
            try {
                Properties env = new Properties();
                env.setProperty("java.naming.factory.initial", "com.sun.enterprise.naming.SerialInitContextFactory");
                env.setProperty("java.naming.factory.url.pkgs", "com.sun.enterprise.naming");
                env.setProperty("java.naming.factory.state", "com.sun.corba.ee.impl.presentation.rmi.JNDIStateFactoryImpl");
                env.setProperty("org.omg.CORBA.ORBInitialHost", "10.97.3.74");
                env.setProperty("org.omg.CORBA.ORBInitialPort", "3700");
                InitialContext initialContext = new InitialContext(env);
                ConnectionFactory connectionFactory = null;
                try {
                    connectionFactory = (ConnectionFactory) initialContext.lookup("jms/MyConnectionFactory");
                } catch (Exception e) {
                    System.out.println("JNDI API lookup failed: " + e.toString());
                    e.printStackTrace();
                    System.exit(1);
                }
            } catch (Exception e) {
                e.printStackTrace(System.err);
            }
        }
    When I run my client I get the following output:
        INFO: Using com.sun.enterprise.transaction.jts.JavaEETransactionManagerJTSDelegate as the delegate
        {org.omg.CORBA.ORBInitialPort=3700, java.naming.factory.initial=com.sun.enterprise.naming.SerialInitContextFactory, org.omg.CORBA.ORBInitialHost=10.97.3.74, java.naming.factory.state=com.sun.corba.ee.impl.presentation.rmi.JNDIStateFactoryImpl, java.naming.factory.url.pkgs=com.sun.enterprise.naming}
        19-Mar-2010 16:09:13 org.hibernate.validator.util.Version <clinit>
        INFO: Hibernate Validator bean-validator-3.0-JBoss-4.0.2
        19-Mar-2010 16:09:13 org.hibernate.validator.engine.resolver.DefaultTraversableResolver detectJPA
        INFO: Instantiated an instance of org.hibernate.validator.engine.resolver.JPATraversableResolver.
19-Mar-2010 16:09:13 com.sun.messaging.jms.ra.ResourceAdapter start INFO: MQJMSRA_RA1101: SJSMQ JMS Resource Adapter starting: REMOTE 19-Mar-2010 16:09:13 com.sun.messaging.jms.ra.ResourceAdapter start INFO: MQJMSRA_RA1101: SJSMQ JMSRA Started:REMOTE 19-Mar-2010 16:09:13 com.sun.enterprise.naming.impl.SerialContext lookup SEVERE: enterprise_naming.serialctx_communication_exception 19-Mar-2010 16:09:13 com.sun.enterprise.naming.impl.SerialContext lookup SEVERE: java.lang.RuntimeException: com.sun.appserv.connectors.internal.api.ConnectorRuntimeException: This pool is not bound in JNDI : jms/MyConnectionFactory at com.sun.enterprise.resource.naming.ConnectorObjectFactory.getObjectInstance(ConnectorObjectFactory.java:159) at javax.naming.spi.NamingManager.getObjectInstance(NamingManager.java:304) at com.sun.enterprise.naming.impl.SerialContext.getObjectInstance(SerialContext.java:472) at com.sun.enterprise.naming.impl.SerialContext.lookup(SerialContext.java:437) at javax.naming.InitialContext.lookup(InitialContext.java:392) at simpleproducerclient.Main.main(Main.java:89) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.glassfish.appclient.client.acc.AppClientContainer.launch(AppClientContainer.java:424) at org.glassfish.appclient.client.AppClientFacade.main(AppClientFacade.java:134) Caused by: com.sun.appserv.connectors.internal.api.ConnectorRuntimeException: This pool is not bound in JNDI : jms/MyConnectionFactory at com.sun.enterprise.connectors.service.ConnectorConnectionPoolAdminServiceImpl.obtainManagedConnectionFactory(ConnectorConnectionPoolAdminServiceImpl.java:1017) at com.sun.enterprise.connectors.ConnectorRuntime.obtainManagedConnectionFactory(ConnectorRuntime.java:375) at com.sun.enterprise.resource.naming.ConnectorObjectFactory.getObjectInstance(ConnectorObjectFactory.java:124) ... 11 more Caused by: javax.naming.NamingException: Lookup failed for '__SYSTEM/pools/jms/MyConnectionFactory' in SerialContext targetHost=localhost,targetPort=3700,orb'sInitialHost=ithfdv01,orb'sInitialPort=3700 [Root exception is javax.naming.NameNotFoundException: pools] at com.sun.enterprise.naming.impl.SerialContext.lookup(SerialContext.java:442) at javax.naming.InitialContext.lookup(InitialContext.java:392) at com.sun.enterprise.connectors.service.ConnectorConnectionPoolAdminServiceImpl.getConnectorConnectionPool(ConnectorConnectionPoolAdminServiceImpl.java:804) at com.sun.enterprise.connectors.service.ConnectorConnectionPoolAdminServiceImpl.obtainManagedConnectionFactory(ConnectorConnectionPoolAdminServiceImpl.java:932) ... 
13 more Caused by: javax.naming.NameNotFoundException: pools at com.sun.enterprise.naming.impl.TransientContext.resolveContext(TransientContext.java:252) at com.sun.enterprise.naming.impl.TransientContext.lookup(TransientContext.java:171) at com.sun.enterprise.naming.impl.TransientContext.lookup(TransientContext.java:172) at com.sun.enterprise.naming.impl.SerialContextProviderImpl.lookup(SerialContextProviderImpl.java:58) at com.sun.enterprise.naming.impl.RemoteSerialContextProviderImpl.lookup(RemoteSerialContextProviderImpl.java:89) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.sun.corba.ee.impl.presentation.rmi.ReflectiveTie.dispatchToMethod(ReflectiveTie.java:146) at com.sun.corba.ee.impl.presentation.rmi.ReflectiveTie._invoke(ReflectiveTie.java:176) at com.sun.corba.ee.impl.protocol.CorbaServerRequestDispatcherImpl.dispatchToServant(CorbaServerRequestDispatcherImpl.java:682) at com.sun.corba.ee.impl.protocol.CorbaServerRequestDispatcherImpl.dispatch(CorbaServerRequestDispatcherImpl.java:216) at com.sun.corba.ee.impl.protocol.CorbaMessageMediatorImpl.handleRequestRequest(CorbaMessageMediatorImpl.java:1841) at com.sun.corba.ee.impl.protocol.CorbaMessageMediatorImpl.handleRequest(CorbaMessageMediatorImpl.java:1695) at com.sun.corba.ee.impl.protocol.CorbaMessageMediatorImpl.handleInput(CorbaMessageMediatorImpl.java:1078) at com.sun.corba.ee.impl.protocol.giopmsgheaders.RequestMessage_1_2.callback(RequestMessage_1_2.java:221) at com.sun.corba.ee.impl.protocol.CorbaMessageMediatorImpl.handleRequest(CorbaMessageMediatorImpl.java:797) at com.sun.corba.ee.impl.protocol.CorbaMessageMediatorImpl.dispatch(CorbaMessageMediatorImpl.java:561) JNDI API lookup failed: javax.naming.CommunicationException: Communication exception for SerialContext targetHost=10.97.3.74,targetPort=3700,orb'sInitialHost=ithfdv01,orb'sInitialPort=3700 [Root exception is java.lang.RuntimeException: com.sun.appserv.connectors.internal.api.ConnectorRuntimeException: This pool is not bound in JNDI : jms/MyConnectionFactory] at com.sun.corba.ee.impl.protocol.CorbaMessageMediatorImpl.doWork(CorbaMessageMediatorImpl.java:2558) at com.sun.corba.ee.impl.orbutil.threadpool.ThreadPoolImpl$WorkerThread.performWork(ThreadPoolImpl.java:492) at com.sun.corba.ee.impl.orbutil.threadpool.ThreadPoolImpl$WorkerThread.run(ThreadPoolImpl.java:528) javax.naming.CommunicationException: Communication exception for SerialContext targetHost=10.97.3.74,targetPort=3700,orb'sInitialHost=ithfdv01,orb'sInitialPort=3700 [Root exception is java.lang.RuntimeException: com.sun.appserv.connectors.internal.api.ConnectorRuntimeException: This pool is not bound in JNDI : jms/MyConnectionFactory] at com.sun.enterprise.naming.impl.SerialContext.lookup(SerialContext.java:461) at javax.naming.InitialContext.lookup(InitialContext.java:392) at simpleproducerclient.Main.main(Main.java:89) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.glassfish.appclient.client.acc.AppClientContainer.launch(AppClientContainer.java:424) at 
org.glassfish.appclient.client.AppClientFacade.main(AppClientFacade.java:134) Caused by: java.lang.RuntimeException: com.sun.appserv.connectors.internal.api.ConnectorRuntimeException: This pool is not bound in JNDI : jms/MyConnectionFactory at com.sun.enterprise.resource.naming.ConnectorObjectFactory.getObjectInstance(ConnectorObjectFactory.java:159) at javax.naming.spi.NamingManager.getObjectInstance(NamingManager.java:304) at com.sun.enterprise.naming.impl.SerialContext.getObjectInstance(SerialContext.java:472) at com.sun.enterprise.naming.impl.SerialContext.lookup(SerialContext.java:437) ... 8 more Caused by: com.sun.appserv.connectors.internal.api.ConnectorRuntimeException: This pool is not bound in JNDI : jms/MyConnectionFactory at com.sun.enterprise.connectors.service.ConnectorConnectionPoolAdminServiceImpl.obtainManagedConnectionFactory(ConnectorConnectionPoolAdminServiceImpl.java:1017) at com.sun.enterprise.connectors.ConnectorRuntime.obtainManagedConnectionFactory(ConnectorRuntime.java:375) at com.sun.enterprise.resource.naming.ConnectorObjectFactory.getObjectInstance(ConnectorObjectFactory.java:124) ... 11 more Caused by: javax.naming.NamingException: Lookup failed for '__SYSTEM/pools/jms/MyConnectionFactory' in SerialContext targetHost=localhost,targetPort=3700,orb'sInitialHost=ithfdv01,orb'sInitialPort=3700 [Root exception is javax.naming.NameNotFoundException: pools] at com.sun.enterprise.naming.impl.SerialContext.lookup(SerialContext.java:442) at javax.naming.InitialContext.lookup(InitialContext.java:392) at com.sun.enterprise.connectors.service.ConnectorConnectionPoolAdminServiceImpl.getConnectorConnectionPool(ConnectorConnectionPoolAdminServiceImpl.java:804) at com.sun.enterprise.connectors.service.ConnectorConnectionPoolAdminServiceImpl.obtainManagedConnectionFactory(ConnectorConnectionPoolAdminServiceImpl.java:932) ... 
13 more Caused by: javax.naming.NameNotFoundException: pools at com.sun.enterprise.naming.impl.TransientContext.resolveContext(TransientContext.java:252) at com.sun.enterprise.naming.impl.TransientContext.lookup(TransientContext.java:171) at com.sun.enterprise.naming.impl.TransientContext.lookup(TransientContext.java:172) at com.sun.enterprise.naming.impl.SerialContextProviderImpl.lookup(SerialContextProviderImpl.java:58) at com.sun.enterprise.naming.impl.RemoteSerialContextProviderImpl.lookup(RemoteSerialContextProviderImpl.java:89) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.sun.corba.ee.impl.presentation.rmi.ReflectiveTie.dispatchToMethod(ReflectiveTie.java:146) at com.sun.corba.ee.impl.presentation.rmi.ReflectiveTie._invoke(ReflectiveTie.java:176) at com.sun.corba.ee.impl.protocol.CorbaServerRequestDispatcherImpl.dispatchToServant(CorbaServerRequestDispatcherImpl.java:682) at com.sun.corba.ee.impl.protocol.CorbaServerRequestDispatcherImpl.dispatch(CorbaServerRequestDispatcherImpl.java:216) at com.sun.corba.ee.impl.protocol.CorbaMessageMediatorImpl.handleRequestRequest(CorbaMessageMediatorImpl.java:1841) at com.sun.corba.ee.impl.protocol.CorbaMessageMediatorImpl.handleRequest(CorbaMessageMediatorImpl.java:1695) at com.sun.corba.ee.impl.protocol.CorbaMessageMediatorImpl.handleInput(CorbaMessageMediatorImpl.java:1078) at com.sun.corba.ee.impl.protocol.giopmsgheaders.RequestMessage_1_2.callback(RequestMessage_1_2.java:221) at com.sun.corba.ee.impl.protocol.CorbaMessageMediatorImpl.handleRequest(CorbaMessageMediatorImpl.java:797) at com.sun.corba.ee.impl.protocol.CorbaMessageMediatorImpl.dispatch(CorbaMessageMediatorImpl.java:561) at com.sun.corba.ee.impl.protocol.CorbaMessageMediatorImpl.doWork(CorbaMessageMediatorImpl.java:2558) at com.sun.corba.ee.impl.orbutil.threadpool.ThreadPoolImpl$WorkerThread.performWork(ThreadPoolImpl.java:492) at com.sun.corba.ee.impl.orbutil.threadpool.ThreadPoolImpl$WorkerThread.run(ThreadPoolImpl.java:528) I have looked at a number of posts and have tried a number of things with no success. I can run the following commands on my server: ./asadmin list-jndi-entries UserTransaction: com.sun.enterprise.transaction.TransactionNamingProxy$UserTransactionProxy java:global: com.sun.enterprise.naming.impl.TransientContext jdbc: com.sun.enterprise.naming.impl.TransientContext ejb: com.sun.enterprise.naming.impl.TransientContext com.sun.enterprise.container.common.spi.util.InjectionManager: com.sun.enterprise.container.common.impl.util.InjectionManagerImpl jms: com.sun.enterprise.naming.impl.TransientContext Command list-jndi-entries executed successfully. ./asadmin list-jndi-entries --context jms MyTopic: org.glassfish.javaee.services.ResourceProxy MyConnectionFactory: org.glassfish.javaee.services.ResourceProxy MyQueue: org.glassfish.javaee.services.ResourceProxy Command list-jndi-entries executed successfully. Any help is greatly appreciated. Cheers, James
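    Since the failure happens while resolving the server-side connector pool rather than in the client code itself, one hedged workaround for a standalone client is to skip the remote JNDI lookup and talk to the Open MQ broker directly. A minimal sketch, assuming the broker listens on the default port 7676 on the same host as GlassFish (imq.jar and jms.jar need to be on the client classpath; host and port are assumptions):
        import javax.jms.Connection;
        import javax.jms.JMSException;
        import com.sun.messaging.ConnectionConfiguration;

        public class DirectMqClient {
            public static void main(String[] args) throws JMSException {
                // Build the connection factory locally instead of looking it up in JNDI.
                com.sun.messaging.ConnectionFactory cf = new com.sun.messaging.ConnectionFactory();
                cf.setProperty(ConnectionConfiguration.imqAddressList, "mq://10.97.3.74:7676");
                Connection connection = cf.createConnection();
                connection.start();
                // ... create sessions, producers and consumers here ...
                connection.close();
            }
        }
    This bypasses the GlassFish connector pool ("__SYSTEM/pools/...") that the stack trace reports as not bound, at the cost of losing the container-managed configuration behind jms/MyConnectionFactory.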

    Read the article

  • Android NDK Gaussian Blur radius stuck at 60

    - by rennoDeniro
    I implemented this NDK imeplementation of a Gaussian Blur, But I am having problems. I cannot increase the radius above 60, otherwise the activity just closes returning to a previous activity. No error message, nothing? Does anyone know why this could be? Note: This blur is based on the quasimondo implementation, here #include <jni.h> #include <string.h> #include <math.h> #include <stdio.h> #include <android/log.h> #include <android/bitmap.h> #define LOG_TAG "libbitmaputils" #define LOGI(...) __android_log_print(ANDROID_LOG_INFO,LOG_TAG,__VA_ARGS__) #define LOGE(...) __android_log_print(ANDROID_LOG_ERROR,LOG_TAG,__VA_ARGS__) typedef struct { uint8_t red; uint8_t green; uint8_t blue; uint8_t alpha; } rgba; JNIEXPORT void JNICALL Java_com_insert_your_package_ClassName_functionToBlur(JNIEnv* env, jobject obj, jobject bitmapIn, jobject bitmapOut, jint radius) { LOGI("Blurring bitmap..."); // Properties AndroidBitmapInfo infoIn; void* pixelsIn; AndroidBitmapInfo infoOut; void* pixelsOut; int ret; // Get image info if ((ret = AndroidBitmap_getInfo(env, bitmapIn, &infoIn)) < 0 || (ret = AndroidBitmap_getInfo(env, bitmapOut, &infoOut)) < 0) { LOGE("AndroidBitmap_getInfo() failed ! error=%d", ret); return; } // Check image if (infoIn.format != ANDROID_BITMAP_FORMAT_RGBA_8888 || infoOut.format != ANDROID_BITMAP_FORMAT_RGBA_8888) { LOGE("Bitmap format is not RGBA_8888!"); LOGE("==> %d %d", infoIn.format, infoOut.format); return; } // Lock all images if ((ret = AndroidBitmap_lockPixels(env, bitmapIn, &pixelsIn)) < 0 || (ret = AndroidBitmap_lockPixels(env, bitmapOut, &pixelsOut)) < 0) { LOGE("AndroidBitmap_lockPixels() failed ! error=%d", ret); } int h = infoIn.height; int w = infoIn.width; LOGI("Image size is: %i %i", w, h); rgba* input = (rgba*) pixelsIn; rgba* output = (rgba*) pixelsOut; int wm = w - 1; int hm = h - 1; int wh = w * h; int whMax = max(w, h); int div = radius + radius + 1; int r[wh]; int g[wh]; int b[wh]; int rsum, gsum, bsum, x, y, i, yp, yi, yw; rgba p; int vmin[whMax]; int divsum = (div + 1) >> 1; divsum *= divsum; int dv[256 * divsum]; for (i = 0; i < 256 * divsum; i++) { dv[i] = (i / divsum); } yw = yi = 0; int stack[div][3]; int stackpointer; int stackstart; int rbs; int ir; int ip; int r1 = radius + 1; int routsum, goutsum, boutsum; int rinsum, ginsum, binsum; for (y = 0; y < h; y++) { rinsum = ginsum = binsum = routsum = goutsum = boutsum = rsum = gsum = bsum = 0; for (i = -radius; i <= radius; i++) { p = input[yi + min(wm, max(i, 0))]; ir = i + radius; // same as sir stack[ir][0] = p.red; stack[ir][1] = p.green; stack[ir][2] = p.blue; rbs = r1 - abs(i); rsum += stack[ir][0] * rbs; gsum += stack[ir][1] * rbs; bsum += stack[ir][2] * rbs; if (i > 0) { rinsum += stack[ir][0]; ginsum += stack[ir][1]; binsum += stack[ir][2]; } else { routsum += stack[ir][0]; goutsum += stack[ir][1]; boutsum += stack[ir][2]; } } stackpointer = radius; for (x = 0; x < w; x++) { r[yi] = dv[rsum]; g[yi] = dv[gsum]; b[yi] = dv[bsum]; rsum -= routsum; gsum -= goutsum; bsum -= boutsum; stackstart = stackpointer - radius + div; ir = stackstart % div; // same as sir routsum -= stack[ir][0]; goutsum -= stack[ir][1]; boutsum -= stack[ir][2]; if (y == 0) { vmin[x] = min(x + radius + 1, wm); } p = input[yw + vmin[x]]; stack[ir][0] = p.red; stack[ir][1] = p.green; stack[ir][2] = p.blue; rinsum += stack[ir][0]; ginsum += stack[ir][1]; binsum += stack[ir][2]; rsum += rinsum; gsum += ginsum; bsum += binsum; stackpointer = (stackpointer + 1) % div; ir = (stackpointer) % div; // same as sir routsum += 
stack[ir][0]; goutsum += stack[ir][1]; boutsum += stack[ir][2]; rinsum -= stack[ir][0]; ginsum -= stack[ir][1]; binsum -= stack[ir][2]; yi++; } yw += w; } for (x = 0; x < w; x++) { rinsum = ginsum = binsum = routsum = goutsum = boutsum = rsum = gsum = bsum = 0; yp = -radius * w; for (i = -radius; i <= radius; i++) { yi = max(0, yp) + x; ir = i + radius; // same as sir stack[ir][0] = r[yi]; stack[ir][1] = g[yi]; stack[ir][2] = b[yi]; rbs = r1 - abs(i); rsum += r[yi] * rbs; gsum += g[yi] * rbs; bsum += b[yi] * rbs; if (i > 0) { rinsum += stack[ir][0]; ginsum += stack[ir][1]; binsum += stack[ir][2]; } else { routsum += stack[ir][0]; goutsum += stack[ir][1]; boutsum += stack[ir][2]; } if (i < hm) { yp += w; } } yi = x; stackpointer = radius; for (y = 0; y < h; y++) { output[yi].red = dv[rsum]; output[yi].green = dv[gsum]; output[yi].blue = dv[bsum]; rsum -= routsum; gsum -= goutsum; bsum -= boutsum; stackstart = stackpointer - radius + div; ir = stackstart % div; // same as sir routsum -= stack[ir][0]; goutsum -= stack[ir][1]; boutsum -= stack[ir][2]; if (x == 0) vmin[y] = min(y + r1, hm) * w; ip = x + vmin[y]; stack[ir][0] = r[ip]; stack[ir][1] = g[ip]; stack[ir][2] = b[ip]; rinsum += stack[ir][0]; ginsum += stack[ir][1]; binsum += stack[ir][2]; rsum += rinsum; gsum += ginsum; bsum += binsum; stackpointer = (stackpointer + 1) % div; ir = stackpointer; // same as sir routsum += stack[ir][0]; goutsum += stack[ir][1]; boutsum += stack[ir][2]; rinsum -= stack[ir][0]; ginsum -= stack[ir][1]; binsum -= stack[ir][2]; yi += w; } } // Unlocks everything AndroidBitmap_unlockPixels(env, bitmapIn); AndroidBitmap_unlockPixels(env, bitmapOut); LOGI ("Bitmap blurred."); } int min(int a, int b) { return a > b ? b : a; } int max(int a, int b) { return a > b ? a : b; }
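    A plausible cause for the silent close above radius 60 (an assumption, not confirmed by a crash log) is a native stack overflow: the function declares several variable-length arrays on the thread's stack, and dv[256 * divsum] grows with the square of the radius (divsum is roughly (radius + 1)^2, so at radius 60 that single array is about 256 * 3721 ints, close to 4 MB), on top of r, g and b, which each hold one int per pixel. A minimal sketch of moving those buffers to the heap, placed where the VLAs are declared now, after wh, whMax, div and divsum are computed (requires #include <stdlib.h>):
        /* Heap-allocated work buffers instead of VLAs on the stack. */
        int *r = malloc(wh * sizeof(int));
        int *g = malloc(wh * sizeof(int));
        int *b = malloc(wh * sizeof(int));
        int *vmin = malloc(whMax * sizeof(int));
        int *dv = malloc(256 * divsum * sizeof(int));
        int (*stack)[3] = malloc(div * sizeof(*stack));   /* div rows of 3 ints */
        if (!r || !g || !b || !vmin || !dv || !stack) {
            LOGE("Out of memory allocating blur buffers");
            /* free whatever was allocated, unlock both bitmaps and return here */
        }
        /* ... the existing blur loops work unchanged: stack[ir][0] etc. still index the same way ... */
        free(r); free(g); free(b); free(vmin); free(dv); free(stack);
    If the crash persists with heap buffers, the next thing to check would be logcat for a SIGSEGV from an out-of-bounds index rather than memory pressure.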

    Read the article

  • wxWidgets in Code::Blocks

    - by Vlad
    Hello all, I'm trying to compile the minimal sample from the "Cross-Platform GUI Programming with wxWidgets" book but the following compile errors: ||=== minimal, Debug ===| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_frame.o):frame.cpp:(.text+0x918)||undefined reference to `__Unwind_Resume' | C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_frame.o):frame.cpp:(.text+0x931)||undefined reference to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_frame.o):frame.cpp:(.text+0xa96)||undefined reference to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_frame.o):frame.cpp:(.text+0xada)||undefined reference to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_frame.o):frame.cpp:(.text+0xb1e)||undefined reference to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_frame.o):frame.cpp:(.eh_frame+0x12)||undefined reference to `___gxx_personality_v0'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_datacmn.o):datacmn.cpp:(.eh_frame+0x11)||undefined reference to `___gxx_personality_v0'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_gdicmn.o):gdicmn.cpp:(.text+0x63a)||undefined reference to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_gdicmn.o):gdicmn.cpp:(.text+0x696)||undefined reference to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_gdicmn.o):gdicmn.cpp:(.text+0x6f2)||undefined reference to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_gdicmn.o):gdicmn.cpp:(.text+0x74a)||undefined reference to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_gdicmn.o):gdicmn.cpp:(.text+0x7a2)||undefined reference to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_gdicmn.o):gdicmn.cpp:(.eh_frame+0x12)||undefined reference to `___gxx_personality_v0'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_menu.o):menu.cpp:(.text+0x88f)||undefined reference to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_menu.o):menu.cpp:(.text+0x927)||undefined reference to `__Unwind_Resume' | C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_menu.o):menu.cpp:(.text+0x9bf)||undefined reference to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_menu.o):menu.cpp:(.text+0xb8b)||undefined reference to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_menu.o):menu.cpp:(.text+0xc87)||undefined reference to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_menu.o):menu.cpp:(.eh_frame+0x12)||undefined reference to `___gxx_personality_v0'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_menucmn.o):menucmn.cpp:(.text+0xbc0)||undefined reference to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_menucmn.o):menucmn.cpp:(.text+0xc59)||undefined reference to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_menucmn.o):menucmn.cpp:(.text+0xcf5)||undefined reference 
to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_menucmn.o):menucmn.cpp:(.text+0xda6)||undefined reference to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_menucmn.o):menucmn.cpp:(.text+0xdce)||undefined reference to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_menucmn.o):menucmn.cpp:(.eh_frame+0x12)||undefined reference to `___gxx_personality_v0'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_icon.o):icon.cpp:(.text+0x1ff)||undefined reference to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_icon.o):icon.cpp:(.text+0x257)||undefined reference to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_icon.o):icon.cpp:(.text+0x2af)||undefined reference to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_icon.o):icon.cpp:(.text+0x2fc)||undefined reference to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_icon.o):icon.cpp:(.text+0x36d)||undefined reference to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_icon.o):icon.cpp:(.eh_frame+0x12)||undefined reference to `___gxx_personality_v0'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_gdiimage.o):gdiimage.cpp:(.text+0x4a8)||undefined reference to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_gdiimage.o):gdiimage.cpp:(.text+0x73a)||undefined reference to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_gdiimage.o):gdiimage.cpp:(.text+0x813)||undefined reference to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_gdiimage.o):gdiimage.cpp:(.text+0xc06)||undefined reference to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_gdiimage.o):gdiimage.cpp:(.text+0xd3e)||undefined reference to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_gdiimage.o):gdiimage.cpp:(.eh_frame+0x12)||undefined reference to `___gxx_personality_v0'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_event.o):event.cpp:(.text+0x970)||undefined reference to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_event.o):event.cpp:(.text+0xa80)||undefined reference to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_event.o):event.cpp:(.text+0xb8c)||undefined reference to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_event.o):event.cpp:(.text+0xc78)||undefined reference to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_event.o):event.cpp:(.text+0xd4f)||undefined reference to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_event.o):event.cpp:(.eh_frame+0x12)||undefined reference to `___gxx_personality_v0'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_appcmn.o):appcmn.cpp:(.text+0x2ef)||undefined reference to `__Unwind_Resume'| 
C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_appcmn.o):appcmn.cpp:(.text+0x32b)||undefined reference to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_appcmn.o):appcmn.cpp:(.text+0x43d)||undefined reference to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_appcmn.o):appcmn.cpp:(.text+0x586)||undefined reference to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_appcmn.o):appcmn.cpp:(.text+0x601)||undefined reference to `__Unwind_Resume'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_appcmn.o):appcmn.cpp:(.eh_frame+0x12)||undefined reference to `___gxx_personality_v0'| C:\SourceCode\Libraries\wxWidgets2.8\lib\gcc_lib\libwxmsw28u_core.a(corelib_app.o):app.cpp:(.text+0x1da)||undefined reference to `__Unwind_Resume'| ||More errors follow but not being shown.| ||Edit the max errors limit in compiler options...| ||=== Build finished: 50 errors, 0 warnings ===| Here's the code sample from the book: #include "wx/wx.h" #include "mondrian.xpm" // Declare the application class class MyApp : public wxApp { public: // Called on application startup virtual bool OnInit(); }; // Declare our main frame class class MyFrame : public wxFrame { public: // Constructor MyFrame(const wxString& title); // Event handlers void OnQuit(wxCommandEvent& event); void OnAbout(wxCommandEvent& event); private: // This class handles events DECLARE_EVENT_TABLE() }; // Implements MyApp& GetApp() DECLARE_APP(MyApp) // Give wxWidgets the means to create a MyApp object IMPLEMENT_APP(MyApp) // Initialize the application bool MyApp::OnInit() { // Create the main application window MyFrame *frame = new MyFrame(wxT("Minimal wxWidgets App")); // Show it frame->Show(true); // Start the event loop return true; } // Event table for MyFrame BEGIN_EVENT_TABLE(MyFrame, wxFrame) EVT_MENU(wxID_ABOUT, MyFrame::OnAbout) EVT_MENU(wxID_EXIT, MyFrame::OnQuit) END_EVENT_TABLE() void MyFrame::OnAbout(wxCommandEvent& event) { wxString msg; msg.Printf(wxT("Hello and welcome to %s"), wxVERSION_STRING); wxMessageBox(msg, wxT("About Minimal"), wxOK | wxICON_INFORMATION, this); } void MyFrame::OnQuit(wxCommandEvent& event) { // Destroy the frame Close(); } MyFrame::MyFrame(const wxString& title) : wxFrame(NULL, wxID_ANY, title) { // Set the frame icon SetIcon(wxIcon(mondrian_xpm)); // Create a menu bar wxMenu *fileMenu = new wxMenu; // The “About” item should be in the help menu wxMenu *helpMenu = new wxMenu; helpMenu->Append(wxID_ABOUT, wxT("&About...\tF1"), wxT("Show about dialog")); fileMenu->Append(wxID_EXIT, wxT("E&xit\tAlt-X"), wxT("Quit this program")); // Now append the freshly created menu to the menu bar... wxMenuBar *menuBar = new wxMenuBar(); menuBar->Append(fileMenu, wxT("&File")); menuBar->Append(helpMenu, wxT("&Help")); // ... and attach this menu bar to the frame SetMenuBar(menuBar); // Create a status bar just for fun CreateStatusBar(2); SetStatusText(wxT("Welcome to wxWidgets!")); } So what's happenning? Thanks! P.S.: I installed wxWidgets through wxPack wich afaik comes with everything precomplied and i also added the wxWidgets directory to Global variables-base in Code::Blocks so everything should be correctly set, right?

    Read the article

  • ASP.Net MVC2 CustomModelBinder not working... Changed from MVC1

    - by Ian
    (My apologies if this seems verbose - trying to provide all relevant code) I've just upgraded to VS2010, and am now having trouble trying to get a new CustomModelBinder working. In MVC1 I would have written something like public class AwardModelBinder: DefaultModelBinder { : public override object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext) { // do the base binding to bind all simple types Award award = base.BindModel(controllerContext, bindingContext) as Award; // Get complex values from ValueProvider dictionary award.EffectiveFrom = Convert.ToDateTime(bindingContext.ValueProvider["Model.EffectiveFrom"].AttemptedValue.ToString()); string sEffectiveTo = bindingContext.ValueProvider["Model.EffectiveTo"].AttemptedValue.ToString(); if (sEffectiveTo.Length > 0) award.EffectiveTo = Convert.ToDateTime(bindingContext.ValueProvider["Model.EffectiveTo"].AttemptedValue.ToString()); // etc return award; } } Of course I'd register the custom binder in Global.asax.cs: protected void Application_Start() { RegisterRoutes(RouteTable.Routes); // register custom model binders ModelBinders.Binders.Add(typeof(Voucher), new VoucherModelBinder(DaoFactory.UserInstance("EH1303"))); ModelBinders.Binders.Add(typeof(AwardCriterion), new AwardCriterionModelBinder(DaoFactory.UserInstance("EH1303"), new VOPSDaoFactory())); ModelBinders.Binders.Add(typeof(SelectedVoucher), new SelectedVoucherModelBinder(DaoFactory.UserInstance("IT0706B"))); ModelBinders.Binders.Add(typeof(Award), new AwardModelBinder(DaoFactory.UserInstance("IT0706B"))); } Now, in MVC2, I'm finding that my call to base.BindModel returns an object where everything is null, and I simply don't want to have to iterate all the form fields surfaced by the new ValueProvider.GetValue() function. Google finds no matches for this error, so I assume I'm doing something wrong. Here's my actual code: My domain object (infer what you like about the encapsulated child objects - I know I'll need custom binders for those too, but the three "simple" fields (ie. base types) Id, TradingName and BusinessIncorporated are also coming back null): public class Customer { /// <summary> /// Initializes a new instance of the Customer class. /// </summary> public Customer() { Applicant = new Person(); Contact = new Person(); BusinessContact = new ContactDetails(); BankAccount = new BankAccount(); } /// <summary> /// Gets or sets the unique customer identifier. /// </summary> public int Id { get; set; } /// <summary> /// Gets or sets the applicant details. /// </summary> public Person Applicant { get; set; } /// <summary> /// Gets or sets the customer's secondary contact. /// </summary> public Person Contact { get; set; } /// <summary> /// Gets or sets the trading name of the business. /// </summary> [Required(ErrorMessage = "Please enter your Business or Trading Name")] [StringLength(50, ErrorMessage = "A maximum of 50 characters is permitted")] public string TradingName { get; set; } /// <summary> /// Gets or sets the date the customer's business began trading. /// </summary> [Required(ErrorMessage = "You must supply the date your business started trading")] [DateRange("01/01/1900", "01/01/2020", ErrorMessage = "This date must be between {0} and {1}")] public DateTime BusinessIncorporated { get; set; } /// <summary> /// Gets or sets the contact details for the customer's business. /// </summary> public ContactDetails BusinessContact { get; set; } /// <summary> /// Gets or sets the customer's bank account details. 
/// </summary> public BankAccount BankAccount { get; set; } } My controller method: /// <summary> /// Saves a Customer object from the submitted application form. /// </summary> /// <param name="customer">A populate instance of the Customer class.</param> /// <returns>A partial view indicating success or failure.</returns> /// <httpmethod>POST</httpmethod> /// <url>/Customer/RegisterCustomerAccount</url> [HttpPost] [ValidateAntiForgeryToken] public ActionResult RegisterCustomerAccount(Customer customer) { if (ModelState.IsValid) { // save the Customer // return indication of success, or otherwise return PartialView(); } else { ViewData.Model = customer; // load necessary reference data into ViewData ViewData["PersonTitles"] = new SelectList(ReferenceDataCache.Get("PersonTitle"), "Id", "Name"); return PartialView("CustomerAccountRegistration", customer); } } My custom binder: public class CustomerModelBinder : DefaultModelBinder { public override object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext) { ValueProviderResult vpResult = bindingContext .ValueProvider.GetValue(bindingContext.ModelName); // vpResult is null // MVC2 - ValueProvider is now an IValueProvider, not dictionary based anymore if (bindingContext.ValueProvider.GetValue("Model.Applicant.Title") != null) { // works } Customer customer = base.BindModel(controllerContext, bindingContext) as Customer; // customer instanitated with null (etc) throughout return customer; } } My binder registration: /// <summary> /// Application_Start is called once when the web application is first accessed. /// </summary> protected void Application_Start() { RegisterRoutes(RouteTable.Routes); // register custom model binders ModelBinders.Binders.Add(typeof(Customer), new CustomerModelBinder()); ReferenceDataCache.Populate(); } ... and a snippet from my view (could this be a prefix problem?) <div class="inputContainer"> <label class="above" for="Model_Applicant_Title" accesskey="t"><span class="accesskey">T</span>itle<span class="mandatoryfield">*</span></label> <%= Html.DropDownList("Model.Applicant.Title", ViewData["PersonTitles"] as SelectList, "Select ...", new { @class = "validate[required]" })%> <% Html.ValidationMessageFor(model => model.Applicant.Title); %> </div> <div class="inputContainer"> <label class="above" for="Model_Applicant_Forename" accesskey="f"><span class="accesskey">F</span>orename / First name<span class="mandatoryfield">*</span></label> <%= Html.TextBox("Model.Applicant.Forename", Html.Encode(Model.Applicant.Forename), new { @class = "validate[required,custom[onlyLetter],length[2,20]]", title="Enter your forename", maxlength = 20, size = 20, autocomplete = "off", onkeypress = "return maskInput(event,re_mask_alpha);" })%> </div> <div class="inputContainer"> <label class="above" for="Model_Applicant_MiddleInitials" accesskey="i">Middle <span class="accesskey">I</span>nitial(s)</label> <%= Html.TextBox("Model.Applicant.MiddleInitials", Html.Encode(Model.Applicant.MiddleInitials), new { @class = "validate[optional,custom[onlyLetter],length[0,8]]", title = "Please enter your middle initial(s)", maxlength = 8, size = 8, autocomplete = "off", onkeypress = "return maskInput(event,re_mask_alpha);" })%> </div>
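    One thing worth checking (an assumption, not something confirmed in the post): the view names every field with a "Model." prefix, e.g. "Model.Applicant.Title", while the action parameter is called "customer", so the default binding pipeline looks for keys under the parameter name and finds none, which would explain both the null vpResult and the empty Customer. A hedged sketch of telling MVC 2 which prefix the form actually uses:
        // Option 1: declare the prefix on the action parameter.
        [HttpPost]
        [ValidateAntiForgeryToken]
        public ActionResult RegisterCustomerAccount([Bind(Prefix = "Model")] Customer customer)
        {
            // ... body unchanged ...
        }

        // Option 2: point the binding context at the "Model" prefix inside the custom binder
        // before delegating to the default binder.
        public override object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
        {
            if (!bindingContext.ValueProvider.ContainsPrefix(bindingContext.ModelName))
            {
                bindingContext.ModelName = "Model";
            }
            return base.BindModel(controllerContext, bindingContext);
        }
    The alternative is to drop the "Model." prefix in the view (for example Html.TextBox("Applicant.Forename", ...)), so the conventional field names line up with the parameter name again.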

    Read the article

< Previous Page | 359 360 361 362 363 364 365 366  | Next Page >