Search Results

Search found 19746 results on 790 pages for 'local dependency'.

  • Intermittent "Specified cast is invalid" with StructureMap injected data context

    - by FreshCode
    I am intermittently getting a System.InvalidCastException: Specified cast is not valid. error in my repository layer when performing an abstracted SELECT query mapped with LINQ. The error can't be caused by a mismatched database schema, since it works intermittently and it's on my local dev machine. Could it be because StructureMap is caching the data context between page requests? If so, how do I tell StructureMap v2.6.1 to inject a new data context argument into my repository for each request?

    Update: I found this question, which corroborates my hunch that something was being re-used. Looks like I need to call Dispose on my injected data context. Not sure how I'm going to do this to all my repositories without copy-pasting a lot of code.

    Edit: These errors are popping up all over the place whenever I refresh my local machine too quickly. It doesn't look like it's happening on my remote deployment box, but I can't be sure. I changed all my repositories' StructureMap life cycles to HttpContextScoped() and the error persists.

    Code:

        public ActionResult Index()
        {
            // error happens here, which queries my page repository
            var page = _branchService.GetPage("welcome");
            if (page != null)
                ViewData["Welcome"] = page.Body;
            ...
        }

    Repository: GetPage boils down to a filtered query mapping in my page repository.

        public IQueryable<Page> GetPages()
        {
            var pages = from p in _db.Pages
                        let categories = GetPageCategories(p.PageId)
                        let revisions = GetRevisions(p.PageId)
                        select new Page
                        {
                            ID = p.PageId,
                            UserID = p.UserId,
                            Slug = p.Slug,
                            Title = p.Title,
                            Description = p.Description,
                            Body = p.Text,
                            Date = p.Date,
                            IsPublished = p.IsPublished,
                            Categories = new LazyList<Category>(categories),
                            Revisions = new LazyList<PageRevision>(revisions)
                        };
            return pages;
        }

    where _db is an injected data context passed as an argument and stored in a private variable, which I reuse for SELECT queries.

    Error: Specified cast is not valid. Exception Details: System.InvalidCastException: Specified cast is not valid. Stack Trace: [InvalidCastException: Specified cast is not valid.]
System.Data.Linq.SqlClient.SqlProvider.Execute(Expression query, QueryInfo queryInfo, IObjectReaderFactory factory, Object[] parentArgs, Object[] userArgs, ICompiledSubQuery[] subQueries, Object lastResult) +4539 System.Data.Linq.SqlClient.SqlProvider.ExecuteAll(Expression query, QueryInfo[] queryInfos, IObjectReaderFactory factory, Object[] userArguments, ICompiledSubQuery[] subQueries) +207 System.Data.Linq.SqlClient.SqlProvider.System.Data.Linq.Provider.IProvider.Execute(Expression query) +500 System.Data.Linq.DataQuery`1.System.Linq.IQueryProvider.Execute(Expression expression) +50 System.Linq.Queryable.FirstOrDefault(IQueryable`1 source) +383 Manager.Controllers.SiteController.Index() in C:\Projects\Manager\Manager\Controllers\SiteController.cs:68 lambda_method(Closure , ControllerBase , Object[] ) +79 System.Web.Mvc.ReflectedActionDescriptor.Execute(ControllerContext controllerContext, IDictionary`2 parameters) +258 System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary`2 parameters) +39 System.Web.Mvc.<>c__DisplayClassd.<InvokeActionMethodWithFilters>b__a() +125 System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodFilter(IActionFilter filter, ActionExecutingContext preContext, Func`1 continuation) +640 System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodWithFilters(ControllerContext controllerContext, IList`1 filters, ActionDescriptor actionDescriptor, IDictionary`2 parameters) +312 System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName) +709 System.Web.Mvc.Controller.ExecuteCore() +162 System.Web.Mvc.<>c__DisplayClass8.<BeginProcessRequest>b__4() +58 System.Web.Mvc.Async.<>c__DisplayClass1.<MakeVoidDelegate>b__0() +20 System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +453 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +371
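
    A minimal sketch of one way to get a fresh data context per request with StructureMap 2.6, so repositories never hold on to a context from an earlier request. MyDataContext is a placeholder for the real data context type, and the exact cleanup call should be checked against the installed StructureMap version:

        using System;
        using System.Web;
        using StructureMap;

        public static class Bootstrapper
        {
            public static void Configure(string connectionString)
            {
                ObjectFactory.Initialize(x =>
                {
                    // One data context per HTTP request; repositories that take
                    // MyDataContext as a constructor argument receive this instance.
                    x.For<MyDataContext>()
                        .HttpContextScoped()
                        .Use(ctx => new MyDataContext(connectionString));
                });
            }
        }

        // Global.asax: dispose the per-request objects when the request ends.
        public class MvcApplication : HttpApplication
        {
            protected void Application_EndRequest(object sender, EventArgs e)
            {
                // Method name as used with StructureMap 2.6; verify for your version.
                ObjectFactory.ReleaseAndDisposeAllHttpScopedObjects();
            }
        }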

  • PHP-Mcrypt Installation

    - by Infinity
    I need to install php-mcrypt on my CentOS 5.5 VPS. When I try yum install php-mcrypt, it says the package is set to be updated, which implies that it is already installed; however, I looked in /usr/lib/php/modules and can't find the .so file. Anyway, I want to update it, but yum gives the following error. I am running PHP-FPM on Nginx.

        Last login: Thu Apr 21 12:13:30 2011 from cpc2-seve18-2-0-cust438.13-3.cable.virginmedia.com
        [root@infinity ~]# yum install php-mcrypt
        Setting up Install Process
        Resolving Dependencies
        --> Running transaction check
        ---> Package php-mcrypt.i386 0:5.1.6-15.el5.centos.1 set to be updated
        --> Processing Dependency: php-api = 20041225 for package: php-mcrypt
        --> Processing Dependency: php >= 5.1.6 for package: php-mcrypt
        --> Running transaction check
        ---> Package php.i386 0:5.1.6-27.el5_5.3 set to be updated
        --> Processing Dependency: php-common = 5.1.6-27.el5_5.3 for package: php
        --> Processing Dependency: php-cli = 5.1.6-27.el5_5.3 for package: php
        ---> Package php-mcrypt.i386 0:5.1.6-15.el5.centos.1 set to be updated
        --> Processing Dependency: php-api = 20041225 for package: php-mcrypt
        --> Running transaction check
        ---> Package php.i386 0:5.1.6-27.el5_5.3 set to be updated
        --> Processing Dependency: php-common = 5.1.6-27.el5_5.3 for package: php
        ---> Package php-cli.i386 0:5.1.6-27.el5_5.3 set to be updated
        --> Processing Dependency: php-common = 5.1.6-27.el5_5.3 for package: php-cli
        ---> Package php-mcrypt.i386 0:5.1.6-15.el5.centos.1 set to be updated
        --> Processing Dependency: php-api = 20041225 for package: php-mcrypt
        --> Finished Dependency Resolution
        php-mcrypt-5.1.6-15.el5.centos.1.i386 from extras has depsolving problems
        --> Missing Dependency: php-api = 20041225 is needed by package php-mcrypt-5.1.6-15.el5.centos.1.i386 (extras)
        php-5.1.6-27.el5_5.3.i386 from base has depsolving problems
        --> Missing Dependency: php-common = 5.1.6-27.el5_5.3 is needed by package php-5.1.6-27.el5_5.3.i386 (base)
        php-cli-5.1.6-27.el5_5.3.i386 from base has depsolving problems
        --> Missing Dependency: php-common = 5.1.6-27.el5_5.3 is needed by package php-cli-5.1.6-27.el5_5.3.i386 (base)
        Error: Missing Dependency: php-api = 20041225 is needed by package php-mcrypt-5.1.6-15.el5.centos.1.i386 (extras)
        Error: Missing Dependency: php-common = 5.1.6-27.el5_5.3 is needed by package php-cli-5.1.6-27.el5_5.3.i386 (base)
        Error: Missing Dependency: php-common = 5.1.6-27.el5_5.3 is needed by package php-5.1.6-27.el5_5.3.i386 (base)
        You could try using --skip-broken to work around the problem
        You could try running: package-cleanup --problems
                               package-cleanup --dupes
                               rpm -Va --nofiles --nodigest
        The program package-cleanup is found in the yum-utils package.
        [root@infinity ~]#

    Any ideas?

  • Changing User/Group to allow PHP to chmod/rename and move_upload_file()

    - by moe
    It seems like I cannot do anything with my PHP script on my VPS. It returns 'Permission denied' when I try to upload something to a directory. Yes, I have changed the permission to 777, and it works, but I do not like the insecurity When running the command: ps axu|grep apache|grep -v grep It returns nobody 7689 0.1 3.8 50604 20036 ? S 21:38 0:00 /usr/local/apache/bin/httpd -k start -DSSL root 13600 0.0 3.8 50304 20348 ? Ss Jun06 0:46 /usr/local/apache/bin/httpd -k start -DSSL nobody 15733 0.1 3.8 50700 20156 ? S 21:39 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 15818 0.1 3.8 51492 20180 ? S 21:39 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 23843 0.1 3.7 51336 19592 ? S 21:40 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 30335 0.0 3.5 50436 18496 ? S 21:41 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 30406 0.0 3.5 50444 18544 ? S 21:41 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 30407 0.0 3.5 50556 18696 ? S 21:41 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 30472 0.0 3.6 50828 19348 ? S 21:41 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 30474 0.0 3.5 50668 18868 ? S 21:41 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 30476 0.0 3.6 50532 19064 ? S 21:41 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 30501 0.0 3.8 50556 20080 ? S 21:36 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 32341 0.0 3.5 50444 18492 ? S 21:41 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 32370 0.0 3.5 50444 18476 ? S 21:42 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 32414 0.1 3.7 51336 19524 ? S 21:42 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 32416 0.1 3.5 50668 18816 ? S 21:42 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 32457 0.1 3.6 50828 19320 ? S 21:42 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 32458 0.1 3.6 50772 19276 ? S 21:42 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 32459 0.0 3.5 50444 18504 ? S 21:42 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 32460 0.2 3.6 50828 19320 ? S 21:42 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 32463 0.0 3.5 50444 18472 ? S 21:42 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 32466 0.0 3.4 50436 17960 ? S 21:42 0:00 /usr/local/apache/bin/httpd -k start -DSSL The owner of the directory is 'user [505]' and the group is 'user[508]' (as seen in WinSCP) What can I do to change the Apache Handler to the right owner and group to allow my PHP scripts to work? P.S My PHP is not set to safe mode, and the open_basedir is set to no value EDIT: This is what my httpd.conf looks like (for the associative domain) <VirtualHost *:80> ServerName domain.com ServerAlias www.domain.com DocumentRoot /home/domain/public_html ServerAdmin info@domain ## User <theUsername> # Needed for Cpanel::ApacheConf <IfModule mod_userdir.c> Userdir disabled Userdir enabled <userName> </IfModule> <IfModule mod_suphp.c> suPHP_UserGroup <userName> <userName> </IfModule> <IfModule !mod_disable_suexec.c> SuexecUserGroup <userName> <userName> </IfModule> CustomLog /usr/local/apache/domlogs/domain.com-bytes_log "%{%s}t %I .\n%{%s}t %O ." CustomLog /usr/local/apache/domlogs/domain.com combined ScriptAlias /cgi-bin/ /home/domain/public_html/cgi-bin/ #Options -ExecCGI -Includes #RemoveHandler cgi-script .cgi .pl .plx .ppl .perl

  • Spring Test / JUnit problem - unable to load application context

    - by HDave
    I am using Spring for the first time and must be doing something wrong. I have a project with several bean implementations, and now I am trying to create a test class with Spring Test and JUnit. I am trying to use Spring Test to inject a customized bean into the test class. Here is my test-applicationContext.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns=".............">
            <bean id="MyUuidFactory" class="com.myapp.UuidFactory" scope="singleton">
                <property name="typeIdentifier" value="CLS" />
            </bean>
            <bean id="ThingyImplTest" class="com.myapp.ThingyImplTest" scope="singleton">
                <property name="uuidFactory">
                    <idref local="MyUuidFactory" />
                </property>
            </bean>
        </beans>

    The injection of the MyUuidFactory instance goes along with the following code from within the test class:

        private UuidFactory uuidFactory;

        public void setUuidFactory(UuidFactory uuidFactory) {
            this.uuidFactory = uuidFactory;
        }

    However, when I go to run the test (in Eclipse or on the command line) I get the following error (stack trace omitted for brevity):

        Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'MyImplTest' defined in class path resource [test-applicationContext.xml]: Initialization of bean failed; nested exception is org.springframework.beans.ConversionNotSupportedException: Failed to convert property value of type 'java.lang.String' to required type 'com.myapp.UuidFactory' for property 'uuidFactory'; nested exception is java.lang.IllegalStateException: Cannot convert value of type [java.lang.String] to required type [com.myapp.UuidFactory] for property 'uuidFactory': no matching editors or conversion strategy found

    Funny thing is, the Eclipse/Spring XML editor shows errors if I misspell any of the types or idrefs. If I leave the bean in but comment out the dependency injection, everything works until I get a NullPointerException while running the test... which makes sense.

  • WPF: disable inheritance of properties

    - by Maximilian Csuk
    Hi! I would like to use a TabControl as the main navigation in the application I am working on. So I would like to make the font in the headers of the TabItems bigger and also give it another background-color. However, I do not want this to be inherited. For example, if I use this code: <TabControl FontSize="18pt"> <TabItem Header="Tab 1"> <Button>Button 1</Button> </TabItem> </TabControl> The font in the button is also 18pt big. I know that this is normal dependency property behaviour because the property is inherited, but that's not what I want in this case. I would like to change the TabItems without changing anything in the children. Isn't that possible? Because re-setting all children to default values is a PITA. Thanks for your time.

  • How to use Guice in Swing application

    - by Gerco Dries
    I have a Swing application that I would like to convert from spaghetti to using dependency injection with Guice. Using Guice to provide services like configuration and task queues is going great but I'm now starting on the GUI of the app and am unsure of how to proceed. The application is basically a JFrame with a bunch of tabs in a JTabbedPane. Each of the tabs is a separate JPanel subclass that lays out the various components and needs services to perform actions when certain buttons are pressed. In the current application, this looks somewhat like this: @Inject public MainFrame(SomeService service, Executor ex, Configuration config) { tabsPane = new JTabbedPane(); // Create the panels for each tab and add them to the tabbedpane somePanel = new SomeTabPanel(service, ex, config); tabsPane.addTab("Panel 1", somePanel); someOtherPanel = new SomeOtherTabPanel(service, ex, config); tabsPane.addTab("Panel 2", someOtherPanel); ... do more stuff } Obviously, this doesn't exactly follow DI best practices. I don't want to have to @Inject the tabs because that would get me a constructor with dozens of parameters. I do want to use Guice to inject the required dependencies into whatever tab objects I need without me having to pass all of those dependencies to the tab constructors. All of the dependencies for the tab objects are services that my Module knows about, so basically all I think I want to do is to ask Guice for the required objects and have them constructed for me.

  • WPF Exposing a calculated property for binding (as DependencyProperty)

    - by kubal5003
    Hello, I have a complex WPF control that for some reasons (i.e. performance) is not using dependency properties but simple C# properties (at least at the top level these are exposed as properties). The goal is to make it possible to bind to some of those top-level properties. I guess I should declare them as DPs (right? or is there some other way to achieve this?). I started reading on MSDN about DependencyProperties and DependencyObjects and found an example:

        public class MyStateControl : ButtonBase
        {
            public MyStateControl() : base() { }

            public Boolean State
            {
                get { return (Boolean)this.GetValue(StateProperty); }
                set { this.SetValue(StateProperty, value); }
            }

            public static readonly DependencyProperty StateProperty =
                DependencyProperty.Register(
                    "State", typeof(Boolean), typeof(MyStateControl),
                    new PropertyMetadata(false));
        }

    If I'm right, this code forces the property to be backed by a DependencyProperty, which restricts it to being a simple property with a store (from a functional point of view, not technically), instead of being able to calculate the property value each time the getter is called and set other properties/fields each time the setter is called. What can I do about that? Is there any way I could make those two worlds meet at some point?
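
    For reference, a common pattern for a bindable calculated value is a read-only dependency property that the control recomputes whenever its inputs change. A rough sketch, with the class, property, and input names invented for illustration:

        using System.Windows;
        using System.Windows.Controls;

        public class MyCalculatedControl : Control
        {
            private static readonly DependencyPropertyKey TotalPropertyKey =
                DependencyProperty.RegisterReadOnly(
                    "Total", typeof(double), typeof(MyCalculatedControl),
                    new PropertyMetadata(0.0));

            public static readonly DependencyProperty TotalProperty =
                TotalPropertyKey.DependencyProperty;

            // Bindable from XAML, but only the control itself can write it.
            public double Total
            {
                get { return (double)GetValue(TotalProperty); }
                private set { SetValue(TotalPropertyKey, value); }
            }

            // Call this from the change callbacks of the input properties so the
            // "calculated" value stays current for any bindings.
            private void Recalculate(double a, double b)
            {
                Total = a + b;
            }
        }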

  • IoC - Dynamic Composition of object instances

    - by Joshua Starner
    Is there a way using IoC, MEF [Imports], or another DI solution to compose dependencies on the fly at object creation time instead of during composition time? Here's my current thought. If you have an instance of an object that raises events, but you are not creating the object once and saving it in memory, you have to register the event handlers every time the object is created. As far as I can tell, most IoC containers require you to register all of the classes used in composition and call Compose() to make it hook up all the dependencies. I think this may be horrible design (I'm dealing with a legacy system here) to do this due to the overhead of object creation, dependency injection, etc... but I was wondering if it was possible using one of the emergent IoC technologies. Maybe I have some terminology mixed up, but my goal is to avoid writing a framework to "hook up all the events" on an instance of an object, and use something like MEF to [Export] handlers (dependencies) that adhere to a very specific interface and [ImportMany] them into an object instance so my exports get called if the assemblies are there when the application starts. So maybe all of the objects could still be composed when the application starts, but I want the system to find and call all of them as the object is created and destroyed.
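
    As a hedged sketch of the "compose at creation time" idea with MEF (System.ComponentModel.Composition): an object can ask the container to compose its own [ImportMany] list in its constructor, so the handlers are wired up every time an instance is created. The interface and type names below are invented for illustration:

        using System.Collections.Generic;
        using System.ComponentModel.Composition;
        using System.ComponentModel.Composition.Hosting;

        public interface IOrderEventHandler
        {
            void Handle(Order order);
        }

        public class Order
        {
            // Filled in by MEF; exports implementing IOrderEventHandler are
            // discovered from whatever catalogs the container was built with.
            [ImportMany]
            public IEnumerable<IOrderEventHandler> Handlers { get; set; }

            public Order(CompositionContainer container)
            {
                // Compose this specific instance at creation time instead of
                // registering it with the container up front.
                container.ComposeParts(this);
            }

            public void RaiseCreated()
            {
                foreach (var handler in Handlers)
                    handler.Handle(this);
            }
        }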

  • Considerations when architecting an extensible application using MEF

    - by Dan Bryant
    I've begun experimenting with dependency injection (in particular, MEF) for one of my projects, which has a number of different extensibility points. I'm starting to get a feel for what I can do with MEF, but I'd like to hear from others who have more experience with the technology. A few specific cases: My main use case at the moment is exposing various singleton-like services that my extensions make use of. My Framework assembly exposes service interfaces and my Engine assembly contains concrete implementations. This works well, but I may not want to allow all of my extensions to have access to all of my services. Is there a good way within MEF to limit which particular imports I allow a newly instantiated extension to resolve? This particular application has extension objects that I repeatedly instantiate. I can import multiple types of Controllers and Machines, which are instantiated in different combinations for a Project. I couldn't find a good way to do this with MEF, so I'm doing my own type discovery and instantiation. Is there a good way to do this within MEF or other DI frameworks? I welcome input on any other things to watch out for or surprising capabilities you've discovered that have changed the way you architect.
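
    For the "repeatedly instantiated extension" case specifically, the MEF shipped with .NET 4.5 (and the earlier Silverlight drops) adds ExportFactory<T>, which lets the host build and release fresh, fully composed instances on demand instead of doing its own type discovery. A sketch with invented type names, assuming that MEF version is available:

        using System.Collections.Generic;
        using System.ComponentModel.Composition;

        public interface IController { void Run(); }

        [Export(typeof(IController))]
        [PartCreationPolicy(CreationPolicy.NonShared)]
        public class RobotController : IController
        {
            public void Run() { /* controller logic */ }
        }

        public class Project
        {
            // One factory per discovered controller type.
            [ImportMany]
            public IEnumerable<ExportFactory<IController>> ControllerFactories { get; set; }

            public void RunAll()
            {
                foreach (var factory in ControllerFactories)
                {
                    // Each CreateExport() builds a new, fully composed instance,
                    // released when the export is disposed.
                    using (var export = factory.CreateExport())
                    {
                        export.Value.Run();
                    }
                }
            }
        }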

  • How do I correctly use Unity to pass a ConnectionString to my repository classes?

    - by GenericTypeTea
    I've literally just started using the Unity Application Blocks Dependency Injection library from Microsoft, and I've come unstuck. This is my IoC class that'll handle the instantiation of my concrete classes to their interface types (so I don't have to keep calling Resolve on the IoC container each time I want a repository in my controller):

        public class IoC
        {
            public static void Intialise(UnityConfigurationSection section, string connectionString)
            {
                _connectionString = connectionString;
                _container = new UnityContainer();
                section.Configure(_container);
            }

            private static IUnityContainer _container;
            private static string _connectionString;

            public static IMovementRepository MovementRepository
            {
                get { return _container.Resolve<IMovementRepository>(); }
            }
        }

    So, the idea is that from my Controller I can just do the following:

        _repository = IoC.MovementRepository;

    I am currently getting the error:

        Exception is: InvalidOperationException - The type String cannot be constructed. You must configure the container to supply this value.

    Now, I'm assuming this is because my mapped concrete implementation requires a single string parameter for its constructor. The concrete class is as follows:

        public sealed class MovementRepository : Repository, IMovementRepository
        {
            public MovementRepository(string connectionString) : base(connectionString) { }
        }

    Which inherits from:

        public abstract class Repository
        {
            public Repository(string connectionString)
            {
                _connectionString = connectionString;
            }

            public virtual string ConnectionString
            {
                get { return _connectionString; }
            }

            private readonly string _connectionString;
        }

    Now, am I doing this the correct way? Should I not have a constructor in my concrete implementation of a loosely coupled type? I.e. should I remove the constructor and just make the ConnectionString property a Get/Set, so I can do the following:

        public static IMovementRepository MovementRepository
        {
            get
            {
                return _container.Resolve<IMovementRepository>(
                    new ParameterOverrides
                    {
                        { "ConnectionString", _connectionString }
                    }.OnType<IMovementRepository>()
                );
            }
        }

    So, I basically wish to know how to get my connection string to my concrete type in the correct way that matches the IoC rules and keeps my Controller and concrete repositories loosely coupled, so I can easily change the DataSource at a later date.
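
    For comparison, a typical way to hand Unity a constructor string is an InjectionConstructor supplied at registration time, so the repository keeps its constructor and the controller never sees the connection string. A sketch using the question's type names, with the registration done in code rather than through the configuration section:

        using Microsoft.Practices.Unity;

        public static class IoCSketch
        {
            private static IUnityContainer _container;

            public static void Initialise(string connectionString)
            {
                _container = new UnityContainer();

                // Tell Unity which constructor argument to use for this mapping.
                _container.RegisterType<IMovementRepository, MovementRepository>(
                    new InjectionConstructor(connectionString));
            }

            public static IMovementRepository MovementRepository
            {
                get { return _container.Resolve<IMovementRepository>(); }
            }
        }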

  • StructureMap problems with bidirectional/circular dependencies

    - by leozilla
    I am currently integrating StructureMap within our business layer but have problems because of bidirectional dependencies. The layer contains multiple managers, where each manager can call methods on the others; there are no restrictions or rules for communication. This also includes possible circular dependencies, like in the example below. I know the design itself is questionable, but currently we just want StructureMap to work and will focus on further refactoring in the future. Every manager implements the IManager interface:

        internal interface IManager
        {
            bool IsStarted { get; }
            void Start();
            void Stop();
        }

    And each also has its own specific interface:

        internal interface IManagerA : IManager
        {
            void ALogic();
        }

        internal interface IManagerB : IManager
        {
            void BLogic();
        }

    Here are two dummy manager implementations:

        internal class ManagerA : IManagerA
        {
            public IManagerB ManagerB { get; set; }
            public void ALogic() { }
            public bool IsStarted { get; private set; }
            public void Start() { }
            public void Stop() { }
        }

        internal class ManagerB : IManagerB
        {
            public IManagerA ManagerA { get; set; }
            public void BLogic() { }
            public bool IsStarted { get; private set; }
            public void Start() { }
            public void Stop() { }
        }

    Here is the StructureMap configuration I am using at the moment. I am still not sure how I should register the managers, so currently I use a manual registration. Maybe someone could help me with this too.

        For<IManagerA>().Singleton().Use<ManagerA>();
        For<IManagerB>().Singleton().Use<ManagerB>();

        SetAllProperties(convention =>
        {
            // configure the property injection for all managers
            convention.Matching(prop => typeof(IManager).IsAssignableFrom(prop.PropertyType));
        });

    After all this, I cannot create IManagerA because StructureMap complains about the circular dependency between ManagerA and ManagerB. Is there an easy and clean solution to this problem that keeps the current design? br David
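
    One common way to break such a cycle without a larger redesign is to defer one side of it behind a Func, so neither manager resolves the other while it is being constructed. A rough sketch; the constructor-based variant of ManagerA and the registration lambda are illustrative only, and the exact DSL calls should be checked against the StructureMap version in use:

        using System;
        using StructureMap;
        using StructureMap.Configuration.DSL;

        internal class LazyManagerA : IManagerA
        {
            private readonly Func<IManagerB> _managerB;

            public LazyManagerA(Func<IManagerB> managerB)
            {
                _managerB = managerB;   // not resolved yet, so no cycle at build time
            }

            public void ALogic()
            {
                _managerB().BLogic();   // resolved lazily, on first use
            }

            public bool IsStarted { get; private set; }
            public void Start() { }
            public void Stop() { }
        }

        public class ManagerRegistry : Registry
        {
            public ManagerRegistry()
            {
                For<IManagerB>().Singleton().Use<ManagerB>();
                For<IManagerA>().Singleton()
                    .Use(ctx => new LazyManagerA(() => ObjectFactory.GetInstance<IManagerB>()));
            }
        }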

  • How VerticalOffset changes when Scrollable height changes while having list inside a list

    - by Prakash
    I am making a WP7 app which has a ListBox of UserControls. Each UserControl has an ItemsControl and a Button (for getting more results). On click of the button, the number of items in the ItemsControl increases by 5 or 10. Now, on clicking the GetMore button of any of the UserControls except the first or last, the scrollable height (total height of the ListBox) increases, but the VerticalOffset (position of the scrollbar from the top) of the ListBox remains the same. The problem I am facing is that the VerticalOffset is not absolute but relative to the ScrollableHeight, so the content being viewed up to that point changes based on the new value of ScrollableHeight. I want to know the relation between them, so that I can do some math and set the VerticalOffset value. I have added some dependency properties on VerticalOffset and ScrollableHeight through which I can get events when either of them changes, and I am trying to use them to readjust the VerticalOffset. Any suggestions or corrections are highly appreciated.

  • Dependency Injection with Massive ORM: dynamic trouble

    - by Sergi Papaseit
    I've started working on an MVC 3 project that needs data from an enormous existing database. My first idea was to go ahead and use EF 4.1 and create a bunch of POCOs to represent the tables I need, but I'm starting to think the mapping will get overly complicated, as I only need some of the columns in some of the tables (thanks to Steven for the clarification in the comments). So I thought I'd give Massive ORM a try. I normally use a Unit of Work implementation so I can keep everything nicely decoupled and can use Dependency Injection. This is part of what I have for Massive:

        public interface ISession
        {
            DynamicModel CreateTable<T>() where T : DynamicModel, new();
            dynamic Single<T>(string where, params object[] args) where T : DynamicModel, new();
            dynamic Single<T>(object key, string columns = "*") where T : DynamicModel, new();
            // Some more methods supported by Massive here
        }

    And here's my implementation of the above interface:

        public class MassiveSession : ISession
        {
            public DynamicModel CreateTable<T>() where T : DynamicModel, new()
            {
                return new T();
            }

            public dynamic Single<T>(string where, params object[] args) where T : DynamicModel, new()
            {
                var table = CreateTable<T>();
                return table.Single(where, args);
            }

            public dynamic Single<T>(object key, string columns = "*") where T : DynamicModel, new()
            {
                var table = CreateTable<T>();
                return table.Single(key, columns);
            }
        }

    The problem comes with the First(), Last() and FindBy() methods. Massive is based around a dynamic object called DynamicModel and doesn't define any of the above methods; it handles them through a TryInvokeMember() implementation overridden from DynamicObject instead:

        public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result) { }

    I'm at a loss on how to "interface" those methods in my ISession. How could my ISession provide support for First(), Last() and FindBy()? Put another way, how can I use all of Massive's capabilities and still be able to decouple my classes from data access?
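
    One hedged sketch of keeping those dynamic methods behind an interface is to let the caller hand in a lambda that operates on a dynamic-typed table, so First/Last/FindBy are dispatched to DynamicModel's TryInvokeMember at runtime. The Query method and the Products type below are invented for illustration, not part of Massive:

        using System;

        public class MassiveDynamicSession : MassiveSession
        {
            // Any of Massive's dynamic methods can be called inside the lambda;
            // the compiler defers binding because "table" is declared dynamic.
            public dynamic Query<T>(Func<dynamic, dynamic> query) where T : DynamicModel, new()
            {
                dynamic table = CreateTable<T>();
                return query(table);
            }
        }

        // Usage sketch, assuming Products derives from DynamicModel:
        // var product = session.Query<Products>(t => t.First(ProductID: 1));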

  • @PersistenceContext cannot be resolved to a type

    - by Saken Kungozhin
    I was running some code in which there's a @PersistenceContext dependency and a field private EntityManager em;, both of which cannot be resolved to a type. What is the meaning of this error and how can I fix it? The code is here:

        package org.jboss.tools.examples.util;

        import java.util.logging.Logger;
        import javax.enterprise.context.RequestScoped;
        import javax.enterprise.inject.Produces;
        import javax.enterprise.inject.spi.InjectionPoint;
        import javax.faces.context.FacesContext;

        /**
         * This class uses CDI to alias Java EE resources, such as the persistence context, to CDI beans
         *
         * <p>
         * Example injection on a managed bean field:
         * </p>
         *
         * <pre>
         * &#064;Inject
         * private EntityManager em;
         * </pre>
         */
        public class Resources {
            // use @SuppressWarnings to tell IDE to ignore warnings about field not being referenced directly
            @SuppressWarnings("unused")
            @Produces
            @PersistenceContext
            private EntityManager em;

            @Produces
            public Logger produceLog(InjectionPoint injectionPoint) {
                return Logger.getLogger(injectionPoint.getMember().getDeclaringClass().getName());
            }

            @Produces
            @RequestScoped
            public FacesContext produceFacesContext() {
                return FacesContext.getCurrentInstance();
            }
        }

  • Injecting correct object graph using StructureMap in Queue of different Objects

    - by davy
    I have a queuing service that has to inject a different dependency graph depending on the type of object in the queue. I'm using StructureMap. So, if the object in the queue is TypeA, the concrete classes for TypeA are used, and if it's TypeB, the concrete classes for TypeB are used. I'd like to avoid code in the queue like:

        if (typeA)
        {
            // setup TypeA graph
        }
        else if (typeB)
        {
            // setup TypeB graph
        }

    Within the graph, I also have generic classes such as an IReader(ISomething, ISomethingElse), where IReader is generic but needs to have the correct ISomething and ISomethingElse injected for the type. ISomething will also have dependencies, and so on. Currently I create a TypeA or TypeB object, inject a generic Processor class into it using StructureMap, and then manually pass a TypeA or TypeB factory into a method like:

        Processor.Process(new TypeAFactory) // perhaps I should have an abstract factory...

    However, because the factory then creates the generic IReader mentioned above, I end up manually injecting all the TypeA or TypeB classes from there on. I hope enough of this makes sense. I am new to StructureMap and was hoping somebody could point me in the right direction here for a flexible and elegant solution. Thanks
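
    One possible direction, sketched against StructureMap's named-instance support: register one processor graph per message type and pick it by name when an item is dequeued. The processor types and names below are invented, and the graph behind each one would pull in its own IReader/ISomething implementations:

        using StructureMap;

        public interface IProcessor
        {
            void Process(object item);
        }

        public class TypeAProcessor : IProcessor { public void Process(object item) { /* TypeA graph */ } }
        public class TypeBProcessor : IProcessor { public void Process(object item) { /* TypeB graph */ } }

        public static class QueueBootstrapper
        {
            public static IContainer Build()
            {
                return new Container(x =>
                {
                    x.For<IProcessor>().Use<TypeAProcessor>().Named("TypeA");
                    x.For<IProcessor>().Add<TypeBProcessor>().Named("TypeB");
                });
            }
        }

        public class QueueWorker
        {
            private readonly IContainer _container;
            public QueueWorker(IContainer container) { _container = container; }

            public void Handle(object queuedItem)
            {
                // Resolve the graph whose registered name matches the item's runtime type.
                var processor = _container.GetInstance<IProcessor>(queuedItem.GetType().Name);
                processor.Process(queuedItem);
            }
        }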

  • Help me understand making maven project w/ non-maven jar dependencies usable by others

    - by deet
    Hi, I'm in the process of learning maven (and java packaging & distribution) with a new oss project I'm making as practice. Here's my situation, all java of course: My main project is ProjectA, maven-based in a github repository. I have also created one utility project, maven-based, in github: ProjectB. ProjectA depends on a project I have heavily modified that originally was from a google-code ant-based repository, ProjectC. So, how do I set up the build for ProjectA such that someone can download ProjectA.jar and use it without needing to install jars for ProjectB and ProjectC, and also how do I set up the build such that someone could check out ProjectA and run only 'mvn package' for a full compile? (Additionally, what should I do with my modified version of ProjectC? include the class files directly into ProjectA, or fork the project into something that could then be used by as a maven dependency?) I've been reading around, links such as this SO question and this SO question, but I'm unclear how those relate to my particular circumstance. So, any help would be appreciated. Thanks!

  • How should I define Pom.xml in each Module so that web module can communicate with the other two ejb modules?

    - by Kayser
    Maven, maven, maven. It must be very nice and it is nice by a small application. Now I want to build an ear project: with two EJB Modules, a web Module and ear module to build an ear file. Web Module is dependent on the other ejb modules.. How should I define Pom.xml in each Module so that web module can communicate with the other two ejb modules in ear and the ear module builds the right ear file? What I have done before: Module 1 -- Basic Module. All other modules are dependent to this Module. Basic functionality like login etc. <packaging>ejb</packaging> Module 1 -- Data Module. All Entites are here Type EJB <dependency> <groupId>com.myCompnay</groupId> <artifactId>Modul_Basic</artifactId> <version>0.0.1-SNAPSHOT</version> <type>ejb</type> </dependency Module 2 -- Business Module. Businnes Facades are here. Type EJB <dependency> <groupId>com.myCompnay</groupId> <artifactId>Modul_Basic</artifactId> <version>0.0.1-SNAPSHOT</version> <type>ejb</type> </dependency Web Module - Type is WAR <dependency> <groupId>com.myCompnay</groupId> <artifactId>Modul_Basic</artifactId> <version>0.0.1-SNAPSHOT</version> <type>ejb</type> </dependency EAR Module -- In this project I try to build the project. <packaging>ear</packaging> <dependencies> <dependency> <groupId>com.myCompnay</groupId> <artifactId>Modul_Basic</artifactId> <version>0.0.1-SNAPSHOT</version> <type>ejb</type> </dependency <dependency> <groupId>com.myCompnay</groupId> <artifactId>Modul_Business</artifactId> <version>0.0.1-SNAPSHOT</version> <type>ejb</type> </dependency <dependency> <groupId>com.myCompnay</groupId> <artifactId>Modul_WEB</artifactId> <version>0.0.1-SNAPSHOT</version> <type>war</type> </dependency </dependencies> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-ear-plugin</artifactId> </plugin> </plugins> </build>

  • Windows telling me the local security authority is internally inconsistent upon mounting a network drive

    - by acme
    For ages I've mounted a network share (via Samba to a Linux machine) in Windows 7 to access it via a drive letter. This worked flawlessly so far. Until now. Suddenly I couldn't access the drive anymore. Windows told me the network name (I don't remember the exact term) was already in use. So I disconnected and tried to connect again:

        net use Y: \\10.10.10.208\work

    After a long time I get a message saying "The Local Security Authority (LSA) database contains an internal inconsistency". A restart didn't help. The mapped share is accessible (it works on other machines in the same network), so obviously something strange is going on on my machine. Can anyone tell me how I can fix this inconsistency? Update: All machines that have saved the login information fail with this error, so it must be something with the authorization. When I use

        net use Y: \\10.10.10.208\work /user:raphael

    it prompts me for the password and then returns that error message.

  • WDS's MDT DeploymentShare and REMINST replicated with DFS-R does not read WIM from local WDS

    - by mbrownnyc
    I've read several guides on using DFS-R with WDS and MDT to replicate REMINST and DeploymentShare, and I have a particularly strange problem. On the receiving server, after configuring WDS and mounting the DeploymentShare into MDT's DeploymentWorkbench, I also performed the following: 1) in .\Control\Bootstrap.ini, changed DeployRoot to \%wdsserver%\DeploymentShare$ 2) Changed the UNC path at the root of the MDT Deployment Share in the DeploymentWorkbench to match that of the current server. 3) In Unattend.xml files located: .\Control**, modified the following value to match the current server: <cpi:offlineImage catelog://HOST/ I am able to boot and grab the LiteTouch PE image off the local WDS TFTP server, but the WIM files, the scripts, everything else is being pulled off the WDS server at the remote site (the original WDS server that was the source of the files within the DFS-R replicated folder). What do I do in order to solve this problem? I've grepped all the files below the DeploymentShare to look for instances of the hostname of the WDS server at the remote site (the source of the files), but I found none. Here are the guides I referred to: http://technet.microsoft.com/en-us/library/cc771324%28WS.10%29.aspx http://blogs.technet.com/b/askds/archive/2009/12/16/wds-and-dfsr-love-at-first-sync.aspx http://oasysadmin.com/2011/11/03/copying-moving-and-replicating-the-mdt-2010-deployment-share/

  • Iomega eGo Encrypt Plus Encrypted Partition not mounting properly says "local disk"

    - by mosiac
    I'm working with an Iomega eGo 500GB Encrypt Plus portable drive. When I first set it up, installed the software, and set a user password, everything worked fine. The partition labeled "IomegaHDD" mounted properly and I could access the free space. Then I changed the ADMIN password, which required me to lock out the device, wait 60 seconds, log in to the Admin section and change the password, lock the device out again, wait 60 seconds, and then log back in with my user password. When I did that, it of course unmounted the IomegaHDD partition to secure it; now when it remounts, it only shows up as "local disk" and will not remount properly. I had not removed the cable while doing any of this. I have since tried unplugging it and plugging it back in to log in to the drive, but that has not worked. I'm wondering if I should remove every instance of "generic usb hub" from Device Manager and wait for it to re-add itself, or move it to a new set of USB ports temporarily to see if that helps. Any ideas?

  • Force Windows Local Subnet Traffic through a Gateway

    - by Beerey
    Hi all, We are attempting to route all traffic from a certain machine to a gateway. This works OK for traffic destined for subnets outside of the machine's subnet. However, traffic to machines in the same subnet as the source machine goes through an On-link gateway in Windows. This means that the default gateway is ignored, and traffic within the subnet (for example, 192.168.50.10 - 192.168.50.11) flows directly:

        Destination     Netmask          Gateway    Interface        Metric
        192.168.50.0    255.255.255.0    On-link    192.168.50.214   276

    This route can be deleted from Windows, but when the machine is rebooted it always comes back. Adding a persistent static route to the gateway with a lower metric doesn't work, since Windows will still try the On-link gateway after the persistent route fails. Putting each machine in a VLAN isn't an option due to the setup we have. Adding a startup script to delete the gateway isn't a great option either, since users will have full admin access to the machine and might disable the script. We cannot transparently intercept all network traffic on the subnet using gratuitous ARPs or transparent proxying, since there are other machines on the subnet which use a different gateway. The only way we have gotten it to work is by adding a persistent route to the gateway for the subnet traffic and deleting the On-link route on reboot. The question, then: Is there a way to permanently remove this On-link route? If not, is there a way to otherwise force even local subnet traffic to go through a gateway?

  • Internet Explorer / Windows 7 does not want to show HTML file from local network drive

    - by Jaanus
    Setup: I have Windows 7 running inside VirtualBox on Mac OS X host. I have a shared drive with some HTML files, that I am mounting as a local drive W: in Windows, from the VirtualBox server \VBOXSVR. I want to look at them with a browser in Windows. Chrome in Windows 7 opens and shows those HTML files just fine (file:///W:/welcome.html). But Internet Explorer does not, and shows this error instead of the files: Internet Explorer cannot display the web page What you can try: [button Diagnose Connection Problems] More information This problem can be caused by a variety of issues, including: Internet connectivity has been lost. The website is temporarily unavailable. The Domain Name Server (DNS) is not reachable. The Domain Name Server (DNS) does not have a listing for the website's domain. If this is an HTTPS (secure) address, click Tools, click Internet Options, click Advanced, and check to be sure the SSL and TLS protocols are enabled under the security section. For the internet zone in the status bar, it shows: Internet | Protected Mode: On IE settings are a mystery to me, and I could possibly get it to work by tweaking IE settings, but I don't know which ones. How do I make IE show the same files that Chrome is happy to show? (Chrome showing them means that the files themselves are fine, there is something about the setup that just makes IE be a diva.)

  • Real-time local backup with versioning on Windows 7/8

    - by Borda
    I'm looking for a reliable backup solution on Windows, something with a feature set similar to Yadis. I've been using CrashPlan for 2 months, but their software lost more than 1TB of my data, that's why I'm looking for alternatives. Requirements: Real-time folder-to-folder backup: I don't need online features, I want to use this to duplicate my files between my local disks. Versioning support: Should be able to choose how many versions to keep of the files. Plain backup: I'd like to be able to open the backup without special software. No proprietary file format. External disk support: Shouldn't have any problem using external disks, at least on the source side. Backup every file, even locked/system files. (Yadis fails this one) Should start automatically, and have a comfortable GUI. Should use actively maintained and/or popular software. I don't want to use discontinued products. Optional requirements: Low RAM usage I'm not really comfortable with something that eats 1GB of my RAM. Compression support Preferably ZIP, but I'm not picky about this. Any ordinary everyday format is acceptable. Freeware or Open Source Preferred, but not necessary. I can do a one-time payment within reasonable bounds. (preferably under $100) I only need Windows support, but if it works on Linux, that's a plus. I've already searched and tried lots of software, most of them failed at the plain backup, the versioning or the locked file backup requirements.

  • Can't connect to local MySQL server through socket

    - by Martin
    I was trying to tune the performance of a running MySQL server by running this command:

        mysqld_safe --key_buffer_size=64M --table_cache=256 --sort_buffer_size=4M --read_buffer_size=1M &

    After this I'm unable to connect to MySQL from the server where it is running. I get this error:

        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)

    However, luckily I can still connect to MySQL remotely, so all my webservers still have access to MySQL and are running without any problems. Because of this, though, I don't want to try restarting the MySQL server, since that will probably mess everything up. Now, I know that mysqld_safe starts the MySQL server, and since the server was already running, I guess it's some kind of problem with two MySQL servers running and listening on the same port. Is there some way to solve this problem without restarting the initial MySQL server? UPDATE: This is what ps xa | grep "mysql" says:

        11672 ?      S      0:00 /bin/sh /usr/bin/mysqld_safe
        11780 ?      Sl   175:04 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306
        11781 ?      S      0:00 logger -t mysqld -p daemon.error
        12432 pts/0  R+     0:00 grep mysql

  • MySQL -- enable connection to remote server via local /tmp/mysql.sock

    - by Kevin
    Hey all, I run a shared hosting provider and we're looking to move to a High Availability (replicated across multiple datacenters) setup for our hosting. We have created a replicated MySQL setup with failover that works wonderfully, and we'd like to move all of our clients' databases to it. The only trouble is that we have many, many customers, all of whom have configured their Wordpress, Drupal, etc. installations to connect to MySQL via a local socket, not to the address of the remote server. I would hate to have to go through manually and change the connection statement in all of our clients' sites. What I'd ideally love to see is a program that listens on /tmp/mysql.sock and forwards connections there to the remote server I specify. I've seen SQL Relay, but it seems to require that I hardcode all of the database names, usernames, and passwords into its configuration file. This is not going to work for me because our users add new databases dynamically all of the time, and I'd rather not have to write code to update SQLRelay's config file every time. Does anyone have an idea of how to do this? Alternatively, I'd accept ideas on how to handle this at the PHP level (i.e. redirect any attempted calls to mysql_connect() to use that hostname rather than localhost). Thanks, Kevin
