Search Results

Search found 5318 results on 213 pages for 'managed c'.

Page 10 of 213

  • How to fetch managed objects sorted by calculated value

    - by Marcin Zbijowski
    Hello, I'm working on an app that uses Core Data. There is a location entity that holds latitude and longitude values. I'd like to fetch those entities sorted by distance from the user's location. I tried to set the sort descriptor to the distance formula sqrt((x1 - x2)^2 + (y1 - y2)^2), but it fails with the exception "... keypath ... not found in entity".

        NSString *distanceFormula = [NSString stringWithFormat:@"sqrt(((latitude - %f) * (latitude - %f)) + ((longitude - %f) * (longitude - %f)))",
            location.coordinate.latitude, location.coordinate.latitude,
            location.coordinate.longitude, location.coordinate.longitude];
        NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:distanceFormula ascending:YES];
        [fetchRequest setSortDescriptors:[NSArray arrayWithObject:sortDescriptor]];

        NSError *error;
        NSArray *result = [[self managedObjectContext] executeFetchRequest:fetchRequest error:&error];

    I'd like to fetch the objects already sorted rather than fetch them all and then sort them in code. Any tips appreciated.

    Read the article

  • How to create an auto-incrementing attribute in an Xcode managed object model

    - by Mausimo
    Hey, what I want to do is make an int that will be the ID of the entity and auto-increment itself whenever a new entity is added (just like a SQL identity property). Is this possible? I tried using Indexed (checked on the attribute), but there is no description of what Indexed even does. EDIT: I am adding annotations to a map; when you tap the pin, the annotation view with a chevron shows up. When you tap this button I would like to load an associated picture. However, the only way I know how to link the picture to this button is to give the picture a unique ID, set button.tag to this ID, and then load based on button.tag.

    Read the article

  • DNS: forward part of a managed domain to one host, but subdomain services to another (Google Apps)

    - by Paul Zee
    I was going to post this as a comment against DNS: Forward domain to another host, but I don't seem able to do that. I'm in a similar situation. I have a domain registered/managed by enom, except with the slight twist that the domain was originally registered with enom through a Google Apps account creation. The domain currently supports a Google Apps site/account. I now want to direct the bare primary domain and www entries to a hosting provider for the website component, but leave the Google Apps setup intact for its services such as calendar, mail, etc. For now, I'm leaving the domain managed by enom. Also note that when I registered my account with the hosting provider, I gave the same domain name as the existing domain (e.g. example.com), so at their end I'm working with the same domain name in cPanel, etc. In my case, the existing enom DNS entries don't have an A record for www.example.com or the bare example.com domain. Instead, there are 4 x @ records with the Google Apps IP address, 2 x TXT records with what I assume are Google Apps site verification strings/tokens, and a bunch of CNAME records for the various features of Google Apps (mail, calendar, docs, sites, etc.). So, my questions: How do I point the www.example.com and example.com DNS entries at enom to my web site hosting provider, while leaving the domain managed by enom and the Google Apps services working as they are now (with the obvious exception of Google Sites)? How do I set up the example.com mail-related DNS records (MX, etc.) at the web site hosting provider, so that outbound email to [email protected] gets correctly sent to the Google Apps mail account and doesn't get trapped inside the pseudo domain within the hosting provider's servers?

    Read the article

  • ASP.NET 2.0 web application cannot find specified module

    - by Brian Walker
    Recently my ASP.NET application quit working. It is local to my machine. It quit working (I believe) after installing and uninstalling some third-party developer tools. I emptied out my directory and started adding modules back until I narrowed it down to a managed C++ DLL. When I load the DLL into Dependency Walker it says that "c:\windows\winsxs\x86_microsoft.vc80.debugcrt_1fc8b3b9a1e18e3b_8.0.50727.762_x-ww_5490cd9f\MSVCM80D.DLL" loads fine, but "MSVCR80D.DLL" could not be found, even though that DLL exists in the same directory. The managed DLL contains an embedded manifest, and I am using Visual Studio 2005 Service Pack 1.

    Read the article

  • Wrapping unmanaged C++ with C++/CLI - a proper approach.

    - by Jamie
    Hi there, as stated in the title, I want to get my old C++ library working in managed .NET. I can think of two possibilities: 1) I might try to compile the library with /clr and rely on the "It Just Works" approach. 2) I might write a managed wrapper around the unmanaged library. First of all, I want my library to work FAST, as it did in the unmanaged environment, so I am not sure whether the first approach will cause a large decrease in performance; however, it seems to be the quicker one to implement (assuming it works for me at all). On the other hand, I can think of some problems that might appear while writing a wrapper (e.g. how do I wrap an STL collection such as std::vector?). I am thinking of writing a wrapper that resides in the same project as the unmanaged C++ code - is that a reasonable approach (e.g. MyUnmanagedClass and MyManagedClass in the same project, the second wrapping the first)? What would you suggest for this problem? Which solution will give me better performance in the resulting code? Thank you in advance for any suggestions and clues! Cheers

    Read the article

  • Public Cloud, co-location and managed services ... what is the cloud?

    - by llaszews
    Recently I have had conversations with a number of people who are selling and implementing 'cloud' solutions. I put cloud in quotes because implementations like co-location (aka co-lo) and managed services (sometimes referred to as 'your mess for less') have become popular options for companies moving to the cloud. These are obviously not pure public cloud offerings and are probably closer to hybrid cloud implementations, as the infrastructure (PaaS and IaaS) is dedicated to a specific customer. This eliminates the security, multi-tenancy, performance and other concerns that companies have regarding the public cloud. Are co-location and managed services cloud to you? Are they something your company is considering when you think about the cloud?

    Read the article

  • How can I get the real email address of a recipient?

    - by Boas Enkler
    A user in our company has two email addresses on one Exchange mailbox, e.g. test@... and test1@...; the primary SMTP address of the mailbox is test@... If I send a message to test1@..., load it using EWS and iterate through the ToRecipients collection, there is exactly one email address, but the address EWS gives me is test@... and not test1@..., where I actually sent the mail. My problem is that all mail sent to test@... should be imported into our CRM by my program, while mail sent to test1@... must not be imported. In real life both addresses belong to my boss: one is used for normal purposes, the other one is for confidential mail, e.g. from his lawyer. Does anyone know how I can get the real email address the message was sent to?
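    One commonly suggested workaround is to read the raw internet message headers instead of the resolved ToRecipients collection, since the "To" header usually still carries the address the sender actually used before Exchange resolves it to the mailbox's primary SMTP address. The sketch below assumes the EWS Managed API (Microsoft.Exchange.WebServices) and an already-configured ExchangeService; the method name and the address comparison are illustrative placeholders, not code from the question.

        // Minimal sketch, assuming the EWS Managed API.
        using System;
        using Microsoft.Exchange.WebServices.Data;

        static class RecipientCheck
        {
            // Returns true if the raw "To" header mentions the given address.
            public static bool WasSentTo(ExchangeService service, ItemId itemId, string address)
            {
                // InternetMessageHeaders is not in the default property set, so request it explicitly.
                var props = new PropertySet(BasePropertySet.IdOnly, ItemSchema.InternetMessageHeaders);
                EmailMessage message = EmailMessage.Bind(service, itemId, props);

                foreach (InternetMessageHeader header in message.InternetMessageHeaders)
                {
                    // The raw header still shows the address the mail was originally
                    // addressed to, rather than the resolved primary SMTP address.
                    if (header.Name.Equals("To", StringComparison.OrdinalIgnoreCase) &&
                        header.Value.IndexOf(address, StringComparison.OrdinalIgnoreCase) >= 0)
                        return true;
                }
                return false;
            }
        }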

    Read the article

  • How long do managed gigabit ethernet switches take to boot up?

    - by Warren P
    One critical drawback that I have found in researching managed switches, and one that I have some past experience with, is that anything with "lots" of firmware is going to have lots of issues associated with that firmware. We are in the middle of researching rackmount gigabit switches (48 port). It looks like for 48 ports our only choice is managed switches (Dell, Cisco/Linksys, HP, etc.). What I want to know, and cannot find much about, is the boot time for various managed switches. If you own one, please answer with the model number and the cold boot time in seconds. I have read online that the Linksys (now Cisco) SRW series sometimes takes almost 5 minutes before it is fully booted, and that is an unacceptable cost for us. I particularly want to know about Dell PowerConnect managed switch boot-up times (models 3548 and 5448), and would like to confirm the 5-minute boot time on the SRW2048 or similar models, as well as any HP ProCurve boot-up times. The composite of all those figures ought to form an interesting overall performance picture.

    Read the article

  • Do managed/unmanaged switches really need their own cabinets/racks?

    - by Googooboyy
    Hi guys, I'm new to managed switches and I recently engaged an IT consultant to set up my new office network. For the record, the new office network spans two floors, so he recommended using ONE managed switch housed in a rack/cabinet, with unmanaged switches for the rest. Anyway, my main question is: does a managed switch really need to be housed in a rack, or in its own cabinet, to function efficiently? By the way, what are the pros/cons of having a rack/cabinet?

    Read the article

  • How do I use an ADO.NET managed provider in Excel?

    - by Eli
    I have an ADO.NET managed data provider that is registered in machine.config under DbProviderFactories. It is available for use from, say, Analysis Services, so I know it is correctly registered. However, I need to be able to query the managed provider from Excel, and it doesn't appear as a choice under Data Link Properties | All OLE DB Providers. How do I get an ADO.NET managed data provider to appear there, or is there another technique I need to use? Thanks in advance, Eli.

    Read the article

  • How do I avoid the loader lock?

    - by Mark0978
    We have a managed app that uses an assembly. That assembly uses some unmanaged C++ code. The managed C++ code is in a DLL that depends on several other DLLs. All of those DLLs are loaded by this code. (We load all the DLLs that ImageCore.dll depends on first, so we can tell which ones are missing; otherwise it would just show up as "ImageCore.dll failed to load" and the log file would give no clues as to why.)

        class Interop
        {
            private const int DONT_RESOLVE_DLL_REFERENCES = 1;
            private static log4net.ILog log = log4net.LogManager.GetLogger("Imagecore.NET");

            [DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)]
            private static extern IntPtr LoadLibraryEx(string fileName, IntPtr dummy, int flags);

            [DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)]
            private static extern IntPtr FreeLibrary(IntPtr hModule);

            static private String[] libs = { "log4cplus.dll", "yaz.dll", "zlib1.dll", "libxml2.dll" };

            public static void PreloadAssemblies()
            {
                for (int i = 0; i < libs.Length; ++i)
                {
                    String libname = libs[i];
                    IntPtr hModule = LoadLibraryEx(libname, IntPtr.Zero, DONT_RESOLVE_DLL_REFERENCES);
                    if (hModule == IntPtr.Zero)
                    {
                        log.Error("Unable to pre-load '" + libname + "'");
                        throw new DllNotFoundException("Unable to pre-load '" + libname + "'");
                    }
                    else
                    {
                        FreeLibrary(hModule);
                    }
                }

                IntPtr h = LoadLibraryEx("ImageCore.dll", IntPtr.Zero, 0);
                if (h == IntPtr.Zero)
                {
                    throw new DllNotFoundException("Unable to pre-load ImageCore.dll");
                }
            }
        }

    And this code is called by:

        public class ImageDoc : IDisposable
        {
            static ImageDoc()
            {
                ImageHawk.ImageCore.Utility.Interop.PreloadAssemblies();
            }
            ...
        }

    which is a static constructor. As near as I can understand it, as soon as we attempt to use an ImageDoc object, the DLL that contains that assembly is loaded, and as part of that load the static constructor is called, which in turn causes several other DLLs to be loaded as well. What I'm trying to figure out is how to defer loading of those DLLs so that we don't run smack dab into the loader lock that is being triggered because of the static constructor. I've pieced this much together by looking at:

        http://social.msdn.microsoft.com/Forums/en-US/vsto/thread/dd192d7e-ce92-49ce-beef-3816c88e5a86
        http://msdn.microsoft.com/en-us/library/aa290048%28VS.71%29.aspx
        http://forums.devx.com/showthread.php?t=53529
        http://www.yoda.arachsys.com/csharp/beforefieldinit.html

    But I just can't seem to find a way to get these external DLLs to load without it happening at the point the class is loading. I think I need to get these LoadLibrary calls out of the static constructor, but I don't know how to get them called before they are needed (except for how it is done here). I would prefer not to have to put this kind of knowledge about the DLLs into every app that uses this assembly (and I'm not sure that would even fix the problem). The strange thing is that the exception only appears to happen while running within the debugger, not while running outside the debugger.
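    One direction that is often suggested for this kind of problem is to take the LoadLibrary calls out of the type's static constructor entirely and expose an explicit, idempotent initialization method that the hosting application calls from a safe point (for example early in Main), before any type from the mixed-mode assembly is touched. A minimal sketch of that shape is below; Interop.PreloadAssemblies and ImageDoc are names from the question, while everything else is hypothetical.

        // Sketch only: an explicit, thread-safe initializer instead of a static constructor.
        using System.Threading;

        public static class ImageCoreBootstrap
        {
            private static int _initialized;

            // Call this once from application startup (e.g. at the top of Main),
            // well before the first ImageDoc is created, so the native DLLs are
            // loaded outside of any type-initialization / loader-lock-sensitive path.
            public static void EnsureInitialized()
            {
                if (Interlocked.Exchange(ref _initialized, 1) == 0)
                {
                    ImageHawk.ImageCore.Utility.Interop.PreloadAssemblies();
                }
            }
        }

        // Hypothetical usage in the host application:
        // static void Main(string[] args)
        // {
        //     ImageCoreBootstrap.EnsureInitialized();
        //     using (var doc = new ImageDoc()) { /* ... */ }
        // }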

    Read the article

  • Converting Luabind to C#?

    - by themarshal
    Has anybody tried converting Luabind to C#? Is such a thing even possible? I've got an application that I want to convert so that it can run in a completely managed environment, but most of the game logic relies upon Lua scripts, and the application uses Luabind to manage the back-and-forth. I'm not familiar enough with Lua or Luabind to know what's involved. Am I on a fool's errand here?

    Read the article

  • Is there a common practice for making it easier for the garbage collector to free memory in .NET?

    - by MartyIX
    I've been wondering whether there's a way to speed up freeing memory in .NET. I'm creating a game in .NET (managed code only) where no significant graphics are needed, but I would still like to write it properly in order not to lose performance for nothing. For example, is it useful to assign null to objects that are no longer needed? I see this in a few samples around the Internet. Thanks for any answers!
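    As a rough illustration of when nulling a reference matters at all: clearing a field on a long-lived object can make a large child object collectable earlier, whereas nulling a local variable just before it goes out of scope usually buys nothing, because the JIT already treats it as dead. The sketch below is a generic illustration with made-up names, not code from the question.

        // Illustration only: when clearing a reference can actually help the GC.
        class Level
        {
            // Imagine this holds a large chunk of decoded level data.
            public byte[] Geometry = new byte[64 * 1024 * 1024];
        }

        class Game
        {
            private Level _currentLevel;          // lives as long as the Game object does

            public void LoadLevel() { _currentLevel = new Level(); }

            public void UnloadLevel()
            {
                // Useful: the Game object stays alive, so without this line the old
                // Level (and its big array) would remain reachable indefinitely.
                _currentLevel = null;
            }

            public void DoSomething()
            {
                var temp = new Level();
                // ... use temp ...
                // Not useful: temp is a local; once it is no longer read, the GC can
                // already collect it, with or without this assignment.
                temp = null;
            }
        }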

    Read the article

  • Unmanaged C++ in .NET

    - by _Avishay_
    Hi, can someone please tell me why Microsoft, which created the C# language and is now at C# 4.0, still doesn't have an important feature that C++/CLI has: the ability to directly compile and link with unmanaged C++?

    Read the article

  • Blittable Vs. Non-Blittable in IL

    - by Michael Covelli
    I'm trying to make sure that my managed-to-unmanaged calls are optimized. Is there a quick way to see, by looking at the IL, whether any non-blittable types have accidentally gotten into my P/Invoke calls? I tried writing two unmanaged functions in a DLL, one that uses bool (which is non-blittable) and one that uses ints, but I didn't see anything different when looking at the IL to tell me that it was doing something extra to marshal the bool.
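    For reference, the difference tends to show up in the P/Invoke signature rather than in the calling IL: a bool parameter is marshaled (by default to a 4-byte Win32 BOOL), while sticking to blittable types such as int or byte lets the marshaler pass the data through directly. The declarations below are a generic sketch against a hypothetical native.dll, not the poster's code.

        // Sketch: three ways to declare the same native call.
        using System.Runtime.InteropServices;

        internal static class NativeMethods
        {
            // Non-blittable parameter: bool is marshaled (by default as a 4-byte BOOL),
            // so the interop stub has extra conversion work to do.
            [DllImport("native.dll")]
            internal static extern int SetFlag(bool enabled);

            // Blittable alternative: pass an int (0/1) yourself; nothing needs converting.
            [DllImport("native.dll")]
            internal static extern int SetFlagBlittable(int enabled);

            // If the native side really takes a 1-byte C++ bool, an explicit MarshalAs
            // at least makes the sizes match, though bool itself stays non-blittable.
            [DllImport("native.dll")]
            internal static extern int SetFlagByte([MarshalAs(UnmanagedType.I1)] bool enabled);
        }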

    Read the article

  • Caveats with the runAllManagedModulesForAllRequests in IIS 7/8

    - by Rick Strahl
    One of the nice enhancements in IIS 7 (and now 8) is the ability to intercept non-managed - i.e. non-ASP.NET served - requests from within ASP.NET managed modules. This opened up a ton of new functionality that can be applied across non-managed content using .NET code. I thought I had a pretty good handle on how IIS 7's Integrated mode pipeline works, but when I put together some samples last night I realized that the way managed and unmanaged requests fire into the pipeline is downright confusing, especially when it comes to the runAllManagedModulesForAllRequests attribute. There are a number of settings that can affect whether a managed module receives non-ASP.NET content requests such as static files or requests from other frameworks like PHP or ASP classic, and that is the topic of this blog post.

    Native and Managed Modules

    The integrated mode IIS pipeline for IIS 7 and later - as the name suggests - allows for integration of ASP.NET pipeline events in the IIS request pipeline. Natively IIS runs unmanaged code, and there are a host of native-mode modules that handle the core behavior of IIS. If you set up a new IIS site or application without managed code support, only the native modules are supported and fired, without any interaction between native and managed code. If you use the Integrated pipeline with managed code enabled, however, things get a little more confusing, as both native modules and .NET managed modules can fire against the same IIS request. If you open up the IIS Modules dialog you see both managed and unmanaged modules. Unmanaged modules point at physical files on disk, while managed modules point at .NET types and files referenced from the GAC or the current project's BIN folder. Both native and managed modules can co-exist and execute side by side on the same request.

    When running in IIS 7 the IIS pipeline actually instantiates the ASP.NET runtime (via the System.Web.PipelineRuntime class), which, unlike the core HttpRuntime classes in ASP.NET, receives notification callbacks when IIS integrated mode events fire. The IIS pipeline is smart enough to detect whether managed handlers are attached, and if there are none these notifications don't fire, improving performance.

    The good news about all of this for .NET devs is that ASP.NET-style modules can be used for just about every kind of IIS request. All you need to do is create a new Web Application, enable ASP.NET on it, and then attach managed handlers. Handlers can look at ASP.NET content (i.e. ASPX pages, MVC, WebAPI etc. requests) as well as non-ASP.NET content, including static content like HTML files, images, JavaScript and CSS resources. It's very cool that this capability has been surfaced. However, with that functionality comes a lot of responsibility. Because every request passes through the ASP.NET pipeline if managed modules (or handlers) are attached, there are possible performance implications that come with it. Running through the ASP.NET pipeline does add some overhead.

    ASP.NET and Your Own Modules

    When you create a new ASP.NET project, the Visual Studio templates typically create the modules section like this:

        <system.webServer>
          <validation validateIntegratedModeConfiguration="false" />
          <modules runAllManagedModulesForAllRequests="true">
          </modules>
        </system.webServer>

    The interesting thing here is the runAllManagedModulesForAllRequests="true" flag, which, going by its name, seems to control whether registered managed modules run for every request. Realistically though, this flag does not control whether managed code is fired for all requests or not. Rather, it is an override for the preCondition flag on a particular handler. With the flag set to the default true setting, you can assume that pretty much every IIS request you receive ends up firing through your ASP.NET module pipeline, and every module you have configured is accessed even by non-managed requests like static files. In other words, your module will have to handle all requests.

    Now so far so obvious. What's not quite so obvious is what happens when you set runAllManagedModulesForAllRequests="false". You would probably expect that the non-ASP.NET requests immediately stop being funnelled through the ASP.NET module pipeline. But that's not what actually happens. For example, if I create a module like this:

        <add name="SharewareModule" type="HowAspNetWorks.SharewareMessageModule" />

    by default it will fire against ALL requests regardless of the runAllManagedModulesForAllRequests flag. Even if the value is runAllManagedModulesForAllRequests="false", the module is fired. Not quite expected.

    So what is runAllManagedModulesForAllRequests really good for? It's essentially an override for the managedHandler preCondition. If I declare my module in web.config like this:

        <add name="SharewareModule" type="HowAspNetWorks.SharewareMessageModule" preCondition="managedHandler" />

    and runAllManagedModulesForAllRequests="false", my module only fires against managed requests. If I switch the flag to true, my module ends up handling all IIS requests that are passed through from IIS. The moral of the story is that if you intend to only look at ASP.NET content, you should always set the preCondition="managedHandler" attribute to ensure that only managed requests are fired on this module. But even if you do this, realize that runAllManagedModulesForAllRequests="true" can override this setting.

    runAllManagedModulesForAllRequests and Http Application Events

    Another place the runAllManagedModulesForAllRequests attribute has an effect is the global Http Application object (typically in global.asax) and the Application_XXXX events that you can hook up there. While those events are dynamically hooked up to the application class, they basically behave as if they were set with the preCondition="managedHandler" configuration switch. The end result is that if you have runAllManagedModulesForAllRequests="true" you'll see every Http request passed through the Application_XXXX events, and you only see ASP.NET requests with the flag set to "false".

    What's all that mean?

    Configuring an application to handle requests for both ASP.NET and other content can be tricky, especially if you need to mix modules that might require both. A couple of things are important to remember.

    If your module doesn't need to look at every request, by all means set preCondition="managedHandler" on it. This will at least allow it to respond to the runAllManagedModulesForAllRequests="false" flag and then only process ASP.NET requests.

    Look really carefully to see whether you actually need runAllManagedModulesForAllRequests="true" in your applications, as set by the default new-project templates in Visual Studio. Part of the reason this is the default is that it was required for the initial versions of IIS 7 and ASP.NET 2 in order to handle MVC extensionless URLs. However, if you are running IIS 7 or later and .NET 4.0 you can use the ExtensionlessUrlHandler instead, which gives you MVC functionality without requiring runAllManagedModulesForAllRequests="true":

        <handlers>
          <remove name="ExtensionlessUrlHandler-Integrated-4.0" />
          <add name="ExtensionlessUrlHandler-Integrated-4.0"
               path="*."
               verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS"
               type="System.Web.Handlers.TransferRequestHandler"
               preCondition="integratedMode,runtimeVersionv4.0" />
        </handlers>

    Oddly, this is the default for Visual Studio 2012 MVC template apps, so I'm not sure why the default template still adds runAllManagedModulesForAllRequests="true" - it should be enabled only if there's a specific need to access non-ASP.NET requests.

    As a side note, it's interesting that when you access a static HTML resource, you can actually write into the Response object and get the output to show, which is trippy. I haven't looked closely to see how this works - whether ASP.NET just fires directly into the native output stream or whether the static requests are re-routed directly through the ASP.NET pipeline once a managed code module is detected. This doesn't work for all non-ASP.NET resources - for example, I can't do the same with ASP classic requests - but it makes for an interesting demo when injecting HTML content into a static HTML page :-)

    Note that on the original Windows Server 2008 and Vista (IIS 7.0) you might need a hotfix in order for ExtensionlessUrlHandler to work properly for MVC projects. On my live server I needed it (about 6 months ago), but others have observed that the latest service updates have integrated this functionality and the hotfix is not required. On IIS 7.5 and later I've not needed any patches for things to just work.

    Plan for non-ASP.NET Requests

    It's important to remember that if you write a .NET module to run on IIS 7, there's no way for you to prevent non-ASP.NET requests from hitting your module. So make sure you plan to support requests to extensionless URLs and to static resources like files. Luckily, ASP.NET creates a full Request and full Response object for you even for non-ASP.NET content. So even for static files, and even for ASP classic for example, you can look at Request.FilePath or Request.ContentType (in post-handler pipeline events) to determine what content you are dealing with. As always with module design, make sure you check in your code for the conditions that make the module applicable, and if a filter fails, exit immediately - minimize the code that runs if your module doesn't need to process the request.

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in IIS7, ASP.NET
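    To make the "check and exit early" advice concrete, here is a minimal module skeleton of the kind the post describes. It is a generic sketch rather than the SharewareMessageModule from the post; the class name and the content-type check are illustrative only, and it would be registered under <modules>, ideally with preCondition="managedHandler" if it should only see ASP.NET requests.

        // Minimal sketch of an IIS 7 integrated-mode managed module that bails out
        // early when the request isn't one it cares about.
        using System;
        using System.Web;

        public class HtmlStampModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.PostRequestHandlerExecute += OnPostRequestHandlerExecute;
            }

            private void OnPostRequestHandlerExecute(object sender, EventArgs e)
            {
                var context = ((HttpApplication)sender).Context;

                // Cheap applicability check first: only touch HTML output.
                // With runAllManagedModulesForAllRequests="true" this module will also
                // see static files, images, CSS, etc., so exit immediately for those.
                if (!string.Equals(context.Response.ContentType, "text/html",
                                   StringComparison.OrdinalIgnoreCase))
                    return;

                context.Response.Write("<!-- processed by HtmlStampModule -->");
            }

            public void Dispose() { }
        }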

    Read the article

  • Reverting CoreData data

    - by ndg
    I have an NSTableView that is populated via a Core Data-backed NSArrayController. Users are able to edit any field they choose within the NSTableView. When they select the rows they have modified and press a button, the data is sent to a third-party web service. Provided the web service accepts the updated values, I want to commit those values to my persistent store. If, however, the web service returns an error (or simply fails to return), I want the edited fields to revert to their original values. To complicate matters, I have a number of other editable controls, backed by Core Data, which do not need this behaviour. I believe the solution revolves around creating a secondary managed object context, which I would use only for values edited within that particular NSTableView, but I'm confused as to how the two MOCs would interact with each other. What's the best solution to this problem?

    Read the article

  • Learning C++ from scratch in Visual Studio?

    - by flesh
    I need to get up to speed with C++ quite quickly (I've never used it previously) - is learning through Visual Studio (i.e. managed C++) going to be any use? Or will I end up learning the extensions and idiosyncrasies of C++ in VS rather than the language itself? If learning in VS is not recommended, what platform/IDE do you suggest? Edit: Can anyone elaborate on what VS will hide or manage for me when coding unmanaged C++? I really need to be learning things like pointers, garbage collection and all the nuts and bolts of the low-level language. Does VS abstract or hide any of this kind of stuff from you? Thanks for all the suggestions.

    Read the article

  • Calling member method on unmanaged C++ pointer from C# (I think)

    - by Jacob G
    I apologize in advance if this is a trivial question... I'm pretty C++/unmanaged dumb. Here's a simplified analog of my setup.

    In myUnmanagedObject.h in the DLL:

        class myUnmanagedObject
        {
        public:
            virtual void myMethod() {}
        };

    In MyControl.h in Assembly #1:

        #pragma make_public(myUnmanagedObject)

        [event_source(managed)]
        public ref class MyControl : public System::Windows::Forms::UserControl
        {
        public:
            myUnmanagedObject* GetMyUnmanagedObject();
        };

    In C# in Assembly #2:

        unsafe
        {
            MyControl temp = new MyControl();
            myUnmanagedObject* obj = temp.GetMyUnmanagedObject();
            obj->myMethod();
        }

    I get a compile error saying that myUnmanagedObject does not contain a definition for myMethod. Assembly #2 references Assembly #1. Assembly #1 references the DLL. If I compile the DLL with /clr and reference it directly from Assembly #2, it makes no difference. How, from C#, do I execute myMethod?

    Read the article

  • How to pass a SAFEARRAY of UDTs to unmanaged code from C#

    - by themilan
    I also used VT_RECORD, but didn't have any success passing a SAFEARRAY of UDTs.

        [ComVisible(true)]
        [StructLayout(LayoutKind.Sequential)]
        public class MY_CLASS
        {
            [MarshalAs(UnmanagedType.U4)]
            public Int32 width;
            [MarshalAs(UnmanagedType.U4)]
            public Int32 height;
        };

        [DllImport("mydll.dll")]
        public static extern Int32 GetTypes(
            [In, Out][MarshalAs(UnmanagedType.SafeArray, SafeArraySubType = VarEnum.VT_RECORD,
                SafeArrayUserDefinedSubType = typeof(MY_CLASS))] MY_CLASS[] myClass,
            [In, Out][MarshalAs(UnmanagedType.SafeArray, SafeArraySubType = VarEnum.VT_RECORD,
                SafeArrayUserDefinedSubType = typeof(Guid))] Guid[] guids
        );

    If I call my unmanaged code without the 1st parameter, then there is no error in passing the "guids" parameter to unmanaged code, and I am also able to cast elements of the obtained SAFEARRAY on the unmanaged side to the GUID type. But if I try to pass my UDT class MY_CLASS to unmanaged code in a SAFEARRAY, it fails on the managed side (as in the code snippet above). It shows the exception "An unhandled exception of type 'System.Runtime.InteropServices.SafeArrayTypeMismatchException' occurred in myapp.exe" with "Additional information: Specified array was not of the expected type." Please help me pass a SAFEARRAY of UDTs to unmanaged code in this situation.
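    For what it's worth, SafeArrayTypeMismatchException in this scenario is often reported when the UDT is declared as a reference type; VT_RECORD marshaling generally expects a value type (a struct) that COM can describe via IRecordInfo, which in turn usually means the type must be ComVisible, carry a fixed GUID, and be registered for COM interop. Below is a hedged sketch of that shape; the GUID is a placeholder and the struct name is invented, only mydll.dll and GetTypes come from the question.

        // Sketch only: declare the UDT as a ComVisible value type with an explicit GUID
        // so the marshaler can obtain the IRecordInfo needed for a VT_RECORD SAFEARRAY.
        using System;
        using System.Runtime.InteropServices;

        [ComVisible(true)]
        [Guid("11111111-2222-3333-4444-555555555555")]   // placeholder GUID
        [StructLayout(LayoutKind.Sequential)]
        public struct MY_STRUCT
        {
            public Int32 width;
            public Int32 height;
        }

        internal static class NativeMethods
        {
            [DllImport("mydll.dll")]
            public static extern Int32 GetTypes(
                [In, Out]
                [MarshalAs(UnmanagedType.SafeArray,
                           SafeArraySubType = VarEnum.VT_RECORD,
                           SafeArrayUserDefinedSubType = typeof(MY_STRUCT))]
                MY_STRUCT[] items);
        }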

    Read the article

  • @PersistenceContext cannot be resolved to a type

    - by Saken Kungozhin
    I was running some code that has a dependency on @PersistenceContext and a field private EntityManager em; both of which cannot be resolved to a type. What is the meaning of this error and how can I fix it? The code is here:

        package org.jboss.tools.examples.util;

        import java.util.logging.Logger;

        import javax.enterprise.context.RequestScoped;
        import javax.enterprise.inject.Produces;
        import javax.enterprise.inject.spi.InjectionPoint;
        import javax.faces.context.FacesContext;

        /**
         * This class uses CDI to alias Java EE resources, such as the persistence context, to CDI beans
         *
         * <p>
         * Example injection on a managed bean field:
         * </p>
         *
         * <pre>
         * &#064;Inject
         * private EntityManager em;
         * </pre>
         */
        public class Resources {

            // use @SuppressWarnings to tell IDE to ignore warnings about field not being referenced directly
            @SuppressWarnings("unused")
            @Produces
            @PersistenceContext
            private EntityManager em;

            @Produces
            public Logger produceLog(InjectionPoint injectionPoint) {
                return Logger.getLogger(injectionPoint.getMember().getDeclaringClass().getName());
            }

            @Produces
            @RequestScoped
            public FacesContext produceFacesContext() {
                return FacesContext.getCurrentInstance();
            }
        }

    Read the article
