Search Results



  • Unity – Part 5: Injecting Values

    - by Ricardo Peres
    Introduction This is the fifth post on Unity. You can find the introductory post here, the second post, on dependency injection here, a third one on Aspect Oriented Programming (AOP) here and the latest so far, on writing custom extensions, here. This time we will talk about injecting simple values. An Inversion of Control (IoC) / Dependency Injector (DI) container like Unity can be used for things other than injecting complex class dependencies. It can also be used for setting property values or method/constructor parameters whenever a class is built. The main difference is that these values do not have a lifetime manager associated with them and do not come from the regular IoC registration store. Unlike, for instance, MEF, Unity won’t let you register as a dependency a string or an integer, so you have to take a different approach, which I will describe in this post. Scenario Let’s imagine we have a base interface that describes a logger – the same as in previous examples: 1: public interface ILogger 2: { 3: void Log(String message); 4: } And a concrete implementation that writes to a file: 1: public class FileLogger : ILogger 2: { 3: public String Filename 4: { 5: get; 6: set; 7: } 8:  9: #region ILogger Members 10:  11: public void Log(String message) 12: { 13: using (Stream file = File.OpenWrite(this.Filename)) 14: { 15: Byte[] data = Encoding.Default.GetBytes(message); 16: 17: file.Write(data, 0, data.Length); 18: } 19: } 20:  21: #endregion 22: } And let’s say we want the Filename property to come from the application settings (appSettings) section on the Web/App.config file. As usual with Unity, there is an extensibility point that allows us to automatically do this, both with code configuration or statically on the configuration file. 
Extending Injection We start by implementing a class that will retrieve a value from the appSettings by inheriting from ValueElement: 1: sealed class AppSettingsParameterValueElement : ValueElement, IDependencyResolverPolicy 2: { 3: #region Private methods 4: private Object CreateInstance(Type parameterType) 5: { 6: Object configurationValue = ConfigurationManager.AppSettings[this.AppSettingsKey]; 7:  8: if (parameterType != typeof(String)) 9: { 10: TypeConverter typeConverter = this.GetTypeConverter(parameterType); 11:  12: configurationValue = typeConverter.ConvertFromInvariantString(configurationValue as String); 13: } 14:  15: return (configurationValue); 16: } 17: #endregion 18:  19: #region Private methods 20: private TypeConverter GetTypeConverter(Type parameterType) 21: { 22: if (String.IsNullOrEmpty(this.TypeConverterTypeName) == false) 23: { 24: return (Activator.CreateInstance(TypeResolver.ResolveType(this.TypeConverterTypeName)) as TypeConverter); 25: } 26: else 27: { 28: return (TypeDescriptor.GetConverter(parameterType)); 29: } 30: } 31: #endregion 32:  33: #region Public override methods 34: public override InjectionParameterValue GetInjectionParameterValue(IUnityContainer container, Type parameterType) 35: { 36: Object value = this.CreateInstance(parameterType); 37: return (new InjectionParameter(parameterType, value)); 38: } 39: #endregion 40:  41: #region IDependencyResolverPolicy Members 42:  43: public Object Resolve(IBuilderContext context) 44: { 45: Type parameterType = null; 46:  47: if (context.CurrentOperation is ResolvingPropertyValueOperation) 48: { 49: ResolvingPropertyValueOperation op = (context.CurrentOperation as ResolvingPropertyValueOperation); 50: PropertyInfo prop = op.TypeBeingConstructed.GetProperty(op.PropertyName); 51: parameterType = prop.PropertyType; 52: } 53: else if (context.CurrentOperation is ConstructorArgumentResolveOperation) 54: { 55: ConstructorArgumentResolveOperation op = (context.CurrentOperation as ConstructorArgumentResolveOperation); 56: String args = op.ConstructorSignature.Split('(')[1].Split(')')[0]; 57: Type[] types = args.Split(',').Select(a => Type.GetType(a.Split(' ')[0])).ToArray(); 58: ConstructorInfo ctor = op.TypeBeingConstructed.GetConstructor(types); 59: parameterType = ctor.GetParameters().Where(p => p.Name == op.ParameterName).Single().ParameterType; 60: } 61: else if (context.CurrentOperation is MethodArgumentResolveOperation) 62: { 63: MethodArgumentResolveOperation op = (context.CurrentOperation as MethodArgumentResolveOperation); 64: String methodName = op.MethodSignature.Split('(')[0].Split(' ')[1]; 65: String args = op.MethodSignature.Split('(')[1].Split(')')[0]; 66: Type[] types = args.Split(',').Select(a => Type.GetType(a.Split(' ')[0])).ToArray(); 67: MethodInfo method = op.TypeBeingConstructed.GetMethod(methodName, types); 68: parameterType = method.GetParameters().Where(p => p.Name == op.ParameterName).Single().ParameterType; 69: } 70:  71: return (this.CreateInstance(parameterType)); 72: } 73:  74: #endregion 75:  76: #region Public properties 77: [ConfigurationProperty("appSettingsKey", IsRequired = true)] 78: public String AppSettingsKey 79: { 80: get 81: { 82: return ((String)base["appSettingsKey"]); 83: } 84:  85: set 86: { 87: base["appSettingsKey"] = value; 88: } 89: } 90: #endregion 91: } As you can see from the implementation of the IDependencyResolverPolicy.Resolve method, this will work in three different scenarios: When it is applied to a property; When it is applied to a constructor parameter; 
When it is applied to an initialization method. The implementation will even try to convert the value to its declared destination, for example, if the destination property is an Int32, it will try to convert the appSettings stored string to an Int32. Injection By Configuration If we want to configure injection by configuration, we need to implement a custom section extension by inheriting from SectionExtension, and registering our custom element with the name “appSettings”: 1: sealed class AppSettingsParameterInjectionElementExtension : SectionExtension 2: { 3: public override void AddExtensions(SectionExtensionContext context) 4: { 5: context.AddElement<AppSettingsParameterValueElement>("appSettings"); 6: } 7: } And on the configuration file, for setting a property, we use it like this: 1: <appSettings> 2: <add key="LoggerFilename" value="Log.txt"/> 3: </appSettings> 4: <unity xmlns="http://schemas.microsoft.com/practices/2010/unity"> 5: <container> 6: <register type="MyNamespace.ILogger, MyAssembly" mapTo="MyNamespace.ConsoleLogger, MyAssembly"/> 7: <register type="MyNamespace.ILogger, MyAssembly" mapTo="MyNamespace.FileLogger, MyAssembly" name="File"> 8: <lifetime type="singleton"/> 9: <property name="Filename"> 10: <appSettings appSettingsKey="LoggerFilename"/> 11: </property> 12: </register> 13: </container> 14: </unity> If we would like to inject the value as a constructor parameter, it would be instead: 1: <unity xmlns="http://schemas.microsoft.com/practices/2010/unity"> 2: <sectionExtension type="MyNamespace.AppSettingsParameterInjectionElementExtension, MyAssembly" /> 3: <container> 4: <register type="MyNamespace.ILogger, MyAssembly" mapTo="MyNamespace.ConsoleLogger, MyAssembly"/> 5: <register type="MyNamespace.ILogger, MyAssembly" mapTo="MyNamespace.FileLogger, MyAssembly" name="File"> 6: <lifetime type="singleton"/> 7: <constructor> 8: <param name="filename" type="System.String"> 9: <appSettings appSettingsKey="LoggerFilename"/> 10: </param> 11: </constructor> 12: </register> 13: </container> 14: </unity> Notice the appSettings section, where we add a LoggerFilename entry, which is the same as the one referred by our AppSettingsParameterInjectionElementExtension extension. For more advanced behavior, you can add a TypeConverterName attribute to the appSettings declaration, where you can pass an assembly qualified name of a class that inherits from TypeConverter. This class will be responsible for converting the appSettings value to a destination type. 
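    For illustration, a converter referenced this way is just a regular TypeConverter. The sketch below is not from the original post (the class name and the parsing rule are my own); it turns a semicolon-separated appSettings value into a String array, and the GetTypeConverter method shown above would instantiate it by its assembly-qualified name and call ConvertFromInvariantString on it:

        using System;
        using System.ComponentModel;
        using System.Globalization;

        public sealed class SemicolonListConverter : TypeConverter
        {
            public override Boolean CanConvertFrom(ITypeDescriptorContext context, Type sourceType)
            {
                // Only conversion from String (the raw appSettings value) is supported here.
                return ((sourceType == typeof(String)) || base.CanConvertFrom(context, sourceType));
            }

            public override Object ConvertFrom(ITypeDescriptorContext context, CultureInfo culture, Object value)
            {
                String text = value as String;
                if (text != null)
                {
                    // "a;b;c" becomes new String[] { "a", "b", "c" }
                    return (text.Split(';'));
                }
                return (base.ConvertFrom(context, culture, value));
            }
        }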
Injection By Attribute If we would like to use attributes instead, we need to create a custom attribute by inheriting from DependencyResolutionAttribute: 1: [Serializable] 2: [AttributeUsage(AttributeTargets.Parameter | AttributeTargets.Property, AllowMultiple = false, Inherited = true)] 3: public sealed class AppSettingsDependencyResolutionAttribute : DependencyResolutionAttribute 4: { 5: public AppSettingsDependencyResolutionAttribute(String appSettingsKey) 6: { 7: this.AppSettingsKey = appSettingsKey; 8: } 9:  10: public String TypeConverterTypeName 11: { 12: get; 13: set; 14: } 15:  16: public String AppSettingsKey 17: { 18: get; 19: private set; 20: } 21:  22: public override IDependencyResolverPolicy CreateResolver(Type typeToResolve) 23: { 24: return (new AppSettingsParameterValueElement() { AppSettingsKey = this.AppSettingsKey, TypeConverterTypeName = this.TypeConverterTypeName }); 25: } 26: } As for file configuration, there is a mandatory property for setting the appSettings key and an optional TypeConverterName  for setting the name of a TypeConverter. Both the custom attribute and the custom section return an instance of the injector AppSettingsParameterValueElement that we implemented in the first place. Now, the attribute needs to be placed before the injected class’ Filename property: 1: public class FileLogger : ILogger 2: { 3: [AppSettingsDependencyResolution("LoggerFilename")] 4: public String Filename 5: { 6: get; 7: set; 8: } 9:  10: #region ILogger Members 11:  12: public void Log(String message) 13: { 14: using (Stream file = File.OpenWrite(this.Filename)) 15: { 16: Byte[] data = Encoding.Default.GetBytes(message); 17: 18: file.Write(data, 0, data.Length); 19: } 20: } 21:  22: #endregion 23: } Or, if we wanted to use constructor injection: 1: public class FileLogger : ILogger 2: { 3: public String Filename 4: { 5: get; 6: set; 7: } 8:  9: public FileLogger([AppSettingsDependencyResolution("LoggerFilename")] String filename) 10: { 11: this.Filename = filename; 12: } 13:  14: #region ILogger Members 15:  16: public void Log(String message) 17: { 18: using (Stream file = File.OpenWrite(this.Filename)) 19: { 20: Byte[] data = Encoding.Default.GetBytes(message); 21: 22: file.Write(data, 0, data.Length); 23: } 24: } 25:  26: #endregion 27: } Usage Just do: 1: ILogger logger = ServiceLocator.Current.GetInstance<ILogger>("File"); And off you go! A simple way do avoid hardcoded values in component registrations. Of course, this same concept can be applied to registry keys, environment values, XML attributes, etc, etc, just change the implementation of the AppSettingsParameterValueElement class. Next stop: custom lifetime managers.
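    As a rough sketch of that closing remark, an environment-variable flavour of the same idea could pair a small resolver policy with a matching attribute. The names below are hypothetical and the conversion simply reuses TypeDescriptor, so treat this as an outline under those assumptions rather than the post's implementation:

        using System;
        using System.ComponentModel;
        using Microsoft.Practices.ObjectBuilder2;
        using Microsoft.Practices.Unity;

        // Resolves a value from an environment variable and converts it to the target type.
        sealed class EnvironmentVariableResolverPolicy : IDependencyResolverPolicy
        {
            private readonly String variableName;
            private readonly Type targetType;

            public EnvironmentVariableResolverPolicy(String variableName, Type targetType)
            {
                this.variableName = variableName;
                this.targetType = targetType;
            }

            public Object Resolve(IBuilderContext context)
            {
                String raw = Environment.GetEnvironmentVariable(this.variableName);
                // Convert the raw string to the declared destination type (Int32, Boolean, ...).
                return (TypeDescriptor.GetConverter(this.targetType).ConvertFromInvariantString(raw));
            }
        }

        // Marks a property or constructor/method parameter as coming from an environment variable.
        [Serializable]
        [AttributeUsage(AttributeTargets.Parameter | AttributeTargets.Property, AllowMultiple = false, Inherited = true)]
        public sealed class EnvironmentVariableDependencyResolutionAttribute : DependencyResolutionAttribute
        {
            public EnvironmentVariableDependencyResolutionAttribute(String variableName)
            {
                this.VariableName = variableName;
            }

            public String VariableName
            {
                get;
                private set;
            }

            public override IDependencyResolverPolicy CreateResolver(Type typeToResolve)
            {
                return (new EnvironmentVariableResolverPolicy(this.VariableName, typeToResolve));
            }
        }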

    Read the article

  • WebCenter 11.1.1.8 Certified with E-Business Suite 12.2

    - by Steven Chan (Oracle Development)
    Oracle WebCenter Suite is an integrated suite of Fusion Middleware 11gR1 tools used to create web sites and portals using service-oriented architecture (SOA). Application adapters are also available. WebCenter Portal 11.1.1.8 is now certified with Oracle E-Business Suite Release 12.2. This complements our existing certifications of WebCenter Portal 11.1.1.8 with EBS 12.0 and 12.1. WebCenter Portal 11.1.1.8 is part of Oracle Fusion Middleware 11g Release 1 Version 11.1.1.8.0, also known as FMW 11gR1 Patchset 7. Certified Platforms Oracle WebCenter Portal is certified to run on any operating system for which Oracle WebLogic Server 11g is certified. For information on operating systems supported by Oracle WebLogic Server 11g and Oracle WebCenter Portal, refer to 'Oracle Fusion Middleware on WebLogic Server - System Certification' in the Oracle Fusion Middleware 11g Release 1 (11.1.1.x) Certification Matrix. Integration with Oracle WebCenter Portal involves components spanning several different suites of Oracle products. There are no restrictions on the platform on which any particular component may be installed, so long as the platform is supported for that component. Migrating to Oracle WebCenter If you're currently using Oracle Portal, you should be aware that Portal is now in maintenance mode. Updates with bug fixes will continue to be produced, but you should consider migrating to Oracle WebCenter for ongoing new features. References Using WebCenter 11.1.1 with Oracle E-Business Suite Release 12.2 (Note 1332645.1) WebCenter Portal 11g Release 1 (11.1.1.8) Documentation Related Articles Oracle E-Business Suite 12.2 Now Available WebCenter Portal 11.1.1.8 Certified with E-Business Suite 12

    Read the article

  • Service Discovery in WCF 4.0 &ndash; Part 1

    - by Shaun
    When designing a service-oriented architecture (SOA) system, there will be many services with many service contracts, endpoints and behaviors. Besides clients calling the services, in a large distributed system a service may also invoke other services, in which case it needs to know the endpoints of the services it invokes. This might not be a problem in a small system, but once you have more than ten services it becomes one. For example, my current product has around ten services, such as the user authentication service, UI integration service, location service, license service, device monitor service, event monitor service, scheduled job service, accounting service, player management service, etc.   Benefit of a Discovery Service Since almost all of my services need to invoke at least one other service, it is difficult to make sure every service's endpoints are configured correctly in every other service. Furthermore, it becomes a nightmare when a service changes its endpoint at runtime. Hence, we need a discovery service to remove this configuration dependency. A discovery service acts as a service dictionary that stores the relationship between the contracts and the endpoints for every service. With a discovery service, when service X wants to invoke service Y, it simply asks the discovery service where service Y is; the discovery service returns all proper endpoints of service Y, and service X uses one of those endpoints to send its request to service Y. And when a service changes its endpoint address, all it needs to do is update its record in the discovery service, and all other services will learn the new endpoint. WCF 4.0 Discovery supports both a managed proxy discovery mode and an ad-hoc discovery mode. In ad-hoc mode there is no standalone discovery service: when a client wants to invoke a service, it broadcasts a message (normally over UDP) to the entire network with the service match criteria; all services with the discovery behavior enabled receive this message, and only the matching services send their endpoints back to the client. The managed proxy mode works as I described above. In this post I will only cover the managed proxy mode, where there is a standalone discovery service. For more information about the ad-hoc mode please refer to MSDN.   Service Announcement and Probe The main functionality of a discovery service is to return the proper endpoint addresses to whoever is looking for them. In most cases the consuming service (acting as a client) sends the contract it wants to request to the discovery service, and the discovery service finds the endpoint and responds. Sometimes the contract and endpoint are not enough; the criteria can also contain versioning and extension attributes, but this post only covers the case of contract and endpoint. When a client (or sometimes a service that needs to invoke another service) needs to connect to a target service, it first requests the discovery service through the "Probe" operation with its criteria. Basically the criteria contain the contract type name of the target service. The discovery service then searches its endpoint repository by that criteria; the repository might be a database, a distributed cache or a flat XML file. If there is a match, the discovery service grabs the endpoint information (called discovery endpoint metadata in WCF) and sends it back. This is called "Probe". 
    Finally the client receives the discovery endpoint metadata and uses that endpoint to connect to the target service. Besides the probe, the discovery service is also responsible for knowing when a new service becomes available (goes online) and when a service stops (goes offline). This feature is named "Announcement": when a service starts or stops, it announces itself to the discovery service. So the basic functionality of a discovery service includes: 1, an endpoint which receives the service online message and adds the service endpoint information to the discovery repository. 2, an endpoint which receives the service offline message and removes the service endpoint information from the discovery repository. 3, an endpoint which receives the client probe message, finds the matching service endpoints and returns their discovery endpoint metadata. The WCF 4.0 discovery feature covers all of these in its infrastructure classes.   Discovery Service in WCF 4.0 WCF 4.0 introduced a new assembly named System.ServiceModel.Discovery which has all the classes and interfaces necessary to build a WS-Discovery compliant discovery service. It supports both ad-hoc and managed proxy modes. For the case described in this post, what we need to build is a standalone discovery service, which is the managed proxy discovery mode. To build a managed discovery service in WCF 4.0, just create a new class that inherits from the abstract class System.ServiceModel.Discovery.DiscoveryProxy. This class implements and abstracts the procedures of service announcement and probe, and it exposes 8 abstract methods where we can implement our own endpoint register, unregister and find logic. These 8 methods are asynchronous, which means all invocations of the discovery service run asynchronously, for better capacity and performance. 1, OnBeginOnlineAnnouncement, OnEndOnlineAnnouncement: invoked when a service sends the online announcement message. We add the endpoint information to the repository here. 2, OnBeginOfflineAnnouncement, OnEndOfflineAnnouncement: invoked when a service sends the offline announcement message. We remove the endpoint information from the repository here. 3, OnBeginFind, OnEndFind: invoked when a client sends the probe message to find service endpoint information. We look for the proper endpoints by matching the client's criteria against the repository here. 4, OnBeginResolve, OnEndResolve: invoked when a client sends the resolve message. Different from the find method, resolve returns exactly one service endpoint metadata to the client. In our example we will NOT implement this method.   Let's create our own discovery service by inheriting from the base System.ServiceModel.Discovery.DiscoveryProxy. We also need to specify the service behavior on this class: since the built-in discovery service host class only supports singleton mode, we must set its instance context mode to Single. 
1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Text; 5: using System.ServiceModel.Discovery; 6: using System.ServiceModel; 7:  8: namespace Phare.Service 9: { 10: [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)] 11: public class ManagedProxyDiscoveryService : DiscoveryProxy 12: { 13: protected override IAsyncResult OnBeginFind(FindRequestContext findRequestContext, AsyncCallback callback, object state) 14: { 15: throw new NotImplementedException(); 16: } 17:  18: protected override IAsyncResult OnBeginOfflineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 19: { 20: throw new NotImplementedException(); 21: } 22:  23: protected override IAsyncResult OnBeginOnlineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 24: { 25: throw new NotImplementedException(); 26: } 27:  28: protected override IAsyncResult OnBeginResolve(ResolveCriteria resolveCriteria, AsyncCallback callback, object state) 29: { 30: throw new NotImplementedException(); 31: } 32:  33: protected override void OnEndFind(IAsyncResult result) 34: { 35: throw new NotImplementedException(); 36: } 37:  38: protected override void OnEndOfflineAnnouncement(IAsyncResult result) 39: { 40: throw new NotImplementedException(); 41: } 42:  43: protected override void OnEndOnlineAnnouncement(IAsyncResult result) 44: { 45: throw new NotImplementedException(); 46: } 47:  48: protected override EndpointDiscoveryMetadata OnEndResolve(IAsyncResult result) 49: { 50: throw new NotImplementedException(); 51: } 52: } 53: } Then let’s implement the online, offline and find methods one by one. WCF discovery service gives us full flexibility to implement the endpoint add, remove and find logic. For the demo purpose we will use an internal dictionary to store the services’ endpoint metadata. In the next post we will see how to serialize and store these information in database. Define a concurrent dictionary inside the service class since our it will be used in the multiple threads scenario. 1: [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)] 2: public class ManagedProxyDiscoveryService : DiscoveryProxy 3: { 4: private ConcurrentDictionary<EndpointAddress, EndpointDiscoveryMetadata> _services; 5:  6: public ManagedProxyDiscoveryService() 7: { 8: _services = new ConcurrentDictionary<EndpointAddress, EndpointDiscoveryMetadata>(); 9: } 10: } Then we can simply implement the logic of service online and offline. 
1: protected override IAsyncResult OnBeginOnlineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 2: { 3: _services.AddOrUpdate(endpointDiscoveryMetadata.Address, endpointDiscoveryMetadata, (key, value) => endpointDiscoveryMetadata); 4: return new OnOnlineAnnouncementAsyncResult(callback, state); 5: } 6:  7: protected override void OnEndOnlineAnnouncement(IAsyncResult result) 8: { 9: OnOnlineAnnouncementAsyncResult.End(result); 10: } 11:  12: protected override IAsyncResult OnBeginOfflineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 13: { 14: EndpointDiscoveryMetadata endpoint = null; 15: _services.TryRemove(endpointDiscoveryMetadata.Address, out endpoint); 16: return new OnOfflineAnnouncementAsyncResult(callback, state); 17: } 18:  19: protected override void OnEndOfflineAnnouncement(IAsyncResult result) 20: { 21: OnOfflineAnnouncementAsyncResult.End(result); 22: } Regards the find method, the parameter FindRequestContext.Criteria has a method named IsMatch, which can be use for us to evaluate which service metadata is satisfied with the criteria. So the implementation of find method would be like this. 1: protected override IAsyncResult OnBeginFind(FindRequestContext findRequestContext, AsyncCallback callback, object state) 2: { 3: _services.Where(s => findRequestContext.Criteria.IsMatch(s.Value)) 4: .Select(s => s.Value) 5: .All(meta => 6: { 7: findRequestContext.AddMatchingEndpoint(meta); 8: return true; 9: }); 10: return new OnFindAsyncResult(callback, state); 11: } 12:  13: protected override void OnEndFind(IAsyncResult result) 14: { 15: OnFindAsyncResult.End(result); 16: } As you can see, we checked all endpoints metadata in repository by invoking the IsMatch method. Then add all proper endpoints metadata into the parameter. Finally since all these methods are asynchronized we need some AsyncResult classes as well. Below are the base class and the inherited classes used in previous methods. 
1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Text; 5: using System.Threading; 6:  7: namespace Phare.Service 8: { 9: abstract internal class AsyncResult : IAsyncResult 10: { 11: AsyncCallback callback; 12: bool completedSynchronously; 13: bool endCalled; 14: Exception exception; 15: bool isCompleted; 16: ManualResetEvent manualResetEvent; 17: object state; 18: object thisLock; 19:  20: protected AsyncResult(AsyncCallback callback, object state) 21: { 22: this.callback = callback; 23: this.state = state; 24: this.thisLock = new object(); 25: } 26:  27: public object AsyncState 28: { 29: get 30: { 31: return state; 32: } 33: } 34:  35: public WaitHandle AsyncWaitHandle 36: { 37: get 38: { 39: if (manualResetEvent != null) 40: { 41: return manualResetEvent; 42: } 43: lock (ThisLock) 44: { 45: if (manualResetEvent == null) 46: { 47: manualResetEvent = new ManualResetEvent(isCompleted); 48: } 49: } 50: return manualResetEvent; 51: } 52: } 53:  54: public bool CompletedSynchronously 55: { 56: get 57: { 58: return completedSynchronously; 59: } 60: } 61:  62: public bool IsCompleted 63: { 64: get 65: { 66: return isCompleted; 67: } 68: } 69:  70: object ThisLock 71: { 72: get 73: { 74: return this.thisLock; 75: } 76: } 77:  78: protected static TAsyncResult End<TAsyncResult>(IAsyncResult result) 79: where TAsyncResult : AsyncResult 80: { 81: if (result == null) 82: { 83: throw new ArgumentNullException("result"); 84: } 85:  86: TAsyncResult asyncResult = result as TAsyncResult; 87:  88: if (asyncResult == null) 89: { 90: throw new ArgumentException("Invalid async result.", "result"); 91: } 92:  93: if (asyncResult.endCalled) 94: { 95: throw new InvalidOperationException("Async object already ended."); 96: } 97:  98: asyncResult.endCalled = true; 99:  100: if (!asyncResult.isCompleted) 101: { 102: asyncResult.AsyncWaitHandle.WaitOne(); 103: } 104:  105: if (asyncResult.manualResetEvent != null) 106: { 107: asyncResult.manualResetEvent.Close(); 108: } 109:  110: if (asyncResult.exception != null) 111: { 112: throw asyncResult.exception; 113: } 114:  115: return asyncResult; 116: } 117:  118: protected void Complete(bool completedSynchronously) 119: { 120: if (isCompleted) 121: { 122: throw new InvalidOperationException("This async result is already completed."); 123: } 124:  125: this.completedSynchronously = completedSynchronously; 126:  127: if (completedSynchronously) 128: { 129: this.isCompleted = true; 130: } 131: else 132: { 133: lock (ThisLock) 134: { 135: this.isCompleted = true; 136: if (this.manualResetEvent != null) 137: { 138: this.manualResetEvent.Set(); 139: } 140: } 141: } 142:  143: if (callback != null) 144: { 145: callback(this); 146: } 147: } 148:  149: protected void Complete(bool completedSynchronously, Exception exception) 150: { 151: this.exception = exception; 152: Complete(completedSynchronously); 153: } 154: } 155: } 1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Text; 5: using System.ServiceModel.Discovery; 6: using Phare.Service; 7:  8: namespace Phare.Service 9: { 10: internal sealed class OnOnlineAnnouncementAsyncResult : AsyncResult 11: { 12: public OnOnlineAnnouncementAsyncResult(AsyncCallback callback, object state) 13: : base(callback, state) 14: { 15: this.Complete(true); 16: } 17:  18: public static void End(IAsyncResult result) 19: { 20: AsyncResult.End<OnOnlineAnnouncementAsyncResult>(result); 21: } 22:  23: } 24:  25: sealed class OnOfflineAnnouncementAsyncResult : 
AsyncResult 26: { 27: public OnOfflineAnnouncementAsyncResult(AsyncCallback callback, object state) 28: : base(callback, state) 29: { 30: this.Complete(true); 31: } 32:  33: public static void End(IAsyncResult result) 34: { 35: AsyncResult.End<OnOfflineAnnouncementAsyncResult>(result); 36: } 37: } 38:  39: sealed class OnFindAsyncResult : AsyncResult 40: { 41: public OnFindAsyncResult(AsyncCallback callback, object state) 42: : base(callback, state) 43: { 44: this.Complete(true); 45: } 46:  47: public static void End(IAsyncResult result) 48: { 49: AsyncResult.End<OnFindAsyncResult>(result); 50: } 51: } 52:  53: sealed class OnResolveAsyncResult : AsyncResult 54: { 55: EndpointDiscoveryMetadata matchingEndpoint; 56:  57: public OnResolveAsyncResult(EndpointDiscoveryMetadata matchingEndpoint, AsyncCallback callback, object state) 58: : base(callback, state) 59: { 60: this.matchingEndpoint = matchingEndpoint; 61: this.Complete(true); 62: } 63:  64: public static EndpointDiscoveryMetadata End(IAsyncResult result) 65: { 66: OnResolveAsyncResult thisPtr = AsyncResult.End<OnResolveAsyncResult>(result); 67: return thisPtr.matchingEndpoint; 68: } 69: } 70: } Now we have finished the discovery service. The next step is to host it. The discovery service is a standard WCF service. So we can use ServiceHost on a console application, windows service, or in IIS as usual. The following code is how to host the discovery service we had just created in a console application. 1: static void Main(string[] args) 2: { 3: using (var host = new ServiceHost(new ManagedProxyDiscoveryService())) 4: { 5: host.Opened += (sender, e) => 6: { 7: host.Description.Endpoints.All((ep) => 8: { 9: Console.WriteLine(ep.ListenUri); 10: return true; 11: }); 12: }; 13:  14: try 15: { 16: // retrieve the announcement, probe endpoint and binding from configuration 17: var announcementEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["announcementEndpointAddress"]); 18: var probeEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["probeEndpointAddress"]); 19: var binding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding; 20: var announcementEndpoint = new AnnouncementEndpoint(binding, announcementEndpointAddress); 21: var probeEndpoint = new DiscoveryEndpoint(binding, probeEndpointAddress); 22: probeEndpoint.IsSystemEndpoint = false; 23: // append the service endpoint for announcement and probe 24: host.AddServiceEndpoint(announcementEndpoint); 25: host.AddServiceEndpoint(probeEndpoint); 26:  27: host.Open(); 28:  29: Console.WriteLine("Press any key to exit."); 30: Console.ReadKey(); 31: } 32: catch (Exception ex) 33: { 34: Console.WriteLine(ex.ToString()); 35: } 36: } 37:  38: Console.WriteLine("Done."); 39: Console.ReadKey(); 40: } What we need to notice is that, the discovery service needs two endpoints for announcement and probe. In this example I just retrieve them from the configuration file. I also specified the binding of these two endpoints in configuration file as well. 
1: <?xml version="1.0"?> 2: <configuration> 3: <startup> 4: <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/> 5: </startup> 6: <appSettings> 7: <add key="announcementEndpointAddress" value="net.tcp://localhost:10010/announcement"/> 8: <add key="probeEndpointAddress" value="net.tcp://localhost:10011/probe"/> 9: <add key="bindingType" value="System.ServiceModel.NetTcpBinding, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/> 10: </appSettings> 11: </configuration> And this is the console screen when I ran my discovery service. As you can see there are two endpoints listening for announcement message and probe message.   Discoverable Service and Client Next, let’s create a WCF service that is discoverable, which means it can be found by the discovery service. To do so, we need to let the service send the online announcement message to the discovery service, as well as offline message before it shutdown. Just create a simple service which can make the incoming string to upper. The service contract and implementation would be like this. 1: [ServiceContract] 2: public interface IStringService 3: { 4: [OperationContract] 5: string ToUpper(string content); 6: } 1: public class StringService : IStringService 2: { 3: public string ToUpper(string content) 4: { 5: return content.ToUpper(); 6: } 7: } Then host this service in the console application. In order to make the discovery service easy to be tested the service address will be changed each time it’s started. 1: static void Main(string[] args) 2: { 3: var baseAddress = new Uri(string.Format("net.tcp://localhost:11001/stringservice/{0}/", Guid.NewGuid().ToString())); 4:  5: using (var host = new ServiceHost(typeof(StringService), baseAddress)) 6: { 7: host.Opened += (sender, e) => 8: { 9: Console.WriteLine("Service opened at {0}", host.Description.Endpoints.First().ListenUri); 10: }; 11:  12: host.AddServiceEndpoint(typeof(IStringService), new NetTcpBinding(), string.Empty); 13:  14: host.Open(); 15:  16: Console.WriteLine("Press any key to exit."); 17: Console.ReadKey(); 18: } 19: } Currently this service is NOT discoverable. We need to add a special service behavior so that it could send the online and offline message to the discovery service announcement endpoint when the host is opened and closed. WCF 4.0 introduced a service behavior named ServiceDiscoveryBehavior. When we specified the announcement endpoint address and appended it to the service behaviors this service will be discoverable. 1: var announcementAddress = new EndpointAddress(ConfigurationManager.AppSettings["announcementEndpointAddress"]); 2: var announcementBinding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding; 3: var announcementEndpoint = new AnnouncementEndpoint(announcementBinding, announcementAddress); 4: var discoveryBehavior = new ServiceDiscoveryBehavior(); 5: discoveryBehavior.AnnouncementEndpoints.Add(announcementEndpoint); 6: host.Description.Behaviors.Add(discoveryBehavior); The ServiceDiscoveryBehavior utilizes the service extension and channel dispatcher to implement the online and offline announcement logic. In short, it injected the channel open and close procedure and send the online and offline message to the announcement endpoint.   On client side, when we have the discovery service, a client can invoke a service without knowing its endpoint. 
WCF discovery assembly provides a class named DiscoveryClient, which can be used to find the proper service endpoint by passing the criteria. In the code below I initialized the DiscoveryClient, specified the discovery service probe endpoint address. Then I created the find criteria by specifying the service contract I wanted to use and invoke the Find method. This will send the probe message to the discovery service and it will find the endpoints back to me. The discovery service will return all endpoints that matches the find criteria, which means in the result of the find method there might be more than one endpoints. In this example I just returned the first matched one back. In the next post I will show how to extend our discovery service to make it work like a service load balancer. 1: static EndpointAddress FindServiceEndpoint() 2: { 3: var probeEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["probeEndpointAddress"]); 4: var probeBinding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding; 5: var discoveryEndpoint = new DiscoveryEndpoint(probeBinding, probeEndpointAddress); 6:  7: EndpointAddress address = null; 8: FindResponse result = null; 9: using (var discoveryClient = new DiscoveryClient(discoveryEndpoint)) 10: { 11: result = discoveryClient.Find(new FindCriteria(typeof(IStringService))); 12: } 13:  14: if (result != null && result.Endpoints.Any()) 15: { 16: var endpointMetadata = result.Endpoints.First(); 17: address = endpointMetadata.Address; 18: } 19: return address; 20: } Once we probed the discovery service we will receive the endpoint. So in the client code we can created the channel factory from the endpoint and binding, and invoke to the service. When creating the client side channel factory we need to make sure that the client side binding should be the same as the service side. WCF discovery service can be used to find the endpoint for a service contract, but the binding is NOT included. This is because the binding was not in the WS-Discovery specification. In the next post I will demonstrate how to add the binding information into the discovery service. At that moment the client don’t need to create the binding by itself. Instead it will use the binding received from the discovery service. 1: static void Main(string[] args) 2: { 3: Console.WriteLine("Say something..."); 4: var content = Console.ReadLine(); 5: while (!string.IsNullOrWhiteSpace(content)) 6: { 7: Console.WriteLine("Finding the service endpoint..."); 8: var address = FindServiceEndpoint(); 9: if (address == null) 10: { 11: Console.WriteLine("There is no endpoint matches the criteria."); 12: } 13: else 14: { 15: Console.WriteLine("Found the endpoint {0}", address.Uri); 16:  17: var factory = new ChannelFactory<IStringService>(new NetTcpBinding(), address); 18: factory.Opened += (sender, e) => 19: { 20: Console.WriteLine("Connecting to {0}.", factory.Endpoint.ListenUri); 21: }; 22: var proxy = factory.CreateChannel(); 23: using (proxy as IDisposable) 24: { 25: Console.WriteLine("ToUpper: {0} => {1}", content, proxy.ToUpper(content)); 26: } 27: } 28:  29: Console.WriteLine("Say something..."); 30: content = Console.ReadLine(); 31: } 32: } Similarly, the discovery service probe endpoint and binding were defined in the configuration file. 
1: <?xml version="1.0"?> 2: <configuration> 3: <startup> 4: <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/> 5: </startup> 6: <appSettings> 7: <add key="announcementEndpointAddress" value="net.tcp://localhost:10010/announcement"/> 8: <add key="probeEndpointAddress" value="net.tcp://localhost:10011/probe"/> 9: <add key="bindingType" value="System.ServiceModel.NetTcpBinding, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/> 10: </appSettings> 11: </configuration> OK, now let’s have a test. Firstly start the discovery service, and then start our discoverable service. When it started it will announced to the discovery service and registered its endpoint into the repository, which is the local dictionary. And then start the client and type something. As you can see the client asked the discovery service for the endpoint and then establish the connection to the discoverable service. And more interesting, do NOT close the client console but terminate the discoverable service but press the enter key. This will make the service send the offline message to the discovery service. Then start the discoverable service again. Since we made it use a different address each time it started, currently it should be hosted on another address. If we enter something in the client we could see that it asked the discovery service and retrieve the new endpoint, and connect the the service.   Summary In this post I discussed the benefit of using the discovery service and the procedures of service announcement and probe. I also demonstrated how to leverage the WCF Discovery feature in WCF 4.0 to build a simple managed discovery service. For test purpose, in this example I used the in memory dictionary as the discovery endpoint metadata repository. And when finding I also just return the first matched endpoint back. I also hard coded the bindings between the discoverable service and the client. In next post I will show you how to solve the problem mentioned above, as well as some additional feature for production usage. You can download the code here.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services from Node.js is possible through the Windows Azure Node.js SDK, a module available on NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.   Consume Windows Azure Storage Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features, so in this post I will only demonstrate hosting in a WACS worker role. The Node.js code for consuming WAS can also be used when hosted on WAWS, but since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used for a WAWS node application. We can use the solution that I created in my last post. Alternatively we can create a new Windows Azure project in Visual Studio with a worker role, add "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, also known as a module named "azure", which can be installed through NPM. Once downloaded and installed, we need to include it in our worker role project and mark it as "Copy always". You can use my "Copy all always" tool mentioned in my last post to update the current worker role project file. You can also find the source code of this tool here. The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool which provides a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them, but will only demonstrate how to use tables and service runtime information in this post. You can find the full documentation of this SDK here. Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this. 
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Now let’s create a new function, copy the records from WASD to table service. 1. Delete the table named “resource”. 2. Create a new table named “resource”. These 2 steps ensures that we have an empty table. 3. Load all records from the “resource” table in WASD. 4. For each records loaded from WASD, insert them into the table one by one. 5. Prompt to user when finished. In order to use table service we need the storage account and key, which can be found from the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variants in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. Also I created another variant stored the table name. In order to work with table service I need to create the storage client for table service. 
This is very similar as the Windows Azure SDK for .NET. As the code below I created a new variant named “client” and use “createTableService”, specified my storage account name and key. 1: var azure = require("azure"); 2: var storageAccountName = "synctile"; 3: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 4: var tableName = "resource"; 5: var client = azure.createTableService(storageAccountName, storageAccountKey); Now create a new function for URL “/was/init” so that we can trigger it through browser. Then in this function we will firstly load all records from WASD. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: } 18: } 19: }); 20: } 21: }); 22: }); When we succeed loaded all records we can start to transform them into table service. First I need to recreate the table in table service. This can be done by deleting and creating the table through table client I had just created previously. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: } 27: }); 28: }); 29: } 30: } 31: }); 32: } 33: }); 34: }); As you can see, the azure SDK provide its methods in callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked “deleteTable” method, provided the name of the table and a callback function which will be performed when the table had been deleted or failed. Underlying, the azure module will perform the table deletion operation in POSIX async threads pool asynchronously. And once it’s done the callback function will be performed. This is the reason we need to nest the table creation code inside the deletion function. If we perform the table creation code after the deletion code then they will be invoked in parallel. Next, for each records in WASD I created an entity and then insert into the table service. Finally I send the response to the browser. Can you find a bug in the code below? I will describe it later in this post. 
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
The correct logic should be, when all entities had been copied to table service with no error, then I will send response to the browser, otherwise I should send error message to the browser. To do so I need to import another module named “async”, which helps us to coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its “forEach” method for the asynchronous code of inserting table entities. The first argument of “forEach” is the array that will be performed. The second argument is the operation for each items in the array. And the third argument will be invoked then all items had been performed or any errors occurred. Here we can send our response to browser. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: async.forEach(results.rows, 26: // transform the records 27: function (row, callback) { 28: var entity = { 29: "PartitionKey": row[1], 30: "RowKey": row[0], 31: "Value": row[2] 32: }; 33: client.insertEntity(tableName, entity, function (error) { 34: if (error) { 35: callback(error); 36: } 37: else { 38: console.log("entity inserted."); 39: callback(null); 40: } 41: }); 42: }, 43: // send reponse 44: function (error) { 45: if (error) { 46: error["target"] = "insertEntity"; 47: res.send(500, error); 48: } 49: else { 50: console.log("all done"); 51: res.send(200, "All done!"); 52: } 53: } 54: ); 55: } 56: }); 57: }); 58: } 59: } 60: }); 61: } 62: }); 63: }); Run it locally and now we can find the response was sent after all entities had been inserted. Query entities against table service is simple as well. Just use the “queryEntity” method from the table service client and providing the partition key and row key. We can also provide a complex query criteria as well, for example the code here. In the code below I queried an entity by the partition key and row key, and return the proper localization value in response. 1: app.get("/was/:key/:culture", function (req, res) { 2: var key = req.params.key; 3: var culture = req.params.culture; 4: client.queryEntity(tableName, culture, key, function (error, entity) { 5: if (error) { 6: res.send(500, error); 7: } 8: else { 9: res.json(entity); 10: } 11: }); 12: }); And then tested it on local emulator. Finally if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume blob and queue service, as well as the service bus please refer to the MSDN page.   Consume Service Runtime As I mentioned above, before we published our application to the cloud we need to change the connection string and account information in our code. 
But if you had played with WACS you should have known that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. Which means we can have these values defined in CSCFG and CSDEF files and then the runtime should be able to retrieve the proper values. For example we can add some role settings though the property window of the role, specify the connection string and storage account for cloud and local. And the can also use the endpoint which defined in role environment to our Node.js application. In Node.js SDK we can get an object from “azure.RoleEnvironment”, which provides the functionalities to retrieve the configuration settings and endpoints, etc.. In the code below I defined the connection string variants and then use the SDK to retrieve and initialize the table client. 1: var connectionString = ""; 2: var storageAccountName = ""; 3: var storageAccountKey = ""; 4: var tableName = ""; 5: var client; 6:  7: azure.RoleEnvironment.getConfigurationSettings(function (error, settings) { 8: if (error) { 9: console.log("ERROR: getConfigurationSettings"); 10: console.log(JSON.stringify(error)); 11: } 12: else { 13: console.log(JSON.stringify(settings)); 14: connectionString = settings["SqlConnectionString"]; 15: storageAccountName = settings["StorageAccountName"]; 16: storageAccountKey = settings["StorageAccountKey"]; 17: tableName = settings["TableName"]; 18:  19: console.log("connectionString = %s", connectionString); 20: console.log("storageAccountName = %s", storageAccountName); 21: console.log("storageAccountKey = %s", storageAccountKey); 22: console.log("tableName = %s", tableName); 23:  24: client = azure.createTableService(storageAccountName, storageAccountKey); 25: } 26: }); In this way we don’t need to amend the code for the configurations between local and cloud environment since the service runtime will take care of it. At the end of the code we will listen the application on the port retrieved from SDK as well. 1: azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) { 2: if (error) { 3: console.log("ERROR: getCurrentRoleInstance"); 4: console.log(JSON.stringify(error)); 5: } 6: else { 7: console.log(JSON.stringify(instance)); 8: if (instance["endpoints"] && instance["endpoints"]["nodejs"]) { 9: var endpoint = instance["endpoints"]["nodejs"]; 10: app.listen(endpoint["port"]); 11: } 12: else { 13: app.listen(8080); 14: } 15: } 16: }); But if we tested the application right now we will find that it cannot retrieve any values from service runtime. This is because by default, the entry point of this role was defined to the worker role class. In windows azure environment the service runtime will open a named pipeline to the entry point instance, so that it can connect to the runtime and retrieve values. But in this case, since the entry point was worker role and the Node.js was opened inside the role, the named pipeline was established between our worker role class and service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the azure project, add a new element named Runtime. Then add an element named EntryPoint which specify the Node.js command line. So that the Node.js application will have the connection to service runtime, then it’s able to read the configurations. Start the Node.js at local emulator we can find it retrieved the connections, storage account for local. 
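    The post describes the CSDEF change above but does not reproduce the markup. Under the standard Windows Azure service definition schema it would look roughly like the fragment below; the role name and the relative path to "index.js" are assumptions, not taken from the original project:

        <WorkerRole name="WorkerRole1" vmsize="Small">
          <Runtime>
            <EntryPoint>
              <!-- Make node.exe the role's entry point so the named pipe to the
                   service runtime is established for the Node.js process. -->
              <ProgramEntryPoint commandLine="node.exe .\index.js" setReadyOnProcessStart="true" />
            </EntryPoint>
          </Runtime>
          <!-- endpoints, configuration settings and imports stay as they were -->
        </WorkerRole>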
And if we publish our application to Azure, it works against WASD and the storage service through the cloud configurations.

Summary

In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime to retrieve the configuration settings and the endpoint information, and noted that in order to make the service runtime available to my Node.js application I need to create an entry point element in the CSDEF file and set "node.exe" as the entry point.

I used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, and how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host our Node.js application. I also described how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very young network application platform, but since it is simple, easy to learn and deploy, and uses a single-threaded non-blocking IO model, it has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it is all the more useful on a cloud computing platform.

Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement: "node-sqlserver" does not support SQL parameters yet, while "azure" does support using a storage connection string to create the storage client. Microsoft is working on making them easier to use and on adding more features and functionality.

PS, you can download the source code here. You can download the source code of my "Copy all always" tool here.

Hope this helps, Shaun

All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Troubleshooting inconsistent ODBC connectivity

    - by Chris
    I'm attempting to integrate UPS WorldShip with a SQL Server 2008 R2 database but the connection is very inconsistent. UPS claims this is a DSN/Windows problem and I have not been able to convince them otherwise. The integration is quite simple: my shipping guy clicks a button which opens a form where he enters an order #. After pressing enter the shipping information will be pulled from the database for that order #. The problem is that WorldShip often times thinks the DSN does not exist. However, I am able to open WorldShip's customization tool and browse all the tables and fields in the database my DSN is connected to which means at the very least my DSN does, in fact, exist. The reason this has been so difficult to troubleshoot is because there is no consistency to the problem and I'm not able to reliably repeat any behavior. That is to say that rebooting the PC doesn't cause the connection to break and opening the integration tool and viewing the tables and fields doesn't cause the integration button to work. Is there some way for me to monitor this connection from the SQL server or get any clues as to why it fails? As requested by TallTed here is a sample of the trace file I created. After a mere 5 hours the trace file was over 130MB so there's no way I could provide it in its entirety. WorldShipTD d94-690 EXIT SQLSetStmtAttrW with return code -1 (SQL_ERROR) SQLHSTMT 0x0C6632A0 SQLINTEGER 1227 <unknown> SQLPOINTER [Unknown attribute 1227] SQLINTEGER -5 DIAG [IM006] [Microsoft][ODBC Driver Manager] Driver's SQLSetConnectAttr failed (0) WorldShipTD d94-690 ENTER SQLAllocHandle SQLSMALLINT 3 <SQL_HANDLE_STMT> SQLHANDLE 0x0C662FC0 SQLHANDLE * 0x03EBCE38 WorldShipTD d94-690 EXIT SQLAllocHandle with return code 0 (SQL_SUCCESS) SQLSMALLINT 3 <SQL_HANDLE_STMT> SQLHANDLE 0x0C662FC0 SQLHANDLE * 0x03EBCE38 ( 0x0C6632A0) WorldShipTD d94-690 ENTER SQLSetStmtAttrW SQLHSTMT 0x0C6632A0 SQLINTEGER 0 <SQL_ATTR_QUERY_TIMEOUT> SQLPOINTER 30 SQLINTEGER -5 WorldShipTD d94-690 EXIT SQLSetStmtAttrW with return code -1 (SQL_ERROR) SQLHSTMT 0x0C6632A0 SQLINTEGER 0 <SQL_ATTR_QUERY_TIMEOUT> SQLPOINTER 30 SQLINTEGER -5 DIAG [HYC00] [Microsoft][ODBC Microsoft Access Driver]Optional feature not implemented (106) WorldShipTD d94-690 ENTER SQLGetDiagFieldW SQLSMALLINT 3 SQLHANDLE 0x0C6632A0 SQLSMALLINT 1 SQLSMALLINT 4 SQLPOINTER 0x00520708 SQLSMALLINT 12 SQLSMALLINT * 0x0028E2A8 WorldShipTD d94-690 EXIT SQLGetDiagFieldW with return code 0 (SQL_SUCCESS) SQLSMALLINT 3 SQLHANDLE 0x0C6632A0 SQLSMALLINT 1 SQLSMALLINT 4 SQLPOINTER 0x00520708 SQLSMALLINT 12 SQLSMALLINT * 0x0028E2A8 (10) WorldShipTD d94-690 ENTER SQLGetInfoW HDBC 0x0C662FC0 UWORD 77 <SQL_DRIVER_ODBC_VER> PTR 0x03EBCEDC SWORD 100 SWORD * 0x0028E290 WorldShipTD d94-690 EXIT SQLGetInfoW with return code 0 (SQL_SUCCESS) HDBC 0x0C662FC0 UWORD 77 <SQL_DRIVER_ODBC_VER> PTR 0x03EBCEDC [ 10] "03.51" SWORD 100 SWORD * 0x0028E290 (10) WorldShipTD d94-690 ENTER SQLSetStmtAttrW SQLHSTMT 0x0C6632A0 SQLINTEGER 1228 <unknown> SQLPOINTER [Unknown attribute 1228] SQLINTEGER -5 WorldShipTD d94-690 EXIT SQLSetStmtAttrW with return code -1 (SQL_ERROR) SQLHSTMT 0x0C6632A0 SQLINTEGER 1228 <unknown> SQLPOINTER [Unknown attribute 1228] SQLINTEGER -5 DIAG [HY092] [Microsoft][ODBC Microsoft Access Driver]Invalid attribute/option identifier (86) WorldShipTD d94-690 ENTER SQLGetDiagFieldW SQLSMALLINT 3 SQLHANDLE 0x0C6632A0 SQLSMALLINT 1 SQLSMALLINT 4 SQLPOINTER 0x00520708 SQLSMALLINT 12 SQLSMALLINT * 0x0028E2A8 WorldShipTD d94-690 EXIT SQLGetDiagFieldW with return code 0 
(SQL_SUCCESS) SQLSMALLINT 3 SQLHANDLE 0x0C6632A0 SQLSMALLINT 1 SQLSMALLINT 4 SQLPOINTER 0x00520708 SQLSMALLINT 12 SQLSMALLINT * 0x0028E2A8 (10) WorldShipTD d94-690 ENTER SQLSetStmtAttrW SQLHSTMT 0x0C6632A0 SQLINTEGER 1227 <unknown> SQLPOINTER [Unknown attribute 1227] SQLINTEGER -5 WorldShipTD d94-690 EXIT SQLSetStmtAttrW with return code -1 (SQL_ERROR) SQLHSTMT 0x0C6632A0 SQLINTEGER 1227 <unknown> SQLPOINTER [Unknown attribute 1227] SQLINTEGER -5 DIAG [HY092] [Microsoft][ODBC Microsoft Access Driver]Invalid attribute/option identifier (86) WorldShipTD d94-690 ENTER SQLGetDiagFieldW SQLSMALLINT 3 SQLHANDLE 0x0C6632A0 SQLSMALLINT 1 SQLSMALLINT 4 SQLPOINTER 0x00520708 SQLSMALLINT 12 SQLSMALLINT * 0x0028E2A8 WorldShipTD d94-690 EXIT SQLGetDiagFieldW with return code 0 (SQL_SUCCESS) SQLSMALLINT 3 SQLHANDLE 0x0C6632A0 SQLSMALLINT 1 SQLSMALLINT 4 SQLPOINTER 0x00520708 SQLSMALLINT 12 SQLSMALLINT * 0x0028E2A8 (10)

    Read the article

  • /phpTest/zologize/axa.php? Another botnet?

    - by M132
Staring at the log made me think: what is /phpTest/zologize/axa.php and why are bots looking for it? Previously I had lots of /HNAP1/ requests. Requesting /HNAP1/ from the IPs in the log revealed that all of them were sent by Linksys routers; 3 months later these requests turned out to be generated by a router worm called TheMoon. But requesting /phpTest/zologize/axa.php from these servers returns a 404 error. How did these servers get infected, and how can I protect mine from this? 124.11.224.69 - - [02/Feb/2014:00:37:16 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 168 "-" "-" 140.113.238.121 - - [21/Feb/2014:01:24:32 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 168 "-" "-" 77.121.132.79 - - [22/Feb/2014:00:03:56 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 168 "-" "-" 142.4.201.210 - - [24/Feb/2014:21:54:33 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 168 "-" "-" 212.83.168.39 - - [24/Feb/2014:23:16:00 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 168 "-" "-" 87.117.229.210 - - [26/Feb/2014:06:34:58 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-" 78.100.82.99 - - [26/Feb/2014:08:25:48 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-" 198.50.205.219 - - [26/Feb/2014:09:59:11 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-" 210.60.142.107 - - [27/Feb/2014:00:12:12 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-" 101.109.4.73 - - [27/Feb/2014:08:50:46 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-" 61.91.128.158 - - [27/Feb/2014:08:59:15 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-" 201.188.41.175 - - [27/Feb/2014:11:25:42 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-" 220.133.137.2 - - [27/Feb/2014:12:12:46 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-" 203.156.104.88 - - [28/Feb/2014:18:11:49 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-" 61.19.52.58 - - [28/Feb/2014:22:02:56 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-" 84.2.92.40 - - [28/Feb/2014:23:04:17 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-" 58.64.205.11 - - [01/Mar/2014:06:08:33 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 21 "-" "-" 113.61.200.151 - - [01/Mar/2014:18:25:25 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 21 "-" "-" 178.33.219.12 - - [03/Mar/2014:14:41:48 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 21 "-" "-" 74.63.220.132 - - [04/Mar/2014:01:16:44 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 21 "-" "-" 187.141.230.106 - - [04/Mar/2014:15:39:26 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 21 "-" "-" 103.22.181.146 - - [09/May/2014:17:16:56 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 502 166 "-" "-" 176.31.200.14 - - [10/May/2014:19:52:24 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 68 "-" "-" 124.120.92.70 - - [12/May/2014:16:19:40 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 68 "-" "-" 219.85.198.142 - - [15/May/2014:19:21:22 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-" 80.84.53.226 - - [23/May/2014:08:58:25 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-" 87.213.11.165 - - [25/May/2014:06:20:27 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-" 122.116.220.106 - - [25/May/2014:07:10:21 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-" 58.8.128.30 - - [29/May/2014:02:43:49 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-" 142.4.197.135 - - [29/May/2014:11:36:45 +0200] "GET 
/phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-" 178.32.243.65 - - [30/May/2014:01:59:53 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-" 58.8.164.221 - - [30/May/2014:14:04:16 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-" 140.127.182.15 - - [01/Jun/2014:14:45:40 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-" 218.166.43.21 - - [01/Jun/2014:16:07:52 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-" 178.32.188.140 - - [01/Jun/2014:19:11:46 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-" 94.23.211.173 - - [05/Jun/2014:00:52:52 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-" 120.117.105.201 - - [05/Jun/2014:04:39:39 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-" 187.172.27.146 - - [05/Jun/2014:10:20:22 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-" 203.195.219.91 - - [05/Jun/2014:10:53:42 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"

    Read the article

  • Eclipse Juno cannot open with error "An error has occurred. See the log file", Ubuntu 12.04

    - by ana
    I'm trying to launch Eclipse for the first time. I've downloaded the package and installed it manually. Here is the log file: !SESSION 2012-10-10 16:06:11.460 ----------------------------------------------- eclipse.buildId=M20120914-1800 java.fullversion=GNU libgcj 4.6.3 BootLoader constants: OS=linux, ARCH=x86_64, WS=gtk, NL=en_US Command-line arguments: -os linux -ws gtk -arch x86_64 !ENTRY org.eclipse.osgi 4 0 2012-10-10 16:06:19.756 !MESSAGE Could not start bundle: org.eclipse.equinox.console !STACK 0 org.osgi.framework.BundleException: Could not start bundle: org.eclipse.equinox.console at org.eclipse.osgi.framework.internal.core.ConsoleManager.checkForConsoleBundle(ConsoleManager.java:217) at org.eclipse.core.runtime.adaptor.EclipseStarter.startup(EclipseStarter.java:297) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:176) at java.lang.reflect.Method.invoke(libgcj.so.12) at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:629) at org.eclipse.equinox.launcher.Main.basicRun(Main.java:584) at org.eclipse.equinox.launcher.Main.run(Main.java:1438) at org.eclipse.equinox.launcher.Main.main(Main.java:1414) Caused by: org.osgi.framework.BundleException: Exception in org.eclipse.equinox.console.command.adapter.Activator.start() of bundle org.eclipse.equinox.console. at org.eclipse.osgi.framework.internal.core.BundleContextImpl.startActivator(BundleContextImpl.java:734) at org.eclipse.osgi.framework.internal.core.BundleContextImpl.start(BundleContextImpl.java:683) at org.eclipse.osgi.framework.internal.core.BundleHost.startWorker(BundleHost.java:381) at org.eclipse.osgi.framework.internal.core.AbstractBundle.start(AbstractBundle.java:300) at org.eclipse.osgi.framework.internal.core.ConsoleManager.checkForConsoleBundle(ConsoleManager.java:215) ...7 more Caused by: org.osgi.framework.BundleException: Exception in org.apache.felix.gogo.command.Activator.start() of bundle org.apache.felix.gogo.command. 
at org.eclipse.osgi.framework.internal.core.BundleContextImpl.startActivator(BundleContextImpl.java:734) at org.eclipse.osgi.framework.internal.core.BundleContextImpl.start(BundleContextImpl.java:683) at org.eclipse.osgi.framework.internal.core.BundleHost.startWorker(BundleHost.java:381) at org.eclipse.osgi.framework.internal.core.AbstractBundle.start(AbstractBundle.java:300) at org.eclipse.equinox.console.command.adapter.Activator.startBundle(Activator.java:248) at org.eclipse.equinox.console.command.adapter.Activator.start(Activator.java:239) at org.eclipse.osgi.framework.internal.core.BundleContextImpl$1.run(BundleContextImpl.java:711) at java.security.AccessController.doPrivileged(libgcj.so.12) at org.eclipse.osgi.framework.internal.core.BundleContextImpl.startActivator(BundleContextImpl.java:702) ...11 more Caused by: java.lang.NoClassDefFoundError: org.apache.felix.gogo.command.OBR at java.lang.Class.initializeClass(libgcj.so.12) at org.apache.felix.gogo.command.Activator.start(Activator.java:54) at org.eclipse.osgi.framework.internal.core.BundleContextImpl$1.run(BundleContextImpl.java:711) at java.security.AccessController.doPrivileged(libgcj.so.12) at org.eclipse.osgi.framework.internal.core.BundleContextImpl.startActivator(BundleContextImpl.java:702) ...19 more Caused by: java.lang.ClassNotFoundException: org.apache.felix.bundlerepository.Repository at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:501) at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421) at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412) at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107) at java.lang.ClassLoader.loadClass(libgcj.so.12) at java.lang.Class.initializeClass(libgcj.so.12) ...23 more Root exception: java.lang.NoClassDefFoundError: org.apache.felix.gogo.command.OBR at java.lang.Class.initializeClass(libgcj.so.12) at !ENTRY org.eclipse.osgi 2 0 2012-10-10 16:06:30.433 !MESSAGE The following is a complete list of bundles which are not resolved, see the prior log entry for the root cause if it exists: !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.433 !MESSAGE Bundle com.sun.el_2.2.0.v201108011116 [4] was not resolved. !SUBENTRY 2 com.sun.el 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.el_2.2.0. !SUBENTRY 2 com.sun.el 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.servlet.http_2.5.0. !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.434 !MESSAGE Bundle javax.el_2.2.0.v201108011116 [6] was not resolved. !SUBENTRY 2 javax.el 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.servlet_2.5.0. !SUBENTRY 2 javax.el 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.servlet.http_2.5.0. !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.434 !MESSAGE Bundle javax.servlet_3.0.0.v201112011016 [8] was not resolved. !SUBENTRY 2 javax.servlet 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing required capability Require-Capability: osgi.ee; filter="(&(osgi.ee=JavaSE)(version=1.6))". !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.434 !MESSAGE Bundle javax.servlet.jsp_2.2.0.v201112011158 [9] was not resolved. !SUBENTRY 2 javax.servlet.jsp 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.el_2.2.0. !SUBENTRY 2 javax.servlet.jsp 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.servlet_2.6.0. 
!SUBENTRY 2 javax.servlet.jsp 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.servlet.http_2.6.0. !SUBENTRY 2 javax.servlet.jsp 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing required capability Require-Capability: osgi.ee; filter="(&(osgi.ee=JavaSE)(version=1.6))". !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.434 !MESSAGE Bundle org.apache.jasper.glassfish_2.2.2.v201205150955 [21] was not resolved. !SUBENTRY 2 org.apache.jasper.glassfish 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.el_2.2.0. !SUBENTRY 2 org.apache.jasper.glassfish 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.servlet_2.6.0. !SUBENTRY 2 org.apache.jasper.glassfish 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.servlet.descriptor_2.6.0. !SUBENTRY 2 org.apache.jasper.glassfish 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.servlet.http_2.6.0. !SUBENTRY 2 org.apache.jasper.glassfish 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.servlet.jsp_2.2.0. !SUBENTRY 2 org.apache.jasper.glassfish 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.servlet.jsp.el_2.2.0. !SUBENTRY 2 org.apache.jasper.glassfish 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.servlet.jsp.tagext_2.2.0. !SUBENTRY 2 org.apache.jasper.glassfish 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing optionally imported package javax.tools_0.0.0. !SUBENTRY 2 org.apache.jasper.glassfish 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing required capability Require-Capability: osgi.ee; filter="(&(osgi.ee=JavaSE)(version=1.6))". !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.434 !MESSAGE Bundle org.eclipse.equinox.http.jetty_3.0.0.v20120522-1841 [91] was not resolved. !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.servlet_[2.6.0,4.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package javax.servlet.http_[2.6.0,4.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.equinox.http.servlet_1.0.0. !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.http_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.io.bio_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.io.nio_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.server_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.server.bio_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.server.handler_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.server.nio_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.server.session_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.server.ssl_[8.0.0,9.0.0). 
!SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.servlet_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.util_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.util.component_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.util.log_[8.0.0,9.0.0). !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.435 !MESSAGE Bundle org.eclipse.equinox.http.registry_1.1.200.v20120522-2049 [92] was not resolved. !SUBENTRY 2 org.eclipse.equinox.http.registry 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package javax.servlet_2.3.0. !SUBENTRY 2 org.eclipse.equinox.http.registry 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package javax.servlet.http_2.3.0. !SUBENTRY 2 org.eclipse.equinox.http.registry 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing required capability Require-Capability: osgi.ee; filter="(|(&(osgi.ee=CDC/Foundation)(version=1.0))(&(osgi.ee=JavaSE)(version=1.3)))". !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.435 !MESSAGE Bundle org.eclipse.equinox.http.servlet_1.1.300.v20120522-1841 [93] was not resolved. !SUBENTRY 2 org.eclipse.equinox.http.servlet 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package javax.servlet_[2.3.0,3.1.0). !SUBENTRY 2 org.eclipse.equinox.http.servlet 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing optionally imported package javax.servlet.annotation_2.6.0. !SUBENTRY 2 org.eclipse.equinox.http.servlet 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing optionally imported package javax.servlet.descriptor_2.6.0. !SUBENTRY 2 org.eclipse.equinox.http.servlet 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package javax.servlet.http_[2.3.0,3.1.0). !SUBENTRY 2 org.eclipse.equinox.http.servlet 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing required capability Require-Capability: osgi.ee; filter="(|(&(osgi.ee=CDC/Foundation)(version=1.0))(&(osgi.ee=JavaSE)(version=1.3)))". !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.436 !MESSAGE Bundle org.eclipse.equinox.jsp.jasper_1.0.400.v20120522-2049 [94] was not resolved. !SUBENTRY 2 org.eclipse.equinox.jsp.jasper 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing imported package javax.servlet_[2.4.0,3.1.0). !SUBENTRY 2 org.eclipse.equinox.jsp.jasper 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing optionally imported package javax.servlet.annotation_2.6.0. !SUBENTRY 2 org.eclipse.equinox.jsp.jasper 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing optionally imported package javax.servlet.descriptor_2.6.0. !SUBENTRY 2 org.eclipse.equinox.jsp.jasper 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing imported package javax.servlet.http_[2.4.0,3.1.0). !SUBENTRY 2 org.eclipse.equinox.jsp.jasper 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing imported package javax.servlet.jsp_[2.0.0,2.3.0). !SUBENTRY 2 org.eclipse.equinox.jsp.jasper 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing imported package org.apache.jasper.servlet_[0.0.0,6.0.0). !SUBENTRY 2 org.eclipse.equinox.jsp.jasper 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing required capability Require-Capability: osgi.ee; filter="(|(&(osgi.ee=CDC/Foundation)(version=1.0))(&(osgi.ee=JavaSE)(version=1.3)))". 
!SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.436 !MESSAGE Bundle org.eclipse.equinox.jsp.jasper.registry_1.0.300.v20120522-2049 [95] was not resolved. !SUBENTRY 2 org.eclipse.equinox.jsp.jasper.registry 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing imported package org.eclipse.equinox.jsp.jasper_0.0.0. !SUBENTRY 2 org.eclipse.equinox.jsp.jasper.registry 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing imported package javax.servlet_2.4.0. !SUBENTRY 2 org.eclipse.equinox.jsp.jasper.registry 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing imported package javax.servlet.http_2.4.0. !SUBENTRY 2 org.eclipse.equinox.jsp.jasper.registry 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing required capability Require-Capability: osgi.ee; filter="(|(&(osgi.ee=CDC/Foundation)(version=1.0))(&(osgi.ee=JavaSE)(version=1.3)))". !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.436 !MESSAGE Bundle org.eclipse.help.webapp_3.6.101.v20120717-130216 [135] was not resolved. !SUBENTRY 2 org.eclipse.help.webapp 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing required bundle org.eclipse.equinox.jsp.jasper.registry_1.0.100. !SUBENTRY 2 org.eclipse.help.webapp 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing required bundle org.eclipse.equinox.http.registry_1.0.200. !SUBENTRY 2 org.eclipse.help.webapp 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing imported package javax.servlet_2.4.0. !SUBENTRY 2 org.eclipse.help.webapp 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing imported package javax.servlet.http_2.4.0. !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.437 !MESSAGE Bundle org.eclipse.jdt.apt.pluggable.core_1.0.400.v20120522-1651 [139] was not resolved. !SUBENTRY 2 org.eclipse.jdt.apt.pluggable.core 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing imported package org.eclipse.jdt.internal.compiler.tool_0.0.0. !SUBENTRY 2 org.eclipse.jdt.apt.pluggable.core 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing imported package org.eclipse.jdt.internal.compiler.apt.dispatch_0.0.0. !SUBENTRY 2 org.eclipse.jdt.apt.pluggable.core 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing imported package org.eclipse.jdt.internal.compiler.apt.model_0.0.0. !SUBENTRY 2 org.eclipse.jdt.apt.pluggable.core 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing imported package org.eclipse.jdt.internal.compiler.apt.util_0.0.0. !SUBENTRY 2 org.eclipse.jdt.apt.pluggable.core 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing required capability Require-Capability: osgi.ee; filter="(&(osgi.ee=JavaSE)(version=1.6))". !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.437 !MESSAGE Bundle org.eclipse.jdt.compiler.apt_1.0.500.v20120522-1651 [141] was not resolved. !SUBENTRY 2 org.eclipse.jdt.compiler.apt 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing optionally imported package org.eclipse.jdt.internal.compiler.tool_0.0.0. !SUBENTRY 2 org.eclipse.jdt.compiler.apt 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing required capability Require-Capability: osgi.ee; filter="(&(osgi.ee=JavaSE)(version=1.6))". !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.437 !MESSAGE Bundle org.eclipse.jdt.compiler.tool_1.0.101.v20120522-1651 [142] was not resolved. !SUBENTRY 2 org.eclipse.jdt.compiler.tool 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing required capability Require-Capability: osgi.ee; filter="(&(osgi.ee=JavaSE)(version=1.6))". !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.437 !MESSAGE Bundle org.eclipse.jetty.continuation_8.1.3.v20120522 [155] was not resolved. !SUBENTRY 2 org.eclipse.jetty.continuation 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing imported package javax.servlet_2.6.0. 
!SUBENTRY 2 org.eclipse.jetty.continuation 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing optionally imported package org.mortbay.log_[6.1.0,7.0.0). !SUBENTRY 2 org.eclipse.jetty.continuation 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing optionally imported package org.mortbay.util.ajax_[6.1.0,7.0.0). !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.437 !MESSAGE Bundle org.eclipse.jetty.http_8.1.3.v20120522 [156] was not resolved. !SUBENTRY 2 org.eclipse.jetty.http 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing imported package javax.servlet_2.6.0. !SUBENTRY 2 org.eclipse.jetty.http 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing imported package javax.servlet.http_2.6.0. !SUBENTRY 2 org.eclipse.jetty.http 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing imported package org.eclipse.jetty.io_[8.1.0,9.0.0). org.eclipse.jetty.util.resource_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.http 2 0 2012-10-10 16:06:30.438 !MESSAGE Missing imported package org.eclipse.jetty.util.ssl_[8.1.0,9.0.0). !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.438 !MESSAGE Bundle org.eclipse.jetty.io_8.1.3.v20120522 [157] was not resolved. !SUBENTRY 2 org.eclipse.jetty.io 2 0 2012-10-10 16:06:30.438 !MESSAGE Missing imported package org.eclipse.jetty.util_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.io 2 0 2012-10-10 16:06:30.438 !MESSAGE Missing imported package org.eclipse.jetty.util.component_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.io 2 0 2012-10-10 16:06:30.438 !MESSAGE Missing imported package org.eclipse.jetty.util.log_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.io 2 0 2012-10-10 16:06:30.438 !MESSAGE Missing imported package org.eclipse.jetty.util.thread_[8.1.0,9.0.0). !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.438 !MESSAGE Bundle org.eclipse.jetty.security_8.1.3.v20120522 [158] was not resolved. !SUBENTRY 2 org.eclipse.jetty.security 2 0 2012-10-10 16:06:30.438 !MESSAGE Missing imported package javax.servlet_2.6.0. !SUBENTRY 2 org.eclipse.jetty.security 2 0 2012-10-10 16:06:30.438 !MESSAGE Missing imported package javax.servlet.http_2.6.0. !SUBENTRY 2 org.eclipse.jetty.security 2 0 2012-10-10 16:06:30.438 !MESSAGE Missing imported package org.eclipse.jetty.http_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.security 2 0 2012-10-10 16:06:30.438 !MESSAGE Missing imported package org.eclipse.jetty.server_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.security 2 0 2012-10-10 16:06:30.438 org.eclipse.jetty.jmx_8.0.0. !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2012-10-10 16:06:30.439 !MESSAGE Missing imported package org.eclipse.jetty.security_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing imported package org.eclipse.jetty.server_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing imported package org.eclipse.jetty.server.handler_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing imported package org.eclipse.jetty.server.nio_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing imported package org.eclipse.jetty.server.session_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing imported package org.eclipse.jetty.server.ssl_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing optionally imported package org.eclipse.jetty.util_[8.1.0,9.0.0). 
!SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing optionally imported package org.eclipse.jetty.util.component_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing optionally imported package org.eclipse.jetty.util.log_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing optionally imported package org.eclipse.jetty.util.resource_[8.1.0,9.0.0). !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.440 !MESSAGE Bundle org.eclipse.jetty.util_8.1.3.v20120522 [161] was not resolved. !SUBENTRY 2 org.eclipse.jetty.util 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing imported package javax.servlet_2.6.0. !SUBENTRY 2 org.eclipse.jetty.util 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing imported package javax.servlet.http_2.6.0. !SUBENTRY 2 org.eclipse.jetty.util 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing optionally imported package org.slf4j_[1.5.0,2.0.0). !SUBENTRY 2 org.eclipse.jetty.util 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing optionally imported package org.slf4j.helpers_[1.6.0,2.0.0). !SUBENTRY 2 org.eclipse.jetty.util 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing optionally imported package org.slf4j.impl_[1.5.0,2.0.0). !SUBENTRY 2 org.eclipse.jetty.util 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing optionally imported package org.slf4j.spi_[1.6.0,2.0.0). !ENTRY org.eclipse.osgi 4 0 2012-10-10 16:06:30.441 !MESSAGE Application error !STACK 1 java.lang.ArrayIndexOutOfBoundsException: 0 at org.eclipse.e4.core.internal.di.ConstructorRequestor.calcDependentObjects(ConstructorRequestor.java:79) at org.eclipse.e4.core.internal.di.Requestor.getDependentObjects(Requestor.java:143) at org.eclipse.e4.core.internal.di.InjectorImpl.resolveArgs(InjectorImpl.java:408) at org.eclipse.e4.core.internal.di.InjectorImpl.internalMake(InjectorImpl.java:312) at org.eclipse.e4.core.internal.di.InjectorImpl.make(InjectorImpl.java:240) at org.eclipse.e4.core.contexts.ContextInjectionFactory.make(ContextInjectionFactory.java:161) at org.eclipse.e4.ui.internal.workbench.swt.E4Application.createDefaultHeadlessContext(E4Application.java:420) at org.eclipse.e4.ui.internal.workbench.swt.E4Application.createDefaultContext(E4Application.java:434) at org.eclipse.e4.ui.internal.workbench.swt.E4Application.createE4Workbench(E4Application.java:182) at org.eclipse.ui.internal.Workbench$5.run(Workbench.java:557) at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332) at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:543) at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:149) at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:124) at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:353) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:180) at java.lang.reflect.Method.invoke(libgcj.so.12) at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:629) at org.eclipse.equinox.launcher.Main.basicRun(Main.java:584) at org.eclipse.equinox.launcher.Main.run(Main.java:1438) at org.eclipse.equinox.launcher.Main.main(Main.java:1414) would you please help me with this?

    Read the article

  • cdc-acm driver: This device cannot do calls on its own. It is not a modem

    - by Sorcrer
    I am using a Beagleboard-xM with a 3.12 kernel and an Ubuntu rootfs from Robert Nelson's site. I use a Telit HE910 GPS+GSM modem in my project. As per the HW user guide I have to apply a logic high for 5 s on the modem's enable input, so when I do this by toggling the GPIO pin for 5 s using a script, I get some messages on the terminal. I am sure these messages come from the driver in usb/class/cdc-acm.c, but I couldn't find the reason behind them. How can I solve this issue? root@arm:~# ./modem_on.sh Turning on Telit modem ...... going to sleep and toggle [ 70.791381] cdc_acm 1-2:1.0: This device cannot do calls on its own. It is not a modem. [ 74.390258] cdc_acm 1-2:1.0: This device cannot do calls on its own. It is not a modem. [ 74.406890] cdc_acm 1-2:1.2: This device cannot do calls on its own. It is not a modem. [ 74.462188] cdc_acm 1-2:1.4: This device cannot do calls on its own. It is not a modem. [ 74.478363] cdc_acm 1-2:1.6: This device cannot do calls on its own. It is not a modem. [ 74.495269] cdc_acm 1-2:1.8: This device cannot do calls on its own. It is not a modem. [ 74.510040] cdc_acm 1-2:1.10: This device cannot do calls on its own. It is not a modem. [ 74.530090] cdc_acm 1-2:1.12: This device cannot do calls on its own. It is not a modem. [ 74.619720] cdc_acm 1-2:1.0: This device cannot do calls on its own. It is not a modem. [ 74.634429] cdc_acm 1-2:1.2: This device cannot do calls on its own. It is not a modem. [ 74.649475] cdc_acm 1-2:1.4: This device cannot do calls on its own. It is not a modem. [ 74.664459] cdc_acm 1-2:1.6: This device cannot do calls on its own. It is not a modem. [ 74.678741] cdc_acm 1-2:1.8: This device cannot do calls on its own. It is not a modem. [ 74.693389] cdc_acm 1-2:1.10: This device cannot do calls on its own. It is not a modem. [ 74.708099] cdc_acm 1-2:1.12: This device cannot do calls on its own. It is not a modem. Script complete .......... The relevant portion of dmesg is below [ 30.623107] init: plymouth-upstart-bridge main process ended, respawning [ 70.629943] usb 1-2: new high-speed USB device number 2 using ehci-omap [ 70.782501] usb 1-2: config 1 interface 0 altsetting 0 endpoint 0x81 has an invalid bInterval 255, changing to 11 [ 70.782592] usb 1-2: New USB device found, idVendor=058b, idProduct=0041 [ 70.782623] usb 1-2: New USB device strings: Mfr=0, Product=0, SerialNumber=0 [ 70.791381] cdc_acm 1-2:1.0: This device cannot do calls on its own. It is not a modem. [ 70.801483] cdc_acm 1-2:1.0: ttyACM0: USB ACM device [ 73.041625] usb 1-2: USB disconnect, device number 2 [ 74.209930] usb 1-2: new high-speed USB device number 3 using ehci-omap [ 74.369049] usb 1-2: New USB device found, idVendor=1bc7, idProduct=0021 [ 74.369110] usb 1-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3 [ 74.369140] usb 1-2: Product: Telit Wireless Module [ 74.369171] usb 1-2: Manufacturer: Telit wireless solutions [ 74.369201] usb 1-2: SerialNumber: 357164042197668 [ 74.390258] cdc_acm 1-2:1.0: This device cannot do calls on its own. It is not a modem. [ 74.400207] cdc_acm 1-2:1.0: ttyACM0: USB ACM device [ 74.406890] cdc_acm 1-2:1.2: This device cannot do calls on its own. It is not a modem. [ 74.416900] cdc_acm 1-2:1.2: ttyACM1: USB ACM device [ 74.462188] cdc_acm 1-2:1.4: This device cannot do calls on its own. It is not a modem. [ 74.472259] cdc_acm 1-2:1.4: ttyACM2: USB ACM device [ 74.478363] cdc_acm 1-2:1.6: This device cannot do calls on its own. It is not a modem. 
[ 74.488372] cdc_acm 1-2:1.6: ttyACM3: USB ACM device [ 74.495269] cdc_acm 1-2:1.8: This device cannot do calls on its own. It is not a modem. [ 74.505279] cdc_acm 1-2:1.8: ttyACM4: USB ACM device [ 74.510040] cdc_acm 1-2:1.10: This device cannot do calls on its own. It is not a modem. [ 74.520141] cdc_acm 1-2:1.10: ttyACM5: USB ACM device [ 74.530090] cdc_acm 1-2:1.12: This device cannot do calls on its own. It is not a modem. [ 74.540283] cdc_acm 1-2:1.12: ttyACM6: USB ACM device [ 74.619720] cdc_acm 1-2:1.0: This device cannot do calls on its own. It is not a modem. [ 74.629455] cdc_acm 1-2:1.0: ttyACM0: USB ACM device [ 74.634429] cdc_acm 1-2:1.2: This device cannot do calls on its own. It is not a modem. [ 74.644042] cdc_acm 1-2:1.2: ttyACM1: USB ACM device [ 74.649475] cdc_acm 1-2:1.4: This device cannot do calls on its own. It is not a modem. [ 74.659027] cdc_acm 1-2:1.4: ttyACM2: USB ACM device [ 74.664459] cdc_acm 1-2:1.6: This device cannot do calls on its own. It is not a modem. [ 74.674133] cdc_acm 1-2:1.6: ttyACM3: USB ACM device [ 74.678741] cdc_acm 1-2:1.8: This device cannot do calls on its own. It is not a modem. [ 74.688415] cdc_acm 1-2:1.8: ttyACM4: USB ACM device [ 74.693389] cdc_acm 1-2:1.10: This device cannot do calls on its own. It is not a modem. [ 74.703186] cdc_acm 1-2:1.10: ttyACM5: USB ACM device [ 74.708099] cdc_acm 1-2:1.12: This device cannot do calls on its own. It is not a modem. [ 74.717895] cdc_acm 1-2:1.12: ttyACM6: USB ACM device `

    Read the article

  • dbus dependency with yum

    - by Hengjie
    Whenever, I try and run yum update I get the following error: [root@server ~]# yum update Loaded plugins: dellsysid, fastestmirror Loading mirror speeds from cached hostfile * base: mirror01.idc.hinet.net * extras: mirror01.idc.hinet.net * rpmforge: fr2.rpmfind.net * updates: mirror01.idc.hinet.net Excluding Packages in global exclude list Finished Setting up Update Process Resolving Dependencies --> Running transaction check ---> Package NetworkManager.x86_64 1:0.7.0-13.el5 set to be updated ---> Package NetworkManager-glib.x86_64 1:0.7.0-13.el5 set to be updated ---> Package SysVinit.x86_64 0:2.86-17.el5 set to be updated ---> Package acl.x86_64 0:2.2.39-8.el5 set to be updated ---> Package acpid.x86_64 0:1.0.4-12.el5 set to be updated ---> Package apr.x86_64 0:1.2.7-11.el5_6.5 set to be updated ---> Package aspell.x86_64 12:0.60.3-12 set to be updated ---> Package audit.x86_64 0:1.8-2.el5 set to be updated ---> Package audit-libs.x86_64 0:1.8-2.el5 set to be updated ---> Package audit-libs-python.x86_64 0:1.8-2.el5 set to be updated ---> Package authconfig.x86_64 0:5.3.21-7.el5 set to be updated ---> Package autofs.x86_64 1:5.0.1-0.rc2.163.el5 set to be updated ---> Package bash.x86_64 0:3.2-32.el5 set to be updated ---> Package bind.x86_64 30:9.3.6-20.P1.el5 set to be updated ---> Package bind-libs.x86_64 30:9.3.6-20.P1.el5 set to be updated ---> Package bind-utils.x86_64 30:9.3.6-20.P1.el5 set to be updated ---> Package binutils.x86_64 0:2.17.50.0.6-20.el5 set to be updated ---> Package centos-release.x86_64 10:5-8.el5.centos set to be updated ---> Package centos-release-notes.x86_64 0:5.8-0 set to be updated ---> Package coreutils.x86_64 0:5.97-34.el5_8.1 set to be updated ---> Package cpp.x86_64 0:4.1.2-52.el5 set to be updated ---> Package cpuspeed.x86_64 1:1.2.1-10.el5 set to be updated ---> Package crash.x86_64 0:5.1.8-1.el5.centos set to be updated ---> Package cryptsetup-luks.x86_64 0:1.0.3-8.el5 set to be updated ---> Package cups.x86_64 1:1.3.7-30.el5 set to be updated ---> Package cups-libs.x86_64 1:1.3.7-30.el5 set to be updated ---> Package curl.x86_64 0:7.15.5-15.el5 set to be updated --> Processing Dependency: dbus = 1.1.2-15.el5_6 for package: dbus-libs ---> Package dbus.x86_64 0:1.1.2-16.el5_7 set to be updated ---> Package dbus-libs.x86_64 0:1.1.2-16.el5_7 set to be updated ---> Package device-mapper.x86_64 0:1.02.67-2.el5 set to be updated ---> Package device-mapper-event.x86_64 0:1.02.67-2.el5 set to be updated ---> Package device-mapper-multipath.x86_64 0:0.4.7-48.el5_8.1 set to be updated ---> Package dhclient.x86_64 12:3.0.5-31.el5 set to be updated ---> Package dmidecode.x86_64 1:2.11-1.el5 set to be updated ---> Package dmraid.x86_64 0:1.0.0.rc13-65.el5 set to be updated ---> Package dmraid-events.x86_64 0:1.0.0.rc13-65.el5 set to be updated ---> Package dump.x86_64 0:0.4b41-6.el5 set to be updated ---> Package e2fsprogs.x86_64 0:1.39-33.el5 set to be updated ---> Package e2fsprogs-devel.x86_64 0:1.39-33.el5 set to be updated ---> Package e2fsprogs-libs.x86_64 0:1.39-33.el5 set to be updated ---> Package ecryptfs-utils.x86_64 0:75-8.el5 set to be updated ---> Package file.x86_64 0:4.17-21 set to be updated ---> Package finger.x86_64 0:0.17-33 set to be updated ---> Package firstboot-tui.x86_64 0:1.4.27.9-1.el5.centos set to be updated ---> Package freetype.x86_64 0:2.2.1-28.el5_7.2 set to be updated ---> Package freetype-devel.x86_64 0:2.2.1-28.el5_7.2 set to be updated ---> Package ftp.x86_64 0:0.17-37.el5 set to be updated ---> Package gamin.x86_64 
0:0.1.7-10.el5 set to be updated ---> Package gamin-python.x86_64 0:0.1.7-10.el5 set to be updated ---> Package gawk.x86_64 0:3.1.5-15.el5 set to be updated ---> Package gcc.x86_64 0:4.1.2-52.el5 set to be updated ---> Package gcc-c++.x86_64 0:4.1.2-52.el5 set to be updated ---> Package glibc.i686 0:2.5-81.el5_8.1 set to be updated ---> Package glibc.x86_64 0:2.5-81.el5_8.1 set to be updated ---> Package glibc-common.x86_64 0:2.5-81.el5_8.1 set to be updated ---> Package glibc-devel.x86_64 0:2.5-81.el5_8.1 set to be updated ---> Package glibc-headers.x86_64 0:2.5-81.el5_8.1 set to be updated ---> Package gnutls.x86_64 0:1.4.1-7.el5_8.2 set to be updated ---> Package groff.x86_64 0:1.18.1.1-13.el5 set to be updated ---> Package gtk2.x86_64 0:2.10.4-21.el5_7.7 set to be updated ---> Package gzip.x86_64 0:1.3.5-13.el5.centos set to be updated ---> Package hmaccalc.x86_64 0:0.9.6-4.el5 set to be updated ---> Package htop.x86_64 0:1.0.1-2.el5.rf set to be updated ---> Package hwdata.noarch 0:0.213.26-1.el5 set to be updated ---> Package ifd-egate.x86_64 0:0.05-17.el5 set to be updated ---> Package initscripts.x86_64 0:8.45.42-1.el5.centos set to be updated ---> Package iproute.x86_64 0:2.6.18-13.el5 set to be updated ---> Package iptables.x86_64 0:1.3.5-9.1.el5 set to be updated ---> Package iptables-ipv6.x86_64 0:1.3.5-9.1.el5 set to be updated ---> Package iscsi-initiator-utils.x86_64 0:6.2.0.872-13.el5 set to be updated ---> Package kernel.x86_64 0:2.6.18-308.1.1.el5 set to be installed ---> Package kernel-headers.x86_64 0:2.6.18-308.1.1.el5 set to be updated ---> Package kpartx.x86_64 0:0.4.7-48.el5_8.1 set to be updated ---> Package krb5-devel.x86_64 0:1.6.1-70.el5 set to be updated ---> Package krb5-libs.x86_64 0:1.6.1-70.el5 set to be updated ---> Package krb5-workstation.x86_64 0:1.6.1-70.el5 set to be updated ---> Package ksh.x86_64 0:20100621-5.el5_8.1 set to be updated ---> Package kudzu.x86_64 0:1.2.57.1.26-3.el5.centos set to be updated ---> Package less.x86_64 0:436-9.el5 set to be updated ---> Package lftp.x86_64 0:3.7.11-7.el5 set to be updated ---> Package libX11.x86_64 0:1.0.3-11.el5_7.1 set to be updated ---> Package libX11-devel.x86_64 0:1.0.3-11.el5_7.1 set to be updated ---> Package libXcursor.x86_64 0:1.1.7-1.2 set to be updated ---> Package libacl.x86_64 0:2.2.39-8.el5 set to be updated ---> Package libgcc.x86_64 0:4.1.2-52.el5 set to be updated ---> Package libgomp.x86_64 0:4.4.6-3.el5.1 set to be updated ---> Package libpng.x86_64 2:1.2.10-16.el5_8 set to be updated ---> Package libpng-devel.x86_64 2:1.2.10-16.el5_8 set to be updated ---> Package libsmbios.x86_64 0:2.2.27-3.2.el5 set to be updated ---> Package libstdc++.x86_64 0:4.1.2-52.el5 set to be updated ---> Package libstdc++-devel.x86_64 0:4.1.2-52.el5 set to be updated ---> Package libsysfs.x86_64 0:2.1.0-1.el5 set to be updated ---> Package libusb.x86_64 0:0.1.12-6.el5 set to be updated ---> Package libvolume_id.x86_64 0:095-14.27.el5_7.1 set to be updated ---> Package libxml2.x86_64 0:2.6.26-2.1.15.el5_8.2 set to be updated ---> Package libxml2-python.x86_64 0:2.6.26-2.1.15.el5_8.2 set to be updated ---> Package logrotate.x86_64 0:3.7.4-12 set to be updated ---> Package lsof.x86_64 0:4.78-6 set to be updated ---> Package lvm2.x86_64 0:2.02.88-7.el5 set to be updated ---> Package m2crypto.x86_64 0:0.16-8.el5 set to be updated ---> Package man.x86_64 0:1.6d-2.el5 set to be updated ---> Package man-pages.noarch 0:2.39-20.el5 set to be updated ---> Package mcelog.x86_64 1:0.9pre-1.32.el5 set to be updated ---> 
Package mesa-libGL.x86_64 0:6.5.1-7.10.el5 set to be updated ---> Package mesa-libGL-devel.x86_64 0:6.5.1-7.10.el5 set to be updated ---> Package microcode_ctl.x86_64 2:1.17-1.56.el5 set to be updated ---> Package mkinitrd.x86_64 0:5.1.19.6-75.el5 set to be updated ---> Package mktemp.x86_64 3:1.5-24.el5 set to be updated --> Processing Dependency: nash = 5.1.19.6-68.el5_6.1 for package: mkinitrd ---> Package nash.x86_64 0:5.1.19.6-75.el5 set to be updated ---> Package net-snmp.x86_64 1:5.3.2.2-17.el5 set to be updated ---> Package net-snmp-devel.x86_64 1:5.3.2.2-17.el5 set to be updated ---> Package net-snmp-libs.x86_64 1:5.3.2.2-17.el5 set to be updated ---> Package net-snmp-utils.x86_64 1:5.3.2.2-17.el5 set to be updated ---> Package net-tools.x86_64 0:1.60-82.el5 set to be updated ---> Package nfs-utils.x86_64 1:1.0.9-60.el5 set to be updated ---> Package nfs-utils-lib.x86_64 0:1.0.8-7.9.el5 set to be updated ---> Package nscd.x86_64 0:2.5-81.el5_8.1 set to be updated ---> Package nspr.x86_64 0:4.8.9-1.el5_8 set to be updated ---> Package nspr-devel.x86_64 0:4.8.9-1.el5_8 set to be updated ---> Package nss.x86_64 0:3.13.1-5.el5_8 set to be updated ---> Package nss-devel.x86_64 0:3.13.1-5.el5_8 set to be updated ---> Package nss-tools.x86_64 0:3.13.1-5.el5_8 set to be updated ---> Package nss_ldap.x86_64 0:253-49.el5 set to be updated ---> Package ntp.x86_64 0:4.2.2p1-15.el5.centos.1 set to be updated ---> Package numactl.x86_64 0:0.9.8-12.el5_6 set to be updated ---> Package oddjob.x86_64 0:0.27-12.el5 set to be updated ---> Package oddjob-libs.x86_64 0:0.27-12.el5 set to be updated ---> Package openldap.x86_64 0:2.3.43-25.el5 set to be updated ---> Package openssh.x86_64 0:4.3p2-82.el5 set to be updated ---> Package openssh-clients.x86_64 0:4.3p2-82.el5 set to be updated ---> Package openssh-server.x86_64 0:4.3p2-82.el5 set to be updated ---> Package openssl.i686 0:0.9.8e-22.el5_8.1 set to be updated ---> Package openssl.x86_64 0:0.9.8e-22.el5_8.1 set to be updated ---> Package openssl-devel.x86_64 0:0.9.8e-22.el5_8.1 set to be updated ---> Package pam_krb5.x86_64 0:2.2.14-22.el5 set to be updated ---> Package pam_pkcs11.x86_64 0:0.5.3-26.el5 set to be updated ---> Package pango.x86_64 0:1.14.9-8.el5.centos.3 set to be updated ---> Package parted.x86_64 0:1.8.1-29.el5 set to be updated ---> Package pciutils.x86_64 0:3.1.7-5.el5 set to be updated ---> Package perl.x86_64 4:5.8.8-38.el5 set to be updated ---> Package perl-Compress-Raw-Bzip2.x86_64 0:2.037-1.el5.rf set to be updated ---> Package perl-Compress-Raw-Zlib.x86_64 0:2.037-1.el5.rf set to be updated ---> Package perl-rrdtool.x86_64 0:1.4.7-1.el5.rf set to be updated ---> Package poppler.x86_64 0:0.5.4-19.el5 set to be updated ---> Package poppler-utils.x86_64 0:0.5.4-19.el5 set to be updated ---> Package popt.x86_64 0:1.10.2.3-28.el5_8 set to be updated ---> Package postgresql-libs.x86_64 0:8.1.23-1.el5_7.3 set to be updated ---> Package procps.x86_64 0:3.2.7-18.el5 set to be updated ---> Package proftpd.x86_64 0:1.3.4a-1.el5.rf set to be updated --> Processing Dependency: perl(Mail::Sendmail) for package: proftpd ---> Package python.x86_64 0:2.4.3-46.el5 set to be updated ---> Package python-ctypes.x86_64 0:1.0.2-3.el5 set to be updated ---> Package python-libs.x86_64 0:2.4.3-46.el5 set to be updated ---> Package python-smbios.x86_64 0:2.2.27-3.2.el5 set to be updated ---> Package rhpl.x86_64 0:0.194.1-2 set to be updated ---> Package rmt.x86_64 0:0.4b41-6.el5 set to be updated ---> Package rng-utils.x86_64 1:2.0-5.el5 set to 
be updated ---> Package rpm.x86_64 0:4.4.2.3-28.el5_8 set to be updated ---> Package rpm-build.x86_64 0:4.4.2.3-28.el5_8 set to be updated ---> Package rpm-devel.x86_64 0:4.4.2.3-28.el5_8 set to be updated ---> Package rpm-libs.x86_64 0:4.4.2.3-28.el5_8 set to be updated ---> Package rpm-python.x86_64 0:4.4.2.3-28.el5_8 set to be updated ---> Package rrdtool.x86_64 0:1.4.7-1.el5.rf set to be updated ---> Package rsh.x86_64 0:0.17-40.el5_7.1 set to be updated ---> Package rsync.x86_64 0:3.0.6-4.el5_7.1 set to be updated ---> Package ruby.x86_64 0:1.8.5-24.el5 set to be updated ---> Package ruby-libs.x86_64 0:1.8.5-24.el5 set to be updated ---> Package sblim-sfcb.x86_64 0:1.3.11-49.el5 set to be updated ---> Package sblim-sfcc.x86_64 0:2.2.2-49.el5 set to be updated ---> Package selinux-policy.noarch 0:2.4.6-327.el5 set to be updated ---> Package selinux-policy-targeted.noarch 0:2.4.6-327.el5 set to be updated ---> Package setup.noarch 0:2.5.58-9.el5 set to be updated ---> Package shadow-utils.x86_64 2:4.0.17-20.el5 set to be updated ---> Package smartmontools.x86_64 1:5.38-3.el5 set to be updated ---> Package smbios-utils-bin.x86_64 0:2.2.27-3.2.el5 set to be updated ---> Package smbios-utils-python.x86_64 0:2.2.27-3.2.el5 set to be updated ---> Package sos.noarch 0:1.7-9.62.el5 set to be updated ---> Package srvadmin-omilcore.x86_64 0:6.5.0-1.452.1.el5 set to be updated ---> Package strace.x86_64 0:4.5.18-11.el5_8 set to be updated ---> Package subversion.x86_64 0:1.6.11-7.el5_6.4 set to be updated ---> Package sudo.x86_64 0:1.7.2p1-13.el5 set to be updated ---> Package sysfsutils.x86_64 0:2.1.0-1.el5 set to be updated ---> Package syslinux.x86_64 0:3.11-7 set to be updated ---> Package system-config-network-tui.noarch 0:1.3.99.21-1.el5 set to be updated ---> Package talk.x86_64 0:0.17-31.el5 set to be updated ---> Package tar.x86_64 2:1.15.1-31.el5 set to be updated ---> Package traceroute.x86_64 3:2.0.1-6.el5 set to be updated ---> Package tzdata.x86_64 0:2012b-3.el5 set to be updated ---> Package udev.x86_64 0:095-14.27.el5_7.1 set to be updated ---> Package util-linux.x86_64 0:2.13-0.59.el5 set to be updated ---> Package vixie-cron.x86_64 4:4.1-81.el5 set to be updated ---> Package wget.x86_64 0:1.11.4-3.el5_8.1 set to be updated ---> Package xinetd.x86_64 2:2.3.14-16.el5 set to be updated ---> Package yp-tools.x86_64 0:2.9-2.el5 set to be updated ---> Package ypbind.x86_64 3:1.19-12.el5_6.1 set to be updated ---> Package yum.noarch 0:3.2.22-39.el5.centos set to be updated ---> Package yum-dellsysid.x86_64 0:2.2.27-3.2.el5 set to be updated ---> Package yum-fastestmirror.noarch 0:1.1.16-21.el5.centos set to be updated ---> Package zlib.x86_64 0:1.2.3-4.el5 set to be updated ---> Package zlib-devel.x86_64 0:1.2.3-4.el5 set to be updated --> Running transaction check --> Processing Dependency: dbus = 1.1.2-15.el5_6 for package: dbus-libs --> Processing Dependency: nash = 5.1.19.6-68.el5_6.1 for package: mkinitrd ---> Package perl-Mail-Sendmail.noarch 0:0.79-1.2.el5.rf set to be updated base/filelists | 3.5 MB 00:00 dell-omsa-indep/filelists | 195 kB 00:01 dell-omsa-specific/filelists | 1.0 kB 00:00 extras/filelists_db | 224 kB 00:00 rpmforge/filelists | 4.8 MB 00:06 updates/filelists_db | 715 kB 00:00 --> Finished Dependency Resolution dbus-libs-1.1.2-15.el5_6.i386 from installed has depsolving problems --> Missing Dependency: dbus = 1.1.2-15.el5_6 is needed by package dbus-libs-1.1.2-15.el5_6.i386 (installed) mkinitrd-5.1.19.6-68.el5_6.1.i386 from installed has depsolving problems --> 
Missing Dependency: nash = 5.1.19.6-68.el5_6.1 is needed by package mkinitrd-5.1.19.6-68.el5_6.1.i386 (installed) Error: Missing Dependency: nash = 5.1.19.6-68.el5_6.1 is needed by package mkinitrd-5.1.19.6-68.el5_6.1.i386 (installed) Error: Missing Dependency: dbus = 1.1.2-15.el5_6 is needed by package dbus-libs-1.1.2-15.el5_6.i386 (installed) You could try using --skip-broken to work around the problem You could try running: package-cleanup --problems package-cleanup --dupes rpm -Va --nofiles --nodigest The program package-cleanup is found in the yum-utils package. I have tried running package-cleanup --dupes and package-cleanup --problems but to no avail.

    Read the article

  • 202 blog articles

    - by mprove
    All my blog articles under blogs.oracle.com since August 2005: 202 blog articles Apr 2012 blogs.oracle.com design patch Mar 2012 Interaction 12 - Critique Mar 2012 Typing. Clicking. Dancing. Feb 2012 Desktop Mobility in Hospitals with Oracle VDI /video Feb 2012 Interaction 12 in Dublin - Highlights of Day 3 Feb 2012 Interaction 12 in Dublin - Highlights of Day 2 Feb 2012 Interaction 12 in Dublin - Highlights of Day 1 Feb 2012 Shit Interaction Designers Say Feb 2012 Tips'n'Tricks for WebCenter #3: How to display custom page titles in Spaces Jan 2012 Tips'n'Tricks for WebCenter #2: How to create an Admin menu in Spaces and save a lot of time Jan 2012 Tips'n'Tricks for WebCenter #1: How to apply custom resources in Spaces Jan 2012 Merry XMas and a Happy 2012! Dec 2011 One Year Oracle SocialChat - The Movie Nov 2011 Frank Ludolph's Last Working Day Nov 2011 Hans Rosling at TED Oct 2011 200 Countries x 200 Years Oct 2011 Blog Aggregation for Desktop Virtualization Oct 2011 Oracle VDI at OOW 2011 Sep 2011 Design for Conversations & Conversations for Design Sep 2011 All Oracle UX Blogs Aug 2011 Farewell Loriot Aug 2011 Oracle VDI 3.3 Overview Aug 2011 Sutherland's Closing Remarks at HyperKult Aug 2011 Surface and Subface Aug 2011 Back to Childhood in UI Design Jul 2011 The Art of Engineering and The Engineering of Art Jul 2011 Oracle VDI Seminar - June-30 Jun 2011 SGD White Paper May 2011 TEDxHamburg Live Feed May 2011 Oracle VDI in 3 Minutes May 2011 Space Ship Earth 2011 May 2011 blog moving times Apr 2011 Frozen tag cloud Apr 2011 Oracle: Hardware Software Complete in 1953 Apr 2011 Interaction Design with Wireframes Apr 2011 A guide to closing down a project Feb 2011 Oracle VDI 3.2.2 Jan 2011 free VDI charts Jan 2011 Sun Founders Panel 2006 Dec 2010 Sutherland on Leadership Dec 2010 SocialChat: Efficiency of E20 Dec 2010 ALWAYS ON Desktop Virtualization Nov 2010 12,000 Desktops at JavaOne Nov 2010 SocialChat on Sharing Best Practices Oct 2010 Globe of Visitors Oct 2010 SocialChat about the Next Big Thing Oct 2010 Oracle VDI UX Story - Wireframes Oct 2010 What's a PC anyway? Oct 2010 SocialChat on Getting Things Done Oct 2010 SocialChat on Infoglut Oct 2010 IT Twenty Twenty Oct 2010 Desktop Virtualization Webcasts from OOW Oct 2010 Oracle VDI 3.2 Overview Sep 2010 Blog Usability Top 7 Sep 2010 100 and counting Aug 2010 Oracle'izing the VDI Blogs Aug 2010 SocialChat on Apple Aug 2010 SocialChat on Video Conferencing Aug 2010 Oracle VDI 3.2 - Features and Screenshots Aug 2010 SocialChat: Don't stop making waves Aug 2010 SocialChat: Giving Back to the Community Aug 2010 SocialChat on Learning in Meetings Aug 2010 iPAD's Natural User Interface Jul 2010 Last day for Sun Microsystems GmbH Jun 2010 SirValUse Celebration Snippets Jun 2010 10 years SirValUse - Happy Birthday! Jun 2010 Wim on Virtualization May 2010 New Home for Oracle VDI Apr 2010 Renaissance Slide Sorter Comments Apr 2010 Unboxing Sun Ray 3 Plus Apr 2010 Desktop Virtualisierung mit Sun VDI 3.1 Apr 2010 Blog Relaunch Mar 2010 Social Messaging Slides from CeBIT Mar 2010 Social Messaging Talk at CeBIT Feb 2010 Welcome Oracle Jan 2010 My last presentation at Sun Jan 2010 Ivan Sutherland on Leadership Jan 2010 Learning French with Sun VDI Jan 2010 Learning Danish with Sun Ray Jan 2010 VDI workshop in Nieuwegein Jan 2010 Happy New Year 2010 Jan 2010 On Creating Slides Dec 2009 Best VDI Ever Nov 2009 How to store the Big Bang Nov 2009 Social Enterprise Tools. Beipiel Sun. 
Nov 2009 Nov-19 Nov 2009 PDF and ODF links on your blog Nov 2009 Q&A on VDI and MySQL Cluster Nov 2009 Zürich next week: Swiss Intranet Summit 09 Nov 2009 Designing for a Sustainable World - World Usabiltiy Day, Nov-12 Nov 2009 How to export a desktop from VDI 3 Nov 2009 Virtualisation Roadshow in the UK Nov 2009 Project Wonderland at EDUCAUSE 09 Nov 2009 VDI Roadshow in Dublin, Nov-26, 2009 Nov 2009 Sun VDI at EDUCAUSE 09 Nov 2009 Sun VDI 3.1 Architecture and New Features Oct 2009 VDI 3.1 is Early-Access Sep 2009 Virtualization for MySQL on VMware Sep 2009 Silpion & 13. Stock Sommerparty Sep 2009 Sun Ray and VMware View 3.1.1 2009-08-31 New Set of Sun Ray Status Icons 2009-08-25 Virtualizing the VDI Core? 2009-08-23 World Usability Day Hamburg 2009 - CfP 2009-07-16 Rising Sun 2009-07-15 featuring twittermeme 2009-06-19 ISC09 Student Party on June-20 /Hamburg 2009-06-18 Before and behind the curtain of JavaOne 2009-06-09 20k desktops at JavaOne 2009-06-01 sweet microblogging 2009-05-25 VDI 3 - Why you need 3 VDI hosts and what you can do about that? 2009-05-21 IA Konferenz 2009 2009-05-20 Sun VDI 3 UX Story - Power of the Web 2009-05-06 Planet of Sun and Oracle User Experience Design 2009-04-22 Sun VDI 3 UX Story - User Research 2009-04-08 Sun VDI 3 UX Story - Concept Workshops 2009-04-06 Localized documentation for Sun Ray Connector for VMware View Manager 1.1 2009-04-03 Sun VDI 3 Press Release 2009-03-25 Sun VDI 3 launches today! 2009-03-25 Sun Ray Connector for VMware View Manager 1.1 Update 2009-03-11 desktop virtualization wiki relaunch 2009-03-06 VDI 3 at CeBIT hall 6, booth E36 2009-03-02 Keyboard layout problems with Sun Ray Connector for VMware VDM 2009-02-23 wikis.sun.com tips & tricks 2009-02-23 Sun VDI 3 is in Early Access 2009-02-09 VirtualCenter unable to decrypt passwords 2009-02-02 Sun & VMware Desktop Training 2009-01-30 VDI at next09? 2009-01-16 Sun VDI: How to use virtual machines with multiple network adapters 2009-01-07 Sun Ray and VMware View 2009-01-07 Hamburg World Usability Day 2008 - Webcasts 2009-01-06 Sun Ray Connector for VMware VDM slides 2008-12-15 mother of all demos 2008-12-08 Build your own Thumper 2008-12-03 Troubleshooting Sun Ray Connector for VMware VDM 2008-12-02 My Roller Tag Cloud 2008-11-28 Sun Ray Connector: SSL connection to VDM 2008-11-25 Setting up SSL and Sun Ray Connector for VMware VDM 2008-11-13 Inspiration for Today and Tomorrow 2008-10-23 Sun Ray Connector for VMware VDM released 2008-10-14 From Sketchpad to ILoveSketch 2008-10-09 Desktop Virtualization on Xing 2008-10-06 User Experience Forum on Xing 2008-10-06 Sun Ray Connector for VMware VDM certified 2008-09-17 Virtual Clouds over Las Vegas 2008-09-14 Bill Verplank sketches metaphors 2008-09-04 End of Early Access - Sun Ray Connector for VMware 2008-08-27 Early Access: Sun Ray Connector for VMware Virtual Desktop Manager 2008-08-12 Sun Virtual Desktop Connector - Insides on Recycling Part 2 2008-07-20 Sun Virtual Desktop Connector - Insides on Recycling Part 3 2008-07-20 Sun Virtual Desktop Connector - Insides on Recycling 2008-07-20 lost in wiki space 2008-07-07 Evolution of the Desktop 2008-06-17 Virtual Desktop Webcast 2008-06-16 Woodstock 2008-06-16 What's a Desktop PC anyway? 2008-06-09 Virtual-T-Box 2008-06-05 Virtualization Glossary 2008-05-06 Five User Experience Principles 2008-04-25 Virtualization News Feed 2008-04-21 Acetylcholinesterase - Second Season 2008-04-18 Acetylcholinesterase - End of Signal 2007-12-31 Produkt-Management ist... 
2007-10-22 Usability Verbände, Verteiler und Netzwerke. 2007-10-02 The Meaning is the Message 2007-09-28 Visualization Methods 2007-09-10 Inhouse und Open Source Projekte – Usability verankern und Synergien nutzen 2007-09-03 Der Schwabe Darth Vader entdeckt das Virale Marketing 2007-08-29 Dick Hardt 3.0 on Identity 2.0 2007-08-27 quality of written text depends on the tool 2007-07-27 podcasts for reboot9 2007-06-04 It is the user's itch that need to be scratched 2007-05-25 A duel at reboot9 2007-05-14 Taxonomien und Folksonomien - Tagging als neues HCI-Element 2007-05-10 Dueling Interaction Models of Personal-Computing and Web-Computing 2007-03-01 22.März: Weizenbaum. Rebel at Work. /Filmpremiere Hamburg 2007-02-25 Bruce Sterling at UbiComp 2006 /webcast 2006-11-12 FSOSS 2006 /webcasts 2006-11-10 Highway 101 2006-11-09 User Experience Roundtable Hamburg: EuroGEL 2006 2006-11-08 Douglas Adams' Hyperland (BBC 1990) 2006-10-08 Taxonomien und Folksonomien – Tagging als neues HCI-Element 2006-09-13 Usability im Unternehmen 2006-09-13 Doug does HyperScope 2006-08-26 TED Talks and TechTalks 2006-08-21 Kai Krause über seine Freundschaft zu Douglas Adams 2006-07-20 Rebel At Work: Film Portrait on Weizenbaum 2006-07-04 Gabriele Fischer, mp3 2006-06-07 Dick Hardt at ETech 06 2006-06-05 Weinberger: From Control to Conversation 2006-04-16 Eye Tracking at User Experience Roundtable Hamburg 2006-04-14 dropping knowledge 2006-04-09 GEL 2005 2006-03-13 slide photos of reboot7 2006-03-04 Dick Hardt on Identity 2.0 2006-02-28 User Experience Newsletter #13: Versioning 2006-02-03 Ester Dyson on Choice and Happyness 2006-02-02 Requirements-Engineering im Spannungsfeld von Individual- und Produktsoftware 2006-01-15 User Experience Newsletter #12: Intuition Quiz 2005-11-30 User Experience und Requirements-Engineering für Software-Projekte 2005-10-31 Ivan Sutherland on "Research and Fun" 2005-10-18 Ars Electronica / Mensch und Computer 2005 2005-09-14 60 Jahre nach Memex: Über die Unvereinbarkeit von Desktop- und Web-Paradigma 2005-08-31 reboot 7 2005-06-30

    Read the article

  • Finding a person in the forest

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved finding a person in the forest or Limiting the AD result in SharePoint People Picker There are times when we need to limit the SharePoint audience of certain farms or servers or site collections to a particular audience. One of my experiences involved limiting access to US citizens, another to a particular location. Now, most of us – your humble servant included – are not Active Directory experts – but we must be able to handle the “audience restrictions” as required. So here is how it’s done in a nutshell. Important note: not all of it can be done in PowerShell (at least not yet)! There are no Windows PowerShell commands to configure People Picker. The stsadm command is: stsadm -o setproperty -pn peoplepicker-searchadcustomquery -pv ADQuery -url http://somethingOrOther Note the long-hyphenated property name. Now on to filling in the ADQuery.   LDAP Query in a nutshell Syntax LDAP is no older than SQL, and an LDAP query is actually a query against the LDAP database. LDAP attributes are the equivalent of database columns, so why do we have to learn a new query language? Beats me! But we must, so here it is. The syntax of an LDAP query string is made of individual statements with relational operators, including: = Equal <= Lower than or equal >= Greater than or equal… and memberOf – a group membership. ! Not * Wildcard Equal and memberOf are the most commonly used. Checking for absence uses the ! – not – and the * – wildcard. Example: (SN=Grant) All whose last name – SurName – is Grant Example: (!(SN=Grant)) All except Grant Example: (!(SN=*)) all where there is no SurName, i.e. SurName is absent (probably Rappers). Example: (CN=MyGroup) Common Name is MyGroup.  Example: (GN=J*) all the Given Names that start with J (JJ, Jane, Jon, John, etc.) The cryptic SN, CN, GN, etc. are attributes – more about them later. All the queries are enclosed in parentheses (Query). Complex queries are composed of sets combined by AND or OR conditions. AND is denoted by the ampersand (&) and OR is denoted by the vertical pipe (|). The general syntax is that of prefix (Polish) notation, where the operator precedes its operands, e.g. +ab is the sum of a and b. In an LDAP query (&(A)(B)) will garner the objects for which both A and B are true. In an LDAP query (&(A)(B)(C)) will garner the objects for which A, B and C are true. There’s no limit to the number of conditions. In an LDAP query (|(A)(B)) will garner the objects for which either A or B is true. In an LDAP query (|(A)(B)(C)) will garner the objects for which at least one of A, B and C is true. There’s no limit to the number of conditions. More complex queries have both types of conditions, and the parentheses determine the order of operations. Attributes Now let’s get into the SN, CN, GN, and other attributes of the query: SN – is the SurName (last name) GN – is the Given Name (first name) CN – is the Common Name, usually GN followed by SN OU – is an Organizational Unit such as division, department etc. DC – is a Domain Component in the AD forest l – lower case ‘L’ stands for location. Jerusalem anybody? Or Katmandu. UPN – User Principal Name, is usually the first part of an email address. By nature it is unique in the forest. Most systems set the UPN to be the first initial followed by the SN of the person involved. Some limit the total to 8 characters. If we have many ‘jsmith’ we have to somehow distinguish them from each other. DN – is the distinguished name – a name unique to the AD forest in which it lives. 
    Usually it’s a CN with some domain or group distinguishers. DN is important in conjunction with the memberOf relation. Groups have stricter requirements. Each group has to have a unique name – its CN – and it has to be unique regardless of its place. See more below. All of the attributes are case insensitive. CN, cn, Cn, and cN are identical. objectCategory is an element that requires special consideration. AD contains many different objects, like computers, printers, and of course people and groups. In the queries below, we’re limiting our search to people (person). Putting it all together Let’s get a list of everyone in the SPAdmin group of the Jerusalem OU in the local domain. (&(objectCategory=person)(memberOf=cn=SPAdmin,ou=Jerusalem,dc=local)) The memberOf=cn=SPAdmin uses the cn (Common Name) of the SPAdmin group. This is how the memberOf relation is used. ‘SPAdmin’ is actually the DN of the group. Also the memberOf relation does not allow wild cards (*) in the group name. Also, you are limited to at most one ‘OU’ entry. Let’s add Marvin Minsky to the search above. (|(&(objectCategory=person)(memberOf=cn=SPAdmin,ou=Jerusalem,dc=local))(CN=Marvin Minsky)) Here I added the OR pipe (|) at the beginning of the query and put the CN requirement for Minsky at the end. Note that if Marvin was already in the prior result, he’s not going to be listed twice. One last note: You may see a drier but more complete list of attributes, rules and examples in: http://www.tek-tips.com/faqs.cfm?fid=5667 And finally (thus negating the claim that my previous note was last), to the best of my knowledge there are 3 more ways to limit the audience. One is to use the peoplepicker-searchadcustomfilter property using the same ADQuery. This works only in SP1 and above. The second is to limit the search to users within this particular site collection – the property name is peoplepicker-onlysearchwithinsitecollection and the value is yes (-pv yes). And the third is -pn peoplepicker-serviceaccountdirectorypaths -pv “OU=ou1,DC=dc1…..” Again you are limited to at most one ‘OU’ phrase – no OU=ou1,OU=ou2… And now the real end. The main property discussed in this sprawling and seemingly endless monograph – peoplepicker-searchadcustomquery – is the most general way of getting the job done. Here are a few examples of command lines that worked and some that didn’t. Can you see why? C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsadm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfilter -pv (Title=David) Operation completed successfully. C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsadm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfilter -pv (!Title=David) Operation completed successfully. C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsadm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfilter -pv (OU=OURealName,OU=OUMid,OU=OUTop,DC=TopDC,DC=MidDC,DC=BottomDC) Command line error. Too many OUs. C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsadm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfilter -pv (OU=OURealName) Operation completed successfully. C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsadm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfilter -pv (DC=TopDC,DC=MidDC,DC=BottomDC) Operation completed successfully. 
    C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsadm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfilter -pv (OU=OURealName,DC=TopDC,DC=MidDC,DC=BottomDC) Operation completed successfully.   That’s all folks!
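    Putting the pieces above together, a sketch only, reusing the placeholder URL and the example group, OU and domain names from this post: the combined person-plus-group query could be pushed into People Picker like this.

        stsadm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomquery -pv "(|(&(objectCategory=person)(memberOf=CN=SPAdmin,OU=Jerusalem,DC=local))(CN=Marvin Minsky))"

    As with the examples above, run it from the 12\BIN folder; note that the -pv value is a single, fully parenthesized LDAP filter, quoted here because it contains a space.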

    Read the article

  • 15 November 2012 Oracle Day

    - by TUFEKCIOGLU,FATIH
       Event details for Oracle Day, which will be held at the Harbiye Istanbul Congress Center on November 15: click here for the Oracle Day event details. Benefit from the Latest Technology: Leave Time for Innovation and Competition. 15 November 2012. Cloud Computing, Mobility, Social Media and Big Data are redefining the world we know. Be one of the first to bring these technologies into your organization; strengthen your competitive position by seizing the opportunity to develop new products and services faster, improve the customer experience, and put new, innovative business models into practice. Oracle and its partners help you adapt these technological innovations to your organization, while letting you gain an advantage from market changes ahead of your competitors. Join us at Oracle Day to learn how Oracle simplifies information technology by using the latest technologies in software and hardware designed to work together. At Oracle Day, don't miss the opportunity to: learn about Oracle's Cloud Computing, Big Data, Social Media and business application solutions; hear customer success stories about successful business transformations; meet industry peers who face the same challenges you do; meet Oracle experts and partners and watch new product introductions; and get the latest product news from Oracle OpenWorld... Regards, Oracle Turkey. Register now! Platinum Sponsor. Istanbul Congress Center, Taskisla Caddesi, Harbiye 34367 Istanbul / Turkey. Thursday, 15 November 2012, 08:30 - 18:30. RSVP: [email protected] oracle.com/oracleday Follow us: #oracleday   (Agenda key: Oracle / Oracle Partner / Customer Success Story / TROUG / session held in English.) Agenda for the day: 08:45-09:30 Registration 09:30-10:00 Welcome - Filiz Dogan, General Manager, Oracle Turkey 10:00-10:30 Navigating Complexity by Simplifying I.T. - Andrew Mendelsohn, Senior Vice President, Oracle Database Server Technologies, Oracle 10:30-11:00 A Transformational Cloud Journey - Ilker Kuruöz, CIO, Turkcell 11:00-11:20 The Must-Haves of Data Centers in the New Era - Yalim Eristiren, Deputy General Manager, Intel 11:20-11:30 Slimfit - Feyza Narli, Business Solutions Director, Innova 11:30-12:00 Innovation with Java - Cuma Yigit, Technical Architect, Etiya; Yusuf Tok, Java Group Manager, OBSS; Ersun Engel, Sales Manager, Oracle 12:00-12:10 Coffee Break. Parallel tracks in ROOM 1 through ROOM 10: Customer Experience: Empower Your Employees. Strengthen Your Brand. | Change in Business Processes | More Data, Faster Results: Strengthen Your Business with Analytics Solutions | But How? A Solution Map for Cloud Implementers | Use the Power of Innovation in Your Business Applications | Unleash the Power of IT with the Next-Generation Data Center | Oracle & Partner Solutions - Success Stories I | Oracle & Partner Solutions - Success Stories II | Oracle Financial Services - Core Banking and Analytical Solutions | Oracle User Group (TROUG). 12:10-12:40 Customer Experience: Empower Your Employees. Strengthen Your Brand. | Cost Management and Production Tracking at the Tekfen Ceyhan Steel Plant | Moving Faster with More Data: Strengthening Creativity with Action | Architect Your Cloud: A Blueprint for Cloud Builders | Technology Strategies that Drive Business Excellence: Get Social. Be Mobile. Run Cloud. | Unleash the Power of IT with the Next-Generation Data Center | Innovate with Oracle - Virtual Banking and Self Service Channels | What's Next for Oracle Database?   
    (Presenting companies, by room: Oracle, Innova, Oracle, Oracle, Oracle, Oracle, Oracle, Oracle.) 12:40-13:40 Lunch. 13:40-14:10 Your Applications Are Now in the Cloud | Finance in the 21st Century: Use the Potential - Reach the Results! | Oracle Big Data and Business Analytics Solutions on Intel Processors, "At the Speed of Thought" | From Sea Level to the Clouds I | Socialize Your CRM: Telaura Social CRM | The Trend in the Next-Generation Data Center: Simplicity | Subscriber Information Management System (ABYS) | High Oracle Database Performance - When Oracle Database Is Integrated with Oracle Storage | Financial Transformation and Integrated Data Warehouse Solutions in Insurance | SQL/PLSQL New Features (presenting companies: Oracle, Oracle, Intel, Oracle, Etiya, Oracle, Inspirit, Oracle, Oraturk, TROUG). 14:10-14:20 Coffee Break. 14:20-14:50 Social Channels in Sales and Marketing | Customer Success Stories Panel: The Transformation Journey and Its Results | Tukas's Analytics Journey | From Sea Level to the Clouds II | Loupe: Proactive Monitoring for IP-Based Services | Eliminate the Risks in Your Data Center | Oracle iAS to WebLogic Migration | Oracle ZFS Storage Appliance - Turkcell Experiences | Analytical Transformation - Risk and Finance Together to Address the Regulatory Changes Today and Tomorrow | Data Mining Happens in the Database: Oracle R Enterprise and the Oracle Data Mining Option, with Applications (presenting companies: Oracle; Akbank, Teknosa, Dogus Holding, Ceynak; Gtech; Oracle; Netas; Oracle; OBSS; Turkcell & Gantek; Oracle; TROUG). 14:50-15:00 Coffee Break. 15:00-15:30 Integrated Solutions in Talent Management: Recruiting Is Now Easier with Taleo | Customer Success Stories Panel: The Transformation Journey and Its Results | Big Data & Exalytics - Exadata's Future Roadmap | Free Platinum Services on Engineered Systems | How Does Aksigorta Test Its Applications with Oracle ATS (Application Testing Suite)? | Raise Your Service Level with Superior Performance and Flexibility | Process Management with Oracle BPM in Smart Municipality Applications | End-to-End Backup Solutions with Tape | Connecting with Customers to Enhance Revenue Generation: Unleashing the Power of an Enterprise Revenue Management and Billing Solution | Today's Application Architecture Problems and Proposed Solutions (presenting companies: Oracle; Akbank, Teknosa, Dogus Holding, Ceynak; Oracle; Oracle; Aksigorta; Oracle; Sampas; Remivac; Oracle; TROUG). 15:30-15:40 Coffee Break. 15:40-16:10 Sales Network Management with the New JD Edwards Release | Centralized Business Rules Management in Turkish with Oracle Policy Automation | Corporate Scorecard Solutions with Oracle BI | Software Is Now at Your Service with Turkcell Süperbulut | Engineered Systems Now Make a Difference for SAP Customers Too | Building the Next-Generation Data Center: Tried and Proven Methods | Türk Telekom SOA Project Success Story | How Does Yapi Kredi Sigorta Monitor End-User Experience in Its Applications? | 2013 NFC Trends | Karagöz and Hacivat in the Database (presenting companies: TupperWare & Akademi Danismanlik, Oracle, Oracle, Turkcell, Oracle, Oracle, Türk Telekom, Yapi Kredi Sigorta, Smartsoft, TROUG). 16:10-16:20 Coffee Break. 16:20-16:50 The Journey to Consistent, Single Data: Oracle Master Data Management | Asset Management at Turkcell/Superonline | Easy Analysis of All Kinds of Data with Endeca | How Is Security Ensured in Cloud Computing? | Winning in Competition: Make the Right Decisions Before Your Competitors with GoldenGate |
    Cloud Infrastructure Strategies for Your Data Center | The Oracle Orchestra: Lend an Ear to Polyphonic Management | Logistics Intelligence / Horoz Lojistik Success Story | High-Performance Java Applications with Engineered Systems | International Growth - Helping the Banks Standardize Overseas Operations | Oracle Big Data (presenting companies: Oracle, Turkcell, Oracle, Oracle, Oracle, Oracle, Kora & Horoz Lojistik, Oracle, Oracle, TROUG). 16:50-18:30 Cocktail Reception. If you are an employee or official of a public institution or organization, please click here for information on the important ethics rules that apply to this event. Copyright 2012, Oracle and/or its affiliates. All rights reserved. Contact Us | Legal Notices | Privacy Statement

    Read the article

  • Error with postgres and Rails in Bundle Install on Ubuntu 12.10

    - by jason328
    I'm trying to install postgres onto Ubuntu. When running bundle install in the terminal I'm receiving this message at the end of the output. How do I get pg to install properly? Libpq-dev is installed as well. Using coffee-rails (3.2.2) Using diff-lcs (1.1.3) Using jquery-rails (2.0.2) Installing pg (0.12.2) with native extensions Gem::Installer::ExtensionBuildError: ERROR: Failed to build gem native extension. /usr/bin/ruby1.9.1 extconf.rb checking for pg_config... yes Using config values from /usr/bin/pg_config checking for libpq-fe.h... yes checking for libpq/libpq-fs.h... yes checking for PQconnectdb() in -lpq... yes checking for PQconnectionUsedPassword()... yes checking for PQisthreadsafe()... yes checking for PQprepare()... yes checking for PQexecParams()... yes checking for PQescapeString()... yes checking for PQescapeStringConn()... yes checking for PQgetCancel()... yes checking for lo_create()... yes checking for pg_encoding_to_char()... yes checking for PQsetClientEncoding()... yes checking for rb_encdb_alias()... yes checking for rb_enc_alias()... no checking for struct pgNotify.extra in libpq-fe.h... yes checking for unistd.h... yes checking for ruby/st.h... yes creating extconf.h creating Makefile make compiling compat.c compiling pg.c pg.c: In function ‘pgconn_wait_for_notify’: pg.c:2117:3: warning: ‘rb_thread_select’ is deprecated (declared at /usr/include/ruby-1.9.1/ruby/intern.h:379) [-Wdeprecated-declarations] pg.c: In function ‘pgconn_block’: pg.c:2592:3: error: format not a string literal and no format arguments [-Werror=format-security] pg.c:2598:3: warning: ‘rb_thread_select’ is deprecated (declared at /usr/include/ruby-1.9.1/ruby/intern.h:379) [-Wdeprecated-declarations] pg.c:2607:4: error: format not a string literal and no format arguments [-Werror=format-security] pg.c: In function ‘pgconn_locreate’: pg.c:2866:11: warning: variable ‘lo_oid’ set but not used [-Wunused-but-set-variable] pg.c: In function ‘find_or_create_johab’: pg.c:3947:3: warning: implicit declaration of function ‘rb_encdb_alias’ [-Wimplicit-function-declaration] cc1: some warnings being treated as errors make: *** [pg.o] Error 1 Gem files will remain installed in /home/jason/.bundler/tmp/10083/gems/pg-0.12.2 for inspection. Results logged to /home/jason/.bundler/tmp/10083/gems/pg-0.12.2/ext/gem_make.out An error occurred while installing pg (0.12.2), and Bundler cannot continue. Make sure that `gem install pg -v '0.12.2'` succeeds before bundling.
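    Two hedged workarounds commonly suggested for this exact failure (the format-security error while compiling pg 0.12.2 on Ubuntu 12.10), offered as a sketch rather than a guaranteed fix:

        # Option 1: move to a newer pg release in the Gemfile, since later
        # versions of the gem build cleanly against Ubuntu 12.10's toolchain
        #   gem 'pg', '~> 0.14'
        bundle update pg

        # Option 2: keep 0.12.2 but stop gcc from promoting the
        # format-security warning to an error while the native
        # extension builds (relies on mkmf honoring --with-cflags)
        bundle config build.pg --with-cflags="-Wno-error=format-security"
        bundle install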

    Read the article

  • #PowerPivot Workshop Online for America’s Time Zones #ppws

    - by Marco Russo (SQLBI)
    After so many requests we have finally arranged a PowerPivot Workshop online edition dedicated to America’s time zones! It is scheduled for December 19-20, 2012, with these session times: US Eastern Time (EST): 10:00am-1:00pm / 2:00pm-5:00pm US Central Time (CST): 9:00am-12:00pm / 1:00pm-4:00pm US Pacific Time (PST): 7:00am-10:00am / 11:00am-1:00pm Bogotá (Colombia): 10:00am-1:00pm / 2:00pm-5:00pm São Paulo (Brazil): 1:00pm-4:00pm / 5:00pm-8:00pm Buenos Aires (Argentina): 12:00pm-3:00pm / 4:00pm-7:00pm...(read more)

    Read the article

  • Meredith Ryan: DBA of the Day

    Meredith Ryan – DBA at the Bell Group – was elected by judges and the SQL Server community as the Exceptional DBA of 2012. So who is Meredith, and how did she become a DBA? What makes her exceptional at her work? Simple-Talk sent Richard Morris to investigate.

    Read the article

  • Yahoo Webmail - Garbled Quote Text

    - by baultista
    I've encountered a very strange problem when trying to reply to e-mail via Yahoo Web Mail from a family member's computer. She received an e-mail from a client who is using Microsoft Outlook. When I receive the message it looks perfectly fine in my browser and I can read it. However, when I try to reply to the e-mail the quoted text looks like this: > #yiv9181642880 p.yiv9181642880msonormal1114, #yiv9181642880 > li.yiv9181642880msonormal1114, #yiv9181642880 > div.yiv9181642880msonormal1114 > {margin-right:0in;margin-left:0in;font-size:12.0pt;} > #yiv9181642880 p.yiv9181642880msoacetate1114, #yiv9181642880 > li.yiv9181642880msoacetate1114, #yiv9181642880 > div.yiv9181642880msoacetate1114 > {margin-right:0in;margin-left:0in;font-size:12.0pt;} > #yiv9181642880 p.yiv9181642880emailquote1114, #yiv9181642880 > li.yiv9181642880emailquote1114, #yiv9181642880 > div.yiv9181642880emailquote1114 > {margin-right:0in;margin-left:0in;font-size:12.0pt;} > #yiv9181642880 p.yiv9181642880msochpdefault1114, > #yiv9181642880 li.yiv9181642880msochpdefault1114, > #yiv9181642880 div.yiv9181642880msochpdefault1114 > {margin-right:0in;margin-left:0in;font-size:12.0pt;} > #yiv9181642880 p.yiv9181642880msonormal53, #yiv9181642880 > li.yiv9181642880msonormal53, #yiv9181642880 > div.yiv9181642880msonormal53 > {margin-right:0in;margin-left:0in;font-size:12.0pt;} It's the strangest thing. It doesn't happen with other e-mails, only this particular one. At a glance it almost looks like raw CSS code that's being displayed, but I really can't understand why. So far I have tried the following: trying a different browser (both IE11 and Google Chrome), checking the browser encoding settings, and checking Yahoo Web Mail's encoding/font settings. My only other guess is that the client used some weird font or formatting on the e-mail that is throwing the message body out of sync. Unfortunately for my family member, she is a contractor working with a medium-sized company that refuses to provide her with a domain e-mail address, so she is forced to conduct business this way. Simply asking the sender to use a more widely supported font wouldn't be an acceptable solution here. Any thoughts?

    Read the article

  • What is vt.handoff=7 parameter in grub.cfg

    - by sirkubax
    I wonder what the vt.handoff=7 parameter does. I cannot find any good man page for it... BTW, if you have a nice description of: search --no-floppy --fs-uuid --set=root I would be happy :) grub.cfg example: menuentry 'FAILSAFE' --class ubuntu --class gnu-linux --class gnu --class os { recordfail set gfxpayload=$linux_gfx_mode insmod part_msdos insmod ext2 set root='(hd0,msdos8)' search --no-floppy --fs-uuid --set=root 36286167-4eba-4a1e-a202-155c6baafa01 linux /boot/vmlinuz-2.6.37-12-generic root=UUID=36286167-4eba-4a1e-a202-155c6baafa01 ro vt.handoff=7 quiet splash initrd /boot/initrd.img-2.6.37-12-generic } BTW 2 - I cannot create the tag vt.handoff ;(

    Read the article

  • Did You Know: So Many User Groups, So Little Time

    - by Kalen Delaney
    In May and June of this year, I'll be giving four user group presentations plus a SQL Saturday. You can check my schedule for links to the relevant sites, and a description of my topics, as soon as they are available. This post is mainly just a heads-up, so you can make your plans. http://schedule.KalenDelaney.com May 12: The inaugural meeting of the Sacramento SQL Server User Group (evening) May 13: Central California .Net Users Group (evening) June 8: Colorado PASS (evening) June 12: SQL Saturday #43,...(read more)

    Read the article

  • Dependency problem with mysql-server-core-5.5

    - by Tama
    When I start the Ubuntu software centre, it says I cannot do anything until the package catalog is repaired. However, repairing fails. I ran "sudo apt-get -f install" and found the problem to be: mysql-server-5.5 depends on mysql-server-core-5.5 (= 5.5.24-0ubuntu0.12.04.1); however: Version of mysql-server-core-5.5 on system is 5.5.28-0ubuntu0.12.04.2. So, the question is, how do I install that version and resolve the dependency problem?
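    A hedged sketch of one way to bring the two halves of the server package back in line, assuming the 5.5.28 builds are what the Ubuntu 12.04 repositories currently carry:

        # Refresh the package lists so apt sees the current 5.5.28 packages
        sudo apt-get update

        # Reinstall the server and core packages together so
        # mysql-server-5.5 and mysql-server-core-5.5 end up at
        # the same version
        sudo apt-get install mysql-server-5.5 mysql-server-core-5.5

        # Then let apt finish repairing anything still half-configured
        sudo apt-get -f install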

    Read the article

  • Asterisk Outgoing CDR Logging To Mysql

    - by user3295551
    I'm trying to use CDR logging (to MySQL) with custom fields. The problem only shows up when an outbound call is placed; during inbound calls I am able to log the custom field with no problem. The reason this is an issue is that the custom CDR field I need is a unique value for each user on the system. sip.conf ... ... [sales_department](!) type=friend host=dynamic context=SalesAgents disallow=all allow=ulaw allow=alaw qualify=yes qualifyfreq=30 ;; company sales agents: [11](sales_agent) secret=xxxxxx callerid="<...>" [12](sales_agent) secret=xxxxxx callerid="<...>" [13](sales_agent) secret=xxxxxx callerid="<...>" [14](sales_agent) secret=xxxxxx callerid="<...>" extensions.conf [SalesAgents] include => Services ; Outbound calls exten=>_1NXXNXXXXXX,1,Dial(SIP/${EXTEN}@myprovider) ; Inbound calls exten=>100,1,NoOp() same => n,Set(CDR(agent_id)=11) same => n,CELGenUserEvent(Custom Event) same => n,Dial(${11_1},25) same => n,GotoIf($["${DIALSTATUS}" = "BUSY"]?busy:unavail) same => n(unavail),VoiceMail(11@asterisk) same => n,Hangup() same => n(busy),VoiceMail(11@asterisk) same => n,Hangup() exten=>101,1,NoOp() same => n,Set(CDR(agent_id)=12) same => n,CELGenUserEvent(Custom Event) same => n,Dial(${12_1},25) same => n,GotoIf($["${DIALSTATUS}" = "BUSY"]?busy:unavail) same => n(unavail),VoiceMail(12@asterisk) same => n,Hangup() same => n(busy),VoiceMail(12@asterisk) same => n,Hangup() ... ... In the inbound section of the dialplan above I am able to insert the custom CDR field (agent_id). But in the outbound section above it, I am stumped on how to tell the dialplan which agent_id is making the outbound call. My question: how can I take agent_id=[11], agent_id=[12], agent_id=[13], agent_id=[14], etc. and use that as a custom CDR field on outbound calls?
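    One hedged approach, a sketch only and not tested against this dialplan: rather than hard-coding the agent per extension, derive it from the SIP peer that placed the call, since in this sip.conf the peer name ([11], [12], [13], [14]) already is the agent id.

        ; Outbound calls: tag the CDR with the calling peer's name,
        ; which in this configuration doubles as the agent id
        exten => _1NXXNXXXXXX,1,Set(CDR(agent_id)=${CHANNEL(peername)})
         same => n,Dial(SIP/${EXTEN}@myprovider)

    If the peer names ever stop matching the agent ids one-to-one, ${CALLERID(num)} or a per-peer setvar= entry in sip.conf could serve as the lookup key instead.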

    Read the article

  • An XEvent a Day (25 of 31) – The Twelve Days of Christmas

    - by Jonathan Kehayias
    In the spirit of today’s holiday, a couple of people have been posting SQL related renditions of holiday songs.  Tim Mitchell posted his 12 days of SQL Christmas , and Paul Randal and Kimberly Tripp went as far as to record themselves sing SQL Carols on their blog post Our Christmas Gift To You: Paul and Kimberly Singing!   For today’s post on Extended Events I give you the 12 days of Christmas, Extended Events style (all of these are based on true facts about Extended Events in SQL Server)....(read more)

    Read the article

  • Stairway to SQL Server Security: Level 1, Overview of SQL Server Security

    The ubiquity of databases and the potentially valuable information stored in them makes them attractive targets for people who want to steal data or harm its owner by tampering with it. Making sure that your data is secure is a critical part of configuring SQL Server and developing applications that use it to store data.

    Read the article

  • Exporting Master Data from Master Data Services

    This white paper describes how to export master data from Microsoft SQL Server Master Data Services (MDS) using a subscription view, and how to import the master data into an external system using SQL Server Integration Services (SSIS). The white paper provides a step-by-step sample for creating a subscription view and an SSIS package.

    Read the article

  • Stairway to PowerPivot and DAX - Level 2: The DAX COUNTROWS() and FILTER() Functions

    Bill Pearson, business intelligence architect and author, exposes the DAX COUNTROWS() and FILTER() functions, while generally exploring, comparing and contrasting the nature and operation of calculated columns and calculated measures, in the second Level of our Stairway to PowerPivot and DAX series.

    Read the article

  • Tonight: Oracle Data Mining news webcast!

    - by Fekete Zoltán
    On Wednesday, May 12, 2010 at 6 PM, by connecting with your browser you can listen to the following extremely interesting presentation as part of Oracle BIWA: BIWA SIG TechCast Series - May 12 - Data Mining Made Easy, presented by Charlie Berger, Oracle's head of data mining. Data mining made easy! An introduction to the new "Work flow" graphical interface of Oracle Data Miner 11g Release 2. You can join Oracle BIWA free of charge at this link. Here you can find out how to listen to this conference: www.oraclebiwa.org

    Read the article

< Previous Page | 78 79 80 81 82 83 84 85 86 87 88 89  | Next Page >