Search Results

Search found 5718 results on 229 pages for 'resource'.


  • Securing WebSocket applications on Glassfish

    - by Pavel Bucek
    Today we are going to cover deploying secured WebSocket applications on Glassfish and accessing these services using the WebSocket Client API. WebSocket server application setup Our server endpoint might look as simple as this: @ServerEndpoint("/echo") public class EchoEndpoint { @OnMessage   public String echo(String message) {     return message + " (from your server)";   } } Everything else must be configured at the container level. We can start with enabling SSL, which will require a web.xml to be added to your project. For starters, it might look as follows: <web-app version="3.0" xmlns="http://java.sun.com/xml/ns/javaee">   <security-constraint>     <web-resource-collection>       <web-resource-name>Protected resource</web-resource-name>       <url-pattern>/*</url-pattern>       <http-method>GET</http-method>     </web-resource-collection>     <!-- https -->     <user-data-constraint>       <transport-guarantee>CONFIDENTIAL</transport-guarantee>     </user-data-constraint>   </security-constraint> </web-app> This is a minimal web.xml for this task - web-resource-collection just defines the URL pattern and HTTP method(s) we want to put a constraint on, and user-data-constraint defines that constraint, which in our case is transport-guarantee. More information about these properties and security settings for web applications can be found in the Oracle Java EE 7 Tutorial. I have a simple webpage attached as well, so I can test my endpoint right away. You can find it (along with the complete project) in the Tyrus workspace: [webpage] [whole project]. After deploying this application to the Glassfish Application Server, you should be able to hit it using your favorite browser. The URL where my application resides is https://localhost:8181/sample-echo-https/ (yours may differ, depending on other configuration). My browser warns me about an untrusted certificate (I use what a freshly built Glassfish provides - self-signed certificates), and after adding an exception for this site, I can see my webpage and am able to securely connect to wss://localhost:8181/sample-echo-https/echo. WebSocket client The demo application mentioned above also contains a test client, but its execution is skipped during a normal build. The reason is that Glassfish uses these self-signed "random" untrusted certificates, so you are (in most cases) not able to connect to these services without additional settings. Creating a test WebSocket client is actually quite similar to the server side; the only difference is that you have to create a client container somewhere and invoke connect with some additional info. The Java API for WebSocket allows you to construct endpoints in either an annotated or a programmatic way. The server side shows the annotated case, so let's see how the programmatic approach looks. final WebSocketContainer client = ContainerProvider.getWebSocketContainer(); client.connectToServer(new Endpoint() {   @Override   public void onOpen(Session session, EndpointConfig endpointConfig) {     try {       // register message handler - will just print out the       // received message on standard output.       
    session.addMessageHandler(new MessageHandler.Whole<String>() {       @Override         public void onMessage(String message) {          System.out.println("### Received: " + message);         }       });       // send a message       session.getBasicRemote().sendText("Do or do not, there is no try.");     } catch (IOException e) {       // do nothing     }   } }, ClientEndpointConfig.Builder.create().build(),    URI.create("wss://localhost:8181/sample-echo-https/echo")); This client should work with any secured endpoint that has a valid certificate signed by a trusted certificate authority (you can try that with wss://echo.websocket.org). Accessing our Glassfish instance will require some additional settings. You can tell Java which certificates you trust by adding the -Djavax.net.ssl.trustStore property (and a few others in case you are using the linked sample). The complete command line when you are testing your service might need to look somewhat like: mvn clean test -Djavax.net.ssl.trustStore=$AS_MAIN/domains/domain1/config/cacerts.jks\ -Djavax.net.ssl.trustStorePassword=changeit -Dtyrus.test.host=localhost\ -DskipTests=false where AS_MAIN points to your Glassfish instance. Note: you might need to set up the keyStore and trustStore per client instead of per JVM; there is a way to do it, but it is a Tyrus proprietary feature: http://tyrus.java.net/documentation/1.2.1/user-guide.html#d0e1128. And that's it! Now nobody is able to "hear" what you are sending to or receiving from your WebSocket endpoint. There is always room for improvement, so the next step you might want to take is to introduce some authentication mechanism (like HTTP Basic or Digest). This topic is more about container configuration so I'm not going to go into details, but there is one thing worth mentioning: to access services which require authorization, you might need to put this additional information into the HTTP headers of the first (Upgrade) request (there is not (yet) any direct support even for these fundamental mechanisms; the user needs to register a Configurator and add headers in the beforeRequest method invocation). I filed the related feature request as TYRUS-228; feel free to comment/vote if you need this functionality.
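    As a rough illustration of that last point, here is a minimal sketch of a client Configurator that injects an HTTP Basic Authorization header into the opening handshake; the AuthConfigurator name and the hard-coded credentials are illustrative, not part of the original sample:

    import java.util.Arrays;
    import java.util.List;
    import java.util.Map;
    import javax.websocket.ClientEndpointConfig;

    // Hypothetical configurator: adds an Authorization header to the first (Upgrade) request.
    public class AuthConfigurator extends ClientEndpointConfig.Configurator {
        @Override
        public void beforeRequest(Map<String, List<String>> headers) {
            // "dXNlcjpwYXNzd29yZA==" is Base64 for "user:password" - replace with real credentials
            headers.put("Authorization", Arrays.asList("Basic dXNlcjpwYXNzd29yZA=="));
        }
    }

    It would then be registered on the client configuration with something like ClientEndpointConfig.Builder.create().configurator(new AuthConfigurator()).build() before calling connectToServer.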

    Read the article

  • Multiple vulnerabilities in Wireshark

    - by chandan
    CVE / Description / CVSSv2 Base Score / Component / Product and Resolution:
    CVE-2012-1593: Denial of Service (DoS) vulnerability, CVSSv2 3.3, Wireshark, Solaris 11 11/11 SRU 8.5
    CVE-2012-1594: Improper Control of Generation of Code ('Code Injection') vulnerability, CVSSv2 3.3
    CVE-2012-1595: Resource Management Errors vulnerability, CVSSv2 4.3
    CVE-2012-1596: Resource Management Errors vulnerability, CVSSv2 5.0
    This notification describes vulnerabilities fixed in third-party components that are included in Sun's product distribution. Information about vulnerabilities affecting Oracle Sun products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • OData - The easiest service I can create: now with updates

    - by Jon Dalberg
    The other day I created a simple NastyWord service exposed via OData. It was read-only and used an in-memory backing store for the words. Today I’ll modify it to use a file instead of a list and I’ll accept new nasty words by implementing IUpdatable directly. The first thing to do is enable the service to accept new entries. This is done at configuration time by adding the “WriteAppend” access rule: 1: public class NastyWords : DataService<NastyWordsDataSource> 2: { 3: // This method is called only once to initialize service-wide policies. 4: public static void InitializeService(DataServiceConfiguration config) 5: { 6: config.SetEntitySetAccessRule("*", EntitySetRights.AllRead | EntitySetRights.WriteAppend); 7: config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; 8: } 9: }   Next I placed a file, NastyWords.txt, in the “App_Data” folder and added a few *choice* words to start. This required one simple change to our NastyWordDataSource.cs file: 1: public NastyWordsDataSource() 2: { 3: UpdateFromSource(); 4: } 5:   6: private void UpdateFromSource() 7: { 8: var words = File.ReadAllLines(pathToFile); 9: NastyWords = (from w in words 10: select new NastyWord { Word = w }).AsQueryable(); 11: }   Nothing too shocking here, just reading each line from the NastyWords.txt file and exposing them. Next, I implemented IUpdatable which comes with a boat-load of methods. We don’t need all of them for now since we are only concerned with allowing new values. Here are the methods we must implement, all the others throw a NotImplementedException: 1: public object CreateResource(string containerName, string fullTypeName) 2: { 3: var nastyWord = new NastyWord(); 4: pendingUpdates.Add(nastyWord); 5: return nastyWord; 6: } 7:   8: public object ResolveResource(object resource) 9: { 10: return resource; 11: } 12:   13: public void SaveChanges() 14: { 15: var intersect = (from w in pendingUpdates 16: select w.Word).Intersect(from n in NastyWords 17: select n.Word); 18:   19: if (intersect.Count() > 0) 20: throw new DataServiceException(500, "duplicate entry"); 21:   22: var lines = from w in pendingUpdates 23: select w.Word; 24:   25: File.AppendAllLines(pathToFile, 26: lines, 27: Encoding.UTF8); 28:   29: pendingUpdates.Clear(); 30:   31: UpdateFromSource(); 32: } 33:   34: public void SetValue(object targetResource, string propertyName, object propertyValue) 35: { 36: targetResource.GetType().GetProperty(propertyName).SetValue(targetResource, propertyValue, null); 37: }   I use a simple list to contain the pending updates and only commit them when the “SaveChanges” method is called. Here’s the order these methods are called in our service during an insert: CreateResource – here we just instantiate a new NastyWord and stick a reference to it in our pending updates list. SetValue – this is where the “Word” property of the NastyWord instance is set. SaveChanges – get the list of pending updates, barfing on duplicates, write them to the file and clear our pending list. ResolveResource – the newly created resource will be returned directly here since we aren’t dealing with “handles” to objects but the actual objects themselves. Not too bad, eh? I didn’t find this documented anywhere but a little bit of digging in the OData spec and use of Fiddler made it pretty easy to figure out. 
Here is some client code which would add a new nasty word: 1: static void Main(string[] args) 2: { 3: var svc = new ServiceReference1.NastyWordsDataSource(new Uri("http://localhost.:60921/NastyWords.svc")); 4: svc.AddToNastyWords(new ServiceReference1.NastyWord() { Word = "shat" }); 5:   6: svc.SaveChanges(); 7: }   Here’s all of the code so far for to implement the service: 1: using System; 2: using System.Collections.Generic; 3: using System.Data.Services; 4: using System.Data.Services.Common; 5: using System.Linq; 6: using System.ServiceModel.Web; 7: using System.Web; 8: using System.IO; 9: using System.Text; 10:   11: namespace ONasty 12: { 13: [DataServiceKey("Word")] 14: public class NastyWord 15: { 16: public string Word { get; set; } 17: } 18:   19: public class NastyWordsDataSource : IUpdatable 20: { 21: private List<NastyWord> pendingUpdates = new List<NastyWord>(); 22: private string pathToFile = @"path to your\App_Data\NastyWords.txt"; 23:   24: public NastyWordsDataSource() 25: { 26: UpdateFromSource(); 27: } 28:   29: private void UpdateFromSource() 30: { 31: var words = File.ReadAllLines(pathToFile); 32: NastyWords = (from w in words 33: select new NastyWord { Word = w }).AsQueryable(); 34: } 35:   36: public IQueryable<NastyWord> NastyWords { get; private set; } 37:   38: public void AddReferenceToCollection(object targetResource, string propertyName, object resourceToBeAdded) 39: { 40: throw new NotImplementedException(); 41: } 42:   43: public void ClearChanges() 44: { 45: pendingUpdates.Clear(); 46: } 47:   48: public object CreateResource(string containerName, string fullTypeName) 49: { 50: var nastyWord = new NastyWord(); 51: pendingUpdates.Add(nastyWord); 52: return nastyWord; 53: } 54:   55: public void DeleteResource(object targetResource) 56: { 57: throw new NotImplementedException(); 58: } 59:   60: public object GetResource(IQueryable query, string fullTypeName) 61: { 62: throw new NotImplementedException(); 63: } 64:   65: public object GetValue(object targetResource, string propertyName) 66: { 67: throw new NotImplementedException(); 68: } 69:   70: public void RemoveReferenceFromCollection(object targetResource, string propertyName, object resourceToBeRemoved) 71: { 72: throw new NotImplementedException(); 73: } 74:   75: public object ResetResource(object resource) 76: { 77: throw new NotImplementedException(); 78: } 79:   80: public object ResolveResource(object resource) 81: { 82: return resource; 83: } 84:   85: public void SaveChanges() 86: { 87: var intersect = (from w in pendingUpdates 88: select w.Word).Intersect(from n in NastyWords 89: select n.Word); 90:   91: if (intersect.Count() > 0) 92: throw new DataServiceException(500, "duplicate entry"); 93:   94: var lines = from w in pendingUpdates 95: select w.Word; 96:   97: File.AppendAllLines(pathToFile, 98: lines, 99: Encoding.UTF8); 100:   101: pendingUpdates.Clear(); 102:   103: UpdateFromSource(); 104: } 105:   106: public void SetReference(object targetResource, string propertyName, object propertyValue) 107: { 108: throw new NotImplementedException(); 109: } 110:   111: public void SetValue(object targetResource, string propertyName, object propertyValue) 112: { 113: targetResource.GetType().GetProperty(propertyName).SetValue(targetResource, propertyValue, null); 114: } 115: } 116:   117: public class NastyWords : DataService<NastyWordsDataSource> 118: { 119: // This method is called only once to initialize service-wide policies. 
120: public static void InitializeService(DataServiceConfiguration config) 121: { 122: config.SetEntitySetAccessRule("*", EntitySetRights.AllRead | EntitySetRights.WriteAppend); 123: config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; 124: } 125: } 126: } Next time we’ll allow removing nasty words. Enjoy!

    Read the article

  • WebCenter Customer Spotlight: SICE

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter  Solution Summary Sociedad Ibérica de Construcciones Eléctricas, S.A. (SICE) is a Spanish company that specializes in engineering and technology integration for intelligent transport systems and environmental control systems. They had a large quantity of engineering and environmental planning documents which they wanted to manage, classify and integrate with their existing enterprise resource planning (ERP) system. SICE adapted Oracle WebCenter Content to classify and manage more than 30 different document types, defined a security plan to ensure the integrity and recovery of various document types and integrated the document management solution with SICE’s third-party enterprise resource planning (ERP) system. SICE accelerated time to market for all projects, minimized the time required to identify and recover documents and achieved greater efficiency in all operations. Company Overview Created in 1921, Sociedad Ibérica de Construcciones Eléctricas, S.A. (SICE) currently specializes in engineering and technology integration for intelligent transport systems and environmental control systems. It has more than 2,500 employees, with operations in Spain and various locations in Latin America, the United States, Africa, and Australia. Business Challenges They had a large quantity of engineering and environmental planning documents generated in research and projects which they wanted to manage, classify and integrate with their existing enterprise resource planning (ERP) system. Solution Deployed SICE worked with the Oracle Partner ABAST Solutions to evaluate and choose the best document management system, ultimately selecting Oracle WebCenter Content over other options including Documentum, SharePoint, OpenText, and Alfresco. They adapted Oracle WebCenter Content to classify and manage more than 30 different document types, defined a security plan to ensure the integrity and recovery of various document types and integrated the document management solution with SICE’s third-party enterprise resource planning (ERP) system to accelerate incorporation with the documentation system and ensure the integrity of ERP system data. Business Results SICE accelerated time to market for all projects by releasing reports and information that support and validate engineering projects, and stored all documents in a single repository with organization-wide accessibility, minimizing the time required to identify and recover documents needed for reports to initiate and execute engineering and building projects. Overall they achieved greater efficiency in all operations, including technical and impact report development and construction documentation management. “The correct and efficient management of information is vital to our environmental management activity. Oracle WebCenter Content serves as a basis for knowledge management practices, with the objective of adding greater value to everything that we do.” Manuel Delgado, IT Project Engineering, Sociedad Ibérica de Construcciones Eléctricas, S.A. Additional Information SICE Customer Snapshot Oracle WebCenter Content

    Read the article

  • Oracle’s Primavera Inspire for SAP: Aligning Business Priorities with Project Priorities

    Oracle’s Primavera Inspire for SAP integrates schedule, financial and resource information between Oracle’s Primavera project portfolio management applications and SAP’s enterprise resource planning solutions. Join Tracy Bowman, Principal Product Manager, and learn how Primavera Inspire for SAP can help utilities and oil and gas companies complete projects on time and within budget by providing them with a single access point for all project- and portfolio-related information.

    Read the article

  • How to refactor my design, if it seems to require multiple inheritance?

    - by Omega
    Recently I asked a question about Java classes implementing methods from two sources (kinda like multiple inheritance). However, it was pointed out that this sort of need may be a sign of a design flaw. Hence, it is probably better to address my current design rather than trying to simulate multiple inheritance. Before tackling the actual problem, some background info about a particular mechanic in this framework: It is a simple game development framework. Several components allocate some memory (like pixel data), and it is necessary to get rid of it as soon as you don't need it. Sprites are an example of this. Anyway, I decided to implement something like manual reference counting from Objective-C. Certain classes, like Sprites, contain an internal counter, which is increased when you call retain() and decreased on release(). Thus the Resource abstract class was created. Any subclass of this obtains the retain() and release() implementations for free. When its count hits 0 (nobody is using the object), it calls the destroy() method. The subclass needs only to implement destroy(). This is because I don't want to rely on the garbage collector to get rid of unused pixel data. Game objects are all subclasses of the Node class - which is the main building block, as it provides info such as position, size, rotation, etc. See, two classes are used often in my game: Sprites and Labels. Ah... but wait. Sprites contain pixel data, remember? And as such, they need to extend Resource. But this, of course, can't be done. Sprites ARE nodes, hence they must subclass Node. But heck, they are resources too. Why not make Resource an interface? Because I'd have to re-implement retain() and release(), and I am trying to avoid writing the same code over and over (remember that there are multiple classes that need this memory-management system). Why not composition? Because I'd still have to implement methods in Sprite (and similar classes) that essentially call the methods of Resource. I'd still be writing the same code over and over! What is your advice in this situation, then?
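    For readers skimming the question, a minimal sketch of the reference-counting base class being described might look like the following; the field name and method bodies are illustrative, not the poster's actual code:

    // Sketch of the Resource base class described in the question.
    public abstract class Resource {
        private int retainCount = 1;

        public void retain() {
            retainCount++;
        }

        public void release() {
            retainCount--;
            if (retainCount == 0) {
                destroy(); // free pixel data or other memory immediately
            }
        }

        // Subclasses implement only the actual cleanup.
        protected abstract void destroy();
    }

    // The conflict: Sprite already extends Node, so it cannot also extend Resource.
    // public class Sprite extends Node /* ...and Resource? Not possible in Java. */ { }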

    Read the article

  • Unit-testing code that relies on untestable 3rd party code

    - by DudeOnRock
    Sometimes, especially when working with third-party code, I write unit-test-specific code in my production code. This happens when third-party code uses singletons, relies on constants, accesses the file system or a resource I don't want to access in a test situation, or overuses inheritance. The form my unit-test-specific code takes is usually the following: if accessing or importing a certain resource fails, I assume this is a test case and load a mock object. Is this poor form, and if it is, what is normally done when writing tests for code that uses untestable third-party code?
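    In code, the pattern being described might look roughly like this sketch; the PaymentService names are purely illustrative and not from any particular framework:

    interface PaymentService { void charge(int cents); }

    class RealPaymentService implements PaymentService {
        RealPaymentService() throws Exception { /* opens a file or socket the test environment cannot reach */ }
        public void charge(int cents) { /* talk to the real third-party system */ }
    }

    class MockPaymentService implements PaymentService {
        public void charge(int cents) { /* record the call so a test can assert on it */ }
    }

    public class ThirdPartyGateway {
        private final PaymentService service;

        public ThirdPartyGateway() {
            PaymentService s;
            try {
                s = new RealPaymentService();  // accessing the real resource
            } catch (Exception e) {
                s = new MockPaymentService();  // "must be a test" - the fallback the question asks about
            }
            service = s;
        }
    }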

    Read the article

  • Multiple Vulnerabilities in libpng

    - by chandan
    CVE / Description / CVSSv2 Base Score / Component / Product and Resolution:
    CVE-2010-0205: Resource Management Errors vulnerability, CVSSv2 7.8, libpng, Solaris 8 (SPARC: 114816-04, X86: 114817-04), Solaris 9 (SPARC: 139382-03, X86: 139383-03), Solaris 10 (SPARC: 137080-05, X86: 137081-05), Solaris 11 Express snv_151a
    CVE-2010-1205: Buffer Overflow vulnerability, CVSSv2 7.5
    CVE-2010-2249: Resource Management Errors vulnerability, CVSSv2 5.0
    This notification describes vulnerabilities fixed in third-party components that are included in Sun's product distribution. Information about vulnerabilities affecting Oracle Sun products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • Understanding exceptional cases

    - by Justin
    I've been studying the use of exceptions in various PHP projects (such as Doctrine and Zend Framework). Exceptions seem to be thrown when out-of-the-ordinary input/state occurs. A perfect example is Doctrine throwing an exception when you try to use an invalid query string. I think the creators of the Doctrine API understood that, first, you can't query data by using an invalid DQL statement, and a developer should immediately be warned that an error has occurred, rather than letting execution continue with the possibility of an error code going unchecked. I also bet that this simplifies reading the code. I can't think of a situation where you would want to use an invalid DQL statement, except unit testing. Since this is true, it's better to avoid plaguing a bunch of code with null/error checks and use exceptions. I've read in books that exceptions shouldn't be thrown when validating user input. I've seen examples of where the guideline is broken. One example is the Zend Framework. If you supply an invalid controller or action name, an exception is thrown. Unlike Doctrine, the user has more direct control over this sort of input. I know you can configure an error controller and set up a 404 message or what have you, but I'm curious why they have used an exception in this scenario? I guess you can argue the Zend Framework does not know how to continue processing the request. One last example is a function I wrote to return some HTML based on a given resource type. This resource type is hard-coded and sent when a user interacts with a web site (such as clicking a button to display the form to input data). I don't expect users to be mucking around with the request type. Under normal operating conditions, the resource type should be valid. To clean up some logic, I was going to throw an exception if a particular form wasn't found. This is mainly to find the correct form associated with a resource type so proper validation can occur. Does this sound like a valid use case for an exception? Right now it's pretty trivial, but I do plan to implement a RESTful consumer, and re-using a function to map resources to their validation services would be very useful. I can then catch the exception and, based on the consumer, return an error message suitable for the request type...

    Read the article

  • Multiple Denial of Service (DoS) vulnerabilities in libxml2

    - by chandan
    CVE / Description / CVSSv2 Base Score / Component / Product and Resolution:
    CVE-2011-2821: Resource Management Errors vulnerability, CVSSv2 7.5, libxml2, Solaris 11 (Contact Support), Solaris 10 (SPARC: 125731-07, X86: 125732-07), Solaris 9 (Contact Support)
    CVE-2011-2834: Resource Management Errors vulnerability, CVSSv2 6.8
    This notification describes vulnerabilities fixed in third-party components that are included in Sun's product distribution. Information about vulnerabilities affecting Oracle Sun products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • What Instruments Does a Web Based Project Management System Offer Us?

    Nowadays, in order to successfully manage various and complex projects, a project owner has access to a multitude of web-based software covering key areas of focus such as scheduling, cost control, budget management, resource allocation, documentation, and communication. Managing projects becomes time- and resource-saving, while also maximizing collaboration between team members who, in certain situations, must stay connected to the partial outcomes.

    Read the article

  • Why can’t two programs access my webcam simultaneously?

    - by qdii
    I first launch cheese and my webcam turns on. I then run vlc to grab the output of /dev/video0 but it fails with: [0x7f3ea40012e8] v4l2 demux error: cannot set input 0: Device or resource busy [0x7f3ea40012e8] v4l2 demux error: cannot set input 0: Device or resource busy [0x7f3ea4002168] v4l2 access error: cannot set input 0: Device or resource busy [0x7f3ea4002168] v4l2 access error: cannot set input 0: Device or resource busy [0x7f3eb4000b78] main input error: open of `v4l2:///dev/video0' failed Whatever pair of video programs I run (skype, cheese, vlc, etc.), the result is always the same: the second program can no longer use the webcam when the first one has already grabbed the output. However I find it curious as video4linux states: In general, V4L2 devices can be opened more than once. When this is supported by the driver, users can for example start a "panel" application to change controls like brightness or audio volume, while another application captures video and audio. My webcam is seen in lspci as 058f:a014 Alcor Micro Corp. Asus Integrated Webcam, but I don’t even know what the underlying driver is, so I can’t check whether my problem is driver-related or not. Any input would be more than welcome!

    Read the article

  • How to prevent eMule from jamming up the router?

    - by the searcher
    Usually, when eMule is started, after some time I find that the router is jammed, so the internet connection on that computer stops working, or it seems to be waiting for some port to be freed up before it can connect to a website. This sometimes affects even other PCs or Macs using the same router. Is there a way to prevent eMule from hogging too many resources or ports? I see that under Options -> Connection there are "Max Sources/File" and "Connection Limits - Maximum Connections" settings. Right now I set them to really low numbers: the first to 120 and the second to 200, but what are good numbers to fill in there so that it can work well without jamming up the router or using up the network resources of the PC or Mac? Or could it be that the number of files that are "Waiting" is too high, and uses up too many resources? (If so, can eMule automatically limit the number to 10 or 20 to prevent using too many resources?) (This happened before on a Linksys router, a Netgear router, and the AT&T U-verse router.)

    Read the article

  • empty file from tomcat https redirect?

    - by amertune
    I am using tomcat 6.0.20, with jdk 1.6.0_18 on 64-bit linux (tomcat downloaded from tomcat.apache.org, not installed from repositories). I have iptables redirects from port 80 to 8080 and from 443 to 8443. In server.xml, the connector for port 8080 redirects to 443, and the 8443 connector has proxyPort="443". In conf/web.xml, I have added this bit at the end of the file (but still inside the <web-app> ... </web-app> tags). <security-constraint> <web-resource-collection> <web-resource-name>Protected Context</web-resource-name> <url-pattern>/*</url-pattern> </web-resource-collection> <user-data-constraint> <transport-guarantee>CONFIDENTIAL</transport-guarantee> </user-data-constraint> </security-constraint> I also have two contexts, ROOT and mywebapp. If I go to http://myurl.com, then I get redirected to https://myurl.com. If I go to http://myurl.com/mywebapp/, then I get redirected to https://myurl.com/mywebapp/. The problem I'm having is when I go to http://myurl.com/mywebapp (no trailing slash). When I do this I get a prompt to download an empty file that has an empty name. Going to https://myurl.com/mywebapp works. I would think that a user typing myurl.com/mywebapp is far from rare. Is there something I'm missing?
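    For reference, the connector setup described above might look roughly like this in server.xml; the keystore location and password are illustrative and not taken from the poster's configuration:

    <!-- Plain HTTP on 8080; iptables forwards port 80 here. Secured requests redirect to 443. -->
    <Connector port="8080" protocol="HTTP/1.1" redirectPort="443" />

    <!-- HTTPS on 8443; iptables forwards port 443 here, hence proxyPort="443". -->
    <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" scheme="https" secure="true"
               proxyPort="443" keystoreFile="conf/keystore.jks" keystorePass="changeit" />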

    Read the article

  • Oracle Enterprise Manager Cloud Control 12c: Contributing to emerging Cloud standards

    - by Anand Akela
    Contributed by Tony Di Cenzo, Director for Standards Strategy and Architecture, and Mark Carlson, Principal Cloud Architect, for Oracle's Systems Management and Storage Products Groups. As one would expect of an industry leader, Oracle's participation in industry standards bodies is extensive. We participate in dozens of organizations that produce open standards which apply to our products, and our commitment to the success of these organizations is manifest in several ways - we support them financially through our memberships; our senior engineers are active participants, often serving in leadership positions on boards, technical working groups and committees; and when it makes good business sense we contribute our intellectual property. We believe supporting the development of open standards is fundamental to Oracle meeting customer demands for product choice, seamless interoperability, and lowering the cost of ownership. Nowhere is this truer than in the area of cloud standards, and for the most recent release of our flagship management product, Oracle Enterprise Manager Cloud Control 12c (EM Cloud Control 12c). There is a fundamental rule that standards follow architecture. This was true of distributed computing, it was true of service-oriented architecture (SOA), and it's true of cloud. If you are familiar with Enterprise Manager it is likely to be no surprise that EM Cloud Control 12c is a source of technology that can be considered for adoption within cloud management standards. The reason, quite simply, is that the Oracle integrated stack architecture aligns with the cloud architecture models being adopted by the industry, and EM Cloud Control 12c has been developed to manage this architecture. EM Cloud Control 12c has facilities for managing the various underlying capabilities of the integrated stack in IaaS, PaaS, and SaaS clouds, and enables essential characteristics such as on-demand self-service provisioning, centralized policy-based resource management, integrated chargeback, capacity planning, and complete visibility of the physical and virtual environment from applications to disk. Our most recent contribution in support of cloud management standards to come out of the EM Cloud Control 12c work was the Oracle Cloud Elemental Resource Model API. Oracle contributed the Elemental Resource Model API to the Distributed Management Task Force (DMTF) in 2011, where it was assigned to DMTF's Cloud Management Working Group (CMWG). The CMWG is considering the Oracle specification and those of several other vendors in their effort to produce a best practices specification for managing IaaS clouds. 
    DMTF's Cloud Infrastructure Management Interface specification, called CIMI for short, is currently out for public review and expected to be released by DMTF later this year. We are proud to be playing an important role in the development of what is expected to become a major cloud standard. You can find more information on DMTF CIMI at http://dmtf.org/standards/cloud. You can find the work-in-progress release of CIMI at http://dmtf.org/content/cimi-work-progress-specifications-now-available-public-comment. The Oracle Cloud API specification is available on the Oracle Technology Network. You can find more information about the Oracle Cloud Elemental Resource Model API on the Oracle Technology Network (OTN), including a webcast featuring the API engineering manager Jack Yu (see TechCast Live: Inside the Oracle Cloud Resource Model API). If you have not seen this video, we recommend you take the time to view it. If you have a question about the Oracle Cloud API or want to learn more about Oracle's participation in cloud management standards efforts, drop us a line. We'd love to hear from you. The Enterprise Manager Standards Blogs are written by Tony Di Cenzo, Director for Standards Strategy and Architecture, and Mark Carlson, Principal Cloud Architect, for Oracle's Systems Management and Storage Products Groups. They can be reached at Tony.DiCenzo at Oracle.com and Mark.Carlson at Oracle.com respectively. Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • FairScheduling Conventions in Hadoop

    - by dan.mcclary
    While scheduling and resource allocation control has been present in Hadoop since 0.20, a lot of people haven't discovered or utilized it in their initial investigations of the Hadoop ecosystem. We could chalk this up to many things: organizations are still determining what their dataflow and analysis workloads will comprise; small deployments under test aren't likely to show the signs of strain that would send someone looking for resource allocation options; and the default scheduling options -- the FairScheduler and the CapacityScheduler -- are not placed in the most prominent position within the Hadoop documentation. However, for production deployments, it's wise to start with at least the foundations of scheduling in place so that you can tune the cluster as workloads emerge. To do that, we have to ask ourselves something about what the off-the-rack scheduling options are. We have some choices: The FairScheduler, which will work to ensure resource allocations are enforced on a per-job basis. The CapacityScheduler, which will ensure resource allocations are enforced on a per-queue basis. Writing your own implementation of the abstract class org.apache.hadoop.mapred.job.TaskScheduler is an option, but usually overkill. If you're going to have several concurrent users and leverage the more interactive aspects of the Hadoop environment (e.g. Pig and Hive scripting), the FairScheduler is definitely the way to go. In particular, we can do user-specific pools so that default users get their fair share, and specific users are given the resources their workloads require. To enable fair scheduling, we're going to need to do a couple of things. First, we need to tell the JobTracker that we want to use fair scheduling and where we're going to be defining our allocations. We do this by adding the following to the mapred-site.xml file in HADOOP_HOME/conf: <property> <name>mapred.jobtracker.taskScheduler</name> <value>org.apache.hadoop.mapred.FairScheduler</value> </property> <property> <name>mapred.fairscheduler.allocation.file</name> <value>/path/to/allocations.xml</value> </property> <property> <name>mapred.fairscheduler.poolnameproperty</name> <value>pool.name</value> </property> <property> <name>pool.name</name> <value>${user.name}</value> </property> What we've done here is simply tell the JobTracker that we'd like task scheduling to use the FairScheduler class rather than a single FIFO queue. Moreover, we're going to be defining our resource pools and allocations in a file called allocations.xml. For reference, the allocation file is read every 15s or so, which allows for tuning allocations without having to take down the JobTracker. Our allocation file is now going to look a little like this: <?xml version="1.0"?> <allocations> <pool name="dan"> <minMaps>5</minMaps> <minReduces>5</minReduces> <maxMaps>25</maxMaps> <maxReduces>25</maxReduces> <minSharePreemptionTimeout>300</minSharePreemptionTimeout> </pool> <user name="dan"> <maxRunningJobs>6</maxRunningJobs> </user> <userMaxJobsDefault>3</userMaxJobsDefault> <fairSharePreemptionTimeout>600</fairSharePreemptionTimeout> </allocations> In this case, I've explicitly set my username to have upper and lower bounds on the maps and reduces, and allotted myself double the number of running jobs. Now, if I run hive or pig jobs from either the console or via the Hue web interface, I'll be treated "fairly" by the JobTracker. 
There's a lot more tweaking that can be done to the allocations file, so it's best to dig down into the description and start trying out allocations that might fit your workload.
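    If you want a specific job to land in a particular pool rather than the ${user.name} default configured above, one option is to set the pool property on the job configuration itself. A rough sketch follows; the job setup details are omitted and the pool name is illustrative:

    import org.apache.hadoop.mapred.JobConf;

    public class PoolAssignmentExample {
        public static void main(String[] args) {
            JobConf conf = new JobConf();
            // "pool.name" matches the mapred.fairscheduler.poolnameproperty set in mapred-site.xml above
            conf.set("pool.name", "dan");
            // ... configure mapper, reducer, input/output paths, then submit the job
        }
    }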

    Read the article

  • Making your WCF Web Apis to speak in multiple languages

    - by cibrax
    One of the key aspects of how the web works today is content negotiation. The idea of content negotiation is based on the fact that a single resource can have multiple representations, so user agents (or clients) and servers can work together to chose one of them. The http specification defines several “Accept” headers that a client can use to negotiate content with a server, and among all those, there is one for restricting the set of natural languages that are preferred as a response to a request, “Accept-Language”. For example, a client can specify “es” in this header for specifying that he prefers to receive the content in spanish or “en” in english. However, there are certain scenarios where the “Accept-Language” header is just not enough, and you might want to have a way to pass the “accepted” language as part of the resource url as an extension. For example, http://localhost/ProductCatalog/Products/1.es” returns all the descriptions for the product with id “1” in spanish. This is useful for scenarios in which you want to embed the link somewhere, such a document, an email or a page.  Supporting both scenarios, the header and the url extension, is really simple in the new WCF programming model. You only need to provide a processor implementation for any of them. Let’s say I have a resource implementation as part of a product catalog I want to expose with the WCF web apis. [ServiceContract][Export]public class ProductResource{ IProductRepository repository;  [ImportingConstructor] public ProductResource(IProductRepository repository) { this.repository = repository; }  [WebGet(UriTemplate = "{id}")] public Product Get(string id, HttpResponseMessage response) { var product = repository.GetById(int.Parse(id)); if (product == null) { response.StatusCode = HttpStatusCode.NotFound; response.Content = new StringContent(Messages.OrderNotFound); }  return product; }} The Get method implementation in this resource assumes the desired culture will be attached to the current thread (Thread.CurrentThread.Culture). Another option is to pass the desired culture as an additional argument in the method, so my processor implementation will handle both options. This method is also using an auto-generated class for handling string resources, Messages, which is available in the different cultures that the service implementation supports. For example, Messages.resx contains “OrderNotFound”: “Order Not Found” Messages.es.resx contains “OrderNotFound”: “No se encontro orden” The processor implementation bellow tackles the first scenario, in which the desired language is passed as part of the “Accept-Language” header. 
public class CultureProcessor : Processor<HttpRequestMessage, CultureInfo>{ string defaultLanguage = null;  public CultureProcessor(string defaultLanguage = "en") { this.defaultLanguage = defaultLanguage; this.InArguments[0].Name = HttpPipelineFormatter.ArgumentHttpRequestMessage; this.OutArguments[0].Name = "culture"; }  public override ProcessorResult<CultureInfo> OnExecute(HttpRequestMessage request) { CultureInfo culture = null; if (request.Headers.AcceptLanguage.Count > 0) { var language = request.Headers.AcceptLanguage.First().Value; culture = new CultureInfo(language); } else { culture = new CultureInfo(defaultLanguage); }  Thread.CurrentThread.CurrentCulture = culture; Messages.Culture = culture;  return new ProcessorResult<CultureInfo> { Output = culture }; }}   As you can see, the processor initializes a new CultureInfo instance with the value provided in the “Accept-Language” header, and set that instance to the current thread and the auto-generated resource class with all the messages. In addition, the CultureInfo instance is returned as an output argument called “culture”, making possible to receive that argument in any method implementation   The following code shows the implementation of the processor for handling languages as url extensions.   public class CultureExtensionProcessor : Processor<HttpRequestMessage, Uri>{ public CultureExtensionProcessor() { this.OutArguments[0].Name = HttpPipelineFormatter.ArgumentUri; }  public override ProcessorResult<Uri> OnExecute(HttpRequestMessage httpRequestMessage) { var requestUri = httpRequestMessage.RequestUri.OriginalString;  var extensionPosition = requestUri.LastIndexOf(".");  if (extensionPosition > -1) { var extension = requestUri.Substring(extensionPosition + 1);  var query = httpRequestMessage.RequestUri.Query;  requestUri = string.Format("{0}?{1}", requestUri.Substring(0, extensionPosition), query); ;  var uri = new Uri(requestUri);  httpRequestMessage.Headers.AcceptLanguage.Clear();  httpRequestMessage.Headers.AcceptLanguage.Add(new StringWithQualityHeaderValue(extension));  var result = new ProcessorResult<Uri>();  result.Output = uri;  return result; }  return new ProcessorResult<Uri>(); }} The last step is to inject both processors as part of the service configuration as it is shown bellow, public void RegisterRequestProcessorsForOperation(HttpOperationDescription operation, IList<Processor> processors, MediaTypeProcessorMode mode){ processors.Insert(0, new CultureExtensionProcessor()); processors.Add(new CultureProcessor());} Once you configured the two processors in the pipeline, your service will start speaking different languages :). Note: Url extensions don’t seem to be working in the current bits when you are using Url extensions in a base address. As far as I could see, ASP.NET intercepts the request first and tries to route the request to a registered ASP.NET Http Handler with that extension. For example, “http://localhost/ProductCatalog/products.es” does not work, but “http://localhost/ProductCatalog/products/1.es” does.

    Read the article

  • How to Secure a Data Role by Multiple Business Units

    - by Elie Wazen
    In this post we will see how a Role can be data secured by multiple Business Units (BUs). Separate Data Roles are generally created for each BU if a corresponding data template generates roles on the basis of the BU dimension. The advantage of creating a policy with a rule that includes multiple BUs is that, while mapping these roles in HCM Role Provisioning Rules, fewer entries need to be made. This could facilitate maintenance for enterprises with a large number of Business Units. Note: The example below applies as well if the securing entity is Inventory Organization. Let us take for example the case of a user provisioned with the "Accounts Payable Manager - Vision Operations" Data Role in Fusion Applications. This user will be able to access Invoices in Vision Operations but will not be able to see Invoices in Vision Germany. Figure 1. A User with a Data Role restricting them to Data from BU: Vision Operations With the role granted above, this is what the user will see when they attempt to select Business Units while searching for AP Invoices. Figure 2. The List of Values of Business Units is limited to a single one. This is the effect of the Data Role granted to that user as can be seen in Figure 1 In order to create a data role that secures by multiple BUs, we need to start by creating a condition that groups those Business Units we want to include in that data role. This is accomplished by creating a new condition against the BU View. That Condition will later be used to create a data policy for our newly created Role. The BU View is a Database resource and is accessed from APM as seen in the search below Figure 3. Viewing a Database Resource in APM The next step is to create a new condition, in which we define a SQL predicate that includes 2 BUs (the ids below refer to Vision Operations and Vision Germany). At this point we have simply created a standalone condition. We have not used this condition yet, and security is therefore not affected. Figure 4. Custom Role that inherits the Purchase Order Overview Duty We are now ready to create our Data Policy. In APM, we search for our newly created Role and navigate to “Find Global Policies”. We query the Role we want to secure and navigate to view its global policies. Figure 5. The Job Role we plan on securing We can see that the role was not defined with a Data Policy, so we will create one that uses the condition we created earlier. Figure 6. Creating a New Data Policy In the General Information tab, we have to specify the DB Resource that the Security Policy applies to: in our case this is the BU View Figure 7. Data Policy Definition - Selection of the DB Resource we will secure by In the Rules Tab, we make the rule applicable to multiple values of the DB Resource we selected in the previous tab. This is where we associate the condition we created against the BU view to this data policy by entering the Condition name in the Condition field Figure 8. Data Policy Rule The last step of defining the Data Policy consists of explicitly selecting the Actions that are governed by this Data Policy. In this case, for example, we select the Actions displayed below in the right pane. Once the record is saved, we are ready to use our newly secured Data Role. Figure 9. Data Policy Actions We can now see a new Data Policy associated with our Role. Figure 10. Role is now secured by a Data Policy We now assign that new Role to the User.  
    Of course this does not have to be done in OIM and can be done using a Provisioning Rule in HCM. Figure 11. Role assigned to the User who previously was granted the Vision Ops secured role. Once that user accesses the Invoices Workarea, this is what they see: in the image below, the LOV of Business Unit returns the two values defined in our data policy, namely Vision Operations and Vision Germany. Figure 12. The List of Values of Business Units now includes the two we included in our data policy. This is the effect of the data role granted to that user, as can be seen in Figure 11.

    Read the article

  • Best Practices - Dynamic Reconfiguration

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (formerly named Logical Domains) Overview of dynamic Reconfiguration Oracle VM Server for SPARC supports Dynamic Reconfiguration (DR), making it possible to add or remove resources to or from a domain (virtual machine) while it is running. This is extremely useful because resources can be shifted to or from virtual machines in response to load conditions without having to reboot or interrupt running applications. For example, if an application requires more CPU capacity, you can add CPUs to improve performance, and remove them when they are no longer needed. You can use even use Dynamic Resource Management (DRM) policies that automatically add and remove CPUs to domains based on load. How it works (in broad general terms) Dynamic Reconfiguration is done in coordination with Solaris, which recognises a hypervisor request to change its virtual machine configuration and responds appropriately. In essence, Solaris receives a message saying "you now have 16 more CPUs numbered 16 to 31" or "8GB more RAM starting at address X" or "here's a new network or disk device - have fun with it". These actions take very little time. Solaris then can start using the new resource. In the case of added CPUs, that means dispatching processes and potentially binding interrupts to the new CPUs. For memory, Solaris adds the new memory pages to its "free" list and starts using them. Comparable actions occur with network and disk devices: they are recognised by Solaris and then used. Removing is the reverse process: after receiving the DR message to free specific CPUs, Solaris unbinds interrupts assigned to the CPUs and stops dispatching process threads. That takes very little time. primary # ldm list NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME primary active -n-cv- SP 16 4G 1.0% 6d 22h 29m ldom1 active -n---- 5000 16 8G 0.9% 6h 59m primary # ldm set-core 5 ldom1 primary # ldm list NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME primary active -n-cv- SP 16 4G 0.2% 6d 22h 29m ldom1 active -n---- 5000 40 8G 0.1% 6h 59m primary # ldm set-core 2 ldom1 primary # ldm list NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME primary active -n-cv- SP 16 4G 1.0% 6d 22h 29m ldom1 active -n---- 5000 16 8G 0.9% 6h 59m Memory pages are vacated by copying their contents to other memory locations and wiping them clean. Solaris may have to swap memory contents to disk if the remaining RAM isn't enough to hold all the contents. For this reason, deallocating memory can take longer on a loaded system. Even on a lightly loaded system it took several 7 or 8 seconds to switch the domain below between 8GB and 24GB of RAM. primary # ldm set-mem 24g ldom1 primary # ldm list NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME primary active -n-cv- SP 16 4G 0.1% 6d 22h 36m ldom1 active -n---- 5000 16 24G 0.2% 7h 6m primary # ldm set-mem 8g ldom1 primary # ldm list NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME primary active -n-cv- SP 16 4G 0.7% 6d 22h 37m ldom1 active -n---- 5000 16 8G 0.3% 7h 7m What if the device is in use? (this is the anecdote that inspired this blog post) If CPU or memory is being removed, releasing it pretty straightforward, using the method described above. The resources are released, and Solaris continues with less capacity. It's not as simple with a network or I/O device: you don't want to yank a device out from underneath an application that might be using it. 
In the following example, I've added a virtual network device to ldom1 and want to take it away, even though it's been plumbed. primary # ldm rm-vnet vnet19 ldom1 Guest LDom returned the following reason for failing the operation: Resource Information ---------------------------------------------------------- ----------------------- /devices/virtual-devices@100/channel-devices@200/network@1 Network interface net1 VIO operation failed because device is being used in LDom ldom1 Failed to remove VNET instance That's what I call a helpful error message - telling me exactly what was wrong. In this case the problem is easily solved. I know this NIC is seen in the guest as net1 so: ldom1 # ifconfig net1 down unplumb Now I can dispose of it, and even the virtual switch I had created for it: primary # ldm rm-vnet vnet19 ldom1 primary # ldm rm-vsw primary-vsw9 If I had to take away the device disruptively, I could have used ldm rm-vnet -f but that could disrupt whoever was using it. It's better if that can be avoided. Summary Oracle VM Server for SPARC provides dynamic reconfiguration, which lets you modify a guest domain's CPU, memory and I/O configuration on the fly without reboot. You can add and remove resources as needed, and even automate this for CPUs by setting up resource policies. Taking things away can be more complicated than giving, especially for devices like disks and networks that may contain application and system state or be involved in a transaction. LDoms and Solaris cooperative work together to coordinate resource allocation and de-allocation in a safe and effective way. For best practices, use dynamic reconfiguration to make the best use of your system's resources.
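    As a side note on the DRM policies mentioned at the top of this post, defining one is a single ldm command. The values below are purely illustrative, not a recommendation:

    # Automatically grow and shrink ldom1 between 2 and 16 vCPUs based on utilization (sample values)
    primary # ldm add-policy enable=yes name=high-usage vcpu-min=2 vcpu-max=16 \
        util-lower=25 util-upper=75 attack=2 decay=1 ldom1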

    Read the article

  • Data Source Security Part 5

    - by Steve Felts
    If you read through the first four parts of this series on data source security, you should be an expert on this focus area.  There is one more small topic to cover related to WebLogic Resource permissions.  After that comes the test, I mean example, to see with a real set of configuration parameters what the results are with some concrete values. WebLogic Resource Permissions All of the discussion so far has been about database credentials that are (eventually) used on the database side.  WLS has resource credentials to control which WLS users are allowed to access JDBC resources.  These can be defined on the Policies tab on the Security tab associated with the data source.  There are four permissions: “reserve” (get a new connection), “admin”, “shrink”, and “reset” (plus the all-inclusive “ALL”); we will focus on “reserve” here because we are talking about getting connections.  By default, JDBC resource permissions are completely open – anyone can do anything.  As soon as you add one policy for a permission, then all other users are restricted.  For example, if I add a policy so that “weblogic” can reserve a connection, then all other users will fail to reserve connections unless they are also explicitly added.  The validation is done for WLS user credentials only, not database user credentials.  Configuration of resources in general is described at “Create policies for resource instances” http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/security/CreatePoliciesForResourceInstances.html.  This feature can be very useful to restrict what code and users can get to your database. There are three use cases:
    - getConnection() with use-database-credentials true or false: the current WLS user is used for permission checking
    - getConnection(user,password) with use-database-credentials false: the user/password from the API are used for permission checking
    - getConnection(user,password) with use-database-credentials true: the current WLS user is used for permission checking
    If a simple getConnection() is used or database credentials are enabled, the current user that is authenticated to the WLS system is checked. If database credentials are not enabled, then the user and password on the API are used. Example The following is an actual example of the interactions between identity-based-connection-pooling-enabled, oracle-proxy-session, and use-database-credentials. On the database side, the following objects are configured:
    - Database users scott; jdbcqa; jdbcqa3
    - Permission for proxy: alter user jdbcqa3 grant connect through jdbcqa;
    - Permission for proxy: alter user jdbcqa grant connect through jdbcqa;
    The following WebLogic Data Source objects are configured:
    - Users weblogic, wluser
    - Credential mapping “weblogic” to “scott”
    - Credential mapping "wluser" to "jdbcqa3"
    - Data source descriptor configured with user “jdbcqa”
    - All tests are run with Set Client ID set to true (more about that below).
    - All tests are run with oracle-proxy-session set to false (more about that below). 
    The test program:
    - Runs in a servlet
    - Authenticates to WLS as user “weblogic”
    The results, by combination of use-database-credentials and identity-based-connection-pooling-enabled, are as follows.
    Use DB credentials = true, identity based = true:
    - getConnection(scott,***): Identity scott, Client weblogic, Proxy null
    - getConnection(weblogic,***): weblogic fails - not a db user
    - getConnection(jdbcqa3,***): User jdbcqa3, Client weblogic, Proxy null
    - getConnection(): Default user jdbcqa, Client weblogic, Proxy null
    Use DB credentials = false, identity based = true:
    - getConnection(scott,***): scott fails - not a WLS user
    - getConnection(weblogic,***): User scott, Client scott, Proxy null
    - getConnection(jdbcqa3,***): jdbcqa3 fails - not a WLS user
    - getConnection(): User scott, Client scott, Proxy null
    Use DB credentials = true, identity based = false:
    - getConnection(scott,***): Proxy for scott fails
    - getConnection(weblogic,***): weblogic fails - not a db user
    - getConnection(jdbcqa3,***): User jdbcqa3, Client weblogic, Proxy jdbcqa
    - getConnection(): Default user jdbcqa, Client weblogic, Proxy null
    Use DB credentials = false, identity based = false:
    - getConnection(scott,***): scott fails - not a WLS user
    - getConnection(weblogic,***): Default user jdbcqa, Client scott, Proxy null
    - getConnection(jdbcqa3,***): jdbcqa3 fails - not a WLS user
    - getConnection(): Default user jdbcqa, Client scott, Proxy null
    If Set Client ID is set to false, all cases would have Client set to null. If this was not an Oracle thin driver, the one case with the non-null Proxy in the above table would throw an exception because proxy session is only supported, implicitly or explicitly, with the Oracle thin driver. When oracle-proxy-session is set to true, the only cases that will pass (with a proxy of "jdbcqa") are the following. 1. Setting use-database-credentials to true and doing getConnection(jdbcqa3,…) or getConnection(). 2. Setting use-database-credentials to false and doing getConnection(wluser, …) or getConnection(). Summary There are many options to choose from for data source security.  Considerations include the number and volatility of WLS and Database users, the granularity of data access, the depth of the security identity (property on the connection or a real user), performance, coordination of various components in the software stack, and driver capabilities.  Now that you have the big picture (remember that table in part 1), you can make a more informed choice.
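    For readers who want to see the API side of that table, here is a minimal sketch of the two getConnection() variants; the JNDI name and the password are illustrative placeholders, not values from the example above:

    import java.sql.Connection;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class DataSourceSecurityDemo {
        public static void main(String[] args) throws Exception {
            InitialContext ctx = new InitialContext();
            // "jdbc/myDS" is a placeholder; use your data source's JNDI name
            DataSource ds = (DataSource) ctx.lookup("jdbc/myDS");

            // getConnection(): the current WLS user is checked for the "reserve" permission
            // and, if credential mapping applies, mapped to a database user.
            try (Connection c1 = ds.getConnection()) { /* use the connection */ }

            // getConnection(user, password): with use-database-credentials=true the current
            // WLS user is still checked; with false, the supplied credentials are checked.
            try (Connection c2 = ds.getConnection("scott", "password")) { /* use the connection */ }
        }
    }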

    Read the article

  • Cannot redeclare class error when generating PHPUnit code coverage report

    - by Cobby
    Starting a project with Zend Framework 1.10 and Doctrine 2 (Beta1). I am using namespaces in my own library code. When generating code coverage reports I get a fatal error about redeclaring a class. To provide more info, I've commented out the xdebug_disable() call in my phpunit executable so you can see the function trace (I disabled local variables output because there was too much output). Here's my Terminal output:

    $ phpunit
    PHPUnit 3.4.12 by Sebastian Bergmann.
    ........
    Time: 4 seconds, Memory: 16.50Mb
    OK (8 tests, 14 assertions)
    Generating code coverage report, this may take a moment.
    PHP Fatal error:  Cannot redeclare class Cob\Application\Resource\HelperBroker in /Users/Cobby/Sites/project/trunk/code/library/Cob/Application/Resource/HelperBroker.php on line 93
    PHP Stack trace:
    PHP   1. {main}() /usr/local/zend/bin/phpunit:0
    PHP   2. PHPUnit_TextUI_Command::main() /usr/local/zend/bin/phpunit:54
    PHP   3. PHPUnit_TextUI_Command->run() /usr/local/zend/share/pear/PHPUnit/TextUI/Command.php:146
    PHP   4. PHPUnit_TextUI_TestRunner->doRun() /usr/local/zend/share/pear/PHPUnit/TextUI/Command.php:213
    PHP   5. PHPUnit_Util_Report::render() /usr/local/zend/share/pear/PHPUnit/TextUI/TestRunner.php:478
    PHP   6. PHPUnit_Framework_TestResult->getCodeCoverageInformation() /usr/local/zend/share/pear/PHPUnit/Util/Report.php:97
    PHP   7. PHPUnit_Util_Filter::getFilteredCodeCoverage() /usr/local/zend/share/pear/PHPUnit/Framework/TestResult.php:623

    Fatal error: Cannot redeclare class Cob\Application\Resource\HelperBroker in /Users/Cobby/Sites/project/trunk/code/library/Cob/Application/Resource/HelperBroker.php on line 93
    Call Stack:
    0.0004    322888   1. {main}() /usr/local/zend/bin/phpunit:0
    0.0816   4114628   2. PHPUnit_TextUI_Command::main() /usr/local/zend/bin/phpunit:54
    0.0817   4114964   3. PHPUnit_TextUI_Command->run() /usr/local/zend/share/pear/PHPUnit/TextUI/Command.php:146
    0.1151   5435528   4. PHPUnit_TextUI_TestRunner->doRun() /usr/local/zend/share/pear/PHPUnit/TextUI/Command.php:213
    4.2931  16690760   5. PHPUnit_Util_Report::render() /usr/local/zend/share/pear/PHPUnit/TextUI/TestRunner.php:478
    4.2931  16691120   6. PHPUnit_Framework_TestResult->getCodeCoverageInformation() /usr/local/zend/share/pear/PHPUnit/Util/Report.php:97
    4.2931  16691148   7. PHPUnit_Util_Filter::getFilteredCodeCoverage() /usr/local/zend/share/pear/PHPUnit/Framework/TestResult.php:623

    (I have no idea why it shows the error twice...?)

    And here is my phpunit.xml:

    <phpunit bootstrap="./code/tests/application/bootstrap.php" colors="true">
        <!-- bootstrap.php changes directory to trunk/code/tests,
             all paths below are relative to this directory. -->
        <testsuite name="My Promotions">
            <directory>./</directory>
        </testsuite>
        <filter>
            <whitelist>
                <directory suffix=".php">../application</directory>
                <directory suffix=".php">../library/Cob</directory>
                <exclude>
                    <!-- By adding the below line I can remove the error -->
                    <file>../library/Cob/Application/Resource/HelperBroker.php</file>
                    <directory suffix=".phtml">../application</directory>
                    <directory suffix=".php">../application/doctrine</directory>
                    <file>../application/Bootstrap.php</file>
                    <directory suffix=".php">../library/Cob/Tools</directory>
                </exclude>
            </whitelist>
        </filter>
        <logging>
            <log type="junit" target="../../build/reports/tests/report.xml" />
            <log type="coverage-html" target="../../build/reports/coverage" charset="UTF-8"
                 yui="true" highlight="true" lowUpperBound="50" highLowerBound="80" />
        </logging>
    </phpunit>

    I have added a <file> entry inside the <exclude> element, which seems to hide this problem.
    I do have another application resource, but it doesn't seem to have a problem (the other one is a Doctrine 2 resource). I'm not sure why it is specific to this class; my entire library is autoloaded, so there aren't any include/require calls anywhere. I guess it should be noted that HelperBroker is the first file in the filesystem stemming out from library/Cob. I am on Snow Leopard with the latest/recent versions of all software (Zend Server, Zend Framework, Doctrine 2 Beta1, Phing, PHPUnit, PEAR).

    Read the article

  • What's the best Communication Pattern for EJB3-based applications?

    - by Hank
    I'm starting a JEE project that needs to be strongly scalable. So far, the concept was:
    - several Message Driven Beans, responsible for different parts of the architecture
    - each MDB has a Session Bean injected, handling the business logic
    - a couple of Entity Beans, providing access to the persistence layer
    - communication between the different parts of the architecture via a request/reply concept over JMS messages: an MDB receives a message containing an activity request, uses its session bean to execute the necessary business logic, and returns the response object in a message to the original requester

    The idea was that by de-coupling the parts of the architecture from each other via the message bus, there is no limit to the scalability. Simply start more components - as long as they are connected to the same bus, we can grow and grow. Unfortunately, we're having massive problems with the request/reply concept. Transaction management seems to get in our way constantly. It seems that session beans are not supposed to consume messages?! Reading http://blogs.sun.com/fkieviet/entry/request_reply_from_an_ejb and http://forums.sun.com/message.jspa?messageID=10338789, I get the feeling that people actually recommend against the request/reply concept for EJBs. If that is the case, how do you communicate between your EJBs? (Remember, scalability is what I'm after.)

    Details of my current setup:
    - MDB 1 'TestController' uses (local) SLSB 1 'TestService' for business logic
    - TestController.onMessage() makes TestService send a message to queue XYZ and request a reply
    - TestService uses Bean Managed Transactions
    - TestService establishes a connection & session to the JMS broker via a joint connection factory upon initialization (@PostConstruct)
    - TestService commits the transaction after sending, then begins another transaction and waits 10 seconds for the response
    - The message gets to MDB 2 'LocationController', which uses (local) SLSB 2 'LocationService' for business logic
    - LocationController.onMessage() makes LocationService send a message back to the requested JMSReplyTo queue
    - Same BMT concept, same @PostConstruct concept; all use the same connection factory to access the broker

    Problem: The first message gets sent (by SLSB 1) and received (by MDB 2) OK. The sending of the returning message (by SLSB 2) is fine as well. However, SLSB 1 never receives anything - it just times out. I tried without the messageSelector; no change, still no message received. Is it not OK to consume messages in a session bean?
    SLSB 1 - TestService.java

    @Resource(name = "jms/mvs.MVSControllerFactory")
    private javax.jms.ConnectionFactory connectionFactory;

    @PostConstruct
    public void initialize() {
        try {
            jmsConnection = connectionFactory.createConnection();
            session = jmsConnection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            System.out.println("Connection to JMS Provider established");
        } catch (Exception e) {
        }
    }

    public Serializable sendMessageWithResponse(Destination reqDest, Destination respDest, Serializable request) {
        Serializable response = null;
        try {
            utx.begin();
            Random rand = new Random();
            String correlationId = rand.nextLong() + "-" + (new Date()).getTime();
            // prepare the sending message object
            ObjectMessage reqMsg = session.createObjectMessage();
            reqMsg.setObject(request);
            reqMsg.setJMSReplyTo(respDest);
            reqMsg.setJMSCorrelationID(correlationId);
            // prepare the publishers and subscribers
            MessageProducer producer = session.createProducer(reqDest);
            // send the message
            producer.send(reqMsg);
            System.out.println("Request Message has been sent!");
            utx.commit();
            // need to start second transaction, otherwise the first msg never gets sent
            utx.begin();
            MessageConsumer consumer = session.createConsumer(respDest, "JMSCorrelationID = '" + correlationId + "'");
            jmsConnection.start();
            ObjectMessage respMsg = (ObjectMessage) consumer.receive(10000L);
            utx.commit();
            if (respMsg != null) {
                response = respMsg.getObject();
                System.out.println("Response Message has been received!");
            } else {
                // timeout waiting for response
                System.out.println("Timeout waiting for response!");
            }
        } catch (Exception e) {
        }
        return response;
    }

    SLSB 2 - LocationService.java (only the reply method, the rest is the same as above)

    public boolean reply(Message origMsg, Serializable o) {
        boolean rc = false;
        try {
            // check if we have the necessary correlationID and replyTo destination
            if (!origMsg.getJMSCorrelationID().equals("") && (origMsg.getJMSReplyTo() != null)) {
                // prepare the payload
                utx.begin();
                ObjectMessage msg = session.createObjectMessage();
                msg.setObject(o);
                // make it a response
                msg.setJMSCorrelationID(origMsg.getJMSCorrelationID());
                Destination dest = origMsg.getJMSReplyTo();
                // send it
                MessageProducer producer = session.createProducer(dest);
                producer.send(msg);
                producer.close();
                System.out.println("Reply Message has been sent");
                utx.commit();
                rc = true;
            }
        } catch (Exception e) {
        }
        return rc;
    }

    sun-resources.xml

    <admin-object-resource enabled="true" jndi-name="jms/mvs.LocationControllerRequest"
                           res-type="javax.jms.Queue" res-adapter="jmsra">
        <property name="Name" value="mvs.LocationControllerRequestQueue"/>
    </admin-object-resource>
    <admin-object-resource enabled="true" jndi-name="jms/mvs.LocationControllerResponse"
                           res-type="javax.jms.Queue" res-adapter="jmsra">
        <property name="Name" value="mvs.LocationControllerResponseQueue"/>
    </admin-object-resource>
    <connector-connection-pool name="jms/mvs.MVSControllerFactoryPool"
                               connection-definition-name="javax.jms.QueueConnectionFactory"
                               resource-adapter-name="jmsra"/>
    <connector-resource enabled="true" jndi-name="jms/mvs.MVSControllerFactory"
                        pool-name="jms/mvs.MVSControllerFactoryPool" />

    Read the article

  • Cascading S3 Sink Tap not being deleted with SinkMode.REPLACE

    - by Eric Charles
    We are running Cascading with a Sink Tap configured to store in Amazon S3 and were facing some FileAlreadyExistsException (see [1]). This happened only from time to time (about 1 time in 100) and was not reproducible. Digging into the Cascading code, we discovered that Hfs.deleteResource() is called (among others) by BaseFlow.deleteSinksIfNotUpdate(). Btw, we were quite intrigued by the silent NPE (with the comment "hack to get around npe thrown when fs reaches root directory"). From there, we extended the Hfs tap with our own Tap to add more action in the deleteResource() method (see [2]), with a retry mechanism calling getFileSystem(conf).delete directly. The retry mechanism seemed to bring improvement, but we are still sometimes facing failures (see example in [3]): it sounds like HDFS returns isDeleted=true, but when we ask directly afterwards whether the folder exists, we receive exists=true, which should not happen. The logs also randomly show isDeleted true or false when the flow succeeds, which suggests the returned value is irrelevant or not to be trusted. Can anybody share their own S3 experience with such behavior: "folder should be deleted, but it is not"? We suspect an S3 issue, but could it also be in Cascading or HDFS? We run on Hadoop Cloudera-cdh3u5 and Cascading 2.0.1-wip-dev.

    [1]
    org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory s3n://... already exists
        at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:132)
        at com.twitter.elephantbird.mapred.output.DeprecatedOutputFormatWrapper.checkOutputSpecs(DeprecatedOutputFormatWrapper.java:75)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:923)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:882)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:882)
        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:856)
        at cascading.flow.hadoop.planner.HadoopFlowStepJob.internalNonBlockingStart(HadoopFlowStepJob.java:104)
        at cascading.flow.planner.FlowStepJob.blockOnJob(FlowStepJob.java:174)
        at cascading.flow.planner.FlowStepJob.start(FlowStepJob.java:137)
        at cascading.flow.planner.FlowStepJob.call(FlowStepJob.java:122)
        at cascading.flow.planner.FlowStepJob.call(FlowStepJob.java:42)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.j

    [2]
    @Override
    public boolean deleteResource(JobConf conf) throws IOException {
        LOGGER.info("Deleting resource {}", getIdentifier());
        boolean isDeleted = super.deleteResource(conf);
        LOGGER.info("Hfs Sink Tap isDeleted is {} for {}", isDeleted, getIdentifier());
        Path path = new Path(getIdentifier());
        int retryCount = 0;
        int cumulativeSleepTime = 0;
        int sleepTime = 1000;
        while (getFileSystem(conf).exists(path)) {
            LOGGER.info("Resource {} still exists, it should not... - I will continue to wait patiently...", getIdentifier());
            try {
                LOGGER.info("Now I will sleep " + sleepTime / 1000 + " seconds while trying to delete {} - attempt: {}", getIdentifier(), retryCount + 1);
                Thread.sleep(sleepTime);
                cumulativeSleepTime += sleepTime;
                sleepTime *= 2;
            } catch (InterruptedException e) {
                e.printStackTrace();
                LOGGER.error("Interrupted while sleeping trying to delete {} with message {}...", getIdentifier(), e.getMessage());
                throw new RuntimeException(e);
            }
            if (retryCount == 0) {
                getFileSystem(conf).delete(getPath(), true);
            }
            retryCount++;
            if (cumulativeSleepTime > MAXIMUM_TIME_TO_WAIT_TO_DELETE_MS) {
                break;
            }
        }
        if (getFileSystem(conf).exists(path)) {
            LOGGER.error("We didn't succeed to delete the resource {}. Throwing now a runtime exception.", getIdentifier());
            throw new RuntimeException("Although we waited to delete the resource for " + getIdentifier() + ' ' + retryCount
                    + " iterations, it still exists - This must be an issue in the underlying storage system.");
        }
        return isDeleted;
    }

    [3]
    INFO  [pool-2-thread-15] (BaseFlow.java:1287) - [...] at least one sink is marked for delete
    INFO  [pool-2-thread-15] (BaseFlow.java:1287) - [...] sink oldest modified date: Wed Dec 31 23:59:59 UTC 1969
    INFO  [pool-2-thread-15] (HiveSinkTap.java:148) - Now I will sleep 1 seconds while trying to delete s3n://... - attempt: 1
    INFO  [pool-2-thread-15] (HiveSinkTap.java:130) - Deleting resource s3n://...
    INFO  [pool-2-thread-15] (HiveSinkTap.java:133) - Hfs Sink Tap isDeleted is true for s3n://...
    ERROR [pool-2-thread-15] (HiveSinkTap.java:175) - We didn't succeed to delete the resource s3n://... Throwing now a runtime exception.
    WARN  [pool-2-thread-15] (Cascade.java:706) - [...] flow failed: ...
    java.lang.RuntimeException: Although we waited to delete the resource for s3n://... 0 iterations, it still exists - This must be an issue in the underlying storage system.
        at com.qubit.hive.tap.HiveSinkTap.deleteResource(HiveSinkTap.java:179)
        at com.qubit.hive.tap.HiveSinkTap.deleteResource(HiveSinkTap.java:40)
        at cascading.flow.BaseFlow.deleteSinksIfNotUpdate(BaseFlow.java:971)
        at cascading.flow.BaseFlow.prepare(BaseFlow.java:733)
        at cascading.cascade.Cascade$CascadeJob.call(Cascade.java:761)
        at cascading.cascade.Cascade$CascadeJob.call(Cascade.java:710)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)

    Read the article

  • Set the JAXB context factory initialization class to be used

    - by user1902288
    I have updated our projects (Java EE based, running on WebSphere 8.5) to use a new release of a company-internal framework (and EJB 3.x deployment descriptors rather than the 2.x ones). Since then my integration tests fail with the following exception: [java.lang.ClassNotFoundException: com.ibm.xml.xlxp2.jaxb.JAXBContextFactory]. I can build the application with the previous framework release and everything works fine. While debugging I noticed that within the ContextFinder (javax.xml.bind) there are two different behaviours:

    Previous version (everything works just fine): none of the different places brings up a factory class, so the default factory class gets loaded, which is com.sun.xml.internal.bind.v2.ContextFactory (defined as a String constant within the class).

    Upgraded version (ClassNotFound): there is a resource "META-INF/services/javax.xml.bind.JAXBContext" being loaded successfully, and the first line read makes the ContextFinder attempt to load "com.ibm.xml.xlxp2.jaxb.JAXBContextFactory", which causes the error.

    I now have two questions:
    1. What sort of resource is that? Inside our EAR there are two WARs, and neither of them contains a "services" folder in its META-INF directory.
    2. Where else could that value be coming from? A file diff showed me no new or changed properties files.

    Needless to say, I am going to read all about the JAXB configuration possibilities, but if you have first insights on what could have gone wrong, or can help me out with that resource (is it a real file I have to look for?), I'd appreciate it a lot. Many thanks!

    EDIT (according to comments/questions):

    Out of curiosity, does your framework include JAXB JARs? Did the old version of your framework include jaxb.properties?

    Indeed (I am a bit surprised) the framework has a customized eclipselink-2.4.1-.jar inside the EAR that includes both a JAXB implementation and a jaxb.properties file that shows the following entry in both versions (the one that finds the factory as well as the one that throws the exception): javax.xml.bind.context.factory=org.eclipse.persistence.jaxb.JAXBContextFactory. I think this has nothing to do with the current issue, since the jar stayed exactly the same in both EARs (the one that runs / the one with the exception).

    It's also not clear to me why the old version of the framework was ever selecting the com.sun implementation.

    There is a class javax.xml.bind.ContextFinder which is responsible for initializing the JAXBContextFactory. This class searches various places for the existence of a jaxb.properties file or a "javax.xml.bind.JAXBContext" resource. If none of those places indicates which context factory to use, a default factory is loaded, which is hardcoded in the class itself: private static final String PLATFORM_DEFAULT_FACTORY_CLASS = "com.sun.xml.internal.bind.v2.ContextFactory";

    Now back to my problem: building with the previous version of the framework (and EJB 2.x deployment descriptors), everything works fine. While debugging I can see that no configuration is found and therefore the above-mentioned default factory is loaded. Building with the new version of the framework (and EJB 3.x deployment descriptors so I can deploy), ONLY A TEST CASE fails, but the rest of the functionality works (e.g. I can send requests to our web service and they don't trigger any errors). While debugging I can see that a configuration is found. This resource is named "META-INF/services/javax.xml.bind.JAXBContext".
    Here are the most important lines showing how this resource leads to the attempt to load 'com.ibm.xml.xlxp2.jaxb.JAXBContextFactory', which then throws the ClassNotFoundException. This is simplified source of the mentioned javax.xml.bind.ContextFinder class:

    URL resourceURL = ClassLoader.getSystemResource("META-INF/services/javax.xml.bind.JAXBContext");
    BufferedReader r = new BufferedReader(new InputStreamReader(resourceURL.openStream(), "UTF-8"));
    String factoryClassName = r.readLine().trim();

    The variable factoryClassName now has the value 'com.ibm.xml.xlxp2.jaxb.JAXBContextFactory'. (The day I understand how to format source code on Stack Overflow will be my biggest step ahead... sorry for the formatting, after 20 minutes it still looks the same :()

    Because this has become a super large question I will also add a bounty :)
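    For illustration, here is a minimal diagnostic sketch (an assumed standalone helper of mine, not code from the project above) that lists every copy of that service resource visible to a class loader, together with the factory class each one names:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.util.Enumeration;

    public class JaxbContextServiceLocator {
        public static void main(String[] args) throws Exception {
            // Enumerate every copy of the service file visible to the context class loader.
            Enumeration<URL> urls = Thread.currentThread().getContextClassLoader()
                    .getResources("META-INF/services/javax.xml.bind.JAXBContext");
            while (urls.hasMoreElements()) {
                URL url = urls.nextElement();
                try (BufferedReader r = new BufferedReader(new InputStreamReader(url.openStream(), "UTF-8"))) {
                    // The first line of each file names the JAXBContextFactory implementation.
                    System.out.println(url + " -> " + r.readLine().trim());
                }
            }
        }
    }

    Running something like this inside the server (or the failing test) prints the URL of the JAR or directory that contributes each entry, which can help track down where a factory name such as the IBM one is coming from.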

    Read the article

  • Visual Studio compiles WPF application twice during build

    - by Brian Ensink
    I have a WPF app in VS2008 that compiles twice during the build. The two CSC command lines are similar but have some differences: the first CSC command line does not have any /resource options, while the second has two /resource options on the command line. The second CSC command line has these additional arguments: /resource:"obj\Debug AutoCAD\VisualApp.g.resources" /resource:"obj\Debug AutoCAD\CAP.Visual.Properties.Resources.resources"

    I hate to post such huge, ugly compiler output, but here are both command lines.

    2>c:\WINDOWS\Microsoft.NET\Framework\v3.5\Csc.exe /noconfig /nowarn:1701,1702 /platform:x86 /errorreport:prompt /warn:4 /define:DEBUG;TRACE /reference:..\BIN\RELEASE\FOO.Base.dll /reference:..\BIN\RELEASE\FOO.CAPArchiveHandler.dll /reference:..\BIN\RELEASE\FOO.CAPDOM.dll /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\PresentationCore.dll" /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\PresentationFramework.dll" /reference:"c:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5\System.Core.dll" /reference:"c:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5\System.Data.DataSetExtensions.dll" /reference:c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Data.dll /reference:c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.dll /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\System.Runtime.Serialization.dll" /reference:c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Xml.dll /reference:"c:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5\System.Xml.Linq.dll" /reference:"C:\Program Files\Telerik\RadControls for WPF Q1 2010\Binaries\WPF\Telerik.Windows.Controls.dll" /reference:"C:\Program Files\Telerik\RadControls for WPF Q1 2010\Binaries\WPF\Telerik.Windows.Controls.Docking.dll" /reference:"C:\Program Files\Telerik\RadControls for WPF Q1 2010\Binaries\WPF\Telerik.Windows.Controls.Navigation.dll" /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\UIAutomationProvider.dll" /reference:c:\project\FooStudio\BIN\DEBUGCAD\VS-3DEngine-Wrapper.dll /reference:c:\project\FooStudio\BIN\DEBUGCAD\VisualServiceClient.dll /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\WindowsBase.dll" /debug+ /debug:full /filealign:512 /out:"obj\Debug AutoCAD\VisualApp.exe" /target:winexe App.xaml.cs MainWindow.xaml.cs CameraAndLightingControl.xaml.cs CameraAndLightingViewModel.cs MainWindowViewModel.cs Properties\AssemblyInfo.cs Properties\Resources.Designer.cs Properties\Settings.Designer.cs ScenarioToolsWindow.xaml.cs SceneGraph.cs ScenePart.cs ToolWindow.xaml.cs "c:\project\FooStudio\VisualApp\obj\Debug AutoCAD\CameraAndLightingControl.g.cs" "c:\project\FooStudio\VisualApp\obj\Debug AutoCAD\MainWindow.g.cs" "c:\project\FooStudio\VisualApp\obj\Debug AutoCAD\ScenarioToolsWindow.g.cs" "c:\project\FooStudio\VisualApp\obj\Debug AutoCAD\ToolWindow.g.cs" "c:\project\FooStudio\VisualApp\obj\Debug AutoCAD\App.g.cs" "c:\project\FooStudio\VisualApp\obj\Debug AutoCAD\GeneratedInternalTypeHelper.g.cs"
    2>Done building project "0ye0i4wb.tmp_proj".
2>c:\WINDOWS\Microsoft.NET\Framework\v3.5\Csc.exe /noconfig /nowarn:1701,1702 /platform:x86 /errorreport:prompt /warn:4 /define:DEBUG;TRACE /reference:..\BIN\RELEASE\FOO.Base.dll /reference:..\BIN\RELEASE\FOO.CAPArchiveHandler.dll /reference:..\BIN\RELEASE\FOO.CAPDOM.dll /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\PresentationCore.dll" /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\PresentationFramework.dll" /reference:"c:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5\System.Core.dll" /reference:"c:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5\System.Data.DataSetExtensions.dll" /reference:c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Data.dll /reference:c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.dll /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\System.Runtime.Serialization.dll" /reference:c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Xml.dll /reference:"c:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5\System.Xml.Linq.dll" /reference:"C:\Program Files\Telerik\RadControls for WPF Q1 2010\Binaries\WPF\Telerik.Windows.Controls.dll" /reference:"C:\Program Files\Telerik\RadControls for WPF Q1 2010\Binaries\WPF\Telerik.Windows.Controls.Docking.dll" /reference:"C:\Program Files\Telerik\RadControls for WPF Q1 2010\Binaries\WPF\Telerik.Windows.Controls.Navigation.dll" /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\UIAutomationProvider.dll" /reference:c:\project\FooStudio\BIN\DEBUGCAD\VS-3DEngine-Wrapper.dll /reference:c:\project\FooStudio\BIN\DEBUGCAD\VisualServiceClient.dll /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\WindowsBase.dll" /debug+ /debug:full /filealign:512 /out:"obj\Debug AutoCAD\VisualApp.exe" /resource:"obj\Debug AutoCAD\VisualApp.g.resources" /resource:"obj\Debug AutoCAD\FOO.Visual.Properties.Resources.resources" /target:winexe App.xaml.cs MainWindow.xaml.cs CameraAndLightingControl.xaml.cs CameraAndLightingViewModel.cs MainWindowViewModel.cs Properties\AssemblyInfo.cs Properties\Resources.Designer.cs Properties\Settings.Designer.cs ScenarioToolsWindow.xaml.cs SceneGraph.cs ScenePart.cs ToolWindow.xaml.cs "c:\project\FooStudio\VisualApp\obj\Debug AutoCAD\CameraAndLightingControl.g.cs" "c:\project\FooStudio\VisualApp\obj\Debug AutoCAD\MainWindow.g.cs" "c:\project\FooStudio\VisualApp\obj\Debug AutoCAD\ScenarioToolsWindow.g.cs" "c:\project\FooStudio\VisualApp\obj\Debug AutoCAD\ToolWindow.g.cs" "c:\project\FooStudio\VisualApp\obj\Debug AutoCAD\App.g.cs" "c:\project\FooStudio\VisualApp\obj\Debug AutoCAD\GeneratedInternalTypeHelper.g.cs" Any idea what could possibly cause this? I think this is causing a problem I posted about earlier today.

    Read the article

< Previous Page | 39 40 41 42 43 44 45 46 47 48 49 50  | Next Page >