Search Results

Search found 14875 results on 595 pages for 'resource controller'.


  • DisplayPort on X230 not working

    - by Aaron
    Having problems with my new Lenovo X230: using dual monitors, only VGA registers anything. It has the Intel graphics controller: 00:02.0 VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09). To get it to even boot, I had to add the nomodeset parameter to the kernel; otherwise X will not even start. I've upgraded my kernel to the latest 3.6 with no difference. The Displays section reads the monitor as the only display and labels it as laptop. Can't seem to get it to work! Any help would be greatly appreciated.

    Read the article

  • video/audio output via HDMI Ubuntu 12.04

    - by lostNfound
    I've been out of the Ubuntu loop for quite a while and now have a completely new laptop. I just installed Ubuntu 12.04 64-bit and would like to output my video and audio via HDMI to my television. The following is the lspci | grep VGA output for my computer; please tell me if any additional information is needed (and preferably how to obtain it) and I will be more than happy to oblige. Thank you in advance for your time and assistance in this matter. 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) 01:00.0 VGA compatible controller: NVIDIA Corporation GF108 [GeForce GT 540M] (rev a1) Edit: every time I restart my computer, after a short moment, I get an error message stating something along the lines of "Sorry, Jockey needed to close unexpectedly." After researching, I discovered Jockey is the name of the "Additional Drivers" tool; after the initial installation, Ubuntu informed me of proprietary drivers being available. Those are no longer offered, and this error continues to occur.

    Read the article

  • How to make creating viewmodels at runtime less painful

    - by Mr Happy
    I apologize for the long question, it reads a bit like a rant, but I promise it's not! I've summarized my question(s) below. In the MVC world, things are straightforward. The Model has state, the View shows the Model, and the Controller does stuff to/with the Model (basically); a controller has no state. To do stuff the Controller has some dependencies on web services, repositories, the lot. When you instantiate a controller you care about supplying those dependencies, nothing else. When you execute an action (method on the Controller), you use those dependencies to retrieve or update the Model or call some other domain service. If there's any context, say some user wants to see the details of a particular item, you pass the Id of that item as a parameter to the Action. Nowhere in the Controller is there any reference to any state. So far so good. Enter MVVM. I love WPF, I love data binding. I love frameworks that make data binding to ViewModels even easier (using Caliburn Micro a.t.m.). I feel things are less straightforward in this world though. Let's do the exercise again: the Model has state, the View shows the ViewModel, and the ViewModel does stuff to/with the Model (basically), but a ViewModel does have state! (To clarify: maybe it delegates all the properties to one or more Models, but that means it must have a reference to the model one way or another, which is state in itself.) To do stuff the ViewModel has some dependencies on web services, repositories, the lot. When you instantiate a ViewModel you care about supplying those dependencies, but also the state. And this, ladies and gentlemen, annoys me to no end. Whenever you need to instantiate a ProductDetailsViewModel from the ProductSearchViewModel (from which you called the ProductSearchWebService which in turn returned IEnumerable<ProductDTO>, everybody still with me?), you can do one of these things: call new ProductDetailsViewModel(productDTO, _shoppingCartWebService /* dependency */); this is bad: imagine 3 more dependencies, and the ProductSearchViewModel needs to take on those dependencies as well. Also, changing the constructor is painful. Call _myInjectedProductDetailsViewModelFactory.Create().Initialize(productDTO); the factory is just a Func, they are easily generated by most IoC frameworks. I think this is bad because Init methods are a leaky abstraction. You also can't use the readonly keyword for fields that are set in the Init method. I'm sure there are a few more reasons. Call _myInjectedProductDetailsViewModelAbstractFactory.Create(productDTO); so... this is the pattern (abstract factory) that is usually recommended for this type of problem (a rough sketch of its shape follows below). I thought it was genius, since it satisfies my craving for static typing, until I actually started using it. The amount of boilerplate code is, I think, too much (you know, apart from the ridiculous variable names I get to use). For each ViewModel that needs runtime parameters you get two extra files (factory interface and implementation), and you need to type the non-runtime dependencies like 4 extra times. And each time the dependencies change, you get to change the factory as well. It feels like I don't even use a DI container anymore. (I think Castle Windsor has some kind of solution for this [with its own drawbacks, correct me if I'm wrong].) Do something with anonymous types or a dictionary. I like my static typing. So, yeah. Mixing state and behavior in this way creates a problem which doesn't exist at all in MVC.
    And I feel like there currently isn't a really adequate solution for this problem. Now I'd like to observe some things: People actually use MVVM, so they either don't care about all of the above, or they have some brilliant other solution. I haven't found an in-depth example of MVVM with WPF. For example, the NDDD-sample project immensely helped me understand some DDD concepts; I'd really like it if someone could point me in the direction of something similar for MVVM/WPF. Maybe I'm doing MVVM all wrong and I should turn my design upside down. Maybe I shouldn't have this problem at all. Well, I know other people have asked the same question, so I think I'm not the only one. To summarize: Am I correct to conclude that having the ViewModel be an integration point for both state and behavior is the reason for some difficulties with the MVVM pattern as a whole? Is using the abstract factory pattern the only/best way to instantiate a ViewModel in a statically typed way? Is there something like an in-depth reference implementation available? Is having a lot of ViewModels with both state/behavior a design smell?
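    For illustration, a rough, minimal sketch of the abstract-factory shape described above. It is written in Java rather than C#, and all type names (ProductDTO, ShoppingCartService, ProductDetailsViewModel) are made-up placeholders, not taken from the original post.

      // Stub types standing in for the DTO, the injected dependency and the ViewModel.
      class ProductDTO {}
      class ShoppingCartService {}

      class ProductDetailsViewModel {
          private final ProductDTO product;        // runtime state
          private final ShoppingCartService cart;  // injected dependency
          ProductDetailsViewModel(ProductDTO product, ShoppingCartService cart) {
              this.product = product;
              this.cart = cart;
          }
      }

      // Extra file #1: the factory interface for one ViewModel.
      interface ProductDetailsViewModelFactory {
          ProductDetailsViewModel create(ProductDTO product);
      }

      // Extra file #2: the implementation, which repeats the non-runtime dependencies
      // and must change whenever the ViewModel's constructor changes.
      class DefaultProductDetailsViewModelFactory implements ProductDetailsViewModelFactory {
          private final ShoppingCartService cart;
          DefaultProductDetailsViewModelFactory(ShoppingCartService cart) {
              this.cart = cart;
          }
          @Override
          public ProductDetailsViewModel create(ProductDTO product) {
              // runtime state (the DTO) and injected dependencies meet here
              return new ProductDetailsViewModel(product, cart);
          }
      }

    The calling ViewModel then depends only on the factory interface, which preserves static typing but also produces exactly the boilerplate the question complains about.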

    Read the article

  • Wifi not working after upgrading from 12.04 to 13.10

    - by Gary
    I went through loads of posts looking for an answer to my WiFi problems but can't seem to fix it. I had Ubuntu 12.04 installed and WiFi worked fine with no issues whatsoever, but since I upgraded to 13.10 it has stopped working. It shows the available networks but I can't connect to any of them; it just does that animation thing then stops working. Update: here's the link to pastebin: http://pastebin.com/GyMMEYhv And here's what WiFi card and driver I have (lspci): 01:00.2 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 0a) Subsystem: Hewlett-Packard Company Device [103c:18de] Kernel driver in use: r8169 02:00.0 Network controller [0280]: Ralink corp. RT3290 Wireless 802.11n 1T/1R PCIe [1814:3290] Subsystem: Hewlett-Packard Company Device [103c:18ec] Kernel driver in use: rt2800pci

    Read the article

  • PowerXpress error with the Catalyst driver. How can I fix it?

    - by J03Bukowski
    I have installed Ubuntu 11.10 64-bit on my HP dv6-3150el. My notebook has two graphics cards: lspci | grep VGA 00:02.0 VGA compatible controller: Intel Corporation Core Processor Integrated Graphics Controller (rev 02) 01:00.0 VGA compatible controller: ATI Technologies Inc Madison [AMD Radeon HD 5000M Series] I tried to install the proprietary graphics driver ''fglrx'' available in "Additional Drivers", which does not give me 3D graphics acceleration (and I can't install the post-release versions). I then tried to download and install it from the website (I tried Catalyst versions 11.8 and 11.12). The installation goes perfectly (I followed this guide and others), except that when I configure the Xorg.conf file: sudo aticonfig --initial PowerXpress error: Cannot stat '/usr/lib64/fglrx': No such file or directory Failed to initialize libglx for discrete GPU

    Read the article

  • Why doesn't Unity 3D work on my HD3000 integrated graphics?

    - by Zatsugami
    So I got my new laptop recently: an HP Envy 15 with a switchable graphics card. I'm using both Windows and Ubuntu, but in Ubuntu I need just the Intel HD3000 for better battery life. I've installed fglrx-updates and fglrx-amdcccle-updates, and the drivers seem to work for my ATI card. The problem is, the Intel HD3000 does not support Unity 3D. Why? My older Intel GMA 4500 handled it. lspci -nn | grep VGA 00:02.0 VGA compatible controller [0300]: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller [8086:0126] (rev 09) 01:00.0 VGA compatible controller [0300]: Advanced Micro Devices [AMD] nee ATI Whistler [AMD Radeon HD 6600M Series] [1002:6741] (rev ff)

    Read the article

  • Setting Up Audio on a Server Install

    - by tdcrenshaw
    I'm running a clean install of 10.10 Server edition and have alsa-base, alsa-tools, alsa-utils, alsaplayer, and alsa-firmware-loader installed. At one point I installed PulseAudio, but I have since removed it. I've tried the following: lspci | grep audio 00:1f.5 Multimedia audio controller: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) AC'97 Audio Controller (rev 01) 01:06.0 Multimedia audio controller: Creative Labs [SB Live! Value] EMU10k1X aplay -l aplay: device_list:235: no soundcards found... alsamixer cannot open mixer: No such file or directory When I search for modules with find /lib/modules/`uname -r` | grep snd I do get a list of modules. I'm not very experienced with ALSA setup, so I'm not sure where to go from here.

    Read the article

  • Ubuntu 13.10 secondary monitor doesn't redraw properly

    - by Daniel
    Hi, I've just installed Ubuntu 13.10 on a Lenovo w350 and am having some issues with the graphics driver. The standard nouveau drivers work great on the laptop display, but when I attach an external display it goes bad on the external monitor; it seems it's not redrawing properly (photo attached). I've tried other drivers but none of them detect the external monitor when connected. Any ideas? 00:02.0 VGA compatible controller: Intel Corporation 3rd Gen Core processor Graphics Controller (rev 09) 01:00.0 VGA compatible controller: NVIDIA Corporation GK107GLM [Quadro K1000M] (rev a1) Screenshot of the affected monitor: http://i.imgur.com/UZWMpQO.jpg UPDATE 1: I have noticed that if I leave the System Monitor open the display improves. Weird. UPDATE 2: If I leave a video playing, the lag/flicker issues stop.

    Read the article

  • Fan working non-stop on a Dell Inspiron 5110

    - by cankemik
    First of all, I'm new to Ubuntu; I had only tried 11.10 before 12.04. Since then my notebook's (Dell Inspiron 5110) fan has been running non-stop, and the battery only lasts 2 hours. So I did some research. Some said it's about the graphics card driver. I've tried so many things, so many commands, but I get no result. I must say that I've tried Ironhide and Bumblebee; neither of them worked. 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) 01:00.0 VGA compatible controller: NVIDIA Corporation GF108 [GeForce GT 540M] (rev ff)

    Read the article

  • GT 555M instead of GT 540M

    - by gangov
    I'm going to be really brief. My Acer Aspire 5755G laptop has an Nvidia GeForce GT 540M, but when I run lspci | grep VGA it shows me the following: gangov@gangov-Aspire-5755G:~$ lspci | grep VGA 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) 01:00.0 VGA compatible controller: nVidia Corporation GF106 [GeForce GT 555M] (rev a1) I'm really not familiar with the Additional Drivers tool in Ubuntu. Can somebody tell me what I should do to use my VGA properly? Thank you in advance.

    Read the article

  • disable intel gpu in ubuntu 12.04

    - by small_potato
    I am wondering if there is any way to disable the Intel GPU on Ubuntu 12.04. I want to be able to set up dual monitors using nvidia-settings. It seems the Intel GPU is used for the display, as suggested by sudo lshw -c display; the output is *-display description: VGA compatible controller product: NVIDIA Corporation vendor: NVIDIA Corporation physical id: 0 bus info: pci@0000:01:00.0 version: a1 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress vga_controller bus_master cap_list rom configuration: driver=nvidia latency=0 resources: irq:16 memory:c0000000-c0ffffff memory:90000000-9fffffff memory:a0000000-a1ffffff ioport:4000(size=128) memory:a2000000-a207ffff *-display description: VGA compatible controller product: Haswell Integrated Graphics Controller vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 06 width: 64 bits clock: 33MHz capabilities: msi pm vga_controller bus_master cap_list rom configuration: driver=i915 latency=0 resources: irq:47 memory:c2000000-c23fffff memory:b0000000-bfffffff ioport:5000(size=64) I have a Lenovo Y410 with a GT 750M. It seems there is no way to turn off the Intel GPU in the BIOS either. Help please. Thanks.

    Read the article

  • Dual monitor not working after an update

    - by Nimonika
    I did a package manager update yesterday and it turns out that my dual monitor setup has stopped working. I have poor vision so I really need to connect to a much bigger screen, but since yesterday, when I connect the screen to my laptop, the screen does not automatically reset itself to the laptop display. Even after lots of trial and error with the display settings, I am getting different displays on the laptop and external screen, and right now only the big screen is active while the laptop has blanked out. Please can someone help me set up my dual screens properly for 11.10. lspci -v | grep -i vga output: 00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07) (prog-if 00 [VGA controller])

    Read the article

  • Why can’t two programs access my webcam simultaneously?

    - by qdii
    I first launch cheese and my webcam turns on. I then run vlc to grab the output of /dev/video0, but it fails with: [0x7f3ea40012e8] v4l2 demux error: cannot set input 0: Device or resource busy [0x7f3ea40012e8] v4l2 demux error: cannot set input 0: Device or resource busy [0x7f3ea4002168] v4l2 access error: cannot set input 0: Device or resource busy [0x7f3ea4002168] v4l2 access error: cannot set input 0: Device or resource busy [0x7f3eb4000b78] main input error: open of `v4l2:///dev/video0' failed Whatever pair of video programs I run (skype, cheese, vlc, etc.), the result is always the same: the second program can no longer use the webcam once the first one has already grabbed the output. However, I find it curious, as video4linux states: In general, V4L2 devices can be opened more than once. When this is supported by the driver, users can for example start a "panel" application to change controls like brightness or audio volume, while another application captures video and audio. My webcam is listed in lsusb as 058f:a014 Alcor Micro Corp. Asus Integrated Webcam, but I don’t even know what the underlying driver is, so I can’t check whether my problem is driver-related or not. Any input would be more than welcome!

    Read the article

  • How to prevent eMule from jamming up the router?

    - by the searcher
    Usually, when eMule is started, after some time I find that the router is jammed: the internet connection on that computer stops working, or it seems to be waiting for some port to be freed up before it can connect to a website. This sometimes affects even other PCs or Macs using the same router. Is there a way to prevent eMule from hogging too many resources or ports? I see that under Options -> Connection there are "Max Sources/File" and "Connection Limits - Maximum Connections". Right now I have set them to really low numbers: the first to 120 and the second to 200, but what are good numbers to fill in there so that it can work well without jamming up the router or using up the network resources of the PC or Mac? Or could it be that the number of files that are "Waiting" is too high and uses up too many resources? (If so, can eMule automatically limit the number to 10 or 20 to prevent using too many resources?) (This happened before on a Linksys router, a Netgear router, and the AT&T U-verse router.)

    Read the article

  • empty file from tomcat https redirect?

    - by amertune
    I am using Tomcat 6.0.20 with JDK 1.6.0_18 on 64-bit Linux (Tomcat downloaded from tomcat.apache.org, not installed from repositories). I have iptables redirects from port 80 to 8080 and from 443 to 8443. In server.xml, the connector for port 8080 redirects to 443, and the 8443 connector has proxyPort="443". In conf/web.xml, I have added this bit at the end of the file (but still inside the <web-app></web-app> tags): <security-constraint> <web-resource-collection> <web-resource-name>Protected Context</web-resource-name> <url-pattern>/*</url-pattern> </web-resource-collection> <user-data-constraint> <transport-guarantee>CONFIDENTIAL</transport-guarantee> </user-data-constraint> </security-constraint> I also have two contexts, ROOT and mywebapp. If I go to http://myurl.com, then I get redirected to https://myurl.com. If I go to http://myurl.com/mywebapp/, then I get redirected to https://myurl.com/mywebapp/. The problem I'm having is when I go to http://myurl.com/mywebapp (no trailing slash). When I do this I get a prompt to download an empty file that has an empty name. Going to https://myurl.com/mywebapp works. I would think that a user typing myurl.com/mywebapp is far from rare. Is there something I'm missing?

    Read the article

  • Oracle Enterprise Manager Cloud Control 12c: Contributing to emerging Cloud standards

    - by Anand Akela
    Contributed by Tony Di Cenzo, Director for Standards Strategy and Architecture, and Mark Carlson, Principal Cloud Architect, for Oracle's Systems Management and Storage Products Groups. As one would expect of an industry leader, Oracle's participation in industry standards bodies is extensive. We participate in dozens of organizations that produce open standards which apply to our products, and our commitment to the success of these organizations is manifest in several ways: we support them financially through our memberships; our senior engineers are active participants, often serving in leadership positions on boards, technical working groups and committees; and when it makes good business sense we contribute our intellectual property. We believe supporting the development of open standards is fundamental to Oracle meeting customer demands for product choice, seamless interoperability, and lowering the cost of ownership. Nowhere is this truer than in the area of cloud standards, and for the most recent release of our flagship management product, Oracle Enterprise Manager Cloud Control 12c (EM Cloud Control 12c). There is a fundamental rule that standards follow architecture. This was true of distributed computing, it was true of service-oriented architecture (SOA), and it's true of cloud. If you are familiar with Enterprise Manager it is likely to be no surprise that EM Cloud Control 12c is a source of technology that can be considered for adoption within cloud management standards. The reason, quite simply, is that the Oracle integrated stack architecture aligns with the cloud architecture models being adopted by the industry, and EM Cloud Control 12c has been developed to manage this architecture. EM Cloud Control 12c has facilities for managing the various underlying capabilities of the integrated stack in IaaS, PaaS, and SaaS clouds, and enables essential characteristics such as on-demand self-service provisioning, centralized policy-based resource management, integrated chargeback and capacity planning, and complete visibility of the physical and virtual environment from applications to disk. Our most recent contribution in support of cloud management standards to come out of the EM Cloud Control 12c work was the Oracle Cloud Elemental Resource Model API. Oracle contributed the Elemental Resource Model API to the Distributed Management Task Force (DMTF) in 2011, where it was assigned to DMTF's Cloud Management Working Group (CMWG). The CMWG is considering the Oracle specification and those of several other vendors in their effort to produce a best practices specification for managing IaaS clouds.
    DMTF's Cloud Infrastructure Management Interface specification, called CIMI for short, is currently out for public review and expected to be released by DMTF later this year. We are proud to be playing an important role in the development of what is expected to become a major cloud standard. You can find more information on DMTF CIMI at http://dmtf.org/standards/cloud. You can find the work-in-progress release of CIMI at http://dmtf.org/content/cimi-work-progress-specifications-now-available-public-comment. The Oracle Cloud API specification is available on the Oracle Technology Network. You can find more information about the Oracle Cloud Elemental Resource Model API on the Oracle Technology Network (OTN), including a webcast featuring the API engineering manager Jack Yu (see TechCast Live: Inside the Oracle Cloud Resource Model API). If you have not seen this video we recommend you take the time to view it. If you have a question about the Oracle Cloud API or want to learn more about Oracle's participation in cloud management standards efforts, drop us a line. We'd love to hear from you. The Enterprise Manager Standards Blogs are written by Tony Di Cenzo, Director for Standards Strategy and Architecture, and Mark Carlson, Principal Cloud Architect, for Oracle's Systems Management and Storage Products Groups. They can be reached at Tony.DiCenzo at Oracle.com and Mark.Carlson at Oracle.com respectively. Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • FairScheduling Conventions in Hadoop

    - by dan.mcclary
    While scheduling and resource allocation control has been present in Hadoop since 0.20, a lot of people haven't discovered or utilized it in their initial investigations of the Hadoop ecosystem. We could chalk this up to many things: organizations are still determining what their dataflow and analysis workloads will comprise; small deployments under test aren't likely to show the signs of strain that would send someone looking for resource allocation options; and the default scheduling options -- the FairScheduler and the CapacityScheduler -- are not placed in the most prominent position within the Hadoop documentation. However, for production deployments, it's wise to start with at least the foundations of scheduling in place so that you can tune the cluster as workloads emerge. To do that, we have to ask ourselves something about what the off-the-rack scheduling options are. We have some choices: the FairScheduler, which will work to ensure resource allocations are enforced on a per-job basis; the CapacityScheduler, which will ensure resource allocations are enforced on a per-queue basis; or writing your own implementation of the abstract class org.apache.hadoop.mapred.TaskScheduler, which is an option but usually overkill. If you're going to have several concurrent users and leverage the more interactive aspects of the Hadoop environment (e.g. Pig and Hive scripting), the FairScheduler is definitely the way to go. In particular, we can do user-specific pools so that default users get their fair share, and specific users are given the resources their workloads require. To enable fair scheduling, we're going to need to do a couple of things. First, we need to tell the JobTracker that we want to use fair scheduling and where we're going to be defining our allocations. We do this by adding the following to the mapred-site.xml file in HADOOP_HOME/conf: <property> <name>mapred.jobtracker.taskScheduler</name> <value>org.apache.hadoop.mapred.FairScheduler</value> </property> <property> <name>mapred.fairscheduler.allocation.file</name> <value>/path/to/allocations.xml</value> </property> <property> <name>mapred.fairscheduler.poolnameproperty</name> <value>pool.name</value> </property> <property> <name>pool.name</name> <value>${user.name}</value> </property> What we've done here is simply tell the JobTracker that we'd like task scheduling to use the FairScheduler class rather than a single FIFO queue. Moreover, we're going to be defining our resource pools and allocations in a file called allocations.xml. For reference, the allocation file is read every 15s or so, which allows for tuning allocations without having to take down the JobTracker. Our allocation file is now going to look a little like this: <?xml version="1.0"?> <allocations> <pool name="dan"> <minMaps>5</minMaps> <minReduces>5</minReduces> <maxMaps>25</maxMaps> <maxReduces>25</maxReduces> <minSharePreemptionTimeout>300</minSharePreemptionTimeout> </pool> <user name="dan"> <maxRunningJobs>6</maxRunningJobs> </user> <userMaxJobsDefault>3</userMaxJobsDefault> <fairSharePreemptionTimeout>600</fairSharePreemptionTimeout> </allocations> In this case, I've explicitly set my username to have upper and lower bounds on the maps and reduces, and allotted myself double the number of running jobs. Now, if I run Hive or Pig jobs from either the console or via the Hue web interface, I'll be treated "fairly" by the JobTracker (a small example of steering an individual job into a pool follows below).
There's a lot more tweaking that can be done to the allocations file, so it's best to dig down into the description and start trying out allocations that might fit your workload.
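    As a small illustration of how a job ends up in a pool under the configuration above, the property named by mapred.fairscheduler.poolnameproperty (here pool.name) can also be set explicitly when a job is submitted. A minimal sketch using the old mapred API follows; the class name, job name, pool name and the omitted input/output setup are placeholders, not from the original post.

      import org.apache.hadoop.mapred.JobClient;
      import org.apache.hadoop.mapred.JobConf;

      public class PoolAssignmentExample {
          public static void main(String[] args) throws Exception {
              JobConf conf = new JobConf(PoolAssignmentExample.class);
              conf.setJobName("nightly-report");

              // The FairScheduler reads the property named by
              // mapred.fairscheduler.poolnameproperty ("pool.name" above) to pick the pool.
              // If it is left unset, it falls back to ${user.name} as configured.
              conf.set("pool.name", "dan");

              // ... configure mapper, reducer, input and output paths as usual ...
              JobClient.runJob(conf);
          }
      }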

    Read the article

  • Making your WCF Web APIs speak multiple languages

    - by cibrax
    One of the key aspects of how the web works today is content negotiation. The idea of content negotiation is based on the fact that a single resource can have multiple representations, so user agents (or clients) and servers can work together to choose one of them. The HTTP specification defines several “Accept” headers that a client can use to negotiate content with a server, and among all those, there is one for restricting the set of natural languages that are preferred as a response to a request, “Accept-Language”. For example, a client can specify “es” in this header to indicate that it prefers to receive the content in Spanish, or “en” for English. However, there are certain scenarios where the “Accept-Language” header is just not enough, and you might want to have a way to pass the “accepted” language as part of the resource URL as an extension. For example, “http://localhost/ProductCatalog/Products/1.es” returns all the descriptions for the product with id “1” in Spanish. This is useful for scenarios in which you want to embed the link somewhere, such as a document, an email or a page. Supporting both scenarios, the header and the URL extension, is really simple in the new WCF programming model. You only need to provide a processor implementation for either of them. Let’s say I have a resource implementation as part of a product catalog I want to expose with the WCF Web APIs. [ServiceContract][Export]public class ProductResource{ IProductRepository repository;  [ImportingConstructor] public ProductResource(IProductRepository repository) { this.repository = repository; }  [WebGet(UriTemplate = "{id}")] public Product Get(string id, HttpResponseMessage response) { var product = repository.GetById(int.Parse(id)); if (product == null) { response.StatusCode = HttpStatusCode.NotFound; response.Content = new StringContent(Messages.OrderNotFound); }  return product; }} The Get method implementation in this resource assumes the desired culture will be attached to the current thread (Thread.CurrentThread.Culture). Another option is to pass the desired culture as an additional argument in the method, so my processor implementation will handle both options. This method is also using an auto-generated class for handling string resources, Messages, which is available in the different cultures that the service implementation supports. For example, Messages.resx contains “OrderNotFound”: “Order Not Found” Messages.es.resx contains “OrderNotFound”: “No se encontro orden” The processor implementation below tackles the first scenario, in which the desired language is passed as part of the “Accept-Language” header.
    public class CultureProcessor : Processor<HttpRequestMessage, CultureInfo>{ string defaultLanguage = null;  public CultureProcessor(string defaultLanguage = "en") { this.defaultLanguage = defaultLanguage; this.InArguments[0].Name = HttpPipelineFormatter.ArgumentHttpRequestMessage; this.OutArguments[0].Name = "culture"; }  public override ProcessorResult<CultureInfo> OnExecute(HttpRequestMessage request) { CultureInfo culture = null; if (request.Headers.AcceptLanguage.Count > 0) { var language = request.Headers.AcceptLanguage.First().Value; culture = new CultureInfo(language); } else { culture = new CultureInfo(defaultLanguage); }  Thread.CurrentThread.CurrentCulture = culture; Messages.Culture = culture;  return new ProcessorResult<CultureInfo> { Output = culture }; }}   As you can see, the processor initializes a new CultureInfo instance with the value provided in the “Accept-Language” header, and sets that instance on the current thread and on the auto-generated resource class with all the messages. In addition, the CultureInfo instance is returned as an output argument called “culture”, making it possible to receive that argument in any method implementation. The following code shows the implementation of the processor for handling languages as URL extensions.   public class CultureExtensionProcessor : Processor<HttpRequestMessage, Uri>{ public CultureExtensionProcessor() { this.OutArguments[0].Name = HttpPipelineFormatter.ArgumentUri; }  public override ProcessorResult<Uri> OnExecute(HttpRequestMessage httpRequestMessage) { var requestUri = httpRequestMessage.RequestUri.OriginalString;  var extensionPosition = requestUri.LastIndexOf(".");  if (extensionPosition > -1) { var extension = requestUri.Substring(extensionPosition + 1);  var query = httpRequestMessage.RequestUri.Query;  requestUri = string.Format("{0}?{1}", requestUri.Substring(0, extensionPosition), query);  var uri = new Uri(requestUri);  httpRequestMessage.Headers.AcceptLanguage.Clear();  httpRequestMessage.Headers.AcceptLanguage.Add(new StringWithQualityHeaderValue(extension));  var result = new ProcessorResult<Uri>();  result.Output = uri;  return result; }  return new ProcessorResult<Uri>(); }} The last step is to inject both processors as part of the service configuration, as shown below. public void RegisterRequestProcessorsForOperation(HttpOperationDescription operation, IList<Processor> processors, MediaTypeProcessorMode mode){ processors.Insert(0, new CultureExtensionProcessor()); processors.Add(new CultureProcessor());} Once you have configured the two processors in the pipeline, your service will start speaking different languages :). Note: URL extensions don’t seem to be working in the current bits when you use them in a base address. As far as I could see, ASP.NET intercepts the request first and tries to route it to a registered ASP.NET Http Handler with that extension. For example, “http://localhost/ProductCatalog/products.es” does not work, but “http://localhost/ProductCatalog/products/1.es” does.

    Read the article

  • How to Secure a Data Role by Multiple Business Units

    - by Elie Wazen
    In this post we will see how a Role can be data secured by multiple Business Units (BUs). Separate Data Roles are generally created for each BU if a corresponding data template generates roles on the basis of the BU dimension. The advantage of creating a policy with a rule that includes multiple BUs is that while mapping these roles in HCM Role Provisioning Rules, fewer entries need to be made. This could facilitate maintenance for enterprises with a large number of Business Units. Note: The example below applies as well if the securing entity is Inventory Organization. Let us take for example the case of a user provisioned with the "Accounts Payable Manager - Vision Operations" Data Role in Fusion Applications. This user will be able to access Invoices in Vision Operations but will not be able to see Invoices in Vision Germany. Figure 1. A User with a Data Role restricting them to Data from BU: Vision Operations With the role granted above, this is what the user will see when they attempt to select Business Units while searching for AP Invoices. Figure 2. The List of Values of Business Units is limited to a single one. This is the effect of the Data Role granted to that user as can be seen in Figure 1. In order to create a data role that secures by multiple BUs, we need to start by creating a condition that groups those Business Units we want to include in that data role. This is accomplished by creating a new condition against the BU View. That condition will later be used to create a data policy for our newly created Role. The BU View is a database resource and is accessed from APM as seen in the search below. Figure 3. Viewing a Database Resource in APM The next step is to create a new condition, in which we define a SQL predicate that includes 2 BUs (the ids below refer to Vision Operations and Vision Germany). At this point we have simply created a standalone condition. We have not used this condition yet, and security is therefore not affected. Figure 4. Custom Role that inherits the Purchase Order Overview Duty We are now ready to create our Data Policy. In APM, we search for our newly created Role and navigate to “Find Global Policies”. We query the Role we want to secure and navigate to view its global policies. Figure 5. The Job Role we plan on securing We can see that the role was not defined with a Data Policy, so we will create one that uses the condition we created earlier. Figure 6. Creating a New Data Policy In the General Information tab, we have to specify the DB Resource that the Security Policy applies to: in our case this is the BU View. Figure 7. Data Policy Definition - Selection of the DB Resource we will secure by In the Rules tab, we make the rule applicable to multiple values of the DB Resource we selected in the previous tab. This is where we associate the condition we created against the BU View to this data policy by entering the Condition name in the Condition field. Figure 8. Data Policy Rule The last step of defining the Data Policy consists of explicitly selecting the Actions that are governed by this Data Policy. In this case, for example, we select the Actions displayed below in the right pane. Once the record is saved, we are ready to use our newly secured Data Role. Figure 9. Data Policy Actions We can now see a new Data Policy associated with our Role. Figure 10. Role is now secured by a Data Policy We now assign that new Role to the User.
Of course this does not have to be done in OIM and can be done using a Provisioning Rule in HCM. Figure 11. Role assigned to the User who previously was granted the Vision Ops secured role. Once that user accesses the Invoices Workarea this is what they see: In the image below the LOV of Business Unit returns the two values defined in our data policy namely: Vision Operations and Vision Germany Figure 12. The List Of Values of Business Units now includes the two we included in our data policy. This is the effect of the data role granted to that user as can be seen in Figure 11

    Read the article

  • Best Practices - Dynamic Reconfiguration

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (formerly named Logical Domains) Overview of dynamic Reconfiguration Oracle VM Server for SPARC supports Dynamic Reconfiguration (DR), making it possible to add or remove resources to or from a domain (virtual machine) while it is running. This is extremely useful because resources can be shifted to or from virtual machines in response to load conditions without having to reboot or interrupt running applications. For example, if an application requires more CPU capacity, you can add CPUs to improve performance, and remove them when they are no longer needed. You can use even use Dynamic Resource Management (DRM) policies that automatically add and remove CPUs to domains based on load. How it works (in broad general terms) Dynamic Reconfiguration is done in coordination with Solaris, which recognises a hypervisor request to change its virtual machine configuration and responds appropriately. In essence, Solaris receives a message saying "you now have 16 more CPUs numbered 16 to 31" or "8GB more RAM starting at address X" or "here's a new network or disk device - have fun with it". These actions take very little time. Solaris then can start using the new resource. In the case of added CPUs, that means dispatching processes and potentially binding interrupts to the new CPUs. For memory, Solaris adds the new memory pages to its "free" list and starts using them. Comparable actions occur with network and disk devices: they are recognised by Solaris and then used. Removing is the reverse process: after receiving the DR message to free specific CPUs, Solaris unbinds interrupts assigned to the CPUs and stops dispatching process threads. That takes very little time. primary # ldm list NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME primary active -n-cv- SP 16 4G 1.0% 6d 22h 29m ldom1 active -n---- 5000 16 8G 0.9% 6h 59m primary # ldm set-core 5 ldom1 primary # ldm list NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME primary active -n-cv- SP 16 4G 0.2% 6d 22h 29m ldom1 active -n---- 5000 40 8G 0.1% 6h 59m primary # ldm set-core 2 ldom1 primary # ldm list NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME primary active -n-cv- SP 16 4G 1.0% 6d 22h 29m ldom1 active -n---- 5000 16 8G 0.9% 6h 59m Memory pages are vacated by copying their contents to other memory locations and wiping them clean. Solaris may have to swap memory contents to disk if the remaining RAM isn't enough to hold all the contents. For this reason, deallocating memory can take longer on a loaded system. Even on a lightly loaded system it took several 7 or 8 seconds to switch the domain below between 8GB and 24GB of RAM. primary # ldm set-mem 24g ldom1 primary # ldm list NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME primary active -n-cv- SP 16 4G 0.1% 6d 22h 36m ldom1 active -n---- 5000 16 24G 0.2% 7h 6m primary # ldm set-mem 8g ldom1 primary # ldm list NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME primary active -n-cv- SP 16 4G 0.7% 6d 22h 37m ldom1 active -n---- 5000 16 8G 0.3% 7h 7m What if the device is in use? (this is the anecdote that inspired this blog post) If CPU or memory is being removed, releasing it pretty straightforward, using the method described above. The resources are released, and Solaris continues with less capacity. It's not as simple with a network or I/O device: you don't want to yank a device out from underneath an application that might be using it. 
In the following example, I've added a virtual network device to ldom1 and want to take it away, even though it's been plumbed. primary # ldm rm-vnet vnet19 ldom1 Guest LDom returned the following reason for failing the operation: Resource Information ---------------------------------------------------------- ----------------------- /devices/virtual-devices@100/channel-devices@200/network@1 Network interface net1 VIO operation failed because device is being used in LDom ldom1 Failed to remove VNET instance That's what I call a helpful error message - telling me exactly what was wrong. In this case the problem is easily solved. I know this NIC is seen in the guest as net1 so: ldom1 # ifconfig net1 down unplumb Now I can dispose of it, and even the virtual switch I had created for it: primary # ldm rm-vnet vnet19 ldom1 primary # ldm rm-vsw primary-vsw9 If I had to take away the device disruptively, I could have used ldm rm-vnet -f but that could disrupt whoever was using it. It's better if that can be avoided. Summary Oracle VM Server for SPARC provides dynamic reconfiguration, which lets you modify a guest domain's CPU, memory and I/O configuration on the fly without reboot. You can add and remove resources as needed, and even automate this for CPUs by setting up resource policies. Taking things away can be more complicated than giving, especially for devices like disks and networks that may contain application and system state or be involved in a transaction. LDoms and Solaris cooperative work together to coordinate resource allocation and de-allocation in a safe and effective way. For best practices, use dynamic reconfiguration to make the best use of your system's resources.

    Read the article

  • Data Source Security Part 5

    - by Steve Felts
    If you read through the first four parts of this series on data source security, you should be an expert on this focus area.  There is one more small topic to cover related to WebLogic Resource permissions.  After that comes the test, I mean example, to see with a real set of configuration parameters what the results are with some concrete values. WebLogic Resource Permissions All of the discussion so far has been about database credentials that are (eventually) used on the database side.  WLS has resource credentials to control what WLS users are allowed to access JDBC resources.  These can be defined on the Policies tab on the Security tab associated with the data source.  There are four permissions: “reserve” (get a new connection), “admin”, “shrink”, and reset (plus the all-inclusive “ALL”); we will focus on “reserve” here because we are talking about getting connections.  By default, JDBC resource permissions are completely open – anyone can do anything.  As soon as you add one policy for a permission, then all other users are restricted.  For example, if I add a policy so that “weblogic” can reserve a connection, then all other users will fail to reserve connections unless they are also explicitly added.  The validation is done for WLS user credentials only, not database user credentials.  Configuration of resources in general is described at “Create policies for resource instances” http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/security/CreatePoliciesForResourceInstances.html.  This feature can be very useful to restrict what code and users can get to your database. There are the three use cases: API Use database credentials User for permission checking getConnection() True or false Current WLS user getConnection(user,password) False User/password from API getConnection(user,password) True Current WLS user If a simple getConnection() is used or database credentials are enabled, the current user that is authenticated to the WLS system is checked. If database credentials are not enabled, then the user and password on the API are used. Example The following is an actual example of the interactions between identity-based-connection-pooling-enabled, oracle-proxy-session, and use-database-credentials. On the database side, the following objects are configured.- Database users scott; jdbcqa; jdbcqa3- Permission for proxy: alter user jdbcqa3 grant connect through jdbcqa;- Permission for proxy: alter user jdbcqa grant connect through jdbcqa; The following WebLogic Data Source objects are configured.- Users weblogic, wluser- Credential mapping “weblogic” to “scott”- Credential mapping "wluser" to "jdbcqa3"- Data source descriptor configured with user “jdbcqa”- All tests are run with Set Client ID set to true (more about that below).- All tests are run with oracle-proxy-session set to false (more about that below). 
The test program:- Runs in servlet- Authenticates to WLS as user “weblogic” Use DB Credentials Identity based getConnection(scott,***) getConnection(weblogic,***) getConnection(jdbcqa3,***) getConnection()  true  true Identity scottClient weblogicProxy null weblogic fails - not a db user User jdbcqa3Client weblogicProxy null Default user jdbcqaClient weblogicProxy null  false  true scott fails - not a WLS user User scottClient scottProxy null jdbcqa3 fails - not a WLS user User scottClient scottProxy null  true  false Proxy for scott fails weblogic fails - not a db user User jdbcqa3Client weblogicProxy jdbcqa Default user jdbcqaClient weblogicProxy null  false  false scott fails - not a WLS user Default user jdbcqaClient scottProxy null jdbcqa3 fails - not a WLS user Default user jdbcqaClient scottProxy null If Set Client ID is set to false, all cases would have Client set to null. If this was not an Oracle thin driver, the one case with the non-null Proxy in the above table would throw an exception because proxy session is only supported, implicitly or explicitly, with the Oracle thin driver. When oracle-proxy-session is set to true, the only cases that will pass (with a proxy of "jdbcqa") are the following.1. Setting use-database-credentials to true and doing getConnection(jdbcqa3,…) or getConnection().2. Setting use-database-credentials to false and doing getConnection(wluser, …) or getConnection(). Summary There are many options to choose from for data source security.  Considerations include the number and volatility of WLS and Database users, the granularity of data access, the depth of the security identity (property on the connection or a real user), performance, coordination of various components in the software stack, and driver capabilities.  Now that you have the big picture (remember that table in part 1), you can make a more informed choice.
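    For illustration, a minimal sketch of the two getConnection() variants exercised in the test above, assuming code running inside the server with the data source bound in JNDI; the JNDI name and the password literal are made-up placeholders, not from the article.

      import java.sql.Connection;
      import javax.naming.InitialContext;
      import javax.sql.DataSource;

      public class DataSourceCredentialModes {
          public static void demo() throws Exception {
              InitialContext ctx = new InitialContext();
              DataSource ds = (DataSource) ctx.lookup("jdbc/myDS"); // hypothetical JNDI name

              // getConnection(): the pool supplies the database identity (credential
              // mapping, default user, ...); the current WLS user is checked for the
              // "reserve" resource permission.
              try (Connection c1 = ds.getConnection()) {
                  // use c1 ...
              }

              // getConnection(user, password): which identity is checked for "reserve"
              // depends on use-database-credentials: true checks the current WLS user,
              // false checks the user/password supplied here.
              try (Connection c2 = ds.getConnection("scott", "secret")) { // placeholder credentials
                  // use c2 ...
              }
          }
      }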

    Read the article

  • Cannot redeclare class error when generating PHPUnit code coverage report

    - by Cobby
    I'm starting a project with Zend Framework 1.10 and Doctrine 2 (Beta1). I am using namespaces in my own library code. When generating code coverage reports I get a Fatal Error about redeclaring a class. To provide more info, I've commented out the xdebug_disable() call in my phpunit executable so you can see the function trace (I disabled local variables output because there was too much output). Here's my Terminal output: $ phpunit PHPUnit 3.4.12 by Sebastian Bergmann. ........ Time: 4 seconds, Memory: 16.50Mb OK (8 tests, 14 assertions) Generating code coverage report, this may take a moment.PHP Fatal error: Cannot redeclare class Cob\Application\Resource\HelperBroker in /Users/Cobby/Sites/project/trunk/code/library/Cob/Application/Resource/HelperBroker.php on line 93 PHP Stack trace: PHP 1. {main}() /usr/local/zend/bin/phpunit:0 PHP 2. PHPUnit_TextUI_Command::main() /usr/local/zend/bin/phpunit:54 PHP 3. PHPUnit_TextUI_Command-run() /usr/local/zend/share/pear/PHPUnit/TextUI/Command.php:146 PHP 4. PHPUnit_TextUI_TestRunner-doRun() /usr/local/zend/share/pear/PHPUnit/TextUI/Command.php:213 PHP 5. PHPUnit_Util_Report::render() /usr/local/zend/share/pear/PHPUnit/TextUI/TestRunner.php:478 PHP 6. PHPUnit_Framework_TestResult-getCodeCoverageInformation() /usr/local/zend/share/pear/PHPUnit/Util/Report.php:97 PHP 7. PHPUnit_Util_Filter::getFilteredCodeCoverage() /usr/local/zend/share/pear/PHPUnit/Framework/TestResult.php:623 Fatal error: Cannot redeclare class Cob\Application\Resource\HelperBroker in /Users/Cobby/Sites/project/trunk/code/library/Cob/Application/Resource/HelperBroker.php on line 93 Call Stack: 0.0004 322888 1. {main}() /usr/local/zend/bin/phpunit:0 0.0816 4114628 2. PHPUnit_TextUI_Command::main() /usr/local/zend/bin/phpunit:54 0.0817 4114964 3. PHPUnit_TextUI_Command-run() /usr/local/zend/share/pear/PHPUnit/TextUI/Command.php:146 0.1151 5435528 4. PHPUnit_TextUI_TestRunner-doRun() /usr/local/zend/share/pear/PHPUnit/TextUI/Command.php:213 4.2931 16690760 5. PHPUnit_Util_Report::render() /usr/local/zend/share/pear/PHPUnit/TextUI/TestRunner.php:478 4.2931 16691120 6. PHPUnit_Framework_TestResult-getCodeCoverageInformation() /usr/local/zend/share/pear/PHPUnit/Util/Report.php:97 4.2931 16691148 7. PHPUnit_Util_Filter::getFilteredCodeCoverage() /usr/local/zend/share/pear/PHPUnit/Framework/TestResult.php:623 (I have no idea why it shows the error twice...?) And here is my phpunit.xml: <phpunit bootstrap="./code/tests/application/bootstrap.php" colors="true"> <!-- bootstrap.php changes directory to trunk/code/tests, all paths below are relative to this directory. --> <testsuite name="My Promotions"> <directory>./</directory> </testsuite> <filter> <whitelist> <directory suffix=".php">../application</directory> <directory suffix=".php">../library/Cob</directory> <exclude> <!-- By adding the below line I can remove the error --> <file>../library/Cob/Application/Resource/HelperBroker.php</file> <directory suffix=".phtml">../application</directory> <directory suffix=".php">../application/doctrine</directory> <file>../application/Bootstrap.php</file> <directory suffix=".php">../library/Cob/Tools</directory> </exclude> </whitelist> </filter> <logging> <log type="junit" target="../../build/reports/tests/report.xml" /> <log type="coverage-html" target="../../build/reports/coverage" charset="UTF-8" yui="true" highlight="true" lowUpperBound="50" highLowerBound="80" /> </logging> </phpunit> I have added a <file> entry inside the <exclude> element, which seems to hide this problem.
    I do have another application resource but it doesn't seem to have a problem (the other one is a Doctrine 2 resource). I'm not sure why it is specific to this class; my entire library is autoloaded, so there aren't any include/require calls anywhere. I guess it should be noted that HelperBroker is the first file in the filesystem stemming out from library/Cob. I am on Snow Leopard with the latest/recent versions of all software (Zend Server, Zend Framework, Doctrine 2 Beta1, Phing, PHPUnit, PEAR).

    Read the article

  • What's the best Communication Pattern for EJB3-based applications?

    - by Hank
    I'm starting a JEE project that needs to be strongly scalable. So far, the concept was: several Message Driven Beans, responsible for different parts of the architecture; each MDB has a Session Bean injected, handling the business logic; a couple of Entity Beans, providing access to the persistence layer; communication between the different parts of the architecture via a Request/Reply concept over JMS messages: the MDB receives a msg containing an activity request, uses its session bean to execute the necessary business logic, and returns a response object in a msg to the original requester. The idea was that by de-coupling parts of the architecture from each other via the message bus, there is no limit to the scalability. Simply start more components - as long as they are connected to the same bus, we can grow and grow. Unfortunately, we're having massive problems with the request-reply concept. Transaction management seems to get in our way plenty. It seems that session beans are not supposed to consume messages?! Reading http://blogs.sun.com/fkieviet/entry/request_reply_from_an_ejb and http://forums.sun.com/message.jspa?messageID=10338789, I get the feeling that people actually recommend against the request/reply concept for EJBs. If that is the case, how do you communicate between your EJBs? (Remember, scalability is what I'm after.) Details of my current setup: MDB 1 'TestController' uses (local) SLSB 1 'TestService' for business logic; TestController.onMessage() makes TestService send a message to queue XYZ and request a reply; TestService uses Bean Managed Transactions; TestService establishes a connection & session to the JMS broker via a joint connection factory upon initialization (@PostConstruct); TestService commits the transaction after sending, then begins another transaction and waits 10 sec for the response. The message gets to MDB 2 'LocationController', which uses (local) SLSB 2 'LocationService' for business logic; LocationController.onMessage() makes LocationService send a message back to the requested JMSReplyTo queue; same BMT concept, same @PostConstruct concept; all use the same connection factory to access the broker. Problem: The first message gets sent (by SLSB 1) and received (by MDB 2) OK. The sending of the returning message (by SLSB 2) is fine as well. However, SLSB 1 never receives anything - it just times out. I tried without the messageSelector, no change, still no message received. Is it not OK to consume messages from a session bean?
SLSB 1 - TestService.java @Resource(name = "jms/mvs.MVSControllerFactory") private javax.jms.ConnectionFactory connectionFactory; @PostConstruct public void initialize() { try { jmsConnection = connectionFactory.createConnection(); session = jmsConnection.createSession(false, Session.AUTO_ACKNOWLEDGE); System.out.println("Connection to JMS Provider established"); } catch (Exception e) { } } public Serializable sendMessageWithResponse(Destination reqDest, Destination respDest, Serializable request) { Serializable response = null; try { utx.begin(); Random rand = new Random(); String correlationId = rand.nextLong() + "-" + (new Date()).getTime(); // prepare the sending message object ObjectMessage reqMsg = session.createObjectMessage(); reqMsg.setObject(request); reqMsg.setJMSReplyTo(respDest); reqMsg.setJMSCorrelationID(correlationId); // prepare the publishers and subscribers MessageProducer producer = session.createProducer(reqDest); // send the message producer.send(reqMsg); System.out.println("Request Message has been sent!"); utx.commit(); // need to start second transaction, otherwise the first msg never gets sent utx.begin(); MessageConsumer consumer = session.createConsumer(respDest, "JMSCorrelationID = '" + correlationId + "'"); jmsConnection.start(); ObjectMessage respMsg = (ObjectMessage) consumer.receive(10000L); utx.commit(); if (respMsg != null) { response = respMsg.getObject(); System.out.println("Response Message has been received!"); } else { // timeout waiting for response System.out.println("Timeout waiting for response!"); } } catch (Exception e) { } return response; } SLSB 2 - LocationService.Java (only the reply method, rest is same as above) public boolean reply(Message origMsg, Serializable o) { boolean rc = false; try { // check if we have necessary correlationID and replyTo destination if (!origMsg.getJMSCorrelationID().equals("") && (origMsg.getJMSReplyTo() != null)) { // prepare the payload utx.begin(); ObjectMessage msg = session.createObjectMessage(); msg.setObject(o); // make it a response msg.setJMSCorrelationID(origMsg.getJMSCorrelationID()); Destination dest = origMsg.getJMSReplyTo(); // send it MessageProducer producer = session.createProducer(dest); producer.send(msg); producer.close(); System.out.println("Reply Message has been sent"); utx.commit(); rc = true; } } catch (Exception e) {} return rc; } sun-resources.xml <admin-object-resource enabled="true" jndi-name="jms/mvs.LocationControllerRequest" res-type="javax.jms.Queue" res-adapter="jmsra"> <property name="Name" value="mvs.LocationControllerRequestQueue"/> </admin-object-resource> <admin-object-resource enabled="true" jndi-name="jms/mvs.LocationControllerResponse" res-type="javax.jms.Queue" res-adapter="jmsra"> <property name="Name" value="mvs.LocationControllerResponseQueue"/> </admin-object-resource> <connector-connection-pool name="jms/mvs.MVSControllerFactoryPool" connection-definition-name="javax.jms.QueueConnectionFactory" resource-adapter-name="jmsra"/> <connector-resource enabled="true" jndi-name="jms/mvs.MVSControllerFactory" pool-name="jms/mvs.MVSControllerFactoryPool" />

    Read the article

  • Cascading S3 Sink Tap not being deleted with SinkMode.REPLACE

    - by Eric Charles
    We are running Cascading with a Sink Tap configured to store in Amazon S3 and were facing some FileAlreadyExistsException errors (see [1]). This happened only from time to time (about 1 time in 100) and was not reproducible. Digging into the Cascading code, we discovered that Hfs.deleteResource() is called (among others) by BaseFlow.deleteSinksIfNotUpdate(). Btw, we were quite intrigued by the silent NPE (with the comment "hack to get around npe thrown when fs reaches root directory"). From there, we extended the Hfs tap with our own Tap to add more action in the deleteResource() method (see [2]) with a retry mechanism calling getFileSystem(conf).delete directly. The retry mechanism seemed to bring improvement, but we are still sometimes facing failures (see the example in [3]): it sounds like HDFS returns isDeleted=true, but asking directly afterwards whether the folder exists, we receive exists=true, which should not happen. Logs also randomly show isDeleted true or false when the flow succeeds, which sounds like the returned value is irrelevant or not to be trusted. Can anybody share their own S3 experience with such behavior: "folder should be deleted, but it is not"? We suspect an S3 issue, but could it also be in Cascading or HDFS? We run on Hadoop Cloudera-cdh3u5 and Cascading 2.0.1-wip-dev. [1] org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory s3n://... already exists at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:132) at com.twitter.elephantbird.mapred.output.DeprecatedOutputFormatWrapper.checkOutputSpecs(DeprecatedOutputFormatWrapper.java:75) at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:923) at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:882) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278) at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:882) at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:856) at cascading.flow.hadoop.planner.HadoopFlowStepJob.internalNonBlockingStart(HadoopFlowStepJob.java:104) at cascading.flow.planner.FlowStepJob.blockOnJob(FlowStepJob.java:174) at cascading.flow.planner.FlowStepJob.start(FlowStepJob.java:137) at cascading.flow.planner.FlowStepJob.call(FlowStepJob.java:122) at cascading.flow.planner.FlowStepJob.call(FlowStepJob.java:42) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.j [2] @Override public boolean deleteResource(JobConf conf) throws IOException { LOGGER.info("Deleting resource {}", getIdentifier()); boolean isDeleted = super.deleteResource(conf); LOGGER.info("Hfs Sink Tap isDeleted is {} for {}", isDeleted, getIdentifier()); Path path = new Path(getIdentifier()); int retryCount = 0; int cumulativeSleepTime = 0; int sleepTime = 1000; while (getFileSystem(conf).exists(path)) { LOGGER .info( "Resource {} still exists, it should not...
- I will continue to wait patiently...", getIdentifier()); try { LOGGER.info("Now I will sleep " + sleepTime / 1000 + " seconds while trying to delete {} - attempt: {}", getIdentifier(), retryCount + 1); Thread.sleep(sleepTime); cumulativeSleepTime += sleepTime; sleepTime *= 2; } catch (InterruptedException e) { e.printStackTrace(); LOGGER .error( "Interrupted while sleeping trying to delete {} with message {}...", getIdentifier(), e.getMessage()); throw new RuntimeException(e); } if (retryCount == 0) { getFileSystem(conf).delete(getPath(), true); } retryCount++; if (cumulativeSleepTime > MAXIMUM_TIME_TO_WAIT_TO_DELETE_MS) { break; } } if (getFileSystem(conf).exists(path)) { LOGGER .error( "We didn't succeed to delete the resource {}. Throwing now a runtime exception.", getIdentifier()); throw new RuntimeException( "Although we waited to delete the resource for " + getIdentifier() + ' ' + retryCount + " iterations, it still exists - This must be an issue in the underlying storage system."); } return isDeleted; } [3] INFO [pool-2-thread-15] (BaseFlow.java:1287) - [...] at least one sink is marked for delete INFO [pool-2-thread-15] (BaseFlow.java:1287) - [...] sink oldest modified date: Wed Dec 31 23:59:59 UTC 1969 INFO [pool-2-thread-15] (HiveSinkTap.java:148) - Now I will sleep 1 seconds while trying to delete s3n://... - attempt: 1 INFO [pool-2-thread-15] (HiveSinkTap.java:130) - Deleting resource s3n://... INFO [pool-2-thread-15] (HiveSinkTap.java:133) - Hfs Sink Tap isDeleted is true for s3n://... ERROR [pool-2-thread-15] (HiveSinkTap.java:175) - We didn't succeed to delete the resource s3n://... Throwing now a runtime exception. WARN [pool-2-thread-15] (Cascade.java:706) - [...] flow failed: ... java.lang.RuntimeException: Although we waited to delete the resource for s3n://... 0 iterations, it still exists - This must be an issue in the underlying storage system. at com.qubit.hive.tap.HiveSinkTap.deleteResource(HiveSinkTap.java:179) at com.qubit.hive.tap.HiveSinkTap.deleteResource(HiveSinkTap.java:40) at cascading.flow.BaseFlow.deleteSinksIfNotUpdate(BaseFlow.java:971) at cascading.flow.BaseFlow.prepare(BaseFlow.java:733) at cascading.cascade.Cascade$CascadeJob.call(Cascade.java:761) at cascading.cascade.Cascade$CascadeJob.call(Cascade.java:710) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:619)

    Read the article
