Search Results


  • Know Your Service Request Status

    - by Get Proactive Customer Adoption Team
    To monitor a Service Request or not to monitor a Service Request... that should never be the question. Monitoring the Service Requests you create is an essential part of the process of resolving your issue when you work with a Support Engineer. If you monitor your Service Request, you know at all times where it is in the process, or, to be more specific, what action the Support Engineer has taken on your request and what the next step is. When you think about it, it is rather simple: Oracle Support is working the issue, Oracle Development is working the issue, or you are. When you check the status, you may find that the Support Engineer has a question for you or is waiting for more information to resolve the issue. If you monitor the Service Request and respond quickly, the process keeps moving, and you'll get your answer more quickly.

    Monitoring a Service Request is easy. All you need to do is check the status codes that the Support Engineer or the system assigns to your Service Request. These status codes are not static: during the life of your Service Request, it will go through a variety of them. The best advice I can offer you when you monitor your Service Request is to watch the codes. If the status is not changing, or if you are not getting responses back within the agreed timeframes, you should review the action plan the Support Engineer has outlined or discuss a new action plan.

    Here are the most common status codes:

    - Work in Progress indicates that your Support Engineer is researching and working the issue.
    - Development Working means that you have a code-related issue and Oracle Support has submitted a bug to Development.

    Please pay particular attention to the following statuses; they indicate that the Support Engineer is waiting for a response from you:

    - Customer Working usually means that your Support Engineer needs you to collect additional information, needs you to try something or apply a patch, or has more questions for you.
    - Solution Offered indicates that the Support Engineer has identified the problem and has provided you with a solution.

    Auto-Close and Close Initiated are statuses you don't want to see; monitoring your Service Request helps prevent your issues from reaching them. They usually indicate that the Support Engineer did not receive the requested information or action from you. This is important: if you fail to respond, the Support Engineer will attempt to contact you three times over a two-week period. If these attempts are unsuccessful, he or she will initiate the Auto-Close process. At the end of this additional two-week period, if you have not updated the Service Request, it is considered abandoned and the Support Engineer will assign a Customer Abandoned status. A Support Engineer doesn't like to see this status, since he or she has been working to solve your issue, but we know our customers dislike it even more, since it means their issue is not moving forward.

    You can avoid delays in resolving your issue by monitoring your Service Request and acting quickly when you see the status change. Respond to the engineer's requests to answer questions, collect information, or try the offered solution. Then the Support Engineer can continue working the issue and the Service Request keeps moving forward towards resolution.

    Keep in mind that if you take an extended period of time to respond to a request or to provide the information requested, the Support Engineer cannot take the next step. You may inadvertently send an implicit message about the problem's urgency that does not match the Service Request's priority or your need for an answer. Help us help you. We want to get you the answer as quickly as possible so you can stay focused on your company's objectives. Now, back to our initial question: to monitor Service Requests or not to monitor Service Requests? I think the answer is clear: yes, monitor your Service Request to resolve the issue as quickly as possible.

    Read the article

  • Part 6: Extensions vs. Modifications

    - by volker.eckardt(at)oracle.com
    Customizations = Extensions + Modifications

    In EBS terminology, a customization can be an extension or a modification. Extension means that you mainly create your own code from scratch. You may utilize existing views, packages and Java classes, but your code is unique. Modifications are quite different: here you take existing code and change or enhance certain areas to achieve a slightly different behavior. Importantly, it doesn't matter whether you place your code in the same place or somewhere else; it is still a modification. It is also not relevant whether you leave the original code enabled or not! Why? Here is the answer: if the original code piece you have taken as your base gets patched, you need to copy the source again and apply all your changes once more. If you don't do that, you may get different results or write different data compared to the standard, and this causes a high risk!

    Here are some guidelines on how to reduce the risk:

    - Invest a bit longer when searching for objects to select data from. Rather choose a view than a table. If Oracle development changes the underlying tables, the view will be more stable and is therefore a better choice.
    - Rather choose public APIs over internal APIs. Same background as before: although the internal structure might change, the public API is more stable.
    - Use personalization and substitution rather than modification. Spend more time checking whether the requirement can be covered with such techniques.
    - Build a project code library, and avoid colleagues creating similar functionality multiple times. Otherwise you have to review lots of similar code to determine the need for correction.
    - Use the technique of "flagged files". Flagged files are a way to mark a standard deployment file. If you run the patch analysis (within Application Manager), the analysis result will list flagged standard files in case they will be patched. If you maintain a cross reference to your own CEMLIs, you can easily determine which CEMLIs have to be reviewed.
    - Implement a code review process. This can be done by utilizing team-internal or external persons. If you implement a team-internal process, your team members will come up with suggestions on how to improve the code quality by themselves.
    - Review heavy customizations regularly to identify options to reduce complexity; say, perform this every six months. You need not spend days on such a review, but a high-level cross check of whether the customization can be reduced is suggested.
    - De-install customizations which are no longer required. Define a process for this. Add a section to the technical documentation on how to uninstall and what the possible implications are.
    - Maintain a cross reference between CEMLIs, and between CEMLIs, EBS modules and business processes. Keep this list up to date! Share this list!

    By following these guidelines, you are able to improve product stability. Although we might not be able to avoid modifications completely, we can give much better advice to developers and to our test team.

    Summary: Extensions and Modifications have to be handled differently during their lifecycle. Modifications imply a much higher risk and should therefore be reviewed more frequently. Good cross references allow you to give clear advice for the testing activities.

    Read the article

  • Non use of persisted data

    - by Dave Ballantyne
    Working at a client site (that in itself is good to say), I ran into a set of circumstances that made me ponder, and appreciate, the optimizer engine a bit more. While optimizing a stored procedure, I found a piece of code similar to:

        select BillToAddressID, Rowguid, dbo.udfCleanGuid(rowguid)
          from sales.salesorderheader
         where BillToAddressID = 985

    A lovely scalar UDF was being used; in actuality it was used as part of the WHERE clause, but it is simplified here. Normally I would use an inline table valued function here, but in this case it wasn't a good option. So this seemed like a pretty good case to use a persisted column to improve performance. The supporting index was already defined as:

        create index idxBill on sales.salesorderheader(BillToAddressID) include (rowguid)

    and the function code is:

        Create Function udfCleanGuid(@GUID uniqueidentifier)
        returns varchar(255)
        with schemabinding
        as
        begin
            Declare @RetStr varchar(255)
            Select @RetStr = CAST(@Guid as varchar(255))
            Select @RetStr = REPLACE(@RetStr, '-', '')
            return @RetStr
        end

    Executing the select statement produced a plan with nothing surprising: a seek to find the data and a compute scalar to execute the UDF. Let's get optimizing and remove the UDF with a persisted column:

        Alter table sales.salesorderheader
        add CleanedGuid as dbo.udfCleanGuid(rowguid) PERSISTED

    A subtle change to the SELECT statement...

        select BillToAddressID, CleanedGuid
          from sales.salesorderheader
         where BillToAddressID = 985

    ...and our new optimized plan looks not a lot different from before! We are using persisted data on our table, so where is the lookup to fetch it? It didn't happen; it was recalculated. Looking at the properties of the relevant Compute Scalar would confirm this, but a more graphic example is shown in the profiler SP:StatementCompleted event.

    Why did the recalculation happen? Remember the index definition: it has included the original guid to avoid the lookup. The optimizer knows this column will be passed into the UDF, ran it through its logic and decided that recalculating is cheaper than the lookup. That may or may not be the case in actuality; the optimizer has no idea of the real cost of a scalar UDF. IMO the default cost of a scalar UDF should be seen as a lot higher than it is, since in reality it is invariably higher.

    Knowing this, how do we avoid the function call? Dropping the guid from the index is not an option; there may be other code reliant on it. We are left with only one real option: add the persisted column into the index.

        drop index Sales.SalesOrderHeader.idxBill
        go
        create index idxBill on sales.salesorderheader(BillToAddressID) include (rowguid, cleanedguid)

    Now if we repeat the statement

        select BillToAddressID, CleanedGuid
          from sales.salesorderheader
         where BillToAddressID = 985

    we still have a compute scalar operator, but this time it wasn't used to recalculate the persisted data. This can be confirmed with profiler again. The takeaway here is: just because you have persisted data, don't automatically assume that it is being used.

    Read the article

  • Need help partitioning when reinstalling Ubuntu 14.04

    - by Chris M.
    I upgraded to 14.04 about a month ago on my HP Mini netbook (about 16 GB hard disk). A few days ago the system crashed (I don't know why, but I was using the internet at the time). When I restarted the computer, Ubuntu would not load. Instead, I got a message from the BIOS saying:

        Reboot and Select proper Boot device
        or Insert Boot Media in selected Boot device and press a key

    I took this to mean that I needed to reinstall 14.04. When I try to reinstall Ubuntu from the USB stick, I choose "Erase disk and install Ubuntu" but then I get a message:

        Some of the partitions you created are too small. Please make the following partitions at least this large:

        / 3.3 GB

        If you do not go back to the partitioner and increase the size of these partitions, the installation may fail.

    At first I hit Continue to see if it would install anyway, and it gave the message:

        The attempt to mount a file system with type ext4 in SCSI1 (0,0,0), partition # 1 (sda) at / failed. You may resume partitioning from the partitioning menu.

    The second time I hit Go Back, and it took me to the following partitioning table:

        Device     Type  Mount Point  Format         Size     Used     System
        /dev/sda
        /dev/sda1  ext4               (checked)      3228 MB  Unknown
        /dev/sda5  swap               (not checked)  1063 MB  Unknown

        + - Change | New Partition Table... Revert

        Device for boot loader installation: /dev/sda ATA JM Loader 001 (4.3 GB)

    At this point I'm not sure what to do. I've never partitioned my hard drive before and I don't want to screw things up. (I'm not particularly tech savvy.) Can you instruct me on what I should do? (P.S. I'm afraid the table might not appear as I typed it in.)

    Results from fdisk:

        ubuntu@ubuntu:~$ sudo fdisk -l

        Disk /dev/sda: 4294 MB, 4294967296 bytes
        255 heads, 63 sectors/track, 522 cylinders, total 8388608 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/sda doesn't contain a valid partition table

        Disk /dev/sdb: 7860 MB, 7860125696 bytes
        155 heads, 31 sectors/track, 3194 cylinders, total 15351808 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0009a565

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *        2768    15351807     7674520    b  W95 FAT32
        ubuntu@ubuntu:~$

    Here is what it displays when I open the Disks utility (I tried the screenshot terminal command you suggested but it didn't seem to do anything):

        4.3 GB Hard Disk  /dev/sda
        Model: JM Loader 001 (01000001)
        Size: 4.3 GB (4,294,967,296 bytes)
        Serial Number: 01234123412341234
        Assessment: SMART is not supported

        Volumes
        Size: 4.3 GB (4,294,967,296 bytes)
        Device: /dev/sda
        Contents: Unknown

    (There is a button in the utility that, when you click it, gives the following options: Format..., Create Disk Image..., Restore Disk Image..., Benchmark; but SMART Data & Self-Tests... is dimmed out.)

    When I hit F9 Change Boot Device Order, it shows the hard drive as:

        SATA:PM-JM Loader 001

    When I hit F10 to get into the BIOS Setup Utility, under Diagnostic it shows:

        Primary Hard Disk Self Test    Not Support

    NetworkManager Tool:

        State: disconnected

        Device: eth0
        Type: Wired
        Driver: atl1c
        State: unavailable
        Default: no
        HW Address: 00:26:55:B0:7F:0C

        Capabilities:
        Carrier Detect: yes

        Wired Properties
        Carrier: off

    When I run the command lshw -C network, I get:

        WARNING: you should run this program as super-user.
          *-network
               description: Network controller
               product: BCM4312 802.11b/g LP-PHY
               vendor: Broadcom Corporation
               physical id: 0
               bus info: pci@0000:01:00.0
               version: 01
               width: 64 bits
               clock: 33MHz
               capabilities: bus_master cap_list
               configuration: driver=b43-pci-bridge latency=0
               resources: irq:16 memory:feafc000-feafffff
          *-network
               description: Ethernet interface
               product: AR8132 Fast Ethernet
               vendor: Qualcomm Atheros
               physical id: 0
               bus info: pci@0000:02:00.0
               logical name: eth0
               version: c0
               serial: 00:26:55:b0:7f:0c
               capacity: 100Mbit/s
               width: 64 bits
               clock: 33MHz
               capabilities: bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd autonegotiation
               configuration: autonegotiation=on broadcast=yes driver=atl1c driverversion=1.0.1.1-NAPI latency=0 link=no multicast=yes port=twisted pair
               resources: irq:43 memory:febc0000-febfffff ioport:ec80(size=128)
        WARNING: output may be incomplete or inaccurate, you should run this program as super-user.

    Read the article

  • CI tests to enforce specific development rules - good practice?

    - by KeithS
    The following is all purely hypothetical, and any particular portion of it may or may not accurately describe real persons or situations, whether living, dead or just pretending.

    Let's say I'm a senior dev or architect in charge of a dev team working on a project. This project includes a security library for user authentication/authorization of the application under development. The library must be available for developers to edit; however, I wish to "trust but verify" that coders are not doing things that could compromise the security of the finished system, and because this isn't my only responsibility I want it to be done in an automated way.

    As one example, let's say I have an interface that represents a user which has been authenticated by the system's security library. The interface exposes basic user info and a list of things the user is authorized to do (so that the client app doesn't have to keep asking the server "can I do this?"), all in an immutable fashion of course. There is only one implementation of this interface in production code, and for the purposes of this post we can say that all appropriate measures have been taken to ensure that this implementation can only be used by the one part of our code that needs to be able to create concretions of the interface.

    The coders have been instructed that this interface and its implementation are sacrosanct and any changes must go through me. However, those are just words; the security library's source is open for editing by necessity. Any of my devs could decide that this secured, private, hash-checked implementation needs to be public so that they can do X, or alternately they could create their own implementation of this public interface in a different library, exposing the hashing algorithm that provides the secure checksum, in order to do Y. I may not be made aware of these changes, and so would lose the chance to beat the developer over the head for them. An attacker could then find these little nuggets in an unobfuscated library of the compiled product, and exploit them to provide fake users and/or falsely-elevated administrative permissions, bypassing the entire security system.

    This possibility keeps me awake for a couple of nights, and then I create an automated test that reflectively checks the codebase for types deriving from the interface, and fails if it finds any that are not exactly what and where I expect them to be. I compile this test into a project under a separate folder of the VCS that only I have rights to commit to, have CI compile it as an external library of the main project, and set it up to run as part of the CI test suite for user commits.

    Now I have an automated test under my complete control that will tell me (and everyone else) if the number of implementations increases without my involvement, or if an implementation that I did know about has anything new added or has its modifiers or those of its members changed. I can then investigate further, and regain the opportunity to beat developers over the head as necessary.

    Is this considered "reasonable" to want to do in situations like this? Am I going to be seen in a negative light for going behind my devs' backs to ensure they aren't doing something they shouldn't?
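    For illustration, such a guard test could look roughly like the following in Java, using JUnit and the org.reflections classpath scanner. This is only a sketch: the question names no language, and the interface, package and class names here (IAuthenticatedUser, com.example.app, and so on) are all hypothetical.

        import java.util.Set;
        import java.util.stream.Collectors;

        import org.junit.Test;
        import org.reflections.Reflections;

        import static org.junit.Assert.assertEquals;

        // Hypothetical guard test: fails the CI build if the set of types
        // implementing the secured interface is not exactly the approved set.
        public class SecurityWhitelistTest {

            // The single approved implementation, in the package we expect.
            private static final Set<String> APPROVED = Set.of(
                    "com.example.app.security.internal.AuthenticatedUserImpl");

            @Test
            public void onlyApprovedImplementationsExist() {
                // Scan the application's packages for anything deriving from
                // the (hypothetical) IAuthenticatedUser interface.
                Reflections scanner = new Reflections("com.example.app");
                Set<String> found =
                        scanner.getSubTypesOf(com.example.app.security.IAuthenticatedUser.class)
                               .stream()
                               .map(Class::getName)
                               .collect(Collectors.toSet());

                // Any new, renamed or relocated implementation fails the build.
                assertEquals("Unapproved implementations detected", APPROVED, found);
            }
        }

    A similar assertion over java.lang.reflect.Modifier flags for the class and its members would cover the "modifiers changed" half of the check.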

    Read the article

  • Why Apple’s New SDK Limitation is So Offensive

    - by TStewartDev
    I am not an Apple fanboy, nor have I ever been. However, I have owned a Mac, an iPod, and an iPhone in my lifetime, and for more than a decade, I have defended Apple against the untruths that the haters so enjoy spewing. I encouraged my wife to buy a MacBook when she needed a new laptop two years ago, and I often recommend them to my friends and relatives. I have proudly and happily used my first generation iPhone for nearly three years. Now, for the first time in well over ten years, I find myself ready to swear off Apple and encourage everyone I know to do the same.

    I was disappointed when Apple wouldn't allow native apps, but I still bought the iPhone. I've stomached their ambiguous app approval process even though it's apparent that Steve may just reject your app because he doesn't like you or feels threatened by you (I'm still lamenting the rejection of the Google Voice app). But, as a developer, I can no longer tolerate Apple's terms and the kind of totalitarian control they indicate Apple wants.

    In case you are not already familiar, Apple has dictated in their OS 4.0 SDK license agreement (the now infamous Section 3.3.1) that all apps developed for the iPhone must be coded in C, C++, or Objective C, and moreover, that using any cross-compiling platforms is a violation of the agreement.

    For those of you who aren't developers, let me try to illustrate why this angers those of us who are. Imagine you're a professional writer. You've had articles published in some journals and magazines, and you've got a couple popular books out there, too. You've got an idea for a new book, and so you take it to your publisher. Your publisher agrees that it's a good idea. "But," says the publisher, "we want to hold our books to a tighter standard so that our readers get the experience we want them to have. Therefore, from now on, all our writers may only use words from this list of the 10,000 most common English words. Furthermore, if you cite any other works or quote anyone, they must comply with that same list, or you'll have to rewrite the entire work as well in case our readers want to look up your citation."

    What do you do? If your work is a children's book, this probably isn't a big deal to you. If it's an autobiography, textbook, or even a novel, though, you're going to have a lot of trouble describing your content with only common words. It's going to take you longer to complete your book, too, since you'll be looking up less common words frequently to see if you can use them. You could always go to another publisher, but this one has the best ability to distribute your book. The next largest distributor can only do a quarter as much. You could abandon the project altogether, but then everyone loses.

    Isn't this a silly scenario? Who would put such a limitation on writers? Yet this is very much what Apple is doing. They are using their dominant position in the market to coerce developers to write their apps exclusively for the iPhone OS by making it too expensive to write for multiple platforms. It is at least a threefold attack: striking at Adobe, who is set to release a compiler that lets Flash source be compiled to iPhone binaries; striking at Google, whose Android platform stands the best chance at the moment of providing serious competition to the iPhone; and reinforcing their own strong position by keeping popular apps exclusive to the iPhone.

    And while developers are already very upset about this, the sad fact is that most of us will cave and give in to Apple because consumers don't know any better. They will continue to buy Apple's toy, forcing developers to play Apple's maniacal game in order to make any money, at least until Steve Jobs decides he doesn't like them or he intends to release a competing application (bye-bye OpenFeint). Apple has been kept in check on the desktop front by a very dominant Microsoft, but I'm afraid that their success with iPods, iTunes, and iPhones has created a monster that we may have to bear until it is slain by an anti-trust suit or dies with the retirement of Steve Jobs.

    Read the article

  • Get More From Your Service Request

    - by Get Proactive Customer Adoption Team
    Leveraging Service Request Best Practices

    Use best practices to get there faster. In the daily conversations I have with customers, they sometimes express frustration over their Service Requests. They often feel powerless to make needed changes, so their sense of frustration grows. To help you avoid some of the frustration you might feel in dealing with your Service Requests (SR), here are a few pointers that come from our best practice discussions.

    - Be proactive. If you can anticipate some of the questions that Support will ask, or the information they may need, try to provide this up front, when you log the SR. This could be output from the Remote Diagnostic Agent (RDA), if this is a database issue, or the output from another diagnostic tool, if you're an EBS customer. Any information you can supply that helps us understand the situation better helps us resolve the issue sooner. As you use some of these tools proactively, you might even find the solution to the problem before you log an SR!
    - Be right. Make sure you have the correct severity level. Since you select the initial severity level, it's easy to accept the default without considering how significant this may be. Business impact is the driving factor, so make sure you take a moment to select the severity level that is appropriate to the situation. Also, make sure you ask us to change the severity level, should the situation dictate.
    - Be responsive! If this is an important issue to you, quickly follow up on any action plan submitted to you by Oracle Support. The support engineer assigned to your Service Request will be able to move the issue forward more aggressively when they have the needed information. This is crucial in resolving your issues in a timely manner.
    - Be thorough. If there are five questions in the action plan, make sure you provide an answer for all five questions in one response, rather than trickling them in one at a time. This will allow the engineer to look at all of the information as a whole and to avoid multiple trips to your SR, saving valuable time and getting you a resolution sooner.
    - Be your own advocate! You know your situation best; make sure Oracle Support understands both how and why this issue is important to you and your company. Use the escalation process if you're concerned that your SR isn't moving in the right direction, at the right pace, or through the right person. Don't wait until you're frustrated and angry. An escalation is as simple as a quick conversation on the phone and can be amazingly effective in getting your issues back on track. The support manager you speak with is empowered to make any needed changes.
    - Be our partner. You can make your support experience better. When your SR has been resolved, you may receive a survey request. This is intended to get your feedback about how your SR went and what we can do to improve your overall support experience.

    Oracle Support is here to help you. Our goal with any Service Request is to provide the best possible solution as quickly as possible. With your help, we'll be able to do this with your Service Request too.

    Read the article

  • The five steps of business intelligence adoption: where are you?

    - by Red Gate Software BI Tools Team
    When I was in Orlando and New York last month, I spoke to a lot of business intelligence users. What they told me suggested a path of BI adoption. The user's place on the path depends on the size and sophistication of their organisation.

    Step 1: A company with a database of customer transactions will often want to examine particular data, like revenue and unit sales over the last period for each product and territory. To do this, they probably use simple SQL queries or stored procedures to produce data on demand.

    Step 2: The results from step one are saved in an Excel document, so business users can analyse them with filters or pivot tables. Alternatively, SQL Server Reporting Services (SSRS) might be used to generate a report of the SQL query for display on an intranet page.

    Step 3: If these queries are run frequently, or business users want to explore data from multiple sources more freely, it may become necessary to create a new database structured for analysis rather than CRUD (create, retrieve, update, and delete). For example, data from more than one system, plus external information, may be incorporated into a data warehouse. This can become 'one source of truth' for the business's operational activities. The warehouse will probably have a simple 'star' schema, with fact tables representing the measures to be analysed (e.g. unit sales, revenue) and dimension tables defining how this data is aggregated (e.g. by time, region or product). Reports can be generated from the warehouse with Excel, SSRS or other tools.

    Step 4: Not too long ago, Microsoft introduced an Excel plug-in, PowerPivot, which allows users to bring larger volumes of data into Excel documents and create links between multiple tables. These BISM Tabular documents can be created by the database owners or other expert Excel users and viewed by anyone with Excel PowerPivot. Sometimes, business users may use PowerPivot to create reports directly from the primary database, bypassing the need for a data warehouse. This can introduce problems when there are misunderstandings of the database structure or no single 'source of truth' for key data.

    Step 5: Steps three or four are often enough to satisfy business intelligence needs, especially if users are sophisticated enough to work with the warehouse in Excel or SSRS. However, sometimes the relationships between data are too complex or the queries which aggregate across periods, regions etc are too slow. In these cases, it can be necessary to formalise how the data is analysed and pre-build some of the aggregations. To do this, a business intelligence professional will typically use SQL Server Analysis Services (SSAS) to create a multidimensional model, or "cube", that more simply represents key measures and aggregates them across specified dimensions.

    Step five is where our tool, SSAS Compare, becomes useful, as it helps review and deploy changes from development to production. For us at Red Gate, the primary value of SSAS Compare is to establish a dialog with BI users, so we can develop a portfolio of products that support creation and deployment across a range of report and model types. For example, PowerPivot and the new BISM Tabular model create a potential customer base for tools that extend beyond BI professionals. We're interested in learning where people are in this story, so we've created a six-question survey to find out. Whether you're at step one or step five, we'd love to know how you use BI so we can decide how to build tools that solve your problems.

    So if you have sixty seconds to spare, tell us on the survey!

    Read the article

  • Faster Memory Allocation Using vmtasks

    - by Steve Sistare
    You may have noticed a new system process called "vmtasks" on Solaris 11 systems:

        % pgrep vmtasks
        8
        % prstat -p 8
           PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
             8 root        0K    0K sleep   99  -20   9:10:59 0.0% vmtasks/32

    What is vmtasks, and why should you care? In a nutshell, vmtasks accelerates creation, locking, and destruction of pages in shared memory segments. This is particularly helpful for locked memory, as creating a page of physical memory is much more expensive than creating a page of virtual memory. For example, an ISM segment (shmflag & SHM_SHARE_MMU) is locked in memory on the first shmat() call, and a DISM segment (shmflg & SHM_PAGEABLE) is locked using mlock() or memcntl().

    Segment operations such as creation and locking are typically single threaded, performed by the thread making the system call. In many applications, the size of a shared memory segment is a large fraction of total physical memory, and the single-threaded initialization is a scalability bottleneck which increases application startup time.

    To break the bottleneck, we apply parallel processing, harnessing the power of the additional CPUs that are always present on modern platforms. For sufficiently large segments, as many as 16 threads of vmtasks are employed to assist an application thread during creation, locking, and destruction operations. The segment is implicitly divided at page boundaries, and each thread is given a chunk of pages to process. The per-page processing time can vary, so for dynamic load balancing, the number of chunks is greater than the number of threads, and threads grab chunks dynamically as they finish their work. Because the threads modify a single application address space in a compressed time interval, contention on the locks protecting VM data structures was a problem, and we had to re-scale a number of VM locks to get good parallel efficiency. The vmtasks process has 1 thread per CPU and may accelerate multiple segment operations simultaneously, but each operation gets at most 16 helper threads to avoid monopolizing CPU resources. We may reconsider this limit in the future.

    Acceleration using vmtasks is enabled out of the box, with no tuning required, and works for all Solaris platform architectures (SPARC sun4u, SPARC sun4v, x86). The following tables show the time to create + lock + destroy a large segment, normalized as milliseconds per gigabyte, before and after the introduction of vmtasks:

        ISM
        system  ncpu  before  after  speedup
        ------  ----  ------  -----  -------
        x4600     32    1386    245       6X
        X7560     64    1016    153       7X
        M9000    512    1196    206       6X
        T5240    128    2506    234      11X
        T4-2     128    1197    107      11X

        DISM
        system  ncpu  before  after  speedup
        ------  ----  ------  -----  -------
        x4600     32    1582    265       6X
        X7560     64    1116    158       7X
        M9000    512    1165    152       8X
        T5240    128    2796    198      14X

    (I am missing the data for T4 DISM, for no good reason; it works fine.) The following table separates the creation and destruction times:

        ISM, T4-2
                 before  after
                 ------  -----
        create      702     64
        destroy     495     43

    To put this in perspective, consider creating a 512 GB ISM segment on T4-2. Creating the segment would take 6 minutes with the old code, and only 33 seconds with the new. If this is your Oracle SGA, you save over 5 minutes when starting the database, and you also save when shutting it down prior to a restart. Those minutes go directly to your bottom line for service availability.

    Read the article

  • Custom Text and Binary Payloads using WebSocket (TOTD #186)

    - by arungupta
    TOTD #185 explained how to process text and binary payloads in a WebSocket endpoint. In summary, a text payload may be received as:

        public void receiveTextMessage(String message) {
            . . .
        }

    And a binary payload may be received as:

        public void receiveBinaryMessage(ByteBuffer message) {
            . . .
        }

    As you realize, both of these methods receive the text and binary data in raw format. However, you may like to receive and send the data using a POJO. This marshaling and unmarshaling can be done in the method implementation, but the JSR 356 API provides a cleaner way. For encoding and decoding a text payload into a POJO, the Decoder.Text (for inbound payload) and Encoder.Text (for outbound payload) interfaces need to be implemented. The sample implementation below shows how a text payload consisting of JSON structures can be encoded and decoded.

        public class MyMessage implements Decoder.Text<MyMessage>, Encoder.Text<MyMessage> {

            private JsonObject jsonObject;

            @Override
            public MyMessage decode(String string) throws DecodeException {
                this.jsonObject = new JsonReader(new StringReader(string)).readObject();
                return this;
            }

            @Override
            public boolean willDecode(String string) {
                return true;
            }

            @Override
            public String encode(MyMessage myMessage) throws EncodeException {
                return myMessage.jsonObject.toString();
            }

            public JsonObject getObject() {
                return jsonObject;
            }
        }

    In this implementation, the decode method decodes the incoming text payload to MyMessage, the encode method encodes MyMessage for the outgoing text payload, and the willDecode method returns true or false depending on whether the message can be decoded. The encoder and decoder implementation classes need to be specified in the WebSocket endpoint as:

        @WebSocketEndpoint(value="/endpoint",
                           encoders={MyMessage.class},
                           decoders={MyMessage.class})
        public class MyEndpoint {
            public MyMessage receiveMessage(MyMessage message) {
                . . .
            }
        }

    Notice the updated method signature, where the application is working with MyMessage instead of the raw string. Note that the encoder and decoder implementations just illustrate the point and provide no validation or exception handling. Similarly, the Encoder.Binary and Decoder.Binary interfaces need to be implemented for encoding and decoding binary payload (a speculative sketch appears at the end of this post).

    Here are some references for you:

    - JSR 356: Java API for WebSocket - Specification (Early Draft) and Implementation (already integrated in GlassFish 4 promoted builds)
    - TOTD #183 - Getting Started with WebSocket in GlassFish
    - TOTD #184 - Logging WebSocket Frames using Chrome Developer Tools, Net-internals and Wireshark
    - TOTD #185: Processing Text and Binary (Blob, ArrayBuffer, ArrayBufferView) Payload in WebSocket

    Subsequent blogs will discuss the following topics (not necessarily in that order):

    - Error handling
    - Interface-driven WebSocket endpoint
    - Java client API
    - Client and Server configuration
    - Security
    - Subprotocols
    - Extensions
    - Other topics from the API
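    On the binary case deferred above: by analogy with the text example, a binary decoder/encoder pair might look roughly like the following. This is a speculative sketch against the evolving JSR 356 draft, so treat the exact signatures (decode and willDecode taking a ByteBuffer, encode returning a ByteBuffer) as assumptions rather than code from the specification.

        public class MyBinaryMessage implements Decoder.Binary<MyBinaryMessage>,
                                                Encoder.Binary<MyBinaryMessage> {

            private byte[] payload;

            @Override
            public MyBinaryMessage decode(ByteBuffer bytes) throws DecodeException {
                // Copy the raw frame into the POJO; real code would
                // unmarshal a concrete wire format here.
                this.payload = new byte[bytes.remaining()];
                bytes.get(payload);
                return this;
            }

            @Override
            public boolean willDecode(ByteBuffer bytes) {
                return true;
            }

            @Override
            public ByteBuffer encode(MyBinaryMessage message) throws EncodeException {
                return ByteBuffer.wrap(message.payload);
            }
        }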

    Read the article

  • Building Enterprise Smartphone App – Part 1: Why Build Smart Phone Apps

    - by Tim Murphy
    This is part 1 in a series of posts based on a talk I gave recently at the Chicago Information Technology Architects Group. Feel free to leave feedback.

    Intro

    Most of us already carry smartphones. We play games on them. We keep up with what is going on with our friends and our favorite teams. We take pictures of our kids at their events. But the question is if that is all they are good for. Many companies have aspects of their business that lend themselves to being performed by mobile devices. Some of them lean toward larger devices such as tablets, but many can be executed on smartphones. This and the following articles will discuss some of the possible applications of smartphone technology for businesses, the platforms that are available and the considerations you need to make when building them. I'll take a look at some specific scenarios and wrap up with a couple of capabilities that are just emerging that can be used in the future.

    Why Build Enterprise Smartphone Applications

    So what are some of the ways that you can leverage smartphone technology to gain efficiency in your business or a client's business? There are a few major areas where I have seen mobile platforms being an advantage.

    Your mobile sales force is a key candidate for leveraging smartphone apps. They can visit clients in their retail location and place orders on site. It is a more personal approach which can gain you customer loyalty. A sales person may also gather information about the way a client does business or who their target market is. This allows you to focus marketing information or build customized support for your customer.

    You may also have a need to track physical inventory in a store. This is something that has historically been done with laser scanners, but with the camera capabilities in today's phones and tablets it is possible to use more general multi-purpose devices. This can save costs on both hardware and telecommunication contracts.

    Delivery verification is another area that historically has been the domain of specialized devices but can now be accomplished with smartphones. This also reduces costs because the device is also used for communicating with the driver and other operations. Add to that the navigation capability of smartphones and you can see how the return on investment increases.

    Executives are always on the go. They spend most of their time in meetings and yet they need access to decision-making information at their fingertips. With a smartphone app they can get alerts when major sales are closed or critical accounting processes are completed that may need their attention. They can also answer questions by instantly pulling up BI reports.

    I have often heard operations support people say that they need things like VPN and RDP from their phones. If they can also have notifications of outages or critical support requests, they can react to situations without needing to be tied to their desks.

    These are all valid reasons to need smartphone applications. In the next installment I will discuss platforms and features.

    del.icio.us Tags: Smartphones, Enterprise Smartphone Apps, Architecture

    Read the article

  • High Resolution Timeouts

    - by user12607257
    The default resolution of application timers and timeouts is now 1 msec in Solaris 11.1, down from 10 msec in previous releases. This improves out-of-the-box performance of polling and event based applications, such as ticker applications, and even the Oracle rdbms log writer. More on that in a moment.

    As a simple example, the poll() system call takes a timeout argument in units of msec:

        System Calls                                              poll(2)

        NAME
            poll - input/output multiplexing

        SYNOPSIS
            int poll(struct pollfd fds[], nfds_t nfds, int timeout);

    In Solaris 11, a call to poll(NULL,0,1) returns in 10 msec, because even though a 1 msec interval is requested, the implementation rounds to the system clock resolution of 10 msec. In Solaris 11.1, this call returns in 1 msec.

    In specification lawyer terms, the resolution of CLOCK_REALTIME, introduced by the POSIX.1b real time extensions, is now 1 msec. The function clock_getres(CLOCK_REALTIME,&res) returns 1 msec, and any library calls whose man page explicitly mentions CLOCK_REALTIME, such as nanosleep(), are subject to the new resolution. Additionally, many legacy functions that pre-date POSIX.1b and do not explicitly mention a clock domain, such as poll(), are subject to the new resolution. Here is a fairly comprehensive list:

        nanosleep
        pthread_mutex_timedlock pthread_mutex_reltimedlock_np
        pthread_rwlock_timedrdlock pthread_rwlock_reltimedrdlock_np
        pthread_rwlock_timedwrlock pthread_rwlock_reltimedwrlock_np
        mq_timedreceive mq_reltimedreceive_np
        mq_timedsend mq_reltimedsend_np
        sem_timedwait sem_reltimedwait_np
        poll select pselect
        _lwp_cond_timedwait _lwp_cond_reltimedwait
        semtimedop sigtimedwait
        aiowait aio_waitn aio_suspend
        port_get port_getn
        cond_timedwait cond_reltimedwait
        setitimer (ITIMER_REAL)
        misc rpc calls, misc ldap calls

    This change in resolution was made feasible because we made the implementation of timeouts more efficient a few years back when we re-architected the callout subsystem of Solaris. Previously, timeouts were tested and expired by the kernel's clock thread, which ran 100 times per second, yielding a resolution of 10 msec. This did not scale, as timeouts could be posted by every CPU but were expired by only a single thread. The resolution could be changed by setting hires_tick=1 in /etc/system, but this caused the clock thread to run at 1000 Hz, which made the potential scalability problem worse. Given enough CPUs posting enough timeouts, the clock thread could be a performance bottleneck. We fixed that by re-implementing the timeout as a per-CPU timer interrupt (using the cyclic subsystem, for those familiar with Solaris internals). This decoupled the clock thread frequency from timeout resolution, and allowed us to improve default timeout resolution without adding CPU overhead in the clock thread.

    Here are some exceptions for which the default resolution is still 10 msec:

    - The thread scheduler's time quantum is 10 msec by default, because preemption is driven by the clock thread (plus helper threads for scalability). See for example dispadmin, priocntl, fx_dptbl, rt_dptbl, and ts_dptbl. This may be changed using hires_tick.
    - The resolution of the clock_t data type, primarily used in DDI functions, is 10 msec. It may be changed using hires_tick. These functions are only used by developers writing kernel modules.
    - A few functions that pre-date POSIX CLOCK_REALTIME mention _SC_CLK_TCK, CLK_TCK, "system clock", or no clock domain. These functions are still driven by the clock thread, and their resolution is 10 msec. They include alarm, pcsample, times, clock, and setitimer for ITIMER_VIRTUAL and ITIMER_PROF. Their resolution may be changed using hires_tick.

    Now back to the database. How does this help the Oracle log writer? Foreground processes post a redo record to the log writer, which releases them after the redo has committed. When a large number of foregrounds are waiting, the release step can slow down the log writer, so under heavy load, the foregrounds switch to a mode where they poll for completion. This scales better because every foreground can poll independently, but at the cost of waiting the minimum polling interval. That was 10 msec, but is now 1 msec in Solaris 11.1, so the foregrounds process transactions faster under load. Pretty cool.
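    As an aside, one way to observe the effective timeout resolution from user land is to time a minimal sleep in a loop. Here is a rough Java sketch, under the assumption that the JVM's Thread.sleep maps onto one of the OS timeout primitives listed above; if the claims above hold, you would expect roughly 10 msec per iteration on a system with a 10 msec clock resolution, and roughly 1 msec on Solaris 11.1.

        public class TimerResolutionProbe {
            public static void main(String[] args) throws InterruptedException {
                final int iterations = 100;
                long total = 0;
                for (int i = 0; i < iterations; i++) {
                    long start = System.nanoTime();
                    Thread.sleep(1); // request a 1 msec timeout
                    total += System.nanoTime() - start;
                }
                System.out.printf("average sleep(1) latency: %.2f msec%n",
                        total / (iterations * 1_000_000.0));
            }
        }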

    Read the article

  • Best way to load application settings

    - by enzom83
    A simple way to keep the settings of a Java application is a text file with the ".properties" extension, containing the identifier of each setting associated with a specific value (this value may be a number, string, date, etc.). C# uses a similar approach, but the text file must be named "App.config". In both cases, in source code you must initialize a specific class for reading settings: this class has a method that returns the value (as a string) associated with the specified setting identifier.

        // Java example
        Properties config = new Properties();
        config.load(...);
        String valueStr = config.getProperty("listening-port");
        // ...

        // C# example
        NameValueCollection setting = ConfigurationManager.AppSettings;
        string valueStr = setting["listening-port"];
        // ...

    In both cases we should parse the strings loaded from the configuration file and assign the converted values to the related typed objects (parsing errors could occur during this phase). After the parsing step, we must check that the setting values belong to a specific domain of validity: for example, the maximum size of a queue should be a positive value, some values may be related (example: min < max), and so on.

    Suppose that the application should load the settings as soon as it starts: in other words, the first operation performed by the application is to load the settings. Any invalid values for the settings must be replaced automatically with default values: if this happens to a group of related settings, those settings are all set with default values.

    The easiest way to perform these operations is to create a method that first parses all the settings, then checks the loaded values and finally sets any default values (a minimal sketch of this appears at the end of this post). However, maintenance is difficult if you use this approach: as the number of settings increases while developing the application, it becomes increasingly difficult to update the code.

    In order to solve this problem, I had thought of using the Template Method pattern, as follows.

        public abstract class Setting
        {
            protected abstract bool TryParseValues();
            protected abstract bool CheckValues();
            public abstract void SetDefaultValues();

            /// <summary>
            /// Template Method
            /// </summary>
            public bool TrySetValuesOrDefault()
            {
                if (!TryParseValues() || !CheckValues())
                {
                    // parsing error or domain error
                    SetDefaultValues();
                    return false;
                }
                return true;
            }
        }

        public class RangeSetting : Setting
        {
            private string minStr, maxStr;
            private byte min, max;

            public RangeSetting(string minStr, string maxStr)
            {
                this.minStr = minStr;
                this.maxStr = maxStr;
            }

            protected override bool TryParseValues()
            {
                return (byte.TryParse(minStr, out min) && byte.TryParse(maxStr, out max));
            }

            protected override bool CheckValues()
            {
                return (0 < min && min < max);
            }

            public override void SetDefaultValues()
            {
                min = 5;
                max = 10;
            }
        }

    The problem is that in this way we need to create a new class for each setting, even for a single value. Are there other solutions to this kind of problem? In summary:

    - Easy maintenance: for example, the addition of one or more parameters.
    - Extensibility: a first version of the application could read a single configuration file, but later versions may give the possibility of a multi-user setup (admin sets up a basic configuration, users can set only certain settings, etc.).
    - Object oriented design.
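    For contrast, here is a minimal Java sketch of the parse-check-default flow described above for a single setting (the "listening-port" key and the 8080 fallback are assumptions); repeating a block like this for every setting is exactly what makes the naive approach hard to maintain.

        import java.util.Properties;

        public class NaiveSettingsLoader {

            // Parse -> check -> default for one setting; every additional
            // setting needs another block like this.
            public static int loadListeningPort(Properties config) {
                final int DEFAULT_PORT = 8080; // assumed default value
                int port;
                try {
                    // Parse step: may throw NumberFormatException.
                    port = Integer.parseInt(config.getProperty("listening-port", ""));
                } catch (NumberFormatException e) {
                    return DEFAULT_PORT; // parsing error
                }
                // Check step: the value must lie in the valid TCP port range.
                if (port < 1 || port > 65535) {
                    return DEFAULT_PORT; // domain error
                }
                return port;
            }
        }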

    Read the article

  • Business Objects Enterprise reporting using SDK client gives exception

    - by Dev_Karl
    Hi! We have a client that is using the SDK for invoking reports on the Business Objects Embedded Report Server. We can log in, but when calling the openDocument method, something goes wrong.

    Code:

        // logon
        IEnterpriseSession session = sessionMgr.logon(username, password, clusterNode, authType);
        ....
        clientDoc = reportAppFactory.openDocument(report, 0, locale); /* row 58 in exception */

    Exception:

        com.crystaldecisions.sdk.occa.report.lib.ReportSDKServerException: Unable to connect to the server: . - Server not found or server may be down---- Error code:-2147217387 Error code name:connectServer
            at com.crystaldecisions.sdk.occa.managedreports.ras.internal.RASReportAppFactory.a(Unknown Source)
            at com.crystaldecisions.sdk.occa.managedreports.ras.internal.RASReportAppFactory.a(Unknown Source)
            at com.crystaldecisions.sdk.occa.managedreports.ras.internal.RASReportAppFactory.a(Unknown Source)
            at com.crystaldecisions.sdk.occa.managedreports.ras.internal.RASReportAppFactory.openDocument(Unknown Source)
            at com.reportclient.MyReportClient.getReportFromInfoStore(MyReportClient.java:58)
            ... 28 more
        Caused by: com.crystaldecisions.sdk.occa.report.lib.ReportSDKServerException: Unable to connect to the server: . - Server not found or server may be down---- Error code:-2147217387 Error code name:connectServer
            at com.crystaldecisions.sdk.occa.report.lib.ReportSDKServerException.throwReportSDKServerException(Unknown Source)
            at com.crystaldecisions.sdk.occa.managedreports.ras.internal.CECORBACommunicationAdapter.connect(Unknown Source)
            ... 32 more
        Caused by: com.crystaldecisions.enterprise.ocaframework.OCAFrameworkException$NotFoundInDirectory: Server not found or server may be down
            at com.crystaldecisions.enterprise.ocaframework.j.find(Unknown Source)
            at com.crystaldecisions.enterprise.ocaframework.AbstractServerHandler.buildServerInfo(Unknown Source)
            at com.crystaldecisions.enterprise.ocaframework.AbstractServerHandler.buildClusterInfo(Unknown Source)
            at com.crystaldecisions.enterprise.ocaframework.aa.for(Unknown Source)
            at com.crystaldecisions.enterprise.ocaframework.ServiceMgr.for(Unknown Source)
            at com.crystaldecisions.enterprise.ocaframework.o.a(Unknown Source)
            at com.crystaldecisions.enterprise.ocaframework.o.a(Unknown Source)
            at com.crystaldecisions.enterprise.ocaframework.o.a(Unknown Source)
            at com.crystaldecisions.enterprise.ocaframework.p.a(Unknown Source)
            at com.crystaldecisions.enterprise.ocaframework.ServiceMgr.getManagedService(Unknown Source)
            ... 33 more

    The communication obviously works when logging in. Please let me know if you have any ideas or know where I can go and look for the answer. :)

    Regards,
    Karl

    Read the article

  • How to get JSF2 working with Seam2

    - by Walter White
    Hi all,

    Is it possible to get JSF2 working on the latest production Seam release (2.2.1.GA)? I get this error on startup:

        javax.faces.view.facelets.FaceletException: Must have a Constructor that takes in a ComponentConfig
            at com.sun.faces.facelets.tag.AbstractTagLibrary$UserComponentHandlerFactory.<init>(AbstractTagLibrary.java:289)
            at com.sun.faces.facelets.tag.AbstractTagLibrary.addComponent(AbstractTagLibrary.java:519)
            at com.sun.faces.facelets.tag.TagLibraryImpl.putComponent(TagLibraryImpl.java:111)
            at com.sun.faces.config.processor.FaceletTaglibConfigProcessor.processComponent(FaceletTaglibConfigProcessor.java:569)
            at com.sun.faces.config.processor.FaceletTaglibConfigProcessor.processTags(FaceletTaglibConfigProcessor.java:361)
            at com.sun.faces.config.processor.FaceletTaglibConfigProcessor.processTagLibrary(FaceletTaglibConfigProcessor.java:314)
            at com.sun.faces.config.processor.FaceletTaglibConfigProcessor.process(FaceletTaglibConfigProcessor.java:263)
            at com.sun.faces.config.ConfigManager.initialize(ConfigManager.java:337)
            at com.sun.faces.config.ConfigureListener.contextInitialized(ConfigureListener.java:223)
            at org.apache.catalina.core.StandardContext.contextListenerStart(StandardContext.java:4591)
            at com.sun.enterprise.web.WebModule.contextListenerStart(WebModule.java:535)
            at org.apache.catalina.core.StandardContext.start(StandardContext.java:5193)
            at com.sun.enterprise.web.WebModule.start(WebModule.java:499)
            at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:928)
            at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:912)
            at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:694)
            at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1933)
            at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1605)
            at com.sun.enterprise.web.WebApplication.start(WebApplication.java:90)
            at org.glassfish.internal.data.EngineRef.start(EngineRef.java:126)
            at org.glassfish.internal.data.ModuleInfo.start(ModuleInfo.java:241)
            at org.glassfish.internal.data.ApplicationInfo.start(ApplicationInfo.java:236)
            at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:339)
            at org.glassfish.kernel.embedded.EmbeddedDeployerImpl.deploy(EmbeddedDeployerImpl.java:214)
            at org.glassfish.kernel.embedded.EmbeddedDeployerImpl.deploy(EmbeddedDeployerImpl.java:144)
            at org.glassfish.maven.RunMojo.execute(RunMojo.java:105)
            at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:569)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:539)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:348)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180)
            at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328)
            at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138)
            at org.apache.maven.cli.MavenCli.main(MavenCli.java:362)
            at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:616)
            at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315)
            at org.codehaus.classworlds.Launcher.launch(Launcher.java:255)
            at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430)
            at org.codehaus.classworlds.Launcher.main(Launcher.java:375)
        Caused by: java.lang.NoSuchMethodException: org.jboss.seam.ui.handler.CommandButtonParameterComponentHandler.<init>(javax.faces.view.facelets.ComponentConfig)
            at java.lang.Class.getConstructor0(Class.java:2723)
            at java.lang.Class.getConstructor(Class.java:1674)
            at com.sun.faces.facelets.tag.AbstractTagLibrary$UserComponentHandlerFactory.<init>(AbstractTagLibrary.java:287)
            ... 44 more

        May 23, 2010 9:35:41 AM org.apache.catalina.core.StandardContext start
        SEVERE: PWC1306: Startup of context /WalterJWhite-1.0.2-SNAPSHOT-Development failed due to previous errors
        May 23, 2010 9:35:41 AM org.apache.catalina.core.StandardContext start
        SEVERE: PWC1305: Exception during cleanup after start failed
        org.apache.catalina.LifecycleException: PWC2769: Manager has not yet been started
            at org.apache.catalina.session.StandardManager.stop(StandardManager.java:892)
            at org.apache.catalina.core.StandardContext.stop(StandardContext.java:5383)
            at com.sun.enterprise.web.WebModule.stop(WebModule.java:530)
            at org.apache.catalina.core.StandardContext.start(StandardContext.java:5211)
            at com.sun.enterprise.web.WebModule.start(WebModule.java:499)
            at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:928)
            at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:912)
            at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:694)
            at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1933)
            at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1605)
            at com.sun.enterprise.web.WebApplication.start(WebApplication.java:90)
            at org.glassfish.internal.data.EngineRef.start(EngineRef.java:126)
            at org.glassfish.internal.data.ModuleInfo.start(ModuleInfo.java:241)
            at org.glassfish.internal.data.ApplicationInfo.start(ApplicationInfo.java:236)
            at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:339)
            at org.glassfish.kernel.embedded.EmbeddedDeployerImpl.deploy(EmbeddedDeployerImpl.java:214)
            at org.glassfish.kernel.embedded.EmbeddedDeployerImpl.deploy(EmbeddedDeployerImpl.java:144)
            at org.glassfish.maven.RunMojo.execute(RunMojo.java:105)
            at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:569)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:539)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:348)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180)
            at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328)
            at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138)
            at org.apache.maven.cli.MavenCli.main(MavenCli.java:362)
            at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:616)
            at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315)
            at org.codehaus.classworlds.Launcher.launch(Launcher.java:255)
            at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430)
            at org.codehaus.classworlds.Launcher.main(Launcher.java:375)

        May 23, 2010 9:35:41 AM org.apache.catalina.core.ContainerBase addChildInternal
        SEVERE: ContainerBase.addChild: start: org.apache.catalina.LifecycleException: com.sun.faces.config.ConfigurationException: CONFIGURATION FAILED! org.jboss.seam.ui.handler.CommandButtonParameterComponentHandler.<init>(javax.faces.view.facelets.ComponentConfig)
            at org.apache.catalina.core.StandardContext.start(StandardContext.java:5216)
            at com.sun.enterprise.web.WebModule.start(WebModule.java:499)
            at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:928)
            at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:912)
            at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:694)
            at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1933)
            at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1605)
            at com.sun.enterprise.web.WebApplication.start(WebApplication.java:90)
            at org.glassfish.internal.data.EngineRef.start(EngineRef.java:126)
            at org.glassfish.internal.data.ModuleInfo.start(ModuleInfo.java:241)
            at org.glassfish.internal.data.ApplicationInfo.start(ApplicationInfo.java:236)
            at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:339)
            at org.glassfish.kernel.embedded.EmbeddedDeployerImpl.deploy(EmbeddedDeployerImpl.java:214)
            at org.glassfish.kernel.embedded.EmbeddedDeployerImpl.deploy(EmbeddedDeployerImpl.java:144)
            at org.glassfish.maven.RunMojo.execute(RunMojo.java:105)
            at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:569)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:539)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:348)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180)
            at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328)
            at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138)
            at org.apache.maven.cli.MavenCli.main(MavenCli.java:362)
            at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:616)
            at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315)
            at org.codehaus.classworlds.Launcher.launch(Launcher.java:255)
            at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430)
            at org.codehaus.classworlds.Launcher.main(Launcher.java:375)
        Caused by: com.sun.faces.config.ConfigurationException: CONFIGURATION FAILED! org.jboss.seam.ui.handler.CommandButtonParameterComponentHandler.<init>(javax.faces.view.facelets.ComponentConfig)
            at com.sun.faces.config.ConfigManager.initialize(ConfigManager.java:354)
            at com.sun.faces.config.ConfigureListener.contextInitialized(ConfigureListener.java:223)
            at org.apache.catalina.core.StandardContext.contextListenerStart(StandardContext.java:4591)
            at com.sun.enterprise.web.WebModule.contextListenerStart(WebModule.java:535)
            at org.apache.catalina.core.StandardContext.start(StandardContext.java:5193)
            ... 33 more
        Caused by: java.lang.NoSuchMethodException: org.jboss.seam.ui.handler.CommandButtonParameterComponentHandler.<init>(javax.faces.view.facelets.ComponentConfig)
            at java.lang.Class.getConstructor0(Class.java:2723)
            at java.lang.Class.getConstructor(Class.java:1674)
            at com.sun.faces.facelets.tag.AbstractTagLibrary$UserComponentHandlerFactory.<init>(AbstractTagLibrary.java:287)
            at com.sun.faces.facelets.tag.AbstractTagLibrary.addComponent(AbstractTagLibrary.java:519)
            at com.sun.faces.facelets.tag.TagLibraryImpl.putComponent(TagLibraryImpl.java:111)
            at com.sun.faces.config.processor.FaceletTaglibConfigProcessor.processComponent(FaceletTaglibConfigProcessor.java:569)
            at com.sun.faces.config.processor.FaceletTaglibConfigProcessor.processTags(FaceletTaglibConfigProcessor.java:361)
            at com.sun.faces.config.processor.FaceletTaglibConfigProcessor.processTagLibrary(FaceletTaglibConfigProcessor.java:314)
            at com.sun.faces.config.processor.FaceletTaglibConfigProcessor.process(FaceletTaglibConfigProcessor.java:263)
            at com.sun.faces.config.ConfigManager.initialize(ConfigManager.java:337)
            ... 37 more

        May 23, 2010 9:35:41 AM com.sun.enterprise.web.WebApplication start
        WARNING: java.lang.IllegalStateException: ContainerBase.addChild: start: org.apache.catalina.LifecycleException: com.sun.faces.config.ConfigurationException: CONFIGURATION FAILED! org.jboss.seam.ui.handler.CommandButtonParameterComponentHandler.<init>(javax.faces.view.facelets.ComponentConfig)
        java.lang.IllegalStateException: ContainerBase.addChild: start: org.apache.catalina.LifecycleException: com.sun.faces.config.ConfigurationException: CONFIGURATION FAILED!
org.jboss.seam.ui.handler.CommandButtonParameterComponentHandler.<init>(javax.faces.view.facelets.ComponentConfig) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:932) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:912) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:694) at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1933) at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1605) at com.sun.enterprise.web.WebApplication.start(WebApplication.java:90) at org.glassfish.internal.data.EngineRef.start(EngineRef.java:126) at org.glassfish.internal.data.ModuleInfo.start(ModuleInfo.java:241) at org.glassfish.internal.data.ApplicationInfo.start(ApplicationInfo.java:236) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:339) at org.glassfish.kernel.embedded.EmbeddedDeployerImpl.deploy(EmbeddedDeployerImpl.java:214) at org.glassfish.kernel.embedded.EmbeddedDeployerImpl.deploy(EmbeddedDeployerImpl.java:144) at org.glassfish.maven.RunMojo.execute(RunMojo.java:105) at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:569) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:539) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:348) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138) at org.apache.maven.cli.MavenCli.main(MavenCli.java:362) at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315) at org.codehaus.classworlds.Launcher.launch(Launcher.java:255) at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430) at org.codehaus.classworlds.Launcher.main(Launcher.java:375) May 23, 2010 9:35:41 AM org.glassfish.api.ActionReport failure SEVERE: Exception while invoking class com.sun.enterprise.web.WebApplication start method java.lang.Exception: java.lang.IllegalStateException: ContainerBase.addChild: start: org.apache.catalina.LifecycleException: com.sun.faces.config.ConfigurationException: CONFIGURATION FAILED! 
org.jboss.seam.ui.handler.CommandButtonParameterComponentHandler.<init>(javax.faces.view.facelets.ComponentConfig) at com.sun.enterprise.web.WebApplication.start(WebApplication.java:117) at org.glassfish.internal.data.EngineRef.start(EngineRef.java:126) at org.glassfish.internal.data.ModuleInfo.start(ModuleInfo.java:241) at org.glassfish.internal.data.ApplicationInfo.start(ApplicationInfo.java:236) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:339) at org.glassfish.kernel.embedded.EmbeddedDeployerImpl.deploy(EmbeddedDeployerImpl.java:214) at org.glassfish.kernel.embedded.EmbeddedDeployerImpl.deploy(EmbeddedDeployerImpl.java:144) at org.glassfish.maven.RunMojo.execute(RunMojo.java:105) at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:569) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:539) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:348) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138) at org.apache.maven.cli.MavenCli.main(MavenCli.java:362) at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315) at org.codehaus.classworlds.Launcher.launch(Launcher.java:255) at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430) at org.codehaus.classworlds.Launcher.main(Launcher.java:375) May 23, 2010 9:35:41 AM org.glassfish.api.ActionReport failure SEVERE: Exception while loading the app java.lang.Exception: java.lang.IllegalStateException: ContainerBase.addChild: start: org.apache.catalina.LifecycleException: com.sun.faces.config.ConfigurationException: CONFIGURATION FAILED! 
org.jboss.seam.ui.handler.CommandButtonParameterComponentHandler.<init>(javax.faces.view.facelets.ComponentConfig) at com.sun.enterprise.web.WebApplication.start(WebApplication.java:117) at org.glassfish.internal.data.EngineRef.start(EngineRef.java:126) at org.glassfish.internal.data.ModuleInfo.start(ModuleInfo.java:241) at org.glassfish.internal.data.ApplicationInfo.start(ApplicationInfo.java:236) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:339) at org.glassfish.kernel.embedded.EmbeddedDeployerImpl.deploy(EmbeddedDeployerImpl.java:214) at org.glassfish.kernel.embedded.EmbeddedDeployerImpl.deploy(EmbeddedDeployerImpl.java:144) at org.glassfish.maven.RunMojo.execute(RunMojo.java:105) at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:569) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:539) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:348) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138) at org.apache.maven.cli.MavenCli.main(MavenCli.java:362) at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315) at org.codehaus.classworlds.Launcher.launch(Launcher.java:255) at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430) at org.codehaus.classworlds.Launcher.main(Launcher.java:375) classLoader = WebappClassLoader (delegate=true; repositories=WEB-INF/classes/) SharedSecrets.getJavaNetAccess()=java.net.URLClassLoader$7@61b1acc3 Walter
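    The root cause in the trace is the NoSuchMethodException: JSF 2's Mojarra runtime instantiates Facelets tag handlers reflectively through a constructor taking javax.faces.view.facelets.ComponentConfig, while Seam 2.2.x's seam-ui handlers were built against the pre-JSF 2 Facelets API and lack that constructor. A minimal sketch of the constructor shape the runtime looks for (the class name here is illustrative, not Seam's):

        // A handler compatible with JSF 2's built-in Facelets must expose a
        // public constructor taking the JSF 2 ComponentConfig; Mojarra resolves
        // it via getConstructor(ComponentConfig.class), which is exactly the
        // call failing in the trace above.
        import javax.faces.view.facelets.ComponentConfig;
        import javax.faces.view.facelets.ComponentHandler;

        public class ExampleParameterComponentHandler extends ComponentHandler {

            public ExampleParameterComponentHandler(ComponentConfig config) {
                super(config);
            }
        }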

  • What are the reasons to use dos batch programs in Windows?

    - by DVK
    Question: What would be a good (ideally, technical) reason to ever program some non-trivial task in DOS batch language on a modern Windows system, as opposed to downloading either PowerShell or ActiveState Perl? To be more specific, I make the following two assumptions for the duration of this question: anyone technical enough to write a medium-complexity batch script is technical enough to install either of the scripting interpreters; and neither of those two presents enough of a learning curve for basic batch-replacement tasks that said curve would outweigh the pain of doing any remotely non-trivial task in batch.

    Notes: "You need a batch program for autoexec.bat" is not a valid reason; your autoexec.bat may consist of simply calling a non-batch script. If you disagree with either of my two assumptions above, that's fine, and I may be wrong. But my question is specifically: "assuming those two assumptions are correct, what would be the reason to still stick with batch?" If it makes it easier to suspend disbelief (in case you disagree with me), add a third assumption that the question is limited to people who already possess at least some modicum of PowerShell or Perl experience. To reiterate, this is not meant to be a subjective question about how easy it is to learn PSh or ASPerl compared to doing advanced batch coding; that is a separate question, too subjective to be bothered with in this post.

    Background: I used to do some fairly complicated batch programming back in the elder days, and remember batch as one of the worst possible programming languages I had encountered. The idea for this question came after seeing a bunch of batch questions on SO, trying to grok the answer to one of them out of sheer curiosity, and giving up in pain after a minute, exclaiming mentally "why would anyone go through this pain instead of doing that in one line of Perl?" :)

    My own plausible answer: I assume there may be some unlikely DOS-compatible system out there that has a DOS interpreter but no compatible PowerShell or Perl. I'm not aware of one, but it's not completely impossible.
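    For a concrete taste of the comparison the background paragraph gestures at, here is one made-up task (print each .log file's name without its extension) in both languages; the file mask and task are purely illustrative:

        :: batch (in a .bat file): print each *.log file name without extension
        for %%F in (*.log) do @echo %%~nF

        # PowerShell equivalent
        Get-ChildItem *.log | ForEach-Object { $_.BaseName }

    The %%~nF modifier syntax is the kind of opacity the question is getting at.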

  • Dynamics CRM Customer Portal Accelerator Installation

    - by saturdayplace
    (I've posted this question on the CodePlex forums too, but have yet to get a response.) I've got an on-premise installation of CRM and I'm trying to hook the portal to it. My connection string in web.config:

    <connectionStrings>
      <add name="Xrm"
           connectionString="Authentication Type=AD; Server=http://myserver:myport/MyOrgName; User ID=mydomain\crmwebuser; Password=thepassword" />
    </connectionStrings>

    And my membership provider:

    <membership defaultProvider="CustomCRMProvider">
      <providers>
        <add connectionStringName="Xrm" applicationName="/"
             enablePasswordRetrieval="false" enablePasswordReset="true"
             requiresQuestionAndAnswer="false" requiresUniqueEmail="true"
             passwordFormat="Hashed" minRequiredPasswordLength="1"
             minRequiredNonalphanumericCharacters="0"
             name="CustomCRMProvider"
             type="System.Web.Security.SqlMembershipProvider" />
      </providers>
    </membership>

    Now, I'm super new to MS-style web development, so please help me if I'm missing something. In Visual Studio 2010, when I go to Project > ASP.NET Configuration, it launches the Web Site Administration Tool. When I click the Security tab there, I get the following error:

    There is a problem with your selected data store. This can be caused by an invalid server name or credentials, or by insufficient permission. It can also be caused by the role manager feature not being enabled. Click the button below to be redirected to a page where you can choose a new data store. The following message may help in diagnosing the problem: An error occurred while attempting to initialize a System.Data.SqlClient.SqlConnection object. The value that was provided for the connection string may be wrong, or it may contain an invalid syntax. Parameter name: connectionString

    I can't see what I'm doing wrong here. Does the user mydomain\crmwebuser need certain permissions in the SQL database, or somewhere else?

    Edit: On the home page of the Web Site Administration Tool, I have the following:

    Application: /
    Current User Name: MACHINENAME\USERACCOUNT

    Which is obviously a different set of credentials than mydomain\crmwebuser. Is this part of the problem?
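    One hedged observation on the error text itself: the provider is declared as System.Web.Security.SqlMembershipProvider, so ASP.NET hands the "Xrm" connection string to a System.Data.SqlClient.SqlConnection, which cannot parse "Authentication Type=AD; Server=..." and fails exactly as reported. A sketch of the mismatch (server and database names are made up):

        <!-- What SqlMembershipProvider can actually consume: a SQL Server
             connection string, typically pointing at an aspnetdb database. -->
        <connectionStrings>
          <add name="MembershipDb"
               connectionString="Data Source=myserver;Initial Catalog=aspnetdb;Integrated Security=True" />
        </connectionStrings>
        <!-- If authentication is meant to run against CRM instead, the provider
             type must be the CRM-backed membership provider shipped with the
             portal accelerator (check its sample web.config for the exact type
             name), not SqlMembershipProvider. -->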

  • javadoc and overloaded methods

    - by skrebbel
    Hi all, I'm developing an API with many identically named methods that differ only by signature, which I guess is fairly common. They all do the same thing, except that they initialize various values to defaults if the user does not want to specify them. As a digestible example, consider:

    public interface Forest {
        public Tree addTree();
        public Tree addTree(int amountOfLeaves);
        public Tree addTree(int amountOfLeaves, Fruit fruitType);
        public Tree addTree(int amountOfLeaves, int height);
        public Tree addTree(int amountOfLeaves, Fruit fruitType, int height);
    }

    The essential action performed by all of these methods is the same; a tree is planted in the forest. Many important things users of my API need to know about adding trees hold for all these methods. Ideally, I would like to write one Javadoc block that is used by all of them:

    /**
     * Plants a new tree in the forest. Please note that it may take
     * up to 30 years for the tree to be fully grown.
     *
     * @param amountOfLeaves desired amount of leaves. Actual amount of
     *        leaves at maturity may differ by up to 10%.
     * @param fruitType the desired type of fruit to be grown. No warranties
     *        are given with respect to flavour.
     * @param height desired height in centimeters. Actual height may differ
     *        by up to 15%.
     */

    In my imagination, a tool could magically choose which of the @params apply to each of the methods, and thus generate good docs for all methods at once. With Javadoc, if I understand it correctly, all I can do is essentially copy and paste the same Javadoc block five times, with only a slightly differing parameter list for each method. That sounds cumbersome and is also difficult to maintain. Is there any way around that? Some extension to Javadoc that has this kind of support? Or is there a good reason why this is not supported that I have missed?
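    Absent such a tool, one common workaround is to write the full Javadoc once, on the most complete overload, and point the shorter overloads at it with {@link}, for example:

        /**
         * Plants a new tree in the forest with the given leaf count, fruit
         * type, and height. (Full @param documentation lives here.)
         */
        public Tree addTree(int amountOfLeaves, Fruit fruitType, int height);

        /**
         * Equivalent to {@link #addTree(int, Fruit, int)} with a default height.
         */
        public Tree addTree(int amountOfLeaves, Fruit fruitType);

    This keeps the shared prose in a single place, at the cost of one extra click for readers of the shorter overloads.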

  • empty response body in ajax (or 206 Partial Content)

    - by Nikita Rybak
    Hi guys, I'm feeling completely stupid, because I've spent two hours solving a task which should be very simple and which I've solved many times before. But now I'm not even sure in which direction to dig. I fail to fetch static content using Ajax from local servers (Apache and Mongrel). I get responses 200 and 206 (depending on the server) with empty response text (although the Content-Length header is always correct), and Firebug shows the request in red. The JavaScript is very generic; I'm getting the same results even here: http://www.w3schools.com/ajax/tryit.asp?filename=tryajax_first (just change the document location to 'http://localhost:3000/whatever'). So it's probably not the cause. Well, now I'm out of ideas. I can also post the HTTP headers, if it'll help. Thanks!

    Response headers:
    Connection: close
    Date: Sat, 01 May 2010 21:05:23 GMT
    Last-Modified: Sun, 18 Apr 2010 19:33:26 GMT
    Content-Type: text/html
    Content-Length: 7466

    Request headers:
    Host: localhost:3000
    User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
    Accept-Language: en-us,en;q=0.5
    Accept-Encoding: gzip,deflate
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
    Keep-Alive: 115
    Connection: keep-alive
    Referer: http://www.w3schools.com/ajax/tryit_view.asp
    Origin: http://www.w3schools.com

    Response headers:
    Date: Sat, 01 May 2010 21:54:59 GMT
    Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 mod_jk/1.2.28
    Etag: "3d5cbdb-fb4-4819c460d4a40"
    Accept-Ranges: bytes
    Content-Length: 4020
    Cache-Control: max-age=7200, public, proxy-revalidate
    Expires: Sat, 01 May 2010 23:54:59 GMT
    Content-Range: bytes 0-4019/4020
    Keep-Alive: timeout=5, max=100
    Connection: Keep-Alive
    Content-Type: application/javascript

    Request headers:
    Host: localhost
    User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
    Accept-Language: en-us,en;q=0.5
    Accept-Encoding: gzip,deflate
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
    Keep-Alive: 115
    Connection: keep-alive
    Origin: null
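    For reference, a minimal sketch of the kind of request being described (the URL is illustrative). Note the Origin headers above: the page making the call is served from www.w3schools.com while the target is localhost, so this is a cross-origin request.

        // minimal XHR sketch of the scenario described above
        var xhr = new XMLHttpRequest();
        xhr.onreadystatechange = function () {
          if (xhr.readyState === 4) {
            // Firebug's Net panel can show a successful 200/206 here while
            // responseText stays empty: without CORS headers the browser
            // completes the request but withholds the body from script.
            alert(xhr.status + " / " + xhr.responseText.length);
          }
        };
        xhr.open("GET", "http://localhost:3000/whatever", true);
        xhr.send();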

  • Working with a large data object between ruby processes

    - by Gdeglin
    I have a Ruby hash that reaches approximately 10 megabytes if written to a file using Marshal.dump. After gzip compression it is approximately 500 kilobytes. Iterating through and altering this hash is very fast in Ruby (fractions of a millisecond). Even copying it is extremely fast. The problem is that I need to share the data in this hash between Ruby on Rails processes. In order to do this using the Rails cache (file_store or memcached) I need to Marshal.dump the hash first; however, this incurs a 1000 millisecond delay when serializing it and a 400 millisecond delay when deserializing it. Ideally I would want to be able to save and load this hash from each process in under 100 milliseconds.

    One idea is to spawn a new Ruby process to hold this hash and provide an API for the other processes to modify or process the data within it, but I want to avoid doing this unless I'm certain that there are no other ways to share this object quickly. Is there a way I can more directly share this hash between processes without needing to serialize or deserialize it?

    Here is the code I'm using to generate a hash similar to the one I'm working with:

    @a = []
    0.upto(500) do |r|
      @a[r] = []
      0.upto(10_000) do |c|
        if rand(10) == 0
          @a[r][c] = 1 # 10% chance of being 1
        else
          @a[r][c] = 0
        end
      end
    end

    @c = Marshal.dump(@a) # 1000 milliseconds
    Marshal.load(@c)      # 400 milliseconds

    Update: Since my original question did not receive many responses, I'm assuming there's no solution as easy as I would have hoped. Presently I'm considering two options:

    1. Create a Sinatra application to store this hash, with an API to modify/access it.
    2. Create a C application to do the same as #1, but a lot faster.

    The scope of my problem has increased such that the hash may be larger than in my original example, so #2 may be necessary. But I have no idea where to start in terms of writing a C application that exposes an appropriate API. A good walkthrough of how best to implement #1 or #2 may receive best-answer credit.
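    On option 1 above: before reaching for a full Sinatra app, a lighter sketch is DRb from Ruby's standard library, where one process owns the structure and others call into it over a socket (the URI, port, and class names here are illustrative):

        require 'drb/drb'

        # Server process: owns the big structure; clients get a remote handle.
        class MatrixHolder
          def initialize(data)
            @data = data
          end

          def get(r, c)
            @data[r][c]
          end

          def set(r, c, value)
            @data[r][c] = value
          end
        end

        DRb.start_service('druby://localhost:8787', MatrixHolder.new(@a))
        DRb.thread.join

        # Client process (e.g. a Rails worker):
        #   require 'drb/drb'
        #   holder = DRbObject.new_with_uri('druby://localhost:8787')
        #   holder.get(0, 0)

    Each call is a network round trip, so bulk scans should live server-side behind coarser methods rather than being done cell by cell from the client.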

  • ClassCastException When Calling an EJB that Exists on Same Server

    - by aaronvargas
    I have 2 EJBs: Ejb-A, which calls Ejb-B. They are not in the same EAR. For portability, Ejb-B may or may not exist on the same server. (There is an external property file that has the provider URLs of Ejb-B. I have no control over this.) Example code in Ejb-A:

    EjbBDelegate delegateB = EjbBDelegateHelper.getRemoteDelegate(); // lookup from list of URLs from props...
    BookOfMagic bom = delegateB.getSomethingInteresting();

    Use cases/outcomes: When Ejb-B DOES NOT EXIST on the same server as Ejb-A, everything works correctly (it round-robins through the URLs). When Ejb-B DOES EXIST on the same server, and Ejb-A happens to call Ejb-B on the same server, everything works correctly. When Ejb-B DOES EXIST on the same server, and Ejb-A calls Ejb-B on a different server, I get:

    javax.ejb.EJBException: nested exception is: java.lang.ClassCastException: $Proxy126
    java.lang.ClassCastException: $Proxy126

    NOTE (not a normal use case, but it may help): when Ejb-B DOES EXIST on the same server, and this server is NOT listed in the provider URLs for Ejb-B, and Ejb-A calls Ejb-B on a different server, I get the same exception as above.

    I'm using WebLogic 10.0, Java 5, EJB3. Basically, if Ejb-B exists on the server, it must be called ONLY on that server. Which leads me to believe that the class is getting loaded by a local classloader (on deployment?), and then, when called remotely, a different classloader is loading it, causing the exception. But it should work, as it should be serialized into the destination classloader... What am I doing wrong?? Also, when reproducing this locally, Ejb-A would favor the Ejb-B on the same server, so it was difficult to reproduce. But this wasn't the case on other machines.
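    One hedged note: with remote EJB references, the raw JNDI lookup result is often narrowed before the cast, since a direct cast to the interface class loaded by the local application's classloader is exactly where a $ProxyNN ClassCastException tends to surface. A sketch of what that looks like inside a delegate helper (the JNDI name and environment variable are illustrative, not from the code above):

        import javax.naming.Context;
        import javax.naming.InitialContext;
        import javax.rmi.PortableRemoteObject;

        // Roughly what EjbBDelegateHelper.getRemoteDelegate() might do:
        // envWithProviderUrl is a Hashtable carrying the chosen PROVIDER_URL.
        Context ctx = new InitialContext(envWithProviderUrl);
        Object raw = ctx.lookup("EjbBRemoteJndiName"); // illustrative JNDI name
        EjbBDelegate delegate =
            (EjbBDelegate) PortableRemoteObject.narrow(raw, EjbBDelegate.class);

    Whether narrowing alone fixes the mixed local/remote case here is uncertain; it depends on how WebLogic optimizes the co-located call into a local proxy.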

  • Can not parse table information from html document.

    - by Harikrishna
    I am parsing many HTML documents using the Html Agility Pack, and I want to extract tabular information from each one. There may be any number of tables in a document, but I want to extract only the one table whose column headers are NAME, PHONE NO, and ADDRESS. This table can be anywhere in the document: a document may contain ten tables, one of which holds many nested tables, and the table I want may sit inside those nested tables. In other words, the table can be anywhere, and I want to find it by its column header names; once found, I want to extract the information it contains.

    Currently I can find the table with the column headers NAME, PHONE NO, ADDRESS and extract information from it. What I do is first find all the tables in a document:

    foreach (var table in doc.DocumentNode.Descendants("table"))

    then get the rows of each table:

    var rows = table.Descendants("tr");

    and then check each row for the header names NAME, ADDRESS, PHONE NO; if a row matches, I skip it and extract all the information after that row:

    foreach (var row in rows.Skip(rowNo))
    {
        var data = new List<string>();
        foreach (var column in row.Descendants("td"))
        {
            data.Add(properText);
        }
    }

    With this I can extract the information from most documents. The problem is that for some documents the parsing fails. For example, in a document with ten tables, one table may itself contain many nested tables, and the table with the column headers NAME, ADDRESS, PHONE NO is buried inside those nested tables. I want to find that table by its column header names wherever it sits, parse the information from it, and skip the surrounding tabular information.
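    A consolidated sketch of the approach described above (header names as given; the method name is illustrative). The Ancestors filter is the piece that keeps rows belonging to nested tables from being mixed into the extraction, which may be what breaks on the heavily nested documents:

        using System.Collections.Generic;
        using System.Linq;
        using HtmlAgilityPack;

        static List<List<string>> ExtractContactTable(HtmlDocument doc)
        {
            // Descendants("table") walks the whole tree, so nested tables are
            // visited too; the header check finds the target wherever it sits.
            foreach (var table in doc.DocumentNode.Descendants("table"))
            {
                // Keep only rows whose nearest enclosing table is this one,
                // so rows of tables nested inside it are not swept in.
                var rows = table.Descendants("tr")
                                .Where(r => r.Ancestors("table").FirstOrDefault() == table)
                                .ToList();

                int headerIndex = rows.FindIndex(r =>
                {
                    var text = r.InnerText.ToUpperInvariant();
                    return text.Contains("NAME") && text.Contains("PHONE NO")
                                                 && text.Contains("ADDRESS");
                });
                if (headerIndex < 0) continue;

                var data = new List<List<string>>();
                foreach (var row in rows.Skip(headerIndex + 1))
                    data.Add(row.Descendants("td")
                                .Select(td => td.InnerText.Trim())
                                .ToList());
                return data;
            }
            return null; // no table with the expected header was found
        }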

  • DotNetOpenAuth OpenID on ISA 2006 Reverse Proxy problem

    - by userb00
    I am trying to host my site, which uses DotNetOpenAuth (OpenID), behind ISA 2006 (reverse proxy). After it authenticates with a provider (such as Google), it returns to a URL containing %253A, and the ISA HTTP filter rejects the request. What I had to do was open the ISA web publishing rule, configure the HTTP policy properties, and uncheck "Verify Normalization", after which it worked. Is this a problem with ISA 2006 generally? Do other firewalls have similar problems? Or is it an OpenID or DotNetOpenAuth issue? Is it safe to disable normalization checking on ISA?

    According to MSDN: "Web servers receive requests that are URL encoded. This means that certain characters may be replaced with a percent sign (%) followed by a particular number. For example, %20 corresponds to a space, so a request for http://myserver/My%20Dir/My%20File.htm is the same as a request for http://myserver/My Dir/My File.htm. Normalization is the process of decoding URL-encoded requests. Because the % can be URL encoded, an attacker can submit a carefully crafted request to a server that is basically double-encoded. If this occurs, Internet Information Services (IIS) may accept a request that it would otherwise reject as not valid. When you select Verify Normalization, the HTTP filter normalizes the URL two times. If the URL after the first normalization is different from the URL after the second normalization, the filter rejects the request. This prevents attacks that rely on double-encoded requests. Note that while we recommend that you use the Verify Normalization function, it may also block legitimate requests that contain a %."
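    The %253A in the return URL is exactly what trips that check: it is a percent-encoded %3A, so ISA's two normalization passes produce different strings. A small illustration of the mechanism (the query string is made up):

        using System;
        using System.Web; // reference System.Web.dll for HttpUtility

        class NormalizationCheckDemo
        {
            static void Main()
            {
                // A doubly encoded colon, as in an OpenID return_to parameter
                string raw = "return_to=http%253A%252F%252Fmysite%252Fauth";

                string once  = HttpUtility.UrlDecode(raw);  // ...http%3A%2F%2F...
                string twice = HttpUtility.UrlDecode(once); // ...http://...

                // ISA's "Verify Normalization" rejects the request whenever
                // the two decoding passes disagree, as they do here.
                Console.WriteLine(once == twice); // False
            }
        }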

  • Error Ant Build/deploy to websphere 7.0

    - by adisembiring
    Hi, I'm trying to build and deploy a WAR to WebSphere Process Server 7.0, running on Windows. I use http://illegalargumentexception.blogspot.com/2008/08/ant-automated-deployment-to-websphere.html as my reference and http://illegalargumentexception.googlecode.com/svn/trunk/code/java/WebSphereAntFiles/ as my sample code to deploy. This is my build.properties:

    #build properties
    mywebappear=D:/data/code/WebSphereAntFiles/scripts/test/mywebappEAR.ear
    #WAS6 install directory
    was_home=C:/IBM/WID7_WTE/runtimes/bi_v7
    #server name (see cell/node/server; e.g. "server1")
    was_server=server1
    #user + password; for use when security is enabled
    was_user=admin
    was_password=admin
    #stops scripts on problem
    was_failonerror=true
    #virtual host
    was_virtualhost=default_host
    #Absolute path to EAR file
    #was_ear=fooEAR.ear
    #Name of the enterprise application
    #was_appname=fooEAR

    This is my console output while trying to build with ws_ant.bat:

    [wsDefaultBindings] mywebapp.war
    [wsDefaultBindings] <virtual-host> --> default_host
    [wsDefaultBindings]
    [wsDefaultBindings] ------------------------
    [wsDefaultBindings] Saving EAR File to directory
    [wsDefaultBindings] Saved EAR File to directory Successfully
    test_wsStartServer:
    WAS_wsStartServer:
    depCheck:
    depCheck:
    [startServer] ADMU0116I: Tool information is being logged in file
    [startServer]            C:\IBM\WID7_WTE\runtimes\bi_v7\profiles\qwps\logs\server1\startServer.log
    [startServer] ADMU0128I: Starting tool with the qwps profile
    [startServer] ADMU3100I: Reading configuration for server: server1
    [startServer] ADMU3028I: Conflict detected on port 8880. Likely causes: a) An instance of
    [startServer]            the server server1 is already running b) some other process is
    [startServer]            using port 8880
    [startServer] ADMU3027E: An instance of the server may already be running: server1
    [startServer] ADMU0111E: Program exiting with error:
    [startServer]            com.ibm.websphere.management.exception.AdminException: ADMU3027E: An
    [startServer]            instance of the server may already be running: server1
    [startServer] ADMU1211I: To obtain a full trace of the failure, use the -trace option.
    [startServer] ADMU0211I: Error details may be seen in the file:
    [startServer]            C:/IBM/WID7_WTE/runtimes/bi_v7/profiles/qwps\logs\server1\startServer.log

    BUILD FAILED
    D:\data\code\WebSphereAntFiles\scripts\test\build.xml:68: The following error occurred while executing this line:
    D:\data\code\WebSphereAntFiles\scripts\was\wsStartServer.xml:49: Java returned: -1
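    ADMU3028I/ADMU3027E in this output say the build failed because server1 was already running (or something else held port 8880) when wsStartServer tried to start it. A hedged sketch of one way to guard the start step; the target names, property names, and task wiring here are illustrative, not taken from the referenced scripts:

        <target name="checkServerStatus">
          <!-- serverStatus.bat prints e.g. 'ADMU0508I: The Application Server
               "server1" is STARTED' when the server is already up. -->
          <exec executable="${was_home}/bin/serverStatus.bat"
                outputproperty="server.status">
            <arg value="${was_server}"/>
            <arg line="-username ${was_user} -password ${was_password}"/>
          </exec>
          <condition property="server.already.started">
            <contains string="${server.status}" substring="STARTED"/>
          </condition>
        </target>

        <!-- Only invoke the start logic when the status check did not
             report STARTED. -->
        <target name="startServerIfStopped"
                depends="checkServerStatus"
                unless="server.already.started">
          <ant antfile="wsStartServer.xml"/>
        </target>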
