Search Results

Search found 30930 results on 1238 pages for 'enterprise content manage'.

Page 48/1238 | < Previous Page | 44 45 46 47 48 49 50 51 52 53 54 55  | Next Page >

  • Talend Enterprise Data Integration overperforms on Oracle SPARC T4

    - by Amir Javanshir
    The SPARC T microprocessor, released in 2005 by Sun Microsystems and now continued at Oracle, has a good track record in parallel execution and multi-threaded performance. However, it was less suited to pure single-threaded workloads. The new SPARC T4 processor fills that gap by offering 5x better single-thread performance than previous generations. Following our long-term relationship with Talend, a fast-growing ISV positioned by Gartner in the “Visionaries” quadrant of the “Magic Quadrant for Data Integration Tools”, we decided to test some of their integration components with the T4 chip, more precisely on a T4-1 system, in order to verify first hand whether this new processor lives up to its promises. Several tests were performed, mainly focused on:

    - Single-thread performance of the new SPARC T4 processor compared to an older SPARC T2+ processor
    - Overall throughput of the SPARC T4-1 server using multiple threads

    The tests consisted of reading large amounts of data (tens of gigabytes), processing it, and writing it back to a file or an Oracle 11gR2 database table. They are CPU-, memory-, and IO-bound tests. Given the main focus of this project (CPU performance), bottlenecks were removed as much as possible on the memory and IO sub-systems. When possible, the data to process was put into the ZFS filesystem cache, for instance. Also, two external storage devices were directly attached to the servers under test, each one divided into two ZFS pools for read and write operations.

    Multi-thread: Testing throughput on the Oracle T4-1

    The tests were performed with different numbers of simultaneous threads (1, 2, 4, 8, 12, 16, 32, 48 and 64) and using different storage devices: Flash, Fibre Channel storage, two striped internal disks, and one single internal disk. All storage devices used ZFS for filesystem and volume management. Each thread read a dedicated 1 GB file containing 12.5M lines with the following structure:

    customerID;FirstName;LastName;StreetAddress;City;State;Zip;Cust_Status;Since_DT;Status_DT
    1;Ronald;Reagan;South Highway;Santa Fe;Montana;98756;A;04-06-2006;09-08-2008
    2;Theodore;Roosevelt;Timberlane Drive;Columbus;Louisiana;75677;A;10-05-2009;27-05-2008
    3;Andrew;Madison;S Rustle St;Santa Fe;Arkansas;75677;A;29-04-2005;09-02-2008
    4;Dwight;Adams;South Roosevelt Drive;Baton Rouge;Vermont;75677;A;15-02-2004;26-01-2007
    [...]

    The graphs of our test results showed that, unsurprisingly, up to 16 threads all files fit in the ZFS cache (a.k.a. L2ARC): once the cache is hot there is no performance difference depending on the underlying storage. From 16 threads upwards, however, it is clear that IO becomes a bottleneck, so having a good IO subsystem is key. Single-disk performance collapses, whereas the Sun F5100 and ST6180 arrays allow the T4-1 to scale quite seamlessly. From 32 to 64 threads, performance is almost constant, with just a slow decline.

    For the database load tests, only the best IO configuration (using external storage devices) was used, hosting the Oracle tablespaces and redo log files. Using the Sun Storage F5100 array allows the T4-1 server to scale up to 48 parallel JVM processes before saturating the CPU. The final result is a staggering 646K lines per second inserted into an Oracle table using 48 parallel threads.

    Single-thread: Testing the single-thread performance

    Seven different tests were performed on both servers.
    Since only one thread (and thus one file) was read, no IO bottleneck was involved: all data was served from the ZFS cache. The seven tests were:

    - Read File -> Filter -> Write File: read a file, filter the data, and write the filtered data to a new file. The filter is set on the "Status" column: only lines with status "A" are selected, which limits each output file to about 500 MB.
    - Read File -> Load Database Table: read a file and insert it into a single Oracle table.
    - Average: read a file, compute the average of a numeric column, and write the result to a new file.
    - Division & Square Root: read a file, perform a division and square root on a numeric column, and write the resulting data to a new file.
    - Oracle DB Dump: dump the content of an Oracle table (12.5M rows) into a CSV file.
    - Transform: read a file, transform it, and write the resulting data to a new file. The transformations applied are: set the address column to upper case, and add an extra column at the end which is the concatenation of two columns.
    - Sort: read a file, sort a numeric and an alphanumeric column, and write the resulting data to a new file.

    The following table presents the final results of the tests. The throughput unit is thousands of lines processed per second (K lines/second); Improvement is the percentage of improvement between the T5140 and the T4-1.

    Test                    T4-1 (Time s.)  T5140 (Time s.)  Improvement  T4-1 (Throughput)  T5140 (Throughput)
    Read/Filter/Write       125             806              645%         100                16
    Read/Load Database      195             1111             570%         64                 11
    Average                 96              557              580%         130                22
    Division & Square Root  161             1054             655%         78                 12
    Oracle DB Dump          164             945              576%         76                 13
    Transform               159             1124             707%         79                 11
    Sort                    251             1336             532%         50                 9

    The improvement in single-thread performance is quite dramatic: depending on the test, the T4 is between 5.4 and 7 times faster than the T2+. It seems clear that the SPARC T4 processor has gone a long way toward filling the gap in single-thread performance, without sacrificing multi-threaded capability, as it still shows very impressive scaling on heavy-duty multi-threaded jobs. Finally, as always at Oracle ISV Engineering, we are happy to help our ISV partners test their own applications on our platforms, so don't hesitate to contact us and let's see what the SPARC T4-based systems can do for your application!

    "As described in this benchmark, Talend Enterprise Data Integration has overperformed on T4. I was generally happy to see that the T4 gave scaling opportunities for many scenarios like complex aggregations. Row by row insertion in Oracle DB is faster with more than 650,000 rows per second without using any bulk Oracle capabilities!" Cedric Carbone, Talend CTO.
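    Talend jobs are designed graphically, so the benchmark code itself is not shown here; purely as an illustration (file names are hypothetical, and the field layout is taken from the sample data above, with Cust_Status as the eighth column), the Read File -> Filter -> Write File test logic amounts to something like this in Python:

        import csv

        # Sketch of the Read -> Filter -> Write test: keep only the lines
        # whose Cust_Status field (index 7) is "A".
        with open("customers.csv") as src, open("customers_active.csv", "w") as dst:
            reader = csv.reader(src, delimiter=";")
            writer = csv.writer(dst, delimiter=";")
            for row in reader:
                if len(row) > 7 and row[7] == "A":  # Cust_Status column
                    writer.writerow(row)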

    Read the article

  • Oracle WebCenter at the Enterprise 2.0 Conference

    - by Brian Dirking
    We had a great week at the E20 Conference, presenting in four sessions: Andy MacMillan gave a session titled Today’s Successful Enterprises are Social Enterprises and was on a panel that Tony Byrne moderated; Christian Finn spoke on a Unified Communications panel, Unified Communications + Social Computing = Best of Both Worlds?; and Mark Bennett spoke on a panel on The Evolution of Talent Management. The key areas of focus this year were sentiment analysis, adoption and community building, the benefits of failure, and social’s role in process applications.

    Sentiment analysis. This was focused not on external audiences but on employee sentiment. Tim Young showed his internal "NikoNiko" project, where employees use smilies to report their current mood. The result was a dashboard that showed the company mood by department. Since the goal is to improve productivity, people can see which departments are running into issues and try to address them. A company might otherwise wait until the end-of-quarter financials to find out that there was a problem and product didn’t ship; this is a way to identify issues immediately. Tim is great: he had the crowd laughing as soon as he hit the stage with his proposed hashtag for his session: by making it 138 characters long, people couldn’t say much behind his back. And as I tweeted during his session, I loved his comment that complexity diffuses energy; it sounds like something Sun Tzu would say. Another example of employee sentiment analysis was CubeVibe. Founder and CEO Aaron Aycock, in his 3-minute pitch-or-die session, talked about how engaged employees perform better. It was too bad he got gonged just as he was picking up speed, but CubeVibe did win the vote; congratulations to them.

    Internal adoption, community building, and involvement. On this topic I spoke to Terri Griffith, and she said there is some good work going on at Indiana University regarding this, and hinted that she might be blogging about it in the near future. This area holds lots of interest for me. Amongst our customers, CPAC stands out as an organization that has successfully built a community. So I wonder: what are the building blocks? A strong leader? A common or unifying purpose? A certain level of engagement? I imagine someone has created an equation that says “for a community to grow at 30% per month, there must be an engagement level x to the square root of y, where x equals current community size, and y equals the expected growth rate, and the result is how many engagements the average user must contribute to maintain that growth.” Does anyone have a framework like that?

    The net result of everyone’s experience is that there is nothing to do but start early and fail often. Kevin Jones made this the focus of his keynote. He talked about the types of failure and what they mean, and he showed his famous "kids at work" video. Kevin’s blog also has this post: Social Business Failure #8: Workflow Integration. This is something that we’ve been working on at Oracle. Since so much of business is based in enterprise applications such as ERP and CRM (and since Oracle offers e-Business Suite, Siebel, PeopleSoft, and JD Edwards, as well as Fusion Applications), it makes sense that the social capabilities of Oracle WebCenter are built right into these applications. There are two types of social collaboration: ad hoc, and exception handling.
    When you are in a business process and encounter an exception, you immediately look for 1) the document that tells you how to handle it, or 2) the person who can tell you how to handle it. With WebCenter built into these processes, people either search their content management system or engage in expertise location and conversation. The great thing is, THEY DON’T HAVE TO LEAVE THE APPLICATION TO DO IT. Oracle has built the social capabilities right into the applications and business processes. I don’t think enough folks were able to see that at the event, but I expect that over the next six months folks will become very aware of it. WebCenter also provides the ad-hoc collaboration, search, and expertise location capabilities that folks need when they are innovating or collaborating.

    We demonstrated Oracle Social Network. It’s built on our Oracle WebCenter product to provide social collaboration inside and outside of your company. When we showed it to people, they commented on a number of areas that were different from the other products being shown at the conference:

    - Screenshots from within the product
    - Many authors working on documents simultaneously
    - Flagging people for follow-up
    - Direct ability to call out to people
    - Ability to see presence: not just whether someone is online, but which conversation they are actively in

    Great stuff; the conference was full of smart people that we enjoy spending time with. We’ll keep up in the meantime, but we look forward to seeing you in Boston.

    Read the article

  • Install Oracle Enterprise Linux 5 on VMWare

    - by TUFEKCIOGLU,FATIH
    In this article, we will install Oracle Enterprise Linux 5 on VMware. To make the installation easier, I will show screenshots of all the steps. [The 62 installation screenshots are not reproduced in this excerpt.] The installation is complete. Thanks, Fatih Tufekcioglu

    Read the article

  • How to manage credentials on multiserver environment

    - by rush
    I have some software that uses its own encrypted file for password storage (such as FTP, web, and other passwords for logging in to external systems; there is no way to use certificates). On each server I have several instances of this software, and each instance has its own password file. The number of servers is growing steadily, and it is getting harder and harder to keep all passwords on all instances up to date. Unfortunately, some servers are in a segregated network with no access from them to any centralized storage, though it works the other way around. My first idea was to create a git repository, encrypt each password with gpg, store it there, and deliver it with our deployment system, but the security team was not satisfied with this idea, as it is insecure to store passwords in a repository even in encrypted form (their words). Nothing else comes to mind. Is there any way to implement safe and secure password storage with minimal effort to keep all passwords up to date? PS: if it matters, I run Red Hat everywhere.

    Read the article

  • How to send Content-Disposition headers in apache for files?

    - by Rory McCann
    I have a directory of text files that I'm serving out with Apache 2. Normally when I (or any user) access the files, they are displayed in the browser. I want to 'force'* the web browser to pop up a 'Save as' dialog box. I know this is possible with the Content-Disposition header (more info). Is there some way to turn that on for each file? Ideally I'd like something like this: <Directory textfiles> AutoAddContentDispositionHeaders On </Directory> And then Apache would set the correct Content-Disposition header, including using the same filename. Something like this might be possible with the Apache Header directive. Bonus points if it's included as standard in Apache in Debian. I could write a simple PHP wrapper script that takes a filename argument, makes the call to header(...), and then prints the file, but then I have to validate input etc., which is work I'm trying to avoid. * I know you can't actually force things when it comes to the web
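    There is no AutoAddContentDispositionHeaders directive, but mod_headers can get close to the behaviour described above. A minimal sketch, assuming mod_headers is enabled and that the browser falls back to the URL's basename for the suggested filename when no filename parameter is given (the directory path is hypothetical):

        <Directory /var/www/textfiles>
            # Ask the browser to download rather than display each file.
            Header set Content-Disposition "attachment"
        </Directory>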

    Read the article

  • Content-Length and Transfer-Encoding: chunked with nginx and node-http-proxy

    - by rampr
    I have the following setup: node-http-proxy acts as a reverse proxy, forwarding all requests to nginx or socket.io as necessary. My problem is this: when I send an HTTP DELETE request from the browser, node-http-proxy adds a "Transfer-Encoding: chunked" header, because the request from the browser had no Content-Length. The request from the browser had no Content-Length because it had no body. Nginx doesn't accept the Transfer-Encoding: chunked header and throws a 411, asking for a Content-Length. The problem gets solved when I send dummy data as part of the DELETE request: then there is a Content-Length, node-http-proxy doesn't add the Transfer-Encoding: chunked header, and nginx is happy. I want to understand whether node-http-proxy is working as expected when it adds a Transfer-Encoding: chunked header because the Content-Length is missing because there is no content body.
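    One hedged workaround, rather than sending dummy data: explicitly set a zero Content-Length on body-less requests before they are proxied, so Node's HTTP client never falls back to chunked encoding. This is only a sketch against the node-http-proxy 0.8.x-era API (host and ports are hypothetical), not a confirmed fix:

        var httpProxy = require('http-proxy');

        httpProxy.createServer(function (req, res, proxy) {
            // DELETE (and similar) requests with no body arrive without a
            // Content-Length; declare an empty body explicitly so the
            // outgoing request is not sent as Transfer-Encoding: chunked.
            if (!req.headers['content-length'] && !req.headers['transfer-encoding']) {
                req.headers['content-length'] = '0';
            }
            proxy.proxyRequest(req, res, { host: 'localhost', port: 8080 });
        }).listen(8000);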

    Read the article

  • Solution to Manage and Monitor (Ubuntu) Machines

    - by Elmar Weber
    I'm looking for a tool like Canonical's Landscape (system management and monitoring for Ubuntu) that is open source and free. The goal is to manage a dozen or so KVM machines for private testing purposes. I know of puppet and munin, or RHQ, as separate tools to manage and monitor, but I'd prefer something integrated. Any tips? Basic requirements would be:

    - system package management and updates (individual selection for each managed node)
    - configuration of basic system services (users, NFS, cron, ideally also Apache)
    - monitoring (charting of system resources: disk, IO, memory, etc.) and alerting, ideally with a default configuration with sensible values for alerts

    Read the article

  • How to manage bookmarks?

    - by LNK2019
    Hi everyone, I have 981 bookmarks and about 30 to 40 folders in my Firefox browser, and they have become very difficult to manage. I searched Google for "bookmark management" and similar terms, but couldn't find a useful tutorial or guidelines to follow. I've been looking for answers for a long time. I tried Xmarks, ReadItLater, and the like, but they couldn't help me organize my bookmarks. Do you have any tips or suggestions on how to manage your bookmarks? In what situations would you create a tag instead of a folder? Thanks

    Read the article

  • Podcast Show Notes: William Ulrich and Neal McWhorter on Business Architecture

    - by Bob Rhubart
    The latest ArchBeat podcast program features a four-part conversation with William Ulrich and Neal McWhorter, the authors of Business Architecture: The Art and Practice of Business Transformation, available from Meghan-Kiffer Press.

    - Listen to Part 1: Bill and Neal cover the basics and discuss the effects of the lack of business architecture on organizations.
    - Listen to Part 2 (Jan 19): What really happens to the billions of dollars annually invested in IT.
    - Listen to Part 3 (Jan 26): Why the IT and business sides of many organizations can’t play nice.
    - Listen to Part 4 (Feb 2): How IT architects and business architects can work together to get the ship back on course and keep it there.

    Connect: William Ulrich: Website | LinkedIn | Business Architecture Guild. Neal McWhorter: Website | LinkedIn | Business Architecture Group on OMG.

    Coming Soon: Bob Hensle, Director, Oracle Enterprise Architecture Group, discusses the recently launched IT Solutions from Oracle (ITSO) library of documents. Excerpts from a recent OTN Architect Community Virtual Meet-up.

    Stay tuned: RSS

    Tags: business architecture, enterprise architecture, arch2arch, archbeat, podcast, business transformation, oracle, oracle technology network

    Read the article

  • CIO Magazine's State of the CIO and its Impact on Your EA

    - by david.olivencia(at)oracle.com
    CIO Magazine today released its State of the CIO report. As most Enterprise Architects report to (or report very close to) the CIO, the report provides interesting insights into where most CIOs' minds and priorities are. The information will allow Enterprise Architects to better align plans, approaches, models, and strategies. The report's summary can be found here: http://assets.cio.com/documents/cache/pdfs/2011/dec15_gatefold.pdf

    Specifically, the article highlights:

    - How IT Makes A Difference
    - Critical Leadership Skills
    - Business Focused CIOs
    - Areas of Increasing Responsibility
    - Plans for 2015

    Enterprise Architects: what insights from this report will alter the way you successfully lead in 2011?

    David Olivencia | Solution Director, Enterprise Architecture & Exa Services, Oracle Consulting Latin America and Caribbean

    Read the article

  • Additional new content SOA & BPM Partner Community

    - by JuergenKress
    Oracle Fusion Middleware 12c (12.1.2.0.0) Released - Download (OTN, eDelivery)
    Whitepaper: Next Generation Service Integration Platform - PDF
    SOA Maturity - This article in the Industrial SOA series offers an exploration of the fundamentals of applying a factory approach to modern service-oriented software development. Read the article.
    Enterprise Service Bus - The fifth article in the Industrial SOA series answers some of the most important questions about the use of an enterprise service bus, using concrete examples to clarify areas of application that can be deemed correct for ESBs. Read the article.
    DevOps, Cloud, and Role Creep - DevOps and cloud computing are changing the IT industry, and changing IT roles. A panel of community members discusses what’s happening and how it might affect your job. Listen to the podcast.
    Industrial SOA - Now chapters 1 to 5 available | Torsten Winterberg
    White Paper: Cloud Integration - A Comprehensive Solution
    White Paper: Next Generation Service Integration Platform: SOA Suite on Exalogic
    IT Briefcase Interview: An Integrated Approach to Mobile, Cloud, and API Management Technologies with Oracle Fusion Middleware
    Webcast: Oracle Cloud Integration - Information Week Webcast
    eBook: Oracle SOA Suite - In the Customers’ Words
    Podcast: Cloud Integration
    Transitioning from TIBCO to Oracle SOA Suite - Part 1
    Events: Oracle Simplifying Integration of Cloud and On-Premise
    New B2B Book Published for Oracle SOA B2B 11g
    Get Fast-Data Accelerator in Your Hands Today: Mobile Data Offloading for Telecom
    Fast Data Accelerator - Blog
    New Oracle Process Accelerators in Financial Services & Telco
    Detect, Analyze, Act Fast with BPM
    Improving the Quality of Healthcare with BPM
    Engineers Australia Improves and Automates Business Processes and Completes Engineer Enrollments up to 90% Faster with Middleware Platform - Case Study | PPT
    Specialized Partner Ataway on BPM Practice - Video
    eProseed Delivers Processes Skillfully with Oracle BPM Suite - Video
    Yarra Valley Water Uses SOA and BPM for Orchestration, Re-use and Visibility - Video
    Victoria University Discusses Oracle SOA & Oracle BPM - Video

    SOA & BPM Partner Community - For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Blog Twitter LinkedIn Facebook Wiki Mix Forum

    Technorati Tags: SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

  • Help me classify this type of software architecture

    - by Alex Burtsev
    I have read some books about software architecture, but I can't properly classify the architecture we are using in our project. It's some kind of enterprise architecture, but which exactly... SOA, ESB (Enterprise Service Bus), Message Bus, Event-Driven SOA? There are so many terms in enterprise software. The system is based on custom XML messages exchanged between services (it's not SOAP, nor any other XML-based standard, just plain XML). These messages represent notifications (state changes) that are applied to the domain model (it's not like CRUD, where you serialize the whole domain object and pass it to a service for persistence). The system is centralized, and system participants use different programming languages and frameworks (C++, C#, Java). Also, messages are not processed at the moment they are received: they are stored first, and processing begins on demand. It's called SOA+EDA :-)
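    To make the distinction concrete, a purely hypothetical example of such a notification (a state change, not a serialized domain object; all element names are invented) might look like:

        <CustomerStatusChanged>
            <!-- Only the fact that changed is carried, not the whole Customer object -->
            <CustomerId>42</CustomerId>
            <OldStatus>A</OldStatus>
            <NewStatus>B</NewStatus>
            <Timestamp>2012-05-01T10:15:00Z</Timestamp>
        </CustomerStatusChanged>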

    Read the article

  • Fix a jQuery/HTML5 dynamic content issue by upgrading jQuery

    - by Steve Albers
    The default NuGet template for MVC 3 pushes down jQuery 1.5.1. You can upgrade to a newer version (1.7.1 is current when this is written) to avoid a problem with the creation of "unknown" HTML5 tags in IE6-8. Take this sample HTML page using HTML5Shiv to provide support for new HTML5 tags in IE6-IE8. The page has a number of <article> tags that are backwards compatible in Internet Explorer 6-8 thanks to the HTML5Shiv. After the article elements there is a jQuery 1.5.1 script tag, and a ready() event handler that appends a footer element with a copyright to each of the article tags. This appears correctly in IE9, but in older IE browsers the unknown-tag problem reappears for the dynamic <footer> elements, even though we have the HTML5Shiv at the top of the page: the copyright text sits outside of the two separate footer tags. To solve the issue, upgrade your jQuery files to an up-to-date version. For instance, in Visual Studio 2010: in the Solution Explorer, right-click on References and choose Manage NuGet Packages; in the Manage NuGet Packages window, select the jQuery item in the middle of the page and click the "Upgrade" button. You may need to update your script src references to point at the new version, as shown below. Using the updated jQuery library, the incorrect tags should disappear and styles should work properly. You can find more information about the issue on the jQuery Bug Tracker site.
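    Purely as an illustration (the path assumes the default MVC 3 project layout, and the version is whatever NuGet installed), the updated script reference might look like:

        <script src="@Url.Content("~/Scripts/jquery-1.7.1.min.js")" type="text/javascript"></script>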

    Read the article

  • WPF display staggered content

    - by Chris Cap
    I am trying to display a rather dynamic list of data in WPF. I have essentially a LineItem class that contains a list of strings and a line type. The line type separates different categories of line items. All line items with the same type should be displayed the same way, and their data should line up. For example, the list will contain an order summary, and there will be a line type that represents something with a width and height. The width and height must line up vertically. However, there may be other line types that don't have to line up vertically. I want to produce a table similar to what you see below:

    ------------------------------------------------------------------
    | some content here | some more content here | last content here |
    |----------------------------------------------------------------|
    | some content here |                        | last content here |
    |----------------------------------------------------------------|
    | spanning content that is longer than most  | last content here |
    |----------------------------------------------------------------|
    | some content that can span a really long distance              |
    ------------------------------------------------------------------

    I attempted to do this by creating a ListView with a single column whose DataTemplate contained a grid with a fixed number of columns, and then binding to the ColumnSpan value. Unfortunately, this didn't work: I ended up with incorrect or overlapping content any time I tried to use a column span. Here's the XAML I was working with:

    <ListView ItemsSource="{Binding}">
      <ListView.View>
        <GridView>
          <GridViewColumn Header="Content">
            <GridViewColumn.CellTemplate>
              <DataTemplate>
                <Grid>
                  <Grid.ColumnDefinitions>
                    <ColumnDefinition />
                    <ColumnDefinition />
                    <ColumnDefinition />
                  </Grid.ColumnDefinitions>
                  <TextBlock Grid.Column="0" Grid.ColumnSpan="{Binding Path=Tokens[0].ColumnSpan}" Text="{Binding Path=Tokens[0].Content}" />
                  <TextBlock Grid.Column="1" Grid.ColumnSpan="{Binding Path=Tokens[1].ColumnSpan}" Text="{Binding Path=Tokens[1].Content}" />
                  <TextBlock Grid.Column="2" Text="{Binding Path=Tokens[2].Content}" />
                </Grid>
              </DataTemplate>
            </GridViewColumn.CellTemplate>
          </GridViewColumn>
        </GridView>
      </ListView.View>
    </ListView>

    And here are the classes I was binding to:

    public class DisplayLine
    {
        public LineType Linetype { get; set; }
        public List<Token> Tokens { get; set; }

        public DisplayLine()
        {
            Tokens = new List<Token>();
        }
    }

    public class Token
    {
        public string Content { get; set; }

        public bool IsEmpty
        {
            get { return string.IsNullOrEmpty(Content); }
        }

        public int ColumnSpan { get; set; }

        public Token()
        {
            ColumnSpan = 1;
        }
    }

    Does anyone have any suggestions for a way of making this work? I may be taking the wrong approach. I'm trying to avoid any solution where I explicitly build something in the code-behind, as I'm using the MVVM pattern, so it has to be something that I can bind to, exposed through the controller. My initial plan was to create a factory and separate classes that display the data differently based on type. However, I'm struggling to come up with a strategy for this using MVVM, as I really can't just build something and display it. I have toyed with the idea of making some kind of UI service class that is injected, but it would still require some pretty detailed UI information from the controller to do its work.

    Read the article

  • SQL Server Express Uninstall / Enterprise Install Issue

    - by user19049
    I need help installing SQL Server 2005 Enterprise Edition. I really need to remove the current SQL Server 2005 installation, which is no longer on my Add/Remove Programs list but is still installed on the machine. I tried to uninstall SQL Server Express / Developer Edition, but it only removed the entry from my Add/Remove Programs list: the uninstall returned immediately but did NOT actually remove the product. (I'm now in a bad state.) I tried to install SQL Server 2005 Enterprise, and it says I'm blocked because all components are already installed, but they are not. How can I remove all instances of the previous installation and do a clean install of Enterprise Edition on my server? Thanks

    Read the article

  • Multi screen RDP in Windows 8.1 Enterprise

    - by bgs264
    I have just flattened my machine and installed Windows 8.1 Enterprise Edition. I have used Hyper-V to create a virtual machine for my software development work; on the VM I have also installed Windows 8.1 Enterprise Edition. I want two-screen support when using this VM (not using /span). Both the Hyper-V viewer and Remote Desktop give me a tickbox to "Use all my monitors for the remote session". However, even with it ticked (and even when I tried the /multimon switch on the command line), I only get a single screen. Am I missing something? This should be supported in Enterprise edition, right? Is there some extra config I need to do on the RDP host? Forgive me if it's an obvious question; I'm more a developer and just stumbling through ;-) Cheers! Ben

    Read the article

  • javax.naming.NameAlreadyBoundException in GlassFish Server v2

    - by Nila
    Hi! I'm implementing a stateless session bean (EJB 3) in GlassFish Server using NetBeans. The first time, it works properly. Later, I get the following exception:

    LDR5012: Jndi name conflict found in [SampleEjb3]. Jndi name [Lulu.HellostatelessRemote] for bean [HellostatelessBean] is already in use.
    LDR5013: Naming exception while creating EJB container:
    javax.naming.NameAlreadyBoundException: Use rebind to override
        at com.sun.enterprise.naming.TransientContext.doBindOrRebind(TransientContext.java:292)
        at com.sun.enterprise.naming.TransientContext.bind(TransientContext.java:232)
        at com.sun.enterprise.naming.SerialContextProviderImpl.bind(SerialContextProviderImpl.java:111)
        at com.sun.enterprise.naming.LocalSerialContextProviderImpl.bind(LocalSerialContextProviderImpl.java:90)
        at com.sun.enterprise.naming.SerialContext.bind(SerialContext.java:461)
        at com.sun.enterprise.naming.SerialContext.bind(SerialContext.java:476)
        at javax.naming.InitialContext.bind(InitialContext.java:404)
        at com.sun.enterprise.naming.NamingManagerImpl.publishObject(NamingManagerImpl.java:237)
        at com.sun.enterprise.naming.NamingManagerImpl.publishObject(NamingManagerImpl.java:190)
        at com.sun.ejb.containers.BaseContainer.initializeHome(BaseContainer.java:1015)
        at com.sun.ejb.containers.StatelessSessionContainer.initializeHome(StatelessSessionContainer.java:232)
        at com.sun.ejb.containers.ContainerFactoryImpl.createContainer(ContainerFactoryImpl.java:654)
        at com.sun.enterprise.server.AbstractLoader.loadEjbs(AbstractLoader.java:536)
        at com.sun.enterprise.server.ApplicationLoader.doLoad(ApplicationLoader.java:188)
        at com.sun.enterprise.server.TomcatApplicationLoader.doLoad(TomcatApplicationLoader.java:126)
        at com.sun.enterprise.server.AbstractLoader.load(AbstractLoader.java:244)
        at com.sun.enterprise.server.AbstractManager.load(AbstractManager.java:225)
        at com.sun.enterprise.server.ApplicationLifecycle.onStartup(ApplicationLifecycle.java:217)
        at com.sun.enterprise.server.ApplicationServer.onStartup(ApplicationServer.java:442)
        at com.sun.enterprise.server.ondemand.OnDemandServer.onStartup(OnDemandServer.java:120)
        at com.sun.enterprise.server.PEMain.run(PEMain.java:411)
        at com.sun.enterprise.server.PEMain.main(PEMain.java:338)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at com.sun.enterprise.server.PELaunch.main(PELaunch.java:412)

    If I then remove the EJB module from the GlassFish server and restart the server, it works again. So, how can I overcome this problem?

    Read the article

  • How to create a new WCF/MVC/jQuery application from scratch

    - by pjohnson
    As a corporate developer by trade, I don't get much opportunity to create from-the-ground-up web sites; usually it's tweaks, fixes, and new functionality for existing sites. And with hobby sites, I often don't find the challenges I run into with enterprise systems; usually it's starting from Visual Studio's boilerplate project and adding whatever functionality I want to play around with, rarely deploying outside my own machine. So my experience creating a new enterprise-level site was a bit dated, and the technologies for doing so have come a long way and are much more ready to go out of the box. My intention with this post isn't so much to provide any groundbreaking insights as to tie together a lot of information in one place, to make it easy to create a new site from scratch.

    Architecture

    One site I created earlier this year had an MVC 3 front end and a WCF 4-driven service layer. Using Visual Studio 2010, these project types are easy enough to add to a new solution. I created a third Class Library project to store common functionality that the front-end and service layers both needed to access, for example, the DataContract classes that the front end uses to call services in the service layer. By keeping DataContract classes in a separate project, I avoided the need for the front end to have an assembly/project reference directly to the services code, a bit cleaner and more flexible an SOA implementation.

    Consuming the service

    Even by this point, VS has given you a lot. You have a working web site and a working service, neither of which do much, but both are great starting points. To wire up the front end and the services, I needed to create proxy classes and WCF client configuration information. I decided to use the SvcUtil.exe utility provided as part of the Windows SDK, which you should have installed if you installed VS. VS has also provided an Add Service Reference command since the .NET 1.x ASMX days, which I've never really liked: it creates several .cs/.disco/etc. files, some of which contain hardcoded URLs, adding duplicate files (*1.cs, *2.cs, etc.) without doing a good job of cleaning up after itself. I've found SvcUtil much cleaner, as it outputs one C# file (containing several proxy classes) and a config file with settings; it's easier to use to regenerate the proxy classes when the service changes, and you can then maintain all your configuration in one place (your Web.config, instead of the Service Reference files). I gave it a reference to a copy of my common assembly so it doesn't try to recreate the data contract classes, had it use the type List<T> for collections, and modified the output files' names and .NET namespace, ending up with a command like:

    svcutil.exe /l:cs /o:MyService.cs /config:MyService.config /r:MySite.Common.dll /ct:System.Collections.Generic.List`1 /n:*,MySite.Web.ServiceProxies http://localhost:59999/MyService.svc

    I took the generated MyService.cs file and dropped it in the web project, under a ServiceProxies folder, matching the namespace and keeping it separate from classes I coded manually. Integrating the config file took a little more work, but only needed to be done once, as these settings didn't often change. A great thing Microsoft improved with WCF 4 is configuration; namely, you can use all the default settings and not have to specify them explicitly in your config file. Unfortunately, SvcUtil doesn't generate its config file this way.
    If you just copy and paste MyService.config's contents into your front end's Web.config, you'll copy a lot of settings you don't need, and this will get unwieldy if you add more services in the future, each with its own custom binding. Really, as the only mandatory settings are the endpoint's ABCs (address, binding, and contract), you can get away with just this:

    <system.serviceModel>
      <client>
        <endpoint address="http://localhost:59999/MyService.svc" binding="wsHttpBinding" contract="MySite.Web.ServiceProxies.IMyService" />
      </client>
    </system.serviceModel>

    By default, the services project uses basicHttpBinding. As you can see, I switched it to wsHttpBinding, a more modern standard. Using something like netTcpBinding would probably be faster and more efficient, since the client and service are both written in .NET, but it requires additional server setup and open ports, whereas switching to wsHttpBinding is much simpler.

    From an MVC controller action method, I instantiated the client and invoked the method for my operation. As with any object that implements IDisposable, I wrapped it in C#'s using() statement, a tidy construct that ensures Dispose gets called no matter what, even if an exception occurs. Unfortunately there are problems with that, as WCF's ClientBase<TChannel> class doesn't implement Dispose according to Microsoft's own usage guidelines. I took an approach similar to Technology Toolbox's fix, except using partial classes instead of a wrapper class to extend the SvcUtil-generated proxy, making the fix more seamless from the controller's perspective and, theoretically, less code I have to change if and when Microsoft fixes this behavior.

    User interface

    The MVC 3 project template includes jQuery and some other common JavaScript libraries by default. I updated the ones I used to the latest versions using NuGet, available in VS via Tools > Library Package Manager > Manage NuGet Packages for Solution... > Updates. I also used this dialog to remove packages I wasn't using. Given that it's smart enough to know the difference between the .js and .min.js files, I was hoping it would be smart enough to know which to include during build and publish operations, but this doesn't seem to be the case. I ended up using Cassette to perform the minification and bundling of my JavaScript and CSS files; ASP.NET 4.5 includes this functionality out of the box.

    The web client to web server link via jQuery was easy enough. In my JavaScript function, unobtrusively wired up to a button's click event, I called $.ajax, corresponding to an action method that returns a JsonResult, accomplished by passing my model class to the Controller.Json() method, which jQuery helpfully translates from JSON to a JavaScript object. $.ajax calls weren't perfectly straightforward. I tried using the simpler $.post method instead, but ran into trouble without specifying the contentType parameter, which $.post doesn't have. The url parameter is simple enough, though for flexibility in how the site is deployed, I used MVC's Url.Action method to get the URL, then sent this to JavaScript in a JavaScript string variable. If the request needed input data, I used the JSON.stringify function to convert a JavaScript object with the parameters into a JSON string, which MVC then parses into strongly-typed C# parameters. I also specified "json" for dataType, and "application/json; charset=utf-8" for contentType. For success and error, I provided my success and error handling functions, though success is a bit hairier; I discuss this more in the next section.
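    Before moving on, two of the pieces described above lend themselves to short sketches. First, the partial-class Dispose fix. This is only a common shape for such a fix, not the exact code; the client class name is hypothetical, and the partial class must sit in the generated proxy's namespace (svcutil emits partial classes, which is what makes this approach possible):

    using System;
    using System.ServiceModel;

    public partial class MyServiceClient : IDisposable
    {
        void IDisposable.Dispose()
        {
            // Per WCF guidance, never Close() a faulted channel; Abort() it.
            if (State == CommunicationState.Faulted)
            {
                Abort();
            }
            else
            {
                try { Close(); }
                catch (CommunicationException) { Abort(); }
                catch (TimeoutException) { Abort(); }
            }
        }
    }

    Second, a purely illustrative version of the $.ajax call described above (the URL variable and parameter names are hypothetical):

    // deleteUrl would be emitted into the page via Url.Action, e.g.:
    // var deleteUrl = '@Url.Action("DeleteItem", "Items")';
    $.ajax({
        url: deleteUrl,
        type: "POST",
        data: JSON.stringify({ id: itemId }),           // parsed into a strongly-typed C# parameter
        contentType: "application/json; charset=utf-8", // needed so MVC binds the JSON body
        dataType: "json",
        success: function (result) { /* verify we got real data, not an error; see below */ },
        error: function (xhr) { /* HTTP-level failure: 404, 500, etc. */ }
    });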
"Success" in this context indicates whether the HTTP request succeeds, not whether what you wanted the AJAX call to do on the web server was successful. For example, if you make an AJAX call to retrieve a piece of data, the success handler will be invoked for any 200 OK response, and the error handler will be invoked for failed requests, e.g. a 404 Not Found (if the server rejected the URL you provided in the url parameter) or 500 Internal Server Error (e.g. if your C# code threw an exception that wasn't caught). If an exception was caught and handled, or if the data requested wasn't found, this would likely go through the success handler, which would need to do further examination to verify it did in fact get back the data for which it asked. I discuss this more in the next section. Logging and exception handling At this point, I had a working application. If I ran into any errors or unexpected behavior, debugging was easy enough, but of course that's not an option on public web servers. Microsoft Enterprise Library 5.0 filled this gap nicely, with its Logging and Exception Handling functionality. First I installed Enterprise Library; NuGet as outlined above is probably the best way to do so. I needed a total of three assembly references--Microsoft.Practices.EnterpriseLibrary.ExceptionHandling, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging, and Microsoft.Practices.EnterpriseLibrary.Logging. VS links with the handy Enterprise Library 5.0 Configuration Console, accessible by right-clicking your Web.config and choosing Edit Enterprise Library V5 Configuration. In this console, under Logging Settings, I set up a Rolling Flat File Trace Listener to write to log files but not let them get too large, using a Text Formatter with a simpler template than that provided by default. Logging to a different (or additional) destination is easy enough, but a flat file suited my needs. At this point, I verified it wrote as expected by calling the Microsoft.Practices.EnterpriseLibrary.Logging.Logger.Write method from my C# code. With those settings verified, I went on to wire up Exception Handling with Logging. Back in the EntLib Configuration Console, under Exception Handling, I used a LoggingExceptionHandler, setting its Logging Category to the category I already had configured in the Logging Settings. Then, from code (e.g. a controller's OnException method, or any action method's catch block), I called the Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicy.HandleException method, providing the exception and the exception policy name I had configured in the Exception Handling Settings. Before I got this configured correctly, when I tried it out, nothing was logged. In working with .NET, I'm used to seeing an exception if something doesn't work or isn't set up correctly, but instead working with these EntLib modules reminds me more of JavaScript (before the "use strict" v5 days)--it just does nothing and leaves you to figure out why, I presume due in part to the listener pattern Microsoft followed with the Enterprise Library. First, I verified logging worked on its own. Then, verifying/correcting where each piece wires up to the next resolved my problem. 
    Your C# code calls into the Exception Handling module, referencing the policy you pass to the HandleException method; that policy's configuration contains a LoggingExceptionHandler that references a logCategory; that logCategory should be added in the loggingConfiguration's categorySources section; that category references a listener; and that listener should be added in the loggingConfiguration's listeners section, which specifies the name of the log file.

    One final note on error handling, as the proper way to handle WCF and MVC errors is a whole other, very lengthy discussion. For AJAX calls to MVC action methods, depending on your configuration, an exception thrown here will result in ASP.NET's Yellow Screen Of Death being sent back as the response, which is at best unnecessarily and uselessly verbose, and at worst a security risk, as the internals of your application are exposed to potential hackers. I mitigated this by overriding my controller's OnException method, passing the exception off to the Exception Handling module as above. I created an ErrorModel class with as few properties as possible (e.g. an Error string), sending as little information to the client as possible, to both minimize bandwidth and mitigate risk. I then return an ErrorModel in JSON format for AJAX requests:

    if (filterContext.HttpContext.Request.IsAjaxRequest())
    {
        filterContext.Result = Json(new ErrorModel(...));
        filterContext.ExceptionHandled = true;
    }

    My $.ajax calls from the browser get a valid 200 OK response and go into the success handler. Before assuming everything is OK, I check whether the result is an ErrorModel or a model containing what I requested. If it's an ErrorModel, or null, I pass it to my error handler. If the client needs to handle different errors differently, ErrorModel can contain a flag, error code, string, etc. to differentiate, but again, sending as little information back as possible is ideal.

    Summary

    As any experienced ASP.NET developer knows, this is a far cry from where ASP.NET started when I began working with it 11 years ago. WCF services are far more powerful than ASMX ones, MVC is in many ways cleaner and certainly more unit-test-friendly than Web Forms (if you don't consider the code/markup commingling you're doing again), the Enterprise Library makes error handling and logging almost entirely configuration-driven, AJAX makes a responsive UI more feasible, and jQuery makes JavaScript coding much less painful. It doesn't take much work to get a functional, maintainable, flexible application, though having it actually do something useful is a whole other matter.

    Read the article

  • Sending the variable's content to my mailbox in Python?

    - by brilliant
    I previously asked a question here about a Python command that fetches the URL of a web page and stores it in a variable. The first thing that I wanted to know then was whether or not the variable in this code contains the HTML code of a web page:

    from google.appengine.api import urlfetch

    url = "http://www.google.com/"
    result = urlfetch.fetch(url)
    if result.status_code == 200:
        doSomethingWithResult(result.content)

    The answer that I received was "yes", i.e. the variable "result" in the code did contain the HTML code of a web page, and the programmer who answered said that I needed to "check the Content-Type header and verify that it's either text/html or application/xhtml+xml". I've looked through several Python tutorials, but couldn't find anything about headers. So my question is: where is this Content-Type header located, and how can I check it? And could I send the content of that variable directly to my mailbox? Here is where I got this code. It's on Google App Engine.
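    Not an authoritative answer, but a sketch of how both parts could work on App Engine: the response headers live in the result object's headers dictionary, and the Mail API can send the fetched content (the addresses below are placeholders; the sender must be an address the app is authorized to send from):

    from google.appengine.api import mail, urlfetch

    result = urlfetch.fetch("http://www.google.com/")
    if result.status_code == 200:
        # Check the Content-Type header on the response.
        content_type = result.headers.get("Content-Type", "")
        if content_type.startswith("text/html") or content_type.startswith("application/xhtml+xml"):
            # Mail the fetched HTML to yourself.
            mail.send_mail(sender="app-admin@example.com",
                           to="me@example.com",
                           subject="Fetched page",
                           body=result.content)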

    Read the article

  • Why does my SharePoint page's custom content type change to "Page" when editing?

    - by Mobius
    I have a custom SharePoint 2007 site definition with custom content types for the different page layouts. When editing a page using a custom layout from the main "View all contents" tree view, the page's content type is fine, but if I view the page directly and edit it from there, the content type gets stripped and replaced with "Page". I can change it back by viewing and editing it from the main list, but not from its subsite home location.

    Read the article

  • Content types in browsers: can we use the MIME type?

    - by SoLoGHoST
    OK, I am wondering which MIME types are dangerous in browsers; that is to say, which are dangerous to set as the Content-Type? Which MIME types, if any, would pose a security risk? I notice that many forum packages, when handling uploaded files, use application/octet-stream for any files other than images and place that in the Content-Type header. I am wondering why they don't place the actual MIME type in the Content-Type instead. Are there security risks involved with this? So far I have used text/css, text/plain, audio/mpeg, and many others, and haven't noticed any difference between application/octet-stream and these others. Does anyone out there know the exact difference, and what makes application/octet-stream any better, or any worse, to use for the Content-Type? Thank You :)

    Read the article
