Search Results

Search found 271 results on 11 pages for 'benchmarks'.

Page 7/11 | < Previous Page | 3 4 5 6 7 8 9 10 11  | Next Page >

  • SOA performance on SPARC T5 benchmark results

    - by JuergenKress
    The brand new, super fast SPARC T5 servers are available. The platform is superb for running large SOA Suite environments or for consolidating your whole middleware platform. Some performance advice, recommended for all workloads: performance profile for SOA apps on Oracle Solaris 11; BPEL (Fusion Order Demo) instances per second; OSB messages/transformations per second; crypto acceleration study for SOA transformations; SPARC T4 and T5 platform testing and pre-tuning. Performance is suitable for mid-to-high-range enterprise use in a stand-alone SOA deployment or in a virtualized consolidation environment shared with Oracle applications: 2.2x to 5x faster than SPARC T3 servers; 25% faster SOA throughput, core to core, than Intel 5600-series servers (running Exalogic software); SPARC T5 has 2x the consolidation density of Intel 5600-class processors; 2x faster initial deployment time using Optimized Solutions pre-tested configuration steps; over 200 application adapters for the easiest Oracle software integration. Would you like the details? We can share the T5 SOA Suite performance benchmarks with you on a 1:1 basis; please contact your local partner manager or myself! SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: T5, T5 SPARC, T5 SOA, benchmark, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

  • Is there a canonical source supporting "all-surrogates"?

    - by user61852
    Background: The "all-PKs-must-be-surrogates" approach is not present in Codd's relational model or in any SQL standard (ANSI, ISO or other). The canonical books seem to omit this restriction too. Oracle's own data dictionary schema uses natural keys in some tables and surrogate keys in others. I mention this because these people must know a thing or two about RDBMS design. PPDM (Professional Petroleum Data Management Association) recommends the same thing the canonical books do: use surrogate keys as primary keys when there are no natural or business keys; when the natural or business keys are bad (change often); when the value of the natural or business key is not known at the time of inserting the record; or when multicolumn natural keys (usually several FKs) exceed three columns, which makes joins too verbose. Also, I have not found a canonical source that says natural keys need to be immutable. All I find is that they need to be very stable, i.e. they need to change only on very rare occasions, if ever. I mention PPDM because these people must know a thing or two about RDBMS design too. The origins of the "all-surrogates" approach seem to come from recommendations made by some ORM frameworks. It's true that the approach allows for rapid database modeling by not having to do much business analysis, but at the expense of maintainability and readability of the SQL code. Much provision is made for something that may or may not happen in the future (the natural PK changes, so we have to use the RDBMS's cascade-update functionality) at the expense of day-to-day tasks like having to join more tables in every query and having to write code for importing data between databases, an otherwise very straightforward procedure (due to the need to avoid PK collisions and to create stage/equivalence tables beforehand). Another argument is that indexes based on integers are faster, but that has to be supported with benchmarks. Obviously, long, varying varchars are not good for PKs, but indexes based on short, fixed-length varchars are almost as fast as integers. The questions: Is there any canonical source that supports the "all-PKs-must-be-surrogates" approach? Has Codd's relational model been superseded by a newer relational model?

    Read the article

  • New SPC2 benchmark - The 7420 KILLS it!!!

    - by user12620172
    This is pretty sweet. The new SPC2 benchmark came out last week, and the 7420 not only came in 2nd among ALL speed scores, but came in #1 for price per MBPS. Check out this table. The 7420 score of 10,704 makes it really fast, but that's not the best part. The price one would have to pay in order to beat it is ridiculous. You can go see for yourself at http://www.storageperformance.org/results/benchmark_results_spc2 The only system on the whole page that beats it is over twice the price per MBPS. Very sweet for Oracle. So let's see: the 7420 is the fastest per $. The 7420 is the cheapest per MBPS. The 7420 has incredible built-in features, management services, analytics, and protocols. It's extremely stable and, as a cluster, has no single point of failure. It won the Storage Magazine award for best NAS system this year. So how long will it be before it's the number 1 NAS system in the market? What are the biggest hurdles still stopping the widespread adoption of the ZFSSA? From what I see, it's three things: 1. Administrators' comfort level with older legacy systems. 2. Politics. 3. Past issues with Oracle Support. I see all of these issues crop up regularly. Number 1 just takes time and education. Number 3 takes time with our new, better, and growing support team; many of them came from Oracle, and there were growing pains when they went from a straight software model to also having to support hardware. Number 2 is tricky, but it's the job of the sales teams to break through the internal politics and help their clients see the value in Oracle hardware systems. Benchmarks like this will help.

    Read the article

  • eSTEP Newsletter November 2011 now available

    - by uwes
    Dear Partners, we would like to inform you that the November issue of our newsletter is now available. The issue contains information on the following topics: Notes from Corporate: Magic Quadrant for Enterprise Application Servers, Oracle Buys RightNow. Technical Corner: Oracle Solaris 11 – The First Cloud OS, Oracle Solaris 10 8/11 now available, New RAC/Containers certifications, DTrace and Container for Oracle Linux, Oracle Enterprise Manager Ops Center released, News from the Oracle Solaris Cluster, SPARC - New roadmap, T-Series Benchmarks. Learning & Events: eSTEP Events Schedule, Recently Delivered TechCasts, Delivered Campaigns in 2011. How to ...: About Oracle Solaris Containers, Detailed feature comparison between the different versions of Database 11g, Upgrade Advantage Program + table with examples, Sun Software Name ===> New Oracle Name, Oracle Linux and OVM Certification Search, TO YOUR ATTENTION - Repricing Servers and Xoptions. You will find the newsletter on our portal under eSTEP News ---> Latest Newsletter. You will need to provide your email address and the PIN below to get access. The link to the portal is shown below. URL: http://launch.oracle.com/ PIN: eSTEP_2011 Previously published newsletters can be found under the Archived Newsletters section, and more useful information under the Events, Download and Links tabs. Feel free to explore, and any feedback is appreciated to help us improve the service and information we deliver. Thanks and best regards, Partner HW Enablement EMEA

    Read the article

  • Looking for job advice [closed]

    - by EntryLevelJavaDeveloper
    I am a software developer for a government agency in DC, and I have recently completed one year of employment. I am generally dissatisfied with my experiences here. I do not want to gripe too much, but I do not spend a lot of time doing actual development on projects. I am asked to do everything under the sun: write requirements, review specs, test, attend random meetings; actual coding makes up a small fraction of my time. The coding itself is fairly straightforward and simple, so it feels like I am not growing from my experiences. I am not tasked with more challenging work, and I find the experiences unrewarding. If I had a stronger resume/more work experience I'd leave the position immediately, but combined with the present economy, I am hesitant to leave. I have several questions: Does anybody have experiences like this? How did you make the most of it? I am currently doing some side projects, making simple web pages for people, but aside from that and open source projects, what other things are out there? What are general benchmarks for a developer after one year of professional experience? What should I be expected to know/do? I am an outsider (coming from a math/science background), so I do not know exactly what I should know/do. Is it possible to obtain mentorship from a mid-level/senior developer? If so, how can I go about making contacts in the DC area?

    Read the article

  • Choosing between Berkeley DB Core and Berkeley DB JE

    - by zokier
    I'm designing a Java-based web app and I need a key-value store. Berkeley DB seems a good fit, but there appear to be TWO Berkeley DBs to choose from: Berkeley DB Core, which is implemented in C, and Berkeley DB Java Edition, which is implemented in pure Java. The question is, how do I choose which one to use? With web apps, scalability and performance are quite important (who knows, maybe my idea will become the next YouTube), and I couldn't easily find any meaningful benchmarks between the two. I have yet to familiarize myself with Core's Java API, but I find it hard to believe that it could be much worse than Java Edition's, which seems quite nice. If some other key-value store would be much better, feel free to recommend that too. I'm storing smallish binary blobs, and the keys will probably be hashes of the data, or some other unique id.
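
    As a rough illustration of what this use case looks like against the pure-Java edition (com.sleepycat.je), here is a minimal sketch; the environment directory, database name and key are placeholders, and the Core edition's Java binding (com.sleepycat.db) follows a very similar Environment/Database/DatabaseEntry pattern, so switching later is mostly a matter of imports and configuration.

        import com.sleepycat.je.Database;
        import com.sleepycat.je.DatabaseConfig;
        import com.sleepycat.je.DatabaseEntry;
        import com.sleepycat.je.Environment;
        import com.sleepycat.je.EnvironmentConfig;
        import com.sleepycat.je.LockMode;
        import com.sleepycat.je.OperationStatus;

        import java.io.File;
        import java.nio.charset.StandardCharsets;

        public class BlobStoreSketch {
            public static void main(String[] args) throws Exception {
                // Open (or create) a JE environment in a local directory -- the path is a placeholder.
                File dir = new File("/tmp/je-env");
                dir.mkdirs();
                EnvironmentConfig envConfig = new EnvironmentConfig();
                envConfig.setAllowCreate(true);
                Environment env = new Environment(dir, envConfig);

                DatabaseConfig dbConfig = new DatabaseConfig();
                dbConfig.setAllowCreate(true);
                Database db = env.openDatabase(null, "blobs", dbConfig);

                // Key = some unique id (e.g. a hash of the payload), value = the binary blob.
                byte[] key = "sha1-of-payload".getBytes(StandardCharsets.UTF_8); // placeholder key
                byte[] blob = {1, 2, 3, 4};
                db.put(null, new DatabaseEntry(key), new DatabaseEntry(blob));

                // Read the blob back by key.
                DatabaseEntry found = new DatabaseEntry();
                OperationStatus status = db.get(null, new DatabaseEntry(key), found, LockMode.DEFAULT);
                if (status == OperationStatus.SUCCESS) {
                    System.out.println("read " + found.getData().length + " bytes");
                }

                db.close();
                env.close();
            }
        }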

    Read the article

  • Best server-side JavaScript servers

    - by fmsf
    Hey, I've been wanting to try out server-side JavaScript for a while, and I'm finding a good number of servers, like Node.js, Rhino and SpiderMonkey, among others. Could anyone with experience of server-side JavaScript tell me which are the best engines, and why? I like Node.js because it's based on Google's V8 engine and seems easy to use, but some feedback on what you would choose would be great. Edit: Some benchmarks for Node. I'm thinking of going with this one, but feedback is still welcome. Thanks

    Read the article

  • PyPy -- How can it possibly beat CPython?

    - by Vulcan Eager
    From the Google Open Source Blog: PyPy is a reimplementation of Python in Python, using advanced techniques to try to attain better performance than CPython. Many years of hard work have finally paid off. Our speed results often beat CPython, ranging from being slightly slower, to speedups of up to 2x on real application code, to speedups of up to 10x on small benchmarks. How is this possible? Which Python implementation was used to implement PyPy? CPython? And what are the chances of a PyPyPy or PyPyPyPy beating their score? (On a related note... why would anyone try something like this?)

    Read the article

  • Can we generate multiple coverage reports using the Hudson Emma plugin?

    - by Subhashish
    We run both unit (JUnit) and system (FIT) tests on instrumented code in our build. The consolidated coverage report for both is generated as part of the build itself. We then feed the unit test coverage report to the Hudson Emma plugin, configure benchmark numbers, and things work nicely. Is it possible to also feed the system test coverage report separately into the same plugin, so that we can get that report and configure benchmarks for it as well? I know there is a workaround of creating a downstream project for the latter activity, but it would be good to be able to do both in the same build.

    Read the article

  • Speed Difference between native OLE DB and ADO.NET

    - by weijiajun
    I'm looking for suggestions as well as any benchmarks or observations people have. We are looking to rewrite our data access layer and are trying to decide between native C++ OLE DB and ADO.NET for connecting to databases. Currently we are specifically targeting Oracle, which would mean we would use the Oracle OLE DB provider and ODP.NET, respectively. Requirements: 1. All applications will be in managed code, so using native C++ OLE DB would require C++/CLI to work (no PInvoke, way too slow). 2. The application must work with multiple databases in the future; currently we are just targeting Oracle specifically. Question: 1. Would it be more performant to use ADO.NET for this, or to use native C++ OLE DB wrapped in a managed C++ interface for managed code to access? Any ideas, help, or places to look on the web would be greatly appreciated.

    Read the article

  • How to benchmark an Apache/nginx setup

    - by Saif Bechan
    I am planning to set up nginx as a reverse proxy: Apache will deliver my dynamic content, and nginx will deliver the static content. The configuration I have now is just Apache with FastCGI. This gives me no configuration problems and runs great. After I have set up nginx I want to run some benchmarks to see if I really got a performance increase; otherwise I will switch back. Does anyone know how I can benchmark this type of setup? Or maybe someone has done this already and has some canned results; I would be glad to hear them. PS. I know this is more a Server Fault type of question, but I have seen numerous posts here about Apache and nginx, so I thought I'd give it a try.
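
    The usual tools for this are ApacheBench (ab) or siege, run with the same request mix and concurrency against both setups. Purely to illustrate what such a measurement does, here is a minimal Java sketch that times N sequential GET requests against a URL; the URL and request count are placeholders, and a single-threaded loop understates what a real concurrent benchmark would show.

        import java.io.InputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class NaiveHttpBenchmark {
            public static void main(String[] args) throws Exception {
                String target = args.length > 0 ? args[0] : "http://localhost/"; // placeholder URL
                int requests = 200;                                              // placeholder count

                long start = System.nanoTime();
                for (int i = 0; i < requests; i++) {
                    HttpURLConnection conn = (HttpURLConnection) new URL(target).openConnection();
                    conn.setRequestMethod("GET");
                    try (InputStream in = conn.getInputStream()) {
                        byte[] buf = new byte[8192];
                        // Drain the response so every request completes fully before timing stops.
                        while (in.read(buf) != -1) { /* discard */ }
                    }
                    conn.disconnect();
                }
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;

                System.out.printf("%d requests in %d ms (%.1f req/s)%n",
                        requests, elapsedMs, requests * 1000.0 / elapsedMs);
            }
        }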

    Read the article

  • Kohana or CodeIgniter?

    - by Thinker
    I'm looking for a PHP framework. I did some research here and found that CodeIgniter would be best for me. But then I discovered that there is Kohana, which is based on CodeIgniter. And I'm annoyed, because since Kohana is based on CodeIgniter it should work similarly, but it is optimized for PHP 5, so it should be faster in a PHP 5 environment. I was going to choose Kohana, and then I found benchmarks that point to CodeIgniter as the faster one. I don't understand it. If Kohana is updated more frequently and optimized for PHP 5, shouldn't it be better then? I'm an experienced PHP programmer, but Smarty is the only framework I know, and I don't want to waste the next few months learning a bad framework. Thanks for your reply; I will give Kohana a try.

    Read the article

  • Are Symfony and CakePHP too slow to be usable?

    - by Aziz Light
    Until now, I have always said that CakePHP is too bloated and slow. I don't really know that; I've just seen "some" benchmarks. What I really want to know is whether those two frameworks (Symfony and CakePHP) are so slow that the user will get frustrated. I already know that these frameworks are slower than other alternatives, but that's not the question. I ask because I want to create a project management web application and I still hesitate between a couple of frameworks. I've had some trouble learning Zend, but IMHO I haven't tried hard enough. So in conclusion, in addition to the first question above, I would like to ask another one: if I want to create a project management tool (which is a pretty big project), which of the following would you suggest, considering the development time, the speed of the resulting application, and the robustness of the final product: Symfony, CakePHP, or Zend Framework? I should also mention that I don't know any of these frameworks, and that I want to learn at least one of them.

    Read the article

  • Apache or Nginx for PHP behind Varnish

    - by Macindy
    We are managing a heavy-traffic PHP site (driven by vBulletin). To reduce load I use the caching proxy Varnish. It works very well - with Varnish the load was reduced by 50%. But now I am thinking about which web server to use behind Varnish. 90% of the requests reaching the web server are PHP requests. So is there a difference between apache/mpm-prefork with mod_php and nginx/php-fpm? Is Apache perhaps better because it doesn't have the TCP overhead? Thanks for any reply - benchmarks would be great! macindy

    Read the article

  • Are there any widely agreed-upon guidelines for rating your language knowledge on a scale?

    - by DVK
    The question came to mind after a co-worker spent an hour complaining about some guy who could not answer basic Java questions in an interview after rating himself "8 out of 10" on Java. While that was an obvious fib, I personally have always had major trouble placing my skills in a specific language on a sliding scale unless I'm given specific guidelines (remember 40 standard libraries by heart? able to solve 10 random Project Euler problems in under 30 minutes each? can write implementations of the A, B and C data structures from scratch in 30 minutes? know 30% of the standard? can answer 50% of the StackOverflow questions pertaining to the language?). So, I was wondering: is there some sort of commonly accepted methodology for translating such tangible benchmarks into "rate yourself on a language from 1 to 10"? "Kernighan gets an A, God gets a B, everyone else gets a C or less" type jokes are not helpful :)

    Read the article

  • Why is the same code slower in WPF than in Windows Forms?

    - by Marco Bettiolo
    I made a bunch of benchmarks of framework 4.0 and older, and I can't understand why the same code is slower in WPF than in Windows Forms. This is the code; it has nothing to do with the UI elements: Random rnd = new Random(845038); Int64 number = 0; for (int i = 0; i < 500000000; i++) { number += rnd.Next(); } The code takes 5968 ms - 6024 ms to execute in Windows Forms and 6953 ms in WPF. Here is the post with the downloadable solution: http://blog.bettiolo.it/2010/04/benchmark-of-net-framework-40.html

    Read the article

  • What is a good DBMS for archiving?

    - by Thomas.Winsnes
    I've been stuck in an MS SQL/MySQL world for a few years now, and I've decided to spread my wings a little further. At the moment I'm researching which DBMS is good at the things needed when archiving data, e.g. lots of writes and few reads. I've seen the NoSQL crusade, but I have a very RDBMS mindset, so I'm a bit skeptical. Does anyone have any suggestions? Or even any pointers to where there are some benchmarks etc. for this kind of stuff? Thank you :) Thomas

    Read the article

  • Can I expect a performance gain from removing this JOIN?

    - by makeee
    I have an "items" table with 1 million rows and a "users" table with 20,000 rows. When I select from the "items" table I do a join on the "users" table (items.user_id = users.id) so that I can grab the "username" from the users table. I'm considering adding a username column to the items table and removing the join. Can I expect a decent performance increase from this? It's already quite fast, but it would be nice to decrease my load (which is pretty high). The downside is that if a user changes their username, items will still reflect the old username, but this is okay with me if I can expect a decent performance increase. I'm asking Stack Overflow because benchmarks aren't telling me much. Both queries finish very quickly. Regardless, I'm wondering if removing the join would lighten the load on the database to any significant degree.
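
    Since both queries are already fast, the most convincing answer will come from timing them against your own data. Below is a rough JDBC sketch that runs the joined query and the denormalized one back to back and reports wall-clock time; the connection URL, credentials, row limit and the assumption that an items.username column already exists are all placeholders.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class JoinVsDenormalized {
            // Placeholder connection settings -- adjust for your own database and driver.
            private static final String URL = "jdbc:mysql://localhost/app";
            private static final String USER = "app";
            private static final String PASS = "secret";

            public static void main(String[] args) throws Exception {
                String joined = "SELECT i.id, u.username FROM items i JOIN users u ON i.user_id = u.id LIMIT 10000";
                String denorm = "SELECT i.id, i.username FROM items i LIMIT 10000";

                try (Connection conn = DriverManager.getConnection(URL, USER, PASS)) {
                    time(conn, joined, "with join");
                    time(conn, denorm, "without join");
                }
            }

            private static void time(Connection conn, String sql, String label) throws Exception {
                long start = System.nanoTime();
                int rows = 0;
                try (Statement st = conn.createStatement(); ResultSet rs = st.executeQuery(sql)) {
                    while (rs.next()) {
                        rows++; // just iterate; only the elapsed time matters here
                    }
                }
                System.out.printf("%s: %d rows in %d ms%n", label, rows, (System.nanoTime() - start) / 1_000_000);
            }
        }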

    Read the article

  • Solr Vs. Sphinx in a Ruby project

    - by Robert Ross
    I have a project that is being written on top of the Grape API framework in Ruby (https://github.com/intridea/grape). The problem I'm having is that Thinking Sphinx and Sunspot (the gems used to interface with each search index) have wildly different benchmarks. View the benchmark here. We're trying to develop something that is quick and easy to deploy (Solr needs Java). The issue we see right now is mainly that Solr is slower through the Sunspot gem and Sphinx is faster through Thinking Sphinx, because Solr goes through HTTP REST calls whereas Sphinx uses sockets. Does anyone have experience with either and can explain the pitfalls/bonuses? Note: it needs to be deployable to Rails AND non-Rails apps (hence Sunspot). Thanks!

    Read the article

  • Is memcached a dinosaur in comparison to Redis?

    - by Industrial
    Hi everyone, I have worked quite a bit with memcached over the last few weeks and just found out about Redis. When I read this part of its readme, I suddenly got a warm, cozy feeling in my stomach: "Redis can be used as a memcached on steroids because it is as fast as memcached but with a number of extra features. Like memcached, Redis also supports setting timeouts on keys so that a key will be automatically removed when a given amount of time passes." This sounds amazing. I also found this page with benchmarks: http://www.ruturaj.net/redis-memcached-tokyo-tyrant-mysql-comparison So, honestly, is memcached really that old dinosaur that is a bad choice from a performance perspective when compared to this newcomer called Redis? I hadn't heard a lot about Redis previously, hence my question!
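
    The key-expiry behaviour mentioned in the readme is easy to try from code. A minimal sketch, assuming the Jedis Java client and a local Redis instance (the host, port, key name and timeout are placeholders), would look roughly like this.

        import redis.clients.jedis.Jedis;

        public class RedisExpirySketch {
            public static void main(String[] args) {
                // Connect to a local Redis instance -- host and port are placeholders.
                try (Jedis jedis = new Jedis("localhost", 6379)) {
                    // Store a value that Redis expires automatically after 60 seconds,
                    // much like a per-item TTL in memcached.
                    jedis.setex("session:42", 60, "some cached payload");

                    System.out.println("value: " + jedis.get("session:42"));
                    System.out.println("seconds left: " + jedis.ttl("session:42"));
                }
            }
        }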

    Read the article

  • O(N log N) Complexity - Similar to linear?

    - by gav
    Hey all, so I think I'm going to get buried for asking something so trivial, but I'm a little confused about something. I have implemented quicksort in Java and C and I was doing some basic comparisons. The graph came out as two straight lines, with the C version being 4 ms faster than the Java counterpart over 100,000 random integers. The code for my tests can be found here: android-benchmarks. I wasn't sure what an (n log n) line would look like, but I didn't think it would be straight. I just wanted to check that this is the expected result and that I shouldn't try to find an error in my code. I stuck the formula into Excel, and for base 10 it seems to be a straight line with a kink at the start. Is this because the difference between log(n) and log(n+1) increases linearly? Thanks, Gav
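
    For what it's worth, a straight-looking plot is expected over this range: n log n is dominated by the linear factor because log n barely changes between 10,000 and 100,000 elements. A small sketch of the arithmetic (no sorting involved) makes that visible.

        public class NLogNGrowth {
            public static void main(String[] args) {
                // Compare n * log2(n) against a purely linear cost over the benchmarked range.
                // The per-element factor log2(n) only creeps from about 13.3 to about 16.6 here,
                // which is why the measured curve looks indistinguishable from a straight line.
                for (int n = 10_000; n <= 100_000; n += 10_000) {
                    double log2n = Math.log(n) / Math.log(2);
                    System.out.printf("n=%7d  log2(n)=%5.2f  n*log2(n)=%12.0f%n", n, log2n, n * log2n);
                }
            }
        }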

    Read the article

  • How much faster is a database running in RAM?

    - by orokusaki
    I'm looking to run PostgreSQL in RAM for a performance boost. The database isn't more than 1 GB and shouldn't ever grow to more than 5 GB. Is it worth doing? Are there any benchmarks out there? Is it buggy? My second major concern is: how easy is it to back things up when it's running purely in RAM? Is this just like using RAM as a tier-1 HD, or is it much more complicated?

    Read the article

  • What is the Fastest Java/PHP Bridge?

    - by The Rook
    Bi-directional communication is required between a Java application and a PHP web app. Speed and low resource consumption are the number 1 priority in this project. There is the PHP/Java Bridge (PJB) project, and there are benchmarks showing that it is very fast. Do you know of a faster approach? XML over HTTP strikes me as a bit wasteful, but I don't know of a better approach. Ideally this system would use a remote connection, but due to time constraints it might have to be local. Do you know of the fastest way to accomplish this bi-directional connection?

    Read the article

  • C++0x optimizing compiler quality

    - by aaa
    Hello. I do some heavy number crunching, and for me floating-point performance is very important. I like the performance of the Intel compiler very much and am quite content with the quality of the assembly it produces. I am thinking of trying C++0x at some point, mainly for the sugar parts like auto, initializer lists, etc., but also lambdas. At this point I use those features in regular C++ by means of Boost. How good is the assembly code that compilers generate for C++0x? Specifically the Intel and GCC compilers. Do they produce SSE code? Is performance comparable to regular C++? Are there any benchmarks? My Google search did not reveal much. Thank you.

    Read the article

< Previous Page | 3 4 5 6 7 8 9 10 11  | Next Page >