Search Results

Search found 13808 results on 553 pages for 'remote storage'.

  • SPARC T4-4 Beats 8-CPU IBM POWER7 on TPC-H @3000GB Benchmark

    - by Brian
    Oracle's SPARC T4-4 server delivered a world-record TPC-H @3000GB benchmark result for systems with four processors. This result beats eight-processor results from IBM (POWER7) and HP (x86). The SPARC T4-4 server also delivered better performance per core than these eight-processor systems from IBM and HP. Comparisons below are based upon system-to-system comparisons, highlighting Oracle's complete software and hardware solution. This database world-record result used Oracle's Sun Storage 2540-M2 arrays (rotating disk) connected to a SPARC T4-4 server running Oracle Solaris 11 and Oracle Database 11g Release 2, demonstrating the power of Oracle's integrated hardware and software solution.

    - The SPARC T4-4 server based configuration achieved a TPC-H scale factor 3000 world record for four-processor systems of 205,792 QphH@3000GB with price/performance of $4.10/QphH@3000GB.
    - The SPARC T4-4 server with four SPARC T4 processors (32 cores in total) is 7% faster than the IBM Power 780 server with eight POWER7 processors (32 cores in total) on the TPC-H @3000GB benchmark, and 36% better in price/performance.
    - The SPARC T4-4 server is 29% faster than the IBM Power 780 for data loading, and up to 3.4 times faster for the Refresh Function.
    - The SPARC T4-4 server with four SPARC T4 processors is 27% faster than the HP ProLiant DL980 G7 server with eight x86 processors on the TPC-H @3000GB benchmark, 52% faster for data loading, and up to 3.2 times faster for the Refresh Function.
    - The SPARC T4-4 server achieved a peak IO rate from the Oracle database of 17 GB/sec. This rate was independent of the storage used, as demonstrated by the TPC-H @3000GB benchmark, which used twelve Sun Storage 2540-M2 arrays (rotating disk), and the TPC-H @1000GB benchmark, which used four Sun Storage F5100 Flash Array devices (flash storage).
    - The SPARC T4-4 server showed linear scaling from TPC-H @1000GB to TPC-H @3000GB, demonstrating that it can handle the increasingly larger databases required of DSS systems.
    - The SPARC T4-4 server benchmark results demonstrate a complete solution for building Decision Support Systems, including data loading, business questions, and refreshing data. Each phase usually has a time constraint, and the SPARC T4-4 server shows superior performance during each phase.

    Note: the TPC believes that comparisons of results published with different scale factors are misleading and discourages such comparisons.

    Performance Landscape

    The table below lists the leading TPC-H @3000GB results for non-clustered systems.
    TPC-H @3000GB, Non-Clustered Systems

    System | Processor | P/C/T - Memory | Composite (QphH) | $/perf ($/QphH) | Power (QppH) | Throughput (QthH) | Database | Available
    SPARC Enterprise M9000 | 3.0 GHz SPARC64 VII+ | 64/256/256 - 1024 GB | 386,478.3 | $18.19 | 316,835.8 | 471,428.6 | Oracle 11g R2 | 09/22/11
    SPARC T4-4 | 3.0 GHz SPARC T4 | 4/32/256 - 1024 GB | 205,792.0 | $4.10 | 190,325.1 | 222,515.9 | Oracle 11g R2 | 05/31/12
    SPARC Enterprise M9000 | 2.88 GHz SPARC64 VII | 32/128/256 - 512 GB | 198,907.5 | $15.27 | 182,350.7 | 216,967.7 | Oracle 11g R2 | 12/09/10
    IBM Power 780 | 4.1 GHz POWER7 | 8/32/128 - 1024 GB | 192,001.1 | $6.37 | 210,368.4 | 175,237.4 | Sybase 15.4 | 11/30/11
    HP ProLiant DL980 G7 | 2.27 GHz Intel Xeon X7560 | 8/64/128 - 512 GB | 162,601.7 | $2.68 | 185,297.7 | 142,685.6 | SQL Server 2008 | 10/13/10

    P/C/T = Processors, Cores, Threads
    QphH = the Composite Metric (bigger is better)
    $/QphH = the Price/Performance metric in USD (smaller is better)
    QppH = the Power Numerical Quantity
    QthH = the Throughput Numerical Quantity

    The following table lists data load times and refresh function times during the power run.

    TPC-H @3000GB, Non-Clustered Systems - Database Load & Database Refresh

    System | Processor | Data Loading (h:m:s) | T4 Advan | RF1 (sec) | T4 Advan | RF2 (sec) | T4 Advan
    SPARC T4-4 | 3.0 GHz SPARC T4 | 04:08:29 | 1.0x | 67.1 | 1.0x | 39.5 | 1.0x
    IBM Power 780 | 4.1 GHz POWER7 | 05:51:50 | 1.5x | 147.3 | 2.2x | 133.2 | 3.4x
    HP ProLiant DL980 G7 | 2.27 GHz Intel Xeon X7560 | 08:35:17 | 2.1x | 173.0 | 2.6x | 126.3 | 3.2x

    Data Loading = database load time
    RF1 = power test, first refresh transaction
    RF2 = power test, second refresh transaction
    T4 Advan = the ratio of the system's time to the SPARC T4-4 time

    Complete benchmark results can be found at the TPC benchmark website, http://www.tpc.org.

    Configuration Summary and Results

    Hardware Configuration:
    - SPARC T4-4 server with 4 x SPARC T4 3.0 GHz processors (total of 32 cores, 128 threads), 1024 GB memory, and 8 x internal SAS (8 x 300 GB) disk drives

    External Storage:
    - 12 x Sun Storage 2540-M2 arrays, each with 12 x 15K RPM 300 GB drives, 2 controllers, and 2 GB cache

    Software Configuration:
    - Oracle Solaris 11 11/11
    - Oracle Database 11g Release 2 Enterprise Edition

    Audited Results:
    - Database Size: 3000 GB (Scale Factor 3000)
    - TPC-H Composite: 205,792.0 QphH@3000GB
    - Price/performance: $4.10/QphH@3000GB
    - Available: 05/31/2012
    - Total 3-year Cost: $843,656
    - TPC-H Power: 190,325.1
    - TPC-H Throughput: 222,515.9
    - Database Load Time: 4:08:29

    Benchmark Description

    The TPC-H benchmark is a performance benchmark established by the Transaction Processing Performance Council (TPC) to demonstrate Data Warehousing/Decision Support Systems (DSS). TPC-H measurements are produced for customers to evaluate the performance of various DSS systems. Its queries and updates are executed against a standard database under controlled conditions. Performance projections and comparisons between different TPC-H database sizes (100GB, 300GB, 1000GB, 3000GB, 10000GB, 30000GB and 100000GB) are not allowed by the TPC. TPC-H is a data-warehousing-oriented, non-industry-specific benchmark that consists of a large number of complex queries typical of decision support applications. It also includes some insert and delete activity that is intended to simulate loading and purging data from a warehouse. TPC-H measures the combined performance of a particular database manager on a specific computer system. The main performance metric reported by TPC-H is the TPC-H Composite Query-per-Hour Performance Metric (QphH@SF, where SF is the number of GB of raw data, referred to as the scale factor). QphH@SF is intended to summarize the ability of the system to process queries in both single- and multi-user modes. The benchmark requires reporting of price/performance, which is the ratio of the total HW/SW cost plus 3 years of maintenance to the QphH. A secondary metric is storage efficiency, which is the ratio of total configured disk space in GB to the scale factor.
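
    As a quick sanity check on the metric definitions above: QphH@SF is defined by the TPC as the geometric mean of the power and throughput metrics, and the published figures reproduce the composite to within rounding:

        QphH@3000GB = sqrt(QppH x QthH) = sqrt(190,325.1 x 222,515.9) ≈ 205,794   (published: 205,792.0)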
    Key Points and Best Practices

    - Twelve Sun Storage 2540-M2 arrays were used for the benchmark. Each Sun Storage 2540-M2 array contains 12 x 15K RPM drives and is connected to a single dual-port 8Gb FC HBA using 2 ports. Each array sustained 1.5 GB/sec for sequential read operations and showed linear scaling, achieving 18 GB/sec with twelve arrays. These were standalone IO tests. The peak IO rate measured from the Oracle database was 17 GB/sec.
    - Oracle Solaris 11 11/11 required very little system tuning.
    - Some vendors try to make the point that storage ratios are of customer concern. However, the storage ratio has more to do with disk layout and the increasing capacities of disks, so it is not an important metric on which to compare systems.
    - The SPARC T4-4 server and Oracle Solaris efficiently managed the system load of over one thousand Oracle Database parallel processes.
    - Six Sun Storage 2540-M2 arrays were mirrored to another six Sun Storage 2540-M2 arrays, on which all of the Oracle database files were placed. IO performance was high and balanced across all the arrays.
    - The TPC-H Refresh Function (RF) simulates the periodic refresh portion of a data warehouse by adding new sales data and deleting old sales data. Parallel DML (parallel insert and delete, in this case) and database log performance are key to this function, and the SPARC T4-4 server outperformed both the IBM POWER7 server and the HP ProLiant DL980 G7 server. (See the RF columns above.)

    See Also
    - Transaction Processing Performance Council (TPC) Home Page
    - Ideas International Benchmark Page
    - SPARC T4-4 Server (oracle.com, OTN)
    - Oracle Solaris (oracle.com, OTN)
    - Oracle Database 11g Release 2 Enterprise Edition (oracle.com, OTN)
    - Sun Storage 2540-M2 Array (oracle.com, OTN)

    Disclosure Statement

    TPC-H, QphH, and $/QphH are trademarks of the Transaction Processing Performance Council (TPC). For more information, see www.tpc.org. SPARC T4-4: 205,792.0 QphH@3000GB, $4.10/QphH@3000GB, available 5/31/12, 4 processors, 32 cores, 256 threads. IBM Power 780: 192,001.1 QphH@3000GB, $6.37/QphH@3000GB, available 11/30/11, 8 processors, 32 cores, 128 threads. HP ProLiant DL980 G7: 162,601.7 QphH@3000GB, $2.68/QphH@3000GB, available 10/13/10, 8 processors, 64 cores, 128 threads.

    Read the article

  • IIS FTP Server works locally, but cannot connect from remote

    - by Mike Christensen
    I'm trying to set up an FTP server on Windows Server 2008. I can connect locally:

        C:\>ftp localhost
        Connected to WebHead1
        220 Microsoft FTP Service

    However, when I try to connect from remote, it doesn't work:

        ~>ftp x.x.x.x
        ftp: Can't connect to `x.x.x.x': Operation timed out
        ftp: Can't connect to `x.x.x.x'

    I've tried everything I can think of with the settings. The FTP server is bound to all unassigned IPs and listening on port 21. I've also checked "FTP Server" in the firewall settings. Nothing appears in the FTP log files. I'm totally out of ideas!
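
    One avenue worth checking, sketched with standard Windows Firewall commands (generic to Server 2008's IIS FTP service, not taken from the post): even with port 21 open, the firewall can drop the FTP data connection unless stateful FTP inspection is enabled.

        netsh advfirewall set global StatefulFtp enable
        netsh advfirewall firewall add rule name="FTP control" dir=in action=allow protocol=TCP localport=21
        net stop ftpsvc && net start ftpsvc

    If the timeout still persists, a network-level firewall between client and server is the other usual suspect, since a closed local port would normally refuse rather than time out.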

    Read the article

  • Cyrus IMAP: Unable to connect to remote host: Connection refused

    - by Nick
    I'm working on setting up a Cyrus 2.2 IMAP server on Ubuntu Server 9.04. If I telnet from the server itself:

        # telnet localhost imap

    I get:

        * OK IMAP Cyrus IMAP4 v2.2.13-Debian-2.2.13-14ubuntu3 server ready

    which is what I should be seeing. If I try from another machine on the network:

        telnet 192.168.5.122 imap

    I get:

        telnet: Unable to connect to remote host: Connection refused

    To the best of my knowledge, there is no firewall running on the box. I've tried restarting the saslauthd and cyrus2.2 daemons, with no effect. What else can I try?
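
    "Connection refused" from the network while localhost works usually means the daemon is bound only to 127.0.0.1. A sketch of what to check (file locations per the Debian/Ubuntu cyrus2.2 packaging; adjust if yours differs):

        # confirm which address imapd is listening on
        sudo netstat -tlnp | grep :143

        # in /etc/cyrus.conf, a localhost-only SERVICES entry looks like:
        #   imap  cmd="imapd -U 30" listen="localhost:imap" prefork=0
        # change it to listen on all interfaces, then restart:
        #   imap  cmd="imapd -U 30" listen="imap" prefork=0
        sudo /etc/init.d/cyrus2.2 restart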

    Read the article

  • Guide for installing Zenoss remote SSH monitoring plugin for Ubuntu

    - by normalocity
    I'm trying out Zenoss. I got it to monitor a test machine via SNMP - that was easy enough. Now I want to add another server that is remote, and I want to use the SSH plugin. I've been using this guide, but it skips a few steps for non-RedHat systems. I'm on Ubuntu. The steps I have down so far are:

    1. Install alien
    2. Convert the .rpm to a .deb file
    3. Use dpkg to install the .deb file

    My issue: where do I get the .rpm file in the first place?
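
    For steps 1-3 themselves, a minimal sketch (the plugin filename is a placeholder; the .rpm would normally come from the Zenoss download/community pages the guide links to):

        sudo apt-get install alien
        sudo alien --to-deb --scripts zenoss-ssh-plugin.rpm   # placeholder filename
        sudo dpkg -i zenoss-ssh-plugin*.deb

    The --scripts flag carries over the package's install scripts, which RPM-packaged plugins often rely on.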

    Read the article

  • Command-line access to remote MySQL server

    - by Jerry Krinock
    I administer a website on a remote, shared host. My web host offers MySQL, and I am able to access this from my Mac OS X computer using a GUI program, Sequel Pro. That works great. But I want to script some queries, and Sequel Pro is not scriptable. What should I do? I've read about tunneling to MySQL via SSH. I have shell access to the server, with an SSH key on my Mac, so

        ssh [email protected] -p 7978

    gets me in. Should tunneling the MySQL port 3306 work? Like this?

        ssh [email protected] -L 3306:127.0.0.1:3306

    (It "times out" after a minute.) Do I need to install mysql on my Mac?
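
    One detail visible in the question itself: the working login uses -p 7978, but the tunnel command omits it, so ssh is dialing the default port 22, which alone would explain the timeout. A sketch of the usual pattern (user@host stands in for the scrubbed address; local port 3307 is an arbitrary choice to avoid clashing with any local MySQL):

        # terminal 1: hold the tunnel open (-N = no remote shell, just forwarding)
        ssh -p 7978 -N -L 3307:127.0.0.1:3306 user@host

        # terminal 2: script queries through the tunnel
        mysql -h 127.0.0.1 -P 3307 -u dbuser -p < queries.sql

    And yes, this does require the mysql command-line client locally (installable via Homebrew or MacPorts); Sequel Pro's GUI connection does not provide one.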

    Read the article

  • Hyper-V Manager: right-clicking on remote VM crashes MMC snap-in

    - by Greg Bray
    I have a Windows Server 2008 R2 Enterprise SP1 machine that I log into and use to manage virtual machines running on multiple Hyper-V servers on our domain. Sometimes, when I right-click on a remote VM, the Hyper-V Manager will crash and display an error message (screenshot not preserved). If I use the Actions menu on the lower right, it works just fine, but for some reason right-clicking causes MMC to stop working. Is there any way to fix this issue? Here are the full details of the error message:

        Description: Stopped working

        Problem signature:
        Problem Event Name: CLR20r3
        Problem Signature 01: mmc.exe
        Problem Signature 02: 6.1.7600.16385
        Problem Signature 03: 4a5bc808
        Problem Signature 04: Microsoft.Virtualization.Client
        Problem Signature 05: 6.1.0.0
        Problem Signature 06: 4ce7c9e3
        Problem Signature 07: 342
        Problem Signature 08: 1f
        Problem Signature 09: System.OverflowException
        OS Version: 6.1.7601.2.1.0.274.10
        Locale ID: 1033

        Read our privacy statement online: http://go.microsoft.com/fwlink/?linkid=104288&clcid=0x0409
        If the online privacy statement is not available, please read our privacy statement offline: C:\Windows\system32\en-US\erofflps.txt

    Read the article

  • ESET Remote Administrator Console showing infected files on a client, but threat log is empty

    - by Aron Rotteveel
    We recently deployed ESET NOD32 Antivirus on our small domain network and use the Remote Administrator to manage everything remotely. On a recent full system scan, one of the clients shows 10 infected files in the scan log, of which 4 have been cleaned. The strange thing, however, is that the threat log is empty. Is there any reason why the threat log is empty? What has happened to the 6 remaining uncleaned files? Where can I view information on which files are infected and what they have been infected with? I know this can be done through the scan log properties screen, but with 958,790 files scanned, I obviously do not want to browse through that list. Any help is appreciated.

    Read the article

  • Fast Remote PHP Technique To Detect Image 404

    - by Volomike
    What PHP script technique is fastest for detecting whether a remote image exists, before I include the image? I mean, I don't want to download all the bytes of the remote image - just enough to detect whether it exists. And while on the subject, with just a slight deviation: I'd also like to download just enough bytes to determine a JPEG's width and height. Speed is the main concern in the system design I'm working on.
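
    A sketch of the two usual building blocks (standard PHP functions; the URL is a placeholder): a HEAD request via cURL answers the existence question without transferring the body at all, and getimagesize() stops reading once it has parsed the image header, which covers the width/height case.

        <?php
        $url = 'http://example.com/photo.jpg'; // placeholder URL

        // existence: HEAD request, no body downloaded
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_NOBODY, true);        // HEAD instead of GET
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_TIMEOUT, 5);
        curl_exec($ch);
        $exists = (curl_getinfo($ch, CURLINFO_HTTP_CODE) === 200);
        curl_close($ch);

        // dimensions: getimagesize() reads only as much of the stream as it needs
        // (requires allow_url_fopen = On for remote URLs)
        if ($exists) {
            list($width, $height) = getimagesize($url);
        }
        ?>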

    Read the article

  • Error pushing to remote with git

    - by pcm2a
    I have a fresh CentOS 6 server stood up, and I have installed git version 1.7.1 through yum. I am using the smart HTTP method through Apache for access. When I try to push to the remote server, this is what I get:

        $ git push origin master
        Password:
        Counting objects: 6, done.
        Compressing objects: 100% (3/3), done.
        Writing objects: 100% (6/6), 436 bytes, done.
        Total 6 (delta 0), reused 0 (delta 0)
        error: unpack failed: index-pack abnormal exit

    I have tried these things, which made no difference:

        chown -R apache:apache /path/to/git/repository    (httpd runs as apache)
        chown -R apache:users /path/to/git/repository
        chmod -R 777 /path/to/git/repository              (obviously not secure, but this rules out file permissions)

    What can I try to get pushing to work?
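
    Since plain permissions are already ruled out by the chmod 777 test, a sketch of another angle worth eliminating (standard CentOS tooling, not mentioned in the post): on CentOS 6, SELinux will block httpd from writing to the repository regardless of file modes.

        getenforce                    # "Enforcing" means SELinux is active
        sudo setenforce 0             # go permissive temporarily, then retry the push
        # if the push now succeeds, restore enforcement and set a writable context:
        sudo setenforce 1
        sudo chcon -R -t httpd_sys_rw_content_t /path/to/git/repository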

    Read the article

  • Netscreen-Remote Equivalent On Linux

    - by mojah
    We're running a simple Juniper VPN tunnel (using Juniper SSG5s) for outside-network access, which works great for Windows PCs since they can connect using the supplied NetScreen-Remote VPN client. Has anyone successfully managed to get this working under Linux? There are several alternatives, but none seem to actually work. The following were tried, but failed:

    - http://www.prolixium.com/netscreenlinux
    - http://david.dw-perspective.org.uk/Juniper-Networks-SSL-VPN-Client-On-Linux.html

    The official word is that no Linux client will ever be developed by Juniper themselves, but perhaps other (open) software exists that has been found compatible with Juniper's VPN?
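
    NetScreen-Remote speaks standard IKEv1/IPsec, so generic Linux IPsec stacks such as openswan or strongSwan can often take its place. A rough sketch of an ipsec.conf connection block - every value below is an assumption and must be made to mirror the SSG5's phase 1/phase 2 settings:

        conn juniper-ssg5
            keyexchange=ikev1
            authby=secret              # preshared key goes in ipsec.secrets
            left=%defaultroute
            right=vpn.example.com      # placeholder: the SSG5's public address
            rightsubnet=10.0.0.0/24    # placeholder: the protected network
            ike=3des-sha1-modp1024     # must match the SSG5 phase 1 proposal
            esp=3des-sha1              # must match the phase 2 proposal
            auto=start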

    Read the article

  • Merge only a one remote branch into a local branch with Mercurial

    - by Pepijn
    I want to manage some profiles as XML files in Mercurial repos. The setup I'm thinking of: each user has a repo with a branch where he manages his own profile, and a number of branches where he can pull and merge other profiles from that branch of another user. So, for example, I have my own profile branch and a branch labeled friends, into which I want to pull the profile branches of a few remote repos, to collect like a collection of profiles. I figured out that since the repos are unrelated I need to use -f, but I can't figure out how to pull and merge only a single branch into another. Roughly like this:

        me                  friend               someone
        profile  --->   friends  <---  profile
               \-->  family
                        friends  <---  profile

    Is this even possible? Should I use separate repos instead? Is there a better solution?
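
    As for the mechanics, a sketch with stock hg commands (the URL and commit message are placeholders): hg pull -r accepts a branch name and fetches only the changesets needed for that branch's head, and -f permits pulling from an unrelated repository.

        # fetch only the 'profile' branch from a friend's repo
        hg pull -f -r profile http://example.com/friend-repo

        # merge the newly pulled head into the local 'friends' branch;
        # since both repos name the branch 'profile', merge by changeset id
        # (shown by 'hg heads profile') rather than by branch name
        hg update friends
        hg merge -r <changeset-id>
        hg commit -m "merge friend's profile into friends"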

    Read the article

  • Remote installation and configuration software, preferrably open source

    - by Brimstedt
    Hello, I'm looking for some remote-installation software. I've looked briefly at Unattended, OPSI, and a bunch of other tools, but there is nowhere near enough time to evaluate them all, and they are rather complex to set up, so some insight would be very appreciated. This is foremost for Windows clients, but Linux support would be good. Something like apt-get would have been great.

    Requirements:
    - Simple to set up and use
    - Set up groups of users (developers, management, sales, etc.)
    - Choose which software is installed for each group
    - Add new software to a group and have it installed on clients automatically
    - Dependencies between software packages

    Nice-to-have:
    - Linux client support
    - OS-unattended installation

    Thanks in advance

    Read the article

  • Fedora 17 - Can't access remote machine using hostname

    - by Aaron
    I am using Fedora 17 and am trying to access a remote machine (running Fedora 15) using its hostname, which isn't working. The machine is right next to me, on the same switch as my machine (so they are both on the same network, with the same subnet and everything). When I was running Windows (7, 32-bit) on this machine, I could access the other machine no problem, but now that I am running Fedora 17, that's not the case. Is there an additional daemon or something that I need to be running in order for this to work?
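
    Some context that may explain the Windows/Fedora difference: Windows resolves LAN neighbors via NetBIOS/LLMNR broadcasts, which Linux does not speak out of the box. A sketch of one common substitute (package names as found in the Fedora repos; the hostname is a placeholder): mDNS via Avahi, which resolves names under the .local suffix.

        sudo yum install avahi nss-mdns   # Fedora 17 still ships yum
        ping otherbox.local               # placeholder hostname; note the .local suffix

    The low-tech fallback is simply adding each machine's IP and hostname to the other's /etc/hosts.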

    Read the article

  • Administer postgres from PGAdmin on remote mac using ssh tunnel

    - by Aidan Ewen
    I've got PostgreSQL installed on an Ubuntu server, and I'm trying to connect to that server using pgAdmin on a remote MacBook. I've created an SSH tunnel:

        macbook:~postgres$ ssh -L 5423:localhost:5432 [email protected]

    And I can connect using psql on the MacBook as expected:

        macbook:~ me$ psql -U postgres -p 5423 -h localhost
        ...
        postgres=#

    In the 'New Server Registration' window in pgAdmin III, I'm entering the following credentials:

        Name: MyServer
        Host: localhost
        Port: 5423
        Maintenance DB: postgres
        Username: postgres
        Password: <remote_postgres_password>

    However, the connection fails:

        Error connecting to the server: FATAL: password authentication failed for user "postgres"

    Not sure what's going on here; these seem to be the same credentials I've used for psql.
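
    A sketch of where to look first (paths per the standard Ubuntu packaging, an assumption here): a tunneled session reaches the server as 127.0.0.1, so the same pg_hba.conf rule should apply to psql and pgAdmin alike, which makes the server's own log the most telling place to compare the two attempts.

        # on the server: which rule do loopback connections match?
        grep -v '^#' /etc/postgresql/*/main/pg_hba.conf | grep 127.0.0.1
        # a typical md5 rule the tunneled session should hit:
        #   host  all  all  127.0.0.1/32  md5

        # watch the failure's detail line as pgAdmin connects
        tail -f /var/log/postgresql/postgresql-*-main.log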

    Read the article

  • Powershell Remoting: using imported module cmdlets in a remote pssession

    - by Joanthan Matheus
    Is there a way to use modules that were imported in a local session in a remote session? I looked at Import-PSSession, but I don't know how to get the local session. Here's a sample of what I want to do:

        Import-Module .\MyModule\MyModule.ps1
        $session = New-PSSession -ComputerName RemoteComputer
        Invoke-Command -Session $session -ScriptBlock { Use-CmdletFromMyModule }

    Also, I do not want to use Import-Module in the remote session, as the ps1 files are not on that server.
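
    One workaround to sketch, assuming the module's commands are script functions rather than compiled cmdlets (compiled ones cannot be copied this way): pull the function's definition out of the local session and recreate it inside the remote session, so nothing has to exist on the server's disk.

        Import-Module .\MyModule\MyModule.ps1
        $session = New-PSSession -ComputerName RemoteComputer

        # serialize the local function body and define it remotely
        $def = ${function:Use-CmdletFromMyModule}.ToString()
        Invoke-Command -Session $session -ScriptBlock {
            param($body)
            Set-Item -Path function:Use-CmdletFromMyModule -Value $body
        } -ArgumentList $def

        Invoke-Command -Session $session -ScriptBlock { Use-CmdletFromMyModule }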

    Read the article

  • ssh_exchange_identification: Connection closed by remote host?

    - by user51684
        debug1: Connection established.
        debug1: identity file /home/DAMS/.ssh/id_rsa type 1
        debug1: identity file /home/DAMS/.ssh/id_rsa-cert type -1
        debug1: identity file /home/DAMS/.ssh/id_dsa type -1
        debug1: identity file /home/DAMS/.ssh/id_dsa-cert type -1
        ssh_exchange_identification: Connection closed by remote host

    Hello. This one is different - nothing is missing. I'm using Cygwin, and it just stops when I'm doing git push production on my server. Usually it's OK, but now it drops the connection, and I don't know why. I wonder what's wrong.
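
    The debug output shows the TCP connection being established and then dropped before the SSH banner exchange, which usually points at the server side. A sketch of the usual suspects to check there (generic commands and log paths, not from the post):

        grep -i sshd /etc/hosts.deny          # TCP wrappers silently refusing this client?
        sudo sshd -T | grep -i maxstartups    # too many concurrent unauthenticated connections?
        sudo tail -n 50 /var/log/auth.log     # or /var/log/secure on RHEL-type systems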

    Read the article

  • Remote deploying wars to a liferay installation

    - by iftrue
    With vanilla Tomcat, you can POST to URLs beneath SOMEURL/manager/ with a proper manager user role defined. The Liferay deployment of Tomcat, however, is missing the manager and host-manager applications, and when I copy the directories over from a vanilla Tomcat installation, I get the exception below:

        javax.servlet.ServletException: Error allocating a servlet instance
            org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:558)
            org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
            org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
            org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
            org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
            java.lang.Thread.run(Thread.java:636)

        root cause:

        java.lang.SecurityException: Servlet of class org.apache.catalina.manager.HTMLManagerServlet is privileged and cannot be loaded by this web application
            org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:558)
            org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
            org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
            org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
            org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
            java.lang.Thread.run(Thread.java:636)

    What's the proper way to remote-deploy wars to a Liferay instance? (Not portlets, in my case.)
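
    One route that sidesteps the Tomcat manager entirely (the path below is Liferay's default deploy directory and an assumption here): Liferay bundles an auto-deploy scanner that watches a deploy folder under the Liferay home and hot-deploys whatever lands there, plain wars included.

        # copy the war over ssh; the scanner picks it up within a few seconds
        scp mywebapp.war user@liferayhost:/opt/liferay/deploy/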

    Read the article

  • cannot mount remote partition using fstab/fuse

    - by HorusKol
    Using a combination of http://ubuntu-tutorials.com/2007/01/02/mount-remote-directories-securely-with-ssh-ubuntu-6061-610/ and http://www.tuxfiles.org/linuxhelp/fstab.html, I figured I could mount the root of another computer somewhere on my new laptop to make it easier to transfer files and such. Now, I can connect through SSH and browse the files through an ad-hoc mount, but I would like to be able to do this automatically, and so had a look at fstab. My new entry in fstab is:

        remote_comp:/  /var/remote_comp  fuse  defaults  0 0

    but testing with mount -a results in the following error:

        /bin/sh: remote_comp:/: not found

    I thought the problem was that I was trying to mount the root of the other computer, but even trying subdirectories results in the same error message.
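
    The error reads as if mount is trying to execute the first field as a FUSE helper, which happens when the fstab line doesn't say which FUSE filesystem to use. A sketch of the two usual spellings for an sshfs entry (the username is a placeholder, and boot-time mounting assumes key-based ssh auth for root):

        # older syntax: the helper name goes in the device field
        sshfs#user@remote_comp:/  /var/remote_comp  fuse        defaults,_netdev  0  0

        # newer syntax: the helper name goes in the fstype field
        user@remote_comp:/        /var/remote_comp  fuse.sshfs  defaults,_netdev  0  0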

    Read the article

  • Remote monitoring with screen capture

    - by Anonymous
    Is there any way I can periodically take screenshots of a remote computer using Nagios? I am experimenting with Nagios and trying to explore different kinds of monitoring. My question is: apart from using Nagios to monitor CPU usage, bandwidth utilization, uptime, etc., can I monitor a worker's productivity by checking what he is doing on his computer, in the form of image output? Being able to monitor processes would not be of much help to me, as it only tells me that he is running firefox.exe, for example; he may be making excessive use of Firefox for Facebook or other things while claiming he is troubleshooting and looking for solutions on forums. I saw a check_vnc script, but I am unable to install the requisite VNC server. Has anyone successfully tried the VNC script, and would you care to share how to go about it? If not, is there any other way to try this?

    Read the article

  • The SPARC SuperCluster

    - by Karoly Vegh
    Oracle has been providing a lead in the Engineered Systems business for quite a while now, in accordance with the motto "Hardware and Software Engineered to Work Together." Indeed, it is hard to find a better definition of these systems. Allow me to summarize the idea:

    - Build a compute platform optimized to run your technologies
    - Develop application-aware, intelligently caching storage components
    - Take an impressively fast network technology and interconnect it with the compute nodes
    - Tune the application to scale with the nodes to yet-unseen performance
    - Reduce the amount of data moving via compression
    - Provide all this in a pre-integrated single product with a single-pane management interface

    All these ideas have been around in IT for quite some time now. The real Oracle advantage is adding the last one to put them all together. Oracle has built quite a portfolio of Engineered Systems to run its technologies - and run them like they never ran before. In this post I'll focus on one of them, a multi-purpose engineered system that serves as a consolidation demigod. As you probably have guessed, I am talking about the SPARC SuperCluster. It has many great features inherited from its predecessors, and it adds several new ones. Allow me to pick out and elaborate on some of the most interesting ones from a technological point of view.

    I. It is the SPARC SuperCluster T4-4. That is, as compute nodes it includes SPARC T4-4 servers, which we have learned to appreciate and respect for their features:

    - The SPARC T4 CPUs: each CPU has 8 cores, and each core runs 8 threads. The SPARC T4-4 servers have 4 sockets. That is, a single compute node can execute 256 threads simultaneously, in parallel. Now, a full-rack SPARC SuperCluster has 4 of these servers on board. Remember the keyword: demigod.
    - While retaining the forerunner SPARC T3's exceptional throughput, the SPARC T4 CPUs raise the bar for single-threaded performance too - a humble 5x better than their ancestors.
    - The SPARC T4 CPU cores actually run in both single-threaded and multi-threaded mode, and switch between the two on the fly, fulfilling not only single-threaded OR multi-threaded applications' needs, but even mixed requirements (like in database workloads!).
    - Data security, anyone? Every SPARC T4 CPU core has a built-in encryption engine, that is, encryption algorithms cast into silicon.
    - A PCI controller right on the chip, for customers who need I/O performance.
    - Built-in, no-cost virtualization: Oracle VM for SPARC (the former LDoms, or Logical Domains) is not a server-emulation virtualization technology but rather a server-partitioning one. The hypervisor runs in the server firmware, and all the VMs' HW resources (I/O, CPU, memory) are accessed natively, without performance overhead. This enables customers to run a number of Solaris 10 and Solaris 11 VMs separated from and independent of each other within a physical server.

    II. For database performance, it includes Exadata Storage Cells - one of the main reasons why the Exadata Database Machine performs at diabolic speed. What makes them important?

    - They provide the DB backend storage for the Oracle Databases running on the SPARC SuperCluster; that is what they are built and tuned for: DB performance.
    - These storage cells are SQL-aware.
      That is, if a SPARC T4 database compute node executes a query, it doesn't simply request tons of raw data blocks from the storage, filter the received data, and throw away most of it where the statement doesn't apply; it provides the SQL query to the storage node too. The storage cell software speaks SQL, that is, it is able to pre-filter and thereby transfer only the relevant data. With this, the traffic between database nodes and storage cells is reduced immensely. Less I/O is a good thing - as they say, all the CPUs of the world do one thing just as fast as any other, and that is waiting for I/O.
    - They don't only pre-filter, but also provide data-preprocessing features - e.g., if a DB node requests an aggregate of data, they can calculate it and hand over only the result, not the whole set. Again, less data to transfer.
    - They support the magical HCC (Hybrid Columnar Compression). That is, data can be stored in a precompressed form on the storage. Less data to transfer. (A concrete sketch of HCC follows below.)
    - Of course, one can't simply rely on disks for performance; flash storage is included for caching.

    III. The low-latency, high-speed backbone network: InfiniBand, interconnecting all the members, with:

    - Real high speed: 40 Gbit/s. Full duplex, of course. Oh, and a really low latency.
    - RDMA: Remote Direct Memory Access. This technology allows the DB nodes to do exactly that - remotely and directly place SQL commands into the memory of the storage cells, dodging all the network-stack bottlenecks, avoiding overhead, placing requests directly into the process queue.
    - You can also run IP over InfiniBand if you please - that's how the compute nodes communicate with each other.

    IV. It also includes general-purpose storage: the ZFSSA, a unified storage appliance providing both NAS and SAN access, with the following features:

    - NFS over RDMA over InfiniBand. Nothing is faster, network-filesystem-wise.
    - All the ZFS features on board: hybrid storage pools, compression, deduplication, snapshots, replication, NFS and CIFS shares.
    - Storage heads in an HA-cluster configuration, providing availability of the data.
    - DTrace Live Analytics in a web-based administration UI.
    - It serves as general-purpose application data storage for your non-database applications running on the SPARC SuperCluster, over whichever protocol they prefer, easily replicating, snapshotting, and cloning data for them.

    There's a lot of great technology included in Oracle's SPARC SuperCluster; we have talked its interior through. As for external scalability: you can start with a half-rack or full-rack SPARC SuperCluster and scale out to several racks - that is, not stacking separate full-rack SPARC SuperClusters, but always extending one large instance to the size of several full racks. Yes, over the InfiniBand network. Add racks as you grow.

    What technologies shall run on it? The SPARC SuperCluster is a general-purpose scale-out consolidation/cloud environment. You can run Oracle Databases with RAC scaling, or Oracle WebLogic (and enjoy the SPARC T4's advantages for running Java). Remember, Oracle technologies have been integrated with the Oracle Engineered Systems - this is the Oracle-on-Oracle advantage. But you can run other software environments, such as SAP, if you please. Run any application that runs on Oracle Solaris 10 or Solaris 11. Separate them in virtual machines, or even Oracle Solaris Zones, and monitor and manage them from a central UI.
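
    To make the HCC bullet above concrete, a small sketch in standard Oracle 11g R2 DDL (table and column names are invented; the COMPRESS FOR clauses require Exadata-class storage such as these cells):

        -- warehouse-style table stored in hybrid columnar, query-optimized form
        CREATE TABLE sales_archive (
          sale_id  NUMBER,
          sold_at  DATE,
          amount   NUMBER(12,2)
        )
        COMPRESS FOR QUERY HIGH;

        -- colder data can trade CPU for an even smaller footprint
        ALTER TABLE sales_archive MOVE COMPRESS FOR ARCHIVE HIGH;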
    Here are the key takeaways once again. The SPARC SuperCluster:

    - Is a pre-integrated Engineered System
    - Contains SPARC T4-4 servers with built-in virtualization, cryptography, and dynamic threading
    - Contains the Exadata storage cells that intelligently offload the burden of the DB nodes
    - Contains a highly available ZFS Storage Appliance that provides SAN/NAS storage in a unified way
    - Combines all these elements over a high-speed, low-latency backbone network implemented with InfiniBand
    - Can grow from a single half-rack to several full racks
    - Supports the consolidation of hundreds of applications

    To summarize: all these technologies are great by themselves, but the real value is, as in every other Oracle Engineered System, integration. All these technologies are tuned to perform together. Together they are way more than the sum of the parts - and a careful, and actually very time-consuming, integration process is necessary to orchestrate them for performance. The SPARC SuperCluster's goal is to enable infrastructure operations and offer a pre-integrated solution that can be architected and delivered in hours instead of months of evaluations and tests. The tedious and, most importantly, time- and resource-consuming part of the work - testing and evaluating - has been done. Now go, provide services.

    -- charlie

    Read the article

  • ListActivity Item with remote image

    - by TPan
    Hello, I have a ListActivity that should display quite a lot of items, where each list item should contain text and an image. The images are fetched from a remote server. How can I display the remote image in the list item? Thanks in advance.
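
    A rough sketch of the core pattern (Android SDK classes only; the field and id names are invented): in the adapter's getView(), fetch and decode the bitmap on a background thread, then post the result back to the ImageView. A production list would add an in-memory cache and a check that the row hasn't been recycled in the meantime.

        // inside getView(), after binding the item's text:
        final ImageView imageView = (ImageView) row.findViewById(R.id.thumbnail);
        final String imageUrl = item.imageUrl; // invented field
        new Thread(new Runnable() {
            public void run() {
                try {
                    // network and decode stay off the UI thread
                    InputStream in = new URL(imageUrl).openStream();
                    final Bitmap bmp = BitmapFactory.decodeStream(in);
                    in.close();
                    // ImageView.post() runs the setter back on the UI thread
                    imageView.post(new Runnable() {
                        public void run() { imageView.setImageBitmap(bmp); }
                    });
                } catch (IOException e) {
                    // leave the placeholder drawable in place
                }
            }
        }).start();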

    Read the article

  • How do I access Glassfish V3 Administration Console Website from a remote host

    - by Tom
    I have installed GlassFish v3 on a standalone server running ubuntu-server 9.10. I can open the admin website if I use a browser running on the server itself, by browsing to http://localhost:4848/. I would like to access it from a remote machine by browsing to something like http://mydomain.com:4848/. The firewall is definitely allowing traffic through on that port (4848), and I can access the application server by browsing to http://mydomain.com:8080/. How can I allow remote access to the administration website?
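
    For what it's worth, GlassFish refuses remote admin connections until an admin password is set, and newer 3.x releases additionally require secure admin. A sketch of the usual commands, run on the server itself (enable-secure-admin exists as of 3.1; on plain v3 the password step alone may suffice):

        asadmin change-admin-password   # set a non-empty admin password
        asadmin enable-secure-admin     # 3.1+: allow remote admin over HTTPS
        asadmin restart-domain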

    Read the article
