Search Results

Search found 102 results on 5 pages for 'commodity'.

Page 2/5 | < Previous Page | 1 2 3 4 5  | Next Page >

  • Big Data – Buzz Words: What is MapReduce – Day 7 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned what Hadoop is. In this article we will take a quick look at one of the four most important buzz words that go around Big Data – MapReduce. What is MapReduce? MapReduce was designed by Google as a programming model for processing large data sets with a parallel, distributed algorithm on a cluster. Though MapReduce was originally Google proprietary technology, it has become quite a generalized term in recent times. MapReduce comprises a Map() and a Reduce() procedure. The Map() procedure performs filtering and sorting operations on the data, whereas the Reduce() procedure performs a summary operation on the data. This model is based on modified concepts of the map and reduce functions commonly available in functional programming. Libraries providing the Map() and Reduce() procedures have been written in many different languages. The most popular free implementation of MapReduce is Apache Hadoop, which we will explore tomorrow. Advantages of MapReduce Procedures The MapReduce framework usually spans distributed servers and runs various tasks in parallel with each other. Various components manage the communication between the nodes holding the data and provide high availability and fault tolerance. Programs written in the MapReduce functional style are automatically parallelized and executed on commodity machines. The MapReduce framework takes care of the details of partitioning the data and executing the processes on distributed servers at run time. If there is any failure during this process, the framework maintains high availability by having other available nodes take over the responsibility of the failed node. As you can clearly see, the MapReduce framework provides much more than just the Map() and Reduce() procedures; it provides scalability and fault tolerance as well. A typical implementation of the MapReduce framework processes many petabytes of data across thousands of processing machines. How Does the MapReduce Framework Work? A typical MapReduce deployment holds petabytes of data across thousands of nodes. Here is a basic explanation of the MapReduce procedures, which put this massive pool of commodity servers to work. Map() Procedure There is always a master node in this infrastructure which takes an input. Right after taking the input, the master node divides it into smaller sub-inputs, or sub-problems. These sub-problems are distributed to worker nodes. A worker node then processes them and does the necessary analysis. Once the worker node completes the work on its sub-problem, it returns the result to the master node. Reduce() Procedure All the worker nodes return the answers to the sub-problems assigned to them to the master node. The master node collects the answers and aggregates them into the answer to the original big problem that was assigned to it. The MapReduce framework runs the Map() and Reduce() procedures above in parallel and independently of each other. All the Map() procedures can run in parallel to each other, and once each worker node has completed its task, it can send the result back to the master node to be combined into a single answer. This particular procedure can be very effective when it is implemented on a very large amount of data (Big Data).
    The MapReduce framework has five different steps: preparing the Map() input, executing the user-provided Map() code, shuffling the Map output to the Reduce processors, executing the user-provided Reduce code, and producing the final output. Here is the dataflow of the MapReduce framework: Input Reader, Map Function, Partition Function, Compare Function, Reduce Function, Output Writer. In a future blog post of this 31 day series we will explore the various components of MapReduce in detail. MapReduce in a Single Statement MapReduce is the equivalent of SELECT and GROUP BY in a relational database, applied to a very large database. Tomorrow In tomorrow’s blog post we will discuss the Buzz Word – HDFS. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
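    To make the “SELECT and GROUP BY” analogy concrete, here is a minimal, self-contained word-count sketch in plain Python. It is not the Hadoop API and every name in it is illustrative only: the map step emits key/value pairs, a shuffle step groups them by key, and the reduce step summarizes each group.

        from collections import defaultdict

        def map_step(document):
            # Emit (word, 1) pairs -- the filtering/transforming side of the model.
            for word in document.split():
                yield word.lower(), 1

        def shuffle(pairs):
            # Group values by key, as the framework's partition/compare steps would.
            groups = defaultdict(list)
            for key, value in pairs:
                groups[key].append(value)
            return groups

        def reduce_step(key, values):
            # Summarize each group -- the GROUP BY aggregate.
            return key, sum(values)

        documents = ["big data is big", "map reduce handles big data"]
        pairs = [pair for doc in documents for pair in map_step(doc)]
        counts = dict(reduce_step(k, v) for k, v in shuffle(pairs).items())
        print(counts)  # {'big': 3, 'data': 2, 'is': 1, 'map': 1, 'reduce': 1, 'handles': 1}

    In Hadoop, the shuffle and the distribution of work across nodes are handled by the framework; only the map and reduce functions are user code.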

    Read the article

  • search dataset from xml file

    - by Anelim
    Hi, I need to filter the results I obtain when I load my XML file. For example, I need to search the XML data for items with the keyword "Chemistry". The XML example below is a summary of my XML file. The data is loaded in a GridView. Could you help? Thanks!
    XML file (summary):
        <CONTRACTS>
          <CONTRACT>
            <CONTRACTID>779</CONTRACTID>
            <NAME>ContractName</NAME>
            <KEYWORDS>Chemistry, Engineering, Chemical</KEYWORDS>
            <CONTRACTSTARTDATE>1/8/2005</CONTRACTSTARTDATE>
            <CONTRACTENDDATE>31/7/2008</CONTRACTENDDATE>
            <COMMODITIES>
              <COMMODITY>
                <COMMODITYCODE>CHEM</COMMODITYCODE>
                <COMMODITYNAME>Chemicals</COMMODITYNAME>
              </COMMODITY>
            </COMMODITIES>
          </CONTRACT>
        </CONTRACTS>
    My code-behind is:
        Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
            Dim ds As DataSet = New DataSet()
            ds.ReadXml(AppDomain.CurrentDomain.BaseDirectory + "/testxml.xml")
            Dim dtContract As DataTable = ds.Tables(0)
            Dim dtJoinCommodities As DataTable = ds.Tables(1)
            Dim dtCommodity As DataTable = ds.Tables(2)
            dtContract.Columns.Add("COMMODITYCODE")
            dtContract.Columns.Add("COMMODITYNAME")
            Dim count As Integer = 0
            Dim commodityCode As String = Nothing
            Dim commodityName As String = Nothing
            Dim dRowJoinCommodity As DataRow
            Dim trimChar As Char() = {","c, " "c}
            Dim textboxstring As String = "KEYWORDS like 'pencil'"
            For Each dRow As DataRow In dtContract.Select(textboxstring)
                commodityCode = ""
                commodityName = ""
                count = dtContract.Rows.IndexOf(dRow)
                dRowJoinCommodity = dtJoinCommodities.Rows(count)
                For Each dRowCommodities As DataRow In dtCommodity.Rows
                    If dRowCommodities("COMMODITIES_Id").ToString() = dRowJoinCommodity("COMMODITIES_ID").ToString() Then
                        commodityCode = commodityCode + dRowCommodities("COMMODITYCODE").ToString() + ", "
                        commodityName = commodityName + dRowCommodities("COMMODITYNAME").ToString() + ", "
                    End If
                Next
                commodityCode = commodityCode.TrimEnd(trimChar)
                commodityName = commodityName.TrimEnd(trimChar)
                dRow("COMMODITYCODE") = commodityCode
                dRow("COMMODITYNAME") = commodityName
            Next
            GridView1.DataSource = dtContract
            GridView1.DataBind()
        End Sub
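    As a language-agnostic illustration of the keyword-search idea only (not the ASP.NET/VB.NET answer the question asks for), here is a minimal sketch in Python using the standard library's xml.etree.ElementTree; the file name testxml.xml comes from the question, while the helper name and the case-insensitive substring match are assumptions.

        import xml.etree.ElementTree as ET

        def contracts_with_keyword(path, keyword):
            # Keep only CONTRACT elements whose KEYWORDS text contains the
            # requested keyword (case-insensitive substring match).
            matches = []
            for contract in ET.parse(path).getroot().iter("CONTRACT"):
                keywords = contract.findtext("KEYWORDS", default="")
                if keyword.lower() in keywords.lower():
                    matches.append({
                        "id": contract.findtext("CONTRACTID"),
                        "name": contract.findtext("NAME"),
                        "keywords": keywords,
                    })
            return matches

        print(contracts_with_keyword("testxml.xml", "Chemistry"))

    As an aside on the DataTable.Select filter above: if I recall the DataColumn expression syntax correctly, a LIKE pattern without wildcards behaves like an equality test, so a substring search would need something along the lines of "KEYWORDS LIKE '%Chemistry%'".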

    Read the article

  • Maximum number of hard drives in a build-your-own NAS solution [closed]

    - by groovehunter
    My IT department has a bunch of older 160/320GB drives. I'd like to use them in a build-your-own NAS device. What limitations exist regarding the maximum number of drives that can be connected to typical commodity hardware that might be used in a situation like this? EDIT: Okay, to clarify, my question is what to search for to find a storage controller that can handle many drives. I simply cannot find the right search terms.

    Read the article

  • YouSendIt Alternative?

    - by WuckaChucka
    Looking for a reasonably priced alternative to YouSendIt's exorbitant pricing for an embedded, unbranded (i.e. no "Uploads by SomeCompany" or at the very least, discreet, subtle co-branding) file upload solution for my client's print shop Website. To do what we want to do with YouSendIt, we're looking at a corporate account of $995 USD plus a $29.99 USD monthly fee, which is only sold pro-rated, so you have to buy the entire year's worth. To me, this is just unacceptable considering the commodity pricing of storage and bandwidth nowadays. For data, we're looking at roughly 10MB per upload, with perhaps 250-1000 uploads per month, with transient data storage of no more than 30 days (and more than likely 1-2 business days) for a total of 10 GB transfer (upload) and 10 GB transfer (download, to the print shop) at the very max each month. Any ideas? Everything I've found through searching seems to be geared more towards personal file sharing and not for embedding into Websites. Thanks

    Read the article

  • .NET development on a Retina MacBook Pro with Windows 8

    - by Jeff
    I remember sitting in Building 5 at Microsoft with some of my coworkers, when one of them came in with a shiny new 11” MacBook Air. It was nearly two years ago, and we found it pretty odd that the OEMs building Windows machines sucked at industrial design in a way that defied logic. While Dell and HP were in a race to the bottom building commodity crap, Apple was staying out of the low-end market completely and focusing on better design. In the process, they managed to build machines people actually wanted, and maintained an insanely high margin. I stopped buying the commodity crap and custom builds in 2006, when Apple went Intel. As a .NET guy, I was still in it for Microsoft’s stack of development tools, which I found awesome, but had back-to-back crappy laptops from HP and Dell. After that original 15” MacBook Pro, I also had a Mac Pro tower (that I sold after three years for $1,500!), a 27” iMac, and my favorite, a 17” MacBook Pro (the unibody style) with an SSD added from OWC. The 17” was a little much to carry around because it was heavy, but it sure was nice getting as much as eight hours of battery life, and the screen was amazing. When the rumors started about a 15” model with a “retina” screen inspired by the Air, I made up my mind I wanted one, and ordered it the day it came out. I sold my 17”, after three years, for $750 to a friend who is really enjoying it. I got the base model with the upgrade to 16 gigs of RAM. It feels solid for being so thin, and if you’ve used the third-generation iPad or the newer iPhone, you’ll be just as thrilled with the screen resolution. I’m typically getting just over six hours of battery life while running a VM, but Parallels 8 allegedly makes some power improvements, so we’ll see what happens. (It was just released today.) The nice thing about VMs is that you can run more than one at a time. Primarily I run the Windows 8 VM with four cores (the laptop is quad-core, but has 8 logical cores due to hyperthreading or whatever Intel calls it) and 8 gigs of RAM. I also have a Windows Server 2008 R2 VM I spin up when I need to test stuff in a “real” server environment, and I give it two cores and 4 gigs of RAM. The Windows 8 VM spins up in about 8 seconds. Visual Studio 2012 takes a few more seconds, but count part of that as the “ReSharper tax” as it does its startup magic. The real beauty, the thing I most looked forward to, is that beautifully crisp C# text. Consolas has never looked as good at 10 pt. as it does on this display. You know how it looks great at 80 pt. when conference speakers demo stuff on a projector? Think that sharpness, only tiny. It’s just gorgeous. Beyond that, everything is just so responsive and fast. Builds of large projects happen in seconds, hundreds of unit tests run in seconds… you just don’t spend a lot of time waiting for stuff. It’s kind of painful to go back to my 27” iMac (which would be better if I put an SSD in it before its third birthday). Are there negatives? A few minor issues, yes. As is the case with OS X, not everything scales right. You’ll see some weirdness at times with splash screens and icons and such. Chrome’s text rendering (in Windows) is apparently not aware of how to deal with higher DPIs, so text is fuzzy (the OS X version is super sharp, however). You’ll also have to do some fiddling with keyboard settings to use the Windows 8 keyboard shortcuts. Overall, it’s as close to a no-compromise development experience as I’ve ever had.
I’m not even going to bother with Boot Camp because the VM route already exceeds my expectations. You definitely get what you pay for. If this one also lasts three years and I can turn around and sell it, it’s worth it for something I use every day.

    Read the article

  • New Whitepaper: Deploying E-Business Suite on Exadata and Exalogic

    - by Elke Phelps (Oracle Development)
    Our E-Business Suite Performance Team recently published a new whitepaper to assist you with deploying E-Business Suite on the Oracle Exalogic Elastic Cloud and Oracle Exadata Database Machine, also referred to as Exastack. If you are considering a migration to Exastack, this new whitepaper will assist you in understanding sizing requirements, deployment standards and migration strategies: Deploying Oracle E-Business Suite on Oracle Exalogic Elastic Cloud and Oracle Exadata Database Machine (Note 1460742.1) This whitepaper covers the following topics: Scalability and Sizing Examples - provides performance benchmark analysis with concurrent user counts, scaling analysis and sizing recommendations; Deployment Standards - includes recommendations for deploying the various components of the E-Business Suite architecture on Exastack; Migration Standards and Guidelines - includes an overview of methods for migrating from commodity hardware to Exastack. References Our Maximum Availability Architecture (MAA) team has a number of whitepapers that provide additional information regarding Oracle E-Business Suite on the Oracle Exadata Database Machine. Their library of whitepapers may be found here: MAA Best Practices - Oracle Applications Unlimited Related Articles Running E-Business Suite on Exadata V2 Running Oracle E-Business Suite on Exalogic Elastic Cloud

    Read the article

  • Can a table be both Fact and Dimension

    - by PatFromCanada
    OK, I am a newbie and don't really think "dimensionally" yet. I have most of my initial schema roughed out, but I keep flipping back and forth on one table. I have a Contract table which has a quantity column (tonnes) and a net price column that need to be summed up a bunch of different ways, and the contract has lots of foreign keys (producer, commodity, futures month, etc.) and dates, so it appears to be a fact table. Also, the contract is never updated, if that makes a difference. However, we create cash tickets which we use to pay out part or all of the contract, and they have a contract ID on them, so then the contract looks like a dimension in the cash ticket's star schema. Is this a problem? Any ideas on how to resolve this? People don't seem to like the idea of joining two fact tables. Should I put producerId and commodityId on the cash ticket? It would seem really weird not to have a contractID on it.

    Read the article

  • Help with function in string

    - by draice
    I have a variable, emailBody, which is set to a string stored in a database:
        emailBody = DLookup("emailBody", "listAdditions", "Id = " & itemType)
    The string includes an IIf function (which includes a DLookup function):
        ?emailBody
        The new commodity is" & Iif(dlookup("IsVague", "CommodityType", "Description= " & newItem)="1", "vague.", "not vague.")
    How do I properly format the string so that the function will be evaluated and the result stored in the string?

    Read the article

  • Could a truly random number be generated using pings to pseudo-randomly selected IP addresses?

    - by _ande_turner_
    The question posed came about during a 2nd Year Comp Science lecture while discussing the impossibility of generating truly random numbers on a deterministic computational device. This was the only suggestion which didn't depend on non-commodity-class hardware. Subsequently, nobody would put their reputation on the line to argue definitively for or against it. Anyone care to take a stand for or against? If so, how about a mention of a possible implementation?

    Read the article

  • if statement is giving me some trouble

    - by kevin Mendoza
    For some reason, this if statement is giving me an "Expected : before ] token" error. if ([ [mine commodity] isEqualToString:@"Gold"] && [gold == YES]) { [tempMine setAnnotationType:iProspectLiteAnnotationTypeGold]; [mapView addAnnotation:tempMine]; } Is there some typo here that I'm not seeing?

    Read the article

  • Inspiration and influence of the else clause of loop statements in Python?

    - by Aristide
    Python offers an optional else clause on loop statements, which is executed if and only if the loop is not terminated by a break. For an interesting discussion about this neglected commodity, see this question. Here, I just wanted to know whether the very concept of this loop-else construct originates from another language (either theoretical or actually implemented), and conversely, whether it was taken up in any newer language. Maybe I should ask the former of Guido, but he surely is too busy a guy for such a futile inquiry. ;-)
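    For readers meeting the construct for the first time, here is a minimal illustration (the function and the lists are made up for the example); the else block runs only when the loop finishes without hitting break:

        def find_first_negative(numbers):
            for n in numbers:
                if n < 0:
                    print("found", n)
                    break
            else:
                # Runs only if the loop completed without a break.
                print("no negative number found")

        find_first_negative([3, 1, 4])   # prints "no negative number found"
        find_first_negative([3, -1, 4])  # prints "found -1"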

    Read the article

  • One IP, One Port, Multiple Servers

    - by Adrian Godong
    I am looking for a solution to forward one public IP address and one specific port to different machines based on hostname (as of now, I need it only for HTTP). The current setup is NAT on a commodity router (it only provides simple public port to private IP address/port forwarding). I can add a Windows Server 2008 R2 machine before the router if required, but would prefer not to. So ideally, I would like to keep the current setup and have the forwarding done on one of the Windows servers. Is it possible to do this?
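    What the question describes is essentially name-based virtual hosting behind a reverse proxy: a single public IP and port accept the connection, and the HTTP Host header decides which internal machine receives the request (which is why this works for HTTP but not for arbitrary protocols). A production setup would normally use IIS with Application Request Routing, nginx, or Apache for this; purely as a sketch of the idea, here is a toy GET-only forwarder in Python with made-up host names and internal addresses.

        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.request import urlopen

        # Hypothetical mapping: public host name -> internal server.
        BACKENDS = {
            "app1.example.com": "http://192.168.1.10:8080",
            "app2.example.com": "http://192.168.1.11:8080",
        }

        class HostHeaderProxy(BaseHTTPRequestHandler):
            def do_GET(self):
                # Pick the backend from the Host header (ignore any :port suffix).
                host = (self.headers.get("Host") or "").split(":")[0]
                backend = BACKENDS.get(host)
                if backend is None:
                    self.send_error(404, "Unknown host")
                    return
                # Fetch the same path from the chosen internal server and relay it.
                with urlopen(backend + self.path) as upstream:
                    body = upstream.read()
                self.send_response(200)
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

        HTTPServer(("0.0.0.0", 80), HostHeaderProxy).serve_forever()

    A real deployment would also relay headers, other HTTP methods and upstream errors, which is exactly what the dedicated reverse-proxy products mentioned above do for you.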

    Read the article

  • Instant messenger capable of offline messaging & tolerant of network interruptions

    - by Terry
    I am looking for an instant messaging solution to facilitate communications between recovery vehicles in remote rural areas. All the vehicles have internet connections, but they are intermittent depending on location. Ideally we'd like something that has the following features: Offline messaging: messages sent to clients who are offline will be delivered when they next come online, regardless of whether the sender is still online or not. Lightweight: CPU cycles are limited on the machines in these vehicles. A bloated solution will be an issue. Client platform is primarily win32, but support for osx/linux/mobile devices would be a bonus. Non-chatty: Bandwidth is a precious commodity for us, so services which use a minimal amount are ideal. Fault tolerant: We see plenty of packetloss and high latency, so whatever we use needs to be able to function in trying network conditions. I'm not fussed if we use a hosted platform like gtalk/skype/msn/icq/whatever, and likewise I can run a server if need be. Suggestions would be appreciated!

    Read the article

  • Is there a server distro with the capability of syncing live data to multiple machines?

    - by Adam Hart
    Scenario: I have a main server that is used for page building and storing master data, and it is accessed by a few clients on site. This company also has multiple branches with their own servers that they connect to locally, but they need to work with all the same data and have it synchronized across all servers in real (or close to real) time. Is there a way, or a specific server OS, that can sync live data across all of these servers? These servers would also need to be able to: configure AFP, FTP, CIFS, SMB; continue to host their web server and database server in a Microsoft environment, but move the file server off to commodity hardware. Just wondering if this is even possible.

    Read the article

  • Distributed, Parallel, Fault-tolerant File System

    - by Eddified
    There are so many choices that it's hard to know where to start. My requirements are these: Runs on Linux. Most of the files will be between 5-9 MB in size. There will also be a significant number of small-ish JPGs (100px x 100px). All of the files need to be available over HTTP. Redundancy -- ideally it would provide space efficiency similar to RAID 5's 75% (in RAID 5 this would be calculated thus: with 4 identical disks, 25% of the space is used for parity = 75% efficient). Must support several petabytes of data. Scalable. Runs on commodity hardware. In addition, I look for these qualities, though they are not "requirements": a stable, mature file system; lots of momentum and support; etc. I would like some input as to which file system works best for the given requirements. Some people at my organization are leaning towards MogileFS, but I'm not convinced of the stability and momentum of that project. GlusterFS and Lustre, based on my limited research, appear to be better supported... Thoughts?
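    As an aside on the redundancy requirement, the 75% figure generalizes: with single-parity redundancy over n equal disks, the usable fraction of raw capacity is (n-1)/n, so the efficiency target mostly constrains how wide each stripe or erasure-coding group needs to be. A tiny illustrative calculation in Python:

        def single_parity_efficiency(disks):
            # One disk's worth of capacity goes to parity; the rest is usable.
            return (disks - 1) / disks

        for n in (3, 4, 8, 12):
            print(n, "disks ->", f"{single_parity_efficiency(n):.0%} usable")
        # 3 disks -> 67%, 4 -> 75%, 8 -> 88%, 12 -> 92%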

    Read the article

  • Virtualizing OpenSolaris with physical disks

    - by Fionna Davids
    I currently have an OpenSolaris installation with a ~1TB RAID-Z volume made up of 3 500GB hard drives. This is on commodity hardware (an ASUS NVIDIA-based board with an Intel Core 2). I'm wondering whether anyone knows if XenServer or Oracle VM can be used to install 2009.06 and be given physical access to the three SATA drives, so that I can continue to use the zpool and be able to use the Xen bits for other areas. I'm thinking of installing the JeOS version of OpenSolaris, having it manage just my ZFS volume and some other stuff for work (4GB), then having a Windows (2GB) and a Linux (1GB) VM (there's 8GB of RAM on that box) virtualised for testing things. Currently I am using VirtualBox installed on OpenSolaris for the Windows and Linux testing, but wondered if the above was a better alternative. Essentially: 3 disks go to the OpenSolaris guest VM, which loads the zpool and offers it to the other VMs via CIFS.

    Read the article

  • Non-Windows, non-Unix-like OS's?

    - by dsimcha
    Since most operating systems I've heard of besides Windows seem to derive their heritage from Unix, I've been curious whether any OS's with the following characteristics exist: Not generally considered Unix-like, i.e. wasn't designed with Unix compatibility as a primary goal, doesn't use X11 as its default GUI in the most common distributions, doesn't support Unix commands by default, etc. Not in the Windows NT family. Is a modern production operating system, not a purely legacy operating system, a research/hobby project or an OS that's still in an alpha state. Is targeted at commodity x86/x64 PC hardware.

    Read the article

  • Network adapters reliability

    - by casey_miller
    Can you help me understand the reliability of network adapters? Most of the time servers have at least 2 NICs bonded to provide a sort of HA for networking, so in case one NIC fails, the second would still do the job. I wonder which factors affect the reliability of network adapters. I know that the most important and weakest part of any computer system is storage (i.e. HDDs), but how reliable are network adapters, actually? There are more expensive adapters and cheaper ones. In which cases do they actually fail, and in what circumstances? Might it be intensive usage, or simply the time they have been powered on? In your experience, how often have you found yourself replacing NICs due to failure? And what's the typical lifetime of commodity NICs? Thanks.

    Read the article

  • mirroring linux server to external usb harddrive

    - by DuPie
    My google-fu must be sucking. I haven't been able to find a good solution for the following: numerous Linux servers on commodity hardware. Trying to do a recovery mirror copy to external hard drives. The external hard drives are smaller than the source hard drives, but larger than the data. The external drives are connected via USB 2.0 (slow). Servers range from 20GB of data to 400GB of data. Servers are remote, so hands-on access is a pain. Need to copy boot files. The external drives are currently empty. Basically, I'm looking for a way to use a ghosting solution from INSIDE a running Linux server to an external hard drive, without booting a CD etc. The rsync/cpio solutions I've looked at don't work great with grub/dev/proc etc. I understand that since the system isn't offline, it won't be a "mirror" image as files change, but that's OK. Are there any free/commercial products that would work?
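    One common free approach, offered here only as a sketch rather than a full ghosting solution, is to run rsync from inside the live system and exclude the pseudo-filesystems that cause the grub/dev/proc trouble mentioned above; the mount point /mnt/usb and the exclude list are assumptions to adapt.

        import subprocess

        # Pseudo/volatile filesystems and mount points that should not be copied
        # from a live system (excluding /mnt/* also keeps the copy from recursing
        # into the destination drive itself).
        EXCLUDES = ["/dev/*", "/proc/*", "/sys/*", "/run/*", "/tmp/*",
                    "/mnt/*", "/media/*", "/lost+found"]

        def mirror_root_to_usb(target="/mnt/usb"):
            cmd = ["rsync", "-aAXH", "--numeric-ids", "--delete"]
            for pattern in EXCLUDES:
                cmd += ["--exclude", pattern]
            cmd += ["/", target]
            subprocess.run(cmd, check=True)

        mirror_root_to_usb()

    A copy made this way is not bootable by itself; to boot from the external disk you would still need to install a boot loader on it (for example with grub-install) and adjust its fstab.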

    Read the article

  • why are CPUs so much more expensive in the UK than US?

    - by Nick Fortescue
    I'm looking at building my own PC. An Intel Core i7 960 3.2 Ghz is about £457 in the UK at various online retailers. In the US the price at newegg is $570 (this is about £380 at current exchange rates). 2 questions. 1) Why the difference (about 20%)? All I can think of is sales tax. 2) Am I right in assuming this is just a commodity part - if I ordered one from the US there is no reason it would be any different from one bought in the UK?

    Read the article

  • Real or False Recovery? Economic 'tea-leaves'

    - by [email protected]
    "Information-technology is allowing the city's economy to speak to us in lots of different ways," Mr. Egan said. "We just need to find new ways of listening." Source: "New Way to Read Economy" WSJ_ARTICLE  April 8th, Carli Tuna, Blog by ARC's Steve Banker Apr 12, 2010 Alan Greenspan used cardboard box purchases and other 'source-commodity' indicators. The Carli Tuna WSJ article said that truck diesel fuel sales are a reliable indicator. What factor do you and your company use as future forward indicators? .. is it quotes, perhaps calls into the call center or sales activity?  Is your business moving to the internet and your supply chain driven by your iStore?  How do your distributors, retailers and supply chain partners provide the 'side-line' signals to you to either ramp up or contract production? With competition being only one click away, organizations need to know with higher degrees of certainty, what the econmic 'tea-leaves' are telling us and how firms need to react with production and shipping forecasts.  Firms using the latest forecasting and supply chain analytical (Bus.Intelligence) tools and technologies appear to be leading their markets "Had we been aware of that data in 2008," Mr. Leamer said, "we would have made a different call." .        

    Read the article

  • Talking JavaOne with Rock Star Raghavan Srinivas

    - by Janice J. Heiss
    Raghavan Srinivas, affectionately known as “Rags,” is a two-time JavaOne Rock Star (from 2005 and 2011) who, as a Developer Advocate at Couchbase, gets his hands dirty with emerging technology directions and trends. His general focus is on distributed systems, with a specialization in cloud computing. He worked on Hadoop and HBase during their early stages, has spoken at conferences worldwide on a variety of technical topics, and has conducted and organized Hands-on Labs and taught graduate classes. He has 20 years of hands-on software development experience and over 10 years of architecture and technology evangelism experience, and has worked for Digital Equipment Corporation, Sun Microsystems, Intuit and Accenture. He has evangelized and influenced the architecture of numerous technologies including the early releases of JavaFX, Java, Java EE, Java and XML, Java ME, AJAX and Web 2.0, and Java Security. Rags will be giving these sessions at JavaOne 2012: CON3570 -- Autosharding Enterprise to Social Gaming Applications with NoSQL and Couchbase CON3257 -- Script Bowl 2012: The Battle of the JVM-Based Languages (with Guillaume Laforge, Aaron Bedra, Dick Wall, and Dr Nic Williams) Rags emphasized the importance of the Cloud: “The Cloud and the Big Data are popular technologies not merely because they are trendy, but, largely due to the fact that it's possible to do massive data mining and use that information for business advantage,” he explained. I asked him what we should know about Hadoop. “Hadoop,” he remarked, “is mainly about using commodity hardware and achieving unprecedented scalability. At the heart of all this is the Java Virtual Machine which is running on each of these nodes. The vision of taking the processing to where the data resides is made possible by Java and Hadoop.” And the most exciting thing happening in the world of Java today? “I read recently that Java projects on github.com are just off the charts when compared to other projects. It's exciting to realize the robust growth of Java and the degree of collaboration amongst Java programmers.” He encourages Java developers to take advantage of Java 7 for Mac OS X, which is now available for download. At the same time, he also encourages us to read the caveats. Originally published on blogs.oracle.com/javaone.

    Read the article

  • Oracle's SPARC T4, 007 Style

    - by Kristin Rose
    The name's 4, T4, and this powerhouse travels hand in hand with its good friend SPARC. About 6 years ago, on-chip encryption acceleration first shipped in a commercial system, the SPARC T1. Today, thanks to Oracle's innovative SPARC leadership in on-chip encryption acceleration, complex cryptographic computation has arrived and has since rapidly evolved. Customers can now have security with performance because we, my friend, are in the Age of Big Data. If you need some high-speed action in your life, listen here. The SPARC T4 systems offer customers much more value for applications than just increased performance, along with a cross-sell opportunity: partners can integrate their own applications with Oracle's SPARC T4 servers for cloud deployments and provide direct business benefits (security, performance and optimization) that supersede the commodity approach to data center computing. As companies continue down this complex path of big data, eCommerce, and mobility, the need to provide better and more in-depth security is more prominent than ever. Oracle's SPARC T4 processor allows customers to deliver the highest levels of application security, as well as the necessary level of performance, without added cost and complexity. To learn more about the value of SPARC T4, check out a more in-depth blog here. For more on the SPARC T4 family of products, click here. Encryption Lives Another Day, The OPN Communications Team

    Read the article
