Search Results

Search found 3409 results on 137 pages for 'distributed computing'.

Page 47 of 137

  • ArchBeat Link-o-Rama for 2012-06-21

    - by Bob Rhubart
    - Software Architects Need Not Apply | Dustin Marx: "I think there is a place for software architecture," says Dustin Marx, "but a portion of our fellow software architects have harmed the reputation of the discipline." For another angle on this subject, check out Out of the Tower, Into the Trenches from the Nov/Dec edition of Oracle Magazine.
    - Oracle Data Integrator 11g - Faster Files | David Allan: David Allan illustrates "a big step for regular file processing on the way to super-charging big data files using Hadoop."
    - 2012 Oracle Fusion Middleware Innovation Awards - Win a FREE Pass to Oracle OpenWorld 2012 in SF: Share your use of Oracle Fusion Middleware solutions and how they help your organization drive business innovation. You just might win a free pass to Oracle OpenWorld 2012 in San Francisco. The deadline for submissions is July 17, 2012.
    - WLST Domain creation using dry-run | Michel Schildmeijer: What to do "if you want to browse through your domain to check if settings you want to apply satisfy your requirements."
    - Cloud opens up new vistas for service orientation at Netflix | Joe McKendrick: "Many see service oriented architecture as laying the groundwork for cloud. But at one well-known company, cloud has instigated the move to SOA."
    - How to avoid the Portlet Skin mismatch | Martin Deh: Detailed how-to from WebCenter A-Team blogger Martin Deh.
    - Internationalize WebCenter Portal - Content Presenter | Stefan Krantz: Stefan Krantz explains "how to get Content Presenter and its editorials to comply with the current selected locale for the WebCenter Portal session."
    - Oracle Public Cloud Architecture | Tyler Jewell: Tyler Jewell discusses the multi-tenancy model and elasticity solution implemented by Oracle Cloud in this QCon presentation.
    - A Distributed Access Control Architecture for Cloud Computing: The authors of this InfoQ article discuss a distributed architecture based on principles from security management and software engineering.
    - Thought for the Day: "Let us change our traditional attitude to the construction of programs. Instead of imagining that our main task is to instruct a computer what to do, let us concentrate rather on explaining to human beings what we want a computer to do." — Donald Knuth (Source: Quotes for Software Engineers)

    Read the article

  • Additional new content SOA Partner Community

    - by JuergenKress
    - Oracle Reference Architecture: Application Infrastructure Foundation: One of the earliest additions to the IT Strategies from Oracle library, this paper describes the concepts and capabilities of the application infrastructure and defines the platform on which solutions are built. Read it.
    - Scaling Service Oriented Architecture: What is scaling, and what does it mean to a service oriented architecture? Author Philip Wik explores those issues and proposes Oracle-based solutions to SOA scaling and a SOA scaling roadmap. Read it.
    - SOA, Cloud, and Service Technologies: A Conversation with Thomas Erl: Thomas Erl, the world's best-selling SOA author, is joined by Oracle SOA experts Tim Hall and Demed L'Her for a wide-ranging four-part conversation on the evolution of SOA and the emergence of the architect in the era of cloud computing. Listen to the podcast and read a transcript.
    - Cloud e-book: Invite your customers to download this Cloud e-book, packed with multimedia resources to educate your customers on the value of Oracle Cloud computing.
    - Assessment: Are you leading or lagging when it comes to SOA and BPM? Take the online SOA Assessment and BPM Assessment.
    - New collateral: Whitepaper Series: The Promise of BPM Technology for Financial Services Institutions (Resource Kit); Whitepaper: Reaching Process Excellence with Process Accelerators (PDF); Demystifying Cloud Integration: Whitepapers, webcasts, and customer case studies (Resource Kit); Whitepaper: Leveraging Governance to Sustain Enterprise Architecture (PDF); Article: Rethink SOA: A Recipe for Business Transformation; Oracle SOA Resource Kit; Oracle SOA Governance Resource Kit; Oracle BPM Resource Kit.
    - SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum
    Technorati Tags: SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

  • links for 2011-03-18

    - by Bob Rhubart
    - Events Overview (tags: ping.fm entarch)
    - No description available. (tags: ping.fm)
    - Andrejus Baranovskis: SOA & E2.0 Partner Community Forum Slides: Oracle ACE Director Andrejus Baranovskis shares slides from his presentation at the SOA & E2.0 Partner Community Forum in the Netherlands. (tags: oracle otn oracleace soa enterprise2.0 webcenter)
    - ODTUG Kaleidoscope 2011 - The Premier Conference for Oracle Fusion Middleware | AMIS Technology blog: Oracle ACE Director Lucas Jellema shares information on what he considers "the best event for anyone doing, dabbling in or considering doing Oracle Fusion Middleware." (tags: oracle otn oracleace odtug fusionmiddleware)
    - Mark Rittman: ODTUG K-Scope 2011 Early Bird Deadline is Closing: "The deadline for Early Bird registrations for Kscope is fast approaching [March 25]. If you want to attend at the discounted rate, sign up soon." - Oracle ACE Director Mark Rittman (tags: oracle otn oracleace odtug)
    - Master Data Management and Cloud Computing (Oracle Master Data Management): "Cloud Computing has the potential to significantly degrade data quality across the enterprise over time. Deploying a Master Data Management solution prior to or in conjunction with a move to the Cloud can insure that the data flowing into the enterprise from the Cloud is clean and governed." - David Butler (tags: oracle otn mdm cloud)

    Read the article

  • MapReduce

    - by kaleidoscope
    MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. Many real-world tasks are expressible in this model, as shown in the original MapReduce paper. Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity machines. The run-time system takes care of the details of partitioning the input data, scheduling the program's execution across a set of machines, handling machine failures, and managing the required inter-machine communication. This allows programmers without any experience with parallel and distributed systems to easily utilize the resources of a large distributed system.
    Example: a process to count the appearances of each different word in a set of documents.
      void map(String name, String document):
        // name: document name
        // document: document contents
        for each word w in document:
          EmitIntermediate(w, 1);
      void reduce(String word, Iterator partialCounts):
        // word: a word
        // partialCounts: a list of aggregated partial counts
        int result = 0;
        for each pc in partialCounts:
          result += ParseInt(pc);
        Emit(result);
    Here, each document is split into words, and each word is counted initially with a "1" value by the map function, using the word as the result key. The framework puts together all the pairs with the same key and feeds them to the same call to reduce, so that function only needs to sum all of its input values to find the total appearances of that word. A runnable sketch of the same example follows below.
    Sarang, K
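    The same word count can be sketched in plain, single-process Java to make the model concrete. This is only an illustration of the map/shuffle/reduce flow, not the actual framework: a real run-time partitions the input and distributes the map and reduce calls across many machines, and every class and method name below is ours, not part of any MapReduce library.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal, single-process sketch of the MapReduce word-count example.
// Names (WordCountSketch, mapDoc, reduceWord) are illustrative only.
public class WordCountSketch {

    // Map: for each word in a document, emit the intermediate pair (word, 1).
    static List<Map.Entry<String, Integer>> mapDoc(String name, String document) {
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (String w : document.toLowerCase().split("\\W+")) {
            if (!w.isEmpty()) {
                pairs.add(Map.entry(w, 1));
            }
        }
        return pairs;
    }

    // Reduce: sum all partial counts that share the same key (word).
    static int reduceWord(String word, List<Integer> partialCounts) {
        int result = 0;
        for (int pc : partialCounts) {
            result += pc;
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> docs = List.of("the quick brown fox", "the lazy dog and the fox");

        // "Shuffle" step: group intermediate values by key. In the real framework
        // this grouping happens across the cluster, not in one local HashMap.
        Map<String, List<Integer>> grouped = new HashMap<>();
        for (String doc : docs) {
            for (Map.Entry<String, Integer> pair : mapDoc("doc", doc)) {
                grouped.computeIfAbsent(pair.getKey(), k -> new ArrayList<>()).add(pair.getValue());
            }
        }

        // Reduce each key independently and print the totals.
        grouped.forEach((word, counts) ->
                System.out.println(word + " -> " + reduceWord(word, counts)));
    }
}
```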

    Read the article

  • ArchBeat Link-o-Rama Top 10 for November 2, 2012

    - by Bob Rhubart
    - ADF Mobile - Login Functionality | Andrejus Baranovskis: "The new ADF Mobile approach with native deployment is cool when you want to access phone functionality (camera, email, sms and etc.), also when you want to build mobile applications with advanced UI," reports Oracle ACE Director Andrejus Baranovskis.
    - Big Data: Running out of Metric System | Andrew McAfee: Do very large numbers make your brain hurt? Better stock up on aspirin. According to Andrew McAfee: "It seems safe to say that before the current decade is out we’ll need to convene a 20th conference to come up with some more prefixes for extraordinarily large quantities not to describe intergalactic distances or the amount of energy released by nuclear reactions, but to capture the amount of digital data in the world."
    - Cloud computing will save us from the zombie apocalypse | Cloud Computing - InfoWorld: "It's just a matter of time before we migrate our existing IT assets to public cloud systems," says InfoWorld cloud blogger David Linthicum. "Additionally, it's a short window until the dead rise from the grave and attempt to eat our brains." Is it Halloween or something?
    - Thought for the Day: "A computer lets you make more mistakes faster than any invention in human history—with the possible exceptions of hand guns and tequila." — Mitch Ratcliffe

    Read the article

  • New Book: Oracle Exalogic Elastic Cloud Handbook

    - by user12608550
    Oracle Exalogic Elastic Cloud Handbook, by Tom Plunkett, TJ Palazzolo, and Tejas Joshi, Oracle Press. The well-known characteristics and tiers of cloud computing have spawned myriad implementations by a host of vendors and system integrators. One of these, Oracle's Exalogic Elastic Cloud, part of Oracle's family of Engineered Systems, is a key component of Oracle's public and private cloud computing solutions, providing critical PaaS (Platform as a Service) features for cloud developers. These developers need guidance to take advantage of Exalogic's extensive capabilities, and the Oracle Exalogic Elastic Cloud Handbook, written by three highly experienced Oracle technologists, provides that guidance.
    Part One of the book covers Exalogic's hardware and software components, and includes a very useful chapter on deployment examples, describing best practices for scalability, availability, backup and recovery, and multi-tenant security, including integration with other Oracle Engineered Systems and products such as Exadata and storage subsystems. Part Two is a thorough guide to Exalogic installation features, configuration and monitoring, packaged application software management, and scalable application development. The book also provides an extensive list of online resources, including pointers to Web sites, whitepapers, instructional videos, and other Oracle documentation.
    So, if you're planning to implement Exalogic as part of your cloud infrastructure, or are considering doing so, you'll find lots of sage advice and best practices in this handbook.

    Read the article

  • ArchBeat Link-o-Rama for 11/29/2011

    - by Bob Rhubart
    - Webcast: Introducing Oracle WebLogic Server 12c: Developer Deep Dive: December 1, 2011, 11am-12pm PT / 2pm-3pm ET. Learn how Oracle WebLogic Server 12c enables rapid development of modern, lightweight Java EE 6 applications. Discover how you can leverage the latest development technologies, tools and standards when deploying to Oracle WebLogic Server across both conventional and Cloud environments.
    - Web Services in BI Publisher 11g | Robin Moffatt: BI Publisher 11g comes with a shiny set of new Web Services, superseding those that were in 10g. Robin Moffatt's article discusses some of the uses, and ways to implement them.
    - Stanford expands free, online information technology course offerings | ZDNet: Joe McKendrick reports on new Stanford online courses set to start in January 2012. Courses include Software as a Service and Computer Science 101.
    - The federal government's secret 1966 cloud computing plan | ZDNet: "Even as far back as 45 years ago, the US federal government struggled to consolidate and become more service-oriented across its agency silos," says McKendrick.
    - SOA Made Simple; Architects in AZ; Introduction to Cloud Migration: This week on the Oracle Technology Network Architect Home Page.
    - New release of S-ASH v.2.3 | Marcin Przepiorowski: A short post from Marcin Przepiorowski on the new version of Oracle Simulate ASH.
    - Architecture all day: Oracle Technology Network Architect Day - Phoenix, AZ. Spend the day with your peers learning from Oracle experts on Cloud Computing, Engineered Systems, and more. Wednesday, December 14, 2011, 8:30am to 5:00pm. Registration is free, but seating is limited.

    Read the article

  • Windows Azure Event

    - by Blog Author
    Get cloud ready with Windows Azure The cloud is everywhere and here at Microsoft we’re flying high with our cloud computing release, Windows Azure. As most of you saw at the Professional Developers Conference, the reaction to Windows Azure has been nothing short of “wow” – and based on your feedback, we’ve organized this special, all-day Windows Azure Firestarter event to help you take full advantage of the cloud. Maybe you've already watched a webcast, attended a recent MSDN Event on the topic, or done your own digging on Azure. Well, here's your chance to go even deeper. This one-of-a-kind event will focus on helping developers get ‘cloud ready’ with concrete details and hands-on tactics. We’ll start by revealing Microsoft’s strategic vision for the cloud, and then offer an end-to-end view of the Windows Azure platform from a developer’s perspective. We’ll also talk about migrating your data and existing applications (regardless of platform) onto the cloud. We’ll finish up with an open panel and lots of time to ask questions. Following this event, please join us for an engaging conversation about any and all Cloud Computing topics. This FREE event is hosted by Northwest Cloud, the cloud agnostic community group, and sponsored by Microsoft. http://www.nwcloud.org/redmond/2010-04-06

    Read the article

  • Metaphor for task synchronization [closed]

    - by nkint
    I'm looking for a metaphor. A friend of mine taught me to use metaphors from nature, everyday life, and math, and to use them to design my projects. They can help in creating a better design or a better understanding of the problem, and they are cool. Now I'm working on a project with hardware and micro-controllers in C. For convenience, I have decided to use multiple micro-controllers as co-processor units for real-time work (the slaves) and a master. This has saved me a lot of headache: I can code the main logic in the master without paying too much attention to super-optimizing everything; I don't care if I need some blocking call; I don't worry about serial communication with the computer. I just send messages to the slaves and they are super fast and super real-time. I like my design and it seems to work well. So here are the important concepts that I'm trying to capture in the metaphor:
    - hierarchy of processing
    - not using one big brain but rather several small, distributed brain units
    - using distributed power or resources
    I'm looking for a good metaphor for this concept of having one unit synchronize the work of all the others. Preferably, the metaphor would come from nature, biology, or zoology.

    Read the article

  • Join our webinar: What CFOs Want From IT -- Unlocking Growth with Emerging Technologies

    - by Di Seghposs
    According to the 2012 Gartner-FEI research, big data, analytics, and new mobile, social, and cloud computing platforms are increasingly on the CFO's radar screen because of their potential to unlock new growth opportunities. Join Oracle Chairman Jeff Henley and Oracle's Reggie Bradford and Rich Clayton as they explore CFO strategies and best practices for driving real value from IT investments in these areas:
    - Why CFOs should get involved in big data and business analytics projects, and what best practices they can adopt to ensure their success
    - How CFOs are leveraging new mobile and cloud computing platforms to address enterprise demands quickly and cost effectively
    - How CFOs can partner with CMOs to maximize the value of IT investments in social technologies that can help create new growth opportunities
    CFOs have more responsibility over IT than ever before. Learn how Oracle unlocks the transformative power of IT to take your business to the next level of performance.
    Date: Tuesday, November 27, 2012
    Time: 8:00 a.m. PST / 11:00 a.m. EST
    Register now.

    Read the article

  • Prepared statement alternatives for this middle-man program?

    - by user2813274
    I have a program that uses a prepared statement to connect and write to a database, and it is working nicely. I now need to create a middle-man program to insert between this program and the database. This middle-man program will actually write to multiple databases and handle any errors and connection issues. I would like advice on how to replicate the prepared statements so as to create minimal impact on the existing program; however, I am not sure where to start. I have thought about creating a "SQL statement class" that mimics the prepared statement, only that seems silly. The existing program is in Java, although it's going to be networked anyway, so I would be open to writing it in just about anything that would make sense. The databases are currently MySQL, although I would like to be open to changing the database type in the future. My main question is what should the interface for this program look like, and does doing this even make sense? A distributed DB would be the ideal solution, but they seem overly complex and expensive for my needs. I am hoping to replicate the main functionality of a distributed DB via this middle-man. I am not too familiar with SQL-based servers distributing data (or databases in general...), so perhaps I am fighting an uphill battle by trying to solve it via programming, but I would like to make an attempt at least.
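    One way to keep the existing program's prepared-statement style while introducing the middle man is to hide several JDBC connections behind a single wrapper object that mimics the setX/executeUpdate flow and replays each statement against every target database. The sketch below is only an illustration: the FanoutStatement class, the connection URLs, and the table are made-up assumptions, and a real version would need a far more careful error, retry, and consistency policy than simply rethrowing the first failure.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical "middle man" wrapper: one logical prepared statement that is
// replayed against several MySQL databases. Class and method names are made up
// for illustration; they are not part of JDBC or any existing product.
public class FanoutStatement implements AutoCloseable {
    private final List<PreparedStatement> statements = new ArrayList<>();

    public FanoutStatement(List<Connection> targets, String sql) throws SQLException {
        for (Connection c : targets) {
            statements.add(c.prepareStatement(sql));
        }
    }

    // Mirror the PreparedStatement setter the existing program already uses.
    public void setString(int index, String value) throws SQLException {
        for (PreparedStatement ps : statements) {
            ps.setString(index, value);
        }
    }

    // Execute against every target; collect failures instead of stopping at the first.
    public void executeUpdate() throws SQLException {
        SQLException firstFailure = null;
        for (PreparedStatement ps : statements) {
            try {
                ps.executeUpdate();
            } catch (SQLException e) {
                if (firstFailure == null) firstFailure = e;
            }
        }
        if (firstFailure != null) throw firstFailure;
    }

    @Override
    public void close() throws SQLException {
        for (PreparedStatement ps : statements) {
            ps.close();
        }
    }

    public static void main(String[] args) throws SQLException {
        // Assumed connection URLs -- replace with the real databases.
        List<Connection> targets = List.of(
                DriverManager.getConnection("jdbc:mysql://db1/example", "user", "pass"),
                DriverManager.getConnection("jdbc:mysql://db2/example", "user", "pass"));

        try (FanoutStatement stmt =
                     new FanoutStatement(targets, "INSERT INTO log (message) VALUES (?)")) {
            stmt.setString(1, "hello from the middle man");
            stmt.executeUpdate();
        }
    }
}
```

    Whether this wrapper lives as a library inside the existing program or behind a separate networked service is the real interface decision; the fan-out shape stays the same either way.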

    Read the article

  • AllSparkCube Packs 4,096 LEDs into a Giant Computer Controlled Display

    - by Jason Fitzpatrick
    LED matrix cubes are nothing new, but this 16x16x16 monster towers over the tiny 4x4x4 desktop variety. Check out the video to see it in action. Sound warning: the music starts off very loud and bass-filled; we’d recommend turning down the speakers if you’re watching from your cube. So what compels someone to build a giant LED cube driven by over a dozen Arduino shields? If you’re the employees at Adaptive Computing, you do it to dazzle crowds and show off your organizational skills: "Every time I talk about the All Spark Cube people ask 'so what does it do?' The features of the All Spark are the reason it was built and sponsored by Adaptive Computing. The Cube was built to catch peoples’ attention and to demonstrate how Adaptive can take a chaotic mess and inject order, structure and efficiency. We wrote several examples of how the All Spark Cube can demonstrate the effectiveness of a complex data center." If you’re interested in building a monster of your own, hit up the link below for more information, schematics, and videos.

    Read the article

  • Are you ready to take a walk in the clouds?

    - by Steve Loethen
    Cloud computing is here, whether we want it or not. When I say "a walk in the clouds" I am not talking about a pleasant romantic comedy, but a real alternative to hosting applications on-premise. For years we have had the power to host our web sites on remote systems. Sure, challenges existed. Mostly web sites. I could, with a few clicks, create an account at any of a myriad of web hosting sites, put my site in the hands of a remote hosting company, and boom, I was a site on the internet. But choices, power, and management were limited. Now we have a set of services that give us the power and control we love, but with the scalability of the data center. My personal web site is hosted on a laptop running Hyper-V in my basement. I have to manage the machine, patch it, make sure it is powered up. This is fine for the "hello, this is my dog Skippy" site that I maintain. If the football pool I run has an issue, one of the 10 users I have calls or emails me and I go check it out. All is well. But this falls well below the needs of even the simplest of enterprises. A business needs a stronger data center and a better pipe to the world. Do I really want to base my business on a dynamic DNS and a DSL line from the local phone company? Cloud computing gives us most of what I value (control, a database of my own, updating my site from Visual Studio). Come learn how this technology can transform your business. If you are a Microsoft shop, or are interested in Microsoft in the cloud, a free two-day Azure training class is being conducted in Kansas City on April 8 and 9. http://www.azurebootcamp.com/city/kansascity Hope to see you there. If you come, make sure you look me up.

    Read the article

  • Best peer-to-peer game architecture

    - by Dejw
    Consider a setup where game clients: have quite small computing resources (mobile devices, smartphones) are all connected to a common router (LAN, hotspot etc) The users want to play a multiplayer game, without an external server. One solution is to host an authoritative server on one phone, which in this case would be also a client. Considering point 1 this solution is not acceptable, since the phone's computing resources are not sufficient. So, I want to design a peer-to-peer architecture that will distribute the game's simulation load among the clients. Because of point 2 the system needn't be complex with regards to optimization; the latency will be very low. Each client can be an authoritative source of data about himself and his immediate environment (for example bullets.) What would be the best approach to designing such an architecture? Are there any known examples of such a LAN-level peer-to-peer protocol? Notes: Some of the problems are addressed here, but the concepts listed there are too high-level for me. Security I know that not having one authoritative server is a security issue, but it is not relevant in this case as I'm willing to trust the clients. Edit: I forgot to mention: it will be a rather fast-paced game (a shooter). Also, I have already read about networking architectures at Gaffer on Games.
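    One common starting point for a LAN-only peer-to-peer design of this kind is to let each client stay authoritative for its own state (position, bullets) and periodically broadcast that state as UDP datagrams to the subnet while listening for the other peers' broadcasts. The sketch below only illustrates that idea in Java; the port, broadcast address, and text message format are assumptions, and it omits interpolation, packet-loss handling, and any cheat protection.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

// Rough sketch of LAN peer-to-peer state sharing: every client broadcasts the
// state it is authoritative for and listens for everyone else's broadcasts.
// Port, broadcast address, and the text message format are assumptions.
public class LanPeer {
    private static final int PORT = 47474;

    public static void main(String[] args) throws Exception {
        String playerId = args.length > 0 ? args[0] : "player1";

        DatagramSocket socket = new DatagramSocket(PORT);
        socket.setBroadcast(true);

        // Receiver thread: apply state updates other peers are authoritative for.
        Thread receiver = new Thread(() -> {
            byte[] buf = new byte[512];
            while (true) {
                try {
                    DatagramPacket packet = new DatagramPacket(buf, buf.length);
                    socket.receive(packet);
                    String msg = new String(packet.getData(), 0, packet.getLength(),
                            StandardCharsets.UTF_8);
                    if (!msg.startsWith(playerId + ";")) {
                        System.out.println("peer update: " + msg);
                    }
                } catch (Exception e) {
                    return; // socket closed
                }
            }
        });
        receiver.setDaemon(true);
        receiver.start();

        // Sender loop: broadcast this client's own authoritative state ~10x per second.
        InetAddress broadcast = InetAddress.getByName("255.255.255.255");
        int x = 0, y = 0;
        while (true) {
            String state = playerId + ";x=" + x++ + ";y=" + y++;
            byte[] data = state.getBytes(StandardCharsets.UTF_8);
            socket.send(new DatagramPacket(data, data.length, broadcast, PORT));
            Thread.sleep(100);
        }
    }
}
```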

    Read the article

  • How to document requirements for an API systematically?

    - by Heinrich
    I am currently working on a project where I have to analyze the requirements that two given IT systems, which use cloud computing, place on a Cloud API. In other words, I have to analyze what requirements these systems have for a Cloud API, such that they would be able to switch to it while still being able to accomplish their current goals. Let me give you an example of some informal requirements of Project A:
    - When starting virtual machines in the cloud through the API, it must be possible to specify the memory size, CPU type, operating system, and an SSH key for the root user.
    - It must be possible to monitor the inbound and outbound network traffic per hour per virtual machine.
    - The API must support the assignment of public IPs to a virtual machine and the retrieval of those public IPs.
    - ...
    In a later stage of the project I will analyze some cloud computing standards that standardize cloud APIs to find out where possible shortcomings in the current standards are. A finding could, and probably will, be that a certain standard does not support monitoring resource usage and thus is not currently usable. I am currently trying to find a way to systematically write down and classify my requirements. I feel that the way I currently have them written down (like the three points above) is too informal. I have read a couple of requirements engineering and software architecture books, but they all focus too much on details and implementation. I really only care about the functionality provided through the API/interface, and I don't think UML diagrams etc. are the right choice for me. I think the requirements that I have collected so far can be described as user stories, but is that already enough for a sophisticated requirements analysis? Probably I should go "one level deeper" ... Any advice/learning resources for me?

    Read the article

  • Relationship between SOA and OOA

    - by TheSilverBullet
    Thomas Erl defines SOA as follows in his site: Service-oriented computing represents a new generation distributed computing platform. As such, it encompasses many things, including its own design paradigm and design principles, design pattern catalogs, pattern languages, a distinct architectural model, and related concepts, technologies, and frameworks. This definitely sounds like a whole new category which is parallel to object orientation. Almost one in which you would expect an entirely new language to exist for. Like procedural C and object oriented C#. Here is my understanding: In real life, we don't have entirely new language for SOA. And most application which have SOA architecture have an object oriented design underneath it. SOA is a "strategy" to make the entire application/service distributed and reliable. SOA needs OOPS working underneath it. Is this correct? Where does SOA (if at all it does) fit in with object oriented programming practices? Edit: I have learnt through answers that OOA and SOA work with each other and cannot be compared (in a "which is better" way). I have changed the title to "Relationship between SOA and OOA" rather than "comparison".

    Read the article

  • What does SVN do better than git?

    - by doug
    No question that the majority of debates over programmer tools distill to either personal choice (by the user) or design emphasis, i.e., optimizing design according to particular use cases (by the tool builder). Text editors are probably the most prominent example: a coder who works on Windows at work and codes in Haskell on a Mac at home values cross-platform support and compiler integration and so chooses Emacs over TextMate, etc. It's less common that a newly introduced technology is genuinely, demonstrably superior to the extant options. I wonder if this is in fact the case with version-control systems, in particular, centralized VCS (CVS, SVN) versus distributed VCS (git, hg)? I used SVN for about five years, and SVN is currently used where I work. A little less than three years ago, I switched to git (and GitHub) for all of my personal projects. I can think of a number of advantages of git over subversion (and which for the most part abstract to advantages of distributed over centralized VCS), but I cannot think of one counter-example--some task (that's relevant and arises in a programmer's usual workflow) that subversion does better than git. The only conclusion I have drawn from this is that I don't have any data--not that git is better, etc. My guess is that such counter-examples exist, hence this question.

    Read the article

  • Software Manager who makes developers do Project Management

    - by hdman
    I'm a software developer working in an embedded systems company. We have a Project Manager, who takes care of the overall project schedule (including electrical, quality, software, and manufacturing), so his software schedule is very brief. We also have a Software Manager, who's my boss. He makes me write and maintain the software schedule, design documents (high- and low-level design), SRS, change management, verification plans and reports, release management, reviews, and of course the software. We only have one Test Engineer for the whole software team (10 members), and at any given time there are a couple of projects going on. I'm spending 80% of my time making these documents. My boss comes from a process background and believes what we need is better documentation to improve software:
    (1) He considers the design to be paramount; coding is "just writing the design down", it shouldn't take too long, and "all the code should be written before the hardware is ready".
    (2) He doesn't understand the difference between centralized and distributed version control, even after we told him it's easier to collaborate with a distributed model.
    (3) He doesn't understand code, and wants to understand every bug and its proposed solution.
    (4) He believes verification should be done by the developer, and validation by the Tester. The thing is, though, our verification only checks whether the implementation is correct (we don't write unit tests; they're never considered in the schedule), and validation is black-box testing, so the unit tests are missing.
    I'm really confused.
    (1) Am I responsible for maintaining all these documents? It makes me feel like I'm doing the software project management, in essence.
    (2) I don't really like creating documents; I want to solve problems and write code. In my experience, creating design documents only helps to an extent; it's never the solution to better or faster code.
    (3) I feel the boss doesn't really care about making better products, but only about being a good manager in the eyes of the management.
    What can I do?

    Read the article

  • Ternary and Artificial Intelligence

    - by user2957844
    I'm not much of a programmer myself, but I have been thinking about the future of AI. If a fully functional AI is programmed in a binary environment, as is used in current computing, would that create a bit of a black-and-white personality? As in just yes/no, on/off, 1/0? I will use the Skynet computer from the Terminator series as a bad analogy: it is brought online and comes to the conclusion that humanity should just be destroyed so the problem is resolved; basically its only options were to fire the missiles or not. (The films do not really go into what its moves would be after doing such a thing, but that goes into the realms of AI evolution so does not really fit with this question.) It may also have been badly programmed. Now, the human mind has been likened to a ternary system, which allows our "out of the box" thinking along with all the other wonderful things our minds can do. So, would it not be more prudent to create a functional ternary system and program an AI using it, so the resulting personality would be able to benefit from the third "maybe" (so to speak) option? I understand that in binary there are ways to get around the whole yes/no way of things; however, the basic operations are still just 1's and 0's. Again, using the above bad Skynet analogy: if it could have had that third "maybe" option as part of its core system, it might have decided not to launch, due to being able to make sense of the intricacies of human nature and the politics of such a move. In effect, my question is: would an AI benefit more from ternary computing as opposed to binary, due to the inclusion of -1, or 2, dependent on the system ("maybe," as I call it)?

    Read the article

  • Would having an undergraduate certificate in Computer Science help me get employed as a computer programmer? [on hold]

    - by JDneverSleeps
    I am wondering how employers would perceive the University Certificate in Computing and Information Systems offered by Athabasca University (a distance education institution; the university is legit and accredited by the Government of Alberta, Canada). I already have a BSc in Statistics from the University of Alberta (a classic brick-and-mortar public university in Alberta, Canada), so I can state in my resume that I have a "university degree". Luckily, I was able to secure very good employment in my field after graduating from the U of A. The main reason I am interested in taking the certificate program through Athabasca is that knowing how to program can increase my chances of promotion in my current job. I also believe that if something turns out badly in my current job and I ever need to look for a new place to work, having the certificate in computer science will help me get employed as a computer programmer (i.e., my choice for the new job wouldn't be restricted to the field of Statistics). Athabasca University claims that the certificate program is meant to be equivalent to an undergraduate minor in computing science. I carefully looked at the certificate's curriculum and, as far as I am concerned, the certificate program does have the same level of rigour as the undergraduate minor in Computer Science programs offered by other Canadian universities. I am also confident that the certificate program will get me to pick up enough skills and background to start a career as a computer programmer. The reasons why I am not 100% sure that getting the certificate is worth the tuition are:
    - Athabasca University is a distance education institution (accredited by the government, but still)
    - The credential that I will receive is a "university certificate", not an "undergraduate degree"
    Do you think it's a good idea for me to pursue the certificate, given the two facts above? Again, I already have my Bachelor's degree, although it is not in CS. Thanks,

    Read the article

  • So You Want To Build a SPARC Cloud

    - by user12601629
    Did you ever wish you could get the industrial-strength power of UNIX/RISC with the flexibility of cloud computing? Well, now you can! With recent advances from Oracle it's possible to build an incredibly high-performance, flexible, available virtualized infrastructure based on Solaris and SPARC. Here's the recipe! Authored in collaboration across the Oracle "Systems Group" team, we now have a complete best practice guide for you. Click below to download it: Best Practices for Building a Virtualized SPARC Computing Environment.
    Inside you'll find recommendations for how and when to leverage technologies like:
    - SPARC T4
    - OVM for SPARC hypervisor (version 2.2 and newer)
    - Solaris 11
    - Ops Center 12c
    - ZFS Storage Appliance
    - Oracle network switches
    By following these best practices, you'll be able to construct a dynamic, virtualized infrastructure that allows for:
    - Easy, GUI-based provisioning of new VMs
    - Automated HA failover in the event of physical server failures
    - Automatic load balancing across a cluster of VM hosts
    - Complete end-to-end monitoring
    You should download this paper and check it out. Even if you aren't planning on buying all new hardware and instead want to transform some existing gear into a dynamic virtualized environment, this paper will give you concrete info on what to do and the trade-offs you'll make. Have fun getting started on your journey to build a SPARC cloud!

    Read the article

  • Wednesday at Oracle OpenWorld 2012 - Must See Session: “Event-Driven Patterns and Best Practices: Even More Important with Big Data”

    - by Lionel Dubreuil
    Don't miss this session: "CON8636 - Event-Driven Patterns and Best Practices: Even More Important with Big Data".
    Speakers: Faisal Nazir - Senior Solutions Architect, Motorola; Shinichiro Takahashi - Senior Manager, Service Platform Department, NTT DOCOMO, INC.; Robin Smith - Product Management/Strategy Director - Oracle Event Processing, Oracle
    Date: Wednesday, Oct 3
    Time: 10:15 AM - 11:15 AM
    Location: Moscone South - 310
    As the demand for big data analytics and integration grows across all industries, this session focuses on the role of the Oracle event-driven solution platform in delivering vital real-time integrated analysis intelligence to the data streams consumed and emitted from these large distributed data stores. Objectives for this session are to:
    - Increase awareness of Oracle Event Processing, showcasing tight alignment with big data solutions
    - Highlight emerging usage patterns in relation to streaming event data and distributed data stores
    - Show a significant Oracle competitive advantage over IBM solutions advertised in this domain

    Read the article

  • Mixed Emotions: Humans React to Natural Language Computer

    - by Applications User Experience
    There was a big event in Silicon Valley on Tuesday, November 15. Watson, the natural language computer developed at IBM Watson Research Center in Yorktown Heights, New York, and its inventor and principal research investigator, David Ferrucci, were guests at the Computer History Museum in Mountain View, California for another round of the television game Jeopardy. You may have read about or watched on YouTube how Watson beat Ken Jennings and Brad Rutter, two top Jeopardy competitors, last February. This time, Watson swept the floor with two Silicon Valley high-achievers, one a venture capitalist with a background  in math, computer engineering, and physics, and the other a technology and finance writer well-versed in all aspects of culture and humanities. Watson is the product of the DeepQA research project, which attempts to create an artificially intelligent computing system through advances in natural language processing (NLP), among other technologies. NLP is a computing strategy that seeks to provide answers by processing large amounts of unstructured data contained in multiple large domains of human knowledge. There are several ways to perform NLP, but one way to start is by recognizing key words, then processing  contextual  cues associated with the keyword concepts so that you get many more “smart” (that is, human-like) deductions,  rather than a series of “dumb” matches.  Jeopardy questions often require more than key word matching to get the correct answer; typically several pieces of information put together, often from vastly different categories, to come up with a satisfactory word string solution that can be rephrased as a question.  Smarter than your average search engine, but is it as smart as a human? Watson was especially fast at descrambling mixed-up state capital names, and recalling and pairing movie titles where one started and the other ended in the same word (e.g., Billion Dollar Baby Boom, where both titles used the word Baby). David said they had basically removed the variable of how fast Watson hit the buzzer compared to human contestants, but frustration frequently appeared on the faces of the contestants beaten to the punch by Watson. David explained that top Jeopardy winners like Jennings achieved their success with a similar strategy, timing their buzz to the end of the reading of the clue,  and “running the board”, being first to respond on about 60% of the clues.  Similar results for Watson. It made sense that Watson would be good at the technical and scientific stuff, so I figured the venture capitalist was toast. But I thought for sure Watson would lose to the writer in categories such as pop culture, wines and foods, and other humanities. Surprisingly, it held its own. I was amazed it could recognize a word definition of a syllogism in the category of philosophy. So what was the audience reaction to all of this? We started out expecting our formidable human contestants to easily run some of their categories; however, they started off on the wrong foot with the state capitals which Watson could unscramble so efficiently. By the end of the first round, contestants and the audience were feeling a little bit, well, …. deflated. Watson was winning by about $13,000, and the humans had gone into negative dollars. The IBM host said he was going to “slow Watson down a bit,” and the humans came back with respectable scores in Double Jeopardy. 
    This was partially thanks to a very sympathetic audience (and host, also a human) providing “group-think” on many questions, especially baseball’s most valuable players, which, by the way, couldn’t have been hard because even I knew them. Yes, that’s right, the humans cheated. Since Watson could speak but not hear us (it didn’t have speech recognition capability), it was probably unaware of this. In Final Jeopardy, the single question had to do with law. I was sure Watson would blow this one, but all contestants were able to answer correctly about a copyright law. In a career devoted to making computers more helpful to people, I think I may have seen how a computer can do too much. I’m not sure I’d want to work side-by-side with a Watson doing my job. Certainly listening and empathy are important traits we humans still have over Watson. While there was great enthusiasm in the packed room of computer scientists and their friends for this standing-room-only show, I think it made several of us uneasy (especially the poor human contestants whose egos were soundly bashed in the first round). This computer system, by the way, only took 4 years to program. David Ferrucci mentioned several practical uses for Watson, including medical diagnoses and legal strategies. Are you “the expert” in your job? Imagine NLP computing on an Oracle database. This may be the user interface of the future to enable users to better process big data. How do you think you’d like it? Postscript: There were three little boys sitting in front of me in the very first row. They looked, how shall I say it, … unimpressed!

    Read the article

  • NoSQL Java API for MySQL Cluster: Questions & Answers

    - by Mat Keep
    The MySQL Cluster engineering team recently ran a live webinar, available now on-demand demonstrating the ClusterJ and ClusterJPA NoSQL APIs for MySQL Cluster, and how these can be used in building real-time, high scale Java-based services that require continuous availability. Attendees asked a number of great questions during the webinar, and I thought it would be useful to share those here, so others are also able to learn more about the Java NoSQL APIs. First, a little bit about why we developed these APIs and why they are interesting to Java developers. ClusterJ and Cluster JPA ClusterJ is a Java interface to MySQL Cluster that provides either a static or dynamic domain object model, similar to the data model used by JDO, JPA, and Hibernate. A simple API gives users extremely high performance for common operations: insert, delete, update, and query. ClusterJPA works with ClusterJ to extend functionality, including - Persistent classes - Relationships - Joins in queries - Lazy loading - Table and index creation from object model By eliminating data transformations via SQL, users get lower data access latency and higher throughput. In addition, Java developers have a more natural programming method to directly manage their data, with a complete, feature-rich solution for Object/Relational Mapping. As a result, the development of Java applications is simplified with faster development cycles resulting in accelerated time to market for new services. MySQL Cluster offers multiple NoSQL APIs alongside Java: - Memcached for a persistent, high performance, write-scalable Key/Value store, - HTTP/REST via an Apache module - C++ via the NDB API for the lowest absolute latency. Developers can use SQL as well as NoSQL APIs for access to the same data set via multiple query patterns – from simple Primary Key lookups or inserts to complex cross-shard JOINs using Adaptive Query Localization Marrying NoSQL and SQL access to an ACID-compliant database offers developers a number of benefits. MySQL Cluster’s distributed, shared-nothing architecture with auto-sharding and real time performance makes it a great fit for workloads requiring high volume OLTP. Users also get the added flexibility of being able to run real-time analytics across the same OLTP data set for real-time business insight. OK – hopefully you now have a better idea of why ClusterJ and JPA are available. Now, for the Q&A. Q & A Q. Why would I use Connector/J vs. ClusterJ? A. Partly it's a question of whether you prefer to work with SQL (Connector/J) or objects (ClusterJ). Performance of ClusterJ will be better as there is no need to pass through the MySQL Server. A ClusterJ operation can only act on a single table (e.g. no joins) - ClusterJPA extends that capability Q. Can I mix different APIs (ie ClusterJ, Connector/J) in our application for different query types? A. Yes. You can mix and match all of the API types, SQL, JDBC, ODBC, ClusterJ, Memcached, REST, C++. They all access the exact same data in the data nodes. Update through one API and new data is instantly visible to all of the others. Q. How many TCP connections would a SessionFactory instance create for a cluster of 8 data nodes? A. SessionFactory has a connection to the mgmd (management node) but otherwise is just a vehicle to create Sessions. Without using connection pooling, a SessionFactory will have one connection open with each data node. Using optional connection pooling allows multiple connections from the SessionFactory to increase throughput. Q. 
Can you give details of how Cluster J optimizes sharding to enhance performance of distributed query processing? A. Each data node in a cluster runs a Transaction Coordinator (TC), which begins and ends the transaction, but also serves as a resource to operate on the result rows. While an API node (such as a ClusterJ process) can send queries to any TC/data node, there are performance gains if the TC is where most of the result data is stored. ClusterJ computes the shard (partition) key to choose the data node where the row resides as the TC. Q. What happens if we perform two primary key lookups within the same transaction? Are they sent to the data node in one transaction? A. ClusterJ will send identical PK lookups to the same data node. Q. How is distributed query processing handled by MySQL Cluster ? A. If the data is split between data nodes then all of the information will be transparently combined and passed back to the application. The session will connect to a data node - typically by hashing the primary key - which then interacts with its neighboring nodes to collect the data needed to fulfil the query. Q. Can I use Foreign Keys with MySQL Cluster A. Support for Foreign Keys is included in the MySQL Cluster 7.3 Early Access release Summary The NoSQL Java APIs are packaged with MySQL Cluster, available for download here so feel free to take them for a spin today! Key Resources MySQL Cluster on-line demo  MySQL ClusterJ and JPA On-demand webinar  MySQL ClusterJ and JPA documentation MySQL ClusterJ and JPA whitepaper and tutorial
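    For readers who have not tried ClusterJ before, a basic session looks roughly like the sketch below: an annotated interface maps to a Cluster table, and a Session performs primary-key operations without any SQL. The table, columns, and connect string here are assumptions chosen for illustration; see the ClusterJ documentation linked above for the exact configuration properties and API details.

```java
import java.util.Properties;

import com.mysql.clusterj.ClusterJHelper;
import com.mysql.clusterj.Session;
import com.mysql.clusterj.SessionFactory;
import com.mysql.clusterj.annotation.PersistenceCapable;
import com.mysql.clusterj.annotation.PrimaryKey;

// Rough ClusterJ usage sketch. The table name, columns, and connect string
// are assumptions for illustration; consult the ClusterJ docs for details.
public class ClusterJExample {

    // Domain object: an interface annotated with the Cluster table it maps to.
    @PersistenceCapable(table = "subscriber")
    public interface Subscriber {
        @PrimaryKey
        int getId();
        void setId(int id);

        String getName();
        void setName(String name);
    }

    public static void main(String[] args) {
        // Connect to the cluster through the management node (assumed host/port).
        Properties props = new Properties();
        props.put("com.mysql.clusterj.connectstring", "localhost:1186");
        props.put("com.mysql.clusterj.database", "test");

        SessionFactory factory = ClusterJHelper.getSessionFactory(props);
        Session session = factory.getSession();

        // Insert a row by primary key, no SQL involved.
        Subscriber s = session.newInstance(Subscriber.class);
        s.setId(1);
        s.setName("Alice");
        session.makePersistent(s);

        // Primary-key lookup, served directly by the data nodes.
        Subscriber found = session.find(Subscriber.class, 1);
        System.out.println("found: " + found.getName());

        session.close();
    }
}
```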

    Read the article

  • College Courses through distance learning

    - by Matt
    I realize this isn't really a programming question, but didn't really know where to post this in the stackexchange and because I am a computer science major i thought id ask here. This is pretty unique to the programmer community since my degree is about 95% programming. I have 1 semester left, but i work full time. I would like to finish up in December, but to make things easier i like to take online classes whenever I can. So, my question is does anyone know of any colleges that offer distance learning courses for computer science? I have been searching around and found a few potential classes, but not sure yet. I would like to gather some classes and see what i can get approval for. Class I need: Only need one C SC 437 Geometric Algorithms C SC 445 Algorithms C SC 473 Automata Only need one C SC 452 Operating Systems C SC 453 Compilers/Systems Software While i only need of each of the above courses i still need to take two more electives. These also have to be upper 400 level classes. So i can take multiple in each category. Some other classes I can take are: CSC 447 - Green Computing CSC 425 - Computer Networking CSC 460 - Database Design CSC 466 - Computer Security I hoping to take one or two of these courses over the summer. If not, then online over the regular semester would be ok too. Any help in helping find these classes would be awesome. Maybe you went to a college that offered distance learning. Some of these classes may be considered to be graduate courses too. Descriptions are listed below if you need. Thanks! Descriptions Computer Security This is an introductory course covering the fundamentals of computer security. In particular, the course will cover basic concepts of computer security such as threat models and security policies, and will show how these concepts apply to specific areas such as communication security, software security, operating systems security, network security, web security, and hardware-based security. Computer Networking Theory and practice of computer networks, emphasizing the principles underlying the design of network software and the role of the communications system in distributed computing. Topics include routing, flow and congestion control, end-to-end protocols, and multicast. Database Design Functions of a database system. Data modeling and logical database design. Query languages and query optimization. Efficient data storage and access. Database access through standalone and web applications. Green Computing This course covers fundamental principles of energy management faced by designers of hardware, operating systems, and data centers. We will explore basic energy management option in individual components such as CPUs, network interfaces, hard drives, memory. We will further present the energy management policies at the operating system level that consider performance vs. energy saving tradeoffs. Finally we will consider large scale data centers where energy management is done at multiple layers from individual components in the system to shutting down entries subset of machines. We will also discuss energy generation and delivery and well as cooling issues in large data centers. Compilers/Systems Software Basic concepts of compilation and related systems software. Topics include lexical analysis, parsing, semantic analysis, code generation; assemblers, loaders, linkers; debuggers. 
Operating Systems Concepts of modern operating systems; concurrent processes; process synchronization and communication; resource allocation; kernels; deadlock; memory management; file systems. Algorithms Introduction to the design and analysis of algorithms: basic analysis techniques (asymptotics, sums, recurrences); basic design techniques (divide and conquer, dynamic programming, greedy, amortization); acquiring an algorithm repertoire (sorting, median finding, strong components, spanning trees, shortest paths, maximum flow, string matching); and handling intractability (approximation algorithms, branch and bound). Automata Introduction to models of computation (finite automata, pushdown automata, Turing machines), representations of languages (regular expressions, context-free grammars), and the basic hierarchy of languages (regular, context-free, decidable, and undecidable languages). Geometric Algorithms The study of algorithms for geometric objects, using a computational geometry approach, with an emphasis on applications for graphics, VLSI, GIS, robotics, and sensor networks. Topics may include the representation and overlaying of maps, finding nearest neighbors, solving linear programming problems, and searching geometric databases.

    Read the article
