Search Results

Search found 8266 results on 331 pages for 'distributed systems'.


  • SQL Server 2012 content on Channel 9

    - by jamiet
    A mountain of SQL Server 2012 video content featuring Greg Low, Jonathan Kehayias, Joe Sack and Roger Doherty has just been released on Channel 9. Channel 9 has great support for tags and RSS feeds, so if you want to download all of that content automatically, you can simply add the following RSS feed to your podcast reader of choice: http://channel9.msdn.com/Tags/sql+server+2012/RSS. Then have fun learning about all the new features in SQL Server 2012, such as: AlwaysOn, Power View, SSDT, SSRS Data Alerts, SSAS Tabular Modelling, DAX improvements, MDS improvements, SSIS improvements, DQS, StreamInsight improvements, Data-Tier Apps (DACs), LocalDB, FileTable, Spatial improvements, T-SQL paging, Distributed Replay, XEvents improvements, ADO.Net Code-first, T-SQL improvements, Server roles, Partitioning improvements, and ColumnStore. Whew, quite a list! @jamiet
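
    If you would rather script the feed than use a podcast reader, a minimal sketch like the one below lists the latest items. It assumes only that the feed is standard RSS 2.0 (items nested under rss/channel); the URL is the one from the post, and the function name is mine.

        import urllib.request
        import xml.etree.ElementTree as ET

        FEED_URL = "http://channel9.msdn.com/Tags/sql+server+2012/RSS"

        def list_items(url=FEED_URL):
            # Fetch and parse the feed; RSS 2.0 nests items under rss/channel.
            with urllib.request.urlopen(url) as resp:
                root = ET.parse(resp).getroot()
            for item in root.iterfind("./channel/item"):
                title = item.findtext("title", default="(untitled)")
                link = item.findtext("link", default="")
                print(f"{title}\n  {link}")

        if __name__ == "__main__":
            list_items()

    Where an item carries a media attachment, it would normally sit in that item's enclosure element, so a downloader could read the enclosure's url attribute instead of the link.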

    Read the article

  • How might one teach OO without referencing physical real-world objects?

    - by hal10001
    I remember reading somewhere that the original concept behind OO was to find a better architecture for handling the messaging of data between multiple systems in a way that protected the state of that data. That is probably a poor paraphrase, but it made me wonder whether there is a way of teaching OO without the (Bike, Car, Person, etc.) object analogies, one that instead focuses on the messaging aspects. If you have articles, links, books, etc., that would be helpful.
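
    One classroom-sized way to take that angle, sketched below in Python, is to give each object a single receive() entry point, so students see encapsulated state reacting to messages rather than nouns with getters. The Account example and its message names are hypothetical, purely for illustration.

        class Account:
            # State is private; the only way to affect it is to send a message.
            def __init__(self):
                self._balance = 0

            def receive(self, message, **payload):
                # Dispatch on the message name instead of exposing fields.
                if message == "deposit":
                    self._balance += payload["amount"]
                    return {"ok": True}
                if message == "balance?":
                    return {"ok": True, "balance": self._balance}
                return {"ok": False, "error": f"unknown message {message!r}"}

        inbox = Account()
        inbox.receive("deposit", amount=100)
        print(inbox.receive("balance?"))   # {'ok': True, 'balance': 100}

    The point of the exercise is that the caller never touches _balance; everything happens through the message protocol, which is close to the original messaging framing of OO.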

    Read the article

  • Improved Performance on PeopleSoft Combined Benchmark using SPARC T4-4

    - by Brian
    Oracle's SPARC T4-4 server running Oracle's PeopleSoft HCM 9.1 combined online and batch benchmark achieved a world record: 18,000 concurrent users experiencing subsecond response times while executing a PeopleSoft Payroll batch job of 500,000 employees in 32.4 minutes. This result was obtained with a SPARC T4-4 server running Oracle Database 11g Release 2, a SPARC T4-4 server running the PeopleSoft HCM 9.1 application server, and a SPARC T4-2 server running Oracle WebLogic Server in the web tier. The SPARC T4-4 server running the application tier used Oracle Solaris Zones, which provide a flexible, scalable and manageable virtualization environment. The average CPU utilization was 17% on the SPARC T4-2 server in the web tier, 59% on the SPARC T4-4 server in the application tier, and 47% on the SPARC T4-4 server in the database tier (online and batch), leaving significant headroom for additional processing across the three tiers. The SPARC T4-4 server used for the database tier hosted Oracle Database 11g Release 2 using Oracle Automatic Storage Management (ASM) for database file management, with I/O performance equivalent to raw devices.

    Performance Landscape

    Results are presented for the PeopleSoft HRMS Self-Service and Payroll combined benchmark. The new result with 128 streams shows significant improvement in the payroll batch processing time with little impact on the self-service component response time.

    PeopleSoft HRMS Self-Service and Payroll Benchmark
    Systems                                               Users   Avg Search (sec)  Avg Save (sec)  Batch Time (min)  Streams
    SPARC T4-2 (web), SPARC T4-4 (app), SPARC T4-4 (db)   18,000  0.988             0.539           32.4              128
    SPARC T4-2 (web), SPARC T4-4 (app), SPARC T4-4 (db)   18,000  0.944             0.503           43.3              64

    The following results are for the PeopleSoft HRMS Self-Service benchmark that was previously run. They are not directly comparable with the combined results because they do not include the payroll component.

    PeopleSoft HRMS Self-Service 9.1 Benchmark
    Systems                                                  Users   Avg Search (sec)  Avg Save (sec)  Batch Time (min)  Streams
    SPARC T4-2 (web), SPARC T4-4 (app), 2x SPARC T4-2 (db)   18,000  1.048             0.742           N/A               N/A

    The following results are for the PeopleSoft Payroll benchmark that was previously run. They are not directly comparable with the combined results because they do not include the self-service component.

    PeopleSoft Payroll (N.A.) 9.1 - 500K Employees (7 Million SQL PayCalc, Unicode)
    Systems           Users  Avg Search (sec)  Avg Save (sec)  Batch Time (min)  Streams
    SPARC T4-4 (db)   N/A    N/A               N/A             30.84             96

    Configuration Summary

    Application Configuration: 1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz; 512 GB memory; Oracle Solaris 11 11/11; PeopleTools 8.52; PeopleSoft HCM 9.1; Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 031; Java Platform, Standard Edition Development Kit 6 Update 32.

    Database Configuration: 1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz; 256 GB memory; Oracle Solaris 11 11/11; Oracle Database 11g Release 2; PeopleTools 8.52; Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 031; Micro Focus Server Express (COBOL v 5.1.00).

    Web Tier Configuration: 1 x SPARC T4-2 server with 2 x SPARC T4 processors, 2.85 GHz; 256 GB memory; Oracle Solaris 11 11/11; PeopleTools 8.52; Oracle WebLogic Server 10.3.4; Java Platform, Standard Edition Development Kit 6 Update 32.

    Storage Configuration: 1 x Sun Server X2-4 as a COMSTAR head for data (4 x Intel Xeon X7550, 2.0 GHz; 128 GB memory); 1 x Sun Storage F5100 Flash Array (80 flash modules); 1 x Sun Storage F5100 Flash Array (40 flash modules); 1 x Sun Fire X4275 as a COMSTAR head for redo logs (12 x 2 TB SAS disks with Niwot RAID controller).

    Benchmark Description

    This benchmark combines the PeopleSoft HCM 9.1 HR Self-Service online workload and the PeopleSoft Payroll batch workload, run against a unified database deployed on Oracle Database 11g Release 2. The PeopleSoft HRSS benchmark kit is an Oracle standard benchmark kit run by all platform vendors to measure performance. It is an OLTP benchmark whose database SQL is moderately complex. The results are certified by Oracle and a white paper is published.

    PeopleSoft HR SS defines a business transaction as a series of HTML pages that guide a user through a particular scenario. Users are defined as corporate Employees, Managers and HR administrators. The benchmark consists of 14 scenarios which emulate users performing typical HCM transactions, such as viewing a paycheck, promoting and hiring employees, and updating an employee profile. All of these transactions are well defined in the PeopleSoft HR Self-Service 9.1 benchmark kit. The benchmark metric is the weighted average response (search/save) time across all the transactions.

    The PeopleSoft 9.1 Payroll (North America) benchmark demonstrates system performance for a range of processing volumes in a specific configuration. This workload represents the large batch runs typical of an ERP environment during a mass update. The benchmark measures the run times of five application business processes for a database representing a large organization: Paysheet Creation, Payroll Calculation, Payroll Confirmation, Print Advice Forms, and Create Direct Deposit File. The benchmark metric is the cumulative elapsed time taken to complete the Paysheet Creation, Payroll Calculation and Payroll Confirmation business application processes.

    The benchmark metrics are taken for each respective benchmark while both run simultaneously against the same database back end. Specifically, the payroll batch processes are started once the online workload reaches steady state (the maximum number of online users) and overlap with online transactions for the duration of the steady state.

    Key Points and Best Practices

    Two PeopleSoft domain sets with 200 application servers each were hosted in two separate Oracle Solaris Zones on a SPARC T4-4 server to demonstrate consolidation of multiple application servers, ease of administration and performance tuning. Each Oracle Solaris Zone was bound to a separate processor set, each containing 15 cores (120 threads in total). The default set (one core from the first and third processor sockets, 16 threads in total) was used for network and disk interrupt handling. This was done to improve performance by reducing memory access latency (using the physical memory closest to the processors) and by offloading I/O interrupt handling to the default set's threads, freeing up CPU resources for application server threads and balancing the application workload across 240 threads. A total of 128 PeopleSoft stream server processes were used on the database node to complete the payroll batch job of 500,000 employees in 32.4 minutes.

    See Also: Oracle PeopleSoft Benchmark White Papers (oracle.com); SPARC T4-2 Server (oracle.com, OTN); SPARC T4-4 Server (oracle.com, OTN); PeopleSoft Enterprise Human Capital Management (oracle.com, OTN); PeopleSoft Enterprise Human Capital Management (Payroll) (oracle.com, OTN); Oracle Solaris (oracle.com, OTN); Oracle Database 11g Release 2 (oracle.com, OTN).

    Disclosure Statement: Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 8 November 2012.
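
    Since the HR Self-Service metric above is a weighted average of per-scenario response times, a small sketch makes the arithmetic concrete. The scenario figures below are made up purely for illustration; only the formula reflects the benchmark description.

        def weighted_avg_response(samples):
            # samples: (avg_response_seconds, transaction_count) per scenario.
            total_time = sum(t * n for t, n in samples)
            total_txns = sum(n for _, n in samples)
            return total_time / total_txns

        # Hypothetical per-scenario averages and volumes, not benchmark data.
        scenarios = [(0.95, 12000), (1.10, 3000), (0.70, 5000)]
        print(f"weighted average: {weighted_avg_response(scenarios):.3f} sec")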

    Read the article

  • Big Data – Buzz Words: What is Hadoop – Day 6 of 21

    - by Pinal Dave
    In yesterday's blog post we learned what NoSQL is. In this article we will take a quick look at one of the four most important buzz words around Big Data: Hadoop.

    What is Hadoop? Apache Hadoop is an open-source, free, Java-based software framework that offers a powerful distributed platform to store and manage Big Data. It is licensed under the Apache V2 license. It runs applications on large clusters of commodity hardware and can process thousands of terabytes of data across thousands of nodes. Hadoop is inspired by Google's MapReduce and Google File System (GFS) papers. A major advantage of the Hadoop framework is that it provides reliability and high availability.

    What are the core components of Hadoop? There are two major components of the Hadoop framework, and each of them performs one of its important tasks. Hadoop MapReduce is the method to split a larger data problem into smaller chunks and distribute them to many different commodity servers. Each server has its own set of resources and processes its chunk locally. Once a commodity server has processed its data, it sends the results back to the main server. This is effectively how we process large data sets efficiently. (We will understand this in tomorrow's blog post.) Hadoop Distributed File System (HDFS) is a virtual file system, and there is a big difference between it and other file systems: when we move a file onto HDFS, it is automatically split into many small pieces, and these small chunks of the file are replicated and stored on other servers (usually 3) for fault tolerance and high availability. (We will understand this in the day after tomorrow's blog post.) Besides the above two core components, the Hadoop project also contains the following modules: Hadoop Common (common utilities for the other Hadoop modules) and Hadoop YARN (a framework for job scheduling and cluster resource management). There are a few other related projects as well (like Pig and Hive) which we will gradually explore in later blog posts.

    A Multi-node Hadoop Cluster Architecture. Now let us quickly look at the architecture of a multi-node Hadoop cluster. A small Hadoop cluster includes a single master node and multiple worker (slave) nodes. As discussed earlier, the entire cluster contains two layers, a MapReduce layer and an HDFS layer, and each of these layers has its own relevant components. The master node consists of a JobTracker, TaskTracker, NameNode and DataNode. A worker node consists of a DataNode and a TaskTracker. It is also possible for a worker node to be a data-only or compute-only node, and that flexibility is in fact a key feature of Hadoop. In this introductory blog post we will stop here in describing the architecture of Hadoop. In a future blog post of this 21-day series we will explore the various components of the Hadoop architecture in detail.

    Why Use Hadoop? There are many advantages of using Hadoop. Let me quickly list them over here: Robust and Scalable (we can add new nodes as needed, as well as modify them); Affordable and Cost Effective (we do not need any special hardware for running Hadoop; we can just use commodity servers); Adaptive and Flexible (Hadoop is built keeping in mind that it will handle structured and unstructured data); Highly Available and Fault Tolerant (when a node fails, the Hadoop framework automatically fails over to another node).

    Why is Hadoop named Hadoop? In 2005 Hadoop was created by Doug Cutting and Mike Cafarella while working at Yahoo. Doug Cutting named Hadoop after his son's toy elephant. Tomorrow: in tomorrow's blog post we will discuss the buzz word MapReduce. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
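
    To make the split-process-collect idea concrete, here is the classic word-count pair written for Hadoop Streaming, which lets plain scripts act as the mapper and reducer over stdin/stdout. This is a minimal sketch of my own, not taken from the post; the file names are arbitrary.

        # mapper.py -- runs on each node against its local chunk of the input.
        import sys

        for line in sys.stdin:
            for word in line.split():
                print(f"{word}\t1")   # emit "word<TAB>1" for every word seen

        # reducer.py -- Hadoop sorts mapper output by key, so all counts for
        # a given word arrive together and can be summed in one pass.
        import sys

        current, count = None, 0
        for line in sys.stdin:
            word, _, n = line.rstrip("\n").partition("\t")  # our mapper always emits a tab
            if word == current:
                count += int(n)
            else:
                if current is not None:
                    print(f"{current}\t{count}")
                current, count = word, int(n)
        if current is not None:
            print(f"{current}\t{count}")

    A typical invocation passes both scripts to the streaming jar, along the lines of hadoop jar hadoop-streaming.jar -input in/ -output out/ -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py, though the exact jar path varies by distribution.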

    Read the article

  • Filtering Your Content

    - by rickramsey
    Watch it directly on YouTube. You can't always get what you want, but we do try to get you what you need. Use these OTN Systems Collections to see what's been published lately in your area of interest: Sysadmin Collection, Developer Collection, OTN Systems Collection, or see all collections (work in progress). If you prefer to use your RSS feeder, try this page: RSS Feeds for OTN Systems Content. - Rick. System Admin and Developer Community of OTN | OTN Garage Blog | OTN Garage on Facebook | OTN Garage on Twitter

    Read the article

  • Music of the Future at Oracle OpenWorld 2013

    - by Alliances & Channels Redaktion
    "The future begins at Oracle OpenWorld": the motto raises great expectations! 60,000 visitors from 145 countries could see for themselves in San Francisco what that future might look like: no fewer than 2,555 sessions, spread across downtown San Francisco, covered future technologies and new developments. How do you summarize the technological visions that a total of 3,599 speakers, almost half of them customers and partners, developed and presented over four days? Let us take a concrete example that came up again and again in various sessions: the "Internet of Things", that is, "intelligent" everyday objects whose built-in minicomputers communicate with one another without a detour through a PC and react to external influences. For many this is still new territory, but the further development of the Internet of Things opens up an exciting field of work for Oracle and its partners, and of course a new market as well. The omnipresent focus topics of Oracle's four-day flagship conference this year were Customer Experience and Human Capital Management. Also exciting for partners were Oracle's strategies and roadmap, as well as news from the areas of Engineered Systems, Cloud Computing, Business Analytics, Big Data and Customer Experience. Oracle OpenWorld also set new records on the web: more than 2.1 million people attended the event online, using over 224 social media channels, almost twice as many as a year ago. The good news: Oracle OpenWorld stays online, since on-demand videos of the keynote and session highlights can still be viewed. Simply go to Conference Video Highlights and choose, from eight areas, either a summary or the full keynote or session. There you will also find videos of the dedicated conferences that took place alongside Oracle OpenWorld: JavaOne, MySQL Connect and Oracle PartnerNetwork Exchange. The Oracle PartnerNetwork Exchange, tailored to the questions and needs of Oracle partners, covered topics such as Cloud for Partners, Applications, Engineered Systems and Hardware, Big Data, and Industry Solutions, and, importantly, offered plenty of opportunity for exchange and networking. Concretely, sessions there dealt, for example, with cloud applications in healthcare, with building convincing business cases for customer conversations, and with mobile and social networking. The more than 40 partners who traveled from Germany met at OPN Exchange for a stimulating evening with the other attendees. That Oracle OpenWorld would also become a sporting highlight was entirely unexpected: at the same time as the conference, the decisive 19th race of the America's Cup was held in San Francisco Bay. In the tradition-rich sailing competition, Team Oracle USA initially trailed 1:8, yet still managed to take the victory over the long-dominant Team New Zealand and thus defend the title. Naturally, Oracle OpenWorld also generated a wide media echo. We have compiled a selection for you: ChannelPartner, Computerwoche, Heise, Silicon on Big Data, Silicon on 12c

    Read the article

  • SQL SERVER – Core Concepts – Elasticity, Scalability and ACID Properties – Exploring NuoDB an Elastically Scalable Database System

    - by pinaldave
    I have recently been exploring the Elasticity and Scalability attributes of databases. You can see that in my earlier blog posts about NuoDB, where I wanted to look at the concepts of Elasticity and Scalability. The concepts are very interesting, and intriguing as well. I have discussed these concepts with my friend Joyti M, and together we have come up with this interesting read. The goal of this article is to answer the following simple questions: What is Elasticity? What is Scalability? How do ACID properties vary from NoSQL concepts? What are the prevailing problems in current database system architectures? Why is NuoDB an innovative and welcome change in the database paradigm?

    Elasticity. The word comes from everyday use, and it does a decent job of describing things that stretch and contract without breaking. Within the tech world, and specifically in relation to software systems (databases, application servers), it has come to mean the ability to stretch resources on demand without reaching the breaking point. What are resources in this context? Resources are the usual suspects: RAM, CPU, I/O and bandwidth, in the form of a container (a process or a bunch of processes combined as modules). When it comes to increasing resources, the simplest idea is the addition of another container, and another container means adding a brand-new physical node. When it comes to adding a new node, two questions come to mind: 1) Can we add another node to our software system? 2) If yes, does adding a new node cause downtime for the system? Assuming we have added a new node, let us see what the system newly needs: balancing of incoming requests across multiple nodes; synchronization of shared state across multiple nodes; and identification of a "down" state, with resolution actions to bring the node back "up". Adding a new node has its advantages as well. Here are a few of the positive points: throughput can scale nearly linearly as nodes are added across the system, and application response times can improve as the interactions between layers are improved.

    Now let us put the above concepts in the perspective of a database. When we mention the term "running out of resources" or "the application is bound to resources", the resources can be CPU, memory or bandwidth. The regular approach to gaining scalability in a database is to look for bottlenecks and increase the bottlenecked resource. When memory is the bottleneck, we look at the data buffers, locks, query plans or indexes. After a point even this is not enough, as there needs to be an efficient way of managing such a large workload on a single machine across memory- and CPU-bound work (with the right kind of scheduling). We next move on to either read/write separation of the workload or functionality-based sharding, so that we still retain control of the individual parts. But this requires lots of planning, changes in client systems (which must know where to go to update or read), and intelligent aggregation of the data for reporting applications. What we ideally need is an intelligent layer which allows us to do these things without getting into managing, monitoring and distributing the workload ourselves.

    Scalability. In the context of databases and applications, scalability means three main things: the ability to handle normal loads without pressure (e.g. X users at Y utilization of resources (CPU, memory, bandwidth) on Z kind of hardware (a 4-processor, 32 GB machine with 15000 RPM SATA drives and a 1 GHz network switch) with T throughput); the ability to scale up to an expected peak load, greater than the normal load, with acceptable response times; and the ability to provide acceptable response times across the system (e.g. a response time of S milliseconds, or an agreed-upon unit of measure, 90% of the time).

    The Issue: the Need to Scale. In normal cases one can plan load testing for normal, peak, and stress scenarios to ensure that specific hardware meets the needs. With help from hardware and software partners and best practices, bottlenecks can be identified and the requisite resources added to the system. Unfortunately this vertical scaling is expensive and difficult to achieve, and most operational people need the ability to scale horizontally, which helps in getting better throughput, since there are physical limits to adding resources (memory, CPU, bandwidth and storage) indefinitely. Today we have different options to achieve scalability:

    Read and Write Separation. The idea here is to direct actual writes to one store and configure slaves that receive the latest data with acceptable delays. Slaves can then be used to balance out reads. We can also explore functional separation, or sharding: we can separate data operations by a specific identifier (e.g. region, year, month) and consolidate them for reporting purposes. For functional separation, the major disadvantage appears when the schema or the workload pattern changes. As requirements grow, one still has to handle the need for scale manually, by providing an abstraction in the middle-tier code.

    Using NoSQL solutions. The idea is to flatten out the structures in general, to keep all values which are retrieved together at the same store, and to provide a flexible schema. The issue with these stores is that they mostly compromise on consistency (no ACID guarantees), and one has to use a non-SQL dialect to work with the store. The other major issue is education around NoSQL solutions: would one really want to make these compromises on the ability to connect and retrieve in a simple SQL manner, and learn other skill sets? Or, for that matter, give up ACID guarantees and start dealing with consistency issues?

    Hybrid Deployment: Mac, Linux, Cloud, and Windows. One of the challenges we see today between on-premise and cloud infrastructure is a difference in abilities. Take for example SQL Azure: it is wonderful in its concepts of throttling resources (as it is a shared deployment) and its ability to scale using federation. However, the same abilities are not available on premise. This is not a mistake, mind you, but a compromise around the sweet spot of workloads, customer requirements and operational SLAs which can be supported by the team. In today's world it is imperative that databases are available across operating systems, which are a commodity and used by developers of all hues.

    An Ideal Database Ability List: a system which allows linear scaling (an increase in throughput with reasonable response time) with the addition of resources; a system which does not compromise on ACID guarantees or require developers to learn new paradigms; a system which does not force-fit a new way of interacting with the database by requiring a non-SQL dialect; a system which does not force-fit its mechanisms for providing availability across its various modules.
    Well, NuoDB is the first database which has all of the above abilities and much more. In future articles I will cover my hands-on experience with it. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB
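
    As a toy illustration of the read/write separation option described above, the sketch below routes each statement by its first keyword: writes go to the primary, reads to a randomly chosen replica. The connection objects are assumed to be any DB-API-style connections; the class and names are mine, and a real router would also have to cope with replication lag and transactions.

        import random

        class ReadWriteRouter:
            # primary/replicas: assumed DB-API-style connections (illustrative only).
            WRITE_VERBS = {"INSERT", "UPDATE", "DELETE", "CREATE", "ALTER", "DROP"}

            def __init__(self, primary, replicas):
                self.primary = primary
                self.replicas = replicas

            def execute(self, sql, params=()):
                verb = sql.lstrip().split(None, 1)[0].upper()
                # Writes must hit the primary; reads can be spread over replicas,
                # accepting that a replica may lag the primary slightly.
                if verb in self.WRITE_VERBS:
                    conn = self.primary
                else:
                    conn = random.choice(self.replicas)
                cur = conn.cursor()
                cur.execute(sql, params)
                return cur

    The stale-read window is exactly the "acceptable delay" mentioned above, which is why this pattern needs an intelligent layer (or a database like the one the article discusses) once the workload outgrows manual routing.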

    Read the article

  • Breaking down CS courses for freshmen

    - by Avinash
    I'm a student putting together a slide geared toward freshman-level students who are trying to understand the importance of various classes in the CS curriculum. Would it be safe to say that this list is fairly accurate? Data structures: how to store stuff in programs. Discrete math: how to think logically. Bits & bytes: how to 'speak' the machine's language. Advanced data structures: how to store stuff in more ways. Algorithms: how to compute things efficiently. Operating systems: how to manage different processes/threads. Thanks!

    Read the article

  • Developer Compensation System

    - by Graviton
    Joel wrote excellent articles on the developer compensation system used at Fog Creek. As a team lead and business owner, I would like to devise a system that would work best for my team. And here's the catch: I have no experience managing a team, and I don't know what works and what doesn't. So I would like to get as many references as I can on this matter. Are there other developer compensation systems that you have found work for you and your company?

    Read the article

  • How can I keep the cpu temp low?

    - by Newton
    I have an HP Pavilion dv7 and I'm using Ubuntu 12.04, so the overheating problem with Sandy Bridge CPUs is a lot better. However, my laptop is still becoming too hot to keep on my legs. The problem is that the fan waits too long before starting, so the average temperature is too high. When I'm using Windows 7 the laptop stays at room temperature; I have absolutely no problem. On Windows the fan is always spinning, very low and very silently, so the heat is continuously removed without ever reaching an uncomfortable temperature. How can I force the computer to act like that on Ubuntu as well? PS: The BIOS doesn't let me control this kind of thing, and this is my experience with lm-sensors and fancontrol:

        al@notebook:~$ sudo sensors-detect
        [sudo] password for al:
        # sensors-detect revision 5984 (2011-07-10 21:22:53 +0200)
        # System: Hewlett-Packard HP Pavilion dv7 Notebook PC (laptop)
        # Board: Hewlett-Packard 1800
        This program will help you determine which kernel modules you need to load to use lm_sensors most effectively. It is generally safe and recommended to accept the default answers to all questions, unless you know what you're doing.
        Some south bridges, CPUs or memory controllers contain embedded sensors. Do you want to scan for them? This is totally safe. (YES/no): y
        Module cpuid loaded successfully.
        Silicon Integrated Systems SIS5595... No
        VIA VT82C686 Integrated Sensors... No
        VIA VT8231 Integrated Sensors... No
        AMD K8 thermal sensors... No
        AMD Family 10h thermal sensors... No
        AMD Family 11h thermal sensors... No
        AMD Family 12h and 14h thermal sensors... No
        AMD Family 15h thermal sensors... No
        AMD Family 15h power sensors... No
        Intel digital thermal sensor... Success! (driver `coretemp')
        Intel AMB FB-DIMM thermal sensor... No
        VIA C7 thermal sensor... No
        VIA Nano thermal sensor... No
        Some Super I/O chips contain embedded sensors. We have to write to standard I/O ports to probe them. This is usually safe. Do you want to scan for Super I/O sensors? (YES/no): y
        Probing for Super-I/O at 0x2e/0x2f
        Trying family `National Semiconductor/ITE'... No
        Trying family `SMSC'... No
        Trying family `VIA/Winbond/Nuvoton/Fintek'... No
        Trying family `ITE'... No
        Probing for Super-I/O at 0x4e/0x4f
        Trying family `National Semiconductor/ITE'... Yes
        Found unknown chip with ID 0x8518
        Some hardware monitoring chips are accessible through the ISA I/O ports. We have to write to arbitrary I/O ports to probe them. This is usually safe though. Yes, you do have ISA I/O ports even if you do not have any ISA slots! Do you want to scan the ISA I/O ports? (YES/no): y
        Probing for `National Semiconductor LM78' at 0x290... No
        Probing for `National Semiconductor LM79' at 0x290... No
        Probing for `Winbond W83781D' at 0x290... No
        Probing for `Winbond W83782D' at 0x290... No
        Lastly, we can probe the I2C/SMBus adapters for connected hardware monitoring devices. This is the most risky part, and while it works reasonably well on most systems, it has been reported to cause trouble on some systems. Do you want to probe the I2C/SMBus adapters now? (YES/no): y
        Using driver `i2c-i801' for device 0000:00:1f.3: Intel Cougar Point (PCH)
        Module i2c-i801 loaded successfully.
        Module i2c-dev loaded successfully.
        Next adapter: i915 gmbus disabled (i2c-0) Do you want to scan it? (YES/no/selectively): y
        Next adapter: i915 gmbus ssc (i2c-1) Do you want to scan it? (YES/no/selectively): y
        Next adapter: i915 GPIOB (i2c-2) Do you want to scan it? (YES/no/selectively): y
        Next adapter: i915 gmbus vga (i2c-3) Do you want to scan it? (YES/no/selectively): y
        Next adapter: i915 GPIOA (i2c-4) Do you want to scan it? (YES/no/selectively): y
        Next adapter: i915 gmbus panel (i2c-5) Do you want to scan it? (YES/no/selectively): y
        Client found at address 0x50
        Probing for `Analog Devices ADM1033'... No
        Probing for `Analog Devices ADM1034'... No
        Probing for `SPD EEPROM'... No
        Probing for `EDID EEPROM'... Yes (confidence 8, not a hardware monitoring chip)
        Next adapter: i915 GPIOC (i2c-6) Do you want to scan it? (YES/no/selectively): y
        Client found at address 0x50
        Probing for `Analog Devices ADM1033'... No
        Probing for `Analog Devices ADM1034'... No
        Probing for `SPD EEPROM'... No
        Probing for `EDID EEPROM'... Yes (confidence 8, not a hardware monitoring chip)
        Next adapter: i915 gmbus dpc (i2c-7) Do you want to scan it? (YES/no/selectively): y
        Next adapter: i915 GPIOD (i2c-8) Do you want to scan it? (YES/no/selectively): y
        Next adapter: i915 gmbus dpb (i2c-9) Do you want to scan it? (YES/no/selectively): y
        Next adapter: i915 GPIOE (i2c-10) Do you want to scan it? (YES/no/selectively): y
        Next adapter: i915 gmbus reserved (i2c-11) Do you want to scan it? (YES/no/selectively): y
        Next adapter: i915 gmbus dpd (i2c-12) Do you want to scan it? (YES/no/selectively): y
        Next adapter: i915 GPIOF (i2c-13) Do you want to scan it? (YES/no/selectively): y
        Next adapter: DPDDC-B (i2c-14) Do you want to scan it? (YES/no/selectively): y
        Now follows a summary of the probes I have just done. Just press ENTER to continue:
        Driver `coretemp':
          * Chip `Intel digital thermal sensor' (confidence: 9)
        To load everything that is needed, add this to /etc/modules:
        #----cut here----
        # Chip drivers
        coretemp
        #----cut here----
        If you have some drivers built into your kernel, the list above will contain too many modules. Skip the appropriate ones! Do you want to add these lines automatically to /etc/modules? (yes/NO) y
        Successful!
        Monitoring programs won't work until the needed modules are loaded. You may want to run 'service module-init-tools start' to load them.
        Unloading i2c-dev... OK
        Unloading i2c-i801... OK
        Unloading cpuid... OK

        al@notebook:~$ sudo /etc/init.d/module-init-tools restart
        Rather than invoking init scripts through /etc/init.d, use the service(8) utility, e.g. service module-init-tools restart
        Since the script you are attempting to invoke has been converted to an Upstart job, you may also use the stop(8) and then start(8) utilities, e.g. stop module-init-tools ; start module-init-tools. The restart(8) utility is also available.
        module-init-tools stop/waiting

        al@notebook:~$ sudo service module-init-tools restart
        stop: Unknown instance:
        module-init-tools stop/waiting

        al@notebook:~$ sudo service module-init-tools start
        module-init-tools stop/waiting

        al@notebook:~$ sudo pwmconfig
        # pwmconfig revision 5857 (2010-08-22)
        This program will search your sensors for pulse width modulation (pwm) controls, and test each one to see if it controls a fan on your motherboard. Note that many motherboards do not have pwm circuitry installed, even if your sensor chip supports pwm. We will attempt to briefly stop each fan using the pwm controls. The program will attempt to restore each fan to full speed after testing. However, it is ** very important ** that you physically verify that the fans have been to full speed after the program has completed.
        /usr/sbin/pwmconfig: There are no pwm-capable sensor modules installed

    Is my case too desperate?
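
    Since pwmconfig reports no PWM-capable sensor modules, one hedged first step (a diagnostic, not a fix for this particular laptop) is to watch the kernel's own readings and confirm how hot the machine actually runs between fan spin-ups. On most Linux systems the standard sysfs thermal interface exposes them:

        import glob
        import time

        def read_temps():
            # Standard Linux sysfs interface; which zones exist varies per machine.
            temps = {}
            for zone in glob.glob("/sys/class/thermal/thermal_zone*"):
                try:
                    with open(zone + "/temp") as f:
                        # Values are reported in millidegrees Celsius.
                        temps[zone.rsplit("/", 1)[-1]] = int(f.read()) / 1000.0
                except OSError:
                    pass  # some zones may not be readable
            return temps

        while True:  # stop with Ctrl-C
            readings = sorted(read_temps().items())
            print("  ".join(f"{zone}={t:.1f}C" for zone, t in readings))
            time.sleep(5)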

    Read the article

  • Chrooted Drop Bear HowTo

    Howtoforge: "Dropbear is a relatively small SSH 2 server and client. It is a lightweight alternative to OpenSSH, designed for environments with low memory and processor resources, such as embedded systems."

    Read the article

  • This Week on the Green Data Center Management Front

    Among the big news this week in green data center management: Horizon Data Center Solutions announces a partnership with Power Loft; Cisco, GDSYS, and Incheon Metropolitan City partner to build a next-generation green data center; and Vycon Energy demonstrates how its flywheel backup power systems can be used for health-care data system power backup.

    Read the article

  • The inventor of the pi-calculus has died: Robin Milner leaves us at 76

    The inventor of the pi-calculus has died: Robin Milner leaves us at 76. Robin Milner died yesterday in Cambridge, where he was a professor at the University (as well as at the universities of London, Swansea, Edinburgh and Stanford). An English computer scientist, he made three major discoveries in his career, which contributed greatly to the evolution of modern computing and earned him the Turing Award in 1991: - LCF, the first automated proof system, used to prove mathematical assertions automatically - the ML language - the theory for analyzing concurrent systems (Calculus of Communicating Systems, CCS) and its successor, the pi-calculus. RIP Robin...
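
    For readers meeting these ideas for the first time, the core grammar of the pi-calculus, in its standard textbook notation (not taken from the article itself), is remarkably small:

        % Core pi-calculus syntax (standard textbook presentation)
        \[
        P \;::=\; 0                                % the inert (stopped) process
          \;\mid\; \bar{x}\langle y \rangle . P    % send the name y on channel x, then do P
          \;\mid\; x(z) . P                        % receive a name on x, bind it to z in P
          \;\mid\; P \mid P'                       % two processes running in parallel
          \;\mid\; (\nu x)\, P                     % restriction: a fresh private channel x
          \;\mid\; !\, P                           % replication: unboundedly many copies of P
        \]

    Its expressive power comes from the fact that the names sent and received are themselves channels, so a process can acquire new communication links at runtime, which is what distinguishes it from Milner's earlier CCS.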

    Read the article

  • w WARN: Thread pool pressure. Using current thread for a work item.

    - by GrumpyOldDBA
    The skill set needed by a DBA can be quite diverse at times, and a run-in with SSRS 2005 probably illustrates the point quite well. I don't have skills in IIS, although I was responsible for the design and deployment of an online mortgage application site some years ago. I had to get hands-on with IIS 5, firewalls, intrusion systems, ISA Server, DMZs, NAT, IP and lots of other acronyms, so I have an understanding of these things but never had to do anything other than set up and configure IIS - no...(read more)

    Read the article

  • Recorded Webcast Available: Extend SCOM to Optimize SQL Server Performance Management

    - by KKline
    Join me and Eric Brown, Quest Software senior product manager for SQL Server monitoring tools, as we discuss the server health-check capabilities of Systems Center Operations Manager (SCOM) in this previously recorded webcast. We delve into techniques to maximize your SCOM investment as well as ways to complement it with deeper monitoring and diagnostics. You’ll walk away from this educational session with the skills to: Take full advantage of SCOM’s value for day-to-day SQL Server monitoring Extend...(read more)

    Read the article

  • A Perspective on Database Performance Tuning

    Fundamentally, database performance tuning is done for two basic reasons: to reduce response time and to reduce resource usage, either or both of which may apply in any given situation. Julian Stuhler looks at database performance tuning, and why it remains one of the most important topics for any DBA, developer or systems administrator.

    Read the article

  • What Is Nuclear Meltdown?

    - by Gopinath
    Japan was first hit by a massive earthquake, then a ruthless tsunami washed away thousands of homes, and now they fear the worst: meltdown of nuclear power stations in the quake-hit area. Nuclear meltdowns are horrifying. Remember the Chernobyl incident in the Soviet Union? The Chernobyl reactor meltdown released 400 times more radioactive material than the atomic bombing of Hiroshima. The effects of nuclear meltdowns are beyond the imagination of a common man: thousands of people lose their lives and many lakhs more suffer from radiation-related diseases for many years. Nuclear meltdowns are dangerous, but how do they happen? What causes a nuclear meltdown? In simple terms, a nuclear meltdown is an accident that happens due to severe overheating of a nuclear reactor and results in the release of nuclear radiation into the environment. How A Nuclear Meltdown Happens: according to Wikipedia, a meltdown occurs when a severe failure of a nuclear power plant system prevents proper cooling of the reactor core, to the extent that the nuclear fuel assemblies overheat and melt. A meltdown is considered very serious because of the potential that radioactive materials could be released into the environment. The fuel assemblies in a reactor core can melt if heat is not removed. A nuclear reactor does not have to remain critical for a core damage incident to occur, because decay heat continues to heat the reactor fuel assemblies after the reactor has shut down, though this heat decreases with time. A core damage accident is caused by the loss of sufficient cooling for the nuclear fuel within the reactor core. The reason may be one of several factors, including a loss-of-pressure-control accident, a loss-of-coolant accident (LOCA), an uncontrolled power excursion or, in some types, a fire within the reactor core. Failures in control systems may cause a series of events resulting in loss of cooling. Contemporary safety principles of defense in depth ensure that multiple layers of safety systems are always present to make such accidents unlikely. Video - What Causes Nuclear Meltdown: Al Jazeera news has a good analysis of the feared meltdown at Japan's nuclear plants and also an animation of what causes a nuclear meltdown. cc image credit: flickr/jtjdt. This article titled, What Is Nuclear Meltdown?, was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

  • Third Party Applications and Other Acts of Violence Against Your SQL Server

    - by KKline
    I just got finished reading a great blog post from my buddy, Thomas LaRock ( t | b ), in which he describes a useful personal policy he used to track changes made to his SQL Servers when installing third-party products. Note that I'm talking about line-of-business applications here - your inventory management systems and help desk ticketing apps. I'm not talking about monitoring and tuning applications since they, by their very nature, need a different sort of access to your back-end server resources....(read more)

    Read the article

  • UPK for Testing Webinar Recording Now Available!

    - by Karen Rihs
    For anyone who missed last week’s event, a recording of the UPK for Testing webinar is now available. As an implementation and enablement tool, Oracle’s User Productivity Kit (UPK) provides value throughout the software lifecycle. Application testing is one area where customers like Northern Illinois University (NIU) are finding huge value in UPK and are using it to validate their systems. Hear Beth Renstrom, UPK Product Manager, and Bettylynne Gregg, NIU ERP Coordinator, discuss how the Test It Mode, Test Scripts, and Test Cases of UPK can be used to facilitate applications testing.

    Read the article

  • Gartner: Magic Quadrant for Corporate Performance Management Suites, 2012

    - by Mike.Hallett(at)Oracle-BI&EPM
    Hyperion clearly leads the pack again in Gartner’s analysis of the CPM / EPM market, which says: “Oracle is a Leader in CPM suites, with one of the most widely distributed solutions in the market. Oracle Hyperion Enterprise Performance Management is recognized by CFOs worldwide. The vendor has a well-established partner channel, with both large and smaller CPM SI specialists. Hyperion skills are also plentiful among the independent consultant community, given the well-established products.” “Oracle continues to innovate, bringing incremental improvements across the portfolio as well as new financial close management, disclosure management and predictive planning additions. Furthermore, Oracle has improved integration of Hyperion with the Oracle BI platform, and has improved planning performance, enabling Hyperion Planning to use the Oracle Exalytics In-Memory Machine.” For the full article see here: Gartner: Magic Quadrant for Corporate Performance Management Suites, 2012. And if you missed it, here is also the MQ for BI: Gartner: Magic Quadrant for Business Intelligence Platforms, 2012

    Read the article

  • Skyrim Nexus Mods on Xbox 360 by use of dawnguard?

    - by user17895
    I think it's possible. I opened up the Dawnguard marketplace content and it consists of 3 files: dawnguard.bsa (the mod), dawnguard.esp (the mod-installing file), and spa.bin (I don't know what this is for). It has been confirmed that you can use the top 2 files on PC for a not-fully-functional Dawnguard (barely functional, to be exact), and if we could just replace or add a few other .bsa and .esp files to this marketplace content, we could get mods up and running on Xbox, although I need confirmation on this. I also have no clue what the spa.bin file is for; I need to examine it further. Furthermore, this amounts to adding a few non-distributed files to marketplace content and won't get you booted from XBL. Also, if anyone wants to examine these files for further information, I will gladly share them with you. If you have any information or answers, please email me at [email protected]. Thanks

    Read the article
