Search Results


  • Which parallel sorting algorithm has the best average case performance?

    - by Craig P. Motlin
    Sorting takes O(n log n) in the serial case. If we have O(n) processors we would hope for a linear speedup. O(log n) parallel algorithms exist, but they have a very high constant factor. They also aren't applicable on commodity hardware, which doesn't have anywhere near O(n) processors. With p processors, a reasonable algorithm should take O((n/p) log(n/p)) time. In the serial case, quicksort has the best average-case runtime complexity. A parallel quicksort algorithm is easy to implement (see here and here). However, it doesn't perform well, since the very first step is to partition the whole collection on a single core. I have found information on many parallel sort algorithms, but so far I have not seen anything pointing to a clear winner. I'm looking to sort lists of 1 million to 100 million elements in a JVM language running on 8 to 32 cores.
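    For the stated target (a JVM language, 8 to 32 cores, lists of 10^6 to 10^8 elements), one baseline worth measuring first is the fork/join parallel merge sort that ships with Java 8+; merge sort splits the input trivially, so it avoids the serial first-partition bottleneck described above. A minimal sketch (the parallelism property is optional tuning, and the array size is illustrative):

        import java.util.Arrays;
        import java.util.Random;

        public class ParallelSortBaseline {
            public static void main(String[] args) {
                // Optional: cap the common ForkJoinPool that parallelSort uses.
                // Must be set before the pool is first initialized.
                System.setProperty(
                    "java.util.concurrent.ForkJoinPool.common.parallelism", "16");

                long[] data = new Random(42).longs(10_000_000).toArray();

                // Fork/join parallel merge sort; falls back to the sequential
                // sort below an internal granularity threshold.
                Arrays.parallelSort(data);
            }
        }

    Whether this beats a hand-rolled parallel quicksort or sample sort on particular hardware is an empirical question; it is simply the cheapest credible baseline to benchmark against.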

    Read the article

  • Is it okay to violate the principle that collection properties should be readonly for performance?

    - by uriDium
    I used FxCop to analyze some code I had written. I had exposed a collection via a setter. I understand why this is not good: having the backing store change when I don't expect it is a very bad idea. Here is my problem, though. I retrieve a list of business objects from a Data Access Object. I then need to add that collection to another business class, and I was doing it with the setter method. The reason I did this was that it is going to be faster to make an assignment than to insert hundreds of thousands of objects one at a time into the collection again via another addElement method. Is it okay to have a setter for a collection in some scenarios? I thought of instead having a constructor which takes a collection. I thought maybe I could pass the object into the DAO and let the DAO populate it directly. Are there any other better ideas?
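    The constructor idea is the usual answer to this tension. A minimal sketch (in Java for consistency with the other sketches in this roundup; the question itself is C#, where ReadOnlyCollection<T> plays the same role, and BusinessObject is a hypothetical type): accept the fully loaded list once, keep the reference private, and hand out only a read-only view.

        import java.util.Collections;
        import java.util.List;

        public class Report {
            private final List<BusinessObject> items;

            // The DAO hands over the fully loaded list exactly once:
            // an O(1) assignment, with no per-element re-insertion.
            public Report(List<BusinessObject> items) {
                this.items = items;
            }

            // Callers get a read-only view, so the backing store cannot
            // be swapped or mutated behind this object's back.
            public List<BusinessObject> getItems() {
                return Collections.unmodifiableList(items);
            }
        }

    This keeps the O(1) hand-off the question wants while still satisfying the spirit of the FxCop rule.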

    Read the article

  • Money vs. Decimal vs. Float Performance issues (SQL data types for Currency value)?

    - by urz shah
    What data type should be selected for a currency value column in SQL Server? I have read somewhere on the web: "Working on customer implementations, we found some interesting performance numbers concerning the money data type. For example, when Analysis Services was set to the currency data type (from double) to match the SQL Server money data type, there was a 13% improvement in processing speed (rows/sec)." Is it true?

    Read the article

  • MS DOS function like ipconfig to get system performance specs?

    - by JustADude
    I am aware of MSINFO32, but I'm wondering if there is an MS-DOS-style command, similar to ipconfig, for getting system specifications. I would like the specifications to be displayed at the command prompt, and I would like to see at least: CPU, RAM, and bus speed. Thanks for any insights. Edit: I am unable to install any other software, so I have to use built-in commands to extract this information. Thank you again. 2nd Edit: Whoops. Using Windows XP and Windows Vista.
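    For what it's worth, stock Windows XP Professional and Vista ship two built-in console tools that cover most of this; output fields vary by Windows version, so treat the filter strings below as approximate rather than definitive:

        C:\> systeminfo | findstr /C:"Processor" /C:"Total Physical Memory"
        C:\> wmic cpu get Name, MaxClockSpeed, ExtClock

    systeminfo reports CPU and memory totals, while the wmic query exposes Win32_Processor fields; ExtClock is the external (bus) clock in MHz, which is the closest built-in answer to the bus-speed requirement.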

    Read the article

  • [boost::filesystem] performance: is it better to read all files once, or use b::fs functions over an

    - by rubenvb
    I'm conflicted between a "read once, use memory + pointers to files" approach and a "read when necessary" approach. The latter is of course much easier (no additional classes are needed to store the whole directory structure), but I suspect it is slower. A little clarification: I'm writing a simple build system that reads a project file, checks that all files are present, and runs some compile steps. The file tree is static, so the first option doesn't need to be very dynamic, and the structure only needs to be built once every time the program is run. Thanks
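    The trade-off is easy to benchmark by putting both strategies behind the same query. A sketch (in Java for consistency with the other sketches in this roundup, though the question is about C++ and boost::filesystem; the "project" path is hypothetical): since the tree is static for the duration of a run, the snapshot is safe, and it replaces one filesystem stat per check with one hash lookup.

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.util.Set;
        import java.util.stream.Collectors;
        import java.util.stream.Stream;

        public class FileCheckStrategies {
            // "Read when necessary": one filesystem call per query.
            static boolean existsOnDisk(Path file) {
                return Files.exists(file);
            }

            // "Read once": walk the tree a single time, then answer all
            // later queries from memory. Only valid while the tree is
            // static, which the question says it is for one run.
            static Set<Path> snapshot(Path root) throws IOException {
                try (Stream<Path> walk = Files.walk(root)) {
                    return walk.collect(Collectors.toSet());
                }
            }

            public static void main(String[] args) throws IOException {
                Set<Path> files = snapshot(Paths.get("project"));
                System.out.println(files.contains(
                        Paths.get("project/src/main.cpp")));
            }
        }

    For a build system that checks each file roughly once, the two approaches should be close; the snapshot wins when the same paths are queried repeatedly.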

    Read the article

  • performance: jquery.live() vs. creating specific listeners on demand?

    - by Haroldo
    I have a page with news item titles; clicking each one Ajax-loads the full news item, including a photo thumbnail. I want to attach a lightbox to the thumbnails to show a bigger photo. I have two options (I think):

    1. .live():

        $('img.thumb').live('click', function() { ... });

    2. Add a specific id-based listener in the callback of the news item click:

        $('div.news_item').click(function() {
            var id = $(this).attr('id');
            show_news_item(id, function() {
                // callback, runs after the Ajax load completes
                $('#' + id + ' .thumb').lightbox();
            });
        });

    Read the article

  • Normalize database or not? Read-only MyISAM table, performance is the main priority (MySQL)

    - by hello
    I'm importing data into a future database that will have one static MyISAM table (it will only be read from). I chose MyISAM because, as far as I understand, it's faster for my requirements (I'm not very experienced with MySQL / SQL at all). That table will have various columns such as ID, Name, Gender, Phone, Status... as well as Country, City, and Street columns. Now the question is: should I create tables (e.g. Country: Country_ID, Country_Name) for the last 3 columns and refer to them in the main table by ID (i.e., normalize), or just store them as VARCHAR in the main table (with duplicates, obviously)? My primary concern is speed - since the table won't be written to, data integrity is not a priority. The only actions will be selecting a specific row or searching for rows that match certain criteria. Would searching by the Country, City, and/or Street columns (and possibly other columns in the same search) be faster if I simply use VARCHAR?

    Read the article

  • Performance of vector::size() : is it as fast as reading a variable?

    - by zoli2k
    I am doing an extensive calculation on a big vector of integers. The vector size is not changed during the calculation, and the size of the vector is frequently accessed by the code. What is faster in general: using the vector::size() function, or using a helper constant vectorSize that stores the size of the vector? I know that compilers are usually able to inline the size() function when the proper compiler flags are set; however, inlining is something a compiler may do but cannot be forced to do.

    Read the article

  • Can forcing the use of an index improve performance?

    - by aF.
    Imagine that we have a query like this:

        select a.col1, b.col2
        from t1 a
        inner join t2 b on a.col1 = b.col2
        where a.col1 = 'abc'

    Neither col1 nor col2 has an index. If I add another restriction to the where clause - one that is always true, but on a column with an index:

        select a.col1, b.col2
        from t1 a
        inner join t2 b on a.col1 = b.col2
        where a.col1 = 'abc'
        and a.id >= 0 -- always true, and the column has an index

    might the query perform faster, since it may use the index on the id column?

    Read the article

  • Is Berkeley DB a NoSQL solution?

    - by Gregory Burd
    Berkeley DB is a library. To use it to store data you must link the library into your application. You can use most programming languages to access the API; the calls across these APIs generally mimic the Berkeley DB C API, which makes perfect sense because Berkeley DB is written in C. The inspiration for Berkeley DB was the DBM library, a part of the earliest versions of UNIX, written by AT&T's Ken Thompson in 1979. DBM was a simple key/value hashtable-based storage library. In the early 1990s, as BSD UNIX was transitioning from version 4.3 to 4.4 and retrofitting commercial code owned by AT&T with unencumbered code, it was the future founders of Sleepycat Software who wrote libdb (aka Berkeley DB) as the replacement for DBM. The problem it addressed was fast, reliable local key/value storage.

    At that time databases almost always lived on a single node; even the most sophisticated databases only had simple two-node fail-over solutions. If you had a lot of data to store, you would choose between the few commercial RDBMS solutions or writing your own custom solution. Berkeley DB took the headache out of the custom approach. These basic market forces inspired other DBM implementations. There was the "New DBM" (ndbm) and the "GNU DBM" (GDBM) and a few others, but the theme was the same. Even today TokyoCabinet calls itself "a modern implementation of DBM", mimicking, and improving on, something first created over thirty years ago. In the mid-1990s, DBM was the name for what you needed if you were looking for fast, reliable local storage.

    Fast forward to today. What's changed? Systems are connected over fast, very reliable networks. Disks are cheap, fast, and capable of storing huge amounts of data. CPUs continued to follow Moore's Law; processing power that filled a room in 1990 now fits in your pocket. PCs, servers, and other computers proliferated in both the business and personal markets. In addition to the new hardware, entire markets, social systems, and new modes of interpersonal communication moved onto the web and started evolving rapidly. These changes caused a massive explosion of data and a need to analyze and understand that data. Taken together, this resulted in an entirely different landscape for database storage; new solutions were needed. A number of novel solutions stepped up, and eventually a category called NoSQL emerged. The new market forces inspired the CAP theorem and the heated debate of BASE vs. ACID, but in essence this was simply the market looking at what to trade off to meet these new demands. These new database systems shared many qualities in common: they were designed to address massive amounts of data, millions of requests per second, and scale-out across multiple systems.

    The first large-scale and successful solution was Dynamo, Amazon's distributed key/value database. Dynamo essentially took the next logical step and added a twist. Dynamo was to be the database of record: it would be distributed, data would be partitioned across many nodes, and it would tolerate failure by avoiding single points of failure. Amazon did this because they recognized that the majority of the dynamic content they provided to customers visiting their web storefront didn't require the services of an RDBMS. The queries were simple key/value look-ups or simple range queries, with only a few queries that required more complex joins. They set about using relational technology only in places where it was the best solution for the task, places like accounting and order fulfillment, but not in the myriad of other situations.

    The success of Dynamo, and its design, inspired the next generation of non-SQL, distributed database solutions, including Cassandra, Riak, and Voldemort. The problem their designers set out to solve was "reliability at massive scale", so the first focal point was distributed database algorithms. Underneath Dynamo there is a local transactional database: either Berkeley DB, Berkeley DB Java Edition, MySQL, or an in-memory key/value data structure. Dynamo was an evolution of local key/value storage onto networks. Cassandra, Riak, and Voldemort all faced similar design decisions, and one, Voldemort, chose Berkeley DB Java Edition for its node-local storage. Riak at first was entirely in-memory, but has recently added write-once, append-only, log-based on-disk storage - a similar type of storage to Berkeley DB's, except that it is based on a hash table, which must reside entirely in memory, rather than a btree, which can live in memory or on disk.

    Berkeley DB evolved too: we added high availability (HA) and a replication manager that makes it easy to set up replica groups. Berkeley DB's replication doesn't partition the data; every node keeps an entire copy of the database. For consistency, there is a single node where writes are committed first - a master - and then those changes are delivered to the replica nodes as log records. Applications can choose to wait until all nodes are consistent, or fire and forget, allowing Berkeley DB to become eventually consistent. Berkeley DB's HA scales out quite well for read-intensive applications and also effectively eliminates the central point of failure by allowing replica nodes to be elected (using a PAXOS algorithm) to mastership if the master should fail. This implementation covers a wide variety of use cases. MemcacheDB is a server that implements the Memcache network protocol but uses Berkeley DB for storage and HA to replicate the cache state across all the nodes in the cache group. Google Accounts, the user authentication layer for all Google properties, was until recently running on Berkeley DB HA, scaled to a globally distributed system. That said, most NoSQL solutions try to partition (shard) data across the nodes in the replication group, and some allow writes as well as reads at any node; Berkeley DB HA does not.

    So, is Berkeley DB a "NoSQL" solution? Not really, but it certainly is a component of many of the existing NoSQL solutions out there. Forget all the noise about how NoSQL solutions are complex distributed databases: when you boil them down to a single node, you still have to store the data in some form of stable local storage. DBMs solved that problem a long time ago. NoSQL has more to do with the layers on top of the DBM: the distributed, sometimes-consistent, partitioned, scale-out storage that manages key/value or document sets and generally has some form of simple HTTP/REST-style network API. Does Berkeley DB do that? Not really.

    Is Berkeley DB a "NoSQL" solution today? Nope, but it's the most robust solution on which to build such a system. Re-inventing the node-local data storage isn't easy. A lot of people are starting to appreciate the sophisticated features found in Berkeley DB, and even mimic them in some cases. Could Berkeley DB grow into a NoSQL solution? Absolutely. Our key/value API could be extended over the net using any of a number of existing network protocols, such as memcache or HTTP/REST. We could adapt our node-local data partitioning out over replicated nodes. We even have a nice query language and cost-based query optimizer in our BDB XML product that we could reuse were we to build out a document-based NoSQL-style product. XML and JSON are not so different that we couldn't adapt one to work with the other interchangeably. Without too much effort we could add what's missing and jump into this NoSQL market within a single product development cycle. Why isn't Berkeley DB already a NoSQL solution? Why aren't we working on it? Why indeed...
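    For readers who haven't seen the node-local API the post keeps returning to, here is a minimal sketch using Berkeley DB Java Edition (the com.sleepycat.je package, the same library Voldemort chose); the environment directory and key/value strings are illustrative, and the directory must already exist:

        import com.sleepycat.je.Database;
        import com.sleepycat.je.DatabaseConfig;
        import com.sleepycat.je.DatabaseEntry;
        import com.sleepycat.je.Environment;
        import com.sleepycat.je.EnvironmentConfig;
        import com.sleepycat.je.LockMode;
        import com.sleepycat.je.OperationStatus;
        import java.io.File;
        import java.nio.charset.StandardCharsets;

        public class LocalKeyValueSketch {
            public static void main(String[] args) {
                EnvironmentConfig envConfig = new EnvironmentConfig();
                envConfig.setAllowCreate(true);
                Environment env = new Environment(new File("bdb-env"), envConfig);

                DatabaseConfig dbConfig = new DatabaseConfig();
                dbConfig.setAllowCreate(true);
                Database db = env.openDatabase(null, "example", dbConfig);

                // Keys and values are raw byte arrays: the DBM heritage.
                DatabaseEntry key = new DatabaseEntry(
                        "user:42".getBytes(StandardCharsets.UTF_8));
                DatabaseEntry value = new DatabaseEntry(
                        "some-value".getBytes(StandardCharsets.UTF_8));
                db.put(null, key, value);

                DatabaseEntry found = new DatabaseEntry();
                if (db.get(null, key, found, LockMode.DEFAULT)
                        == OperationStatus.SUCCESS) {
                    System.out.println(new String(found.getData(),
                            StandardCharsets.UTF_8));
                }

                db.close();
                env.close();
            }
        }

    Everything the post calls "the layers on top" - replication, partitioning, a network API - would sit above calls like these.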

    Read the article

  • California State University Standardizes 23 Campuses on Oracle PeopleSoft

    - by Noelia Gomez
    The largest university system in the United States consolidates financial management and will consolidate human capital management to improve efficiency and reduce costs.

    California State University (CSU) is standardizing its 23-campus system and the Office of the Chancellor on Oracle PeopleSoft applications. CSU is the largest public university system in the United States. It awards 90,000 degrees per year and, since its founding in 1961, has conferred almost 2.6 million. As part of its strategic plan, "Access to Excellence", CSU has committed to taking advantage of technology to meet future educational needs and has created the Common Management Systems (CMS). The mission of CMS is to provide efficient, effective, quality service to the students, faculty, and staff of CSU's 23 campuses and the Office of the Chancellor.

    In an effort to improve efficiency and reduce costs across the entire system, CSU has consolidated its financial management processes and systems on PeopleSoft Financial Management. To provide a unified view of financial operations, CSU has consolidated 22 campuses onto a single system running PeopleSoft Financials. These applications include General Ledger, Billing, Accounts Payable, Accounts Receivable, and Purchasing. CSU also uses Oracle Business Intelligence Enterprise Edition for analysis and reporting. CSU is likewise consolidating on PeopleSoft Human Capital Management (HCM) in pursuit of several strategic objectives, among them attracting and retaining top staff and faculty. CSU's consolidated financial system is now the largest single higher-education deployment of PeopleSoft Financials in the United States. A centralization of this scale is unprecedented, and yet it was completed on time and within budget. Once complete, CSU will also be the largest higher-education deployment of PeopleSoft HCM.

    CSU also uses PeopleSoft Campus Solutions to manage its student academic operations, including class scheduling and registration, fee calculation and collection, financial aid awarding, and the assessment of students' academic progress.

    Oracle's PeopleSoft applications are designed to address the most complex business requirements. They provide comprehensive business and industry solutions, enabling organizations to increase productivity, accelerate business performance, and lower total cost of ownership. Oracle's PeopleSoft Campus Solutions is a complete software suite designed for the changing needs of higher-education institutions. Oracle continues to collaborate with CSU and with colleges and universities around the world to deliver the most responsive and comprehensive student administration and development systems available today.

    Unisys is implementing CSU's PeopleSoft applications and providing access for the universities' students, faculty, and staff through the Unisys Hosted Secure Private Cloud solution. Unisys is a Platinum-level member of Oracle PartnerNetwork (OPN).

    "In order to continue meeting the goals of our strategic plan, we believe it is fundamental to standardize our operating processes - both in the back office and everywhere our students interact with us every day - to maximize our investment in technology and reduce costs," said Larry Furukawa-Schlereth, Chief Financial Officer, Sonoma State University, and chair of the CSU Global Executive Management Committee, California State University. "Having a single view of all the important information in one system helps California State University continue to offer excellent educational opportunities at an affordable cost while we help shape the future quality of civic and economic life in California."

    Learn more about PeopleSoft here: Oracle's PeopleSoft Applications | Oracle in Higher Education | Oracle's PeopleSoft Financial Management | Oracle's PeopleSoft Human Capital Management | Oracle's PeopleSoft Campus Solutions | Oracle Human Capital Management Blog | Oracle HCM on Twitter | Oracle Higher Education on Facebook | Oracle Higher Education on Diigo

    Read the article

  • Lightweight Monitoring using Extended Events

    Introduction: SQL Server 2008 introduces Extended Events for performance monitoring. SQL Server Extended Events is a general event-handling system for server systems. So why another event-handling system? We already have Activity Monitor, Perfmon, SQL Profiler, DMVs, and so on. However, Extended Events ... [Read Full Article]

    Read the article

  • "power limit notification" clobbering on 12G Dell servers with RHEL6

    - by Andrew B
    Server: PowerEdge R620. OS: RHEL 6.4. Kernel: 2.6.32-358.18.1.el6.x86_64.

    I'm experiencing application alarms in my production environment. Critical CPU-hungry processes are being starved of resources and causing a processing backlog. The problem is happening on all the 12th-generation Dell servers (R620s) in a recently deployed cluster. As near as I can tell, instances of this happening match up with peak CPU utilization, accompanied by massive amounts of "power limit notification" spam in dmesg. An excerpt from one of these events:

        Nov 7 10:15:15 someserver [.crit] CPU12: Core power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU0: Core power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU6: Core power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU14: Core power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU18: Core power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU2: Core power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU4: Core power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU16: Core power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU0: Package power limit notification (total events = 11)
        Nov 7 10:15:15 someserver [.crit] CPU6: Package power limit notification (total events = 13)
        Nov 7 10:15:15 someserver [.crit] CPU14: Package power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU18: Package power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU20: Core power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU8: Core power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU2: Package power limit notification (total events = 12)
        Nov 7 10:15:15 someserver [.crit] CPU10: Core power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU22: Core power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU4: Package power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU16: Package power limit notification (total events = 13)
        Nov 7 10:15:15 someserver [.crit] CPU20: Package power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU8: Package power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU10: Package power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU22: Package power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU15: Core power limit notification (total events = 369)
        Nov 7 10:15:15 someserver [.crit] CPU3: Core power limit notification (total events = 369)
        Nov 7 10:15:15 someserver [.crit] CPU1: Core power limit notification (total events = 369)
        Nov 7 10:15:15 someserver [.crit] CPU5: Core power limit notification (total events = 369)
        Nov 7 10:15:15 someserver [.crit] CPU17: Core power limit notification (total events = 369)
        Nov 7 10:15:15 someserver [.crit] CPU13: Core power limit notification (total events = 369)
        Nov 7 10:15:15 someserver [.crit] CPU15: Package power limit notification (total events = 375)
        Nov 7 10:15:15 someserver [.crit] CPU3: Package power limit notification (total events = 374)
        Nov 7 10:15:15 someserver [.crit] CPU1: Package power limit notification (total events = 376)
        Nov 7 10:15:15 someserver [.crit] CPU5: Package power limit notification (total events = 376)
        Nov 7 10:15:15 someserver [.crit] CPU7: Core power limit notification (total events = 369)
        Nov 7 10:15:15 someserver [.crit] CPU19: Core power limit notification (total events = 369)
        Nov 7 10:15:15 someserver [.crit] CPU17: Package power limit notification (total events = 377)
        Nov 7 10:15:15 someserver [.crit] CPU9: Core power limit notification (total events = 369)
        Nov 7 10:15:15 someserver [.crit] CPU21: Core power limit notification (total events = 369)
        Nov 7 10:15:15 someserver [.crit] CPU23: Core power limit notification (total events = 369)
        Nov 7 10:15:15 someserver [.crit] CPU11: Core power limit notification (total events = 369)
        Nov 7 10:15:15 someserver [.crit] CPU13: Package power limit notification (total events = 376)
        Nov 7 10:15:15 someserver [.crit] CPU7: Package power limit notification (total events = 375)
        Nov 7 10:15:15 someserver [.crit] CPU19: Package power limit notification (total events = 375)
        Nov 7 10:15:15 someserver [.crit] CPU9: Package power limit notification (total events = 374)
        Nov 7 10:15:15 someserver [.crit] CPU21: Package power limit notification (total events = 375)
        Nov 7 10:15:15 someserver [.crit] CPU23: Package power limit notification (total events = 374)

    A little Google-fu reveals that this is typically associated with the CPU running hot, or with voltage regulation kicking in. I don't think that's what is happening, though. Temperature sensors for all servers in the cluster are running fine, Power Cap Policy is disabled in the iDRAC, and my System Profile is set to "Performance" on all of these servers:

        # omreport chassis biossetup | grep -A10 'System Profile'
        System Profile Settings
        ------------------------------------------
        System Profile : Performance
        CPU Power Management : Maximum Performance
        Memory Frequency : Maximum Performance
        Turbo Boost : Enabled
        C1E : Disabled
        C States : Disabled
        Monitor/Mwait : Enabled
        Memory Patrol Scrub : Standard
        Memory Refresh Rate : 1x
        Memory Operating Voltage : Auto
        Collaborative CPU Performance Control : Disabled

    A Dell mailing list post describes the symptoms almost perfectly. Dell suggested that the author try using the Performance profile, but that didn't help. He ended up applying some settings from Dell's guide for configuring a server for low-latency environments, and one of those settings (or a combination thereof) seems to have fixed the problem. Kernel.org bug #36182 notes that power-limit interrupt debugging was enabled by default, causing performance degradation in scenarios where CPU voltage regulation kicks in. An RHN KB article (RHN login required) mentions a problem impacting PE R620 and R720 servers not running the Performance profile, and recommends an update to a kernel released two weeks ago. ...Except we are running the Performance profile. Everything I can find online is leading me in circles here. What the heck is going on?

    Read the article

  • July 2013 Release of the Ajax Control Toolkit

    - by Stephen.Walther
    I'm super excited to announce the July 2013 release of the Ajax Control Toolkit. You can download the new version of the Ajax Control Toolkit from CodePlex (http://ajaxControlToolkit.CodePlex.com) or install the Ajax Control Toolkit from NuGet. With this release, we have completely rewritten the way the Ajax Control Toolkit combines, minifies, gzips, and caches JavaScript files. The goal of this release was to improve the performance of the Ajax Control Toolkit and make it easier to create custom Ajax Control Toolkit controls.

    Improving Ajax Control Toolkit Performance

    Previous releases of the Ajax Control Toolkit optimized performance for a single page but not for multiple pages. When you visited each page in an app, the Ajax Control Toolkit would combine all of the JavaScript files required by the controls in the page into a new JavaScript file. So, even if every page in your app used the exact same controls, visitors would need to download a new combined Ajax Control Toolkit JavaScript file for each page visited. Downloading new scripts for each page that you visit does not lead to good performance. In general, you want to make as few requests for JavaScript files as possible and take maximum advantage of caching. For most apps, you would get much better performance if you could specify all of the Ajax Control Toolkit controls that you need for your entire app and create a single JavaScript file which could be used across your entire app. What a great idea!

    Introducing Control Bundles

    With this release of the Ajax Control Toolkit, we introduce the concept of Control Bundles. You define a Control Bundle to indicate the set of Ajax Control Toolkit controls that you want to use in your app. You define Control Bundles in a file named AjaxControlToolkit.config, located in the root of your application. For example, the following AjaxControlToolkit.config file defines two Control Bundles:

        <ajaxControlToolkit>
            <controlBundles>
                <controlBundle>
                    <control name="CalendarExtender" />
                    <control name="ComboBox" />
                </controlBundle>
                <controlBundle name="CalendarBundle">
                    <control name="CalendarExtender" />
                </controlBundle>
            </controlBundles>
        </ajaxControlToolkit>

    The first Control Bundle in the file above does not have a name. When a Control Bundle does not have a name, it becomes the default Control Bundle for your entire application. The default Control Bundle is used by the ToolkitScriptManager by default. For example, the default Control Bundle is used when you declare the ToolkitScriptManager like this:

        <ajaxToolkit:ToolkitScriptManager runat="server" />

    The default Control Bundle defined in the file above includes all of the scripts required for the CalendarExtender and ComboBox controls. All of the scripts required by both of these controls are combined, minified, gzipped, and cached automatically. The AjaxControlToolkit.config file above also defines a second Control Bundle with the name CalendarBundle. Here's how you would use the CalendarBundle with the ToolkitScriptManager:

        <ajaxToolkit:ToolkitScriptManager runat="server">
            <ControlBundles>
                <ajaxToolkit:ControlBundle Name="CalendarBundle" />
            </ControlBundles>
        </ajaxToolkit:ToolkitScriptManager>

    In this case, only the JavaScript files required by the CalendarExtender control, and not the ComboBox, would be downloaded, because the CalendarBundle lists only the CalendarExtender control. You can use multiple named Control Bundles with the ToolkitScriptManager, and you will get all of the scripts from both bundles.
    Support for Control Bundles is a new feature of the ToolkitScriptManager that we introduced with this release. We extended the ToolkitScriptManager to support the Control Bundles that you can define in the AjaxControlToolkit.config file. Let me be explicit about the rules for Control Bundles:

    1. If you do not create an AjaxControlToolkit.config file, then the ToolkitScriptManager downloads all of the JavaScript files required for all of the controls in the Ajax Control Toolkit. This is the easy but low-performance option.

    2. If you create an AjaxControlToolkit.config file and create a Control Bundle without a name, then the ToolkitScriptManager uses that Control Bundle by default. For example, if you plan to use only the CalendarExtender and ComboBox controls in your application, then you should create a default bundle that lists only these two controls.

    3. If you create an AjaxControlToolkit.config file and create one or more named Control Bundles, then you can use these named Control Bundles with the ToolkitScriptManager. For example, you might want to use different subsets of the Ajax Control Toolkit controls in different sections of your app.

    I should also mention that you can use the AjaxControlToolkit.config file with custom Ajax Control Toolkit controls - new controls that you write. For example, here is how you would register a set of custom controls from an assembly named MyAssembly:

        <ajaxControlToolkit>
            <controlBundles>
                <controlBundle name="CustomBundle">
                    <control name="MyAssembly.MyControl1" assembly="MyAssembly" />
                    <control name="MyAssembly.MyControl2" assembly="MyAssembly" />
                </controlBundle>
            </controlBundles>
        </ajaxControlToolkit>

    What about ASP.NET Bundling and Minification?

    The idea of Control Bundles is similar to the idea of Script Bundles used in ASP.NET Bundling and Minification. You might be wondering why we didn't simply use Script Bundles with the Ajax Control Toolkit. There were several reasons. First, ASP.NET Bundling does not work with scripts embedded in an assembly. Because all of the scripts used by the Ajax Control Toolkit are embedded in the AjaxControlToolkit.dll assembly, ASP.NET Bundling was not an option. Second, Web Forms developers typically think at the level of controls and not at the level of individual scripts. We believe that it makes more sense for a Web Forms developer to specify the controls that they need in an app (CalendarExtender, ToggleButton) instead of the individual scripts that they need in an app (the 15 or so scripts required by the CalendarExtender). Finally, ASP.NET Bundling does not work with older versions of ASP.NET. The Ajax Control Toolkit needs to support ASP.NET 3.5, ASP.NET 4.0, and ASP.NET 4.5; therefore, using ASP.NET Bundling was not an option. There is nothing wrong with using Control Bundles and Script Bundles side-by-side. The ASP.NET 4.0 and 4.5 ToolkitScriptManager supports both approaches to bundling scripts.

    Using the AjaxControlToolkit.CombineScriptsHandler

    Browsers cache JavaScript files by URL. For example, if you request the exact same JavaScript file from two different URLs, then the exact same JavaScript file must be downloaded twice. However, if you request the same JavaScript file from the same URL more than once, then it only needs to be downloaded once. With this release of the Ajax Control Toolkit, we have introduced a new HTTP handler named the AjaxControlToolkit.CombineScriptsHandler.
    If you register this handler in your web.config file, then the Ajax Control Toolkit can cache your JavaScript files for up to one year in the future automatically. You should register the handler in two places in your web.config file: in the <httpHandlers> section and in the <system.webServer> section (don't forget to register the handler for the AjaxFileUpload while you are there!).

        <httpHandlers>
            <add verb="*" path="AjaxFileUploadHandler.axd"
                type="AjaxControlToolkit.AjaxFileUploadHandler, AjaxControlToolkit" />
            <add verb="*" path="CombineScriptsHandler.axd"
                type="AjaxControlToolkit.CombineScriptsHandler, AjaxControlToolkit" />
        </httpHandlers>
        <system.webServer>
            <validation validateIntegratedModeConfiguration="false" />
            <handlers>
                <add name="AjaxFileUploadHandler" verb="*" path="AjaxFileUploadHandler.axd"
                    type="AjaxControlToolkit.AjaxFileUploadHandler, AjaxControlToolkit" />
                <add name="CombineScriptsHandler" verb="*" path="CombineScriptsHandler.axd"
                    type="AjaxControlToolkit.CombineScriptsHandler, AjaxControlToolkit" />
            </handlers>
        </system.webServer>

    The handler is only used in release mode and not in debug mode. You can enable release mode in your web.config file like this:

        <compilation debug="false">

    You can also override the web.config setting with the ToolkitScriptManager like this:

        <act:ToolkitScriptManager ScriptMode="Release" runat="server" />

    In release mode, scripts are combined, minified, gzipped, and cached with a far-future cache header automatically. When the handler is not registered, scripts are requested from the page that contains the ToolkitScriptManager; when the handler is registered in the web.config file, scripts are requested from the handler. If you want the best performance, always register the handler. That way, the Ajax Control Toolkit can cache the bundled scripts across page requests with a far-future cache header. If you don't register the handler, then a new JavaScript file must be downloaded whenever you travel to a new page.

    Dynamic Bundling and Minification

    Previous releases of the Ajax Control Toolkit used a Visual Studio build task to minify the JavaScript files used by the Ajax Control Toolkit controls. The disadvantage of this approach to minification is that it made it difficult to create custom Ajax Control Toolkit controls. Starting with this release of the Ajax Control Toolkit, we support dynamic minification: the JavaScript files in the Ajax Control Toolkit are minified at runtime instead of at build time. Scripts are minified only in release mode. You can specify release mode in the web.config file or with the ToolkitScriptManager ScriptMode property. Because of this change, the Ajax Control Toolkit now depends on the Ajax Minifier. You must include a reference to AjaxMin.dll in your Visual Studio project or you cannot take advantage of runtime minification. If you install the Ajax Control Toolkit from NuGet, then AjaxMin.dll is added to your project as a NuGet dependency automatically. If you download the Ajax Control Toolkit from CodePlex, then AjaxMin.dll is included in the download. This change means that you no longer need to do anything special to create a custom Ajax Control Toolkit control. As an open source project, we hope more people will contribute to the Ajax Control Toolkit (yes, I am looking at you). We have been working hard on making it much easier to create new custom controls. More on this subject with the next release of the Ajax Control Toolkit.
    A Single Visual Studio Solution

    We also made substantial changes to the Visual Studio solution and projects used by the Ajax Control Toolkit with this release. This change will matter to you only if you need to work directly with the Ajax Control Toolkit source code. In previous releases of the Ajax Control Toolkit, we maintained separate solution and project files for ASP.NET 3.5, ASP.NET 4.0, and ASP.NET 4.5. Starting with this release, we now support a single Visual Studio 2012 solution that takes advantage of multi-targeting to build ASP.NET 3.5, ASP.NET 4.0, and ASP.NET 4.5 versions of the toolkit. This change means that you need Visual Studio 2012 to open the Ajax Control Toolkit project downloaded from CodePlex. For details on how we set up multi-targeting, please see Budi Adiono's blog post: http://www.budiadiono.com/2013/07/25/visual-studio-2012-multi-targeting-framework-project/

    Summary

    You can take advantage of this release of the Ajax Control Toolkit to significantly improve the performance of your website. You need to do two things: 1) create an AjaxControlToolkit.config file which lists the controls used in your app, and 2) register the AjaxControlToolkit.CombineScriptsHandler in the web.config file. We made substantial changes to the Ajax Control Toolkit with this release. We think these changes will result in much better performance for multipage apps and make the process of building custom controls much easier. As always, we look forward to hearing your feedback.

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #039

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane.

    2007

    FQL – Facebook Query Language - Facebook lists the following advantages of FQL: condensed XML reduces bandwidth and parsing costs; more complex requests can reduce the number of requests necessary; it provides a single, consistent, unified interface for all of your data; and it's fun!

    UDF – Get the Day of the Week Function - The day of the week can be retrieved in SQL Server by using the DATEPART function. The value returned by the function is between 1 (Sunday) and 7 (Saturday). To convert this to a string representing the day of the week, use a CASE statement.

    UDF – Function to Get Previous And Next Work Day – Exclude Saturday and Sunday - While reading Ben Nadel's ColdFusion blog post Getting the Previous Day In ColdFusion, Excluding Saturday And Sunday, I realized that I use a similar function on my SQL Server database. This function excludes the weekend (Saturday and Sunday), and it gets the previous as well as the next work day.

    Complete Series of SQL Server Interview Questions and Answers - Data Warehousing Interview Questions and Answers – Introduction; Part 1; Part 2; Part 3; Complete List Download.

    2008

    Introduction to Log Viewer - In SQL Server, all the Windows event logs can be seen along with the SQL Server logs. The interface for all the logs is the same and can be launched from the same place. The log can be exported and filtered as well.

    DBCC SHRINKFILE Takes Long Time to Run - If you are a DBA involved with database maintenance and file group maintenance, you must have experienced that DBCC SHRINKFILE operations often take a long time, while other operations on the database are relatively quicker.

    mssqlsystemresource – Resource Database - The purpose of the resource database is to facilitate upgrading to a new version of SQL Server without any hassle. In previous versions, whenever SQL Server was upgraded, all the previous version's system objects needed to be dropped and the new version's system objects created.

    2009

    Puzzle – Write Script to Generate Primary Key and Foreign Key - In SQL Server Management Studio (SSMS), there is no option to script all the keys. If one is required to script keys, one has to manually script each key, one at a time. If a database has many tables, generating one key at a time can be a very intricate task. I want to throw a question out to all of you: do any of you have scripts for this purpose?

    Maximizing View of SQL Server Management Studio – Full Screen – New Screen - I had explained the following two different methods: 1) Open Results in Separate Tab - a very interesting method, as the result pane shows up in a different tab instead of splitting the screen horizontally; 2) Open SSMS in Full Screen - this always works, and at its best. Not many people are aware of this method; hence, very few people use it to enhance performance.

    2010

    Find Queries using Parallelism from Cached Plan - This T-SQL script gets all the queries, and their execution plans, where parallelism operations are kicked in. Pay attention: TOP 10 is used, so if you have lots of transactional operations, I suggest you change TOP 10 to TOP 50.

    This is the list of all the articles in the series on computed columns:
    SQL SERVER – Computed Column – PERSISTED and Storage - How computed columns are created and why they take more storage space than before.
    SQL SERVER – Computed Column – PERSISTED and Performance - How PERSISTED columns give better performance than non-persisted columns.
    SQL SERVER – Computed Column – PERSISTED and Performance – Part 2 - How non-persisted columns give better performance than PERSISTED columns.
    SQL SERVER – Computed Column and Performance – Part 3 - How an index improves the performance of computed columns.
    SQL SERVER – Computed Column – PERSISTED and Storage – Part 2 - How creating an index on a computed column does not grow the row length of the table.
    SQL SERVER – Computed Columns – Index and Performance - A summary of all the articles related to computed columns.

    2011

    SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Data Warehousing Concepts – Day 21 of 31 - What is Data Warehousing? What is Business Intelligence (BI)? What is a Dimension Table? What is Dimensional Modeling? What is a Fact Table? What are the Fundamental Stages of Data Warehousing? What are the Different Methods of Loading Dimension Tables? Describe the Foreign Key Columns in Fact Tables and Dimension Tables. What is Data Mining? What is the Difference between a View and a Materialized View?

    SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Data Warehousing Concepts – Day 22 of 31 - What is OLTP? What is OLAP? What is the Difference between OLTP and OLAP? What is ODS? What is an ER Diagram?

    SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Data Warehousing Concepts – Day 23 of 31 - What is ETL? What is VLDB? Is the OLTP Database Design Optimal for a Data Warehouse? If denormalizing improves Data Warehouse processes, why is the Fact Table in Normal Form? What are Lookup Tables? What are Aggregate Tables? What is Real-Time Data Warehousing? What are Conformed Dimensions? What is a Conformed Fact? How do you Load the Time Dimension? What is the Level of Granularity of a Fact Table? What are Non-Additive Facts? What is a Factless Fact Table? What are Slowly Changing Dimensions (SCD)?

    SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Data Warehousing Concepts – Day 24 of 31 - What is a Hybrid Slowly Changing Dimension? What is a BUS Schema? What is a Star Schema? What is a Snowflake Schema? What are the Differences between the Star and Snowflake Schemas? What is the Difference between ER Modeling and Dimensional Modeling? What is a Degenerate Dimension Table? Why is Data Modeling Important? What is a Surrogate Key? What is a Junk Dimension? What is a Data Mart? What is the Difference between OLAP and a Data Warehouse? What are a Cube and a Linked Cube with Reference to a Data Warehouse? What is a Snapshot with Reference to a Data Warehouse? What is Active Data Warehousing? What is the Difference between Data Warehousing and Business Intelligence? What is MDS? Explain the Paradigms of Bill Inmon and Ralph Kimball.

    SQL SERVER – Azure Interview Questions and Answers – Guest Post by Paras Doshi – Day 25 of 31 - Paras Doshi has submitted 21 interesting questions and answers for SQL Azure: 1. What is SQL Azure? 2. What is cloud computing? 3. How is SQL Azure different from SQL Server? 4. How many replicas are maintained for each SQL Azure database? 5. How can we migrate from SQL Server to SQL Azure? 6. Which tools are available to manage SQL Azure databases and servers? 7. Tell me something about security and SQL Azure. 8. What is the SQL Azure Firewall? 9. What is the difference between the web edition and the business edition? 10. How do we synchronize an on-premises SQL Server with SQL Azure? 11. How do we back up SQL Azure data? 12. What is the current pricing model of SQL Azure? 13. What is the current limit on the size of a SQL Azure DB? 14. How do you handle datasets larger than 50 GB? 15. What happens when a SQL Azure database reaches its maximum size? 16. How many databases can we create on a single server? 17. How many servers can we create under a single subscription? 18. How do you improve the performance of a SQL Azure database? 19. What is the code-near application topology? 20. What were the latest updates to the SQL Azure service? 21. When does a workload on SQL Azure get throttled?

    SQL SERVER – Interview Questions and Answers – Guest Post by Malathi Mahadevan – Day 26 of 31 - Malathi asked a simple question which has several answers. Each answer makes you think and ponder the reality of the IT world. Look at the simple question - "What is the toughest challenge you have faced in your present job, and how did you handle it?" - and its various answers. Each answer has its own story.

    SQL SERVER – Interview Questions and Answers – Guest Post by Rick Morelan – Day 27 of 31 - Rick Morelan of Joes2Pros has written an excellent blog post on how to find the top N values. Most people are fully aware of how the TOP keyword works with a SELECT statement. After years of preparing so many students to pass the SQL certification, I noticed they were pretty well prepared for job interviews too. Yes, they would do well in the interview, but not great. There seemed to be a few questions that would come up repeatedly for almost everyone. Rick addresses similar questions with his lucid writing style.

    2012

    Observation of Top with Index and Order of Resultset - SQL Server has lots of things to learn and share. It is amazing to see how differently people evaluate and understand different techniques and styles when implementing them. The real reason may be absolutely different, but we may blame something totally different for the incorrect results. Read the blog post to learn more.

    How do I Record Video and Webcast

    How to Convert Hex to Decimal or INT - Earlier I was asked a question about how to convert hex to decimal. I promised that I would post an answer with due credit to the author, but never got around to writing the blog post. Read the original post over here: SQL SERVER – Question – How to Convert Hex to Decimal.

    Query to Get Unique Distinct Data Based on Condition – Eliminate Duplicate Data from Resultset - The natural reaction is to suggest DISTINCT or GROUP BY. However, not all such questions can be solved by DISTINCT or GROUP BY. Let us look at the following example, where a user wanted only the latest records to be displayed.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Organizations &amp; Architecture UNISA Studies &ndash; Chap 7

    - by MarkPearl
    Learning Outcomes
    - Name different device categories
    - Discuss the functions and structure of I/O modules
    - Describe the principles of programmed I/O
    - Describe the principles of interrupt-driven I/O
    - Describe the principles of DMA
    - Discuss the evolutionary characteristics of I/O channels
    - Describe different types of I/O interfaces
    - Explain the principles of point-to-point and multipoint configurations
    - Discuss the way in which a FireWire serial bus functions
    - Discuss the principles of the InfiniBand architecture

    External Devices
    An external device attaches to the computer by a link to an I/O module. The link is used to exchange control, status, and data between the I/O module and the external device. External devices can be classified into 3 categories:
    - Human readable - e.g. video display
    - Machine readable - e.g. magnetic disk
    - Communications - e.g. wifi card

    I/O Modules
    An I/O module has two major functions:
    - Interface to the processor and memory via the system bus or central switch
    - Interface to one or more peripheral devices by tailored data links

    Module Functions
    The major functions or requirements for an I/O module fall into the following categories:
    - Control and timing
    - Processor communication
    - Device communication
    - Data buffering
    - Error detection
    The I/O function includes a control and timing requirement to coordinate the flow of traffic between internal resources and external devices. Processor communication involves command decoding, data, status reporting, and address recognition. The I/O module must also be able to perform device communication; this communication involves commands, status information, and data. An essential task of an I/O module is data buffering, due to the relatively slow speeds of most external devices. An I/O module is often responsible for error detection and for subsequently reporting errors to the processor.

    I/O Module Structure
    An I/O module functions to allow the processor to view a wide range of devices in a simple-minded way. The I/O module may hide the details of timing, formats, and the electromechanics of an external device so that the processor can function in terms of simple read and write commands. An I/O channel/processor is an I/O module that takes on most of the detailed processing burden, presenting a high-level interface to the processor. Three techniques are possible for I/O operations:
    - Programmed I/O
    - Interrupt-driven I/O
    - Direct memory access (DMA)

    Programmed I/O
    When a processor is executing a program and encounters an instruction relating to I/O, it executes that instruction by issuing a command to the appropriate I/O module. With programmed I/O, the I/O module will perform the requested action and then set the appropriate bits in the I/O status register; it takes no further action to alert the processor.

    I/O Commands
    To execute an I/O-related instruction, the processor issues an address, specifying the particular I/O module and external device, and an I/O command.
    There are four types of I/O commands that an I/O module may receive when it is addressed by a processor:
    - Control - used to activate a peripheral and tell it what to do
    - Test - used to test various status conditions associated with an I/O module and its peripherals
    - Read - causes the I/O module to obtain an item of data from the peripheral and place it in an internal buffer
    - Write - causes the I/O module to take an item of data from the data bus and subsequently transmit that data item to the peripheral
    The main disadvantage of this technique is that it is a time-consuming process that keeps the processor busy needlessly.

    I/O Instructions
    With programmed I/O there is a close correspondence between the I/O-related instructions that the processor fetches from memory and the I/O commands that the processor issues to an I/O module to execute the instructions. Typically there will be many I/O devices connected through I/O modules to the system. Each device is given a unique identifier or address; when the processor issues an I/O command, the command contains the address of the desired device, and thus each I/O module must interpret the address lines to determine if the command is for itself. When the processor, main memory, and I/O share a common bus, two modes of addressing are possible:
    - Memory-mapped I/O
    - Isolated I/O
    (For a detailed explanation, read page 245 of the book.) The advantage of memory-mapped I/O over isolated I/O is that it has a large repertoire of instructions that can be used, allowing more efficient programming. The disadvantage of memory-mapped I/O over isolated I/O is that valuable memory address space is used up.

    Interrupt-driven I/O
    Interrupt-driven I/O works as follows:
    - The processor issues an I/O command to a module and then goes on to do some other useful work
    - The I/O module then interrupts the processor to request service when it is ready to exchange data with the processor
    - The processor then executes the data transfer and resumes its former processing

    Interrupt Processing
    The occurrence of an interrupt triggers a number of events, both in the processor hardware and in software. When an I/O device completes an I/O operation, the following sequence of hardware events occurs:
    - The device issues an interrupt signal to the processor
    - The processor finishes execution of the current instruction before responding to the interrupt
    - The processor tests for an interrupt, determines that there is one, and sends an acknowledgement signal to the device that issued the interrupt; the acknowledgement allows the device to remove its interrupt signal
    - The processor now needs to prepare to transfer control to the interrupt routine. To begin, it saves the information needed to resume the current program at the point of interrupt. The minimum information required is the status of the processor and the location of the next instruction to be executed
    - The processor then loads the program counter with the entry location of the interrupt-handling program that will respond to this interrupt. It also saves the values of the processor registers, because the interrupt operation may modify them
    - The interrupt handler processes the interrupt; this includes examination of status information relating to the I/O operation or other event that caused the interrupt
    - When interrupt processing is complete, the saved register values are retrieved from the stack and restored to the registers
    - Finally, the PSW and program counter values from the stack are restored
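    As a purely illustrative aside (not from the textbook): the practical difference between programmed I/O and interrupt-driven I/O is busy-waiting versus being notified, which can be sketched in Java using threads as stand-ins for the processor and the device. Real I/O happens in hardware and the OS, so this only models the control flow:

        import java.util.concurrent.CountDownLatch;
        import java.util.concurrent.atomic.AtomicBoolean;

        public class PollingVsNotification {
            public static void main(String[] args) throws InterruptedException {
                // Programmed-I/O style: the "processor" spins, repeatedly
                // testing a status flag, doing no useful work meanwhile.
                AtomicBoolean ready = new AtomicBoolean(false);
                Thread device = new Thread(() -> {
                    sleep(100);        // device busy transferring data
                    ready.set(true);   // sets its status register
                });
                device.start();
                while (!ready.get()) { /* busy wait: wasted cycles */ }
                System.out.println("polled: device ready");
                device.join();

                // Interrupt-driven style: the "processor" blocks (or does
                // other work) and is woken when the device signals completion.
                CountDownLatch interrupt = new CountDownLatch(1);
                Thread device2 = new Thread(() -> {
                    sleep(100);
                    interrupt.countDown();  // raise the "interrupt"
                });
                device2.start();
                interrupt.await();          // no cycles wasted while waiting
                System.out.println("interrupted: device ready");
                device2.join();
            }

            private static void sleep(long ms) {
                try { Thread.sleep(ms); } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }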
Design Issues
Two design issues arise in implementing interrupt-driven I/O:
Because there will be multiple I/O modules, how does the processor determine which device issued the interrupt?
If multiple interrupts have occurred, how does the processor decide which one to process?
For device recognition, four general categories of techniques are in common use:
Multiple interrupt lines
Software poll
Daisy chain
Bus arbitration
For a detailed explanation of these approaches, read page 250 of the textbook.
Interrupt-driven I/O, while more efficient than simple programmed I/O, still requires the active intervention of the processor to transfer data between memory and an I/O module, and any data transfer must traverse a path through the processor. Thus it suffers from two inherent drawbacks:
The I/O transfer rate is limited by the speed with which the processor can test and service a device
The processor is tied up in managing an I/O transfer; a number of instructions must be executed for each I/O transfer

Direct Memory Access
When large volumes of data are to be moved, a more efficient technique is direct memory access (DMA).

DMA Function
DMA involves an additional module on the system bus. The DMA module is capable of mimicking the processor and taking over control of the system from the processor; it needs to do this to transfer data to and from memory over the system bus. The DMA module must use the bus only when the processor does not need it, or it must force the processor to suspend operation temporarily (the more common approach, referred to as cycle stealing). When the processor wishes to read or write a block of data, it issues a command to the DMA module, sending it the following information:
Whether a read or write is requested, using the read or write control line between the processor and the DMA module
The address of the I/O device involved, communicated on the data lines
The starting location in memory to read from or write to, communicated on the data lines and stored by the DMA module in its address register
The number of words to be read or written, communicated via the data lines and stored in the data count register
The processor then continues with other work. It delegates the I/O operation to the DMA module, which transfers the entire block of data, one word at a time, directly to or from memory without going through the processor. When the transfer is complete, the DMA module sends an interrupt signal to the processor; thus the processor is involved only at the beginning and end of the transfer.

I/O Channels and Processors

Characteristics of I/O Channels
As one proceeds along the evolutionary path, more and more of the I/O function is performed without CPU involvement. The I/O channel represents an extension of the DMA concept. An I/O channel has the ability to execute I/O instructions, which gives it complete control over I/O operations. In a computer system with such devices, the CPU does not execute I/O instructions; such instructions are stored in main memory to be executed by a special-purpose processor in the I/O channel itself. Two types of I/O channels are common:
A selector channel controls multiple high-speed devices.
A multiplexor channel can handle I/O with multiple devices at the same time, accepting or transmitting characters as fast as possible to multiple devices.
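
Returning to the four-item command the processor sends the DMA module: a minimal C sketch of that hand-off follows. The register-block layout, base address, and bit meanings are assumptions for illustration, not any real controller's interface:

/*
 * Minimal sketch of delegating a block transfer to a DMA module. The
 * register-block layout, base address, and bit meanings are hypothetical.
 */
#include <stdint.h>

typedef struct {
    volatile uint32_t control;  /* bit 0: start; bit 1: 0 = read, 1 = write (assumed) */
    volatile uint32_t device;   /* address of the I/O device involved                 */
    volatile uint32_t mem_addr; /* starting location in memory                        */
    volatile uint32_t count;    /* number of words to transfer (data count register)  */
} dma_regs_t;

#define DMA ((dma_regs_t *)0x40002000u) /* hypothetical base address of the module */

#define DMA_START 0x1u

/* Mirrors the four-item command the processor sends the DMA module, then
 * returns immediately; the module steals bus cycles to move the block and
 * raises a completion interrupt (not shown) when the count reaches zero. */
void dma_start_read(uint32_t device, uint32_t mem_addr, uint32_t nwords)
{
    DMA->device   = device;    /* which peripheral to transfer from     */
    DMA->mem_addr = mem_addr;  /* where in memory the block should land */
    DMA->count    = nwords;    /* number of words to be read            */
    DMA->control  = DMA_START; /* kick off a read transfer              */
}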
The External Interface: FireWire and InfiniBand

Types of Interfaces
One major characteristic of the interface is whether it is serial or parallel:
Parallel interface – there are multiple lines connecting the I/O module and the peripheral, and multiple bits are transferred simultaneously
Serial interface – there is only one line used to transmit data, and bits must be transmitted one at a time
With the new generation of serial interfaces, parallel interfaces are becoming less common. In either case, the I/O module must engage in a dialogue with the peripheral. In general terms, the dialogue may look as follows:
The I/O module sends a control signal requesting permission to send data
The peripheral acknowledges the request
The I/O module transfers data
The peripheral acknowledges receipt of the data
For a detailed explanation of FireWire and InfiniBand technology, read pages 264–270 of the textbook.
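
The four-step dialogue above amounts to a request/acknowledge handshake. Below is a minimal C simulation of it; the line names and the set_line/wait_line helpers are hypothetical stand-ins for real bus signalling, and the stubs merely trace each protocol step:

/*
 * Minimal simulation of the request/acknowledge dialogue between an I/O
 * module and a peripheral. Line names and helpers are hypothetical; the
 * stubs below just print the protocol steps instead of driving hardware.
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

enum line { REQ, ACK, DATA_VALID, DATA_ACK };
static const char *line_name[] = { "REQ", "ACK", "DATA_VALID", "DATA_ACK" };

static void set_line(enum line l, int level)  /* stub: would drive a control line */
{
    printf("module drives %s = %d\n", line_name[l], level);
}

static void wait_line(enum line l, int level) /* stub: would block on a line level */
{
    printf("module waits for %s = %d (peripheral acknowledges)\n", line_name[l], level);
}

static void module_send(const uint8_t *buf, size_t n)
{
    set_line(REQ, 1);                 /* 1. request permission to send  */
    wait_line(ACK, 1);                /* 2. peripheral acknowledges     */
    for (size_t i = 0; i < n; i++) {
        printf("module puts byte 0x%02x on the data lines\n", (unsigned)buf[i]);
        set_line(DATA_VALID, 1);      /* 3. transfer a data item        */
        wait_line(DATA_ACK, 1);       /* 4. peripheral acknowledges it  */
        set_line(DATA_VALID, 0);      /* return the lines to idle       */
        wait_line(DATA_ACK, 0);
    }
    set_line(REQ, 0);
}

int main(void)
{
    const uint8_t msg[] = { 0xDE, 0xAD };
    module_send(msg, sizeof msg);
    return 0;
}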

    Read the article

  • April Oracle Database Events

    - by Mandy Ho
April 17-18, 2012 – Moscow, Russia
Oracle Develop Conference
The Oracle database developer conference, Oracle Develop, will visit Moscow, Russia and Hyderabad, India this spring. Oracle Develop includes a database development track, which contains .NET sessions and a hands-on lab led by an Oracle .NET product manager. Register today before the event fills up.
http://www.oracle.com/javaone/ru-en/index.html

April 24 and 26, 2012 – San Diego, CA & San Jose, CA
(ISC)2 Security Leadership Regional Event Series
Oracle at the (ISC)2 Security Leadership Series: Herding Clouds -- Managing Cloud Security Concerns and Expectations. Join us for this interactive day-long session where industry leaders, including Oracle solution experts, will talk about how to:
dispel the top cloud security myths
minimize "rogue cloud" implementations
identify potential compliance pitfalls and how to avoid them
develop contract language for your cloud providers
manage users across your fractured datacenter
leverage existing technologies to protect data as it moves from the enterprise to the cloud
http://www.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=146972&src=7239493&src=7239493&Act=228

April 22-26, 2012 – Las Vegas, NV
IOUG COLLABORATE 12
From April 22-26, 2012, Oracle takes Las Vegas. Thousands of Oracle professionals will descend upon the Mandalay Bay Convention Center for a week's worth of education sessions, networking opportunities and more, at the only user-driven and user-run Oracle conference: COLLABORATE 12. Your COLLABORATE 12 - IOUG Forum registration comes complete with a bonus: a full-day Deep Dive education program on Sunday! Choose from numerous hot topics, including Real World Performance, High Availability and more, presented by legendary and seasoned Oracle pros, including Tom Kyte and Craig Shallahamer.
http://www.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=143637&src=7360364&src=7360364&Act=5

Independent Oracle User Group (IOUG) Regional Events:

April 16, 2012 – Columbus, Ohio
Ohio Oracle Users Group
Higher Performance PL/SQL and Oracle Database 11g PL/SQL New Features – featuring Steven Feuerstein
http://www.ooug.org/

April 26, 2012 – Irving, TX
Dallas Oracle Users Group
Oracle Database Forum: 5-7 PM
Oracle Corporation, 6031 Connection Drive, Irving, TX
http://memberservices.membee.com/538/irmevents.aspx?id=64

April 30, 2012 – Los Angeles, CA
Real World Performance Tour
A full day of real-world database performance with Tom Kyte, author of the ever-popular AskTom blog; Andrew Holdsworth, head of Oracle's Real World Performance Team; and Graham Wood, renowned Oracle Database performance architect. Through discussion, debate and demos, they'll show you how to master performance engineering topics like:
Best practices for designing hardware architectures, and how to spot and fix bad design.
How to develop applications that deliver the fastest possible performance without sacrificing accuracy.
New for 2012! Updates on Enterprise Manager, Exadata, and what these technologies mean to your current systems.
http://www.ioug.org/Events/ADayofRealWorldPerformance/tabid/194/Default.aspx

    Read the article

  • The SPARC SuperCluster

    - by Karoly Vegh
Oracle has been providing a lead in the Engineered Systems business for quite a while now, in accordance with the motto "Hardware and Software Engineered to Work Together." Indeed it is hard to find a better definition of these systems. Allow me to summarize the idea. It is:
Build a compute platform optimized to run your technologies
Develop application-aware, intelligently caching storage components
Take an impressively fast network technology interconnecting it with the compute nodes
Tune the application to scale with the nodes to yet unseen performance
Reduce the amount of data moving via compression
Provide this all in a pre-integrated single product with a single-pane management interface
All these ideas have been around in IT for quite some time now. The real Oracle advantage is adding the last one to put them all together. Oracle has built quite a portfolio of Engineered Systems to run its technologies - and run them like they never ran before. In this post I'll focus on one of them that serves as a consolidation demigod, a multi-purpose engineered system. As you probably have guessed, I am talking about the SPARC SuperCluster. It has many great features inherited from its predecessors, and it adds several new ones. Allow me to pick out and elaborate on some of the most interesting ones from a technological point of view.
I. It is the SPARC SuperCluster T4-4. That is, as compute nodes, it includes SPARC T4-4 servers that we have learned to appreciate and respect for their features:
The SPARC T4 CPUs: Each CPU has 8 cores, and each core runs 8 threads. The SPARC T4-4 servers have 4 sockets. That is, a single compute node can simultaneously execute 256 threads. Now, a full-rack SPARC SuperCluster has 4 of these servers on board. Remember the keyword demigod.
While retaining the forerunner SPARC T3's exceptional throughput, the SPARC T4 CPUs raise the bar with single-threaded performance too - a humble 5x better than their ancestors.
Actually, the SPARC T4 CPU cores run in both single-threaded and multi-threaded mode, and switch between the two on the fly, fulfilling not only single-threaded OR multi-threaded applications' needs, but even mixed requirements (like in database workloads!).
Data security, anyone? Every SPARC T4 CPU core has a built-in encryption engine, that is, encryption algorithms cast into silicon.
A PCI controller right on the chip for customers who need I/O performance.
Built-in, no-cost virtualization: Oracle VM for SPARC (the former LDoms or Logical Domains) is not a server-emulation virtualization technology but rather a server-partitioning one. The hypervisor runs in the server firmware, and all the VMs' HW resources (I/O, CPU, memory) are accessed natively, without performance overhead. This enables customers to run a number of Solaris 10 and Solaris 11 VMs separately, independent of each other, within a physical server.
II. For database performance, it includes Exadata Storage Cells - one of the main reasons why the Exadata Database Machine performs at diabolic speed. What makes them important?
They provide DB backend storage for the Oracle Databases running on the SPARC SuperCluster; DB performance is what they are built and tuned for.
These storage cells are SQL-aware.
That is, if a SPARC T4 database compute node executes a query, it doesn't simply request tons of raw data blocks from the storage, filter the received data, and throw away most of it where the statement doesn't apply; it provides the SQL query to the storage node too. The storage cell software speaks SQL, that is, it is able to pre-filter and thereby transfer only the relevant data. With this, the traffic between database nodes and storage cells is reduced immensely. Less I/O is a good thing - as they say, all the CPUs of the world do one thing just as fast as any other, and that is waiting for I/O.
They don't only pre-filter, but also provide data preprocessing features - e.g. if a DB node requests an aggregate of data, they can calculate it and hand over only the results, not the whole set. Again, less data to transfer.
They support the magical HCC (Hybrid Columnar Compression). That is, data can be stored in a precompressed form on the storage. Less data to transfer.
Of course one can't rely on disks alone for performance, so Flash Storage is included for caching.
III. The low-latency, high-speed backbone network: InfiniBand, which interconnects all the members, with:
Real high speed: 40 Gbit/s. Full duplex, of course. Oh, and a really low latency.
RDMA: Remote Direct Memory Access. This technology allows the DB nodes to do exactly that - remotely, directly placing SQL commands into the memory of the storage cells, dodging all the network-stack bottlenecks, avoiding overhead, placing requests directly into the process queue.
You can also run IP over InfiniBand if you please - that's the way the compute nodes can communicate with each other.
IV. It also includes a general-purpose storage: the ZFSSA, a unified storage appliance providing both NAS and SAN access, with the following features:
NFS over RDMA over InfiniBand. Nothing is faster network-filesystem-wise.
All the ZFS features onboard: hybrid storage pools, compression, deduplication, snapshots, replication, NFS and CIFS shares.
Storage heads in an HA cluster configuration, providing availability of the data.
DTrace Live Analytics in a web-based administration UI.
It acts as general-purpose application data storage for your non-database applications running on the SPARC SuperCluster, over whichever protocol they prefer, easily replicating, snapshotting, and cloning data for them.
There's a lot of great technology included in Oracle's SPARC SuperCluster; we have walked through its interior. As for external scalability: you can start with a half- or full-rack SPARC SuperCluster and scale out to several racks - that is, not stacking separate full-rack SPARC SuperClusters, but always extending one large instance to the size of several full racks. Yes, over the InfiniBand network. Add racks as you grow.
What technologies shall run on it? The SPARC SuperCluster is a general-purpose scale-out consolidation/cloud environment. You can run Oracle Databases with RAC scaling, or Oracle WebLogic (and enjoy the SPARC T4's advantages for running Java). Remember, Oracle technologies have been integrated with the Oracle Engineered Systems - this is the Oracle-on-Oracle advantage. But you can run other software environments such as SAP too, if you please. Run any application that runs on Oracle Solaris 10 or Solaris 11. Separate them in virtual machines, or even Oracle Solaris Zones, and monitor and manage those from a central UI.
Here are the key takeaways once again. The SPARC SuperCluster:
Is a pre-integrated Engineered System
Contains SPARC T4-4 servers with built-in virtualization, cryptography, and dynamic threading
Contains the Exadata storage cells that intelligently offload the burden of the DB nodes
Contains a highly available ZFS Storage Appliance that provides SAN/NAS storage in a unified way
Combines all these elements over a high-speed, low-latency backbone network implemented with InfiniBand
Can grow from a single half-rack to several full racks in size
Supports the consolidation of hundreds of applications
To summarize: all these technologies are great by themselves, but the real value is, like in every other Oracle Engineered System, integration. All these technologies are tuned to perform together. Together they are way more than the sum of their parts - and a careful, and actually very time-consuming, integration process is necessary to orchestrate all these for performance. The SPARC SuperCluster's goal is to enable infrastructure operations and offer a pre-integrated solution that can be architected and delivered in hours instead of months of evaluations and tests. The tedious and, most importantly, time- and resource-consuming part of the work - testing and evaluating - has been done. Now go, provide services.
-- charlie

    Read the article

  • where to put core services in two-node cluster

    - by Veniamin
I'm currently configuring a two-node HA cluster based on CentOS with DRBD. Most services are packed into virtual machines, with migration available. I have not yet decided where to put some core services such as DHCP, LDAP, and DNS, which are critical for the whole network infrastructure. There are two possibilities:
Configure them as redundant HA services on the cluster hosts.
Pack them all into a dedicated virtual machine.
What is the best practice?

    Read the article

  • ruby on rails configuration

    - by Themasterhimself
I'm using the following guide to get started with Rails on Ubuntu 9.10: http://guides.rails.info/getting_started.html
I have installed both ruby and gem.
gokul@gokul-laptop:~$ ruby -v ruby 1.8.7 (2009-06-12 patchlevel 174) [i486-linux] gokul@gokul-laptop:~$ gem -v 1.3.6 gokul@gokul-laptop:~$
For Rails, gokul@gokul-laptop:~$ sudo gem install rails doesn't seem to give any response, so I used the Synaptic package manager to install it, and it seems to have installed correctly.
gokul@gokul-laptop:~$ rails Usage: /usr/bin/rails /path/to/your/app [options] Options: -r, --ruby=path Path to the Ruby binary of your choice (otherwise scripts use env, dispatchers current path). Default: /usr/bin/ruby1.8 -d, --database=name Preconfigure for selected database (options: mysql/oracle/postgresql/sqlite2/sqlite3/frontbase/ibm_db). Default: sqlite3 -D, --with-dispatchers Add CGI/FastCGI/mod_ruby dispatches code to generated application skeleton Default: false --freeze Freeze Rails in vendor/rails from the gems generating the skeleton Default: false -m, --template=path Use an application template that lives at path (can be a filesystem path or URL). Default: (none) Rails Info: -v, --version Show the Rails version number and quit. -h, --help Show this help message and quit. General Options: -p, --pretend Run but do not make any changes. -f, --force Overwrite files that already exist. -s, --skip Skip files that already exist. -q, --quiet Suppress normal output. -t, --backtrace Debugging: show backtrace on errors. -c, --svn Modify files with subversion. (Note: svn must be in path) -g, --git Modify files with git. (Note: git must be in path) Description: The 'rails' command creates a new Rails application with a default directory structure and configuration at the path you specify. Example: rails ~/Code/Ruby/weblog This generates a skeletal Rails installation in ~/Code/Ruby/weblog. See the README in the newly created application to get going. gokul@gokul-laptop:~$
The app folder is created with all the proper folders. The problem starts with the following commands...
gokul@gokul-laptop:~$ sudo gem install bundler [sudo] password for gokul: Successfully installed bundler-0.9.24 1 gem installed Installing ri documentation for bundler-0.9.24... Installing RDoc documentation for bundler-0.9.24... gokul@gokul-laptop:~$ bundle install Could not locate Gemfile gokul@gokul-laptop:~$
Coming to the database, the default sqlite3 seems to have installed correctly.
gokul@gokul-laptop:~$ sqlite3 SQLite version 3.6.16 Enter ".help" for instructions Enter SQL statements terminated with a ";" sqlite
The "Welcome aboard" page cannot be found at http://localhost:3000 after executing the following commands...
gokul@gokul-laptop:~/Desktop$ rails blog create create app/controllers create app/helpers create app/models create app/views/layouts create config/environments create config/initializers create config/locales create db create doc create lib create lib/tasks create log create public/images create public/javascripts create public/stylesheets create script/performance create test/fixtures create test/functional create test/integration create test/performance create test/unit create vendor create vendor/plugins create tmp/sessions create tmp/sockets create tmp/cache create tmp/pids create Rakefile create README create app/controllers/application_controller.rb create app/helpers/application_helper.rb create config/database.yml create config/routes.rb create config/locales/en.yml create db/seeds.rb create config/initializers/backtrace_silencers.rb create config/initializers/inflections.rb create config/initializers/mime_types.rb create config/initializers/new_rails_defaults.rb create config/initializers/session_store.rb create config/environment.rb create config/boot.rb create config/environments/production.rb create config/environments/development.rb create config/environments/test.rb create script/about create script/console create script/dbconsole create script/destroy create script/generate create script/runner create script/server create script/plugin create script/performance/benchmarker create script/performance/profiler create test/test_helper.rb create test/performance/browsing_test.rb create public/404.html create public/422.html create public/500.html create public/index.html create public/favicon.ico create public/robots.txt create public/images/rails.png create public/javascripts/prototype.js create public/javascripts/effects.js create public/javascripts/dragdrop.js create public/javascripts/controls.js create public/javascripts/application.js create doc/README_FOR_APP create log/server.log create log/production.log create log/development.log create log/test.log gokul@gokul-laptop:~/Desktop$ cd blog gokul@gokul-laptop:~/Desktop/blog$ rake db:create (in /home/gokul/Desktop/blog) gokul@gokul-laptop:~/Desktop/blog$ rails server create create app/controllers create app/helpers create app/models create app/views/layouts create config/environments create config/initializers create config/locales create db create doc create lib create lib/tasks create log create public/images create public/javascripts create public/stylesheets create script/performance create test/fixtures create test/functional create test/integration create test/performance create test/unit create vendor create vendor/plugins create tmp/sessions create tmp/sockets create tmp/cache create tmp/pids create Rakefile create README create app/controllers/application_controller.rb create app/helpers/application_helper.rb create config/database.yml create config/routes.rb create config/locales/en.yml create db/seeds.rb create config/initializers/backtrace_silencers.rb create config/initializers/inflections.rb create config/initializers/mime_types.rb create config/initializers/new_rails_defaults.rb create config/initializers/session_store.rb create config/environment.rb create config/boot.rb create config/environments/production.rb create config/environments/development.rb create config/environments/test.rb create script/about create script/console create script/dbconsole create script/destroy create script/generate create script/runner create script/server create script/plugin create script/performance/benchmarker create 
script/performance/profiler create test/test_helper.rb create test/performance/browsing_test.rb create public/404.html create public/422.html create public/500.html create public/index.html create public/favicon.ico create public/robots.txt create public/images/rails.png create public/javascripts/prototype.js create public/javascripts/effects.js create public/javascripts/dragdrop.js create public/javascripts/controls.js create public/javascripts/application.js create doc/README_FOR_APP create log/server.log create log/production.log create log/development.log create log/test.log gokul@gokul-laptop:~/Desktop/blog$
Hope someone can help me with this...

    Read the article

• Is tcerl for Mnesia production ready? Are there any alternatives?

    - by Sanoj
I would like to create a scalable web service using Mnesia as the database. However, Mnesia by default isn't scalable for persistent storage, since it uses Dets (which has a 2GB limit) as the backend. I have seen discussions about extending Mnesia with MnesiaEx and using tcerl as the backend. It sounds good and has shown good performance. However, I have seen in a talk about Tokyo Cabinet and CouchDB with Mnesia that there are some issues:
issues with durability
issues with memory leaks
issues with crashes
Is tcerl + Mnesia really production ready? Are there any alternatives? How do companies overcome these issues if they use Mnesia in bigger systems? Is there a working solution with Mnesia and Tokyo Tyrant that works better?

    Read the article

  • Oracle Delivers Latest Release of Oracle Enterprise Manager 12c

    - by Scott McNeil
    Richer Service Catalog for Database and Middleware as a Service; Enhanced Database and Middleware Management Help Drive Enterprise-Scale Private Cloud Adoption News Summary IT organizations are adopting private clouds as a stepping-stone to business-driven, self-service IT. Successful implementations hinge on the ability to efficiently deploy and manage cloud services at enterprise scale. Having a complete cloud management solution integrated with an enterprise-class technology stack is a fundamental requirement for IT. Oracle Enterprise Manager 12c Release 4 meets that requirement by helping businesses become more agile and responsive, while reducing cost, complexity, and risk. News Facts Oracle Enterprise Manager 12c Release 4, available today, lets organizations rapidly adopt Oracle-based, enterprise-scale private clouds. New capabilities provide advanced technology stack management, secure database administration, and enterprise service governance, enabling Oracle customers and partners to maximize database and application performance and drive innovation using self-service IT platforms. The enhancements have been driven by customers and the growing Oracle Enterprise Manager Ecosystem, comprised of more than 750 Oracle PartnerNetwork (OPN) Specialized partners. Oracle and its partners and customers have built over 140 plug-ins and connectors for Oracle Enterprise Manager. Watch the video highlights. Automation for Broader Cloud Services Oracle Enterprise Manager 12c Release 4 allows for a rapid enterprise-wide adoption of database, middleware and infrastructure services in the private cloud, driven by an enhanced API-enabled service catalog. The release features “push button” style provisioning of complete environments such as SOA and Oracle Active Data Guard, and fast data cloning that enables rapid deployment and testing of enterprise applications. Out-of-the-box capabilities to detect data and configuration vulnerabilities provide enhanced cloud service governance along with greater operational control through a flexible and extensible showback mechanism. Enhanced Database Management A new performance warehouse enables predictive database diagnostics and trend analysis and helps identify database problems before they occur. New enterprise data-governance capabilities enhance security by helping systematically discover and protect sensitive data. Step-by-step orchestration of upgrades with the ability to rollback changes enables faster adoption of Oracle Database 12c. Expanded Fusion Middleware Management A new consolidated view of Oracle Fusion Middleware 12c deployments with a guided management capability lets administrators apply best management practices to diverse middleware environments and identify performance issues quickly. A Java VM Diagnostics as a Service feature allows governed access to diagnostics data for IT workers across multiple disciplines for accelerated DevOps resolutions of defects and performance optimization. New automated provisioning for SOA lets middleware administrators perform mass SOA provisioning with ease. Superior Enterprise-Grade Management Private roles and preferred credentials have been added to Oracle Enterprise Manager to provide additional fine-grained security for organizations with complex access control requirements. A new security console provides a single point of control for managing the security of Oracle Enterprise Manager environments. 
Support for the latest industry standard SNMP v3 protocol, including encryption, enables more secure heterogeneous management. “Smart monitoring” adapts to observed environmental changes and adds self-management capabilities to help Oracle Enterprise Manager run at peak performance, while demanding less IT supervision. Supporting Quotes “Lawrence Livermore National Laboratory has a strong tradition of technology breakthroughs and leadership. As a member of Oracle’s Customer Advisory Board for Oracle Enterprise Manager, we have consistently provided feedback and guidance in the areas of enterprise-scale cloud, self-diagnosability, and secure administration for the product,” said Tim Frazier, CIO, NIF and Photon Sciences, Lawrence Livermore National Laboratory. “We intend to take advantage of the Release 4 features that support enterprise-scale availability and fine-grained security capabilities for private cloud deployments.” “IDC's most recent CloudTrack survey shows that most enterprises plan to adopt hybrid cloud architectures over the next three years,” said Mary Johnston Turner, Research Vice President, Enterprise System Management Software, IDC. “These organizations plan to deploy a wide range of workloads into cloud environments including mission critical database and middleware services that require high levels of fault tolerance and disaster recovery. Such capabilities were traditionally custom configured for each application but cloud offers the possibility to incorporate such properties within the service definition, enabling organizations to adopt cloud without compromise. With the latest release of Oracle Enterprise Manager 12c, Oracle is providing customers with an out-of-the-box experience for delivering highly-resilient cloud services for databases and applications.” “Since its inception, Oracle has been leading the way in innovative, scalable and high performance solutions for the enterprise. With this release of Oracle Enterprise Manager, we are extending this leadership by providing enterprise-scale capabilities for planning, delivering, and managing private clouds. We call this ‘zero-to-cloud – accelerated.’ These enhancements help our customers to expedite their adoption of cloud computing and prepares them for the next generation of self-service IT,” said Prakash Ramamurthy, senior vice president of Systems and Cloud Management at Oracle. Supporting Resources Oracle Enterprise Manager 12c Video: Cerner Delivers High Performance Private Cloud Video: BIAS Achieves Outstanding Results with Private Cloud Press Release Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter Download the Oracle Enterprise Manager 12c Mobile app

    Read the article

  • xmlrpc client call in python does not come back

    - by Jack Ha
Using Python 2.6.4 on Windows.
With the following script I want to test a certain XML-RPC server. I call a non-existent function and hope for a traceback with an error. Instead, the call does not return. What could be the cause?
import xmlrpclib
s = xmlrpclib.Server("http://127.0.0.1:80", verbose=True)
s.functioncall()
The output is:
send: 'POST /RPC2 HTTP/1.0\r\nHost: 127.0.0.1:80\r\nUser-Agent: xmlrpclib.py/1.0.1 (by www.pythonware.com)\r\nContent-Type: text/xml\r\nContent-Length: 106\r\n\r\n'
send: "<?xml version='1.0'?>\n<methodCall>\n<methodName>functioncall</methodName>\n<params>\n</params>\n</methodCall>\n"
reply: 'HTTP/1.1 200 OK\r\n'
header: Content-Type: text/xml
header: Cache-Control: no-cache
header: Content-Length: 376
header: Date: Tue, 30 Mar 2010 13:27:21 GMT
body: '<?xml version="1.0"?>\r\n<methodResponse>\r\n<fault>\r\n<value>\r\n<struct>\r\n<member>\r\n<name>faultCode</name>\r\n<value><i4>1</i4></value>\r\n</member>\r\n<member>\r\n<name>faultString</name>\r\n<value><string>PVSS00ctrl (2), 2010.03.30 15:27:21.395, CTRL, SEVERE, 72, Function not defined, functioncall, , \n</string></value>\r\n</member>\r\n</struct>\r\n</value>\r\n</fault>\r\n</methodResponse>\r\n'
(Here the program hangs and does not return until I kill the server.)
Edit: the server is written in C++, using its own XML-RPC library.

    Read the article

< Previous Page | 243 244 245 246 247 248 249 250 251 252 253 254  | Next Page >