Search Results

Search found 12072 results on 483 pages for 'x86 servers'.


  • ADDS: 1 - Introducing and designing

    - by marc dekeyser
What is ADDS?

Every Microsoft-oriented infrastructure in today's enterprises depends largely on Microsoft's Active Directory. It is the foundation stone on which all other products (Exchange, update services, Office Communicator, the System Center family, etc.) rely to get their information. And that is just looking at it from an infrastructure perspective. A well designed and implemented Active Directory makes life a lot easier for IT personnel and users alike; centralised management and the abilities it opens up are ample.

But what is Active Directory Domain Services? We can look at ADDS as a centralised directory containing all the objects your infrastructure runs on in one way or another. Since it is a Microsoft product you'll obviously not be seeing Linux or Mac clients listed in here (exceptions exist), but in general we can say it contains everything your company has in place in one form or another.

The domain name service

The domain name service (or DNS for short) translates readable, easy-to-understand names into IP addresses (the identifiers for each computer in your domain). This service is a prerequisite for ADDS to work, and having wrong records in a DNS server will make any ADDS service fail. Generally speaking the DNS service will run on the same server as the ADDS service, but it is worthwhile to remember that this is not necessary. You could, for example, run your DNS services on a Linux box (which would need special preparation to host an ADDS-integrated DNS zone) and run the ADDS service off another box…

Where to start?

If the aim is to put in place a first-time implementation of ADDS in your enterprise, there are plenty of things to consider depending on what you are going to do in the long run. Great care has to be taken when first designing and implementing, as setting it up wrong will cause headaches down the line. It is for that reason that I like to build from the bottom up: start with a generic installation of ADDS (which will still differ for every client) and make it adaptable for future services which can hook into the existing environment. Adapting existing environments is out of scope for this document (and series), although it is possible to take the pointers and change your existing environment to run in a smoother manner. Take great care when changing things, as one small slip of the hand can give you a forest-wide failure…

Whenever starting with an ADDS deployment I ask the client the following questions: What are your long-term plans and goals? How flexible do you want it? Are you currently Linux-heavy and want to keep this, or can we go for an all-Microsoft design?
Those three questions should give some sort of indication of what direction can be taken and whether the client has thought about some things themselves :).

The technical side of things

What is next to consider is what kind of infrastructure is already in place. For this series I'll keep it simple and introduce some general concepts without going into depth on integrating ADDS with other DNS services. Building from the ground up means we need to consider the layers on which our infrastructure will rely. In my view that goes as follows: the network (WAN/LAN links and physical sites), DNS namespacing, whether to keep everything in one domain or split it up into different domains/forests, and security (both for ADDS and physical sites).

The network side of things

Looking at how the network is currently set up can potentially teach us a great deal about the client. Do they have multiple physical sites? What network speeds exist between these sites, etc.? Depending on this information we will design our site links (which control replication) in future stages.

DNS namespacing

Maybe the single most interesting thing to know is what the domain will be named (ADDS will need a DNS domain with the same name) and where this will be hosted. Note that Active Directory can be set up with a single-label name (contoso instead of contoso.com), but it is highly recommended never to do this. If you do end up with a domain like that for some reason, a lot of services are going to give you good grief in the future (Exchange being one of them). So one of the best practices is to always use a two-part name (contoso.com or contoso.lan, for example).

Internal namespace

An internal namespace is just what it sounds like: you have a DNS domain which is different internally from what the client uses as an external namespace, e.g. contoso.com as the external name (out on the internet) and contoso.lan on the internal network. This setup has the advantage of more obscurity from the internet on the DNS side, but it will require additional work to publish services to the web.

External namespace

Much like the internal namespace, only here the internal namespace of the company does not differ from what is known on the internet. In this implementation you would host your own DNS servers for the external domain inside the network. In other words, any external computer doing a DNS lookup would contact your internal DNS server for the resolution. Generally speaking this setup is a bad idea from the security side of things.

Split DNS

Whilst using an external namespace design is fairly easy, it involves a lot of security risks. Opening up your ADDS DNS servers for lookups exposes your entire network to the internet and should be avoided at any cost. That is where the "split DNS" design comes in. In this setup you would still have the same namespace internally and externally, but you would be using different DNS servers for lookups on the external network, which have no records of your internal resources unless you explicitly publish them. (A small lookup sketch at the end of this post illustrates the difference.)

All in one or not?

In determining your Active Directory design you can look at the following possibilities: single forest, single domain; single forest, multiple domains; multiple forests, multiple domains. I've listed the possibilities in increasing order of administrative magnitude. Microsoft recommends using a single forest, single domain in as many situations as possible.
It is, however, always possible that you require your services to be separated from your users in a resource forest, with trusts set up between the different forests. To start out I would go with the single forest design to avoid complexity, unless there are strict requirements to have multiple forests.

Security

What kind of security is required on the domain, and does this reflect the physical security on the sites? Not every client can afford to have a domain controller in a secluded server room on every site, and it is exactly for that reason that Microsoft introduced the RODC (read-only domain controller). An RODC is a domain controller that has been limited in functionality: in essence it will only cache the data you explicitly tell it to cache, and in the case of a DC compromise (it being stolen) only a limited number of accounts will be affected.

Th- Th- Th- That's all folks! Well, at least for now! In future editions of this series we'll be walking through the different tasks that need to be done and the thought that needs to be put into them. But for all editions we'll be going from the concept of running a single forest, single domain with a split DNS setup… See you next time!
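As a small aside on the DNS designs above: a quick way to sanity-check a split DNS setup is to ask the internal and the external DNS servers for the same name and compare the answers. The sketch below does exactly that in Python; it assumes the third-party dnspython package, and the server addresses and the www.contoso.com name are placeholders for illustration only.

    # Minimal split-DNS sanity check (illustrative sketch, not production code).
    # Assumes the third-party "dnspython" package; server IPs and names are placeholders.
    import dns.resolver

    def lookup(name, nameserver):
        """Resolve an A record for `name` against one specific DNS server."""
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [nameserver]
        try:
            # dnspython >= 2.0 uses resolve(); older releases use query()
            return [rr.address for rr in resolver.resolve(name, "A")]
        except dns.resolver.NXDOMAIN:
            return []

    internal_view = lookup("www.contoso.com", "10.0.0.10")     # internal, ADDS-integrated DNS
    external_view = lookup("www.contoso.com", "203.0.113.53")  # external-facing DNS
    print("internal:", internal_view)
    print("external:", external_view)
    # In a split DNS design the two answers will usually differ: the internal zone
    # knows all internal resources, the external zone only what you explicitly publish.

In a split DNS design you would expect the two views to return different records for anything you have not explicitly published externally.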

    Read the article

  • How to Audit and Monitor BI Publisher Reports Access?

    - by kanichiro.nishida
Do you know who is accessing which report, and when, in your reporting environment? As you deliver BI Publisher reports to the production environment and your users start using them as part of their daily business operations, you might start asking such questions. With compliance becoming an integral part of any business requirement, auditing your reporting environment is also becoming one of the most critical and hottest agenda items in today's enterprise reporting deployments. I also believe that auditing the reporting environment is not just for compliance, but also a way to understand how your users are using the reports, so you can improve the user reporting experience.

BI Publisher introduced an enterprise-level auditing feature with its 11g release, integrated with the Oracle Fusion Middleware Audit Framework, which comes out of the box with the installation. Yes, this is another great example of the benefit of its tight integration with Fusion Middleware, introduced with the BI Publisher 11g release.

What Information Can I Know about our Reporting Environment?

With this new auditing feature you can now gain the following insights:

When a particular user logs in or out
Which report is accessed, by whom, when, and how
How long it takes to process a particular report

Yes, it's all there. This is great news for 10g users, right? I used to be one of them, working with many different IT organizations, and was craving this, but it's here now with 11g!

How Can I Access the Auditing Information?

With the Fusion Middleware Audit Framework, BI Publisher feeds such information either to a log file or to a database. If you decide to get the data into the database then, of course, you can use BI Publisher to report on, publish, or visualize the data to gain more insights. One thing, though: feeding the data to the database requires a few extra steps, which I'll cover later. Regardless of whether it's the log file or the database that stores the auditing data, first you need to enable the auditing feature, which is not enabled by default. So, let's take a look at how to enable it.

How to Enable the Auditing Feature?

Here is a quick list of the steps:

Enable auditing-related properties in the BI Publisher configuration file
Copy the component_events.xml file to the Fusion Middleware Audit Framework's location
Enable the audit policy with Fusion Middleware Control (Enterprise Manager)
Restart the WebLogic Server

Enable auditing-related properties in the BI Publisher configuration file

Open the xmlp-server-config.xml file, which is located under the $BI_HOME/user_projects/domains/bifoundation_domain/config/bipublisher/repository/Admin/Configuration directory. Set the following three property values to 'true':

AUDIT_ENABLED
MONITORING_ENABLED
AUDIT_JPS_INTEGRATION

The 'AUDIT_JPS_INTEGRATION' property is not in the file by default, so you need to add it. Here is an example of how the xmlp-server-config.xml file looks after the modification.
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<xmlpConfig xmlns="http://xmlns.oracle.com/oxp/xmlp">
   <property name="SAW_SERVER" value="adc6160510"/>
   <property name="SAW_SESSION_TIMEOUT" value="90"/>
   <property name="DEBUG_LEVEL" value="exception"/>
   <property name="SAW_PORT" value="7001"/>
   <property name="SAW_PASSWORD" value=""/>
   <property name="SAW_PROTOCOL" value="http"/>
   <property name="SAW_VERSION" value="v6"/>
   <property name="SAW_USERNAME" value=""/>
   <property name="SAW_URL_SUFFIX" value="analytics/saw.dll"/>
   <property name="MONITORING_ENABLED" value="true"/>
   <property name="MONITORING_DEFAULT_HISTORY_SIZE" value="30"/>
   <property name="AUDIT_ENABLED" value="true"/>
   <property name="JSESSION_RESET_DISABLED" value="true"/>
   <property name="SECURITY_MODEL" value="ORACLE_AS_JPS"/>
   <property name="AUDIT_JPS_INTEGRATION" value="true"/>
</xmlpConfig>

Copy the component_events.xml file to the Audit Framework's location

There is an audit-related configuration file provided by BI Publisher that needs to be copied to the Audit Framework location.

1. Go to the following directory: $BI_HOME/oracle_common/modules/oracle.iau_11.1.1/components
2. Create a directory called 'xmlpserver'.
3. Copy the component_events.xml file from /user_projects/domains/bifoundation_domain/config/bipublisher/repository/Admin/Audit to the newly created 'xmlpserver' directory.

Enable the audit policy with Fusion Middleware Control (EM)

Now you can set a level of auditing for each of BI Publisher's auditing types by using Fusion Middleware Control (a.k.a. Enterprise Manager).

1. Log in to the Fusion Middleware Control UI at http://hostname:port/em (e.g. reporting.oracle.com:7001/em).
2. Access the audit policy configuration UI from the menu: under WebLogic Domain, right-click bifoundation_domain, select Security and then click Audit Policy.
3. Set the audit level for BI Publisher. While you can select 'Custom' to set a customized level of auditing for each component, I'm selecting 'Medium' for this exercise.

Restart the WebLogic Server

After all the above settings, you need to restart the WebLogic Server instance for the changes to take effect. If you're on Windows you can simply do this by selecting 'Stop BI Servers' and 'Start BI Servers' from the Start menu. If you're on Linux you can run 'stopWebLogic.sh' and 'startWebLogic.sh', which can be found under $BI_HOME/user_projects/domains/bifoundation_domain/bin.

Start Auditing!
Now, assuming that you have completed the above steps successfully, from this point on any reporting activity should be audited and stored in the auditing log file, which can be found at $BI_HOME/user_projects/domains/bifoundation_domain/servers/AdminServer/logs/auditlogs/xmlpserver/audit.log

And here is a sample of the log file:

2011-02-18 02:25:49.928 "" "ReportRendering" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000022,0" - - - - "bipublisher(11.1.1)" "ReportExecution" "200" "" "/Sample Lite/Published Reporting/Reports/Balance Letter.xdo" "pdf" "RTF Corp Styles" "en_US" - - - - - - - - - - - - - - 86608512 486989824 24517 169 - - -
2011-02-18 02:25:49.929 "steve.jobs" "ReportRequest" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000022,0" - - - - "bipublisher(11.1.1)" "ReportAccess" "200" "" "" "pdf" "RTF Corp Styles" - - - true - - - - - - - - - - - - - - - - - -
2011-02-18 03:25:49.554 "" "ReportDataProcess" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000022,0" - - - - "bipublisher(11.1.1)" "ReportExecution" "260" "" "/Sample Lite/Published Reporting/Reports/Balance Letter.xdo" - - - - - - - - - - - - - - - - - 34980200 554033152 - 134 - - -
2011-02-18 03:25:50.282 "" "ReportRendering" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000022,0" - - - - "bipublisher(11.1.1)" "ReportExecution" "263" "" "/Sample Lite/Published Reporting/Reports/Balance Letter.xdo" "pdf" "RTF Corp Styles" "en_US" - - - - - - - - - - - - - - 16158944 554033152 24517 503 - - -
2011-02-18 03:25:50.282 "steve.jobs" "ReportRequest" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000022,0" - - - - "bipublisher(11.1.1)" "ReportAccess" "263" "" "" "pdf" "RTF Corp Styles" - - - true - - - - - - - - - - - - - - - - - -
2011-02-18 03:30:00.448 "barack.obama" "UserLogin" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000406,0" - - - - "bipublisher(11.1.1)" "UserSession" "26" "" - - - - - - - - - - - - - - - - - - - - - - - - -

From the above log file you can tell that a user 'steve.jobs' was running reports like 'Balance Letter' on 2/18, and another user 'barack.obama' logged into the system at 3:30 on the same day. Yes, every login and logout will be recorded, and every report access will be recorded in this log file.

Now, looking at this text file to understand what's going on is pretty overwhelming. And accessing this log file, which is located on the server's file system where BI Publisher/WebLogic Server is running, is another challenge in typical deployment scenarios. And that's where the database storage option for the auditing data comes into the picture. I'll talk about this tomorrow, so stay tuned!
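Until the database storage option is in place, even a few lines of scripting make the raw audit log easier to scan. The following Python sketch pulls the timestamp, user and event type out of each line; the field positions are only an assumption based on the sample entries above, so verify them against your own audit.log before relying on it.

    # Rough sketch: summarise BI Publisher audit.log entries.
    # Field positions are assumptions based on the sample shown above.
    import csv

    def summarise(path):
        events = []
        with open(path, newline="") as log:
            # The log is space-separated with double-quoted fields, so the csv
            # module with a space delimiter handles the quoting for us.
            for row in csv.reader(log, delimiter=" ", skipinitialspace=True):
                if len(row) < 4:
                    continue
                timestamp = f"{row[0]} {row[1]}"
                user = row[2] or "<system>"
                event = row[3]
                events.append((timestamp, user, event))
        return events

    for timestamp, user, event in summarise("audit.log"):
        print(f"{timestamp}  {user:<15}  {event}")

Run against the sample above, this prints one line per event, including the ReportRequest entries for steve.jobs and the UserLogin for barack.obama.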

    Read the article

  • Anatomy of a .NET Assembly - PE Headers

    - by Simon Cooper
Today, I'll be starting a look at what exactly is inside a .NET assembly - how the metadata and IL are stored, how Windows knows how to load it, and what all those bytes are actually doing. First of all, we need to understand the PE file format.

PE files

.NET assemblies are built on top of the PE (Portable Executable) file format that is used for all Windows executables and dlls, which itself is built on top of the MSDOS executable file format. The reason for this is that when .NET 1 was released, it wasn't a built-in part of the operating system like it is nowadays. Prior to Windows XP, .NET executables had to load like any other executable, and had to execute native code to start the CLR to read & execute the rest of the file. However, starting with Windows XP, the operating system loader knows natively how to deal with .NET assemblies, rendering most of this legacy code & structure unnecessary. It is still part of the spec, and so is part of every .NET assembly. The result of this is that there are a lot of structure values in the assembly that simply aren't meaningful in a .NET assembly, as they refer to features that aren't needed. These are either set to zero or to certain pre-defined values, specified in the CLR spec. There are also several fields that specify the size of other data structures in the file, which I will generally be glossing over in this initial post.

Structure of a PE file

Most of a PE file is split up into separate sections; each section stores different types of data. For instance, the .text section stores all the executable code; .rsrc stores unmanaged resources, .debug contains debugging information, and so on. Each section has a section header associated with it; this specifies whether the section is executable, read-only or read/write, whether it can be cached... When an exe or dll is loaded, each section can be mapped into a different location in memory as the OS loader sees fit. In order to reliably address a particular location within a file, most file offsets are specified using a Relative Virtual Address (RVA). This specifies the offset from the start of each section, rather than the offset within the executable file on disk, so the various sections can be moved around in memory without breaking anything. The mapping from RVA to file offset is done using the section headers, which specify the range of RVAs which are valid within that section. For example, if the .rsrc section header specifies that the base RVA is 0x4000, and the section starts at file offset 0xa00, then an RVA of 0x401d (offset 0x1d within the .rsrc section) corresponds to a file offset of 0xa1d. Because each section has its own base RVA, each valid RVA has a one-to-one mapping with a particular file offset.

PE headers

As I said above, most of the header information isn't relevant to .NET assemblies. To help show what's going on, I've created a diagram identifying all the various parts of the first 512 bytes of a .NET executable assembly. I've highlighted the relevant bytes that I will refer to in this post. Bear in mind that all numbers are stored in the assembly in little-endian format; the hex number 0x0123 will appear as 23 01 in the diagram.

The first 64 bytes of every file is the DOS header. This starts with the magic number 'MZ' (0x4D, 0x5A in hex), identifying this file as an executable file of some sort (an .exe or .dll). Most of the rest of this header is zeroed out. The important part of this header is at offset 0x3C - this contains the file offset of the PE signature (0x80).
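To make the DOS header part concrete, here is a small Python sketch that reads just that much of a file: it checks the 'MZ' magic, follows the pointer at offset 0x3C and confirms the 'PE\0\0' signature. It is only an illustration of the layout described above; the file name is a placeholder and error handling is minimal.

    # Minimal sketch: follow the DOS header to the PE signature.
    # The file name is a placeholder; real code would need more error handling.
    import struct

    def find_pe_signature(path):
        with open(path, "rb") as f:
            dos_header = f.read(64)             # the DOS header is the first 64 bytes
            if dos_header[:2] != b"MZ":
                raise ValueError("not an MZ executable")
            # Offset 0x3C holds the file offset of the PE signature (little-endian dword).
            pe_offset = struct.unpack_from("<I", dos_header, 0x3C)[0]
            f.seek(pe_offset)
            if f.read(4) != b"PE\x00\x00":
                raise ValueError("PE signature not found")
            return pe_offset

    print(hex(find_pe_signature("MyAssembly.exe")))   # typically 0x80 for .NET assemblies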
Between the DOS header & PE signature is the DOS stub - this is a stub program that simply prints out 'This program cannot be run in DOS mode.\r\n' to the console. I will be having a closer look at this stub later on.

The PE signature starts at offset 0x80, with the magic number 'PE\0\0' (0x50, 0x45, 0x00, 0x00), identifying this file as a PE executable, followed by the PE file header (also known as the COFF header). The relevant field in this header is in the last two bytes, and it specifies whether the file is an executable or a dll; bit 0x2000 is set for a dll.

Next up are the PE standard fields, which start with a magic number of 0x010b for x86 and AnyCPU assemblies, and 0x20b for x64 assemblies. Most of the rest of the fields are to do with the CLR loader stub, which I will be covering in a later post. After the PE standard fields come the NT-specific fields; again, most of these are not relevant for .NET assemblies. The one that is relevant is the highlighted Subsystem field, which specifies whether this is a GUI or console app - 0x20 for a GUI app, 0x30 for a console app.

Data directories & section headers

After the PE and COFF headers come the data directories; each directory specifies the RVA (first 4 bytes) and size (next 4 bytes) of various important parts of the executable. The only relevant ones are the 2nd (Import table), 13th (Import Address table), and 15th (CLI header). The Import and Import Address tables are only used by the startup stub, so we will look at those later on. The 15th points to the CLI header, where the CLR-specific metadata begins. After the data directories come the section headers, one for each section in the file. Each header starts with the section's ASCII name, null-padded to 8 bytes. Again, most of each header is irrelevant, but I've highlighted the base RVA and file offset in each header. In the diagram, you can see the following sections:

.text: base RVA 0x2000, file offset 0x200
.rsrc: base RVA 0x4000, file offset 0xa00
.reloc: base RVA 0x6000, file offset 0x1000

The .text section contains all the CLR metadata and code, and so is by far the largest in .NET assemblies. The .rsrc section contains the data you see in the Details page of the right-click file properties dialog, but is otherwise unused. The .reloc section contains address relocations, which we will look at when we study the CLR startup stub.

What about the CLR?

As you can see, most of the first 512 bytes of an assembly are largely irrelevant to the CLR, and only a few bytes specify needed things like the bitness (AnyCPU/x86 or x64), whether this is an exe or dll, and the type of app this is. There are some bytes that I haven't covered that affect the layout of the file (e.g. the file alignment, which determines where in a file each section can start). These values are pretty much constant in most .NET assemblies, and don't affect the CLR data directly.

Conclusion

To summarize, the important data in the first 512 bytes of a file is:

DOS header. This contains a pointer to the PE signature.
DOS stub, which we'll be looking at in a later post.
PE signature.
PE file header (aka COFF header). This specifies whether the file is an exe or a dll.
PE standard fields. These specify whether the file is AnyCPU/32-bit or 64-bit.
PE NT-specific fields. These specify what type of app this is, if it is an app.
Data directories. The 15th entry (at offset 0x168) contains the RVA and size of the CLI header inside the .text section.
Section headers. These are used to map between RVA and file offset. The important one is .text, which is where all the CLR data is stored.

In my next post, we'll start looking at the metadata used by the CLR directly, which is all inside the .text section.
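To tie the header pieces together, the sketch below continues past the PE signature: it reads the section count and characteristics from the COFF header, walks the section headers, and converts an RVA into a file offset the same way as the .rsrc example earlier (base RVA 0x4000, file offset 0xa00, so RVA 0x401d maps to 0xa1d). The field offsets follow the PE/COFF layout described in this post; treat it as an illustrative sketch rather than a complete parser, and the file name as a placeholder.

    # Sketch: walk the section headers and map an RVA to a file offset.
    # Follows the PE/COFF layout described above; not a full parser.
    import struct

    def rva_to_file_offset(path, rva):
        with open(path, "rb") as f:
            data = f.read()
        pe_offset = struct.unpack_from("<I", data, 0x3C)[0]        # from the DOS header
        # COFF header: 4-byte "PE\0\0" signature, then 20 bytes of fixed fields.
        num_sections = struct.unpack_from("<H", data, pe_offset + 6)[0]
        opt_header_size = struct.unpack_from("<H", data, pe_offset + 20)[0]
        characteristics = struct.unpack_from("<H", data, pe_offset + 22)[0]
        is_dll = bool(characteristics & 0x2000)                    # bit 0x2000 marks a dll
        # Section headers (40 bytes each) start right after the optional header.
        section_table = pe_offset + 24 + opt_header_size
        for i in range(num_sections):
            base = section_table + i * 40
            name = data[base:base + 8].rstrip(b"\x00").decode()
            virtual_address, raw_size, raw_pointer = struct.unpack_from("<4xIII", data, base + 8)
            if virtual_address <= rva < virtual_address + raw_size:
                return name, is_dll, rva - virtual_address + raw_pointer
        raise ValueError("RVA does not fall inside any section")

    # With .rsrc at base RVA 0x4000 and file offset 0xa00, RVA 0x401d maps to 0xa1d.
    print(rva_to_file_offset("MyAssembly.exe", 0x401d))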

    Read the article

  • Setup Guide for updating local system and the repository with the incremental Solaris 11.1 SRU

    - by Gurubalan
This guide covers the steps to implement the following setup:

I. Updating the local system from Solaris 11.1 to Solaris 11.1 SRU 16.5
II. Setting up the local system as an IPS Repository Server (HTTP interface)
III. Updating the local repository with the incremental Solaris 11.1 SRU 16.5

I. Updating the local system from Solaris 11.1 to Solaris 11.1 SRU 16.5

We assume that the local system is currently installed with Solaris 11.1 GA and that the system doesn't have internet connectivity.

What I have:

1. The two parts of the full repo iso file downloaded from http://www.oracle.com/technetwork/server-storage/solaris11/downloads/index.html. Both files are concatenated into a single file using the following command:
   $ cat sol-11_1-repo-full.iso-a sol-11_1-repo-full.iso-b > sol-11_1-repo-full.iso
   I suggest verifying the downloaded file against its md5 checksum value [http://download.oracle.com/otn/solaris/11_1/md5sum.txt] using the following command:
   $ digest -a md5 <file-name>
   The output of this command should match the original checksum value for that file. (A short verification sketch at the end of this post shows the same check scripted.)
2. The incremental repo sol-11_1_16_5_0-incr-repo.iso downloaded from MOS [Patch 18269379: ORACLE SOLARIS 11.1.16.5.0 REPO ISO IMAGE (SPARC/X86 (64-BIT))]. You can get the checksum value of the incremental repo iso by ticking the check box "show digest details" when you download the file.
3. The local system IP is 192.168.10.10, and port 81 is reserved for the repo server.

Please note that this repo file (either full or incremental) is common to both SPARC and X86 (64-bit).

Steps to update the local system:

1. Mount the s11.1 full repo iso on /mnt
   $ mount -F hsfs /soft/sol-11_1-repo-full.iso /mnt
2. Set the pkg publisher to the full repo source
   $ pkg set-publisher -g file:///mnt/repo solaris
3. Perform the update of the packages
   $ pkg update

II. Setting up the local system (Oracle Solaris 11.1) as an IPS Repository Server (HTTP interface)

Please note that we have already mounted the full repo iso at /mnt.

1. Copy /mnt permanently to the disk location /s11.1
   # zfs create -o atime=off -o mountpoint=/s11.1 rpool/s11.1
   # rsync -aP /mnt/* /s11.1
2. Unmount /mnt
   # umount /mnt
3. To allow clients to access the local repository via HTTP, enable the application/pkg/server Service Management Facility (SMF) service.
   svccfg -s application/pkg/server setprop pkg/inst_root=<data_source>/repo
   e.g.: $ svccfg -s application/pkg/server setprop pkg/inst_root=/s11.1/repo
4. Set the port to 81
   svccfg -s application/pkg/server setprop pkg/port=<port_number>
   e.g.: svccfg -s application/pkg/server setprop pkg/port="81"
5a. Enable the pkg/server service (if the service is disabled)
   $ svcs pkg/server
   STATE          STIME    FMRI
   disabled       19:55:03 svc:/application/pkg/server:default
   $ svcadm enable pkg/server
5b. Refresh/restart the service if it is already online
   $ svcadm refresh application/pkg/server
   $ svcadm restart application/pkg/server
6. Set the pkg publisher on the repo server and repo clients:
   pkg set-publisher -G '*' -g http://<ip>:<port> solaris
   e.g.: $ pkg set-publisher -G '*' -g 'http://192.168.10.10:81' solaris
7. Verify the Solaris 11.1 version from the repository
   $ pkgrepo list -s http://192.168.10.10:81 | grep entire
   solaris   entire     0.5.11,5.11-0.175.1.0.0.24.2:20120919T190135Z
   You will have multiple row entries if the repository is set up with incremental SRUs.

III. Updating the local repository with the incremental Solaris 11.1 SRU 16.5

1. Mount the s11.1 incremental SRU repo iso on /mnt
   $ mount -F hsfs <full_path_to>/sol-11_1_sruN_bldnum_respinnum-incr-repo.iso /mnt
   $ mount -F hsfs /soft/sol-11_1_16_5_0-incr-repo.iso /mnt
2. Update the local repository
   $ pkgrecv -s /mnt/repo -d /s11.1/repo '*'
3. Build a search index
   $ pkgrepo -s /s11.1/repo refresh
   Initiating repository refresh.
4. Refresh/restart the service
   $ svcadm refresh svc:/application/pkg/server
   $ svcadm restart svc:/application/pkg/server
5. Verify that the repo has the incremental SRU as well
   # pkgrepo list -s http://192.168.10.10:81 | grep entire
   solaris   entire      0.5.11,5.11-0.175.1.16.0.5.0:20140218T165248Z
   solaris   entire      0.5.11,5.11-0.175.1.0.0.24.2:20120919T190135Z
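As an aside on the checksum check in section I: if you prefer to script the comparison rather than eyeball the digest output, a few lines of Python do the same job as the digest command. This is only a sketch; the path and the expected value are placeholders standing in for your downloaded image and the value published in md5sum.txt.

    # Sketch: verify a downloaded repo image against its published MD5 checksum.
    # Path and expected digest are placeholders - substitute your own values.
    import hashlib

    def md5_of(path, chunk_size=1024 * 1024):
        digest = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    expected = "0123456789abcdef0123456789abcdef"      # value from md5sum.txt (placeholder)
    actual = md5_of("/soft/sol-11_1-repo-full.iso")
    print("OK" if actual == expected else f"MISMATCH: {actual}")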

    Read the article

  • Can you help me fix my broken packages?

    - by Andreas Hartmann
I would like to upgrade from 13.04 to 13.10, but some broken packages are preventing upgrade success.

grep Broken /var/log/dist-upgrade/apt.log output:

Broken libwayland-client0:amd64 Conflicts on libwayland0 [ amd64 ] < 1.0.5-0ubuntu1 > ( libs ) (< 1.1.0)
Broken libunity9:amd64 Breaks on unity-common [ amd64 ] < 7.0.0daily13.06.19~13.04-0ubuntu1 > ( gnome ) (< 7.1.2)
Broken cups-filters:amd64 Conflicts on ghostscript-cups [ amd64 ] < 9.07~dfsg2-0ubuntu3.1 > ( text )
Broken libpam-systemd:amd64 Conflicts on libpam-xdg-support [ amd64 ] < 0.2-0ubuntu2 > ( admin )
Broken libharfbuzz0a:amd64 Breaks on libharfbuzz0 [ amd64 ] < 0.9.13-1 > ( libs )
Broken libharfbuzz0a:amd64 Breaks on libharfbuzz0 [ i386 ] < 0.9.13-1 > ( libs )
Broken libunity-scopes-json-def-desktop:amd64 Conflicts on libunity-common [ amd64 ] < 6.90.2daily13.04.05-0ubuntu1 > ( gnome ) (< 7.0.7)
Broken libunity-scopes-json-def-desktop:amd64 Conflicts on libunity-common [ i386 ] < none > ( none ) (< 7.0.7)
Broken libaccount-plugin-generic-oauth:amd64 Conflicts on account-plugin-generic-oauth [ amd64 ] < 0.10bzr13.03.26-0ubuntu1.1 > ( gnome ) (< 0.10bzr13.04.30)
Broken libaccount-plugin-generic-oauth:amd64 Breaks on account-plugin-generic-oauth [ amd64 ] < 0.10bzr13.03.26-0ubuntu1.1 > ( gnome ) (< 0.10bzr13.04.30)
Broken libmutter0b:amd64 Breaks on libmutter0a [ amd64 ] < 3.6.3-0ubuntu2 > ( libs )
Broken python3-aptdaemon.pkcompat:amd64 Breaks on libpackagekit-glib2-14 [ amd64 ] < 0.7.6-3ubuntu1 > ( libs ) (<= 0.7.6-4)
Broken apache2:amd64 Conflicts on apache2.2-common [ amd64 ] < 2.2.22-6ubuntu5.1 > ( httpd )
Broken chromium-codecs-ffmpeg-extra:amd64 Conflicts on chromium-codecs-ffmpeg [ amd64 ] < 28.0.1500.71-0ubuntu1.13.04.1 -> 29.0.1547.65-0ubuntu2 > ( universe/web )
Broken unity-scope-home:amd64 Conflicts on unity-lens-shopping [ amd64 ] < 6.8.0daily13.03.04-0ubuntu1 > ( gnome )
Broken libsnmp30:amd64 Breaks on libsnmp15 [ amd64 ] < 5.4.3~dfsg-2.7ubuntu1 > ( libs )
Broken apache2.2-bin:amd64 Breaks on gnome-user-share [ amd64 ] < 3.0.4-0ubuntu1 > ( gnome ) (< 3.8.0-2~)
Broken libgjs0d:amd64 Conflicts on libgjs0c [ amd64 ] < 1.34.0-0ubuntu1 > ( libs )
Broken unity-gtk2-module:amd64 Conflicts on appmenu-gtk [ amd64 ] < 12.10.3daily13.04.03-0ubuntu1 > ( libs )
Broken lib32asound2:amd64 Depends on libasound2 [ amd64 ] < 1.0.25-4ubuntu3.1 -> 1.0.27.2-1ubuntu6 > ( libs ) (= 1.0.25-4ubuntu3.1)
Broken unity-gtk3-module:amd64 Conflicts on appmenu-gtk3 [ amd64 ] < 12.10.3daily13.04.03-0ubuntu1 > ( libs )
Broken activity-log-manager:amd64 Conflicts on activity-log-manager-common [ amd64 ] < 0.9.4-0ubuntu6.2 > ( utils )
Broken libgtksourceview-3.0-0:amd64 Depends on libgtksourceview-3.0-common [ amd64 ] < 3.6.3-0ubuntu1 -> 3.8.2-0ubuntu1 > ( libs ) (< 3.7)
Broken icaclient:amd64 Depends on lib32asound2 [ amd64 ] < 1.0.25-4ubuntu3.1 > ( libs )
Broken libunity-core-6.0-5:amd64 Depends on unity-services [ amd64 ] < 7.0.0daily13.06.19~13.04-0ubuntu1 -> 7.1.2+13.10.20131014.1-0ubuntu1 > ( gnome ) (= 7.0.0daily13.06.19~13.04-0ubuntu1)
Broken libbamf3-1:amd64 Depends on bamfdaemon [ amd64 ] < 0.4.0daily13.06.19~13.04-0ubuntu1 -> 0.5.1+13.10.20131011-0ubuntu1 > ( libs ) (= 0.4.0daily13.06.19~13.04-0ubuntu1)
Broken apache2-bin:amd64 Conflicts on apache2.2-bin [ amd64 ] < 2.2.22-6ubuntu5.1 -> 2.4.6-2ubuntu2 > ( httpd ) (< 2.3~)

Output for cat /etc/apt/sources.list /etc/apt/sources.list.d/*.list:

# deb cdrom:[Ubuntu 13.04 _Raring Ringtail_ - Release amd64 (20130424)]/ raring main restricted
# See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
# newer versions of the distribution.
deb http://de.archive.ubuntu.com/ubuntu/ raring main restricted
## Major bug fix updates produced after the final release of the
## distribution.
deb http://de.archive.ubuntu.com/ubuntu/ raring-updates main restricted
## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team. Also, please note that software in universe WILL NOT receive any
## review or updates from the Ubuntu security team.
deb http://de.archive.ubuntu.com/ubuntu/ raring universe
deb http://de.archive.ubuntu.com/ubuntu/ raring-updates universe
## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team, and may not be under a free licence. Please satisfy yourself as to
## your rights to use the software. Also, please note that software in
## multiverse WILL NOT receive any review or updates from the Ubuntu
## security team.
deb http://de.archive.ubuntu.com/ubuntu/ raring multiverse
deb http://de.archive.ubuntu.com/ubuntu/ raring-updates multiverse
## N.B. software from this repository may not have been tested as
## extensively as that contained in the main release, although it includes
## newer versions of some applications which may provide useful features.
## Also, please note that software in backports WILL NOT receive any review
## or updates from the Ubuntu security team.
deb http://security.ubuntu.com/ubuntu raring-security main restricted
deb http://security.ubuntu.com/ubuntu raring-security universe
deb http://security.ubuntu.com/ubuntu raring-security multiverse
## Uncomment the following two lines to add software from Canonical's
## 'partner' repository.
## This software is not part of Ubuntu, but is offered by Canonical and the
## respective vendors as a service to Ubuntu users.
deb http://archive.canonical.com/ubuntu raring partner
# deb-src http://archive.canonical.com/ubuntu raring partner
## This software is not part of Ubuntu, but is offered by third-party
## developers who want to ship their latest software.
deb http://extras.ubuntu.com/ubuntu raring main
# deb-src http://extras.ubuntu.com/ubuntu raring main
# deb http://linux.dropbox.com/ubuntu precise main

Output for sudo dpkg -l | grep -e "^iU" -e "^rc":

rc ibm-lotus-cae 8.5.2-20100805.0821 i386 IBM Lotus Composite Application Editor
rc ibm-lotus-cae-nl1 8.5.2-20100805.0821 i386 IBM Lotus CAE NL1
rc ibm-lotus-feedreader 8.5.2-20100805.0821 i386 Feeds for IBM Lotus Notes 8.5.2
rc ibm-lotus-feedreader-nl1 8.5.2-20100805.0821 i386 IBM Lotus Feed Reader NL1
rc ibm-lotus-notes 8.5.2-20100805.0821 i386 IBM Lotus Notes
rc ibm-lotus-notes-core-de 8.5.2-20100805.0821 i386 IBM Lotus Notes Native German (de)
rc ibm-lotus-notes-nl1 8.5.2-20100805.0821 i386 IBM Lotus Notes Java NL1
rc ibm-lotus-sametime 8.5.2-20100805.0821 i386 IBM Lotus Sametime
rc ibm-lotus-symphony 8.5.2-20100805.0821 i386 IBM Lotus Symphony
rc ibm-lotus-symphony-nl1 8.5.2-20100805.0821 i386 IBM Lotus Symphony NL1
rc libapache2-mod-php5filter 5.4.9-4ubuntu2.2 amd64 server-side, HTML-embedded scripting language (apache 2 filter module)
rc libavcodec53:amd64 6:0.8.6-1ubuntu2 amd64 Libav codec library
rc libavutil51:amd64 6:0.8.6-1ubuntu2 amd64 Libav utility library
rc libmotif4:amd64 2.3.3-7ubuntu1 amd64 Open Motif - shared libraries
rc linux-image-3.8.0-25-generic 3.8.0-25.37 amd64 Linux kernel image for version 3.8.0 on 64 bit x86 SMP
rc linux-image-extra-3.8.0-25-generic 3.8.0-25.37 amd64 Linux kernel image for version 3.8.0 on 64 bit x86 SMP

    Read the article

  • A closer look at the T5-4 TPC-H result

    - by Stefan Hinker
By now, many of you have probably seen the new TPC-H result for the SPARC T5-4, submitted to the TPC on June 7. As usual, the key points of this benchmark have already been summarized by our benchmark team on "BestPerf". But there is quite a bit more that is worth a closer look.

Scalability

The TPC advises against comparing TPC-H results from different size classes. But even within the 3000GB class it is interesting: the SPARC T4-4 with 4 CPUs (32 cores at 3.0 GHz) delivers 205,792 QphH, while the SPARC T5-4 with 4 CPUs (64 cores at 3.6 GHz) delivers 409,721 QphH. In other words, only 1,863 QphH, or 0.45%, are missing for 100% scalability, if you assume that twice the number of cores should deliver twice the result. Being a bit more demanding, you could of course expect a factor of 2.4 if you also take the higher clock rate into account. That would set the bar at 493,901 QphH, and the SPARC T5-4 would then be at 83%. Which raises the question: what didn't scale here? Most likely the disk storage! That, too, deserves a closer look. (The short calculation sketch at the end of this post retraces these numbers.)

Disk storage

The report on BestPerf and the TPC's Full Disclosure Report contain some interesting details about the disk storage and its configuration. The SPARC T4-4 configuration used 12 2540-M2 arrays, each delivering around 1.5 GB/s of throughput, so roughly 18 GB/s in total. The arrays were apparently attached directly to the server's 24 8Gbit FC ports, with 2 cables per array. With the 2x 8Gbit ports per array, the theoretical maximum would be 2 GB/s per array. In practice, 1.5 GB/s was delivered, which is pretty much the realistic maximum.

For the SPARC T5-4 run, twice as many disks were used; for this, each 2540-M2 array was extended with an additional disk tray. With this configuration a maximum throughput of 33 GB/s was reached (according to BestPerf) - not quite double the SPARC T4-4 run. To actually deliver double the throughput (36 GB/s), each of the 12 arrays would have had to deliver 3 GB/s over its 4 8Gbit ports. The FDR lists only 12 dual-port FC HBAs, which explains the use of the Brocade FC switches: all 4 8Gbit ports of each array were connected to the switches, which then funnelled the data streams into the server's 24 16Gbit HBA ports. The theoretical maximum per storage array would now be 4 GB/s. If you factor in protocol and "real-world" overhead, though, the 2.75 GB/s actually delivered is not bad at all. With these numbers in mind, doubling the SPARC T4-4 result is a good showing - and at the same time a good explanation of why it didn't scale to 2.4x. Incidentally: neither the SPARC T4-4 nor the SPARC T5-4 had any flash devices in the measured configuration.

The competition

Ever since the T4 systems came to market, our competitors have been trying hard to leave the impression everywhere that the performance of the SPARC CPU core is still lacking. They also seem convinced that (over)sized caches and high clock rates are the only keys to real server performance. However, when I look at the public TPC-H results, I see this:

TPC-H @3000GB, Non-Clustered Systems

System                                                                QphH
SPARC T5-4            3.6 GHz SPARC T5           4/64 - 2048 GB       409,721.8
SPARC T4-4            3.0 GHz SPARC T4           4/32 - 1024 GB       205,792.0
IBM Power 780         4.1 GHz POWER7             8/32 - 1024 GB       192,001.1
HP ProLiant DL980 G7  2.27 GHz Intel Xeon X7560  8/64 -  512 GB       162,601.7

In short: with 32 cores (at 3 GHz and 4MB L3 cache), the SPARC T4-4 delivers more QphH@3000GB than IBM with their 32-core Power7 (at 4.1 GHz and 32MB L3 cache), and also more than HP with a 64-core Intel Xeon system (2.27 GHz and 24MB L3 cache). I wonder where exactly SPARC is lacking here?

Now, you could of course argue that neither result is exactly new. Well, in the absence of newer results, we can speculate a little: IBM's current Performance Report lists the above IBM Power 780 with an rPerf value of 425.5. A suitable successor system with Power7+ CPUs would be the Power 780+ with 64 cores, available at 3.72 GHz. It is rated at an rPerf value of 690.1, i.e. 1.62x more. So if you assume that disk storage is not the limiting factor (IBM tested with 177 SSDs; they are welcome to increase that to 400) and take IBM's own performance estimate as a basis, you may expect a theoretical result of 311,398 QphH@3000GB. That, however, would still be a long way from the SPARC T5-4 result, and even less favourable in the "per core" metric that IBM holds so dear.

In the x86 world it doesn't look any better. Unfortunately, Intel doesn't provide such handy rPerf tables, so for an estimate I have to fall back on SPECint_rate2006. (I'm not a big fan of such cross-estimates; in particular, SPECcpu is not well suited to estimating database performance, since almost no I/O is involved.) The HP system mentioned above is listed at SPEC with 1580 CINT2006_rate. The best result as of 2013-06-14 for the new Intel Xeon E7-4870 with 8 CPUs is 2180 CINT2006_rate. That is at least 1.38x better. (If you considered the clock rate alone, you'd be at 1.32x.) It's idle to keep calculating here, but for impatient readers, a small tabular summary:

TPC-H @3000GB performance speculation

System                                             QphH*       Improvement over previous generation
SPARC T4-4            32 cores SPARC T4            205,792     2x
SPARC T5-4            64 cores SPARC T5            409,721
IBM Power 780         32 cores Power7              192,001     1.62x
IBM Power 780+        64 cores Power7+             311,398*
HP ProLiant DL980 G7  64 cores Intel Xeon X7560    162,601     1.38x
HP ProLiant DL980 G7  80 cores Intel Xeon E7-4870  224,348*

* Not actual results - speculative values based on rPerf (Power7+) or SPECint_rate2006 (HP)

Of course, IBM and HP are cordially invited to refute these values. But as of today, I'm still waiting for current benchmark publications in this data segment.

So what can we take away from all this?

There are several indications that disk storage was the limiting factor that kept the SPARC T5-4 from scaling beyond 2x.
The myth that SPARC cores don't perform is just that - a myth. How about a TPC-H result for the Power7+, by the way?
Cache is not the magic performance switch some people apparently take it for.
Scaling a system, a CPU architecture and an operating system beyond a certain point is hard. In the x86 world it seems to be a little harder still.

What's missing? Well, I'll happily leave the price/performance discussion to the sales people ;-) And last but not least: no, I haven't been transferred to marketing. But sometimes I just can't hold back...

Disclosure Statements

The views expressed on this blog are my own and do not necessarily reflect the views of Oracle. TPC-H, QphH, $/QphH are trademarks of Transaction Processing Performance Council (TPC). For more information, see www.tpc.org, results as of 6/7/13. Prices are in USD. SPARC T5-4 409,721.8 QphH@3000GB, $3.94/QphH@3000GB, available 9/24/13, 4 processors, 64 cores, 512 threads; SPARC T4-4 205,792.0 QphH@3000GB, $4.10/QphH@3000GB, available 5/31/12, 4 processors, 32 cores, 256 threads; IBM Power 780 192,001.1 QphH@3000GB, $6.37/QphH@3000GB, available 11/30/11, 8 processors, 32 cores, 128 threads; HP ProLiant DL980 G7 162,601.7 QphH@3000GB, $2.68/QphH@3000GB, available 10/13/10, 8 processors, 64 cores, 128 threads. SPEC and the benchmark names SPECfp and SPECint are registered trademarks of the Standard Performance Evaluation Corporation. Results as of June 18, 2013 from www.spec.org. HP ProLiant DL980 G7 (2.27 GHz, Intel Xeon X7560): 1580 SPECint_rate2006; HP ProLiant DL980 G7 (2.4 GHz, Intel Xeon E7-4870): 2180 SPECint_rate2006.
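For readers who want to retrace the projections in this post, the small Python sketch below reproduces the arithmetic: the scaling expectation for the SPARC T5-4 and the speculative Power7+ and Xeon E7-4870 figures derived from the rPerf and SPECint_rate2006 ratios. The published numbers are taken from the TPC and SPEC results quoted above; everything derived from them is speculation, as stressed in the text.

    # Sketch: retrace the back-of-the-envelope projections from this post.
    # Published figures come from the TPC/SPEC reports cited above; derived values are speculative.
    t4_qphh, t5_qphh = 205_792.0, 409_721.8           # QphH@3000GB, published results
    expect_2x = 2 * t4_qphh                           # plain core-count scaling
    expect_24x = 2 * (3.6 / 3.0) * t4_qphh            # cores times clock-rate scaling
    print(f"2x expectation  : {expect_2x:10.0f} QphH, measured {t5_qphh:.0f}")
    print(f"2.4x expectation: {expect_24x:10.0f} QphH, T5-4 reaches {t5_qphh / expect_24x:.0%}")

    power780_qphh = 192_001.1                         # published, 32-core Power7
    rperf_ratio = 690.1 / 425.5                       # Power 780+ vs Power 780, per IBM's rPerf
    print(f"Power7+ projection : {power780_qphh * rperf_ratio:10,.0f} QphH (speculative)")

    dl980_qphh = 162_601.7                            # published, 64-core Xeon X7560
    spec_ratio = 2180 / 1580                          # E7-4870 vs X7560, SPECint_rate2006
    print(f"E7-4870 projection : {dl980_qphh * spec_ratio:10,.0f} QphH (speculative)")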

    Read the article

  • Oracle Solaris Zones Physical to virtual (P2V)

    - by user939057
Introduction

This document describes the process of creating a Solaris 10 image built from a physical system and migrating it into a virtualized operating system environment using the Oracle Solaris 10 Zones Physical-to-Virtual (P2V) capability. Using an example and various scenarios, this paper describes how to take advantage of the Oracle Solaris 10 Zones Physical-to-Virtual (P2V) capability together with other Oracle Solaris features: optimizing performance using Solaris 10 resource management, advanced storage management using Solaris ZFS, and improving operating system visibility with Solaris DTrace. The most common use for this tool is consolidation of existing systems onto virtualization-enabled platforms. In addition, we can use the Physical-to-Virtual (P2V) capability for other tasks, for example backing up physical systems and moving them into a virtualized operating system environment hosted on the Disaster Recovery (DR) site, or building an Oracle Solaris 10 image repository with various configurations and different software packages in order to reduce provisioning time.

Oracle Solaris Zones

Oracle Solaris Zones is a virtualization and partitioning technology supported on Oracle Sun servers powered by SPARC and Intel processors. This technology provides an isolated and secure environment for running applications. A zone is a virtualized operating system environment created within a single instance of the Solaris 10 Operating System. Each virtual system is called a zone and runs a unique and distinct copy of the Solaris 10 operating system.

Oracle Solaris Zones Physical-to-Virtual (P2V)

This is a new feature for Solaris 10 9/10. It provides the ability to build a Solaris 10 image from a physical system and migrate it into a virtualized operating system environment. There are three main steps when using this tool:

1. Image creation on the source system; this image includes the operating system and optionally the software we want to include within the image.
2. Preparing the target system by configuring a new zone that will host the new image.
3. Image installation on the target system using the image created in step 1.

The host where the image is built is referred to as the source system, and the host where the image is installed is referred to as the target system.

Benefits of Oracle Solaris Zones Physical-to-Virtual (P2V)

Here are some benefits of this new feature:

Simple - easy build process using Oracle Solaris 10 built-in commands.
Robust - based on Oracle Solaris Zones, a robust and well-known virtualization technology.
Flexible - supports migration from V-series servers to T- or M-series systems. For the latest server information, refer to the Sun Servers web page.

Prerequisites

The target Oracle Solaris system should be running the latest patch cluster, and the minimum Solaris version on the target system should be Solaris 10 9/10. Refer to the latest Administration Guide for Oracle Solaris for a complete procedure on how to download and install Oracle Solaris.

NOTE: If the source system used to build the image runs an older version than the target system, then during the process the operating system will be upgraded to Solaris 10 9/10 (update on attach).

Creating the image used to distribute the software

We will create an image on the source machine.
We can create the image on the local file system and then transfer it to the target machine, or build it on NFS shared storage and mount the NFS file system from the target machine. Optionally, before creating the image, we need to complete the installation of any software we want to include in the Solaris 10 image.

An image is created by using the flarcreate command:

Source # flarcreate -S -n s10-system -L cpio /var/tmp/solaris_10_up9.flar

The command does the following:

-S specifies that we skip the disk space check and do not write archive size data to the archive (faster).
-n specifies the image name.
-L specifies the archive format (i.e. cpio).

Optionally, we can add descriptions to the archive identification section, which can help to identify the archive later:

Source # flarcreate -S -n s10-system -e "Oracle Solaris with Oracle DB10.2.0.4" -a "oracle" -L cpio /var/tmp/solaris_10_up9.flar

You can see an example of the archive identification section in Appendix A: archive identification section. We can compress the flar image using the gzip command, or by adding the -c option to the flarcreate command:

Source # gzip /var/tmp/solaris_10_up9.flar

An md5 checksum can be created for the image in order to ensure no data tampering:

Source # digest -v -a md5 /var/tmp/solaris_10_up9.flar

Moving the image to the target system

If we created the image on the local file system, we need to transfer the flar archive from the source machine to the target machine:

Source # scp /var/tmp/solaris_10_up9.flar target:/var/tmp

Configuring the zone on the target system

After copying the software to the target machine, we need to configure a new zone in order to host the new image in that zone. To install the new zone on the target machine, first we need to configure the zone (for the full zone creation options see the following link: http://docs.oracle.com/cd/E18752_01/html/817-1592/index.html).

ZFS integration

A flash archive can be created on a system that is running a UFS or a ZFS root file system. NOTE: If you create a Solaris Flash archive of a Solaris 10 system that has a ZFS root, then by default the flar will actually be a ZFS send stream, which can be used to recreate the root pool. This image cannot be used to install a zone. You must create the flar with an explicit cpio or pax archive when the system has a ZFS root. Use the flarcreate command with the -L archiver option, specifying cpio or pax as the method to archive the files.
(For example, see Step 1 in the previous section.) Optionally, on the target system you can create the zone root folder on a ZFS file system in order to benefit from the ZFS features (clones, snapshots, etc.):

Target # zpool create zones c2t2d0

Create the zone root folder:

Target # chmod 700 /zones
Target # zonecfg -z solaris10-up9-zone
solaris10-up9-zone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:solaris10-up9-zone> create
zonecfg:solaris10-up9-zone> set zonepath=/zones
zonecfg:solaris10-up9-zone> set autoboot=true
zonecfg:solaris10-up9-zone> add net
zonecfg:solaris10-up9-zone:net> set address=192.168.0.1
zonecfg:solaris10-up9-zone:net> set physical=nxge0
zonecfg:solaris10-up9-zone:net> end
zonecfg:solaris10-up9-zone> verify
zonecfg:solaris10-up9-zone> commit
zonecfg:solaris10-up9-zone> exit

Installing the zone on the target system using the image

Install the configured zone solaris10-up9-zone by using the zoneadm command with the install -a option and the path to the archive. The following example shows how to install the image and sys-unconfig the zone:

Target # zoneadm -z solaris10-up9-zone install -u -a /var/tmp/solaris_10_up9.flar
Log File: /var/tmp/solaris10-up9-zone.install_log.AJaGve
Installing: This may take several minutes...

The following example shows how we can preserve the system identity:

Target # zoneadm -z solaris10-up9-zone install -p -a /var/tmp/solaris_10_up9.flar

Resource management

Some applications are sensitive to the number of CPUs in the target zone. You need to match the number of CPUs in the zone using the zonecfg command:

zonecfg:solaris10-up9-zone> add dedicated-cpu
zonecfg:solaris10-up9-zone> set ncpus=16

DTrace integration

Some applications might need to be analyzed using DTrace in the target zone. You can add DTrace support to the zone using the zonecfg command:

zonecfg:solaris10-up9-zone> set limitpriv="default,dtrace_proc,dtrace_user"

Exclusive IP stack

An Oracle Solaris Container running in Oracle Solaris 10 can have a shared IP stack with the global zone, or it can have an exclusive IP stack (which was released in Oracle Solaris 10 8/07). An exclusive IP stack provides a complete, tunable, manageable and independent networking stack to each zone. A zone with an exclusive IP stack can configure Scalable TCP (STCP), IP routing, IP multipathing, or IPsec. For an example of how to configure an Oracle Solaris zone with an exclusive IP stack, see the following:

zonecfg:solaris10-up9-zone> set ip-type=exclusive
zonecfg:solaris10-up9-zone> add net
zonecfg:solaris10-up9-zone> set physical=nxge0

When the installation completes, use the zoneadm list -i -v options to list the installed zones and verify the status:

Target # zoneadm list -i -v

See that the new zone's status is installed:

ID NAME               STATUS    PATH   BRAND  IP
0  global             running   /      native shared
-  solaris10-up9-zone installed /zones native shared

Now boot the zone:

Target # zoneadm -z solaris10-up9-zone boot

We need to log in to the zone in order to complete the zone setup, or insert a sysidcfg file before booting the zone for the first time; see the example sysidcfg file in Appendix B: sysidcfg file section.

Target # zlogin -C solaris10-up9-zone

Troubleshooting

If an installation fails, review the log file. On success, the log file is in /var/log inside the zone. On failure, the log file is in /var/tmp in the global zone. If a zone installation is interrupted or fails, the zone is left in the incomplete state.
Use uninstall -F to reset the zone to the configured state:

Target # zoneadm -z solaris10-up9-zone uninstall -F
Target # zonecfg -z solaris10-up9-zone delete -F

Conclusion

The Oracle Solaris Zones P2V tool provides the flexibility to build pre-configured images with different software configurations for faster deployment and server consolidation. In this document, I demonstrated how to build and install images and how to integrate the images with other Oracle Solaris features like ZFS and DTrace.

Appendix A: archive identification section

We can use the head -n 20 /var/tmp/solaris_10_up9.flar command in order to access the identification section that contains the detailed description.

Target # head -n 20 /var/tmp/solaris_10_up9.flar
FlAsH-aRcHiVe-2.0
section_begin=identification
archive_id=e4469ee97c3f30699d608b20a36011be
files_archived_method=cpio
creation_date=20100901160827
creation_master=mdet5140-1
content_name=s10-system
creation_node=mdet5140-1
creation_hardware_class=sun4v
creation_platform=SUNW,T5140
creation_processor=sparc
creation_release=5.10
creation_os_name=SunOS
creation_os_version=Generic_142909-16
files_compressed_method=none
content_architectures=sun4v
type=FULL
section_end=identification
section_begin=predeployment
begin 755 predeployment.cpio.Z

Appendix B: sysidcfg file section

Target # cat sysidcfg
system_locale=C
timezone=US/Pacific
terminal=xterms
security_policy=NONE
root_password=HsABA7Dt/0sXX
timeserver=localhost
name_service=NONE
network_interface=primary {
hostname=solaris10-up9-zone
netmask=255.255.255.0
protocol_ipv6=no
default_route=192.168.0.1}
name_service=NONE
nfs4_domain=dynamic

We need to copy this file before booting the zone:

Target # cp sysidcfg /zones/solaris10-up9-zone/root/etc/

    Read the article

  • Introducing Oracle Multitenant

    - by OracleMultitenant
The First Database Designed for the Cloud

Today Oracle announced the general availability (GA) of Oracle Database 12c, the first database designed for the Cloud. Oracle Multitenant, new with Oracle Database 12c, is a key component of this – a new architecture for consolidating databases and simplifying operations in the Cloud. With this, the inaugural post in the Multitenant blog, my goal is to start the conversation about Oracle Multitenant. We are very proud of this new architecture, which we view as a major advance for Oracle. Customers, partners and analysts who have had previews are very excited about its capabilities and its flexibility. This high level review of Oracle Multitenant will touch on our design considerations and how we re-architected our database for the cloud. I'll briefly describe our new multitenant architecture and explain its key benefits. Finally I'll mention some of the major use cases we see for Oracle Multitenant.

Industry Trends

We always start by talking to our customers about the pressures and challenges they're facing and what trends they're seeing in the industry. Some things don't change. They face the same pressures and the same requirements as ever: pressure to do more with less; be faster, leaner, cheaper, and deliver services 24/7. Big companies have achieved scale. Now they want to realize economies of scale. As ever, DBAs are faced with the challenges of patching and upgrading large numbers of databases, and provisioning new ones. Requirements are familiar: performance, scalability, reliability and high availability are non-negotiable. They need ever more security in this threatening climate. There's no time to stop and retool with new applications. What's new are the trends. These are the techniques to use to respond to these pressures within the constraints of the requirements. With the advent of cloud computing and the availability of massively powerful servers – even engineered systems such as Exadata – our customers want to consolidate many applications into fewer, larger servers. There's a move to standardized services – even self-service.

Consolidation

Consolidation is not new; companies have tried various different approaches to consolidation of databases in the cloud. One approach is to partition a powerful server between several virtual machines, one per application. A downside of this is that you have the resource and management overheads of OS and RDBMS per VM – that is, per application. Another is that you have replaced physical sprawl with virtual sprawl, and virtual sprawl is still expensive to manage. In the dedicated database model, we have a single physical server supporting multiple databases, one per application. So there's a shared OS overhead, but RDBMS process and memory overhead are replicated per application. Let's think about our traditional Oracle Database architecture. Every time we create a database, be it a production database, a development or a test database, what do we do?
We create a set of files, we allocate a bunch of memory for managing the data, and we kick off a series of background processes. This is replicated for every one of the databases that we create. As more and more databases are fired up, these replicated overheads quickly consume the available server resources and this limits the number of applications we can run on any given server. In Oracle Database 11g and earlier the highest degree of consolidation could be achieved by what we call schema consolidation. In this model we have one big server with one big database. Individual applications are installed in separate schemas or table-owners. Database overheads are shared between all applications, which affords maximum consolidation. The shortcomings are that application changes are often required. There is no tenant isolation. One bad apple can spoil the whole batch. New Architecture & Benefits In Oracle Database 12c, we have a new multitenant architecture, featuring pluggable databases. This delivers all the resource utilization advantages of schema consolidation with none of the downsides. There are two parts to the term “pluggable database”: "pluggable", which is new, and "database", which is familiar.  Before we get to the exciting new stuff let’s discuss what hasn’t changed. A pluggable database is a fully functional Oracle database. It’s not watered down in any way. From the perspective of an application or an end user it hasn’t changed at all. This is very important because it means that no application changes are required to adopt this new architecture. There are many thousands of applications built on Oracle databases and they are all ready to run on Oracle Multitenant. So we have these self-contained pluggable databases (PDBs), and as their name suggests, they are plugged into a multitenant container database (CDB). The CDB behaves as a single database from the operations point of view. Very much as we had with the schema consolidation model, we only have a single set of Oracle background processes and a single, shared database memory requirement. This gives us very high consolidation density, which affords maximum reduction in capital expenses (CapEx). By performing management operations at the CDB level – “managing many as one” – we can achieve great reductions in operating expenses (OpEx) as well, but we retain granular control where appropriate. Furthermore, the “pluggability” capability gives us portability and this adds a tremendous amount of agility. We can simply unplug a PDB from one CDB and plug it into another CDB, for example to move it from one SLA tier to another. I'll explore all these new capabilities in much more detail in a future posting.  Use Cases We can identify a number of use cases for Oracle Multitenant. Here are a few of the major ones. 
Development / Testing – where individual engineers need rapid provisioning and recycling of private copies of a few "master test databases"
Consolidation – of disparate applications using fewer, more powerful servers
Software as a Service – deploying separate copies of identical applications to individual tenants
Database as a Service – typically self-service provisioning of databases on the private cloud
Application Distribution from ISV / Installation by Customer – eliminating many typical installation steps (create schema, import seed data, import application code PL/SQL…) - just plug in a PDB!
High volume data distribution – literally via disk drives in envelopes distributed by truck! - distribution of things like GIS or MDM master databases
…various others!
Benefits Previous approaches to consolidation have involved a trade-off between reductions in Capital Expenses (CapEx) and Operating Expenses (OpEx), and they’ve usually come at the expense of agility. With Oracle Multitenant you can have your cake and eat it:
Minimize CapEx – more applications per server
Minimize OpEx – manage many as one; standardized procedures and services; rapid provisioning
Maximize Agility – cloning for development and testing; portability through pluggability; scalability with RAC
Ease of Adoption – applications run unchanged; it’s a pure deployment choice. Neither the database backend nor the application needs to be changed.
In future postings I’ll explore various aspects in more detail. However, if you feel compelled to devour everything you can about Oracle Multitenant this very minute, have no fear. Visit the Multitenant page on OTN and explore the various resources we have available there. Among these, Oracle Distinguished Product Manager Bryn Llewellyn has written an excellent, thorough, and exhaustively detailed White Paper about Oracle Multitenant, which is available here. Follow me – I tweet @OraclePDB #OracleMultitenant
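To make the plug/unplug operations described above concrete, here is a minimal sketch of the SQL involved. All names and file paths are hypothetical, and the statements follow Oracle Database 12c syntax as I understand it – treat it as an illustration rather than a recipe.

-- On the source CDB: create a PDB from the seed, then open it
CREATE PLUGGABLE DATABASE crm_pdb ADMIN USER crm_admin IDENTIFIED BY Welcome1
  FILE_NAME_CONVERT = ('/u01/oradata/cdb1/pdbseed/', '/u01/oradata/cdb1/crm_pdb/');
ALTER PLUGGABLE DATABASE crm_pdb OPEN;

-- Later: unplug it, keeping its datafiles, so it can move to another CDB
ALTER PLUGGABLE DATABASE crm_pdb CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE crm_pdb UNPLUG INTO '/u01/stage/crm_pdb.xml';
DROP PLUGGABLE DATABASE crm_pdb KEEP DATAFILES;

-- On the target CDB: plug it in and open it
CREATE PLUGGABLE DATABASE crm_pdb USING '/u01/stage/crm_pdb.xml' NOCOPY;
ALTER PLUGGABLE DATABASE crm_pdb OPEN;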

    Read the article

  • Using IIS Logs for Performance Testing with Visual Studio

    - by Tarun Arora
    In this blog post I’ll show you how you can play back the IIS Logs in Visual Studio to automatically generate the web performance tests. You can also download the sample solution I am demo-ing in the blog post. Introduction Performance testing is as important for new websites as it is for evolving websites. If you already have your website running in production you could mine the information available in IIS logs to analyse the dense zones (most used pages) and performance test those pages rather than wasting time testing & tuning the least used pages in your application. What are IIS Logs? To help with server use and analysis, IIS is integrated with several types of log files. These log file formats provide information on a range of websites and specific statistics, including Internet Protocol (IP) addresses, user information and site visits as well as dates, times and queries. If you are using IIS 7 and above you will find the log files in the following directory: C:\inetpub\logs\LogFiles\ Walkthrough
1. Download and install Log Parser from the Microsoft Download Centre. You should see the LogParser.dll in the install folder; the default install location is C:\Program Files (x86)\Log Parser 2.2. LogParser.dll gives us a library to query the IIS log files programmatically. By the way, if you haven’t used Log Parser in the past, it is a powerful, versatile tool that provides universal query access to text-based data such as log files, XML files and CSV files, as well as key data sources on the Windows operating system such as the Event Log, the Registry, the file system, and Active Directory. More details…
2. Create a new test project in Visual Studio. Let’s call it IISLogsToWebPerfTestDemo.
3. Delete the UnitTest1.cs class that gets created by default. Right click the solution and add a project of type class library, name it IISLogsToWebPerfTestEngine. Delete the default class Program.cs that gets created with the project.
4. Under the IISLogsToWebPerfTestEngine project add a reference to Microsoft.VisualStudio.QualityTools.WebTestFramework – c:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PublicAssemblies\Microsoft.VisualStudio.QualityTools.WebTestFramework.dll and LogParser, also called MSUtil – c:\users\tarora\documents\visual studio 2010\Projects\IisLogsToWebPerfTest\IisLogsToWebPerfTestEngine\obj\Debug\Interop.MSUtil.dll
5. Right click the IISLogsToWebPerfTestEngine project and add a new class – IISLogReader.cs. The IISLogReader class queries the IIS logs using Log Parser. 
using System;
using System.Collections.Generic;
using System.Text;
using MSUtil;
using LogQuery = MSUtil.LogQueryClassClass;
using IISLogInputFormat = MSUtil.COMIISW3CInputContextClassClass;
using LogRecordSet = MSUtil.ILogRecordset;
using Microsoft.VisualStudio.TestTools.WebTesting;
using System.Diagnostics;

namespace IisLogsToWebPerfTestEngine
{
    // By making use of Log Parser it is possible to query the IIS log using SELECT queries
    public class IISLogReader
    {
        private string _iisLogPath;

        public IISLogReader(string iisLogPath)
        {
            _iisLogPath = iisLogPath;
        }

        public IEnumerable<WebTestRequest> GetRequests()
        {
            LogQuery logQuery = new LogQuery();
            IISLogInputFormat iisInputFormat = new IISLogInputFormat();

            // Currently these columns give us sufficient information to construct the web test requests
            string query = @"SELECT s-ip, s-port, cs-method, cs-uri-stem, cs-uri-query FROM " + _iisLogPath;
            LogRecordSet recordSet = logQuery.Execute(query, iisInputFormat);

            // Apply a bit of transformation
            while (!recordSet.atEnd())
            {
                ILogRecord record = recordSet.getRecord();
                if (record.getValueEx("cs-method").ToString() == "GET")
                {
                    string server = record.getValueEx("s-ip").ToString();
                    string path = record.getValueEx("cs-uri-stem").ToString();
                    string querystring = record.getValueEx("cs-uri-query").ToString();

                    StringBuilder urlBuilder = new StringBuilder();
                    urlBuilder.Append("http://");
                    urlBuilder.Append(server);
                    urlBuilder.Append(path);
                    if (!String.IsNullOrEmpty(querystring))
                    {
                        urlBuilder.Append("?");
                        urlBuilder.Append(querystring);
                    }

                    // You could make substitutions by introducing parameterized web tests.
                    WebTestRequest request = new WebTestRequest(urlBuilder.ToString());
                    Debug.WriteLine(request.UrlWithQueryString);
                    yield return request;
                }
                recordSet.moveNext();
            }

            Console.WriteLine(" That's it! Closing the reader");
            recordSet.close();
        }
    }
}

6. Connect the dots by adding the project reference ‘IisLogsToWebPerfTestEngine’ to ‘IisLogsToWebPerfTest’. Right click the ‘IisLogsToWebPerfTest’ project and add a new class ‘WebTest1Coded.cs’. The WebTest1Coded class inherits from the WebTest class. By overriding the GetRequestEnumerator method we can inject the log files into the IISLogReader class, which uses Log Parser to query the log file and extract the web requests; these are yielded back for playback when the test is run.

namespace IisLogsToWebPerfTest
{
    using System;
    using System.Collections.Generic;
    using System.Text;
    using Microsoft.VisualStudio.TestTools.WebTesting;
    using Microsoft.VisualStudio.TestTools.WebTesting.Rules;
    using IisLogsToWebPerfTestEngine;

    // This class is a coded web performance test implementation that simply passes
    // the path of the IIS logs to the IISLogReader class, which does the heavy
    // lifting of reading the contents of the log file and converting them to tests.
    // You could have multiple such classes that inherit from WebTest, implement the
    // GetRequestEnumerator method and pass different log files for different tests.
    public class WebTest1Coded : WebTest
    {
        public WebTest1Coded()
        {
            this.PreAuthenticate = true;
        }

        public override IEnumerator<WebTestRequest> GetRequestEnumerator()
        {
            // Substitute the highlighted path with the path of the IIS log file
            IISLogReader reader = new IISLogReader(@"C:\Demo\iisLog1.log");
            foreach (WebTestRequest request in reader.GetRequests())
            {
                yield return request;
            }
        }
    }
}

7. It's time to fire off the test and see the IIS log played back as a web performance test. 
From the Test menu choose the Test View window; you should be able to see the WebTest1Coded test show up. Highlight the test and press Run Selection (you can also debug the test in case you face any failures during test execution).
8. Optionally you can create a Load Test by keeping ‘WebTest1Coded’ as the base test.
Conclusion You have just helped your testing team and become the coolest developer in your organization! Jokes apart, Log Parser and web performance tests together allow you to save a lot of time by not having to worry about what to test or how to record the test. If you haven’t already, download the solution from here. You can take this to the next level by using Log Parser to extract the log files to a database as part of an end of day batch. Run this solution over a longer term to see usage trends by user, and have your tests consume the web requests now stored in the database to generate the web performance tests. If you like the post, don’t forget to share … Keep RocKiNg!
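As a footnote, the same Log Parser SQL dialect used inside IISLogReader can also tell you which pages are your "dense zones" before you generate any tests. The sketch below is a hedged example – the log path is hypothetical and the column names assume the W3C log format:

SELECT TOP 10 cs-uri-stem, COUNT(*) AS Hits
FROM C:\inetpub\logs\LogFiles\W3SVC1\*.log
GROUP BY cs-uri-stem
ORDER BY Hits DESC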

    Read the article

  • Oracle Announces Oracle Exadata X3 Database In-Memory Machine

    - by jgelhaus
    Fourth Generation Exadata X3 Systems are Ideal for High-End OLTP, Large Data Warehouses, and Database Clouds; Eighth-Rack Configuration Offers New Low-Cost Entry Point ORACLE OPENWORLD, SAN FRANCISCO – October 1, 2012 News Facts During his opening keynote address at Oracle OpenWorld, Oracle CEO, Larry Ellison announced the Oracle Exadata X3 Database In-Memory Machine - the latest generation of its Oracle Exadata Database Machines. The Oracle Exadata X3 Database In-Memory Machine is a key component of the Oracle Cloud. Oracle Exadata X3-2 Database In-Memory Machine and Oracle Exadata X3-8 Database In-Memory Machine can store up to hundreds of Terabytes of compressed user data in Flash and RAM memory, virtually eliminating the performance overhead of reads and writes to slow disk drives, making Exadata X3 systems the ideal database platforms for the varied and unpredictable workloads of cloud computing. In order to realize the highest performance at the lowest cost, the Oracle Exadata X3 Database In-Memory Machine implements a mass memory hierarchy that automatically moves all active data into Flash and RAM memory, while keeping less active data on low-cost disks. With a new Eighth-Rack configuration, the Oracle Exadata X3-2 Database In-Memory Machine delivers a cost-effective entry point for smaller workloads, testing, development and disaster recovery systems, and is a fully redundant system that can be used with mission critical applications. Next-Generation Technologies Deliver Dramatic Performance Improvements Oracle Exadata X3 Database In-Memory Machines use a combination of scale-out servers and storage, InfiniBand networking, smart storage, PCI Flash, smart memory caching, and Hybrid Columnar Compression to deliver extreme performance and availability for all Oracle Database Workloads. Oracle Exadata X3 Database In-Memory Machine systems leverage next-generation technologies to deliver significant performance enhancements, including: Four times the Flash memory capacity of the previous generation; with up to 40 percent faster response times and 100 GB/second data scan rates. Combined with Exadata’s unique Hybrid Columnar Compression capabilities, hundreds of Terabytes of user data can now be managed entirely within Flash; 20 times more capacity for database writes through updated Exadata Smart Flash Cache software. The new Exadata Smart Flash Cache software also runs on previous generation Exadata systems, increasing their capacity for writes tenfold; 33 percent more database CPU cores in the Oracle Exadata X3-2 Database In-Memory Machine, using the latest 8-core Intel® Xeon E5-2600 series of processors; Expanded 10Gb Ethernet connectivity to the data center in the Oracle Exadata X3-2 provides 40 10Gb network ports per rack for connecting users and moving data; Up to 30 percent reduction in power and cooling. Configured for Your Business, Available Today Oracle Exadata X3-2 Database In-Memory Machine systems are available in a Full-Rack, Half-Rack, Quarter-Rack, and the new low-cost Eighth-Rack configuration to satisfy the widest range of applications. Oracle Exadata X3-8 Database In-Memory Machine systems are available in a Full-Rack configuration, and both X3 systems enable multi-rack configurations for virtually unlimited scalability. Oracle Exadata X3-2 and X3-8 Database In-Memory Machines are fully compatible with prior Exadata generations and existing systems can also be upgraded with Oracle Exadata X3-2 servers. 
Oracle Exadata X3 Database In-Memory Machine systems can be used immediately with any application certified with Oracle Database 11g R2 and Oracle Real Application Clusters, including SAP, Oracle Fusion Applications, Oracle’s PeopleSoft, Oracle’s Siebel CRM, the Oracle E-Business Suite, and thousands of other applications. Supporting Quotes “Forward-looking enterprises are moving towards Cloud Computing architectures,” said Andrew Mendelsohn, senior vice president, Oracle Database Server Technologies. “Oracle Exadata’s unique ability to run any database application on a fully scale-out architecture using a combination of massive memory for extreme performance and low-cost disk for high capacity delivers the ideal solution for Cloud-based database deployments today.” Supporting Resources Oracle Press Release Oracle Exadata Database Machine Oracle Exadata X3-2 Database In-Memory Machine Oracle Exadata X3-8 Database In-Memory Machine Oracle Database 11g Follow Oracle Database via Blog, Facebook and Twitter Oracle OpenWorld 2012 Oracle OpenWorld 2012 Keynotes Like Oracle OpenWorld on Facebook Follow Oracle OpenWorld on Twitter Oracle OpenWorld Blog Oracle OpenWorld on LinkedIn Mark Hurd's keynote with Andy Mendelsohn and Juan Loaiza - - watch for the replay to be available soon at http://www.youtube.com/user/Oracle or http://www.oracle.com/openworld/live/on-demand/index.html

    Read the article

  • BizTalk 2009 - BizTalk Benchmark Wizard: Installation

    - by StuartBrierley
    As previously detailed, I have completed a single server installation of BizTalk Server 2009 standard on my development laptop; a MacBook Pro Core2Duo running at 2.16GHz with 2GB of RAM. Following this I also posted on my use of the BizTalk Server Best Practices Analyser and how to configure the BizTalk SQL Server Jobs. All of which means that I should have some confidence that I have a decent working BizTalk Server 2009 environment. Next I thought that it would be a good idea to try and get some idea of how this setup performs by carrying out some baseline tests that can then be replicated on the test and live servers. The aim of this would be to allow confident predictions to be made of how any solutions developed on a single "server" installation may be expected to perform when deployed to these multi-server BizTalk Server 2009 standard installations. The BizTalk Benchmark Wizard would seem to be the perfect tool for the job. The BizTalk Benchmark Wizard is a utility that can be used to gain some validation of a BizTalk installation, giving a level of guidance on whether it is performing as might be expected. This utility should be used after BizTalk Server has been installed and before any solutions are deployed to the environment.  This will ensure that you are getting consistent and clean results from the BizTalk Benchmark Wizard. The BizTalk Benchmark Wizard applies load to the BizTalk Server environment under a choice of specific scenarios. During these scenarios performance counter information is collected and assessed against statistics that are appropriate to the BizTalk Server environment: "The executed scenarios may or may not be relative to any realistic scenario, and is only intended for testing. The BizTalk Benchmark Wizard has been developed in relation to the BizTalk Server 2009 Scale Out Testing Study. More information about the study can be found here: http://msdn.microsoft.com/en-us/library/ee377068(BTS.10).aspx" After downloading and installing the wizard you will need to set up the Hosts, Instances and Adapter handlers.  This is done by running a script file using “cscript”, as detailed below.  To do this you will need to open a command prompt window and navigate to the script folder; assuming the default installation location this would be C:\Program Files\Blogical\BizTalk Benchmark Wizard\Artefacts\BizTalk. In this folder you should find an InstallHosts.vbs file which can be executed using the following parameters:
NTGroupName – The name of the Windows NT group.
UserName – The name of the user account running the service instances.
Password – The password of the user account running the service instances.
Receive Host – The name of the server where you want to run the receive host instance.
Send Host – The name of the server where you want to run the send host instance.
Processing Host – The name of the server where you want to run the process host instance. 
By default the script is set up for 64 bit hosts, so if you are running in 32 bit environment make sure that you change the following line in the script before continuing: from:   objHS.IsHost32BitOnly = False to:    objHS.IsHost32BitOnly = True If you have a single box installation, your script command might look like this: cscript InstallHosts.vbs "BizTalk Application Users" “\MyUser” “MyPassword” “BtsServer1” “BtsServer1” “BtsServer1” If you have a multi server installation, your script command might look like this: cscript InstallHosts.vbs "MyDomain\BizTalk Application Users" “MyDomain\MyUser” “MyPassword” “BtsServer1” “BtsServer2” “BtsServer2” Running this script will create: Three hosts (BBW_RxHost, BBW_TxHost and BBW_PxHost) Three host instances One send and one receive adapter handler for the WCF NetTcp adapter. You will then need to import the BizTalk MSI via the BizTalk Administration Console.  Open the BizTalk Administration Console, point to the “Applications” node and import the BizTalk Benchmark Wizard.msi found in the same folder as the script above. This will create a “BizTalk Benchmark Wizard” application along with all ports and orchestrations needed. To finish the installation you will need to run the BizTalk Benchmark Wizard.msi on all BizTalk servers to add the assemblies to the Global Assembly Cache (GAC). Next I will look at running the BizTalk Benchmark Wizard.

    Read the article

  • Big Data – Buzz Words: What is MapReduce – Day 7 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned what Hadoop is. In this article we will take a quick look at one of the four most important buzz words that go around Big Data – MapReduce. What is MapReduce? MapReduce was designed by Google as a programming model for processing large data sets with a parallel, distributed algorithm on a cluster. Though MapReduce was originally Google proprietary technology, it has become quite a generalized term in recent times. MapReduce comprises Map() and Reduce() procedures. The Map() procedure performs filtering and sorting operations on the data, whereas the Reduce() procedure performs a summary operation on the data. This model is based on modified concepts of the map and reduce functions commonly available in functional programming. Libraries providing the Map() and Reduce() procedures have been written in many different languages. The most popular free implementation of MapReduce is Apache Hadoop, which we will explore tomorrow. Advantages of MapReduce Procedures The MapReduce Framework usually spans distributed servers and runs various tasks in parallel to each other. There are various components which manage the communication between the various nodes holding the data and provide high availability and fault tolerance. Programs written in the MapReduce functional style are automatically parallelized and executed on commodity machines. The MapReduce Framework takes care of the details of partitioning the data and executing the processes on distributed servers at run time. If there is any failure during this process, the framework provides high availability and the remaining available nodes take over the responsibility of the failed node. As you can see, the MapReduce Framework provides much more than just the Map() and Reduce() procedures; it provides scalability and fault tolerance as well. A typical implementation of the MapReduce Framework processes many petabytes of data across thousands of processing machines. How does the MapReduce Framework Work? A typical MapReduce Framework handles petabytes of data across thousands of nodes. Here is a basic explanation of the MapReduce procedures which use this massive pool of commodity servers. Map() Procedure There is always a master node in this infrastructure which takes an input. Right after taking the input, the master node divides it into smaller sub-inputs or sub-problems. These sub-problems are distributed to worker nodes. A worker node then processes them and does the necessary analysis. Once the worker node completes the process for its sub-problem, it returns the result back to the master node. Reduce() Procedure All the worker nodes return the answers to the sub-problems assigned to them to the master node. The master node collects these answers and aggregates them into the answer to the original big problem that was assigned to it. The MapReduce Framework performs the above Map() and Reduce() procedures in parallel and independently of each other. All the Map() procedures can run in parallel, and once each worker node has completed its task it sends the result back to the master node, which compiles them into a single answer. This particular procedure can be very effective when it is implemented on a very large amount of data (Big Data). 
The MapReduce Framework has five different steps:
- Preparing Map() Input
- Executing User Provided Map() Code
- Shuffling Map Output to the Reduce Processor
- Executing User Provided Reduce Code
- Producing the Final Output
Here is the dataflow of the MapReduce Framework:
- Input Reader
- Map Function
- Partition Function
- Compare Function
- Reduce Function
- Output Writer
In a future blog post of this 21 day series we will explore the various components of MapReduce in detail. MapReduce in a Single Statement MapReduce is equivalent to the SELECT and GROUP BY of a relational database, applied to a very large database. Tomorrow In tomorrow’s blog post we will discuss the Buzz Word – HDFS. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
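To make that single-statement analogy concrete, here is a minimal sketch of the canonical word-count example expressed as the SELECT and GROUP BY the post refers to (the table and column names are hypothetical):

-- Hypothetical table holding one row per (document_id, word) pair.
-- The "Map" phase corresponds to emitting (word, 1) pairs;
-- the "Reduce" phase corresponds to the GROUP BY / COUNT aggregation.
SELECT word, COUNT(*) AS occurrences
FROM document_words
GROUP BY word
ORDER BY occurrences DESC;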

    Read the article

  • MySQL Connect 8 Days Away - Replication Sessions

    - by Mat Keep
    Following on from my post about MySQL Cluster sessions at the forthcoming Connect conference, it's now the turn of MySQL Replication – another technology at the heart of scaling and high availability for MySQL. Unless you've only just returned from a 6-month alien abduction, you will know that MySQL 5.6 includes the largest set of replication enhancements ever packaged into a single new release:
- Global Transaction IDs + HA utilities for a self-healing cluster (yes, both automatic failover and manual switchover available!)
- Crash-safe slaves and binlog
- Binlog Group Commit and Multi-Threaded Slaves for high performance
- Replication Event Checksums and Time-Delayed replication
- and many more
There are a number of sessions dedicated to learning more about these important new enhancements, delivered by the same engineers who developed them. Here is a summary.
Saturday 29th, 13.00 – Replication Tips and Tricks, Mats Kindahl. In this session, the developers of MySQL Replication present a bag of useful tips and tricks related to the MySQL 5.5 GA and MySQL 5.6 development milestone releases, including multisource replication, using logs for auditing, handling filtering, examining the binary log, using relay slaves, splitting the replication stream, and handling failover.
Saturday 29th, 17.30 – Enabling the New Generation of Web and Cloud Services with MySQL 5.6 Replication, Lars Thalmann. This session showcases the new replication features, including • High performance (group commit, multithreaded slave) • High availability (crash-safe slaves, failover utilities) • Flexibility and usability (global transaction identifiers, annotated row-based replication [RBR]) • Data integrity (event checksums)
Saturday 29th, 19.00 – MySQL Replication Birds of a Feather. In this session, the MySQL Replication engineers discuss all the goodies, including global transaction identifiers (GTIDs) with autofailover; multithreaded, crash-safe slaves; checksums; and more. The team discusses the design behind these enhancements and how to get started with them. You will get the opportunity to present your feedback on how these can be further enhanced and can share any additional replication requirements you have to further scale your critical MySQL-based workloads.
Sunday 30th, 10.15 – Hands-On Lab, MySQL Replication, Luis Soares and Sven Sandberg. But how do you get started, how does it work, and what are the best practices and tools? During this hands-on lab, you will learn how to get started with replication, how it works, architecture, replication prerequisites, setting up a simple topology, and advanced replication configurations. The session also covers some of the new features in the MySQL 5.6 development milestone releases.
Sunday 30th, 13.15 – Hands-On Lab, MySQL Utilities, Chuck Bell. Would you like to learn how to more effectively manage a host of MySQL servers and manage high-availability features such as replication? This hands-on lab addresses these areas and more. Participants will get familiar with all of the MySQL utilities, using each of them with a variety of options to configure and manage MySQL servers.
Sunday 30th, 14.45 – Eliminating Downtime with MySQL Replication, Luis Soares. The presentation takes a deep dive into new replication features such as global transaction identifiers and crash-safe slaves. It also showcases a range of Python utilities that, combined with the Release 5.6 feature set, results in a self-healing data infrastructure. 
By the end of the session, attendees will be familiar with the new high-availability features in the whole MySQL 5.6 release and how to make use of them to protect and grow their business. Sunday 30th, 17.45 Scaling for the Web and the Cloud with MySQL Replication, Luis Soares In a Replication topology, high performance directly translates into improving read consistency from slaves and reducing the risk of data loss if a master fails. MySQL 5.6 introduces several new replication features to enhance performance. In this session, you will learn about these new features, how they work, and how you can leverage them in your applications. In addition, you will learn about some other best practices that can be used to improve performance. So how can you make sure you don't miss out - the good news is that registration is still open ;-) And just to whet your appetite, listen to the On-Demand webinar that presents an overview of MySQL 5.6 Replication.  
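If you want to experiment with the global transaction identifiers discussed in these sessions before the conference, here is a minimal, hedged sketch of pointing a MySQL 5.6 slave at a master using auto-positioning. The host name and credentials are hypothetical, and it assumes gtid_mode, log-bin, log-slave-updates and enforce-gtid-consistency are already enabled in my.cnf on both servers:

-- Run on the slave, in the mysql client
CHANGE MASTER TO
  MASTER_HOST='master.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='repl_password',
  MASTER_AUTO_POSITION=1;
START SLAVE;
-- Verify: Slave_IO_Running and Slave_SQL_Running should both show 'Yes'
SHOW SLAVE STATUS\G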

    Read the article

  • SQL SERVER – 3 Challenges for DBA and Smart Solutions

    - by Pinal Dave
    Developer’s life is never easy. DBA’s life is even crazier. DBA’s Life When a developer wakes up in the morning, most of the time they have no idea what different challenges they are going to face that day. Of course, most developers know the project and roadmap they are working on. However, developers have no clue what coding challenges they are going to face that day. DBA’s life is even crazier. When a DBA wakes up in the morning – they are often thankful that they were not disturbed during the night due to server issues. The very next thing they wish is that they do not face a challenge they can’t solve that day. The problems DBAs face every single day are mostly unpredictable and they just have to solve them as they come during the day. Though the life of a DBA is not always bad. There are always ways and methods to overcome various challenges. Let us see three of the challenges and how a DBA can use various tools to overcome them. Challenge #1 Synchronize Data Across Servers A very common challenge DBAs receive is that they have to synchronize data across servers. If you try to write that up manually, it may take forever to accomplish the task. It is nearly impossible to do the same with the help of T-SQL alone. However, thankfully there are tools like dbForge Studio which can save the day and synchronize data across servers. Read my detailed blog post about the same over here: SQL SERVER – Synchronize Data Exclusively with T-SQL. Challenge #2 SQL Report Builder DBAs are often asked to build reports on the go. It really annoys DBAs, but hardly anyone cares about it. No matter how busy a DBA is, they are just called upon to build reports on things at very short notice. I personally like to avoid any task which is given to me accidentally, and personally, building reports can be boring. I would rather spend time on high availability, disaster recovery and performance tuning than on building reports. I use a SQL third party tool when I have to work with SQL Reports. Others have extended reporting capabilities. The latter group of products includes the SQL report builder built into dbForge Studio for SQL Server. I have blogged about this earlier over here: SQL SERVER – SQL Report Builder in dbForge Studio for SQL Server. Challenge #3 Work with the OTHER Database The manager does not understand that MySQL is different from SQL Server and SQL Server is different from Oracle. For them everything is the same. In my career, hundreds of times I have faced a situation where I am given a database to manage or some task to do when the regular DBA is on vacation or leave. When I try to explain that I do not understand the underlying technology, I am usually told that my manager trusts me and I can do anything. Honestly, I can’t, but I hardly dare to argue. I fall back on third party tools to manage a database when it is not in my comfort zone. For example, I was once given a MySQL performance tuning task (at that time I did not know MySQL so well). To simplify the search for a problem query, let us use MySQL Profiler in dbForge Studio for MySQL. It provides such commands as a Query Profiling Mode and Generate Execution Plan. Here is the blog post discussing the same: MySQL – Profiler : A Simple and Convenient Tool for Profiling SQL Queries. Well, that’s it! There were many different such occasions when I have been saved by a tool. Maybe some other day I will write part 2 of this blog post. 
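Going back to Challenge #1, here is a minimal sketch of what one round of hand-written synchronization can look like in T-SQL. The linked server, database and table names are all hypothetical – and a script like this quickly becomes unmanageable across many tables and servers, which is exactly where a dedicated tool earns its keep:

-- Run on the destination server; SourceSrv is a hypothetical linked server
MERGE INTO dbo.Customers AS target
USING SourceSrv.SalesDb.dbo.Customers AS source
    ON target.CustomerId = source.CustomerId
WHEN MATCHED AND (target.Name <> source.Name OR target.City <> source.City) THEN
    UPDATE SET target.Name = source.Name, target.City = source.City
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerId, Name, City) VALUES (source.CustomerId, source.Name, source.City)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;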
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL Tagged: Devart, SQL Tool

    Read the article

  • Windows Azure Use Case: Web Applications

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx  Description: Many applications have a requirement to be located outside of the organization’s internal infrastructure control. For instance, the company website for a brick-and-mortar retail company may want to post not only static but interactive content to be available to their external customers, and not want the customers to have access inside the organization’s firewall. There are also cases of pure web applications used for a great many of the internal functions of the business. This allows for remote workers, shared customer/employee workloads and data and other advantages. Some firms choose to host these web servers internally, others choose to contract out the infrastructure to an “ASP” (Application Service Provider) or an Infrastructure as a Service (IaaS) company. In any case, the design of these applications often resembles the following: In this design, a server (or perhaps more than one) hosts the presentation function (http or https) access to the application, and this same system may hold the computational aspects of the program. Authorization and Access is controlled programmatically, or is more open if this is a customer-facing application. Storage is either placed on the same or other servers, hosted within an RDBMS or NoSQL database, or a combination of the options, all coded into the application. High-Availability within this scenario is often the responsibility of the architects of the application, and by purchasing more hosting resources which must be built, licensed and configured, and manually added as demand requires, although some IaaS providers have a partially automatic method to add nodes for scale-out, if the architecture of the application supports it. Disaster Recovery is the responsibility of the system architect as well. Implementation: In a Windows Azure Platform as a Service (PaaS) environment, many of these architectural considerations are designed into the system. The Azure “Fabric” (not to be confused with the Azure implementation of Application Fabric - more on that in a moment) is designed to provide scalability. Compute resources can be added and removed programmatically based on any number of factors. Balancers at the request-level of the Fabric automatically route http and https requests. The fabric also provides High-Availability for storage and other components. Disaster recovery is a shared responsibility between the facilities (which have the ability to restore in case of catastrophic failure) and your code, which should build in recovery. In a Windows Azure-based web application, you have the ability to separate out the various functions and components. Presentation can be coded for multiple platforms like smart phones, tablets and PC’s, while the computation can be a single entity shared between them. This makes the applications more resilient and more object-oriented, and lends itself to a SOA or Distributed Computing architecture. It is true that you could code up a similar set of functionality in a traditional web-farm, but the difference here is that the components are built into the very design of the architecture. The API’s and DLL’s you call in a Windows Azure code base contains components as first-class citizens. 
For instance, if you need storage, it is simply called within the application as an object.  Computation has multiple options and the ability to scale linearly. You also gain another component that you would either have to write or bolt-in to a typical web-farm: the Application Fabric. This Windows Azure component provides communication between applications or even to on-premise systems. It provides authorization in either person-based or claims-based perspectives. SQL Azure provides relational storage as another option, and can also be used or accessed from on-premise systems. It should be noted that you can use all or some of these components individually. Resources: Design Strategies for Scalable Active Server Applications - http://msdn.microsoft.com/en-us/library/ms972349.aspx  Physical Tiers and Deployment  - http://msdn.microsoft.com/en-us/library/ee658120.aspx

    Read the article

  • SQL Server – SafePeak “Logon Trigger” Feature for Managing Data Access

    - by pinaldave
    Lately I received an interesting question about the abilities of SafePeak for SQL Server acceleration software: Q: “I would like to use SafePeak to make my CRM application faster. It is an application we bought from some vendor; after a while it became slow and we can’t reprogram it. SafePeak automated caching sounds like an easy and good solution for us. But in my application there are many servers and different other application services that address its main database, and some even change data, and I feel that there is a chance that during the connection process we may miss some of those servers. Is there a way to ensure that SafePeak will be aware of all connections to the SQL Server, so its cache will remain intact?” Interesting question, as I remember that SafePeak (http://www.safepeak.com/Product/SafePeak-Overview) prefers that all traffic to the database go through it. I decided to check out the features of the latest SafePeak version (2.1) and look for an answer there. A: Indeed I found SafePeak has a feature they call “Logon Trigger” which is designed for exactly that purpose. It is located in the user interface, under: Settings -> SQL instances management  ->  [your instance]  ->  [Logon Trigger] tab. From here you activate / deactivate it and control a white-list of server IPs and login names that SafePeak will ignore. After activation of the “logon trigger” the SafePeak server is notified by the SQL Server itself on each newly opened connection. SafePeak monitors those connections and decides whether there is something to do with them or not. On a typical installation SafePeak prefers all application and user connections to go via SafePeak – this way it knows about data and schema updates immediately (in real time). With activation of the SafePeak “logon trigger” a special CLR trigger is deployed on the SQL Server which notifies SafePeak of any connection that has not arrived via SafePeak. In such cases SafePeak can act to clear and lock the cache or to ignore it. This feature makes sure SafePeak is aware of all connections, so the SafePeak cache remains exactly correct at all times. So even if a user, such as a DBA, connects to the SQL Server without going via SafePeak, SafePeak will know about it and take action. The notification does not impact the work of that connection; the user or application still continues to do whatever they planned to do. Note: I found that activation of the logon trigger in SafePeak requires that the SafePeak SQL login has the following permissions: 1) CONTROL SERVER; 2) VIEW SERVER STATE; 3) and that the SQL Server instance is CLR enabled. Seeing SafePeak in action, I can say SafePeak is a fantastic resource for those who seek better performance for SQL Server critical apps. SafePeak promises to accelerate SQL Server applications in just several hours of installation, automatic learning and some optimization configuration (no code changes!!!). If better application and database performance means better business to you – I suggest you download and try SafePeak. The solution SafePeak offers is indeed unique, and the questions I receive are very interesting. Have any more questions on SafePeak? Please leave your question as a comment and I will try to get an answer for you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
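For reference, granting the permissions listed in the note above would look something like this in T-SQL. The login name is hypothetical; the permission names and the CLR setting are the ones stated in the post:

-- Hypothetical SafePeak service login
GRANT CONTROL SERVER TO [safepeak_svc];
GRANT VIEW SERVER STATE TO [safepeak_svc];
-- Enable CLR on the instance
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;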

    Read the article

  • 2010 April Fools Joke

    - by Dane Morgridge
    I started at my current job at the end of March last year and there were some pretty funny April fools jokes.  Nothing super crazy, but pretty funny.  One guy came in and there was a tree in his cube.  We (me and the rest of my team) were planning for a couple of weeks on what we could do that would be just awesome.  We had a lot of really good ideas but nothing was spectacular.  Then Steve Andrews had a brilliant idea (yes it's true).  Since we have internal DNS servers we could redirect DNS to our internal servers for a site such as cnn.com.  Then we would lift the code from the site and create our own home page that would contain news about people in the company.  Steve was actually laughing so hard when he thought of the idea that it took him almost 30 minutes to spit it out. I thought, "this is perfect". I had enlisted a couple of people to help come up with the stories and at the same time we were trying to figure out how to get everybody to the site the morning of the 1st.  Then it hit me.  We could have the main article be one of my getting picked up by the FBI on hacking charges.  Then Chris (my boss) could send an email out telling everyone that I would not be there today and direct them to the site.  That would for sure get everyone to go to cnn.com first thing and see our prank.  I begun the process of looking for photos I could crop myself into and found the perfect one.  Then my wife took a good pic with our Canon 40D and I went to work.  The night before I didn't have any other stories due to everyone being really busy at work, but I decided to go ahead with just the FBI bust on it's own.  I got everything working and tested and coordinated with Chris for me to come in late so no one would see me at the office until after everyone had seen the joke. And so the morning of April fools came and I was waiting at home and the email was perfect.  Chris told everyone that I wouldn't be in and that not to answer any questions if you got any calls from anybody.  The Photoshop job I did was not perfect, but good enough and I even wrote an article with it that went into more detail about how I had been classified as a terrorist and all kinds of stuff. People at work started getting the emails and a few people didn't realize it was a joke (as I had hoped), including some from senior management (one person in particular who shall remain nameless in this post).  Emails started flying around about how to contain the situation and how to handle bad PR.  He basically bought it hook, line and sinker and then went in to crisis mode.  It was awesome! He did finally realize it was a joke and I will likely print and frame the email he sent out.  In short, April fools this year was a huge success.

    Read the article

  • OSB and Coherence Integration

    - by mark.ms.smith
    Anyone who has tried to manage Coherence nodes or tried to cache results in OSB will appreciate the new functionality now available. As of WebLogic Server 10.3.4, you can use the WebLogic Administration Server, via the Administration Console or WLST, and the java-based Node Manager to manage and monitor the life cycle of stand-alone Coherence cache servers. This is a great step forward as the previous options mainly involved writing your own scripts to do this. You can find an excellent description of how this works at James Bayer’s blog. You can also find the WebLogic documentation here. As of Oracle Service Bus 11gR1 (11.1.1.3.0), OSB now supports service result caching for Business Services with Coherence. If you use Business Services that return somewhat static results that do not change often, you can configure those Business Services to cache results. For Business Services that use result caching, you can control the time to live for the cached result. After the cached result expires, the next Business Service call results in invoking the back-end service to get the result. This result is then stored in the cache for future requests to access. I’m thinking that this caching functionality would be perfect for some sort of cross reference data that was refreshed nightly by batch. You can find the OSB Business Service documentation here. Result Caching in a dedicated JVM This example demonstrates these new features by configuring an OSB Business Service to cache results in a separate Coherence JVM managed by WebLogic. The reason why you may want to use a separate, dedicated JVM is that the result cache data could potentially be quite large and you may want to protect your OSB java heap. In this example, the client will call an OSB Proxy Service to get Employee data based on an Employee Id. Using a Business Service, OSB calls an external system. The results are automatically cached and when called again, the respective results are retrieved from the cache rather than the external system. Step 1 – Set up your Coherence Server Via the OSB Administration Server Console, create your Coherence Server to be used as the results cache. Here are the configured Coherence Server arguments from the Server Start tab. Note that I’m using the default Cache Config and Override files in the domain. -Xms256m -Xmx512m -XX:PermSize=128m -XX:MaxPermSize=256m -Dtangosol.coherence.override=/app/middleware/jdev_11.1.1.4/user_projects/domains/osb_domain2/config/osb/coherence/osb-coherence-override.xml -Dtangosol.coherence.cluster=OSB-cluster -Dtangosol.coherence.cacheconfig=/app/middleware/jdev_11.1.1.4/user_projects/domains/osb_domain2/config/osb/coherence/osb-coherence-cache-config.xml -Dtangosol.coherence.distributed.localstorage=true -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true -Dcom.sun.management.jmxremote Just in case you need it, here is my Coherence Server classpath: /app/middleware/jdev_11.1.1.4/oracle_common/modules/oracle.coherence_3.6/coherence.jar: /app/middleware/jdev_11.1.1.4/modules/features/weblogic.server.modules.coherence.server_10.3.4.0.jar: /app/middleware/jdev_11.1.1.4/oracle_osb/lib/osb-coherence-client.jar By default, OSB will try and create a local result cache instance. 
You need to disable this by adding the following JVM parameters to each of the OSB Managed Servers: -Dtangosol.coherence.distributed.localstorage=false -DOSB.coherence.cluster=OSB-cluster If you need more information on configuring a remote result cache, have a look at the configuration documentation under the heading Using an Out-of-Process Coherence Cache Server. Step 2 – Configure your Business Service Under the respective Business Service Message Handling Configuration (Advanced Properties), you need to enable “Result Caching”. Additionally, you need to determine what the cache data will be keyed on. In the example below, I’m keying it on the unique Employee Id. The Results As this test was on my laptop, the actual timings are just an indication that there is a benefit to caching results. Using my test harness, I sent 10,000 requests to OSB, all with the same Employee Id. In this case, I had result caching disabled. You can see that this caused the back end Business Service (BS_GetEmployeeData) to be called for each request. Then, after enabling result caching, I sent the same number of identical requests. You can now see the Business Service was only invoked once, on the first request. All subsequent requests used the Results Cache.

    Read the article

  • Steps for MySQL DB Replication

    - by Manish Agrawal
    Following are the steps for MySQL Replication implementation on Linux machine: Pre-implementation steps for DB Replication:   1.    Identify the databases to be replicated 2.    Identify the tables to be ignored during replication per database for example log tables 3.  Carefully identify and replace the variables and paths(locations) mentioned (in bold) in the commands given below with appropriate values 4.  Schedule the maintenance activity in odd hours as these activities will affect all the databases on Master database server       Implementation steps for DB Replication:     1.    Configure the /etc/my.cnf file on Master database server to enable Binary logging, setting of server id and configuring of dbnames for which logging should be done. [mysqld] log-bin=mysql-bin server-id=1 binlog-do-db = dbname   Note: You can specify multiple DB in binlog-do-db by using comma separated dbname values like: dbname1, dbname2, …, dbnameN   2.    On Master database, Grant Replication Slave Privileges, by executing following command on mysql prompt mysql> GRANT REPLICATION SLAVE ON *.* TO slaveuser@<hostname> identified by ‘slavepassword’;   3.    Stop the Master & Slave database by giving the command      mysqladmin shutdown   4.    Start the Master database by giving the command      /usr/local/mysql-5.0.22/bin/mysqld_safe --user=user&     5.    mysql> FLUSH TABLES WITH READ LOCK; Note: Leave the client (putty session) from which you issued the FLUSH TABLES statement running, so that the read lock remains in effect. If you exit the client, the lock is released. 6.    mysql > SHOW MASTER STATUS;          +---------------+----------+--------------+------------------+          | File          | Position | Binlog_Do_DB | Binlog_Ignore_DB |          +---------------+----------+--------------+------------------+          | mysql-bin.003 | 117       | dbname       |                  |          +---------------+----------+--------------+------------------+ Note: Note this information as this will be required while starting of Slave and replication in later steps   7.    Take MySQL dump by giving the following command, In another session window (putty window) run the following command: mysqldump –u user --ignore-table=dbname.tbl_name -–ignore-table=dbname.tbl_name2 --master-data dbname > dbname_dump.db Note: When choosing databases to include in the dump, remember that you will need to filter out databases on each slave that you do not want to include in the replication process.     8.    Unlock the tables on Master by giving following command: mysql> UNLOCK TABLES;   9.    Copy the dump file to Slave DB server   10.  Startup the Slave by using option --skip-slave      /usr/local/mysql-5.0.22/bin/mysqld_safe --user=user --skip-slave&   11.  Restore the dump file on Slave DB server      mysql –u user dbname < dbname_dump.db   12.  Stop the Slave database by giving the command      mysqladmin shutdown   13.  Configure the /etc/my.cnf file on the Slave database server [mysqld] server-id=2 replicate-ignore-table = dbname.tablename   14.  Start the Slave Mysql Server with 'replicate-do-db=DB name' option.      /usr/local/mysql-5.0.22/bin/mysqld_safe --user=user --replicate-do-db=dbname --skip-slave   15.  
Configure the settings at Slave server for Master host name, log filename and position within the log file as shown in Step 6 above Use Change Master statement in the MySQL session mysql> CHANGE MASTER TO MASTER_HOST='<master_host_name>', MASTER_USER='<replication_user_name>', MASTER_PASSWORD='<replication_password>', MASTER_LOG_FILE='<recorded_log_file_name>', MASTER_LOG_POS=<recorded_log_position>;   16.  On Slave Servers mysql prompt give the following command: a.     mysql > START SLAVE; b.    mysql > SHOW SLAVE STATUS;         Note: To stop slave for backup or any other activity you can use the following command on the Slave Servers mysql prompt: mysql> STOP SLAVE     Refer following links for more information on MySQL DB Replication: http://dev.mysql.com/doc/refman/5.0/en/replication-options.html http://crazytoon.com/2008/04/21/mysql-replication-replicate-by-choice/ http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html

    Read the article

  • Why is my machine unable to mount my SMB drives ("CIFS VFS: Error connecting to socket. Aborting operation", return code -115)?

    - by downbeat
    I have a machine running Precise (12.04 x64), and I cannot mount my SMB drives (I have three; we'll call them public, private and download). It used to work a week or two ago, and I didn't touch fstab! The machine hosting the shares is a commercial NAS, and I'm not seeing anything to indicate the NAS itself is the problem. I have an older machine which I updated to Precise at the same time (both fresh installs, not dist-upgrades), so it should have a very similar configuration, and it is not having any problems. I am not having problems from Windows machines/partitions either - only from this one Precise machine. The two machines use identical entries in fstab and identical /etc/samba/smb.conf files, and I don't think I've ever changed smb.conf (it has never mattered before). My fstab entries all basically look like this:

        //10.1.1.111/public /media/public cifs credentials=/home/downbeat/.credentials,iocharset=utf8,uid=downbeat,gid=downbeat,file_mode=0644,dir_mode=0755 0 0

    Here's the dmesg output on boot:

        [ 51.162198] CIFS VFS: Error connecting to socket. Aborting operation
        [ 51.162369] CIFS VFS: cifs_mount failed w/return code = -115
        [ 51.194106] CIFS VFS: Error connecting to socket. Aborting operation
        [ 51.194250] CIFS VFS: cifs_mount failed w/return code = -115
        [ 51.198120] CIFS VFS: Error connecting to socket. Aborting operation
        [ 51.198243] CIFS VFS: cifs_mount failed w/return code = -115

    There are no other errors in the dmesg output. Originally, when I ran 'testparm -s', the output contained these lines:

        ERROR: lock directory /var/run/samba does not exist
        ERROR: pid directory /var/run/samba does not exist

    Here are the Samba-related packages I have installed:

        $ dpkg --list|grep -i samba
        ii libpam-winbind 2:3.6.3-2ubuntu2.3 Samba nameservice and authentication integration plugins
        ii libwbclient0 2:3.6.3-2ubuntu2.3 Samba winbind client library
        ii nautilus-share 0.7.3-1ubuntu2 Nautilus extension to share folder using Samba
        ii python-smbc 1.0.13-0ubuntu1 Python bindings for Samba clients (libsmbclient)
        ii samba-common 2:3.6.3-2ubuntu2.3 common files used by both the Samba server and client
        ii samba-common-bin 2:3.6.3-2ubuntu2.3 common files used by both the Samba server and client
        ii winbind 2:3.6.3-2ubuntu2.3 Samba nameservice integration server

        $ dpkg --list|grep -i smb
        ii dmidecode 2.11-4 SMBIOS/DMI table decoder
        ii libsmbclient 2:3.6.3-2ubuntu2.3 shared library for communication with SMB/CIFS servers
        ii python-smbc 1.0.13-0ubuntu1 Python bindings for Samba clients (libsmbclient)
        ii smbclient 2:3.6.3-2ubuntu2.3 command-line SMB/CIFS clients for Unix
        ii smbfs 2:5.1-1ubuntu1 Common Internet File System utilities - compatibility package

        $ dpkg --list|grep -i cifs
        ii cifs-utils 2:5.1-1ubuntu1 Common Internet File System utilities
        ii libsmbclient 2:3.6.3-2ubuntu2.3 shared library for communication with SMB/CIFS servers
        ii smbclient 2:3.6.3-2ubuntu2.3 command-line SMB/CIFS clients for Unix

    I originally noticed that my other machine had "libpam-winbind" and "nautilus-share" installed and the machine with the issue did not. Installing those two packages fixed the 'testparm -s' errors, but did not fix my issue. Finally, I tried to purge and reinstall smbclient, smbfs, cifs-utils, samba-common and samba-common-bin. Still no luck. Again, it used to work; now it doesn't, and a very similarly configured machine works (although some packages are out of date on the working machine).
    The NAS has only one interface/IP address, nmblookup finds its IP from its hostname (from the machine with the issue), and it responds to a ping. Any help would be greatly appreciated - I've been searching AskUbuntu, SuperUser, ubuntuforums and plain old search engines for a week now and it's driving me crazy!
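    A minimal diagnostic sketch along these lines can help narrow the problem down; it reuses the NAS address and credentials file from the fstab entry above, and /mnt/test is just a hypothetical mount point. If the manual mount succeeds once the system is fully up, the -115 failures logged around 51 seconds into boot point at the boot-time mounts firing before the network is ready rather than at a Samba misconfiguration:

        # Check that the NAS answers on the NetBIOS and SMB ports
        $ nc -zvw5 10.1.1.111 139
        $ nc -zvw5 10.1.1.111 445

        # Retry the same mount by hand after boot has finished
        $ sudo mkdir -p /mnt/test
        $ sudo mount -t cifs //10.1.1.111/public /mnt/test \
            -o credentials=/home/downbeat/.credentials,iocharset=utf8,uid=downbeat,gid=downbeat

        # Look for a more specific kernel error than return code -115
        $ dmesg | tail -n 20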

    Read the article

  • FFmpeg Video Hosting for Linux and Windows Server

    - by Aditi
    FFmpeg hosting is a special type of web hosting where the host servers have video transcoding software loaded on them, allowing the automatic conversion of videos from one format to another. FFmpeg is a cross-platform solution for recording, converting, transcoding and streaming audio and video; it includes libavcodec, the leading audio/video codec library. FFmpeg hosting gets its name from the set of server-side programs (modules) called FFmpeg. There are a number of applications and web scripts available which allow webmasters to create their own video-sharing websites. Video hosting typically requires:

        PHP 4.3 or above (including CLI support)
        MEncoder and MPlayer
        FFmpeg-PHP
        MySQL database server
        LAME MP3 encoder
        libogg + libvorbis
        GD Library 2 or higher
        CGI-BIN access

    A number of web service providers offer FFmpeg hosting. Below is a list of some of the best FFmpeg hosting providers for both Linux and Windows servers.

    Dream Host - provides web-based email access, mail filtering, spam filtering, unlimited email IDs, a vacation autoresponder, Python support, full CGI access and many more services. Price: $7.95

    Micfo - offers unlimited disk space and bandwidth. Other services include a free domain for life and free website transfer, among others. All in all, one of the best options to consider. Price: $5

    Host Upon - HostUpon offers FFmpeg hosting on all their hosting packages, with readily installed modules to start a video website or social network with video uploading. Scripts such as Boonex Dolphin, PHPMotion, Social Engine, ABKsoft scripts, the Joomla video plugin, Clipshare, ClipBucket, Social Media, Rayzz and Vidi Script work with their FFmpeg. Their FFmpeg hosting plan offers 24/7/365 support with a typical response time of 15 minutes or less. Price: $5.95

    DownTown Host - provides full and exceptional support by live chat and telephone. It has high-powered, modern servers and the finest web server technology. It offers free search engine submission, continuous data backup protection, free email forwarding and a free site move, among many other services.

    Site5 - this FFmpeg service provider offers an uptime guarantee, real-time stats on each server and many more attractive services. Price: $4.95

    Cirtex Hosting - allows you to host 7 websites & domains and provides unlimited storage space and monthly bandwidth. It also offers FTP and email accounts and many more services. Price: $2.49

    FLV Hosting - supplies RTMP server streaming for large video streaming and server-side recording. It is flexible, costs less, and they customize to the client's requirements. Price: $9.95

    AptHost - provides 24x7x365 premium support and fully FFmpeg-enabled services. Price: $4.95

    HostMDS - great support, priced low. Provides SSH access, CGI, Ruby on Rails, Perl, PHP, MySQL, FrontPage extensions, 24/7 support, free domain transfer and spam filtering. It offers instant account setup and low-latency, fast bandwidth. They were formerly known as Vistapages. Price: $4.95
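    For illustration only, the kind of conversion such a host runs behind the scenes when a user uploads a clip looks roughly like the commands below; the file names and bitrates are placeholders, and the flags assume a reasonably recent FFmpeg build with LAME support:

        # Convert an uploaded clip to FLV (Sorenson Spark video, MP3 audio via LAME)
        $ ffmpeg -i upload.avi -c:v flv -b:v 800k -c:a libmp3lame -ar 22050 -b:a 64k converted.flv

        # Grab a preview frame one second in, as most video-sharing scripts do for thumbnails
        $ ffmpeg -i upload.avi -ss 00:00:01 -vframes 1 thumbnail.jpg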

    Read the article

  • How Exactly Is One Linux OS “Based On” Another Linux OS?

    - by Jason Fitzpatrick
    When reviewing different flavors of Linux, you'll frequently come across phrases like "Ubuntu is based on Debian", but what exactly does that mean? Today's Question & Answer session comes to us courtesy of SuperUser - a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question

    SuperUser reader PLPiper is trying to get a handle on how Linux variants work: I've been looking through quite a number of Linux distros recently to get an idea of what's around, and one phrase that keeps coming up is that "[this OS] is based on [another OS]". For example:

        Fedora is based on Red Hat
        Ubuntu is based on Debian
        Linux Mint is based on Ubuntu

    For someone coming from a Mac environment I understand how "OS X is based on Darwin"; however, when I look at Linux distros, I find myself asking "Aren't they all based on Linux?" In this context, what exactly does it mean for one Linux OS to be based on another Linux OS? So, what exactly does it mean when we talk about one version of Linux being based on another version?

    The Answer

    SuperUser contributor kostix offers a solid overview of the whole system: Linux is a kernel - a (complex) piece of software which works with the hardware and exports a certain Application Programming Interface (API), plus binary conventions on how to use it precisely (the Application Binary Interface, ABI), to "user-space" applications. Debian, RedHat and others are operating systems - complete software environments which consist of the kernel and a set of user-space programs which make the computer useful, as they perform sensible tasks (sending/receiving mail, letting you browse the Internet, driving a robot, etc.). Now each such OS, while providing mostly the same software (there are not that many free mail server programs, Internet browsers or desktop environments, for example), differs in its approach to doing this and also in its stated goals and release cycle. Quite typically these OSes are called "distributions". This is, IMO, a somewhat wrong term stemming from the fact that you are technically able to build all the required software by hand and install it on a target machine, so these OSes distribute the packaged software so you either don't need to build it (Debian, RedHat) or they facilitate such building (Gentoo). They also usually provide an installer which helps to install the OS onto a target machine. Making and supporting an OS is a very complicated task requiring a complex and intricate infrastructure (upload queues, build servers, a bug tracker, archive servers, mailing list software and so on) and staff. This obviously raises a high barrier to creating a new, from-scratch OS. For instance, Debian provides ca. 37k packages for some five hardware architectures - go figure how much work is put into supporting all of that. Still, if someone thinks they need to create a new OS for whatever reason, it may be a good idea to use an existing foundation to build on. And this is exactly where OSes based on other OSes come into existence. For instance, Ubuntu builds upon Debian by importing most packages from it and repackaging only a small subset of them, plus packaging their own, providing their own artwork, default settings, documentation, etc. Note that there are variations to this "based on" thing.
    For instance, Debian fosters the creation of "pure blends" of itself: distributions which use Debian rather directly and just add a bunch of packages and other material only useful to fairly small groups of users, such as those working in education, medicine or the music industry. Another twist is that not all of these OSes are based on Linux; for instance, Debian also provides FreeBSD and Hurd kernels. These have quite tiny user groups, but they exist nonetheless. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
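    As a quick illustration of that parent/derivative relationship - a sketch only, since the exact fields vary by distribution and release - most derivatives record their lineage in /etc/os-release, and on Debian-family systems the package manager can show where an installed package was pulled from:

        # Show the lineage a distribution declares about itself
        $ grep -E '^(ID|ID_LIKE)=' /etc/os-release
        # Typical output on Ubuntu:      ID=ubuntu with ID_LIKE=debian
        # On Linux Mint, something like: ID=linuxmint with ID_LIKE="ubuntu debian"

        # See which repositories an installed package came from
        $ apt-cache policy bash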

    Read the article

  • Tuning Red Gate: #3 of Lots

    - by Grant Fritchey
    I'm drilling down into the metrics about SQL Server itself available to me in the Analysis tab of SQL Monitor to see what's up with our two problematic servers. In the previous post I'd noticed that rg-sql01 had quite a few CPU spikes, so one of the first things I want to check there is how much CPU is getting used by SQL Server itself. It's possible we're looking at some other process using up all the CPU... Nope, it's SQL Server. I compared this to the rg-sql02 server: you can see that there is a more consistently low set of CPU counters there. I clearly need to look at rg-sql01 and capture more specific data around the queries running on it to identify which ones are causing these CPU spikes.

    I always like to look at Batch Requests/sec on a server, not because it's an indication of a problem, but because it gives you some idea of the load. Just how much is this server getting hit? Here are rg-sql01 and rg-sql02: of the two, clearly rg-sql01 has a lot of activity. Remember though, that's all this is a measure of: activity. It doesn't suggest anything other than what it says, the number of requests coming in. But it's the kind of thing you want to know in order to understand how the system is used. Are you seeing a correlation between the number of requests and the CPU usage, or a reverse correlation, where the number of requests drops as the CPU spikes? See, it's useful.

    Some of the details you can look at are Compilations/sec, Compilations/Batch and Recompilations/sec. These give you some idea of how the cache is getting used within the system. None of these showed anything interesting on either server. One metric that I like (even though I know it can be controversial) is Page Life Expectancy. On the average server I expect to see a series of mountains as the PLE climbs and then drops due to a data load or something along those lines. That's not the case here: those spikes back in January suggest that the servers weren't really being used much. The PLE on rg-sql01 seems to be somewhat consistent, growing to 3 hours or so and then dropping, but the rg-sql02 PLE looks like it might be all over the map.

    Instead of continuing to look at this high-level, data-gathering view, I'm going to drill down on rg-sql02 and see what it's done for the last week: and now we begin to see where we might have an issue. Memory on this system is getting flushed every half hour or so. I'm going to check another metric, scans: whoa! I'm going back to the system real quick to look at some disk information again for rg-sql02. Here is the average disk queue length on the server, and the transfers. Right, I think I have a guess as to what's up here. We're seeing memory get flushed constantly and we're seeing lots of scans. The disks are queuing, especially that F drive, and there are lots of requests that correspond to the scans and the memory flushes. In short, we've got queries that are scanning the data, a lot, so we either have bad queries or bad indexes.

    I'm going back to the server overview for rg-sql02 to check the Top 10 expensive queries. I'm modifying it to show me the last 3 days and the totals, so I'm not looking at some maintenance routine that ran 10 minutes ago and is skewing the results: OK. I need to look into these queries that are getting executed this much. They're generating a lot of reads, but which queries are generating the most reads? Ow, all still going against the same database. This is where I'm going to temporarily leave SQL Monitor.
What I want to do is connect up to the server, validate that the Warehouse database is using the F:\ drive (which I'll put money down it is) and then start seeing what's up with these queries. Part 1 of the Series Part 2 of the Series
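    Outside SQL Monitor, a rough equivalent of that "most reads" view can be pulled straight from the plan cache. This is only a sketch: the server name comes from the post, the three-day window mirrors the filter described above, and it assumes a trusted connection via sqlcmd:

        $ sqlcmd -S rg-sql02 -E -Q "
        SELECT TOP (10)
               DB_NAME(st.dbid)           AS database_name,
               qs.execution_count,
               qs.total_logical_reads,
               SUBSTRING(st.text, 1, 200) AS query_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        WHERE qs.last_execution_time > DATEADD(DAY, -3, GETDATE())
        ORDER BY qs.total_logical_reads DESC;"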

    Read the article

  • STOP PRESS: FY15 Q1 Oracle ZS3 Contest for Partners

    - by Cinzia Mascanzoni
    04 JUNE 2014 - Oracle EMEA Partners Stop Press

    STOP PRESS: FY15 Q1 Oracle ZS3 Contest for Partners - Share an unforgettable experience at the Teatro Alla Scala in Milan

    Dear valued Partner, We are pleased to launch a contest exclusive to our partners dedicated to promoting and selling Oracle Systems! You are essential to the success of Oracle and we want to recognize your contribution and effort in driving Oracle Storage to the market. To show our appreciation we are delighted to announce a contest giving the winners the opportunity to attend a roundtable chaired by senior Oracle executives and spend an unforgettable evening at the magnificent Teatro Alla Scala in Milan, followed by a stay at the Grand Hotel et de Milan, courtesy of Oracle. Recognition will be given to 12 partner companies (10 VARs & 2 VADs) for their ZFS storage booking achievement in the broad market between June 1st and July 18th 2014.

    Criteria of Eligibility

        A minimum deal value of $30k is required for qualification
        Partners who are wholly or partially owned by a public sector organization are not eligible for participation

    Winners

    The winning VARs will be:
        The highest ZS3 or ZBA bookings achievers by COB on July 18th, 2014 in each Oracle EMEA region (1)
        The highest Oracle on Oracle (2) ZS3 or ZBA bookings achievers by COB on July 18th, 2014 in each Oracle EMEA region

    The winning VADs (3) will be:
        The highest ZS3 or ZBA bookings achiever by COB on July 18th, 2014 in EMEA
        The highest Oracle on Oracle (2) ZS3 or ZBA bookings achiever by COB on July 18th, 2014 in EMEA

    (1) Two VAR winners for each EMEA region - Eastern Europe & CIS, Middle East & Africa, South Europe, North Europe, UK/Ireland & Israel - as per the criteria outlined above.
    (2) Oracle on Oracle, in this instance, means ZS3 or ZBA storage attached to DB or DB options, Engineered Systems or SPARC servers sold to the same customer by the same partner within the contest timelines.
    (3) Two VAD winners, one for each of the criteria outlined above, will be selected from across EMEA.

    Oracle shall be the final arbiter in selecting the winners. All winners will be notified via their Oracle account manager. Full details about the contest, expenses covered by Oracle and the timetable of events can be found on the Oracle EMEA Hardware (Servers & Storage) Partner Community workspace (FY15 Q1 ZFS Partner Contest). Access to the community workspace requires membership; if you are not a member, please register here.

    The Prize

    Winners will be invited to participate in a roundtable chaired by Oracle on Monday September 8th 2014 in Milan and to be guests of Oracle that evening at the Teatro Alla Scala. The evening will comprise a private tour of the Scala museum, a cocktail reception in the elegant museum rooms and a performance by the renowned soprano Maria Agresta. Our guests will then retire for the evening to the Grand Hotel et de Milan, courtesy of Oracle. Good luck!!

    For more information, please contact Sasan Moaveni.

    Regards, Olivier Tordo, Senior Director - Systems Business Development, Oracle EMEA Alliances & Channels

    Resources: EMEA Hardware Partner Community | EMEA Oracle Partner Days | Find Partner Events | EMEA Partner News Blog | EMEA Partner Enablement Blog | Oracle PartnerNetwork

    Copyright © 2014, Oracle and/or its affiliates. All rights reserved.

    Read the article

  • F1 Pit Pragmatics

    - by mikef
    "I hate computers. No, really, I hate them. I love the communications they facilitate, I love the conveniences they provide to my life. but I actually hate the computers themselves." - Scott Merrill, 'I hate computers: confessions of a Sysadmin' If Scott's goal was to polarize opinion and trigger raging arguments over the 'real reasons why computers suck', then he certainly succeeded. Impassioned vitriol sits side-by-side with rational debate. Yet Scott's fundamental point is absolutely on the money - Computers are a means to an end. The IT industry is finally starting to put weight behind the notion that good User Experience is an absolutely crucial goal, a cause championed by the likes of Microsoft's Bill Buxton, and which Apple's increasingly ubiquitous touch screen interface exemplifies. However, that doesn't change the fact that, occasionally, you just have to man up and deal with complex systems. In fact, sometimes you just need to sacrifice everything else in the name of performance. You'll find a perfect example of this Faustian bargain in Trevor Clarke's fascinating look into the (diabolical) IT infrastructure of modern F1 racing - high performance, high availability. high everything. To paraphrase, each car has up to 100 sensors, transmitting around 30Gb of data over the course of a race (70% in real-time). This data is then processed by no less than 3 servers (per car) so that the engineers in the pit have access to telemetry, strategy information, timing feeds, a connection back to the operations room in the team's home base - the list goes on. All of this while the servers are exposed "to carbon dust, oil, vibration, rain, heat, [and] variable power". Now, this is admittedly an extreme context where there's no real choice but to use complex systems where ease-of-use is, at best, a secondary concern. The flip-side is seen in small-scale personal computing such as that seen in Apple's iDevices, which are incredibly intuitive but limited in their scope. In terms of what kinds of systems they prefer to use, I suspect that most SysAdmins find themselves somewhere along this axis of Power vs. Usability, and which end of this axis you resonate with also hints at where you think the IT industry should focus its energy. Do you see yourself in the F1 pit, making split-second decisions, wrestling with information flows and reticent hardware to bend them to your will? If so, I imagine you feel that computers are subtle tools which need to be tuned and honed, using the advanced knowledge possessed only by responsible SysAdmins (If you have an iPhone, I suspect it's jail-broken). If the machines throw enigmatic errors, it's the price of flexibility and raw power. Alternatively, would you prefer to have your role more accessible, with users empowered by knowledge, spreading the load of managing IT environments? In that case, then you want hardware and software to have User Experience as their primary focus, and are of the "means to an end" school of thought (you're probably also fed up with users not listening to you when you try and help). At its heart, the dichotomy is between raw power (which might be difficult to use) and ease-of-use (which might have some limitations, but you can be up and running immediately). Of course, the ultimate goal is a fusion of flexibility, power and usability all in one system. It's achievable in specific software environments, and Red Gate considers it a target worth aiming for, but in other cases it's a goal right up there with cold fusion. 
I think it'll be a long time before we see it become ubiquitous. In the meantime, are you Power-Hungry or a Champion of Usability? Cheers, Michael Francis Simple Talk SysAdmin Editor

    Read the article

< Previous Page | 190 191 192 193 194 195 196 197 198 199 200 201  | Next Page >