Search Results

Search found 16537 results on 662 pages for 'oracle accelerate for midsize companies'.

Page 480 of 662

  • Is it safe to expect Sun Java 6 to be supported for the life of RHEL 6?

    - by Ophidian
    I'm in the planning stages of a Java application that we're targeting for Red Hat Enterprise Linux 6. Unfortunately, we're stuck at RHEL 6.1, which does not ship the java-1.7.0-oracle package set (it was added in 6.3), and I don't have any control over when we will be upgraded to a more recent release. I don't have any specific technical requirement to use Java 7, but Java 6 is going to hit its public EOL in February 2013. Am I safe to assume that since Red Hat (and subsequently Oracle with its Oracle Unbreakable Linux) has shipped a copy of Java 6 in the java-1.6.0-sun package, it will be supported for the entire 10-year support life of RHEL 6?

    Read the article

  • How to extend a logical volume in VMware

    - by Mercer
    I have CentOS 6.3 running in a virtual machine, with two disks: Disk 1 = 18 GB, Disk 2 = 20 GB.

        [root@vm ~]# df -h
        Filesystem                          Size  Used Avail Use% Mounted on
        /dev/mapper/vg_system-lv_root      1008M  250M  708M  27% /
        tmpfs                               1.9G     0  1.9G   0% /dev/shm
        /dev/sda1                           194M   31M  154M  17% /boot
        /dev/mapper/vg_system-lv_home       504M   17M  462M   4% /home
        /dev/mapper/vg_system-lv_opt        2.0G   68M  1.9G   4% /opt
        /dev/mapper/vg_produits-lv_grid     6.9G  2.5G  4.1G  38% /opt/grid
        /dev/mapper/vg_produits-lv_oracle   6.9G  144M  6.4G   3% /opt/oracle
        /dev/mapper/vg_system-lv_tmp        2.8G   71M  2.6G   3% /tmp
        /dev/mapper/vg_system-lv_usr        2.5G  1.6G  799M  67% /usr
        /dev/mapper/vg_system-lv_var        2.0G  278M  1.6G  15% /var

    I want to extend /tmp to 10 GB and /opt/oracle to 13 GB. Thanks.

    Read the article

  • What IT certification is most valuable without job experience? [closed]

    - by Eric Wilson
    I'm trying to change vocations towards IT. I'm learning Java, SQL, and other things, but I have no job experience or formal education (other than a math Ph.D.). I know that certifications only go so far, but I was curious which certifications might be the most valuable for a first IT job. To clarify my question: Oracle certification + zero Oracle experience = 0% chance of an Oracle DBA job. Perhaps, though: [foobar certification] + zero IT job experience = nonzero chance of an entry-level IT job? Please give specific suggestions of certifications that you would consider relevant to an entry-level IT job.

    Read the article

  • How to start a cmd window and issue a tail request from a batch file?

    - by Kari
    I can open a cmd window and start a tail by entering something like this:

        tail -f C:\Oracle\WebCenter\Sites\11gR1\Sites\11.1.1.6.1\logs\sites.log

    This is probably a stupid question, but how do I do this in a batch file? It should be easy, but it's not working; I have tried a couple of variations with no success. Can anyone tell me what I am doing wrong here?

        ECHO OFF
        CD C:\Oracle\WebCenter\Sites\11gR1\Sites\11.1.1.6.1\logs\
        cmd tail -f sites.log

    I've also tried:

        ECHO OFF
        start cmd tail -f C:\Oracle\WebCenter\Sites\11gR1\Sites\11.1.1.6.1\logs\sites.log

    (I am using Win7 Ultimate on a 64-bit machine, if that has any bearing.)

    Read the article

  • Solaris 11 installed, no updates?

    - by Paul De Niro
    I was messing around with Solaris and decided to give Solaris 11 a try, so I downloaded it from the Oracle website. After installing the OS, I went into the package manager and did an update. It told me that there were no available updates! I find this hard to believe considering that it's running vulnerable versions of Firefox and Java, Oracle's own in-house software product. Many of the other software products that came with the default install are also out of date and vulnerable. Is this normal for an Oracle install, or did I do something wrong with the upgrade process? I typed "pkg update" at the prompt, and I noticed that it did call out to pkg.oracle.com looking for updates. I find it bizarre that there are no updates available for an OS that was released a couple of months ago with vulnerable software...

    Read the article

  • SQL SERVER – Microsoft SQL Server Migration Assistant V6.0 Released

    - by Pinal Dave
    Every company makes its own decision about a database when it starts out, but as it matures it makes choices based on experience and the best interest of the organization. Quite a few organizations pick a database such as Sybase, MySQL, Oracle or Access and, as time passes, learn that they want to move to a different platform. Microsoft makes this easy for SQL Server professionals by releasing various Migration Assistant tools. Last week, Microsoft released Microsoft SQL Server Migration Assistant v6.0. Here are the different tools released last week to migrate various products to SQL Server.

    Microsoft SQL Server Migration Assistant v6.0 for Sybase: SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies the database migration process from Sybase Adaptive Server Enterprise (ASE) to SQL Server and Azure SQL DB. SSMA automates all aspects of migration, including migration assessment analysis, schema and SQL statement conversion, data migration, and migration testing.

    Microsoft SQL Server Migration Assistant v6.0 for MySQL: SSMA for MySQL is a free supported tool that simplifies the database migration process from MySQL to SQL Server and Azure SQL DB, automating the same assessment, conversion, data migration, and testing steps.

    Microsoft SQL Server Migration Assistant v6.0 for Oracle: SSMA for Oracle is a free supported tool that simplifies the database migration process from Oracle to SQL Server and Azure SQL DB, automating the same assessment, conversion, data migration, and testing steps.

    Microsoft SQL Server Migration Assistant v6.0 for Access: SSMA for Access automates conversion of Microsoft Access database objects to SQL Server database objects, loads the objects into SQL Server and Azure SQL DB, and then migrates data from Microsoft Access to SQL Server and Azure SQL DB.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Can I use `UnlockCommercialFeatures` for developing Java applications without a commercial license?

    - by nondescript1
    As of Java 7 Update 40, Oracle is including Java Mission Control in the JDK. Being always interested in a new profiling tool, I decided to check it out. However, when trying to start Flight Recorder against a process, I get the following error. Now I'm getting cold feet about adding the JVM option -XX:+UnlockCommercialFeatures. I would only use this for profiling in development, not in production. From the article linked above: JMC is available under the Oracle Binary Code License for Java. The license allows you to use JMC for free during development and testing, though a different (paid-for) licence is required for production use. I'm still leery about this. From Java SE Products, Flight Recorder certainly is a commercial feature; however, I find it very confusing that it's now included in the standard JDK release. Anyone else have a read on this? Clearly nothing here is legally binding and your legal department should be consulted. Reference: Oracle Binary Code License Agreement for the Java SE Platform Products and JavaFX
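    For context, the option discussed above is normally combined with the Flight Recorder flags on the java launcher command line, roughly as shown below. This is only an illustrative sketch of how Flight Recorder is typically enabled on an Oracle JDK 7u40+ runtime; the class name MyApp and the recording settings are invented placeholders, not details from the original question.

        java -XX:+UnlockCommercialFeatures -XX:+FlightRecorder -XX:StartFlightRecording=duration=60s,filename=myapp.jfr MyApp

    Whether running with these flags on a development machine falls under the free development/testing grant is exactly the licensing question raised above.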

    Read the article

  • Latest Security Updates for Java are Available for Download

    - by Akemi Iwaya
    Oracle has released new updates that patch 40 security holes in its Java Runtime Environment software. Anyone who needs or actively uses the Java Runtime Environment for work or gaming should update their Java installation as soon as possible. One thing to keep in mind is that there are limitations placed on updates for older versions of Java, as shown in the following excerpt. If you are using an older version, it is recommended that you update to the Java SE 7 release if possible (depending on your usage circumstances). From The H Security blog post: Only the current version of Java, Java SE 7, will be updated for free; downloads of the new version, Java SE 7 Update 25, are available and existing installs should auto-update. Mac OS X users will get an updated Java SE 6 for their systems as an automatic update; Java SE 7 on Mac OS X is updated by Oracle. Users of other older versions of Java will only get updates if they have a maintenance contract with Oracle. Affected product releases and versions: JDK and JRE 7 Update 21 and earlier; JDK and JRE 6 Update 45 and earlier; JDK and JRE 5.0 Update 45 and earlier; JavaFX 2.2.21 and earlier. Note: If you do not need Java on your system, we recommend uninstalling it entirely or disabling the browser plugin. You can download and read through the details about the latest Java updates by visiting the links shown below.

    Read the article

  • New Skool Crosstabbing

    - by Tim Dexter
    A while back I spoke about having to go back to BIP's original crosstabbing solution to achieve a certain layout. Hok Min has provided a 'man' page for the new crosstab/pivot builder for 10.1.3.4.1 users. It will make it into the documentation eventually, but for now, get it here! The old, hand-built method is still available, but this new approach is more efficient and flexible. That said, you may need to get into the crosstab code to tweak it where the crosstab dialog cannot help. I had to do this myself this week, but more on that later. The following explains how the crosstab wizard builds the crosstab and what the fields inside the resulting template structure are there for.

    To create the crosstab, a new XDO command "<?crosstab:...?>" has been created:

        <?crosstab: ctvarname; data-element; rows; columns; measures; aggregation?>

    The parameters are as follows:

    - ctvarname: The crosstab variable name, automatically generated by the Add-in. Example: C123
    - data-element: The XML data element that contains the data. Example: "//ROW"
    - rows: A list of XML elements for row headers. The ordering information is specified within "{" and "}". The first attribute is the sort element; leaving it blank means the sort element is the same as the row header element. The attribute "o" means order, with value "a" for ascending or "d" for descending. The attribute "t" means type, with value "t" for text or "n" for numeric. There can be more than one sort element, for example "emp-full-name {emp-lastname,o=a,t=n}{emp-firstname,o=a,t=n}", which sorts employees by last name and then first name. Example: "Region{,o=a,t=t}, District{,o=a,t=t}" - the first row header is "Region", sorted by "Region", ascending, text; the second row header is "District", sorted by "District", ascending, text.
    - columns: A list of XML elements for column headers, using the same ordering syntax as rows. Example: "ProductsBrand{,o=a,t=t}, PeriodYear{,o=a,t=t}" - the first column header is "ProductsBrand", sorted by "ProductsBrand", ascending, text; the second column header is "PeriodYear", sorted by "PeriodYear", ascending, text.
    - measures: A list of XML elements for measures. Example: "Revenue, PrevRevenue"
    - aggregation: The aggregation function name. Currently only "sum" is supported.

    Using the Oracle BI Publisher Template Builder for Word add-in, we are able to construct a Pivot Table. The generated XDO command for this Pivot Table is as follows:

        <?crosstab:c547; "//ROW";"Region{,o=a,t=t}, District{,o=a,t=t}"; "ProductsBrand{,o=a,t=t},PeriodYear{,o=a,t=t}"; "Revenue, PrevRevenue";"sum"?>

    Running the command on the given XML data file generates the XML file "cttree.xml". Each XPath in "cttree.xml" is described below (element, XPath, count, description):

    - C0 (/cttree/C0, count 1): Contains elements related to columns.
    - C1 (/cttree/C0/C1, count 4): The first-level column "ProductsBrand". There are four distinct values; they are shown in the label H element.
    - CS (/cttree/C0/C1/CS, count 4): The column-span value, used to format the crosstab table.
    - H (/cttree/C0/C1/H, count 4): The column header label. There are four distinct values: "Enterprise", "Magicolor", "McCloskey" and "Valspar".
    - T1 (/cttree/C0/C1/T1, count 4): The sum for measure 1, which is Revenue.
    - T2 (/cttree/C0/C1/T2, count 4): The sum for measure 2, which is PrevRevenue.
    - C2 (/cttree/C0/C1/C2, count 8): The second-level column "PeriodYear", which is the second group-by key. There are two distinct values, "2001" and "2002".
    - H (/cttree/C0/C1/C2/H, count 8): The column header label. There are two distinct values, "2001" and "2002"; since it is under C1, the total number of entries is 4 x 2 = 8.
    - T1 (/cttree/C0/C1/C2/T1, count 8): The sum for measure 1 "Revenue".
    - T2 (/cttree/C0/C1/C2/T2, count 8): The sum for measure 2 "PrevRevenue".
    - M0 (/cttree/M0, count 1): Contains elements related to measures.
    - M1 (/cttree/M0/M1, count 1): Contains the summary for measure 1.
    - H (/cttree/M0/M1/H, count 1): The measure 1 label, which is "Revenue".
    - T (/cttree/M0/M1/T, count 1): The sum of measure 1 for the entire XPath from "//ROW".
    - M2 (/cttree/M0/M2, count 1): Contains the summary for measure 2.
    - H (/cttree/M0/M2/H, count 1): The measure 2 label, which is "PrevRevenue".
    - T (/cttree/M0/M2/T, count 1): The sum of measure 2 for the entire XPath from "//ROW".
    - R0 (/cttree/R0, count 1): Contains elements related to rows.
    - R1 (/cttree/R0/R1, count 4): The first-level row "Region". There are four distinct values, shown in the label H element.
    - H (/cttree/R0/R1/H, count 4): The row header label for "Region". There are four distinct values: "CENTRAL REGION", "EASTERN REGION", "SOUTHERN REGION" and "WESTERN REGION".
    - RS (/cttree/R0/R1/RS, count 4): The row-span value, used to format the crosstab table.
    - T1 (/cttree/R0/R1/T1, count 4): The sum of measure 1 "Revenue" for each distinct "Region" value.
    - T2 (/cttree/R0/R1/T2, count 4): The sum of measure 2 "PrevRevenue" for each distinct "Region" value.
    - R1C1 (/cttree/R0/R1/R1C1, count 16): Elements from combining R1 and C1. There are four distinct values for "Region" and four for "ProductsBrand", so the combination is 4 x 4 = 16.
    - T1 (/cttree/R0/R1/R1C1/T1, count 16): The sum of measure 1 "Revenue" for each combination of "Region" and "ProductsBrand".
    - T2 (/cttree/R0/R1/R1C1/T2, count 16): The sum of measure 2 "PrevRevenue" for each combination of "Region" and "ProductsBrand".
    - R1C2 (/cttree/R0/R1/R1C1/R1C2, count 32): Elements from combining R1, C1 and C2. There are four distinct values for "Region", four for "ProductsBrand", and two for "PeriodYear", so the combination is 4 x 4 x 2 = 32.
    - T1 (/cttree/R0/R1/R1C1/R1C2/T1, count 32): The sum of measure 1 "Revenue" for each combination of "Region", "ProductsBrand" and "PeriodYear".
    - T2 (/cttree/R0/R1/R1C1/R1C2/T2, count 32): The sum of measure 2 "PrevRevenue" for each combination of "Region", "ProductsBrand" and "PeriodYear".
    - R2 (/cttree/R0/R1/R2, count 18): Elements from combining R1 "Region" and R2 "District". Since the list of values in R2 depends on R1, the number of entries is not a simple multiplication.
    - H (/cttree/R0/R1/R2/H, count 18): The row header label for R2 "District".
    - R1N (/cttree/R0/R1/R2/R1N, count 18): The R2 position number within R1, used to check whether it is the last row and draw the table border accordingly.
    - T1 (/cttree/R0/R1/R2/T1, count 18): The sum of measure 1 "Revenue" for each combination of "Region" and "District".
    - T2 (/cttree/R0/R1/R2/T2, count 18): The sum of measure 2 "PrevRevenue" for each combination of "Region" and "District".
    - R2C1 (/cttree/R0/R1/R2/R2C1, count 72): Elements from combining R1, R2 and C1.
    - T1 (/cttree/R0/R1/R2/R2C1/T1, count 72): The sum of measure 1 "Revenue" for each combination of "Region", "District" and "ProductsBrand".
    - T2 (/cttree/R0/R1/R2/R2C1/T2, count 72): The sum of measure 2 "PrevRevenue" for each combination of "Region", "District" and "ProductsBrand".
    - R2C2 (/cttree/R0/R1/R2/R2C1/R2C2, count 144): Elements from combining R1, R2, C1 and C2, which gives the finest level of detail.
    - M1 (/cttree/R0/R1/R2/R2C1/R2C2/M1, count 144): The sum of measure 1 "Revenue".
    - M2 (/cttree/R0/R1/R2/R2C1/R2C2/M2, count 144): The sum of measure 2 "PrevRevenue".

    Lots to read and digest, I know!

    Customization

    One new feature I discovered this week is the ability to show one column and sort by another. I had a data set that was extracting month abbreviations; we wanted to show the months across the top and some row headers to the side. As you may know, XSL is not great with dates, especially recognising month names. It just wants to sort them alphabetically, so Apr comes before Jan, etc. A way around this is to generate a month number alongside the month and use that to sort. We can do that in the crosstab; sadly it's not exposed in the UI yet, but it's doable. Go back up and take a look at the initial crosstab command, especially the rows and columns entries. In there you will find the sort criteria:

        "ProductsBrand{,o=a,t=t}, PeriodYear{,o=a,t=t}"

    Notice those leading commas inside the curly braces? Because there is no field preceding them, the crosstab sorts on the column before the brace, i.e. PeriodYear. But you can insert another column from the data set to sort by. To get my sort working how I needed:

        <?crosstab:c794;"current-group()";"_Fund_Type_._Fund_Type_Display_{_Fund_Type_._Fund_Type_Sort_,o=a,t=n}";"_Fiscal_Period__Amount__._Amt_Fm_Disp_Abbr_{_Fiscal_Period__Amount__._Amt_Fiscal_Month_Sort_,o=a,t=n}";"_Execution_Facts_._Amt_";"sum"?>

    Excuse the horribly verbose XML tags, good ol' BIEE :0) The sort columns (_Fund_Type_Sort_ and _Amt_Fiscal_Month_Sort_) are not in the crosstab but are in the data set. I just opened up the field, dropped them in and changed the type (t) value to 'n', for number, instead of the default 'a', and my crosstab started sorting how I wanted it. If you find other tips and tricks, please share in the comments.

    Read the article

  • What should JavaScript be renamed to? [closed]

    - by Evan Plaice
    Background: I have been watching Douglas Crockford's series of presentations on JavaScript history (which I highly recommend) lately, and one comment of his specifically piqued my attention: the trademark for 'JavaScript' is owned by Oracle. History: Due to time constraints at Netscape, the language was literally written in weeks and released in very buggy form. To make it seem more appealing, Netscape picked the name JavaScript to appeal to the massively growing population of Java developers. Unfortunately, this pissed off Sun and stirred up a lot of controversy between the two organizations. At some point, they came to an agreement whereby Netscape was given permission to use the name as long as Sun owned the trademark. Some people incorrectly refer to JavaScript as ECMAScript because that's where the standard for the language is registered but, aside from its current marketing-driven label, it doesn't really have a name. Fast forward: Sun goes down, only to be swallowed by Oracle, which has no reservations about litigating for profit and now owns the name. So... if Oracle decides to force JavaScript to take on a new name, what name would best represent the language?

    Read the article

  • Suggested Web Application Framework and Database for Enterprise, “Big-Data” App?

    - by willOEM
    I have a web application that I have been developing for a small group within my company over the past few years, using Pipeline Pilot (plus jQuery and Python scripting) for web development and back-end computation, and Oracle 10g for my RDBMS. Users upload experimental genomic data, which is parsed into a database and made available for querying, transformation, and reporting. Experimental data sets are large and have many layers of metadata. A given experimental data record might have a foreign key relationship with a table that describes this data point's assay. Assays can cover multiple genes, which can have multiple transcripts, which can have multiple mutations, which can affect multiple signaling pathways, etc. Users need to approach this data from any point in those layers of metadata. Since all data sets for a given data type can run over a billion rows, this results in some large, dynamic queries that are hard to predict. New data sets are added on a weekly basis (~1 GB per set). Experimental data is never updated, but the associated metadata can be updated weekly for a few records and yearly for most others. For every data set insert the system sees, there will be between 10 and 100 selects run against it and associated data. It is okay for updates and inserts to run slow, so long as queries run quickly and are as up to date as possible. The application continues to grow in size and scope and is already starting to run slower than I like. I am worried that we have just about outgrown Pipeline Pilot, and perhaps Oracle (as the sole database). Would a NoSQL database or an OLAP system be appropriate here? What web application frameworks work well with systems like this? I'd like the solution to be something scalable, portable and supportable X years down the road. Here is the current state of the application: Web server/data processing: Pipeline Pilot on Windows Server + IIS. Database: Oracle 10g, ~1 TB of data, ~180 tables with several billion-plus-row tables. Network storage: Isilon, ~50 TB of low-priority raw data.

    Read the article

  • What is ODBC?

    According to Microsoft, ODBC is a specification for a database API. This API is database and operating system agnostic because the primary goal of the ODBC API is to be language-independent. The functions of the API are implemented by the manufacturers of DBMS-specific drivers, and developers can call these exposed functions from within their own custom applications to communicate with a DBMS through the language-independent drivers.

    ODBC advantages:
    - Multiple ODBC drivers exist for each DBMS (for example, Oracle's ODBC driver, Merant's Oracle driver, and Microsoft's Oracle driver)
    - ODBC drivers are constantly updated for the latest data types
    - ODBC allows for more control when querying
    - ODBC allows for isolation levels

    ODBC disadvantages:
    - ODBC requires a DSN
    - ODBC is a proxy between an application and a database
    - ODBC is dependent on third-party drivers

    ODBC transaction isolation levels are related to, and limited by, the transaction management capabilities of the data source. The transaction isolation levels are:
    - READ UNCOMMITTED: data is allowed to be read prior to the committing of a transaction
    - READ COMMITTED: data is only accessible after a transaction has completed
    - REPEATABLE READ: the same data value is read during the entire transaction
    - SERIALIZABLE: transactions have no effect on other transactions
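    Since the entry above discusses DSNs and isolation levels in the abstract, here is a minimal, hedged sketch of how a client touches both through ODBC from Java, using the JDBC-ODBC bridge that shipped with the JDK up through Java 7. The DSN name "MyDsn", the credentials, and the orders table are invented placeholders; the isolation-level constant maps onto the READ COMMITTED level listed above.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class OdbcIsolationDemo {
            public static void main(String[] args) throws Exception {
                // Load the JDBC-ODBC bridge driver (present up through Java 7, removed in Java 8)
                Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");

                // Connect through a DSN configured in the ODBC Data Source Administrator
                try (Connection con = DriverManager.getConnection("jdbc:odbc:MyDsn", "user", "password")) {
                    // Request one of the four ANSI isolation levels; the driver or data
                    // source may downgrade it if the level is not supported
                    con.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);

                    try (Statement stmt = con.createStatement();
                         ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM orders")) {
                        if (rs.next()) {
                            System.out.println("Rows visible at READ COMMITTED: " + rs.getLong(1));
                        }
                    }
                }
            }
        }

    The same pattern applies with any ODBC driver behind the DSN; only the data source configuration changes, which is exactly the language- and DBMS-independence the specification is after.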

    Read the article

  • Social Business Forum Milano: Day 1

    - by me
    Here are my impressions of the first day of the Social Business Forum in Milano.

    A dialogue on Social Business Manifesto - Emanuele Scotti, Rosario Sica

    The presentation focused on theses and anti-theses around social business. My favorite one is:

    - Peter H. Reiser @peterreiser: social business manifesto theses #2: organizations are conversations - hello Oracle Social Network #sbf12

    Here are the theses (auto-translated from Italian to English).

    From Stress to Success - Pragmatic pathways for Social Business - John Hagel

    John Hagel talked about the challenges of deploying new social technologies. Below are some key points participants tweeted during the session:

    - Rhiannon Hughes @Rhi_Hughes: Favourite quote this morning 'We need to strengthen the champions & neutralise the enemies' John Hagel. Not a hard task at all #sbf12
    - Elena Torresani @ElenaTorresani: Minimize the power of the enemies of change. Maximize the power of the champions - John Hagel #sbf12
    - Gaetano Mazzanti @mgaewsj: John Hagel change: minimize the power of the enemies #sbf12
    - Gaetano Mazzanti @mgaewsj: John Hagel social software as band-aid for poor leadtime/waste management? mmm #sbf12
    - Elena Torresani @ElenaTorresani: "information is power. We need access to information to get power" John Hagel, Deloitte & Touche #sbf12
    - Italo Marconi @italomarconi: Information is power and Knowledge is subversive. John Hagel #sbf12
    - danielce @danielce: #sbf12 john Hagel: innovation is not rational.
    - Gaetano Mazzanti @mgaewsj: John Hagel: change is a political (not rational) process #sbf12

    Enterprise gamification to drive engagement - Ray Wang

    Ray Wang gave an excellent speech on engagement strategies and gamification. More details can be found on the Harvard Business Review blog.

    Panel Discussion: Does technology matter? Understanding how software enables or prevents participation - Christian Finn, Ram Menon, Mike Gotta, moderated by Paolo Calderari

    Below are the highlights of the panel discussion, captured as live tweets by Peter H. Reiser (@peterreiser):

    - Q: How is technology important in social business? A: @cfinn - technology is an enabler; user experience and ease of use are important #sbf12
    - @MikeGotta A: How to bring interfaces together; going from point tools to platform, the UI, architecture and eco-system are important #sbf12
    - Ram Menon: A CEO mandated replacing 6,500 email aliases with social networking software #sbf12
    - @MikeGotta: Just replacing old technology (e.g. email) with new technology does not help; we need to change the model and attitude #sbf12
    - Ram Menon: There is only one question to ask: what is the adoption? #sbf12
    - @MikeGotta: Why just adoption? Email has 100% adoption #sbf12
    - @cfinn: Driving adoption is important. @MikeGotta: the activity stream and watch list are the most important features in a social system #sbf12
    - @MikeGotta: Adoption is specific to the culture of the company #sbf12
    - Ram Menon: 100% of data is in a system somewhere. 100% of collective intelligence is with people. Social systems bridge both worlds #sbf12
    - Q: How can technology facilitate adoption? #sbf12
    - Ram Menon on adoption: What should I measure? It depends on the business goal you want to achieve #sbf12
    - @MikeGotta on adoption: How can we measure quality? Literacy - are people confident enough to talk to an invisible audience? #sbf12
    - @cfinn on adoption: User experience is king - no training needed - we let you participate in a conversation via mobile and email #sbf12
    - Q: What is the evolution of social business solutions in the next 4-5 years? #sbf12
    - Ram Menon A: Social + mobile + consumerization are coming together #sbf12
    - @MikeGotta A: It's more social by design than social by layer - a better work experience using social design #sbf12
    - @cfinn A: Social will be less siloed and more integrated into application design. Analytics is key to making intelligent decisions #sbf12
    - Q: Social silos: with the mega trend of social suites, do we create social silos + app silos + org silos? #sbf12

    Read the article

  • SQL SERVER – Migration Assistant Upgraded to Support SQL Server 2014

    - by Pinal Dave
    We all start somewhere when it comes to databases. There are different reasons why we go for one database over another; usually the reason is cost and convenience. After a period of time, when the business is successful and traffic is growing, those same two reasons of cost and convenience start to become secondary goals. I have seen quite a lot of companies start with free databases and after a while switch to another database because they want stability and service from the product company. Microsoft has an excellent product which lets you migrate your database from an alternate database to SQL Server. It is called SQL Server Migration Assistant (SSMA), and earlier this week it was upgraded to support SQL Server 2014. Now you can migrate from your database to all editions of SQL Server 2005, SQL Server 2008, SQL Server 2008 R2, SQL Server 2012 and SQL Server 2014. SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft. Here is where you can download SSMA v5.3 for various databases: Microsoft SQL Server Migration Assistant v5.3 for Access, a tool to automate migration from Microsoft Access database(s) to SQL Server; Microsoft SQL Server Migration Assistant v5.3 for Oracle, a tool to automate migration from an Oracle database to SQL Server; Microsoft SQL Server Migration Assistant v5.3 for Sybase, a tool to automate migration from a Sybase ASE database to SQL Server; and Microsoft SQL Server Migration Assistant v5.3 for MySQL, a tool to automate migration from a MySQL database to SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • How do I resolve a plugin conflict in Eclipse?

    - by Jason Thompson
    I'd like to upgrade my Helios installation of Eclipse to Indigo. When I do, I get the following message:

        Cannot complete the install because of a conflicting dependency.
        Software being installed: Eclipse IDE for Java EE Developers 1.4.2.20120213-0813 (epp.package.jee 1.4.2.20120213-0813)
        Software currently installed: Oracle GlassFish Server Tools 1.6.1.201009290929 (oracle.eclipse.tools.helios.glassfish.feature.group 1.6.1.201009290929)

    So my first thought was to simply uninstall GlassFish. For the life of me, I can't figure out how and where to do this. I went to Help > About Eclipse... > Installation Details. The only place where it looks like I can uninstall anything is the "Installed Software" tab, and I do not see the Oracle GlassFish package anywhere. If I go to "Features" or "Plug-ins", I can find it just fine, but there is no option to uninstall. So my next thought was to upgrade GlassFish, so I put the Indigo repo in there, but I still get the same message when trying to update. Any ideas?

    Read the article

  • Top 5 Developer Enabling Nuggets in MySQL 5.6

    - by Rob Young
    MySQL 5.6 is truly a better MySQL and reflects Oracle's commitment to the evolution of the most popular and widelyused open source database on the planet.  The feature-complete 5.6 release candidate was announced at MySQL Connect in late September and the production-ready, generally available ("GA") product should be available in early 2013.  While the message around 5.6 has been focused mainly on mass appeal, advanced topics like performance/scale, high availability, and self-healing replication clusters, MySQL 5.6 also provides many developer-friendly nuggets that are designed to enable those who are building the next generation of web-based and embedded applications and services. Boiling down the 5.6 feature set into a smaller set, of simple, easy to use goodies designed with developer agility in mind, these things deserve a quick look:Subquery Optimizations Using semi-JOINs and late materialization, the MySQL 5.6 Optimizer delivers greatly improved subquery performance. Specifically, the optimizer is now more efficient in handling subqueries in the FROM clause; materialization of subqueries in the FROM clause is now postponed until their contents are needed during execution. Additionally, the optimizer may add an index to derived tables during execution to speed up row retrieval. Internal tests run using the DBT-3 benchmark Query #13, shown below, demonstrate an order of magnitude improvement in execution times (from days to seconds) over previous versions. select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity)from customer, orders, lineitemwhere o_orderkey in (                select l_orderkey                from lineitem                group by l_orderkey                having sum(l_quantity) > 313  )  and c_custkey = o_custkey  and o_orderkey = l_orderkeygroup by c_name, c_custkey, o_orderkey, o_orderdate, o_totalpriceorder by o_totalprice desc, o_orderdateLIMIT 100;What does this mean for developers?  For starters, simplified subqueries can now be coded instead of complex joins for cross table lookups: SELECT title FROM film WHERE film_id IN (SELECT film_id FROM film_actor GROUP BY film_id HAVING count(*) > 12); And even more importantly subqueries embedded in packaged applications no longer need to be re-written into joins.  This is good news for both ISVs and their customers who have access to the underlying queries and who have spent development cycles writing, testing and maintaining their own versions of re-written queries across updated versions of a packaged app.The details are in the MySQL 5.6 docs. Online DDL OperationsToday's web-based applications are designed to rapidly evolve and adapt to meet business and revenue-generationrequirements. As a result, development SLAs are now most often measured in minutes vs days or weeks. For example, when an application must quickly support new product lines or new products within existing product lines, the backend database schema must adapt in kind, and most commonly while the application remains available for normal business operations.  MySQL 5.6 supports this level of online schema flexibility and agility by providing the following new ALTER TABLE online DDL syntax additions:  CREATE INDEX DROP INDEX Change AUTO_INCREMENT value for a column ADD/DROP FOREIGN KEY Rename COLUMN Change ROW FORMAT, KEY_BLOCK_SIZE for a table Change COLUMN NULL, NOT_NULL Add, drop, reorder COLUMN Again, the details are in the MySQL 5.6 docs. 
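    To make the online DDL behavior above concrete, here is a hedged JDBC sketch of an online index build. The table name orders, the column customer_id, and the connection settings are invented for illustration; the ALGORITHM and LOCK clauses simply request the in-place, non-locking behavior MySQL 5.6 introduces for this kind of ALTER TABLE.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class OnlineDdlSketch {
            public static void main(String[] args) throws Exception {
                try (Connection con = DriverManager.getConnection(
                        "jdbc:mysql://localhost:3306/shop", "app", "secret");
                     Statement stmt = con.createStatement()) {
                    // Build a secondary index online: other sessions can keep reading
                    // and writing the table while the index is created.
                    stmt.execute("ALTER TABLE orders "
                            + "ADD INDEX idx_orders_customer (customer_id), "
                            + "ALGORITHM=INPLACE, LOCK=NONE");
                }
            }
        }

    If the requested algorithm or lock level cannot be honored for a particular operation, the statement fails with an error rather than silently taking the table offline, which makes the intent explicit in deployment scripts.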
Key-value access to InnoDB via Memcached APIMany of the next generation of web, cloud, social and mobile applications require fast operations against simple Key/Value pairs. At the same time, they must retain the ability to run complex queries against the same data, as well as ensure the data is protected with ACID guarantees. With the new NoSQL API for InnoDB, developers have allthe benefits of a transactional RDBMS, coupled with the performance capabilities of Key/Value store.MySQL 5.6 provides simple, key-value interaction with InnoDB data via the familiar Memcached API.  Implemented via a new Memcached daemon plug-in to mysqld, the new Memcached protocol is mapped directly to the native InnoDB API and enables developers to use existing Memcached clients to bypass the expense of query parsing and go directly to InnoDB data for lookups and transactional compliant updates.  The API makes it possible to re-use standard Memcached libraries and clients, while extending Memcached functionality by integrating a persistent, crash-safe, transactional database back-end.  The implementation is shown here:So does this option provide a performance benefit over SQL?  Internal performance benchmarks using a customized Java application and test harness show some very promising results with a 9X improvement in overall throughput for SET/INSERT operations:You can follow the InnoDB team blog for the methodology, implementation and internal test cases that generated these results here. How to get started with Memcached API to InnoDB is here. New Instrumentation in Performance SchemaThe MySQL Performance Schema was introduced in MySQL 5.5 and is designed to provide point in time metrics for key performance indicators.  MySQL 5.6 improves the Performance Schema in answer to the most common DBA and Developer problems.  New instrumentations include: Statements/Stages What are my most resource intensive queries? Where do they spend time? Table/Index I/O, Table Locks Which application tables/indexes cause the most load or contention? Users/Hosts/Accounts Which application users, hosts, accounts are consuming the most resources? Network I/O What is the network load like? How long do sessions idle? Summaries Aggregated statistics grouped by statement, thread, user, host, account or object. The MySQL 5.6 Performance Schema is now enabled by default in the my.cnf file with optimized and auto-tune settings that minimize overhead (< 5%, but mileage will vary), so using the Performance Schema ona production server to monitor the most common application use cases is less of an issue.  In addition, new atomic levels of instrumentation enable the capture of granular levels of resource consumption by users, hosts, accounts, applications, etc. for billing and chargeback purposes in cloud computing environments.The MySQL docs are an excellent resource for all that is available and that can be done with the 5.6 Performance Schema. Better Condition Handling - GET DIAGNOSTICSMySQL 5.6 enables developers to easily check for error conditions and code for exceptions by introducing the new MySQL Diagnostics Area and corresponding GET DIAGNOSTICS interface command. 
The Diagnostic Area can be populated via multiple options and provides 2 kinds of information:Statement - which provides affected row count and number of conditions that occurredCondition - which provides error codes and messages for all conditions that were returned by a previous operation The addressable items for each are: The new GET DIAGNOSTICS command provides a standard interface into the Diagnostics Area and can be used via the CLI or from within application code to easily retrieve and handle the results of the most recent statement execution.  An example of how it is used might be:mysql> DROP TABLE test.no_such_table; ERROR 1051 (42S02): Unknown table 'test.no_such_table' mysql> GET DIAGNOSTICS CONDITION 1 -> @p1 = RETURNED_SQLSTATE, @p2 = MESSAGE_TEXT; mysql> SELECT @p1, @p2; +-------+------------------------------------+| @p1   | @p2                                | +-------+------------------------------------+| 42S02 | Unknown table 'test.no_such_table' | +-------+------------------------------------+ Options for leveraging the MySQL Diagnotics Area and GET DIAGNOSTICS are detailed in the MySQL Docs.While the above is a summary of some of the key developer enabling 5.6 features, it is by no means exhaustive. You can dig deeper into what MySQL 5.6 has to offer by reading this developer zone article or checking out "What's New in MySQL 5.6" in the MySQL docs.BONUS ALERT!  If you are developing on Windows or are considering MySQL as an alternative to SQL Server for your next project, application or shipping product, you should check out the MySQL Installer for Windows.  The installer includes the MySQL 5.6 RC database, all drivers, Visual Studio and Excel plugins, tray monitor and development tools all a single download and GUI installer.   So what are your next steps? Register for Dec. 13 "MySQL 5.6: Building the Next Generation of Web-Based Applications and Services" live web event.  Hurry!  Seats are limited. Download the MySQL 5.6 Release Candidate (look under the Development Releases tab) Provide Feedback <link to http://bugs.mysql.com/> Join the Developer discussion on the MySQL Forums Explore all MySQL Products and Developer Tools As always, thanks for your continued support of MySQL!
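    As a footnote to the Performance Schema section above, here is a hedged JDBC sketch of the "what are my most resource-intensive queries?" question. It assumes a MySQL 5.6 server with the Performance Schema enabled (the default) and invented connection credentials; the statement digest summary table it reads is part of the 5.6 Performance Schema.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class TopStatementsByDigest {
            public static void main(String[] args) throws Exception {
                String sql = "SELECT DIGEST_TEXT, COUNT_STAR, SUM_TIMER_WAIT "
                        + "FROM performance_schema.events_statements_summary_by_digest "
                        + "ORDER BY SUM_TIMER_WAIT DESC LIMIT 5";
                try (Connection con = DriverManager.getConnection(
                        "jdbc:mysql://localhost:3306/performance_schema", "app", "secret");
                     Statement stmt = con.createStatement();
                     ResultSet rs = stmt.executeQuery(sql)) {
                    while (rs.next()) {
                        // SUM_TIMER_WAIT is reported in picoseconds
                        System.out.printf("%-60.60s calls=%d total_wait_ps=%d%n",
                                rs.getString("DIGEST_TEXT"),
                                rs.getLong("COUNT_STAR"),
                                rs.getLong("SUM_TIMER_WAIT"));
                    }
                }
            }
        }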

    Read the article

  • Our winners - and some BBQ for everyone

    - by Steve Tunstall
    Congrats to our two winners for the first two comments on my last entry. Steve from Australia and John Lemon. Steve won since he was the first person over the International Date Line to see the post I made so late after a workday on Friday. So not only does he get to live in a country with the 2nd most beautiful women in the world, but now he gets some cool Oracle Swag, too. (Yes, I live on the beach in southern California, so you can guess where 1st place is for that other contest…Now if Steve happens to live in Manly, we may actually have a tie going…) OK, ok, for everyone else, you can be winners, too. How you ask? I will make you the envy of every guy and gal in your neighborhood or campsite. What follows is the way to smoke the best ribs you or anyone you know have ever tasted. Follow my instructions and give it a try. People at your party/cookout/campsite will tell you that they’re the best ribs they’ve ever had, and I will let you take all the credit. Yes, I fully realize this post is going to be longer than any post I’ve done yet. But let’s get serious here. Smoking meat is much more important, agreed? J In all honesty, this is a repeat of another blog I did, so I’m just copying and pasting. Step 1. Get some ribs. I actually really like Costco’s pack. They have both St. Louis and Baby Back. (They are the same ribs, but cut in half down the sides. St. Louis style is the ‘front’ of the ribs closest to the stomach, and ‘Baby back’ is the part of the ribs where is connects to the backbone). I like them both, so here you see I got one pack of each. About 4 racks to a pack. So these two packs for $25 each will feed about 16-20 of my guests. So around 3 bucks a person is a pretty good deal for the best ribs you’ll ever have. Step 2. Prep the ribs the night before you’re going to smoke. You need to trim them to fit your smoker racks, and also take off the membrane and add your rub. Then cover and set in fridge overnight. Here’s how to take off the membrane, which will not break down with heat and smoke like the rest of the meat, so must be removed. Use a butter knife to work in a ways between the membrane and the white bone. Just enough to make room for your finger. Try really hard not to poke through the membrane, you want to keep it whole. See how my gloved fingers can now start to lift up and pull off the membrane? This is what you are trying to do. It’s awesome when the whole thing can come off at once. This one is going great, maybe the best one I’ve ever done. Sometime, it falls apart and doesn't come off in one nice piece. I hate when that happens. Now, add your rub and pat it down once into the meat with your other hand. My rub is not secret. I got it from my mentor, a BBQ competitive chef who is currently ranked #1 in California and #3 in the nation on the BBQ circuit. He does full-day classes in southern California if anyone is interested in taking his class. Go to www.slapyodaddybbq.com to check him out. I tweaked his run recipe a tad and made my own. It’s one part Lawry’s, one part sugar, one part Montreal Steak Seasoning, one part garlic powder, one-half part red chili powder, one-half part paprika, and then 1/20th part cayenne. You can adjust that last ingredient, or leave it out. Real cheap stuff you can get at Costco. This lets you make enough rub to last about a year or two. Don’t make it all at once, make a shaker’s worth and use it up before you make more. Place it all in a bowl, mix well, and then add to a shaker like you see here. 
You can get a shaker with medium sized holes on it at any restaurant supply store or Smart & Final. The kind you see at pizza places for their red pepper flakes works best. Now cover and place in fridge overnight. Step 3. The next day. Ok, I’m ready to go. Get your stuff together. You will need your smoker, some good foil, a can of peach nectar, a bottle of Agave syrup, and a package of brown sugar. You will need this stuff later. I also use a clean spray bottle, and apple juice. Step 4. Make your fire, or turn on your electric smoker. In this example I’m using my portable charcoal smoker. I got this for only $40. I then modified it to be useful. Once modified, these guys actually work very well. Trust me, your food DOES NOT KNOW how expensive your smoker is. Someone who tells you that you need to spend a bunch of money on a smoker is an idiot. I also have an electric smoker that stays in my backyard. It’s cleaner and larger so I can smoke more food. But this little $40 one works great for going camping. Here is what my fire-bowl looks like. I leave a space in the middle open, and place cold charcoal and wood chucks in a circle going outwards. This makes it so when I dump the hot coals down the middle, they will slowly burn outwards, hitting different wood chucks at different times, allowing me to go 4-5 hours without having to even touch my fire. For ribs, I use apple and pecan wood. Pecan works for anything. Apple or any fruit wood is excellent for pork. So now I make my hot charcoal with a chimney only about half-full. I found a great use for that side-burner on my grill that I never use. It makes a fantastic chimney starter. You never use fluids of any kind, nor ever use that stupid charcoal that has lighter fluid built into it. Never, ever, ever. Step 5. Smoke. Add your ribs in the racks and stack them up in your smoker. I have a digital thermometer on a probe that I use to keep track of the temp in the smoker. I just lay the probe on the top rack and shut the lid. This cheap guy is a little harder to maintain the right temperature of around 225 F, so I do have to keep my eye on it more than my electric one or a more expensive charcoal one with the cool gadgets that regulate your temp for you. Every hour, spray apple juice all over your ribs using that spray bottle. After about 3 hours, you should have a very good crust (called the Bark) on your ribs. Once you have the Bark where you want it, carefully remove your ribs and place them in a tray. We are now ready for a very important part to make the flavor. Get a large piece of foil and place one rib section on it. Splash some of the peach nectar on it, and then a drizzle of the Agave syrup. Then, use your gloved hand to pack on some brown sugar. Do this on BOTH sides, and then completely wrap it up TIGHT in the foil. Do this for each rib section, and then place all the wrapped sections back into the smoker for another 4 to 6 hours. This is where the meat will get tender and flavorful. The first three hours is only to make the smoke bark. You don’t need smoke anymore, since the ribs are wrapped, you only need to keep the heat around 225 for the next 4-6 hours. Obviously you don’t spray anymore. Just time and slow heat. Be patient. It’s actually really hard to overdo it. You can let them go longer, and all that will happen is they will get even MORE tender!!! If you take them out too soon, they will be tough. How do you know? Take out one package (use long tongs) and open it up. 
If you grab a bone with your tongs and it just falls apart and breaks away from the rest of the meat, you are done!!! Enjoy!!! Step 6. Eat. It pulls apart like this when it’s done. By the way, smoking tri-tip is way easier. Just rub it with the same rub, and put in your smoker for about 2.5 hours at 250 F. That’s it. Low-maintenance. It comes out like this, with a fantastic smoke ring and amazing flavor. Thanks, and I will put up another good tip, about the ZFSSA, around the end of November. Steve 

    Read the article

  • Towards Ultra-Reusability for ADF - Adaptive Bindings

    - by Duncan Mills
    The task flow mechanism embodies one of the key value propositions of the ADF Framework, it's primary contribution being the componentization of your applications and implicitly the introduction of a re-use culture, particularly in large applications. However, what if we could do more? How could we make task flows even more re-usable than they are today? Well one great technique is to take advantage of a feature that is already present in the framework, a feature which I will call, for want of a better name, "adaptive bindings". What's an adaptive binding? well consider a simple use case.  I have several screens within my application which display tabular data which are all essentially identical, the only difference is that they happen to be based on different data collections (View Objects, Bean collections, whatever) , and have a different set of columns. Apart from that, however, they happen to be identical; same toolbar, same key functions and so on. So wouldn't it be nice if I could have a single parametrized task flow to represent that type of UI and reuse it? Hold on you say, great idea, however, to do that we'd run into problems. Each different collection that I want to display needs different entries in the pageDef file and: I want to continue to use the ADF Bindings mechanism rather than dropping back to passing the whole collection into the taskflow   If I do use bindings, there is no way I want to have to declare iterators and tree bindings for every possible collection that I might want the flow to handle  Ah, joy! I reply, no need to panic, you can just use adaptive bindings. Defining an Adaptive Binding  It's easiest to explain with a simple before and after use case.  Here's a basic pageDef definition for our familiar Departments table.  <executables> <iterator Binds="DepartmentsView1" DataControl="HRAppModuleDataControl" RangeSize="25"             id="DepartmentsView1Iterator"/> </executables> <bindings> <tree IterBinding="DepartmentsView1Iterator" id="DepartmentsView1">   <nodeDefinition DefName="oracle.demo.model.vo.DepartmentsView" Name="DepartmentsView10">     <AttrNames>       <Item Value="DepartmentId"/>         <Item Value="DepartmentName"/>         <Item Value="ManagerId"/>         <Item Value="LocationId"/>       </AttrNames>     </nodeDefinition> </tree> </bindings>  Here's the adaptive version: <executables> <iterator Binds="${pageFlowScope.voName}" DataControl="HRAppModuleDataControl" RangeSize="25"             id="TableSourceIterator"/> </executables> <bindings> <tree IterBinding="TableSourceIterator" id="GenericView"> <nodeDefinition Name="GenericViewNode"/> </tree> </bindings>  You'll notice three changes here.   Most importantly, you'll see that the hard-coded View Object name  that formally populated the iterator Binds attribute is gone and has been replaced by an expression (${pageFlowScope.voName}). This of course, is key, you can see that we can pass a parameter to the task flow, telling it exactly what VO to instantiate to populate this table! I've changed the IDs of the iterator and the tree binding, simply to reflect that they are now re-usable The tree binding itself has simplified and the node definition is now empty.  Now what this effectively means is that the #{node} map exposed through the tree binding will expose every attribute of the underlying iterator's collection - neat! 
(kudos to Eugene Fedorenko at this point who reminded me that this was even possible in his excellent "deep dive" session at OpenWorld  this year) Using the adaptive binding in the UI Now we have a parametrized  binding we have to make changes in the UI as well, first of all to reflect the new ID that we've assigned to the binding (of course) but also to change the column list from being a fixed known list to being a generic metadata driven set: <af:table value="#{bindings.GenericView.collectionModel}" rows="#{bindings.GenericView.rangeSize}"         fetchSize="#{bindings.GenericView.rangeSize}"           emptyText="#{bindings.GenericView.viewable ? 'No data to display.' : 'Access Denied.'}"           var="row" rowBandingInterval="0"           selectedRowKeys="#{bindings.GenericView.collectionModel.selectedRow}"           selectionListener="#{bindings.GenericView.collectionModel.makeCurrent}"           rowSelection="single" id="t1"> <af:forEach items="#{bindings.GenericView.attributeDefs}" var="def">   <af:column headerText="#{bindings.GenericView.labels[def.name]}" sortable="true"            sortProperty="#{def.name}" id="c1">     <af:outputText value="#{row[def.name]}" id="ot1"/>     </af:column>   </af:forEach> </af:table> Of course you are not constrained to a simple read only table here.  It's a normal tree binding and iterator that you are using behind the scenes so you can do all the usual things, but you can see the value of using ADFBC as the back end model as you have the rich pantheon of UI hints to use to derive things like labels (and validators and converters...)  One Final Twist  To finish on a high note I wanted to point out that you can take this even further and achieve the ultra-reusability I promised. Here's the new version of the pageDef iterator, see if you can notice the subtle change? <iterator Binds="{pageFlowScope.voName}"  DataControl="${pageFlowScope.dataControlName}" RangeSize="25"           id="TableSourceIterator"/>  Yes, as well as parametrizing the collection (VO) name, we can also parametrize the name of the data control. So your task flow can graduate from being re-usable within an application to being truly generic. So if you have some really common patterns within your app you can wrap them up and reuse then across multiple developments without having to dictate data control names, or connection names. This also demonstrates the importance of interacting with data only via the binding layer APIs. If you keep any code in the task flow generic in that way you can deal with data from multiple types of data controls, not just one flavour. Enjoy!

    Read the article

  • What are the memory-management capabilities of MySQL + JDBC (in light of autonomic computing)?

    - by Adel
    I'm interested in implementing some kind of autonomic-computing functionality using MySQL. By autonomic computing I mean, roughly, some failsafe abilities, whereby the application appears to be at least slightly "intelligent". For reference, the main parts of autonomic computing we'd like are the "self-configuring" and "self-healing" features (the other two, "self-optimizing" and "self-protecting", are too abstract/futuristic for us at this time). So, for example, if we have a sample Java application that utilizes a MySQL database, we might want to automatically restart the MySQL database if it takes up too much memory. Or maybe we want the ability to dynamically adjust the database memory as needed. So, for example, when we start the application the database begins with a 56 MB buffer; but then, as we insert more and more rows, we want it to automatically jump up to 512 MB, then to 1024 MB, up to a maximum of 4096 MB. Does all of the above suggest that MySQL is too "weak" for the task? Do you suggest using Oracle Database? My professor believes that by using Java we can basically make up for any memory-management deficiencies that MySQL has in relation to Oracle DB. I'm new to MySQL, but have experience with Oracle. If all of the above sounds wishy-washy, it is because I'm still fleshing it out. Thanks

    Read the article

  • Java and .NET cost of use [on hold]

    - by 1110
    I have worked with the .NET technology stack for about 4 years. I am learning and enjoy working with the ASP.NET MVC framework, and I have never done anything serious in other languages. This is not a "which is better" question (I have read all the similar questions). What interests me is the cost of switching. For example: suppose you are about to start a start-up company today and you are in my situation: not much money, a good idea that you think others will use, and knowledge of .NET. In my head I have a few questions that I can't answer, and I know that somebody with experience can: 1) Java & .NET hosting. Suppose shared hosting is not good enough anymore, your site has grown, and you need more resources. How much cheaper is Java hosting compared to .NET? 2) I didn't follow the long-running hype about Oracle killing Java. Does Oracle show interest in investing in Java? I mean, is it safe to bet on Java as a technology when starting a start-up (basically, has Oracle shown any will to destroy the Java platform)? 3) I am not sure what I am asking here. When you use Java you can use the Java EE stack, or Java with a third-party stack (Spring, Hibernate, Maven, etc.). I have seen a lot of projects take the second option when the web application is not enterprise level but, for example, a social networking site; which stack is the best pick? The summary of this question is: is it safe to jump into Java, learn it, and build a product based on it? It's not too hard for me to learn. But how much can I get from it?

    Read the article

  • SOA Suite 11g Native Format Builder Complex Format Example

    - by bob.webster
    This rather long posting details the steps required to process a grouping of fixed length records using Format Builder.   If it’s 10 pm and you’re feeling beat you might want to leave this until tomorrow.  But if it’s 10 pm and you need to get a Format Builder Complex template done, read on… The goal is to process individual orders from a file using the 11g File Adapter and Format Builder Sample Data =========== 001Square Widget            0245.98 102Triagular Widget         1120.00 403Circular Widget           0099.45 ORD8898302/01/2011 301Hexagon Widget         1150.98 ORD6735502/01/2011 The records are fixed length records representing a number of logical Order records. Each order record consists of a number of item records starting with a 3 digit number, followed by a single Summary Record which starts with the constant ORD. How can this file be processed so that the first poll returns the first order? 001Square Widget            0245.98 102Triagular Widget         1120.00 403Circular Widget           0099.45 ORD8898302/01/2011 And the second poll returns the second order? 301Hexagon Widget           1150.98 ORD6735502/01/2011 Note: if you need more than one order per poll, that’s also possible, see the “Multiple Messages” field in the “File Adapter Step 6 of 9” snapshot further down.   To follow along with this example you will need - Studio Edition Version 11.1.1.4.0    with the   - SOA Extension for JDeveloper 11.1.1.4.0 installed Both can be downloaded from here:  http://www.oracle.com/technetwork/middleware/soasuite/downloads/index.html You will not need a running WebLogic Server domain to complete the steps and Format Builder tests in this article.     Start with a SOA Composite containing a File Adapter The Format Builder is part of the File Adapter so start by creating a new SOA Project and Composite. Here is a quick summary for those not familiar with these steps - Start JDeveloper - From the Main Menu choose File->New - In the New Gallery window that opens Expand the “General” category and Select the Applications node.   Then choose SOA Application from the Items section on the right.  Finally press the OK button. - In Step 1 of the “Create SOA Application wizard” that appears enter an Application Name and an Directory of your     choice,   then press the Next button. - In Step 2 of the “Create SOA Application wizard”, press the Next button leaving all entries as defaulted. - In Step 3 of the “Create SOA Application wizard”, Enter a composite name of your choice and Press the Finish   Button These steps result in a new Application and SOA Project. The SOA Project contains a composite.xml file which is opened and shown below. For our example we have not defined a Mediator or a BPEL process to minimize the steps, but one or the other would eventually be needed to use the File Adapter we are about to create. Drag and drop the File Adapter icon from the Component Pallette onto either the LEFT side of the diagram under “Exposed Services” or the right side under “External References”.  (See the Green Circle in the image below).  Placing the adapter on the left side would indicate the file being processed is inbound to the composite, if the adapter is placed on the right side then the data is outbound to a file.     Note that the same Format Builder definition can be used in both directions.  
For example we could use the format with a File Adapter on the left side of the composite to parse fixed data into XML, modify the data in our Composite or BPEL process and then use the same Format Builder definition with a File adapter on the right side of the composite to write the data back out in the same fixed data format When the File Adapter is dropped on the Composite the File Adapter Wizard Appears. Skip Past the first page, Step 1 of 9 by pressing the Next button. In Step 2 enter a service name of your choice as shown below, then press Next   When the Native Format Builder appears, skip the welcome page by pressing next. Also press the Next button to accept the settings on Step 3 of 9 On Step 4, select Read File and press the Next button as shown below.   On Step 5 enter a directory that will contain a file with the input data, then  Press the Next button as shown below. In step 6, enter *.txt or another file format to select input files from the input directory mentioned in step 5. ALSO check the “Files contain Multiple Messages” checkbox and set the “Publish Messages in Batches of” field to 1.  The value can be set higher to increase the number of logical order group records returned on each poll of the file adapter.  In other words, it determines the number of Orders that will be sent to each instance of a Mediator or Composite processing using the File Adapter.   Skip Step 7 by pressing the Next button In Step 8 press the Gear Icon on the right side to load the Native Format Builder.       Native Format Builder  appears Before diving into the format, here is an overview of the process. Approach - Bottom up Assuming an Order is a grouping of item records and a summary record…. - Define a separate  Complex Type for each Record Type found in the group.    (One for itemRecord and one for summaryRecord) - Define a Complex Type to contain the Group of Record types defined above   (LogicalOrderRecord) - Define a top level element to represent an order.  (order)   The order element will be of type LogicalOrderRecord   Defining the Format In Step 1 select   “Create new”  and  “Complex Type” and “Next”   In Step two browse to and select a file containing the test data shown at the start of this article. A link is provided at the end of this article to download a file containing the test data. Press the Next button     In Step 3 Complex types must be define for each type of input record. Select the Root-Element and Click on the Add Complex Type icon This creates a new empty complex type definition shown below. The fastest way to create the definition is to highlight the first line of the Sample File data and drag the line onto the  <new_complex_type> Format Builder introspects the data and provides a grid to define additional fields. Change the “Complex Type Name” to  “itemRecord” Then click on the ruler to indicate the position of fixed columns.  Drag the red triangle icons to the exact columns if necessary. Double click on an existing red triangle to remove an unwanted entry. In the case below fields are define in columns 0-3, 4-28, 29-eol When the field definitions are correct, press the “Generate Fields” button. Field entries named C1, C2 and C3 will be created as shown below. Click on the field names and rename them from C1->itemNum, C2->itemDesc and C3->itemCost  When all the fields are correctly defined press OK to save the complex type.        Next, the process is repeated to define a Complex Type for the SummaryRecord. 
Select the Root-Element in the schema tree and press the new complex type icon, then highlight and drag the Summary Record from the sample data onto the <new_complex_type>.   Change the complex type name to "summaryRecord". Mark the fixed fields for Order Number and Order Date. Press the Generate Fields button and rename C1 and C2 to itemNum and orderDate respectively.   The last complex type to be defined is a type to hold the group of items and the summary record. Select the Root-Element in the schema tree and click the new complex type icon. Select the "<new_complex_type>" entry and click the pencil icon.   On the Complex Type Details page change the name and type of each input field. Change line 1 to be named item and set the Type to "itemRecord". Change line 2 to be named summary and set the Type to "summaryRecord". We also need to indicate that itemRecords repeat in the input file. Click the pencil icon at the right side of the item line. On the Edit Details page change the "Max Occurs" entry from 1 to UNBOUNDED. We also need to indicate how to identify an itemRecord.  Since each item record has "." in column 32 we can use this fact to differentiate an item record from a summary record. Change the "Look Ahead" field to the value 32 and enter a period in the "Look For" field. Press the OK button to save the entry.     Finally, it's time to create a top level element to represent an order. Select the "Root-Element" in the schema tree and press the New element icon. Click on the <new_element> and press the pencil icon.   Set the Element Name to "order" and change the Data Type to "logicalOrderRecord". Press the OK button to save the element definition.   The final definition should match the screenshot below. Press the Next Button to view the definition source.     Press the Test Button to test the definition.   Press the Green Triangle Icon to run the test.   And we are presented with an unwelcome error. The error states that the processor ran out of data while working through the definition. The processor was unable to differentiate between itemRecords and summaryRecords and therefore treated the entire file as a list of itemRecords.  At end of file, the "summary" portion of the logicalOrderRecord remained unprocessed but mandatory.   The root cause of this error is the loss of our "lookAhead" definition used to identify itemRecords. This appears to be a bug in the Native Format Builder 11.1.1.4.0. Luckily, a simple workaround exists. Press the Cancel button and return to the "Step 4 of 4" Window. Manually add    nxsd:lookAhead="32" nxsd:lookFor="."   attributes after the maxOccurs attribute of the item element, as shown in the highlighted text below.   When the lookAhead and lookFor attributes have been added, press the Test button and on the Test page press the Green Triangle. The test is now successful; the first order in the file is returned by the File Adapter.     Below is a complete listing of the Result XML from the right column of the screen above.   Try running it The downloaded input test file and completed schema file can be used for testing without following all the Native Format Builder steps in this example. Use the following link to download a file containing the sample data. Download Sample Input Data This is the best approach rather than cutting and pasting the input data at the top of the article.  Since the data is fixed length it's very important to watch out for trailing spaces in the data and to ensure an eol character at the end of every line. 
The download file is correctly formatted. The final schema definition can be downloaded at the following link Download Completed Schema Definition   - Save the inputData.txt file to a known location like the xsd folder in your project. - Save the inputData_6.xsd file to the xsd folder in your project. - At step 1 in the Native Format Builder wizard  (as shown above) check the “Edit existing” radio button,    then browse and select the inputData_6.xsd file - At step 2 of the Format Builder configuration Wizard (as shown above) supply the path and filename for    the inputData.txt file. - You can then proceed to the test page and run a test. - Remember the wizard bug will drop the lookAhead and lookFor attributes,  you will need to manually add   nxsd:lookAhead="32" nxsd:lookFor="."    after the maxOccurs attribute of the item element in the   LogicalOrderRecord Complex Type.  (as shown above)   Good Luck with your Format Project
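As a quick visual reference for the manual fix, the fragment below sketches roughly what the item element inside the LogicalOrderRecord complex type looks like once the two attributes have been re-added by hand after maxOccurs. The exact element and type names come from the generated schema, and the namespace prefixes may differ, so treat this as an illustration rather than a verbatim copy of the downloadable definition.

<!-- Approximate shape of the item element after the workaround is applied by hand -->
<xsd:element name="item" type="itemRecord" minOccurs="1" maxOccurs="unbounded"
             nxsd:lookAhead="32" nxsd:lookFor="."/>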

    Read the article

  • Is there a Telecommunications Reference Architecture?

    - by raul.goycoolea
    Abstract   Reference architecture provides needed architectural information that can be provided in advance to an enterprise to enable consistent architectural best practices. Enterprise Reference Architecture helps business owners to actualize their strategies, vision, objectives, and principles. It evaluates the IT systems, based on Reference Architecture goals, principles, and standards. It helps to reduce IT costs by increasing functionality, availability, scalability, etc. Telecom Reference Architecture provides customers with the flexibility to view bundled service bills online with the provision of multiple services. It provides real-time, flexible billing and charging systems, to handle complex promotions, discounts, and settlements with multiple parties. This paper attempts to describe the Reference Architecture for the Telecom Enterprises. It lays the foundation for a Telecom Reference Architecture by articulating the requirements, drivers, and pitfalls for telecom service providers. It describes generic reference architecture for telecom enterprises and moves on to explain how to achieve Enterprise Reference Architecture by using SOA.   Introduction   A Reference Architecture provides a methodology, set of practices, template, and standards based on a set of successful solutions implemented earlier. These solutions have been generalized and structured for the depiction of both a logical and a physical architecture, based on the harvesting of a set of patterns that describe observations in a number of successful implementations. It helps as a reference for the various architectures that an enterprise can implement to solve various problems. It can be used as the starting point or the point of comparisons for various departments/business entities of a company, or for the various companies for an enterprise. It provides multiple views for multiple stakeholders.   Major artifacts of the Enterprise Reference Architecture are methodologies, standards, metadata, documents, design patterns, etc.   Purpose of Reference Architecture   In most cases, architects spend a lot of time researching, investigating, defining, and re-arguing architectural decisions. It is like reinventing the wheel as their peers in other organizations or even the same organization have already spent a lot of time and effort defining their own architectural practices. This prevents an organization from learning from its own experiences and applying that knowledge for increased effectiveness.   
Reference architecture provides missing architectural information that can be provided in advance to project team members to enable consistent architectural best practices.   Enterprise Reference Architecture helps an enterprise to achieve the following at the abstract level:   ·       Reference architecture is more of a communication channel to an enterprise ·       Helps the business owners to accommodate to their strategies, vision, objectives, and principles. ·       Evaluates the IT systems based on Reference Architecture Principles ·       Reduces IT spending through increasing functionality, availability, scalability, etc ·       A Real-time Integration Model helps to reduce the latency of the data updates Is used to define a single source of Information ·       Provides a clear view on how to manage information and security ·       Defines the policy around the data ownership, product boundaries, etc. ·       Helps with cost optimization across project and solution portfolios by eliminating unused or duplicate investments and assets ·       Has a shorter implementation time and cost   Once the reference architecture is in place, the set of architectural principles, standards, reference models, and best practices ensure that the aligned investments have the greatest possible likelihood of success in both the near term and the long term (TCO).     Common pitfalls for Telecom Service Providers   Telecom Reference Architecture serves as the first step towards maturity for a telecom service provider. During the course of our assignments/experiences with telecom players, we have come across the following observations – Some of these indicate a lack of maturity of the telecom service provider:   ·       In markets that are growing and not so mature, it has been observed that telcos have a significant amount of in-house or home-grown applications. In some of these markets, the growth has been so rapid that IT has been unable to cope with business demands. Telcos have shown a tendency to come up with workarounds in their IT applications so as to meet business needs. ·       Even for core functions like provisioning or mediation, some telcos have tried to manage with home-grown applications. ·       Most of the applications do not have the required scalability or maintainability to sustain growth in volumes or functionality. ·       Applications face interoperability issues with other applications in the operator's landscape. Integrating a new application or network element requires considerable effort on the part of the other applications. ·       Application boundaries are not clear, and functionality that is not in the initial scope of that application gets pushed onto it. This results in the development of the multiple, small applications without proper boundaries. ·       Usage of Legacy OSS/BSS systems, poor Integration across Multiple COTS Products and Internal Systems. Most of the Integrations are developed on ad-hoc basis and Point-to-Point Integration. ·       Redundancy of the business functions in different applications • Fragmented data across the different applications and no integrated view of the strategic data • Lot of performance Issues due to the usage of the complex integration across OSS and BSS systems   However, this is where the maturity of the telecom industry as a whole can be of help. The collaborative efforts of telcos to overcome some of these problems have resulted in bodies like the TM Forum. 
They have come up with frameworks for business processes, data, applications, and technology for telecom service providers. These could be a good starting point for telcos to clean up their enterprise landscape.   Industry Trends in Telecom Reference Architecture   Telecom reference architectures are evolving rapidly because telcos are facing business and IT challenges.   “The reality is that there probably is no killer application, no silver bullet that the telcos can latch onto to carry them into a 21st Century.... Instead, there are probably hundreds – perhaps thousands – of niche applications.... And the only way to find which of these works for you is to try out lots of them, ramp up the ones that work, and discontinue the ones that fail.” – Martin Creaner, President & CTO, TM Forum.   The following trends have been observed in telecom reference architecture:   ·       Transformation of business structures to align with customer requirements ·       Adoption of more Internet-like technical architectures. The Web 2.0 concept is increasingly being used. ·       Virtualization of the traditional operations support system (OSS) ·       Adoption of SOA to support development of IP-based services ·       Adoption of frameworks like Service Delivery Platforms (SDPs) and IP Multimedia Subsystem (IMS) to enable seamless deployment of various services over fixed and mobile networks ·       Replacement of in-house, customized, and stove-piped OSS/BSS with standards-based COTS products ·       Compliance with industry standards and frameworks like eTOM, SID, and TAM to enable seamless integration with other standards-based products   Drivers of Reference Architecture   The drivers of the Reference Architecture are Reference Architecture Goals, Principles, and Enterprise Vision and Telecom Transformation. The details are depicted in the diagram below. Figure 1. Drivers for Reference Architecture   Today’s telecom reference architectures should seamlessly integrate traditional legacy-based applications and transition to next-generation network technologies (e.g., IP multimedia subsystems). 
This has resulted in new requirements for flexible, real-time billing and OSS/BSS systems and implications on the service provider’s organizational requirements and structure.   Telecom reference architectures are today expected to:   ·       Integrate voice, messaging, email and other VAS over fixed and mobile networks, back end systems ·       Be able to provision multiple services and service bundles • Deliver converged voice, video and data services ·       Leverage the existing Network Infrastructure ·       Provide real-time, flexible billing and charging systems to handle complex promotions, discounts, and settlements with multiple parties. ·       Support charging of advanced data services such as VoIP, On-Demand, Services (e.g.  Video), IMS/SIP Services, Mobile Money, Content Services and IPTV. ·       Help in faster deployment of new services • Serve as an effective platform for collaboration between network IT and business organizations ·       Harness the potential of converging technology, networks, devices and content to develop multimedia services and solutions of ever-increasing sophistication on a single Internet Protocol (IP) ·       Ensure better service delivery and zero revenue leakage through real-time balance and credit management ·       Lower operating costs to drive profitability   Enterprise Reference Architecture   The Enterprise Reference Architecture (RA) fills the gap between the concepts and vocabulary defined by the reference model and the implementation. Reference architecture provides detailed architectural information in a common format such that solutions can be repeatedly designed and deployed in a consistent, high-quality, supportable fashion. This paper attempts to describe the Reference Architecture for the Telecom Application Usage and how to achieve the Enterprise Level Reference Architecture using SOA.   • Telecom Reference Architecture • Enterprise SOA based Reference Architecture   Telecom Reference Architecture   Tele Management Forum’s New Generation Operations Systems and Software (NGOSS) is an architectural framework for organizing, integrating, and implementing telecom systems. NGOSS is a component-based framework consisting of the following elements:   ·       The enhanced Telecom Operations Map (eTOM) is a business process framework. ·       The Shared Information Data (SID) model provides a comprehensive information framework that may be specialized for the needs of a particular organization. ·       The Telecom Application Map (TAM) is an application framework to depict the functional footprint of applications, relative to the horizontal processes within eTOM. ·       The Technology Neutral Architecture (TNA) is an integrated framework. TNA is an architecture that is sustainable through technology changes.   NGOSS Architecture Standards are:   ·       Centralized data ·       Loosely coupled distributed systems ·       Application components/re-use  ·       A technology-neutral system framework with technology specific implementations ·       Interoperability to service provider data/processes ·       Allows more re-use of business components across multiple business scenarios ·       Workflow automation   The traditional operator systems architecture consists of four layers,   ·       Business Support System (BSS) layer, with focus toward customers and business partners. Manages order, subscriber, pricing, rating, and billing information. 
·       Operations Support System (OSS) layer, built around product, service, and resource inventories. ·       Networks layer – consists of Network elements and 3rd Party Systems. ·       Integration Layer – to maximize application communication and overall solution flexibility.   Reference architecture for telecom enterprises is depicted below. Figure 2. Telecom Reference Architecture   The major building blocks of any Telecom Service Provider architecture are as follows:   1. Customer Relationship Management   CRM encompasses the end-to-end lifecycle of the customer: customer initiation/acquisition, sales, ordering, and service activation, customer care and support, proactive campaigns, cross sell/up sell, and retention/loyalty.   CRM also includes the collection of customer information and its application to personalize, customize, and integrate delivery of service to a customer, as well as to identify opportunities for increasing the value of the customer to the enterprise.   The key functionalities related to Customer Relationship Management are   ·       Manage the end-to-end lifecycle of a customer request for products. ·       Create and manage customer profiles. ·       Manage all interactions with customers – inquiries, requests, and responses. ·       Provide updates to Billing and other south bound systems on customer/account related updates such as customer/ account creation, deletion, modification, request bills, final bill, duplicate bills, credit limits through Middleware. ·       Work with Order Management System, Product, and Service Management components within CRM. ·       Manage customer preferences – Involve all the touch points and channels to the customer, including contact center, retail stores, dealers, self service, and field service, as well as via any media (phone, face to face, web, mobile device, chat, email, SMS, mail, the customer's bill, etc.). ·       Support single interface for customer contact details, preferences, account details, offers, customer premise equipment, bill details, bill cycle details, and customer interactions.   CRM applications interact with customers through customer touch points like portals, point-of-sale terminals, interactive voice response systems, etc. The requests by customers are sent via fulfillment/provisioning to billing system for ordering processing.   2. 
Billing and Revenue Management   Billing and Revenue Management handles the collection of appropriate usage records and production of timely and accurate bills – for providing pre-bill usage information and billing to customers; for processing their payments; and for performing payment collections. In addition, it handles customer inquiries about bills, provides billing inquiry status, and is responsible for resolving billing problems to the customer's satisfaction in a timely manner. This process grouping also supports prepayment for services.   The key functionalities provided by these applications are   ·       To ensure that enterprise revenue is billed and invoices delivered appropriately to customers. ·       To manage customers’ billing accounts, process their payments, perform payment collections, and monitor the status of the account balance. ·       To ensure the timely and effective fulfillment of all customer bill inquiries and complaints. ·       Collect the usage records from mediation and ensure appropriate rating and discounting of all usage and pricing. ·       Support revenue sharing; split charging where usage is guided to an account different from the service consumer. ·       Support prepaid and post-paid rating. ·       Send notification on approach / exceeding the usage thresholds as enforced by the subscribed offer, and / or as setup by the customer. ·       Support prepaid, post paid, and hybrid (where some services are prepaid and the rest of the services post paid) customers and conversion from post paid to prepaid, and vice versa. ·       Support different billing function requirements like charge prorating, promotion, discount, adjustment, waiver, write-off, account receivable, GL Interface, late payment fee, credit control, dunning, account or service suspension, re-activation, expiry, termination, contract violation penalty, etc. ·       Initiate direct debit to collect payment against an invoice outstanding. ·       Send notification to Middleware on different events; for example, payment receipt, pre-suspension, threshold exceed, etc.   Billing systems typically get usage data from mediation systems for rating and billing. They get provisioning requests from order management systems and inquiries from CRM systems. Convergent and real-time billing systems can directly get usage details from network elements.   3. Mediation   Mediation systems transform/translate the Raw or Native Usage Data Records into a general format that is acceptable to billing for their rating purposes.   The following lists the high-level roles and responsibilities executed by the Mediation system in the end-to-end solution.   ·       Collect Usage Data Records from different data sources – like network elements, routers, servers – via different protocol and interfaces. ·       Process Usage Data Records – Mediation will process Usage Data Records as per the source format. ·       Validate Usage Data Records from each source. ·       Segregates Usage Data Records coming from each source to multiple, based on the segregation requirement of end Application. ·       Aggregates Usage Data Records based on the aggregation rule if any from different sources. ·       Consolidates multiple Usage Data Records from each source. ·       Delivers formatted Usage Data Records to different end application like Billing, Interconnect, Fraud Management, etc. 
·       Generates audit trail for incoming Usage Data Records and keeps track of all the Usage Data Records at various stages of mediation process. ·       Checks duplicate Usage Data Records across files for a given time window.   4. Fulfillment   This area is responsible for providing customers with their requested products in a timely and correct manner. It translates the customer's business or personal need into a solution that can be delivered using the specific products in the enterprise's portfolio. This process informs the customers of the status of their purchase order, and ensures completion on time, as well as ensuring a delighted customer. These processes are responsible for accepting and issuing orders. They deal with pre-order feasibility determination, credit authorization, order issuance, order status and tracking, customer update on customer order activities, and customer notification on order completion. Order management and provisioning applications fall into this category.   The key functionalities provided by these applications are   ·       Issuing new customer orders, modifying open customer orders, or canceling open customer orders; ·       Verifying whether specific non-standard offerings sought by customers are feasible and supportable; ·       Checking the credit worthiness of customers as part of the customer order process; ·       Testing the completed offering to ensure it is working correctly; ·       Updating of the Customer Inventory Database to reflect that the specific product offering has been allocated, modified, or cancelled; ·       Assigning and tracking customer provisioning activities; ·       Managing customer provisioning jeopardy conditions; and ·       Reporting progress on customer orders and other processes to customer.   These applications typically get orders from CRM systems. They interact with network elements and billing systems for fulfillment of orders.   5. Enterprise Management   This process area includes those processes that manage enterprise-wide activities and needs, or have application within the enterprise as a whole. They encompass all business management processes that   ·       Are necessary to support the whole of the enterprise, including processes for financial management, legal management, regulatory management, process, cost, and quality management, etc.;   ·       Are responsible for setting corporate policies, strategies, and directions, and for providing guidelines and targets for the whole of the business, including strategy development and planning for areas, such as Enterprise Architecture, that are integral to the direction and development of the business;   ·       Occur throughout the enterprise, including processes for project management, performance assessments, cost assessments, etc.     (i) Enterprise Risk Management:   Enterprise Risk Management focuses on assuring that risks and threats to the enterprise value and/or reputation are identified, and appropriate controls are in place to minimize or eliminate the identified risks. The identified risks may be physical or logical/virtual. Successful risk management ensures that the enterprise can support its mission critical operations, processes, applications, and communications in the face of serious incidents such as security threats/violations and fraud attempts. 
Two key areas covered in Risk Management by telecom operators are:   ·       Revenue Assurance: Revenue assurance system will be responsible for identifying revenue loss scenarios across components/systems, and will help in rectifying the problems. The following lists the high-level roles and responsibilities executed by the Revenue Assurance system in the end-to-end solution. o   Identify all usage information dropped when networks are being upgraded. o   Interconnect bill verification. o   Identify where services are routinely provisioned but never billed. o   Identify poor sales policies that are intensifying collections problems. o   Find leakage where usage is sent to error bucket and never billed for. o   Find leakage where field service, CRM, and network build-out are not optimized.   ·       Fraud Management: Involves collecting data from different systems to identify abnormalities in traffic patterns, usage patterns, and subscription patterns to report suspicious activity that might suggest fraudulent usage of resources, resulting in revenue losses to the operator.   The key roles and responsibilities of the system component are as follows:   o   Fraud management system will capture and monitor high usage (over a certain threshold) in terms of duration, value, and number of calls for each subscriber. The threshold for each subscriber is decided by the system and fixed automatically. o   Fraud management will be able to detect the unauthorized access to services for certain subscribers. These subscribers may have been provided unauthorized services by employees. The component will raise the alert to the operator the very first time of such illegal calls or calls which are not billed. o   The solution will be to have an alarm management system that will deliver alarms to the operator/provider whenever it detects a fraud, thus minimizing fraud by catching it the first time it occurs. o   The Fraud Management system will be capable of interfacing with switches, mediation systems, and billing systems   (ii) Knowledge Management   This process focuses on knowledge management, technology research within the enterprise, and the evaluation of potential technology acquisitions.   Key responsibilities of knowledge base management are to   ·       Maintain knowledge base – Creation and updating of knowledge base on ongoing basis. ·       Search knowledge base – Search of knowledge base on keywords or category browse ·       Maintain metadata – Management of metadata on knowledge base to ensure effective management and search. ·       Run report generator. ·       Provide content – Add content to the knowledge base, e.g., user guides, operational manual, etc.   (iii) Document Management   It focuses on maintaining a repository of all electronic documents or images of paper documents relevant to the enterprise using a system.   (iv) Data Management   It manages data as a valuable resource for any enterprise. For telecom enterprises, the typical areas covered are Master Data Management, Data Warehousing, and Business Intelligence. It is also responsible for data governance, security, quality, and database management.   Key responsibilities of Data Management are   ·       Using ETL, extract the data from CRM, Billing, web content, ERP, campaign management, financial, network operations, asset management info, customer contact data, customer measures, benchmarks, process data, e.g., process inputs, outputs, and measures, into Enterprise Data Warehouse. 
·       Management of data traceability with source, data related business rules/decisions, data quality, data cleansing data reconciliation, competitors data – storage for all the enterprise data (customer profiles, products, offers, revenues, etc.) ·       Get online update through night time replication or physical backup process at regular frequency. ·       Provide the data access to business intelligence and other systems for their analysis, report generation, and use.   (v) Business Intelligence   It uses the Enterprise Data to provide the various analysis and reports that contain prospects and analytics for customer retention, acquisition of new customers due to the offers, and SLAs. It will generate right and optimized plans – bolt-ons for the customers.   The following lists the high-level roles and responsibilities executed by the Business Intelligence system at the Enterprise Level:   ·       It will do Pattern analysis and reports problem. ·       It will do Data Analysis – Statistical analysis, data profiling, affinity analysis of data, customer segment wise usage patterns on offers, products, service and revenue generation against services and customer segments. ·       It will do Performance (business, system, and forecast) analysis, churn propensity, response time, and SLAs analysis. ·       It will support for online and offline analysis, and report drill down capability. ·       It will collect, store, and report various SLA data. ·       It will provide the necessary intelligence for marketing and working on campaigns, etc., with cost benefit analysis and predictions.   It will advise on customer promotions with additional services based on loyalty and credit history of customer   ·       It will Interface with Enterprise Data Management system for data to run reports and analysis tasks. It will interface with the campaign schedules, based on historical success evidence.   (vi) Stakeholder and External Relations Management   It manages the enterprise's relationship with stakeholders and outside entities. Stakeholders include shareholders, employee organizations, etc. Outside entities include regulators, local community, and unions. Some of the processes within this grouping are Shareholder Relations, External Affairs, Labor Relations, and Public Relations.   (vii) Enterprise Resource Planning   It is used to manage internal and external resources, including tangible assets, financial resources, materials, and human resources. Its purpose is to facilitate the flow of information between all business functions inside the boundaries of the enterprise and manage the connections to outside stakeholders. ERP systems consolidate all business operations into a uniform and enterprise wide system environment.   The key roles and responsibilities for Enterprise System are given below:   ·        It will handle responsibilities such as core accounting, financial, and management reporting. ·       It will interface with CRM for capturing customer account and details. ·       It will interface with billing to capture the billing revenue and other financial data. ·       It will be responsible for executing the dunning process. Billing will send the required feed to ERP for execution of dunning. ·       It will interface with the CRM and Billing through batch interfaces. Enterprise management systems are like horizontals in the enterprise and typically interact with all major telecom systems. 
E.g., an ERP system interacts with CRM, Fulfillment, and Billing systems for different kinds of data exchanges.   6. External Interfaces/Touch Points   The typical external parties are customers, suppliers/partners, employees, shareholders, and other stakeholders. External interactions from/to a Service Provider to other parties can be achieved by a variety of mechanisms, including:   ·       Exchange of emails or faxes ·       Call Centers ·       Web Portals ·       Business-to-Business (B2B) automated transactions   These applications provide an Internet technology driven interface to external parties to undertake a variety of business functions directly for themselves. These can provide fully or partially automated service to external parties through various touch points.   Typical characteristics of these touch points are   ·       Pre-integrated self-service system, including stand-alone web framework or integration front end with a portal engine ·       Self services layer exposing atomic web services/APIs for reuse by multiple systems across the architectural environment ·       Portlets driven connectivity exposing data and services interoperability through a portal engine or web application   These touch points mostly interact with the CRM systems for requests, inquiries, and responses.   7. Middleware   The component will be primarily responsible for integrating the different systems components under a common platform. It should provide a Standards-Based Platform for building Service Oriented Architecture and Composite Applications. The following lists the high-level roles and responsibilities executed by the Middleware component in the end-to-end solution.   ·       As an integration framework, covering to and fro interfaces ·       Provide a web service framework with service registry. ·       Support SOA framework with SOA service registry. ·       Each of the interfaces from / to Middleware to other components would handle data transformation, translation, and mapping of data points. ·       Receive data from the caller / activate and/or forward the data to the recipient system in XML format. ·       Use standard XML for data exchange. ·       Provide the response back to the service/call initiator. ·       Provide a tracking until the response completion. ·       Keep a store transitional data against each call/transaction. ·       Interface through Middleware to get any information that is possible and allowed from the existing systems to enterprise systems; e.g., customer profile and customer history, etc. ·       Provide the data in a common unified format to the SOA calls across systems, and follow the Enterprise Architecture directive. ·       Provide an audit trail for all transactions being handled by the component.   8. Network Elements   The term Network Element means a facility or equipment used in the provision of a telecommunications service. Such terms also includes features, functions, and capabilities that are provided by means of such facility or equipment, including subscriber numbers, databases, signaling systems, and information sufficient for billing and collection or used in the transmission, routing, or other provision of a telecommunications service.   Typical network elements in a GSM network are Home Location Register (HLR), Intelligent Network (IN), Mobile Switching Center (MSC), SMS Center (SMSC), and network elements for other value added services like Push-to-talk (PTT), Ring Back Tone (RBT), etc.   
Network elements are invoked when subscribers use their telecom devices for any kind of usage. These elements generate usage data and pass it on to downstream systems like mediation and billing system for rating and billing. They also integrate with provisioning systems for order/service fulfillment.   9. 3rd Party Applications   3rd Party systems are applications like content providers, payment gateways, point of sale terminals, and databases/applications maintained by the Government.   Depending on applicability and the type of functionality provided by 3rd party applications, the integration with different telecom systems like CRM, provisioning, and billing will be done.   10. Service Delivery Platform   A service delivery platform (SDP) provides the architecture for the rapid deployment, provisioning, execution, management, and billing of value added telecom services. SDPs are based on the concept of SOA and layered architecture. They support the delivery of voice, data services, and content in network and device-independent fashion. They allow application developers to aggregate network capabilities, services, and sources of content. SDPs typically contain layers for web services exposure, service application development, and network abstraction.   SOA Reference Architecture   SOA concept is based on the principle of developing reusable business service and building applications by composing those services, instead of building monolithic applications in silos. It’s about bridging the gap between business and IT through a set of business-aligned IT services, using a set of design principles, patterns, and techniques.   In an SOA, resources are made available to participants in a value net, enterprise, line of business (typically spanning multiple applications within an enterprise or across multiple enterprises). It consists of a set of business-aligned IT services that collectively fulfill an organization’s business processes and goals. We can choreograph these services into composite applications and invoke them through standard protocols. SOA, apart from agility and reusability, enables:   ·       The business to specify processes as orchestrations of reusable services ·       Technology agnostic business design, with technology hidden behind service interface ·       A contractual-like interaction between business and IT, based on service SLAs ·       Accountability and governance, better aligned to business services ·       Applications interconnections untangling by allowing access only through service interfaces, reducing the daunting side effects of change ·       Reduced pressure to replace legacy and extended lifetime for legacy applications, through encapsulation in services   ·       A Cloud Computing paradigm, using web services technologies, that makes possible service outsourcing on an on-demand, utility-like, pay-per-usage basis   The following section represents the Reference Architecture of logical view for the Telecom Solution. The new custom built application needs to align with this logical architecture in the long run to achieve EA benefits.   Packaged implementation applications, such as ERP billing applications, need to expose their functions as service providers (as other applications consume) and interact with other applications as service consumers.   COT applications need to expose services through wrappers such as adapters to utilize existing resources and at the same time achieve Enterprise Architecture goal and objectives.   
The following are the various layers for Enterprise level deployment of SOA. This diagram captures the abstract view of Enterprise SOA layers and important components of each layer. Layered architecture means decomposition of services such that most interactions occur between adjacent layers. However, there is no strict rule that top layers should not directly communicate with bottom layers.   The diagram below represents the important logical pieces that would result from overall SOA transformation. Figure 3. Enterprise SOA Reference Architecture 1.          Operational System Layer: This layer consists of all packaged applications like CRM, ERP, custom built applications, COTS based applications like Billing, Revenue Management, Fulfilment, and the Enterprise databases that are essential and contribute directly or indirectly to the Enterprise OSS/BSS Transformation.   ERP holds the data of Asset Lifecycle Management, Supply Chain, and Advanced Procurement and Human Capital Management, etc.   CRM holds the data related to Order, Sales, and Marketing, Customer Care, Partner Relationship Management, Loyalty, etc.   Content Management handles Enterprise Search and Query. Billing application consists of the following components:   ·       Collections Management, Customer Billing Management, Invoices, Real-Time Rating, Discounting, and Applying of Charges ·       Enterprise databases will hold both the application and service data, whether structured or unstructured.   MDM - Master data majorly consists of Customer, Order, Product, and Service Data.     2.          Enterprise Component Layer:   This layer consists of the Application Services and Common Services that are responsible for realizing the functionality and maintaining the QoS of the exposed services. This layer uses container-based technologies such as application servers to implement the components, workload management, high availability, and load balancing.   Application Services: This Service Layer enables application, technology, and database abstraction so that the complex accessing logic is hidden from the other service layers. This is a basic service layer, which exposes application functionalities and data as reusable services. 
The three types of the Application access services are:   ·       Application Access Service: This Service Layer exposes application level functionalities as a reusable service between BSS to BSS and BSS to OSS integration. This layer is enabled using disparate technology such as Web Service, Integration Servers, and Adaptors, etc.   ·       Data Access Service: This Service Layer exposes application data services as a reusable reference data service. This is done via direct interaction with application data. and provides the federated query.   ·       Network Access Service: This Service Layer exposes provisioning layer as a reusable service from OSS to OSS integration. This integration service emphasizes the need for high performance, stateless process flows, and distributed design.   Common Services encompasses management of structured, semi-structured, and unstructured data such as information services, portal services, interaction services, infrastructure services, and security services, etc.   3.          Integration Layer:   This consists of service infrastructure components like service bus, service gateway for partner integration, service registry, service repository, and BPEL processor. Service bus will carry the service invocation payloads/messages between consumers and providers. The other important functions expected from it are itinerary based routing, distributed caching of routing information, transformations, and all qualities of service for messaging-like reliability, scalability, and availability, etc. Service registry will hold all contracts (wsdl) of services, and it helps developers to locate or discover service during design time or runtime.   • BPEL processor would be useful in orchestrating the services to compose a complex business scenario or process. • Workflow and business rules management are also required to support manual triggering of certain activities within business process. based on the rules setup and also the state machine information. Application, data, and service mediation layer typically forms the overall composite application development framework or SOA Framework.   4.          Business Process Layer: These are typically the intermediate services layer and represent Shared Business Process Services. At Enterprise Level, these services are from Customer Management, Order Management, Billing, Finance, and Asset Management application domains.   5.          Access Layer: This layer consists of portals for Enterprise and provides a single view of Enterprise information management and dashboard services.   6.          Channel Layer: This consists of various devices; applications that form part of extended enterprise; browsers through which users access the applications.   7.          Client Layer: This designates the different types of users accessing the enterprise applications. The type of user typically would be an important factor in determining the level of access to applications.   8.          Vertical pieces like management, monitoring, security, and development cut across all horizontal layers Management and monitoring involves all aspects of SOA-like services, SLAs, and other QoS lifecycle processes for both applications and services surrounding SOA governance.     9.          EA Governance, Reference Architecture, Roadmap, Principles, and Best Practices:   EA Governance is important in terms of providing the overall direction to SOA implementation within the enterprise. 
This requires board-level involvement, in addition to business and IT executives. At a high level, it involves managing the implementation of SOA projects, managing the SOA infrastructure, and controlling the entire effort through fine-tuned IT processes in accordance with COBIT (Control Objectives for Information and Related Technology).   Devising tools and techniques to promote a reuse culture and the SOA way of doing things requires competency centers to be established, in addition to training the workforce to take up new roles suited to the SOA journey.   Conclusions   Reference Architectures can serve as the basis for disparate architecture efforts throughout the organization, even if they use different tools and technologies. Reference Architectures provide best practices and approaches in a way that is independent of any particular vendor, technology, or standard. They model the abstract architectural elements for an enterprise independently of the technologies, protocols, and products that are used to implement an SOA. Telecom enterprises today are facing significant business and technology challenges due to growing competition, a multitude of services, and convergence. Adopting architectural best practices can go a long way towards meeting these challenges. The use of an SOA-based architecture for communication with each of the external systems like Billing, CRM, etc., in the OSS/BSS system makes the architecture very loosely coupled, with greater flexibility. Any change in the external systems is absorbed at the Integration Layer without affecting the rest of the ecosystem. The use of a Business Process Management (BPM) tool makes the management and maintenance of the business processes easy, with better performance in terms of lead time, quality, and cost. Since the architecture is based on standards, it will lower the cost of deploying and managing OSS/BSS applications over their lifecycles.

    Read the article

  • CLSF & CLK 2013 Trip Report by Jeff Liu

    - by jamesmorris
This is a contributed post from Jeff Liu, lead XFS developer for the Oracle mainline Linux kernel team. Recently, I attended both the China Linux Storage and Filesystem workshop (CLSF) and the China Linux Kernel conference (CLK), which were held in Shanghai. Here are the highlights for both events. CLSF - 17th October XFS update (led by Jeff Liu) XFS keeps making rapid progress with a lot of changes, especially focused on infrastructure/performance improvements as well as new feature development.  This can be seen from a simple statistic comparing XFS/Ext4+JBD2/Btrfs via: # git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/xfs|fs/ext4+fs/jbd2|fs/btrfs XFS: 141 files changed, 27598 insertions(+), 19113 deletions(-) Ext4+JBD2: 39 files changed, 10487 insertions(+), 5454 deletions(-) Btrfs: 70 files changed, 19875 insertions(+), 8130 deletions(-) What made up those changes in XFS? Self-describing metadata (CRC32c). This is a new feature and it contributed about 70% of the code changes; it can be enabled via `mkfs.xfs -m crc=1 /dev/xxx` for the v5 superblock. Transaction log space reservation improvements. With this change, we can calculate the log space reservation at mount time rather than at runtime, reducing CPU overhead. User namespace support, so both XFS and USERNS can be enabled in the kernel configuration beginning with Linux 3.10. Thanks to Dwight Engen for his efforts on this. Split project/group quota inodes. Originally, project quota could not be enabled together with group quota because they shared the same quota file inode; now it works, but only for the v5 superblock, i.e. with CRC enabled. CONFIG_XFS_WARN, a new lightweight runtime debugger which can be deployed in production environments. Readahead during log object recovery; this change can speed up log replay significantly. Speculative preallocation inode tracking, clearing and throttling. The main purpose is to deal with inodes that have post-EOF space due to speculative preallocation, and to support improved quota management that frees up a significant amount of unwritten space when at or near EDQUOT. It supports background scanning, which occurs on a longish interval (5 minutes by default, tunable), and on-demand scanning/trimming via ioctl(2). Bitter arguments ensued from this session, especially around the comparison between Ext4 and Btrfs in different areas; I had to spend a whole morning of the first day answering those questions. We basically agreed that XFS is the best choice in Linux nowadays because: Stability: XFS has a good record of stability over the past 10 years. Fengguang Wu, who leads the 0-day kernel test project, also said that he has observed fewer errors than in other filesystems over the past year or more; I owe this to the XFS upstream code reviewers, who always perform serious code review as well as testing. Good performance for both large and small files: the claim that XFS does not work well for small files has been an old story for years. Best choice (maybe) for distributed PB-scale filesystems: e.g. Ceph recommends deploying the OSD daemon on XFS because Ext4 has a limited xattr size. Best choice for large storage (>16TB): Ext4 does not support a single file larger than around 15.95TB. Scalability: any objection to XFS being best on this point? :) XFS is better at handling transaction concurrency than Ext4. Why? The maximum size of the log in XFS is 2038MB compared to 128MB in Ext4. Misc. Ext4 is widely used and has been proven fast/stable across various loads and scenarios, XFS just needs more customers, and Btrfs is still on the road to maturity. 
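Since the v5 superblock came up above, here is a minimal sketch of creating and verifying a CRC-enabled XFS filesystem; the device /dev/sdb1 and mount point /mnt/scratch are illustrative assumptions, not taken from the talk:
# mkfs.xfs -m crc=1 /dev/sdb1        (create a filesystem with the v5 superblock and self-describing metadata)
# mount /dev/sdb1 /mnt/scratch
# xfs_info /mnt/scratch | grep crc   (the meta-data line should report crc=1)
On newer xfsprogs releases crc=1 has become the default, so the -m flag mainly matters with older tool chains.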
Ceph Introduction (led by Li Wang) This was a hot topic.  Li gave us a nice introduction to the design as well as their current work. Actually, the Ceph client has been included in the Linux kernel since 2.6.34 and supported by OpenStack since Folsom, but it seems it has not yet been widely deployed in production environments. Their major work is focused on inline data support to separate the metadata and data storage and reduce file access time; i.e. a file access needs two round trips, fetching the metadata from the MDS and then getting the data from the OSD, and small-file access is limited by the network latency. The solution is, for small files, to store the data with the metadata so that when accessing a small file, the metadata server can push both metadata and data to the client at the same time. In this way, they can reduce the overhead of calculating the data offset and save the communication with the OSD. For this feature, they have only run some small-scale testing but really saw noticeable improvements. Test environment: Intel 2 CPU 12 Core, 64GB RAM, Ubuntu 12.04, Ceph 0.56.6 with 200GB SATA disk, 15 OSD, 1 MDS, 1 MON. The sequential read performance for 1K-size files improved by about 50%. I asked Li and Zheng Yan (the core developer of Ceph, who also worked on Btrfs) whether Ceph is really stable and can be deployed in a production environment for large-scale PB-level storage, but they could not give a positive answer; it looks like Ceph is not even rolled out across DreamHost (subject to confirmation). According to Li, they have only deployed Ceph for small-scale storage (32 nodes), although they would like to try 6000 nodes in the future. Improve Linux swap for Flash storage (led by Shaohua Li) Because of its high density, low power and low price, flash storage (SSD) is a good candidate to partially replace DRAM. A quick answer for this is using SSD as swap. But Linux swap is designed for slow hard disk storage, so there are a lot of challenges to using SSD efficiently for swap. SWAPOUT swap_map scan: swap_map is the in-memory data structure used to track swap disk usage, but it is a slow linear scan. It becomes a bottleneck when finding many adjacent pages on SSD. Shaohua Li changed it to a cluster (128K) list, resulting in an O(1) algorithm. However, this approach needs restrictive cluster alignment and is only enabled for SSD. IO pattern: In most cases, the swap IO has an interleaved pattern because of multiple reclaimers, or because a free cluster is shared by all reclaimers. Even though the block layer can merge interleaved IO to some extent, we cannot count on it completely. Hence a per-cpu cluster was added on top of the previous change; it helps a reclaimer do sequential IO, and the block layer finds it easier to merge IO. TLB flush: If we are reclaiming one active page, we should first move the page from the active LRU list to the inactive LRU list, and then reclaim the page from the inactive LRU to swap it out. During the process, we need to clear the PTE twice: first the 'A' (ACCESSED) bit, second the 'P' (PRESENT) bit. Processors need to send lots of IPIs, which makes the TLB flush really expensive. Some work has been done to improve this, including reworking smp_call_function_many() or removing the first TLB flush on x86, but there are still some arguments here and only parts of the work have been pushed to mainline. SWAPIN: A page fault does iodepth=1 synchronous IO, but it is a bit wasteful to issue only a page-sized IO. The obvious solution is swap readahead. 
But the current in-kernel swap readahead is arbitrary (always 8 pages), and it does not always perform well for either random or sequential access workloads. Shaohua introduced a new flag for madvise(MADV_WILLNEED) to do swap prefetch, so the change happens in the userspace API and leaves the in-kernel readahead unchanged (though I think some improvement could also be done there). SWAP discard: As we know, discard is important for SSD write throughput, but the current swap discard implementation is synchronous. He changed it to async discard, which allows discard and write to run at the same time. Meanwhile, the unit of discard was also optimized to the cluster. Misc: lock contention. With many concurrent swapouts and swapins, the contention on locks such as anon_vma or swap_lock is high, so he changed swap_lock to a per-swap-device lock. But there is still some lock contention on very fast SSDs because of the swapcache address_space lock. Zproject (led by Bob Liu) Bob gave us a very nice introduction to the current memory compression status. There are now three projects (zswap/zram/zcache) which all aim at smoothing swap IO storms and improving performance, but they all have their own pros and cons. ZSWAP: It is implemented on top of the frontswap API and uses a dynamic allocator named zbud to allocate free pages. Zbud means pairs of zpages are "buddied", and it can store at most two compressed pages in one page frame, so the maximum compression ratio is 50%. Each page frame is LRU-linked and can be shrunk under memory pressure. If the compressed memory pool reaches its limit, shrink or reclaim happens: it decompresses the page frame into two newly allocated pages and then writes them to the real swap device, but this can fail when allocating the two pages. ZRAM: Acts as a compressed ramdisk used as a swap device, and it uses zsmalloc as its allocator, which has high density but may have fragmentation issues. Besides, page reclaim is hard since it needs more pages to uncompress and free just one page. ZRAM is preferred by embedded systems which may not have any real swap device. Now both ZRAM and ZSWAP are in the drivers/staging tree, and in the mm community there have been some discussions about merging ZRAM into ZSWAP or vice versa, but no agreement yet. ZCACHE: Handles file page compression, but it was removed from staging recently. From industry (led by Tang Jie, LSI) An LSI engineer introduced several new products to us. The first was RAID5/6 cards that use full-stripe writes to improve performance. The second was the SandForce flash controller, which can understand data file types (data entropy) to reduce write amplification (WA) for nearly all writes. It is called DuraWrite and the typical WA is 0.5. What's more, if its Dynamic Logical Capacity function module is enabled, the controller can do data compression that is transparent to the upper layer. LSI testing shows that with this virtual capacity enabled, a 1x TB drive can support up to 2x TB of capacity, but the application must monitor free flash space to maintain optimal performance and to guard against free flash space exhaustion. He said the most useful application is for databases. Another thing I think is worth mentioning is an NV-DRAM memory in NMR/Raptor which is directly exposed to the host system. Applications can directly access the NV-DRAM via a memory address, using the standard system call mmap(). He said that it is very useful for database logging now. 
NVM products of this kind have begun to appear in recent years, and it is said that Samsung is building a research center in China for related products. IMHO, NVM will have an effect on the current OS layers, especially on file systems; e.g. journaling may need to be redesigned to fully utilize these non-volatile memories. OCFS2 (led by Canquan Shen) Without a doubt, Huawei is the biggest contributor to OCFS2 in the past two years. They have posted 46 upstream patches and 39 patches have been merged. Their current project is based on 32/64 node clusters, but they have also tried 128 nodes at the experimental stage. The major work they are doing is to support ATS (atomic test and set); it can work with DLM at the same time. It looks like this idea is inspired by the VMware VMFS locking, i.e, http://blogs.vmware.com/vsphere/2012/05/vmfs-locking-uncovered.html CLK - 18th October 2013 Improving Linux Development with Better Tools (Andi Kleen) This talk focused on how to find and solve bugs as Linux complexity grows. Generally, we can do this with the following kinds of tools: Static code checker tools, e.g. sparse, smatch, coccinelle, the clang checker, checkpatch, gcc -W/LTO, stanse. These can check a lot of things, from simple mistakes to complex problems, but the challenges are: some are very slow, false positives, and it may take a concentrated effort to get the false positives down. In particular, no static checker I have found can follow indirect calls ("OO in C", common in the kernel): struct foo_ops { int (*do_foo)(struct foo *obj); }; foo->do_foo(foo); Dynamic runtime checkers, e.g. thread checkers, kmemcheck, lockdep. Ideally all kernel code would come with a test suite, then someone could run all the dynamic checkers. Fuzzers/test suites, e.g. Trinity is a great tool; it finds many bugs, but needs a manual model for each syscall. Modern fuzzers are moving towards automatic feedback, but not for the kernel yet: http://taviso.decsystem.org/making_software_dumber.pdf Debuggers/tracers to understand code, e.g. ftrace, can dump on events/oops/custom triggers, but there is still too much overhead in many cases to run them at all times during debugging. Tools to read/understand source, e.g. grep/cscope work great for many cases, but do not understand indirect pointers (the OO in C model used in the kernel); give us all "do_foo" instances: struct foo_ops { int (*do_foo)(struct foo *obj); } = { .do_foo = my_foo }; foo->do_foo(foo); It would be great to have a cscope-like tool that understands this based on types/initializers. XFS: The High Performance Enterprise File System (Jeff Liu) [slides] I gave a talk introducing the disk layout and unique features, as well as the recent changes.   The slides include some charts reflecting the performance of XFS/Btrfs/Ext4 for small files. About a dozen users raised their hands when I asked who has experience with XFS. I remember that when I asked the same question at LinuxCon Japan, only 3 people raised their hands, but they were Chris Mason, Ric Wheeler, and another attendee. The attendee questions were mainly focused on stability and comparison with other file systems. Linux Containers (Feng Gao) The speaker introduced the purpose of the various namespaces, including mount/UTS/IPC/Network/Pid/User, as well as the system API/ABI. For the userspace tools, he mainly focused on Libvirt LXC rather than LXC itself. 
Libvirt LXC is another userspace container management tool, implemented as one type of libvirt driver; it can manage containers, create namespaces, create a private filesystem layout for a container, create devices for a container, and set up resource controllers via cgroups. In this talk, Feng also mentioned two more possible new namespaces for the future. The first is audit, though it is not yet clear whether it should be assigned to the user namespace or not. The other is syslog, but the question is: do we really need it? In-memory Compression (Bob Liu) Same as at CLSF, a nice introduction that I have already mentioned above. Misc There were some other talks related to ACPI-based memory hotplug, smart wake-affinity in the scheduler, etc., but my head is not big enough to record all those things. -- Jeff Liu

    Read the article

  • Making Money from your SQL Server Blog

    - by Bill Graziano
My SQL Server blog reading list is around one hundred blogs.  Many people are writing great content and generating lots of page views.  I see some of them running Google AdSense and trying to make a little money off their traffic.  If you want to earn some extra money from what you’ve written, there are a couple of options.  And one new option that I’m announcing here. Background Internet advertising is sold based on a few different pricing schemes.  Flat Fee.  You offer either all your impressions (page views) or some percentage of your impressions in exchange for a flat monthly fee.  CPM or cost per thousand impressions.  If the quoted price is $2 CPM you’ll get $2 for every 1,000 times the ad is displayed.  While you might think the “M” means millions, the “M” in CPM is the roman numeral for 1,000. CPC or cost per click.  This is also called PPC or pay per click.  In this method you get paid based on how many clicks there are on the ad.  CPA or cost per action.  In this method you get paid based on an action that occurs on the advertiser’s site after they click on the ad.  This is typically some type of sign-up form.  This is how most affiliate programs work. Darren Rowse at ProBlogger has been writing about blogging and making money off blogs for years.  He has a good introduction to making money on your blog in his “Making Money” section.  If you’re interested in learning more he has a post up titled How to Make More Money From Your Blog in the New Year that links to many of his best posts on the subject. Google AdSense This is the most common method for people earning money from their blogging.  It’s easy to set up and administer.  You tell AdSense what size ads you’d like to run and it gives you a little piece of JavaScript to put on your site.  AdSense quickly learns the topics you write about and displays ads that are appropriate for your site.  I typically see ads for hosting, SQL Server tools and developer tools running in AdSense slots.  AdSense pays on a CPC model.  If you translate that back to CPM pricing you’ll see rates from $0.50 to $1.00 CPM. Amazon While you might not make much money writing books it’s now possible to make even less helping Amazon sell them.  You can sign up for an Amazon affiliate program.  Each time you send Amazon a link and someone buys the book you get a cut of that sale.  This is the CPA model from above.  Amazon can help you build some pretty nice “stores”.  Here’s the SQL Server bookstore I built for SQLTeam.com.  If you’re just putting in a page with books like I’ve done on SQLTeam you should keep your expectations low.  If you’re writing book reviews or suggesting books on your blog it really does make sense to set up an Amazon affiliate link.  People are much more likely to buy a book based on a review from a trusted source.  I always try to buy through a referral link if there is one. Amazon pays about 4% of the price as a referral fee.  You also get credit for anything else they buy while on the site.  I recently had someone buy an iPod nano with their SQL Server book making me an extra $5.60 richer!  Estimating how much you can make is difficult though.  How much attention you draw to the links and book reviews can dramatically affect the earnings. Private Ad Sales This is the hardest but potentially most lucrative option.  You sell advertising directly to companies that want to sell things to your readers.  
Typically this would be SQL Server tool vendors, hosting companies or anyone else that wants to make money off database administrators.  This is also the most difficult to do.  You’ll need the contacts at the companies and enough page views to make it worth their while.  You’ll also need software to track the page views and clicks, geo-target your ads and smooth out the impressions.  Your earnings are based on whatever you can negotiate with the companies. SQL Server Ad Network For the last couple of years I’ve run any extra ads that I sold on the SQLTeam Weblogs.  You can see an example of that on Mladen’s blog.  The ad in the upper right corner is one that I’m running for him.  (Note: Many of the ads I’m running are geo-targeted to only appear in English speaking countries.  You may see a different set of ads outside the US, Canada and the UK.  You can also see he has a couple of Google ads on his blog.)  When I run ads on his blog I split the advertising revenue with him.  They make a little and I make a little. I recently started to expand this and sell advertising specifically to run on SQL Server-related blogs.  I’m also starting to run ads on non-SQLTeam blogs.  The only way I can sell more advertising is to have more blogs to run it on.  And that’s where you come in. I’ve created a SQL Server advertising network.  I handle all the ad sales and provide the technology to serve the ads.  I handle collections and payments back to you.  You get paid at the end of each month regardless of when (or if) the advertiser actually pays.  All you need to do is add a small piece of JavaScript to your site to display the ads. If you’re writing about SQL Server and interested in earning a little money for your site I’d like to talk to you.  You can use the Contact Us page on SQLTeam.com to reach me.  Running advertising on your blog isn’t for everyone.  If you’re concerned about what advertisers might think about certain posts then you might not be a good fit.  For the most part this isn’t an issue.  You’ll also need to have a PayPal account to receive payments.  You probably won’t get rich doing this.  But you can earn extra cash on the side for doing what you would do anyway.  I do know that people have earned enough to buy themselves a nice laptop doing this. My initial target is blogs with more than 10,000 page views per month.  I expect to pay two to three times what Google pays.  If you have less than 10,000 page views per month but are still interested I’d still like to hear from you.  I may not be able to sign up smaller blogs right away but we’ll get the process started.  If you’re unsure about your traffic Google Analytics is a free tool that provides great reporting on traffic, popular posts and how people find your blog.  If you have any questions or are just curious drop me a line and I’ll try to answer your questions.

    Read the article

  • Exalogic 2.0.1 Tea Break Snippets - Scripting Asset Creation

    - by The Old Toxophilist
So far in this series we have looked at creating assets within the EMOC BUI, but the Exalogic 2.0.1 installation also provides the IaaS cli as an alternative for most of the common functionality available within EMOC. The IaaS cli interface provides access to the functions that are available to a user logged into the BUI with the CloudUser Role. As such, not all functionality is available from the command-line interface; having said that, the IaaS cli provides all the functionality required to create the Assets within a specific Account (Tenure). Because these actions are common and repeatable, I decided to wrap the functionality within a simple script that takes a simple input file and creates the Assets. Following the script through will show us the steps needed to create the various Assets within an Account, and hence I will work through the various functions within the script below, describing the steps. You will note from the various steps within the script that it is designed to pause between actions, allowing the preceding action to complete. The reason for this is that we could otherwise swamp EMOC with a series of actions and end up in a situation where we are trying to attach a Volume before the creation of the vServer and Volume has completed. processAssets() This function simply reads through the passed input file, identifying which assets need to be created. An example of the input file can be found below; the input file can be used to create Assets in multiple Accounts during a single run. The order of the entries defines the functions that need to be actioned, as follows: Production:Connect maps to the IaaS actions akm-describe-accounts, akm-create-access-key, iaas-create-key-pair, iaas-describe-vnets, iaas-describe-vserver-types and iaas-describe-server-templates, with the parameters Username and Password. Production:Create|vServer maps to iaas-run-vserver, with the parameters vServer Name, vServer Type Name, Template Name, a comma-separated list of network names which the vServer will connect to, and a comma-separated list of IPs for the specified networks. Production:Create|Volume maps to iaas-create-volume, with the parameters Volume Name and Volume Size. Production:Attach|Volume maps to iaas-attach-volumes-to-vserver, with the parameters vServer Name and a comma-separated list of volume names. Production:Disconnect maps to iaas-delete-key-pair and akm-delete-access-key, with no parameters. connectToAccount() It can be seen from the connectToAccount function that before we can execute any Asset creation we must first connect to the appropriate account. To do this we need the ID associated with the Account. This can be found by executing the akm-describe-accounts cli command, which returns a list of all Accounts and their IDs. Once we have the Account ID we generate an Access Key using the akm-create-access-key command and then a key pair with the iaas-create-key-pair command. At this point we have all the information we need to access the specific named account. createVServer() This function simply retrieves the information from the input line and then creates the vServer using the iaas-run-vserver cli command. Reading the function you will notice that it takes the various input names for vServer Type, Template and Networks and converts them into the appropriate IDs; the IaaS cli will not work directly with component names, and hence all IDs need to be looked up. createVolume() A function that simply takes the Volume name and size and then executes the iaas-create-volume command to create the volume. 
attachVolume() Takes the name of the Volume, which we may have just created, and a Volume then identifies the appropriate IDs before assigning the Volume to the vServer with the iaas-attach-volumes-to-vserver. disconnectFromAccount() Once we have finished connecting to the Account we simply remove the key pair with iaas-delete-key-pair and the access key with akm-delete-access-key although it may be useful to keep this if ssh is required and you do not subsequently modify the sshd information to allow unsecured access. By default the key is required for ssh access when a vServer is created from the command-line. CreateAssets.sh 1 export OCCLI=/opt/sun/occli/bin 2 export IAAS_HOME=/opt/oracle/iaas/cli 3 export JAVA_HOME=/usr/java/latest 4 export IAAS_BASE_URL=https://127.0.0.1 5 export IAAS_ACCESS_KEY_FILE=iaas_access.key 6 export KEY_FILE=iaas_access.pub 7 #CloudUser used to create vServers & Volumes 8 export IAAS_USER=exaprod 9 export IAAS_PASSWORD_FILE=root.pwd 10 export KEY_NAME=cli.recreate 11 export INPUT_FILE=CreateAssets.in 12 13 export ACCOUNTS_FILE=accounts.out 14 export VOLUMES_FILE=volumes.out 15 export DISTGRPS_FILE=distgrp.out 16 export VNETS_FILE=vnets.out 17 export VSERVER_TYPES_FILE=vstype.out 18 export VSERVER_FILE=vserver.out 19 export VSERVER_TEMPLATES=template.out 20 export KEY_PAIRS=keypairs.out 21 22 PROCESSING_ACCOUNT="" 23 24 function cleanTempFiles() { 25 rm -f $ACCOUNTS_FILE $VOLUMES_FILE $DISTGRPS_FILE $VNETS_FILE $VSERVER_TYPES_FILE $VSERVER_FILE $VSERVER_TEMPLATES $KEY_PAIRS $IAAS_PASSWORD_FILE $KEY_FILE $IAAS_ACCESS_KEY_FILE 26 } 27 28 function connectToAccount() { 29 if [[ "$ACCOUNT" != "$PROCESSING_ACCOUNT" ]] 30 then 31 if [[ "" != "$PROCESSING_ACCOUNT" ]] 32 then 33 $IAAS_HOME/bin/iaas-delete-key-pair --key-name $KEY_NAME --access-key-file $IAAS_ACCESS_KEY_FILE 34 $IAAS_HOME/bin/akm-delete-access-key $AK 35 fi 36 PROCESSING_ACCOUNT=$ACCOUNT 37 IAAS_USER=$ACCOUNT_USER 38 echo "$ACCOUNT_PASSWORD" > $IAAS_PASSWORD_FILE 39 $IAAS_HOME/bin/akm-describe-accounts --sep "|" > $ACCOUNTS_FILE 40 while read line 41 do 42 ACCOUNT_ID=${line%%|*} 43 line=${line#*|} 44 ACCOUNT_NAME=${line%%|*} 45 # echo "Id = $ACCOUNT_ID" 46 # echo "Name = $ACCOUNT_NAME" 47 if [[ "$ACCOUNT_NAME" == "$ACCOUNT" ]] 48 then 49 echo "Found Production Account $line" 50 AK=`$IAAS_HOME/bin/akm-create-access-key --account $ACCOUNT_ID --access-key-file $IAAS_ACCESS_KEY_FILE` 51 KEYPAIR=`$IAAS_HOME/bin/iaas-create-key-pair --key-name $KEY_NAME --key-file $KEY_FILE` 52 echo "Connected to $ACCOUNT_NAME" 53 break 54 fi 55 done < $ACCOUNTS_FILE 56 fi 57 } 58 59 function disconnectFromAccount() { 60 $IAAS_HOME/bin/iaas-delete-key-pair --key-name $KEY_NAME --access-key-file $IAAS_ACCESS_KEY_FILE 61 $IAAS_HOME/bin/akm-delete-access-key $AK 62 PROCESSING_ACCOUNT="" 63 } 64 65 function getNetworks() { 66 $IAAS_HOME/bin/iaas-describe-vnets --sep "|" > $VNETS_FILE 67 } 68 69 function getVSTypes() { 70 $IAAS_HOME/bin/iaas-describe-vserver-types --sep "|" > $VSERVER_TYPES_FILE 71 } 72 73 function getTemplates() { 74 $IAAS_HOME/bin/iaas-describe-server-templates --sep "|" > $VSERVER_TEMPLATES 75 } 76 77 function getVolumes() { 78 $IAAS_HOME/bin/iaas-describe-volumes --sep "|" > $VOLUMES_FILE 79 } 80 81 function getVServers() { 82 $IAAS_HOME/bin/iaas-describe-vservers --sep "|" > $VSERVER_FILE 83 } 84 85 function getNetworkId() { 86 while read line 87 do 88 NETWORK_ID=${line%%|*} 89 line=${line#*|} 90 NAME=${line%%|*} 91 if [[ "$NAME" == "$NETWORK_NAME" ]] 92 then 93 break 94 fi 95 done < $VNETS_FILE 96 } 97 98 
function getVSTypeId() { 99 while read line 100 do 101 VSTYPE_ID=${line%%|*} 102 line=${line#*|} 103 NAME=${line%%|*} 104 if [[ "$VSTYPE_NAME" == "$NAME" ]] 105 then 106 break 107 fi 108 done < $VSERVER_TYPES_FILE 109 } 110 111 function getTemplateId() { 112 while read line 113 do 114 TEMPLATE_ID=${line%%|*} 115 line=${line#*|} 116 NAME=${line%%|*} 117 if [[ "$TEMPLATE_NAME" == "$NAME" ]] 118 then 119 break 120 fi 121 done < $VSERVER_TEMPLATES 122 } 123 124 function getVolumeId() { 125 while read line 126 do 127 export VOLUME_ID=${line%%|*} 128 line=${line#*|} 129 NAME=${line%%|*} 130 if [[ "$NAME" == "$VOLUME_NAME" ]] 131 then 132 break; 133 fi 134 done < $VOLUMES_FILE 135 } 136 137 function getVServerId() { 138 while read line 139 do 140 VSERVER_ID=${line%%|*} 141 line=${line#*|} 142 NAME=${line%%|*} 143 if [[ "$VSERVER_NAME" == "$NAME" ]] 144 then 145 break; 146 fi 147 done < $VSERVER_FILE 148 } 149 150 function getVServerState() { 151 getVServers 152 while read line 153 do 154 VSERVER_ID=${line%%|*} 155 line=${line#*|} 156 NAME=${line%%|*} 157 line=${line#*|} 158 line=${line#*|} 159 VSERVER_STATE=${line%%|*} 160 if [[ "$VSERVER_NAME" == "$NAME" ]] 161 then 162 break; 163 fi 164 done < $VSERVER_FILE 165 } 166 167 function pauseUntilVServerRunning() { 168 # Wait until the Server is running before creating the next 169 getVServerState 170 while [[ "$VSERVER_STATE" != "RUNNING" ]] 171 do 172 getVServerState 173 echo "$NAME $VSERVER_STATE" 174 if [[ "$VSERVER_STATE" != "RUNNING" ]] 175 then 176 echo "Sleeping......." 177 sleep 60 178 fi 179 if [[ "$VSERVER_STATE" == "FAILED" ]] 180 then 181 echo "Will Delete $NAME in 5 Minutes....." 182 sleep 300 183 deleteVServer 184 echo "Deleted $NAME waiting 5 Minutes....." 185 sleep 300 186 break 187 fi 188 done 189 # Lets pause for a minute or two 190 echo "Just Chilling......" 191 sleep 60 192 echo "Ahhhhh we're getting there......." 193 sleep 60 194 echo "I'm almost at one with the universe......." 195 sleep 60 196 echo "Bong Reality Check !" 
197 } 198 199 function deleteVServer() { 200 $IAAS_HOME/bin/iaas-terminate-vservers --force --vserver-ids $VSERVER_ID 201 } 202 203 function createVServer() { 204 VSERVER_NAME=${ASSET_DETAILS%%|*} 205 ASSET_DETAILS=${ASSET_DETAILS#*|} 206 VSTYPE_NAME=${ASSET_DETAILS%%|*} 207 ASSET_DETAILS=${ASSET_DETAILS#*|} 208 TEMPLATE_NAME=${ASSET_DETAILS%%|*} 209 ASSET_DETAILS=${ASSET_DETAILS#*|} 210 NETWORK_NAMES=${ASSET_DETAILS%%|*} 211 ASSET_DETAILS=${ASSET_DETAILS#*|} 212 IP_ADDRESSES=${ASSET_DETAILS%%|*} 213 # Get Ids associated with names 214 getVSTypeId 215 getTemplateId 216 # Convert Network Names to Ids 217 NETWORK_IDS="" 218 while true 219 do 220 NETWORK_NAME=${NETWORK_NAMES%%,*} 221 NETWORK_NAMES=${NETWORK_NAMES#*,} 222 getNetworkId 223 if [[ "$NETWORK_IDS" != "" ]] 224 then 225 NETWORK_IDS="$NETWORK_IDS,$NETWORK_ID" 226 else 227 NETWORK_IDS=$NETWORK_ID 228 fi 229 if [[ "$NETWORK_NAME" == "$NETWORK_NAMES" ]] 230 then 231 break 232 fi 233 done 234 # Create vServer 235 echo "About to execute : $IAAS_HOME/bin/iaas-run-vserver --name $VSERVER_NAME --key-name $KEY_NAME --vserver-type $VSTYPE_ID --server-template-id $TEMPLATE_ID --vnets $NETWORK_IDS --ip-addresses $IP_ADDRESSES" 236 $IAAS_HOME/bin/iaas-run-vserver --name $VSERVER_NAME --key-name $KEY_NAME --vserver-type $VSTYPE_ID --server-template-id $TEMPLATE_ID --vnets $NETWORK_IDS --ip-addresses $IP_ADDRESSES 237 pauseUntilVServerRunning 238 } 239 240 function createVolume() { 241 VOLUME_NAME=${ASSET_DETAILS%%|*} 242 ASSET_DETAILS=${ASSET_DETAILS#*|} 243 VOLUME_SIZE=${ASSET_DETAILS%%|*} 244 # Create Volume 245 echo "About to execute : $IAAS_HOME/bin/iaas-create-volume --name $VOLUME_NAME --size $VOLUME_SIZE" 246 $IAAS_HOME/bin/iaas-create-volume --name $VOLUME_NAME --size $VOLUME_SIZE 247 # Lets pause 248 echo "Just Waiting 30 Seconds......" 249 sleep 30 250 } 251 252 function attachVolume() { 253 VSERVER_NAME=${ASSET_DETAILS%%|*} 254 ASSET_DETAILS=${ASSET_DETAILS#*|} 255 VOLUME_NAMES=${ASSET_DETAILS%%|*} 256 # Get vServer Id 257 getVServerId 258 # Convert Volume Names to Ids 259 VOLUME_IDS="" 260 while true 261 do 262 VOLUME_NAME=${VOLUME_NAMES%%,*} 263 VOLUME_NAMES=${VOLUME_NAMES#*,} 264 getVolumeId 265 if [[ "$VOLUME_IDS" != "" ]] 266 then 267 VOLUME_IDS="$VOLUME_IDS,$VOLUME_ID" 268 else 269 VOLUME_IDS=$VOLUME_ID 270 fi 271 if [[ "$VOLUME_NAME" == "$VOLUME_NAMES" ]] 272 then 273 break 274 fi 275 done 276 # Attach Volumes 277 echo "About to execute : $IAAS_HOME/bin/iaas-attach-volumes-to-vserver --vserver-id $VSERVER_ID --volume-ids $VOLUME_IDS" 278 $IAAS_HOME/bin/iaas-attach-volumes-to-vserver --vserver-id $VSERVER_ID --volume-ids $VOLUME_IDS 279 # Lets pause 280 echo "Just Waiting 30 Seconds......" 
281 sleep 30 282 } 283 284 function processAssets() { 285 while read line 286 do 287 ACCOUNT=${line%%:*} 288 line=${line#*:} 289 ACTION=${line%%|*} 290 line=${line#*|} 291 if [[ "$ACTION" == "Connect" ]] 292 then 293 ACCOUNT_USER=${line%%|*} 294 line=${line#*|} 295 ACCOUNT_PASSWORD=${line%%|*} 296 connectToAccount 297 298 ## Account Info 299 getNetworks 300 getVSTypes 301 getTemplates 302 303 continue 304 fi 305 if [[ "$ACTION" == "Create" ]] 306 then 307 ASSET=${line%%|*} 308 line=${line#*|} 309 ASSET_DETAILS=$line 310 if [[ "$ASSET" == "vServer" ]] 311 then 312 createVServer 313 314 continue 315 fi 316 if [[ "$ASSET" == "Volume" ]] 317 then 318 createVolume 319 320 continue 321 fi 322 fi 323 if [[ "$ACTION" == "Attach" ]] 324 then 325 ASSET=${line%%|*} 326 line=${line#*|} 327 ASSET_DETAILS=$line 328 if [[ "$ASSET" == "Volume" ]] 329 then 330 getVolumes 331 getVServers 332 attachVolume 333 334 continue 335 fi 336 fi 337 if [[ "$ACTION" == "Connect" ]] 338 then 339 disconnectFromAccount 340 341 continue 342 fi 343 done < $INPUT_FILE 344 } 345 346 # Should Parameterise this 347 348 while [ $# -gt 0 ] 349 do 350 case "$1" in 351 -a) INPUT_FILE="$2"; shift;; 352 *) echo ""; echo >&2 \ 353 "usage: $0 [-a <Asset Definition File>] (Default is CreateAssets.in)" 354 echo""; exit 1;; 355 *) break;; 356 esac 357 shift 358 done 359 360 361 362 363 processAssets 364 365 echo "**************************************" 366 echo "***** Finished Creating Assets *****" 367 echo "**************************************" 368 CreateAssetsProd.in Production:Connect|exaprod|welcome1 Production:Create|vServer|VS006|VSTProduction|BaseOEL56ServerTemplate|EoIB-otd-prod,vn-prod-web,IPoIB-default,IPoIB-vserver-shared-storage|10.51.223.13,192.168.0.13,10.117.81.67,172.17.0.14 Production:Create|vServer|VS007|VSTProduction|BaseOEL56ServerTemplate|EoIB-otd-prod,vn-prod-web,IPoIB-default,IPoIB-vserver-shared-storage|10.51.223.14,192.168.0.14,10.117.81.68,172.17.0.15 Production:Create|vServer|VS008|VSTProduction|BaseOEL56ServerTemplate|EoIB-wls-prod,vn-prod-web,IPoIB-default,IPoIB-vserver-shared-storage|10.51.225.61,192.168.0.61,10.117.81.61,172.17.0.16 Production:Create|vServer|VS009|VSTProduction|BaseOEL56ServerTemplate|EoIB-wls-prod,vn-prod-web,IPoIB-default,IPoIB-vserver-shared-storage|10.51.225.62,192.168.0.62,10.117.81.62,172.17.0.17 Production:Create|vServer|VS000|VSTProduction|BaseOEL56ServerTemplate|EoIB-wls-prod,vn-prod-web,IPoIB-default,IPoIB-vserver-shared-storage|10.51.225.63,192.168.0.63,10.117.81.63,172.17.0.18 Production:Create|vServer|VS001|VSTProduction|BaseOEL56ServerTemplate|EoIB-wls-prod,vn-prod-web,IPoIB-default,IPoIB-vserver-shared-storage|10.51.225.64,192.168.0.64,10.117.81.64,172.17.0.19 Production:Create|vServer|VS002|VSTProduction|BaseOEL56ServerTemplate|EoIB-wls-prod,vn-prod-web,IPoIB-default,IPoIB-vserver-shared-storage|10.51.225.65,192.168.0.65,10.117.81.65,172.17.0.20 Production:Create|vServer|VS003|VSTProduction|BaseOEL56ServerTemplate|EoIB-wls-prod,vn-prod-web,IPoIB-default,IPoIB-vserver-shared-storage|10.51.225.66,192.168.0.66,10.117.81.66,172.17.0.21 Production:Create|Volume|VS006|50 Production:Create|Volume|VS007|50 Production:Create|Volume|VS008|50 Production:Create|Volume|VS009|50 Production:Create|Volume|VS000|50 Production:Create|Volume|VS001|50 Production:Create|Volume|VS002|50 Production:Create|Volume|VS003|50 Production:Attach|Volume|VS006|VS006 Production:Attach|Volume|VS007|VS007 Production:Attach|Volume|VS008|VS008 Production:Attach|Volume|VS009|VS009 
Production:Attach|Volume|VS000|VS000 Production:Attach|Volume|VS001|VS001 Production:Attach|Volume|VS002|VS002 Production:Attach|Volume|VS003|VS003 Production:Disconnect Development:Connect|exadev|welcome1 Development:Create|vServer|VS014|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.24,10.117.81.71,172.17.0.24 Development:Create|vServer|VS015|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.25,10.117.81.72,172.17.0.25 Development:Create|vServer|VS016|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.26,10.117.81.73,172.17.0.26 Development:Create|vServer|VS017|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.27,10.117.81.74,172.17.0.27 Development:Create|vServer|VS018|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.28,10.117.81.75,172.17.0.28 Development:Create|vServer|VS019|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.29,10.117.81.76,172.17.0.29 Development:Create|vServer|VS020|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.30,10.117.81.77,172.17.0.30 Development:Create|vServer|VS021|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.31,10.117.81.78,172.17.0.31 Development:Create|vServer|VS022|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.32,10.117.81.79,172.17.0.32 Development:Create|vServer|VS023|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.33,10.117.81.80,172.17.0.33 Development:Create|vServer|VS024|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.34,10.117.81.81,172.17.0.34 Development:Create|vServer|VS025|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.35,10.117.81.82,172.17.0.35 Development:Create|vServer|VS026|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.36,10.117.81.83,172.17.0.36 Development:Create|vServer|VS027|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.37,10.117.81.84,172.17.0.37 Development:Create|Volume|VS014|50 Development:Create|Volume|VS015|50 Development:Create|Volume|VS016|50 Development:Create|Volume|VS017|50 Development:Create|Volume|VS018|50 Development:Create|Volume|VS019|50 Development:Create|Volume|VS020|50 Development:Create|Volume|VS021|50 Development:Create|Volume|VS022|50 Development:Create|Volume|VS023|50 Development:Create|Volume|VS024|50 Development:Create|Volume|VS025|50 Development:Create|Volume|VS026|50 Development:Create|Volume|VS027|50 Development:Attach|Volume|VS014|VS014 Development:Attach|Volume|VS015|VS015 Development:Attach|Volume|VS016|VS016 Development:Attach|Volume|VS017|VS017 Development:Attach|Volume|VS018|VS018 Development:Attach|Volume|VS019|VS019 Development:Attach|Volume|VS020|VS020 Development:Attach|Volume|VS021|VS021 Development:Attach|Volume|VS022|VS022 Development:Attach|Volume|VS023|VS023 Development:Attach|Volume|VS024|VS024 Development:Attach|Volume|VS025|VS025 
Development:Attach|Volume|VS026|VS026 Development:Attach|Volume|VS027|VS027 Development:Disconnect This entry was originally posted on The Old Toxophilist Site.
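As a closing footnote, here is a minimal sketch of how the script above might be driven; the input file name matches the sample shown above, and everything else simply follows the script's own usage message rather than captured output:
# run against the production asset definition file
./CreateAssets.sh -a CreateAssetsProd.in
# or rely on the default input file name (CreateAssets.in)
./CreateAssets.sh
Behind the scenes this is equivalent to working through the IaaS cli by hand: akm-describe-accounts to find the Account ID, akm-create-access-key and iaas-create-key-pair to open the session, iaas-run-vserver, iaas-create-volume and iaas-attach-volumes-to-vserver for each asset line, and finally iaas-delete-key-pair and akm-delete-access-key to disconnect.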

    Read the article
