Search Results

Search found 533 results on 22 pages for 'variant'.


  • Authorization design-pattern / practice?

    - by Lawtonfogle
    On one end, you have users. On the other end, you have activities. I was wondering if there is a best practice to relate the two.

    The simplest way I can think of is to have every activity have a role, and to assign every user every role they need. The problem is that this gets really messy in practice as soon as you go beyond a trivial system.

    A way I recently designed was to have users who have roles, where roles have privileges, and activities require some combination of privileges. For the trivial case this is more complex, but I think it will scale better. But after I implemented it, I felt like it was overkill for the system I had.

    Another option would be to have users who have roles, and activities that require a certain role to perform, with many activities sharing roles. A more complex variant of this would give activities many possible roles, of which you only need one. And an even more complex variant would allow logical statements of role ownership to gate an activity (e.g. must have A and (B exclusive-or C) and must not have D).

    I could continue to list more, but I think this already gives a picture, and many of these have trade-offs. But in software design there are oftentimes solutions that, while perhaps not perfect in every possible case, are so clearly top of the pack that preferring them isn't even considered opinion-based (e.g. how to store passwords: plain text is worst, hashing is better, hashing with a salt is better still, despite the increased complexity of each level; or, as a second example: Smart UI designs for applications are bad, even if it is subjective what the best design is).

    So, is there a best practice for authorization design that is not purely opinion-based/subjective?
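
    To make the users-have-roles / roles-have-privileges / activities-require-privileges variant concrete, here is a minimal sketch in Python (an illustration only; the role, privilege, and activity names are invented and this is not from any particular framework):

      # Roles grant privileges; activities demand privileges.
      ROLE_PRIVILEGES = {
          "editor": {"read", "write"},
          "auditor": {"read", "view_logs"},
      }
      ACTIVITY_REQUIREMENTS = {
          "publish_article": {"read", "write"},
          "review_history": {"read", "view_logs"},
      }

      def user_privileges(user_roles):
          # A user's privileges are the union over all of their roles
          privs = set()
          for role in user_roles:
              privs |= ROLE_PRIVILEGES.get(role, set())
          return privs

      def can_perform(user_roles, activity):
          # Allowed when the user holds every privilege the activity requires
          return ACTIVITY_REQUIREMENTS[activity] <= user_privileges(user_roles)

      print(can_perform(["editor"], "publish_article"))  # True
      print(can_perform(["editor"], "review_history"))   # False

    The logical-statement variant would replace the subset test with an expression tree evaluated against that same privilege set.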

    Read the article

  • How to plan/manage multi-platform (mobile) products?

    - by PhD
    Say I have to develop an app that runs on iOS, Android, and Windows 8 Mobile. All three platforms are technically in different programming languages. The only 'reuse' that I can see is that of the boxes-and-lines drawings (UML :) charts and nothing else. So how do companies/programmers manage the variation of the same product across different platforms, especially since the implementation languages differ? It's 'easier' in the desktop world IMO, given the plethora of languages and cross-platform libraries to make your life easier. Not so in the mobile world. More so, product line management principles don't seem to be all that applicable - what is common and what is variant doesn't really matter - the application is the same (conceptually) and the implementation is variant. Some difficulties that come to mind:

    - Bug fixing: Applications may be designed in a similar manner, but bug identification and fixing would be radically different. A bug on iOS may or may not exist on Android. Or a bug-fix approach on one platform may not be the same on another (unless it's a semantic bug like a!=b instead of a==b, which would in essence require the same 'approach' to fixing).
    - Enhancements: Making a change on one platform would be radically different than on another.
    - Code-design divergence: The way the code is written/organized, the class structures, etc., could be very different given the different implementation environments - limiting further reuse of the (above) UML models.

    There are of course many others - just keeping the development in sync and making sure all applications are up to the same version with the same set of features, etc. It seems the effort is 3x that of a single application. So how exactly does one manage this nightmarish situation? Some thoughts:

    - Split the application into client/server to confine the per-platform effort to the client side (not always doable).
    - Use frameworks like Unity-3D that could take care of the cross-platform problem (mostly applicable to games and probably not to other applications).

    Any other ways of managing a platform line? What are some proven approaches to managing/taming the effects?

    Read the article

  • How to fill in Different String Values in Different Cells in Excel 2007 VBA macros

    - by user325160
    Hello everyone. I am trying to fill in the values A-Z and 0-9 in four different locations in an Excel 2007 macro (in cells A1 to D9, E1 to H9, A10 to D18, and E10 to H18). So far I have this code:

      Sub TwoDArrays()
          Dim Matrix(9, 4) As Variant
          Dim Matrix2(9, 4) As Variant
          Dim Matrix3(9, 4) As Variant
          Dim Matrix4(9, 4) As Variant

          Matrix(1, 1) = "A": Matrix(1, 2) = "B": Matrix(1, 3) = "C": Matrix(1, 4) = "D"
          Matrix(2, 1) = "E": Matrix(2, 2) = "F": Matrix(2, 3) = "G": Matrix(2, 4) = "H"
          Matrix(3, 1) = "I": Matrix(3, 2) = "J": Matrix(3, 3) = "K": Matrix(3, 4) = "L"
          Matrix(4, 1) = "M": Matrix(4, 2) = "N": Matrix(4, 3) = "O": Matrix(4, 4) = "P"
          Matrix(5, 1) = "Q": Matrix(5, 2) = "R": Matrix(5, 3) = "S": Matrix(5, 4) = "T"
          Matrix(6, 1) = "U": Matrix(6, 2) = "V": Matrix(6, 3) = "W": Matrix(6, 4) = "X"
          Matrix(7, 1) = "Y": Matrix(7, 2) = "Z": Matrix(7, 3) = "0": Matrix(7, 4) = "1"
          Matrix(8, 1) = "2": Matrix(8, 2) = "3": Matrix(8, 3) = "4": Matrix(8, 4) = "5"
          Matrix(9, 1) = "6": Matrix(9, 2) = "7": Matrix(9, 3) = "8": Matrix(9, 4) = "9"

          ' Matrix2, Matrix3 and Matrix4 are assigned exactly the same A-Z, 0-9
          ' values cell-for-cell, from Matrix2(1, 1) = "A" through Matrix4(9, 4) = "9"

          For i = 1 To 9
              For j = 1 To 4
                  Cells(i, j) = Matrix(i, j)
              Next j
          Next i

          'For i = 1 To 9
          '    For j = 1 To 4
          '        Range("a1:d1", "a1:a10").Value = Matrix(i, j)
          '        'Application.WorksheetFunction.Transpose (Matrix)
          '    Next j
          'Next i
      End Sub

    However, the first For loop, which uses Cells rather than Range, only fills cells A1:D9, and if I use the second (commented-out) loop with Range, I get the value 9 appearing in every cell from A1 to D9. So is there a way to get the values A-Z and 0-9 into the other cells I specified above? Thank you.
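
    An aside on the index arithmetic the question needs: each block holds the 36 characters in 9 rows of 4, so position i maps to row i \ 4 and column i Mod 4 within the block. A sketch in Python with openpyxl, purely to illustrate the arithmetic (the output filename is made up; the VBA translation uses the same two loops with Chr() and integer division):

      from openpyxl import Workbook

      # The 36 values: A-Z followed by 0-9
      chars = [chr(c) for c in range(ord("A"), ord("Z") + 1)] + list("0123456789")

      wb = Workbook()
      ws = wb.active

      # Top-left corner (row, column) of each 9x4 block: A1, E1, A10, E10
      blocks = [(1, 1), (1, 5), (10, 1), (10, 5)]
      for top, left in blocks:
          for i, ch in enumerate(chars):
              ws.cell(row=top + i // 4, column=left + i % 4, value=ch)

      wb.save("filled.xlsx")  # hypothetical output file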

    Read the article

  • obiee 10g teradata Solaris deployment

    - by user554629
    I have 3-4 years' worth of notes on proper Teradata deployment across multiple operating systems - a topic too large to cover succinctly in a blog entry. I'm trying something new: document a specific situation, consolidate the facts, document diagnostic procedures, and then clone the structure to cover other obiee deployments (11g and other operating systems). This blog entry may be revised frequently; no changes between June 6th and June 25th.

    Getting started

    obiee 10g certification (pg 24-25): Teradata V2R5.1.x - V2R6.2, Client 13.10, certified 10.1.3.4.1
    obiee 10g documentation: Deployment Guide, Server Administration, Install/Config Guide
    teradata connectivity downloads ( requires registration ):
    solaris odbc drivers: sparc 13.10: choose 13.10.00.04 ( ReadMe ); sparc 14.00 would probably work, but is not certified by Oracle on 10g.

    I assume you have obiee 10.1.3.4.1 installed; 10.1.3.4.2 would be a better choice. The Teradata odbc install requires root for Solaris pkgadd. Only one version of Teradata odbc can be installed; symbolic links to the current version are created in /usr/lib at install.

    obiee implementation background

    Database access has two types of implementation: native and odbc. Native drivers use DB vendor client interfaces for access; odbc drivers are provided by the DB vendor for DB access. Teradata is an odbc-interface database. odbc drivers require an ODBC Driver Manager; obiee uses the Merant Data Direct driver manager. obiee servers communicate with one another using odbc; the internal odbc driver is implemented by the obiee team and requires the Merant Driver Manager. Teradata supplies a Driver Manager, which is built by Merant, but it should not be used in obiee. The nqsserver shared library deployment looks like this:

      OBIEE Server <-> DataDirect Manager <-> Teradata Driver <-> Teradata Database

    nqsserver startup

      $ cd $BI/setup
      $ . ./sa-init64.sh
      $ run-sa.sh autorestart64

    The following files are referenced from setup: .variant.sh, user.sh, NQSConfig.INI, DBFeatures.INI, $ODBCINI ( odbc.ini ), sqlnet.ora

    How does nqsserver connect to Teradata?

    A teradata DSN is created in the RPD ( TD71 ). setup/odbc.ini contains:

      [ODBC Data Sources]
      TD71=tdata.so

      [TD71]
      Driver=/opt/tdodbc/odbc/drivers/tdata.so
      Description=Teradata V7.1.0
      DBCName=###.##.##.###
      LastUser=
      Username=northwind
      Password=northwind
      Database=
      DefaultDatabase=northwind

    setup/user.sh contains:

      LIBPATH=/opt/tdicu/lib_64:/usr/odbc/lib:/usr/odbc/drivers:/usr/lpp/tdodbc/odbc/drivers:$LIBPATH
      export LIBPATH

    setup/.variant.sh contains:

      if [ "$ANA_SERVER_64" = "1" ]; then
        ANA_BIN_DIR=${SAROOTDIR}/server/Bin64
        ANA_WEB_DIR=${SAROOTDIR}/web/bin64
        ANA_ODBC_DIR=${SAROOTDIR}/odbc/lib64

    setup/sa-run.sh contains:

      . ${ANA_INSTALL_DIR}/setup/.variant.sh
      . ${ANA_INSTALL_DIR}/setup/user.sh
      logfile="${SAROOTDIR}/server/Log/nqsserver.out.log"
      ${ANA_BIN_DIR}/nqsserver -quiet >> ${logfile} 2>&1 &

    nqsserver is running: nqsserver produces $BI/server/nqsserver.log. At startup, the native database drivers connect and record DB versions. tdata.so is not loaded until a Teradata DB connection is attempted.

    Teradata odbc client installation

    Accept all the defaults for pkgadd. Install in /opt.

      $ mkdir odbc
      $ cd odbc
      $ gzip -dc ../tdodbc__solaris_sparc.13.10.00.04.tar.gz | tar -xf -
      $ sudo su
      # pkgadd -d . TeraGSS
      # pkgadd -d . tdicu1310
      # pkgadd -d . tdodbc1310

    Directory notes: /opt/teradata/client/13.10/odbc_64/lib/tdata.so is the 64-bit obiee library loaded by nqsserver. /opt/teradata/client/13.10/odbc_64/lib is not needed in LD_LIBRARY_PATH. /opt/teradata/client/13.10/tdicu/lib64 is needed in LD_LIBRARY_PATH. /usr/odbc should not be referenced; it is a link to 32-bit libraries. LD_LIBRARY_PATH_64 should not be used.

    Useful bash functions and aliases

      export SAROOTDIR=/export/home/dw_adm/OracleBI
      export TERA_HOME=/opt/teradata/client/13.10
      export ORACLE_HOME=/export/home/oracle/product/10.2.0/client
      export ODBCINI=$SAROOTDIR/setup/odbc.ini
      export TD_ICU_DATA=$TERA_HOME/tdicu/lib64

      alias cds="alias | grep '^alias cd' | sed 's/^alias //' | sort"
      alias cdtd="cd $TERA_HOME; ls"
      alias cdtdodbc="cd $TERA_HOME/odbc_64; ls -l"
      alias cdtdicu="cd $TERA_HOME/tdicu/lib64; ls -l"
      alias cdbi="cd $SAROOTDIR; ls"
      alias cdbiodbc="cd $SAROOTDIR/odbc; ls -l"
      alias cdsetup="cd $SAROOTDIR/setup; ls -ltr"
      alias cdsvr="cd $SAROOTDIR/server; ls"
      alias cdrep="cd $SAROOTDIR/server/Repository; ls -ltr"
      alias cdsvrcfg="cd $SAROOTDIR/server/Config; ls -ltr"
      alias cdsvrlog="cd $SAROOTDIR/server/Log; ls -ltr"
      alias cdweb="cd $SAROOTDIR/web; ls"
      alias cdwebconfig="cd $SAROOTDIR/web/config; ls -ltr"
      alias cdoci="cd $ORACLE_HOME; ls"

      pkgfiles() { pkgchk -l $1 | awk '/^Pathname/ {print $2}'; }
      pkgfind()  { pkginfo | egrep -i $1 ; }

    Examples:

      $ pkgfind td
      $ pkgfiles tdodbc1310 | grep 64
      $ cds
      $ cdtdodbc
      $ cdsetup
      $ cdsvrlog
      $ cdweblog
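
    An aside that is not part of the original notes: if Python and the pyodbc module happen to be available on the box, a few lines can sanity-check the DSN outside of OBIEE (DSN, user, and password below mirror the sample [TD71] entry above):

      import pyodbc

      conn = pyodbc.connect("DSN=TD71;UID=northwind;PWD=northwind")
      cursor = conn.cursor()
      cursor.execute("SELECT SESSION")   # Teradata: returns the current session number
      print(cursor.fetchone())
      conn.close()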

    Read the article

  • The application called an interface that was marshalled for a different thread

    - by X-Ray
    I'm writing a Delphi app that communicates with Excel. One thing I noticed is that if I call the Save method on the Excel workbook object, it can appear to hang because Excel has a dialog box open for the user. I'm using late binding. I'd like for my app to be able to notice when Save takes several seconds and then take some kind of action, like showing a dialog box telling the user this is what's happening. I figured this would be fairly easy: all I'd need to do is create a thread that calls Excel's Save routine. If it takes too long, I can take some action.

      procedure TOfficeConnect.Save;
      var
        Thread: TOfficeSaveThread;
      begin
        // spin off as thread so we can control timeout
        Thread := TOfficeSaveThread.Create(m_vExcelWorkbook);
        if WaitForSingleObject(Thread.Handle, 5 {s} * 1000 {ms/s}) = WAIT_TIMEOUT then
        begin
          Thread.FreeOnTerminate := true;
          raise Exception.Create(_('The Office spreadsheet program seems to be busy.'));
        end;
        Thread.Free;
      end;

      TOfficeSaveThread = class(TThread)
      private
        { Private declarations }
        m_vExcelWorkbook: variant;
      protected
        procedure Execute; override;
        procedure DoSave;
      public
        constructor Create(vExcelWorkbook: variant);
      end;

      { TOfficeSaveThread }

      constructor TOfficeSaveThread.Create(vExcelWorkbook: variant);
      begin
        inherited Create(true);
        m_vExcelWorkbook := vExcelWorkbook;
        Resume;
      end;

      procedure TOfficeSaveThread.Execute;
      begin
        m_vExcelWorkbook.Save;
      end;

    I understand this problem happens because the OLE object was created from another thread. How can I get around this problem? Most likely I'll need to "re-marshal" the interface for this call somehow... any ideas? Thank you!
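
    The usual cure for this error is to explicitly marshal the COM interface into the worker thread instead of sharing the raw pointer. A minimal sketch of that pattern, shown here in Python with pywin32 rather than Delphi, purely to illustrate the two API calls involved:

      import threading
      import pythoncom
      import win32com.client

      def save_in_worker(stream):
          # Each COM-using thread needs its own initialization
          pythoncom.CoInitialize()
          try:
              # Unmarshal the interface the main thread put into the stream
              disp = pythoncom.CoGetInterfaceAndReleaseStream(stream, pythoncom.IID_IDispatch)
              workbook = win32com.client.Dispatch(disp)
              workbook.Save()
          finally:
              pythoncom.CoUninitialize()

      # Main thread: workbook is a live Excel Workbook COM object
      excel = win32com.client.Dispatch("Excel.Application")
      workbook = excel.Workbooks.Add()
      stream = pythoncom.CoMarshalInterThreadInterfaceInStream(
          pythoncom.IID_IDispatch, workbook._oleobj_)
      t = threading.Thread(target=save_in_worker, args=(stream,))
      t.start()
      t.join(5)  # give Save five seconds, as in the Delphi code

    The same two calls (CoMarshalInterThreadInterfaceInStream / CoGetInterfaceAndReleaseStream) should be available to Delphi via the ActiveX unit; the key point is that the worker thread must unmarshal its own copy of the interface rather than reuse the variant captured on the main thread.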

    Read the article

  • Unable to resolve class in build.gradle using Android Studio 0.60/Gradle 0.11

    - by saywhatnow
    Established app working fine using Android Studio 0.5.9/ Gradle 0.9 but upgrading to Android Studio 0.6.0/ Gradle 0.11 causes the error below. Somehow Studio seems to have lost the ability to resolve the android tools import at the top of the build.gradle file. Anyone got any ideas on how to solve this?

      build file 'Users/[me]/Repositories/[project]/[module]/build.gradle': 1: unable to resolve class com.android.builder.DefaultManifestParser
      @ line 1, column 1.
      import com.android.builder.DefaultManifestParser
      1 error

      at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:302)
      at org.codehaus.groovy.control.CompilationUnit.applyToSourceUnits(CompilationUnit.java:858)
      at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:548)
      at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:497)
      at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:306)
      at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:287)
      at org.gradle.groovy.scripts.internal.DefaultScriptCompilationHandler.compileScript(DefaultScriptCompilationHandler.java:115)
      ... 77 more
      2014-06-09 10:15:28,537 [ 92905] INFO - .BaseProjectImportErrorHandler - Failed to import Gradle project at '/Users/[me]/Repositories/[project]'
      org.gradle.tooling.BuildException: Could not run build action using Gradle distribution 'http://services.gradle.org/distributions/gradle-1.12-all.zip'.
      at org.gradle.tooling.internal.consumer.ResultHandlerAdapter.onFailure(ResultHandlerAdapter.java:53)
      at org.gradle.tooling.internal.consumer.async.DefaultAsyncConsumerActionExecutor$1$1.run(DefaultAsyncConsumerActionExecutor.java:57)
      at org.gradle.internal.concurrent.DefaultExecutorFactory$StoppableExecutorImpl$1.run(DefaultExecutorFactory.java:64)

    [project]/[module]/build.gradle:

      import com.android.builder.DefaultManifestParser

      apply plugin: 'android-sdk-manager'
      apply plugin: 'android'

      android {
          sourceSets {
              main {
                  manifest.srcFile 'src/main/AndroidManifest.xml'
                  res.srcDirs = ['src/main/res']
              }
              debug { res.srcDirs = ['src/debug/res'] }
              release { res.srcDirs = ['src/release/res'] }
          }
          compileSdkVersion 19
          buildToolsVersion '19.0.0'
          defaultConfig {
              minSdkVersion 14
              targetSdkVersion 19
          }
          signingConfigs { release }
          buildTypes {
              release {
                  runProguard false
                  proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.txt'
                  signingConfig signingConfigs.release
                  applicationVariants.all { variant ->
                      def file = variant.outputFile
                      def manifestParser = new DefaultManifestParser()
                      def wmgVersionCode = manifestParser.getVersionCode(android.sourceSets.main.manifest.srcFile)
                      println wmgVersionCode
                      variant.outputFile = new File(file.parent, file.name.replace("-release.apk", "_" + wmgVersionCode + ".apk"))
                  }
              }
          }
          packagingOptions {
              exclude 'META-INF/LICENSE.txt'
              exclude 'META-INF/NOTICE.txt'
          }
      }

      def Properties props = new Properties()
      def propFile = file('signing.properties')
      if (propFile.canRead()) {
          props.load(new FileInputStream(propFile))
          if (props != null && props.containsKey('STORE_FILE') && props.containsKey('STORE_PASSWORD') &&
                  props.containsKey('KEY_ALIAS') && props.containsKey('KEY_PASSWORD')) {
              println 'RELEASE BUILD SIGNING'
              android.signingConfigs.release.storeFile = file(props['STORE_FILE'])
              android.signingConfigs.release.storePassword = props['STORE_PASSWORD']
              android.signingConfigs.release.keyAlias = props['KEY_ALIAS']
              android.signingConfigs.release.keyPassword = props['KEY_PASSWORD']
          } else {
              println 'RELEASE BUILD NOT FOUND SIGNING PROPERTIES'
              android.buildTypes.release.signingConfig = null
          }
      } else {
          println 'RELEASE BUILD NOT FOUND SIGNING FILE'
          android.buildTypes.release.signingConfig = null
      }

      repositories {
          maven { url 'https://repo.commonsware.com.s3.amazonaws.com' }
          maven { url 'https://oss.sonatype.org/content/repositories/snapshots/' }
      }

      dependencies {
          compile 'com.github.gabrielemariotti.changeloglib:library:1.4.+'
          compile 'com.google.code.gson:gson:2.2.4'
          compile 'com.google.android.gms:play-services:+'
          compile 'com.android.support:appcompat-v7:+'
          compile 'com.squareup.okhttp:okhttp:1.5.+'
          compile 'com.octo.android.robospice:robospice:1.4.11'
          compile 'com.octo.android.robospice:robospice-cache:1.4.11'
          compile 'com.octo.android.robospice:robospice-retrofit:1.4.11'
          compile 'com.commonsware.cwac:security:0.1.+'
          compile 'com.readystatesoftware.sqliteasset:sqliteassethelper:+'
          compile 'com.android.support:support-v4:19.+'
          compile 'uk.co.androidalliance:edgeeffectoverride:1.0.1+'
          compile 'de.greenrobot:eventbus:2.2.1+'
          compile project(':captureActivity')
          compile ('de.keyboardsurfer.android.widget:crouton:1.8.+') {
              exclude group: 'com.google.android', module: 'support-v4'
          }
          compile files('libs/CWAC-LoaderEx.jar')
      }

    Read the article

  • What is the C# equivalent of this Excel VBA code for Shapes?

    - by code4life
    This is the VBA code for an Excel template, which I'm trying to convert to C# in a VSTO project I'm working on. By the way, it's a VSTO add-in:

      Dim addedShapes() As Variant
      ReDim addedShapes(1)
      addedShapes(1) = aBracket.Name
      ReDim Preserve addedShapes(UBound(addedShapes) + 1)
      addedShapes(UBound(addedShapes)) = "unique2"
      Set tmpShape = Me.Shapes.Range(addedShapes).Group

    At this point, I'm stumped by addedShapes() - I'm not sure what this is all about.

    Update: Matti mentioned that addedShapes() represents a variant array in VBA. So now I'm wondering what the contents of addedShapes() should be. Would this be the correct way to make the Shapes.Range() call in C#?

      List<string> addedShapes = new List<string>();
      ...
      Shape tmpShape = worksheet.Shapes.get_Range(addedShapes.Cast<object>().ToArray()).Group();

    I'd appreciate a comment on my question and problem from anyone who's worked with both VBA and C#!
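
    Not the VSTO answer itself, but a quick way to convince yourself what Shapes.Range() wants: driven over COM from Python/pywin32 it accepts a plain array of shape names, which suggests the C# object[] of names above is on the right track (the shape names here are hypothetical):

      import win32com.client

      # Attach to a running Excel instance whose active sheet holds the two shapes
      excel = win32com.client.GetActiveObject("Excel.Application")
      ws = excel.ActiveWorkbook.ActiveSheet

      names = ["Bracket 1", "unique2"]          # hypothetical shape names
      group = ws.Shapes.Range(names).Group()    # Range takes an array of names
      print(group.Name)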

    Read the article

  • Where is the virtual function call overhead?

    - by Semen Semenych
    Hello everybody, I'm trying to benchmark the difference between a function-pointer call and a virtual function call. To do this, I have written two pieces of code that do the same mathematical computation over an array. One variant uses an array of pointers to functions and calls those in a loop. The other variant uses an array of pointers to a base class and calls its virtual function, which is overridden in the derived classes to do absolutely the same thing as the functions in the first variant. Then I print the time elapsed and use a simple shell script to run the benchmark many times and compute the average run time. Here is the code:

      #include <iostream>
      #include <cstdlib>
      #include <ctime>
      #include <cmath>
      using namespace std;

      long long timespecDiff(struct timespec *timeA_p, struct timespec *timeB_p)
      {
          return ((timeA_p->tv_sec * 1000000000) + timeA_p->tv_nsec) -
                 ((timeB_p->tv_sec * 1000000000) + timeB_p->tv_nsec);
      }

      void function_not( double *d ) { *d = sin(*d); }
      void function_and( double *d ) { *d = cos(*d); }
      void function_or( double *d )  { *d = tan(*d); }
      void function_xor( double *d ) { *d = sqrt(*d); }

      void ( * const function_table[4] )( double* ) =
          { &function_not, &function_and, &function_or, &function_xor };

      int main(void)
      {
          srand(time(0));
          void ( * index_array[100000] )( double * );
          double array[100000];
          for ( long int i = 0; i < 100000; ++i ) {
              index_array[i] = function_table[ rand() % 4 ];
              array[i] = ( double )( rand() / 1000 );
          }
          struct timespec start, end;
          clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start);
          for ( long int i = 0; i < 100000; ++i ) {
              index_array[i]( &array[i] );
          }
          clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &end);
          unsigned long long time_elapsed = timespecDiff(&end, &start);
          cout << time_elapsed / 1000000000.0 << endl;
      }

    and here is the virtual function variant:

      #include <iostream>
      #include <cstdlib>
      #include <ctime>
      #include <cmath>
      using namespace std;

      long long timespecDiff(struct timespec *timeA_p, struct timespec *timeB_p)
      {
          return ((timeA_p->tv_sec * 1000000000) + timeA_p->tv_nsec) -
                 ((timeB_p->tv_sec * 1000000000) + timeB_p->tv_nsec);
      }

      class A  { public: virtual void calculate( double *i ) = 0; };
      class A1 : public A { public: void calculate( double *i ) { *i = sin(*i); } };
      class A2 : public A { public: void calculate( double *i ) { *i = cos(*i); } };
      class A3 : public A { public: void calculate( double *i ) { *i = tan(*i); } };
      class A4 : public A { public: void calculate( double *i ) { *i = sqrt(*i); } };

      int main(void)
      {
          srand(time(0));
          A *base[100000];
          double array[100000];
          for ( long int i = 0; i < 100000; ++i ) {
              array[i] = ( double )( rand() / 1000 );
              switch ( rand() % 4 ) {
                  case 0: base[i] = new A1(); break;
                  case 1: base[i] = new A2(); break;
                  case 2: base[i] = new A3(); break;
                  case 3: base[i] = new A4(); break;
              }
          }
          struct timespec start, end;
          clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start);
          for ( int i = 0; i < 100000; ++i ) {
              base[i]->calculate( &array[i] );
          }
          clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &end);
          unsigned long long time_elapsed = timespecDiff(&end, &start);
          cout << time_elapsed / 1000000000.0 << endl;
      }

    My system is Linux, Fedora 13, gcc 4.4.2. The code is compiled with g++ -O3. The first one is test1, the second is test2. Now I see this in the console:

      [Ignat@localhost circuit_testing]$ ./test1 && ./test1
      0.0153142
      0.0153166

    Well, more or less the same, I think. And then, this:

      [Ignat@localhost circuit_testing]$ ./test2 && ./test2
      0.01531
      0.0152476

    Where is the 25% overhead which should be visible?
    How can the first executable be even slower than the second one? I'm asking this because I'm doing a project which involves calling a lot of small functions in a row like this in order to compute the values of an array, and the code I've inherited does a very complex manipulation to avoid the virtual function call overhead. Now where is this famous call overhead?

    Read the article

  • RubyQt Crashing on QTableWidget

    - by gja
    I'm getting some weirdness with QtRuby when using a TableWidget. The table widget loads, but when you click on the elements in the row, the app segfaults and crashes.

      require 'Qt4'

      class SimpleModel < Qt::AbstractTableModel
        def rowCount(parent)
          return 1
        end

        def columnCount(parent)
          return 1
        end

        def data(index, role = Qt::DisplayRole)
          return Qt::Variant.new("Really Long String") if index.row == 0 and index.column == 0 and role == Qt::DisplayRole
          return Qt::Variant.new
        end
      end

      Qt::Application.new(ARGV) do
        Qt::TableWidget.new(1, 1) do
          set_model SimpleModel.new
          show
        end
        exec
      end

    The backtrace seems to imply that it is bombing in mousePressEvent:

      #6 0x01624643 in QAbstractItemView::pressed(QModelIndex const&) () from /usr/lib/libQtGui.so.4
      #7 0x016306f5 in QAbstractItemView::mousePressEvent(QMouseEvent*) () from /usr/lib/libQtGui.so.4

    If I override mousePressEvent and mouseMoveEvent, these kinds of crashes no longer happen. Am I doing something wrong over here, or can I chalk this up as a bug in QtRuby? I'm on Fedora 11, with the following packages installed:

      QtRuby-4.4.0-1.fc11.i586
      ruby-1.8.6.369-1.fc11.i586

    These crashes also happen when running the script on Windows.

    Read the article

  • raising a vb6 event using interop

    - by Steve
    Hi, I have a legacy VB6 component that I've imported into VS using tlbimp.exe to generate my interop assembly. The VB6 component defines an event that allows me to pass messages within VB6:

      Public Event Message(ByVal iMsg As Variant, oCancel As Variant)

    I would really like to be able to raise this event in my C# program, but it's getting imported as an event, not a delegate or something else useful. So I can only listen, but never fire. Does anyone know how to fire an event contained within VB6? The C# event looks like:

      [TypeLibType(16)]
      [ComVisible(false)]
      public interface __MyObj_Event
      {
          event __MyObj_MessageEventHandler Message;
      }

    I unfortunately cannot change the VB6 code. Thanks.

    Read the article

  • Xpath help. Get childnode with variable name

    - by Kim Andersen
    I have the following XML:

      <StatsContainer>
        <Variant1>0</Variant1>
        <Variant2>0.5</Variant2>
        <Variant3>1.2</Variant3>
        <Variant4>4.1</Variant4>
        <Variant5>93.9</Variant5>
        <Variant6>0.3</Variant6>
        <Variant7>0</Variant7>
        <Variant8>0</Variant8>
        <Variant9>0</Variant9>
        <Variant10>0</Variant10>
        <Variant11>0</Variant11>
        <Variant12>0</Variant12>
        <GlobalVariant1>4.6</GlobalVariant1>
        <GlobalVariant2>40.4</GlobalVariant2>
        <GlobalVariant3>13.8</GlobalVariant3>
        <GlobalVariant4>2.8</GlobalVariant4>
        <GlobalVariant5>35.6</GlobalVariant5>
        <GlobalVariant6>2.8</GlobalVariant6>
        <GlobalVariant7>0</GlobalVariant7>
        <GlobalVariant8>0</GlobalVariant8>
        <GlobalVariant9>0</GlobalVariant9>
        <GlobalVariant10>0</GlobalVariant10>
        <GlobalVariant11>0</GlobalVariant11>
        <GlobalVariant12>0</GlobalVariant12>
        <MosaicType>Boligtype</MosaicType>
        <OverRepresentedVariant>5</OverRepresentedVariant>
      </StatsContainer>

    As you can see, I have a number in the "OverRepresentedVariant" tag. This number can change from time to time. What I need is to grab the "Variant" tag with the right number. In the above case I need to get the value from the "Variant5" tag (93.9). Tomorrow the "OverRepresentedVariant" value might have changed to 3, which would mean that I should grab the "Variant3" value instead.

    So this is what I've got. I have a variable called $btOver which contains the above XML. I also have a variable called $btId which contains the "OverRepresentedVariant" value, like this:

      <xsl:variable name="btId" select="$btOver/OverRepresentedVariant" />

    So now I need some help finding the Variant tags with the right ID. The tags that I need will always be named "Variant" followed by an id. So how can I get the right tag? Thanks a lot in advance, folks.

    /Kim Andersen
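
    One way to express this in XSLT 1.0 (not from the original question) is to match the element name dynamically, e.g. select="$btOver/*[name() = concat('Variant', $btId)]". The same expression can be checked quickly with Python/lxml, with the XML shortened here for brevity:

      from lxml import etree

      xml = b"""<StatsContainer>
        <Variant3>1.2</Variant3>
        <Variant5>93.9</Variant5>
        <GlobalVariant5>35.6</GlobalVariant5>
        <OverRepresentedVariant>5</OverRepresentedVariant>
      </StatsContainer>"""

      root = etree.fromstring(xml)
      # name() must equal 'Variant' + the id, so GlobalVariant5 is excluded
      value = root.xpath("*[name() = concat('Variant', ../OverRepresentedVariant)]")[0].text
      print(value)  # 93.9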

    Read the article

  • Date format strings in .Net and Java

    - by mizipzor
    I have an application that runs on both C# .Net and Java - two entirely separate but identical code bases. The problem I'm having is formatting dates and numbers. For example: a user running the .Net variant inputs a date and a format string. The 26th of April 1986 is formatted as 1986-04-26. The actual date, along with the format string, is serialized to an XML file. Later, another user running the Java variant opens said XML file and looks at the date. I want them to look the same. What's the best approach here? There doesn't seem to be a one-to-one mapping between Java's and .Net's format strings. Should I limit the possible formats to a selection I know I can represent fully in both .Net and Java?
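
    One conservative approach (an illustration, not from the original question): whitelist only patterns whose tokens mean the same thing in both runtimes, or sidestep custom patterns for serialization and write ISO 8601. A Python sketch of the round-trip idea:

      from datetime import datetime

      # Tokens yyyy, MM, dd, HH, mm, ss are spelled identically in .Net format
      # strings and in Java's SimpleDateFormat, so patterns built only from
      # them stay portable between the two code bases.
      SHARED_PATTERNS = ["yyyy-MM-dd", "yyyy-MM-dd HH:mm:ss"]

      # For the XML file itself, a fixed ISO 8601 rendering avoids the
      # format-string mapping problem entirely.
      d = datetime(1986, 4, 26)
      wire = d.strftime("%Y-%m-%d")                   # "1986-04-26" goes into the XML
      assert datetime.strptime(wire, "%Y-%m-%d") == d # and parses back losslessly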

    Read the article

  • Fetching value from collection

    - by user334119
      public string GetProductVariantImageUrl(ShoppingCartItem shoppingCartItem)
      {
          string pictureUrl = String.Empty;
          ProductVariant productVariant = shoppingCartItem.ProductVariant;
          ProductVariantAttributeValueCollection pvaValues = shoppingCartItem.Attributes; // here the count comes out 0 (case 1)
      }

      public string GetAttributeDescription(ShoppingCartItem shoppingCartItem)
      {
          string result = string.Empty;
          ProductVariant productVariant = shoppingCartItem.ProductVariant;
          if (productVariant != null)
          {
              ProductVariantAttributeValueCollection pvaValues = shoppingCartItem.Attributes; // here the count is 1
          }
      }

    Why am I not able to get a count of 1 for case 1? Here is the class:

      /// <summary>Represents a shopping cart item</summary>
      public class ShoppingCartItem : BaseEntity
      {
          #region Fields
          private ProductVariant _cachedProductVariant;
          private ProductVariantAttributeValueCollection _cachedPvaValues;
          #endregion

          #region Ctor
          /// <summary>Creates a new instance of the shopping cart class</summary>
          public ShoppingCartItem()
          {
          }
          #endregion

          #region Properties
          /// <summary>Gets or sets the shopping cart item identifier</summary>
          public int ShoppingCartItemID { get; set; }

          /// <summary>Gets or sets the shopping cart type identifier</summary>
          public int ShoppingCartTypeID { get; set; }

          /// <summary>Gets or sets the customer session identifier</summary>
          public Guid CustomerSessionGUID { get; set; }

          /// <summary>Gets or sets the product variant identifier</summary>
          public int ProductVariantID { get; set; }

          /// <summary>Gets or sets the product variant attribute identifiers</summary>
          public List<int> AttributeIDs { get; set; }

          /// <summary>Gets or sets the text option</summary>
          public string TextOption { get; set; }

          /// <summary>Gets or sets the quantity</summary>
          public int Quantity { get; set; }

          /// <summary>Gets or sets the date and time of instance creation</summary>
          public DateTime CreatedOn { get; set; }

          /// <summary>Gets or sets the date and time of instance update</summary>
          public DateTime UpdatedOn { get; set; }
          #endregion

          #region Custom Properties
          /// <summary>Gets the log type</summary>
          public ShoppingCartTypeEnum ShoppingCartType
          {
              get { return (ShoppingCartTypeEnum)ShoppingCartTypeID; }
          }

          /// <summary>Gets the product variant</summary>
          public ProductVariant ProductVariant
          {
              get
              {
                  if (_cachedProductVariant == null)
                  {
                      _cachedProductVariant = ProductManager.GetProductVariantByID(ProductVariantID);
                  }
                  return _cachedProductVariant;
              }
          }

          /// <summary>Gets the product variant attribute values</summary>
          public ProductVariantAttributeValueCollection Attributes
          {
              get
              {
                  if (_cachedPvaValues == null)
                  {
                      ProductVariantAttributeValueCollection pvaValues = new ProductVariantAttributeValueCollection();
                      foreach (int attributeID in this.AttributeIDs)
                      {
                          ProductVariantAttributeValue pvaValue = ProductAttributeManager.GetProductVariantAttributeValueByID(attributeID);
                          if (pvaValue != null)
                              pvaValues.Add(pvaValue);
                      }
                      _cachedPvaValues = pvaValues;
                  }
                  return _cachedPvaValues;
              }
          }

          /// <summary>Gets the total weight</summary>
          public decimal TotalWeigth
          {
              get
              {
                  decimal totalWeigth = decimal.Zero;
                  ProductVariant productVariant = ProductVariant;
                  if (productVariant != null)
                  {
                      decimal attributesTotalWeight = decimal.Zero;
                      foreach (ProductVariantAttributeValue pvaValue in this.Attributes)
                      {
                          attributesTotalWeight += pvaValue.WeightAdjustment;
                      }
                      decimal unitWeight = productVariant.Weight + attributesTotalWeight;
                      totalWeigth = unitWeight * Quantity;
                  }
                  return totalWeigth;
              }
          }

          /// <summary>Gets a value indicating whether the shopping cart item is free shipping</summary>
          public bool IsFreeShipping
          {
              get
              {
                  ProductVariant productVariant = this.ProductVariant;
                  if (productVariant != null)
                      return productVariant.IsFreeShipping;
                  return true;
              }
          }

    Read the article

  • how to handle an array of objects in a session

    - by Robert
    Hello. In the project I'm working on I have a list (List<Item>) of objects that is saved in a session:

      Session.Add("SessionName", List);

    In the controller I build a view model with the data from this session:

      var arrayList = (List<Item>)Session["SessionName"];
      var arrayListItems = new List<CartItem>();
      foreach (var item in arrayList)
      {
          var listItem = new Item
          {
              Amount = item.Amount,
              Variant = item.variant,
              Id = item.Id
          };
          arrayListItems.Add(listItem);
      }
      var viewModel = new DetailViewModel { itemList = arrayListItems };

    And in my view I loop through the list of items and make a form for each of them, to be able to remove the item:

      <table>
      <% foreach (var Item in Model.itemList) { %>
        <% using (Html.BeginForm()) { %>
          <tr>
            <td><%=Html.Hidden(Settings.Prefix + ".VariantId", Item.Variant.Id)%></td>
            <td><%=Html.TextBox(Settings.Prefix + ".Amount", Item.Amount)%></td>
            <td><%=Html.Encode(Item.Amount)%></td>
            <td><input type="submit" value="Remove" /></td>
          </tr>
        <% } %>
      <% } %>
      </table>

    When the post from the submit button is handled, the item is removed from the array and exactly the same view model (with one item less in itemList) is posted back:

      return View("view.ascx", viewModel);

    When the post is handled and the view has reloaded, the values of the Html.Hidden and Html.TextBox fields are the values of the removed item. The value of the Html.Encode is the correct value. When I reload the page, the correct values are in the fields. Both times I build the view model in exactly the same way. I can't find the cause or solution of this error. I would be very happy with any help to solve this problem. Thanks in advance for any tips or help.

    Read the article

  • Facing problem in VB6.0 ActiveX controls design

    - by Dharmaraju
    Hi, this is Dharmaraju. I am facing a problem with ActiveX control design; kindly help me to resolve the issue.

    Problem description: I have created the property shown below for a textbox:

      Public Property Let DataControl_Value(ByVal Value As Variant)
      Public Property Get DataControl_Value() As Variant

    This property is editable at design time if I use the control in VB6.0 applications, but the same property is read-only if I use it in VC++ MFC applications. I have defined one more property like this:

      Public Property Let DataControl_DataItemDef(ByVal Value As DTMDATACONTROLLib.IXMLDOMNode)
      Public Property Get DataControl_DataItemDef() As DTMDATACONTROLLib.IXMLDOMNode

    In this case the "DataControl_DataItemDef" property is not available at design time at all [it is not displayed in the control's property window].

    Read the article

  • Excel VBA pass array of arrays to a function

    - by user429400
    I have one function that creates an array of arrays, and one function that should get the resulting array and write it to the spreadsheet. I don't find the syntax which will allow me to pass the array of arrays to the second function... Could you please help? Here is my code.

    The function that creates the array of arrays:

      Function GetCellDetails(dict1 As Dictionary, dict2 As Dictionary) As Variant
          Dim arr1, arr2
          arr1 = dict1.Items
          arr2 = dict2.Items
          GetCellDetails = Array(arr1, arr2)
      End Function

    The function that writes it to the spreadsheet:

      Sub WriteCellDataToMemory(arr As Variant, day As Integer, cellId As Integer, nCells As Integer)
          row = CellIdToMemRow(cellId, nCells)
          col = DayToMemCol(day)
          arrSize = UBound(arr, 2)
          Range(Cells(row, col), Cells(row + arrSize, col + 2)) = Application.Transpose(arr)
      End Sub

    The code that calls the functions:

      Dim CellDetails
      CellDetails = GetCellDetails(dict1, dict2)
      WriteCellDataToMemory CellDetails, day, cellId, nCells

    Thanks, Li

    Read the article

  • Communication between layers in an application

    - by Petar Minchev
    Hi guys! Let's assume we have the following method in the business layer. What's the best practice to tell the UI layer that something went wrong, and also pass along the error message? Should the method return an empty String when everything was OK and the error message otherwise, or should it throw another exception from the catch block, wrapping the caught exception? If we choose the second variant, then the UI needs yet another try/catch, which may be one try/catch too many. Here is pseudocode for the first variant:

      public String updateSomething() {
          try {
              // Begin transaction here
              dataLayer.do1();
              dataLayer.do2();
              dataLayer.doN();
          } catch (Exception exc) {
              // Rollback transaction code here
              return exc.message;
          }
          return "";
      }

    Is this a good practice, or should I throw another exception in the catch (in which case the method would be void)?
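
    For contrast, a minimal sketch of the second variant - wrapping the caught exception in a business-layer exception - written in Python purely to show the shape of the pattern (all names invented):

      class ServiceError(Exception):
          """Raised by the business layer so the UI can show one friendly message."""

      def update_something(data_layer):
          try:
              # Begin transaction here
              data_layer.do1()
              data_layer.do2()
              data_layer.do_n()
          except Exception as exc:
              # Rollback transaction here, then rethrow with the cause preserved
              raise ServiceError("Update failed") from exc

      # The UI layer then needs exactly one catch site:
      #   try: update_something(dl)
      #   except ServiceError as e: show_error(str(e))

    The usual argument for this variant is that a returned error string can be silently ignored, while an exception cannot.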

    Read the article

  • VBScript Permission Denied on CopyFile

    - by Chris
    I'm running a VBScript in SQL Agent, but I get 'Permission Denied' on line 34 (the first copy attempt). I've run this script outside SQL Agent with no problems.

      Function Main()
          Const SourceDrive As String = "X:\"
          Dim fso
          Dim Today
          Dim FileName
          Dim FromFile
          Dim FromDrive
          Dim ArchivePath

          Set fso = CreateObject("Scripting.FileSystemObject")
          Today = Format(Now, "yyyyMMdd")

          'To add more sources just add them to the array list
          Dim Sources() As Variant
          Sources() = Array("Item1", _
                            "Item2")

          'To add more targets just add them to the array list
          Dim Targets() As Variant
          Targets() = Array("C:\Users\myalias\Desktop\MyToFolder", _
                            "C:\Users\myalias\Desktop\MyToFolder2")

          For i = 0 To UBound(Sources)
              FileName = "WebSurveyAlertCallbacks_" & Sources(i) & "_" & Today & ".xls"
              FromDrive = fso.BuildPath(SourceDrive, Sources(i))
              FromFile = fso.BuildPath(FromDrive, FileName)
              ArchivePath = fso.BuildPath(FromDrive, "Archive")
              If fso.FileExists(FromFile) Then
                  For t = 0 To UBound(Targets)
                      fso.CopyFile FromFile, fso.BuildPath(Targets(t), FileName), True  ' line 34: the first copy attempt
                  Next
                  fso.CopyFile FromFile, fso.BuildPath(ArchivePath, FileName), True
                  fso.DeleteFile FromFile
              End If
          Next

          Set fso = Nothing
          Main = DTSTaskExecResult_Success
      End Function
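
    As an aside (not an answer to the permission problem itself), the copy-to-each-target-then-archive flow is easy to prototype outside SQL Agent, which helps separate the script's logic from the Agent service account's rights. A Python sketch using the same placeholder paths:

      import shutil
      from datetime import date
      from pathlib import Path

      SOURCE_DRIVE = Path("X:\\")
      SOURCES = ["Item1", "Item2"]
      TARGETS = [Path(r"C:\Users\myalias\Desktop\MyToFolder"),
                 Path(r"C:\Users\myalias\Desktop\MyToFolder2")]

      today = date.today().strftime("%Y%m%d")
      for source in SOURCES:
          name = f"WebSurveyAlertCallbacks_{source}_{today}.xls"
          from_file = SOURCE_DRIVE / source / name
          if from_file.exists():
              for target in TARGETS:
                  shutil.copy2(from_file, target / name)                         # copy to every target
              shutil.copy2(from_file, from_file.parent / "Archive" / name)       # then archive
              from_file.unlink()                                                  # and remove the original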

    Read the article

  • The curious case of SOA Human tasks' automatic completion

    - by Kavitha Srinivasan
    A large south-Asian insurance industry customer using Oracle BPM and SOA ran into this. I have survived this ordeal previously myself but didn't think to blog it then. However, it seems like a good idea to share this knowledge with this reader community, and so here goes...

    Symptom: A human task (in a SOA/BPEL/BPM process) completes automatically when it should have been assigned to a proper user. There are no stack traces and no related exceptions in the logs.

    Why: The product is designed to treat human tasks that don't have assignees as eligible for completion, and hence no warning/error messages are recorded in the logs.

    Usecase variant: A variant of this usecase, where an assignee doesn't exist in the repository, is treated as a recoverable error. One can find this in the 'pending recovery' instances in EM and reactivate the task by changing the assignees in the BPM workspace as a process owner/administrator. But back to the usecase where tasks get completed automatically...

    When: This happens when the users/groups assigned to a task are 'empty' or null. This has been seen only on tasks whose assignees are derived from an assignment expression - i.e. at runtime an XPath is used to determine who to assign the task to. (This should not happen if task assignees are populated via swim-lane roles.)

    How to detect this in EM: For instances that are auto-completed thus, one will notice in the Audit Trail of such instances that the 'outcome' of the task is empty. The 'acquired by' element will also show as empty/null. Enabling the oracle.soa.services.workflow.* logger in EM should print more verbose messages about this.

    How to fix this: The application code needs two fixes:

    Input to HT: The XSLT/XPath used to set the task 'assignee', and the process itself, should be enhanced to handle nulls better. For example: if no data is found, set assignees to an alternate value, force default assignees, etc.

    Output from HT: Additionally, in the application code, check that the 'outcome' of the HT is not null. If null, route the task to be performed again after setting the assignee correctly. Beginning with PS4FP, one should be able to use 'grab' to route back to the task so it fires again.

    Hope this helps.

    Read the article

  • Intel Xeon E5 (Sandy Bridge-EP) and SQL Server 2012 Benchmarks

    - by jchang
    Intel officially announced the Xeon E5-2600 series processor, based on the Sandy Bridge-EP variant, with up to 8 cores and 20MB LLC per socket. Only one TPC benchmark accompanied the product launch; a summary is below.

      Processors        Cores/socket  Frequency  Memory           SQL Server  Vendor  TPC-E
      2 x Xeon E5-2690  8             2.9GHz     512GB (16x32GB)  2012        IBM     1,863.23
      2 x Xeon E7-2870  10            2.4GHz     512GB (32x16GB)  2008R2      IBM     1,560.70
      2 x Xeon X5690    6             3.46GHz    192GB (12x16GB)  2008R2      HP      1,284.14

    Note: the HP report lists SQL Server 2008 R2 Enterprise Edition...(read more)

    Read the article

  • Agile PLM 9.3 Service Pack 2 (SP2 or 9.3.0.2) is released along with AUT 1.6.2.0 and AutoVue 20 for

    - by Shane Goodwin
    Oracle released Agile PLM 9.3 SP2 on June 14 and the Agile installer for AutoVue 20 for Agile PLM on April 30. Also available are new versions of AUT and Averify - 1.6.3 for both tools.

    9.3 SP2 is a combined English and NLS release for use on any version of 9.3.0. SP2 contains many bug fixes and rolls up several Hot Fixes - please review the Readme for all the details. In addition, this release also addresses some scalability issues when working with very large Exports and Reports. When exporting very large BOMs, the export module will now release objects more efficiently to reduce the amount of memory consumed on the Application Server. Administrators can also control the maximum row limits for Users versus system processes, like ACS. Several out-of-the-box BOM reports have also been changed to use a new row limit option. The combination of all these changes will provide more stability on the application server for customers managing very large datasets.

    9.3 SP2 also adds support for Oracle Database 11gR2 for Windows, Oracle Internet Directory (OID) and Oracle Access Manager (OAM). Please note that currently the Variant Patch is not intended to be released for SP2. Customers running the Variant Patch should remain on 9.3.0.0 or 9.3.0.1.

    Back in April, we also released the AutoVue 20 for Agile PLM installer. AutoVue 20 has many new features which will help Agile PLM customers. Large multi-page Word documents and 2D CAD documents will open more quickly to the first page or first rendition. Memory usage is lower when working with 3D models. There are many new formats supported for MCAD, 2D CAD, and EDA. AutoVue 20 is immediately available for Windows and Linux platforms.

    The new software can be found in E-Delivery or Metalink / Oracle Support:

    - AutoVue 20 for Agile PLM is on E-Delivery with part number B58963-01
    - Oracle Agile PLM 9.3 Service Pack 2 (9.3.0.2): My Oracle Support Patch ID 9782736
    - AVERIFY 1.6.3: My Oracle Support Patch ID 9791892
    - AUT 1.6.3: My Oracle Support Patch ID 9791908
    - Agile PLM 9.3 SP2 Documentation is available on the OTN Agile Documentation Page

    Read the article

  • Eclipse vs. Aptana

    - by RPK
    I know that Eclipse is a universal IDE and that a variety of plugins are available to extend it. What is the difference between the original Eclipse IDE, Aptana, and NetBeans? I looked into Wikipedia and came to know that the latter two originate from the main Eclipse. For Aptana especially, what was the need to produce a new variant that resembles its base IDE so closely? If your preferred choice is Eclipse itself, what makes it unique compared to the other two?

    Read the article

  • Oracle Enterprise Manager Ops Center : Using Operational Profiles to Install Packages and other Content

    - by LeonShaner
    Oracle Enterprise Manager Ops Center provides numerous ways to deploy content, such as through OS Update Profiles, as part of an OS Provisioning plan, or through combinations of those and other "Install Software" capabilities of Deployment Plans. This short "how-to" blog will highlight an alternative way to deploy content using Operational Profiles.

    Usually we think of Operational Profiles as a way to execute a simple "one-time" script to perform a basic system administration function, which can optionally be based on user input; however, Operational Profiles can be much more powerful than that. There is often more to performing an action than merely running a script - sometimes configuration files, packages, binaries, and other scripts are needed to perform the action, and sometimes the user would like to leave such content on the system for later use.

    For shell scripts and other content written to be generic enough to work on any flavor of UNIX, converting the same scripts and configuration files into Solaris 10 SVR4 package, Solaris 11 IPS package, and/or Linux RPM formats might be seen as three times the work, for little appreciable gain. That is where using an Operational Profile to deploy simple scripts and other generic content can be very helpful. The approach is so powerful that pretty much any kind of content can be deployed using an Operational Profile, provided the files involved are not overly large and it is not necessary to convert the content into UNIX variant-specific formats.

    The basic formula for deploying content with an Operational Profile is as follows:

    - Begin with a traditional script header, which is a UNIX shell script that will be responsible for decoding and extracting content, copying files into the right places, and executing any other scripts and commands needed to install and configure that content.
    - Include steps to make the script platform-aware, to do the right thing for a given UNIX variant, or print a "sorry" message if the operator has somehow tried to run the Operational Profile on a system where the script is not designed to run. Ops Center can constrain execution by target type, so such checks at this level are an added safeguard, but they are also useful with the generic target type of "Operating System", where the admin wants the script to "do the right thing," whatever the UNIX variant.
    - Include helpful output to show script progress, and any other informational messages that can help the admin determine what has gone wrong in the case of a problem in script execution. Such messages will be shown in the job execution log.
    - Include necessary "clean up" steps for normal and error exit conditions.
    - Set non-zero exit codes when appropriate - a non-zero exit code will cause an Operational Profile job to be marked failed, which is the admin's cue to look into the job details for diagnostic messages in the output from the script.

    That first bullet deserves some explanation. If Operational Profiles are usually simple "one-time" scripts and binary content is not allowed, then how does the actual content - packages, binaries, and other scripts - get delivered along with the script? More specifically, how does one include such content without needing to first create some kind of traditional package? All that is required is to simply encode the content and append it to the end of the Operational Profile. The header portion of the Operational Profile will need to contain the commands to decode the embedded content that has been appended to the bottom of the script. The header code can do whatever else is needed, and finally clean up any intermediate files that were created during the decoding and extraction of the content.

    One way to encode binary and other content for inclusion in a script is to use the "uuencode" utility to convert the content into simple base64 ASCII text - a form that is suitable to be appended to an Operational Profile. The behavior of the "uudecode" utility is such that it will skip over any parts of the input that do not fit the uuencoded "begin" and "end" clauses. For that reason, your header script will be skipped over, and uudecode will find your embedded content, which you will uuencode and paste at the end of the Operational Profile. You can have as many "begin"/"end" clauses as you need - just separate each embedded file by an empty line between "begin" and "end" clauses.

    Example: Install SUNWsneep and set the system serial number

    Script: deploySUNWsneep.sh ( <- right-click / save to download )

    Highlights:

      #!/bin/sh
      # Required variables:
      OC_SERIAL="$OC_SERIAL" # The user-supplied serial number for the asset
      ...

    The above is a good practice, showing right up front what kind of input the Operational Profile will require. The right-hand side, where $OC_SERIAL appears in this example, will be filled in by Ops Center based on the user input at deployment time.

    The script goes on to restrict the use of the program to the intended OS type (Solaris 10 or older in this example, but other content might be suitable for Solaris 11 or Linux - it depends on the content and the script that will handle it). A temporary working directory is created, and then we have the command that decodes the embedded content from "self", which in scripting terms is $0 (a variable that expands to the name of the currently executing script):

      # Pass myself through uudecode, which will extract content to the current dir
      uudecode $0

    At that point, whatever content was appended in uuencoded form at the end of the script has been written out to the current directory. In this example that yields a file, SUNWsneep.7.0.zip, which the rest of the script proceeds to unzip and pkgadd, followed by running "/opt/SUNWsneep/bin/sneep -s $OC_SERIAL", which is the command that stores the system serial for future use by other programs such as Explorer. Don't get hung up on the example having used a pkgadd command. The content started as a zip file, and it could have been a tar.gz or any other file. This approach simply decodes the file; the header portion of the script has to make sense of the file and do the right thing (i.e. it's up to you).

    The script goes on to clean up after itself, whether or not the above was successful. Errors are echo'd by the script, and a non-zero exit code is set where appropriate. Second to last, we have:

      # just in case, exit explicitly, so that uuencoded content will not cause error
      OPCleanUP
      exit

      # The rest of the script is ignored, except by uudecode
      #
      # UUencoded content follows
      #
      # e.g. for each file needed,
      #   $ uuencode -m {source} {source} > {target}.uu5
      # then paste the {target}.uu5 files below
      # they will be extracted into the working dir at $TDIR

    The commentary above also describes how to encode the content. Finally we have the uuencoded content:

      begin-base64 444 SUNWsneep.7.0.zip
      UEsDBBQAAAAIAPsRy0Di3vnukAAAAMcAAAAKABUAcmVhZG1lLnR4dFVUCQADOqnVT7up
      ...
      VXgAAFBLBQYAAAAAAgACAJEAAADTNwEAAAA=
      ====

    That last line of "====" is the base64 uuencode equivalent of a blank line, followed by "end", and as mentioned you can have as many begin/end clauses as you need. Just separate each embedded file by a blank line after each ==== and before each begin-base64.

    Deploying the example Operational Profile prompts for the system serial number as the one required field. The job succeeded, and the job details show the kind of diagnostic messages that the example script produces.

    This same general approach could be used to deploy Explorer and other useful utilities and scripts.

    Please let us know what you think. Until next time...

    \Leon
    --
    Leon Shaner | Senior IT/Product Architect
    Systems Management | Ops Center Engineering @ Oracle

    The views expressed on this [blog; Web site] are my own and do not necessarily reflect the views of Oracle.

    For more information, please go to the Oracle Enterprise Manager web page or follow us at: Twitter | Facebook | YouTube | Linkedin | Newsletter
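
    The same append-a-payload idea carries over to other scripting languages. As a side note (my own illustration, not from the blog), here is a minimal Python analogue that embeds its payload as base64 instead of uuencode:

      #!/usr/bin/env python3
      # Minimal self-extracting script: the payload travels inside the script itself.
      import base64
      import pathlib

      PAYLOAD = """
      aGVsbG8gd29ybGQK
      """  # base64 of the embedded file ("hello world\n" in this example)

      def main():
          # b64decode silently skips the surrounding whitespace/newlines
          data = base64.b64decode(PAYLOAD)
          out = pathlib.Path("extracted.txt")
          out.write_bytes(data)  # drop the content next to the script
          print(f"wrote {out} ({len(data)} bytes)")

      if __name__ == "__main__":
          main()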

    Read the article
