Search Results

Search found 12083 results on 484 pages for 'general purpose'.


  • ROracle support for TimesTen In-Memory Database

    - by Sherry LaMonica
    Today's guest post comes from Jason Feldhaus, a Consulting Member of Technical Staff in the TimesTen Database organization at Oracle. He shares with us a sample session using ROracle with the TimesTen In-Memory Database.

    Beginning in version 1.1-4, ROracle includes support for the Oracle TimesTen In-Memory Database, version 11.2.2. TimesTen is a relational database that provides very fast response times and high throughput through its memory-centric architecture. TimesTen is designed for low latency, high-volume data, and event and transaction management. A TimesTen database resides entirely in memory, so no disk I/O is required for transactions and query operations. TimesTen is used in applications requiring very fast and predictable response time, such as real-time financial services trading applications and large web applications. TimesTen can be used as the database of record or as a relational cache database to Oracle Database.

    ROracle provides an interface between R and the database, exposing the rich functionality of the R statistical programming environment through the SQL query language. ROracle uses the OCI libraries to handle database connections, providing much better performance than standard ODBC. The latest ROracle enhancements include:

    - Support for Oracle TimesTen In-Memory Database
    - Support for Date-Time using R's POSIXct/POSIXlt data types
    - RAW, BLOB and BFILE data type support
    - Option to specify the number of rows per fetch operation
    - Option to prefetch LOB data
    - Break support using Ctrl-C
    - Statement caching support

    TimesTen 11.2.2 contains enhanced support for analytics workloads and complex queries:

    - Analytic functions: AVG, SUM, COUNT, MAX, MIN, DENSE_RANK, RANK, ROW_NUMBER, FIRST_VALUE and LAST_VALUE
    - Analytic clauses: OVER PARTITION BY and OVER ORDER BY
    - Multidimensional grouping operators:
      - Grouping clauses: GROUP BY CUBE, GROUP BY ROLLUP, GROUP BY GROUPING SETS
      - Grouping functions: GROUP, GROUPING_ID, GROUP_ID
    - WITH clause, which allows repeated references to a named subquery block
    - Aggregate expressions over DISTINCT expressions
    - General expressions that return a character string in the source or a pattern within the LIKE predicate
    - Ability to order nulls first or last in a sort result (NULLS FIRST or NULLS LAST in the ORDER BY clause)

    Note: Some functionality is only available with Oracle Exalytics; refer to the TimesTen product licensing document for details.

    Connecting to TimesTen is easy with ROracle. Simply install and load the ROracle package and load the driver.

        > install.packages("ROracle")
        > library(ROracle)
        Loading required package: DBI
        > drv <- dbDriver("Oracle")

    Once the ROracle package is installed, create a database connection object and connect to a TimesTen direct driver DSN as the OS user.

        > conn <- dbConnect(drv, username ="", password="", dbname = "localhost/SampleDb_1122:timesten_direct")

    You have the option to report the server type - Oracle or TimesTen?

        > print (paste ("Server type =", dbGetInfo (conn)$serverType))
        [1] "Server type = TimesTen IMDB"

    To create tables in the database using R data frame objects, use the function dbWriteTable. In the following example we write the built-in iris data frame to TimesTen. The iris data set is a small example data set containing 150 rows and 5 columns. We include it here not to highlight performance, but so users can easily run this example in their R session.

        > dbWriteTable (conn, "IRIS", iris, overwrite=TRUE, ora.number=FALSE)
        [1] TRUE

    Verify that the newly created IRIS table is available in the database.
    To list the available tables and table columns in the database, use dbListTables and dbListFields, respectively.

        > dbListTables (conn)
        [1] "IRIS"
        > dbListFields (conn, "IRIS")
        [1] "SEPAL.LENGTH" "SEPAL.WIDTH" "PETAL.LENGTH" "PETAL.WIDTH" "SPECIES"

    To retrieve a summary of the data from the database we need to save the results to a local object. The following call saves the results of the query as a local R object, iris.summary. The ROracle function dbGetQuery is used to execute an arbitrary SQL statement against the database. When connected to TimesTen, the SQL statement is processed completely within main memory for the fastest response time.

        > iris.summary <- dbGetQuery(conn, 'SELECT SPECIES, AVG ("SEPAL.LENGTH") AS AVG_SLENGTH, AVG ("SEPAL.WIDTH") AS AVG_SWIDTH, AVG ("PETAL.LENGTH") AS AVG_PLENGTH, AVG ("PETAL.WIDTH") AS AVG_PWIDTH FROM IRIS GROUP BY ROLLUP (SPECIES)')
        > iris.summary
             SPECIES AVG_SLENGTH AVG_SWIDTH AVG_PLENGTH AVG_PWIDTH
        1     setosa    5.006000   3.428000       1.462   0.246000
        2 versicolor    5.936000   2.770000       4.260   1.326000
        3  virginica    6.588000   2.974000       5.552   2.026000
        4       <NA>    5.843333   3.057333       3.758   1.199333

    Finally, disconnect from the TimesTen database.

        > dbCommit (conn)
        [1] TRUE
        > dbDisconnect (conn)
        [1] TRUE

    We encourage you to download Oracle software for evaluation from the Oracle Technology Network. See these links for our software: TimesTen In-Memory Database, ROracle. As always, we welcome comments and questions on the TimesTen and Oracle R technical forums.

    Read the article

  • Upgrading to Code Based Migrations EF 4.3.1 with Connector/Net 6.6

    - by GABMARTINEZ
    Entity Framework 4.3.1 includes a new feature called code first migrations. We are adding support for this feature in our upcoming 6.6 release of Connector/Net. In this walk-through we'll see the workflow of code-based migrations when you have an existing application and you would like to upgrade to EF 4.3.1 and use this approach, so you can keep track of the changes that you make to your database.

    The first thing we need to do is add the new Entity Framework 4.3.1 package to our application. This should be done via the NuGet package manager. You can read more about why EF is not part of the .NET Framework here.

    Adding EF 4.3.1 to our existing application

    Inside VS 2010 go to Tools -> Library Package Manager -> Package Manager Console. This will open the PowerShell host window where we can work with all the EF commands. In order to install this library into your existing application, type:

        Install-Package EntityFramework

    This will make some changes to your application, so let's check them. In your .config file you'll see a <configSections> element which contains the EntityFramework version you have, and an <entityFramework> section was also added, as shown below. This section is configured by default to use SQL Express, which won't be necessary in this case, so you can comment it out or leave it empty. Also please make sure you're using the Connector/Net 6.6.x version, which is the one that has this support, as shown in the previous image.

    At this point we face one issue: in order to be able to work with migrations we need the __MigrationHistory table, which we don't have yet since our database was created with an older version. This table is used to keep track of the changes in our model, so we need to get it into our existing database.

    Getting a migration-history table into an existing database

    The first thing we need to do to enable migrations in our existing application is to create our configuration class, which will set up the MySqlClient provider as our SQL generator. So we have to add it with the following code:

        using System.Data.Entity.Migrations;   // add this at the top of your cs file

        // Make sure to use the name of your existing DbContext
        public class Configuration : DbMigrationsConfiguration<NameOfYourDbContext>
        {
            public Configuration()
            {
                // Set automatic migrations to false since we'll be applying the migrations manually in this case.
                this.AutomaticMigrationsEnabled = false;
                SetSqlGenerator("MySql.Data.MySqlClient", new MySql.Data.Entity.MySqlMigrationSqlGenerator());
            }
        }

    This code sets up the configuration that we'll be using when executing all the migrations for our application. Once we have done this we can build our application to check that everything is fine.

    Creating our initial migration

    Now let's add our initial migration. In the Package Manager Console, execute "add-migration InitialCreate". You can use any other name, but I like to set this as our initial create for future reference. After we run this command, some changes are made to our application:

    - A new Migrations folder is created.
    - A new migration class called InitialCreate, which in most cases should have empty Up and Down methods as long as your database is up to date with your model. Since all your entities already exist, delete any duplicated code that creates an entity which already exists in your database. I find this easier when you don't have any pending updates to apply to your database.
    Now we have our empty migration that will make no changes in our database and represents how things stand at the beginning of our migrations. Finally, let's create our __MigrationHistory table. Optionally, you can add SQL code to drop the EdmMetadata table, which is not needed anymore.

        public override void Up()
        {
            // Just make sure that you used EF 4.1 or a later version
            Sql("DROP TABLE EdmMetadata");
        }

    From our Package Manager Console let's type:

        Update-Database

    If you would like to see the operations made by each Update-Database command, you can add the -Verbose flag after Update-Database. This will make two important changes. It will execute the Up method in the initial migration, which makes no changes to the database. And second, and very important, it will create the __MigrationHistory table necessary to keep track of your changes. The next time you make a change to your database it will compare the current model to the one stored in the Model column of this table.

    Conclusion

    The important point of this walk-through is that we must create our initial migration before we start making any changes to our model. This way we'll be adding the necessary __MigrationHistory table to our existing database, so we can keep our database up to date with all the changes we make in our context model using migrations. Hope you have found this information useful. Please let us know if you have any questions or comments; also please check our forums here, where we keep answering questions for the community in general. Happy MySQL/Net coding!

    Read the article

  • Webcor Builders Coordinates Construction Schedules and Mitigates Potential Delays More Efficiently with Integrated Project Management

    - by Sylvie MacKenzie, PMP
    With more than 40 years of commercial construction experience, Webcor Builders is a leading builder of distinguished, high-profile projects, including high-rise condominiums and hotels, laboratories, healthcare centers, and public works projects. Webcor is also known for its award-winning concrete, interior construction, historic restoration, and seismic renovation work. The company has completed more than 50 million square feet of projects to date.

    Considering the variety and complexity of the construction projects Webcor undertakes, an integrated project management solution is critical to ensuring optimal efficiency and completing client projects on time and on budget. The company previously used a number of scheduling systems for its various building projects. These packages provided different levels of schedule detail and required schedulers, engineers, and other employees to learn multiple systems. From an IT cost and complexity perspective, the company had to manage multiple scheduling systems and pay for multiple sets of licenses.

    The company looked to standardize on an enterprise project management system and selected Oracle's Primavera P6 Enterprise Project Portfolio Management. Webcor uses the solution's advanced capabilities to schedule complex projects, analyze delays, model and propose multiple scenarios to demonstrate and mitigate delays and cost overruns, and process that information efficiently to deliver the scheduling precision that public and private projects require. In fact, the solution was instrumental in helping the company's expansion into public sector projects during the recent economic downturn, and with Primavera P6 in place, the company can deliver the precise scheduling and milestone reporting capabilities required for large public projects.

    The solution is now in use managing the high-profile University of California, Berkeley Memorial Stadium project. Webcor was hired as construction manager and general contractor for the stadium renovation project, a fast-paced project located near the seismically active Hayward Fault Zone. Due to the University of California's football schedule, meeting the University's deadline for the coming season placed Webcor in a situation where risk awareness and early warnings of issues would be paramount. Webcor and the extended project team needed a solution that could instantly analyze alternate scenarios to mitigate potential delays; Primavera would deliver those answers. The team would also need to enable multiple stakeholders to use an internet-based platform to access the schedule from various locations, and model complicated sequencing requirements where swift decisions would be made to keep the project on track. The schedule is an integral part of Webcor's construction management process for the stadium project.
    Rather than providing the client with the industry-standard monthly update, Webcor updates the critical path method (CPM) schedule on a weekly basis. The project team also reviews and updates the schedule weekly to confirm that progress and forecasted performance are accurate.

    Hired by the University for its ability to deliver in high-risk environments, the Webcor team was recently hit with a design supplement that could have added up to 70 days to the project. Using Oracle Primavera P6, the team sprang into action, analyzing multiple "what if" scenarios to review mitigation means and methods. Determined to make sure the Bears could take the field in the coming season, the project team nearly eliminated the impact with their creative analysis in working the schedule. The total time from the issuance of the final design supplement to an agreed mitigation response was less than one week; leveraging the Oracle Primavera solution, Webcor was able to deliver superior customer value.

    With the ability to efficiently manage projects and schedules, Webcor can ensure it completes its projects on time and on budget, as well as inform clients about what changes to plans will mean in terms of delays and additional costs.

    Read the complete customer case study at: http://www.oracle.com/us/corporate/customers/customersearch/webcor-builders-1-primavera-ss-1639886.html

    Read the article

  • BizTalk 2009 - Custom Functoid Wizard

    - by StuartBrierley
    When creating BizTalk maps you may find that there are times when you need to perform tasks that the standard functoids do not cover. At other times you may find yourself repeating a pattern of standard functoids over and over again, adding visual complexity to an otherwise simple process. In these cases you may find it preferable to create your own custom functoids. In the past I have created a number of custom functoids from scratch, but recently I decided to try out the Custom Functoid Wizard for BizTalk 2009.

    After downloading and installing the wizard you should start Visual Studio and select to create a new BizTalk Server Functoid Project. Following the splash screen you will be presented with the General Properties screen, where you can set the class name, namespace, assembly name and strong name key file.

    The next screen is the first set of properties for the functoid. First of all is the functoid ID; this must be a value above 6000. You should also then set the name, tooltip and description of the functoid. The name will appear in the Visual Studio toolbox and the tooltip on hover over in the toolbox. The description will be shown when you configure the functoid inputs when using it in a map; as such it should provide a decent level of information to allow the functoid to be used.

    Next you must set the category, exception message, icon and implementation language. The category will affect the positioning of the functoid within the toolbox and also some of the behaviours of the functoid.

    We must then define the parameters and connections for our new functoid. Here you can define the names and types of your input parameters along with the minimum and maximum number of input connections. You will also need to define the types of connections accepted and the output type of the functoid. Finally you can click finish and your custom functoid project will be created.

    The results of this process can be seen in the solution explorer, where you will see that a project, a functoid class file and a resource file have been created for you. If you open the class file you will see that the following code has been created for you. The "base" constructor sets all the properties that you previously detailed in the Custom Functoid Wizard.

        public TestFunctoids() : base()
        {
            int functoidID;
            // This has to be a number greater than 6000
            functoidID = System.Convert.ToInt32(resmgr.GetString("FunctoidId"));
            this.ID = functoidID;

            // Set resource strings, bitmaps
            SetupResourceAssembly(ResourceName, Assembly.GetExecutingAssembly());
            SetName("FunctoidName");
            SetTooltip("FunctoidToolTip");
            SetDescription("FunctoidDescription");
            SetBitmap("FunctoidBitmap");

            // Minimum and maximum parameters that the functoid accepts
            this.SetMinParams(2);
            this.SetMaxParams(2);

            // Function name that needs to be called when this functoid is invoked.
            // Put this in the GAC.
            SetExternalFunctionName(GetType().Assembly.FullName,
                "MyCompany.BizTalk.Functoids.TestFuntoids.TestFunctoids", "Execute");

            // Category for this functoid.
            this.Category = FunctoidCategory.String;

            // Input and output connection types
            this.OutputConnectionType = ConnectionType.AllExceptRecord;
            AddInputConnectionType(ConnectionType.AllExceptRecord);
        }

    The "Execute" function provides a skeleton containing the code to be executed by your new functoid. The inputs and outputs should match those you defined in the Custom Functoid Wizard.
        public System.Int32 Execute(System.Int32 Cool)
        {
            ResourceManager resmgr = new ResourceManager(ResourceName, Assembly.GetExecutingAssembly());
            try
            {
                // TODO: Implement functoid logic and return the result
                return Cool;
            }
            catch (Exception e)
            {
                throw new Exception(resmgr.GetString("FunctoidException"), e);
            }
        }

    Opening the resource file you will see some of the various string values that you defined in the Custom Functoid Wizard - Name, Tooltip, Description and Exception. You can also select to look at the image resources. This will display the embedded icon image for the functoid. To change this, right-click the icon and select "Import from File".

    Once you have completed the skeleton code you can then look at trying out your functoid. To do this you will need to build the project, copy the compiled DLL to C:\Program Files\Microsoft BizTalk Server 2009\Developer Tools\Mapper Extensions and then refresh the toolbox in Visual Studio.

    Read the article

  • Oracle NoSQL Database Exceeds 1 Million Mixed YCSB Ops/Sec

    - by Charles Lamb
    We ran a set of YCSB performance tests on Oracle NoSQL Database using SSD cards and Intel Xeon E5-2690 CPUs with the goal of achieving 1M mixed ops/sec on a 95% read / 5% update workload. We used the standard YCSB parameters: 13 byte keys and 1KB data size (1,102 bytes after serialization). The maximum database size was 2 billion records, or approximately 2 TB of data. We sized the shards to ensure that this was not an "in-memory" test (i.e. the data portion of the B-Trees did not fit into memory). All updates were durable and used the "simple majority" replica ack policy, effectively 'committing to the network'. All read operations used the Consistency.NONE_REQUIRED parameter, allowing reads to be performed on any replica.

    In the past we have achieved 100K ops/sec using SSD cards on a single shard cluster (replication factor 3), so for this test we used 10 shards on 15 Storage Nodes, with each SN carrying 2 Rep Nodes and each RN assigned to its own SSD card. After correcting a scaling problem in YCSB, we blew past the 1M ops/sec mark with 8 shards and proceeded to hit 1.2M ops/sec with 10 shards.

    Hardware Configuration

    We used 15 servers, each configured with two 335 GB SSD cards. We did not have homogeneous CPUs across all 15 servers available to us, so 12 of the 15 were Xeon E5-2690, 2.9 GHz, 2 sockets, 32 threads, 193 GB RAM, and the other 3 were Xeon E5-2680, 2.7 GHz, 2 sockets, 32 threads, 193 GB RAM. There might have been some upside in having all 15 machines configured with the faster CPU, but since CPU was not the limiting factor we don't believe the improvement would be significant. The client machines were Xeon X5670, 2.93 GHz, 2 sockets, 24 threads, 96 GB RAM. Although the clients had 96 GB of RAM, neither the NoSQL Database nor the YCSB clients require anywhere near that amount of memory, and the test could just as easily have been run with much less. Networking was all 10GigE.

    YCSB Scaling Problem

    We made three modifications to the YCSB benchmark. The first was to allow the test to accommodate more than 2 billion records (effectively ints vs. longs). To keep the key size constant, we changed the code to use base 32 for the user ids.

    The second change involved the way we run the YCSB client, in order to make the test itself horizontally scalable. The basic problem has to do with the way the YCSB test creates its Zipfian distribution of keys, which is intended to model "real" loads by generating clusters of key collisions. Unfortunately, the percentage of collisions on the most contentious keys remains the same even as the number of keys in the database increases. As we scale up the load, the number of collisions on those keys increases as well, eventually exceeding the capacity of the single server used for a given key. This is not a workload that is realistic or amenable to horizontal scaling. YCSB does provide alternate key distribution algorithms, so this is not a shortcoming of YCSB in general. We decided that a better model would be for the key collisions to be limited to a given YCSB client process. That way, as additional YCSB client processes (i.e. additional load) are added, they each maintain the same number of collisions they encounter themselves, but do not increase the number of collisions on a single key in the entire store. We added client processes proportionally to the number of records in the database (and therefore the number of shards).
    This change to the use of YCSB better models a use case where new groups of users are likely to access either just their own entries, or entries within their own subgroups, rather than all users showing the same interest in a single global collection of keys. If an application finds every user having the same likelihood of wanting to modify a single global key, that application has no real hope of getting horizontal scaling.

    Finally, we used read/modify/write (also known as "Compare And Set") style updates during the mixed phase. This uses versioned operations to make sure that no updates are lost. This mode of operation provides better application behavior than the way we have typically run YCSB in the past, and is only practical at scale because we eliminated the shared key collision hotspots. It is also a more realistic testing scenario. To reiterate, all updates used a simple majority replica ack policy, making them durable.

    Scalability Results

    In the table below, the "KVS Size" column is the number of records along with the number of shards and the replication factor. Hence, the first row indicates 400m total records in the NoSQL Database (KV Store), 2 shards, and a replication factor of 3. The "Clients" column indicates the number of YCSB client processes. "Threads" is the number of threads per process along with the total number of threads. Hence, 90 threads per YCSB process for a total of 360 threads. The client processes were distributed across 10 client machines.

        Shards | KVS Size (records) | Clients | Threads   | Mixed Overall Throughput (ops/sec) | Read Latency av/95%/99% (ms) | Write Latency av/95%/99% (ms)
        2      | 400m (2x3)         | 4       | 90 (360)  | 302,152                            | 0.76/1/3                     | 3.08/8/35
        4      | 800m (4x3)         | 8       | 90 (720)  | 558,569                            | 0.79/1/4                     | 3.82/16/45
        8      | 1600m (8x3)        | 16      | 90 (1440) | 1,028,868                          | 0.85/2/5                     | 4.29/21/51
        10     | 2000m (10x3)       | 20      | 90 (1800) | 1,244,550                          | 0.88/2/6                     | 4.47/23/53
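    To make the consistency and durability settings described above concrete, here is a minimal Java sketch (not taken from the benchmark code) of how a client would express them with the Oracle NoSQL Database KV API: reads served by any replica via Consistency.NONE_REQUIRED, and version-checked ("compare and set") updates acknowledged by a simple majority of replicas. The store name, helper host/port, key and payload are hypothetical placeholders, and the exact method overloads should be verified against your kvclient version.

        import oracle.kv.Consistency;
        import oracle.kv.Durability;
        import oracle.kv.KVStore;
        import oracle.kv.KVStoreConfig;
        import oracle.kv.KVStoreFactory;
        import oracle.kv.Key;
        import oracle.kv.ReturnValueVersion;
        import oracle.kv.Value;
        import oracle.kv.ValueVersion;

        public class KvSettingsSketch {
            public static void main(String[] args) {
                // Hypothetical store name and helper host:port.
                KVStore store = KVStoreFactory.getStore(
                        new KVStoreConfig("kvstore", "sn1.example.com:5000"));

                // Durable writes: "commit to the network" with a simple-majority
                // replica ack policy, no fsync on master or replicas.
                Durability durability = new Durability(
                        Durability.SyncPolicy.NO_SYNC,                 // master sync policy
                        Durability.SyncPolicy.NO_SYNC,                 // replica sync policy
                        Durability.ReplicaAckPolicy.SIMPLE_MAJORITY);  // ack policy

                Key key = Key.createKey("user31415");

                // Read from any replica: no consistency guarantee required.
                ValueVersion vv = store.get(key, Consistency.NONE_REQUIRED, 0, null);

                // Read/modify/write: the update only succeeds if the record still has
                // the version we read, so concurrent updates are never silently lost.
                if (vv != null) {
                    Value newValue = Value.createValue(new byte[1024]);  // ~1KB payload
                    store.putIfVersion(key, newValue, vv.getVersion(),
                            ReturnValueVersion.Choice.NONE, durability, 0, null);
                }
                store.close();
            }
        }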

    Read the article

  • Data Source Security Part 5

    - by Steve Felts
    If you read through the first four parts of this series on data source security, you should be an expert on this focus area. There is one more small topic to cover related to WebLogic Resource permissions. After that comes the test, I mean example, to see with a real set of configuration parameters what the results are with some concrete values.

    WebLogic Resource Permissions

    All of the discussion so far has been about database credentials that are (eventually) used on the database side. WLS has resource credentials to control which WLS users are allowed to access JDBC resources. These can be defined on the Policies tab on the Security tab associated with the data source. There are four permissions: "reserve" (get a new connection), "admin", "shrink", and "reset" (plus the all-inclusive "ALL"); we will focus on "reserve" here because we are talking about getting connections. By default, JDBC resource permissions are completely open - anyone can do anything. As soon as you add one policy for a permission, all other users are restricted. For example, if I add a policy so that "weblogic" can reserve a connection, then all other users will fail to reserve connections unless they are also explicitly added. The validation is done for WLS user credentials only, not database user credentials. Configuration of resources in general is described at "Create policies for resource instances" http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/security/CreatePoliciesForResourceInstances.html. This feature can be very useful to restrict what code and users can get to your database.

    There are three use cases:

        API                          | Use database credentials | User for permission checking
        getConnection()              | True or false            | Current WLS user
        getConnection(user,password) | False                    | User/password from API
        getConnection(user,password) | True                     | Current WLS user

    If a simple getConnection() is used or database credentials are enabled, the current user that is authenticated to the WLS system is checked. If database credentials are not enabled, then the user and password on the API are used.

    Example

    The following is an actual example of the interactions between identity-based-connection-pooling-enabled, oracle-proxy-session, and use-database-credentials.

    On the database side, the following objects are configured.
    - Database users: scott; jdbcqa; jdbcqa3
    - Permission for proxy: alter user jdbcqa3 grant connect through jdbcqa;
    - Permission for proxy: alter user jdbcqa grant connect through jdbcqa;

    The following WebLogic Data Source objects are configured.
    - Users weblogic, wluser
    - Credential mapping "weblogic" to "scott"
    - Credential mapping "wluser" to "jdbcqa3"
    - Data source descriptor configured with user "jdbcqa"
    - All tests are run with Set Client ID set to true (more about that below).
    - All tests are run with oracle-proxy-session set to false (more about that below).
    The test program:
    - Runs in a servlet
    - Authenticates to WLS as user "weblogic"

    The results for each combination of use-database-credentials and identity-based-connection-pooling-enabled are as follows.

        use-database-credentials=true, identity-based=true:
            getConnection(scott,***)    -> Identity scott; Client weblogic; Proxy null
            getConnection(weblogic,***) -> weblogic fails - not a db user
            getConnection(jdbcqa3,***)  -> User jdbcqa3; Client weblogic; Proxy null
            getConnection()             -> Default user jdbcqa; Client weblogic; Proxy null

        use-database-credentials=false, identity-based=true:
            getConnection(scott,***)    -> scott fails - not a WLS user
            getConnection(weblogic,***) -> User scott; Client scott; Proxy null
            getConnection(jdbcqa3,***)  -> jdbcqa3 fails - not a WLS user
            getConnection()             -> User scott; Client scott; Proxy null

        use-database-credentials=true, identity-based=false:
            getConnection(scott,***)    -> Proxy for scott fails
            getConnection(weblogic,***) -> weblogic fails - not a db user
            getConnection(jdbcqa3,***)  -> User jdbcqa3; Client weblogic; Proxy jdbcqa
            getConnection()             -> Default user jdbcqa; Client weblogic; Proxy null

        use-database-credentials=false, identity-based=false:
            getConnection(scott,***)    -> scott fails - not a WLS user
            getConnection(weblogic,***) -> Default user jdbcqa; Client scott; Proxy null
            getConnection(jdbcqa3,***)  -> jdbcqa3 fails - not a WLS user
            getConnection()             -> Default user jdbcqa; Client scott; Proxy null

    If Set Client ID is set to false, all cases would have Client set to null. If this was not an Oracle thin driver, the one case with the non-null Proxy in the above table would throw an exception, because proxy session is only supported, implicitly or explicitly, with the Oracle thin driver.

    When oracle-proxy-session is set to true, the only cases that will pass (with a proxy of "jdbcqa") are the following.
    1. Setting use-database-credentials to true and doing getConnection(jdbcqa3,…) or getConnection().
    2. Setting use-database-credentials to false and doing getConnection(wluser, …) or getConnection().

    Summary

    There are many options to choose from for data source security. Considerations include the number and volatility of WLS and database users, the granularity of data access, the depth of the security identity (property on the connection or a real user), performance, coordination of various components in the software stack, and driver capabilities. Now that you have the big picture (remember that table in part 1), you can make a more informed choice.
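    For readers who want to see what the two API shapes in the use-case table look like in application code, here is a minimal JDBC sketch (not the actual test servlet) of reserving connections from a WebLogic data source. The JNDI name "jdbc/myDS" and the credential strings are hypothetical placeholders; whether the supplied credentials are treated as WLS credentials or database credentials depends on the use-database-credentials setting, exactly as described above.

        import java.sql.Connection;
        import javax.naming.InitialContext;
        import javax.sql.DataSource;

        public class DataSourceLookupSketch {
            public void demo() throws Exception {
                // Look up the WebLogic data source by its (hypothetical) JNDI name.
                InitialContext ctx = new InitialContext();
                DataSource ds = (DataSource) ctx.lookup("jdbc/myDS");

                // Case 1: no credentials on the API call. The current WLS user is
                // checked for the "reserve" permission; the credential map or the
                // data source descriptor decides which database user is used.
                try (Connection c1 = ds.getConnection()) {
                    // ... use the connection ...
                }

                // Cases 2 and 3: credentials supplied on the API call.
                // With use-database-credentials=false they are WLS credentials and
                // are also used for permission checking; with use-database-credentials=true
                // they are database credentials and the current WLS user is checked.
                try (Connection c2 = ds.getConnection("someUser", "somePassword")) {
                    // ... use the connection ...
                }
            }
        }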

    Read the article

  • Web Services Example - Part 2: Programmatic

    - by Denis T
    In this edition of the ADF Mobile blog we'll tackle part 2 of our Web Service examples. In this posting we'll take a look at using a SOAP Web Service, but calling it programmatically in code and parsing the return into a bean.

    Getting the sample code: Just click here to download a zip of the entire project. You can unzip it and load it into JDeveloper and deploy it either to iOS or Android. Please follow the previous blog posts if you need help getting JDeveloper or ADF Mobile installed. Note: This is a different workspace than WS-Part1.

    Defining our Web Service: Just like our first installment, we are using the same public weather forecast web service provided free by CDYNE Corporation. Sometimes this service goes down, so please ensure you know it's up before reporting that this example isn't working. We're going to concentrate on the same two web service methods, GetCityForecastByZIP and GetWeatherInformation.

    Defining the Application: The application setup is identical to the Weather1 version. There are some improvements to the data that is displayed as part of this example, though. Now we are able to show the associated image along with each forecast line when using the Forecast By Zip feature. We've also added the temperature Hi/Low values into the UI.

    Summary of Fundamental Changes In This Application

    The most fundamental change is that we're binding the UI to the Bean Data Controls instead of directly to the Web Service Data Controls. This gives us much more flexibility to control the shape of the data and allows us to do caching of the data outside of the Web Service. This way, if your application is, say, offline, your bean could still populate with data from a local cache and still show you some UI, as opposed to completely failing because you don't have any connectivity. In general we promote this type of programming technique with ADF Mobile to insulate your application from any issues with network connectivity.

    What's different with this example?

    We have set up the Web Service DC the same way, but now we have managed beans to process the data. The following classes define the "Model" of our application: CityInformation-CityForecast-Forecast and WeatherInformation-WeatherDescription. We use WeatherBean for UI interaction with the model layer. If you look through this example, we don't really do that much with the Java code except use it to grab the image URL from the weather description. In a more realistic example, you might be using some JDBC classes to persist the data to a local database.

    To have a good architecture it is always good to keep your model and UI layers separate. This gets muddied if you start to use bindings on a page invoked from Java code and this Java code starts to become your "model" layer. Since bindings are page specific, your model layer starts to become entwined with your UI. Not good! To help with this, we've added some utility functions that let you invoke DC methods without having a binding and thus execute methods from your "model" layer without requiring a binding in your page definition. We do this with the invokeDataControlMethod of the AdfmfJavaUtilities class. An example of this method call is available in line 95 of WeatherInformation.java and line 93 of CityInformation.java.

    What's a GenericType?

    Because Web Service Data Controls (and also URL Data Controls, AKA REST) use generic name/value pairs to define their structure and don't have strongly typed objects, these are actually stored internally as GenericType objects.
    The GenericType class is simply a property map of name/value pairs that can be hierarchical. There are methods like getAttribute where you supply the index of the attribute or its string property name. Why is this important to know? Because invokeDataControlMethod returns GenericType objects, and developers either need to parse these GenericType objects themselves or use one of our helper functions.

    GenericTypeBeanSerializationHelper

    This class does exactly what its name implies. It's a helper class for developers to aid in serialization of GenericTypes to/from Java objects. This is extremely handy if you have a large GenericType object with many attributes (or you're just lazy like me!) and you just want to parse it out into a real Java object you can use more easily. Here you would use the fromGenericType method. This method takes the class of the Java object you wish to return and the GenericType as parameters. The method then parses through each attribute in the GenericType and uses reflection to set that same attribute in the Java class. Then the method returns that new object of the class you specified. This is obviously very handy to avoid a lot of shuffling code between GenericType and your own Java classes. The reverse method, toGenericType, is also available when you want to go the other way. In this case you supply the string that represents the package location in the DataControl definition (example: "MyDC.myParams.MyCollection") and then pass in the Java object you have that holds the data, and a GenericType is returned to you. Again, it will use reflection to calculate the attributes that match between the Java class and the GenericType and call the getters/setters on those.

    Issues and Possible Improvements

    In the next installment we'll show you how to make your web service calls asynchronously, so your UI will fill dynamically when the service call returns, but in the meantime you show the data you have locally in your bean, fed from some local cache. This gives your users instant delivery of some data while you fetch other data in the background.
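    To tie the two ideas together, here is a minimal sketch (not taken from the sample project) of the pattern described above: invoke a data control operation without a page binding, then convert the GenericType result into a typed bean. The data control name "WeatherWS", the operation name, the "ZIP" parameter name and the Forecast bean are hypothetical placeholders, and the exact invokeDataControlMethod overload and import packages are written from memory, so check them against the ADF Mobile javadoc and the sample code referenced above.

        import java.util.ArrayList;
        import java.util.List;

        import oracle.adfmf.framework.api.AdfmfJavaUtilities;
        import oracle.adfmf.framework.api.GenericTypeBeanSerializationHelper;
        import oracle.adfmf.util.GenericType;

        public class ForecastService {

            public Forecast fetchForecast(String zipCode) throws Exception {
                // Parameter names, values and types expected by the DC operation
                // (names and types here are illustrative assumptions).
                List names  = new ArrayList();
                List params = new ArrayList();
                List types  = new ArrayList();
                names.add("ZIP");
                params.add(zipCode);
                types.add(String.class);

                // Invoke the data control operation directly - no page binding required.
                GenericType result = (GenericType) AdfmfJavaUtilities.invokeDataControlMethod(
                        "WeatherWS", null, "GetCityForecastByZIP", names, params, types);

                // Use the reflection-based helper to copy matching attributes into our bean.
                return (Forecast) GenericTypeBeanSerializationHelper.fromGenericType(
                        Forecast.class, result);
            }
        }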

    Read the article

  • Let your Signature Experience drive IT-decision making

    - by Tania Le Voi
    Today's CIO job description: "Align IT infrastructure and solutions with business goals and objectives; AND while doing so reduce costs; BUT ALSO, be innovative, ensure the architectures are adaptable and agile as we need to act today on the changes that we may request tomorrow."

    Sound like an unachievable request? The fact is, reality dictates that CIOs are put under this type of pressure to deliver more with less. In a past career phase I spent a few years as an IT Relationship Manager for a large insurance company. This is a role that we see all too infrequently in many of our customers, and it's a shame. The purpose of this role was to build a bridge, a relationship between IT and the business. Key to achieving that goal was to ensure the same language was being spoken and, more importantly, that objectives were commonly understood - hence services and projects were delivered to time, to budget, and actually solved the business problems.

    In reality IT and the business are already married, but the relationship is most often defined as 'supplier' of IT rather than 'trusted partner'. To deliver business value they need to understand how to work together effectively to attain this next level of partnership. The business cannot compete if they do not get a new product to market ahead of the competition, or, for example, act in a timely manner to address a new industry problem such as a legislative change. An even better example is when the application or service fails and the business takes a hit from bad publicity, becoming a trending topic on social media and losing direct revenue from online channels. For this reason alone business and IT need the alignment of their priorities and deliverables now more than ever! Take a look at Forrester's recent study that found 'many IT respondents considering themselves to be trusted partners of the business but their efforts are impaired by the inadequacy of tools and organizations'.

    IT Meet the Business; Business Meet IT

    So what is going on? We talk about aligning the business with IT, but the reality is it's difficult to do. Like any relationship, each side has different goals and needs, and language can be a barrier; business vs. technology jargon! What if we could translate the needs of both sides into actionable information, backed by data both sides understand, presented in a meaningful way? Well, now we can, with the Business-Driven Application Management capabilities in Oracle Enterprise Manager 12cR2!

    Enterprise Manager's Business-Driven Application Management capabilities provide the information that IT needs to understand the impact of its decisions on business criteria. No longer does IT need to be focused solely on speeds and feeds, performance and throughput - now IT can understand IT's impact on business KPIs like inventory turns, order-to-cash cycle, pipeline-to-forecast, and similar. Similarly, now the line of business can understand which IT services are most critical for the KPIs they care about. There is a good deal of resources on Oracle Technology Network that describe the functionality of these products, so I won't rehash them here. What I want to talk about is what you do with these products.

    What's next after we meet? Where do you start?

    Step 1: Identify the Signature Experience.
    This is THE business process (or set of processes) that is core to the business: the one that drives the economic engine, the process that a customer recognises the company brand for, its reputation, the customer experience, the process that a CEO would state as his number one priority. The crème de la crème of your business! Once you have nailed this, the rest gets easier, as Enterprise Manager 12c makes it easy.

    Step 2: Map the Signature Experience to the underlying IT.

    Taking the signature experience, map out the touch points of the components that play a part in ensuring this business transaction is successful end to end; think of it like mapping out a critical path: the applications, middleware, databases and hardware. Use the wealth of Enterprise Manager features such as Systems, Services, Business Application Targets and Business Transaction Management (BTM) to assist you. Adding Real User Experience Insight (RUEI) into the mix will make the end-to-end customer satisfaction story transparent. Work with the business and define meaningful key performance indicators (KPIs) and thresholds to enable you to report and act upon them.

    Step 3: Observe the data over time.

    You now have meaningful insight into every step enabling your signature experience and you understand the implication of that experience on your underlying IT. Watch it for a few months, see what happens, then reconvene with your business stakeholders and set clear and measurable targets which can re-define service levels.

    Step 4: Change the information about which you and the business communicate.

    It's amazing what happens when you and the business speak the same language. You'll be able to make more informed business and IT decisions. From here IT can identify where and how budget is spent, whether on the level of support, performance, capacity, HA, DR, certification, etc. IT SLAs no longer need to be focused on metrics such as % availability, but can be structured around business process requirements.

    The power of this way of thinking doesn't end here. IT staff get to see and understand how their own role contributes to the business, making them accountable for the business service. Take a step further and appraise your staff on the business competencies that are linked to service availability. For the business, the language barrier is removed by producing targeted reports on the signature experience core to the business and therefore key to the CEO. Chargeback or showback becomes easier to justify, as the 'cost per day of outage' can be more easily calculated; the business will be able to translate the cost to the business into the cost/value of the underlying IT that supports it.

    Used this way, Oracle Enterprise Manager 12c is a key enabler of a harmonious relationship between the end customer, the business and IT, to deliver ultimate service and satisfaction. Just engage with the business upfront, make the signature experience visible and let Enterprise Manager 12c do the rest. In the next blog entry we will cover some of the Enterprise Manager features mentioned, to enable you to implement this new way of working.

    Read the article

  • PASS Summit 2010 BI Workshop Feedbacks

    - by Davide Mauri
    As many other speakers already did, I'd like to share with the SQL community the feedback from my PASS Summit 2010 workshop. For those who were not there, my workshop was "BI From A-Z" and its main objective was to introduce people to the BI world not only from a technical point of view, but to insist a lot on the methodological and "engineered" approach.

    The will to put more engineering into IT (and especially into the BI field) is something that has been growing stronger and stronger in me every day over the last 5 years, since I simply envy the fact that Airbus, Fincantieri, BMW (just to name a few) can create very complex machines "just" by putting people together and giving them some rules to follow (of course this is an oversimplification, but I think you get what I mean). The key point of engineering is that, after having defined the project blueprint, you have the possibility to give a huge number of people the rules to follow, the correct tools to implement the rules easily and semi-automatically, and a way to measure the quality of the results. Could this be done in IT? Very big question, so my scope is now limited to BI.

    So that's the main point of my workshop: an entry-level approach to BI (level was 200) in order to allow attendees to know the basics, to understand which tools they should use for which purpose and, above all, a set of rules and tools to make a BI solution scalable in terms of people working on it, while still maintaining a very good quality. All done not focusing only on practice but explaining the theory behind it, to see how it can help *a lot* to build a correct solution regardless of the technology used to implement it. The idea is to reach a point where more than 70% of the work done to create a BI solution can be reused even if technologies change. This is a very demanding challenge nowadays with the coming of Denali and its column-aligned storage and the shiny new DAX language.

    As you may understand, I was looking forward to the feedback, since you may have noticed that there's a lot of "architectural" stuff in IT but really nothing on "engineering". So how the session would be perceived by the attendees was really unknown to me. The feedback could also give a good indication of whether the need for more "engineering" is something only I feel or something more broad.

    I'm very happy to be able to say that the overall score of 4.75 put my workshop in the top 20 sessions (out of nearly 200 sessions)! Here are the detailed evaluations:

    How would you rate the usefulness of the information presented in your day-to-day environment? 4.75
        3: 1 response, 4: 12 responses, 5: 42 responses

    How would you rate the Speaker's presentation skills? 4.80
        3: 1 response, 4: 9 responses, 5: 45 responses

    How would you rate the Speaker's knowledge of the subject? 4.95
        4: 3 responses, 5: 52 responses

    How would you rate the accuracy of the session title, description and experience level to the actual session? 4.75
        3: 2 responses, 4: 10 responses, 5: 43 responses

    How would you rate the amount of time allocated to cover the topic/session? 4.44
        3: 7 responses, 4: 17 responses, 5: 31 responses

    How would you rate the quality of the presentation materials? 4.62
        4: 21 responses, 5: 34 responses

    The comments were all very positive.
    Many of them asked for more time on the subject (or to shorten the very last topics). I'll treasure these comments and will review the content accordingly. We'll organize a two-day class on this topic, where more examples will be shown and some arguments will be explained more deeply.

    I'd just like to answer a comment that asks how much of what I showed is "universally applicable". I can tell you that all of our BI projects follow these rules, and they've been applied to different markets (insurance, fashion, GDO) with different people and different teams, and they allowed us to be "adaptive" towards the customer. The more the rules are well defined, and the more there are tools that support their implementation, the easier it is to add new people to the project and to add or change solution features.

    Think of a car. How come almost any mechanic can help you fix a problem? Because they know what to expect. Because there are rules that allow them to identify the problem without having to discover each time how the car has been built. And this is of course also true for car upgrades/improvements.

    Last but not least: thanks a lot to everyone for coming!

    Read the article

  • JPRT: A Build & Test System

    - by kto
    DRAFT

    A while back I did a little blogging on a system called JPRT, the hardware used and a summary on my java.net weblog. This is an update on the JPRT system.

    JPRT ("JDK Putback Reliability Testing", but ignore what the letters stand for, I change what they mean every day, just to annoy people :^) is a build and test system for the JDK, or any source base that has been configured for JPRT. As I mentioned in the above blog, JPRT is a major modification to a system called PRT that the HotSpot VM development team has been using for many years, very successfully I might add. Keeping the source base always buildable and reliable is the first step in the 12 steps of dealing with your product quality... or was that the 12 steps from Alcoholics Anonymous... oh well, anyway, it's the first of many steps. ;^)

    Internally, when we make changes to any part of the JDK, there are certain procedures we are required to perform prior to any putback or commit of the changes. The procedures often vary from team to team, depending on many factors, such as whether native code is changed, or if the change could impact other areas of the JDK. But a common requirement is a verification that the source base with the changes (and merged with the very latest source base) will build on many if not all 8 platforms, and a full 'from scratch' build, not an incremental build, which can hide full build problems. The testing needed varies, depending on what has been changed.

    Anyone that has worked on a project where multiple engineers or groups are submitting changes to a shared source base knows how disruptive a 'bad commit' can be on everyone. How many times have you heard: "So And So made a bunch of changes and now I can't build!"? But multiply the number of platforms by 8, and make all the platforms old and antiquated OS versions with bizarre system setup requirements, and you have a pretty complicated situation (see http://download.java.net/jdk6/docs/build/README-builds.html).

    We don't tolerate bad commits, but our enforcement is somewhat lacking; usually it's an 'after the fact' correction. Luckily the Source Code Management system we use (another antique called TeamWare) allows for a tree of repositories, and 'bad commits' are usually isolated to a small team. Punishment to date has been pretty drastic, the Queen of Hearts in 'Alice in Wonderland' said 'Off With Their Heads', well trust me, you don't want to be the engineer doing a 'bad commit' to the JDK. With JPRT, hopefully this will become a thing of the past, not that we have had many 'bad commits' to the master source base; in general the teams doing the integrations know how important their jobs are and they rarely make 'bad commits'. So for these JDK integrators, maybe what JPRT does is keep them from chewing their fingernails at night. ;^)

    Over the years each of the teams has accumulated sets of machines they use for building, or they use some of the shared machines available to all of us. But the hunt for build machines is just part of the job, or has been. And although the issues with consistency of the build machines haven't been a horrible problem, often you never know if the Solaris build machine you are using has all the right patches, or if the Linux machine has the right service pack, or if the Windows machine has its latest updates. Hopefully the JPRT system can solve this problem. When we ship the binary JDK bits, it is SO very important that the build machines are correct, and we know how difficult it is to get them set up.
    Sure, if you need to debug a JDK problem that only shows up on Windows XP or Solaris 9, you'll still need to hunt down a machine, but not as a regular everyday occurrence.

    I'm a big fan of a regular nightly build and test system, constantly verifying that a source base builds and tests out. There are many examples of automated build/test systems, some that trigger on any change to the source base, some that just run every night. Some provide a protection gateway to the 'golden' source base, which only gets changes that the nightly process has verified are good. The JPRT (and PRT) system is meant to guard the source base before anything is sent to it, guarding all source bases from the evil developer, well maybe 'evil' isn't the right word, I haven't met many 'evil' developers, more like 'error prone' developers. ;^) Humm, come to think about it, I may be one from time to time. :^{

    But the point is that by spreading the build up over a set of machines, and getting the turnaround down to under an hour, it becomes realistic to completely build on all platforms and test it, on every putback. We have the technology, we can build and rebuild and rebuild, and it will be better than it was before, ha ha... Anybody remember the Six Million Dollar Man? Man, I gotta get out more often.. Anyway, now the nightly build and test can become a 'fetch the latest JPRT build bits' and start extensive testing (the testing not done by JPRT, or the platforms not tested by JPRT).

    Is it Open Source? No, not yet. Would you like it to be? Let me know. Or is it more important that you have the ability to use such a system for JDK changes?

    So enough blabbering on about this JPRT system, tell me what you think. And let me know if you want to hear more about it or not. Stay tuned for the next episode, same Bloody Bat time, same Bloody Bat channel. ;^)

    -kto

    Read the article

  • Private Cloud: Putting some method behind the madness

    - by Sudip Datta
    Finally, I decided to join the blogging community. And what could be a better time to start than the week after OpenWorld 2012? 50K+ attendees, demonstrations, speaker sessions and a whole lot of buzz on Oracle Cloud. It was raining clouds in this year's OpenWorld. I am not here to write about Oracle's cloud strategy in general, but about Enterprise Manager's cloud management capabilities. This year's OpenWorld was the first after we announced 12c Cloud Control, and we were happy to share the stage with quite a few early adopters. Stay tuned for videos from our customers and partners; I will post them as they get published.

    I met a number of platform administrators in Oracle - DBAs, Middleware Admins, SOA Admins... The cloud has affected them all, at least to the point where it beckoned more than just curiosity. Most IT infrastructures are already heavily virtualized (on VMware and on others, including Oracle VM), and some would claim they are already on "cloud" (at least their sysadmins told them so). But none of them were confident of the benefits, because their pain points continued to grow. Isn't cloud supposed to ease those? Instead, they were chasing hundreds of databases running on hundreds of VMs, often with as much certainty as propounded by Heisenberg. What happened to the age-old IT discipline around administration, compliance, configuration management?

    VMs are great for what they are. I personally think they have opened the doors to new approaches in which an application stack gets provisioned and updated. In fact, Enterprise Manager 12c is possibly the only tool out there that can provision full-fledged applications as VM Assemblies. In this year's OpenWorld, customers talked about how they provisioned RAC and Siebel assemblies, which, as the techies out there know, are not trivial (hearing that provisioning time for Siebel was down from weeks to hours was gratifying indeed).

    However, I do have an issue with a "one-size fits all" approach to cloud. In a week's span, I met several personas:

    - Project owners requiring an EC2-like VM instance for their projects
    - Admins needing the same for Sparc-Solaris
    - DBAs requiring dedicated databases for new projects
    - APEX developers needing just a ready-to-consume schema as a service
    - Java developers looking for a runtime platform
    - QA engineers needing a fast clone of their production environment

    If you drill down further, you will end up peeling back more layers of detail. For example, the requirements for load testing and functional testing are very different. For load testing the test environment should ideally be the same as production. You shouldn't run production on Exadata and load test on a VM; they will just not be good representations of one another. For functional testing it possibly does not matter. DBAs seem to be the worst affected of the lot. It seems they have been asked to choose between agile provisioning and faster runtime performance. And in some cases it is really a Hobson's choice, because their infrastructure provider made no distinction between the OLTP application and the virtual desktop! Sad indeed.

    When one looks at the portfolio of services that we already offer (vanilla IaaS, VM Assembly based PaaS, DBaaS) or have announced (Java PaaS, Instant Cloning, Schema-aaS), one can possibly think that we are trying to be the "renaissance man"! Well, I would possibly have digested that had it not been for the various personas that I described above.
    Getting the use cases right is very important for an application such as cloud management. We iterate over these again and again and re-validate them in CABs (Customer Advisory Boards). We consider all the major aspects of tenancy: service placement, resource isolation (can a tenant execute an expensive SQL statement and run away with all the resources?), quota and security. We, in Engineering, keep reminding ourselves that we are dealing with enterprise clouds. We owe it to our customer base!

    In the coming posts, I will drill down more into each of the services. In the meanwhile, here are some collateral and demos to get started with EM 12c: http://www.oracle.com/technetwork/oem/cloud-mgmt/index.html

    Sudip Datta

    The views expressed here are my own and do not necessarily reflect the views of Oracle.

    Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter

    Read the article

  • Azure Task Scheduling Options

    - by charlie.mott
    Currently, the Azure PaaS does not offer a distributed\resilient task scheduling service. If you do want to host a task scheduling product\solution off-premise (and ideally use Azure), what are your options?

PaaS

Option 1: Worker Roles

Use a worker role to schedule and execute actions at specific time periods. There are a few frameworks available to assist with this:
http://azuretoolkit.codeplex.com
https://github.com/Lokad/lokad-cloud/wiki/TaskScheduler
http://blog.smarx.com/posts/building-a-task-scheduler-in-windows-azure - This addresses a slightly different set of requirements. It's a more dynamic approach for queuing up tasks, but not repeatable tasks (e.g. daily).

I found the Azure Toolkit option the simplest to implement.

Step 1: Create a domain entity implementing IJob for each job to schedule. In this sample, I asynchronously call a WCF service method.

namespace Acme.WorkerRole.Jobs
{
    using AzureToolkit;
    using ScheduledTasksService;

    public class UploadEmployeesJob : IJob
    {
        public void Run()
        {
            // Call the Scheduled Tasks WCF service.
            var client = new ScheduledTasksServiceClient("BasicHttpBinding_IScheduledTasksService");
            client.UploadEmployees();
            client.Close();
        }
    }
}

Step 2: In the worker role Run method, add the jobs to the toolkit engine.

namespace Acme.WorkerRole
{
    using AzureToolkit.Engine;
    using Jobs;

    public class WorkerRole : WorkerRoleEntryPoint
    {
        public override void Run()
        {
            var engine = new CloudEngine();

            // Add scheduled jobs (using cron syntax - see http://www.adminschoice.com/crontab-quick-reference).

            // 1. Upload Employees job - 8.00 PM every weekday (Mon-Fri)
            engine.WithJobScheduler().ScheduleJob<UploadEmployeesJob>(c => { c.CronSchedule = "0 20 * * 1-5"; });
            // 2. Purge Data job - 10 AM every Saturday
            engine.WithJobScheduler().ScheduleJob<PurgeDataJob>(c => { c.CronSchedule = "0 10 * * 6"; });
            // 3. Process Exceptions job - every 5 minutes
            engine.WithJobScheduler().ScheduleJob<ProcessExceptionsJob>(c => { c.CronSchedule = "*/5 * * * *"; });

            engine.Run();
            base.Run();
        }
    }
}

Pros: The Azure Toolkit option is simple to implement.
Cons: For the Azure Toolkit option, you are limited to a single worker role instance; otherwise, the jobs will be executed multiple times, once for each worker role instance (a small guard sketch follows at the end of this post). You are also paying for a continuously running worker role, even if it just processes a single job once a week. If you only have a few scheduled tasks to run, calling asynchronous services hosted in different web roles, an extra small worker role is likely to be sufficient. However, even an extra small worker role still costs $14.40/month (03/09/2012).

Option 2: Use a Scheduled Task on an Azure Web Role calling a console app

Set up a Windows Scheduled Task on the Azure web role. This calls a console application that calls the WCF service methods that run the task actions. This design is described here:
http://www.ronaldwidha.net/2011/02/23/cron-job-on-azure-using-scheduled-task-on-a-web-role-to-replace-azure-worker-role-for-background-job/
http://www.voiceoftech.com/swhitley/index.php/2011/07/windows-azure-task-scheduler/
http://devlicio.us/blogs/vinull/archive/2011/10/23/moving-to-azure-worker-roles-for-nothing-and-tasks-for-free.aspx

Pros: Fairly easy to implement.
Cons: Supportability - I RDC'ed onto the Azure server and stopped the scheduled task. I then rebooted the machine and the task was restarted.
I also tried deleting the task and rebooting; the same thing occurred. The only way to permanently guarantee that a task is disabled is to do a fresh deployment. I think this is a major supportability concern. Scalability - multiple instances would trigger multiple tasks. You can only have one instance of the scheduled task web role. The guidance implements setup of the scheduled task as part of a web role instance, but if you have more than one instance in a web role, the task will be triggered multiple times for each scheduled action (once per machine). Workaround: if we wanted to use scheduled tasks for another client with a scalable WCF service, then we could include the console app and task scripts in a separate web role (e.g. an empty WCF service with no real purpose to it).

SaaS

Option 3: Azure Marketplace

I thought that someone might be offering this type of service via the Azure Marketplace. At the time of writing this blog post, I did not find anyone doing so. https://datamarket.azure.com/
Pros: (none)
Cons: Nobody currently offers this on the Azure Marketplace.

Option 4: Online Job Scheduling Service Provider

There are plenty of online providers that offer this type of service on a pay-as-you-go basis. Some of these are free for small usage. Many of these providers are listed here: http://en.wikipedia.org/wiki/Webcron
Pros: No bespoke development for the scheduler.
Cons: Reliance on a third party.

IaaS

Option 5: Set up Scheduling Software on Azure IaaS VMs

One of the job scheduling software offerings could be installed and configured on Azure VMs. A list of software options is available here: http://en.wikipedia.org/wiki/List_of_job_scheduler_software
Pros: Enterprise distributed\resilient task scheduling service.
Cons: VM setup and maintenance; software licence costs.

Option 6: VM Gallery

At the time of writing this blog post, I did not spot a VM in the gallery that included pre-installation of any of the above software options.
Pros: (none)
Cons: No current VM template.

Summary

For my current project, which had a small handful of tasks to schedule and a limited project budget, I chose Option 1 (a worker role using the Azure Toolkit to schedule tasks). If I were building an enterprise-scale solution for the future, Options 4 and 5 are currently worthy of consideration. Hopefully, Microsoft will include task scheduling as part of their PaaS offerings in the future.
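A footnote on the single-instance caveat for Option 1 (this is my own sketch, not part of the original post): if the worker role ever runs with more than one instance, a simple convention is to let only one instance act as the scheduler. The SchedulerGuard class below is a hypothetical helper; the RoleEnvironment types come from Microsoft.WindowsAzure.ServiceRuntime.

using System.Linq;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class SchedulerGuard
{
    // Returns true only on the instance whose id sorts first, so scheduled jobs
    // fire once even when the role is scaled out. This is a naming convention,
    // not a distributed lock - if the chosen instance is recycled, jobs pause
    // until it comes back.
    public static bool IsSchedulerInstance()
    {
        var current = RoleEnvironment.CurrentRoleInstance;
        var first = current.Role.Instances.OrderBy(i => i.Id).First();
        return current.Id == first.Id;
    }
}

In WorkerRole.Run you could then wrap the ScheduleJob calls (or the body of each job's Run method) in if (SchedulerGuard.IsSchedulerInstance()) { ... } so only the elected instance schedules work.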

    Read the article

  • CS, SE, HCI, Information Science, Please recommendation for further education of the former performing art manager seeking career in IT industries? [on hold]

    - by Baek Seungjoo
    IT specialists there :) Thank you very much for your collective efforts here; I have gotten huge help reading your professional comments and advice on the questions I have searched so far! This time, I would like to ask for your practical advice or recommendations on what I am struggling with at this moment. I am currently seeking higher education for my career transition from performing art manager and director to "IT software and/or service development and management specialist". However, as this field is quite new to me, and there are lots of different work positions, I have no idea which graduate major I should pursue in order to get qualified. Of course I know this question could sound weird, as it is kind of a personal choice. But given my lack of understanding of how IT software companies work in general, your practical and experience-based advice will be a great help to me, someone who has spent more than two months on self-research on the net. OK. Before my question, here are my plan and history, which I think are quite different from those of people currently in the IT industry…
1) Target
Firstly, make a career transition into IT service or product companies and get experience. Eventually, pursue IT entrepreneurship in combination with my arts and cultural production and business expertise.
2) Background
Career: performing arts director and manager in theatre-based opera and musical productions
Art education in youth
BA in Literature and Chinese Studies (Arts & Humanities)
MA in Cultural & Creative Industries (Arts & Humanities) – dissertation with a focus on digital prosumption and the lived experience of the prosumer (a qualitative research study on the agents in the digital world)
3) Personally
Huge interest in IT hardware and software, and their trends. Skills to build, repair and tune PCs - of course this is no more than a personal hobby, but it shows my interest in this field.
4) Problem
I encounter the question "So, what do you think you can contribute practically in this position?" This question turns me down every time I go through job interviews, and I have decided I need more education in the relevant area.
Here are my questions.
1) In terms of work positions in IT software companies, I wonder if I can compare what an "Artist" is to an "Arts Manager or Director" with what a "Developer" is to a "Product Manager". (Of course, this stereotypical division of Artist vs. Arts Manager does not quite hold, because the domains overlap to some extent, and are blurring at least in my field, and they are in different contexts, but I am speaking loosely.) Normally, artists come with special arts education, they live in their own world of artistic inspiration and creation, and they feel alive in practice and on stage. Meanwhile, from the point of view of staging and managing productions, the role of the arts manager is critical as well. Our role is concerned with how the production appeals to the audience in an effective way, how to make a profit and enable sustainable management through that, how to set up future strategy in consideration of external conditions such as political and social circumstances, audience trends and levels, and other production trends from ongoing and historical perspectives, and how and what voice the production gives to society from political, economic and humanitarian stances. So, we need keen eyes on the economic, political and societal environment, have to understand human beings and their desires, must know how to make presentations and attract investors, must have a sense of how to manage and fight over limited financial resources, how to extend our networks, and so on.
It is common for the two agents to create productions in collaboration (though normally not in that ideal way, but in conflict and fights :) ). So, we need to know each other's expertise to some extent, for a better production. What are the work positions in the IT software industry equivalent to the role of "arts manager" in the performing arts? From my view, considering that developers come with special education in computer science, software engineering or other fields (sometimes self-education), and that they express themselves through the art of coding - computer languages on the black screen - making a sort of artistic production online for their audience, I guess there must be someone who collaborates with developers in creating, managing and launching IT services or products.
2) Which education among CS, SE, HCI and Information Science is needed for those seeking such a work position? Especially for a person like me. (At this moment, Information Science offers the highest possibility of getting in, since I lack calculus and math from my undergraduate education. But please let me know irrespective of this concern; I think there are ways to make it up if CS or SE education is needed in my case.)
3) Which field, Information Science or HCI, would be the more practical background for job hunting? And which of them is in higher demand in the job market? As far as I checked, HCI is closer to CS than IS in its focus of study.
Thank you very much for your patience in reading such a long inquiry, and I appreciate your efforts in advance. Have a nice day in this beautiful summer.

    Read the article

  • How You Helped Shape Java EE 7...

    - by reza_rahman
    I have been working with the JCP in various roles since EJB 3/Java EE 5 (much of it on my own time), eventually culminating in my decision to accept my current role at Oracle (despite its inevitable set of unique challenges, a role I find by and large positive and fulfilling). During these years, it has always been clear to me that pretty much everyone in the JCP genuinely cares about openness, feedback and developer participation. Perhaps the most visible sign to date of this high regard for grassroots-level input is a survey on Java EE 7 conducted a few months ago. The survey was designed to get open feedback on a number of critical issues central to the Java EE 7 umbrella specification, including which APIs to include in the standard. When we started the survey, I don't think anyone was certain what the level of participation from developers would really be. I also think everyone was pleasantly surprised that a large number of developers (around 1100) took the time out to vote on these very important issues that could impact their own professional life. And it wasn't just a matter of the quantity of responses. I was particularly impressed with the quality of the comments made through the survey (some of which I'll try to do justice to below). With Java EE 7 under our belt and the horizons for Java EE 8 emerging, this is a good time to once again thank everyone that took the survey for their thoughts and let you know what the impact of your voice actually was. As an aside, you may be happy to know that we are working hard behind the scenes to try to put together a similar survey to help kick off the agenda for Java EE 8 (although this is by no means certain). I'll break things down by the questions asked in the survey, the responses and the resulting change in the specification.
APIs to Add to Java EE 7 Full/Web Profile
The first question in the survey asked which of four new candidate APIs (WebSocket, JSON-P, JBatch and JCache) should be added to the Java EE 7 Full and Web Profiles respectively. Developers by and large wanted all the new APIs added to the full platform. The comments expressed particularly strong support for WebSocket and JCache. Others expressed dissatisfaction over the lack of a JSON binding (as opposed to JSON processing) API. WebSocket, JSON-P and JBatch are now part of Java EE 7. In addition, the long-awaited Java EE Concurrency Utilities API was also included in the Full Profile. Unfortunately, JCache was not finalized in time for Java EE 7 and the decision was made not to hold up the Java EE release any longer. JCache continues to move forward strongly and will very likely be included in Java EE 8 (it will be available much sooner than Java EE 8 to boot). An emergent standard for JSON-B is also a strong possibility for Java EE 8. When it came to the Web Profile, developers were supportive of adding WebSocket and JSON-P, but not JBatch and JCache. Both WebSocket and JSON-P are now part of the Web Profile, which now also includes the already popular JAX-RS API.
Enabling CDI by Default
The second question asked whether CDI should be enabled in Java EE by default. The overwhelming majority of developers supported the default enablement of CDI. In addition, developers expressed a desire for better CDI/Java EE alignment (with regard to EJB and JSF in particular). Some developers expressed legitimate concerns over the performance implications of enabling CDI globally as well as the potential conflict with other JSR 330 implementations like Spring and Guice.
CDI is enabled by default in Java EE 7. Respecting the legitimate concerns, CDI 1.1 was very careful to add additional controls around component scanning. While a lot of work was done in Java EE 6 and Java EE 7 around CDI alignment, further alignment is under serious consideration for Java EE 8.
Consistent Usage of @Inject
The third question was around using CDI/JSR 330 @Inject consistently vs. allowing JSRs to create their own injection annotations (e.g. @BatchContext). A majority of developers wanted consistent usage of @Inject. The comments again reflected a strong desire for CDI/Java EE alignment. A lot of emphasis in Java EE 7 was put into using @Inject consistently. For example, the JBatch specification is focused on using @Inject wherever possible. JAX-RS remains an exception, with its existing custom injection annotations. However, the JAX-RS specification leads understand the importance of eventual convergence, hopefully in Java EE 8.
Expanding the Use of @Stereotype
The fourth question was about expanding CDI @Stereotype to cover annotations across Java EE beyond just CDI. A solid majority of developers supported the idea of making @Stereotype more universal in Java EE. The comments maintained the general theme of strong support for CDI/Java EE alignment. Unfortunately, there was not enough time and resources in Java EE 7 to implement this fairly pervasive feature. However, it remains a serious consideration for Java EE 8.
Expanding Interceptor Use
The final set of questions was about expanding interceptors further across Java EE. Developers strongly supported the concept. Along with injection, interceptors are now supported across all Java EE 7 components including Servlets, Filters, Listeners, JAX-WS endpoints, JAX-RS resources, WebSocket endpoints and so on. I hope you are encouraged by how your input to the survey helped shape Java EE 7 and continues to shape Java EE 8. Participating in these sorts of surveys is of course just one way of contributing to Java EE. Another great way to stay involved is the Adopt-A-JSR Program. A large number of developers are already participating through their local JUGs. You could of course become a Java EE JSR expert group member or observer. You should stay tuned to The Aquarium for the progress of Java EE 8 JSRs if that's something you want to look into...

    Read the article

  • "Guiding" a Domain Expert to Retire from Programming

    - by James Kolpack
    I've got a friend who does IT at a local non-profit where they're using a custom web application which is no longer supported by the company that built it (out of business, support was too expensive, I'm not sure...). Development on this app started around 10+ years ago, so the technologies being harnessed are pretty out of date now - classic ASP using VBScript and SQL Server 2000. The application domain is in the realm of government bookkeeping - so even though the development team is long gone, there are often new requirements for this software. Enter... the domain expert. This is a middle-aged accounting whiz without much (or any?) prior development experience. He studied the pages, code and queries and learned how to ape the style of the original team which, believe me, is mediocre at best. He's very clever and very tenacious but has no experience in software beyond what he's picked up from this app. Otherwise, he's a pleasant guy to talk to and definitely knows his domain. My friend in IT, and probably his superiors in the company, want him out of the code. They view him as wasting his expertise on coding tasks he shouldn't be doing. My friend got me involved with a few small contracts which I handled without much problem - other than somewhat of a communication barrier with the domain expert. He explained the requirements very quickly, assuming prior knowledge of the domain which I do not have. This is partially his normal style, and I think partially a bit of resentment of my involvement. So, I think he feels like the owner of the code and has entrenched himself in a development position. Now... his coding technique. One of his latest endeavors was to make a page that only he could reach (theoretically - the security model for the system is wretched) where he can enter a raw SQL query, run it, and save the query to run again later. A report that I worked on had been originally implemented by him using 6 distinct queries, 3 or 4 temp tables to coordinate the data between the queries, and the final result obtained by importing the data from the final query into Access and doing a pivot and some formatting. It worked - well, some of the results were incorrect - but at what cost! (I implemented the report in a single query with at least 1/10th the amount of code.) He edits code in Notepad. He doesn't seem to know about online reference material for the languages. I recently read an article on Dr. Dobb's titled "What Makes Bad Programmers Different" - and instantly thought of our domain expert. From the article:
Their code is large, messy, and bug laden.
They have very superficial knowledge of their problem domain and their tools.
Their code has a lot of copy/paste and they have very little interest in techniques that reduce it.
They fail to account for edge cases, while inefficiently dealing with the general case.
They never have time to comment their code or break it into smaller pieces.
Empirical evidence plays little role in their decisions.
5.5 out of 6. My friend wants me to argue the case to their management - specifically, I got this email from their manager to respond to: "...Also, I need to talk to you about what effect there is from Domain Expert continuing to make edits to the live environment. If that is a problem for you I need to know so I can have his access blocked. Some examples would help." In my opinion, from a technical standpoint, it's dangerous to have him making changes without any oversight.
On the other hand, I'm just doing one-off contracts at this point and don't have much desire to get involved deeply enough that I'm essentially arguing as one of the Bobs from Office Space. I'd like to help my friend out - but I feel like I'm getting in the middle of a political battle. More importantly - if I do get involved and suggest that his editing privileges be removed, it needs to be handled carefully so that he doesn't feel belittled. He is beyond a doubt the foremost expert on this system. I'm hoping this is familiar territory for some other Stack Exchangers, because I'm feeling a little bewildered. How should I respond? Should I argue that he shouldn't be allowed to touch the code? Should I phrase it as "no single developer, no matter how experienced, should be working on production code unchecked"? Should I argue to keep him involved with the code, but with a review process? Should I say "glad I could help, but uh, I'm busy now!"? Other options? Thanks a bunch!

    Read the article

  • Pirates, Treasure Chests and Architectural Mapping

    Pirate 1: Why do pirates create treasure maps? Pirate 2: I do not know. Pirate 1: So they can find their gold. Yes, that was a bad joke, but it does illustrate a point. Pirates are known for drawing treasure maps to their most prized possessions. These documents detail the decisions pirates made in order to hide and find their chests of gold. The map allows them to retrace the steps they took originally to hide their treasure so that they may return to it. As software engineers, programmers, and architects we need to treat software implementations much like our treasure chest. Why is software like a treasure chest?
It costs money, time, and resources to develop (usually).
It can make or save money, time, and resources (hopefully).
If we operate under the assumption that software is like a treasure chest, then wouldn't it make sense to document the steps, rationale, concerns, and decisions about how it was designed? Pirates are notorious for documenting where they hide their treasure. Shouldn't we, as creators of software, do the same? Documenting our design decisions and the rationale behind them will help others understand and maintain the implemented systems. This can only be done if the design decisions are correctly mapped to their corresponding implementation. This allows architectural decisions to be traced from the conceptual model, through the architectural design, and finally to the implementation. Mapping gives software professionals a method to trace the reasons why specific areas of code were developed versus other options. Just like the pirates, we need to be able to trace our steps from the start of a project to its implementation, so that we will understand why specific choices were made. The traceability of a software implementation that actually maps back to its originating design decisions is invaluable for ensuring that architectural drift and erosion do not take place. Drift and erosion are prevented by allowing others to understand the rationale for why an implementation was created in a specific manner or methodology. The process of mapping distinct design concerns/decisions to the location of their implementation is called traceability. In this context, traceability is defined as a method for connecting distinct software artifacts. This process allows architectural design models and decisions to be directly connected with their physical implementation. The process of mapping architectural design concerns to a software implementation can be very complex. However, most design decisions can be placed in a few generalized categories.
Commonly Mapped Design Decisions
- Design Rationale
- Components and Connectors
- Interfaces
- Behaviors/Properties
Design rationale is one of the hardest categories to map directly to an implementation. Typically this rationale is mapped or documented in code via comments. These comments consist of general design decisions and reasoning because they do not directly refer to a specific part of an application. They typically focus more on the higher-level concerns. Components and connectors can be directly mapped to architectural concerns. Typically, concerns subdivide an application into distinct functional areas. These functional areas can then map directly back to their originating concerns. Interfaces can be mapped back to design concerns in one of two ways. Interfaces that pertain to specific function definitions can be directly mapped back to their originating concern(s).
However, more complicated interfaces require additional analysis to ensure that the proper mappings are created. Depending on the complexity, some Behaviors\Properties can be translated directly into a generic implementation structure that is ready for business logic. In addition, some behaviors can be translated directly into an actual implementation depending on the complexity and the architectural tools used. Mapping design concerns to an implementation is a lot of work to maintain, but it is doable. In order to ensure that concerns are mapped correctly and that an implementation correctly reflects its design concerns, one of two standard approaches is usually used.
All Changes Come From Architecture
By forcing all application changes to come through the architectural model prior to implementation, the existing mappings can be used to locate where in the implementation changes need to occur.
Allow Changes From Implementation Or Architecture
By allowing changes to come from the implementation and/or the architecture, the other area must be kept in sync. This methodology is more complex compared to the previous approach. One reason to justify the added complexity for an application is the fact that this approach tends to detect and prevent architectural drift and erosion. Additionally, this approach is usually maintained via software because of the complexity.
Reference: Taylor, R. N., Medvidovic, N., & Dashofy, E. M. (2009). Software Architecture: Foundations, Theory, and Practice. Hoboken, NJ: John Wiley & Sons.
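As a small illustration of the mapping idea (my own sketch, not drawn from the referenced text), one lightweight way to keep a component-to-decision mapping close to the code is a custom attribute that tags an implementation with the identifier of the design decision it realizes. The attribute name and the decision id below are hypothetical.

using System;

// Marks a class or method as the realization of a recorded design decision,
// so a simple reflection scan can trace the implementation back to its rationale.
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = true)]
public sealed class DesignDecisionAttribute : Attribute
{
    public string DecisionId { get; private set; }
    public string Summary { get; private set; }

    public DesignDecisionAttribute(string decisionId, string summary)
    {
        DecisionId = decisionId;
        Summary = summary;
    }
}

// Example: this connector maps back to decision "AD-017" in the architecture model.
[DesignDecision("AD-017", "Use a message queue between the ordering and billing components")]
public class BillingQueueConnector
{
    // ... implementation guided by AD-017 ...
}

A periodic reflection pass over the assemblies can then list every decision id alongside the types that implement it, giving a cheap traceability report that helps surface drift before it becomes erosion.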

    Read the article

  • Navigate Quickly with JustCode and Ctrl+Click

    Ctrl + Click is a widely used shortcut for Go To Definition in many development environments, but not in Visual Studio. We, the JustCode team, find it really useful so we added it to Visual Studio. But we didn't stop there - we improved it even further. Read on to find the details. With JustCode you get an enhanced Go To Definition. By default you can execute it in the Visual Studio editor using one of the following shortcuts: Middle Click, Ctrl+Left Click, F12, Ctrl+Enter, Ctrl+B. The first usage of this feature is not much different from the default Visual Studio Go To Definition command: use it where a member, type, method, property, etc. is used, to navigate to the definition of that item. For example, if you have this method:

public void Start()
{
    lion = new Lion();
    lion.Roar();
}

If you hold Ctrl and click on the usage of lion you will go to the lion member definition. If you hold Ctrl and click on Lion you will go to the Lion class definition. What we added is the ability to easily find all the usages of the item you just navigated to. For example:

public class Lion
{
    public void Roar()
    {
        Console.WriteLine("Rhaaaar");
    }
}

If you hold Ctrl and click on the Lion definition you will see all the usages of the Lion type; if you click on the Roar method definition you will see all the usages of the Roar method. And if there is only one usage you will be taken automatically to that usage. In the examples I use C#, but it also works in VB.NET, JavaScript, ASP.NET and XAML. Why do we like this feature? Let me first start with how Ctrl+Click (or the Go To Definition command) is used. We noticed that developers use it especially in what we call "code browsing sessions". In simple words, this is when you browse around the code looking for a bug, just reading the code or searching for something. Sound familiar? In our experience, when you go to the definition of some item you often want to know more about it, and the first thing you need is to find its usages. With JustCode this is just one click away. Why Ctrl+Click/Middle Click over F12/Ctrl+Enter/Ctrl+B? Actually you can use all of them. But during these "code browsing sessions" we noticed that most developers use the mouse. So the mouse is already in use, and pressing Ctrl+Click (or the Middle Click) is so natural. During heavy coding sessions, or if you are a keyboard-type developer, F12 (or any of the other keyboard shortcuts) is the key. We really use this feature heavily, not only in our team but in the whole company. It saves us a bit of time many times a day. And it adds up. We hope you will like it too. Your feedback is more than welcome. P.S. If you don't want JustCode to capture Ctrl+Click and the Middle Click in the editor, you can change that in JustCode->Options->General in the Navigation group. Keyboard shortcuts can be reassigned using the Visual Studio keyboard shortcuts editor.

    Read the article

  • Content in Context: The right medicine for your business applications

    - by Lance Shaw
    For many of you, your companies have already invested in a number of applications that are critical to the way your business is run. HR, Payroll, Legal, Accounts Payable - and while they might need an upgrade in some cases, they are all there, handling the lifeblood of your business. But are they really running as efficiently as they could be? For many companies, the answer is no. The problem has to do with the important information caught up within documents and paper. It's everywhere except where it truly needs to be - readily available right within the context of the application itself. When the right information cannot be easily found, business processes suffer significantly. The importance of this struck me recently when I went to meet my new doctor and get a routine physical. Walking into the office lobby, I couldn't help but notice rows and rows of manila folders in racks from floor to ceiling, filled with documents and sensitive, personal information about various patients like myself. As I looked at all that paper and all that history, two things immediately popped into my head: "How do they find anything?" and then the even more alarming, "So much for information security!" It sure looked to me like all those documents could be accessed by anyone with a key to the building. Now the truth is that the offices of many general practitioners look like this all over the United States and the world. But it had me thinking: is the same thing going on in just about any company around the world, involving a wide variety of important business processes? Probably so. Think about all the various processes going on in your company right now. Invoice payments are being processed through Accounts Payable, contracts are being reviewed by Procurement, and Human Resources is reviewing job candidate submissions and doing background checks. All of these processes, and many more like them, rely on access to forms and documents, whether they are paper or digital. Now consider that it is estimated that employees spend nearly 9 hours a week searching for information and not finding it. That is a lot of very well paid employees spending more than one day per week not doing their regular job while they search for or re-create what already exists. Back in the doctor's office, I saw this trend exemplified as well. First, I had to fill out a new patient form, even though my previous doctor had transferred my records over months previously. After filling out the form, I was later introduced to my new doctor who then interviewed me and asked me the exact same questions that I had answered on the form. I understand that there is value in the interview process and it was great to meet my new doctor, but this simple process could have been so much more efficient if the information already on file could have been brought directly together with the new patient information I had provided. Instead of having a highly paid medical professional re-enter the same information into the records database, the form I filled out could have been immediately scanned into the system, associated with my previous information, discrepancies identified, and the entire process streamlined significantly.
We won't solve the health records management issues that exist in the United States in this blog post, but this example illustrates how the automation of information capture and classification can eliminate a lot of repetitive and costly human entry and re-creation, even in a simple process like new patient on-boarding. In a similar fashion, by taking a fresh look at the various processes in place today in your organization, you can likely spot points along the way where automating the capture of, and access to, the right information could yield significant improvements. As you evaluate how content flows through your organization alongside those processes, take a look at how departments and regions share information between the applications they are using. Business applications are often implemented on an individual department basis to solve specific problems, but a holistic approach to overall information management is not taken at the same time. The end result over the years is disparate applications with separate information repositories, and in many cases these contain duplicate information, or worse, slightly different versions of the same information. This is where Oracle WebCenter Content comes into the story. More and more companies are realizing that they can significantly improve their existing application processes by automating the capture of paper, forms and other content. This makes the right information immediately accessible in the context of the business process, and making the same information accessible across departmental systems has helped many organizations realize significant cost savings. Here on the Oracle WebCenter team, one of our primary goals is to help customers find new ways to be more effective, more cost-efficient and to manage information as effectively as possible. We have a series of three webcasts occurring over the next few weeks that are focused on the integration of enterprise content management within the context of business applications. We hope you will join us for one or all three and that you will find them informative. Click here to learn more about these sessions and to register for them. There are many aspects of information management to consider as you look at integrating content management within your business applications. We've barely scratched the surface here, but look for upcoming blog posts where we will discuss more specifics on the value of delivering documents, forms and images directly within applications like Oracle E-Business Suite, PeopleSoft Enterprise, JD Edwards EnterpriseOne, Siebel CRM and many others. What do you think? Are your important business processes as healthy as they can be? Do you have any insights to share on the value of delivering content directly within critical business processes? Please post a comment and let us know the value you have realized, the lessons learned and what specific areas you are interested in.

    Read the article

  • Time to Get Started with Oracle Data Integrator 12c!

    - by Sandrine Riley
    It is time to get started with Oracle Data Integrator 12c! We would like to highlight for you a great place to begin your journey with ODI. Here you will find the Getting Started section for ODI, which provides a few options.

Step 1 - Start by downloading the Getting Started (PDF). The document provides general background information and detailed examples to help you learn how to use Oracle Data Integrator. This document guides you through a first project with Oracle Data Integrator 12c, and the instructions in this document are required for using the Getting Started Demonstration Environment. If you would like to run the Getting Started Demonstration Environment on your system, go to Step 2 A. If you would like to run the Getting Started Demonstration Environment installed and pre-configured in a Virtual Image, go to Step 2 B.

Step 2
A - If you would like to run the Getting Started Demo Environment on your system, you will want to then download the Demo Environment for Getting Started (ZIP) archive containing ODI metadata, configuration scripts and sample data.
B - Alternatively, a pre-configured virtual machine is available with the Getting Started installation and configuration. The Virtual Machine platform uses Oracle Virtual Box technology, and use of this configuration would necessitate the download/installation of Virtual Box and the Getting Started virtual machine. If you prefer this route and would rather run the Getting Started Demonstration Environment installed and preconfigured in a Virtual Machine, please download the ODI 12c Getting Started Virtual Machine.

Step 3 - In continuation with the Getting Started Demonstration Environment and the Getting Started Guide, you will now be able to run through a first project with Oracle Data Integrator. Now that you have successfully done the exercises above, you may want to play on your own and develop with Oracle Data Integrator in your environment... here you can download a full version of Oracle Data Integrator 12c. Enjoy, and happy developing! Learn more about Oracle Data Integrator, including the FAQ, the Oracle by Example Series, Sample Code, and White Papers.

    Read the article

  • Styling ASP.NET MVC Error Messages

    - by MightyZot
    Originally posted on: http://geekswithblogs.net/MightyZot/archive/2013/11/11/styling-asp.net-mvc-error-messages.aspx
Off the cuff, it may look like you're stuck with the presentation of your error messages (model errors) in ASP.NET MVC. That's not the case, though. You actually have quite a number of options with regard to styling those boogers. Like many of the helpers in MVC, the Html.ValidationMessageFor helper has multiple prototypes. One of those prototypes lets you pass a dictionary, or anonymous object, representing attribute values for the resulting markup.

@Html.ValidationMessageFor(m => Model.Whatever, null, new { @class = "my-error" })

By passing the htmlAttributes parameter, which is the last parameter in the call to the prototype of Html.ValidationMessageFor shown above, I can style the resulting markup by associating styles with the my-error CSS class. When you run your MVC project and view the source, you'll notice that MVC adds the class field-validation-valid or field-validation-error to a span created by the helper. You could actually just style those classes instead of adding your own... it's really up to you. Now, what if you wanted to move that error message around? Maybe you want to put that error message in a box or a callout. How do you do that? When I first started using MVC, it didn't occur to me that the Html.ValidationMessageFor helper just spits out a little bit of markup. I wanted to put the error messages in boxes with white backgrounds (our site originally had a black background) and show a little nib on the side to make them look like callouts or conversation bubbles. Not realizing how much freedom there is in the styling and markup, and after reading someone else's post, I created my own version of the ValidationMessageFor helper that took out the span and replaced it with divs. I styled the divs to produce the effect of a popup box and had a lot of trouble with sizing and such. That's a really silly and unnecessary way to solve this problem. If you want to move your error messages around, all you have to do is move the helper. MVC doesn't appear to care where you put it, which makes total sense when you think about it. Html.ValidationMessageFor is just spitting out a little markup using a little bit of reflection on the name you're passing it. All you've got to do to style it the way you want is to put it in whatever markup you desire. Take a look at this, for example...

<div class="my-anchor">@Html.ValidationMessageFor(m => Model.Whatever)</div>
@Html.TextBoxFor(m => Model.Whatever)

Now, given that bit of HTML, consider the following CSS...

<style>
.my-anchor { position:relative; }
.field-validation-error {
   background-color:white;
   border-radius:4px;
   border: solid 1px #333;
   display: block;
   position: absolute;
   top:0; right:0; left:0;
   text-align:right;
}
</style>

The my-anchor class establishes an anchor for the absolutely positioned error message. Now you can move the error message wherever you want it relative to the anchor. Using CSS3, there are some other tricks. For example, you can use the :not(:empty) selector to select the span and apply styles based upon whether or not the span has text in it. Keep it simple, though. Moving your elements around using absolute positioning may cause you issues on devices with screens smaller than your standard laptop or PC. While looking for something else recently, I saw someone asking how to style the output from Html.ValidationSummary.
Html.ValidationSummary is the helper that will spit out a list of property errors, general model errors, or both. Html.ValidationSummary spits out fairly simple markup as well, so you can use the techniques described above with it also. The resulting markup is a <ul><li></li></ul> unordered list of error messages that carries the class validation-summary-errors. In the forum question, the user was asking how to hide the error summary when there are no errors. Their errors were in a red box and they didn't want to show an empty red box when there aren't any errors. Obviously, you can use the CSS3 selectors to apply different styles to the list when it's empty and when it's not empty; however, that's not supported in all browsers. Well, it just so happens that the unordered list carries the style validation-summary-valid when the list is empty. While the div rendered by the Html.ValidationSummary helper is a visible div containing one invisible list item, you can always just style the whole div with "display:none" when the validation-summary-valid class is applied and make it visible when the validation-summary-errors class is applied. Or, if you don't like that solution, which I like quite well, you can also check the model state for errors with something like this...

int errors = ViewData.ModelState.Sum(ms => ms.Value.Errors.Count);

That'll give you a count of the errors that have been added to ModelState. You can check that and conditionally include markup in your page if you want to. The choice is yours. Obviously, doing most everything you can with styles increases the flexibility of the presentation of your solution, so I recommend going that route when you can. That picture of the fat guy jumping has nothing to do with the article. That's just a picture of me on the roof and I thought it was funny. Doesn't every post need a picture?
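To make that last option concrete, here is a minimal Razor sketch (my own, not from the original post) that emits the summary markup only when ModelState actually contains errors; the my-summary-box class is an assumed style hook.

@using System.Linq

@{
    // Count every error accumulated in ModelState for this request.
    int errorCount = ViewData.ModelState.Sum(ms => ms.Value.Errors.Count);
}
@if (errorCount > 0)
{
    <div class="my-summary-box">
        @Html.ValidationSummary(false)
    </div>
}

Passing false to ValidationSummary keeps both property-level and model-level errors in the list; pass true if you only want the model-level messages inside the box.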

    Read the article

  • Access Control Service V2 and Facebook Integration

    - by Your DisplayName here!
    I haven't been blogging about ACS2 in the past because it was not released and I was kinda busy with other stuff. Needless to say I spent quite some time with ACS2 already (both in customer situations as well as in the classroom and at conferences). ACS2 rocks! It's IMHO the most interesting and useful (and most unique) part of the whole Azure offering! For my talk at VSLive yesterday, I played a little with the Facebook integration. See Steve's post on the general setup. One claim that you get back from Facebook is an access token. This token can be used to directly talk to Facebook and query additional properties about the user. Which properties you have access to depends on which authorization your Facebook app requests. You can specify this in the identity provider registration page for Facebook in ACS2. In my example I added access to the home town property of the user. Once you have the access token from ACS you can use e.g. the Facebook SDK from Codeplex (also available via NuGet) to talk to the Facebook API. In my sample I used the WIF ClaimsAuthenticationManager to add the additional home town claim. This is not necessarily how you would do it in a "real" app. Depends ;) The code looks like this (sample code!):

public class ClaimsTransformer : ClaimsAuthenticationManager
{
    public override IClaimsPrincipal Authenticate(string resourceName, IClaimsPrincipal incomingPrincipal)
    {
        if (!incomingPrincipal.Identity.IsAuthenticated)
        {
            return base.Authenticate(resourceName, incomingPrincipal);
        }

        string accessToken;
        if (incomingPrincipal.TryGetClaimValue("http://www.facebook.com/claims/AccessToken", out accessToken))
        {
            try
            {
                var home = GetFacebookHometown(accessToken);
                if (!string.IsNullOrWhiteSpace(home))
                {
                    incomingPrincipal.Identities[0].Claims.Add(new Claim("http://www.facebook.com/claims/HomeTown", home));
                }
            }
            catch { }
        }

        return incomingPrincipal;
    }

    private string GetFacebookHometown(string token)
    {
        var client = new FacebookClient(token);

        dynamic parameters = new ExpandoObject();
        parameters.fields = "hometown";

        dynamic result = client.Get("me", parameters);
        return result.hometown.name;
    }
}
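To round the sample off, here is one way the relying party code could read the home town claim added above - again just a hedged sketch using the WIF object model (Microsoft.IdentityModel), and assuming the ClaimsTransformer has been registered as the claimsAuthenticationManager in the WIF configuration so it actually runs.

using System.Linq;
using System.Threading;
using Microsoft.IdentityModel.Claims;

public static class FacebookClaims
{
    // Returns the Facebook home town claim for the current user, or null if it is absent.
    public static string GetHomeTown()
    {
        var identity = Thread.CurrentPrincipal.Identity as IClaimsIdentity;
        if (identity == null)
        {
            return null;
        }

        var claim = identity.Claims.FirstOrDefault(
            c => c.ClaimType == "http://www.facebook.com/claims/HomeTown");

        return claim != null ? claim.Value : null;
    }
}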

    Read the article

  • Redirect Google crawler to different robots.txt via .htaccess

    - by user3474818
    I have googled for an answer all day and still couldn't find one. I have a virtual subdomain www.static.example.com which is a mirror site of www.example.com. It means I have just one root folder for the subdomain and the domain as well. I want to redirect crawlers to a different robots.txt file - robots_static.txt - when they see .static in the URL, in which I will forbid indexing via a Disallow directive. I want to do this because I have duplicated content in Google search results. The subdomain is showing the exact same content as the main domain. Does anyone know how I could make crawlers see robots_static.txt instead of robots.txt? What I have managed to find so far is this:

RewriteCond %{HTTP_HOST} ^www.static.*$ [NC]
RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*robots\.txt.*\ HTTP/ [NC]
RewriteRule ^robots\.txt /robots_static.txt [NC,L]

but when I check in webmaster tools, it still sees robots.txt as my robots file instead of robots_static.txt, so it crawls and indexes everything twice. What did I do wrong? Thanks

EDIT: This is my .htaccess file

##
# @package Joomla
# @copyright Copyright (C) 2005 - 2013 Open Source Matters. All rights reserved.
# @license GNU General Public License version 2 or later; see LICENSE.txt
##

##
# READ THIS COMPLETELY IF YOU CHOOSE TO USE THIS FILE!
#
# The line just below this section: 'Options +FollowSymLinks' may cause problems
# with some server configurations. It is required for use of mod_rewrite, but may already
# be set by your server administrator in a way that dissallows changing it in
# your .htaccess file. If using it causes your server to error out, comment it out (add # to
# beginning of line), reload your site in your browser and test your sef url's. If they work,
# it has been set by your server administrator and you do not need it set here.
##

## Can be commented out if causes errors, see notes above.
Options +FollowSymLinks

## Mod_rewrite in use.
RewriteEngine On

RewriteEngine On
RewriteCond %{HTTP_HOST} !^www\.
RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]

RewriteCond %{HTTP_HOST} ^www.static.*$ [NC]
RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*robots\.txt.*\ HTTP/ [NC]
RewriteRule ^robots\.txt /robots_static.txt [NC,L]

## Begin - Rewrite rules to block out some common exploits.
# If you experience problems on your site block out the operations listed below
# This attempts to block the most common type of exploit `attempts` to Joomla!
#
# Block out any script trying to base64_encode data within the URL.
RewriteCond %{QUERY_STRING} base64_encode[^(]*\([^)]*\) [OR]
# Block out any script that includes a <script> tag in URL.
RewriteCond %{QUERY_STRING} (<|%3C)([^s]*s)+cript.*(>|%3E) [NC,OR]
# Block out any script trying to set a PHP GLOBALS variable via URL.
RewriteCond %{QUERY_STRING} GLOBALS(=|\[|\%[0-9A-Z]{0,2}) [OR]
# Block out any script trying to modify a _REQUEST variable via URL.
RewriteCond %{QUERY_STRING} _REQUEST(=|\[|\%[0-9A-Z]{0,2})
# Return 403 Forbidden header and show the content of the root homepage
RewriteRule .* index.php [F]
#
## End - Rewrite rules to block out some common exploits.

## Begin - Custom redirects
#
# If you need to redirect some pages, or set a canonical non-www to
# www redirect (or vice versa), place that code here. Ensure those
# redirects use the correct RewriteRule syntax and the [R=301,L] flags.
#
## End - Custom redirects

##
# Uncomment following line if your webserver's URL
# is not directly related to physical file paths.
# Update Your Joomla! Directory (just / for root).
##
# RewriteBase /

RewriteCond %{THE_REQUEST} ^GET.*index\.php [NC]
RewriteCond %{THE_REQUEST} !/system/.*
RewriteRule (.*?)index\.php/*(.*) /$1$2 [R=301,L]
RewriteCond %{THE_REQUEST} ^GET

## Begin - Joomla! core SEF Section.
#
RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
#
# If the requested path and file is not /index.php and the request
# has not already been internally rewritten to the index.php script
RewriteCond %{REQUEST_URI} !^/index\.php
# and the request is for something within the component folder,
# or for the site root, or for an extensionless URL, or the
# requested URL ends with one of the listed extensions
RewriteCond %{REQUEST_URI} /component/|(/[^.]*|\.(php|html?|feed|pdf|vcf|raw))$ [NC]
# and the requested path and file doesn't directly match a physical file
RewriteCond %{REQUEST_FILENAME} !-f
# and the requested path and file doesn't directly match a physical folder
RewriteCond %{REQUEST_FILENAME} !-d
# internally rewrite the request to the index.php script
RewriteRule .* index.php [L]
#
## End - Joomla! core SEF Section.

<FilesMatch "\.(ico|pdf|flv|jpg|ttf|jpg|jpeg|png|gif|js|css|swf)$">
Header set Expires "Wed, 15 Apr 2020 20:00:00 GMT"
Header set Cache-Control "public"
</FilesMatch>

<ifModule mod_headers.c>
Header set Connection keep-alive
</ifModule>

########## Begin - Remove Etags
# FileETag none
#
########## End - Remove Etags

    Read the article

  • Building Enterprise Smartphone App &ndash; Part 2: Platforms and Features

    - by Tim Murphy
    This is part 2 in a series of posts based on a talk I gave recently at the Chicago Information Technology Architects Group. Feel free to leave feedback. In the previous post I discussed what reasons a company might have for creating a smartphone application. In this installment I will cover some of the history and current state of the different platforms, as well as features that can be leveraged for building enterprise smartphone applications.

Platforms

Before you start choosing a platform to develop your solutions on, it is good to understand how we got here and what features you can leverage.

History

To my memory we owe all of this to a product called the Apple Newton that came out in 1987. It was the first PDA, and back then I was much more of an Apple fan. I was very impressed with this device even though it never really went anywhere. The Palm Pilot by US Robotics was the next major advancement in PDAs. It had a simple shorthand window that allowed for quick stylus entry. Later, Windows CE came out and started the broadening of the PDA market. After that it was the Palm and CE operating systems that started showing up on cell phones, and for some time these were the two dominant operating systems distributed with devices from multiple hardware vendors.

Current

The iPhone was the first smartphone to take away the stylus and give us a multi-touch interface. It was a revolution in usability and really changed the attractiveness of smartphones for the general public. This brought us to the beginning of the current state of the market with the concept of an online store that makes it easy for customers to get new features and functionality on demand. With Android, Google made this more than a one-horse race. Not only did they come to compete, their low cost actually made them the leading OS. Of course what made Android so attractive is also its major fault. It is so open that it has been a target for malware, which leaves consumers exposed. Fortunately for Google though, most consumers aren't aware of the threat that they are under. Although Microsoft had put out one of the first smartphone operating systems with CE, it had to play catch-up and finally came out with the Windows Phone. They have gone for a market approach between those of iOS and Android. They support multiple hardware vendors like Google, but they kept a certification process for applications that is similar to Apple's. They also created a user interface that was different enough to give it a clear separation from the other two platforms. The result of all this is hundreds of millions of smartphones being sold every year across all three platforms, giving us a wide range of choices and challenges when it comes to developing solutions.

Features

So what are the features that make these devices flexible enough to be considered for use in the enterprise? The biggest advantage of today's devices is network connectivity. The ability to access information from multiple sources at a moment's notice is critical for businesses. Add to that the ability to communicate over a variety of text, voice and video modes and we have a powerful starting point. Every smartphone has a camera, and they are not just useful for posting to Instagram. We are seeing more applications, such as Bing Vision, that allow us to scan just about any printed code or text to find information.
These capabilities have been made available to developers in the form of standard libraries for reading barcodes of just about any flavor, and for optical character recognition (OCR) interpretation. Bluetooth gives us the ability to communicate with multiple devices. Whether these are headsets, keyboards or printers, the wireless communication capabilities are just starting to evolve. The more these wireless communication protocols grow, the more opportunities we will see to transfer data between users and a variety of devices. Local storage of information that can be called up even when the device cannot reach the network is the other big capability (see the short sketch at the end of this post). This gives users the ability to work offline as well as transmit information when connections are restored. These are the tools that we have to work with to build applications that can be leveraged to gain a competitive advantage for the companies that implement them.

Coming Up

In the third installment I will cover key concerns that you face when building enterprise smartphone apps.

del.icio.us Tags: smartphones, enterprise smartphone apps, architecture, iOS, Android, Windows Phone
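As one concrete (and purely illustrative) example of that offline capability on the Windows Phone side, small pieces of data can be cached in isolated storage and pushed to a service when connectivity returns. The class and key names below are my own assumptions, not code from the talk.

using System.IO.IsolatedStorage;

public static class OfflineCache
{
    // Save a value locally so the app keeps working without a network connection.
    public static void Save(string key, string value)
    {
        var settings = IsolatedStorageSettings.ApplicationSettings;
        settings[key] = value;   // add or overwrite the entry
        settings.Save();         // persist to isolated storage
    }

    // Read the value back later, e.g. when the device reconnects and queued data is transmitted.
    public static string Load(string key)
    {
        string value;
        return IsolatedStorageSettings.ApplicationSettings.TryGetValue(key, out value)
            ? value
            : null;
    }
}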

    Read the article

  • 2012 – The End Of The World Review

    - by Tim Murphy
    The end of the world must be coming. Not because the Mayan calendar says so, but because Microsoft is innovating more than Apple. It has been a crazy year, with pundits declaring not that the end of the world is coming, but that the end of Microsoft is coming. Let's take a look at what 2012 has brought us.

    The beginning of the year is a blur. I managed to get to TechEd in June, which was the first time I got to take a deep dive into Windows 8 and many other things that had been announced in 2011. The promise I saw in these products was really encouraging. The thought of being able to run Windows 8 from a thumb drive or have Hyper-V native to the OS told me that, at least for developers, good things were coming.

    I finally got my feet wet with Windows 8 with the developer preview just prior to the RTM. While the initial experience was a bit of a culture shock, I quickly grew to love it. The media still seems to hold little love for the "reimagined" platform, but I think that once people spend some time with it they will enjoy the experience, and what the FUD mongers say will fade into the background. With the launch of the OS we finally got a look at the Surface. I think this is a bold entry into the tablet market. While I wish it were a little more affordable, I am already starting to see them in the wild being used by non-techies.

    I was waiting for Windows Phone 8 at least as much as Windows 8, probably more. The new hardware, better marketing and new OS features are, I think, finally going to push us to the point of having a real presence in the smartphone market. I am seeing a number of iPhone users picking up a Nokia Lumia 920 and getting rid of their brand new iPhone 5. The only real debacle I saw around the launch was when they held back the SDK from general developers.

    Shortly after the launch events came Build 2012. I was extremely disappointed that I didn't make it to this year's Build. Even if they weren't handing out Surface and Lumia devices, I think the atmosphere and content were something that really needed to be experienced in person. Hopefully there will be a Build next year and its schedule will be announced soon. As you would expect, Windows 8 and Windows Phone 8 development were the mainstay of the conference, but improvements in Azure also played a key role. This movement of services to the cloud will continue, and we need to understand where it best fits into the solutions we build.

    Lower on the radar this year were Office 2013, SQL Server 2012, and Windows Server 2012. Their glory stolen by the consumer OS and hardware announcements, these new releases are no less important. Companies will see significant improvements in performance and capabilities if they upgrade. At TechEd they had shown some of the new features of Windows Server 2012 around hardware integration and Hyper-V performance, which absolutely blew me away. It is our job to bring these important improvements to our companies' attention so that they can be leveraged.

    Personally, the consulting business in 2012 was the busiest it has been in a long time. More companies were ready to attack new projects after several years of putting them on the back burner. I also worked to bring back momentum to the Chicago Information Technology Architects Group. Both the community and clients are excited about the new technologies that have come out in 2012, and now it is time to deliver.

    What does 2013 have in store? I don't see it being quite as exciting as 2012. Microsoft will be releasing the Surface Pro in January, and it seems that we will see more frequent OS updates for Windows. There are rumors that we may see a Surface phone in 2013. It has also been announced that there will finally be a rework of the Xbox next fall. The new year will also be a time for us in the development community to take advantage of these new tools and devices. After all, it is what we build on top of these platforms that will attract more consumers and corporations to using them.

    Just as I am 99.999% sure that the world is not going to end this year, I am also sure that Microsoft will move on and that most of this negative backlash from the media is actually fear and jealousy. In the end I think we have a promising year ahead of us.

    del.icio.us Tags: Microsoft,Pundits,Mayans,Windows 8,Windows Phone 8,Surface

    Read the article

  • Sending notification after an event has remained open for a specified period

    - by Loc Nhan
    Enterprise Manager (EM) 12c allows you to create an incident rule to send a notification and/or create an incident after an event has been open for a specified period. Such an incident rule helps prevent premature alerts on issues that may correct themselves within a certain amount of time. For example, suppose there are some agents in an unstable network area, and there are often communication failures between the agents and the OMS lasting three or four minutes at a time. In this scenario, you may only want to receive alerts after an agent in that area has been in the Agent Unreachable status for at least five minutes.

    Note: Many non-target availability metrics allow users to specify the "number of occurrences", that is, the number of consecutive times metric values reach thresholds before a notification is sent. It is best to use that feature for such metrics.

    This article provides a step-by-step guide for creating an incident rule set to cater for the above scenario, that is, to create an incident and send a notification after the Agent Unreachable event has remained open for a five-minute duration.

    Steps to create the incident rule

    1. Log on to the console and navigate to Setup -> Incidents -> Incident Rules. Note: A non-super user requires the Create Enterprise Rule Set privilege, which is a resource privilege, to create an incident rule. The Incident Rules - All Enterprise Rules page displays.
    2. Click Create Rule Set... The Create Rule Set page displays.
    3. Enter a name for the rule set (e.g. Rule set for agents in flaky network areas), optionally enter a description, leave everything else at default values, and click + Add. The Search and Select: Targets page pops up. Note: While you can create a rule set for individual targets, it is a best practice to use a group for this purpose.
    4. Select an appropriate group, e.g. the AgentsInFlakyNetwork group. The Select button becomes enabled; click the button. The Create Rule Set page displays.
    5. Leave everything at default values, and click the Rules tab. The Create Rule Set page displays.
    6. Click Create... The Select Type of Rule to Create page pops up.
    7. Leave the Incoming events and updates to events option selected, and click Continue. The Create New Rule : Select Events page displays.
    8. Select Target Availability from the Type drop-down list. The page shows more options for Target Availability.
    9. Select the Specific events of type Target Availability option, and click + Add. The Select Target Availability events page pops up.
    10. Select Agent from the Target Type drop-down list. The page expands.
    11. Click the Agent unreachable checkbox, and click OK. Note: If you also want to receive a notification when the event is cleared, click the Agent unreachable end checkbox as well before clicking OK. The Create New Rule : Select Events page displays.
    12. Click Next. The Create New Rule : Add Actions page displays.
    13. Click + Add. The Add Actions page displays.
    14. Do the following:
        a. Select the Only execute the actions if specified conditions match option (you do not want the action to trigger every time). The following options appear in the Conditions for Actions section.
        b. Select the Event has been open for specified duration option. The Conditions for actions section expands.
        c. Change the value of Event has been open for to 5 Minutes.
        d. In the Create Incident or Update Incident section, select the Create Incident checkbox.
        e. In the Notifications section, enter an appropriate EM user or email address in the E-mail To field.
        f. Click Continue (in the top right hand corner). The Create New Rule : Add Actions page displays.
    15. Click Next. The Create New Rule : Specify name and Description page displays.
    16. Enter a rule name, and click Next. The Create New Rule : Review page appears.
    17. Click Continue, and proceed to save the rule set. The incident rule set creation completes.

    After one of the agents in the group specified in the rule set has been stopped for over 5 minutes, EM will send a mail notification and create an incident.

    In conclusion, you have seen the steps to create an example incident rule set that only creates an incident and triggers a notification after an event has been open for a specified period. Such an incident rule can help prevent unnecessary incidents and alert notifications, leaving EM administrators time for more important tasks. - Loc Nhan
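
    Purely as an illustration of the hold-and-check logic that such a rule encodes (this is not how Enterprise Manager implements it internally), the small plain-Java sketch below schedules a check when an event opens and only sends a notification if the event is still open after the hold period; if the event clears in the meantime, nothing is sent. All class and method names here are hypothetical.

    import java.time.Duration;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Hold-and-check notification: notify only if an event has stayed open for
    // the whole hold period (e.g. Agent Unreachable open for five minutes).
    public class DelayedNotifier {

        private final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        private final Map<String, Boolean> openEvents = new ConcurrentHashMap<>();
        private final Duration holdPeriod;

        public DelayedNotifier(Duration holdPeriod) {
            this.holdPeriod = holdPeriod;
        }

        // An event opened: remember it and schedule a check for holdPeriod from now.
        public void eventOpened(String eventId) {
            openEvents.put(eventId, Boolean.TRUE);
            scheduler.schedule(() -> {
                if (openEvents.containsKey(eventId)) {
                    // Still open after the hold period: create the incident / notify.
                    System.out.println("Notifying: event still open after "
                            + holdPeriod.toMinutes() + " minutes: " + eventId);
                }
            }, holdPeriod.toMillis(), TimeUnit.MILLISECONDS);
        }

        // The event cleared before the check fired: the pending notification is suppressed.
        public void eventCleared(String eventId) {
            openEvents.remove(eventId);
        }
    }

    Wiring eventOpened to the Agent unreachable event and eventCleared to Agent unreachable end gives the same suppress-the-short-blips behavior that steps 11 and 14 configure declaratively in the rule set.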

    Read the article

< Previous Page | 435 436 437 438 439 440 441 442 443 444 445 446  | Next Page >