Search Results

Search found 487 results on 20 pages for 'etl instrumentation'.

Page 4/20 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • ODI 11g – How to override SQL at runtime?

    - by David Allan
    Following on from the posting some time back entitled ‘ODI 11g – Simple, Powerful, Flexible’, here we push the envelope even further. Rather than having the SQL we override defined statically in the interface design, we make it configurable via a variable… at runtime. Imagine you have a well-defined interface shape that you want fulfilled, and that shape can be satisfied from a number of different sources – that is what this allows: the ability for one interface to consume data from many different places using variables. The cool thing about ODI’s reference API here is that it can be fantastically flexible and useful. When I use the variable as the option value and execute the top-level scenario that uses this temporary interface, I get prompted (or can be prompted, to be precise) for the value of the variable. Note that I am using the <@=odiRef.getObjectName("L","EMP", "SCOTT","D")@> notation for the table reference; since this is resolved at runtime, the context will resolve to the correct table name etc. Each time I execute, I could use a different source provider (obviously with some dependencies on KMs/technologies here). For example, in the following Groovy snippet the first execution uses the EMP datastore in the SCOTT model, and the next uses the OTHERS datastore in the BOB model:

        m = new Properties();
        m.put("DEMO.SQLSTR", 'select empno, deptno from <@=odiRef.getObjectName("L","EMP", "SCOTT","D")@>');
        s = new StartupParams(m);
        runtimeAgent.startScenario("TOP", null, s, null, "GLOBAL", 5, null, true);

        m2 = new Properties();
        m2.put("DEMO.SQLSTR", 'select empno, deptno from <@=odiRef.getObjectName("L","OTHERS", "BOB","D")@>');
        s2 = new StartupParams(m2);
        runtimeAgent.startScenario("TOP", null, s2, null, "GLOBAL", 5, null, true);

    You’ll need a patch to 11.1.1.6 for this type of capability. Thanks to my ole buddy Ron Gonzalez from the Enterprise Management group for help pushing the envelope!

    Read the article

  • Why "Tailoring" Your Resume Is Bad

    - by Mike C
    I was just writing a response to a comment on my "Sell Yourself!" presentation ( http://sqlblog.com/blogs/michael_coles/archive/2010/12/05/sell-yourself-presentation.aspx#comments ), and it started getting a little lengthy so I decided to turn it into a blog post. The "Sell Yourself!" post got a couple of very good comments on the blog, and quite a few more comments offline. I think I'll start this one with a great exchange from the movie "The Princess Bride": Vizzini: HE DIDN'T FALL? INCONCEIVABLE....(read more)

    Read the article

  • ODI 11g – Oracle Multi Table Insert

    - by David Allan
    With the IKM Oracle Multi Table Insert you can generate Oracle-specific DML for inserting into multiple target tables from a single query result – without reprocessing the query or staging its result. When designing to exploit this IKM you must split the problem into its reusable parts – the select part goes in one interface (I named it SELECT_PART), then each target goes in a separate interface (INSERT_SPECIAL and INSERT_REGULAR). So for my statement below…

        /* INSERT_SPECIAL interface */
        insert all
        when 1=1 and (INCOME_LEVEL > 250000) then
          into SCOTT.CUSTOMERS_NEW
            (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL, CREDIT_LIMIT, EMAIL,
             USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED)
          values
            (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL, CREDIT_LIMIT, EMAIL,
             USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED)
        /* INSERT_REGULAR interface */
        when 1=1 then
          into SCOTT.CUSTOMERS_SPECIAL
            (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL, CREDIT_LIMIT, EMAIL,
             USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED)
          values
            (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL, CREDIT_LIMIT, EMAIL,
             USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED)
        /* SELECT_PART interface */
        select
            CUSTOMERS.EMAIL EMAIL,
            CUSTOMERS.CREDIT_LIMIT CREDIT_LIMIT,
            UPPER(CUSTOMERS.NAME) NAME,
            CUSTOMERS.USER_MODIFIED USER_MODIFIED,
            CUSTOMERS.DATE_MODIFIED DATE_MODIFIED,
            CUSTOMERS.BIRTH_DATE BIRTH_DATE,
            CUSTOMERS.MARITAL_STATUS MARITAL_STATUS,
            CUSTOMERS.ID ID,
            CUSTOMERS.USER_CREATED USER_CREATED,
            CUSTOMERS.GENDER GENDER,
            CUSTOMERS.DATE_CREATED DATE_CREATED,
            CUSTOMERS.INCOME_LEVEL INCOME_LEVEL
        from SCOTT.CUSTOMERS CUSTOMERS
        where (1=1)

    Firstly I create a SELECT_PART temporary interface for the query to be reused, and in the IKM assignment I state that it defines the query, is not a target, and should not be executed. Then in my INSERT_SPECIAL interface, which loads a target with a filter, I set define query to false, target table to true and execute to false. This interface uses the SELECT_PART query-definition interface as a source. Finally, in my last interface, which loads another target, I again set define query to false, set target table to true and execute to true – this is the go-run-it indicator! To coordinate the statement construction you will need to create a package with the select and insert interfaces in sequence. With 11g you can now execute the package in simulation mode and preview the generated code, including the SQL statements. Hopefully this helps shed some light on how you can leverage the Oracle MTI statement. A similar IKM exists for Teradata: the ODI IKM Teradata Multi Statement supports this multi-statement request in 11g. Here is an extract from the paper at www.teradata.com/white-papers/born-to-be-parallel-eb3053/: "Teradata Database offers an SQL extension called a Multi-Statement Request that allows several distinct SQL statements to be bundled together and sent to the optimizer as if they were one. Teradata Database will attempt to execute these SQL statements in parallel. When this feature is used, any sub-expressions that the different SQL statements have in common will be executed once, and the results shared among them." It works the same way as the ODI MTI IKM: multiple interfaces orchestrated in a package, each interface contributes some SQL, and the last interface in the chain executes the multi-statement request.

    Read the article

  • ODI 11g – How to Load Using Partition Exchange

    - by David Allan
    Here we will look at how to load large volumes of data efficiently into the Oracle database using a mixture of CTAS and partition exchange loading. The example we will leverage was posted by Mark Rittman a couple of years back on interval partitioning; you can find that posting here. The best thing about ODI is that you can encapsulate all those ‘how to’ blog posts and scripts into templates that can be reused – the templates are, of course, Knowledge Modules. The interface design to mimic Mark's posting is shown below. The IKM I have constructed performs a simple series of steps: a CTAS to create the stage table to use in the exchange, then a lock of the partition (to ensure it exists – it will be created if it doesn’t), then an exchange of the partition into the target table. You can find the IKM Oracle PEL.xml file here. The IKM performs the following steps and is meant to illustrate what can be done. When you use the IKM in an interface you configure the options for hints (for parallelism levels etc.), initial extent size, next extent size and the partition variable. The KM has an option where the name of the partition can be passed in, so if you know the name of the partition, set the variable to the name; if you have interval partitioning you probably don’t know the name, so you can use the FOR clause. In my example I set the variable to use the date value of the source data: FOR (TO_DATE(''01-FEB-2010'',''dd-MON-yyyy'')). Using a variable lets me invoke the scenario many times, loading different partitions of the same target table. Below you can see where this is defined within ODI; I had to double the single quotes since the string is placed inside the execute immediate tasks in the KM. Note also that this example interface uses the LKM Oracle to Oracle (datapump), so the illustration uses a lot of the high-performing Oracle database capabilities – it uses Data Pump to unload, then a Create Table As Select (CTAS) is executed against an external table defined on top of the Data Pump export. This table is then exchanged into the target. The IKM and illustrations above use ODI 11.1.1.6, which was needed to get around some bugs in earlier releases with how the variable is handled...as far as I remember.
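
    As a rough illustration of what those KM steps boil down to in SQL, here is a minimal Python sketch. It is a sketch only – the connection details, table names and partition date are hypothetical, and the real statements are generated by the IKM from the interface metadata:

        # Hedged sketch of the CTAS + partition exchange pattern described above.
        # Connection details, table names and the partition date are illustrative only.
        import cx_Oracle

        conn = cx_Oracle.connect("scott", "tiger", "localhost/orcl")
        cur = conn.cursor()

        # 1. CTAS: build the stage table from the (already transformed) source data.
        cur.execute("""
            CREATE TABLE STAGE_SALES NOLOGGING PARALLEL
            AS SELECT * FROM SALES_EXTERNAL""")

        # 2. Lock the interval partition so Oracle creates it if it does not exist yet.
        cur.execute("""
            LOCK TABLE SALES PARTITION FOR (TO_DATE('01-FEB-2010','dd-MON-yyyy'))
            IN SHARE MODE""")

        # 3. Swap the stage table into the target partition (a metadata-only operation).
        cur.execute("""
            ALTER TABLE SALES
            EXCHANGE PARTITION FOR (TO_DATE('01-FEB-2010','dd-MON-yyyy'))
            WITH TABLE STAGE_SALES INCLUDING INDEXES WITHOUT VALIDATION""")

        conn.close()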

    Read the article

  • Ant target for compile-time code instrumentation with Spring aspects

    - by alecswan
    I have developed a web application using NetBeans 6.7 and Ant. The webapp works, but I would like to refactor the code to use the @Configurable Spring annotation for cleaner dependency injection. I was able to get load-time weaving (LTW) of Spring aspects to work intermittently (see http://forum.springsource.org/showthread.php?t=86904). At this point I would like to use compile-time weaving with my tool set. Could anybody provide an Ant target that I can use to weave Spring aspects at compile time? Extra credit will be given to anybody who explains how to configure NetBeans to execute the new Ant target right after code compilation. Thanks.

    Read the article

  • Oracle Magazine - OWB 11gR2 and Heterogeneous Databases

    - by David Allan
    There's a nice article titled 'Oracle Warehouse Builder 11g Release 2 and Heterogeneous Databases' by Oracle ACE Director and Rittman Mead Consulting co-founder Mark Rittman in the May/June 2010 issue of Oracle Magazine, covering the heterogeneous database support in OWB 11gR2: http://www.oracle.com/technology/oramag/oracle/10-may/o30bi.html Big thanks to Mark for this write-up. There is an Oracle white paper on the support here, and for examples of this extensibility you can go to the OWB blog archive, where there are quite a few posts. Out of the archive I would particularly recommend the architecture overview, bulk file loading, MySQL open connectivity and MySQL bulk extract posts, amongst others.

    Read the article

  • Interesting sessions/tips from RMOUG

    - by jean-pierre.dijcks
    One of the sessions I attended at last week's RMOUG was on Temp Tablespace Groups. I had a look because I had no experience with this and it seemed to help with parallel processing and the allocation/usage of temp. You can read the excellent write-up at Kellyn Pedersen's blog - who did the session and all the work - here. So for all of those who may be seeing lots of waits like enq: TS - Contention when doing hash joins and sorts, do have a look at the above blog post. I also had the chance to listen in at Stewart Bryson's session on Restartability (he had 3 R's), where he gave very useful tips about how to deal with your data warehouse loads. Questions like archive log mode - should I or should I not? - were well covered. Flashback archives were also nice to hear about. Very nice talk, very interesting. Unfortunately he hasn't blogged about it yet, so no pointers to that one. I got to see a couple of other interesting sessions, and as conferences go, got to meet some interesting Oracle folks from the region. As usual RMOUG was useful and fun. Off to the drawing board to design next year's session!

    Read the article

  • OWB 11gR2 &ndash; JDBC Helper Utility

    - by David Allan
    One of the common queries when importing tables via JDBC with 11gR2 is determining why the import wizard doesn't display the tables that you think it should. I often just use the script below to dump out the schemas, tables and columns that the JDBC driver is returning. This is useful in a few areas: to figure out what schema name is returned, so you can double-check it against the schema name you have used in the location (this is what is passed to the DatabaseMetaData.getTables API call within the basic JDBC metadata import); to figure out the data types returned from the JDBC driver when you see columns skipped because of 'no data type supported' messages; and also… I can do it via scripting and don't need to recompile classes and stuff :-) Edit the tcl script and set the JDBC driver, the connection URL and the username and password (they are at the bottom of the script); the script then calls a basic tcl procedure which writes the schemas, tables and columns with various properties to standard out. For example, I executed it using the XML JDBC driver from ODI over a simple customers XML file and it writes the following metadata. You can add more details as you need and execute it from the OMBPlus panel within OWB. Download the sample tcl jdbc script here There is a bunch of really useful stuff on OTN documenting this area (start with the white paper here) that is worth checking out, all related to the OWB SDK and covering everything from platform definitions, custom metadata importers, application adapters, code templates etc. You can find a bunch of goodies on the OWB SDK here.

    Read the article

  • Presenting at Roanoke Code Camp Saturday!

    - by andyleonard
    Introduction I am honored to once again be selected to present at Roanoke Code Camp! An Introductory Topic One of my presentations is titled "I See a Control Flow Tab. Now What?" It's a Level 100 talk for those wishing to learn how to build their very first SSIS package. This highly-interactive, demo-intense presentation is for beginners and developers just getting started with SSIS. Attend and learn how to build SSIS packages from the ground up. Designing an SSIS Framework I'm also presenting...(read more)

    Read the article

  • T-SQL Tuesday: Aggregations in SSIS

    - by andyleonard
    Introduction Jes Borland ( Blog | @grrl_geek ) is hosting this month's T-SQL Tuesday - started by SQLBlog's own Adam Machanic ( Blog | @AdamMachanic ) - and it is about aggregation. I thought I'd show a couple ways to do aggregation using SSIS. The Aggregate Transformation in SSIS The Aggregate transform in SSIS is fast. I built an SSIS package (AggregateScripts.dtsx) with two Data Flow Tasks (Using the Aggregate Transform and Using a Script Component). Using the Aggregate Transform looks like this:...(read more)

    Read the article

  • External table and preprocessor for loading LOBs

    - by David Allan
    I was using the COLUMN TRANSFORMS syntax to load LOBs into Oracle via an Oracle external table, which is a handy way of doing several things - from loading LOBs from the filesystem to having constants as fields. In OWB you can use unbound external tables to define an external table using your own arbitrary access parameters - I blogged a while back on using this for preprocessing before it was added into OWB 11gR2. For loading LOBs using the COLUMN TRANSFORMS syntax, have a read through this post on loading CLOB, BLOB or any LOB: the files to load can be specified via a field that holds the filename, and the content of each file becomes the LOB data. So, using the example from the linked post, you can define the columns; then define the access parameters - if you go the unbound external table route you can put whatever you want in here (your external table get-out-of-jail-free card). This will let you read the LOB files from the filesystem and use the external table in a mapping. Pushing the envelope a little further, I then thought about marrying the preprocessor together with the COLUMN TRANSFORMS - this would have let me use a shell script, for example, as the preprocessor to list the contents of a directory and then read those files as LOBs via an external table. Unfortunately that doesn't quite work - there is now a bug/enhancement logged, so one day maybe. So I'm afraid my blog title was a little bit of a teaser....
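
    For orientation, here is a minimal sketch of the kind of external table the post describes, issued from Python. This is an assumption-laden illustration - the directory, file and column names are made up, and the exact access parameters should be checked against the linked post and the Oracle documentation:

        # Hedged sketch: an ORACLE_LOADER external table whose second field is a
        # filename, loaded into a CLOB via COLUMN TRANSFORMS. All names are illustrative.
        import cx_Oracle

        conn = cx_Oracle.connect("scott", "tiger", "localhost/orcl")
        cur = conn.cursor()
        cur.execute("""
            CREATE TABLE DOCS_EXT (
              ID   NUMBER,
              DOC  CLOB
            )
            ORGANIZATION EXTERNAL (
              TYPE ORACLE_LOADER
              DEFAULT DIRECTORY DATA_DIR
              ACCESS PARAMETERS (
                RECORDS DELIMITED BY NEWLINE
                FIELDS TERMINATED BY ','
                ( ID CHAR(10), DOC_FILENAME CHAR(100) )
                COLUMN TRANSFORMS ( DOC FROM LOBFILE (DOC_FILENAME) FROM (DATA_DIR) CLOB )
              )
              LOCATION ('doc_index.txt')
            )
            REJECT LIMIT UNLIMITED""")
        conn.close()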

    Read the article

  • OWB 11gR2 &ndash; OMB and File Editing

    - by David Allan
    Here we will see how we can use the IDE for editing OMB scripts. The 11gR2 release is based on the common Oracle platform IDE also used by JDeveloper, and it comes with a bunch of standard behavior for editing and rendering code. One of the lesser known things is that if you drop a text file into OWB you can edit it. So you can drop your tcl scripts right into OWB and edit them in place, and you don't need another IDE like Eclipse just for this task. Cool, so you have the file here. There may be no line numbers; you can toggle line numbers on by right-clicking in the gutter. If we edit the file within the OWB IDE, the save is a little different from normal. OWB doesn't normally manipulate files, so things like Ctrl-S save the OWB objects, but if you edit a file, closing the file will ask if you want to save it - check it out. Now we enter the realm of 'he who dares'…. Note the IDE doesn't know about tcl files out of the box, so you see above there is no syntax highlighting. The code is identified by the extension… .java is Java, .html is HTML etc. With OWB, the OMB scripts are tcl, and we usually have a .tcl extension on these files. One trick to get syntax highlighting is to simply rename the file to have a .java suffix - then all of a sudden we get syntax highlighting; see the illustration here where, side by side, we see the file with a .java extension and a .tcl extension. Pretending to be .java is not ideal, but it gets us something more useful than Notepad. We can then change the syntax highlighting from the Tools Preferences option so that we get Eclipse-like highlighting within the IDE. You then get the Eclipse-like rendering, albeit using a little tweak on the file names… This might be useful if you are doing any kind of heavy-duty OMB script development and just want a single IDE. The OMBPlus panel is then at hand for executing and testing it out.

    Read the article

  • OWB - 11.2.0.4 Windows standalone client released

    - by David Allan
    The 11.2.0.4 release of OWB containing the 32-bit and 64-bit standalone Windows clients is released today; I had previously blogged about the Linux standalone client here. Big thanks to Anil for spearheading that, another milestone on the Data Integration roadmap. Below are the patch numbers:

        17743124 - OWB 11.2.0.4 STANDALONE CLIENT FOR Windows 64 BIT
        17743119 - OWB 11.2.0.4 STANDALONE CLIENT FOR Windows 32 BIT

    This is the terminal release of OWB and customer bugs will be resolved on top of this release. We are excited to share information on the Oracle Data Integration 12c release in our upcoming launch video webcast on November 12th.

    Read the article

  • Have You Downloaded SQL Server 2012 Evaluation Edition? Why Not?!

    - by andyleonard
    I am installing SQL Server 2012 Evaluation Edition on a virtual machine as I type. You can do this. Here's one way: Grab some virtual machine software. I like Oracle VirtualBox. It's cool. It's free. Install VirtualBox. Download the 180-day free trial of Windows Server 2008 R2. Also cool. Also free. Once Windows Server 2008 R2 is downloaded, build a VirtualBox VM. Download and install SQL Server 2012 Evaluation Edition! That's all there is to it. You can get started today, no need to wait until...(read more)

    Read the article

  • Data Warehouse ETL slow - change primary key in dimension?

    - by Jubbles
    I have a working MySQL data warehouse that is organized as a star schema, and I am using Talend Open Studio for Data Integration 5.1 to create the ETL process. I would like this process to run once per day. I have estimated that one of the dimension tables (dimUser) will have approximately 2 million records and 23 columns. I created a small test ETL process in Talend that worked, but given the amount of data that may need to be updated daily, the current performance will not cut it. It takes the ETL process four minutes to UPDATE or INSERT 100 records into dimUser. If I assume a linear relationship between the number of records and the time to UPDATE or INSERT, then there is no way the ETL can finish in 3-4 hours (my hope), let alone one day. Since I'm unfamiliar with Java, I wrote the ETL as a Python script and ran into the same problem. I did discover, though, that if I did only INSERTs the process went much faster, so I am pretty sure the bottleneck is caused by the UPDATE statements. The primary key in dimUser is an auto-increment integer. My friend suggested that I scrap this primary key and replace it with a multi-field primary key (in my case, 2-3 fields). Before I rip the test data out of my warehouse and change the schema, can anyone provide suggestions or guidelines related to:

    1. the design of the data warehouse
    2. the ETL process
    3. how realistic it is to have an ETL process INSERT or UPDATE a few million records each day
    4. whether my friend's suggestion will significantly help

    If you need any further information, just let me know and I'll post it.

    UPDATE - additional information:

        mysql> describe dimUser;
        Field        Type                  Null   Key   Default               Extra
        user_key     int(10) unsigned      NO     PRI   NULL                  auto_increment
        id_A         int(10) unsigned      NO           NULL
        id_B         int(10) unsigned      NO           NULL
        field_4      tinyint(4) unsigned   NO           0
        field_5      varchar(50)           YES          NULL
        city         varchar(50)           YES          NULL
        state        varchar(2)            YES          NULL
        country      varchar(50)           YES          NULL
        zip_code     varchar(10)           NO           99999
        field_10     tinyint(1)            NO           0
        field_11     tinyint(1)            NO           0
        field_12     tinyint(1)            NO           0
        field_13     tinyint(1)            NO           1
        field_14     tinyint(1)            NO           0
        field_15     tinyint(1)            NO           0
        field_16     tinyint(1)            NO           0
        field_17     tinyint(1)            NO           1
        field_18     tinyint(1)            NO           0
        field_19     tinyint(1)            NO           0
        field_20     tinyint(1)            NO           0
        create_date  datetime              NO           2012-01-01 00:00:00
        last_update  datetime              NO           2012-01-01 00:00:00
        run_id       int(10) unsigned      NO           999

    I used a surrogate key because I had read that it was good practice. From a business perspective, I want to stay aware of potential fraudulent activity (say, for 200 days a user is associated with state X and then the next day they are associated with state Y - they could have moved, or their account could have been compromised), which is why the geographic data is kept. The field id_B may have a few distinct values of id_A associated with it, but I am interested in knowing distinct (id_A, id_B) tuples. In the context of this information, my friend suggested that something like (id_A, id_B, zip_code) be the primary key. For the large majority of daily ETL runs (80%), I only expect the following fields to be updated for existing records: field_10 - field_14, last_update, and run_id (this field is a foreign key to my etlLog table and is used for ETL auditing purposes).
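
    One way to read the "UPDATE is the bottleneck" observation - sketched below in Python purely as an illustration, with a hypothetical stage_user table and the mysql-connector driver assumed - is that per-row UPDATEs pay a round trip and an index lookup per record, whereas loading the day's delta into a staging table and applying one set-based UPDATE/INSERT pair does not:

        # Hedged sketch: bulk-load a daily delta into a staging table, then update the
        # dimension with set-based statements. The stage_user table and sample rows
        # are illustrative; dimUser columns are taken from the question above.
        import mysql.connector

        conn = mysql.connector.connect(user="etl", password="secret", database="dwh")
        cur = conn.cursor()

        # 1. Load today's extract into the staging table (executemany or LOAD DATA INFILE).
        rows = [(1001, 2001, "Denver", "CO", 1), (1002, 2002, "Austin", "TX", 0)]
        cur.executemany(
            "INSERT INTO stage_user (id_A, id_B, city, state, field_10) VALUES (%s, %s, %s, %s, %s)",
            rows,
        )

        # 2. One set-based UPDATE for existing members, joined on the natural key.
        cur.execute("""
            UPDATE dimUser d
            JOIN stage_user s ON d.id_A = s.id_A AND d.id_B = s.id_B
            SET d.field_10 = s.field_10,
                d.last_update = NOW()
        """)

        # 3. Insert brand-new members that are not in the dimension yet.
        cur.execute("""
            INSERT INTO dimUser (id_A, id_B, city, state, field_10)
            SELECT s.id_A, s.id_B, s.city, s.state, s.field_10
            FROM stage_user s
            LEFT JOIN dimUser d ON d.id_A = s.id_A AND d.id_B = s.id_B
            WHERE d.user_key IS NULL
        """)

        conn.commit()
        conn.close()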

    Read the article

  • Data validate tools (ETL tools) for SQL server

    - by Stan
    I have some data in Excel that I need to import into a database. Is there any tool that can validate and maybe clean the data? Does Red Gate have such a tool? The input will be Excel, plus the table constraints, e.g. CHECK, UNIQUE KEY, datetime format, NOT NULL. The desired output should at least show which lines have problems, and then fix some trivial errors automatically, like filling in a default value for NULL columns or correcting the datetime format. I know I could build such a script in Python, but I just wonder what the popular way to do this is. Thanks.
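
    For reference, here is a minimal sketch of the script route the asker mentions, in Python with pandas; the file name, column names and rules are all hypothetical:

        # Hedged sketch: validate an Excel extract against a few constraint-like rules
        # before loading it. File path, column names and defaults are illustrative.
        import pandas as pd

        df = pd.read_excel("input.xlsx")
        problems = []

        # NOT NULL check on a required column.
        for idx in df.index[df["customer_id"].isna()]:
            problems.append((idx, "customer_id is NULL"))

        # Datetime format check: unparseable values become NaT.
        parsed = pd.to_datetime(df["order_date"], errors="coerce")
        for idx in df.index[parsed.isna() & df["order_date"].notna()]:
            problems.append((idx, "order_date has an invalid datetime format"))

        # UNIQUE KEY check.
        for idx in df.index[df.duplicated(subset=["customer_id", "order_date"], keep=False)]:
            problems.append((idx, "duplicate (customer_id, order_date)"))

        # Trivial automatic fixes: fill defaults, normalise the datetime column.
        df["status"] = df["status"].fillna("UNKNOWN")
        df["order_date"] = parsed

        for row, message in problems:
            print(f"row {row + 2}: {message}")  # +2 roughly maps to the Excel row number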

    Read the article

  • Adding code to the beginning / end of methods in runtime dynamically

    - by Irchi
    I know instrumentation is a technique for adding trace code dynamically to methods to enable tracing and debugging. I was wondering if this is only a "Trace" option, hard-coded into the CLR to add trace code only, or whether there is the ability to add any code to the methods. For example, I want to check for a condition at the beginning of every single method call in a certain class (say, for permissions). Can I do this by adding dynamic code to the beginning of the methods at execution time? I'm not sure how this trace "instrumentation" thing works, but I'm wondering if it can be used for other goals too, or not.

    Read the article

  • How to figure out which record has been deleted in an effiecient way?

    - by janetsmith
    Hi, I am working on an in-house ETL solution, from db1 (Oracle) to db2 (Sybase). We need to transfer data incrementally (Change Data Capture?) into db2. I have only read access to the tables, so I can't create any table or trigger in the Oracle db1. The challenge I am facing is: how do I detect record deletion in Oracle? The solution I can think of is to use an additional standalone/embedded db (e.g. Derby, H2 etc.). This db contains two tables, namely old_data and new_data. old_data contains the primary key field from the table of interest in Oracle. Every time the ETL process runs, the new_data table is populated with the primary key field from the Oracle table. After that, I run the following SQL command to get the deleted rows:

        SELECT old_data.id
        FROM old_data
        WHERE old_data.id NOT IN (SELECT new_data.id FROM new_data)

    I think this will be a very expensive operation when the volume of data becomes very large. Do you have any better idea of doing this? Thanks.
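
    For comparison, the same key-snapshot diff can be done in application code once the two key sets have been read out; a tiny, purely illustrative Python sketch (the literal id sets stand in for keys read from old_data and new_data):

        # Hedged sketch: detect deletions by diffing two primary-key snapshots.
        # The literal sets below stand in for keys read from old_data and new_data.
        old_ids = {1, 2, 3, 4, 5}          # snapshot from the previous run
        new_ids = {1, 2, 4, 5}             # snapshot from the current run

        deleted_ids = old_ids - new_ids    # present before, gone now
        inserted_ids = new_ids - old_ids   # new since the last run

        print(sorted(deleted_ids))         # -> [3]
        print(sorted(inserted_ids))        # -> []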

    Read the article

  • pgadmin III doesn't work due to "The server lacks instrumentation functions."

    - by Chaz SLiger
    When pgAdmin III is used to open a PostgreSQL database the following message appears. There does not seem to be any obvious package listed in the Ubuntu Software Center for this. The server lacks instrumentation functions. pgadmin III uses some support functions that are not available by default in all PostgreSQL versions. These enable some tasks that make life easier when dealing with log files and configuration files. The adminpack is installed and activated by default if you are running the one-click installer of PostgreSQL. On Unix, you may have to install the contrib package, either with your package installer tool or by compilation.

    Read the article

  • Instrumenting Database Access

    - by Whisk
    Jeff mentioned in one of the podcasts that one of the things he always does is put in instrumentation for database calls, so that he can tell what queries are causing slowness etc. This is something I've measured in the past using SQL Profiler, but I'm interested in what strategies other people have used to include this as part of the application. Is it simply a case of including a timer across each database call and logging the result, or is there a 'neater' way of doing it? Maybe there's a framework that does this for you already, or a flag I could enable in e.g. Linq-to-SQL that would provide similar functionality. I mainly use C#, but would also be interested in seeing methods from different languages, and I'd be more interested in a 'code' way of doing this than a db platform method like SQL Profiler.
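
    Since methods from other languages are explicitly welcome, here is a minimal Python sketch of the "timer across each database call" approach; run_query and the 100 ms threshold are hypothetical stand-ins, not part of any particular framework:

        # Hedged sketch: wrap every database call in a timer and log slow queries.
        # The run_query function and the 100 ms threshold are illustrative only.
        import functools
        import logging
        import time

        logging.basicConfig(level=logging.INFO)
        log = logging.getLogger("db.instrumentation")

        def timed_query(func):
            @functools.wraps(func)
            def wrapper(sql, *args, **kwargs):
                start = time.perf_counter()
                try:
                    return func(sql, *args, **kwargs)
                finally:
                    elapsed_ms = (time.perf_counter() - start) * 1000
                    level = logging.WARNING if elapsed_ms > 100 else logging.DEBUG
                    log.log(level, "%.1f ms  %s", elapsed_ms, sql)
            return wrapper

        @timed_query
        def run_query(sql, params=()):
            # Placeholder for the real data-access call (ADO.NET, Linq-to-SQL, etc. in C#).
            time.sleep(0.05)
            return []

        run_query("SELECT * FROM Orders WHERE CustomerId = ?", (42,))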

    Read the article

  • How to discover table properties from SQLAlchemy mapped object

    - by ssaboum
    Hi, the point is that I have a class mapped to a table, in my case in a declarative way, and I want to "discover" table properties - columns, names, relations - from this class:

        engine = create_engine('sqlite:///' + databasePath, echo=True)

        # setting up root class for declarative declaration
        Base = declarative_base(bind=engine)

        class Ship(Base):
            __tablename__ = 'ships'
            id = Column(Integer, primary_key=True)
            name = Column(String(255))

            def __init__(self, name):
                self.name = name

            def __repr__(self):
                return "<Ship('%s')>" % (self.name)

    So now my goal is, from the Ship class, to get the table columns and their properties from another piece of code. I guess I could deal with it using instrumentation, but is there any way provided by the SQLAlchemy API? Thank you.
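
    For illustration, a short sketch of one way this is usually done (assuming a reasonably recent SQLAlchemy and the Ship class declared above - not necessarily the only API for it):

        # Hedged sketch: read column and relationship metadata back from the mapped class.
        from sqlalchemy.orm import class_mapper

        # The declarative class carries its Table object on __table__.
        for column in Ship.__table__.columns:
            print(column.name, column.type, column.nullable, column.primary_key)

        # The mapper exposes the same columns plus relationship properties.
        mapper = class_mapper(Ship)
        print([c.key for c in mapper.columns])
        print([rel.key for rel in mapper.relationships])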

    Read the article

  • Java NoClassDefFoundError when calling own class from instrumented method

    - by lethal_possum
    Hello, I am working on a kit of simple Java agents to help me (and hopefully others) troubleshoot Java applications. One of the agents I would like to create instruments the JComponent.getToolTipText() method so that you can quickly identify any GUI class just by hovering the mouse cursor over it. You can find the code of my transformer and the rest of the project here: http://sfn.cvs.sourceforge.net/viewvc/sfn/core/src/main/java/org/leplus/sfn/transformer/JComponentTransformer.java?view=markup I launch my test GUI with the agent attached as follows:

        $ java -javaagent:target/jars/sfn-0.1-agent.jar=JComponent \
            -cp lib/jars/bcel-5.2.jar:target/jars/sfn-0.1-test.jar:target/jars/sfn-0.1-agent.jar \
            org.leplus.sfn.test.Main

    sfn-0.1-agent.jar contains the org.leplus.sfn.transformer.JComponentTransformer class. sfn-0.1-test.jar contains the org.leplus.sfn.test.Main class. Here is what the application prints when I launch it and put the mouse over it:

        Loading agent: JComponent
        Instrumentation ready!
        Exception in thread "AWT-EventQueue-0" java.lang.NoClassDefFoundError: org/leplus/sfn/tracer/ComponentTracer
            at javax.swing.JComponent.getToolTipText(JComponent.java)
            at javax.swing.ToolTipManager$insideTimerAction.actionPerformed(ToolTipManager.java:662)
            ...

    What is surprising to me is that if I change my transformer to call any class from the JRE, it works, but it doesn't work when I call my own class org.leplus.sfn.tracer.ComponentTracer. My first guess was a classpath issue, but ComponentTracer is both on the classpath and in the agent's jar, so I am lost. Please let me know if any of you see where I am missing something. Cheers, Tom

    Read the article

  • Visual Studio 2010 Professional - Problem Unit-Testing Web Services

    - by Ben
    Have created a very simple Web Service (asmx) in Visual Studio 2010 Professional, and am trying to use the auto-generated unit test cases. I get something that seems quite familiar on this site:

        The web site could not be configured correctly; getting ASP.NET process information failed.
        Requesting http://localhost:81/zfp/VSEnterpriseHelper.axd returned an error:
        The remote server returned an error: (500) Internal Server Error.

    http://stackoverflow.com/questions/260432/500-error-running-visual-studio-asp-net-unit-test I have tried:

    1. Running the tests on IIS rather than the ASP.NET Development Server
    2. Adding and then removing the XML fragment to my Web Service's .config file
    3. Giving the MACHINE\ASPNET account Full Control on the local folder

    My current questions:

    1. Why am I being bothered with this instrumentation / code coverage DLL, when this doesn't seem to be something that ships with Visual Studio 2010 Professional? Is there any way I can turn it off?
    2. I'm placing the node under in Web.config - is that the correct node?
    3. Is it possible to bind to a web service without using the webby test attributes? I've seen other people advise making the Web Service as lightweight as possible. I'm trying to call it with jQuery / AJAX / JSON, so being able to debug the actual web service would be really helpful.

    Best wishes, Ben

    Read the article

  • extract transform load

    - by mitch
    Wikipedia defines a 'typical' ETL cycle as:

    1. Cycle initiation
    2. Build reference data
    3. Extract (from sources)
    4. Validate
    5. Transform (clean, apply business rules, check for data integrity, create aggregates or disaggregates)
    6. Stage (load into staging tables, if used)
    7. Audit reports (for example, on compliance with business rules; also, in case of failure, helps to diagnose/repair)
    8. Publish (to target tables)
    9. Archive
    10. Clean up

    What is meant by 'Build reference data'?
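
    As a purely hypothetical illustration of what that step produces: "reference data" usually means the lookup sets of permissible values, built or refreshed at the start of the cycle, that later Validate/Transform steps check against. A small sketch (codes and rules invented for the example):

        # Hedged sketch: reference data as lookup sets used by a transform step.
        # The code tables and the sample row are made up for illustration.
        COUNTRY_CODES = {"US": "United States", "DE": "Germany", "JP": "Japan"}
        CURRENCY_CODES = {"USD", "EUR", "JPY"}

        def transform(row):
            # Validate against reference data; reject or default anything unknown.
            if row["country"] not in COUNTRY_CODES:
                raise ValueError(f"unknown country code: {row['country']}")
            row["country_name"] = COUNTRY_CODES[row["country"]]
            if row["currency"] not in CURRENCY_CODES:
                row["currency"] = "USD"  # apply a business-rule default
            return row

        print(transform({"country": "DE", "currency": "EUR", "amount": 100}))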

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >