Search Results

Search found 1751 results on 71 pages for 'builds'.


  • Automating deployments with the SQL Compare command line

    - by Jonathan Hickford
    In my previous article, “Five Tips to Get Your Organisation Releasing Software Frequently” I looked at how teams can automate processes to speed up release frequency. In this post, I’m looking specifically at automating deployments using the SQL Compare command line. SQL Compare compares SQL Server schemas and deploys the differences. It works very effectively in scenarios where only one deployment target is required – source and target databases are specified, compared, and a change script is automatically generated and applied. But if multiple targets exist, and pressure to increase the frequency of releases builds, this solution quickly becomes unwieldy.   This is where SQL Compare’s command line comes into its own. I’ve put together a PowerShell script that loops through the Servers table and pulls out the server and database, these are then passed to sqlcompare.exe to be used as target parameters. In the example the source database is a scripts folder, a folder structure of scripted-out database objects used by both SQL Source Control and SQL Compare. The script can easily be adapted to use schema snapshots.     -- Create a DeploymentTargets database and a Servers table CREATE DATABASE DeploymentTargets GO USE DeploymentTargets GO CREATE TABLE [dbo].[Servers]( [id] [int] IDENTITY(1,1) NOT NULL, [serverName] [nvarchar](50) NULL, [environment] [nvarchar](50) NULL, [databaseName] [nvarchar](50) NULL, CONSTRAINT [PK_Servers] PRIMARY KEY CLUSTERED ([id] ASC) ) GO -- Now insert your target server and database details INSERT INTO dbo.Servers ( serverName , environment , databaseName) VALUES ( N'myserverinstance' , N'myenvironment1' , N'mydb1') INSERT INTO dbo.Servers ( serverName , environment , databaseName) VALUES ( N'myserverinstance' , N'myenvironment2' , N'mydb2') Here’s the PowerShell script you can adapt for yourself as well. # We're holding the server names and database names that we want to deploy to in a database table. # We need to connect to that server to read these details $serverName = "" $databaseName = "DeploymentTargets" $authentication = "Integrated Security=SSPI" #$authentication = "User Id=xxx;PWD=xxx" # If you are using database authentication instead of Windows authentication. # Path to the scripts folder we want to deploy to the databases $scriptsPath = "SimpleTalk" # Path to SQLCompare.exe $SQLComparePath = "C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe" # Create SQL connection string, and connection $ServerConnectionString = "Data Source=$serverName;Initial Catalog=$databaseName;$authentication" $ServerConnection = new-object system.data.SqlClient.SqlConnection($ServerConnectionString); # Create a Dataset to hold the DataTable $dataSet = new-object "System.Data.DataSet" "ServerList" # Create a query $query = "SET NOCOUNT ON;" $query += "SELECT serverName, environment, databaseName " $query += "FROM dbo.Servers; " # Create a DataAdapter to populate the DataSet with the results $dataAdapter = new-object "System.Data.SqlClient.SqlDataAdapter" ($query, $ServerConnection) $dataAdapter.Fill($dataSet) | Out-Null # Close the connection $ServerConnection.Close() # Populate the DataTable $dataTable = new-object "System.Data.DataTable" "Servers" $dataTable = $dataSet.Tables[0] #For every row in the DataTable $dataTable | FOREACH-OBJECT { "Server Name: $($_.serverName)" "Database Name: $($_.databaseName)" "Environment: $($_.environment)" # Compare the scripts folder to the database and synchronize the database to match # NB. 
Have set SQL Compare to abort on medium level warnings. $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/AbortOnWarnings:Medium") # + @("/sync" ) # Commented out the 'sync' parameter for safety, write-host $arguments & $SQLComparePath $arguments "Exit Code: $LASTEXITCODE" # Some interesting variations # Check that every database matches a folder. # For example this might be a pre-deployment step to validate everything is at the same baseline state. # Or a post deployment script to validate the deployment worked. # An exit code of 0 means the databases are identical. # # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical") # Generate a report of the difference between the folder and each database. Generate a SQL update script for each database. # For example use this after the above to generate upgrade scripts for each database # Examine the warnings and the HTML diff report to understand how the script will change objects # #$arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html" , "/reportType:Interactive", "/showWarnings", "/include:Identical") } It’s worth noting that the above example generates the deployment scripts dynamically. This approach should be problem-free for the vast majority of changes, but it is still good practice to review and test a pre-generated deployment script prior to deployment. An alternative approach would be to pre-generate a single deployment script using SQL Compare, and run this en masse to multiple targets programmatically using sqlcmd, or using a tool like SQL Multi Script.  You can use the /ScriptFile, /report, and /showWarnings flags to generate change scripts, difference reports and any warnings.  See the commented out example in the PowerShell: #$arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html" , "/reportType:Interactive", "/showWarnings", "/include:Identical") There is a drawback of running a pre-generated deployment script; it assumes that a given database target hasn’t drifted from its expected state. Often there are (rightly or wrongly) many individuals within an organization who have permissions to alter the production database, and changes can therefore be made outside of the prescribed development processes. The consequence is that at deployment time, the applied script has been validated against a target that no longer represents reality. The solution here would be to add a check for drift prior to running the deployment script. This is achieved by using sqlcompare.exe to compare the target against the expected schema snapshot using the /Assertidentical flag. Should this return any differences (sqlcompare.exe Exit Code 79), a drift report is outputted instead of executing the deployment script.  See the commented out example. # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical") Any checks and processes that should be undertaken prior to a manual deployment, should also be happen during an automated deployment. 
You might think about triggering backups prior to deployment – even better, automate the verification of the backup too.   You can use SQL Compare’s command line interface along with PowerShell to automate multiple actions and checks that you need in your deployment process. Automation is a practical solution where multiple targets and a higher release cadence come into play. As we know, with great power comes great responsibility – responsibility to ensure that the necessary checks are made so deployments remain trouble-free.  (The code sample supplied in this post automates the simple dynamic deployment case – if you are considering more advanced automation, e.g. the drift checks, script generation, deploying to large numbers of targets and backup/verification, please email me at [email protected] for further script samples or if you have further questions)
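    To make the drift-check step concrete: the article drives sqlcompare.exe from PowerShell, but the same pre-deployment gate can be sketched in a few lines of C# for teams that call the command line from .NET instead. This is only an illustrative sketch, not code from the post; it reuses the path, scripts folder, server and database names, the /Assertidentical flag and the exit codes (0 = identical, 79 = differences) quoted above, all of which you would adapt to your own Servers table.

    ```csharp
    using System;
    using System.Diagnostics;

    class DriftCheck
    {
        static void Main()
        {
            // Same switches as the commented-out /Assertidentical example above:
            // compare the scripts folder against one target before deploying.
            var psi = new ProcessStartInfo
            {
                FileName  = @"C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe",
                Arguments = "/scripts1:SimpleTalk /server2:myserverinstance /database2:mydb1 /Assertidentical",
                UseShellExecute = false
            };

            using (var sqlCompare = Process.Start(psi))
            {
                sqlCompare.WaitForExit();

                if (sqlCompare.ExitCode == 0)
                    Console.WriteLine("No drift detected - safe to run the deployment script.");
                else if (sqlCompare.ExitCode == 79)
                    Console.WriteLine("Schema drift detected - produce a drift report instead of deploying.");
                else
                    Console.WriteLine($"sqlcompare.exe returned {sqlCompare.ExitCode} - investigate before deploying.");
            }
        }
    }
    ```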

    Read the article

  • Developing Schema Compare for Oracle (Part 1)

    - by Simon Cooper
    SQL Compare is one of Red Gate's most successful SQL Server tools; it allows developers and DBAs to compare and synchronize the contents of their databases. Although similar tools exist for Oracle, they are quite noticeably lacking in the usability and stability that SQL Compare is known for in the SQL Server world. We could see a real need for a usable schema comparison tools for Oracle, and so the Schema Compare for Oracle project was born. Over the next few weeks, as we come up to release of v1, I'll be doing a series of posts on the development of Schema Compare for Oracle. For the first post, I thought I would start with the main pitfalls that we stumbled across when developing the product, especially from a SQL Server background. 1. Schemas and Databases The most obvious difference is that the concept of a 'database' is quite different between Oracle and SQL Server. On SQL Server, one server instance has multiple databases, each with separate schemas. There is typically little communication between separate databases, and most databases are no more than about 1000-2000 objects. This means SQL Compare can register an entire database in a reasonable amount of time, and cross-database dependencies probably won't be an issue. It is a quite different scene under Oracle, however. The terms 'database' and 'instance' are used interchangeably, (although technically 'database' refers to the datafiles on disk, and 'instance' the running Oracle process that reads & writes to the database), and a database is a single conceptual entity. This immediately presents problems, as it is infeasible to register an entire database as we do in SQL Compare; in my Oracle install, using the standard recommended options, there are 63975 system objects. If we tried to register all those, not only would it take hours, but the client would probably run out of memory before we finished. As a result, we had to allow people to specify what schemas they wanted to register. This decision had quite a few knock-on effects for the design, which I will cover in a future post. 2. Connecting to Oracle The next obvious difference is in actually connecting to Oracle – in SQL Server, you can specify a server and database, and off you go. On Oracle things are slightly more complicated. SIDs, Service Names, and TNS A database (the files on disk) must have a unique identifier for the databases on the system, called the SID. It also has a global database name, which consists of a name (which doesn't have to match the SID) and a domain. Alternatively, you can identify a database using a service name, which normally has a 1-to-1 relationship with instances, but may not if, for example, using RAC (Real Application Clusters) for redundancy and failover. You specify the computer and instance you want to connect to using TNS (Transparent Network Substrate). The user-visible parts are a config file (tnsnames.ora) on the client machine that specifies how to connect to an instance. For example, the entry for one of my test instances is: SC_11GDB1 = (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = simonctest)(PORT = 1521)) ) (CONNECT_DATA = (SID = 11gR1db1) ) ) This gives the hostname, port, and SID of the instance I want to connect to, and associates it with a name (SC_11GDB1). The tnsnames syntax also allows you to specify failover, multiple descriptions and address lists, and client load balancing. You can then specify this TNS identifier as the data source in a connection string. 
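    As a rough illustration of that last point (this is my sketch, not code from the article): the snippet below pulls alias names such as SC_11GDB1 out of tnsnames.ora and uses one as the Data Source of an ADO.NET-style connection string. The file path, credentials and regex are assumptions, and real tnsnames files can nest entries and pull in other files via IFILE, so treat the parsing as a first approximation only.

    ```csharp
    using System;
    using System.IO;
    using System.Text.RegularExpressions;

    class TnsAliases
    {
        static void Main()
        {
            // Hypothetical location; real installs resolve this via TNS_ADMIN or ORACLE_HOME.
            string tnsFile = @"C:\oracle\product\11.2.0\client_1\network\admin\tnsnames.ora";
            string text = File.ReadAllText(tnsFile);

            // Very rough: treat a name at the start of a line followed by '=' and '(' as an alias.
            foreach (Match m in Regex.Matches(text, @"^\s*([A-Za-z0-9_.]+)\s*=\s*\(",
                                              RegexOptions.Multiline))
            {
                string alias = m.Groups[1].Value;                          // e.g. SC_11GDB1
                string connectionString =
                    $"Data Source={alias};User Id=scott;Password=tiger;";  // placeholder credentials
                Console.WriteLine(connectionString);
            }
        }
    }
    ```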
Although using ODP.NET (the .NET dlls provided by Oracle) was fine for internal prototype builds, once we released the EAP we discovered that this simply wasn't an acceptable solution for installs on other people's machines. Due to .NET assembly strong naming, users had to have the exact same version of the ODP.NET dlls installed on their machines as we had on our build server. We couldn't ship the ODP.NET dlls with our installer as the Oracle license agreement prohibited this, and we didn't want to force users to install another Oracle client just so they could run our program. To be able to list the TNS entries in the connection dialog, we also had to locate and parse the tnsnames.ora file, which was complicated by users with several Oracle client installs and intricate TNS entries. After much swearing at our computers, we eventually decided to use a third-party Oracle connection library from Devart that we could ship with our program; this could use whatever client version was installed, parse the TNS entries for us, and also had the nice feature of being able to connect to an Oracle server without having any client installed at all. Unfortunately, their current license agreement prevents us from shipping an Oracle SDK, but that's a bridge we'll cross when we get to it. 3. Running synchronization scripts The most important difference is that in Oracle, DDL is non-transactional; you cannot roll back DDL statements like you can on SQL Server. Although we considered various solutions to this, including using the flashback archive or recycle bin, or generating an undo script, no reliable method of completely undoing a half-executed sync script has yet been found; so in this case we simply have to trust that the DBA or developer will check and verify the script before running it. However, before we got to that stage, we had to get the scripts to run in the first place... To run a synchronization script from SQL Compare we essentially pass the script over to the SqlCommand.ExecuteNonQuery method. However, when we tried to do the same for an OracleConnection we got a very strange error – 'ORA-00911: invalid character', even when running the most basic CREATE TABLE command. After much hair-pulling and Googling, we discovered that Oracle has got some very strange behaviour with semicolons at the end of statements. To understand what's going on, we need to take a quick foray into SQL and PL/SQL. PL/SQL is not T-SQL In SQL Server, T-SQL is the language used to interface with the database. It has DDL, DML, control flow, and many other nice features (like Turing-completeness) that you can mix and match in the same script. In Oracle, SQL and PL/SQL are two completely separate languages, with different syntax, different datatypes and different execution engines within the instance. Oracle SQL is much more like 'pure' ANSI SQL, with no state, no control flow, and only the basic DML commands. PL/SQL is the Turing-complete language, but it can only do DML and DCL (i.e. BEGIN TRANSACTION-style commands). Any DDL or SQL commands that aren't recognised by the PL/SQL engine have to be passed back to the SQL engine via an EXECUTE IMMEDIATE command. In PL/SQL, a semicolon is a valid token used to delimit the end of a statement. In SQL, a semicolon is not a valid token (even though the Oracle documentation shows them at the end of the syntax diagrams).
When you execute the command CREATE TABLE table1 (COL1 NUMBER); in SQL*Plus the semicolon on the end is a command to SQL*Plus to execute the preceding statement on the server; it strips off the semicolon before passing it on. SQL Developer does a similar thing. When executing a PL/SQL block, however, the syntax is like so: BEGIN INSERT INTO table1 VALUES (1); INSERT INTO table1 VALUES (2); END; / In this case, the semicolon is accepted by the PL/SQL engine as a statement delimiter, and instead the / is the command to SQL*Plus to execute the current block. This explains the ORA-00911 error we got when trying to run the CREATE TABLE command – the server is complaining about the semicolon on the end. This also means that there is no SQL syntax to execute more than one DDL command in the same OracleCommand. Therefore, we would have to do a round-trip to the server for every command we want to execute. Obviously, this would cause lots of network traffic and be very slow on slow or congested networks. Our first attempt at a solution was to wrap every SQL statement (without semicolon) inside an EXECUTE IMMEDIATE command in a PL/SQL block and pass that to the server to execute. One downside of this solution is that we get no feedback as to how the script execution is going; we're currently evaluating better solutions to this thorny issue. Next up: Dependencies; how we solved the problem of being unable to register the entire database, and the knock-on effects to the whole product.
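    As a minimal sketch of the workaround described above (my illustration, not the product's actual code), the snippet below folds a list of DDL statements into a single anonymous PL/SQL block: it strips the trailing semicolons that trigger ORA-00911 and doubles the quotes needed for the EXECUTE IMMEDIATE string literals, so the whole batch can be sent to an Oracle command object's ExecuteNonQuery in one round-trip.

    ```csharp
    using System.Collections.Generic;
    using System.Text;

    static class OracleDdlBatcher
    {
        // Wrap each DDL statement in EXECUTE IMMEDIATE inside one BEGIN..END block so a
        // single round-trip runs them all. Sketch only: as noted above, you get no
        // per-statement feedback while the block executes.
        public static string WrapInPlSqlBlock(IEnumerable<string> ddlStatements)
        {
            var block = new StringBuilder("BEGIN\n");
            foreach (string ddl in ddlStatements)
            {
                string statement = ddl.TrimEnd().TrimEnd(';');   // the SQL engine rejects the ';'
                block.Append("  EXECUTE IMMEDIATE '")
                     .Append(statement.Replace("'", "''"))       // escape quotes for the literal
                     .Append("';\n");
            }
            block.Append("END;");
            return block.ToString();
        }
    }
    ```

    (Very long statements may still need special handling, since PL/SQL string literals have length limits.)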

    Read the article

  • ApiChange Is Released!

    - by Alois Kraus
I have been working on a little tool to simplify my life, and perhaps yours as a developer as well. It is basically a command line tool that allows you to execute queries on your compiled .NET code base. The main purpose is to find out how big the impact of an API change would be if you changed this or that. Now you can do high-level operations like: Diff public types for breaking changes. Who uses a method? Who uses a type? Who implements an interface? Who references me? What format is the binary (32/64, Managed C++, Pure IL, Unmanaged)? Search for all event subscribers and unsubscribers. A unique feature is the check for event subscription imbalances. Forgotten event subscriptions are the cause of 90% of managed memory leaks. The check is done at a per-class level: if one class subscribes to an event more often than it unsubscribes, that is treated as a possible event subscription imbalance. Another unique ability is the search for users of string literals, which allows you to track users of a string constant - something that is not possible otherwise. For incremental builds the ShowRebuildTargets command can be used to identify the dependent targets that need a rebuild after you have compiled one assembly. It has some heuristics in place to determine the impact of breaking changes and to find out which targets need to be recompiled as well. It has a ton of other features, and an API to access these things programmatically so you can build upon these simple queries to create even better tools. Perhaps we will get a Visual Studio plug-in? You can download it from CodePlex here. It works via XCopy deployment. Simply let it run and check out the command line help. The best feature in my opinion is that the output of nearly all commands can be piped to Excel for further analysis. Since it also reads the PDBs, it can show you the source file name and line number for all matches. The following picture shows the output of a -WhoUsesType query. The following command checks where types from BaseLibraryV1.dll are used inside DependantLibV1.dll. All matches are printed out with the reason and matching item, along with file and line number. There is even a hyperlink to the match which will open Visual Studio. ApiChange -whousestype "*" BaseLibraryV1.dll -in DependantLibV1.dll -excel The "*" is the actual query, which means all types. The syntax is the same as in C#, just that placeholders are allowed ;-). More info can be found in the CodePlex documentation. The tool was developed in a TDD manner, which means that it is heavily tested and already used by a quite large user base inside the company I work for. Luckily for you, I got permission to make it public so you can take advantage of it. It is fully instrumented with tracing: if you find bugs, simply add the -trace command line switch to find out what is failing and send me the output. How is it done? Your first guess might be that it uses reflection. Wrong. It is based on Mono Cecil, a free IL parser with a fantastic API to access all the internals of a managed assembly. The speed is awesome, and to make it even faster I made the tool heavily multi-threaded. The query above executed in 1.8s including the Excel output. On a rather slow machine I can analyze over 1500 assemblies in less than 40s with very low memory consumption. The true power of Mono Cecil is that I can load an assembly like any other data file.
I have no problems unloading a file, but if I had used reflection I would have needed to unload a whole AppDomain just to get rid of one assembly in memory. Just to give you a glimpse of how ApiChange.Api.dll can be used, here is one of the unit tests:

public void Can_Find_GenericMethodInvocations_With_Type_Parameters()
{
    // 1. Create an aggregator to collect our matches
    UsageQueryAggregator agg = new UsageQueryAggregator();

    // 2. This is the type we want to search for. Load it via the type query
    var decimalType = TypeQuery.GetTypeByName(TestConstants.MscorlibAssembly, "System.Decimal");

    // 3. Register the type query which searches for uses of the Decimal type
    new WhoUsesType(agg, decimalType);

    // 4. Search for all users of the Decimal type in the DependandLibV1Assembly
    agg.Analyze(TestConstants.DependandLibV1Assembly);

    // Extract matches and assert
    Assert.AreEqual(2, agg.MethodMatches.Count, "Method match count");
    Assert.AreEqual("UseGenericMethod", agg.MethodMatches[0].Match.Name);
    Assert.AreEqual("UseGenericMethod", agg.MethodMatches[1].Match.Name);
}

Many thanks go from here to Jb Evain for the creation of Mono.Cecil. Without this fantastic piece of code it would have been much, much harder. There are other options around, like the Common Compiler Infrastructure Metadata API, which should do the same thing, but it was not a real option since the Microsoft reader failed on even simple assemblies (at least in September 2009 this was the case). Besides this, I found the CCI APIs much harder to use. The only real competitor was Reflector, which does support many things but does not let me access its cool high-level analysis commands. So I decided to dig into the IL specs, and as a result you can query your compiled binaries from the command line or programmatically. The best thing is to try it out for yourself and give me some feedback about what you miss. If you want to contribute or have a cool idea about what should be added, drop me a mail at A Kraus1@___No [email protected]. There is much more inside the tool that I have not talked about (yet).
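    For readers who have not run into the leak pattern the imbalance check is aimed at, here is a tiny hand-written sketch (mine, not from ApiChange): the subscriber attaches a handler to a long-lived publisher's event and never detaches it, so the publisher's delegate list keeps every subscriber alive. One more += than -= per class is exactly what the per-class imbalance check reports.

    ```csharp
    using System;

    class Publisher
    {
        public event EventHandler DataChanged;

        public void RaiseDataChanged()
        {
            var handler = DataChanged;
            if (handler != null) handler(this, EventArgs.Empty);
        }
    }

    class LeakySubscriber
    {
        public LeakySubscriber(Publisher publisher)
        {
            // Subscribe once...
            publisher.DataChanged += OnDataChanged;
            // ...but there is no matching "publisher.DataChanged -= OnDataChanged" anywhere,
            // so as long as the publisher lives, every LeakySubscriber instance lives too.
        }

        private void OnDataChanged(object sender, EventArgs e)
        {
            // react to the event
        }
    }
    ```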

    Read the article

  • Solaris 11 SRU / Update relationship explained, and blackout period on delivery of new bug fixes eliminated

    - by user12244672
Relationship between SRUs and Update releases As you may know, Support Repository Updates (SRUs) for Oracle Solaris 11 are released monthly and are available to customers with an appropriate support contract.  SRUs primarily deliver bug fixes.  They may also deliver low-risk feature enhancements. Solaris Updates are typically released once or twice a year, containing support for new hardware, new software feature enhancements, and all bug fixes available at the time the Update content was finalized.  They also contain a significant number of new bug fixes, for issues found internally in Oracle and for complex customer bug fixes which require significant "soak" time to ensure their efficacy prior to release. Changes to SRU and Update Naming Conventions We're changing the naming convention of Update releases from a date-based format such as Oracle Solaris 10 8/11 to a simpler "dot" version numbering, e.g. Oracle Solaris 11.1. Oracle Solaris 11 11/11 (i.e. the initial Oracle Solaris 11 release) may be referred to as 11.0. SRUs will simply be named as "dot.dot" releases, e.g. Oracle Solaris 11.1.1 for SRU1 after Oracle Solaris 11.1. Many Oracle products and infrastructure tools such as BugDB and MOS are tailored towards this "dot.dot" style of release naming, so these name changes align Oracle Solaris with those conventions. No Blackout Periods on Bug Fix Releases The Oracle Solaris 11 release process has been enhanced to eliminate blackout periods on the delivery of new bug fixes to customers. Previously, Oracle Solaris Updates were a superset of all preceding bug fix deliveries.  This made for a very simple update message - that which releases later is always a superset of that which was delivered previously. However, it had a downside.  Once the contents of an Update release were frozen prior to release, the release of new bug fixes for customer issues was also frozen to maintain the Update's superset relationship. Since the amount of change allowed into the final internal builds of an Update release is reduced to mitigate risk, this throttling back also impacted the release of new bug fixes to customers. This meant that there was effectively a 6 to 9 week hiatus on the release of new bug fixes prior to the release of each Update.  That wasn't good for customers awaiting critical bug fixes. We've eliminated this hiatus on the delivery of new bug fixes in Oracle Solaris 11 by allowing new bug fixes to continue to be released in SRUs even after the contents of the next Update release have been frozen. The release of SRUs will remain contiguous, with the first SRU released after the Update release effectively being a superset of both the Update release and all preceding SRUs*.  That is, later SRUs are supersets of the content of previous SRUs. Therefore, the progression path from the final SRUs prior to the Update release is to the first SRU after the Update release, rather than to the Update release itself. The timeline / logical sequence of releases can be shown as follows, with each Update feeding into the SRU stream that follows it:

Updates: 11.0                                    11.1                           11.2    etc.
SRUs:    11.0.1, 11.0.2, ..., 11.0.12, 11.0.13,  11.1.1, 11.1.2, ..., 11.1.x,   11.2.1, etc.

For example, for systems with Oracle Solaris 11 11/11 SRU12.4 or later installed, the recommended update path is to Oracle Solaris 11.1.1 (i.e.
SRU1 after Solaris 11.1) or later rather than to the Solaris 11.1 release itself.  This will ensure no bug fixes are "lost" during the update. If for any reason you do wish to update from SRU12.4 or later to the 11.1 release itself - for example to update a test system - the instructions to do so are in the SRU12.4 README, https://updates.oracle.com/Orion/Services/download?type=readme&aru=15564533 For systems with Oracle Solaris 11 11/11 SRU11.4 or earlier installed, customers can update to either the 11.1 release or any 11.1 SRU as both will be supersets of their current version. Please do read the README of the SRU you are updating to, as it will contain important installation instructions which will save you time and effort. *Nerdy details: SRUs only contain the latest change delta relative to the Update on which they are based.  Their dependencies will, however, effectively pull in the Update content.  Customers maintaining a local Repo (e.g. behind their firewall), need to add both the 11.1 content and the relevant SRU content to their Repo, to enable the SRU's dependencies to be resolved.  Both will be available from the standard Support Repo and from MOS.  This is no different to existing SRUs for Oracle Solaris 11.0, whereby you may often get away with using just the SRU content to update, but the original 11.0 content may be needed in the Repo to resolve dependencies.

    Read the article

  • Possible SWITCH Optimization in DAX – #powerpivot #dax #tabular

    - by Marco Russo (SQLBI)
    In one of the Advanced DAX Workshop I taught this year, I had an interesting discussion about how to optimize a SWITCH statement (which could be frequently used checking a slicer, like in the Parameter Table pattern). Let’s start with the problem. What happen when you have such a statement? Sales :=     SWITCH (         VALUES ( Period[Period] ),         "Current", [Internet Total Sales],         "MTD", [MTD Sales],         "QTD", [QTD Sales],         "YTD", [YTD Sales],          BLANK ()     ) The SWITCH statement is in reality just syntax sugar for a nested IF statement. When you place such a measure in a pivot table, for every cell of the pivot table the IF options are evaluated. In order to optimize performance, the DAX engine usually does not compute cell-by-cell, but tries to compute the values in bulk-mode. However, if a measure contains an IF statement, every cell might have a different execution path, so the current implementation might evaluate all the possible IF branches in bulk-mode, so that for every cell the result from one of the branches will be already available in a pre-calculated dataset. The price for that could be high. If you consider the previous Sales measure, the YTD Sales measure could be evaluated for all the cells where it’s not required, and also when YTD is not selected at all in a Pivot Table. The actual optimization made by the DAX engine could be different in every build, and I expect newer builds of Tabular and Power Pivot to be better than older ones. However, we still don’t live in an ideal world, so it could be better trying to help the engine finding a better execution plan. One student (Niek de Wit) proposed this approach: Selection := IF (     HASONEVALUE ( Period[Period] ),     VALUES ( Period[Period] ) ) Sales := CALCULATE (     [Internet Total Sales],     FILTER (         VALUES ( 'Internet Sales'[Order Quantity] ),         'Internet Sales'[Order Quantity]             = IF (                 [Selection] = "Current",                 'Internet Sales'[Order Quantity],                 -1             )     ) )     + CALCULATE (         [MTD Sales],         FILTER (             VALUES ( 'Internet Sales'[Order Quantity] ),             'Internet Sales'[Order Quantity]                 = IF (                     [Selection] = "MTD",                     'Internet Sales'[Order Quantity],                     -1                 )         )     )     + CALCULATE (         [QTD Sales],         FILTER (             VALUES ( 'Internet Sales'[Order Quantity] ),             'Internet Sales'[Order Quantity]                 = IF (                     [Selection] = "QTD",                     'Internet Sales'[Order Quantity],                     -1                 )         )     )     + CALCULATE (         [YTD Sales],         FILTER (             VALUES ( 'Internet Sales'[Order Quantity] ),             'Internet Sales'[Order Quantity]                 = IF (                     [Selection] = "YTD",                     'Internet Sales'[Order Quantity],                     -1                 )         )     ) At first sight, you might think it’s impossible that this approach could be faster. However, if you examine with the profiler what happens, there is a different story. Every original IF’s execution branch is now a separate CALCULATE statement, which applies a filter that does not execute the required measure calculation if the result of the FILTER is empty. 
I used the ‘Internet Sales’[Order Quantity] column in this example just because in Adventure Works it has only one value (every row has 1): in the real world, you should use a column that has a very low number of distinct values, or ideally a column that has the same value in every row (so it will be compressed very well!). Because the value –1 never appears in this column, the IF comparison in the filter discards all the values iterated in the filter whenever the selection does not match the desired value. I hope to have time in the future to write a longer article about this optimization technique, but in the meantime I have seen it prove useful in many other implementations. Please write your feedback if you find scenarios (in both Power Pivot and Tabular) where you obtain performance improvements using this technique!

    Read the article

  • Scheduling thread tiles with C++ AMP

    - by Daniel Moth
    This post assumes you are totally comfortable with, what some of us call, the simple model of C++ AMP, i.e. you could write your own matrix multiplication. We are now ready to explore the tiled model, which builds on top of the non-tiled one. Tiling the extent We know that when we pass a grid (which is just an extent under the covers) to the parallel_for_each call, it determines the number of threads to schedule and their index values (including dimensionality). For the single-, two-, and three- dimensional cases you can go a step further and subdivide the threads into what we call tiles of threads (others may call them thread groups). So here is a single-dimensional example: extent<1> e(20); // 20 units in a single dimension with indices from 0-19 grid<1> g(e);      // same as extent tiled_grid<4> tg = g.tile<4>(); …on the 3rd line we subdivided the single-dimensional space into 5 single-dimensional tiles each having 4 elements, and we captured that result in a concurrency::tiled_grid (a new class in amp.h). Let's move on swiftly to another example, in pictures, this time 2-dimensional: So we start on the left with a grid of a 2-dimensional extent which has 8*6=48 threads. We then have two different examples of tiling. In the first case, in the middle, we subdivide the 48 threads into tiles where each has 4*3=12 threads, hence we have 2*2=4 tiles. In the second example, on the right, we subdivide the original input into tiles where each has 2*2=4 threads, hence we have 4*3=12 tiles. Notice how you can play with the tile size and achieve different number of tiles. The numbers you pick must be such that the original total number of threads (in our example 48), remains the same, and every tile must have the same size. Of course, you still have no clue why you would do that, but stick with me. First, we should see how we can use this tiled_grid, since the parallel_for_each function that we know expects a grid. Tiled parallel_for_each and tiled_index It turns out that we have additional overloads of parallel_for_each that accept a tiled_grid instead of a grid. However, those overloads, also expect that the lambda you pass in accepts a concurrency::tiled_index (new in amp.h), not an index<N>. So how is a tiled_index different to an index? A tiled_index object, can have only 1 or 2 or 3 dimensions (matching exactly the tiled_grid), and consists of 4 index objects that are accessible via properties: global, local, tile_origin, and tile. The global index is the same as the index we know and love: the global thread ID. The local index is the local thread ID within the tile. The tile_origin index returns the global index of the thread that is at position 0,0 of this tile, and the tile index is the position of the tile in relation to the overall grid. Confused? Here is an example accompanied by a picture that hopefully clarifies things: array_view<int, 2> data(8, 6, p_my_data); parallel_for_each(data.grid.tile<2,2>(), [=] (tiled_index<2,2> t_idx) restrict(direct3d) { /* todo */ }); Given the code above and the picture on the right, what are the values of each of the 4 index objects that the t_idx variables exposes, when the lambda is executed by T (highlighted in the picture on the right)? If you can't work it out yourselves, the solution follows: t_idx.global       = index<2> (6,3) t_idx.local          = index<2> (0,1) t_idx.tile_origin = index<2> (6,2) t_idx.tile             = index<2> (3,1) Don't move on until you are comfortable with this… the picture really helps, so use it. 
Tiled Matrix Multiplication Example – part 1 Let's paste here the C++ AMP matrix multiplication example, bolding the lines we are going to change (can you guess what the changes will be?) 01: void MatrixMultiplyTiled_Part1(vector<float>& vC, const vector<float>& vA, const vector<float>& vB, int M, int N, int W) 02: { 03: 04: array_view<const float,2> a(M, W, vA); 05: array_view<const float,2> b(W, N, vB); 06: array_view<writeonly<float>,2> c(M, N, vC); 07: parallel_for_each(c.grid, 08: [=](index<2> idx) restrict(direct3d) { 09: 10: int row = idx[0]; int col = idx[1]; 11: float sum = 0.0f; 12: for(int i = 0; i < W; i++) 13: sum += a(row, i) * b(i, col); 14: c[idx] = sum; 15: }); 16: } To turn this into a tiled example, first we need to decide our tile size. Let's say we want each tile to be 16*16 (which assumes that we'll have at least 256 threads to process, and that c.grid.extent.size() is divisible by 256, and moreover that c.grid.extent[0] and c.grid.extent[1] are divisible by 16). So we insert at line 03 the tile size (which must be a compile time constant). 03: static const int TS = 16; ...then we need to tile the grid to have tiles where each one has 16*16 threads, so we change line 07 to be as follows 07: parallel_for_each(c.grid.tile<TS,TS>(), ...that means that our index now has to be a tiled_index with the same characteristics as the tiled_grid, so we change line 08 08: [=](tiled_index<TS, TS> t_idx) restrict(direct3d) { ...which means, without changing our core algorithm, we need to be using the global index that the tiled_index gives us access to, so we insert line 09 as follows 09: index<2> idx = t_idx.global; ...and now this code just works and it is tiled! Closing thoughts on part 1 The process we followed just shows the mechanical transformation that can take place from the simple model to the tiled model (think of this as step 1). In fact, when we wrote the matrix multiplication example originally, the compiler was doing this mechanical transformation under the covers for us (and it has additional smarts to deal with the cases where the total number of threads scheduled cannot be divisible by the tile size). The point is that the thread scheduling is always tiled, even when you use the non-tiled model. But with this mechanical transformation, we haven't gained anything… Hint: our goal with explicitly using the tiled model is to gain even more performance. In the next post, we'll evolve this further (beyond what the compiler can automatically do for us, in this first release), so you can see the full usage of the tiled model and its benefits… Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • We've completed the first iteration

    - by CliveT
    There are a lot of features in C# that are implemented by the compiler and not by the underlying platform. One such feature is a lambda expression. Since local variables cannot be accessed once the current method activation finishes, the compiler has to go out of its way to generate a new class which acts as a home for any variable whose lifetime needs to be extended past the activation of the procedure. Take the following example:     Random generator = new Random();     Func func = () = generator.Next(10); In this case, the compiler generates a new class called c_DisplayClass1 which is marked with the CompilerGenerated attribute. [CompilerGenerated] private sealed class c__DisplayClass1 {     // Fields     public Random generator;     // Methods     public int b__0()     {         return this.generator.Next(10);     } } Two quick comments on this: (i)    A display was the means that compilers for languages like Algol recorded the various lexical contours of the nested procedure activations on the stack. I imagine that this is what has led to the name. (ii)    It is a shame that the same attribute is used to mark all compiler generated classes as it makes it hard to figure out what they are being used for. Indeed, you could imagine optimisations that the runtime could perform if it knew that classes corresponded to certain high level concepts. We can see that the local variable generator has been turned into a field in the class, and the body of the lambda expression has been turned into a method of the new class. The code that builds the Func object simply constructs an instance of this class and initialises the fields to their initial values.     c__DisplayClass1 class2 = new c__DisplayClass1();     class2.generator = new Random();     Func func = new Func(class2.b__0); Reflector already contains code to spot this pattern of code and reproduce the form containing the lambda expression, so this is example is correctly decompiled. The use of compiler generated code is even more spectacular in the case of iterators. C# introduced the idea of a method that could automatically store its state between calls, so that it can pick up where it left off. The code can express the logical flow with yield return and yield break denoting places where the method should return a particular value and be prepared to resume.         {             yield return 1;             yield return 2;             yield return 3;         } Of course, there was already a .NET pattern for expressing the idea of returning a sequence of values with the computation proceeding lazily (in the sense that the work for the next value is executed on demand). This is expressed by the IEnumerable interface with its Current property for fetching the current value and the MoveNext method for forcing the computation of the next value. The sequence is terminated when this method returns false. The C# compiler links these two ideas together so that an IEnumerator returning method using the yield keyword causes the compiler to produce the implementation of an Iterator. Take the following piece of code.         IEnumerable GetItems()         {             yield return 1;             yield return 2;             yield return 3;         } The compiler implements this by defining a new class that implements a state machine. This has an integer state that records which yield point we should go to if we are resumed. It also has a field that records the Current value of the enumerator and a field for recording the thread. 
This latter value is used for optimising the creation of iterator instances. [CompilerGenerated] private sealed class d__0 : IEnumerable, IEnumerable, IEnumerator, IEnumerator, IDisposable {     // Fields     private int 1__state;     private int 2__current;     public Program 4__this;     private int l__initialThreadId; The body gets converted into the code to construct and initialize this new class. private IEnumerable GetItems() {     d__0 d__ = new d__0(-2);     d__.4__this = this;     return d__; } When the class is constructed we set the state, which was passed through as -2 and the current thread. public d__0(int 1__state) {     this.1__state = 1__state;     this.l__initialThreadId = Thread.CurrentThread.ManagedThreadId; } The state needs to be set to 0 to represent a valid enumerator and this is done in the GetEnumerator method which optimises for the usual case where the returned enumerator is only used once. IEnumerator IEnumerable.GetEnumerator() {     if ((Thread.CurrentThread.ManagedThreadId == this.l__initialThreadId)               && (this.1__state == -2))     {         this.1__state = 0;         return this;     } The state machine itself is implemented inside the MoveNext method. private bool MoveNext() {     switch (this.1__state)     {         case 0:             this.1__state = -1;             this.2__current = 1;             this.1__state = 1;             return true;         case 1:             this.1__state = -1;             this.2__current = 2;             this.1__state = 2;             return true;         case 2:             this.1__state = -1;             this.2__current = 3;             this.1__state = 3;             return true;         case 3:             this.1__state = -1;             break;     }     return false; } At each stage, the current value of the state is used to determine how far we got, and then we generate the next value which we return after recording the next state. Finally we return false from the MoveNext to signify the end of the sequence. Of course, that example was really simple. The original method body didn't have any local variables. Any local variables need to live between the calls to MoveNext and so they need to be transformed into fields in much the same way that we did in the case of the lambda expression. More complicated MoveNext methods are required to deal with resources that need to be disposed when the iterator finishes, and sometimes the compiler uses a temporary variable to hold the return value. Why all of this explanation? We've implemented the de-compilation of iterators in the current EAP version of Reflector (7). This contrasts with previous version where all you could do was look at the MoveNext method and try to figure out the control flow. There's a fair amount of things we have to do. We have to spot the use of a CompilerGenerated class which implements the Enumerator pattern. We need to go to the class and figure out the fields corresponding to the local variables. We then need to go to the MoveNext method and try to break it into the various possible states and spot the state transitions. We can then take these pieces and put them back together into an object model that uses yield return to show the transition points. After that Reflector can carry on optimising using its usual optimisations. The pattern matching is currently a little too sensitive to changes in the code generation, and we only do a limited analysis of the MoveNext method to determine use of the compiler generated fields. 
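    To make the point about hoisted locals concrete, here is a small hand-written sketch (not taken from the post): the loop counter below has to survive across calls to MoveNext, so the compiler moves it into a field of the generated state-machine class, just as the generator variable became a field of the display class in the lambda example earlier.

    ```csharp
    using System.Collections.Generic;

    class IteratorLocals
    {
        public IEnumerable<int> CountTo(int limit)
        {
            // 'i' looks like an ordinary local, but because execution suspends at each
            // yield return, the compiler hoists it into a field of the state machine.
            for (int i = 1; i <= limit; i++)
            {
                yield return i;
            }
        }
    }
    ```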
In some ways, it is a pity that iterators are compiled away and there is no metadata that reflects the original intent. Without it, we are always going to dependent on our knowledge of the compiler's implementation. For example, we have noticed that the Async CTP changes the way that iterators are code generated, so we'll have to do some more work to support that. However, with that warning in place, we seem to do a reasonable job of decompiling the iterators that are built into the framework. Hopefully, the EAP will give us a chance to find examples where we don't spot the pattern correctly or regenerate the wrong code, and we can improve things. Please give it a go, and report any problems.

    Read the article

  • Per-pixel displacement mapping GLSL

    - by Chris
    Im trying to implement a per-pixel displacement shader in GLSL. I read through several papers and "tutorials" I found and ended up with trying to implement the approach NVIDIA used in their Cascade Demo (http://www.slideshare.net/icastano/cascades-demo-secrets) starting at Slide 82. At the moment I am completly stuck with following problem: When I am far away the displacement seems to work. But as more I move closer to my surface, the texture gets bent in x-axis and somehow it looks like there is a little bent in general in one direction. EDIT: I added a video: click I added some screen to illustrate the problem: Well I tried lots of things already and I am starting to get a bit frustrated as my ideas run out. I added my full VS and FS code: VS: #version 400 layout(location = 0) in vec3 IN_VS_Position; layout(location = 1) in vec3 IN_VS_Normal; layout(location = 2) in vec2 IN_VS_Texcoord; layout(location = 3) in vec3 IN_VS_Tangent; layout(location = 4) in vec3 IN_VS_BiTangent; uniform vec3 uLightPos; uniform vec3 uCameraDirection; uniform mat4 uViewProjection; uniform mat4 uModel; uniform mat4 uView; uniform mat3 uNormalMatrix; out vec2 IN_FS_Texcoord; out vec3 IN_FS_CameraDir_Tangent; out vec3 IN_FS_LightDir_Tangent; void main( void ) { IN_FS_Texcoord = IN_VS_Texcoord; vec4 posObject = uModel * vec4(IN_VS_Position, 1.0); vec3 normalObject = (uModel * vec4(IN_VS_Normal, 0.0)).xyz; vec3 tangentObject = (uModel * vec4(IN_VS_Tangent, 0.0)).xyz; //vec3 binormalObject = (uModel * vec4(IN_VS_BiTangent, 0.0)).xyz; vec3 binormalObject = normalize(cross(tangentObject, normalObject)); // uCameraDirection is the camera position, just bad named vec3 fvViewDirection = normalize( uCameraDirection - posObject.xyz); vec3 fvLightDirection = normalize( uLightPos.xyz - posObject.xyz ); IN_FS_CameraDir_Tangent.x = dot( tangentObject, fvViewDirection ); IN_FS_CameraDir_Tangent.y = dot( binormalObject, fvViewDirection ); IN_FS_CameraDir_Tangent.z = dot( normalObject, fvViewDirection ); IN_FS_LightDir_Tangent.x = dot( tangentObject, fvLightDirection ); IN_FS_LightDir_Tangent.y = dot( binormalObject, fvLightDirection ); IN_FS_LightDir_Tangent.z = dot( normalObject, fvLightDirection ); gl_Position = (uViewProjection*uModel) * vec4(IN_VS_Position, 1.0); } The VS just builds the TBN matrix, from incoming normal, tangent and binormal in world space. Calculates the light and eye direction in worldspace. And finally transforms the light and eye direction into tangent space. 
FS: #version 400 // uniforms uniform Light { vec4 fvDiffuse; vec4 fvAmbient; vec4 fvSpecular; }; uniform Material { vec4 diffuse; vec4 ambient; vec4 specular; vec4 emissive; float fSpecularPower; float shininessStrength; }; uniform sampler2D colorSampler; uniform sampler2D normalMapSampler; uniform sampler2D heightMapSampler; in vec2 IN_FS_Texcoord; in vec3 IN_FS_CameraDir_Tangent; in vec3 IN_FS_LightDir_Tangent; out vec4 color; vec2 TraceRay(in float height, in vec2 coords, in vec3 dir, in float mipmap){ vec2 NewCoords = coords; vec2 dUV = - dir.xy * height * 0.08; float SearchHeight = 1.0; float prev_hits = 0.0; float hit_h = 0.0; for(int i=0;i<10;i++){ SearchHeight -= 0.1; NewCoords += dUV; float CurrentHeight = textureLod(heightMapSampler,NewCoords.xy, mipmap).r; float first_hit = clamp((CurrentHeight - SearchHeight - prev_hits) * 499999.0,0.0,1.0); hit_h += first_hit * SearchHeight; prev_hits += first_hit; } NewCoords = coords + dUV * (1.0-hit_h) * 10.0f - dUV; vec2 Temp = NewCoords; SearchHeight = hit_h+0.1; float Start = SearchHeight; dUV *= 0.2; prev_hits = 0.0; hit_h = 0.0; for(int i=0;i<5;i++){ SearchHeight -= 0.02; NewCoords += dUV; float CurrentHeight = textureLod(heightMapSampler,NewCoords.xy, mipmap).r; float first_hit = clamp((CurrentHeight - SearchHeight - prev_hits) * 499999.0,0.0,1.0); hit_h += first_hit * SearchHeight; prev_hits += first_hit; } NewCoords = Temp + dUV * (Start - hit_h) * 50.0f; return NewCoords; } void main( void ) { vec3 fvLightDirection = normalize( IN_FS_LightDir_Tangent ); vec3 fvViewDirection = normalize( IN_FS_CameraDir_Tangent ); float mipmap = 0; vec2 NewCoord = TraceRay(0.1,IN_FS_Texcoord,fvViewDirection,mipmap); //vec2 ddx = dFdx(NewCoord); //vec2 ddy = dFdy(NewCoord); vec3 BumpMapNormal = textureLod(normalMapSampler, NewCoord.xy, mipmap).xyz; BumpMapNormal = normalize(2.0 * BumpMapNormal - vec3(1.0, 1.0, 1.0)); vec3 fvNormal = BumpMapNormal; float fNDotL = dot( fvNormal, fvLightDirection ); vec3 fvReflection = normalize( ( ( 2.0 * fvNormal ) * fNDotL ) - fvLightDirection ); float fRDotV = max( 0.0, dot( fvReflection, fvViewDirection ) ); vec4 fvBaseColor = textureLod( colorSampler, NewCoord.xy,mipmap); vec4 fvTotalAmbient = fvAmbient * fvBaseColor; vec4 fvTotalDiffuse = fvDiffuse * fNDotL * fvBaseColor; vec4 fvTotalSpecular = fvSpecular * ( pow( fRDotV, fSpecularPower ) ); color = ( fvTotalAmbient + (fvTotalDiffuse + fvTotalSpecular) ); } The FS implements the displacement technique in TraceRay method, while always using mipmap level 0. Most of the code is from NVIDIA sample and another paper I found on the web, so I guess there cannot be much wrong in here. At the end it uses the modified UV coords for getting the displaced normal from the normal map and the color from the color map. I looking forward for some ideas. Thanks in advance! Edit: Here is the code loading the heightmap: glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, mWidth, mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, mImageData); glGenerateMipmap(GL_TEXTURE_2D); //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR); //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT); Maybe something wrong in here?

    Read the article

  • Agilist, Heal Thyself!

    - by Dylan Smith
    I’ve been meaning to blog about a great experience I had earlier in the year at Prairie Dev Con Calgary.  Myself and Steve Rogalsky did a session that we called “Agilist, Heal Thyself!”.  We used a format that was new to me, but that Steve had seen used at another conference.  What we did was start by asking the audience to give us a list of challenges they had had when adopting agile.  We wrote them all down, then had everybody vote on the most interesting ones.  Then we split into two groups, and each group was assigned one of the agile challenges.  We had 20 minutes to discuss the challenge, and suggest solutions or approaches to improve things.  At the end of the 20 minutes, each of the groups gave a brief summary of their discussion and learning's, then we mixed up the groups and repeated with another 2 challenges. The 2 groups I was part of had some really interesting discussions, and suggestions: Unfinished Stories at the end of Sprints The first agile challenge we tackled, was something that every single Scrum team I have worked with has struggled with.  What happens when you get to the end of a Sprint, and there are some stories that are only partially completed.  The team in question was getting very de-moralized as they felt that every Sprint was a failure as they never had a set of fully completed stories. How do you avoid this? and/or what do you do when it happens? There were 2 pieces of advice that were well received: 1. Try to bring stories to completion before starting new ones.  This is advice I give all my Scrum teams.  If you have a 3-week sprint, what happens all too often is you get to the end of week 2, and a lot of stories are almost done; but almost none are completely done.  This is a Bad Thing.  I encourage the teams I work with to only start a new story as a very last resort.  If you finish your task look at the stories in progress and see if there’s anything you can do to help before moving onto a new story.  In the daily standup, put a focus on seeing what stories got completed yesterday, if a few days go by with none getting completed, be sure this fact is visible to the team and do something about it.  Something I’ve been doing recently is introducing WIP (Work In Progress) limits while using Scrum.  My current team has 2-week sprints, and we usually have about a dozen or stories in a sprint.  We instituted a WIP limit of 4 stories.  If 4 stories have been started but not finished then nobody is allowed to start new stories.  This made it obvious very quickly that our QA tasks were our bottleneck (we have 4 devs, but only 1.5 testers).  The WIP limit forced the developers to start to pickup QA tasks before moving onto the next dev tasks, and we ended our sprints with many more stories completely finished than we did before introducing WIP limits. 2. Rather than using time-boxed sprints, why not just do away with them altogether and go to a continuous flow type approach like KanBan.  Limit WIP to keep things under control, but don’t have a fixed time box at the end of which all tasks are supposed to be done.  This eliminates the problem almost entirely.  At some points in the project (releases) you need to be able to burn down all the half finished stories to get a stable release build, but this probably occurs less often than every sprint, and there are alternative approaches to achieve it using branching strategies rather than forcing your team to try to get to Zero WIP every 2-weeks (e.g. 
when you are ready for a release, create a new branch for any new stories, but finish all existing stories in the current branch and release it). Trying to Introduce Agile into a team with previous Bad Agile Experiences One of the agile adoption challenges somebody described, was he was in a leadership role on a team he had recently joined – lets call him Dave.  This team was currently very waterfall in their ALM process, but they were about to start on a new green-field project.  Dave wanted to use this new project as an opportunity to do things the “right way”, using an Agile methodology like Scrum, adopting TDD, automated builds, proper branching strategies, etc.  The problem he was facing is everybody else on the team had previously gone through an “Agile Adoption” that was a horrible failure.  Dave blamed this failure on the consultant brought in previously to lead this agile transition, but regardless of the reason, the team had very negative feelings towards agile, and was very resistant to trying it out again.  Dave possibly had the authority to try to force the team to adopt Agile practices, but we all know that doesn’t work very well.  What was Dave to do? Ultimately, the best advice was to question *why* did Dave want to adopt all these various practices. Rather than trying to convince his team that these were the “right way” to run a dev project, and trying to do a Big Bang approach to introducing change.  He would be better served by identifying problems the team currently faces, have a discussion with the team to get everybody to agree that specific problems existed, then have an open discussion about ways to address those problems.  This way Dave could incrementally introduce agile practices, and he doesn’t even need to identify them as “agile” practices if he doesn’t want to.  For example, when we discussed with Dave, he said probably the teams biggest problem was long periods without feedback from users, then finding out too late that the software is not going to meet their needs.  Rather than Dave jumping right to introducing Scrum and all it entails, it would be easier to get buy-in from team if he framed it as a discussion of existing problems, and brainstorming possible solutions.  And possibly most importantly, don’t try to do massive changes all at once with a team that has not bought-into those changes.  Taking an incremental approach has a greater chance of success. I see something similar in my day job all the time too.  Clients who for one reason or another claim to not be fans of agile (or not ready for agile yet).  But then they go on to ask me to help them get shorter feedback cycles, quicker delivery cycles, iterative development processes, etc.  It’s kind of funny at times, sometimes you just need to phrase the suggestions in terms they are using and avoid the word “agile”. PS – I haven’t blogged all that much over the past couple of years, but in an attempt to motivate myself, a few of us have accepted a blogger challenge.  There’s 6 of us who have all put some money into a pool, and the agreement is that we each need to blog at least once every 2-weeks.  The first 2-week period that we miss we’re eliminated.  Last person standing gets the money.  So expect at least one blog post every couple of weeks for the near future (I hope!).  
And check out the blogs of the other 5 people in this blogger challenge:
Steve Rogalsky: http://winnipegagilist.blogspot.ca
Aaron Kowall: http://www.geekswithblogs.net/caffeinatedgeek
Tyler Doerkson: http://blog.tylerdoerksen.com
David Alpert: http://www.spinthemoose.com
Dave White: http://www.agileramblings.com (note: site not available yet – should be up shortly or he owes me some money!)

    Read the article

  • SQL Server Developer Tools – Codename Juneau vs. Red-Gate SQL Source Control

    - by Ajarn Mark Caldwell
    So how do the new SQL Server Developer Tools (previously code-named Juneau) stack up against SQL Source Control?  Read on to find out. At the PASS Community Summit a couple of weeks ago, it was announced that the previously code-named Juneau software would be released under the name of SQL Server Developer Tools with the release of SQL Server 2012.  This replacement for Database Projects in Visual Studio (also known in a former life as Data Dude) has some great new features.  I won’t attempt to describe them all here, but I will applaud Microsoft for making major improvements.  One of my favorite changes is the way database elements are broken down.  Previously every little thing was in its own file.  For example, indexes were each in their own file.  I always hated that.  Now, SSDT uses a pattern similar to Red-Gate’s and puts the indexes and keys into the same file as the overall table definition. Of course there are really cool features to keep your database model in sync with the actual source scripts, and the rename refactoring feature is now touted as being more than just a search and replace, but rather a “semantic-aware” search and replace.  Funny, it reminds me of SQL Prompt’s Smart Rename feature.  But I’m not writing this just to criticize Microsoft and argue that they are late to the party with this feature set.  Instead, I do see it as a viable alternative for folks who want all of their source code to be version controlled, but there are a couple of key trade-offs that you need to know about when you choose which tool set to use. First, the basics Both tool sets integrate with a wide variety of source control systems including the most popular: Subversion, GIT, Vault, and Team Foundation Server.  Both tools have integrated functionality to produce objects to upgrade your target database when you are ready (DACPACs in SSDT, integration with SQL Compare for SQL Source Control).  If you regularly live in Visual Studio or the Business Intelligence Development Studio (BIDS) then SSDT will likely be comfortable for you.  Like BIDS, SSDT is a Visual Studio Project Type that comes with SQL Server, and if you don’t already have Visual Studio installed, it will install the shell for you.  If you already have Visual Studio 2010 installed, then it will just add this as an available project type.  On the other hand, if you regularly live in SQL Server Management Studio (SSMS) then you will really enjoy the SQL Source Control integration from within SSMS.  Both tool sets store their database model in script files.  In SSDT, these are on your file system like other source files; in SQL Source Control, these are stored in the folder structure in your source control system, and you can always GET them to your file system if you want to browse them directly. For me, the key differentiating factors are 1) a single, unified check-in, and 2) migration scripts.  How you value those two features will likely make your decision for you. Unified Check-In If you do a continuous-integration (CI) style of development that triggers an automated build with unit testing on every check-in of source code, and you use Visual Studio for the rest of your development, then you will want to really consider SSDT.  Because it is just another project in Visual Studio, it can be added to your existing Solution, and you can then do a complete, or unified single check-in of all changes whether they are application or database changes.  
This is simply not possible with SQL Source Control because it is in a different development tool (SSMS instead of Visual Studio) and there is no way to do one unified check-in between the two.  You CAN do really fast back-to-back check-ins, but there is the possibility that the automated build that is triggered from the first check-in will cause your unit tests to fail and the CI tool to report that you broke the build.  Of course, the automated build that is triggered from the second check-in which contains the “other half” of your changes should pass and so the amount of time that the build was broken may be very, very short, but if that is very, very important to you, then SQL Source Control just won’t work; you’ll have to use SSDT. Refactoring and Migrations If you work on a mature system, or on a not-so-mature but also not-so-well-designed system, where you want to refactor the database schema as you go along, but you can’t have data suddenly disappearing from your target system, then you’ll probably want to go with SQL Source Control.  As I wrote previously, there are a number of changes which you can make to your database that the comparison tools (both from Microsoft and Red Gate) simply cannot handle without the possibility (or probability) of data loss.  Currently, SSDT only offers you the ability to inject PRE and POST custom deployment scripts.  There is no way to insert your own script in the middle to override the default behavior of the tool.  In version 3.0 of SQL Source Control (Early Access version now available) you have that ability to create your own custom migration script to take the place of the commands that the tool would have done, and ensure the preservation of your data.  Or, even if the default tool behavior would have worked, but you simply know a better way then you can take control and do things your way instead of theirs. You Decide In the environment I work in, our automated builds are not triggered off of check-ins, but off of the clock (currently once per night) and so there is no point at which the automated build and unit tests will be triggered without having both sides of the development effort already checked-in.  Therefore having a unified check-in, while handy, is not critical for us.  As for migration scripts, these are critically important to us.  We do a lot of new development on systems that have already been in production for years, and it is not uncommon for us to need to do a refactoring of the database.  Because of the maturity of the existing system, that often involves data migrations or other additional SQL tasks that the comparison tools just can’t detect on their own.  Therefore, the ability to create a custom migration script to override the tool’s default behavior is very important to us.  And so, you can see why we will continue to use Red Gate SQL Source Control for the foreseeable future.

    Read the article

  • What's new in EJB 3.2 ? - Java EE 7 chugging along!

    - by arungupta
EJB 3.1 added a whole ton of features for simplicity and ease-of-use such as @Singleton, @Asynchronous, @Schedule, Portable JNDI name, EJBContainer.createEJBContainer, EJB 3.1 Lite, and many others. As part of Java EE 7, EJB 3.2 (JSR 345) is making progress and this blog will provide highlights from the work done so far. This release has been deliberately kept small but includes several minor improvements and tweaks for usability.

More features in EJB Lite:
Asynchronous session beans
Non-persistent EJB Timer service
This also means these features can be used in the embeddable EJB container, thereby improving the testability of your application.

Pruning - The following features were made Proposed Optional in Java EE 6 and are now made optional:
EJB 2.1 and earlier Entity Bean Component Contract for CMP and BMP
Client View of an EJB 2.1 and earlier Entity Bean
EJB QL: Query Language for CMP Query Methods
JAX-RPC-based Web Service Endpoints and Client View
The optional features are moved to a separate document and as a result the EJB specification is now split into Core and Optional documents. This allows the specification to be more readable and better organized.

Updates and Improvements

Transactional lifecycle callbacks in Stateful Session Beans, only for CMT. In EJB 3.1, the transaction context for lifecycle callback methods (@PostConstruct, @PreDestroy, @PostActivate, @PrePassivate) is defined as shown:

Bean type  | @PostConstruct                      | @PreDestroy                         | @PrePassivate | @PostActivate
Stateless  | Unspecified                         | Unspecified                         | N/A           | N/A
Stateful   | Unspecified                         | Unspecified                         | Unspecified   | Unspecified
Singleton  | Bean's transaction management type  | Bean's transaction management type  | N/A           | N/A

In EJB 3.2, stateful session bean lifecycle callback methods can opt in to be transactional. These methods are then executed in a transaction context as shown:

Bean type  | @PostConstruct                      | @PreDestroy                         | @PrePassivate                       | @PostActivate
Stateless  | Unspecified                         | Unspecified                         | N/A                                 | N/A
Stateful   | Bean's transaction management type  | Bean's transaction management type  | Bean's transaction management type  | Bean's transaction management type
Singleton  | Bean's transaction management type  | Bean's transaction management type  | N/A                                 | N/A

For example, the following stateful session bean requires a new transaction to be started for the @PostConstruct and @PreDestroy lifecycle callback methods:

@Stateful
public class HelloBean {
    @PersistenceContext(type=PersistenceContextType.EXTENDED)
    private EntityManager em;

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    @PostConstruct
    public void init() {
        myEntity = em.find(...);
    }

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    @PreDestroy
    public void destroy() {
        em.flush();
    }
}

Notice, by default the lifecycle callback methods are not transactional for backwards compatibility. They need to explicitly opt in to be made transactional.

Opt-out of passivation for stateful session beans - If your stateful session bean needs to stick around, or it has a non-serializable field, then the bean can opt out of passivation as shown:

@Stateful(passivationCapable=false)
public class HelloBean {
    private NonSerializableType ref = ...;
    . . .
}

Simplified rules to define all local/remote views of the bean. For example, if the bean is defined as:

@Stateless
public class Bean implements Foo, Bar {
    . . .
}

where Foo and Bar have no annotations of their own, then Foo and Bar are exposed as local views of the bean. The bean may be explicitly marked @Local as:

@Local
@Stateless
public class Bean implements Foo, Bar {
    . . .
}

then this is the same behavior as explained above, i.e. Foo and Bar are local views. If the bean is marked @Remote as:

@Remote
@Stateless
public class Bean implements Foo, Bar {
    . . .
}

then Foo and Bar are remote views. If an interface is marked @Local or @Remote, then each interface needs to be marked explicitly to be exposed as a view. For example:

@Remote
public interface Foo { . . . }

@Stateless
public class Bean implements Foo, Bar {
    . . .
}

only exposes one remote interface, Foo. Section 4.9.7 of the specification provides more details about this feature.

TimerService.getAllTimers is a newly added convenience API that returns all timers in the same bean. This is only for displaying the list of timers, as a timer can only be canceled by its owner.

Removed the restriction on obtaining the current class loader, and allowed use of the java.io package. This is handy if you want to do file access within your beans.

JMS 2.0 alignment - A standard list of activation-config properties is now defined: destinationLookup, connectionFactoryLookup, clientId, subscriptionName, shareSubscriptions.

Tons of other clarifications throughout the spec. Appendix A provides a comprehensive list of changes since EJB 3.1. ThreadContext in Singleton is guaranteed to be thread-safe. The embeddable container implements AutoCloseable.

A complete replay of Enterprise JavaBeans Today and Tomorrow from JavaOne 2012 can be seen here (click on CON4654_mp4_4654_001 in Media). The specification is still evolving, so the actual property or method names or their actual behavior may be different from the currently proposed ones. Are there any improvements that you'd like to see in EJB 3.2? The EJB 3.2 Expert Group would love to hear your feedback. An Early Draft of the specification is available. The latest version of the specification can always be downloaded from here.

Java EE 7 Specification Status
EJB Specification Project
JIRA of EJB Specification
JSR Expert Group Discussion Archive

These features will start showing up in GlassFish 4 Promoted Builds soon.
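For readers who want to see the JMS 2.0 alignment in practice, here is a minimal sketch (not from the original post) of a message-driven bean that uses the standard activation-config property names listed above. The bean class and the JNDI names jms/OrderQueue and jms/OrderConnectionFactory are hypothetical placeholders, so treat this as an illustration rather than a definitive recipe.

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(activationConfig = {
    // Standard property names from the list above; the JNDI names are made up.
    @ActivationConfigProperty(propertyName = "destinationLookup",
                              propertyValue = "jms/OrderQueue"),
    @ActivationConfigProperty(propertyName = "connectionFactoryLookup",
                              propertyValue = "jms/OrderConnectionFactory"),
    // destinationType is a pre-existing standard property, shown for completeness.
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue")
})
public class OrderListener implements MessageListener {
    @Override
    public void onMessage(Message message) {
        // Process the incoming message here.
    }
}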

    Read the article

  • JPA 2.1 Schema Generation (TOTD #187)

    - by arungupta
This blog explained some of the key features of JPA 2.1 earlier. Since then, Schema Generation has been added to JPA 2.1. This Tip Of The Day (TOTD) will provide more details about this new feature in JPA 2.1.

Schema Generation refers to the generation of database artifacts like tables, indexes, and constraints in a database schema. It may or may not involve generation of a proper database schema depending upon the credentials and authorization of the user. This helps in prototyping of your application where the required artifacts are generated either prior to application deployment or as part of EntityManagerFactory creation. This is also useful in environments that require provisioning a database on demand, e.g. in a cloud. This feature will allow your JPA domain object model to be directly generated in a database. The generated schema may need to be tuned for the actual production environment. This use case is supported by allowing the schema generation to occur into DDL scripts which can then be further tuned by a DBA.

The following set of properties, in persistence.xml or specified during EntityManagerFactory creation, controls the behaviour of schema generation:

javax.persistence.schema-generation-action
Purpose: Controls the action to be taken by the persistence provider.
Values: "none", "create", "drop-and-create", "drop"

javax.persistence.schema-generation-target
Purpose: Controls whether the schema is to be created in the database, whether DDL scripts are to be created, or both.
Values: "database", "scripts", "database-and-scripts"

javax.persistence.ddl-create-script-target, javax.persistence.ddl-drop-script-target
Purpose: Controls target locations for writing of scripts. Writers are pre-configured for the persistence provider. Need to be specified only if scripts are to be generated.
Values: java.io.Writer (e.g. MyWriter.class) or URL strings

javax.persistence.ddl-create-script-source, javax.persistence.ddl-drop-script-source
Purpose: Specifies locations from which DDL scripts are to be read. Readers are pre-configured for the persistence provider.
Values: java.io.Reader (e.g. MyReader.class) or URL strings

javax.persistence.sql-load-script-source
Purpose: Specifies the location of a SQL bulk load script.
Values: java.io.Reader (e.g. MyReader.class) or URL string

javax.persistence.schema-generation-connection
Purpose: JDBC connection to be used for schema generation.

javax.persistence.database-product-name, javax.persistence.database-major-version, javax.persistence.database-minor-version
Purpose: Needed if scripts are to be generated and there is no connection to the target database. Values are those obtained from JDBC DatabaseMetaData.

javax.persistence.create-database-schemas
Purpose: Whether the persistence provider needs to create a schema in addition to creating database objects such as tables, sequences, constraints, etc.
Values: "true", "false"

Section 11.2 in the JPA 2.1 specification defines the annotations used for the schema generation process. For example, @Table, @Column, @CollectionTable, @JoinTable, and @JoinColumn are used to define the generated schema. Several layers of defaulting may be involved. For example, the table name is defaulted from the entity name, and the entity name (which can be specified explicitly as well) is defaulted from the class name. However, annotations may be used to override or customize the values. The following entity class:

@Entity
public class Employee {
    @Id private int id;
    private String name;
    . . .
    @ManyToOne
    private Department dept;
}

is generated in the database with the following attributes:
Maps to the EMPLOYEE table in the default schema.
The "id" field is mapped to the ID column as the primary key.
"name" is mapped to the NAME column with a default VARCHAR(255). The length of this field can be easily tuned using @Column.
@ManyToOne is mapped to the DEPT_ID foreign key column. Can be customized using @JoinColumn.

In addition to these properties, a couple of new annotations are added to JPA 2.1:

@Index - An index for the primary key is generated by default in a database. This new annotation will allow additional indexes to be defined, over a single column or multiple columns, for better performance. This is specified as part of @Table, @SecondaryTable, @CollectionTable, @JoinTable, and @TableGenerator. For example:

@Table(indexes = {@Index(columnList="NAME"), @Index(columnList="DEPT_ID DESC")})
@Entity
public class Employee {
    . . .
}

The generated table will have a default index on the primary key. In addition, two new indexes are defined: one on the NAME column (default ascending) and one on the foreign key that maps to the department, in descending order.

@ForeignKey - It is used to define a foreign key constraint, or to otherwise override or disable the persistence provider's default foreign key definition. It can be specified as part of JoinColumn(s), MapKeyJoinColumn(s), and PrimaryKeyJoinColumn(s). For example:

@Entity
public class Employee {
    @Id private int id;
    private String name;
    @ManyToOne
    @JoinColumn(foreignKey=@ForeignKey(foreignKeyDefinition="FOREIGN KEY (MANAGER_ID) REFERENCES MANAGER"))
    private Manager manager;
    . . .
}

In this entity, the employee's manager is mapped by the MANAGER_ID column in the MANAGER table. The value of foreignKeyDefinition would be a database-specific string.

A complete replay of Linda's talk at JavaOne 2012 can be seen here (click on CON4212_mp4_4212_001 in Media). These features will be available in GlassFish 4 promoted builds in the near future. JPA 2.1 will be delivered as part of Java EE 7. The different components in the Java EE 7 platform are tracked here. The JPA 2.1 Expert Group has released Early Draft 2 of the specification. Sections 9.4 and 11.2 provide all the details about Schema Generation. The latest javadocs can be obtained from here. And the JPA EG would appreciate feedback.
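If you would rather drive schema generation from code than from persistence.xml, the draft property names from the table above can also be passed to Persistence.createEntityManagerFactory. The sketch below assumes a persistence unit named "employeePU" and invented script file names; since the specification is still a draft, the property names may change in the final release.

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class SchemaGenerationDemo {
    public static void main(String[] args) {
        // Draft schema-generation property names as listed above.
        Map<String, Object> props = new HashMap<String, Object>();
        props.put("javax.persistence.schema-generation-action", "drop-and-create");
        props.put("javax.persistence.schema-generation-target", "database-and-scripts");
        props.put("javax.persistence.ddl-create-script-target", "create-employee.sql");
        props.put("javax.persistence.ddl-drop-script-target", "drop-employee.sql");

        // "employeePU" is a hypothetical persistence unit name.
        EntityManagerFactory emf =
                Persistence.createEntityManagerFactory("employeePU", props);
        emf.close();
    }
}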

    Read the article

  • Installing multiple PHP versions plus extensions on FreeBSD

    - by jgtumusiime
    I'm a currently learning how to work with freebsd. Lately I have been trying to run multiple php versions along with their respective packages. However, I seem to be running into issues while making installations. The default location for my php installation is /usr/local/etc/, however I want to be able to install php5.2, php5.3 and php5.4 in /usr/local/etc/php52, /usr/local/etc/php53 and /usr/local/etc/php54 respectively. Using ports I simply achieved this by doing cd /usr/ports/lang/php5x && make PREFIX="/usr/local/etc/php5x" install clean. The problem now is: How do I do the same for extensions of all my PHP versions? When I try installing php-extensions like so: cd /usr/ports/lang/php5x-extension && make PREFIX="/usr/local/etc/php5x/lib/php" install clean, I get this error ... ===> PHPizing for php53-bcmath-5.3.17 env: /usr/local/bin/phpize: No such file or directory *** Error code 127 Stop in /usr/ports/math/php53-bcmath. *** Error code 1 Stop in /usr/ports/lang/php53-extensions. My PHPize is located in /usr/local/etc/php5x/bin/phpize So how do I get make or whatever to look for phpize in the right path? Is there a cleaner, may be simpler way of maintaining multiple php installations? I need to achieve this because of compatibility issues from some legacy code that runs on 5.2 and breaks on 5.3. Thank you. ================= So I successfully installed an configured freebsd jail and I would like to install software within my jail but I cannot connect to the network. Here is my rc.conf jail_enable="YES" # Set to NO to disable starting of any jails jail_list="mambo2" # Space separated list of names of jails jail_mambo2_rootdir="/usr/jails/j01" # jail's root directory jail_mambo2_hostname="mambo2.ug" # jail's hostname jail_mambo2_ip="192.168.100.174" # jail's IP address jail_mambo2_devfs_enable="YES" # mount devfs in the jail jail_mambo2_devfs_ruleset="mambo2_ruleset" # devfs ruleset to apply to jail here is my jail ifconfig output mambo2# ifconfig rl0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 options=8<VLAN_MTU> ether 00:c1:28:00:48:db media: Ethernet autoselect (100baseTX <full-duplex>) status: active plip0: flags=108810<POINTOPOINT,SIMPLEX,MULTICAST,NEEDSGIANT> metric 0 mtu 1500 lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384 mambo2# I created a /etc/resolv.conf for nameservers mambo2# cat /etc/resolv.conf nameserver 192.168.100.251 nameserver 8.8.8.8 mambo2# Here is a list of jails running [root@mambo /usr/home/jtumusiime]# jls JID IP Address Hostname Path 5 192.168.100.174 mambo2.ug /usr/jails/j01 my host has 4 ip addresses, 3 public and one private: 192.168.100.173 I tried creating a jail using ezjail and this does not work out. [root@mambo /usr/home/jtumusiime]# ezjail-admin update -p -i Error: Cannot find your copy of the FreeBSD source tree in . Consider using 'ezjail-admin install' to create the base jail from an ftp server. [root@mambo /usr/home/jtumusiime]# I have an updated copy of freebsd 7.1 source in /usr/src/ and I did #make buildworld while building the first jail mambo2 Here is an excerpt of ouput of ezjail-admin install ... 221 Goodbye. Trying 193.162.146.4... Connected to ftp.freebsd.org. 220 ftp.beastie.tdk.net FTP server (Version 6.00LS) ready. 331 Guest login ok, send your email address as password. 230 Guest login ok, access restrictions apply. Remote system type is UNIX. Using binary mode to transfer files. 200 Type set to I. 550 pub/FreeBSD-Archive/old-releases/i386/7.1-RELEASE/base: No such file or directory. 
221 Goodbye. Could not fetch base from ftp.freebsd.org. Maybe your release (7.1-RELEASE) is specified incorrectly or the host ftp.freebsd.org does not provide that release build. Use the -r option to specify an existing release or the -h option to specify an alternative ftp server. Querying your ftp-server... The ftp server you specified (ftp.freebsd.org) seems to provide the following builds:
Trying 193.162.146.4...
total 10
drwxrwxr-x 13 1006 1006  512 Feb 20  2011 8.2-RELEASE
drwxrwxr-x 13 1006 1006  512 Apr 10  2012 8.3-RELEASE
lrwxr-xr-x  1 1006 1006   16 Jan  7  2012 9.0-RELEASE -> i386/9.0-RELEASE
drwxrwxr-x  7 1006 1006 1024 Feb 19  2012 ISO-IMAGES
-rw-rw-r--  1 1006 1006  637 Nov 23  2005 README.TXT
drwxrwxr-x  5 1006 1006  512 Nov  2 02:59 i386
I do not want to upgrade my FreeBSD installation. I have googled around, but all in vain.

    Read the article

  • Deployment Workbench no longer available after PXE boot

    - by Patrick
    Our build process revolves around windows Deployment Workbench. Unfortunately this was setup by someone who is no longer with the company, and no-one has ever dared/needed to make any changes. The other day it stopped working. It turns out that one of our build guys started thinking about changing some stuff in it, clicked something and now it no longer works (He is saying now that he right clicked on the 'LAB' entry in 'Deployment points' and hit 'Update', which took some time to run through apparently). The job has fallen on me to resolve and frankly I'm not sure what I'm doing. I was wondering if someone with more experience than me can provide some pointers as to troubleshooting cos I'm feeling quite a lot in the dark here. On the server I have Deployment Workbench up and running (MMC snapin) version 3.0. There is a WDS service that appears to be running ok, as does the tFTPd service. Nothing specific to this in event logs. From the client side; PXE boot works and gets you to the Win PE launch, and it has the correct company logo as the background (proving to me that its loading win PE from the network). WPEINIT runs, and asks for domain credentials, here the team simply put User/Pass/Domain in the boxes and click ok. Normally the build would kick off. Instead they get an error message saying that the \NATBLU01\Distribution$ share isn't available. Checking \NATBLU01\Distribution$ shows that its there and accessible over the network. Security/permissions seem ok, even 'ANONYMOUS LOGON' has read access to that share so I don't see that being a problem. Digging the trace files from C:\MININT\SMSOSD\OSDLOGS\ after an attempt to run the build I can see an error saying much the same - <![LOG[Validating connection to \\NATBLU01\Distribution$]LOG]!><time="16:42:14.000+000" date="03-15-2012" component="LiteTouch" context="" type="1" thread="" file="LiteTouch"> <![LOG[FindFile: The file OSDConnectToUNC.exe could not be found in any standard locations.]LOG]!><time="16:42:14.000+000" date="03-15-2012" component="LiteTouch" context="" type="1" thread="" file="LiteTouch"> <![LOG[The network location cannot be reached. For information about network troubleshooting, see Windows Help.]LOG]!><time="16:42:24.000+000" date="03-15-2012" component="LiteTouch" context="" type="3" thread="" file="LiteTouch"> <![LOG[ERROR - Unable to map a network drive to \\NATBLU01\Distribution$.]LOG]!><time="16:42:24.000+000" date="03-15-2012" component="LiteTouch" context="" type="3" thread="" file="LiteTouch"> BDD.LOG shows much the same. Full copies of the .LOG files can both files be found here : BDD.LOG LITETOUCH.LOG I can get to a command prompt from the Win PE that boots from PXE, however there isn't any network stuff there. IPCONFIG returns nothing so none of the tests I would usually run resolve anything. I'm at a loss frankly. I did wonder if I could perhaps start a new build process but if the change to the DeploymentWorkbench has knocked it offline I don't think I'm going to be able to create a new deployment. Failing that; we do have a deployment point labeled type 'Media' which appears to be a DVD ISO image of one of the builds, but its dated 2008, is it possible to export the network build to .ISO and build from DVD? We are looking at new hardware to run this from anyway (for the impending Windows 7 roll out) so a temporary work round isn't going to be too much of a problem. All assistance is appreciated! EDIT : OK. Got it working again. Solution was close to Newmanth's idea. 
The problem was that our PE image didn't appear to be connecting the network. I had an older copy of the PE boot.WIM on a stick that I had been using for other purposes. I booted that and correctly got a network connection. Showed a correct internal IP and could ping out etc etc. However I was still getting the same errors in all the logs and in when wpeinit was running. What I did seperately was to update the PE image that DeploymentWorkbench was pushing out to display a different back ground. I wanted to prove that I was working in the correct place. Turns out that I wasn't. I went and looked at the other deployment stuff we had on this machine, Windows Deployment Services was installed and although all the install images are off line the boot image was online, so I uploaded the copy from my stick to that. Booted straight off. And fixed. Working. Yay! For anyone stumbling across this in the future you may find that although your deployment images are located in the DeploymentWorkbench, the Win PE boot image you are launching from is located in the associated Windows Deployment Services images.

    Read the article

  • MSVC Compiler options with mojo-native in Maven

    - by graham.reeds
    I'm trying to set up a test environment with Maven to build VC++ and I am way behind on this. I have 3 source files that I builds a dll (once I get this fixed it should be a simple matter to add the unit-tests): hook.cpp hook.h hook.def This is compiled, on the command line, with the following: C:\Develop\hook\src\main\msvc>cl -o hook.dll Hook.cpp /D HOOK_DLLEXPORT /link /DLL /DEF:"Hook.def" Which produces the expected obj, dll, lib and exp files. So now to get this working in Maven2 with the Mojo-native plugin. With no options Maven w/Mojo gives me this (truncated) output: [INFO] [native:initialize {execution: default-initialize}] [INFO] [native:unzipinc {execution: default-unzipinc}] [INFO] [native:javah {execution: default-javah}] [INFO] [native:compile {execution: default-compile}] [INFO] cmd.exe /X /C "cl -IC:\Develop\hook\src\main\msvc /FoC:\Develop\hook\targ et\objs\Hook.obj -c C:\Develop\hook\src\main\msvc\Hook.cpp" Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 15.00.30729.01 for 80x86 Copyright (C) Microsoft Corporation. All rights reserved. Hook.cpp [INFO] [native:link {execution: default-link}] [INFO] cmd.exe /X /C "link.exe /out:C:\Develop\hook\target\hook.dll target\objs\ Hook.obj" Microsoft (R) Incremental Linker Version 9.00.30729.01 Copyright (C) Microsoft Corporation. All rights reserved. LINK : fatal error LNK1561: entry point must be defined [INFO] ------------------------------------------------------------------------ [ERROR] BUILD ERROR [INFO] ------------------------------------------------------------------------ [INFO] Error executing command line. Exit code:1561 mojo-native gives options for manipulating the compiler/linker options but gives no example of usage. No matter what I tweak in these settings I get the error of: [ERROR] BUILD ERROR [INFO] ------------------------------------------------------------------------ [INFO] Failed to configure plugin parameters for: org.codehaus.mojo:native-maven -plugin:1.0-alpha-4 (found static expression: '-o hook.dll Hook.cpp /D HOOK_DLLEXPORT /link /DLL /DEF:"Hook.def"' which may act as a default value). Cause: Cannot assign configuration entry 'compilerStartOptions' to 'interface ja va.util.List' from '-o hook.dll Hook.cpp /D HOOK_DLLEXPORT /link /DLL /DEF:"Hook .def"', which is of type class java.lang.String The relevant part of my pom.xml looks like this: <configuration> <sources> <source> <directory>src/main/msvc</directory> <includes> <include>**/*.cpp</include> </includes> </source> </sources> <compilerProvider>msvc</compilerProvider> <compilerExecutable>cl</compilerExecutable> <!-- cl -o hook.dll Hook.cpp /D HOOK_DLLEXPORT /link /DLL /DEF:"Hook.def" --> <compilerStartOptions>-o hook.dll Hook.cpp /D HOOK_DLLEXPORT /link /DLL /DEF:"Hook.def"</compilerStartOptions> <!-- <compilerMiddleOptions></compilerMiddleOptions> <compilerEndOptions></compilerEndOptions> <linkerStartOptions></linkerStartOptions> <linkerMiddleOptions></linkerMiddleOptions> <linkerEndOptions></linkerEndOptions> --> </configuration> How do I manipulate the compiler to produce a DLL like the command line version does? Or should I give up and just use exec?

    Read the article

  • WPF Layout algorithm woes - control will resize, but not below some arbitrary value.

    - by Quantumplation
    I'm working on an application for a client, and one of the requirements is the ability to make appointments, and display the current week's appointments in a visual format, much like in Google Calender's or Microsoft Office. I found a great (3 part) article on codeproject, in which he builds a "RangePanel", and composes one for each "period" (for example, the work day.) You can find part 1 here: http://www.codeproject.com/KB/WPF/OutlookWpfCalendarPart1.aspx The code presents, but seems to choose an arbitrary height value overall (440.04), and won't resize below that without clipping. What I mean to say, is that the window/container will resize, but it just cuts off the bottom of the control, instead of recalculating the height of the range panels, and the controls in the range panels representing the appointment. It will resize and recalculate for greater values, but not less. Code-wise, what's happening is that when you resize below that value, first the "MeasureOverride" is called with the correct "new height". However, by the time the "ArrangeOverride" method is called, it's passing the same 440.04 value as the height to arrange to. I need to find a solution/workaround, but any information that you can provide that might direct me for things to look into would also be greatly appreciated ( I understand how frustrating it is to debug code when you don't have the codebase in front of you. :) ) The code for the various Arrange and Measure functions are provided below. The "CalendarView" control has a "CalendarViewContentPresenter", which handles several periods. Then, the periods have a "CalendarPeriodContentPresenter", which handles each "block" of appointments. Finally, the "RangePanel" has it's own implementation. (To be honest, i'm still a bit hazy on how the control works, so if my explanations are a bit hazy, the article I linked probably has a more cogent explanation. :) ) CalendarViewContentPresenter: protected override Size ArrangeOverride(Size finalSize) { int columnCount = this.CalendarView.Periods.Count; Size columnSize = new Size(finalSize.Width / columnCount, finalSize.Height); double elementX = 0; foreach (UIElement element in this.visualChildren) { element.Arrange(new Rect(new Point(elementX, 0), columnSize)); elementX = elementX + columnSize.Width; } return finalSize; } protected override Size MeasureOverride(Size constraint) { this.GenerateVisualChildren(); this.GenerateListViewItemVisuals(); // If it's coming back infinity, just return some value. if (constraint.Width == Double.PositiveInfinity) constraint.Width = 10; if (constraint.Height == Double.PositiveInfinity) constraint.Height = 10; return constraint; } CalendarViewPeriodPersenter: protected override Size ArrangeOverride(Size finalSize) { foreach (UIElement element in this.visualChildren) { element.Arrange(new Rect(new Point(0, 0), finalSize)); } return finalSize; } protected override Size MeasureOverride(Size constraint) { this.GenerateVisualChildren(); return constraint; } RangePanel: protected override Size ArrangeOverride(Size finalSize) { double containerRange = (this.Maximum - this.Minimum); foreach (UIElement element in this.Children) { double begin = (double)element.GetValue(RangePanel.BeginProperty); double end = (double)element.GetValue(RangePanel.EndProperty); double elementRange = end - begin; Size size = new Size(); size.Width = (Orientation == Orientation.Vertical) ? finalSize.Width : elementRange / containerRange * finalSize.Width; size.Height = (Orientation == Orientation.Vertical) ? 
elementRange / containerRange * finalSize.Height : finalSize.Height; Point location = new Point(); location.X = (Orientation == Orientation.Vertical) ? 0 : (begin - this.Minimum) / containerRange * finalSize.Width; location.Y = (Orientation == Orientation.Vertical) ? (begin - this.Minimum) / containerRange * finalSize.Height : 0; element.Arrange(new Rect(location, size)); } return finalSize; } protected override Size MeasureOverride(Size availableSize) { foreach (UIElement element in this.Children) { element.Measure(availableSize); } // Constrain infinities if (availableSize.Width == double.PositiveInfinity) availableSize.Width = 10; if (availableSize.Height == double.PositiveInfinity) availableSize.Height = 10; return availableSize; }

    Read the article

  • InstallShield Development Error during the Installation process

    - by Alopekos
    Hey guys, I've been stuck by a problem for the past couple days that makes no sense to me. My installer builds fine in the Installshiled IDE but when it is about to finish the installation, int gets two errors then rollbacks: installation failure. Right when the install bar is at about 100%, an error box pops up that states this: "Error 1001.Exception occurred while initializing the installation: System.IO.FileLoadException: Attempt to load an unverifiable executable with fixups (IAT with more than 2 sections or a TLS section.) (Exception from HRESULT: 0x80131019)." The box pops up once, then the installer flashes its status to "rollback" then pops up another error box, then once 'ok'ed it procedes to rollback as usual. I don't understand that error message so i looked in the msi loggings and found this: "InstallShield 13:20:08: Initializing Property Bag... InstallShield 13:20:08: Getting file count from property bag InstallShield 13:20:08: File Count : 7 InstallShield 13:20:08: Sorting Based On Order... InstallShield 13:20:08: This setup is running on a 32-bit Windows...No need to load ISBEW64.exe InstallShield 13:20:08: Registering file C:\Program Files\Cadwell\Easy III\QMWSChartDataServer.dll (32-bit) InstallShield 13:20:09: Registering file C:\Program Files\Cadwell\Easy III\DataDelivery.dll (32-bit) InstallShield 13:20:09: Registering file C:\Program Files\Cadwell\Easy III\QMGlobalData.dll (32-bit) InstallShield 13:20:09: Registering file C:\Program Files\Cadwell\Easy III\QMAdoDB.dll (32-bit) InstallShield 13:20:09: Registering file C:\Program Files\Cadwell\Easy III\QMPatientData.dll (32-bit) InstallShield 13:20:09: Registering file C:\Program Files\Cadwell\Easy III\MedShareGlobalData.dll (32-bit) InstallShield 13:20:09: Registering file C:\Program Files\Cadwell\Easy III\MedDirectory.dll (32-bit) InstallShield 13:20:09: Begin Comitting Property Bag InstallShield 13:20:09: Write KeyList count InstallShield 13:20:09: Finished Comitting Property Bag Action 13:20:09: _EBDE7916DF6AF3B644016C54F66930DC.commit. Action 13:20:09: _EBDE7916DF6AF3B644016C54F66930DC.rollback. Action 13:20:09: _EBDE7916DF6AF3B644016C54F66930DC.install. Error 1001.Exception occurred while initializing the installation: System.IO.FileLoadException: Attempt to load an unverifiable executable with fixups (IAT with more than 2 sections or a TLS section.) (Exception from HRESULT: 0x80131019). MSI (s) (34!84) [13:20:26:455]: Info 2769.Custom Action _EBDE7916DF6AF3B644016C54F66930DC.install did not close 1 MSIHANDLEs. Action ended 13:20:26: InstallFinalize. Return value 3. Action 13:20:26: Rollback. Rolling back action: Rollback: _EBDE7916DF6AF3B644016C54F66930DC.install Rollback: _EBDE7916DF6AF3B644016C54F66930DC.rollback Error 1001.Exception occurred while initializing the installation: System.IO.FileLoadException: Attempt to load an unverifiable executable with fixups (IAT with more than 2 sections or a TLS section.) (Exception from HRESULT: 0x80131019). MSI (s) (34!E8) [13:20:27:036]: Info 2769.Custom Action _EBDE7916DF6AF3B644016C54F66930DC.rollback did not close 1 MSIHANDLEs. Rollback: _EBDE7916DF6AF3B644016C54F66930DC.commit Rollback: ISSelfRegisterFiles Rollback: Registering modules Rollback: Registering type libraries Rollback: Writing system registry values Rollback: Registering program identifiers" All rollbacking commands after this point. 
For some reason it looks like to me that installshield is trying to launch my program before it finishes the installation, even when I told it to prompt the user to decide to launch. Is this a registering command system that makes it attemp or what? I've been scouring the web all day and I've found some ideas, but I havent seen any solutions as of yet. The installers that I've tried(and failed) have always needed to be Setup.exes, when i try to build a .msi only setup I get this error message. It may help someone who knows more about this system than I do. Your project contains InstallShield prerequisites. A Setup.exe setup launcher is required if you are build a release that includes InstallShield prerequisites. Change your release settings to build Setup.exe, or remove the prerequisites from your project. -7076 There's nothing on the site that has anything from the error code, so I'm at a loss. Thanks for any help possible, I'm an intern and I'm fresh out of ideas... Josh System: XP SP3 Installshield 2010 Pro Install being tested on a VirtualPC

    Read the article

  • Partial Trust in WPF 4

    - by Hadi Eskandari
    I've started a new project in WPF 4 (.NET 4) and trying to see if I can run it in xbap mode. I need to run the application in Full Trust with the new mode made available in .NET 4 which asks the end user if the full trust application should be run. I've set the "Security" mode to "Full Trust" application, and it builds just fine. When I run it, an exception is thrown and IE error message shows the following error. Any ways around it?? Startup URI: T:\projects\Hightech Sources\PayRoll\PayRoll.Web\publish\PayRoll.Web.xbap Application Identity: file:///T:/projects/Hightech%20Sources/PayRoll/PayRoll.Web/publish/PayRoll.Web.xbap#PayRoll.Web.xbap, Version=1.0.0.0, Culture=neutral, PublicKeyToken=1d910f49755d2c97, processorArchitecture=msil/PayRoll.Web.exe, Version=1.0.0.0, Culture=neutral, PublicKeyToken=1d910f49755d2c97, processorArchitecture=msil, type=win32 System.Security.SecurityException: Request for the permission of type 'System.Security.Permissions.FileIOPermission, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed. at System.Security.CodeAccessSecurityEngine.Check(Object demand, StackCrawlMark& stackMark, Boolean isPermSet) at System.Security.CodeAccessSecurityEngine.Check(CodeAccessPermission cap, StackCrawlMark& stackMark) at System.Security.CodeAccessPermission.Demand() at System.Reflection.RuntimeAssembly.InternalLoadAssemblyName(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection, Boolean suppressSecurityChecks) at System.Reflection.RuntimeAssembly.InternalLoadFrom(String assemblyFile, Evidence securityEvidence, Byte[] hashValue, AssemblyHashAlgorithm hashAlgorithm, Boolean forIntrospection, Boolean suppressSecurityChecks, StackCrawlMark& stackMark) at System.Reflection.Assembly.LoadFrom(String assemblyFile) at PayRoll.Web.App.SelectAssemblies() at Caliburn.PresentationFramework.ApplicationModel.CaliburnApplication..ctor() at PayRoll.Web.App..ctor() at PayRoll.Web.App.Main() at System.AppDomain._nExecuteAssembly(RuntimeAssembly assembly, String[] args) at System.AppDomain.nExecuteAssembly(RuntimeAssembly assembly, String[] args) at System.Runtime.Hosting.ManifestRunner.Run(Boolean checkAptModel) at System.Runtime.Hosting.ManifestRunner.ExecuteAsAssembly() at System.Runtime.Hosting.ApplicationActivator.CreateInstance(ActivationContext activationContext, String[] activationCustomData) at System.Runtime.Hosting.ApplicationActivator.CreateInstance(ActivationContext activationContext) at System.Windows.Interop.PresentationApplicationActivator.CreateInstance(ActivationContext actCtx) at System.Activator.CreateInstance(ActivationContext activationContext) at System.AppDomain.Setup(Object arg) at System.AppDomain.nCreateInstance(String friendlyName, AppDomainSetup setup, Evidence providedSecurityInfo, Evidence creatorsSecurityInfo, IntPtr parentSecurityDescriptor) at System.Runtime.Hosting.ApplicationActivator.CreateInstanceHelper(AppDomainSetup adSetup) at System.Runtime.Hosting.ApplicationActivator.CreateInstance(ActivationContext activationContext, String[] activationCustomData) at System.Windows.Interop.PresentationApplicationActivator.CreateInstance(ActivationContext actCtx) at System.Activator.CreateInstance(ActivationContext activationContext) at System.Deployment.Application.DeploymentManager.ExecuteNewDomain() at System.Deployment.Application.InPlaceHostingManager.Execute() at MS.Internal.AppModel.XappLauncherApp.ExecuteDownloadedApplication() at 
System.Windows.Interop.DocObjHost.RunApplication(ApplicationRunner runner) at MS.Internal.AppModel.XappLauncherApp.XappLauncherApp_Exit(Object sender, ExitEventArgs e) at System.Windows.Application.OnExit(ExitEventArgs e) at System.Windows.Application.DoShutdown() at System.Windows.Application.ShutdownImpl() at System.Windows.Application.ShutdownCallback(Object arg) at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Int32 numArgs) at MS.Internal.Threading.ExceptionFilterHelper.TryCatchWhen(Object source, Delegate method, Object args, Int32 numArgs, Delegate catchHandler) at System.Windows.Threading.DispatcherOperation.InvokeImpl() at System.Windows.Threading.DispatcherOperation.InvokeInSecurityContext(Object state) at System.Threading.ExecutionContext.runTryCode(Object userData) at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData) at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Windows.Threading.DispatcherOperation.Invoke() at System.Windows.Threading.Dispatcher.ProcessQueue() at System.Windows.Threading.Dispatcher.WndProcHook(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled) at MS.Win32.HwndWrapper.WndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled) at MS.Win32.HwndSubclass.DispatcherCallbackOperation(Object o) at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Int32 numArgs) at MS.Internal.Threading.ExceptionFilterHelper.TryCatchWhen(Object source, Delegate method, Object args, Int32 numArgs, Delegate catchHandler) at System.Windows.Threading.Dispatcher.InvokeImpl(DispatcherPriority priority, TimeSpan timeout, Delegate method, Object args, Int32 numArgs) at MS.Win32.HwndSubclass.SubclassWndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam) at MS.Win32.UnsafeNativeMethods.DispatchMessage(MSG& msg) at System.Windows.Threading.Dispatcher.PushFrameImpl(DispatcherFrame frame) at System.Windows.Threading.Dispatcher.PushFrame(DispatcherFrame frame) at System.Windows.Threading.Dispatcher.Run() at System.Windows.Application.RunDispatcher(Object ignore) at System.Windows.Application.StartDispatcherInBrowser(Object unused) at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Int32 numArgs) at MS.Internal.Threading.ExceptionFilterHelper.TryCatchWhen(Object source, Delegate method, Object args, Int32 numArgs, Delegate catchHandler) at System.Windows.Threading.DispatcherOperation.InvokeImpl() at System.Windows.Threading.DispatcherOperation.InvokeInSecurityContext(Object state) at System.Threading.ExecutionContext.runTryCode(Object userData) at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData) at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx) at System.Threading.ExecutionContext.Run(ExecutionContext 
executionContext, ContextCallback callback, Object state) at System.Windows.Threading.DispatcherOperation.Invoke() at System.Windows.Threading.Dispatcher.ProcessQueue() at System.Windows.Threading.Dispatcher.WndProcHook(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled) at MS.Win32.HwndWrapper.WndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled) at MS.Win32.HwndSubclass.DispatcherCallbackOperation(Object o) at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Int32 numArgs) at MS.Internal.Threading.ExceptionFilterHelper.TryCatchWhen(Object source, Delegate method, Object args, Int32 numArgs, Delegate catchHandler) at System.Windows.Threading.Dispatcher.InvokeImpl(DispatcherPriority priority, TimeSpan timeout, Delegate method, Object args, Int32 numArgs) at MS.Win32.HwndSubclass.SubclassWndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam) The action that failed was: Demand The type of the first permission that failed was: System.Security.Permissions.FileIOPermission

    Read the article

  • War won't deploy "Unresolved <ejb-link>" Glassfish 3, NetBeans 7

    - by Ime Imee
    I have enterprise aplication with ejb and war module, and since I created local interface web module wont deploy. It builds fine. EJB project is referenced inside web project. Also when I delete <ejb-local-ref> from web.xml it deploys, but then lookup method fails. Glassfish error: SEVERE: Exception while deploying the app [Projekat-war] : Error: Unresolved <ejb-link>: Projekat-ejb.jar#ZaWebSessionBean Simple interface: @Local public interface ZaWebSessionBeanLocal { String vrati(String str); } @Stateless public class ZaWebSessionBean implements ZaWebSessionBeanLocal { @Override public String vrati(String str) { return "vrati"; } // Add business logic below. (Right-click in editor and choose // "Insert Code > Add Business Method") } And web.xml <ejb-local-ref> <ejb-ref-name>ZaWebSessionBean</ejb-ref-name> <ejb-ref-type>Session</ejb-ref-type> <local>za_web.ZaWebSessionBeanLocal</local> <ejb-link>Projekat-ejb.jar#ZaWebSessionBean</ejb-link> </ejb-local-ref> Lookup method (generated) : public class HeaderBean { ZaWebSessionBeanLocal zaWebSessionBean = lookupZaWebSessionBeanLocal(); private ZaWebSessionBeanLocal lookupZaWebSessionBeanLocal() { try { Context c = new InitialContext(); return (ZaWebSessionBeanLocal) c.lookup("java:global/Projekat/Projekat-ejb/ZaWebSessionBean!za_web.ZaWebSessionBeanLocal"); } catch (NamingException ne) { Logger.getLogger(getClass().getName()).log(Level.SEVERE, "exception caught", ne); throw new RuntimeException(ne); } } Full log: SEVERE: Exception while deploying the app [Projekat-war] : Error: Unresolved <ejb-link>: Projekat-ejb.jar#ZaWebSessionBean SEVERE: Unresolved <ejb-link>: Projekat-ejb.jar#ZaWebSessionBean SEVERE: Exception while deploying the app [Projekat-war] SEVERE: Error: Unresolved <ejb-link>: Projekat-ejb.jar#ZaWebSessionBean java.lang.RuntimeException: Error: Unresolved <ejb-link>: Projekat-ejb.jar#ZaWebSessionBean at com.sun.enterprise.deployment.util.EjbBundleValidator.accept(EjbBundleValidator.java:724) at com.sun.enterprise.deployment.WebBundleDescriptor.visit(WebBundleDescriptor.java:2004) at com.sun.enterprise.deployment.Application.visit(Application.java:1777) at com.sun.enterprise.deployment.archivist.ApplicationFactory.openArchive(ApplicationFactory.java:195) at org.glassfish.javaee.core.deployment.DolProvider.load(DolProvider.java:185) at org.glassfish.javaee.core.deployment.DolProvider.load(DolProvider.java:94) at com.sun.enterprise.v3.server.ApplicationLifecycle.loadDeployer(ApplicationLifecycle.java:827) at com.sun.enterprise.v3.server.ApplicationLifecycle.setupContainerInfos(ApplicationLifecycle.java:769) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:368) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:240) at org.glassfish.deployment.admin.DeployCommand.execute(DeployCommand.java:389) at com.sun.enterprise.v3.admin.CommandRunnerImpl$1.execute(CommandRunnerImpl.java:348) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:363) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1085) at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$1200(CommandRunnerImpl.java:95) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1291) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1259) at com.sun.enterprise.v3.admin.AdminAdapter.doCommand(AdminAdapter.java:461) at 
com.sun.enterprise.v3.admin.AdminAdapter.service(AdminAdapter.java:212) at com.sun.grizzly.tcp.http11.GrizzlyAdapter.service(GrizzlyAdapter.java:179) at com.sun.enterprise.v3.server.HK2Dispatcher.dispath(HK2Dispatcher.java:117) at com.sun.enterprise.v3.services.impl.ContainerMapper$Hk2DispatcherCallable.call(ContainerMapper.java:354) at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:195) at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:849) at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:746) at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1045) at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:228) at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90) at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79) at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54) at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59) at com.sun.grizzly.ContextTask.run(ContextTask.java:71) at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532) at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513) at java.lang.Thread.run(Thread.java:619) SEVERE: Exception while deploying the app [Projekat-war] : Error: Unresolved <ejb-link>: Projekat-ejb.jar#ZaWebSessionBean
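One approach worth trying, offered here only as a hedged sketch rather than a guaranteed fix for the unresolved <ejb-link>: because the bean and its local interface are packaged in the same enterprise application, you can usually delete the <ejb-local-ref> from web.xml and the hand-written InitialContext lookup, and let the container inject the local view with @EJB instead. This assumes HeaderBean is itself a container-managed component (for example a JSF managed bean or a servlet); plain objects created with new will not be injected.

import javax.ejb.EJB;

public class HeaderBean {

    // Container-managed injection of the local business view; no <ejb-link>
    // or global JNDI name has to be resolved by hand.
    @EJB
    private ZaWebSessionBeanLocal zaWebSessionBean;

    public String vrati(String str) {
        return zaWebSessionBean.vrati(str);
    }
}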

    Read the article

  • Errors raised when a NetBeans Maven project tries to run

    - by Zakaria
    Hi everybody, I installed NetBeans 6.8 on Vista and and I'm trying to execute a simple Maven Project. When I ran the project, I got this set of errors: WARNING: You are running embedded Maven builds, some build may fail due to incompatibilities with latest Maven release. To set Maven instance to use for building, click here. Scanning for projects... [#process-resources] [resources:resources] Using default encoding to copy filtered resources. [#compile] [compiler:compile] Nothing to compile - all classes are up to date [exec:exec] [EL Info]: 2010-04-04 18:22:54.907--ServerSession(15532856)--EclipseLink, version: Eclipse Persistence Services - 2.0.0.v20091127-r5931 [EL Severe]: 2010-04-04 18:22:54.929--ServerSession(15532856)--Local Exception Stack: Exception [EclipseLink-4003] (Eclipse Persistence Services - 2.0.0.v20091127-r5931): org.eclipse.persistence.exceptions.DatabaseException Exception in thread "main" javax.persistence.PersistenceException: Exception [EclipseLink-4003] (Eclipse Persistence Services - 2.0.0.v20091127-r5931): org.eclipse.persistence.exceptions.DatabaseException Exception Description: Configuration error. Class [org.apache.derby.jdbc.ClientDriver] not found. Exception Description: Configuration error. Class [org.apache.derby.jdbc.ClientDriver] not found. at org.eclipse.persistence.exceptions.DatabaseException.configurationErrorClassNotFound(DatabaseException.java:82) at org.eclipse.persistence.sessions.DefaultConnector.loadDriverClass(DefaultConnector.java:267) at org.eclipse.persistence.sessions.DefaultConnector.connect(DefaultConnector.java:85) at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.deploy(EntityManagerSetupImpl.java:392) at org.eclipse.persistence.sessions.DatasourceLogin.connectToDatasource(DatasourceLogin.java:162) at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.getServerSession(EntityManagerFactoryImpl.java:151) at org.eclipse.persistence.internal.sessions.DatabaseSessionImpl.loginAndDetectDatasource(DatabaseSessionImpl.java:584) at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.createEntityManagerImpl(EntityManagerFactoryImpl.java:207) at org.eclipse.persistence.internal.jpa.EntityManagerFactoryProvider.login(EntityManagerFactoryProvider.java:228) at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:195) at com.mycompany.chapter2_ex1.Main.main(Main.java:31) Caused by: Exception [EclipseLink-4003] (Eclipse Persistence Services - 2.0.0.v20091127-r5931): org.eclipse.persistence.exceptions.DatabaseException Exception Description: Configuration error. Class [org.apache.derby.jdbc.ClientDriver] not found. 
at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.deploy(EntityManagerSetupImpl.java:368) at org.eclipse.persistence.exceptions.DatabaseException.configurationErrorClassNotFound(DatabaseException.java:82) at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.getServerSession(EntityManagerFactoryImpl.java:151) at org.eclipse.persistence.sessions.DefaultConnector.loadDriverClass(DefaultConnector.java:267) at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.createEntityManagerImpl(EntityManagerFactoryImpl.java:207) at org.eclipse.persistence.sessions.DefaultConnector.connect(DefaultConnector.java:85) at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:195) at org.eclipse.persistence.sessions.DatasourceLogin.connectToDatasource(DatasourceLogin.java:162) at com.mycompany.chapter2_ex1.Main.main(Main.java:31) at org.eclipse.persistence.internal.sessions.DatabaseSessionImpl.loginAndDetectDatasource(DatabaseSessionImpl.java:584) at org.eclipse.persistence.internal.jpa.EntityManagerFactoryProvider.login(EntityManagerFactoryProvider.java:228) at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.deploy(EntityManagerSetupImpl.java:368) ... 4 more [ERROR]The following mojo encountered an error while executing: [ERROR]Group-Id: org.codehaus.mojo [ERROR]Artifact-Id: exec-maven-plugin [ERROR]Version: 1.1.1 [ERROR]Mojo: exec [ERROR]brought in via: Direct invocation [ERROR]While building project: [ERROR]Group-Id: com.mycompany [ERROR]Artifact-Id: chapter2_ex1 [ERROR]Version: 1.0-SNAPSHOT [ERROR]From file: C:\Users\Charlotte\Documents\NetBeansProjects\chapter2_ex1\pom.xml [ERROR]Reason: Result of cmd.exe /X /C ""C:\Program Files\Java\jdk1.6.0_11\bin\java.exe" -classpath C:\Users\Charlotte\Documents\NetBeansProjects\chapter2_ex1\target\classes;C:\Users\Charlotte\.m2\repository\javax\persistence\persistence-api\1.0\persistence-api-1.0.jar;C:\Users\Charlotte\.m2\repository\org\eclipse\persistence\javax.persistence\2.0.0\javax.persistence-2.0.0.jar;C:\Users\Charlotte\.m2\repository\org\eclipse\persistence\eclipselink\2.0.0-RC1\eclipselink-2.0.0-RC1.jar com.mycompany.chapter2_ex1.Main" execution is: '1'. ------------------------------------------------------------------------ For more information, run with the -e flag ------------------------------------------------------------------------ BUILD FAILED ------------------------------------------------------------------------ Total time: 3 seconds Finished at: Sun Apr 04 18:22:55 CEST 2010 Final Memory: 47M/94M ------------------------------------------------------------------------ Theses exceptions rose even if I can run the database by using the console (ij) and when I connect the Database, no errors are showing. Can you help me please? Thank you very much. Regards.
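The stack trace reduces to org.apache.derby.jdbc.ClientDriver not being on the runtime classpath when exec-maven-plugin launches the application; note that the generated classpath in the failing command contains only the project classes, the JPA API jars, and EclipseLink, but no Derby client JAR. A quick, hypothetical way to confirm that diagnosis is a check like the one below; the usual cure is then to add the Derby client JAR (for example the org.apache.derby:derbyclient artifact) to the project's dependencies.

public class DerbyDriverCheck {
    public static void main(String[] args) {
        try {
            // Succeeds only if derbyclient.jar is on the runtime classpath.
            Class.forName("org.apache.derby.jdbc.ClientDriver");
            System.out.println("Derby client driver found on the classpath.");
        } catch (ClassNotFoundException e) {
            // Same condition EclipseLink reports as
            // "Configuration error. Class [org.apache.derby.jdbc.ClientDriver] not found."
            System.out.println("Derby client driver missing from the classpath.");
        }
    }
}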

    Read the article

  • "Launch Failed. Binary Not Found." Snow Leopard and Eclipse C/C++ IDE issue.

    - by Alex
    Not a question, I've just scoured the internet in search of a solution for this problem and thought I'd share it with the good folks of SO. I'll put it in plain terms so that it's accessible to newbs. :) (Apologies if this is the wrong place -- just trying to be helpful.) This issue occurs for almost any user of OS X Snow Leopard who tries to use the Eclipse C/C++ IDE, but it is particularly annoying for the people (like me) who were using the Eclipse C/C++ IDE in Leopard and were unable to work with Eclipse any more when they upgraded. The issue occurs when users go to build/compile/link their software. They get the following error: "Launch Failed. Binary Not Found." Further, the "Binaries" branch in the project window on the left is simply nonexistent.

    THE PROBLEM: GCC 4.2 (the GNU Compiler Collection) that comes with Snow Leopard compiles binaries as 64-bit by default. Unfortunately, the linker that Eclipse uses does not understand 64-bit binaries; it reads 32-bit binaries. There may be other issues here, but in short they culminate in no binary being generated, at least not one that Eclipse can read, which translates into Eclipse not finding the binaries. Hence the error. One solution is to add an -arch i686 flag when making the file, but adding it by hand every time you build is annoying. Luckily for us, Snow Leopard also comes with GCC 4.0, which compiles 32-bit binaries by default, so another solution is simply to make it the default compiler. This is the way I did it.

    THE SOLUTION: The GCCs are in /usr/bin, which is normally a hidden folder, so you can't see it in the Finder unless you explicitly tell the system that you want to see hidden folders. What you want to do is go to the /usr/bin folder, delete the link that points the gcc command at GCC 4.2, and add a link that points it at GCC 4.0. In other words, when you or Eclipse invoke GCC, we want the command to go to the compiler that builds 32-bit binaries by default, so that the linker can read the files; we do not want it to go to the compiler that builds 64-bit binaries. The best way to do this is to go to Applications/Utilities and open the app called Terminal. A text prompt should come up; it should say something like "(Computer Name):~ (Username)$ " (with a space for your input at the end). Enter the following commands in sequence, verbatim, pressing Enter after each individual line:

        cd /usr/bin
        rm cc gcc c++ g++
        ln -s gcc-4.0 cc
        ln -s gcc-4.0 gcc
        ln -s c++-4.0 c++
        ln -s g++-4.0 g++

    Like me, you will probably get an error telling you that you don't have permission to access these files. If so, try the following commands instead:

        cd /usr/bin
        sudo rm cc gcc c++ g++
        sudo ln -s gcc-4.0 cc
        sudo ln -s gcc-4.0 gcc
        sudo ln -s c++-4.0 c++
        sudo ln -s g++-4.0 g++

    sudo may prompt you for a password. If you've never used sudo before, try just pressing Enter; if that doesn't work, try the password for your main admin account.

    OTHER POSSIBLE SOLUTIONS: You may be able to enter build variables into Eclipse. I tried this, but I don't know enough about it; if you want to feel it out, the flag you will probably need is -arch i686. In earnest, GCC 4.0 has worked for me all this time, and I don't see any reason to switch now. There may be a way to alter the default for the compiler itself, but once again, I don't know enough about it. Hope this has been helpful and informative. Good coding!

    Read the article

  • ASP.NET WebControl ITemplate child controls are null

    - by tster
    Here is what I want: I want a control to put on a page, which other developers can place form elements inside of to display the entities that my control is searching. I have the Searching logic all working. The control builds custom search fields and performs searches based on declarative C# classes implementing my SearchSpec interface. Here is what I've been trying: I've tried using ITemplate on a WebControl which implements INamingContainer I've tried implementing a CompositeControl The closest I can get to working is below. OK I have a custom WebControl [ AspNetHostingPermission(SecurityAction.Demand, Level = AspNetHostingPermissionLevel.Minimal), AspNetHostingPermission(SecurityAction.InheritanceDemand, Level = AspNetHostingPermissionLevel.Minimal), DefaultProperty("SearchSpecName"), ParseChildren(true), ToolboxData("<{0}:SearchPage runat=\"server\"> </{0}:SearchPage>") ] public class SearchPage : WebControl, INamingContainer { [Browsable(false), PersistenceMode(PersistenceMode.InnerProperty), DefaultValue(typeof(ITemplate), ""), Description("Form template"), TemplateInstance(TemplateInstance.Single), TemplateContainer(typeof(FormContainer))] public ITemplate FormTemplate { get; set; } public class FormContainer : Control, INamingContainer{ } public Control MyTemplateContainer { get; private set; } [Bindable(true), Category("Behavior"), DefaultValue(""), Description("The class name of the SearchSpec to use."), Localizable(false)] public virtual string SearchSpecName { get; set; } [Bindable(true), Category("Behavior"), DefaultValue(true), Description("True if this is query mode."), Localizable(false)] public virtual bool QueryMode { get; set; } private SearchSpec _spec; private SearchSpec Spec { get { if (_spec == null) { Type type = Assembly.GetExecutingAssembly().GetTypes().Where(t => t.Name == SearchSpecName).First(); _spec = (SearchSpec)Assembly.GetExecutingAssembly().CreateInstance(type.Namespace + "." + type.Name); } return _spec; } } protected override void CreateChildControls() { if (FormTemplate != null) { MyTemplateContainer = new FormTemplateContainer(this); FormTemplate.InstantiateIn(MyTemplateContainer); Controls.Add(MyTemplateContainer); } else { Controls.Add(new LiteralControl("blah")); } } protected override void RenderContents(HtmlTextWriter writer) { // <snip> } protected override HtmlTextWriterTag TagKey { get { return HtmlTextWriterTag.Div; } } } public class FormTemplateContainer : Control, INamingContainer { private SearchPage parent; public FormTemplateContainer(SearchPage parent) { this.parent = parent; } } then the usage: <tster:SearchPage ID="sp1" runat="server" SearchSpecName="TestSearchSpec" QueryMode="False"> <FormTemplate> <br /> Test Name: <asp:TextBox ID="testNameBox" runat="server" Width="432px"></asp:TextBox> <br /> Owner: <asp:TextBox ID="ownerBox" runat="server" Width="427px"></asp:TextBox> <br /> Description: <asp:TextBox ID="descriptionBox" runat="server" Height="123px" Width="432px" TextMode="MultiLine" Wrap="true"></asp:TextBox> </FormTemplate> </tster:SearchPage> The problem is that in the CodeBehind, the page has members descriptionBox, ownerBox and testNameBox. However, they are all null. Furthermore, FindControl("ownerBox") returns null as does this.sp1.FindControl("ownerBox"). I have to do this.sp1.MyTemplateContainer.FindControl("ownerBox") to get the control. 
How can I make it so that the C# code-behind has these controls generated and non-null in my Page_Load event, so that developers can just do this: testNameBox.Text = "foo"; ownerBox.Text = "bar"; descriptionBox.Text = "baz";

    Read the article

  • Ivy resolve not working with dynamic artifact

    - by richever
    I've been using Ivy a bit but I seem to still have a lot to learn. I have two projects. One is a web app and the other is a library upon which the web app depends. The set up is that the library project is compiled to a jar file and published using Ivy to a directory within the project. In the web app build file, I have an ant target that calls the Ivy resolve ant task. What I'd like to do is have the web app using the dynamic resolve mode during development (on developer's local machines) and default resolve mode for test and production builds. Previously I was appending a time stamp to the library archive file so that Ivy would notice changes in file when the web app tried to resolve its dependency on it. Within Eclipse this is cumbersome because, in the web app, the project had to be refreshed and the build path tweaked every time a new library jar was published. Publishing a similarly named jar file every time would, I figure, only require developers to refresh the project. The problem is that the web app is unable to retrieve the dynamic jar file. The output I get looks something like this: resolve: [ivy:configure] :: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ :: [ivy:configure] :: loading settings :: file = /Users/richard/workspace/webapp/web/WEB-INF/config/ivy/ivysettings.xml [ivy:resolve] :: resolving dependencies :: com.webapp#webapp;[email protected] [ivy:resolve] confs: [default] [ivy:resolve] found com.webapp#library;latest.integration in local [ivy:resolve] :: resolution report :: resolve 142ms :: artifacts dl 0ms --------------------------------------------------------------------- | | modules || artifacts | | conf | number| search|dwnlded|evicted|| number|dwnlded| --------------------------------------------------------------------- | default | 1 | 0 | 0 | 0 || 0 | 0 | --------------------------------------------------------------------- [ivy:resolve] [ivy:resolve] :: problems summary :: [ivy:resolve] :::: WARNINGS [ivy:resolve] :::::::::::::::::::::::::::::::::::::::::::::: [ivy:resolve] :: UNRESOLVED DEPENDENCIES :: [ivy:resolve] :::::::::::::::::::::::::::::::::::::::::::::: [ivy:resolve] :: com.webapp#library;latest.integration: impossible to resolve dynamic revision [ivy:resolve] :::::::::::::::::::::::::::::::::::::::::::::: [ivy:resolve] :::: ERRORS [ivy:resolve] impossible to resolve dynamic revision for com.webapp#library;latest.integration: check your configuration and make sure revision is part of your pattern [ivy:resolve] [ivy:resolve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS BUILD FAILED /Users/richard/workspace/webapp/build.xml:71: impossible to resolve dependencies: resolve failed - see output for details The web app resolve target looks like this: <target name="resolve" depends="load-ivy"> <ivy:configure file="${ivy.dir}/ivysettings.xml" /> <ivy:resolve file="${ivy.dir}/ivy.xml" resolveMode="${ivy.resolve.mode}"/> <ivy:retrieve pattern="${lib.dir}/[artifact]-[revision].[ext]" type="jar" sync="true" /> </target> In this case, ivy.resolve.mode has a value of 'dynamic' (without quotes). The web app's Ivy file is simple. 
It looks like this:

    <ivy-module version="2.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://ant.apache.org/ivy/schemas/ivy.xsd"> <info organisation="com.webapp" module="webapp"/> <dependencies> <dependency name="library" rev="${ivy.revision.default}" revConstraint="${ivy.revision.dynamic}" /> </dependencies> </ivy-module>

During development, ivy.revision.dynamic has a value of 'latest.integration', while during production or test builds ivy.revision.default has a value of '1.0'. Any ideas? Please let me know if there's any more information I need to supply. Thanks!

    Read the article

  • Need a little help with sound in a JApplet

    - by jacob schuschel
    I am working on a solitaire game in Java, and I need to implement sound when the deck is shuffled, a card is flipped, etc. I used the following sites as reference to try and get it to work, but I am getting NullPointerExceptions or MalformedURLExceptions (depending on what I tweak): here. Also, I am using NetBeans 6.7.1 as my IDE. I will try to break down the code and explain:

    package cardgame; import java.applet.*; import java.util.logging.Level; import java.util.logging.Logger; import javax.swing.JApplet; import javax.swing.*; import java.io.*; import java.net.*; /** * * @author jacob */ public class Sound extends JApplet { private AudioClip song; // Sound player private String URL = null; private URL songPath; // Sound path /* *sound_1 = shuffling cards *sound_2 = to discard *sound_3 = from discard *sound_4 = cardflip 1 *sound_5 = cardflip 2 */ Sound(String filename) { try { songPath = new URL(getCodeBase(),filename); // Get the Sound URL } catch (MalformedURLException ex) { Logger.getLogger(Sound.class.getName()).log(Level.SEVERE, null, ex); } song = getAudioClip(songPath); // Load the Sound } Sound(int i) { URL = "./sounds/sound_" + i + ".wav"; System.out.println(URL); try { songPath = new URL(URL); // Get the Sound URL song = getAudioClip(songPath); } catch (MalformedURLException ex) { Logger.getLogger(Sound.class.getName()).log(Level.SEVERE, null, ex); } } public void playSound() { song.loop(); // Play } public void stopSound() { song.stop(); // Stop } public void playSoundOnce() { song.play(); // Play only once } }

    The two different constructors are for the different ways I tried to implement this. The first one creates the file path and passes it in. The second one builds the file path in the constructor, given a sound number (I made a list of which numbers correspond to which sound for reference).
I am getting the following errors out: ./sounds/sound_1.wav Nov 16, 2009 4:14:13 PM cardgame.Sound ./sounds/sound_2.wav SEVERE: null java.net.MalformedURLException: no protocol: ./sounds/sound_1.wav ./sounds/sound_3.wav at java.net.URL.(URL.java:583) at java.net.URL.(URL.java:480) at java.net.URL.(URL.java:429) ./sounds/sound_4.wav ./sounds/sound_5.wav at cardgame.Sound.(Sound.java:46) at cardgame.Game.loadSounds(Game.java:712) at cardgame.Game.(Game.java:62) at cardgame.Main.main(Main.java:25) Nov 16, 2009 4:14:13 PM cardgame.Sound SEVERE: null java.net.MalformedURLException: no protocol: ./sounds/sound_2.wav at java.net.URL.(URL.java:583) at java.net.URL.(URL.java:480) at java.net.URL.(URL.java:429) at cardgame.Sound.(Sound.java:46) at cardgame.Game.loadSounds(Game.java:712) at cardgame.Game.(Game.java:62) at cardgame.Main.main(Main.java:25) Nov 16, 2009 4:14:13 PM cardgame.Sound SEVERE: null java.net.MalformedURLException: no protocol: ./sounds/sound_3.wav at java.net.URL.(URL.java:583) at java.net.URL.(URL.java:480) at java.net.URL.(URL.java:429) at cardgame.Sound.(Sound.java:46) at cardgame.Game.loadSounds(Game.java:712) at cardgame.Game.(Game.java:62) at cardgame.Main.main(Main.java:25) Nov 16, 2009 4:14:13 PM cardgame.Sound SEVERE: null java.net.MalformedURLException: no protocol: ./sounds/sound_4.wav at java.net.URL.(URL.java:583) at java.net.URL.(URL.java:480) at java.net.URL.(URL.java:429) at cardgame.Sound.(Sound.java:46) at cardgame.Game.loadSounds(Game.java:712) at cardgame.Game.(Game.java:62) at cardgame.Main.main(Main.java:25) Nov 16, 2009 4:14:13 PM cardgame.Sound SEVERE: null java.net.MalformedURLException: no protocol: ./sounds/sound_5.wav at java.net.URL.(URL.java:583) at java.net.URL.(URL.java:480) at java.net.URL.(URL.java:429) at cardgame.Sound.(Sound.java:46) at cardgame.Game.loadSounds(Game.java:712) at cardgame.Game.(Game.java:62) at cardgame.Main.main(Main.java:25)

Thanks to those who read, and more thanks to those who help. I know it is somewhat long, but I would rather get it all out there than have 50 questions come back, or have people not answer due to lack of initial info. The site also only lets me post a single link right now, so the links are given below:
dreamincode.net/forums/showtopic14083.htm
stackoverflow.com/questions/512436/java-playing-wav-sounds
deitel.com/articles/java_tutorials/20060422/LoadingPlayingAudioClips/index.html
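    The "no protocol" MalformedURLException above comes from passing a relative path such as ./sounds/sound_1.wav straight to the java.net.URL constructor, which expects a protocol like file: or http:. A minimal Java sketch of one way to build a usable URL from that kind of path; the file names are taken from the question, everything else (class name, demo structure) is hypothetical:

        import java.io.File;
        import java.net.URL;

        public class SoundUrlDemo {
            public static void main(String[] args) throws Exception {
                // A relative path has no protocol, so new URL("./sounds/sound_1.wav")
                // throws MalformedURLException. Going through File adds the file: protocol.
                File soundFile = new File("./sounds/sound_1.wav");
                URL soundUrl = soundFile.toURI().toURL();
                System.out.println(soundUrl);   // e.g. file:/.../sounds/sound_1.wav

                // Loading from the classpath is another common option:
                // URL fromClasspath = SoundUrlDemo.class.getResource("/sounds/sound_1.wav");
            }
        }

    Note also that Applet.getCodeBase() and getAudioClip() only work once the applet has been initialised by a browser or applet viewer; the stack trace shows the Sound constructors being called from cardgame.Main.main, which is a likely source of the NullPointerExceptions as well.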

    Read the article

  • Receiving POST data in ASP.NET

    - by grast
    Hi, I want to use ASP for code generation in a C# desktop application. To achieve this, I set up a simple host (derived from System.MarshalByRefObject) that processes a System.Web.Hosting.SimpleWorkerRequest via HttpRuntime.ProcessRequest. This processes the ASPX script specified by the incoming request (using System.Net.HttpListener to wait for requests). The client part is represented by a System.ComponentModel.BackgroundWorker that builds the System.Net.HttpWebRequest and receives the response from the server. A simplified version of my client-part code looks like this:

    private void SendRequest(object sender, DoWorkEventArgs e) { // create request with GET parameter var uri = "http://localhost:9876/test.aspx?getTest=321"; var request = (HttpWebRequest)WebRequest.Create(uri); // append POST parameter request.Method = "POST"; request.ContentType = "application/x-www-form-urlencoded"; var postData = Encoding.Default.GetBytes("postTest=654"); var postDataStream = request.GetRequestStream(); postDataStream.Write(postData, 0, postData.Length); // send request, wait for response and store/print content using (var response = (HttpWebResponse)request.GetResponse()) { using (var reader = new StreamReader(response.GetResponseStream(), Encoding.UTF8)) { _processsedContent = reader.ReadToEnd(); Debug.Print(_processsedContent); } } }

    My server-part code looks like this (without exception handling etc.):

    public void ProcessRequests() { // HttpListener at http://localhost:9876/ var listener = SetupListener(); // SimpleHost created by ApplicationHost.CreateApplicationHost var host = SetupHost(); while (_running) { var context = listener.GetContext(); using (var writer = new StreamWriter(context.Response.OutputStream)) { // process ASP script and send response back to client host.ProcessRequest(GetPage(context), GetQuery(context), writer); } context.Response.Close(); } }

    So far all this works fine as long as I just use GET parameters. But when it comes to receiving POST data in my ASPX script I run into trouble. For testing I use the following script: // GET parameters are working: var getTest = Request.QueryString["getTest"]; Response.Write("getTest: " + getTest); // prints "getTest: 321" // don't know how to access POST parameters: var postTest1 = Request.Form["postTest"]; // Request.Form is empty?! Response.Write("postTest1: " + postTest1); // so this prints "postTest1: " var postTest2 = Request.Params["postTest"]; // Request.Params is empty?! Response.Write("postTest2: " + postTest2); // so this prints "postTest2: "

    It seems that the System.Web.HttpRequest object I'm dealing with in ASP does not contain any information about my POST parameter "postTest". I inspected it in debug mode, and none of the members contained either the parameter name "postTest" or the parameter value "654". I also tried the BinaryRead method of Request, but unfortunately it is empty. This corresponds to Request.InputStream==null and Request.ContentLength==0. And to make things really confusing, the Request.HttpMethod member is set to "GET"?! To isolate the problem I tested the code by using a PHP script instead of the ASPX script. This is very simple: print_r($_GET); // prints all GET variables print_r($_POST); // prints all POST variables And the result is: Array ( [getTest] = 321 ) Array ( [postTest] = 654 ) So with the PHP script it works; I can access the POST data. Why doesn't the ASPX script work? What am I doing wrong? Is there a special accessor or method in the Response object?
Can anyone give a hint, or does anyone know how to solve this? Thanks in advance.

    Read the article

< Previous Page | 59 60 61 62 63 64 65 66 67 68 69 70  | Next Page >