Search Results

Search found 23653 results on 947 pages for 'oracle workforce management'.


  • 12c - Utl_Call_Stack...

    - by noreply(at)blogger.com (Thomas Kyte)
    Over the next couple of months, I'll be writing about some cool new little features of Oracle Database 12c - things that might not make the front page of Oracle.com.  I'm going to start with a new package - UTL_CALL_STACK.In the past, developers have had access to three functions to try to figure out "where the heck am I in my code", they were:dbms_utility.format_call_stackdbms_utility.format_error_backtracedbms_utility.format_error_stackNow these routines, while useful, were of somewhat limited use.  Let's look at the format_call_stack routine for a reason why.  Here is a procedure that will just print out the current call stack for us:ops$tkyte%ORA12CR1> create or replace  2  procedure Print_Call_Stack  3  is  4  begin  5    DBMS_Output.Put_Line(DBMS_Utility.Format_Call_Stack());  6  end;  7  /Procedure created.Now, if we have a package - with nested functions and even duplicated function names:ops$tkyte%ORA12CR1> create or replace  2  package body Pkg is  3    procedure p  4    is  5      procedure q  6      is  7        procedure r  8        is  9          procedure p is 10          begin 11            Print_Call_Stack(); 12            raise program_error; 13          end p; 14        begin 15          p(); 16        end r; 17      begin 18        r(); 19      end q; 20    begin 21      q(); 22    end p; 23  end Pkg; 24  /Package body created.When we execute the procedure PKG.P - we'll see as a result:ops$tkyte%ORA12CR1> exec pkg.p----- PL/SQL Call Stack -----  object      line  object  handle    number  name0x6e891528         4  procedure OPS$TKYTE.PRINT_CALL_STACK0x6ec4a7c0        10  package body OPS$TKYTE.PKG0x6ec4a7c0        14  package body OPS$TKYTE.PKG0x6ec4a7c0        17  package body OPS$TKYTE.PKG0x6ec4a7c0        20  package body OPS$TKYTE.PKG0x76439070         1  anonymous blockBEGIN pkg.p; END;*ERROR at line 1:ORA-06501: PL/SQL: program errorORA-06512: at "OPS$TKYTE.PKG", line 11ORA-06512: at "OPS$TKYTE.PKG", line 14ORA-06512: at "OPS$TKYTE.PKG", line 17ORA-06512: at "OPS$TKYTE.PKG", line 20ORA-06512: at line 1The bit in red above is the output from format_call_stack whereas the bit in black is the error message returned to the client application (it would also be available to you via the format_error_backtrace API call). As you can see - it contains useful information but to use it you would need to parse it - and that can be trickier than it seems.  The format of those strings is not set in stone, they have changed over the years (I wrote the "who_am_i", "who_called_me" functions, I did that by parsing these strings - trust me, they change over time!).Starting in 12c - we'll have structured access to the call stack and a series of API calls to interrogate this structure.  
I'm going to rewrite the print_call_stack function as follows:ops$tkyte%ORA12CR1> create or replace 2  procedure Print_Call_Stack  3  as  4    Depth pls_integer := UTL_Call_Stack.Dynamic_Depth();  5    6    procedure headers  7    is  8    begin  9        dbms_output.put_line( 'Lexical   Depth   Line    Name' ); 10        dbms_output.put_line( 'Depth             Number      ' ); 11        dbms_output.put_line( '-------   -----   ----    ----' ); 12    end headers; 13    procedure print 14    is 15    begin 16        headers; 17        for j in reverse 1..Depth loop 18          DBMS_Output.Put_Line( 19            rpad( utl_call_stack.lexical_depth(j), 10 ) || 20                    rpad( j, 7) || 21            rpad( To_Char(UTL_Call_Stack.Unit_Line(j), '99'), 9 ) || 22            UTL_Call_Stack.Concatenate_Subprogram 23                       (UTL_Call_Stack.Subprogram(j))); 24        end loop; 25    end; 26  begin 27    print; 28  end; 29  /Here we are able to figure out what 'depth' we are in the code (utl_call_stack.dynamic_depth) and then walk up the stack using a loop.  We will print out the lexical_depth, along with the line number within the unit we were executing plus - the unit name.  And not just any unit name, but the fully qualified, all of the way down to the subprogram name within a package.  Not only that - but down to the subprogram name within a subprogram name within a subprogram name.  For example - running the PKG.P procedure again results in:ops$tkyte%ORA12CR1> exec pkg.pLexical   Depth   Line    NameDepth             Number-------   -----   ----    ----1         6       20      PKG.P2         5       17      PKG.P.Q3         4       14      PKG.P.Q.R4         3       10      PKG.P.Q.R.P0         2       26      PRINT_CALL_STACK1         1       17      PRINT_CALL_STACK.PRINTBEGIN pkg.p; END;*ERROR at line 1:ORA-06501: PL/SQL: program errorORA-06512: at "OPS$TKYTE.PKG", line 11ORA-06512: at "OPS$TKYTE.PKG", line 14ORA-06512: at "OPS$TKYTE.PKG", line 17ORA-06512: at "OPS$TKYTE.PKG", line 20ORA-06512: at line 1This time - we get much more than just a line number and a package name as we did previously with format_call_stack.  We not only got the line number and package (unit) name - we got the names of the subprograms - we can see that P called Q called R called P as nested subprograms.  Also note that we can see a 'truer' calling level with the lexical depth, we can see we "stepped" out of the package to call print_call_stack and that in turn called another nested subprogram.This new package will be a nice addition to everyone's error logging packages.  Of course there are other functions in there to get owner names, the edition in effect when the code was executed and more. See UTL_CALL_STACK for all of the details.
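
    To illustrate how UTL_CALL_STACK might slot into an error-logging package, here is a minimal sketch (not from the original post; the procedure name is made up) that walks the error stack and the backtrace from inside an exception handler. It only assumes the documented 12c functions ERROR_DEPTH, ERROR_NUMBER, ERROR_MSG, BACKTRACE_DEPTH, BACKTRACE_UNIT and BACKTRACE_LINE:

        create or replace procedure log_error_stack
        as
        begin
          -- one line per error on the error stack
          for i in 1 .. utl_call_stack.error_depth loop
            dbms_output.put_line(
              'ORA-' || lpad(utl_call_stack.error_number(i), 5, '0') ||
              ': ' || utl_call_stack.error_msg(i) );
          end loop;
          -- one line per frame on the backtrace, deepest frame first
          for i in 1 .. utl_call_stack.backtrace_depth loop
            dbms_output.put_line(
              rpad( nvl(utl_call_stack.backtrace_unit(i), '<anonymous block>'), 35 ) ||
              ' line ' || utl_call_stack.backtrace_line(i) );
          end loop;
        end;
        /

    Called from a WHEN OTHERS handler, a routine along these lines gives an error log the same structured detail that print_call_stack shows above, without any string parsing.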

    Read the article

  • 12c - flashforward, flashback or see it as of now...

    - by noreply(at)blogger.com (Thomas Kyte)
    Oracle 9i exposed flashback query to developers for the first time.  The ability to flashback query dates back to version 4 however (it just wasn't exposed).  Every time you run a query in Oracle it is in fact a flashback query - it is what multi-versioning is all about.However, there was never a flashforward query (well, ok, the workspace manager has this capability - but with lots of extra baggage).  We've never been able to ask a table "what will you look like tomorrow" - but now we do.The capability is called Temporal Validity.  If you have a table with data that is effective dated - has a "start date" and "end date" column in it - we can now query it using flashback query like syntax.  The twist is - the date we "flashback" to can be in the future.  It works by rewriting the query to transparently the necessary where clause and filter out the right rows for the right period of time - and since you can have records whose start date is in the future - you can query a table and see what it would look like at some future time.Here is a quick example, we'll start with a table:ops$tkyte%ORA12CR1> create table addresses  2  ( empno       number,  3    addr_data   varchar2(30),  4    start_date  date,  5    end_date    date,  6    period for valid(start_date,end_date)  7  )  8  /Table created.the new bit is on line 6 (it can be altered into an existing table - so any table  you have with a start/end date column will be a candidate).  The keyword is PERIOD, valid is an identifier I chose - it could have been foobar, valid just sounds nice in the query later.  You identify the columns in your table - or we can create them for you if they don't exist.  Then you just create some data:ops$tkyte%ORA12CR1> insert into addresses (empno, addr_data, start_date, end_date )  2  values ( 1234, '123 Main Street', trunc(sysdate-5), trunc(sysdate-2) );1 row created.ops$tkyte%ORA12CR1>ops$tkyte%ORA12CR1> insert into addresses (empno, addr_data, start_date, end_date )  2  values ( 1234, '456 Fleet Street', trunc(sysdate-1), trunc(sysdate+1) );1 row created.ops$tkyte%ORA12CR1>ops$tkyte%ORA12CR1> insert into addresses (empno, addr_data, start_date, end_date )  2  values ( 1234, '789 1st Ave', trunc(sysdate+2), null );1 row created.and you can either see all of the data:ops$tkyte%ORA12CR1> select * from addresses;     EMPNO ADDR_DATA                      START_DAT END_DATE---------- ------------------------------ --------- ---------      1234 123 Main Street                27-JUN-13 30-JUN-13      1234 456 Fleet Street               01-JUL-13 03-JUL-13      1234 789 1st Ave                    04-JUL-13or query "as of" some point in time - as  you can see in the predicate section - it is just doing a query rewrite to automate the "where" filters:ops$tkyte%ORA12CR1> select * from addresses as of period for valid sysdate-3;     EMPNO ADDR_DATA                      START_DAT END_DATE---------- ------------------------------ --------- ---------      1234 123 Main Street                27-JUN-13 30-JUN-13ops$tkyte%ORA12CR1> @planops$tkyte%ORA12CR1> select * from table(dbms_xplan.display_cursor);PLAN_TABLE_OUTPUT-------------------------------------------------------------------------------SQL_ID  cthtvvm0dxvva, child number 0-------------------------------------select * from addresses as of period for valid sysdate-3Plan hash value: 3184888728-------------------------------------------------------------------------------| Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     
|-------------------------------------------------------------------------------|   0 | SELECT STATEMENT  |           |       |       |     3 (100)|          ||*  1 |  TABLE ACCESS FULL| ADDRESSES |     1 |    48 |     3   (0)| 00:00:01 |-------------------------------------------------------------------------------Predicate Information (identified by operation id):---------------------------------------------------   1 - filter((("T"."START_DATE" IS NULL OR              "T"."START_DATE"<=SYSDATE@!-3) AND ("T"."END_DATE" IS NULL OR              "T"."END_DATE">SYSDATE@!-3)))Note-----   - dynamic statistics used: dynamic sampling (level=2)24 rows selected.ops$tkyte%ORA12CR1> select * from addresses as of period for valid sysdate;     EMPNO ADDR_DATA                      START_DAT END_DATE---------- ------------------------------ --------- ---------      1234 456 Fleet Street               01-JUL-13 03-JUL-13ops$tkyte%ORA12CR1> @planops$tkyte%ORA12CR1> select * from table(dbms_xplan.display_cursor);PLAN_TABLE_OUTPUT-------------------------------------------------------------------------------SQL_ID  26ubyhw9hgk7z, child number 0-------------------------------------select * from addresses as of period for valid sysdatePlan hash value: 3184888728-------------------------------------------------------------------------------| Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |-------------------------------------------------------------------------------|   0 | SELECT STATEMENT  |           |       |       |     3 (100)|          ||*  1 |  TABLE ACCESS FULL| ADDRESSES |     1 |    48 |     3   (0)| 00:00:01 |-------------------------------------------------------------------------------Predicate Information (identified by operation id):---------------------------------------------------   1 - filter((("T"."START_DATE" IS NULL OR              "T"."START_DATE"<=SYSDATE@!) 
AND ("T"."END_DATE" IS NULL OR              "T"."END_DATE">SYSDATE@!)))Note-----   - dynamic statistics used: dynamic sampling (level=2)24 rows selected.ops$tkyte%ORA12CR1> select * from addresses as of period for valid sysdate+3;     EMPNO ADDR_DATA                      START_DAT END_DATE---------- ------------------------------ --------- ---------      1234 789 1st Ave                    04-JUL-13ops$tkyte%ORA12CR1> @planops$tkyte%ORA12CR1> select * from table(dbms_xplan.display_cursor);PLAN_TABLE_OUTPUT-------------------------------------------------------------------------------SQL_ID  36bq7shnhc888, child number 0-------------------------------------select * from addresses as of period for valid sysdate+3Plan hash value: 3184888728-------------------------------------------------------------------------------| Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |-------------------------------------------------------------------------------|   0 | SELECT STATEMENT  |           |       |       |     3 (100)|          ||*  1 |  TABLE ACCESS FULL| ADDRESSES |     1 |    48 |     3   (0)| 00:00:01 |-------------------------------------------------------------------------------Predicate Information (identified by operation id):---------------------------------------------------   1 - filter((("T"."START_DATE" IS NULL OR              "T"."START_DATE"<=SYSDATE@!+3) AND ("T"."END_DATE" IS NULL OR              "T"."END_DATE">SYSDATE@!+3)))Note-----   - dynamic statistics used: dynamic sampling (level=2)24 rows selected.All in all a nice, easy way to query effective dated information as of a point in time without a complex where clause.  You need to maintain the data - it isn't that a delete will turn into an update the end dates a record or anything - but if you have tables with start/end dates, this will make it much easier to query them.

    Read the article

  • PHP suddenly failed after IIS update

    - by James Hay
    All my application pools were stopped this morning after I got to work. I can restart them, but when I try to load the website the app pool crashes again. Update: I've looked in the GAC as the error below suggests and it seems that the file is not there. How do I get it back? Update 2: I found a further error in the event log saying The Module name FastCgiModule path C:\WINDOWS\System32\inetsrv\iisfcgi.dll returned an error from registration. The data is the error. So following the information from here http://forums.iis.net/t/1153937.aspx I removed CGI and my sites are working again. This has fixed the initial problem, but now I don't have FastCGI so I'm fairly sure that PHP will no longer be working (I don't have any PHP at the moment to test). Original Post I'm getting this error in the event viewer: IISMANAGER_ERROR_LOADING_PROVIDER_TYPE IIS Manager could not load type 'Web.Management.PHP.PHPProvider, Web.Management.PHP, Version=1.2.0.0, Culture=neutral, PublicKeyToken=8175de49a9aec91d' for module provider 'PHP' that is declared in %windir%\system32\inetsrv\config\administration.config. Verify that the type is correct, and that the assembly that contains the module provider is in the Global Assembly Cache (GAC). Exception:System.IO.FileNotFoundException: Could not load file or assembly 'Web.Management.PHP, Version=1.2.0.0, Culture=neutral, PublicKeyToken=8175de49a9aec91d' or one of its dependencies. The system cannot find the file specified. File name: 'Web.Management.PHP, Version=1.2.0.0, Culture=neutral, PublicKeyToken=8175de49a9aec91d' at System.RuntimeTypeHandle._GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark, Boolean loadTypeFromPartialName) at System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark) at System.RuntimeType.PrivateGetType(String typeName, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark) at System.Type.GetType(String typeName, Boolean throwOnError) at Microsoft.Web.Management.Server.AdministrationModuleProvider.GetModuleProvider(String userName, String connectionName) WRN: Assembly binding logging is turned OFF. To enable assembly bind failure logging, set the registry value [HKLM\Software\Microsoft\Fusion!EnableLog] (DWORD) to 1. Note: There is some performance penalty associated with assembly bind failure logging. To turn this feature off, remove the registry value [HKLM\Software\Microsoft\Fusion!EnableLog]. Process:InetMgr Connection:CT211511\Administrator Everything was working fine last night when I left work, and since they've done the maintenance it's all broken.

    Read the article

  • Windows Server 2008 R2: Introducing the AD Administrative Center

    The Active Directory Administrative Center in Windows Server 2008 R2 is a significant improvement over its predecessor. Although not without limitations, it offers beefed-up management of AD objects, new navigation capabilities, better task-based management options, and improvements to the properties page and search capabilities.

    Read the article

  • WebLogic Scripting Tool Tip – relax the syntax with the easy button

    - by james.bayer
    I stumbled on to this feature in WLST tonight called easeSyntax.  Apparently it’s a hidden feature that one of the WebLogic support engineers blogged about that allows you to simplify the commands in the interactive mode to have fewer parentheses and quotes.  For example, see how some of the commands instead of typing “ls()” I can type '”ls” or “cd(“/somepath”)” can become “cd /somepath”.  It’s not going to save the world, but it will help cut down on some extra typing. The example I was researching when stumbling into this was for how to print the runtime status of deployed application named “hello” on the “AdminServer”.  See the below output. wls:/base_domain/domainConfig> easeSyntax()   You have chosen to ease syntax for some WLST commands. However, the easy syntax should be strictly used in interactive mode. Easy syntax will not function properly in script mode and when used in loops. You can still use the regular jython syntax although you have opted for easy syntax. Use easeSyntax to turn this off. Use help(easeSyntax) for commands that support easy syntax wls:/base_domain/domainConfig> domainRuntime   wls:/base_domain/domainRuntime> ls dr-- AppRuntimeStateRuntime dr-- CoherenceServerLifeCycleRuntimes dr-- ConsoleRuntime dr-- DeployerRuntime dr-- DeploymentManager dr-- DomainServices dr-- LogRuntime dr-- MessageDrivenControlEJBRuntime dr-- MigratableServiceCoordinatorRuntime dr-- MigrationDataRuntimes dr-- PolicySubjectManagerRuntime dr-- SNMPAgentRuntime dr-- ServerLifeCycleRuntimes dr-- ServerRuntimes dr-- ServerServices dr-- ServiceMigrationDataRuntimes   -r-- ActivationTime Wed Dec 15 22:37:02 PST 2010 -r-- MessageDrivenControlEJBRuntime null -r-- MigrationDataRuntimes null -r-- Name base_domain -rw- Parent null -r-- ServiceMigrationDataRuntimes null -r-- Type DomainRuntime   -r-x preDeregister Void : -r-x restartSystemResource Void : WebLogicMBean(weblogic.management.configuration.SystemResourceMBean)   wls:/base_domain/domainRuntime> cd AppRuntimeStateRuntime/AppRuntimeStateRuntime wls:/base_domain/domainRuntime/AppRuntimeStateRuntime/AppRuntimeStateRuntime> ls   -r-- ApplicationIds java.lang.String[active-cache#[email protected], coherence-web-spi#[email protected], coherence#3. -r-- Name AppRuntimeStateRuntime -r-- Type AppRuntimeStateRuntime   -r-x getCurrentState String : String(appid),String(moduleid),String(subModuleId),String(target) -r-x getCurrentState String : String(appid),String(moduleid),String(target) -r-x getCurrentState String : String(appid),String(target) -r-x getIntendedState String : String(appid) -r-x getIntendedState String : String(appid),String(target) -r-x getModuleIds String[] : String(appid) -r-x getModuleTargets String[] : String(appid),String(moduleid) -r-x getModuleTargets String[] : String(appid),String(moduleid),String(subModuleId) -r-x getModuleType String : String(appid),String(moduleid) -r-x getRetireTimeMillis Long : String(appid) -r-x getRetireTimeoutSeconds Integer : String(appid) -r-x getSubmoduleIds String[] : String(appid),String(moduleid) -r-x isActiveVersion Boolean : String(appid) -r-x isAdminMode Boolean : String(appid),String(java.lang.String) -r-x preDeregister Void :   wls:/base_domain/domainRuntime/AppRuntimeStateRuntime/AppRuntimeStateRuntime> cmo.getCurrentState('hello','AdminServer') 'STATE_ACTIVE' wls:/base_domain/domainRuntime/AppRuntimeStateRuntime/AppRuntimeStateRuntime> cd / wls:/base_domain/domainRuntime>

    Read the article

  • How can I run supervisord without using root?

    - by Jason Baker
    I seem to be having trouble figuring out why supervisord won't run as a non-root user. If I start it with the user set to jason (pid 1000), I get the following in the log file: 2010-05-24 08:53:32,143 CRIT Set uid to user 1000 2010-05-24 08:53:32,143 WARN Included extra file "/home/jason/src/tsched/celeryd.conf" during parsing 2010-05-24 08:53:32,189 INFO RPC interface 'supervisor' initialized 2010-05-24 08:53:32,189 WARN cElementTree not installed, using slower XML parser for XML-RPC 2010-05-24 08:53:32,189 CRIT Server 'unix_http_server' running without any HTTP authentication checking 2010-05-24 08:53:32,190 INFO daemonizing the supervisord process 2010-05-24 08:53:32,191 INFO supervisord started with pid 3444 ...then the process dies for some unknown reason. If I start it without sudo (under the user jason), I get similar output: 2010-05-24 08:51:32,859 INFO supervisord started with pid 3306 2010-05-24 08:52:15,761 CRIT Can't drop privilege as nonroot user 2010-05-24 08:52:15,761 WARN Included extra file "/home/jason/src/tsched/celeryd.conf" during parsing 2010-05-24 08:52:15,807 INFO RPC interface 'supervisor' initialized 2010-05-24 08:52:15,807 WARN cElementTree not installed, using slower XML parser for XML-RPC 2010-05-24 08:52:15,807 CRIT Server 'unix_http_server' running without any HTTP authentication checking 2010-05-24 08:52:15,808 INFO daemonizing the supervisord process 2010-05-24 08:52:15,809 INFO supervisord started with pid 3397 ...and it still doesn't run. If it's any help, here's the supervisord.conf file I'm using: [unix_http_server] file=/tmp/supervisor.sock ; path to your socket file [supervisord] logfile=./supervisord.log ; supervisord log file logfile_maxbytes=50MB ; maximum size of logfile before rotation logfile_backups=10 ; number of backed up logfiles loglevel=debug ; info, debug, warn, trace pidfile=./supervisord.pid ; pidfile location nodaemon=false ; run supervisord as a daemon minfds=1024 ; number of startup file descriptors minprocs=200 ; number of process descriptors user=jason ; default user childlogdir=./supervisord/ ; where child log files will live [rpcinterface:supervisor] supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface [supervisorctl] serverurl=unix:///tmp/supervisor.sock ; use unix:// schem for a unix sockets. [include] # Uncomment this line for celeryd for Python files=celeryd.conf # Uncomment this line for celeryd for Django. ;files=django/celeryd.conf ...and here's celeryd.conf: [program:celery] command=bin/celeryd --loglevel=INFO --logfile=./celeryd.log environment=PYTHONPATH='./tsched_worker', JIVA_DB_PLATFORM='oracle', ORACLE_HOME='/usr/lib/oracle/xe/app/oracle/product/10.2.0/server', LD_LIBRARY_PATH='/usr/lib/oracle/xe/app/oracle/product/10.2.0/server/lib', TNS_ADMIN='/home/jason', CELERY_CONFIG_MODULE='tsched_worker.celeryconfig' directory=. user=jason numprocs=1 stdout_logfile=/var/log/celeryd.log stderr_logfile=/var/log/celeryd.log autostart=true autorestart=true startsecs=10 ; Need to wait for currently executing tasks to finish at shutdown. ; Increase this if you have very long running tasks. stopwaitsecs = 600 ; if rabbitmq is supervised, set its priority higher ; so it starts first priority=998 Can anyone help me figure out what's going on?

    Read the article

  • Deployment Options for AutoVue 20.0 Users

    - by celine.beck
    AutoVue release 20.0 boasts a brand new architecture. As part of this product rearchitecture, AutoVue can now be deployed either as a desktop deployment, to serve the needs of individual users in their personal productivity, or in a Client/Server deployment for those who require connections to enterprise applications and back-end systems. The most common question we hear from our customers about this new architecture is the following: "Is AutoVue Desktop Version still part of release 20.0 and, if so, what is the difference between AutoVue Desktop Version and the Desktop deployment of AutoVue release 20.0?" A detailed answer to these questions is provided in a very complete article entitled Understanding Deployment Options for AutoVue 19.3 Desktop Version users upgrading to AutoVue 20.0 (note 1058254.1), which was posted on My Oracle Support.

    Is AutoVue Desktop Version still part of AutoVue 20.0? Yes, AutoVue Desktop Version 20.0 is still available to customers and partners as a maintenance release of AutoVue 19.3. As such, it does not contain any of the new capabilities featured in AutoVue release 20.0. All format enhancements and new format support have, however, been added to the 20.0 Desktop Version.

    What is the difference between AutoVue Desktop Version 20.0 and the Desktop deployment of AutoVue release 20.0? The AutoVue 20.0 Desktop deployment works like the AutoVue Desktop Version: it is installed as a standalone product on each user's machine and runs a local instance of AutoVue. The AutoVue 20.0 Desktop deployment includes all new features, formats and performance enhancements included in release 20.0 (walkthrough capability, improved compare, ...).

    What deployment options are available to AutoVue 19.3 Desktop Version customers? AutoVue Desktop Version users can evolve at their own pace to the new AutoVue platform. With release 20.0, customers can opt to:
    Option 1: Stay on AutoVue Desktop Version 20.0
    Option 2: Migrate to AutoVue and select the desktop deployment method
    Option 3: Migrate to AutoVue and select the Client/Server deployment method

    What is the Client/Server deployment of AutoVue 20.0? The Client/Server deployment has AutoVue installed on a server, to which local client machines connect to access and view documents. The AutoVue 20.0 Client/Server deployment allows users to leverage the new online/offline capabilities in release 20.0 and easily switch between online and offline modes of operation. With the Client/Server deployment, customers also get a complete, open and standards-based set of integration tools that allows them to tie AutoVue to any enterprise application, giving users a consistent view of data and business objects and expanding workflow automation to document-based processes.

    Related articles: AutoVue Release 20.0 Now Available, New Walkthrough Capability in AutoVue 20.0, Watch the AutoVue 20.0 Release Webcast, April 27 at 12pm EST

    Read the article

  • Easy Listening = CRM On Demand Podcasts

    - by Anne
    OK, here's my NEW favorite resource for CRM On Demand info -- podcasts! Specifically, the CRM On Demand Podcast site -- signed, sealed, and delivered with humor and know-how. Yes, I admit, I know the cast of characters. But let's face it, sometimes dealing with software is just soooo dry! Not so when discussed by the two main commentators, Louis Peters and Robert Davidson, whom someone once referred to as CRM On Demand's "Click and Clack." (Thought that was too good not to pass along!) Anyhow, another huge plus about the site is the option to listen OR to read. Out walking my dog or doing the dishes? Just turn up the podcast. Listening to music or watching TV? I'll read Louis's entertaining write-ups to glean great info about CRM On Demand in a very short period of time. So that you get a better understanding of why I like this site so much, here's a sampling of what's discussed: Five Things about Books of Business As Louis Peters put it in his entry, when you see "Five Things" in the title, "you'll know you're going to get some concrete advice that you can put to work right away." Well, Louis and Robert do just that, pointing you in the right direction when using Books of Business to segment data. Moving to Indexed Fields - A Rough Guide (only an article, not a podcast) I've read all about performance and even helped develop material around it. But nowhere have I heard indexed custom fields referred to as "super heroes." Louis and Robert use imaginative language to describe the process for moving your data to indexed fields for optimal performance. Data Access QA from the Forums I think that everyone would admit that data access and visibility is the most difficult topic to understand in CRM On Demand. Following up on their previous podcast on the same topic, Louis and Robert answer a few key questions from the many postings on the Oracle CRM On Demand forums. And I bet that the scenarios match many companies' business requirements...maybe even yours! We Need to Talk About Adoption Another expert, Tim Koehler, joins Louis to talk about how to drive user adoption: aligning product usage with business results, communicating why and how to use the product, getting feedback on usability, and so on. Hope I've made my point -- turn to these podcasts to hear knowledgeable folks discuss CRM On Demand tips and tricks in entertaining ways. One podcast is even called "SaaS Talk"!

    Read the article

  • Ancillary Objects: Separate Debug ELF Files For Solaris

    - by Ali Bahrami
    We introduced a new ELF object type in Solaris 11 Update 1 called the Ancillary Object. This posting describes them, using material originally written during their development, the PSARC case, and the Solaris Linker and Libraries Manual. ELF objects contain allocable sections, which are mapped into memory at runtime, and non-allocable sections, which are present in the file for use by debuggers and observability tools, but which are not mapped or used at runtime. Typically, all of these sections exist within a single object file. Ancillary objects allow them to instead go into a separate file.

    There are different reasons given for wanting such a feature. One can debate whether the added complexity is worth the benefit, and in most cases it is not. However, one important case stands out: customers with very large 32-bit objects who are not ready or able to make the transition to 64 bits. We have customers who build extremely large 32-bit objects. Historically, the debug sections in these objects have used the stabs format, which is limited but relatively compact. In recent years, the industry has transitioned to the powerful but verbose DWARF standard. In some cases, the size of these debug sections is large enough to push the total object file size past the fundamental 4GB limit for 32-bit ELF object files. The best, and ultimately only, solution to overly large objects is to transition to 64 bits. However, consider environments where:
    - Hundreds of users may be executing the code on large shared systems (32-bit code uses less memory and bus bandwidth, and on SPARC runs just as fast as 64-bit code otherwise).
    - The code is complex and finely tuned, and the original authors may no longer be available.
    - The code is critical production code that was expensive to qualify and bring online, and is otherwise serving its intended purpose without issue.
    Users in these risk-averse and/or high-scale categories have good reasons to push 32-bit objects to the limit before moving on. Ancillary objects offer these users a longer runway.

    Design

    The design of ancillary objects is intended to be simple, both to help human understanding when examining elfdump output, and to lower the bar for debuggers such as dbx to support them. The primary and ancillary objects have the same set of section headers, with the same names, in the same order (i.e. each section has the same index in both files). A single added section of type SHT_SUNW_ANCILLARY is added to both objects, containing information that allows a debugger to identify and validate both files relative to each other. Given one of these files, the ancillary section allows you to identify the other. Allocable sections go in the primary object, and non-allocable ones go into the ancillary object. A small set of non-allocable sections, notably the symbol table, are copied into both objects. As noted above, most sections are only written to one of the two objects, but both objects have the same section header array. The section header in the file that does not contain the section data is tagged with the SHF_SUNW_ABSENT section header flag to indicate its placeholder status. Compiler writers and others who produce objects can set the SHF_SUNW_PRIMARY section header flag to mark non-allocable sections that should go to the primary object rather than the ancillary. If you don't request an ancillary object, the Solaris ELF format is unchanged. Users who don't use ancillary objects do not pay for the feature. 
This is important, because they exist to serve a small subset of our users, and must not complicate the common case. If you do request an ancillary object, the runtime behavior of the primary object will be the same as that of a normal object. There is no added runtime cost. The primary and ancillary object together represent a logical single object. This is facilitated by the use of a single set of section headers. One can easily imagine a tool that can merge a primary and ancillary object into a single file, or the reverse. (Note that although this is an interesting intellectual exercise, we don't actually supply such a tool because there's little practical benefit above and beyond using ld to create the files). Among the benefits of this approach are: There is no need for per-file symbol tables to reflect the contents of each file. The same symbol table that would be produced for a standard object can be used. The section contents are identical in either case — there is no need to alter data to accommodate multiple files. It is very easy for a debugger to adapt to these new files, and the processing involved can be encapsulated in input/output routines. Most of the existing debugger implementation applies without modification. The limit of a 4GB 32-bit output object is now raised to 4GB of code, and 4GB of debug data. There is also the future possibility (not currently supported) to support multiple ancillary objects, each of which could contain up to 4GB of additional debug data. It must be noted however that the 32-bit DWARF debug format is itself inherently 32-bit limited, as it uses 32-bit offsets between debug sections, so the ability to employ multiple ancillary object files may not turn out to be useful. Using Ancillary Objects (From the Solaris Linker and Libraries Guide) By default, objects contain both allocable and non-allocable sections. Allocable sections are the sections that contain executable code and the data needed by that code at runtime. Non-allocable sections contain supplemental information that is not required to execute an object at runtime. These sections support the operation of debuggers and other observability tools. The non-allocable sections in an object are not loaded into memory at runtime by the operating system, and so, they have no impact on memory use or other aspects of runtime performance no matter their size. For convenience, both allocable and non-allocable sections are normally maintained in the same file. However, there are situations in which it can be useful to separate these sections. To reduce the size of objects in order to improve the speed at which they can be copied across wide area networks. To support fine grained debugging of highly optimized code requires considerable debug data. In modern systems, the debugging data can easily be larger than the code it describes. The size of a 32-bit object is limited to 4 Gbytes. In very large 32-bit objects, the debug data can cause this limit to be exceeded and prevent the creation of the object. To limit the exposure of internal implementation details. Traditionally, objects have been stripped of non-allocable sections in order to address these issues. Stripping is effective, but destroys data that might be needed later. The Solaris link-editor can instead write non-allocable sections to an ancillary object. This feature is enabled with the -z ancillary command line option. $ ld ... 
-z ancillary[=outfile] ...By default, the ancillary file is given the same name as the primary output object, with a .anc file extension. However, a different name can be provided by providing an outfile value to the -z ancillary option. When -z ancillary is specified, the link-editor performs the following actions. All allocable sections are written to the primary object. In addition, all non-allocable sections containing one or more input sections that have the SHF_SUNW_PRIMARY section header flag set are written to the primary object. All remaining non-allocable sections are written to the ancillary object. The following non-allocable sections are written to both the primary object and ancillary object. .shstrtab The section name string table. .symtab The full non-dynamic symbol table. .symtab_shndx The symbol table extended index section associated with .symtab. .strtab The non-dynamic string table associated with .symtab. .SUNW_ancillary Contains the information required to identify the primary and ancillary objects, and to identify the object being examined. The primary object and all ancillary objects contain the same array of sections headers. Each section has the same section index in every file. Although the primary and ancillary objects all define the same section headers, the data for most sections will be written to a single file as described above. If the data for a section is not present in a given file, the SHF_SUNW_ABSENT section header flag is set, and the sh_size field is 0. This organization makes it possible to acquire a full list of section headers, a complete symbol table, and a complete list of the primary and ancillary objects from either of the primary or ancillary objects. The following example illustrates the underlying implementation of ancillary objects. An ancillary object is created by adding the -z ancillary command line option to an otherwise normal compilation. The file utility shows that the result is an executable named a.out, and an associated ancillary object named a.out.anc. $ cat hello.c #include <stdio.h> int main(int argc, char **argv) { (void) printf("hello, world\n"); return (0); } $ cc -g -zancillary hello.c $ file a.out a.out.anc a.out: ELF 32-bit LSB executable 80386 Version 1 [FPU], dynamically linked, not stripped, ancillary object a.out.anc a.out.anc: ELF 32-bit LSB ancillary 80386 Version 1, primary object a.out $ ./a.out hello worldThe resulting primary object is an ordinary executable that can be executed in the usual manner. It is no different at runtime than an executable built without the use of ancillary objects, and then stripped of non-allocable content using the strip or mcs commands. As previously described, the primary object and ancillary objects contain the same section headers. To see how this works, it is helpful to use the elfdump utility to display these section headers and compare them. The following table shows the section header information for a selection of headers from the previous link-edit example. 
Index Section Name Type Primary Flags Ancillary Flags Primary Size Ancillary Size 13 .text PROGBITS ALLOC EXECINSTR ALLOC EXECINSTR SUNW_ABSENT 0x131 0 20 .data PROGBITS WRITE ALLOC WRITE ALLOC SUNW_ABSENT 0x4c 0 21 .symtab SYMTAB 0 0 0x450 0x450 22 .strtab STRTAB STRINGS STRINGS 0x1ad 0x1ad 24 .debug_info PROGBITS SUNW_ABSENT 0 0 0x1a7 28 .shstrtab STRTAB STRINGS STRINGS 0x118 0x118 29 .SUNW_ancillary SUNW_ancillary 0 0 0x30 0x30 The data for most sections is only present in one of the two files, and absent from the other file. The SHF_SUNW_ABSENT section header flag is set when the data is absent. The data for allocable sections needed at runtime are found in the primary object. The data for non-allocable sections used for debugging but not needed at runtime are placed in the ancillary file. A small set of non-allocable sections are fully present in both files. These are the .SUNW_ancillary section used to relate the primary and ancillary objects together, the section name string table .shstrtab, as well as the symbol table.symtab, and its associated string table .strtab. It is possible to strip the symbol table from the primary object. A debugger that encounters an object without a symbol table can use the .SUNW_ancillary section to locate the ancillary object, and access the symbol contained within. The primary object, and all associated ancillary objects, contain a .SUNW_ancillary section that allows all the objects to be identified and related together. $ elfdump -T SUNW_ancillary a.out a.out.anc a.out: Ancillary Section: .SUNW_ancillary index tag value [0] ANC_SUNW_CHECKSUM 0x8724 [1] ANC_SUNW_MEMBER 0x1 a.out [2] ANC_SUNW_CHECKSUM 0x8724 [3] ANC_SUNW_MEMBER 0x1a3 a.out.anc [4] ANC_SUNW_CHECKSUM 0xfbe2 [5] ANC_SUNW_NULL 0 a.out.anc: Ancillary Section: .SUNW_ancillary index tag value [0] ANC_SUNW_CHECKSUM 0xfbe2 [1] ANC_SUNW_MEMBER 0x1 a.out [2] ANC_SUNW_CHECKSUM 0x8724 [3] ANC_SUNW_MEMBER 0x1a3 a.out.anc [4] ANC_SUNW_CHECKSUM 0xfbe2 [5] ANC_SUNW_NULL 0 The ancillary sections for both objects contain the same number of elements, and are identical except for the first element. Each object, starting with the primary object, is introduced with a MEMBER element that gives the file name, followed by a CHECKSUM that identifies the object. In this example, the primary object is a.out, and has a checksum of 0x8724. The ancillary object is a.out.anc, and has a checksum of 0xfbe2. The first element in a .SUNW_ancillary section, preceding the MEMBER element for the primary object, is always a CHECKSUM element, containing the checksum for the file being examined. The presence of a .SUNW_ancillary section in an object indicates that the object has associated ancillary objects. The names of the primary and all associated ancillary objects can be obtained from the ancillary section from any one of the files. It is possible to determine which file is being examined from the larger set of files by comparing the first checksum value to the checksum of each member that follows. Debugger Access and Use of Ancillary Objects Debuggers and other observability tools must merge the information found in the primary and ancillary object files in order to build a complete view of the object. This is equivalent to processing the information from a single file. This merging is simplified by the primary object and ancillary objects containing the same section headers, and a single symbol table. The following steps can be used by a debugger to assemble the information contained in these files. 
Starting with the primary object, or any of the ancillary objects, locate the .SUNW_ancillary section. The presence of this section identifies the object as part of an ancillary group, contains information that can be used to obtain a complete list of the files and determine which of those files is the one currently being examined. Create a section header array in memory, using the section header array from the object being examined as an initial template. Open and read each file identified by the .SUNW_ancillary section in turn. For each file, fill in the in-memory section header array with the information for each section that does not have the SHF_SUNW_ABSENT flag set. The result will be a complete in-memory copy of the section headers with pointers to the data for all sections. Once this information has been acquired, the debugger can proceed as it would in the single file case, to access and control the running program. Note - The ELF definition of ancillary objects provides for a single primary object, and an arbitrary number of ancillary objects. At this time, the Oracle Solaris link-editor only produces a single ancillary object containing all non-allocable sections. This may change in the future. Debuggers and other observability tools should be written to handle the general case of multiple ancillary objects. ELF Implementation Details (From the Solaris Linker and Libraries Guide) To implement ancillary objects, it was necessary to extend the ELF format to add a new object type (ET_SUNW_ANCILLARY), a new section type (SHT_SUNW_ANCILLARY), and 2 new section header flags (SHF_SUNW_ABSENT, SHF_SUNW_PRIMARY). In this section, I will detail these changes, in the form of diffs to the Solaris Linker and Libraries manual. Part IV ELF Application Binary Interface Chapter 13: Object File Format Object File Format Edit Note: This existing section at the beginning of the chapter describes the ELF header. There's a table of object file types, which now includes the new ET_SUNW_ANCILLARY type. e_type Identifies the object file type, as listed in the following table. NameValueMeaning ET_NONE0No file type ET_REL1Relocatable file ET_EXEC2Executable file ET_DYN3Shared object file ET_CORE4Core file ET_LOSUNW0xfefeStart operating system specific range ET_SUNW_ANCILLARY0xfefeAncillary object file ET_HISUNW0xfefdEnd operating system specific range ET_LOPROC0xff00Start processor-specific range ET_HIPROC0xffffEnd processor-specific range Sections Edit Note: This overview section defines the section header structure, and provides a high level description of known sections. It was updated to define the new SHF_SUNW_ABSENT and SHF_SUNW_PRIMARY flags and the new SHT_SUNW_ANCILLARY section. ... sh_type Categorizes the section's contents and semantics. Section types and their descriptions are listed in Table 13-5. sh_flags Sections support 1-bit flags that describe miscellaneous attributes. Flag definitions are listed in Table 13-8. ... Table 13-5 ELF Section Types, sh_type NameValue . . . SHT_LOSUNW0x6fffffee SHT_SUNW_ancillary0x6fffffee . . . ... SHT_LOSUNW - SHT_HISUNW Values in this inclusive range are reserved for Oracle Solaris OS semantics. SHT_SUNW_ANCILLARY Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section. ... Table 13-8 ELF Section Attribute Flags NameValue . . . 
SHF_MASKOS0x0ff00000 SHF_SUNW_NODISCARD0x00100000 SHF_SUNW_ABSENT0x00200000 SHF_SUNW_PRIMARY0x00400000 SHF_MASKPROC0xf0000000 . . . ... SHF_SUNW_ABSENT Indicates that the data for this section is not present in this file. When ancillary objects are created, the primary object and any ancillary objects, will all have the same section header array, to facilitate merging them to form a complete view of the object, and to allow them to use the same symbol tables. Each file contains a subset of the section data. The data for allocable sections is written to the primary object while the data for non-allocable sections is written to an ancillary file. The SHF_SUNW_ABSENT flag is used to indicate that the data for the section is not present in the object being examined. When the SHF_SUNW_ABSENT flag is set, the sh_size field of the section header must be 0. An application encountering an SHF_SUNW_ABSENT section can choose to ignore the section, or to search for the section data within one of the related ancillary files. SHF_SUNW_PRIMARY The default behavior when ancillary objects are created is to write all allocable sections to the primary object and all non-allocable sections to the ancillary objects. The SHF_SUNW_PRIMARY flag overrides this behavior. Any output section containing one more input section with the SHF_SUNW_PRIMARY flag set is written to the primary object without regard for its allocable status. ... Two members in the section header, sh_link, and sh_info, hold special information, depending on section type. Table 13-9 ELF sh_link and sh_info Interpretation sh_typesh_linksh_info . . . SHT_SUNW_ANCILLARY The section header index of the associated string table. 0 . . . Special Sections Edit Note: This section describes the sections used in Solaris ELF objects, using the types defined in the previous description of section types. It was updated to define the new .SUNW_ancillary (SHT_SUNW_ANCILLARY) section. Various sections hold program and control information. Sections in the following table are used by the system and have the indicated types and attributes. Table 13-10 ELF Special Sections NameTypeAttribute . . . .SUNW_ancillarySHT_SUNW_ancillaryNone . . . ... .SUNW_ancillary Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section for details. ... Ancillary Section Edit Note: This new section provides the format reference describing the layout of a .SUNW_ancillary section and the meaning of the various tags. Note that these sections use the same tag/value concept used for dynamic and capabilities sections, and will be familiar to anyone used to working with ELF. In addition to the primary output object, the Solaris link-editor can produce one or more ancillary objects. Ancillary objects contain non-allocable sections that would normally be written to the primary object. When ancillary objects are produced, the primary object and all of the associated ancillary objects contain a SHT_SUNW_ancillary section, containing information that identifies these related objects. Given any one object from such a group, the ancillary section provides the information needed to identify and interpret the others. This section contains an array of the following structures. See sys/elf.h. 
typedef struct { Elf32_Word a_tag; union { Elf32_Word a_val; Elf32_Addr a_ptr; } a_un; } Elf32_Ancillary; typedef struct { Elf64_Xword a_tag; union { Elf64_Xword a_val; Elf64_Addr a_ptr; } a_un; } Elf64_Ancillary; For each object with this type, a_tag controls the interpretation of a_un. a_val These objects represent integer values with various interpretations. a_ptr These objects represent file offsets or addresses. The following ancillary tags exist. Table 13-NEW1 ELF Ancillary Array Tags NameValuea_un ANC_SUNW_NULL0Ignored ANC_SUNW_CHECKSUM1a_val ANC_SUNW_MEMBER2a_ptr ANC_SUNW_NULL Marks the end of the ancillary section. ANC_SUNW_CHECKSUM Provides the checksum for a file in the c_val element. When ANC_SUNW_CHECKSUM precedes the first instance of ANC_SUNW_MEMBER, it provides the checksum for the object from which the ancillary section is being read. When it follows an ANC_SUNW_MEMBER tag, it provides the checksum for that member. ANC_SUNW_MEMBER Specifies an object name. The a_ptr element contains the string table offset of a null-terminated string, that provides the file name. An ancillary section must always contain an ANC_SUNW_CHECKSUM before the first instance of ANC_SUNW_MEMBER, identifying the current object. Following that, there should be an ANC_SUNW_MEMBER for each object that makes up the complete set of objects. Each ANC_SUNW_MEMBER should be followed by an ANC_SUNW_CHECKSUM for that object. A typical ancillary section will therefore be structured as: TagMeaning ANC_SUNW_CHECKSUMChecksum of this object ANC_SUNW_MEMBERName of object #1 ANC_SUNW_CHECKSUMChecksum for object #1 . . . ANC_SUNW_MEMBERName of object N ANC_SUNW_CHECKSUMChecksum for object N ANC_SUNW_NULL An object can therefore identify itself by comparing the initial ANC_SUNW_CHECKSUM to each of the ones that follow, until it finds a match. Related Other Work The GNU developers have also encountered the need/desire to support separate debug information files, and use the solution detailed at http://sourceware.org/gdb/onlinedocs/gdb/Separate-Debug-Files.html. At the current time, the separate debug file is constructed by building the standard object first, and then copying the debug data out of it in a separate post processing step, Hence, it is limited to a total of 4GB of code and debug data, just as a single object file would be. They are aware of this, and I have seen online comments indicating that they may add direct support for generating these separate files to their link-editor. It is worth noting that the GNU objcopy utility is available on Solaris, and that the Studio dbx debugger is able to use these GNU style separate debug files even on Solaris. Although this is interesting in terms giving Linux users a familiar environment on Solaris, the 4GB limit means it is not an answer to the problem of very large 32-bit objects. We have also encountered issues with objcopy not understanding Solaris-specific ELF sections, when using this approach. The GNU community also has a current effort to adapt their DWARF debug sections in order to move them to separate files before passing the relocatable objects to the linker. The details of Project Fission can be found at http://gcc.gnu.org/wiki/DebugFission. The goal of this project appears to be to reduce the amount of data seen by the link-editor. The primary effort revolves around moving DWARF data to separate .dwo files so that the link-editor never encounters them. 
The details of modifying the DWARF data to be usable in this form are involved — please see the above URL for details.

    Read the article

  • Software Architecture and Software Architecture Evaluation

    How many of us have worked at places where the concept of software architecture was ridiculed for wasting time and money? Even more ridiculous to them was the concept of evaluating software architecture. The next time I am in this situation (and I hope I never am again), I will have to push for this methodology in the software development life cycle. I have spent far too many hours, days, months, and years working on poorly architected systems, or on systems that were just built ad hoc. This must stop in software development. I can understand how systems get like this: overzealous sales staff, demanding management that wants everything yesterday, and project managers asking whether things are done before the project has even started. But seriously, some time must be spent designing the applications that we write, and evaluating the architecture so that it will integrate well within the existing systems of an organization. If placed in this situation again, I will strive to gain buy-in from key players within the business, for example senior software engineers and developers, software architects, project managers, software quality assurance, technical services, operations, and finance, in order for this idea to succeed with upper management. To convince these key players, I will have to show them the benefits of architecture and the even greater benefits of evaluating software architecture at a system-wide level.

    Benefits of Software Architecture Evaluation:
    - Places stakeholders in the same room to communicate
    - Ensures delivery of detailed quality goals
    - Prioritizes conflicting goals
    - Requires clear explication
    - Improves the quality of documentation
    - Discovers opportunities for cross-project reuse
    - Improves architecture practices

    Only once I had key-player buy-in would I approach upper management with my plan for implementing the concept of software architecture and using evaluation to ensure that the software being designed is the proper architecture for the project. In addition to the benefits listed above, I would also show upper management how much time is being wasted by not doing these evaluations. For example, if project X cost us Y amount, then why do we have several implementations of X in various forms, and how much money and time could we have saved if we had just reused the existing code base to give each system the same functionality that was already created? After that, I would ask what would happen if we had 50 instances of this situation. Then I would show them how the software architecture evaluation process would have prevented this, and how the organization could have leveraged its existing code base to increase the speed and quality of its development.

    References: Carnegie Mellon Software Engineering Institute (2011). Architecture Tradeoff Analysis Method. http://www.sei.cmu.edu/architecture/tools/evaluate/atam.cfm

    Read the article

  • Setup MSSQL replication with peer to peer topology: problem setting up Conflict Detection

    - by Roel
    Hi, I'm setting up a SQL Replication strategy, using MSSQL2008 with peer-to-peer publications (2 servers, each one subscribes to the other). I followed this HOWTO from MSDN, and the setup seems to be working fine: add a record to one table on server A, query on server B shows the new record. So far, so good. So far I only have one table 'Templates': Id PK (calculated field) NodeId int default 1/2 (Server A = 1, Server B = 2) LocalId int autoid Name nvarchar(100) Now, I would like to enable 'Conflict detection', which should be enabled by default. But every time I try to save the 'Conflict Detection' feature in the Publication Properties I get the following error: Cannot save Peer conflict detection properties. An exception occurred while executing a Transact-SQL statement or batch.(Microsoft.SqlServer.ConnectionInfo) Program Location: at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(String sqlCommand, ExecutionTypes executionType) at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(String sqlCommand) at Microsoft.SqlServer.Replication.ReplicationObject.ExecCommand(String commandIn) at Microsoft.SqlServer.Replication.TransPublication.SetPeerConflictDetection(Boolean enablePeerConflictDetection, Int32 peerOriginatorID) at Microsoft.SqlServer.Management.UI.PubPropSubscriptionOptions.SaveP2PConflictDetection() at Microsoft.SqlServer.Management.UI.PubPropSubscriptionOptions.SaveProperties(ExecutionMode& executionResult) Column name 'Id' does not exist in the target table or view. Changed database context to 'TestDB'. (.Net SqlClient Data Provider) For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.00.2531&EvtSrc=MSSQLServer&EvtID=1911&LinkId=20476 Server Name: SERVER_A Error Number: 1911 Severity: 16 State: 1 Line Number: 2 Program Location: at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlCommand.RunExecuteNonQueryTds(String methodName, Boolean async) at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe) at System.Data.SqlClient.SqlCommand.ExecuteNonQuery() at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(String sqlCommand, ExecutionTypes executionType) Now, I googled the hell out of this error, and nothing shows up. I also can't seem to find out what the exact target table of the error "Column name 'Id' does not exist..." is. Has anyone every done this successfully? Am I missing something? Having this setup without conflict detection feels pretty useless... EDIT OK, so after some more research and setting up with different databases etc, I found out that the calculated 'Id' column of the Templates table is the culprit. I don't know why, but the replication doesn't seem to allow calculated columns (which are also primary key). It works now too, without the 'Id' column, and using the NodeId and LocalId as a combined PK. So now the question is, why isn't it allowed to have a calculated column as PK for replication with conflict detection?
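
    Since the fix that worked here was dropping the computed Id column and keying the table on NodeId plus LocalId, a minimal sketch of that reworked table might look like the following (column names follow the question; the constraint names and the NOT FOR REPLICATION choice are assumptions, not something from the original post):

        CREATE TABLE dbo.Templates
        (
            -- 1 on server A, 2 on server B, so the pair (NodeId, LocalId) is unique across peers
            NodeId   int NOT NULL CONSTRAINT DF_Templates_NodeId DEFAULT (1),
            -- NOT FOR REPLICATION lets the replication agent insert the originating row's identity value unchanged
            LocalId  int IDENTITY(1,1) NOT FOR REPLICATION NOT NULL,
            Name     nvarchar(100) NULL,
            CONSTRAINT PK_Templates PRIMARY KEY (NodeId, LocalId)
        );

    Keeping the key in plain columns sidesteps the "Column name 'Id' does not exist" error, since the conflict-detection setup script apparently expects every primary key column to be a real column it can reference in the target table.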

    Read the article

  • Adding Descriptive Flex Field (DFF) through OAF Personalization

    - by Manoj Madhusoodanan
    In this blog I will explain how to add a DFF to an existing OAF page through personalization. I am using the Supplier Quick Update Page ( /oracle/apps/pos/supplier/webui/SuppSummPG ). If you want to see how to create a DFF please click here. In this scenario I am using a custom DFF. Following are the details: Application -> Payables ( Code: SQLAP ); Name -> XXCUST_SUPPLIER_DFF; Title -> XXCUST - Supplier DFF; Table Name -> AP_SUPPLIERS; DFV View Name -> XXCUST_SUPPLIER_DFV; Reference Fields -> ATTRIBUTE_CATEGORY. Following are the Context Field details: Prompt -> Supplier Type; Value Set -> XXCUST_SUP_TYPE ( Values: External and Internal ); Reference Field -> ATTRIBUTE_CATEGORY. The table below shows the segment details of XXCUST_SUPPLIER_DFF ( Code / Segment / Column / Value Set ): Global Data Elements / Identification Number / ATTRIBUTE1 / 15 Characters; External / Type / ATTRIBUTE2 / XXCUST_EXT_SUP_TYPE ( Values: Domestic, International ); Internal / Department / ATTRIBUTE2 / 15 Characters. Perform the following steps to create the flex item in the Quick Update page. 1) Click on Personalize Page. In the Personalize Page click on Complete View. 2) Click on Create Item ( based on where you want to place the DFF, choose the appropriate layout ). 3) Create the flex item with the following details. 4) If you want to arrange the item in the page, click on Reorder. Following is the output.

    Read the article

  • Keywords Optimization For Website Optimization

    Saying that you need to do website optimization sounds like saying you need to get healthy. To get healthy we do 2 things: diet management and exercise. Let's start with diet management. Keywords are like food for your web pages. This article explains the role of keywords in website optimization.

    Read the article

  • Guidance and Pricing for MSDN 2010

    - by John Alexander
    Sorry for the rather lengthy post here. I get asked this all the time so I decided to post it…Visual Studio 2010 editions will be available on April 12, 2010. Product Features Professional with MSDN Essentials Professional with MSDN Premium with MSDN Ultimate with MSDN Test Professional with MSDN Debugging and Diagnostics IntelliTrace (Historical Debugger)         Static Code Analysis       Code Metrics       Profiling       Debugger   Testing Tools Unit Testing   Code Coverage       Test Impact Analysis       Coded UI Test       Web Performance Testing         Load Testing1         Microsoft Test Manager 2010       Test Case Management2       Manual Test Execution       Fast-Forward for Manual Testing       Lab Management Configuration3       Integrated Development Environment Multiple Monitor Support   Multi-Targeting   One Click Web Deployment   JavaScript and jQuery Support   Extensible WPF-Based Environment Database Development Database Deployment       Database Change Management2       Database Unit Testing       Database Test Data Generation       Data Access   Development Platform Support Windows Development   Web Development   Office and SharePoint Development   Cloud Development   Customizable Development Experience   Architecture and Modeling Architecture Explorer         UML® 2.0 Compliant Diagrams (Activity, Use Case, Sequence, Class, Component)         Layer Diagram and Dependency Validation         Read-only diagrams (UML, Layer, DGML Graphs)         Lab Management Virtual environment setup & tear down3       Provision environment from template3       Checkpoint environment3       Team Foundation Server Version Control2   Work Item Tracking2   Build Automation2   Team Portal2   Reporting & Business Intelligence2   Agile Planning Workbook2   Microsoft Visual Studio Team Explorer 2010   Test Case Management2       MSDN Subscription – Software and Services for Production Use Windows Azure Platform 20 hrs/mo † 50 hrs/mo † 100 hrs/mo † 250 hrs/mo † n/a Microsoft Visual Studio Team Foundation Server 2010   Microsoft Visual Studio Team Foundation Server 2010 CAL   1 1 1 1 Microsoft Expression Studio 3       Microsoft Office Professional Plus 2010, Project Professional 2010, Visio Premium 2010 (following Office 2010 launch)       MSDN Subscription – Software for Development and Testing 4 Windows 7, Windows Server 2008 R2 and SQL Server 2008 Toolkits, Software Development Kits, Driver Development Kits Previous versions of Windows (client and server operation systems)   Previous versions of Microsoft SQL Server   Microsoft Office       Microsoft Dynamics       All other Servers       Windows Embedded operating systems       Teamprise         MSDN Subscription – Other Benefits Technical support incidents 0 2 4 4 2 Priority support in MSDN Forums Microsoft e-learning collections (typically 10 courses or 20 hours) 0 1 2 2 1 MSDN Flash newsletter MSDN Online Concierge MSDN Magazine   System Requirements View View View View View Buy from (MSRP) $799 $1,199 $5,469 $11,899 $2,169 Renew from (MSRP) $549 (upgrade) $799 $2,299 $3,799 $899 † Availability varies by country and subscription level.  Details available on the MSDN site 1. May require one or more Microsoft Visual Studio Load Test Virtual User Pack 2010 2. Requires Team Foundation Server and a Team Foundation Server CAL 3. Requires Microsoft Visual Studio Lab Management 2010 4. Per-user license allows unlimited installations and use for designing, developing, testing, and demonstrating applications. 
UML is a registered trademark of Object Management Group, Inc. Windows is either a registered trademark or trademark of Microsoft Corporation in the United States and/or other countries.

    Read the article

  • Devoxx 2011 Trip Report + Pictures

    - by arungupta
    3350 attendees from 40 countries lived in "paradise" for 5 days last week. This paradise had 170+ rock star speakers delivering 200+ hours of technical content in about 150 sessions. And it truly was a paradise with a clear differentiation from other Java conferences. There were several Oracle speakers at the paradise covering the entire gamut of the Java platform. I delivered a Java EE 6 hands-on lab (new content), showcased Java EE 7 and GlassFish 4.0 early work at the keynote, and participated in a panel to talk about Contexts and Dependency Injection. The demo in the keynote showed how to deploy a Java EE application in a managed environment. The demo showed a Conference Planner application that can be used by conference organizers to display sessions, tracks, and speaker information. This same application can be deployed and display data from JavaOne 2011 or Devoxx 2011 based upon the SQL chosen for database initialization. If javaone-sf-2011.sql is chosen for database initialization then the application looks as shown: If devoxx-2011.sql is chosen then the application looks as shown: And of course, clicking on Tracks, Speakers, Sessions shows you information from the respective conference. The complete source code for the application and detailed instructions are available at glassfish.org/javaone2011. In short: download the sample app and unzip it; download GlassFish build b05; download the platform-specific Load Balancer template; run "bin/install.sh" to configure GlassFish; pick javaone-sf-2011.sql or devoxx-2011.sql for database initialization. You can also watch the application in action in this video: A piece of breaking news shared at the conference was that Devoxx France is coming from April 18-20 and 75% of the talks will be in French. Stay tuned for more details on that. I'm sure Antonio and gang will put up a great show out there! Just a tip for first-timers to Devoxx ... A bus runs from Brussels airport to Antwerp city center between 4am - 11pm at the top of every hour, takes about 45 minutes, and costs 10 euros (cash only). Take tram #6 (going towards Luchtbal) from Astrid station (next to the city center) and get off at the last station for Metropolis. It takes about 15 minutes. Purchase a day pass at the station using the kiosks (much cheaper) or you can buy one on the bus as well (about double the price). Either way, cash only. Here are a few pictures captured from the event: And the complete album here: Thank you Stephan for giving me an opportunity to speak at my first Devoxx. I hope to be back next year, just in time for Java EE 7 going final!

    Read the article

  • Error trapping for a missing data source in a Spring MVC / Spring JDBC web app [migrated]

    - by Geeb
    I have written a web app that uses Spring MVC libraries and Spring JDBC to connect to an Oracle DB. (I don't use any ORM type libraries as I create stored procedures on Oracle that do my stuff and I'm quite happy with that.) I use a connection pool to Oracle managed by the Tomcat container The app generally works absolutely fine by the way! BUT... I noticed the other day when I tried to set up the app on another Tomcat instance that I had forgotten to configure the connection pool and obviously the app could not get hold of an org.apache.commons.dbcp.BasicDataSource object, so it crashed. I define the pool params in the tomcat "context.conf" In my "web.xml" I have: <servlet> <servlet-name>appServlet</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> <init-param> <param-name>contextConfigLocation</param-name> <param-value>/WEB-INF/Spring/appServlet/servlet-context.xml</param-value> </init-param> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>appServlet</servlet-name> <!-- Map *everything* to appServlet --> <url-pattern>/</url-pattern> </servlet-mapping> <resource-ref> <description>Oracle Datasource example</description> <res-ref-name>jdbc/ora1</res-ref-name> <res-type>org.apache.commons.dbcp.BasicDataSource</res-type> <res-auth>Container</res-auth> </resource-ref> And I have a Spring "servlet-context.xml" where JNDI is used to map the data source object provided by the connection pool to a Spring bean with the ID of "dataSource": <jee:jndi-lookup id="dataSource" jndi-name="java:comp/env/jdbc/ora1" resource-ref="true" /> Here's the question: Where do I trap the case where the database cannot be accessed for whatever reason? I don't want the user to see a yard-and-a-half of Java stack trace in their browser, rather a nicer message that tells them there is a database problem etc. It seems that my app tries to configure the "dataSource" bean (in "servlet-context.xml") before any code has tested it can actually provide a dataSource object from the pool?! Maybe I'm not fully understanding exactly what is going on in these stages of the app firing up ... Thanks for any advice!
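    One common approach for the request-time case (a sketch under assumptions, not the poster's code: the base class and the "dbError" view name are made up here) is to let Spring MVC route data-access failures to a friendly error view with an @ExceptionHandler method, so a connection problem surfaces as a page rather than a stack trace. A missing JNDI resource definition, by contrast, fails at context startup before any controller runs, and that case is usually covered by an <error-page> entry in web.xml.

        import org.springframework.dao.DataAccessException;
        import org.springframework.web.bind.annotation.ExceptionHandler;
        import org.springframework.web.servlet.ModelAndView;

        // Hypothetical base class for the application's controllers; extend it from
        // each @Controller so database failures share one friendly error page.
        public abstract class DatabaseAwareController {

            // Spring invokes this when a handler method (or the JdbcTemplate call it
            // makes) throws a DataAccessException, e.g. the pool cannot hand out a
            // connection because Oracle is down or the datasource is misconfigured.
            @ExceptionHandler(DataAccessException.class)
            public ModelAndView handleDatabaseProblem(DataAccessException ex) {
                ModelAndView mav = new ModelAndView("dbError"); // assumed view, e.g. /WEB-INF/views/dbError.jsp
                mav.addObject("message",
                        "The database is currently unavailable. Please try again later.");
                return mav;
            }
        }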

    Read the article

  • What is the difference between a Facebook Group and a Facebook Page

    - by jmort253
    I created a Project Management Stack Exchange Facebook Page to help promote the new Project Management Stack Exchange Site. One of the users suggested a Facebook Group instead. I use Facebook, and I've searched the help pages, but it's not immediately clear to me what the advantages and disadvantages are. Can you provide a list of uses for Groups and uses for Facebook Pages and perhaps some links to resources that help differentiate one from the other?

    Read the article

  • Recruitment Drive - Things Don't Always Go As Planned - Stay Flexible by Kalyan Neelagiri

    - by david.talamelli
    I am one of the Recruiters for Oracle and work in our India Recruitment Team. When we are hiring for multiple positions we often hold Recruitment Events to interview a large number of people as effectively as possible. These events are often held on the weekend as many people are not free to attend an all-day event during the working week. Just recently, during a recruitment campaign we were running, I was tasked to set up a Recruitment Event for some roles we were hiring for. I have set up and run weekend recruitment events in the past which have all run smoothly. However, this time arranging the recruitment event was quite a challenge for me. The planned event was taking place on a Saturday. I had almost sent out the confirmed schedule of candidates to the respective hiring team on Friday and was on track for the event to take place, but unfortunately there was breaking news in the media that a strike had been called in the city because of political agitations and protests taking place on the event day. The hiring manager rushed to me asking for my thoughts and ideas. I was in two minds about what to do. On one hand I was not ready to cancel the event because of all the work that so many people had put into getting this prepared, and I also did not want to reschedule the event at the last minute if I did not need to. On the other hand I understood it might be best to reschedule the event as people might not be able to attend because of the political protests taking place on the day. In the end I decided to gather and check other options, because the strike might cause confusion and make it a problem for the scheduled candidates to drive to the venue. So we concluded we should reschedule, and moved the event to the next week. The good news is that we successfully executed this recruitment drive the following Saturday. We were glad that 100% of the candidates were able to make it to the new interview date, and despite all the agitations in the city we were successful in hiring people for all the roles we had open. Things do not always go as planned. The best laid plans can sometimes be for nought based on external factors outside of our control. What this experience has taught me is that rather than focus on the negatives when you are thrown a curveball, the best approach is to stay flexible and focus on finding ways to reach your outcome. Your plans may need to change but you can still achieve the results you are after if you have the right mindset.

    Read the article

  • Need For Cost Effective Zen Cart Customization Services

    Zen Cart is an open source online store management system, which is ideal for online retail stores. It is a PHP-based content management system that uses HTML components backed by the robust and scalable features of the MySQL database. E-business owners have adopted the shopping cart as a model for their online retail stores.

    Read the article

  • JNDI Datasource Problem on Tomcat 6, Hibernate

    - by Asuman AKYILDIZ
    I am using Tomcat 6 as the application server, Struts-Hibernate and MyEclipse 6.0. My application uses the JDBC driver but I need to modify it to use a JNDI Datasource. I followed the steps described in the Tomcat 6.0 HOWTO tutorial. I defined my resource in the Tomcat conf: <Resource name="jdbc/ats" global="jdbc/ats" auth="Container" type="javax.sql.DataSource" driverClassName="oracle.jdbc.OracleDriver" url="jdbc:oracle:thin:@//localhost:1521/MISDEV" username="TEST" password="TEST" maxActive="20" maxIdle="10" maxWait="-1" validationQuery="SELECT 1 from dual" removeAbandoned="true" removeAbandonedTimeout="30" logAbandoned="false"/> I added the reference in my application web.xml: <resource-ref> <description>Oracle Datasource example</description> <res-ref-name>jdbc/ats</res-ref-name> <res-type>javax.sql.DataSource</res-type> <res-auth>Container</res-auth> </resource-ref> And I defined the datasource and dialect in my hibernate.cfg.xml: <property name="connection.datasource">java:comp/env/jdbc/ats</property> <property name="dialect">org.hibernate.dialect.Oracle9Dialect</property> But when I create the Hibernate session, it cannot open the connection: 09:18:11,322 ERROR JDBCExceptionReporter:72 - Connections could not be acquired from the underlying database! org.hibernate.exception.GenericJDBCException: Cannot open connection I also tried to set the properties at runtime: Configuration configuration = new Configuration(); configuration.setProperty("hibernate.dialect", "org.hibernate.dialect.Oracle9Dialect"); //configuration.setProperty("hibernate.connection.datasource", "java:comp/env/jdbc/ats"); configuration.setProperty("hibernate.current_session_context_class", "thread"); configuration.setProperty("hibernate.connection.provider_class", "org.hibernate.connection.C3P0ConnectionProvider"); configuration.setProperty("hibernate.show_sql", "true"); sessionFactory = configuration.configure().buildSessionFactory(); It still does not open the connection. But when I use the JDBC driver it works: Configuration configuration = new Configuration(); configuration.setProperty("hibernate.dialect", "org.hibernate.dialect.Oracle9Dialect"); //configuration.setProperty("hibernate.connection.datasource", "java:comp/env/jdbc/ats"); configuration.setProperty("hibernate.connection.url", "jdbc:oracle:thin:@//localhost:1521/MISDEV"); configuration.setProperty("hibernate.connection.username", "test"); configuration.setProperty("hibernate.connection.password", "test"); configuration.setProperty("hibernate.connection.driver_class", "oracle.jdbc.OracleDriver"); configuration.setProperty("hibernate.transaction.factory_class", "org.hibernate.transaction.JDBCTransactionFactory"); configuration.setProperty("hibernate.current_session_context_class", "thread"); configuration.setProperty("hibernate.connection.provider_class", "org.hibernate.connection.C3P0ConnectionProvider"); configuration.setProperty("hibernate.show_sql", "true"); sessionFactory = configuration.configure().buildSessionFactory(); I have been searching for 3 days with no success. What may be the problem?
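    A sketch that may help narrow this down (my own illustration, not part of the question): look up the JNDI datasource directly from a ServletContextListener and borrow one connection, so a broken Tomcat resource definition can be separated from a broken Hibernate configuration. It may also be worth checking the provider_class setting: with a container-managed JNDI datasource Hibernate normally uses its DatasourceConnectionProvider, whereas C3P0ConnectionProvider expects hibernate.connection.url/username/password to be present.

        import java.sql.Connection;
        import javax.naming.InitialContext;
        import javax.servlet.ServletContextEvent;
        import javax.servlet.ServletContextListener;
        import javax.sql.DataSource;

        // Hypothetical listener (register it with a <listener> element in web.xml);
        // it runs inside the container, so the java:comp/env namespace is visible.
        public class DataSourceSmokeTest implements ServletContextListener {

            public void contextInitialized(ServletContextEvent sce) {
                Connection conn = null;
                try {
                    // Same JNDI name as referenced in web.xml and hibernate.cfg.xml
                    DataSource ds = (DataSource)
                            new InitialContext().lookup("java:comp/env/jdbc/ats");
                    conn = ds.getConnection(); // borrows one connection from the Tomcat pool
                    sce.getServletContext().log("JNDI datasource jdbc/ats is reachable");
                } catch (Exception e) {
                    sce.getServletContext().log("JNDI datasource jdbc/ats NOT reachable", e);
                } finally {
                    if (conn != null) {
                        try { conn.close(); } catch (Exception ignore) { }
                    }
                }
            }

            public void contextDestroyed(ServletContextEvent sce) {
                // nothing to clean up
            }
        }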

    Read the article

  • Quickie Guide Getting Java Embedded Running on Raspberry Pi

    - by hinkmond
    Gary C. and I did a Bay Area Java User Group presentation of how to get Java Embedded running on a RPi. See: here. But, if you want the Quickie Guide on how to get Java up and running on the RPi, then follow these steps (which I'm doing right now as we speak, since I got my RPi in the mail on Monday. Woo-hoo!!!). So, follow along at home as I do the same steps here on my board... 1. Download the Win32DiskImager if you are on Windows, or use dd on a Linux PC: https://launchpad.net/win32-image-writer/0.6/0.6/+download/win32diskimager-binary.zip 2. Download the RPi Debian Wheezy image from here: http://files.velocix.com/c1410/images/debian/7/2012-08-08-wheezy-armel/2012-08-08-wheezy-armel.zip 3. Insert a blank 4GB SD Card into your Windows or Linux PC. 4. Use either Win32DiskImager or Linux dd to burn the unzipped image from #2 to the SD Card. 5. Insert the SD Card into your RPi. Connect an Ethernet cable to your RPi to your network. Connect the RPi Power Adapter. 6. The RPi will boot onto your network. Find its IP address using Windows Wireshark or Linux: sudo tcpdump -vv -ieth0 port 67 and port 68 7. ssh to your RPi: ssh <ip_addr_rpi> -l pi <Password: "raspberry"> 8. Download Java SE Embedded: http://www.oracle.com/technetwork/java/embedded/downloads/javase/index.html NOTE: First click accept, then choose the first bundle in the list: ARMv6/7 Linux - Headless EABI, VFP, SoftFP ABI, Little Endian - ejre-7u6-fcs-b24-linux-arm-vfp-client_headless-10_aug_2012.tar.gz 9. scp the bundle from #8 to your RPi: scp <ejre-bundle> pi@<ip_addr_rpi> 10. mkdir /usr/local, untar the bundle from #9 and rename (move) the ejre1.7.0_06 directory to /usr/local/java That's it! You are ready to roll with Java Embedded on your RPi. Hinkmond
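    If you want a quick sanity check after step 10, a tiny class like the one below (my own addition, not part of the original steps) confirms the headless runtime starts; the ejre bundle has no javac, so compile it on your PC with a 1.7-compatible compiler and copy the .class file over with scp.

        // Hypothetical smoke test: compile on the host PC, copy HelloPi.class to the
        // RPi, then run it there with /usr/local/java/bin/java HelloPi
        public class HelloPi {
            public static void main(String[] args) {
                System.out.println("Java Embedded is running: "
                        + System.getProperty("java.version") + " on "
                        + System.getProperty("os.arch"));
            }
        }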

    Read the article

  • 2D Barcode Addendum

    - by Tim Dexter
    Having finally got my external drive back (long story) today from Oklahoma (thank you so much Sammy) I'm back with a full complement of Oracle and blogging tools at my disposal. I have missed JDeveloper this past week, which, I have found, I immensely prefer over Eclipse (let the flaming commence :0) I use Zoundry Raven for writing articles and it's not installed locally but on my external drive, so I have been soldiering on with the blog server's pain-in-the-backside UI for writing. Now that I have my favorite editor back and things are calming down workwise, I will start to get the Excel template posts out. Today though, a note about 2D barcode support, or more specifically any barcode that needs some data manipulation before the barcode font is applied. I wrote about these fonts a long time back and laid out the java class you would need to write if you had an algorithm from the font manufacturer to use. I missed out a valuable point and James at Luminex fell into the trap. He wanted to use the datamatrix font from IDAutomation and had built the java class to be called from the RTF template, but it was not encoding, or at least did not appear to be. New debugging feature to the rescue. Kan over at the bipconsultng blog documented the feature a while back. Just adding <?xdo-debug-level:'STATEMENT'?> to my test template generated all the debug files in my c:\temp directory. No messing with files, just a simple command ... at last! Kan has documented the feature here. With the log in hand I spotted a java error stack referencing a missing code128a method, huh? Looking at James' class he had the following snippet: ENCODERS.put("code128a",mUtility.getClass().getMethod("code128a",clazz)); ENCODERS.put("code128b",mUtility.getClass().getMethod("code128b", clazz)); ENCODERS.put("code128c",mUtility.getClass().getMethod("code128c", clazz)); ENCODERS.put("pdf417",mUtility.getClass().getMethod("pdf417", clazz)); ENCODERS.put("datamatrix",mUtility.getClass().getMethod("datamatrix", clazz)); His class did not include the other code128 and pdf417 methods and BIP was expecting them. An easy fix: just comment them out, rebuild and deploy, and the encoding started working. If you are hitting similar problems, check that class and ensure all of the referenced methods are available; if not, delete or get commenting. James now has purdy labels popping out that his hardware can read, sweet!
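    To make the fix concrete, here is a stripped-down sketch of the registration pattern (illustrative only; the class name and method signature are simplified, not the exact IDAutomation sample or BI Publisher encoder interface): the point is that every key placed in the ENCODERS map must have a matching method in the class, otherwise getMethod() fails at load time exactly as the debug log showed.

        import java.lang.reflect.Method;
        import java.util.Hashtable;

        // Simplified illustration of the encoder registration described above.
        public class DataMatrixOnlyEncoder {

            private final Hashtable<String, Method> encoders = new Hashtable<String, Method>();

            public DataMatrixOnlyEncoder() {
                try {
                    Class<?>[] clazz = new Class<?>[] { String.class };
                    // Register ONLY the encodings this class actually implements.
                    encoders.put("datamatrix", getClass().getMethod("datamatrix", clazz));
                    // code128a/b/c and pdf417 are deliberately left out: there are no
                    // such methods here, so registering them would fail at load time.
                } catch (NoSuchMethodException e) {
                    throw new RuntimeException("Encoder method is missing", e);
                }
            }

            // Placeholder for the vendor-supplied data manipulation algorithm.
            public String datamatrix(String data) {
                return data;
            }
        }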

    Read the article

  • What&rsquo;s new in VS.10 &amp; TFS.10?

    - by johndoucette
    Getting my geek on… I have decided to call the products VS.10 (Visual Studio 2010), TP.10 (Test Professional 2010), and TFS.10 (Team Foundation Server 2010). Thanks Neno Loje. What's new in Visual Studio & Team Foundation Server 2010? Focusing on Visual Studio Team System (VSTS) ALM-related parts: Visual Studio Ultimate 2010 NEW: IntelliTrace® (aka the historical debugger) NEW: Architecture Tools New Project Type: Modeling Project UML Diagrams UML Use Case Diagram UML Class Diagram UML Sequence Diagram (supports reverse engineering) UML Activity Diagram UML Component Diagram Layer Diagram (with Team Build integration for layer validation) Architecture Explorer Dependency visualization DGML Web & Load Tests Visual Studio Premium 2010 NEW: Architecture Tools Read-only model viewer Development Tools Code Analysis New Rules like SQL Injection detection Rule Sets Code Profiler Multi-Tier Profiling JScript Profiling Profiling applications on virtual machines in sampling mode Code Metrics Test Tools Code Coverage NEW: Test Impact Analysis NEW: Coded UI Test Database Tools (DB schema versioning & deployment) Visual Studio Professional 2010 Debugger Mixed Mode Debugging for 64-bit Applications Export/Import of Breakpoints and data tips Visual Studio Test Professional 2010 Microsoft Test Manager (MTM, formerly known as "Camano") Fast Forward Testing Visual Studio Team Foundation Server 2010 Work Item Tracking and Project Management New MSF templates for Agile and CMMI (V 5.0) Hierarchical Work Items Custom Work Item Link Types Ready to use Excel agile project management workbooks for managing your backlogs (including capacity planning) Convert Work Item query to an Excel report MS Excel integration Support for Work Item hierarchies Formatting is preserved after doing a 'Refresh' MS Project integration Hierarchy and successor/predecessor info is now synchronized NEW: Test Case Management Version Control Public Workspaces Branch & Merge Visualization Tracking of Changesets & Work Items Gated Check-In Team Build Build Controllers and Agents Workflow 4-based build process NEW: Lab Management (only a pre-release is available at the moment!) Project Portal & Reporting Dashboards (on SharePoint Portal) Burndown Chart TFS Web Parts (to show data from TFS) Administration & Operations Topology enhancements Application tier network load balancing (NLB) SQL Server scale out Improved SharePoint flexibility Report Server flexibility Zone support Kerberos support Separation of TFS and SQL administration Setup Separate install from configure Improved installation wizards Optional components Simplified account requirements Improved Reporting Services configuration Setup consolidation Upgrading from previous TFS versions Improved IIS flexibility Administration Consolidation of command line tools User rename support Project Collections Archive/restore individual project collections Move Team Project Collections Server consolidation Team Project Collection Split Team Project Collection Isolation Server request cancellation Licensing: TFS server license included in MSDN subscriptions Removed features (former features not part of Visual Studio 2010): Debug » Start With Application Verifier Object Test Bench IntelliSense for C++ / CLI Debugging support for SQL 2000

    Read the article

  • JSR 360 and JSR 361: A Big Leap for Java ME 8

    - by terrencebarr
    It might have gone unnoticed to some, but Java ME took a big leap forward a couple of weeks ago with the filing of two new JSRs: JSR 360: “Connected Limited Device Configuration 8″ (aka CLDC 8) JSR 361: “Java ME Embedded Profile” (aka ME EP) Together, these two JSRs will significantly update, enhance, and modernize the Java ME platform, and specifically small embedded Java, with a host of new features and functionality. JSR 360 – Connected Limited Device Configuration 8 CLDC 8 is based on JSR 139 (CLDC 1.1) and updates the core Java ME VM, language support, libraries, and features to be aligned with Java SE 8. This will include: VM updated to comply with the JVM language specification version 2 Support for SE 7/8 language features like Generics, Assertions, Annotations, Try-with-Resources, and more New libraries such as Collections, NIO subset, Logging API subset A consolidated and enhanced Generic Connection Framework for multi-protocol I/O With CLDC 8, Java ME and Java SE are entering their next phase of alignment – making Java the only technology today that truly scales application development, code re-use, and tooling across the whole range of IT platforms, from small embedded to large enterprise. JSR 361 – Java ME Embedded Profile ME EP is based on JSR 228 (IMP-NG) and updates the specification in key areas to provide a powerful and flexible application environment for small embedded Java platforms, building on the features of CLDC 8:  A new, lightweight component and services model Shared libraries Multi-application concurrency, inter-application communication, and event system Application management API optionality, to address low-footprint use cases With ME EP, application developers will have a modern application environment which allows development and deployment of  modular, robust, sophisticated, and footprint-optimized solutions for a wide range of embedded use cases and devices. Summary While these JSRs are still under development, it’s clear that there are exciting new times ahead for Java ME – turning into a serious application platform while maintaining the focus on resource-constrained devices to address the expected explosion of small, smart, and connected embedded platforms. To learn more, click on the above links for JSR 360 and JSR 361. Or review the JavaOne 2012 online presentations on the topic: CON11300: Expanding the reach of the Java ME Platform CON5943: Java ME 8 Service Platform And stay tuned for more in this space! Cheers, – Terrence Filed under: Mobile & Embedded Tagged: "jsr 360", "jsr 361", "me 8", embedded, Embedded Java, JCP

    Read the article

< Previous Page | 610 611 612 613 614 615 616 617 618 619 620 621  | Next Page >