Search Results

Search found 15870 results on 635 pages for 'oracle openworld 2013'.


  • How to (unit-)test a data-intensive PL/SQL application

    - by doom2.wad
    Our team wants to unit-test new code written under a running project that extends an existing, huge Oracle system. The system is written solely in PL/SQL and consists of thousands of tables and hundreds of stored procedure packages, mostly reading data from tables and/or inserting/updating other data. Our extension is no exception: most functions return data from a quite complex SELECT statement over many mutually bound tables (with a little added logic before returning it), or transform one complicated data structure into another (complicated in a different way). What is the best approach to unit-testing such code?

    There are no unit tests for the existing code base. To make things worse, only packages, triggers and views are source-controlled; table structures (including "alter table" changes and the necessary data transformations) are deployed via a channel other than version control. There is no way to change this within our project's scope. Maintaining a test data set seems impossible, since new code is deployed to the production environment on a weekly basis, usually without prior notice, often changing the data structure (add a column here, remove one there).

    I'd be glad for any suggestion or reference to help us. Some team members are getting tired of figuring out how to even start, since our experience with unit testing does not cover data-intensive PL/SQL legacy systems (only "from-the-book" greenfield Java projects).
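
    There are PL/SQL testing frameworks (utPLSQL is the usual suggestion), but even without one the core idea can be sketched as small assertion blocks that build a fixture, call the code under test, and compare counts. A minimal sketch, assuming a hypothetical CUSTOMER_API package and CUSTOMERS table that stand in for the real system's objects:

        DECLARE
          l_before NUMBER;
          l_after  NUMBER;
        BEGIN
          -- Baseline: whatever happens to be in the shared schema right now.
          l_before := customer_api.get_active_count;

          -- Arrange: a tiny, self-contained fixture inside this transaction.
          INSERT INTO customers (id, name, status) VALUES (900001, 'test-a', 'ACTIVE');
          INSERT INTO customers (id, name, status) VALUES (900002, 'test-b', 'ACTIVE');

          -- Act + Assert: the function should see exactly two more active rows.
          l_after := customer_api.get_active_count;
          IF l_after - l_before = 2 THEN
            dbms_output.put_line('PASS get_active_count');
          ELSE
            dbms_output.put_line('FAIL get_active_count: expected +2, got +'
                                 || (l_after - l_before));
          END IF;

          -- Leave no trace in the shared schema.
          ROLLBACK;
        END;
        /

    Asserting on deltas rather than absolute values is one way to cope with a shared schema whose contents keep changing underneath the tests.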

    Read the article

  • ASP code for uploading data

    - by vicky
    Hello everyone. I have this code for uploading an Excel file and saving the data into a database. I am not able to write the code for the database entry; someone please help.

        <%
        if (Request("FileName") <> "") Then
            Dim objUpload, lngLoop
            Response.Write(server.MapPath("."))
            If Request.TotalBytes > 0 Then
                Set objUpload = New vbsUpload
                For lngLoop = 0 to objUpload.Files.Count - 1
                    'If accessing this page anonymously,
                    'the internet guest account must have
                    'write permission to the path below.
                    objUpload.Files.Item(lngLoop).Save "D:\PrismUpdated\prism_latest\Prism\uploadxl\"
                    Response.Write "File Uploaded"
                Next

                Dim FSYSObj, folderObj, process_folder
                process_folder = server.MapPath(".") & "\uploadxl"
                set FSYSObj = server.CreateObject("Scripting.FileSystemObject")
                set folderObj = FSYSObj.GetFolder(process_folder)
                set filCollection = folderObj.Files

                Dim SQLStr
                SQLStr = "INSERT ALL INTO TABLENAME "
                for each file in filCollection
                    file_name = file.name
                    path = folderObj & "\" & file_name
                    Set objExcel_chk = CreateObject("Excel.Application")
                    Set ws1 = objExcel_chk.Workbooks.Open(path).Sheets(1)
                    row_cnt = 1
                    'for row_cnt = 6 to 7
                    '    if ws1.Cells(row_cnt,col_cnt).Value <> "" then
                    '        col = col_cnt
                    '    end if
                    'next
                    While (ws1.Cells(row_cnt, 1).Value <> "")
                        for col_cnt = 1 to 10
                            SQLStr = SQLStr & "VALUES('" & ws1.Cells(row_cnt, 1).Value & "')"
                        next
                        row_cnt = row_cnt + 1
                    WEnd
                    'objExcel_chk.Quit
                    objExcel_chk.Workbooks.Close()
                    set ws1 = nothing
                    objExcel_chk.Quit
                    Response.Write(SQLStr)
                    'set filobj = FSYSObj.GetFile (sub_fol_path & "\" & file_name)
                    'filobj.Delete
                next
            End If
        End If
        %>

    Please tell me how to save this Excel data to the Oracle database. Any help would be appreciated.
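
    For reference, an Oracle multi-row insert of the kind the loop seems to be aiming for repeats the INTO ... VALUES clause once per row and ends with a SELECT from DUAL. A minimal sketch, using a hypothetical UPLOAD_DATA table with a single VAL column (the real table and columns are not given in the question):

        INSERT ALL
          INTO upload_data (val) VALUES ('row 1, column 1')
          INTO upload_data (val) VALUES ('row 2, column 1')
          INTO upload_data (val) VALUES ('row 3, column 1')
        SELECT * FROM dual;

    In practice, concatenating cell values straight into the SQL string invites quoting problems and SQL injection; issuing one parameterized INSERT per row through an ADO Command object with bind parameters is the safer route.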

    Read the article

  • SQL query to translate a list of numbers matched against several ranges, to a list of values

    - by Claes Mogren
    I need to convert a list of numbers that fall within certain ranges into a list of values, ordered by a priority column. The table has the following values:

        | YEAR | R_MIN  | R_MAX  | VAL | PRIO |
        ---------------------------------------
          2010    18000    90100    52      6
          2010   240000   240099    82      3
          2010   250000   259999    50      5
          2010   260000   260010    92      1
          2010   330000   330010    73      4
          2010   330011   370020    50      5
          2010   380000   380050    84      2

    The ranges will be different for different years. The ranges within one year will never overlap. The input will be a year and a list of numbers that might fall within one of these ranges. The list of input numbers will be small, 1 to 10 numbers. Example of input numbers: (20000, 240004, 375000, 255000). With that input I would like to get a list ordered by the priority column, or a single value:

        82
        50
        52

    The only value I'm interested in here is 82, so UNIQUE and MAX_RESULTS=1 would do. It can easily be done with one query per number followed by sorting in the Java code, but I would prefer to do it in a single SQL query. What SQL query, run against an Oracle database, would give me the desired result?
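
    A minimal sketch of one way to do this in a single statement, assuming the table is called RANGES (the question does not name it) and inlining the input numbers as a UNION ALL over DUAL; the inner query orders matching ranges by priority, and the outer ROWNUM filter keeps the best one:

        SELECT val
          FROM (SELECT r.val
                  FROM ranges r
                  JOIN (SELECT 20000  AS n FROM dual UNION ALL
                        SELECT 240004 AS n FROM dual UNION ALL
                        SELECT 375000 AS n FROM dual UNION ALL
                        SELECT 255000 AS n FROM dual) inputs
                    ON inputs.n BETWEEN r.r_min AND r.r_max
                 WHERE r.year = 2010
                 ORDER BY r.prio)
         WHERE ROWNUM = 1;

    Dropping the outer ROWNUM filter returns the whole priority-ordered list (82, 50, 52 for the sample data).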

    Read the article

  • why does cx_oracle execute() not like my string now?

    - by Frank Stallone
    I downloaded cx_Oracle some time ago and wrote a script to convert data to XML. I had to reinstall my OS and grabbed the latest version of cx_Oracle (5.0.3), and all of a sudden my code is broken. The first thing was that cx_Oracle.connect wanted unicode rather than str for the username and password; that was very easy to fix. But now it keeps failing on cursor.execute and tells me my string is not a string, even though type() tells me it is a string. Here is a test script I initially used ages ago that worked fine on my old version but does not work with cx_Oracle now.

        import cx_Oracle

        ip = 'url.to.oracle'
        port = 1521
        SID = 'mysid'
        dsn_tns = cx_Oracle.makedsn(ip, port, SID)
        connection = cx_Oracle.connect(u'name', u'pass', dsn_tns)
        cursor = connection.cursor()
        cursor.arraysize = 50

        sql = "select isbn, title_code from core_isbn where rownum<=20"
        print type(sql)
        cursor.execute(sql)
        for isbn, title_code in cursor.fetchall():
            print "Values from DB:", isbn, title_code

        cursor.close()
        connection.close()

    When I run that I get:

        Traceback (most recent call last):
          File "C:\NetBeansProjects\Python\src\db_temp.py", line 48, in
            cursor.execute(sql)
        TypeError: expecting None or a string

    Does anyone know what I may be doing wrong?

    Read the article

  • DB Design to store custom fields for a table

    - by Fazal
    Hi all, this question came up based on the responses I got to http://stackoverflow.com/questions/2785033/getting-wierd-issue-with-to-number-function-in-oracle. As everyone there suggested, storing numeric values in VARCHAR2 columns is not a good practice (which I totally agree with), so I am wondering about a basic design choice our team has made and whether there are better ways to design it.

    Problem statement: we have many tables where we want to offer a certain number of custom fields. The number of custom fields is known, but which kind of attribute is mapped to each column is up to the user. Here is a hypothetical scenario: say you have a laptop table that stores 50 attribute values for every laptop record, and the attributes of each laptop are defined by the admin who creates it. One user created a laptop product, say lap1, with attributes String, String, numeric, numeric, String. A second user created laptop lap2 with attributes String, numeric, String, String, numeric. Currently the data in our design gets persisted as follows:

        Id  Name  field1  field2  field3  field4  field5
        1   lap1  lappy   lappy   12      13      lappy
        2   lap2  lappy2  13      lappy2  lapp2   12

    This example roughly simulates our requirement and our design. Now, if somebody is looking up records in the laptop table with a comparison on field2, we need to apply TO_NUMBER:

        select * from laptop where name='lap2' and TO_NUMBER(field2) < 15

    TO_NUMBER fails in some cases, when the query plan decides to apply TO_NUMBER before the other filter. This design is what forces us to use TO_NUMBER when querying.

    Questions:
    1. Is this a valid design? What are the alternative ways to solve this problem?
    2. One of our team mates suggested creating tables on the fly for such cases. Is that a good idea?
    3. How do popular ORM tools handle custom fields or flex fields?

    I hope I was able to make sense in the question. Sorry for such a long text.
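
    For comparison, a minimal sketch of the entity-attribute-value alternative that usually comes up for this kind of requirement, with hypothetical table and column names (not from the actual schema); it trades the fixed fieldN columns for one typed row per attribute, so numeric comparisons never have to go through TO_NUMBER:

        CREATE TABLE laptop (
          id   NUMBER PRIMARY KEY,
          name VARCHAR2(100)
        );

        CREATE TABLE laptop_attribute (
          laptop_id NUMBER REFERENCES laptop (id),
          attr_name VARCHAR2(30),
          str_value VARCHAR2(4000),  -- filled when the attribute is a string
          num_value NUMBER,          -- filled when the attribute is numeric
          CONSTRAINT laptop_attribute_pk PRIMARY KEY (laptop_id, attr_name)
        );

        -- The "field2 < 15" lookup against lap2 then uses the typed column directly:
        SELECT l.*
          FROM laptop l
          JOIN laptop_attribute a ON a.laptop_id = l.id
         WHERE l.name = 'lap2'
           AND a.attr_name = 'field2'
           AND a.num_value < 15;

    The usual trade-off is that reconstructing a whole 50-attribute row requires pivoting or many joins, so this is not automatically the right answer either.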

    Read the article

  • How should I name my SQL query files? Should I use some methodology?

    - by Mehper C. Palavuzlar
    We have an Oracle 10g database (a huge one) in our company, and I provide employees with data upon request. My problem is that I save almost every SQL query I write, and now my list has grown too large. I want to organize and rename these .sql files so that I can find the one I want easily. At the moment I'm using folders named Sales Dept, Field Team, Planning Dept, Special, etc., and under those folders there are .sql files like Delivery_sales_1, Delivery_sales_2, ... Sent_sold_lostsales_endpoints, ... Sales_provinces_period, Returnrates_regions_bymonths, ... Jack_1, Steve_1, Steve_2, ... I try to name the files after their content, but this makes the file names longer and still does not completely meet my needs. Sometimes someone demands a special report and I name the file after the requester, but that is not ideal either. I know duplicates and very similar files are accumulating over time, but I don't have control over them. Can you point me in the right direction for renaming all these files and folders and organizing my queries for easier and better control? TIA.

    Read the article

  • Strange behavior with large Object Types

    - by Peter Lang
    I recognized that calling a method on an Oracle Object Type takes longer when the instance gets bigger. The code below just adds rows to a collection stored in the Object Type and calls the empty dummy procedure in the loop. Calls take longer when more rows are in the collection. When I just remove the call to dummy, performance is much better (the collection still contains the same number of records):

        Calling dummy:    Not calling dummy:
                    11                     0
                    81                     0
                   158                     0

    Code to reproduce:

        Create Type t_tab Is Table Of VARCHAR2(10000);

        Create Type test_type As Object(
          tab t_tab,
          Member Procedure dummy
        );

        Create Type Body test_type As
          Member Procedure dummy As
          Begin
            Null; --# Do nothing
          End dummy;
        End;

        Declare
          v_test_type test_type := New test_type( New t_tab() );

          Procedure run_test As
            start_time NUMBER := dbms_utility.get_time;
          Begin
            For i In 1 .. 200 Loop
              v_test_Type.tab.Extend;
              v_test_Type.tab(v_test_Type.tab.Last) := Lpad(' ', 10000);
              v_test_Type.dummy(); --# Removed this line in second test
            End Loop;
            dbms_output.put_line( dbms_utility.get_time - start_time );
          End run_test;
        Begin
          run_test;
          run_test;
          run_test;
        End;

    I tried with both 10g and 11g. Can anyone explain/reproduce this behavior?
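
    One explanation that is often given for this pattern is that a member procedure receives the instance as an implicit SELF parameter, which defaults to IN OUT and may be copied on each call, so the cost grows with the collection. Declaring SELF explicitly with the NOCOPY hint is one thing worth measuring; a sketch, not verified against the poster's versions:

        Create Or Replace Type test_type As Object(
          tab t_tab,
          -- Declaring SELF explicitly lets us add NOCOPY, asking the runtime to
          -- pass the instance by reference instead of copying it on each call.
          Member Procedure dummy( SELF In Out Nocopy test_type )
        );
        /

        Create Or Replace Type Body test_type As
          Member Procedure dummy( SELF In Out Nocopy test_type ) As
          Begin
            Null;
          End dummy;
        End;
        /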

    Read the article

  • How to duplicate all data in a table except for a single column that should be changed.

    - by twiga
    I have a question regarding a unified insert query against tables with different data structures (Oracle). Let me elaborate with an example:

        tb_customers (
          id         NUMBER(3),
          name       VARCHAR2(40),
          archive_id NUMBER(3)
        )

        tb_suppliers (
          id         NUMBER(3),
          name       VARCHAR2(40),
          contact    VARCHAR2(40),
          xxx,
          xxx,
          archive_id NUMBER(3)
        )

    The only column that is present in all tables is archive_id, and it is always part of the primary key. The plan is to create a new archive of the dataset by copying (duplicating) all records to a different database partition and incrementing the archive_id of those records accordingly.

    My problem is with the select statements that do the actual duplication of the data. Because the columns vary, I am struggling to come up with a unified select statement that will copy the data and update the archive_id. One solution (that works) is to iterate over all the tables in a stored procedure and do:

        CREATE TABLE temp as (SELECT * from ORIGINAL_TABLE);
        UPDATE temp SET archive_id = something;
        INSERT INTO original_table (SELECT * FROM temp);
        DROP TABLE temp;

    I do not like this solution very much, as the DDL commands muck up all restore points. Does anyone else have a solution?
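
    A DDL-free sketch of the same idea: one INSERT ... SELECT per table with only archive_id overridden, where the column list is generated from the data dictionary so the statement stays generic across tables. Names and archive ids below are hypothetical; LISTAGG needs 11g, and on older versions the list can be built in a loop instead:

        DECLARE
          l_cols VARCHAR2(4000);
        BEGIN
          -- Build "ID, NAME, ..." for every column except ARCHIVE_ID.
          SELECT LISTAGG(column_name, ', ') WITHIN GROUP (ORDER BY column_id)
            INTO l_cols
            FROM user_tab_columns
           WHERE table_name = 'TB_CUSTOMERS'
             AND column_name <> 'ARCHIVE_ID';

          -- Copy every row of the old archive, replacing only archive_id.
          EXECUTE IMMEDIATE
            'INSERT INTO tb_customers (' || l_cols || ', archive_id) ' ||
            'SELECT ' || l_cols || ', :new_archive FROM tb_customers ' ||
            ' WHERE archive_id = :old_archive'
            USING 2, 1;  -- hypothetical new and old archive ids
        END;
        /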

    Read the article

  • PL/SQL - Two statements with begin and end run fine separately, but not together?

    - by Twiss
    Hi all, just wondering if anyone can help with this. I have two PL/SQL statements for altering tables (adding extra fields), and they are as follows:

        -- Make GC_NAB field for Next Action By Dropdown
        begin
          if 'VARCHAR2' = 'NUMBER' and length('VARCHAR2') > 0 and length('') > 0 then
            execute immediate 'alter table "SERVICEMAIL6"."ETD_GUESTCARE" add(GC_NAB VARCHAR2(10, ))';
          elsif ('VARCHAR2' = 'NUMBER' and length('VARCHAR2') > 0 and length('') = 0) or 'VARCHAR2' = 'VARCHAR2' then
            execute immediate 'alter table "SERVICEMAIL6"."ETD_GUESTCARE" add(GC_NAB VARCHAR2(10))';
          else
            execute immediate 'alter table "SERVICEMAIL6"."ETD_GUESTCARE" add(GC_NAB VARCHAR2)';
          end if;
          commit;
        end;

        -- Make GC_NABID field for Next Action By Dropdown
        begin
          if 'NUMBER' = 'NUMBER' and length('NUMBER') > 0 and length('') > 0 then
            execute immediate 'alter table "SERVICEMAIL6"."ETD_GUESTCARE" add(GC_NABID NUMBER(, ))';
          elsif ('NUMBER' = 'NUMBER' and length('NUMBER') > 0 and length('') = 0) or 'NUMBER' = 'VARCHAR2' then
            execute immediate 'alter table "SERVICEMAIL6"."ETD_GUESTCARE" add(GC_NABID NUMBER())';
          else
            execute immediate 'alter table "SERVICEMAIL6"."ETD_GUESTCARE" add(GC_NABID NUMBER)';
          end if;
          commit;
        end;

    When I run these two queries separately, no problems. However, when they are run together as shown above, Oracle gives me an error when it starts the second statement:

        Error report:
        ORA-06550: line 15, column 1:
        PLS-00103: Encountered the symbol "BEGIN"
        06550. 00000 - "line %s, column %s:\n%s"
        *Cause:  Usually a PL/SQL compilation error.
        *Action:

    I'm assuming this means the first statement is not terminated properly. Is there anything I should put between the statements to make it work? Thanks in advance, everyone!
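
    For what it's worth, when several anonymous blocks are run together as one script (in SQL*Plus, SQL Developer, or Toad's "run as script" mode), each block normally has to be terminated with a slash on a line of its own before the next block starts. A trimmed-down sketch of the shape:

        begin
          execute immediate 'alter table "SERVICEMAIL6"."ETD_GUESTCARE" add(GC_NAB VARCHAR2(10))';
        end;
        /

        begin
          execute immediate 'alter table "SERVICEMAIL6"."ETD_GUESTCARE" add(GC_NABID NUMBER)';
        end;
        /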

    Read the article

  • workaround for ORA-03113: end-of-file on communication channel

    - by Jefferstone
    The call to TEST_FUNCTION below fails with "ORA-03113: end-of-file on communication channel". A workaround is presented in TEST_FUNCTION2. I boiled down the code, as my actual function is far more complex. Tested on Oracle 11g. Anyone have any idea why the first function fails?

        CREATE OR REPLACE TYPE "EMPLOYEE" AS OBJECT (
          employee_id NUMBER(38),
          hire_date   DATE
        );

        CREATE OR REPLACE TYPE "EMPLOYEE_TABLE" AS TABLE OF EMPLOYEE;

        CREATE OR REPLACE FUNCTION TEST_FUNCTION RETURN EMPLOYEE_TABLE IS
          table1       EMPLOYEE_TABLE;
          table2       EMPLOYEE_TABLE;
          return_table EMPLOYEE_TABLE;
        BEGIN
          SELECT CAST(MULTISET (
                   SELECT user_id, created FROM all_users WHERE LOWER(username) < 'm'
                 ) AS EMPLOYEE_TABLE)
            INTO table1
            FROM dual;

          SELECT CAST(MULTISET (
                   SELECT user_id, created FROM all_users WHERE LOWER(username) >= 'm'
                 ) AS EMPLOYEE_TABLE)
            INTO table2
            FROM dual;

          SELECT CAST(MULTISET (
                   SELECT employee_id, hire_date FROM TABLE(table1)
                   UNION
                   SELECT employee_id, hire_date FROM TABLE(table2)
                 ) AS EMPLOYEE_TABLE)
            INTO return_table
            FROM dual;

          RETURN return_table;
        END TEST_FUNCTION;

        CREATE OR REPLACE FUNCTION TEST_FUNCTION2 RETURN EMPLOYEE_TABLE IS
          table1       EMPLOYEE_TABLE;
          table2       EMPLOYEE_TABLE;
          return_table EMPLOYEE_TABLE;
        BEGIN
          SELECT CAST(MULTISET (
                   SELECT user_id, created FROM all_users WHERE LOWER(username) < 'm'
                 ) AS EMPLOYEE_TABLE)
            INTO table1
            FROM dual;

          SELECT CAST(MULTISET (
                   SELECT user_id, created FROM all_users WHERE LOWER(username) >= 'm'
                 ) AS EMPLOYEE_TABLE)
            INTO table2
            FROM dual;

          WITH combined AS (
            SELECT employee_id, hire_date FROM TABLE(table1)
            UNION
            SELECT employee_id, hire_date FROM TABLE(table2)
          )
          SELECT CAST(MULTISET (
                   SELECT * FROM combined
                 ) AS EMPLOYEE_TABLE)
            INTO return_table
            FROM dual;

          RETURN return_table;
        END TEST_FUNCTION2;

        SELECT * FROM TABLE (TEST_FUNCTION());   -- Throws exception ORA-03113.
        SELECT * FROM TABLE (TEST_FUNCTION2());  -- Works

    Read the article

  • How to call a stored procedure from Hibernate?

    - by user367097
    Hi, I have an Oracle stored procedure GET_VENDOR_STATUS_COUNT(DOCUMENT_ID IN NUMBER, NOT_INVITED OUT NUMBER, INVITE_WITHDRAWN OUT NUMBER, ...), where all of the remaining parameters are OUT parameters. In the hbm file I have written:

        <sql-query name="getVendorStatus" callable="true">
            <return-scalar column="NOT_INVITED" type="string"/>
            <return-scalar column="INVITE_WITHDRAWN" type="string"/>
            <return-scalar column="INVITED" type="string"/>
            <return-scalar column="DISQUALIFIED" type="string"/>
            <return-scalar column="RESPONSE_AWAITED" type="string"/>
            <return-scalar column="RESPONSE_IN_PROGRESS" type="string"/>
            <return-scalar column="RESPONSE_RECEIVED" type="string"/>
            { call GET_VENDOR_STATUS_COUNT(:DOCUMENT_ID, :NOT_INVITED, :INVITE_WITHDRAWN, :INVITED, :DISQUALIFIED, :RESPONSE_AWAITED, :RESPONSE_IN_PROGRESS, :RESPONSE_RECEIVED) }
        </sql-query>

    In Java I have written:

        session.getNamedQuery("getVendorStatus")
               .setParameter("DOCUMENT_ID", "DOCUMENT_ID")
               .setParameter("NOT_INVITED", "NOT_INVITED")
               ... // and so on for all the parameters

    I am getting this SQL exception:

        18:29:33,056 WARN  [JDBCExceptionReporter] SQL Error: 1006, SQLState: 72000
        18:29:33,056 ERROR [JDBCExceptionReporter] ORA-01006: bind variable does not exist

    Please let me know the correct way of calling a stored procedure from Hibernate. I do not want to use a JDBC CallableStatement.
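
    One way to take Hibernate out of the picture while debugging ORA-01006 is to call the procedure directly from an anonymous PL/SQL block with one bind per parameter; if this works, the number and order of parameters in the hbm call is the first thing to re-check. A sketch using the parameter names from the mapping (123 is a hypothetical DOCUMENT_ID):

        DECLARE
          l_not_invited          NUMBER;
          l_invite_withdrawn     NUMBER;
          l_invited              NUMBER;
          l_disqualified         NUMBER;
          l_response_awaited     NUMBER;
          l_response_in_progress NUMBER;
          l_response_received    NUMBER;
        BEGIN
          GET_VENDOR_STATUS_COUNT(123,
                                  l_not_invited,
                                  l_invite_withdrawn,
                                  l_invited,
                                  l_disqualified,
                                  l_response_awaited,
                                  l_response_in_progress,
                                  l_response_received);
          dbms_output.put_line('NOT_INVITED = ' || l_not_invited);
        END;
        /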

    Read the article

  • How Best to Replace Ugly Queries and Dynamic PL/SQL with C#?

    - by Mike
    Hi, I write a lot of one-off Oracle SQL queries (in Toad), and sometimes they get complex, involving lots of unions, joins, and subqueries, and sometimes requiring dynamic SQL. That is, some SQL queries need set-based processing along with significant procedural processing. This is what PL/SQL is custom-made for, but as a language it does not begin to compare to C#. Now and then I convert a PL/SQL procedure to C#, and I am always amazed at how much cleaner and easier the C# version is to both read and write. The C# program might, for example, construct a SQL query string piece by piece and/or run several queries and process them as needed. The C# version is usually much faster as well, which must mean that I'm not very good at PL/SQL either. I do not currently have access to LINQ. My question is: how best to package all these little C# programs, which are really just mini reports, that is, replacements for ugly SQL queries? Right now I'm actually using NUnit to hold them, and calling each report a [Test], even though they aren't really tests. NUnit just happens to provide a convenient packaging framework.

    Read the article

  • JDBC program running for a long time: performance issue

    - by phyerbarte
    My program has an issue with Oracle query performance. I believe the SQL itself performs well, because it returns quickly in SQL*Plus. But when my program has been running for a long time, like a week, the SQL query (issued through JDBC) becomes slower: in my logs, the query time is much longer than when I originally started the program. When I restart the program, query performance returns to normal. I think something could be wrong with the way I use PreparedStatement, because the SQL I'm using does not use "?" placeholders at all, just a complex select query. The query is executed by a util class. Here is the relevant code building the query:

        public List<String[]> query(String sql, String[] args) {
            Connection conn = null;
            conn = openConnection();
            conn.setAutoCommit(true);
            ....
            PreparedStatement preStatm = null;
            ResultSet rs = null;
            .... // set prepared statement arg code
            rs = preStatm.executeQuery();
            ....
            finally {
                // close rs
                // close preStatm
                // close connection
            }
        }

    In my case, args is always null, so a plain query SQL string is passed to this query method. Could this slow down the DB query after the program has been running for a long time? Should I use Statement instead, or just pass args with "?" in the SQL? How can I find the root cause of my issue? Thanks.
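
    One database-side check worth running while the program is in its slow state is whether the session's open cursors keep climbing, which would point at statements or result sets not being closed on some code path. A sketch, assuming SELECT access to the V$ views:

        SELECT s.sid,
               s.username,
               s.program,
               COUNT(*) AS open_cursors
          FROM v$open_cursor oc
          JOIN v$session s ON s.sid = oc.sid
         GROUP BY s.sid, s.username, s.program
         ORDER BY open_cursors DESC;

    If the count for the application's session grows steadily over days, the fix is usually in the Java cleanup code (closing every ResultSet and PreparedStatement in the finally block) rather than in the SQL.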

    Read the article

  • Return a REF CURSOR to procedure-generated data

    - by ThaDon
    I need to write a sproc which performs some INSERTs on a table and compiles a list of "statuses" for each row based on how well the INSERT went. Each row is inserted within a loop; the loop iterates over a cursor that supplies values for the INSERT statement. What I need to return is a result set which looks like this:

        FIELDS_FROM_ROW_BEING_INSERTED..., STATUS VARCHAR2

    The STATUS is determined by how the INSERT went. For instance, if the INSERT raised a DUP_VAL_ON_INDEX exception indicating a duplicate row, I'd set the STATUS to "Dupe". If all went well, I'd set it to "SUCCESS" and proceed to the next row. By the end of it all, I'd have a result set of N rows, where N is the number of insert statements performed, and each row contains some identifying info for the row being inserted along with the "STATUS" of the insertion.

    Since there is no table in my DB to store the values I'd like to pass back to the user, how can I return this information? A temporary table? It seems that in Oracle temporary tables are "global", and I'm not sure I want a global table. Are there any temporary tables that get dropped after a session is done?
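
    For what it's worth, an Oracle global temporary table is global only in its definition; the rows are private to each session and can be set to disappear either at commit or at the end of the session. A minimal sketch with hypothetical names:

        -- Created once; each session sees only its own rows.
        CREATE GLOBAL TEMPORARY TABLE insert_status_gtt (
          row_key VARCHAR2(100),
          status  VARCHAR2(20)
        ) ON COMMIT PRESERVE ROWS;  -- rows vanish when the session ends

        -- Inside the loop of the procedure, log one status row per attempt:
        BEGIN
          INSERT INTO target_table (id, val) VALUES (1, 'x');
          INSERT INTO insert_status_gtt (row_key, status) VALUES ('1', 'SUCCESS');
        EXCEPTION
          WHEN DUP_VAL_ON_INDEX THEN
            INSERT INTO insert_status_gtt (row_key, status) VALUES ('1', 'Dupe');
        END;
        /

    At the end, the procedure can OPEN a ref cursor FOR SELECT ... FROM insert_status_gtt and return it; a pipelined table function over a collection is the other common route.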

    Read the article

  • Cannot locate record in Delphi ADO query

    - by Danatela
    I can't locate any record in a TADOQuery using the PK. First, I tried the standard Locate method:

        PPUQuery.Locate('ID', SpPlansQuery['PPONREC'], []);

    It always returns False, but a manual search (walking the whole query and matching ID against the given PPONREC, which is really slow) finds the desired row. I tried using loPartialKey and switched the query's CursorLocation to clUseServer, but it didn't help. Next, I tried to filter my PPUQuery:

        PPUQuery.Filter := 'ID = ' + VarToStr(SpPlansQuery['PPONREC']);
        PPUQuery.Filtered := True;
        PPUQuery.First;

    But after that, PPUQuery.Eof is True and PPUQuery.RecordCount equals 0. The underlying database is Oracle 9, and ID is of type INTEGER and is the PK of table TPORDER_CMK. PPUQuery.SQL is:

        SELECT tp.*, la.*, lm.*, ld.*, ld1.*, to_cmk.*
          FROM ppu_plan.tporder_cmk tp
          JOIN PPU_PLAN.LARTICLES la      ON TP.ARTICLE  = LA.ID
          JOIN PPU_PLAN.LMATERIAL lm      ON TP.MATERIAL = lm.id
          JOIN PPU_PLAN.LCADEP ld         ON TP.CADEP    = LD.ID
          JOIN PPU_PLAN.LCADEP ld1        ON TP.PRODUCER = LD1.ID
          JOIN PPU_PLAN.TORDER_CMK to_cmk ON TP.order_id = TO_cmk.ID
         WHERE TP.PLAN_ID = :pplan_id

    What should I try next, and how can I solve this problem?

    Read the article

  • Timestamps and Intervals: NUMTOYMINTERVAL SYSDATE calculation SQL query

    - by MeachamRob
    I am working on a homework problem. I'm close, but need some help with a data conversion, I think, or with the SYSDATE - start_date calculation. The question is: using the EX schema, write a SELECT statement that retrieves the date_id and start_date from the Date_Sample table (format below), followed by a column named Years_and_Months_Since_Start that uses an interval function to retrieve the number of years and months that have elapsed between start_date and SYSDATE. (Your values will vary based on the date you do this lab.) Display only the records with start dates having the month and day equal to Feb 28 (of any year).

        DATE_ID  START_DATE                     YEARS_AND_MONTHS_SINCE_START
        2        Sunday , February 28, 1999     13-8
        4        Monday , February 28, 2005     7-8
        5        Tuesday , February 28, 2006    6-8

    The EX schema table for this question is simply a Date_Sample table with two columns:

        DATE_ID     NUMBER  NOT NULL
        START_DATE  DATE

    I have written this code:

        SELECT date_id,
               TO_CHAR(start_date, 'Day, MONTH DD, YYYY') AS start_date,
               NUMTOYMINTERVAL((SYSDATE - start_date), 'YEAR') AS years_and_months_since_start
          FROM date_sample
         WHERE TO_CHAR(start_date, 'MM/DD') = '02/28';

    But my years-and-months-since-start column is not working properly. It produces very high numbers for years and months when the start date is from around 1999; i.e., it should be 13-8 and I'm getting 5027-2, so I know it's not correct. I used NUMTOYMINTERVAL, which should be correct, but I don't think SYSDATE - start_date is working. The data type of start_date is simply DATE. I tried ROUND but may need some help to get it right. Something is wrong with my calculation, and I'm trying to figure out how to get the correct interval. Not sure if I have provided enough information, but I will let you know if I figure it out before you do. It's a question from Murach's Oracle SQL and PL/SQL book, chapter 17, page 559, if anyone else is trying to learn that chapter.
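
    For reference, SYSDATE - start_date yields a number of days, so feeding it to NUMTOYMINTERVAL as 'YEAR' produces intervals thousands of years long. One common correction (a sketch, not necessarily the book's intended answer) feeds elapsed whole months to NUMTOYMINTERVAL instead:

        SELECT date_id,
               TO_CHAR(start_date, 'Day, MONTH DD, YYYY') AS start_date,
               NUMTOYMINTERVAL(TRUNC(MONTHS_BETWEEN(SYSDATE, start_date)), 'MONTH')
                 AS years_and_months_since_start
          FROM date_sample
         WHERE TO_CHAR(start_date, 'MM/DD') = '02/28';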

    Read the article

  • Database design with BLOBs

    - by mmuthu
    Hi, I have a situation where I need to store binary data in the database as BLOB columns. There are three different tables in my database where I need to store BLOB data for some records. Not every record will have BLOB data all the time; it depends on time and on the user.

    Table one will have to store *.doc files for almost every record. Table two will have to store *.xml, optionally. Table three will have to store images (not sure what the frequency will be, etc.).

    Now my question is whether it is a good idea to maintain a separate table to store the BLOB data, pointing to the respective tables' PKs (yes, there will be no FKs; the assumption is that the program will maintain integrity). It would be something like this:

        BLOB | PK_ID | TABLE_NAME

    Alternatively, is it a good idea to keep the BLOB column in the respective tables? As far as my application runtime is concerned, table two will be read very frequently (though the BLOB column will not be required) and its records get deleted frequently. The BLOB data in the other tables will not be accessed frequently, and all of the BLOB content will be read on demand. I'm thinking the first approach will work better for me. What do you think? By the way, I'm using Oracle.
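
    A sketch of the shared-BLOB-table variant described above, with hypothetical names; the composite key mirrors the BLOB | PK_ID | TABLE_NAME layout:

        CREATE TABLE blob_store (
          table_name VARCHAR2(30) NOT NULL,  -- e.g. 'TABLE_TWO'
          pk_id      NUMBER       NOT NULL,  -- PK value of the owning row
          content    BLOB,
          CONSTRAINT blob_store_pk PRIMARY KEY (table_name, pk_id)
        );

    Worth noting either way: Oracle stores larger LOB values out of line in their own LOB segment, so a BLOB column that a query never selects usually adds little to the cost of reading the row's other columns. That tends to weaken the performance argument for splitting the column out, although the frequent deletes on table two may still favor a separate table.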

    Read the article

  • Error 0x800f0922 installing .NET 3.5 on Windows 8

    - by Benjamin Nolan
    I'm trying to install .NET 3.5 on my Windows 8 box and it keeps throwing Error 0x800f0922 at me. From what I've read on answers.microsoft.com and StackOverflow I gather the easiest way to fix this is to perform a system refresh, however this will remove all software I've installed from discs. I've just moved house, so I'd rather not do that as I don't know where all the installation media actually are for a lot of my software, so if possible I'd prefer to track down where the problem is actually occurring. (Also, I have a LOT of software installed. It'd take me a long time to reinstall it all, and I unfortunately haven't got that time.) The on-demand error screen sends me to KB2734782 (can't link it as I'm <10 rep), which doesn't help much. When I run this DISM line from the StackOverflow post: Dism.exe /online /enable-feature /featurename:NetFX3 /All /Source:C:\Windows\WinSxS /LimitAccess I get the following output on the terminal: Microsoft Windows [Version 6.2.9200] (c) 2012 Microsoft Corporation. All rights reserved. C:\Windows\system32>Dism.exe /online /enable-feature /featurename:NetFX3 /All /Source:C:\Windows\WinSxS /LimitAccess Deployment Image Servicing and Management tool Version: 6.2.9200.16384 Image Version: 6.2.9200.16384 Enabling feature(s) [==========================100.0%==========================] Error: 0x800f0922 DISM failed. No operation was performed. For more information, review the log file. The DISM log file can be found at C:\Windows\Logs\DISM\dism.log C:\Windows\system32> Incidentally, it jumps straight from 0 to 100% and then sits on that line for about 5 minutes before the error line occurs. dism.log contains the following lines around that time: (Link to full logs is at bottom of post) 2013-07-02 00:56:58, Info DISM DISM.EXE: Succesfully registered commands for the provider: Edition Manager. 2013-07-02 00:56:58, Info DISM DISM Provider Store: PID=5768 TID=5780 Getting Provider DISM Package Manager - CDISMProviderStore::GetProvider 2013-07-02 00:56:58, Info DISM DISM Provider Store: PID=5768 TID=5780 Provider has previously been initialized. Returning the existing instance. - CDISMProviderStore::Internal_GetProvider 2013-07-02 00:56:58, Info DISM DISM Package Manager: PID=5768 TID=5780 Processing the top level command token(enable-feature). - CPackageManagerCLIHandler::Private_ValidateCmdLine 2013-07-02 00:56:58, Info DISM DISM Package Manager: PID=5768 TID=5780 Attempting to route to appropriate command handler. - CPackageManagerCLIHandler::ExecuteCmdLine 2013-07-02 00:56:58, Info DISM DISM Package Manager: PID=5768 TID=5780 Routing the command... 
- CPackageManagerCLIHandler::ExecuteCmdLine 2013-07-02 00:56:58, Info DISM DISM Package Manager: PID=5768 TID=5780 Encountered the option "featurename" with value "NetFX3" - CPackageManagerCLIHandler::Private_GetPackagesFromCommandLine 2013-07-02 00:56:58, Info DISM DISM Package Manager: PID=5768 TID=5780 Encountered an unknown option "featurename" with value "NetFX3" - CPackageManagerCLIHandler::Private_GetPackagesFromCommandLine 2013-07-02 00:56:58, Info DISM DISM Package Manager: PID=5768 TID=5780 Encountered the option "source" with value "C:\Windows\WinSxS" - CPackageManagerCLIHandler::Private_GetPackagesFromCommandLine 2013-07-02 00:56:58, Info DISM DISM Package Manager: PID=5768 TID=5780 Encountered an unknown option "source" with value "C:\Windows\WinSxS" - CPackageManagerCLIHandler::Private_GetPackagesFromCommandLine 2013-07-02 00:56:59, Info DISM DISM Package Manager: PID=5768 TID=5780 Initiating Changes on Package with values: 5, 7 - CDISMPackage::Internal_ChangePackageState 2013-07-02 00:56:59, Info DISM DISM Package Manager: PID=5768 TID=5780 CBS session options=0x20100! - CDISMPackageManager::Internal_Finalize 2013-07-02 01:00:27, Info DISM DISM Package Manager: PID=5768 TID=2420 Error in operation: (null) (CBS HRESULT=0x800f0922) - CCbsConUIHandler::Error 2013-07-02 01:00:27, Error DISM DISM Package Manager: PID=5768 TID=5780 Failed finalizing changes. - CDISMPackageManager::Internal_Finalize(hr:0x800f0922) 2013-07-02 01:00:27, Error DISM DISM Package Manager: PID=5768 TID=5780 Failed processing package changes with session options - CDISMPackageManager::ProcessChangesWithOptions(hr:0x800f0922) 2013-07-02 01:00:27, Error DISM DISM Package Manager: PID=5768 TID=5780 Failed ProcessChanges. - CPackageManagerCLIHandler::Private_ProcessFeatureChange(hr:0x800f0922) 2013-07-02 01:00:27, Error DISM DISM Package Manager: PID=5768 TID=5780 Failed while processing command enable-feature. - CPackageManagerCLIHandler::ExecuteCmdLine(hr:0x800f0922) 2013-07-02 01:00:27, Info DISM DISM Package Manager: PID=5768 TID=5780 Further logs for online package and feature related operations can be found at %WINDIR%\logs\CBS\cbs.log - CPackageManagerCLIHandler::ExecuteCmdLine 2013-07-02 01:00:27, Error DISM DISM.EXE: DISM Package Manager processed the command line but failed. HRESULT=800F0922 cbs.log has the following chunks around then which could be relevant: 2013-07-02 00:55:06, Info CBS Exec: This is a PSF Package. Job has been saved and we are returning to client. 2013-07-02 00:55:06, Info CSI 0000042d@2013/7/1:23:55:06.203 CSI Transaction @0xe2f5e59500 destroyed 2013-07-02 00:55:06, Info CBS Exec: DPX job state saved for one or more packages, aborting the staging and install of execution. 2013-07-02 00:55:06, Info CSI 0000042e@2013/7/1:23:55:06.207 CSI Transaction @0xe2f5e58480 destroyed 2013-07-02 00:55:06, Info CBS Perf: Stage chain complete. 2013-07-02 00:55:06, Info CBS Failed to stage execution chain. [HRESULT = 0x800f0816 - CBS_E_DPX_JOB_STATE_SAVED] 2013-07-02 00:55:06, Info CBS Failed to process single phase execution. 
[HRESULT = 0x800f0816 - CBS_E_DPX_JOB_STATE_SAVED] 2013-07-02 00:55:06, Info CBS WER: Failure is not worth reporting [HRESULT = 0x800f0816 - CBS_E_DPX_JOB_STATE_SAVED] 2013-07-02 00:55:06, Info CBS Reboot mark cleared and further down: 2013-07-02 00:59:19, Info CSI 000004e6 Begin executing advanced installer phase 38 (0x00000026) index 253 (0x00000000000000fd) (sequence 289) Old component: [l:0]"" New component: [ml:306{153},l:304{152}]"NetFx35CDF-CDF_GenericCommands, Culture=neutral, Version=6.2.9200.16384, PublicKeyToken=31bf3856ad364e35, ProcessorArchitecture=x86, versionScope=NonSxS" Install mode: install Installer ID: {81a34a10-4256-436a-89d6-794b97ca407c} Installer name: [15]"Generic Command" 2013-07-02 00:59:19, Info CSI 000004e7 Performing 1 operations; 1 are not lock/unlock and follow: (0) LockComponentPath (10): flags: 0 comp: {l:16 b:19fc6600b776ce01c91f0000fc07a816} pathid: {l:16 b:19fc6600b776ce01ca1f0000fc07a816} path: [l:214{107}]"\SystemRoot\WinSxS\x86_netfx35cdf-cdf_genericcommands_31bf3856ad364e35_6.2.9200.16384_none_0cec490be12fb858" pid: 7fc starttime: 130171962799582915 (0x01ce76b5e2626ec3) 2013-07-02 00:59:19, Info CSI 000004e8 Performing 1 operations; 1 are not lock/unlock and follow: (0) LockComponentPath (10): flags: 0 comp: {l:16 b:27236700b776ce01cb1f0000fc07a816} pathid: {l:16 b:27236700b776ce01cc1f0000fc07a816} path: [l:210{105}]"\SystemRoot\WinSxS\x86_netfx35cdf-csd_cdf_installer_31bf3856ad364e35_6.2.9200.16384_none_55072425fd5c3716" pid: 7fc starttime: 130171962799582915 (0x01ce76b5e2626ec3) 2013-07-02 00:59:19, Info CSI 000004e9 Calling generic command executable (sequence 1): [122]"C:\Windows\WinSxS\x86_netfx35cdf-csd_cdf_installer_31bf3856ad364e35_6.2.9200.16384_none_55072425fd5c3716\WFServicesReg.exe" CmdLine: [139]""C:\Windows\WinSxS\x86_netfx35cdf-csd_cdf_installer_31bf3856ad364e35_6.2.9200.16384_none_55072425fd5c3716\WFServicesReg.exe" /c /b /v /m /i" 2013-07-02 00:59:20, Info CSI 000004ea Performing 1 operations; 1 are not lock/unlock and follow: (0) LockComponentPath (10): flags: 0 comp: {l:16 b:bd790401b776ce01cd1f0000fc07a816} pathid: {l:16 b:bd790401b776ce01ce1f0000fc07a816} path: [l:234{117}]"\SystemRoot\WinSxS\x86_microsoft.windows.s..ation.badcomponents_31bf3856ad364e35_6.2.9200.16384_none_353ccb4c94858655" pid: 7fc starttime: 130171962799582915 (0x01ce76b5e2626ec3) 2013-07-02 00:59:20, Info CSI 000004eb Creating NT transaction (seq 27), objectname [6]"(null)" 2013-07-02 00:59:20, Info CSI 000004ec Created NT transaction (seq 27) result 0x00000000, handle @0x24b8 2013-07-02 00:59:20, Info CSI 000004ed@2013/7/1:23:59:20.933 Beginning NT transaction commit... 2013-07-02 00:59:22, Info CSI 000004ee@2013/7/1:23:59:22.065 CSI perf trace: CSIPERF:TXCOMMIT;1387723 2013-07-02 00:59:22, Error CSI 000004ef (F) Done with generic command 1; CreateProcess returned 0, CPAW returned S_OK Process exit code 255 (0x000000ff) resulted in success? FALSE Process output: [l:28479 [4096]"DDSet_Entry: WFServicesReg.exe DDSet_Status: CFxInstaller::CopyConfigFilesToTemp is64bit=0 DDSet_Status: CFileHelper::CopyConfigFilesToTempLocation DDSet_Status: CFxInstaller::SetupBaseComponents isInstall=1 DDSet_Status: CFxInstaller::SetupBaseComponents Calling SetupExtensions. isInstall=1 (0x000000FF -- The extended attributes are inconsistent. ??) And a bit further down: 2013-07-02 00:59:22, Error [0x018007] CSI 000004f0 (F) Failed execution of queue item Installer: Generic Command ({81a34a10-4256-436a-89d6-794b97ca407c}) with HRESULT HRESULT_FROM_WIN32(14109). 
Failure will not be ignored: A rollback will be initiated after all the operations in the installer queue are completed; installer is reliable (2)[gle=0x80004005] [...snip...] 2013-07-02 00:59:22, Info CBS Not able to add pending.xml.bad to Windows Error Report. [HRESULT = 0x80070002 - ERROR_FILE_NOT_FOUND] 2013-07-02 00:59:28, Info CSI 000004f1@2013/7/1:23:59:28.467 CSI Advanced installer perf trace: CSIPERF:AIDONE;{81a34a10-4256-436a-89d6-794b97ca407c};NetFx35CDF-CDF_GenericCommands, Version = 6.2.9200.16384, pA = PROCESSOR_ARCHITECTURE_INTEL (0), Culture neutral, VersionScope = 1 nonSxS, PublicKeyToken = {l:8 b:31bf3856ad364e35}, Type neutral, TypeName neutral, PublicKey neutral;10609242us 2013-07-02 00:59:28, Info CSI 000004f2 End executing advanced installer (sequence 289) Completion status: HRESULT_FROM_WIN32(ERROR_ADVANCED_INSTALLER_FAILED) [...snip...] 2013-07-02 01:00:26, Info CBS Exec: Cancelled pending transactions after rollback. [HRESULT = 0x00000000 - S_OK] 2013-07-02 01:00:26, Error CBS Exec: An error occurred while committing the transaction, the transaction could not be rolled back. [HRESULT = 0x800f0922 - CBS_E_INSTALLERS_FAILED] The full DISM and CBS logs are at http://ben.mu/files/dotnet35_dism_cbs.zip as the CBS log is nearly 167MB uncompressed. o.o dism.log gives the timeframe of where its errors occur--00:56:20ish to 01:00:22. Does anyone have any ideas what's actually causing the installation to fail, and if so how I can fix it? Please don't just say "Refresh the OS". :)

    Read the article

  • Deploying Axis2 on GlassFish

    - by user115524
    I'm trying to deploy Axis2 v1.6.2 war on Glassfish v3.1.2 and I'm having some problems... I need to develop a few web services and since the main app is beeing served on Glassfish I was hoping to deploy axis2 on it so I could test it. I used Glassfish administration pages to deploy a war downloaded from an apache site, but after pointing the application deployment form to this war I'm getting Error and this is the (long) stack trace: [#|2013-06-27T11:34:40.701+0200|SEVERE|glassfish3.1.2|javax.enterprise.system.container.web.com.sun.enterprise.web|_ThreadID=84;_ThreadName=admin-thread-pool-4848(4);|WebModule[/axis23403634363287739103]StandardWrapper.Throwable java.lang.ExceptionInInitializerError at org.apache.axis2.transport.http.AxisServlet.initConfigContext(AxisServlet.java:584) at org.apache.axis2.transport.http.AxisServlet.init(AxisServlet.java:454) at org.apache.catalina.core.StandardWrapper.initServlet(StandardWrapper.java:1453) at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:1250) at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:5093) at org.apache.catalina.core.StandardContext.start(StandardContext.java:5380) at com.sun.enterprise.web.WebModule.start(WebModule.java:498) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:917) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:901) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:733) at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:2019) at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1669) at com.sun.enterprise.web.WebApplication.start(WebApplication.java:109) at org.glassfish.internal.data.EngineRef.start(EngineRef.java:130) at org.glassfish.internal.data.ModuleInfo.start(ModuleInfo.java:269) at org.glassfish.internal.data.ApplicationInfo.start(ApplicationInfo.java:301) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:461) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:240) at org.glassfish.deployment.admin.DeployCommand.execute(DeployCommand.java:389) at com.sun.enterprise.v3.admin.CommandRunnerImpl$1.execute(CommandRunnerImpl.java:348) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:363) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1085) at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$1200(CommandRunnerImpl.java:95) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1291) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1259) at org.glassfish.admin.rest.ResourceUtil.runCommand(ResourceUtil.java:214) at org.glassfish.admin.rest.ResourceUtil.runCommand(ResourceUtil.java:207) at org.glassfish.admin.rest.resources.TemplateListOfResource.createResource(TemplateListOfResource.java:148) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60) at 
com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205) at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75) at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288) at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:134) at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:134) at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108) at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84) at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469) at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400) at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349) at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339) at com.sun.jersey.server.impl.container.grizzly.GrizzlyContainer._service(GrizzlyContainer.java:182) at com.sun.jersey.server.impl.container.grizzly.GrizzlyContainer.service(GrizzlyContainer.java:147) at org.glassfish.admin.rest.adapter.RestAdapter.service(RestAdapter.java:148) at com.sun.grizzly.tcp.http11.GrizzlyAdapter.service(GrizzlyAdapter.java:179) at com.sun.enterprise.v3.server.HK2Dispatcher.dispath(HK2Dispatcher.java:117) at com.sun.enterprise.v3.services.impl.ContainerMapper$Hk2DispatcherCallable.call(ContainerMapper.java:354) at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:195) at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:860) at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:757) at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1056) at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:229) at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90) at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79) at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54) at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59) at com.sun.grizzly.ContextTask.run(ContextTask.java:71) at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532) at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513) at java.lang.Thread.run(Thread.java:724) Caused by: org.apache.commons.logging.LogConfigurationException: User-specified log class 'org.apache.commons.logging.impl.Log4JLogger' cannot be found or is not useable. 
at org.apache.commons.logging.impl.LogFactoryImpl.discoverLogImplementation(LogFactoryImpl.java:874) at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:604) at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:336) at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:310) at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:685) at org.apache.axis2.deployment.DeploymentEngine.(DeploymentEngine.java:76) ... 68 more |#] -- cut -- ... the end of the stacktrace: [#|2013-06-27T11:34:40.714+0200|SEVERE|glassfish3.1.2|javax.enterprise.system.tools.admin.org.glassfish.deployment.admin|_ThreadID=84;_ThreadName=admin-thread-pool-4848(4);|Exception while invoking class com.sun.enterprise.web.WebApplication start method java.lang.Exception: java.lang.IllegalStateException: ContainerBase.addChild: start: org.apache.catalina.LifecycleException: org.apache.catalina.LifecycleException: org.apache.commons.logging.LogConfigurationException: User-specified log class 'org.apache.commons.logging.impl.Log4JLogger' cannot be found or is not useable. at com.sun.enterprise.web.WebApplication.start(WebApplication.java:138) at org.glassfish.internal.data.EngineRef.start(EngineRef.java:130) at org.glassfish.internal.data.ModuleInfo.start(ModuleInfo.java:269) at org.glassfish.internal.data.ApplicationInfo.start(ApplicationInfo.java:301) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:461) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:240) at org.glassfish.deployment.admin.DeployCommand.execute(DeployCommand.java:389) at com.sun.enterprise.v3.admin.CommandRunnerImpl$1.execute(CommandRunnerImpl.java:348) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:363) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1085) at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$1200(CommandRunnerImpl.java:95) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1291) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1259) at org.glassfish.admin.rest.ResourceUtil.runCommand(ResourceUtil.java:214) at org.glassfish.admin.rest.ResourceUtil.runCommand(ResourceUtil.java:207) at org.glassfish.admin.rest.resources.TemplateListOfResource.createResource(TemplateListOfResource.java:148) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60) at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205) at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75) at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288) at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:134) at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:134) at 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108) at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84) at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469) at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400) at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349) at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339) at com.sun.jersey.server.impl.container.grizzly.GrizzlyContainer._service(GrizzlyContainer.java:182) at com.sun.jersey.server.impl.container.grizzly.GrizzlyContainer.service(GrizzlyContainer.java:147) at org.glassfish.admin.rest.adapter.RestAdapter.service(RestAdapter.java:148) at com.sun.grizzly.tcp.http11.GrizzlyAdapter.service(GrizzlyAdapter.java:179) at com.sun.enterprise.v3.server.HK2Dispatcher.dispath(HK2Dispatcher.java:117) at com.sun.enterprise.v3.services.impl.ContainerMapper$Hk2DispatcherCallable.call(ContainerMapper.java:354) at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:195) at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:860) at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:757) at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1056) at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:229) at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90) at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79) at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54) at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59) at com.sun.grizzly.ContextTask.run(ContextTask.java:71) at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532) at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513) at java.lang.Thread.run(Thread.java:724) |#] [#|2013-06-27T11:34:40.715+0200|SEVERE|glassfish3.1.2|javax.enterprise.system.core.com.sun.enterprise.v3.server|_ThreadID=84;_ThreadName=admin-thread-pool-4848(4);|Exception while loading the app|#] [#|2013-06-27T11:34:40.862+0200|SEVERE|glassfish3.1.2|javax.enterprise.system.tools.admin.org.glassfish.deployment.admin|_ThreadID=84;_ThreadName=admin-thread-pool-4848(4);|Exception while loading the app : java.lang.IllegalStateException: ContainerBase.addChild: start: org.apache.catalina.LifecycleException: org.apache.catalina.LifecycleException: org.apache.commons.logging.LogConfigurationException: User-specified log class 'org.apache.commons.logging.impl.Log4JLogger' cannot be found or is not useable.|#] [#|2013-06-27T11:34:40.875+0200|INFO|glassfish3.1.2|org.glassfish.admingui|_ThreadID=85;_ThreadName=admin-thread-pool-4848(5);|Exception Occurred :Error occurred during deployment: Exception while loading the app : java.lang.IllegalStateException: ContainerBase.addChild: start: org.apache.catalina.LifecycleException: 
org.apache.catalina.LifecycleException: org.apache.commons.logging.LogConfigurationException: User-specified log class 'org.apache.commons.logging.impl.Log4JLogger' cannot be found or is not useable.. Please see server.log for more details.|#] Is it even possible to deploy axis2 on glassfish?

    Read the article

  • Strange Recurrent Excessive I/O Wait

    - by Chris
    I know quite well that I/O wait has been discussed multiple times on this site, but all the other topics seem to cover constant I/O latency, while the I/O problem we need to solve on our server occurs at irregular (short) intervals, but is ever-present with massive spikes of up to 20k ms a-wait and service times of 2 seconds. The disk affected is /dev/sdb (Seagate Barracuda, for details see below). A typical iostat -x output would at times look like this, which is an extreme sample but by no means rare: iostat (Oct 6, 2013) tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 16.00 0.00 156.00 9.75 21.89 288.12 36.00 57.60 5.50 0.00 44.00 8.00 48.79 2194.18 181.82 100.00 2.00 0.00 16.00 8.00 46.49 3397.00 500.00 100.00 4.50 0.00 40.00 8.89 43.73 5581.78 222.22 100.00 14.50 0.00 148.00 10.21 13.76 5909.24 68.97 100.00 1.50 0.00 12.00 8.00 8.57 7150.67 666.67 100.00 0.50 0.00 4.00 8.00 6.31 10168.00 2000.00 100.00 2.00 0.00 16.00 8.00 5.27 11001.00 500.00 100.00 0.50 0.00 4.00 8.00 2.96 17080.00 2000.00 100.00 34.00 0.00 1324.00 9.88 1.32 137.84 4.45 59.60 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 22.00 44.00 204.00 11.27 0.01 0.27 0.27 0.60 Let me provide you with some more information regarding the hardware. It's a Dell 1950 III box with Debian as OS where uname -a reports the following: Linux xx 2.6.32-5-amd64 #1 SMP Fri Feb 15 15:39:52 UTC 2013 x86_64 GNU/Linux The machine is a dedicated server that hosts an online game without any databases or I/O heavy applications running. The core application consumes about 0.8 of the 8 GBytes RAM, and the average CPU load is relatively low. The game itself, however, reacts rather sensitive towards I/O latency and thus our players experience massive ingame lag, which we would like to address as soon as possible. 
iostat: avg-cpu: %user %nice %system %iowait %steal %idle 1.77 0.01 1.05 1.59 0.00 95.58 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sdb 13.16 25.42 135.12 504701011 2682640656 sda 1.52 0.74 20.63 14644533 409684488 Uptime is: 19:26:26 up 229 days, 17:26, 4 users, load average: 0.36, 0.37, 0.32 Harddisk controller: 01:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 1078 (rev 04) Harddisks: Array 1, RAID-1, 2x Seagate Cheetah 15K.5 73 GB SAS Array 2, RAID-1, 2x Seagate ST3500620SS Barracuda ES.2 500GB 16MB 7200RPM SAS Partition information from df: Filesystem 1K-blocks Used Available Use% Mounted on /dev/sdb1 480191156 30715200 425083668 7% /home /dev/sda2 7692908 437436 6864692 6% / /dev/sda5 15377820 1398916 13197748 10% /usr /dev/sda6 39159724 19158340 18012140 52% /var Some more data samples generated with iostat -dx sdb 1 (Oct 11, 2013) Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util sdb 0.00 15.00 0.00 70.00 0.00 656.00 9.37 4.50 1.83 4.80 33.60 sdb 0.00 0.00 0.00 2.00 0.00 16.00 8.00 12.00 836.00 500.00 100.00 sdb 0.00 0.00 0.00 3.00 0.00 32.00 10.67 9.96 1990.67 333.33 100.00 sdb 0.00 0.00 0.00 4.00 0.00 40.00 10.00 6.96 3075.00 250.00 100.00 sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 4.00 0.00 0.00 100.00 sdb 0.00 0.00 0.00 2.00 0.00 16.00 8.00 2.62 4648.00 500.00 100.00 sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 2.00 0.00 0.00 100.00 sdb 0.00 0.00 0.00 1.00 0.00 16.00 16.00 1.69 7024.00 1000.00 100.00 sdb 0.00 74.00 0.00 124.00 0.00 1584.00 12.77 1.09 67.94 6.94 86.00 Characteristic charts generated with rrdtool can be found here: iostat plot 1, 24 min interval: http://imageshack.us/photo/my-images/600/yqm3.png/ iostat plot 2, 120 min interval: http://imageshack.us/photo/my-images/407/griw.png/ As we have a rather large cache of 5.5 GBytes, we thought it might be a good idea to test if the I/O wait spikes would perhaps be caused by cache miss events. Therefore, we did a sync and then this to flush the cache and buffers: echo 3 > /proc/sys/vm/drop_caches and directly afterwards the I/O wait and service times virtually went through the roof, and everything on the machine felt like slow motion. During the next few hours the latency recovered and everything was as before - small to medium lags in short, unpredictable intervals. Now my question is: does anybody have any idea what might cause this annoying behaviour? Is it the first indication of the disk array or the raid controller dying, or something that can be easily mended by rebooting? (At the moment we're very reluctant to do this, however, because we're afraid that the disks might not come back up again.) Any help is greatly appreciated. Thanks in advance, Chris. Edited to add: we do see one or two processes go to 'D' state in top, one of which seems to be kjournald rather frequently. If I'm not mistaken, however, this does not indicate the processes causing the latency, but rather those affected by it - correct me if I'm wrong. Does the information about uninterruptibly sleeping processes help us in any way to address the problem? 
@Andy Shinn requested smartctl data, here it is:

smartctl -a -d megaraid,2 /dev/sdb yields:

smartctl 5.40 2010-07-12 r3124 [x86_64-unknown-linux-gnu] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

Device: SEAGATE ST3500620SS Version: MS05
Serial number:
Device type: disk
Transport protocol: SAS
Local Time is: Mon Oct 14 20:37:13 2013 CEST
Device supports SMART and is Enabled
Temperature Warning Disabled or Not Supported
SMART Health Status: OK

Current Drive Temperature: 20 C
Drive Trip Temperature: 68 C
Elements in grown defect list: 0

Vendor (Seagate) cache information
  Blocks sent to initiator = 1236631092
  Blocks received from initiator = 1097862364
  Blocks read from cache and sent to initiator = 1383620256
  Number of read and write commands whose size <= segment size = 531295338
  Number of read and write commands whose size > segment size = 51986460

Vendor (Seagate/Hitachi) factory information
  number of hours powered up = 36556.93
  number of minutes until next internal SMART test = 32

Error counter log:
           Errors Corrected by           Total   Correction     Gigabytes    Total
               ECC          rereads/     errors   algorithm      processed    uncorrected
           fast | delayed   rewrites  corrected  invocations   [10^9 bytes]  errors
read:    509271032      47         0  509271079   509271079      20981.423         0
write:           0       0         0          0           0       5022.039         0
verify: 1870931090     196         0 1870931286  1870931286     100558.708         0

Non-medium error count: 0

SMART Self-test log
Num  Test              Status                 segment  LifeTime  LBA_first_err [SK ASC ASQ]
     Description                              number   (hours)
# 1  Background short  Completed                  16     36538              - [-   -    -]
# 2  Background short  Completed                  16     36514              - [-   -    -]
# 3  Background short  Completed                  16     36490              - [-   -    -]
# 4  Background short  Completed                  16     36466              - [-   -    -]
# 5  Background short  Completed                  16     36442              - [-   -    -]
# 6  Background long   Completed                  16     36420              - [-   -    -]
# 7  Background short  Completed                  16     36394              - [-   -    -]
# 8  Background short  Completed                  16     36370              - [-   -    -]
# 9  Background long   Completed                  16     36364              - [-   -    -]
#10  Background short  Completed                  16     36361              - [-   -    -]
#11  Background long   Completed                  16         2              - [-   -    -]
#12  Background short  Completed                  16         0              - [-   -    -]

Long (extended) Self Test duration: 6798 seconds [113.3 minutes]

smartctl -a -d megaraid,3 /dev/sdb yields:

smartctl 5.40 2010-07-12 r3124 [x86_64-unknown-linux-gnu] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

Device: SEAGATE ST3500620SS Version: MS05
Serial number:
Device type: disk
Transport protocol: SAS
Local Time is: Mon Oct 14 20:37:26 2013 CEST
Device supports SMART and is Enabled
Temperature Warning Disabled or Not Supported
SMART Health Status: OK

Current Drive Temperature: 19 C
Drive Trip Temperature: 68 C
Elements in grown defect list: 0

Vendor (Seagate) cache information
  Blocks sent to initiator = 288745640
  Blocks received from initiator = 1097848399
  Blocks read from cache and sent to initiator = 1304149705
  Number of read and write commands whose size <= segment size = 527414694
  Number of read and write commands whose size > segment size = 51986460

Vendor (Seagate/Hitachi) factory information
  number of hours powered up = 36596.83
  number of minutes until next internal SMART test = 28

Error counter log:
           Errors Corrected by           Total   Correction     Gigabytes    Total
               ECC          rereads/     errors   algorithm      processed    uncorrected
           fast | delayed   rewrites  corrected  invocations   [10^9 bytes]  errors
read:    610862490      44         0  610862534   610862534      20470.133         0
write:           0       0         0          0           0       5022.480         0
verify: 2861227413     203         0 2861227616  2861227616     100872.443         0

Non-medium error count: 1

SMART Self-test log
Num  Test              Status                 segment  LifeTime  LBA_first_err [SK ASC ASQ]
     Description                              number   (hours)
# 1  Background short  Completed                  16     36580              - [-   -    -]
# 2  Background short  Completed                  16     36556              - [-   -    -]
# 3  Background short  Completed                  16     36532              - [-   -    -]
# 4  Background short  Completed                  16     36508              - [-   -    -]
# 5  Background short  Completed                  16     36484              - [-   -    -]
# 6  Background long   Completed                  16     36462              - [-   -    -]
# 7  Background short  Completed                  16     36436              - [-   -    -]
# 8  Background short  Completed                  16     36412              - [-   -    -]
# 9  Background long   Completed                  16     36404              - [-   -    -]
#10  Background short  Completed                  16     36401              - [-   -    -]
#11  Background long   Completed                  16         2              - [-   -    -]
#12  Background short  Completed                  16         0              - [-   -    -]

Long (extended) Self Test duration: 6798 seconds [113.3 minutes]
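
One way to make the D-state observation actionable is to record, at the moment of a latency spike, which tasks are blocked and in which kernel function they are waiting, alongside the iostat figures for the suspect device. Below is a minimal sketch of such a sampler, offered only as an illustration: it assumes a Linux box with the sysstat package installed and a kernel that exposes /proc/<pid>/stack (CONFIG_STACKTRACE); the device name sdb and the log path are placeholders to adjust.

    #!/bin/sh
    # Sample uninterruptible (D-state) tasks and one extended iostat report,
    # roughly once per second, into a single log for later correlation.
    DEV=sdb                                  # suspect device (assumption)
    LOG=/var/tmp/dstate-$(date +%Y%m%d).log  # log path (assumption)

    while true; do
        date '+%F %T' >> "$LOG"
        # Tasks currently in uninterruptible sleep, with the kernel symbol they wait in.
        ps -eo state,pid,comm,wchan | awk '$1 == "D"' >> "$LOG"
        # Kernel stack of each such task; silently skipped if the kernel does not expose it.
        for pid in $(ps -eo state,pid | awk '$1 == "D" {print $2}'); do
            [ -r "/proc/$pid/stack" ] && { echo "--- stack of PID $pid ---"; cat "/proc/$pid/stack"; } >> "$LOG"
        done
        # One 1-second extended iostat sample; the first report is the since-boot
        # average, so keep only the tail of the output.
        iostat -dx "$DEV" 1 2 | tail -n 3 >> "$LOG"
        sleep 1
    done

If the blocked tasks (kjournald included) always end up waiting in the same code path while await and svctm explode for writes of only a few kilobytes, that tends to point further down the stack - towards the controller, its battery-backed write cache, or a slowly failing disk - rather than towards the filesystem or a particular application.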

    Read the article

  • CodePlex Daily Summary for Friday, August 22, 2014

    CodePlex Daily Summary for Friday, August 22, 2014

    Popular Releases
    QuickMon: Version 3.22: This release adds two important changes. 1. Config variables at the monitor pack level (global to the entire monitor pack for all Collectors). 2. The QuickMon (Windows) service now automatically reloads monitor packs that have been changed since it was started. This means you don't have to restart the service for changes to take effect.
    SSIS ReportGeneratorTask: ReportGenerator Task 1.8: New version of the SSIS Report Generator Task that supports SQL Server 2008, 2012 and 2014. In addition to minor bug fixes, Multi-Value Parameters and Execution Information were integrated. The complete variable and parameter assignment is now a string and can be set dynamically.
    Corefig for Windows Server 2012 Core and Hyper-V Server 2012: Corefig 1.1.2 ISO: Fixes: Updated Hyper-V scripts to use version 2 of the WMI tree. Updated the Hyper-V check for saved VMs to look for the proper identifier. Fixed text issues with the licensing tab (thanks to briangw for rooting this problem out). Enhancements: New (and improved) version number in Corefig.psd1.
    Outlook 2013 Backup Add-In: Outlook Backup Add-In 1.3: Changelog for the new version: Added button in the config window to reset the last backup time (this will trigger the backup after closing Outlook). Minimum interval set to 0 (backup at each closing of Outlook). Catch exception when a data store entry is corrupt. Added two parameters (prefix and suffix) to automatically rename the backup file. Updated VSTO Runtime to 10.0.50325. Upgraded project to Visual Studio 2013. Added optional command to run after backup (e.g. pack backup files, ...) Add...
    babelua: 1.6.7.0: V1.6.7.0 - 2014.8.21. New feature: add a file search window (Ctrl+1 or Alt+L), like the file search in VC Assistant. Stability improvement: performance improvement when BabeLua loads/unloads; performance improvement when the debugger loads Lua files.
    File Explorer for WPF: FileExplorer3_20August2014: Please see Aug14 Update.
    Open NFe: RDI Open NFe 3.0 (alpha): Update to the NFe 3.10 layout.
    ODBC Connect: v1.0: ODBC Connect executables for both 32-bit and 64-bit ODBC data sources.
    MSSQL Deployment Tool: Microsoft SQL Deploy Tool v1.3.1: MicrosoftSqlDeployTool: v1.3.1.38348. What's changed? Updated namespace and assembly name. Bug fixing.
    SharePoint 2013 Search Query Tool: SharePoint 2013 Search Query Tool v2.1: Layout improvements. Bug fixes. Stores auth method and user name. Moved experimental settings to the Advanced box.
    CtrlAltStudio Viewer: CtrlAltStudio Viewer 1.2.2.41183 Alpha: This alpha of the CtrlAltStudio Viewer provides some preliminary Oculus Rift DK2 support. For more details, see the release notes linked to below. Release notes: http://ctrlaltstudio.com/viewer/release-notes/1-2-2-41183-alpha Support info: http://ctrlaltstudio.com/viewer/support Privacy policy: http://ctrlaltstudio.com/viewer/privacy Disclaimer: This software is not provided or supported by Linden Lab, the makers of Second Life.
    HDD Guardian: HDD Guardian 0.6.1: New: package now includes smartctl 6.3. Removed: standard notification e-mail; you now have to set your mail server to send e-mail alerts. Bugfix: USB detection error; custom e-mail server settings issue; bottom panel displays a wrong ATA error count.
    VG-Ripper & PG-Ripper: VG-Ripper 2.9.62: Changes: NEW: Added support for 'MadImage.org' links. NEW: Added support for 'ImgSpot.org' links. NEW: Added support for 'ImgClick.net' links. NEW: Added support for 'Imaaage.com' links. NEW: Added support for 'Image-Bugs.com' links. NEW: Added support for 'Pictomania.org' links. NEW: Added support for 'ImgDap.com' links. NEW: Added support for 'FileSpit.com' links. FIXED: 'ImgSee.me' links.
    Magick.NET: Magick.NET 7.0.0.0001: Magick.NET linked with ImageMagick 7-Beta.
    CMake Tools for Visual Studio: CMake Tools for Visual Studio 1.2: This release adds the following new features and bug fixes from CMake Tools for Visual Studio 1.1: Added support for CMake 3.0. Added support for word completion. Added IntelliSense support for the CMAKE_HOST_SYSTEM_INFORMATION command. Fixed syntax highlighting for tokens beginning with escape sequences. Fixed issue uninstalling CMake Tools for Visual Studio after Visual Studio has been uninstalled.
    GW2 Personal Assistant Overlay: GW2 Personal Assistant Overlay 1.1: Overview: 1.1 is the second 'stable' release of the GW2 Personal Assistant Overlay. This version includes just a couple of very minor features and some minor bug fixes. For details regarding installation, setup, and general use, see Documentation. Note: If you were using a previous version, you will probably want to copy over the following user settings files: GW2PAO.DungeonSettings.xml GW2PAO.EventSettings.xml GW2PAO.WvWSettings.xml GW2PAO.ZoneCompletionSettings.xml New Features: Added new "No...
    Fluentx: Fluentx v1.5.3: Added a few more extension methods.
    fastJSON: v2.1.2: 2.1.2 - bug fix circular references
    JPush.NET: JPush Server SDK 1.2.1 (For JPush V3): Assembly: 1.2.1.24728. JPush REST API Version: v3. JPush Documentation Reference. .NET framework: v4.0 or above. Sample class: JPushClientV3. 2014 August 15th.
    SEToolbox: SEToolbox 01.043.008 Release 1: Changed ship/station names to use the new DisplayName instead of Beacon/Antenna. Fixed issue with updated SE binaries 01.043.018 using new Voxel Material definitions.

    New Projects
    1thManage: GDT for everyone
    CreateProjectOnCodePlex: This is the first project for CoderCamps.
    HEAD FIRST C# LAB 1 : A DAY AT THE RACES: This has been provided for educational purposes and general discussion to improve coding practices associated with the resources detailed within Head First C#.
    Introduce Audit logging to your EF application using Repository & Unit of Work: Introduce auditing in your application that uses Entity Framework by utilizing the Repository and Unit of Work design patterns.
    License Registration (C++): Allows creating a demo version and activating or deactivating a module.
    MS Word SharepointWiki Plugin: Scope of the plugin is to enable a post to a SharePoint Wiki from within MS Word with formatted text and images.
    Send My Zip: This app will help you send files that were zipped, then send the e-mail with the password information. This project is currently in setup mode and only available...
    winhttp: this is a project for http/https download.
    Wix Builder: WixBuilder focuses on easily generating a WiX script from a project output, then compiling and linking it into an MSI installer using the WiX Toolset.
    XiamiSig: ????????。

    Read the article

  • CodePlex Daily Summary for Monday, August 18, 2014

    CodePlex Daily Summary for Monday, August 18, 2014

    Popular Releases
    Magick.NET: Magick.NET 7.0.0.0001: Magick.NET linked with ImageMagick 7-Beta.
    CMake Tools for Visual Studio: CMake Tools for Visual Studio 1.2: This release adds the following new features and bug fixes from CMake Tools for Visual Studio 1.1: Added support for CMake 3.0. Added support for word completion. Added IntelliSense support for the CMAKE_HOST_SYSTEM_INFORMATION command. Fixed syntax highlighting for tokens beginning with escape sequences. Fixed issue uninstalling CMake Tools for Visual Studio after Visual Studio has been uninstalled.
    GW2 Personal Assistant Overlay: GW2 Personal Assistant Overlay 1.1: Overview: 1.1 is the second 'stable' release of the GW2 Personal Assistant Overlay. This version includes just a couple of very minor features and some minor bug fixes. For details regarding installation, setup, and general use, see Documentation. Note: If you were using a previous version, you will probably want to copy over the following user settings files: GW2PAO.DungeonSettings.xml GW2PAO.EventSettings.xml GW2PAO.WvWSettings.xml GW2PAO.ZoneCompletionSettings.xml New Features: Added new "No...
    WallSwitch: WallSwitch 1.2.5: Version 1.2.5 changes: Added support for sequential order in collage mode. Added option to display multiple images per switch in collage mode. Fixed bug where border width wasn't being loaded properly and was reverting to default values. Fixed bug where sequential order was repeating images on multiple monitors. Decreased likelihood of random images being repeated.
    OpenCppCoverage: OpenCppCoverage 0.9.1: Added Jenkins support. Command line arguments can be placed inside a config file. If you do not have Visual Studio C++ 2013 you need to download the redistributable packages: http://www.microsoft.com/en-us/download/details.aspx?id=40784
    Easy Backup Windows Service: Release 2.0 with CU: Fix log error when the "To" directory does not exist in the file system. Force run program as administrator by default. Add 'everyday' schedule element. Update solution to VS 2013.
    Easy Backup Application: Release 2.0 with CU: Fix log error when the "To" directory does not exist in the file system. Fix app location initialization. Force run program as administrator by default. Update solution to VS 2013.
    TEBookConverter: 1.5: Added: Turkish and French translations. Added: A few interface changes. Removed: Skin
    Dynamulet: Dynamulet v0.1: DynamoDB Transaction Server v0.1
    Console parallel nunit tests runner: ConsoleUnitTestsRunner 1.03: bugfixing
    Fluentx: Fluentx v1.5.3: Added a few more extension methods.
    fastJSON: v2.1.2: 2.1.2 - bug fix circular references
    JPush.NET: JPush Server SDK 1.2.1 (For JPush V3): Assembly: 1.2.1.24728. JPush REST API Version: v3. JPush Documentation Reference. .NET framework: v4.0 or above. Sample class: JPushClientV3. 2014 August 15th.
    SEToolbox: SEToolbox 01.043.008 Release 1: Changed ship/station names to use the new DisplayName instead of Beacon/Antenna. Fixed issue with updated SE binaries 01.043.018 using new Voxel Material definitions.
    Google .Net API: Drive.Sample: Instructions for the Google .NET Client API – Drive.Sample. Browse source: http://code.google.com/p/google-api-dotnet-client/source/browse/?repo=samples#hg%2FDrive.Sample or the main file Program.cs: http://code.google.com/p/google-api-dotnet-client/source/browse/Drive.Sample/Program.cs?repo=samples 1. Checkout Instructions. Prerequisites: Install Visual Studio and Mercurial (http://mercurial.selenic.com/). ...
    FineUI - jQuery / ExtJS based ASP.NET Controls: FineUI v4.1.1: -??Form??????????????(???-5929)。 -?TemplateField??ExpandOnDoubleClick、ExpandOnEnter、ExpandToSelectRow????(LZOM-5932)。 -BodyPadding???????,??“5”“5 10”,???????????“5px”“5px 10px”。 -??TriggerBox?EnableEdit=false????,??????????????(Jango_Jing-5450)。 -???????????DataKeyNames???????????(yygy-6002)。 -????????????????????????(Gnid-6018)。 -??PageManager???AutoSizePanelID????,??????????????????(yygy-6008)。 -?FState???????????????,????????????????(????-5925)。 -??????OnClientClick???return?????????(FineU...
    DNN CMS Platform: 07.03.02: Major highlights: Fixed backwards compatibility issue with 3rd party control panels. Fixed issue in the drag and drop functionality of the File Uploader in IE 11 and Safari. Fixed issue where users were able to create pages with the same name. Fixed issue that affected older versions of DNN that do not include the maxAllowedContentLength during upgrade. Fixed issue that stopped some skins from being upgraded to newer versions. Fixed issue that randomly showed an unexpected error during us...
    WordMat: WordMat for Mac: WordMat for Mac has a few limitations compared to the Windows version - Graph is not supported (Gnuplot, GeoGebra and Excel work) - Units are not supported yet (coming up). The Mac version is not yet as tested as the Windows version.
    MFCMAPI: August 2014 Release: Build: 15.0.0.1042. Full release notes at SGriffin's blog. If you just want to run MFCMAPI or MrMAPI, get the executables. If you want to debug them, get the symbol files and the source. The 64-bit builds will only work on a machine with Outlook 2010/2013 64-bit installed. All other machines should use the 32-bit builds, regardless of the operating system. Facebook Badge
    EWSEditor: EwsEditor 1.10 Release: Export and import of items as a full-fidelity stream works - without proxy classes - using raw EWS POSTs. Turned off word wrap for the EWS request field in EWS POST windows. Several windows with scrolling text boxes were limiting content to 32k - this restriction has been removed. Split server timezone info off to a separate menu item from the timezone info window so that the timezone info window can be used without logging into a mailbox. Lots of updates to the TimeZone window. UserAgen...

    New Projects
    ballmon: ballmon
    Exchange Database Recovery With and Without Log Files is Possible: This segment gives an overview of Exchange Server transaction log files. It describes how users can recover their database with and without log files.
    Fabs.Net: A project I made for my own satisfaction and development.
    JacoChat: JacoChat is a simple chatting interface that uses my personal webserver as a "wall" for people to chat on.
    ManagedWin32: ManagedWin32 is a library that exposes the Win32 API to .NET applications.
    Open XML Extensions: The project provides additions to the Open XML SDK and related projects (e.g., PowerTools for Open XML), starting with MemoryStreams for Open XML Documents.
    orntic: Project for an insurance company
    TBOX: The Treasure Box Library: TBOX is a multi-platform C library for Unix, Windows, Mac, iOS, Android, etc. It includes asio, stream, container, algorithm, XML and other library modules.
    WeatherTS: TypeScript weather application.
    ?????@/????: ??????????????:????,????,????,???????,????????,??????:????????,?????!
    ?????????: ????????????????????,????????:??、??、???,?????????????????????!
    ????-??: ??????????????,????,???????????????。

    Read the article

  • CVE-2006-3744 Multiple Integer overflow vulnerabilities in ImageMagick

    - by chandan
    CVE Description: CVE-2006-3744 - Numeric Errors vulnerability
    CVSSv2 Base Score: 5.1
    Component: ImageMagick
    Product and Resolution: Solaris 10 - SPARC: 136882-03, X86: 136883-03

    This notification describes vulnerabilities fixed in third-party components that are included in Sun's product distribution. Information about vulnerabilities affecting Oracle Sun products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • Older SAS1 hardware Vs. newer SAS2 hardware

    - by user12620172
    I got a question today from someone asking about the older SAS1 hardware from over a year ago that we had on the older 7x10 series. They didn't leave an email so I couldn't respond directly, but I said this blog would be blunt, frank, and open, so I have no problem addressing it publicly.

    A quick history lesson here: When Sun first put out the 7x10 family hardware, the 7410 and 7310 used a SAS1 backend connection to a JBOD that had SATA drives in it. This JBOD was not manufactured by Sun, nor did Sun own the IP for it. Now, when Oracle took over, they had a problem with that, and I really can't blame them. The decision was made to cut off that JBOD and its manufacturer completely and use our own, where Oracle controlled both the IP and the manufacturing. So in the summer of 2010 the cut was made, and the 7410 and 7310 had a hardware refresh and now had a SAS2 backend going to a SAS2 JBOD with SAS2 drives instead of SATA. This new hardware had two big advantages. First, there was a nice performance increase, mostly due to the faster backend. Even better, the SAS2 interface on the drives allowed for a MUCH faster failover between cluster heads, as the SATA drives were the bottleneck on the older hardware. In September of 2010 there was a major refresh of the rest of the 7000 hardware, the controllers and the other family members, and that's where we got today's current line-up of the 7x20 series. So the 7x20 has always used the new trays, and the 7410 and 7310 have used the new SAS2 trays since July of 2010.

    Now for the bad news. People who have the 7410 and 7310 from BEFORE the July 2010 cutoff have the models with SAS1 HBAs in them to connect to the older SAS1 trays. Remember, that manufacturer cut all ties with us and stopped making the JBOD, so there's just no way to get more of them, as they don't exist.

    There are some options, however. Oracle support does support taking out the SAS1 HBAs in the old 7410 and 7310 and putting in newer SAS2 HBAs, which can talk to the new trays. Hey, I didn't say it was a great option, I just said it's an option. I fully realize that you would then have a SAS1 JBOD full of SATA drives that you could no longer connect. I do know a client that did this: they took the SAS1 JBOD, connected it to another server, formatted the drives, and are using it as a plain, non-7000 JBOD. This is not supported by Oracle support.

    The other option is to just keep it as-is, as it works just fine, but you just can't expand it. Then you can get a newer 7x20 series system and use the built-in ZFSSA replication feature to move the data over. Now you can use the newer one for your production data and use the older one for DR, snaps and clones.

    Read the article

  • BPEL 11.1.1.2 Certified for Prebuilt E-Business Suite 12.1.3 SOA Integrations

    - by Steven Chan
    A new certification was released simultaneously with the E-Business Suite 12.1.3 Maintenance Pack late last year: the use of BPEL 11g Version 11.1.1.2 with E-Business Suite 12.1.3. There are two major options for SOA-related integrations for the E-Business Suite:

    - Custom integrations using the Oracle Application Server (SOA) Adapter for Oracle Applications
    - Prebuilt SOA integrations for E-Business Suite using BPEL Process Manager

    For more background about these two options, please see this article: BPEL 10.1.3.5 Certified for Prebuilt E-Business Suite 12 SOA Integrations

    Read the article

< Previous Page | 410 411 412 413 414 415 416 417 418 419 420 421  | Next Page >