Search Results

Search found 14145 results on 566 pages for 'level of detail'.


  • Using C# to parse a SOAP Response

    - by Gavin
    I am trying to get the values for faultcode, faultstring, and OrderNumber from the SOAP response below:

        <SOAP:Envelope xmlns:SOAP="http://schemas.xmlsoap.org/soap/envelope/">
          <SOAP:Body>
            <faultcode>1234</faultcode>
            <faultstring>SaveOrder:SetrsOrderMain:Cannot change OrderDate if GLPeriod is closed, new OrderDate is 3/2/2010:Ln:1053</faultstring>
            <detail>
              <SOAP:Envelope xmlns:SOAP="http://schemas.xmlsoap.org/soap/envelope/">
                <SOAP:Body UserGUID="test">
                  <m:SaveOrder xmlns:m="http://www.test.com/software/schema/" UserGUID="test">
                    <Order OrderNumber="1234-1234-123" Caller="" OrderStatus="A" xmlns="http://www.test.com/software/schema/">

    Here is my code in C#:

        XDocument doc = XDocument.Load(HttpContext.Current.Server.MapPath("XMLexample.xml"));
        var errorDetail = new EcourierErrorDetail
        {
            FaultCode = from fc in doc.Descendants("faultcode") select fc.Value,
            FaultString = from fs in doc.Descendants("faultstring") select fs.Value,
            OrderNumber = from o in doc.Descendants("detail").Elements("Order").Attributes("OrderNumber") select o.Value
        };
        return errorDetail;

    I am able to get the values for both faultcode and faultstring, but not the OrderNumber: I get "Enumeration yielded no results." Can anyone help? Thanks.
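
    The likely culprit (an educated guess, not part of the original question): the <Order> element is nested more than one level below <detail> and is declared in the http://www.test.com/software/schema/ default namespace, so Descendants("detail").Elements("Order") finds nothing. Below is a minimal sketch of a namespace-aware query; the file name is taken from the question, while the class and variable names around it are just illustrative:

        using System;
        using System.Linq;
        using System.Xml.Linq;

        class SoapFaultParser
        {
            static void Main()
            {
                // "XMLexample.xml" is assumed to contain the SOAP response quoted above.
                XDocument doc = XDocument.Load("XMLexample.xml");
                XNamespace ns = "http://www.test.com/software/schema/";

                // faultcode and faultstring carry no namespace, so a plain name works.
                string faultCode = doc.Descendants("faultcode").Select(e => e.Value).FirstOrDefault();
                string faultString = doc.Descendants("faultstring").Select(e => e.Value).FirstOrDefault();

                // <Order> lives in the ns namespace and is nested inside <detail>,
                // so search all descendants of <detail> rather than only its children.
                string orderNumber = doc.Descendants("detail")
                                        .Descendants(ns + "Order")
                                        .Attributes("OrderNumber")
                                        .Select(a => a.Value)
                                        .FirstOrDefault();

                Console.WriteLine("{0} / {1} / {2}", faultCode, faultString, orderNumber);
            }
        }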


  • Does anyone know why the jQuery dialog is showing stale content on an AJAX update?

    - by oo
    I have a series of links, and when I click on a link I want to show a dialog with detail information. This detail is returned from a jQuery AJAX request. I am using the following code to load a partial result through AJAX into a jQuery dialog. Here is the jQuery code:

        $(document).ready(function() {
            $('a.click').live('click', function() {
                var url = '/Tracker/Info?id=' + $(this).attr("id");
                var dialogOpts = {
                    modal: true,
                    bgiframe: true,
                    autoOpen: false,
                    height: 600,
                    width: 450,
                    overlay: { opacity: 0.7, background: "black" },
                    draggable: true,
                    resizeable: true,
                    open: function() {
                        // display correct dialog content
                        $("#dialogDiv").load(url);
                    }
                };
                $("#dialogDiv").dialog(dialogOpts); // end dialog
                $("#dialogDiv").dialog("open");
            });
        });

    Here is my controller action code:

        public ActionResult Info(int id)
        {
            return PartialView("LabelPartialView", _Repository.GetItem(id));
        }

    Here is the issue: when I click the first time (let's say I send id = 1234) it works fine. When I click on another item (let's say I send id = 4567) it still shows the content from 1234. When I click this second item again (again 4567), it then shows the content from 4567. Does anyone know why it might not be refreshed the first time? Is this a timing issue?
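
    A plausible cause (my reading, not confirmed in the question): .load() is asynchronous, so the dialog can be shown before the new partial has arrived, leaving whatever #dialogDiv held from the previous click visible. One common workaround is to initialise the dialog once and only open it from the load() completion callback. A sketch using the original selectors and URL scheme (the bgiframe/overlay options are omitted here for brevity):

        $(document).ready(function() {
            // Initialise the dialog a single time, closed.
            $("#dialogDiv").dialog({
                modal: true,
                autoOpen: false,
                height: 600,
                width: 450,
                draggable: true,
                resizable: true
            });

            $('a.click').live('click', function() {
                var url = '/Tracker/Info?id=' + $(this).attr("id");
                // Fetch the fresh partial first; open the dialog only once it has arrived.
                $("#dialogDiv").load(url, function() {
                    $("#dialogDiv").dialog("open");
                });
            });
        });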


  • Loading a record into a detail view

    - by summer
    Please, I need help; I think I am lost somewhere. Basically I followed the example at http://www.iphonesdkarticles.com/2008/10/sqlite-tutorial-loading-data-as.html, but I am stuck with an error when reading the description. Something must have gone wrong somewhere, but I am not sure how to solve it. This is my code; honestly I don't really understand what is going on in it. I have been stuck for two days with no solution. Help please!

        - (void) hydrateDetailViewData {
            // if detail view is hydrated then do not get it from database
            if (isDetailViewHydrated)
                return;

            if (detailStmt == nil) {
                const char *sql = "select snapTitle, snapDesc from Snap where snapID =?";
                if (sqlite3_prepare_v2(database, sql, -1, &detailStmt, NULL) != SQLITE_OK)
                    NSAssert1(0, @"Error while creating detail view statement. '%s'", sqlite3_errmsg(database));
                NSLog(@"SQLite= %d", sqlite3_step(detailStmt));
            }

            if (SQLITE_DONE != sqlite3_step(detailStmt)) {
                // NSString *descStr = [[NSString alloc] initWithString:sqlite3_column_text(detailStmt, 2)];
                NSString *descStr = [NSString stringWithUTF8String:(char *)sqlite3_column_text(detailStmt, 2)];
                self.snapDescription = descStr;
                [descStr release];
            } else
                NSAssert1(0, @"Error getting description of snap2play. '%s'", sqlite3_errmsg(database));

            sqlite3_reset(detailStmt);
            isDetailViewHydrated = YES; // if hydrated, make sure we do not get it from the database again
        }
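
    For reference, here is a sketch of how this method could look once the usual pitfalls are addressed (this is my reading of the tutorial's pattern, not the tutorial's own code): the SELECT returns two columns, so snapDesc is at zero-based index 1, not 2; the ? placeholder needs a value bound before stepping; and stringWithUTF8String: returns an autoreleased object, so it must not be released. snapID below stands for whichever integer key the detail view holds; adjust the bind call to match.

        - (void)hydrateDetailViewData {
            if (isDetailViewHydrated)
                return;

            if (detailStmt == nil) {
                const char *sql = "SELECT snapTitle, snapDesc FROM Snap WHERE snapID = ?";
                if (sqlite3_prepare_v2(database, sql, -1, &detailStmt, NULL) != SQLITE_OK)
                    NSAssert1(0, @"Error while creating detail view statement. '%s'", sqlite3_errmsg(database));
            }

            // Bind the row id to the ? placeholder before stepping the statement.
            sqlite3_bind_int(detailStmt, 1, snapID);

            if (sqlite3_step(detailStmt) == SQLITE_ROW) {
                // Zero-based column indexes: 0 = snapTitle, 1 = snapDesc.
                const char *desc = (const char *)sqlite3_column_text(detailStmt, 1);
                self.snapDescription = desc ? [NSString stringWithUTF8String:desc] : nil;
                // stringWithUTF8String: is autoreleased, so no explicit release here.
            } else {
                NSAssert1(0, @"Error getting description of snap2play. '%s'", sqlite3_errmsg(database));
            }

            sqlite3_reset(detailStmt);
            isDetailViewHydrated = YES;
        }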


  • Unnecessary Redundancy with Tables.

    - by Stacey
    My items are listed below; this is just a summary, of course. I'm using the method shown for the "Details" table to represent a type of 'inheritance', so to speak, since "Item" and "Downloadable" are going to be identical except that each will have a few additional fields relevant only to itself. My question is about this design pattern. This sort of thing appears many, many times in our projects - is there a more intelligent way to handle it? I basically need to normalize the tables as much as possible. I'm extremely new to databases, so this is all very confusing to me.

    There are 5 item types: Awards, Items, Purchases, Tokens, and Downloads. They are all very, very similar, except each has a few pieces of data relevant only to itself. I've tried to use a declaration field (like an enumerator 'Type' field) in conjunction with nullable columns, but I was told that is a bad approach. What I have done instead is take everything similar and place it in a single table, and then each type has its own table that references a column in the 'base' table. The problem occurs with the relationships, or junctions, linking all of these back to a customer. Each type takes around 2 additional tables to properly junction all of the data together, and as such my database is growing very, very large. Is there a smarter practice for this kind of behavior?

        Item
            ID   | GUID
            Name | varchar(64)

        Product
            ID      | GUID
            Name    | varchar(64)
            Store   | GUID [FK]
            Details | GUID [FK]

        Downloadable
            ID      | GUID
            Name    | varchar(64)
            Url     | nvarchar(2048)
            Details | GUID [FK]

        Details
            ID          | GUID
            Price       | decimal
            Description | text

        Peripherals [JUNCTION]
            ID     | GUID
            Detail | GUID [FK]

        Store
            ID        | GUID
            Addresses | GUID

        Addresses
            ID      | GUID
            Name    | nvarchar(64)
            State   | int [FK]
            ZipCode | int
            Address | nvarchar(64)

        State
            ID   | int
            Name | varchar(32)
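
    For illustration only, here is a sketch (SQL Server-style DDL, my own naming) of the shared-primary-key "class table inheritance" pattern the question is describing: every product-like thing gets one row in the supertype table, each subtype table reuses that same ID as both its primary key and a foreign key, and relationships to other entities then only need to reference the supertype, which avoids one junction table per subtype.

        CREATE TABLE Detail (
            ID          UNIQUEIDENTIFIER PRIMARY KEY,
            DetailType  VARCHAR(32)  NOT NULL,   -- 'Item', 'Downloadable', 'Award', ...
            Name        VARCHAR(64)  NOT NULL,
            Price       DECIMAL(18,2),
            Description TEXT
        );

        CREATE TABLE Item (
            ID UNIQUEIDENTIFIER PRIMARY KEY REFERENCES Detail(ID)
            -- columns that only items have would go here
        );

        CREATE TABLE Downloadable (
            ID  UNIQUEIDENTIFIER PRIMARY KEY REFERENCES Detail(ID),
            Url NVARCHAR(2048) NOT NULL
        );

        -- One junction table against the supertype replaces one per subtype.
        CREATE TABLE CustomerDetail (
            CustomerID UNIQUEIDENTIFIER NOT NULL,
            DetailID   UNIQUEIDENTIFIER NOT NULL REFERENCES Detail(ID),
            PRIMARY KEY (CustomerID, DetailID)
        );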


  • SQL inner join from field defined table?

    - by Wolftousen
    I currently have a total of 6 tables that are part of this question. The primary table, TableA, contains columns that all the entries in the other 5 tables have in common. The other 5 tables have columns which define the entry in TableA in more detail. For example:

        TableA
        ID | Name | Volumn | Weight | Description
        0  | T1   | 0.4    | 0.1    | Random text
        1  | R1   | 5.3    | 25     | Random text

        TableB
        ID | Color | Shape
        0  | Blue  | Sphere

        TableC
        ID | Direction | Velocity
        1  | North     | 3.4

    (Column names are just examples; don't take them for what they mean.) The ID field in TableA is unique across all the other tables (i.e. TableB will have 0, but TableC will not, nor will any other table). What I would like to do is select all the fields from TableA and the corresponding (according to the ID field) detail table (TableB-F). What I have currently done, but not tested, is add a field to TableA so it looks like this:

        TableA
        ID | Name | Volumn | Weight | Description | Table
        0  | T1   | 0.4    | 0.1    | Random text | TableB
        1  | R1   | 5.3    | 25     | Random text | TableC

    I have a few questions about this:

    1. Is it proper to do such a thing to TableA, as foreign keys won't work in this situation since they all need to link to different tables?
    2. If this is proper, would the SQL query look like this (ID would be input by the user)?

        SELECT * FROM TableA AS a INNER JOIN a.Table AS t ON a.ID = ID;

    3. Is there a better way to do this?

    Thanks for the help.
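
    For what it's worth, a sketch of one common alternative (not from the question): standard SQL cannot join to a table whose name is read out of a column at query time - that requires dynamic SQL - but with a handful of known detail tables you can LEFT JOIN each of them and let the non-matching ones come back as NULLs.

        -- Assumes the TableA/TableB/TableC layout shown above; extend with one
        -- LEFT JOIN per additional detail table (TableD-F).
        SELECT a.*,
               b.Color, b.Shape,          -- populated only when the row's detail lives in TableB
               c.Direction, c.Velocity    -- populated only when the row's detail lives in TableC
        FROM TableA AS a
        LEFT JOIN TableB AS b ON b.ID = a.ID
        LEFT JOIN TableC AS c ON c.ID = a.ID
        WHERE a.ID = 0;                   -- the user-supplied ID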


  • Determine the week when picking a day

    - by derikluv
    I'm trying to accomplish the following task. I'm developing a custom calendar with three views (day, week and month); there may be something out there already, but I'm rewriting this as a learning tool for me as well. Users will see the Day view when they first visit, with arrows to go back and forth to the next or previous day, of course. If they click on the Week view, it will give them a 7-day overview with today's date as the default, and once again they can go back and forth to the next or previous week. The last view is the full month calendar; once they click on a day, it will give them the detail of that day and at the same time reset the default to the day they picked. So if they go back to the Week view, they will see the detail for the week containing the day that they picked.

    This is where I have trouble wrapping my head around it. I know there are PHP functions that determine the day of the week, but I can't seem to work out how to pass in a date and get the full week, starting from Sunday, for the day that was passed in. For example, if I passed in 10/12/2012, I'd like the week to run from 10/07/2012 to 10/13/2012. Thank you kindly for your help or for pointing me in the right direction. Please excuse my grammar/spelling mistakes as well.
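
    One way to do it (a sketch only; the function name and the m/d/Y format are my own choices, not from the question) is to use DateTime's 'w' format character, which numbers the days 0 (Sunday) through 6 (Saturday), and step back to the week's Sunday:

        <?php
        // Returns the Sunday and Saturday bounding the week that contains $dateString.
        function weekBounds($dateString)
        {
            $day = DateTime::createFromFormat('m/d/Y', $dateString);

            // 'w' = numeric day of week, 0 (Sunday) through 6 (Saturday).
            $dayOfWeek = (int) $day->format('w');

            $start = clone $day;
            $start->modify("-$dayOfWeek days");   // back up to the week's Sunday

            $end = clone $start;
            $end->modify('+6 days');              // Saturday of the same week

            return array($start->format('m/d/Y'), $end->format('m/d/Y'));
        }

        // 10/12/2012 is a Friday, so this prints 10/07/2012 - 10/13/2012.
        list($weekStart, $weekEnd) = weekBounds('10/12/2012');
        echo "$weekStart - $weekEnd\n";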


  • MAMP + Python MySQLDB - trouble installing

    - by Frederico
    I'm currently running the latest version of MAMP on my Snow Leopard OSX, and I'm trying to install MySQLDB. Downloaded: MySQL-python-1.2.3c1 I went into the setup_posix.py and adjusted the location of the mysql_config to the one in MAMP: mysql_config.path = "/Applications/MAMP/Library/bin/mysql_config" When trying to build I get the error below. Could anyone give me a hand please: creating build/temp.macosx-10.6-universal-2.6 gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch i386 -arch ppc -arch x86_64 -pipe -Dversion_info=(1,2,3,'gamma',1) -D_version_=1.2.3c1 -I/Applications/MAMP/Library/include/mysql -I/System/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -c _mysql.c -o build/temp.macosx-10.6-universal-2.6/_mysql.o -fno-omit-frame-pointer -D_P1003_1B_VISIBLE -DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ -DIGNORE_SIGHUP_SIGQUIT -DDONT_DECLARE_CXA_PURE_VIRTUAL _mysql.c:36:23: error: my_config.h: No such file or directory _mysql.c:38:19: error: mysql.h: No such file or directory _mysql.c:39:26: error: mysqld_error.h: No such file or directory _mysql.c:40:20: error: errmsg.h: No such file or directory _mysql.c:76: error: expected specifier-qualifier-list before ‘MYSQL’ _mysql.c:90: error: expected specifier-qualifier-list before ‘MYSQL_RES’ _mysql.c: In function ‘_mysql_Exception’: _mysql.c:120: warning: implicit declaration of function ‘mysql_errno’ _mysql.c:120: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:123: error: ‘CR_MAX_ERROR’ undeclared (first use in this function) _mysql.c:123: error: (Each undeclared identifier is reported only once _mysql.c:123: error: for each function it appears in.) _mysql.c:131: error: ‘CR_COMMANDS_OUT_OF_SYNC’ undeclared (first use in this function) _mysql.c:132: error: ‘ER_DB_CREATE_EXISTS’ undeclared (first use in this function) _mysql.c:133: error: ‘ER_SYNTAX_ERROR’ undeclared (first use in this function) _mysql.c:134: error: ‘ER_PARSE_ERROR’ undeclared (first use in this function) _mysql.c:135: error: ‘ER_NO_SUCH_TABLE’ undeclared (first use in this function) _mysql.c:136: error: ‘ER_WRONG_DB_NAME’ undeclared (first use in this function) _mysql.c:137: error: ‘ER_WRONG_TABLE_NAME’ undeclared (first use in this function) _mysql.c:138: error: ‘ER_FIELD_SPECIFIED_TWICE’ undeclared (first use in this function) _mysql.c:139: error: ‘ER_INVALID_GROUP_FUNC_USE’ undeclared (first use in this function) _mysql.c:140: error: ‘ER_UNSUPPORTED_EXTENSION’ undeclared (first use in this function) _mysql.c:141: error: ‘ER_TABLE_MUST_HAVE_COLUMNS’ undeclared (first use in this function) _mysql.c:170: error: ‘ER_DUP_ENTRY’ undeclared (first use in this function) _mysql.c:213: warning: implicit declaration of function ‘mysql_error’ _mysql.c:213: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:213: warning: passing argument 1 of ‘PyString_FromString’ makes pointer from integer without a cast _mysql.c: In function ‘_mysql_server_init’: _mysql.c:308: warning: label ‘finish’ defined but not used _mysql.c:234: warning: unused variable ‘item’ _mysql.c:233: warning: unused variable ‘groupc’ _mysql.c:233: warning: unused variable ‘i’ _mysql.c:233: warning: unused variable ‘cmd_argc’ _mysql.c:232: warning: unused variable ‘s’ _mysql.c: In function ‘_mysql_ResultObject_Initialize’: _mysql.c:363: error: ‘MYSQL_RES’ undeclared (first use in this function) _mysql.c:363: error: ‘result’ undeclared (first use in this function) 
_mysql.c:368: error: ‘MYSQL_FIELD’ undeclared (first use in this function) _mysql.c:368: error: ‘fields’ undeclared (first use in this function) _mysql.c:377: error: ‘_mysql_ResultObject’ has no member named ‘use’ _mysql.c:380: warning: implicit declaration of function ‘mysql_use_result’ _mysql.c:380: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:382: warning: implicit declaration of function ‘mysql_store_result’ _mysql.c:382: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:383: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:386: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:389: warning: implicit declaration of function ‘mysql_num_fields’ _mysql.c:390: error: ‘_mysql_ResultObject’ has no member named ‘nfields’ _mysql.c:391: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:392: warning: implicit declaration of function ‘mysql_fetch_fields’ _mysql.c:438: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c: In function ‘_mysql_ResultObject_traverse’: _mysql.c:450: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:451: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c: In function ‘_mysql_ResultObject_clear’: _mysql.c:462: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:462: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:462: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:462: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:463: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c: In function ‘_mysql_ConnectionObject_Initialize’: _mysql.c:475: error: ‘MYSQL’ undeclared (first use in this function) _mysql.c:475: error: ‘conn’ undeclared (first use in this function) _mysql.c:500: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c:501: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:525: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c:547: warning: implicit declaration of function ‘mysql_init’ _mysql.c:547: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:550: warning: implicit declaration of function ‘mysql_options’ _mysql.c:550: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:550: error: ‘MYSQL_OPT_CONNECT_TIMEOUT’ undeclared (first use in this function) _mysql.c:554: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:554: error: ‘MYSQL_OPT_COMPRESS’ undeclared (first use in this function) _mysql.c:555: error: ‘CLIENT_COMPRESS’ undeclared (first use in this function) _mysql.c:558: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:558: error: ‘MYSQL_OPT_NAMED_PIPE’ undeclared (first use in this function) _mysql.c:560: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:560: error: ‘MYSQL_INIT_COMMAND’ undeclared (first use in this function) _mysql.c:562: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:562: error: ‘MYSQL_READ_DEFAULT_FILE’ undeclared (first use in this function) _mysql.c:564: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:564: error: ‘MYSQL_READ_DEFAULT_GROUP’ undeclared (first use in this function) _mysql.c:567: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:567: error: ‘MYSQL_OPT_LOCAL_INFILE’ undeclared (first use in this 
function) _mysql.c:575: warning: implicit declaration of function ‘mysql_real_connect’ _mysql.c:575: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:590: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c: In function ‘_mysql_ConnectionObject_traverse’: _mysql.c:671: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c:672: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c: In function ‘_mysql_ConnectionObject_clear’: _mysql.c:680: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c:680: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c:680: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c:680: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c:681: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c: In function ‘_mysql_ConnectionObject_close’: _mysql.c:696: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:698: warning: implicit declaration of function ‘mysql_close’ _mysql.c:698: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:700: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c: In function ‘_mysql_ConnectionObject_affected_rows’: _mysql.c:722: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:723: warning: implicit declaration of function ‘mysql_affected_rows’ _mysql.c:723: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_debug’: _mysql.c:739: warning: implicit declaration of function ‘mysql_debug’ _mysql.c: In function ‘_mysql_ConnectionObject_dump_debug_info’: _mysql.c:757: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:759: warning: implicit declaration of function ‘mysql_dump_debug_info’ _mysql.c:759: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_ConnectionObject_autocommit’: _mysql.c:783: warning: implicit declaration of function ‘mysql_query’ _mysql.c:783: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_ConnectionObject_commit’: _mysql.c:806: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_ConnectionObject_rollback’: _mysql.c:828: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_ConnectionObject_errno’: _mysql.c:940: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:941: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_ConnectionObject_error’: _mysql.c:956: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:957: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:957: warning: passing argument 1 of ‘PyString_FromString’ makes pointer from integer without a cast _mysql.c: In function ‘_mysql_escape_string’: _mysql.c:981: warning: implicit declaration of function ‘mysql_escape_string’ _mysql.c: In function ‘_mysql_escape’: _mysql.c:1088: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c: In function ‘_mysql_ResultObject_describe’: _mysql.c:1168: error: ‘MYSQL_FIELD’ undeclared (first use in this function) _mysql.c:1168: error: ‘fields’ undeclared (first use in this function) _mysql.c:1171: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1172: error: ‘_mysql_ResultObject’ has no member 
named ‘result’ _mysql.c:1173: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:1184: warning: implicit declaration of function ‘IS_NOT_NULL’ _mysql.c: In function ‘_mysql_ResultObject_field_flags’: _mysql.c:1204: error: ‘MYSQL_FIELD’ undeclared (first use in this function) _mysql.c:1204: error: ‘fields’ undeclared (first use in this function) _mysql.c:1207: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1208: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:1209: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c: At top level: _mysql.c:1250: error: expected declaration specifiers or ‘...’ before ‘MYSQL_ROW’ _mysql.c: In function ‘_mysql_row_to_tuple’: _mysql.c:1256: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:1258: warning: implicit declaration of function ‘mysql_fetch_lengths’ _mysql.c:1258: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:1258: warning: assignment makes pointer from integer without a cast _mysql.c:1261: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:1262: error: ‘row’ undeclared (first use in this function) _mysql.c: At top level: _mysql.c:1275: error: expected declaration specifiers or ‘...’ before ‘MYSQL_ROW’ _mysql.c: In function ‘_mysql_row_to_dict’: _mysql.c:1280: error: ‘MYSQL_FIELD’ undeclared (first use in this function) _mysql.c:1280: error: ‘fields’ undeclared (first use in this function) _mysql.c:1282: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:1284: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:1284: warning: assignment makes pointer from integer without a cast _mysql.c:1285: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:1288: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:1289: error: ‘row’ undeclared (first use in this function) _mysql.c: At top level: _mysql.c:1314: error: expected declaration specifiers or ‘...’ before ‘MYSQL_ROW’ _mysql.c: In function ‘_mysql_row_to_dict_old’: _mysql.c:1319: error: ‘MYSQL_FIELD’ undeclared (first use in this function) _mysql.c:1319: error: ‘fields’ undeclared (first use in this function) _mysql.c:1321: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:1323: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:1323: warning: assignment makes pointer from integer without a cast _mysql.c:1324: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:1327: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:1328: error: ‘row’ undeclared (first use in this function) _mysql.c: At top level: _mysql.c:1350: error: expected declaration specifiers or ‘...’ before ‘MYSQL_ROW’ _mysql.c: In function ‘mysql_fetch_row’: _mysql.c:1361: error: ‘MYSQL_ROW’ undeclared (first use in this function) _mysql.c:1361: error: expected ‘;’ before ‘row’ _mysql.c:1365: error: ‘_mysql_ResultObject’ has no member named ‘use’ _mysql.c:1366: error: ‘row’ undeclared (first use in this function) _mysql.c:1366: warning: implicit declaration of function ‘mysql_fetch_row’ _mysql.c:1366: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:1369: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:1372: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:1380: error: too many arguments to function ‘convert_row’ _mysql.c: In function ‘_mysql_ResultObject_fetch_row’: _mysql.c:1404: error: expected declaration specifiers or ‘...’ 
before ‘MYSQL_ROW’ _mysql.c:1419: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1431: error: ‘_mysql_ResultObject’ has no member named ‘use’ _mysql.c:1445: warning: implicit declaration of function ‘mysql_num_rows’ _mysql.c:1445: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c: In function ‘_mysql_ConnectionObject_character_set_name’: _mysql.c:1512: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c: In function ‘_mysql_get_client_info’: _mysql.c:1603: warning: implicit declaration of function ‘mysql_get_client_info’ _mysql.c:1603: warning: passing argument 1 of ‘PyString_FromString’ makes pointer from integer without a cast _mysql.c: In function ‘_mysql_ConnectionObject_get_host_info’: _mysql.c:1617: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1618: warning: implicit declaration of function ‘mysql_get_host_info’ _mysql.c:1618: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:1618: warning: passing argument 1 of ‘PyString_FromString’ makes pointer from integer without a cast _mysql.c: In function ‘_mysql_ConnectionObject_get_proto_info’: _mysql.c:1632: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1633: warning: implicit declaration of function ‘mysql_get_proto_info’ _mysql.c:1633: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_ConnectionObject_get_server_info’: _mysql.c:1647: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1648: warning: implicit declaration of function ‘mysql_get_server_info’ _mysql.c:1648: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:1648: warning: passing argument 1 of ‘PyString_FromString’ makes pointer from integer without a cast _mysql.c: In function ‘_mysql_ConnectionObject_info’: _mysql.c:1664: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1665: warning: implicit declaration of function ‘mysql_info’ _mysql.c:1665: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:1665: warning: assignment makes pointer from integer without a cast _mysql.c: In function ‘_mysql_ConnectionObject_insert_id’: _mysql.c:1697: error: ‘my_ulonglong’ undeclared (first use in this function) _mysql.c:1697: error: expected ‘;’ before ‘r’ _mysql.c:1699: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1701: error: ‘r’ undeclared (first use in this function) _mysql.c:1701: warning: implicit declaration of function ‘mysql_insert_id’ _mysql.c:1701: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_ConnectionObject_kill’: _mysql.c:1718: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1720: warning: implicit declaration of function ‘mysql_kill’ _mysql.c:1720: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_ConnectionObject_field_count’: _mysql.c:1739: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1741: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_ResultObject_num_fields’: _mysql.c:1756: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1757: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c: In function ‘_mysql_ResultObject_num_rows’: _mysql.c:1772: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1773: error: ‘_mysql_ResultObject’ has no member named ‘result’ 
_mysql.c: In function ‘_mysql_ConnectionObject_ping’: _mysql.c:1802: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1803: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:1805: warning: implicit declaration of function ‘mysql_ping’ _mysql.c:1805: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_ConnectionObject_query’: _mysql.c:1826: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1828: warning: implicit declaration of function ‘mysql_real_query’ _mysql.c:1828: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_ConnectionObject_select_db’: _mysql.c:1856: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1858: warning: implicit declaration of function ‘mysql_select_db’ _mysql.c:1858: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_ConnectionObject_shutdown’: _mysql.c:1877: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1879: warning: implicit declaration of function ‘mysql_shutdown’ _mysql.c:1879: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_ConnectionObject_stat’: _mysql.c:1904: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1906: warning: implicit declaration of function ‘mysql_stat’ _mysql.c:1906: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:1906: warning: assignment makes pointer from integer without a cast _mysql.c: In function ‘_mysql_ConnectionObject_store_result’: _mysql.c:1927: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1928: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c:1937: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c: In function ‘_mysql_ConnectionObject_thread_id’: _mysql.c:1966: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1968: warning: implicit declaration of function ‘mysql_thread_id’ _mysql.c:1968: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_ConnectionObject_use_result’: _mysql.c:1988: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1989: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c:1998: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c: In function ‘_mysql_ConnectionObject_dealloc’: _mysql.c:2016: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c: In function ‘_mysql_ConnectionObject_repr’: _mysql.c:2028: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:2029: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_ResultObject_data_seek’: _mysql.c:2047: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:2048: warning: implicit declaration of function ‘mysql_data_seek’ _mysql.c:2048: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c: In function ‘_mysql_ResultObject_row_seek’: _mysql.c:2061: error: ‘MYSQL_ROW_OFFSET’ undeclared (first use in this function) _mysql.c:2061: error: expected ‘;’ before ‘r’ _mysql.c:2063: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:2064: error: ‘_mysql_ResultObject’ has no member named ‘use’ _mysql.c:2069: error: ‘r’ undeclared (first use in this function) _mysql.c:2069: warning: implicit declaration of function ‘mysql_row_tell’ _mysql.c:2069: 
error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:2070: warning: implicit declaration of function ‘mysql_row_seek’ _mysql.c:2070: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c: In function ‘_mysql_ResultObject_row_tell’: _mysql.c:2082: error: ‘MYSQL_ROW_OFFSET’ undeclared (first use in this function) _mysql.c:2082: error: expected ‘;’ before ‘r’ _mysql.c:2084: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:2085: error: ‘_mysql_ResultObject’ has no member named ‘use’ _mysql.c:2090: error: ‘r’ undeclared (first use in this function) _mysql.c:2090: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:2091: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c: In function ‘_mysql_ResultObject_dealloc’: _mysql.c:2099: warning: implicit declaration of function ‘mysql_free_result’ _mysql.c:2099: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c: At top level: _mysql.c:2330: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:2337: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c:2344: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:2351: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:2358: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:2421: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:2421: error: initializer element is not constant _mysql.c:2421: error: (near initialization for ‘_mysql_ResultObject_memberlist[0].offset’) _mysql.c: In function ‘_mysql_ConnectionObject_getattr’: _mysql.c:2443: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:36:23: error: my_config.h: No such file or directory _mysql.c:38:19: error: mysql.h: No such file or directory _mysql.c:39:26: error: mysqld_error.h: No such file or directory _mysql.c:40:20: error: errmsg.h: No such file or directory _mysql.c:76: error: expected specifier-qualifier-list before ‘MYSQL’ _mysql.c:90: error: expected specifier-qualifier-list before ‘MYSQL_RES’ _mysql.c: In function ‘_mysql_Exception’: _mysql.c:120: warning: implicit declaration of function ‘mysql_errno’ _mysql.c:120: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:123: error: ‘CR_MAX_ERROR’ undeclared (first use in this function) _mysql.c:123: error: (Each undeclared identifier is reported only once _mysql.c:123: error: for each function it appears in.) 
_mysql.c:131: error: ‘CR_COMMANDS_OUT_OF_SYNC’ undeclared (first use in this function) _mysql.c:132: error: ‘ER_DB_CREATE_EXISTS’ undeclared (first use in this function) _mysql.c:133: error: ‘ER_SYNTAX_ERROR’ undeclared (first use in this function) _mysql.c:134: error: ‘ER_PARSE_ERROR’ undeclared (first use in this function) _mysql.c:135: error: ‘ER_NO_SUCH_TABLE’ undeclared (first use in this function) _mysql.c:136: error: ‘ER_WRONG_DB_NAME’ undeclared (first use in this function) _mysql.c:137: error: ‘ER_WRONG_TABLE_NAME’ undeclared (first use in this function) _mysql.c:138: error: ‘ER_FIELD_SPECIFIED_TWICE’ undeclared (first use in this function) _mysql.c:139: error: ‘ER_INVALID_GROUP_FUNC_USE’ undeclared (first use in this function) _mysql.c:140: error: ‘ER_UNSUPPORTED_EXTENSION’ undeclared (first use in this function) _mysql.c:141: error: ‘ER_TABLE_MUST_HAVE_COLUMNS’ undeclared (first use in this function) _mysql.c:170: error: ‘ER_DUP_ENTRY’ undeclared (first use in this function) _mysql.c:213: warning: implicit declaration of function ‘mysql_error’ _mysql.c:213: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:213: warning: passing argument 1 of ‘PyString_FromString’ makes pointer from integer without a cast _mysql.c: In function ‘_mysql_server_init’: _mysql.c:308: warning: label ‘finish’ defined but not used _mysql.c:234: warning: unused variable ‘item’ _mysql.c:233: warning: unused variable ‘groupc’ _mysql.c:233: warning: unused variable ‘i’ _mysql.c:233: warning: unused variable ‘cmd_argc’ _mysql.c:232: warning: unused variable ‘s’ _mysql.c: In function ‘_mysql_ResultObject_Initialize’: _mysql.c:363: error: ‘MYSQL_RES’ undeclared (first use in this function) _mysql.c:363: error: ‘result’ undeclared (first use in this function) _mysql.c:368: error: ‘MYSQL_FIELD’ undeclared (first use in this function) _mysql.c:368: error: ‘fields’ undeclared (first use in this function) _mysql.c:377: error: ‘_mysql_ResultObject’ has no member named ‘use’ _mysql.c:380: warning: implicit declaration of function ‘mysql_use_result’ _mysql.c:380: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:382: warning: implicit declaration of function ‘mysql_store_result’ _mysql.c:382: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:383: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:386: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:389: warning: implicit declaration of function ‘mysql_num_fields’ _mysql.c:390: error: ‘_mysql_ResultObject’ has no member named ‘nfields’ _mysql.c:391: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:392: warning: implicit declaration of function ‘mysql_fetch_fields’ _mysql.c:438: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c: In function ‘_mysql_ResultObject_traverse’: _mysql.c:450: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:451: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c: In function ‘_mysql_ResultObject_clear’: _mysql.c:462: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:462: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:462: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:462: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:463: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c: In function ‘_mysql_ConnectionObject_Initialize’: _mysql.c:475: error: 
‘MYSQL’ undeclared (first use in this function) _mysql.c:475: error: ‘conn’ undeclared (first use in this function) _mysql.c:500: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c:501: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:525: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c:547: warning: implicit declaration of function ‘mysql_init’ _mysql.c:547: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:550: warning: implicit declaration of function ‘mysql_options’ _mysql.c:550: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:550: error: ‘MYSQL_OPT_CONNECT_TIMEOUT’ undeclared (first use in this function) _mysql.c:554: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:554: error: ‘MYSQL_OPT_COMPRESS’ undeclared (first use in this function) _mysql.c:555: error: ‘CLIENT_COMPRESS’ undeclared (first use in this function) _mysql.c:558: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:558: error: ‘MYSQL_OPT_NAMED_PIPE’ undeclared (first use in this function) _mysql.c:560: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:560: error: ‘MYSQL_INIT_COMMAND’ undeclared (first use in this function) _mysql.c:562: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:562: error: ‘MYSQL_READ_DEFAULT_FILE’ undeclared (first use in this function) _mysql.c:564: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:564: error: ‘MYSQL_READ_DEFAULT_GROUP’ undeclared (first use in this function) _mysql.c:567: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:567: error: ‘MYSQL_OPT_LOCAL_INFILE’ undeclared (first use in this function) _mysql.c:575: warning: implicit declaration of function ‘mysql_real_connect’ _mysql.c:575: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:590: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c: In function ‘_mysql_ConnectionObject_traverse’: _mysql.c:671: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c:


  • WD MBWE II (White Strip Light) 2TB - unable to access data

    - by user210477
    I have a WD MBWE II (White Strip Light) 2TB - (WD20000H2NC-00) Was working fine until a few days ago. I guess there was a power failure and after that I am unable to access the 'Public' or the 'Download' folder anymore. I have been searching for answers everywhere but came up empty handed. Web GUI still works, SSH works. I hooked up both the drives on my PC and UFS Explorer sees the drive. But so far I am unable to retrieve any of my data. I do not remember what RAID setting I used when I first got the drive. I can see from GUI that it is set as "Stripe". The drive contains 10 years of family pictures which I really do not want to loose. Sadly and stupidly, I didn't even keep a backup of this drive. Can somebody please help or point me in the right direction. Thank you in advance for your help. Disk Utility on Ubuntu reports 1405 bad sectors on one drive. How can I retrieve my data? Please help. Logs below: ~ # mdadm --detail /dev/md[012345678] /dev/md0: Version : 0.90 Creation Time : Wed Jul 15 08:36:17 2009 Raid Level : raid1 Array Size : 1959872 (1914.26 MiB 2006.91 MB) Used Dev Size : 1959872 (1914.26 MiB 2006.91 MB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Fri Nov 1 13:53:29 2013 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 UUID : 04f7a661:98983b3b:26b29e4f:9b646adb Events : 0.266 Number Major Minor RaidDevice State 0 8 1 0 active sync /dev/sda1 1 8 17 1 active sync /dev/sdb1 /dev/md1: Version : 0.90 Creation Time : Wed Jul 15 08:36:18 2009 Raid Level : raid1 Array Size : 256896 (250.92 MiB 263.06 MB) Used Dev Size : 256896 (250.92 MiB 263.06 MB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 1 Persistence : Superblock is persistent Update Time : Wed Oct 30 22:08:21 2013 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 UUID : aaa7b859:c475312d:efc5a766:6526b867 Events : 0.10 Number Major Minor RaidDevice State 0 8 2 0 active sync /dev/sda2 1 8 18 1 active sync /dev/sdb2 /dev/md2: Version : 0.90 Creation Time : Sat Sep 25 10:01:26 2010 Raid Level : raid0 Array Size : 1947045760 (1856.85 GiB 1993.77 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 2 Persistence : Superblock is persistent Update Time : Fri Nov 1 13:30:53 2013 State : active Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 Chunk Size : 64K UUID : 01dae60a:6831077b:77f74530:8680c183 Events : 0.97 Number Major Minor RaidDevice State 0 8 4 0 active sync /dev/sda4 1 8 20 1 active sync /dev/sdb4 /dev/md3: Version : 0.90 Creation Time : Wed Jul 15 08:36:18 2009 Raid Level : raid1 Array Size : 987904 (964.91 MiB 1011.61 MB) Used Dev Size : 987904 (964.91 MiB 1011.61 MB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 3 Persistence : Superblock is persistent Update Time : Fri Nov 1 13:26:33 2013 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 UUID : 3f4099f2:72e6171b:5ba962fd:48464a62 Events : 0.54 Number Major Minor RaidDevice State 0 8 3 0 active sync /dev/sda3 1 8 19 1 active sync /dev/sdb3 mdadm: md device /dev/md4 does not appear to be active. mdadm: md device /dev/md5 does not appear to be active. mdadm: md device /dev/md6 does not appear to be active. mdadm: md device /dev/md7 does not appear to be active. mdadm: md device /dev/md8 does not appear to be active. 
~ # cat /etc/mtab securityfs /sys/kernel/security securityfs rw 0 0 /dev/md2 /DataVolume xfs rw,usrquota 0 0 /dev/md4 /ExtendVolume xfs rw,usrquota 0 0 ~ # df -k Filesystem 1k-blocks Used Available Use% Mounted on /dev/md0 1929044 145092 1685960 8% / /dev/md3 972344 123452 799500 13% /var /dev/ram0 63412 20 63392 0% /mnt/ram ~ # mdadm -D /dev/md2 /dev/md2: Version : 0.90 Creation Time : Sat Sep 25 10:01:26 2010 Raid Level : raid0 Array Size : 1947045760 (1856.85 GiB 1993.77 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 2 Persistence : Superblock is persistent Update Time : Fri Nov 1 13:30:53 2013 State : active Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 Chunk Size : 64K UUID : 01dae60a:6831077b:77f74530:8680c183 Events : 0.97 Number Major Minor RaidDevice State 0 8 4 0 active sync /dev/sda4 1 8 20 1 active sync /dev/sdb4 ~ # mdadm -D /dev/md4 mdadm: md device /dev/md4 does not appear to be active. ~ # mount /dev/root on / type ext3 (rw,noatime,data=ordered) proc on /proc type proc (rw) sys on /sys type sysfs (rw) /dev/pts on /dev/pts type devpts (rw) securityfs on /sys/kernel/security type securityfs (rw) /dev/md3 on /var type ext3 (rw,noatime,data=ordered) /dev/ram0 on /mnt/ram type tmpfs (rw) ~ # cat /var/log/messages Oct 29 18:04:50 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down. Oct 29 18:04:59 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 29 18:04:59 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 29 18:17:45 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down. Oct 29 18:17:53 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 29 18:17:53 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 00:50:11 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down. Oct 30 00:50:19 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 00:50:19 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 16:29:47 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down. Oct 30 16:30:00 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 16:30:00 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 18:27:22 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down. Oct 30 18:27:30 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 18:27:30 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 19:06:03 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down. Oct 30 19:06:10 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 19:06:10 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 19:14:58 shmotashNAS daemon.warn wixEvent[3462]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed. 
Oct 30 19:20:05 shmotashNAS daemon.alert wixEvent[3462]: Thermal Alarm - System temperature exceeded threshold.(66 degrees) Oct 30 19:58:29 shmotashNAS daemon.alert wixEvent[3462]: HDD SMART - HDD 1 SMART Health Status: Failed. Oct 30 22:05:39 shmotashNAS daemon.info init: Starting pid 13043, console /dev/null: '/usr/bin/killall' Oct 30 22:05:39 shmotashNAS syslog.info System log daemon exiting. Oct 30 22:08:09 shmotashNAS syslog.info syslogd started: BusyBox v1.1.1 Oct 30 22:08:09 shmotashNAS daemon.warn wixEvent[3557]: Network Link - NIC 1 link is down. Oct 30 22:08:19 shmotashNAS daemon.info wixEvent[3557]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 22:08:25 shmotashNAS daemon.warn wixEvent[3557]: Network Link - NIC 1 link is down. Oct 30 22:08:37 shmotashNAS daemon.info wixEvent[3557]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 22:08:44 shmotashNAS daemon.warn wixEvent[3557]: Network Link - NIC 1 link is down. Oct 30 22:08:46 shmotashNAS syslog.info miocrawler: +++++++++++++++ START OF ./miocrawler at 2013:10:30 - 22:08:46 [Version 01.09.00.96] ++++++++++++++ Oct 30 22:08:46 shmotashNAS syslog.info miocrawler: mc_db_init ... Oct 30 22:08:46 shmotashNAS syslog.info miocrawler: ****** database does not exist. ret = -1, creating path Oct 30 22:08:49 shmotashNAS syslog.info miocrawler: === mc_db_init ...Done. Oct 30 22:08:50 shmotashNAS syslog.info miocrawler: mcUtilsInit() Creating free queue pool Oct 30 22:08:51 shmotashNAS syslog.info miocrawler: === mcUtilsInit() Done. Oct 30 22:08:51 shmotashNAS syslog.info miocrawler: === inotify init done. Oct 30 22:08:51 shmotashNAS syslog.info miocrawler: mc_trans_updater_init() ... Oct 30 22:08:52 shmotashNAS syslog.info miocrawler: === mc_trans_updater_init() ...Done. Oct 30 22:08:52 shmotashNAS syslog.info miocrawler: === Walking directory done. Oct 30 22:08:57 shmotashNAS daemon.info wixEvent[3557]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 22:08:57 shmotashNAS daemon.info wixEvent[3557]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 22:08:57 shmotashNAS daemon.info wixEvent[3557]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 22:08:57 shmotashNAS daemon.info wixEvent[3557]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 22:09:10 shmotashNAS daemon.info init: Starting pid 4605, console /dev/null: '/bin/touch' Oct 30 22:09:10 shmotashNAS daemon.info init: Starting pid 4607, console /dev/ttyS0: '/sbin/getty' Oct 30 22:09:10 shmotashNAS daemon.info wixEvent[3557]: System Startup - System startup. Oct 30 22:09:16 shmotashNAS daemon.warn wixEvent[3557]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed. Oct 30 22:14:14 shmotashNAS daemon.warn wixEvent[3557]: Network Link - NIC 1 link is down. Oct 30 22:14:21 shmotashNAS daemon.info wixEvent[3557]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 22:14:21 shmotashNAS daemon.info wixEvent[3557]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 22:29:36 shmotashNAS daemon.warn wixEvent[3557]: System Reboot - System will reboot. Oct 30 22:29:40 shmotashNAS daemon.info init: Starting pid 5974, console /dev/null: '/usr/bin/killall' Oct 30 22:29:40 shmotashNAS syslog.info System log daemon exiting. 
Oct 30 22:47:56 shmotashNAS syslog.info syslogd started: BusyBox v1.1.1 Oct 30 22:47:56 shmotashNAS daemon.warn wixEvent[3461]: Network Link - NIC 1 link is down. Oct 30 22:48:02 shmotashNAS daemon.info wixEvent[3461]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 22:48:02 shmotashNAS daemon.info wixEvent[3461]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 22:48:09 shmotashNAS syslog.info miocrawler: +++++++++++++++ START OF ./miocrawler at 2013:10:30 - 22:48:09 [Version 01.09.00.96] ++++++++++++++ Oct 30 22:48:09 shmotashNAS syslog.info miocrawler: mc_db_init ... Oct 30 22:48:09 shmotashNAS syslog.info miocrawler: ++++++++ database exists: ret = 0 Oct 30 22:48:10 shmotashNAS syslog.info miocrawler: === mc_db_init ...Done. Oct 30 22:48:10 shmotashNAS syslog.info miocrawler: mcUtilsInit() Creating free queue pool Oct 30 22:48:11 shmotashNAS syslog.info miocrawler: === mcUtilsInit() Done. Oct 30 22:48:11 shmotashNAS syslog.info miocrawler: === inotify init done. Oct 30 22:48:11 shmotashNAS syslog.info miocrawler: mc_trans_updater_init() ... Oct 30 22:48:11 shmotashNAS syslog.info miocrawler: === mc_trans_updater_init() ...Done. Oct 30 22:48:11 shmotashNAS syslog.info miocrawler: === Walking directory done. Oct 30 22:48:27 shmotashNAS daemon.info init: Starting pid 4079, console /dev/null: '/bin/touch' Oct 30 22:48:27 shmotashNAS daemon.info init: Starting pid 4080, console /dev/ttyS0: '/sbin/getty' Oct 30 22:48:28 shmotashNAS daemon.info wixEvent[3461]: System Startup - System startup. Oct 30 22:49:01 shmotashNAS daemon.warn wixEvent[3461]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed. Oct 30 23:51:11 shmotashNAS daemon.warn wixEvent[3461]: System Reboot - System will reboot. Oct 30 23:51:16 shmotashNAS daemon.info init: Starting pid 6498, console /dev/null: '/usr/bin/killall' Oct 30 23:51:16 shmotashNAS syslog.info System log daemon exiting. Oct 30 23:54:19 shmotashNAS syslog.info syslogd started: BusyBox v1.1.1 Oct 30 23:55:37 shmotashNAS daemon.info wixEvent[3476]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 23:55:37 shmotashNAS daemon.info wixEvent[3476]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 23:55:44 shmotashNAS syslog.info miocrawler: +++++++++++++++ START OF ./miocrawler at 2013:10:30 - 23:55:44 [Version 01.09.00.96] ++++++++++++++ Oct 30 23:55:44 shmotashNAS syslog.info miocrawler: mc_db_init ... Oct 30 23:55:44 shmotashNAS syslog.info miocrawler: ++++++++ database exists: ret = 0 Oct 30 23:55:45 shmotashNAS syslog.info miocrawler: === mc_db_init ...Done. Oct 30 23:55:45 shmotashNAS syslog.info miocrawler: mcUtilsInit() Creating free queue pool Oct 30 23:55:46 shmotashNAS syslog.info miocrawler: === mcUtilsInit() Done. Oct 30 23:55:46 shmotashNAS syslog.info miocrawler: === inotify init done. Oct 30 23:55:46 shmotashNAS syslog.info miocrawler: mc_trans_updater_init() ... Oct 30 23:55:46 shmotashNAS syslog.info miocrawler: === mc_trans_updater_init() ...Done. Oct 30 23:55:46 shmotashNAS syslog.info miocrawler: === Walking directory done. 
Oct 30 23:55:58 shmotashNAS daemon.info init: Starting pid 4115, console /dev/null: '/bin/touch' Oct 30 23:55:58 shmotashNAS daemon.info init: Starting pid 4116, console /dev/ttyS0: '/sbin/getty' Oct 30 23:55:58 shmotashNAS daemon.info wixEvent[3476]: System Startup - System startup. Oct 30 23:56:33 shmotashNAS daemon.warn wixEvent[3476]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed. Oct 31 00:29:14 shmotashNAS auth.info sshd[5409]: Server listening on 0.0.0.0 port 22. Oct 31 00:31:25 shmotashNAS auth.info sshd[5486]: Accepted password for root from 192.168.1.100 port 50785 ssh2 Oct 31 00:33:44 shmotashNAS auth.info sshd[5565]: Accepted password for root from 192.168.1.100 port 50817 ssh2 Oct 31 00:36:39 shmotashNAS daemon.info init: Starting pid 5680, console /dev/null: '/usr/bin/killall' Oct 31 00:36:39 shmotashNAS syslog.info System log daemon exiting. Oct 31 00:40:44 shmotashNAS syslog.info syslogd started: BusyBox v1.1.1 Oct 31 00:40:51 shmotashNAS daemon.info wixEvent[3464]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 31 00:40:51 shmotashNAS daemon.info wixEvent[3464]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 31 00:41:00 shmotashNAS syslog.info miocrawler: +++++++++++++++ START OF ./miocrawler at 2013:10:31 - 00:41:00 [Version 01.09.00.96] ++++++++++++++ Oct 31 00:41:00 shmotashNAS syslog.info miocrawler: mc_db_init ... Oct 31 00:41:00 shmotashNAS syslog.info miocrawler: ++++++++ database exists: ret = 0 Oct 31 00:41:00 shmotashNAS syslog.info miocrawler: === mc_db_init ...Done. Oct 31 00:41:01 shmotashNAS syslog.info miocrawler: mcUtilsInit() Creating free queue pool Oct 31 00:41:02 shmotashNAS syslog.info miocrawler: === mcUtilsInit() Done. Oct 31 00:41:02 shmotashNAS syslog.info miocrawler: === inotify init done. Oct 31 00:41:02 shmotashNAS syslog.info miocrawler: mc_trans_updater_init() ... Oct 31 00:41:02 shmotashNAS syslog.info miocrawler: === mc_trans_updater_init() ...Done. Oct 31 00:41:02 shmotashNAS syslog.info miocrawler: === Walking directory done. Oct 31 00:41:14 shmotashNAS daemon.info init: Starting pid 4101, console /dev/null: '/bin/touch' Oct 31 00:41:14 shmotashNAS daemon.info init: Starting pid 4102, console /dev/ttyS0: '/sbin/getty' Oct 31 00:41:15 shmotashNAS daemon.info wixEvent[3464]: System Startup - System startup. Oct 31 00:41:47 shmotashNAS daemon.warn wixEvent[3464]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed. Oct 31 01:13:19 shmotashNAS daemon.info init: Starting pid 5385, console /dev/null: '/usr/bin/killall' Oct 31 01:13:19 shmotashNAS syslog.info System log daemon exiting. Nov 1 13:26:25 shmotashNAS syslog.info syslogd started: BusyBox v1.1.1 Nov 1 13:26:32 shmotashNAS daemon.info wixEvent[3471]: Network Link - NIC 1 link is up 100 Mbps full duplex. Nov 1 13:26:32 shmotashNAS daemon.info wixEvent[3471]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Nov 1 13:26:38 shmotashNAS syslog.info miocrawler: +++++++++++++++ START OF ./miocrawler at 2013:11:01 - 13:26:38 [Version 01.09.00.96] ++++++++++++++ Nov 1 13:26:38 shmotashNAS syslog.info miocrawler: mc_db_init ... 
Nov 1 13:26:38 shmotashNAS syslog.info miocrawler: ++++++++ database exists: ret = 0 Nov 1 13:26:39 shmotashNAS syslog.info miocrawler: === mc_db_init ...Done. Nov 1 13:26:39 shmotashNAS syslog.info miocrawler: mcUtilsInit() Creating free queue pool Nov 1 13:26:40 shmotashNAS syslog.info miocrawler: === mcUtilsInit() Done. Nov 1 13:26:40 shmotashNAS syslog.info miocrawler: === inotify init done. Nov 1 13:26:40 shmotashNAS syslog.info miocrawler: mc_trans_updater_init() ... Nov 1 13:26:40 shmotashNAS syslog.info miocrawler: === mc_trans_updater_init() ...Done. Nov 1 13:26:40 shmotashNAS syslog.info miocrawler: === Walking directory done. Nov 1 13:26:52 shmotashNAS daemon.info init: Starting pid 4078, console /dev/null: '/bin/touch' Nov 1 13:26:52 shmotashNAS daemon.info init: Starting pid 4079, console /dev/ttyS0: '/sbin/getty' Nov 1 13:26:52 shmotashNAS daemon.info wixEvent[3471]: System Startup - System startup. Nov 1 13:27:28 shmotashNAS daemon.warn wixEvent[3471]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed. Nov 1 13:44:48 shmotashNAS auth.info sshd[5375]: Accepted password for root from 192.168.1.103 port 50217 ssh2 Nov 1 13:51:08 shmotashNAS auth.info sshd[5894]: Accepted password for root from 192.168.1.103 port 50380 ssh2

    Read the article

  • Using JSON.NET for dynamic JSON parsing

    - by Rick Strahl
    With the release of ASP.NET Web API as part of .NET 4.5 and MVC 4.0, JSON.NET has effectively pushed out the .NET native serializers to become the default serializer for Web API. JSON.NET is vastly more flexible than the built-in DataContractJsonSerializer or the older JavaScript serializer. The DataContractSerializer in particular has been very problematic in the past because it can't deal with untyped objects for serialization - like values of type object, or anonymous types which are quite common these days. The JavaScript Serializer that came before it actually does support non-typed objects for serialization but it can't do anything with untyped data coming in from JavaScript and its overall model of extensibility was pretty limited (JavaScript Serializer is what MVC uses for JSON responses). JSON.NET provides a robust JSON serializer that has both high level and low level components, supports binary JSON, JSON contracts, Xml to JSON conversion, LINQ to JSON and many, many more features than either of the built-in serializers. ASP.NET Web API now uses JSON.NET as its default serializer, and JSON.NET is now pulled in as a NuGet dependency into Web API projects, which is great. Dynamic JSON Parsing One of the features that I think is getting ever more important is the ability to serialize and deserialize arbitrary JSON content dynamically - that is, without mapping the captured JSON directly into a .NET type as DataContractSerializer or the JavaScript Serializers do. Sometimes it isn't possible to map types due to the differences in languages (think collections, dictionaries etc), and other times you simply don't have the structures in place or don't want to create them just to import the data. If this topic sounds familiar - you're right! I wrote about dynamic JSON parsing a few months back, before JSON.NET was added to Web API and when Web API and the System.Net HttpClient libraries included the System.Json classes like JsonObject and JsonArray. With the inclusion of JSON.NET in Web API these classes are now obsolete and didn't ship with Web API or the client libraries. I re-linked my original post to this one. In this post I'll discuss JToken, JObject and JArray, which are the dynamic JSON objects that make it very easy to create and retrieve JSON content on the fly without underlying types. Why Dynamic JSON? So, why dynamic JSON parsing rather than strongly typed parsing? Since applications are interacting more and more with third party services, it becomes ever more important to have easy access to those services with easy JSON parsing. Sometimes it just makes a lot of sense to pull only a small amount of data out of a large JSON document received from a service, because the third party service isn't directly related to your application's logic most of the time - and it makes little sense to map the entire service structure in your application. For example, recently I worked with the Google Maps Places API to return information about businesses close to my (or rather the app's) location. The Google API returns a ton of information that my application had no interest in - all I needed was a few values out of the data. Dynamic JSON parsing makes it possible to map this data without having to map the entire API to a C# data structure. Instead I could pull out the three or four values I needed from the API and store them directly on my business entities that needed to receive the data - no need to map the entire Maps API structure. 
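    To make that scenario concrete, here is a minimal sketch of that kind of selective extraction. The payload shape and field names are made up for illustration - the point is simply that JObject.Parse() plus dynamic lets you grab the two or three values you care about and ignore the rest of the document:

    using System;
    using Newtonsoft.Json.Linq;

    public class SelectiveParsingExample
    {
        public static void Main()
        {
            // Pretend this came back from a third party HTTP API - only a fraction of it matters here
            var json = @"{
                ""status"": ""OK"",
                ""results"": [
                    { ""name"": ""Poke Bowl Shack"",
                      ""rating"": 4.5,
                      ""geometry"": { ""location"": { ""lat"": 20.7, ""lng"": -156.4 } } }
                ]
            }";

            // Parse the whole document dynamically - no .NET types required
            dynamic doc = JObject.Parse(json);

            // Pull out only the handful of values the application actually needs
            string name = doc.results[0].name;
            double rating = doc.results[0].rating;

            Console.WriteLine(name + " (" + rating + ")");
        }
    }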
Getting JSON.NET The easiest way to use JSON.NET is to grab it via NuGet and add it as a reference to your project. You can add it to your project with: PM> Install-Package Newtonsoft.Json From the Package Manager Console or by using Manage NuGet Packages in your project References. As mentioned if you're using ASP.NET Web API or MVC 4 JSON.NET will be automatically added to your project. Alternately you can also go to the CodePlex site and download the latest version including source code: http://json.codeplex.com/ Creating JSON on the fly with JObject and JArray Let's start with creating some JSON on the fly. It's super easy to create a dynamic object structure with any of the JToken derived JSON.NET objects. The most common JToken derived classes you are likely to use are JObject and JArray. JToken implements IDynamicMetaProvider and so uses the dynamic  keyword extensively to make it intuitive to create object structures and turn them into JSON via dynamic object syntax. Here's an example of creating a music album structure with child songs using JObject for the base object and songs and JArray for the actual collection of songs:[TestMethod] public void JObjectOutputTest() { // strong typed instance var jsonObject = new JObject(); // you can explicitly add values here using class interface jsonObject.Add("Entered", DateTime.Now); // or cast to dynamic to dynamically add/read properties dynamic album = jsonObject; album.AlbumName = "Dirty Deeds Done Dirt Cheap"; album.Artist = "AC/DC"; album.YearReleased = 1976; album.Songs = new JArray() as dynamic; dynamic song = new JObject(); song.SongName = "Dirty Deeds Done Dirt Cheap"; song.SongLength = "4:11"; album.Songs.Add(song); song = new JObject(); song.SongName = "Love at First Feel"; song.SongLength = "3:10"; album.Songs.Add(song); Console.WriteLine(album.ToString()); } This produces a complete JSON structure: { "Entered": "2012-08-18T13:26:37.7137482-10:00", "AlbumName": "Dirty Deeds Done Dirt Cheap", "Artist": "AC/DC", "YearReleased": 1976, "Songs": [ { "SongName": "Dirty Deeds Done Dirt Cheap", "SongLength": "4:11" }, { "SongName": "Love at First Feel", "SongLength": "3:10" } ] } Notice that JSON.NET does a nice job formatting the JSON, so it's easy to read and paste into blog posts :-). JSON.NET includes a bunch of configuration options that control how JSON is generated. Typically the defaults are just fine, but you can override with the JsonSettings object for most operations. The important thing about this code is that there's no explicit type used for holding the values to serialize to JSON. Rather the JSON.NET objects are the containers that receive the data as I build up my JSON structure dynamically, simply by adding properties. This means this code can be entirely driven at runtime without compile time restraints of structure for the JSON output. Here I use JObject to create a album 'object' and immediately cast it to dynamic. JObject() is kind of similar in behavior to ExpandoObject in that it allows you to add properties by simply assigning to them. Internally, JObject values are stored in pseudo collections of key value pairs that are exposed as properties through the IDynamicMetaObject interface exposed in JSON.NET's JToken base class. For objects the syntax is very clean - you add simple typed values as properties. For objects and arrays you have to explicitly create new JObject or JArray, cast them to dynamic and then add properties and items to them. 
Always remember though that these values are dynamic - which means no Intellisense and no compiler type checking. It's up to you to ensure that the names and values you create are accessed consistently and without typos in your code. Note that you can also access the JObject instance directly (not as dynamic) and get access to the underlying JObject type. This means you can assign properties by string, which can be useful for fully data driven JSON generation from other structures. Below you can see both styles of access next to each other:// strong type instance var jsonObject = new JObject(); // you can explicitly add values here jsonObject.Add("Entered", DateTime.Now); // expando style instance you can just 'use' properties dynamic album = jsonObject; album.AlbumName = "Dirty Deeds Done Dirt Cheap"; JContainer (the base class for JObject and JArray) is a collection so you can also iterate over the properties at runtime easily:foreach (var item in jsonObject) { Console.WriteLine(item.Key + " " + item.Value.ToString()); } The functionality of the JSON objects is very similar to .NET's ExpandoObject and if you used it before, you're already familiar with how the dynamic interfaces to the JSON objects work. Importing JSON with JObject.Parse() and JArray.Parse() The JValue structure supports importing JSON via the Parse() and Load() methods which can read JSON data from a string or various streams respectively. Essentially JValue includes the core JSON parsing to turn a JSON string into a collection of JsonValue objects that can then be referenced using familiar dynamic object syntax. Here's a simple example:public void JValueParsingTest() { var jsonString = @"{""Name"":""Rick"",""Company"":""West Wind"", ""Entered"":""2012-03-16T00:03:33.245-10:00""}"; dynamic json = JValue.Parse(jsonString); // values require casting string name = json.Name; string company = json.Company; DateTime entered = json.Entered; Assert.AreEqual(name, "Rick"); Assert.AreEqual(company, "West Wind"); } The JSON string represents an object with three properties which is parsed into a JObject class and cast to dynamic. Once cast to dynamic I can then go ahead and access the object using familiar object syntax. Note that the actual values - json.Name, json.Company, json.Entered - are actually of type JToken and I have to cast them to their appropriate types first before I can do type comparisons as in the Asserts at the end of the test method. This is required because of the way that dynamic types work, which can't determine the type based on the method signature of the Assert.AreEqual(object,object) method. I have to either assign the dynamic value to a variable as I did above, or explicitly cast ( (string) json.Name) in the actual method call. The JSON structure can be much more complex than this simple example. 
Here's another example of an array of albums serialized to JSON and then parsed through with JsonValue():[TestMethod] public void JsonArrayParsingTest() { var jsonString = @"[ { ""Id"": ""b3ec4e5c"", ""AlbumName"": ""Dirty Deeds Done Dirt Cheap"", ""Artist"": ""AC/DC"", ""YearReleased"": 1976, ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/61kTaH-uZBL._AA115_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/gp/product/…ASIN=B00008BXJ4"", ""Songs"": [ { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Dirty Deeds Done Dirt Cheap"", ""SongLength"": ""4:11"" }, { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Love at First Feel"", ""SongLength"": ""3:10"" }, { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Big Balls"", ""SongLength"": ""2:38"" } ] }, { ""Id"": ""7b919432"", ""AlbumName"": ""End of the Silence"", ""Artist"": ""Henry Rollins Band"", ""YearReleased"": 1992, ""Entered"": ""2012-03-16T00:13:12.2800521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/51FO3rb1tuL._SL160_AA160_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/End-Silence-Rollins-Band/dp/B0000040OX/ref=sr_1_5?ie=UTF8&qid=1302232195&sr=8-5"", ""Songs"": [ { ""AlbumId"": ""7b919432"", ""SongName"": ""Low Self Opinion"", ""SongLength"": ""5:24"" }, { ""AlbumId"": ""7b919432"", ""SongName"": ""Grip"", ""SongLength"": ""4:51"" } ] } ]"; JArray jsonVal = JArray.Parse(jsonString) as JArray; dynamic albums = jsonVal; foreach (dynamic album in albums) { Console.WriteLine(album.AlbumName + " (" + album.YearReleased.ToString() + ")"); foreach (dynamic song in album.Songs) { Console.WriteLine("\t" + song.SongName); } } Console.WriteLine(albums[0].AlbumName); Console.WriteLine(albums[0].Songs[1].SongName); } JObject and JArray in ASP.NET Web API Of course these types also work in ASP.NET Web API controller methods. If you want you can accept parameters using these object or return them back to the server. The following contrived example receives dynamic JSON input, and then creates a new dynamic JSON object and returns it based on data from the first:[HttpPost] public JObject PostAlbumJObject(JObject jAlbum) { // dynamic input from inbound JSON dynamic album = jAlbum; // create a new JSON object to write out dynamic newAlbum = new JObject(); // Create properties on the new instance // with values from the first newAlbum.AlbumName = album.AlbumName + " New"; newAlbum.NewProperty = "something new"; newAlbum.Songs = new JArray(); foreach (dynamic song in album.Songs) { song.SongName = song.SongName + " New"; newAlbum.Songs.Add(song); } return newAlbum; } The raw POST request to the server looks something like this: POST http://localhost/aspnetwebapi/samples/PostAlbumJObject HTTP/1.1User-Agent: FiddlerContent-type: application/jsonHost: localhostContent-Length: 88 {AlbumName: "Dirty Deeds",Songs:[ { SongName: "Problem Child"},{ SongName: "Squealer"}]} and the output that comes back looks like this: {  "AlbumName": "Dirty Deeds New",  "NewProperty": "something new",  "Songs": [    {      "SongName": "Problem Child New"    },    {      "SongName": "Squealer New"    }  ]} The original values are echoed back with something extra appended to demonstrate that we're working with a new object. 
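    If you want to hit that endpoint from .NET code rather than from Fiddler, a minimal client sketch could look like the following. The host and route are assumptions taken from the contrived example above - adjust them to wherever the Web API sample actually runs:

    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;
    using Newtonsoft.Json.Linq;

    public class PostAlbumClient
    {
        public static void Main()
        {
            RunAsync().Wait();
        }

        private static async Task RunAsync()
        {
            // Build the inbound JSON dynamically - same shape as the raw POST shown above
            dynamic album = new JObject();
            album.AlbumName = "Dirty Deeds";
            album.Songs = new JArray(
                new JObject(new JProperty("SongName", "Problem Child")),
                new JObject(new JProperty("SongName", "Squealer")));

            using (var client = new HttpClient())
            {
                var content = new StringContent(album.ToString(), Encoding.UTF8, "application/json");

                // Assumed host/route from the example request
                var response = await client.PostAsync(
                    "http://localhost/aspnetwebapi/samples/PostAlbumJObject", content);
                var body = await response.Content.ReadAsStringAsync();

                // Parse the response right back into a dynamic JObject
                dynamic newAlbum = JObject.Parse(body);
                Console.WriteLine((string)newAlbum.AlbumName);   // "Dirty Deeds New"
            }
        }
    }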
When you receive or return a JObject, JValue, JToken or JArray instance in a Web API method, Web API ignores normal content negotiation and assumes your content is going to be received and returned as JSON, so effectively the parameter and result type explicitly determines the input and output format which is nice. Dynamic to Strong Type Mapping You can also map JObject and JArray instances to a strongly typed object, so you can mix dynamic and static typing in the same piece of code. Using the 2 Album jsonString shown earlier, the code below takes an array of albums and picks out only a single album and casts that album to a static Album instance.[TestMethod] public void JsonParseToStrongTypeTest() { JArray albums = JArray.Parse(jsonString) as JArray; // pick out one album JObject jalbum = albums[0] as JObject; // Copy to a static Album instance Album album = jalbum.ToObject<Album>(); Assert.IsNotNull(album); Assert.AreEqual(album.AlbumName,jalbum.Value<string>("AlbumName")); Assert.IsTrue(album.Songs.Count > 0); } This is pretty damn useful for the scenario I mentioned earlier - you can read a large chunk of JSON and dynamically walk the property hierarchy down to the item you want to access, and then either access the specific item dynamically (as shown earlier) or map a part of the JSON to a strongly typed object. That's very powerful if you think about it - it leaves you in total control to decide what's dynamic and what's static. Strongly typed JSON Parsing With all this talk of dynamic let's not forget that JSON.NET of course also does strongly typed serialization which is drop dead easy. Here's a simple example on how to serialize and deserialize an object with JSON.NET:[TestMethod] public void StronglyTypedSerializationTest() { // Demonstrate deserialization from a raw string var album = new Album() { AlbumName = "Dirty Deeds Done Dirt Cheap", Artist = "AC/DC", Entered = DateTime.Now, YearReleased = 1976, Songs = new List<Song>() { new Song() { SongName = "Dirty Deeds Done Dirt Cheap", SongLength = "4:11" }, new Song() { SongName = "Love at First Feel", SongLength = "3:10" } } }; // serialize to string string json2 = JsonConvert.SerializeObject(album,Formatting.Indented); Console.WriteLine(json2); // make sure we can serialize back var album2 = JsonConvert.DeserializeObject<Album>(json2); Assert.IsNotNull(album2); Assert.IsTrue(album2.AlbumName == "Dirty Deeds Done Dirt Cheap"); Assert.IsTrue(album2.Songs.Count == 2); } JsonConvert is a high level static class that wraps lower level functionality, but you can also use the JsonSerializer class, which allows you to serialize/parse to and from streams. It's a little more work, but gives you a bit more control. The functionality available is easy to discover with Intellisense, and that's good because there's not a lot in the way of documentation that's actually useful. Summary JSON.NET is a pretty complete JSON implementation with lots of different choices for JSON parsing from dynamic parsing to static serialization, to complex querying of JSON objects using LINQ. It's good to see this open source library getting integrated into .NET, and pushing out the old and tired stock .NET parsers so that we finally have a bit more flexibility - and extensibility - in our JSON parsing. Good to go! 
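    As one final aside before the resource links, here is a minimal sketch of the stream-based JsonSerializer approach mentioned above. It assumes the same Album type used in the earlier examples and simply streams to and from a file on disk:

    using System.IO;
    using Newtonsoft.Json;

    public static class AlbumFileStore
    {
        public static void Save(Album album, string path)
        {
            var serializer = new JsonSerializer { Formatting = Formatting.Indented };

            // JsonSerializer works against TextWriter/TextReader (or JsonWriter/JsonReader),
            // so it can stream straight to disk without building an intermediate string
            using (var writer = new StreamWriter(path))
            {
                serializer.Serialize(writer, album);
            }
        }

        public static Album Load(string path)
        {
            var serializer = new JsonSerializer();

            using (var reader = new StreamReader(path))
            using (var jsonReader = new JsonTextReader(reader))
            {
                return serializer.Deserialize<Album>(jsonReader);
            }
        }
    }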
Resources Sample Test Project http://json.codeplex.com/ © Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET  Web Api  AJAX

    Read the article

  • error about ACPI _OSC request failed (AE_NOT_FOUND)

    - by Yavuz Maslak
    I have ubuntu server 11.10 64 bit I see an error in kernel.log. This error comes out when the server reboot. some port of grep APCI in kernel.log; Dec 5 09:08:51 www kernel: [ 0.588605] pci0000:00: Requesting ACPI _OSC control (0x1d) Dec 5 09:08:51 www kernel: [ 0.588667] pci0000:00: ACPI _OSC request failed (AE_NOT_FOUND), returned control mask: 0x1d Dec 5 09:08:51 www kernel: [ 0.588746] ACPI _OSC control for PCIe not granted, disabling ASPM Which hardware may be cause this error ? root@www:# grep -r ACPI /var/log/kern.log Dec 5 09:08:51 www kernel: [ 0.000000] BIOS-e820: 00000000bf780000 - 00000000bf798000 (ACPI data) Dec 5 09:08:51 www kernel: [ 0.000000] BIOS-e820: 00000000bf798000 - 00000000bf7dc000 (ACPI NVS) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: RSDP 00000000000fb1a0 00014 (v00 ACPIAM) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: RSDT 00000000bf780000 00040 (v01 022410 RSDT1405 20100224 MSFT 00000097) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: FACP 00000000bf780200 00084 (v01 022410 FACP1405 20100224 MSFT 00000097) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: DSDT 00000000bf7804b0 0C359 (v01 A1279 A1279001 00000001 INTL 20060113) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: FACS 00000000bf798000 00040 Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: APIC 00000000bf780390 000D8 (v01 022410 APIC1405 20100224 MSFT 00000097) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: MCFG 00000000bf780470 0003C (v01 022410 OEMMCFG 20100224 MSFT 00000097) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: OEMB 00000000bf798040 00072 (v01 022410 OEMB1405 20100224 MSFT 00000097) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: HPET 00000000bf78f4b0 00038 (v01 022410 OEMHPET 20100224 MSFT 00000097) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: OSFR 00000000bf78f4f0 000B0 (v01 022410 OEMOSFR 20100224 MSFT 00000097) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: SSDT 00000000bf798fe0 00363 (v01 DpgPmm CpuPm 00000012 INTL 20060113) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: Local APIC address 0xfee00000 Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: PM-Timer IO Port: 0x808 Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: Local APIC address 0xfee00000 Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x84] disabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x85] disabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x07] lapic_id[0x86] disabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x08] lapic_id[0x87] disabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x09] lapic_id[0x88] disabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x0a] lapic_id[0x89] disabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x0b] lapic_id[0x8a] disabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x0c] lapic_id[0x8b] disabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x0d] lapic_id[0x8c] disabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x0e] lapic_id[0x8d] disabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x0f] lapic_id[0x8e] disabled) Dec 5 09:08:51 www kernel: [ 
0.000000] ACPI: LAPIC (acpi_id[0x10] lapic_id[0x8f] disabled) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: IOAPIC (id[0x01] address[0xfec00000] gsi_base[0]) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: IOAPIC (id[0x03] address[0xfec8a000] gsi_base[24]) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: IRQ0 used by override. Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: IRQ2 used by override. Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: IRQ9 used by override. Dec 5 09:08:51 www kernel: [ 0.000000] Using ACPI (MADT) for SMP configuration information Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: HPET id: 0x8086a301 base: 0xfed00000 Dec 5 09:08:51 www kernel: [ 0.009507] ACPI: Core revision 20110413 Dec 5 09:08:51 www kernel: [ 0.499129] PM: Registering ACPI NVS region at bf798000 (278528 bytes) Dec 5 09:08:51 www kernel: [ 0.500749] ACPI: bus type pci registered Dec 5 09:08:51 www kernel: [ 0.502747] ACPI: EC: Look up EC in DSDT Dec 5 09:08:51 www kernel: [ 0.503788] ACPI: Executed 1 blocks of module-level executable AML code Dec 5 09:08:51 www kernel: [ 0.520435] ACPI: SSDT 00000000bf7980c0 00F20 (v01 DpgPmm P001Ist 00000011 INTL 20060113) Dec 5 09:08:51 www kernel: [ 0.520863] ACPI: Dynamic OEM Table Load: Dec 5 09:08:51 www kernel: [ 0.520990] ACPI: SSDT (null) 00F20 (v01 DpgPmm P001Ist 00000011 INTL 20060113) Dec 5 09:08:51 www kernel: [ 0.521308] ACPI: Interpreter enabled Dec 5 09:08:51 www kernel: [ 0.521366] ACPI: (supports S0 S1 S3 S4 S5) Dec 5 09:08:51 www kernel: [ 0.521611] ACPI: Using IOAPIC for interrupt routing Dec 5 09:08:51 www kernel: [ 0.522622] PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in ACPI motherboard resources Dec 5 09:08:51 www kernel: [ 0.554150] ACPI: No dock devices found. 
Dec 5 09:08:51 www kernel: [ 0.554267] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 5 09:08:51 www kernel: [ 0.555231] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 5 09:08:51 www kernel: [ 0.588224] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0._PRT] Dec 5 09:08:51 www kernel: [ 0.588398] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P1._PRT] Dec 5 09:08:51 www kernel: [ 0.588451] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P4._PRT] Dec 5 09:08:51 www kernel: [ 0.588473] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P6._PRT] Dec 5 09:08:51 www kernel: [ 0.588492] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P7._PRT] Dec 5 09:08:51 www kernel: [ 0.588512] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P8._PRT] Dec 5 09:08:51 www kernel: [ 0.588540] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.NPE1._PRT] Dec 5 09:08:51 www kernel: [ 0.588559] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.NPE3._PRT] Dec 5 09:08:51 www kernel: [ 0.588579] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.NPE7._PRT] Dec 5 09:08:51 www kernel: [ 0.588605] pci0000:00: Requesting ACPI _OSC control (0x1d) Dec 5 09:08:51 www kernel: [ 0.588667] pci0000:00: ACPI _OSC request failed (AE_NOT_FOUND), returned control mask: 0x1d Dec 5 09:08:51 www kernel: [ 0.588746] ACPI _OSC control for PCIe not granted, disabling ASPM Dec 5 09:08:51 www kernel: [ 0.597666] ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 6 7 10 11 12 14 *15) Dec 5 09:08:51 www kernel: [ 0.598142] ACPI: PCI Interrupt Link [LNKB] (IRQs *5) Dec 5 09:08:51 www kernel: [ 0.598336] ACPI: PCI Interrupt Link [LNKC] (IRQs 3 4 6 7 10 *11 12 14 15) Dec 5 09:08:51 www kernel: [ 0.598810] ACPI: PCI Interrupt Link [LNKD] (IRQs 3 4 6 7 *10 11 12 14 15) Dec 5 09:08:51 www kernel: [ 0.599284] ACPI: PCI Interrupt Link [LNKE] (IRQs 3 4 6 7 10 11 12 *14 15) Dec 5 09:08:51 www kernel: [ 0.599762] ACPI: PCI Interrupt Link [LNKF] (IRQs *3 4 6 7 10 11 12 14 15) Dec 5 09:08:51 www kernel: [ 0.600236] ACPI: PCI Interrupt Link [LNKG] (IRQs 3 4 6 *7 10 11 12 14 15) Dec 5 09:08:51 www kernel: [ 0.600709] ACPI: PCI Interrupt Link [LNKH] (IRQs 3 *4 6 7 10 11 12 14 15) Dec 5 09:08:51 www kernel: [ 0.601931] PCI: Using ACPI for IRQ routing Dec 5 09:08:51 www kernel: [ 0.628146] pnp: PnP ACPI init Dec 5 09:08:51 www kernel: [ 0.628211] ACPI: bus type pnp registered Dec 5 09:08:51 www kernel: [ 0.628417] pnp 00:00: Plug and Play ACPI device, IDs PNP0a08 PNP0a03 (active) Dec 5 09:08:51 www kernel: [ 0.628859] system 00:01: Plug and Play ACPI device, IDs PNP0c01 (active) Dec 5 09:08:51 www kernel: [ 0.628915] pnp 00:02: Plug and Play ACPI device, IDs PNP0200 (active) Dec 5 09:08:51 www kernel: [ 0.628951] pnp 00:03: Plug and Play ACPI device, IDs PNP0b00 (active) Dec 5 09:08:51 www kernel: [ 0.628975] pnp 00:04: Plug and Play ACPI device, IDs PNP0800 (active) Dec 5 09:08:51 www kernel: [ 0.629004] pnp 00:05: Plug and Play ACPI device, IDs PNP0c04 (active) Dec 5 09:08:51 www kernel: [ 0.629229] system 00:06: Plug and Play ACPI device, IDs PNP0c02 (active) Dec 5 09:08:51 www kernel: [ 0.629779] system 00:07: Plug and Play ACPI device, IDs PNP0c02 (active) Dec 5 09:08:51 www kernel: [ 0.629849] pnp 00:08: Plug and Play ACPI device, IDs PNP0103 (active) Dec 5 09:08:51 www kernel: [ 0.629901] pnp 00:09: Plug and Play ACPI device, IDs INT0800 (active) Dec 5 09:08:51 www kernel: [ 0.630030] system 00:0a: Plug and Play ACPI device, IDs PNP0c02 (active) Dec 5 09:08:51 www kernel: [ 0.630254] system 00:0b: Plug and Play ACPI device, 
IDs PNP0c02 (active) Dec 5 09:08:51 www kernel: [ 0.630304] pnp 00:0c: Plug and Play ACPI device, IDs PNP0303 PNP030b (active) Dec 5 09:08:51 www kernel: [ 0.630359] pnp 00:0d: Plug and Play ACPI device, IDs PNP0f03 PNP0f13 (active) Dec 5 09:08:51 www kernel: [ 0.630492] system 00:0e: Plug and Play ACPI device, IDs PNP0c02 (active) Dec 5 09:08:51 www kernel: [ 0.630986] system 00:0f: Plug and Play ACPI device, IDs PNP0c01 (active) Dec 5 09:08:51 www kernel: [ 0.631078] pnp: PnP ACPI: found 16 devices Dec 5 09:08:51 www kernel: [ 0.631135] ACPI: ACPI bus type pnp unregistered Dec 5 09:08:51 www kernel: [ 0.726291] ACPI: Power Button [PWRB] Dec 5 09:08:51 www kernel: [ 0.726452] ACPI: Power Button [PWRF] Dec 5 09:08:51 www kernel: [ 0.726527] ACPI: acpi_idle yielding to intel_idle Dec 7 21:45:22 www kernel: [ 0.000000] BIOS-e820: 00000000bf780000 - 00000000bf798000 (ACPI data) Dec 7 21:45:22 www kernel: [ 0.000000] BIOS-e820: 00000000bf798000 - 00000000bf7dc000 (ACPI NVS) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: RSDP 00000000000fb1a0 00014 (v00 ACPIAM) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: RSDT 00000000bf780000 00040 (v01 022410 RSDT1405 20100224 MSFT 00000097) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: FACP 00000000bf780200 00084 (v01 022410 FACP1405 20100224 MSFT 00000097) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: DSDT 00000000bf7804b0 0C359 (v01 A1279 A1279001 00000001 INTL 20060113) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: FACS 00000000bf798000 00040 Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: APIC 00000000bf780390 000D8 (v01 022410 APIC1405 20100224 MSFT 00000097) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: MCFG 00000000bf780470 0003C (v01 022410 OEMMCFG 20100224 MSFT 00000097) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: OEMB 00000000bf798040 00072 (v01 022410 OEMB1405 20100224 MSFT 00000097) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: HPET 00000000bf78f4b0 00038 (v01 022410 OEMHPET 20100224 MSFT 00000097) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: OSFR 00000000bf78f4f0 000B0 (v01 022410 OEMOSFR 20100224 MSFT 00000097) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: SSDT 00000000bf798fe0 00363 (v01 DpgPmm CpuPm 00000012 INTL 20060113) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: Local APIC address 0xfee00000 Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: PM-Timer IO Port: 0x808 Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: Local APIC address 0xfee00000 Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x84] disabled) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x85] disabled) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x07] lapic_id[0x86] disabled) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x08] lapic_id[0x87] disabled) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x09] lapic_id[0x88] disabled) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x0a] lapic_id[0x89] disabled) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x0b] lapic_id[0x8a] disabled) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x0c] lapic_id[0x8b] disabled) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: 
LAPIC (acpi_id[0x0d] lapic_id[0x8c] disabled) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x0e] lapic_id[0x8d] disabled) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x0f] lapic_id[0x8e] disabled) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x10] lapic_id[0x8f] disabled) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: IOAPIC (id[0x01] address[0xfec00000] gsi_base[0]) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: IOAPIC (id[0x03] address[0xfec8a000] gsi_base[24]) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: IRQ0 used by override. Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: IRQ2 used by override. Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: IRQ9 used by override. Dec 7 21:45:22 www kernel: [ 0.000000] Using ACPI (MADT) for SMP configuration information Dec 7 21:45:22 www kernel: [ 0.000000] ACPI: HPET id: 0x8086a301 base: 0xfed00000 Dec 7 21:45:22 www kernel: [ 0.009505] ACPI: Core revision 20110413 Dec 7 21:45:22 www kernel: [ 0.499203] PM: Registering ACPI NVS region at bf798000 (278528 bytes) Dec 7 21:45:22 www kernel: [ 0.500819] ACPI: bus type pci registered Dec 7 21:45:22 www kernel: [ 0.503121] ACPI: EC: Look up EC in DSDT Dec 7 21:45:22 www kernel: [ 0.504162] ACPI: Executed 1 blocks of module-level executable AML code Dec 7 21:45:22 www kernel: [ 0.520821] ACPI: SSDT 00000000bf7980c0 00F20 (v01 DpgPmm P001Ist 00000011 INTL 20060113) Dec 7 21:45:22 www kernel: [ 0.521247] ACPI: Dynamic OEM Table Load: Dec 7 21:45:22 www kernel: [ 0.521374] ACPI: SSDT (null) 00F20 (v01 DpgPmm P001Ist 00000011 INTL 20060113) Dec 7 21:45:22 www kernel: [ 0.521691] ACPI: Interpreter enabled Dec 7 21:45:22 www kernel: [ 0.521748] ACPI: (supports S0 S1 S3 S4 S5) Dec 7 21:45:22 www kernel: [ 0.521993] ACPI: Using IOAPIC for interrupt routing Dec 7 21:45:22 www kernel: [ 0.523002] PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in ACPI motherboard resources Dec 7 21:45:22 www kernel: [ 0.554533] ACPI: No dock devices found. 
Dec 7 21:45:22 www kernel: [ 0.554649] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 7 21:45:22 www kernel: [ 0.555620] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 7 21:45:22 www kernel: [ 0.588224] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0._PRT] Dec 7 21:45:22 www kernel: [ 0.588398] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P1._PRT] Dec 7 21:45:22 www kernel: [ 0.588451] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P4._PRT] Dec 7 21:45:22 www kernel: [ 0.588473] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P6._PRT] Dec 7 21:45:22 www kernel: [ 0.588492] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P7._PRT] Dec 7 21:45:22 www kernel: [ 0.588512] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P8._PRT] Dec 7 21:45:22 www kernel: [ 0.588540] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.NPE1._PRT] Dec 7 21:45:22 www kernel: [ 0.588559] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.NPE3._PRT] Dec 7 21:45:22 www kernel: [ 0.588579] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.NPE7._PRT] Dec 7 21:45:22 www kernel: [ 0.588606] pci0000:00: Requesting ACPI _OSC control (0x1d) Dec 7 21:45:22 www kernel: [ 0.588667] pci0000:00: ACPI _OSC request failed (AE_NOT_FOUND), returned control mask: 0x1d Dec 7 21:45:22 www kernel: [ 0.588746] ACPI _OSC control for PCIe not granted, disabling ASPM Dec 7 21:45:22 www kernel: [ 0.597661] ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 6 7 10 11 12 14 *15) Dec 7 21:45:22 www kernel: [ 0.598137] ACPI: PCI Interrupt Link [LNKB] (IRQs *5) Dec 7 21:45:22 www kernel: [ 0.598331] ACPI: PCI Interrupt Link [LNKC] (IRQs 3 4 6 7 10 *11 12 14 15) Dec 7 21:45:22 www kernel: [ 0.598804] ACPI: PCI Interrupt Link [LNKD] (IRQs 3 4 6 7 *10 11 12 14 15) Dec 7 21:45:22 www kernel: [ 0.599278] ACPI: PCI Interrupt Link [LNKE] (IRQs 3 4 6 7 10 11 12 *14 15) Dec 7 21:45:22 www kernel: [ 0.599756] ACPI: PCI Interrupt Link [LNKF] (IRQs *3 4 6 7 10 11 12 14 15) Dec 7 21:45:22 www kernel: [ 0.600230] ACPI: PCI Interrupt Link [LNKG] (IRQs 3 4 6 *7 10 11 12 14 15) Dec 7 21:45:22 www kernel: [ 0.600704] ACPI: PCI Interrupt Link [LNKH] (IRQs 3 *4 6 7 10 11 12 14 15) Dec 7 21:45:22 www kernel: [ 0.601926] PCI: Using ACPI for IRQ routing Dec 7 21:45:22 www kernel: [ 0.624115] pnp: PnP ACPI init Dec 7 21:45:22 www kernel: [ 0.624179] ACPI: bus type pnp registered Dec 7 21:45:22 www kernel: [ 0.624382] pnp 00:00: Plug and Play ACPI device, IDs PNP0a08 PNP0a03 (active) Dec 7 21:45:22 www kernel: [ 0.624821] system 00:01: Plug and Play ACPI device, IDs PNP0c01 (active) Dec 7 21:45:22 www kernel: [ 0.624875] pnp 00:02: Plug and Play ACPI device, IDs PNP0200 (active) Dec 7 21:45:22 www kernel: [ 0.624911] pnp 00:03: Plug and Play ACPI device, IDs PNP0b00 (active) Dec 7 21:45:22 www kernel: [ 0.624933] pnp 00:04: Plug and Play ACPI device, IDs PNP0800 (active) Dec 7 21:45:22 www kernel: [ 0.624962] pnp 00:05: Plug and Play ACPI device, IDs PNP0c04 (active) Dec 7 21:45:22 www kernel: [ 0.625186] system 00:06: Plug and Play ACPI device, IDs PNP0c02 (active) Dec 7 21:45:22 www kernel: [ 0.625733] system 00:07: Plug and Play ACPI device, IDs PNP0c02 (active) Dec 7 21:45:22 www kernel: [ 0.625803] pnp 00:08: Plug and Play ACPI device, IDs PNP0103 (active) Dec 7 21:45:22 www kernel: [ 0.625856] pnp 00:09: Plug and Play ACPI device, IDs INT0800 (active) Dec 7 21:45:22 www kernel: [ 0.625984] system 00:0a: Plug and Play ACPI device, IDs PNP0c02 (active) Dec 7 21:45:22 www kernel: [ 0.626206] system 00:0b: Plug and Play ACPI device, 
IDs PNP0c02 (active) Dec 7 21:45:22 www kernel: [ 0.626256] pnp 00:0c: Plug and Play ACPI device, IDs PNP0303 PNP030b (active) Dec 7 21:45:22 www kernel: [ 0.626312] pnp 00:0d: Plug and Play ACPI device, IDs PNP0f03 PNP0f13 (active) Dec 7 21:45:22 www kernel: [ 0.626445] system 00:0e: Plug and Play ACPI device, IDs PNP0c02 (active) Dec 7 21:45:22 www kernel: [ 0.626936] system 00:0f: Plug and Play ACPI device, IDs PNP0c01 (active) Dec 7 21:45:22 www kernel: [ 0.627027] pnp: PnP ACPI: found 16 devices Dec 7 21:45:22 www kernel: [ 0.627084] ACPI: ACPI bus type pnp unregistered Dec 7 21:45:22 www kernel: [ 0.722086] ACPI: Power Button [PWRB] Dec 7 21:45:22 www kernel: [ 0.722246] ACPI: Power Button [PWRF] Dec 7 21:45:22 www kernel: [ 0.722320] ACPI: acpi_idle yielding to intel_idle

    Read the article

  • Configuring Fed Authentication Methods in OIF / IdP

    - by Damien Carru
    In this article, I will provide examples on how to configure OIF/IdP to map OAM Authentication Schemes to Federation Authentication Methods, based on the concepts introduced in my previous entry. I will show examples for the three protocols supported by OIF: SAML 2.0 SSO SAML 1.1 SSO OpenID 2.0 Enjoy the reading! Configuration As I mentioned in my previous article, mapping Federation Authentication Methods to OAM Authentication Schemes is protocol dependent, since the methods are defined in the various protocols (SAML 2.0, SAML 1.1, OpenID 2.0). As such, the WLST commands to set those mappings will involve: Either the SP Partner Profile and affect all Partners referencing that profile, which do not override the Federation Authentication Method to OAM Authentication Scheme mappings Or the SP Partner entry, which will only affect the SP Partner It is important to note that if an SP Partner is configured to define one or more Federation Authentication Method to OAM Authentication Scheme mappings, then all the mappings defined in the SP Partner Profile will be ignored. WLST Commands The two OIF WLST commands that can be used to define mapping Federation Authentication Methods to OAM Authentication Schemes are: addSPPartnerProfileAuthnMethod() to define a mapping on an SP Partner Profile, taking as parameters: The name of the SP Partner Profile The Federation Authentication Method The OAM Authentication Scheme name addSPPartnerAuthnMethod() to define a mapping on an SP Partner , taking as parameters: The name of the SP Partner The Federation Authentication Method The OAM Authentication Scheme name Note: I will discuss in a subsequent article the other parameters of those commands. In the next sections, I will show examples on how to use those methods: For SAML 2.0, I will configure the SP Partner Profile, that will apply all the mappings to SP Partners referencing this profile, unless they override mapping definition For SAML 1.1, I will configure the SP Partner. For OpenID 2.0, I will configure the SP/RP Partner SAML 2.0 Test Setup In this setup, OIF is acting as an IdP and is integrated with a remote SAML 2.0 SP partner identified by AcmeSP. In this test, I will perform Federation SSO with OIF/IdP configured to: Use LDAPScheme as the Authentication Scheme Use BasicScheme as the Authentication Scheme Map BasicSessionScheme  to  the urn:oasis:names:tc:SAML:2.0:ac:classes:Password Federation Authentication Method Use OAMLDAPPluginAuthnScheme as the Authentication Scheme Map OAMLDAPPluginAuthnScheme to  the urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport Federation Authentication Method LDAPScheme as Authentication Scheme Using the OOTB settings regarding user authentication in OAM, the user will be challenged via a FORM based login page based on the LDAPScheme. Also the default Federation Authentication Method mappings configuration maps only the urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport to LDAPScheme (also marked as the default scheme used for authentication), FAAuthScheme, BasicScheme and BasicFAScheme. After authentication via FORM, OIF/IdP would issue an Assertion similar to: <samlp:Response ...>    <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>    <samlp:Status>        <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>    </samlp:Status>    <saml:Assertion ...>        <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>        <dsig:Signature>            ...        
</dsig:Signature>        <saml:Subject>            <saml:NameID ...>[email protected]</saml:NameID>            <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">                <saml:SubjectConfirmationData .../>            </saml:SubjectConfirmation>        </saml:Subject>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthnInstant="2014-03-21T20:53:55Z" SessionIndex="id-6i-Dm0yB-HekG6cejktwcKIFMzYE8Yrmqwfd0azz" SessionNotOnOrAfter="2014-03-21T21:53:55Z">            <saml:AuthnContext>                <saml:AuthnContextClassRef>                   urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport                </saml:AuthnContextClassRef>            </saml:AuthnContext>        </saml:AuthnStatement>    </saml:Assertion></samlp:Response> BasicScheme as Authentication Scheme For this test, I will switch the default Authentication Scheme for the SP Partner Profile to BasicScheme instead of LDAPScheme. I will use the OIF WLST setSPPartnerProfileDefaultScheme() command and specify which scheme to be used as the default for the SP Partner Profile referenced by AcmeSP (which is saml20-sp-partner-profile in this case: getFedPartnerProfile("AcmeSP", "sp") ): Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the setSPPartnerProfileDefaultScheme() command:setSPPartnerProfileDefaultScheme("saml20-sp-partner-profile", "BasicScheme") Exit the WLST environment:exit() The user will now be challenged via HTTP Basic Authentication defined in the BasicScheme for AcmeSP. Also, as noted earlier, the default Federation Authentication Method mappings configuration maps only the urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport to LDAPScheme (also marked as the default scheme used for authentication), FAAuthScheme, BasicScheme and BasicFAScheme. After authentication via HTTP Basic Authentication, OIF/IdP would issue an Assertion similar to: <samlp:Response ...>    <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>    <samlp:Status>        <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>    </samlp:Status>    <saml:Assertion ...>        <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>        <dsig:Signature>            ...        
</dsig:Signature>        <saml:Subject>            <saml:NameID ...>[email protected]</saml:NameID>            <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">                <saml:SubjectConfirmationData .../>            </saml:SubjectConfirmation>        </saml:Subject>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthnInstant="2014-03-21T20:53:55Z" SessionIndex="id-6i-Dm0yB-HekG6cejktwcKIFMzYE8Yrmqwfd0azz" SessionNotOnOrAfter="2014-03-21T21:53:55Z">            <saml:AuthnContext>                <saml:AuthnContextClassRef>                   urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport                </saml:AuthnContextClassRef>            </saml:AuthnContext>        </saml:AuthnStatement>    </saml:Assertion></samlp:Response> Mapping BasicScheme To change the Federation Authentication Method mapping for the BasicScheme to urn:oasis:names:tc:SAML:2.0:ac:classes:Password instead of urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport for the saml20-sp-partner-profile SAML 2.0 SP Partner Profile (the profile to which my AcmeSP Partner is bound to), I will execute the addSPPartnerProfileAuthnMethod() method: Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the addSPPartnerProfileAuthnMethod() command:addSPPartnerProfileAuthnMethod("saml20-sp-partner-profile", "urn:oasis:names:tc:SAML:2.0:ac:classes:Password", "BasicScheme") Exit the WLST environment:exit() After authentication via HTTP Basic Authentication, OIF/IdP would now issue an Assertion similar to (see that the AuthnContextClassRef was changed from PasswordProtectedTransport to Password): <samlp:Response ...>    <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>    <samlp:Status>        <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>    </samlp:Status>    <saml:Assertion ...>        <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>        <dsig:Signature>            ...        </dsig:Signature>        <saml:Subject>            <saml:NameID ...>[email protected]</saml:NameID>            <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">                <saml:SubjectConfirmationData .../>            </saml:SubjectConfirmation>        </saml:Subject>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthnInstant="2014-03-21T20:53:55Z" SessionIndex="id-6i-Dm0yB-HekG6cejktwcKIFMzYE8Yrmqwfd0azz" SessionNotOnOrAfter="2014-03-21T21:53:55Z">            <saml:AuthnContext>                <saml:AuthnContextClassRef>                   urn:oasis:names:tc:SAML:2.0:ac:classes:Password                </saml:AuthnContextClassRef>            </saml:AuthnContext>        </saml:AuthnStatement>    </saml:Assertion></samlp:Response> OAMLDAPPluginAuthnScheme as Authentication Scheme For this test, I will switch the default Authentication Scheme for the SP Partner Profile to OAMLDAPPluginAuthnScheme instead of BasicScheme. 
I will use the OIF WLST setSPPartnerProfileDefaultScheme() command and specify which scheme to be used as the default for the SP Partner Profile referenced by AcmeSP (which is saml20-sp-partner-profile in this case: getFedPartnerProfile("AcmeSP", "sp") ): Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the setSPPartnerProfileDefaultScheme() command:setSPPartnerProfileDefaultScheme("saml20-sp-partner-profile", "OAMLDAPPluginAuthnScheme") Exit the WLST environment:exit() The user will now be challenged via FORM defined in the OAMLDAPPluginAuthnScheme for AcmeSP. Contrarily to LDAPScheme and BasicScheme, the OAMLDAPPluginAuthnScheme is not mapped by default to any Federation Authentication Methods. As such, OIF/IdP will not be able to find a Federation Authentication Method and will set the method in the SAML Assertion to the OAM Authentication Scheme name. After authentication via FORM, OIF/IdP would issue an Assertion similar to (see the AuthnContextClassRef set to OAMLDAPPluginAuthnScheme): <samlp:Response ...>    <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>    <samlp:Status>        <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>    </samlp:Status>    <saml:Assertion ...>        <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>        <dsig:Signature>            ...        </dsig:Signature>        <saml:Subject>            <saml:NameID ...>[email protected]</saml:NameID>            <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">                <saml:SubjectConfirmationData .../>            </saml:SubjectConfirmation>        </saml:Subject>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthnInstant="2014-03-21T20:53:55Z" SessionIndex="id-6i-Dm0yB-HekG6cejktwcKIFMzYE8Yrmqwfd0azz" SessionNotOnOrAfter="2014-03-21T21:53:55Z">            <saml:AuthnContext>                <saml:AuthnContextClassRef> OAMLDAPPluginAuthnScheme                </saml:AuthnContextClassRef>            </saml:AuthnContext>        </saml:AuthnStatement>    </saml:Assertion></samlp:Response> Mapping OAMLDAPPluginAuthnScheme To add the OAMLDAPPluginAuthnScheme  to the Federation Authentication Method urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport mapping, I will execute the addSPPartnerProfileAuthnMethod() method: Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the addSPPartnerProfileAuthnMethod() command:addSPPartnerProfileAuthnMethod("saml20-sp-partner-profile", "urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport", "OAMLDAPPluginAuthnScheme") Exit the WLST environment:exit() After authentication via FORM, OIF/IdP would now issue an Assertion similar to (see that the method was changed from OAMLDAPPluginAuthnScheme to PasswordProtectedTransport): <samlp:Response ...>    <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>    <samlp:Status>        <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>    </samlp:Status>    <saml:Assertion ...>        <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>        <dsig:Signature>            ...        
</dsig:Signature>        <saml:Subject>            <saml:NameID ...>[email protected]</saml:NameID>            <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">                <saml:SubjectConfirmationData .../>            </saml:SubjectConfirmation>        </saml:Subject>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthnInstant="2014-03-21T20:53:55Z" SessionIndex="id-6i-Dm0yB-HekG6cejktwcKIFMzYE8Yrmqwfd0azz" SessionNotOnOrAfter="2014-03-21T21:53:55Z">            <saml:AuthnContext>                <saml:AuthnContextClassRef>                   urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport                </saml:AuthnContextClassRef>            </saml:AuthnContext>        </saml:AuthnStatement>    </saml:Assertion></samlp:Response> SAML 1.1 Test Setup In this setup, OIF is acting as an IdP and is integrated with a remote SAML 1.1 SP partner identified by AcmeSP. In this test, I will perform Federation SSO with OIF/IdP configured to: Use LDAPScheme as the Authentication Scheme Use OAMLDAPPluginAuthnScheme as the Authentication Scheme Map OAMLDAPPluginAuthnScheme to  the urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport Federation Authentication Method Use LDAPScheme as the Authentication Scheme Map LDAPScheme to  the urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport Federation Authentication Method LDAPScheme as Authentication Scheme Using the OOTB settings regarding user authentication in OAM, the user will be challenged via a FORM based login page based on the LDAPScheme. Also the default Federation Authentication Method mappings configuration maps only the urn:oasis:names:tc:SAML:1.0:am:password to LDAPScheme (also marked as the default scheme used for authentication), FAAuthScheme, BasicScheme and BasicFAScheme. After authentication via FORM, OIF/IdP would issue an Assertion similar to: <samlp:Response ...>    <samlp:Status>        <samlp:StatusCode Value="samlp:Success"/>    </samlp:Status>    <saml:Assertion Issuer="https://idp.com/oam/fed" ...>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp/ssov11</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthenticationInstant="2014-03-21T20:53:55Z" AuthenticationMethod="urn:oasis:names:tc:SAML:1.0:am:password">            <saml:Subject>                <saml:NameIdentifier ...>[email protected]</saml:NameIdentifier>                <saml:SubjectConfirmation>                   <saml:ConfirmationMethod>                       urn:oasis:names:tc:SAML:1.0:cm:bearer                   </saml:ConfirmationMethod>                </saml:SubjectConfirmation>            </saml:Subject>        </saml:AuthnStatement>        <dsig:Signature>            ...        </dsig:Signature>    </saml:Assertion></samlp:Response> OAMLDAPPluginAuthnScheme as Authentication Scheme For this test, I will switch the default Authentication Scheme for the SP Partner to OAMLDAPPluginAuthnScheme instead of LDAPScheme. 
I will use the OIF WLST setSPPartnerDefaultScheme() command and specify which scheme to be used as the default for the SP Partner: Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the setSPPartnerDefaultScheme() command:setSPPartnerDefaultScheme("AcmeSP", "OAMLDAPPluginAuthnScheme") Exit the WLST environment:exit() The user will be challenged via FORM defined in the OAMLDAPPluginAuthnScheme for AcmeSP. Contrarily to LDAPScheme, the OAMLDAPPluginAuthnScheme is not mapped by default to any Federation Authentication Methods (in the SP Partner Profile). As such, OIF/IdP will not be able to find a Federation Authentication Method and will set the method in the SAML Assertion to the OAM Authentication Scheme name. After authentication via FORM, OIF/IdP would issue an Assertion similar to (see the AuthenticationMethod set to OAMLDAPPluginAuthnScheme): <samlp:Response ...>    <samlp:Status>        <samlp:StatusCode Value="samlp:Success"/>    </samlp:Status>    <saml:Assertion Issuer="https://idp.com/oam/fed" ...>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp/ssov11</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthenticationInstant="2014-03-21T20:53:55Z" AuthenticationMethod="OAMLDAPPluginAuthnScheme">            <saml:Subject>                <saml:NameIdentifier ...>[email protected]</saml:NameIdentifier>                <saml:SubjectConfirmation>                   <saml:ConfirmationMethod>                       urn:oasis:names:tc:SAML:1.0:cm:bearer                   </saml:ConfirmationMethod>                </saml:SubjectConfirmation>            </saml:Subject>        </saml:AuthnStatement>        <dsig:Signature>            ...        
</dsig:Signature>    </saml:Assertion></samlp:Response> Mapping OAMLDAPPluginAuthnScheme To map the OAMLDAPPluginAuthnScheme  to the Federation Authentication Method urn:oasis:names:tc:SAML:1.0:am:password for this SP Partner only, I will execute the addSPPartnerAuthnMethod() method: Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the addSPPartnerAuthnMethod() command:addSPPartnerAuthnMethod("AcmeSP", "urn:oasis:names:tc:SAML:1.0:am:password", "OAMLDAPPluginAuthnScheme") Exit the WLST environment:exit() After authentication via FORM, OIF/IdP would now issue an Assertion similar to (see that the method was changed from OAMLDAPPluginAuthnScheme to password): <samlp:Response ...>    <samlp:Status>        <samlp:StatusCode Value="samlp:Success"/>    </samlp:Status>    <saml:Assertion Issuer="https://idp.com/oam/fed" ...>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp/ssov11</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthenticationInstant="2014-03-21T20:53:55Z" AuthenticationMethod="urn:oasis:names:tc:SAML:1.0:am:password">            <saml:Subject>                <saml:NameIdentifier ...>[email protected]</saml:NameIdentifier>                <saml:SubjectConfirmation>                   <saml:ConfirmationMethod>                       urn:oasis:names:tc:SAML:1.0:cm:bearer                   </saml:ConfirmationMethod>                </saml:SubjectConfirmation>            </saml:Subject>        </saml:AuthnStatement>        <dsig:Signature>            ...        </dsig:Signature>    </saml:Assertion></samlp:Response> LDAPScheme as Authentication Scheme I will now show that by defining a Federation Authentication Mapping at the Partner level, this now ignores all mappings defined at the SP Partner Profile level. 
For this test, I will switch the default Authentication Scheme for this SP Partner back to LDAPScheme, and the Assertion issued by OIF/IdP will not be able to map this LDAPScheme to a Federation Authentication Method anymore, since A Federation Authentication Method mapping is defined at the SP Partner level and thus the mappings defined at the SP Partner Profile are ignored The LDAPScheme is not listed in the mapping at the Partner level I will use the OIF WLST setSPPartnerDefaultScheme() command and specify which scheme to be used as the default for this SP Partner: Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the setSPPartnerDefaultScheme() command:setSPPartnerDefaultScheme("AcmeSP", "LDAPScheme") Exit the WLST environment:exit() After authentication via FORM, OIF/IdP would issue an Assertion similar to (see the AuthenticationMethod set to LDAPScheme): <samlp:Response ...>    <samlp:Status>        <samlp:StatusCode Value="samlp:Success"/>    </samlp:Status>    <saml:Assertion Issuer="https://idp.com/oam/fed" ...>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp/ssov11</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthenticationInstant="2014-03-21T20:53:55Z" AuthenticationMethod="LDAPScheme">            <saml:Subject>                <saml:NameIdentifier ...>[email protected]</saml:NameIdentifier>                <saml:SubjectConfirmation>                   <saml:ConfirmationMethod>                       urn:oasis:names:tc:SAML:1.0:cm:bearer                   </saml:ConfirmationMethod>                </saml:SubjectConfirmation>            </saml:Subject>        </saml:AuthnStatement>        <dsig:Signature>            ...        </dsig:Signature>    </saml:Assertion></samlp:Response> Mapping LDAPScheme at Partner Level To fix this issue, we will need to add the LDAPScheme  to the Federation Authentication Method urn:oasis:names:tc:SAML:1.0:am:password mapping for this SP Partner only. 
I will execute the addSPPartnerAuthnMethod() method: Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the addSPPartnerAuthnMethod() command:addSPPartnerAuthnMethod("AcmeSP", "urn:oasis:names:tc:SAML:1.0:am:password", "LDAPScheme") Exit the WLST environment:exit() After authentication via FORM, OIF/IdP would now issue an Assertion similar to (see that the method was changed from LDAPScheme to password): <samlp:Response ...>    <samlp:Status>        <samlp:StatusCode Value="samlp:Success"/>    </samlp:Status>    <saml:Assertion Issuer="https://idp.com/oam/fed" ...>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp/ssov11</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthenticationInstant="2014-03-21T20:53:55Z" AuthenticationMethod="urn:oasis:names:tc:SAML:1.0:am:password">            <saml:Subject>                <saml:NameIdentifier ...>[email protected]</saml:NameIdentifier>                <saml:SubjectConfirmation>                   <saml:ConfirmationMethod>                       urn:oasis:names:tc:SAML:1.0:cm:bearer                   </saml:ConfirmationMethod>                </saml:SubjectConfirmation>            </saml:Subject>        </saml:AuthnStatement>        <dsig:Signature>            ...        </dsig:Signature>    </saml:Assertion></samlp:Response> OpenID 2.0 In the OpenID 2.0 flows, the RP must request use of PAPE, in order for OIF/IdP/OP to include PAPE information. For OpenID 2.0, the configuration will involve mapping a list of OpenID 2.0 policies to a list of Authentication Schemes. The WLST command will take a list of policies, delimited by the ',' character, instead of SAML 2.0 or SAML 1.1 where a single Federation Authentication Method had to be specified. Test Setup In this setup, OIF is acting as an IdP/OP and is integrated with a remote OpenID 2.0 SP/RP partner identified by AcmeRP. In this test, I will perform Federation SSO with OIF/IdP configured to: Use LDAPScheme as the Authentication Scheme Map LDAPScheme to  the http://schemas.openid.net/pape/policies/2007/06/phishing-resistant and http://openid-policies/password-protected policies Federation Authentication Methods (the second one is a custom for this use case) LDAPScheme as Authentication Scheme Using the OOTB settings regarding user authentication in OAM, the user will be challenged via a FORM based login page based on the LDAPScheme. 
No Federation Authentication Method is defined OOTB for OpenID 2.0, so if the IdP/OP issue an SSO response with a PAPE Response element, it will specify the scheme name instead of Federation Authentication Methods After authentication via FORM, OIF/IdP would issue an SSO Response similar to: https://acme.com/openid?refid=id-9PKVXZmRxAeDYcgLqPm36ClzOMA-&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.mode=id_res&openid.op_endpoint=https%3A%2F%2Fidp.com%2Fopenid&openid.claimed_id=https%3A%2F%2Fidp.com%2Fopenid%3Fid%3Did-38iCmmlAVEXPsFjnFVKArfn5RIiF75D5doorhEgqqPM%3D&openid.identity=https%3A%2F%2Fidp.com%2Fopenid%3Fid%3Did-38iCmmlAVEXPsFjnFVKArfn5RIiF75D5doorhEgqqPM%3D&openid.return_to=https%3A%2F%2Facme.com%2Fopenid%3Frefid%3Did-9PKVXZmRxAeDYcgLqPm36ClzOMA-&openid.response_nonce=2014-03-24T19%3A20%3A06Zid-YPa2kTNNFftZkgBb460jxJGblk2g--iNwPpDI7M1&openid.assoc_handle=id-6a5S6zhAKaRwQNUnjTKROREdAGSjWodG1el4xyz3&openid.ns.ax=http%3A%2F%2Fopenid.net%2Fsrv%2Fax%2F1.0&openid.ax.mode=fetch_response&openid.ax.type.attr0=http%3A%2F%2Fsession%2Fcount&openid.ax.value.attr0=1&openid.ax.type.attr1=http%3A%2F%2Fopenid.net%2Fschema%2FnamePerson%2Ffriendly&openid.ax.value.attr1=My+name+is+Bobby+Smith&openid.ax.type.attr2=http%3A%2F%2Fschemas.openid.net%2Fax%2Fapi%2Fuser_id&openid.ax.value.attr2=bob&openid.ax.type.attr3=http%3A%2F%2Faxschema.org%2Fcontact%2Femail&openid.ax.value.attr3=bob%40oracle.com&openid.ax.type.attr4=http%3A%2F%2Fsession%2Fipaddress&openid.ax.value.attr4=10.145.120.253&openid.ns.pape=http%3A%2F%2Fspecs.openid.net%2Fextensions%2Fpape%2F1.0&openid.pape.auth_time=2014-03-24T19%3A20%3A05Z&openid.pape.auth_policies=LDAPScheme&openid.signed=op_endpoint%2Cclaimed_id%2Cidentity%2Creturn_to%2Cresponse_nonce%2Cassoc_handle%2Cns.ax%2Cax.mode%2Cax.type.attr0%2Cax.value.attr0%2Cax.type.attr1%2Cax.value.attr1%2Cax.type.attr2%2Cax.value.attr2%2Cax.type.attr3%2Cax.value.attr3%2Cax.type.attr4%2Cax.value.attr4%2Cns.pape%2Cpape.auth_time%2Cpape.auth_policies&openid.sig=mYMgbGYSs22l8e%2FDom9NRPw15u8%3D Mapping LDAPScheme To map the LDAP Scheme to the http://schemas.openid.net/pape/policies/2007/06/phishing-resistant and http://openid-policies/password-protected policies Federation Authentication Methods, I will execute the addSPPartnerAuthnMethod() method (the policies will be comma separated): Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the addSPPartnerAuthnMethod() command:addSPPartnerAuthnMethod("AcmeRP", "http://schemas.openid.net/pape/policies/2007/06/phishing-resistant,http://openid-policies/password-protected", "LDAPScheme") Exit the WLST environment:exit() After authentication via FORM, OIF/IdP would now issue an Assertion similar to (see that the method was changed from LDAPScheme to the two policies): 
https://acme.com/openid?refid=id-9PKVXZmRxAeDYcgLqPm36ClzOMA-&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.mode=id_res&openid.op_endpoint=https%3A%2F%2Fidp.com%2Fopenid&openid.claimed_id=https%3A%2F%2Fidp.com%2Fopenid%3Fid%3Did-38iCmmlAVEXPsFjnFVKArfn5RIiF75D5doorhEgqqPM%3D&openid.identity=https%3A%2F%2Fidp.com%2Fopenid%3Fid%3Did-38iCmmlAVEXPsFjnFVKArfn5RIiF75D5doorhEgqqPM%3D&openid.return_to=https%3A%2F%2Facme.com%2Fopenid%3Frefid%3Did-9PKVXZmRxAeDYcgLqPm36ClzOMA-&openid.response_nonce=2014-03-24T19%3A20%3A06Zid-YPa2kTNNFftZkgBb460jxJGblk2g--iNwPpDI7M1&openid.assoc_handle=id-6a5S6zhAKaRwQNUnjTKROREdAGSjWodG1el4xyz3&openid.ns.ax=http%3A%2F%2Fopenid.net%2Fsrv%2Fax%2F1.0&openid.ax.mode=fetch_response&openid.ax.type.attr0=http%3A%2F%2Fsession%2Fcount&openid.ax.value.attr0=1&openid.ax.type.attr1=http%3A%2F%2Fopenid.net%2Fschema%2FnamePerson%2Ffriendly&openid.ax.value.attr1=My+name+is+Bobby+Smith&openid.ax.type.attr2=http%3A%2F%2Fschemas.openid.net%2Fax%2Fapi%2Fuser_id&openid.ax.value.attr2=bob&openid.ax.type.attr3=http%3A%2F%2Faxschema.org%2Fcontact%2Femail&openid.ax.value.attr3=bob%40oracle.com&openid.ax.type.attr4=http%3A%2F%2Fsession%2Fipaddress&openid.ax.value.attr4=10.145.120.253&openid.ns.pape=http%3A%2F%2Fspecs.openid.net%2Fextensions%2Fpape%2F1.0&openid.pape.auth_time=2014-03-24T19%3A20%3A05Z&openid.pape.auth_policies=http%3A%2F%2Fschemas.openid.net%2Fpape%2Fpolicies%2F2007%2F06%2Fphishing-resistant+http%3A%2F%2Fopenid-policies%2Fpassword-protected&openid.signed=op_endpoint%2Cclaimed_id%2Cidentity%2Creturn_to%2Cresponse_nonce%2Cassoc_handle%2Cns.ax%2Cax.mode%2Cax.type.attr0%2Cax.value.attr0%2Cax.type.attr1%2Cax.value.attr1%2Cax.type.attr2%2Cax.value.attr2%2Cax.type.attr3%2Cax.value.attr3%2Cax.type.attr4%2Cax.value.attr4%2Cns.pape%2Cpape.auth_time%2Cpape.auth_policies&openid.sig=mYMgbGYSs22l8e%2FDom9NRPw15u8%3D In the next article, I will cover how OIF/IdP can be configured so that an SP can request a specific Federation Authentication Method to challenge the user during Federation SSO.Cheers,Damien Carru
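On the relying party side, the Federation Authentication Method negotiated above simply shows up as the AuthenticationMethod attribute of the assertion's AuthnStatement (or, for OpenID 2.0, in openid.pape.auth_policies). As a small, hedged illustration, the following C# sketch (not part of OIF; the file name is a placeholder, and it assumes the simplified SAML 1.1 structure shown in the responses above) reads that attribute:

using System;
using System.Xml.Linq;

class AssertionInspector
{
    // SAML 1.1 assertion namespace; assumed to match the saml: prefix
    // used in the (abbreviated) responses shown above.
    static readonly XNamespace Saml = "urn:oasis:names:tc:SAML:1.0:assertion";

    static void Main()
    {
        // "response.xml" is a placeholder for wherever the SAML response is obtained.
        XDocument doc = XDocument.Load("response.xml");

        foreach (XElement stmt in doc.Descendants(Saml + "AuthnStatement"))
        {
            // The Federation Authentication Method chosen by OIF/IdP.
            string method  = (string)stmt.Attribute("AuthenticationMethod");
            string instant = (string)stmt.Attribute("AuthenticationInstant");
            Console.WriteLine($"Authenticated at {instant} using {method}");
        }
    }
}

With the Partner-level mapping in place, the value printed would be urn:oasis:names:tc:SAML:1.0:am:password rather than the raw scheme name.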

    Read the article

  • LLBLGen Pro feature highlights: automatic element name construction

    - by FransBouma
    (This post is part of a series of posts about features of the LLBLGen Pro system) One of the things one might take for granted but which has a huge impact on the time spent in an entity modeling environment is the way the system creates names for elements out of the information provided, in short: automatic element name construction. Element names are created in both directions of modeling: database first and model first and the more names the system can create for you without you having to rename them, the better. LLBLGen Pro has a rich, fine grained system for creating element names out of the meta-data available, which I'll describe more in detail below. First the model element related element naming features are highlighted, in the section Automatic model element naming features and after that I'll go more into detail about the relational model element naming features LLBLGen Pro has to offer in the section Automatic relational model element naming features. Automatic model element naming features When working database first, the element names in the model, e.g. entity names, entity field names and so on, are in general determined from the relational model element (e.g. table, table field) they're mapped on, as the model elements are reverse engineered from these relational model elements. It doesn't take rocket science to automatically name an entity Customer if the entity was created after reverse engineering a table named Customer. It gets a little trickier when the entity which was created by reverse engineering a table called TBL_ORDER_LINES has to be named 'OrderLine' automatically. Automatic model element naming also takes into effect with model first development, where some settings are used to provide you with a default name, e.g. in the case of navigator name creation when you create a new relationship. The features below are available to you in the Project Settings. Open Project Settings on a loaded project and navigate to Conventions -> Element Name Construction. Strippers! The above example 'TBL_ORDER_LINES' shows that some parts of the table name might not be needed for name creation, in this case the 'TBL_' prefix. Some 'brilliant' DBAs even add suffixes to table names, fragments you might not want to appear in the entity names. LLBLGen Pro offers you to define both prefix and suffix fragments to strip off of table, view, stored procedure, parameter, table field and view field names. In the example above, the fragment 'TBL_' is a good candidate for such a strip pattern. You can specify more than one pattern for e.g. the table prefix strip pattern, so even a really messy schema can still be used to produce clean names. Underscores Be Gone Another thing you might get rid of are underscores. After all, most naming schemes for entities and their classes use PasCal casing rules and don't allow for underscores to appear. LLBLGen Pro can automatically strip out underscores for you. It's an optional feature, so if you like the underscores, you're not forced to see them go: LLBLGen Pro will leave them alone when ordered to to so. PasCal everywhere... or not, your call LLBLGen Pro can automatically PasCal case names on word breaks. It determines word breaks in a couple of ways: a space marks a word break, an underscore marks a word break and a case difference marks a word break. It will remove spaces in all cases, and based on the underscore removal setting, keep or remove the underscores, and upper-case the first character of a word break fragment, and lower case the rest. 
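Taken together, the stripping, underscore handling and PasCal casing steps described so far behave roughly like the following sketch. It is purely illustrative (not LLBLGen Pro's actual implementation), and the method and parameter names are invented for the example:

using System;
using System.Linq;
using System.Text.RegularExpressions;

// Illustrative only: a rough approximation of the name construction steps
// described above (strip patterns, underscore removal, PasCal casing).
public static class NameConstruction
{
    public static string BuildElementName(
        string sourceName,
        string[] prefixStripPatterns,
        bool removeUnderscores)
    {
        // 1. Strip configured prefix fragments, e.g. "TBL_".
        foreach (var prefix in prefixStripPatterns)
        {
            if (sourceName.StartsWith(prefix, StringComparison.OrdinalIgnoreCase))
            {
                sourceName = sourceName.Substring(prefix.Length);
                break;
            }
        }

        // 2. Determine word breaks: spaces, underscores and case differences.
        var fragments = Regex.Split(sourceName, @"[\s_]+|(?<=[a-z])(?=[A-Z])")
                             .Where(f => f.Length > 0);

        // 3. PasCal-case each fragment; spaces are always removed,
        //    underscores only when requested.
        var parts = fragments.Select(f => char.ToUpper(f[0]) + f.Substring(1).ToLower());
        var separator = removeUnderscores ? "" : "_";
        return string.Join(separator, parts);
    }
}

// Example: BuildElementName("TBL_ORDER_LINES", new[] { "TBL_" }, true) -> "OrderLines";
// singularization ("OrderLine") and abbreviation expansion, described below,
// would be applied as additional steps.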
Say, we keep the defaults, which is remove underscores and PasCal case always and strip the TBL_ fragment, we get with our example TBL_ORDER_LINES, after stripping TBL_ from the table name two word fragments: ORDER and LINES. The underscores are removed, the first character of each fragment is upper-cased, the rest lower-cased, so this results in OrderLines. Almost there! Pluralization and Singularization In general entity names are singular, like Customer or OrderLine so LLBLGen Pro offers a way to singularize the names. This will convert OrderLines, the result we got after the PasCal casing functionality, into OrderLine, exactly what we're after. Show me the patterns! There are other situations in which you want more flexibility. Say, you have an entity Customer and an entity Order and there's a foreign key constraint defined from the target of Order and the target of Customer. This foreign key constraint results in a 1:n relationship between the entities Customer and Order. A relationship has navigators mapped onto the relationship in both entities the relationship is between. For this particular relationship we'd like to have Customer as navigator in Order and Orders as navigator in Customer, so the relationship becomes Customer.Orders 1:n Order.Customer. To control the naming of these navigators for the various relationship types, LLBLGen Pro defines a set of patterns which allow you, using macros, to define how the auto-created navigator names will look like. For example, if you rather have Customer.OrderCollection, you can do so, by changing the pattern from {$EndEntityName$P} to {$EndEntityName}Collection. The $P directive makes sure the name is pluralized, which is not what you want if you're going for <EntityName>Collection, hence it's removed. When working model first, it's a given you'll create foreign key fields along the way when you define relationships. For example, you've defined two entities: Customer and Order, and they have their fields setup properly. Now you want to define a relationship between them. This will automatically create a foreign key field in the Order entity, which reflects the value of the PK field in Customer. (No worries if you hate the foreign key fields in your classes, on NHibernate and EF these can be hidden in the generated code if you want to). A specific pattern is available for you to direct LLBLGen Pro how to name this foreign key field. For example, if all your entities have Id as PK field, you might want to have a different name than Id as foreign key field. In our Customer - Order example, you might want to have CustomerId instead as foreign key name in Order. The pattern for foreign key fields gives you that freedom. Abbreviations... make sense of OrdNr and friends I already described word breaks in the PasCal casing paragraph, how they're used for the PasCal casing in the constructed name. Word breaks are used for another neat feature LLBLGen Pro has to offer: abbreviation support. Burt, your friendly DBA in the dungeons below the office has a hate-hate relationship with his keyboard: he can't stand it: typing is something he avoids like the plague. This has resulted in tables and fields which have names which are very short, but also very unreadable. Example: our TBL_ORDER_LINES example has a lovely field called ORD_NR. What you would like to see in your fancy new OrderLine entity mapped onto this table is a field called OrderNumber, not a field called OrdNr. What you also like is to not have to rename that field manually. 
There are better things to do with your time, after all. LLBLGen Pro has you covered. All it takes is to define some abbreviation - full word pairs and during reverse engineering model elements from tables/views, LLBLGen Pro will take care of the rest. For the ORD_NR field, you need two values: ORD as abbreviation and Order as full word, and NR as abbreviation and Number as full word. LLBLGen Pro will now convert every word fragment found with the word breaks which matches an abbreviation to the given full word. They're case sensitive and can be found in the Project Settings: Navigate to Conventions -> Element Name Construction -> Abbreviations. Automatic relational model element naming features Not everyone works database first: it may very well be the case you start from scratch, or have to add additional tables to an existing database. For these situations, it's key you have the flexibility that you can control the created table names and table fields without any work: let the designer create these names based on the entity model you defined and a set of rules. LLBLGen Pro offers several features in this area, which are described in more detail below. These features are found in Project Settings: navigate to Conventions -> Model First Development. Underscores, welcome back! Not every database is case insensitive, and not every organization requires PasCal cased table/field names, some demand all lower or all uppercase names with underscores at word breaks. Say you create an entity model with an entity called OrderLine. You work with Oracle and your organization requires underscores at word breaks: a table created from OrderLine should be called ORDER_LINE. LLBLGen Pro allows you to do that: with a simple checkbox you can order LLBLGen Pro to insert an underscore at each word break for the type of database you're working with: case sensitive or case insensitive. Checking the checkbox Insert underscore at word break case insensitive dbs will let LLBLGen Pro create a table from the entity called Order_Line. Half-way there, as there are still lower case characters there and you need all caps. No worries, see below Casing directives so everyone can sleep well at night For case sensitive databases and case insensitive databases there is one setting for each of them which controls the casing of the name created from a model element (e.g. a table created from an entity definition using the auto-mapping feature). The settings can have the following values: AsProjectElement, AllUpperCase or AllLowerCase. AsProjectElement is the default, and it keeps the casing as-is. In our example, we need to get all upper case characters, so we select AllUpperCase for the setting for case sensitive databases. This will produce the name ORDER_LINE. Sequence naming after a pattern Some databases support sequences, and using model-first development it's key to have sequences, when needed, to be created automatically and if possible using a name which shows where they're used. Say you have an entity Order and you want to have the PK values be created by the database using a sequence. The database you're using supports sequences (e.g. Oracle) and as you want all numeric PK fields to be sequenced, you have enabled this by the setting Auto assign sequences to integer pks. 
When you're using LLBLGen Pro's auto-map feature, to create new tables and constraints from the model, it will create a new table, ORDER, based on your settings I previously discussed above, with a PK field ID and it also creates a sequence, SEQ_ORDER, which is auto-assigns to the ID field mapping. The name of the sequence is created by using a pattern, defined in the Model First Development setting Sequence pattern, which uses plain text and macros like with the other patterns previously discussed. Grouping and schemas When you start from scratch, and you're working model first, the tables created by LLBLGen Pro will be in a catalog and / or schema created by LLBLGen Pro as well. If you use LLBLGen Pro's grouping feature, which allows you to group entities and other model elements into groups in the project (described in a future blog post), you might want to have that group name reflected in the schema name the targets of the model elements are in. Say you have a model with a group CRM and a group HRM, both with entities unique for these groups, e.g. Employee in HRM, Customer in CRM. When auto-mapping this model to create tables, you might want to have the table created for Employee in the HRM schema but the table created for Customer in the CRM schema. LLBLGen Pro will do just that when you check the setting Set schema name after group name to true (default). This gives you total control over where what is placed in the database from your model. But I want plural table names... and TBL_ prefixes! For now we follow best practices which suggest singular table names and no prefixes/suffixes for names. Of course that won't keep everyone happy, so we're looking into making it possible to have that in a future version. Conclusion LLBLGen Pro offers a variety of options to let the modeling system do as much work for you as possible. Hopefully you enjoyed this little highlight post and that it has given you new insights in the smaller features available to you in LLBLGen Pro, ones you might not have thought off in the first place. Enjoy!
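As a companion to the earlier sketch, the model-first naming described above (underscores at word breaks, casing directives and sequence patterns) can be approximated in a few lines. Again, this is illustrative only, not LLBLGen Pro's actual code, and the SEQ_ prefix is an assumption based on the SEQ_ORDER example:

using System.Text.RegularExpressions;

// Illustrative only: a rough sketch of the model-first direction described
// above, turning an entity name into a table name and a sequence name.
public enum CasingRule { AsProjectElement, AllUpperCase, AllLowerCase }

public static class RelationalNameConstruction
{
    public static string BuildTableName(string entityName, bool insertUnderscoreAtWordBreak, CasingRule casing)
    {
        var name = entityName;
        if (insertUnderscoreAtWordBreak)
        {
            // Word breaks in a PasCal-cased entity name are case differences.
            name = Regex.Replace(name, @"(?<=[a-z0-9])(?=[A-Z])", "_");
        }
        switch (casing)
        {
            case CasingRule.AllUpperCase: return name.ToUpperInvariant();
            case CasingRule.AllLowerCase: return name.ToLowerInvariant();
            default: return name; // AsProjectElement keeps the casing as-is.
        }
    }

    // The sequence pattern is configurable in the designer; "SEQ_" + table name
    // is just an assumption matching the SEQ_ORDER example above.
    public static string BuildSequenceName(string tableName) => "SEQ_" + tableName;
}

// Example: BuildTableName("OrderLine", true, CasingRule.AllUpperCase) -> "ORDER_LINE",
// and BuildSequenceName("ORDER") -> "SEQ_ORDER".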

    Read the article

  • Flow-Design Cheat Sheet &ndash; Part I, Notation

    - by Ralf Westphal
You want to avoid the pitfalls of object oriented design? Then this is the right place to start. Use Flow-Oriented Analysis (FOA) and –Design (FOD or just FD for Flow-Design) to understand a problem domain and design a software solution. Flow-Orientation as described here is related to Flow-Based Programming, Event-Based Programming, Business Process Modelling, and even Event-Driven Architectures. But even though “thinking in flows” is not new, I found it helpful to deviate from those precursors for several reasons. Some aim at too big systems for the average programmer, some are concerned with only asynchronous processing, some are even not very much concerned with programming at all. What I was looking for was a design method to help in software projects of any size, be they large or tiny, involving synchronous or asynchronous processing, being local or distributed, running on the web or on the desktop or on a smartphone. That´s why I took ideas from all of the above sources and some additional ones and came up with Event-Based Components, which later got repositioned and renamed to Flow-Design. In the meantime this has generated some discussion (in the German developer community) and several teams have started to work with Flow-Design. Also I´ve conducted quite a few trainings using Flow-Orientation for design. The results are very promising. Developers find it much easier to design software using Flow-Orientation than OOAD-based object orientation. Since Flow-Orientation is moving fast and is not covered completely by a single source like a book, demand has increased for at least an overview of the current state of its notation. This page is trying to answer this demand by briefly introducing/describing every notational element as well as their translation into C# source code. Take this as a cheat sheet to put next to your whiteboard when designing software. However, please do not expect any explanation as to the reasons behind Flow-Design elements. Details on why Flow-Design at all and why in this specific way you´ll find in the literature covering the topic. Here´s a resource page on Flow-Design/Event-Based Components, if you´re able to read German. Notation Connected Functional Units The basic elements of any FOD are functional units (FU): Think of FUs as some kind of software code block processing data. For the moment forget about classes, methods, “components”, assemblies or whatever. See a FU as an abstract piece of code. Software then consists of just collaborating FUs. I´m using circles/ellipses to draw FUs. But if you like, use rectangles. Whatever suits your whiteboard needs best.   The purpose of FUs is to process input and produce output. FUs are transformational. However, FUs are not called and do not call other FUs. There is no dependency between FUs. Data just flows into a FU (input) and out of it (output). From where and where to is of no concern to a FU.   This way FUs can be concatenated in arbitrary ways:   Each FU can accept input from many sources and produce output for many sinks:   Flows Connected FUs form a flow with a start and an end. Data is entering a flow at a source, and it´s leaving it through a sink. Think of sources and sinks as special FUs which connect wires to the environment of a network of FUs.   Wiring Details Data is flowing into/out of FUs through wires. This is to allude to electrical engineering which has long been working with composable parts. Wires are attached to FUs using pins.
They are the entry/exit points for the data flowing along the wires. Input-/output pins currently need not be drawn explicitly. This is to keep designing on a whiteboard simple and quick.   Data flowing is of some type, so wires have a type attached to them. And pins have names. If there is only one input pin and output pin on a FU, though, you don´t need to mention them. The default is Process for a single input pin, and Result for a single output pin. But you´re free to give even single pins different names.   There is a shortcut in use to address a certain pin on a destination FU:   The type of the wire is put in parantheses for two reasons. 1. This way a “no-type” wire can be easily denoted, 2. this is a natural way to describe tuples of data.   To describe how much data is flowing, a star can be put next to the wire type:   Nesting – Boards and Parts If more than 5 to 10 FUs need to be put in a flow a FD starts to become hard to understand. To keep diagrams clutter free they can be nested. You can turn any FU into a flow: This leads to Flow-Designs with different levels of abstraction. A in the above illustration is a high level functional unit, A.1 and A.2 are lower level functional units. One of the purposes of Flow-Design is to be able to describe systems on different levels of abstraction and thus make it easier to understand them. Humans use abstraction/decomposition to get a grip on complexity. Flow-Design strives to support this and make levels of abstraction first class citizens for programming. You can read the above illustration like this: Functional units A.1 and A.2 detail what A is supposed to do. The whole of A´s responsibility is decomposed into smaller responsibilities A.1 and A.2. FU A thus does not do anything itself anymore! All A is responsible for is actually accomplished by the collaboration between A.1 and A.2. Since A now is not doing anything anymore except containing A.1 and A.2 functional units are devided into two categories: boards and parts. Boards are just containing other functional units; their sole responsibility is to wire them up. A is a board. Boards thus depend on the functional units nested within them. This dependency is not of a functional nature, though. Boards are not dependent on services provided by nested functional units. They are just concerned with their interface to be able to plug them together. Parts are the workhorses of flows. They contain the real domain logic. They actually transform input into output. However, they do not depend on other functional units. Please note the usage of source and sink in boards. They correspond to input-pins and output-pins of the board.   Implicit Dependencies Nesting functional units leads to a dependency tree. Boards depend on nested functional units, they are the inner nodes of the tree. Parts are independent, they are the leafs: Even though dependencies are the bane of software development, Flow-Design does not usually draw these dependencies. They are implicitly created by visually nesting functional units. And they are harmless. Boards are so simple in their functionality, they are little affected by changes in functional units they are depending on. But functional units are implicitly dependent on more than nested functional units. They are also dependent on the data types of the wires attached to them: This is also natural and thus does not need to be made explicit. And it pertains mainly to parts being dependent. 
Since boards don´t do anything with regard to a problem domain, they don´t care much about data types. Their infrastructural purpose just needs types of input/output-pins to match.   Explicit Dependencies You could say, Flow-Orientation is about tackling complexity at its root cause: that´s dependencies. “Natural” dependencies are depicted naturally, i.e. implicitly. And whereever possible dependencies are not even created. Functional units don´t know their collaborators within a flow. This is core to Flow-Orientation. That makes for high composability of functional units. A part is as independent of other functional units as a motor is from the rest of the car. And a board is as dependend on nested functional units as a motor is on a spark plug or a crank shaft. With Flow-Design software development moves closer to how hardware is constructed. Implicit dependencies are not enough, though. Sometimes explicit dependencies make designs easier – as counterintuitive this might sound. So FD notation needs a ways to denote explicit dependencies: Data flows along wires. But data does not flow along dependency relations. Instead dependency relations represent service calls. Functional unit C is depending on/calling services on functional unit S. If you want to be more specific, name the services next to the dependency relation: Although you should try to stay clear of explicit dependencies, they are fundamentally ok. See them as a way to add another dimension to a flow. Usually the functionality of the independent FU (“Customer repository” above) is orthogonal to the domain of the flow it is referenced by. If you like emphasize this by using different shapes for dependent and independent FUs like above. Such dependencies can be used to link in resources like databases or shared in-memory state. FUs can not only produce output but also can have side effects. A common pattern for using such explizit dependencies is to hook a GUI into a flow as the source and/or the sink of data: Which can be shortened to: Treat FUs others depend on as boards (with a special non-FD API the dependent part is connected to), but do not embed them in a flow in the diagram they are depended upon.   Attributes of Functional Units Creation and usage of functional units can be modified with attributes. So far the following have shown to be helpful: Singleton: FUs are by default multitons. FUs in the same of different flows with the same name refer to the same functionality, but to different instances. Think of functional units as objects that get instanciated anew whereever they appear in a design. Sometimes though it´s helpful to reuse the same instance of a functional unit; this is always due to valuable state it holds. Signify this by annotating the FU with a “(S)”. Multiton: FUs on which others depend are singletons by default. This is, because they usually are introduced where shared state comes into play. If you want to change them to be a singletons mark them with a “(M)”. Configurable: Some parts need to be configured before the can do they work in a flow. Annotate them with a “(C)” to have them initialized before any data items to be processed by them arrive. Do not assume any order in which FUs are configured. How such configuration is happening is an implementation detail. Entry point: In each design there needs to be a single part where “it all starts”. That´s the entry point for all processing. It´s like Program.Main() in C# programs. Mark the entry point part with an “(E)”. 
Quite often this will be the GUI part. How the entry point is started is an implementation detail. Just consider it the first FU to start do its job.   Patterns / Standard Parts If more than a single wire is attached to an output-pin that´s called a split (or fork). The same data is flowing on all of the wires. Remember: Flow-Designs are synchronous by default. So a split does not mean data is processed in parallel afterwards. Processing still happens synchronously and thus one branch after another. Do not assume any specific order of the processing on the different branches after the split.   It is common to do a split and let only parts of the original data flow on through the branches. This effectively means a map is needed after a split. This map can be implicit or explicit.   Although FUs can have multiple input-pins it is preferrable in most cases to combine input data from different branches using an explicit join: The default output of a join is a tuple of its input values. The default behavior of a join is to output a value whenever a new input is received. However, to produce its first output a join needs an input for all its input-pins. Other join behaviors can be: reset all inputs after an output only produce output if data arrives on certain input-pins
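The introduction above mentions that every notational element has a translation into C# source code. The sketch below shows one such translation in the event-based style the text alludes to; all class and pin names are invented for the example, so treat it as an illustration rather than an official API. A part becomes a class whose input-pins are methods and whose output-pins are events, a board only wires parts together, and a join can be expressed as a small generic part:

using System;

// One possible C# translation of the notation above, in the event-based style
// the text alludes to. Names and signatures are illustrative, not a fixed
// framework API: an input-pin becomes a method, an output-pin becomes an event.
class Normalize              // a part: contains domain logic
{
    public void Process(string input)            // input-pin "Process"
        => Result?.Invoke(input.Trim().ToLower());
    public event Action<string> Result;          // output-pin "Result"
}

class CountChars             // another part
{
    public void Process(string input)
        => Result?.Invoke(input.Length);
    public event Action<int> Result;
}

class TextLengthBoard        // a board: only wires parts together, no domain logic
{
    readonly Normalize normalize = new Normalize();
    readonly CountChars count = new CountChars();

    public TextLengthBoard()
    {
        // The wires of the flow: Normalize.Result -> CountChars.Process
        normalize.Result += count.Process;
        count.Result += n => Result?.Invoke(n);
    }

    public void Process(string input) => normalize.Process(input);  // board input-pin
    public event Action<int> Result;                                // board output-pin
}

// A minimal join with two input-pins: it produces a tuple whenever a new input
// arrives, once both pins have received at least one value (the default join
// behavior described above).
class Join<T1, T2>
{
    T1 in1; T2 in2; bool has1, has2;
    public void In1(T1 value) { in1 = value; has1 = true; TryFire(); }
    public void In2(T2 value) { in2 = value; has2 = true; TryFire(); }
    public event Action<Tuple<T1, T2>> Result;
    void TryFire() { if (has1 && has2) Result?.Invoke(Tuple.Create(in1, in2)); }
}

Wiring and running the flow then looks like: var board = new TextLengthBoard(); board.Result += n => Console.WriteLine(n); board.Process(" Hello ");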

    Read the article

  • How John Got 15x Improvement Without Really Trying

    - by rchrd
    The following article was published on a Sun Microsystems website a number of years ago by John Feo. It is still useful and worth preserving. So I'm republishing it here.  How I Got 15x Improvement Without Really Trying John Feo, Sun Microsystems Taking ten "personal" program codes used in scientific and engineering research, the author was able to get from 2 to 15 times performance improvement easily by applying some simple general optimization techniques. Introduction Scientific research based on computer simulation depends on the simulation for advancement. The research can advance only as fast as the computational codes can execute. The codes' efficiency determines both the rate and quality of results. In the same amount of time, a faster program can generate more results and can carry out a more detailed simulation of physical phenomena than a slower program. Highly optimized programs help science advance quickly and insure that monies supporting scientific research are used as effectively as possible. Scientific computer codes divide into three broad categories: ISV, community, and personal. ISV codes are large, mature production codes developed and sold commercially. The codes improve slowly over time both in methods and capabilities, and they are well tuned for most vendor platforms. Since the codes are mature and complex, there are few opportunities to improve their performance solely through code optimization. Improvements of 10% to 15% are typical. Examples of ISV codes are DYNA3D, Gaussian, and Nastran. Community codes are non-commercial production codes used by a particular research field. Generally, they are developed and distributed by a single academic or research institution with assistance from the community. Most users just run the codes, but some develop new methods and extensions that feed back into the general release. The codes are available on most vendor platforms. Since these codes are younger than ISV codes, there are more opportunities to optimize the source code. Improvements of 50% are not unusual. Examples of community codes are AMBER, CHARM, BLAST, and FASTA. Personal codes are those written by single users or small research groups for their own use. These codes are not distributed, but may be passed from professor-to-student or student-to-student over several years. They form the primordial ocean of applications from which community and ISV codes emerge. Government research grants pay for the development of most personal codes. This paper reports on the nature and performance of this class of codes. Over the last year, I have looked at over two dozen personal codes from more than a dozen research institutions. The codes cover a variety of scientific fields, including astronomy, atmospheric sciences, bioinformatics, biology, chemistry, geology, and physics. The sources range from a few hundred lines to more than ten thousand lines, and are written in Fortran, Fortran 90, C, and C++. For the most part, the codes are modular, documented, and written in a clear, straightforward manner. They do not use complex language features, advanced data structures, programming tricks, or libraries. I had little trouble understanding what the codes did or how data structures were used. Most came with a makefile. Surprisingly, only one of the applications is parallel. All developers have access to parallel machines, so availability is not an issue. Several tried to parallelize their applications, but stopped after encountering difficulties. 
Lack of education and a perception that parallelism is difficult prevented most from trying. I parallelized several of the codes using OpenMP, and did not judge any of the codes as difficult to parallelize. Even more surprising than the lack of parallelism is the inefficiency of the codes. I was able to get large improvements in performance in a matter of a few days applying simple optimization techniques. Table 1 lists ten representative codes [names and affiliation are omitted to preserve anonymity]. Improvements on one processor range from 2x to 15.5x with a simple average of 4.75x. I did not use sophisticated performance tools or drill deep into the program's execution character as one would do when tuning ISV or community codes. Using only a profiler and source line timers, I identified inefficient sections of code and improved their performance by inspection. The changes were at a high level. I am sure there is another factor of 2 or 3 in each code, and more if the codes are parallelized. The study’s results show that personal scientific codes are running many times slower than they should and that the problem is pervasive. Computational scientists are not sloppy programmers; however, few are trained in the art of computer programming or code optimization. I found that most have a working knowledge of some programming language and standard software engineering practices; but they do not know, or think about, how to make their programs run faster. They simply do not know the standard techniques used to make codes run faster. In fact, they do not even perceive that such techniques exist. The case studies described in this paper show that applying simple, well known techniques can significantly increase the performance of personal codes. It is important that the scientific community and the Government agencies that support scientific research find ways to better educate academic scientific programmers. The inefficiency of their codes is so bad that it is retarding both the quality and progress of scientific research. # cacheperformance redundantoperations loopstructures performanceimprovement 1 x x 15.5 2 x 2.8 3 x x 2.5 4 x 2.1 5 x x 2.0 6 x 5.0 7 x 5.8 8 x 6.3 9 2.2 10 x x 3.3 Table 1 — Area of improvement and performance gains of 10 codes The remainder of the paper is organized as follows: sections 2, 3, and 4 discuss the three most common sources of inefficiencies in the codes studied. These are cache performance, redundant operations, and loop structures. Each section includes several examples. The last section summaries the work and suggests a possible solution to the issues raised. Optimizing cache performance Commodity microprocessor systems use caches to increase memory bandwidth and reduce memory latencies. Typical latencies from processor to L1, L2, local, and remote memory are 3, 10, 50, and 200 cycles, respectively. Moreover, bandwidth falls off dramatically as memory distances increase. Programs that do not use cache effectively run many times slower than programs that do. When optimizing for cache, the biggest performance gains are achieved by accessing data in cache order and reusing data to amortize the overhead of cache misses. Secondary considerations are prefetching, associativity, and replacement; however, the understanding and analysis required to optimize for the latter are probably beyond the capabilities of the non-expert. Much can be gained simply by accessing data in the correct order and maximizing data reuse. 
6 out of the 10 codes studied here benefited from such high level optimizations. Array Accesses The most important cache optimization is the most basic: accessing Fortran array elements in column order and C array elements in row order. Four of the ten codes—1, 2, 4, and 10—got it wrong. Compilers will restructure nested loops to optimize cache performance, but may not do so if the loop structure is too complex, or the loop body includes conditionals, complex addressing, or function calls. In code 1, the compiler failed to invert a key loop because of complex addressing do I = 0, 1010, delta_x IM = I - delta_x IP = I + delta_x do J = 5, 995, delta_x JM = J - delta_x JP = J + delta_x T1 = CA1(IP, J) + CA1(I, JP) T2 = CA1(IM, J) + CA1(I, JM) S1 = T1 + T2 - 4 * CA1(I, J) CA(I, J) = CA1(I, J) + D * S1 end do end do In code 2, the culprit is conditionals do I = 1, N do J = 1, N If (IFLAG(I,J) .EQ. 0) then T1 = Value(I, J-1) T2 = Value(I-1, J) T3 = Value(I, J) T4 = Value(I+1, J) T5 = Value(I, J+1) Value(I,J) = 0.25 * (T1 + T2 + T5 + T4) Delta = ABS(T3 - Value(I,J)) If (Delta .GT. MaxDelta) MaxDelta = Delta endif enddo enddo I fixed both programs by inverting the loops by hand. Code 10 has three-dimensional arrays and triply nested loops. The structure of the most computationally intensive loops is too complex to invert automatically or by hand. The only practical solution is to transpose the arrays so that the dimension accessed by the innermost loop is in cache order. The arrays can be transposed at construction or prior to entering a computationally intensive section of code. The former requires all array references to be modified, while the latter is cost effective only if the cost of the transpose is amortized over many accesses. I used the second approach to optimize code 10. Code 5 has four-dimensional arrays and loops are nested four deep. For all of the reasons cited above the compiler is not able to restructure three key loops. Assume C arrays and let the four dimensions of the arrays be i, j, k, and l. In the original code, the index structure of the three loops is L1: for i L2: for i L3: for i for l for l for j for k for j for k for j for k for l So only L3 accesses array elements in cache order. L1 is a very complex loop—much too complex to invert. I brought the loop into cache alignment by transposing the second and fourth dimensions of the arrays. Since the code uses a macro to compute all array indexes, I effected the transpose at construction and changed the macro appropriately. The dimensions of the new arrays are now: i, l, k, and j. L3 is a simple loop and easily inverted. L2 has a loop-carried scalar dependence in k. By promoting the scalar name that carries the dependence to an array, I was able to invert the third and fourth subloops aligning the loop with cache. Code 5 is by far the most difficult of the four codes to optimize for array accesses; but the knowledge required to fix the problems is no more than that required for the other codes. I would judge this code at the limits of, but not beyond, the capabilities of appropriately trained computational scientists. Array Strides When a cache miss occurs, a line (64 bytes) rather than just one word is loaded into the cache. If data is accessed stride 1, than the cost of the miss is amortized over 8 words. Any stride other than one reduces the cost savings. Two of the ten codes studied suffered from non-unit strides. The codes represent two important classes of "strided" codes. 
Code 1 employs a multi-grid algorithm to reduce time to convergence. The grids are every tenth, fifth, second, and unit element. Since time to convergence is inversely proportional to the distance between elements, coarse grids converge quickly providing good starting values for finer grids. The better starting values further reduce the time to convergence. The downside is that grids of every nth element, n > 1, introduce non-unit strides into the computation. In the original code, much of the savings of the multi-grid algorithm were lost due to this problem. I eliminated the problem by compressing (copying) coarse grids into continuous memory, and rewriting the computation as a function of the compressed grid. On convergence, I copied the final values of the compressed grid back to the original grid. The savings gained from unit stride access of the compressed grid more than paid for the cost of copying. Using compressed grids, the loop from code 1 included in the previous section becomes do j = 1, GZ do i = 1, GZ T1 = CA(i+0, j-1) + CA(i-1, j+0) T4 = CA1(i+1, j+0) + CA1(i+0, j+1) S1 = T1 + T4 - 4 * CA1(i+0, j+0) CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1 enddo enddo where CA and CA1 are compressed arrays of size GZ. Code 7 traverses a list of objects selecting objects for later processing. The labels of the selected objects are stored in an array. The selection step has unit stride, but the processing steps have irregular stride. A fix is to save the parameters of the selected objects in temporary arrays as they are selected, and pass the temporary arrays to the processing functions. The fix is practical if the same parameters are used in selection as in processing, or if processing comprises a series of distinct steps which use overlapping subsets of the parameters. Both conditions are true for code 7, so I achieved significant improvement by copying parameters to temporary arrays during selection. Data reuse In the previous sections, we optimized for spatial locality. It is also important to optimize for temporal locality. Once read, a datum should be used as much as possible before it is forced from cache. Loop fusion and loop unrolling are two techniques that increase temporal locality. Unfortunately, both techniques increase register pressure—as loop bodies become larger, the number of registers required to hold temporary values grows. Once register spilling occurs, any gains evaporate quickly. For multiprocessors with small register sets or small caches, the sweet spot can be very small. In the ten codes presented here, I found no opportunities for loop fusion and only two opportunities for loop unrolling (codes 1 and 3). In code 1, unrolling the outer and inner loop one iteration increases the number of result values computed by the loop body from 1 to 4, do J = 1, GZ-2, 2 do I = 1, GZ-2, 2 T1 = CA1(i+0, j-1) + CA1(i-1, j+0) T2 = CA1(i+1, j-1) + CA1(i+0, j+0) T3 = CA1(i+0, j+0) + CA1(i-1, j+1) T4 = CA1(i+1, j+0) + CA1(i+0, j+1) T5 = CA1(i+2, j+0) + CA1(i+1, j+1) T6 = CA1(i+1, j+1) + CA1(i+0, j+2) T7 = CA1(i+2, j+1) + CA1(i+1, j+2) S1 = T1 + T4 - 4 * CA1(i+0, j+0) S2 = T2 + T5 - 4 * CA1(i+1, j+0) S3 = T3 + T6 - 4 * CA1(i+0, j+1) S4 = T4 + T7 - 4 * CA1(i+1, j+1) CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1 CA(i+1, j+0) = CA1(i+1, j+0) + DD * S2 CA(i+0, j+1) = CA1(i+0, j+1) + DD * S3 CA(i+1, j+1) = CA1(i+1, j+1) + DD * S4 enddo enddo The loop body executes 12 reads, whereas as the rolled loop shown in the previous section executes 20 reads to compute the same four values. 
In code 3, two loops are unrolled 8 times and one loop is unrolled 4 times. Here is the before for (k = 0; k < NK[u]; k++) { sum = 0.0; for (y = 0; y < NY; y++) { sum += W[y][u][k] * delta[y]; } backprop[i++]=sum; } and after code for (k = 0; k < KK - 8; k+=8) { sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0; sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0; for (y = 0; y < NY; y++) { sum0 += W[y][0][k+0] * delta[y]; sum1 += W[y][0][k+1] * delta[y]; sum2 += W[y][0][k+2] * delta[y]; sum3 += W[y][0][k+3] * delta[y]; sum4 += W[y][0][k+4] * delta[y]; sum5 += W[y][0][k+5] * delta[y]; sum6 += W[y][0][k+6] * delta[y]; sum7 += W[y][0][k+7] * delta[y]; } backprop[k+0] = sum0; backprop[k+1] = sum1; backprop[k+2] = sum2; backprop[k+3] = sum3; backprop[k+4] = sum4; backprop[k+5] = sum5; backprop[k+6] = sum6; backprop[k+7] = sum7; } for one of the loops unrolled 8 times. Optimizing for temporal locality is the most difficult optimization considered in this paper. The concepts are not difficult, but the sweet spot is small. Identifying where the program can benefit from loop unrolling or loop fusion is not trivial. Moreover, it takes some effort to get it right. Still, educating scientific programmers about temporal locality and teaching them how to optimize for it will pay dividends. Reducing instruction count Execution time is a function of instruction count. Reduce the count and you usually reduce the time. The best solution is to use a more efficient algorithm; that is, an algorithm whose order of complexity is smaller, that converges quicker, or is more accurate. Optimizing source code without changing the algorithm yields smaller, but still significant, gains. This paper considers only the latter because the intent is to study how much better codes can run if written by programmers schooled in basic code optimization techniques. The ten codes studied benefited from three types of "instruction reducing" optimizations. The two most prevalent were hoisting invariant memory and data operations out of inner loops. The third was eliminating unnecessary data copying. The nature of these inefficiencies is language dependent. Memory operations The semantics of C make it difficult for the compiler to determine all the invariant memory operations in a loop. The problem is particularly acute for loops in functions since the compiler may not know the values of the function's parameters at every call site when compiling the function. Most compilers support pragmas to help resolve ambiguities; however, these pragmas are not comprehensive and there is no standard syntax. To guarantee that invariant memory operations are not executed repetitively, the user has little choice but to hoist the operations by hand. The problem is not as severe in Fortran programs because in the absence of equivalence statements, it is a violation of the language's semantics for two names to share memory. Codes 3 and 5 are C programs. In both cases, the compiler did not hoist all invariant memory operations from inner loops. Consider the following loop from code 3 for (y = 0; y < NY; y++) { i = 0; for (u = 0; u < NU; u++) { for (k = 0; k < NK[u]; k++) { dW[y][u][k] += delta[y] * I1[i++]; } } } Since dW[y][u] can point to the same memory space as delta for one or more values of y and u, assignment to dW[y][u][k] may change the value of delta[y]. 
In reality, dW and delta do not overlap in memory, so I rewrote the loop as for (y = 0; y < NY; y++) { i = 0; Dy = delta[y]; for (u = 0; u < NU; u++) { for (k = 0; k < NK[u]; k++) { dW[y][u][k] += Dy * I1[i++]; } } } Failure to hoist invariant memory operations may be due to complex address calculations. If the compiler can not determine that the address calculation is invariant, then it can hoist neither the calculation nor the associated memory operations. As noted above, code 5 uses a macro to address four-dimensional arrays #define MAT4D(a,q,i,j,k) (double *)((a)->data + (q)*(a)->strides[0] + (i)*(a)->strides[3] + (j)*(a)->strides[2] + (k)*(a)->strides[1]) The macro is too complex for the compiler to understand and so, it does not identify any subexpressions as loop invariant. The simplest way to eliminate the address calculation from the innermost loop (over i) is to define a0 = MAT4D(a,q,0,j,k) before the loop and then replace all instances of *MAT4D(a,q,i,j,k) in the loop with a0[i] A similar problem appears in code 6, a Fortran program. The key loop in this program is do n1 = 1, nh nx1 = (n1 - 1) / nz + 1 nz1 = n1 - nz * (nx1 - 1) do n2 = 1, nh nx2 = (n2 - 1) / nz + 1 nz2 = n2 - nz * (nx2 - 1) ndx = nx2 - nx1 ndy = nz2 - nz1 gxx = grn(1,ndx,ndy) gyy = grn(2,ndx,ndy) gxy = grn(3,ndx,ndy) balance(n1,1) = balance(n1,1) + (force(n2,1) * gxx + force(n2,2) * gxy) * h1 balance(n1,2) = balance(n1,2) + (force(n2,1) * gxy + force(n2,2) * gyy)*h1 end do end do The programmer has written this loop well—there are no loop invariant operations with respect to n1 and n2. However, the loop resides within an iterative loop over time and the index calculations are independent with respect to time. Trading space for time, I precomputed the index values prior to the entering the time loop and stored the values in two arrays. I then replaced the index calculations with reads of the arrays. Data operations Ways to reduce data operations can appear in many forms. Implementing a more efficient algorithm produces the biggest gains. The closest I came to an algorithm change was in code 4. This code computes the inner product of K-vectors A(i) and B(j), 0 = i < N, 0 = j < M, for most values of i and j. Since the program computes most of the NM possible inner products, it is more efficient to compute all the inner products in one triply-nested loop rather than one at a time when needed. The savings accrue from reading A(i) once for all B(j) vectors and from loop unrolling. for (i = 0; i < N; i+=8) { for (j = 0; j < M; j++) { sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0; sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0; for (k = 0; k < K; k++) { sum0 += A[i+0][k] * B[j][k]; sum1 += A[i+1][k] * B[j][k]; sum2 += A[i+2][k] * B[j][k]; sum3 += A[i+3][k] * B[j][k]; sum4 += A[i+4][k] * B[j][k]; sum5 += A[i+5][k] * B[j][k]; sum6 += A[i+6][k] * B[j][k]; sum7 += A[i+7][k] * B[j][k]; } C[i+0][j] = sum0; C[i+1][j] = sum1; C[i+2][j] = sum2; C[i+3][j] = sum3; C[i+4][j] = sum4; C[i+5][j] = sum5; C[i+6][j] = sum6; C[i+7][j] = sum7; }} This change requires knowledge of a typical run; i.e., that most inner products are computed. The reasons for the change, however, derive from basic optimization concepts. It is the type of change easily made at development time by a knowledgeable programmer. In code 5, we have the data version of the index optimization in code 6. 
Here a very expensive computation is a function of the loop indices and so cannot be hoisted out of the loop; however, the computation is invariant with respect to an outer iterative loop over time. We can compute its value for each iteration of the computation loop prior to entering the time loop and save the values in an array. The increase in memory required to store the values is small in comparison to the large savings in time. The main loop in Code 8 is doubly nested. The inner loop includes a series of guarded computations; some are a function of the inner loop index but not the outer loop index while others are a function of the outer loop index but not the inner loop index for (j = 0; j < N; j++) { for (i = 0; i < M; i++) { r = i * hrmax; R = A[j]; temp = (PRM[3] == 0.0) ? 1.0 : pow(r, PRM[3]); high = temp * kcoeff * B[j] * PRM[2] * PRM[4]; low = high * PRM[6] * PRM[6] / (1.0 + pow(PRM[4] * PRM[6], 2.0)); kap = (R > PRM[6]) ? high * R * R / (1.0 + pow(PRM[4]*r, 2.0) : low * pow(R/PRM[6], PRM[5]); < rest of loop omitted > }} Note that the value of temp is invariant to j. Thus, we can hoist the computation for temp out of the loop and save its values in an array. for (i = 0; i < M; i++) { r = i * hrmax; TEMP[i] = pow(r, PRM[3]); } [N.B. – the case for PRM[3] = 0 is omitted and will be reintroduced later.] We now hoist out of the inner loop the computations invariant to i. Since the conditional guarding the value of kap is invariant to i, it behooves us to hoist the computation out of the inner loop, thereby executing the guard once rather than M times. The final version of the code is for (j = 0; j < N; j++) { R = rig[j] / 1000.; tmp1 = kcoeff * par[2] * beta[j] * par[4]; tmp2 = 1.0 + (par[4] * par[4] * par[6] * par[6]); tmp3 = 1.0 + (par[4] * par[4] * R * R); tmp4 = par[6] * par[6] / tmp2; tmp5 = R * R / tmp3; tmp6 = pow(R / par[6], par[5]); if ((par[3] == 0.0) && (R > par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp5; } else if ((par[3] == 0.0) && (R <= par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp4 * tmp6; } else if ((par[3] != 0.0) && (R > par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp5; } else if ((par[3] != 0.0) && (R <= par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp4 * tmp6; } for (i = 0; i < M; i++) { kap = KAP[i]; r = i * hrmax; < rest of loop omitted > } } Maybe not the prettiest piece of code, but certainly much more efficient than the original loop, Copy operations Several programs unnecessarily copy data from one data structure to another. This problem occurs in both Fortran and C programs, although it manifests itself differently in the two languages. Code 1 declares two arrays—one for old values and one for new values. At the end of each iteration, the array of new values is copied to the array of old values to reset the data structures for the next iteration. This problem occurs in Fortran programs not included in this study and in both Fortran 77 and Fortran 90 code. Introducing pointers to the arrays and swapping pointer values is an obvious way to eliminate the copying; but pointers is not a feature that many Fortran programmers know well or are comfortable using. An easy solution not involving pointers is to extend the dimension of the value array by 1 and use the last dimension to differentiate between arrays at different times. For example, if the data space is N x N, declare the array (N, N, 2). 
Then store the problem’s initial values in (_, _, 2) and define the scalar names new = 2 and old = 1. At the start of each iteration, swap old and new to reset the arrays. The old–new copy problem did not appear in any C program. In programs that had new and old values, the code swapped pointers to reset data structures. Where unnecessary copying did occur was in structure assignment and parameter passing. Structures in C are handled much like scalars. Assignment causes the data space of the right-hand name to be copied to the data space of the left-hand name. Similarly, when a structure is passed to a function, the data space of the actual parameter is copied to the data space of the formal parameter. If the structure is large and the assignment or function call is in an inner loop, then copying costs can grow quite large. While none of the ten programs considered here manifested this problem, it did occur in programs not included in the study. A simple fix is always to refer to structures via pointers. Optimizing loop structures Since scientific programs spend almost all their time in loops, efficient loops are the key to good performance. Conditionals, function calls, little instruction level parallelism, and large numbers of temporary values make it difficult for the compiler to generate tightly packed, highly efficient code. Conditionals and function calls introduce jumps that disrupt code flow. Users should eliminate or isolate conditionals to their own loops as much as possible. Often logical expressions can be substituted for if-then-else statements. For example, code 2 includes the following snippet MaxDelta = 0.0 do J = 1, N do I = 1, M < code omitted > Delta = abs(OldValue - NewValue) if (Delta > MaxDelta) MaxDelta = Delta enddo enddo if (MaxDelta .gt. 0.001) goto 200 Since the only use of MaxDelta is to control the jump to 200 and all that matters is whether or not it is greater than 0.001, I made MaxDelta a boolean and rewrote the snippet as MaxDelta = .false. do J = 1, N do I = 1, M < code omitted > Delta = abs(OldValue - NewValue) MaxDelta = MaxDelta .or. (Delta .gt. 0.001) enddo enddo if (MaxDelta) goto 200 thereby eliminating the conditional expression from the inner loop. A microprocessor can execute many instructions per instruction cycle. Typically, it can execute one or more memory, floating point, integer, and jump operations. To be executed simultaneously, the operations must be independent. Thick loops tend to have more instruction level parallelism than thin loops. Moreover, they reduce memory traffic by maximizing data reuse. Loop unrolling and loop fusion are two techniques to increase the size of loop bodies. Several of the codes studied benefitted from loop unrolling, but none benefitted from loop fusion. This observation is not too surprising since it is the general tendency of programmers to write thick loops. As loops become thicker, the number of temporary values grows, increasing register pressure. If registers spill, then memory traffic increases and code flow is disrupted. A thick loop with many temporary values may execute slower than an equivalent series of thin loops. The biggest gain will be achieved if the thick loop can be split into a series of independent loops eliminating the need to write and read temporary arrays.
I found such an occasion in code 10 where I split the loop

do i = 1, n
  do j = 1, m
    A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
    B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
    A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
    B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
    C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
    D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
    C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
    D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
  end do
end do

into two disjoint loops

do i = 1, n
  do j = 1, m
    A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
    B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
    A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
    B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
  end do
end do

do i = 1, n
  do j = 1, m
    C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
    D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
    C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
    D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
  end do
end do

Conclusions

Over the course of the last year, I have had the opportunity to work with over two dozen academic scientific programmers at leading research universities. Their research interests span a broad range of scientific fields. Except for two programs that relied almost exclusively on library routines (matrix multiply and fast Fourier transform), I was able to improve significantly the single-processor performance of all codes. Improvements range from 2x to 15.5x, with a simple average of 4.75x. Changes to the source code were at a very high level. I did not use sophisticated techniques or programming tools to discover inefficiencies or effect the changes. Only one code was parallel, despite the availability of parallel systems to all developers.

Clearly, we have a problem—personal scientific research codes are highly inefficient and not running parallel. The developers are unaware of simple optimization techniques to make programs run faster. They lack education in the art of code optimization and parallel programming. I do not believe we can fix the problem by publishing additional books or training manuals. To date, the developers in question have not studied the books or manuals available, and are unlikely to do so in the future. Short courses are a possible solution, but I believe they are too concentrated to be of much use. The general concepts can be taught in a three or four day course, but that is not enough time for students to practice what they learn and acquire the experience to apply and extend the concepts to their codes. Practice is the key to becoming proficient at optimization.

I recommend that graduate students be required to take a semester-length course in optimization and parallel programming. We would never give someone access to state-of-the-art scientific equipment costing hundreds of thousands of dollars without first requiring them to demonstrate that they know how to use the equipment. Yet the criterion for time on state-of-the-art supercomputers is at most an interesting project. Requestors are never asked to demonstrate that they know how to use the system, or that they can use the system effectively. A semester course would teach them the required skills. Government agencies that fund academic scientific research pay for most of the computer systems supporting scientific research as well as the development of most personal scientific codes.
These agencies should require graduate schools to offer a course in optimization and parallel programming as a requirement for funding.

About the Author

John Feo received his Ph.D. in Computer Science from The University of Texas at Austin in 1986. After graduate school, Dr. Feo worked at Lawrence Livermore National Laboratory, where he was the Group Leader of the Computer Research Group and principal investigator of the Sisal Language Project. In 1997, Dr. Feo joined Tera Computer Company, where he was project manager for the MTA and oversaw the programming and evaluation of the MTA at the San Diego Supercomputer Center. In 2000, Dr. Feo joined Sun Microsystems as an HPC application specialist. He works with university research groups to optimize and parallelize scientific codes. Dr. Feo has published over two dozen research articles in the areas of parallel programming, parallel programming languages, and application performance.

    Read the article

  • CodePlex Daily Summary for Monday, November 19, 2012

    CodePlex Daily Summary for Monday, November 19, 2012Popular ReleasesmojoPortal: 2.3.9.4: see release notes on mojoportal.com http://www.mojoportal.com/mojoportal-2394-released Note that we have separate deployment packages for .NET 3.5 and .NET 4.0, but we recommend you to use .NET 4, we will probably drop support for .NET 3.5 once .NET 4.5 is available The deployment package downloads on this page are pre-compiled and ready for production deployment, they contain no C# source code and are not intended for use in Visual Studio. To download the source code see getting the lates...VidCoder: 1.4.6 Beta: Brought back the x264 advanced options panel due to popular demand. Thank you for all the feedback. x264 Preset/Profile/Tune/Level has been moved back to the Video tab, along with a copy of the "extra options" string. Added Fast Decode and Zero Latency checkboxes to support multiple Tunes. Added cropping option "None". Audio bitrates that are incompatible with the encoder (such as MP3 > 320 kbps) are no longer preset on the list. Fixed crash on opening VidCoder after de-selecting "re...Metodología General Ajustada - MGA: 03.05.02: Cambios Parmenio: Correcciones al formato F03 de programación, se deja en comentarios la validación de la unidad de la actividad sea igul a la del indicador. Cambios John: Integración de código con cambios enviados por Parmenio Bonilla. Generación de instaladores. Soporte técnico por correo electrónico, telefónico y en sitio.SPListViewFilter: Version 1.8: Fixed some bugsDotNetNuke® Store: 03.01.07: What's New in this release? IMPORTANT: this version requires DotNetNuke 04.06.02 or higher! DO NOT REPORT BUGS HERE IN THE ISSUE TRACKER, INSTEAD USE THE DotNetNuke Store Forum! Bugs corrected: - Replaced some hard coded references to the default address provider classes by the corresponding interfaces to allow the creation of another address provider with a different name. New Features: - Added the 'pickup' delivery option at checkout. - Added the 'no delivery' option in the Store Admin ...Bundle Transformer - a modular extension for ASP.NET Web Optimization Framework: Bundle Transformer 1.6.10: Version: 1.6.10 Published: 11/18/2012 Now almost all of the Bundle Transformer's assemblies is signed (except BundleTransformer.Yui.dll); In BundleTransformer.SassAndScss the SassAndCoffee.Ruby library was replaced by my own implementation of the Sass- and SCSS-compiler (based on code of the SassAndCoffee.Ruby library version 2.0.2.0); In BundleTransformer.CoffeeScript added support of CoffeeScript version 1.4.0-3; In BundleTransformer.TypeScript added support of TypeScript version 0....ExtJS based ASP.NET 2.0 Controls: FineUI v3.2.0: +2012-11-18 v3.2.0 -?????????????????SelectedValueArray????????(◇?◆:)。 -???????????????????RecoverPropertiesFromJObject????(〓?〓、????、??、Vian_Pan)。 -????????????,?????????????,???SelectedValueArray???????(sam.chang)。 -??Alert.Show???????????(swtseaman)。 -???????????????,??Icon??IconUrl????(swtseaman)。 -?????????TimePicker(??)。 -?????????,??/res.axd?css=blue.css&v=1。 -????????,?????????????,???????。 -????MenuCheckBox(???????)。 -?RadioButton??AutoPostBack??。 -???????FCKEditor?????????...BugNET Issue Tracker: BugNET 1.2: Please read our release notes for BugNET 1.2: http://blog.bugnetproject.com/bugnet-1-2-has-been-released Please do not post questions as reviews. Questions should be posted in the Discussions tab, where they will usually get promptly responded to. 
If you post a question as a review, you will pollute the rating, and you won't get an answer.Paint.NET PSD Plugin: 2.2.0: Changes: Layer group visibility is now applied to all layers within the group. This greatly improves the visual fidelity of complex PSD files that have hidden layer groups. Layer group names are prefixed so that users can get an indication of the layer group hierarchy. (Paint.NET has a flat list of layers, so the hierarchy is flattened out on load.) The progress bar now reports status when saving PSD files, instead of showing an indeterminate rolling bar. Performance improvement of 1...CRM 2011 Visual Ribbon Editor: Visual Ribbon Editor (1.3.1116.7): [IMPROVED] Detailed error message descriptions for FaultException [FIX] Fixed bug in rule CrmOfflineAccessStateRule which had incorrect State attribute name [FIX] Fixed bug in rule EntityPropertyRule which was missing PropertyValue attribute [FIX] Current connection information was not displayed in status bar while refreshing list of entitiesSuper Metroid Randomizer: Super Metroid Randomizer v5: v5 -Added command line functionality for automation purposes. -Implented Krankdud's change to randomize the Etecoon's item. NOTE: this version will not accept seeds from a previous version. The seed format has changed by necessity. v4 -Started putting version numbers at the top of the form. -Added a warning when suitless Maridia is required in a parsed seed. v3 -Changed seed to only generate filename-legal characters. Using old seeds will still work exactly the same. -Files can now be saved...Caliburn Micro: WPF, Silverlight, WP7 and WinRT/Metro made easy.: Caliburn.Micro v1.4: Changes This version includes many bug fixes across all platforms, improvements to nuget support and...the biggest news of all...full support for both WinRT and WP8. Download Contents Debug and Release Assemblies Samples Readme.txt License.txt Packages Available on Nuget Caliburn.Micro – The full framework compiled into an assembly. Caliburn.Micro.Start - Includes Caliburn.Micro plus a starting bootstrapper, view model and view. Caliburn.Micro.Container – The Caliburn.Micro invers...DirectX Tool Kit: November 15, 2012: November 15, 2012 Added support for WIC2 when available on Windows 8 and Windows 7 with KB 2670838 Cleaned up warning level 4 warningsDotNetNuke® Community Edition CMS: 06.02.05: Major Highlights Updated the system so that it supports nested folders in the App_Code folder Updated the Global Error Handling so that when errors within the global.asax handler happen, they are caught and shown in a page displaying the original HTTP error code Fixed issue that stopped users from specifying Link URLs that open on a new window Security FixesFixed issue in the Member Directory module that could show members to non authenticated users Fixed issue in the Lists modul...fastJSON: v2.0.10: - added MonoDroid projectxUnit.net Contrib: xunitcontrib-resharper 0.7 (RS 7.1, 6.1.1): xunitcontrib release 0.6.1 (ReSharper runner) This release provides a test runner plugin for Resharper 7.1 RTM and 6.1.1, targetting all versions of xUnit.net. (See the xUnit.net project to download xUnit.net itself.) This release drops 7.0 support and targets the latest revisions of the last two major versions of ReSharper (namely 7.0 and 6.1.1). Copies of the plugin that support previous verions of ReSharper can be downloaded from this release. Also note that all builds work against ALL ...OnTopReplica: Release 3.4: Update to the 3 version with major fixes and improvements. 
Compatible with Windows 8. Now runs (and requires) .NET Framework v.4.0. Added relative mode for region selection (allows the user to select regions as margins from the borders of the thumbnail, useful for windows which have a variable size but fixed size controls, like video players). Improved window seeking when restoring cloned thumbnail or cloning a window by title or by class. Improved settings persistence. Improved co...DotSpatial: DotSpatial 1.4: This is a Minor Release. See the changes in the issue tracker. Minimal -- includes DotSpatial core and essential extensions Extended -- includes debugging symbols and additional extensions Tutorials are available. Just want to run the software? End user (non-programmer) version available branded as MapWindow Want to add your own feature? Develop a plugin, using the template and contribute to the extension feed (you can also write extensions that you distribute in other ways). Components ...WinRT XAML Toolkit: WinRT XAML Toolkit - 1.3.5: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features Attachable Behaviors AwaitableUI extensions Controls Converters Debugging helpers Extension methods Imaging helpers IO helpers VisualTree helpers Samples Recent changes Docum...AcDown?????: AcDown????? v4.3: ??●AcDown??????????、??、??、???????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown??????????????????,????????????????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7/8 ???? 32??64? ???Linux ????(1)????????Windows XP???,????????.NET Framework 2.0???(x86),?????"?????????"??? (2)???????????Linux???,????????Mono?? ??2...New Projects1119P1: So far, I haven't found any bugcoolow: a simple projectDatabase Tools: Windows application for managing SQL Server databases.Editable WILEz Books: Sorry for my bad enlgish. I'm italian. With this project you can write a simple book with images, you can customize text font, color, beckground immage ecc.. simply with a editable txt file.EstimateTracker: Program to track estimate time using XAML, MVVM, WPF, ninject Ioc, nhibernate and Microsoft PrismExtJS based ASP.NET 2.0 Controls: About FineUI ExtJS based professional ASP.NET 2.0 Controls. FineUI Mission Create No JavaScript, No CSS, No UpdatePanel, No ViewState and No WebServices web apGCU: This project supports the Gedcom Utility which allows users to review many Gedcom files for certain information.Heng.Elements: Entity Relationship ModelingiRoboticsPrototype1: iRobotics Prototype 1. Under developmentJamaican kitchen: This a website which will display various jamaican food. these dishes which be ranging from mild to spicey food. JarvisProject1: ???? ?????? ??????? ??????? - ????????????? ?????????? ? ???????? ????????? ??????Java 2D Game Developing Setup: Set of classes as "interface" between game powering code and creative game development in Java. KZ.Express: Project is build to resolve bill managementlbpWGaeBlog: my blog on gaeLogistic Management System: D? án giao nh?n v?n t?i logistics. D? án có r?t nhi?u ký t? D? án giao nh?n v?n t?i logistics. D? án có r?t nhi?u ký t?D? án giao nh?n v?n t?i logistics. D? 
án Managed3D: Managed3D is a scene graph API that allows developers to have both high-level and low-level access to objects in a 3-dimensional scene.MicroTao: MicroTao is for the future.MVC4 ASPX TourBooking: Website Booking tourMyDay: Simple little spike, using a todo list. Spiking MVC 4, Twitter Bootstrap, code-first migrations in EF 5, and AppHarbor deployments. mytestmusicstoremvc: my Study mvcNDateTime: NDateTime is a javascript library that wrap the most commons properties and methods of .NET DateTime object.onexin: This is test.PersiaCaptchaHandler: A Persian Captcha that use number in lettersPhoneGap/Cordova Libs, PhoneGap Demos, PhoneGap Solutions, PhoneGap Practices: Here you can find PhoneGap/Cordova Libs, demos, solutions, best practices, architectures, etc. Most important, all should be the best and free.PhotOrganizer: Windows application to organize your pictures. Scans folder for number and size of pictures. Moves to destination by year-month and removes duplicate files.PoNCE: PoNCE Engine helps creating of Point and Click quest gamesPragTest: List of my projectsProof of Concept Code: This is pretty much throw-away PoC code. I intend to have a folder for each PoC and the solution file for the PoC under the same folder. Security Center: Security Center is a handy tool to secure your secret notes. 512-bits AES algorithm with your private master password is used to protect your data. SharePoint Metro Sliders: SharePoint 2010 Feature that includes two Metro style image sliders web parts: - Image slider with just one image - Image slider with four smaller images in it.SkilledRES_Portal: SkilledRES Portal consists about Organization Information & Activity Profiling of SkilledRESSuper BASE32: This awesome app let you convert all your music, pictures and video to brand new BASE32 encoding! System.Data.Entity.Repository.Filters: System.Data.Entity.Repository.FiltersThe Media Store: The Media StoreUser Group: Maintain support data for user groupsVivitap: Vivitap Samples and SDK support.Web Scripting and Content Creation - DIY Wedding Cake - Assignment 2 - Prototype: Web Scripting and Content Creation - DIY Wedding Cake - Assignment 2webass2: protoype project for the final project for WSCC .WebTechCoursework2.HRSystem: Human Resource System for Web Technology CourseworkWindows Store Application Library: Windows Store Application Library provides a collection of UI controls and utilities for Windows 8 store application developers.WriteMyName: Código para escrever o nome do autor no começo de código fonte.XMPP Chat for Windows 8 Apps Store: xmpp sample for windows app store???-Windows8: ???Windows8???,?????????

    Read the article

  • ASPNET WebAPI REST Guidance

    - by JoshReuben
    ASP.NET Web API is an ideal platform for building RESTful applications on the .NET Framework. While I may be more partial to NodeJS these days, there is no denying that WebAPI is a well engineered framework. What follows is my investigation of how to leverage WebAPI to construct a RESTful frontend API.   The Advantages of REST Methodology over SOAP Simpler API for CRUD ops Standardize Development methodology - consistent and intuitive Standards based à client interop Wide industry adoption, Ease of use à easy to add new devs Avoid service method signature blowout Smaller payloads than SOAP Stateless à no session data means multi-tenant scalability Cache-ability Testability   General RESTful API Design Overview · utilize HTTP Protocol - Usage of HTTP methods for CRUD, standard HTTP response codes, common HTTP headers and Mime Types · Resources are mapped to URLs, actions are mapped to verbs and the rest goes in the headers. · keep the API semantic, resource-centric – A RESTful, resource-oriented service exposes a URI for every piece of data the client might want to operate on. A REST-RPC Hybrid exposes a URI for every operation the client might perform: one URI to fetch a piece of data, a different URI to delete that same data. utilize Uri to specify CRUD op, version, language, output format: http://api.MyApp.com/{ver}/{lang}/{resource_type}/{resource_id}.{output_format}?{key&filters} · entity CRUD operations are matched to HTTP methods: · Create - POST / PUT · Read – GET - cacheable · Update – PUT · Delete - DELETE · Use Uris to represent a hierarchies - Resources in RESTful URLs are often chained · Statelessness allows for idempotency – apply an op multiple times without changing the result. POST is non-idempotent, the rest are idempotent (if DELETE flags records instead of deleting them). · Cache indication - Leverage HTTP headers to label cacheable content and indicate the permitted duration of cache · PUT vs POST - The client uses PUT when it determines which URI (Id key) the new resource should have. The client uses POST when the server determines they key. PUT takes a second param – the id. POST creates a new resource. The server assigns the URI for the new object and returns this URI as part of the response message. Note: The PUT method replaces the entire entity. That is, the client is expected to send a complete representation of the updated product. If you want to support partial updates, the PATCH method is preferred DELETE deletes a resource at a specified URI – typically takes an id param · Leverage Common HTTP Response Codes in response headers 200 OK: Success 201 Created - Used on POST request when creating a new resource. 304 Not Modified: no new data to return. 400 Bad Request: Invalid Request. 401 Unauthorized: Authentication. 403 Forbidden: Authorization 404 Not Found – entity does not exist. 406 Not Acceptable – bad params. 409 Conflict - For POST / PUT requests if the resource already exists. 500 Internal Server Error 503 Service Unavailable · Leverage uncommon HTTP Verbs to reduce payload sizes HEAD - retrieves just the resource meta-information. OPTIONS returns the actions supported for the specified resource. PATCH - partial modification of a resource. · When using PUT, POST or PATCH, send the data as a document in the body of the request. Don't use query parameters to alter state. · Utilize Headers for content negotiation, caching, authorization, throttling o Content Negotiation – choose representation (e.g. JSON or XML and version), language & compression. 
Signal via RequestHeader.Accept & ResponseHeader.Content-Type Accept: application/json;version=1.0 Accept-Language: en-US Accept-Charset: UTF-8 Accept-Encoding: gzip o Caching - ResponseHeader: Expires (absolute expiry time) or Cache-Control (relative expiry time) o Authorization - basic HTTP authentication uses the RequestHeader.Authorization to specify a base64 encoded string "username:password". can be used in combination with SSL/TLS (HTTPS) and leverage OAuth2 3rd party token-claims authorization. Authorization: Basic sQJlaTp5ZWFslylnaNZ= o Rate Limiting - Not currently part of HTTP so specify non-standard headers prefixed with X- in the ResponseHeader. X-RateLimit-Limit: 10000 X-RateLimit-Remaining: 9990 · HATEOAS Methodology - Hypermedia As The Engine Of Application State – leverage API as a state machine where resources are states and the transitions between states are links between resources and are included in their representation (hypermedia) – get API metadata signatures from the response Link header - in a truly REST based architecture any URL, except the initial URL, can be changed, even to other servers, without worrying about the client. · error responses - Do not just send back a 200 OK with every response. Response should consist of HTTP error status code (JQuery has automated support for this), A human readable message , A Link to a meaningful state transition , & the original data payload that was problematic. · the URIs will typically map to a server-side controller and a method name specified by the type of request method. Stuff all your calls into just four methods is not as crazy as it sounds. · Scoping - Path variables look like you’re traversing a hierarchy, and query variables look like you’re passing arguments into an algorithm · Mapping URIs to Controllers - have one controller for each resource is not a rule – can consolidate - route requests to the appropriate controller and action method · Keep URls Consistent - Sometimes it’s tempting to just shorten our URIs. not recommend this as this can cause confusion · Join Naming – for m-m entity relations there may be multiple hierarchy traversal paths · Routing – useful level of indirection for versioning, server backend mocking in development ASPNET WebAPI Considerations ASPNET WebAPI implements a lot (but not all) RESTful API design considerations as part of its infrastructure and via its coding convention. Overview When developing an API there are basically three main steps: 1. Plan out your URIs 2. Setup return values and response codes for your URIs 3. Implement a framework for your API.   Design · Leverage Models MVC folder · Repositories – support IoC for tests, abstraction · Create DTO classes – a level of indirection decouples & allows swap out · Self links can be generated using the UrlHelper · Use IQueryable to support projections across the wire · Models can support restful navigation properties – ICollection<T> · async mechanism for long running ops - return a response with a ticket – the client can then poll or be pushed the final result later. · Design for testability - Test using HttpClient , JQuery ( $.getJSON , $.each) , fiddler, browser debug. Leverage IDependencyResolver – IoC wrapper for mocking · Easy debugging - IE F12 developer tools: Network tab, Request Headers tab     Routing · HTTP request method is matched to the method name. (This rule applies only to GET, POST, PUT, and DELETE requests.) · {id}, if present, is matched to a method parameter named id. 
· Query parameters are matched to parameter names when possible · Done in config via Routes.MapHttpRoute – similar to MVC routing · Can alternatively: o decorate controller action methods with HttpDelete, HttpGet, HttpHead,HttpOptions, HttpPatch, HttpPost, or HttpPut., + the ActionAttribute o use AcceptVerbsAttribute to support other HTTP verbs: e.g. PATCH, HEAD o use NonActionAttribute to prevent a method from getting invoked as an action · route table Uris can support placeholders (via curly braces{}) – these can support default values and constraints, and optional values · The framework selects the first route in the route table that matches the URI. Response customization · Response code: By default, the Web API framework sets the response status code to 200 (OK). But according to the HTTP/1.1 protocol, when a POST request results in the creation of a resource, the server should reply with status 201 (Created). Non Get methods should return HttpResponseMessage · Location: When the server creates a resource, it should include the URI of the new resource in the Location header of the response. public HttpResponseMessage PostProduct(Product item) {     item = repository.Add(item);     var response = Request.CreateResponse<Product>(HttpStatusCode.Created, item);     string uri = Url.Link("DefaultApi", new { id = item.Id });     response.Headers.Location = new Uri(uri);     return response; } Validation · Decorate Models / DTOs with System.ComponentModel.DataAnnotations properties RequiredAttribute, RangeAttribute. · Check payloads using ModelState.IsValid · Under posting – leave out values in JSON payload à JSON formatter assigns a default value. Use with RequiredAttribute · Over-posting - if model has RO properties à use DTO instead of model · Can hook into pipeline by deriving from ActionFilterAttribute & overriding OnActionExecuting Config · Done in App_Start folder > WebApiConfig.cs – static Register method: HttpConfiguration param: The HttpConfiguration object contains the following members. Member Description DependencyResolver Enables dependency injection for controllers. Filters Action filters – e.g. exception filters. Formatters Media-type formatters. by default contains JsonFormatter, XmlFormatter IncludeErrorDetailPolicy Specifies whether the server should include error details, such as exception messages and stack traces, in HTTP response messages. Initializer A function that performs final initialization of the HttpConfiguration. MessageHandlers HTTP message handlers - plug into pipeline ParameterBindingRules A collection of rules for binding parameters on controller actions. Properties A generic property bag. Routes The collection of routes. Services The collection of services. · Configure JsonFormatter for circular references to support links: PreserveReferencesHandling.Objects Documentation generation · create a help page for a web API, by using the ApiExplorer class. · The ApiExplorer class provides descriptive information about the APIs exposed by a web API as an ApiDescription collection · create the help page as an MVC view public ILookup<string, ApiDescription> GetApis()         {             return _explorer.ApiDescriptions.ToLookup(                 api => api.ActionDescriptor.ControllerDescriptor.ControllerName); · provide documentation for your APIs by implementing the IDocumentationProvider interface. Documentation strings can come from any source that you like – e.g. 
extract XML comments or define custom attributes to apply to the controller [ApiDoc("Gets a product by ID.")] [ApiParameterDoc("id", "The ID of the product.")] public HttpResponseMessage Get(int id) · GlobalConfiguration.Configuration.Services – add the documentation Provider · To hide an API from the ApiExplorer, add the ApiExplorerSettingsAttribute Plugging into the Message Handler pipeline · Plug into request / response pipeline – derive from DelegatingHandler and override theSendAsync method – e.g. for logging error codes, adding a custom response header · Can be applied globally or to a specific route Exception Handling · Throw HttpResponseException on method failures – specify HttpStatusCode enum value – examine this enum, as its values map well to typical op problems · Exception filters – derive from ExceptionFilterAttribute & override OnException. Apply on Controller or action methods, or add to global HttpConfiguration.Filters collection · HttpError object provides a consistent way to return error information in the HttpResponseException response body. · For model validation, you can pass the model state to CreateErrorResponse, to include the validation errors in the response public HttpResponseMessage PostProduct(Product item) {     if (!ModelState.IsValid)     {         return Request.CreateErrorResponse(HttpStatusCode.BadRequest, ModelState); Cookie Management · Cookie header in request and Set-Cookie headers in a response - Collection of CookieState objects · Specify Expiry, max-age resp.Headers.AddCookies(new CookieHeaderValue[] { cookie }); Internet Media Types, formatters and serialization · Defaults to application/json · Request Accept header and response Content-Type header · determines how Web API serializes and deserializes the HTTP message body. There is built-in support for XML, JSON, and form-urlencoded data · customizable formatters can be inserted into the pipeline · POCO serialization is opt out via JsonIgnoreAttribute, or use DataMemberAttribute for optin · JSON serializer leverages NewtonSoft Json.NET · loosely structured JSON objects are serialzed as JObject which derives from Dynamic · to handle circular references in json: json.SerializerSettings.PreserveReferencesHandling =    PreserveReferencesHandling.All à {"$ref":"1"}. · To preserve object references in XML [DataContract(IsReference=true)] · Content negotiation Accept: Which media types are acceptable for the response, such as “application/json,” “application/xml,” or a custom media type such as "application/vnd.example+xml" Accept-Charset: Which character sets are acceptable, such as UTF-8 or ISO 8859-1. Accept-Encoding: Which content encodings are acceptable, such as gzip. Accept-Language: The preferred natural language, such as “en-us”. o Web API uses the Accept and Accept-Charset headers. (At this time, there is no built-in support for Accept-Encoding or Accept-Language.) 
· Controller methods can take JSON representations of DTOs as params – auto-deserialization · Typical JQuery GET request: function find() {     var id = $('#prodId').val();     $.getJSON("api/products/" + id,         function (data) {             var str = data.Name + ': $' + data.Price;             $('#product').text(str);         })     .fail(         function (jqXHR, textStatus, err) {             $('#product').text('Error: ' + err);         }); }            · Typical GET response: HTTP/1.1 200 OK Server: ASP.NET Development Server/10.0.0.0 Date: Mon, 18 Jun 2012 04:30:33 GMT X-AspNet-Version: 4.0.30319 Cache-Control: no-cache Pragma: no-cache Expires: -1 Content-Type: application/json; charset=utf-8 Content-Length: 175 Connection: Close [{"Id":1,"Name":"TomatoSoup","Price":1.39,"ActualCost":0.99},{"Id":2,"Name":"Hammer", "Price":16.99,"ActualCost":10.00},{"Id":3,"Name":"Yo yo","Price":6.99,"ActualCost": 2.05}] True OData support · Leverage Query Options $filter, $orderby, $top and $skip to shape the results of controller actions annotated with the [Queryable]attribute. [Queryable]  public IQueryable<Supplier> GetSuppliers()  · Query: ~/Suppliers?$filter=Name eq ‘Microsoft’ · Applies the following selection filter on the server: GetSuppliers().Where(s => s.Name == “Microsoft”)  · Will pass the result to the formatter. · true support for the OData format is still limited - no support for creates, updates, deletes, $metadata and code generation etc · vnext: ability to configure how EditLinks, SelfLinks and Ids are generated Self Hosting no dependency on ASPNET or IIS: using (var server = new HttpSelfHostServer(config)) {     server.OpenAsync().Wait(); Tracing · tracability tools, metrics – e.g. send to nagios · use your choice of tracing/logging library, whether that is ETW,NLog, log4net, or simply System.Diagnostics.Trace. · To collect traces, implement the ITraceWriter interface public class SimpleTracer : ITraceWriter {     public void Trace(HttpRequestMessage request, string category, TraceLevel level,         Action<TraceRecord> traceAction)     {         TraceRecord rec = new TraceRecord(request, category, level);         traceAction(rec);         WriteTrace(rec); · register the service with config · programmatically trace – has helper extension methods: Configuration.Services.GetTraceWriter().Info( · Performance tracing - pipeline writes traces at the beginning and end of an operation - TraceRecord class includes aTimeStamp property, Kind property set to TraceKind.Begin / End Security · Roles class methods: RoleExists, AddUserToRole · WebSecurity class methods: UserExists, .CreateUserAndAccount · Request.IsAuthenticated · Leverage HTTP 401 (Unauthorized) response · [AuthorizeAttribute(Roles="Administrator")] – can be applied to Controller or its action methods · See section in WebApi document on "Claim-based-security for ASP.NET Web APIs using DotNetOpenAuth" – adapt this to STS.--> Web API Host exposes secured Web APIs which can only be accessed by presenting a valid token issued by the trusted issuer. 
http://zamd.net/2012/05/04/claim-based-security-for-asp-net-web-apis-using-dotnetopenauth/ · Use MVC membership provider infrastructure and add a DelegatingHandler child class to the WebAPI pipeline - http://stackoverflow.com/questions/11535075/asp-net-mvc-4-web-api-authentication-with-membership-provider - this will perform the login actions · Then use AuthorizeAttribute on controllers and methods for role mapping- http://sixgun.wordpress.com/2012/02/29/asp-net-web-api-basic-authentication/ · Alternate option here is to rely on MVC App : http://forums.asp.net/t/1831767.aspx/1
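    To make the routing and response-customization guidance above concrete, here is a minimal sketch of a controller plus route registration (my own illustration, not from the original article; the Product model, the in-memory repository, and the route name are assumptions):

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Net;
    using System.Net.Http;
    using System.Web.Http;

    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public decimal Price { get; set; }
    }

    public class ProductsController : ApiController
    {
        // In-memory stand-in for a real repository.
        private static readonly List<Product> repository = new List<Product>();

        // GET api/products/5 -- {id} is bound from the route by convention.
        public Product GetProduct(int id)
        {
            var item = repository.FirstOrDefault(p => p.Id == id);
            if (item == null)
                throw new HttpResponseException(HttpStatusCode.NotFound); // 404
            return item;
        }

        // POST api/products -- returns 201 Created plus a Location header.
        public HttpResponseMessage PostProduct(Product item)
        {
            if (!ModelState.IsValid)
                return Request.CreateErrorResponse(HttpStatusCode.BadRequest, ModelState);

            item.Id = repository.Count + 1;
            repository.Add(item);

            var response = Request.CreateResponse(HttpStatusCode.Created, item);
            response.Headers.Location = new Uri(Url.Link("DefaultApi", new { id = item.Id }));
            return response;
        }
    }

    // Route registration, typically in App_Start/WebApiConfig.cs.
    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional });
        }
    }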
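    Along the same lines, a small sketch of the message-handler and exception-filter extension points discussed above (again my own illustration; the handler and filter names are placeholders):

    using System;
    using System.Diagnostics;
    using System.Net;
    using System.Net.Http;
    using System.Threading;
    using System.Threading.Tasks;
    using System.Web.Http.Filters;

    // A DelegatingHandler sees every request/response pair; useful for the logging,
    // custom headers, or rate-limit bookkeeping described above.
    public class LoggingHandler : DelegatingHandler
    {
        protected override async Task<HttpResponseMessage> SendAsync(
            HttpRequestMessage request, CancellationToken cancellationToken)
        {
            Trace.WriteLine("Request: " + request.Method + " " + request.RequestUri);
            HttpResponseMessage response = await base.SendAsync(request, cancellationToken);
            Trace.WriteLine("Response: " + (int)response.StatusCode);
            return response;
        }
    }

    // An exception filter maps unhandled exceptions to HTTP status codes.
    public class NotImplementedExceptionFilter : ExceptionFilterAttribute
    {
        public override void OnException(HttpActionExecutedContext context)
        {
            if (context.Exception is NotImplementedException)
                context.Response = new HttpResponseMessage(HttpStatusCode.NotImplemented);
        }
    }

    // Registration, e.g. inside WebApiConfig.Register(config):
    //   config.MessageHandlers.Add(new LoggingHandler());
    //   config.Filters.Add(new NotImplementedExceptionFilter());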

    Read the article

  • Using telerik radGrid - how to set the Date format for autogenerated column in edit mode

    - by Mark Breen
    Hello All, Using VS2008 and Telerik radGrid version 2010.1.519.35, I have about 50 DNN modules using Telerik radGrid, and I need to display my dates in dd/mm/yy format. It is possible to do this easily in view mode, but when I switch to edit mode it is more of a struggle. I can write a snippet of code to reformat the displayed date values to dd/mm/yy, but for inserts the user must enter mm/dd/yy. IOW, I need to change the culture of the form to the en-GB culture. In my DotNetNuke app I have made a change to the web.config, but it still assumes the en-US format. I am not sure whether I need to set this at the web.config level, the page level, or at the column within the control. I have been struggling with this for a month or more and any help would be appreciated, thanks Mark Breen Ireland BMW R80GS 1987
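    One direction that is often suggested for this kind of problem (a hedged sketch of my own, not part of the original question; the class name and the exact hook point in a DNN module are assumptions, and the RadGrid editors are assumed to honour the thread culture) is to force the request thread to en-GB before the grid parses edit-mode input:

    using System;
    using System.Globalization;
    using System.Threading;
    using System.Web.UI;

    public partial class MyDateGridModule : Page   // hypothetical; in DNN this would be the module's control class
    {
        protected override void OnInit(EventArgs e)
        {
            // Force en-GB parsing and formatting for everything in this request,
            // including the auto-generated date column editors, regardless of
            // what web.config says.
            Thread.CurrentThread.CurrentCulture = new CultureInfo("en-GB");
            Thread.CurrentThread.CurrentUICulture = new CultureInfo("en-GB");
            base.OnInit(e);
        }
    }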

    Read the article

  • installing bitnami stacks with virtualbox

    - by dreftymac
    Background: I cannot seem to get the vmdk files to work with VirtualBox as a way of using Bitnami stacks. The documentation says you can use VirtualBox, but there is no detail except how to use VMWare player. I know how to get .iso files working with virtual box, but not the files that Bitnami uses. Question: Anyone have experience getting this specific configuration to work?

    Read the article

  • Fluent NHibernate Map Enum as Lookup Table

    - by Jaimal Chohan
    I have the following (simplified)

    public enum Level
    {
        Bronze,
        Silver,
        Gold
    }

    public class Member
    {
        public virtual Level MembershipLevel { get; set; }
    }

    public class MemberMap : ClassMap<Member>
    {
        public MemberMap()
        {
            Map(x => x.MembershipLevel);
        }
    }

    This creates a table with a column called MembershipLevel with the value as the Enum string value. What I want is for the entire Enum to be created as a lookup table, with the Member table referencing this with the integer value as the FK. Also, I want to do this without bas***izing my model. Any ideas? [And I want a time machine] [With 2 cup holders]
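    For what it's worth, a hedged sketch of one commonly suggested direction (my own illustration, not from the post; the column name is an assumption): keep the enum on the model, persist its integer value, and create and seed the lookup table plus the FK constraint in a separate schema script.

    using FluentNHibernate.Mapping;

    public class MemberMap : ClassMap<Member>
    {
        public MemberMap()
        {
            // Other mappings (Id, etc.) unchanged; only the enum column differs.
            Map(x => x.MembershipLevel)
                .CustomType<int>()              // persist the underlying integer value
                .Column("MembershipLevelId");   // the column a FK constraint can target

            // The Level lookup table (rows for Bronze/Silver/Gold) and the foreign
            // key from Member.MembershipLevelId are created and seeded outside this
            // mapping, e.g. in a schema or migration script.
        }
    }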

    Read the article

  • any real MVC library in PHP (for GUI apps)

    - by mario
    I'm wondering if there are any abstraction frameworks for one of the PHP gui libraries. We have PHP-GTK, a PHP/Tk interface, and seemingly also PHP-QT. (Not tried any.) I know that writing against the raw Gtk+ interface in Python is just bearable, and it therefore seems not very enticing for PHP. I assume it's the same for Qt, and Tk is pretty low-level too. So I'm looking for something that provides a nicer object structure atop any of the three. Primarily TreeViews are always a chore and php-gtk callbacks are weird in PHP, so I'd like a simplification for that. If it eases adding the GUI/View atop my business logic without much control code, that might already help. And so since GUI apps are an area where MVC or MVP would actually make sense, I'd like to know if any library for that exists. Btw, recently rediscovered PHP interface preprocessor, but that's rather low-level and just provides a simple widget/interface abstraction for Gtk/ncurses/pdf/xhtml output.

    Read the article

  • System calls on Windows

    - by b-gen-jack-o-neill
    Hi, I just want to ask: I know that standard system calls in Linux are done by the int instruction pointing into the Interrupt Vector Table. I assume this is similar on Windows. But how do you call some higher-level, more specific system routines? For example, how do you tell Windows to create a window? I know this is handled by code in a DLL, but what actually happens at the assembler-instruction level? Does the routine in the DLL call a software interrupt via the int instruction, or is there a different approach to handle this? Thanks.

    Read the article

  • Login failed for user 'DOMAIN\MACHINENAME$'

    - by sah302
    I know this is almost a duplicate of: http://stackoverflow.com/questions/1269706/the-error-login-failed-for-user-nt-authority-iusr-in-asp-net-and-sql-server-2 and http://stackoverflow.com/questions/97594/login-failed-for-user-username-system-data-sqlclient-sqlexception-with-linq-i but some things don't add up compared to other applications on my server and I am not sure why.

    Boxes being used: Web Box, SQL Box, SQL Test Box.

    My Application: I've got an ASP.NET Web Application, which references a class library that uses LINQ-to-SQL. The connection string is set up properly in the class library. As per http://stackoverflow.com/questions/97594/login-failed-for-user-username-system-data-sqlclient-sqlexception-with-linq-i I also added this connection string to the Web Application. The connection string uses SQL credentials as so (in both the web app and the class library):

    <add name="Namespace.My.MySettings.ConnectionStringProduction" connectionString="Data Source=(SQL Test Box);Initial Catalog=(db name);Persist Security Info=True;User ID=ID;Password=Password" providerName="System.Data.SqlClient" />

    This connection is confirmed as working by adding it to Server Explorer. This is the connection string my .dbml file is using.

    The problem: I get the following error: System.Data.SqlClient.SqlException: Login failed for user 'DOMAIN\MACHINENAME$'.

    Now, referencing http://stackoverflow.com/questions/1269706/the-error-login-failed-for-user-nt-authority-iusr-in-asp-net-and-sql-server-2, it says that's really the local network service and using any other non-domain name will not work. But I am confused, because I've checked both SQL Box and SQL Test Box in SQL Management Studio and both have NT AUTHORITY\NETWORK SERVICE under Security - Logins; at the database level it isn't listed under Security - Users, but at the database level under Security - Users I do have the user from the connection string. At the NTFS level on the web server, the permissions give NETWORK SERVICE full control.

    The reason I am confused is that I have many other web applications on my Web Server that reference databases on both SQL Box and SQL Test Box, and they all work. But I cannot find a difference between them and my current application, other than that I am using a class library. Will that matter? Checking NTFS permissions, the setup of Security Logins at the server and database levels, the connection string and method of connecting (SQL Server credentials), and the IIS application pool and other folder options, they are all the same. Why do these applications work without adding the machinename$ to the permissions of either of my SQL boxes? But that is what the one link is telling me to do to fix this problem.
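    As an aside, a hedged sketch (my own, not from the post; the factory name is a placeholder) of one sanity check that can help here: construct the LINQ-to-SQL DataContext with the SQL-credential connection string explicitly, so it cannot silently fall back to a default that uses integrated security (which is what surfaces as DOMAIN\MACHINENAME$):

    using System.Configuration;

    public static class DataContextFactory   // hypothetical helper
    {
        public static MyDataContext Create()  // MyDataContext = the designer-generated DataContext
        {
            string cs = ConfigurationManager
                .ConnectionStrings["Namespace.My.MySettings.ConnectionStringProduction"]
                .ConnectionString;

            // Designer-generated DataContext classes expose a constructor that takes
            // a connection string; passing it explicitly avoids any fallback.
            return new MyDataContext(cs);
        }
    }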

    Read the article

  • Stepping over method without symbols - How to step into?

    - by joedotnot
    Using Visual Studio 2008 SP1 and a VB.NET project, I have some code which I cannot step into. The Immediate Window shows the message "Stepping over method without symbols 'Some.Namespace.Here'". How can I make sure the method always has symbols? I need to step into every line of code. I am pressing F8 (which is "Step Into" in VS2008; from memory I think it used to be F11 in VS2005). This debugger stuff has always confused me. At the Solution level Property Pages I see a configuration dropdown with 4 values: Active (Debug), Debug, Release, All Configurations - currently set to "Active (Debug)". At the Project level, I see a configuration dropdown with 2 values: Debug, Release - currently set to "Debug".

    Read the article

  • The command "".\Bin\mt.exe" -nologo -manifest ... exited with error code 3 in CCNET

    - by soldieraman
    I am trying to build my VS 2008 project in CCNET and getting the below error:

    <message level="high"><![CDATA[".\Bin\mt.exe" -nologo -manifest "C:\MyProject\MyFile.exe.manifest" -outputresource:"C:\MyProject\bin\Release\MyFile.exe;#1"]]></message>
    <message level="high"><![CDATA[The system cannot find the path specified.]]></message>
    <error code="MSB3073" file="C:\WINDOWS\Microsoft.NET\Framework\v3.5\Microsoft.Common.targets" line="3397" column="13"><![CDATA[The command "".\Bin\mt.exe" -nologo -manifest "C:\Work\CI\Abc20ServerTrunkCheckout\ScannerInterface\Abc.ScannerInterface.Tester.exe.manifest" -outputresource:"C:\MyProject\bin\Release\MyFile.exe;#1" exited with code 3.]]></error>

    This project builds happily on my local server. Also, there is no Bin folder in M.Net\Framework\v3.5.... Any help will be awesome. I also did an msbuild on the project and got the same error.

    Read the article

  • Changing CSS on the fly in a UIWebView on iPhone

    - by Shaggy Frog
    Let's say I'm developing an iPhone app that is a catalogue of cars. The user will choose a car from a list, and I will present a detail view for the car, which will describe things like top speed. The detail view will essentially be a UIWebView that is loading an existing HTML file. Different users will live in different parts of the world, so they will like to see the top speed for the car in whatever units are appropriate for their locale. Let's say there are two such units: SI (km/h) and conventional (mph). Let's also say the user will be able to change the display units by hitting a button on the screen; when that happens, the detail screen should switch to show the relevant units. So far, here's what I've done to try and solve this. The HTML might look something like this: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en-US" lang="en-US"> <head> <title>Some Car</title> <link rel="stylesheet" media="screen" type="text/css" href="persistent.css" /> <link rel="alternate stylesheet" media="screen" type="text/css" href="si.css" title="si" /> <link rel="alternate stylesheet" media="screen" type="text/css" href="conventional.css" title="conventional" /> <script type="text/javascript" src="switch.js"></script> </head> <body> <h1>Some Car</h1> <div id="si"> <h2>Top Speed: 160 km/h</h2> </div> <div id="conventional"> <h2>Top Speed: 100 mph</h2> </div> </body> The peristent stylesheet, persistent.css: #si { display:none; } #conventional { display:none; } The first alternate stylesheet, si.css: #si { display:inline; } #conventional { display:none; } And the second alternate stylesheet, conventional.css: #si { display:none; } #conventional { display:inline; } Based on a tutorial at A List Apart, my switch.js looks something like this: function disableStyleSheet(title) { var i, a; for (i = 0; (a = document.getElementsByTagName("link")[i]); i++) { if ((a.getAttribute("rel").indexOf("alt") != -1) && (a.getAttribute("title") == title)) { a.disabled = true; } } } function enableStyleSheet(title) { var i, a; for (i = 0; (a = document.getElementsByTagName("link")[i]); i++) { if ((a.getAttribute("rel").indexOf("alt") != -1) && (a.getAttribute("title") == title)) { a.disabled = false; } } } function switchToSiStyleSheet() { disableStyleSheet("conventional"); enableStyleSheet("si"); } function switchToConventionalStyleSheet() { disableStyleSheet("si"); enableStyleSheet("conventional"); } My button action handler looks something like this: - (void)notesButtonAction:(id)sender { static BOOL isUsingSi = YES; if (isUsingSi) { NSString* command = [[NSString alloc] initWithString:@"switchToSiStyleSheet();"]; [self.webView stringByEvaluatingJavaScriptFromString:command]; [command release]; } else { NSString* command = [[NSString alloc] initWithFormat:@"switchToConventionalStyleSheet();"]; [self.webView stringByEvaluatingJavaScriptFromString:command]; [command release]; } isUsingSi = !isUsingSi; } Here's the first problem. The first time the button is hit, the UIWebView doesn't change. The second time it's hit, it looks like the conventional style sheet is loaded. The third time, it switches to the SI style sheet; the fourth time, back to the conventional, and so on. So, basically, only that first button press doesn't seem to do anything. Here's the second problem. I'm not sure how to switch to the correct style sheet upon initial load of the UIWebView. 
    I tried this:

    - (void)webViewDidFinishLoad:(UIWebView *)webView {
        NSString* command = [[NSString alloc] initWithString:@"switchToSiStyleSheet();"];
        [self.webView stringByEvaluatingJavaScriptFromString:command];
        [command release];
    }

    But, like the first button hit, it doesn't seem to do anything. Can anyone help me with these two problems?

    Read the article
