Search Results

Search found 10820 results on 433 pages for 'action'.


  • Single entity with a single view or two views in MVC 3 (VS 2010)?

    - by user2905798
    I have the following entity model:

        public class Employee {
            public int EmployeeID { get; set; }
            public string EmployeeName { get; set; }
            public DateTime EmployeeDob { get; set; }
            public DateTime? EmployeeDateOfJoin { get; set; }
            public string EmpFamilyName { get; set; }
            public DateTime EmpFamilyDob { get; set; }
        }

    I have to design a view for collecting employee information and employee family information. I am working with pre-existing data in which EmpFamilyDob was not mandatory; it is now being made mandatory, but the older records do not contain it. So I added the EmpFamilyDob property to the model and made it required through data annotations. Two sets of views have to be developed: 1. A view that collects only the employee information, without the family information (EmpFamilyName and EmpFamilyDob); this view is used by the HR section to insert employee details. 2. Since EmpFamilyName and EmpFamilyDob are now mandatory, another section edits the record and fills in those two fields as and when the information about the employee's family is received. I have controller actions for CreateNew and Edit, generated from the default model. While creating a new record the user should be able to enter only the employee information, and while editing he should also be able to update the family information. Since I have a single view, with most of these fields required and non-nullable, how can I achieve this with a single view? I have corrected the model like this:

        public class Employee {
            public int EmployeeID { get; set; }
            public string EmployeeName { get; set; }
            public DateTime EmployeeDob { get; set; }
            public DateTime? EmployeeDateOfJoin { get; set; }
            public string EmpFamilyName { get; set; }
            public DateTime? EmpFamilyDob { get; set; }
        }

    Now, by default, I expect the CreateNew action to insert null for EmpFamilyName (a string) and EmpFamilyDob, while in the Edit action the user should be forced to enter EmpFamilyName and EmpFamilyDob (mandatory). As there is every chance that the user might also edit other information about the employee (like EmployeeDob), I don't want to go for partial views. Can you help me out with some illustration? Thanks in advance.
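    A common way to handle this split (a sketch only; the view model names and the mapping they imply are invented for illustration, not from the question) is to give Create and Edit their own view models, so each screen carries exactly the validation it needs while the entity itself stays nullable:

        using System;
        using System.ComponentModel.DataAnnotations;

        // Hypothetical model for the HR "create" screen: no family fields at all.
        public class EmployeeCreateModel
        {
            [Required]
            public string EmployeeName { get; set; }

            // Nullable + [Required] forces the user to supply a real date instead
            // of silently accepting DateTime's default value.
            [Required]
            public DateTime? EmployeeDob { get; set; }

            public DateTime? EmployeeDateOfJoin { get; set; }
        }

        // Hypothetical model for the later "edit" screen: family details mandatory here.
        public class EmployeeEditModel
        {
            [Required]
            public string EmployeeName { get; set; }

            [Required]
            public DateTime? EmployeeDob { get; set; }

            public DateTime? EmployeeDateOfJoin { get; set; }

            [Required]
            public string EmpFamilyName { get; set; }

            [Required]
            public DateTime? EmpFamilyDob { get; set; }
        }

    Each action then binds its own model and copies the values onto the Employee entity, so the entity no longer needs to carry per-screen validation rules.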


  • Can jQuery capture dynamic DOM events and perform actions?

    - by Zombie15
    I just want to know if something like this is possible: jQuery should fire some action on a certain custom event. For example, whenever a new row is dynamically added to a table in the DOM, I want a certain action performed, such as changing the background color to, say, red. That should work across the whole site, something like event listeners in Doctrine2 or signals in Django. EDIT: Basically I want something where I can create a custom event:

        $.AddnewEvent(newRowAdded);

    Then I can customise that event with my own functions, like:

        $.newRowAdded(function(){ blah blah });


  • What's the safest way to remove data from MySQL? (PHP/MySQL)

    - by ggfan
    I want to allow users, as well as me (the admin), to delete data in MySQL. I used to have remove.php, which would read $_GET parameters describing whatever needed to be deleted, such as remove.php?action=post&posting_id=2. But I learned that anyone can simply abuse that and delete all my data. So what's the safest way for users and me to delete information, without it getting all crazy and hard? I am only a beginner :) I'm not sure if I can use POST, because there are no forms and the data isn't changing. Are sessions good? Or would there be too many, with postings, user information, comments, etc.? Example: James wants to delete one of his postings (posting_id=5), so he clicks the remove link, which takes him to remove.php?action=post&posting_id=5.


  • How to remove an icon programmatically (Android)

    - by user1865039
    By removing the intent-filter below from AndroidManifest.xml, the launcher icon can be removed after install:

        <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <category android:name="android.intent.category.LAUNCHER" />
        </intent-filter>

    But I have tried the code below to remove the icon on boot, and the icon still remains after a reboot. I have added the permission, and the boot receiver does run.

        public class BootBroadcastReceiver extends BroadcastReceiver {
            @Override
            public void onReceive(Context context, Intent intent) {
                PackageManager p = context.getApplicationContext().getPackageManager();
                ComponentName componentName = new ComponentName("com.example.remove_icon",
                        "com.example.remove_icon.LauncherActivity");
                p.setComponentEnabledSetting(componentName,
                        PackageManager.COMPONENT_ENABLED_STATE_DISABLED,
                        PackageManager.DONT_KILL_APP);
            }
        }


  • How to distinguish between new and returning anonymous Drupal users?

    - by Matt V.
    Is there an easy way (or a module) in Drupal to distinguish between anonymous users who have never created an account and those who are returning but are not currently logged in? For non-returning (i.e., completely new) users, I'd like to have a front page that is very streamlined and focused on registration as the call-to-action. However, if someone is a returning user who is not currently logged in, I'd like to present a lot more information on the front page and make login the main call-to-action. I realize both pages would still need to offer both the login and register options; I just want the focus to be significantly different between the two.


  • Will [WithEvents = Nothing] RemoveHandlers in the derived class?

    - by serhio
    I used to set WithEvents variables to Nothing in the destructor, because this "removes" all the handlers associated via the Handles keyword. Will this have the same effect for derived classes?

        Class A
            Protected WithEvents _Foo As Button

            Private Sub _Foo_Click() Handles _Foo.Click
                ' ... some Click action
            End Sub

            Public Sub Dispose(disposing As Boolean)
                If disposing Then _Foo = Nothing ' removes handler _Foo_Click
            End Sub
        End Class

        Class B
            Inherits A

            Private Sub _Foo_Move() Handles _Foo.Move
                ' ... some Move action
            End Sub

            ' ????? Will the base Dispose also remove the handler _Foo_Move, or not?
            Public Overrides Sub Dispose(disposing As Boolean)
                'If disposing Then _Foo = Nothing
                MyBase.Dispose(disposing)
            End Sub
        End Class


  • Using < > in an ASP.NET MVC form causes problems...

    - by Chompski
    I have a problem that seems like it should have a simple solution, but I haven't found it yet. I have a pretty simple form that calls an action and passes it a FormCollection via HTTP POST. The form works perfectly until I introduce < or > into a field; then I end up on a blank page, having skipped the action altogether. Need more information? Have any suggestions? Please help!
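    For reference, this symptom usually comes from ASP.NET request validation rejecting markup-like input before the action runs. Below is a hedged sketch of the two standard MVC-side relaxations; the model, controller, and action names are invented, and depending on the project's httpRuntime settings some web.config tweaking may also be involved:

        using System.Web.Mvc;

        public class CommentModel
        {
            // [AllowHtml] exempts only this property from request validation,
            // so '<' and '>' can reach the action.
            [AllowHtml]
            public string Body { get; set; }
        }

        public class CommentsController : Controller
        {
            // [ValidateInput(false)] is the coarser alternative: it relaxes
            // validation for everything posted to this action.
            [HttpPost]
            [ValidateInput(false)]
            public ActionResult Create(CommentModel model)
            {
                // Always HTML-encode relaxed input on the way back out to avoid XSS.
                return Content(Server.HtmlEncode(model.Body));
            }
        }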


  • Problem running an Oracle server, please help

    - by rima
    I use Oracle 11g, and since a few days ago I have been facing the error below:

        SQL*Plus: Release 11.2.0.1.0 Production on Thu Apr 7 07:33:19 2011
        Copyright (c) 1982, 2010, Oracle. All rights reserved.

        Enter user-name: pentacms
        Enter password:
        ERROR:
        ORA-01033: ORACLE initialization or shutdown in progress
        Process ID: 0
        Session ID: 0 Serial number: 0

        Enter user-name:

    I tried to solve the error, but it raised another one. I tried to open the log file, but I received the error below (last line):

        ERROR at line 1:
        ORA-00600: internal error code, arguments: [kcratr_nab_less_than_odr], [1], [46], [32689], [32690], [], [], [], [], [], [], []

    Please advise me, it's an emergency case.

    [What follows in the original post is several screens of SQL*Plus output from a V$SESSION query: only the column headings (SADDR, SID, SERIAL#, EVENT, WAIT_CLASS, SQL_ID, BLOCKING_SESSION, and so on) repeated four times, with almost no readable row data, ending in "16 rows selected."]

        SQL> desc dba_user;
        ERROR:
        ORA-04043: object dba_user does not exist
        SQL> desc dba_users;
        ERROR:
        ORA-04043: object dba_users does not exist
        SQL> desc v$user;
        ERROR:
        ORA-04043: object v$user does not exist
        SQL> desc v$users
        ERROR:
        ORA-04043: object v$users does not exist
        SQL> seleect * from dba_users;
        SP2-0734: unknown command beginning "seleect * ..." - rest of line ignored.
        SQL> select * from dba_users;
        select * from dba_users
        *
        ERROR at line 1:
        ORA-01219: database not open: queries allowed on fixed tables/views only
        SQL> alter database open;
        alter database open
        *
        ERROR at line 1:
        ORA-00600: internal error code, arguments: [kcratr_nab_less_than_odr], [1], [46], [32689], [32690], [], [], [], [], [], [], []
        SQL> alter database mount;
        alter database mount
        *
        ERROR at line 1:
        ORA-01100: database already mounted
        SQL> alter database mount;


  • Does ModSecurity 2.7.1 work with ASP.NET MVC 3?

    - by autonomatt
    I'm trying to get ModSecurity 2.7.1 to work with an ASP.NET MVC 3 website. The installation ran without errors, and judging from the event log, ModSecurity is starting up successfully. I am using the modsecurity.conf-recommended file to set the basic rules. The problem I'm having is that whenever I POST some form data, it doesn't get through to the controller action (or the model binder). I have SecRuleEngine set to DetectionOnly and SecRequestBodyAccess set to On. With these settings, the body of the POST never reaches the controller action. If I set SecRequestBodyAccess to Off, it works, so it's definitely something to do with how ModSecurity forwards the body data. The ModSecurity debug log shows the following (it looks to me as if everything passed through):

        Second phase starting (dcfg 94b750).
        Input filter: Reading request body.
        Adding request argument (BODY): name "[0].IsSelected", value "on"
        Adding request argument (BODY): name "[0].Quantity", value "1"
        Adding request argument (BODY): name "[0].VariantSku", value "047861"
        Adding request argument (BODY): name "[1].Quantity", value "0"
        Adding request argument (BODY): name "[1].VariantSku", value "047862"
        Input filter: Completed receiving request body (length 115).
        Starting phase REQUEST_BODY.
        Recipe: Invoking rule 94c620; [file "*********************"] [line "54"] [id "200001"].
        Rule 94c620: SecRule "REQBODY_ERROR" "!@eq 0" "phase:2,auditlog,id:200001,t:none,log,deny,status:400,msg:'Failed to parse request body.',logdata:%{reqbody_error_msg},severity:2"
        Transformation completed in 0 usec.
        Executing operator "!eq" with param "0" against REQBODY_ERROR.
        Operator completed in 0 usec.
        Rule returned 0.
        Recipe: Invoking rule 5549c38; [file "*********************"] [line "75"] [id "200002"].
        Rule 5549c38: SecRule "MULTIPART_STRICT_ERROR" "!@eq 0" "phase:2,auditlog,id:200002,t:none,log,deny,status:44,msg:'Multipart request body failed strict validation: PE %{REQBODY_PROCESSOR_ERROR}, BQ %{MULTIPART_BOUNDARY_QUOTED}, BW %{MULTIPART_BOUNDARY_WHITESPACE}, DB %{MULTIPART_DATA_BEFORE}, DA %{MULTIPART_DATA_AFTER}, HF %{MULTIPART_HEADER_FOLDING}, LF %{MULTIPART_LF_LINE}, SM %{MULTIPART_MISSING_SEMICOLON}, IQ %{MULTIPART_INVALID_QUOTING}, IP %{MULTIPART_INVALID_PART}, IH %{MULTIPART_INVALID_HEADER_FOLDING}, FL %{MULTIPART_FILE_LIMIT_EXCEEDED}'"
        Transformation completed in 0 usec.
        Executing operator "!eq" with param "0" against MULTIPART_STRICT_ERROR.
        Operator completed in 0 usec.
        Rule returned 0.
        Recipe: Invoking rule 554bd70; [file "********************"] [line "80"] [id "200003"].
        Rule 554bd70: SecRule "MULTIPART_UNMATCHED_BOUNDARY" "!@eq 0" "phase:2,auditlog,id:200003,t:none,log,deny,status:44,msg:'Multipart parser detected a possible unmatched boundary.'"
        Transformation completed in 0 usec.
        Executing operator "!eq" with param "0" against MULTIPART_UNMATCHED_BOUNDARY.
        Operator completed in 0 usec.
        Rule returned 0.
        Recipe: Invoking rule 554cbe0; [file "*********************************"] [line "94"] [id "200004"].
        Rule 554cbe0: SecRule "TX:/^MSC_/" "!@streq 0" "phase:2,log,auditlog,id:200004,t:none,deny,msg:'ModSecurity internal error flagged: %{MATCHED_VAR_NAME}'"
        Rule returned 0.
        Hook insert_filter: Adding input forwarding filter (r 5541fc0).
        Hook insert_filter: Adding output filter (r 5541fc0).
        Initialising logging.
        Starting phase LOGGING.
        Recording persistent data took 0 microseconds.
        Audit log: Ignoring a non-relevant request.

    I can't see anything unusual in Fiddler. I'm using a ViewModel in the parameters of my action, and no data is bound if SecRequestBodyAccess is set to On. I'm even logging all the Request.Form keys and values via log4net, but I'm not getting any values there either. I'm starting to wonder whether ModSecurity actually works with ASP.NET MVC, or whether there is some conflict between the ModSecurity HTTP module and the model binder kicking in. Does anyone have any suggestions, or can anyone confirm they have ModSecurity working with an ASP.NET MVC website?
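    As a way to narrow down where the body disappears (a diagnostic sketch only, not a fix; the controller and action names are made up), one can bypass model binding entirely and read the raw request stream inside an action, which shows whether the bytes ever reach ASP.NET at all:

        using System.IO;
        using System.Web.Mvc;

        public class DiagnosticsController : Controller
        {
            // Hypothetical probe: POST the same form here with SecRequestBodyAccess
            // On and then Off, and compare. An empty body with On would point at the
            // module consuming the request body before ASP.NET ever sees it.
            [HttpPost]
            public ActionResult Echo()
            {
                Request.InputStream.Position = 0; // rewind in case something upstream already read it
                string rawBody = new StreamReader(Request.InputStream).ReadToEnd();
                return Content(string.Format("Length: {0}\n{1}", rawBody.Length, rawBody));
            }
        }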


  • Attempting Unauthorized operation - SQL 2008 R2 install

    - by Fred L
    I've been banging against this for a few days and keep getting this unauthorized-operation error when trying to install SQL Server 2008 R2 on a Windows 7 machine. I've changed permissions on the key: does not fix. Created an admin user and gave it specific permissions on that key: does not fix. Disabled all firewalls and installed as a local admin: does not fix. I'm out of patience and ideas! :) Help?

        2012-07-06 13:09:11 Slp: Sco: Attempting to set value AppName
        2012-07-06 13:09:11 Slp: SetValue: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VSTAHostConfig\SSIS_ScriptComponent\2.0, Name = AppName
        2012-07-06 13:09:11 Slp: Sco: Attempting to create base registry key HKEY_LOCAL_MACHINE, machine
        2012-07-06 13:09:11 SSIS: Processing Registry ACLs for SID 'S-1-5-21-2383144575-3599344511-819193542-1074'
        2012-07-06 13:09:11 Slp: Sco: Attempting to open registry subkey SOFTWARE\Microsoft\Microsoft SQL Server\100
        2012-07-06 13:09:11 SSIS: Setting permision on registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\100.
        2012-07-06 13:09:11 Slp: Sco: Attempting to replace account with sid in security descriptor D:(A;OICI;KR;;;S-1-5-21-2383144575-3599344511-819193542-1074)
        2012-07-06 13:09:11 Slp: ReplaceAccountWithSidInSddl -- SDDL to be processed: D:(A;OICI;KR;;;S-1-5-21-2383144575-3599344511-819193542-1074)
        2012-07-06 13:09:11 Slp: ReplaceAccountWithSidInSddl -- SDDL to be returned: D:(A;OICI;KR;;;S-1-5-21-2383144575-3599344511-819193542-1074)
        2012-07-06 13:09:11 Slp: Sco: Attempting to set security descriptor D:(A;OICI;KR;;;S-1-5-21-2383144575-3599344511-819193542-1074)
        2012-07-06 13:09:11 Slp: Sco: Attempting to normalize security descriptor D:(A;OICI;KR;;;S-1-5-21-2383144575-3599344511-819193542-1074)
        2012-07-06 13:09:11 Slp: Sco: Attempting to replace account with sid in security descriptor D:(A;OICI;KR;;;S-1-5-21-2383144575-3599344511-819193542-1074)
        2012-07-06 13:09:11 Slp: ReplaceAccountWithSidInSddl -- SDDL to be processed: D:(A;OICI;KR;;;S-1-5-21-2383144575-3599344511-819193542-1074)
        2012-07-06 13:09:11 Slp: ReplaceAccountWithSidInSddl -- SDDL to be returned: D:(A;OICI;KR;;;S-1-5-21-2383144575-3599344511-819193542-1074)
        2012-07-06 13:09:11 Slp: Sco: Attempting to normalize security descriptor D:(A;OICI;KR;;;S-1-5-21-2383144575-3599344511-819193542-1074)
        2012-07-06 13:09:11 Slp: Sco: Attempting to replace account with sid in security descriptor D:(A;OICI;KR;;;S-1-5-21-2383144575-3599344511-819193542-1074)
        2012-07-06 13:09:11 Slp: ReplaceAccountWithSidInSddl -- SDDL to be processed: D:(A;OICI;KR;;;S-1-5-21-2383144575-3599344511-819193542-1074)
        2012-07-06 13:09:11 Slp: ReplaceAccountWithSidInSddl -- SDDL to be returned: D:(A;OICI;KR;;;S-1-5-21-2383144575-3599344511-819193542-1074)
        2012-07-06 13:09:11 Slp: Prompting user if they want to retry this action due to the following failure:
        2012-07-06 13:09:11 Slp: ----------------------------------------
        2012-07-06 13:09:11 Slp: The following is an exception stack listing the exceptions in outermost to innermost order
        2012-07-06 13:09:11 Slp: Inner exceptions are being indented
        2012-07-06 13:09:11 Slp:
        2012-07-06 13:09:11 Slp: Exception type: Microsoft.SqlServer.Configuration.Sco.ScoException
        2012-07-06 13:09:11 Slp: Message:
        2012-07-06 13:09:11 Slp: Attempted to perform an unauthorized operation.
        2012-07-06 13:09:11 Slp: Data:
        2012-07-06 13:09:11 Slp: WatsonData = 100
        2012-07-06 13:09:11 Slp: DisableRetry = true
        2012-07-06 13:09:11 Slp: Inner exception type: System.UnauthorizedAccessException
        2012-07-06 13:09:11 Slp: Message:
        2012-07-06 13:09:11 Slp: Attempted to perform an unauthorized operation.
        2012-07-06 13:09:11 Slp: Stack:
        2012-07-06 13:09:11 Slp: at System.Security.AccessControl.Win32.GetSecurityInfo(ResourceType resourceType, String name, SafeHandle handle, AccessControlSections accessControlSections, RawSecurityDescriptor& resultSd)
        2012-07-06 13:09:11 Slp: at System.Security.AccessControl.NativeObjectSecurity.CreateInternal(ResourceType resourceType, Boolean isContainer, String name, SafeHandle handle, AccessControlSections includeSections, Boolean createByName, ExceptionFromErrorCode exceptionFromErrorCode, Object exceptionContext)
        2012-07-06 13:09:11 Slp: at Microsoft.SqlServer.Configuration.Sco.SqlRegistrySecurity..ctor(ResourceType resourceType, SafeRegistryHandle handle, AccessControlSections includeSections)
        2012-07-06 13:09:11 Slp: at Microsoft.SqlServer.Configuration.Sco.SqlRegistrySecurity.Create(InternalRegistryKey key)
        2012-07-06 13:09:11 Slp: at Microsoft.SqlServer.Configuration.Sco.InternalRegistryKey.GetAccessControl()
        2012-07-06 13:09:11 Slp: at Microsoft.SqlServer.Configuration.Sco.InternalRegistryKey.SetSecurityDescriptor(String sddl, Boolean overwrite)
        2012-07-06 13:09:11 Slp: ----------------------------------------
        2012-07-06 13:09:24 Slp: User has chosen to retry this action
        2012-07-06 13:09:24 Slp: Sco: Attempting to normalize security descriptor D:(A;OICI;KR;;;S-1-5-21-2383144575-3599344511-819193542-1074)
        2012-07-06 13:09:24 Slp: Sco: Attempting to replace account with sid in security descriptor D:(A;OICI;KR;;;S-1-5-21-2383144575-3599344511-819193542-1074)
        2012-07-06 13:09:24 Slp: ReplaceAccountWithSidInSddl -- SDDL to be processed: D:(A;OICI;KR;;;S-1-5-21-2383144575-3599344511-819193542-1074)
        2012-07-06 13:09:24 Slp: ReplaceAccountWithSidInSddl -- SDDL to be returned: D:(A;OICI;KR;;;S-1-5-21-2383144575-3599344511-819193542-1074)
        2012-07-06 13:09:24 Slp: Sco: Attempting to normalize security descriptor D:(A;OICI;KR;;;S-1-5-21-2383144575-3599344511-819193542-1074)
        2012-07-06 13:09:24 Slp: Sco: Attempting to replace account with sid in security descriptor D:(A;OICI;KR;;;S-1-5-21-2383144575-3599344511-819193542-1074)
        2012-07-06 13:09:24 Slp: ReplaceAccountWithSidInSddl -- SDDL to be processed: D:(A;OICI;KR;;;S-1-5-21-2383144575-3599344511-819193542-1074)
        2012-07-06 13:09:24 Slp: ReplaceAccountWithSidInSddl -- SDDL to be returned: D:(A;OICI;KR;;;S-1-5-21-2383144575-3599344511-819193542-1074)
        2012-07-06 13:09:24 Slp: Prompting user if they want to retry this action due to the following failure:
        2012-07-06 13:09:24 Slp: ----------------------------------------


  • apache2 error: Could not open configuration file /etc/apache2/conf.d/: No such file or directory

    - by Sundar Elumalai
    I have just upgraded to Ubuntu 13.10 and apache2 is not working. When I try to start the apache2 server, it prints the following errors:

         * Starting web server apache2
         * The apache2 configtest failed.
        Output of config test was:
        apache2: Syntax error on line 263 of /etc/apache2/apache2.conf: Could not open configuration file /etc/apache2/conf.d/: No such file or directory
        Action 'configtest' failed.


  • Creating STA COM compatible ASP.NET Applications

    - by Rick Strahl
    When building ASP.NET applications that interface with old-school COM objects, like those created with VB6 or Visual FoxPro (MTDLL), it's extremely important that the threads serving requests use Single Threaded Apartment (STA) threading. STA is a COM built-in technology that allows essentially single-threaded components to operate reliably in a multi-threaded environment. STAs guarantee that COM objects instantiated on a specific thread stay on that specific thread, and any access to a COM object from another thread automatically marshals that call to the STA thread. The end effect is that you can have multiple threads, but a COM object instance lives on a fixed, never-changing thread.

    ASP.NET by default uses MTA (multi-threaded apartment) threads, which are truly free-spinning threads that pay no heed to COM object marshaling. This is vastly more efficient than STA threading, which has a bit of overhead in determining whether it's OK to run code on a given thread or whether some sort of thread/COM marshaling needs to occur. MTA COM components can be very efficient, but STA COM components in a multi-threaded environment always tend to have a fair amount of overhead.

    It's amazing how much COM Interop I still see today, so while it seems really old school to be talking about this topic, it's actually quite apropos for me, as I have many customers using legacy COM systems that need to interface with other .NET applications. In this post I'm consolidating some of the hacks I've used to integrate with various ASP.NET technologies when using STA COM components.

    STA in ASP.NET

    Support for STA threading in the ASP.NET framework is fairly limited. Specifically, only the original ASP.NET WebForms technology supports STA threading directly, via its STA page handler implementation, or what you might know as ASPCOMPAT mode. For WebForms, running STA components is as easy as specifying the AspCompat attribute in the @Page tag:

        <%@ Page Language="C#" AspCompat="true" %>

    which runs the page in STA mode. Removing it runs in MTA mode. Simple.

    Unfortunately, all other ASP.NET technologies built on top of the core ASP.NET engine do not support STA natively. So if you want to use STA COM components in MVC or with classic ASMX Web Services, there's no automatic way like the ASPCOMPAT keyword available.

    So what happens when you run an STA COM component in an MTA application? In low-volume environments, nothing much: the COM objects will appear to work just fine, as there are no simultaneous thread interactions, and the COM component will happily run on a single thread, or on multiple single threads one at a time. So, in testing, running components in MTA environments may appear to work just fine. However, as load increases and threads get re-used by ASP.NET, COM objects will end up getting created on multiple different threads. This can result in crashes or hangs, or in data corruption in the STA components, which store their state in thread-local storage on the STA thread. If threads overlap, this global store can easily get corrupted, which in turn causes problems. STA ensures that any COM object instance loaded always stays on the same thread it was instantiated on.

    What about COM+?

    COM+ is supposed to address the problem of STA in MTA applications by providing an abstraction with its own thread pool manager for COM objects. It steps into the COM instantiation pipeline and hands out COM instances from its own internally maintained STA thread pool.
    This guarantees that the COM instantiation threads are STA threads, if using STA components. COM+ works, but in my experience the technology is very, very slow for STA components. It adds a ton of overhead and reduces COM performance noticeably in load tests in IIS. COM+ can make sense in some situations, but for Web apps with STA components it falls short. In addition, there's the need to ensure that COM+ is set up and configured on the target machine, and the fact that components have to be registered in COM+. COM+ also keeps components up at all times, so if a component needs to be replaced, the COM+ package needs to be unloaded (the same is true for IIS-hosted components, but it's more common to manage that). COM+ is an option for well-established components, but native STA support tends to provide better performance and more consistent usability, IMHO.

    STA for non-supporting ASP.NET Technologies

    As mentioned above, only WebForms supports STA natively. However, by utilizing the WebForms ASP.NET page handler internally, it's actually possible to trick various other ASP.NET technologies into working with STA components. This is ugly, but I've used each of these approaches in various applications and had minimal problems making them work with FoxPro STA COM components, which is about as difficult as it gets for COM Interop in .NET. In this post I summarize several STA workarounds that enable you to use STA threading with these ASP.NET technologies:

    - ASMX Web Services
    - ASP.NET MVC
    - WCF Web Services
    - ASP.NET Web API

    ASMX Web Services

    I start with classic ASP.NET ASMX Web Services because they are the easiest mechanism to modify for STA. They also clearly demonstrate how the WebForms STA page handler is the key technology enabling the various other solutions to create STA components. Essentially, the way this works is to override the WebForms Page class and hijack its init functionality for processing requests. Here's what this looks like for Web Services:

        namespace FoxProAspNet
        {
            public class WebServiceStaHandler : System.Web.UI.Page, IHttpAsyncHandler
            {
                protected override void OnInit(EventArgs e)
                {
                    IHttpHandler handler = new WebServiceHandlerFactory().GetHandler(
                        this.Context,
                        this.Context.Request.HttpMethod,
                        this.Context.Request.FilePath,
                        this.Context.Request.PhysicalPath);
                    handler.ProcessRequest(this.Context);
                    this.Context.ApplicationInstance.CompleteRequest();
                }

                public IAsyncResult BeginProcessRequest(
                    HttpContext context, AsyncCallback cb, object extraData)
                {
                    return this.AspCompatBeginProcessRequest(context, cb, extraData);
                }

                public void EndProcessRequest(IAsyncResult result)
                {
                    this.AspCompatEndProcessRequest(result);
                }
            }

            public class AspCompatWebServiceStaHandlerWithSessionState :
                WebServiceStaHandler, IRequiresSessionState
            {
            }
        }

    This class overrides the ASP.NET WebForms Page class, which has the little-known AspCompatBeginProcessRequest() and AspCompatEndProcessRequest() methods that are responsible for providing the WebForms ASPCOMPAT functionality. These methods handle routing requests to STA threads. Note there are two classes: one that includes session state and one that does not. If you plan on using ASP.NET session state, use the latter class; otherwise stick to the former. This maps to the EnableSessionState page setting in WebForms. The class simply hooks into this functionality by overriding the BeginProcessRequest and EndProcessRequest methods and always forcing them into the AspCompat methods.
    The way this works is that BeginProcessRequest() fires first to set up the threads and start initializing the handler. As part of that process the OnInit() method fires, by which point the code is already running on an STA thread. The code then creates an instance of the actual Web Service handler factory and calls its ProcessRequest method to start executing, which generates the Web Service result. Immediately after ProcessRequest, the request is stopped with Application.CompleteRequest(), which ensures that the rest of the page handler logic doesn't fire. This means that even though the fairly heavy Page class is overridden here, it doesn't end up executing any of its internal processing, which makes this code fairly efficient. In a nutshell, we're hijacking the Page HttpHandler and forcing it to process the Web Service handler in the context of the AspCompat handler behavior.

    Hooking up the Handler

    Because the above is an HttpHandler implementation, you need to hook up the custom handler and replace the standard ASMX handler. To do this you need to modify the web.config file (here for IIS 7 and IIS Express):

        <configuration>
          <system.webServer>
            <handlers>
              <remove name="WebServiceHandlerFactory-Integrated-4.0" />
              <add name="Asmx STA Web Service Handler" path="*.asmx" verb="*"
                   type="FoxProAspNet.WebServiceStaHandler" precondition="integrated" />
            </handlers>
          </system.webServer>
        </configuration>

    (Note: the name of WebServiceHandlerFactory-Integrated-4.0 might be slightly different depending on your server version. Check the IIS handler configuration in the IIS Management Console for the exact name, or simply remove the handler from the list there, which will propagate to your web.config.)

    For IIS 5 & 6 (Windows XP/2003) or the Visual Studio Web Server, use:

        <configuration>
          <system.web>
            <httpHandlers>
              <remove path="*.asmx" verb="*" />
              <add path="*.asmx" verb="*" type="FoxProAspNet.WebServiceStaHandler" />
            </httpHandlers>
          </system.web>
        </configuration>

    To test, create a new ASMX Web Service with a method like this:

        [WebService(Namespace = "http://foxaspnet.org/")]
        [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
        public class FoxWebService : System.Web.Services.WebService
        {
            [WebMethod]
            public string HelloWorld()
            {
                return "Hello World. Threading mode is: " +
                       System.Threading.Thread.CurrentThread.GetApartmentState();
            }
        }

    Run this before you put in the web.config changes and you should get:

        Hello World. Threading mode is: MTA

    Then put the handler mapping into web.config and you should see:

        Hello World. Threading mode is: STA

    And you're on your way to using STA COM components. It's a hack, but it works well! I've used this with several high-volume Web Service installations for various customers, and it's been fast and reliable.

    ASP.NET MVC

    ASP.NET MVC has quickly become the most popular ASP.NET technology, replacing WebForms for creating HTML output. MVC is more complex to get started with, but once you understand the basic structure of how requests flow through the MVC pipeline, it's easy to use and amazingly flexible in manipulating HTML requests. In addition, MVC has great support for non-HTML output sources like JSON and XML, making it an excellent choice for AJAX requests without any additional tools. Unlike WebForms, ASP.NET MVC doesn't support STA threads natively, so some trickery is needed to make it work with STA threads as well. MVC gets its handler implementation through custom route handlers using ASP.NET's built-in routing semantics.
    Working with an STA handler here means working with the page handler as part of the route handler implementation. As with the Web Service handler, the first step is to create a custom HttpHandler that can instantiate an MVC request pipeline properly:

        public class MvcStaThreadHttpAsyncHandler : Page, IHttpAsyncHandler, IRequiresSessionState
        {
            private RequestContext _requestContext;

            public MvcStaThreadHttpAsyncHandler(RequestContext requestContext)
            {
                if (requestContext == null)
                    throw new ArgumentNullException("requestContext");
                _requestContext = requestContext;
            }

            public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback cb, object extraData)
            {
                return this.AspCompatBeginProcessRequest(context, cb, extraData);
            }

            protected override void OnInit(EventArgs e)
            {
                var controllerName = _requestContext.RouteData.GetRequiredString("controller");
                var controllerFactory = ControllerBuilder.Current.GetControllerFactory();
                var controller = controllerFactory.CreateController(_requestContext, controllerName);
                if (controller == null)
                    throw new InvalidOperationException("Could not find controller: " + controllerName);

                try
                {
                    controller.Execute(_requestContext);
                }
                finally
                {
                    controllerFactory.ReleaseController(controller);
                }

                this.Context.ApplicationInstance.CompleteRequest();
            }

            public void EndProcessRequest(IAsyncResult result)
            {
                this.AspCompatEndProcessRequest(result);
            }

            public override void ProcessRequest(HttpContext httpContext)
            {
                throw new NotSupportedException(
                    "STAThreadRouteHandler does not support ProcessRequest (only BeginProcessRequest)");
            }
        }

    This handler code figures out which controller to load and then executes it. MVC internally provides the information needed to route to the appropriate method and pass the right parameters. Like the Web Service handler, the logic occurs in OnInit() and performs all the processing in that part of the request.

    Next, we need a RouteHandler that can actually pick up this handler. Unlike the Web Service handler, where we simply registered the handler, MVC requires a RouteHandler to pick up the handler. Route handlers look at the URL's path and, based on that, decide which handler to invoke. The route handler is pretty simple; all it does is load our custom handler:

        public class MvcStaThreadRouteHandler : IRouteHandler
        {
            public IHttpHandler GetHttpHandler(RequestContext requestContext)
            {
                if (requestContext == null)
                    throw new ArgumentNullException("requestContext");

                return new MvcStaThreadHttpAsyncHandler(requestContext);
            }
        }

    At this point you can instantiate this route handler and force STA requests to MVC by specifying a route.
    The following sets up the ASP.NET default route:

        Route mvcRoute = new Route("{controller}/{action}/{id}",
            new RouteValueDictionary(
                new { controller = "Home", action = "Index", id = UrlParameter.Optional }),
            new MvcStaThreadRouteHandler());
        RouteTable.Routes.Add(mvcRoute);

    To make this code a little easier to work with and to mimic the routes.MapRoute() extension method that MVC provides, here is an extension method MapMvcStaRoute():

        public static class RouteCollectionExtensions
        {
            public static void MapMvcStaRoute(this RouteCollection routeTable,
                                              string name, string url, object defaults = null)
            {
                Route mvcRoute = new Route(url,
                    new RouteValueDictionary(defaults),
                    new MvcStaThreadRouteHandler());
                RouteTable.Routes.Add(mvcRoute);
            }
        }

    With this, the syntax to add a route becomes a little easier and matches the MapRoute() method:

        RouteTable.Routes.MapMvcStaRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
        );

    The nice thing about this route handler, STA handler, and extension method is that they're fully self-contained. You can put all three into a single class file, stick it into your Web app, and simply call MapMvcStaRoute(); it just works. Easy!

    To see whether this works, create an MVC controller like this:

        public class ThreadTestController : Controller
        {
            public string ThreadingMode()
            {
                return Thread.CurrentThread.GetApartmentState().ToString();
            }
        }

    Try this test first with only the MapRoute() hookup in the route configuration, in which case you should get MTA as the value. Then change the MapRoute() call to MapMvcStaRoute(), leaving all the parameters the same, and re-run the request. You should now see STA as the result. You're on your way to using STA COM components reliably in ASP.NET MVC.

    WCF Web Services running through IIS

    WCF Web Services provide a more robust and wider range of services. You can use WCF over HTTP, TCP, and pipes, and WCF services support WS* secure services. There are many features in WCF that go way beyond what ASMX can do, but it's also a bit more complex than ASMX. As a basic rule, if you need to serve straight SOAP services over HTTP, I'd recommend sticking with the simpler ASMX services, especially if COM is involved. If you need WS* support or want to serve data over non-HTTP protocols, then WCF makes more sense. WCF is not my forte, but I found a solution from Scott Seely on his blog that describes the approach, and it seems to work well. I'm copying his code below so this STA information is all in one place, with a quick explanation. Scott's code basically works by creating a custom OperationBehavior that can be specified via an [STAOperationBehavior] attribute on every method. Using his attribute, you end up with a class (or interface, if you separate the contract and class) that looks like this:

        [ServiceContract]
        public class WcfService
        {
            [OperationContract]
            public string HelloWorldMta()
            {
                return Thread.CurrentThread.GetApartmentState().ToString();
            }

            // Make sure you use the custom STAOperationBehavior
            // attribute to force STA operation of service methods.
            [STAOperationBehavior]
            [OperationContract]
            public string HelloWorldSta()
            {
                return Thread.CurrentThread.GetApartmentState().ToString();
            }
        }

    Pretty straightforward. The latter method returns STA, while the former returns MTA. To make STA work, every method needs to be marked up. The implementation consists of the attribute and an OperationInvoker implementation.
    Here are the two classes required to make this work, from Scott's post:

        public class STAOperationBehaviorAttribute : Attribute, IOperationBehavior
        {
            public void AddBindingParameters(OperationDescription operationDescription,
                System.ServiceModel.Channels.BindingParameterCollection bindingParameters)
            {
            }

            public void ApplyClientBehavior(OperationDescription operationDescription,
                System.ServiceModel.Dispatcher.ClientOperation clientOperation)
            {
                // If this is applied on the client, well, it just doesn't make sense.
                // Don't throw in case this attribute was applied on the contract
                // instead of the implementation.
            }

            public void ApplyDispatchBehavior(OperationDescription operationDescription,
                System.ServiceModel.Dispatcher.DispatchOperation dispatchOperation)
            {
                // Change the IOperationInvoker for this operation.
                dispatchOperation.Invoker = new STAOperationInvoker(dispatchOperation.Invoker);
            }

            public void Validate(OperationDescription operationDescription)
            {
                if (operationDescription.SyncMethod == null)
                {
                    throw new InvalidOperationException("The STAOperationBehaviorAttribute " +
                        "only works for synchronous method invocations.");
                }
            }
        }

        public class STAOperationInvoker : IOperationInvoker
        {
            IOperationInvoker _innerInvoker;

            public STAOperationInvoker(IOperationInvoker invoker)
            {
                _innerInvoker = invoker;
            }

            public object[] AllocateInputs()
            {
                return _innerInvoker.AllocateInputs();
            }

            public object Invoke(object instance, object[] inputs, out object[] outputs)
            {
                // Create a new, STA thread
                object[] staOutputs = null;
                object retval = null;
                Thread thread = new Thread(
                    delegate()
                    {
                        retval = _innerInvoker.Invoke(instance, inputs, out staOutputs);
                    });
                thread.SetApartmentState(ApartmentState.STA);
                thread.Start();
                thread.Join();
                outputs = staOutputs;
                return retval;
            }

            public IAsyncResult InvokeBegin(object instance, object[] inputs,
                AsyncCallback callback, object state)
            {
                // We don't handle async...
                throw new NotImplementedException();
            }

            public object InvokeEnd(object instance, out object[] outputs, IAsyncResult result)
            {
                // We don't handle async...
                throw new NotImplementedException();
            }

            public bool IsSynchronous
            {
                get { return true; }
            }
        }

    The key in this setup is the invoker and its Invoke method, which creates a new thread and fires the request on that new thread. Because this approach creates a new thread for every request, it's not super efficient: there's a bunch of overhead involved in creating a thread and throwing it away after each request. But it'll work for low-volume requests and ensures each request runs in STA mode. If better performance is required, it would be useful to create a custom thread manager that can pool a number of STA threads and hand them out as needed, rather than creating a new thread on every request; a rough sketch of that idea follows below. If your Web Service needs are simple and you only need to serve standard SOAP 1.x requests, I recommend sticking with ASMX services: they're easier to set up and work with, and for STA component use they'll perform significantly better, since ASP.NET manages the STA thread pool for you rather than firing a new thread for each request. One nice thing about Scott's code, though, is that it works in any WCF environment, including self-hosting; it has no dependency on ASP.NET or WebForms, for that matter.
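    The pooled thread manager mentioned above is not part of the original article or Scott's code. Purely as an illustration, a minimal version of such a pool could look like the sketch below; the class name and sizing are made up, and production use would at least need shutdown and error-reporting polish:

        using System;
        using System.Collections.Concurrent;
        using System.Threading;

        // Hypothetical fixed-size pool of STA threads. Work items are queued and each
        // executes on one of the long-lived STA threads instead of a brand-new thread.
        public class StaThreadPool : IDisposable
        {
            private readonly BlockingCollection<Action> _work = new BlockingCollection<Action>();
            private readonly Thread[] _threads;

            public StaThreadPool(int threadCount)
            {
                _threads = new Thread[threadCount];
                for (int i = 0; i < threadCount; i++)
                {
                    _threads[i] = new Thread(() =>
                    {
                        // Each worker drains the shared queue until the pool is disposed.
                        foreach (Action workItem in _work.GetConsumingEnumerable())
                            workItem();
                    });
                    _threads[i].SetApartmentState(ApartmentState.STA); // COM objects stay on these threads
                    _threads[i].IsBackground = true;
                    _threads[i].Start();
                }
            }

            // Queues the delegate onto an STA thread and blocks until it has run.
            public void Run(Action action)
            {
                using (var done = new ManualResetEventSlim())
                {
                    Exception error = null;
                    _work.Add(() =>
                    {
                        try { action(); }
                        catch (Exception ex) { error = ex; }
                        finally { done.Set(); }
                    });
                    done.Wait();
                    if (error != null)
                        throw error; // note: rethrowing here loses the original stack trace
                }
            }

            public void Dispose()
            {
                _work.CompleteAdding();
            }
        }

    An Invoke() implementation could then call pool.Run(() => ...) instead of newing up a thread per request.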
    STA - If you must

    STA components are a pain in the ass, and thankfully there isn't too much stuff out there anymore that requires them. But when you need STA and you need to access that functionality from .NET, at least there are a few options available to make it happen. Each of these solutions is a bit hacky, but they work; I've used all of them in production with good results with FoxPro components. I hope compiling all of these in one place makes STA consumption a little bit easier. I feel your pain :-)

    Resources

    - Download STA Handler Code Examples
    - Scott Seely's original STA WCF OperationBehavior Article

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in FoxPro, ASP.NET, .NET, COM


  • VirtualBox on Ubuntu 12.04 and the 3.5 kernel

    - by kas
    I have installed the 3.5 kernel under Ubuntu 12.04. When I install VirtualBox I receive the following error:

        Setting up virtualbox (4.1.12-dfsg-2ubuntu0.2) ...
         * Stopping VirtualBox kernel modules        [ OK ]
         * Starting VirtualBox kernel modules
         * No suitable module for running kernel found        [fail]
        invoke-rc.d: initscript virtualbox, action "restart" failed.
        Processing triggers for python-central ...
        Setting up virtualbox-dkms (4.1.12-dfsg-2ubuntu0.2) ...
        Loading new virtualbox-4.1.12 DKMS files...
        First Installation: checking all kernels...
        Building only for 3.5.0-18-generic
        Building initial module for 3.5.0-18-generic
        Error! Bad return status for module build on kernel: 3.5.0-18-generic (x86_64)
        Consult /var/lib/dkms/virtualbox/4.1.12/build/make.log for more information.
         * Stopping VirtualBox kernel modules        [ OK ]
         * Starting VirtualBox kernel modules
         * No suitable module for running kernel found        [fail]
        invoke-rc.d: initscript virtualbox, action "restart" failed.
        Setting up virtualbox-qt (4.1.12-dfsg-2ubuntu0.2) ...

    Does anyone know how I might be able to resolve this?

    Edit: here is the make.log:

        DKMS make.log for virtualbox-4.1.12 for kernel 3.5.0-18-generic (x86_64)
        Mon Nov 19 12:12:23 EST 2012
        make: Entering directory `/usr/src/linux-headers-3.5.0-18-generic'
          LD      /var/lib/dkms/virtualbox/4.1.12/build/built-in.o
          LD      /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/built-in.o
          CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/linux/SUPDrv-linux.o
          CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/SUPDrv.o
          CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/SUPDrvSem.o
          CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/alloc-r0drv.o
          CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/initterm-r0drv.o
          CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/memobj-r0drv.o
          CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/mpnotification-r0drv.o
          CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/powernotification-r0drv.o
          CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/linux/assert-r0drv-linux.o
          CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/linux/alloc-r0drv-linux.o
          CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/linux/initterm-r0drv-linux.o
          CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/linux/memobj-r0drv-linux.o
        /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/linux/memobj-r0drv-linux.c: In function ‘rtR0MemObjLinuxDoMmap’:
        /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/linux/memobj-r0drv-linux.c:1150:9: error: implicit declaration of function ‘do_mmap’ [-Werror=implicit-function-declaration]
        cc1: some warnings being treated as errors
        make[2]: *** [/var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/linux/memobj-r0drv-linux.o] Error 1
        make[1]: *** [/var/lib/dkms/virtualbox/4.1.12/build/vboxdrv] Error 2
        make: *** [_module_/var/lib/dkms/virtualbox/4.1.12/build] Error 2
        make: Leaving directory `/usr/src/linux-headers-3.5.0-18-generic'


  • Demo of an Early Beta of Firefox OS Running on a ZTE Developer Phone [Video]

    - by Asian Angel
    Are you curious about Mozilla’s new mobile OS platform? Then here is your chance to see an early beta of Firefox OS in action. This video shows the OS’s built-in web browser, phone dialer, camera, and gallery image viewer running on a developer phone from ZTE. Firefox OS Demo (09-06-12) [via The H Open]


  • Networking middleware

    - by Howie
    I'm looking for a networking middleware that may be suitable for a medium-sized MMO. I don't care in which language it's written, just that it's high-level, stable and has many of the features that can ease my development. I am making a 2D real-time action game. Come to think of it... I'd be happy with "just" a high-level networking framework that has a few handy features to ease the development of a general networked game.


  • Daily tech links for .net and related technologies - Apr 26-28, 2010

    - by SanjeevAgarwal
    Daily tech links for .net and related technologies - Apr 26-28, 2010

    Web Development
    - MVC: Unit Testing Action Filters - Donn
    - ASP.NET MVC 2: Ninja Black Belt Tips - Scott Hanselman
    - Turn on Compile-time View Checking for ASP.NET MVC Projects in TFS Build 2010 - Jim Lamb

    Web Design
    - List of 25+ New tags introduced in HTML 5 - techfreakstuff
    - 15 CSS Habits to Develop for Frustration-Free Coding - noupe

    Silverlight, WPF & RIA
    - Essential Silverlight and WPF Skills: The UI Thread, Dispatchers, Background...(read more)

    Read the article

  • Le C++ expressif n° 4 : une bibliothèque de fonctions lambda en à peine 30 lignes - partie 1, un article d'Eric Niebler traduit par cob59

    In this article, Eric Niebler goes into the details of creating grammars, in particular the role of transforms, which make it possible to apply a specific action when the input matches the given grammar. In this way, the functionality of Boost.Proto expressions can be extended. The article also explains how to create your own function library to make building expressions easier. Le C++ expressif n° 4 : une bibliothèque de fonctions lambda en à peine 30 lignes - partie 1. With the addition of transforms, are you starting to see...

    Read the article

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blogpost about it instead, so more people can learn from this and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific for a certain O/R mapper framework. Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'. I'm in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all tricks to pull off by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly a result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database. I'm not suggesting there's no room for improvement in today's O/R mapper frameworks, there always is, but it's not a matter of 'the slowness of the application is caused by the O/R mapper' anymore. Perhaps query generation can be optimized a bit here, row materialization can be optimized a bit there, but it's mainly coming down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spend inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), it won't make a difference for your application as that 10ms difference won't be noticed. That's why it's very important to find the real locations of the problems so developers can fix them properly and don't get frustrated because their quest to get a fast, performing application failed. Performance tuning basics and rules Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason of a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or when you assume things will be bad/slow without doing analysis will lead to the path of premature optimization and won't actually solve your problems, only create new ones. The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic. 
If I solely look at the Linq query, the code consuming the resultset of the 10 rows and then look at the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always do realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem. The second most important rule you have to understand is based on the old saying "Penny wise, Pound Foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger part of the total time T for that same given task. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you totally optimize that part away. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: No analysis -> no problem -> no solution. One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance out of your software, you will inevitably be faced with the dilemma of compromising one or more from the group {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O-characteristics so you know the performance you'll get plus you know the algorithm will work. The time taken by the algorithm implementing code is inevitable: you already implemented the best algorithm. You might find some optimizations on the technical level but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems. Isolate The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page is taking several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area with a clear begin and end and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by either another task or the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.  Analyze Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem. 
This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, webservice, windows etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eat up a part of the total time it takes to complete a task, e.g. load a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken to complete the task, analysis of each participating task is essential. Start with the code you wrote which starts the task, analyze the code and track the path it follows through your application. What does the code do along the way, verify whether it's correct or not. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths, just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet. We're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing which means we collect data about what could be wrong, for each participating part of the complete application. Reviewing the code you wrote is a good tool to get deeper understanding of what is going on for a given task but ultimately it lacks precision and overview what really happens: humans aren't good code interpreters, computers are. We therefore need to utilize tools to get deeper understanding about which parts contribute how much time to the total task, triggered by which other parts and for example how many times are they called. There are two different kind of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers. .NET profiling .NET profilers (e.g. dotTrace by JetBrains or Ants by Red Gate software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, how many times that code was called by other code and thus reveals where hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone: remember our penny wise pound foolish saying: if a profiler reveals that a group of methods are fast, or don't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter.  As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet. 
You navigate to the particular part which is slow, start profiling in the profiler, in your application you perform the actions which are considered slow, and afterwards you get a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area. O/R mapper and RDBMS profiling .NET profilers give you a good insight in the .NET side of things, but not in the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and the software making it possible to consume the database in your application: the O/R mapper. To understand which parts of the O/R mapper and database participate how much to the total time taken for task T, we need different tools. There are two kind of tools focusing on O/R mappers and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by hibernating rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating rhinos also have profilers for other O/R mappers like NHibernate (NHProf) and Entity Framework (EFProf) and work the same as LLBLGen Prof. For RDBMS profilers, you have to look whether the RDBMS vendor has a profiler. For example for SQL Server, the profiler is shipped with SQL Server, for Oracle it's build into the RDBMS, however there are also 3rd party tools. Which tool you're using isn't really important, what's important is that you get insight in which queries are executed during the task / area we're currently focused on and how long they took. Here, the O/R mapper profilers have an advantage as they collect the time it took to execute the query from the application's perspective so they also collect the time it took to transport data across the network. This is important because a query which returns a massive resultset or a resultset with large blob/clob/ntext/image fields takes more time to get transported across the network than a small resultset and a database profiler doesn't take this into account most of the time. Another tool to use in this case, which is more low level and not all O/R mappers support it (though LLBLGen Pro and NHibernate as well do) is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed and often also other activity behind the scenes. While tracing can produce a tremendous amount of data in some cases, it also gives insight in what's going on. Interpret After we've completed the analysis step it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid and which parts actually take place and if the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken and which parts are irrelevant. Both aspects are very important. The parts which are irrelevant (i.e. 
don't contribute significantly to the total time taken) can be ignored from now on, we won't look at them. The parts which contribute a lot to the total time taken are important to look at. We now have to first look at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself or somewhere else. For example if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task is depending on the time the data is fetched from the database. If there was just 1 query executed, according to tracing or O/R mapper profilers / RDBMS profilers, check whether that query is optimal, uses indexes or has to deal with a lot of data. Interpret means that you follow the path from begin to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10 row resultset of a query which groups millions of rows will likely reveal that a long time is spend inside the database and almost no time is spend in the .NET code, meaning the RDBMS part contributes the most to the total time taken, the rest is compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide that further action in this area is necessary or not, based on what the analysis results show: if the analysis results were unexpected and in the area where the most time is contributed to the total time taken is room for improvement, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that in the future when someone else looks at the application and starts asking questions you can answer them properly and new analysis is only necessary if situations changed. Fix After interpreting the analysis results you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (compromise memory consumption over performance) to avoid unnecessary re-querying data and re-consuming the results. After applying a change, it's key you re-do the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether it moved the problem to a different part of the application. Don't fall into the trap to do partly analysis: do the full analysis again: .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all. 
Performance tuning is dealing with compromises and making choices: to use one feature over the other, to accept a higher memory footprint, to go away from the strict-OO path and execute queries directly onto the RDBMS, these are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data-access and databases in general. In most cases it's not a big issue: alternatives are often good choices too and the compromises aren't that hard to deal with. What is important is that you document why you made a choice, a compromise: which analysis data, which interpretation led you to the choice made. This is key for good maintainability in the years to come. Most common performance problems with O/R mappers Below is an incomplete list of common performance problems related to data-access / O/R mappers / RDBMS code. It will help you with fixing the hotspots you found in the interpretation step. SELECT N+1: (Lazy-loading specific). Lazy loading triggered performance bottlenecks. Consider a list of Orders bound to a grid. You have a Field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) for each row the Customer row. This means you'll get for the single list not 1 query (for the orders) but 1+(the number of orders shown) queries. To solve this: use eager loading using a prefetch path to fetch the customers with the orders. SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem. Prefetch paths using many path nodes or sorting, or limiting. Eager loading problem. Prefetch paths can help with performance, but as 1 query is fetched per node, it can be the number of data fetched in a child node is bigger than you think. Also consider that data in every node is merged on the client within the parent. This is fast, but it also can take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries. Deep inheritance hierarchies of type Target Per Entity/Type. If you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype- and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take. Fetching massive amounts of data by fetching large lists of entities. LLBLGen Pro supports paging (and limiting the # of rows returned), which is often key to process through large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the page requested). When using paging in a web application, be sure that you switch server-side paging on on the datasourcecontrol used. In this case, paging on the grid alone is not enough: this can lead to fetching a lot of data which is then loaded into the grid and paged there. Keep note that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g. 
when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have (e.g. due to a join): the datareader will do DISTINCT filtering on the client. this is a little slower but it does perform paging functionality on the data-reader so it won't fetch all rows even if the query suggests it does. Fetch massive amounts of data because blob/clob/ntext/image fields aren't excluded. LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spend on data-transport across the network. Use this optimization if you see that there's a big difference between query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call. Doing client-side aggregates/scalar calculations by consuming a lot of data. If possible, try to formulate a scalar query or group by query using the projection system or GetScalar functionality of LLBLGen Pro to do data consumption on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all in memory, then traverse the data in-memory to calculate a value. Using .ToList() constructs inside linq queries. It might be you use .ToList() somewhere in a Linq query which makes the query be run partially in-memory. Example: var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c; This will actually fetch all customers in-memory and do an in-memory filtering, as the linq query is defined on an IEnumerable<T>, and not on the IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run. Fetching all entities to delete into memory first. To delete a set of entities it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper however: if an O/R mapper relies on a cache, these kind of operations are likely not supported because they make it impossible to track whether an entity is actually removed from the DB and thus can be removed from the cache. Fetching all entities to update with an expression into memory first. Similar to the previous point: it is more efficient to update a set of entities directly with a single UPDATE query using an expression instead of fetching the entities into memory first and then updating the entities in a loop, and afterwards saving them. It might however be a compromise you don't want to take as it is working around the idea of having an object graph in memory which is manipulated and instead makes the code fully aware there's a RDBMS somewhere. Conclusion Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success as well as knowing what's going on inside the application you built. 
    I hope you'll find this guide useful in tracking down performance problems and dealing with them in a sensible way.
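    To make the .ToList() pitfall above concrete, here is a minimal sketch contrasting the two variants (the metaData object is the LinqMetaData-style entry point from the example above; the exact provider types depend on the O/R mapper used):

        // In-memory variant: ToList() materializes ALL customers first, so the
        // Where clause runs as Linq to Objects on the client.
        var inMemory = (from c in metaData.Customers.ToList()
                        where c.Country == "Norway"
                        select c).ToList();

        // Server-side variant: the query stays an IQueryable<T>, so the provider
        // translates the Where clause into SQL and only matching rows are fetched.
        var serverSide = (from c in metaData.Customers
                          where c.Country == "Norway"
                          select c).ToList();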

    Read the article

  • An Honest look at SharePoint Web Services

    - by juanlarios
    INTRODUCTION

    If you are a SharePoint developer you know that there are two basic ways to develop against SharePoint: 1) the object model, and 2) web services. The SharePoint object model has the advantage of being quite rich. Anything you can do through the SharePoint UI as an administrator or end user, you can do through the object model. In fact, everything that is done through the UI is done through the object model behind the scenes. The major disadvantage to getting at SharePoint this way is that the code needs to run on the server. This means that all web parts, event receivers, features, etc… all of this is code that is deployed to the server. The second way to get to SharePoint is through the built in web services. There are many articles on how to manipulate web services, how to authenticate to them and interact with them. The basic idea is that a remote application or process can contact SharePoint through a web service. A lot has been written about how great these web services are. This article is written to document the limitations, some of the issues and frustrations of working with SharePoint's built in web services. Ultimately, for the tasks I was given, SharePoint's built in web services did not suffice. My evaluation of the built in services was compared against creating my own WCF services to do what I needed. The current project I'm working on right now involved several "integration points". A remote application, installed on a separate server, was to contact SharePoint and perform a task or operation. So I decided to start up Visual Studio and build a DLL with basically 2 layers of logic: an integration layer and a data layer. A good friend of mine pointed me to the SOLID principles and referred me to some videos and tutorials about them. I decided to implement the methodology (although a lot of the principles are common sense and I had already incorporated them in my coding practices). I was to deliver this DLL to the application team and they would simply call the methods exposed by this DLL and voila! It would do some task or operation in SharePoint.

    SOLUTION

    My integration layer implemented an interface that defined some of the basic integration tasks that I was to put together. My data layer was about the same: it implemented an interface with some of the tasks that I was going to develop. This gave me the opportunity to develop different data layers, ultimately different ways to get at SharePoint if I needed to. This is a classic SOLID principle. In this case it proved to be quite helpful because I wrote one data layer completely implementing SharePoint's built in web services and another implementing my own WCF service that I wrote. I should mention there is another layer underneath the data layer. In referencing SharePoint or WCF services in my Visual Studio project, I created a class for every web service call. So for example, if I used List.asmx, I created a class called "DocumentRetrieval". This class would do the grunt work to connect to the correct URL; it would perform the basic operation of contacting the service and so on. If I used View.asmx, I implemented a class called "ViewRetrieval" with the same idea as the last class, but it would now interact with all the operations in View.asmx. This gave my data layer the ability to perform multiple calls without really worrying about some of the grunt work each class performs. This again is a classic SOLID principle. So, in order to compare them side by side, we can look at both data layers and what is involved in each.
    Let's take a look at the "Create Project" task or operation. The integration point is described as, "dll is to provide a way to create a project in SharePoint". Projects, in this case, are basically document libraries. I am to implement a way in which a remote application can create a document library in SharePoint. Easy enough, right? Use the List.asmx web service in SharePoint. So here we go! Let's take a look at the code. I added the List.asmx web service reference to my project and this is the class that contacts it:

        class DocumentRetrieval
        {
            private ListsSoapClient _service;
            private bool _impersonation;

            public DocumentRetrieval(bool impersonation, string endpt)
            {
                _service = new ListsSoapClient();
                this.SetEndPoint(string.Format("{0}/{1}", endpt, ConfigurationManager.AppSettings["List"]));
                _impersonation = impersonation;
                if (_impersonation)
                {
                    _service.ClientCredentials.Windows.ClientCredential.Password = ConfigurationManager.AppSettings["password"];
                    _service.ClientCredentials.Windows.ClientCredential.UserName = ConfigurationManager.AppSettings["username"];
                    _service.ClientCredentials.Windows.AllowedImpersonationLevel =
                        System.Security.Principal.TokenImpersonationLevel.Impersonation;
                }
            }

            private void SetEndPoint(string p)
            {
                _service.Endpoint.Address = new EndpointAddress(p);
            }

            /// <summary>
            /// Creates a document library with a specific name and template ID
            /// </summary>
            /// <param name="listName">New list name</param>
            /// <param name="templateID">Template ID</param>
            /// <returns></returns>
            public XmlElement CreateLibrary(string listName, int templateID, ref ExceptionContract exContract)
            {
                XmlDocument sample = new XmlDocument();
                XmlElement viewCol = sample.CreateElement("Empty");
                try
                {
                    _service.Open();
                    viewCol = _service.AddList(listName, "", templateID);
                }
                catch (Exception ex)
                {
                    exContract = new ExceptionContract("DocumentRetrieval/CreateLibrary", ex.GetType(), "Connection Error", ex.StackTrace, ExceptionContract.ExceptionCode.error);
                }
                finally
                {
                    _service.Close();
                }
                return viewCol;
            }
        }

    There was a lot more in this class (that I am not including) because I was reusing the grunt work and making other operations with List.asmx, for example updating content types and changing or configuring lists or document libraries. One of the first things I noticed about working with the built in services is that you are really at the mercy of what is available to you. Before creating a document library (Project) I wanted to expose an IsProjectExisting method. This way the integration or data layer could recognize if a library already exists. Well, there is no service call or method available to do that check.
    So this is what I wrote:

        public bool DocLibExists(string listName, ref ExceptionContract exContract)
        {
            try
            {
                var allLists = _service.GetListCollection();
                return allLists.ChildNodes.OfType<XmlElement>().ToList().Exists(x => x.Attributes["Title"].Value == listName);
            }
            catch (Exception ex)
            {
                exContract = new ExceptionContract("DocumentRetrieval/GetList/GetListWSCall", ex.GetType(), "Unable to Retrieve List Collection", ex.StackTrace, ExceptionContract.ExceptionCode.error);
            }
            return false;
        }

    This really just gets an XmlElement with all the lists. It was then up to me to sift through the clutter and noise and see if the document library already existed. This took a little bit of getting used to: instead of working with code, you are working with the XmlElement response format from the web service. I wrote a LINQ query to go through and find if the attribute "Title" existed and had a value of the list name; if so it would return true, if not false. I didn't particularly like working this way, dealing with XmlElement responses and then having to manipulate them to get at the exact data I was looking for. Once the DocLibExists check was done, I would either create the document library or send back an error indicating the document library already existed. Now let's examine the code that actually creates the document library. It does what you are really after: it creates a document library. Notice how the template ID is really an integer. Every document library template in SharePoint has an ID associated with it. Document libraries, Image Library, Custom List, Project Tasks, etc… they all have a unique integer associated with them. Well, that's great, but the client came back to me and gave me some specifics that each "project" or document library should have. They specified they had 3 types of projects. Each project would have unique views, about 10 views for each project. Each project specified unique configurations (auditing, versioning, content types, etc…). So what started as a simple implementation of creating a document library as a repository for a project turned out to be quite involved. The first thing I thought of was to create a template for the document library. There are other ways you can do this too. Using the web service call, you could configure views, versioning, even content types, etc… The only catch is, you have to work quite extensively with CAML. I am not fond of CAML. I can do it and work with it, I just don't like doing it. It is quite touchy and at times it is quite tough to understand where errors were made in CAML statements. Working with web services and CAML proved to be quite annoying. The service call would return a generic error message that did not particularly point me to a CAML syntax error, or even a CAML error at all. I was not sure if it was a security, performance or code-based issue. It was quite tough to work with, at times because of the way SharePoint handles metadata. There are "Name", "Display Name", and "StaticName" fields, and it was quite tough to understand at times which one to use, so it took a lot of trial and error. There are tools that can help with CAML generation. There is also now IntelliSense for CAML statements in Visual Studio that might help, but ultimately I'm not fond of CAML with web services. So I decided on the template.
    So my plan was to create a document library, configure it accordingly and then use the Template Builder that comes with the SharePoint SDK. This tool allows you to create site templates, list templates etc… It is quite interesting because it does not generate an STP file; it actually generates an XML definition and a feature you can activate to make that template available on a site or site collection. The first issue I experienced with this is that one of the specifications for this template was that the "All Documents" view was to have 2 web parts on it. Well, it turns out that using the template builder, it did not include the web parts as part of the list template definition it generated. It backed up the settings, the views, the content types, but not the custom web parts. I still decided to try this even without the web parts on the page. This new template defined a new document library definition with a unique ID. The problem was that the service call accepts an int but it only has access to the built in library int definitions. Any new ones added or created will not be available to create. So this made it impossible for me to approach the problem this way. I should also mention that one of the nice features about SharePoint is the ability to create list templates, back them up and then create lists based on that template. It can all be done by end user administrators. These templates are quite unique because they are saved as an STP file and not an XML definition. I also went this route and tried to see if there was another service call where I could create a document library based on a given template name. Nope! None. After some thinking I decided to implement a WCF service to do this creation for me. I was quite certain that the object model would allow me to create document libraries based on a template for which an ID was required, and also on templates saved as STP files. Now I don't want to bother with posting the code to contact the WCF service because it's self-explanatory, but I will post the code that I used to create a list with a custom template.

        public ServiceResult CreateProject(string name, string templateName, string projectId)
        {
            string siteurl = SPContext.Current.Site.Url;
            Guid webguid = SPContext.Current.Web.ID;
            using (SPSite site = new SPSite(siteurl))
            {
                using (SPWeb rootweb = site.RootWeb)
                {
                    SPListTemplateCollection temps = site.GetCustomListTemplates(rootweb);
                    ProcessWeb(siteurl, webguid, web => Act_CreateProject(web, name, templateName, projectId, temps));
                } // SPWeb
            } // SPSite
            return _globalResult;
        }

        private void Act_CreateProject(SPWeb targetsite, string name, string templateName, string projectId, SPListTemplateCollection temps)
        {
            var temp = temps.Cast<SPListTemplate>().FirstOrDefault(x => x.Name.Equals(templateName));
            if (temp != null)
            {
                try
                {
                    Guid listGuid = targetsite.Lists.Add(name, "", temp);
                    SPList newList = targetsite.Lists[listGuid];
                    _globalResult = new ServiceResult(true, "Success", "Success");
                }
                catch (Exception ex)
                {
                    _globalResult = new ServiceResult(false, (string.IsNullOrEmpty(ex.Message) ? "None" : ex.Message + " " + templateName), ex.StackTrace.ToString());
                }
            }
        }

        private void ProcessWeb(string siteurl, Guid webguid, Action<SPWeb> action)
        {
            using (SPSite sitecollection = new SPSite(siteurl))
            {
                using (SPWeb web = sitecollection.AllWebs[webguid])
                {
                    action(web);
                }
            }
        }

    This code is actually some of the code I implemented for the service. There was a lot more I did on project creation which I will cover in my next blog post. I implemented an Action method to process the web. This allowed me to properly dispose the SPWeb and SPSite objects and not rewrite this code over and over again. So I implemented a WCF service to create projects for me; this allowed me to do a lot more than just create a document library with a template, it now gave me the flexibility to do just about anything the client wanted at project creation. Once this was implemented, the client came back to me and said, "We reference all our projects with IDs in our application; we want SharePoint to do the same". This has been something I have been doing for a little while now, but I do hope that SharePoint 2010 can have more of an answer to this and address it properly. I have been adding metadata to SPWebs through the property bag. I believe I have blogged about it before. This time it required metadata added to a document library. No problem!!! I also mentioned these web parts that were to go on the "All Documents" view. I took the opportunity to configure them to the appropriate settings. There were two settings that needed to be set on these web parts. One of them was a Project ID configured in the web part properties.
    The following code enhances and replaces the "Act_CreateProject" method above:

        private void Act_CreateProject(SPWeb targetsite, string name, string templateName, string projectId, SPListTemplateCollection temps)
        {
            var temp = temps.Cast<SPListTemplate>().FirstOrDefault(x => x.Name.Equals(templateName));
            if (temp != null)
            {
                SPLimitedWebPartManager wpmgr = null;
                try
                {
                    Guid listGuid = targetsite.Lists.Add(name, "", temp);
                    SPList newList = targetsite.Lists[listGuid];
                    SPFolder rootFolder = newList.RootFolder;
                    rootFolder.Properties.Add(KEY, projectId);
                    rootFolder.Update();
                    if (rootFolder.ParentWeb != targetsite)
                        rootFolder.ParentWeb.Dispose();
                    if (!templateName.Contains("Natural"))
                    {
                        SPView alldocumentsview = newList.Views.Cast<SPView>().FirstOrDefault(x => x.Title.Equals(ALLDOCUMENTS));
                        SPFile alldocfile = targetsite.GetFile(alldocumentsview.ServerRelativeUrl);
                        wpmgr = alldocfile.GetLimitedWebPartManager(PersonalizationScope.Shared);
                        ConfigureWebPart(wpmgr, projectId, CUSTOMWPNAME);
                        alldocfile.Update();
                    }
                    if (newList.ParentWeb != targetsite)
                        newList.ParentWeb.Dispose();
                    _globalResult = new ServiceResult(true, "Success", "Success");
                }
                catch (Exception ex)
                {
                    _globalResult = new ServiceResult(false, (string.IsNullOrEmpty(ex.Message) ? "None" : ex.Message + " " + templateName), ex.StackTrace.ToString());
                }
                finally
                {
                    if (wpmgr != null)
                    {
                        wpmgr.Web.Dispose();
                        wpmgr.Dispose();
                    }
                }
            }
        }

        private void ConfigureWebPart(SPLimitedWebPartManager mgr, string prjId, string webpartname)
        {
            var wp = mgr.WebParts.Cast<System.Web.UI.WebControls.WebParts.WebPart>().FirstOrDefault(x => x.DisplayTitle.Equals(webpartname));
            if (wp != null)
            {
                (wp as ListRelationshipWebPart.ListRelationshipWebPart).ProjectID = prjId;
                mgr.SaveChanges(wp);
            }
        }

    This shows you how I was able to set metadata on the document library. It has to be added to the RootFolder of the document library. Unfortunately, the SPList does not have a property bag that I can add a key\value pair to; it has to be done on the root folder. Now everything in the integration will reference projects by IDs and will not care about names. My "DocLibExists" method will now need to be changed, because a web service is not set up to look at property bags. I had to write another method on the service to do the equivalent but with IDs instead of names. The second thing you will notice about the code is the use of the web part manager. I have seen several examples online, and also read a lot about memory leaks; the above code does not produce memory leaks. The web part manager creates an SPWeb, so just dispose it like I did.
    CONCLUSION

    This is a long, long post so I will stop here for now; I will continue with more comparisons and limitations in my next post. My conclusion for this example is that web services will do the trick if you can suffer through CAML and if you are doing some simple operations. For everything else, there's WCF! I apologize for the disorganization of this post: I was on a bus on a 12-hour trip to Iowa while I wrote it, half asleep and half awake. Hopefully it makes enough sense to someone.
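    One footnote on the WCF side: the client code for calling the service is skipped above as self-explanatory, but for completeness a minimal client sketch could look like this (the IProjectService contract name and the "ProjectService" endpoint name are illustrative assumptions, not taken from this post):

        using System.ServiceModel;

        // Hypothetical contract mirroring the CreateProject signature shown above.
        [ServiceContract]
        public interface IProjectService
        {
            [OperationContract]
            ServiceResult CreateProject(string name, string templateName, string projectId);
        }

        public static class ProjectClient
        {
            public static ServiceResult Create(string name, string template, string id)
            {
                // "ProjectService" is assumed to be an endpoint configured in
                // the client's app.config / web.config.
                var factory = new ChannelFactory<IProjectService>("ProjectService");
                IProjectService proxy = factory.CreateChannel();
                try
                {
                    ServiceResult result = proxy.CreateProject(name, template, id);
                    ((IClientChannel)proxy).Close();
                    factory.Close();
                    return result;
                }
                catch
                {
                    // Abort, don't Close, once the channel may be faulted.
                    ((IClientChannel)proxy).Abort();
                    factory.Abort();
                    throw;
                }
            }
        }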

    Read the article

  • Passthrough Objects – Duck Typing++

    - by EltonStoneman
    [Source: http://geekswithblogs.net/EltonStoneman] Can't see a genuine use for this, but I got the idea in my head and wanted to work it through. It's an extension to the idea of duck typing, for scenarios where types have similar behaviour but implemented in differently-named members. So you may have a set of objects you want to treat as an interface, which don't implement the interface explicitly and don't have the same member names, so they can't be duck-typed into implicitly implementing the interface. In a fictitious example, I want to call Get on whichever ICache implementation is current, and have the call passed through to the relevant method, whether it's called Read, Retrieve or whatever. A sample implementation is up on github here: PassthroughSample. This uses Castle's DynamicProxy behind the scenes in the same way as my duck typing sample, but allows you to configure the passthrough to specify how the inner (implementation) and outer (interface) members are mapped:

        var setup = new Passthrough();
        var cache = setup.Create("PassthroughSample.Tests.Stubs.AspNetCache, PassthroughSample.Tests")
                         .WithPassthrough("Name", "CacheName")
                         .WithPassthrough("Get", "Retrieve")
                         .WithPassthrough("Set", "Insert")
                         .As<ICache>();

    - or using some ugly lambdas to avoid the strings:

        Expression<Func<ICache, string, object>> get = (o, s) => o.Get(s);
        Expression<Func<Memcached, string, object>> read = (i, s) => i.Read(s);
        Expression<Action<ICache, string, object>> set = (o, s, obj) => o.Set(s, obj);
        Expression<Action<Memcached, string, object>> insert = (i, s, obj) => i.Put(s, obj);

        ICache cache = new Passthrough<ICache, Memcached>()
            .Create()
            .WithPassthrough(o => o.Name, i => i.InstanceName)
            .WithPassthrough(get, read)
            .WithPassthrough(set, insert)
            .As();

    - or even in config:

        ICache cache = Passthrough.GetConfigured<ICache>();
        ...
        <passthrough>
          <types>
            <type name="PassthroughSample.Tests.Stubs.ICache, PassthroughSample.Tests"
                  passesThroughTo="PassthroughSample.Tests.Stubs.AppFabricCache, PassthroughSample.Tests">
              <members>
                <member name="Name" passesThroughTo="RegionName"/>
                <member name="Get" passesThroughTo="Out"/>
                <member name="Set" passesThroughTo="In"/>
              </members>
            </type>

    Possibly useful for injecting stubs for dependencies in tests, when your application code isn't using an IoC container. It could possibly also be implemented using .NET 4.0 dynamic objects rather than the dynamic proxy.
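    The post doesn't show the proxy internals, but as a rough sketch (names and details are my own illustration, not the actual PassthroughSample code), a Castle DynamicProxy interceptor driving such a passthrough could look roughly like this:

        using System.Collections.Generic;
        using System.Reflection;
        using Castle.DynamicProxy;

        // Illustrative only: forwards calls on the outer interface to
        // differently-named members on the wrapped inner object.
        public class PassthroughInterceptor : IInterceptor
        {
            private readonly object _inner;
            private readonly IDictionary<string, string> _map; // outer member -> inner member

            public PassthroughInterceptor(object inner, IDictionary<string, string> map)
            {
                _inner = inner;
                _map = map;
            }

            public void Intercept(IInvocation invocation)
            {
                // Note: property accessors arrive as "get_Name"/"set_Name"; a real
                // implementation would normalize those before consulting the map.
                string outer = invocation.Method.Name;
                string inner = _map.ContainsKey(outer) ? _map[outer] : outer;
                MethodInfo target = _inner.GetType().GetMethod(inner);
                invocation.ReturnValue = target.Invoke(_inner, invocation.Arguments);
            }
        }

        // Usage sketch:
        // ICache cache = new ProxyGenerator()
        //     .CreateInterfaceProxyWithoutTarget<ICache>(
        //         new PassthroughInterceptor(new Memcached(), map));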

    Read the article

  • The SSL Bindings Issue–Web Pro Week 6 of 52

    - by OWScott
    We have a chicken-and-egg issue with HTTPS bindings.  This video, week 6 of a 52-week series for the web administrator, covers why HTTPS bindings don’t support host headers the same way HTTP bindings do.  In this video I show the issue and use Wireshark to see it in action. If you haven’t seen the other weeks, you can find past and future videos on the Web Pro Series landing page. The SSL Bindings Issue

    Read the article

  • UnsatisfiedLinkError on xawt when running HEC-HMS.sh

    - by G.Oxsen
    I am a recent adopter of Linux and this problem has got me stumped. I use HEC-HMS and HEC-DSSVue for work on a regular basis. I have been using the Windows versions in Wine but they are really buggy, so I decided to try out the Linux versions. The links below will take you to the download pages for these two programs; they are free programs for hydrology and data management. Once I install them and attempt to run the shell file (HEC-HMS.sh for example), I get a ton of Java errors that I do not understand. If I had to guess, I would say that the Java libraries in question cannot be found. When I check to see if Java is installed, it is. Here is the output from the terminal from trying to run HEC-HMS.sh:

        Exception in thread "Thread-1" java.lang.UnsatisfiedLinkError: /home/smythe/HEC/hec-hms35/java/lib/i386/xawt/libmawt.so: libXtst.so.6: cannot open shared object file: No such file or directory
            at java.lang.ClassLoader$NativeLibrary.load(Native Method)
            at java.lang.ClassLoader.loadLibrary0(Unknown Source)
            at java.lang.ClassLoader.loadLibrary(Unknown Source)
            at java.lang.Runtime.load0(Unknown Source)
            at java.lang.System.load(Unknown Source)
            at java.lang.ClassLoader$NativeLibrary.load(Native Method)
            at java.lang.ClassLoader.loadLibrary0(Unknown Source)
            at java.lang.ClassLoader.loadLibrary(Unknown Source)
            at java.lang.Runtime.loadLibrary0(Unknown Source)
            at java.lang.System.loadLibrary(Unknown Source)
            at sun.security.action.LoadLibraryAction.run(Unknown Source)
            at java.security.AccessController.doPrivileged(Native Method)
            at sun.awt.NativeLibLoader.loadLibraries(Unknown Source)
            at sun.awt.DebugHelper.<clinit>(Unknown Source)
            at java.awt.Component.<clinit>(Unknown Source)
            at javax.swing.ImageIcon.<clinit>(Unknown Source)
            at hms.i.c(Unknown Source)
            at hms.i.b(Unknown Source)
            at hms.K.run(Unknown Source)
            at java.lang.Thread.run(Unknown Source)

        Exception in thread "Thread-4" java.lang.UnsatisfiedLinkError: /home/smythe/HEC/hec-hms35/java/lib/i386/xawt/libmawt.so: libXtst.so.6: cannot open shared object file: No such file or directory
            at java.lang.ClassLoader$NativeLibrary.load(Native Method)
            at java.lang.ClassLoader.loadLibrary0(Unknown Source)
            at java.lang.ClassLoader.loadLibrary(Unknown Source)
            at java.lang.Runtime.load0(Unknown Source)
            at java.lang.System.load(Unknown Source)
            at java.lang.ClassLoader$NativeLibrary.load(Native Method)
            at java.lang.ClassLoader.loadLibrary0(Unknown Source)
            at java.lang.ClassLoader.loadLibrary(Unknown Source)
            at java.lang.Runtime.loadLibrary0(Unknown Source)
            at java.lang.System.loadLibrary(Unknown Source)
            at sun.security.action.LoadLibraryAction.run(Unknown Source)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.awt.Toolkit.loadLibraries(Unknown Source)
            at java.awt.Toolkit.<clinit>(Unknown Source)
            at sun.print.CUPSPrinter.<clinit>(Unknown Source)
            at sun.print.UnixPrintServiceLookup.getDefaultPrintService(Unknown Source)
            at sun.print.UnixPrintServiceLookup.refreshServices(Unknown Source)
            at sun.print.UnixPrintServiceLookup$PrinterChangeListener.run(Unknown Source)

        Exception in thread "main" java.lang.NoClassDefFoundError: Could not initialize class java.awt.Toolkit
            at java.awt.Color.<clinit>(Unknown Source)
            at hms.model.l.<init>(Unknown Source)
            at hms.model.ProjectManager.<init>(Unknown Source)
            at hms.Hms.<init>(Unknown Source)
            at hms.Hms.main(Unknown Source)

        Exception in thread "Thread-2" java.lang.NoClassDefFoundError: Could not initialize class sun.print.CUPSPrinter
            at sun.print.UnixPrintServiceLookup.getDefaultPrintService(Unknown Source)
            at javax.print.PrintServiceLookup.lookupDefaultPrintService(Unknown Source)
            at hms.util.f.run(Unknown Source)
            at java.lang.Thread.run(Unknown Source)

    I get similar output when I try to run HEC-DSSVue.sh. If anyone could shed some light on a solution I would really appreciate it. The problem turned out to be that the program needed 32-bit versions of the particular dependencies.
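    For anyone hitting the same UnsatisfiedLinkError on 64-bit Ubuntu: with multiarch support (available on 12.04 and later), the missing 32-bit libXtst.so.6 can typically be supplied by installing the i386 package, along the lines of:

        sudo apt-get install libxtst6:i386

    (The package name is the standard Ubuntu one for libXtst; older releases used the ia32-libs bundle instead.)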

    Read the article

  • LLBLGen Pro feature highlights: grouping model elements

    - by FransBouma
    (This post is part of a series of posts about features of the LLBLGen Pro system) When working with an entity model which has more than a few entities, it's often convenient to be able to group entities together if they belong to a semantic sub-model. For example, if your entity model has several entities which are about 'security', it would be practical to group them together under the 'security' moniker. This way, you can easily find them again, yet they can be left inside the complete entity model so their relationships with entities outside the group are kept. In other situations your domain consists of semi-separate entity models which all target tables/views located in the same database. It then might be convenient to have a single project to manage the complete target database, yet have the entity models separate from each other and have them result in separate code bases. LLBLGen Pro can do both for you. This blog post will illustrate both situations. The feature is called group usage and is controllable through the project settings. This setting is supported on all supported O/R mapper frameworks. Situation one: grouping entities in a single model. This situation is common for entity models which are dense, so many relationships exist between all sub-models: you can't split them up easily into separate models (nor do you likely want to), however it's convenient to have them grouped together into groups inside the entity model at the project level. A typical example for this is the AdventureWorks example database for SQL Server. This database, which is a single catalog, has a schema for each sub-group, however most of these schemas are tightly connected with each other: adding all schemas together will give a model with entities which indirectly are related to all other entities. LLBLGen Pro's default setting for group usage is AsVisualGroupingMechanism, which is what this situation is all about: we group the elements for visual purposes; it has no real meaning for the model nor the code generated. Let's reverse engineer AdventureWorks to an entity model. By default, LLBLGen Pro uses the target schema an element being reverse engineered is in as the group it will be in. This is convenient if you already have categorized tables/views in schemas, as is the case in AdventureWorks. Of course this can be switched off, or corrected on the fly. When reverse engineering, we'll walk through a wizard which will guide us with the selection of the elements for which relational model data should be retrieved, which we can later use to reverse engineer an entity model. The first step after specifying which database server to connect to is to select these elements. Below we can see the AdventureWorks catalog as well as the different schemas it contains. We'll include all of them. After the wizard completes, we have all relational model data nicely in our catalog data, with schemas. So let's reverse engineer entities from the tables in these schemas. We select in the catalog explorer the schemas 'HumanResources', 'Person', 'Production', 'Purchasing' and 'Sales', then right-click one of them and from the context menu, we select Reverse engineer Tables to Entity Definitions.... This will bring up the dialog below. We check all checkboxes in one go by checking the checkbox at the top to mark them all to be added to the project.
As you can see LLBLGen Pro has already filled in the group name based on the schema name, as this is the default and we didn't change the setting. If you want, you can select multiple rows at once and set the group name to something else using the controls on the dialog. We're fine with the group names chosen so we'll simply click Add to Project. This gives the following result:   (I collapsed the other groups to keep the picture small ;)). As you can see, the entities are now grouped. Just to see how dense this model is, I've expanded the relationships of Employee: As you can see, it has relationships with entities from three other groups than HumanResources. It's not doable to cut up this project into sub-models without duplicating the Employee entity in all those groups, so this model is better suited to be used as a single model resulting in a single code base, however it benefits greatly from having its entities grouped into separate groups at the project level, to make work done on the model easier. Now let's look at another situation, namely where we work with a single database while we want to have multiple models and for each model a separate code base. Situation two: grouping entities in separate models within the same project. To get rid of the entities to see the second situation in action, simply undo the reverse engineering action in the project. We still have the AdventureWorks relational model data in the catalog. To switch LLBLGen Pro to see each group in the project as a separate project, open the Project Settings, navigate to General and set Group usage to AsSeparateProjects. In the catalog explorer, select Person and Production, right-click them and select again Reverse engineer Tables to Entities.... Again check the checkbox at the top to mark all entities to be added and click Add to Project. We get two groups, as expected, however this time the groups are seen as separate projects. This means that the validation logic inside LLBLGen Pro will see it as an error if there's e.g. a relationship or an inheritance edge linking two groups together, as that would lead to a cyclic reference in the code bases. To see this variant of the grouping feature, seeing the groups as separate projects, in action, we'll generate code from the project with the two groups we just created: select from the main menu: Project -> Generate Source-code... (or press F7 ;)). In the dialog popping up, select the target .NET framework you want to use, the template preset, fill in a destination folder and click Start Generator (normal). This will start the code generator process. As expected the code generator has simply generated two code bases, one for Person and one for Production: The group name is used inside the namespace for the different elements. This allows you to add both code bases to a single solution and use them together in a different project without problems. Below is a snippet from the code file of a generated entity class. //... using System.Xml.Serialization; using AdventureWorks.Person; using AdventureWorks.Person.HelperClasses; using AdventureWorks.Person.FactoryClasses; using AdventureWorks.Person.RelationClasses; using SD.LLBLGen.Pro.ORMSupportClasses; namespace AdventureWorks.Person.EntityClasses { //... /// <summary>Entity class which represents the entity 'Address'.<br/><br/></summary> [Serializable] public partial class AddressEntity : CommonEntityBase //... 
    The advantage of this is that you can have two code bases and work with them separately, yet have a single target database and maintain everything in a single location. If you decide to move to a single code base, you can do so by changing one setting. It's also useful if you want to keep the groups as separate models (and code bases) yet add relationships to elements from another group using a copy of an entity: you can simply reverse engineer the target table to a new entity in a different group, effectively making a copy of the entity. As there's a single target database, changes made to that database are reflected in both models, which makes maintenance easier than when you'd have a separate project for each group, each with its own relational model data.

    Conclusion

    LLBLGen Pro offers a flexible way to work with entities in sub-models and to control how the sub-models end up in the generated code.
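
    To make the "two code bases, one solution" point concrete, here is a minimal sketch of a consumer project referencing both generated assemblies. Only AddressEntity appears in the generated snippet above; ProductEntity and the entry-point class are hypothetical names used purely for illustration.

        // Hypothetical consumer referencing both generated code bases.
        // AddressEntity is shown in the generated snippet above;
        // ProductEntity is an assumed entity name in the Production group.
        using PersonEntities = AdventureWorks.Person.EntityClasses;
        using ProductionEntities = AdventureWorks.Production.EntityClasses;

        class GroupedModelsDemo
        {
            static void Main()
            {
                var address = new PersonEntities.AddressEntity();
                var product = new ProductionEntities.ProductEntity();

                // The two groups were generated as separate projects, so the
                // assemblies have no references to each other and can be
                // versioned and deployed independently.
                System.Console.WriteLine(address.GetType().Namespace);
                System.Console.WriteLine(product.GetType().Namespace);
            }
        }

    Because the group name flows into the namespace, both assemblies can live in one solution without type collisions.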

    Read the article

  • Create record in LOV popups

    - by raghu.yadav
    In this post we see ways to present a create-record option in LOV popups. Referring to the doc at http://download.oracle.com/docs/cd/E12839_01/web.1111/b31973/af_lov.htm, here is what it says about the create action: "The popup dialog from within an inputListOfValues component or the optional search popup dialog in the inputComboboxListOfValues component also provides the ability to create a new record. For the inputListOfValues component, when the createPopupId attribute is set on the component, a toolbar component with a commandToolbarButton is displayed with a create icon. At runtime, a commandToolbarButton component appears in the LOV popup dialog, ..."
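
    As a sketch of the wiring the doc describes (not taken from the post itself; all ids, EL bindings and popup contents are hypothetical), the page fragment could look like this:

        <!-- Sketch: an LOV component pointing at the popup that hosts the
             create-record form. Ids and EL bindings are assumptions. -->
        <af:inputListOfValues id="deptLov"
                              label="Department"
                              value="#{bindings.DepartmentName.inputValue}"
                              model="#{bindings.DepartmentName.listOfValuesModel}"
                              createPopupId="::createDeptPopup"/>

        <af:popup id="createDeptPopup" contentDelivery="lazyUncached">
          <af:dialog title="Create Department">
            <!-- input fields for the new record go here -->
          </af:dialog>
        </af:popup>

    Setting createPopupId is what makes the create toolbar button appear in the LOV popup at runtime.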

    Read the article

  • Using a WCF Message Inspector to extend AppFabric Monitoring

    - by Shawn Cicoria
    I read through Ron Jacobs' post on Monitoring WCF Data Services with AppFabric: http://blogs.msdn.com/b/endpoint/archive/2010/06/09/tracking-wcf-data-services-with-windows-server-appfabric.aspx

    Two things are immediately striking: first, it's very easy to get monitoring data into a viewer (the AppFabric Dashboard) with very little work; second, why can't this be done with a WCF message inspector on the dispatch side?

    So I took the base class WCFUserEventProvider that's located in the WCF/WF samples [1] under \WF_WCF_Samples\WCF\Basic\Management\AnalyticTraceExtensibility\CS\WCFAnalyticTracingExtensibility\, and created a few classes that expose the injection as an IEndpointBehavior. Just three classes drive the injection of the inspector at runtime via config: an IDispatchMessageInspector implementation, a BehaviorExtensionElement implementation, and an IEndpointBehavior implementation.

    The full source code is below, with a link to the solution file here: [Solution File]

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.ServiceModel.Dispatcher;
        using System.ServiceModel.Channels;
        using System.ServiceModel;
        using System.ServiceModel.Configuration;
        using System.ServiceModel.Description;
        using Microsoft.Samples.WCFAnalyticTracingExtensibility;

        namespace Fabrikam.Services
        {
            // Writes a 'start'/'end' analytic event for every dispatched operation.
            public class AppFabricE2EInspector : IDispatchMessageInspector
            {
                static WCFUserEventProvider evntProvider = null;

                static AppFabricE2EInspector()
                {
                    evntProvider = new WCFUserEventProvider();
                }

                public object AfterReceiveRequest(
                    ref Message request,
                    IClientChannel channel,
                    InstanceContext instanceContext)
                {
                    OperationContext ctx = OperationContext.Current;
                    var opName = ctx.IncomingMessageHeaders.Action;
                    evntProvider.WriteInformationEvent("start",
                        string.Format("operation: {0} at address {1}",
                            opName, ctx.EndpointDispatcher.EndpointAddress));
                    return null;
                }

                public void BeforeSendReply(ref System.ServiceModel.Channels.Message reply, object correlationState)
                {
                    OperationContext ctx = OperationContext.Current;
                    var opName = ctx.IncomingMessageHeaders.Action;
                    evntProvider.WriteInformationEvent("end",
                        string.Format("operation: {0} at address {1}",
                            opName, ctx.EndpointDispatcher.EndpointAddress));
                }
            }

            // Makes the behavior addressable from config.
            public class AppFabricE2EBehaviorElement : BehaviorExtensionElement
            {
                #region BehaviorExtensionElement
                /// <summary>
                /// Gets the type of behavior.
                /// </summary>
                /// <value></value>
                /// <returns>The type that implements the endpoint behavior <see cref="T:System.Type"/>.</returns>
                public override Type BehaviorType
                {
                    get { return typeof(AppFabricE2EEndpointBehavior); }
                }

                /// <summary>
                /// Creates a behavior extension based on the current configuration settings.
                /// </summary>
                /// <returns>The behavior extension.</returns>
                protected override object CreateBehavior()
                {
                    return new AppFabricE2EEndpointBehavior();
                }
                #endregion BehaviorExtensionElement
            }

            // Adds the inspector to the dispatch runtime; dispatch side only.
            public class AppFabricE2EEndpointBehavior : IEndpointBehavior //, IServiceBehavior
            {
                #region IEndpointBehavior
                public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { }

                public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
                {
                    throw new NotImplementedException();
                }

                public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher)
                {
                    endpointDispatcher.DispatchRuntime.MessageInspectors.Add(new AppFabricE2EInspector());
                }

                public void Validate(ServiceEndpoint endpoint) { }
                #endregion IEndpointBehavior
            }
        }

    [1] http://www.microsoft.com/downloads/details.aspx?FamilyID=35ec8682-d5fd-4bc3-a51a-d8ad115a8792&displaylang=en
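
    The post relies on config-driven injection but doesn't show the configuration itself; below is a minimal sketch of what the web.config wiring could look like. The extension name, assembly name, and the service/contract names are assumptions based on the Fabrikam.Services namespace above.

        <system.serviceModel>
          <extensions>
            <behaviorExtensions>
              <!-- type is the assembly-qualified name of the BehaviorExtensionElement -->
              <add name="appFabricE2E"
                   type="Fabrikam.Services.AppFabricE2EBehaviorElement, Fabrikam.Services"/>
            </behaviorExtensions>
          </extensions>
          <behaviors>
            <endpointBehaviors>
              <behavior name="monitoredEndpoint">
                <appFabricE2E/>
              </behavior>
            </endpointBehaviors>
          </behaviors>
          <services>
            <service name="Fabrikam.Services.MyService">
              <endpoint address="" binding="basicHttpBinding"
                        contract="Fabrikam.Services.IMyService"
                        behaviorConfiguration="monitoredEndpoint"/>
            </service>
          </services>
        </system.serviceModel>

    With this in place, every dispatch through the endpoint writes a 'start' and an 'end' analytic event that surfaces on the AppFabric Dashboard.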

    Read the article

< Previous Page | 118 119 120 121 122 123 124 125 126 127 128 129  | Next Page >