Search Results

Search found 6441 results on 258 pages for 'schema compare'.

  • Problem Disabling Roaming Profiles on Grouped Users

    - by user43207
    I'm having some serious issues getting a group of users to stop using roaming profiles. As expected, I have roaming profiles enabled across the domain, but I'm doing GPO filtering to limit the scope. I originally had it set to Authenticated Users for roaming, but as the domain has branched out to multiple locations, I've limited the scope to only people that are near the central office. The GPO that I have linked is filtered to a group I created that includes the users that I don't want to have roaming profiles. This GPO is sitting at the root of the domain with the "Forced" setting enabled, so it should override any setting below it. On a side note, it is the ONLY GPO that I have set to "Forced" right now.
    I know the GPO is working, since I can see the original registry settings for a user that logged in under roaming profiles; after that same user logs in following the Group Policy changes, the registry reflects a local profile. But unfortunately, even after making those settings, the user is still given a roaming profile on one of the servers. A gpresult for that same user account (taken after the updated GPO) is listed below, and you can see right at the top of that output that it is in fact dealing with a roaming profile. Sure enough, on the server that hosts the file share for roaming profiles, a folder is created for the user once they log in.
    For testing purposes, I've deleted all copies of the user's profile, roaming and local, but the problem is still there. So I'm apparently missing something in the group policy settings on a wider scale. Would anybody be able to point me in the direction of what I'm missing here?
    gpresult /r output: Microsoft (R) Windows (R) Operating System Group Policy Result tool v2.0 Copyright (C) Microsoft Corp.
1981-2001 Created On 5/15/2010 at 8:59:00 AM RSOP data for ** on * : Logging Mode OS Configuration: Member Workstation OS Version: 6.1.7600 Site Name: N/A Roaming Profile: \\profiles$** Local Profile: C:\Users*** Connected over a slow link?: No USER SETTINGS CN=*****,OU=*****,OU=*****,OU=*****,DC=*****,DC=***** Last time Group Policy was applied: 5/15/2010 at 8:52:02 AM Group Policy was applied from: *****.*****.com Group Policy slow link threshold: 500 kbps Domain Name: USSLINDSTROM Domain Type: Windows 2000 Applied Group Policy Objects ----------------------------- ForceLocalProfilesOnly InternetExplorer_***** GlobalPasswordPolicy The following GPOs were not applied because they were filtered out ------------------------------------------------------------------- DAgentFirewallExceptions Filtering: Denied (Security) WSAdmin_***** Filtering: Denied (Security) NetlogonFirewallExceptions Filtering: Not Applied (Empty) NetLogon_***** Filtering: Denied (Security) WSUSUpdateScheduleManualInstall Filtering: Denied (Security) WSUSUpdateScheduleDaily_0300 Filtering: Denied (Security) WSUSUpdateScheduleThu_0100 Filtering: Denied (Security) AlternateSSLFirewallExceptions Filtering: Denied (Security) SNMPFirewallExceptions Filtering: Denied (Security) WSUSUpdateScheduleSun_0100 Filtering: Denied (Security) SQLServerFirewallExceptions Filtering: Denied (Security) WSUSUpdateScheduleTue_0100 Filtering: Denied (Security) WSUSUpdateScheduleSat_0100 Filtering: Denied (Security) DisableUAC Filtering: Denied (Security) ICMPFirewallExceptions Filtering: Denied (Security) AdminShareFirewallExceptions Filtering: Denied (Security) GPRefreshInterval Filtering: Denied (Security) ServeRAIDFirewallExceptions Filtering: Denied (Security) WSUSUpdateScheduleFri_0100 Filtering: Denied (Security) BlockFirewallExceptions(8400-8410) Filtering: Denied (Security) WSUSUpdateScheduleWed_0100 Filtering: Denied (Security) Local Group Policy Filtering: Not Applied (Empty) WSUS_***** Filtering: Denied (Security) LogonAsService_Idaho Filtering: Denied (Security) ReportServerFirewallExceptions Filtering: Denied (Security) WSUSUpdateScheduleMon_0100 Filtering: Denied (Security) TFSFirewallExceptions Filtering: Denied (Security) Default Domain Policy Filtering: Not Applied (Empty) DenyServerSideRoamingProfiles Filtering: Denied (Security) ShareConnectionsRemainAlive Filtering: Denied (Security) The user is a part of the following security groups --------------------------------------------------- Domain Users Everyone BUILTIN\Users BUILTIN\Administrators NT AUTHORITY\INTERACTIVE CONSOLE LOGON NT AUTHORITY\Authenticated Users This Organization LOCAL *****Users VPNAccess_***** NetAdmin_***** SiteAdmin_***** WSAdmin_***** VPNAccess_***** LocalProfileOnly_***** NetworkAdmin_***** LocalProfileOnly_***** VPNAccess_***** NetAdmin_***** Domain Admins WSAdmin_***** WSAdmin_***** ***** ***** Schema Admins ***** Enterprise Admins Denied RODC Password Replication Group High Mandatory Level
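    For what it's worth, a hedged way to cross-check the filtering from PowerShell, assuming the RSAT GroupPolicy and ActiveDirectory modules are available (2008 R2 era or later). The GPO name comes from the gpresult output above; the group and account names are placeholders:

        Import-Module GroupPolicy
        Import-Module ActiveDirectory

        # Which principals does the enforced GPO actually apply to?
        Get-GPPermission -Name 'ForceLocalProfilesOnly' -All |
            Where-Object { $_.Permission -eq 'GpoApply' }

        # Is the test account really a (possibly nested) member of the filtering group?
        Get-ADGroupMember -Identity 'LocalProfileOnly_Group' -Recursive |
            Where-Object { $_.SamAccountName -eq 'testuser' }

        # Is a roaming profile path set directly on the user object? gpresult's
        # "Roaming Profile:" line also reflects this, and GPO filtering alone
        # will not clear it.
        Get-ADUser 'testuser' -Properties ProfilePath |
            Select-Object SamAccountName, ProfilePath

        # Full RSoP report for the user, to see which policy wins
        Get-GPResultantSetOfPolicy -User 'USSLINDSTROM\testuser' -ReportType Html -Path "$env:TEMP\rsop.html"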

    Read the article

  • Standard Oracle Fusion Middleware Installation fails on SOA ManagedServer start

    - by Neuquino
    Hi, Trying to install Oracle Fusion Middleware 11gR2 on windows (same thing happens on Linux). I have followed the guidelines provided in the http://download.oracle.com/docs/cd/E12839_01/install.1111/e14318/toc.htm Installing the weblogic (11g) Oracle 11g databse installation Running the RCU utility to create schema Installed and copied relevant files for Java Bridge Configure the Fusion Middleware But i found that the SOA server is not getting up in the enterprise manager its showing as down. When i checked the logs iam getting the following error: oracle.jrf.wls.JRFStartup java.lang.ClassNotFoundException: oracle.jrf.wls.JRFStartup at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:247) at weblogic.management.deploy.classdeployment.ClassDeploymentManager.invokeClass(ClassDeploymentManager.java:253) at weblogic.management.deploy.classdeployment.ClassDeploymentManager.access$000(ClassDeploymentManager.java:54) at weblogic.management.deploy.classdeployment.ClassDeploymentManager$1.run(ClassDeploymentManager.java:205) Truncated. see log file for complete stacktrace <Jul 7, 2009 4:18:48 PM CEST> <Critical> <WebLogicServer> <BEA-000286> <Failed to invoke startup class "SOAStartupClass", java.lang.ClassNotFoundException: oracle.bpel.services.common.util.GenerateBPMCryptoKey java.lang.ClassNotFoundException: oracle.bpel.services.common.util.GenerateBPMCryptoKey at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:247) at weblogic.management.deploy.classdeployment.ClassDeploymentManager.invokeClass(ClassDeploymentManager.java:253) at weblogic.management.deploy.classdeployment.ClassDeploymentManager.access$000(ClassDeploymentManager.java:54) at weblogic.management.deploy.classdeployment.ClassDeploymentManager$1.run(ClassDeploymentManager.java:205) Truncated. see log file for complete stacktrace <Jul 7, 2009 4:19:27 PM CEST> <Error> <Deployer> <BEA-149205> <Failed to initialize the application 'SocketAdapter' due to error weblogic.application.ModuleException: The ra.xml <connectionfactory-impl-class> class 'oracle.tip.adapter.socket.SocketConnectionFactory' could not be loaded from the resource adapter archive/application because of the following error: java.lang.NoClassDefFoundError: oracle/tip/adapter/api/OracleConnectionFactory.weblogic.application.ModuleException: The ra.xml <connectionfactory-impl-class> class 'oracle.tip.adapter.socket.SocketConnectionFactory' could not be loaded from the resource adapter archive/application because of the following error: java.lang.NoClassDefFoundError: oracle/tip/adapter/api/OracleConnectionFactory at weblogic.connector.deploy.ConnectorModule.prepare(ConnectorModule.java:228) at weblogic.application.internal.flow.ModuleListenerInvoker.prepare(ModuleListenerInvoker.java:93) at weblogic.application.internal.flow.DeploymentCallbackFlow$1.next(DeploymentCallbackFlow.java:387) at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:37) at weblogic.application.internal.flow.DeploymentCallbackFlow.prepare(DeploymentCallbackFlow.java:58) Truncated. 
see log file for complete stacktrace <Jul 7, 2009 4:19:27 PM CEST> <Error> <Deployer> <BEA-149205> <Failed to initialize the application 'MQSeriesAdapter' due to error weblogic.application.ModuleException: The ra.xml <connectionfactory-impl-class> class 'oracle.tip.adapter.mq.ConnectionFactoryImpl' could not be loaded from the resource adapter archive/application because of the following error: java.lang.NoClassDefFoundError: oracle/tip/adapter/api/OracleConnectionFactory.weblogic.application.ModuleException: The ra.xml <connectionfactory-impl-class> class 'oracle.tip.adapter.mq.ConnectionFactoryImpl' could not be loaded from the resource adapter archive/application because of the following error: java.lang.NoClassDefFoundError: oracle/tip/adapter/api/OracleConnectionFactory at weblogic.connector.deploy.ConnectorModule.prepare(ConnectorModule.java:228) at weblogic.application.internal.flow.ModuleListenerInvoker.prepare(ModuleListenerInvoker.java:93) at weblogic.application.internal.flow.DeploymentCallbackFlow$1.next(DeploymentCallbackFlow.java:387) at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:37) at weblogic.application.internal.flow.DeploymentCallbackFlow.prepare(DeploymentCallbackFlow.java:58) Truncated. see log file for complete stacktrace <Jul 7, 2009 4:19:27 PM CEST> <Error> <Deployer> <BEA-149205> <Failed to initialize the application 'OracleAppsAdapter' due to error weblogic.application.ModuleException: java.lang.NoClassDefFoundError: oracle/tip/adapter/api/exception/PCResourceException.weblogic.application.ModuleException: java.lang.NoClassDefFoundError: oracle/tip/adapter/api/exception/PCResourceException at weblogic.connector.deploy.ConnectorModule.prepare(ConnectorModule.java:238) at weblogic.application.internal.flow.ModuleListenerInvoker.prepare(ModuleListenerInvoker.java:93) at weblogic.application.internal.flow.DeploymentCallbackFlow$1.next(DeploymentCallbackFlow.java:387) at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:37) at weblogic.application.internal.flow.DeploymentCallbackFlow.prepare(DeploymentCallbackFlow.java:58) Truncated. see log file for complete stacktrace java.lang.NoClassDefFoundError: oracle/tip/adapter/api/exception/PCResourceException at java.lang.Class.getDeclaredMethods0(Native Method) at java.lang.Class.privateGetDeclaredMethods(Class.java:2427) at java.lang.Class.privateGetPublicMethods(Class.java:2547) at java.lang.Class.getMethods(Class.java:1410) at weblogic.connector.external.impl.RAComplianceChecker.checkOverrides(RAComplianceChecker.java:972) Truncated. see log file for complete stacktrace Can any one please tell me if i have missed any steps? thanks and regards, Naveen
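    Not an answer, but a sketch of how to narrow this down: both missing classes normally ship in the SOA/JRF libraries under the Middleware home, so it is worth confirming which jar contains them and whether the managed server is started through the domain scripts that build that classpath. The Middleware home path below is an assumption:

        MW_HOME=/u01/app/oracle/Middleware   # adjust to the real Middleware home

        # Which jar ships the first missing class?
        find "$MW_HOME" -name '*.jar' 2>/dev/null | while read -r jar; do
          unzip -l "$jar" 2>/dev/null | grep -q 'oracle/jrf/wls/JRFStartup.class' \
            && echo "JRFStartup found in: $jar"
        done

        # Is the SOA managed server started via the domain scripts (which extend
        # the classpath through setDomainEnv.sh), rather than a bare java command?
        ls "$MW_HOME"/user_projects/domains/*/bin/startManagedWebLogic.sh
        grep -i 'POST_CLASSPATH\|EXT_PRE_CLASSPATH' \
            "$MW_HOME"/user_projects/domains/*/bin/setDomainEnv.sh | head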

    Read the article

  • Why do GPUs overheat?

    - by JAD
    About a year ago, I added a 9800GT (1 GB version) and a Corsair CX500 PSU to an HP M8000N computer. A few weeks ago, the HDD overheated and I decided to transfer the GPU and PSU to a new build, which consists of:
    - i3 @ 3.3GHz
    - Gigabyte H61 micro-ATX motherboard
    - 4GB RAM
    - 500GB WD HDD
    - DVD-RW drive
    - Cooler Master Elite 430 tower
    Once I had Win7 up and running, I installed all the essential drivers that came with the Gigabyte motherboard CD. However, whenever I tried installing the Graphics Media Accelerator driver, the computer would crash and enter an endless boot sequence on the next startup. I skipped that driver and installed the CD driver for the 9800GT, which by now is a year old. Everything was working fine; WEI rated my GPU at 6.6 for graphics and Aero performance. However, after updating my Nvidia drivers to the latest, WEI dropped my rating to 3.3 for Aero and 4.7 for graphics performance.
    Just to make sure that everything was OK, I ran Bad Company 2 on medium settings. The first few minutes ran just fine at a smooth framerate, so I dismissed this as Windows being Windows. About 6 hours later, I ran BC2 again. This time I averaged anywhere from 2-5 FPS. I checked the GPU temperature through GPU-Z, and it came back as 120C. The problem with this is that the computer had been on for six hours up to that point. Wouldn't the card have experienced a reactor core meltdown a lot sooner than that? Granted, the computer was "sleeping" some of the time, but still...
    The next day I took out a temperature gun and ran some tests. I would point the laser at a very specific area on the reverse side of the card (not the fan or "front") and compare the temp reading with GPU-Z. After leaving the system on idle for a few minutes, I ran BC2 twice. Here are the results:
    GPU-Z reading / temp gun reading / time
    Null / 22.3°C / comp is off
    53°C / 33.5°C / 1:49
    78°C / 46°C / 1:53 (first BC2 run; good framerate)
    102°C / 64.6°C / 2:01 (system is again on idle)
    113°C / 64.8°C / 2:10
    119°C / 71.8°C / 2:17 (second BC2 run; poor framerate)
    I should also mention that I took a temp recording of another part of the GPU from 2:01-2:17. The temp in this area jumped from 75°C to 82.9°C in that time frame. This pretty much confirms that GPU-Z is reporting the temperature accurately, and the card is overheating. But I'd like to know why; the card is doing nothing and still the temperature climbs at a steady rate. I thoroughly cleaned the GPU and PSU with a can of compressed air when I salvaged them from the old HP M8000N computer, so dust can't be the issue. Similarly, the rest of the computer is brand new. I installed various Nvidia drivers, but no luck. It seems strange to me that a year-old card is suddenly failing on me; aren't they supposed to last at least two years? Could this be a driver issue? Is the motherboard faulty? Could the PSU be overfeeding the card on voltage? None of these seems likely, as the CPU, RAM and the rest of the comp have worked flawlessly and stayed well within respectable temp ranges (the i3 lingers around 50C, the HDD stays at 30C, and so does the PSU). How can I pinpoint the issue?
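    One way to take the manual timing out of that comparison is to log the reported temperature at fixed intervals while the card sits idle. The sketch below assumes a driver/card combination that exposes the sensor through nvidia-smi, which an older GeForce like the 9800GT may not; GPU-Z's own "Log to file" option is the fallback, and the install path is an assumption:

        # PowerShell: append the reported GPU temperature to a CSV every 10 seconds.
        $smi = 'C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe'   # assumed path
        $log = "$env:USERPROFILE\gpu_temp_log.csv"
        while ($true) {
            $t = & $smi --query-gpu=temperature.gpu --format=csv,noheader
            "$(Get-Date -Format s),$t" | Add-Content $log
            Start-Sleep -Seconds 10
        }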

    Read the article

  • How to export computers from Active Directory to XML using Powershell?

    - by CoDeRs
    I am trying to create a powershell scripts for Remote Desktop Connection Manager using the active directory module. My first thought was get a list of computers in AD and parse them out into XML format similar to the OU structure that is in AD. I have no problem with that, the below code will work just but not how I wanted. EG # here is a the array $OUs Americas/Canada/Canada Computers/Desktops Americas/Canada/Canada Computers/Laptops Americas/Canada/Canada Computers/Virtual Computers Americas/USA/USA Computers/Laptops Computers Disabled Accounts Domain Controllers EMEA/UK/UK Computers/Desktops EMEA/UK/UK Computers/Laptops Outside Sales and Service/Laptops Servers I wanted to have the basic XML structured like this Americas Canada Canada Computers Desktops Laptops Virtual Computers USA USA Computers Laptops Computers Disabled Accounts Domain Controllers EMEA UK UK Computers Desktops Laptops Outside Sales and Service Laptops Servers However if you run the below it does not nest the next string in the array it only restarts the from the beginning and duplicating Americas Canada Canada Computers Desktops Americas Canada Canada Computers Laptops Americas Canada Canada Computers Virtual Computers Americas USA USA Computers Laptops RDCMGenerator.ps1 #Importing Microsoft`s PowerShell-module for administering ActiveDirectory Import-Module ActiveDirectory #Initial variables $OUs = @() $RDCMVer = "2.2" $userName = "domain\username" $password = "Hashed Password+" $Path = "$env:temp\test.xml" $allComputers = Get-ADComputer -LDAPFilter "(OperatingSystem=*)" -Properties Name,Description,CanonicalName | Sort-Object CanonicalName | select Name,Description,CanonicalName $allOUObjects = $allComputers | Foreach {"$($_.CanonicalName)"} Function Initialize-XML{ ##<RDCMan schemaVersion="1"> $xmlWriter.WriteStartElement('RDCMan') $XmlWriter.WriteAttributeString('schemaVersion', '1') $xmlWriter.WriteElementString('version',$RDCMVer) $xmlWriter.WriteStartElement('file') $xmlWriter.WriteStartElement('properties') $xmlWriter.WriteElementString('name',$env:userdomain) $xmlWriter.WriteElementString('expanded','true') $xmlWriter.WriteElementString('comment','') $xmlWriter.WriteStartElement('logonCredentials') $XmlWriter.WriteAttributeString('inherit', 'None') $xmlWriter.WriteElementString('userName',$userName) $xmlWriter.WriteElementString('domain',$env:userdomain) $xmlWriter.WriteStartElement('password') $XmlWriter.WriteAttributeString('storeAsClearText', 'false') $XmlWriter.WriteRaw($password) $xmlWriter.WriteEndElement() $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('connectionSettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('gatewaySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('remoteDesktop') $XmlWriter.WriteAttributeString('inherit', 'None') $xmlWriter.WriteElementString('size','1024 x 768') $xmlWriter.WriteElementString('sameSizeAsClientArea','True') $xmlWriter.WriteElementString('fullScreen','False') $xmlWriter.WriteElementString('colorDepth','32') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('localResources') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('securitySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('displaySettings') $XmlWriter.WriteAttributeString('inherit', 
'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteEndElement() } Function Create-Group ($groupName){ #Start Group $xmlWriter.WriteStartElement('properties') $xmlWriter.WriteElementString('name',$groupName) $xmlWriter.WriteElementString('expanded','true') $xmlWriter.WriteElementString('comment','') $xmlWriter.WriteStartElement('logonCredentials') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('connectionSettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('gatewaySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('remoteDesktop') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('localResources') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('securitySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('displaySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteEndElement() } Function Create-Server ($computerName, $computerDescription) { #Start Server $xmlWriter.WriteStartElement('server') $xmlWriter.WriteElementString('name',$computerName) $xmlWriter.WriteElementString('displayName',$computerDescription) $xmlWriter.WriteElementString('comment','') $xmlWriter.WriteStartElement('logonCredentials') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('connectionSettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('gatewaySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('remoteDesktop') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('localResources') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('securitySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('displaySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteEndElement() #Stop Server } Function Close-XML { $xmlWriter.WriteEndElement() $xmlWriter.WriteEndElement() # finalize the document: $xmlWriter.Flush() $xmlWriter.Close() notepad $path } #Strip out Domain and Computer Name from CanonicalName foreach($OU in $allOUObjects){ $newSplit = $OU.split("/") $rebildOU = "" for($i=1; $i -le ($newSplit.count - 2); $i++){ $rebildOU += $newSplit[$i] + "/" } $OUs += $rebildOU.substring(0,($rebildOU.length - 1)) } #Remove Duplicate OU's $OUs = $OUs | select -uniq #$OUs # get an XMLTextWriter to create the XML $XmlWriter = New-Object System.XMl.XmlTextWriter($Path,$UTF8) # choose a pretty formatting: $xmlWriter.Formatting = 'Indented' $xmlWriter.Indentation = 1 $XmlWriter.IndentChar = "`t" # write the header $xmlWriter.WriteStartDocument() # # 'encoding', 'utf-8' How? 
# # set XSL statements #Initialize Pre-Defined XML Initialize-XML ######################################################### # Start Loop for each OU-Path that has a computer in it ######################################################### foreach ($OU in $OUs){ $totalGroupName = "" #Create / Reset Total OU-Path Completed $OU.split("/") | foreach { #Split the OU-Path into individual OU's $groupName = "$_" #Current OU $totalGroupName += $groupName + "/" #Total OU-Path Completed $xmlWriter.WriteStartElement('group') #Start new XML Group Create-Group $groupName #Call function to create XML Group ################################################ # Start Loop for each Computer in $allComputers ################################################ foreach($computer in $allComputers){ $computerOU = $computer.CanonicalName #Set the computers OU-Path $OUSplit = $computerOU.split("/") #Create the Split for the OU-Path $rebiltOU = "" #Create / Reset the stripped OU-Path for($i=1; $i -le ($OUSplit.count - 2); $i++){ #Start Loop for OU-Path to strip out the Domain and Computer Name $rebiltOU += $OUSplit[$i] + "/" #Rebuild the stripped OU-Path } if ($rebiltOU -eq $totalGroupName){ #Compare the Current OU-Path with the computers stripped OU-Path $computerName = $computer.Name #Set the computer name $computerDescription = $computerName + " - " + $computer.Description #Set the computer Description Create-Server $computerName $computerDescription #Call function to create XML Server } } } ################################################### # Start Loop to close out XML Groups created above ################################################### $totalGroupName.split("/") | foreach { #Split the if ($_ -ne "" ){ $xmlWriter.WriteEndElement() #End Group } } } Close-XML
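    A sketch of one way to get the nesting (reusing the question's $xmlWriter, $allComputers, Create-Group and Create-Server; the new function names are placeholders): build a tree from the stripped OU paths first, then walk it recursively, so each group element is opened once and only closed after its children, instead of restarting from the root for every path:

        # Build a nested hashtable from the OU paths.
        function Add-OUPath ($tree, $parts) {
            $parts = @($parts | Where-Object { $_ -ne '' })
            if ($parts.Count -eq 0) { return }
            $head = $parts[0]
            if (-not $tree.ContainsKey($head)) { $tree[$head] = @{} }
            if ($parts.Count -gt 1) { Add-OUPath $tree[$head] $parts[1..($parts.Count - 1)] }
        }

        # Emit nested <group> elements depth-first, with servers at each level.
        function Write-OUTree ($tree, $prefix) {
            foreach ($name in ($tree.Keys | Sort-Object)) {
                $xmlWriter.WriteStartElement('group')
                Create-Group $name
                $current = "$prefix$name/"
                foreach ($computer in $allComputers) {
                    $split = $computer.CanonicalName.Split('/')
                    if ($split.Count -lt 3) { continue }
                    $strippedOU = ($split[1..($split.Count - 2)] -join '/') + '/'
                    if ($strippedOU -eq $current) {
                        Create-Server $computer.Name ($computer.Name + ' - ' + $computer.Description)
                    }
                }
                Write-OUTree $tree[$name] $current   # children nest inside this group
                $xmlWriter.WriteEndElement()         # close the group once, after its children
            }
        }

        $ouTree = @{}
        foreach ($OU in $OUs) { Add-OUPath $ouTree ($OU -split '/') }
        Write-OUTree $ouTree ''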

    Read the article

  • Unable to Manage DNS via MMC

    - by IT Helpdesk Team Manager
    When trying to access the DNS service on Microsoft Windows Server 2003 (Build 3790) domain controller/schema master via the MMC DNS snap in or locally via the DNS MMC from Administrative tools I'm getting a red "X" through the icon for the DNS Server. The inability to access DNS management via MMC happens on all domain controllers as well. We've looked at items such as the DHCP client not being started, incorrect DNS setup ( the machine points at itself and another DC ), the DNS service not running ( it is and all DNS queries via NSLOOKUP work correctly ), dslint returns the correct information and functions as expected. There is the following entry in the DNS event log: The DNS server could not initialize the remote procedure call (RPC) service. If it is not running, start the RPC service or reboot the computer. The event data is the error code. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp. 0000: 0000051b dnscmd fails with RPC server unavailable yet RPC is started: C:\Documents and Settings\Administrator.DOMAIN>dnscmd /Info Info query failed status = 1722 (0x000006ba) Command failed: RPC_S_SERVER_UNAVAILABLE 1722 (000006ba) DCDIAG /TEST:DNS /V /E produces the following errors: Warning: no DNS RPC connectivity (error or non Microsoft DNS server is running) [Error details: 1753 (Type: Win32 - Description: There are no more endpoints available from the endpoint mapper.)] Warning: no DNS RPC connectivity (error or non Microsoft DNS server is running) [Error details: 1722 (Type: Win32 - Description: The RPC server is unavailable.)] The DNS server could not initialize the remote procedure call (RPC) service. If it is not running, start the RPC service or reboot the computer. The event data is the error code. A DNS query for _ldap._tcp.dc._msdcs. returns the correct results. All domain and ADS related activities are working except that I can't manage my DNS via MMC or dnscmd. Any thoughts or solutions would be greatly appreciated. EDIT: Adding Registry export per request: Key Name: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc Class Name: <NO CLASS> Last Write Time: 10/18/2012 - 2:29 PM Value 0 Name: DCOM Protocols Type: REG_MULTI_SZ Data: ncacn_ip_tcp Value 1 Name: UuidSequenceNumber Type: REG_DWORD Data: 0xb19bd0f Key Name: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc\ClientProtocols Class Name: <NO CLASS> Last Write Time: 3/9/2007 - 12:11 PM Value 0 Name: ncacn_np Type: REG_SZ Data: rpcrt4.dll Value 1 Name: ncacn_ip_tcp Type: REG_SZ Data: rpcrt4.dll Value 2 Name: ncadg_ip_udp Type: REG_SZ Data: rpcrt4.dll Value 3 Name: ncacn_http Type: REG_SZ Data: rpcrt4.dll Value 4 Name: ncacn_at_dsp Type: REG_SZ Data: rpcrt4.dll Key Name: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc\NameService Class Name: <NO CLASS> Last Write Time: 2/20/2006 - 4:48 PM Value 0 Name: DefaultSyntax Type: REG_SZ Data: 3 Value 1 Name: Endpoint Type: REG_SZ Data: \pipe\locator Value 2 Name: NetworkAddress Type: REG_SZ Data: \\. Value 3 Name: Protocol Type: REG_SZ Data: ncacn_np Value 4 Name: ServerNetworkAddress Type: REG_SZ Data: \\. 
Key Name: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc\NetBios Class Name: <NO CLASS> Last Write Time: 2/20/2006 - 4:48 PM Key Name: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc\RpcProxy Class Name: <NO CLASS> Last Write Time: 3/9/2007 - 12:11 PM Value 0 Name: Enabled Type: REG_DWORD Data: 0x1 Value 1 Name: ValidPorts Type: REG_SZ Data: pdc:100-5000 Key Name: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc\SecurityService Class Name: <NO CLASS> Last Write Time: 2/20/2006 - 4:48 PM Value 0 Name: 9 Type: REG_SZ Data: secur32.dll Value 1 Name: 10 Type: REG_SZ Data: secur32.dll Value 2 Name: 14 Type: REG_SZ Data: schannel.dll Value 3 Name: 16 Type: REG_SZ Data: secur32.dll Value 4 Name: 1 Type: REG_SZ Data: secur32.dll Value 5 Name: 18 Type: REG_SZ Data: secur32.dll Value 6 Name: 68 Type: REG_SZ Data: netlogon.dll
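    A few quick checks from the affected DC that help separate "RPC service down" from "DNS never registered its endpoint" (error 1753 above points at the latter). PortQry is a separate Microsoft download, and the server name is a placeholder:

        rem Is the RPC service itself running, and is the endpoint mapper listening?
        sc query rpcss
        netstat -ano | findstr ":135"

        rem Can the endpoint mapper on the DC enumerate a DNS endpoint?
        portqry -n dc1.yourdomain.local -e 135

        rem Does dnscmd behave differently when pointed explicitly at the local server?
        dnscmd 127.0.0.1 /Info

        rem If this value exists and is set to 0, RPC administration of the DNS
        rem service is disabled and the snap-in/dnscmd fail exactly like this.
        reg query HKLM\SYSTEM\CurrentControlSet\Services\DNS\Parameters /v RpcProtocol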

    Read the article

  • How clean is deleting a computer object?

    - by Kevin
    Though quite skilled at software development, I'm a novice when it comes to Active Directory. I've noticed that AD seems to have a lot of stuff buried in the directory and schema which does not appear superficially when using simplified tools such as Active Directory Users and Computers. It kind of feels like the Windows registry, where COM classes have all kinds of intertwined references, many of which are purely by GUID, such that it's not enough to just search for anything referencing "GadgetXyz" by name in order to cleanly remove GadgetXyz. This occasionally leads to the uneasy feeling that I may have useless garbage building up in there which I have no idea how to weed out. For instance, I made the mistake a while back of trying to rename a DC, figuring I could just do it in the usual manner from Control Panel. I found references to the old name buried all over the place which made it impossible to reuse that name without considerable manual cleanup. Even long after I got it all working, I've stumbled upon the old name hidden away in LDAP. (There were no other DCs left in the picture at that time so I don't think it was a tombstone issue.) More specifically, I'm worried about the case of just outright deleting a computer from AD. I understand the cleanest way to do it is to log into the computer itself and tell it to leave the domain. (As an aside, doing this in Windows 8 seems to only disable the computer object and not delete it outright!) My concern is cases where this is not possible, for instance because it was on an already-deleted VM image. I can simply go into Active Directory Users and Computers, find the computer object, click it, and press Delete, and it seems to go away. My question is, is it totally, totally gone, or could this leave hanging references in any Active Directory nook or cranny I won't know to look in? (Excluding of course the expected tombstone records which expire after a set time.) If so, is there any good way to clean up the mess? Thank you for any insight! Kevin ps., It was over a year ago so I don't remember the exact details, but here's the gist of the DC renaming issue. I started with a single 2008 DC named ABC in a physical machine and wanted to end up instead with a DC of the same name running in a vSphere VM. Not wanting to mess with imaging the physical machine, my plan instead was: Rename ABC to XYZ. Fresh install 2008 on a VM, name it ABC, and join it to the domain. (I may have done the latter in the same step as promoting to DC; I don't recall.) dcpromo the new ABC as a 2nd DC, including GC. Make sure the new ABC replicated correctly from XYZ and then transfer the FSMO roles from XYZ to it. Once everything was confirmed to work with the new ABC alone, demote XYZ, remove the AD role, and remove it from the domain. Eventually I managed to do this but it was a much bumpier ride than expected. In particular, I got errors trying to join the new ABC to the domain. These included "The pre-windows 2000 name is already in use" and "No mapping between account names and security IDs was done." I eventually found that the computer object for XYZ had attributes that still referred to it as ABC. Among these were servicePrincipalName, msDS-AdditionalDnsHostName, and msDS-AdditionalSamAccountName. The latter I could not edit via Attribute Editor and instead had to run this against XYZ: NETDOM computername <simple-name> /add:<FQDN> There were some other hitches I don't remember exactly.
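    For the "did anything get left behind" worry, a sketch of the kind of sweep that can be run after deleting a computer object, assuming the ActiveDirectory PowerShell module is available; 'OLDPC' stands in for the deleted machine's name:

        Import-Module ActiveDirectory
        $name    = 'OLDPC'
        $pattern = "$name*"

        # The tombstoned copy of the object (expected; it expires with the tombstone lifetime)
        Get-ADObject -IncludeDeletedObjects -Properties isDeleted `
            -Filter { isDeleted -eq $true -and Name -like $pattern }

        # Anything else still referring to the old name in SPN or alternate-name
        # attributes - the same kind of leftovers the DC rename left behind
        Get-ADObject -LDAPFilter "(|(servicePrincipalName=*$name*)(msDS-AdditionalDnsHostName=*$name*)(msDS-AdditionalSamAccountName=*$name*))" `
            -Properties servicePrincipalName, 'msDS-AdditionalDnsHostName'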

    Read the article

  • Openldap/Sasl/GSSAPI on Debian: Key table entry not found

    - by badbishop
    The goal: to make an OpenLDAP server to authenticate using Kerberos V via GSSAPI Setup: several virtual machines running on freshly installed/updated Debian Squeeze A master KDC server kdc.example.com A LDAP server, running OpenLDAP ldap.example.com The problem: tom@ldap:~$ ldapsearch -b 'dc=example,dc=com' SASL/GSSAPI authentication started ldap_sasl_interactive_bind_s: Other (e.g., implementation specific) error (80) additional info: SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Key table entry not found) One might suggest to add that bloody keytab entry, but here's the real problem: ktutil: rkt /etc/ldap/ldap.keytab ktutil: list slot KVNO Principal ---- ---- --------------------------------------------------------------------- 1 2 ldap/[email protected] 2 2 ldap/[email protected] 3 2 ldap/[email protected] 4 2 ldap/[email protected] So, the entry as suggested by the OpenLDAP manual is there allright. Deleting and re-creating both service principal and the keytab on ldap.example.com didn't help, I get the same error. And before I make the keytab file readable by openldap, I get "Permission denied" error instead of the one in the subject. Which implies, that the right keytab file is being accessed, as set in /etc/default/slapd. I have my doubts about the following part of slapd config: root@ldap:~# cat /etc/ldap/slapd.d/cn\=config.ldif | grep -v "^#" dn: cn=config objectClass: olcGlobal cn: config olcArgsFile: /var/run/slapd/slapd.args olcLogLevel: 256 olcPidFile: /var/run/slapd/slapd.pid olcToolThreads: 1 structuralObjectClass: olcGlobal entryUUID: d6737f5c-d321-1030-9dbe-27d2a7751e11 olcSaslHost: kdc.example.com olcSaslRealm: EXAMPLE.COM olcSaslSecProps: noplain,noactive,noanonymous,minssf=56 olcAuthzRegexp: {0}"uid=([^/]*),cn=EXAMPLE.COM,cn=GSSAPI,cn=auth" "uid=$1,ou=People,dc=example,dc=com" olcAuthzRegexp: {1}"uid=host/([^/]*).example.com,cn=example.com,cn=gssapi,cn=auth" "cn=$1,ou=hosts,dc=example,dc=com" A HOWTO at https://help.ubuntu.com/community/OpenLDAPServer#Kerberos_Authentication mentiones vaguely: Also, it is frequently necessary to map the Distinguished Name (DN) of an authorized Kerberos client to an existing entry in the DIT. I fail to understand where in the tree this should be defined, what schema should be used, etc. After hours of googling, it's official: I'm stuck! Please, help. Other things checked: Kerberos as such works fine (I can ssh without using a password to any machine in this setup). That means there should be no DNS-related problems. ldapsearch -b 'dc=example,dc=com' -x works OK. SASL/GSSAPI has been tested using sasl-sample-server -m GSSAPI -s ldap and sasl-sample-client -s ldap -n ldap.example.com -u tom without errors: root@ldap:~# sasl-sample-server -m GSSAPI -s ldap Forcing use of mechanism GSSAPI Sending list of 1 mechanism(s) S: R1NTQVBJ Waiting for client mechanism... 
C: R1NTQVBJAGCCAmUGCSqGSIb3EgECAgEAboICVDCCAlCgAwIBBaEDAgEOogcDBQAgAAAAo4IBamGCAWYwggFioAMCAQWhDRsLRVhBTVBMRS5DT02iIzAhoAMCAQOhGjAYGwRsZGFwGxBsZGFwLmV4YW1wbGUuY29to4IBJTCCASGgAwIBEqEDAgECooIBEwSCAQ8Re8XUnscB8dx6V/cXL+uzSF2/olZvcrVAJHZBZrfRKUFEQmU1Li46bUGK3GZwsn6qUVwmW6lyqVctOIYwGvBpz81Rw/5mj4V5iQudZbIRa+5Ew6W1oBB7ALi2cnPsbUroqzGmEh8/Vw8zSFk7W1gND4DLuWrPXD2xhLDUMMekBn5nXEPTnNAnV4w81Sj3ZlyLZz5OSitGVUEnQweV53z1spWsASHHWod/tSuxb19YeWmY5QHXPLG+lL5+w+Cykr0EhYVj8f8MDWFB8qoN1cr85xDfn18r8JldSw+i18nFKOo8usG+37hZTWynHYvBfMONtG9mLJv82KGPZMydWK7pzyTZDcnSsIjo2AftMZd5pIHMMIHJoAMCARKigcEEgb5aG1k4xgxmUXX7RKfvAbVBVJ12dWOgFFjMYceKjziXwrrOkv8ZwIvef9Yn2KsWznb5L55SXt2c/zlPa5mLKIktvw77hsK1h/GYc7p//BGOsmr47aCqVWsGuTqVT129uo5LNQDeSFwl2jXCkCZJZavOVrqYsM6flrPYE4n5lASTcPitX+/WNsf6WrvZoaexiv1JqyM/MWqS/vMBRMMc5xlurj6OARFvP9aFZoK/BLmfkSyAJj6MLbLVXZtkHiIPgot 'GSSAPI' Sending response... S: YIGZBgkqhkiG9xIBAgICAG+BiTCBhqADAgEFoQMCAQ+iejB4oAMCARKicQRvkxggi9pW+yJ1ExbTwLDclqw/VQ98aPq8mt39hkO6PPfcO2cB+t6vJ01xRKBrT9D2qF2XK0SWD4PQNb5UFbH4RM/bKAxDuCfZ1MHKgIWTLu4bK7VGZTbYydcckU2d910jIdvkkHhaRqUEM4cqp/cR Waiting for client reply... C: got '' Sending response... S: BQQF/wAMAAAAAAAAMBOWqQcACAAlCodrXW66ZObsEd4= Waiting for client reply... C: BQQE/wAMAAAAAAAAFUYbXQQACAB0b20VynB4uGH/iIzoRhw=got '?' Negotiation complete Username: tom Realm: (NULL) SSF: 56 sending encrypted message 'srv message 1' S: AAAASgUEB/8AAAAAAAAAADATlqrqrBW0NRfPMXMdMz+zqY32YakrHqFps3o/vO6yDeyPSaSqprrhI+t7owk7iOsbrZ/idJRxCBm8Wazx Waiting for encrypted message... C: AAAATQUEBv8AAAAAAAAAABVGG17WC1+/kIV9xTMUdq6Y4qYmmTahHVCjidgGchTOOOrBLEwA9IqiTCdRFPVbK1EgJ34P/vxMQpV1v4WZpcztgot '' recieved decoded message 'client message 1' root@ldap:~# sasl-sample-client -s ldap -n ldap.example.com -u tom service=ldap Waiting for mechanism list from server... S: R1NTQVBJrecieved 6 byte message Choosing best mechanism from: GSSAPI returning OK: tom Using mechanism GSSAPI Preparing initial. Sending initial response... C: R1NTQVBJAGCCAmUGCSqGSIb3EgECAgEAboICVDCCAlCgAwIBBaEDAgEOogcDBQAgAAAAo4IBamGCAWYwggFioAMCAQWhDRsLRVhBTVBMRS5DT02iIzAhoAMCAQOhGjAYGwRsZGFwGxBsZGFwLmV4YW1wbGUuY29to4IBJTCCASGgAwIBEqEDAgECooIBEwSCAQ8Re8XUnscB8dx6V/cXL+uzSF2/olZvcrVAJHZBZrfRKUFEQmU1Li46bUGK3GZwsn6qUVwmW6lyqVctOIYwGvBpz81Rw/5mj4V5iQudZbIRa+5Ew6W1oBB7ALi2cnPsbUroqzGmEh8/Vw8zSFk7W1gND4DLuWrPXD2xhLDUMMekBn5nXEPTnNAnV4w81Sj3ZlyLZz5OSitGVUEnQweV53z1spWsASHHWod/tSuxb19YeWmY5QHXPLG+lL5+w+Cykr0EhYVj8f8MDWFB8qoN1cr85xDfn18r8JldSw+i18nFKOo8usG+37hZTWynHYvBfMONtG9mLJv82KGPZMydWK7pzyTZDcnSsIjo2AftMZd5pIHMMIHJoAMCARKigcEEgb5aG1k4xgxmUXX7RKfvAbVBVJ12dWOgFFjMYceKjziXwrrOkv8ZwIvef9Yn2KsWznb5L55SXt2c/zlPa5mLKIktvw77hsK1h/GYc7p//BGOsmr47aCqVWsGuTqVT129uo5LNQDeSFwl2jXCkCZJZavOVrqYsM6flrPYE4n5lASTcPitX+/WNsf6WrvZoaexiv1JqyM/MWqS/vMBRMMc5xlurj6OARFvP9aFZoK/BLmfkSyAJj6MLbLVXZtkHiIP Waiting for server reply... S: YIGZBgkqhkiG9xIBAgICAG+BiTCBhqADAgEFoQMCAQ+iejB4oAMCARKicQRvkxggi9pW+yJ1ExbTwLDclqw/VQ98aPq8mt39hkO6PPfcO2cB+t6vJ01xRKBrT9D2qF2XK0SWD4PQNb5UFbH4RM/bKAxDuCfZ1MHKgIWTLu4bK7VGZTbYydcckU2d910jIdvkkHhaRqUEM4cqp/cRrecieved 156 byte message C: Waiting for server reply... S: BQQF/wAMAAAAAAAAMBOWqQcACAAlCodrXW66ZObsEd4=recieved 32 byte message Sending response... C: BQQE/wAMAAAAAAAAFUYbXQQACAB0b20VynB4uGH/iIzoRhw= Negotiation complete Username: tom SSF: 56 Waiting for encoded message... 
S: AAAASgUEB/8AAAAAAAAAADATlqrqrBW0NRfPMXMdMz+zqY32YakrHqFps3o/vO6yDeyPSaSqprrhI+t7owk7iOsbrZ/idJRxCBm8Wazxrecieved 78 byte message recieved decoded message 'srv message 1' sending encrypted message 'client message 1' C: AAAATQUEBv8AAAAAAAAAABVGG17WC1+/kIV9xTMUdq6Y4qYmmTahHVCjidgGchTOOOrBLEwA9IqiTCdRFPVbK1EgJ34P/vxMQpV1v4WZpczt
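    A sketch of the checks that usually localise this kind of GSSAPI failure (the keytab path and principal follow the question; the /etc/default/slapd line is an assumption about how the keytab is exported on Debian):

        # 1. What is actually in the keytab, and with which kvno?
        klist -k /etc/ldap/ldap.keytab

        # 2. Does the kvno the KDC issues now still match the keytab? A stale kvno
        #    after re-creating the principal produces the same "not found" error.
        kvno ldap/ldap.example.com@EXAMPLE.COM

        # 3. Is slapd really told to use that keytab, and can the openldap user read it?
        grep KRB5_KTNAME /etc/default/slapd    # expect: export KRB5_KTNAME=/etc/ldap/ldap.keytab
        sudo -u openldap klist -k /etc/ldap/ldap.keytab

        # 4. Which host name is the GSSAPI acceptor principal built from? With
        #    olcSaslHost set to kdc.example.com, slapd will look for a key for
        #    ldap/kdc.example.com, which is not in this keytab.
        sudo ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config olcSaslHost olcSaslRealm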

    Read the article

  • How to save POST&GET headers of a web page with "Wireshark"?

    - by brilliant
    Hello everybody, I've been trying to find Python code that would log in to my mailbox on yahoo.com from Google App Engine. I was given this code:
        import urllib, urllib2, cookielib
        url = "https://login.yahoo.com/config/login?"
        form_data = {'login' : 'my-login-here', 'passwd' : 'my-password-here'}
        jar = cookielib.CookieJar()
        opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))
        form_data = urllib.urlencode(form_data)
        # data returned from this page contains redirection
        resp = opener.open(url, form_data)
        # yahoo redirects to http://my.yahoo.com, so let's go there instead
        resp = opener.open('http://mail.yahoo.com')
        print resp.read()
    The author of this script looked into the HTML of Yahoo's log-in form and came up with this script. That log-in form contains two fields, one for the user's Yahoo! ID and another for the user's password. However, when I tried this code out (substituting my real Yahoo login for 'my-login-here' and my real password for 'my-password-here'), it just returned the log-in form back to me, which means that something didn't work right.
    Another supporter suggested that I should send an MD5 hash of my password rather than the plain password. He also noted that the log-in form contains a lot of other hidden fields besides the login and password fields (he called them "CSRF protections") that I would also have to deal with:
        <input type="hidden" name=".tries" value="1">
        <input type="hidden" name=".src" value="ym">
        <input type="hidden" name=".md5" value="">
        <input type="hidden" name=".hash" value="">
        <input type="hidden" name=".js" value="">
        <input type="hidden" name=".last" value="">
        <input type="hidden" name="promo" value="">
        <input type="hidden" name=".intl" value="us">
        <input type="hidden" name=".bypass" value="">
        <input type="hidden" name=".partner" value="">
        <input type="hidden" name=".u" value="bd5tdpd5rf2pg">
        <input type="hidden" name=".v" value="0">
        <input type="hidden" name=".challenge" value="5qUiIPGVFzRZ2BHhvtdGXoehfiOj">
        <input type="hidden" name=".yplus" value="">
        <input type="hidden" name=".emailCode" value="">
        <input type="hidden" name="pkg" value="">
        <input type="hidden" name="stepid" value="">
        <input type="hidden" name=".ev" value="">
        <input type="hidden" name="hasMsgr" value="0">
        <input type="hidden" name=".chkP" value="Y">
        <input type="hidden" name=".done" value="http://mail.yahoo.com">
    He said that I should do the following:
    1. Simulate a normal login and save the login page that I get;
    2. Save the POST&GET headers with Wireshark;
    3. Compare the login page with those headers and see what fields I need to include with my request.
    I really don't know how to carry out the first two of these three steps. I have just downloaded Wireshark and have tried capturing some packets there. However, I don't know how to "simulate normal login and save the login page". Also, I don't know how to save POST&GET headers with Wireshark. Can anyone, please, guide me through these two steps in Wireshark? Or at least tell me what I should start with. Thank You.
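    Before reaching for Wireshark, the first two steps can also be approximated in the same urllib2/cookielib style as the script above: fetch the login form once, copy every hidden field into the POST data, then submit. This is only a sketch; the regex assumes Yahoo's markup looks exactly like the snippet above, and it may still fail if the password has to be hashed against the challenge, as the second supporter suggested:

        import re
        import urllib
        import urllib2
        import cookielib

        jar = cookielib.CookieJar()
        opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))

        login_url = "https://login.yahoo.com/config/login?"
        page = opener.open(login_url).read()

        # Collect the hidden fields (.tries, .src, .u, .challenge, ...) from the form
        form_data = dict(re.findall(
            r'<input type="hidden" name="([^"]+)" value="([^"]*)">', page))
        form_data['login'] = 'my-login-here'
        form_data['passwd'] = 'my-password-here'

        resp = opener.open(login_url, urllib.urlencode(form_data))
        resp = opener.open('http://mail.yahoo.com')
        print resp.read()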

    Read the article

  • Performance issues when using SSD for a developer notebook (WAMP/LAMP stack)?

    - by András Szepesházi
    I'm a web application developer using my notebook as a standalone development environment (WAMP stack). I just switched from a Core2 Duo Vista 32-bit notebook with 2GB RAM and a SATA HDD to an i5-2520M Win7 64-bit with 4GB RAM and a 128GB SSD (Corsair P3 128). My initial experience was what I expected: fast boot, quick load of all the applications (Eclipse now takes 5 seconds as opposed to 30s on my old notebook), overall a great experience.
    Then I started to build up my development stack, both as LAMP (using VirtualBox with a Debian guest) and WAMP (Windows-native Apache + MySQL + PHP), because I wanted to compare the two. This all still worked out great; then I started to pull my projects into these stacks. And here came the nasty surprise: one of those projects produced much worse response times than on my old notebook (true for both the VirtualBox and WAMP stacks). Apache, PHP and MySQL configurations were practically identical in all environments. I started to do a lot of benchmarking and profiling, and here is what I've found:
    1. All general benchmarks (Performance Test 7.0, HDTune Pro, wPrime2 and some more) gave a big advantage to the new notebook. Nothing surprising here.
    2. Disk-specific tests showed that read/write operations peaked around 380M/160M for the SSD, and all the different-sized block operations also performed very well.
    3. Apache benchmarking with Apache Benchmark for a small static HTML file (10 concurrent threads, 500 iterations):
    Old notebook: min 47ms, median 111ms, max 156ms
    New WAMP stack: min 71ms, median 135ms, max 296ms
    New LAMP stack (in VirtualBox): min 6ms, median 46ms, max 175ms
    Right here I don't get why the native WAMP stack performed so badly, but at least the LAMP environment brought the expected speed.
    4. Apache measurement for non-cached PHP content. The PHP runs a loop of 1000 and generates sha1(uniqid()) inside. Again, 10 concurrent threads and 500 iterations were used:
    Old notebook: min 0ms, median 39ms, max 218ms
    New WAMP stack: min 20ms, median 61ms, max 186ms
    New LAMP stack (in VirtualBox): min 124ms, median 704ms, max 2463ms
    What the hell? The new LAMP stack performed miserably, and even the new native WAMP was outperformed by the old notebook.
    5. PHP + MySQL test. The test consists of connecting to a database and reading a single record from a table using INNER JOIN on 3 more (indexed) tables, repeated 100 times within a loop. Databases were identical. 10 concurrent threads, 100 iterations:
    Old notebook: min 1201ms, median 1734ms, max 3728ms
    New WAMP stack: min 367ms, median 675ms, max 1893ms
    New LAMP stack (in VirtualBox): min 1410ms, median 3659ms, max 5045ms
    And the same test with concurrency set to 1 (instead of 10):
    Old notebook: min 1201ms, median 1261ms, max 1357ms
    New WAMP stack: min 399ms, median 483ms, max 539ms
    New LAMP stack (in VirtualBox): min 285ms, median 348ms, max 444ms
    Strictly for my purposes, as I'm using a self-contained development environment (= low concurrency), I could be satisfied with this last result, though I have no idea why the VirtualBox environment performed so badly with higher concurrency.
    6. Finally, I performed a test of including many PHP files. The application I mentioned at the beginning, the one that was performing so badly, has a heavy bootstrap and loads hundreds of small library and configuration files while initializing. So this test does nothing else, just includes about 100 files. Concurrency set to 1, 100 iterations:
    Old notebook: min 140ms, median 168ms, max 406ms
    New WAMP stack: min 434ms, median 488ms, max 604ms
    New LAMP stack (in VirtualBox): min 413ms, median 1040ms, max 1921ms
    Even if I consider that VirtualBox reached those files via shared folders, which slows things down a bit, I still don't see how the old notebook could outperform both new configurations so heavily. And I think this is the real root of the slow performance, as the application uses even more includes, and the whole bootstrap occurs several times within a page request (once for each Ajax call, for example).
    To sum it up, here I am with a brand new high-performance notebook that takes 20 seconds to load the same page my old notebook loads in 5-7 seconds. Needless to say, I'm not a very happy person right now. Why do you think I'm seeing these poor performance values? What are my options to remedy this situation?
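    Since the include-heavy bootstrap is the prime suspect, a small repeatable script makes it easier to compare the three stacks (and any opcode-cache or realpath-cache changes) with the same ab invocation as the other tests. A sketch, with the library path as a placeholder:

        <?php
        // include_bench.php - time how long ~100 small includes take on this stack.
        // Benchmark with, e.g.:  ab -n 100 -c 1 http://localhost/include_bench.php
        $libDir = dirname(__FILE__) . '/lib';       // placeholder path
        $files  = glob($libDir . '/*.php');

        $start = microtime(true);
        foreach ($files as $file) {
            include_once $file;
        }
        $elapsed = (microtime(true) - $start) * 1000;
        printf("included %d files in %.1f ms\n", count($files), $elapsed);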

    Read the article

  • Win 7 Netbook refuses to ping JetDirect card (all other PCs work)

    - by Luke Puplett
    I have an odd thing occuring here. From a Windows 7 netbook, I cannot ping an HP printer on the network, while all other machines (Win7/Vista) can. And the netbook can also ping everything else on the LAN. Example showing that the netbook can ping 192.168.3.4 but not 3.6. C:\Users\backdoor>ping w7ue1m Pinging w7ue1m.corp.biz.co.uk [192.168.3.4] with 32 bytes of data: Reply from 192.168.3.4: bytes=32 time=7ms TTL=128 Reply from 192.168.3.4: bytes=32 time=4ms TTL=128 Reply from 192.168.3.4: bytes=32 time=2ms TTL=128 Reply from 192.168.3.4: bytes=32 time=2ms TTL=128 Ping statistics for 192.168.3.4: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 2ms, Maximum = 7ms, Average = 3ms C:\Users\backdoor>ping uktnprint1 Pinging uktnprint1.corp.biz.co.uk [192.168.3.6] with 32 bytes of data: Reply from 192.168.3.0: Destination host unreachable. Reply from 192.168.3.0: Destination host unreachable. Reply from 192.168.3.0: Destination host unreachable. Reply from 192.168.3.0: Destination host unreachable. Ping statistics for 192.168.3.6: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),`enter code here` The IPCONFIG result for the netbook is fine. IPv4 Address. . . . . . . . . . . : 192.168.3.0 Subnet Mask . . . . . . . . . . . : 255.255.0.0 Default Gateway . . . . . . . . . : 192.168.1.1 Most unusual network thing I've seen in years. I must reiterate that only this netbook is having trouble pinging/printing. Thanks, Luke ** UPDATE ** Am now on a Vista box, and here's the IPCONFIG: IPv4 Address. . . . . . . . . . . : 192.168.3.3 Subnet Mask . . . . . . . . . . . : 255.255.0.0 Default Gateway . . . . . . . . . : 192.168.1.1 Pinging uktnprint1.corp.biz.co.uk [192.168.3.6] with 32 bytes of data: Reply from 192.168.3.6: bytes=32 time=2ms TTL=60 Firewall is off. I'll look into the chance of an IP conflict because it's the only thing I can think of - compare arp caches of each machine. Cheers!
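    Given that the "Destination host unreachable" replies come from 192.168.3.0 (the netbook's own address), comparing the ARP view of the printer from the netbook and from a working machine is a cheap next step; the addresses are the ones from the question:

        rem On the netbook: does it learn a MAC address for the printer at all?
        ping -n 1 192.168.3.6
        arp -a 192.168.3.6

        rem On a working machine (e.g. the Vista box at 192.168.3.3): note the
        rem printer's MAC and compare it with what the netbook sees.
        arp -a 192.168.3.6

        rem Also on a working machine: is anything else answering for the
        rem netbook's unusual .0 address?
        arp -a | findstr "192.168.3.0"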

    Read the article

  • phpmyadmin login redirect fails with custom ssl port

    - by baraboom
    The server is running Ubuntu 10.10, Apache 2.2.16, PHP 5.3.3-1ubuntu9.3, phpMyAdmin 3.3.7deb5build0.10.10.1. Since this same server is also running Zimbra on port 443, I've configured apache to serve SSL on port 81. So far, I have one CMS script running on this virtual host successfully. However, when I access /phpmyadmin (set up with the default alias) on my custom ssl port and submit the login form, I am redirected to http://vhost.domain.com:81/index.php?TOKEN=foo (note the http:// instead of the https:// that the login url was using). This generates an Error 400 Bad Request complaining about "speaking plain HTTP to an SSL-enabled server port." I can then manually change the http:// to https:// in the URL and use phpmyadmin as expected. I was annoyed enough to spend an hour trying to fix it and now even more annoyed that I cannot figure it out. I've tried various things, including: Adding $cfg['PmaAbsoluteUri'] = 'https://vhost.domain.com:81/phpmyadmin/'; to the /usr/share/phpmyadmin/config.inc.php file but this did not correct the problem (even though /usr/share/phpmyadmin/libraries/auth/cookie.auth.lib.php looks like it should honor it and use it as the redirect). Adding $cfg['ForceSSL'] = 1; to the same config.inc.php but then apache spirals into an infinite redirect. Adding a rewrite rule to the vhost-ssl conf file in apache but I was unable to figure out the condition to use when http:// was present along with the correct ssl port of :81. Lots of googling. Here are the relevant Apache configuration pieces: /etc/apache2/ports.conf <IfModule mod_ssl.c> NameVirtualHost *:81 Listen 81 </IfModule> /etc/apache2/sites-enabled/vhost-nonssl <VirtualHost *:80> ServerAdmin webmaster@localhost ServerName vhost.domain.com DocumentRoot /home/xxx/sites/vhost/html RewriteEngine On RewriteCond %{HTTPS} off RewriteRule (.*) https://%{HTTP_HOST}:81%{REQUEST_URI} </Virtualhost> /etc/apache2/sites-enabled/vhost-ssl <VirtualHost *:81> ServerAdmin webmaster@localhost ServerName vhost.domain.com DocumentRoot /home/xxx/sites/vhost/html <Directory /> Options FollowSymLinks AllowOverride None AuthType Basic AuthName "Restricted Vhost" AuthUserFile /home/xxx/sites/vhost/.users Require valid-user </Directory> <Directory /home/xxx/sites/vhost/html/> Options -Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> </VirtualHost> /etc/apache2/conf.d/phpmyadmin.conf Alias /phpmyadmin /usr/share/phpmyadmin (The rest of the default .conf truncated.) Everything in the apache config seems to work ok - the rewrite from non-ssl to ssl, the http authentication, the problem only happens when I am submitting the login form for phpmyadmin from https://vhost.domain.com:81/index.php. Other configs: The phpmyadmin config is completely default and the php.ini has only had some minor changes to memory and timeout limits. These seem to work fine, as mentioned, another php script runs with no problem and phpmyadmin works great once I manually enter in the correct schema after login. I'm looking for either a bandaid I can add to save me the trouble of manually entering in the https:// after login, a real fix that will make phpmyadmin behave as I think it should or some greater understanding of why my desired config is not possible.
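    For reference, the direction I would check first (a sketch, not a verified fix): phpMyAdmin 3.3's cookie auth builds the post-login redirect from $cfg['PmaAbsoluteUri'] when it is set, but on Debian/Ubuntu the packaged configuration is normally read from /etc/phpmyadmin/config.inc.php, so the setting may need to live there rather than under /usr/share/phpmyadmin:

        <?php
        // /etc/phpmyadmin/config.inc.php  (Debian's packaged config location -
        // worth confirming which file is actually loaded before relying on this)
        $cfg['PmaAbsoluteUri'] = 'https://vhost.domain.com:81/phpmyadmin/';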

    Read the article

  • Long connection times from PHP to MySQL on EC2

    - by Erik Giberti
    I'm having an intermittent issue connecting to a database slave with InnoDB. Intermittently I get connections taking longer than 2 seconds. These servers are hosted on Amazon's EC2. The app server is PHP 5.2/Apache running on Ubuntu. The DB slave is running Percona's XtraDB 5.1 on Ubuntu 9.10. It's using an EBS Raid array for the data storage. We already use skip name resolve and bind to address 0.0.0.0. This is a stub of the PHP code that's failing $tmp = mysqli_init(); $start_time = microtime(true); $tmp-options(MYSQLI_OPT_CONNECT_TIMEOUT, 2); $tmp-real_connect($DB_SERVERS[$server]['server'], $DB_SERVERS[$server]['username'], $DB_SERVERS[$server]['password'], $DB_SERVERS[$server]['schema'], $DB_SERVERS[$server]['port']); if(mysqli_connect_errno()){ $timer = microtime(true) - $start_time; mail($errors_to,'DB connection error',$timer); } There's more than 300Mb available on the DB server for new connections and the server is nowhere near the max allowed (60 of 1,200). Loading on both servers is < 2 on 4 core m1.xlarge instances. Some highlights from the mysql config max_connections = 1200 thread_stack = 512K thread_cache_size = 1024 thread_concurrency = 16 innodb-file-per-table innodb_additional_mem_pool_size = 16M innodb_buffer_pool_size = 13G Any help on tracing the source of the slowdown is appreciated. [EDIT] I have been updating the sysctl values for the network but they don't seem to be fixing the problem. I made the following adjustments on both the database and application servers. net.ipv4.tcp_window_scaling = 1 net.ipv4.tcp_sack = 0 net.ipv4.tcp_timestamps = 0 net.ipv4.tcp_fin_timeout = 20 net.ipv4.tcp_keepalive_time = 180 net.ipv4.tcp_max_syn_backlog = 1280 net.ipv4.tcp_synack_retries = 1 net.core.rmem_max = 16777216 net.core.wmem_max = 16777216 net.ipv4.tcp_rmem = 4096 87380 16777216 net.ipv4.tcp_wmem = 4096 87380 16777216 [EDIT] Per jaimieb's suggestion, I added some tracing and captured the following data using time. This server handles about 51 queries/second at this the time of day. The connection error was raised once (at 13:06:36) during the 3 minute window outlined below. Since there was 1 failure and roughly 9,200 successful connections, I think this isn't going to produce anything meaningful in terms of reporting. Script: date /root/database_server.txt (time mysql -h database_Server -D schema_name -u appuser -p apppassword -e '') /dev/null 2 /root/database_server.txt Results: === Application Server 1 === Mon Feb 22 13:05:01 EST 2010 real 0m0.008s user 0m0.001s sys 0m0.000s Mon Feb 22 13:06:01 EST 2010 real 0m0.007s user 0m0.002s sys 0m0.000s Mon Feb 22 13:07:01 EST 2010 real 0m0.008s user 0m0.000s sys 0m0.001s === Application Server 2 === Mon Feb 22 13:05:01 EST 2010 real 0m0.009s user 0m0.000s sys 0m0.002s Mon Feb 22 13:06:01 EST 2010 real 0m0.009s user 0m0.001s sys 0m0.003s Mon Feb 22 13:07:01 EST 2010 real 0m0.008s user 0m0.000s sys 0m0.001s === Database Server === Mon Feb 22 13:05:01 EST 2010 real 0m0.016s user 0m0.000s sys 0m0.010s Mon Feb 22 13:06:01 EST 2010 real 0m0.006s user 0m0.010s sys 0m0.000s Mon Feb 22 13:07:01 EST 2010 real 0m0.016s user 0m0.000s sys 0m0.010s [EDIT] Per a suggestion received on a LinkedIn question, I tried setting the back_log value higher. We had been running the default value (50) and increased it to 150. We also raised the kernel value /proc/sys/net/core/somaxconn (maximum socket connections) to 256 on both the application and database server from the default 128. 
We did see some elevation in processor utilization as a result but still received connection timeouts.
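    A useful first step in tracing this is to separate raw TCP connect time from the MySQL handshake itself. The Python sketch below is illustrative only and not from the original question: it times a bare TCP connect to the slave's MySQL port at a fixed interval and logs anything over the 2-second threshold. The host, port, interval and threshold are placeholder assumptions; if the bare connect is occasionally slow, the delay is at the network/EC2 layer rather than inside MySQL.

        #!/usr/bin/env python
        # Minimal sketch: time raw TCP connects to the MySQL port to separate
        # network-level delay from server-side (authentication/thread) delay.
        # Host, port, interval and threshold are placeholders - adjust for your setup.
        import socket
        import time
        from datetime import datetime

        DB_HOST = "192.0.2.10"   # placeholder slave address
        DB_PORT = 3306
        THRESHOLD = 2.0          # seconds, matching MYSQLI_OPT_CONNECT_TIMEOUT above
        INTERVAL = 1.0

        while True:
            start = time.time()
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(5)
            try:
                s.connect((DB_HOST, DB_PORT))
            except socket.error as exc:
                print("%s connect failed: %s" % (datetime.now(), exc))
            else:
                elapsed = time.time() - start
                if elapsed > THRESHOLD:
                    print("%s slow TCP connect: %.3fs" % (datetime.now(), elapsed))
            finally:
                s.close()
            time.sleep(INTERVAL)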

    Read the article

  • Rsyslog is not working properly, it does not log anything

    - by Victor Henriquez
    I'm running a Debian server and a couple of days ago my rsyslog started to behave very weird, the daemon is running but it doesn't seem to do anything. Many people use the system but I'm the only one with (legal) root access. I'm using the default rsyslogd configuration (if you think is relevant I'll attach it, but it's the one that comes with the package). After I rotated all the log files, they have remained empty: # ls -l /var/log/*.log -rw-r--r-- 1 root root 0 Jun 27 00:25 /var/log/alternatives.log -rw-r----- 1 root adm 0 Jun 26 13:03 /var/log/auth.log -rw-r----- 1 root adm 0 Jun 26 13:03 /var/log/daemon.log -rw-r--r-- 1 root root 0 Jun 27 00:25 /var/log/dpkg.log -rw-r----- 1 root adm 0 Jun 26 13:03 /var/log/kern.log -rw-r----- 1 root adm 0 Jun 26 13:03 /var/log/lpr.log -rw-r----- 1 root adm 0 Jun 26 13:03 /var/log/mail.log -rw-r----- 1 root adm 0 Jun 26 13:03 /var/log/user.log Any try to force a log writing does not have any effect: # logger hey # ls -l /var/log/messages -rw-r----- 1 root adm 0 Jun 26 13:03 /var/log/messages Lsof shows that rsyslogd does not have any log files opened: # lsof -p 1855 COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME rsyslogd 1855 root cwd DIR 202,0 4096 2 / rsyslogd 1855 root rtd DIR 202,0 4096 2 / rsyslogd 1855 root txt REG 202,0 342076 21649 /usr/sbin/rsyslogd rsyslogd 1855 root mem REG 202,0 38556 32153 /lib/i386-linux-gnu/i686/cmov/libnss_nis-2.13.so rsyslogd 1855 root mem REG 202,0 79728 32165 /lib/i386-linux-gnu/i686/cmov/libnsl-2.13.so rsyslogd 1855 root mem REG 202,0 26456 32163 /lib/i386-linux-gnu/i686/cmov/libnss_compat-2.13.so rsyslogd 1855 root mem REG 202,0 297500 1061058 /usr/lib/rsyslog/imuxsock.so rsyslogd 1855 root mem REG 202,0 42628 32170 /lib/i386-linux-gnu/i686/cmov/libnss_files-2.13.so rsyslogd 1855 root mem REG 202,0 22784 1061106 /usr/lib/rsyslog/imklog.so rsyslogd 1855 root mem REG 202,0 1401000 32169 /lib/i386-linux-gnu/i686/cmov/libc-2.13.so rsyslogd 1855 root mem REG 202,0 30684 32175 /lib/i386-linux-gnu/i686/cmov/librt-2.13.so rsyslogd 1855 root mem REG 202,0 9844 32157 /lib/i386-linux-gnu/i686/cmov/libdl-2.13.so rsyslogd 1855 root mem REG 202,0 117009 32154 /lib/i386-linux-gnu/i686/cmov/libpthread-2.13.so rsyslogd 1855 root mem REG 202,0 79980 17746 /usr/lib/libz.so.1.2.3.4 rsyslogd 1855 root mem REG 202,0 18836 1061094 /usr/lib/rsyslog/lmnet.so rsyslogd 1855 root mem REG 202,0 117960 31845 /lib/i386-linux-gnu/ld-2.13.so rsyslogd 1855 root 0u unix 0xebe8e800 0t0 640 /dev/log rsyslogd 1855 root 3u FIFO 0,5 0t0 2474 /dev/xconsole rsyslogd 1855 root 4u unix 0xebe8e400 0t0 645 /var/spool/postfix/dev/log rsyslogd 1855 root 5r REG 0,3 0 4026532176 /proc/kmsg I was so frustrated that even reinstall the rsyslog package, but it still refuses to log anything: # apt-get remove --purge rsyslog # apt-get install rsyslog I thought someone had hacked the system, so run rkhunter, chkrootkit, unhide in an attempt to find hide processes / ports and nmap in a remote host to compare with the ports shown by netstat. And I know this doesn't mean anything, but all looks ok. The system also have an iptables firewall that is very restrictive with incoming / outgoing connections. This is driving me crazy, any idea what is going on here? [EDIT - disk space info] # df -h Filesystem Size Used Avail Use% Mounted on rootfs 24G 22G 629M 98% / /dev/root 24G 22G 629M 98% / devtmpfs 10M 112K 9.9M 2% /dev tmpfs 76M 48K 76M 1% /run tmpfs 5.0M 0 5.0M 0% /run/lock tmpfs 151M 40K 151M 1% /tmp tmpfs 151M 0 151M 0% /run/shm
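    Since logger writes through the same /dev/log socket that rsyslogd holds open, one quick check is to emit a test message via the standard syslog interface and see whether any of the usual files grow. The Python sketch below is only a diagnostic aid and not part of the original post; the file paths are the Debian defaults and may need adjusting.

        #!/usr/bin/env python
        # Diagnostic sketch: push a test message into syslog and report whether
        # the usual Debian log files changed size. Paths are assumptions.
        import os
        import syslog
        import time

        LOGS = ["/var/log/syslog", "/var/log/messages", "/var/log/user.log"]

        def sizes():
            return {p: (os.path.getsize(p) if os.path.exists(p) else None) for p in LOGS}

        before = sizes()
        syslog.openlog("rsyslog-test")
        syslog.syslog(syslog.LOG_INFO, "rsyslog test message %d" % os.getpid())
        syslog.closelog()
        time.sleep(2)  # give rsyslogd a moment to flush
        after = sizes()

        for path in LOGS:
            grew = (before[path] is not None and after[path] is not None
                    and after[path] > before[path])
            print("%-22s before=%s after=%s grew=%s" % (path, before[path], after[path], grew))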

    Read the article

  • Thoughts on home NAS server

    - by user826955
    I currently have a NAS with a 2x2TB HDD 1x16GB SSD layout on a mini-itx atom board. The NAS is in a Lian Li PC-Q07 case. On this system I was running freebsd 8 with a gmirror raid 1 setup, which was enough for my needs. So far I was using the NAS for: Fileserver with AFP protocol (only mac clients used) SVN server hosting all my source trees of my projects JIRA (performance was okay-ish) Timemachine backup for the macs The power consumption was about 38W, although I did not put HDDs asleep when unused (I think this is not possible in a raid setup). I liked the NAS because: the performance was good through gigabit LAN (enough for my needs) power consumption was good its a pretty small case and fits in one of my cupboards I disliked the NAS a bit because: it was a bit noisy, the Q07 case vibrated a good amount because of the HDDs. I switched the NAS off every evening I do not have a real backup of the data on the NAS, only the internal raid 1 as safety. I really dont want to loose my source trees under no circumstances, so I would really be sleeping better if I knew I had regular backups somewhere. Recently, the board seemed to have died, I can't boot anymore. Thus, I was thinking about a redesign of my NAS (I still have to find out what parts are broken, I probably need to replace the mainboard and SSD. HDDs seem to be okay). First of all, I was wondering what other users have as backup for their NAS? Are you actually using a second NAS, and regularly copying over the data to have it safe? Or is there any better solution to this? I was thinking about getting a cheap NAS like the synology DS112j with only one disk, and use rsync or something similar to regularly copy data over to the second NAS (wake the second NAS upon start, shut it down after copy). Although this approach seems somewhat weird, It would have the benefit (?) that I could use a single disk instead of raid in the main NAS, and put the disk asleep when idle, and have the NAS running 24/7 with low energy consumption (I found no way to do this with a gmirror setup). Is there any recommended backup solution for a small NAS? Then I was thinking about a different raid setup. Since I have to buy a new mainboard as well as SSD, I might as well switch over to a i3 board with more ram, and also switch to ZFS. I am not familar with ZFS, I've never used it, but I read and hear much about it. Would it be viable to set up a ZFS storage with only 2 disks? Can I easily extend this storage with more disks, once I choose to add some? I could maybe get a new case like the Fractal Design Array R2 which has more 3,5" slots. I could as well get another 2 disks, but I would prefer sticking with the existing 2 for enegery/heat/noise reasons. Should I go for a ZFS storage or stick to my gmirror setup? I would also like to keep freebsd as operating system, and also I dont need any web gui or something (that is, I dont need/want to use FreeNAS or Openfiler etc). Does anyone maybe have a sample setup in use so I can compare energy consumption/noise/software setup? Any guidance towards the NAS of my dreams (silent, low energy, safe w/ backups) much appreciated.
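    For the second-NAS backup idea mentioned above, the wake/copy/shutdown cycle can be automated. The Python sketch below is a rough illustration rather than a tested recipe: it assumes the backup NAS supports Wake-on-LAN and SSH, and the MAC address, IP, paths and remote shutdown command are all placeholders.

        #!/usr/bin/env python
        # Rough sketch of a wake -> rsync -> shutdown backup cycle for a second NAS.
        # MAC, IP, paths and the remote shutdown command are placeholders.
        import socket
        import subprocess
        import time

        BACKUP_MAC = "00:11:22:33:44:55"
        BACKUP_IP = "192.168.1.20"
        SRC = "/tank/data/"
        DST = "backup@%s:/volume1/backup/" % BACKUP_IP

        def wake(mac):
            # Standard Wake-on-LAN magic packet: 6x 0xFF followed by the MAC 16 times.
            payload = bytes.fromhex("FF" * 6 + mac.replace(":", "") * 16)
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(payload, ("255.255.255.255", 9))
            s.close()

        def wait_for_ssh(ip, timeout=300):
            deadline = time.time() + timeout
            while time.time() < deadline:
                try:
                    socket.create_connection((ip, 22), timeout=5).close()
                    return True
                except OSError:
                    time.sleep(10)
            return False

        wake(BACKUP_MAC)
        if wait_for_ssh(BACKUP_IP):
            subprocess.call(["rsync", "-a", "--delete", SRC, DST])
            subprocess.call(["ssh", "backup@" + BACKUP_IP, "sudo poweroff"])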

    Read the article

  • Rosewill RSV-S5 and its transfer speeds

    - by DoomStone
    I have just bought a Rosewill RSV-S5, I have installed 5x1,5Tb Western Digital Green disks in it. After that have I created a Raid5 on them all with the software that followed with the hardware. Not the raid it self works fine, but it is SLOW, I can only obtain a maximum of 25 MB/s, and if SABnzbd+ is downloading with 5 MB/s is it having a hard time streaming a normal DIVX (700 mb) movie. Is this normal or is there something wrong? Edit: should be able to handle 3 Gbps = 384 megabytes / second Edit 2: As you can see am I only downloading with 3,76 MB/s and I'm trying to watch V s02e08 (720p), but it is completely unwatchable, as I can see 30 sec, and the it buffers for 20 sec. Edit: Other information there might be required I'm running Windows Server 2008 R2, optimized for program performance. Windows is installed on a 60GB SSD. I have a 50 Mb/s internet connection and a 1 Gb/s LAN, all connected with Cat6 Ethernet cables. The MCE is using a Gigabyte EP35C-DS3R motherboard with 2 GB DDR2 ram. Edit 3: I have used chunk sizes for 128 KB Edit 4: I found this on newegg Pros: Enclosure for 5x2TB hard drive is fine. This is basically a rebranded San Digital TR5M-B product. For support Rosewill tells you to contact San Digital. No direct support from Silicon Image for the computer raid card. Cons: Includes computer Silicon Image 3132 raid card, extremely slow raid 5 write (our tests ~10MB/s). Compare to regular internal local drive write 30-60MB/s. We basically dumped the Sil3132 card and replaced with High Point RocketRaid 622 card for extra $69.99. Note for RR622, turn off ECRC (end to end CRC check) for card to work on IBM xserver. What took 12hrs to copy now took 2-3hrs. San Digital realized the problem and has the newer model TR5M-BP TowerRaid Plus that comes with High Point RocketRaid 622 card. Rosewill should discontinue this product and go with TR5M-BP. Could not get Silicon Image raid management software to work with complicated 2008R2 server with 10 NICs, application doesn't know how to talk to localhost port with all those NICs. No updates from Silicon Image and support from San Digital ignored. Gave up on Sil3132 card. Save yourself from a lot of headaches, get the RR622 card too if you are going to buy this product. Other Thoughts: The newer model is TR5M-BP TowerRaid Plus, comes with High Point RocketRaid 622 raid card for the PC instead of Silicon Image Sil3132. According to San Digital, raid 5 performance for Sil3132 read 80MB/s write 19MB/s, and RR622 read 154MB/s write 149MB/s. Our RR622 tests gave (8TB raid 5) write ~80-110MB/s copying 40GB file took 8mins. So I have now ordered a HighPoint RocketRAID 622 2P ext SATA III and hopes that it will solve my problems.
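    Before swapping hardware it helps to measure what the array actually sustains for large sequential transfers, since streaming a 720p file only needs a few MB/s. The Python sketch below is a rough, illustrative benchmark and not from the original post; the test path and file size are placeholders, and the read pass may be flattered by the OS cache unless the file is larger than RAM.

        #!/usr/bin/env python
        # Quick sequential write/read throughput test on the RAID volume.
        # TEST_FILE and SIZE_MB are placeholders - point TEST_FILE at the array.
        # Note: the read pass may be served partly from the OS cache.
        import os
        import time

        TEST_FILE = r"E:\throughput_test.bin"   # a path on the RAID-5 volume
        SIZE_MB = 1024
        CHUNK = b"\0" * (1024 * 1024)

        start = time.time()
        with open(TEST_FILE, "wb") as f:
            for _ in range(SIZE_MB):
                f.write(CHUNK)
            f.flush()
            os.fsync(f.fileno())
        print("write: %.1f MB/s" % (SIZE_MB / (time.time() - start)))

        start = time.time()
        with open(TEST_FILE, "rb") as f:
            while f.read(1024 * 1024):
                pass
        print("read:  %.1f MB/s" % (SIZE_MB / (time.time() - start)))

        os.remove(TEST_FILE)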

    Read the article

  • Apache SSL reverse proxy to an embedded Tomcat

    - by ggarcia24
    I'm trying to put in place a reverse proxy for an application that is running a tomcat embed server over SSL. The application needs to run over SSL on the port 9002 so I have no way of "disabling SSL" for this app. The current setup schema looks like this: [192.168.0.10:443 - Apache with mod_proxy] --> [192.168.0.10:9002 - Tomcat App] After googling on how to make such a setup (and testing) I came across this: https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/861137 Which lead to make my current configuration (to try to emulate the --secure-protocol=sslv3 option of wget) /etc/apache2/sites/enabled/default-ssl: <VirtualHost _default_:443> SSLEngine On SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key SSLProxyEngine On SSLProxyProtocol SSLv3 SSLProxyCipherSuite SSLv3 ProxyPass /test/ https://192.168.0.10:9002/ ProxyPassReverse /test/ https://192.168.0.10:9002/ LogLevel debug ErrorLog /var/log/apache2/error-ssl.log CustomLog /var/log/apache2/access-ssl.log combined </VirtualHost> The thing is that the error log is showing error:14077102:SSL routines:SSL23_GET_SERVER_HELLO:unsupported protocol Complete request log: [Wed Mar 13 20:05:57 2013] [debug] mod_proxy.c(1020): Running scheme https handler (attempt 0) [Wed Mar 13 20:05:57 2013] [debug] mod_proxy_http.c(1973): proxy: HTTP: serving URL https://192.168.0.10:9002/ [Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2011): proxy: HTTPS: has acquired connection for (192.168.0.10) [Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2067): proxy: connecting https://192.168.0.10:9002/ to 192.168.0.10:9002 [Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2193): proxy: connected / to 192.168.0.10:9002 [Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2444): proxy: HTTPS: fam 2 socket created to connect to 192.168.0.10 [Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2576): proxy: HTTPS: connection complete to 192.168.0.10:9002 (192.168.0.10) [Wed Mar 13 20:05:57 2013] [info] [client 192.168.0.10] Connection to child 0 established (server demo1agrubu01.demo.lab:443) [Wed Mar 13 20:05:57 2013] [info] Seeding PRNG with 656 bytes of entropy [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_kernel.c(1866): OpenSSL: Handshake: start [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: before/connect initialization [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: unknown state [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_io.c(1897): OpenSSL: read 7/7 bytes from BIO#7f122800a100 [mem: 7f1230018f60] (BIO dump follows) [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_io.c(1830): +-------------------------------------------------------------------------+ [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_io.c(1869): | 0000: 15 03 01 00 02 02 50 ......P | [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_io.c(1875): +-------------------------------------------------------------------------+ [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_kernel.c(1903): OpenSSL: Exit: error in unknown state [Wed Mar 13 20:05:57 2013] [info] [client 192.168.0.10] SSL Proxy connect failed [Wed Mar 13 20:05:57 2013] [info] SSL Library Error: 336032002 error:14077102:SSL routines:SSL23_GET_SERVER_HELLO:unsupported protocol [Wed Mar 13 20:05:57 2013] [info] [client 192.168.0.10] Connection closed to child 0 with abortive shutdown (server example1.domain.tld:443) [Wed Mar 13 20:05:57 2013] [error] (502)Unknown error 502: proxy: pass request body failed to 172.31.4.13:9002 
(192.168.0.10) [Wed Mar 13 20:05:57 2013] [error] [client 192.168.0.10] proxy: Error during SSL Handshake with remote server returned by /dsfe/ [Wed Mar 13 20:05:57 2013] [error] proxy: pass request body failed to 192.168.0.10:9002 (172.31.4.13) from 172.31.4.13 () [Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2029): proxy: HTTPS: has released connection for (172.31.4.13) [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_kernel.c(1884): OpenSSL: Write: SSL negotiation finished successfully [Wed Mar 13 20:05:57 2013] [info] [client 192.168.0.10] Connection closed to child 6 with standard shutdown (server example1.domain.tld:443)

If I do a wget --secure-protocol=sslv3 --no-check-certificate https://192.168.0.10:9002/ it works perfectly, but from Apache it does not work. I'm on an Ubuntu Server with the latest updates running apache2 with mod_proxy and mod_ssl enabled:

        ~$ cat /etc/lsb-release
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=12.04
        DISTRIB_CODENAME=precise
        DISTRIB_DESCRIPTION="Ubuntu 12.04.2 LTS"
        ~# dpkg -s apache2
        ... Version: 2.2.22-1ubuntu1.2 ...
        ~# dpkg -s openssl
        ... Version: 1.0.1-4ubuntu5.7 ...

I hope someone can help.
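    One way to confirm from the Apache host that the backend really only negotiates SSLv3 (the same thing the wget test shows) is to probe it with each protocol version in turn. The Python sketch below is illustrative only; it assumes a Python/OpenSSL build that still exposes the legacy SSLv3 constant, which many modern distributions disable, and it simply skips any constant that is missing.

        #!/usr/bin/env python
        # Probe which SSL/TLS protocol versions the backend accepts.
        # Host/port are the values from the question; protocol constants are only
        # present if the local OpenSSL build still supports them.
        import socket
        import ssl

        HOST, PORT = "192.168.0.10", 9002
        candidates = ["PROTOCOL_SSLv3", "PROTOCOL_TLSv1", "PROTOCOL_TLSv1_1", "PROTOCOL_TLSv1_2"]

        for name in candidates:
            proto = getattr(ssl, name, None)
            if proto is None:
                print("%-18s not available in this Python/OpenSSL build" % name)
                continue
            ctx = ssl.SSLContext(proto)
            ctx.check_hostname = False
            ctx.verify_mode = ssl.CERT_NONE
            try:
                with socket.create_connection((HOST, PORT), timeout=5) as sock:
                    with ctx.wrap_socket(sock) as tls:
                        print("%-18s OK, negotiated %s" % (name, tls.version()))
            except Exception as exc:
                print("%-18s failed: %s" % (name, exc))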

    Read the article

  • How to export SQL Server data from a corrupted database (with disk write error)

    - by damitamit
    IT realised there was a disk write error on our production SQL Server 2005 and hence was causing the backups to fail. By the time they had realised this the nightly backup was old, so were not able to just restore the backup on another server. The database is still running and being used constantly. However DBCC CheckDB fails. Also the SQL Server backup task fails, Copy Database fails, Export Data Wizard fails. However it seems all the data can be read from the tables (i.e using bcp etc) Another observation I have made is that the Transaction Log is nearly double the size of the Database. (Does that mean all the changes arent being written to the MDF?) What would be the best plan of attack to get the database to a state where backups are working and the data is safe? Take the database offline and use the MDF/LDF to somehow create the database on another sql server? Export the data from the database using bcp. Create the database (use the Generate Scripts function on the corrupt db to create the schema on the new db) on another sql server and use bcp again to import the data. Some other option that is the right course of action in this situation? The IT manager says the data is safe as if the server fails, the data can be restored from the mdf/ldf. I'm not sure so insisted that we start exporting the data each night as a failsafe (using bcp for example). IT are also having issues on the hardware side of things as supposedly the disk error in on a virtualized disk and can't be rebuilt like a normal raid array (or something like that). Please excuse my use of incorrect terminology and incorrect assumptions on how Sql Server operates. I'm the application developer and have been called to help (as it seems IT know less about SQL Server than I do). Many Thanks, Amit Results of DBBC CheckDB: Msg 1823, Level 16, State 2, Line 1 A database snapshot cannot be created because it failed to start. Msg 7928, Level 16, State 1, Line 1 The database snapshot for online checks could not be created. Either the reason is given in a previous error or one of the underlying volumes does not support sparse files or alternate streams. Attempting to get exclusive access to run checks offline. Msg 5030, Level 16, State 12, Line 1 The database could not be exclusively locked to perform the operation. Msg 7926, Level 16, State 1, Line 1 Check statement aborted. The database could not be checked as a database snapshot could not be created and the database or table could not be locked. See Books Online for details of when this behavior is expected and what workarounds exist. Also see previous errors for more details. Msg 823, Level 24, State 3, Line 1 The operating system returned error 1(error not found) to SQL Server during a write at offset 0x00000674706000 in file 'G:\AX40_Dynamics_Live.mdf'. Additional messages in the SQL Server error log and system event log may provide more detail. This is a severe system-level error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.
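    If the nightly bcp failsafe is adopted, the per-table export is easy to script so that no table is missed. The Python sketch below is illustrative only: the server name, database, table list and output folder are placeholders, and the -T switch assumes Windows authentication (use -U/-P for a SQL login instead).

        #!/usr/bin/env python
        # Rough nightly failsafe: bcp each table out of the damaged database.
        # SERVER, DATABASE, TABLES and OUT_DIR are placeholders for your environment.
        import datetime
        import os
        import subprocess

        SERVER = r"PRODSQL01"
        DATABASE = "AX40_Dynamics_Live"
        TABLES = ["dbo.CustTable", "dbo.SalesTable"]   # enumerate the real tables here
        OUT_DIR = r"D:\bcp_export"

        stamp = datetime.datetime.now().strftime("%Y%m%d")
        target = os.path.join(OUT_DIR, stamp)
        os.makedirs(target, exist_ok=True)

        for table in TABLES:
            out_file = os.path.join(target, table.replace(".", "_") + ".dat")
            cmd = ["bcp", "%s.%s" % (DATABASE, table), "out", out_file,
                   "-S", SERVER, "-T", "-n"]   # -T trusted connection, -n native format
            rc = subprocess.call(cmd)
            print("%s -> %s (exit %d)" % (table, out_file, rc))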

    Read the article

  • This task is currently locked by a running workflow and cannot be edited: a limitation of both Nintex and SPD workflows

    - by ybbest
    Note, this post is from Nintex Forum here. These limitations apply to both SharePoint designer Workflow and Nintex Workflow as Nintex using the SharePoint workflow engine. The common cause that I experience is that ‘parent’ workflow is generating more than one task at once. This is common as you can have multiple approvers for certain approval process. You could also have workflow running when the task is created, one of the common scenario is you would like to set a custom column value in your approval task. For me this is huge limitation, as Nintex lover I really hope Nintex could solve this problem with Microsoft going forward. Introduction “This task is currently locked by a running workflow and cannot be edited” is a common message that is seen when an error occurs while the SharePoint workflow engine is processing a task item associated with a workflow. When a workflow processes a task normally, the following sequence of events is expected to occur: 1.       The process begins. 2.       The workflow places a ‘lock’ on the task so nothing else can change the values while the workflow is processing. 3.       The workflow processes the task. 4.       The lock is released when the task processing is finished. When the message is encountered, it usually indicates that an error occurred between step 2 and 4. As a result, the lock is never released. Therefore, the ‘task locked’ message is not an error itself, rather a symptom of another error – the ‘task locked’ message does not indicate what went wrong. In most cases, once this message is encountered, the workflow cannot be made to continue and must be terminated and started again. The following is a guide that can help troubleshoot the cause of these messages.  Some initial observations to narrow down the potential causes are: Is the error consistent or intermittent? When the error is consistent, it will happen every time the workflow is run. When it is intermittent, it may happen regularly, but not every time. Does the error occur the first time the user tries to respond to a task, or do they respond and notice the workflow does not continue, and when they respond again the error occurs? If the message is present when the user first responds to the task, the issue would have occurred when the task was created. Otherwise, it would have occurred when the user attempted to respond to the task. Causes Modifying the task list A cause of this error appearing consistently the first time a user tries to respond to a task is a modification to the default task list schema. For example, changing the ‘Assigned to’ field in a task list to be a multiple selection will cause the behaviour. Deleting the workflow task then restoring it from the Recycle bin If you start a workflow, delete the workflow task then restore it from the Recycle Bin in SharePoint, the workflow will fail with the ‘task locked’ error.  This is confirmed behaviour whether using a SharePoint Designer or a Nintex workflow.  You will need to terminate the workflow and start it again. Parallel simultaneous responses A cause of this error appearing inconsistently is multiple users responding to tasks in parallel at the same time. In this scenario, one task will complete correctly and the other will not process. When the user tries again, the ‘task locked’ message will display. Nintex included a workaround for this issue in build 11000. 
In build 11000 and later, one of the users will receive a message on the task form when they attempt to respond, stating that they need to try again in a few moments. Additional processing on the task A cause of this error appearing consistently and inconsistently is having an additional system running on the items in the task list. Some examples include: a workflow running on the task list, an event receiver running on the task list or another automated process querying and updating workflow tasks. Note: This Microsoft help article (http://office.microsoft.com/en-us/sharepointdesigner/HA102376561033.aspx#5) explains creating a workflow that runs on the task list to update a field on the task. Our experience shows that this causes the ‘Task Locked’ issues when the ‘parent’ workflow is generating more than one task at once. Isolated system error If the error is a rare event, or a ‘one off’ event, then an isolated system error may have occurred. For example, if there is a database connectivity issue while the workflow is processing the task response, the task will lock. In this case, the user will respond to a task but the workflow will not continue. When they respond again, the ‘task locked’ message will display. In this case, there will be an error in the SharePoint ULS Logs at the time that the user originally responded. Temporary delay while workflow processes If the workflow is taking a long time to process after a user submits a task, they may notice and try to respond to the task again. They will see the task locked error, but after a number of attempts (or after waiting some time) the task response page eventually indicates the task has been responded to. In this case, nothing actually went wrong, and the error message gives an accurate indication of what is happening – the workflow temporarily locked the task while it was processing. This scenario may occur in a very large workflow, or after the SharePoint application pool has just started. Modifying the task via a web service with an invalid url If the Nintex Workflow web service is used to respond to or delegate a task, the site context part of the url must be a valid alternative access mapping url. For example, if you access the web service via the IP address of the SharePoint server, and the IP address is not a valid AAM, the task can become locked. The workflow has become stuck without any apparent errors This behaviour can occur as a result of a bug in the SharePoint 2010 workflow engine.  If you do not have the August 2010 Cumulative Update (or later) for SharePoint, and your workflow uses delays, “Flexi-task”, State machine”, “Task Reminder” actions or variables, you could be affected. Check the SharePoint 2010 Updates site here: http://technet.microsoft.com/en-us/sharepoint/ff800847.  The October CU is recommended http://support.microsoft.com/kb/2553031.   The fix is described as “Consider the following scenario. You add a Delay activity to a workflow. Then, you set the duration for the Delay activity. You deploy the workflow in SharePoint Foundation 2010. In this scenario, the workflow is not resumed after the duration of the Delay activity”. If you find this is occurring in your environment, install the October CU, terminate all the running workflows affected and run them afresh. Investigative steps The first step to isolate the issue is to create a new task list on the site and configure the workflow to use it.  Any customizations that were made to the original task list should not be made to the new task list. 
If the new task list eliminates the issue, then the cause can be attributed to the original task list or a change that was made to it. To change the task list that the workflow uses: In Workflow Designer select Settings -> Startup Options Then configure the task list as required If any of the scenarios above do not help, check the SharePoint logs for any messages with a category of ‘Workflow Infrastructure’. Conclusion The information in this article has been gathered from observations and investigations by Nintex. The sources of these issues are the underlying SharePoint workflow engine. This article will be updated if further causes are discovered. From <http://connect.nintex.com/forums/thread/6503.aspx>

    Read the article

  • Our Look at Opera 10.50 Web Browser

    - by Asian Angel
    Everyone has been talking about the newest version of Opera recently but perhaps you have not looked at it too closely yet. Today we will take a look at 10.50 and let you see what this “new browser” is all about. The New Engines Carakan JavaScript Engine: Runs web applications up to 7 times faster than its predecessor Futhark Vega Graphics Library: Enables super fast and smooth graphics on everything from tab switching to webpage animation Presto 2.5: Provides support for HTML5, CSS2.1 and the latest CSS3 standards A Look at the Features Available If you have installed or used older versions of Opera before then the default look after a clean install will probably seem rather different. The main differences in appearance are mainly located within the “glass border” areas of the browser. The “Speed Dial” setup looks and works just as well as in previous versions. You can set a favorite wallpaper or image as your background and choose the number of “dials” using the “Configure Speed Dial Command”. One of the “standout” differences is the “O Button”. All of the menus have been condensed into this single access point but it only takes a few moments to find what you are looking for. If you have used the style before in earlier versions of Opera some of the items have been moved around. For those who prefer the “Menu Bar” that can be easily restored using the “Show Menu Bar Command”. If desired you can actually “extend” the “Tab Bar” downwards to display thumbnails of your open tabs. Just use your mouse to grab the bottom of the “Tab Bar” and adjust it to suit your personal needs. The only problem with this feature is that it will quickly use up a good sized portion of your available UI and browser window space. The “Password Manager” is ready to access when needed…the background for the button will turn a shiny metallic blue when you open a webpage that you have “Login Information” saved for. One of the new features is a small “Recycle Bin Button” in the upper right corner. Clicking on this will display a list of recently closed tabs letting you have easy access to any tabs that you may have accidentally closed. This is definitely a great feature to have as an easy access button. For those who were used to how the “Zoom Feature” looked before it has a new “look” to it. Instead of the pop-up menu-type listing of “view sizes” present before you now have a slider button that you can use to adjust the zooming level. For our default setup here the “Sidebar Panels” available were: “Bookmarks, Widgets, Unite, Notes, Downloads, History, & Panels”. Additional panels such as “Links, Windows, Search, Info, etc.” are available if you want and/or need them (accessible using the “Panels Plus Sign Button”). The “Opera Link Button” makes it easy for you to synchronize your “Speed Dial, Bookmarks, Personal Bar, Custom Searches, History & Notes”. Note: “Opera Link” requires an account and can be signed up for using the link provided below. Want to share files with your family and friends? “Unite” allows you to do that and more. With “Unite” you can: “Stream Music, Show Photo Galleries, Share Files and/or Folders, & host webpages directly from your browser”. We have a more in-depth look at “Unite” in our article here. Note: Use of “Unite” requires an Opera account. Got a slow internet connection? “Opera Turbo” can help with that by running the web traffic through their “compression servers” to speed up your web browsing. 
Keep in mind that “Opera Turbo” will not engage if you are accessing a secure website (i.e. your bank’s website) thus preserving your security. Note: “Opera Turbo” can be set up to automatically detect slow internet connections (i.e. crowded Wi-Fi in a cafe). Opera has a built-in “Private Browsing Mode” now for those who prefer anonymous browsing and want to keep the “history records clean” on their computer. To access it go to “Tabs and windows” and select “New private tab” or “New private window” as desired. When you open your new “Private Tab or Window” you will see the following message with details on how Opera will handle browsing information and a large “door hanger symbol”. Notice that the one tab is locked into “Private Browsing Mode” while the others are still working in “Regular Browsing Mode”. Very nice! A miniature version of the “door hanger symbol” will be present on any tab that is locked into “Private Browsing Mode”. If you are using Windows 7 then you will love how things look from your “Taskbar”. Here you can see four very nice looking thumbnails for the tabs that we had open. All that you have to do is click on the desired thumbnail… The “Context Menu” looks just as lovely as the thumbnails and definitely has some terrific functionality built into it. Add Enhanced Aero Capability If you love “Aero” and want more for your new Opera install then we have the perfect theme for you. The theme’s name is Z1-AV69 and once you have downloaded it you will need to place it in the “Skins Subfolder” in Opera’s “Program Files Folder”. Note: For our example we used version 1.10 but version 2.00 is now available (link provided below). Once you have restarted Opera, go to the “O Menu” and select “Appearance”. When the “Appearance Window” opens click on “Z1-Glass Skin” and then click “OK”. All of a sudden you will have more “Aero Goodness” to enjoy. Compare this screenshot with the one at the top of this article…the only part that is not transparent now is the browser window area itself. Want even more “Aero Goodness”? Right click on the “Tab Bar” and set “Tab Bar Placement” to “Left”. Note: You can achieve the same effect by setting the “Tab Bar Placement” to “Right”. With the “Speed Dial” visible you will be able to see your wallpaper with ease. While this is obviously not for everyone it does make for a great visual trick. Portable Versions Perhaps you need this wonderful new version of Opera to go with you wherever you do during the day. Not a problem…just visit the Opera USB website to choose a version that works best for you. You can select from “Zip or Exe” setup files and if needed update an older portable version using a “Zipped Update Files Package”. If you are updating an older version keep in mind that you will need to delete the old “OperaUSB.exe. File” due to changes with the new setup files. During our tests updating older portable versions went well for the most part but we did experience a few “odd UI quirks” here and there…so we recommend setting up a clean install if possible. Conclusion The new 10.50 release is a pleasure to use and is a recommended install for your system. Whether you are considering trying Opera for the first time or have been using it for a bit we think that you will pleased with everything that the 10.50 release has to offer. For those who would like to add User Scripts to Opera be certain to look at our how-to article here. 
Links: Download Opera 10.50 for your location (Windows); Get the latest Snapshot versions for Linux & Mac; Sign up for an Opera Link account; View In-Depth detail on Opera 10.50’s features; Download the Z1-AV69 Aero Theme; Download Portable Opera 10.50

    Read the article

  • LINQ to SQL and missing Many to Many EntityRefs

    - by Rick Strahl
    Ran into an odd behavior today with a many to many mapping of one of my tables in LINQ to SQL. Many to many mappings aren’t transparent in LINQ to SQL and it maps the link table the same way the SQL schema has it when creating one. In other words LINQ to SQL isn’t smart about many to many mappings and just treats it like the 3 underlying tables that make up the many to many relationship. Iain Galloway has a nice blog entry about Many to Many relationships in LINQ to SQL. I can live with that – it’s not really difficult to deal with this arrangement once mapped, especially when reading data back. Writing is a little more difficult as you do have to insert into two entities for new records, but nothing that can’t be handled in a small business object method with a few lines of code. When I created a database I’ve been using to experiment around with various different OR/Ms recently I found that for some reason LINQ to SQL was completely failing to map even to the linking table. As it turns out there’s a good reason why it fails, can you spot it below? (read on :-}) Here is the original database layout: There’s an items table, a category table and a link table that holds only the foreign keys to the Items and Category tables for a typical M->M relationship. When these three tables are imported into the model the *look* correct – I do get the relationships added (after modifying the entity names to strip the prefix): The relationship looks perfectly fine, both in the designer as well as in the XML document: <Table Name="dbo.wws_Item_Categories" Member="ItemCategories"> <Type Name="ItemCategory"> <Column Name="ItemId" Type="System.Guid" DbType="uniqueidentifier NOT NULL" CanBeNull="false" /> <Column Name="CategoryId" Type="System.Guid" DbType="uniqueidentifier NOT NULL" CanBeNull="false" /> <Association Name="ItemCategory_Category" Member="Categories" ThisKey="CategoryId" OtherKey="Id" Type="Category" /> <Association Name="Item_ItemCategory" Member="Item" ThisKey="ItemId" OtherKey="Id" Type="Item" IsForeignKey="true" /> </Type> </Table> <Table Name="dbo.wws_Categories" Member="Categories"> <Type Name="Category"> <Column Name="Id" Type="System.Guid" DbType="UniqueIdentifier NOT NULL" IsPrimaryKey="true" IsDbGenerated="true" CanBeNull="false" /> <Column Name="ParentId" Type="System.Guid" DbType="UniqueIdentifier" CanBeNull="true" /> <Column Name="CategoryName" Type="System.String" DbType="NVarChar(150)" CanBeNull="true" /> <Column Name="CategoryDescription" Type="System.String" DbType="NVarChar(MAX)" CanBeNull="true" /> <Column Name="tstamp" AccessModifier="Internal" Type="System.Data.Linq.Binary" DbType="rowversion" CanBeNull="true" IsVersion="true" /> <Association Name="ItemCategory_Category" Member="ItemCategory" ThisKey="Id" OtherKey="CategoryId" Type="ItemCategory" IsForeignKey="true" /> </Type> </Table> However when looking at the code generated these navigation properties (also on Item) are completely missing: [global::System.Data.Linq.Mapping.TableAttribute(Name="dbo.wws_Item_Categories")] [global::System.Runtime.Serialization.DataContractAttribute()] public partial class ItemCategory : Westwind.BusinessFramework.EntityBase { private System.Guid _ItemId; private System.Guid _CategoryId; public ItemCategory() { } [global::System.Data.Linq.Mapping.ColumnAttribute(Storage="_ItemId", DbType="uniqueidentifier NOT NULL")] [global::System.Runtime.Serialization.DataMemberAttribute(Order=1)] public System.Guid ItemId { get { return this._ItemId; } set { if ((this._ItemId != value)) { 
this._ItemId = value; } } } [global::System.Data.Linq.Mapping.ColumnAttribute(Storage="_CategoryId", DbType="uniqueidentifier NOT NULL")] [global::System.Runtime.Serialization.DataMemberAttribute(Order=2)] public System.Guid CategoryId { get { return this._CategoryId; } set { if ((this._CategoryId != value)) { this._CategoryId = value; } } } } Notice that the Item and Category association properties which should be EntityRef properties are completely missing. They’re there in the model, but the generated code – not so much. So what’s the problem here? The problem – it appears – is that LINQ to SQL requires primary keys on all entities it tracks. In order to support tracking – even of the link table entity – the link table requires a primary key. Real obvious ain’t it, especially since the designer happily lets you import the table and even shows the relationship and implicitly the related properties. Adding an Id field as a Pk to the database and then importing results in this model layout: which properly generates the Item and Category properties into the link entity. It’s ironic that LINQ to SQL *requires* the PK in the middle – the Entity Framework requires that a link table have *only* the two foreign key fields in a table in order to recognize a many to many relation. EF actually handles the M->M relation directly without the intermediate link entity unlike LINQ to SQL. [updated from comments – 12/24/2009] Another approach is to set up both ItemId and CategoryId in the database which shows up in LINQ to SQL like this: This also work in creating the Category and Item fields in the ItemCategory entity. Ultimately this is probably the best approach as it also guarantees uniqueness of the keys and so helps in database integrity. It took me a while to figure out WTF was going on here – lulled by the designer to think that the properties should be when they were not. It’s actually a well documented feature of L2S that each entity in the model requires a Pk but of course that’s easy to miss when the model viewer shows it to you and even the underlying XML model shows the Associations properly. This is one of the issue with L2S of course – you have to play by its rules and once you hit one of those rules there’s no way around them – you’re stuck with what it requires which in this case meant changing the database.© Rick Strahl, West Wind Technologies, 2005-2010Posted in ADO.NET  LINQ  

    Read the article

  • Parallelism in .NET – Part 18, Task Continuations with Multiple Tasks

    - by Reed
    In my introduction to Task continuations I demonstrated how the Task class provides a more expressive alternative to traditional callbacks.  Task continuations provide a much cleaner syntax than traditional callbacks, but there are other reasons to switch to using continuations… Task continuations provide a clean syntax, and a very simple, elegant means of synchronizing asynchronous method results with the user interface.  In addition, continuations provide a very simple, elegant means of working with collections of tasks. Prior to .NET 4, working with multiple related asynchronous method calls was very tricky.  If, for example, we wanted to run two asynchronous operations, followed by a single method call which we wanted to run when the first two methods completed, we’d have to program all of the handling ourselves.  We would likely need to take some approach such as using a shared callback which synchronized against a common variable, or using a WaitHandle shared within the callbacks to allow one to wait for the second.  Although this could be accomplished easily enough, it requires manually placing this handling into every algorithm which requires this form of blocking.  This is error prone, difficult, and can easily lead to subtle bugs. Similar to how the Task class’s static methods provide a way to block until multiple tasks have completed, TaskFactory contains static methods which allow a continuation to be scheduled upon the completion of multiple tasks: TaskFactory.ContinueWhenAll. This allows you to easily specify a single delegate to run when a collection of tasks has completed.  For example, suppose we have a class which fetches data from the network.  This can be a long running operation, and potentially fail in certain situations, such as a server being down.  As a result, we have three separate servers which we will “query” for our information.  Now, suppose we want to grab data from all three servers, and verify that the results are the same from all three. With traditional asynchronous programming in .NET, this would require using three separate callbacks, and managing the synchronization between the various operations ourselves.  The Task and TaskFactory classes simplify this for us, allowing us to write:

        var server1 = Task.Factory.StartNew( () => networkClass.GetResults(firstServer) );
        var server2 = Task.Factory.StartNew( () => networkClass.GetResults(secondServer) );
        var server3 = Task.Factory.StartNew( () => networkClass.GetResults(thirdServer) );
        var result = Task.Factory.ContinueWhenAll(
            new[] { server1, server2, server3 },
            (tasks) =>
            {
                // Propagate exceptions (see below)
                Task.WaitAll(tasks);
                return this.CompareTaskResults(
                    tasks[0].Result, tasks[1].Result, tasks[2].Result);
            });

    This is clean, simple, and elegant.  The one complication is the Task.WaitAll(tasks); statement.
Although the continuation will not complete until all three tasks (server1, server2, and server3) have completed, there is a potential snag.  If the networkClass.GetResults method fails, and raises an exception, we want to make sure to handle it cleanly.  By using Task.WaitAll, any exceptions raised within any of our original tasks will get wrapped into a single AggregateException by the WaitAll method, providing us a simplified means of handling the exceptions.  If we wait on the continuation, we can trap this AggregateException, and handle it cleanly.  Without this line, it’s possible that an exception could remain uncaught and unhandled by a task, which later might trigger a nasty UnobservedTaskException.  This would happen any time two of our original tasks failed. Just as we can schedule a continuation to occur when an entire collection of tasks has completed, we can just as easily set up a continuation to run when any single task within a collection completes.  If, for example, we didn’t need to compare the results of all three network locations, but only use one, we could still schedule three tasks.  We could then have our completion logic work on the first task which completed, and ignore the others.  This is done via TaskFactory.ContinueWhenAny:

        var server1 = Task.Factory.StartNew( () => networkClass.GetResults(firstServer) );
        var server2 = Task.Factory.StartNew( () => networkClass.GetResults(secondServer) );
        var server3 = Task.Factory.StartNew( () => networkClass.GetResults(thirdServer) );
        var result = Task.Factory.ContinueWhenAny(
            new[] { server1, server2, server3 },
            (firstTask) =>
            {
                return this.ProcessTaskResult(firstTask.Result);
            });

Here, instead of working with all three tasks, we’re just using the first task which finishes.  This is very useful, as it allows us to easily work with results of multiple operations, and “throw away” the others.  However, you must take care when using ContinueWhenAny to properly handle exceptions.  At some point, you should always wait on each task (or use the Task.Result property) in order to propagate any exceptions raised from within the task.  Failing to do so can lead to an UnobservedTaskException.

    Read the article

  • Displaying an image on a LED matrix with a Netduino

    - by Bertrand Le Roy
    In the previous post, we’ve been flipping bits manually on three ports of the Netduino to simulate the data, clock and latch pins that a shift register expected. We did all that in order to control one line of a LED matrix and create a simple Knight Rider effect. It was rightly pointed out in the comments that the Netduino has built-in knowledge of the sort of serial protocol that this shift register understands through a feature called SPI. That will of course make our code a whole lot simpler, but it will also make it a whole lot faster: writing to the Netduino ports is actually not that fast, whereas SPI is very, very fast. Unfortunately, the Netduino documentation for SPI is severely lacking. Instead, we’ve been reliably using the documentation for the Fez, another .NET microcontroller. To send data through SPI, we’ll just need  to move a few wires around and update the code. SPI uses pin D11 for writing, pin D12 for reading (which we won’t do) and pin D13 for the clock. The latch pin is a parameter that can be set by the user. This is very close to the wiring we had before (data on D11, clock on D12 and latch on D13). We just have to move the latch from D13 to D10, and the clock from D12 to D13. The code that controls the shift register has slimmed down considerably with that change. Here is the new version, which I invite you to compare with what we had before: public class ShiftRegister74HC595 { protected SPI Spi; public ShiftRegister74HC595(Cpu.Pin latchPin) : this(latchPin, SPI.SPI_module.SPI1) { } public ShiftRegister74HC595(Cpu.Pin latchPin, SPI.SPI_module spiModule) { var spiConfig = new SPI.Configuration( SPI_mod: spiModule, ChipSelect_Port: latchPin, ChipSelect_ActiveState: false, ChipSelect_SetupTime: 0, ChipSelect_HoldTime: 0, Clock_IdleState: false, Clock_Edge: true, Clock_RateKHz: 1000 ); Spi = new SPI(spiConfig); } public void Write(byte buffer) { Spi.Write(new[] {buffer}); } } All we have to do here is configure SPI. The write method couldn’t be any simpler. Everything is now handled in hardware by the Netduino. We set the frequency to 1MHz, which is largely sufficient for what we’ll be doing, but it could potentially go much higher. The shift register addresses the columns of the matrix. The rows are directly wired to ports D0 to D7 of the Netduino. The code writes to only one of those eight lines at a time, which will make it fast enough. The way an image is displayed is that we light the lines one after the other so fast that persistence of vision will give the illusion of a stable image: foreach (var bitmap in matrix.MatrixBitmap) { matrix.OnRow(row, bitmap, true); matrix.OnRow(row, bitmap, false); row++; } Now there is a twist here: we need to run this code as fast as possible in order to display the image with as little flicker as possible, but we’ll eventually have other things to do. In other words, we need the code driving the display to run in the background, except when we want to change what’s being displayed. Fortunately, the .NET Micro Framework supports multithreading. In our implementation, we’ve added an Initialize method that spins a new thread that is tied to the specific instance of the matrix it’s being called on. public LedMatrix Initialize() { DisplayThread = new Thread(() => DoDisplay(this)); DisplayThread.Start(); return this; } I quite like this way to spin a thread. As you may know, there is another, built-in way to contextualize a thread by passing an object into the Start method. 
For the method to work, the thread must have been constructed with a ParameterizedThreadStart delegate, which takes one parameter of type object. I like to use object as little as possible, so instead I’m constructing a closure with a Lambda, currying it with the current instance. This way, everything remains strongly-typed and there’s no casting to do. Note that this method would extend perfectly to several parameters. Of note as well is the return value of Initialize, a common technique to add some fluency to the API and enabling the matrix to be instantiated and initialized in a single line: using (var matrix = new LedMS88SR74HC595().Initialize()) The “using” in the previous line is because we have implemented IDisposable so that the matrix kills the thread and clears the display when the user code is done with it: public void Dispose() { Clear(); DisplayThread.Abort(); } Thanks to the multi-threaded version of the matrix driver class, we can treat the display as a simple bitmap with a very synchronous programming model: matrix.Set(someimage); while (button.Read()) { Thread.Sleep(10); } Here, the call into Set returns immediately and from the moment the bitmap is set, the background display thread will constantly continue refreshing no matter what happens in the main thread. That enables us to wait or read a button’s port on the main thread knowing that the current image will continue displaying unperturbed and without requiring manual refreshing. We’ve effectively hidden the implementation of the display behind a convenient, synchronous-looking API. Pretty neat, eh? Before I wrap up this post, I want to talk about one small caveat of using SPI rather than driving the shift register directly: when we got to the point where we could actually display images, we noticed that they were a mirror image of what we were sending in. Oh noes! Well, the reason for it is that SPI is sending the bits in a big-endian fashion, in other words backwards. Now sure you could fix that in software by writing some bit-level code to reverse the bits we’re sending in, but there is a far more efficient solution than that. We are doing hardware here, so we can simply reverse the order in which the outputs of the shift register are connected to the columns of the matrix. That’s switching 8 wires around once, as compared to doing bit operations every time we send a line to display. All right, so bringing it all together, here is the code we need to write to display two images in succession, separated by a press on the board’s button: var button = new InputPort(Pins.ONBOARD_SW1, false, Port.ResistorMode.Disabled); using (var matrix = new LedMS88SR74HC595().Initialize()) { // Oh, prototype is so sad! var sad = new byte[] { 0x66, 0x24, 0x00, 0x18, 0x00, 0x3C, 0x42, 0x81 }; DisplayAndWait(sad, matrix, button); // Let's make it smile! var smile = new byte[] { 0x42, 0x18, 0x18, 0x81, 0x7E, 0x3C, 0x18, 0x00 }; DisplayAndWait(smile, matrix, button); } And here is a video of the prototype running: The prototype in action I’ve added an artificial delay between the display of each row of the matrix to clearly show what’s otherwise happening very fast. This way, you can clearly see each of the two images being displayed line by line. Next time, we’ll do no hardware changes, focusing instead on building a nice programming model for the matrix, with sprites, text and hardware scrolling. Fun stuff. By the way, can any of my reader guess where we’re going with all that? 
The code for this prototype can be downloaded here: http://weblogs.asp.net/blogs/bleroy/Samples/NetduinoLedMatrixDriver.zip

    Read the article

  • SQL SERVER – How to Recover SQL Database Data Deleted by Accident

    - by Pinal Dave
    In Repair a SQL Server database using a transaction log explorer, I showed how to use ApexSQL Log, a SQL Server transaction log viewer, to recover a SQL Server database after a disaster. In this blog, I’ll show you how to use another SQL Server disaster recovery tool from ApexSQL in a situation when data is accidentally deleted. You can download ApexSQL Recover here, install, and play along. With a good SQL Server disaster recovery strategy, data recovery is not a problem. You have a reliable full database backup with valid data, a full database backup and subsequent differential database backups, or a full database backup and a chain of transaction log backups. But not all situations are ideal. Here we’ll address some sub-optimal scenarios, where you can still successfully recover data. If you have only a full database backup This is the least optimal SQL Server disaster recovery strategy, as it doesn’t ensure minimal data loss. For example, data was deleted on Wednesday. Your last full database backup was created on Sunday, three days before the records were deleted. By using the full database backup created on Sunday, you will be able to recover SQL database records that existed in the table on Sunday. If there were any records inserted into the table on Monday or Tuesday, they will be lost forever. The same goes for records modified in this period. This method will not bring back modified records, only the old records that existed on Sunday. If you restore this full database backup, all your changes (intentional and accidental) will be lost and the database will be reverted to the state it had on Sunday. What you have to do is compare the records that were in the table on Sunday to the records on Wednesday, create a synchronization script, and execute it against the Wednesday database. If you have a full database backup followed by differential database backups Let’s say the situation is the same as in the example above, only you create a differential database backup every night. Use the full database backup created on Sunday, and the last differential database backup (created on Tuesday). In this scenario, you will lose only the data inserted and updated after the differential backup created on Tuesday. If you have a full database backup and a chain of transaction log backups This is the SQL Server disaster recovery strategy that provides minimal data loss. With a full chain of transaction logs, you can recover the SQL database to an exact point in time. To provide optimal results, you have to know exactly when the records were deleted, because restoring to a later point will not bring back the records. This method requires restoring the full database backup first. If you have any differential log backup created after the last full database backup, restore the most recent one. Then, restore transaction log backups, one by one, it the order they were created starting with the first created after the restored differential database backup. Now, the table will be in the state before the records were deleted. You have to identify the deleted records, script them and run the script against the original database. Although this method is reliable, it is time-consuming and requires a lot of space on disk. How to easily recover deleted records? The following solution enables you to recover SQL database records even if you have no full or differential database backups and no transaction log backups. 
To understand how ApexSQL Recover works, I'll explain what happens when table data is deleted. Table data is stored in data pages. When you delete table records, they are not immediately removed from the data pages, but are marked to be overwritten by new records. Such records are no longer shown as existing, but ApexSQL Recover can read them and create an undo script for them.
How long will deleted records stay in the MDF file? It depends on many factors: the more transactions that occur after the deletion, the greater the chance that the records will be overwritten and permanently lost. Therefore, it's recommended to create a copy of the database MDF and LDF files immediately (if you cannot take your database offline until the issue is solved) and run ApexSQL Recover on the copies; one way to take such a copy is sketched further below. Note that a full database backup will not help here, as records marked for overwriting are not included in the backup.
First, I'll delete some records from the Person.EmailAddress table in the AdventureWorks database. I can delete these records in SQL Server Management Studio, or execute a script such as
DELETE FROM Person.EmailAddress WHERE BusinessEntityID BETWEEN 70 AND 80
Then, I'll start ApexSQL Recover and select From DELETE operation in the Recovery tab.
In the Select the database to recover step, first select the SQL Server instance. If it's not shown in the drop-down list, click the Server icon to the right of the Server drop-down list and browse for the SQL Server instance, or type the instance name manually. Specify the authentication type and select the database in the Database drop-down list.
In the next step, you're prompted to add additional data sources. As this can be a tricky step, especially for new users, ApexSQL Recover offers help via the Help me decide option, which guides you through a series of questions about the database transaction log and advises what files to add. If you know that you have no transaction log backups or detached transaction logs, or that the online transaction log file has been truncated since the data was deleted, select No additional transaction logs are available. If you know that you have transaction log backups that contain the delete transactions you want to recover, click Add transaction logs. The online transaction log is listed and selected automatically; click Add to add transaction log backups. It is best if you have a full transaction log chain, as explained above.
The next step for this option is to specify the time range. Selecting a narrow time range around the time of deletion will create the recovery script just for the accidentally deleted records; a wide time range might also script records that were deleted on purpose, and you don't want that. If needed, you can review the generated script and manually remove such records.
After that, for all data source options, the next step is to select the tables. Be careful here: if you deleted data from other tables on purpose and don't want to recover it, don't select all tables, as ApexSQL Recover will create INSERT scripts for those records too.
The next step offers two options: create a recovery script that will insert the deleted records back into the Person.EmailAddress table, or create a new database, create the Person.EmailAddress table in it, and insert the deleted records there. I'll select the first one.
The recovery process completes and 11 records are found and scripted, as expected.
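Regarding the earlier recommendation to copy the MDF and LDF files: one way to take such a copy, assuming a brief outage is acceptable, is sketched below. The database name and file locations are hypothetical.

    -- Take the database offline so its data and log files are released
    ALTER DATABASE AdventureWorks SET OFFLINE WITH ROLLBACK IMMEDIATE;

    -- Copy AdventureWorks.mdf and AdventureWorks_log.ldf at the operating
    -- system level (e.g. from the instance's DATA folder) to a safe location,
    -- then bring the database back online:
    ALTER DATABASE AdventureWorks SET ONLINE;

You can then run ApexSQL Recover against the copied files, as recommended above.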
To see the script, click View script. ApexSQL Recover has its own script editor, where you can review, modify, and execute the recovery script. The INSERT INTO statements look like:
INSERT INTO Person.EmailAddress( BusinessEntityID, EmailAddressID, EmailAddress, rowguid, ModifiedDate) VALUES( 70, 70, N'[email protected]' COLLATE SQL_Latin1_General_CP1_CI_AS, 'd62c5b4e-c91f-403f-b630-7b7e0fda70ce', '20030109 00:00:00.000' );
To execute the script, click Execute in the menu. If you want to check whether the records are really back, execute
SELECT * FROM Person.EmailAddress WHERE BusinessEntityID BETWEEN 70 AND 80
As shown, ApexSQL Recover recovers SQL database data after accidental deletes even without a database backup that contains the deleted data or the relevant transaction log backups. ApexSQL Recover reads the deleted data from the database data file, so this method can be used even for databases in the Simple recovery model.
Besides recovering SQL database records deleted by a DELETE statement, ApexSQL Recover can help when records are lost due to a DROP TABLE or TRUNCATE TABLE statement, and it can repair a corrupted MDF file that cannot be attached to a SQL Server instance. You can find more information about how to recover lost SQL database data and repair a SQL Server database on the ApexSQL Solution center. There are solutions for various situations when data needs to be recovered.
Reference: Pinal Dave (http://blog.sqlauthority.com)
Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Cloud Based Load Testing Using TF Service & VS 2013

    - by Tarun Arora [Microsoft MVP]
    Originally posted on: http://geekswithblogs.net/TarunArora/archive/2013/06/30/cloud-based-load-testing-using-tf-service-amp-vs-2013.aspx
One of the new features announced as part of the Visual Studio 2013 Ultimate Preview is 'Cloud Based Load Testing'. In this blog post I'll walk you through what Cloud Based Load Testing is, how I have been using this feature (a success story!), and where you can find more resources on it.
What is Cloud Based Load Testing?
It goes without saying that performance testing your application not only gives you confidence that it will work under heavy stress, but also lets you test how scalable its architecture is. It is important to know how much is too much for your application! Working with various clients in the industry, I have realized that the biggest barriers to Load Testing & Performance Testing adoption are the high infrastructure and administration cost that comes with this phase of testing, the time taken to procure and set up the test infrastructure, and finding a use for that infrastructure investment once testing is complete. Is cloud the answer? The promise is: 100% Visual Studio compatible, scalable and realistic, start testing in under 2 minutes, intuitive, pay only for what you need, and reuse existing on-premise tests in the cloud.
There are a lot of vendors out there offering Cloud Based Load Testing, to name a few: Load Storm, Soasta, Blaze Meter, Blitz, and others. The question you may want to ask is why you should go with Microsoft's Cloud Based Load Test offering. If you are a Microsoft shop or already have investments in Microsoft technologies, you'll see great benefit in the natural integration this offers with existing Microsoft products such as Visual Studio and Windows Azure. For example, your existing web tests authored in Visual Studio 2010 or Visual Studio 2012 will run on the cloud without requiring any modifications whatsoever. Microsoft's cloud test rig also supports API-based testing: for example, if you are building a WPF application that consumes WCF services, you can write unit tests that invoke the WCF service, and these tests can be run on the cloud test rig with 'N' concurrent users for performance testing. If your assets are already hosted in Azure, possibly in the same data centre as the cloud test rig, your Azure app will not incur a usage cost for the generated traffic, since the traffic comes from the same data centre. Licensing and pricing information for Microsoft's Cloud Based Load Test service is yet to be announced, but I would expect it to be priced attractively to match the market competition.
The only additional configuration required for running load tests on the Microsoft Cloud Based Load Test service is to set the Test run location to Run tests using Visual Studio Team Foundation Service.
How have I been using Microsoft's Cloud Based Load Test Service?
I have been part of the Microsoft Cloud Based Load Test Service advisory council for the last 7 months. This gave me the opportunity to see the product shape up from concept to working solution, and I was also the first person outside of Microsoft to try the offering out. It gave me the chance to test real-world applications at various clients using the Microsoft Load Test Service and provide real-world feedback to the Microsoft product team. One of the most recent systems I tested using the Load Test Service was an insurance quote generation engine.
This insurance quote generation engine is hosted in Windows Azure, is expected to receive quote requests from across the globe, and is expected to handle 5 million quote requests a day (it was not clear how this load would be distributed across the day). There was no way I could simulate that kind of load from on premise without standing up additional hardware, but Microsoft's Cloud Based Load Test service allowed me to test my key performance testing scenarios: simulating the expected load, endurance testing, threshold testing, and testing for latency.
Simulating expected load: approach to devising a load pattern
My approach to devising a load test pattern has been to run the test scenario with 1 user to figure out the response time, then work out how many users are required to reach the target load. So, for example, generating 1 quote from the quote engine takes 0.5 seconds, which means 1 user generates 2 quotes per second. Now if you do the math:
quotes generated by 1 user in 24 hours = 2 * 60 * 60 * 24 = 172,800
quotes generated by 30 users in 24 hours = 172,800 * 30 = 5,184,000
This was a very simple example; if your application requires more concurrent users to test scenarios such as caching, you can devise your own load pattern (some examples of load test patterns can be found here, and a generalized form of this calculation is sketched below).
Endurance Testing
To test for endurance, I loaded the quote generation engine with the expected fixed user load and ran the test for a very long duration (over 48 hours), observing the effect of the long-running test on the Azure infrastructure. Currently the Microsoft Load Test service does not support collecting metrics from the machine under test. I used Azure diagnostics to begin with, but later started using Cerebrata Azure Diagnostics Manager to capture the metrics of the machine under test.
Threshold Testing
To figure out how much user load the application could cope with before falling over, I opted to step-load the quote generation engine, incrementing the user load per minute in different variations, until the application crashed and forced an IIS reset.
Testing for Latency
Currently the Microsoft Load Test service does not support generating geographically distributed load. I did, however, deploy the insurance quote generation engine in different Azure data centres and ran the same set of performance tests to measure latency. Because I could compare load test results from different runs by exporting the results to Excel (a feature provided out of the box since Visual Studio 2010), I could see the difference in response times.
More resources on Microsoft Cloud Based Load Test Service
A few important links to get you started: download the Visual Studio Ultimate 2013 Preview, the getting started guide for load testing using Team Foundation Service, the troubleshooting guide for FAQs and known issues, the Team Foundation Service forum for questions and support, and the detailed demos and presentations (links to the Tech-Ed and Build session recordings). There are a few limits on the usage of the Microsoft Cloud Based Load Test service that you can read about here. If you have any feedback on the Microsoft Cloud Based Load Test service, feel free to share it with the product team via the Visual Studio User Voice forum.
I hope you found this useful. Thank you for taking the time out and reading this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!
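To generalize the back-of-the-envelope load-pattern arithmetic above (this formula is my restatement, using only the 0.5-second response time and the 5 million quotes/day target quoted in the post):

\[
\text{concurrent users} \;\geq\; \frac{\text{target requests per day} \times \text{response time in seconds}}{86\,400\ \text{s/day}}
\;=\; \frac{5\,000\,000 \times 0.5}{86\,400} \;\approx\; 29,
\]

which rounds up to the 30 concurrent users used in the test run described above.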

    Read the article
