Search Results

Search found 1232 results on 50 pages for 'dc'.

Page 19 of 50

  • Spaces in OU causing drupal ldapgroups lookup to deny login

    - by jgreep
    I'm trying to configure mapping of LDAP Groups to Drupal roles. The DN for the group that I have been given contains a space in the OU: CN=CommunityUsers,OU=Distribution Groups,DC=TLD,DC=AD Drupal is authenticating if there is no space, but under no circumstances can I have the space removed. Can I change the way I specify the DN?
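    One thing worth checking before rewriting the DN (the host and bind DN below are placeholders, not values from the post): per RFC 4514, a space inside an attribute value needs no escaping, so "OU=Distribution Groups" is already a legal DN; only leading or trailing spaces and characters such as , + " \ < > ; require a backslash. You can confirm the group resolves outside Drupal with a plain ldapsearch:

        # Check the group entry is reachable with the DN exactly as given (space and all).
        ldapsearch -x -H ldap://ldap.example.com \
          -D "CN=binduser,DC=TLD,DC=AD" -W \
          -b "CN=CommunityUsers,OU=Distribution Groups,DC=TLD,DC=AD" \
          -s base "(objectClass=*)" dn

        # If the module really does choke on the literal space, these escaped
        # forms describe the same DN:
        #   CN=CommunityUsers,OU=Distribution\20Groups,DC=TLD,DC=AD
        #   CN=CommunityUsers,OU=Distribution\ Groups,DC=TLD,DC=AD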

    Read the article

  • Linq 2 SQL Grouping Question

    - by Jack Marchetti
    var groups = from p in dc.Pool join pm in dc.PoolMembers on p.ID equals pm.PoolID group p by p.Group into grp select new { grp.ID }; This isn't working. Basically I want to do the grouping, and then select certain columns, but when I do select new { grp. } I get no intellisense, so I'm obviously doing something wrong. Any ideas?
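    For what it's worth, grp here is an IGrouping, so IntelliSense only offers the grouping key and aggregate operators rather than the Pool columns. A minimal sketch of the projection, assuming Pool has the ID and Group columns used above, looks like this:

        // grp is an IGrouping<TKey, Pool>: project grp.Key and aggregates,
        // not the Pool columns directly.
        var groups = from p in dc.Pool
                     join pm in dc.PoolMembers on p.ID equals pm.PoolID
                     group p by p.Group into grp
                     select new
                     {
                         Group = grp.Key,            // the value grouped by
                         Count = grp.Count(),        // aggregate over the group
                         MinId = grp.Min(x => x.ID)  // e.g. a representative ID
                     };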

    Read the article

  • Hibernate find by criteria get single result

    - by GigaPr
    Hi, I am experimenting with Hibernate. I am trying to get a User by id; this is what I do: public User get(DetachedCriteria dc){ List<User> users = getHibernateTemplate().findByCriteria(dc); if(users != null) { return users.get(0); } else return null; } but it fails when the user is not in the database. Could you help me understand how to achieve this? Thanks
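    The failure for a missing user is most likely an IndexOutOfBoundsException: findByCriteria returns an empty list rather than null when nothing matches, so users.get(0) blows up. A minimal null-safe sketch, assuming the usual Spring HibernateDaoSupport as the source of getHibernateTemplate(), could look like this:

        import java.util.List;
        import org.hibernate.criterion.DetachedCriteria;
        import org.springframework.orm.hibernate3.support.HibernateDaoSupport;

        // Sketch only: same DAO shape as in the question, guarded against an empty result.
        public class UserDao extends HibernateDaoSupport {

            @SuppressWarnings("unchecked")
            public User get(DetachedCriteria dc) {
                List<User> users = getHibernateTemplate().findByCriteria(dc);
                if (users == null || users.isEmpty()) {
                    return null;         // no matching user in the database
                }
                return users.get(0);     // first (and typically only) match
            }
        }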

    Read the article

  • Part 2 – Load Testing In The Cloud

    - by Tarun Arora
    Welcome to Part 2. In Part 1 we discussed the advantages of creating a Test Rig in the cloud, the Azure edge and the Test Rig topology we want to get to. In Part 2, let's start by understanding the components of Azure we'll be making use of, followed by manually putting them together to create the test rig. So... let's get down and dirty and start setting up the Test Rig.

    What components of Azure will I be using for building the Test Rig in the cloud? To run the Test Agents we'll make use of Windows Azure Compute, and to enable communication between the Test Controller and the Test Agents we'll make use of Windows Azure Connect.

    Azure Connect: The Test Controller is on premise and the Test Agents are in the cloud (how will they talk?). To enable communication between the two, we'll make use of Windows Azure Connect. With Windows Azure Connect, you can use a simple user interface to configure IPsec-protected connections between computers or virtual machines (VMs) in your organization's network and roles running in Windows Azure. With this you can join Windows Azure role instances to your domain, so that you can use your existing methods for domain authentication, name resolution, or other domain-wide maintenance actions. For more details refer to the overview of Windows Azure Connect, and to a very useful video explaining everything you wanted to know about Windows Azure Connect.

    Azure Compute: Windows Azure Compute provides developers a platform to host and manage applications in Microsoft's data centres across the globe. A Windows Azure application is built from one or more components called 'roles'. Roles come in three different types: Web role, Worker role, and Virtual Machine (VM) role; we'll be using the Worker role to set up the Test Agents. A very nice blog post discusses the difference between the three role types. Developers are free to use the .NET framework or other software that runs on Windows with the Worker role or Web role; developers can also create applications using languages such as PHP and Java. More on Windows Azure Compute. Each Windows Azure compute instance represents a virtual server:

    Virtual Machine Size | CPU Cores | Memory   | Cost Per Hour
    Extra Small          | Shared    | 768 MB   | $0.04
    Small                | 1         | 1.75 GB  | $0.12
    Medium               | 2         | 3.50 GB  | $0.24
    Large                | 4         | 7.00 GB  | $0.48
    Extra Large          | 8         | 14.00 GB | $0.96

    You might want to review the Windows Azure Pricing FAQ. Let's get started building the Test Rig. The configuration:

    Machine | Role                              | Comments
    VM – 1  | Domain Controller for Playpit.com | On premise
    VM – 2  | TFS, Test Controller              | On premise
    VM – 3  | Test Agent                        | Cloud

    In this blog post I assume that you already have the domain, Team Foundation Server and Test Controller installed and set up. If not, please refer to the TFS 2010 Installation Guide and this walkthrough on MSDN to set up your Test Controller. You can also download a preconfigured TFS 2010 VM from Brian Keller's blog; Brian also has some great hands-on labs on TFS 2010 that you may want to explore.

    I. Let's start building VM – 3: The Test Agent. Download the Windows Azure SDK and Tools. Open Visual Studio and create a new Windows Azure Project using the Cloud template. Choose the Worker Role, for reasons explained in the earlier post. The WorkerRole.cs implements the Run() and OnStart() methods; no code changes are required. You should be able to compile the project and run it in the compute emulator (the compute emulator should have been installed as part of the Windows Azure Toolkit) on your local machine.
We will only be making changes to WindowsAzureProject, open ServiceDefinition.csdef. Ensure that the vmsize is small (remember the cost chart above). Import the “Connect” module. I am importing the Connect module because I need to join the Worker role VM to the Playpit domain. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect"/> </Imports> </WorkerRole> </ServiceDefinition> Go to the ServiceConfiguration.Cloud.cscfg and note that settings with key ‘Microsoft.WindowsAzure.Plugins.Connect.%%%%’ have been added to the configuration file. This is because you decided to import the connect module. See the config below. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration>             Let’s go step by step and understand all the highlighted parameters and where you can find the values for them.       osFamily – By default this is set to 1 (Windows Server 2008 SP2). Change this to 2 if you want the Windows Server 2008 R2 operating system. The Advantage of using osFamily = “2” is that you get Powershell 2.0 rather than Powershell 1.0. In Powershell 2.0 you could simply use “powershell -ExecutionPolicy Unrestricted ./myscript.ps1” and it will work while in Powershell 1.0 you will have to change the registry key by including the following in your command file “reg add HKLM\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell /v ExecutionPolicy /d Unrestricted /f” before you can execute any power shell. The other reason you might want to move to os2 is if you wanted IIS 7.5.       Activation Token – To enable communication between the on premise machine and the Windows Azure Worker role VM both need to have the same token. Log on to Windows Azure Management Portal, click on Connect, click on Get Activation Token, this should give you the activation token, copy the activation token to the clipboard and paste it in the configuration file. 
Note – Later in the blog I’ll be showing you how to install connect on the on premise machine.                       EnableDomainJoin – Set the value to true, ofcourse we want to join the on windows azure worker role VM to the domain.       DomainFQDN, DomainControllerFQDN, DomainAccountName, DomainPassword, DomainOU, Administrators – This information is specific to your domain. I have extracted this information from the ‘service manager’ and ‘Active Directory Users and Computers’. Also, i created a new Domain-OU namely ‘CloudInstances’ so all my cloud instances joined to my domain show up here, this is optional. You can encrypt the DomainPassword – refer to the instructions here. Or hold fire, I’ll be covering that when i come to certificates and encryption in the coming section.       Now once you have filled all this information up, the configuration file should look something like below, <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration> Next we will be enabling the Remote Desktop module in to the ServiceDefinition.csdef, we could make changes manually or allow a beautiful wizard to help us make changes. I prefer the second option. So right click on the Windows Azure project and choose Publish       Now once you get the publish wizard, if you haven’t already you would be asked to import your Windows Azure subscription, this is simply the Msdn subscription activation key xml. Once you have done click Next to go to the Settings page and check ‘Enable Remote Desktop for all roles’.       As soon as you do that you get another pop up asking you the details for the user that you would be logging in with (make sure you enter a reasonable expiry date, you do not want the user account to expire today). Notice the more information tag at the bottom, click that to get access to the certificate section. See screen shot below.       
From the drop down select the option to create a new certificate        In the pop up window enter the friendly name for your certificate. In my case I entered ‘WAC – Test Rig’ and click ok. This will create a new certificate for you. Click on the view button to see the certificate details. Do you see the Thumbprint, this is the value that will go in the config file (very important). Now click on the Copy to File button to copy the certificate, we will need to import the certificate to the windows Azure Management portal later. So, make sure you save it a safe location.                                Click Finish and enter details of the user you would like to create with permissions for remote desktop access, once you have entered the details on the ‘Remote desktop configuration’ screen click on Ok. From the Publish Windows Azure Wizard screen press Cancel. Cancel because we don’t want to publish the role just yet and Yes because we want to save all the changes in the config file.       Now if you go to the ServiceDefinition.csdef file you will see that the RemoteAccess and RemoteForwarder roles have been imported for you. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect" /> <Import moduleName="RemoteAccess" /> <Import moduleName="RemoteForwarder" /> </Imports> </WorkerRole> </ServiceDefinition> Now go to the ServiceConfiguration.Cloud.cscfg file and you see a whole bunch for setting “Microsoft.WindowsAzure.Plugins.RemoteAccess.%%%” values added for you. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" 
value="MIIBnQYJKoZIhvcNAQcDoIIBjjCCAYoCAQAxggFOMIIBSgIBADAyMB4xHDAaBgNVBAMME1dpbmRvd 3MgQXp1cmUgVG9vbHMCEGa+B46voeO5T305N7TSG9QwDQYJKoZIhvcNAQEBBQAEggEABg4ol5Xol66Ip6QKLbAPWdmD4ae ADZ7aKj6fg4D+ATr0DXBllZHG5Umwf+84Sj2nsPeCyrg3ZDQuxrfhSbdnJwuChKV6ukXdGjX0hlowJu/4dfH4jTJC7sBWS AKaEFU7CxvqYEAL1Hf9VPL5fW6HZVmq1z+qmm4ecGKSTOJ20Fptb463wcXgR8CWGa+1w9xqJ7UmmfGeGeCHQ4QGW0IDSBU6ccg vzF2ug8/FY60K1vrWaCYOhKkxD3YBs8U9X/kOB0yQm2Git0d5tFlIPCBT2AC57bgsAYncXfHvPesI0qs7VZyghk8LVa9g5IqaM Cp6cQ7rmY/dLsKBMkDcdBHuCTAzBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECDRVifSXbA43gBApNrp40L1VTVZ1iGag+3O1" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2012-11-27T23:59:59.0000000+00:00" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" /> </ConfigurationSettings> <Certificates> <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" thumbprint="AA23016CF0BDFC344400B5B82706B608B92E4217" thumbprintAlgorithm="sha1" /> </Certificates> </Role> </ServiceConfiguration>          Okay let’s look at them one at a time,       Enabled - Yes, we would like to enable Remote Access.       AccountUserName – This is the user name you entered while you were on the publish windows azure role screen, as detailed above.       AccountEncrytedPassword – Try and decode that, the certificate is used to encrypt the password you specified for the user account. Remember earlier i said, either use the instructions or wait and i’ll be showing you encryption, now the user account i am using for rdp has the same password as my domain password, so i can simply copy the value of the AccountEncryptedPassword to the DomainPassword as well.       AccountExpiration – This is the expiration as you specified in the wizard earlier, make sure your account does not expire today.       Remote Forwarder – Check out the documentation, below is how I understand it, -- One role in an application that implements a remote desktop connection must import the RemoteForwarder module. The two modules work together to enable the remote desktop connections to role instances. -- If you have multiple roles defined in the service model, it does not matter which role you add the RemoteForwarder module to, but you must add it to only one of the role definitions.       Certificate – Remember the certificate thumbprint from the wizard, the on premise machine and windows azure role machine that need to speak to each other must have the same thumbprint. More on that when we install Windows Azure connect Endpoints on the on premise machine. As i said earlier, in this blog post, I’ll be showing you the manual process so i won’t be scripting any star up tasks to install the test agent or register the test agent with the TFS Server. I’ll be showing you all this cool stuff in the next blog post, that’s because it’s important to understand the manual side of it, it becomes easier for you to troubleshoot in case something fails. Having said that, the changes we have made are sufficient to spin up the Windows Azure Worker Role aka Test Agent VM, have it connected with the play.pit.com domain and have remote access enabled on it. Before we deploy the Test Agent VM we need to set up Windows Azure Connect on the TFS Server. II. Windows Azure Connect: Setting up Connect on VM – 2 i.e. TFS & Test Controller Glad you made it so far, now to enable communication between the on premise TFS/Test Controller and Azure-ed Test Agent we need to enable communication. 
We have set up the Azure Connect module in the Test Agent configuration; now the Connect endpoints need to be enabled on the on-premise machines. Let's have a look at how we can do this. Log on to VM – 2, running the TFS Server and Test Controller. Log on to the Windows Azure Management Portal and click on Virtual Network. If you already have a subscription you should see the screen shot below; if not, you will be asked to complete the subscription first. Click on Install Local Endpoints at the top left of the panel and you get a URL with a token id appended to it; remember the token I showed you earlier, in theory the token you get here should match the token you added to the Test Agent config file. Copy the URL to the clipboard and paste it into Internet Explorer (important: the installation at present only works out of IE, and you need to have cookies enabled in order to complete the installation). As stated in the pop-up, you can NOT download and run the software later; you need to run it as is, since it contains a token. Once the installation completes you should see the Windows Azure Connect icon in the system tray. Right-click the Azure Connect icon, choose Diagnostics and refer to this link for diagnostic detail terminology. NOTE – Unfortunately I could not see the Windows Azure Connect icon in the system tray; a bit of binging with Google revealed that the Azure Connect icon is only shown when the 'Windows Azure Connect Endpoint' service is started. So go to services.msc and make sure that the service is started; if not, start it. Unfortunately again, the service did not start for me on a manual start, and I realised that one of the dependent services was disabled; you can look at the service dependencies, start them, and then start Windows Azure Connect. Bottom line: you need to start the Windows Azure Connect service before you can proceed. Please refer here on MSDN for more on troubleshooting Windows Azure Connect. (Follow the next step as well.) Now go back to the Windows Azure Management Portal and, from Groups and Roles, create a new group; let's call it 'Test Rig'. Make sure you add VM – 2 (the TFS Server VM where you just installed the endpoint). Now, if you go back to the Azure Connect icon in the system tray and click 'Refresh Policy', you will notice that the disconnected status of the icon changes to ready for connection.

III. Importing the Certificate into the Windows Azure Management Portal. Before publishing you need to import the certificate you created in Step I into the Windows Azure Management Portal. Log on to the Windows Azure Management Portal and click on 'Hosted Services, Storage Accounts & CDN', then 'Management Certificates', followed by Add Certificates as shown in the screen shot below. Browse to the location where you saved the certificate earlier (refer to Step I in case you forgot). Now you should be able to see the imported certificate here; make sure the thumbprint of the certificate matches the one you inserted in the config files.

IV. Publish the Windows Azure Worker Role aka Test Agent. Having completed I, II and III, you are ready to publish the Test Agent VM – 3 to the cloud. Go to Visual Studio, right-click the Windows Azure project and select Publish. Verify the information in the wizard; from the advanced settings tab, you can also enable capture of IntelliTrace or profiling information. Click Next and click Publish!
From the View menu select the Windows Azure Activity Log window. Now you should be able to see the deployment progress in real time. In the Windows Azure Management Portal, you should also be able to see the progress of the creation of the new Worker Role. Once the deployment is complete you should be able to RDP (go to the run prompt, type mstsc and, in the pop-up, enter the machine name) into the Test Agent Worker Role VM from the Playpit network using the domain admin user account. In case you are unable to log in to the Test Agent using the domain admin user account, it means the process of joining the Test Agent to the domain has failed! But the good news is that, because you imported the Connect module, you can connect to the Test Agent machine using the Windows Azure Management Portal and troubleshoot the reason for failure: you will be able to log in with the user name and password you specified in the config file for the keys RemoteAccess.AccountUsername and RemoteAccess.AccountEncryptedPassword (just enter the password unencrypted), then fix it or manually join the machine to the domain. Once you have managed to join the Test Agent VM to the domain, move to the next step. So, log in to the Test Agent Worker Role VM with the Playpit domain administrator and verify that you can log in, that the machine is connected to the domain and that the Connect service is running successfully. If yes, give yourself a pat on the back: you are 80% mission accomplished! Go to the Windows Azure Management Portal, click on Virtual Network, click on Groups and Roles, click on Test Rig and click Edit Group to edit the Test Rig group you created earlier. In the Connect to section, click on Add to select the worker role you have just deployed. Also, check 'Allow connections between endpoints in the group'; with this you enable communication between the test controller and the test agents, and between the test agents themselves. Click Save. Now you are ready to deploy the Test Agent software on the Worker Role Test Agent VM and configure it to work with the Test Controller.

V. Configuring VM – 3: Installing the Test Agent and Associating the Test Agent to the Controller. Log in to the Worker Role Test Agent VM that you have just successfully deployed; make sure you log in with the domain administrator account. Download the All Agents software from MSDN, 'en_visual_studio_agents_2010_x86_x64_dvd_509679.iso', extract the iso and navigate to where you have extracted it. In my case, I have extracted the iso to "C:\Resources\Temp\VsAgentSetup". Open the Test Agent folder and double-click setup.exe. Once you have installed the Test Agent you should reach the configuration window. If you face any issues installing the TFS Test Agent on the VM, refer to the walkthrough on MSDN. Once you have successfully installed the Test Agent software you will need to configure it. Right-click the test agent configuration tool and run it as a different user, i.e. an Administrator; this is really to run the configuration wizard with elevated privileges (you might have UAC blocking some things otherwise). In the run options, you can select 'service'; you do not need to run the agent interactively unless you are running coded UI tests. I have specified the domain administrator to connect to the TFS Test Controller. In real life I would never do that; I would create a separate test user service account for this purpose.
But for the blog post, we are using the most powerful user so that any policies or restrictions don't block you. Click the Apply Settings button and you should be all green! If not, the summary usually gives helpful error messages that you can resolve before proceeding. In my experience, you may run into either a permission issue or a firewall blocking communication. And now the moment of truth! Go to VM – 2, open up Visual Studio and, from the Test menu, select Manage Test Controller. Mission accomplished! You should be able to see the Test Agent that you have just configured here.

VI. Creating and Running Load Tests on your brand new Azure-ed Test Rig. I have various blog posts on performance testing with Visual Studio Ultimate; you can follow the links and videos below.
Blog posts:
- Part 1 – Performance Testing using Visual Studio 2010 Ultimate
- Part 2 – Performance Testing using Visual Studio 2010 Ultimate
- Part 3 – Performance Testing using Visual Studio 2010 Ultimate
Videos:
- Test Tools Configuration & Settings in Visual Studio
- Why & How to Record Web Performance Tests in Visual Studio Ultimate
- Goal Driven Load Testing using Visual Studio Ultimate
Now that you have created your load tests, there is one last change you need to make before you can run them on your Azure Test Rig: create a new test settings file, change the test execution method to 'Remote Execution' and select the test controller you have configured the Worker Role Test Agent against, in our case VM – 2. So, go on, fire off a test run and see the results of the test being executed on the Azure-ed Test Rig.

Review and what's next? A quick recap of the benefits of running the Test Rig in the cloud, and what I will be covering in the next blog post. And I would love to hear your feedback!
Advantages:
- Utilizing the power of Azure compute to run a heavy virtual user load.
- Benefiting from the Azure flexibility: destroy Test Agents when not in use; it takes less than 25 minutes to spin up a new Test Agent.
- Most importantly, testing network latency (network latency and speed of connection are two different things – network latency is usually very hard to test): by placing the Test Agents in Microsoft data centres around the globe, you can actually test the lag in transferring the bytes, not because of a slow connection but because the page has been requested from the other side of the globe.
Next steps: The process of spinning up the Test Agents in Windows Azure is not 100% automated. I am working on the worker process and PowerShell scripts to make the role deployment, the unattended install of the test agent software and the registration of the test agent with the test controller automated (a rough sketch of the idea follows at the end of this post). In the next blog post I will show you how to make the complete process unattended and automated. Remember to subscribe to http://feeds.feedburner.com/TarunArora. Hope you enjoyed this post, I would love to hear your feedback! If you have any recommendations on things that I should consider, or any questions or feedback, feel free to leave a comment. See you in Part III.
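    As a footnote to the next steps above: the unattended setup would typically hang off a startup task in ServiceDefinition.csdef. The snippet below is only a rough sketch of the shape, not the author's actual scripts; the file name InstallTestAgent.cmd and what it would do (for example, call a PowerShell script that installs and registers the test agent) are placeholder assumptions.

        <!-- Hedged sketch: an elevated startup task added to the worker role definition. -->
        <WorkerRole name="WorkerRole1" vmsize="Small">
          <Startup>
            <!-- Runs when the role instance starts, with administrative rights. -->
            <Task commandLine="InstallTestAgent.cmd" executionContext="elevated" taskType="background" />
          </Startup>
          <Imports>
            <Import moduleName="Diagnostics" />
            <Import moduleName="Connect" />
            <Import moduleName="RemoteAccess" />
            <Import moduleName="RemoteForwarder" />
          </Imports>
        </WorkerRole>

    On osFamily "2", InstallTestAgent.cmd could be as small as "powershell -ExecutionPolicy Unrestricted .\ConfigureTestAgent.ps1", as discussed in the osFamily notes above; ConfigureTestAgent.ps1 is likewise a placeholder name.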

    Read the article

  • windows 2003 server : can't join domain

    - by phill
    I originally tried to rejoin a computer to a network which led to a "cannot find domain" error. The username/password box don't even come up. some tests i ran: I can ping the server, however I can't ping the domain name domain1.local. nslookup can't find the domain either. It looks to the isp's dns instead of my own to resolve the local machines. So i go to the dns and run netdiag.exe and gives me this error. DNS test . . . . . . . . . . . . . : Failed [WARNING] Cannot find a primary authoritative DNS server for the name 'stmartinsrv.stmartin.local.'. [RCODE_SERVER_FAILURE] The name 'srv.domain1.local.' may not be registered in DNS. [WARNING] The DNS entries for this DC are not registered correctly on DNS se rver '68.94.156.1'. Please wait for 30 minutes for DNS server replication. [WARNING] The DNS entries for this DC are not registered correctly on DNS se rver '68.94.157.1'. Please wait for 30 minutes for DNS server replication. [FATAL] No DNS servers have the DNS records for this DC registered. Redir and Browser test . . . . . . : Passed List of NetBt transports currently bound to the Redir NetBT_Tcpip_{04BB0F6B-06AE-4D60-80C8-2A7A24C1D87B} The redir is bound to 1 NetBt transport. List of NetBt transports currently bound to the browser NetBT_Tcpip_{04BB0F6B-06AE-4D60-80C8-2A7A24C1D87B} The browser is bound to 1 NetBt transport. then running dcdiag C:\Program Files\Support Toolsdcdiag Domain Controller Diagnosis Performing initial setup: Done gathering initial info. Doing initial required tests Testing server: Default-First-Site-Name\SRV Starting test: Connectivity The host 1c99f63c-49ec-40db-b3d3-6265c00fbd3e._msdcs.domain1.local cou ld not be resolved to an IP address. Check the DNS server, DHCP, server name, etc Although the Guid DNS name (1c99f63c-49ec-40db-b3d3-6265c00fbd3e._msdcs.domain1.local) couldn't be resolved, the server name (srv.domain1.local) resolved to the IP address (192.168.1.21) and was pingable. Check that the IP address is registered correctly with the DNS server. ......................... SRV failed test Connectivity Doing primary tests Testing server: Default-First-Site-Name\SRV Skipping all tests, because server SRV is not responding to directory service requests Running partition tests on : ForestDnsZones Starting test: CrossRefValidation ......................... ForestDnsZones passed test CrossRefValidation Starting test: CheckSDRefDom ......................... ForestDnsZones passed test CheckSDRefDom Running partition tests on : DomainDnsZones Starting test: CrossRefValidation ......................... DomainDnsZones passed test CrossRefValidation Starting test: CheckSDRefDom ......................... DomainDnsZones passed test CheckSDRefDom Running partition tests on : Schema Starting test: CrossRefValidation ......................... Schema passed test CrossRefValidation Starting test: CheckSDRefDom ......................... Schema passed test CheckSDRefDom Running partition tests on : Configuration Starting test: CrossRefValidation ......................... Configuration passed test CrossRefValidation Starting test: CheckSDRefDom ......................... Configuration passed test CheckSDRefDom Running partition tests on : domain1 Starting test: CrossRefValidation ......................... domain1 passed test CrossRefValidation Starting test: CheckSDRefDom ......................... domain1 passed test CheckSDRefDom Running enterprise tests on : domain1.local Starting test: Intersite ......................... 
domain1.local passed test Intersite Starting test: FsmoCheck ......................... domain1.local passed test FsmoCheck From previous postings, I've tried adding the domain suffix to the NIC IP properties on both the client machine and the DC server, which didn't help. Note: there is only one NIC on the server. Any ideas? Thanks in advance
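    A hedged suggestion based on the output above (the interface name is a placeholder): both the client and the DC itself should point at the DC's own DNS server (192.168.1.21 per the dcdiag output) rather than the ISP's 68.94.x.x resolvers, and the DC's records then need to be re-registered.

        rem Point the NIC at the DC's DNS instead of the ISP's.
        rem "Local Area Connection" is a placeholder for the actual interface name.
        netsh interface ip set dns name="Local Area Connection" static 192.168.1.21

        rem Flush the resolver cache and re-register the host and SRV records.
        ipconfig /flushdns
        ipconfig /registerdns
        net stop netlogon && net start netlogon

        rem Re-run the checks afterwards.
        nslookup domain1.local 192.168.1.21
        netdiag /test:dns /fix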

    Read the article

  • Active Directory Time Synchronisation - Time-Service Event ID 50

    - by George
    I have an Active Directory domain with two DCs. The first DC in the forest/domain is Server 2012, the second is 2008 R2. The first DC holds the PDC Emulator role. I sporadically receive a warning from the Time-Service source, event ID 50: The time service detected a time difference of greater than %1 milliseconds for %2 seconds. The time difference might be caused by synchronization with low-accuracy time sources or by suboptimal network conditions. The time service is no longer synchronized and cannot provide the time to other clients or update the system clock. When a valid time stamp is received from a time service provider, the time service will correct itself. Time sync in the domain is configured with the second DC to synchronise using the /syncfromflags:DOMHIER flag. The first DC is configured to sync time using a /syncfromflags:MANUAL /reliable:YES, from a peerlist consisting of a number of UK based stratum 2 servers, such as ntp2d.mcc.ac.uk. I'm confused why I receive this event warning. It implies that my PDC emulator cannot synchronise time with a supposedly reliable external time source, and it quotes a time difference of 5 seconds for 900 seconds. It's worth also mentioning that I used to use a UK pool from ntp.org but I would receive the warning much more often. Since updating to a number of UK based academic time servers, it seems to be more reliable. Can someone with more experience shed some light on this - perhaps it is purely transient? Should I disregard the warning? Is my configuration sound? EDIT: I should add that the DCs are virtual, and installed on two separate VMware ESXi/vSphere physical hosts. I can also confirm that as per MDMarra's comment and best practice, VMware timesync is disabled, since: c:\Program Files\VMware\VMware Tools\VMwareToolboxCmd.exe timesync status returns Disabled. EDIT 2 Some strange new issue has cropped up. I've noticed a pattern. Originally, the event ID 50 warnings would occur at about 1230pm each day. This is interesting since our veeam backup happens at 12 midday. Since I made the changes discussed here, I now receive an event ID 51 instead of 50. The new warning says that: The time sample received from peer server.ac.uk differs from the local time by -40 seconds (Or approximately 40 seconds). This has happened two days in a row. Now I'm even more confused. Obviously the time never updates until I manually intervene. The issue seems to be related to virtualisation and veeam. Something may be occuring when veeam is backing up the PDCe. Any suggestions? UPDATE & SUMMARY msemack's excellent list of resources below (the accepted answer) provided enough information to correctly configure the time service in the domain. This should be the first port of call for any future people looking to verify their configuration. The final "40 second jump" issue I have resolved (there are no more warnings) through adjusting the VMware time sync settings as noted in the veeam knowledge base article here: http://www.veeam.com/kb1202 In any case, should any future reader use ESXi, veeam or not, the resources here are an excellent source of information on the time sync topic and msemack's answer is particularly invaluable.
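    For reference, the configuration described above can be re-checked from an elevated prompt on each DC. This is just a restatement of the existing setup as commands (the peer below is one of the question's own sources, and 0x8 marks it as a client-mode peer), not a new recommendation:

        rem On the PDC emulator (the 2012 DC): external peers, marked reliable.
        w32tm /config /manualpeerlist:"ntp2d.mcc.ac.uk,0x8" /syncfromflags:manual /reliable:yes /update
        net stop w32time && net start w32time

        rem On the second DC: follow the domain hierarchy.
        w32tm /config /syncfromflags:domhier /update
        net stop w32time && net start w32time

        rem Check the source, stratum and offset on both DCs.
        w32tm /query /status
        w32tm /stripchart /computer:ntp2d.mcc.ac.uk /samples:5 /dataonly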

    Read the article

  • Troubleshooting sudoers via ldap

    - by dafydd
    The good news is that I got sudoers via ldap working on Red Hat Directory Server. The package is sudo-1.7.2p1. I have some LDAP/Kerberos users in an LDAP group called wheel, and I have this entry in LDAP: # %wheel, SUDOers, example.com dn: cn=%wheel,ou=SUDOers,dc=example,dc=com cn: %wheel description: Members of group wheel have access to all privileges. objectClass: sudoRole objectClass: top sudoCommand: ALL sudoHost: ALL sudoUser: %wheel So, members of group wheel have administrative privileges via sudo. This has been tested and works fine. Now, I have this other sudo privilege set up to allow members of a group called Administrators to perform two commands as the non-root owner of those commands. # %Administrators, SUDOers, example.com dn: cn=%Administrators,ou=SUDOers,dc=example,dc=com sudoRunAsGroup: appGroup sudoRunAsUser: appOwner cn: %Administrators description: Allow members of the group Administrators to run various commands . objectClass: sudoRole objectClass: top sudoCommand: appStop sudoCommand: appStart sudoCommand: /path/to/appStop sudoCommand: /path/to/appStart sudoUser: %Administrators Unfortunately, members of Administrators are still refused permission to run appStart or appStop: -bash-3.2$ sudo /path/to/appStop [sudo] password for Aaron: Sorry, user Aaron is not allowed to execute '/path/to/appStop' as root on host.example.com. -bash-3.2$ sudo -u appOwner /path/to/appStop [sudo] password for Aaron: Sorry, user Aaron is not allowed to execute '/path/to/appStop' as appOwner on host.example.com. /var/log/secure shows me these two sets of messages for the two attempts: Oct 31 15:02:36 host sudo: pam_unix(sudo:auth): authentication failure; logname=Aaron uid=0 euid=0 tty=/dev/pts/3 ruser= rhost= user=Aaron Oct 31 15:02:37 host sudo: pam_krb5[1508]: TGT verified using key for 'host/[email protected]' Oct 31 15:02:37 host sudo: pam_krb5[1508]: authentication succeeds for 'Aaron' ([email protected]) Oct 31 15:02:37 host sudo: Aaron : command not allowed ; TTY=pts/3 ; PWD=/auto/home/Aaron ; USER=root ; COMMAND=/path/to/appStop Oct 31 15:02:52 host sudo: pam_unix(sudo:auth): authentication failure; logname=Aaron uid=0 euid=0 tty=/dev/pts/3 ruser= rhost= user=Aaron Oct 31 15:02:52 host sudo: pam_krb5[1547]: TGT verified using key for 'host/[email protected]' Oct 31 15:02:52 host sudo: pam_krb5[1547]: authentication succeeds for 'Aaron' ([email protected]) Oct 31 15:02:52 host sudo: Aaron : command not allowed ; TTY=pts/3 ; PWD=/auto/home/Aaron ; USER=appOwner; COMMAND=/path/to/appStop The questions: Does sudo have some sort of verbose or debug mode where I can actually watch it capture the sudoers privilege list and determine whether or not Aaron should have the privilege to run this command? (This question is probably independent of where the sudoers database is kept.) Does sudo work with some background mechanism that might have a log level I could turn up? Right now, I can't fix a problem I can't identify. Is this an LDAP search failure? Is this a group member matching failure? Identifying why the command fails will help me identify the fix... Next step: Recreate the privilege in /etc/sudoers, and see if it works locally... Cheers!
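    On the debugging question: sudo's LDAP backend does have a knob for this. A sudoers_debug directive in the LDAP configuration file that sudo was built against (often /etc/ldap.conf on Red Hat builds of this vintage; some packages use /etc/sudo-ldap.conf, so the exact path is an assumption here) makes sudo print the LDAP queries it runs and which sudoRole entries matched, and sudo -l shows the privileges it ended up with.

        # In the ldap.conf read by sudo (path varies by build):
        #   sudoers_debug 2    # 1 = summary, 2 = dump queries and matching sudoRole entries

        # Then reproduce the failure, or ask sudo directly what it would allow:
        sudo -l               # as Aaron
        sudo -l -U Aaron      # as root, on sudo 1.7.x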

    Read the article

  • Iptables blocking mysql port 3306

    - by valmar
    I got a Tomcat server running a web application that must access a mysql server via Hibernate on the same machine. So, I added a rule for port 3306 to my iptables script but tomcat cannot connect to the mysql server for some reason. I need to reset all iptables rules - Then tomcat can connect to the mysql server again. All the other iptables rules work perfectly though. What's wrong? Here is my script: iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT iptables -A INPUT -p tcp --dport 24 -j ACCEPT iptables -A INPUT -p tcp --dport 80 -j ACCEPT iptables -A OUTPUT -p tcp --dport 80 -j ACCEPT iptables -A INPUT -p tcp -s localhost --dport 8009 -m state --state ESTABLISHED -j ACCEPT iptables -A OUTPUT -p tcp -d localhost --dport 8009 -j ACCEPT iptables -A INPUT -p tcp -s localhost --dport 3306 -j ACCEPT iptables -A OUTPUT -p tcp -d localhost --dport 3306 -j ACCEPT iptables -A INPUT -p tcp --dport 443 -j ACCEPT iptables -A OUTPUT -p tcp --dport 443 -j ACCEPT iptables -A INPUT -p tcp --dport 25 -m state --state ESTABLISHED -j ACCEPT iptables -A OUTPUT -p tcp --dport 25 -j ACCEPT iptables -A INPUT -p tcp --dport 587 -m state --state ESTABLISHED -j ACCEPT iptables -A OUTPUT -p tcp --dport 587 -j ACCEPT iptables -A INPUT -p tcp --dport 465 -m state --state ESTABLISHED -j ACCEPT iptables -A OUTPUT -p tcp --dport 465 -j ACCEPT iptables -A INPUT -p tcp --dport 110 -m state --state ESTABLISHED -j ACCEPT iptables -A OUTPUT -p tcp --dport 110 -j ACCEPT iptables -A INPUT -p tcp --dport 995 -m state --state ESTABLISHED -j ACCEPT iptables -A OUTPUT -p tcp --dport 995 -j ACCEPT iptables -A INPUT -p tcp --dport 143 -m state --state ESTABLISHED -j ACCEPT iptables -A OUTPUT -p tcp --dport 143 -j ACCEPT iptables -A INPUT -p tcp --dport 993 -m state --state ESTABLISHED -j ACCEPT iptables -A OUTPUT -p tcp --dport 993 -j ACCEPT iptables -A INPUT -j DROP My /etc/hosts file: # nameserver config # IPv4 127.0.0.1 localhost 46.4.7.93 mydomain.com 46.4.7.93 Ubuntu-1004-lucid-64-minimal 46.4.7.93 horst # IPv6 ::1 ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters ff02::3 ip6-allhosts Having a look into the iptables logs, gives me this: Jun 22 16:52:43 Ubuntu-1004-lucid-64-minimal kernel: [ 435.111780] denied-input IN=lo OUT= MAC=00:00:00:00:00:00:00:00:00:00:00:00:08:00 SRC=127.0.0.1 DST=127.0.0.1 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=52432 DF PROTO=TCP SPT=56108 DPT=8009 WINDOW=32792 RES=0x00 SYN URGP=0 Jun 22 16:52:46 Ubuntu-1004-lucid-64-minimal kernel: [ 438.110555] denied-input IN=lo OUT= MAC=00:00:00:00:00:00:00:00:00:00:00:00:08:00 SRC=127.0.0.1 DST=127.0.0.1 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=52433 DF PROTO=TCP SPT=56108 DPT=8009 WINDOW=32792 RES=0x00 SYN URGP=0 Jun 22 16:52:46 Ubuntu-1004-lucid-64-minimal kernel: [ 438.231954] denied-input IN=lo OUT= MAC=00:00:00:00:00:00:00:00:00:00:00:00:08:00 SRC=127.0.0.1 DST=127.0.0.1 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=48020 DF PROTO=TCP SPT=56109 DPT=8009 WINDOW=32792 RES=0x00 SYN URGP=0 Jun 22 16:52:49 Ubuntu-1004-lucid-64-minimal kernel: [ 441.229778] denied-input IN=lo OUT= MAC=00:00:00:00:00:00:00:00:00:00:00:00:08:00 SRC=127.0.0.1 DST=127.0.0.1 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=48021 DF PROTO=TCP SPT=56109 DPT=8009 WINDOW=32792 RES=0x00 SYN URGP=0 Jun 22 16:53:57 Ubuntu-1004-lucid-64-minimal kernel: [ 508.731839] denied-input IN=eth0 OUT= MAC=6c:62:6d:85:bf:0e:00:26:88:75:dc:01:08:00 SRC=78.92.97.67 DST=46.4.7.93 LEN=64 TOS=0x00 PREC=0x00 TTL=122 ID=23053 DF PROTO=TCP SPT=1672 
DPT=445 WINDOW=65535 RES=0x00 SYN URGP=0 Jun 22 16:53:59 Ubuntu-1004-lucid-64-minimal kernel: [ 511.625038] denied-input IN=eth0 OUT= MAC=6c:62:6d:85:bf:0e:00:26:88:75:dc:01:08:00 SRC=78.92.97.67 DST=46.4.7.93 LEN=64 TOS=0x00 PREC=0x00 TTL=122 ID=23547 DF PROTO=TCP SPT=1672 DPT=445 WINDOW=65535 RES=0x00 SYN URGP=0 Jun 22 16:54:22 Ubuntu-1004-lucid-64-minimal kernel: [ 533.981995] denied-input IN=eth0 OUT= MAC=6c:62:6d:85:bf:0e:00:26:88:75:dc:01:08:00 SRC=27.254.39.16 DST=46.4.7.93 LEN=48 TOS=0x00 PREC=0x00 TTL=117 ID=6549 PROTO=TCP SPT=6005 DPT=33796 WINDOW=64240 RES=0x00 ACK SYN URGP=0 Jun 22 16:54:44 Ubuntu-1004-lucid-64-minimal kernel: [ 556.297038] denied-input IN=eth0 OUT= MAC=6c:62:6d:85:bf:0e:00:26:88:75:dc:01:08:00 SRC=94.78.93.41 DST=46.4.7.93 LEN=40 TOS=0x00 PREC=0x00 TTL=52 ID=7712 PROTO=TCP SPT=57598 DPT=445 WINDOW=512 RES=0x00 SYN URGP=0
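    One plausible reading of the log above: the dropped packets are on the loopback interface (IN=lo, SRC=127.0.0.1), and the existing 8009 INPUT rule only accepts ESTABLISHED traffic, so the initial SYN never gets through; likewise, if Tomcat connects to MySQL via the host name rather than 127.0.0.1, the packets carry the public address and match neither "-s localhost" rule. A common fix, offered here as a sketch rather than a full review of the ruleset, is to accept all loopback traffic ahead of the other rules:

        # Accept everything on the loopback interface, before the existing INPUT rules.
        iptables -I INPUT 1 -i lo -j ACCEPT
        iptables -I OUTPUT 1 -o lo -j ACCEPT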

    Read the article

  • IMAPSync Migration to Exchange 2010 SP1: Exchange drops connections while checking for existence of folders

    - by Benjamin Priestman
    I'm migrating from ZImbra Collaboration Suite to Exchange 2010 SP1. I'm testing IMAPSync as a possible migration tool and have hit a problem with the IMAP server in Exchange 2010. For each account it migrates, IMAPSync loops through the list of folders in the source mailbox and tests for the existence of each one in the destination mailbox. It then goes on to create those folders that do not exist and copy over the messages. It's the intial testing for the existence of the folders that is giving me a problem. The response given by the Exchange server when the folder does not yet exist is given as an error: "R=""16 NO IMAPSyncTest/8 doesn't exist."" After ten of these errors have been issued in succession, the Exchange server appears to stop responding to the IMAP session. Enabling protocol logging for IMAP confirms that the 10th request for a non-existant folder is the last request to be logged on the server. IMAPSync carries on merrily without seeming to realise its connection has gone and thus fails to create any folders. I've logged this with the tool's creator. Does anyone have any idea why Exchange is stopping responding to the connections though? The behaviour looks rather like throttling, although the 'ten strikes and you're out' trigger does not seem to correspond to any of the triggers on the ThrottlingPolicies. Just to check, I've tried creating a new ThrottlingPolicy, turned everything that I think might be relevant up to 11 and applied it to the my test mailbox. Policy settings are listed below, along with IMAP settings. Everything else should be pretty much as default. Throttling Policy RunspaceId : afa3159c-32a6-4906-986f-8adfbe50868b IsDefault : False AnonymousMaxConcurrency : 1 AnonymousPercentTimeInAD : AnonymousPercentTimeInCAS : AnonymousPercentTimeInMailboxRPC : EASMaxConcurrency : 10 EASPercentTimeInAD : EASPercentTimeInCAS : EASPercentTimeInMailboxRPC : EASMaxDevices : 10 EASMaxDeviceDeletesPerMonth : EWSMaxConcurrency : 10 EWSPercentTimeInAD : 50 EWSPercentTimeInCAS : 90 EWSPercentTimeInMailboxRPC : 60 EWSMaxSubscriptions : 5000 EWSFastSearchTimeoutInSeconds : 60 EWSFindCountLimit : 1000 IMAPMaxConcurrency : 1000 IMAPPercentTimeInAD : 400 IMAPPercentTimeInCAS : 400 IMAPPercentTimeInMailboxRPC : 400 OWAMaxConcurrency : 5 OWAPercentTimeInAD : 30 OWAPercentTimeInCAS : 150 OWAPercentTimeInMailboxRPC : 150 POPMaxConcurrency : 20 POPPercentTimeInAD : POPPercentTimeInCAS : POPPercentTimeInMailboxRPC : PowerShellMaxConcurrency : 18 PowerShellMaxTenantConcurrency : PowerShellMaxCmdlets : PowerShellMaxCmdletsTimePeriod : ExchangeMaxCmdlets : PowerShellMaxCmdletQueueDepth : PowerShellMaxDestructiveCmdlets : PowerShellMaxDestructiveCmdletsTimePeriod : RCAMaxConcurrency : 1000 RCAPercentTimeInAD : 400 RCAPercentTimeInCAS : 400 RCAPercentTimeInMailboxRPC : 400 CPAMaxConcurrency : 20 CPAPercentTimeInCAS : 205 CPAPercentTimeInMailboxRPC : 200 MessageRateLimit : RecipientRateLimit : ForwardeeLimit : CPUStartPercent : 75 AdminDisplayName : ExchangeVersion : 0.10 (14.0.100.0) Name : TestMigrationThrottling DistinguishedName : CN=TestMigrationThrottling,CN=Global Settings,CN=Our Company,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=cimex,DC=com Identity : TestMigrationThrottling Guid : 240049b3-2023-4df1-8edc-fbfc1fc80b87 ObjectCategory : domain.com/Configuration/Schema/ms-Exch-Throttling-Policy ObjectClass : {top, msExchGenericPolicy, msExchThrottlingPolicy} WhenChanged : 21/04/2011 18:48:19 WhenCreated : 21/04/2011 18:07:20 WhenChangedUTC : 21/04/2011 17:48:19 
WhenCreatedUTC : 21/04/2011 17:07:20 OrganizationId : OriginatingServer : a-domain-controller IsValid : True IMAPSettings RunspaceId : afa3159c-32a6-4906-986f-8adfbe50868b ProtocolName : IMAP4 Name : 1 MaxCommandSize : 10240 ShowHiddenFoldersEnabled : False UnencryptedOrTLSBindings : {192.168.x.x:143} SSLBindings : {192.168.x.x:993} InternalConnectionSettings : {mail.office.domain.com:143:TLS, mail.office.domain.com:993:SSL} ExternalConnectionSettings : {mail.office.domain.com:143:TLS, mail.office.domain.com:993:SSL} X509CertificateName : mail.domain.com Banner : The Microsoft Exchange IMAP4 service is ready. LoginType : SecureLogin AuthenticatedConnectionTimeout : 00:30:00 PreAuthenticatedConnectionTimeout : 00:01:00 MaxConnections : 2147483647 MaxConnectionFromSingleIP : 2147483647 MaxConnectionsPerUser : 16 MessageRetrievalMimeFormat : BestBodyFormat ProxyTargetPort : 143 CalendarItemRetrievalOption : iCalendar OwaServerUrl : EnableExactRFC822Size : False LiveIdBasicAuthReplacement : False SuppressReadReceipt : False ProtocolLogEnabled : True EnforceCertificateErrors : False LogFileLocation : C:\Program Files\Microsoft\Exchange Server\V14\Logging\Imap4 LogFileRollOverSettings : Daily LogPerFileSizeQuota : 0 B (0 bytes) ExtendedProtectionPolicy : None EnableGSSAPIAndNTLMAuth : True Server : CMX-OFFICE-EX01 AdminDisplayName : ExchangeVersion : 0.10 (14.0.100.0) DistinguishedName : CN=1,CN=IMAP4,CN=Protocols,CN=EXCHANGE01,CN=Servers,CN=Exchange Administrative Group (FYDIBOHF23SPDLT),CN=Administrative Groups,CN=Our COmpany,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=domain,DC=com Identity : EXCHANGE01\1 Guid : 48f9dc37-74c2-4fb0-a042-641f863f45f2 ObjectCategory : domain.com/Configuration/Schema/ms-Exch-Protocol-Cfg-IMAP-Server ObjectClass : {top, protocolCfg, protocolCfgIMAP, protocolCfgIMAPServer} WhenChanged : 21/04/2011 17:03:39 WhenCreated : 15/04/2011 13:51:58 WhenChangedUTC : 21/04/2011 16:03:39 WhenCreatedUTC : 15/04/2011 12:51:58 OrganizationId : OriginatingServer : a-domain-server IsValid : True
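    One thing that may be worth double-checking, sketched below with the Exchange Management Shell (the mailbox alias is a placeholder): that the relaxed policy is actually associated with the mailbox being migrated, and that the IMAP service has been restarted so new sessions pick up the change.

        # Associate the custom policy with the migration mailbox and confirm it took.
        Set-Mailbox -Identity "testmailbox" -ThrottlingPolicy TestMigrationThrottling
        Get-Mailbox -Identity "testmailbox" | Format-List ThrottlingPolicy

        # Restart the IMAP4 service so new IMAP sessions use the updated limits.
        Restart-Service MSExchangeImap4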

    Read the article

  • Ubuntu 14.04, OpenLDAP TLS problems

    - by larsemil
    So i have set up an openldap server using this guide here. It worked fine. But as i want to use sssd i also need TLS to be working for ldap. So i looked into and followed the TLS part of the guide. And i never got any errors and slapd started fine again. BUT. It does not seem to work when i try to use ldap over tls. root@server:~# ldapsearch -x -ZZ -H ldap://83.209.243.253 -b dc=daladevelop,dc=se ldap_start_tls: Protocol error (2) additional info: unsupported extended operation Ganking up the debug level some notches returns some more information: root@server:~# ldapsearch -x -ZZ -H ldap://83.209.243.253 -b dc=daladevelop,dc=se -d 5 ldap_url_parse_ext(ldap://83.209.243.253) ldap_create ldap_url_parse_ext(ldap://83.209.243.253:389/??base) ldap_extended_operation_s ldap_extended_operation ldap_send_initial_request ldap_new_connection 1 1 0 ldap_int_open_connection ldap_connect_to_host: TCP 83.209.243.253:389 ldap_new_socket: 3 ldap_prepare_socket: 3 ldap_connect_to_host: Trying 83.209.243.253:389 ldap_pvt_connect: fd: 3 tm: -1 async: 0 ldap_open_defconn: successful ldap_send_server_request ber_scanf fmt ({it) ber: ber_scanf fmt ({) ber: ber_flush2: 31 bytes to sd 3 ldap_result ld 0x7f25df51e220 msgid 1 wait4msg ld 0x7f25df51e220 msgid 1 (infinite timeout) wait4msg continue ld 0x7f25df51e220 msgid 1 all 1 ** ld 0x7f25df51e220 Connections: * host: 83.209.243.253 port: 389 (default) refcnt: 2 status: Connected last used: Fri Jun 6 08:52:16 2014 ** ld 0x7f25df51e220 Outstanding Requests: * msgid 1, origid 1, status InProgress outstanding referrals 0, parent count 0 ld 0x7f25df51e220 request count 1 (abandoned 0) ** ld 0x7f25df51e220 Response Queue: Empty ld 0x7f25df51e220 response count 0 ldap_chkResponseList ld 0x7f25df51e220 msgid 1 all 1 ldap_chkResponseList returns ld 0x7f25df51e220 NULL ldap_int_select read1msg: ld 0x7f25df51e220 msgid 1 all 1 ber_get_next ber_get_next: tag 0x30 len 42 contents: read1msg: ld 0x7f25df51e220 msgid 1 message type extended-result ber_scanf fmt ({eAA) ber: read1msg: ld 0x7f25df51e220 0 new referrals read1msg: mark request completed, ld 0x7f25df51e220 msgid 1 request done: ld 0x7f25df51e220 msgid 1 res_errno: 2, res_error: <unsupported extended operation>, res_matched: <> ldap_free_request (origid 1, msgid 1) ldap_parse_extended_result ber_scanf fmt ({eAA) ber: ldap_parse_result ber_scanf fmt ({iAA) ber: ber_scanf fmt (}) ber: ldap_msgfree ldap_err2string ldap_start_tls: Protocol error (2) additional info: unsupported extended operation ldap_free_connection 1 1 ldap_send_unbind ber_flush2: 7 bytes to sd 3 ldap_free_connection: actually freed So no good information there neither. In /var/log/syslog i get: Jun 6 08:55:42 master slapd[21383]: conn=1008 fd=23 ACCEPT from IP=83.209.243.253:56440 (IP=0.0.0.0:389) Jun 6 08:55:42 master slapd[21383]: conn=1008 op=0 EXT oid=1.3.6.1.4.1.1466.20037 Jun 6 08:55:42 master slapd[21383]: conn=1008 op=0 do_extended: unsupported operation "1.3.6.1.4.1.1466.20037" Jun 6 08:55:42 master slapd[21383]: conn=1008 op=0 RESULT tag=120 err=2 text=unsupported extended operation Jun 6 08:55:42 master slapd[21383]: conn=1008 op=1 UNBIND Jun 6 08:55:42 master slapd[21383]: conn=1008 fd=23 closed If i portscan the host i get the following: Starting Nmap 6.40 ( http://nmap.org ) at 2014-06-06 08:56 CEST Nmap scan report for h83-209-243-253.static.se.alltele.net (83.209.243.253) Host is up (0.0072s latency). 
Not shown: 996 closed ports PORT STATE SERVICE 22/tcp open ssh 80/tcp open http 389/tcp open ldap 636/tcp open ldapssl But when I check the certs: root@master:~# openssl s_client -connect daladevelop.se:636 -showcerts -state CONNECTED(00000003) SSL_connect:before/connect initialization SSL_connect:unknown state 140244859233952:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:177: --- no peer certificate available --- No client certificate CA names sent --- SSL handshake has read 0 bytes and written 317 bytes --- New, (NONE), Cipher is (NONE) Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE --- And I feel like I am clearly out in deep water, not knowing at all where to go from here. Any hints appreciated on what to do or how to get better debug logging... EDIT: This is my config slapcat'ed from cn=config and it does not mention anything at all about TLS. I have inserted my certinfo.ldif: root@master:~# cat certinfo.ldif dn: cn=config add: olcTLSCACertificateFile olcTLSCACertificateFile: /etc/ssl/certs/cacert.pem - add: olcTLSCertificateFile olcTLSCertificateFile: /etc/ssl/certs/daladevelop_slapd_cert.pem - add: olcTLSCertificateKeyFile olcTLSCertificateKeyFile: /etc/ssl/private/daladevelop_slapd_key.pem and when doing that I only got this as an answer. root@master:~# sudo ldapmodify -Y EXTERNAL -H ldapi:/// -f certinfo.ldif SASL/EXTERNAL authentication started SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth SASL SSF: 0 modifying entry "cn=config" So still no wiser.
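    Since the slapcat of cn=config shows no olcTLS* attributes, a reasonable first step (a sketch, assuming the stock Ubuntu cn=config access rules) is to confirm whether the ldapmodify actually stored them, make sure the key file is readable by the openldap user, and restart slapd; if the attributes are missing, re-apply the LDIF with an explicit "changetype: modify" line after the dn.

        # Did the TLS attributes land in cn=config?
        sudo ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config -LLL \
            olcTLSCACertificateFile olcTLSCertificateFile olcTLSCertificateKeyFile

        # slapd runs as the openldap user on Ubuntu, so it must be able to read the key.
        sudo chgrp openldap /etc/ssl/private/daladevelop_slapd_key.pem
        sudo chmod 640 /etc/ssl/private/daladevelop_slapd_key.pem

        # Changes to olcTLS* generally need a restart to take effect.
        sudo service slapd restart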

    Read the article

  • FFMPEG dropping frames while encoding JPEG sequence at color change

    - by Matt
    I'm trying to put together a slide show using imagemagick and FFMPEG. I use imagemagick to expand a single photo into 30fps video (imagemagick also handles things like putting some text captions on the frames along the way). When I go to let ffmpeg digest it into a video it clips along nicely on the color parts of the video, but when it gets to a black and white section it reports "frame= 2030 fps=102 q=32766.0 Lsize= 5203kB time=00:01:07.60 bitrate= 630.5kbits/s dup=0 drop=703" and drops every frame of video until it hits something with color. As you can imagine this results in entire photos being removed from the slideshow. Here is my latest dump... ffmpeg -y -r 30 -i "teststream/%06d.jpg" -c:v libx264 -r 30 newffmpeg.mp4 ffmpeg version git-2012-12-10-c3bb333 Copyright (c) 2000-2012 the FFmpeg developers built on Dec 10 2012 22:02:04 with gcc 4.6.1 (Ubuntu/Linaro 4.6.1-9ubuntu3) configuration: --enable-gpl --enable-libfaac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-librtmp --enable-libtheora --enable-libvorbis --enable-libx264 --enable-nonfree --enable-version3 libavutil 52. 12.100 / 52. 12.100 libavcodec 54. 79.101 / 54. 79.101 libavformat 54. 49.100 / 54. 49.100 libavdevice 54. 3.102 / 54. 3.102 libavfilter 3. 26.101 / 3. 26.101 libswscale 2. 1.103 / 2. 1.103 libswresample 0. 17.102 / 0. 17.102 libpostproc 52. 2.100 / 52. 2.100 Input #0, image2, from 'teststream/%06d.jpg': Duration: 00:12:02.80, start: 0.000000, bitrate: N/A Stream #0:0: Video: mjpeg, yuvj444p, 720x480 [SAR 72:72 DAR 3:2], 25 fps, 25 tbr, 25 tbn, 25 tbc [libx264 @ 0x3450140] using SAR=1/1 [libx264 @ 0x3450140] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2 [libx264 @ 0x3450140] profile High, level 3.0 [libx264 @ 0x3450140] 264 - core 129 r2 1cffe9f - H.264/MPEG-4 AVC codec - Copyleft 2003-2012 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00 Output #0, mp4, to 'newffmpeg.mp4': Metadata: encoder : Lavf54.49.100 Stream #0:0: Video: h264 ([33][0][0][0] / 0x0021), yuvj420p, 720x480 [SAR 1:1 DAR 3:2], q=-1--1, 15360 tbn, 30 tbc Stream mapping: Stream #0:0 - #0:0 (mjpeg - libx264) Press [q] to stop, [?] 
for help Input stream #0:0 frame changed from size:720x480 fmt:yuvj444p to size:720x480 fmt:yuvj422p Input stream #0:0 frame changed from size:720x480 fmt:yuvj422p to size:720x480 fmt:yuvj444pp=584 frame= 2030 fps=102 q=32766.0 Lsize= 5203kB time=00:01:07.60 bitrate= 630.5kbits/s dup=0 drop=703 video:5179kB audio:0kB subtitle:0 global headers:0kB muxing overhead 0.472425% [libx264 @ 0x3450140] frame I:9 Avg QP:20.10 size: 33933 [libx264 @ 0x3450140] frame P:636 Avg QP:24.12 size: 6737 [libx264 @ 0x3450140] frame B:1385 Avg QP:27.04 size: 514 [libx264 @ 0x3450140] consecutive B-frames: 2.5% 15.2% 13.2% 69.2% [libx264 @ 0x3450140] mb I I16..4: 8.3% 80.3% 11.5% [libx264 @ 0x3450140] mb P I16..4: 1.5% 2.5% 0.2% P16..4: 41.7% 18.0% 10.3% 0.0% 0.0% skip:25.9% [libx264 @ 0x3450140] mb B I16..4: 0.0% 0.0% 0.0% B16..8: 26.6% 0.6% 0.1% direct: 0.2% skip:72.3% L0:35.0% L1:60.3% BI: 4.7% [libx264 @ 0x3450140] 8x8 transform intra:64.1% inter:75.1% [libx264 @ 0x3450140] coded y,uvDC,uvAC intra: 51.6% 78.0% 43.7% inter: 10.6% 14.9% 2.1% [libx264 @ 0x3450140] i16 v,h,dc,p: 29% 19% 6% 46% [libx264 @ 0x3450140] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 23% 15% 17% 5% 9% 10% 7% 8% 6% [libx264 @ 0x3450140] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 31% 18% 11% 5% 9% 10% 6% 6% 4% [libx264 @ 0x3450140] i8c dc,h,v,p: 46% 18% 24% 12% [libx264 @ 0x3450140] Weighted P-Frames: Y:20.1% UV:18.7% [libx264 @ 0x3450140] ref P L0: 59.2% 23.2% 13.1% 4.3% 0.2% [libx264 @ 0x3450140] ref B L0: 88.7% 8.3% 3.0% [libx264 @ 0x3450140] ref B L1: 95.0% 5.0% [libx264 @ 0x3450140] kb/s:626.88 Received signal 2: terminating. One last note: If I remove the -r 30 from the input and output it works flawlessly. I have no idea why the -r 30 is causing it to freak out.
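
    One thing worth trying (a sketch, not verified against this exact sequence): with an image2 input the intended rate is normally given with -framerate rather than -r, and the log above also shows the input pixel format flipping between yuvj444p and yuvj422p at the black-and-white photos, so forcing a single output pixel format gives libx264 consistent input:

        # -framerate sets the image2 demuxer's input rate; -pix_fmt pins the format so
        # greyscale JPEGs (yuvj422p) and colour JPEGs (yuvj444p) are handled alike
        ffmpeg -y -framerate 30 -i "teststream/%06d.jpg" -c:v libx264 -r 30 -pix_fmt yuv420p newffmpeg.mp4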

    Read the article

  • Dual booting on separate hard drives

    - by tornadorider
    I have windows XP professional installed on 1 hard drive and Ubuntu 10.10 on my second hard drive. On start up the computer completely skips the grub menu and boots straight into 10.10. I have tried running os-prober with the windows hard drive mounted and then updating grub but it didnt work. Any ideas? I have changed the boot order so that the HDD with xp on it is first however the computer still booted into linux. I tried running grub-install /dev/sda and got this /usr/sbin/grub-setup: warn: Sector 32 is already in use by FlexNet; avoiding it. This software may cause boot or other problems in future. Please ask its authors not to store data in the boot track.. /usr/sbin/grub-setup: warn: Sector 33 is already in use by FlexNet; avoiding it. This software may cause boot or other problems in future. Please ask its authors not to store data in the boot track.. Installation finished. No error reported I checked using disk utility and the code for my xp hard drive is sdb so i ran the camand grub-install /dev/sdb shich gave me this Installation finished. No error reported. So i rebooted but it still didnt work. Any other ideas? Additional info gedit /boot/grub/grub.cfg: # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi set default="0" if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { insmod vbe insmod vga } insmod part_msdos insmod ext2 set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set d682c9bd-dd89-4827-9802-a1f921ebe21c if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=640x480 load_video insmod gfxterm fi terminal_output gfxterm insmod part_msdos insmod ext2 set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set d682c9bd-dd89-4827-9802-a1f921ebe21c set locale_dir=($root)/boot/grub/locale set lang=en insmod gettext if [ "${recordfail}" = 1 ]; then set timeout=-1 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### menuentry 'Ubuntu, with Linux 2.6.35-28-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod part_msdos insmod ext2 set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set d682c9bd-dd89-4827-9802-a1f921ebe21c linux /boot/vmlinuz-2.6.35-28-generic root=UUID=d682c9bd-dd89-4827-9802-a1f921ebe21c ro quiet splash initrd /boot/initrd.img-2.6.35-28-generic } menuentry 'Ubuntu, with Linux 2.6.35-28-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod part_msdos insmod ext2 set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set d682c9bd-dd89-4827-9802-a1f921ebe21c echo 'Loading Linux 2.6.35-28-generic ...' linux /boot/vmlinuz-2.6.35-28-generic root=UUID=d682c9bd-dd89-4827-9802-a1f921ebe21c ro single echo 'Loading initial ramdisk ...' 
initrd /boot/initrd.img-2.6.35-28-generic } menuentry 'Ubuntu, with Linux 2.6.35-22-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod part_msdos insmod ext2 set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set d682c9bd-dd89-4827-9802-a1f921ebe21c linux /boot/vmlinuz-2.6.35-22-generic root=UUID=d682c9bd-dd89-4827-9802-a1f921ebe21c ro quiet splash initrd /boot/initrd.img-2.6.35-22-generic } menuentry 'Ubuntu, with Linux 2.6.35-22-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod part_msdos insmod ext2 set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set d682c9bd-dd89-4827-9802-a1f921ebe21c echo 'Loading Linux 2.6.35-22-generic ...' linux /boot/vmlinuz-2.6.35-22-generic root=UUID=d682c9bd-dd89-4827-9802-a1f921ebe21c ro single echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-2.6.35-22-generic } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod part_msdos insmod ext2 set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set d682c9bd-dd89-4827-9802-a1f921ebe21c linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod part_msdos insmod ext2 set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set d682c9bd-dd89-4827-9802-a1f921ebe21c linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### if [ "x${timeout}" != "x-1" ]; then if keystatus; then if keystatus --shift; then set timeout=-1 else set timeout=0 fi else if sleep --interruptible 3 ; then set timeout=0 fi fi fi ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ### sudo fdisk -l: Disk /dev/sda: 80.1 GB, 80060424192 bytes 255 heads, 63 sectors/track, 9733 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0008a483 Device Boot Start End Blocks Id System /dev/sda1 * 1 9352 75112448 83 Linux /dev/sda2 9352 9734 3068929 5 Extended /dev/sda5 9352 9734 3068928 82 Linux swap / Solaris Disk /dev/sdb: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xc5d6c5d6 Device Boot Start End Blocks Id System /dev/sdb1 1 60800 488375968+ 7 HPFS/NTFS sudo blkid /dev/sda1: UUID="d682c9bd-dd89-4827-9802-a1f921ebe21c" TYPE="ext4" /dev/sda5: UUID="09e9c2cb-d903-4f0b-a181-536951845231" TYPE="swap" /dev/sdb1: UUID="B21844EB1844AFE1" TYPE="ntfs" sudo os-prober (nothing) Boot Info Script 0.55 dated February 15th, 2010 ============================= Boot Info Summary: ============================== => Grub 2 is installed in the MBR of /dev/sda and looks on the same drive in partition #1 for (,msdos1)/boot/grub. => Grub 2 is installed in the MBR of /dev/sdb and looks on the same drive in partition #1 for (,msdos1)/boot/grub. 
sda1: _________________________________________________________________________ File system: ext4 Boot sector type: - Boot sector info: Operating System: Ubuntu 10.10 Boot files/dirs: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img sda2: _________________________________________________________________________ File system: Extended Partition Boot sector type: Unknown Boot sector info: sda5: _________________________________________________________________________ File system: swap Boot sector type: - Boot sector info: sdb1: _________________________________________________________________________ File system: ntfs Boot sector type: Windows XP Boot sector info: No errors found in the Boot Parameter Block. Operating System: Windows XP Boot files/dirs: =========================== Drive/Partition Info: ============================= Drive: sda ___________________ _____________________________________________________ Disk /dev/sda: 80.1 GB, 80060424192 bytes 255 heads, 63 sectors/track, 9733 cylinders, total 156368016 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start End Size Id System /dev/sda1 * 2,048 150,226,943 150,224,896 83 Linux /dev/sda2 150,228,990 156,366,847 6,137,858 5 Extended /dev/sda5 150,228,992 156,366,847 6,137,856 82 Linux swap / Solaris Drive: sdb ___________________ _____________________________________________________ Disk /dev/sdb: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start End Size Id System /dev/sdb1 * 63 976,751,999 976,751,937 7 HPFS/NTFS blkid -c /dev/null: ____________________________________________________________ Device UUID TYPE LABEL /dev/sda1 d682c9bd-dd89-4827-9802-a1f921ebe21c ext4 /dev/sda2: PTTYPE="dos" /dev/sda5 09e9c2cb-d903-4f0b-a181-536951845231 swap /dev/sda: PTTYPE="dos" /dev/sdb1 B21844EB1844AFE1 ntfs /dev/sdb: PTTYPE="dos" ============================ "mount | grep ^/dev output: =========================== Device Mount_Point Type Options /dev/sda1 / ext4 (rw,errors=remount-ro,commit=0) =========================== sda1/boot/grub/grub.cfg: =========================== # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi set default="0" if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { insmod vbe insmod vga } insmod part_msdos insmod ext2 set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set d682c9bd-dd89-4827-9802-a1f921ebe21c if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=640x480 load_video insmod gfxterm fi terminal_output gfxterm insmod part_msdos insmod ext2 set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set d682c9bd-dd89-4827-9802-a1f921ebe21c set locale_dir=($root)/boot/grub/locale set lang=en insmod gettext if [ "${recordfail}" = 1 ]; then set timeout=-1 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### 
BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### menuentry 'Ubuntu, with Linux 2.6.35-28-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod part_msdos insmod ext2 set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set d682c9bd-dd89-4827-9802-a1f921ebe21c linux /boot/vmlinuz-2.6.35-28-generic root=UUID=d682c9bd-dd89-4827-9802-a1f921ebe21c ro quiet splash initrd /boot/initrd.img-2.6.35-28-generic } menuentry 'Ubuntu, with Linux 2.6.35-28-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod part_msdos insmod ext2 set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set d682c9bd-dd89-4827-9802-a1f921ebe21c echo 'Loading Linux 2.6.35-28-generic ...' linux /boot/vmlinuz-2.6.35-28-generic root=UUID=d682c9bd-dd89-4827-9802-a1f921ebe21c ro single echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-2.6.35-28-generic } menuentry 'Ubuntu, with Linux 2.6.35-22-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod part_msdos insmod ext2 set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set d682c9bd-dd89-4827-9802-a1f921ebe21c linux /boot/vmlinuz-2.6.35-22-generic root=UUID=d682c9bd-dd89-4827-9802-a1f921ebe21c ro quiet splash initrd /boot/initrd.img-2.6.35-22-generic } menuentry 'Ubuntu, with Linux 2.6.35-22-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod part_msdos insmod ext2 set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set d682c9bd-dd89-4827-9802-a1f921ebe21c echo 'Loading Linux 2.6.35-22-generic ...' linux /boot/vmlinuz-2.6.35-22-generic root=UUID=d682c9bd-dd89-4827-9802-a1f921ebe21c ro single echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-2.6.35-22-generic } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod part_msdos insmod ext2 set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set d682c9bd-dd89-4827-9802-a1f921ebe21c linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod part_msdos insmod ext2 set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set d682c9bd-dd89-4827-9802-a1f921ebe21c linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### if [ "x${timeout}" != "x-1" ]; then if keystatus; then if keystatus --shift; then set timeout=-1 else set timeout=0 fi else if sleep --interruptible 3 ; then set timeout=0 fi fi fi ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. menuentry "Windows XP" { set root=(hd1,1) chainloader (hd1,1)+1 } ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ### =============================== sda1/etc/fstab: =============================== # /etc/fstab: static file system information. 
# # Use 'blkid -o value -s UUID' to print the universally unique identifier # for a device; this may be used with UUID= as a more robust way to name # devices that works even if disks are added and removed. See fstab(5). # # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 /dev/sda1 / ext4 errors=remount-ro 0 1 # swap was on /dev/sda5 during installation UUID=09e9c2cb-d903-4f0b-a181-536951845231 none swap sw 0 0 =================== sda1: Location of files loaded by Grub: =================== 51.7GB: boot/grub/core.img 58.5GB: boot/grub/grub.cfg 1.2GB: boot/initrd.img-2.6.35-22-generic 1.3GB: boot/initrd.img-2.6.35-28-generic 58.2GB: boot/vmlinuz-2.6.35-22-generic 51.7GB: boot/vmlinuz-2.6.35-28-generic 1.3GB: initrd.img 1.2GB: initrd.img.old 51.7GB: vmlinuz 58.2GB: vmlinuz.old =========================== Unknown MBRs/Boot Sectors/etc ======================= Unknown BootLoader on sda2 00000000 d9 ed 13 ab ff a8 33 8c 01 b2 47 99 e1 4a b1 f1 |......3...G..J..| 00000010 69 5f a7 29 a4 1a 03 9e 31 b9 45 02 71 e6 58 78 |i_.)....1.E.q.Xx| 00000020 3d f6 ee 7b 3e 33 1b 82 c6 7d cf 1a c8 e7 bc 2f |=..{>3...}...../| 00000030 b9 e1 70 75 cf 18 aa e7 d5 7e 3c f1 b4 e7 9e 3a |..pu.....~<....:| 00000040 55 38 f1 b4 ee 78 59 0b 5e f7 3c 4c 57 73 9c 2a |U8...xY.^.<LWs.*| 00000050 28 f1 19 ed 11 9c b2 19 e2 80 92 1c 7b 84 ee 0b |(...........{...| 00000060 e2 c0 ac af 0a 50 42 b9 cf 0c dc 2c 20 77 85 dc |.....PB...., w..| 00000070 8f 70 5f 7b 84 9b a1 f7 8c 2d ee 70 5c ae f7 39 |.p_{.....-.p\..9| 00000080 63 f7 09 8a ec 79 4c ed 9f cc ad 3c f8 1b 47 7d |c....yL....<..G}| 00000090 3f 97 d5 16 cb 29 45 38 25 61 36 08 de 10 93 0f |?....)E8%a6.....| 000000a0 95 4f ea 54 f9 89 ff f1 bf 9a cc bb fd b6 22 b1 |.O.T..........".| 000000b0 65 08 05 21 78 19 46 b0 24 7e fb de d4 b3 ba d6 |e..!x.F.$~......| 000000c0 ec 11 65 82 ee 10 1d 12 04 91 da 6d 67 47 ea 9b |..e........mgG..| 000000d0 6f b0 aa fb cb 67 10 64 86 e8 26 85 fb f9 50 77 |o....g.d..&...Pw| 000000e0 9d 13 9b 9e d9 11 f3 a1 50 1b 11 b7 93 79 9f ab |........P....y..| 000000f0 c1 b6 86 0f 35 ed d4 9f dc f8 db bd ed 45 3a 68 |....5........E:h| 00000100 54 68 4a 1d d1 fc b8 c9 72 b4 d7 7b 60 e7 39 2f |ThJ.....r..{`.9/| 00000110 2a 0a 4e 52 72 52 c6 e2 2a 55 6a 2a e1 82 40 71 |*.NRrR..*Uj*..@q| 00000120 11 11 e0 53 d6 ff 1b a9 c6 65 df 1e b7 15 6f a2 |...S.....e....o.| 00000130 15 02 a4 6d 19 b7 78 57 a6 ee 9e 36 08 7d 6f 7c |...m..xW...6.}o|| 00000140 fd f7 7c d5 40 ff 0f c7 97 dc aa 00 ce 8b bb dc |..|.@...........| 00000150 e2 eb 1c 50 74 d8 14 cc 9a d6 5c a2 ab f2 67 f9 |...Pt.....\...g.| 00000160 58 ed 43 79 0e 78 7a 5c a6 f8 7b e8 05 4e 62 8a |X.Cy.xz\..{..Nb.| 00000170 0a 5f 22 ee a6 38 b9 e1 32 45 97 08 cc 75 66 c6 |._"..8..2E...uf.| 00000180 b3 a2 2d 89 a1 e9 95 21 28 53 fd dd be b1 b2 a2 |..-....!(S......| 00000190 78 3f a3 c9 3d e3 31 54 88 cf 78 0d e1 21 a8 74 |x?..=.1T..x..!.t| 000001a0 06 60 9d 21 c6 7a 24 e1 cc 28 f8 98 e0 99 e3 fc |.`.!.z$..(......| 000001b0 fa 8b eb d5 56 03 20 b8 54 ba c6 ee 9f 57 00 fe |....V. .T....W..| 000001c0 ff ff 82 fe ff ff 02 00 00 00 00 a8 5d 00 00 00 |............]...| 000001d0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| * 000001f0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 55 aa |..............U.| 00000200
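
    If os-prober keeps coming up empty, the usual fallback is a manual chainload entry in /etc/grub.d/40_custom followed by update-grub. A sketch, assuming sdb shows up as GRUB's second drive (hd1), which matches the entry already visible in the boot info script's copy of grub.cfg:

        # /etc/grub.d/40_custom  (leave the existing header lines in place)
        menuentry "Windows XP" {
            insmod part_msdos
            insmod ntfs
            set root=(hd1,1)
            chainloader +1
        }

        # then make the script executable and regenerate grub.cfg
        sudo chmod +x /etc/grub.d/40_custom
        sudo update-grub

    If the entry appears but XP refuses to boot from the second drive, one common workaround is GRUB's drivemap module (insmod drivemap; drivemap -s (hd0) ${root} inside the menuentry), since XP generally expects to be on the first BIOS disk.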

    Read the article

  • Simple GET operation with JSON data in ADF Mobile

    - by PadmajaBhat
    Usecase: This sample uses a RESTful service which contains a GET method that fetches employee details for an employee with given employee ID along with other methods. The data is fetched in JSON format. This RESTful service is then invoked via ADF Mobile and the JSON data thus obtained is parsed and rendered in mobile in a table. Prerequisite: Download JDev build JDEVADF_11.1.2.4.0_GENERIC_130421.1600.6436.1 or higher with mobile support.  Steps: Run EmployeeService.java in JSONService.zip. This is a simple service with a method, getEmpById(id) that takes employee ID as parameter and produces employee details in JSON format. Copy the target URL generated on running this service. The target URL will be as shown below: http://127.0.0.1:7101/JSONService-Project1-context-root/jersey/project1 Now, let us invoke this service in our mobile application. For this, create an ADF Mobile application.  Name the application JSON_SearchByEmpID and finish the wizard. Now, let us create a connection to our service. To do this, we create a URL Connection. Invoke new gallery wizard on ApplicationController project.  Select URL Connection option. In the Create URL Connection window, enter connection name as ‘conn’. For URL endpoint, supply the URL you copied earlier on running the service. Remember to use your system IP instead of localhost. Test the connection and click OK. At this point, a connection to the REST service has been created. Since JSON data is not supported directly in WSDC wizard, we need to invoke the operation through Java code using RestServiceAdapter. For this, in the ApplicationController project, create a Java class called ‘EmployeeDC’. We will be creating DC from this class. Add the following code to the newly created class to invoke the getEmpById method. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 public Employee fetchEmpDetails(){ RestServiceAdapter restServiceAdapter = Model.createRestServiceAdapter(); restServiceAdapter.clearRequestProperties(); restServiceAdapter.setConnectionName("conn"); //URL connection created with this name restServiceAdapter.setRequestType(RestServiceAdapter.REQUEST_TYPE_GET); restServiceAdapter.addRequestProperty("Content-Type", "application/json"); restServiceAdapter.addRequestProperty("Accept", "application/json; charset=UTF-8"); restServiceAdapter.setRetryLimit(0); restServiceAdapter.setRequestURI("/getById/"+inputEmpID); String response = ""; JSONBeanSerializationHelper jsonHelper = new JSONBeanSerializationHelper(); try { response = restServiceAdapter.send(""); //Invoke the GET operation System.out.println("Response received!"); Employee responseObject = (Employee) jsonHelper.fromJSON(Employee.class, response); return responseObject; } catch (Exception e) { } return null; } Here, in lines 2 to 9, we create the RestServiceAdapter and set various properties required to invoke the web service. At line 4, we are pointing to the connection ‘conn’ created previously. Since we want to invoke getEmpById method of the service, which is defined by the URL http://IP:7101/REST_Sanity_JSON-Project1-context-root/resources/project1/getById/{id} we are updating the request URI to point to this URI at line 9. inputEmpID is a variable that will hold the value input by the user for employee ID. This we will be creating in a while. As the method we are invoking is a GET operation and consumes json data, these properties are being set in lines 5 through 7. Finally, we are sending the request in line 13. 
In line 15, we use jsonHelper.fromJSON to convert received JSON data to a Java object. The required Java objects' structure is defined in class Employee.java whose structure is provided later. Since the response from our service is a simple response consisting of attributes like employee Id, name, design etc, we will just return this parsed response (line 16) and use it to create DC. As mentioned previously, we would like the user to input the employee ID for which he/she wants to perform search. So, in the same class, define a variable inputEmpID which will hold the value input by the user. Generate accessors for this variable. Lastly, we need to create Employee class. Employee class will define how we want to structure the JSON object received from the service. To design the Employee class, run the services’ method in the browser or via analyzer using path parameter as 1. This will give you the output JSON structure. Ours is a simple service that returns a JSONObject with a set of data. Hence, Employee class will just contain this set of data defined with the proper data types. Create Employee.java in the same project as EmployeeDC.java and write the below code: package application; import oracle.adfmf.java.beans.PropertyChangeListener; import oracle.adfmf.java.beans.PropertyChangeSupport; public class Employee { private String dept; private String desig; private int id; private String name; private int salary; private PropertyChangeSupport propertyChangeSupport = new PropertyChangeSupport(this); public void setDept(String dept) {         String oldDept = this.dept; this.dept = dept; propertyChangeSupport.firePropertyChange("dept", oldDept, dept); } public String getDept() { return dept; } public void setDesig(String desig) { String oldDesig = this.desig; this.desig = desig; propertyChangeSupport.firePropertyChange("desig", oldDesig, desig); } public String getDesig() { return desig; } public void setId(int id) { int oldId = this.id; this.id = id; propertyChangeSupport.firePropertyChange("id", oldId, id); } public int getId() { return id; } public void setName(String name) { String oldName = this.name; this.name = name; propertyChangeSupport.firePropertyChange("name", oldName, name); } public String getName() { return name; } public void setSalary(int salary) { int oldSalary = this.salary; this.salary = salary; propertyChangeSupport.firePropertyChange("salary", oldSalary, salary); } public int getSalary() { return salary; } public void addPropertyChangeListener(PropertyChangeListener l) { propertyChangeSupport.addPropertyChangeListener(l); } public void removePropertyChangeListener(PropertyChangeListener l) { propertyChangeSupport.removePropertyChangeListener(l);     } } Now, let us create a DC out of EmployeeDC.java.  DC as shown below is created. Now, you can design the mobile page as usual and invoke the operation of the service. To design the page, go to ViewController project and locate adfmf-feature.xml. Create a new feature called ‘SearchFeature’ by clicking the plus icon. Go the content tab and add an amx page. Call it SearchPage.amx. Call it SearchPage.amx. Remove primary and secondary buttons as we don’t need them and rename the header. Drag and drop inputEmpID from the DC palette onto Panel Page in the structure pane as input text with label. Next, drop fetchEmpDetails method as an ADF button. For a change, let us display the output in a table component instead of the usual form. 
However, you will notice that if you drag and drop Employee onto the structure pane, there is no option for ADF Mobile Table. Hence, we will need to create the table on our own. To do this, let us first drop Employee as an ADF Read -Only form. This step is needed to get the required bindings. We will be deleting this form in a while. Now, from the Component palette, search for ‘Table Layout’. Drag and drop this below the command button.  Within the tablelayout, insert ‘Row Layout’ and ‘Cell Format’ components. Final table structure should be as shown below. Here, we have also defined some inline styling to render the UI in a nice manner. <amx:tableLayout id="tl1" borderWidth="2" halign="center" inlineStyle="vertical-align:middle;" width="100%" cellPadding="10"> <amx:rowLayout id="rl1" > <amx:cellFormat id="cf1" width="30%"> <amx:outputText value="#{bindings.dept.hints.label}" id="ot7" inlineStyle="color:rgb(0,148,231);"/> </amx:cellFormat> <amx:cellFormat id="cf2"> <amx:outputText value="#{bindings.dept.inputValue}" id="ot8" /> </amx:cellFormat> </amx:rowLayout> <amx:rowLayout id="rl2"> <amx:cellFormat id="cf3" width="30%"> <amx:outputText value="#{bindings.desig.hints.label}" id="ot9" inlineStyle="color:rgb(0,148,231);"/> </amx:cellFormat> <amx:cellFormat id="cf4" > <amx:outputText value="#{bindings.desig.inputValue}" id="ot10"/> </amx:cellFormat> </amx:rowLayout> <amx:rowLayout id="rl3"> <amx:cellFormat id="cf5" width="30%"> <amx:outputText value="#{bindings.id.hints.label}" id="ot11" inlineStyle="color:rgb(0,148,231);"/> </amx:cellFormat> <amx:cellFormat id="cf6" > <amx:outputText value="#{bindings.id.inputValue}" id="ot12"/> </amx:cellFormat> </amx:rowLayout> <amx:rowLayout id="rl4"> <amx:cellFormat id="cf7" width="30%"> <amx:outputText value="#{bindings.name.hints.label}" id="ot13" inlineStyle="color:rgb(0,148,231);"/> </amx:cellFormat> <amx:cellFormat id="cf8"> <amx:outputText value="#{bindings.name.inputValue}" id="ot14"/> </amx:cellFormat> </amx:rowLayout> <amx:rowLayout id="rl5"> <amx:cellFormat id="cf9" width="30%"> <amx:outputText value="#{bindings.salary.hints.label}" id="ot15" inlineStyle="color:rgb(0,148,231);"/> </amx:cellFormat> <amx:cellFormat id="cf10"> <amx:outputText value="#{bindings.salary.inputValue}" id="ot16"/> </amx:cellFormat> </amx:rowLayout>     </amx:tableLayout> The values used in the output text of the table come from the bindings obtained from the ADF Form created earlier. As we have used the bindings and don’t need the form anymore, let us delete the form.  One last thing before we deploy. When user changes employee ID, we want to clear the table contents. For this we associate a value change listener with the input text box. Click New in the resulting dialog to create a managed bean. Next, we create a method within the managed bean. For this, click on the New button associated with method. Call the method ‘empIDChange’. Open myClass.java and write the below code in empIDChange(). public void empIDChange(ValueChangeEvent valueChangeEvent) { // Add event code here... 
//Resetting the values to blank values when employee id changes AdfELContext adfELContext = AdfmfJavaUtilities.getAdfELContext(); ValueExpression ve = AdfmfJavaUtilities.getValueExpression("#{bindings.dept.inputValue}", String.class); ve.setValue(adfELContext, ""); ve = AdfmfJavaUtilities.getValueExpression("#{bindings.desig.inputValue}", String.class); ve.setValue(adfELContext, ""); ve = AdfmfJavaUtilities.getValueExpression("#{bindings.id.inputValue}", int.class); ve.setValue(adfELContext, ""); ve = AdfmfJavaUtilities.getValueExpression("#{bindings.name.inputValue}", String.class); ve.setValue(adfELContext, ""); ve = AdfmfJavaUtilities.getValueExpression("#{bindings.salary.inputValue}", int.class); ve.setValue(adfELContext, ""); } That’s it. Deploy the application to android emulator or device. Some snippets from the app.

    Read the article

  • How can I transfer output that appears on the console and format it so that it appears on a web page

    - by lojayna
    package collabsoft.backlog_reports.c4; import java.sql.CallableStatement; import java.sql.Connection; import java.sql.DriverManager; import java.sql.ResultSet; import java.sql.ResultSetMetaData; import java.sql.Statement; //import collabsoft.backlog_reports.c4.Report; public class Report { private Connection con; public Report(){ connectUsingJDBC(); } public static void main(String args[]){ Report dc = new Report(); dc.reviewMeeting(6, 8, 10); dc.createReport("dede",100); //dc.viewReport(100); // dc.custRent(3344,123,22,11-11-2009); } /** the following method is used to connect to the database **/ public void connectUsingJDBC() { // This is the name of the ODBC data source String dataSourceName = "Simple_DB"; try { // loading the driver in the memory Class.forName("sun.jdbc.odbc.JdbcOdbcDriver"); // This is the connection URL String dbURL = "jdbc:odbc:" + dataSourceName; con = DriverManager.getConnection("jdbc:mysql://localhost:3306/Collabsoft","root",""); // This line is used to print the name of the driver and it would throw an exception if a problem occured System.out.println("User connected using driver: " + con.getMetaData().getDriverName()); //Addcustomer(con,1111,"aaa","aaa","aa","aam","111","2222","111"); //rentedMovies(con); //executePreparedStatement(con); //executeCallableStatement(con); //executeBatch(con); } catch (Exception e) { e.printStackTrace(); } } /** *this code is to link the SQL code with the java for the task *as an admin I should be able to create a report of a review meeting including notes, tasks and users *i will take the task id and user id and note id that will be needed to be added in the review *meeting report and i will display the information related to these ida **/ public void reviewMeeting(int taskID, int userID, int noteID)// law el proc bt return table { try{ CallableStatement callableStatement = con.prepareCall("{CALL report_review_meeting(?,?,?)}"); callableStatement.setInt(1,taskID); callableStatement.setInt(2,userID); callableStatement.setInt(3,noteID); ResultSet resultSet = callableStatement.executeQuery(); // or executeupdate() or updateQuery ResultSetMetaData rsm = resultSet.getMetaData(); int numOfColumns = rsm.getColumnCount(); System.out.println("lojayna"); while (resultSet.next()) { System.out.println("New Row:"); for (int i = 1; i <= numOfColumns; i++) System.out.print(rsm.getColumnName(i) + ": " + resultSet.getObject(i) + " "); System.out.println(); } } catch(Exception e) { System.out.println("E"); } } ////////////////////////////////// ///////////////////////////////// public void allproject(int projID)// law el proc bt return table { try{ CallableStatement callableStatement = con.prepareCall("{CALL all_project(?)}"); callableStatement.setInt(1,projID); //callableStatement.setInt(2,userID); //callableStatement.setInt(3,noteID); ResultSet resultSet = callableStatement.executeQuery(); // or executeupdate() or updateQuery ResultSetMetaData rsm = resultSet.getMetaData(); int numOfColumns = rsm.getColumnCount(); System.out.println("lojayna"); while (resultSet.next()) { System.out.println("New Row:"); for (int i = 1; i <= numOfColumns; i++) System.out.print(rsm.getColumnName(i) + ": " + resultSet.getObject(i) + " "); System.out.println(); } } catch(Exception e) { System.out.println("E"); } } /////////////////////////////// /** * here i take the event id and i take a string report and then * i relate the report with the event **/ public void createReport(String report,int E_ID )// law el proc bt return table { try{ Statement st = 
con.createStatement(); st.executeUpdate("UPDATE e_vent SET e_vent.report=report WHERE e_vent.E_ID= E_ID;"); /* CallableStatement callableStatement = con.prepareCall("{CALL Create_report(?,?)}"); callableStatement.setString(1,report); callableStatement.setInt(2,E_ID); ResultSet resultSet = callableStatement.executeQuery(); // or executeupdate() or updateQuery ResultSetMetaData rsm = resultSet.getMetaData(); int numOfColumns = rsm.getColumnCount(); System.out.println("lojayna"); while (resultSet.next()) { System.out.println("New Row:"); for (int i = 1; i <= numOfColumns; i++) System.out.print(rsm.getColumnName(i) + ": " + resultSet.getObject(i) + " "); System.out.println(); }*/ } catch(Exception e) { System.out.println("E"); System.out.println(e); } } /** *in the following method i view the report of the event having the ID eventID **/ public void viewReport(int eventID)// law el proc bt return table { try{ CallableStatement callableStatement = con.prepareCall("{CALL view_report(?)}"); callableStatement.setInt(1,eventID); ResultSet resultSet = callableStatement.executeQuery(); // or executeupdate() or updateQuery ResultSetMetaData rsm = resultSet.getMetaData(); int numOfColumns = rsm.getColumnCount(); System.out.println("lojayna"); while (resultSet.next()) { System.out.println("New Row:"); for (int i = 1; i <= numOfColumns; i++) System.out.print(rsm.getColumnName(i) + ": " + resultSet.getObject(i) + " "); System.out.println(); } } catch(Exception e) { System.out.println("E"); } } } // the result of these methods is being showed on the console , i am using WIcket and i want it 2 be showed on the web how is that done ?!
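
    One way to get this onto a Wicket page is to have the report method return the rows as a List instead of printing them, and then render that list with a ListView. A rough sketch follows; the wicket:id values "rows" and "row" are made up and would have to exist in the page's HTML markup.

        // Sketch only: same JDBC call as reviewMeeting(), but returning the rows.
        // Assumed imports: java.util.ArrayList, java.util.List,
        //   org.apache.wicket.markup.html.basic.Label,
        //   org.apache.wicket.markup.html.list.ListItem,
        //   org.apache.wicket.markup.html.list.ListView
        public List<String> reviewMeetingRows(int taskID, int userID, int noteID) throws Exception {
            List<String> rows = new ArrayList<String>();
            CallableStatement cs = con.prepareCall("{CALL report_review_meeting(?,?,?)}");
            cs.setInt(1, taskID);
            cs.setInt(2, userID);
            cs.setInt(3, noteID);
            ResultSet rs = cs.executeQuery();
            ResultSetMetaData md = rs.getMetaData();
            while (rs.next()) {
                StringBuilder row = new StringBuilder();
                for (int i = 1; i <= md.getColumnCount(); i++) {
                    row.append(md.getColumnName(i)).append(": ").append(rs.getObject(i)).append("  ");
                }
                rows.add(row.toString());
            }
            return rows;
        }

        // In the Wicket page's constructor (assuming a Report instance named 'report'
        // and matching wicket:id attributes in the markup):
        //   add(new ListView<String>("rows", report.reviewMeetingRows(6, 8, 10)) {
        //       @Override
        //       protected void populateItem(ListItem<String> item) {
        //           item.add(new Label("row", item.getModelObject()));
        //       }
        //   });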

    Read the article

  • I want to show the result of my code on a web page because it is being shown on the console

    - by lojayna
    package collabsoft.backlog_reports.c4; import java.sql.CallableStatement; import java.sql.Connection; import java.sql.DriverManager; import java.sql.ResultSet; import java.sql.ResultSetMetaData; import java.sql.Statement; //import collabsoft.backlog_reports.c4.Report; public class Report { private Connection con; public Report(){ connectUsingJDBC(); } public static void main(String args[]){ Report dc = new Report(); dc.reviewMeeting(6, 8, 10); dc.createReport("dede",100); //dc.viewReport(100); // dc.custRent(3344,123,22,11-11-2009); } /** the following method is used to connect to the database **/ public void connectUsingJDBC() { // This is the name of the ODBC data source String dataSourceName = "Simple_DB"; try { // loading the driver in the memory Class.forName("sun.jdbc.odbc.JdbcOdbcDriver"); // This is the connection URL String dbURL = "jdbc:odbc:" + dataSourceName; con = DriverManager.getConnection("jdbc:mysql://localhost:3306/Collabsoft","root",""); // This line is used to print the name of the driver and it would throw an exception if a problem occured System.out.println("User connected using driver: " + con.getMetaData().getDriverName()); //Addcustomer(con,1111,"aaa","aaa","aa","aam","111","2222","111"); //rentedMovies(con); //executePreparedStatement(con); //executeCallableStatement(con); //executeBatch(con); } catch (Exception e) { e.printStackTrace(); } } /** *this code is to link the SQL code with the java for the task *as an admin I should be able to create a report of a review meeting including notes, tasks and users *i will take the task id and user id and note id that will be needed to be added in the review meeting report and i will display the information related to these ida */ public void reviewMeeting(int taskID, int userID, int noteID)// law el proc bt return table { try{ CallableStatement callableStatement = con.prepareCall("{CALL report_review_meeting(?,?,?)}"); callableStatement.setInt(1,taskID); callableStatement.setInt(2,userID); callableStatement.setInt(3,noteID); ResultSet resultSet = callableStatement.executeQuery(); // or executeupdate() or updateQuery ResultSetMetaData rsm = resultSet.getMetaData(); int numOfColumns = rsm.getColumnCount(); System.out.println("lojayna"); while (resultSet.next()) { System.out.println("New Row:"); for (int i = 1; i <= numOfColumns; i++) System.out.print(rsm.getColumnName(i) + ": " + resultSet.getObject(i) + " "); System.out.println(); } } catch(Exception e) { System.out.println("E"); } } ////////////////////////////////// ///////////////////////////////// public void allproject(int projID)// law el proc bt return table { try{ CallableStatement callableStatement = con.prepareCall("{CALL all_project(?)}"); callableStatement.setInt(1,projID); //callableStatement.setInt(2,userID); //callableStatement.setInt(3,noteID); ResultSet resultSet = callableStatement.executeQuery(); // or executeupdate() or updateQuery ResultSetMetaData rsm = resultSet.getMetaData(); int numOfColumns = rsm.getColumnCount(); System.out.println("lojayna"); while (resultSet.next()) { System.out.println("New Row:"); for (int i = 1; i <= numOfColumns; i++) System.out.print(rsm.getColumnName(i) + ": " + resultSet.getObject(i) + " "); System.out.println(); } } catch(Exception e) { System.out.println("E"); } } /////////////////////////////// /** * here i take the event id and i take a string report and then * i relate the report with the event **/ public void createReport(String report,int E_ID )// law el proc bt return table { try{ Statement st = 
con.createStatement(); st.executeUpdate("UPDATE e_vent SET e_vent.report=report WHERE e_vent.E_ID= E_ID;"); /* CallableStatement callableStatement = con.prepareCall("{CALL Create_report(?,?)}"); callableStatement.setString(1,report); callableStatement.setInt(2,E_ID); ResultSet resultSet = callableStatement.executeQuery(); // or executeupdate() or updateQuery ResultSetMetaData rsm = resultSet.getMetaData(); int numOfColumns = rsm.getColumnCount(); System.out.println("lojayna"); while (resultSet.next()) { System.out.println("New Row:"); for (int i = 1; i <= numOfColumns; i++) System.out.print(rsm.getColumnName(i) + ": " + resultSet.getObject(i) + " "); System.out.println(); }*/ } catch(Exception e) { System.out.println("E"); System.out.println(e); } } /** in the following method i view the report of the event having the ID eventID */ public void viewReport(int eventID)// law el proc bt return table { try{ CallableStatement callableStatement = con.prepareCall("{CALL view_report(?)}"); callableStatement.setInt(1,eventID); ResultSet resultSet = callableStatement.executeQuery(); // or executeupdate() or updateQuery ResultSetMetaData rsm = resultSet.getMetaData(); int numOfColumns = rsm.getColumnCount(); System.out.println("lojayna"); while (resultSet.next()) { System.out.println("New Row:"); for (int i = 1; i <= numOfColumns; i++) System.out.print(rsm.getColumnName(i) + ": " + resultSet.getObject(i) + " "); System.out.println(); } } catch(Exception e) { System.out.println("E"); } } } // the result of these methods is being showed on the console , i am using WIcket and i want it 2 be showed on the web how is that done ?! thnxxx

    Read the article

  • How can I further optimize this color difference function?

    - by aLfa
    I have made this function to calculate color differences in the CIE Lab colorspace, but it lacks speed. Since I'm not a Java expert, I wonder if any Java guru around has some tips that can improve the speed here. The code is based on the matlab function mentioned in the comment block. /** * Compute the CIEDE2000 color-difference between the sample color with * CIELab coordinates 'sample' and a standard color with CIELab coordinates * 'std' * * Based on the article: * "The CIEDE2000 Color-Difference Formula: Implementation Notes, * Supplementary Test Data, and Mathematical Observations,", G. Sharma, * W. Wu, E. N. Dalal, submitted to Color Research and Application, * January 2004. * available at http://www.ece.rochester.edu/~gsharma/ciede2000/ */ public static double deltaE2000(double[] lab1, double[] lab2) { double L1 = lab1[0]; double a1 = lab1[1]; double b1 = lab1[2]; double L2 = lab2[0]; double a2 = lab2[1]; double b2 = lab2[2]; // Cab = sqrt(a^2 + b^2) double Cab1 = Math.sqrt(a1 * a1 + b1 * b1); double Cab2 = Math.sqrt(a2 * a2 + b2 * b2); // CabAvg = (Cab1 + Cab2) / 2 double CabAvg = (Cab1 + Cab2) / 2; // G = 1 + (1 - sqrt((CabAvg^7) / (CabAvg^7 + 25^7))) / 2 double CabAvg7 = Math.pow(CabAvg, 7); double G = 1 + (1 - Math.sqrt(CabAvg7 / (CabAvg7 + 6103515625.0))) / 2; // ap = G * a double ap1 = G * a1; double ap2 = G * a2; // Cp = sqrt(ap^2 + b^2) double Cp1 = Math.sqrt(ap1 * ap1 + b1 * b1); double Cp2 = Math.sqrt(ap2 * ap2 + b2 * b2); // CpProd = (Cp1 * Cp2) double CpProd = Cp1 * Cp2; // hp1 = atan2(b1, ap1) double hp1 = Math.atan2(b1, ap1); // ensure hue is between 0 and 2pi if (hp1 < 0) { // hp1 = hp1 + 2pi hp1 += 6.283185307179586476925286766559; } // hp2 = atan2(b2, ap2) double hp2 = Math.atan2(b2, ap2); // ensure hue is between 0 and 2pi if (hp2 < 0) { // hp2 = hp2 + 2pi hp2 += 6.283185307179586476925286766559; } // dL = L2 - L1 double dL = L2 - L1; // dC = Cp2 - Cp1 double dC = Cp2 - Cp1; // computation of hue difference double dhp = 0.0; // set hue difference to zero if the product of chromas is zero if (CpProd != 0) { // dhp = hp2 - hp1 dhp = hp2 - hp1; if (dhp > Math.PI) { // dhp = dhp - 2pi dhp -= 6.283185307179586476925286766559; } else if (dhp < -Math.PI) { // dhp = dhp + 2pi dhp += 6.283185307179586476925286766559; } } // dH = 2 * sqrt(CpProd) * sin(dhp / 2) double dH = 2 * Math.sqrt(CpProd) * Math.sin(dhp / 2); // weighting functions // Lp = (L1 + L2) / 2 - 50 double Lp = (L1 + L2) / 2 - 50; // Cp = (Cp1 + Cp2) / 2 double Cp = (Cp1 + Cp2) / 2; // average hue computation // hp = (hp1 + hp2) / 2 double hp = (hp1 + hp2) / 2; // identify positions for which abs hue diff exceeds 180 degrees if (Math.abs(hp1 - hp2) > Math.PI) { // hp = hp - pi hp -= Math.PI; } // ensure hue is between 0 and 2pi if (hp < 0) { // hp = hp + 2pi hp += 6.283185307179586476925286766559; } // LpSqr = Lp^2 double LpSqr = Lp * Lp; // Sl = 1 + 0.015 * LpSqr / sqrt(20 + LpSqr) double Sl = 1 + 0.015 * LpSqr / Math.sqrt(20 + LpSqr); // Sc = 1 + 0.045 * Cp double Sc = 1 + 0.045 * Cp; // T = 1 - 0.17 * cos(hp - pi / 6) + // + 0.24 * cos(2 * hp) + // + 0.32 * cos(3 * hp + pi / 30) - // - 0.20 * cos(4 * hp - 63 * pi / 180) double hphp = hp + hp; double T = 1 - 0.17 * Math.cos(hp - 0.52359877559829887307710723054658) + 0.24 * Math.cos(hphp) + 0.32 * Math.cos(hphp + hp + 0.10471975511965977461542144610932) - 0.20 * Math.cos(hphp + hphp - 1.0995574287564276334619251841478); // Sh = 1 + 0.015 * Cp * T double Sh = 1 + 0.015 * Cp * T; // deltaThetaRad = (pi / 3) * e^-(36 / (5 * pi) * hp - 11)^2 double powerBase = 
hp - 4.799655442984406; double deltaThetaRad = 1.0471975511965977461542144610932 * Math.exp(-5.25249016001879 * powerBase * powerBase); // Rc = 2 * sqrt((Cp^7) / (Cp^7 + 25^7)) double Cp7 = Math.pow(Cp, 7); double Rc = 2 * Math.sqrt(Cp7 / (Cp7 + 6103515625.0)); // RT = -sin(delthetarad) * Rc double RT = -Math.sin(deltaThetaRad) * Rc; // de00 = sqrt((dL / Sl)^2 + (dC / Sc)^2 + (dH / Sh)^2 + RT * (dC / Sc) * (dH / Sh)) double dLSl = dL / Sl; double dCSc = dC / Sc; double dHSh = dH / Sh; return Math.sqrt(dLSl * dLSl + dCSc * dCSc + dHSh * dHSh + RT * dCSc * dHSh); }
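
    One small, safe win (a sketch, not benchmarked here): the two Math.pow(x, 7) calls are general-purpose transcendental calls, but an integer seventh power only needs three multiplications.

        // Replace Math.pow(CabAvg, 7) and Math.pow(Cp, 7) with a helper like this:
        private static double pow7(double x) {
            double x2 = x * x;     // x^2
            double x3 = x2 * x;    // x^3
            return x3 * x3 * x;    // x^7
        }

        // inside deltaE2000:
        //   double CabAvg7 = pow7(CabAvg);
        //   double Cp7     = pow7(Cp);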

    Read the article

  • Join 3 tables in 1 LINQ-EF

    - by user100161
    I have to fill warehouse table cOrders with program using Ado.NET EF. I have SQL command but i don't know how to do this with LINQ. static void Main(string[] args) { var SPcontex = new PI_NorthwindSPEntities(); var contex = new NorthwindEntities(); dCustomers dimenzijaCustomers = new dCustomers(); dDatum dimenzijaDatum = new dDatum(); ... CREATE TABLE PoslovnaInteligencija.dbo.cOrders( cOrdersID int PRIMARY KEY IDENTITY(1,1), OrderID int NOT NULL, dCustomersID int FOREIGN KEY REFERENCES PoslovnaInteligencija.dbo.dCustomers(dCustomersID), dEmployeesID int FOREIGN KEY REFERENCES PoslovnaInteligencija.dbo.dEmployees(dEmployeesID), OrderDateID int FOREIGN KEY REFERENCES PoslovnaInteligencija.dbo.dDatum(sifDatum), RequiredDateID int FOREIGN KEY REFERENCES PoslovnaInteligencija.dbo.dDatum(sifDatum), ShippedDateID int FOREIGN KEY REFERENCES PoslovnaInteligencija.dbo.dDatum(sifDatum), dShippersID int FOREIGN KEY REFERENCES PoslovnaInteligencija.dbo.dShippers(dShippersID), dShipID int FOREIGN KEY REFERENCES PoslovnaInteligencija.dbo.dShip(dShipID), Freight money, WaitingDay int ) INSERT INTO PoslovnaInteligencija.dbo.cOrders (OrderID, dCustomersID, dEmployeesID, OrderDateID, RequiredDateID, dShippersID, dShipID, Freight, ShippedDateID, WaitingDay) SELECT OrderID, dc.dCustomersID, de.dEmployeesID, orderD.sifDatum, requiredD.sifDatum, dShippersID, ds.dShipID, Freight, ShippedDateID=CASE WHEN (ShippedDate IS NULL) THEN -1 ELSE shippedD.sifDatum END, WaitingDay=CASE WHEN (shippedD.sifDatum - orderD.sifDatum) IS NULL THEN -1 ELSE shippedD.sifDatum - orderD.sifDatum END FROM PoslovnaInteligencija.dbo.dShippers AS s, PoslovnaInteligencija.dbo.dCustomers AS dc, PoslovnaInteligencija.dbo.dEmployees AS de, PoslovnaInteligencija.dbo.dShip AS ds,PoslovnaInteligencija.dbo.dDatum AS orderD, PoslovnaInteligencija.dbo.dDatum AS requiredD, PoslovnaInteligencija.dbo.Orders AS o LEFT OUTER JOIN PoslovnaInteligencija.dbo.dDatum AS shippedD ON shippedD.datum=DATEADD(dd, 0, DATEDIFF(dd, 0, o.ShippedDate)) WHERE o.ShipVia=s.ShipperID AND dc.CustomerID=o.CustomerID AND de.EmployeeID=o.EmployeeID AND ds.ShipName=o.ShipName AND orderD.datum=DATEADD(dd, 0, DATEDIFF(dd, 0, o.OrderDate)) AND requiredD.datum=DATEADD(dd, 0, DATEDIFF(dd, 0, o.RequiredDate));

    Read the article

  • Deleting HBITMAP causes an access violation at runtime

    - by Oliver
    Hi, I have the following code to take a screenshot of a window, and get the colour of a specific pixel in it: void ProcessScreenshot(HWND hwnd){ HDC WinDC; HDC CopyDC; HBITMAP hBitmap; RECT rt; GetClientRect (hwnd, &rt); WinDC = GetDC (hwnd); CopyDC = CreateCompatibleDC (WinDC); //Create a bitmap compatible with the DC hBitmap = CreateCompatibleBitmap (WinDC, rt.right - rt.left, //width rt.bottom - rt.top);//height SelectObject (CopyDC, hBitmap); BitBlt (CopyDC, //destination 0,0, rt.right - rt.left, //width rt.bottom - rt.top, //height WinDC, //source 0, 0, SRCCOPY); COLORREF col = ::GetPixel(CopyDC,145,293); // Do some stuff with the pixel colour.... delete hBitmap; ReleaseDC(hwnd, WinDC); ReleaseDC(hwnd, CopyDC); } The line 'delete hBitmap;' causes a runtime error: an access violation. I guess I can't just delete it like that? Because bitmaps take up a lot of space, if I don't get rid of it I will end up with a huge memory leak. My question is: Does releasing the DC the HBITMAP is from deal with this, or does it stick around even after I have released the DC? If the latter is the case, how do I correctly get rid of the HBITMAP?
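
    A cleanup sketch using the variables from the question: GDI handles are not C++ objects, so delete is the wrong tool, and releasing the window DC does not free the bitmap; the bitmap needs DeleteObject and the memory DC needs DeleteDC.

        HGDIOBJ oldBitmap = SelectObject(CopyDC, hBitmap);  // keep the DC's original bitmap

        // ... BitBlt / GetPixel as before ...

        SelectObject(CopyDC, oldBitmap);  // deselect hBitmap before destroying it
        DeleteObject(hBitmap);            // frees the HBITMAP (instead of 'delete')
        DeleteDC(CopyDC);                 // CreateCompatibleDC pairs with DeleteDC
        ReleaseDC(hwnd, WinDC);           // only the DC obtained from GetDC goes back via ReleaseDC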

    Read the article

  • SqlBulkCopy slow as molasses

    - by Chris
    I'm looking for the fastest way to load bulk data via c#. I have this script that does the job but slow. I read testimonies that SqlBulkCopy is the fastest. 1000 records 2.5 seconds. files contain anywhere near 5000 records to 250k What are some of the things that can slow it down? Table Def: CREATE TABLE [dbo].[tempDispositions]( [QuotaGroup] [varchar](100) NULL, [Country] [varchar](50) NULL, [ServiceGroup] [varchar](50) NULL, [Language] [varchar](50) NULL, [ContactChannel] [varchar](10) NULL, [TrackingID] [varchar](20) NULL, [CaseClosedDate] [varchar](25) NULL, [MSFTRep] [varchar](50) NULL, [CustEmail] [varchar](100) NULL, [CustPhone] [varchar](100) NULL, [CustomerName] [nvarchar](100) NULL, [ProductFamily] [varchar](35) NULL, [ProductSubType] [varchar](255) NULL, [CandidateReceivedDate] [varchar](25) NULL, [SurveyMode] [varchar](1) NULL, [SurveyWaveStartDate] [varchar](25) NULL, [SurveyInvitationDate] [varchar](25) NULL, [SurveyReminderDate] [varchar](25) NULL, [SurveyCompleteDate] [varchar](25) NULL, [OptOutDate] [varchar](25) NULL, [SurveyWaveEndDate] [varchar](25) NULL, [DispositionCode] [varchar](5) NULL, [SurveyName] [varchar](20) NULL, [SurveyVendor] [varchar](20) NULL, [BusinessUnitName] [varchar](25) NULL, [UploadId] [int] NULL, [LineNumber] [int] NULL, [BusinessUnitSubgroup] [varchar](25) NULL, [FileDate] [datetime] NULL ) ON [PRIMARY] and here's the code private void BulkLoadContent(DataTable dt) { OnMessage("Bulk loading records to temp table"); OnSubMessage("Bulk Load Started"); using (SqlBulkCopy bcp = new SqlBulkCopy(conn)) { bcp.DestinationTableName = "dbo.tempDispositions"; bcp.BulkCopyTimeout = 0; foreach (DataColumn dc in dt.Columns) { bcp.ColumnMappings.Add(dc.ColumnName, dc.ColumnName); } bcp.NotifyAfter = 2000; bcp.SqlRowsCopied += new SqlRowsCopiedEventHandler(bcp_SqlRowsCopied); bcp.WriteToServer(dt); bcp.Close(); } }
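
    Things that commonly slow SqlBulkCopy down: per-row locking instead of a bulk table lock, indexes or triggers on the destination table, and materialising everything into a DataTable first instead of streaming an IDataReader into WriteToServer. A sketch of the first knob, reusing the names above (the batch size is illustrative, not tuned):

        using (SqlBulkCopy bcp = new SqlBulkCopy(conn, SqlBulkCopyOptions.TableLock, null))
        {
            bcp.DestinationTableName = "dbo.tempDispositions";
            bcp.BulkCopyTimeout = 0;
            bcp.BatchSize = 5000;                  // commit in chunks instead of one giant batch
            foreach (DataColumn dc in dt.Columns)
            {
                bcp.ColumnMappings.Add(dc.ColumnName, dc.ColumnName);
            }
            bcp.WriteToServer(dt);
        }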

    Read the article

  • Active Directory Membership Provider - how to expand on this?

    - by Jaxidian
    I'm working on getting an MVC app up and running via AD Membership Provider and I'm having some issues figuring this out. I have a base configuration setup and working when I login as [email protected] + password. <connectionStrings> <add name="MyConnString" connectionString="LDAP://domaincontroller/OU=Product Users,DC=my,DC=domain,DC=com" /> </connectionStrings> <membership defaultProvider="MyProvider"> <providers> <clear /> <add name="MyProvider" connectionStringName="MyConnString" connectionUsername="my.domain.com\service_account" connectionPassword="biguglypassword" type="System.Web.Security.ActiveDirectoryMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" /> </providers> </membership> However, I'd LIKE to do some other things and I'm not sure how to go about them. Login without typing the domain (i.e. the "@my.domain.com"). I realize that this could only work if I limit myself to just one domain - that's fine. Organize users in up to N different OUs within a single OU. As you can tell from my current connection string, I'm authenticating users in my Product Users OU. I would LIKE to create OUs for various companies within this OU and put the users into those OUs. How can I authenticate across all of these different OUs? I'm trying to figure out how the Active Directory Membership Provider ties in with the Profile and Role providers. Are there AD versions of those too or am I stuck with SQL, home-grown, or finding something somebody else has coded up? Many thanks!!
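
    Two of these points have configuration-level answers, sketched below with the same names as above: attributeMapUsername="sAMAccountName" lets people sign in as "jdoe" instead of "jdoe@my.domain.com", and because the provider searches the subtree under the container named in the connection string, per-company OUs created underneath "Product Users" should be picked up without any further change.

        <connectionStrings>
          <!-- a parent container: every company OU created under it is searched -->
          <add name="MyConnString"
               connectionString="LDAP://domaincontroller/OU=Product Users,DC=my,DC=domain,DC=com" />
        </connectionStrings>

        <membership defaultProvider="MyProvider">
          <providers>
            <clear />
            <add name="MyProvider"
                 type="System.Web.Security.ActiveDirectoryMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
                 connectionStringName="MyConnString"
                 connectionUsername="my.domain.com\service_account"
                 connectionPassword="biguglypassword"
                 attributeMapUsername="sAMAccountName" />
          </providers>
        </membership>

    For roles there is no built-in Active Directory role provider; the usual options are WindowsTokenRoleProvider, the AzMan-backed AuthorizationStoreRoleProvider, the SQL role provider, or a custom one, and profiles are likewise handled by the SQL profile provider or something home-grown.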

    Read the article

  • A local error has occurred while connecting to AD in Windows 2008 server

    - by Sara
    There is Active Directory on a Windows 2000 Advanced Server, and I have a web server running Windows Server 2008 Enterprise Edition. The following code works fine on Windows Server 2003, but after I installed Windows Server 2008 it gives me the error below. The web server is not a subdomain of the AD server, but they are in the same IP address range. A local error has occurred.\r\n"} System.Exception system.DirectoryServices.DirectoryServicesCOMException} I want to authenticate via AD from my web server. I tested port 389 and it was open (by telnet), I added port 389 UDP and TCP to the web server's firewall to be sure it is open, and I even turned the firewall off, but nothing changed. I don't know what is wrong on Windows Server 2008 that it cannot run my code; I searched the Internet but found nothing. Any solution would be helpful. Thank you. public bool IsAuthenticated(string username, string pwd,string group) { string domainAndUsername = "LDAP://192.xx.xx.xx:389/DC=test,DC=oc,DC=com" ; string usr="CN=" + username + ",CN=" + group; DirectoryEntry entry = new DirectoryEntry(domainAndUsername, usr, pwd, AuthenticationTypes.Secure ); try { DirectorySearcher search = new DirectorySearcher(entry); search.Filter = "(SAMAccountName=" + username + ")"; SearchResult result = search.FindOne(); if (result == null) { return false; } } catch (Exception ex) { return false; } return true; }
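
    If the web server can use the .NET 3.5 stack, one alternative worth trying (a sketch, not a confirmed fix for this particular COM error) is System.DirectoryServices.AccountManagement, which takes care of the bind details; the server address, container and group handling below are copied from the question:

        // requires a reference to System.DirectoryServices.AccountManagement
        // and: using System.DirectoryServices.AccountManagement;
        public bool IsAuthenticated(string username, string pwd, string group)
        {
            using (PrincipalContext ctx =
                       new PrincipalContext(ContextType.Domain, "192.xx.xx.xx", "DC=test,DC=oc,DC=com"))
            {
                if (!ctx.ValidateCredentials(username, pwd))
                    return false;

                using (UserPrincipal user = UserPrincipal.FindByIdentity(ctx, username))
                using (GroupPrincipal grp = GroupPrincipal.FindByIdentity(ctx, group))
                {
                    return user != null && grp != null && user.IsMemberOf(grp);
                }
            }
        }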

    Read the article

  • [C#]How to introduce retry logic into LINQ to SQL to deal with timeouts?

    - by codemonkie
    I need to find a way to add a retry mechanism to my DB calls in case of timeouts; LINQ to SQL is used to call some sprocs in my code... using (MyDataContext dc = new MyDataContext()) { int result = -1; //denote failure int count = 0; while ((result < 0) && (count < MAX_RETRIES)) { result = dc.myStoredProc1(...); count++; } result = -1; count = 0; while ((result < 0) && (count < MAX_RETRIES)) { result = dc.myStoredProc2(...); count++; } ... ... } Not sure if the code above is right or poses any complications. It would be nice to throw an exception after MAX_RETRIES has been reached, but I don't know how and where to throw it appropriately :-) Any help is appreciated.
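
    One way to keep the retry-and-rethrow logic in one place is a small helper that retries only on SQL timeouts and rethrows once the limit is hit; the names below are illustrative, not part of the original code.

        // Requires: using System; using System.Data.SqlClient;
        private const int MAX_RETRIES = 3;

        private static T WithRetry<T>(Func<T> call)
        {
            for (int attempt = 1; ; attempt++)
            {
                try
                {
                    return call();
                }
                catch (SqlException ex)
                {
                    bool isTimeout = (ex.Number == -2);        // -2 = "Timeout expired"
                    if (!isTimeout || attempt >= MAX_RETRIES)
                        throw;                                 // surfaces to the caller after the last retry
                }
            }
        }

        // usage inside the using block:
        //   int result = WithRetry(() => dc.myStoredProc1(...));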

    Read the article

  • Linq. Help me tune this!

    - by dtrick
    I have a linq query that is causing some timeout issues. Basically, I have a query that is returning the top 100 results from a table that has approximately 500,000 records. Here is the query: using (var dc = CreateContext()) { var accounts = string.IsNullOrEmpty(searchText) ? dc.Genealogy_Accounts .Where(a => a.Genealogy_AccountClass.Searchable) .OrderByDescending(a => a.ID) .Take(100) : dc.Genealogy_Accounts .Where(a => (a.Code.StartsWith(searchText) || a.Name.StartsWith(searchText)) && a.Genealogy_AccountClass.Searchable) .OrderBy(a => a.Code) .Take(100); return accounts.Select(a => } } Oddly enough it is the first linq query that is causing the timeout. I thought that by doing a 'Take' we wouldn't need to scan all 500k of records. However, that must be what is happening. I'm guessing that the join to find what is 'searchable' is causing the issue. I'm not able to denormalize the tables... so I'm wondering if there is a way to rewrite the linq query to get it to return quicker... or if I should just write this query as a Stored Procedure (and if so, what might it look like). Thanks.
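
    Before rewriting anything, it is worth seeing exactly what T-SQL the slow branch produces; DataContext.Log dumps the generated SQL, which shows whether TOP (100) and the join on Genealogy_AccountClass are translated the way you expect (a diagnostic sketch, not a fix in itself):

        using (var dc = CreateContext())
        {
            dc.Log = Console.Out;                        // writes the generated T-SQL to the console

            var accounts = dc.Genealogy_Accounts
                             .Where(a => a.Genealogy_AccountClass.Searchable)
                             .OrderByDescending(a => a.ID)
                             .Take(100)
                             .ToList();                  // force execution so the SQL is logged
        }

    If TOP (100) is present, the next suspects are the indexes: one on the foreign key joining Genealogy_Accounts to Genealogy_AccountClass, and one on ID so the descending order can be satisfied without sorting half a million rows.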

    Read the article

  • Writing to the DataContext

    - by user738383
    I have a function. This function takes an IEnumerable<Customer> (Customer being an entity). What the function needs to do is tell the DataContext (which has a collection of Customers as a property) that its Customers property needs to be overwritten with this passed in IEnumerable<Customer>. I can't use assignment because DomainContext.Customers cannot be assigned to, as it is read only. I guess it's not clear what I'm asking, so I suppose I should say... how do I do that? So we have DataContext.Customers (of type System.Data.Linq.Table) which wants to be replaced with a System.Collections.Generic.IEnumerable. I can't just assign the latter to the former because DataContext's properties are read only. But there must be a way. Edit: here's an image: Further edit: Yes, this image does not feature a collection of the type 'Customer' but rather 'Connection'. It doesn't matter though, they are both created from tables within the linked SQL database. So there is a dc.Connections, a dc.Customers, a dc.Media and so on. Thanks in advance.
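
    Table<TEntity> cannot be assigned to, but its contents can be replaced through the change tracker: queue deletes for the current rows, queue inserts for the new ones, then SubmitChanges. A sketch, with "MyDataContext" standing in for the generated DataContext type:

        public void ReplaceCustomers(MyDataContext dc, IEnumerable<Customer> newCustomers)
        {
            dc.Customers.DeleteAllOnSubmit(dc.Customers.ToList());  // mark every existing row for deletion
            dc.Customers.InsertAllOnSubmit(newCustomers);            // queue the replacement rows
            dc.SubmitChanges();                                      // push the deletes and inserts to the database
        }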

    Read the article

< Previous Page | 15 16 17 18 19 20 21 22 23 24 25 26  | Next Page >