Search Results

Search found 33834 results on 1354 pages for 'site column'.


  • How to obtain a random sub-datatable from another data table

    - by developerit
    Introduction

    In this article, I’ll show how to get a random subset of data from a DataTable. This is useful when you already have queries that are filtered correctly but return all the rows.

    Analysis

    I came across this situation when I wanted to display a random tag cloud. I already had the query to get the keywords ordered by number of clicks and I wanted to create a tag cloud. Tags that are the most popular should have more chance of getting picked and should be displayed larger than less popular ones.

    Implementation

    In this code snippet, there is everything you need.

        ' Min size, in pixels, for the tag
        Private Const MIN_FONT_SIZE As Integer = 9
        ' Max size, in pixels, for the tag
        Private Const MAX_FONT_SIZE As Integer = 14

        ' Basic function that retrieves Tags from a database
        Public Shared Function GetTags() As MediasTagsDataTable
            ' Simple call to the TableAdapter, to get the Tags ordered by number of clicks
            Dim dt As MediasTagsDataTable = taMediasTags.GetDataValide
            ' If the query returned no result, return an empty DataTable
            If dt Is Nothing OrElse dt.Rows.Count < 1 Then
                Return New MediasTagsDataTable
            End If
            ' Set the font-size of each group of data.
            ' We are dividing our results into subsets, according to their number of clicks.
            ' Example: 10 results -> [0,2] will get font size 9, [3,5] will get font size 10, [6,8] will get 11, ...
            ' This is the number of elements in one group
            Dim groupLength As Integer = CType(Math.Floor(dt.Rows.Count / (MAX_FONT_SIZE - MIN_FONT_SIZE)), Integer)
            ' Counter of elements in the same group
            Dim counter As Integer = 0
            ' Counter of groups
            Dim groupCounter As Integer = 0
            ' Loop through the list
            For Each row As MediasTagsRow In dt
                ' Set the font-size in a custom column
                row.c_FontSize = MIN_FONT_SIZE + groupCounter
                ' Increment the counter
                counter += 1
                ' If the current group is full, start a new one
                If groupLength <= counter Then
                    counter = 0
                    groupCounter += 1
                End If
            Next
            ' Return the new DataTable with font-size
            Return dt
        End Function

        ' Function that generates the random subset
        Public Shared Function GetRandomSampleTags(ByVal KeyCount As Integer) As MediasTagsDataTable
            ' Get the data
            Dim dt As MediasTagsDataTable = GetTags()
            ' Create a new DataTable that will contain the random set
            Dim rep As MediasTagsDataTable = New MediasTagsDataTable
            ' Count the number of rows in the new DataTable
            Dim count As Integer = 0
            ' Random number generator (note: Randomize() is not needed here;
            ' it seeds the legacy VB Rnd generator, not System.Random)
            Dim rand As New Random()
            While count < KeyCount AndAlso dt.Rows.Count > 0
                ' Pick a random row (the upper bound of Random.Next is exclusive)
                Dim r As Integer = rand.Next(0, dt.Rows.Count)
                Dim tmpRow As MediasTagsRow = dt(r)
                ' Import it into the new DataTable
                rep.ImportRow(tmpRow)
                ' Remove it from the old one, to be sure not to pick it again
                dt.Rows.RemoveAt(r)
                ' Increment the counter
                count += 1
            End While
            ' Return the new subset
            Return rep
        End Function

    Pros

    This method is good because it doesn’t require much work to get working. It is a good approach when you are working with small tables, let’s say fewer than 100 records.

    Cons

    If you have more than 100 records, an out-of-memory exception may occur, since we are copying and duplicating rows. I would consider using a stored procedure instead.
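    For comparison, here is a quick C# sketch of the same idea that avoids mutating the source table: instead of removing picked rows, it shuffles an index array (a partial Fisher-Yates shuffle) and imports only the first rows. The names are illustrative, not part of the original snippet.

        using System;
        using System.Data;

        public static class DataTableSampler
        {
            // Returns a new DataTable containing up to sampleSize randomly chosen rows
            // from source, without removing or duplicating rows in the source table.
            public static DataTable GetRandomSample(DataTable source, int sampleSize)
            {
                DataTable sample = source.Clone(); // same schema, no rows
                int count = Math.Min(sampleSize, source.Rows.Count);

                // Index array: 0, 1, 2, ... n-1
                int[] indices = new int[source.Rows.Count];
                for (int i = 0; i < indices.Length; i++)
                {
                    indices[i] = i;
                }

                // Partial Fisher-Yates shuffle: only the first 'count' slots are needed.
                Random rand = new Random();
                for (int i = 0; i < count; i++)
                {
                    int j = rand.Next(i, indices.Length); // upper bound is exclusive
                    int tmp = indices[i];
                    indices[i] = indices[j];
                    indices[j] = tmp;
                    sample.ImportRow(source.Rows[indices[i]]);
                }
                return sample;
            }
        }

    For larger tables, pushing the sampling into SQL (for example, a stored procedure) remains the better option, as noted above.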

    Read the article

  • Integrating JavaScript Unit Tests with Visual Studio

    - by Stephen Walther
    Modern ASP.NET web applications take full advantage of client-side JavaScript to provide better interactivity and responsiveness. If you are building an ASP.NET application in the right way, you quickly end up with lots and lots of JavaScript code.

    When writing server code, you should be writing unit tests. One big advantage of unit tests is that they provide you with a safety net that enables you to safely modify your existing code – for example, fix bugs, add new features, and make performance enhancements – without breaking your existing code. Every time you modify your code, you can execute your unit tests to verify that you have not broken anything. For the same reason that you should write unit tests for your server code, you should write unit tests for your client code. JavaScript is just as susceptible to bugs as C#.

    There is no shortage of unit testing frameworks for JavaScript. Each of the major JavaScript libraries has its own unit testing framework. For example, jQuery has QUnit, Prototype has UnitTestJS, YUI has YUI Test, and Dojo has the Dojo Objective Harness (DOH). The challenge is integrating a JavaScript unit testing framework with Visual Studio. Visual Studio and Visual Studio ALM provide fantastic support for server-side unit tests. You can easily view the results of running your unit tests in the Visual Studio Test Results window. You can set up a check-in policy which requires that all unit tests pass before your source code can be committed to the source code repository. In addition, you can set up Team Build to execute your unit tests automatically.

    Unfortunately, Visual Studio does not provide “out-of-the-box” support for JavaScript unit tests. MS Test, the unit testing framework included in Visual Studio, does not support JavaScript unit tests. As soon as you leave the server world, you are left on your own. The goal of this blog entry is to describe one approach to integrating JavaScript unit tests with MS Test so that you can execute your JavaScript unit tests side-by-side with your C# unit tests, in exactly the same way as server-side unit tests. You can download the source code described in this blog entry by scrolling to the end of the entry.

    Rejected Approach: Browser Launchers

    One popular approach to executing JavaScript unit tests is to use a browser as a test-driver. When you use a browser as a test-driver, you open up a browser window to execute and view the results of executing your JavaScript unit tests. For example, QUnit – the unit testing framework for jQuery – takes this approach. The following HTML page illustrates how you can use QUnit to create a unit test for a function named addNumbers().
        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
        <html>
        <head>
            <title>Using QUnit</title>
            <link rel="stylesheet" href="http://github.com/jquery/qunit/raw/master/qunit/qunit.css" type="text/css" />
        </head>
        <body>
            <h1 id="qunit-header">QUnit example</h1>
            <h2 id="qunit-banner"></h2>
            <div id="qunit-testrunner-toolbar"></div>
            <h2 id="qunit-userAgent"></h2>
            <ol id="qunit-tests"></ol>
            <div id="qunit-fixture">test markup, will be hidden</div>
            <script type="text/javascript" src="http://code.jquery.com/jquery-latest.js"></script>
            <script type="text/javascript" src="http://github.com/jquery/qunit/raw/master/qunit/qunit.js"></script>
            <script type="text/javascript">
                // The function to test
                function addNumbers(a, b) {
                    return a + b;
                }
                // The unit test
                test("Test of addNumbers", function () {
                    equals(4, addNumbers(1, 3), "1+3 should be 4");
                });
            </script>
        </body>
        </html>

    This test verifies that calling addNumbers(1,3) returns the expected value 4. When you open this page in a browser, you can see that this test does, in fact, pass. The idea is that you can quickly refresh this QUnit HTML test driver page in your browser whenever you modify your JavaScript code. In other words, you can keep a browser window open and keep refreshing it over and over while you are developing your application. That way, you know very quickly whenever you have broken your JavaScript code.

    While easy to set up, this approach to executing JavaScript unit tests has several big disadvantages:
    · You must view your JavaScript unit test results in a different location than your server unit test results. The JavaScript unit test results appear in the browser and the server unit test results appear in the Visual Studio Test Results window. Because all of your unit test results don’t appear in a single location, you are more likely to introduce bugs into your code without noticing it.
    · Because your unit tests are not integrated with Visual Studio – in particular, MS Test – you cannot easily include your JavaScript unit tests when setting up check-in policies or when performing automated builds with Team Build.

    A more sophisticated approach to using a browser as a test-driver is to automate the web browser. Instead of launching the browser and loading the test code yourself, you use a framework to automate this process. There are several different testing frameworks that support this approach:
    · Selenium – Selenium is a very powerful framework for automating browser tests. You can create your tests by recording a Firefox session or by writing the test driver code in server code such as C#. You can learn more about Selenium at http://seleniumhq.org/.
    · LTAF – The ASP.NET team uses the Lightweight Test Automation Framework to test JavaScript code in the ASP.NET framework. You can learn more about LTAF by visiting the project home at CodePlex: http://aspnet.codeplex.com/releases/view/35501
    · jsTestDriver – This framework uses Java to automate the browser. jsTestDriver creates a server which can be used to automate multiple browsers simultaneously. This project is located at http://code.google.com/p/js-test-driver/
    · TestSwarm – This framework, created by John Resig, uses PHP to automate the browser. Like jsTestDriver, the framework creates a test server. You can open multiple browsers that are automated by the test server. Learn more about TestSwarm by visiting the following address: https://github.com/jeresig/testswarm/wiki
    · Yeti – This is the framework introduced by Yahoo for automating browser tests. Yeti uses server-side JavaScript and depends on Node.js. Learn more about Yeti at http://www.yuiblog.com/blog/2010/08/25/introducing-yeti-the-yui-easy-testing-interface/

    All of these frameworks are great for integration tests – however, they are not the best frameworks to use for unit tests. In one way or another, all of these frameworks depend on executing tests within the context of a “living and breathing” browser. If you create an ASP.NET Unit Test, then Visual Studio will launch a web server before executing the unit test. Why is launching a web server so bad? It is not the worst thing in the world. However, it does introduce dependencies that prevent your code from being tested in isolation. One of the defining features of a unit test – versus an integration test – is that a unit test tests code in isolation. Another problem with launching a web server when performing unit tests is that launching a web server can be slow. If you cannot execute your unit tests quickly, you are less likely to execute your unit tests each and every time you make a code change. You are much more likely to fall into the pit of failure.

    Launching a browser when performing a JavaScript unit test has all of the same disadvantages as launching a web server when performing an ASP.NET unit test. Instead of testing a unit of JavaScript code in isolation, you are testing JavaScript code within the context of a particular browser. Using the frameworks listed above for integration tests makes perfect sense. However, I want to consider a different approach for creating unit tests for JavaScript code.

    Using Server-Side JavaScript for JavaScript Unit Tests

    A completely different approach to executing JavaScript unit tests is to perform the tests outside of any browser. If you really want to test JavaScript then you should test JavaScript and leave the browser out of the testing process. There are several ways that you can execute JavaScript on the server outside the context of any browser:
    · Rhino – Rhino is an implementation of JavaScript written in Java. The Rhino project is maintained by the Mozilla project. Learn more about Rhino at http://www.mozilla.org/rhino/
    · V8 – V8 is the open-source Google JavaScript engine written in C++. This is the JavaScript engine used by the Chrome web browser. You can download V8 and embed it in your project by visiting http://code.google.com/p/v8/
    · JScript – JScript is the JavaScript Script Engine used by Internet Explorer (up to but not including Internet Explorer 9), Windows Script Host, and Active Server Pages.

    Internet Explorer is still the most popular web browser. Therefore, I decided to focus on using the JScript Script Engine to execute JavaScript unit tests.

    Using the Microsoft Script Control

    There are two basic ways that you can pass JavaScript to the JScript Script Engine and execute the code: use the Microsoft Windows Script Interfaces or use the Microsoft Script Control. The difficult and proper way to execute JavaScript using the JScript Script Engine is to use the Microsoft Windows Script Interfaces. You can learn more about the Script Interfaces by visiting http://msdn.microsoft.com/en-us/library/t9d4xf28(VS.85).aspx The main disadvantage of using the Script Interfaces is that they are difficult to use from .NET.
    There is a great series of articles on using the Script Interfaces from C# located at http://www.drdobbs.com/184406028. I picked the easier alternative and used the Microsoft Script Control. The Microsoft Script Control is an ActiveX control that provides a higher-level abstraction over the Windows Script Interfaces. You can download the Microsoft Script Control from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac

    After you download the Microsoft Script Control, you need to add a reference to it to your project. Select the Visual Studio menu option Project, Add Reference to open the Add Reference dialog. Select the COM tab and add Microsoft Script Control 1.0. Using the Script Control is easy. You call the Script Control AddCode() method to add JavaScript code to the Script Engine. Next, you call the Script Control Run() method to run a particular JavaScript function. The reference documentation for the Microsoft Script Control is located at the MSDN website: http://msdn.microsoft.com/en-us/library/aa227633%28v=vs.60%29.aspx

    Creating the JavaScript Code to Test

    To keep things simple, let’s imagine that you want to test the following JavaScript function named addNumbers() which simply adds two numbers together:

    MvcApplication1\Scripts\Math.js

        function addNumbers(a, b) {
            return 5;
        }

    Notice that the addNumbers() method always returns the value 5. Right now, it will not pass a good unit test. Create this file and save it in your project with the name Math.js in your MVC project’s Scripts folder (save the file in your actual MVC application and not your MVC test application).

    Creating the JavaScript Test Helper Class

    To make it easier to use the Microsoft Script Control in unit tests, we can create a helper class. This class contains two methods:
    · LoadFile() – Loads a JavaScript file. Use this method to load the JavaScript file being tested or the JavaScript file containing the unit tests.
    · ExecuteTest() – Executes the JavaScript code. Use this method to execute a JavaScript unit test.

    Here’s the code for the JavaScriptTestHelper class:

    JavaScriptTestHelper.cs

        using System;
        using System.IO;
        using Microsoft.VisualStudio.TestTools.UnitTesting;
        using MSScriptControl;

        namespace MvcApplication1.Tests
        {
            public class JavaScriptTestHelper : IDisposable
            {
                private ScriptControl _sc;
                private TestContext _context;

                /// <summary>
                /// You need to use this helper with Unit Tests and not
                /// Basic Unit Tests because you need a Test Context
                /// </summary>
                /// <param name="testContext">Unit Test Test Context</param>
                public JavaScriptTestHelper(TestContext testContext)
                {
                    if (testContext == null)
                    {
                        throw new ArgumentNullException("testContext");
                    }
                    _context = testContext;
                    _sc = new ScriptControl();
                    _sc.Language = "JScript";
                    _sc.AllowUI = false;
                }

                /// <summary>
                /// Load the contents of a JavaScript file into the
                /// Script Engine.
                /// </summary>
                /// <param name="path">Path to JavaScript file</param>
                public void LoadFile(string path)
                {
                    var fileContents = File.ReadAllText(path);
                    _sc.AddCode(fileContents);
                }

                /// <summary>
                /// Pass the name of the test function that you want to execute.
                /// </summary>
                /// <param name="testMethodName">JavaScript function name</param>
                public void ExecuteTest(string testMethodName)
                {
                    dynamic result = null;
                    try
                    {
                        result = _sc.Run(testMethodName, new object[] { });
                    }
                    catch
                    {
                        var error = ((IScriptControl)_sc).Error;
                        if (error != null)
                        {
                            var description = error.Description;
                            var line = error.Line;
                            var column = error.Column;
                            var text = error.Text;
                            var source = error.Source;
                            if (_context != null)
                            {
                                var details = String.Format("{0} \r\nLine: {1} Column: {2}",
                                    source, line, column);
                                _context.WriteLine(details);
                            }
                            throw new AssertFailedException(error.Description);
                        }
                        throw;
                    }
                }

                public void Dispose()
                {
                    _sc = null;
                }
            }
        }

    Notice that the JavaScriptTestHelper class requires a Test Context to be instantiated. For this reason, you can use the JavaScriptTestHelper only with a Visual Studio Unit Test and not a Basic Unit Test (these are two different types of Visual Studio project items). Add the JavaScriptTestHelper file to your MVC test application (for example, MvcApplication1.Tests).

    Creating the JavaScript Unit Test

    Next, we need to create the JavaScript unit test function that we will use to test the addNumbers() function. Create a folder in your MVC test project named JavaScriptTests and add the following JavaScript file to this folder:

    MvcApplication1.Tests\JavaScriptTests\MathTest.js

        /// <reference path="JavaScriptUnitTestFramework.js"/>
        function testAddNumbers() {
            // Act
            var result = addNumbers(1, 3);
            // Assert
            assert.areEqual(4, result, "addNumbers did not return right value!");
        }

    The testAddNumbers() function takes advantage of another JavaScript library named JavaScriptUnitTestFramework.js. This library contains all of the code necessary to make assertions. Add the following JavaScriptUnitTestFramework.js to the same folder as the MathTest.js file:

    MvcApplication1.Tests\JavaScriptTests\JavaScriptUnitTestFramework.js

        var assert = {
            areEqual: function (expected, actual, message) {
                if (expected !== actual) {
                    throw new Error("Expected value " + expected +
                        " is not equal to " + actual + ". " + message);
                }
            }
        };

    There is only one type of assertion supported by this file: the areEqual() assertion. Most likely, you would want to add additional types of assertions to this file to make it easier to write your JavaScript unit tests.

    Deploying the JavaScript Test Files

    This step is non-intuitive. When you use Visual Studio to run unit tests, Visual Studio creates a new folder and executes a copy of the files in your project. After you run your unit tests, your Visual Studio Solution will contain a new folder named TestResults that includes a subfolder for each test run. You need to configure Visual Studio to deploy your JavaScript files to the test run folder, or Visual Studio won’t be able to find your JavaScript files when you execute your unit tests and you will get a deployment error when you attempt to execute them. You can configure Visual Studio to deploy your JavaScript files by adding a Test Settings file to your Visual Studio Solution. It is important to understand that you need to add this file to your Visual Studio Solution and not a particular Visual Studio project. Right-click your Solution in the Solution Explorer window and select the menu option Add, New Item. Select the Test Settings item and click the Add button. After you create a Test Settings file for your solution, you can indicate that you want a particular folder to be deployed whenever you perform a test run.
    Select the menu option Test, Edit Test Settings to edit your test configuration file. Select the Deployment tab and select your MVC test project’s JavaScriptTests folder to deploy. Click the Apply button and the Close button to save the changes and close the dialog.

    Creating the Visual Studio Unit Test

    The very last step is to create the Visual Studio unit test (the MS Test unit test). Add a new unit test to your MVC test project by selecting the menu option Add New Item and selecting the Unit Test project item (do not select the Basic Unit Test project item). The difference between a Basic Unit Test and a Unit Test is that a Unit Test includes a Test Context. We need this Test Context to use the JavaScriptTestHelper class that we created earlier. Enter the following test method for the new unit test:

        [TestMethod]
        public void TestAddNumbers()
        {
            var jsHelper = new JavaScriptTestHelper(this.TestContext);

            // Load JavaScript files
            jsHelper.LoadFile("JavaScriptUnitTestFramework.js");
            jsHelper.LoadFile(@"..\..\..\MvcApplication1\Scripts\Math.js");
            jsHelper.LoadFile("MathTest.js");

            // Execute JavaScript Test
            jsHelper.ExecuteTest("testAddNumbers");
        }

    This code uses the JavaScriptTestHelper to load three files:
    · JavaScriptUnitTestFramework.js – Contains the assert functions.
    · Math.js – Contains the addNumbers() function from your MVC application which is being tested.
    · MathTest.js – Contains the JavaScript unit test function.

    Next, the test method calls the JavaScriptTestHelper ExecuteTest() method to execute the testAddNumbers() JavaScript function.

    Running the Visual Studio JavaScript Unit Test

    After you complete all of the steps described above, you can execute the JavaScript unit test just like any other unit test. You can use the keyboard combination CTRL-R, CTRL-A to run all of the tests in the current Visual Studio Solution. Alternatively, you can use the buttons in the Visual Studio toolbar to run the tests. (Unfortunately, the Run All Impacted Tests button won’t work correctly because Visual Studio won’t detect that your JavaScript code has changed. Therefore, you should use either the Run Tests in Current Context or Run All Tests in Solution options instead.) The results of running the JavaScript tests appear side-by-side with the results of running the server tests in the Test Results window. For example, if you Run All Tests in Solution, you will see that the TestAddNumbers() JavaScript test has failed. That is good, because our addNumbers() function is hard-coded to always return the value 5. If you double-click the failing JavaScript test, you can view additional details such as the JavaScript error message and the line number of the JavaScript code that failed.

    Summary

    The goal of this blog entry was to explain an approach to creating JavaScript unit tests that can be easily integrated with Visual Studio and Visual Studio ALM. I described how you can use the Microsoft Script Control to execute JavaScript on the server. By taking advantage of the Microsoft Script Control, we were able to execute our JavaScript unit tests side-by-side with all of our other unit tests and view the results in the standard Visual Studio Test Results window.
You can download the code discussed in this blog entry from here: http://StephenWalther.com/downloads/Blog/JavaScriptUnitTesting/JavaScriptUnitTests.zip Before running this code, you need to first install the Microsoft Script Control which you can download from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac
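    One more note on the deployment step: as a hedged alternative to a solution-wide Test Settings file, MSTest can also deploy files per test with the [DeploymentItem] attribute. A minimal sketch follows; the relative paths are illustrative and depend on your build layout, and deployment still has to be enabled for the test run. The method belongs in the same test class as TestAddNumbers(), so the TestContext property is available.

        // Sketch: declare the JavaScript files this one test needs, instead of
        // deploying a whole folder via .testsettings. Paths are illustrative.
        [TestMethod]
        [DeploymentItem(@"JavaScriptTests\MathTest.js")]
        [DeploymentItem(@"JavaScriptTests\JavaScriptUnitTestFramework.js")]
        public void TestAddNumbers_WithDeploymentItems()
        {
            using (var jsHelper = new JavaScriptTestHelper(this.TestContext))
            {
                jsHelper.LoadFile("JavaScriptUnitTestFramework.js");
                jsHelper.LoadFile(@"..\..\..\MvcApplication1\Scripts\Math.js");
                jsHelper.LoadFile("MathTest.js");
                jsHelper.ExecuteTest("testAddNumbers");
            }
        }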

    Read the article

  • Tracking download of non-html (like pdf) downloads with jQuery and Google Analytics

    - by developerit
    Hi folks, it’s been quite calm at Developer IT’s this summer since we were all involved in other projects, but we are slowly coming back. In this post, we will present a simple way of tracking file downloads with Google Analytics with the help of jQuery.

    We work for a client that offers a lot of PDF files for download on their web site and wanted to know which ones are the most popular. They have been using Google Analytics for a long time now and we did not want to have a second interface in order to present those stats to our client, so using IIS logs was not an option to consider. Since Google already offers us a splendid web interface and a powerful API, we decided to hook simple JavaScript code into the jQuery click event to notify Analytics that a PDF has been requested.

        (function ($) {
            function trackLink(e) {
                var url = $(this).attr('href');
                //alert(url); // for debug purposes

                // old page tracker code
                pageTracker._trackPageview(url);

                // you can use the new one too
                _gaq.push(["_trackPageview", url]);

                // always return true, in order for the browser to continue its job
                return true;
            }

            // When DOM ready
            $(function () {
                // hook up the click event
                $('.pdf-links a').click(trackLink);
            });
        })(jQuery);

    You can be more precise, or even be sure not to miss one click, by changing the selector which hooks up the click event. I have been using this code to track AJAX requests and it works flawlessly.

    Read the article

  • Insufficient Permissions Problems with MSDeploy and TFS Build 2010

    - by jdanforth
    I ran into these problems on a TFS 2010 RC setup where I wanted to deploy a web site as part of the nightly build:

        C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.targets (3481): Web deployment task failed. (An error occurred when reading the IIS Configuration File 'MACHINE/REDIRECTION'. The identity performing the operation was 'NT AUTHORITY\NETWORK SERVICE'.)
        An error occurred when reading the IIS Configuration File 'MACHINE/REDIRECTION'. The identity performing the operation was 'NT AUTHORITY\NETWORK SERVICE'.
        Filename: \\?\C:\Windows\system32\inetsrv\config\redirection.config
        Error: Cannot read configuration file due to insufficient permissions

    As you can see, I’m running the build service as NETWORK SERVICE, which is quite common. The first thing I did then was to give NETWORK SERVICE read access to the whole directory where redirection.config is sitting: C:\Windows\system32\inetsrv\config. That gave me a new error:

        C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.targets (3481): Web deployment task failed. (Attempted to perform an unauthorized operation.)

    The reason for this problem was that NETWORK SERVICE didn’t have write permission to the place where I’d told MSDeploy to put the web site physically on the disk. Once I’d given NETWORK SERVICE the right permissions, MSDeploy completed as expected! NOTE: I’ve not had this problem with TFS 2010 RTM, so it might be just an RC issue!
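    For reference, the permission grant can also be scripted so the nightly build server is reproducible; a sketch with icacls (the path is hypothetical – substitute the physical directory you pointed MSDeploy at):

        icacls "D:\Sites\MyWebSite" /grant "NT AUTHORITY\NETWORK SERVICE":(OI)(CI)M

    The (OI)(CI) flags make the grant inherit to new files and subfolders, and M grants modify rights.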

    Read the article

  • PHP/APC fatal error, apc_mmap: mmap failed

    - by Sudowned
    I'm seeing some intermittent CPU usage spikes to 100%, somewhat correlated with these log entries:

        [27-Feb-2012 13:29:29] PHP Fatal error: PHP Startup: apc_mmap: mmap failed: in Unknown on line 0
        [27-Feb-2012 13:29:30] PHP Fatal error: PHP Startup: apc_mmap: mmap failed: in Unknown on line 0
        [27-Feb-2012 13:29:31] PHP Fatal error: PHP Startup: apc_mmap: mmap failed: in Unknown on line 0
        [27-Feb-2012 13:29:31] PHP Fatal error: PHP Startup: apc_mmap: mmap failed: in Unknown on line 0

    phpinfo() indicates that APC is set up, and as far as I can tell this error doesn't cause visible 500 errors on the live site, which is a WordPress install that gets about 600k views monthly. Google's been unhelpful so far, and I was hoping that someone here had some insight as to what's causing this and how to fix it. Curiously, this error only shows up in /usr/local/apache2/logs/error_log and not in the error_log for the cPanel-configured site.
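    For anyone hitting the same thing: apc_mmap failures usually point at the shared-memory segment APC allocates at startup. A hedged php.ini sketch of the settings typically involved (values are illustrative, not a recommendation for this particular server):

        apc.enabled = 1
        ; Size of the shared memory segment; asking for more than the OS will
        ; grant is a common cause of "apc_mmap: mmap failed".
        ; (Older APC builds expect a plain number of megabytes, e.g. "64".)
        apc.shm_size = 64M
        ; mmap-backed APC normally uses a single segment.
        apc.shm_segments = 1
        ; Optional: back the mmap with a file instead of anonymous memory.
        ;apc.mmap_file_mask = /tmp/apc.XXXXXX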

    Read the article

  • Installing UCMA 3.0 and Creating a Communications Server "14" Trusted Application Pool

    A lot of setup and administration tasks have gotten a lot easier in Communications Server 14; one of them is building an application server to develop and run your UCMA 3.0 applications on. In this post, I'll walk you through installing the UCMA 3.0 Core SDK and creating a Trusted Application Pool on the server, thus adding it to the Communications Server 14 topology and allowing you to host and run UCMA 3.0 applications on it.

    Note: These instructions will change slightly as the bits get updated for the eventual Beta release; I will update this post as soon as I get a chance to run this setup on a more recent build.

    I'm doing the install on a simple Communications Server 14 topology consisting of the following Windows Server 2008 R2 Hyper-V images:
    · DC – Domain Controller
    · ExchangeUM – Exchange Server 2010
    · CS-SE – Microsoft Communications Server 2010 Standard Edition
    · TS – Development machine

    I'll walk through setting up UCMA 3.0 on the TS VM, which is a fully patched Windows Server 2008 R2 machine that is joined to the Fabrikam domain. I'm also running Visual Studio 2010 on this VM because I intend to use it as a development machine. In a future post, I'll walk through installing just the UCMA 3.0 runtime to build a true production UCMA application server.

    I'm making a couple of assumptions here:
    · You have an existing CS 2010 site and cluster configured (we'll look at this in a future post)
    · You're starting with a fully patched Windows Server 2008 R2 machine
    · The machine is joined to your domain

    This walkthrough was done in my Fabrikam VM environment but can easily be modified for your own environment.

    Installing the UCMA 3.0 SDK

    Let's start by installing the UCMA 3.0 SDK. Run UcmaSdkWebDownload.msi to kick off the SDK installer package extract process. The installed package is extracted to C: >> Program Files >> Microsoft UCMA 3.0 >> SDK Installer Package. Browse there and run setup.exe. Click Install to install the UCMA 3.0 Core SDK and Workflow SDK.

    Install Communications Server Core Components

    UCMA 3.0 introduces a new concept called auto-provisioning, which is most easily explained from the developer point of view. Remember what your app.config looked like in UCMA 2.0? You had to store the application GRUU, the trusted contact SIP URI, the port for your application, and the name of the certificate authority. That's all gone with auto-provisioning: all you need in your app.config is your ApplicationId, e.g. urn:application:MyApplication. How does CS 2010 do this? All of the application's configuration data is associated with the application's id. UCMA queries a replicated copy of the Central Management Database to retrieve the application's configuration data and also the configuration data for any endpoints.

    In this step, we'll run Bootstrapper.exe to install the CS Core components; this checks for the following components and installs them if they are not already present:
    · VcRedist
    · Sqlexpress
    · Sqlnativeclient
    · Sqlbackcompat
    · Ucmaredist
    · OcsCore.msi

    Open a command window at C: >> Program Files >> Microsoft Communications Server 2010 >> Deployment and run the following command:

        Bootstrapper.exe /BootstrapReplica /MinCache /SourceDirectory:"%ProgramFiles%\Microsoft UCMA 3.0\SDK Installer Package\Prereq\BootstrapperCache"

    Create a New Trusted Application Pool

    The next step is to create a new trusted application pool for the new server. Fire up the Communications Server Management Shell from Start >> Microsoft Communications Server 2010 >> Communications Server Management Shell and enter the following PowerShell command:

        New-CsTrustedApplicationPool -Identity <FQDN of Server> -Registrar <FQDN of CS Server> -Site <CS Site Name>

    Verify that the new server was added to the CS topology by running the following PowerShell command:

        (Get-CsTopology -AsXml).ToString() > Topology.xml

    This creates a file called Topology.xml in the directory that you ran the command from. Open the file, find the Clusters section and look for a node for the new server. The Cluster Fqdn is the name of your server; note the name of the Site that this Cluster is a part of.

        <Cluster Fqdn="appsrv.fabrikam.com" RequiresReplication="true" RequiresSetup="true">
          <ClusterId SiteId="UcMarketing2" Number="5" />
          <Machine OrdinalInCluster="1" Fqdn="appsrv.fabrikam.com">
            <NetInterface InterfaceSide="Primary" InterfaceNumber="1" IPAddress="0.0.0.0" />
          </Machine>
        </Cluster>

    Configure CS Management Store Replication

    At this point, we have the CS Core components installed and the server configured as a trusted application pool. We now need to set up replication so that the Central Management Store replicates down to the new server. From the Communications Server Management Shell, run the following PowerShell command to enable the Replica service on the new server:

        Enable-CSReplica

    The Replica service is enabled, but hasn't done anything yet. This can be verified by running the following PowerShell command to check the replication status for the various servers in the topology:

        Get-CSManagementStoreReplicationStatus

    At this point, the UpToDate property of the new server is still False. Run the following PowerShell command to force the replication to run:

        Invoke-CSManagementStoreReplication

    Run Get-CSManagementStoreReplicationStatus again to verify that the new server is now up to date.

    Request and Set a New Certificate

    The last step in the process is to request a new certificate from the certificate authority on the domain and assign it to the new server. From the Communications Server Management Shell, run the following PowerShell command to request a new certificate:

        Request-CSCertificate -Action new -Type default -CA <Domain Controller FQDN>\<Certificate Authority>

    Setting the -Verbose switch on the cmdlet creates an Xml file with its output. Open the Xml file and copy the thumbprint of the generated certificate.

        <?xml version="1.0" encoding="utf-8"?>
        <Action Name="Request-CsCertificate" Time="20100512T212258">
          <Action Name="Request-CsCertificate" Time="20100512T212258">
            <Info Title="Connection" Time="20100512T212258">Data Source=(local)\rtclocal;Initial Catalog=xds;Integrated Security=True</Info>
            <Action Time="20100512T212258">
              <Info Title="Certificate use" Time="20100512T212258">urn:certref:default</Info>
              <Info Title="Subject distinguished name" Time="20100512T212258">CN="appsrv2.fabrikam.com"</Info>
              <Info Time="20100512T212259">The certificate request is submitted to the Certification Authority dc.fabrikam.com\FabrikamCA.</Info>
              <Info Time="20100512T212259">The certificate was issued.</Info>
              <Info Time="20100512T212259">The certificate was imported with thumbprint AFC3C46E459C1A39AD06247676F3555826DBF705.</Info>
              <Complete Time="20100512T212259" />
            </Action>
            <Info Title="command status" Time="20100512T212259">Command execution processing completed</Info>
            <Action Name="DeploymentXdsCmdlet.SaveCachedItems" Time="20100512T212259">
              <Info Time="20100512T212259">0 updates</Info>
              <Complete Time="20100512T212259" />
            </Action>
            <Info Title="command status" Time="20100512T212259">Command has completed</Info>
          </Action>
        </Action>

    Run the following PowerShell command to set the certificate:

        Set-CsCertificate -Type Default -Thumbprint <Thumbprint>

    Wrapping Up

    You now have a new UCMA 3.0 application server in your Communications Server 2010 server topology. You can provision trusted applications and trusted application endpoints on the new server using the Communications Server 2010 Management Shell. We'll take a look at how to do that in another post.
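    To see auto-provisioning from the developer side, here is a minimal C# sketch of starting a provisioned UCMA 3.0 platform. It assumes an application id such as urn:application:myapplication has already been provisioned in the topology; the friendly name and the synchronous wait are illustrative, not part of the walkthrough above.

        using System;
        using Microsoft.Rtc.Collaboration;

        class ProvisionedAppHost
        {
            static void Main()
            {
                // The application id must match a trusted application provisioned
                // in the CS topology; no GRUU, port, or CA name is needed here.
                var settings = new ProvisionedApplicationPlatformSettings(
                    "MyApplication",                      // user agent string (illustrative)
                    "urn:application:myapplication");     // provisioned application id

                var platform = new CollaborationPlatform(settings);

                // Startup is asynchronous; block on it here for brevity.
                platform.EndStartup(platform.BeginStartup(null, null));
                Console.WriteLine("Platform started; endpoint configuration is discovered automatically.");

                platform.EndShutdown(platform.BeginShutdown(null, null));
            }
        }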

    Read the article

  • SQL SERVER – World Shapefile Download and Upload to Database – Spatial Database

    - by pinaldave
    During my recent training, I was asked by a student if I knew a place where one can download spatial files for all the countries around the world, as well as whether there is a way to upload shapefiles to a database. Here is a quick tutorial. VDS Technologies has spatial files for every location, for free. You can download the spatial file from here. If you cannot find the spatial file you are looking for, please leave a comment here, and I will send you the necessary details. Unzip the file to a folder. Then, download the Shape2SQL tool from SharpGIS. This is one of the best tools available to convert shapefiles to SQL tables. Afterwards, run the .exe file. When the file is run for the first time, it will ask for the database properties. Provide your database details. Select the appropriate shape files and the tool will fill in the essential details automatically. If you do not want to create an index on a column, uncheck the box beside it. You also have to be careful regarding your data, whether it is GEOMETRY or GEOGRAPHY. In this example, it is GEOMETRY data. Click “Upload to Database”. It will show you the upload progress. Once the shape file is uploaded, close the application and open SQL Server Management Studio (SSMS). Run the following code in the SSMS Query Editor.

        USE Spatial
        GO
        SELECT *
        FROM dbo.world
        GO

    This will show the complete map of the world after you click on Spatial Results in the Spatial tab. In the Spatial Results set, a Zoom feature is available. From the Select label column, choose the country name in order to show the country name overlaying the country borders. Let me know if this tutorial is helpful enough. I am planning to write a few more posts about this later. Note: Please note that the images displayed here do not reflect the original political boundaries. These data are pretty old and can probably draw incorrect maps as well. I have personally spotted several parts of the map where some countries are located a little bit inaccurately. Reference: Pinal Dave (http://blog.SQLAuthority.com)
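    If you only need one country rather than the whole table, the spatial column can be queried directly. A small sketch, assuming the import produced a geometry column named geom and a name column named NAME (the actual column names depend on the shapefile you imported):

        USE Spatial
        GO
        -- Draw a single country; adjust column names to match your shapefile
        SELECT [NAME], geom
        FROM dbo.world
        WHERE [NAME] = 'India'
        GO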

    Read the article

  • IIS7 - Lock Violation error, HTTP handlers, modules, and the <clear /> element

    - by Daniel Schaffer
    I have an ASP.NET site that uses its own set of HTTP handlers and does not need any modules. So, in IIS6, all I had to do was this in my web.config:

        <httpModules>
          <clear />
        </httpModules>

    However, if I try to do the same in the system.webServer area for IIS7, I get a 500 error when I try to view the site, and in IIS Manager when I try to view the handler mappings, I get a popup box with the message:

        There was an error while performing this operation
        Details:
        Filename: \\?\C:\Sites\TheWebSiteGoesHere\web.config
        Line number: 39
        Error: Lock violation

    Line 39 is where the <clear /> element is. Some googling led me to a solution involving running this command:

        %windir%\system32\inetsrv\appcmd.exe unlock config -section:system.webServer/modules

    ...but that did not solve the problem.
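    For context, a workaround often suggested for this situation is to remove entries individually rather than clearing the whole collection, since <clear /> fails as soon as the collection contains an entry locked at a parent level. A hedged sketch (the module names are illustrative, not a known-good list for this site):

        <system.webServer>
          <modules>
            <!-- Instead of <clear />, remove only the modules this site does
                 not need; entries locked in applicationHost.config must stay,
                 or be unlocked there first. -->
            <remove name="UrlRoutingModule" />
            <remove name="Session" />
          </modules>
        </system.webServer>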

    Read the article

  • 14+ Real Estate WordPress Themes

    - by Aditi
    If you are looking for a great WordPress real estate theme, below is a list of some of the best, so you can find the one best suited to you and keep pace with the increasing demands of the real estate business. We have covered only the best themes available. The themes are flexible and can be used by anybody in the real estate business: whether you are a realtor, agent, appraiser or agency, they can be modified to suit your use.

    Estate

    It is an immensely powerful and simple-to-manage business theme. It offers advanced SEO control, clean code and styling modification features. It adds a new “Properties” management facility when installed – proving it’s far more than just a WordPress theme. It offers flexible page templates, an advanced search facility that allows you to drill down into properties based on very specific criteria, Google Maps integration and smart property image management. It is a complete web solution. It also has IDX functionality due to dsIDXpress plugin integration, which allows multi-listing services.
    Price: $200 View Demo Download

    ElegantEstate

    It makes your WordPress blog into a full-featured real estate website. The theme makes browsing your listings easy, and adds special integration features for property info, photos, Google Maps and more. Help increase sales by establishing an elegant and professional online presence today. It is compatible with Opera, Netscape, Safari and WordPress 3.0. It comes with five color schemes, threaded comments, an optional blog-style structure, Gravatar support, Firefox compatibility, IE8 + IE7 + IE6 compatibility, advertisement readiness, widget-ready sidebars, a theme options page, custom thumbnail images, PSD files, valid XHTML + CSS, smooth table-less design, ePanel theme options, page templates, complete localization and many more features.
    Price: $39 (package includes more than 55 themes) View Demo Download

    Open House

    Open House is fully compatible with WordPress 3.0+ and is a highly customizable real estate WordPress theme. It has Google Maps integration with Street View and a professional look for agents and realtors alike. It is suited for all markets and countries, with theme localization, translation and internationalization; English, Spanish and Portuguese language files are provided in the Developer Package. It has custom scripts which make it easy to add/delete/modify listings. It also includes a photo gallery with a lightbox effect, gorgeous photo fade animations and automatic Google Maps integration. The theme can be used as a single- or multi-agent website with individual agent/realtor pages with listings and biography information, an agent photo uploader and a financing calculator. There is multi-category search for potential customers to locate the house they want.
    Price: $39.95 essential | $69.95 standard | $99.95 premium View Demo Download

    Residence Real Estate

    It is a stunning WordPress 3.0+ compatible real estate theme. It has a dynamic real estate framework management module with easy edit/delete/add-more-features options, which makes this theme super easy to customize to market needs. It allows you to add your own labels and values in your own language and switch the theme to your own language, with English and Spanish files included and the ability to add your own language. It offers multi-category search with breadcrumb-filtered results and easy photo gallery management with drag-drop sorting of images. It allows you to build your own multi-category search section menu with custom labels, choices and unlimited dropdown menus, presented in a professional module with search results in breadcrumb navigation.
    Price: $39.95 essential | $69.95 standard | $99.95 premium View Demo Download

    Smooth

    Smooth is a WordPress real estate theme. It is a complete theme which comes with multi-category search, Google Maps integration and an agent photo and logo uploader, offering a professional and extremely affordable solution for realtors and agents to showcase their properties with ease. You can add your listings with the extremely easy and flexible dynamic real estate framework, edit/add/modify/delete all features, labels and values within the WordPress administration and upload unlimited photos to your galleries with the latest WordPress 3.0+ features. It is a complete solution for real estate sites.
    Price: $39.95 essential | $69.95 standard | $99.95 premium View Demo Download

    Homeowners

    It is another WordPress real estate theme: a fast-loading, optimized theme with Google Maps integration, fully compatible with WordPress 3.0 features and all real estate markets. It has a professional, clean look and is full of features that are extremely easy to modify. Twelve new styles are provided, and English, Spanish and Portuguese language files are included in the Developer Package. Homeowners features custom scripts that make adding/deleting/modifying listings an easy task, with an included photo gallery with a lightbox effect and automatic Google Maps integration with Street View (new). Agents will have access only to their own listings, and the per-account listing management makes this theme an ideal, affordable solution for realtors and real estate agencies. The theme can be used as a single- or multi-agent website with individual agent/realtor pages with listings and biography information, an agent photo uploader and a financing calculator. Multi-category search is also provided.
    Price: $39.95 essential | $69.95 standard | $99.95 premium View Demo Download

    Real Agent Real Estate

    This theme is a clean, grid-based, WordPress 3.0+ compatible real estate theme. It has a dynamic real estate framework management module with easy edit/delete/add-more-features options and is easy to customize to your market. It allows you to add your own labels and values in your own language and switch the theme to your own language, with English and Spanish files included and the ability to add your own language. It offers multi-category search with breadcrumb-filtered results and easy photo gallery management with drag-drop sorting of images. You can upload property photos in bulk with the native WordPress uploader and the new image editing and resizing options in WordPress 3.0+. The theme features five different color styles – blue, black, red, green and purple – with professional layouts and logo and agent photo uploaders. This theme suits individual and multiple agents alike.
    Price: $39.95 essential | $69.95 standard | $99.95 premium View Demo Download

    AgentPress

    The AgentPress theme is an ideal solution for real estate agents. It offers multiple page templates that can be used to create a complete real estate website. You can easily create anything from single-property templates to a custom homepage with it. It is compatible with WordPress 3.0 and 3.1. It has custom background/header options, a property template, 6 layout options, fixed width, threaded comments and many more features.
    Price: $99.95 View Demo Download

    Real Estate

    It is one of the best real estate themes. It offers single-click auto-install of the site, lets users pay for and submit properties on your site, a multi-agent site with profiles, a strategically built real estate site with professional design, a user dashboard for editing/renewing submissions, auto-generated Google Maps and image slideshows, and many more unique features. Once users search for property matching their criteria, the properties are listed with all the necessary parameters that let them select the property of their choice. Users can also add a property to favorites so they can check it later from their member-area dashboard. The admin may display a different sidebar on this page and add widgets of their choice. This theme is full of custom, dynamic widgets such as top agents, a finance calculator, user login, advertising blocks, testimonials and so on. There is a property details page where users can see the actual property. The agent details are displayed with full contact details and appropriate links, so the visitor can get all the info about the property being sold and the seller, and may contact them by filling out a simple form. The email will be sent directly to the person who listed the property.
    Price: $89.95 Single | $159.95 Developer View Demo Download

    Broker Real Estate

    It is also a WordPress 3.0+ compatible real estate theme. It has a featured property slideshow and a dynamic real estate framework management module with easy edit/delete/add-more-features options. You can add your own labels and values in your own language. It offers multi-category search with breadcrumb-filtered results and easy photo gallery management with drag-drop sorting of images. You can also build your own multi-category search section menu with custom labels, choices and unlimited dropdown menus.
    Price: $39.95 essential | $69.95 standard | $99.95 premium View Demo Download

    Decasa

    It has a custom search panel that lets your users easily browse your properties by keyword search or category select drop-downs. It offers a property exposé: a user-friendly overview of the most important details of each real estate object. You can easily add this data through a post settings meta box on the post edit screen. You can easily create a real estate image gallery, and its theme options panel makes the basic theme settings easy. It supports the new WordPress post thumbnail feature: when uploading an image file, the theme will automatically create all the necessary image sizes. You can also create your own custom menu easily and quickly with drag and drop, without touching any code.
    Price: 39 € View Demo Download

    RealtorPress

    A real estate premium WordPress theme from PremiumPress. A versatile WordPress theme that can be used by individual agents or real estate companies. The theme allows you to easily add property listings via the custom backend admin area or import CSV spreadsheets. It features customisable search options, Google Maps integration, a real estate data custom field creator, image management tools and more.
    Price: $79 | Premium Collection: $259 (all PremiumPress themes) View Demo Download

    Read the article

  • WinRT WebView and Cookies

    - by javarg
    Turns out that the WebView control in WinRT is much more limited than its counterpart in WPF/Silverlight. There are some great articles out there on how to extend the control so that it supports navigation events and some other features. For a personal project I'm working on, I needed to grab cookies a web site generated for the user. Basically, after a user authenticated to a web site, I needed to get the authentication cookies and generate some extra requests on her behalf. In order to do so, I found this great article about a similar case using SharePoint and Azure ACS. The secret is to use p/invoke to call the native InternetGetCookieEx to get cookies for the current URL displayed in the WebView control.

        void WebView_LoadCompleted(object sender, NavigationEventArgs e)
        {
            var urlPattern = "http://someserver.com/somefolder";
            if (e.Uri.ToString().StartsWith(urlPattern))
            {
                var cookies = InternetGetCookieEx(e.Uri.ToString());
                // Do something with the cookies
            }
        }

        static string InternetGetCookieEx(string url)
        {
            uint sizeInBytes = 0;
            // Gets capacity length first
            InternetGetCookieEx(url, null, null, ref sizeInBytes, INTERNET_COOKIE_HTTPONLY, IntPtr.Zero);
            uint bufferCapacityInChars = (uint)Encoding.Unicode.GetMaxCharCount((int)sizeInBytes);
            // Now get cookie data
            var cookieData = new StringBuilder((int)bufferCapacityInChars);
            InternetGetCookieEx(url, null, cookieData, ref bufferCapacityInChars, INTERNET_COOKIE_HTTPONLY, IntPtr.Zero);
            return cookieData.ToString();
        }

    The function import using p/invoke follows:

        const int INTERNET_COOKIE_HTTPONLY = 0x00002000;

        [DllImport("wininet.dll", CharSet = CharSet.Unicode, SetLastError = true)]
        static extern bool InternetGetCookieEx(
            string pchURL,
            string pchCookieName,
            StringBuilder pchCookieData,
            ref System.UInt32 pcchCookieData,
            int dwFlags,
            IntPtr lpReserved);

    Enjoy!
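    A possible follow-up sketch for the "do something with the cookies" part: copying the scraped cookie string into a CookieContainer so the extra requests can be made on the user's behalf. The URL is a placeholder, and the naive parsing assumes simple semicolon-separated name=value pairs.

        using System;
        using System.Net;
        using System.Net.Http;
        using System.Threading.Tasks;

        static class CookieReplay
        {
            static async Task<string> GetWithCookiesAsync(string url, string rawCookies)
            {
                var cookies = new CookieContainer();
                var target = new Uri(url);

                // InternetGetCookieEx returns "name1=value1; name2=value2; ..."
                foreach (var pair in rawCookies.Split(';'))
                {
                    var parts = pair.Split(new[] { '=' }, 2);
                    if (parts.Length == 2)
                        cookies.Add(target, new Cookie(parts[0].Trim(), parts[1].Trim()));
                }

                // Attach the container so the request carries the scraped cookies.
                var handler = new HttpClientHandler { CookieContainer = cookies };
                using (var client = new HttpClient(handler))
                    return await client.GetStringAsync(target);
            }
        }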

    Read the article

  • IIS, multiple CPU cores, application pools and worker processes - best configuration for a single site

    - by Egghead Design
    Hi, we use Kentico CMS and I've exchanged emails with them about a web garden deployment. We have a single site running on a server with 8 CPU cores. In line with Kentico's advice, we have not altered the application pool web garden setting from the default, i.e. it is set to a maximum number of worker processes of 1. Our experience is that the site only uses one of the CPU cores – the others are idling. When I emailed them about this, their response was that the OS/IIS would handle this and use other cores as necessary, even though the application pool only has a single worker process. Now, I've a lot of respect for the guys at Kentico, but this doesn't seem right to me. Surely, if we want to use all cores, we need to permit eight worker processes (and implement session state storage in SQL Server)? Many thanks Tony

    Read the article

  • How to log an invalid client SSL certificate in IIS

    - by matra
    I have an IIS web site which requires a client certificate. I have turned off CRL checking. The client is unable to access the web site – he gets a 403.17 (certificate expired) error. I would like to log the certificate he is using, because I think he is using the wrong certificate. Is there a way to do this? I probably cannot use Wireshark, because the client certificate that is passed from the client is probably already encrypted. I am running a Windows 2003 server. Matra

    Read the article

  • Can't add client machine to windows server 2008 domain controller

    - by Patrick J Collins
    A bit of background before I dive into the gritty details: I have a single server running Windows Server 2003 where I host my ASP.NET website and SQL Server + Reports. I've been creating ordinary Windows user accounts to authenticate my users, and I enabled integrated Windows authentication with impersonation. I've set up a bunch of user groups which correspond to certain roles (admin, power user, normal user, etc.) and I test membership to enable or disable certain features. Overall, I'm pretty happy with the solution; it was quick to set up and I don't have to worry about messing around storing passwords and whatnot.

    Well, what I'm trying to do now is set up a new environment with 3 servers (Web, SQL, Reports) and I'd like these three servers to share common user accounts. I understand that I could add these three machines to a domain, which means installing Active Directory on one of the machines. Am I barking up the wrong tree here? Would you suggest an alternative configuration?

    Assuming that I stick with AD, I have a couple of questions regarding DNS. To be honest, I'd rather not fiddle around with the DNS settings, because my ISP already has their own DNS server which works just fine. It would appear, however, that DNS and AD are intertwined. Firstly, if I am to create a new domain called mycompany.net, do I actually need to be the registered owner of that domain name and ensure the DNS entry points to the IP address of the machine hosting AD? Secondly, for the two other machines that I am trying to add to the domain, do I need to fiddle with their DNS settings? I've tried setting the preferred DNS server IP address to that of my newly installed AD, but no luck. At this point, I can't add the two other machines to the domain.

    Here are some diagnostics that I have run based on a few suggestions I read on forums (translated from French). I ran nltest, which seems to indicate that the client can discover the domain controller. When I run dcdiag, the call to DsGetDcName fails with error 1722; I'm not really sure what that means. Any suggestions? Thanks!

        C:\Users\Administrator>nltest /dsgetdc:mycompany.net
        DC: \\REPORTS.mycompany.net
        Address: \\111.111.111.111
        Dom Guid: 3333a4ec-ca56-4f02-bb9e-76c29c6c3832
        Dom Name: mycompany.net
        Forest Name: mycompany.net
        Dc Site Name: Default-First-Site-Name
        Our Site Name: Default-First-Site-Name
        Flags: PDC GC DS LDAP KDC TIMESERV WRITABLE DNS_DC DNS_DOMAIN DNS_FOREST CLOSE_SITE FULL_SECRET
        The command completed successfully

        C:\Users\Administrator>dcdiag /s:mycompany.net /u:mycompany.net\pcollins /p:somepass

        Directory Server Diagnosis
        Performing initial setup:
        * Identified AD Forest. Done gathering initial info.
        Doing initial required tests
        Testing server: Default-First-Site-Name\REPORTS
        Starting test: Connectivity
        ......................... REPORTS passed test Connectivity
        Doing primary tests
        Testing server: Default-First-Site-Name\REPORTS
        Starting test: Advertising
        Fatal Error: DsGetDcName (REPORTS) call failed, error 1722
        The Locator could not find the server.
        ......................... REPORTS failed test Advertising
        Starting test: FrsEvent
        Could not query the File Replication Service event log on server REPORTS.mycompany.net. Error 0x6ba "The RPC server is unavailable."
        ......................... REPORTS failed test FrsEvent
        Starting test: DFSREvent
        Could not query the DFS Replication event log on server REPORTS.mycompany.net. Error 0x6ba "The RPC server is unavailable."
        ......................... REPORTS failed test DFSREvent
        Starting test: SysVolCheck
        [REPORTS] A net use or LsaPolicy operation failed with error 53, The network path was not found.
        ......................... REPORTS failed test SysVolCheck
        Starting test: KccEvent
        Could not query the Directory Service event log on server REPORTS.mycompany.net. Error 0x6ba "The RPC server is unavailable."
        ......................... REPORTS failed test KccEvent
        Starting test: KnowsOfRoleHolders
        ......................... REPORTS passed test KnowsOfRoleHolders
        Starting test: MachineAccount
        Could not open pipe with [REPORTS]: failed with error 53: The network path was not found.
        Could not get NetBIOS domain name
        Failed: cannot test the HOST Service Principal Name (SPN)
        Failed: cannot test the HOST Service Principal Name (SPN)
        ......................... REPORTS passed test MachineAccount
        Starting test: NCSecDesc
        ......................... REPORTS passed test NCSecDesc
        Starting test: NetLogons
        [REPORTS] A net use or LsaPolicy operation failed with error 53, The network path was not found.
        ......................... REPORTS failed test NetLogons
        Starting test: ObjectsReplicated
        ......................... REPORTS passed test ObjectsReplicated
        Starting test: Replications
        ......................... REPORTS passed test Replications
        Starting test: RidManager
        ......................... REPORTS passed test RidManager
        Starting test: Services
        Could not open Remote ipc to [REPORTS.mycompany.net]: error 0x35 "The network path was not found."
        ......................... REPORTS failed test Services
        Starting test: SystemLog
        Could not query the System event log on server REPORTS.mycompany.net. Error 0x6ba "The RPC server is unavailable."
        ......................... REPORTS failed test SystemLog
        Starting test: VerifyReferences
        ......................... REPORTS passed test VerifyReferences
        Running partition tests on ForestDnsZones
        Starting test: CheckSDRefDom
        ......................... ForestDnsZones passed test CheckSDRefDom
        Starting test: CrossRefValidation
        ......................... ForestDnsZones passed test CrossRefValidation
        Running partition tests on DomainDnsZones
        Starting test: CheckSDRefDom
        ......................... DomainDnsZones passed test CheckSDRefDom
        Starting test: CrossRefValidation
        ......................... DomainDnsZones passed test CrossRefValidation
        Running partition tests on Schema
        Starting test: CheckSDRefDom
        ......................... Schema passed test CheckSDRefDom
        Starting test: CrossRefValidation
        ......................... Schema passed test CrossRefValidation
        Running partition tests on Configuration
        Starting test: CheckSDRefDom
        ......................... Configuration passed test CheckSDRefDom
        Starting test: CrossRefValidation
        ......................... Configuration passed test CrossRefValidation
        Running partition tests on mycompany
        Starting test: CheckSDRefDom
        ......................... mycompany passed test CheckSDRefDom
        Starting test: CrossRefValidation
        ......................... mycompany passed test CrossRefValidation
        Running enterprise tests on mycompany.net
        Starting test: LocatorCheck
        Warning: DcGetDcName(GC_SERVER_REQUIRED) call failed, error 1722
        A Global Catalog Server could not be located - all GCs are down.
        Warning: DcGetDcName(PDC_REQUIRED) call failed, error 1722
        A Primary Domain Controller could not be located. The server holding the PDC role is down.
        Warning: DcGetDcName(TIME_SERVER) call failed, error 1722
        A Time Server could not be located. The server holding the PDC role is down.
        Warning: DcGetDcName(GOOD_TIME_SERVER_PREFERRED) call failed, error 1722
        A Good Time Server could not be located.
        Warning: DcGetDcName(KDC_REQUIRED) call failed, error 1722
        A KDC could not be located - all the KDCs are down.
        ......................... mycompany.net failed test LocatorCheck
        Starting test: Intersite
        ......................... mycompany.net passed test Intersite

    Update 1: I am under the distinct impression that the problem is caused by some security settings. I have read elsewhere that the client needs to be able to access the sysvol file share. I had to enable Client for Microsoft Networks and File and Printer Sharing, which were previously disabled. When I now run dcdiag, the Advertising test works, which I suppose is forward progress. It currently chokes on the Services step (unable to open remote IPC):

        Starting test: Services
        Could not open Remote ipc to [REPORTS.locbus.net]: error 0x35 "The network path was not found."
        ......................... REPORTS failed test Services

    Update 2: I attach some more diagnostics:

    Netsetup.log (client):

        09/24/2009 13:27:09:773 -----------------------------------------------------------------
        09/24/2009 13:27:09:773 NetpValidateName: checking to see if 'WEB' is valid as type 1 name
        09/24/2009 13:27:12:773 NetpCheckNetBiosNameNotInUse for 'WEB' [MACHINE] returned 0x0
        09/24/2009 13:27:12:773 NetpValidateName: name 'WEB' is valid for type 1
        09/24/2009 13:27:12:805 -----------------------------------------------------------------
        09/24/2009 13:27:12:805 NetpValidateName: checking to see if 'WEB' is valid as type 5 name
        09/24/2009 13:27:12:805 NetpValidateName: name 'WEB' is valid for type 5
        09/24/2009 13:27:12:852 -----------------------------------------------------------------
        09/24/2009 13:27:12:852 NetpValidateName: checking to see if 'MYCOMPANY.NET' is valid as type 3 name
        09/24/2009 13:27:12:992 NetpCheckDomainNameIsValid [ Exists ] for 'MYCOMPANY.NET' returned 0x0
        09/24/2009 13:27:12:992 NetpValidateName: name 'MYCOMPANY.NET' is valid for type 3
        09/24/2009 13:27:21:320 -----------------------------------------------------------------
        09/24/2009 13:27:21:320 NetpDoDomainJoin
        09/24/2009 13:27:21:320 NetpMachineValidToJoin: 'WEB'
        09/24/2009 13:27:21:320 OS Version: 6.0
        09/24/2009 13:27:21:320 Build number: 6002
        09/24/2009 13:27:21:320 ServicePack: Service Pack 2
        09/24/2009 13:27:21:414 SKU: Windows Server® 2008 Standard
        09/24/2009 13:27:21:414 NetpDomainJoinLicensingCheck: ulLicenseValue=1, Status: 0x0
        09/24/2009 13:27:21:414 NetpGetLsaPrimaryDomain: status: 0x0
        09/24/2009 13:27:21:414 NetpMachineValidToJoin: status: 0x0
        09/24/2009 13:27:21:414 NetpJoinDomain
        09/24/2009 13:27:21:414 Machine: WEB
        09/24/2009 13:27:21:414 Domain: MYCOMPANY.NET
        09/24/2009 13:27:21:414 MachineAccountOU: (NULL)
        09/24/2009 13:27:21:414 Account: MYCOMPANY.NET\pcollins
        09/24/2009 13:27:21:414 Options: 0x25
        09/24/2009 13:27:21:414 NetpLoadParameters: loading registry parameters...
        09/24/2009 13:27:21:414 NetpLoadParameters: DNSNameResolutionRequired not found, defaulting to '1' 0x2
        09/24/2009 13:27:21:414 NetpLoadParameters: status: 0x2
        09/24/2009 13:27:21:414 NetpValidateName: checking to see if 'MYCOMPANY.NET' is valid as type 3 name
        09/24/2009 13:27:21:523 NetpCheckDomainNameIsValid [ Exists ] for 'MYCOMPANY.NET' returned 0x0
        09/24/2009 13:27:21:523 NetpValidateName: name 'MYCOMPANY.NET' is valid for type 3
        09/24/2009 13:27:21:523 NetpDsGetDcName: trying to find DC in domain 'MYCOMPANY.NET', flags: 0x40001010
        09/24/2009 13:27:22:039 NetpDsGetDcName: failed to find a DC having account 'WEB$': 0x525, last error is 0x79
        09/24/2009 13:27:22:039 NetpDsGetDcName: status of verifying DNS A record name resolution for 'KING.MYCOMPANY.NET': 0x0
        09/24/2009 13:27:22:039 NetpDsGetDcName: found DC '\\KING.MYCOMPANY.NET' in the specified domain
        09/24/2009 13:27:30:039 NetUseAdd to \\KING.MYCOMPANY.NET\IPC$ returned 53
        09/24/2009 13:27:30:039 NetpJoinDomain: status of connecting to dc '\\KING.MYCOMPANY.NET': 0x35
        09/24/2009 13:27:30:039 NetpDoDomainJoin: status: 0x35
        09/24/2009 13:27:30:148 -----------------------------------------------------------------

    ipconfig /all (on client):

        Windows IP Configuration
        Host Name . . . . . . . . . . . . : WEB
        Primary Dns Suffix . . . . . . . :
        Node Type . . . . . . . . . . . . : Hybrid
        IP Routing Enabled. . . . . . . . : No
        WINS Proxy Enabled. . . . . . . .
: Non Carte Ethernet Connexion au réseau local : Suffixe DNS propre à la connexion. . . : Description. . . . . . . . . . . . . . : Intel 21140-Based PCI Fast Ethernet Adapter (Emulated) Adresse physique . . . . . . . . . . . : **-15-5D-A1-17-** DHCP activé. . . . . . . . . . . . . . : Non Configuration automatique activée. . . : Oui Adresse IPv4. . . . . . . . . . . : **.***.163.122(préféré) Masque de sous-réseau. . . . . . . . . : 255.255.255.0 Passerelle par défaut. . . . . . . . . : **.***.163.2 Serveurs DNS. . . . . . . . . . . . . : **.***.163.123 NetBIOS sur Tcpip. . . . . . . . . . . : Activé ipconfig /all (on server): Configuration IP de Windows Nom de l'hôte . . . . . . . . . . : KING Suffixe DNS principal . . . . . . : mycompany.net Type de noeud. . . . . . . . . . : Hybride Routage IP activé . . . . . . . . : Non Proxy WINS activé . . . . . . . . : Non Liste de recherche du suffixe DNS.: locbus.net Carte Ethernet Connexion au réseau local : Suffixe DNS propre à la connexion. . . : Description. . . . . . . . . . . . . . : Intel 21140-Based PCI Fast Ethernet Adapter (Emulated) Adresse physique . . . . . . . . . . . : **-15-5D-A1-1E-** DHCP activé. . . . . . . . . . . . . . : Non Configuration automatique activée. . . : Oui Adresse IPv4. . . . . . . . . . . : **.***.163.123(préféré) Masque de sous-réseau. . . . . . . . . : 255.255.255.0 Passerelle par défaut. . . . . . . . . : **.***.163.2 Serveurs DNS. . . . . . . . . . . . . : 127.0.0.1 NetBIOS sur Tcpip. . . . . . . . . . . : Activé nslookup (on client): Serveur : *******.***.com Address: **.***.163.123 Nom : mycompany.net Addresses: ****:****:a37b::****:a37b **.****.163.123

    Read the article

  • Mixing SSL and non-SSL content in an Apache2 virtual host

    - by gravyface
    I have a (hopefully) common scenario for one of my sites that I just can't seem to figure out how to deploy correctly. I have the following site and directories for example.com: These need to require SSL: /var/www/example.com/admin /var/www/example.com/order These need to be non-SSL: /var/www/example.com/maps These need to support both: /var/www/example.com/css /var/www/example.com/js /var/www/example.com/img I have two virtual host declarations for the one site in my /sites-available/example.com file; the top one is *:443, the second one is *:80. Since I have two vhosts, a request coming in on 443 uses the top virtual host, and a port 80 request uses the bottom one. However, I can't seem to enforce my SSL requirements using SSLRequireSSL, because I'm assuming a port 80 request to /admin or /order is not even hitting the *:443 vhost. Should I just Deny All to /order and /admin within the *:80 virtual host, so that if you try to request it on 80 you'll get a 403 Forbidden?
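One way to sketch that setup (stock Apache 2.2 directives; the paths and domain come from the question, everything else is illustrative) is to require SSL on the two protected directories in the *:443 vhost and, in the *:80 vhost, redirect those paths to HTTPS rather than just denying them:

# inside <VirtualHost *:443>
<Directory /var/www/example.com/admin>
    SSLRequireSSL
</Directory>
<Directory /var/www/example.com/order>
    SSLRequireSSL
</Directory>

# inside <VirtualHost *:80> - bounce protected paths over to HTTPS
Redirect permanent /admin https://example.com/admin
Redirect permanent /order https://example.com/order

A plain Deny from all in the *:80 vhost would also work and yields the 403 described above; the redirect is simply friendlier to users who type the URL without https://. The css/js/img directories need nothing special, since they are served by whichever vhost receives the request.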

    Read the article

  • AppCmd returns error: Object 'SET' is not supported

    - by RHPT
    I am trying to set SSL Host Headers and Secure Site Bindings in IIS7. I followed the directions on this website http://www.digicert.com/ssl-support/ssl-host-headers-iis-7.htm (among others), but when I run the appcmd command mentioned, I get the error "Object 'SET' is not supported. Run 'appcmd.exe /?' to display supported objects". I have also tried "appcmd site set", but it still returns the same error. What am I doing wrong? The server I am working on is Windows 2008 R2 x64, if that matters. Thank you.
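For reference, the command those instructions describe takes this general shape (the site name and host header below are placeholders, not values from the question):

appcmd set site /site.name:"My Site" /+bindings.[protocol='https',bindingInformation='*:443:secure.example.com']

When 'SET' is reported as an unsupported object, one thing worth ruling out is which appcmd.exe is actually being invoked: the IIS one lives at %windir%\system32\inetsrv\appcmd.exe and is not on the PATH by default, so running it by its full path ensures you are not hitting some other binary or an older copy.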

    Read the article

  • Session state has been disabled for ASP.NET.The Report Viewer control requires that session state be

    - by Patrick Olurotimi Ige
    While I was trying to set up some reports on an SPF 2010 site, after trying to add a SQL Reporting web part I got the error: "Session state has been disabled for ASP.NET. The Report Viewer control requires that session state be enabled in local mode." I never came across that error when using the RS web parts in SharePoint 2007 :) but it is related to ASP.NET session state :( To fix it, you have to start by making sure the SharePoint Server ASP.NET Session State Service is enabled. Unfortunately, in SPF 2010 and SP 2010 you have to do that using PowerShell, by running Enable-SPSessionStateService -DefaultProvision:

PS C:\> Get-PSSnapin -- shows you the loaded PS snap-ins
PS C:\> Add-PSSnapin "Microsoft.SharePoint.PowerShell" -- adds the SharePoint PS snap-in
PS C:\> Get-Command -Noun SP* -- shows all SP cmdlets
PS C:\> Add-SPShellAdmin -- (if you get an error regarding access to the farm, use this command to add an account that is able to run the Shell Admin cmdlets)
PS C:\> Get-Help Enable-SPSessionStateService -- shows help for Enable-SPSessionStateService
PS C:\> Enable-SPSessionStateService -DefaultProvision -- enables/activates SPSessionStateService; Get-Help lists more options for this cmdlet

After running the cmdlet you should see the SharePoint Server ASP.NET Session State Service started in your Service Applications in the Central Admin site - and, of course, be able to add your RS web parts. Enjoy

    Read the article

  • Which cloud hosting should I use? [closed]

    - by Alyssa Marie Isk
    Possible Duplicate: How to find web hosting that meets my requirements? If anyone wants to get some real life karma by giving a tiny non-profit pointers, please advise! I posted a thread about our website with highly variable traffic (www.WorldOceansDay.org). The event is on June 8th, and the traffic goes from 100-400/day in the off-season, to about 200,000 trying to access the site at any one time on June 8th. It's a Wordpress site hosted on GoDaddy shared hosting and predictably crashed horribly. From the internet's feedback, we've decided to move to a cloud server to handle the traffic, but I'm a huge newbie and I don't have very reliable mentorship, so I'm turning to crowdsourcing. We're trying to decide between Amazon Web Services and RackSpace Cloud servers. Our sys admin consultant also suggested GoDaddy's new 4GH but I have had such incredibly bad experiences with GoDaddy thus far that I am hesitant. From what I've read on the internet, RackSpace might be cheaper? Would AWS totally break the bank? We don't have a ton of money to spend on hosting. We'll also be using CloudFlare to cache and serve the pages since they're dynamic. I've found a few AWS & RackSpace calculators but I am not 100% on how to find those numbers... GoDaddy? Google Analytics? AWS calc is here: http://calculator.s3.amazonaws.com/calc5.html Rackspace is on the right: http://www.rackspace.com/cloud/cloud_hosting_products/servers/pricing/?0a313380 If anyone can help, or through some miracle feels like walking me through this, I would be incredibly appreciative.

    Read the article

  • SSL / HTTP / No Response to Curl

    - by Alex McHale
    I am trying to send commands to a SOAP service, and getting nothing in reply. The SOAP service is at a completely separate site from either server I am testing with. I have written a dummy script with the SOAP XML embedded. When I run it at my local site, on any of three machines -- OSX, Ubuntu, or CentOS 5.3 -- it completes successfully with a good response. I then sent the script to our public host at Slicehost, where I fail to get the response back from the SOAP service. It accepts the TCP socket and proceeds with the SSL handshake. I do not, however, receive any valid HTTP response. This is the case whether I use my script or curl on the command line. I have rewritten the script using SOAP4R, Net::HTTP and Curb, all of which work at my local site and none of which work at the Slicehost site. I have tried to assemble the CentOS box to match my Slicehost server as closely as possible. I rebuilt the Slice as a stock CentOS 5.3 and a stock CentOS 5.4, with the same results. When I look at a tcpdump of the bad sessions on Slicehost, I see my script or curl send the XML to the remote server, and nothing comes back. When I look at the tcpdump at my local site, I see the response just fine. I have entirely disabled iptables on the Slice. Does anyone have any ideas what could be causing these results? Please let me know what additional information I can furnish. Thank you! Below is a wire trace of a sample session. The IP that starts with 173 is my server, while the IP that starts with 12 is the SOAP server's.

No. Time     Source     Destination Protocol Info
1   0.000000 173.45.x.x 12.36.x.x   TCP      36872 > https [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=137633469 TSER=0 WS=6
    Frame 1 (74 bytes on wire, 74 bytes captured)
    Ethernet II, Src: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6), Dst: Dell_fb:49:a1 (00:21:9b:fb:49:a1)
    Internet Protocol, Src: 173.45.x.x (173.45.x.x), Dst: 12.36.x.x (12.36.x.x)
    Transmission Control Protocol, Src Port: 36872 (36872), Dst Port: https (443), Seq: 0, Len: 0

No. Time     Source     Destination Protocol Info
2   0.040000 12.36.x.x  173.45.x.x  TCP      https > 36872 [SYN, ACK] Seq=0 Ack=1 Win=8760 Len=0 MSS=1460
    Frame 2 (62 bytes on wire, 62 bytes captured)
    Ethernet II, Src: Dell_fb:49:a1 (00:21:9b:fb:49:a1), Dst: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6)
    Internet Protocol, Src: 12.36.x.x (12.36.x.x), Dst: 173.45.x.x (173.45.x.x)
    Transmission Control Protocol, Src Port: https (443), Dst Port: 36872 (36872), Seq: 0, Ack: 1, Len: 0

No. Time     Source     Destination Protocol Info
3   0.040000 173.45.x.x 12.36.x.x   TCP      36872 > https [ACK] Seq=1 Ack=1 Win=5840 Len=0
    Frame 3 (54 bytes on wire, 54 bytes captured)
    Ethernet II, Src: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6), Dst: Dell_fb:49:a1 (00:21:9b:fb:49:a1)
    Internet Protocol, Src: 173.45.x.x (173.45.x.x), Dst: 12.36.x.x (12.36.x.x)
    Transmission Control Protocol, Src Port: 36872 (36872), Dst Port: https (443), Seq: 1, Ack: 1, Len: 0

No. Time     Source     Destination Protocol Info
4   0.050000 173.45.x.x 12.36.x.x   SSLv2    Client Hello
    Frame 4 (156 bytes on wire, 156 bytes captured)
    Ethernet II, Src: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6), Dst: Dell_fb:49:a1 (00:21:9b:fb:49:a1)
    Internet Protocol, Src: 173.45.x.x (173.45.x.x), Dst: 12.36.x.x (12.36.x.x)
    Transmission Control Protocol, Src Port: 36872 (36872), Dst Port: https (443), Seq: 1, Ack: 1, Len: 102
    Secure Socket Layer

No. Time     Source     Destination Protocol Info
5   0.130000 12.36.x.x  173.45.x.x  TCP      [TCP segment of a reassembled PDU]
    Frame 5 (1434 bytes on wire, 1434 bytes captured)
    Ethernet II, Src: Dell_fb:49:a1 (00:21:9b:fb:49:a1), Dst: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6)
    Internet Protocol, Src: 12.36.x.x (12.36.x.x), Dst: 173.45.x.x (173.45.x.x)
    Transmission Control Protocol, Src Port: https (443), Dst Port: 36872 (36872), Seq: 1, Ack: 103, Len: 1380
    Secure Socket Layer

No. Time     Source     Destination Protocol Info
6   0.130000 173.45.x.x 12.36.x.x   TCP      36872 > https [ACK] Seq=103 Ack=1381 Win=8280 Len=0
    Frame 6 (54 bytes on wire, 54 bytes captured)
    Ethernet II, Src: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6), Dst: Dell_fb:49:a1 (00:21:9b:fb:49:a1)
    Internet Protocol, Src: 173.45.x.x (173.45.x.x), Dst: 12.36.x.x (12.36.x.x)
    Transmission Control Protocol, Src Port: 36872 (36872), Dst Port: https (443), Seq: 103, Ack: 1381, Len: 0

No. Time     Source     Destination Protocol Info
7   0.130000 12.36.x.x  173.45.x.x  TLSv1    Server Hello, Certificate, Server Hello Done
    Frame 7 (1280 bytes on wire, 1280 bytes captured)
    Ethernet II, Src: Dell_fb:49:a1 (00:21:9b:fb:49:a1), Dst: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6)
    Internet Protocol, Src: 12.36.x.x (12.36.x.x), Dst: 173.45.x.x (173.45.x.x)
    Transmission Control Protocol, Src Port: https (443), Dst Port: 36872 (36872), Seq: 1381, Ack: 103, Len: 1226
    [Reassembled TCP Segments (2606 bytes): #5(1380), #7(1226)]
    Secure Socket Layer

No. Time     Source     Destination Protocol Info
8   0.130000 173.45.x.x 12.36.x.x   TCP      36872 > https [ACK] Seq=103 Ack=2607 Win=11040 Len=0
    Frame 8 (54 bytes on wire, 54 bytes captured)
    Ethernet II, Src: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6), Dst: Dell_fb:49:a1 (00:21:9b:fb:49:a1)
    Internet Protocol, Src: 173.45.x.x (173.45.x.x), Dst: 12.36.x.x (12.36.x.x)
    Transmission Control Protocol, Src Port: 36872 (36872), Dst Port: https (443), Seq: 103, Ack: 2607, Len: 0

No. Time     Source     Destination Protocol Info
9   0.130000 173.45.x.x 12.36.x.x   TLSv1    Client Key Exchange, Change Cipher Spec, Encrypted Handshake Message
    Frame 9 (236 bytes on wire, 236 bytes captured)
    Ethernet II, Src: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6), Dst: Dell_fb:49:a1 (00:21:9b:fb:49:a1)
    Internet Protocol, Src: 173.45.x.x (173.45.x.x), Dst: 12.36.x.x (12.36.x.x)
    Transmission Control Protocol, Src Port: 36872 (36872), Dst Port: https (443), Seq: 103, Ack: 2607, Len: 182
    Secure Socket Layer

No. Time     Source     Destination Protocol Info
10  0.190000 12.36.x.x  173.45.x.x  TLSv1    Change Cipher Spec, Encrypted Handshake Message
    Frame 10 (97 bytes on wire, 97 bytes captured)
    Ethernet II, Src: Dell_fb:49:a1 (00:21:9b:fb:49:a1), Dst: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6)
    Internet Protocol, Src: 12.36.x.x (12.36.x.x), Dst: 173.45.x.x (173.45.x.x)
    Transmission Control Protocol, Src Port: https (443), Dst Port: 36872 (36872), Seq: 2607, Ack: 285, Len: 43
    Secure Socket Layer

No. Time     Source     Destination Protocol Info
11  0.190000 173.45.x.x 12.36.x.x   TLSv1    Application Data
    Frame 11 (347 bytes on wire, 347 bytes captured)
    Ethernet II, Src: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6), Dst: Dell_fb:49:a1 (00:21:9b:fb:49:a1)
    Internet Protocol, Src: 173.45.x.x (173.45.x.x), Dst: 12.36.x.x (12.36.x.x)
    Transmission Control Protocol, Src Port: 36872 (36872), Dst Port: https (443), Seq: 285, Ack: 2650, Len: 293
    Secure Socket Layer

No. Time     Source     Destination Protocol Info
12  0.190000 173.45.x.x 12.36.x.x   TCP      [TCP segment of a reassembled PDU]
    Frame 12 (1514 bytes on wire, 1514 bytes captured)
    Ethernet II, Src: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6), Dst: Dell_fb:49:a1 (00:21:9b:fb:49:a1)
    Internet Protocol, Src: 173.45.x.x (173.45.x.x), Dst: 12.36.x.x (12.36.x.x)
    Transmission Control Protocol, Src Port: 36872 (36872), Dst Port: https (443), Seq: 578, Ack: 2650, Len: 1460
    Secure Socket Layer

No. Time     Source     Destination Protocol Info
13  0.450000 12.36.x.x  173.45.x.x  TCP      https > 36872 [ACK] Seq=2650 Ack=578 Win=64958 Len=0
    Frame 13 (54 bytes on wire, 54 bytes captured)
    Ethernet II, Src: Dell_fb:49:a1 (00:21:9b:fb:49:a1), Dst: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6)
    Internet Protocol, Src: 12.36.x.x (12.36.x.x), Dst: 173.45.x.x (173.45.x.x)
    Transmission Control Protocol, Src Port: https (443), Dst Port: 36872 (36872), Seq: 2650, Ack: 578, Len: 0

No. Time     Source     Destination Protocol Info
14  0.450000 173.45.x.x 12.36.x.x   TCP      [TCP segment of a reassembled PDU]
    Frame 14 (206 bytes on wire, 206 bytes captured)
    Ethernet II, Src: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6), Dst: Dell_fb:49:a1 (00:21:9b:fb:49:a1)
    Internet Protocol, Src: 173.45.x.x (173.45.x.x), Dst: 12.36.x.x (12.36.x.x)
    Transmission Control Protocol, Src Port: 36872 (36872), Dst Port: https (443), Seq: 2038, Ack: 2650, Len: 152

No. Time     Source     Destination Protocol Info
15  0.510000 12.36.x.x  173.45.x.x  TCP      [TCP Dup ACK 13#1] https > 36872 [ACK] Seq=2650 Ack=578 Win=64958 Len=0
    Frame 15 (54 bytes on wire, 54 bytes captured)
    Ethernet II, Src: Dell_fb:49:a1 (00:21:9b:fb:49:a1), Dst: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6)
    Internet Protocol, Src: 12.36.x.x (12.36.x.x), Dst: 173.45.x.x (173.45.x.x)
    Transmission Control Protocol, Src Port: https (443), Dst Port: 36872 (36872), Seq: 2650, Ack: 578, Len: 0

No. Time     Source     Destination Protocol Info
16  0.850000 173.45.x.x 12.36.x.x   TCP      [TCP Retransmission] [TCP segment of a reassembled PDU]
    Frame 16 (1514 bytes on wire, 1514 bytes captured)
    Ethernet II, Src: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6), Dst: Dell_fb:49:a1 (00:21:9b:fb:49:a1)
    Internet Protocol, Src: 173.45.x.x (173.45.x.x), Dst: 12.36.x.x (12.36.x.x)
    Transmission Control Protocol, Src Port: 36872 (36872), Dst Port: https (443), Seq: 578, Ack: 2650, Len: 1460
    Secure Socket Layer

No. Time     Source     Destination Protocol Info
17  1.650000 173.45.x.x 12.36.x.x   TCP      [TCP Retransmission] [TCP segment of a reassembled PDU]
    Frame 17 (1514 bytes on wire, 1514 bytes captured)
    Ethernet II, Src: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6), Dst: Dell_fb:49:a1 (00:21:9b:fb:49:a1)
    Internet Protocol, Src: 173.45.x.x (173.45.x.x), Dst: 12.36.x.x (12.36.x.x)
    Transmission Control Protocol, Src Port: 36872 (36872), Dst Port: https (443), Seq: 578, Ack: 2650, Len: 1460
    Secure Socket Layer

No. Time     Source     Destination Protocol Info
18  3.250000 173.45.x.x 12.36.x.x   TCP      [TCP Retransmission] [TCP segment of a reassembled PDU]
    Frame 18 (1514 bytes on wire, 1514 bytes captured)
    Ethernet II, Src: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6), Dst: Dell_fb:49:a1 (00:21:9b:fb:49:a1)
    Internet Protocol, Src: 173.45.x.x (173.45.x.x), Dst: 12.36.x.x (12.36.x.x)
    Transmission Control Protocol, Src Port: 36872 (36872), Dst Port: https (443), Seq: 578, Ack: 2650, Len: 1460
    Secure Socket Layer

No. Time     Source     Destination Protocol Info
19  6.450000 173.45.x.x 12.36.x.x   TCP      [TCP Retransmission] [TCP segment of a reassembled PDU]
    Frame 19 (1514 bytes on wire, 1514 bytes captured)
    Ethernet II, Src: 40:40:17:3a:f4:e6 (40:40:17:3a:f4:e6), Dst: Dell_fb:49:a1 (00:21:9b:fb:49:a1)
    Internet Protocol, Src: 173.45.x.x (173.45.x.x), Dst: 12.36.x.x (12.36.x.x)
    Transmission Control Protocol, Src Port: 36872 (36872), Dst Port: https (443), Seq: 578, Ack: 2650, Len: 1460
    Secure Socket Layer
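For reference, the command-line test mentioned above took roughly this shape (the endpoint URL, SOAPAction and payload file are placeholders, not the real service):

curl -v -H "Content-Type: text/xml; charset=utf-8" -H "SOAPAction: \"urn:SomeAction\"" --data @request.xml https://soap.example.com/endpoint

The -v flag makes curl narrate the TCP connect, the SSL handshake and any HTTP exchange, which is how the trace above was correlated with the response never arriving.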

    Read the article

  • Learning HTML5 - Sample Sites

    - by Albers
    Part of the challenge with HTML5 is understanding the range of different technologies and finding good samples. The following are some of the sites I have found most useful.

IE TestDrive
http://ie.microsoft.com/testdrive/
A good set of demos using touch, appcache, IndexedDB, etc. Some of these only work with IE10. Be sure to click the "More Demos" link at the bottom for a longer list of demos in a nicely organized list form.

Chrome Experiments
http://www.chromeexperiments.com/
Chrome-browser-oriented submitted sites with a heavy emphasis on display technologies (WebGL & Canvas).

Adobe Expressive Web
http://beta.theexpressiveweb.com/
Adobe provides a dozen HTML5 & CSS3 samples. I seem to end up playing the "Breakout"-style Canvas demo every time I visit the site.

Mozilla Demo Studio
https://developer.mozilla.org/en-US/demos/tag/tech:html5/
About 100 varied HTML5-related submitted web sites. If you click the "Browse By Technology" button there are other samples for File API, IndexedDB, etc.

Introducing HTML5 samples
http://html5demos.com/
Specific tech examples related to the "Introducing HTML5" book by Bruce Lawson & Remy Sharp.

HTML5 Gallery
http://html5gallery.com/
HTML5 Gallery focuses on "real" sites - sites that were not specifically intended to showcase a particular HTML5 feature. The actual use of HTML5 tech can vary from link to link, but it is useful to see real-world uses.

FaceBook Developers HTML5 Showcase
http://developers.facebook.com/html5/showcase/
A good list of high-profile HTML5 applications, games and demos (including the Financial Times, GMail, Kindle web reader, and Pirates Love Daisies).

HTML5 Studio
http://studio.html5rocks.com/
Another Google site - currently 14 samples of concepts like slideshows, Geolocation, and WebGL using HTML5.

    Read the article

  • Debugging JuniperSetupClientInstaller.exe Problems

    - by Damon
    I recently moved from Windows 7 to Windows 2008 Server so I can run SharePoint on my physical machine and not through a VPC, and I've been re-installing everything on my system. As part of that process, I tried re-establishing a connection back to one of my clients' corporate networks, and their system prompted me to run JuniperSetupClientInstaller.exe. Normally this runs, finishes, and you can connect to the VPN, no problem. This time, however, it failed. Unfortunately, there were no error messages to let me know why - it just didn't work. I've had success running applications in "compatibility mode" before, so I gave that a shot - same problem. But during the installation I noticed that JuniperSetupClientInstaller.exe unpacks a number of files into a directory (you can see the exact location in the details of the installer) and then runs a DIFFERENT application - JuniperSetupClient.exe. If you navigate to that directory, you will see a text file named JuniperSetupClient.log that contains information about the setup process. In my case, I had installed a SharePoint site on port 3333 - a port the Juniper software needs in order to communicate with the VPN. There was a nice message in the log file saying the VPN software could not bind to port 3333, which quickly alerted me to the issue, and moving the site off that port number fixed it. However, it would have been nice to have had an error message of sorts, because I spent a chunk of time futilely researching compatibility issues.
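When a log points at a port conflict like this, a quick way to see what already owns the port (3333 in this case) is the standard netstat/tasklist pair - shown here purely as an illustration, with a made-up PID:

C:\> netstat -ano | findstr :3333
C:\> tasklist /fi "PID eq 1234"

The first command lists the listener and its owning process ID; feeding that PID to the second names the process - which in this case would presumably have been the IIS worker process hosting the SharePoint site.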

    Read the article

  • Beware: Upgrade to ASP.NET MVC 2.0 with care if you use AntiForgeryToken

    - by James Crowley
    If you're thinking of upgrading to MVC 2.0, and you take advantage of the AntiForgeryToken support, then be careful - you can easily kick out all active visitors after the upgrade until they restart their browser. Why's this? For the anti-forgery validation to take place, ASP.NET MVC uses a session cookie called "__RequestVerificationToken_Lw__". This gets checked for and de-serialized on any page where there is an AntiForgeryToken() call. However, the format of this validation cookie has apparently changed between MVC 1.0 and MVC 2.0. What this means is that when you make the switch to MVC 2.0 on your production server, suddenly all your visitors' session cookies are invalid, resulting in calls to AntiForgeryToken() throwing exceptions (even on a standard GET request) when de-serializing them:

[InvalidCastException: Unable to cast object of type 'System.Web.UI.Triplet' to type 'System.Object[]'.]
   System.Web.Mvc.AntiForgeryDataSerializer.Deserialize(String serializedToken) +104

[HttpAntiForgeryException (0x80004005): A required anti-forgery token was not supplied or was invalid.]
   System.Web.Mvc.AntiForgeryDataSerializer.Deserialize(String serializedToken) +368
   System.Web.Mvc.HtmlHelper.GetAntiForgeryTokenAndSetCookie(String salt, String domain, String path) +209
   System.Web.Mvc.HtmlHelper.AntiForgeryToken(String salt, String domain, String path) +16
   System.Web.Mvc.HtmlHelper.AntiForgeryToken() +10
   <snip>

So you've just kicked all your active users out of your site with exceptions until they think to restart their browser (to clear the session cookies). The only workaround for now is either to write some code that wipes this cookie - a sketch follows below - or to disable use of AntiForgeryToken() in your MVC 2.0 site until you're confident all session cookies will have expired. That in itself isn't very straightforward, given how frequently people tend to hibernate/standby their machines - the session cookie will only clear once the browser has been shut down and re-opened. Hope this helps someone out there!
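A minimal sketch of the cookie-wiping approach (illustrative code, not from the original post - only the cookie name and exception type come from it) could sit in Global.asax.cs for a transition period after the upgrade:

// Global.asax.cs (requires using System; using System.Web; using System.Web.Mvc;)
protected void Application_Error(object sender, EventArgs e)
{
    // The stale MVC 1.0-format cookie surfaces as an HttpAntiForgeryException
    // when MVC 2.0 tries to deserialize it.
    var ex = Server.GetLastError();
    if (ex is HttpAntiForgeryException || ex.GetBaseException() is HttpAntiForgeryException)
    {
        Server.ClearError();

        // Send back an already-expired copy of the cookie so the browser drops it...
        var expired = new HttpCookie("__RequestVerificationToken_Lw__")
        {
            Expires = DateTime.Now.AddDays(-1)
        };
        Response.Cookies.Add(expired);

        // ...then retry the request; MVC 2.0 will issue a fresh-format token.
        Response.Redirect(Request.RawUrl);
    }
}

Once you're confident all pre-upgrade session cookies are gone, the handler can simply be removed.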

    Read the article

  • Forum software advice needed

    - by David Thompson
    Hello All ... we want to migrate our site's current forum (proprietary-built) to a newer, more modern (feature-rich) platform. I've been looking around at the available options and have narrowed it down to vBulletin, Vanilla or Phorum (unless you have another suggestion?). I hope someone here can give me some feedback on their experiences, either migrating to a new forum or working deeply with one. The current forum we have has approx 2.2 million threads in it and is contained in a MySQL database. Data migration is obviously the first issue; is one of the major forum vendors better or worse in this regard? The software needs to be able to be clustered and cached to ensure availability and performance. We want it to be PHP-based and to store its data in MySQL. The code needs to be open to allow us to highly customise the software, both to strip out a lot of stuff and to integrate our site's features. A lot of the forums I've looked at duplicate many features of our main site, in particular member management, profiles, etc. I realise we'll have to do a good bit of development in removing these and tying it all back to the main site, so we want to find a platform that makes this kind of integration as easy as possible. Finally, I guess, is 'future-proofing' the forum (as best as possible) given the above. Which platform will allow us to customise it but also allow us to keep in step with upgrades? Which forum software has the best track record for bringing new features online in a timely manner? etc. etc. I know it's a big question, but if anyone here has any experience in some or all of the above I'd be very grateful.

    Read the article

  • Help me choose a desktop: which of these two should I buy?

    - by Sammy
    I just want the more powerful of the two: Choice 1: http://www.bestbuy.com/site/Gateway+-+Desktop+with+AMD+Phenom%26%23153%3B+II+Quad-Core+Processor/9698936.p?id=1218153428687&skuId=9698936 Choice 2: /site/HP+-+Pavilion+Desktop+with+AMD+Phenom%26%23153%3B+II+Quad-Core+Processor/9694506.p?id=1218150609828&skuId=9694506 I can't post more than one hyperlink since I am a new user, so please add the bestbuy domain name before choice 2. The latter choice is a bit more expensive, but not by much, so I don't care about that. As for what I intend to use my machine for: just regular web surfing, light gaming, web-development-related work, etc. But that doesn't really matter; of these two I just want to know which is the better, more powerful system and which you would buy if you were in my position.

    Read the article

  • Oracle Data Mining a Star Schema: Telco Churn Case Study

    - by charlie.berger
    There is a complete and detailed Telco Churn case study "How to" blog series just posted by Ari Mozes, ODM Dev. Manager. In it, Ari provides detailed guidance on how to leverage various strengths of Oracle Data Mining, including the ability to:
- mine star schemas, joining tables and views together to obtain a complete 360-degree view of a customer
- combine transactional data, e.g. call record detail (CDR) data, etc.
- define complex data transformation, model build and model deploy analytical methodologies inside the Database
His blog is posted in a multi-part series. Below are some opening excerpts for the first 3 blog entries. This is an excellent resource for any novice to skilled data miner who wants to gain competitive advantage by mining their data inside the Oracle Database. Many thanks Ari!

Mining a Star Schema: Telco Churn Case Study (1 of 3)
One of the strengths of Oracle Data Mining is the ability to mine star schemas with minimal effort. Star schemas are commonly used in relational databases, and they often contain rich data with interesting patterns. While dimension tables may contain interesting demographics, fact tables will often contain user behavior, such as phone usage or purchase patterns. Both of these aspects - demographics and usage patterns - can provide insight into behavior. Churn is a critical problem in the telecommunications industry, and companies go to great lengths to reduce the churn of their customer base. One case study [1] describes a telecommunications scenario involving understanding, and identification of, churn, where the underlying data is present in a star schema. That case study is a good example for demonstrating just how natural it is for Oracle Data Mining to analyze a star schema, so it will be used as the basis for this series of posts......

Mining a Star Schema: Telco Churn Case Study (2 of 3)
This post will follow the transformation steps as described in the case study, but will use Oracle SQL as the means for preparing data. Please see the previous post for background material, including links to the case study and to scripts that can be used to replicate the stages in these posts.
1) Handling missing values for call data records
The CDR_T table records the number of phone minutes used by a customer per month and per call type (tariff). For example, the table may contain one record corresponding to the number of peak (call type) minutes in January for a specific customer, and another record associated with international calls in March for the same customer. This table is likely to be fairly dense (most type-month combinations for a given customer will be present) due to the coarse level of aggregation, but there may be some missing values. Missing entries may occur for a number of reasons: the customer made no calls of a particular type in a particular month, the customer switched providers during the timeframe, or perhaps there is a data entry problem. In the first situation, the correct interpretation of a missing entry would be to assume that the number of minutes for the type-month combination is zero. In the other situations, it is not appropriate to assume zero, but rather derive some representative value to replace the missing entries. The referenced case study takes the latter approach. The data is segmented by customer and call type, and within a given customer-call type combination, an average number of minutes is computed and used as a replacement value. In SQL, we need to generate additional rows for the missing entries and populate those rows with appropriate values. To generate the missing rows, Oracle's partition outer join feature is a perfect fit.

select cust_id, cdre.tariff, cdre.month, mins
from cdr_t cdr partition by (cust_id) right outer join
     (select distinct tariff, month from cdr_t) cdre
     on (cdr.month = cdre.month and cdr.tariff = cdre.tariff);
.......

Mining a Star Schema: Telco Churn Case Study (3 of 3)
Now that the "difficult" work is complete - preparing the data - we can move to building a predictive model to help identify and understand churn. The case study suggests that separate models be built for different customer segments (high, medium, low, and very low value customer groups). To reduce the data to a single segment, a filter can be applied:

create or replace view churn_data_high as
select * from churn_prep where value_band = 'HIGH';

It is simple to take a quick look at the predictive aspects of the data on a univariate basis. While this does not capture the more complex multi-variate effects as would occur with the full-blown data mining algorithms, it can give a quick feel as to the predictive aspects of the data as well as validate the data preparation steps. Oracle Data Mining includes a predictive analytics package which enables quick analysis.

begin
  dbms_predictive_analytics.explain(
    'churn_data_high','churn_m6','expl_churn_tab');
end;
/
select * from expl_churn_tab where rank <= 5 order by rank;

ATTRIBUTE_NAME       ATTRIBUTE_SUBNAME EXPLANATORY_VALUE RANK
-------------------- ----------------- ----------------- ----------
LOS_BAND                                      .069167052          1
MINS_PER_TARIFF_MON  PEAK-5                   .034881648          2
REV_PER_MON          REV-5                    .034527798          3
DROPPED_CALLS                                 .028110322          4
MINS_PER_TARIFF_MON  PEAK-4                   .024698149          5

From the above results, it is clear that some predictors do contain information to help identify churn (explanatory value > 0). The strongest uni-variate predictor of churn appears to be the customer's (binned) length of service. The second strongest churn indicator appears to be the number of peak minutes used in the most recent month. The subname column contains the interior piece of the DM_NESTED_NUMERICALS column described in the previous post. By using the object relational approach, many related predictors are included within a single top-level column. .....

NOTE: These are just EXCERPTS. Click here to start reading the Oracle Data Mining a Star Schema: Telco Churn Case Study from the beginning.

    Read the article

  • SSIS Lookup component tuning tips

    - by jamiet
    Yesterday evening I attended a London meeting of the UK SQL Server User Group at Microsoft's offices in London Victoria. As usual it was both a fun and informative evening, and in particular there seemed to be a few questions arising about tuning the SSIS Lookup component; I rattled off some comments and figured it would be prudent to drop some of them into a dedicated blog post, hence the one you are reading right now.

Scene setting
A popular pattern in SSIS is to use a Lookup component to determine whether a record in the pipeline already exists in the intended destination table or not, and I cover this pattern in my 2006 blog post Checking if a row exists and if it does, has it changed? (note to self: must rewrite that blog post for SSIS2008). Fundamentally the SSIS Lookup component (when using the FullCache option) sucks some data out of a database and holds it in memory so that it can be compared to data in the pipeline. One of the big benefits of using SSIS dataflows is that they process data one buffer at a time; that means that not all of the data from your source exists in the dataflow at the same time, and it is why a SSIS dataflow can process data volumes that far exceed the available memory. However, that only applies to data in the pipeline; for reasons that are hopefully obvious, ALL of the data in the lookup set must exist in the memory cache for the duration of the dataflow's execution, which means that any memory used by the lookup cache will not be available to be used as a pipeline buffer. Moreover, there's an obvious correlation between the amount of data in the lookup cache and the time it takes to charge that cache; the more data you have, the longer it will take to charge and the longer you have to wait until the dataflow actually starts to do anything. For these reasons your goal is simple: ensure that the lookup cache contains as little data as possible.

General tips
Here is a simple tick list you can follow in order to tune your lookups:
- Use a SQL statement to charge your cache, don't just pick a table from the dropdown list made available to you. (Read why in SELECT *... or select from a dropdown in an OLE DB Source component?)
- Only pick the columns that you need, ignore everything else.
- Make the database columns that your cache is populated from as narrow as possible. If a column is defined as VARCHAR(20) then SSIS will allocate 20 bytes for every value in that column - that is a big waste if the actual values are significantly less than 20 characters in length.
- Do you need DT_WSTR typed columns or will DT_STR suffice? DT_WSTR uses twice the amount of space to hold values that can be stored using a DT_STR, so if you can use DT_STR, consider doing so. The same principle goes for the numerical datatypes DT_I2/DT_I4/DT_I8.
- Only populate the cache with data that you KNOW you will need. In other words, think about your WHERE clause!
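To make that tick list concrete, a cache-charging query shaped by those tips might look like the following (the table and column names are invented for illustration, not taken from the post):

SELECT CustomerKey,                                        -- only the columns the Lookup actually needs
       CAST(EmailAddress AS VARCHAR(50)) AS EmailAddress   -- narrow DT_STR column instead of a wide DT_WSTR one
FROM   dbo.DimCustomer
WHERE  IsCurrent = 1;                                      -- only the rows you KNOW you will need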
Thinking outside the box
It is tempting to build a large monolithic dataflow that does many things, one of which is a Lookup. Often though you can make better use of your available resources by, well, mixing things up a little, and here are a few ideas to get your creative juices flowing:
- There is no rule that says everything has to happen in a single dataflow. If you have some particularly resource-intensive lookups then consider putting that lookup into a dataflow all of its own and using raw files to pass the pipeline data in and out of that dataflow.
- Know your data. If you think, for example, that the majority of your incoming rows will match with only a small subset of your lookup data, then consider chaining multiple lookup components together; the first would use a FullCache containing that data subset, and the remaining data that doesn't find a match could be passed to a second lookup that perhaps uses a NoCache lookup, thus negating the need to pull all of that least-used lookup data into memory.
- Do you need to process all of your incoming data all at once? If you can process different partitions of your data separately then you can partition your lookup cache as well. For example, if you are using a lookup to convert a location into a [LocationId], then why not process your data one region at a time? This will mean your lookup cache only has to contain data for the location that you are currently processing, and with the ability of the Lookup in SSIS2008 and beyond to charge the cache using a dynamically built SQL statement, you'll be able to achieve it using the same dataflow and simply loop over it using a ForEach loop.
- Taking the previous data partitioning idea further … a dataflow can contain more than one data path, so why not split your data using a conditional split component and, again, charge your lookup caches with only the data that they need for that partition.
- Lookups have two uses: to (1) find a matching row from the lookup set and (2) put attributes from that matching row into the pipeline. Ask yourself, do you need to do these two things at the same time? After all, once you have the key column(s) from your lookup set, you can use that key to get the rest of the attributes further downstream, perhaps even in another dataflow.
- Are you using the same lookup data set multiple times? If so, consider the file caching option in SSIS 2008 and beyond.
Above all, experiment and be creative with different combinations. You may be surprised at what works.

Final thoughts
If you want to know more about how the Lookup component differs in SSIS2008 from SSIS2005 then I have a dedicated blog post about that at Lookup component gets a makeover. I am on a mini-crusade at the moment to get a BULK MERGE feature into the database engine, the thinking being that if the database engine can quickly merge massive amounts of data in a similar manner to how it can insert massive amounts using BULK INSERT, then that's a lot of work that wouldn't have to be done in the SSIS pipeline. If you think that is a good idea then go and vote for BULK MERGE on Connect. If you have any other tips to share then please stick them in the comments. Hope this helps!
@Jamiet

    Read the article

< Previous Page | 439 440 441 442 443 444 445 446 447 448 449 450  | Next Page >