Search Results

Search found 13586 results on 544 pages for 'trusted domain'.


  • Login Problem Windows Authentication

    - by user109280
    Duplicate of: http://stackoverflow.com/questions/881928/windows-authentication-trusted-connection-problem
    I logged in to the Windows Server (Machine 1) as "abc\user1". The Windows Server machine is in the "abc" domain. SQL Server also runs on Machine 1, is in the "abc" domain, and uses mixed mode authentication. It has the accounts "abc\user1" and "abc\user2", and both have the sysadmin and serveradmin roles. I logged in to another machine (Machine 2), in the same domain, as "abc\user2" and ran the Ant build that connects to SQL Server. The URL is formed as follows:

        jdbc:sqlserver://%DB_IP%:%DB_PORT%;SelectMethod=cursor;integratedSecurity=true;DatabaseName=dbname;

    1) From Machine 2, if I use the "abc\user2" credentials for the connection, it works fine, since integratedSecurity=true.
    2) From Machine 2, if I use the "abc\user1" credentials, it does not work, because integratedSecurity=true makes the driver use the system credentials, i.e. "abc\user2". Even if I set integratedSecurity=false, it still does not connect as "abc\user1".
    What changes do I have to make to the URL so that the connection from Machine 2 works as "abc\user1"? Which properties need to be added to the URL? Or does the driver not support using another domain\user's credentials? What needs to be set on the SQL Server side? Deepak
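
    A minimal sketch of one possible workaround, not taken from the original post: Microsoft's JDBC driver with integratedSecurity=true always authenticates as the Windows account running the JVM, so connecting as a different domain user usually means passing explicit credentials, for example through the open-source jTDS driver. Host, port, database name and password below are placeholders.

        import java.sql.Connection;
        import java.sql.DriverManager;

        public class ConnectAsOtherDomainUser {
            public static void main(String[] args) throws Exception {
                // jTDS takes the domain as a URL property and the user/password
                // explicitly, independent of who is logged on to Machine 2.
                Class.forName("net.sourceforge.jtds.jdbc.Driver");
                String url = "jdbc:jtds:sqlserver://DB_IP:1433/dbname;domain=abc";
                try (Connection con = DriverManager.getConnection(url, "user1", "password-here")) {
                    System.out.println("Connected as abc\\user1: " + !con.isClosed());
                }
            }
        }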

    Read the article

  • How to Load assembly to AppDomain with all references recursively?

    - by abatishchev
    I want to load into a new AppDomain an assembly which has a complex reference tree (MyDll.dll -> Microsoft.Office.Interop.Excel.dll -> Microsoft.Vbe.Interop.dll -> Office.dll -> stdole.dll). As far as I understand, when an assembly is loaded into an AppDomain, its references are not loaded automatically, and I have to load them manually. So when I do:

        string dir = @"SomePath"; // different from AppDomain.CurrentDomain.BaseDirectory
        string path = System.IO.Path.Combine(dir, "MyDll.dll");
        AppDomainSetup setup = AppDomain.CurrentDomain.SetupInformation;
        setup.ApplicationBase = dir;
        AppDomain domain = AppDomain.CreateDomain("SomeAppDomain", null, setup);
        domain.Load(AssemblyName.GetAssemblyName(path));

    I get a FileNotFoundException: Could not load file or assembly 'MyDll, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. The system cannot find the file specified. I think the key phrase is "one of its dependencies". OK, so before domain.Load(AssemblyName.GetAssemblyName(path)) I do:

        foreach (AssemblyName refAsmName in Assembly.ReflectionOnlyLoadFrom(path).GetReferencedAssemblies())
        {
            domain.Load(refAsmName);
        }

    But I get a FileNotFoundException again, on another (referenced) assembly. How do I load all references recursively? Do I have to build the reference tree before loading the root assembly? How can I get an assembly's references without loading it?
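
    A minimal sketch of one common alternative, with illustrative names only (this is not the poster's code): instead of pre-loading every reference, let the child domain load the root assembly itself through a MarshalByRefObject proxy. Assembly.LoadFrom uses the load-from context, so dependencies sitting next to MyDll.dll are found when they are first needed.

        using System;
        using System.IO;
        using System.Reflection;

        public class ProxyLoader : MarshalByRefObject
        {
            // Runs inside the child domain. Assembly.LoadFrom uses the load-from
            // context, so MyDll.dll's dependencies are probed in its own folder
            // when they are first needed.
            public string LoadFrom(string path)
            {
                return Assembly.LoadFrom(path).FullName;
            }
        }

        static class Program
        {
            static void Main()
            {
                string dir = @"SomePath";                     // placeholder from the question
                string path = Path.Combine(dir, "MyDll.dll"); // placeholder from the question

                // Keep ApplicationBase at the host's base so the child domain can
                // locate the assembly that defines ProxyLoader.
                AppDomain domain = AppDomain.CreateDomain(
                    "SomeAppDomain", null, AppDomain.CurrentDomain.SetupInformation);

                var loader = (ProxyLoader)domain.CreateInstanceAndUnwrap(
                    typeof(ProxyLoader).Assembly.FullName, typeof(ProxyLoader).FullName);
                Console.WriteLine("Loaded in child domain: " + loader.LoadFrom(path));

                AppDomain.Unload(domain);
            }
        }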

    Read the article

  • How to deploy to multiple redundant production servers with "cap deploy"?

    - by Chad Johnson
    Capistrano is working great to deploy to a single server. However, I have multiple production API servers for my web application, and when I deploy, my code needs to be deployed to every API server at once. Specifying each server manually is NOT the solution I am looking for (e.g. I don't want to do "cap api1 deploy; cap api2 deploy"). Is there a way, using Capistrano, to deploy to all servers at once with just a simple "cap deploy"? I'm wondering what changes I would need to make to a typical deploy.rb file, whether I'd need to create a separate file for each server, and whether and how the Capfile would need to be changed. Also, I need to be able to specify a different deploy_to path for each server. And ideally, I wouldn't have to repeat things in different config files for different servers (e.g. I wouldn't have to specify :repository, :application, etc. multiple times). I have spent hours searching Google on this and looking through tutorials, but I have found nothing helpful. Here is a snippet from my current deploy.rb file:

        set :application, "testapplication"
        set :repository, "ssh://domain.com//srv/hg/#{application}"
        set :scm, :mercurial
        set :deploy_to, "/srv/www/#{application}"

        role :web, "domain.com"
        role :app, "domain.com"
        role :db, "domain.com", :primary => true, :norelease => true

    Should I just use the multistage extension and do this?

        task :deploy_everything do
          system "cap api1 deploy"
          system "cap api2 deploy"
          system "cap api2 deploy"
        end

    That could work, but I feel like this isn't what the extension is meant for...
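
    A minimal sketch under the assumption that the same deploy path works on every host (the api1/api2 host names are placeholders): Capistrano 2 runs a plain "cap deploy" against every server listed in the matching roles, so listing all API hosts once in deploy.rb is usually enough. Per-host deploy_to paths are the part that a single file handles awkwardly and where the multistage extension starts to earn its keep.

        set :application, "testapplication"
        set :repository,  "ssh://domain.com//srv/hg/#{application}"
        set :scm,         :mercurial
        set :deploy_to,   "/srv/www/#{application}"   # assumes one shared path on all hosts

        # One entry per API box: a single `cap deploy` now runs against all of them.
        role :web, "api1.domain.com", "api2.domain.com"
        role :app, "api1.domain.com", "api2.domain.com"
        role :db,  "api1.domain.com", :primary => true, :norelease => true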

    Read the article

  • Cache problem running two consecutive HTTP GET requests from APP1 to APP2

    - by user502052
    I use Ruby on Rails 3 and I have two applications (APP1 and APP2) running on two subdomains, app1.domain.local and app2.domain.local, and I am trying to run two consecutive HTTP GET requests from APP1 to APP2 like this:

    Code in APP1 (request):

        response1 = Net::HTTP.get( URI.parse("http://app2.domain.local?test=first&id=1") )
        response2 = Net::HTTP.get( URI.parse("http://app2.domain.local/test=second&id=1") )

    Code in APP2 (response):

        respond_to do |format|
          if <model_name>.find(params[:id]).<field_name> == "first"
            <model_name>.find(params[:id]).update_attribute(<field_name>, <field_value>)
            format.xml { render :xml => <model_name>.find(params[:id]).<field_name> }
          elsif <model_name>.find(params[:id]).<field_name> == "second"
            format.xml { render :xml => <model_name>.find(params[:id]).<field_name> }
          end
        end

    After the first request I get the correct XML (response1 is what I expect), but the second is not (response2 isn't what I expect). Doing some tests I found that the second time <model_name>.find(params[:id]).<field_name> runs (for the elsif statement) it always returns a blank value, so the code in the elsif branch is never run. Is it possible that the problem is related to caching <model_name>.find(params[:id]).<field_name>? P.S.: I read about ETag and conditional GET, but I am not sure that I need that approach. I would like to keep everything simple.
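
    A hedged debugging sketch ("Item" and "status" are placeholder names, not the original model): fetching the record once, logging what arrives, and reloading after the write makes it easier to see whether the blank value is already in the database by the time the second request arrives or is introduced by the branch logic.

        def show
          record = Item.find(params[:id])
          logger.info "before: #{record.status.inspect} (param test=#{params[:test].inspect})"

          if record.status == "first"
            record.update_attribute(:status, "second")
            record.reload                 # re-read the row after the write
          end

          respond_to do |format|
            format.xml { render :xml => record.status }
          end
        end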

    Read the article

  • how to connect to MSSQL using activerecord, JDBC, JTDS and Integrated Security

    - by Rob
    As per the above, I've tried: establish_connection(:adapter => "jdbcmssql", :url => "jdbc:jtds:sqlserver://myserver:1433/mydatabase;domain='mynetwork';", :username => 'user', :password=>'pass' ) establish_connection(:adapter => "jdbcmssql", :url => 'jdbc:jtds:sqlserver://myserver:1433/mydatabase;domain="mynetwork";user="mynetwork\user"' ) establish_connection(:adapter => "jdbcmssql", :url => "jdbc:jtds:sqlserver://myserver:1433/mydatabase;domain='mynetwork';", :username=>'user' ) establish_connection(:adapter => "jdbcmssql", :url => "jdbc:jtds:sqlserver://myserver:1433/mydatabase;domain='mynetwork';integratedSecurity='true'", :username=>'user' ) .. and various other combinations. Each time I get: net/sourceforge/jtds/jdbc/SQLDiagnostic.java:368:in `addDiagnostic': java.sql.SQLException: Login failed for user ''. The user is not associated with a trusted SQL Server connection. (NativeException) Any tips? Thanks, activerecord (2.3.5) activerecord-jdbc-adapter (0.9.6) activerecord-jdbcmssql-adapter (0.9.6) jdbc-jtds (1.2.5) jruby 1.4.0 (ruby 1.8.7 patchlevel 174) (2009-11-02 69fbfa3) (Java HotSpot(TM) Client VM 1.6.0_18) [x86-java]
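
    A hedged variation to try, not a verified fix: jTDS expects the domain as an unquoted URL property with the user name and password passed separately, and the quoted values in the attempts above are likely being taken literally by the driver, which would explain the empty login in the error. True passwordless integrated security additionally needs jTDS's native SSO library (ntlmauth.dll) on the PATH of the JRuby process. Server, database, domain and credentials below are the placeholders from the question.

        establish_connection(
          :adapter  => "jdbcmssql",
          :url      => "jdbc:jtds:sqlserver://myserver:1433/mydatabase;domain=mynetwork",
          :username => "user",
          :password => "pass"
        )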

    Read the article

  • Generating ul from array with php fails

    - by Toni Michel Caubet
    Given a $files array I am trying to generate a list, like this: <ul class=""> <? for($i=0;$i < count($files); $i++) { $text = str_replace("http://domain.com/files/uploads/", "", $files[$i]); $file = $files[$i]; $notme = sesion()>0 && $obj['id'] != sesion(); ?> <li> <a target="_blank" href="<?=$file?>"><?=$text?></a> <? if($notme){ ?> <span class="add_to_files" data-file="<?=$file?>">Agregar al gestor</span> <? } ?> </li> <? } ?> </ul> This is the output: Please note how the span html is wrong, <ul class=""> <li> <a href="http://domain.com/files/uploads/388400967232883.jpg" target="_blank">388400967232883.jpg</a> <span target="_blank" href="http://domain.com/files/uploads/388400967232883.jpg" url"="" data-file="&lt;a class=" class="add_to_files">http://domain.com/files/uploads/388400967232883.jpg"&gt;Agregar al gestor</span> </li> </ul> Any idea why?
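
    A hedged sketch of a more defensive version (assuming, from the mangled data-file attribute, that some $files entries already contain an <a ...> fragment rather than a bare URL): stripping tags before building $file and escaping every echoed value with htmlspecialchars keeps the generated markup well-formed either way. sesion() and $obj are taken from the original snippet and assumed to exist.

        <ul class="">
        <?php foreach ($files as $raw): ?>
            <?php
                $file  = strip_tags($raw); // keep only the URL if HTML slipped into the array
                $text  = str_replace("http://domain.com/files/uploads/", "", $file);
                $notme = sesion() > 0 && $obj['id'] != sesion();
            ?>
            <li>
                <a target="_blank" href="<?= htmlspecialchars($file) ?>"><?= htmlspecialchars($text) ?></a>
                <?php if ($notme): ?>
                    <span class="add_to_files" data-file="<?= htmlspecialchars($file) ?>">Agregar al gestor</span>
                <?php endif; ?>
            </li>
        <?php endforeach; ?>
        </ul>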

    Read the article

  • Map inheritance from generic class in Linq To SQL

    - by Ksenia Mukhortova
    Hi everyone, I'm trying to map my inheritance hierarchy to DB using Linq to SQL: Inheritance is like this, classes are POCO, without any LINQ to SQL attributes: public interface IStage { ... } public abstract class SimpleStage<T> : IStage where T : Process { ... } public class ConcreteStage : SimpleStage<ConcreteProcess> { ... } Here is the mapping: <Database Name="NNN" xmlns="http://schemas.microsoft.com/linqtosql/mapping/2007"> <Table Name="dbo.Stage" Member="Stage"> <Type Name="BusinessLogic.Domain.IStage"> <Column Name="ID" Member="ID" DbType="Int NOT NULL IDENTITY" IsPrimaryKey="true" IsDbGenerated="true" AutoSync="OnInsert" /> <Column Name="StageType" Member="StageType" IsDiscriminator="true" /> <Type Name="BusinessLogic.Domain.SimpleStage" IsInheritanceDefault="true"> <Type Name="BusinessLogic.Domain.ConcreteStage" IsInheritanceDefault="true" InheritanceCode="1"/> </Type> </Type> </Table> </Database> In the runtime I get error: System.InvalidOperationException was unhandled Message="Mapping Problem: Cannot find runtime type for type mapping 'BusinessLogic.Domain.SimpleStage'." Neither specifying SimpleStage, nor SimpleStage<T> in mapping file helps - runtime keeps producing different types of errors. DC is created like this: StreamReader sr = new StreamReader(@"MappingFile.map"); XmlMappingSource mapping = XmlMappingSource.FromStream(sr.BaseStream); DataContext dc = new DataContext(@"connection string", mapping); If Linq to SQL doesn't support this, could you, please, advise some other ORM, which does. Thanks in advance, Regards! Ksenia

    Read the article

  • New user registration: automatic script install problems

    - by SKY
    Hi, I'm currently trying to create a PHP script so that when a new user registers, a script (e.g. WordPress, a blog, etc.) is installed for them. I currently have the code below for just a single setup, but how can I set up a form for multiple users that only lets them enter a username (subdomain) and password?

        <?php
        class scriptname_Config {
            public static $title = 'new_script_title';

            // Domain name and path where the new script will be installed
            public static $domain = 'username.domain.com';
            public static $absolutePath = '/new_register_username/';

            // Settings for the general MySQL database
            public static $db = array(
                'host' => 'localhost',
                'database' => 'scriptname',
                'user' => 'root',
                'password' => '',
                'prefix' => 'scriptname_'
            );
        }

        define( 'scriptname_BASE_URL', 'http://'.scriptname_Config::$domain.scriptname_Config::$absolutePath );
        ?>

    Any tutorial that would help is also appreciated! Thanks!

    Read the article

  • Finding out inside which iframe a script is executing

    - by juandopazo
    I have a page with several iframes. One of these iframes contains a page from a different domain, and inside that iframe there's another iframe with a page from the parent domain:

        my page from mydomain.com
            -> an iframe
            -> iframe "#foo" from another-domain.com
                -> iframe "#bar" from mydomain.com
                -> another iframe

    I need to get a reference to the "#foo" node from inside the main page. The security model should allow me to do that, because "#bar" has the same domain as the main page. So what I'm doing is iterating over the frames in window.top and comparing each element to the window object, which in this script is the "#bar" window object. My test code looks like:

        for (var i = 0; i < top.length; i++) {
            for (var j = 0; j < top[i].length; j++) {
                if (top[i][j] == window) {
                    alert("The iframe number " + i + " contains me");
                }
            }
        }

    This works fine in all browsers, but Internet Explorer 6 throws a security error when accessing top[i][j]. Any ideas on how to solve this in IE6? Thanks!
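
    A hedged sketch of one way to cope with IE6 (not a verified fix): wrapping the cross-frame probes in try/catch lets the loop skip frames the browser refuses to expose instead of aborting on the security error.

        for (var i = 0; i < top.length; i++) {
            try {
                for (var j = 0; j < top[i].length; j++) {
                    if (top[i][j] == window) {
                        alert("The iframe number " + i + " contains me");
                    }
                }
            } catch (e) {
                // IE6 throws "Access is denied" when touching a frame from
                // another domain; ignore it and keep scanning.
            }
        }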

    Read the article

  • How do I set up MVP for a Winforms solution?

    - by JonWillis
    Question moved from Stack Overflow - http://stackoverflow.com/questions/4971048/how-do-i-set-up-mvp-for-a-winforms-solution

    I have used MVP and MVC in the past, and I prefer MVP as it controls the flow of execution so much better in my opinion. I have created my infrastructure (datastore/repository classes) and use them without issue when hard-coding sample data, so now I am moving on to the GUI and preparing my MVP.

    Section A - I have seen MVP set up in these ways:

    1. The view is the entry point: in the view's constructor it creates the presenter, which in turn creates the model, wiring up events as needed.
    2. The presenter is the entry point: a view, model and presenter are created, and the presenter is given the view and model objects in its constructor to wire up the events.
    3. As in 2, but the model is not passed to the presenter. Instead the model is a static class whose methods are called and whose responses are returned directly.

    Section B - In terms of keeping the view and model in sync I have seen:

    1. Whenever a value in the view changes (e.g. a TextChanged event in .NET/C#), a DataChangedEvent is fired and passed through into the model, to keep it in sync at all times. Where the model changes (e.g. a background event it listens to), the view is updated via the same idea of raising a DataChangedEvent. When a user wants to commit changes, a SaveEvent is fired, passing through into the model to make the save. In this case the model mimics the view's data and processes actions.
    2. Similar to B1, but the view does not sync with the model all the time. Instead, when the user wants to commit changes, a SaveEvent is fired and the presenter grabs the latest details and passes them into the model. In this case the model does not know about the view's data until it is required to act upon it, at which point it is passed all the needed details.

    Section C - Displaying business objects in the view, i.e. an object (MyClass), not primitive data (int, double):

    1. The view has property fields for all the data that it will display, typed as domain/business objects. For example view.Animals exposes an IEnumerable<IAnimal> property, even though the view renders these as nodes in a TreeView, and for the selected animal it would expose a SelectedAnimal property of type IAnimal.
    2. The view has no knowledge of domain objects; it exposes properties only for primitive and framework (.NET/Java) types. In this instance the presenter passes the domain object to an adapter, and the adapter translates a given business object into the controls visible on the view. The adapter must have access to the actual controls on the view, not just any view, so it becomes more tightly coupled.

    Section D - Multiple views used to create a single control, i.e. you have a complex view with a simple model, like saving objects of different types. You could have a menu system at the side where each click on an item shows the appropriate controls.

    1. You create one huge view that contains all of the individual controls, which are exposed via the view's interface.
    2. You have several views: one view for the menu and a blank panel. This view creates the other views required but does not display them (visible = false); it also implements the interface of each view it contains (i.e. the child views) so it can expose them to one presenter. The blank panel is filled with the other views (Controls.Add(myview) and myview.Visible = true). The events raised in these child views are handled by the parent view, which in turn passes the event to the presenter, and vice versa for supplying events back down to the child elements.
    3. Each view, be it the main parent or the smaller child views, is wired into its own presenter and model. You can literally just drop a view control into an existing form and it will have its functionality ready; it just needs wiring into a presenter behind the scenes.

    Section E - Should everything have an interface? How the MVP is done in the above examples will affect this answer, as the options might not be cross-compatible.

    1. Everything has an interface: the View, Presenter and Model. Each of these then obviously has a concrete implementation, even if you only have one concrete view, model and presenter.
    2. The View and Model have an interface. This allows the views and models to differ. The presenter creates/is given view and model objects and just serves to pass messages between them.
    3. Only the View has an interface. The Model has static methods and is not instantiated, thus no need for an interface. If you want a different model, the presenter calls a different set of static class methods. Being static, the Model has no link to the presenter.

    Personal thoughts

    Of all the different variations I have presented (most of which I have probably used in some form), and I am sure there are more, I prefer A3, as it keeps business logic reusable outside just MVP, and B2 for less data duplication and fewer events being fired. C1 avoids adding another class; sure, it puts a small amount of non-unit-testable logic into a view (how a domain object is visualised), but this could be code reviewed, or simply viewed in the application. If the logic were complex I would agree to an adapter class, but not in all cases.

    For Section D, I feel D1 creates a view that is too big, at least for a menu example. I have used D2 and D3 before. The problem with D2 is that you end up having to write lots of code to route events to and from the presenter to the correct child view, and it's not drag/drop compatible; each new control needs more wiring in to support the single presenter. D3 is my preferred choice, but it adds yet more classes as presenters and models to deal with the view, even if the view happens to be very simple or has no need to be reused. I think a mixture of D2 and D3 is best, based on circumstances.

    As to Section E, I think everything having an interface could be overkill. I already do it for domain/business objects and often see no advantage in the "design" by doing so, but it does help in mocking objects in tests. Personally I would see E2 as the classic solution, although I have seen E3 used in two projects I have worked on previously.

    Question

    Am I implementing MVP correctly? Is there a right way of going about it? I've read Martin Fowler's work, which has variations, and I remember when I first started doing MVC, I understood the concept but could not originally work out where the entry point is; everything has its own function, but what controls and creates the original set of MVC objects?
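
    A minimal passive-view sketch in C# with illustrative names only (it is not an endorsement of one variation over another), showing roughly the A2/E2/C2 shape discussed above: the view exposes an interface with primitive properties, and the presenter is handed the view and a model and simply routes between them.

        using System;

        // Model: plain domain service (it could equally sit behind an interface, per E1).
        public class CustomerModel
        {
            public string LoadName(int id) { return "Customer " + id; } // stand-in for a repository call
            public void SaveName(int id, string name) { /* persist via a repository */ }
        }

        // View contract: only primitives cross the boundary (the C2 style).
        public interface ICustomerView
        {
            string CustomerName { get; set; }
            event EventHandler SaveRequested;   // raised by the Save button
        }

        // Presenter: the entry point that wires view and model together (the A2 style).
        public class CustomerPresenter
        {
            public CustomerPresenter(ICustomerView view, CustomerModel model, int customerId)
            {
                view.SaveRequested += (s, e) => model.SaveName(customerId, view.CustomerName);
                view.CustomerName = model.LoadName(customerId);   // initial load (B2: sync on demand)
            }
        }

    In a WinForms solution the Form would implement ICustomerView, raise SaveRequested from the Save button's Click handler, and map CustomerName to a TextBox, which keeps the form passive and the presenter unit-testable against a stub view.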

    Read the article

  • Windows Azure Virtual Machine Readiness and Capacity Assessment for SQL Server

    - by SQLOS Team
    Windows Azure Virtual Machine Readiness and Capacity Assessment for Windows Server Machine Running SQL Server With the release of MAP Toolkit 8.0 Beta, we have added a new scenario to assess your Windows Azure Virtual Machine Readiness. The MAP 8.0 Beta performs a comprehensive assessment of Windows Servers running SQL Server to determine you level of readiness to migrate an on-premise physical or virtual machine to Windows Azure Virtual Machines. The MAP Toolkit then offers suggested changes to prepare the machines for migration, such as upgrading the operating system or SQL Server. MAP Toolkit 8.0 Beta is available for download here Your participation and feedback is very important to make the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at [email protected] or through one of our surveys. Now, let’s walk through the MAP Toolkit task for completing the Windows Azure Virtual Machine assessment and capacity planning. The tasks include the following: Perform an inventory View the Windows Azure VM Readiness results and report Collect performance data for determine VM sizing View the Windows Azure Capacity results and report Perform an inventory: 1. To perform an inventory against a single machine or across a complete environment, choose Perform an Inventory to launch the Inventory and Assessment Wizard as shown below: 2. After the Inventory and Assessment Wizard launches, select either the Windows computers or SQL Server scenario to inventory Windows machines. HINT: If you don’t care about completely inventorying a machine, just select the SQL Server scenario. Click Next to Continue. 3. On the Discovery Methods page, select how you want to discover computers and then click Next to continue. Description of Discovery Methods: Use Active Directory Domain Services -- This method allows you to query a domain controller via the Lightweight Directory Access Protocol (LDAP) and select computers in all or specific domains, containers, or OUs. Use this method if all computers and devices are in AD DS. Windows networking protocols --  This method uses the WIN32 LAN Manager application programming interfaces to query the Computer Browser service for computers in workgroups and Windows NT 4.0–based domains. If the computers on the network are not joined to an Active Directory domain, use only the Windows networking protocols option to find computers. System Center Configuration Manager (SCCM) -- This method enables you to inventory computers managed by System Center Configuration Manager (SCCM). You need to provide credentials to the System Center Configuration Manager server in order to inventory the managed computers. When you select this option, the MAP Toolkit will query SCCM for a list of computers and then MAP will connect to these computers. Scan an IP address range -- This method allows you to specify the starting address and ending address of an IP address range. The wizard will then scan all IP addresses in the range and inventory only those computers. Note: This option can perform poorly, if many IP addresses aren’t being used within the range. Manually enter computer names and credentials -- Use this method if you want to inventory a small number of specific computers. Import computer names from a files -- Using this method, you can create a text file with a list of computer names that will be inventoried. 4. 
On the All Computers Credentials page, enter the accounts that have administrator rights to connect to the discovered machines. This does not need to a domain account, but needs to be a local administrator. I have entered my domain account that is an administrator on my local machine. Click Next after one or more accounts have been added. NOTE: The MAP Toolkit primarily uses Windows Management Instrumentation (WMI) to collect hardware, device, and software information from the remote computers. In order for the MAP Toolkit to successfully connect and inventory computers in your environment, you have to configure your machines to inventory through WMI and also allow your firewall to enable remote access through WMI. The MAP Toolkit also requires remote registry access for certain assessments. In addition to enabling WMI, you need accounts with administrative privileges to access desktops and servers in your environment. 5. On the Credentials Order page, select the order in which want the MAP Toolkit to connect to the machine and SQL Server. Generally just accept the defaults and click Next. 6. On the Enter Computers Manually page, click Create to pull up at dialog to enter one or more computer names. 7. On the Summary page confirm your settings and then click Finish. After clicking Finish the inventory process will start, as shown below: Windows Azure Readiness results and report After the inventory progress has completed, you can review the results under the Database scenario. On the tile, you will see the number of Windows Server machine with SQL Server that were analyzed, the number of machines that are ready to move without changes and the number of machines that require further changes. If you click this Azure VM Readiness tile, you will see additional details and can generate the Windows Azure VM Readiness Report. After the report is generated, select View | Saved Reports and Proposals to view the location of the report. Open up WindowsAzureVMReadiness* report in Excel. On the Windows tab, you can see the results of the assessment. This report has a column for the Operating System and SQL Server assessment and provides a recommendation on how to resolve, if there a component is not supported. Collect Performance Data Launch the Performance Wizard to collect performance information for the Windows Server machines that you would like the MAP Toolkit to suggest a Windows Azure VM size for. Windows Azure Capacity results and report After the performance metrics are collected, the Azure VM Capacity title will display the number of Virtual Machine sizes that are suggested for the Windows Server and Linux machines that were analyzed. You can then click on the Azure VM Capacity tile to see the capacity details and generate the Windows Azure VM Capacity Report. Within this report, you can view the performance data that was collected and the Virtual Machine sizes.   MAP Toolkit 8.0 Beta is available for download here Your participation and feedback is very important to make the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at [email protected] or through one of our surveys. Useful References: Windows Azure Homepage How to guides for Windows Azure Virtual Machines Provisioning a SQL Server Virtual Machine on Windows Azure Windows Azure Pricing     Peter Saddow Senior Program Manager – MAP Toolkit Team

    Read the article

  • Strange Flash AS3 xml Socket behavior

    - by Rnd_d
    I have a problem which I can't understand. To understand it I wrote a socket client on AS3 and a server on python/twisted, you can see the code of both applications below. Let's launch two clients at the same time, arrange them so that you can see both windows and press connection button in both windows. Then press and hold any button. What I'm expecting: Client with pressed button sends a message "some data" to the server, then the server sends this message to all the clients(including the original sender) . Then each client moves right the button 'connectButton' and prints a message to the log with time in the following format: "min:secs:milliseconds". What is going wrong: The motion is smooth in the client that sends the message, but in all other clients the motion is jerky. This happens because messages to those clients arrive later than to the original sending client. And if we have three clients (let's name them A,B,C) and we send a message from A, the sending time log of B and C will be the same. Why other clients recieve this messages later than the original sender? By the way, on ubuntu 10.04/chrome all the motion is smooth. Two clients are launched in separated chromes. windows screenshot Can't post linux screenshot, need more than 10 reputation to post more hyperlinks. Listing of log, four clients simultaneously: [16:29:33.280858] 62.140.224.1 >> some data [16:29:33.280912] 87.249.9.98 << some data [16:29:33.280970] 87.249.9.98 << some data [16:29:33.281025] 87.249.9.98 << some data [16:29:33.281079] 62.140.224.1 << some data [16:29:33.323267] 62.140.224.1 >> some data [16:29:33.323326] 87.249.9.98 << some data [16:29:33.323386] 87.249.9.98 << some data [16:29:33.323440] 87.249.9.98 << some data [16:29:33.323493] 62.140.224.1 << some data [16:29:34.123435] 62.140.224.1 >> some data [16:29:34.123525] 87.249.9.98 << some data [16:29:34.123593] 87.249.9.98 << some data [16:29:34.123648] 87.249.9.98 << some data [16:29:34.123702] 62.140.224.1 << some data AS3 client code package { import adobe.utils.CustomActions; import flash.display.Sprite; import flash.events.DataEvent; import flash.events.Event; import flash.events.IOErrorEvent; import flash.events.KeyboardEvent; import flash.events.MouseEvent; import flash.events.SecurityErrorEvent; import flash.net.XMLSocket; import flash.system.Security; import flash.text.TextField; public class Main extends Sprite { private var socket :XMLSocket; private var textField :TextField = new TextField; private var connectButton :TextField = new TextField; public function Main():void { if (stage) init(); else addEventListener(Event.ADDED_TO_STAGE, init); } private function init(event:Event = null):void { socket = new XMLSocket(); socket.addEventListener(Event.CONNECT, connectHandler); socket.addEventListener(DataEvent.DATA, dataHandler); stage.addEventListener(KeyboardEvent.KEY_DOWN, keyDownHandler); addChild(textField); textField.y = 50; textField.width = 780; textField.height = 500; textField.border = true; connectButton.selectable = false; connectButton.border = true; connectButton.addEventListener(MouseEvent.MOUSE_DOWN, connectMouseDownHandler); connectButton.width = 105; connectButton.height = 20; connectButton.text = "click here to connect"; addChild(connectButton); } private function connectHandler(event:Event):void { textField.appendText("Connect\n"); textField.appendText("Press and hold any key\n"); } private function dataHandler(event:DataEvent):void { var now:Date = new Date(); textField.appendText(event.data + " time = " + 
now.getMinutes() + ":" + now.getSeconds() + ":" + now.getMilliseconds() + "\n"); connectButton.x += 2; } private function keyDownHandler(event:KeyboardEvent):void { socket.send("some data"); } private function connectMouseDownHandler(event:MouseEvent):void { var connectAddress:String = "ep1c.org"; var connectPort:Number = 13250; Security.loadPolicyFile("xmlsocket://" + connectAddress + ":" + String(connectPort)); socket.connect(connectAddress, connectPort); } } } Python server code from twisted.internet import reactor from twisted.internet.protocol import ServerFactory from twisted.protocols.basic import LineOnlyReceiver import datetime class EchoProtocol(LineOnlyReceiver): ##### name = "" id = 0 delimiter = chr(0) ##### def getName(self): return self.transport.getPeer().host def connectionMade(self): self.id = self.factory.getNextId() print "New connection from %s - id:%s" % (self.getName(), self.id) self.factory.clientProtocols[self.id] = self def connectionLost(self, reason): print "Lost connection from "+ self.getName() del self.factory.clientProtocols[self.id] self.factory.sendMessageToAllClients(self.getName() + " has disconnected.") def lineReceived(self, line): print "[%s] %s >> %s" % (datetime.datetime.now().time(), self, line) if line=="<policy-file-request/>": data = """<?xml version="1.0"?> <!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd"> <!-- Policy file for xmlsocket://ep1c.org --> <cross-domain-policy> <allow-access-from domain="*" to-ports="%s" /> </cross-domain-policy>""" % PORT self.send(data) else: self.factory.sendMessageToAllClients( line ) def send(self, line): print "[%s] %s << %s" % (datetime.datetime.now().time(), self, line) if line: self.transport.write( str(line) + chr(0)) else: print "Nothing to send" def __str__(self): return self.getName() class ChatProtocolFactory(ServerFactory): protocol = EchoProtocol def __init__(self): self.clientProtocols = {} self.nextId = 0 def getNextId(self): id = self.nextId self.nextId += 1 return id def sendMessageToAllClients(self, msg): for client in self.clientProtocols: self.clientProtocols[client].send(msg) def sendMessageToClient(self, id, msg): self.clientProtocols[id].send(msg) PORT = 13250 print "Starting Server" factory = ChatProtocolFactory() reactor.listenTCP(PORT, factory) reactor.run()
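
    A hedged guess rather than a confirmed diagnosis: the bursty delivery to the clients that did not send the message looks like small writes being coalesced by Nagle's algorithm. Twisted's TCP transport can switch that off per connection; the only change would be one line added to the existing connectionMade:

        def connectionMade(self):
            self.id = self.factory.getNextId()
            print "New connection from %s - id:%s" % (self.getName(), self.id)
            self.factory.clientProtocols[self.id] = self
            self.transport.setTcpNoDelay(True)  # flush each chr(0)-terminated frame immediately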

    Read the article

  • Metro, Authentication, and the ASP.NET Web API

    - by Stephen.Walther
    Imagine that you want to create a Metro style app written with JavaScript and you want to communicate with a remote web service. For example, you are creating a movie app which retrieves a list of movies from a movies service. In this situation, how do you authenticate your Metro app and the Metro user so not just anyone can call the movies service? How can you identify the user making the request so you can return user specific data from the service? The Windows Live SDK supports a feature named Single Sign-On. When a user logs into a Windows 8 machine using their Live ID, you can authenticate the user’s identity automatically. Even better, when the Metro app performs a call to a remote web service, you can pass an authentication token to the remote service and prevent unauthorized access to the service. The documentation for Single Sign-On is located here: http://msdn.microsoft.com/en-us/library/live/hh826544.aspx In this blog entry, I describe the steps that you need to follow to use Single Sign-On with a (very) simple movie app. We build a Metro app which communicates with a web service created using the ASP.NET Web API. Creating the Visual Studio Solution Let’s start by creating a Visual Studio solution which contains two projects: a Windows Metro style Blank App project and an ASP.NET MVC 4 Web Application project. Name the Metro app MovieApp and the ASP.NET MVC application MovieApp.Services. When you create the ASP.NET MVC application, select the Web API template: After you create the two projects, your Visual Studio Solution Explorer window should look like this: Configuring the Live SDK You need to get your hands on the Live SDK and register your Metro app. You can download the latest version of the SDK (version 5.2) from the following address: http://www.microsoft.com/en-us/download/details.aspx?id=29938 After you download the Live SDK, you need to visit the following website to register your Metro app: https://manage.dev.live.com/build Don’t let the title of the website — Windows Push Notifications & Live Connect – confuse you, this is the right place. Follow the instructions at the website to register your Metro app. Don’t forget to follow the instructions in Step 3 for updating the information in your Metro app’s manifest. After you register, your client secret is displayed. Record this client secret because you will need it later (we use it with the web service): You need to configure one more thing. You must enter your Redirect Domain by visiting the following website: https://manage.dev.live.com/Applications/Index Click on your application name, click Edit Settings, click the API Settings tab, and enter a value for the Redirect Domain field. You can enter any domain that you please just as long as the domain has not already been taken: For the Redirect Domain, I entered http://superexpertmovieapp.com. Create the Metro MovieApp Next, we need to create the MovieApp. The MovieApp will: 1. Use Single Sign-On to log the current user into Live 2. Call the MoviesService web service 3. Display the results in a ListView control Because we use the Live SDK in the MovieApp, we need to add a reference to it. 
Right-click your References folder in the Solution Explorer window and add the reference: Here’s the HTML page for the Metro App: <!DOCTYPE html> <html> <head> <meta charset="utf-8" /> <title>MovieApp</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.1.0.RC/css/ui-dark.css" rel="stylesheet" /> <script src="//Microsoft.WinJS.1.0.RC/js/base.js"></script> <script src="//Microsoft.WinJS.1.0.RC/js/ui.js"></script> <!-- Live SDK --> <script type="text/javascript" src="/LiveSDKHTML/js/wl.js"></script> <!-- WebServices references --> <link href="/css/default.css" rel="stylesheet" /> <script src="/js/default.js"></script> </head> <body> <div id="tmplMovie" data-win-control="WinJS.Binding.Template"> <div class="movieItem"> <span data-win-bind="innerText:title"></span> <br /><span data-win-bind="innerText:director"></span> </div> </div> <div id="lvMovies" data-win-control="WinJS.UI.ListView" data-win-options="{ itemTemplate: select('#tmplMovie') }"> </div> </body> </html> The HTML page above contains a Template and ListView control. These controls are used to display the movies when the movies are returned from the movies service. Notice that the page includes a reference to the Live script that we registered earlier: <!-- Live SDK --> <script type="text/javascript" src="/LiveSDKHTML/js/wl.js"></script> The JavaScript code looks like this: (function () { "use strict"; var REDIRECT_DOMAIN = "http://superexpertmovieapp.com"; var WEBSERVICE_URL = "http://localhost:49743/api/movies"; function init() { WinJS.UI.processAll().done(function () { // Get element and control references var lvMovies = document.getElementById("lvMovies").winControl; // Login to Windows Live var scopes = ["wl.signin"]; WL.init({ scope: scopes, redirect_uri: REDIRECT_DOMAIN }); WL.login().then( function(response) { // Get the authentication token var authenticationToken = response.session.authentication_token; // Call the web service var options = { url: WEBSERVICE_URL, headers: { authenticationToken: authenticationToken } }; WinJS.xhr(options).done( function (xhr) { var movies = JSON.parse(xhr.response); var listMovies = new WinJS.Binding.List(movies); lvMovies.itemDataSource = listMovies.dataSource; }, function (xhr) { console.log(xhr.statusText); } ); }, function(response) { throw WinJS.ErrorFromName("Failed to login!"); } ); }); } document.addEventListener("DOMContentLoaded", init); })(); There are two constants which you need to set to get the code above to work: REDIRECT_DOMAIN and WEBSERVICE_URL. The REDIRECT_DOMAIN is the domain that you entered when registering your app with Live. The WEBSERVICE_URL is the path to your web service. You can get the correct value for WEBSERVICE_URL by opening the Project Properties for the MovieApp.Services project, clicking the Web tab, and getting the correct URL. The port number is randomly generated. In my code, I used the URL  “http://localhost:49743/api/movies”. Assuming that the user is logged into Windows 8 with a Live account, when the user runs the MovieApp, the user is logged into Live automatically. The user is logged in with the following code: // Login to Windows Live var scopes = ["wl.signin"]; WL.init({ scope: scopes, redirect_uri: REDIRECT_DOMAIN }); WL.login().then(function(response) { // Do something }); The scopes setting determines what the user has permission to do. For example, access the user’s SkyDrive or access the user’s calendar or contacts. 
The available scopes are listed here: http://msdn.microsoft.com/en-us/library/live/hh243646.aspx In our case, we only need the wl.signin scope which enables Single Sign-On. After the user signs in, you can retrieve the user’s Live authentication token. The authentication token is passed to the movies service to authenticate the user. Creating the Movies Service The Movies Service is implemented as an API controller in an ASP.NET MVC 4 Web API project. Here’s what the MoviesController looks like: using System.Collections.Generic; using System.Linq; using System.Net; using System.Net.Http; using System.Web.Http; using JWTSample; using MovieApp.Services.Models; namespace MovieApp.Services.Controllers { public class MoviesController : ApiController { const string CLIENT_SECRET = "NtxjF2wu7JeY1unvVN-lb0hoeWOMUFoR"; // GET api/values public HttpResponseMessage Get() { // Authenticate // Get authenticationToken var authenticationToken = Request.Headers.GetValues("authenticationToken").FirstOrDefault(); if (authenticationToken == null) { return new HttpResponseMessage(HttpStatusCode.Unauthorized); } // Validate token var d = new Dictionary<int, string>(); d.Add(0, CLIENT_SECRET); try { var myJWT = new JsonWebToken(authenticationToken, d); } catch { return new HttpResponseMessage(HttpStatusCode.Unauthorized); } // Return results return Request.CreateResponse( HttpStatusCode.OK, new List<Movie> { new Movie {Title="Star Wars", Director="Lucas"}, new Movie {Title="King Kong", Director="Jackson"}, new Movie {Title="Memento", Director="Nolan"} } ); } } } Because the Metro app performs an HTTP GET request, the MovieController Get() action is invoked. This action returns a set of three movies when, and only when, the authentication token is validated. The Movie class looks like this: using Newtonsoft.Json; namespace MovieApp.Services.Models { public class Movie { [JsonProperty(PropertyName="title")] public string Title { get; set; } [JsonProperty(PropertyName="director")] public string Director { get; set; } } } Notice that the Movie class uses the JsonProperty attribute to change Title to title and Director to director to make JavaScript developers happy. The Get() method validates the authentication token before returning the movies to the Metro app. To get authentication to work, you need to provide the client secret which you created at the Live management site. If you forgot to write down the secret, you can get it again here: https://manage.dev.live.com/Applications/Index The client secret is assigned to a constant at the top of the MoviesController class. The MoviesController class uses a helper class named JsonWebToken to validate the authentication token. This class was created by the Windows Live team. You can get the source code for the JsonWebToken class from the following GitHub repository: https://github.com/liveservices/LiveSDK/blob/master/Samples/Asp.net/AuthenticationTokenSample/JsonWebToken.cs You need to add an additional reference to your MVC project to use the JsonWebToken class: System.Runtime.Serialization. You can use the JsonWebToken class to get a unique and validated user ID like this: var user = myJWT.Claims.UserId; If you need to store user specific information then you can use the UserId property to uniquely identify the user making the web service call. Running the MovieApp When you first run the Metro MovieApp, you get a screen which asks whether the app should have permission to use Single Sign-On. This screen never appears again after you give permission once. 
Actually, when I first ran the app, I get the following error: According to the error, the app is blocked because “We detected some suspicious activity with your Online Id account. To help protect you, we’ve temporarily blocked your account.” This appears to be a bug in the current preview release of the Live SDK and there is more information about this bug here: http://social.msdn.microsoft.com/Forums/en-US/messengerconnect/thread/866c495f-2127-429d-ab07-842ef84f16ae/ If you click continue, and continue running the app, the error message does not appear again.  Summary The goal of this blog entry was to describe how you can validate Metro apps and Metro users when performing a call to a remote web service. First, I explained how you can create a Metro app which takes advantage of Single Sign-On to authenticate the current user against Live automatically. You learned how to register your Metro app with Live and how to include an authentication token in an Ajax call. Next, I explained how you can validate the authentication token – retrieved from the request header – in a web service. I discussed how you can use the JsonWebToken class to validate the authentication token and retrieve the unique user ID.

    Read the article

  • Frequently getting booted from Securemote VPN-1 Connection

    - by Nick L.
    I connect to my office's network remotely through the Checkpoint SecuRemote E75 (R75) VPN application, but recently it's been causing me a lot of issues when connecting from home. I connect through a WRT54GL router running DD-WRT v24 firmware, so I have no clue if that affects anything. I took a dump of the logs for Checkpoint and here are the messages that populate when I get booted but I have no clue how to decipher them and my IT department is completely clueless in terms of resolving the situation. I'm thinking the router is blocking the keep alive connection or something along those lines, but I have no idea how to fix the problem. [ 2388 2932][30 Aug 22:47:49][TR_OFFICE_MODE] TR_OFFICE_MODE::TrOfficeMode::OmSendIpFrameCB: Not sending packet because it's not to the enc domain [ 2388 2932][30 Aug 22:47:50][TR_EVENTS] TR_EVENTS::Raise: Running registered cb... [ 2388 2932][30 Aug 22:47:50][TrComInf] TrComInf::TrComInfSendAsynchronic: __start__ 22:47:50.606 [ 2388 2932][30 Aug 22:47:50][TrComInf] TrComInf::TrComInf::TrComInfSendAsynchronic: Acquiring mutex [ 2388 2932][30 Aug 22:47:50][messaging] messaging::send_all: Sending Message {{ 2 }} , len 185 [ 2388 2932][30 Aug 22:47:50][tcpserver] TcpMultiPipe::pipe_if_send: Message (193 bytes) written successfully to socket 0x224 [ 2388 2932][30 Aug 22:47:50][TrComInf] TrComInf::TrComInf::TrComInfSendAsynchronic: Released mutex [ 2388 2932][30 Aug 22:47:50][TrComInf] TrComInf::TrComInfSendAsynchronic: __end__ 22:47:50.606. Total time - 0 milliseconds [ 2388 2932][30 Aug 22:47:50][TR_SRV2CL] TR_SRV2CL::SendNotification: Successfully sent notification of type TR_NOTIFICATION_TRAFFIC_IDLE [ 2388 2932][30 Aug 22:47:50][vna] vna_trap: received VNA_TRAP_FORWARD_PACKET [ 2388 2932][30 Aug 22:47:50][vna] vna_traffic_fwd_do : forwarding packet with 98 bytes [ 2388 2932][30 Aug 22:47:50][TR_OFFICE_MODE] TrOfficeMode::OmSendIpFrameCB: Packet to destination 192.168.162.15 of protocol 17 [ 2388 2932][30 Aug 22:47:50][TR_OFFICE_MODE] TR_OFFICE_MODE::TrOfficeMode::OmSendIpFrameCB: Not sending packet because it's not to the enc domain [ 2388 2932][30 Aug 22:47:51][vna] vna_trap: received VNA_TRAP_FORWARD_PACKET [ 2388 2932][30 Aug 22:47:51][vna] vna_traffic_fwd_do : forwarding packet with 98 bytes [ 2388 2932][30 Aug 22:47:51][TR_OFFICE_MODE] TrOfficeMode::OmSendIpFrameCB: Packet to destination 192.168.162.15 of protocol 17 [ 2388 2932][30 Aug 22:47:51][TR_OFFICE_MODE] TR_OFFICE_MODE::TrOfficeMode::OmSendIpFrameCB: Not sending packet because it's not to the enc domain [ 2388 2392][30 Aug 22:47:52][TracService] service_ctrl_ex: Called with ctrl_code 14 [ 2388 2392][30 Aug 22:47:52][TracService] service_ctrl_ex: System got SERVICE_CONTROL_SESSIONCHANGE message event type 4 session 2 [ 2388 2392][30 Aug 22:47:52][TracService] service_ctrl_ex: Console/remote disconnect has occured in session 2 [ 2388 2932][30 Aug 22:47:52][vna] vna_trap: received VNA_TRAP_FORWARD_PACKET [ 2388 2932][30 Aug 22:47:52][vna] vna_traffic_fwd_do : forwarding packet with 98 bytes [ 2388 2932][30 Aug 22:47:52][TR_OFFICE_MODE] TrOfficeMode::OmSendIpFrameCB: Packet to destination 192.168.162.15 of protocol 17 [ 2388 2932][30 Aug 22:47:52][TR_OFFICE_MODE] TR_OFFICE_MODE::TrOfficeMode::OmSendIpFrameCB: Not sending packet because it's not to the enc domain [ 2388 2932][30 Aug 22:47:52][TR_CONN_MANAGER] TR_CONN_MANAGER::ConnEnum: Returning connection at position 1 [ 2388 2932][30 Aug 22:47:52][TR_EVENTS] TR_EVENTS::Raise: Running registered cb... 
[ 2388 2932][30 Aug 22:47:52][TR_CONN_MANAGER] TR_CONN_MANAGER::ConnEventMainHandler: no gw handle [ 2388 2932][30 Aug 22:47:52][TR_CONN_MANAGER] TR_CONN_MANAGER::ConnEventMainHandler: Current connection state is TR_CONN_STATE_CONNECTED. Receiving event of type CONN_EVENT_SYSTEM_SESSION_LOGOFF. Connection handle = 1. System state: TR_SYSTEM_STATE_RUNNING [ 2388 2932][30 Aug 22:47:52][CONFIG_MANAGER] suspend_tunnel_while_locked return value false, because it is Default variable. Scope: site 12.43.159.10, gw NULL ,user USER [ 2388 2932][30 Aug 22:47:52][TR_CONN_MANAGER] TR_CONN_MANAGER::ConnEventConnectedHandler: no gw handle [ 2388 2932][30 Aug 22:47:52][TR_CONN_MANAGER] TR_CONN_MANAGER::ConnEventConnectedHandler: receive session logoff event while connected. cancelling connection Thanks all. :)

    Read the article

  • Permission denied: /home/.htaccess pcfg_openfile: unable to check htaccess file

    - by phoebebright
    This domain was working this morning; now I get a 403 error and the message above in my error log. I'm not using .htaccess files, but I have been doing some copying on the server, so I may have messed things up - though I made no changes to this domain (unless by accident!). What is this pcfg_openfile thing anyway? I've done lots of googling, but none of the solutions seemed to fit these circumstances. The server is Ubuntu Hardy Heron.
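
    A hedged sketch of the usual first check (the vhost directory name is a placeholder): this error typically shows up when the Apache worker cannot traverse or read a parent directory while probing for .htaccess files, which a copy performed as root can easily cause. Verifying and, if needed, restoring traverse permission usually clears it:

        ls -ld /home /home/your-vhost-dir
        sudo chmod o+x /home /home/your-vhost-dir
        sudo apache2ctl configtest && sudo /etc/init.d/apache2 reload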

    Read the article

  • IIS 7 Authentication: Certain users can't authenticate, while almost all others can.

    - by user35335
    I'm using IIS 7 Digest authentication to control access to a certain directory containing files. Users access the files through a department website from inside our network and outside. I've set NTFS permissions on the directory to allow a certain AD group to view the files. When I click a link to one of those files on the website I get prompted for a username and password. With most users everything works fine, but with a few of them it prompts for a password 3 times and then get: 401 - Unauthorized: Access is denied due to invalid credentials. But other users that are in the group can get in without a problem. If I switch it over to Windows Authentication, then the trouble users can log in fine. That directory is also shared, and users that can't log in through the website are able to browse to the share and view files in it, so I know that the permissions are ok. Here's the portion of the IIS log where I tried to download the file (/assets/files/secure/WWGNL.pdf): 2010-02-19 19:47:20 xxx.xxx.xxx.xxx GET /assets/images/bullet.gif - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 200 0 0 218 2010-02-19 19:47:20 xxx.xxx.xxx.xxx GET /assets/images/bgOFF.gif - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 200 0 0 218 2010-02-19 19:47:21 xxx.xxx.xxx.xxx GET /assets/files/secure/WWGNL.pdf - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 401 2 5 0 2010-02-19 19:47:36 xxx.xxx.xxx.xxx GET /assets/files/secure/WWGNL.pdf - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 401 1 2148074252 0 2010-02-19 19:47:43 xxx.xxx.xxx.xxx GET /assets/files/secure/WWGNL.pdf - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 401 1 2148074252 15 2010-02-19 19:47:46 xxx.xxx.xxx.xxx GET /manager/media/script/_session.gif 0.19665693119168282 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 200 0 0 203 2010-02-19 19:47:46 xxx.xxx.xxx.xxx POST /manager/index.php - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 200 0 0 296 2010-02-19 19:47:56 xxx.xxx.xxx.xxx GET /assets/files/secure/WWGNL.pdf - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 401 1 2148074252 15 2010-02-19 19:47:59 xxx.xxx.xxx.xxx GET /favicon.ico - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 404 0 2 0 Here's the Failed Logon attempt in the Security Log: Log Name: Security Source: Microsoft-Windows-Security-Auditing Date: 2/19/2010 11:47:43 AM Event ID: 4625 Task Category: Logon Level: Information Keywords: Audit Failure User: N/A Computer: WEB4.net.domain.org Description: An account failed to log on. Subject: Security ID: NULL SID Account Name: - Account Domain: - Logon ID: 0x0 Logon Type: 3 Account For Which Logon Failed: Security ID: NULL SID Account Name: jim.lastname Account Domain: net.domain.org Failure Information: Failure Reason: Unknown user name or bad password. 
    Status: 0xc000006d  Sub Status: 0xc000006a
    Process Information: Caller Process ID: 0x0  Caller Process Name: -
    Network Information: Workstation Name: -  Source Network Address: 10.5.16.138  Source Port: 50065
    Detailed Authentication Information: Logon Process: WDIGEST  Authentication Package: WDigest  Transited Services: -  Package Name (NTLM only): -  Key Length: 0
    This event is generated when a logon request fails. It is generated on the computer where access was attempted. The Subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe. The Logon Type field indicates the kind of logon that was requested. The most common types are 2 (interactive) and 3 (network). The Process Information fields indicate which account and process on the system requested the logon. The Network Information fields indicate where a remote logon request originated. Workstation name is not always available and may be left blank in some cases. The authentication information fields provide detailed information about this specific logon request.
    - Transited services indicate which intermediate services have participated in this logon request.
    - Package name indicates which sub-protocol was used among the NTLM protocols.
    - Key length indicates the length of the generated session key. This will be 0 if no session key was requested.
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Microsoft-Windows-Security-Auditing" Guid="{54849625-5478-4994-a5ba-3e3b0328c30d}" />
        <EventID>4625</EventID>
        <Version>0</Version>
        <Level>0</Level>
        <Task>12544</Task>
        <Opcode>0</Opcode>
        <Keywords>0x8010000000000000</Keywords>
        <TimeCreated SystemTime="2010-02-19T19:47:43.890Z" />
        <EventRecordID>2276316</EventRecordID>
        <Correlation />
        <Execution ProcessID="612" ThreadID="692" />
        <Channel>Security</Channel>
        <Computer>WEB4.net.domain.org</Computer>
        <Security />
      </System>
      <EventData>
        <Data Name="SubjectUserSid">S-1-0-0</Data>
        <Data Name="SubjectUserName">-</Data>
        <Data Name="SubjectDomainName">-</Data>
        <Data Name="SubjectLogonId">0x0</Data>
        <Data Name="TargetUserSid">S-1-0-0</Data>
        <Data Name="TargetUserName">jim.lastname</Data>
        <Data Name="TargetDomainName">net.domain.org</Data>
        <Data Name="Status">0xc000006d</Data>
        <Data Name="FailureReason">%%2313</Data>
        <Data Name="SubStatus">0xc000006a</Data>
        <Data Name="LogonType">3</Data>
        <Data Name="LogonProcessName">WDIGEST</Data>
        <Data Name="AuthenticationPackageName">WDigest</Data>
        <Data Name="WorkstationName">-</Data>
        <Data Name="TransmittedServices">-</Data>
        <Data Name="LmPackageName">-</Data>
        <Data Name="KeyLength">0</Data>
        <Data Name="ProcessId">0x0</Data>
        <Data Name="ProcessName">-</Data>
        <Data Name="IpAddress">10.5.16.138</Data>
        <Data Name="IpPort">50065</Data>
      </EventData>
    </Event>
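    A minimal out-of-band check, as a sketch: Digest authentication against AD can be sensitive to the exact username format the client sends, so it can help to replay the request outside any browser with each form of a failing account's name. This assumes Python 3 with the requests library, that the site answers at web4.net.domain.org, and a placeholder password.

        # Hedged diagnostic: request the protected file with Digest credentials in
        # several username formats and compare the status codes IIS returns.
        import requests
        from requests.auth import HTTPDigestAuth

        url = "http://web4.net.domain.org/assets/files/secure/WWGNL.pdf"  # assumed hostname
        for user in ("jim.lastname", "net.domain.org\\jim.lastname", "jim.lastname@net.domain.org"):
            resp = requests.get(url, auth=HTTPDigestAuth(user, "password-here"))
            print(user, resp.status_code, resp.headers.get("WWW-Authenticate", ""))

    If one format succeeds where another fails, the problem is the name/realm those users' browsers send rather than NTFS permissions; if every format fails, having the affected users reset their passwords (so the DC can regenerate the digest hash) is another commonly suggested fix.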

    Read the article

  • Remote Email Access?

    - by Tyler
    I have remote email access from my iPhone and my Android phone, but I cannot set up a Windows email client to check my email using the exact same information I entered on the phones. The email system is Exchange 2003, and I hate using the cheap Outlook Web App that it has.
    User: [email protected]
    Password: 1234
    Server: mail.domain.com
    That works for the phones, so why can't I get it to work on my email client? Maybe a DNS problem?
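    Phones usually talk to Exchange 2003 over ActiveSync (HTTPS), while a desktop client needs IMAP, POP3, or RPC, which may not be enabled or published externally. A quick, hedged way to see what mail.domain.com actually exposes from outside, assuming Python 3 on the client machine:

        # Probe the common client protocols; a closed or filtered port explains why
        # the phone settings don't transfer to a desktop mail client.
        import socket

        ports = {443: "HTTPS (OWA/ActiveSync)", 143: "IMAP", 993: "IMAPS", 110: "POP3", 995: "POP3S"}
        for port, label in ports.items():
            try:
                with socket.create_connection(("mail.domain.com", port), timeout=3):
                    print(f"{label} ({port}): reachable")
            except OSError as e:
                print(f"{label} ({port}): {e}")

    If only 443 is reachable, the server name resolves fine and DNS is not the issue; the desktop client simply has no open protocol to use until IMAP/POP is enabled or RPC over HTTPS is configured.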

    Read the article

  • Unable to Install SQL Server 2008 on Win Server 2008 R2 Datacenter

    - by MikeKusold
    I have been trying for the past three days to install SQL Server 2008 with SharePoint integrated mode in VMware Player, but I keep getting the following error: "Reporting Services in SharePoint integrated mode is not supported for WORKGROUP edition". I set up AD DS and have my computer joined to that domain (therefore not a WORKGROUP). I am currently at my wits' end, and any help would be appreciated. Current roles installed: Application Server, Active Directory Domain Services, Web Server (IIS). Features: Desktop Experience, Group Policy Management, Ink and Handwriting Services, Remote Server Administration Tools, Windows Process Activation Service, .NET Framework 3.5.1 Features.
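    Note that the WORKGROUP in that message normally refers to the SQL Server edition being installed, not to the machine's workgroup/domain membership, so it is worth confirming both before fighting the domain join any further. A rough check, assuming Python 3 on the VM and that sqlcmd plus a default instance are present if SQL was partially installed:

        # Hedged sanity checks: what domain does Windows think it is in, and which
        # SQL Server edition (if any instance already exists) does it report?
        import subprocess

        print(subprocess.run(
            ["wmic", "computersystem", "get", "domain,partofdomain"],
            capture_output=True, text=True).stdout)

        print(subprocess.run(
            ["sqlcmd", "-E", "-Q", "SELECT SERVERPROPERTY('Edition'), SERVERPROPERTY('ProductVersion')"],
            capture_output=True, text=True).stdout)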

    Read the article

  • windows 2003 server : can't find a primary authoritative dns server for the name srv.domain1.local [

    - by phill
    I originally tried to rejoin a computer to the network, which led to a "cannot find domain" error. The username/password box doesn't even come up. Some tests I ran: I can ping the server, but I can't ping the domain name domain1.local. nslookup can't find the domain either; it looks to the ISP's DNS instead of my own to resolve the local machines. So I go to the DNS server and run netdiag.exe, which gives me this error:
    DNS test . . . . . . . . . . . . . : Failed
    [WARNING] Cannot find a primary authoritative DNS server for the name 'stmartinsrv.stmartin.local.'. [RCODE_SERVER_FAILURE] The name 'srv.domain1.local.' may not be registered in DNS.
    [WARNING] The DNS entries for this DC are not registered correctly on DNS server '68.94.156.1'. Please wait for 30 minutes for DNS server replication.
    [WARNING] The DNS entries for this DC are not registered correctly on DNS server '68.94.157.1'. Please wait for 30 minutes for DNS server replication.
    [FATAL] No DNS servers have the DNS records for this DC registered.
    Redir and Browser test . . . . . . : Passed
    List of NetBt transports currently bound to the Redir: NetBT_Tcpip_{04BB0F6B-06AE-4D60-80C8-2A7A24C1D87B}. The redir is bound to 1 NetBt transport.
    List of NetBt transports currently bound to the browser: NetBT_Tcpip_{04BB0F6B-06AE-4D60-80C8-2A7A24C1D87B}. The browser is bound to 1 NetBt transport.
    From previous postings, I've tried adding the domain suffix to the NIC IP properties on both the client machine and the DC server, which didn't help. Any ideas? Thanks in advance.
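    A quick way to see whether the DC's own DNS ever gets asked, as a sketch, assuming Python 3 on the client and a placeholder address of 192.168.1.10 for the DC/DNS server:

        # Resolve through whatever the NIC is configured with, then ask the DC directly.
        import socket, subprocess

        for name in ("domain1.local", "srv.domain1.local"):
            try:
                print(name, "->", socket.gethostbyname(name))
            except OSError as e:
                print(name, "->", e)

        # Query the DC's DNS service itself for the domain controller SRV record,
        # bypassing the ISP resolvers (68.94.156.1 / 68.94.157.1) entirely.
        print(subprocess.run(
            ["nslookup", "-type=SRV", "_ldap._tcp.dc._msdcs.domain1.local", "192.168.1.10"],
            capture_output=True, text=True).stdout)

    If the direct query works but the first lookups fail, the clients (and the DC itself) are simply pointed at the ISP's DNS instead of the DC, which matches the netdiag warnings above.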

    Read the article

  • ssh tunnel error "ssh_exchange_identification: Connection closed by remote host"

    - by Jacob Ewing
    I'm trying to use an ssh tunnel from my office machine to my home machine, and get an error when I try to use it. What I'm doing is starting one shell like so:
    ssh -gL 12345:my.home.domain:22 my.home.domain
    This is giving me a proper shell, no problem. What I normally do then is ssh to my home machine through this office machine, like so:
    ssh -p 12345 127.0.0.1
    This has always worked for me, until last week, when I set up a new system on my home machine (switching from Ubuntu to Debian). Now I get an error. I can still open up my initial ssh connection, but when I try to use that tunnel, I get (on the office machine) this error:
    ssh_exchange_identification: Connection closed by remote host
    Also, when that happens, the open shell that I have the tunnelling set up through gets this line spat out at it:
    channel 3: open failed: connect failed: Connection timed out
    At which point, I'm at a loss. If any more info is needed, I'll be happy to post it.
    ============= further to that ==============
    After fiddling around further, I've found that I'm getting a different response from the server (my home machine that is) when I try to telnet in on the various ports. If I try:
    telnet my.home.domain 22
    I get this back:
    Trying <my ip address>...
    Connected to <my domain>.
    Escape character is '^]'.
    SSH-2.0-OpenSSH_5.5p1 Debian-6+squeeze2
    Which is what I would expect. After setting up the tunnel though, and then telnetting to that, I see this response:
    Trying 127.0.0.1...
    Connected to 127.0.0.1.
    Escape character is '^]'.
    ============== and further still ==================
    As per kbulgrien's suggestion, here is the output from the client machine with the -v option:
    ssh -vp 24600 127.0.0.1
    OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012
    debug1: Reading configuration data /etc/ssh/ssh_config
    debug1: /etc/ssh/ssh_config line 19: Applying options for *
    debug1: Connecting to 127.0.0.1 [127.0.0.1] port 24600.
    debug1: Connection established.
    debug1: identity file /home/jacob/.ssh/id_rsa type -1
    debug1: identity file /home/jacob/.ssh/id_rsa-cert type -1
    debug1: identity file /home/jacob/.ssh/id_dsa type -1
    debug1: identity file /home/jacob/.ssh/id_dsa-cert type -1
    debug1: identity file /home/jacob/.ssh/id_ecdsa type -1
    debug1: identity file /home/jacob/.ssh/id_ecdsa-cert type -1
    ssh_exchange_identification: Connection closed by remote host
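    The "channel 3: open failed: connect failed: Connection timed out" line means the home machine itself could not open a connection to my.home.domain:22 when the tunnel asked it to, which is why the office end never sees an SSH banner. A small check to run on the home machine, as a sketch (assuming Python 3 there):

        # From the home machine: can it reach its own public name on port 22?
        # A timeout here (hairpin NAT / firewall) would explain the tunnel failure.
        import socket

        for target in ("my.home.domain", "localhost"):
            try:
                with socket.create_connection((target, 22), timeout=5) as s:
                    print(target, "->", s.recv(64).decode(errors="replace").strip())
            except OSError as e:
                print(target, "->", e)

    If my.home.domain times out from the home machine but localhost answers with the OpenSSH banner, pointing the tunnel at the loopback interface instead (ssh -gL 12345:localhost:22 my.home.domain) should restore the old behaviour.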

    Read the article

  • What is the difference between Anycast and GeoDNS / GeoIP wrt HA?

    - by Riyad
    Based on the Wikipedia description of Anycast, it includes both the distribution of a domain-name-to-many-IP mapping across many DNS servers as well as replying to clients with the most geographically close (or fastest) server. In the context of a globally distributed, highly available site like google.com (or any CDN service with many global edge locations), these sound like the two key features one would need. DNS services like Amazon's Route53, EasyDNS and DNSMadeEasy all advertise themselves as Anycast-enabled networks. My assumption was therefore that each of these DNS services transparently offers those two killer features: multi-IP-to-domain mapping AND routing clients to the closest node. However, each of these services seems to separate out the two functionalities, referring to the second one (routing clients to the closest node) as "GeoDNS", "GeoIP" or "Global Traffic Director" and charging extra for it. If a core tenet of an Anycast-capable system is to already do this, why is this functionality earmarked as an extra feature? What is this "GeoDNS" feature doing that a standard Anycast DNS service won't do (according to the definition of Anycast from Wikipedia -- I understand what is being advertised, just not why it isn't implied already)? I get even more confused when a DNS service like Route53, which doesn't support this nebulous "GeoDNS" feature, lists functionality like: "Fast – Using a global anycast network of DNS servers around the world, Route 53 is designed to automatically route your users to the optimal location depending on network conditions. As a result, the service offers low query latency for your end users, as well as low update latency for your DNS record management needs." ... which sounds exactly like what GeoDNS is intended to do, yet geographically directing clients is something they explicitly don't support yet. Ultimately I am looking for the following two features from a DNS provider:
    1. Map multiple IP addresses to a single domain name (like google.com, amazon.com, etc. do).
    2. Respond to client requests for that domain with the IP address of the server nearest to the requester.
    As mentioned, it seems like this should all be part of an "Anycast" DNS service (which these services all are), but the features and marketing I see from them suggest otherwise, making me think I need to learn a bit more about how DNS works before making a deployment choice. Thanks in advance for any clarifications.
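    For what it's worth, the first feature (one name mapping to many addresses) is easy to observe from any resolver; it is the second (answers that differ by query location) that providers gate behind the GeoDNS add-on. A small illustration, assuming Python 3; the name used is just an example of a multi-homed domain:

        # List every address the local resolver currently returns for one name.
        # Plain round-robin/anycast DNS hands every client the same set; GeoDNS
        # changes the set depending on where the query originates.
        import socket

        addrs = {info[4][0] for info in socket.getaddrinfo("www.google.com", 443, proto=socket.IPPROTO_TCP)}
        for a in sorted(addrs):
            print(a)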

    Read the article

  • Google Apps Email Question

    - by robihot
    Google Apps: Has anyone created (and used) a group email address which will email ALL domain users (i.e. "All users within domainName.com")? I have some domain users telling me that they are NOT receiving their emails. Please and thanks!
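    One hedged way to separate "the group isn't delivering" from "the users are filtering it" is to send a test message to the group address from a script and then check each member's inbox and spam folder. This sketch assumes Python 3, SMTP access via smtp.gmail.com, and placeholder addresses/credentials for the Google Apps domain:

        # Send a single test message to the all-users group and see who receives it.
        import smtplib
        from email.message import EmailMessage

        msg = EmailMessage()
        msg["From"] = "admin@domainName.com"
        msg["To"] = "all-users@domainName.com"   # the group address (placeholder)
        msg["Subject"] = "Group delivery test"
        msg.set_content("If you can read this, the group reached your mailbox.")

        with smtplib.SMTP("smtp.gmail.com", 587) as smtp:
            smtp.starttls()
            smtp.login("admin@domainName.com", "app-password-here")
            smtp.send_message(msg)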

    Read the article

  • FreeNAS and AD authentication on Windows 2008 R2

    - by FrancisV
    Has anyone successfully used AD authentication with the latest version of FreeNAS against Windows 2008 R2 domain controllers? I wanted to use FreeNAS to host files and share them via CIFS, but I couldn't make FreeNAS authenticate with a Windows 2008 R2 domain controller. Ultimately, the new CIFS shares will be referenced in the DFS namespace that we already have running on Windows 2008 R2 servers. Any tips you can share with me?
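    Before digging into the FreeNAS AD settings themselves, it may help to confirm the box can resolve and reach the 2008 R2 DC on the ports a domain join needs (and that the clocks agree, since Kerberos rejects large skew). A rough check, assuming Python 3 on the FreeNAS box and a placeholder DC name:

        # Reachability of the usual AD ports from the FreeNAS box to the DC.
        import socket

        dc = "dc01.corp.example.com"  # placeholder domain controller name
        for port, label in [(53, "DNS"), (88, "Kerberos"), (389, "LDAP"), (445, "SMB"), (464, "kpasswd")]:
            try:
                with socket.create_connection((dc, port), timeout=3):
                    print(f"{label} ({port}): open")
            except OSError as e:
                print(f"{label} ({port}): {e}")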

    Read the article

< Previous Page | 197 198 199 200 201 202 203 204 205 206 207 208  | Next Page >