Search Results

Search found 48340 results on 1934 pages for 'microsoft test manager'.


  • Workaround for datadude deployment bug - NullReferenceException

    - by jamiet
    I have come across a bug in Visual Studio 2010 Database Projects (aka datadude, aka DPro, aka Visual Studio Database Development Tools, aka Visual Studio Team Edition for Database Professionals, aka Juneau, aka SQL Server Data Tools) that other people may encounter so, for the purposes of googling, I'm writing this blog post about it. Through my own googling I discovered that a Connect bug had already been raised about it (VS2010 Database project deploy - "SqlDeployTask" task failed unexpectedly, NullReferenceException), and coincidentally enough it was raised by my former colleague Tom Hunter (whom I have mentioned here before as the superhuman Tom Hunter), although it has not (at this time) received a reply from Microsoft. Tom provided a repro, namely that this syntactically valid function definition:

        CREATE FUNCTION [dbo].[Function1]()
        RETURNS TABLE
        AS
        RETURN (
            WITH cte AS (
                SELECT 1 AS [c1]
                FROM [$(Database3)].[dbo].[Table1]
            )
            SELECT 1 AS [c1]
            FROM cte
        )

    would produce this nasty, unhelpful error upon deployment:

        C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\TeamData\Microsoft.Data.Schema.TSqlTasks.targets(120,5): Error MSB4018: The "SqlDeployTask" task failed unexpectedly.
        System.NullReferenceException: Object reference not set to an instance of an object.
           at Microsoft.Data.Schema.Sql.SchemaModel.SqlModelComparerBase.VariableSubstitution(SqlScriptProperty propertyValue, IDictionary`2 variables, Boolean& isChanged)
           at Microsoft.Data.Schema.Sql.SchemaModel.SqlModelComparerBase.ArePropertiesEqual(IModelElement source, IModelElement target, ModelPropertyClass propertyClass, ModelComparerConfiguration configuration)
           at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareProperties(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, ModelComparisonChangeDefinition changes)
           at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithoutCompareName(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes)
           at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithSameType(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, ModelComparisonResult result, Boolean ignoreComparingName, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, Boolean compareFromRootElement, ModelComparisonChangeDefinition& changes)
           at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareChildren(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareParentElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes, Boolean isComposing)
           at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithoutCompareName(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes)
           at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithSameType(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, ModelComparisonResult result, Boolean ignoreComparingName, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, Boolean compareFromRootElement, ModelComparisonChangeDefinition& changes)
           at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareChildren(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareParentElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes, Boolean isComposing)
           at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithoutCompareName(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes)
           at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithSameType(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, ModelComparisonResult result, Boolean ignoreComparingName, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, Boolean compareFromRootElement, ModelComparisonChangeDefinition& changes)
           at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareAllElementsForOneType(ModelElementClass type, ModelComparerConfiguration configuration, ModelComparisonResult result, Boolean compareOrphanedElements)
           at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareStore(ModelStore source, ModelStore target, ModelComparerConfiguration configuration)
           at Microsoft.Data.Schema.Build.SchemaDeployment.CompareModels()
           at Microsoft.Data.Schema.Build.SchemaDeployment.PrepareBuildPlan()
           at Microsoft.Data.Schema.Build.SchemaDeployment.Execute(Boolean executeDeployment)
           at Microsoft.Data.Schema.Build.SchemaDeployment.Execute()
           at Microsoft.Data.Schema.Tasks.DBDeployTask.Execute()
           at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute()
           at Microsoft.Build.BackEnd.TaskBuilder.ExecuteInstantiatedTask(ITaskExecutionHost taskExecutionHost, TaskLoggingContext taskLoggingContext, TaskHost taskHost, ItemBucket bucket, TaskExecutionMode howToExecuteTask, Boolean& taskResult)
        Done executing task "SqlDeployTask" -- FAILED.
        Done building target "DspDeploy" in project "Lloyds.UKTax.DB.UKtax.dbproj" -- FAILED.
        Done executing task "CallTarget" -- FAILED.
        Done building target "DBDeploy" in project

    It turns out a certain set of circumstances must be met for this error to occur:
    - The object being deployed is an inline function (the bug may also exist for multistatement and scalar functions, I haven't tested that)
    - That object includes SQLCMD variable references
    - The object has already been deployed successfully

    Just to reiterate that last bullet point: the error does not occur when you deploy the function for the first time, only on subsequent deployments.

    Luckily I have a direct line to a guy on the development team, so I fired off an email on Friday evening and today (Monday) I received a reply telling me that there is a simple fix: one simply has to remove the parentheses that wrap the SQL statement. So, in the case of Tom's repro, the function definition simply has to be changed to:

        CREATE FUNCTION [dbo].[Function1]()
        RETURNS TABLE
        AS
        RETURN --(
            WITH cte AS (
                SELECT 1 AS [c1]
                FROM [$(Database3)].[dbo].[Table1]
            )
            SELECT 1 AS [c1]
            FROM cte
        --)

    I have commented out the offending parentheses rather than removing them, just to emphasize the point. Thereafter the function will deploy fine.
    I tested this out on my own project this morning and can confirm that the fix does indeed work. I have been told that the bug CAN still be reproduced in the Release Candidate (RC0) build of SQL Server Data Tools for SQL Server 2012, so I am hoping that a fix makes it in for the Release-To-Manufacturing (RTM) build. Hope this helps. @jamiet

    Read the article

  • Troubleshooting High-CPU Utilization for SQL Server

    - by Susantha Bathige
    The objective of this FAQ is to outline the basic steps in troubleshooting high CPU utilization on a server hosting a SQL Server instance. The first and most common step, if you suspect high CPU utilization (or are alerted to it), is to log in to the physical server and check Windows Task Manager: the Performance tab will show the high utilization. Next, we need to determine which process is responsible for the high CPU consumption; the Processes tab of the Task Manager will show this information. Note that to see all processes you should select Show processes from all users. In this case, SQL Server (sqlservr.exe) is consuming 99% of the CPU (a normal benchmark for max CPU utilization is about 50-60%).

    Next we examine the scheduler data. The scheduler is a component of SQLOS which evenly distributes load amongst CPUs. The query below returns the important columns for CPU troubleshooting. (Note: if your server is under severe stress and you are unable to log in via SSMS, you can use SSMS on another machine to connect to the server through the DAC, the Dedicated Administrator Connection; see http://msdn.microsoft.com/en-us/library/ms189595.aspx for details on using DAC.)

        SELECT scheduler_id
              ,cpu_id
              ,status
              ,runnable_tasks_count
              ,active_workers_count
              ,current_tasks_count
              ,load_factor
              ,yield_count
        FROM sys.dm_os_schedulers
        WHERE scheduler_id < 1048576;

    See below for the BOL definitions of the above columns:
    - scheduler_id – ID of the scheduler. All schedulers that are used to run regular queries have ID numbers less than 1048576. Schedulers with IDs greater than or equal to 1048576 are used internally by SQL Server, such as the dedicated administrator connection scheduler.
    - cpu_id – ID of the CPU with which this scheduler is associated.
    - status – Indicates the status of the scheduler.
    - runnable_tasks_count – Number of workers, with tasks assigned to them, that are waiting to be scheduled on the runnable queue.
    - active_workers_count – Number of workers that are active. An active worker is never preemptive, must have an associated task, and is either running, runnable, or suspended.
    - current_tasks_count – Number of current tasks that are associated with this scheduler.
    - load_factor – Internal value that indicates the perceived load on this scheduler.
    - yield_count – Internal value that is used to indicate progress on this scheduler.

    Now to interpret the above data. There are four schedulers, each associated with a different CPU, and all the CPUs are ready to accept user queries as they are all ONLINE. There are 294 active tasks in the output as per the current_tasks_count column; this count indicates how many activities are currently associated with the schedulers, and it is decremented as each task completes. 294 is quite a high figure and indicates that all four schedulers are extremely busy. When a task is enqueued, the load_factor value is incremented; this value is used to determine whether a new task should be put on this scheduler or another one, with SQLOS allocating new tasks to the least loaded scheduler. The very high value of this column indicates that all the schedulers have a high load. Finally, there are 268 runnable tasks, which means 268 tasks have been assigned a worker and are waiting to be scheduled on the runnable queue.

    The next step is to identify which queries are demanding a lot of CPU time. The query below is useful for this purpose (note that, in its current form, it only shows the top 10 records):

        SELECT TOP 10
               st.text
              ,st.dbid
              ,st.objectid
              ,qs.total_worker_time
              ,qs.last_worker_time
              ,qp.query_plan
        FROM sys.dm_exec_query_stats qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
        CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
        ORDER BY qs.total_worker_time DESC;

    This query uses total_worker_time as the measure of CPU load and sorts in descending order of total_worker_time to show the most expensive queries and their plans at the top. Note the BOL definitions for the important columns:
    - total_worker_time – Total amount of CPU time, in microseconds, that was consumed by executions of this plan since it was compiled.
    - last_worker_time – CPU time, in microseconds, that was consumed the last time the plan was executed.

    I re-ran the same query after a few seconds, and this time the stored procedure dbo.TestProc1 appeared in fourth place, once again with the highest last_worker_time. This means the procedure TestProc1 consumes significant CPU time every time it executes. In this case, the primary cause of the high CPU utilization was a stored procedure; you can view its execution plan by clicking on the query_plan column to investigate why it causes a high CPU load. I used SQL Server 2008 (SP1) to test all the queries in this article.
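    A useful companion query (my own addition, a sketch rather than part of the original FAQ): total_worker_time favors queries that simply run often, so dividing by the documented execution_count column of sys.dm_exec_query_stats surfaces queries that are individually expensive rather than merely frequent.

        SELECT TOP 10
               st.text
              ,qs.execution_count
              ,qs.total_worker_time
              ,qs.total_worker_time / qs.execution_count AS avg_worker_time
        FROM sys.dm_exec_query_stats qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
        ORDER BY avg_worker_time DESC;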

    Read the article

  • Network config with pppoe and Ubuntu 13.10

    - by Pavel
    I have an internet connection that uses PPPoE. On Windows I do not assign an IP address for my network, and I am able to connect using a username and password. I installed Ubuntu 13.10 today, ran pppoeconf, set up an IP address/network mask, changed the MAC address in the options, and I was able to connect to the Internet. Then I restarted the computer and got a message saying that the wired connection is not managed, and the Internet was not working. I went to the NetworkManager config file and changed the option to true, but I still can't connect to the Internet. I am pretty new to Linux. How can I get my Internet working? Thanks
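    (For context, an aside not in the original question: the "not managed" message usually means NetworkManager is ignoring interfaces that pppoeconf has written into /etc/network/interfaces. The option the asker refers to lives in /etc/NetworkManager/NetworkManager.conf:)

        # /etc/NetworkManager/NetworkManager.conf
        [ifupdown]
        managed=true    # when false, NetworkManager ignores interfaces
                        # declared in /etc/network/interfaces

        # apply the change (Ubuntu 13.10 uses Upstart):
        sudo restart network-manager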

    Read the article

  • Add a Graphical User Interface (GUI) to the Microsoft Robocopy Command Line Tool

    - by Lori Kaufman
    Robocopy, or “Robust File Copy,” is a command-line directory replication tool from Microsoft. It is available as a standard feature of Windows 7 and Vista, and was available as part of the Windows Server 2003 Resource Kit. NOTE: For Windows XP, you can obtain Robocopy by downloading the resource kit. Robocopy allows you to set up simple or advanced backup strategies. It provides features such as multi-threaded copying, mirroring or synchronization mode, automatic retry, and the ability to resume the copying process. If you are comfortable with command-line tools, you can run Robocopy directly on the command line using its command syntax and options. You can also download the command-line reference and usage notes for Robocopy as a PDF file. If you are more comfortable using a graphical user interface (GUI) rather than the command line, there are a couple of options for adding a GUI to the Robocopy command-line tool, making it easier to use. Both tools, RoboMirror and RichCopy, are discussed below, and links to download each tool are provided.
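    For reference, here is a sketch of the kind of command those GUIs generate for you (the paths are placeholders; the switches are standard Robocopy options):

        :: Mirror C:\Data to D:\Backup, resumable, 8 copy threads,
        :: retrying failed files 3 times with a 5-second wait between tries.
        robocopy C:\Data D:\Backup /MIR /Z /MT:8 /R:3 /W:5

    Note that /MT (multi-threaded copying) requires the Windows 7 / Server 2008 R2 version of Robocopy.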

    Read the article

  • Microsoft SQL Server 2012 Analysis Services – The BISM Tabular Model #ssas #tabular #bism

    - by Marco Russo (SQLBI)
    Alberto, Chris and I spent many months (many nights, holidays and also working days of the last few months) writing the book we would have liked to read when we started working with Analysis Services Tabular: a book that explains how to use Tabular, how to model data with Tabular, how Tabular works internally, and how to optimize a Tabular model. All those things you need to start on a real project in order to make a happy customer. You know, we're all consultants after all, so customer satisfaction is really important if we want to be paid for our job! Now the writing is finished, we're in the final stage of editing and reviews, and we look forward to getting our print copy. Its title is very long: Microsoft SQL Server 2012 Analysis Services – The BISM Tabular Model. But the important thing is that you can already (pre)order it. This is the list of chapters:

    01. BISM Architecture
    02. Guided Tour on Tabular
    03. Loading Data Inside Tabular
    04. DAX Basics
    05. Understanding Evaluation Contexts
    06. Querying Tabular
    07. DAX Advanced
    08. Understanding Time Intelligence in DAX
    09. Vertipaq Engine
    10. Using Tabular Hierarchies
    11. Data Modeling in Tabular
    12. Using Advanced Tabular Relationships
    13. Tabular Presentation Layer
    14. Tabular and PowerPivot for Excel
    15. Tabular Security
    16. Interfacing with Tabular
    17. Tabular Deployment
    18. Optimization and Monitoring

    And this is the book cover. Have a good read!

    Read the article

  • Keyboard locking up in Visual Studio 2010, Part 2

    - by Jim Wang
    Last week I posted about looking into the keyboard locking up issue in Visual Studio.  So far it looks like not a lot of people have replied to provide concrete repro steps, which confirms my suspicion that this is somewhat of a random issue. So at this point, I have a couple of choices.  I can either wait for somebody in the community to provide a repro of the problem that I can reliably run into, or I can do the work myself. I’m going to do both, so while I’m waiting for more possible bug reports, I’m going to write a tool that models the behavior of a typical Visual Studio user and use that to hopefully isolate the problem. I’ve chosen to go with this path since given the information in the bug reports, it seems people hit the issue with many different configurations in many different scenarios.  This means that me sitting down without any solid repro steps is likely not going to be a good use of time.  Instead, I’m going to go with a model-based testing approach where I will define a series of actions that a user in VS can do, and then proceed to run my model.  I’ll let you guys know how this works out for isolating bugs :) I’m using an internal tool for the model engine and AutoIt for the UI automation (I want something lightweight for a one-off).  One of the challenges will be getting feedback: AutoIt is great at driving, but not so great at understanding what success and failure means.
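    To give a flavor of what that AutoIt driving layer might look like, here is a minimal sketch of mine (hypothetical actions, not the internal model engine; matching on the window title is an assumption about the VS 2010 main window):

        ; AutoIt sketch: pound on the Visual Studio editor with a loop of
        ; keystrokes and check that the main window is still there.
        Opt("WinTitleMatchMode", 2)          ; match window titles by substring
        Run("devenv.exe")
        WinWaitActive("Microsoft Visual Studio")
        For $i = 1 To 100
            Send("some text{ENTER}")         ; type into the active editor
            Send("^z")                       ; Ctrl+Z, undo
            Sleep(Random(50, 500, 1))        ; random think time, integer ms
            If Not WinExists("Microsoft Visual Studio") Then Exit 1
        Next

    As the post says, the hard part is detecting "keyboard locked up" rather than just "window gone", which is exactly the feedback problem described above.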

    Read the article

  • Technical development decision for my newly established software company

    - by test test
    I have a new software company where I am planning to develop a CRM system, and I have settled on the technological approach I am going to use: I will use an open source Java-based CRM engine. I will use a third-party reporting tool named JasperReports to provide reporting capabilities for the CRM. I will develop the interface, and any customization the customer might ask for, using the ASP.NET MVC framework, since my knowledge and experience are based on ASP.NET. And I will use the CRM API to integrate my ASP.NET web application with the Java-based CRM. I have developed a simple demo which integrates these three main components (CRM engine, ASP.NET application and the reporting tool) and they worked well. But I am afraid of the following risk if I go with the above approach: I would have to hire developers with different skills and experience - developers with Java skills, able to modify the Java-based CRM and write plug-ins (when needed) to extend the CRM's capabilities, and other developers with ASP.NET skills, able to build the application itself, such as application forms, the portal from which users will start the CRM processes, searching capabilities, etc. Might the above point raise some risks when I start hiring a new team and building the CRM application, or am I on the right track at this early stage?

    Read the article

  • 'Important security update' for Firefox and flash plugin, but the update cannot be selected

    - by geoffrey
    [This question has been updated, as I now have the same problem with Firefox in addition to the Flash plugin.] The update manager (on Ubuntu 12.04, 64-bit) shows an 'important security update' for flashplugin-installer:i386, firefox, and firefox-globalmenu. The update is unticked and cannot be selected, and therefore cannot be installed (I can update other packages without problems). Actually, the flashplugin-installer package does not appear to be installed on my computer (judging from the Software Centre); I can't remember how I installed Flash, probably directly from the Adobe website. The updater asks me if I want to do a partial upgrade. When running sudo apt-get update && sudo apt-get upgrade from a terminal, I get the following:

        The following packages have been kept back:
          firefox firefox-globalmenu flashplugin-installer:i386

    Read the article

  • Error while upgrading

    - by arun
    When I select to install updates from the update manager it says:

        Failed to fetch http://us.archive.ubuntu.com/ubuntu/pool/main/l/linux/linux-image-3.0.0-13-generic_3.0.0-13.22_i386.deb Size mismatch

    I have tried several times, and through the terminal too, but the size mismatch repeats itself there as well. What is this size mismatch error? Please help! I tried sudo apt-get update and sudo apt-get upgrade, but the error comes again:

        Failed to fetch http://us.archive.ubuntu.com/ubuntu/pool/main/x/xserver-xorg-video-intel/xserver-xorg-video-intel_2.15.901-1ubuntu2.1_i386.deb Size mismatch
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
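    (A suggestion, not part of the original question: a persistent size mismatch often points to a corrupted local package cache or a stale mirror, so clearing the cache before retrying is a common first step.)

        sudo apt-get clean          # empty /var/cache/apt/archives
        sudo apt-get update         # refresh the package index
        sudo apt-get upgrade        # retry; add --fix-missing if it still fails

    If the mismatch persists, switching to a different archive mirror in Software Sources usually resolves it.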

    Read the article

  • TDD with limited resources

    - by bunglestink
    I work in a large company, but on a two-man team developing desktop LOB applications. I have been researching TDD for quite a while now, and although it is easy to see its benefits for larger applications, I am having a hard time justifying the time to begin using TDD at the scale of our applications. I understand its advantages in automating testing, improving maintainability, etc., but at our scale, writing even basic unit tests for all of our components could easily double development time. Since we are already undermanned, with extreme deadlines, I am not sure what direction to take. While other practices such as agile iterative development make perfect sense, I am somewhat torn over the productivity trade-offs of TDD on a small team. Are the advantages of TDD worth the extra development time for small teams with very tight schedules?

    Read the article

  • Microsoft Sponsored - Give Camp

    - by Ken Lovely, MCSE, MCDBA, MCTS
    Are you ready to connect with the local tech community for a good cause? GiveCamp needs your support. For one weekend in June, we'll take on the technology wish lists of 20 non-profit organizations, and we're looking for about 100 volunteers, both technical and non-technical, to help us do it. A typical GiveCamp draws 75 to 100 volunteers. Individuals can work with their colleagues in company teams, or they can opt to be matched with fellow volunteers who have complementary skill sets. Everyone is welcome to head home for the evenings, but there are always the diehards who work from Friday kickoff straight through Sunday afternoon. Food and drinks, especially of the caffeinated variety, are provided, along with game systems for breaks.

    Technical volunteers: We're looking for graphic or UX designers; developers with .NET/Java/LAMP/Open Source/CMS experience; project managers; system/network administrators; DBAs; and non-profit technical consultants and web strategists.

    Non-technical volunteers: Beyond the technology, there are many other aspects that make GiveCamp a success. We need non-technical volunteers to run errands, help with setting up and cleaning up, and everything in between. Whether you can offer a couple of hours of your time or join GiveCamp for a couple of days, your support is needed.

    Sign up at http://www.eventbrite.com/event/650615007. Feel free to contact me or Dani Diaz of Microsoft for more information.

    Read the article

  • SQLAuthority News – Download – Microsoft SQL Server Compact 4.0

    - by pinaldave
    Microsoft SQL Server Compact 4.0 is a free, embedded database that software developers can use for building ASP.NET websites and Windows desktop applications. SQL Server Compact 4.0 has a small footprint and supports private deployment of its binaries within the application folder, easy application development in Visual Studio and WebMatrix, and seamless migration of schema and data to SQL Server. You can download the (very small) SQL Server CE installer from here. Books Online is the primary documentation for SQL Server Compact 4.0 and includes the following types of information:
    - Setup and upgrade instructions.
    - Information about new features and backward compatibility.
    - Conceptual descriptions of the technologies and features in SQL Server Compact 4.0.
    - Procedural topics describing how to use the various features in SQL Server Compact 4.0.
    - Tutorials that guide you through common tasks.
    - Reference documentation for the graphical tools, programming languages, and application programming interfaces (APIs) that are supported by SQL Server Compact 4.0.
    You can download SQL Server CE Books Online here. Reference: Pinal Dave (http://blog.SQLAuthority.com)
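    To illustrate the embedded, in-process model, here is a minimal sketch (my example, not from the post; the .sdf file name and table are placeholders, using the System.Data.SqlServerCe provider that ships with Compact 4.0):

        using System;
        using System.Data.SqlServerCe;

        class CompactDemo
        {
            static void Main()
            {
                // Private deployment: the database is just an .sdf file in the
                // application folder; there is no server or service to install.
                using (var conn = new SqlCeConnection(
                    @"Data Source=|DataDirectory|\store.sdf"))
                {
                    conn.Open();
                    using (var cmd = new SqlCeCommand(
                        "SELECT COUNT(*) FROM Products", conn))
                    {
                        Console.WriteLine(cmd.ExecuteScalar());
                    }
                }
            }
        }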

    Read the article

  • Building Single Page Apps on the Microsoft Stack

    - by Stephen.Walther
    Thank you everyone who came to my talk last night on Building Single Page Apps on the Microsoft Stack. I've attached the slides and code samples below. Here's a quick summary of the talk. I argued that Single Page Apps are better than traditional server-side apps because:
    - Single Page Apps are Stateful – In a traditional server-side app, whenever you navigate to a new page, all of your previous state is lost. It is like rebooting your computer whenever you perform any action.
    - In a Single Page App, Your Presentation Layer is Not Miles Away – In a traditional server-side app, because everything happens on the server, your presentation layer is separated from the user by space and time. In a Single Page App, the presentation layer is in the browser and not the server (which is the right place for a presentation layer).
    - A Single Page App Respects the Web – It is easier to take advantage of HTML5 and related standards when building a Single Page App.
    Next, I recommended using the following four technologies when building a web application:
    - Knockout – This is how you create your presentation layer.
    - ASP.NET Web API – This is how you expose JSON data from your web server and perform server-side validation.
    - HTML5 – This is how you implement client-side validation.
    - Sammy – This is how you implement client-side routing and create a Single Page App with multiple virtual pages.
    There are code samples in the download (look in the Samples folder) which demonstrate how all of these technologies work together when building Single Page Apps. Powerpoint | Sample Code
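    As a taste of the Knockout piece (a sketch of mine, not taken from the talk's sample code):

        // The view model lives in the browser; the UI re-renders automatically
        // whenever an observable changes.
        var viewModel = {
            firstName: ko.observable("Stephen"),
            lastName:  ko.observable("Walther")
        };
        viewModel.fullName = ko.computed(function () {
            return viewModel.firstName() + " " + viewModel.lastName();
        });
        ko.applyBindings(viewModel);

        // In the page: <span data-bind="text: fullName"></span>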

    Read the article

  • Play Updated Retro Arcade Games for Free Courtesy of Microsoft

    - by Jason Fitzpatrick
    As part of a promotion for Internet Explorer 10, Microsoft commissioned Atari to update several classic retro games from the arcade and the Atari consoles. Fortunately for those of us looking for a retro gaming fix, you can play the games in any HTML5-enabled browser. While game play is really smooth in most games, there are a few little quirks that using Internet Explorer 10 takes care of. First, if you're not on IE10, you'll see a little advertisement before you begin playing each game. Second, some games call on the new touch/motion functionality built into IE10 for the Windows 8 tablet experience, and they'll stall out when they reach the point in the game where they need it. That said, we played quite a few games without any hiccups at all. Hit up the link below to play classics like Asteroids, Lunar Lander, Pong, Super Breakout, and more. Atari IE10 Promo Gallery [Atari]

    Read the article

  • OSX Server 3, Mac clients binding to OD and Profile Manager failing

    - by dbf
    I've made a setup consisting of a Mac Mini with OS X Server 3 (Mavericks 10.9.2) using Open Directory and Profile Manager (Mail, etc., all set up and working). Internally, on the local network, everything works great: clients can bind to the OD and users are able to log in. I can install trust and settings profiles (either custom or group profiles) and all services in the profiles are configured correctly. I can log in and out and do it a hundred times on different Macs with different users; it works.

    My goal is to make this service available publicly. The domain is a FQDN which I own; for simplicity let's say server.domain.com. Now the only way for me to bind the clients to the OD is using LDAP mapping RFC2307 (without SSL) and a DN suffix of dc=server,dc=domain,dc=com using the Directory Utility. The "from server" and "open directory" mapping options will throw several errors like: Connection failed to node '/LDAPv3/server.domain.com (2100)'.

    First of all, I don't really understand why clients can't bind to the OD like they do locally, with or without SSL (literally all ports are open, not just 389, 636 and 1640; I wasn't sure if I was missing any). When the clients bind using LDAP mapping RFC2307 (without SSL only), they are able to authenticate, log in and even load the Trust profile. But every Settings profile will fail, either with the debug message "Unable to find GUID in user record OD" or with an installation failure about missing user identification.

    Is there any way to get this to work without RFC2307? There is quite some stuff missing when using RFC2307 rather than pulling the mapping from the server or using Open Directory. Is this setup even possible, or should I use VPN to authenticate with the OD?

    The network setup is a Modem/Router (DHCP off) with WAN NATted to an AirPort Extreme (using DHCP+NAT). The AE does notify with a double-NAT message, but I haven't had any problems with it on any other service. So: WAN - 192.168.2.220 (static), AE - 10.0.1.* (DHCP).

    Output of dig from the outside, using dig server.domain.com:

        ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
        ;; QUESTION SECTION:
        ;server.domain.com. IN A
        ;; ANSWER SECTION:
        server.domain.com. 77 IN A 91.50.*.* (valid WAN IP)
        ;; SERVER 172.*.*.1#53(172.*.*.1) (iPhone)

    dig locally, from a client and from the server (same output):

        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 0
        ;; QUESTION SECTION:
        ;server.domain.com. IN A
        ;; ANSWER SECTION:
        server.domain.com. 10800 IN A 10.0.1.11
        ;; AUTHORITY SECTION:
        server.domain.com. 10800 IN NS domain.com. (used for email send in relay)
        server.domain.com. 10800 IN NS server.domain.com.
        ;; SERVER 10.0.1.11#53(10.0.1.11)

    Are there any things I should check? I only have OS X machines.

    -- Regarding the double-NAT issue: I plugged the server directly into the Modem/Router with a static IP and the issue remains, so I guess that rules out the double-NAT thing.
    -- changeip -checkhostname comes back with:

        There is nothing to change, e.g. success.
        Primary address = 10.0.1.11
        Current HostName = server.domain.com
        DNS HostName = server.domain.com

    For now, I've made a workaround by using an admin account that forces a permanent VPN connection on boot. That means that before login, a connection is already made or underway. I will continue this post when I have more time, also locating all the necessary .log files of each application involved. I have some suspicions but have to debug a bit more when I have more time on my hands. Unless, of course, I get sidetracked with having a life. Which is arguably not very likely. krypted.com

    Read the article

  • Failed update of Ubuntu 10.10 results in unbootable system

    - by chessweb
    Hi, yesterday I performed an automatic security update suggested by the update manager on my virtualized (VirtualBox on a Windows 7 host) Ubuntu 10.10 installation. The update somehow failed and left me with an unbootable system. When I try to boot, I am told that various folders, files, and whatnot are missing; then the system drops into a BusyBox shell and leaves me with an (initramfs) prompt. This happens with all kernels offered by GRUB, although the error messages differ quite a bit from kernel to kernel. Well, the short of it is this: I don't have the slightest idea how to get back to a working system, and this site is the final straw I'm willing to grab. A complete disaster like this, following an update initiated and executed by the system, is unheard of in Windows-land (at least I haven't heard of it yet), and therefore I am going to abandon Ubuntu and Linux altogether if there is no remedy. Regards, RSel
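    (A suggestion, not part of the original post: when an interrupted update leaves packages half-configured, the usual first aid, from a recovery console or after chrooting into the installation from live media, is to let dpkg and apt finish the job.)

        sudo dpkg --configure -a    # finish configuring half-installed packages
        sudo apt-get -f install     # repair broken dependencies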

    Read the article

  • SQLAuthority News – Microsoft Whitepaper – AlwaysOn Solution Guide: Offloading Read-Only Workloads to Secondary Replicas

    - by pinaldave
    SQL Server 2012 has many interesting features, but the most talked-about one is AlwaysOn. Performance tuning is always a hot topic; I see a lot of demand for it and a lot of business around it. However, many times when people talk about performance tuning they think of it as either query tuning or server tuning. Both are valid points, but a performance tuning expert usually understands the business workload and business logic before making suggestions. For example, if a performance tuning expert analyzes the workload and realizes that there are plenty of reports as well as read-only queries on the server, they can certainly consider alternative options. If the read-only data is not required in real time, or a slightly delayed copy is acceptable, it makes sense to divide the workload: a secondary replica of the original data can serve all the read-only queries and reports, which is a good idea in most cases where much of the workload does not depend on real-time data. The AlwaysOn feature introduced in SQL Server 2012 fits this scenario very well and provides a solution for read-only workloads. Microsoft has recently published a white paper on exactly this subject. I recommend it to every SQL enthusiast who is going to implement a solution to offload read-only workloads to secondary replicas. Download the white paper: AlwaysOn Solution Guide: Offloading Read-Only Workloads to Secondary Replicas. Reference: Pinal Dave (http://blog.sqlauthority.com)
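    As a concrete illustration of how the offloading surfaces to applications (my sketch, not from the white paper; the listener and database names are placeholders): once read-only routing is configured on the availability group listener, a client only has to declare its intent in the connection string and SQL Server 2012 routes the connection to a readable secondary.

        Server=tcp:MyAgListener,1433;Database=Sales;Integrated Security=SSPI;ApplicationIntent=ReadOnly;MultiSubnetFailover=True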

    Read the article

  • Mobile Internet Modem Olivetti Olicard 155 does not work in Ubuntu 12.10

    - by João Fracassi
    I have a problem: I bought a 3G mobile modem, but it does not work in Ubuntu 12.10 (it did not work on Ubuntu 12.04 either). I installed usb-modeswitch and I get the same failure: the modem is not recognized by the connection manager. I connect the modem to the USB port, wait up to 20 minutes, and the modem is never recognized. Running lsusb does recognize the device, as "Bus 001 Device 007: ID 0b3c:c004 Olivetti TechCenter". Does anyone have this modem and can help me solve the problem, or does anyone have a solution? The manufacturer does not support Linux, but according to some forums usb-modeswitch supports this model, the Olivetti Olicard 155 modem.

    Read the article

  • How-To Geek Gets the Microsoft MVP Award, Thanks to You

    - by The Geek
    The How-To Geek has won a Microsoft MVP award for the second year in a row, and it’s all thanks to you, our great readers that keep the site going. Join us for some mutual back-patting and some terrible photography of all the award stuff. Of course, if you’re familiar with the MVP award you’ll probably know that it’s actually for a single person, but in my opinion the award belongs to the entire How-To Geek community, without which this site would be nothing.

    Read the article

  • Join Oracle Database at Microsoft TechEd next week.

    - by Mandy Ho
    For the past nine years, Oracle has been a proud sponsor of Microsoft TechEd, Microsoft's premier technology conference for IT professionals and developers. This year, Oracle will demonstrate its latest database software for MS Windows, including Oracle Database 11g Enterprise and Express editions, TimesTen and MySQL. Developers can learn how to develop .NET applications for the Oracle Database using the latest technologies, such as Entity Framework, LINQ and WCF Data Services. Attendees can also learn about the new MySQL features enabling rapid installation, GUI-based application design, backup & recovery and much more within a Windows environment. Oracle will have a BOF (Birds of a Feather) session on Tuesday, June 12, from 3:15 to 4:30; the topic will be Big Data: The Next Frontier for Innovation, Competition and Productivity. Otherwise, you can visit Oracle every day during the expo hours, from Monday, June 11 to Thursday, June 14, at booth #613. Talk to experts on TimesTen and MySQL on Windows and .NET, and see our 3D interactive demos of Oracle's engineered systems, showing off Oracle Exadata, the Database Appliance and more. Visit http://northamerica.msteched.com/ for more information.

    Read the article

  • Why can't I upgrade my kernel via the terminal?

    - by Alvar
    If I type sudo apt-get update && sudo apt-get upgrade, I can only see that the kernel packages are kept back and not installed, as the first screenshot shows. If I then start the update manager, I can install the kernel with no problems at all, as the second screenshot shows. Why is this? The kernel is a new package, not an upgrade of an old one; this is why you can't use the upgrade command, which only upgrades existing packages. You need to use the dist-upgrade command, which will also install new packages.
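    (A sketch of the difference, added for illustration:)

        sudo apt-get update
        sudo apt-get upgrade        # never installs new packages, so a new
                                    # kernel ABI (linux-image-3.x.y-z) is "kept back"
        sudo apt-get dist-upgrade   # may install or remove packages, so the
                                    # new kernel goes on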

    Read the article

  • Ubuntu 12.10 64bit fresh install, wireless issue!

    - by Dave
    Just installed a fresh Ubuntu 12.10 64-bit on my laptop, ran the update manager, restarted, and suddenly I can't use my wifi anymore. Ubuntu Software Center automatically installed the additional wifi driver, as you can see in my screenshot. If I mark the option "Do not use the device", apply the changes and restart Ubuntu, my wifi is back and I can use it. If I run iwconfig, my terminal shows the output in the second screenshot. Now, if I use Ubuntu for more than 20 minutes surfing the web, the wifi stays connected but I don't receive any data from it: any page I try to open simply doesn't load (just the waiting icon). If I disconnect the wifi and connect it again, same issue; it doesn't work. The only way to make it work again is to restart Ubuntu, and the same story happens again after approximately 20-30 minutes. WIFI device details: 03:00.0 Network controller [0280]: Broadcom Corporation BCM4313 802.11b/g/n Wireless LAN Controller [14e4:4727] (rev 01) Thanks, Dave
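    (A suggestion, not part of the original question: on Ubuntu the BCM4313 is usually served by Broadcom's proprietary wl driver, and conflicts with the in-kernel brcmsmac driver can produce exactly this connected-but-no-traffic behaviour. A typical sequence to try:)

        sudo apt-get update
        sudo apt-get install bcmwl-kernel-source    # Broadcom STA (wl) driver
        # if the open drivers keep claiming the card, blacklist them:
        echo "blacklist brcmsmac" | sudo tee -a /etc/modprobe.d/blacklist-bcm.conf
        echo "blacklist bcma"     | sudo tee -a /etc/modprobe.d/blacklist-bcm.conf
        # then reboot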

    Read the article

  • Can I force window to open on top of other windows when opened by keyboard shortcut?

    - by Rasmus
    I use SpaceFM as my primary file manager on Ubuntu. I typically open folders directly with keyboard shortcuts, so, e.g., Ctrl+Super+W opens my Work folder. Specifically, I execute the command spacefm -w /home/rasmus/Work/ via the above shortcut, with the -w ensuring that SpaceFM opens a new window. However, this new window does not always open on top of the last active window on the workspace. This is rather annoying, as it means I sometimes have to "dig" for the newly opened window. So, my question is: is there something additional I can add to the executed command that will ensure the fresh window opens on top? Alternative solutions to the same effect are welcome.
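    (One approach, a sketch rather than a tested answer: wrap the command in a small script that raises the new window with wmctrl after launching it. The title match is an assumption about how SpaceFM titles its windows.)

        #!/bin/sh
        # Open a new SpaceFM window on the Work folder, then ask the window
        # manager to activate/raise it once it has had time to map.
        spacefm -w "$HOME/Work/" &
        sleep 0.5
        wmctrl -a "Work"    # activates the first window whose title contains "Work"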

    Read the article

  • Part 1 - Load Testing In The Cloud

    - by Tarun Arora
    Azure is fascinating, but even more fascinating is the marriage of Azure and TFS!

    Introduction

    Recently a client I worked for had two major business-critical applications being delivered, with very little time budgeted for performance testing. We immediately hit a bottleneck when the performance testing phase started: the in-house infrastructure team could not support the hardware requirements at short notice. It was suggested that the performance testing be performed on one of the QA environments, which was a fraction of the size of the production environment. This didn't seem right, so the team decided to turn to the cloud. The team took advantage of the elasticity offered by Azure: starting with a single test agent, provisioned and ready for use within 30 minutes, the team scaled up to 17 test agents to perform a very comprehensive performance testing cycle. Issues were identified and resolved, but the highlight was that the cost of running the 'test rig' proved to be less than hosting it on premise with the infrastructure team. Thank you for taking the time out to read this blog post; in this series of posts, I'll try and cover, start to end, everything you need to know to use Azure to build your test rig in the cloud.

    But why Azure? I have my own data centre...

    If the environment is provisioned in your own datacentre:
    - No matter what level of service agreement you may have with your infrastructure team, there will be downtime when the environment is patched.
    - How fast can you scale the environments up or down (keeping the enterprise processes in mind)?

    Administration, cost, flexibility and scalability are the areas to think about when deciding between your own data centre and Azure.

    How is Microsoft's public cloud offering different from Amazon's?

    Microsoft's offering of the cloud is a hybrid of Platform as a Service (PaaS) and Infrastructure as a Service (IaaS), which distinguishes Microsoft's offering from providers such as Amazon (Amazon only offers IaaS).

    PaaS (Platform as a Service) fills the needs of those who want to build and run custom applications as services. A service provider offers a pre-configured, virtualized application server environment to which applications can be deployed by the development staff. Since the service provider manages the hardware (patching, upgrades and so forth), as well as application server uptime, the involvement of IT pros is minimized. On-demand scalability combined with hardware and application server management relieves developers from infrastructure concerns and allows them to focus on building applications.

    IaaS (Infrastructure as a Service) is similar to traditional hosting, where a business will use the hosted environment as a logical extension of the on-premises datacentre. The servers (physical and virtual) are rented on an as-needed basis, and the IT professionals who manage the infrastructure have full control of the software configuration. This kind of flexibility increases the complexity of the IT environment, as customer IT professionals need to maintain the servers as though they were on-premises. The maintenance activities may include patching and upgrades of the OS and the application server, load balancing, failover clustering of database servers, backup and restoration, and any other activities that mitigate the risks of hardware and software failures.

    The biggest advantage of PaaS is that you do not have to worry about maintaining the environment; you can spend all your time solving the business problems with your solution rather than worrying about the environment. If you decide to use a VM Role on Azure, you are asking for IaaS; more on this later. There is a nice blog post here on the difference between SaaS, PaaS and IaaS.

    The Load Test Rig - Topology

    Now that we are convinced why we should be turning to the cloud, and to Azure specifically, let's discuss the test rig. Now the moment of truth: of course, a big part of getting value from cloud computing is identifying the most suitable workloads to take to the cloud, so I've decided to build a load testing rig where the agents run on Windows Azure. I'll talk you through the topology above:

    - User: The user kicks off the load test run from the developer workstation on premise. This passes the request to the Test Controller.
    - Test Controller: The Test Controller is on premise, connected to the same domain as the developer workstation. As soon as the Test Controller receives the request, it uses the Windows Azure Connect service to orchestrate the test responsibilities across all the Test Agents. The Windows Azure Connect endpoint software must be active on all Azure instances and on the Controller machine as well. This allows IP connectivity between them and, given that the firewall is properly configured, allows the Controller to send workloads to the agents. In parallel, the Controller collects the performance data from the agents, using the traditional WMI mechanisms.
    - Test Agents: The Test Agents run in the Windows Azure public cloud. As soon as the Test Controller issues instructions, the Test Agents start executing the load tests. The HTTP requests are issued against the web server on premise, the results are captured by the Test Agents, and finally the results are passed back to the Controller.
    - Servers: The web server and DB server are hosted on premise in the datacentre. This is usually the case with business-critical applications, which you probably want to manage yourself.

    Recap and what's next

    In this introduction to the series of blog posts on load testing in the cloud, I highlighted why creating a test rig in the cloud is a good idea, what advantages Windows Azure offers, and the test rig topology that I will be using. I would also mention that I stumbled upon this [Video] on Azure in a nutshell; it is a great watch if you are new to Windows Azure. In the next post I intend to start setting up the load test environment and discuss pricing with respect to the test agent machine types that will be used in the rig. Hope you enjoyed this post. If you have any recommendations on things I should consider, or any questions or feedback, feel free to add them to this blog post. Remember to subscribe to http://feeds.feedburner.com/TarunArora. See you in Part II.

    Read the article

  • Microsoft ADO.NET 4 Step by Step

    - by Sahil Malik
    Many years ago, I wrote Pro ADO.NET 2.0. I still think that, in the plethora of new data access technologies that have come out since, the core ADO.NET fundamentals are still something every developer must know, and sadly they do not. So, for some crazy reason, I still see every project make the same data-access-related mistakes over and over again. Anyway, the challenge is that on top of the core ADO.NET fundamentals there is a vast array of other new technologies you must learn, the most important of which is Entity Framework. So I was asked to be, and was pleased to be, the technical reviewer for Microsoft ADO.NET 4, Step by Step, by Tim Patrick. This book introduces the reader not just to the basic ADO.NET principles, but also to Entity Framework, LINQ to SQL, and WCF Data Services. So what, you may ask, is a SharePoint guy like me doing with such an interest in ADO.NET land? Well, that's what the other side says: what is a hardcore data access sort of guy doing in SharePoint land? :) I have authored or co-authored four books so far on data access (1, 2, 3, 4), one on pure SharePoint, and now one on SharePoint 2010 BI. These are very intertwined topics: LINQ to SQL and LINQ to SharePoint are almost copy-paste of each other, WCF Data Services are literally the same in both, and many Entity Framework concepts also apply within SharePoint. So there, I did both for "interest" reasons.
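    Since "fundamentals every developer must know" begs an illustration, here is a minimal sketch of the core ADO.NET pattern (my example, not from the book; the connection string and table are placeholders):

        using System;
        using System.Data.SqlClient;

        class Fundamentals
        {
            static void Main()
            {
                var connStr = @"Data Source=.;Initial Catalog=Shop;Integrated Security=True";
                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(
                    "SELECT Name FROM Products WHERE Price > @price", conn))
                {
                    // Parameters, not string concatenation: one of the classic
                    // data access mistakes the post alludes to is skipping this.
                    cmd.Parameters.AddWithValue("@price", 10m);
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine(reader.GetString(0));
                    }
                }
            }
        }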

    Read the article
