Search Results

Search found 14969 results on 599 pages for 'tfs 2008'.


  • The Birth of SSAS Compare

    - by Red Gate Software BI Tools Team
    Noemi Moreno, Red Gate Business Intelligence Specialist
    Software vendors – even Microsoft – tend to forget about the needs of business intelligence developers. We are a rare and rather invisible species. For example, BIDS remained in VS 2008 until SQL Server 2012. It took until this release before we got something as simple as an “undo” function. Before I joined Red Gate as a BI specialist, I worked on SQL Development. I’ll never forget the time I discovered Red Gate’s SQL Compare tool and how it reduced the task of preparing a database release from a couple of days to ten minutes. When I moved to SSAS, MDX and cubes, I became frustrated with the deployment process because I couldn’t find a tool that made Cube releases as easy as they are with SQL Compare. This became my quest. I pitched the idea to a few people in Red Gate’s regular Down Tools Week, when everyone puts down their day-to-day tasks and works on their own projects. My task was to persuade a roomful of cynical developers, hardened to the blandishments of project managers, to help develop a tool that would compare two different SSAS databases and create the script to process only the objects that needed processing, thereby reducing release time to only a few minutes. I walked to the podium and gave them the full story of the distressed BI specialists, doomed to spend tedious hours preparing deployment scripts. A few developers recovered from their torpor to cast a languid eye at my presentation. It wasn’t enough. In a sudden impulse, I blurted out a promise to perform a flamenco dance just for the team if the tool was able to successfully compare two SSAS databases and generate a script by the end of the week. I was lucky enough that some of them believed me and jumped in: David Pond (Dev), Matt Burton (Dev), Tilman Bregler (Dev), Shobana Sekar (Test), Ruchija Raj (Test), Nick Sutherland (Product Manager) and Irma Tanovic (BI). They didn’t know that Irma and I would be away at a conference in Amsterdam and would leave them without our support. But to my surprise, they had a working tool by the time we came back – basic, and with a few bugs, but a working tool nonetheless! Seeing it compare a very basic SSAS database, detect the changes and generate the scripts was amazing! Something that normally takes half a day was done in under a minute. Since then, a few months have passed and a BI Tools team has been created at Red Gate to work full time on BI tools for BI developers, starting with SSAS Compare. How cool is that? So download the free beta and give us your feedback. And the flamenco? I still need to deliver that. Tilman reminds me every day! I need to get the full flamenco costume.

    Read the article

  • Exporting from the GAC

    - by TATWORTH
    Recently I needed to export an assembly from the GAC - here are some useful resources:
    http://gacassemblyexporter.codeplex.com/SourceControl/list/changesets
    http://blogs.msdn.com/b/johnwpowell/archive/2009/01/14/how-to-copy-an-assembly-from-the-gac.aspx
    There is an alternative method at http://aspdotnetcodebook.blogspot.co.uk/2008/09/get-copy-of-dll-in-gac-or-add-reference.html that involves de-installing something that is part of the operating system - I would recommend that only as a method of last resort.
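    If the assembly's full name is known, the copy can also be scripted from a small console program. The following is only a minimal sketch of that approach (it is not from the resources above); the assembly name and output folder are placeholder assumptions:
        using System;
        using System.IO;
        using System.Reflection;

        class GacExport
        {
            static void Main()
            {
                // Full name of the GAC-installed assembly to copy out (placeholder example).
                string fullName = "System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35";

                // Loading by full name resolves the copy in the GAC; Location then
                // points at the physical DLL under C:\Windows\assembly.
                Assembly asm = Assembly.Load(fullName);

                string targetDir = @"C:\Temp";                      // assumed output folder
                Directory.CreateDirectory(targetDir);
                string target = Path.Combine(targetDir, Path.GetFileName(asm.Location));

                File.Copy(asm.Location, target, true);
                Console.WriteLine("Copied {0} to {1}", asm.Location, target);
            }
        }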

    Read the article

  • WHERE x = @x OR @x IS NULL

    - by steveh99999
    Every SQL DBA and developer should read the blog of MVP Erland Sommarskog – particularly his article on dynamic search conditions in T-SQL. I’ve linked above to his SQL 2005 article, but his 2008 version is also a must-read. I regularly come across uses of the SQL in the title above… Erland’s article explains in detail why this is inefficient, but I came across a nice example recently. A stored procedure contained the following code:
        WHERE @Name IS NULL OR [Name] LIKE @Name
    As a nonclustered index exists on the Name column, you might assume this would be handled efficiently by SQL Server. However, I got the following output from SET STATISTICS IO:
        Table 'xxxxx'. Scan count 15, logical reads 47760, physical reads 9, read-ahead reads 13872, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Note the high number of logical reads. After a bit of investigation, we found that @Name could never actually be set to NULL in this particular example, i.e. the @x IS NULL check was spurious. So we changed the call to:
        WHERE [Name] LIKE @Name
    Now, how much more efficient is this code?
        Table 'xxxxx'. Scan count 3, logical reads 24, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0
    A nice easy win in this case: a full index scan has been replaced by a significantly more efficient index seek. I managed to recreate the same behaviour on AdventureWorks – here’s a quick query to demonstrate:
        USE AdventureWorks
        SET STATISTICS IO ON
        DECLARE @id INT = 51721
        SELECT * FROM Sales.SalesOrderDetail WHERE @id IS NULL OR salesorderid = @id
        SELECT * FROM Sales.SalesOrderDetail WHERE salesorderid = @id
    Take a look at the STATISTICS IO output and compare the actual query plans used to prove the impact of WHERE @id IS NULL. And just to follow some of Erland’s advice – here’s how you could get similar performance if it were possible that @id could actually sometimes contain NULL:
        DECLARE @sql NVARCHAR(4000), @parameterlist NVARCHAR(4000)
        DECLARE @id INT = 51721 -- or change to NULL to prove the query is functionally correct
        SET @sql = 'SELECT * FROM Sales.SalesOrderDetail WHERE 1 = 1'
        IF @id IS NOT NULL
            SET @sql = @sql + ' AND salesorderid = @id'
        IF @id IS NULL
            SET @sql = @sql + ' AND salesorderid IS NULL'
        SET @parameterlist = '@id INT'
        EXEC sp_executesql @sql, @parameterlist, @id
    Sometimes I think we focus too much on hardware and SQL Server configuration – when really the answer is to focus on writing efficient SQL.
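    The same idea, appending each predicate only when its parameter is supplied while keeping the value parameterized, also works from application code. A hypothetical C# sketch, not from the article; the connection string and the nullable id are assumptions:
        using System;
        using System.Data.SqlClient;

        class DynamicSearch
        {
            static void Main()
            {
                int? id = 51721;   // set to null to exercise the other branch

                // Build the WHERE clause conditionally, but never concatenate values into it.
                string sql = "SELECT * FROM Sales.SalesOrderDetail WHERE 1 = 1";
                if (id.HasValue)
                    sql += " AND salesorderid = @id";

                using (var conn = new SqlConnection(@"Server=.;Database=AdventureWorks;Integrated Security=true"))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    if (id.HasValue)
                        cmd.Parameters.AddWithValue("@id", id.Value);

                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        int rows = 0;
                        while (reader.Read())
                            rows++;
                        Console.WriteLine("{0} rows returned", rows);
                    }
                }
            }
        }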

    Read the article

  • Powerful Lessons in Data from the Presidential Election

    - by Christina McKeon
    Now that we’ve had a few days to recover from the U.S. presidential election, it’s a good time to take a step back from politics and look for the customer experience lessons that we can take away. The most powerful lesson is that when you know more about your base, you will have an advantage over your competition. That advantage will translate into you winning and your competition losing. Michael Scherer of TIME was given access to Obama’s data analysts two days before the election. His account is documented in Inside the Secret World of the Data Crunchers Who Helped Obama Win. What we learned from Scherer’s inside view is how well Obama’s team did in getting the right data, analyzing it, and acting on it. This data team recognized how critical it was to break down data silos within the campaign. As Scherer noted, they created “a single system that merged information from pollsters, fundraisers, field workers, consumer databases, and social-media and mobile contacts with the main Democratic voter files in the swing states.” The Obama analysis was so meticulous that they knew which celebrity and which type of celebrity event would help them maximize campaign contributions. With a single system, their data models became more precise. They determined which messages were more successful with specific demographic groups and that who made the calls mattered. Data analysis also led to many other changes in Obama’s campaign including a new ad buying strategy, using social media and applications to tap into supporters’ friends, and using new social news sites. While we did not have that same inside view into Romney’s campaign, much of the post-mortem coverage indicates that Romney’s team did not have the right analysis. As Peter Hamby of CNN wrote in Analysis: Why Romney Lost, “Romney officials had modeled an electorate that looked something like a mix of 2004 and 2008….” That historical data did not account for the changing demographics in the U.S. Does your organization approach data like the Obama or Romney team? Do you really know your base? How well can you predict what is going to happen in your business? If you haven’t already put together a strategy and plan to know more, this week’s civics lesson is a powerful reason to do it sooner rather than later. Your competitors are probably thinking the same thing that you are!

    Read the article

  • TechEd 2010 Day Three: The Database Designer (Isn't)

    - by BuckWoody
    Yesterday at TechEd 2010 here in New Orleans I worked the front booth, answering general SQL Server questions for the masses. I was actually a little surprised to find most of the questions I got were from folks that wanted to know more about StreamInsight and Master Data Services. In past conferences I've been asked a lot of "free consulting" questions, about problems folks have had from older products. I don't mind that a bit - in fact, I'm always happy to help in any way I can. But this time people are really interested in the new features in the product, and I like that they are thinking ahead, not just having to solve problems in production. My presentation was on "Database Design in an Hour". We had the usual fun, and Sideshow Bob made an appearance - I kid you not. The guy in the back of the room looked just like Sideshow Bob, so I quickly held a "best hair" contest, and he won. During the presentation, I explain the tools you can use to design databases. I also explain that the "Database Designer" tool in SQL Server Management Studio (SSMS) isn't truly a designer - it uses non-standard notation, doesn't have a metadata dictionary, and worst of all, it works at the physical level. In other words, whatever you do in SSMS will automatically change the field/table/relationship structures in the database. We fixed this in SSMS 2008 and higher by adding an option to block that, but it still isn't a good design tool. To be fair, no one I know of at Microsoft recommends it as one - but I was shocked to hear so many developers in the room defending it as a good tool. I think the main issue for someone who doesn't have to work with relational systems a great deal is that it can be difficult to figure out foreign keys. The syntax makes them look "backwards", so it's just easier to grab a field and place it on the table you want to point to. There are options. You can download a couple of free tools (CA has a community edition of ERwin, Quest has one, and Embarcadero also has one) and if you design more than one or two databases a year, it may be worth buying a true design tool. For years I used Visio, but we changed it so that it doesn't forward-engineer (create the DDL) any more, so it isn't a true design tool either. So investigate those free and not-so-free tools. You'll find they help you in your job - but stay away from the Database Designer in SSMS. Or I'll send Sideshow Bob over there to straighten you out.

    Read the article

  • ClearTrace Performance on 170GB of Trace Files

    - by Bill Graziano
    I’ve always worked to make ClearTrace perform well.  That’s probably because I spend so much time watching it work.  I’m often going through two or three gigabytes of trace files but I rarely get the chance to run it on a really large set of files. One of my clients wanted to run a full trace for a week and then analyze the results.  At the end of that week we had 847 200MB trace files for a total of nearly 170GB. I regularly use 200MB trace files when I monitor production systems.  I usually get around 300,000 statements in a file that size if it’s mostly stored procedures.  So those 847 trace files contained roughly 250 million statements.  (That’s 730 bytes per statement if you’re keeping track.  Newer trace files have some compression in them but I’m not exactly sure what they’re doing.)  On a system running 1,000 statements per second I get a new file every five minutes or so. It took 27 hours to process these files on an older development box.  That works out to 1.77MB/second.  That means ClearTrace processed about 2,654 statements per second. You can query the data while you’re loading it but I’ve found it works better to use a second instance of ClearTrace to do this.  I’m not sure why yet but I think there’s still some dependency between the two processes. ClearTrace is almost always CPU bound.  It’s really just a huge, ugly collection of regular expressions.  It only writes a summary to its database at the end of each trace file so that usually isn’t a bottleneck.  At the end of this process, the executable was using roughly 435MB of RAM.  Certainly more than when it started but I think that’s acceptable. The database where all this is stored started out at 100MB.  After processing 170GB of trace files the database had grown to 203MB.  The space savings are due to the “datawarehouse-ish” design and only storing a summary of each trace file. You can download ClearTrace for SQL Server 2008 or test out the beta version for SQL Server 2012.  Happy Tuning!
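    ClearTrace's parsing is described above as a huge collection of regular expressions. Purely as an illustration of what statement normalization tends to look like (this is not ClearTrace's actual code), literals can be collapsed so that differing parameter values group under one query signature:
        using System;
        using System.Text.RegularExpressions;

        class NormalizeSql
        {
            // Collapse string and numeric literals so differing parameter values
            // roll up to the same "query signature".
            static readonly Regex StringLiteral = new Regex(@"'([^']|'')*'", RegexOptions.Compiled);
            static readonly Regex NumericLiteral = new Regex(@"\b\d+(\.\d+)?\b", RegexOptions.Compiled);

            static string Normalize(string sql)
            {
                string s = StringLiteral.Replace(sql, "{STR}");
                s = NumericLiteral.Replace(s, "{NUM}");
                return Regex.Replace(s, @"\s+", " ").Trim().ToUpperInvariant();
            }

            static void Main()
            {
                Console.WriteLine(Normalize("SELECT * FROM Sales.SalesOrderDetail WHERE SalesOrderID = 51721"));
                Console.WriteLine(Normalize("select * from Sales.SalesOrderDetail where SalesOrderID = 43659"));
                // Both lines print the same normalized signature.
            }
        }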

    Read the article

  • C# OpenGL problem with animation

    - by user3696303
    I have a program that simulates a small satellite, and it needs to animate the satellite's rotation about the three axes. But as soon as I try to add the animation, the program simply closes at run time (the shutdown happens at swapbuffers, mainloop or redisplay); even the simplest test programs show the same problem. I tried to catch the failure with try-catch, but no exception is thrown. How can I solve this? I have been struggling with it for a few days. I am working in C# in Visual Studio 2008:
        namespace WindowsFormsApplication6
        {
            public partial class Form1 : Form
            {
                public Form1()
                {
                    try
                    {
                        InitializeComponent();
                        AnT1.InitializeContexts();
                    }
                    catch (Exception)
                    {
                        Glut.glutDisplayFunc(Draw);
                        Glut.glutTimerFunc(50, Timer, 0);
                        Glut.glutMainLoop();
                    }
                }

                void Timer(int Unused)
                {
                    Glut.glutPostRedisplay();
                    Glut.glutTimerFunc(50, Timer, 0);
                }

                private void AnT1_Load(object sender, EventArgs e)
                {
                    Glut.glutInit();
                    Glut.glutInitDisplayMode(Glut.GLUT_RGB | Glut.GLUT_DOUBLE | Glut.GLUT_DEPTH);
                    Gl.glClearColor(255, 255, 255, 1);
                    Gl.glViewport(0, 0, AnT1.Width, AnT1.Height);
                    Gl.glMatrixMode(Gl.GL_PROJECTION);
                    Gl.glLoadIdentity();
                    Glu.gluPerspective(45, (float)AnT1.Width / (float)AnT1.Height, 0.1, 200);
                    Gl.glMatrixMode(Gl.GL_MODELVIEW);
                    Gl.glLoadIdentity();
                    Gl.glEnable(Gl.GL_DEPTH_TEST);
                    Gl.glClear(Gl.GL_COLOR_BUFFER_BIT | Gl.GL_DEPTH_BUFFER_BIT);
                    Gl.glPushMatrix();
                    double xy = 0.2;
                    Gl.glTranslated(xy, 0, 0);
                    xy += 0.2;
                    Draw();
                    Glut.glutSwapBuffers();
                    Glut.glutPostRedisplay();
                    Gl.glPushMatrix();
                    Draw();
                    Gl.glPopMatrix();
                }

                void Draw()
                {
                    Gl.glLoadIdentity();
                    Gl.glColor3f(0.502f, 0.502f, 0.502f);
                    Gl.glTranslated(-1, 0, -6);
                    Gl.glRotated(95, 1, 0, 0);
                    Glut.glutSolidCylinder(0.7, 2, 60, 60);
                    Gl.glLoadIdentity();
                    Gl.glColor3f(0, 0, 0);
                    Gl.glTranslated(-1, 0, -6);
                    Gl.glRotated(95, 1, 0, 0);
                    Glut.glutWireCylinder(0.7, 2, 20, 20);
                }
            }
        }

    Read the article

  • Accessing controls of .aspx file in .aspx.cs without any declaration.!!??

    I am able to access the controls of the ".aspx" file in ".aspx.cs" directly, without any declaration in ".aspx.cs" or in the designer.cs. How is this possible? This happens only if I open the website using the File System option.
    Create a new ASP.NET web site in Visual Studio 2008. The following three files are created automatically: "Default.aspx", "Default.aspx.cs" and "Default.designer.cs". Now delete "Default.designer.cs" permanently. Just create a button in the Default.aspx file:
        <asp:Button runat="server" Text="Save Plan" ID="btnSave" />
    Close the solution and open the website as a File System website: File -> Open Web Site -> File System -> select the web site folder and open the project. Now btnSave is automatically recognized in Default.aspx.cs without any declaration in Default.aspx.cs, as below:
        System.Web.UI.WebControls.Button btnSave;
    How is btnSave being recognized by the .cs file without defining it anywhere as an object of System.Web.UI.WebControls.Button? Note: this happens only if you open the web site from the File System, and there is no declaration at all for btnSave.

    Read the article

  • Event-Driven Debugging

    - by Brian Donahue
    Most application troubleshooting involves getting an error, analyzing the error message, and at worst, attaching a debugger to work out the real cause. What is not really covered is how to troubleshoot an application that is not errant, but is having a performance issue, and more than likely, in the middle of the night when you are snug in your bed, sawing logs. What you need is an ever-vigilant cyborg who never sleeps to sit in front of your server all night, but as SkyNet is not live yet, you can settle for the next-best thing. Windows provides performance counters and alerts that can tell you when an application reaches an unacceptable threshold of naughty behavior, but although it can tattle on your brainchild, it won't be the child psychiatrist that you need to tell you why he's pulling your server's pigtails and pulling faces at the teacher. What you need is to plug a debugger into performance monitor and have it tell you what's going on with your application at the time. For this purpose, I used Microsoft's MDbgEngine as the basis for an application that will dump a program's stacks; I call it Application Slicer Dicer Wonder Dumper Super Cyborg, or StackOMatic for short. StackOMatic can look at a program's behavior and tell you if the stacks are not moving, but it can also work on the command line to dump all managed methods on the stack at will. Now that there is a command you can use to dump the stacks, all you need to do is politely tell Windows to run it when you're displeased with your creation as it's trashing the CPU of your server at 3 AM. The first step is to create a scheduled task to tell StackOMatic to dump your application. Start Task Scheduler and right-click Task Scheduler Library and then Create Task. For this exercise I'm creating a task that will dump the Red Gate SQL Monitor Base Monitor Service. In the Actions tab, I enter the path to StackOMatic and use the arguments to log the stack dump to a file: /PN:RedGate.Response.Engine.Alerting.Base.Service /OUT:c:\users\administrator\MonitorLog.txt Next, I go into Windows Server 2008's Reliability and Performance Monitor and add a new Data Collector Set. This set will produce an alert on the %Processor Time for the service. When the processor time breaches 50%, it will run the StackDumpBaseService task I created. Whenever the service misbehaves, it will append to the log file. Now when I go to work in the morning, I can see what the service was doing when it overloaded the processor and take action.
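    The same trigger can be approximated in code with a simple polling watchdog, for anyone who prefers that to a Data Collector Set. This is a minimal sketch and not part of StackOMatic; the process/instance name, the 50% threshold and the command line are assumptions taken from the walkthrough above:
        using System;
        using System.Diagnostics;
        using System.Threading;

        class CpuWatchdog
        {
            static void Main()
            {
                // Perfmon "Process" instance name for the monitored service (assumed).
                string instance = "RedGate.Response.Engine.Alerting.Base.Service";
                var cpu = new PerformanceCounter("Process", "% Processor Time", instance);
                int logicalCpus = Environment.ProcessorCount;

                cpu.NextValue(); // the first sample is always 0; prime the counter

                while (true)
                {
                    Thread.Sleep(TimeSpan.FromSeconds(15));
                    float percent = cpu.NextValue() / logicalCpus; // normalize to 0-100 across all cores

                    if (percent > 50)
                    {
                        // Fire the same command the scheduled task would run.
                        Process.Start("StackOMatic.exe",
                            "/PN:RedGate.Response.Engine.Alerting.Base.Service /OUT:C:\\Users\\administrator\\MonitorLog.txt");
                    }
                }
            }
        }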

    Read the article

  • Visual Studio 2010/2012 Context Menus and a Keyboard

    - by SergeyPopov
    As a software developer, I spend a lot of time using Visual Studio. I have to say that I am completely satisfied with Visual Studio in general. Nevertheless, sometimes Visual Studio starts annoying me. One issue which poisoned my existence for a long time is that context menu behavior in VS2010 is a little different than it was in VS2005/2008. Unfortunately, in VS2012 this behavior remains the same as in VS2010. So, what is the issue? Working with Visual Studio, I use the keyboard in most cases. I also use the Apps key on the keyboard to open context menus in the code editor. Moreover, I got used to certain key sequences a long time ago, and I press the keys without even thinking. In VS2008, the mouse pointer position didn’t affect context menu navigation if I used the keyboard. Every time I opened a context menu I was sure that, for example, the "Apps, Down, Down, Enter, Up, Enter" key sequence would always invoke the "Organize Usings > Remove and Sort" function. But in VS2010, this behavior has been changed. If the mouse pointer is located over an opened context menu, the menu item under the mouse pointer becomes selected immediately! So, now the "Apps, Down, Down, Enter, Up, Enter" key sequence will not lead to the expected result all the time. In some cases, the result may be a little scary. If you are using the VisualSVN extension, this key sequence may invoke the "Revert whole file" function. Of course, this is not a fatal problem because the "Undo" function restores all the changes, but this behavior strongly annoys me. In Visual Studio 2012, context menu behavior is a little different than in VS2010, but the mouse pointer position still affects keyboard navigation in the context menu, and this behavior is still annoying. I tried to find a way to change this behavior, but I didn’t manage to find an answer quickly. Then I decided to go straight through the problem, so I wrote a small utility which fixes the issue. This utility watches for the Apps key, and if the key is pressed in Visual Studio, the utility moves the mouse pointer to the top of the screen before opening the context menu. You can find binaries and the source code of this utility here: http://code.google.com/p/vs-ctx-menu-fix/downloads/list This utility works fine in Windows 7 and Windows 8 x64. I wrote the first version in January, 2011; now I just added Visual Studio 2012 support. I hope you will find this utility useful! :)
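    The behaviour described above (watch for the Apps key and park the mouse pointer before the context menu opens) maps naturally onto a low-level keyboard hook. The utility's real source is at the link above; the sketch below is only a rough illustration of that idea, and the devenv process check and pointer position are assumptions:
        using System;
        using System.Diagnostics;
        using System.Runtime.InteropServices;
        using System.Windows.Forms;

        static class AppsKeyFix
        {
            const int WH_KEYBOARD_LL = 13;
            const int WM_KEYDOWN = 0x0100;
            const int VK_APPS = 0x5D;

            delegate IntPtr LowLevelKeyboardProc(int nCode, IntPtr wParam, IntPtr lParam);
            static LowLevelKeyboardProc _proc = HookCallback; // keep a reference so the delegate is not collected
            static IntPtr _hookId = IntPtr.Zero;

            [DllImport("user32.dll")] static extern IntPtr SetWindowsHookEx(int idHook, LowLevelKeyboardProc lpfn, IntPtr hMod, uint dwThreadId);
            [DllImport("user32.dll")] static extern IntPtr CallNextHookEx(IntPtr hhk, int nCode, IntPtr wParam, IntPtr lParam);
            [DllImport("user32.dll")] static extern IntPtr GetForegroundWindow();
            [DllImport("user32.dll")] static extern uint GetWindowThreadProcessId(IntPtr hWnd, out uint processId);
            [DllImport("user32.dll")] static extern bool SetCursorPos(int x, int y);
            [DllImport("kernel32.dll")] static extern IntPtr GetModuleHandle(string lpModuleName);

            static void Main()
            {
                using (Process cur = Process.GetCurrentProcess())
                using (ProcessModule mod = cur.MainModule)
                    _hookId = SetWindowsHookEx(WH_KEYBOARD_LL, _proc, GetModuleHandle(mod.ModuleName), 0);

                Application.Run(); // a message loop keeps the hook alive
            }

            static IntPtr HookCallback(int nCode, IntPtr wParam, IntPtr lParam)
            {
                if (nCode >= 0 && wParam == (IntPtr)WM_KEYDOWN && Marshal.ReadInt32(lParam) == VK_APPS)
                {
                    uint pid;
                    GetWindowThreadProcessId(GetForegroundWindow(), out pid);
                    using (Process p = Process.GetProcessById((int)pid))
                    {
                        // Only react when Visual Studio (devenv) owns the foreground window.
                        if (p.ProcessName.Equals("devenv", StringComparison.OrdinalIgnoreCase))
                            SetCursorPos(0, 0); // park the pointer so it cannot pre-select a menu item
                    }
                }
                return CallNextHookEx(_hookId, nCode, wParam, lParam);
            }
        }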

    Read the article

  • Windows 7 pro remote desktop true multi monitor support patch or hack

    - by Ryan D
    After hours of research, the closest I came to any patch that could fix this was a concurrent-sessions patch, which is not what I want. I have two machines, both Windows 7 Professional. It's not an option to upgrade either to Ultimate, as the computer being remoted into is in another state at our corporate office. I need dual-monitor support and would have used the "use multimon:i:1" RDP file edit, however the other tower is not Windows 7 Ultimate. I have read over and over how it is not supported except with Ultimate, Enterprise and Server 2008. What have they put into Windows 7 Ultimate that is not in Windows 7 Pro, and how can I get it or patch Windows 7 Pro to give it the same functionality? I would have paid for a software patch had I been able to find one anywhere. Summary: I am looking for the ability to remote to a Windows 7 Pro computer from another Windows 7 computer while being able to use dual monitors. Is there anyone that has the skill or balls to help with this? Most respectfully, Ryan

    Read the article

  • Teamcity build agent gives 504 gateway timeout

    - by Anthony
    I have a new TeamCity build agent machine which, when started up, tries to connect to the build server and fails. It never shows up in the connected, disconnected or unauthorised agents tabs of the build server web interface. The logs on the build agent show that it fails to connect with a 504 gateway timeout. This is from teamcity-agent.log:
    [2012-09-04 15:34:59,776] INFO - buildServer.AGENT.registration - Registering on server http://10.0.10.16, AgentDetails{Name='my-local', AgentId=null, BuildId=null, AgentOwnAddress='10.0.1.14', AlternativeAddresses=[10.0.10.32], Port=8080, Version='21424', PluginsVersion='21424-md5-somechecksum', AvailableRunners=[ABunchOfPlugins], AvailableVcs=[SomeRunners], AuthorizationToken='sometoken'}
    [2012-09-04 15:35:53,606] WARN - buildServer.AGENT.registration - Call http://10.0.10.16/RPC2 buildServer.registerAgent3: org.apache.xmlrpc.XmlRpcClientException: Server returned incorrect status code: 504 Gateway Time-out
    [2012-09-04 15:35:53,606] WARN - buildServer.AGENT.registration - Connection to TeamCity server is probably lost. Will be trying to restore it. Take a look at logs/teamcity-agent.log for details (unless you're using custom logging).
    (I have edited some identifying data out of this log excerpt.)
    But I can reach the build server. In fact, tracert shows that it is very nearby:
    Tracing route to TEAMCITYSERVER [10.0.10.16] over a maximum of 30 hops:
      1    <1 ms    <1 ms    <1 ms  10.0.2.1
      2    <1 ms    <1 ms    <1 ms  TEAMCITYSERVER [10.0.10.16]
    Trace complete.
    I can see a TeamCity login page if I hit http://10.0.10.16 in the browser. The TeamCity service is logging in as the same (local administrator) account as I used to log in and test the network. The build agent is a Windows 2008 server VM hosted on Ubuntu 12.04 under Oracle VirtualBox. I have disabled firewalls on both the Windows and Ubuntu machines. Other VMs with similar configuration can connect fine and do not report this error. What can possibly be preventing this connection?

    Read the article

  • Microsoft Standalone CA - Set expiration date of an individual request

    - by Hall72215
    I have set up a Microsoft Standalone CA on 2008 R2 as a root CA. I'm trying to set up a subordinate Enterprise CA. I generated the certificate request and submitted it to the root CA. Then, I ran the following command to set the expiration date to 20 years (the request ID is 5):
    certutil -setattributes 5 "ValidityPeriod:Years\nValidityPeriodUnits:20"
    Then, I approved the request, but it failed. The Request Status Code is: The specified time is invalid. 0x8007076d (WIN32: 1901). The Request Disposition Message is: Denied by Policy Module 0x8007076d, The requested validity period is invalid. Confirm that the validity period or expiration date and time specified in the request does not extend beyond the validity period of the CA certificate, the certificate template, and the CA. The validity period of the CA can be verified by running the following commands: certutil -getreg ca\validityPeriod & certutil -getreg ca\ValidityPeriodUnits
    The validity period of the CA certificate is 40 years (expires in 2052). The template condition doesn't apply since this is a standalone CA. The result of those commands is Years and 1, respectively. It appears that I will need to change the CA's validityPeriod and validityPeriodUnits. But I want to keep the default expiration for a request at 1 year. Is there a way to set a maximum and a default expiration, or am I going to have to change it, issue the certificate, and then change it back?

    Read the article

  • Cannot reactivate RAID-5 volume: The size of the plex member is invalid

    - by Ian Boyd
    We had a 3-drive Windows Server 2008 R2 RAID-5 fail (operating in redundancy mode): WDC 1 TB, WDC 1 TB, WDC 1 TB. We removed the failed hard drive and put a WDC 1 TB drive (that we had standing by) into the machine. When launched, Disk Manager asked permission to "initialize" the disk as either Master Boot Record (MBR) or GUID Partition Table (GPT). We initialized the disk as GPT, converted it to dynamic, and tried to use the Repair Volume command - except it was greyed out (which is a terrifying thing on a failed production server hosting 3 virtual servers). I tried from the diskpart command line tool. First we look for our RAID-5 volume that is in Failed Rd mode:
    DISKPART> list volume
      Volume ###  Ltr  Label        Fs     Type        Size     Status     Info
      ----------  ---  -----------  -----  ----------  -------  ---------  --------
      Volume 0     E   VMs (Raid5)  NTFS   RAID-5      1863 GB  Failed Rd
      Volume 1     D                       DVD-ROM         0 B  No Media
      Volume 2         System Rese  NTFS   Partition    100 MB  Healthy    System
      Volume 3     C                NTFS   Partition   1862 GB  Healthy    Boot
    There, Volume 0. Make that our active context:
    DISKPART> select volume 0
    Volume 0 is the selected volume.
    Now we need to find the disk we will be repairing the volume with:
    DISKPART> list disk
      Disk ###  Status         Size     Free     Dyn  Gpt
      --------  -------------  -------  -------  ---  ---
      Disk 0    Online          931 GB      0 B   *
      Disk 1    Online          931 GB   931 GB   *
      Disk 2    Online         1863 GB      0 B
      Disk 3    Online          931 GB      0 B   *
      Disk M0   Missing           0 B      0 B   *
    The disk with 931 GB free, Disk 1. Now we just need to repair the volume:
    DISKPART> repair disk=1
    Virtual Disk Service error:
    The size of the plex member is invalid.

    Read the article

  • MS NPS denying access, can't validate server certificate

    - by Fred Weston
    At my office we use a Cisco WLC2504 wireless controller, and starting about a week ago we started having problems with users connecting to one of our secure wireless networks. We are running AD on Windows Server 2008 R2 and use Network Policy Server to control access to our wireless network. When I look at the logs in Event Viewer after a failed connection attempt, I see an access reject message: Reason Code: 262, Reason: The supplied message is incomplete. The signature was not verified. Looking this up on Google I found this article: http://support.microsoft.com/kb/838502. I tried disabling server certificate validation on my computer, and as soon as I did that I was able to connect to the network, so it seems that there is some sort of certificate validation issue. I'm not sure which certificate is unable to be validated or how to fix it. This used to work and stopped suddenly by itself, so I am thinking a certificate may have expired. When I go to NPS > Policies > Network Policies > (my policy) > Constraints > Authentication Methods > Microsoft PEAP and view the properties, the certificate specified there expires in 2016, so it doesn't seem as though this could be the problem. Any suggestions on how to troubleshoot this issue?

    Read the article

  • Mounting an Azure blob container in a Linux VM Role

    - by djechelon
    I previously asked a question about this topic, but now I prefer to rewrite it from scratch because I was very confused back then. I currently have a Linux XS VM Role in Azure. I basically want to build a self-managed, evolving hosting service using VMs rather than Azure's more expensive Web Roles. I also want to take advantage of load balancing (between VM Roles) and geo-replication (of Storage Roles), making sure that the "web files" of customers are located in a defined and manageable place. One way I found to "mount" a drive in a Linux VM is described here and involves mounting a VHD onto the virtual machine. From what I could learn, the VHD is reliably stored in a Storage Role and is exclusively locked by the VM that uses it. Once the VM Role has its drive I can format the partition to any size I want. I don't want that!! I would like each hosted site to have its own blob directory, and then have each replicated/load-balanced VM Role mount that blob directory read-write, NFS-style, to read HTML and script files. The database is obviously courtesy of Microsoft :) My question is: is it possible to actually mount blob storage into a directory in the Linux FS? Is it possible in Windows Server 2008?

    Read the article

  • procdump on w3wp.exe: Only part of a ReadProcessMemory or WriteProcessMemory request was completed

    - by JakeS
    I'm having a problem with an IIS application that occasionally spikes up in CPU usage, and am trying to use procdump to get a memory dump for examination. I'm running "procdump.exe -64 -mA 9999" where 9999 is the pid of the process. But every time I do it, I get an error: Only part of a ReadProcessMemory or WriteProcessMemory request was completed. Doing this also recycles the apppool, relieving the CPU spike, so I can't keep trying until I get it right. Does anyone know what is going wrong? EDIT WITH MORE INFO: So far I've failed to generate a debug dump no matter what tool I try. All of them seem to generate the same sort of error. This is 2008 R2 Datacenter running IIS7 with a 64-bit asp.net web site. My best guess is that something is getting blocked, causing some requests to remain open in IIS and gradually using up resources. If I monitor the worker process using the IIS Manager and view all requests, throughout the day I'll start to see some requests that "stick" and run forever. Some of these are for static files. Some are for aspx pages. I cannot see any "common" reason for them. Every once in a while the app pool starts taking up 100% CPU and the only remedy is to kill it.
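    For what it's worth, a full memory dump can also be captured directly via dbghelp's MiniDumpWriteDump, sidestepping procdump entirely. This is only a rough sketch of that alternative, not a fix for the error above; the PID argument and output path are placeholders, and it must be built as a 64-bit process to dump a 64-bit w3wp:
        using System;
        using System.Diagnostics;
        using System.IO;
        using System.Runtime.InteropServices;

        class MiniDump
        {
            // 0x2 = MiniDumpWithFullMemory (similar in spirit to procdump -mA)
            const int MiniDumpWithFullMemory = 0x2;

            [DllImport("dbghelp.dll", SetLastError = true)]
            static extern bool MiniDumpWriteDump(IntPtr hProcess, uint processId, IntPtr hFile,
                int dumpType, IntPtr exceptionParam, IntPtr userStreamParam, IntPtr callbackParam);

            static void Main(string[] args)
            {
                int pid = int.Parse(args[0]);                    // e.g. the w3wp.exe PID
                string path = @"C:\dumps\w3wp_" + pid + ".dmp";  // assumed output location
                Directory.CreateDirectory(Path.GetDirectoryName(path));

                Process target = Process.GetProcessById(pid);
                using (FileStream fs = new FileStream(path, FileMode.Create, FileAccess.ReadWrite))
                {
                    bool ok = MiniDumpWriteDump(target.Handle, (uint)pid,
                        fs.SafeFileHandle.DangerousGetHandle(),
                        MiniDumpWithFullMemory, IntPtr.Zero, IntPtr.Zero, IntPtr.Zero);

                    Console.WriteLine(ok
                        ? "Dump written to " + path
                        : "MiniDumpWriteDump failed, Win32 error " + Marshal.GetLastWin32Error());
                }
            }
        }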

    Read the article

  • Cannot bind OSX to AD

    - by erotsppa
    I'm trying to get a Mac mini running Snow Leopard Server to join a Windows domain here. The Windows domain server is running Windows Server 2008. When I go to "Accounts" in System Preferences and click on "Join", I get this error: "Unable to add server. Node name wasn't found. (2000)" In my console messages I find this:
    10-04-06 11:42:25 AM System Preferences1452 -[ODCAddServerSheetController handleOtherActionError: gotError: Error Domain=com.apple.OpenDirectory Code=2000 UserInfo=0x2004f2f80 "Custom call 82 to Active Directory failed.", Node name wasn't found.
    I specified an FQDN for the domain server, so I am totally confused as to why it would list "domain = com.apple...." in that error. I've tried firing up Directory Utility and trying to join a domain via the Active Directory option there. Again I fill in the FQDN and the proper administrator/password account info. Now I get a different error: "Invalid Domain. An invalid Domain and Forest combination was specified. You should enter a fully qualified DNS name for the domain and forest (e.g., ads.company.com)." If anyone has any pointers or suggestions this would be appreciated.

    Read the article

  • WinXP: Error 1167 -- Device (LPT1) not connected

    - by Thomas Matthews
    I am writing a program that opens LPT1 and writes a value to it. The WriteFile function is returning an error code of 1167, "The device is not connected". The Device Manager shows that LPT1 is present. I have a cable connected between a development board and the PC. The cable converts JTAG pin signals to signals on the parallel port. Power is applied and the cable is connected between the development board and the PC. The development board is powered on. I am using: Windows XP, MS Visual Studio 2008, C language, console application, debug environment. Here are the relevant code fragments:
        HANDLE parallel_port_handle;

        void initializePort(void)
        {
            TCHAR * port_name = TEXT("LPT1:");
            parallel_port_handle = CreateFile(
                port_name,
                GENERIC_READ | GENERIC_WRITE,
                0,              // must be opened with exclusive-access
                NULL,           // default security attributes
                OPEN_EXISTING,  // must use OPEN_EXISTING
                0,              // not overlapped I/O
                NULL            // hTemplate must be NULL for comm devices
            );
            if (parallel_port_handle == INVALID_HANDLE_VALUE)
            {
                // Handle the error.
                printf("CreateFile failed with error %d.\n", GetLastError());
                Pause();
                exit(1);
            }
            return;
        }

        void writePort(unsigned char a_ucPins, unsigned char a_ucValue)
        {
            DWORD dwResult;
            if (a_ucValue)
            {
                g_siIspPins = (unsigned char)(a_ucPins | g_siIspPins);
            }
            else
            {
                g_siIspPins = (unsigned char)(~a_ucPins & g_siIspPins);
            }
            /* This is a sample code for Windows/DOS without Windows Driver. */
            // _outp( g_usOutPort, g_siIspPins );
            //----------------------------------------------------------------------
            // For Windows XP and later
            //----------------------------------------------------------------------
            if (!WriteFile(parallel_port_handle, &g_siIspPins, 1, &dwResult, NULL))
            {
                printf("Could not write to LPT1 (error %d)\n", GetLastError());
                Pause();
                return;
            }
        }
    If you believe this should be posted on Stack Overflow, please migrate it over (thanks).

    Read the article

  • untrusted (self-sign) certificate on android browser

    - by Basiclife
    Hi all, Apologies for the brevity of this question but due to an unfortunate series of events, I've managed to brick my PC so am posting from my phone... We've just set up Windows Small Business Server 2008 at work which has an external web portal accessible via HTTPS. We haven't yet bought/installed any certificates. The portal provides access to email, SharePoint, remote desktop, etc.... (I'm aware some of these are never going to work on the phone.) From Firefox and other desktop browsers, this displays an "untrusted cert" warning which I can choose to ignore. When browsing from my mobile I get a popup notification which says "A secure connection could not be established"; when I OK this (my only option) I see the standard Android-generated "unable to load page - has it moved?" page. Does anyone know of a way to either accept the certificate temporarily or allow untrusted certificates generally? I'm aware that the latter option is non-ideal in the mid to long term but at the moment, I need to access the portal and am willing to either toggle settings as/when required or forego using the mobile for banking, etc... to mitigate my risk. Thanks in advance for any help you can provide and apologies again for brevity. In case it helps I'm on the G1 running Android 1.6 using the default browser.

    Read the article

  • The file STDOLE2.TLB cannot be found or contains a Visual Basic for Applications library that is not

    - by Jim Birchall
    In Microsoft Project 2007 Professional, if I select Tools > Macro > Security, I get the message: The file "STDOLE.TLB" cannot be found or contains a Visual Basic for Applications library that is not valid. Verify that the file name is correct, and try again. If the Visual Basic for Applications library is invalid, reinstall Project. I am a software developer and I have developed an Add-In for MS Project using Visual Studio 2005, with all the problems that entails. As such, my machine is configured with both Project 2003 Professional and Project 2007 Professional. I only noticed this error when trying to debug my Add-In. The Add-In loads and draws the menu, but when I click on the menu option I receive this error (which is also generated by the Tools > Macro > Security option mentioned above). I have tried repairing Office installations and uninstalling everything and then re-installing everything all over again, but after several hours I still get the same problem. Does anyone have any idea how to resolve this? Some method of finding out which type libraries are registered with STDOLE2.TLB may help if I can identify what is causing the problem. Also a way of manually unregistering the nasty library may be helpful. My machine is configured as follows:
    Windows 7 Ultimate x64
    Project 2003 Professional
    Office 2007 Ultimate
    Project 2007 Professional
    Visio 2007 Professional
    Visual Studio 2005 Team Edition for Developers
    Visual Studio 2005 Tools for Office Second Edition
    Visual Studio 2008 Professional
    I also installed the MS Project 2010 Beta to test my Add-In against, but have since uninstalled it. I have a suspicion that this may have caused the problem, but I cannot be sure.

    Read the article

  • IIS 7.5 How to disable "Verify File Exists" for siteminder handler

    - by HariM
    We are trying to use ASP.NET MVC with SiteMinder for single sign-on. This is on Windows Server 2008 R2 with IIS 7.5, SiteMinder Agent version 6QMR6. Problem: SiteMinder protects physical files that exist, but it does not protect the folder when we try to access a file that does not exist. It should redirect to the login page when a user accesses a protected folder, even if the requested file doesn't exist. How do I configure IIS 7.5 so that it does not verify that a file exists before SiteMinder authentication? SiteMinderWebAgent is a handler (wildcard script map) we created using ISAPI6WebAgent.dll. How do I protect an ASP.NET MVC request with SiteMinder? (I added this because my previous question did not solve the problem.) The MVC request shows up in the IIS log but not in the SiteMinder log. Update: Microsoft Support says that IIS 7.5, like earlier versions, doesn't support wildcard mappings on two ISAPI handlers that both use the * wildcard. Currently, in my case, SiteMinder has a * wildcard and ASP.NET MVC (the aspnet_isapi handler) has a * wildcard to handle the requests. Ordered priority doesn't work in the wildcard mapping case with just *. I'm not convinced by that answer, but will wait till tomorrow for them to get back.

    Read the article

  • Moving from ColdFusion 8 to ColdFusion 10 - Migration Fails

    - by XenoFoxx
    After having made several attempts to migrate from a ColdFusion 8 Standard server to a ColdFusion 10 Standard server, it feels like I am "almost" there. I'm using the 64 bit installer from Adobe's website. I'm using a Windows Server 2008 (64 bit) server with IIS 7.0. The installation itself goes smooth and the services start and are running. But at the end of the installation it says "ColdFusion Installed, but with errors" and it generates a log file. The log file reads:
    Migration Error: : Check that "C:\ColdFusion8" is a valid directory and is an installation of either ColdFusion MX 6 or ColdFusionMX 7
    and further down says:
    Status: WARNING
    Additional Notes: WARNING - Could not migrate settings from previous version of ColdFusion
    Custom Action: com.macromedia.ia.action.MigrateColdFusionAction
    Status: ERROR
    Additional Notes: ERROR - class com.macromedia.ia.action.MigrateColdFusionAction NonfatalInstallException null
    The applicationHost.config file has new XML referencing the ColdFusion 10 directory, but IIS is still using ColdFusion 8. I'm also going to guess that the settings in the CF Administrator have not been migrated based on the message in the log above. I've followed the instructions on Adobe's site, including ensuring that ASP.NET, CGI, ISAPI Extensions, and ISAPI Filters are all enabled. I've also enabled IIS 6 Metabase Compatibility even though I don't think it's needed. Has anyone else had similar issues with ColdFusion 10 and IIS 7. Currently I have uninstalled CF 10 and reverted back to

    Read the article
