Search Results

Search found 9944 results on 398 pages for 'plan explorer'.

Page 143/398

  • allow editing of config files by Windows Server 2008 admins running non-elevated?

    - by Justin Grant
    My company produces a cross-platform server application which loads its configuration from user-editable configuration files. On Windows, config files are locked down at setup time to allow reading by all users but restrict editing to Administrators only. Unfortunately, on Windows Server 2008, even local administrators no longer have admin privileges (because of UAC) unless they're running an elevated app. My question is: if a Windows Server 2008 admin wants to edit an admins-only config file, how does he normally do it? Is he forced to use a text editor which is smart enough to auto-elevate when elevation is needed, like Windows Explorer does in response to access-denied errors? Or is there something we can do in our app (e.g. in the ACLs we lay down at setup time) which signals apps (or Explorer) that elevation is needed before editing the file, or which otherwise makes our app friendlier to admins running on modern Windows OSes?

    Read the article

  • How to disable Eclipse's behavior of maximizing editor when double clicking a tab?

    - by Steve Kehlet
    I recently updated my Eclipse (now running 20100218-1602), and I've found that whenever I click around quickly between tabs on the tab bar, it will sometimes maximize the editor and hide the PHP Explorer to the left. After researching a little, this appears to be a feature triggered by double-clicking a tab. So I guess it's my fault: I'm sure I'm clicking around too fast and mistakenly double-clicking a tab, but it happens often enough in what I'd consider a normal editing session that I've come to absolutely loathe it, and even after the usual googling due diligence I cannot figure out how to turn it off. In this post (http://stackoverflow.com/questions/594659/eclipses-tab-double-click-on-visual-studio) someone mentions the Window.AutoHideAll shortcut, however that seems to be only for assigning keyboard shortcuts -- this is a mouse-click thing. But maybe it's a clue. I can't find anything relevant under Eclipse - Preferences - PHP. I don't think it's specific to PHP, because if I switch to the Java perspective, double-clicking a tab hides the Package Explorer. Any suggestions are appreciated, thanks!

    Read the article

  • Playing around with my Java project in Eclipse... what the crap did I just do?

    - by Daddy Warbox
    I don't even remember how, but somehow I managed to make all of my project's source files hidden in Eclipse's Package and Project Explorer panels. Go figure. 'Show Filtered Children (alt+click)' temporarily reveals the files, and only in Package Explorer can I double-click to reopen them from this view. They go back into hiding after I select another item, though. Plus, now I'm getting other annoyances, such as all of the folded non-hidden items expanding at once when I click on an item, and the entire folder tree of my project now being shown in these panels (including my .svn subversion folders... which shouldn't be any of Eclipse's business, presently). Long story short, my Package/Project Explorers just blew up on me, and I want to know how to fix this. Thanks in advance. P.S. What's a good guide I can use to learn my way around this silly contraption, anyway?

    Read the article

  • Copy Rows in a One to Many with LINQ to SQL

    - by Refracted Paladin
    I have a table that stores a bunch of diagnoses for a single plan. When the users create a new plan I need to copy over all existing diagnoses as well. I had thought to try the below, but this is obviously not correct. I am guessing that I will need to loop through my oldDiagnosis part, but how? Thanks! My attempt so far...

        public static void CopyPlanDiagnosis(int newPlanID, int oldPlanID)
        {
            using (var context = McpDataContext.Create())
            {
                var oldDiagnosis = from planDiagnosi in context.tblPlanDiagnosis
                                   where planDiagnosi.PlanID == oldPlanID
                                   select planDiagnosi;

                var newDiagnosis = new tblPlanDiagnosi
                {
                    PlanID = newPlanID,
                    DiagnosisCueID = oldDiagnosis.DiagnosisCueID,
                    DiagnosisOther = oldDiagnosis.DiagnosisOther,
                    AdditionalInfo = oldDiagnosis.AdditionalInfo,
                    rowguid = Guid.NewGuid()
                };

                context.tblPlanDiagnosis.InsertOnSubmit(newDiagnosis);
                context.SubmitChanges();
            }
        }
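
    For comparison, the same copy can be expressed set-based in T-SQL with a single INSERT ... SELECT, avoiding a row-by-row loop entirely. A minimal sketch using the table and column names from the snippet above; the DECLAREd variables stand in for the method's parameters, and this assumes these are the only columns that need to carry over:

        -- Hypothetical set-based copy: clone every diagnosis row from the
        -- old plan onto the new plan in one statement.
        DECLARE @newPlanID int, @oldPlanID int

        INSERT INTO tblPlanDiagnosis
            (PlanID, DiagnosisCueID, DiagnosisOther, AdditionalInfo, rowguid)
        SELECT @newPlanID,        -- re-point each copied row at the new plan
               DiagnosisCueID,
               DiagnosisOther,
               AdditionalInfo,
               NEWID()            -- fresh rowguid for each copied row
        FROM tblPlanDiagnosis
        WHERE PlanID = @oldPlanID -- source rows: the old plan's diagnoses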

    Read the article

  • Phantom activity on MySQL

    - by LoveMeSomeCode
    This is probably just my total lack of MySQL expertise, but is it typical to see lots of phantom activity on a MySQL instance via phpMyAdmin? I have a shared hosting plan through Lithium, and when I log in through the phpMyAdmin console and click on the 'Status' tab, it shows crazy high numbers for queries. Within an hour of activating my account I had 1 million queries. At first I thought this was them setting things up, but the number is climbing constantly, averaging 170/second. I've got a support ticket in with Lithium, but I thought I'd ask here whether this is a MySQL/shared-host thing, because I had the same thing happen with a shared hosting plan through Joyent.
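
    One thing worth checking: on a shared instance, phpMyAdmin's 'Status' tab typically reports counters for the whole MySQL server, not just your account, so every customer on the box contributes to the query count. A quick sketch of the underlying status queries (standard MySQL status variables, MySQL 5.0+):

        -- Total statements handled by the server since startup (all accounts).
        SHOW GLOBAL STATUS LIKE 'Questions';

        -- Server uptime in seconds; Questions/Uptime gives the average rate,
        -- which is likely where the ~170/second figure comes from.
        SHOW GLOBAL STATUS LIKE 'Uptime';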

    Read the article

  • Use multiple inheritance to discriminate usage roles?

    - by Arne
    Hi fellows, it's my flight simulation application again. I am leaving the mere prototyping phase now and starting to flesh out the software design. At least I try... Each of the aircraft in the simulation has a flight plan associated with it, the exact nature of which is of no interest for this question. Suffice it to say that the operator may edit the flight plan while the simulation is running. The aircraft model most of the time only needs read-access to the flight plan object, which at first thought calls for simply passing a const reference. But occasionally the aircraft will need to call AdvanceActiveWayPoint() to indicate a way point has been reached. This will affect the iterator returned by the function ActiveWayPoint(). This implies that the aircraft model indeed needs a non-const reference, which in turn would also expose functions like AppendWayPoint() to the aircraft model. I would like to avoid this because I would like to enforce the usage rule described above at compile time. Note that class WayPointIter is equivalent to an STL const iterator, that is, the way point can not be mutated through the iterator.

        class FlightPlan
        {
        public:
            void AppendWayPoint(const WayPointIter& at, WayPoint new_wp);
            void ReplaceWayPoint(const WayPointIter& at, WayPoint new_wp);
            void RemoveWayPoint(WayPointIter at);
            // (...)

            WayPointIter First() const;
            WayPointIter Last() const;
            WayPointIter Active() const;
            void AdvanceActiveWayPoint() const;
            // (...)
        };

    My idea to overcome the issue is this: define an abstract interface class for each usage role and derive FlightPlan from both. Each user then only gets passed a reference of the appropriate usage role.

        class IFlightPlanActiveWayPoint
        {
        public:
            virtual WayPointIter Active() const = 0;
            virtual void AdvanceActiveWayPoint() const = 0;
        };

        class IFlightPlanEditable
        {
        public:
            virtual void AppendWayPoint(const WayPointIter& at, WayPoint new_wp) = 0;
            virtual void ReplaceWayPoint(const WayPointIter& at, WayPoint new_wp) = 0;
            virtual void RemoveWayPoint(WayPointIter at) = 0;
            // (...)
        };

    Thus the declaration of FlightPlan would only need to be changed to:

        class FlightPlan : public IFlightPlanActiveWayPoint, public IFlightPlanEditable
        {
            // (...)
        };

    What do you think? Are there any caveats I might be missing? Is this design clear, or should I come up with something different for the sake of clarity? Alternatively I could also define a special ActiveWayPoint class which would contain the function AdvanceActiveWayPoint(), but I feel that this might be unnecessary. Thanks in advance!

    Read the article

  • What is the most idiomatic way to emulate Perl's Test::More::done_testing?

    - by DVK
    I have to build unit tests in an environment with a very old version of Test::More (perl 5.8 with $Test::More::VERSION being '0.80'), which predates the addition of done_testing(). Upgrading to a newer Test::More is out of the question for practical reasons. And I am trying to avoid using no_plan - it's generally a bad idea not to catch when your unit test exits prematurely, say due to some logic not executing when you expected it to. What is the most idiomatic way of running a configurable number of tests, assuming no no_plan or done_testing() is used? Details: My unit tests usually take the form of:

        use Test::More;

        my @test_set = (
            [ "Test #1", $param1, $param2, ... ]
           ,[ "Test #2", $param1, $param2, ... ]
           # ,...
        );

        foreach my $test (@test_set) {
            run_test($test);
        }

        sub run_test {
            # $expected_tests += count_tests($test);
            ok(test1($test)) || diag("Test1 failed");
            # ...
        }

    The standard approach of use Test::More tests => 23; or BEGIN { plan tests => 23 } does not work, since both are obviously executed before @test_set is known. My current approach involves making @test_set global and defining it in the BEGIN {} block as follows:

        use Test::More;

        BEGIN {
            our @test_set = (); # Same set of tests as above
            my $expected_tests = 0;
            foreach my $test (@test_set) {
                $expected_tests += count_tests($test);
            }
            plan tests => $expected_tests;
        }

        our @test_set; # Must do!!! Since the first "our" was in BEGIN's scope :(

        foreach my $test (@test_set) { # Same
            run_test($test);
        }

        sub run_test { } # Same

    I feel this can be done more idiomatically but am not certain how to improve it. Chief among the smells is the duplicate our @test_set declaration - in BEGIN {} and after it. Another approach is to emulate done_testing() by calling Test::More->builder->plan(tests => $total_tests_calculated); I'm not sure if that's any better idiomatically.

    Read the article

  • Adding more OR searches with CONTAINS Brings Query to a Crawl

    - by scolja
    I have a simple query that relies on two full-text indexed tables, but it runs extremely slowly when the CONTAINS is combined with any additional OR search. As seen in the execution plan, the two full-text searches crush the performance. If I query with just one of the CONTAINS clauses, or neither, the query is sub-second, but the moment you add OR into the mix the query becomes ill-fated. The two tables are nothing special; they're not overly wide (42 cols in one, 21 in the other; maybe 10 cols are FT-indexed in each) and don't contain very many records (36k in the bigger of the two). I was able to solve the performance problem by splitting the two CONTAINS searches into their own SELECT queries and then UNIONing the three together. Is this UNION workaround my only hope? Thanks.

        SELECT a.CollectionID
        FROM collections a
        INNER JOIN determinations b ON a.CollectionID = b.CollectionID
        WHERE a.CollrTeam_Text LIKE '%fa%'
           OR CONTAINS(a.*, '"*fa*"')
           OR CONTAINS(b.*, '"*fa*"')

    Execution Plan (guess I need more reputation before I can post the image):
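
    A sketch of the UNION rewrite described above, reusing the original predicates. Each branch keeps a single predicate, so the optimizer can satisfy the full-text branches from the full-text indexes independently, and UNION (rather than UNION ALL) removes duplicate CollectionIDs:

        SELECT a.CollectionID
        FROM collections a
        INNER JOIN determinations b ON a.CollectionID = b.CollectionID
        WHERE a.CollrTeam_Text LIKE '%fa%'
        UNION
        SELECT a.CollectionID
        FROM collections a
        INNER JOIN determinations b ON a.CollectionID = b.CollectionID
        WHERE CONTAINS(a.*, '"*fa*"')
        UNION
        SELECT a.CollectionID
        FROM collections a
        INNER JOIN determinations b ON a.CollectionID = b.CollectionID
        WHERE CONTAINS(b.*, '"*fa*"')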

    Read the article

  • Configuring 2 XBee Modules on OSX for wireless connection

    - by Annette
    I am trying to find out how, and with which program for OSX (10.5.8), I can configure serial ports. I am trying to establish a wireless connection between two XBees (RF modules) and cannot figure out how to use ZTerm or screen under Terminal. The setup I am using is: an Arduino + XBee shield + XBee with an external power supply, and an XBee on the XBee Explorer connected to the computer via USB. I am trying to gather information on this through various forums, but most of them cover the configuration issue for PC using X-CTU (which I tried with CrossOver, but it doesn't recognize any of my ports). According to one source, using screen under Terminal should show me all my serial ports, particularly /dev/tty.KeySerial1 - but it doesn't show up, even though I've plugged in both my Arduino + XBee shield and the XBee on the Explorer.

    Read the article

  • (C#) How do you de-elevate permissions for a child process

    - by Davy8
    I know how to launch a process with Admin privileges from a process using:

        proc.StartInfo.UseShellExecute = true;
        proc.StartInfo.Verb = "runas";

    where proc is a System.Diagnostics.Process. But how does one do the opposite? If the process you're in is already elevated, how do you launch the new process without admin privileges? More accurately, we need to launch the new process with the same permission level as Windows Explorer: no change if UAC is disabled, but if UAC is enabled and our process is running elevated, we need to perform a certain operation un-elevated, because we're creating a virtual drive, and if it's created with elevated permissions while Windows Explorer is running unelevated it won't show up. Feel free to change the title to something better; I couldn't come up with a good description.

    Read the article

  • Equivalent CSS to produce a field glow in other browsers?

    - by Liso22
    I was long using this to add a glow to focused fields; I accessed my page from Firefox for the first time and realized it doesn't work there, and most likely not in Internet Explorer either.

        border: 1px solid #E68D29;
        outline-color: -webkit-focus-ring-color;
        outline-offset: -2px;
        outline-style: auto;
        outline-width: 5px;

    I had copy-pasted it from another page, so I'm not quite sure how it works. What is the equivalent for Firefox or Internet Explorer? I mean, how do I make a similar glow in other browsers? Thanks

    Read the article

  • Optimizing an Oracle query

    - by deming
    I'm having a hard time wrapping my head around this query. It is taking almost 200+ seconds to execute. I've pasted the execution plan as well.

        SELECT user_id,
               ROLE_ID,
               effective_from_date,
               effective_to_date,
               participant_code,
               ACTIVE
        FROM CMP_USER_ROLE E
        WHERE ACTIVE = 0
          AND (SYSDATE BETWEEN effective_from_date AND effective_to_date
               OR TO_CHAR(effective_to_date, 'YYYY-Q') = '2010-2')
          AND participant_code = 'NY005'
          AND NOT EXISTS (
                SELECT 1
                FROM CMP_USER_ROLE r
                WHERE r.USER_ID = E.USER_ID
                  AND r.role_id = E.role_id
                  AND r.ACTIVE = 4
                  AND E.effective_to_date <= (SELECT MAX(last_update_date)
                                              FROM CMP_USER_ROLE S
                                              WHERE S.role_id = r.role_id
                                                AND S.role_id = r.role_id
                                                AND S.ACTIVE = 4))

    Explain plan:

        -----------------------------------------------------------------------------------------------------
        | Id  | Operation                          | Name             | Rows | Bytes | Cost (%CPU)| Time     |
        -----------------------------------------------------------------------------------------------------
        |   0 | SELECT STATEMENT                   |                  |    1 |    37 |   154   (2)| 00:00:02 |
        |*  1 |  FILTER                            |                  |      |       |            |          |
        |*  2 |   TABLE ACCESS BY INDEX ROWID      | USER_ROLE        |    1 |    37 |    30   (0)| 00:00:01 |
        |*  3 |    INDEX RANGE SCAN                | N_USER_ROLE_IDX6 |   27 |       |     3   (0)| 00:00:01 |
        |*  4 |   FILTER                           |                  |      |       |            |          |
        |   5 |    HASH GROUP BY                   |                  |    1 |    47 |   124   (2)| 00:00:02 |
        |*  6 |     TABLE ACCESS BY INDEX ROWID    | USER_ROLE        |  159 |  3339 |   119   (1)| 00:00:02 |
        |   7 |      NESTED LOOPS                  |                  |   11 |   517 |   123   (1)| 00:00:02 |
        |*  8 |       TABLE ACCESS BY INDEX ROWID  | USER_ROLE        |    1 |    26 |     4   (0)| 00:00:01 |
        |*  9 |        INDEX RANGE SCAN            | N_USER_ROLE_IDX5 |    1 |       |     3   (0)| 00:00:01 |
        |* 10 |       INDEX RANGE SCAN             | N_USER_ROLE_IDX2 |  957 |       |    74   (2)| 00:00:01 |
        -----------------------------------------------------------------------------------------------------

    Read the article

  • SQL Server 2000 Stored Procedure Prevents Parallelism or something?

    - by user187305
    I have a huge, disgusting stored procedure that wasn't slow a couple months ago, but now is. I barely know what this thing does and I am in no way interested in rewriting it. I do know that if I take the body of the stored procedure, declare/set the values of the parameters, and run it in Query Analyzer, it runs more than 20x faster. From the internet, I've read that this is probably due to a bad cached query plan. So, I've tried running the sp with "WITH RECOMPILE" after the EXEC, and I've also tried putting "WITH RECOMPILE" inside the sp, but neither of those helped even a little bit. When I look at the execution plan of the sp vs the query, the biggest difference is that the sp has "Parallelism" operations all over the place and the query doesn't have any. Can this be the cause of the difference in speeds? Thank you, any ideas would be great... I'm stuck.
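
    For reference, a sketch of the recompile options mentioned above, plus a quick parallelism test; the procedure and table names are hypothetical stand-ins. OPTION (MAXDOP 1), which SQL Server 2000 supports, forces a serial plan for a single statement and is an easy way to test whether the parallel plan itself is the regression:

        -- One-off: compile a fresh plan for this call only.
        EXEC dbo.usp_BigReport @SomeParam = 1 WITH RECOMPILE

        -- Flag the cached plan as stale; it is rebuilt on the next execution.
        EXEC sp_recompile 'dbo.usp_BigReport'

        -- Inside the procedure body, force a serial plan for the expensive
        -- statement to see whether parallelism is hurting rather than helping:
        SELECT SUM(Amount) FROM dbo.SomeBigTable OPTION (MAXDOP 1)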

    Read the article

  • Getting a query to index seek (rather than scan)

    - by PaulB
    Running the following query (SQL Server 2000), the execution plan shows that it used an index seek, and Profiler shows it's doing 71 reads with a duration of 0.

        select top 1 id
        from table
        where name = '0010000546163'
        order by id desc

    Contrast that with the following, which uses an index scan with 8500 reads and a duration of about a second.

        declare @p varchar(20)
        select @p = '0010000546163'

        select top 1 id
        from table
        where name = @p
        order by id desc

    Why is the execution plan different? Is there a way to change the second method to seek? Thanks.

    EDIT: The table looks like

        CREATE TABLE [table] (
            [Id]   [int] IDENTITY (1, 1) NOT NULL,
            [Name] [varchar] (13) COLLATE Latin1_General_CI_AS NOT NULL
        )

    Id is the primary clustered key. There is a non-unique index on Name and a unique composite index on Id/Name. There are other columns - left them out for brevity.
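
    A likely explanation: with the literal, SQL Server 2000 can consult the statistics histogram for '0010000546163' at compile time; with a local variable the value is unknown when the plan is built, so the optimizer falls back to an average-density estimate and chooses the scan. One hedged workaround is an index hint - a sketch, where IX_table_Name is a placeholder for the real name of the non-unique index on Name:

        declare @p varchar(20)
        select @p = '0010000546163'

        -- Force the seek through the index on Name:
        select top 1 id
        from [table] with (index(IX_table_Name))
        where name = @p
        order by id desc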

    Read the article

  • How to make a dll referenced by an ActiveX component accessible?

    - by sherpa
    I have an ActiveX component developed in C#. I'm referencing a dll which ships with other native dlls and loads them on the fly, expecting them to be in the same folder (probably using Assembly.GetExecutingAssembly().Location, but I'm not sure - I don't have control over them). There is also one ini file which is likewise expected to be in the same folder. When I run the application as a Windows Forms app, everything is fine, of course. But the trouble comes when I run it within Internet Explorer, as the base directory is then the location of iexplore.exe. Obviously neither the dlls nor the ini file can be found, and many temporary working files emitted by the dlls end up in the Internet Explorer folder. The workaround is to copy all the dlls and the ini file into the IE folder, but that is not something I'd be happy about. What is the proper solution to this? Can I somehow set the base path for the ActiveX component?

    Read the article

  • Delete from empty table taking forever

    - by Will
    Hello, I have an empty table that previously had a large number of rows. The table has about 10 columns and indexes on many of them, as well as indexes on multiple columns.

        DELETE FROM item WHERE 1=1

    This takes approximately 40 seconds to complete.

        SELECT * FROM item

    This takes 4 seconds. The execution plan of SELECT * FROM item shows the following:

        SQL> select * from midas_item;

        no rows selected

        Elapsed: 00:00:04.29

        Execution Plan
        ----------------------------------------------------------
           0      SELECT STATEMENT Optimizer=CHOOSE (Cost=19 Card=123 Bytes=7380)
           1    0   TABLE ACCESS (FULL) OF 'MIDAS_ITEM' (Cost=19 Card=123 Bytes=7380)

        Statistics
        ----------------------------------------------------------
                  0  recursive calls
                  0  db block gets
               5263  consistent gets
               5252  physical reads
                  0  redo size
               1030  bytes sent via SQL*Net to client
                372  bytes received via SQL*Net from client
                  1  SQL*Net roundtrips to/from client
                  0  sorts (memory)
                  0  sorts (disk)
                  0  rows processed

    Any idea why these would be taking so long, and how to fix it, would be greatly appreciated!!
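
    A common cause in Oracle: a full table scan reads every block up to the table's high-water mark, and DELETE never lowers it, so a formerly large table still costs thousands of block gets (the 5263 consistent gets above) even when it holds no rows. If the contents really are disposable, TRUNCATE resets the high-water mark; a sketch:

        -- Deallocates the table's space and resets the high-water mark, so
        -- subsequent full scans read almost nothing. DDL: cannot be rolled back.
        TRUNCATE TABLE midas_item;

        -- Alternative that rebuilds the segment compactly while keeping the
        -- rows (indexes become unusable and must be rebuilt afterwards):
        ALTER TABLE midas_item MOVE;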

    Read the article

  • About the leading newline in Visual Studio solution files.

    - by mafutrct
    Sometimes, for unknown reasons, VS 2008 creates solution files led by a newline:


        Microsoft Visual Studio Solution File, Format Version 10.00
        # Visual Studio 2008
        [...]

    This has happened on various machines, and I have no idea why. A Google search did not yield any useful results. Now, why do I worry about this? Because I can't open these solutions from Windows Explorer. I have to open VS, select File - Open - Solution, and it works fine. But to open solutions from within Explorer, I have to edit the sln file and remove the leading newline. Edit: After Leom's suggestion I tested a few times and found that the issue depends solely on the leading newline.

    Read the article

  • get function address from name [.debug_info ??]

    - by user361190
    Hi, I was trying to write a small debug utility, and for this I need to get the address of a function/global variable given its name. This is a built-in debug utility, which means that the debug utility will run from within the code to be debugged - or in plain words, I cannot parse the executable file. Now, is there a well-known way to do that? The plan I have is to make the .debug_* sections be loaded into memory, which I plan to do with a cheap trick like this in the ld script:

        .data
        {
            *(.data)
            __sym_start = .;
            *(.debug_*);
            __sym_end = .;
        }

    Now I have to parse the section to get the information I need, but I am not sure this is doable, or whether there are issues with it - this is all just theory. But it also seems like too much work :-) Is there a simpler way? Or if someone can tell me upfront why my scheme will not work, that will also be helpful. Thanks in advance, Alex.

    Read the article

  • VB.Net 2008 IDE hanging - MSVB7.dll eating 100% CPU when editing code

    - by Andrew Backer
    I am having a problem with msvb7.dll eating 50%+ CPU on my dual-core system. This usually lasts 10-30 seconds or so, during which time the IDE is non-responsive. This occurs when I do pretty much anything in the text editor, and can be replicated by simply adding blank lines to a function and then deleting them. Or pasting some code. Or... lotsa stuff. Some notes:

    - SP1 installed
    - I had DevExpress' Refactor!/CodeRush, Components, and CodeIt.Right installed, but have removed all 3 of them (I had installed the latest version of Refactor! Pro (9.3.4), perhaps the day before)
    - I have tried a VS.NET repair
    - There is a KB that referenced some CPU destroying with VB, but it was included in SP1

    Also:

    - The solution consists of ~30 VB projects and 2 C# projects
    - 8 other developers aren't having any issues with this (or at least not the SAME issues, we all have 'em)
    - A clean get from TFS was done
    - The project builds properly, and I can even debug

    This doesn't seem to happen on really small solutions, but perhaps it does and it just goes away super quick. Any clues at all as to what might be causing this, or how to fix it? I REALLY don't want to lose another day uninstalling and reinstalling and patching and so on =) If that even fixes it. Here is the stack trace (Process Explorer) that I get from the threads window when the msvb7.dll is churning.

        --- title in Process Explorer [threads] tab for process --------
        cpu: 49.28%
        cswitch delta: 300 to 3500
        startaddress: [msvb7.dll+0x4218c]
        msvb7.dll version: 9.0.30729.1

        --- actual stack trace -------
        ntkrnlpa.exe!KiUnexpectedInterrupt+0x121
        ntkrnlpa.exe!ZwYieldExecution+0x1c56
        ntkrnlpa.exe!KiDispatchInterrupt+0x72e
        NDIS.sys!NdisFreeToBlockPool+0x15e1
        // shortened stack trace. all of these are from msvb7.dll:
        msvb7.dll+0x46ce7 <- 0x2676a <- 0x2698e <- 0x38031 <- 0x2659f <- 0x26644
        msvb7.dll+0x25f29 <- 0x2ac7a <- 0x27522 <- 0x274a0 <- 0x2b5ce <- 0x2b6e4
        msvb7.dll+0x67d0a <- 0x68551 <- 0x6817b <- 0x681f0 <- 0x67c38 <- 0x65fa8
        msvb7.dll+0x666c6 <- 0x6672c <- 0x6673d <- 0x6677c <- 0x667b4 <- 0x63c77
        msvb7.dll+0x63e97 <- 0x42c3a <- 0x42bc1 <- 0x41bd7
        kernel32.dll!GetModuleFileNameA+0x1b4

    This is the list of stuff from "copy info" in Help - About, shortened to a reasonable length:

        Microsoft Visual Studio 2008 | Version 9.0.30729.1 SP
        Microsoft Visual Studio 2008 Professional Edition - ENU Service Pack 1 (KB945140)
        Microsoft .NET Framework | Version 3.5 SP1
        Microsoft Visual Basic 2008
        Microsoft Visual C# 2008
        Microsoft Visual F# for Visual Studio 2008
        Microsoft Visual Studio 2008 Team Explorer | Version 9.0.30729.1
        Microsoft Visual Studio 2008 Tools for Office
        Microsoft Visual Web Developer 2008
        Hotfix for Microsoft Visual Studio 2008 Professional Edition - ENU:
        KB944899, KB945282, KB946040, KB946308, KB946344, KB946581, KB947171,
        KB947173, KB947180, KB947540, KB947789, KB948127, KB946260, KB946458, KB948816
        Microsoft Recipe Framework Package 8.0
        Process Editor WIT Designer 1.4.0.0
        Process Editor for Microsoft Visual Studio Team Foundation Server, Version 1.4.0.0
        tangible T4 Editor 9.0
        tangible T4 Text Template Editor - T4 Editor
        tangibleprojectsystem 1.0
        Team Foundation Server Power Tools October 2008
        SQL Prompt 4.0 (disabled)

    Read the article

  • jQuery is not working in IE and giving an error

    - by Param-Ganak
    Hello friends! I have jQuery validation code which is working fine in Firefox, but the same code is not working in Internet Explorer. There is no error when I run the script in Firefox, but in Internet Explorer I get the following error:

        Error: Expected identifier, string or number
        Code: 0

    I can't understand this problem; please, does anyone have guidance on it? I can't paste the code here because it is very big. Has anyone come across such an error before, or does anyone know what could cause it? Please help me, friends!

    Read the article

  • Oracle: Difference in execution plans between databases

    - by Will
    Hello, I am comparing queries between my development and production databases. They are both Oracle 9i, but almost every single query has a completely different execution plan depending on the database. All tables/indexes are the same, but the dev database has about 1/10th the rows of each table. On production, the execution plan picked for most queries is different from development, and the cost is sometimes 1000x higher. Queries on production also seem not to be using the correct indexes in some cases (full table access). I have run dbms_utility.analyze_schema on both databases recently as well, in the hopes the CBO would figure something out. Is there some other underlying Oracle configuration that could be causing this? I am a developer mostly, so this kind of DBA analysis is fairly confusing at first...
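
    One thing to rule out first: on 9i the usual advice is to gather optimizer statistics with DBMS_STATS rather than the older ANALYZE-based routines, since the CBO costs plans from those statistics, and a 10x difference in row counts alone can legitimately flip a plan from an index to a full scan. A hedged sketch, where 'APP_OWNER' is a placeholder schema name:

        -- Gather fresh CBO statistics for one schema; cascade => TRUE also
        -- collects statistics on the schema's indexes.
        BEGIN
          DBMS_STATS.GATHER_SCHEMA_STATS(
            ownname    => 'APP_OWNER',
            cascade    => TRUE,
            method_opt => 'FOR ALL COLUMNS SIZE AUTO');
        END;
        /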

    Read the article

  • A HREF URL won't work

    - by user3586248
    I am trying to get the following link to work.

        <a href='#' onclick='window.open(" | &ESPP_Info_URL | ");return false;'>Employee Stock Purchase Plan Information</a>

    Basically, the &ESPP_Info_URL variable takes in a URL, so that the rendered code looks like:

        <a onclick="window.open(https://...);return false;" href="#">Employee Stock Purchase Plan Information</a>

    But when I click the link it just refreshes the page. Does anyone know how to get this to open the link passed to the window.open function?

    Read the article

  • Long pause when accessing DFS namespace

    - by Matt
    We've recently migrated our Windows network to use DFS for shared files. DFS is working well, except for one annoying problem: users experience a significant delay when they try to access a DFS namespace that they have not accessed for some time. I have tried to troubleshoot the issue but have not had any success so far, and I was hoping someone here may have some pointers to help resolve the problem. Firstly, some background on our network:

    - The network uses a Windows 2008 functional level Active Directory domain with two Windows 2008 DCs and two DNS servers (one on each of the DCs). The network is DNS only - no WINS.
    - All computers are located at the same site and connected by Gigabit Ethernet.
    - We have approximately 20 domain-based DFS namespaces in Windows 2008 mode, and each namespace has two Windows 2008 namespace servers (the same two servers for all namespaces).
    - All namespace servers are in FQDN mode and all folder targets are specified using their FQDN.
    - All computers are up to date with Service Packs and patches.
    - The actual folder targets (i.e. the SMB shares our DFS folders point to) are scattered across several file and application servers, all running Windows 2008 bar two application servers which run Windows 2003 R2, with no replication set up at all (e.g. all DFS folders currently have only one folder target).

    Some more detail on the problem: the namespace access delay is generally 1-10 seconds long and seems to occur when a particular computer has not accessed the requested namespace for approximately five minutes or more. For example, if the user has not accessed \\domain.name\namespace1\ for more than five minutes and attempts to access \\domain.name\namespace1\ via Windows Explorer, the Explorer window will freeze for 1-10 seconds before finally resuming and displaying the folders that exist in \\domain.name\namespace1. If they then close the Explorer window and attempt to access \\domain.name\namespace1\ again within five minutes, the contents will be displayed almost instantly - if they wait longer than five minutes it will go through the 1-10 second pause again. Once "inside" the namespace everything is nice and snappy; it's just the initial connection to the namespace that is slow. The browsing delays seem to affect all variants of Windows that we use (Windows 2008 x64 SP2, Windows 2003 R2 x86 SP2, Windows XP Pro x86 SP3) - it is possibly a bit worse in Windows XP / 2003 than in Windows 2008, but I'm not sure the difference isn't just psychological. Accessing the underlying folder targets directly exhibits no delay at all - i.e. if the SMB shares pointed to by DFS are accessed directly (bypassing DFS), there is no pause. During troubleshooting I noticed that the "Cache duration" for all of our DFS roots is set to 300 seconds - 5 minutes. Given that this is the same amount of time required to trigger the pause, I assume the caching is somehow related, although I am unsure exactly what is cached on the client and hence what needs to be looked up again after 5 minutes have elapsed.

    In trying to resolve the problem I have already tried/checked the following (without success):

    - Ran dcdiag on both Domain Controllers - no problems found.
    - Did some basic DNS server checks without finding any problems - I don't know how to check the DNS servers in detail, but I would add that the network is not exhibiting any other strange behavior that might point to a DNS problem.
    - Disabled anti-virus on clients and servers.
    - Removed one of the namespace servers from a couple of namespaces - no difference.

    So that's where I'm up to - and I'm out of ideas. Can anyone suggest what may be causing the delays and/or what I should be trying next?

    Read the article

  • IBM Server Config questions

    - by Joel Coel
    I have a few questions on a potential server setup. First, the situation: last year we bought an IBM x3500 server with 2 Xeon E5410s, 9GB RAM, and 6 HDDs. The original intent for this server was to replace the old Exchange e-mail server. It was brought in, set up, and then shortly after we switched to Gmail. Shortly after that my predecessor left for greener pastures, and finally I was hired. So this nice server is now sitting (mostly) idle. This year I have budget again for one server, and of course I want to put this other server to work. I'm thinking about the best use for the two servers, and I think I finally have a plan. The idea is to use the two newer servers as a pair of VM hosts. I will set up each server with the same 8 VMs, but divide up the load so that only 4 are active per physical host. That means I've normally got 2GB RAM + 2 cores per VM. I've done some load testing to pick out which servers to convert to virtual, and chose them so that each host would be capable of handling the entire set of 8 by itself in a pinch, with 1 core and 1GB RAM each, but would be very taxed to do so. This should take our data center from 13 total servers down to 7. The "servers" I'm replacing are mostly re-purposed desktops, so I'm more than happy to be able to do this. Now it's time to go shopping for the new server. I'd like my two hosts to match as closely as possible, and so I'm looking at IBM again. It also helps that we have some educational matching grant money from IBM that I need to use to help pay for this system (we're a small private college). So finally (if you're not bored already), we come to my questions:

    1. Am I missing anything big or obvious in this plan? I'm a little worried about network performance since the VM hosts will only have 4 NICs total where 8 used to be, but I don't think it will be a problem. Is there anything else like this I might be overlooking? Am I making it too complicated?
    2. IBM no longer has a good analog to last year's server. If I want to match the performance (8 cores, 9GB RAM, 1333MHz front-side bus, 6 spindles), I have to spend quite a bit more than we paid last year: $2K+, or nearly a 33% cost increase, for only a marginal increase in performance. The alternative to stay in budget is to take a hit on the FSB down to 800MHz or cut the number of cores in half, neither of which is attractive. The main cost culprit is the processor. IBM no longer offers the E5410: it's listed as a part, but not available in any of the server configs I've looked at. I'm considering getting the cheapest 800MHz-FSB dual-core Xeon I can configure and then buying the E5410s separately. That's still an extra $350 I wasn't counting on, but better than $2K. I want to know what others think of this - will it work, or will I end up with the wrong motherboard or some other issue? Am I missing a simple way to configure the server I really want?
    3. I don't really intend to do this, but one option to save some money is to omit the redundant power supply. Since my redundancy plan for these systems is to switch over to a completely different host, the extra power supply isn't strictly necessary. That said, it's still very helpful to avoid even short downtimes while I switch over VMs. Has anyone done this?

    Read the article

  • Backup a hosted Sharepoint

    - by David Mackintosh
    One of my customers has outsourced their SharePoint and Exchange services to a hosted services provider. I believe it is a SharePoint 2007 service. It is a shared hosting solution, so we do not have any kind of access to the server itself; we only have user-level and sharepoint-administrator-level access to the SharePoint application. They have come to the point where they would like a copy of everything that is on the SharePoint server. I have downloaded Office SharePoint Designer 2007, and it features three (!) ways to back up a SharePoint server, none (!) of which work for me:

    - File - Export - Personal Web Package: when selecting everything, it calculates a negative size. Barfs with a 'No "content-type" in CGI environment' error.
    - File - Export - Sharepoint Template: barfs with an 'A World Wide Web browser, such as Windows Internet Explorer, is required to use this feature' error.
    - Site - Administration - Backup Web Site: wants to create the backup .cmp file on the SharePoint server itself. I don't have access to any servers on the same network, so I can't redirect it to any form of the suggested \\server\place. Barfs with a 'The Web application at $URL could not be found. [...]' error. Possibly moot, because Google tells me that bad things happen when using OSD to back up sites larger than 24MB (which this site most definitely is).

    So I called the helpdesk of the outsource provider, and got told that they recommend using OSD, but no, they don't actually provide any application support for OSD (not that I blame them for that); however, they could do a stsadm.exe backup and provide us with that, and OSD should be able to read the resulting cmp file. Then for authorization reasons they had my customer call them directly (since I can't authorize such an operation), and they told him that he didn't want a stsadm.exe backup, he wanted to get into an 'explorer view' and deal with things that way (they were vague). Google hasn't been much help in figuring out what an 'explorer view' is, let alone how I bring one up. The end goal of this operation is to have a backup of the site as it exists (hopefully today, but shortly anyway) in such a format that we don't need another SharePoint server to restore it to. I.e., we'd like to be able to pick individual content directly out of this backup. We are not excessively concerned with things like formatting; we just want the documents. This is a fairly complex site with multiple subsites and multiple folders per subsite, so sitting there and manually downloading each file isn't really going to happen if there is a better, easier way. So, my questions:

    - Is the stsadm.exe backup what I want? If not, what do I want?
    - If I manage to convince them that I do want the stsadm.exe backup, can I pick files out of the resulting backup file with OSD?
    - If OSD isn't going to let me extract individual files, is there a tool I can use that can?

    Read the article
