Search Results

Search found 36788 results on 1472 pages for 'sql 2008'.


  • Does Active Directory Support Folder Redirection AND Portable Home Directories?

    - by Robert F
    Does anyone here know if Active Directory will support the use of both Windows Folder Redirection and Mac OS X's Portable Home Directories for synchronizing a user's files to a remote share? I want to synchronize my users' files with a remote share as a way of backing up their data. This is fairly straightforward if a user has only a Windows computer or only a Mac. However, will Active Directory support a situation in which a user has both types of computers, or has a Mac on which they're running Windows within Parallels? If I configure a remote share via Group Policy for their Windows files and then configure a different share for their Mac files via ADUC, when they change a file on either computer, will AD know which computer the file was changed on and synchronize that file with the appropriate remote folder? Thanks!

    Read the article

  • Linux server cannot be pinged

    - by misamisa
    I have set up a Linux server in a DMZ. There is another Windows server running in the same DMZ. Both of these servers can be pinged over the internet from my home PC. However, another Linux server, rented from a hosting service provider, can only be pinged from the Windows server and not from the DMZ Linux server. So the situation is:

    Windows server (DMZ) ---ping--- Rented server ..... Successful
    Linux server (DMZ) ---ping--- Rented server ..... Unreachable
    Home PC ---ping--- Linux server (DMZ) ..... Successful
    Home PC ---ping--- Windows server (DMZ) ..... Successful

    When I ran tcpdump on my Linux server (DMZ) and started a ping from the rented server, it showed that the Linux server (DMZ) was receiving the pings and replying. There are no restrictions defined in hosts.deny or hosts.allow that might cause this problem. What else should I check to get this working?

    Read the article

  • ADFS 2.0 and CRM 2011 IFD - Error 403 when being redirected

    - by JohnThePro
    I'm not sure what happened here, but let me give you the rundown. I have a CRM 2011 IFD that by all accounts was functioning. Out of nowhere, I find that when being redirected to the ADFS 2.0 login page by CRM, instead of seeing the login page, I get the following error:

    403 - Forbidden: Access is denied. You do not have permission to view this directory or page using the credentials that you supplied.

    I'm not sure what is going on here. As best as I can tell, the certs are good and the logins are good. More specifically, nothing has been modified. This all worked just fine, and now it doesn't. I'm really stumped.

    Read the article

  • DC on Hyper V Host

    - by Saif Khan
    I've read a few similar questions but am still not clear. I have a small office (13 users) and recently purchased a new single server (this is all I have to work with for now). Server specs:

    Memory - 16 GB
    Drives - 6 SCSI (2 TB)
    Processor - dual quad-core

    I plan on making the Hyper-V host the DC and then running 2 VMs, one as a file server and the other as an application server. Question - since I am restricted to this single server, would it be a big issue making the Hyper-V host the domain controller? Your input is greatly appreciated.

    Read the article

  • The session setup from the computer <computerName> failed to authenticate.

    - by TheCodeMonk
    Every once in a while, I get a client PC that won't be able to log into the domain. This morning it was telling us that the trust relationship between the PC and the domain failed. I checked the event logs on the primary domain controller and I see this for 2 PCs (the one that had the problem and one that can log in today):

    The session setup from the computer failed to authenticate. The name(s) of the account(s) referenced in the security database is . The following error occurred: Access is denied.

    I know how to fix this by rejoining the PC to the domain... But why does this happen, and how can I prevent it so I don't have to keep rejoining PCs to the domain?

    Read the article

  • Representing server state with a metric

    - by Sal
    I'm using Microsoft's Performance Monitor to dump logs of RAM, CPU, network, and disk usage from multiple servers. I'd like to get a single metric that captures the state of a given variable to a good extent. For instance, disk usage is pretty stable, so if I take a single reading that says I have 50% remaining disk space, that reading will give me an accurate measure for the day. (The servers aren't doing heavy IO writing.) However, the tricky part is monitoring CPU and network usage. The logs currently dump the % CPU usage every ten seconds. If I take a straight average of the numbers, it may not represent reality, as % CPU will be much lower during the night than during the day. (We host websites that sell appliance items.) I'd like to get an average over a span during peak hours (about 5 hours in the day) and present it as a daily peak-hour metric. Of course, there will most likely be some readings that come in overly spiked (if multiple users pinged the server at once) or are of no use (a momentary idle state). Is there a standard distribution/test that industries use in this situation?
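    One way to make this concrete is a trimmed mean over peak-hour samples, which discards exactly the spiked and idle readings described above. Below is a sketch, not a standard: the table name, counter name, and 9:00-13:59 peak window are my assumptions. Perfmon's relog tool can export counter logs into a SQL Server database, though its real schema differs from the simplified staging table assumed here.

        -- Assumed staging table for perfmon samples (hypothetical schema):
        --   CounterSamples(SampleTime datetime2, CounterName sysname, Value float)
        -- Daily peak-hour CPU metric: average the 9:00-13:59 samples after
        -- trimming the bottom 5% (idle states) and the top 5% (momentary spikes).
        WITH PeakSamples AS (
            SELECT SampleTime,
                   Value,
                   ROW_NUMBER() OVER (PARTITION BY CAST(SampleTime AS date)
                                      ORDER BY Value)                        AS rn,
                   COUNT(*)     OVER (PARTITION BY CAST(SampleTime AS date)) AS cnt
            FROM   dbo.CounterSamples
            WHERE  CounterName = '% Processor Time'
              AND  DATEPART(hour, SampleTime) BETWEEN 9 AND 13
        )
        SELECT CAST(SampleTime AS date) AS SampleDay,
               AVG(Value)               AS PeakHourCpuPct
        FROM   PeakSamples
        WHERE  rn >  cnt * 0.05    -- drop bottom 5% of readings
          AND  rn <= cnt * 0.95    -- drop top 5% of readings
        GROUP  BY CAST(SampleTime AS date)
        ORDER  BY SampleDay;

    A 95th-percentile reading over the same window is the other metric commonly used for capacity planning, since it answers "how busy is the server when it matters" rather than "how busy is it on average".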

    Read the article

  • What happens to encrypted mails when CA certificate expires in my Windows Domain

    - by Wolfgang
    does anybody know what will happen to encrypted /signed mails when a root authority certificate expires in my domain network? Can the certificate still be validated from the clients and will the clients recognize that the certificate was valid when the mail was encrypted / signed? Respectively what will happen when a migration to a new infrastructure will take place or if I install a new root-CA? Is there a need to also migrate the expired root certificate?

    Read the article

  • SQL Table stored as a Heap - the dangers within

    - by MikeD
    Nearly all of the time I create a table, I include a primary key, and often that PK is implemented as a clustered index. Those two don't always have to go together, but in my world they almost always do. On a recent project, I was working on a data warehouse and a set of SSIS packages to import data from an OLTP database into the warehouse. The data I was importing from the business database was mostly new rows, sometimes updates to existing rows, and sometimes deletes. I decided to use the MERGE statement to implement the inserts, updates, and deletes in the data warehouse. I found it quite performant to have a stored procedure that extracted all the new, updated, and deleted rows from the source database and dumped them into a working table in my data warehouse, then run a stored proc in the warehouse that was the MERGE statement taking the rows from the working table and updating the real fact table.

        USE Warehouse

        CREATE TABLE Integration.MergePolicy (PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date, Operation varchar(5))

        CREATE TABLE fact.Policy (PolicyKey int identity primary key, PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date)

        CREATE PROC Integration.MergePolicy as
        begin
            begin tran

            Merge fact.Policy as tgt
            Using Integration.MergePolicy as Src
            On (tgt.PolicyId = Src.PolicyId)
            When not matched by Target then
                Insert (PolicyId, PolicyTypeKey, Premium, Deductible, EffectiveDate)
                values (src.PolicyId, src.PolicyTypeKey, src.Premium, src.Deductible, src.EffectiveDate)
            When matched and src.Operation = 'U' then
                Update set PolicyTypeKey = src.PolicyTypeKey, Premium = src.Premium, Deductible = src.Deductible, EffectiveDate = src.EffectiveDate
            When matched and src.Operation = 'D' then
                Delete
            ;

            delete from Integration.MergePolicy

            commit
        end

    Notice that my work table (Integration.MergePolicy) doesn't have any primary key or clustered index. I didn't think this would be a problem, since it was a relatively small table and was empty after each time I ran the stored proc. For one of the work tables, during the initial loads of the warehouse, about 1.5 million rows were inserted, processed, then deleted. Also, because of a bug in the extraction process, the same 1.5 million rows (plus a few hundred more each time) were getting inserted, processed, and deleted over and over. This was being done on a fairly hefty server that was otherwise unused, and no one was paying any attention to the time it was taking. This week I received a backup of this database and loaded it on my laptop to troubleshoot the problem, and of course it took a good ten minutes or more to run the process. However, what seemed strange to me was that after I fixed the problem and happened to run the merge sproc when the work table was completely empty, it still took almost ten minutes to complete. I immediately looked back at the MERGE statement to see if I had some sort of outer join that meant it would be scanning the target table (which had about 2 million rows in it), then turned on the execution plan output to see what was happening under the hood. Running the stored procedure again took a long time, and the plan output didn't show me much - 55% on the MERGE statement, 45% on the DELETE statement, and table scans on the work table in both places. I was surprised at the relative cost of the DELETE statement, because there were really 0 rows to delete, but I was expecting to see the table scans.
    (I was beginning now to suspect that my problem was because the work table was being stored as a heap.) Then I turned on STATISTICS IO and ran the sproc again. The output was quite interesting.

        Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Policy'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'MergePolicy'. Scan count 1, logical reads 433276, physical reads 60, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    I've reproduced the above from memory, so the details aren't exact, but the essential bit was the very high number of logical reads on the table stored as a heap. Even just doing a SELECT COUNT(*) FROM Integration.MergePolicy incurred that sort of output, even though the result was always 0. I suppose I should research more on the allocation and deallocation of pages for tables stored as a heap, but I haven't, and my original assumption that a heap with no rows would only need to read one page to answer any query was definitely proven wrong. It's likely that some sort of physical defragmentation of the table would have cleaned that up, but it seemed that the easiest answer was to put a clustered index on the table. After doing so, the execution plan showed a clustered index scan, and the IO stats showed only a single page read. (I aborted my first attempt at adding the clustered index because it was taking too long - instead I ran TRUNCATE TABLE Integration.MergePolicy first and then added the clustered index, both of which took very little time.) I suspect I may not have noticed this if I had used TRUNCATE TABLE Integration.MergePolicy instead of DELETE FROM Integration.MergePolicy in the sproc, since I'm guessing that the truncate operation does some rather quick releasing of pages allocated to the heap table. In the future, I will be much more careful to have a clustered index on every table I use, even the working tables. Mike
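    For reference, the fix described above amounts to just two statements. This is a sketch: the index name and the choice of PolicyId as the key are assumptions, and any narrow column would do for a work table like this.

        -- TRUNCATE deallocates the heap's pages immediately, so the index
        -- build that follows has almost nothing to scan...
        TRUNCATE TABLE Integration.MergePolicy;

        -- ...and with a clustered index in place, scanning an empty table
        -- really does cost a single page read, instead of touching every
        -- extent the heap ever had allocated to it.
        CREATE CLUSTERED INDEX CIX_MergePolicy
            ON Integration.MergePolicy (PolicyId);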

    Read the article

  • How can I view locking on a file?

    - by JamesP
    Hi all, we have a FoxPro database file on a Windows share (Server 2003). We're having a problem where the program that writes to this file has to retry because the file is locked. This all happens very quickly, and within a few seconds the file is available again, but it shouldn't be locked in the first place. Does anyone know how we can view what's locking it? Any tools available? Thanks, James

    Read the article

  • How to address a recurring low temperature error seen at every boot-up?

    - by GregC
    After updating to the latest controller firmware, I started receiving the following error messages:

    LSI 2208 ROC: Temperature sensor below error threshold on enclosure 1, sensors 5 thru 7

    Is this something I should worry about, or is it a red herring? Details: I have a Sans Digital NexentaSTOR 24-disk JBOD enclosure connected to an LSI 9286-8e RAID-on-Chip controller with two SAS cables. Seagate ES.2 3TB SAS hard drives populate every bay in the enclosure.

    Read the article

  • Need to use DaRT, can't access Windows at all

    - by Jack
    I have an install of Windows Server R2. After installing two patches, the system is stuck in a reboot loop. I wish to uninstall these patches to revert my system to normal. I have looked at DISM, but it does not seem to provide a way to uninstall specific MSP patches, and I don't believe I can use msiexec in WinRE to uninstall them. And so, I believe DaRT is what I need, since it will allow me to uninstall specific patches. However, I cannot access my install of Windows at all, which is apparently needed to create the DaRT media. Is there any way around this?

    Read the article

  • File auditing software for Windows Server 2003

    - by David Collantes
    I am looking for a program or program suite that will allow the auditing of network shared resources (specifically storage space) and render reports on who created, deleted, moved, or modified files, etc. Yes, I know I can turn on auditing in Windows, but the Event Log isn't quite the "charmer" for the job.

    Read the article

  • NLB Access Denied from second server

    - by Igor K
    Hello, I have NLB set up and working. Just two servers, web1 and web2. I set up NLB on web1 and can see both machines as Converged. (I had to specify a login/pass to connect to web2.) On web2, the NLB Manager only shows web2 and says in the log "Accessed Denied. Error connecting to web1". Any ideas how to fix this?

    Read the article

  • Roaming Profiles, Folder redirection or... both

    - by Adrian Perez
    Hello, I'm deploying Remote Desktop Services on Windows Server 2008 R2. For now it's going to be one server, but in the future another server may be added to the farm. I'm setting up roaming profiles and folder redirection to save space. Now I have some doubts... if I'm redirecting all the folders I can through GPO (Start Menu, Desktop, AppData, My Documents, Videos, Music...), does it make sense to use roaming profiles? I mean, I'm redirecting almost everything. So if I don't use roaming profiles, what kind of data is not shared/roamed? Perhaps they're not necessary, and if I set up roaming profiles I will just add unnecessary complexity to the infrastructure. What do you think? Any advice or recommendations? Thanks!

    Read the article

  • IIS ASP Redirect Removal

    - by Kim L
    We have a website set up on IIS 7 and are trying to replace it with a new site, but we need an existing redirect removed. The old site used a custom file as the homepage (WN-main.asp). We removed all the old site files, including web.config, and placed them in a subdirectory for safekeeping. The new site no longer uses ASP, and we'd like to use a regular index.html as the default. However, when we go to the website, it keeps trying to redirect our .com to .com/WN-main.asp -- and that gives us a 404 error in the application for "Default Web Site" because we removed that page. In the IIS "Default Document" settings we have index.html at the top, and WN-main.asp is nowhere to be found in the list (it never was there). We've also removed the web.config file from the root directory, put the entire old website in a subdirectory, and restarted IIS. We assume the redirect is set up somewhere in IIS, because if I navigate to .com/index.html, which is our new site, it works. Our problem is that oursite.com redirects to oursite.com/WN-main.asp. Grr. If you go to www.worzalla.com you can see how it redirects to the WN-main.asp page right now as the homepage. Any ideas where this redirect could have been set up, so we can remove it? Thanks!

    Read the article

  • How do I apply WinHTTP proxy settings domain-wide?

    - by Oliver Salzburg
    We're already configuring Internet Explorer proxy settings through Group Policy, and it works great. Sadly, I've recently run into multiple issues where those settings are ignored by certain services. I realized that these services have one thing in common: they use WinHTTP, which has its own proxy settings. Now I'm asking myself how to apply those across the whole domain. I realize that I could create a logon script and simply run netsh winhttp import proxy source=ie, but from experience I know that these settings require a reboot to take effect, so this wouldn't help me at all in a logon script. So, how can I do it?

    Read the article

  • Network printer installs driver every time I connect

    - by Patrick Schneider
    I'm running a Citrix XenApp 6.5 farm with a strange problem. We use the Ricoh UPD 3.10, and every time a user logs on to a Citrix session and gets their printers connected (via logon script), Windows shows the "Finishing installation..." dialog for every printer. The drivers are installed on all Citrix servers, yet every time a user connects a printer the dialog appears. Are there any settings to disable this behaviour?

    Read the article

  • create assembly from network location

    - by mjw06d
    The error I'm receiving:

    CREATE ASSEMBLY failed because it could not open the physical file "\\<server>\<folder>\<assembly>.dll": 5(Access is denied.).

    TSQL:

        exec sp_configure 'clr enabled', 1
        reconfigure
        go

        create assembly <assemblyname>
        from '\\<server>\<folder>\<assembly>.dll'
        with permission_set = safe

    How can I create an assembly from a UNC path?
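    The error usually means that the account SQL Server uses to read the file (the service account, or the caller's Windows login where it can be impersonated) has no rights on that share. If granting access on the share isn't an option, CREATE ASSEMBLY also accepts the assembly bits as a varbinary literal, which sidesteps file access entirely. A sketch of that workaround (the hex string below is a truncated placeholder; you would inline the full byte dump of your DLL):

        exec sp_configure 'clr enabled', 1
        reconfigure
        go

        -- 0x4D5A... is the byte-for-byte contents of <assembly>.dll;
        -- 4D5A is the "MZ" header every PE file starts with.
        create assembly <assemblyname>
        from 0x4D5A90000300000004000000FFFF0000 /* ...rest of the DLL's bytes... */
        with permission_set = safe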

    Read the article

  • How can I compact the VHD file with Ubuntu?

    - by AmShegar
    I run Windows Server 2008 R2 with the Hyper-V role. The guest system is Ubuntu 12.04 LTS, which sits on a dynamically expanding virtual hard disk. I want to compact this VHD (the real size is 50 GB, but it takes 360 GB on the physical disk), but I cannot, because the Ubuntu file system is not NTFS but ext4. What do I need (gparted, sdelete, ...) to solve this problem?

    Read the article

  • Timesync on HyperV with CentOS 6.2

    - by WaldenL
    I've got a CentOS VM (release 6.2) running under Hyper-V. I have Integration Services installed (part of the base kernel now), and CentOS shows the current clocksource is hyperv_clocksource; however, the time in the VM is about 10 minutes fast after a week of uptime. My understanding of the new IC and pluggable clocksource is that this shouldn't happen any more. Is there any additional configuration necessary to get the pluggable clocksource to work? I know there are plenty of links about setting kernel options to PIT and various stuff like that, but those all seem to pre-date the integrated clocksource support, and as I understand it they shouldn't be needed any longer. Nor should ntpd or adjtimex.

    Read the article

  • 3 Servers, 2 Work Fine, One Has Network Issues

    - by ScaleOvenStove
    I have 3 servers, all with relatively the same hardware/config, etc. I run some data pulls on all 3. Two of them have 1 NIC each, and they work fine. On the other, there are 2 NICs, and unless both are plugged in or teamed, the processes time out. Any ideas why this would be? It doesn't make sense to me, as the other two work fine with 1 NIC and don't time out when running the same processes.

    Read the article

  • Task Scheduler : Logon as Batch Job Rights

    - by Brohan
    I'm trying to set up a scheduled task which will run under the network administrator's account on a specified computer, whether the account is logged in or not. According to the Task Scheduler, I need the "Log on as a batch job" right. When I try to change this setting in the Local Security Policy window, the option to add the Administrator account to this right is greyed out. Currently, only LOCAL SERVICE may log on as a batch job, and attempting to add the administrator hasn't worked. How do I set this permission so that I can run tasks whether I'm logged in or not?

    Read the article

  • Should I be running my scheduled backups as SYSTEM or as our domain admin?

    - by MetalSearGolid
    I have a daily backup which is scheduled through the Task Scheduler. It failed with a strange error code last night, but I was able to search and find a blog post on how to avoid the error in the future. However, one of the recommendations was to run the backups as the domain's Administrator user. Since all of the files being backed up are local to this system, should I continue to have the backups run as SYSTEM, or is it actually better to run them as a different user? I have been running these backups for well over a year now and have only had a handful of failures, but ironically, when it does fail, the error code means it was a permissions issue (or so I read; the code seems to be undocumented by Microsoft). Thanks in advance for any insight. Might as well post the error code here too, in case anyone would like to share their thoughts on it, but I rarely ever get this error, so I don't care too much about it: 4294967294

    Read the article
