Search Results

Search found 12449 results on 498 pages for 'ebs e business suite'.

Page 438/498 | < Previous Page | 434 435 436 437 438 439 440 441 442 443 444 445  | Next Page >

  • Why can't I browse my D: drive, even though I'm in the Administrators group?

    - by Nic Waller
    My fileserver running Windows Server 2008 has two logical drives; the C: drive contains all of the system and application data, and the D: drive contains all of the business data. There are several shares on the top level of the D: drive that are working fine. However, when logged into the fileserver interactively via Remote Desktop, only the Domain Administrator and local Administrator accounts can browse the D: drive. I set up an account called "Maintenance" and added it to the local Administrators group, but when logged in with this user, I can't browse into the D: drive. The D: drive has the following permissions ACL:
    > Full Access - SYSTEM
    > Full Access - MACHINE\Administrators
    It won't even let me view the ACL for the D: drive. So I tried taking ownership of the D: drive; then I can read the ACL, and "Effective Permissions" says that I have full access. But I still get this error message: "Location is not available. D:\ is not accessible. Access is denied." Here's a screenshot proving that I get access denied even when I have Full Access. http://www.getdropbox.com/gallery/2319942/1/errors?h=2bd644
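
    These symptoms are consistent with UAC token filtering (a guess from the symptoms, not confirmed in the question): on Server 2008, an interactive or Remote Desktop logon by a non-built-in member of Administrators runs with a filtered token, so an ACL that grants access only through MACHINE\Administrators denies the unelevated session, even though "Effective Permissions" reports full access. A sketch of one workaround - giving the account its own explicit ACE from an elevated prompt (the Maintenance name comes from the question):

      rem Grant Maintenance full control on D:\ and everything below it.
      icacls D:\ /grant Maintenance:(OI)(CI)F
      rem Verify the resulting ACL.
      icacls D:\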


  • Windows Server 2012 Can't Print

    - by Chris
    I know this may sound incredibly stupid and there is probably an easy solution, but I can't seem to find it. Friends of mine recently upgraded their server for their small business from the POS old one: new hardware and a change from Windows Server 2003 to Windows Server 2012. I've got everything they need transferred over and running except for printing. They need to be able to print to printers in the vans their technicians use from the server via remote desktop. In other words, they use a laptop to remote desktop into the server and need to print invoices out from the remote server to printers attached locally via USB. On the old server they just installed the identical driver and that was it; they could print as needed. On this server, no matter what we seem to do, we can't get it to print remotely, and in the process we also discovered that the server can't even print to the network printer. It sees the printer on its network and it sees (through redirection) the printers in the vans, but when you hit print it claims it did, and nothing happens. There isn't an issue with the printers themselves, as every other device we have can print to them without issues. Is there some setting that is inhibiting the server from printing? Is there something I need to install (print server?) to add the functionality? Thanks in advance for helping me out here
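
    A first diagnostic step (a generic sketch, not specific to this setup) is to confirm the Print Spooler service is healthy and restart it; redirected van printers are re-created at the next remote desktop logon:

      rem Check the Print Spooler service state.
      sc query spooler
      rem Restart it, then retry both network and redirected printing.
      net stop spooler
      net start spooler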


  • How to extract attachments from Exchange 2003 database

    - by John
    I have an ancient Exchange 2003 server that I'm getting ready to retire. All user accounts have been migrated to Google Apps for Business, so no new mail is being sent or received on the server. There are fewer than 50 accounts on the server, but some are very large, so that the whole Exchange database is between 10 and 20 GB. The largest account has over 100,000 messages. I believe that in the migration to Gmail, some attachments were not migrated. For peace of mind, I'd like to get the attachments out of the Exchange database. The only way I know of to do this is to set up a second computer with Outlook on it, set up one of the accounts, and then sync the whole mail history and get the attachments out that way. Is there something simpler that I can do? Here are two possibilities:
    > An Exchange attachment retrieval tool/script that pulls attachments for all accounts directly out of the Exchange database.
    > An Exchange PST exporter tool/script that will export PST files for all accounts, so that I can just load the PST files into Outlook at will.


  • Trouble setting up incoming VPN in Microsoft SBS 2008 through a Cisco ASA 5505 appliance

    - by Nils
    I have replaced an aging firewall (custom setup using Linux) with a Cisco ASA 5505 appliance for our network. It's a very simple setup with around 10 workstations and a single Small Business Server 2008. Setting up incoming ports for SMTP, HTTPS, remote desktop etc. to the SBS went fine - they are working like they should. However, I have not succeeded in allowing incoming VPN connections. The clients trying to connect (running Windows 7) are stuck at the "Verifying username and password..." dialog before getting an error message 30 seconds later. We have a single external, static IP, so I cannot set up the VPN connection on another IP address. I have forwarded TCP port 1723 the same way as I did for SMTP and the others, by adding a static NAT route translating traffic from the SBS server on port 1723 to the outside interface. In addition, I set up an access rule allowing all GRE packets (src any, dst any). I have figured that I must somehow forward incoming GRE packets to the SBS server, but this is where I am stuck. I am using ASDM to configure the 5505 (not the console). Any help is very much appreciated!
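
    PPTP needs more than the TCP/1723 forward: the data channel is GRE (IP protocol 47), which ordinary port forwarding and ACLs don't track back to the SBS. On the ASA the usual fix is PPTP inspection, which watches the control channel and handles the companion GRE session automatically. A sketch in pre-8.3 CLI syntax (SBS_IP is a placeholder; ASDM can apply the same change under Service Policy Rules):

      ! Static PAT for the PPTP control channel, as done for SMTP etc.
      static (inside,outside) tcp interface 1723 SBS_IP 1723 netmask 255.255.255.255
      ! Enable PPTP inspection so the ASA tracks the companion GRE flow.
      policy-map global_policy
       class inspection_default
        inspect pptp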


  • Block SMTP connections from mail domains which don't themselves accept SMTP connections

    - by bignose
    I'm administering a mail service for a small business. Their mail host's internet connection is an ADSL service with a permanent IP address. Unfortunately, many misconfigured mail systems will happily deliver to this host, but, when the host attempts to send mail back (e.g. a bounce notice, or a normal response from someone), the original sender refuses to receive connections from this host. That misconfiguration makes their system a one-way mail sender, which is a problem. How can I configure Postfix on this customer's mail host to refuse SMTP sessions that declare a sender domain which itself refuses SMTP from this host? That is, if the SMTP client declares a domain that we can't make SMTP connections back to, then there's not much point accepting the incoming connection in the first place. I'm imagining a late check (after the low-cost checks to winnow most of the rubbish connections) that keeps the client on the other end while it attempts an SMTP client connection back to the declared domain of the sender. If that connection is rejected, the incoming one is also rejected. I'm also open to other suggestions for how this problem might be addressed (short of not using this mail host at all, which isn't an option).
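
    Postfix has nearly this exact check built in: sender address verification probes the declared sender with an SMTP callout before accepting the message (it verifies the full sender address, which is slightly stronger than the domain-level check described). A minimal main.cf sketch, assuming Postfix 2.1 or later; the cache path is an example:

      # Probe the declared sender before accepting mail; results are cached,
      # so this stays a relatively low-cost late check.
      smtpd_sender_restrictions =
          permit_mynetworks,
          reject_unverified_sender
      # Persist verification results across restarts.
      address_verify_map = btree:/var/lib/postfix/verify_cache
      # Reject outright rather than defer once a probe fails.
      unverified_sender_reject_code = 550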


  • MongoDB and GridFS. What are the best storage options in the range of 1 TB?

    - by Nerian
    We are going to launch a service that will require between 1 and 2 GB for file storage per paid user. I am going to use GridFS for storing files. I am pondering the different options for storing the database. But since I am inexperienced at deployment and it is my first time with MongoDB, I need your experience. Criteria:
    > I want to spend my time developing my core business, that is, my own application. I am a Ruby on Rails developer. I do not like to mess with server configuration. Hence, I would like a fully managed hosting solution. But I would like to know about any other option, if you think it is worth it.
    > It should be able to scale. Cloud style. Pay as you go.
    > The lower the price, the better.
    So far I know of these services:
    > https://mongohq.com/pricing
    > https://mongomachine.com/pricing
    > https://mongolab.com/about/pricing/
    > http://cloudcontrol.com/add-ons/mongodb/
    And they seem to be OK for common needs - that is, no file storage. But I am going to use GridFS, so the size matters. These services seem to scale, in price, quite poorly:
    > MongoHQ: the largest plan's max storage is 20 GB. Seems like very little storage for GridFS.
    > MongoMachine: flat price, $2.50 per GB. I didn't find the limit. Seems like a good price compared with the others.
    > MongoLab: 3.984 GB max, which I don't think I will hit, so perfect. $8 per GB, quite costly.
    > CloudControl: the largest plan is 20 GB. The custom service starts at 250€ plus some unspecified charge per GB.
    What is your experience with these services? Any downtimes? Other possibilities?
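
    For capacity planning it helps that GridFS is just two ordinary collections (fs.files for metadata, fs.chunks for the chunked file contents), so billed storage tracks raw file size plus modest index overhead. A quick sketch with the stock mongofiles tool to measure real usage before committing to a plan (the database name is an example):

      # Store and list a file through GridFS from the shell.
      mongofiles --db filestore put invoice.pdf
      mongofiles --db filestore list
      # Then compare db.stats() in the mongo shell against the raw file sizes.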


  • SBS 2008 - DNS Forwarders timing out.

    - by Moif Murphy
    Hello. We have an SBS 2008 server that keeps losing its connection to the internet approx 2-3 times a day. It's a simple setup: BT Business Broadband ADSL to a wireless Zyxel router to the server. Clients connect via WiFi from their laptops. Plugging Ethernet in makes no difference; only a reboot of the router seems to bring everything back again. I'm looking at the forwarders on the DNS properties page and they're timing out when trying to resolve the IPs. Currently there are two IPs in there: 194.72.9.34, which has timed out, and 194.72.9.38, which has finally resolved to ns8.bt.net. We've been in there and replaced all media, installed a PCI NIC, and have changed the router three times. There are no errors in the DNS event logs pertaining to what's going on. We've also been on to BT, who are adamant that it's not their end. Could someone shed some light on what could be going on, or where else to look in the configuration of the server? Thank you.
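
    When it next drops, it may help to separate "forwarders unreachable" from "DNS service broken" by querying each BT forwarder directly from the server (the IPs are the ones quoted above):

      rem Query each forwarder directly, bypassing the local DNS service.
      nslookup bbc.co.uk 194.72.9.34
      nslookup bbc.co.uk 194.72.9.38
      rem If those time out while the router still responds, suspect the ADSL side.
      ping 194.72.9.34
      rem Clear the server's DNS cache once connectivity returns.
      dnscmd /clearcache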


  • Using TrueCrypt to secure a MySQL database: any pitfalls?

    - by Saul
    The objective is to secure my database data from server theft, i.e. the server is at a business office location with normal premises lock and burglar alarm, but because the data is personal healthcare data I want to ensure that if the server were stolen the data would be unavailable as encrypted. I'm exploring installing MySQL on a mounted TrueCrypt encrypted volume. It all works fine, and when I power off, or just cruelly pull the plug, the encrypted drive disappears. This seems a load easier than encrypting data in the database, and I understand that if there is a security hole in the web app, or a user gets physical access to a plugged-in server, the data is compromised; but as a sanity check, is there any good reason not to do this? @James: I'm thinking that in a theft scenario it's not going to be powered down nicely, and so it's likely to crash any running DB transactions. But then if someone steals the server, I'm going to need to rely on my off-site backup anyway. @tomjedrz: it's kind of all sensitive - individual personal and address details linked to medical referrals/records. It would be as bad in our field as losing credit card data, but it means that almost everything in the database would need encryption... so I figured it's better to run the whole DB in an encrypted partition. If I encrypt data in the tables, there's got to be a key somewhere on the server, I'm presuming, which seems more of a risk if the box walks. At the moment the app is configured to drop a dump of data (weekly full, then hourly deltas using rdiff) into a directory also on the TrueCrypt disk. I have an off-site box running WS_FTP Pro scheduled to connect by FTPS and sync down the backup, again into a TrueCrypt-mounted partition.
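
    The mechanics are simple enough that the main pitfalls are operational: the volume must be mounted before mysqld starts, and a yanked plug means InnoDB crash recovery on the next mount. A minimal my.cnf sketch, assuming the TrueCrypt volume is mounted at /mnt/tc first (the paths are examples):

      [mysqld]
      # Keep data, socket and error log on the encrypted volume so nothing
      # sensitive lands on the unencrypted system disk.
      datadir   = /mnt/tc/mysql
      socket    = /mnt/tc/mysql/mysql.sock
      log-error = /mnt/tc/mysql/mysqld.err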


  • Managing SharePoint permissions via Active Directory?

    - by rgmatthes
    My company has thousands of employees organized thoroughly via Active Directory. I have confidence in the accuracy of the Department and Title information displayed in the user profiles. I'm helping to put up a brand new SharePoint 2007 site, and I contacted IT about managing the site's permissions through AD Groups. The goal is to have the site automatically assign read/write/contribute/whatever permissions based on the information in AD. For example, we could create an AD Group called "Managers" that would contain anyone with the "Manager" title in their AD user profile. I would have SharePoint tap into this AD Group to mass assign permissions if I knew all managers would need a certain level of access (read/write/contribute/whatever). Then if a manager joins the company or leaves it, the group is automatically updated (provided AD gets updated, of course). My IT rep called back and said it couldn't be done. This seems like a pretty straightforward business requirement, and one of the huge benefits of having Active Directory, but maybe I'm mistaken. Could anyone shed some light on this? A) Is it possible to use dynamically-updated AD Groups when assigning permissions via SharePoint? (Does anyone know of a guide I could show my doubtful IT rep?) B) Is there a "best practice" way to go about this? I've read some debate on whether SharePoint Groups or AD Groups are the way to go. My main concern is dynamic updating. C) If this isn't available out of the box, can someone recommend third-party software that will provide the functionality I'm looking for? A big thanks to anyone who can help me out!!
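
    On (A): SharePoint 2007 accepts AD security groups anywhere it accepts users, and membership changes flow through without touching SharePoint again. What AD does not do out of the box is populate a group from the Title attribute; that part is usually scripted. A hedged one-way sketch with the ActiveDirectory PowerShell module (2008 R2 RSAT; the group and attribute values are examples, and removals would need Remove-ADGroupMember):

      # Add anyone whose Title is "Manager" to the Managers group.
      Import-Module ActiveDirectory
      $managers = Get-ADUser -Filter 'Title -eq "Manager"'
      Add-ADGroupMember -Identity "Managers" -Members $managers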


  • Windows 7 'Unidentified Network'

    - by Throdne
    So my internet was working last night before I went to bed, and I woke up to a computer that is showing "unidentified network". I have tried multiple ways of fixing it (not in this order):
    > route delete 0.0.0.0
    > uninstalling drivers - restart - reinstalling from downloaded drivers
    > ipconfig /release and /renew
    > static IP
    > speed & duplex from auto-negotiation to 1.0 Gbps full duplex
    > changing my network address from no value to 1234567890ab
    Nothing seems to work. I have Comcast internet, and when I connect the computer to the modem it works perfectly. But when I connect it to my router, again: unidentified network. I know my router isn't faulty, because my MacBook Pro, server, NAS, iPhones, and iPads are still working. I have also tried moving the port on the router - still the same problem. My router is a Cisco Small Business RVS4000; motherboard: GIGABYTE GA-990FXA-UD3.
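
    Since the machine works when plugged straight into the modem, a full network-stack reset is a reasonable next step before swapping hardware (standard Windows commands; run from an elevated prompt and reboot afterwards). It rebuilds Winsock and TCP/IP state rather than one setting at a time:

      netsh winsock reset
      netsh int ip reset
      ipconfig /flushdns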


  • How do I make a PPT file as small as possible?

    - by grunwald2.0
    Currently I am agonizing over several large presentation files, which I happened to reprint to PDFs... One thing I wondered: Do PPTs (from Microsoft PowerPoint) always have to be that big? And what would be the strategies to make a PPT smaller? (If we say "ceteris paribus" at e.g. 25 slides, and assuming that one isn't allowed to use a cloud-based service like GDocs, rocketslide or Prezio.) Of course there are the obvious "bad guys": images and graphics. But: How about roll-over animations etc.; who knows how much space they take? How about SmartArt? Could one save file size by using OpenOffice or LibreOffice Impress? (I didn't try it yet.) And "what if": What if we need to include e.g. five images (or charts that can't be remade in Excel in time); how would we best reduce the file size impact of those five images, if we needed to? I ask all this from an honest "business" perspective. I am no nerd or "Microsoft MVP" and I don't intend on delving into LaTeX or similar yet. But that doesn't mean that I am not curious and very willing to learn. I am basically interested in (proven) best practices. Yes, I know this question is lacking "initial research", but I think the perspective of my question is interesting and unique to a lot of people, and if we intend to make SE a Q&A / wiki kind of reference site, this question might be a good way to collect advice on a question that has a very defined goal: minimum file size.
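
    Since .pptx files are ZIP archives, one low-effort check (assuming the decks are in the newer XML format; the filename is an example) is to list the archive and see what actually dominates - usually embedded media:

      # Largest members of the deck, by uncompressed size
      # (the final line is the archive total).
      unzip -l deck.pptx | sort -k1 -n | tail -20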


  • DFS Root namespace is RDWR for all users

    - by Patrick
    We have an existing DFS Replication and Namespace group that we use to serve the company's files. This has been operating fine for us for some time now, and continues to do so. However, a situation arose yesterday afternoon that has us stumped. The problem is that we have our namespace presented as: \\domain.co.uk\public\[8 or 9 folders that are mapped to the users in the business]. We had a problem this morning that meant that a number of users started mapping their AD Home Drive directly to the \\domain.co.uk\public directory, and we found that they had read/write access. This rapidly became a problem, as at least one director saved some moderately sensitive documents in there and basically anyone could read them. I've tidied up that specific problem with some deft scripting and a slight modification of group policy. However, I would like to make \public read-only; the trouble is I can't work out where the ACLs for that folder are held. All the folders that are presented as \\domain.co.uk\public\[folder] are 'real' folders on logical volumes on our DFS servers, so they are secured with groups that are applied via the 'security' tab. I'd like to do the same on \public, but I can't find it. I have looked through, amongst other things, \Sysvol\domain.co.uk, but can't find it, and after a lot of clicking and a bit of reading I can't see how to lock it down. Any thoughts?
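
    The namespace root is backed by an ordinary local folder on each namespace server - by default under C:\DFSRoots, not in Sysvol - so its NTFS ACL lives there. A sketch assuming that default path (check every server hosting the root, and mind the share-level permissions too):

      rem Inspect the ACL backing the namespace root.
      icacls C:\DFSRoots\Public
      rem Example: cut inheritance, leave users read-only, keep admins in control.
      icacls C:\DFSRoots\Public /inheritance:r /grant "Domain Users":(OI)(CI)RX /grant "Domain Admins":(OI)(CI)F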


  • Only receiving one document at a time from the new web server

    - by Robert Kuykendall
    We're trying to move our internal ticketing system from a Microsoft Small Business Server in the server closet to a Rackspace Cloud Server. The install is Fedora 11 LAMP, and should be default out of the box, except for the vhosts appended to the bottom of the httpd.conf. The new server is suffering from crippling load times, and watching the page load in Firebug, it's easy to see the problem occurring, but I can't figure out the cause. Here is the [old server](http://rkuykendall.com/uploads/old.server.png). I was expecting something like this, but a little slower, since it was no longer hosted locally. Instead, the [new server](http://rkuykendall.com/uploads/new.server.png) appears to only serve one file at a time. Here's another example of this [staircase load time effect](http://rkuykendall.com/uploads/staircase.png) and another very clear example of the [staircase effect](http://rkuykendall.com/uploads/staircase2.png). I talked to some guys on Freenode #httpd with no luck. I created a duplicate server to play with, and also created a fresh server with Fedora Core 13 and moved over just the database and web files, with no luck. Any suggestions? ( image links disabled due to n00b-spam-restrictions )
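
    That staircase pattern usually means the browser's parallel connections aren't being serviced, so connection handling is the first thing to rule out. The settings below are stock httpd.conf directives; the values are illustrative, not a tested tuning:

      # Let each connection serve several requests in a row.
      KeepAlive On
      MaxKeepAliveRequests 100
      KeepAliveTimeout 5
      # Also confirm the prefork pool isn't starved down to a single worker,
      # e.g. StartServers 8 / MaxClients 256 in the prefork section.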


  • Windows 8 "Upgrade Offer" eligibility when running the Consumer Preview in a VM?

    - by Dan Harris
    If I have a VM running Windows 8 Consumer/Release Preview, am I allowed to take advantage of the Windows 8 upgrade offer and install it on that machine? I would have assumed not... as there was never a licensed version of XP SP3 through to Windows 7 installed in that VM; it was a clean installation of the Consumer Preview into a VM. My confusion comes from the notes at the bottom of the download page for the upgrade offer, which state: Offer valid from October 26, 2012 until January 31, 2013 and is for individuals and small businesses needing to upgrade up to five devices. If you are a business customer looking to upgrade more than five devices to Windows 8 Pro, contact your Microsoft partner for more information. To install Windows 8 Pro, customers must be running Windows XP SP3, Windows Vista, Windows 7, Windows 8 Consumer Preview, or Windows 8 Release Preview. I am assuming it's not possible and I'll need to purchase the System Builder edition to install within a VM? My guess is that you can use your downloaded upgrade offer only if you updated Windows 7 to the Release Preview, and therefore had the Windows 7 license on the machine. I used the serial number from the Microsoft website when downloading the Release Preview and did a clean install, so there was never a Windows 7 license on the VM. I have MSDN for development purposes, but I am looking to run in a VM for personal use as well, so my MSDN license is not valid for that particular use.


  • Is there anything like Heroku for PHP and/or .NET?

    - by Wayne M
    In my area PHP is very widespread, as is .NET. Ruby not so much; most places have never heard of it. For some personal things I am "forced" to choose Rails because I want to take advantage of Heroku - the ability to deploy and scale on the cloud very easily is the main reason. Also, they offer a small FREE plan, with no ads or strings attached, that I can use for demo sites or, in this case, for my business' static page; as a totally bootstrapped startup I have maybe $50 or so in initial capital and cannot afford to pay monthly fees while I'm getting started. Are there any similar offerings for other languages? Specifically, I really like the small, 5 MB site for free that Heroku offers - is there anything like that for PHP and/or .NET? I'm not even that concerned about the "cloud" part, but that would be a nice bonus. If there is, I might be able to kill two birds with one stone and pick up a useful skill as I'm doing my own thing, instead of using something that nobody else knows or cares about. I should add I'm specifically interested in something that offers a free plan. As I said, Heroku has a 5 MB plan, and you can have as many of those as you want for free; I have yet to find anything similar for any other platform (most of the "free" sites require you to have ugly banners on your page, or don't allow you to use your own domain name), and to be honest I'm not too thrilled about using Ruby on Rails for everything simply to take advantage of this. I'm asking this here because I already asked it on StackOverflow and someone suggested it would be better suited here.


  • Disaster Recovery Standby Server

    - by user64300
    Hi, I work for a small business with 25 users and 2 servers. One server is the DC, running Windows Server 2003/Exchange 2003. We want a reliable disaster recovery strategy for this server without having to spend a lot of money. We take regular backups, but I have been advised that only an identical server will allow them to be restored easily. I'm trying to come up with a solution that means we don't have to buy two servers at twice the cost every time we upgrade. I'm toying with the idea of upgrading our DC more frequently (say every 3 years) and then using the old server as the recovery server (temporarily - until we can source a replacement server). However, I won't know whether the backups will restore on the old server until I try it! We're planning to upgrade to Server 2008 R2 in the near future, so I'm hoping the backup tools will give me some success in restoring to different hardware (or perhaps I can use Hyper-V if not). So what I am wondering is whether it is a good idea to use old hardware as a disaster recovery strategy (provided we regularly test it, obviously!).
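
    Server 2008 R2's built-in backup can take bare-metal-capable images, which is what makes restoring to different hardware - or into a Hyper-V guest for a test restore - plausible. A sketch of the nightly job (the target drive letter is an example):

      rem Bare-metal-capable backup of all critical volumes.
      wbadmin start backup -backupTarget:E: -allCritical -quiet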


  • NAS for Mac OS X Server

    - by SamAdmin
    I'm using Mac OS X Server and want to allow the users who connect to their network accounts to store their data on a NAS drive. I want the users to connect to the Lion server, as this allows for better policies and management for me, and for their AFP share to be located on a NAS drive. I've looked into home directories and network logins; however, I don't want the users to connect into a different login environment - just an authentication against their provided account on the Lion server, with their Finder taking them to their own storage area, located on the NAS drive. Currently I am using FreeNAS for both authentication and storage; however, there are getting to be far too many people to manage each AFP share and account, plus just using FreeNAS is extremely limiting for expansion, and if something goes wrong with one entity the entire system goes down. Using the Lion server for user accounts and policies will be much better for this expanding business. I have looked into LDAP - using the Lion server as an LDAP server to authenticate against for FreeNAS - however, I have had issues with this and thought a different approach could be better, from the other side of the situation: providing the account with somewhere to store data, rather than the AFP share authenticating against an LDAP server. Am I wrong to try it this way? Is it possible to logically add storage to a Mac OS X Server which can be recognised as a local drive, so it can be used for network accounts?
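
    On the last question: an NFS mount (or an iSCSI LUN via a third-party initiator) is the usual way to make NAS capacity appear as a local path on OS X Server. A minimal NFS sketch - the hostname and export path are examples, and some NAS exports need the resvport option on OS X:

      # Mount a FreeNAS export where the server expects local storage.
      sudo mkdir -p /Volumes/userdata
      sudo mount -t nfs -o resvport freenas.local:/mnt/tank/userdata /Volumes/userdata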


  • How to track things that SHOULD happen, but might not have

    - by Kamiel Wanrooij
    I am running into a couple of issues with some applications we've deployed and maintain. I have the feeling we have approached this with some anti-patterns up to now, but I would like to see how to make this more flexible and stable. In one situation, we have a server at a client which pushes data to us to parse every night (yes, Windows Task Scheduler). This is highly unstable, however, and about once a month it doesn't happen, for reasons outside our control. This heavily impacts our business, since we run with stale data in that situation. In another scenario we have a lot of background job processes that should be running. We already keep them up using bluepill ( http://www.github.com/arya/bluepill ), but obviously restarts happen, both automatically and manually, and people forget things or systems mess up. What I would like to track is events that should occur and resources that should exist - the existence of a process, the execution of a program, or the creation/age of a file - and be alerted when they don't happen or don't exist. We develop most things in Ruby on Rails, use NewRelic, Bluepill and Munin, and run on Ubuntu. I've been toying around with counting ps aux | grep processname | wc -l in Munin scripts, or capturing the age of a file and raising alerts over 24-26 hours, stuff like that. Is there better tooling to track things that should happen, and raise alerts if they don't? P.S. I know some things are suboptimal, like manually having to define bluepill for applications and then forgetting to do so. The same goes for the push-based approach of the first application; a dedicated daemon on the client side that we control, whose connection to us we can track, might be a much better solution.
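
    The file-age case sketches naturally as a cron-driven dead man's switch (the paths and address are examples); the same shape covers process checks by swapping the find for a pgrep:

      #!/bin/sh
      # Alert if the nightly drop is older than 26 hours (1560 minutes) or missing.
      FILE=/srv/imports/latest.csv
      if [ -z "$(find "$FILE" -mmin -1560 2>/dev/null)" ]; then
          echo "stale or missing: $FILE" | mail -s "nightly import missed" ops@example.com
      fi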


  • SBS 2008 SharePoint database: good memory limit?

    - by ldelgado
    I manage a small network running on Small Business Server 2008. Lately, the SharePoint embedded database is getting out of control with its memory usage. I've got a total of 16 GB of RAM on this server, and the SharePoint database sometimes uses almost 8 GB of RAM. This never happened before, and it started happening after I installed Backup Exec 2010. It happens after a backup is performed, so I suspect there is a memory leak involved. I am working on that issue, but this question isn't about that. I would like to limit the amount of memory the embedded database uses. I know how to do it. My question is: what would be the ideal amount of memory to allocate to SharePoint? There are only 4 users on my network. One of the users uses two computers, but not at the same time. They use SharePoint for a company calendar, and sometimes they share files that way also. Let me know if you need to know anything else. Thanks,
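
    For scale: with four light users, a cap in the 1-2 GB range is a common starting point rather than a hard rule. For reference, a sketch against the SBS-default Windows Internal Database instance (adjust the pipe name if SharePoint runs on a different instance):

      rem Cap the embedded instance at 2 GB.
      sqlcmd -E -S \\.\pipe\MSSQL$MICROSOFT##SSEE\sql\query ^
        -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'max server memory', 2048; RECONFIGURE;"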


  • Security Token for Mac/Linux/Windows, self-managed, pref. open source?

    - by DevelopersDevelopersDevelopers
    I'm looking to buy an evaluation security token (combined smart card/USB reader) for my business that works on:
    > Windows 7 x64
    > OS X 10.6.x x64
    > Ubuntu Linux (64 or 32 bit, 10.04 or 10.10; I can bend based on possible tokens)
    Functionality I need is:
    > Login authentication
    > Authentication for whole-disk encryption (in Linux/Windows; Mac is flexible here)
    > Signing/encryption using PGP and x.509 certificates
    > RSA-2048 key-capable (1024 is not good enough)
    > I can manage the certificates myself
    > Open-source middleware/drivers (not necessarily FOSS, just source available. I can flex on this; I just want to be able to audit the code. OpenSC-compatible on Linux would be great.)
    Is there any token that can do all of this? Or would I need multiple ones to accomplish this? Or do I need to look at smart cards and readers to get this? I have been researching this for a while and have had a heck of a time even getting accurate information about products. Also, I am in the USA, and it appears that EU export laws prevent me from buying from there, so those vendors are out. I was looking at Feitian tokens from Gooze, but since they are in France I can't buy.
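
    For the OpenSC-compatibility requirement, the evaluation itself is scriptable once a candidate token arrives; both tools below ship with OpenSC on all three platforms:

      # Is the reader/token visible at all?
      opensc-tool --list-readers
      # Can the PKCS#15 layer see the keys, and at what sizes?
      pkcs15-tool --list-keys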


  • Router as primary DNS server, Server as alternate? (or vice versa)

    - by Jakobud
    We have a very small business network, with a typical cable modem hooked into a DD-WRT router. We also run a basic CentOS server that does a variety of things, including acting as the primary DNS server for the office. The reason we need an internal DNS server is that we do a lot of internal web development and use the DNS server to add/remove various local network URLs for internal website testing (like www.testsite.com.local). It's very important for us to be able to add/remove URL aliases easily in the DNS. The problem with this setup is that if we ever need to restart the CentOS server or take it offline for upgrades or whatever, then internet access for all computers on the network is lost. That's because each computer relies on that DNS server to access the Internet, I guess? The router is online all the time and very rarely has to be restarted. It would be nice if we could set up the router to be the primary DNS server while still running DNS on the server. So we could still add our local testing website URLs to the DNS server in CentOS, but also be able to take down the CentOS server without losing Internet access on the network. How would this be set up? Would I simply need to add both router + server IP addresses to each computer's IP settings? Is the router the primary DNS server and the server the secondary, or vice versa? Or can one of the two serve as a fallback for the other? What (if anything) needs to be configured on both the router and server in order for them to recognize that the other DNS server exists on the network? Does anyone have any newb-friendly resources for setting up something like this?
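
    DD-WRT's dnsmasq can do exactly this split: answer general queries itself (forwarding upstream) while sending only the internal test zone to the CentOS box, and hand clients both resolvers via DHCP. A sketch for the router's "Additional DNSMasq Options" field (the IPs and zone are examples):

      # Forward the internal test zone to the CentOS DNS server.
      server=/local/192.168.1.10
      # DHCP option 6: clients get the CentOS box first, the router as fallback.
      dhcp-option=6,192.168.1.10,192.168.1.1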


  • Network bandwidth usage dashboard?

    - by SkippyFlipjack
    I have a couple of wifi access points hooked up to my home network, one of which I keep unsecured for some development I do; there are only a couple of other homes within range and they've got their own wifi, so it's not a big concern. I also have a Sonos system, Tivo, Roku, a couple of laptops, a couple of phones, an iPad and a desktop machine, all of which are internet-smart. So when my internet bandwidth tanks and it takes five minutes to load a YouTube video, I want to know what's going on, and there are many potential culprits. I'd like to be able to plug my MacBook into the primary router and see a nice little dashboard of the units on the network and what kind of bandwidth each is using at that moment. I could figure this out with Wireshark or tcpdump, but I figure there has to be an easier way. I've tried a few different commercial products, but none really presented the right info. Suggestions? (This may be a question for superuser since my Apple Time Capsule's SNMP capabilities are limited, but I figure admins of small business networks would have dealt with the same issue.)
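
    Short of SNMP polling, a live per-host view from a machine that can see the traffic (inline, or on a mirrored port) is the quickest win; iftop is a typical choice and installs via Homebrew or MacPorts on a Mac (the interface name is an example):

      # Live table of talkers, sorted by bandwidth, with DNS lookups off.
      sudo iftop -i en0 -n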


  • Experience with asymmetrical (non-identical hardware) SQL Server 2005 / Win 2003 cluster

    - by user24161
    I am reasonably good at dealing with SQL Server clusters; I am wondering if folks have experience, good or bad, using a mix of different models of servers from the same vendor in one SQL 2005 cluster. Suppose: I have one more powerful, more RAM, more shizzle box and one less powerful, less memory, less shizzle box bound together in a 2-node cluster. These would be HP DL380 and 580 machines (not that it should matter). I understand AND automate the process of managing memory for each SQL instance, so there's no memory contention when SQL instances fail over. Basically I am thinking a CLR proc will monitor the instances and self-regulate memory caps on each instance, so that they won't page or step on one another. I get the fact that the instances might be slower and/or under memory pressure if they share a "lesser" node, and that's OK. The business can deal with a slower instance in a server-problem scenario. Reasonable? Any "gotchas" to watch out for? More info 10/28: Doing some experiments with a test cluster, I find that reconfiguring max/min memory is OK PROVIDED the instance isn't already under memory pressure. If I torture the system with a huge query that demands a big chunk of RAM, and simultaneously adjust the memory allocation to a smaller value than what is being actively used, it's possible to run the instance out of memory and have it halt and restart itself (unhappy situation). Many ugly out-of-memory messages in the error log, crashing, burning... It's an extreme case, but good to know. It seems, then, that it would only be really safe to set this on startup of the instance, as in have a startup script that says "I am on node 1, so my RAM settings are X; I am on node 2, so they are Y," like this: http://sqlblog.com/blogs/aaron_bertrand... Update: I am testing a SQL Agent + PowerShell solution described in more detail here.
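
    A sketch of that node-aware startup logic in T-SQL (node names and caps are examples; SQL 2005 syntax, so the variable is declared and set separately, and 'show advanced options' is assumed to be enabled already):

      -- Cap memory based on which physical node currently hosts the instance.
      DECLARE @node sysname;
      SET @node = CAST(SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS sysname);
      IF @node = 'NODE1'
          EXEC sp_configure 'max server memory', 12288;  -- bigger box
      ELSE
          EXEC sp_configure 'max server memory', 4096;   -- smaller box
      RECONFIGURE;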


  • Best way to restrict access to a folder in Dropbox

    - by Joe S
    I run a business with around 10 staff members, and we currently use Dropbox Pro 100 GB to share all of our files. It works very well and is inexpensive; however, I am taking on a number of new staff and would like to move the more sensitive documents into their own, protected folder. Currently, we all share one Dropbox account. I am aware that Dropbox for Teams supports this, but it is far too expensive for us as a small company. I have researched a number of solutions:
    1) Set up a new standard Dropbox account just for use by management, which will contain all of the sensitive documents, and join the shared folder of the rest of my team to access the rest of the documents. As I understand it, this is not possible with a free account, as any Dropbox shared folder added to your account will use up your quota.
    2) Set up some sort of TrueCrypt container, install TrueCrypt on each trusted staff member's machine, and store the documents inside that. Would this be difficult to use? I'd imagine the syncing would not work so well, as the disk would technically be mounted at the time of use and any changes would be a change to the actual container rather than to individual files.
    I was just wondering if anyone knows a way to do this without the drawbacks outlined above? Thanks!
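
    On option 2, the setup is a one-off per machine, and the sync concern is softened by Dropbox's block-level delta sync; the known gotcha is TrueCrypt's "preserve modification timestamp" option, which must be turned off or Dropbox won't notice the container changed. A sketch with TrueCrypt's text-mode CLI (the file names are examples; creation prompts for size and cipher):

      # Create the container inside the shared Dropbox folder.
      truecrypt -t -c ~/Dropbox/management/sensitive.tc
      # Mount it only on trusted machines.
      truecrypt -t ~/Dropbox/management/sensitive.tc /media/sensitive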


  • Choosing local versus public domain name for Active Directory

    - by DSO
    What are the pros and cons of choosing a local domain name such as mycompany.local versus a publicly registered domain name such as mycompany.com (assuming that your org has registered the public name)? When would you choose one over the other? UPDATE: Thanks to Zoredache and Jay for pointing me to this question, which had the most useful responses. That also led me to find this Microsoft Technet article, which states: It is best to use DNS names that are registered with an Internet authority in the Active Directory namespace. Only registered names are guaranteed to be globally unique. If another organization later registers the same DNS domain name, or if your organization merges with, acquires, or is acquired by another company that uses the same DNS names, then the two infrastructures cannot interact with one another. Note: Using single-label names or unregistered suffixes, such as .local, is not recommended. Combining this with mrdenny's advice, I think the right approach is to use either:
    > A registered domain name that will never be used publicly (e.g. mycompany.org, mycompany.info, etc.)
    > A subdomain of an existing public domain name that will never be used publicly (e.g. corp.mycompany.com)
    The "never used publicly" part is a business decision, so it's probably best to get sign-off from those in the company authorized to reserve domain names and subdomains. E.g. you don't want to use a registered name or subdomain that the marketing dept later wants to use for some public marketing campaign.

