Search Results

Search found 4884 results on 196 pages for 'ad hoc distribution'.

Page 159 of 196

  • OSX Server 10.5 - Cannot log into Workgroup Manager - diradmin password is correct

    - by Mister IT Guru
    I've got a setup where I am trying to rescue a broken Open Directory. We can no longer authenticate in Workgroup Manager; passwords are rejected every time, even though they are correct. I can connect using Workgroup Manager on another server and I get the user list as expected, but when I click the padlock to make changes, the password is rejected again. The problem is, I know the password is correct - I just used it to connect to the server in the first place. I can log into the server using the local admin, and services such as AFP, VPN and SMB continue to serve users. I have about 300 or so users on this server, and I would very much like to avoid a rebuild. As much configuration has been done without my knowledge (it's a client machine), I'd like to attempt to fix it, then create another server, migrate OD off this broken machine, and decommission it "gently". Ultimately this would mean no disruption of services. What I'd like is some tips on how to fix the authentication problem in Workgroup Manager, and on Open Directory maintenance in general. Thanks
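    One low-risk first check (a sketch, assuming the default LDAP node on the server itself) is to test the diradmin bind directly with dscl, which bypasses Workgroup Manager and shows whether the directory itself is rejecting the password:

        # Test authentication against the local OD LDAP node (prompts for the password)
        dscl /LDAPv3/127.0.0.1 -authonly diradmin

        # If that fails, inspect the diradmin record in the LDAP node
        dscl /LDAPv3/127.0.0.1 -read /Users/diradmin

    If -authonly succeeds but the Workgroup Manager padlock still rejects the password, the problem is more likely in the WGM-to-server connection than in Open Directory itself.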

    Read the article

  • How to connect a VM running on an ESXi host to that host via a VMKernel NIC?

    - by Zac B
    Say I have an ESXi (5.0) host that runs a Linux distribution which hosts iSCSI targets, which contain the images for other VMs that the same host will run. When it's used, I'll start the host first, then the iSCSI server, and then refresh all storage targets/HBAs in order to see the provided shares as online. I know it's a strange puzzle-box solution, but I was told to implement it. The ESXi host itself has a gigabit NIC which connects to the outside world. The guest OS (CentOS) supports VMXNET3, however, and if I can, I'd like to use its VMXNET3 NIC to serve iSCSI to the ESXi host. How should I go about doing this? I went to create a new virtual network and selected "VMkernel", as the wizard suggested that network type for SAN traffic, but it is apparently not set up for "self-hosted" SAN hosts: the new network did not appear as an option to attach the CentOS box's VMXNET3 NIC to. How should I best connect an iSCSI host back to its "parent" ESXi host, if I need a) a 10Gb connection, and (optionally) b) a VMkernel network for it?
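    The usual pattern (a sketch; the vSwitch, portgroup names and addresses are invented for illustration) is to put a VM portgroup for the guest and a VMkernel port on the same internal-only vSwitch with no physical uplink. Traffic between them then stays in host memory and is not limited by the gigabit NIC:

        # Internal-only vSwitch (no uplink): guest <-> VMkernel traffic stays on the host
        esxcli network vswitch standard add --vswitch-name=vSwitchISCSI

        # VM portgroup for the CentOS guest's VMXNET3 NIC
        esxcli network vswitch standard portgroup add --portgroup-name=iSCSI-VM --vswitch-name=vSwitchISCSI

        # VMkernel portgroup + interface for the host's iSCSI initiator
        esxcli network vswitch standard portgroup add --portgroup-name=iSCSI-VMK --vswitch-name=vSwitchISCSI
        esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-VMK
        esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.100.1 --netmask=255.255.255.0 --type=static

    Then attach the guest's VMXNET3 NIC to the iSCSI-VM portgroup in the VM's settings, give it an address in the same subnet, and bind the software iSCSI initiator to vmk1. A VMkernel port never appears as an attachment option for a VM NIC; the VM always attaches to a VM portgroup on the same vSwitch instead, which explains what you saw in the wizard.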

    Read the article

  • Tracking Security Vulnerability remediation

    - by Zypher
    I've been looking into this for a little while, but haven't really found anything suitable. What I am looking for is a system to track security vulnerability remediation status - something like "Bugzilla for IT". I want something pretty simple that allows the following:
    - batch entry of new vulnerabilities that need to be remediated
    - per-user assignment
    - AD/LDAP authentication
    - a simple interface to track progress - research, change control status, remediated, etc.
    - historical search ability
    - ability to divide by division
    - ability to store proof of resolution for the Security Team to access
    - dependency tracking
    Linux-based is best (that's my group :) ). Free is good, but cost doesn't matter so much if the system is worth it. The system doesn't have to have all of these features, but if it did that would be great. Yes, we could use our helpdesk software, but that has a bunch of pitfalls, such as triggering SLA alerts and penalties, and not being easily searchable outside of a group. Most of what I have found are bug tracking systems that are geared towards developers and are honestly way overkill for what I am looking for. Server Fault's input is greatly appreciated as always!

    Read the article

  • Mysql, SSL and java client problem

    - by CarlosH
    I'm trying to connect to an SSL-enabled MySQL server from my own Java application. After setting up SSL on mysqld and successfully testing an account using "REQUIRE ISSUER AND SUBJECT", I wanted to use that account in a Java app. I've generated a private key (in a file called keystore.jks) and a CSR using keytool, and signed the CSR using my own CA (the same one used with mysqld and its certificate). Once the CSR was signed, I imported the CA and client certs into the keystore.jks file. When running the application, the SSL connection can't be established. Relevant logs:

        [Raw read]: length = 5
        0000: 16 00 00 02 FF .....
        main, handling exception: javax.net.ssl.SSLException: Unsupported record version Unknown-0.0
        main, SEND TLSv1 ALERT: fatal, description = unexpected_message
        Padded plaintext before ENCRYPTION: len = 32
        0000: 02 0A BE 0F AD 64 0E 9A 32 3B FE 76 EF 40 A4 C9 .....d..2;.v.@..
        0010: B4 A7 F3 25 E7 E5 09 09 09 09 09 09 09 09 09 09 ...%............
        main, WRITE: TLSv1 Alert, length = 32
        [Raw write]: length = 37
        0000: 15 03 01 00 20 AB 41 9E 37 F4 B8 44 A7 FD 91 B1 .... .A.7..D....
        0010: 75 5A 42 C6 70 BF D4 DC EC 83 01 0C CF 64 C7 36 uZB.p........d.6
        0020: 2F 69 EC D2 7F /i...
        main, called closeSocket()
        main, called close()
        main, called closeInternal(true)
        main, called close()
        main, called closeInternal(true)
        connection error com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure

    Any idea why this is happening?
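    One thing worth checking (a sketch; the property names are standard JSSE/Connector/J, but the URL, paths and class name are illustrative stand-ins): whether the driver is actually being told to use SSL and where the keystore/truststore live. With MySQL Connector/J that is typically done with useSSL/requireSSL in the JDBC URL plus the javax.net.ssl system properties, e.g.:

        java \
          -Djavax.net.ssl.keyStore=/path/to/keystore.jks \
          -Djavax.net.ssl.keyStorePassword=changeit \
          -Djavax.net.ssl.trustStore=/path/to/truststore.jks \
          -Djavax.net.ssl.trustStorePassword=changeit \
          -cp mysql-connector-java.jar:. MyApp \
          "jdbc:mysql://dbhost:3306/mydb?useSSL=true&requireSSL=true"

    The "Unsupported record version Unknown-0.0" alert usually means the client received bytes that are not a TLS record at all - for example, the server speaking the plain MySQL protocol because SSL was never negotiated - so confirming the URL flags is a cheap first step.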

    Read the article

  • Hyper-V 2008 R2 synthetic networking stops working with linux 2.6.32.15

    - by luxifer
    Hi there, I thought I'd give Hyper-V on Windows Server 2008 R2 Enterprise a try on my home server (yes, it's legit... got it from MSDNAA). The first thing to throw at it was my firewall, which runs IPFire. This distribution currently uses kernel version 2.6.32.15 and comes with the Hyper-V drivers. So I enabled them, and at first they work just fine, but after a few minutes they simply fail. No packets go in or out anymore until I reboot the VM, and sometimes even that won't work, so the VM just stays in "Stopping" forever. Emulated networking works fine, but it is slow and uses more CPU - my firewall routes more slowly this way than it did under VirtualBox on an Atom N270. My server has an E6750; the VM is limited to 25%, but that should still outperform the Atom, especially since it never goes anywhere near 100% CPU load, so give me a break! A quick Google search led me to people having the same problem (even with other distributions and kernel versions that include those drivers) but no solution yet... I already found this, but I can't quite follow the author on the part where he solved the issue - especially since I need two virtual NICs for my firewall distro to work (obviously one internal and one external). What am I missing here?

    Read the article

  • Kubuntu: apt-get install of php5-dev: libtool version mismatch?

    - by pinkgothic
    (Warning, clueless-newbism ahead.) Background info: I'm actually trying to install/upgrade xdebug. sudo pecl install xdebug yields:

        downloading xdebug-2.0.5.tgz ...
        Starting to download xdebug-2.0.5.tgz (289,234 bytes)
        .....................done: 289,234 bytes
        67 source files, building
        running: phpize
        sh: phpize: not found
        ERROR: `phpize' failed

    A quick Google tells me that phpize is part of a package called php5-dev, so off I ran to install that. My problem is that sudo apt-get install php5-dev fails with this output:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:
        The following packages have unmet dependencies:
          php5-dev: Conflicts: libtool (>= 2.2) but 2.2.6a-4 is to be installed
        E: Broken packages

    2.2.6a-4 is greater than 2.2, so I'm not sure why it's hanging itself up at that point. I'm guessing the fact that the version isn't entirely numeric is throwing apt-get off? I can probably install xdebug manually (though I've never done this before, so picture me with a clueless-newb deer-in-headlights look here, violently shaking my head and begging for a simpler solution) rather than via pecl/aptitude, but is there a way I can make aptitude install php5-dev despite the bogus 'broken package' claim? Is it even bogus, or am I misreading the error message? Alternatively: could I install phpize some other way (e.g. via pear or pecl)?
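    Worth noting when reading that message: "Conflicts: libtool (>= 2.2)" means php5-dev declares a conflict with any libtool 2.2 or newer, so 2.2.6a-4 satisfying ">= 2.2" is exactly the problem rather than a version-parsing bug. A hedged way to inspect and resolve it (standard apt commands; the resolution aptitude offers will vary by system):

        # See which versions of each package apt knows about, and from where
        apt-cache policy libtool php5-dev

        # Show php5-dev's declared conflicts/dependencies
        apt-cache show php5-dev | grep -E 'Conflicts|Depends'

        # aptitude can propose resolutions, e.g. downgrading libtool
        sudo aptitude install php5-dev

    If aptitude offers to downgrade libtool below 2.2, accepting that is usually the least invasive fix; otherwise a mixed-repository setup (e.g. newer packages layered on an older base) is the likely culprit.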

    Read the article

  • Portable USB drives hidden partition - New request

    - by ZXC
    This question was asked by Francesco on Jul 29 '11 at 17:14, and the replies were not satisfactory because they did not address an important problem: why would anyone want to make certain data accessible to a program but not to the users? For example, if I want to do a safe distribution of original music for demonstration purposes, I need several things:
    1) The music should be playable using a simple procedure, like selecting the name of each song in a media player's playlist.
    2) The portable medium, usually a portable USB drive, must completely hide the files that contain the audio data and make them inaccessible to anything but the media player, which must live in the first, visible partition.
    3) Considering that it's impossible to really hide files in a non-hidden partition, a second, hidden partition should be created on the USB drive and the audio data stored there.
    4) The trick is to read the audio data files stored in the hidden partition with a media player stored in the visible partition; the media player should also be a completely standalone program, independent from any library of the operating system except the OS audio system.
    5) The hidden partition should have a copy protection scheme that prevents copying the data or creating working ISO images of it.
    I know this description may not be technically accurate, but it follows a complete logic from the needs of a music producer facing piracy. The philosophy behind the concept is to turn a virtual object, like a digital audio stream, into a solid object, the way analog vinyl discs are.

    Read the article

  • PostgreSQL failover cluster on Windows Server

    - by user36997
    We are looking for advice on how to set up a basic failover cluster for our application:
    - We will be using 4 machines running Microsoft Windows Server (most probably 2003).
    - All four will always run our application, which is essentially a web service.
    - Load balancing is "outsourced" - somebody else handles the distribution of web requests among the servers.
    - Only one of the servers will be running the PostgreSQL server actively at any given time. Another server (of the four) also has the DB installed, but is on standby/passive.
    - The DB data is stored on shared storage. No copying data between servers.
    - Reads are done very frequently by many end-users, and in rather small chunks of data. Writes are done much less frequently, by fewer users, and in very large bulks of data.
    Now, how can one configure Microsoft Cluster Service to keep only one instance of the DB server and 4 instances (1 per server) of our application at all times? And does PostgreSQL integrate neatly with MSCS at all?
    Update: Instead of keeping the data on shared storage, I am also considering using log shipping to replicate data to a couple of DB servers. There are two issues with this option:
    - Log shipping only makes sure that I have a second server that gets all of the data and is ready to take over. How do I implement the actual failure detection and failover switch?
    - Switching back: suppose the master fails and the system automatically fails over to the slave, and later the master comes back online. I understand that with WAL shipping this will require reconfiguring the log shipping once again, and that switching back is far from seamless. Is that so?
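    For the log-shipping variant, the warm-standby mechanics on 8.x-era PostgreSQL look roughly like this (a sketch; paths, share names and the trigger file are invented for illustration, and failure detection still has to come from MSCS or a custom watchdog):

        # postgresql.conf on the primary: ship completed WAL segments to the standby
        archive_mode = on
        archive_command = 'copy "%p" "\\\\standby\\walarchive\\%f"'

        # recovery.conf on the standby: replay segments as they arrive;
        # creating the trigger file ends recovery and promotes the standby
        restore_command = 'pg_standby -t C:\\failover.trigger C:\\walarchive %f %p'

    pg_standby waits for each segment, so the standby stays in continuous recovery until the trigger file appears. Detecting the primary's death and creating that file is exactly the part MSCS (or a script watching the primary) must provide, and after a failover the old primary has to be rebuilt from a fresh base backup before it can become the new standby - which matches your suspicion that switching back is not seamless.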

    Read the article

  • Intel D2500HN Atom D2500 Doesn't turn on

    - by David W
    I recently bought parts from Amazon to build an embedded PC and have assembled everything. I have:
    - Intel D2500HN Mini-ITX motherboard (Atom D2500)
    - Mini-Box picoPSU-80
    - M350 universal Mini-ITX enclosure
    - 2GB DDR3 memory
    - Kinamax AD-LCD12 12V 6A 72W AC adapter power supply
    The motherboard gets a light (on the motherboard, not on the picoPSU) when I plug it into the power adapter, and I see the power switch light come on when I press the power button. However, the display doesn't turn on, and it doesn't seem that the PC actually powers up. Since I'm seeing these lights, I know the motherboard is getting power, and the VGA port is embedded in the motherboard, so that's not the issue. I'm just trying to figure out what COULD be the issue aside from a faulty motherboard. I have a diagram of the D2500HN which labels everything, and I've ensured that the power LED and the on/off cables are plugged into the right spots; to be sure, I've also tried flipping these two cables around, and plugging one cable into the other cable's spot and vice versa. Is there anything else you folks think I may be missing, or anything else I can do to troubleshoot this issue before sending the motherboard back?

    Read the article

  • directory services group query changing randomly

    - by yamspog
    I am seeing unusual behaviour in my ASP.NET application. I have code that uses Directory Services to find the AD groups for a given, authenticated user. The code goes something like:

        string username = "user";
        string domain = "LDAP://DC=domain,DC=com";
        DirectorySearcher search = new DirectorySearcher(domain);
        search.Filter = "(SAMAccountName=" + username + ")";

    I then query and get the list of groups for the given user. The problem is that the code used to receive the list of groups as a list of strings. With our latest release of the software, we are starting to receive the list of groups as byte[]. The system will return string, suddenly return byte[], and then with a reboot it returns string again. Anyone have any ideas? Code sample:

        DirectoryEntry dirEntry = new DirectoryEntry("LDAP://" + ldapSearchBase);
        DirectorySearcher userSearcher = new DirectorySearcher(dirEntry)
        {
            SearchScope = SearchScope.Subtree,
            CacheResults = false,
            Filter = ("(" + txtLdapSearchNameFilter.Text + "=" + userName + ")")
        };
        userResult = userSearcher.FindOne();
        ResultPropertyValueCollection valCol = userResult.Properties["memberOf"];
        foreach (object val in valCol)
        {
            if (val is string)
            {
                distName = val.ToString();
            }
            else
            {
                // fall back to decoding the raw bytes when AD returns byte[]
                // (enc is an Encoding instance defined elsewhere in the class)
                distName = enc.GetString((Byte[])val);
            }
        }

    Read the article

  • mysql cmd prompt import data.sql

    - by udhaya
    I want to import a SQL file using the cmd prompt. First I open the Windows cmd prompt, navigate to the xampp/mysql/bin folder and run mysql. This error occurs:

        D:\Program Files\xampp\mysql\bin>mysql
        ERROR 1045 (28000): Access denied for user 'ODBC'@'localhost' (using password: NO)

        D:\Program Files\xampp\mysql\bin>mysql -u root -p -h localhost dev1base < dev1base.sql
        Enter password:
        D:\Program Files\xampp\mysql\bin>

        D:\Program Files\xampp\mysql\bin>mysql -u root
        Welcome to the MySQL monitor.  Commands end with ; or \g.
        Your MySQL connection id is 104
        Server version: 5.0.51a Source distribution

        Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

        mysql> -h localhost dev1base < dev1base.sql
            ->
            ->
            ->
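    For what it's worth, the second invocation above is the right shape: the < redirection only works from the OS shell, not from inside the mysql> prompt, where "-h localhost dev1base < dev1base.sql" is just parsed as an unfinished SQL statement (hence the continuation arrows). A quick sketch of both working variants, using the file and database names from the question:

        D:\Program Files\xampp\mysql\bin>mysql -u root -p dev1base < dev1base.sql

        mysql> USE dev1base;
        mysql> SOURCE dev1base.sql;

    The silent return to the prompt after "Enter password:" in the transcript suggests the cmd-line import may in fact have run; checking the tables in dev1base would confirm it.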

    Read the article

  • VMWare Newbie - looking for hardware recommendations and help :) [closed]

    - by Dan
    I am looking for some hardware recommendations for an upcoming virtualization project. We are a small company (80 users - 25 in site 1, 55 in site 2) currently using Windows Server 2003 - no VM servers yet. Our AD is set up so that site 1 is the root domain and site 2 is a subdomain/subnet, connected by T1 with VPN for failover. The current DCs also serve as file servers, print servers, and antivirus servers. Email is in the cloud. Additionally, in site 1 we have 3 more member servers: one running IBM WebSphere for a customer-specific app, one running Infor PowerLink (no real heavy load), and another that we use for Visual Studio apps and that also runs DirSync for Exchange Online. No heavy workloads on any of these machines, really. We also have an AS/400 box running our ERP/CRM software, which site 2 connects to over the WAN link. In site 2 we also have a SQL machine running on Win2K Server; database files are not large, less than 5 GB, with a light-to-medium workload. File servers in each site store less than 500 GB of data and probably won't grow past 1 TB in the next 5 years. I am looking to go to VMware in both sites and virtualize all servers. What recommendations do you have for server and storage hardware? Is it safe to virtualize all of your DCs? Any help or advice would be greatly appreciated. Thanks.

    Read the article

  • Installing Tomcat on CentOS 5

    - by andybaird
    Disclaimer: I am not a server admin; I am a Windows user that has led a life of sinful installation wizards and drag and drop. I'm attempting to install Tomcat on CentOS 5 hosted on a MediaTemple dedicated virtual server. I basically followed this guide:
    - Installed JPackage and configured the yum.repos.d jpackage file to set enabled=1
    - Used yum to install Java (yum install java)
    - Downloaded the binary distribution of Tomcat with wget http://archive.apache.org/dist/tomcat/tomcat-6/v6.0.14/bin/apache-tomcat-6.0.14.tar.gz
    - Set JAVA_HOME to point at the JDK location I found, with export JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0/
    - Gunzipped/untarred the Tomcat files and ran ./startup.sh to start the Tomcat server
    That is supposed to put the Tomcat server at myserver.com:8080 - however, I just get a "could not contact host" error when I try to browse to it (or when I try curl localhost:8080 from SSH). After I type ./startup.sh, here is the console output:

        [root@myserver bin]# ./startup.sh
        Using CATALINA_BASE:   /root/apache-tomcat-6.0.14
        Using CATALINA_HOME:   /root/apache-tomcat-6.0.14
        Using CATALINA_TMPDIR: /root/apache-tomcat-6.0.14/temp
        Using JRE_HOME:        /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0/
        [root@myserver bin]#

    Is there a step I have missed here?
    Edit: Looking at the log, I've now discovered that the following error is occurring:

        Error occurred during initialization of VM
        Could not reserve enough space for object heap
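    That last error means the JVM asked for a larger heap than the VPS will grant (common on memory-capped virtual servers), so Tomcat dies before ever binding port 8080. A hedged fix is to cap the heap explicitly before starting Tomcat; the sizes here are illustrative and should be tuned to what the VPS allows:

        # Give the JVM a modest, explicit heap (catalina.sh reads JAVA_OPTS)
        export JAVA_OPTS="-Xms64m -Xmx128m"
        ./startup.sh

        # Confirm Tomcat is now listening
        curl -I http://localhost:8080/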

    Read the article

  • IndyTechFest Recap

    - by Johnm
    The sun had yet to rise above the horizon on Saturday, May 22nd, and I was traveling toward the location of the 2010 IndyTechFest. In my freshly awakened, pre-coffee state, I reflected on the months that preceded this day and how quickly they slipped away. The big day had finally come, and the morning dew glistened with a unique brightness that morning.

    What is this all about?
    For those who are unfamiliar with IndyTechFest, it is a regional conference held in Indianapolis and hosted by the Indianapolis .NET Developers Association (IndyNDA) and the Indianapolis Professional Association for SQL Server (IndyPASS). The event presents multiple tracks and sessions covering subjects such as Business Intelligence, Database Administration, .NET Development, SharePoint Development, and Windows Mobile Development, as well as non-Microsoft topics such as Lean and MongoDB. This year's event was the third hosting of IndyTechFest.

    No man is an island
    No event such as IndyTechFest is executed by a single person. My fellow co-founders, with their highly complementary skill sets and philanthropy, make the process very enjoyable. Our amazing volunteers and their aid were indispensable. The generous financial support of our sponsors made the event and its fabulous prizes possible. The spectacular lineup of speakers came from near and far to donate their time and knowledge. Our beloved attendees sacrificed the first sunny Saturday in weeks to expand their skill sets and network with their peers. We are deeply appreciative.

    Challenges in preparation
    With the preparation of any event come challenges. It is these challenges that make the process of planning an event so interesting. This year's largest challenge was the location of the event. In the past two years IndyTechFest was held at the Gene B. Glick Junior Achievement Center in Indianapolis. This facility has been the hub of the Indy technical community for many years. As the big day drew near, the facility's availability came into question due to some recent changes with those who operated it. We began our search for an alternative, and thankfully the Marriott Indianapolis East was available, very spacious, and willing to work within the range of our budget. Within days of our event, the decision to move proved to be wise, since the prior location had begun renovations to the interior. Whew! Always trust your gut.

    Every day it's getting better
    At the end of each year, we huddle together, review the evaluations, and identify an area in which the event could improve. This year's big opportunity for improvement resided in the prize give-away portion at the end of the day. In the 2008 event, admittedly, this portion was rather chaotic, rushed, and disorganized. This year, we broke the drawing into two sections, and each attendee received two tickets. The first ticket was a drawing for the mountain of books that were given away. The second ticket was a drawing for the big prizes: the 2 Xboxes, 3 laptops, and the iPad. We peppered the ticket drawings with gift card raffles and tossed t-shirts into the audience.

    If at first you don't succeed, try and try again
    In each year of IndyTechFest, we have offered a means for ad hoc sessions or discussion groups to pop up. To our disappointment, it was something that never quite took off. We have always believed that this unique type of session is valuable and wanted to figure out a way to make it work this year. A special thanks to Alan Stevens, who took on and facilitated the "open space" track and made it an official success.

    Share with your tweety
    When the attendee badges were designed, we decided to place an emphasis on the attendee's Twitter account as well as the event's hash tag (#IndyTechFest) to encourage some real-time buzz during the day. At the host table we displayed a Twitter feed for all to enjoy. It was quite successful and encouraged the use of social media. My badge was missing my Twitter account since it had recently changed. For those who care to follow my rather sparse tweets, my address is @johnnydata.

    Man, this is one long blog post!
    All in all it was a very successful event. It is always great to see new faces and meet old friends. The planning for the 2011 IndyTechFest will kick off very soon. We have more capacity for future growth and a truck full of great ideas. Stay tuned!

    Read the article

  • Possible DNS issue after a reinstall of Windows Server 2000 (get off my lawn)

    - by cop1152
    I just replaced a drive on a Win2000 server that replicates AD and issues DHCP at one of our offices. I successfully joined it to the domain, set up a range of IPs, etc., but am still having issues. I cannot RDC to it by name or IP. I can ping it, browse to it with Windows Explorer, and remote to it with some other software, but not RDC. The other issue is this: users are unable to authenticate against it. They receive the message 'username or password incorrect' (or something like that). Changes made on the main domain controller seem to take forever to trickle down. The most significant entry in the DNS Server log is Event ID 7062: "The DNS server encountered a packet addressed to itself." At least, I think it's significant. The Directory Service log shows numerous instances of Event ID 1265: "The attempt to establish a replication link with parameters failed with the following status: The DSA operation is unable to proceed because of a DNS lookup failure." Does this make any sense to anyone? I feel like it's something very simple that I am overlooking. Thanks in advance.

    Read the article

  • PASS Summit 2011 &ndash; Part III

    - by Tara Kizer
    Well, we’re about a month past PASS Summit 2011, and yet I haven’t finished blogging my notes! Between work and home life, I haven’t been able to come up for air in a bit. Now on to my notes…

    On Thursday of the PASS Summit 2011, I attended Klaus Aschenbrenner’s (blog|twitter) “Advanced SQL Server 2008 Troubleshooting”, Joe Webb’s (blog|twitter) “SQL Server Locking & Blocking Made Simple”, Kalen Delaney’s (blog|twitter) “What Happened? Exploring the Plan Cache”, and Paul Randal’s (blog|twitter) “More DBA Mythbusters”. I think my head grew two times in size from the Thursday sessions. Just WOW!

    I took a ton of notes in Klaus' session. He took a deep dive into how to troubleshoot performance problems. Here is how he goes about solving a performance problem:
    - Start by checking the wait stats DMV
    - System health
    - Memory issues
    - I/O issues
    I normally start with blocking and then hit the wait stats. Here’s the wait stat query (Paul Randal’s) that I use when working on a performance problem. Klaus highlighted a few waits to be aware of, such as WRITELOG (indicates an I/O subsystem problem), SOS_SCHEDULER_YIELD (indicates a CPU problem), and PAGEIOLATCH_XX (indicates an I/O subsystem problem or a buffer pool problem). Regarding memory issues, Klaus recommended that as a bare minimum, one should set “max server memory (MB)” in sp_configure so that 2GB or 10% is reserved for the OS (whichever comes first). This is just a starting point, though! Regarding I/O issues, Klaus talked about disk partition alignment, which can improve SQL I/O performance by up to 100%. You should use a 64KB NTFS cluster size; alignment is automatic in Windows 2008 R2.

    Joe’s locking and blocking presentation was a good session to really clear up the fog in my mind about locking. One takeaway that I had no idea could be done: you can set a lock timeout in T-SQL code via SET LOCK_TIMEOUT. If you do this via the application, you should trap error 1222.

    Kalen’s session went into execution plans. The minimum size of a plan is 24KB. This adds up fast, especially if you have a lot of plans that don’t get reused much. You can use sys.dm_exec_cached_plans to check how often a plan is being reused by checking the usecounts column. She said that we can use DBCC FLUSHPROCINDB to clear out the stored procedure cache for a specific database. I didn’t know we had this available, so this was great to hear; it will be less intrusive in an emergency than running DBCC FREEPROCCACHE. Kalen said one should enable “optimize for ad hoc workloads” if you have an ad hoc workload. This stores only a 300-byte stub of the first plan, and if it gets run again, it’ll store the whole thing. This helps with plan cache bloat. I have a lot of systems that use prepared statements, and Kalen says we can simulate those calls by using sp_executesql. Cool!

    Paul did a series of posts last year to debunk various myths and misconceptions around SQL Server. He continues to debunk things via “DBA Mythbusters”; you can get a PDF of a bunch of these here. One of the myths he went over is the number of tempdb data files that you should have. Back in 2000, the recommendation was to have as many tempdb data files as there are CPU cores on your server. This no longer holds true due to the numerous cores we have on our servers. Paul says you should start out with 1/4 to 1/2 the number of cores and work your way up from there. BUT! Paul likes what Bob Ward (twitter) says on this topic:
    - 8 or fewer cores: set the number of files equal to the number of cores
    - More than 8 cores: start with 8 files and increase in blocks of 4
    One common myth out there is to set your MAXDOP to 1 for an OLTP workload with high CXPACKET waits. Instead of that, dig deeper first: look for missing indexes and out-of-date statistics, increase the “cost threshold for parallelism” setting, and perhaps set MAXDOP at the query level. Paul stressed that you should not plan a backup strategy but instead plan a restore strategy. What are your recoverability requirements? Once you know that, plan out your backups.

    As Paul always does, he talked about DBCC CHECKDB and how fabulous it is. I didn’t want to interrupt the presentation, so after his session had ended, I asked Paul about the need to run DBCC CHECKDB on your mirror systems. You could have data corruption occur at the mirror and not at the principal server. If you aren’t checking for data corruption on your mirror systems, you could be failing over to a corrupt database in the case of a disaster or even a planned failover. You can’t run DBCC CHECKDB against the mirrored database, but you can run it against a snapshot off the mirrored database.
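    A quick sketch of three of the settings called out above, in T-SQL (the values are the sessions' examples and illustrations, not recommendations for any particular server):

        -- Lock timeout for the current session: wait at most 5 seconds for a lock,
        -- then fail with error 1222 ("Lock request time out period exceeded")
        SET LOCK_TIMEOUT 5000;

        -- Cache only a 300-byte stub on first execution of an ad hoc plan
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'optimize for ad hoc workloads', 1;
        RECONFIGURE;

        -- Leave headroom for the OS by capping SQL Server's memory (example: 6 GB)
        EXEC sp_configure 'max server memory (MB)', 6144;
        RECONFIGURE;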

    Read the article

  • Configure Cisco Pix 515 with DMZ and no NAT

    - by Rickard
    I hope that someone can shed some light on my situation, as I am fairly new to PIX configurations. I will be getting a new network for my department, which I am going to configure. At hand, I have a Cisco PIX 515 (not E) and a Cisco 2948 switch (and if needed, I can bring up a 2621XM router, but that one is private and not owned by my dept.). The network I will be getting is 10.12.33.0/26. The link net between the ISP routers and my network will be 10.12.32.0/29, where the GW is .1 and the HSRP routers are .2 and .3. The ISP has asked me not to NAT the addresses on my side, as they will set it up to give 10.12.33.2 a one-to-one NAT to a public IP. The rest of the IPs will get a many-to-one NAT to another public IP. 10.12.33.2 is supposed to be my server placed in the DMZ; the rest of the IPs will be used for my clients and the AD server (which is currently also acting as a DHCP server in the old network, with another ISP). Now, the question is, how would I best configure this? Am I thinking wrong here? I am expected to put the PIX first from the ISP outlet, then the switch, which will connect my clients. But with the ISP routers being on a different network, how will the firewall forward the packets to the other network? It's a firewall, not a router. I have actually never configured a PIX before, and fortunately this is more like a lab network, not a production network, so if something goes wrong it's not the end of the world, albeit annoying. I am not asking for a full configuration from anyone, just some directions, or possibly some links which will give me some hints. Thank you very much!
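    For what it's worth, the PIX does route between its own interfaces; it just isn't a general-purpose router. A minimal no-NAT sketch in PIX 6.3-style syntax (the split of the /26 into two /27s, the outside address, and the interface names/security levels are all assumptions for illustration):

        ! Outside interface on the ISP link net, default route to the ISP gateway
        ip address outside 10.12.32.4 255.255.255.248
        route outside 0.0.0.0 0.0.0.0 10.12.32.1 1

        ! DMZ (10.12.33.0/27, holds the .2 server) and inside (10.12.33.32/27)
        ip address dmz 10.12.33.1 255.255.255.224
        ip address inside 10.12.33.33 255.255.255.224

        ! Identity NAT: pass the 10.12.33.x addresses through unchanged
        ! (the ISP does the NAT upstream)
        nat (inside) 0 10.12.33.32 255.255.255.224
        nat (dmz) 0 10.12.33.0 255.255.255.224

        ! Permit inbound traffic to the DMZ server as needed (example: HTTP)
        access-list outside_in permit tcp any host 10.12.33.2 eq www
        access-group outside_in in interface outside

    The ISP routers then need a route for 10.12.33.0/26 pointing at the PIX's outside address, which they presumably install since they are handling the NAT on their side.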

    Read the article

  • Outlook 2007/2010 autodiscovering old Exchange info

    - by Dan
    I currently have an Exchange setup as follows: two Exchange 2003 servers clustered together as the current mailbox stores, one Exchange 2003 server set up as a frontend, one Exchange 2007 server set up as a frontend (set up for testing by my predecessor, never really used intentionally), and now four Exchange 2010 servers - two mailbox servers in a DAG and two with Hub/CAS roles. Everything seems to be working fine with one exception: Outlook 2007/2010 clients are still autodiscovering the test 2007 frontend and not the 2010 CAS array. I know this because there's an expired cert on the 2007 box, so the client displays a cert error when you attempt to autocreate the Outlook profile. From what I've read, there is an SCP (Service Connection Point) in AD that is pointing to the old server and is getting returned first, causing Outlook to try it first. How can I prevent Outlook from even attempting to connect to this 2007 box from now on? http://www.msexchange.org/articles_tutorials/exchange-server-2010/management-administration/exchange-autodiscover.html "When Outlook 2007 is installed on a domain-joined workstation, the Outlook client will query Active Directory for the Autodiscover information. Active Directory will return a list of SCPs and the Outlook client will automatically select the first SCP in this list. Using the information found in the SCP, the Outlook client will contact the Client Access Server for its configuration information and the Outlook client will be configured automatically."
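    One commonly used approach (a sketch; the server name and URL are illustrative) is to point the stale 2007 SCP at the working 2010 Autodiscover URL via the Exchange Management Shell, so even clients that pick it first land in the right place:

        # Point the old 2007 CAS's SCP at the 2010 Autodiscover URL
        Set-ClientAccessServer -Identity "OLD2007CAS" `
            -AutoDiscoverServiceInternalUri "https://mail.example.com/Autodiscover/Autodiscover.xml"

        # Verify what SCPs domain-joined clients will find
        Get-ClientAccessServer | Format-Table Name, AutoDiscoverServiceInternalUri

    Renewing or removing the expired certificate on the 2007 box would also stop the visible cert error even when a client does query it first.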

    Read the article

  • sudo or acl or setuid/setgid ?

    - by Xavier Maillard
    Hi, for a reason I do not really understand, everyone wants sudo for everything. At work we even have as many sudoers entries as there are ways to read a logfile (head/tail/cat/more, ...). I think sudo is self-defeating here. I'd rather use a mix of setgid/setuid directories and add ACLs here and there, but I really need to know the best practices before starting. Our servers have %admin, %production, %dba, %users - i.e. many groups and many users. Each service (mysql, apache, ...) has its own way of installing privileges, but members of the %production group must be able to consult configuration files and even log files. There is still the option of adding them to the right groups (mysql, ...) and setting the right permissions, but I do not want to usermod all users, and I do not want to modify the standard permissions since they could change after each upgrade. On the other hand, setting ACLs and/or mixing setuid/setgid on directories is something I could easily do without "defacing" the standard distribution. What do you think about this? Taking the mysql example, that would look something like this:

        setfacl -R -m d:g:production:rx,g:production:rx,d:other::---,other::--- /var/log/mysql /etc/mysql

    Do you think this is good practice, or should I definitely usermod -G mysql and play with the standard permission system? Thank you

    Read the article

  • Very uneven CPU utilization with SQL Server 2012 on 2 processor computer with 16 cores / processor

    - by cooplarsh
    After installing SQL Server 2012 Enterprise with the Server + CAL license model on a computer with 2 processors, each with 16 cores (no hyperthreading involved), and putting the server under extremely heavy load, the 16 cores on the first processor were very underutilized, the first 4 cores on the 2nd CPU were heavily utilized, and the last 12 cores were not used at all (because of the 20-core limit for this SQL Server edition). Total CPU utilization was displaying as around 25%. Unfortunately, the server suffered from extremely poor performance, even though if the tasks had been evenly distributed across the 20 cores it wouldn't have been anywhere near as bad. The Windows Server was running as a VMware virtual image under ESX Server, but all of the CPU was allocated to the Windows server. We tried changing affinity settings (e.g., allocating most cores to CPU and the others to I/O), but that didn't help solve the performance problems. Upgrading the product edition to SQL Server 2012 Enterprise Core not only allowed SQL Server to utilize the 12 previously unused cores on the 2nd processor, but also resulted in a much more even distribution of tasks across all of the processors. To get through the backlog of requests, CPU utilization jumped to around 90% and then came down to around 33% once it was caught up; performance improved dramatically once we failed over to the newly upgraded version, and the performance issues went away. I was wondering if anyone knows what might cause SQL Server to unevenly distribute the load, relying almost exclusively on the first 4 cores of the 2nd processor while 12 cores sat idle and only a few tasks were allocated to each of the 16 cores on the first processor. Also, is there any way we could have more evenly distributed the load across the 20 cores that were being used, without the product edition upgrade? The flip side of that question: what did the product upgrade do that caused SQL Server to start evenly distributing the load across all of the cores it recognized? Thanks for any insight and/or links that might help me better understand what was happening.
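    One way to see exactly which CPUs the license cap allowed SQL Server to use (a sketch using standard DMVs; nothing here is specific to this server) is to compare online and offline schedulers per NUMA node:

        -- Schedulers SQL Server is actually using, per CPU
        SELECT parent_node_id,
               cpu_id,
               status            -- 'VISIBLE ONLINE' vs 'VISIBLE OFFLINE'
        FROM sys.dm_os_schedulers
        WHERE scheduler_id < 255  -- exclude hidden/internal schedulers
        ORDER BY parent_node_id, cpu_id;

    Under the 20-core cap, the Server+CAL edition brings cores online in enumeration order rather than spreading them across NUMA nodes, which can leave the licensed cores lopsided across sockets; the offline schedulers in this query's output would show where the cap landed.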

    Read the article

  • Some clients cannot connect to Server 2008 R2 VPN

    - by Robl
    Hi all, I have a Server 2008 R2 box set up as a VPN server. We have created a Windows group called vpn-users to control access to the VPN. Clients are all Windows 7 Pro. This all seems to work fine, except some users cannot connect to the VPN! For example, I try to log on to the VPN from a client and get an error saying the server refused the connection due to a policy in place - specifically the authentication type. Fine, I think. So I drop that user into the vpn-users group created for this and try again, and hey presto, the user can now log on! Great. Now I try this with another user, but this time I get the same error even though I have dropped them into the vpn-users group!! So does anyone have any idea why this works for some users and not for others?? I have tried moving the user between OUs in AD, copying the account, and taking the user out of the vpn-users group and then back in, but I get the same error each time. Any thoughts, anyone?
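    Two quick things worth verifying before digging into NPS policy ordering (hedged suggestions, not a diagnosis): whether the domain actually sees the group change yet, and whether the account's dial-in setting overrides the policy. From a command prompt:

        :: Confirm the user's current group membership as the domain sees it
        net user someuser /domain

        :: Confirm who is in the VPN group
        net group vpn-users /domain

    Also check the account's Dial-in tab in Active Directory Users and Computers: "Deny access" there wins over "Control access through NPS Network Policy", which would produce exactly this one-user-works, next-user-fails pattern.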

    Read the article

  • What are best practices on virtual lab/test bed architecture?

    - by WooYek
    I am currently preparing a new small virtual environment for development and testing, with Windows Server + SQL Server + AD + SharePoint + Exchange + IIS (ASP.NET) + BizTalk + ?, for a small (up to 5 person) dev team. What are the pros and cons of the different approaches, e.g. splitting roles across different machines versus packing everything onto one machine per person? In your experience, what are the best practices I should follow in terms of architecture and the placement of the various systems/servers? What should be shared and what should be split per person? I would like to achieve some flexibility for the dev and testing process (so team members are not stepping on each other's toes) and limit the administrative effort needed to propagate settings, integrate work items, and revert changes when something breaks. It's not supposed to be an everyday development working environment - more a tier 2 developer testing environment - and not yet an integration or QA testing environment with a formal change process. IMO the two borderline solutions are:
    - creating one all-inclusive machine for each dev team member, giving them freedom to manage it
    - creating a shared environment managed by one person, with a somewhat formalized change request process
    What golden mean would you recommend, and why?

    Read the article

  • Cannot access domain from windows 2003 client

    - by Peuge
    Hey all, first off, I am a novice at AD and DNS, so please bear with me. This is my current situation: I have one server which is a DC and DNS server (Win2k3) - machine 1. I have another machine which is trying to join this domain - machine 2, also a Win2k3 server. This is what I have done so far: I set up DNS on the DC, and its TCP/IP DNS setting points to itself. On machine 2 I have set the DNS to point to the DC. DNS has been set up with a forward lookup zone with the same name as the domain (accdirect.com). I can ping machine 1 from machine 2 by FQDN and IP. I have set up forwarders on the DC for our ISP's DNS, and I can browse the internet on both machines. In the DNS MMC on the DC I can see that a host (A) record has been created for machine 2. The problem is I still cannot join the domain. When I try to join via My Computer - Properties, it brings up the username/password box, and after I click OK it says it cannot find the domain accdirect.com. If I run dcdiag /s:accdirect.com /u:accdirect.com\admin /p: from machine 2, I get the following:

        Performing initial setup:
        ** Warning: could not confirm the identity of this server in
        the directory versus the names returned by DNS servers. If
        there are problems accessing this directory server then you
        may need to check that this server is correctly registered
        with DNS
        [accdirect.com] Directory Binding Error 1722: Win32 Error 1722
        This may limit some of the tests that can be performed.
        Done gathering initial info.

    On the DC, all dcdiag and netdiag tests pass. If anyone could help me I would really appreciate it! Sorry if any of my terminology is a bit off; I have only been doing this for two days. Thanks, Peuge
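    Joining a domain needs more than the A record: the client locates a DC through the SRV records under _msdcs, which is exactly what that dcdiag warning hints at. A hedged pair of checks to run from machine 2 (domain name as in the question):

        :: Should return the DC's SRV record; if it fails, the zone is missing
        :: the AD-created SRV records - enable dynamic updates on the zone,
        :: then run "netdiag /fix" or restart the Netlogon service on the DC
        nslookup -type=SRV _ldap._tcp.dc._msdcs.accdirect.com

        nslookup -type=SRV _kerberos._tcp.accdirect.com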

    Read the article

  • Issues installing apache2 on Debian

    - by Belgin Fish
    I'm having issues installing apache2 - and pretty much everything else, in general. I'm using Debian. I run sudo apt-get install apache2 and it returns:

        root@debian:~# apt-get install apache2
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:
        The following packages have unmet dependencies:
         apache2 : Depends: apache2-mpm-worker (= 2.2.16-6+squeeze7) but it is not going to be installed or
                   apache2-mpm-prefork (= 2.2.16-6+squeeze7) but it is not going to be installed or
                   apache2-mpm-event (= 2.2.16-6+squeeze7) but it is not going to be installed or
                   apache2-mpm-itk (= 2.2.16-6+squeeze7) but it is not going to be installed
                   Depends: apache2.2-common (= 2.2.16-6+squeeze7) but it is not going to be installed
        E: Broken packages

    Not really sure what's up :S It seems like it can't find any of the required packages for anything. Does anyone know what I'm doing wrong?
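    When apt reports that every dependency "is not going to be installed", the package index is often stale or the sources list is inconsistent. A hedged first pass (standard apt commands, nothing system-specific):

        # Refresh the package index, then ask apt why the dependencies are held back
        sudo apt-get update
        apt-cache policy apache2 apache2-mpm-worker apache2.2-common

        # Attempt to repair a partially broken dependency state
        sudo apt-get install -f

    If apt-cache policy shows the MPM packages pinned or coming from a mismatched release (e.g. a squeeze system with lines for another release in /etc/apt/sources.list), aligning the sources and re-running apt-get update usually clears the "Broken packages" error.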

    Read the article
