Search Results

Search found 7318 results on 293 pages for 'team ferrari22'.

  • QNTC and Windows Server 2008 R2

    - by Ben
    I am having a really hard time getting an iSeries (AS/400) machine talking to my new Windows Server 2008 R2 box using the QNTC file system on the iSeries. I had similar problems getting it to talk to a Windows Server 2003 machine initially, but enabling the local Guest account on the 2003 box solved that one. No such luck with the new 2008 box. When I do a WRKLNK /QNTC/SVR01 on the iSeries (which should show share listings, and does on any 2003 box), all I get is "Cannot find object to match specified name." I know the iSeries likes the same username and password on the remote server, but unfortunately for us this is not the case. It does, however, currently work with different username/password combinations on a 2003 box.

    To try and get the wretched things talking, I have made the 2008 server pretty open, but the iSeries will not see shares on it. I have enabled the local Guest account, turned Windows Firewall off, and set the share permissions so Everyone has full control, all to no avail. I read something on the internet about the iSeries only being able to handle NTLM authentication (and I understand that by default Server 2008 R2 uses only NTLMv2 and has NTLM disabled), so I made a special group policy for the server and tweaked all the Group Policy settings under Computer Configuration\Policies\Windows Settings\Security Settings\Local Policies\Security Options, but the iSeries STILL won't see it.

    We have a team of programmers who do all the system administration of the iSeries, but they are stumped for ideas on their side, and I'm stumped for ideas on mine. This is driving me crazy now, and if anybody has managed to get an iSeries to talk to Windows Server 2008 R2 using QNTC I would be very appreciative of any suggestions, be it on the Windows side, iSeries settings or even IBM PTFs that might patch anything. The iSeries is running V5R4 and I have *SECOFR privileges on it, if that helps. One final (most important!) note - the programmers think it's my system being tricky, and I think it's theirs - please prove me right :)
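
    The NTLM angle can be tested directly. The "Network security: LAN Manager authentication level" policy maps to the LmCompatibilityLevel registry value on the 2008 R2 box; as a sketch (assuming level 1, "Send LM & NTLM - use NTLMv2 session security if negotiated", is acceptable in your environment), you could set it from an elevated prompt and retry the WRKLNK:

        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel /t REG_DWORD /d 1 /f

    A reboot (or at least gpupdate /force, if you set it through the group policy instead) is needed before the change takes effect.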

  • SCOM, Server 2008 and SQL Server 2008

    - by Jacques
    Hi there, I'm trying to set up SCOM (System Center Operations Manager 2007 - platform monitoring) on my Server 2008 machine, using SQL Server 2008 running on the same machine. When I check my prerequisites I get problems with the SQL and Active Directory components (I'm running SQL Server 2008, and Server 2008 without Active Directory installed). Errors:

    1. Microsoft SQL Server 2005 Service Pack 1 required. Details: "SQL Server 2005 SP1 is the next version of SQL Server. SQL Server 2005 Enterprise Edition is a complete data and analysis platform for large mission-critical business applications. The link provided in the resolution column is a trial version of the product and is not supported by the Microsoft SQL Server team."

    2. In order to install, Active Directory needs to be present. Details: "Setup failed to verify the presence of Active Directory for this server."

    I've got a couple of questions I need answered; I hope someone can help. Do I need to install Active Directory for SCOM to work? Can I run SCOM with a SQL 2008 database? How do I get past these problems?

  • Need for explanation: NetBIOS over TCP/IP on VMware network adapter disturbs access to network share

    - by gyrolf
    Some time ago nearly all workstations in our team (Windows XP SP2) exhibited intermittent but frequent delays when accessing shares on the network. Typically the first access to a share which hadn't been accessed for some time resulted in a nearly frozen workstation for up to 30 seconds; then everything started working fine again. Using TCPView from Sysinternals I saw that during these delays there was a connection to the netbios-ssn port on the file server which was stuck in state SYN_SENT.

    First try: disable NetBIOS over TCP/IP for the intranet network adapter. Problem solved, but I didn't like manipulating our centrally managed network configuration for the intranet. Second try: disable NetBIOS over TCP/IP only for the VMware network adapter (VMnet1, used for host-only communication). Problem solved again!

    My questions: Why does NetBIOS over TCP/IP on one network adapter disturb NetBIOS over TCP/IP on another network adapter? Is this problem specific to VMware network adapters? Has anybody else seen this phenomenon?

    Additional information: VMware Workstation version 6.0.3. By the time I started seriously analysing the problem, it was no longer possible to find out what had been changed on our systems at the time the problems started.
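
    For reference, the per-adapter toggle can be scripted, which makes it easy to roll out (or revert) across a team's workstations. A sketch using wmic - the adapter index is an assumption; read the right one off the first command:

        wmic nicconfig get Index,Description,TcpipNetbiosOptions
        wmic nicconfig where Index=2 call SetTcpipNetbios 2

    SetTcpipNetbios takes 0 (use the DHCP server's setting), 1 (enable) or 2 (disable).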

  • The file STDOLE2.TLB cannot be found or contains a Visual Basic for Applications library that is not

    - by Jim Birchall
    In Microsoft Project 2007 Professional, if I select Tools > Macro > Security, the following message appears: The file "STDOLE2.TLB" cannot be found or contains a Visual Basic for Applications library that is not valid. Verify that the file name is correct, and try again. If the Visual Basic for Applications library is invalid, reinstall Project.

    I am a software developer and I have developed an add-in for MS Project using Visual Studio 2005, with all the problems that entails. As such my machine is configured with both Project 2003 Professional and Project 2007 Professional. I only noticed this error when trying to debug my add-in. The add-in loads and draws the menu, but when I click on the menu option I receive this error (which is also generated by the Tools > Macro > Security option mentioned above). I have tried repairing the Office installations and uninstalling everything and re-installing it all over again, but after several hours I still get the same problem.

    Does anyone have any idea how to resolve this? Some method of finding out which type libraries are registered against STDOLE2.TLB might help me identify what is causing the problem, as would a way of manually unregistering the offending library. My machine is configured as follows: Windows 7 Ultimate x64, Project 2003 Professional, Office 2007 Ultimate, Project 2007 Professional, Visio 2007 Professional, Visual Studio 2005 Team Edition for Developers, Visual Studio 2005 Tools for Office Second Edition, Visual Studio 2008 Professional. I also installed the MS Project 2010 beta to test my add-in against, but have since uninstalled it. I have a suspicion that this may have caused the problem, but I cannot be sure.
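
    One place to look: STDOLE2.TLB is the OLE Automation type library, and its registration lives under a well-known TypeLib GUID. As a sketch, dumping that key shows every version and path registered on the machine, so a stale entry left behind by the Project 2010 beta (for example, one pointing at a path that no longer exists) would stand out:

        reg query "HKCR\TypeLib\{00020430-0000-0000-C000-000000000046}" /s

    On x64 Windows the same data also appears under the Wow6432Node view of the registry, so it is worth checking both.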

  • How to use Salt Stack with minions all behind NAT (not publicly accessible, default salt ports not open)?

    - by MountainX
    Can Salt Stack minions communicate with the salt master from behind NAT/firewalls, etc., using standard ports that would be open by default in all consumer NAT routers (and without the minions having a public DNS record or static IP)? I'm working my way through my first Salt tutorial, and this is where I'm stuck. I am able to configure iptables on the Ubuntu salt-master, but I have no control over the routers/NAT that the minions will sit behind. So far I tried these settings:

        /etc/salt/master:
            publish_port: 465
            ret_port: 443

        /etc/salt/minion:
            master_port: 465

    That did not work.

    Background: I have a custom-developed application presently running on about 40 Kubuntu laptops (and more planned). Every few months I have to update the application. (Often this just amounts to replacing a .jar file, which requires root permissions.) I also have to run Ubuntu updates and a few other minor things. I've been doing it manually, one by one, using TeamViewer to log into each client. I would like to dramatically improve this process. The two options I'm aware of are either: use reverse SSH tunnels and bash scripts (I tested this and it works, but I don't get any of the reporting, etc., I would get with Salt Stack); or use Salt Stack (or a similar management tool). But I need a really simple tool - I can't invest any time in a big learning curve. I looked at Puppet and a bunch of related tools. The only one I found that looked simple enough for me (so far) was Salt Stack. But I'm stuck now because my minion can't reach the salt-master, as stated above. I appreciate suggestions.
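
    One detail worth checking first: both Salt connections (publish on 4505, return on 4506 by default) are opened outbound from the minion to the master, so NAT'd minions with no public IP or DNS generally work without any router changes; only the master needs inbound ports open. Also, if I read the defaults correctly, the minion's master_port corresponds to the master's ret_port, so the pairing above (465 on the minion against ret_port 443 on the master) looks crossed. A minimal sketch with the defaults left alone and the master's firewall opened instead (salt.example.com stands in for your master's public name or IP):

        # /etc/salt/minion
        master: salt.example.com

        # on the Ubuntu salt-master, allow the two inbound ports:
        sudo iptables -A INPUT -p tcp --dport 4505:4506 -j ACCEPT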

  • Replacing DropBox with: Amazon S3 + SSL + GPG/TrueCrypt + Mounting on OSX ??

    - by Matt Rogish
    So, right now we're using DropBox to share various data files around between approximately 10 Mac OS X systems. However, we already have an S3 account, and putting everyone on the lowest DropBox plan at $10/mo seems too expensive. We'd also like to avoid any kind of local storage (sharing a disk on a desktop or something), since we're a geographically distributed team. So, I am contemplating something that would allow us to replace DropBox with our own home-grown solution. We are all fairly technical people and/or smart enough to follow some steps, so if it's not as "user friendly" as DropBox we're all comfortable with that. There are plenty of docs out there that have bits and pieces of what I want, but some of the tools don't seem to fit the requirements:

    1. Transport security via SSL to the bucket
    2. Encryption of bucket contents
    3. Bi-directional syncing

    Most of the scripts I can find on the internet use "duplicity", which appears to fail #1 (it doesn't look like duplicity supports SSL to S3 - the docs don't say, but the protocol looks like plain old HTTP: http://www.nongnu.org/duplicity/duplicity.1.html#sect6 ). Many scripts use GPG to encrypt files. This seems like it could work, but I have to make sure that each OS X client is able to use the same key to encrypt and decrypt files (key management is left to me). FTP and other client-based apps don't seem to support this at all. Finally, most of the scripts use one-way replication, e.g. using Amazon S3 as a simple backup store; as we'd be using Amazon S3 as the "repository", they fail #3.

    Whew. So, I'd love a single tool that does this, but after an exhaustive search I don't think one exists. In my mind, the magical tool would be some combination of TrueCrypt and rsync. I'd be happy just knowing which tools out there can fulfil my 3 requirements; after that I can stitch together the rest. Any thoughts? THANKS!
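
    For requirements #1 and #2 at least, here is a sketch of the push side using s3cmd over HTTPS plus symmetric GPG - assuming ~/.s3cfg has use_https = True and every Mac holds the same shared passphrase file; the bucket name and paths are placeholders:

        #!/bin/bash
        # Encrypt each file with the shared key, then sync the ciphertext to S3 over SSL.
        SRC="$HOME/shared"
        STAGE="/tmp/s3stage"
        BUCKET="s3://our-team-bucket"
        mkdir -p "$STAGE"
        for f in "$SRC"/*; do
          gpg --batch --yes --symmetric --cipher-algo AES256 \
              --passphrase-file "$HOME/.team-passphrase" \
              --output "$STAGE/$(basename "$f").gpg" "$f"
        done
        s3cmd sync "$STAGE/" "$BUCKET/"

    The hard third requirement (bi-directional sync with conflict handling) is exactly the part none of these pieces solve, so this only covers the transport and encryption legs.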

  • How do I enable the confluence-users group?

    - by M. Joanis
    I've got an issue with Atlassian Confluence: normal users can't log in, but administrators can. Details below!

    I manage users using an Apple Open Directory (LDAP). I created two groups, "confluence-administrators" and "confluence-users". I've added team leaders and managers to both groups, and I've added some users to "confluence-users". Everyone in "confluence-administrators" can log in easily; people in "confluence-users" can't log in at all. When I look at the user list (in Confluence) and select a user to examine the list of groups he or she belongs to, I can see that the Confluence administrators are indeed members of the "confluence-administrators" group, but not a single user is a member of the "confluence-users" group - not even the Confluence administrators, who are members of both groups! So I tried to have one of the "confluence-users" log in while watching the Confluence logs. Here's the result:

        2012-07-05 14:50:19,698 ERROR [http-8090-11] [core.event.listener.AutoGroupAdderListener] handleEvent Could not auto add user to group: Group <confluence-users> is read-only and cannot be updated
            at com.atlassian.crowd.directory.DbCachingRemoteDirectory.addUserToGroup(DbCachingRemoteDirectory.java:461)
        ...

    So it says the group is read-only. I'm not sure why that is a problem; "confluence-administrators" is read-only too, and Confluence doesn't complain about it.

    Some things I don't think are part of the problem: I've synchronized Confluence with LDAP many, many times. I have verified many times that I didn't make a typo while setting up the groups on the LDAP server. LDAP synchronization goes well (no errors in the logs, only INFO-level messages). And the user exists - the errors in the logs are different when a user doesn't exist. Any help is most welcome!

  • Dell 15Z: trying to output an image over Mini DisplayPort to a Yamasaki Q270 (2560x1440) monitor, via dual DVI-D to Mini DisplayPort

    - by michaelcku
    I have a Dell 15Z with a GT525M video card, with the driver updated via NVIDIA. The problem is that I just purchased a Yamasaki Catleap Q270 monitor. It is a 2560x1440 monitor with only a dual-link DVI-D input, and the 15Z only has HDMI and a Mini DisplayPort. So I got an active dual-link DVI-D to Mini DisplayPort adapter (linked below). http://www.monoprice.com/products/product.asp?c_id=104&cp_id=10428&cs_id=1042802&p_id=6904&seq=1&format=2

    The setup works when I plug it into my MacBook Air 13.3. However, when I plug it into my 15Z it doesn't work. I've tried everything I can think of - Windows key + Ctrl + P, Fn + F1 - it just doesn't work. I believe the issue lies with the Mini DisplayPort, but I can't be sure; no matter what I do, the Mini DisplayPort doesn't get recognized. I spent 3 hours on the phone with the XPS tech team, and they simply said it was not compatible... Any help or suggestion would be greatly appreciated.

  • BitLocker with Windows DPAPI Encryption Key Management

    - by bigmac
    We have a need to enforce at-rest encryption on an iSCSI LUN that is accessible from within a Hyper-V virtual machine. We have implemented a working solution using BitLocker, using Windows Server 2012 on a Hyper-V virtual server which has iSCSI access to a LUN on our SAN. We were able to do this successfully by using the "floppy disk key storage" hack as defined in THIS POST. However, this method seems hokey to me.

    In my continued research, I found out that the Amazon corporate IT team published a WHITEPAPER that outlined exactly what I was looking for in a more elegant solution, without the floppy disk hack. On page 7 of this whitepaper, they state that they implemented Windows DPAPI encryption key management to securely manage their BitLocker keys. This is exactly what I am looking to do, but they state that they had to write a script to do it, yet they provide neither the script nor any pointers on how to create one. Does anyone have details on how to create a "script in conjunction with a service and a key-store file protected by the server's machine account DPAPI key" (as they state in the whitepaper) to manage and auto-unlock BitLocker volumes? Any advice is appreciated.
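
    Not Amazon's actual script, but a minimal PowerShell sketch of the same idea: protect the volume's recovery password with the machine account's DPAPI key once, then have a service that runs at boot unprotect it and hand it to manage-bde. The paths, drive letter and recovery password below are placeholders:

        Add-Type -AssemblyName System.Security
        $keyFile = 'C:\ProgramData\BitLockerKeys\lun1.bin'

        # One-time setup: encrypt the recovery password under machine-scope DPAPI
        $pw   = [Text.Encoding]::UTF8.GetBytes('111111-222222-333333-444444-555555-666666-777777-888888')
        $blob = [Security.Cryptography.ProtectedData]::Protect($pw, $null,
                  [Security.Cryptography.DataProtectionScope]::LocalMachine)
        [IO.File]::WriteAllBytes($keyFile, $blob)

        # At service start: decrypt and unlock the iSCSI volume
        $blob  = [IO.File]::ReadAllBytes($keyFile)
        $clear = [Text.Encoding]::UTF8.GetString(
                   [Security.Cryptography.ProtectedData]::Unprotect($blob, $null,
                     [Security.Cryptography.DataProtectionScope]::LocalMachine))
        manage-bde -unlock E: -RecoveryPassword $clear

    Because the blob is bound to the machine account, copying the key file to another host yields nothing usable, which is the property the whitepaper is after; you would still want to ACL the key file down to the service account.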

  • Resolving "JBoss Web Console is Accessible to Unauthenticated Remote Users" vulnerability

    - by IAmJeff
    Our security team has determined there is a vulnerability in one of our systems. We are using JBoss 5.1.0GA on RHEL 5.10. Vulnerability description: "JBoss Web Console is Accessible to Unauthenticated Remote Users". Yes, this looks familiar; refer to Question 501417. I do not find the answer there complete. Can someone (or multiple someones) answer:

    1. Does a newer version of JBoss fix this vulnerability?
    2. Are there links describing, in more detail, the manual modification of JBoss configuration files needed to resolve the issue?
    3. Are there other options to remediate this vulnerability?

    Why don't I find the other answer complete? I'm not at all familiar with JBoss, so this answer seems a bit too simple: "The web-console.war contains commented-out templates for basic security in its WEB-INF/web.xml as well as commented-out setup for a security domain in WEB-INF/jboss-web.xml." Just uncomment those basic security blocks and restart? Is there anything else I need to include? This seems generic - do I need to include anything about my environment, such as absolute paths, etc.? Am I making this too complicated?
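
    For what it's worth, the uncommented blocks in a stock web-console.war look roughly like the sketch below - verify against your own 5.1.0GA copy, since the role and realm names should be whatever your deployment's comments actually contain:

        <!-- WEB-INF/web.xml -->
        <security-constraint>
          <web-resource-collection>
            <web-resource-name>HtmlAdaptor</web-resource-name>
            <url-pattern>/*</url-pattern>
          </web-resource-collection>
          <auth-constraint>
            <role-name>JBossAdmin</role-name>
          </auth-constraint>
        </security-constraint>
        <login-config>
          <auth-method>BASIC</auth-method>
          <realm-name>JBoss Web Console</realm-name>
        </login-config>
        <security-role>
          <role-name>JBossAdmin</role-name>
        </security-role>

        <!-- WEB-INF/jboss-web.xml -->
        <security-domain>java:/jaas/web-console</security-domain>

    No absolute paths are involved; the users and roles behind the web-console security domain come from the login module configuration (by default, properties files), and restarting the deployment picks the change up.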

  • IIS 7 rewriting subdomain to point at a specific port

    - by Tommy Jakobsen
    Having installed Team Foundation Server 2010 on Windows Server 2008, I need an easy URL for our developers to access their repositories. The default URL for the TFS repositories is http://localhost:8080/tfs

    Now I want the subdomain tfs.server.domain.com to point at http://localhost:8080/tfs, so that when you access tfs.server.domain.com/repos_name it redirects to http://localhost:8080/tfs/repos_name. How can I do this in IIS 7? I already tried the following rule, but it does not work; I get a 404.

        <rewrite>
          <globalRules>
            <rule name="TFS" stopProcessing="true">
              <match url="^(?:tfs/)?(.*)" />
              <conditions>
                <add input="{HTTP_HOST}" pattern="^tfs.server.domain.com$" />
              </conditions>
              <action type="Rewrite" url="http://localhost:8080/tfs/{R:1}" />
            </rule>
          </globalRules>
        </rewrite>

    EDIT: I actually got this working by adding a binding for the site on port 80 with the host name tfs.server.domain.com. But using tfs.server.domain.com, I can't authenticate using Windows Authentication. Is there something that I need to configure for Windows Authentication? You can see a trace here: http://pastebin.com/k0QrnL0m
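
    A common cause of exactly this symptom: Kerberos fails when the site is reached under a host name that has no matching service principal name, so Windows Authentication breaks under the new alias while the original name keeps working. As a sketch (the account is an assumption - use whatever identity the TFS application pool actually runs as), registering an SPN for the alias often fixes it:

        setspn -A HTTP/tfs.server.domain.com DOMAIN\tfs-app-pool-account

    If you are testing from a browser on the server itself, the loopback check can also reject the alias; that is a separate, well-documented registry toggle (DisableLoopbackCheck) worth enabling only while diagnosing.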

  • Can IIS (Ideally Azure) do SSL Proxying?

    - by Acoustic
    My team has been asked to add a new feature to a project we're working on, and none of us can find authoritative details on whether it's possible with Windows/IIS. The short of it is that we're hoping to have customers update their DNS with a CNAME record to point their website to our server instead of theirs (the whys are trivial - it's what the app does on behalf of their site). We're using a reverse proxy with several custom modules to serve particular content from the original servers. So far everything works perfectly, until we encounter SSL.

    Is there a way to have IIS serve up an SSL certificate from another server? In other words, is there a way to be a trusted man in the middle? I'm hoping that's possible, so that we don't have to require all our clients to re-issue their SSL certs. Frankly, we don't want to have to manage hundreds of certs. I'd also like to avoid a UCC situation if there's a way to, because it seems to require re-creating the cert each time a client is added. So, any pointers on proxying/hosting SSL (or even dynamic SSL hosting like http://www.globalsign.com/cloud/) would be appreciated.

  • Stream video file in debian?

    - by Rob
    I've tried ffserver with ffmpeg, I've tried VLC, and I'm not sure what else to try or what I've done wrong. With VLC I've gone through:

        +-[ robert@s10 ]--[ ~ ]
        +[#!]¬ vlc --version
        VLC media player 2.0.0 Twoflower (revision 2.0.0-0-g421a4fc)
        VLC version 2.0.0 Twoflower (2.0.0-0-g421a4fc)
        Compiled by buildd on biber.debian.org (Mar 1 2012 22:21:37)
        Compiler: gcc version 4.6.2 (Debian 4.6.2-14)
        This program comes with NO WARRANTY, to the extent permitted by law. You may redistribute it under the terms of the GNU General Public License; see the file named COPYING for details. Written by the VideoLAN team; see the AUTHORS file.

    and tried everything I could in the streaming section, but I can't get the stream to actually work. Looking around, apparently Debian strips the encoders from the package? I want to share some videos I've made with friends on IRC, and it would be easiest if I could just stream them so we can all watch at the same time and critique parts in real time. Has anyone done something similar?

        Linux s10 3.2.0-2-686-pae #1 SMP Tue Mar 20 19:48:26 UTC 2012 i686 GNU/Linux

    Basic home network; I am behind a NAT (192.168.1.*) and have dynamic DNS set up. That doesn't really matter too much - I can figure that out - but it's not even working locally. I have a file server set up and could just share the files that way, but I'd rather have everyone watching at the same time (or just about). I'm not worried about installing new packages or building something from source, that's not a big issue; I just want to get it working. Big plus if I can do it from the command line.
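
    If the Debian build is indeed missing the encoders, a sketch that sidesteps transcoding entirely is to stream the file's existing codecs over HTTP. This works when the source is already something common like H.264 that your friends' players can decode; the file name, port and path are placeholders:

        cvlc ~/videos/myfilm.mp4 --sout '#standard{access=http,mux=ts,dst=:8080/stream}'

    Viewers then open vlc http://your-dynamic-dns-name:8080/stream (or the 192.168.1.x address locally). Since nothing is re-encoded, no encoder plugins are needed - only a remux into a TS container - and forwarding TCP 8080 on the NAT router covers the IRC friends.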

  • What is the most likely cause to a Blue Screen of Death when the user presses Ctrl-Alt-Delete?

    - by Jay
    My wife is experiencing this with her work laptop: she presses Ctrl-Alt-Delete to lock it and gets the BSOD. The first troubleshooting step there is usually to "re-image", and the machine is locked down. So with this question I am asking whether the behavior is unique enough that someone in the Stack universe knows exactly what this is (something I can tell her to tell her help desk support).

    Update: the help desk said to order more RAM. Alt-Tabbing caused the same behavior today. And... she learned that multiple users are affected. I'm not sure I'll be able to glean any additional info that will help with troubleshooting. I'll leave the question here for a bit, and if an answer ends up being the actual solution, I'll accept it. If not, I think I should probably remove the question (I'll check meta).

    Update #2.5: The cause appears to be a Ctrl-Alt-Delete keystroke while Sales Team Configurator is open. This can either be to lock the screen (there are workarounds in answers already present) or to unlock the screen (no workaround for that).

  • A separate user for each task?

    - by Mark Tomlin
    I just got a VPS server the other day. I'm new to server administration, but not that new to Ubuntu (11.04): I use it in my living room as the HTPC, and I had a previous VPS that I used on and off for a TeamSpeak server. This one I'm setting up for long-term use, so I would like to know the best practice when it comes to websites and the tasks I have the server performing.

    I understand that it could be beneficial to separate each website into its own user group or under its own username. I would set up nginx so that it could read all of the users' directories (and thus each website) but could not touch anything else. The same with TeamSpeak: should I make a user for TeamSpeak so that it operates within its own confined area, or is this overkill? I do have root access on the server, and my current plan is to run about 4 websites and a TeamSpeak server. My stack is Linux (Ubuntu 11.04), nginx, and PHP 5.4.3 (using the built-in PDO SQLite 3 driver for the database). Should PHP have its own user group, or is it OK to place it in with nginx?
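
    The per-site separation is straightforward with PHP-FPM, which ships with PHP 5.4: give each site its own pool running as its own user, and let nginx (one worker user) talk to each pool over a socket. A sketch for one site - the user names, socket path and process limits below are placeholders:

        ; /etc/php5/fpm/pool.d/site1.conf
        [site1]
        user = site1
        group = site1
        listen = /var/run/php5-fpm-site1.sock
        listen.owner = www-data      ; nginx's worker user must reach the socket
        listen.group = www-data
        pm = dynamic
        pm.max_children = 5
        pm.start_servers = 2
        pm.min_spare_servers = 1
        pm.max_spare_servers = 3

    In each site's nginx server block, point fastcgi_pass at that socket (fastcgi_pass unix:/var/run/php5-fpm-site1.sock;). PHP code then runs as the site's user while nginx itself never needs write access to anything, and the same pattern (a dedicated unprivileged user) is the usual answer for TeamSpeak too.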

  • deploy LAMP config to new boxes with low/no effort

    - by user1444233
    I'm spending a lot of time setting up new CentOS 6 instances. I use a VCS (Subversion) for most of the config files and all of the webapp source files (GitHub), but even with excellent package managers (like yum, npm, easy_install, etc.) it still takes time. I'd like to get to the point where I could try out a new potential web host by just signing up for an account, logging in, and automatically sucking my standardised config onto the box.

    I know there is a set of tools that can help - Puppet, Chef, Vagrant - and a set of services that sell solutions: Jumpbox (http://www.jumpbox.com/) and BitNami Cloud (http://bitnami.org/cloud). I don't mind investing time in learning a new tool, but as a no-budget start-up I'm keen to keep monthly costs down. My biggest concern is that time spent on the server config is time away from the codebase, and that's where I think my team and I should be investing our energy, at least until we get funded and scale up a bit. I'd be grateful for some recommendations on which way to jump on config:

    1. Stick with SSH and manual deploys, at least until you get big.
    2. Bite the bullet and learn, say, Puppet. You may only use it 8-10 times, but it pays to have such an easily tunable server bootstrap; see the sketch after this list.
    3. Don't bother; just pay the $100/month for a standard config service. It'll cost you $1000/year, but you should focus on the code.

    Other questions in this domain: I use quite a complex stack (Drupal, Zend Server, MySQL, PHP, MongoDB, Python, Django), but are there standard(ish) setups that include these, or that I could build upon more quickly? Are the configs optimised for small, medium, large VPS (1GB, 4GB, 16GB)? How secure are they?
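
    On option 2, the learning curve can be kept small by skipping the puppetmaster entirely: a single manifest checked into the existing Subversion config repo and run with puppet apply gets most of the benefit. A minimal sketch - the package names are CentOS 6 ones, and the file source path is an assumption about where your config checkout lives:

        # bootstrap.pp -- run as root with: puppet apply bootstrap.pp
        package { ['httpd', 'mysql-server', 'php']:
          ensure => installed,
        }
        service { ['httpd', 'mysqld']:
          ensure  => running,
          enable  => true,
          require => Package['httpd', 'mysql-server'],
        }
        file { '/etc/php.ini':
          source => '/root/server-config/php.ini',
          notify => Service['httpd'],
        }

    New host: sign up, yum install puppet, svn checkout the config repo, puppet apply. Re-running the same manifest later is a no-op unless something has drifted, which is the part manual SSH deploys never give you.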

  • Pushing updates to live server... FTP isn't cutting it... a better method?

    - by Jenkz
    I'm the lead developer in a team of 2. My partner has only just joined the project, and despite using Git for version control etc., we are still stuck in the dark ages when it comes to code deployment. Currently I make all site updates via FTP (this way I have control over, and responsibility for, everything that goes live), using FileZilla. I've done this for years, but we now have some large PHP classes (300KB) and a lot of traffic. So in short, every time I upload a key class ("general", for example) the site goes down until the file finishes uploading. This is only 5-6 seconds at a time, but it is increasingly unacceptable. I realise I can upload the file under a different name and then rename both files... but really, there must be a better way? I've heard about rsyncing code across from another server, but I don't see how that prevents switching to the new file whilst uploading. We only have one server (for DB and Apache), but we also use some cloud servers (for OpenX, as an example).
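
    The usual pattern here is to make the upload and the switch two separate steps: rsync the whole tree into a fresh, timestamped release directory (slow, but invisible to visitors), then repoint a symlink that the web root resolves through. The switch is a single rename, so no request ever sees a half-uploaded file. A sketch - the host and paths are placeholders, and Apache's DocumentRoot would point at /var/www/current:

        #!/bin/sh
        REL="/var/www/releases/$(date +%Y%m%d%H%M%S)"
        rsync -a ./site/ "server:$REL/"
        ssh server "ln -s '$REL' /var/www/current.new && mv -T /var/www/current.new /var/www/current"

    The ln-then-mv -T dance makes the symlink swap atomic, and old releases stay on disk, so rolling back is just pointing the link at the previous directory.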

  • SQL server queries are really slow only on first run

    - by JoelFan
    Somewhat strange problem... when I start my .NET app for the first time after rebooting my machine, the SQL Server queries are really slow. When I pause the debugger, I notice that it's hanging on getting the response from the query. This only happens when connecting to a remote SQL Server (2008); if I connect to one on my local machine, it's fine. Also, if I restart the app, it works fast, even off the remote SQL Server, and subsequent runs are also fine. The only problem is when I connect to a remote SQL Server for the first time after rebooting my machine. What's more, I have noticed this same exact behavior with a 3rd-party app (also .NET) that also connects to a remote SQL Server. Another piece of info: this has only started happening since I upgraded my machine from XP to Win7 (64-bit). Other developers on my team who upgraded to Win7 are seeing the same behavior (both with the app we're developing and the 3rd-party .NET app). (Copied from http://stackoverflow.com/questions/2014814/sql-server-queries-are-really-slow-only-on-first-run )

  • managing a high traffic media sharing website

    - by Jordan Westerman
    I'm in the process of developing a website that I predict will generate a lot of traffic. The site will be similar to many other sites offering free media streaming: MP3s. We are going to start with a pretty minimal amount of media to share, but the basic idea is that artists will set up a profile page with music they have made, available for consumers to visit the page and listen to. We are starting with just a handful of artists, but I think this project will generate more and more artist pages. Eventually I'd like to set it up so consumers can create personalized playlists.

    How can I best prepare server space and bandwidth capabilities? I have a small team of web designers and programmers working on the site, as I am pretty illiterate when it comes to site management. As the ringleader of this organization, I am more or less looking for financial requirements and monthly burn-rate estimates. I don't have a ton of capital to start with; I'm putting together a business plan and seeking investment. I have a game plan to grow fast enough to be successful, and slow enough to manage the financial growth requirements. Any questions I may have failed to ask myself? Is it realistic to start this project on a shared server and upgrade? Any financial advice you think I can use? I really appreciate any advice given, as this is my first business venture. Thank you all in advance. Jordan Westerman, D.B.A. Badfish Productions, LLC

  • How can I fix Office Sharepoint Search service?

    - by unknown (yahoo)
    For some reason, in Operations > Services on Server, the Office SharePoint Server Search service is MISSING! This means I can't get Shared Services working, and in turn I cannot get usage and reporting to see who is visiting my website (counters, OS/browser stats, etc.). I don't think I have ever seen this work in the 2 years I've been trying with SharePoint. So in essence I have two problems: most importantly SEARCH, and secondly usage reports.

    When trying to search, all I get is an unknown error, and nothing appears in my event viewer at all. I have tried http://www.cjvandyk.com/blog/Lists/Posts/Post.aspx?ID=96 ; in the comments on that site, another person reports that search is missing entirely and was going to uninstall. I'm tired of uninstalling SharePoint and reinstalling to fix the odd issue. I have things set up with Team Foundation Server that took forever to get working, so reinstallation is not a solution for me.

    As for usage reporting, this is how Microsoft responds to the error "Both Windows SharePoint Services Usage logging and Office SharePoint Usage Processing must be enabled to view usage reports. Please contact your administrator to ensure that these services are enabled." - I cannot do all the steps, since the SSP needs to be set up first, which I can't do per the above.
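
    When the service is missing from the Services on Server page, it can sometimes still be provisioned from the command line with stsadm (run from the 12-hive bin directory). A sketch, assuming MOSS 2007 - osearch is the Office SharePoint Server Search service and spsearch the WSS help search; the exact required parameters (service account, index location) vary by farm, so check stsadm -help osearch first:

        stsadm -o osearch -action start -role IndexQuery
        stsadm -o spsearch -action start

    Even if stsadm errors out here too, that at least produces a concrete message to troubleshoot, which the Central Administration page isn't giving you.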

  • Symantec Protection Suite Enterprise Edition

    - by rihatum
    We (our company) are planning to deploy Symantec Endpoint Protection and Symantec Desktop Recovery 2011 Desktop Edition to our 3000-4000 workstations (Windows 7, 32- and 64-bit), with a few hundred on Windows XP 32/64-bit. I have read the implementation guide for SEP and the tech notes for Desktop Recovery 2011. Our team has planned to deploy this as follows:

    1 x dedicated SQL 2008 R2 for Symantec Endpoint Protection (instead of using the embedded database)
    1 x dedicated SQL 2008 R2 for Symantec Desktop Recovery 2011 (instead of using the embedded database)
    1 x dedicated W2K8 R2 box for the SEPM (Symantec Endpoint Protection Manager - the management app)
    1 x dedicated W2K8 R2 box for the Symantec Desktop Recovery 2011 management application

    Agent deployment: as per the Symantec documentation for both of the above, an agent can be pushed via the management application (provided no firewalls are blocking the required ports, etc. - we have the Windows firewall disabled already). The above is the initial plan we have for 3000-4000 Windows client workstations. Now my questions :-)

    a) If we had these users distributed amongst two sites with an AD DC/GC in each site, how would I restrict the SEPM and desktop management solutions to only check for users in their respective site?
    b) At present all users are in one building, but we are going to move some departments to a new location (with dedicated connectivity). How would we control which SEPM/management server is responsible for which site?
    c) What hardware would you recommend as a server spec for the SQL servers - 16GB RAM, dual Xeon?
    d) What hardware would you recommend as a server spec for the management servers - 16GB RAM each, with dual Xeon and SAS disks?
    e) Also, how would you recommend protecting these 4 servers (2 x SQL and 2 x management)?
    f) How would you recommend storing backups for these desktops? We have a SAN and a NAS in our environment, and we have one spare DAS (Dell MD3000).

    If you have anything to add or correct, that will be really helpful before diving into the actual implementation phase. I will be most grateful for your suggestions, recommendations and corrections. Many thanks! Rihatum

  • Test whether svn REPO changes are reflected in Working Copy

    - by user492160
    Requirement: changes will be made to the REPO directory, and these should get updated in the WC (working copy), as opposed to the normal direction of WC to REPO.

    Scenario: my svn repo is /var/www/svn/drupal, and my checkout dir (working copy) is /var/www/html/drupalsite. So I've done the following:

    1. Edited the post-commit hook to contain: "/usr/bin/svn update /var/www/html/drupalsite".
    2. I won't make any change to the svn WC; I'll make changes to the svn REPO, /var/www/svn/drupal.
    3. After changes are made to the svn repo, run "svn commit /var/www/html/drupalsite". This will trigger the post-commit hook, which in turn will run "svn update /var/www/html/drupalsite", and thus my WC will get updated with the changes in the REPO.

    Query:
    a. Would steps 1-3 above help achieve my requirement?
    b. I'd need advice on how to test whether the above setup works successfully or not. I'm at a loss about the success of steps 1-3, which is why query (a) is present. This is a bit more of a concern for me.

    NB: I'm new to Subversion; whatever I've configured till now has been done by reading articles online. The reason for query (b) is that I'm not into development. It seems to be a PHP Drupal website and I happen to be setting it up, so I'm not aware of how to make a "PROPER" change in the REPO so that it gets reflected in the WC. If it is reflected, my configs are right and the team can start on development. I manually put a random file/folder into the REPO dir to see a change in the WC and ran steps 1-3, but it was of no avail, and I later learned that that was NOT the way to make a change to a REPO. Please advise.
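
    As a sketch of how a complete hook and an end-to-end test might look (the repo is a database, so the only "proper" way changes enter it is through a commit from some working copy - any scratch checkout will do; the log path is a placeholder):

        #!/bin/sh
        # /var/www/svn/drupal/hooks/post-commit  (must be executable by the svn server's user)
        REPOS="$1"
        REV="$2"
        /usr/bin/svn update /var/www/html/drupalsite --non-interactive \
            >> /var/log/svn-postcommit.log 2>&1

    Then to test:

        svn checkout file:///var/www/svn/drupal /tmp/scratch-wc
        echo "hook test" >> /tmp/scratch-wc/README.txt
        svn add /tmp/scratch-wc/README.txt
        svn commit -m "post-commit hook test" /tmp/scratch-wc
        ls -l /var/www/html/drupalsite    # the new file should appear here

    If /var/www/html/drupalsite doesn't pick the change up, the hook's log file is the first place to look; permissions on the working copy are the usual culprit, since the hook runs as the server's user, not yours.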

  • Looking for a short term solution to improve website performance with additional server

    - by Tanim Mirza
    I am working with a small team to run an internal website on PHP 5.3.9 and MySQL 5.0.77. All the files and the database are hosted on a dedicated Linux machine with the following configuration: Intel Xeon E5450, 8 CPU cores @ 3.00GHz (2992.498 MHz, 6148 KB cache), CentOS - Red Hat Enterprise Linux Server release 5.4.

    We started small, then the database got bigger, and now website performance has degraded significantly. We often get server space overruns, MySQL overloaded with too many calls, etc. We don't have much experience dealing with these issues. We recently got another server that we were thinking of using to improve performance. Since it has a better configuration, some of us wanted to move everything to the new machine completely, but I am trying to find out how we can utilize both machines for optimal performance. I found options such as MySQL clustering, load balancers, etc. I was wondering if I could get any suggestions for this situation - how to utilize two machines in the short term for best performance - that would be great. By short term we are looking for something that we can deploy in a month or so. Thanks in advance for your time.

  • Huge or minimal performance hit running game servers on a Virtual Machine? [closed]

    - by Damainman
    I have two dedicated servers to choose from, depending on which one would do a better job. I plan on upgrading the hard drive space and RAM at a later date, depending on how I move forward.

    Server 1: 500GB hard drive, 8GB RAM, 2x 64-bit Intel Xeon L5420 (quad core) @ 2.50GHz
    Server 2: 500GB hard drive, 8GB RAM, 2x 64-bit Intel Xeon E5420 (quad core) @ 2.50GHz

    I want to run a virtual machine that will host about 10 game servers, with about 16 active slots per server. It will be a mix and match from: Minecraft, Counter-Strike (1.6, Source, Global Offensive), Battlefield, Team Fortress.

    I know the general consensus is that virtualization is a horrible idea if you plan on running game servers on it. The issue is, the discussions I read do not clearly state whether they mean a virtual machine running inside an OS (i.e. VMware Player running on Windows with the game server in a VM) or a hypervisor such as Xen Cloud Platform. I am trying to get a definite answer on how feasible the above would be, and how much of a performance hit there might be, if the VM running the game servers is on a hypervisor such as Xen Cloud Platform. My initial research led me to believe that there wouldn't be a performance hit, since that kind of virtualization is different from running it inside an OS.

  • Are applications optimized for XenApp?

    - by damitamit
    Our IT dept are about to deploy virtualized apps using Citrix XenApp. One of these apps will be Dynamics AX 4.0 SP2, an ERP client (which I develop on). They have supposedly reached a roadblock because an external "Dynamics AX consultant" has told our IT dept that Dynamics AX 4 will not work optimally on Citrix and will run very slowly, because it is not optimized for Citrix. We have it running in a test environment now and it seems OK. They have been told that the only "solution" is to upgrade to Dynamics AX 2009, where this problem has supposedly been fixed (not a small task for my team!).

    When I heard about this, I was quite surprised. From my brief knowledge of Citrix, I thought it would be application-independent. How does the Citrix app virtualization work, in that a particular app would work better than others on Citrix? Would the speed of the virtualized app not just depend on the resources/network connection available to the Citrix server? FYI, Dynamics AX is a 3-tier client/server system, so the client accesses an AOS application server, which then accesses the database. Please enlighten me :)
