Search Results

Search found 14961 results on 599 pages for 'mac clients'.

  • Setting "Register this connection's addresses in DNS" using GPO

    - by ChamaraG
    Hi All, I need to get the Windows XP client machines in my network to dynamically update their DNS A records. The network is an AD domain running on Windows Server 2003 R2 servers with Win XP SP3 clients. Some machines already have the "Register this connection's addresses in DNS" check box checked and successfully update the DNS server, but some machines do not have this check box set, and I need to set it. I read that this is possible using a GPO, so I enabled the following under Computer Configuration - Administrative Templates - Network - DNS Client:
      - Primary DNS Suffix
      - Dynamic Update
      - DNS Servers
      - Connection-Specific DNS Suffix
      - Register DNS records with connection-specific DNS suffix
    and, where required, entered the relevant parameters. Running rsop.msc on the client machines shows that the GPO has been applied, and the client machines have been rebooted. The DNS server allows "Nonsecure and secure" dynamic updates and is only accessible from our internal network. But the "Register this connection's addresses in DNS" check box is still not set, and the hosts without it set are not updating their DNS A records. Following a suggestion on another web site, I tried running "ipconfig /registerdns", but it does not add the DNS A record. Any advice on what I am doing wrong here would be gratefully accepted :-) Thank you.
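
    If the policy still refuses to bite, one hedged fallback is to flip the per-adapter setting directly from a computer startup script via WMI. A minimal sketch, assuming PowerShell is available on the XP SP3 clients (it is an optional install there; the same WMI call can be made from VBScript if not):

        # Turn on "Register this connection's addresses in DNS" for every
        # IP-enabled adapter, then force a re-registration.
        $nics = Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "IPEnabled = TRUE"
        foreach ($nic in $nics) {
            # 1st argument: FullDNSRegistrationEnabled (the check box in question)
            # 2nd argument: DomainDNSRegistrationEnabled (the connection-suffix box)
            $r = $nic.SetDynamicDNSRegistration($true, $false)
            if ($r.ReturnValue -ne 0) {
                Write-Warning "Failed on $($nic.Description): code $($r.ReturnValue)"
            }
        }
        ipconfig /registerdns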

  • HTTP Upload Problems

    - by jfoster
    We are running a marketplace on ColdFusion 8 and IIS with a widely geographically distributed user base, and we have been receiving complaints of issues with some HTTP uploads. Most of the complaints come from locations geographically distant from our main datacenter on the US east coast. I've attempted to upload the same 70 MB file from a US west coast test server to both our main site and a backup running the same code on a different network route, and I saw the same issues fairly consistently in both places, so I've ruled out the code, the route, and internal network errors. I've also tested uploads using both the native cf upload tag and a third-party tool called SaFileUp, and saw the same issues with both upload tools, so I also don't think this is necessarily a ColdFusion problem. I don't have any problems uploading the test file from the east coast to other east coast servers, so I'm beginning to think that the distance between our users and our equipment is a factor. I've also found that smaller files (under 10 MB) are more likely to succeed than large ones. I tried the test upload with both IE and FF and did notice a difference in the way the browsers seemed to handle packet errors: IE seemed to have a tough time continuing an upload after dropped/bad packets, whereas FF seemed able to gracefully resume an upload after experiencing packet problems. Has anyone experienced similar issues? Is there anything we can do on our side to make uploads more forgiving of packet loss, or resumable after an error? A different upload tool, etc.? Do we need upload servers in more than one location to shorten the network routes between clients and servers? Does anyone think that switching uploads to SSL will help (no layer-7 packet sniffing along the way may lead to a smoother upload)? Thanks.

  • Is 30 calls / second a lot for one IIS server?

    - by Lieven Cardoen
    We have an RIA application that 300 clients use concurrently in an intranet environment. Together they make 30 calls/second to IIS (ASP.NET); actually it's 60, but the calls are load-balanced over two IIS servers. Half of the calls fetch an asset (a caching profile is used, so most of the time the cache is hit); the other half save data to a SQL Server. Retrieving an asset is done with an .aspx page. Saving the data happens via WebORB, ASP.NET and SQL Server, so some processing is needed by WebORB (AMF decoding, GZIP, ...). We also use Spring.NET, and some of the container objects have a request scope (not a lot). The IIS servers are virtual machines, 4 CPUs, 2 GB RAM, based on Windows 2008 x64 SP2 Enterprise Edition. SQL Server 2008 is used. Apparently the CPU of both IIS servers is constantly around 60-70%. Now, my question: is a load of 60-70% acceptable, and how could we possibly bring that percentage down (maybe using only one IIS server)? Also, is 2 GB RAM enough? Assets can be up to 20 MB, but on average they are about 30 KB (the load of 60-70% is achieved with assets around 30 KB). The data that gets saved with WebORB is very small (2 KB) and is just one object.

  • Windows 7 libraries and folder redirection nightmare

    - by Lobuno
    Hello! In our Active Directory we deploy a policy to our clients where the personal directory (My Documents) is redirected to a file server of ours: \\server\share\username\Documents. In older systems everything worked fine. In Windows 7, some users are experiencing the following symptoms:
      - The Documents library is EMPTY.
      - Where the Documents library should be shown in Explorer, an empty white icon is displayed, with no caption.
      - Right-clicking the Documents library to edit the folders that are part of the library brings the dialog up; however, that dialog is unusable: no folder is present there, and clicking Add folder does nothing.
      - Deleting the library and auto-creating it doesn't solve the problem.
      - The shared directory can be accessed via UNC paths, and it can be mounted as a shared drive as well. The library is still broken.
      - The shared drives are on a W2008 indexed server.
      - Using the Windows Library tool utility doesn't solve the problem.
    What can the cause of this problem be, and how can it be solved?

  • Software distribution from web server to client using PHP/FTP

    - by Jenolan
    I develop and maintain a number of add-ons and utilities for various widgets (mainly aMember), which generally means I need to install PHP-based code onto other people's systems. Whilst I have a VPS and have access to rsync and all sorts of yummy tools, most of the people I deal with have basic FTP access, and that's all, folks. Uploading from my local system is also a problem, as I am satellite-based (two-way), so it is fairly slow and expensive, and in any case the files are already on my server. So there is no rsync, fxp, or ssh, and I can't really install anything, as it is obviously not my system; they would be justifiably miffed if I started installing file managers or other things onto their sites. What I have been trying to find is a utility that I can run on my server from the web, preferably PHP-based, that will be like a file manager but a bit different. Two panels:
      - LH side: the local server, pretty much like a standard FM application
      - RH side: ability to log in via FTP to the client's system
    Then I can fiddle as required. The closest thing I have found is net2ftp, but it doesn't have the GUI interface. At the moment I simply ssh into my server, power up ncftp, and run that way, but something easier to use would be mucho niceness. Thanks in advance! Larry
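
    In the meantime, the ncftp suite already in use can at least script the push non-interactively from the VPS. A hedged sketch (host, credentials and paths below are placeholders, not real values):

        # Recursively upload a build directory to a client's plain-FTP account.
        ncftpput -R -u clientuser -p 'clientpass' ftp.example.com \
            /public_html/amember/plugins ./build/myplugin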

  • Is Windows Server 2008 R2's NAP solution for NAC (endpoint security) valuable enough to be worth the hassles?

    - by Warren P
    I'm learning about Windows Server 2008 R2's NAP features. I understand what network access control (NAC) is and what role NAP plays in it, but I would like to know what limitations and problems it has that people wish they knew before they rolled it out. Secondly, I'd like to know if anyone has had success rolling it out in a mid-size environment (a multi-city corporate network with around 15 servers and 200 desktops) with almost all (99%) Windows XP SP3 and newer Windows clients (Vista and Win7). Did it work with your anti-virus? (I'm guessing NAP works well with the big-name anti-virus products, but we're using Trend Micro.) Let's assume that the servers are all Windows Server 2008 R2. Our VPNs are Cisco gear and have their own NAC features. Has NAP actually benefited your organization, and was it wise to roll it out? Or is it yet another item in the long list of things that Windows Server 2008 R2 does but that, even if you do move your servers up to it, you're probably not going to want to use? In what particular ways might the built-in NAP solution be the best one, and in what particular ways might no solution at all (the status quo pre-NAP) or a third-party endpoint security or NAC solution be considered a better fit? I found an article where a panel of security experts in 2007 said NAC was maybe "not worth it". Are things better now in 2010 with Windows Server 2008 R2?

  • Accessing SSH_AUTH_SOCK from another non-root user

    - by Danny F
    The Scenario: I am running ssh-agent on my local PC, and all my servers/clients are set up to forward SSH agent auth. I can hop between all my machines using the ssh-agent on my local PC. That works. I need to be able to SSH to a machine as myself (user1), change to another user named user2 (sudo -i -u user2), and then ssh to another box using the ssh-agent I have running on my local PC. Let's say I want to do something like ssh user3@machine2 (assuming that user3 has my public SSH key in their authorized_keys file). I have sudo configured to keep the SSH_AUTH_SOCK environment variable. All users involved (user[1-3]) are non-privileged users (not root).
    The Problem: When I change to another user, even though the SSH_AUTH_SOCK variable is set correctly (let's say it's set to /tmp/ssh-HbKVFL7799/agent.13799), user2 does not have access to the socket that was created by user1. Which of course makes sense; otherwise user2 could hijack user1's private key and hop around as that user. This scenario works just fine if, instead of getting a shell via sudo for user2, I get a shell via sudo for root, because naturally root has access to all the files on the machine.
    The question: Preferably using sudo, how can I change from user1 to user2, but still have access to user1's SSH_AUTH_SOCK?
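
    A hedged sketch of the usual fix, assuming the filesystem holding /tmp supports POSIX ACLs: have user1 grant user2 entry to the agent socket before switching users.

        # Run as user1, before "sudo -i -u user2". The paths are the example
        # socket from the question; they change with every forwarded session.
        setfacl -m u:user2:x  /tmp/ssh-HbKVFL7799
        setfacl -m u:user2:rw /tmp/ssh-HbKVFL7799/agent.13799

    Since sudo already keeps SSH_AUTH_SOCK, user2's "ssh user3@machine2" should then reach user1's forwarded agent.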

  • Automatically install driver on headless WHSv1 system

    - by Dan Neely
    I have one of the HP MediaSmart Windows Home Server v1 boxes. Its network port appears to have died a few days ago, but the system is not giving any other sign of failure: no activity lights come on at either end of the cable when connected to my gigabit switch; when connected to one of my router's 100-megabit ports the lights turn on, but the box remains unreachable over the network and my router never lists it among DHCP clients. I bought a USB-Ethernet adapter to temporarily get it back online, but the adapter needs a driver to work, which I can't install because the system is headless by design (no video out, no PCI/PCIe slots), with admin access only available via the WHS client or Remote Desktop. Both of those options require network connectivity and are consequently unavailable. I tried copying the drivers to a flash drive, but Windows either didn't look there or none of the drivers provided were suitable (Win8, Win7, or combined XP and Vista). I've been told that a USB WiFi adapter would have the same driver problem.

  • Ubuntu 12.04 glusterfs volume failed to mount at boot time

    - by user183394
    I have just set up 7 KVM guests, all running Ubuntu 12.04 LTS 64-bit minimal server, to test out glusterfs 3.2.5 from the official Ubuntu repo. Two of them form a mirrored pair (i.e. replica 2), and five of them are clients. I am still new to this file system and would like to gain some hands-on experience. The setup was mostly uneventful, until I put the following into each glusterfs client's /etc/fstab:

        192.168.122.120:/testvol /var/local/testvol glusterfs defaults,_netdev 0 0

    where 192.168.122.120 is the IP address of the first "glusterfs server". If I issue either a manual mountall or a mount.glusterfs 192.168.122.120:/testvol /var/local/testvol on the CLI, mount shows that the volume is successfully imported. But once a client is rebooted, after it comes back up the volume is not mounted! I searched the Internet and found this article, but since I am not running both client and server on the same node, IMHO it's not strictly applicable. So, as a kludgy get-around, I put

        sleep 3 && mount.glusterfs 192.168.122.120:/testvol /var/local/testvol

    into each client node's /etc/rc.local. It seems to get the volume mounted on each node, as far as I can tell, but this is quite ugly, and I would appreciate a hint as to how to resolve this glusterfs-not-mounting-at-boot issue correctly. Note that I used the IP address of the first "glusterfs server" although the /etc/hosts of all nodes has been populated with their hostnames; I figured that using the IP address is more robust. --Zack
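
    A slightly cleaner kludge than rc.local is an upstart job keyed to Ubuntu's static-network-up event. A hedged sketch (the file name, retry count and sleep are arbitrary choices, not anything blessed by the glusterfs packaging):

        # /etc/init/mount-testvol.conf -- hypothetical upstart job
        description "Mount the glusterfs test volume once the network is up"
        start on static-network-up
        task
        script
            # retry in case glusterd on 192.168.122.120 is itself still starting
            for i in 1 2 3 4 5; do
                mount.glusterfs 192.168.122.120:/testvol /var/local/testvol && exit 0
                sleep 2
            done
        end script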

  • Access Denied / Server 2008 / Home Directories

    - by Shaun Murphy
    Domain Controller: BDC01 (192.168.9.2). Storage Server: BrightonSAN1 (192.168.9.3). Domain: brighton.local. Last night I moved our users' home directories off of our domain controller onto a storage server using the MS FSMT. I'm getting a mixed bag of errors. The first is that some users cannot log on properly: they can't access logon.vbs in the sysvol folder on the DC and consequently cannot map their drives. I've narrowed that down to a DNS issue, as there was a remnant of our previous DNS server in the DHCP server options and scope options. I'm able to get their drives remapped by browsing to the sysvol folder by IP address as opposed to computer name and manually running the logon.vbs script. The other error I'm getting is Access Denied on a few of the users' home directories. The top-level folder (Home) is shared as normal, and I've removed and re-added the NTFS security a number of times now, including making the user the owner with full control. I've checked each and every individual file and folder in said users' home directories, and they are indeed the owner, yet they are unable to write, though they can read the contents. I'm stumped. This isn't happening for all clients. I'm considering removing their AD accounts, backing up their folders, and re-adding them as a last resort, but obviously I'd like to know why the above errors are happening.
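
    Before resorting to deleting AD accounts, one hedged repair is to retake ownership and rebuild the ACLs on an affected home folder from the storage server. A sketch, with D:\Home\username and BRIGHTON\username standing in for the real path and account:

        rem run elevated on BrightonSAN1 against one affected home directory
        takeown /F D:\Home\username /R /D Y
        icacls D:\Home\username /reset /T /C
        icacls D:\Home\username /grant BRIGHTON\username:(OI)(CI)F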

  • Windows SBS 2008 problem

    - by MadBoy
    I was at a client's site today that has Windows SBS 2008 installed with Symantec Endpoint Protection. The problem is that after I logged in, multiple commands like services.msc and msconfig typed into "Run" would not start anything. For the first 5 minutes I can click around the Start Menu and choose some applications (non-Microsoft ones work; even Control Panel works). But then something happens and I can't click where I want: I can click on the Start Menu and get it active, but I can't choose anything from there; everything is as if blocked. I can right-click on the Desktop and do many things, but most left clicks are blocked. Even when I start Task Manager I am able to see it, but I cannot click it, activate it, or anything. It acts very, very weird. It's a newly installed system, less than a month since it was set up, and it hasn't really been used (it has been down most of the time). I suspect Symantec Endpoint Protection might be faulty, so when I go back there (Wednesday) I will uninstall it, but maybe someone else has some ideas about what may be happening. I doubt there's any virus or anything; Symantec was installed right after setting everything up and running.

  • HAProxy - HTTP and SSL pass-through config

    - by Bill
    I've currently got an HAProxy LB solution in place and everything is working fine. However, we are having an issue with a very few clients who cannot get to our site via HTTPS (SSL): they can browse the site over HTTP, but as soon as they click an absolute HTTPS link they are taken to our home page instead. Wondering if anyone can look at our config below and see if there's something awry. I believe we are on HAProxy 1.2.17.

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            #log loghost local0 info
            maxconn 6144
            #debug
            #quiet
            user haproxy
            group haproxy

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            redispatch
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000
            stats auth  # admin password
            stats uri /monitor

        listen webfarm
            # bind :80,:443
            bind :443
            mode tcp
            balance source
            #cookie SERVERID insert indirect
            #option httpclose
            #option forwardfor
            #option httpchk HEAD /check.cfm HTTP/1.0
            server webA 111.10.10.1
            #server webB 111.10.10.2
            server webB 111.10.10.3
            server webC 111.10.10.4

        listen webfarmhttp :80
            mode http
            balance source
            # option httpclose
            option forwardfor
            # option httpchk HEAD /check.cfm HTTP/1.0
            option httpchk /check.cfm
            server webA 111.10.10.1
            #server webB 111.10.10.2
            server webB 111.10.10.3
            server webC 111.10.10.4

        listen monitor :8443
            mode http
            balance roundrobin
            #cookie SERVERID insert indirect
            option httpclose
            option forwardfor
            #option httpchk HEAD /check.txt HTTP/1.0
            #option httpchk HEAD /check.cfm HTTP/1.0
            server webA 111.10.10.1
            server webB 111.10.10.2

  • How to edit files directly on WebDAV in Windows

    - by phazei
    I have a WebDAV setup with the cPanel web disk. I can connect to it through NetHood, and I can drag and drop files to/from there. What I can't do is simply edit any of the files directly: I need to copy a file somewhere else, edit it, then copy it back. That's essentially what is needed with FTP too, though smart clients will monitor the file, making it easier than WebDAV in the current state I'm using it in. I was under the impression that WebDAV was supposed to let me work on the files as if they were on a local drive, but nothing can actually open the files. How can I go about bringing more functionality to it? Or is this as good as it gets? I have tried

        net use q:\ https://myserver.com:2078
        net use q:\ \\myserver.com@SSL:2078\

    but neither works; both only throw: "System error 67 has occurred. The network name cannot be found." I ultimately want to use TortoiseSVN with the WebDAV share so I can have my working copy running on the server.
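
    For what it's worth, the Windows WebDAV redirector expects the port after a second @ rather than after a colon, so a hedged variant worth trying (the trailing path is a guess at the cPanel web disk layout, and cpaneluser is a placeholder):

        net use Q: \\myserver.com@SSL@2078\ /user:cpaneluser

    If that maps, TortoiseSVN can then be pointed at the Q: drive like any local working copy.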

  • Windows 7 - Windows XP - sharing - why isn't it working?

    - by durumdara
    Hi! This seems to be a "hardware" rather than a "software"/"programming" question, but I need to use this share in my programs, so it is close to programming. We had an XP-based wireless network. The server is XP Professional; the clients are XP Home (notebooks). This was working well with folder sharing (with user rights, not simple share). Then we replaced one of the notebooks with a Win7 x64 notebook. At first it could reach the server, and the other client too. Later I went to other sites and connected to other servers and other networks. And then, when I returned to this network, I saw that I could not connect to this server: I see none of its resources, and when I try to double-click the computer I get a login window where I can type anything; I can never log in. The interesting part:
      - Another XP Home machine can see the server and can log in as guest or as another user.
      - The server can see the XP Home notebook.
      - The Win7 machine can see the notebook's shared folders, and XP Home can see the Win7 shared folders.
      - The server can see the Win7 folders, BUT the Win7 machine cannot see the server's folders, nor its resources.
    The Win7 machine is in a "work" network group, and the group name is not MSHOME. I tried everything on the server: I tried removing the MS client and restoring it with simple sharing, setting a guest password, etc., but I have lost the ability to access this server from Win7. Does anyone have any idea what I need to look at, or what I need to set, to access these resources and use them in my programs? Thanks for every info and link, dd

  • Purge print driver cache on windows 7 with powershell script

    - by Doltknuckle
    [Background] We have been having trouble with our network clients suddenly being unable to print. They get an odd error with a hex code. We determined that something in the driver was messed up and we could resolve the issue by clearing the driver cache and reinstalling the driver. This happens to random computers every so often. We're assuming this is a bug with the latest Dell 2330dn driver, since that is the only model that has this problem.
    [Problem] What we are looking to do is write a PowerShell script that would clear the driver cache and redownload the driver. I see a ton of scripts out there to manage queues, servers, and ports, but nothing for local driver cache management.
    [Current Workaround] Since we have to do this manually, I'll write out the steps so you know what we want this script to replicate:
      1. Disable the print spooler.
      2. Restart the machine.
      3. Delete the contents of C:\windows\system32\spool\drivers\w32x86.
      4. Enable the print spooler and start the service.
      5. Delete the network printer object and re-add the network printer from the server.
    [Request] I'm good enough with PowerShell to translate the above workaround into a pair of scripts. I'd like to find a more elegant solution than my current workaround. Any suggestions?
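
    As a starting point, here is a hedged sketch of that workaround in the PowerShell 2.0 that ships with Windows 7 (run elevated; \\printserver\Dell2330dn is a placeholder name, and the reboot in the middle is why it splits into the pair of scripts the question mentions):

        # --- script 1: run before the reboot ---
        Set-Service -Name Spooler -StartupType Disabled  # keep the spooler from grabbing the drivers at boot
        Stop-Service -Name Spooler -Force
        Restart-Computer -Force

        # --- script 2: run after the reboot ---
        # with the spooler down, the driver cache is not locked
        Remove-Item "$env:windir\system32\spool\drivers\w32x86\*" -Recurse -Force
        Set-Service -Name Spooler -StartupType Automatic
        Start-Service -Name Spooler
        # drop and re-add the network printer so the driver is re-downloaded
        $net = New-Object -ComObject WScript.Network
        $net.RemovePrinterConnection("\\printserver\Dell2330dn")
        $net.AddWindowsPrinterConnection("\\printserver\Dell2330dn")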

  • Windows 2008 RemoteApp client disconnects within a matter of minutes

    - by Jeroen Wilke
    I'm having an odd problem with Windows 2008 TS, and remote applications specifically. The situation is as follows:
      - TS idle timeout is disabled via GPO.
      - TS terminates disconnected sessions after 1 hr (via GPO).
    My users can log on to the terminal server and get a full desktop, OR use .rdp files that give access to a few remote applications. When a user connects to a full desktop, everything is fine and dandy: they remain logged on indefinitely, and when they disconnect, the session is terminated after an hour. However, when a user connects using a RemoteApp link, the client seems to disconnect after only a few minutes of inactivity; when you click the window, the session reconnects. Event IDs on the TS server:
      - 4779: generated when a user disconnects from an existing Terminal Services session, or when a user switches away from an existing desktop using Fast User Switching.
      - 4778: generated when a user reconnects to an existing Terminal Services session, or when a user switches to an existing desktop using Fast User Switching.
    Users are connecting directly to 3389, not through a TS gateway at the moment. This behavior is consistent across the different clients we have: full desktop is fine, RemoteApp constantly disconnects. The .rdp file used doesn't list any interesting parameters aside from what application to launch and where to find it. Can someone explain how there can be a difference in behaviour between full desktop and RemoteApp, since essentially they use the exact same client? Regards, Jeroen

  • 24TB RAID 6 configuration

    - by Phil
    I am in charge of a new website in a niche industry that stores lots of data (10+ TB per client, growing to 2 or 3 clients soon). We are considering ordering about $5000 worth of 3TB drives (10 in a RAID 6 configuration and 10 for backup), which will give us approximately 24 TB of production storage. The data will be written once and remain unmodified for the lifetime of the website, so we only need to do a backup one time. I understand basic RAID theory, however I am not experienced with it. My question is, does this sound like a good configuration? What potential problems could this setup cause? Also, what is the best way to do a one-time backup? Have two RAID 6 arrays, one for offsite backup and one for production? Or should I backup the RAID 6 production array to a JBOD? EDIT: The data server is running Windows 2008 Server x64. EDIT 2: To reduce rebuild time, what would you think about using two RAID 5's instead of one RAID 6?
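
    For the capacity figure, the arithmetic checks out: RAID 6 yields (N - 2) drives of usable space, so 10 x 3 TB gives (10 - 2) x 3 TB = 24 TB before filesystem overhead. The two-RAID-5 alternative in EDIT 2 also yields 2 x (5 - 1) x 3 TB = 24 TB, with faster rebuilds per array but only one-drive fault tolerance in each array instead of two.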

  • Windows RDP cannot connect to x64 server from XP SP3+

    - by Tom
    Hi all, I have a strange problem that I can't seem to find the answer to anywhere online. The issue has to do with using Windows RDP to connect to our servers. Here is what works:
      - XP/Vista client (any SPs) connecting to a 32-bit Server 2003 machine
      - XP (SP2 and lower) client connecting to a 64-bit Server 2003 machine
    Here is what does not work:
      - XP SP3+/Vista client connecting to a 64-bit Server 2003 machine
    It appears that the issue is that XP SP3 and Vista clients cannot connect to x64 Server 2003 boxes. After entering the username/password, we get an error message saying the below, and the connection drops: "To log on to this remote computer, you must have Terminal Server User Access permissions on this computer. By default, members of the Remote Desktop Users group have these permissions. If you are not a member of the Remote Desktop Users group or another group that has these permissions, or if the Remote Desktop Users group does not have these permissions, you must be granted these permissions manually." The issue is that the user is a member of the Administrators group, which has permission. Also, logging in using the same username, but from an XP SP2 machine, has no problems at all. I hope I explained this well enough, and any help/insight that can be given would be greatly appreciated. Thanks, Tom

  • Help Email Account Management among multiple users

    - by CogitoErgoSum
    So, I preface this by saying this may belong in IT Security; not too sure, feel free to move it. Currently we have an email account, [email protected], hosted via Google Apps (as is all our email). We had an incident where we had to terminate an employee. This employee, however, had the password for this account, as we have 20-30 people utilizing it at any given point to manage customer emails etc. Thinking on this, I feel there must be a better way to manage access. With Google you can associate up to 10 email accounts with another; the problem is we have more like 20-30 people going. We were evaluating tools such as Salesforce and Assistly, where people have their own login credentials and the system contains the appropriate SMTP information for the [email protected] address, so emails are sent from it rather than from a user's personal account. Aside from those options, does anyone have any other thoughts? One suggestion floated was moving everyone to desktop clients and saving the password info there, so they could only log in from their physical workstation, but we may have situations where we'd like employees to work remotely. Does anyone have experience with this sort of system, where ~20-30 people respond from one email box, and with how to manage security and access?

  • Do you need to advertise an AFP service via Avahi for an Ubuntu Server to show up in OSX Finder?

    - by James
    I am only advertising an NFS share plus the "model", and I don't want to install extra services on the server unless I have to (i.e. netatalk), as it is used solely for NFS exports. Currently there is no entry in Finder under "Shared" with the Avahi config below.

        serveradmin@FILESERVER:/etc/avahi/services$ cat nfs.service
        <?xml version="1.0" standalone='no'?><!--*-nxml-*-->
        <!DOCTYPE service-group SYSTEM "avahi-service.dtd">
        <service-group>
          <name replace-wildcards="yes">%h</name>
          <service>
            <type>_nfs._tcp</type>
            <port>2049</port>
            <txt-record>path=/Volumes/StoragePool</txt-record>
          </service>
          <service>
            <type>_device-info._tcp</type>
            <port>0</port>
            <txt-record>model=Xserve</txt-record>
          </service>
        </service-group>

    Server: Ubuntu 12.04.01 x64. Clients: OSX 10.6.8, 10.7.5, 10.8.2. The goal is to advertise that NFS share, then assign a really old Mac model code, like a PowerMac, and switch out the icon for a more "Linux-server-y" one, plus allow users to connect to NFS in a manner they are familiar with, like our other Xserve servers. I think Avahi is working in general: if I do nfs://FILESERVER.local/Volumes/StoragePool it will connect fine. Any ideas?
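
    For what it's worth, the Finder sidebar keys on file-sharing service types such as _afpovertcp._tcp (or _smb._tcp) rather than _nfs._tcp, so the host may simply never qualify for "Shared" with an NFS record alone. A hedged sketch of an extra service file that only advertises AFP (548 is AFP's standard port; with no netatalk behind it, the listing is cosmetic and clients would still connect via nfs://):

        <?xml version="1.0" standalone='no'?>
        <!DOCTYPE service-group SYSTEM "avahi-service.dtd">
        <service-group>
          <name replace-wildcards="yes">%h</name>
          <!-- advertised purely so Finder lists the host -->
          <service>
            <type>_afpovertcp._tcp</type>
            <port>548</port>
          </service>
        </service-group>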

  • Apache, suexec, PHP, suPHP

    - by Chris_K
    While I'm quite comfortable as a Linux user, my Linux admin-fu is a bit weak. Thus, I'm here looking for guidance with a CentOS server I'm about to build. I need to set up an Apache2 web server for a few of our clients. I want each client's web content to be under their home directory (UserDir in apache.conf, right?) for the static HTML sites. I want Apache to run as the client (suexec?). Some of their stuff will be PHP apps, and I'm under the impression I'll want to look at suPHP as well then. So basically I want to look like a small version of a shared web hosting company. Considering how common those are, I thought I'd easily find a nice current how-to guide on setting this all up, but so far I've had very little luck. I suspect my search words are off. So the questions (feel free to answer any or all):
      - Anyone have some solid links to current/modern guides that would help me set this all up? No, the Apache documentation site is not a guide ;-)
      - Since I have a mix of static sites and PHP apps, do I want/need both suexec and suPHP installed? If so, does that introduce any challenges I should be aware of?
      - Should I be looking at other options instead of suexec and suPHP?
    I plan to give the end users SSH, SFTP or SCP access to their stuff (if that affects anything). Thanks in advance for your help.
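
    As one possible starting point, a hedged per-client virtual host combining the two modules; the directives are mod_suexec's SuexecUserGroup and mod_suphp's suPHP_* family, and client1/example.com are placeholders:

        <VirtualHost *:80>
            ServerName client1.example.com
            DocumentRoot /home/client1/public_html
            # CGI/SSI run as the client via suexec
            SuexecUserGroup client1 client1
            # PHP scripts run as the client via suPHP
            suPHP_Engine on
            suPHP_UserGroup client1 client1
            suPHP_AddHandler application/x-httpd-suphp
            AddHandler application/x-httpd-suphp .php
            <Directory /home/client1/public_html>
                AllowOverride All
            </Directory>
        </VirtualHost>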

  • SQL Server Instance login issue

    - by reallyJim
    I've just brought up a new installation of SQL Server 2008. I installed the default instance as well as one named instance. I'm having a problem connecting to the named instance from anywhere besides the server itself, with any user besides 'sa'. I am running in mixed mode. I have a login/user with a known username. Using that login, I can connect properly when directly on the server. When I attempt to log in from anywhere else, I receive "Login failed for user ''", with error 18456. In the log file on the server, I see a reason that doesn't seem to help: "Reason: Could not find a login matching the name provided." However, that login DOES exist, as I can use it locally. There are no further details about the error. Where can I start to find something to help me with this? I've tried deleting and recreating the user, as well as just creating a new one from scratch; same result: locally fine, remotely an error. EDIT: Partially resolved. I'm now past the base issue: the clients were trying to connect via the default instance. I don't know why. So, once the proper ports were opened in the firewall and a static port was assigned to the named instance, I can now connect, BUT ONLY if I specify the connection as Server,Port. SQLBrowser is apparently not helping/working in this case. I've verified it IS running, and done a stop/restart after my config changes, but no difference yet.
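
    A hedged illustration of that Server,Port workaround from the client side (host, port and credentials are placeholders), plus the usual suspect: SQL Browser answers on UDP 1434, which firewalls often block even when the TCP ports are open.

        rem connect straight to the named instance's static port, bypassing SQL Browser
        sqlcmd -S tcp:dbserver,1435 -U appuser -P secret
        rem equivalent ADO.NET connection string:
        rem Server=dbserver,1435;Database=SomeDb;User Id=appuser;Password=secret;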

  • Xorg eating up too much RAM on Ubuntu 9.10 box

    - by Yang
    Xorg is eating up 444MB of 2GB total RAM on my Ubuntu 9.10 x86_64 machine with nvidia drivers installed for the nvidia G86 (GeForce 8300 GS). top shows:

        top - 18:21:41 up 6 days, 2:40, 9 users,  load average: 0.46, 1.12, 1.22
        Tasks: 266 total, 3 running, 262 sleeping, 1 stopped, 0 zombie
        Cpu(s): 8.4%us, 2.0%sy, 0.0%ni, 89.1%id, 0.5%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:   2055736k total, 1965136k used,   90600k free,   3952k buffers
        Swap:   979924k total,  979908k used,      16k free, 102636k cached

          PID USER  PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
         1432 root  20  0 1154m 442m 7492 S    8 22.0 32:56.97 Xorg
        18462 yang  20  0 1001m 219m 8356 S    0 10.9  5:13.25 chrome
        24099 yang  20  0  865m  83m  13m S    0  4.2  0:06.91 chrome

    xrestop shows:

        xrestop - Display: :0.0
                  Monitoring 47 clients. XErrors: 0
                  Pixmaps: 40430K total, Other: 142K total, All: 40573K total

        res-base Wins GCs Fnts Pxms Misc  Pxm mem Other   Total  PID Identifier
        1c00000    21  46    1   19  697    9128K   18K   9146K 3169 x-nautilus-desktop
        1000000     4   3    0   17  194    9000K    4K   9004K 3134 gnome-settings-daemon
        1600000    51   2    1   25 1100    7648K   28K   7676K    ? compiz

    For comparison, here's my other Ubuntu box, which also has compiz etc. enabled but with an ATI RV370 (Radeon X300SE):

        top - 18:18:18 up 58 days, 4:27, 9 users,  load average: 0.00, 0.00, 0.00
        Tasks: 224 total, 1 running, 223 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.3%us, 0.3%sy, 0.0%ni, 98.8%id, 0.5%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:   1024964k total,  987124k used,   37840k free, 247012k buffers
        Swap:  2048276k total,   94296k used, 1953980k free, 264744k cached

          PID USER  PR NI  VIRT  RES  SHR S %CPU %MEM     TIME+ COMMAND
        24324 yang  20  0 61936  35m 6364 S    0  3.5   4:35.84 nxagent
         1768 ntop  20  0  190m  32m 5388 S    1  3.2 283:36.15 ntop
         1178 root  20  0 60588  29m 1788 S    0  3.0   5:48.89 console-kit-dae
        ...
         1315 root  20  0  343m 4956 4020 S    0  0.5   3:43.87 Xorg

    Any ideas on how to get to the bottom of this? (i.e. not "Log out"/"Reboot") Thanks in advance.

  • How to configure DNS server to forward queries about particular domain AND all of its subdomains

    - by user71061
    I have a DNS server (a Linux box with bind9) which is authoritative for some domains and forwards all other queries to my ISP's external DNS server. So far, no problem. Now I want queries about some specific domains to be forwarded to my internal DNS server, e.g.:

        zone "some_domain" {
            type forward;
            forwarders { some_internal_dns_ip; };
        };

    So far still no problem; everything works. But then I also want to forward some reverse DNS queries to my internal DNS, so I added:

        zone "16.172.in-addr.arpa" {
            type forward;
            forwarders { some_internal_dns_ip; };
        };

    And this doesn't work as I expect. Queries about "16.172.in-addr.arpa" itself (for example 1.16.172.in-addr.arpa) are resolved correctly, but reverse queries about a full address (for example 1.1.16.172.in-addr.arpa) are not. I understand that my server should use some recursive query here, but I could not configure it. I have already tried adding the following options:

        recursion yes;
        allow-recursion { 127.0.0.1; };
        allow-recursion-on { 127.0.0.1; };

    but with no success. (I have used the loopback address here because I need this functionality only for my DNS host, not for its clients.) Any suggestions?
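
    One hedged tweak: by default a forward zone may fall back to ordinary iterative resolution when the forwarder does not answer usefully, and for RFC 1918 reverse space that fallback dead-ends in the public tree. Pinning the zone with forward only rules that out:

        zone "16.172.in-addr.arpa" {
            type forward;
            forward only;
            forwarders { some_internal_dns_ip; };
        };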
