Search Results

Search found 20684 results on 828 pages for 'ad hoc network'.


  • cannot connect to <server_name>\sqlexpress

    - by Jackson Sunuwar
    I have tried disabling the firewall and checked that sqlbrowser is started, but for some reason I cannot connect to my database, called server_name\sqlexpress. I have a virtual machine with a full-scale MS SQL Server 2008 R2 running on it, and I have several other VMs running sqlexpress. They run fine and I can connect to them using sqlexpress, but when I try to access the sqlserver machine I get this error:

      A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (Microsoft SQL Server, Error: -1)

    Digging deeper into the error, I found this:

      Error Number: -1
      Severity: 20
      State: 0

    and finally this:

      Program Location:
      at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
      at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
      at System.Data.SqlClient.TdsParser.Connect(ServerInfo serverInfo, SqlInternalConnectionTds connHandler, Boolean ignoreSniOpenTimeout, Int64 timerExpire, Boolean encrypt, Boolean trustServerCert, Boolean integratedSecurity, SqlConnection owningObject)
      at System.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, Boolean ignoreSniOpenTimeout, Int64 timerExpire, SqlConnection owningObject)
      at System.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(String host, String newPassword, Boolean redirectedUserInstance, SqlConnection owningObject, SqlConnectionString connectionOptions, Int64 timerStart)
      at System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(SqlConnection owningObject, SqlConnectionString connectionOptions, String newPassword, Boolean redirectedUserInstance)
      at System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, Object providerInfo, String newPassword, SqlConnection owningObject, Boolean redirectedUserInstance)
      at System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection)
      at System.Data.ProviderBase.DbConnectionFactory.CreateNonPooledConnection(DbConnection owningConnection, DbConnectionPoolGroup poolGroup)
      at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
      at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
      at System.Data.SqlClient.SqlConnection.Open()
      at Microsoft.SqlServer.Management.SqlStudio.Explorer.ObjectExplorerService.ValidateConnection(UIConnectionInfo ci, IServerType server)
      at Microsoft.SqlServer.Management.UI.ConnectionDlg.Connector.ConnectionThreadUser()

    The firewall is turned off on the VM that's running mssqlserver, and I turned off the firewall on one of the VMs running sqlexpress, but I still get the error. Can someone please help? Thank you.
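
    Error 26 means the client never resolved the named instance, a lookup the SQL Browser service answers over UDP 1434, so that path is the first thing to verify from a client machine. A minimal sketch of checks, assuming default service names and ports:

      rem confirm the browser service is running on the SQL Server VM
      sc query SQLBrowser
      rem confirm something is listening for instance lookups (UDP 1434)
      netstat -an | find "1434"
      rem force a TCP connection to bypass Named Pipes and browser lookup quirks
      sqlcmd -S tcp:server_name\sqlexpress -E -Q "SELECT @@SERVERNAME"

    If the sqlcmd line works while Management Studio doesn't, the instance is reachable and the problem is in instance-name resolution rather than the firewall.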


  • What steps should I take to debug this non-starting hvm virtual machine?

    - by Ophidian
    I have a dom0 machine running CentOS 5.4 with all the latest updates, using Xen as my hypervisor. I am using Xen in part because this machine was set up prior to KVM being included in RHEL, and in part because KVM's network bridging configuration is not nearly as simple as Xen's. The dom0 machine is headless and I do all of my VM management via virsh from the command line. I have two hvm domUs:

      - A web server running CentOS 5.4
      - A mail server running Gentoo

    Both VMs are backed by LVs on the dom0 but do not use LVM in the domU. Both have virtually identical libvirt configurations (differing by expected things like name, UUID, NIC MAC, VNC port, etc). The web server domU (WSdomU hereafter) does not start since applying the most recent kernel update (kernel-xen-2.6.18-164.15.1.el5.x86_64 and kernel-2.6.18-164.15.1.el5.x86_64 for the dom0 and WSdomU respectively). By "not start" I mean it appears to be running but it does not use any CPU cycles, does not bring up a graphical console, and does not respond on the network. The WSdomU is listed as "no state" rather than the normal "running" or "blocked" in xentop. The mail server domU starts fine and functions normally. Here are the steps I have taken so far that did not solve the problem:

      - Reboot the dom0 to see if things come up on their own
      - Check xen dmesg on dom0
      - Check xend logs (a cursory viewing did not show anything blatant; specific suggestions of things to look for would be appreciated)
      - Attempt to connect to the WSdomU's graphical (VNC) console from the dom0
      - Shut down the mail server domU and attempt to start the WSdomU
      - Check the SELinux labels on the backing LVs (they're the same)
      - Set SELinux to permissive and attempt to start the WSdomU
      - Use virsh edit to try tweaking the WSdomU config
      - virsh undefine, reboot, virsh define the WSdomU config
      - dd the WSdomU LV to an .img file, copy it to my Fedora desktop and run it under KVM (works fine)

    What steps should I take next to debug this? I will edit in any additional configuration requested in the comments.
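
    For an hvm guest stuck in "no state", the device-model and hypervisor logs are usually more telling than xend's main log. A sketch of places to look, assuming the default CentOS 5 log locations (MaildomU is a placeholder for the mail server's actual libvirt name):

      # qemu-dm log for the hvm guest's start attempts
      tail -50 /var/log/xen/qemu-dm-WSdomU.log
      # xend and hotplug errors around the failed start
      grep -i wsdomu /var/log/xen/xend.log | tail -50
      # hypervisor-level messages
      xm dmesg | tail -50
      # diff the two domUs' configs exactly as libvirt renders them
      virsh dumpxml WSdomU > /tmp/ws.xml
      virsh dumpxml MaildomU > /tmp/mail.xml
      diff /tmp/ws.xml /tmp/mail.xml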


  • How can I cause Task Scheduler to "fail" if a dialog box returns a certain result?

    - by Roger
    I'm working on a VBScript to do a weekly reboot of all machines on our network. I want to run this script via Task Scheduler. The script runs at 3:00 AM, but there is a small chance that users may still be on the network at that time, and I need to give them the option to terminate the reboot. If they do so, I would like the reboot to occur the next night at 3:00 AM. I've set Task Scheduler up to repeat in this way. So far, so good. The problem is that if the user selects "Cancel" in my script, Task Scheduler does not see my task as failed, and won't run it again the next night. Any ideas? Can I pass an error code to Task Scheduler or otherwise abort the task via VBScript? My code is below:

      Option Explicit
      Dim objShell, intShutdown
      Dim strShutdown, strAbort

      ' -r = restart, -t 600 = 10 minutes, -f = force programs to close
      strShutdown = "shutdown.exe -r -t 600 -f"
      Set objShell = CreateObject("WScript.Shell")
      objShell.Run strShutdown, 0, False

      ' go to sleep so the message box appears on top
      WScript.Sleep 100

      ' message box offering to abort the shutdown
      intShutdown = MsgBox("Computer will restart in 10 minutes. Do you want to cancel computer restart?", vbYesNo + vbExclamation + vbApplicationModal, "Cancel Restart")
      If intShutdown = vbYes Then
          ' abort shutdown
          strAbort = "shutdown.exe -a"
          Set objShell = CreateObject("WScript.Shell")
          objShell.Run strAbort, 0, False
      End If
      WScript.Quit

    Appreciate any thoughts.
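
    WScript.Quit accepts an exit code, and Task Scheduler records a nonzero code as a failed run result, so one approach (a sketch; it assumes the task's settings are configured to retry a failed run) is to replace the tail of the script above so it reports failure on cancel:

      If intShutdown = vbYes Then
          strAbort = "shutdown.exe -a"
          objShell.Run strAbort, 0, False
          WScript.Quit 1   ' nonzero exit code -> Task Scheduler sees a failure
      End If
      WScript.Quit 0       ' zero -> success, the reboot proceeds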


  • Can Windows logoff events be tracked?

    - by Massimo
    I'm working on an application to track network user logon/logoff events in an Active Directory domain; the application will work by auditing security logs on domain controllers. Auditing logon events can get somewhat tricky, but it can successfully be done. My problem: how can I track logoff events? Based on some research I've done, it looks like these events are only logged locally on workstations, but not on DCs; also, the "lastLogoff" attribute exists on AD user objects, but it's not actually used by anyone. This is a very specific question: is something logged on DCs when a user logs off from a domain workstation? To clarify: I'm not interested in other auditing methods, I can't deploy logon/logoff scripts and I can't install anything anywhere; I also know opened and closed network sessions are logged, but this is not what I'm looking for. I need to audit interactive logons and logoffs to domain workstations, and I can do this by only reading domain controllers' security logs; reading each workstation's local event logs is out of the question. If this can't be done, it's ok; but I need a clear answer on that. Can this be done? If yes, how?
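
    For comparison, the logon half of this is visible on DCs as Kerberos/account-logon events; a quick way to sample what a DC actually records (a sketch; event IDs 4768/4624 are the Server 2008 numbering, Server 2003 uses 672/528):

      wevtutil qe Security /q:"*[System[(EventID=4768 or EventID=4624)]]" /c:10 /f:text

    Nothing comparable is reliably generated against the DC at logoff time, which is exactly why the question hinges on whether any such event exists at all.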


  • Get Internal IP Address From DHCP Hostname

    - by ell
    I would like to try and get an internal IP address of one of the computers on my network. The reason for this is I have a little home server box downstairs, but every time I want to SSH into it I have to open my router configuration and go on the DHCP client table to look up its IP address. For example, I would like to be able to type ssh ell-server instead of ssh 192.168.1.105 or whatever it happens to be. My network configuration is like so:

      - Router downstairs that is connected to the Internet and is running a DHCP server
      - My server computer (ell-server), a headless PC connected to the router via ethernet cable, running Ubuntu 11.04 Server Edition
      - My laptop upstairs (ell-laptop), running Ubuntu 11.10 Desktop Edition, connected wirelessly
      - Other (irrelevant) computers - 2 x Windows XP, 1 x Xubuntu - all connected with cables

    (It seemed to me the method of connection isn't useful information but I put it in anyway - just in case. If I have missed any information please tell me.) Do I have to run a DNS server on one of my computers? If so, which one? And does that mean I will have to run a DDNS client on each computer? Thanks in advance, ell.
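
    No DNS server is needed if the router can pin a static DHCP lease to ell-server's MAC address; once the IP stops moving, either of these client-side options makes the name resolve (a sketch, assuming 192.168.1.105 stays reserved for the server):

      # option 1: an /etc/hosts entry on each client
      192.168.1.105   ell-server

      # option 2: ~/.ssh/config on the laptop, so "ssh ell-server" just works
      Host ell-server
          HostName 192.168.1.105
          User ell

    Ubuntu machines can often skip both: avahi resolves ell-server.local out of the box on the same LAN.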


  • WebDAV System Error 67 in Windows XP

    - by Nixphoe
    Issue: I'm having issues getting WebDAV to work from the command line in Windows XP, both Service Pack 2 and Service Pack 3:

      C:\>net use z: https://mywebsite.com/software/
      System error 67 has occurred.
      The network name cannot be found.

    I have tested this against two WebDAV servers, Apache on Ubuntu and IIS on Windows Server 2003. Both give the same result.

    Things that haven't worked: I've installed the following Microsoft KB on my XP machines, to no avail. I've also set the following reg key:

      HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters
      UseBasicAuth REG_DWORD 1

    I've tried the following variations, using a few workarounds I've dug up on the web, all producing the same result:

      net use z: https://mywebsite.com/software
      net use z: https://mywebsite.com/software#
      net use z: https://mywebsite.com/software/
      net use z: https://mywebsite.com/software/#

    I've also tried all the above combinations adding a user into it (/user:user and /user:user@domain), using http:// rather than https://, and "\\server.com@ssl:443\folder". I've gone over networking-related issues as @WesleyDavid had pointed out.

    Things that do work: I can connect to the WebDAV folder via the URL, and with a mapping in Network Places, with XP. But the command line doesn't work (I need a drive letter). Windows 7 works perfectly with the same command.

    My dilemma: I need this to work with a drive letter. What else can I try to get this working?
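
    On XP the net use path to WebDAV goes through the WebClient service, so its state is worth confirming before chasing server-side causes (a sketch):

      sc query WebClient
      net start WebClient

    XP's WebClient redirector is also far more limited than Vista/7's (quirky authentication, poor SSL support), which would be consistent with Windows 7 succeeding where XP fails with the same command.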


  • Can't get rsync over sftp to work

    - by Patrik
    I'm trying to set up a backup system from an Ubuntu server to a Synology NAS (DS413j) using rsync and sftp. I have created a user for this that we can call ubuntu-backup. I have a directory in ubuntu-backup's home directory called www where the backup will be saved. I have enabled Network Backup in DSM. The user ubuntu-backup has full access to its home directory. Here is my rsync config file on the Synology NAS:

      #motd file = /etc/rsyncd.motd
      #log file = /var/log/rsyncd.log
      pid file = /var/run/rsyncd.pid
      lock file = /var/run/rsync.lock
      use chroot = no

      [NetBackup]
      path = /var/services/NetBackup
      comment = Network Backup Share
      uid = root
      gid = root
      read only = no
      list = yes
      charset = utf-8
      auth users = root
      secrets file = /etc/rsyncd.secrets

      [ubuntu-backup]
      path = /volume1/homes/ubuntu-backup/www
      comment = Ubuntu Backup
      uid = ubuntu-backup
      gid = users
      read only = false
      auth users = ubuntu-backup
      secrets file = /etc/rsyncd.secrets

    The permissions on /volume1/homes/ubuntu-backup/www are ubuntu-backup:users 777. Here is the command I'm running:

      rsync -aHvhiPb /var/www/ [email protected]:./

    The result:

      sending incremental file list
      ERROR: module is read only
      rsync error: syntax or usage error (code 1) at main.c(1034) [Receiver=3.0.9]
      rsync: connection unexpectedly closed (9 bytes received so far) [sender]
      rsync error: error in rsync protocol data stream (code 12) at io.c(605) [sender=3.0.9]

    If I run this:

      rsync -aHvhiPb /var/www/ [email protected]

    it looks like it's sending files, with no errors. But I can't find anything on the NAS.
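
    A single colon in the destination makes rsync run over a remote shell (the rsyncd.conf modules never come into play), while a double colon or an rsync:// URL talks to the rsync daemon and its named modules; that difference would explain both symptoms above. A sketch of the daemon form, with the NAS address as a placeholder:

      rsync -aHvhiPb /var/www/ ubuntu-backup@<nas-address>::ubuntu-backup/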


  • Windows 7 fails to connect to the internet a few minutes after startup

    - by SageTheGreat
    Problem: Earlier today, when I turned on my desktop computer, my internet connection worked fine. Cryptocurrency miners were connecting and hashing as usual and I could browse websites. But after a few minutes, my miner failed, indicating that there was something wrong with my internet connection. I tried refreshing my browser; it got stuck at "resolving host" and then presented an error. After that, I couldn't browse sites anymore. The weird thing is that the network icon in Windows 7 shows no signs of problems.

    Solutions tried:

      - Restarted my computer without doing anything: problem persists.
      - Tried the Windows network troubleshooter: reported no problems.
      - Stopped Bonjour: still no progress.
      - Booted Windows using Last Known Good Configuration: still no progress.
      - Restarted the modem: no change.

    Current status: I did a system restore to a point before installing the latest updates from Microsoft, because earlier today I installed some updates and after that the problem started to appear. (After the system restore, same problem.) Latest program installed before the problem: MS Visual Studio 2013 (but the internet still worked fine after the install). I hope someone could provide answers on this problem. It is my first time encountering this.

    EDIT: Additional info:

      OS: Windows 7 SP1 64-bit
      AV: Avast Free Antivirus
      Internet connection type: Ethernet

    It appears that my laptop can't even connect to the machine through Remote Desktop. My laptop and phone on WiFi work fine and can connect to the internet.

    EDIT 2: Whenever I boot into Safe Mode, my internet is fine.
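
    Safe Mode working while a normal boot fails points at a third-party service or layered network driver (the Avast filter driver is a common suspect) rather than the physical link. Resetting the user-mode network stack is a cheap test (a sketch; run from an elevated prompt and reboot afterwards):

      ipconfig /flushdns
      netsh winsock reset
      netsh int ip reset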


  • Port Forwarding failing only to Ubuntu servers from Draytek router

    - by Rufinus
    I know this is a kind of unusual question, but Draytek support (which is very eager to solve the issue) seems to have reached its limits. Scenario:

      - Draytek Vigor multi-WAN router with current firmware
      - Multiple WAN IP aliases on one of the WAN ports
      - DMZ (or port forwarding, doesn't matter) from a WAN IP alias to an internal host

    Currently I have two internal hosts:

      - 192.168.0.51 (Ubuntu)
      - 192.168.0.53 (Debian)

    Both should be accessible from outside via one of the WAN IP aliases, and both are accessible on their internal IPs at all times (!). If the router gets restarted, both external IPs forward to their internal hosts. But after a few minutes up to 2 hours, the Ubuntu host is no longer reachable via its external interface. The Debian host, on the other hand, stays reachable. In what way does Ubuntu differ from Debian? I know of at least one user with the exact same problem, see http://ubuntuforums.org/showthread.php?p=10994279. Any ideas? TIA

    EDIT: Via ping diagnostics directly on the Vigor, 192.168.0.53 is pingable, 192.168.0.51 is not, but both hosts are perfectly reachable from anywhere inside the network. If I restart Ubuntu networking it works again for a short time. I'm out of ideas.

    EDIT 2: After further investigation, I noticed a ping from .51 to the network (or a host on the internet) is enough to make the port forwarding work again. So I will add a cronjob as a "keep-alive" ping. This will solve the problem, but the reason for this behavior is still in the dark. Thanks to all commenters.
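
    For the keep-alive workaround described in EDIT 2, a single crontab line on the Ubuntu host is enough (a sketch; 192.168.0.1 is assumed to be the Vigor's LAN address):

      */5 * * * * ping -c 1 192.168.0.1 > /dev/null 2>&1

    The symptom itself (reachable again as soon as .51 sends any packet) is consistent with the router's ARP entry for .51 expiring faster than Ubuntu refreshes it.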


  • Need to get a list of all users within a subnet of servers

    - by mikedopp
    I am looking to write a batch or VBS script to gather all users (local to the server, i.e. administrators or a local account, not AD users) on a collection of servers inside my network. I assume I could do this by subnet. I could even put the server names into a CSV text file for the script to read from and report back to. Lots to ask. I would use net user, however I run into local access only. Ideas? Or too many security walls to work?
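
    WMI can enumerate local accounts remotely, which gets around net user's local-only limit, provided the querying account has admin rights on the targets. A batch sketch, assuming a servers.txt with one hostname per line:

      @echo off
      rem report local (non-domain) accounts for every server in servers.txt
      for /F %%S in (servers.txt) do (
          echo === %%S === >> localusers.txt
          wmic /node:"%%S" useraccount where "LocalAccount='True'" get Name,Disabled >> localusers.txt
      )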


  • Using OSX home directories from linux

    - by Steffen
    I'm running an OSX (Snow Leopard) server with OpenDirectory, which is nothing else than a modified OpenLDAP with some Apple-specific schemas. However, I want to reuse this directory on some of my Linux (Debian Squeeze) boxes. It's no problem to authenticate against OSX's LDAP server; this works fine already. What I struggle with is the way the home folders are specified in OSX. If I query the passwd database on one of my Linux machines, the OSX-imported entries look like this:

      myaccount:x:1034:1026:Firstname Lastname:/Network/Servers/hostname.example.com/Volumes/MyShare/Users/myaccount:/bin/bash

    While those network home folders might be fine for OSX clients, I don't want those server-based paths on my Linux machines. I saw that there is an NFSHomeDirectory attribute in the OSX user inspector, but if I change this, the whole user home path gets changed. Since my users should be able to log in on both systems, OSX and Linux, this is not what I want. Does anyone have an idea how I must configure OSX to make my Linux machines use home folders like /net/myaccount and leave the configuration for OSX clients untouched?
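
    One workaround that leaves the directory data untouched is to make the OSX-style path resolve on the Linux side instead. A rough sketch, assuming the share is NFS-exported and reachable through the automounter under /net (untested against Open Directory; the hostname is the placeholder from the entry above):

      sudo mkdir -p /Network/Servers
      sudo ln -s /net/hostname.example.com /Network/Servers/hostname.example.com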


  • How do I load balance between two Linux machines?

    - by William Hilsum
    Inspired by the Stack Overflow network, I am now obsessed with HAProxy and trying to use it myself. At the moment, each HAProxy box has got two network cards (well, two configured; I can have a maximum of 4 and wasn't sure if they needed their own one for management between the boxes). On both machines, the backend one (eth1) is a private IP that goes to a switch connected to the webservers, and the front-facing one (eth0) has a public internet IP that is routed straight through. In addition, I have created an additional virtual IP for eth0 called eth0:0, which has got a third public IP address. I just about get how to use it for load balancing between multiple web servers that are behind it, but I am failing to load balance between the two HAProxy boxes - they appear to fight for the virtual IP, but this does not appear to be a smart solution. Now, by using the virtual shared IP address, this solution appears to work and does seem to give me maximum uptime, but is this the correct way to do it, or is there a smarter way? I have been looking at other Linux packages such as keepalived, but I have only been using Linux (server) for a week now and am at the limits of my understanding. Is there anyone who has done this before, and can you advise anything for maximum uptime?
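
    keepalived is indeed the standard tool for this: both boxes run it, VRRP elects a single master, and the floating IP moves cleanly on failure instead of being fought over. A minimal sketch of the master node's /etc/keepalived/keepalived.conf (the backup box uses state BACKUP and a lower priority; the address below is a placeholder):

      vrrp_instance VI_1 {
          state MASTER
          interface eth0
          virtual_router_id 51
          priority 101
          advert_int 1
          virtual_ipaddress {
              203.0.113.10/24 dev eth0
          }
      }

    With this in place the manually configured eth0:0 alias is no longer needed; keepalived adds and removes the address itself.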


  • Hiding subfolders from users with Windows Server security

    - by Frans
    Using Windows Server 2008. I would like to allow all users to map to a common network drive and be able to browse it. But I only want them to be able to see the subfolders they actually have access rights to. Is this doable?

    Example: I have a share with two folders on it:

      \\domain\share\FolderA
      \\domain\share\FolderB

    With three different security groups, I would like to map a network drive for all three to \\domain\share. However, group1 should only be able to see FolderA, group2 should only see FolderB, and group3 should see both. I am not just talking about denying access to the actual folder, which is easy enough; I don't want the user to even be able to see that the folder exists. In other words, when group1 logs in and does "dir n:\" they should see N:\FolderA; when group2 logs in, they should see N:\FolderB; and when group3 logs in they should see both N:\FolderA and N:\FolderB.

    My half-baked solution: if I completely block access to the root then I can't map a drive to it. I can give everyone the traverse right, which then allows the user to map a drive. However, if a member of group1 or group2 tries to go to "N:\" they get an access denied error. If they go to N:\FolderA (for group1) then it works. So that sort of works, but it would be nicer if the user could actually browse to N:\ and just only see the subfolders they have access to. I am pretty sure I have seen this done but am not sure how to do it myself. Any advice would be greatly appreciated.
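
    This is exactly what Access-Based Enumeration (ABE) does: with ABE enabled on the share, users listing N:\ see only the folders their ACLs let them open. On Server 2008 it's a per-share option in the Share and Storage Management console (share properties, Advanced); later versions also expose it in PowerShell, shown here as a sketch of the 2012+ cmdlet:

      Set-SmbShare -Name share -FolderEnumerationMode AccessBased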


  • RemoteApp: Logging in as user x disconnects user y

    - by onik
    I'm having a pretty bizarre problem with a Terminal Services server used for RemoteApp. In our network the server works as it should, but at a client's office, if two users log in simultaneously, the first one gets disconnected as the other one connects. The users belong to the same group but have individual accounts. The same configuration works fine for all other clients. About the server: it's Windows 2008 R2 x64, no AD, SSL-encrypted connections. Event Viewer shows no useful information. Any hints where to start debugging? Do you need more info about the setup?
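
    The "new login kicks the old session" pattern usually means the server thinks both connections belong to one user, either because the session-per-user restriction is on or because both clients are actually presenting the same credentials (e.g. saved in a shared .rdp file). A sketch of the check on the server:

      reg query "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fSingleSessionPerUser
      rem 1 = each user account limited to a single session, 0 = multiple allowed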


  • Strange Internet Connection issues

    - by Nodren
    I'm attempting to troubleshoot problems with a laptop computer (HP 8510w) while it's connected to a server of mine via Remote Desktop. I double-checked all the settings on the Win2k3 server to verify that Remote Desktop isn't what's causing the disconnect issues, and other people using different computers/laptops can all connect to Remote Desktop correctly with no issues. These problems happen specifically when the laptop is connected via WiFi (several different WiFi sources, so it's not an ISP issue) as well as when connected via a Verizon data card. However, there's no network downtime when the laptop is resting in the docking station and plugged into the same network as the Remote Desktop server. These problems have only occurred since a recent hard drive failure, after which a new hard drive was purchased and the laptop had a fresh install of Windows XP Professional. There's no special software used on this machine, just Office 2003. So my question is: what could cause two types of internet access to fail while other types do not? If it is in fact related to the Win2k3 server, why is this particular laptop getting disconnected when others are not, all on at the same time?


  • Recommended service account setup for MS SQL Server 2005/2008

    - by boxerbucks
    We have a number of MS SQL servers in our environment running either SQL Server 2005 Standard/Enterprise or SQL Server 2008 Enterprise. Currently the SQL services are running as Local Service or Network Service, and the MS recommended best practice is to run as a domain account, which is what we are trying to move towards. Is the best practice with regard to domain accounts to have a separate domain account per service per server? So if we have 4 SQL services we want to run per server, and we have 50 servers, we would create 50 * 4 = 200 accounts in AD? This seems excessive to me and I was wondering if anyone has any real experience with this type of setup and its management.
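
    If the domain is at the 2008 R2 level, managed service accounts are one way to keep isolation without the password overhead, since AD rotates the credentials itself. A sketch with hypothetical names, using the ActiveDirectory PowerShell module (note the caveats: a classic MSA is tied to a single computer, and official SQL Server support for MSAs only arrived in later SQL versions, so check applicability first):

      Import-Module ActiveDirectory
      New-ADServiceAccount -Name "svc-sql01" -Enabled $true
      Add-ADComputerServiceAccount -Identity SQL01 -ServiceAccount "svc-sql01"
      # then, on SQL01 itself: Install-ADServiceAccount "svc-sql01"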


  • Why does Outlook 2007 lose connection to Exchange when Windows 7 64-bit turns off display?

    - by Greg R.
    The problem: when Windows 7 puts the display to sleep, Outlook 2007 and also Microsoft Office Communicator 2005 lose their connection to the Exchange server. When I unlock the computer, Outlook is logged out of Exchange and prompts me for credentials (although usually I have to restart Outlook to get it to reconnect). The network connection is still active; e.g., other applications don't lose their connection to the network or Internet when Windows 7 puts the display to sleep. I'm using a Dell E5400 notebook running Windows 7 Enterprise 64-bit with Outlook 2007 connecting to a corporate Exchange server (not sure if it's Exchange 2007 or 2010). The Dell is typically docked and connected via DVI (through the dock) to two Dell monitors. The Power Options in Windows 7 are set as follows:

      Turn Off The Display: 15 minutes
      Put The Computer To Sleep: never

    Those are the "Plugged In" settings, but the problematic behavior is the same when running on battery. When Windows 7 turns off the display, it automatically locks the computer; e.g., I have to re-enter my credentials to access the machine. This is per corporate policy. The equivalent setup on my previous Dell notebook running Windows XP SP3 did not result in this problem with Outlook 2007 or Office Communicator 2005 connecting to the very same Exchange server. The problem began when I switched to the new Dell E5400 with Windows 7.
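
    A quick way to confirm the display-off/lock transition is the trigger, rather than time or the dock, is to disable the timeout temporarily from an elevated prompt (a sketch; the value is in minutes, 0 disables):

      powercfg -change -monitor-timeout-ac 0

    If Outlook then stays connected indefinitely, the next suspects are NIC power-saving settings tied to the lock/display-off state.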


  • Intermittent 5.7.1 email bounce to Exchange 2007

    - by Steve Kennaird
    My knowledge of Exchange isn't particularly great, so excuse me if some of the terminology I use isn't quite right. I'm primarily a web developer who's now responsible for a small business's network. We have a server running SBS 2008 and Exchange 2007. Generally, everything works well; emails can be sent to both internal and external domains without issue. We've only got ~20 users, and Exchange is sitting on a single server. I use SendGrid to send emails generated by our externally hosted website to users in the office. Primarily, order notifications are sent to [email protected]. Without any pattern, and less than once per week on average, an email to [email protected] will bounce back, and the logs on SendGrid detail the following error:

      550 5.7.1 Unable to relay for [email protected]

    Either side of that failed delivery attempt, I'm able to send and receive emails to/from [email protected]. Having done some research, incorrect reverse DNS seems like it could be a cause of intermittent bounces like this. Using nslookup, I have found that the reverse DNS doesn't map like it should, e.g.:

      Office IP:   135.325.351.123 (made up IP, for example only)
      Domain:      office.somedomain.com (made up, for example only)
      Reverse DNS: somedomain.gotadsl.co.uk (half made up)

    Could this be a cause? I'm sure that the IP address and the domain should map to each other. Also, it has been suggested to me that as the Exchange server is on a network with an ADSL connection, that could be a potential cause, as the connection "goes up and down all day long". I don't have an opinion on this, as I don't have enough knowledge of Exchange/ADSL to form a reliable opinion. Can anyone offer any insight as to whether one or both are actually potential causes, or if there is another possible cause?
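
    Comparing the forward and reverse records directly takes two commands (a sketch using the sanitized names above; substitute the real hostname and IP):

      dig +short office.somedomain.com      # forward: name -> IP
      dig +short -x 135.325.351.123         # reverse: IP -> name, ideally matching

    As a side note, reverse DNS mainly affects how other servers score mail leaving the office; for mail arriving at your own Exchange, an intermittent 550 5.7.1 more often points at which receive connector happens to pick up the session.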


  • SharePoint extranet security concerns, am I right to be worried?

    - by LukeR
    We are currently running MOSS 2007 internally, and have been doing so for about 12 months with no major issues. There has now been a request from management to provide access from the internet for small groups (initially) which are comprised of members from other community organisations like ours - committees and the like. My first reaction was not joy when presented with this request; however, I'd like to make sure the apprehension is warranted. I have read a few docs on TechNet about security hardening with regard to SharePoint, but I'm interested to know what others have done. I've spoken with another organisation who has already implemented something similar, and they have essentially port-forwarded from the internet to their internal production MOSS server. I don't really like the sound of this. Is it advisable/necessary to run a DMZ-type configuration, with a separate web front-end on a contained network segment? Does that even offer me any greater security than their setup? Some of the configurations from a TechNet doc aren't really feasible, given our current network budget. I've already made my concerns known to management, but it appears it will go ahead in some form or another. I'm tempted to run a completely isolated, separate install just for these types of users. Should I even be concerned about it? Any thoughts or comments would be most welcome at this point.


  • Multiple test Active Directory environments hand in hand with production domain controllers

    - by MadBoy
    What's the best approach to having multiple test environments next to the production one? We have multiple programming teams that build solutions that use Active Directory very often. We have tried different approaches, starting with their own domain controllers (in the same subnet), or additional OUs in our production AD that a team gets control over and can create/delete accounts within. We thought of 4 possible solutions:

      1. Setting up separate OUs in our production environment.
      2. Creating subdomains of our contoso.com domain, like test.contoso.com and something.contoso.com, and delegating control to the teams (would we need additional DCs, or would the two we already have be enough to hold this?).
      3. Setting up an additional test domain controller that has a trust to our main domain, so all teams can use the test domain controller as they please.
      4. Setting up a single domain controller for every team/project.

    We're taking into consideration the amount of resources needed, security (for example, having multiple domain controllers with multiple passwords may lead users to use simpler passwords) and overall best practices for this scenario.


  • SVN, Samba and Symbolic Links. How to get them all to play together?

    - by Camsoft
    I've got a website project under version control that relies on files from an unversioned directory on the same server via symbolic links. I'm currently storing the symbolic links in the repository. The idea is that if someone checks out a working copy onto the same server, they can edit and test the working copy of the project before committing it back to the repository. When they check out their working copy, it successfully sets up the symlinks so that the entire site works when testing. The users that work on the project are Windows users, so I've set up Samba shares on the server and then mapped them to network drives in Windows. People can edit their working copies directly on the server via network shares and then test them in the web browser before committing their changes back to the repository via TortoiseSVN.

    The problem: Samba resolves the symlinks as expected, but when a user tries to commit their changes back to the repository, TortoiseSVN thinks the linked files are part of the project and tries to commit the target files to the repository, not the symlinks themselves. I tried turning off symlink support in Samba, which means that the linked files cannot be resolved; I don't really want people to have access to the linked files, nor do I want to import the linked files into the repository. The problem with this is that I get:

      Can't stat '\\webserver\projects\working\project\symlinked_file.php'. Access is denied

    Apart from the symlink problem, everything else works 100% perfectly. Users can either check out website projects to their machine and work on them (but can't test), or check them out to their space on the dev web server, work on them and fully test. So I don't want to change the workflow process; I just need a solution to the symbolic link issue. Many thanks.

    Originally posted on StackOverflow: http://stackoverflow.com/questions/2400917/svn-samba-and-symbolic-links-how-to-get-them-all-to-play-together
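
    If the shared files could live in (or be mirrored into) a second repository location, svn:externals would sidestep the symlink question entirely: clients materialise the external on checkout but never commit its contents through the project. A sketch with placeholder URL and path:

      svn propset svn:externals 'shared https://svnserver/repos/shared/trunk' .
      svn update

    That said, it only fits if versioning the linked files (or a copy of them) is acceptable, which the setup above deliberately avoids.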


  • Which upgrade path for disk IO bound postgres server?

    - by user41679
    Hi all, we currently have a Sun x4270 with 2 x quad-core Xeon Nehalem 2.93GHz CPUs (16 threads), 72GB of RAM and 16 x 10k SAS disks, split between the OS RAID 1, a RAID 10 partition for the write-ahead logs, and a RAID 10 partition for the database tables and indexes, all XFS. I'm currently evaluating which path to go down in terms of upgrades. We'll be sharding the DB at some point soon, but for now I need to focus on hardware upgrades specifically. The machine is not CPU or memory bound at all at the moment; just IOWait is becoming an issue. The machine sees mostly write access, as we have a heavy caching layer. We're seeing about 300 write IOPS average on both of the database partitions. We don't have any additional storage infrastructure like a Fibre Channel or iSCSI network. Budget isn't too much of a concern; something in line with the size of this server (i.e. no $1m IBM machines). Space is OK on the DB side of things; we're running out obviously, but there's also some reduction we can do. Additional space would be good though. My current thoughts are:

      - an iSCSI SAN, possibly with a 10Gbit network that has solid-state acceleration
      - a FusionIO card / Sun F20 card (will the FusionIO card work in the Sun box?)
      - a DAS shelf (something like http://www.broadberry.co.uk/das-direct-attached-storage-servers/cyberstore-224s-das) with a combination of 15k SAS disks and some Intel X25-E drives for DB indexes etc.

    What would I need to put in the x4270 to add a DAS shelf? I think it's a SAS HBA card; do I have to use Sun's own card, or will any PCI Express card work? Anything else? What would you guys do, from your experience? I appreciate it's a lot of questions, but I haven't expanded a DB machine for a number of years and the landscape has changed dramatically since then! Any advice or feedback would be very much appreciated. Let me know if there's anything else I can clarify. Thanks in advance!
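
    Whichever path wins, it helps to baseline the current latency and IOPS per partition first, so the upgrade can be judged against numbers (a sketch; iostat ships in the sysstat package, and the device names are placeholders):

      # extended per-device stats (await, %util, r/s, w/s) every 5 seconds
      iostat -xm 5
      # or watch just the WAL and data volumes
      iostat -xm 5 sdb sdc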


  • Unable to browse to apache service, Service is running

    - by Jeff
    Summary: I have a very peculiar problem. I am not able to open the "It works!" page after installing a fresh server with Apache. I am able to ssh to the box (from outside the network). Apache seems to be running on my CentOS 6.4 x86_64 box just fine. Nothing useful in /var/log/httpd/*. What am I missing?

    The setup: I am outside the network right now. The "server" is a VM on my home computer running in bridged mode.

      Public IP: A.B.C.D
      Host: 192.168.1.5
      VM: 192.168.1.8

    I have a Verizon FiOS router that is forwarding ports 22, 80, and 8888 to the VM. I am able to ssh over port 22, but I am not able to browse to the public URL over port 80. So A.B.C.D:22 is working, but http://A.B.C.D:80 is not.

    What I've tried: nmap, to see if it is listening:

      nmap -sT -O localhost

      Starting Nmap 5.51 ( http://nmap.org ) at 2013-10-25 11:10 EDT
      Nmap scan report for localhost (127.0.0.1)
      Host is up (0.000040s latency).
      Other addresses for localhost (not scanned): 127.0.0.1
      Not shown: 996 closed ports
      PORT     STATE SERVICE
      22/tcp   open  ssh
      25/tcp   open  smtp
      80/tcp   open  http
      3306/tcp open  mysql

    I tried going to it locally (lynx) and it does work. So, is the problem in my ports?
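
    CentOS 6's default iptables policy allows 22 but rejects 80, which fits these symptoms exactly (ssh reachable from outside, Apache reachable only locally, nmap on localhost showing 80 open). A sketch of the check and the usual fix:

      sudo iptables -L -n --line-numbers        # look for a REJECT rule before any ACCEPT on dpt:80
      sudo iptables -I INPUT -p tcp --dport 80 -j ACCEPT
      sudo service iptables save                # persist the rule across reboots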


  • erlyvideo server doesn't start automatically after reboot

    - by electroid
    I have installed erlyvideo server on Ubuntu 9.10 Karmic Koala. Everything works fine, but after a server reboot I have to start erlyvideo manually with /etc/init.d/erlyvideo start. I have already tried update-rc.d, and I think erlyvideo should start automatically by default. Any help will be appreciated. Here is the erlyvideo startup script, located in /etc/init.d/erlyvideo (every listed command runs the same cd-and-invoke body):

      #!/bin/sh
      ### BEGIN INIT INFO
      # Provides:          erlyvideo
      # Required-Start:    $local_fs $network
      # Required-Stop:     $local_fs $network
      # Default-Start:     2 3 4 5
      # Default-Stop:      0 1 6
      # Short-Description: starts the erlyvideo streaming server
      # Description:       starts the erlyvideo using erlang system
      ### END INIT INFO

      case "$1" in
        start|stop|restart|soft-restart|upgrade|reconfigure|reboot|ping|console|attach)
          cd /opt/erlyvideo && ./bin/erlyvideo "$1"
          ;;
        attach-erl)
          cd /opt/erlyvideo && ./erts-5.8.4/bin/erl -name [email protected] -remsh [email protected]
          ;;
        *)
          echo $"Usage: $0 {start|stop|restart|soft-restart|upgrade|reboot|ping|console|attach}"
          exit 1
      esac
      exit 0

    And I have found S91erlyvideo in /etc/rc2.d, next to S91apache2, which starts just fine on every reboot.
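
    Since the S91erlyvideo link already exists, the script is being invoked at boot; daemons that start fine by hand but not at boot often depend on environment variables (HOME, PATH, the erlang cookie location) that init doesn't provide. Two checks worth trying (a sketch):

      # re-register the rc links cleanly, in case the old sequence predates the LSB header
      sudo update-rc.d -f erlyvideo remove
      sudo update-rc.d erlyvideo defaults
      # simulate the near-empty boot-time environment that init uses
      sudo env -i PATH=/usr/sbin:/usr/bin:/sbin:/bin /etc/init.d/erlyvideo start

    If the env -i run fails the same way the boot does, exporting the missing variables at the top of the init script is the likely fix.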


  • Enable roaming profile from group policy

    - by Rob Nicholson
    I've had a reasonable look around the AD policies, but am I right in saying that the only place you can enable and define the roaming profile location is by editing the user, i.e. there isn't a group policy setting to (say) "Set the profile location to \\myserver\users\%username%\profile" for all users in group XYZ? I suspect this might be because of chicken and egg, i.e. group policy is applied after the profile has been loaded and therefore can't specify the location. Cheers, Rob.
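
    For what it's worth, Vista/Server 2008 added a computer-side policy ("Set roaming profile path for all users logging onto this computer", under Computer Configuration > Administrative Templates > System > User Profiles), though it's per-machine rather than per-group. For a per-group setup, scripting the user attribute in bulk is the usual fallback; a sketch with hypothetical names, using the ActiveDirectory PowerShell module:

      Import-Module ActiveDirectory
      Get-ADGroupMember "XYZ" | ForEach-Object {
          Set-ADUser $_ -ProfilePath "\\myserver\users\$($_.SamAccountName)\profile"
      }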

