Search Results

Search found 20140 results on 806 pages for 'remote management'.

Page 690/806

  • Anonymous access to SMB share hosted on Server 2008 R2 Enterprise

    - by bwerks
    Hi all, First off, I have read through this post and a whole slew of non-SF posts which seem to address the same or a similar problem, however I was still unable to fix my problem. I've got three machines in this situation:
    - a domain-joined server that runs Server 2008 R2 Enterprise ("share server")
    - a domain-joined workstation running XP Pro SP3 ("workstation")
    - a domain-unjoined test server running Server 2003 R2 SP2 ("test server")
    The share server is exposing a share on the network that the test server must access--it's a Source/Symbol Server share for our debugging purposes. I believe Visual Studio simply accesses the share with its own credentials in this case, meaning that the share must be accessible anonymously, since the test server isn't joined to the domain and there's no opportunity to supply domain authentication. I've attempted a lot of things to avoid the authentication window when accessing the share:
    - I've enabled the Guest account on the share server and given Guest full sharing/NTFS permissions for the share.
    - I've given ANONYMOUS LOGON full sharing/NTFS permissions for the share.
    - I've added my share to “Network Access: Shares that can be accessed anonymously” in LSP.
    - I've disabled “Network access: Restrict anonymous access to Named Pipes and Shares” in LSP.
    - I've enabled “Network access: Let Everyone permissions apply to anonymous users” in LSP.
    - Added ANONYMOUS LOGON to “Access this computer from the network” in LSP.
    - Added the Guest account to “Access this computer from the network” in LSP.
    - Attempted to provision the share using the Share and Storage Management MMC snap-in.
    Unfortunately, when I attempt to access the share from the test server, I still see the prompt and I'm forced to enter "Guest" manually. I also tried this workflow using the local administrator account on a workstation, and the same thing happens both with and without XP Simple File Sharing enabled. Any idea why I'm getting these results, or what I should have done differently?
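    One way to confirm whether the share itself accepts anonymous (null-session) connections, independent of Visual Studio, is to probe it from any box that has smbclient available. A minimal sketch; "shareserver" and the share name are placeholders:

        # List the server's shares without supplying any credentials (null session)
        smbclient -N -L //shareserver
        # Try an anonymous connection to the symbol share and list its contents
        smbclient -N //shareserver/Symbols -c 'ls'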


  • SSH client not showing prompt after successful login

    - by user431949
    I'm having problems with my SSH client on Ubuntu 10.10. When I switch on my computer, open a Terminal and execute the command ssh user@host, it gives me a password prompt; after I enter the right password, I get a prompt to execute my commands on the remote computer. Now the problem is, after a little while (probably around 10 minutes), the terminal window stops accepting commands (no matter what I type, nothing shows). Once this happens, I close the Terminal window and try to start all over again by opening another Terminal window. But this time around, after entering the right password, I don't get a welcome message or prompt. The cursor just keeps blinking on a new line. I ran the ssh command with the -v parameter and the message I get after a successful login is:
        debug1: Authentication succeeded (password).
        debug1: channel 0: new [client-session]
        debug1: Entering interactive session.
        debug1: Sending environment.
        debug1: Sending env LANG = en_GB.utf8
    Still the cursor keeps blinking on a new line without a prompt. However, the PuTTY SSH client works perfectly on the same machine. Thank you very much for your time. Your help would be greatly appreciated.
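    For what it's worth, a hang that sets in after roughly ten idle minutes often points at a NAT device or firewall dropping idle TCP sessions. A hedged thing to test (standard OpenSSH options, not a confirmed fix for this particular setup):

        # Send application-level keepalives so idle connection-tracking entries stay alive
        ssh -v -o ServerAliveInterval=30 -o ServerAliveCountMax=3 user@host
        # The same two options can be made permanent under a "Host *" entry in ~/.ssh/config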


  • Is there an SSL equivalent to an ssh agent?

    - by Matthew J Morrison
    Here is my situation: there are a number of developers who all need to be able to install Ruby gems and Python eggs from a remote source. Currently, we have a server inside our firewall that hosts the gems and eggs. We now want the ability to install things hosted on that server from outside of our firewall. Since some of the gems and eggs that we host are proprietary, I would like to somewhat lock access to that machine down, as unobtrusively as possible for the developers. My first thought was using something like SSH keys. So, I spent some time looking at SSL mutual authentication. I was able to get everything set up and working correctly, testing with curl, but the unfortunate thing was that I had to pass extra arguments to curl so it knows about the certificate, key and certificate authority. I was wondering if there is anything like the ssh-agent that I can set up to provide that information automatically, so that I can push the certificates and keys to the developers' machines and the developers don't have to log in or provide keys each time they try to install something. Another thing that I want to avoid is having to modify the 'gem' command and the 'pip' command to provide keys when they make the HTTP connection. Any other suggestions that may solve this problem (not related to SSL mutual auth) are also welcome. EDIT: I've been continuing to research this and I came across stunnel. I think this may be what I'm looking for; any feedback regarding stunnel would also be great!
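    Regarding the stunnel idea in the EDIT, a minimal client-side sketch of how it could sit in front of gem/pip. The hostnames, ports and file paths are assumptions, not a tested configuration:

        # /etc/stunnel/gems.conf (sketch): stunnel holds the client cert/key and does the
        # mutual-TLS handshake, so gem/pip can talk plain HTTP to the local accept port.
        #   client  = yes
        #   [gems]
        #   accept  = 127.0.0.1:8808
        #   connect = gems.example.com:443
        #   cert    = /etc/stunnel/dev-client.pem
        #   key     = /etc/stunnel/dev-client.key
        #   CAfile  = /etc/stunnel/internal-ca.pem
        #   verify  = 2
        stunnel /etc/stunnel/gems.conf                # start the tunnel
        gem sources --add http://127.0.0.1:8808/      # hypothetical: point RubyGems at the tunnel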


  • Why is my connection slow?

    - by Jay R.
    I have a Dell Precision T5400 with a Broadcom 1Gb onboard NIC. For some strange reason, when I access machines on our local network, the best I can get is around 125KB/s download speed. My laptop, which has a 10/100Mb NIC onboard, usually gets around 300KB/s or better from the same network resource. Both machines are plugged into the same 1Gb switch, which connects to our local network wall jack at 100Mb half duplex. There is also a printer plugged into the same switch at 100Mb full. The resource I'm using for the test is a 30MB zip file copied from a Jetty web server that is running as part of a CruiseControl installation. The CruiseControl installation is running Windows XP with full real-time antivirus and Altiris patch management and inventory running. That stuff on its own is eating some of the download speed. I've seen the laptop reach into the multiple MB/s download speed before, but the desktop never seems to get past 125KB/s to 130KB/s. In Windows XP, before I upgraded the driver on the desktop, it was that slow. In Fedora, it is still slow even though it appears to be using the same driver version as the upgraded Windows driver. The upgraded Windows driver is faster, but still not nearly as fast as the laptop. What gives? Any insight to improve the situation would be appreciated. Could it be that the Broadcom board just isn't that good, or that the driver in Linux is just not as good as the Windows one?
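    A duplex mismatch between the NIC and the switch port is a classic cause of throughput collapsing to a trickle like this. A hedged check on the Fedora side (the interface name is an assumption):

        # Show the speed/duplex the onboard NIC actually negotiated with the switch
        ethtool eth0
        # Look for errors/drops that would also point at a duplex or cabling problem
        ip -s link show eth0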


  • How do I correctly SSH port forward using LiveReload on Redhat?

    - by program247365
    Referencing this page: http://feedback.livereload.com/knowledgebase/articles/86280-if-you-edit-files-directly-on-your-server
    It says you can remotely port forward the LiveReload-specific port of 35729 using this command:
        ssh -L 35729:127.0.0.1:35729 mylogin@myremoteserverIP
    When I run it with the -v option, I get:
        debug1: Local connections to LOCALHOST:35729 forwarded to remote address 127.0.0.1:35729
        debug1: Local forwarding listening on ::1 port 35729.
        debug1: channel 0: new [port listener]
        debug1: Local forwarding listening on 127.0.0.1 port 35729.
        debug1: channel 1: new [port listener]
        debug1: channel 2: new [client-session]
        debug1: Entering interactive session.
        debug1: Sending environment.
        debug1: client_input_channel_req: channel 2 rtype keepalive@openssh.com reply 1
        debug1: Connection to port 35729 forwarding to 127.0.0.1 port 35729 requested.
        debug1: channel 3: new [direct-tcpip]
        channel 3: open failed: connect failed: Connection refused
        debug1: channel 3: free: direct-tcpip: listening port 35729 for 127.0.0.1 port 35729, connect from 127.0.0.1 port 63673, nchannels 4
    I thought editing my /etc/services with this line would work, but it doesn't:
        livereload      35729/tcp    # livereload usage with guard-livereload
    Every time I attempt to connect with the browser extension, I believe it's getting blocked by my server. What am I missing here? Do I need to edit /etc/services for this to work?
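    The "connect failed: Connection refused" line in the -v output means sshd accepted the forwarded connection but found nothing listening on 127.0.0.1:35729 on the server side. A hedged check before touching /etc/services (which only names ports, it does not open them):

        # See whether any process on the server is actually bound to the LiveReload port
        ssh mylogin@myremoteserverIP 'netstat -tln | grep 35729'
        # If nothing shows up, start guard-livereload (or whatever serves LiveReload) on the
        # server first, then retry the -L forward.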


  • NRPE: Unable to read output with check_connections plugin

    - by Wlodzimierz
    I'm using a plugin which gives me warnings or criticals based on established connections. If I run it on the local machine it gives:
        root@graber:/usr/lib/nagios/plugins# ./check_connections -w 1 -c 5 -C sshd
        CRITICAL Established connections: 6
    I know, I run it as root. But: rights to the file:
        root@graber:/usr/lib/nagios/plugins# ls -all check_connections
        -rwxr-xr-x 1 nagios nagios 5459 2012-07-06 10:19 check_connections
    /etc/sudoers:
        root@graber:/usr/lib/nagios/plugins# cat /etc/sudoers
        Defaults env_reset
        root    ALL=(ALL:ALL) ALL
        %admin  ALL=(ALL) ALL
        nagios  ALL=(ALL) NOPASSWD: /usr/bin/lsof
        nagios  ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/
    /etc/nagios/nrpe.cfg:
        nrpe_user=nagios
        nrpe_group=nagios
        dont_blame_nrpe=1
        command_prefix=/usr/bin/sudo
        command[check_connections]=/usr/lib/nagios/plugins/check_connections -w 1 -c 5 -C sshd
    Log from the remote side:
        2012-07-06T11:12:49+02:00 graber nrpe[25928]: Handling the connection...
        2012-07-06T11:12:49+02:00 graber nrpe[25928]: Host address is in allowed_hosts
        2012-07-06T11:12:49+02:00 graber nrpe[25928]: Host is asking for command 'check_connections' to be run...
        2012-07-06T11:12:49+02:00 graber nrpe[25928]: Running command: /usr/lib/nagios/plugins/check_connections -w 1 -c 5 -C sshd
        2012-07-06T11:19:11+02:00 graber nrpe[26100]: Return Code: 2, Output: NRPE: Unable to read output
    Why is this happening? I'm out of ideas, I've searched Google for 2 days now :)
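    "NRPE: Unable to read output" usually means the command ran but printed nothing. A hedged way to narrow it down is to run the exact command line NRPE builds (command_prefix plus the command) as the nagios user, then exercise the full round trip with check_nrpe from the Nagios server:

        # On the monitored host: run it the way nrpe does (sudo prefixed, as user nagios)
        su -s /bin/bash nagios -c '/usr/bin/sudo /usr/lib/nagios/plugins/check_connections -w 1 -c 5 -C sshd'
        # A password prompt or a "sorry, you must have a tty to run sudo" message here points
        # at the sudoers entry or a "Defaults requiretty" line rather than at NRPE itself.

        # From the Nagios server: test the full NRPE path
        /usr/lib/nagios/plugins/check_nrpe -H graber -c check_connections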


  • Windows 7 pc freezes for an indeterminate amount of time after unlocking

    - by pikes
    Not sure if this type of question is appropriate for this forum, but I've tried everything I can think of to solve this problem aside from a format/reinstall. I recently got a new work PC (Dell OptiPlex 755) with Windows 7 Professional x64. Standard developer software is installed for .NET development: VS2008, VS2005, SQL Management Studio, Office 2007, etc. Recently I've been having this weird problem where, after I lock my PC, when I try to unlock it the screen will be black for a while after unlocking. I can Ctrl+Alt+Del and put my password in, but then it just goes black. The amount of time on the black screen seems to be related to the amount of time I am away from my PC. If I'm only away a few minutes, it'll take about a minute to get to the desktop. If away for an hour, it could take up to 15 minutes. If I lock it and go home for the night, I have to restart my PC in the morning (I've let it sit for an hour after a night of being locked and nothing happened). It doesn't do it every time, but definitely the majority of the time. One weird thing I've seen is that if I remote into my machine before trying to log back in, it does not do it. I uninstalled all software back to the point when I remember it started happening and it still does it. I was using this PC for a few weeks without this problem happening at all. Anyone know what my next troubleshooting steps could be? My IT department tried to fix it by moving my old profile to another disk and having me log in, effectively recreating a profile from scratch, but that didn't solve it. As I said above, if this isn't the right forum for these types of questions please let me know. Thanks in advance!


  • Msg 10054, Level 20, State 0, Line 0 Error when altering a stored procedure to add a couple of cursors

    - by doug_w
    We have a home-rolled backup stored procedure that uses xp_cmdshell to create and clean up database backups. We have an instance that is 2005 SP3 that we are trying to deploy this script to. I am at a bit of a loss for why it is not working. When I execute the create it runs for about 30 seconds and yields the following error:
        Msg 10054, Level 20, State 0, Line 0
        A transport-level error has occurred when sending the request to the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.)
    In my tinkering I discovered that by removing the cursors that actually do the work it will allow me to create the stored procedure (not very helpful for me though). If I add the cursors back in using an alter, the error returns. I would be curious if someone has experienced this problem and knows of a solution or workaround. I am not opposed to posting the source, it is just lengthy. Things I have checked:
    - Error logs
    - No dump files in the log directory
    Thanks in advance for the help.


  • Optimizing Disk I/O & RAID on Windows SQL Server 2005

    - by David
    I've been monitoring our SQL server for a while, and have noticed that I/O hits 100% every so often in Task Manager and Perfmon. I have normally been able to correlate this spike with SUSPENDED processes in SQL Server Management Studio when I execute "exec sp_who2". The RAID controller is managed by LSI MegaRAID Storage Manager. We have the following setup:
    - System drive (Windows) on RAID 1 with two 280GB drives
    - SQL is on a RAID 10 (2 mirrored drives of 280GB in two different spans)
    This is a database that is hammered during the day, but is pretty inactive at night. The DB size is currently about 13GB, and it is used by approximately 200 (and growing) users a day. I have a couple of ideas I'm toying around with:
    1. Checking for indexes and reindexing some tables
    2. Adding an additional RAID 1 (with 2 new, smaller HDs) and moving SQL's log data file (LDF) onto the new RAID
    For #2, my question is this: would we really be increasing disk performance (I/O) by moving data off of the RAID 10 onto a RAID 1? RAID 10 obviously has better performance than RAID 1. Furthermore, SQL must write to the transaction logs before writing to the database. But on the flip side, we'll be reducing both the size of the disks as well as the amount of data written to the RAID 10, which is where all of the "meat" is - thereby increasing that RAID's performance for read requests. Is there any way to find out what our current limiting factor is (the drives vs. the RAID controller)? If the limiting factor is the drives, then maybe adding the additional RAID 1 makes sense. But if the limiting factor is the controller itself, then I think we're approaching this thing wrong. Finally, are we just wasting our time? Should we instead be focusing our efforts towards #1 (reindexing tables, reducing network latency where possible, etc...)?


  • Overriding vhost.conf to always allow PHP include access to directory

    - by Jeremy Dentel
    My predecessor in my job developed a simplistic newsletter system for our school's newspaper utilizing PEAR's Mail package. As I grow this system (and our site) we are constantly stuck with Plesk rewriting the vhost.conf file in which the PEAR include path has been manually entered. This has become an unwieldy task to actually manage and keep running. There's been a "note" from both the previous developer and me to attempt to solve this problem, but we can't entirely figure it out. I'm attempting a move to cPanel through another host, so hopefully it'll go away there, but until then it can be tedious and extremely difficult to get a solid uptake of the system without constant "web-presence." I've searched around and haven't found a solution. I'm rather new to the server management scene (command line was non-existent till around a year ago. =/), so I haven't found anything. Any help would be useful. "Similar Questions" popped this up, but it still seems to rely on vhost.conf, and will still allow changes within Plesk to overwrite the changes.


  • Multiple static WAN IP addresses to single LAN subnet

    - by Jessy Houle
    Below is my home network topology. I currently have 5 static IP addresses, 3 of which are in use by 3 routers. These routers in turn subnet internal networks and port forward. I use my SSL VPN appliance to remote home from work or on the road. At this point I can remotely administer my Windows Server. I know the network is set up wrong; I was matching existing hardware the best I knew how. http://storage.jessyhoule.com.s3.amazonaws.com/network_topology.jpg Ok, this said, here is the problem... One of my websites on my Windows Server now needs to be secure (SSL using port 443). However, I'm already port forwarding port 443 to my VPN appliance. Furthermore, if I'm going to have to reconfigure the network, I would really like to be able to use the SSL VPN to remotely administer all machines. I mentioned this to a friend of mine, who said that what I was looking for was a firewall, explaining that a firewall would take in multiple static (WAN) IP addresses and still allow all internal devices to be on the same network. So, basically, I could give my SSL VPN appliance its very own static (WAN) IP address routing, and yet have it on the same internal network (192.168.1.x) as all my other devices. The first question is... Does this sound right? Secondly, would you suggest anything different? And, finally, what is the cheapest way to do this? I have started down the road of downloading/installing Untangle and Smoothwall to see if they will do the job, hoping they take multiple static (WAN) IP addresses. Thank you in advance for your answers. -Jessy Houle
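    Untangle and Smoothwall are both Linux-based, and the setup the friend describes is essentially 1:1 NAT: each public IP is bound on the WAN side and mapped to one host on the shared 192.168.1.x LAN. A rough illustration in raw iptables terms (interface name and addresses are made up; the firewall distributions expose the same idea through their own GUIs):

        # Bind an additional public IP on the WAN interface
        ip addr add 203.0.113.12/29 dev eth0
        # Map that public IP 1:1 to the SSL VPN appliance on the internal LAN
        iptables -t nat -A PREROUTING  -d 203.0.113.12 -j DNAT --to-destination 192.168.1.20
        iptables -t nat -A POSTROUTING -s 192.168.1.20 -j SNAT --to-source 203.0.113.12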


  • OpenVPN Server - CPU is pegged out

    - by ericl42
    Hello, I am configuring OpenVPN to act as an SSL tunnel for a remote location. I have OpenVPN1 at our current location acting as a server, and OpenVPN2 at the other location acting as a client, but it is also acting as a DHCP server for machines behind it, so they are basically connected to the local LAN. Everything is set up fine and I can talk from location A to location B with no problems, like everyone is local. I am however having some performance issues. OpenVPN1's CPU is pegged at 100% the entire time I am copying or doing any type of activity through the tunnel. I expect some CPU usage going up, but nothing like this. It's really killing my performance. OpenVPN1 is running in ESX right now with 2 gigs of RAM and 4 procs with unlimited bursting capacity. I am using AES-192 encryption with a 1024-bit key. Any idea how I can get the CPU usage down on OpenVPN1 and my download/upload speeds higher through the tunnel? Thanks. edit: Turning down the logging helped boost the throughput a little bit, but I am still fairly shy of where I believe I should be. Also I am still maxed out on the CPU. Does anyone have any ideas? I am really stuck on this. Thanks.
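    Worth keeping in mind that the OpenVPN 2.x data path is single-threaded, so extra vCPUs don't help once one core is saturated by crypto. A hedged way to see whether encryption really is the ceiling (the tunnel address below is a placeholder):

        # How much AES-192 can this VM push per core? (rough crypto ceiling)
        openssl speed -evp aes-192-cbc
        # Measure throughput through the tunnel while watching which process pegs a core
        iperf -s                     # on a host behind OpenVPN2
        iperf -c 10.8.0.1            # from the OpenVPN1 side, using a tunnel/LAN address
        top                          # confirm it is the openvpn process sitting at 100%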


  • Windows 8 folder to folder sync software

    - by Danny
    I'm looking for direct folder to folder synchronization in Windows 8. I was previously using Live Mesh to accomplish this, but now it looks like that is no longer an option. Note that I'm talking about direct folder to folder sync between different computers, not syncing to the cloud. I'm aware of products like Google Drive, SkyDrive, Dropbox, etc. The problem with them is the space limitation. Basically, I was previously syncing important files between my desktop and all of my laptops. One folder, for example, is My Pictures. This folder has almost 40 gigs of files, which is why the options listed above are not going to work for me. I just need direct syncing, nothing stored in the cloud. I was told by a Microsoft employee that SkyDrive would be replacing Mesh and would provide all the same functionality. So far this looks to be completely false, since the ability to remote desktop is gone along with folder to folder sync. Unless I'm just missing something?


  • Can I trick Carbonite into backing up an external hard drive?

    - by Brian
    I use Carbonite to back up my PC (Windows XP). We were running low on disk space on our home PC (down to 15 GB), so I went out and purchased an external hard drive. However, Carbonite will not back it up. Is it possible to set up Carbonite to back up an external hard drive? I just want the external drive to be extra disk space. From their FAQ:
        The current version of Carbonite backs up only the files that reside on permanent hard drives on your computer. It will not back up network drives, external drives, and NAS (network accessed storage) drives. If there are files on a remote drive that you wish to include in your Carbonite backup, you should copy the files to a folder on your local hard drive. If the files are on a shared network drive, you could install Carbonite on the computer on which the network shared drive physically exists, and back the files up directly from that computer. Check back soon for a Carbonite service plan that will allow you to back up your external drives.


  • Central Authentication For Windows, Linux, Network Devices

    - by mojah
    I'm trying to find a way to centralize user management & authentication for a large collection of Windows & Linux servers, including network devices (Cisco, HP, Juniper). Options include RADIUS/LDAP/TACACS/... The idea is to keep up with staff changes, and to control access to these devices. Preferably a system that is compatible with Linux, Windows & those network devices alike. Seems like Windows is the most stubborn of them all; for Linux & network equipment it's easier to implement a solution (using pam.d, for instance). Should we look for an Active Directory/Domain Controller solution for Windows? Fun sidenote: we also manage client systems that are often already in a domain. Trust relationships between domain controllers aren't always an option for us (due to client security restrictions). I'd love to hear fresh ideas on how to implement such a centralized authentication "portal" for those systems.
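    If the Windows side ends up being Active Directory, the same directory can usually serve the Linux hosts (LDAP/Kerberos via PAM or SSSD) and the network gear (RADIUS against AD). A hedged first check from any Linux box; the server, bind account and base DN below are placeholders:

        # Verify basic LDAP connectivity and that user objects are readable from Linux
        ldapsearch -x -H ldap://dc01.example.com \
            -D 'svc_ldap@example.com' -W \
            -b 'dc=example,dc=com' '(sAMAccountName=jdoe)' cn memberOf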


  • Diagnosing Microsoft SQL Server error 9001: The log for the database is not available.

    - by Scott Mitchell
    Over the weekend a website I run stopped functioning, recording the following error in the Event Viewer each time a request is made to the website:
        Event ID: 9001
        The log for database 'database name' is not available. Check the event log for related error messages. Resolve any errors and restart the database.
    The website is hosted on a dedicated server, so I am able to RDP into the server and poke around. The LDF file for the database exists in the C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA folder, but attempting to do any work with the database from Management Studio results in a dialog box reporting the same error - 9001: The log for database is not available... This is the first time I've received this error, and I've been hosting this site (and others) on this dedicated web server for over two years now. It is my understanding that this error indicates a corrupt log file. I was able to get the website back online by detaching the database and then restoring a backup from a couple days ago, but my concern is that this error is indicative of a more sinister problem, namely a hard drive failure. I emailed support at the web hosting company and this was their reply:
        There doesn't appear to be any other indications of the cause in the Event Log, so it's possible that the log was corrupted. Currently the memory's resources is at 87%, which also may have an impact but is unlikely.
    Can the log just "become corrupted?" My question: What are the next steps I should take to diagnose this problem? How can I determine if this is, indeed, a hardware problem? And if it is, are there any options beyond replacing the disk? Thanks


  • Is it worth hiring a hacker to perform some penetration testing on my servers ?

    - by Brann
    I'm working in a small IT company with paranoid clients, so security has always been an important consideration for us. In the past, we've commissioned two penetration tests from independent companies specialized in this area (Dionach and GSS). We've also run some automated penetration tests using Nessus. Those two auditors were given a lot of insider information, and found almost nothing* ... While it feels comfortable to think our system is perfectly secure (and it was surely comfortable to show those reports to our clients when they performed their due diligence work), I've got a hard time believing that we've achieved a perfectly secure system, especially considering that we have no security specialist in our company (security has always been a concern, and we're completely paranoid, which helps, but that's as far as it goes!). If hackers can hack into companies that probably employ at least a few people whose sole task is to ensure their data stays private, surely they could hack into our small business, right? Does someone have any experience in hiring an "ethical hacker"? How to find one? How much would it cost? *The only recommendation they made us was to upgrade our remote desktop protocols on two Windows servers, which they were able to access because we gave them the correct non-standard port and whitelisted their IP.


  • Conditionally Rewrite Email Headers (From & Reply-To) Exchange 2010

    - by NorthVandea
    I have a client who maintains Company A (with email addresses %username%@companyA.com) and they own the domain companyB.com, however there is no "infrastructure" (no Exchange server) set up specifically for companyB.com. My client needs the end users within its company (companyA.com) to be able to add a specific word or phrase to the subject (or body) of an outgoing email (they are only concerned with outgoing; incoming is a non-issue in this case) that triggers the Exchange 2010 servers to rewrite the From and Reply-To headers from %username%@companyA.com to %username%@companyB.com, but this rewrite should ONLY occur if the user places the key word/phrase in the subject (or body). I have attempted using Transport Rules and the New-AddressRewriteEntry cmdlet, however each seems to have a limitation. From what I can tell, Transport Rules cannot rewrite the From/Reply-To fields, and New-AddressRewriteEntry cannot be conditionally triggered based on message content. So to recap:
    - User sends email outside the organization: From and Reply-To remain %username%@companyA.com
    - User sends email outside the organization WITH "KeyWord" in the subject or body: From and Reply-To change to %username%@companyB.com automatically.
    Anyone know how this could be done WITHOUT coding a new mail agent? I don't have the programming knowledge to code a custom agent... I can use any function of Exchange Management Shell or Console. Alternatively, if anyone knows of a simple add-on program that could do this, that would be good too. Any help would be greatly appreciated! Thank you!!!


  • Cannot resolve Hostname to IP, but IP to hostname works

    - by blade
    Hi, I have deployed a bunch of Windows server VMs on a cloud hosting service. These machines are all joined to a domain controller on the same service, which also hosts DNS. All of the domain-joined machines have dynamic IPs (along with the DC). If I try to resolve any of the hostnames remotely, it fails. For example, I am in SQL Server Reporting Services and I need to connect to a remote server. I provide the hostname of the desired target server and this fails, but then if I provide the IP, this works. How can I pass the hostname and have this resolve to the IP? Is there anything I need to look for in the DNS server? It has records of the hostnames (in forward lookup I think), but reverse is empty. Isn't it the case that forward lookup resolves IP to hostname and reverse resolves hostname to IP? Also, I don't know the subnet mask because this is not in my control, so the machines may not be in the same subnet - can this be a cause of the problem? Where is the problem? Thanks
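    One note: forward lookup zones hold the name-to-IP records, so an empty reverse zone would not by itself explain name-to-IP failures. A hedged way to see whether the DNS server or the client's resolver settings are at fault (the names and addresses below are placeholders):

        # Query the domain controller's DNS service directly for the record
        nslookup targethost.mydomain.local 10.0.0.5
        # Compare with what the client's configured resolver returns
        nslookup targethost.mydomain.local
        # Check which DNS servers and search suffixes the client is actually using
        ipconfig /all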


  • Connecting jconsole using SOCKS to Amazon EC2

    - by freshfunk
    I'm trying to use jconsole to view stats on an EC2 instance by using a SOCKS proxy created by SSH. I've tried the various scripts mentioned in the links below but to no avail:
    http://simplygenius.com/2010/08/jconsole-via-socks-ssh-tunnel.html
    http://gabrielcain.com/blog/2010/11/02/using-ssh-proxying-to-connect-jconsole-to-remote-cassandra-instances/
    I'm running
        ssh -f -ND 8123 myuser@mymachine
    and verified that at least Firefox goes through it as a proxy. I then run
        jconsole -J-DsocksProxyHost=localhost -J-DsocksProxyPort=8123 service:jmx:rmi:///jndi/rmi://ec2-XX-XX-XXX-XXX.compute-1.amazonaws.com:8080/jmxrmi
    I run netstat -n on my EC2 instance and I see a connection created by my machine. However, the connection eventually disappears and I get a 'channel 2: open failed: connect failed: Operation timed out' from my ssh tunnel. I've opened the JMX port through the security group and I've checked the port on the EC2 instance to make sure it's open (by telnet-ing to it). I'm not sure where to look next. Are there some properties in sshd_config or ssh_config I need to enable for tunneling? Or anything in Mac OS X? I feel like a serious noob but sys administration is really not my strong point. I've spent several hours and can't get this to work.
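    A common gotcha with JMX through any proxy or tunnel: the service:jmx URL only reaches the RMI registry; the registry then hands back a stub pointing at a second, randomly chosen RMI port, and that second connection is what usually times out. A hedged sketch of pinning both ports to the same value on newer JVMs (the flags are standard com.sun.management properties; the jar name is a placeholder, and auth/SSL are disabled here only to isolate the connectivity problem):

        # On the EC2 instance: pin the JMX registry port and the RMI server port together
        java \
          -Dcom.sun.management.jmxremote \
          -Dcom.sun.management.jmxremote.port=8080 \
          -Dcom.sun.management.jmxremote.rmi.port=8080 \
          -Dcom.sun.management.jmxremote.authenticate=false \
          -Dcom.sun.management.jmxremote.ssl=false \
          -jar myapp.jar

        # Locally: the same jconsole invocation as above, through the SOCKS proxy on 8123
        jconsole -J-DsocksProxyHost=localhost -J-DsocksProxyPort=8123 \
          service:jmx:rmi:///jndi/rmi://ec2-XX-XX-XXX-XXX.compute-1.amazonaws.com:8080/jmxrmi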


  • Where is the bare cygwin package list located and how do I manipulate it?

    - by matnagel
    Where is the bare cygwin package list located and how do I manipulate it programmatically, or from a shell, or with a different method than the GUI? I know the GUI (setup.exe), and I'd love to go one or more levels deeper. I can retrieve a list of selected/installed packages ( http://serverfault.com/questions/83456/cygwin-package-management ), but how do I write it back, or to a different machine? What I have in mind is: when I install a new Windows I would like to start with my package list in text form, and apply or inject it somehow into the new system. Where is it? In the registry? In a binary file? In a local database? Or has anybody done this, is there a tool, a tutorial? The essence of what I want is to manipulate the selected package list with something else than the GUI. It is OK for me to use the GUI for the setup process. So I could imagine manipulating the package list and then running setup.exe and just clicking through it. Note: I do not want to manipulate the list of already installed packages but of packages that "should be installed". But if this is not possible, maybe there is some workaround. E.g. add an outdated version as installed and the installer will then install the new version.
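    A hedged sketch using only the stock tools: cygcheck can dump the installed package names to a plain text file, and setup.exe accepts an unattended package list on its command line. The exact header lines of the cygcheck output and the setup.exe path are assumptions worth double-checking:

        # On the existing machine (Cygwin bash): list installed packages, drop the two
        # header lines and keep just the package names
        cygcheck -c -d | sed -e '1,2d' -e 's/ .*$//' > packages.txt

        # On the new machine: run the installer unattended with that comma-separated list
        ./setup.exe -q -P "$(paste -sd, packages.txt)"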


  • RRAS Problem routing to central site from RRAS server only?

    - by TomTom
    Given is an office connected to headquarters using an RRAS bridge (2 virtual machines using RRAS to route between the two networks). Naming: the office is A, the RRAS on A is a-lnk. The headquarters is B, b-lnk the RRAS machine there. The VPN works perfectly - machines can ping and work between the sites. Domain controllers on both ends replicating, DFS working, remote desktop working. All in all... everything is fine. EXCEPT: a-lnk itself can not reach any machine in B. This would normally not be troublesome (no one ever does anything on a-lnk), but there are two exceptions:
    - a-lnk is supposed to get its license from a KMS in B, so not being able to reach B means its activation is not renewing.
    - a-lnk is supposed to pull updates from a WSUS in B - and not being able to reach B means no updates.
    Given that things work (and security is a minor issue - a-lnk is not reachable from the internet as it is behind NAT hardware anyway), this went unhandled for months. I just want to get this item ticked off now. Anyone have an idea what this is? It definitely is not a "DNS does not work" or "routing in general is bad" item, as any computer in A can connect to any computer in B, and the other way around - only the RRAS computer itself seems to do something really awkward. Platform for both: 2008 R2 Standard.


  • Network Load Balancing, intermittent port problem on Windows Server 2008

    - by Jimmy Chandra
    Trying to troubleshoot an intermittent problem on a Windows Server 2008 NLB. I think it might be related to an NLB issue. We are using Windows Network Load Balancing to balance load for our multi-server SharePoint front ends. Say... Web Front End 1's IP is 192.168.1.100 and Web Front End 2's IP is 192.168.1.101; the NLB is set up to load balance both WFE servers on any incoming traffic to the IP 192.168.1.200. Sometimes we get an intermittent issue where, when we try to access the SharePoint site using 192.168.1.200:8080 (say the site is set up to run on port 8080) from a remote client, it will display page not found. Pinging 192.168.1.200 gives responses, but trying to telnet to 192.168.1.200:8080 just won't connect. However, browsing the SharePoint site directly on each individual WFE (192.168.1.100 and 192.168.1.101) shows no problem whatsoever. My guess (we didn't get a chance to try it yet, but I think it should work) is that if I try connecting remotely to an individual server, it will respond just fine, but any attempt at connecting using the virtual IP (192.168.1.200) will fail miserably. Funny thing is, after a while it returns to normal. Anyone had similar experience with this type of problem while implementing NLB before? We are doing this in a virtual environment.


  • Affordable combined Ruby/Rails/Redmine + Subversion hosting?

    - by Pekka
    I'm a self-employed web developer and after nine years of hard work, I'm looking to become a bit more "vagrant" starting next year, do some much-needed traveling and work off and on a bit, making use of one of the greatest advantages of a programming job: the ability to work virtually from everywhere. For that, I am looking for a reliable hosting company I can entrust my code to in the form of a number of Subversion repositories, plus an installation of the Redmine project management tool. As my financial situation may vary during traveling, I am looking for something I can pay up front for a year or two, and that is obviously not too pricey. I don't care where the company is located, as long as it's trustworthy and solid, meaning it's not likely to go out of business next month. Does anybody have good recommendations? Preferably from own, personal, good experience. I have looked at CVSDude / Codesion and while they are certainly great, they don't offer Redmine of course, and seem to be aiming toward bigger organizations mainly. What I would need:
    - 2-5 gigs of space minimum, freely distributable between SVN and Redmine attachments
    - Unlimited number of Subversion projects
    - Access control (team members / checkout-only accounts / etc.) - I don't mind configuring the svn settings on a file basis myself
    - The possibility to map a custom domain to the package that is hosted elsewhere
    - Frequent backups and access to those backups through FTP or other means
    I have been running my own virtual server for this until now, but I don't want the hassle, especially on the security side, while I may not always have the internet connection to fix problems that may come up.


  • What is the best way/Software to manage multiple short lived instances of virtual machines ?

    - by Newtopian
    Hi, We have a QA department that has to test our software on multiple combinations of OS and DBMS. With Windows spewing out many different versions, the combinatorial math of all this can be daunting. So we decided on virtualizing our setups, but so far it only displaces the problem. Hardware is expensive and we need many different combinations, far exceeding our server capacity to deliver. Also, these instances are throwaway: once a test is complete we no longer need the instance; furthermore, to ensure proper test isolation we should start fresh from a new instance. Lastly, we only need a small subset of these systems online at any given time. What I am looking for is a way to manage inventory so that our QA staff can order instances to be put online as required and discarded once used. Instances are spawned from a pool of freshly installed systems with the appropriate combination ready to accept our software. It also should be possible for two or more people to start the same instance at the same time, though we could manage without this if it proves too complex to put in place. Finally, our budget is pretty thin; we can probably make some purchases but ideally expenditures should be kept to a minimum. To summarize, we should be able to:
    - Bring instances online on demand. Ideally should offer queue and scheduling management
    - Destroy instances on demand
    - Keep masters in inventory but not online
    - Manage a large inventory of VMs (30-100, maybe more) with a small staff of users (5-10)
    - Allow adding, deleting and changing instances from inventory (bring online, make changes and check back in, or create new and check in)
    - Allow a few long-lived instances for support tools (normal VM server usage)
    Thanks for your answers
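    Whatever product ends up doing the queueing, the underlying primitive for "throw away and start fresh" is usually a clean snapshot restored before each test run. A hedged illustration with VirtualBox's CLI (VM and snapshot names are placeholders; VMware, Hyper-V and the lab-manager products expose the same idea through their own tooling):

        # Take a "clean" snapshot once, right after the OS/DBMS combination is installed
        VBoxManage snapshot "win2008-sql2005" take clean
        # For every test run: roll back to the clean state and boot a fresh instance
        VBoxManage snapshot "win2008-sql2005" restore clean
        VBoxManage startvm  "win2008-sql2005" --type headless
        # When the test is done, throw the instance away again
        VBoxManage controlvm "win2008-sql2005" poweroff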

