Search Results

Search found 14745 results on 590 pages for 'setting'.


  • Kernel-mode Authentication: 401 errors when accessing site from remote machines

    - by CJM
    I have several Classic ASP sites that use Integrated Windows Authentication and Kerberos delegation. They work OK on the live servers (recently moved to Server 2008/IIS7), but do not work fully on my development PC or my development server. IIS on both machines was configured through an IIS web deployment tool package which was exported from an old machine; the deployment didn't work perfectly, and I had to tinker a bit to get the sites working.

    When accessing the apps locally on either machine, they work fine; when accessing from another machine, the user is prompted with a username/password dialog, and regardless of what you enter, it ultimately results in a 401 (Unauthorised) error.

    I've tried comparing the configuration of these machines against similar live servers (which all work fine), and they seem generally comparable, given that none of the live servers are yet on IIS 7.5 (Windows 7/Server 2008 R2). These applications run in a common application pool which uses a special domain user as its identity; this user has similar permissions on the live and development machines. On IIS6 platforms, to enable Kerberos delegation, I needed to set up some SPNs for this user, and they are still in place (even though I don't believe they are needed any longer for IIS7+ due to kernel-mode authentication). Furthermore, this account is enabled for Kerberos delegation in Active Directory, as is each machine I am dealing with.

    I'm considering the possibility that the deployment might have made changes, or failed to make changes, to the IIS configuration, thus causing this problem. Perhaps a complete rebuild (minus another web deployment attempt) would solve the problem, but I'd rather fix (and thus understand) the current problem. Any ideas so far?

    I've just had another attempt at fixing this issue, and I've made some progress, but I don't have a complete fix... yet. I've discovered that if I access the sites via IP address (rather than via NetBIOS name), I get the same dialog, except that it accepts my credentials and the application works - not quite a fix, but a useful step. More interestingly, I discovered that if I disable kernel-mode authentication (in IIS Manager: Website > Authentication > Advanced Settings), the applications work perfectly. My foggy understanding is that this is effectively working in the pre-IIS7 way. A reasonable short-term solution, but consider the following explicit advice from IIS on this setting:

        By default, IIS enables kernel-mode authentication, which may improve authentication performance and prevent authentication problems with application pools configured to use a custom identity. As a best practice, do not disable this setting if Kerberos authentication is used in your environment and the application pool is configured to use a custom identity.

    Clearly, this is not the way my applications should be working. So what is the issue?

    Read the article

  • Batch file to uninstall all Sun Java versions?

    - by Ricket
    I'm setting up a system to keep Java in our office up to date. Everyone has all different versions of Java, many of them old and insecure, and some dating back as far as 1.4. I have a System Center Essentials server which can push out and silently run a .msi file, and I've already tested that it can install the latest Java. But old versions (such as 1.4) aren't removed by the installer, so I need to uninstall them. Everyone is running Windows XP.

    The neat coincidence is that Sun just got bought by Oracle, and Oracle has now changed all the instances of "Sun" to "Oracle" in Java. So I can conveniently not have to worry about uninstalling the latest Java, because I can just do a search and uninstall all Sun Java programs. I found the following batch script on a forum post which looked promising:

        @echo off & cls
        Rem List all Installation subkeys from uninstall key.
        echo Searching Registry for Java Installs
        for /f %%I in ('reg query HKLM\SOFTWARE\microsoft\windows\currentversion\uninstall') do echo %%I | find "{" > nul && call :All-Installations %%I
        echo Search Complete..
        goto :EOF

        :All-Installations
        Rem Filter out all but the Sun Installations
        for /f "tokens=2*" %%T in ('reg query %1 /v Publisher 2^> nul') do echo %%U | find "Sun" > nul && call :Sun-Installations %1
        goto :EOF

        :Sun-Installations
        Rem Filter out all but the Sun-Java Installations. Note the tilda + n, which drops all the subkeys from the path
        for /f "tokens=2*" %%T in ('reg query %1 /v DisplayName 2^> nul') do echo . Uninstalling - %%U: | find "Java" && call :Sun-Java-Installs %~n1
        goto :EOF

        :Sun-Java-Installs
        Rem Run Uninstaller for the installation
        MsiExec.exe /x%1 /qb
        echo . Uninstall Complete, Resuming Search..
        goto :EOF

    However, when I run the script, I get the following output:

        Searching Registry for Java Installs
        'DEV_24x6' is not recognized as an internal or external command, operable program or batch file.
        'SUBSYS_542214F1' is not recognized as an internal or external command, operable program or batch file.

    And then it appears to hang and I ctrl-c to stop it. Reading through the script, I don't understand everything, but I don't know why it is trying to run pieces of registry keys as programs. What is wrong with the batch script? How can I fix it, so that I can move on to somehow turning it into an MSI and deploying it to everyone to clean up this office?

    Or alternatively, can you suggest a better solution or existing MSI file to do what I need? I just want to make sure to get all the old versions of Java off of everyone's computers, since I've heard of exploits that cause web pages to load using old versions of Java and I want to avoid those.

    Read the article

  • libreadline history lines combine

    - by jettero
    This has been driving me crazy for about three years. I don't know how to fully describe the problem, but I think I can finally describe a way to recreate it. Your mileage may vary. I have a mixture of Ubuntu server and desktop machines of various versions and a few Gentoo machines in various states of disrepair. They all seem to kind of do their own thing, although with similarities. Try this and let me know if you see the same thing:

        1. pop open two xterms (TERM=xterm)
        2. resize one so they're not the same size
        3. issue screen -R test1 in one (TERM=screen) and screen -x test1 in the other
        4. hooray, typing in one shows up in the other; although notice that their different sizes produce artifacts and things
        5. issue a couple of commands in your shell
        6. hit ^A F in the one that doesn't fit quite right; now it fits!
        7. scroll back over the history a little
        8. goto 6

    Eventually you'll notice a couple of history lines combine. If you don't, then it's something unique to my setup, which spans various distributions and computers, so that's a confusing concept to me. If you see the thing I'm seeing, then this:

        bash$ ls -al
        bash$ ps auxfw

    becomes this:

        bash$ ls -al; ps auxfw

    It doesn't happen every time. I have to really play with it - unless I don't want it to happen, then it always does. On some systems (or combinations), I get a line separator like the example above. On some systems, I do not. That I get the line separator on some systems seems to indicate to me that bash supports this behavior. Its history is entirely handled by libreadline, and after perusing (i.e., carefully reading) the man pages, I couldn't find a single readline setting for combining two history lines. Nor can I find anything in the bash manpage. So, how can I invoke this on purpose? Or, if I can't do that, how can I disable it completely? I would take either answer as a solution. Currently, I only see it when I don't want it.
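    A minimal sketch of the bash history options closest to this behaviour; the option names are real bash shopts, but whether they interact with what screen does to differently-sized windows is an assumption:

        # cmdhist: save all lines of a multi-line command as a single history entry
        # lithist: with cmdhist, keep embedded newlines instead of joining with semicolons
        shopt -p cmdhist lithist    # show the current settings

        shopt -u cmdhist            # store multi-line commands as separate entries
        shopt -s cmdhist lithist    # or keep them as single entries, newline-separated

        history -a                  # flush this shell's history to $HISTFILE now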

    Read the article

  • Unable to telnet out on port 25 on windows server 2008

    - by NickGPS
    Hi All, I just set up a Windows 2008 R2 server and am trying to get a basic mail server up and running so that I can send emails from my applications. I set up a virtual SMTP server in IIS6 and tried doing a local telnet to port 25, which seemed to work fine. There were no errors during this stage and I can see the mail message appear in the Queue folder. The problem is that mail never leaves the Queue folder.

    I then tried to telnet to a remote mail server on port 25 but couldn't connect:

        telnet 209.85.227.27 25
        Could not open connection to the host, on port 25: Connection failed

    I checked my firewall and there is a default setting to allow all outgoing TCP traffic with no restriction. I even set up a specific rule for outgoing port 25 traffic, but to no avail. I then ran an SmtpDiag.exe command:

        .\SmtpDiag.exe [email protected] [email protected]

    and received the following output:

        Searching for Exchange external DNS settings.
        Computer name is WIN-SERVERNAME.
        Failed to connect to the domain controller. Error: 8007054b
        Checking SOA for gmail.com.
        Checking external DNS servers.
        Checking internal DNS servers.
        SOA serial number match: Passed.
        Checking local domain records.
        Checking MX records using TCP: gmail.com.
        Checking MX records using UDP: gmail.com.
        Both TCP and UDP queries succeeded. Local DNS test passed.
        Checking remote domain records.
        Checking MX records using TCP: gmail.com.
        Checking MX records using UDP: gmail.com.
        Both TCP and UDP queries succeeded. Remote DNS test passed.
        Checking MX servers listed for [email protected].
        Connecting to gmail-smtp-in.l.google.com [209.85.227.27] on port 25.
        Connecting to the server failed. Error: 10060
        Failed to submit mail to gmail-smtp-in.l.google.com.

    Are there any other diagnostics I can run to figure out whether it's my firewall or something else? I have removed the antivirus to make sure that it wasn't causing the problem. Any ideas would be much appreciated.

    Read the article

  • Monitoring tools that can take high rate and high volume?

    - by Jon Watte
    We're using Cacti with RRDTool to monitor and graph about 100,000 counters spread across about 1,000 Linux-based nodes. However, our current setup generally only gives us 5-minute graphs (with some data being minute-based); we often make changes where seeing feedback in "near real time" would be of value. I'd like approximately a week of 5- or 10-second data, a year of 1-minute data, and 5 years of 10-minute data. I have SSD disks and a dual-hexa-core server to spare.

    I tried setting up a Graphite/carbon/whisper server, and had about 15 nodes pipe to it, but it only has "average" for the retention function when promoting to older buckets. This is almost useless -- I'd like min, max, average, standard deviation, and perhaps "total sum" and "number of samples" or perhaps "95th percentile" available. The developer claims there's a new back-end "in beta" that allows you to write your own function, but this appears to still only do 1:1 retention (when saving older data, you really want the statistics calculated into many streams from a single input). Also, "in beta" seems a little risky for this installation. If I'm wrong about this assumption, I'd be happy to be shown my error!

    I've heard Zabbix recommended, but it puts data into MySQL or some other SQL database. 100,000 counters on a 5-second interval means 20,000 tps, and while I have an SSD, I don't have an 8-way RAID-6 with battery-backed cache, which I think I'd need for that to work out :-) Again, if that's actually something that's not a problem, I'd be happy to be shown the error of my ways. Also, can Zabbix do the single-data-stream, promote-with-statistics thing?

    Finally, Munin claims to have a new 2.0 coming out "in beta" right now, and it boasts custom retention plans. However, again, it's that "in beta" part -- has anyone used that for real, and at scale? How did it perform, if so?

    I'm almost thinking about using a graphing front-end (such as Graphite) and rolling my own retention backend with a simple layer on top of mmap() and some stats. That wouldn't be particularly hard, and would probably perform very well, letting the kernel figure out the balance between frequency of flushing to disk and process operations.

    Any other suggestions I should look into? Note: it has to have shown itself able to sustain the kinds of data loads I'm suggesting above; if you can point at the specific implementation you're referencing, so much the better!

    Read the article

  • How to conform to update-rc.d with LSB standard?

    - by user34881
    This is a question migrated from Stack Overflow, as I was told this is the place for it: http://stackoverflow.com/questions/2263567/how-to-conform-to-update-rc-d-with-lsb-standard

    I have set up a simple script to back up some directories. While I haven't had any problems setting up the functionality, I'm stuck with adding the script to the rcX.d dirs using update-rc.d. My script:

        #! /bin/sh
        ### BEGIN INIT INFO
        # Provides:          backup
        # Required-Start:    backup
        # Required-Stop:
        # Should-Stop:
        # Default-Start:     0 6
        # Default-Stop:
        # Description:       Backs up some dirs
        ### END INIT INFO

        check_mounted() {
            # Check if HD is mounted
        }

        do_backup() {
            if check_mounted; then
                # Some rsync statements.
            fi
        }

        case "$1" in
            start)
                do_backup
                ;;
            restart|reload|force-reload)
                echo "Error: argument '$1' not supported" >&2
                exit 3
                ;;
            stop|"")
                # No-op
                ;;
            *)
                echo "Usage: backup [start]" >&2
                exit 3
                ;;
        esac
        :

    Using

        update-rc.d backup start 10 0 6 .

    I get the following warnings and errors:

        update-rc.d: warning: backup start runlevel arguments (none) do not match LSB Default-Start values (0 6)
        update-rc.d: warning: backup stop runlevel arguments (0 6.) do not match LSB Default-Stop values (none)
        update-rc.d: error: start|stop arguments not terminated by "."

    The syntax I am trying to use is the following:

        update-rc.d [-n] <basename> start|stop NN runlvl [runlvl] [...] .

    Google wasn't that helpful at finding a solution. How can I correctly set up a script and add it via update-rc.d? I'm using Ubuntu 9.10.

    UPDATE: Using

        update-rc.d backup start 10 0 6 . stop 10 0 .

    the error disappears. The warnings about default values persist:

        update-rc.d: warning: backup start runlevel arguments (none) do not match LSB Default-Start values (0 6)
        update-rc.d: warning: backup stop runlevel arguments (0 6 0 6) do not match LSB Default-Stop values (none)

    It even is added to the appropriate rcX dirs, but it still does not get executed...
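    A minimal sketch of a header and registration that agree with each other; treating the backup as a "stop" action in runlevels 0 and 6 is an assumption about the intent, not the asker's final setup:

        ### BEGIN INIT INFO
        # Provides:          backup
        # Required-Start:
        # Required-Stop:     $local_fs
        # Default-Start:
        # Default-Stop:      0 6
        # Description:       Backs up some dirs when halting or rebooting
        ### END INIT INFO

        # Matching registration: no start links, stop links at priority 10 in 0 and 6.
        update-rc.d backup stop 10 0 6 .

    Note that the rc0.d/rc6.d links are invoked with the "stop" argument, so the script's stop) branch would have to call do_backup for anything to actually run at shutdown.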

    Read the article

  • Add user in CentOS 5

    - by Ron
    I created a new user on my CentOS web server with useradd and added a password with passwd, but I can't log in as that user via SSH. I keep getting 'access denied'. I checked to make sure that the password was assigned and that the account is active. /var/log/secure shows the following error:

        Aug 13 03:41:40 server1 su: pam_unix(su:auth): authentication failure; logname= uid=500 euid=0 tty=pts/0 ruser=rwade rhost= user=root

    Please help, thanks.

    Thanks for the responses so far. I should add that it is a VPS on a remote computer, fresh out of the box. I can log in as the root user quite fine. I can also su to the new user, but I cannot log in as the new user. Here is my sshd_config file:

        #       $OpenBSD: sshd_config,v 1.73 2005/12/06 22:38:28 reyk Exp $

        # This is the sshd server system-wide configuration file.  See
        # sshd_config(5) for more information.

        # This sshd was compiled with PATH=/usr/local/bin:/bin:/usr/bin

        # The strategy used for options in the default sshd_config shipped with
        # OpenSSH is to specify options with their default value where
        # possible, but leave them commented.  Uncommented options change a
        # default value.

        #Port 22
        #Protocol 2,1
        Protocol 2
        #AddressFamily any
        #ListenAddress 0.0.0.0
        #ListenAddress ::

        # HostKey for protocol version 1
        #HostKey /etc/ssh/ssh_host_key
        # HostKeys for protocol version 2
        #HostKey /etc/ssh/ssh_host_rsa_key
        #HostKey /etc/ssh/ssh_host_dsa_key

        # Lifetime and size of ephemeral version 1 server key
        #KeyRegenerationInterval 1h
        #ServerKeyBits 768

        # Logging
        # obsoletes QuietMode and FascistLogging
        #SyslogFacility AUTH
        SyslogFacility AUTHPRIV
        #LogLevel INFO

        # Authentication:
        #LoginGraceTime 2m
        #PermitRootLogin yes
        #StrictModes yes
        #MaxAuthTries 6

        #RSAAuthentication yes
        #PubkeyAuthentication yes
        #AuthorizedKeysFile     .ssh/authorized_keys

        # For this to work you will also need host keys in /etc/ssh/ssh_known_hosts
        #RhostsRSAAuthentication no
        # similar for protocol version 2
        #HostbasedAuthentication no
        # Change to yes if you don't trust ~/.ssh/known_hosts for
        # RhostsRSAAuthentication and HostbasedAuthentication
        #IgnoreUserKnownHosts no
        # Don't read the user's ~/.rhosts and ~/.shosts files
        #IgnoreRhosts yes

        # To disable tunneled clear text passwords, change to no here!
        #PasswordAuthentication yes
        #PermitEmptyPasswords no
        PasswordAuthentication yes

        # Change to no to disable s/key passwords
        #ChallengeResponseAuthentication yes
        ChallengeResponseAuthentication no

        # Kerberos options
        #KerberosAuthentication no
        #KerberosOrLocalPasswd yes
        #KerberosTicketCleanup yes
        #KerberosGetAFSToken no

        # GSSAPI options
        #GSSAPIAuthentication no
        GSSAPIAuthentication yes
        #GSSAPICleanupCredentials yes
        GSSAPICleanupCredentials yes

        # Set this to 'yes' to enable PAM authentication, account processing,
        # and session processing. If this is enabled, PAM authentication will
        # be allowed through the ChallengeResponseAuthentication mechanism.
        # Depending on your PAM configuration, this may bypass the setting of
        # PasswordAuthentication, PermitEmptyPasswords, and
        # "PermitRootLogin without-password". If you just want the PAM account and
        # session checks to run without PAM authentication, then enable this but set
        # ChallengeResponseAuthentication=no
        #UsePAM no
        UsePAM yes

        # Accept locale-related environment variables
        AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
        AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
        AcceptEnv LC_IDENTIFICATION LC_ALL

        #AllowTcpForwarding yes
        #GatewayPorts no
        #X11Forwarding no
        X11Forwarding yes
        #X11DisplayOffset 10
        #X11UseLocalhost yes
        #PrintMotd yes
        #PrintLastLog yes
        #TCPKeepAlive yes
        #UseLogin no
        #UsePrivilegeSeparation yes
        #PermitUserEnvironment no
        #Compression delayed
        #ClientAliveInterval 0
        #ClientAliveCountMax 3
        #ShowPatchLevel no
        #UseDNS yes
        #PidFile /var/run/sshd.pid
        #MaxStartups 10
        #PermitTunnel no
        #ChrootDirectory none

        # no default banner path
        #Banner /some/path

        # override default of no subsystems
        Subsystem       sftp    /usr/libexec/openssh/sftp-server
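    A minimal sketch of the checks usually worth running in this situation, assuming the new user is the "rwade" account seen in the log above:

        getent passwd rwade          # confirm the login shell is not /sbin/nologin
        passwd -S rwade              # confirm the password is set and not locked
        chage -l rwade               # look for an expired account or forced password change
        getenforce                   # see whether SELinux is enforcing
        tail -f /var/log/secure      # watch sshd/pam messages while attempting the login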

    Read the article

  • Bluetooth not detecting any devices in Windows 7

    - by underDog
    My Lenovo ThinkPad E320 laptop running Windows 7 64-bit has recently been refusing to detect any Bluetooth devices. I have tried to connect, using 'Add devices' under 'Devices and Printers', to two different Bluetooth mice and my HTC Wildfire Android (2.2.1) phone, and none of them are detected in the 'Add a device' dialog.

    History: Bluetooth initially seemed OK when I first got this laptop. I was able to connect to and use my Android phone as a remote with no issues. I got my first Bluetooth mouse; it paired, but after each restart, or even after sleeping, it would not 're-connect' (even though it was listed under Bluetooth devices and supposedly 'working'), and I would need to remove the device and add it again. A week or two ago it stopped working altogether. It is not detected at all. I gave up on the mouse and bought another (Lenovo ThinkPad brand), only to find that it was not detected either. I subsequently tested my Android phone and discovered it would not detect either.

    One thing of note: under 'Devices and Printers' there is listed a 'HID Keyboard Device' which under properties is listed as a 'Bluetooth HID Device'. This was not there before this problem started. Each time I remove it, or uninstall it from Device Manager, it quickly re-installs itself, even with all my Bluetooth devices switched off.

    My research of this issue (Google and searching this site) has not yielded any definitive answers. I have changed the setting under Device Manager > Bluetooth > Properties > Power Management > 'Allow the computer to turn off this device to save power' to off. I have attempted to uninstall and re-install the Bluetooth hardware, including the 'remove drivers' option and downloading and running the Lenovo Bluetooth installer package (found at http://support.lenovo.com/en_US/downloads/detail.page?DocID=DS014997). Bluetooth is turned on. All items under Bluetooth properties (Discovery and Connections) are checked. I have tried changing the batteries. I'm not sure what else I can try, apart from perhaps doing a fresh install of Windows. Any suggestions?

    Read the article

  • Nvidia Drivers on Debian / Lenny (Stable) -> Installation successful -> Monitors gets black

    - by David
    I have successfully installed the proprietary drivers for my NVIDIA (GeForce 7300 GT) graphics card on Debian/Lenny. I know it's not the best way to choose for driver installation (see this link: http://wiki.debian.org/NvidiaGraphicsDrivers#non-freedrivers), but the two ways seem to be possible for me (nvidia-kernel module compilation). Now the problem is that the monitors go black and the power light starts blinking after I launch the X server. Have a short look at the logs (output truncated from /var/log/Xorg.0.log):

        (II) Setting vga for screen 0.
        (**) NVIDIA(0): Depth 24, (--) framebuffer bpp 32
        (==) NVIDIA(0): RGB weight 888
        (==) NVIDIA(0): Default visual is TrueColor
        (==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0)
        (**) Jul 28 17:10:11 NVIDIA(0): Enabling RENDER acceleration
        (II) Jul 28 17:10:11 NVIDIA(0): Support for GLX with the Damage and Composite X extensions is
        (II) Jul 28 17:10:11 NVIDIA(0):     enabled.
        (II) Jul 28 17:10:11 NVIDIA(0): NVIDIA GPU GeForce 7300 GT (G73) at PCI:1:0:0 (GPU-0)
        (--) Jul 28 17:10:11 NVIDIA(0): Memory: 262144 kBytes
        (--) Jul 28 17:10:11 NVIDIA(0): VideoBIOS: 05.73.22.25.00
        (II) Jul 28 17:10:11 NVIDIA(0): Detected PCI Express Link width: 16X
        (--) Jul 28 17:10:11 NVIDIA(0): Interlaced video modes are supported on this GPU
        (--) Jul 28 17:10:11 NVIDIA(0): Connected display device(s) on GeForce 7300 GT at PCI:1:0:0:
        (--) Jul 28 17:10:11 NVIDIA(0):     Samsung SyncMaster (CRT-0)
        (--) Jul 28 17:10:11 NVIDIA(0):     Samsung SyncMaster (DFP-0)
        (--) Jul 28 17:10:11 NVIDIA(0): Samsung SyncMaster (CRT-0): 400.0 MHz maximum pixel clock
        (--) Jul 28 17:10:11 NVIDIA(0): Samsung SyncMaster (DFP-0): 165.0 MHz maximum pixel clock
        (--) Jul 28 17:10:11 NVIDIA(0): Samsung SyncMaster (DFP-0): Internal Single Link TMDS
        (II) Jul 28 17:10:11 NVIDIA(0): Assigned Display Device: CRT-0
        (==) Jul 28 17:10:11 NVIDIA(0):
        (==) Jul 28 17:10:11 NVIDIA(0): No modes were requested; the default mode "nvidia-auto-select"
        (==) Jul 28 17:10:11 NVIDIA(0):     will be used as the requested mode.
        (==) Jul 28 17:10:11 NVIDIA(0):
        (II) Jul 28 17:10:11 NVIDIA(0): Validated modes:
        (II) Jul 28 17:10:11 NVIDIA(0):     "nvidia-auto-select"
        (II) Jul 28 17:10:11 NVIDIA(0): Virtual screen size determined to be 1280 x 1024
        (--) Jul 28 17:10:11 NVIDIA(0): DPI set to (85, 86); computed from "UseEdidDpi" X config
        (--) Jul 28 17:10:11 NVIDIA(0):     option
        (==) Jul 28 17:10:11 NVIDIA(0): Enabling 32-bit ARGB GLX visuals.
        (--) Depth 24 pixmap format is 32 bpp

    Here is the complete /etc/X11/xorg.conf file as generated by nvidia-xconfig (the paste cuts off at the end):

        # nvidia-xconfig: X configuration file generated by nvidia-xconfig
        # nvidia-xconfig: version 256.35 (buildmeister@builder101) Wed Jun 16 19:25:59 PDT 2010

        Section "ServerLayout"
            Identifier "Layout0"
            Screen 0 "Screen0"
            InputDevice "Keyboard0" "CoreKeyboard"
            InputDevice "Mouse0" "CorePointer"
        EndSection

        Section "Files"
        EndSection

        Section "Module"
            Load "dbe"
            Load "extmod"
            Load "type1"
            Load "freetype"
            Load "glx"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier "Mouse0"
            Driver "mouse"
            Option "Protocol" "auto"
            Option "Device" "/dev/psaux"
            Option "Emulate3Buttons" "no"
            Option "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier "Keyboard0"
            Driver "kbd"
        EndSection

        Section "Monitor"
            Identifier "Monitor0"
            VendorName "Unknown"
            ModelName "Unknown"
            Hor

    Read the article

  • Thunderbird 3: can't change column width?

    - by rumtscho
    I recently installed Thunderbird 3.0.3 and just noticed a suboptimal UI setting: in the upper pane, which lists the e-mails in the current folder, the Date column is about 200px wide. So when I keep the window at 480x600, all I see in a row is:

        | tree icon | favourites icon | attachment icon | read icon | junk icon | Date and time, followed by 5cm whitespace | ... | P

    where "P" is the first letter of the name of the sender. And the "..." is actually shown this way; I have no idea which column it is meant to be. But I see neither the sender nor the message subject, which makes scrolling a folder for a certain mail rather pointless. I do see these when I maximize the window; the columns are then not only bigger, they are arranged in another sequence. But I feel that holding a mail client permanently maximised at 1600x1200 is a waste of screen real estate.

    My naive solution attempt was to move the mouse cursor to the right edge of the Date column and try to shrink it by dragging left while holding down the left mouse button. Not only is this the default behaviour for all resizable columns I've ever encountered in GUIs, the cursor actually turns into a horizontal double-headed arrow. But pulling has no effect at all. I cannot make a wide column narrow, and I cannot make the narrow columns wide. I didn't find anything in the preferences either. So can somebody please explain how to get the columns arranged sensibly?

    Edit: I found out that I only have the problem when I drag the Thunderbird window to a GridMove screen area. It gets automatically resized, but doesn't notice the resize event or something, so the column width remains the same as under a maximized window. First making the window narrow using the mouse helps with the column width, but the width of the mail pane is still too wide (rows don't reflow). Anyway, this seems to be a bug caused by the combination of the two applications and not a configuration problem, so I guess I'll have to live with it.

    Read the article

  • Moving from VPS to Cloud

    - by GRIGORE-TURBODISEL
    ...and I have a few questions. I'm basically working on a MySQL+PHP based webapp. Since I don't have on-demand scaling with a VPS, I'm planning to move from VPS to cloud when I hit the 1000-subscriber barrier. I'm looking at Windows Azure, but I'm OK with other suggestions. So here are my questions:

        Will it really cost me a kidney? Every subscriber needs to download around 4-5MB of static resources each day. Bandwidth is free on the VPS, but here I see costs can easily get to $800.00/mo; this makes me very insecure about the whole thing - I mean, the VPS is just $2,000/yr.

        Do I need another VM, or is PHP included in the Web Sites? I have basic sysadmin skills; I think I can handle setting up a PHP install, but will I have to do this? If yes, what other services do I need to set up manually? What about Memcached, MySQL, etc.?

        What security protections does it include? For example, I currently have some basic protection included, like directory traversal and executable file upload filtering; I also have CloudFlare on my other websites for DDoS protection. Will I need to do the same thing here too, can it even be installed, can I edit my DNS records, etc.?

        How are e-mails, subdomains, add-on domains, parked domains, etc. handled? I haven't seen any references to e-mail boxes. On the VPS I simply add them from cPanel ([email protected] / whatever.mysite.com / ...); do I have a similar management interface here?

        Do I get SSH access? Or at least FTP, remote MySQL access and maybe some incremental back-ups or something? Can I see my quotas and advanced traffic info?

    I must mention that I really like the idea of the whole "cloud" concept, the added reliability and everything, but I really need maybe a parallel to regular hosting or something so I know what to expect.

    Read the article

  • OpenVPN multiple servers on the same subnet, high availability

    - by andre
    Hey everyone. Let me start by saying that my Linux experience isn't super awesome, but I can usually find my way around things easily. Over at work we have an OpenVPN setup that's been due for some improvement for a while now. The main server (tap mode) runs in our office, behind a rather slow DSL connection. The main problem is that, since I'm usually out of the office, every time I want to access something on the virtual network I have to go through that server to get anywhere else.

    We have two servers up on 100 Mbit connections that we use for development and production purposes, about 3 more servers in the office (one of them behind a different T1 line for VoIP), and about two dozen clients who use the network on a daily basis from various locations. We've had situations where network routing (outside of our control) would not allow people to reach our main OpenVPN server whilst the other locations were connectable. Also, any time someone outside the office wants to fetch something from any of the servers (say, a 500 MB code repository), a whopping 20 KB/s download speed is just unacceptable these days (did I mention slow DSL? ok). We had to implement traffic shaping on this server since maxing out this connection was fairly trivial.

    I had the thought of running two (or more) OpenVPN servers in the network. These would have to have the same subnet though, as our application relies on the virtual network's IP addresses for some of its core functionality. The clients would also preferably retain the same IP addresses, but that's not vital. For simplicity, let's call the current server office and the second server I'm setting up cloud. Call the server on the T1 phone. This proved to be rather complex, because as soon as I connect to cloud, I cannot see office. Any routes to a server that would go through office also do not work while I'm connected to cloud (no ping, nothing), and vice-versa. There are no iptables rules that would be blocking the traffic either.

    Recently I came across an article on Linux Journal, but the solution they provide seems to only cover the use of two servers and is somewhat outdated (I can't even find much documentation; their wiki is offline). They also state that adding more servers would be a complex task. Ideally I would like to keep the existing server office running the virtual network and also run the OpenVPN daemon on the cloud and phone servers (100 Mbit and very reliable connection, respectively) so that we're on safe ground in case of a hardware failure, DSL failure, etc.

    So, in essence, I'm looking for a highly available OpenVPN solution (fix, patch, hack, tweak, whatever you want to call it) that will accept connections on multiple hosts (2 or more) whilst keeping the same IP address subnet regardless of the server to which you connect. Thanks for reading and sorry for the long post; I hope it gets the point across :P

    Read the article

  • How to configure ASP.NET MVC 3 on IIS 6 (Windows 2003 R2)

    - by Nedcode
    I am getting 403 Directory Listing Denied for the root and 404 for an action that I know should exist.

    Background: I built and deployed an ASP.NET MVC 2 application a long time ago. Later I upgraded it to MVC 3 and it is still working with no configuration changes. Setting it up on a Windows 2003 R2 (Standard) server was initially a pain, but after a couple of days (yes, days) of struggling it started working. Now I have to do the same with the same application on a different server (2003 R2 Standard again) on a different network.

    .NET 4 is installed and allowed. ASP.NET MVC 3 is also installed. By default IIS is set to use .NET 4. I verified that the aspnet_isapi.dll used in the application extensions is the one from the version 4.0.30319 .NET assemblies folder. I also added the wildcard mapping to aspnet_isapi.dll and unchecked "verify file exists". Under Directory Security in Authentication Methods I have disabled anonymous access and enabled Integrated Windows authentication (same as on the server where it works). I have copied the same web.config with:

        <authentication mode="Windows" />
        <authorization>
          <deny users="?" />
        </authorization>

    I have set Read & Execute, List Folder Contents, and Read for the NetworkService account (under which the app pool is running). I have also set the same for the Network account, IIS_WPG, ASPNET and IUSR_MachineName. I do not have an EnableExtensionlessUrls registry value, but even if I create it and set it to true or false it does not help. I also tried http://haacked.com/archive/2010/12/22/asp-net-mvc-3-extensionless-urls-on-iis-6.aspx and it did not help. I kept getting 403 Directory Listing Denied for the root and 404 for an action that I know should exist.

    Web Platform Installer was then used to re-install and possibly update .NET, ASP.NET, etc. I then noticed IIS was reset to defaults, so I added the wildcard mapping again. No luck, still 403. I exported configuration files from the working server setup and created a new default app pool and new default website using those configurations. Still I get 403 Directory Listing Denied for the root and 404 for any action I try.

    Read the article

  • ISPconfig3 + CentOS 6.2 , confused on how to move forward after initial install?

    - by Damainman
    I installed ISPConfig 3 on CentOS 6.2 using the great guide on howtoforge.com. Everything is up and running and I can access ISPConfig via a browser. However, I am not sure how to move forward with the initial setup so I can create the very first account and get my website live.

    Details: I only have 1 server; CentOS + ISPConfig runs in a virtual machine on XEN XCP. I set the server name to server1.mydomain.com. I only have 2 usable IPs, and I plan to use them as follows:

        xx.xx.xx.01 : for my website and the websites of all accounts I add.
        xx.xx.xx.02 : for ns1.mydomain.com and ns2.mydomain.com (yea, I know they should be different IPs at different locations, but this is what I have to work with at the moment...)

    I registered the nameservers at my registrar with the .02 IP. I want to use BIND and ISPConfig to run the DNS on my server itself, not via my registrar. Right now if I go to the .01 IP it shows the CentOS + Apache successful-install page.

    So to break it down, I am not sure where to start when it comes to the following (what to consider and what to do to set up the first domain on the server):

        Telling BIND to use the name server domains with .02.
        Setting up my first website (which will be my main website) in ISPConfig so mydomain.com resolves properly to my server.
        Making it so that when you go to the .01 IP, it either redirects or shows the contents of my main website. (If this can't be done, then any advice is appreciated.)
        Making sure that when I add a new domain, it automatically puts in the proper information so the domain points to the right mail, database and DNS entries.

    If I overlooked a tutorial then please feel free to let me know, and any advice would be greatly appreciated. Some of the tutorials I found were not specific to doing everything on only one server with CentOS + Apache + BIND. Right now all I have done is install CentOS and install ISPConfig 3. I'm trying to move forward correctly so I don't mess up everything I did by not knowing what to do. Thank you in advance!
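    A minimal sketch of how the DNS side can be verified once the zone has been created in ISPConfig; the names and the .02 address are the placeholders used in the question:

        dig @xx.xx.xx.02 mydomain.com A +short      # zone as served by your own BIND
        dig @xx.xx.xx.02 mydomain.com NS +short     # should list ns1/ns2.mydomain.com
        dig mydomain.com NS +short                  # what the rest of the world sees via the registrar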

    Read the article

  • Cobbler 2.2.2 problems

    - by Peter
    I have set up a dedicated LAN for Cobbler tests. My setup is:

        Cobbler server: openSUSE 12.3, cobbler 2.2.2 (from the openSUSE repos)
        Imported distros: CentOS 6.5, Red Hat 6.5, Red Hat 7.0, openSUSE 13.1
        Target machines: VMs in a Windows 7 VirtualBox

    System provisioning works OK, but I have some problems. The first one is that Cobbler does not honor the "pxe_just_once: 1" setting. When the setup of the target OS is finished, after the reboot the target system continues to PXE boot!

    The second problem is that the target server is not correctly configured! See my setup:

        cobbler system report --name=test
        Name : test
        TFTP Boot Files : {}
        Comment :
        Fetchable Files : {}
        Gateway : 192.168.0.1
        Hostname : testcob1.example.com
        Image :
        IPv6 Autoconfiguration : False
        IPv6 Default Device :
        Kernel Options : {}
        Kernel Options (Post Install) : {}
        Kickstart : <<inherit>>
        Kickstart Metadata : {}
        LDAP Enabled : False
        LDAP Management Type : authconfig
        Management Classes : []
        Management Parameters : <<inherit>>
        Monit Enabled : False
        Name Servers : ['192.168.0.1', '8.8.8.8']
        Name Servers Search Path : []
        Netboot Enabled : False
        Owners : ['admin']
        Power Management Address :
        Power ID :
        Power Password :
        Power Management Type : ipmitool
        Power Username :
        Profile : RHEL-6.5-x86_64
        Proxy : <<inherit>>
        Red Hat Management Key : <<inherit>>
        Red Hat Management Server : <<inherit>>
        Repos Enabled : False
        Server Override : <<inherit>>
        Status : testing
        Template Files : {}
        Virt Auto Boot : <<inherit>>
        Virt CPUs : <<inherit>>
        Virt Disk Driver Type : <<inherit>>
        Virt File Size(GB) : <<inherit>>
        Virt Path : <<inherit>>
        Virt RAM (MB) : <<inherit>>
        Virt Type : <<inherit>>

        Interface ===== : eth0
        Bonding Opts :
        Bridge Opts :
        DHCP Tag :
        DNS Name :
        Master Interface :
        Interface Type :
        IP Address : 192.168.0.200
        IPv6 Address :
        IPv6 Default Gateway :
        IPv6 MTU :
        IPv6 Secondaries : []
        IPv6 Static Routes : []
        MAC Address :
        Management Interface : True
        MTU :
        Subnet Mask : 255.255.255.0
        Static : True
        Static Routes : []
        Virt Bridge :

    So, although I have set up the hostname and the network interface of the target system, after the setup the hostname is set to localhost.localdomain and eth0 is configured via DHCP, not static! How can I find the problem and fix it? Note that I have synced and restarted Cobbler a couple of times, but the problems persist.
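    A minimal sketch of the netboot-related commands that are usually checked first here; the system name comes from the report above, but treating this as the cause of the PXE loop is only an assumption:

        # pxe_just_once only takes effect after a sync, and it relies on the installed
        # system being able to reach the cobbler server to report that it finished.
        cobbler system edit --name=test --netboot-enabled=false   # stop offering PXE to this system
        cobbler sync                                              # regenerate the PXE/TFTP configuration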

    Read the article

  • Automating silent software deployments on Solaris 10

    - by datSilencer
    Hello everyone. Essentially, the question I'd like to ask is related to the automation of software package deployments on Solaris 10. Specifically, I have a set of software components in tar files that run as daemon processes after being extracted and configured in the host environment. Pretty much like any server-side software package out there, I need to ensure that a list of prerequisites is met before extracting and running the software. For example:

        Checking that certain users exist, and that they are associated with one or more user groups. If not, then create them and their group associations.
        Checking that target application folders exist and, if not, creating them with preconfigured path values defined when the package was assembled.
        Checking that such folders have the appropriate access control level and ownership for a certain user. If not, then set them.
        Checking that a set of environment variables are defined in /etc/profile, pointed at predefined path locations, added to the general $PATH environment variable, and finally exported into the user's environment. Other files include /etc/services and /etc/system.

    Obviously, doing this for many boxes (the goal in question) by hand can be slow and error-prone. I believe a better alternative is to somehow automate this process. So far I have thought about the following options, and discarded them for one reason or another:

        1) Traditional shell scripts. I've only troubleshooted these before, and I don't really have much experience with them. These would be my last resort.
        2) Python scripts using the pexpect library for analyzing system command output. This was my initial choice since the target Solaris environments have it installed. However, I want to make sure that I'm not reinventing the wheel again :P.
        3) Ant or Gradle scripts. They may be an option since the boxes also have Java 1.5 enabled, and the fileset abstractions can be very useful. However, they may fall short when dealing with user and folder permission checking/setting.

    It seems obvious to me that I'm not the first person in this situation, but I don't seem to find a utility framework geared towards this purpose. Please let me know if there's a better way to accomplish this. I thank you for your time and help.
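    A minimal sketch of what the prerequisite checks above look like as plain Bourne shell (option 1); the user, group and path names are placeholders, not values from the question:

        #!/bin/sh
        APP_USER=appuser
        APP_GROUP=appgrp
        APP_HOME=/opt/myapp

        # user and group exist, creating them if needed
        getent group "$APP_GROUP" >/dev/null || /usr/sbin/groupadd "$APP_GROUP"
        getent passwd "$APP_USER" >/dev/null || \
            /usr/sbin/useradd -g "$APP_GROUP" -d /export/home/"$APP_USER" -m "$APP_USER"

        # target folder exists with the expected owner and mode
        [ -d "$APP_HOME" ] || mkdir -p "$APP_HOME"
        chown "$APP_USER":"$APP_GROUP" "$APP_HOME"
        chmod 750 "$APP_HOME"

        # environment variable exported for all users (appended only once)
        grep "^APP_HOME=" /etc/profile >/dev/null || \
            echo "APP_HOME=$APP_HOME; export APP_HOME" >> /etc/profile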

    Read the article

  • HTTPS in sub domain redirects to main domain

    - by Amitabh
    We recently bought a wildcard certificate and installed it for a domain. It works fine for the main domain but seems not to work at all for any subdomains. What's happening is that we can access the subdomains fine over HTTP, but whenever we try HTTPS for the same subdomain URL we are redirected back to the main domain. So if I put up a test folder "httpstest" in a subdomain with an index.html file in it, the following happens:

        mysubdomain.mywebsite.com/httpstest/index.html or mysubdomain.mywebsite.com/httpstest/ works perfectly fine with http://

    but

        mysubdomain.mywebsite.com/httpstest/ or mysubdomain.mywebsite.com/httpstest/index.html does not work with https:// and redirects to the main domain.

    Any help on this is greatly appreciated. The site is not the main site used for setting up the VPS; it was added from WHM.

    Environment: We are on a Linux VPS. cPanel 11.30.6, Apache 2.2.22, PHP 5.3.13. The VirtualHost entry looks like:

        <VirtualHost xx.xx.xxx.xx:443>
            ServerName my-own-website.com
            ServerAlias www.my-own-website.com
            DocumentRoot /home/amitabh/public_html
            ServerAdmin [email protected]
            UseCanonicalName Off
            CustomLog /usr/local/apache/domlogs/my-own-website.com combined
            CustomLog /usr/local/apache/domlogs/my-own-website.com-bytes_log "%{%s}t %I .\n%{%s}t %O ."
            ## User amitabh # Needed for Cpanel::ApacheConf
            <IfModule mod_suphp.c>
                suPHP_UserGroup amitabh amitabh
            </IfModule>
            <IfModule !mod_disable_suexec.c>
                SuexecUserGroup amitabh amitabh
            </IfModule>
            ScriptAlias /cgi-bin/ /home/amitabh/public_html/cgi-bin/
            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/my-own-website.com.crt
            SSLCertificateKeyFile /etc/ssl/private/my-own-website.com.key
            SSLCACertificateFile /etc/ssl/certs/my-own-website.com.cabundle
            CustomLog /usr/local/apache/domlogs/my-own-website.com-ssl_log combined
            SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
            <Directory "/home/amitabh/public_html/cgi-bin">
                SSLOptions +StdEnvVars
            </Directory>
            # To customize this VirtualHost use an include file at the following location
            # Include "/usr/local/apache/conf/userdata/ssl/2/amitabh/my-own-website.com/*.conf"
        </VirtualHost>

    I guess I messed up the formatting big time. Any help on formatting and on the issue is greatly appreciated. Thank you.

    Update: I could not update the formatting here. I posted the same question in a Linux forum. I will really appreciate any pointers on it.

    Read the article

  • Two VHosts Use Same DocumentRoot, PHP Not Working on Second VHost

    - by thegrip
    I'm helping maintain an e-commerce site that runs on Magento. This site is an outlet for our wholesale customers. We have recently decided to open a second store to reach out to our retail customers, and we decided to set up another website inside our Magento store so that we can share the products across both stores.

    I'm in the process of setting up this new site on the server, but have run into an issue. I've set up the second vhost for the new retail site, and I've made the DocumentRoot for this vhost the same as for the wholesale site, so we can use one Magento application for both sites. This is where the error occurs: when I browse to the new store it triggers a download of the index.php file. So I know the DocumentRoot directive is working, but it seems like PHP is being broken in the process.

    I'm using Plesk to manage the server. I've made sure that PHP is turned on in both vhosts and still get the same issue. Does this sound like a problem of PHP breaking, or is it possible my vhost.conf file is set up incorrectly? (Although the vhost is managed by Plesk and appears correct.) Any help will be much appreciated.

    EDIT: Here's the vhost config (generated by Plesk):

        <VirtualHost IPADDRESS:80>
            ServerName domain.com:80
            ServerAdmin "[email protected]"
            DocumentRoot /var/www/vhosts/domain.com/subdomains/tk/httpdocs
            CustomLog /var/www/vhosts/domain.com/statistics/logs/access_log plesklog
            ErrorLog /var/www/vhosts/domain.com/statistics/logs/error_log
            <IfModule mod_ssl.c>
                SSLEngine off
            </IfModule>
            <Directory /var/www/vhosts/domain.com/subdomains/tk/httpdocs>
                <IfModule mod_php4.c>
                    php_admin_flag engine on
                    php_admin_flag safe_mode off
                    php_admin_value open_basedir "/var/www/vhosts/domain.com/subdomains/tk/httpdocs:/tmp"
                </IfModule>
                <IfModule mod_php5.c>
                    php_admin_flag engine on
                    php_admin_flag safe_mode off
                    php_admin_value open_basedir "/var/www/vhosts/mkdesigngroup.com/subdomains/tk/httpdocs:/tmp"
                </IfModule>
                Options -Includes -ExecCGI
            </Directory>
            Include /var/www/vhosts/domain.com/subdomains/tk/conf/vhost.conf

    And here's what I've added in vhost.conf (which is included by Plesk):

        DocumentRoot /var/www/vhosts/domain.com/subdomains/dev/httpdocs

    -grip

    Read the article

  • Default permission for newly-created files/folders using ACLs not respected by commands like "unzip"

    - by Ngoc Pham
    I am having trouble setting up a system for multiple users accessing the same set of files. I've read tutorials and docs and played with ACLs, but haven't succeeded yet.

    MY SCENARIO: I have multiple users, for example user1 and user2, which belong to a group called sharedusers. They must all have WRITE permission to the same set of files and directories, say under /userdata/sharing/. I have the folder's group set to sharedusers and the SGID bit set so that all newly created files/dirs inside get the same group.

        ubuntu@home:/userdata$ ll
        drwxr-sr-x 2 ubuntu sharedusers 4096 Nov 24 03:51 sharing/

    I set ACLs for this directory so that the permissions of subdirs/files are inherited from the parent:

        ubuntu@home:/userdata$ setfacl -m group:sharedusers:rwx sharing/
        ubuntu@home:/userdata$ setfacl -d -m group:sharedusers:rwx sharing/

    Here's what I've got:

        ubuntu@home:/userdata$ getfacl sharing/
        # file: sharing/
        # owner: ubuntu
        # group: sharedusers
        # flags: -s-
        user::rwx
        group::r-x
        group:sharedusers:rwx
        mask::rwx
        other::r-x
        default:user::rwx
        default:group::r-x
        default:group:sharedusers:rwx
        default:mask::rwx
        default:other::r-x

    This seems okay, as when I create a new folder with new files inside, the permissions are correct:

        ubuntu@home:/userdata/sharing$ mkdir a && cd a
        ubuntu@home:/userdata/sharing/a$ touch a_test
        ubuntu@home:/userdata/sharing/a$ getfacl a_test
        # file: a_test
        # owner: ubuntu
        # group: sharedusers
        user::rw-
        group::r-x              #effective:r--
        group:sharedusers:rwx   #effective:rw-
        mask::rw-
        other::r--

    As you can see, the sharedusers group has effective permission rw-. HOWEVER, if I have a zip file and use the unzip -q command to unzip it inside the sharing folder, the extracted folders don't have group write permission. Therefore, the users from group sharedusers cannot modify files under those extracted folders.

        ubuntu@home:/userdata/sharing$ unzip -q Joomla_3.0.2-Stable-Full_Package.zip
        ubuntu@home:/userdata/sharing$ ll
        drwxrwsr-x+  2 ubuntu sharedusers 4096 Nov 24 04:00 a/
        drwxr-xr-x+ 10 ubuntu sharedusers 4096 Nov  7 01:52 administrator/
        drwxr-xr-x+ 13 ubuntu sharedusers 4096 Nov  7 01:52 components/

    You can spot the difference in permissions between folder a (created before) and folder administrator extracted by unzip. And the ACLs of a file inside administrator:

        ubuntu@home:/userdata/sharing$ getfacl administrator/index.php
        # file: administrator/index.php
        # owner: ubuntu
        # group: ubuntu
        user::rw-
        group::r-x              #effective:r--
        group:sharedusers:rwx   #effective:r--
        mask::r--
        other::r--

    It also has the ubuntu group, not the sharedusers group as expected. Could someone please explain the problem and give me advice? Thank you in advance!
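    A minimal sketch of a post-extraction fix-up, assuming the goal is simply to re-open the group entry and ACL mask that unzip narrowed when it applied the permissions stored in the archive (directory names taken from the listing above):

        # Re-grant the group entry and widen the mask (X = execute only on directories).
        setfacl -R -m group:sharedusers:rwX,mask::rwX administrator/ components/
        # Re-apply the default (inherited) ACL and the setgid bit on the extracted directories.
        find administrator/ components/ -type d \
            -exec setfacl -d -m group:sharedusers:rwX {} + -exec chmod g+s {} +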

    Read the article

  • configuring mod_proxy_html properly?

    - by tobinjim
    I have an apache2 web server that handles reverse proxying for a Rails 3 app running on another machine. The setup works, except that URLs generated within the webapp aren't getting rewritten by my configuration for mod_proxy_html. The "Reverse Proxy Scenario" is exactly what I'm trying to do, so I've followed that tutorial as completely as I know how. I've applied or tried answers supplied here on Stack Overflow, to no effect.

    According to the "Reverse Proxy Scenario" you want a number of modules loaded. All those instructions are in my httpd.conf file, and when I examine the output from apachectl -t -D DUMP_MODULES, all the expected modules show up in the listing. My external web server doing the reverse proxy is at www.ourdomain.org and the Rails app is internally available at apphost.local (the server is Mac OS X Server 10.6; the Rails app server is Mac OS X 10.6).

    What's working right now is access to the webapp via the reverse proxy as:

        http://www.ourdomain.org/apphost/railsappname/controllername/action

    But none of the javascript files, css files or other assets get loaded, and links internal to the web app come out missing the apphost portion of the URL, as if my rewrite rule were configured incorrectly (so of course I've focused on that and can't seem to get anything to be added or deleted in the process of passing the html in from the apphost and out through the Apache server). For instance, hovering over an action link in the html returned by the web app you'll get:

        http://www.ourdomain.org/railsappname/controllername/action

    Here's what my Apache directives look like:

        LoadModule proxy_html_module /usr/libexec/apache2/mod_proxy_html.so
        LoadModule xml2enc_module /usr/libexec/apache2/mod_xml2enc.so

        ProxyHTMLLogVerbose On
        LogLevel Debug

        ProxyPass /apphost/ http://apphost.local/

        <Location /apphost/>
            SetOutputFilter INFLATE;proxy-html;DEFLATE
            ProxyPassReverse /
            ProxyHTMLExtended On
            ProxyHTMLURLMap railsappname/ apphost/railsappname/
            RequestHeader unset Accept-Encoding
        </Location>

    After every change I make to httpd.conf I religiously check apachectl -t just to be sane. I'm definitely not an Apache expert, but all the directives that follow mine seem not to overrule what I'm doing here. And yet nothing that I try seems to alter the URLs I see in my browser after hitting the Apache server with a request for my web app.

    Even if you can't tell what I've done incorrectly, I'd welcome ideas on how to get Apache to help me see what it's working on and doing to the html coming from my web app. That's what I understood the ProxyHTMLLogVerbose On and LogLevel Debug settings to be for, but I'm not seeing anything in the log files.
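    A minimal sketch of a quick way to compare what the proxy returns with what the backend emits, using the URLs quoted above; the grep pattern is just an illustration:

        # Links as a browser sees them through the proxy:
        curl -s http://www.ourdomain.org/apphost/railsappname/controllername/action | grep -o 'href="[^"]*"' | sort -u

        # Links as the backend emits them (run where apphost.local resolves):
        curl -s http://apphost.local/railsappname/controllername/action | grep -o 'href="[^"]*"' | sort -u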

    Read the article

  • Installing and configuring Zend Framework 2 server-wide [Ubuntu] and test driving ZendSkeletonApplication

    - by kinologik
    I'm trying to have ZF2 installed for all my subdomains at once (Ubuntu 12.04). ZF2 just launched its first stable version, so I wanted to install it on my development server and finally get my hands dirty with it. I downloaded ZF2 and unzipped the files into /var/ZF2/ (which now contains Zend/[all components]). I then edited /etc/php5/apache2/php.ini and added the path to the ZF2 files:

        include_path = ".:/var/ZF2"

    I then downloaded the ZendSkeletonApplication and unzipped it in /var/www/skeleton. I know it is suggested to use composer.phar to install a ZF2 application, but:

        I don't want to make a local installation of ZF2; I want to make a server-wide installation and be able to use my Zend components on all my domains/subdomains on my development server.
        Before using any automatic installation process, I'd really like to understand that process by doing it manually at first.

    Obviously, something goes wrong when I fire up the ZendSkeletonApplication, and I get the following when I hit this URL: http://www.myDevServer.com/skeleton/public/

        Fatal error: Uncaught exception 'RuntimeException' with message 'Unable to load ZF2. Run `php composer.phar install` or define a ZF2_PATH environment variable.' in /var/www/skeleton/init_autoloader.php:48
        Stack trace:
        #0 /var/www/skeleton/public/index.php(9): include()
        #1 {main}
          thrown in /var/www/skeleton/init_autoloader.php on line 48

    I have skimmed through the docs, tutorials and the like, but there is no straightforward answer for this kind of configuration. In the official doc, in the (very short) installation chapter, I see a reference to adding an include path in PHP, but no example: http://zf2.readthedocs.org/en/latest/ref/installation.html

        Once you have a copy of Zend Framework available, your application needs to be able to access the framework classes found in the library folder. Though there are several ways to achieve this, your PHP include_path needs to contain the path to Zend Framework's library.

    But then, when I get to the "Getting Started" chapter, it's all composer.phar and nothing else: http://zf2.readthedocs.org/en/latest/user-guide/skeleton-application.html

    I'm no sysadmin, just a Zend enthusiast. I'm pretty sure this PEBKAC problem might be obvious to those who already got into ZF2's previous betas. Thanks for helping me out.

    EDIT: Problem was resolved, thanks to Daniel M. Setting up ZF2_PATH in httpd.conf was all that was needed:

        SetEnv ZF2_PATH /var/ZF2

    I also removed the include_path reference in php.ini and everything works just fine. So I have no idea why Zend suggested to include it there in their official docs.

    Read the article

  • SQL Server database filled the hard drive and freeing up space isn't possible

    - by Jon
    I have a database in SQL Server 2008 on a 1 TB hard drive and it has filled the drive; there is only 4 KB free. The MDF file is 323 GB and the LDF is 653 GB. The disk this DB is on has no other files on it besides the MDF and LDF, so it's impossible to free up any space on the drive. The main hard disk is smaller, but there is enough room to transfer the MDF to that drive, in case that helps.

    This server is overseas at a customer site and it's not possible at the moment to add more disk space to it. It's also not possible to delete any records because the DB is in a failed mode (due to no disk space) and it doesn't respond to most commands.

    The DB is currently in full recovery mode, which is why the LDF file is so large. This DB really doesn't need to be in full recovery, so going forward we plan on switching it to simple mode, which will save us a lot of space. I also don't care about losing the LDF file, but I need all of the data.

    I've spent a lot of time looking for a way out of this problem, but everything I've found first involves either freeing up disk space or adding more disk space, neither of which is an option at this time. I'm stuck and any help would be greatly appreciated.

    I get the following log when trying to bring the DB online:

        Msg 945, Level 14, State 2, Line 3
        Database 'DBNAME' cannot be opened due to inaccessible files or insufficient memory or disk space. See the SQL Server errorlog for details.
        Msg 5069, Level 16, State 1, Line 3
        ALTER DATABASE statement failed.
        Msg 1101, Level 17, State 12, Line 3
        Could not allocate a new page for database 'DBNAME' because of insufficient disk space in filegroup 'DEFAULT'. Create the necessary space by dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

    I've found the following solutions, but none work due to having no disk space on that drive, and since the DB is in a failed state I can't run most commands:

        DBCC SHRINKFILE - can't be run because doing a 'use DBNAME' fails.
        Detaching the DB and then changing the location of the MDF/LDF files - this fails because the DB is in an offline mode, so you can't run detach.

    I'm at a loss about what else to try. Thanks.

    Read the article

  • How to Configure Source NAT (Private IP => Public IP Outbound)

    - by DavidScherer
    I'm running the free version of VMware ESXi and have Zentyal SBS 3.2 running as a gateway. I have 5 public IPs (a /29 block; let's call them 69.1.1.1 - 69.1.1.5) and currently Zentyal is bound to 69.1.1.1 as the gateway, with the other 4 public IPs set as virtual interfaces in Zentyal (wan2-wan5).

    I have machines sitting on the private network (10.34.251.x) that, when going outbound (to Google, for instance), should be seen by the Internet as an IP other than the gateway (69.1.1.1). This is because our machines need to be able to communicate with third-party APIs that expect these requests to come from a specific IP. From what I could find, SNAT (source NAT) in Zentyal is used to achieve this, but I'm not sure how to configure it and cannot find a specific piece of documentation for it at Zentyal.

    I've tried setting this up a couple of different ways with no results, and at this point I have no idea whether I'm going about this completely wrong, or whether my lack of experience with networking and the associated terminology is preventing me from placing the correct values in the correct fields. I get the following form to set up "SNAT" rules in Zentyal. Perhaps someone can offer some guidance and definitions for the fields?

        SNAT Address: Is this the public IP I want to masquerade as?
        Outgoing Interface: Should this be my external NIC (the one connected to the public 'net), or is it the "private" interface? It sounds as though this should be the external interface, as I want the traffic from the internal network sent out over this interface (using a different IP than normal, anyway).
        Source: Is this the source on the internal network (one of the private IPs), a public IP I want to masquerade as, or something else entirely?
        Destination: Is this a place on the Internet (e.g. "only do this for Google.com"/an IP), or am I allowing myself to become confused again?
        Service: I'm assuming this allows me to restrict which services this rule will apply to, but is it for a service on the internal network or a service being accessed on the external network?

    If I can offer any further details or information to make what I'm trying to do more clear, I will happily do so. Honestly, any kind of help here would be very much appreciated. I'm not a NetOps person or anything even close; I spend most of my day writing code, and my entire "team" at this company consists of "me, myself, and I". So while I try to broaden my knowledge at every possible opportunity, I can only learn so much, so fast, and I feel like with networking especially there's just so much, coupled with a learning curve for each solution that likes to (from my limited perspective) use slightly different terminology than what I'm used to (and I don't exactly have the necessary experience to cross-reference this stuff with the stuff I already know in context).
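    For comparison, a minimal sketch of what an equivalent source-NAT rule looks like in raw iptables terms; the interface name and the choice of 69.1.1.2 are assumptions, and Zentyal generates its own rules from the form rather than from anything entered by hand:

        # Present the private LAN to the outside world as the second public IP.
        # eth0 = external (public) interface, 10.34.251.0/24 = internal network (assumed names).
        iptables -t nat -A POSTROUTING -o eth0 -s 10.34.251.0/24 -j SNAT --to-source 69.1.1.2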

    Read the article

  • What's the best scenario for using a wireless router with Comcast Business Class

    - by Buck
    I just had Comcast Business Class internet installed (usage details at the bottom of this post). During the call to order, I asked about the hardware they'd be providing and was told it was a DOCSIS 3 modem that I'd have to pay $7.00/month for. Figuring I'd have to buy a router anyway, I decided to get my own modem - a Surfboard SB6121 DOCSIS 3. I called in to tech support to ask some questions and learned that the modem they would have provided DID have a router built in. It's an SMCD3G-CCR. It's not wireless (we need wireless).

    The guy explained that it was better to have their hardware here because if there's a problem with our service and we're using our own hardware, chances are they'll blame it on our hardware and do nothing since they don't support it. He explained that I could still hang my own wireless router off their modem/router, and if we ever had any service problems we'd be able to plug directly into their hardware and they'd be able to tell where the problem is, and they wouldn't be able to pawn it off onto "customer provided equipment". That all said, a few questions:

        1. Am I better off returning my Surfboard modem and getting the Comcast one?
        2. If I get a wireless router and plug it into one of the ethernet ports of the Comcast device, should I NOT plug anything else into the Comcast device, since it would be a different network from anything connecting via the wireless router? Is that correct?
        3. Given that I know VERY LITTLE about networking and setting up hardware like this... since I need wireless and will HAVE to get a wireless router to work with this Comcast device, do I need to do anything with the settings of the Comcast device? Do I use security on the Comcast device or the wireless router or both?
        4. Any suggestions or anything I need to think about, given this scenario, in order to use a business-type VoIP service like RingCentral or Jive or Nextiva?
        5. Any recommendations on a wireless router for this scenario?

    Usage: We are running 2 PCs (possibly 3-4 in the future) - these could be wired for the time being if needed, but we would prefer wireless. We would like to have a networked hard drive and a networked printer, and we NEED business-type VoIP service ASAP for 2 phone lines. We would like to hook up some IP cameras at some point (but not the kind that require static IPs, since I don't have one nor do I plan to pay Comcast another $15/month for one). I don't have or plan to have any type of web servers or anything like that. We want to use WPA or WPA2 security and take advantage of the NAT feature of the router for additional protection (that's the extent of my networking knowledge).

    Read the article

  • Routing table with two NIC adapters in libvirt/KVM

    - by lzap
    I created a virtual NAT network (the 192.168.100.0/24 network) in my libvirt setup and a new guest with two interfaces - one in this network, and one bridged (the 10.34.1.0/24 network) to the local LAN. The reason for this is that I need my own virtual network for my DHCP/TFTP/DNS testing and still want to access my guest externally from my LAN. On both networks I have working DHCP, both handing out IP addresses.

    When I set up NAT port forwarding (e.g. for ssh), I can connect to eth0 (the virtual network) and everything is fine. But when I try to access eth1 via the bridged interface, I get no response. I guess I have a problem with my routing table - outgoing packets are routed to the virtual NAT network (which has access to the machine I am connecting from - I can ping it). But I am not sure if this setup is correct. I think I need to add something to my routing table.

        # ifconfig
        eth0      Link encap:Ethernet  HWaddr 52:54:00:B4:A7:5F
                  inet addr:192.168.100.14  Bcast:192.168.100.255  Mask:255.255.255.0
                  inet6 addr: fe80::5054:ff:feb4:a75f/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:16468 errors:0 dropped:27 overruns:0 frame:0
                  TX packets:6081 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:22066140 (21.0 MiB)  TX bytes:483249 (471.9 KiB)
                  Interrupt:11 Base address:0x2000

        eth1      Link encap:Ethernet  HWaddr 52:54:00:DE:16:21
                  inet addr:10.34.1.111  Bcast:10.34.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::5054:ff:fede:1621/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:34 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:189 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:4911 (4.7 KiB)  TX bytes:9

        # route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.100.0   0.0.0.0         255.255.255.0   U     0      0        0 eth0
        10.34.1.0       0.0.0.0         255.255.255.0   U     0      0        0 eth1
        169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
        169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 eth1
        0.0.0.0         192.168.100.1   0.0.0.0         UG    0      0        0 eth0

    The network I am trying to connect from is different from the network the hypervisor is connected to: 10.36.0.0. But it is accessible from that network. So I tried to add a new route rule:

        route add -net 10.36.0.0 netmask 255.255.0.0 dev eth1

    And it is not working. I thought setting the correct interface would be sufficient. What is needed to get my packets coming through?
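    A minimal sketch of the usual source-based routing fix for a dual-homed guest like this, using iproute2; the LAN gateway address 10.34.1.1 and the routing table number are assumptions:

        # Replies to traffic arriving on eth1 should leave via eth1's gateway,
        # not via the default route on eth0.
        ip route add default via 10.34.1.1 dev eth1 table 100
        ip rule add from 10.34.1.111 table 100
        ip route flush cache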

    Read the article
