  • Create a partition table on a hardware RAID1 drive with [c]fdisk

    - by Lev Levitsky
    My question is: is there a reason for this not to work?

    Details: I have two 500 GB drives, and my motherboard has RAID support, so I created a RAID1 array and booted from a Linux live medium. I then listed the disks and, apart from the obvious /dev/sda, /dev/sdb, etc., there was /dev/md126 which, I figured, was the mirrored "virtual" drive. Its size was 475 GB; I had seen that the size of the array would be smaller than 500 GB when I was creating it, so no surprise there. I ran cfdisk /dev/md126, created the necessary partitions and chose write. It's been about half an hour now, I think, and it doesn't seem like it's ever going to finish. The only thing about cfdisk in dmesg is that it's "blocked for more than 120 seconds". Doing fdisk -l /dev/md126 in another terminal, I see all three partitions I created and a note that "Partition 1 does not start on a physical sector boundary". The table is lost after a reboot, though. I tried to partition /dev/sda individually, and it worked; the table was written in about a second. The "not on a physical sector boundary" message is there, too.

    EDIT: I tried fdisk on /dev/sda; then there were no messages about sector boundaries. After a reboot, I am able to use mkfs on /dev/md126p1, etc. fdisk shows that /dev/md126 has the same partitions as /dev/sda (but /dev/sdb doesn't have any). But at some point ("writing superblock and filesystem accounting information") mkfs also blocks. Using it on sda1 results in a "partition is used by the system" error. What could the problem be?

    EDIT 2: I booted a freshly updated system from a pendrive and was able to create a partition table and filesystems on /dev/md126 without any apparent problems. Was it an issue with the hardware support? My motherboard is an Asus P9X79.
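
    For reference, a minimal command sequence for partitioning and formatting the md device from a live system might look like the sketch below; /dev/md126 comes from the question, while the filesystem choice and partition number are illustrative.

        # confirm the BIOS-RAID mirror is assembled and healthy
        cat /proc/mdstat
        # partition the array device, not the member disks
        fdisk /dev/md126
        # ask the kernel to re-read the new table, then format
        partprobe /dev/md126
        mkfs.ext4 /dev/md126p1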

  • pfSense router on a LAN with two gateways

    - by JohnCC
    I have a LAN with an ADSL modem/router on it. We have just gained an alternative high-speed internet connection at our location, and I want to connect the LAN to it, eventually dropping the ADSL. I've chosen to use a small pfSense box to connect the LAN to the new WAN connection.

    Two servers on the LAN run services accessible to the outside via NAT using the single ADSL WAN IP. We have DNS records which point to this IP. I want to do the same via the new connection, using the WAN IP there. That connection permits multiple IPs, so I have configured pfSense using virtual IPs, 1:1 NAT and appropriate firewall rules.

    When I change the servers' default gateway settings to the pfSense box, I can access the services via the new WAN IPs without a problem. However, I can no longer access them via the old WAN IP. If I set the servers' default gateway back to the ADSL router, then the opposite is true: I can access the services via the ADSL IP, but not via the new one.

    In the first case, I believe this is because an incoming SYN packet arrives at the ADSL WAN IP and is NAT'd and sent to the internal IP of the server. The server responds with a SYN/ACK, which it sends via its default gateway, the pfSense box. The pfSense box sees a SYN/ACK for which it saw no SYN, and drops the packet.

    Is there any sensible way around this? I would like the services to be accessible via both IPs for a short period at least, since once I change the DNS it will take a while before everyone picks up the new address.

  • Win 2003 SBS - secure enough by default?

    - by Pekka
    I have to set up a Windows 2003 Small Business Server to work as a Subversion repository and possibly as an e-mail server later. The machine is a virtual one, hosted with a hosting company, and freshly initialized. I used the Security Configuration Wizard to deactivate all server roles. After I install Subversion, I will open the necessary ports for the service; in addition, obviously, RDP will stay open so I can remote-control the machine. Automatic updates are activated, and I will set up e-mail notification every time somebody logs on to the server.

    I'm a programmer and not a professional systems administrator, so I would like to know whether you would regard this as a sane and secure setup for a (publicly available) box to host sensitive code and/or e-mail on. Is there anything else I should do to make the machine secure? Is there anything I can do on a long-term basis to keep the machine secure, apart from monitoring the event log (as far as I can make sense of it) and seeing that any hotfixes are installed promptly?

  • Xinerama creates a panning viewport

    - by iblue
    EDIT: I've created a bug report: https://bugs.freedesktop.org/show_bug.cgi?id=48458

    My setup: I have 4 monitors, 1920x1080, which are in portrait mode (rotated left). They are connected to two Radeon graphics cards. As usual, a picture says more than a thousand words.

    The problem: Everything works fine when Xinerama is disabled. But when I enable Xinerama, things get weird. When I move the mouse off the screen and back, the screen contents begin to move with the mouse, but only on that monitor. It seems like the virtual display size does not match the real screen size, which activates a panning viewport. Any idea how to stop this?

    The video: I created a video to demonstrate the issue: http://www.youtube.com/watch?v=zq_XHji1P24

    This is my xorg.conf:

        Section "ServerLayout"
            ##################[ Evilness begins here ]#############
            Option "Xinerama" "on"   # <--- Makes it go b0rked!
            ##################[ End of all evil ]#############
            Identifier "BOFH Console of Doom"
            Screen 0 "Screen-0" 0 0
            Screen 1 "Screen-1" RightOf "Screen-0"
            Screen 2 "Screen-2" RightOf "Screen-1"
            Screen 3 "Screen-3" RightOf "Screen-2"
        EndSection

        Section "ServerFlags"
            Option "RandR" "false"
        EndSection

        Section "Module"
            Load "dbe"
            Load "dri"
            Load "extmod"
            Load "dri2"
            Load "record"
            Load "glx"
        EndSection

        Section "Monitor"
            Identifier "Monitor-0"
            Option "Rotate" "left"
        EndSection

        Section "Monitor"
            Identifier "Monitor-1"
            Option "Rotate" "left"
        EndSection

        Section "Monitor"
            Identifier "Monitor-2"
            Option "Rotate" "left"
        EndSection

        Section "Monitor"
            Identifier "Monitor-3"
            Option "Rotate" "left"
        EndSection

        Section "Device"
            Identifier "Radeon-0-0"
            Driver "radeon"
            BusID "PCI:9:0:0"
            Option "ZaphodHeads" "DVI-0"
            Screen 0
        EndSection

        Section "Device"
            Identifier "Radeon-0-1"
            Driver "radeon"
            BusID "PCI:9:0:0"
            Option "ZaphodHeads" "DVI-1"
            Screen 1
        EndSection

        Section "Device"
            Identifier "Radeon-1-0"
            Driver "radeon"
            BusID "PCI:4:0:0"
            Option "ZaphodHeads" "DVI-2"
            Screen 0
        EndSection

        Section "Device"
            Identifier "Radeon-1-1"
            Driver "radeon"
            BusID "PCI:4:0:0"
            Option "ZaphodHeads" "DVI-3"
            Screen 1
        EndSection

        Section "Screen"
            Identifier "Screen-0"
            Device "Radeon-0-0"
            Monitor "Monitor-0"
            DefaultDepth 24
            SubSection "Display"
                Viewport 0 0
                Depth 24
            EndSubSection
        EndSection

        Section "Screen"
            Identifier "Screen-1"
            Device "Radeon-0-1"
            Monitor "Monitor-1"
            DefaultDepth 24
            SubSection "Display"
                Viewport 0 0
                Depth 24
            EndSubSection
        EndSection

        Section "Screen"
            Identifier "Screen-2"
            Device "Radeon-1-0"
            Monitor "Monitor-2"
            DefaultDepth 24
            SubSection "Display"
                Viewport 0 0
                Depth 24
            EndSubSection
        EndSection

        Section "Screen"
            Identifier "Screen-3"
            Device "Radeon-1-1"
            Monitor "Monitor-3"
            DefaultDepth 24
            SubSection "Display"
                Viewport 0 0
                Depth 24
            EndSubSection
        EndSection
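
    Since the symptom looks like the per-screen virtual size ending up larger than the rotated panel, one workaround worth trying (an untested guess based on the config above, not a confirmed fix) is to pin the virtual resolution of each rotated screen explicitly in its Display subsection:

        SubSection "Display"
            Viewport 0 0
            Depth 24
            Virtual 1080 1920   # rotated 1920x1080 panel; prevents a larger panning area
        EndSubSection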

  • Fix Video timelines

    - by Josh
    So, I have been going through and ripping all of my DVDs, and it seems that the way to get the highest quality out of these is to have DVD Shrink decrypt, rip, and decompress the DVDs. After that I usually end up with a high-quality (large) set of .vob files in a classic DVD structure. Then I use a Python script that I wrote to automate the process of finding the title sequence and then combining all of the title sequence's .vob files together into one file (similar to the "copy /b" command in Windows), and then changing the extension to .mpg (a more widely supported format than .vob). This allows me to get a high-quality rip in about 40 minutes.

    The problem comes in playing the files. I need all of the ripped DVDs to play on my media computer using Windows Media Center, but Windows Media Center (and VLC, for that matter) thinks the video files are anywhere from 5 minutes to 0 minutes long. That is not a problem in itself (the video will still play all the way through), but if you pause it, when it is unpaused the video starts all the way over (fast-forward and rewind don't work either). I suspect that something is wrong with the way the timeline is encoded in the video file; various forums on the internet recommended using VirtualDub to fix the errors. But when I try to open the file, VirtualDub says that the file is not in MPEG-1 encoding and may be MPEG-2. Is there any way to fix this?

    PS: I am aware that there was a similar question, but it hasn't had any activity for 2 months and is dealing more with .wmv files.
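
    For reference, a minimal sketch of the concatenate-then-remux idea (assuming ffmpeg is installed; file names are illustrative): remuxing with regenerated timestamps, rather than just renaming the container, usually repairs the duration and seek information without re-encoding.

        # join the title's VOBs; the Unix equivalent of Windows "copy /b"
        cat VTS_01_1.VOB VTS_01_2.VOB VTS_01_3.VOB > title.vob
        # copy the streams into a fresh container, regenerating timestamps
        ffmpeg -fflags +genpts -i title.vob -c copy title.mpg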

  • A VPS mail server

    - by microspino
    Hello, I'm trying to replace Citadel on my virtual private server with something simpler. I dislike its documentation and the webmail client, and I don't need any groupware features. I need only an MTA with a nice-looking web interface and spam and virus checking.

    I recently found the Lamson project from Zed Shaw. Is that production-ready? Have you had any real, positive experience with it? On the latest-news page I see that the last release dates from December 2009.

    Sorry for my lack of knowledge; I'm really new to mail servers, but I have to find a solution to manage sending and receiving mail on my VPS. I would also accept building my VPS email server on a standard Linux stack like Exim or Postfix (or whatever), but I have really small needs, they will not grow for at least a year, and I will be the only user. I'm searching for something that I can build and manage easily, as I'm a novice Linux sysadmin. Having good documentation, or at least a robust step-by-step guide, would be a plus.

  • Controlling clone access to multiple mercurial repos served via hgwebdir.cgi

    - by chrislawlor
    I'm trying to host multiple hg repositories for my clients. I need to control access to each repository individually: not just push access, but clone as well. I've got an .htaccess set which requires authentication globally:

        AuthUserFile /path/to/hgweb.passwd
        AuthGroupFile /dev/null
        AuthName "Chris Lawlor Client Mercurial Repositories"
        AuthType Basic

        <Limit GET POST PUT>
        Require valid-user
        </Limit>

        <FilesMatch "\.(htaccess|passwd|config|bak)$">
        Order Allow,Deny
        Deny from all
        </FilesMatch>

    Then in each repository, I've got a .hg/hgrc file requiring a valid user:

        [web]
        allow_push = <comma separated user list>

    This almost does what I need. The problem is that I need to add ALL my clients to hgweb.passwd, which gives them clone access to ALL of the repositories. The only solution I can think of is to have another .htaccess and .passwd file in EACH repository. I don't really want to do that, though; it seems a little convoluted. I can already specify a list of authorized users for each repository in that repo's hgrc file with the allow_push setting. If only there were an allow_clone setting as well...

    All the documentation I've found for hgwebdir.cgi is incomplete. I've read:

    http://mercurial.selenic.com/wiki/HgWebDirStepByStep
    http://hgbook.red-bean.com/read/collaborating-with-other-people.html#sec:collab:cgi
    http://hgbook.red-bean.com/read/collaborating-with-other-people.html

    and others. I've yet to find a comprehensive list of hgrc settings. I guess this is as much an Apache question as a Mercurial question. Unless I can find a better approach, I'll be going with a separate .htaccess and .passwd file for each repo. This is a virtual host on WebFaction, if it matters; it's set up roughly like this: http://docs.webfaction.com/software/mercurial.html
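
    For what it's worth, a sketch of the read-side counterpart (hedged: allow_read exists in newer Mercurial releases; check hg help config on the installed version before relying on it). In each repository's .hg/hgrc, with illustrative user names:

        [web]
        # only these authenticated users may clone/pull
        allow_read = alice, bob
        # and only this one may push
        allow_push = alice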

  • Want to use something like Citrix XenDesktop; is there a free alternative?

    - by Chris
    I'm looking to go into IT, general office server management, and it looks like XenDesktop would be an awesome tool to use. If I understand it correctly, you store a central image of the OS you want to deploy (in an ISO file) on the main server, then use XenDesktop to pull that image down to the client, which boots the OS inside a virtual machine. Does it download the image of the OS and store it locally (like cloning the VM onto the client)?

    I'd love to find a free (possibly open source?) alternative to this. I keep hearing about KVM in Linux and PXE-booting a minimalistic OS to use remote KVMs... would that be what I'm looking for?

    Ideally, I'd like a system:
    - that allows me to manage one central image for multiple clients (virtualized hardware)
    - that can easily boot a thin-client OS that connects to something like XenDesktop

    Would those things be possible with some kind of free alternative? Some guidance would be greatly appreciated.

  • Affordable combined Ruby/Rails/Redmine + Subversion hosting?

    - by Pekka
    I'm a self-employed web developer, and after nine years of hard work I'm looking to become a bit more "vagrant" starting next year: do some much-needed traveling and work off and on, making use of one of the greatest advantages of a programming job, the ability to work from virtually anywhere. For that, I am looking for a reliable hosting company I can entrust my code to, in the form of a number of Subversion repositories, plus an installation of the Redmine project management tool.

    As my financial situation may vary while traveling, I am looking for something I can pay up front for a year or two, and that is obviously not too pricey. I don't care where the company is located, as long as it's trustworthy and solid, meaning it's not likely to go out of business next month. Does anybody have good recommendations? Preferably from your own, personal, good experience. I have looked at CVSDude / Codesion, and while they are certainly great, they don't offer Redmine, of course, and seem to be aimed mainly at bigger organizations.

    What I would need:
    - 2-5 GB of space minimum, freely distributable between SVN and Redmine attachments
    - An unlimited number of Subversion projects
    - Access control (team members / checkout-only accounts / etc.); I don't mind configuring the SVN settings on a file basis myself
    - The possibility to map a custom domain (hosted elsewhere) to the package
    - Frequent backups, and access to those backups through FTP or other means

    I have been running my own virtual server for this until now, but I don't want the hassle, especially on the security side, while I may not always have the internet connection to fix problems that come up.

  • Is there a free PDF printer / distiller that creates signable documents?

    - by Coderer
    I've used various methods (mentioned elsewhere on this site) to create PDFs, using a printer driver or converting from PostScript, etc. The common problem is that if I open any of the output files in the newer versions of Adobe Reader, there's an option to "Place Signature" but it's greyed out, or it gives an error message that the feature has been disabled for this document. As far as I can tell, there's an option set somewhere in the document metadata that tells Reader whether to allow the user to sign the document. None of the free/open-source tools that have been linked to in other SU posts list this as an option (though, to be fair, I haven't actually downloaded and tried all of them).

    Is there a tool that does this? Can I just poke a bit with a hex editor somewhere to turn on this functionality? I can sometimes get access to Acrobat Professional to turn on this option, but doing it for every desired case would be more work than I care to do.

    The current workaround for single-page documents is:
    1. Print the document to PDF (possibly via PostScript).
    2. Open a single-page blank PDF with the "signable" bit turned on in Reader.
    3. Create a custom "stamp" using the Reader markup tools, by importing the printed-to-PDF document.
    4. "Stamp" an image of the printed document onto the blank page, hoping to get it centered about right.
    5. Place a signature over the document-but-not-really that you just stamped.

    This obviously does not scale well at all. It would be much better if I could:
    1. Print the document to PDF.
    2. Drag the document onto a simple shortcut / tool / whatever.
    3. Open the document in Reader.
    4. Place a signature in the document.

    ETA: Sorry, maybe I should have been clearer: I'm talking about the certificate-based digital signing available in Adobe Reader, not adding a virtual ink signature. Also, any solution really would have to be available offline.

  • Connection refused in ssh tunnel to apache forward proxy setup

    - by arkascha
    I am trying to set up a private forward proxy on a small server. I mean to use it during a conference to tunnel my internet access through an ssh tunnel to the proxy server. So I created a virtual host inside Apache 2.2 with the proxy, proxy_http and proxy_connect modules loaded. I use this configuration:

        <VirtualHost localhost:8080>
            ServerAdmin xxxxxxxxxxxxxxxxxxxx
            ServerName yyyyyyyyyyyyyyyyyyyy

            ErrorLog /var/log/apache2/proxy-error_log
            CustomLog /var/log/apache2/proxy-access_log combined

            <IfModule mod_proxy.c>
                ProxyRequests On
                <Proxy *>
                    # deny access to all IP addresses except localhost
                    Order deny,allow
                    Deny from all
                    Allow from 127.0.0.1
                </Proxy>
                # The following is my preference. Your mileage may vary.
                ProxyVia Block
                ## allow SSL proxy
                AllowCONNECT 443
            </IfModule>
        </VirtualHost>

    After restarting Apache, I create a tunnel from client to server:

        ssh -L8080:localhost:8080 <server address>

    and try to access the internet through that tunnel:

        links -http-proxy localhost:8080 http://www.linux.org

    I would expect to see the requested page. Instead I get a "connection refused" error. In the shell holding open the ssh tunnel I get this:

        channel 3: open failed: connect failed: Connection refused

    Anyone got an idea why this connection is refused?
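
    One detail worth double-checking (an assumption, since the main server config isn't shown): a <VirtualHost localhost:8080> block only receives connections if Apache itself is listening on port 8080, which requires a matching Listen directive in the global configuration, e.g.:

        # httpd.conf / ports.conf; bind to loopback only, since
        # the proxy should be reachable solely through the ssh tunnel
        Listen 127.0.0.1:8080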

  • What's the best way to update Ubuntu 9.04?

    - by Fu86
    I have an Ubuntu 9.04 server which has no package support anymore. If I want to update my package lists, I get the following errors:

        Err http://de.archive.ubuntu.com jaunty-security/multiverse Packages
          404 Not Found [IP: 141.30.13.10 80]
        W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/jaunty/main/binary-amd64/Packages
          404 Not Found [IP: 141.30.13.10 80]
        ....

    I read at the official Ubuntu support page that there is an update-manager-core package to upgrade to a new release. Unfortunately, I don't have this package installed, and I am unable to install it because of the lack of package sources.

    EDIT: Installing the update-manager-core package from another release doesn't work, because it depends on a higher version of python-apt. (Tried with 10.04.)

        $ dpkg -i update-manager-core_0.134.7_amd64.deb
        Selecting previously deselected package update-manager-core.
        (Reading database ... 28743 files and directories currently installed.)
        Unpacking update-manager-core (from update-manager-core_0.134.7_amd64.deb) ...
        dpkg: dependency problems prevent configuration of update-manager-core:
         update-manager-core depends on python-apt (>= 0.7.13.4ubuntu3); however:
          Version of python-apt on system is 0.7.9~exp2ubuntu10.
         update-manager-core depends on python-gnupginterface; however:
          Package python-gnupginterface is not installed.
        dpkg: error processing update-manager-core (--install):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         update-manager-core

    So, what's the best way to upgrade to a current release without reinstalling the complete (virtual) server?
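
    For reference: when an Ubuntu release reaches end of life, its packages move off the regular mirrors to old-releases.ubuntu.com. A sketch of the usual recovery path (adjust the mirror names to match your sources.list):

        sudo sed -i -e 's/de\.archive\.ubuntu\.com/old-releases.ubuntu.com/g' \
                    -e 's/security\.ubuntu\.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
        sudo apt-get update
        sudo apt-get install update-manager-core
        sudo do-release-upgrade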

  • JVM disappeared on Mac OS X Snow Leopard 10.6.8

    - by weisjohn
    I'm working in Eclipse one night (also using Android's DDMS from the command line). The next morning, I open the lid... attempt to run Eclipse, and get an error.

        me$ sudo /Applications/eclipse/eclipse
        JavaVM: requested Java version ((null)) not available. Using Java at "" instead.
        JavaVM: Failed to load JVM: /bundle/Libraries/libserver.dylib

    So I then attempt to find out where my JDKs are pointed:

        me$ ls -la /System/Library/Frameworks/JavaVM.framework/Versions/
        total 64
        drwxr-xr-x 12 root wheel 408 Nov 16 10:44 .
        drwxr-xr-x 12 root wheel 408 Sep  7 09:39 ..
        lrwxr-xr-x  1 root wheel   5 Sep  7 17:07 1.3 -> 1.3.1
        drwxr-xr-x  3 root wheel 102 Dec  2  2009 1.3.1
        lrwxr-xr-x  1 root wheel  10 Sep  7 17:07 1.4 -> CurrentJDK
        lrwxr-xr-x  1 root wheel  10 Sep  7 17:07 1.4.2 -> CurrentJDK
        lrwxr-xr-x  1 root wheel  10 Sep  7 17:07 1.5 -> CurrentJDK
        lrwxr-xr-x  1 root wheel  10 Sep  7 17:07 1.5.0 -> CurrentJDK
        lrwxr-xr-x  1 root wheel  10 Sep  7 17:07 1.6 -> CurrentJDK
        drwxr-xr-x  9 root wheel 306 Nov 16 10:44 A
        lrwxr-xr-x  1 root wheel   1 Sep  7 17:07 Current -> A
        lrwxr-xr-x  1 root wheel  59 Sep  7 17:07 CurrentJDK -> /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents

    Everything looks normal so far...

        me$ ls -la /System/Library/Java/JavaVirtualMachines/
        total 0
        drwxr-xr-x 2 root wheel  68 Nov 16 10:44 .
        drwxr-xr-x 5 root wheel 170 Nov 16 10:44 ..

    Apparently, my virtual machines have been deleted or moved? I'll probably be able to just reinstall Java, but does anyone have any insight into why this may have happened, or how to prevent it in the future?
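
    On Snow Leopard, Apple delivers the JVM through Software Update, so a reinstall from the terminal might look like the following sketch (the exact update label is an assumption and varies by release; check the --list output first):

        softwareupdate --list                                     # look for a "Java for Mac OS X 10.6" item
        sudo softwareupdate --install "Java For Mac OS X 10.6 Update 6"   # use the label exactly as reported by --list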

  • Time Drift on VM servers, need a reliable solution

    - by zeroasterisk
    We have some Windows Server 2008 VMware instances on multiple physical servers (hosted), and an application which requires the time to be synced across the server instances. Obviously, VMware has problems with this, and we have never really gotten it working any better; we have set up the servers to poll for an NTP update every minute, which mitigates the problem (in a fairly crude way). Except that every once in a while the update will fail (because there's already too much drift), and then Windows never does an NTP update afterwards, which eventually allows the servers to drift far enough apart that our application breaks, and we notice.

    We are thinking about changing hosts to Xen servers on approximately the same setup, and I anticipate similar problems.

    - Can anyone tell me if Xen has the same time-drift issues for guests that VMware does?
    - Can anyone tell me the best Windows Server settings for syncing with an external NTP server to keep things in sync: how frequently do you recommend syncing? (Assuming every minute.)
    - Do you recommend running our own NTP server, even if it has to be on a virtual instance? (Assuming not.)
    - Is there any way to tell Windows to sync with the NTP server no matter what the time difference is? (See the sketch below.)
    - Any other suggestions for keeping Windows servers' time in sync?

    I have become familiar with http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1318 and it's helped, but it's not been totally effective (see above). Thanks much!
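
    For the "sync no matter what the offset is" question, the relevant knobs on Windows are w32tm and the W32Time phase-correction limits; a sketch (the NTP peer is illustrative):

        w32tm /config /manualpeerlist:"pool.ntp.org,0x8" /syncfromflags:manual /update
        rem 0xFFFFFFFF removes the limit on how large a correction will be accepted
        reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config /v MaxPosPhaseCorrection /t REG_DWORD /d 0xFFFFFFFF /f
        reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config /v MaxNegPhaseCorrection /t REG_DWORD /d 0xFFFFFFFF /f
        net stop w32time && net start w32time
        w32tm /resync /rediscover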

  • postfix smtp_fallback_relay for deferred messages to a single domain

    - by EdwardTeach
    I use Postfix to send messages to a mail server outside my organization which frequently rejects/defers my mail. My Postfix server sees that these messages are deferred and tries again, eventually getting through. Final delivery can take up to an hour, which makes my users unhappy. In comparison, mail from my Postfix server to other hosts works normally.

    I have now found out about a second, unofficial MX for this domain that does not reject/defer mail. This second MX does not appear when doing a DNS MX query for the domain. Therefore, for the problem domain I would like to use this second MX as a fallback; that is, whenever mail is deferred by the primary MX, try again on the unofficial second MX.

    I see that there is already a Postfix parameter, smtp_fallback_relay. However, the documentation seems to indicate that I cannot restrict usage of the fallback to a single domain, and it also doesn't mention deferred-message handling. So is there a way to configure a single-domain, deferred-retry fallback host in Postfix?

    For reference, I am including my postconf output (the host names and IP addresses are fake):

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases, hash:/etc/postfix/legacy_mailman, ldap:/etc/postfix/ldap-aliases.cf
        append_dot_mydomain = no
        biff = no
        config_directory = /etc/postfix
        default_destination_concurrency_limit = 2
        inet_interfaces = all
        inet_protocols = all
        local_destination_concurrency_limit = 2
        local_recipient_maps = $alias_maps
        mailbox_size_limit = 0
        mydestination = myhost.my.network, localhost.my.network, localhost, my.network
        myhostname = myhost.my.network
        mynetworks = 127.0.0.0/8, [::ffff:127.0.0.0]/104, [::1]/128, 10.10.10.0/24
        myorigin = my.network
        readme_directory = no
        recipient_delimiter = +
        relay_domains = $mydestination
        relayhost =
        smtp_fallback_relay = the.problem.host
        smtp_header_checks =
        smtpd_banner = $myhostname ESMTP $mail_name
        virtual_alias_maps = hash:/etc/postfix/virtual
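
    One way to scope the fallback to a single domain (a sketch, untested; host and domain names are placeholders): define a dedicated smtp transport in master.cf that carries its own fallback relay, and route only the problem domain through it via transport_maps.

        # /etc/postfix/master.cf -- clone of the smtp transport with a per-transport override
        problemsmtp  unix  -  -  n  -  -  smtp
            -o smtp_fallback_relay=[second-mx.example.com]

        # /etc/postfix/transport
        problemdomain.example  problemsmtp:

        # /etc/postfix/main.cf
        transport_maps = hash:/etc/postfix/transport

    followed by postmap /etc/postfix/transport and postfix reload.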

  • nginx projects in subfolders

    - by Timothy
    I'm getting frustrated with my nginx configuration, so I'm asking for help in writing my config file to serve multiple projects from subdirectories under the same root. This isn't virtual hosting, as they all use the same host value. Perhaps an example will clarify my attempt:

    - a request for 192.168.1.1/ should serve index.php from /var/www/public/
    - a request for 192.168.1.1/wiki/ should serve index.php from /var/www/wiki/public/
    - a request for 192.168.1.1/blog/ should serve index.php from /var/www/blog/public/

    These projects use PHP and FastCGI. My current configuration is very minimal:

        server {
            listen 80 default;
            server_name localhost;
            access_log /var/log/nginx/localhost.access.log;
            root /var/www;
            index index.php index.html;

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    I've tried various things with alias and rewrite but was not able to get things set up correctly for FastCGI. It seems there should be a more elegant way than writing location blocks and duplicating root, index, SCRIPT_FILENAME, etc. Any pointers to get me headed in the right direction are appreciated.
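
    For what it's worth, a sketch of one common pattern (untested here; the /wiki/ paths come from the question): give each project a prefix location with an alias, nest the FastCGI handling inside it, and build SCRIPT_FILENAME from $request_filename so the aliased path is used.

        location ^~ /wiki/ {
            alias /var/www/wiki/public/;
            index index.php;

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                # $request_filename resolves against the alias above
                fastcgi_param SCRIPT_FILENAME $request_filename;
                include fastcgi_params;
            }
        }

    The ^~ modifier keeps the top-level \.php$ regex location from capturing /wiki/ requests first; the block would be repeated per project.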

  • One Apache VirtualHost entry overrides another?

    - by johnlai2004
    I can't tell why one Apache virtual host entry keeps overriding another. The following file (filename: cbl):

        <VirtualHost 74.207.237.23:80>
            ServerAdmin [email protected]
            ServerName completebeautylist.com
            ServerAlias www.completebeautylist.com
            DocumentRoot /srv/www/cbl/production/public_html/
            ErrorLog /srv/www/cbl/production/logs/error.log
            CustomLog /srv/www/cbl/production/logs/access.log combined
        </VirtualHost>

    keeps overriding this file (filename: theccco.org):

        <VirtualHost 74.207.237.23:80>
            SuexecUserGroup "#1010" "#1010"
            ServerName theccco.org
            ServerAlias www.theccco.org
            ServerAlias webmail.theccco.org
            ServerAlias admin.theccco.org
            DocumentRoot /home/theccco/public_html
            ErrorLog /var/log/virtualmin/theccco.org_error_log
            CustomLog /var/log/virtualmin/theccco.org_access_log combined
            ScriptAlias /cgi-bin/ /home/theccco/cgi-bin/
            DirectoryIndex index.html index.htm index.php index.php4 index.php5

            <Directory /home/theccco/public_html>
                Options -Indexes +IncludesNOEXEC +FollowSymLinks
                allow from all
                AllowOverride All
            </Directory>

            <Directory /home/theccco/cgi-bin>
                allow from all
            </Directory>

            RewriteEngine on
            RewriteCond %{HTTP_HOST} =webmail.theccco.org
            RewriteRule ^(.*) https://theccco.org:20000/ [R]
            RewriteCond %{HTTP_HOST} =admin.theccco.org
            RewriteRule ^(.*) https://theccco.org:10000/ [R]

            Alias /dav /home/theccco/public_html

            <Location /dav>
                DAV On
                AuthType Basic
                AuthName theccco.org
                AuthUserFile /home/theccco/etc/dav.digest.passwd
                Require valid-user
                ForceType text/plain
                Satisfy All
                RewriteEngine off
            </Location>
        </VirtualHost>

    I tried a2ensite, a2dissite, and reloading; I get this message:

        * Reloading web server config apache2
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
        [Thu Apr 15 10:47:36 2010] [warn] NameVirtualHost 74.207.237.23:443 has no VirtualHosts

    Aside from that, I don't know what else could be wrong. Can anyone tell me what to do?
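
    One hint from that warning (an assumption, since the global config isn't shown): under Apache 2.2, name-based selection on an address only happens if the address:port pair is declared with NameVirtualHost; without it, the first loaded <VirtualHost> for 74.207.237.23:80 answers every request, which matches the symptom. The relevant global lines would be:

        # e.g. in ports.conf or httpd.conf, before the vhosts are loaded
        NameVirtualHost 74.207.237.23:80
        Listen 80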

  • How do I secure SQL Server 2008 R2?

    - by Mark Tait
    I have both a dedicated server and a VPS (from Fasthosts); the web sites/applications I run on these access SQL Server stored on the same web server. Until now, I have logged on to SQL Server on both the dedicated and the VPS server from SQL Server Management Studio, until I noticed in my server application logs multiple attempts to log on to SQL Server using the 'sa' username, but with a failed password. So someone (or some bot) is trying hard (repeatedly, every couple of hours, for approx 20 attempts during each instance) to log on... so obviously I have to lock down remote access to SQL Server.

    What I have done is gone into Configuration Manager, and in SQL Server Network Configuration - Protocols for Sql2008, and also in SQL Native Client 10.0 Configuration - Client Protocols, I have disabled Named Pipes and TCP/IP (and VIA, by default). I have left Shared Memory enabled. I also disabled the SQL Server Browser in SQL Server Services. Now the only way I can manage the databases on these servers is by logging on to them via Remote Desktop.

    Can anyone confirm whether this is the correct way of stopping anyone maliciously logging on to SQL Server? (I'm not a DBA or security expert, and there are hundreds of articles advising all different ways, but I was hoping for the experts here to confirm, or otherwise, whether what I've done is correct.)

    Thank you, Mark
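
    Since the attempts target the 'sa' login specifically, a common complementary step (a sketch, run locally via Management Studio) is to disable or rename that login so password guessing has nothing to aim at:

        -- disable the built-in sa login entirely
        ALTER LOGIN sa DISABLE;
        -- or keep it, but under a name attackers won't guess (name illustrative)
        ALTER LOGIN sa WITH NAME = [ops_admin_7];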

  • Are there any open source reseller packages?

    - by Tom Wright
    My department has just been given the right/responsibility to manage our own VPS, the idea being that the bureaucracy will be less for the many small web projects we run. Since each project will be managed by a different team, I was planning to approach this with a shared-hosting model.

    Are there any free pieces of software that would help automate the provisioning of resources each time a team requests a new project? Most of the projects have identical requirements, basically LAMP, so it is these resources that I would want provisioned (and de-provisioned, if that is a word) automatically. Ideally, there would also be a way to hook it into our LDAP authentication backend, though I could probably make this sort of modification myself if necessary. Since we won't be charging our "clients", however, we won't need the ability to generate invoices, handle payments, etc.

    EDIT: Sample workflow
    1. Login is authenticated against LDAP
    2. Username is checked against an admin group (not in the central LDAP)
    3. Click 'new project' and enter a project name
    4. A user is created on the VPS with the project name as username
    5. An Apache virtual host is created and a subdomain (using the project name) allocated
    6. FTP & MySQL users are created

  • VMware Raw Device Mapping not working

    - by George H. Lenzer
    While I'm waiting for VMware support to get back to me, I thought I'd ask here. I have a 400 GB LUN presented from a fiber channel SAN to my VMware host. It's legacy from another virtualization platform, and I need to keep it as-is to avoid a long period of downtime.

    I formatted my VMFS3 datastore with 4 MB blocks to allow disks up to 1 TB. Then I tried adding my 400 GB disk as a raw device in physical compatibility mode. I get the error:

        "File is larger than the maximum size supported by datastore 'Base Test'. [Base Test]VMTEST01/VMTEST01_2.vmdk"

    Originally I had the VMFS datastore formatted with 1 MB blocks, which was the cause of this problem, since the largest disk allowed would be 256 GB. But I deleted the datastore and then reformatted it with 4 MB blocks. I've also tried using virtual compatibility mode for the raw device, but it still fails.

    Does anyone have any suggestions? I've been waiting for a little over a week for VMware, but that's fine because I'm not yet a paying customer. I'm still in the eval phase.
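
    If the GUI keeps refusing, one diagnostic avenue (a sketch, assuming console/SSH access to the host; the device name is a placeholder) is to create the RDM mapping file by hand and see what error vmkfstools reports:

        # identify the 400 GB LUN and note its naa identifier
        ls -l /vmfs/devices/disks/
        # create a physical-compatibility (-z) RDM pointer on the datastore
        vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxxxxxx \
            "/vmfs/volumes/Base Test/VMTEST01/VMTEST01_rdm.vmdk"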

  • Windows 2012 RDS Temporary profile for Administrator

    - by Fabio
    I've configured a Windows 2012 RDS farm with two virtual servers (VMware, each one on a different ESX host). Both servers have the Licensing, Web Access, Gateway, Connection Broker and Session Host roles. High availability is set up and works fine. RemoteApps are working, and even Windows XP clients have access to the web interface. The user profile path is \\vmfiles1\UserProfileDisks\App\ and almost everyone has full access rights to it.

    The problem I have is that I would like to be able to access both servers at the same time with the Administrator account (console), but each time I try, the second server I log on to gives me a temporary profile. I tried enabling/disabling multiple sessions per user and forcing admin logoff with the GPO, but nothing changed.

    Another thing is that the server pool is not saved, so each time I restart the RDS server or log off from it, I have to re-add a server in Server Manager.

    Do you have any idea? Sorry if my English is not perfect.

  • No apparent reason for high load average

    - by Oz.
    We have several web servers running on Amazon (EC2) c1.xlarge instances, on the Amazon AMI. The servers are duplicates of each other, running exactly the same hardware and software. Each server's spec is:

    - 7 GB of memory
    - 20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each)
    - 1690 GB of instance storage
    - 64-bit platform
    - I/O Performance: High
    - API name: c1.xlarge

    A couple of weeks ago we ran a yum upgrade on one of the servers. Starting with this upgrade, the upgraded server began showing a high load average. Needless to say, we did not update the other servers, and we cannot do so until we understand the reason for this behavior. The strange thing is that when we compare the servers using top or iostat, we cannot find the reason for the high load. Note that we have moved traffic from the "problematic" server to the others, which has made the "problematic" server less crowded in terms of requests, and still its load is higher.

    Do you have any idea what it could be, or where else we can check? Many thanks for the help! Oz.

        #
        # proper server
        # w command
        #
        00:42:26 up 2 days, 19:54, 2 users, load average: 0.41, 0.48, 0.49
        USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
        pts/1 82.80.137.29 00:28 14:05 0.01s 0.01s -bash
        pts/2 82.80.137.29 00:38 0.00s 0.02s 0.00s w

        #
        # proper server
        # iostat command
        #
        Linux 3.2.12-3.2.4.amzn1.x86_64 _x86_64_ (8 CPU)
        avg-cpu: %user %nice %system %iowait %steal %idle
                  9.03  0.02    4.26    0.17   0.13 86.39
        Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
        xvdap1 1.63 1.50 55.00 367236 13444008
        xvdfp1 4.41 45.93 70.48 11227226 17228552
        xvdfp2 2.61 2.01 59.81 491890 14620104
        xvdfp3 8.16 14.47 94.23 3536522 23034376
        xvdfp4 0.98 0.79 45.86 192818 11209784

        #
        # problematic server
        # w command
        #
        00:43:26 up 2 days, 21:52, 2 users, load average: 1.35, 1.10, 1.17
        USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
        pts/0 82.80.137.29 00:28 15:04 0.02s 0.02s -bash
        pts/1 82.80.137.29 00:38 0.00s 0.05s 0.00s w

        #
        # problematic server
        # iostat command
        #
        Linux 3.2.20-1.29.6.amzn1.x86_64 _x86_64_ (8 CPU)
        avg-cpu: %user %nice %system %iowait %steal %idle
                  7.97  0.04    3.43    0.19   0.07 88.30
        Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
        xvdap1 2.10 1.49 76.54 374660 19253592
        xvdfp1 5.64 40.98 85.92 10308946 21612112
        xvdfp2 3.97 4.32 93.18 1087090 23439488
        xvdfp3 10.87 30.30 115.14 7622474 28961720
        xvdfp4 1.12 0.28 65.54 71034 16487112

  • Linux command-line ftp to an FTP server running on Windows

    - by Vass
    Hi, I am running FileZilla Server on Windows Vista. I have it set up to be accessible from outside IPs, and when I connect to that IP with the FileZilla client it works normally. On the same machine I have Ubuntu running in VirtualBox, and when using the FileZilla client in there it works fine too.

    Now I want to try the command prompt. So I run ftp xxx.xxx.xx.xx, enter the name and password, and I get the ftp command prompt, but the commands do not work properly: "ls" and "cd" fail. "cd" tells me that the current directory is "/" (root), but this does not make sense on a Windows operating system. The FileZilla client takes the user directly to the root of the file space granted to that user. How can the same be done from the command prompt, if there is a way? It is as if the command prompt takes me to a root which does not exist, or in which I don't have the permissions to move around. Is there any way to be taken to the correct directory directly, or to move there, especially when the slashes are the wrong way around, etc.?

    Best,
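
    Two hedged guesses, since the server config isn't shown: FileZilla Server always presents a virtual filesystem whose root "/" is the set of folders shared to that user, so seeing "/" after login is normal rather than an error. And a failing "ls" in the stock Linux command-line client is very often an active-mode data-connection problem, which is quick to test by toggling passive mode (folder name illustrative):

        ftp> passive
        Passive mode on.
        ftp> ls
        ftp> cd shared-folder-name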

  • Why is 32-bit mode required in IIS 7.5 for my app?

    - by Jonas Lincoln
    I have a .NET 4 web application running on a 64-bit Server 2008 machine. I can only get it to run when I set the application pool's "Enable 32-bit applications" setting to true. All DLLs are compiled for .NET 4 (verified with corflags.exe). How can I figure out why "Enable 32-bit applications" is required?

    The error message from the event log when starting as a 64-bit app pool:

        Event code: 3008
        Event message: A configuration error has occurred.
        Event time: 2011-03-16 08:55:46
        Event time (UTC): 2011-03-16 07:55:46
        Event ID: 3c209480ff1c4495bede2e26924be46a
        Event sequence: 1
        Event occurrence: 1
        Event detail code: 0

        Application information:
            Application domain: removed
            Trust level: Full
            Application Virtual Path: removed
            Application Path: removed
            Machine name: NMLABB-EXT01

        Process information:
            Process ID: 4324
            Process name: w3wp.exe
            Account name: removed

        Exception information:
            Exception type: ConfigurationErrorsException
            Exception message: Could not load file or assembly 'System.Data' or one of its dependencies. An attempt was made to load a program with an incorrect format.
               at System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective)
               at System.Web.Configuration.CompilationSection.LoadAllAssembliesFromAppDomainBinDirectory()
               at System.Web.Configuration.AssemblyInfo.get_AssemblyInternal()
               at System.Web.Compilation.BuildManager.GetReferencedAssemblies(CompilationSection compConfig)
               at System.Web.Compilation.BuildManager.CallPreStartInitMethods()
               at System.Web.Hosting.HostingEnvironment.Initialize(ApplicationManager appManager, IApplicationHost appHost, IConfigMapPathFactory configMapPathFactory, HostingEnvironmentParameters hostingParameters, PolicyLevel policyLevel, Exception appDomainCreationException)

            Could not load file or assembly 'System.Data' or one of its dependencies. An attempt was made to load a program with an incorrect format.
               at System.Reflection.RuntimeAssembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, RuntimeAssembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks)
               at System.Reflection.RuntimeAssembly.InternalLoadAssemblyName(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection, Boolean suppressSecurityChecks)
               at System.Reflection.RuntimeAssembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection)
               at System.Reflection.Assembly.Load(String assemblyString)
               at System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective)

        Request information:
            Request URL: "our url"
            Request path: "url"
            User host address: ip-address
            User:
            Is authenticated: False
            Authentication Type:
            Thread account name: "app-pool"

        Thread information:
            Thread ID: 6
            Thread account name: "app-pool"
            Is impersonating: False
            Stack trace:
               at System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective)
               at System.Web.Configuration.CompilationSection.LoadAllAssembliesFromAppDomainBinDirectory()
               at System.Web.Configuration.AssemblyInfo.get_AssemblyInternal()
               at System.Web.Compilation.BuildManager.GetReferencedAssemblies(CompilationSection compConfig)
               at System.Web.Compilation.BuildManager.CallPreStartInitMethods()
               at System.Web.Hosting.HostingEnvironment.Initialize(ApplicationManager appManager, IApplicationHost appHost, IConfigMapPathFactory configMapPathFactory, HostingEnvironmentParameters hostingParameters, PolicyLevel policyLevel, Exception appDomainCreationException)

        Custom event details:
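
    Since corflags.exe is already at hand, one quick check (a sketch; run from a Visual Studio command prompt in the application directory) is to scan every assembly in bin for the 32-bit flags, because a single x86-only DLL, often one wrapping native code, forces the whole worker process into 32-bit mode:

        for %f in (bin\*.dll) do @(echo %f & corflags %f | findstr "32BIT")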

  • nginx ignores auth_basic?

    - by Miko
    I have configured nginx to password-protect a directory using auth_basic. The password prompt comes up and the login works fine. However... if I refuse to type in my credentials and instead hit Escape multiple times in a row, the page will eventually load without CSS and images. In other words, continuously telling the login prompt to go away will at some point allow the page to load anyway. Is this an issue with nginx, or with my configuration?

    Here is my virtual host:

        server {
            server_name sub.domain.com;
            root /www/sub.domain.com/;

            location / {
                index index.php index.html;
                root /www/sub.domain.com;
                auth_basic "Restricted";
                auth_basic_user_file /www/auth/sub.domain.com;
                error_page 404 = /www/404.php;
            }

            location ~ \.php$ {
                include /usr/local/nginx/conf/fastcgi_params;
            }
        }

    My server runs CentOS + nginx + php-fpm + xcache + MySQL.
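
    A hedged observation from the config above (not a confirmed diagnosis): auth_basic is declared only inside location /, so any request that nginx routes through the location ~ \.php$ block is never challenged, which would explain the page itself eventually loading while the protected static assets do not. Since auth_basic is inherited downward, moving it to the server level protects every location:

        server {
            server_name sub.domain.com;
            root /www/sub.domain.com;

            # inherited by all locations below
            auth_basic "Restricted";
            auth_basic_user_file /www/auth/sub.domain.com;

            location / {
                index index.php index.html;
            }

            location ~ \.php$ {
                include /usr/local/nginx/conf/fastcgi_params;
            }
        }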
