Search Results

Search found 17314 results on 693 pages for 'vpn setup'.

Page 637/693 | < Previous Page | 633 634 635 636 637 638 639 640 641 642 643 644  | Next Page >

  • Change Logon DPI setting in Windows 8.1

    - by jmc302005
    I love how M$ keeps making decisions for me about how I want my desktop to look. Now they have added per-user dpi settings. The problem this has created is that there is no adjustable dpi setting for the Lock/Logon screen. Let me explain: you can change the dpi setting to be the same across all displays, and this does affect the icons and font on the lock/logon screen. However, it does not affect any app/program that can run on the lock/logon screen. Ex. I use a 44" flat screen TV for my monitor on my desktop. Big enough for me to sit in my recliner and use my comp. But I don't have a wireless keyboard. And it sucks having the wire from the keyboard running across the floor. Plus I really don't want to keep a keyboard next to me. So I use the on-screen keyboard for logging in and quick typing (search, web address, etc.). So the problem is that with the new dpi setup my on-screen keyboard takes up nearly half the screen. Does M$ think we are all blind? Oh no, I remember, they think desktops should look like tablets and phones. I tried looking through the registry to see if I could find a setting for it. In the key HKEY_USERS\.DEFAULT\Control Panel\Desktop there is a string value named "LogicalDPIOverride" with a value of -1. I have a feeling this is where I can fix the issue. I tried changing the value to 0 and to 1 with no change in the result. Instead I noticed that after logging out and back in the -1 value was back in the registry. So now M$ has also added a way for us to not be able to change a setting in the registry. They are making it harder and harder for us power users to be able to do anything with the settings in Windows. Soon we will all have the same exact Windows with absolutely no customization. OK, sorry for the quick rant. The real question here is: how can I change this default dpi crap? Can I use the LogPixels string that worked for dpi in Windows 7? Here are 2 screenshots, 1 of the Lock Screen and 1 of the Logon Screen: http://i.imgur.com/6RM5ufE.jpg http://i.imgur.com/cnY5bmm.jpg Any help will be appreciated.

    Read the article

  • Fedora 11 System - Failed Hard Drive Removed, and Boot gets GRUB Hard Disk Error

    - by Mindful
    Greetings, I have a machine with a 120GB ATA drive that has what I thought to be non-essential data on it. I also have a 320GB SATA hard drive with the OS/applications/files (good data I want to keep). My 120GB ATA drive is failing, I believe, as my computer kept slowing to a halt. However, when I remove the drive from the BIOS my computer will not start; it says "GRUB Hard Disk Error". I know that my Fedora system has an LVM setup. I am looking to just remove the 120GB drive from "the mix" and have only one hard drive. How do I recover? Thank you. I have access to a Linux Live CD right now and can make any changes. However, it won't boot into my OS - it fails. UPDATE: here's my grub.conf:
      # grub.conf generated by anaconda
      #
      # Note that you do not have to rerun grub after making changes to this file
      # NOTICE: You have a /boot partition. This means that
      # all kernel and initrd paths are relative to /boot/, eg.
      # root (hd1,0)
      # kernel /vmlinuz-version ro root=/dev/VolGroup00/LogVol00
      # initrd /initrd-version.img
      #boot=/dev/sda1
      default=0
      timeout=5
      splashimage=(hd1,0)/grub/splash.xpm.gz
      hiddenmenu
      title Fedora (2.6.30.10-105.2.23.fc11.i686.PAE)
        root (hd1,0)
        kernel /vmlinuz-2.6.30.10-105.2.23.fc11.i686.PAE ro root=/dev/VolGroup00/LogVol00 rhgb quiet
        initrd /initrd-2.6.30.10-105.2.23.fc11.i686.PAE.img
      title Fedora (2.6.30.9-102.fc11.i686.PAE)
        root (hd1,0)
        kernel /vmlinuz-2.6.30.9-102.fc11.i686.PAE ro root=/dev/VolGroup00/LogVol00 rhgb quiet
        initrd /initrd-2.6.30.9-102.fc11.i686.PAE.img
      title Fedora (2.6.27.24-170.2.68.fc10.i686.PAE)
        root (hd1,0)
        kernel /vmlinuz-2.6.27.24-170.2.68.fc10.i686.PAE ro root=/dev/VolGroup00/LogVol00 rhgb quiet
        initrd /initrd-2.6.27.24-170.2.68.fc10.i686.PAE.img
      title Fedora (2.6.27.24-170.2.68.fc10.i686)
        root (hd1,0)
        kernel /vmlinuz-2.6.27.24-170.2.68.fc10.i686 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
        initrd /initrd-2.6.27.24-170.2.68.fc10.i686.img
      title Fedora (2.6.27.21-170.2.56.fc10.i686)
        root (hd1,0)
        kernel /vmlinuz-2.6.27.21-170.2.56.fc10.i686 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
        initrd /initrd-2.6.27.21-170.2.56.fc10.i686.img
      title Fedora (2.6.27.19-170.2.35.fc10.i686)
        root (hd1,0)
        kernel /vmlinuz-2.6.27.19-170.2.35.fc10.i686 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
        initrd /initrd-2.6.27.19-170.2.35.fc10.i686.img
      title Upgrade to Fedora 10 (Cambridge)
        kernel /upgrade/vmlinuz preupgrade repo=hd::/var/cache/yum/preupgrade stage2=http://chi-10g-1-mirror.fastsoft.net/pub/linux/fedora/linux/releases/10/Fedora/i386/os/images/install.img ks=hd:UUID=f11769ba-29bc-46de-8c40-a949720a438e:/upgrade/ks.cfg
        initrd /upgrade/initrd.img
      title Win
        rootnoverify (hd0,0)
        chainloader +1
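    A minimal sketch of the usual repair from the Live CD, assuming the surviving 320GB SATA disk now shows up as /dev/sda, its first partition is the /boot partition this grub.conf lives on, and the Live CD ships the legacy grub shell; with the 120GB drive gone, the disk GRUB called (hd1) becomes (hd0), so both the menu entries and the MBR need updating:
      mkdir -p /mnt/boot
      mount /dev/sda1 /mnt/boot
      sed -i 's/(hd1,0)/(hd0,0)/g' /mnt/boot/grub/grub.conf       # the old second disk is now the first
      printf 'root (hd0,0)\nsetup (hd0)\nquit\n' | grub --batch   # reinstall stage1 to the remaining disk's MBR
    The "title Win" entry still points at (hd0,0), which was the removed ATA disk, so that entry can simply be deleted.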

    Read the article

  • nginx + Jetty - thousands of connections stuck in LAST_ACK

    - by virulence
    I have a FreeBSD machine with jails -- two in particular, one that runs nginx and another that runs a Java program that accepts requests via Jetty (embedded mode) Jetty receives upwards of 500 requests/sec constantly and there has been an issue lately where I will constantly have over 60,000 connections in the LAST_ACK state between nginx and jetty. Distribution of all connections (includes some other services, particularly php-fpm) root@host:/root # netstat -an > conns.txt root@host:/root # cat conns.txt | awk '{print $6}' | sort | uniq -c | sort -n 18 LISTEN 112 CLOSING 485 ESTABLISHED 650 FIN_WAIT_2 1425 FIN_WAIT_1 3301 TIME_WAIT 64215 LAST_ACK Distribution of nginx - jetty connections root@host:/root # cat conns.txt | grep '10.10.1.57' | awk '{print $6}' | sort | uniq -c | sort -n 1 3 CLOSE_WAIT 3 LISTEN 18 FIN_WAIT_2 125 ESTABLISHED 64193 LAST_ACK I'd prefer every request to fully close the connection. Clients requests are about 10 minutes apart from each other so connections must be closed. Some of the connections, tcp4 0 0 10.10.1.50.46809 10.10.1.57.9050 LAST_ACK tcp4 0 0 10.10.1.50.46805 10.10.1.57.9050 LAST_ACK tcp4 0 0 10.10.1.50.46797 10.10.1.57.9050 LAST_ACK tcp4 0 0 10.10.1.50.46794 10.10.1.57.9050 LAST_ACK tcp4 0 0 10.10.1.50.46790 10.10.1.57.9050 LAST_ACK tcp4 0 0 10.10.1.50.46789 10.10.1.57.9050 LAST_ACK tcp4 0 0 10.10.1.50.46771 10.10.1.57.9050 LAST_ACK etc.. On Jetty's end I've set maxIdleTime to 2000 -- before this all connections were in ESTABLISHED but they are now LAST_ACK On Jetty's end I've set Connection: close (i.e response.setHeader(HttpHeaders.CONNECTION, HttpHeaderValues.CLOSE);) Jetty never reports a lot of open connections -- always very few. PF/IPFW is not currently being used nginx - reset_timedout_connection is on I cannot figure out how to get nginx or jetty to forcibly close the connection, is this simply something that needs to be fixed in Jetty so that it fully closes the socket after the request finishes? Thanks a lot in advance EDIT: forgot my nginx config for the proxy setup- proxy_pass http://10.10.1.57:9050; proxy_set_header HTTP_X_GEOIP $http_x_geoip; proxy_set_header GEOIP_COUNTRY_CODE $geoip_country_code; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_set_header Connection ""; proxy_http_version 1.1; EDIT2: Forcing Jetty to close the connection via request.getConnection().getEndPoint().close() does nothing -- it's obvious the connection IS being closed (as it's in LAST_ACK) but why isn't it getting past this? Is Nginx keeping the connection open to the backend for some reason?
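    Two diagnostics that may help pin down which side is sitting on the final ACK; the interface is an assumption (whatever carries the jail-to-jail traffic), and neither command changes anything:
      # watch the FIN/RST exchange between nginx and Jetty
      tcpdump -ni lo0 host 10.10.1.57 and port 9050 and 'tcp[tcpflags] & (tcp-fin|tcp-rst) != 0'
      # is the LAST_ACK count actually draining over time, or only growing?
      while true; do netstat -an | grep -c LAST_ACK; sleep 10; done
    A socket stays in LAST_ACK only while its own FIN is unacknowledged, so the capture should show whether the nginx side ever sends that last ACK, or whether it is being dropped somewhere in between.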

    Read the article

  • PHP Sessions suddenly not working

    - by styrken
    Out of nowhere my PHP sessions do not work anymore. The server has been running fine for several months. I'm running Ubuntu 11.10 (GNU/Linux 3.0.0-14-server x86_64) with nginx/1.0.11 and PHP 5.3.19-1~dotdeb.0. Session info copied from phpinfo(): Session Support enabled. Registered save handlers: files user memcached. Registered serializer handlers: php php_binary wddx.
      Directive                        Local Value  Master Value
      session.auto_start               Off          Off
      session.bug_compat_42            Off          Off
      session.bug_compat_warn          Off          Off
      session.cache_expire             180          180
      session.cache_limiter            nocache      nocache
      session.cookie_domain            no value     no value
      session.cookie_httponly          Off          Off
      session.cookie_lifetime          0            0
      session.cookie_path              /            /
      session.cookie_secure            Off          Off
      session.entropy_file             no value     no value
      session.entropy_length           0            0
      session.gc_divisor               1000         1000
      session.gc_maxlifetime           1440         1440
      session.gc_probability           0            0
      session.hash_bits_per_character  5            5
      session.hash_function            0            0
      session.name                     PHPSESSID    PHPSESSID
      session.referer_check            no value     no value
      session.save_handler             files        files
      session.save_path                /tmp         /tmp
      session.serialize_handler        php          php
      session.use_cookies              On           On
      session.use_only_cookies         On           On
      session.use_trans_sid            0            0
    I have set up the following PHP script to test with:
      error_reporting(E_ALL);
      ini_set('display_errors', true);
      error_log($_SERVER['REMOTE_ADDR'] . ' visited test page');
      if(session_start())
          echo "Session started <br />";
      else
          echo "Session failed <br />";
      echo '<a href="?', time(), '">refresh</a>', "\n";
      echo '<pre>';
      echo 'session id: ', session_id(), "\n";
      $sessionfile = ini_get('session.save_path') . '/' . 'sess_'.session_id();
      echo 'session file: ', $sessionfile, ' ';
      if ( file_exists($sessionfile) ) {
          echo 'size: ', filesize($sessionfile), "\n";
          echo '# ', file_get_contents($sessionfile), ' #';
      } else {
          echo ' does not exist';
      }
      echo PHP_EOL;
      $_SESSION['number'] = (int) @$_SESSION['number'] + 1;
      var_dump($_SESSION);
      echo "</pre>\n";
      session_write_close();
      echo 'done.';
    It tells me that the session file exists, but my session id changes on each refresh. What is going wrong? There is no output to any error logs at all. :/ Please help!
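    A quick way to tell whether the cookie or the server side is losing the session; the URL is a placeholder for wherever the test script above is reachable, and the second run should report the same session id as the first:
      curl -sv -c /tmp/cj -b /tmp/cj http://example.com/sessiontest.php | grep 'session id'
      curl -sv -c /tmp/cj -b /tmp/cj http://example.com/sessiontest.php | grep 'session id'
    If the id is stable with curl's cookie jar but changes in the browser, the Set-Cookie header is being lost or rejected client-side; if it changes even here, PHP or the nginx/fastcgi layer is failing to resume the same session.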

    Read the article

  • Moving from VPS to Cloud

    - by GRIGORE-TURBODISEL
    ...and I have a few questions. I'm basically working on a MySQL+PHP based webapp. Since I don't have on-demand scaling with VPS, I'm planning to move from VPS to Cloud when I hit the 1000 subscribers barrier. I'm looking at Windows Azure but I'm ok with other suggestions. So here are my questions: Will it really cost me a kidney? Every subscriber needs to download around 4-5MB of static resources each day. Bandwidth is free on the VPS but here I see costs can easily get to $800.00/mo; this makes me very insecure about the whole thing, I mean VPS is just $2,000/yr. Do I need another VM or is PHP included in the Web Sites? I have basic sysadmin skills, I think I can handle setting up a PHP install, but will I have to do this? If yes, what other service do I need to setup manually? What about Memcached, MySQL, etc? What security protections does it include? For example I have some basic protection included, like directory traversals and executable files upload; I also have CloudFlare on my other websites for DDoS protection; will I need to do the same thing here too, can it even be installed, can I edit my DNS records, etc? How are e-mails, subdomains, add-on domains, parked domains, etc. handled? I haven't seen any references to e-mail boxes. On the VPS I simply add them from cPanel ([email protected] / whatever.mysite.com / ...); do I have a similar management interface here? Do I get SSH access? Or at least FTP, remote MySQL access and maybe some incremental back-ups or something? Can I see my quotas and advanced traffic info? I must mention that I really like the idea of the whole "cloud" concept, the added reliability and everything but I really need maybe a parallel to regular hosting or something so I know what to expect.

    Read the article

  • How to configure ASP.NET MVC 3 on IIS 6 (Windows 2003 R2)

    - by Nedcode
    I am getting 403 Directory Listing Denied for the root and 404 for an action that I know should exist. Background: I built and deployed an ASP.NET MVC 2 application a long time ago. Later I upgraded it to MVC 3 and it is still working with no configuration changes. Setting it up on Windows 2003 R2 (Standard) initially was a pain, but after a couple of days (yes, days) of struggling it started working. Now I have to do the same with the same application on a different server (2003 R2 Standard again) on a different network. .NET 4 is installed and allowed. ASP.NET MVC 3 is also installed. By default IIS is set to use .NET 4. I verified that the aspnet_isapi.dll used in the application extensions is from the version 4.0.30319 .NET assemblies folder. I also added the wildcard mapping to aspnet_isapi.dll and unchecked "verify file exists". Under Directory Security in Authentication Methods I have disabled anonymous access and enabled Integrated Windows authentication (same as on the server where it works). I have copied the same web.config with the <authentication mode="Windows" /> <authorization> <deny users="?" /> </authorization> block. I have set Read & Execute, List Folder Contents, and Read for the NetworkService account (under which the app pool is running). Also I have set the same for the Network account, IIS_WPG, ASPNET and IUSR_MachineName. I do not have an EnableExtensionlessUrls setting, but even if I create it and set it to true or false it does not help. I also tried http://haacked.com/archive/2010/12/22/asp-net-mvc-3-extensionless-urls-on-iis-6.aspx and it did not help. But I kept getting 403 Directory Listing Denied for the root and 404 for an action that I know should exist. Web Platform Installer was then used to re-install and possibly update .NET, ASP.NET etc. I then noticed IIS was reset to default. So I added the wildcard mapping again. No luck, still 403. I exported configuration files from the working server setup and created a new default app pool and a new default website using those configurations. Still I get 403 Directory Listing Denied for / and 404 for any action I try.

    Read the article

  • Problems setting up Single Sign-On using Kerberos authentication

    - by user1124133
    I need to set up authentication for a Ruby on Rails application via Active Directory, using Kerberos. Some technical information: I am using Apache with mod_auth_kerb installed. In httpd.conf I added: LoadModule auth_kerb_module modules/mod_auth_kerb.so. In /etc/krb5.conf I added the following configuration:
      [logging]
       default = FILE:/var/log/krb5libs.log
       kdc = FILE:/var/log/krb5kdc.log
       admin_server = FILE:/var/log/kadmind.log
      [libdefaults]
       default_realm = EU.ORG.COM
       dns_lookup_realm = false
       dns_lookup_kdc = false
       ticket_lifetime = 24h
       forwardable = yes
      [realms]
       EU.ORG.COM = {
         kdc = eudc05.eu.org.com:88
         admin_server = eudc05.eu.org.com:749
         default_domain = eu.org.com
       }
      [domain_realm]
       .eu.org.com = EU.ORG.COM
       eu.org.com = EU.ORG.COM
      [appdefaults]
       pam = {
         debug = true
         ticket_lifetime = 36000
         renew_lifetime = 36000
         forwardable = true
         krb4_convert = false
       }
    When I test with kinit validuser and enter the password, authentication is successful. klist returns:
      Ticket cache: FILE:/tmp/krb5cc_600
      Default principal: validuser@EU.ORG.COM
      Valid starting     Expires            Service principal
      02/08/13 13:46:40  02/08/13 23:46:47  krbtgt/EU.ORG.COM@EU.ORG.COM
              renew until 02/09/13 13:46:40
      Kerberos 4 ticket cache: /tmp/tkt600
      klist: You have no tickets cached
    In the application's Apache configuration I added:
      <IfModule mod_auth_kerb.c>
        <Location /winlogin>
          AuthType Kerberos
          AuthName "Kerberos Loginsss"
          KrbMethodNegotiate off
          KrbAuthoritative on
          KrbVerifyKDC off
          KrbAuthRealms EU.ORG.COM
          Krb5Keytab /home/crmdata/httpd/apache.keytab
          KrbSaveCredentials off
          Require valid-user
        </Location>
      </IfModule>
    I restarted Apache. Now some tests: When I try to access the application from Win7, I get a pop-up message box with the text: "Warning: This server is requesting that your username and password be sent in an insecure manner (basic authentication without a secure connection)." When I enter valid credentials my application opens successfully, and all works fine. Questions: Is it OK that the user gets such a pop-up window? If I use NTLM authentication there is no such pop-up. I checked IE Internet Options and 'Enable Integrated Windows Authentication' is checked. Why does IE try to send a username and password to the Apache application? If I understand correctly, Windows itself should perform the authentication against Active Directory using the Kerberos protocol. When I try to access the application from Win7 and enter incorrect credentials in the pop-up box, the application says Authentication failed (this is OK). In the Apache error log I see: [error] [client 192.168.56.1] krb5_get_init_creds_password() failed: Client not found in Kerberos database. But after that I don't get the chance to enter valid credentials; only when I restart IE do I get the pop-up box again. What could be incorrect or missing in my Kerberos setup? I read in some blog post that something probably needs to be done on the Active Directory side. What exactly?
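    For what it's worth, one way to test the Kerberos path without IE at all, from any machine holding a ticket; the hostname is a placeholder and curl must be built with GSSAPI/SPNEGO support. Also worth noting, purely as a reading of the config quoted above: with KrbMethodNegotiate off, mod_auth_kerb cannot offer the ticket-based Negotiate mechanism and falls back to asking for a password (Basic), which by itself would explain the pop-up.
      kinit validuser
      curl -v --negotiate -u : http://crm.eu.org.com/winlogin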

    Read the article

  • Nginx + Wordpress Multisite 3.4.2 + subdirectories + static pages and permalinks

    - by UrkoM
    I am trying to setup Wordpress Multisite, using subdirectories, with Nginx, php5-fpm, APC, and Batcache. As many other people, I am getting stuck in the rewrite rules for permalinks. I have followed these two guides, which seem to be as official as you can get: http://evansolomon.me/notes/faster-wordpress-multisite-nginx-batcache/ http://codex.wordpress.org/Nginx#WordPress_Multisite_Subdirectory_rules It is partially working: http://blog.ssis.edu.vn works. http://blog.ssis.edu.vn/umasse/ works. But other permalinks, like these two to a post or to a static page, don't work: http://blog.ssis.edu.vn/umasse/2008/12/12/hello-world-2/ http://blog.ssis.edu.vn/umasse/sample-page/ They either take you to a 404 error, or to some other blog! Here is my configuration: server { listen 80 default_server; server_name blog.ssis.edu.vn; root /var/www; access_log /var/log/nginx/blog-access.log; error_log /var/log/nginx/blog-error.log; location / { index index.php; try_files $uri $uri/ /index.php?$args; } # Add trailing slash to */wp-admin requests. rewrite /wp-admin$ $scheme://$host$uri/ permanent; # Add trailing slash to */username requests rewrite ^/[_0-9a-zA-Z-]+$ $scheme://$host$uri/ permanent; # Directives to send expires headers and turn off 404 error logging. location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ { expires 24h; log_not_found off; } # this prevents hidden files (beginning with a period) from being served location ~ /\. { access_log off; log_not_found off; deny all; } # Pass uploaded files to wp-includes/ms-files.php. rewrite /files/$ /index.php last; if ($uri !~ wp-content/plugins) { rewrite /files/(.+)$ /wp-includes/ms-files.php?file=$1 last; } # Rewrite multisite '.../wp-.*' and '.../*.php'. if (!-e $request_filename) { rewrite ^/[_0-9a-zA-Z-]+(/wp-.*) $1 last; rewrite ^/[_0-9a-zA-Z-]+.*(/wp-admin/.*\.php)$ $1 last; rewrite ^/[_0-9a-zA-Z-]+(/.*\.php)$ $1 last; } location ~ \.php$ { # Forbid PHP on upload dirs if ($uri ~ "uploads") { return 403; } client_max_body_size 25M; try_files $uri =404; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; } } Any ideas are welcome! Have I done something wrong? I have disabled Batcache to see if it makes any difference, but still no go.
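    One low-effort way to see which rewrite is swallowing the permalink requests is nginx's rewrite log; this is a debugging aid rather than a fix, added temporarily inside the server block:
      error_log /var/log/nginx/blog-error.log notice;
      rewrite_log on;
    Tailing the error log while requesting /umasse/sample-page/ then shows each rewrite step and which rule the request finally matched.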

    Read the article

  • Mysql Servers for Attendance System

    - by foo
    I'm building an attendance system. There are about 20 places where people will check in and check out using a Mifare 1K card. It will use MySQL as the database. The system will display something like "#ID IN: 800AM" the first time the user checks in and "#ID OUT: 400PM" when the user checks out. For this to work, all the databases need to be synchronized with each other at all times. For example, if user A went to location #1 to check in, but by the time he wants to return home the server at location #1 has gone down, he needs to go to location #2 or the nearest server to check out. The server at location #2 should display "#ID OUT: 400PM" and not "#ID IN: 400PM" since he's already checked in. So, what should I use to ensure this idea will work? My main concern is with the network (another department manages it), which is very unpredictable. It just loves to go down anytime it wants to. Update: LOL, didn't realize my question wasn't clear; I just noticed it when you guys pointed it out, sorry about that. My real question is, how can I configure my MySQL servers to be synchronized with each other (20 servers)? MySQL Cluster? (I tried reading about it, but I'm not sure if it's the right thing to do.)
    My current setup (first phase):
    - Local database for each server
    - OS: Slackware
    - A main server that keeps track of which staff member is at which server
    - A web based front end for the user to see their history (which connects to the server based on their records)
    Main pros:
    - No worries about network problems since it is a local database
    Main cons:
    - A user can only check in and out at the same server; databases/servers are not connected with each other.
    - Have to add the user to each server if the users want to check in at different locations. Which means, if he wants to go to location A, he must check out from location A first and then check in at location B. The server at location B doesn't know that the user checked in before at A.
    By the way, I've already centralized my NTP to a local server. About the network, let's just say I don't have the authority to make changes so that the network will be better. The network won't affect all 20 servers at once; usually just a few of them, several times a week. If there is anything else you would like me to answer, please just ask.
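    For reference, a minimal sketch of plain MySQL master/slave replication between two of the boxes; the IPs, the repl user (which has to exist with the REPLICATION SLAVE grant) and the single-master layout are all assumptions, and it only keeps copies in sync for reads: it does not by itself let a location accept check-outs while its link to the master is down.
      # /etc/my.cnf on the designated master
      [mysqld]
      server-id = 1
      log-bin   = mysql-bin
      # on each location server, point the local MySQL at the master once:
      mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='192.168.0.10', MASTER_USER='repl',
        MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4; START SLAVE;"
    Fully multi-master setups, where every location stays writable during an outage, are what MySQL Cluster or circular replication try to solve, at a real cost in complexity.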

    Read the article

  • routing through multiple subinterfaces in debian

    - by Kstro21
    My question is as simple as the title. I have a Debian 6 box, 2 NICs, and 3 different subnets on a single interface, just like this:
      auto eth0
      iface eth0 inet static
          address 192.168.106.254
          netmask 255.255.255.0
      auto eth0:0
      iface eth0:0 inet static
          address 172.19.221.81
          netmask 255.255.255.248
      auto eth0:1
      iface eth0:1 inet static
          address 192.168.254.1
          netmask 255.255.255.248
      auto eth1
      iface eth1 inet static
          address 172.19.216.3
          netmask 255.255.255.0
          gateway 172.19.216.13
    eth0 is connected to a switch with 3 different VLANs; eth1 is connected to a router. No iptables DROP rules, so all traffic is allowed. Now, passing the traffic through eth0 is OK, and passing the traffic through eth0:0 is OK, but passing the traffic through eth0:1 is not working. I can ping the IP address of that subinterface from a PC where this IP is the default gateway, but I can't get to servers in the subnet of the eth1 interface; the traffic is not passing, even when I set iptables to log all the traffic in the FORWARD chain and I can see the traffic there - it is still not really passing. And the funny thing is I can do anything the other way around, I mean passing from eth1 to eth0:1: RDP, telnet, ping, etc. Doing some work with iptables, I managed to pass some traffic from eth0:1 to eth1; the iptables rules look like this:
      iptables -t nat -A PREROUTING -d 192.168.254.1/32 -p tcp -m multiport --dports 25,110,5269 -j DNAT --to-destination 172.19.216.1
      iptables -t nat -A PREROUTING -d 192.168.254.1/32 -p udp -m udp --dport 53 -j DNAT --to-destination 172.19.216.9
      iptables -t nat -A PREROUTING -d 192.168.254.1/32 -p tcp -m tcp --dport 21 -j DNAT --to-destination 172.19.216.11
      iptables -t nat -A POSTROUTING -s 172.19.216.0/24 -d 172.19.221.80/29 -j SNAT --to-source 172.19.221.81
      iptables -t nat -A POSTROUTING -s 172.19.216.0/24 -d 192.168.254.0/29 -j SNAT --to-source 192.168.254.1
      iptables -t nat -A POSTROUTING -s 172.19.216.0/24 -o eth0 -j SNAT --to-source 192.168.106.254
    Doing this it works, but it is really a headache to have to map each port to a server - imagine if I move the service to another server. So now I have doubts: can Debian route through multiple subinterfaces? Is there a limit to this? If not, what am I doing wrong, when I have the same setup with other subnets and it works OK? Without the iptables rules in the nat table it doesn't work. Thanks, and I hope for good comments/answers.
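    A couple of cheap checks before digging further; these are purely diagnostic, and the client IP is a placeholder for a PC that uses 192.168.254.1 as its gateway:
      sysctl net.ipv4.ip_forward
      sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.eth0.rp_filter net.ipv4.conf.eth1.rp_filter
      tcpdump -ni eth1 host 192.168.254.2     # do the forwarded packets actually leave on eth1, and do replies come back?
    If the packets do go out on eth1 unchanged (no SNAT), then the router at 172.19.216.13 and the servers behind it also need a return route for 192.168.254.0/29, which would explain why only the NATed rules work.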

    Read the article

  • Linux Software RAID1 Rebuild Completes, but after reboot, it's degraded again

    - by zimmy6996
    I have been beating my head with an issue here, and I'm now turning to the internet for help. I have a system running Mandrake Linux, with the following configuration: /dev/hda - This is a IDE drive. Has some partitions on it that boot the system and make up most of the file system. /dev/sda - This is drive 1 of 2 for a software raid /dev/md0 /dev/sdb - This is drive 2 of 2 for a software raid /dev/md0 md0 gets mounted but fstab as /data-storage, so it is not critical to the systems ability to boot. We can comment it out of fstab, and the system works just fine either way. The problem is, we have a failed sdb drive. So I shut the box down, and have pulled the failed disk and installed a new disk. When the system boots up, /proc/mdstat shows only sda as part of the raid. I then run the various command to rebuild the RAID to /dev/sdb. Everything rebuilds correctly, and upon completion, you look at /proc/mdstat and it shows 2 drives sda1(0) and sdb1(1). Everything looks great. Then you reboot the box ... UGH!!! Once rebooted, sdb is missing again from the RAID. It is like the rebuild never happened. I can walk through the commands to rebuild it again, and it will work, but again, after reboot, the box seems to make sdb just vanish! The real odd thing is, if after reboot, I pull sda out of the box, and try to get the system to load with the rebuilt sdb drive in the system, and when I do, the system actually throws and error just after grub, and says something about drive error, and the system has to shut down. Thoughts??? I'm starting to wonder if grub has something to do with this mess. That the drive isn't being setup within grub to be visible at boot? This RAID array isn't necessary for the system to boot, but when the replacement drive is in there, without SDA it won't boot system, so it makes me believe there is something to that. On top of that, there just seems to be something wonky here the drive falling off of RAID after reboot. I've hit the point of pounding my head on the keyboard. Any help would be greatly appreciated!!!
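    A sketch of what is worth capturing right after a rebuild finishes and again right after a reboot, to see what actually changes between the two; the device names are taken from the description above:
      mdadm --detail /dev/md0
      mdadm --examine /dev/sda1 /dev/sdb1      # compare UUIDs and event counters in the two superblocks
      fdisk -l /dev/sdb                        # the new disk needs a partition of type fd (Linux raid autodetect)
      mdadm --detail --scan                    # compare with the ARRAY line in /etc/mdadm.conf, if one exists
    On setups of that era the kernel assembles arrays at boot from type-fd partitions (or from mdadm.conf), so a disk that was rebuilt as a raw device, or whose partition type isn't fd, can look fine until the next reboot; that is an assumption worth ruling out.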

    Read the article

  • Is this distributed database server idea feasible?

    - by David
    I often use SQLite for creating simple programs in companies. The database is placed on a file server. This works fine as long as there are not more than about 50 users working towards the database concurrently (though depending on whether it is reads or writes). Once there are more than this, they will notice a slowdown if there are a lot of concurrent writing on the server as lots of time is spent on locks, and there is nothing like a cache as there is no database server. The advantage of not needing a database server is that the time to set up something like a company Wiki or similar can be reduced from several months to just days. It often takes several months because some IT-department needs to order the server and it needs to conform with the company policies and security rules and it needs to be placed on the outsourced server hosting facility, which screws up and places it in the wrong localtion etc. etc. Therefore, I thought of an idea to create a distributed database server. The process would be as follows: A user on a company computer edits something on a Wiki page (which uses this database as its backend), to do this he reads a file on the local harddisk stating the ip-address of the last desktop computer to be a database server. He then tries to contact this computer directly via TCP/IP. If it does not answer, then he will read a file on the file server stating the ip-address of the last desktop computer to be a database server. If this server does not answer either, his own desktop computer will become the database server and register its ip-address in the same file. The SQL update statement can then be executed, and other desktop computers can connect to his directly. The point with this architecture is that, the higher load, the better it will function, as each desktop computer will always know the ip-address of the database server. Also, using this setup, I believe that a database placed on a fileserver could serve hundreds of desktop computers instead of the current 50 or so. I also do not believe that the load on the single desktop computer, which has become database server will ever be noticable, as there will be no hard disk operations on this desktop, only on the file server. Is this idea feasible? Does it already exist? What kind of database could support such an architecture?

    Read the article

  • Suspected network performance issue on VirtualBox Ubuntu guest on Win7 host

    - by Adam
    I set up Ubuntu 12.04 in VirtualBox on the Win7 machine I was allocated on my new project. I am running Java, Eclipse, Tomcat to develop a large data-intensive application and I noticed that this application runs at half the speed of my colleague's identical machine, where he runs it all under Windows. I think I have narrowed down the performance issue to the network, after comparing and equalising all the Java VM settings with my colleague. Is there a ping test I can do or some other network diagnostic test to flag up any problems? To give some background, the network performance is confusing. Running a network speed test to my colleague's machine with iperf shows speeds of 6 Mb/s from my Ubuntu guest, and 90 Mb/s from the win7 host. Large downloads, e.g. the Java SDK, come down at about 1.2 MB/s on both the guest and the host. Pings are sub-1ms on the host, but 1.5ms on the guest. I also did a broadband speed test, and got 10Mb/s download speed on both, but the host has an upload speed of 10Mb/s but the guest only uploads at 3Mb/s. I've been trying to diagnose any MTU problems with ping -M do to identify any kind of packet fragmentation problem but it's progressing very slow because I don't have much experience in this area. From what I read on other people's networking issues with VB and Linux guests on Win7 hosts, I should be able to get the speed on the guest up to the same level as the host. I installed a fresh VM with Ubuntu again to see if I'd foobar'd it somehow, but I'm getting the same readings with iperf on the virgin installation. My setup is: Adapter 1: Intel PRO/1000 MT Desktop (NAT) Adapter 2: ditto (host-only adapter) eth0 Link encap:Ethernet HWaddr 08:00:27:0b:76:bf inet addr:10.0.2.15 Bcast:10.0.2.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fe0b:76bf/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:86236 errors:0 dropped:0 overruns:0 frame:0 TX packets:49369 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:69163946 (69.1 MB) TX bytes:3530535 (3.5 MB) eth2 Link encap:Ethernet HWaddr 08:00:27:a3:26:b8 inet addr:192.168.56.101 Bcast:192.168.56.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fea3:26b8/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:59 errors:0 dropped:0 overruns:0 frame:0 TX packets:57 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:9148 (9.1 KB) TX bytes:7648 (7.6 KB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:701 errors:0 dropped:0 overruns:0 frame:0 TX packets:701 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:66321 (66.3 KB) TX bytes:66321 (66.3 KB)
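    Two low-risk things to measure or try, offered as guesses rather than a diagnosis; the hostname and the VM name are placeholders:
      # path-MTU probe from the guest: 1472 bytes of ICMP payload + 28 bytes of headers = 1500
      ping -M do -s 1472 -c 3 colleague-pc
      # with the VM powered off, switch the NAT adapter to the paravirtualised NIC and re-run iperf
      VBoxManage modifyvm "Ubuntu 12.04" --nictype1 virtio
    If the 1472-byte probe fails but smaller sizes pass, fragmentation on the virtual adapter is a likely culprit; if not, the adapter emulation itself (hence the virtio test) is the next suspect.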

    Read the article

  • Ping Unknown Host on CentOS at EC2

    - by organicveggie
    Weird problem. We have a collection of servers running CentOS 5 on EC2. The setup includes two DNS servers and two LDAP servers. DNS has a CNAME pointing at the primary LDAP server. One machine (and only one machine) is giving me problems. I can ssh into the server using LDAP authentication. But once I'm on the machine, ping won't resolve the LDAP host even though DNS seems to work fine. Here's ping: $ ping ldap.mycompany.ec2 ping: unknown host ldap.mycompany.ec2 Here's the output of dig: $ dig ldap.mycompany.ec2 ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_5.3 <<>> ldap.studyblue.ec2 ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 2893 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;ldap.mycompany.ec2. IN A ;; ANSWER SECTION: ldap.mycompany.ec2. 3600 IN CNAME ec2-hostname.compute-1.amazonaws.com. ec2-hostname.compute-1.amazonaws.com. 55 IN A aaa.bbb.ccc.ddd ;; Query time: 12 msec ;; SERVER: 10.32.159.xxx#53(10.32.159.xxx) ;; WHEN: Tue May 31 11:16:30 2011 ;; MSG SIZE rcvd: 107 And here is resolv.conf: $ cat /etc/resolv.conf search mycompany.ec2 nameserver 10.32.159.xxx nameserver 10.244.19.yyy And here is my hosts file: $ cat /etc/hosts 10.122.15.zzz bamboo4 bamboo4.mycompany.ec2 127.0.0.1 localhost localhost.localdomain And here's nsswitch.conf $ cat /etc/nsswitch.conf passwd: files ldap shadow: files ldap group: files ldap sudoers: ldap files hosts: files dns bootparams: nisplus [NOTFOUND=return] files ethers: files netmasks: files networks: files protocols: files rpc: files services: files netgroup: files ldap publickey: nisplus automount: files ldap aliases: files nisplus So DNS works the way I would expect. And I can ping the ldap server by ip address. And I can even access the box with SSH using LDAP authentication. Any suggestions?
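    Since dig talks to the DNS server directly while ping resolves through the nsswitch stack, comparing the two paths narrows it down quickly; these are read-only checks:
      getent hosts ldap.mycompany.ec2     # same "hosts: files dns" path that ping uses
      getent hosts ldap                   # exercises the "search mycompany.ec2" suffix from resolv.conf
      nscd -i hosts 2>/dev/null           # flush the hosts cache in case nscd happens to be running
    If getent fails where dig succeeds, the problem is on the libc/nsswitch side of this one box rather than in BIND.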

    Read the article

  • Some domain names not resolving on local network

    - by Solignis
    I am not really sure where to start with this one... I have a small network setup with some Linux servers (Ubuntu 11.04 Server). 2 servers are running BIND 9 (NS01, NS02); they are configured as master and slave respectively. 1 server is running Zimbra ZCS 7.1.1 (MX01); it has a private BIND 9 server running to achieve a split DNS configuration. This DNS server does not interact with the other two; it forwards queries it cannot resolve to the other 2, and that is it. No zone transfers. Zimbra is hosting 3 domains at the moment: solignis.local, solignis.com, campbellsurvey.net. The problem: From within my network I cannot connect to mail.campbellsurvey.net. When I say I cannot connect, I mean if I open Firefox and type https://mail.campbellsurvey.net I go nowhere; the address is supposed to connect to my Zimbra webmail, but it goes nowhere. The odd thing is, if I try the same task from outside of the network it brings the website up like normal. If I try to create an account in Thunderbird to connect to the same server using IMAP4 or POP3, I get an error saying that Thunderbird cannot find the domain name. Even the Zimbra client fails too. It is like from within my own walls campbellsurvey.net does not exist, but if I step outside I can get it to work with no problem at all. I had thought maybe the problem was with the DNS server (BIND 9), so just to eliminate it as a possibility I configured a Windows server I use for VMware vCenter as a DNS server to see what would happen. The result was the same; it's like something is preventing connections to those domains, but I have checked various firewalls and such. I checked port forwards, etc. So I am running out of ideas. I know this is not a lot of information to work from and I can give more details about certain things as needed. I am just trying to figure out what could be going wrong. Any help you could offer would be much appreciated.
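    A quick way to narrow down which resolver is handing the inside clients a bad or empty answer; the server IPs are placeholders for NS01 and the Zimbra box's private BIND:
      dig mail.campbellsurvey.net                 # whatever resolver the client is actually configured with
      dig @10.0.0.11 mail.campbellsurvey.net      # ask NS01 directly
      dig @10.0.0.21 mail.campbellsurvey.net      # ask MX01's split-DNS BIND directly
    If the internal view answers with no A record, or with an address nothing is listening on, that would match a site that exists from outside but not from within.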

    Read the article

  • Can't authenticate as my user, sudo, nor root in Debian "Jessie" Gnome anymore?

    - by Janar
    I'm a Debian beginner and a GUI guy, in a bit of trouble. I can't log in via sudo/gksu/root/su, nor as the (main/super)user, after removing the user password via the Gnome user settings.
    History of actions (probably irrelevant though):
    - Installed Debian "Jessie" GNU/Linux with the Xfce GUI (en-US) as the only OS. Hardware is a ThinkPad W510.
    - Skipped the root password in setup, to get sudo for the superuser easily.
    - Logged in (as I always had) with Gnome (3.4.x), not once with Xfce. I installed Xfce only to get more control (easier management) over packages, so I could set up Gnome more to my liking.
    - Added more Jessie repos (the same ones Wheezy stable has by default, but for Jessie, as Jessie only had repos for security updates by default).
    - Installed lots of GTK(3) and Gnome(3) based software (restarted again after this).
    - Installed the proprietary graphics driver for my Nvidia Quadro (restarted once again after that one).
    - Installed more stuff related to my work/school/development.
    The actual problem: I had planned to restart again, but wanted to set up auto-login first. Instead I set the user password to none (don't ask why, perhaps caused by being awake for a looooong time), noticed it, also enabled auto-login, but couldn't undo my previous mistake by creating a new password for myself. As my password is set to none, I would have expected that simply pressing return at the password prompt (empty password field) would do, but it won't authenticate. I tried Alt+F2 "gksu gedit" as well as sudo wget "https://www.some-page.eu/file.ext" and "su" in terminals; none of them worked (quite logical actually, as I'm the sudoer and highest ranked superuser, besides being the only user on the computer).
    Current state: everything worked and still works fine after this accident, besides the password prompt part. Too spooked to log out or restart. The Synaptic package manager is still open with root rights (the only one; it was left open prior to the issue and not closed since, just in case). Googled for help and read some manuals/FAQs/how-tos; they mostly lead to sudoers file management, but I haven't found one specifically for my issue, so I'm still none the wiser. I really hope I don't have to redo the OS install all over again because of just one stupid mistake. Thanks for your reply :-)
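    For what it's worth, any root shell is normally enough to recover from this, for example the "recovery mode" entry in Debian's boot menu; a sketch, with the username assumed:
      passwd janar        # give the desktop user a fresh password (replace janar with the real username)
      passwd -S janar     # a "P" in the output confirms the account has a usable password again
    After that, the auto-login and password settings in the Gnome user panel can be put back to normal from a regular session.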

    Read the article

  • configuring mod_proxy_html properly?

    - by tobinjim
    I have an apache2 web server that handles reverse proxy for Rails3 app running on another machine. The setup works except URLs generated within the webapp aren't getting rewritten by my configuration for mod_proxy_html. The ["Reverse Proxy Scenario"][1] is exactly what I'm trying to do, so I've followed the tutorial as completely as I know how. I've applied or tried answers supplied here on stackoverflow, to no effect. According to the "Reverse Proxy Scenario" you want a number of modules loaded. All those instructions are in my httpd.conf file and when I examine the output from apactectl -t -D DUMP_MODULES all the expected modules show in amongst the listing. My external web server doing the reverse proxy is at www.ourdomain.org and the Rails app is internally available at apphost.local (the server is Mac OS X Server 10.6, the rails app server is Mac OS X 10.6). What's working right now is access to the webapp via the reverse proxy as: http://www.ourdomain.org/apphost/railsappname/controllername/action But none of the javascript files, css files or other assets get loaded, and links internal to the web app come out missing the apphost portion of the URL, as if my rewrite rule is configured incorrectly (so of course I've focused on that and can't seem to get anything to be added or deleted in the process of passing the html in from the apphost and out through the Apache server). For instance, hovering over an action link in the html returned by the web app you'll get: http://www.ourdomain.org/railsappname/controllername/action Here's what my Apache directives look like: LoadModule proxy_html_module /usr/libexec/apache2/mod_proxy_html.so LoadModule xml2enc_module /usr/libexec/apache2/mod_xml2enc.so ProxyHTMLLogVerbose On LogLevel Debug ProxyPass /apphost/ http://apphost.local/ <Location /apphost/> SetOutputFilter INFLATE;proxy-html;DEFLATE ProxyPassReverse / ProxyHTMLExtended On ProxyHTMLURLMap railsappname/ apphost/railsappname/ RequestHeader unset Accept-Encoding </Location> After every change I make to httpd.conf I religiously check apachectl -t just to be sane. I'm definitely not an Apache expert, but all the directives that follow mine seem to not overrule what I'm doing here. But then nothing that I try seems to alter the URLs I see in my browser after hitting the Apache server with a request for my web app. Even if you can't tell what I've done incorrectly, I'd welcome ideas on how to get Apache to help see what it's working on and doing to the html coming from my web app. That's what I understood the ProxyHTMLLogVerbose On and LogLevel Debug to be setting up, but I'm not seeing anything in the log files.
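    Not a definitive fix, but for comparison, the canonical mod_proxy_html pattern maps the backend's own URL space onto the public prefix, and inside a Location block ProxyPassReverse normally names the backend URL; a sketch under those assumptions:
      <Location /apphost/>
          ProxyPassReverse http://apphost.local/
          SetOutputFilter INFLATE;proxy-html;DEFLATE
          ProxyHTMLExtended On
          ProxyHTMLURLMap http://apphost.local/ /apphost/
          ProxyHTMLURLMap / /apphost/
          RequestHeader unset Accept-Encoding
      </Location>
    With ProxyHTMLLogVerbose On already set, the rewrites (or their absence) for each document should then show up in the error log, which is the quickest way to confirm whether the filter is even seeing the HTML.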

    Read the article

  • How do I (robustly) remotely execute tasks on Windows workstations in a domain?

    - by Zac B
    I'm not even sure if "robustly" is a word. Anyway. Context: We have a few hundred Windows 7 workstations on a LAN. We use AD/GPO management pretty heavily, but there are a lot of periodic and/or manual maintenance tasks we need to do that can't be done via GPO/scheduled task. For example, say I want to execute program X (which runs silently, in the background, and doesn't bother the user) on workstation Y, or say I want to execute task A on a workstation group B either on a schedule or on demand. Kicking the users off of their computers to do this (i.e. using RDP) is a no-no, and doesn't work on groups anyway. Question: What's the best way to do this that is robust enough that, after setup, I could give it to beginner support people (read: people who are phobic of the command line, and get confused with GUI interfaces more complicated than Firefox)? I'm a competent programmer, and, if there is a robust set of tools or framework out there for this type of task, I'd consider hacking something together myself if it didn't take too long. If there's some combination of tools or techniques that others use to make remote-workstation-administration doable by beginners, I have yet to find it. For those who care about the "why": I'm midlevel IT, and was told to implement a remote management solution that allows arbitrary/scheduled remote execution, with confirmation that programs actually ran remotely, and the ability to view what they returned. "Why?" I asked, "Can't I just use PsExec and the task scheduler on a dispatcher machine?" "No," I was told, "'Joe' the second-week tech is going to be in charge of this one, and he needs something simple with a GUI." What I've tried: I've played with making a bunch of one-clickable "transfer files to remote computer and run them with PsExec" batch/VB scrips, but those tend to break down and don't easily support running on customizable groups. I've played a little bit with the Windows version of Puppet, but it doesn't support arbitrary-time remote execution (it's ability to group computers into a tree/node structure is really nice though). I've used an older version of Altiris, and, while it does a lot of what I want, it's interface is awful, it's slow, crashes a lot, and is probably too expensive for management. SwiftWater's DMS solution does some of what I want, but it's very underdeveloped, closed-source (not a deal breaker but not ideal), and I get the impression that support and reliability are lacking.

    Read the article

  • Virtualbox - routing subnet to bridge adapters

    - by user42384
    Hello, I have set up a Debian Lenny box with 3 vbox Lenny machines running eth0 of the host in bridged mode (on virtualbox 3.1.6). When testing in my local LAN, this all worked perfectly well and traffic flowed to and from the IPs of the virtual machines as it should. However, now that it's in its co-lo home, the networking setup is a bit different, and I'm unable to get traffic to flow to the vboxes properly. Specifically, the host has its own Primary IP, and I have a separate subnet of 8 (6 usable) IPs routed to the box for use by the vboxes. So, eth0 on host is: Machine IP: 2x.x.x.137 Gateway IP: 2x.x.x.138 Subnet Msk: 255.255.255.252 Subnet for vboxes is Subnet: 2x.x.x.240/29 Netmask: 255.255.255.248 vbox1 is configured to 2x.x.x.241 on eth0 as follows: auto eth0 iface eth0 inet static address 2x.x.x.241 netmask 255.255.255.248 Setting up a virtual interface (eth0:0) on the host with one of these subnet IPs allows me to ping to that address only from vbox1, and it allows me to ping vbox1 from the host. I can also ping that virtual interface perfectly well from outside, so the IPs are definitely landing at my machine. It seems I'm missing some sort of routing instruction either on the host or vbox1 to get traffic moving between the subnet and the default gateway, but I can't seem to figure out what it should be, or what glaringly obvious thing i'm missing. Most of my obvious attempts (the gw of eth0, the ip of eth0) were rejected by route command with SIOCADDRT: No such device (eg - i can't find it). I tried setting vbox1 to bridge on eth0:0, but this was not an acceptable device name and VBoxHeadless refused to start. The physical machine does have an unused physical NIC at eth1 that can be used if necessary for something or other. Host machine is running iptables configured by ferm, have experimented with it allowing forwarding for that subnet, but I wouldn't have thought this was necessary given the nature of the virtualbox devices (nor did it actually work). Clearing out all of these rules for a blank iptables set does not resolve the issue. (you can see ferm generated iptables at http://codedumper.com/ojaze) Thanks for any help you can give... Patrick

    Read the article

  • Help needed setting up nginx to serve static files.

    - by Catalina
    Hi Guys, I'm trying to setup nginx to serve static files. Basically all I need is to have http://mydomain.com/site_media/ point to /var/django/myproject/site_media. I have tried so many configurations and when I test it I always get a 404 error for static files. Can anyone please tell me what I'm doing wrong or how I should be setting this up? This is my current nginx configuration file. user www-data; worker_processes 1; #error_log /usr/local/nginx/logs/error.log; #pid /usr/local/nginx/logs/nginx.pid; events { worker_connections 1024; use epoll; } http { # Enumerate all the Tornado servers here upstream frontends { server 127.0.0.1:8000; server 127.0.0.1:8001; server 127.0.0.1:8002; server 127.0.0.1:8003; } include mime.types; default_type application/octet-stream; #access_log /usr/local/nginx/logs/access.log; keepalive_timeout 65; proxy_read_timeout 200; sendfile on; tcp_nopush on; tcp_nodelay on; gzip on; gzip_min_length 1000; gzip_proxied any; gzip_types text/plain text/html text/css text/xml application/x-javascript application/xml application/atom+xml text/javascript; proxy_next_upstream error; server { listen 80; # Allow file uploads client_max_body_size 50M; location ^~ /site_media/ { root /var/django/myproject/site_media; if ($query_string) { expires max; } } location = /favicon.ico { rewrite (.*) /site_media/favicon.ico; } location = /robots.txt { rewrite (.*) /site_media/robots.txt; } location / { proxy_pass_header Server; proxy_set_header Host $http_host; proxy_redirect off; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Scheme $scheme; proxy_pass http://frontends; } } #include /usr/local/nginx/sites-enabled/*; } Thanks, Cata
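    One detail that produces exactly this 404: with root, nginx appends the full request URI to the path, so /site_media/css/x.css is looked up as /var/django/myproject/site_media/site_media/css/x.css. If the files really live under /var/django/myproject/site_media, either point root one directory up or use alias:
      location ^~ /site_media/ {
          alias /var/django/myproject/site_media/;
          # or, equivalently:  root /var/django/myproject;
      }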

    Read the article

  • Synchronize the same set of files to 2 different locations with 2 different programs for 2 different purposes

    - by Hedgetrimmer
    Because of stupid questionable IT policies at my not-to-be-named place of occupation, I have been (and will be, for the forseeable future) carrying on an external hard drive a unison-synchronized copy of all of my documents and code, including code which resides in some of my "dotfiles" and other code which resides in ~/bin (things I've made are there because ~/bin is in my $PATH) along with some cruft generated (and to be generated) by conscript and its related "giter8" templating system for Scala project boilerplates. Despite this, I do use a symlinking program to store all of my important dotfiles in a subdirectory. Thanks to that somewhat complicated setup, I have resorted to making a directory full of symlinks to every directory (or file, as is the case with stuff under ~/bin) that I want synchronized, and then follow = True is in my unison profile. It happens to be that this collection of odds and ends—plus an automatically-generated text file containing every package installed on my system—is everything under ~ that needs to be backed up to a remote (rsync-over-ssh) host with client-side encryption and signing from GPG. I already believe that duplicity is the most appropriate program to do that. What isn't as clear-cut is how to make duplicity use the exact same set of files when it runs a backup; it would be simple if duplicity would follow symlinks, but it does not and the manpage lists no option for enabling any such behavior. Comparing unison's file selection algorithm to duplicity's, I don't think I can write a program that could compute a ruleset for one program given one for the other. For the record, I would rather not keep the symlinks manually synchronized with duplicity file-selection rules, as they can change thanks to the above-mentioned complications regarding ~/bin. I don't think running duplicity on the external hard disk is such a good idea either; I usually keep that hard disk unmounted and unplugged in case of a power failure or other physical problem with the computer, plus I'm not sure about duplicity's performance given that: the hard disk is NTFS-formatted in order to be useable at my Windows-imprisoned place of occupation. despite being a USB 3.0 disk, my computer has no USB 3.0 ports so it acts as a USB 2.0 disk. How can I have duplicity (or is there a better program that I have overlooked?) back up the exact same set of files that is bidirectionally synchronized with my external hard disk?

    Read the article

  • recommendations for efficient offsite remote backup solution of vm's

    - by senorsmile
    I am looking for recommendations for backing up my current 6 vm's(and soon to grow to up to 20). Currently I am running a two node proxmox cluster(which is a debian base using kvm for virtualization with a custom web front end to administer). I have two nearly identical boxes with amd phenom II x4's and asus motherboards. Each has 4 500 GB sata2 hdd's, 1 for the os and other data for the proxmox install, and 3 using mdadm+drbd+lvm to share the 1.5 TB's of storage between the two machines. I mount lvm images to kvm for all of the virtual machines. I currently have the ability to do live transfer from one machine to the other, typically within seconds(it takes about 2 minutes on the largest vm running win2008 with m$ sql server). I am using proxmox's built-in vzdump utility to take snapshots of the vm's and store those on an external harddrive on the network. I then have jungledisk service (using rackspace) to sync the vzdump folder for remote offsite backup. This is all fine and dandy, but it's not very scalable. For one, the backups themselves can take up to a few hours every night. With jungledisk's block level incremental transfers, the sync only transfers a small portion of the data offsite, but that still takes at least a half an hour. The much better solution would of course be something that allows me to instantly take the difference of two time points (say what was written from 6am to 7am), zip it, then send that difference file to the backup server which would instantly transfer to the remote storage on rackspace. I have looked a little into zfs and it's ability to do send/receive. That coupled with a pipe of the data in bzip or something would seem perfect. However, it seems that implementing a nexenta server with zfs would essentially require at least one or two more dedicated storage servers to serve iSCSI block volumes (via zvol's???) to the proxmox servers. I would prefer to keep the setup as minimal as possible (i.e. NOT having separate storage servers) if at all possible. I have also briefly read about zumastor. It looks like it could also do what I want, but it appears to have halted development in 2008. So, zfs, zumastor or other?
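    To make the zfs idea concrete, the hourly difference-shipping described above is roughly the following; the pool/dataset names, snapshot naming and destination host are placeholders:
      zfs snapshot tank/vmstore@0700
      zfs send -i tank/vmstore@0600 tank/vmstore@0700 | bzip2 | \
          ssh backup-host "bzcat | zfs receive -F tank/vmstore"
      zfs destroy tank/vmstore@0600        # prune according to whatever retention is wanted
    The catch is the one already noted: the source has to be a ZFS dataset, which is what pushes the design toward dedicated storage nodes serving the proxmox hosts.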

    Read the article

  • amazon ec2 ubuntu with gitlab and nginx - can't load?

    - by thebluefox
    Ok, so I've spooled up an Amazon EC2 server running Ubuntu, and then followed the instructions below to install GitLab; http://doc.gitlab.com/ce/install/installation.html The only step I've not been able to complete is running the following check on the status; sudo -u git -H bundle exec rake gitlab:check RAILS_ENV=production I get the following error; rake aborted! Errno::ENOMEM: Cannot allocate memory - whoami Which I presume is becuase my EC2 is just running a free tier setup, so isn't that well spec'd. Regardless, I've been trying to access this through my browser. I've set up the elastic IP and pointed my domain at it (for the purpose of this, lets say its git.mydom.co.uk). Doing a whois on this domain shows me its pointing to the right place. For some reason though, I get the "Oops, Chrome could not connect to git.mydom.co.uk". Now - for a period of time I was getting the Nginx holding page (telling me I still needed to perform configuration). This though disappeared after removing the default file from /etc/nginx/sites-enabled/ (after reading this could be issue on a troubleshooting page). Since then, I've had nothing, even when I symlinked the file back in from /sites-available. I've tried changing the owner of the git.mydom.co.uk file sat inside /sites-enabled and /sites-available to www-data, as suggested here, but I could only change the permission of the file in /sites-available, and not the symlinked one in /sites-enabled. The content of this file is as follows; upstream gitlab { server unix:/home/git/gitlab/tmp/sockets/gitlab.socket; } server { listen *:80 default_server; # e.g., listen 192.168.1.1:80; In most cases *:80 is a good idea server_name git.mydom.co.uk; # e.g., server_name source.example.com; server_tokens off; # don't show the version number, a security best practice root /home/git/gitlab/public; # Increase this if you want to upload large attachments # Or if you want to accept large git objects over http client_max_body_size 20m; # individual nginx logs for this gitlab vhost access_log /var/log/nginx/gitlab_access.log; error_log /var/log/nginx/gitlab_error.log; location / { # serve static files from defined root folder;. # @gitlab is a named location for the upstream fallback, see below try_files $uri $uri/index.html $uri.html @gitlab; } All the paths mentioned in here look ok...I'm about at the end of my knowledge now!
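    The rake failure (Cannot allocate memory) suggests the instance is simply out of RAM, which a swap file usually works around; the size below is arbitrary, and the two checks afterwards confirm nginx is actually up and bound to port 80 (the EC2 security group allowing inbound 80 is a separate thing to verify):
      sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
      sudo mkswap /swapfile && sudo swapon /swapfile
      sudo nginx -t                        # syntax-check the vhost after the sites-enabled changes
      sudo netstat -tlnp | grep ':80'      # is anything listening on port 80 at all?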

    Read the article

  • Group traffic shaping with traffic control?

    - by mmcbro
    I'm trying to limit the output bandwidth generated by an application with Linux tc. This application sends me the source port of the request, which I use as a filter to limit each user to a given download speed. I feel that my setup could be managed way better if I had a better knowledge of Linux tc. At the application level users are categorized as members of a group, and each group has a limited bandwidth. Example: members of group A: 512kbit/s; members of group B: 1Mbit/s; members of group C: 2Mbit/s. When a user connects to the application, it retrieves the source port at the origin of the user's request and sends me the source port and the bandwidth at which the user must be limited, depending on the group to which he belongs. With this information I must add the appropriate rules so that the user (the source port, in reality) is limited to the right bandwidth. If the user that connects isn't a member of any group, he should be limited to a default bandwidth. I'm actually managing this with a self-made daemon that adds or removes rules when it receives a request from the application. With my little knowledge of tc I'm not able to limit other users (ones that aren't in a group, all others in fact) to a default speed, and my configuration seems awful to me. Here is the base of my tc qdisc and classes:
      tc qdisc add dev eth0 root handle 1: htb
      tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbps ceil 125mbps
    To classify a user at a given speed I have to add one subclass and then associate one filter with it:
      # a member of group A
      tc class add dev eth0 parent 1:1 classid 1:11 htb rate 512kbps ceil 512kbps
      # its associated filter to match his source port
      tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 50001 flowid 1:11
      # a member of group A again
      tc class add dev eth0 parent 1:1 classid 1:12 htb rate 512kbps ceil 512kbps
      # its associated filter to match his source port
      tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 61524 flowid 1:12
      # a member of group B
      tc class add dev eth0 parent 1:1 classid 1:13 htb rate 1000kbps ceil 1000kbps
      # its associated filter to match his source port
      tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 57200 flowid 1:13
    I already know that a source port could be the same if it's coming from a different IP address; the thing is, the application is behind a proxy, so I don't have to manage any IP addresses in that situation. I would like to know how all other users (requests/source ports, whatever you call them) could each be limited to a given speed. I mean that each connection should be able to use at most 100kbit/s, for example, not a shared 100kbit/s. I also would like to know if there is a way to simplify my rules. I don't know if it is possible to use only one class per group and associate multiple filters with the same class, so that users are handled by one class per group and not one class per user. I appreciate any advice, thanks.
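    A hedged sketch of the two ideas in question: htb's default class catches every packet no filter matches, several filters can point at the same class, and an sfq qdisc shares a class fairly between flows. Note that a class's rate is an aggregate shared by whatever is directed into it, so this turns per-user guarantees into per-group ones, and the default class below is shared by all unmatched connections rather than giving each one 100kbit/s; a true per-connection ceiling would still need per-flow classes or a hashing filter. (Also note that in tc, "kbps" means kilobytes per second and "kbit" means kilobits.)
      tc qdisc add dev eth0 root handle 1: htb default 99
      tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbps ceil 125mbps
      tc class add dev eth0 parent 1:1 classid 1:10 htb rate 512kbps ceil 512kbps   # group A, one class for all its members
      tc class add dev eth0 parent 1:1 classid 1:99 htb rate 100kbit ceil 100kbit   # everyone without a filter lands here
      tc qdisc add dev eth0 parent 1:99 handle 99: sfq perturb 10                   # fair sharing between unmatched flows
      tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 50001 0xffff flowid 1:10
      tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 61524 0xffff flowid 1:10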

    Read the article

  • Recovering data from failed Raid configuration with 4 drives and two raid sets (Asus P6T / Intel ICH10r)

    - by user56365
    I've added the complete detailed version for my question below for those who can help, but want to quickly summarize my question first. I setup two Raid arrays using (4) WD Raptors, a striped set for the OS and 1+0 set for crucial data. After booting once out of the 50 times a cable fell out, the drive wasn't recognized in the array anymore. After trying to fix it, another drive did the same. I now have two drives remaining, luckily with the parity information. I know the striped set is gone, but I need the data on the other set. Can anyone recommend anything to recover the data, or fix the two drives that doesn't allow the raid controller to recognize the drives, even though they are listed on the utility screen as still apart of the configuration but that they are not found? More Details I recently upgraded to a ASUS P6T motherboard with an Intel ICH10R raid controller and changed my previous 4 drive raid array from strictly a Raid 1+0 set to a Raid 0 for the OS/Page/Scratch drive and a Raid 1+0 set for crucial data. I never had problems after upgrading with my configuration, even when a drive died and was replaced. I managed to rebuild the array fine. Unfortunately this time around, a cable came unattached and I booted my system up until the raid status screen with the degraded error. This shouldn't have been a problem, but after I attached the drive it was no longer recognized as a member in the array. Both drives actually show up as a non-member disk. I've spent a very, very long time online trying to find information or support and haven't had much luck. After spending time trying to scan the drive for errors, damaged partition info, etc.. another drive in the set decided it didn't want to be recognized as a part of the array. At this point, I have two out of the four drives still functioning, but the Raid 1+0 array went from degraded to failed and I must find a way to retrieve that data. I think the two drives still in the array have the parity information because they show up as OS (110GB),BACKUP(80GB) and OS:1(110GB),BACKUP(80GB) under windows data management. The other two are simply 74gb Raw unallocated Is it possible recover the data using those two drive only, and which tool would I use? Could it be a simple partition table or any other error that is repairable with hard drive utilities out there? I know the Raid 0 set is done for, but I would assume because the correct drives failed in a 1+0 config to save the data I can retrieve it some how.
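    Not a guaranteed recovery path, but the usual first attempt from a Linux live CD, since both mdadm and dmraid can read Intel Matrix (ICH10R) metadata; everything here is inspection except the activation step, the device and volume names are assumptions, and imaging the volume before any repair attempt is the safe order of operations:
      mdadm --examine /dev/sd[abcd]       # look for Intel (IMSM) RAID metadata on the surviving drives
      mdadm --examine --scan              # print the ARRAY/container lines mdadm would use to assemble
      dmraid -r                           # list the BIOS RAID sets dmraid can see
      dmraid -ay                          # activate them; the 1+0 volume should appear under /dev/mapper
      ddrescue /dev/mapper/VOLUME /mnt/usb/raid10.img /mnt/usb/raid10.log   # VOLUME and the target path are placeholders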

    Read the article

< Previous Page | 633 634 635 636 637 638 639 640 641 642 643 644  | Next Page >