Search Results

Search found 2280 results on 92 pages for 'ressource pool'.


  • OpenVPN bad source address from client

    - by Bogdan
    I have one problem with OpenVPN. There are a lot of dropped-packet records in the OpenVPN log file on the server:
    Mon Oct 22 10:14:41 2012 us=726541 laptop/???:1194 MULTI: bad source address from client [192.168.1.107], packet dropped
    grep -E "^[a-z]" server.conf
    -----
    port 1194
    proto udp
    dev tun
    ca data/ca.crt
    cert data/server.crt
    key data/server.key
    dh data/dh1024.pem
    tls-server
    tls-auth data/ta.key 0
    remote-cert-tls client
    cipher AES-256-CBC
    tun-mtu 1200
    server 10.10.10.0 255.255.255.0
    ifconfig-pool-persist ipp.txt
    push "redirect-gateway def1 bypass-dhcp"
    push "dhcp-option DNS 8.8.8.8"
    client-to-client
    client-config-dir /etc/openvpn/ccd
    route 10.10.10.0 255.255.255.0
    keepalive 10 120
    comp-lzo
    persist-key
    persist-tun
    max-clients 5
    status /var/log/status-openvpn.log
    log /var/log/openvpn.log
    verb 4
    auth-user-pass-verify /etc/openvpn/verify.sh via-file
    tmp-dir /tmp
    script-security 2
    -----
    cat ccd/laptop
    -----
    iroute 10.10.10.0 255.255.255.0
    -----
    cat client.conf
    -----
    remote server ip 1194
    client
    dev tun
    ping 10
    comp-lzo
    proto udp
    tls-client
    tls-auth data/ta.key 1
    pkcs12 data/vpn.laptop.p12
    remote-cert-tls server
    #ns-cert-type server
    persist-key
    persist-tun
    cipher AES-256-CBC
    verb 3
    pull
    auth-user-pass /home/user/.openvpn/users.db
    -----
    According to "Jan Just Keijser - OpenVPN 2 Cookbook", the root of the problem is incorrect config options (see the screenshot). But, as you can see, my config has those options. Could you please help me solve this problem?
    @week: verb level = 6; client log:
    Mon Oct 22 16:06:02 2012 do_ifconfig, tt->ipv6=0, tt->did_ifconfig_ipv6_setup=0
    Mon Oct 22 16:06:02 2012 /sbin/ifconfig tun0 10.10.10.3 pointopoint 10.10.10.5 mtu 1500
    Mon Oct 22 16:06:02 2012 /sbin/route add -net xxxx netmask 255.255.255.255 gw 192.168.1.1
    Mon Oct 22 16:06:02 2012 /sbin/route add -net 0.0.0.0 netmask 128.0.0.0 gw 10.10.10.5
    Mon Oct 22 16:06:02 2012 /sbin/route add -net 128.0.0.0 netmask 128.0.0.0 gw 10.10.10.5
    Mon Oct 22 16:06:02 2012 Initialization Sequence Completed
    cat ccd/laptop
    iroute 10.10.10.0 255.255.255.0
    ifconfig-push 10.10.10.3 10.10.10.5
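
    One hedged observation on the config shown above: "MULTI: bad source address" is logged when a packet arrives from a client with a source IP (here 192.168.1.107, the laptop's LAN address) that the server has no iroute for, and the ccd file above only iroutes the VPN subnet itself. A minimal sketch of the usual fix, assuming the laptop's LAN really is 192.168.1.0/24 (taken from the dropped-packet log) and that this range does not overlap the server's own LAN:

      # /etc/openvpn/ccd/laptop - route the client's LAN, not the VPN subnet
      iroute 192.168.1.0 255.255.255.0
      ifconfig-push 10.10.10.3 10.10.10.5

      # server.conf - matching system route so replies to that LAN go back into the tunnel
      route 192.168.1.0 255.255.255.0

    If the laptop is not meant to expose its LAN at all, the alternative is to stop packets sourced from 192.168.1.x entering the tunnel in the first place, rather than adding routes.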

  • Copy a large LVM volume (14TB) from one server to another

    - by bruce
    Recently I had to copy a very large LVM volume (14TB) from server A to server B. Below are the filesystems of server A and server B.
    server A
    [root@AVDVD-Filer ~]# df -h
    Filesystem                         Size  Used Avail Use% Mounted on
    /dev/mapper/vg_avdvdfiler-lv_root   16T   14T  1.5T  91% /
    tmpfs                              3.0G     0  3.0G   0% /dev/shm
    /dev/cciss/c0d0p1                  194M   23M  162M  13% /boot
    /dev/mapper/vg_avdvdfiler-test     2.3T  201M  2.1T   1% /test
    /dev/sr0                           3.3G  3.3G     0 100% /mnt
    server B
    [root@localhost ~]# df -h
    Filesystem                         Size  Used Avail Use% Mounted on
    /dev/mapper/VolGroup-LogVol00       20G  2.5G   16G  14% /
    tmpfs                              3.0G     0  3.0G   0% /dev/shm
    /dev/cciss/c0d0p1                  194M   23M  162M  13% /boot
    /dev/mapper/VolGroup00-LogVol00     16T  133M   15T   1% /xiangao/lv1
    /dev/mapper/VolGroup00-LogVol01    4.7T  190M  4.5T   1% /xiangao/lv2
    I want to copy the LVM volume /dev/mapper/vg_avdvdfiler-lv_root on server A to the LVM volume /dev/mapper/VolGroup00-LogVol00 on server B. Server A and server B are in the same IP segment. The LVM volume on server A holds files averaging about 500 MB each (avi, wmv, mp4, etc.). I tried mounting /dev/mapper/vg_avdvdfiler-lv_root from server A on server B through NFS, then copying with the cp command. Clearly I failed. Because the LVM volume is so big, I don't have a good idea how to do this. I hope to find a good solution here. I'm Chinese and my English is poor; sorry, and thanks everyone!
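
    A hedged sketch of the two approaches that usually work for a one-off bulk copy of this size (paths are placeholders; the target /xiangao/lv1 filesystem must already be big enough). rsync over SSH is restartable; tar over netcat on a trusted LAN avoids the NFS and encryption overhead. Note that netcat option syntax varies slightly between netcat variants:

      # restartable, preserves attributes; just rerun it if the transfer is interrupted
      rsync -avP /source/dir/ root@serverB:/xiangao/lv1/

      # or: raw stream over the LAN
      # on server B (receiver)
      nc -l 2222 | tar -xf - -C /xiangao/lv1
      # on server A (sender)
      tar -cf - -C /source/dir . | nc serverB 2222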

  • pgpoolAdmin keeps going straight back to its login page

    - by user705142
    I'm trying to run pgpoolAdmin through nginx - it seems to be working properly, at least initially. I've gone through the initial set-up, which works fine, but now after logging in every link takes me straight back to the login page. It also shows Japanese text instead of English, despite picking English during installation. It looks as if it is unable to save any user data, session information, etc. I have JavaScript/cookies enabled, so it's not that. The folder is owned by nginx, and so is pgmgt.conf.php, so it shouldn't be a permissions problem. One potential issue is that I can't see any confirmation that PHP PostgreSQL support is enabled on the phpinfo screen, despite the correct package being installed and present in the config line. Any ideas as to what's happening here? The nginx rules are pretty standard:
    server {
        # pg-pool admin
        listen 997;
        server_name localhost;
        root /opt/pgpooladmin;
        index index.php;
        location ~ .php$ {
            fastcgi_pass_header Set-Cookie;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_script_name;
            fastcgi_index index.php;
            include fastcgi_params;
        }
    }
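
    A login loop with PHP behind php-fpm is very often a session-saving or missing-extension problem rather than anything in nginx. A few hedged checks (paths are typical CentOS/RHEL defaults and may differ on this install):

      # is PostgreSQL support really loaded by the PHP that php-fpm uses?
      php -m | grep -i pgsql          # expect pgsql and/or pdo_pgsql

      # where are sessions written, and can the php-fpm user write there?
      php -i | grep session.save_path
      ls -ld /var/lib/php/session     # chown to the fpm/nginx user if it is owned by someone else

      # apply changes
      service php-fpm restart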

  • Abysmal transfer speeds on gigabit network

    - by Vegard Larsen
    I am having trouble getting my Gigabit network to work properly between my desktop computer and my Windows Home Server. When copying files to my server (connected through my switch), I am seeing file transfer speeds of below 10MB/s, sometimes even below 1MB/s. The machine configurations are: Desktop Intel Core 2 Quad Q6600 Windows 7 Ultimate x64 2x WD Green 1TB drives in striped RAID 4GB RAM AB9 QuadGT motherboard Realtek RTL8810SC network adapter Windows Home Server AMD Athlon 64 X2 4GB RAM 6x WD Green 1,5TB drives in storage pool Gigabyte GA-MA78GM-S2H motherboard Realtek 8111C network adapter Switch dLink Green DGS-1008D 8-port Both machines report being connected at 1Gbps. The switch lights up with green lights for those two ports, indicating 1Gbps. When connecting the machines through the switch, I am seeing insanely low speeds from WHS to the desktop measured with iperf: 10Kbits/sec (WHS is running iperf -c, desktop is iperf -s). Using iperf the other way (WHS is iperf -s, desktop iperf -c) speeds are also bad (~20Mbits/sec). Connecting the machines directly with a patch cable, I see much higher speeds when connecting from desktop to WHS (~300 Mbits/sec), but still around 10Kbits/sec when connecting from WHS to the desktop. File transfer speeds are also much quicker (both directions). Log from desktop for iperf connection from WHS (through switch): C:\temp>iperf -s ------------------------------------------------------------ Server listening on TCP port 5001 TCP window size: 8.00 KByte (default) ------------------------------------------------------------ [248] local 192.168.1.32 port 5001 connected with 192.168.1.20 port 3227 [ ID] Interval Transfer Bandwidth [248] 0.0-18.5 sec 24.0 KBytes 10.6 Kbits/sec Log from desktop for iperf connection to WHS (through switch): C:\temp>iperf -c 192.168.1.20 ------------------------------------------------------------ Client connecting to 192.168.1.20, TCP port 5001 TCP window size: 8.00 KByte (default) ------------------------------------------------------------ [148] local 192.168.1.32 port 57012 connected with 192.168.1.20 port 5001 [ ID] Interval Transfer Bandwidth [148] 0.0-10.3 sec 28.5 MBytes 23.3 Mbits/sec What is going on here? Unfortunately I don't have any other gigabit-capable devices to try with.
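
    The default 8 KB TCP window iperf reports is tiny for gigabit, so as a hedged first step it is worth retesting with a larger window and parallel streams before blaming hardware; if the WHS-to-desktop direction alone stays near zero, a NIC driver/offload problem on one of the Realtek adapters is the usual suspect (setting names below vary by driver version):

      rem on the desktop (server side)
      iperf -s -w 256k
      rem on the WHS box (client side)
      iperf -c 192.168.1.32 -w 256k -t 30 -P 4

      rem if one direction is still ~10 Kbit/s: update the Realtek driver, then try toggling
      rem "Large Send Offload" / "Flow Control" / forced speed-duplex in the adapter properties,
      rem and swap the cable and switch port to rule out a physical fault.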

  • Windows Server 2008 R2 RAS VPN: access server on internal interface ip

    - by Mathias
    Hey, short question: I'm usually a linux admin but need to setup a Win2k8 R2 server for a student project. The server is running as VM on a root server and has a public internet IP assigned. Additionally I need a VPN server to access some services running on the server. I managed to set up a working VPN gateway via the Routing and RAS service which assigns clients an IP in the private subnet 192.168.88.0/24 with the Interface "Internal" listening on 192.168.88.1. Additionally I set up the external interface as NAT interface. So I can connect to the VPN server, get an IP assigned and the server additionally does NAT and I can access the internet over the VPN connection. The only thing I additionally need, is that I can access the server itself over that internal IP (e.g. client 192.168.88.2, server 192.168.88.1) as I want to access some services which I don't like to expose to the internet and restrict them to connected VPN clients. Does anybody have a hint, which configuration I'm missing here to be able to access the server over the VPN connection? EDIT: VPN clients get assigned the IP from the private subnet with subnetmask 255.255.255.255, I guess that might be the reason I can't access the server on the private IP address although it's in the same network range. Any ideas how to change this? I defined a static address pool in the Routing and RAS service, but I can't change the netmask there. EDIT2: I can't access the server from the client, but I can fully access the client from the server (ping, HTTP). I guess it has to do with firewall configuration. Thanks in advance, Mathias
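
    Since traffic already flows one way, this looks more like Windows Firewall scoping than the /32 client mask. A hedged sketch of explicitly allowing the VPN pool in (run in an elevated prompt on the server; rule names are made up and the port list should match the services you actually want to expose):

      netsh advfirewall firewall add rule name="VPN pool - ICMP" dir=in action=allow protocol=icmpv4 remoteip=192.168.88.0/24
      netsh advfirewall firewall add rule name="VPN pool - SMB" dir=in action=allow protocol=TCP localport=445 remoteip=192.168.88.0/24
      netsh advfirewall firewall add rule name="VPN pool - internal services" dir=in action=allow protocol=TCP localport=80,443 remoteip=192.168.88.0/24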

  • Migrating to Windows Server 2008 R2 Domain Controllers - a few Questions/Issues

    - by Chris
    OK, so here's our setup: We have 2 Windows 2003 domain controllers. I am trying to replace them with Windows Server 2008 R2. The Win2k3 servers are DC01 and DC02; the Win2k8 servers are DC1 and DC2. I prepared the Windows Server 2003 forest schema for a domain controller that runs Windows Server 2008 or Windows Server 2008 R2. Then, with both of the new servers up as member servers, I dcpromo'd DC1 using the advanced option and added it successfully to my existing domain. Its roles are GC, DNS and Active Directory Domain Services. I transferred the PDC, RID pool manager and Infrastructure master FSMO roles to the new DC (DC1). The Schema master and Domain naming master are still on the old DC (DC01). The first issue I'm encountering: when I dcpromo the second DC (DC2), select "Replicate data over the network from an existing domain controller" and pick the new DC (DC1) to replicate from, I get the following error: "Failed to identify the requested replica partner (dc1.xxx.org) as a valid domain controller with a machine account for (DC2$). This is likely due to either the machine account not being replicated to this domain controller because of replication latency or the domain controller not advertising the Active Directory Domain Services. Please consider retrying the operation with \dc01.xxx.org as the replica partner. The server is unwilling to process the request." Is this because the Schema master and Domain naming master roles are still on the old DC (DC01)? And if so, if I transfer the Schema master and Domain naming master roles to DC1, what is the risk of breaking my AD? I'm a little paranoid because this process HAS to be transparent. ANY downtime or interruption will result in me getting a verbal ass-kicking from my I.T. Director. Both of the new servers' DNS settings point to the old DNS servers (DC01 and DC02), not to themselves, by the way. Thanks in advance -Chris
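
    For what it's worth, that dcpromo error is more commonly replication latency of the freshly created DC2$ machine account than a problem with where the Schema/Domain Naming masters live. A hedged sketch of the checks, and of moving the two remaining roles if desired (hostnames as given in the post; run on DC1):

      rem confirm replication is healthy before the second promotion
      repadmin /replsummary
      repadmin /showrepl dc1.xxx.org

      rem optional: transfer the remaining FSMO roles to DC1 with ntdsutil
      ntdsutil
        roles
        connections
        connect to server dc1.xxx.org
        quit
        transfer schema master
        transfer naming master
        quit
        quit

    Transferring (as opposed to seizing) the roles is an online, reversible operation, but it is still sensible to do it in a maintenance window and verify afterwards with netdom query fsmo.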

  • Distributed File Systems.

    - by GruffTech
    So, I've been reading several articles around ServerFault as well as Google (for example, this link). My requirements are very similar to the link above; however, I'd also like dynamic, or at least resizeable, file volumes, so that if necessary I can add 4-5 servers to the pool and then expand the volume. Are there any distributed file systems that support that, to save me some time? Thanks! LustreFS will be my next test cluster to build. GlusterFS: I've built a 3-machine test GlusterFS cluster; however, I quickly became aware of several of its limitations that don't seem to be made clearly public. One, I can't seem to resize a volume. Once a volume is created, it's done. Which seems absurd - why have a fully scalable file system if I can't scale a volume? So maybe I'm doing something wrong; I'm not sure. Amazon S3, while it gives the cheapest startup, adds too much cost when broken down per client per month, so it's out. Building my own system, when prorated over several years with no bandwidth costs, is significantly cheaper. MogileFS isn't an option, as we'd like this server to be a SAN replacement for storing tons of media from a multitude of systems, which for us means it needs to be POSIX compliant so it can be remotely mounted via NFS or CIFS.
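
    On the GlusterFS point specifically: volumes can in fact be grown after creation by adding bricks and rebalancing, so the "can't resize" limitation may just be a missing step. A hedged sketch (volume and brick names are invented; for replicated volumes, bricks must be added in multiples of the replica count):

      gluster volume add-brick myvol server4:/export/brick1 server5:/export/brick1
      gluster volume rebalance myvol start
      gluster volume rebalance myvol status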

  • HAProxy ACLs for balancing based on URL request

    - by Elgreco08
    I'm using Ubuntu with HAProxy version 1.4.13. It's load balancing two subdomains:
    app1.domain.com
    app2.domain.com
    Now I want to be able to use ACLs to send requests to the right backends based on the requested URL. For example:
    http://app1.domain.com/path/games/index.php should be sent to backend1
    http://app1.domain.com/path/photos/index.php should be sent to backend2
    http://app2.domain.com/path/mail/index.php should be sent to backend3
    http://app2.domain.com/path/wazap/index.php should be sent to backend4
    I used the following ACLs:
    frontend http-farm
      bind 0.0.0.0:80
      acl app1web hdr_beg(host) -i app1   # for http://app1.domain.com
      acl app2web hdr_beg(host) -i app2   # for http://app2.domain.com
      acl msg-url-1 url_reg ^\/path/games/.*
      acl msg-url-2 url_reg ^\/path/photos/.*
      acl msg-url-3 url_reg ^\/path/mail/.*
      acl msg-url-4 url_reg ^\/path/wazap/.*
      use_backend games if msg-url-1 app1web
      use_backend photos if msg-url-2 app2web
      use_backend mail if .....
    backend games
      option httpchk GET /alive.php HTTP/1.1\r\nHost:\ app1.domain.com
      option forwardfor
      balance roundrobin
      server appsrv-1 192.168.1.10:80 check inter 2000 fall 3
      server appsrv-2 192.168.1.11:80 check inter 2000 fall 3
    backend photos
      option httpchk GET /alive.php HTTP/1.1\r\nHost:\ app2.domain.com
      option forwardfor
      balance roundrobin
      server appsrv-1 192.168.1.13:80 check inter 2000 fall 3
      server appsrv-2 192.168.1.14:80 check inter 2000 fall 3
    ....
    Since the paths mail, photos, etc. will be application pools on IIS, I want to monitor whether they are alive; if a pool does not respond, HAProxy should stop serving it. My problem is most likely in the regular expression in the ACL:
    acl msg-url-4 url_reg ^\/path/wazap/.*
    What should I change in the ACL to make it work? Thanks for any hints.
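
    A hedged sketch of an alternative that avoids the regex entirely: HAProxy 1.4 can match the path prefix with path_beg, and each use_backend line can require both the host ACL and the path ACL, following the mapping described at the top of the post:

      acl app1web hdr_beg(host) -i app1
      acl app2web hdr_beg(host) -i app2
      acl url-games  path_beg -i /path/games
      acl url-photos path_beg -i /path/photos
      acl url-mail   path_beg -i /path/mail
      acl url-wazap  path_beg -i /path/wazap

      use_backend games  if app1web url-games
      use_backend photos if app1web url-photos
      use_backend mail   if app2web url-mail
      use_backend wazap  if app2web url-wazap

    The per-backend httpchk checks then handle pulling a dead IIS application pool out of rotation, exactly as already configured.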

  • Cannot access server shares over VPN

    - by DuncanDavies
    I've set up a single hosted server to use as a development environment for a web-based application. The web app is served up fine on port 80, however I'm struggling to get my VPN to behave how I'd expect so the developers don't have the access they require. The VPN connects fine and I can access the back-end database (SQL Server) which resides on the server with the client tools from the laptops. However they cannot access any shared folders. The server's local IP address is 10.x.x.x, and I've assigned a static IP address pool to RRAS (of 192.168.100.1 - 20). The clients pick up a valid IP Address (i.e. 192.168.100.9) when they connect. There is no name resolution setup, DNS or WINS. When connected via VPN the clients can ping the server (192.168.100.1) by IP Address, but cannot map a drive to a shared folder (net use * \\192.168.100.1\xxxxx) - I get 'System error 53 has occurred. The network path was not found.' I don't understand why I can ping by the ip, but not map by it. Some details: Server OS is Windows 2008 (Datacenter) VPN is SSTP using RRAS Clients are all Windows 7 I've tried temporarily disabling the firewalls So, why can we not access the file system when everything else (ping, RDP, SQL Server clients tools) works? Thanks for your help Duncan
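
    "System error 53" while ping works usually means TCP 445 is blocked or SMB is simply not being offered on that path, rather than a routing problem. A few hedged checks on the server (the built-in File and Printer Sharing firewall rules are often scoped to "Local subnet", which does not include the 192.168.100.x pool):

      rem is SMB listening, and is anything actually shared?
      netstat -an | find ":445"
      net share

      rem enable the built-in sharing rule group for all profiles
      netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=Yes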

  • How to manage process-to-CPU-core affinities?

    - by Philippe
    I use a distributed user-space filesystem (GlusterFS) and I would like to be sure the GlusterFS processes always have the computing power they need. Each execution node of my grid has 2 CPUs, with 4 cores per CPU and 2 threads per core (16 "processors" are seen by Linux). My goal is to guarantee that GlusterFS processes have enough processing power to be reliable, responsive and fast. (There is no marketing here, just the dreams of a sysadmin ;-) I consider two main points: GlusterFS processes, and I/O for data access (on local disks, or remote disks). I thought about binding the Linux kernel and the GlusterFS instances to a specific "processor". I would like to be sure that: no grid job will impact the kernel and the GlusterFS instances; researchers' jobs won't be affected by system processes (I'd like to reserve a pool of cores for job execution and be sure that no system process will use these CPUs). But what about I/O? As we handle a huge amount of data (several terabytes), we'll have a lot of interrupts. How can I distribute these operations across my processors? What are the "best practices"? Thanks for your comments!
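
    One common pattern, sketched here with arbitrary core numbers: fence the kernel threads and GlusterFS onto a small set of cores with cpusets, run grid jobs only in the remaining "shielded" cores, and steer disk/NIC interrupts onto the system cores so I/O completions do not land on the research cores. Assumes the cpuset/cset tooling is installed; IRQ numbers come from /proc/interrupts:

      # cores 0-3 for kernel + GlusterFS, cores 4-15 shielded for grid jobs
      cset shield --cpu=4-15 --kthread=on
      cset shield --exec /path/to/grid_job -- --job-args

      # pin the running Gluster daemons onto the system cores
      for p in $(pidof glusterfsd glusterfs); do taskset -cp 0-3 $p; done

      # steer a device's IRQ (e.g. IRQ 45) onto CPUs 0-3 (mask 0x0f)
      echo 0f > /proc/irq/45/smp_affinity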

  • Load balanced IIS. Should I use NLB, or linux-based reverse proxy, or something else?

    - by growse
    What would be the best approach for load-balancing at least 2-3 Windows 2008 R2 IIS webservers running a multitude of .NET applications? My choices appear to be: 1) Hardware-based network device load balancer, like a Cisco CSS 2) Windows NLB 3) Some sort of linux based proxy, either haproxy or other The three servers sit as VMs on a vSphere farm, so I have the ability to clone to up the instance count in times of high load. I control the switch that the vSphere hosts are plugged into (Cisco 3750), but don't control the switching/routing infrastructure beyond that to the clients. (1) Is too expensive, and probably overkill for my needs. I've included this in case someone figures out a cunning way to do it on my existing network kit, which I doubt. (2) would seem to be the obvious "built-in" option, but seems to be quite fiddly messing around with network interfaces, multicast, and generally other things that seem to be needlessly complex. It's also fairly stupid, in that it can't remove hosts from the pool if they start throwing 500 errors or otherwise go wrong (3) is the most interesting option, as it would appear to offer the most flexibility and customizability, but without having to mess around with the network. However, while I'm familiar with the reverse-proxy capabilities of lighttpd etc, I'm not that well read on other options like HAProxy, which might be able to offer a lot more. Which would you go for, and is there anything I've not thought of?
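
    On point (3): the health-based ejection that NLB lacks is a one-liner per server in HAProxy. A minimal hedged sketch (IPs and the check URL are placeholders); for ASP.NET session affinity you would typically add cookie-based stickiness or move session state out of process:

      frontend www
          bind *:80
          default_backend iis_farm

      backend iis_farm
          balance roundrobin
          option httpchk GET /healthcheck.aspx
          # a node is dropped after 3 failed checks and re-added after 2 good ones
          server iis1 10.0.0.11:80 check inter 2000 fall 3 rise 2
          server iis2 10.0.0.12:80 check inter 2000 fall 3 rise 2
          server iis3 10.0.0.13:80 check inter 2000 fall 3 rise 2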

  • Cisco ASA user authentication options - OpenID, public RSA sig, others?

    - by Ryan
    My organization has a Cisco ASA 5510 which I have made act as a firewall/gateway for one of our offices. Most resources a remote user would come looking for exist inside. I've implemented the usual deal - basic inside networks with outbound NAT, one primary outside interface with some secondary public IPs in the PAT pool for public-facing services, a couple site-to-site IPSec links to other branches, etc. - and I'm working now on VPN. I have the WebVPN (clientless SSL VPN) working and even traversing the site-to-site links. At the moment I'm leaving a legacy OpenVPN AS in place for thick client VPN. What I would like to do is standardize on an authentication method for all VPN then switch to the Cisco's IPSec thick VPN server. I'm trying to figure out what's really possible for authentication for these VPN users (thick client and clientless). My organization uses Google Apps and we already use dotnetopenauth to authenticate users for a couple internal services. I'd like to be able to do the same thing for thin and thick VPN. Alternatively a signature-based solution using RSA public keypairs (ssh-keygen type) would be useful to identify user@hardware. I'm trying to get away from legacy username/password auth especially if it's internal to the Cisco (just another password set to manage and for users to forget). I know I can map against an existing LDAP server but we have LDAP accounts created for only about 10% of the user base (mostly developers for Linux shell access). I guess what I'm looking for is a piece of middleware which appears to the Cisco as an LDAP server but will interface with the user's existing OpenID identity. Nothing I've seen in the Cisco suggests it can do this natively. But RSA public keys would be a runner-up, and much much better than standalone or even LDAP auth. What's really practical here?

  • Using virtualization infrastructure for J2EE application distribution- viable alternative?

    - by Dan
    Our company builds custom J2EE web solutions. At the moment, we use standard J2EE distribution mechanisms (ear/war archives). Application servers are generally administered by our clients' IT departments, and since we do not have complete control over the environment, a lot of entropy can be introduced into the solution. For example: the latest app server patch not applied; conflicting third-party libraries inside the app server root; server runtime and tuning parameters not configured (for example, the number of connections in the database pool). We are looking into using virtualization infrastructure for J2EE application distribution. Instead of sending the ear/war archive, we'd send an image with an application server node and our application preinstalled. Some of the benefits are the same as for using virtualization infrastructure in general, namely better use of hardware resources. For us, it reduces the entropy of the hosting infrastructure - a distributed VM should be less affected by the hosting environment. So far, the downside I see is in application server licenses; clients will have to use dedicated servers for our solution, but this is generally already done that way. Also, there is complexity in maintaining virtualization infrastructure, but this is often something IT departments have more experience with than administering and fine-tuning J2EE solutions. Does anyone have experience with this model? What are the downsides? Would we not just replace one type of complexity with another?

  • Free tiered storage automation in linux?

    - by NginUS
    I have a couple virtualized fileservers running in QEMU/KVM on ProxmoxVE. The physical host has 4 storage tiers with significant performance variances. They're attached both locally and via NFS. These will be provided to the fileserver(s) as local disks, abstracted into pools, and handling multiple streams of data for the network. My aim is for this abstraction layer to intelligently pool the tiers. There's a similar post on the site here: Home-brew automatic tiered storage solutions with Linux? (Memory - SSD - HDD - remote storage) in which the accepted answer was a suggestion to abandon a linux solution for NexentaStor. I like the idea of running NexentaStor. It almost fits the bill. NexentaStor provides Hybrid Storage Pools, and I love the idea of checksumming. 16TB without incurring licensing fees is a huge plus as well. After the expense of the hardware, free is about all my budget can handle. I don't know if zfs pools are adaptive or dynamically allocated based on load, but it becomes irrelevant since NexentaStor doesn't support virtio network or block drivers, which is a must in my environment. Then I saw a commercial solution called SmartMove: http://www.enigmadata.com/smartmove.html And it looks like a step in the right direction, but I'm so broke I'd be wasting their time to even ask for a quote, so I'm looking for another option. I'm after a linux implementation that supports virtio drivers, and I'm at a loss as to which software is up to it.
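
    If the only blocker with NexentaStor is virtio support, the same hybrid-pool behaviour is available from ZFS itself (for example ZFS on Linux inside the guest): the ARC/L2ARC/SLOG hierarchy is how ZFS "tiers" between RAM, SSD and spinning disk, though it is caching rather than policy-driven file migration. A hedged sketch with placeholder device names:

      # spinning disks as the main pool, SSD partitions as read cache (L2ARC) and sync-write log (SLOG)
      zpool create tank raidz /dev/vdb /dev/vdc /dev/vdd
      zpool add tank cache /dev/vde1
      zpool add tank log /dev/vdf1
      zpool status tank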

  • Cisco Spam Blocker, Iron Port, Lotus Domino, Integration Help

    - by NickToyota
    Hi ServerFault universe, I work for a medium-sized (roughly 200 user) company. We are attempting to integrate our new Cisco spam blocker (IronPort) appliance into our network so that it acts as an incoming filter and then passes mail off to our Lotus Domino mail server, and vice versa. The way our network is currently set up, an MX record points to our Domino SMTP incoming server, which is currently set up to be the inbound gateway and filter (using Symantec Domino mail software). We want to replace that inbound gateway with the IronPort. Our company has also invested in a pool of external IP addresses, which I believe have been assigned to our web and email servers. What would the proper course of action be to successfully integrate the device? An MX record change? Replace the Domino gateway completely with the IronPort? We attempted to set the IronPort device to the external IP that our MX record points to, without much success. Any help on proper setup would be greatly appreciated.

  • How to configure IIS 7.5 to allow special chars in URL for ASP.NET 3.5?

    - by Sebastian P.R. Gingter
    I'm trying to configure my IIS 7.5 to allow specials chars in the url for ASP.NET. This is important to support wide-spread legacy url's on a new system. Sample url: http://mydomain.com/FileWith%inTheName.html This would be encoded in the url and requested as http://mydomain.com/FileWith25%inTheName.html This simply works, when creating a new web in IIS 7.5, placing a file with the percentage sign in the file name in the web root and pointing the browser to it. This does not work, however, when the web site is an ASP.NET application. ASP.NET always returns a 400.0 - Bad Request error in the WindowsAuthentication module from the StaticFile handler, when pointing to that url. It however displays the requested url correctly and also resolves correctly to the correct physical file (the information from the field 'Physical Path' from the Server error page points to the physically available file). There are hints on how to enable this, so I followed the instructions on these websites step by step: http://dirk.net/2008/06/09/ampersand-the-request-url-in-iis7/ http://adorr.net/2010/01/configure-iis-to-accept-url-with-special-characters.html The second one actually sums up the information from the first post and adds some more information about x64 systems (we're running x64) and on an additional web.config change for this. I tried all that, and still can't get this running from an asp.net web application. And yes: I rebooted after applying the registry changes. So, what do I have to do in addition to the settings described in above posts, to support the legacy url's which contain percentage characters? Additional info: Application Pool mode is integrated. Push after some days. No idea anyone?
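
    For the %-in-filename case specifically, the piece that most often gets missed once ASP.NET is in the pipeline is request filtering's double-escaping rule; a hedged web.config sketch for the site (note this deliberately relaxes a default safety check, so it is worth limiting it to the site that needs it):

      <configuration>
        <system.webServer>
          <security>
            <requestFiltering allowDoubleEscaping="true" />
          </security>
        </system.webServer>
      </configuration>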

  • PHP crashing during oAuth scripts

    - by FunkyChicken
    I just installed Nginx 1.2.4 and PHP 5.4.0 (from svn) with php-fpm, on CentOS 5.8 x64. The problem I have is that PHP crashes the moment I run any social oAuth scripts. I have tried to log into Facebook, Twitter and Google with various scripts that I know work on my other servers. When I load the scripts I get a 502 error from Nginx, and I find these errors in the logs. In the php-fpm log: WARNING: [pool www] child 23821 exited on signal 11 (SIGSEGV) after 1132.862984 seconds from start. In the nginx log: ERROR: recv() failed (104: Connection reset by peer) while reading response header from upstream. From what I can see, it goes wrong when PHP tries to make a request to any of the oAuth servers. https://github.com/mahmudahsan/PHP-SDK-3.0---Graph-API-base-Facebook-Connect-Tutorial-Source for example is one of the scripts that works perfectly on my other machines, but causes PHP to crash. I found http://stackoverflow.com/questions/3616191/nginx-php-fpm-502-bad-gateway which seems to be a similar problem, but I cannot find a way to solve it. +++ UPDATE +++ I have now been doing some debugging in one of the scripts that is playing up. If you go to line 808 ( http://pastebin.com/gSnzRtXb ), it runs the curl_exec() command. When that is run, it crashes. If I put echo 'test';exit; just above that line, it echoes correctly; if I put it below that line, PHP crashes. Which means it's line 808 that causes the crash. So I made a very simple script to do some testing: http://pastebin.com/Rshnyhcm which also uses curl_exec, but that runs just fine. So I started to dig deeper into that request from the Facebook script to see what values the $opts array from line 806 contains. Output of that array is: http://pastebin.com/Cq9ffd3R What the problem is, I still have no clue :(
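
    A SIGSEGV inside curl_exec() on CentOS 5 is frequently a libcurl SSL-backend mismatch (NSS vs OpenSSL) between what PHP was built against and what gets loaded at run time, rather than anything in the oAuth code. Hedged ways to narrow it down (paths and options depend on how this php-fpm was built):

      # which SSL backend does libcurl report?
      curl -V
      php -i | grep -i 'ssl version'

      # what is the PHP binary linked against at run time?
      ldd $(which php-fpm) | egrep -i 'curl|nss|ssl'

      # catch a backtrace from a single foreground worker
      gdb --args $(which php-fpm) --nodaemonize
      # inside gdb: run, trigger the oAuth request, then: bt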

  • SQL Server 2008: Getting Login failed for user "Domain\User". Failed to open the explicitly specified database [CLIENT: IP.ADD.RR.ESS]

    - by GodEater
    This is a very similar issue to " SQL Server 2008 login problem with ASP.NET application: Failed to open the explicitly specified database " which unfortunately seems to have gone unsolved. My issue here is subtly different. Firstly the account failing login is not 'NT AUTHORITY\NETWORK SERVICE' - it's an actual domain account. Secondly, there are two machines involved - I gathered from the first question it was a single machine running both the IIS and SQL instances. The application which is trying to connect to the database is an ASP.NET one running on another server (if that makes any different, I'm not sure it does.) The ConnectionString being used in the web.config for the application is : data source=MySQLServer;initial catalog=MyDatabase;integrated security=sspi; And the Application Pool is set to NetworkService for Identity. So - in the web app, I get the following error : Cannot open database "MyDatabase" requested by the login. The login failed. Login failed for user 'MyDomain\WebServerMachineName$' In the SQL Server logs I see : Login failed for user 'MyDomain\WebServerMachineName$'. Reason: Failed to open the explicitly specified database. [CLIENT: Web.Server.IP.Address] Running this bit of SQL against the database in question : USE [MyDatabase] GO SELECT SDP.name AS [User Name], SDP.type_desc AS [User Type], UPPER(SDPS.name) AS [Database Role] FROM sys.database_principals SDP INNER JOIN sys.database_role_members SDRM ON SDP.principal_id=SDRM.member_principal_id INNER JOIN sys.database_principals SDPS ON SDRM.role_principal_id = SDPS.principal_id Gets me this result : MyDomain\WebServerMachineName$ WINDOWS_USER DB_DDLADMIN MyDomain\WebServerMachineName$ WINDOWS_USER DB_DATAREADER MyDomain\WebServerMachineName$ WINDOWS_USER DB_DATAWRITER Which appears to me to indicate I've got the permissions right. Anyone have any idea why it's not working, or how I can narrow the issue down some more?
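
    Given that the database user exists but the login still cannot open the database, a hedged guess is that the server-level login for the machine account is missing, or that its SID no longer matches the database user (typical after restoring the database from another server). A T-SQL sketch to check and repair:

      -- is there a server-level login for the web server's machine account?
      SELECT name, sid FROM sys.server_principals WHERE name = 'MyDomain\WebServerMachineName$';

      -- create it if missing, then re-point the (possibly orphaned) database user at it
      CREATE LOGIN [MyDomain\WebServerMachineName$] FROM WINDOWS;
      GO
      USE [MyDatabase];
      ALTER USER [MyDomain\WebServerMachineName$] WITH LOGIN = [MyDomain\WebServerMachineName$];
      GO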

  • ZFS with L2ARC (SSD) slower for random seeks than without L2ARC

    - by Florian Kruse
    I am currently testing ZFS (Opensolaris 2009.06) in an older fileserver to evaluate its use for our needs. Our current setup is as follows: Dual core (2,4 GHz) with 4 GB RAM 3x SATA controller with 11 HDDs (250 GB) and one SSD (OCZ Vertex 2 100 GB) We want to evaluate the use of a L2ARC, so the current ZPOOL is: $ zpool status pool: tank state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM afstank ONLINE 0 0 0 raidz1 ONLINE 0 0 0 c11t0d0 ONLINE 0 0 0 c11t1d0 ONLINE 0 0 0 c11t2d0 ONLINE 0 0 0 c11t3d0 ONLINE 0 0 0 raidz1 ONLINE 0 0 0 c13t0d0 ONLINE 0 0 0 c13t1d0 ONLINE 0 0 0 c13t2d0 ONLINE 0 0 0 c13t3d0 ONLINE 0 0 0 cache c14t3d0 ONLINE 0 0 0 where c14t3d0 is the SSD (of course). We run IO tests with bonnie++ 1.03d, size is set to 200 GB (-s 200g) so that the test sample will never be completely in ARC/L2ARC. The results without SSD are (average values over several runs which show no differences) write_chr write_blk rewrite read_chr read_blk random seeks 101.998 kB/s 214.258 kB/s 96.673 kB/s 77.702 kB/s 254.695 kB/s 900 /s With SSD it becomes interesting. My assumption was that the results should be in worst case at least the same. While write/read/rewrite rates are not different, the random seek rate differs significantly between individual bonnie++ runs (between 188 /s and 1333 /s so far), average is 548 +- 200 /s, so below the value w/o SSD. So, my questions are mainly: Why do the random seek rates differ so much? If the seeks are really random, they should not differ much (my assumption). So, even if the SSD is impairing the performance it should be the same in each bonnie++ run. Why is the random seek performance worse in most of the bonnie++ runs? I would assume that some part of the bonnie++ data is in the L2ARC and random seeks on this data performs better while random seeks on other data just performs similarly like before.

  • Virtual Machine Network Architecture, Isolating Public and Private Networks

    - by Mark
    I'm looking for some insight into best practices for network traffic isolation within a virtual environment, specifically under VMWARE ESXi. Currently I have (in testing) 1 hardware server running ESXi but i expect to expand this to multiple pieces of hardware. The current setup is as follows: 1 pfsense VM, this VM accepts all outside (WAN/internet) traffic and performs firewall/port forwarding/NAT functionality. I have multiple public IP addresses sent to the this VM that are used for access to individual servers (via per incoming IP port forwarding rules). This VM is attached to the private (virtual) network that all other VMs are on. It also manages a VPN link into the private network with some access restrictions. This isn't the perimeter firewall but rather the firewall for this virtual pool only. I have 3 VMs that communicate with each other, as well as have some public access requirements: 1 LAMP server running an eCommerce site, public internet accessible 1 accounting server, access via windows server 2008 RDS services for remote access by users 1 inventory/warehouse management server, VPN to client terminals in warehouses These servers constantly talk with each other for data synchronization. Currently all the servers are on the same subnet/virtual network and connected to the internet through the pfsense VM. The pfsense firewall uses port forwarding and NAT to allow outside access to the servers for services and for server access to the internet. My main question is this: Is there a security benefit to adding a second virtual network adapter to each server and controlling traffic such that all server to server communication is on one separate virtual network, while any access to the outside world is routed through the other network adapter, through the firewall, and on the the internet. This is the type of architecture i would use if these were all physical servers, but i'm unsure if the networks being virtual changes the way i should approach locking down this system. Thank you for any thoughts or direction to any appropriate literature.

    Read the article

  • What is the best way/software to manage multiple short-lived instances of virtual machines?

    - by Newtopian
    Hi, we have a QA department that has to test our software on multiple combinations of OS and DBMS. With Windows spewing out many different versions, the combinatorial math of all this can be daunting. So we decided on virtualizing our setups, but so far that only displaces the problem. Hardware is expensive and we need many different combinations, far exceeding our server capacity to deliver. Also, these instances are throwaway: once the test is complete we no longer need them; furthermore, to ensure proper test isolation we should start fresh from a new instance. Lastly, we only need a small subset of these systems online at any given time. What I am looking for is a way to manage inventory so that our QA staff can order instances to be put online as required and discarded once used. Instances are spawned from a pool of freshly installed systems with the appropriate combination, ready to accept our software. It should also be possible for two or more people to start the same instance at the same time, though we could manage without this if it proves too complex to put in place. Finally, our budget is pretty thin; we can probably make some purchases, but ideally expenditures should be kept to a minimum. To summarize, we should be able to: bring instances online on demand (ideally with queue and scheduling management); destroy instances on demand; keep masters in inventory but not online; manage a large inventory of VMs (30-100, maybe more) with a small staff of users (5-10); allow adding, deleting and changing instances in the inventory (bring online, make changes and check back in, or create new and check in); and allow a few long-lived instances for support tools (normal VM server usage). Thanks for your answers.

    Read the article

  • MySQL query, 2 similar servers, 2 minute difference in execution times

    - by mr12086
    I had a similar question on stack overflow, but it seems to be more server/mysql setup related than coding. The queries below all execute instantly on our development server where as they can take upto 2 minutes 20 seconds. The query execution time seems to be affected by home ambiguous the LIKE string's are. If they closely match a country that has few matches it will take less time, and if you use something like 'ge' for germany - it will take longer to execute. But this doesn't always work out like that, at times its quite erratic. Sending data appears to be the culprit but why and what does that mean. Also memory on production looks to be quite low (free memory)? Production: Intel Quad Xeon E3-1220 3.1GHz 4GB DDR3 2x 1TB SATA in RAID1 Network speed 100Mb Ubuntu Development Intel Core i3-2100, 2C/4T, 3.10GHz 500 GB SATA - No RAID 4GB DDR3 UPDATE 2 : mysqltuner output: [prod] -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.1.61-0ubuntu0.10.04.1 [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 103M (Tables: 180) [--] Data in InnoDB tables: 491M (Tables: 19) [!!] Total fragmented tables: 38 -------- Security Recommendations ------------------------------------------- [OK] All database users have passwords assigned -------- Performance Metrics ------------------------------------------------- [--] Up for: 77d 4h 6m 1s (53M q [7.968 qps], 14M conn, TX: 87B, RX: 12B) [--] Reads / Writes: 98% / 2% [--] Total buffers: 58.0M global + 2.7M per thread (151 max threads) [OK] Maximum possible memory usage: 463.8M (11% of installed RAM) [OK] Slow queries: 0% (12K/53M) [OK] Highest usage of available connections: 22% (34/151) [OK] Key buffer size / total MyISAM indexes: 16.0M/10.6M [OK] Key buffer hit rate: 98.7% (162M cached / 2M reads) [OK] Query cache efficiency: 20.7% (7M cached / 36M selects) [!!] Query cache prunes per day: 3934 [OK] Sorts requiring temporary tables: 1% (3K temp sorts / 230K sorts) [!!] Joins performed without indexes: 71068 [OK] Temporary tables created on disk: 24% (3M on disk / 13M total) [OK] Thread cache hit rate: 99% (690 created / 14M connections) [!!] Table cache hit rate: 0% (64 open / 85M opened) [OK] Open file limit used: 12% (128/1K) [OK] Table locks acquired immediately: 99% (16M immediate / 16M locks) [!!] InnoDB data size / buffer pool: 491.9M/8.0M -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance Enable the slow query log to troubleshoot bad queries Adjust your join queries to always utilize indexes Increase table_cache gradually to avoid file descriptor limits Variables to adjust: query_cache_size (> 16M) join_buffer_size (> 128.0K, or always use indexes with joins) table_cache (> 64) innodb_buffer_pool_size (>= 491M) [dev] -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.1.62-0ubuntu0.11.10.1 [!!] 
Switch to 64-bit OS - MySQL cannot currently use all of your RAM -------- Storage Engine Statistics ------------------------------------------- [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 185M (Tables: 632) [--] Data in InnoDB tables: 967M (Tables: 38) [!!] Total fragmented tables: 73 -------- Security Recommendations ------------------------------------------- [OK] All database users have passwords assigned -------- Performance Metrics ------------------------------------------------- [--] Up for: 1d 2h 26m 9s (5K q [0.058 qps], 1K conn, TX: 4M, RX: 1M) [--] Reads / Writes: 99% / 1% [--] Total buffers: 58.0M global + 2.7M per thread (151 max threads) [OK] Maximum possible memory usage: 463.8M (11% of installed RAM) [OK] Slow queries: 0% (0/5K) [OK] Highest usage of available connections: 1% (2/151) [OK] Key buffer size / total MyISAM indexes: 16.0M/18.6M [OK] Key buffer hit rate: 99.9% (60K cached / 36 reads) [OK] Query cache efficiency: 44.5% (1K cached / 2K selects) [OK] Query cache prunes per day: 0 [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 44 sorts) [OK] Temporary tables created on disk: 24% (162 on disk / 666 total) [OK] Thread cache hit rate: 99% (2 created / 1K connections) [!!] Table cache hit rate: 1% (64 open / 4K opened) [OK] Open file limit used: 8% (88/1K) [OK] Table locks acquired immediately: 100% (1K immediate / 1K locks) [!!] InnoDB data size / buffer pool: 967.7M/8.0M -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance Enable the slow query log to troubleshoot bad queries Increase table_cache gradually to avoid file descriptor limits Variables to adjust: table_cache (> 64) innodb_buffer_pool_size (>= 967M) UPDATE 1: When testing the queries listed here there is usually no more than one other query taking place, and usually none. Because production is actually handling apache requests that development gets very few of as it's only myself and 1 other who accesses it - could the 4GB of RAM be getting exhausted by using the single machine for both apache and mysql server? Production: sudo hdparm -tT /dev/sda /dev/sda: Timing cached reads: 24872 MB in 2.00 seconds = 12450.72 MB/sec Timing buffered disk reads: 368 MB in 3.00 seconds = 122.49 MB/sec sudo hdparm -tT /dev/sdb /dev/sdb: Timing cached reads: 24786 MB in 2.00 seconds = 12407.22 MB/sec Timing buffered disk reads: 350 MB in 3.00 seconds = 116.53 MB/sec Server version(mysql + ubuntu versions): 5.1.61-0ubuntu0.10.04.1 Development: sudo hdparm -tT /dev/sda /dev/sda: Timing cached reads: 10632 MB in 2.00 seconds = 5319.40 MB/sec Timing buffered disk reads: 400 MB in 3.01 seconds = 132.85 MB/sec Server version(mysql + ubuntu versions): 5.1.62-0ubuntu0.11.10.1 ORIGINAL DATA : This query is NOT the query in question but is related so ill post it. 
SELECT f.form_question_has_answer_id FROM form_question_has_answer f INNER JOIN project_company_has_user p ON f.form_question_has_answer_user_id = p.project_company_has_user_user_id INNER JOIN company c ON p.project_company_has_user_company_id = c.company_id INNER JOIN project p2 ON p.project_company_has_user_project_id = p2.project_id INNER JOIN user u ON p.project_company_has_user_user_id = u.user_id INNER JOIN form f2 ON p.project_company_has_user_project_id = f2.form_project_id WHERE (f2.form_template_name = 'custom' AND p.project_company_has_user_garbage_collection = 0 AND p.project_company_has_user_project_id = '29') AND (LCASE(c.company_country) LIKE '%ge%' OR LCASE(c.company_country) LIKE '%abcde%') AND f.form_question_has_answer_form_id = '174' And the explain plan for the above query is, run on both dev and production produce the same plan. +----+-------------+-------+--------+----------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+-------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+--------+----------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+-------------+ | 1 | SIMPLE | p2 | const | PRIMARY | PRIMARY | 4 | const | 1 | Using index | | 1 | SIMPLE | f | ref | form_question_has_answer_form_id,form_question_has_answer_user_id | form_question_has_answer_form_id | 4 | const | 796 | Using where | | 1 | SIMPLE | u | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using index | | 1 | SIMPLE | p | ref | project_company_has_user_unique_key,project_company_has_user_user_id,project_company_has_user_company_id,project_company_has_user_project_id | project_company_has_user_user_id | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using where | | 1 | SIMPLE | f2 | ref | form_project_id | form_project_id | 4 | const | 15 | Using where | | 1 | SIMPLE | c | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_company_id | 1 | Using where | +----+-------------+-------+--------+----------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+-------------+ This query takes 2 minutes ~20 seconds to execute. 
The query that is ACTUALLY being run on the server is this one: SELECT COUNT(*) AS num_results FROM (SELECT f.form_question_has_answer_id FROM form_question_has_answer f INNER JOIN project_company_has_user p ON f.form_question_has_answer_user_id = p.project_company_has_user_user_id INNER JOIN company c ON p.project_company_has_user_company_id = c.company_id INNER JOIN project p2 ON p.project_company_has_user_project_id = p2.project_id INNER JOIN user u ON p.project_company_has_user_user_id = u.user_id INNER JOIN form f2 ON p.project_company_has_user_project_id = f2.form_project_id WHERE (f2.form_template_name = 'custom' AND p.project_company_has_user_garbage_collection = 0 AND p.project_company_has_user_project_id = '29') AND (LCASE(c.company_country) LIKE '%ge%' OR LCASE(c.company_country) LIKE '%abcde%') AND f.form_question_has_answer_form_id = '174' GROUP BY f.form_question_has_answer_id;) dctrn_count_query; With explain plans (again same on dev and production): +----+-------------+-------+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+------------------------------+ | 1 | PRIMARY | NULL | NULL | NULL | NULL | NULL | NULL | NULL | Select tables optimized away | | 2 | DERIVED | p2 | const | PRIMARY | PRIMARY | 4 | | 1 | Using index | | 2 | DERIVED | f | ref | form_question_has_answer_form_id,form_question_has_answer_user_id | form_question_has_answer_form_id | 4 | | 797 | Using where | | 2 | DERIVED | p | ref | project_company_has_user_unique_key,project_company_has_user_user_id,project_company_has_user_company_id,project_company_has_user_project_id,project_company_has_user_garbage_collection | project_company_has_user_user_id | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using where | | 2 | DERIVED | f2 | ref | form_project_id | form_project_id | 4 | | 15 | Using where | | 2 | DERIVED | c | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_company_id | 1 | Using where | | 2 | DERIVED | u | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_user_id | 1 | Using where; Using index | +----+-------------+-------+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+------------------------------+ On the production server the information I have is as follows. 
Upon execution: +-------------+ | num_results | +-------------+ | 3 | +-------------+ 1 row in set (2 min 14.28 sec) Show profile: +--------------------------------+------------+ | Status | Duration | +--------------------------------+------------+ | starting | 0.000016 | | checking query cache for query | 0.000057 | | Opening tables | 0.004388 | | System lock | 0.000003 | | Table lock | 0.000036 | | init | 0.000030 | | optimizing | 0.000016 | | statistics | 0.000111 | | preparing | 0.000022 | | executing | 0.000004 | | Sorting result | 0.000002 | | Sending data | 136.213836 | | end | 0.000007 | | query end | 0.000002 | | freeing items | 0.004273 | | storing result in query cache | 0.000010 | | logging slow query | 0.000001 | | logging slow query | 0.000002 | | cleaning up | 0.000002 | +--------------------------------+------------+ On development the results are as follows. +-------------+ | num_results | +-------------+ | 3 | +-------------+ 1 row in set (0.08 sec) Again the profile for this query: +--------------------------------+----------+ | Status | Duration | +--------------------------------+----------+ | starting | 0.000022 | | checking query cache for query | 0.000148 | | Opening tables | 0.000025 | | System lock | 0.000008 | | Table lock | 0.000101 | | optimizing | 0.000035 | | statistics | 0.001019 | | preparing | 0.000047 | | executing | 0.000008 | | Sorting result | 0.000005 | | Sending data | 0.086565 | | init | 0.000015 | | optimizing | 0.000006 | | executing | 0.000020 | | end | 0.000004 | | query end | 0.000004 | | freeing items | 0.000028 | | storing result in query cache | 0.000005 | | removing tmp table | 0.000008 | | closing tables | 0.000008 | | logging slow query | 0.000002 | | cleaning up | 0.000005 | +--------------------------------+----------+ If i remove user and/or project innerjoins the query is reduced to 30s. Last bit of information I have: Mysqlserver and Apache are on the same box, there is only one box for production. Production output from top: before & after. top - 15:43:25 up 78 days, 12:11, 4 users, load average: 1.42, 0.99, 0.78 Tasks: 162 total, 2 running, 160 sleeping, 0 stopped, 0 zombie Cpu(s): 0.1%us, 50.4%sy, 0.0%ni, 49.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 4037868k total, 3772580k used, 265288k free, 243704k buffers Swap: 3905528k total, 265384k used, 3640144k free, 1207944k cached top - 15:44:31 up 78 days, 12:13, 4 users, load average: 1.94, 1.23, 0.87 Tasks: 160 total, 2 running, 157 sleeping, 0 stopped, 1 zombie Cpu(s): 0.2%us, 50.6%sy, 0.0%ni, 49.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 4037868k total, 3834300k used, 203568k free, 243736k buffers Swap: 3905528k total, 265384k used, 3640144k free, 1207804k cached But this isn't a good representation of production's normal status so here is a grab of it from today outside of executing the queries. top - 11:04:58 up 79 days, 7:33, 4 users, load average: 0.39, 0.58, 0.76 Tasks: 156 total, 1 running, 155 sleeping, 0 stopped, 0 zombie Cpu(s): 3.3%us, 2.8%sy, 0.0%ni, 93.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 4037868k total, 3676136k used, 361732k free, 271480k buffers Swap: 3905528k total, 268736k used, 3636792k free, 1063432k cached Development: This one doesn't change during or after. 
top - 15:47:07 up 110 days, 22:11, 7 users, load average: 0.17, 0.07, 0.06 Tasks: 210 total, 2 running, 208 sleeping, 0 stopped, 0 zombie Cpu(s): 0.1%us, 0.2%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 4111972k total, 1821100k used, 2290872k free, 238860k buffers Swap: 4183036k total, 66472k used, 4116564k free, 921072k cached
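
    Both mysqltuner reports above flag the same thing: roughly 0.5-1 GB of InnoDB data sitting on the default 8 MB innodb_buffer_pool_size, so the long "Sending data" phase is largely disk reads that the idle development box happens to absorb better (and the LCASE(...) LIKE '%ge%' filter can never use an index as written, so it scans whatever the joins produce). A hedged my.cnf sketch; the values assume ~4 GB of RAM shared with Apache and a restart of MySQL 5.1, and should be tuned to the real workload:

      [mysqld]
      innodb_buffer_pool_size = 1G
      join_buffer_size        = 1M
      table_cache             = 512
      query_cache_size        = 32M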

  • Hugepages not utilized by MySQL 5.0, CentOS 5

    - by TechZilla
    I've set up Hugepages, but i'm not seeing any of them reserved. Have I missed a step, or for some particular reason, is MySQL is unable to utilize the Hugepages? I have not created a mount of hugetlbfs, although from what I read, MySQL would not call pages in such a manner. If I'm wrong, please let me know, as that would be a trivial solution. Almost all my MySQL tables are using InnoDB. NOTE: I created a hugetlbfs, no change as expected. Is it possible that rebooting would rectify this situation? I would not want to go through the procedure, as this is high availability, but would do so if necessary. This is the configurations, which I believe are relevant. /etc/sysctl.conf ... ## Huge Pages vm.nr_hugepages = 4096 vm.hugetlb_shm_group = 27 ## SHM kernel.shmmax = 34359738368 kernel.shmall = 8589934592 ... /etc/security/limits.conf ... mysql soft nofile 12888 mysql hard nofile 51552 @mysql soft memlock unlimited @mysql hard memlock unlimited /etc/my.cnf [mysqld] large-pages ... grep Huge /proc/meminfo HugePages_Total: 4096 HugePages_Free: 4096 HugePages_Rsvd: 0 Hugepagesize: 2048 kB id mysql uid=27(mysql) gid=27(mysql) groups=27(mysql) context=root:system_r:unconfined_t:SystemLow-SystemHigh tail -6 /var/log/mysqld.log InnoDB: HugeTLB: Warning: Failed to allocate 1342193664 bytes. errno 12 InnoDB HugeTLB: Warning: Using conventional memory pool 120808 15:49:25 InnoDB: Started; log sequence number 0 1729804158 120808 15:49:25 [Note] /usr/libexec/mysqld: ready for connections. Version: '5.0.95' socket: '/var/lib/mysql/mysql.sock' port: 3306 Source distribution I would really appreciate any help, I'm completely out of ideas. If I missed any more relevant configs, or diagnostics, please comment and I'll add it to the question.
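
    Two hedged observations: vm.nr_hugepages = 4096 asks for 8 GB of 2 MB pages on a box with roughly 4 GB of RAM, and /etc/security/limits.conf only applies to PAM sessions, so a mysqld started from an init script may never receive the memlock allowance. A sketch of sizing the pool to what InnoDB actually asked for and granting the limit in the service's start-up path (the init-script edit is an assumption about this particular install):

      # InnoDB asked for 1342193664 bytes; with 2 MB pages that is ~641 pages, so reserve ~700
      sysctl -w vm.nr_hugepages=700

      # add near the top of /etc/init.d/mysqld (before mysqld is launched), then restart
      ulimit -l unlimited

      service mysqld restart
      grep Huge /proc/meminfo    # HugePages_Rsvd / HugePages_Free should now change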

  • Cluster File System

    - by Ben
    We are looking to choose a clustered file system for our in-house application. Let me first outline my requirements. We have a storage array and 2 servers at present. We receive data files from remote servers onto our servers, and on both servers we run our application to access that data and produce a final result as per our requirements. In the future, maybe after 3-4 months, we may add more servers to the current cluster pool to handle a larger data load from the remote data senders. So my requirement is to share the same storage partition across 2-3 servers (it might be 4-5 more servers in the future); my application reads data from the storage partition and writes back to it. Are there any bottlenecks or limitations with RHCS, GFS2, or anything else? We are new to RHCS, GFS and all of this. Is there a better or more lightweight approach to deal with our requirement? What is the best OS version for this? How about RHEL 6.4 64-bit? Please share any case studies or guide references from past experience with similar environments. Regards, Ben

  • Disk wipe preferences

    - by hmvm123
    I manage a pool of systems that are loaded with software and sent to potential customers for evaluations which often land sensitive information on the drives. Before shipping them back, they typically like a standard wipe to be run to clean out the drives. Most are familiar with DBAN so I try to make sure it can work on my systems. Unfortunately, this means I'm usually in RAID driver hell trying to make sure that the versions out there support the ones my systems are shipping with. These are various kinds of 3ware and LSI ones. Consequently, I have DBAN 1.0.7 working on some, a beta version of 2.0 on the others and 2.2.6 on some of the latest SSD based ones. Now with the LSI controllers on my IBM x3550 M3s (1064/1068) I'm getting no love at all. Is there a way out? Do you buildroot with DBAN and try to piece the drivers together? Any other tools, free or commerical, that stay updated. I'm trying to walk people of varying technical proficiencies through this, so a boot disk with simple choices is preferable.
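
    One way out of the DBAN driver chase is to boot any current live distro that already ships the 3ware/LSI modules and wipe with ordinary userland tools; for a "standard wipe", a single overwrite pass is generally considered sufficient. A hedged sketch (device names are examples - double-check them against lsblk before writing anything):

      lsblk                        # or: cat /proc/partitions
      shred -v -n 1 -z /dev/sda    # one random pass, then a final pass of zeros

      # or the simpler variant; send SIGUSR1 to dd to see progress
      dd if=/dev/zero of=/dev/sda bs=1M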
