Search Results

Search found 18454 results on 739 pages for 'oracle thoughts'.

  • Postfix concurrency limit with round-robin DNS

    - by goose
    Take the following internal round-robin DNS setup:

        mymta.com. IN A 172.31.1.1
        mymta.com. IN A 172.31.1.2
        mymta.com. IN A 172.31.1.3
        mymta.com. IN A 172.31.1.4
        mymta.com. IN A 172.31.1.5
        mymta.com. IN A 172.31.1.6
        mymta.com. IN A 172.31.1.7
        mymta.com. IN A 172.31.1.8
        mymta.com. IN A 172.31.1.9
        mymta.com. IN A 172.31.1.10

    Now assume the following Postfix setup (assume these are the only changes from the Debian package defaults).

    main.cf:

        smtp_connection_cache_destinations = mymta.com
        smtp_connection_cache_reuse_limit = 750
        smtp_destination_concurrency_limit = 75

    transport:

        *    :[mymta.com]

    I would expect 75 concurrent connections spread across the 10 A records I've set in DNS. However, I'm seeing more than a few hundred connections to mymta.com, and I'm wondering if Postfix is "smart" enough to set up 75 concurrent connections for each IP address. Thoughts?
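    To see how those connections are actually being distributed across the ten A records, a quick sketch that can be run on the sending host (assumes IPv4 and the classic net-tools netstat):

        # count ESTABLISHED outbound SMTP connections per destination IP
        netstat -tn | awk '$6 == "ESTABLISHED" && $5 ~ /:25$/ { split($5, a, ":"); print a[1] }' | sort | uniq -c | sort -rn

    If one or two addresses dominate the counts, connection caching (smtp_connection_cache_reuse_limit) is likely pinning deliveries to whichever connections were opened first rather than rotating through the A records.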

  • Split Tunnel VPN using incorrect Tunnel

    - by Brian Schmeltz
    Our company has a handful of field offices that have recently been set up with a regular internet connection, after we removed the T1 and router that connected them directly to our network. Now, when the users are in the office, they log in to the VPN to be able to connect to the network. So that they can print and scan from the local multi-function device, we have set up a split-tunnel VPN. We currently have about 15-20 users using this setup around the country without any problems.

    Recently one of our users started having problems accessing internal programs/sites when connecting from both home and the office. There are three other users in the same office and they do not have this problem. I assumed that it was something with the computer and went ahead and replaced it with another of the same model. The computer worked fine in our home office; however, when the user received it, she had the exact same problem both at home and in the field office. Thinking it might be a NIC driver issue, I sent her another computer, this time a different model; the same problem occurred.

    If I update the hosts file to point to the correct paths, things work, and if I connect via a normal VPN connection everything works, but then the user cannot scan or print - which is a problem. I have tried to find ways to create another tunnel on a normal VPN, and to force the correct tunnel on the split-tunnel VPN. It appears that there is something related to the ISP, because if I connect via Comcast or Verizon it is fine, but once she connects via Insite she has problems. I have been unable to get any support from Insite, as they don't feel the issue is with them. We use a Nortel VPN client. Any thoughts or ideas would be appreciated.
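    A first diagnostic step, assuming Windows 7 clients: with the VPN up, compare the routing table and name resolution on a working machine against the failing one (intranet.example.local stands in for one of the unreachable internal hosts):

        route print
        nslookup intranet.example.local
        tracert intranet.example.local

    If the failing machine resolves the internal name to a public address, or routes it out via the default gateway instead of the tunnel interface, that points at DNS or route injection rather than the VPN client itself.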

  • Returning a 404 page when a folder is accessed from one domain, but allowing access from other domains and IP addresses

    - by okw
    Situation: I want to return a 404 page ("404.php") when a folder ("hidden") is accessed from the example.com domain. I want the same folder to be accessible from a subdomain ("hidden.example.com") or from a different domain ("hidden.com"), which are both configured in a single VirtualHost entry.

    The server has multiple IP addresses that it listens on. Each IP address serves identical content from the example.com domain (sharing a VirtualHost entry). I want the folder to be accessible from the IP address. The server is configured to use SSL/TLS/HTTPS. HTTPS is optional on example.com, but HTTPS is enforced in the .htaccess file for the hidden folder using the rewrite rule shown below.

    /www/hidden/.htaccess:

        RewriteCond %{HTTPS} !=on
        RewriteRule .* https://%{SERVER_NAME}%{REQUEST_URI} [R,L]

    I know that %{SERVER_ADDR} gives the server's IP address, but does it return the one that the client is requesting from? I'm also starting to think that something in the VirtualHosts file would be more appropriate. Any thoughts on this?

    What should be allowed:

        http://87.65.43.21/hidden/
        https://87.65.43.21/hidden/
        http://12.34.56.78/hidden/
        https://12.34.56.78/hidden/
        http://hidden.example.com/
        https://hidden.example.com/
        http://hidden.com/
        https://hidden.com/
        http://www.hidden.com/
        https://www.hidden.com/

    What should be 404-ed with 404.php:

        http://example.com/hidden/
        https://example.com/hidden/
        http://www.example.com/hidden/
        https://www.example.com/hidden/
        http://example.com/hidden/hiddenfile.php
        https://example.com/hidden/hiddenfile.php
        etc.

    Thanks.
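    A sketch of one approach, assuming mod_rewrite and that matching on the hostname the client asked for is acceptable: %{HTTP_HOST} (unlike %{SERVER_NAME}) reflects the request's Host header, so it distinguishes example.com from a bare IP or from hidden.com. The 404 rule is placed before the HTTPS redirect so blocked requests are never redirected first:

        # /www/hidden/.htaccess (sketch)
        RewriteEngine On
        ErrorDocument 404 /404.php

        # 404 anything reached via example.com / www.example.com
        RewriteCond %{HTTP_HOST} ^(www\.)?example\.com$ [NC]
        RewriteRule .* - [R=404,L]

        # existing rule: force HTTPS for everyone else
        RewriteCond %{HTTPS} !=on
        RewriteRule .* https://%{SERVER_NAME}%{REQUEST_URI} [R,L]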

  • Apache 2.4, Ubuntu 12.04 Forbidden Errors

    - by tubaguy50035
    I just installed Apache 2.4 today, and I'm having some issues getting the vhost configuration to work correctly. Below is the vhost conf:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /hosting/Client/site.com/www
            ServerName site.com
            ServerAlias www.site.com
            <Directory "/hosting/Client/site.com/www">
                Options +Indexes +FollowSymLinks
                Order allow,deny
                Allow from all
            </Directory>
            DirectoryIndex index.html
        </VirtualHost>

    There is an index.html file in /hosting/Client/site.com/www. When I go to the site, I receive a 403 Forbidden error. The www-data group is the group on the www folder, which I've already given all permissions (r/w/x). I'm really at a loss as to why this is happening. Any thoughts?

    If I remove the vhost and go straight to the IP address, I get the default "It works!" page, so I know the server itself is working. The error log says "client denied by server configuration".

    apache2ctl -S dump:

        nick@server:~$ apache2ctl -S
        /usr/sbin/apache2ctl: 87: ulimit: error setting limit (Operation not permitted)
        VirtualHost configuration:
        *:80 is a NameVirtualHost
            default server site.com (/etc/apache2/sites-enabled/site.com.conf:1)
            port 80 namevhost site.com (/etc/apache2/sites-enabled/site.com.conf:1)
                alias www.site.com
            port 80 namevhost site.com (/etc/apache2/sites-enabled/site.com.conf:1)
                alias www.site.com
        ServerRoot: "/etc/apache2"
        Main DocumentRoot: "/var/www"
        Main ErrorLog: "/var/log/apache2/error.log"
        Mutex watchdog-callback: using_defaults
        Mutex default: dir="/var/lock/apache2" mechanism=fcntl
        Mutex mpm-accept: using_defaults
        PidFile: "/var/run/apache2.pid"
        Define: DUMP_VHOSTS
        Define: DUMP_RUN_CFG
        Define: ENALBLE_USR_LIB_CGI_BIN
        User: name="www-data" id=33 not_used
        Group: name="www-data" id=33 not_used

    Output of namei -mo /hosting/Client/site.com/www/index.html:

        f: /hosting/Client/site.com/www/index.html
        drwxr-xr-x root root     /
        drwxr-xr-x root root     hosting
        drwxr-xr-x root root     Client
        drwxr-xr-x nick www-data site.com
        drwxr-xr-x nick www-data www
        -rw-rwxr-x nick www-data index.html
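    One thing worth checking, since this is Apache 2.4: the 2.2-style Order/Allow directives only take effect if mod_access_compat is loaded, and "client denied by server configuration" is the classic symptom when it isn't. A sketch of the 2.4-native equivalent:

        <Directory "/hosting/Client/site.com/www">
            Options +Indexes +FollowSymLinks
            Require all granted
        </Directory>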

  • Windows product key is valid but won't activate

    - by pnongrata
    Last month I needed to install Windows XP (Pro Version 2002 SP3) from a reinstallation CD a co-worker gave me, with a product key the IT team told me to use. Everything installed successfully and I have been using the XP machine for the last 30 days without any problems; however, it kept reminding me to activate Windows, and of course I never did (laziness). It has now locked me out of my machine and won't let me log in until I activate it.

    So I proceed to the Activation screen, which asks: "Do you want to activate Windows now?" I choose "Yes, let's activate Windows over the Internet now." and click the Next button. It then asks: "Do you want to register while you are activating Windows?" I choose "No, I don't want to register now; let's just activate Windows." and click the Next button.

    I now see a screen titled "Unauthorized product key", with only three buttons: Telephone, Remind me later, and Retry. The Retry button stays disabled until I enter the full product key that IT gave me, then it enables. However, at no point do I see a Next button indicating that the product key was valid. When I click Retry, the screen refreshes, this time with a different title: "Incorrect product key".

    Could something be wrong with the Windows XP reinstallation CD (do they "expire" after a certain amount of time, etc.)? Or is this the normal workflow for a bad product key? I ask because after this happened I emailed IT, and they supplied me with several other product keys to try, but every time it's the same result, the same thing happening over and over. So it's possible that IT has given me several bad keys, but it seems more likely something else is going on here. Any thoughts or ways to troubleshoot? Thanks in advance!

  • Connection timeout when trying to SSH

    - by dan
    The other day I tried to connect to my remote server via SSH as I always have, but now when I try to connect it just times out after about 60 seconds. I ran:

        service ssh start

    which tells me "Job is already running: ssh". I then ran:

        $ netstat -tnlp
        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address    Foreign Address  State   PID/Program name
        tcp   0      0      0.0.0.0:993      0.0.0.0:*        LISTEN  1972/dovecot
        tcp   0      0      0.0.0.0:995      0.0.0.0:*        LISTEN  1972/dovecot
        tcp   0      0      127.0.0.1:3306   0.0.0.0:*        LISTEN  2030/mysqld
        tcp   0      0      0.0.0.0:110      0.0.0.0:*        LISTEN  1972/dovecot
        tcp   0      0      0.0.0.0:143      0.0.0.0:*        LISTEN  1972/dovecot
        tcp   0      0      0.0.0.0:10000    0.0.0.0:*        LISTEN  2157/perl
        tcp   0      0      0.0.0.0:22       0.0.0.0:*        LISTEN  3028/sshd
        tcp   0      0      0.0.0.0:25       0.0.0.0:*        LISTEN  2273/master
        tcp6  0      0      :::80            :::*             LISTEN  2618/apache2
        tcp6  0      0      :::21            :::*             LISTEN  2291/proftpd: (acce
        tcp6  0      0      :::22            :::*             LISTEN  3028/sshd

    I am able to access subdomains on my site, and FTP, but I can't SSH or even ping remotely. Any thoughts?
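    Since sshd is listening on port 22 yet even ping fails from outside, the symptoms point at filtering somewhere between the client and the server rather than at sshd itself. A couple of checks worth running on the server, as a sketch:

        # any local firewall rules dropping SSH or ICMP?
        iptables -L -n -v

        # watch for incoming SYNs while attempting to connect from the remote client
        tcpdump -ni any tcp port 22

    If the SYNs never show up in tcpdump, the traffic is being dropped upstream (hosting-provider firewall, router ACL), not on the box.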

  • Moving Farm to co-location hosting - network settings requirements

    - by Saariko
    I am moving my farm (two Dell R620s) to a co-location hosting service, and I am trying to figure out a secure way to set up my network. The requirements are:

        - Host 1 is the working host: ESXi 5.1, vSphere, four client VMs (all Windows 2008 R2).
        - Host 2 also has ESXi 5.1 installed, plus a single machine with Veeam Backup & Replication 6.5,
          keeping a copy of Host 1's clients on Host 2's internal storage (this solution is due to a very
          small budget - in case of failure on Host 1, I can redirect the IPs).
        - Only two VM clients require a network address and access from the WAN - the ISP provides an IP
          range for them (with gateway and DNS).
        - I need a connection to the iDRACs from my office (option to create a VPN/SSL tunnel).
        - Connection to the vSphere appliances.
        - I want to be able to RDP to the VM clients.

    The current configuration is that each host has the dedicated iDRAC NIC connected, and another (NIC #1) connected with a static IP on 192.168.3.x. The iDRACs have static IPs from the same network range (192.168.3.x).

    My thoughts: on NIC #2 of both hosts I will connect a crossover cable, and I will give each VM client that needs internet access a secondary VM network with the assigned IP from the ISP, open only to the web.

    My questions: Should I give (external) IPs to the machines that do NOT require WAN access? I can't see a way to RDP to them directly if not. Should I use the crossover cable, or just plug NIC #2 into the switch? Will this setup even work? What do I need to verify? What virtual NICs and/or switches should I create on the hosts?

  • Windows 7 home backup solution, with offsite provision

    - by Richard E
    I am looking for a home backup solution for my single Windows 7 (Home Premium) PC. I have about 500GB of data to back up, and I would like to spend less than GBP 300 on the solution. I don't see the need to back up the whole PC, rather specific folder branches (iTunes, photos, documents, Outlook files, user folders such as desktop, favorites, etc.).

    I would like a solution that enables me to maintain backups in two separate physical locations (e.g. home and work). To facilitate this I am imagining a storage unit with slots for two removable drives, along with three separate drives. At any one time, two of the drives will be being backed up to in the storage unit; the third will be located at my work. Periodically I will take one of the drives into work and leave it there, then bring the drive that was there back home and plug it into the storage unit. It will then be backed up along with the other drive that was left in the storage unit.

    This approach should cover scenarios such as virus attack and fire or theft at one location. Thoughts and comments on the sanity of this approach, please...
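    For the folder-branch copies themselves, the robocopy tool built into Windows 7 can mirror each branch to whichever drive is docked; a minimal sketch (the drive letter E: and the log path are placeholders):

        robocopy "%USERPROFILE%\Documents" "E:\Backup\Documents" /MIR /R:2 /W:5 /LOG+:"E:\Backup\backup.log"
        robocopy "%USERPROFILE%\Pictures"  "E:\Backup\Pictures"  /MIR /R:2 /W:5 /LOG+:"E:\Backup\backup.log"

    Note that /MIR deletes files from the destination that no longer exist in the source - desirable for a mirror, but it means accidental deletions propagate on the next run.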

  • php5-mysqlnd on Debian wheezy/sid?

    - by Joseph
    I am trying to install php5-mysqlnd on a fresh install of Wheezy (/etc/debian_version refers to it as wheezy/sid) and I'm having a problem:

        root@debian:/var/www/lottery1# apt-get install php5-mysqlnd
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        php5-mysqlnd is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        1 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? Y
        Setting up php5-mysqlnd (5.4.0-3) ...
        ucfr: Attempt from package php5-mysqlnd to take /etc/php5/mods-available/mysql.ini away from package php5-mysql
        ucfr: Aborting.
        dpkg: error processing php5-mysqlnd (--configure):
         subprocess installed post-installation script returned error exit status 4
        Processing triggers for libapache2-mod-php5 ...
        configured to not write apport reports
        Reloading web server config: apache2.
        Errors were encountered while processing:
         php5-mysqlnd
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    It seems there is some sort of conflict with the php5-mysql package, but I still get this error even after removing (with --purge) the php5-mysql package. Any thoughts? I'm trying to run a web tool that makes heavy use of mysqli_result::fetch_all(). Thanks!
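    A workaround sometimes suggested for this kind of ucf conflict, offered strictly as a sketch: even after purging php5-mysql, ucf can keep a stale registration claiming ownership of mysql.ini, so clearing that association before reinstalling may help. Back up the .ini first; the exact ucf/ucfr purge behaviour is an assumption worth verifying against your versions:

        ucfr --purge php5-mysql /etc/php5/mods-available/mysql.ini
        ucf --purge /etc/php5/mods-available/mysql.ini
        apt-get install --reinstall php5-mysqlnd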

  • Are Windows Domain Service Accounts Really Necessary?

    - by Zach Bonham
    One of the biggest problems we have in automating application deployments is the idea that running IIS AppPools and Windows Services under domain service accounts is a "best practice". Unfortunately, this best practice sometimes causes deployment headaches: either we need to provision a new domain-level service account quickly, or, once we have the account, we need to manage its credentials.

    I had a great conversation about not making domain-level service accounts a requirement, and instead taking one of two approaches:

        1. Secure at the node level using the machine account (domain\machine$) and add the node to the
           appropriate Active Directory / SQL groups/roles.
        2. Create local app-specific accounts on each machine (machine\myapp) and add that account to the
           appropriate Active Directory / SQL groups/roles (the password here can change per deployment;
           it doesn't need to be stored).

    In both cases, it seems easier to manage adding an account to the appropriate group/role, or even standing up a new local account, than to provision a new domain-level account and manage those credentials. This would hopefully ease the management burden on the Active Directory, SQL Server and Operations teams, as there would be no more password management.

    We've not actually been able to implement this in practice yet. I come from a development background, so I'm curious: how many ways could this approach go wrong? Can we really get rid of domain-level service accounts with this direction? I'd appreciate any thoughts from anyone who has taken this path! Thanks! Zach
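    For approach 1, granting a node's machine account rights in SQL Server is a one-liner per node; a sketch with placeholder names (MYDOMAIN\WEBNODE01$ and MyAppDb are hypothetical):

        CREATE LOGIN [MYDOMAIN\WEBNODE01$] FROM WINDOWS;
        USE MyAppDb;
        CREATE USER [MYDOMAIN\WEBNODE01$] FOR LOGIN [MYDOMAIN\WEBNODE01$];
        EXEC sp_addrolemember 'db_datareader', 'MYDOMAIN\WEBNODE01$';

    In practice it is usually cleaner to add the machine accounts to an AD group and grant that group a single SQL login, so adding a node becomes purely an AD operation.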

  • Apache /server-status/ gives a 404 not found

    - by kapshure
    I am trying to solve a problem where Apache stats aren't displaying correctly in Munin. I've run through quite a few checks and tests on the Munin setup, but I think my issue is related to Apache; my skill set there is lacking.

    First, system info for the monitored server:

        CentOS 5.3, kernel 2.6.18-128.1.1.el5
        Apache/2.2.3

    The "server-status" directive in httpd.conf (I've cross-compared this with another system where I did a successful parallel install of Munin, correctly showing Apache stats, and the directive below is the same on both):

        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>

    I ran lynx http://localhost/server-status and got HTTP/1.1 404. Taking a look at the Apache access_log:

        127.0.0.1 - - [13/Oct/2010:07:00:47 -0700] "GET /server-status HTTP/1.0" 404 11237 "-" "Lynx/2.8.5rel.1 libwww-FM/2.14 SSL-MM/1.4.1 OpenSSL/0.9.8e-fips-rhel5"

    mod_status is also loaded:

        % grep "mod_status" /etc/httpd/conf/httpd.conf
        LoadModule status_module modules/mod_status.so

    iptables is turned off. I also noticed that the ownership of httpd.conf on this system is root.root, whereas on the system that displays correctly it is apache.www - not certain that this matters? It's got to be a permission issue, but I'm not certain where the permissions are messed up. Any thoughts on why the test of server-status is giving me a 404?
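    Two quick checks, as a sketch: confirm mod_status is actually loaded into the running server (grepping httpd.conf only proves the LoadModule line exists), and look for another directive swallowing the URL - a catch-all RewriteRule or Alias in a vhost will happily 404 /server-status before the handler ever sees it:

        # is status_module loaded into the running binary?
        apachectl -M 2>&1 | grep status

        # does any vhost, rewrite, or alias intercept /server-status?
        grep -Rn "RewriteRule\|Alias\|server-status" /etc/httpd/conf /etc/httpd/conf.d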

  • i7 overclocking problem

    - by benwebdev
    Hi everyone, I've got an overclocked system that seems to be misbehaving. When trying to boot from cold, the system just hangs and nothing is output to the screen. The fans spin up as they should, but nothing happens to finish the boot. From here I have to switch it off, then on again, and the boot completes. If I go into the tweak menu in the BIOS, I'm informed that a boot has failed.

    I've been in touch with Overclockers UK support a bit and there's not yet been a solution; we've mainly been tweaking the CPU voltage. Any suggestions? I'm new to overclocking, which is why I got a bundle with OCUK. With this issue happening only on cold boot, it's tricky to test, as I have to make a change and then wait until the next day.

    My system:

        Intel Core i7 930 2.80GHz, overclocked to 4GHz
        Gigabyte X58A-UD3R (BIOS version: FC)
        6GB RAM
        Power supply: Cooler Master Silent Pro M series 700W

    One suggestion made by OCUK was that maybe it's the power supply, but I'm not sure and don't have a spare - it's brand new and a pretty expensive piece of kit. Any thoughts on this? Other recommendations for power? Thanks, Ben

  • ffmpeg - creating DNxHD MXF files with alphas

    - by Hugh
    Hi all, I'm struggling with something in FFmpeg at the moment... I'm trying to make DNxHD 1080p/24, 36Mb/s MXF files from a sequence of PNG files. My current command line is:

        ffmpeg -y -f image2 -i /tmp/temp.%04d.png -s 1920x1080 -r 24 -vcodec dnxhd -f mxf -pix_fmt rgb32 -b 36Mb /tmp/temp.mxf

    To which ffmpeg gives me the output:

        Input #0, image2, from '/tmp/temp.%04d.png':
          Duration: 00:00:01.60, start: 0.000000, bitrate: N/A
            Stream #0.0: Video: png, rgb32, 1920x1080, 25 tbr, 25 tbn, 25 tbc
        Output #0, mxf, to '/tmp/temp.mxf':
            Stream #0.0: Video: dnxhd, yuv422p, 1920x1080, q=2-31, 36000 kb/s, 90k tbn, 24 tbc
        Stream mapping:
          Stream #0.0 -> #0.0
        [mxf @ 0x1005800]unsupported video frame rate
        Could not write header for output file #0 (incorrect codec parameters ?)

    There are a few things in here that concern me:

        - The output stream insists on being yuv422p, which doesn't support alpha.
        - 24fps is an unsupported video frame rate? I've tried 23.976 too, and get the same thing.

    I then tried the same thing, but writing to a QuickTime (still DNxHD, though) with:

        ffmpeg -y -f image2 -i /tmp/temp.%04d.png -s 1920x1080 -r 24 -vcodec dnxhd -f mov -pix_fmt rgb32 -b 36Mb /tmp/temp.mov

    This gives me the output:

        Input #0, image2, from '/tmp/1274263259.28098.%04d.png':
          Duration: 00:00:01.60, start: 0.000000, bitrate: N/A
            Stream #0.0: Video: png, rgb32, 1920x1080, 25 tbr, 25 tbn, 25 tbc
        Output #0, mov, to '/tmp/1274263259.28098.mov':
            Stream #0.0: Video: dnxhd, yuv422p, 1920x1080, q=2-31, 36000 kb/s, 90k tbn, 24 tbc
        Stream mapping:
          Stream #0.0 -> #0.0
        Press [q] to stop encoding
        frame=   39 fps=  9 q=1.0 Lsize=    7177kB time=1.62 bitrate=36180.8kbits/s
        video:7176kB audio:0kB global headers:0kB muxing overhead 0.013636%

    which obviously works, to a certain extent, but still has the issue of being yuv422p, and therefore losing the alpha. If I'm going to QuickTime, then I can get what I need using Shake, but my main aim here is to be able to generate .mxf files. Any thoughts? Thanks
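    Two observations, offered as a sketch rather than a definitive fix. First, FFmpeg's DNxHD encoder takes yuv422p input only, so the alpha channel cannot survive the encode no matter which container is used. Second, the input shows "25 tbr", meaning the PNG sequence is being read at the default 25 fps; placing -r before -i sets the input rate instead, which may clear the "unsupported video frame rate" error from the MXF muxer:

        ffmpeg -y -r 24 -f image2 -i /tmp/temp.%04d.png -s 1920x1080 \
               -vcodec dnxhd -pix_fmt yuv422p -b 36M -f mxf /tmp/temp.mxf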

  • Need Routing help (tagged/untagged)

    - by TheCleaner
    I really need some help trying to figure out some "basic" routing. My brain is fried from being sick for a week and I'm not thinking clearly. The picture linked below describes my setup. I'm trying to route a user from their workstation to the Juniper SSG520 and then "out" through the internet connection. I can't move the connection, as it is physically located where the user's switch is.

    Here's what I CAN do at this point:

        - I can ping from the Juniper SSG520 eth3/3 to 6x.xxx.253.116 from 6x.xxx.253.114
        - I can ping from the x450 in the top right to 6x.xxx.253.112 from 6x.xxx.253.116

    What I CANNOT do:

        - I cannot ping from the SSG520 eth3/3 to 6x.xxx.253.112 from 6x.xxx.253.114 (basically from the
          Juniper box to the gateway)

    I've tried changing port 1 in the x450 VLAN 666 to tagged, but when I do that I can't even ping from the Juniper SSG520 eth3/3 to the VLAN on the x450 (6x.xxx.253.116). I need to route traffic out the eth3/3 interface on the SSG520 THROUGH the two x450 switches and out the internet connection. The caveat is that the two x450 switches are connected via fiber over distance and have tagged VLANs in them for the routing. Thoughts?

    Diagram: http://img251.imageshack.us/img251/7752/drawing1.jpg

  • Cannot 301 redirect with IIS URL Rewrite Module

    - by Justin
    I am trying to troubleshoot an issue with the URL Rewrite Module on IIS 7. I migrated a WordPress blog over to BlogEngine.NET. There were only about five entries that I wanted to 301-redirect to the new blog, so I wanted to simply create five exact-match redirect rules using the rewrite module. For some reason the exact-match rule never seems to take effect; I always get a 404 error when the original URL is navigated to. I verified that my exact-match pattern matches the existing backlinks, and it does.

    I then tried a simple test and got the same behavior: no redirection. I created a page, test.html, on my site, and a second page, test2.html. So my exact-match pattern is:

        http://www.mydomain.com/test.html

    and the rule is supposed to do a 301 redirect to:

        http://www.mydomain.com/test2.html

    The redirect never happens. I created the steps for the rule based on the instructions on this page: http://learn.iis.net/page.aspx/461/creating-rewrite-rules-for-the-url-rewrite-module/ - I don't see that I left out a step. After applying the rule I've even gone as far as doing an iisreset to make sure it would be in effect, but still no luck. Any thoughts on what I might have left out?
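    One likely culprit, noted as a sketch: the URL Rewrite Module matches its pattern against the URL path relative to the site root (e.g. "test.html"), not against the full absolute URL, so an exact-match pattern of "http://www.mydomain.com/test.html" can never match anything. A path-only rule in web.config would look like:

        <rule name="Redirect old test page" stopProcessing="true">
          <match url="^test\.html$" />
          <action type="Redirect" url="/test2.html" redirectType="Permanent" />
        </rule>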

  • Windows XP seemingly out of resources but plenty of free RAM and swap available

    - by Artem Russakovskii
    This one has been bothering me for years, and so far I couldn't find an adequate solution. The problem occurs on pretty much every XP install I've done. After opening a variety of programs, or after the system has been running existing programs for a while, Windows seemingly runs out of resources without telling me. There's ALWAYS free RAM - for example, it just happened to me and I had over a gig free. There are no viruses, spyware, or other nonsense; it is a Windows resource problem, but the question is which resource it is running out of, how one pinpoints it, and how one prevents it.

    Sometimes this happens after running specific programs - for example, today it happened when I started Photoshop CS4 and Flash CS4 at the same time. I also noticed that restarting The Bat (email client by Ritlabs) seems to get rid of this problem for a while, but again, this happens on machines that don't even have The Bat installed.

    So what exactly happens? The symptoms are:

        - Pressing Alt-Tab doesn't bring up the list anymore - it just jumps to the next window instantly,
          very similar to the way Alt-Esc works; in this case it's due to not having enough resources to
          draw the Alt-Tab menu.
        - Random programs crash, citing random errors: out-of-memory errors, system resources, inability
          to make system calls, etc.
        - Random programs start missing random parts - for example, Firefox's top menus might disappear,
          pull up partial selections, or stop appearing altogether. IE might lose a few of its toolbars.
          Some programs fail to redraw, or just go gray where the UI used to be.

    Windows itself never complains about running out of RAM, virtual memory, or anything at all, yet it's running out of something. The only clue I was able to find (and the fix I applied today) was the Desktop Heap Limitation. I haven't confirmed the fix working, as not enough time has passed. In the meantime, what are everyone's thoughts?
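    For reference, a sketch of where the desktop heap is configured: the sizes live in the SharedSection portion of a registry value, and the usual tweak raises the second number (the interactive desktop heap, in KB), followed by a reboot. The target number below is an example, not a recommendation:

        Key:   HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems
        Value: Windows (REG_EXPAND_SZ)
        ...SharedSection=1024,3072,512...  ->  ...SharedSection=1024,8192,512...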

  • ESX Scheduler and NUMA issue

    - by babyg_wc
    On our 24-core BL685 (4 sockets x 6 cores), we find that NUMA nodes 0 and 1 are pretty busy (unfortunately resulting in elevated CPU ready times on the VMs), whilst NUMA nodes 2 and 3 are almost unused. I thought this might just be an ESX4 U1 issue, so I had a colleague with a 32-core (DL785) farm investigate, and it seems that his last 3 or 4 NUMA nodes are also not really being utilised.

    ESX seems to have a weakness when it comes to balancing lightly loaded NUMA boxes. I'm going to enable node interleaving in the BIOS and see if the scheduler balances across all 24 cores instead of just 12! For those of you with large core counts, I would suggest you fire up your VI client and check physical CPU usage (or esxtop); I would be interested to hear what your results are. Please note that it's only lightly loaded hosts (e.g. less than 30% CPU load on the ESX host) that seem to have the biggest issue with load imbalance.

    Thoughts/comments? PS: I've logged an SR with VMware to assist. The other "problem" could be that we have 128GB of RAM in each host, and therefore the scheduler sees no good reason why it shouldn't try to cram all the VMs into the first two NUMA nodes, as we only have around 50GB of RAM worth of VMs on each host...

  • How do I handle a request for "index.php/somepath" in Nginx config?

    - by Arne
    I'm currently attempting to serve Icinga from nginx. The new Icinga-Web UI attempts to load a file from the URL http://SERVER/icinga-web/index.php/appkit/squishloader/javascript (leading to a 404). How do I set up a location for this scenario? How do I get nginx to serve index.php and pass the full request path in a way Icinga understands? This is my current nginx config for Icinga (adapted from the default Apache config):

        server {
            server_name SERVER;
            root /opt/icinga/icinga-1.2.1;

            location /icinga-web/js/ext3 {
                alias /opt/icinga/icinga-1.2.1/lib/ext3;
            }
            location /icinga-web {
                alias /opt/icinga/icinga-1.2.1/pub;
            }
            location /icinga/cgi-bin {
                alias /opt/icinga/icinga-1.2.1/sbin;
            }
            location /icinga {
                alias /opt/icinga/icinga-1.2.1/share;
            }

            location ~ \.php([\?/].*)?$ {
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:61000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }

    Any thoughts? Thanks!
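    The existing php location matches the trailing path but never hands it to PHP. A sketch using fastcgi_split_path_info to separate the script name from the PATH_INFO portion, assuming the same FastCGI backend on port 61000 (untested against Icinga specifically):

        location ~ ^(.+\.php)(/.*)?$ {
            fastcgi_split_path_info ^(.+\.php)(/.*)$;
            include fastcgi_params;
            fastcgi_pass 127.0.0.1:61000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_path_info;
        }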

  • IIS 6 + ASP.NET web service - DW20 and stackoverflow exception

    - by pcampbell
    Consider an ASP.NET SOAP web service that starts up fine but craters hard when receiving its first hit. Please note that this deployment works in the Test environment but not in the PreProd environment. Both are Windows 2003 SP3 + IIS 6 + ASP.NET 3.5, all up to date. The behaviour we're seeing:

        - Restart the site and app pool (the app pool is configured to run under Network Service).
        - Browsing to the .asmx and .wsdl responds normally, as expected.
        - Send a normal, well-formed SOAP request / normal payload to the web service.
        - 100% CPU usage; after 5 seconds, the page request / site returns "Service Unavailable".
        - No entry is created in the IIS log file (i.e. c:\windows\system32\logfiles\W3C-foo).
        - The app pool ends up stopped.

    The processes that hit the CPU hard are dw20.exe (visible in Task Manager); I am unsure why Dr. Watson is involved here. The Event Log shows an ASP.NET Runtime error with this text:

        EventType clr20r3, P1 w3wp.exe, P2 6.0.3790.3959, P3 45d6968e, P4 errormanagement, P5 1.0.0.0,
        P6 4b86a13f, P7 24, P8 0, P9 system.stackoverflowexception, P10 NIL.

    Questions: Any thoughts on what this System.StackOverflowException might be? Given that the code is the same between environments, might it be a payload problem? Could it be a configuration issue? You can see the name of my .NET assembly there in the exception message: "ErrorManagement".
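    One way to get an actual call stack for the StackOverflowException, as a sketch: capture a crash dump of the worker process with adplus from the Debugging Tools for Windows, then inspect it in WinDbg (the output directory is a placeholder):

        adplus -crash -pn w3wp.exe -o c:\dumps

    A stack overflow in a SOAP service is very often unbounded recursion during (de)serialization - e.g. a cyclic object graph - which would also explain why it only appears once a real payload arrives.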

  • hosts file seems to be ignored

    - by z4y4ts
    I have an almost fresh Ubuntu desktop box. The OS was installed two weeks ago and updated from the karmic repositories. Last week I had no problems with DNS, but this week something has changed. I'm not sure what or when, and not sure whether I changed any configs. So now I have a really weird situation. According to the configs, name resolution should work normally.

    /etc/hosts:

        127.0.0.1 localhost test
        127.0.1.1 desktop

    /etc/host.conf:

        order hosts,bind
        multi on

    /etc/resolv.conf:

        # Generated by NetworkManager
        search <search servers obtained via DHCP>
        nameserver 192.168.0.3

    /etc/nsswitch.conf:

        passwd:    compat
        group:     compat
        shadow:    compat
        hosts:     files mdns4_minimal [NOTFOUND=return] dns mdns4
        networks:  files
        protocols: db files
        services:  db files
        ethers:    db files
        rpc:       db files
        netgroup:  nis

    But in fact it is not:

        user@test ~ $ ping test
        PING localhost (127.0.0.1) 56(84) bytes of data.
        [skip]

    Pinging is OK, but:

        user@test ~ $ host test
        test.mydomain.com has address xx.xxx.161.201

    I suspect that NetworkManager might be causing this misbehavior, but I don't know where to start checking it. Any thoughts, suggestions?
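    One detail worth knowing when testing this, as a sketch: host (like dig) sends queries straight to the DNS servers in resolv.conf and never consults /etc/hosts, while ping resolves through NSS exactly as nsswitch.conf dictates. So the pair of results above is actually consistent, not contradictory. getent is the right tool for exercising the NSS path:

        getent hosts test     # goes through nsswitch.conf, like ping
        host test             # queries DNS directly, bypassing /etc/hosts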

  • Where do I define a group policy that will set a users desktop background color to green the first time they log in?

    - by Tyler
    Servers: W2k8 R2 x64. Desktops: Win7 Pro x64.

    Our current group policy uses a custom ADM file to define certain properties of the desktop: background image (centered) and background color, green (00 74 00). This policy works for us, but the downside is that policies defined in our custom ADM are only applied after a "gpupdate /force". We would like these desktop theme settings to be applied the first time the user logs on to the computer.

    I've been working on a new policy that forces the computer to wait for the network when the user logs on, to handle folder redirection. The reason for writing the new policy was to remove the need for users to run "gpupdate /force" the first time they log in, so it doesn't make sense to implement the new policy if something still requires "gpupdate /force" to get users into the state we want.

    I've moved the setting for the background image out into Administrative Templates -> Desktop -> Desktop -> "Desktop Wallpaper", so this is now being set properly when the user first logs in. Now I'm left with a black background until I force a group policy update. I have tried playing around with setting a default theme, with limited success; it was not reliable enough to call a solution. I suppose I could set the background color with a script? Any thoughts? It feels like I'm missing something obvious, or that this should be much easier than it is.
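    If a script turns out to be the pragmatic route, the background colour is a per-user registry value taking decimal RGB, so green 00 74 00 becomes "0 116 0". A sketch of a logon-script one-liner (the same value could instead be pushed as a Group Policy Preferences registry item, which avoids custom ADMs entirely):

        reg add "HKCU\Control Panel\Colors" /v Background /t REG_SZ /d "0 116 0" /f

    The value is read at logon, so it takes effect from the user's next session.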

  • Some clients cannot connect to Server 2008 R2 VPN

    - by Robl
    Hi all, I have a Server 2008 R2 machine set up as a VPN server. We have created a Windows group called vpn-users to control access to the VPN. Clients are all Windows 7 Pro. This all seems to work fine, except some users cannot connect to the VPN!

    For example, I try to log on to the VPN as a user and get an error saying the server refused the connection due to a policy in place - specifically, authentication type. Fine, I think. So I drop that user into the vpn-users group created for this, try again, and hey presto, the user can now log on! Great. Now I try this with another user, but this time I get the same error even though I have dropped them into the vpn-users group!

    So does anyone have any idea why this works for some users and not for others? I have tried moving the user between OUs in AD, copying the account, and taking the user out of the vpn-users group and then putting them back in, but I get the same error each time. Any thoughts, anyone?

  • File corruption after copying files in Windows 7 64 bit using two methods

    - by DustByte
    I have 5000 pictures and other files in a directory taking up 35 GB, and I want to duplicate this directory.

    Method 1: I do a simple copy and paste of the directory in Explorer. I have the habit of checking checksums after copying important files, and in this case I noticed that around 2000 files failed the MD5 test. On closer inspection of a randomly chosen JPEG with differing checksums, it turns out that some XMP metadata had changed. In particular, the tag <MicrosoftPhoto:DateAcquired> had changed its date from 2009 to today (possibly around the time I was copying the files). I have no idea what triggered this XMP data to be changed, exactly when it was changed, or why it hit these particular files, but at least it seems to explain the checksum discrepancy.

    Method 2: Since I want exact duplicates of the files, I tried the program FreeFileSync to mirror the directory, hoping no XMP metadata would mysteriously change. A checksum test, in addition to a thorough file-comparison test in FreeFileSync, led to two similar but different results: 31 files fail the checksum test, 23 files fail the file-comparison test. The smaller set is not entirely contained in the bigger set, although many files occur in both. What is alarming here is that not only JPEGs are flagged as altered, but also some AVIs, MPGs and a large 7-Zip file. Closer inspection of a JPEG indicates that it is indeed corrupt: the bottom half of the picture is simply plain gray. Due to the size of the 7-Zip file, I have not been able to pin down its discrepancy.

    Note: in both methods, every file has the correct file size after being copied.

    Question: Any thoughts on what is possibly going on here? I have never had this problem before, and I am now terrified that files get corrupted after simple actions like copy/paste and file sync. Even if I manage to successfully copy the files somehow, I would still like an explanation for this.
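    For spot-checking individual suspect files on stock Windows 7 (no extra tools needed), certutil can hash both copies for comparison; a sketch with placeholder paths:

        certutil -hashfile "C:\Photos\IMG_0001.JPG" MD5
        certutil -hashfile "D:\PhotosCopy\IMG_0001.JPG" MD5

    Genuinely corrupt copies (like the half-gray JPEG) alongside changing XMP dates suggest two separate causes - an application touching metadata in the source versus actual I/O errors - so testing the RAM and the destination disk would be a reasonable next step.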

  • Concerns about Apache per-Vhost logging setup

    - by etienne
    I'm both senior developer and sysadmin in my company, so I'm trying to deal with the needs of both activities. I've set up our Apache box, which deals with 30-50 domains at the moment (and hopefully will grow larger) and hosts both production and development sites, with this directory structure:

        domains/
        domains/domain.ext/                                  # FTPS chroot for user domain.ext
        domains/domain.ext/public                            # the DocumentRoot of http://domain.ext
        domains/domain.ext/logs
        domains/domain.ext/subdomains/sub.domain.ext
        domains/domain.ext/subdomains/sub.domain.ext/public  # DocumentRoot of http://sub.domain.ext

    Each domain.ext vhost runs with its dedicated user and group via mpm-itk, umask being 027, and the logs are stored via a piped sudo command, like this:

        ErrorLog "| /usr/bin/sudo -u nobody -g domain.ext tee -a domains/domain.ext/logs/sub.domain.ext_error.log"
        CustomLog "| /usr/bin/sudo -u nobody -g domain.ext tee -a domains/domain.ext/logs/sub.domain.ext_access.log" combined

    Now, I've read a lot about not letting the logs out of a very restricted directory, but the developers often need a quick look at a particular subdomain's error log, and I don't really want to give them admin rights to look into /var/log. Having the logs available in the FTP account is REALLY handy during development.

    Do you think this setup is viable and safe enough? To me it looks good, but I'm concerned about three security issues:

        - Is the sudo pipe enough to deal with symlink exploits? Any catches I'm missing?
        - Log DoS: the logs are on the same partition as all the domains. We've got hundreds of gigs, but
          still, if someone fills the disk, everything breaks. Any workaround? Will a short-cycled
          logrotate suffice?
        - File descriptor limits: AFAIK the default limit for Apache on Ubuntu Server is currently 8192,
          which should be plenty to handle two log files per subdomain. Is it? Am I missing something?

    I hope to read some thoughts on the matter!
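    On the log-DoS point, a short-cycled logrotate with a size cap is a reasonable belt-and-braces measure; a sketch, assuming the glob is adjusted to the real path of the layout above and a logrotate recent enough to support maxsize (older versions can substitute size, trading away the daily schedule). copytruncate matters here because the tee processes keep their file descriptors open across a plain rename:

        /srv/domains/*/logs/*.log {
            daily
            rotate 14
            maxsize 100M
            copytruncate
            compress
            delaycompress
            missingok
            notifempty
        }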
