Search Results

Search found 37074 results on 1483 pages for 'define method'.


  • What is the standard term for my role?

    - by sigil
    I'm doing work that involves writing code and managing developers in a "special projects" division of a large company. I'd like to define my role better and figure out if there's an industry-standard term for what I do, so that it will be easier for me to research best practices and work on a career path.
    What I do all day:
    - A macro that connects an Excel sheet to an Access database is acting funny; I get called in to figure out what's happening and debug it.
    - Someone needs data extracted from a bunch of files on SharePoint. I figure out a client-side solution because I'm not authorized to do anything server-side, and getting IT to do anything would take several months and need a business case.
    - A manager wants a new data entry tool for their team. I interview the manager and team members to work out the functional requirements, then design/develop/test the application.
    - Someone needs a VBA script to crunch some data for their presentation that's due in two hours. I drop everything I'm doing to hack out a quick script and run the analysis, without much in the way of testing.
    - A developer has been hired to build a database for one of the teams, since I'm working on too many different things and don't have time to take this project on in the timeframe required. I direct his work and push him to meet certain deadlines, interview stakeholders to get more info that will help him figure out how to build the necessary forms, and modify the functional requirements of the database to fit in the timeframe.
    - Someone wants to load a set of data into a GIS system and set up an ongoing refresh and reporting of this data set. I facilitate the conversation between the GIS developers and the owners of this data set, and design a demo application as proof of concept.
    It's kind of an "all-purpose programming and IT management" position, but it's not officially IT because the company has an actual IT department with a rigorously defined system of submitting requests, developing code, and managing projects. What I do, I guess, is more of a handyman job, where stuff falls to me because I'm the geekiest one in the room. Is there a standard term in the software world for what I do?

    Read the article

  • Camera not working

    - by user17548
    I made a camera in DX9. To move forward I press the Up arrow. To rotate on the Y axis I use the mouse. When I perform these movements on their own the camera moves at the speed I want. However, if I hold down Up and move the mouse at the same time then the camera moves a lot faster than it should. I want it to move at the same speed as it does when only the Up arrow is pressed. I think I need to normalize something somewhere but not sure what and not sure where. Have tried various combinations without success so if anyone can point me in the right direction that would be great. Thanks. My code #define KEY_DOWN(vk_code) ((GetAsyncKeyState(vk_code) & 0x8000) ? 1 : 0) LRESULT WINAPI MsgProc( HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam ) { if( KEY_DOWN(VK_UP)) MovePlayer(D3DXVECTOR3(0, 0, -1.0f)); if( KEY_DOWN(VK_DOWN)) MovePlayer(D3DXVECTOR3(0, 0, 1.0f)); switch( msg ) { case WM_MOUSEMOVE: ProcessMouseInput(); } } void MovePlayer( D3DXVECTOR3 in_vec ) { D3DXMATRIX CameraRot; D3DXMatrixRotationY(&CameraRot,D3DXToRadian(AngleY)); D3DXVECTOR3 CameraRotTarget; D3DXVec3TransformNormal(&CameraRotTarget,&in_vec,&CameraRot); CameraPos += (m_timeElapsed * CameraRotTarget); } void ProcessMouseInput() { GetCursorPos( &CurrentMouseState ); if ((CurrentMouseState.x != GameMouseState.x) || (CurrentMouseState.y != GameMouseState.y)) { int dx = CurrentMouseState.x - GameMouseState.x; int dy = CurrentMouseState.y - GameMouseState.y; AngleY+=m_timeElapsed*dx*7.0f; } GameMouseState = CurrentMouseState; // Set back to window center in Render function } VOID UpdateCamera() { D3DXVECTOR3 CameraOrigTarget(0, 0, -1); D3DXVECTOR3 CameraOrigUp(0, 1, 0); D3DXMATRIX CameraRot; D3DXMATRIX CameraRotX; D3DXMatrixRotationX(&CameraRotX,D3DXToRadian(AngleX)); D3DXMATRIX CameraRotY; D3DXMatrixRotationY(&CameraRotY,D3DXToRadian(AngleY)); CameraRot = CameraRotX * CameraRotY; D3DXVECTOR3 CameraRotTarget; D3DXVec3TransformNormal(&CameraRotTarget,&CameraOrigTarget,&CameraRot); D3DXVECTOR3 CameraTarget; CameraTarget = CameraPos + CameraRotTarget; D3DXVECTOR3 vUpVec( 0.0f, 1.0f, 0.0f ); D3DXMatrixLookAtLH( &matView, &CameraPos, &CameraTarget, &vUpVec ); g_pd3dDevice->SetTransform( D3DTS_VIEW, &matView ); D3DXMatrixPerspectiveFovLH( &matProj, D3DX_PI / 4, 1.0f, 1.0f, 100.0f ); g_pd3dDevice->SetTransform( D3DTS_PROJECTION, &matProj ); }
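    A minimal sketch of one possible fix (my assumption, not from the original post): the speed difference is consistent with KEY_DOWN being evaluated inside MsgProc, which runs once per window message, so a burst of WM_MOUSEMOVE messages triggers extra MovePlayer calls. Polling the keys once per frame from the update loop keeps the message rate out of the movement speed; the UpdateInput name is made up for illustration.
        // Hypothetical per-frame polling; MovePlayer() already scales by m_timeElapsed,
        // so calling it exactly once per frame gives a constant forward speed no matter
        // how many window messages (e.g. WM_MOUSEMOVE) arrived during that frame.
        void UpdateInput()
        {
            if (KEY_DOWN(VK_UP))
                MovePlayer(D3DXVECTOR3(0, 0, -1.0f));
            if (KEY_DOWN(VK_DOWN))
                MovePlayer(D3DXVECTOR3(0, 0, 1.0f));
            // MsgProc would then only dispatch WM_MOUSEMOVE to ProcessMouseInput().
        }
    Called once from the render/update loop, UpdateCamera() and the rest of the code stay unchanged.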

    Read the article

  • Query specific logs from event log using nxlog

    - by user170899
    Below is my nxlog configuration:
        define ROOT C:\Program Files (x86)\nxlog
        Moduledir %ROOT%\modules
        CacheDir %ROOT%\data
        Pidfile %ROOT%\data\nxlog.pid
        SpoolDir %ROOT%\data
        LogFile %ROOT%\data\nxlog.log
        <Extension json>
            Module xm_json
        </Extension>
        <Input internal>
            Module im_internal
        </Input>
        <Input eventlog>
            Module im_msvistalog
            Query <QueryList>\
                      <Query Id="0">\
                          <Select Path="Security">*</Select>\
                      </Query>\
                  </QueryList>
        </Input>
        <Output out>
            Module om_tcp
            Host localhost
            Port 3515
            Exec $EventReceivedTime = integer($EventReceivedTime) / 1000000; \
                 to_json();
        </Output>
        <Route 1>
            Path eventlog, internal => out
        </Route>
    <Select Path="Security">*</Select> gets everything from the Security log, but my requirement is to get only specific events, starting with EventID 4663. How do I do this? Please help. Thanks.
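    For reference, a sketch of the eventlog input narrowed to a single event ID, assuming the standard Windows Event Log XPath syntax inside the same QueryList block (only the Select line changes; additional IDs can be chained with "or"):
        <Input eventlog>
            Module im_msvistalog
            Query <QueryList>\
                      <Query Id="0">\
                          <Select Path="Security">*[System[(EventID=4663)]]</Select>\
                      </Query>\
                  </QueryList>
        </Input>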

    Read the article

  • Trying to run an ASP.NET MVC application using Mono on Apache with FastCGI.

    - by Arda Xi
    I have a hosting account with DreamHost, and I would like to use the same account to run ASP.NET applications. I have an application deployed in a subdomain, a .htaccess with a handler like this: # Define the FastCGI Mono launcher as an Apache handler and let # it manage this web-application (its files and subdirectories) SetHandler monoWrapper Action monoWrapper /home/arienh4/<domain>/cgi-bin/mono.fcgi virtual My mono.fcgi is set up as such: #!/bin/sh #umask 0077 exec >>/home/arienh4/tmp/mono-fcgi.log exec 2>>/home/arienh4/tmp/mono-fcgi.err echo $(date +"[%F %T]") Starting fastcgi-mono-server2 cd / chmod 0700 /home/arienh4/tmp/mono-fcgi.sock echo $$>/home/arienh4/tmp/mono-fcgi.pid # stdin is the socket handle export PATH="/home/arienh4/mono/bin:$PATH" export LD_LIBRARY_PATH="/home/arienh4/mono/lib:$LD_LIBRARY_PATH" export TMP="/home/arienh4/tmp" export MONO_SHARED_DIR="/home/arienh4/tmp" exec /home/arienh4/mono/bin/mono /home/arienh4/mono/lib/mono/2.0/fastcgi-mono-server2.exe \ /logfile=/home/arienh4/logs/fastcgi-mono-web.log /loglevels=All \ /applications=/:/home/arienh4/<domain> I took this from the Mono site for CGI, I'm not sure if I'm doing it correctly though. This code is resulting in this error: Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace. I have no idea what's causing this. As far as I can see, Mono isn't even hit (no log files are created).

    Read the article

  • Installing ikiwiki on nginx - fastcgi/fcgi wrapper

    - by meder
    My ultimate goal is to setup ikiwiki, my current goal is to get a fcgi wrapper working for nginx, so I can move on to the next step... The ikiwiki page points out this page as an example for a fcgi wrapper: http://technotes.1000lines.net/?p=23 So far I've installed the ikiwiki and libfcgi-perl modules through aptitude: aptitude install libfcgi-perl aptitude install ikiwiki It installed those packages as well as some minimal dependency packages. So the next step following the guide at technotes, I grabbed http://technotes.1000lines.net/fastcgi-wrapper.pl but I'm not sure where to actually place this file... do I run it as a service? The script makes a socket file in /var/run/nginx but that directory does not exist.. do I manually create it? So in addition to the .pl file for the cgi wrapper, I need to also define a separate cgi file for parameters. If my conf looks like this... server { listen 80; server_name notes.domain.org; access_log /www/notes/public_html/notes.domain.org/log/access.log; error_log /www/notes/public_html/notes.domain.org/log/error.log; location / { root /www/notes/public_html/notes.domain.org/public/; index index.html; } } And I don't have a cgi-bin directory, where exactly should I create it within my structure, and regarding that I'd obviously have to update the below before I include it in my conf, but I'm just not exactly sure how this would work out. # /cgi-bin configuration location ~ ^/cgi-bin/.*\.cgi$ { gzip off; fastcgi_pass unix:/var/run/nginx/perl_cgi-dispatch.sock; [1]* fastcgi_param SCRIPT_FILENAME /www/blah.com$fastcgi_script_name; [2]* include fastcgi_params; [3]* } Also since the user is www-data and /var/run is root owned, what's the proper way of giving it access? Any tips appreciated.
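    On the socket-directory question, a small sketch of what I would try (the paths and the wrapper location are assumptions, and it presumes nginx runs as www-data): create /var/run/nginx by hand, give it to the web-server user, and start the wrapper as that user so the socket it creates is reachable by nginx.
        # create the directory the wrapper expects and hand it to the nginx user
        mkdir -p /var/run/nginx
        chown www-data:www-data /var/run/nginx
        chmod 0755 /var/run/nginx
        # run the wrapper as www-data (hypothetical location for the downloaded script)
        sudo -u www-data perl /usr/local/bin/fastcgi-wrapper.pl &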

    Read the article

  • PHP-FPM and APC for shared hosting?

    - by Tiffany Walker
    We are looking into finding a way to get APC to only create one cache per account / site. This can be done with Fastcgi (last update 2006…) but with Fastcgid APC will have to create multiple caches for multiple processes run by the same account. To get around this problem, we have been looking into PHP-FPM. The PHP process manager allows multiple PHP processes to share a single APC cache. But from what I have read (I hope I'm wrong), even if you create a pool per process, all sites across all pools will share the same APC cache. This brings us back to the same problem as with shared Memcached: it's not secure! On php-fpm's site I read that you can chroot php-fpm pools and define a specific UID and GID per pool… if this is the case then shouldn't APC have to use this user and not have access to other pools' caches? An article here (in 2011) suggests that you would need to run one process per pool, creating multiple launchers on different ports and different config files with one pool per config file: http://groups.drupal.org/node/198168 Is this still necessary? If so, what would be the impact of running, say, 800 processes of php-fpm? Would it be mainly memory? If so, how can I work out what the memory impact would be? I guess that it would be better to run php-fpm 800 times than to have accounts creating multiple APC caches for a single site? If on average an account creates a 50MB cache and creates 3 caches per account, that makes 150MB per account, which makes 120GB… However if each account uses on average only 50MB, that would make 40GB. We will have at least 128GB of RAM on our next server, so 40GB is acceptable if running 800 x PHP-FPM does not create an overhead of more than 20GB! What do you think: is PHP-FPM the best way to go to provide a secure APC cache on shared hosting with a server that has a decent amount of memory? Or should I be looking at another system? Thanks!
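    For reference, a sketch of what one pool per account might look like (pool name, paths and IDs are invented; user, group, listen and chroot are the per-pool directives discussed above, and whether APC actually stays private to the pool is exactly the open question):
        ; /etc/php-fpm.d/account1.conf  (hypothetical per-account pool)
        [account1]
        user = account1
        group = account1
        listen = /var/run/php-fpm-account1.sock
        chroot = /home/account1
        pm = dynamic
        pm.max_children = 5
        pm.start_servers = 2
        pm.min_spare_servers = 1
        pm.max_spare_servers = 3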

    Read the article

  • "Unable to open MRTG log file" error with nagios and mrtg

    - by Simone Magnaschi
    We have a strange issue with our setup of icinga / nagios and MRTG. Icinga is working great and has no problem; it can monitor basically everything without issues. We set up MRTG to gather bandwidth data from our routers and switches. MRTG is working fine: it stores the log data in the /var/www/mrtg/ directory and displays the graph data via the web, so we assume MRTG is doing fine. We tried to set up bandwidth checks in nagios:
        define service{
            use                   generic-service   ; Inherit values from a template
            host_name             zywall-agora
            service_description   ZYWALL AGORA TRAFFICO
            check_command         check_local_mrtgtraf!/var/www/mrtg/x.x.x.x_2.log!AVG!1000000,2000000!5000000,5000000!1000
            check_interval        1                 ; Check the service every 1 minute under normal conditions
            retry_interval        1                 ; Re-check every minute until its final/hard state is determined
        }
    Where /var/www/mrtg/x.x.x.x_2.log is the correct log file path. We keep getting an "Unable to open MRTG log file" error in the test result in the icinga web interface. We tried everything:
    - give ownership of the log file to user nagios or icinga
    - chmod 777 the file
    - copy the file to another directory and give it full permission
    Same error. The strange thing is that if we use the command that nagios generates in a bash session, the command works like a charm: /usr/lib64/nagios/plugins/check_mrtgtraf -F /var/www/mrtg/x.x.x.x_2.log -a AVG -w 10,20 -c 5000000,5000000 -e 10 Result: Traffic WARNING - Avg. In = 17.9 KB/s, Avg. Out = 5.0 KB/s|in=17.877930KB/s;10.000000;5000000.000000;0.000000 out=5.000000KB/s;20.000000;5000000.000000;0.000000 We ran that command line as root, as user nagios and as user icinga, and all three worked ok. We thought that the command that nagios performs maybe has something wrong in it, so we debugged nagios, but we found out that the generated command from nagios is the same as above. Searching on google for this kind of problem returns only issues of systems where MRTG is not installed or issues with the wrong path to the log file, but this seems not to be our case. We are stuck, can somebody help?

    Read the article

  • Simple image editor to select area of image as wallpaper

    - by Kevin
    I've spent way too many hours looking for software to do the following simple task, so now I'll ask. I need software that will open an image and put a 'crop box' on it:
    - You can set the 'crop box' to standard screen resolutions (1024x768) or define a custom one.
    - You can move the 'crop box' around on the image to select the area you want.
    - You can re-size the 'crop box' (selecting a corner and dragging with the mouse) and it maintains the correct aspect ratio.
    - You can save the area in the 'crop box' to use as the Windows background. (The software doesn't need to set it as the background; I can do that myself in Windows XP.)
    The free software sites (CNET.com, etc.) have lots of image editing software that does things much more complicated than this simple task. I've spent too many hours downloading them to see if they will do this particular task. The ones I've tried would require manual trial and error to get the part of an image that I want saved as an image, with the correct aspect ratio so Windows doesn't screw with it (stretch, tile, crop) when I select it as the wallpaper.

    Read the article

  • Nginx dynamic upstream configuration / routing

    - by Dan Sosedoff
    I was experimenting with dynamic upstream configuration for nginx and can't find any good solution to implement upstream configuration from a third-party source like redis or mysql. The idea behind it is to have a single configuration file on the primary server and proxy requests to various app servers based on environment conditions. Think of dynamic deployments where you have X servers that are running Y workers on different ports. For instance, I create a new app and deploy. The app manager selects a server and then rolls out a worker (Ruby/PHP/Python) and then reports the ip:port to the central database with status "up". At this time, when I go to the given URL, nginx should proxy all requests to the specified ip:port upstream. The whole thing is pretty similar to what heroku does, except this proof-of-concept is not supposed to be production ready, mostly for internal needs. The easiest solution I found was using a resolver with a ruby-based DNS server. It works, nginx gets the IP address correctly, but the only problem is that you can't define a port number for that IP. The second solution (which I haven't tried yet) is to roll something else as a proxy server, maybe written in Erlang. In this case we need to use something to serve static content. Any ideas how to implement this in a more flexible and stable way? P.S. Some research options: http://openresty.org/#DynamicRoutingBasedOnRedis https://github.com/nodejitsu/node-http-proxy

    Read the article

  • EWS connect to ExchangeServer authentication specifications

    - by dankyy1
    Hi all, I'm connecting to Exchange Server with username, password and domain properties (my code is below), but how do I define whether the server uses Kerberos, NTLM or Basic authentication, for example? Thanks. ExchangeServiceBinding binding = new ExchangeServiceBinding(); ServicePointManager.ServerCertificateValidationCallback = CertificateValidationCallBack; System.Net.WebProxy proxyObject = new System.Net.WebProxy(); proxyObject.Credentials = System.Net.CredentialCache.DefaultCredentials; if (string.IsNullOrEmpty(credentials.UserName) || string.IsNullOrEmpty(credentials.Password) || string.IsNullOrEmpty(credentials.Domain)) throw new ArgumentNullException("The Crediantial values could not be null or empty."); binding.Credentials = new NetworkCredential(credentials.UserName, credentials.Password, credentials.Domain); if (string.IsNullOrEmpty(serverURL)) throw new ArgumentNullException("The Exchange server Url could not be null or empty."); binding.Url = serverURL; binding.UseDefaultCredentials = true; binding.Proxy = proxyObject; //TO DO:take version over parameter..or configration!! binding.RequestServerVersionValue = new RequestServerVersion(); binding.RequestServerVersionValue.Version = (ExchangeVersionType)Enum.Parse(typeof(ExchangeVersionType), serverVersion);// ExchangeVersionType.Exchange2007_SP1;//.Exchange2010;
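    One possible direction (a sketch only, not verified against this binding class): instead of assigning a NetworkCredential directly, register it in a System.Net.CredentialCache under the scheme you want, since CredentialCache.Add takes the authentication type ("Basic", "NTLM", "Kerberos", "Negotiate") as an explicit argument. Turning UseDefaultCredentials off is assumed here so the explicit credentials are actually used.
        // Hedged sketch: choose the HTTP authentication scheme via CredentialCache.
        CredentialCache cache = new CredentialCache();
        cache.Add(new Uri(serverURL), "NTLM",   // or "Basic", "Kerberos", "Negotiate"
                  new NetworkCredential(credentials.UserName, credentials.Password, credentials.Domain));
        binding.Credentials = cache;
        binding.UseDefaultCredentials = false;  // let the explicit credential cache take effect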

    Read the article

  • NVidia ION and /dev/mapper/nvidia_... issues.

    - by Ritsaert Hornstra
    I have an NVidia ION board with 4 SATA ports and want to use that to run a Linux server (CentOS 5.4). I first hooked up 3 HDs (that will be a RAID5 array) and a fourth small boot HD. I first started to use the onboard RAID capability but that does not work correctly under Linux: the raid capability is not a real RAID but uses lvm to define some arrays. After setting the BIOS back to normal SATA mode and wiping the HDs, the first boot harddisk (/dev/sda) is seen as /dev/sda BEFORE mounting and after mounting as /dev/mapper/nvidia_. CentOS is unable to install on it (and grub is not installable on it either). So somehow the harddisk is still seen as if it belongs to some lvm volume. I tried to clean out the HD by issuing a few dd if=/dev/zero of=/dev/sda commands to wipe the starting cylinders and final cylinders but to no avail. Did anyone see this problem and did anyone find a solution? UPDATE When I create only a single ext3 partition on the first HD (/dev/mapper/nvidia_...) no LVM partitions are seen and I can boot from /dev/mapper/nvidia_.... Now the next step is to see how I can get rid of this folly.
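    A guess at a cleanup step (an assumption on my part, not confirmed in the post): the /dev/mapper/nvidia_* name is what dmraid creates when it finds leftover fakeraid metadata, which lives near the end of the disk and is easy to miss with dd over a few cylinders. If dmraid is what keeps claiming the disk, something along these lines should list and then erase that metadata:
        dmraid -r               # list block devices that still carry fakeraid metadata
        dmraid -rE /dev/sda     # erase the metadata from the disk (destructive; double-check the device name)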

    Read the article

  • Puppet: array in parameterized classes VS using resources

    - by Luke404
    I have some use cases where I want to define multiple similar resources that should end up in a single file (via a template). As an example I'm trying to write a puppet module that will let me manage the mapping between MAC addresses and network interface names (writing udev's persistent-net-rules file from puppet), but there are also many other similar usage cases. I searched around and found that it could be done with the new parameterised classes syntax: if implemented that way it should end up being used like this: node { "myserver.example.com": class { "network::iftab": interfaces => { "eth0" => { "mac" => "ab:cd:ef:98:76:54" } "eth1" => { "mac" => "98:76:de:ad:be:ef" } } } } Not too bad, I agree, but it would rapidly explode when you manage more complex stuff (think network configurations like in this module or any other multiple-complex-resources-in-a-single-config-file stuff). In a similar question on SF someone suggested using Pienaar's puppet-concat module but I doubt it could get any better than parameterised classes. What would be really cool and clean in the configuration definition would be something like the included host type, it's usage is simple, pretty and clean and naturally maps to multiple resources that will end up being configured in a single place. Transposed to my example it would be like: node { "myserver.example.com": interface { "eth0": "mac" => "ab:cd:ef:98:76:54", "foo" => "bar", "asd" => "lol", "eth1": "mac" => "98:76:de:ad:be:ef", "foo" => "rab", "asd" => "olo", } } ...that looks much better to my eyes, even with 3x options to each resource. Should I really be passing arrays to parameterised classes, or there is a better way to do this kind of stuff? Is there some accepted consensus in the puppet [users|developers] community? By the way, I'm referring to the latest stable release of the 2.7 branch and I am not interested in compatibility with older versions.
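    For comparison, a sketch of the defined-type route (names, file path and template are all made up, and it leans on the puppet-concat module mentioned above; a concat resource for the target file is assumed to be declared elsewhere):
        # Hypothetical defined type: one resource per interface, each dropping a fragment
        # into the persistent-net-rules file.
        define network::interface($mac, $foo = undef, $asd = undef) {
          concat::fragment { "iftab_${name}":
            target  => '/etc/udev/rules.d/70-persistent-net.rules',
            order   => $name,
            content => template('network/iftab_line.erb'),  # renders one rule line from $name and $mac
          }
        }

        node "myserver.example.com" {
          network::interface { "eth0": mac => "ab:cd:ef:98:76:54" }
          network::interface { "eth1": mac => "98:76:de:ad:be:ef" }
        }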

    Read the article

  • Routing Apache TracEnv

    - by fampinheiro
    Hello, i have a situation with many trac instances. They all have the same structure in the filesystem. PATH/trac1 PATH/trac2 PATH/trac3 i have this configuration <Location /trac/trac1> SetHandler mod_python PythonInterpreter main_interpreter PythonHandler trac.web.modpython_frontend PythonOption TracEnv PATH/trac1 PythonOption TracUriRoot /trac/trac1 PythonOption PYTHON_EGG_CACHE PATH/eggs/ </Location> <Location /trac/trac2> SetHandler mod_python PythonInterpreter main_interpreter PythonHandler trac.web.modpython_frontend PythonOption TracEnv PATH/trac2 PythonOption TracUriRoot /trac/trac2 PythonOption PYTHON_EGG_CACHE PATH/eggs/ </Location> <Location /trac/trac3> SetHandler mod_python PythonInterpreter main_interpreter PythonHandler trac.web.modpython_frontend PythonOption TracEnv PATH/trac3 PythonOption TracUriRoot /trac/trac3 PythonOption PYTHON_EGG_CACHE PATH/eggs/ </Location> i wonder if it's possible to do something like (TracEnvParentDir is not an option) <Location /trac/{ENV}> SetHandler mod_python PythonInterpreter main_interpreter PythonHandler trac.web.modpython_frontend PythonOption TracEnv PATH/{ENV} PythonOption TracUriRoot /trac/{ENV} PythonOption PYTHON_EGG_CACHE PATH/eggs/ </Location> Thank you for your time. EDIT: TracEnvParentDir is not an option because my structure is the following +---projs +---trac1 ¦ +---public [instance] ¦ +---t1 ¦ ¦ +---common [instance] ¦ ¦ +---g1 [instance] ¦ ¦ +---g2 [instance] ¦ ¦ +---g3 [instance] ¦ ¦ +---g4 [instance] ¦ ¦ +---g5 [instance] ¦ +---t2 ¦ ¦ +---common [instance] ¦ ¦ +---g1 [instance] ¦ ¦ +---g2 [instance] ¦ ¦ +---g3 [instance] ¦ ¦ +---g4 [instance] ¦ ¦ +---g5 [instance] ¦ +---t3 ¦ +---common [instance] ¦ +---g1 [instance] ¦ +---g2 [instance] ¦ +---g3 [instance] ¦ +---g4 [instance] ¦ +---g5 [instance] ¦ +---trac2 +---public [instance] +---t1 ¦ +---common [instance] ¦ +---g1 [instance] ¦ +---g2 [instance] ¦ +---g3 [instance] ¦ +---g4 [instance] ¦ +---g5 [instance] +---t2 ¦ +---common [instance] ¦ +---g1 [instance] ¦ +---g2 [instance] ¦ +---g3 [instance] ¦ +---g4 [instance] ¦ +---g5 [instance] +---t3 +---common [instance] +---g1 [instance] +---g2 [instance] +---g3 [instance] +---g4 [instance] +---g5 [instance] I use the TracEnvParentDir on t1, t2 and t3 and TracEnv on trac1/public and trac2/public I wonder if it's possible to define a part of the url variable.

    Read the article

  • Windows 7, network connection with no default gateway: any way to change the "Unknown network" statu

    - by e-t172
    Hi, I have a computer running Windows 7 Pro RTM. This computer has two network connections: A Wi-fi connection to the Internet (through a home router) which works just fine. An OpenVPN virtual network connection. More precisely, this is a virtual Ethernet connection which behaves exactly like a physical Ethernet wired connection. My problem is that the "Network and sharing center" shows "Unknown network" for the OpenVPN connection. After some research I found that logical networks (outside a domain) are identified by the MAC address of the default gateway of the connection. Problem is, the OpenVPN connection has no default gateway: it is a private network, so I don't need one... Consequently, the "Unknown network" is always considered public, so the firewall is always in "public mode", which I don't want. Plus, I can't rename "Unknown connection" or anything (which makes sense), so it is kinda ugly. My goal is to define a proper logical network for the OpenVPN connection with the private profile. I know of some workarounds (disable the firewall, modify security policy to make all unknown networks "private") but they're still workarounds. I just want my clients to connect to the VPN without having to disable their firewall settings, without changing global configuration with potential side-effects (the "security policy" solution) and without having to look at an ugly "Unknown connection" in the Network and sharing center. Is there any way I can do this? I tried to check what was going on in the registry (HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\NetworkList is interesting), but I still didn't find a way to "force" the OpenVPN connection to be assigned to a logical network. Any help would be very appreciated. A related question showed up at Superuser: http://superuser.com/questions/37355/windows-7-cant-identify-network/37422

    Read the article

  • TinyDNS and proper settings for SPF records

    - by Teddy
    I've inherited a TinyDNS configuration that has the following entries for SPF: @domain.com:x.x.x.3:a::86400 @domain.com:x.x.x.103:c:10:86400 =domain.com:x.x.x.3:86400 =mail.domain.com:x.x.x.3:86400 =mail.domain.com:x.x.x.103:86400 'domain.com:v=spf1 ip4\072x.x.x.3 ip4\07231.130.96.103 ptr\072mail.domain.com +mx a -all:3600 'mail.domain.com:v=spf1 ip4\072x.x.x.3 ip4\072x.x.x.103 ptr\072mail.domain.com +mx a -all:3600 'a.mx.domain.com:v=spf1 ip4\072x.x.x.3 ip4\072x.x.x.103 ptr\072mail.domain.com +mx a -all:3600 This is the result from http://www.kitterman.com/spf/validate.html SPF record lookup and validation for: domain.com SPF records are primarily published in DNS as TXT records. The TXT records found for your domain are: v=spf1 ip4:x.x.x.3 ip4:x.x.x.103 ptr:mail.domain.com +mx a -all SPF records should also be published in DNS as type SPF records. No type SPF records found. Checking to see if there is a valid SPF record. Found v=spf1 record for domain.com: v=spf1 ip4:x.x.x.3 ip4:x.x.x.103 ptr:mail.domain.com +mx a -all evaluating... SPF record passed validation test with pySPF (Python SPF library)! I've been struggling with this since yesterday and can't figure out why this validator returns "No type SPF records found." I see that in BIND we can define an SPF-type record with example.com. IN SPF "v=spf1 a -all", but in TinyDNS we only have TXT records that we set for SPF; maybe this is the problem?
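    A sketch of how a type-99 record could be expressed with tinydns-data's generic record syntax (hedged: the :fqdn:n:rdata:ttl line type is standard and type 99 is the SPF RR, and the record data uses the same wire format as TXT, i.e. a length byte followed by the text; \NNN below is a placeholder for the octal length byte you would have to compute yourself):
        # hypothetical generic record publishing the same policy as a type SPF (99) record
        :domain.com:99:\NNNv=spf1 ip4\072x.x.x.3 ip4\07231.130.96.103 ptr\072mail.domain.com +mx a -all:3600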

    Read the article

  • Strange Jmeter connection refuse on Tomcat

    - by Tommy
    I tried difference setting in Jmeter and Tomcat. If the Threads number in JMeter is 1~200, Then tomcat is okay. If It is 300, Then after serving few requests, tomcat starts to output errors. Here is the error show in JMeter java.net.ConnectException: Connection refused: connect at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.PlainSocketImpl.doConnect(Unknown Source) at java.net.PlainSocketImpl.connectToAddress(Unknown Source) at java.net.PlainSocketImpl.connect(Unknown Source) at java.net.SocksSocketImpl.connect(Unknown Source) at java.net.Socket.connect(Unknown Source) at java.net.Socket.connect(Unknown Source) at sun.net.NetworkClient.doConnect(Unknown Source) at sun.net.www.http.HttpClient.openServer(Unknown Source) at sun.net.www.http.HttpClient.openServer(Unknown Source) at sun.net.www.http.HttpClient.<init>(Unknown Source) at sun.net.www.http.HttpClient.New(Unknown Source) at sun.net.www.http.HttpClient.New(Unknown Source) at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(Unknown Source) at sun.net.www.protocol.http.HttpURLConnection.plainConnect(Unknown Source) at sun.net.www.protocol.http.HttpURLConnection.connect(Unknown Source) at org.apache.jmeter.protocol.http.sampler.HTTPJavaImpl.sample(HTTPJavaImpl.java:483) at org.apache.jmeter.protocol.http.sampler.HTTPSamplerProxy.sample(HTTPSamplerProxy.java:62) at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1018) at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1004) at org.apache.jmeter.threads.JMeterThread.process_sampler(JMeterThread.java:411) at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:297) at java.lang.Thread.run(Unknown Source) My tomcat server.xml in eclipse <!--The connectors can use a shared executor, you can define one or more named thread pools--> <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="2000" minSpareThreads="250" acceptCount="2000"/> <Connector executor="tomcatThreadPool" URIEncoding="UTF-8" connectionTimeout="20000" port="8080" protocol="HTTP/1.1" redirectPort="8443" /> Any idea why this is happening ? How do i check the server.xml is correctly used? It is a JSF2 application if it helps. Thanks in advance.

    Read the article

  • An XKB keyboard map that responds to the left and right shift key individually

    - by mbfisher
    First off, excuse my ignorance of X and XKB; I've been trying to hack together a solution in the hope of being able to achieve what I want without requiring a detailed grasp of it. I'm trying to create an XKB keyboard map on Ubuntu 12.04 that allows me to stipulate which of the two shift keys constitutes the Level2 modifier. Specifically, the 4 key should only produce a $ when the right shift is held, not the left. My reading so far: http://www.charvolant.org/~doug/xkb/html/node5.html http://people.uleth.ca/~daniel.odonnell/Blog/custom-keyboard-in-linuxx11 http://www.x.org/releases/X11R7.5/doc/input/XKB-Enhancing.html Lots of searching! I've attempted to define a custom type, and then refer to it explicitly in a symbols map: /usr/share/X11/xkb/types/mbfisher: default xkb_types "mbfisher" { type "RIGHT_SHIFT" { modifiers = None+Shift_R; map[None] = Level1; map[Shift_R] = Level2; }; } /usr/share/X11/xkb/symbols/mbfisher: default partial alphanumeric_keys xkb_symbols "basic" { name[Group1]= "mbfisher"; key <AE04> { type= "RIGHT_SHIFT", symbols[Group1]= [ 4, dollar ] }; }; I'm then selecting the map with the Ubuntu Keyboard Layout GUI. This obviously disables the alphanumeric keyboard apart from the 4 key, but the dollar sign can still be typed with either shift key. I'm conscious of writing a massive question with lots of useless information so I'll stop here; please ask for anything I've missed out. Any ideas?

    Read the article

  • Bacula & Multiple Tape Devices, and so on

    - by Tom O'Connor
    Bacula won't make use of 2 tape devices simultaneously. (Search for #-#-# for the TL;DR) A little background, perhaps. In the process of trying to get a decent working backup solution (backing up 20TB ain't cheap, or easy) at $dayjob, we bought a bunch of things to make it work. Firstly, there's a Spectra Logic T50e autochanger, 40 slots of LTO5 goodness, and that robot's got a pair of IBM HH5 Ultrium LTO5 drives, connected via FibreChannel Arbitrated Loop to our backup server. There's the backup server.. A Dell R715 with 2x 16 core AMD 62xx CPUs, and 32GB of RAM. Yummy. That server's got 2 Emulex FCe-12000E cards, and an Intel X520-SR dual port 10GE NIC. We were also sold Commvault Backup (non-NDMP). Here's where it gets really complicated. Spectra Logic and Commvault both sent respective engineers, who set up the library and the software. Commvault was running fine, in so far as the controller was working fine. The Dell server has Ubuntu 12.04 server, and runs the MediaAgent for CommVault, and mounts our BlueArc NAS as NFS to a few mountpoints, like /home, and some stuff in /mnt. When backing up from the NFS mountpoints, we were seeing ~= 290GB/hr throughput. That's CRAP, considering we've got 20-odd TB to get through, in a <48 hour backup window. The rated maximum on the BlueArc is 700MB/s (2460GB/hr), the rated maximum write speed on the tape devices is 140MB/s, per drive, so that's 492GB/hr (or double it, for the total throughput). So, the next step was to benchmark NFS performance with IOzone, and it turns out that we get epic write performance (across 20 threads), and it's like 1.5-2.5TB/hr write, but read performance is fecking hopeless. I couldn't ever get higher than 343GB/hr maximum. So let's assume that the 343GB/hr is a theoretical maximum for read performance on the NAS, then we should in theory be able to get that performance out of a) CommVault, and b) any other backup agent. Not the case. Commvault seems to only ever give me 200-250GB/hr throughput, and out of experimentation, I installed Bacula to see what the state of play there is. If, for example, Bacula gave consistently better performance and speeds than Commvault, then we'd be able to say "**$.$ Refunds Plz $.$**" #-#-# Alas, I found a different problem with Bacula. Commvault seems pretty happy to read from one part of the mountpoint with one thread, and stream that to a Tape device, whilst reading from some other directory with the other thread, and writing to the 2nd drive in the autochanger. I can't for the life of me get Bacula to mount and write to two tape drives simultaneously. Things I've tried: Setting Maximum Concurrent Jobs = 20 in the Director, File and Storage Daemons Setting Prefer Mounted Volumes = no in the Job Definition Setting multiple devices in the Autochanger resource. Documentation seems to be very single-drive centric, and we feel a little like we've strapped a rocket to a hamster, with this one. The majority of example Bacula configurations are for DDS4 drives, manual tape swapping, and FreeBSD or IRIX systems. I should probably add that I'm not too bothered if this isn't possible, but I'd be surprised. I basically want to use Bacula as proof to stick it to the software vendors that they're overpriced ;) I read somewhere that @KyleBrandt has done something similar with a modern Tape solution.. 
Configuration Files: *bacula-dir.conf* # # Default Bacula Director Configuration file Director { # define myself Name = backuphost-1-dir DIRport = 9101 # where we listen for UA connections QueryFile = "/etc/bacula/scripts/query.sql" WorkingDirectory = "/var/lib/bacula" PidDirectory = "/var/run/bacula" Maximum Concurrent Jobs = 20 Password = "yourekiddingright" # Console password Messages = Daemon DirAddress = 0.0.0.0 #DirAddress = 127.0.0.1 } JobDefs { Name = "DefaultFileJob" Type = Backup Level = Incremental Client = backuphost-1-fd FileSet = "Full Set" Schedule = "WeeklyCycle" Storage = File Messages = Standard Pool = File Priority = 10 Write Bootstrap = "/var/lib/bacula/%c.bsr" } JobDefs { Name = "DefaultTapeJob" Type = Backup Level = Incremental Client = backuphost-1-fd FileSet = "Full Set" Schedule = "WeeklyCycle" Storage = "SpectraLogic" Messages = Standard Pool = AllTapes Priority = 10 Write Bootstrap = "/var/lib/bacula/%c.bsr" Prefer Mounted Volumes = no } # # Define the main nightly save backup job # By default, this job will back up to disk in /nonexistant/path/to/file/archive/dir Job { Name = "BackupClient1" JobDefs = "DefaultFileJob" } Job { Name = "BackupThisVolume" JobDefs = "DefaultTapeJob" FileSet = "SpecialVolume" } #Job { # Name = "BackupClient2" # Client = backuphost-12-fd # JobDefs = "DefaultJob" #} # Backup the catalog database (after the nightly save) Job { Name = "BackupCatalog" JobDefs = "DefaultFileJob" Level = Full FileSet="Catalog" Schedule = "WeeklyCycleAfterBackup" # This creates an ASCII copy of the catalog # Arguments to make_catalog_backup.pl are: # make_catalog_backup.pl <catalog-name> RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog" # This deletes the copy of the catalog RunAfterJob = "/etc/bacula/scripts/delete_catalog_backup" Write Bootstrap = "/var/lib/bacula/%n.bsr" Priority = 11 # run after main backup } # # Standard Restore template, to be changed by Console program # Only one such job is needed for all Jobs/Clients/Storage ... # Job { Name = "RestoreFiles" Type = Restore Client=backuphost-1-fd FileSet="Full Set" Storage = File Pool = Default Messages = Standard Where = /srv/bacula/restore } FileSet { Name = "SpecialVolume" Include { Options { signature = MD5 } File = /mnt/SpecialVolume } Exclude { File = /var/lib/bacula File = /nonexistant/path/to/file/archive/dir File = /proc File = /tmp File = /.journal File = /.fsck } } # List of files to be backed up FileSet { Name = "Full Set" Include { Options { signature = MD5 } File = /usr/sbin } Exclude { File = /var/lib/bacula File = /nonexistant/path/to/file/archive/dir File = /proc File = /tmp File = /.journal File = /.fsck } } Schedule { Name = "WeeklyCycle" Run = Full 1st sun at 23:05 Run = Differential 2nd-5th sun at 23:05 Run = Incremental mon-sat at 23:05 } # This schedule does the catalog. 
It starts after the WeeklyCycle Schedule { Name = "WeeklyCycleAfterBackup" Run = Full sun-sat at 23:10 } # This is the backup of the catalog FileSet { Name = "Catalog" Include { Options { signature = MD5 } File = "/var/lib/bacula/bacula.sql" } } # Client (File Services) to backup Client { Name = backuphost-1-fd Address = localhost FDPort = 9102 Catalog = MyCatalog Password = "surelyyourejoking" # password for FileDaemon File Retention = 30 days # 30 days Job Retention = 6 months # six months AutoPrune = yes # Prune expired Jobs/Files } # # Second Client (File Services) to backup # You should change Name, Address, and Password before using # #Client { # Name = backuphost-12-fd # Address = localhost2 # FDPort = 9102 # Catalog = MyCatalog # Password = "i'mnotjokinganddontcallmeshirley" # password for FileDaemon 2 # File Retention = 30 days # 30 days # Job Retention = 6 months # six months # AutoPrune = yes # Prune expired Jobs/Files #} # Definition of file storage device Storage { Name = File # Do not use "localhost" here Address = localhost # N.B. Use a fully qualified name here SDPort = 9103 Password = "lalalalala" Device = FileStorage Media Type = File } Storage { Name = "SpectraLogic" Address = localhost SDPort = 9103 Password = "linkedinmakethebestpasswords" Device = Drive-1 Device = Drive-2 Media Type = LTO5 Autochanger = yes } # Generic catalog service Catalog { Name = MyCatalog # Uncomment the following line if you want the dbi driver # dbdriver = "dbi:sqlite3"; dbaddress = 127.0.0.1; dbport = dbname = "bacula"; DB Address = ""; dbuser = "bacula"; dbpassword = "bbmaster63" } # Reasonable message delivery -- send most everything to email address # and to the console Messages { Name = Standard mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r" operatorcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r" mail = root@localhost = all, !skipped operator = root@localhost = mount console = all, !skipped, !saved # # WARNING! the following will create a file that you must cycle from # time to time as it will grow indefinitely. However, it will # also keep all your messages if they scroll off the console. # append = "/var/lib/bacula/log" = all, !skipped catalog = all } # # Message delivery for daemon messages (no job). 
Messages { Name = Daemon mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula daemon message\" %r" mail = root@localhost = all, !skipped console = all, !skipped, !saved append = "/var/lib/bacula/log" = all, !skipped } # Default pool definition Pool { Name = Default Pool Type = Backup Recycle = yes # Bacula can automatically recycle Volumes AutoPrune = yes # Prune expired volumes Volume Retention = 365 days # one year } # File Pool definition Pool { Name = File Pool Type = Backup Recycle = yes # Bacula can automatically recycle Volumes AutoPrune = yes # Prune expired volumes Volume Retention = 365 days # one year Maximum Volume Bytes = 50G # Limit Volume size to something reasonable Maximum Volumes = 100 # Limit number of Volumes in Pool } Pool { Name = AllTapes Pool Type = Backup Recycle = yes AutoPrune = yes # Prune expired volumes Volume Retention = 31 days # one Moth } # Scratch pool definition Pool { Name = Scratch Pool Type = Backup } # # Restricted console used by tray-monitor to get the status of the director # Console { Name = backuphost-1-mon Password = "LastFMalsostorePasswordsLikeThis" CommandACL = status, .status } bacula-sd.conf # # Default Bacula Storage Daemon Configuration file # Storage { # definition of myself Name = backuphost-1-sd SDPort = 9103 # Director's port WorkingDirectory = "/var/lib/bacula" Pid Directory = "/var/run/bacula" Maximum Concurrent Jobs = 20 SDAddress = 0.0.0.0 # SDAddress = 127.0.0.1 } # # List Directors who are permitted to contact Storage daemon # Director { Name = backuphost-1-dir Password = "passwordslinplaintext" } # # Restricted Director, used by tray-monitor to get the # status of the storage daemon # Director { Name = backuphost-1-mon Password = "totalinsecurityabound" Monitor = yes } Device { Name = FileStorage Media Type = File Archive Device = /srv/bacula/archive LabelMedia = yes; # lets Bacula label unlabeled media Random Access = Yes; AutomaticMount = yes; # when device opened, read it RemovableMedia = no; AlwaysOpen = no; } Autochanger { Name = SpectraLogic Device = Drive-1 Device = Drive-2 Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d" Changer Device = /dev/sg4 } Device { Name = Drive-1 Drive Index = 0 Archive Device = /dev/nst0 Changer Device = /dev/sg4 Media Type = LTO5 AutoChanger = yes RemovableMedia = yes; AutomaticMount = yes; AlwaysOpen = yes; RandomAccess = no; LabelMedia = yes } Device { Name = Drive-2 Drive Index = 1 Archive Device = /dev/nst1 Changer Device = /dev/sg4 Media Type = LTO5 AutoChanger = yes RemovableMedia = yes; AutomaticMount = yes; AlwaysOpen = yes; RandomAccess = no; LabelMedia = yes } # # Send all messages to the Director, # mount messages also are sent to the email address # Messages { Name = Standard director = backuphost-1-dir = all } bacula-fd.conf # # Default Bacula File Daemon Configuration file # # # List Directors who are permitted to contact this File daemon # Director { Name = backuphost-1-dir Password = "hahahahahaha" } # # Restricted Director, used by tray-monitor to get the # status of the file daemon # Director { Name = backuphost-1-mon Password = "hohohohohho" Monitor = yes } # # "Global" File daemon configuration specifications # FileDaemon { # this is me Name = backuphost-1-fd FDport = 9102 # where we listen for the director WorkingDirectory = /var/lib/bacula Pid Directory = /var/run/bacula Maximum Concurrent Jobs = 20 #FDAddress = 127.0.0.1 FDAddress = 0.0.0.0 } # Send all messages except skipped files back to Director Messages { Name = 
Standard director = backuphost-1-dir = all, !skipped, !restored }

    Read the article

  • Workstations cannot see new MS Server 2008 domain, but can access DHCP.

    - by Radix
    The XP Pro workstations do not see the new replacement domain upon boot; they only see their cached entry for the old (server 2003) domain controller. The old_server is not connected to the network. I have DHCP working with the same scope as the old_server. In my "before-asking" search for a solution I came across the following two articles, and I recall doing things as suggested by the articles. http://www.windowsreference.com/windows-server-2008/how-to-setup-dhcp-server-in-windows-server-2008-step-by-step-guide/ http://www.windowsreference.com/windows-server-2008/step-by-step-guide-for-windows-server-2008-domain-controller-and-dns-server-setup/ The only possible issue is: I was under the impression that the domain netbios needed to match the DC's netbios. The DC netbios is city01 while the domain's FQDN is city.domain.org (I think this is mistaken and should have been just domain.org) But, the second link led me to a post which I believe answers my question. I did as they instructed by opening Local Area Connection Properties, then selecting TCP/IPv4 and setting the sole preferred DNS server to the local hosts static IP (10.10.1.1). Search for "Your problems should clear up" for the post I'm referencing: http://forums.techarena.in/active-directory/1032797.htm Have I misunderstood their instructions? I am hoping to reach the point where I can define users and user groups. Also, does TechNet have a single theoretical overview document I could read. I really don't like treating comps as magic. I will be watching this closely and will quickly answer any questions. If I've left anything out it is because I did not know it was needed. PS: I am loath to ask obviously basic questions, but I am tired and wish to fix this before tomorrow. Also, this is my first server installation, thank you for your help.

    Read the article

  • Debian/Redmine: Upgrade multiple instances at once

    - by Davey
    I have multiple Redmine instances. Let's call them InstanceA and InstanceB. InstanceA and InstanceB share the same Redmine installation on Debian. Suppose I would want to install Redmine 1.3 on both instances, how would I do that? After upgrading the core files I would have to migrate the databases. What I would like to know is: can I migrate all databases in a single action? Normally I would do something like: rake -s db:migrate RAILS_ENV=production X_DEBIAN_SITEID=InstanceA for each instance, but this would get tedious if you have 50+ instances. Thanks in advance! Edit: The README.Debian file that's in the (Debian) Redmine package states: SUPPORTS SETUP AND UPGRADES OF MULTIPLE DATABASE INSTANCES This redmine package is designed to automatically configure database BUT NOT the web server. The default database instance is called "default". A debconf facility is provided for configuring several redmine instances. Use dpkg-reconfigure to define the instances identifiers. But can't figure out what to do with the "debconf facility". Edit2: My environment is a default Debian 6.0 "Squeeze" installation with a default Redmine (aptitude install redmine) installation on a default libapache2-mod-passenger. I have setup two instances with dpkg-reconfigure redmine.
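    A sketch of the loop I would try (assuming the instance identifiers can be listed somehow, e.g. from a per-instance configuration directory; the path used to enumerate them below is an assumption):
        #!/bin/sh
        # Hypothetical: run the migration for every Redmine instance identifier.
        INSTANCES="InstanceA InstanceB"            # or e.g.: INSTANCES=$(ls /etc/redmine/)
        for id in $INSTANCES; do
            rake -s db:migrate RAILS_ENV=production X_DEBIAN_SITEID="$id"
        done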

    Read the article

  • Testlink stop working when configuring it to use LDAP

    - by YuriAlbuquerque
    I have a TestLink webservice running on a server, and OpenLDAP running on other server. There are no firewall problems between them (I managed to configure Redmine, on the same server as TestLink, to use LDAP authentication). But whenever I place the configuration for LDAP in TestLink, TestLink stops working. I have no clue on what is happening. This is where I define LDAP's settings on custom_config.inc.php: $tlCfg->authentication['method'] = 'LDAP'; $tlCfg->authentication['ldap_server'] = 'serverip'; $tlCfg->authentication['ldap_port'] = '389'; $tlCfg->authentication['ldap_version'] = '2'; $tlCfg->authentication['ldap_root_dn'] = 'dc=mycompany,dc=com,dc=br'; $tlCfg->ldap_organization'] = ''; $tlCfg->authentication['ldap_uid_field'] = 'uid'; $tlCfg->authentication['ldap_bind_dn'] = 'myuser'; //Not actual login name and password, for obvious reasons $tlCfg->authentication['ldap_bind_passwd'] = 'mypassword'; $tlCfg->authentication['ldap_tls'] = false; $tlCfg->user_self_signup = true; I'm certain that OpenLDAP is 2.X. My TestLink version is 1.9.3 What could I be doing wrong?

    Read the article

  • Hyperic HQ- Monitor process statistics for 50+ processes on Linux machine

    - by Chris
    Is there an easy way to get metrics on all processes that start with the letters XYZ? I have about 80 processes that I have to monitor individually that all start with the prefix XYZ. I have created a query using the sigar shell: ps State.Name.sw=XYZ, which will give me a list of the processes that I want. What I need to do is define this list of processes through said query and collect and track statistics from the Process service: http://support.hyperic.com/display/hypcomm/Process+service What I need is 3 or 4 key statistics for each of the XYZ processes defined by my query to show up as graphs in the web front end. Note: Hyperic HQ server is installed on a windows machine and I'm monitoring a Linux box via an agent. Thanks, Chris Edit: Here is my try at a plugin that may give me what I want, but it's not being inventoried/detected by the Hyperic web UI. Simply pointing me to one of Hyperic's tutorials won't do. Thanks. <!DOCTYPE plugin [ <!ENTITY process-metrics SYSTEM "/pdk/plugins/process-metrics.xml">]> <plugin> <server name="ABCStats"> <config> <option name="process.query" description="Process Query" default="State.Name.sw=XYZ"/> </config> <metric name="Availability" alias="Availability" template="sigar:Type=ProcState,Arg=%process.query%:State" category="AVAILABILITY" indicator="true" units="percentage" collectionType="dynamic"/> &process-metrics; <plugin type="autoinventory"/> <plugin type="measurement" class="org.hyperic.hq.product.MeasurementPlugin"/> </server> </plugin>

    Read the article

  • Connecting to RDS database from EC2 instance using bind9 CNAME alias

    - by mptre
    I'm trying to get internal DNS up and running on an EC2 instance. The main goal is to be able to define CNAME aliases for other AWS services. For example: instead of using the RDS endpoint, which might change over time, an alias mysql.company.int can be used. I'm using bind9 and here are my config files: /etc/bind/named.conf.local zone "company.int" { type master; file "/etc/bind/db.company.int"; }; /etc/bind/db.company.int ; $TTL 3600 @ IN SOA company.int. company.localhost. ( 20120617 ; Serial 604800 ; Refresh 86400 ; Retry 2419200 ; Expire 604800 ) ; Negative Cache TTL ; @ IN NS company.int. @ IN A 127.0.0.1 @ IN AAAA ::1 ; CNAME mysql IN CNAME xxxx.eu-west-1.rds.amazonaws.com. The dig command assures me my alias is working as expected: $ dig mysql.company.int ... ;; ANSWER SECTION: mysql.company.int. 3600 IN CNAME xxxx.eu-west-1.rds.amazonaws.com. xxxx.eu-west-1.rds.amazonaws.com. 60 IN CNAME ec2-yyy-yy-yy-yyy.eu-west-1.compute.amazonaws.com. ec2-yyy-yy-yy-yyy.eu-west-1.compute.amazonaws.com. 589575 IN A zzz.zz.zz.zzz ... As far as I can understand, a reverse zone isn't needed for a simple CNAME alias. However, when I try to connect to MySQL using my newly created alias, the operation gives me a timeout. $ mysql -uuser -ppassword -hmysql.company.int ERROR 2003 (HY000): Can't connect to MySQL server on 'mysql.company.int' (110) Any ideas? Thanks in advance!

    Read the article

  • Amazon EC2 Reserved Instances: "Heavy Utilization" clarification

    - by gravyface
    Should be another easy one here, but I need clarification on what they define as "heavy utilization" for Reserved Instance types. From their Website: Heavy Utilization RIs – Heavy Utilization RIs offer the most absolute savings of any Reserved Instance type. They’re most appropriate for steady-state workloads where you’re willing to commit to always running these instances in exchange for our lowest hourly usage fee. With this RI, you pay a little higher upfront payment than Medium Utilization RIs, a significantly lower hourly usage fee, and you’re charged that lower hourly rate for every hour in the Reserved Instance term you purchase. Using Heavy Utilization RIs, you can save up to 41% for a 1-year term and 58% for a 3-year term vs. running On-Demand Instances. If you’re trying to find a break-even utilization, you’re economically advantaged using Heavy Utilization RIs (vs. On-Demand Instances) if you plan to use your instance more than 43% of a 1-year term or 79% of a 3-year term. I'm assuming that, if I'm planning on running a 24/7 Web Server, then regardless of how many resources I consume (bandwidth, cpu cycles, memory), I would want to go with a Heavy Utilization Reserved Instance? This one Web Server in particular will likely barely budge the cpu, but it needs to be up and running 24/7. Not 100% on what they're defining as "heavy".

    Read the article
