Search Results

Search found 5011 results on 201 pages for 'grand master t'.


  • How to set up Git on remote instance using keys from local machine?

    - by Lucas
    I have a setup where I can SSH into my remote server (i.e. a Google Compute instance) from my local machine. I used to be able to clone, push, and pull from a repository on my remote instance without adding any keys to the remote instance, nor adding any new keys to my repository online (just the public key from my local machine). I believe the remote instance was using the keys from my local machine to authenticate my Git pushes and pulls. However, the system broke when I reinstalled the OS on my local machine. Now when I try to connect to the GitHub server from my remote instance, I get the following.

    Cannot clone:

        [lucas@ecoinstance]~/node$ git clone [email protected]:lucasExample/test.git test
        Cloning into 'test'...
        Permission denied (publickey).
        fatal: The remote end hung up unexpectedly

    Cannot push:

        [lucas@ecoinstance]~/node/nodetest1$ git status
        # On branch master
        # Your branch is ahead of 'origin/master' by 1 commit.
        # nothing to commit (working directory clean)
        [lucas@ecoinstance]~/node/nodetest1$ git push
        Permission denied (publickey).
        fatal: The remote end hung up unexpectedly

    Additional info:

        [lucas@ecoinstance]~/node/nodetest1$ ssh-add -l
        Could not open a connection to your authentication agent.
        [lucas@ecoinstance]~/.ssh$ ls
        authorized_keys  known_hosts

    As you can see, I have no keys on my remote instance. I have never had keys on the remote, and it would push and pull just fine until I reinstalled my local OS. I can still clone, push, and pull on my local machine; it is just my remote machine that cannot authenticate. My local OS is Ubuntu 14.04 and my remote OS is Debian Wheezy. Any suggestions would be great. I am not sure how to search for this concept where I can authenticate from a remote instance via my local machine, so any references are appreciated as well.
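
    The mechanism described here is SSH agent forwarding: the remote instance relays key challenges back to the agent on the local machine, so no private key ever has to live on the remote. A minimal sketch of restoring it after the OS reinstall, assuming the new GitHub key is ~/.ssh/id_rsa (the host alias is hypothetical):

        # on the local machine: load the key into the agent, then forward it
        ssh-add ~/.ssh/id_rsa
        ssh -A lucas@ecoinstance

        # or make forwarding permanent in ~/.ssh/config on the local machine:
        # Host ecoinstance
        #     ForwardAgent yes

        # on the remote instance, the forwarded key should now be listed
        ssh-add -l

    The "Could not open a connection to your authentication agent" output above is consistent with forwarding being absent: the fresh local OS no longer offers an agent socket to the remote session.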

  • Fix Corrupted Ruby in Mac OS X Lion

    - by luckyb56
    I screwed up my Ruby by executing the command:

        sudo easy_install pip > /usr/bin/ruby -e "$(/usr/bin/curl -fksSL https://raw.github.com/mxcl/homebrew/master/Library/Contributions/install_homebrew.rb)"

    It showed:

        error: Couldn't find index page for '-e' (maybe misspelled?)
        No local packages or download links found for -e
        error: Could not find suitable distribution for Requirement.parse('-e')

    After that, when I tried to install Homebrew with:

        /usr/bin/ruby -e "$(/usr/bin/curl -fksSL https://raw.github.com/mxcl/homebrew/master/Library/Contributions/install_homebrew.rb)"

    it showed errors I can't make sense of:

        /usr/bin/ruby: line 1: Searching: command not found
        /usr/bin/ruby: line 2: Best: command not found
        /usr/bin/ruby: line 3: Processing: command not found
        Usage: pip COMMAND [OPTIONS]
        pip: error: No command by the name pip 1.1 (maybe you meant "pip install 1.1")
        /usr/bin/ruby: line 5: Installing: command not found
        /usr/bin/ruby: line 6: Installing: command not found
        /usr/bin/ruby: line 8: Using: command not found
        /usr/bin/ruby: line 9: Processing: command not found
        /usr/bin/ruby: line 10: Finished: command not found
        /usr/bin/ruby: line 11: Searching: command not found
        /usr/bin/ruby: line 12: Reading: command not found
        /usr/bin/ruby: line 13: syntax error near unexpected token `('
        /usr/bin/ruby: line 13: `Scanning index of all packages (this may take a while)'

    Can this be fixed?
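
    A plausible reading of those errors: the ">" in the first command redirected easy_install's log output into /usr/bin/ruby, overwriting the Ruby binary with plain text, which the shell then tried to execute line by line (hence "Searching: command not found"). A quick way to confirm, as a sketch:

        # if ruby was clobbered, file(1) will report text instead of a Mach-O binary
        file /usr/bin/ruby
        head -n 3 /usr/bin/ruby

    If that is what happened, the binary itself has to be restored (e.g. from a Time Machine backup or the OS X install media); reinstalling pip or Homebrew alone won't bring it back.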

  • Mirror a Dropbox repository in Sharepoint and restrict access

    - by Dan Robson
    I'm looking for an elegant way to solve the following problem: my development team uses Dropbox for sharing documents amongst our immediate group. We'd like to put some of those documents into a SharePoint repository for the larger group to be able to access, as granting Dropbox access to the group at large is not ideal. However, we'd like to continue to be able to propagate changes to the SharePoint site simply by updating the files in Dropbox on our local client machines, and also vice versa: users granted access on SharePoint who update files in that workspace should be able to save their files and have the changes appear automatically on our client PCs.

    I've already organized the folders so that in Dropbox there is a SharePoint folder that looks something like this:

        SharePoint
        ----Team
        --------Restricted Access Folders
        ----Organization
        --------Open Access Folders

    The Dropbox master account and the SharePoint master account are both set up on my file server. Unfortunately, Dropbox doesn't seem to allow syncing of folders anywhere above the \Dropbox\ part of the file system's hierarchy - or all I would have to do is find where the SharePoint repository is maintained locally, and I'd be golden. So it seems I have to do some sort of two-way synchronization between the Dropbox folder on the file server and the SharePoint folder on the file server.

    I messed around with Microsoft SyncToy, but it seems to be lacking in the area of real-time updating - and as much as I love rsync, I've had nothing but bad luck with it on Windows. Either way, it has to be kicked off manually or through Task Scheduler, and I just have a feeling that if I go down that route, it's only a matter of time before I get conflicts all over the place in Dropbox, SharePoint, or both. I really want something that's going to watch both folders, and when one item changes, the other automatically updates in "real time". It's quite possible I'm going down the entirely wrong route, which is why I'm asking the question.

    For simplicity's sake, I'll restate the goal: to be able to update Dropbox and have it viewable on the SharePoint site, or to update the SharePoint site and have it viewable in Dropbox. And since I'm a SharePoint noob, I'll also need help hiding the "Team" subfolder from everyone not in a specific group in AD.

  • Nginx + uWSGI + Django performance stuck at 100 rq/s

    - by dancio
    I have configured Nginx with uWSGI and Django on CentOS 6 x64 (3.06GHz i3 540, 4GB), which should easily handle 2500 rq/s, but when I run an ab test (ab -n 1000 -c 100) performance stops at 92 - 100 rq/s.

    Nginx:

        user nginx;
        worker_processes 2;
        events {
            worker_connections 2048;
            use epoll;
        }

    uWSGI (Emperor: /usr/sbin/uwsgi --master --no-orphans --pythonpath /var/python --emperor /var/python/*/uwsgi.ini):

        [uwsgi]
        socket = 127.0.0.2:3031
        master = true
        processes = 5
        env = DJANGO_SETTINGS_MODULE=x.settings
        env = HTTPS=on
        module = django.core.handlers.wsgi:WSGIHandler()
        disable-logging = true
        catch-exceptions = false
        post-buffering = 8192
        harakiri = 30
        harakiri-verbose = true
        vacuum = true
        listen = 500
        optimize = 2

    sysctl changes (applied with sysctl -p):

        # Increase TCP max buffer size settable using setsockopt()
        net.ipv4.tcp_rmem = 4096 87380 8388608
        net.ipv4.tcp_wmem = 4096 87380 8388608
        net.core.rmem_max = 8388608
        net.core.wmem_max = 8388608
        net.core.netdev_max_backlog = 5000
        net.ipv4.tcp_max_syn_backlog = 5000
        net.ipv4.tcp_window_scaling = 1
        net.core.somaxconn = 2048
        # Avoid a smurf attack
        net.ipv4.icmp_echo_ignore_broadcasts = 1
        # Optimization for port use for LBs
        # Increase system file descriptor limit
        fs.file-max = 65535

    Idle server info:

        top - 13:34:58 up 102 days, 18:35,  1 user,  load average: 0.00, 0.00, 0.00
        Tasks: 118 total,   1 running, 117 sleeping,   0 stopped,   0 zombie
        Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
        Mem:   3983068k total,  2125088k used,  1857980k free,   262528k buffers
        Swap:  2104504k total,        0k used,  2104504k free,   606996k cached

        free -m
                     total       used       free     shared    buffers     cached
        Mem:          3889       2075       1814          0        256        592
        -/+ buffers/cache:       1226       2663
        Swap:         2055          0       2055

    During the test:

        top - 13:45:21 up 102 days, 18:46,  1 user,  load average: 3.73, 1.51, 0.58
        Tasks: 122 total,   8 running, 114 sleeping,   0 stopped,   0 zombie
        Cpu(s): 93.5%us,  5.2%sy,  0.0%ni,  0.2%id,  0.0%wa,  0.1%hi,  1.1%si,  0.0%st
        Mem:   3983068k total,  2127564k used,  1855504k free,   262580k buffers
        Swap:  2104504k total,        0k used,  2104504k free,   608760k cached

        free -m
                     total       used       free     shared    buffers     cached
        Mem:          3889       2125       1763          0        256        595
        -/+ buffers/cache:       1274       2615
        Swap:         2055          0       2055

        iotop
        30141 be/4 nginx 0.00 B/s 7.78 K/s 0.00 % 0.00 % nginx: wo~er process

    Where is the bottleneck? Or what am I doing wrong?
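
    Given that the CPU is pegged at 93.5% user time during the run, a quick way to tell whether nginx or the Django/uWSGI side is the limit is to benchmark a static file and the application URL separately (the paths here are hypothetical):

        # static file served directly by nginx - should reach thousands of rq/s
        ab -n 1000 -c 100 http://127.0.0.1/static/test.txt

        # the Django-rendered page for comparison
        ab -n 1000 -c 100 http://127.0.0.1/

    If only the Django URL is slow, ~100 rq/s across 5 uWSGI workers simply means each request costs roughly 50 ms of Python time, and the fix is profiling or caching the view rather than tuning the network stack.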

  • Cisco ASA5505 won't sync with NTP

    - by Martijn Heemels
    Today I noticed that the clock on my Cisco ASA 5505 firewall was running about 15 minutes late, which surprised me since I've set up the NTP client. My two NTP servers 10.10.0.1 and 10.10.0.2 are virtualized Windows Server 2008 R2 domain controllers, and both have the correct time. As shown below, the ASA knows about the two servers, can ping them and seems to poll them periodically, so I suppose it can reach them both. The ASA claims its time source is NTP, yet the clock is unsynchronized and neither host is marked as synced.

        Result of the command: "ping 10.10.0.1"
        Type escape sequence to abort.
        Sending 5, 100-byte ICMP Echos to 10.10.0.1, timeout is 2 seconds:
        !!!!!
        Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms

        Result of the command: "sh ntp ass"
         address        ref clock     st  when  poll  reach  delay  offset   disp
        ~10.10.0.1      .LOCL.         1    78  1024    377    0.5  643.69   17.0
        ~10.10.0.2      10.10.0.1      2   190  1024    377    0.9  655.91   58.4
        * master (synced), # master (unsynced), + selected, - candidate, ~ configured

        Result of the command: "sh ntp stat"
        Clock is unsynchronized, stratum 16, no reference clock
        nominal freq is 99.9984 Hz, actual freq is 99.9984 Hz, precision is 2**6
        reference time is 00000000.00000000 (07:28:16.000 CEST Thu Feb 7 2036)
        clock offset is 0.0000 msec, root delay is 0.00 msec
        root dispersion is 0.00 msec, peer dispersion is 0.00 msec

        Result of the command: "sh clock detail"
        10:33:23.769 CEDT Tue Jun 26 2012
        Time source is NTP
        UTC time is: 08:33:23 UTC Tue Jun 26 2012
        Summer time starts 02:00:00 CEST Sun Mar 25 2012
        Summer time ends 03:00:00 CEDT Sun Oct 28 2012

    I've tried the basic steps of manually setting the time and removing and re-adding the time servers, to no avail. My ASA's NTP config is simply:

        ntp server 10.10.0.1
        ntp server 10.10.0.2

    Do I need to enable authentication to use a Windows NTP server? Any thoughts?
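
    One detail worth noting in the output above: 10.10.0.1 reports .LOCL. as its reference clock, i.e. the domain controller is serving its own free-running local clock at stratum 1 with a ~640 ms offset, and strict NTP clients may refuse to sync to such a server even while polling it. A hedged sketch of pointing the PDC emulator at a real upstream source (the pool hostnames are just examples):

        w32tm /config /manualpeerlist:"0.pool.ntp.org,0x8 1.pool.ntp.org,0x8" /syncfromflags:manual /reliable:yes /update
        net stop w32time && net start w32time
        w32tm /query /status

    Once the DC reports a genuine reference clock instead of the local CMOS clock, the ASA should be much more willing to accept it as a sync source.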

  • Postfix Postscreen: how to use postscreen for both smtp and smtps

    - by petermolnar
    I'm trying to get postscreen to work. I've followed the man page and it's already running correctly for smtp. But if I want to use it for smtps as well (adding the same line as smtp in master.cf but with smtps), I receive failure messages in syslog like:

        postfix/postscreen[8851]: fatal: btree:/var/lib/postfix/postscreen_cache: unable to get exclusive lock: Resource temporarily unavailable

    Some say that postscreen can only run once; that's OK. But can I use the same postscreen session for both smtp and smtps? If not, how do I enable postscreen for smtps as well? Any help would be appreciated! The relevant parts of the configs:

    main.cf:

        postscreen_access_list = permit_mynetworks,
                cidr:/etc/postfix/postscreen_access.cidr
        postscreen_dnsbl_threshold = 8
        postscreen_dnsbl_sites = dnsbl.ahbl.org*3 dnsbl.njabl.org*3 dnsbl.sorbs.net*3
                pbl.spamhaus.org*3 cbl.abuseat.org*3 bl.spamcannibal.org*3
                nsbl.inps.de*3 spamrbl.imp.ch*3
        postscreen_dnsbl_action = enforce
        postscreen_greet_action = enforce

    master.cf (full):

        smtpd      pass  -  -  n  -      -  smtpd
        smtp       inet  n  -  n  -      1  postscreen
        tlsproxy   unix  -  -  n  -      0  tlsproxy
        dnsblog    unix  -  -  n  -      0  dnsblog
        ### the problematic line ###
        smtps      inet  n  -  -  -      -  smtpd
        pickup     fifo  n  -  -  60     1  pickup
        cleanup    unix  n  -  -  -      0  cleanup
        qmgr       fifo  n  -  n  300    1  qmgr
        tlsmgr     unix  -  -  -  1000?  1  tlsmgr
        rewrite    unix  -  -  -  -      -  trivial-rewrite
        bounce     unix  -  -  -  -      0  bounce
        defer      unix  -  -  -  -      0  bounce
        trace      unix  -  -  -  -      0  bounce
        verify     unix  -  -  -  -      1  verify
        flush      unix  n  -  -  1000?  0  flush
        proxymap   unix  -  -  n  -      -  proxymap
        proxywrite unix  -  -  n  -      1  proxymap
        smtp       unix  -  -  -  -      -  smtp
        relay      unix  -  -  -  -      -  smtp
        showq      unix  n  -  -  -      -  showq
        error      unix  -  -  -  -      -  error
        retry      unix  -  -  -  -      -  error
        discard    unix  -  -  -  -      -  discard
        local      unix  -  n  n  -      -  local
        virtual    unix  -  n  n  -      -  virtual
        lmtp       unix  -  -  -  -      -  lmtp
        anvil      unix  -  -  -  -      1  anvil
        scache     unix  -  -  -  -      1  scache
        dovecot    unix  -  n  n  -      -  pipe
          flags=DRhu user=virtuser:virtuser argv=/usr/bin/spamc -e /usr/lib/dovecot/deliver -d ${recipient} -f {sender}
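
    For what it's worth, postscreen is designed to protect the public MX port (25) only, and the single postscreen process cannot share its cache database across two listeners, which matches the "unable to get exclusive lock" error above. The usual layout sends smtps straight to smtpd in TLS wrapper mode instead; a sketch (the "-o" overrides are illustrative, not a complete submission config):

        # master.cf
        smtp      inet  n  -  n  -  1  postscreen
        smtpd     pass  -  -  n  -  -  smtpd
        smtps     inet  n  -  -  -  -  smtpd
          -o smtpd_tls_wrappermode=yes
          -o smtpd_sasl_auth_enable=yes

    The essential point is that the smtps service invokes smtpd directly rather than postscreen; the bot traffic postscreen filters does not normally target the TLS-wrapped port anyway.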

  • PostgreSQL failover cluster on Windows Server

    - by user36997
    We are looking for advice on how to set up a basic failover cluster for our application:

    We will be using 4 machines running Microsoft Windows Server (most probably 2003). All four will always run our application, which is essentially a web service. Load balancing is "outsourced" - somebody else handles the distribution of the web requests among the servers. Only one of the servers will be running the PostgreSQL server actively at any given time. Another server (of the four) also has the DB installed, but is on standby/passive. The DB data is stored on shared storage; no copying of data between servers. Reads are done very frequently by many end-users, and in rather small chunks of data. Writes are done much less frequently, by fewer users, and in very large bulks of data.

    Now, how can one configure Microsoft Cluster Service to keep only one instance of the DB server and 4 instances (1 per server) of our application at all times? And does PostgreSQL integrate neatly with MSCS at all?

    Update: Instead of keeping the data on shared storage, I am also considering using log shipping to replicate data on a couple of DB servers. There are two issues with this option:

    Log shipping only makes sure that I have a second server that gets all of the data and is ready to take over. How do I implement the actual failure detection and failover switch?

    Switching back: suppose the master fails and the system automatically fails over to the slave, and later the master comes back online. I understand that with WAL shipping this will require reconfiguring the log shipping once again, and that switching back is far from seamless. Is that so?
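
    On the failure-detection point: with classic WAL shipping the standby replays segments until told to stop, and promotion is usually signalled through a trigger file that the cluster manager (MSCS or a watchdog script) creates when it declares the master dead. A minimal sketch using contrib's pg_standby, with hypothetical Windows paths:

        # recovery.conf on the standby
        restore_command = 'pg_standby -t C:\pgsql\failover.trigger C:\walarchive %f %p'

        # the failover action then reduces to creating that trigger file:
        echo go > C:\pgsql\failover.trigger

    Switching back does indeed mean re-seeding the old master as a new standby (a fresh base backup plus renewed log shipping), so the "far from seamless" expectation is accurate.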

  • Router failover not detecting loss of link on outside interface

    - by Matt
    Suppose I have two routers configured in a master/slave configuration. They look something like this (addresses are not real ones):

        123.123.123.10 <===> [eth0] Router 1 (10.1.1.2) [eth1] ===> +----------+
                                                                    | 10.1.1.1 | ===> LAN
        172.123.123.10 <===> [eth0] Router 2 (10.1.1.3) [eth1] ===> +----------+

    10.1.1.1 is the default route for the network (10.1.1.0). What's slightly different in this config compared to others I've seen is that I don't have an external virtual IP. Also, the 10.1.1.x addresses are, in real life, public IPs (not the private ones shown here). This is more of a router setup than a firewall setup, so I'm not using NAT here.

    Now, the issue I'm having is that I can't see any way to configure UCARP or VRRP to monitor both eth0 and eth1 and fail over to the backup router should either of them go down. What I'm seeing is that if Router 1 is the master and I unplug eth0 on Router 1, it doesn't fail over to Router 2. However, it will if instead I unplug eth1 of Router 1. In VRRP I see there is a cluster group, but it seems that for this to work you need to have virtual IPs or VRRP instances rather than actual interfaces assigned to it. I hope my explanation is clear. How do I get around this?
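
    In keepalived's VRRP implementation this is exactly what interface tracking is for: the instance's priority drops when a watched interface loses link, triggering failover even though the VRRP adverts themselves travel on the LAN side. A sketch under the addressing above (virtual_router_id and priorities are arbitrary):

        vrrp_instance VI_1 {
            interface eth1            # where the routers exchange VRRP adverts
            virtual_router_id 51
            priority 150
            track_interface {
                eth0                  # demote this router if the WAN link drops
            }
            virtual_ipaddress {
                10.1.1.1/24 dev eth1
            }
        }

    UCARP has no equivalent built in, so with UCARP one typically runs a watchdog script that demotes or kills the master when eth0 loses carrier.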

  • Accounting setup in freeradius with mikrotik and the "always" module

    - by Matt
    I have a freeradius setup that is being used to provide authentication for users on a wireless network. The access points are all Mikrotik hardware and the users are connected 24/7. We've been using Daloradius with MySQL and freeradius 2. The boss wants to use the accounting information, and while this is all set up and appears to be working, I've found that not all the accounting information is present. So he started poking around at this link: http://wiki.mikrotik.com/wiki/RouterOs_MySql_Freeradius#Configuring_RouterOs_for_Radius_.26_PPP.2A_AAA and was looking specifically at the following section ("Since our users may be connected for more than 24 hours at a time we keep this in here; it will reset some attributes daily so that the accounting packets work correctly"):

        always fail {
            rcode = fail
        }
        always reject {
            rcode = reject
        }
        always ok {
            rcode = ok
            simulcount = 0
            mpp = no
        }

    However, that link references freeradius 1 and I can't find this in the radiusd.conf file for freeradius 2. What does it do, and could it be a reason I'm missing data?

    EDIT: I have found one issue. We have a backup freeradius server that is also receiving the accounting packets. Although the two are replicating, it's only a master/slave configuration; if the slave receives accounting packets it won't replicate them back to the master. Although I suspect this might solve it, the boss is not convinced because of the always module. Is there anything special I need to configure in the Mikrotik APs or freeradius 2 for clients connected 24/7?
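
    On the "always" question: in freeradius 2 those instances moved out of radiusd.conf into their own include file, typically raddb/modules/always (the exact path may vary by package), which is why they can't be found in radiusd.conf itself. They define no-op modules that unconditionally return a status code; the Mikrotik wiki uses them to force sections to succeed, and they don't themselves generate or discard accounting data. A sketch of what that file usually contains:

        # raddb/modules/always (freeradius 2)
        always fail {
            rcode = fail
        }
        always ok {
            rcode = ok
            simulcount = 0
            mpp = no
        }

    So missing rows are more plausibly explained by the APs' interim accounting settings (e.g. RouterOS's ppp aaa interim-update interval, an assumption worth verifying) or by the backup server swallowing packets, as suspected in the edit.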

  • What web-based tool would allow a non-technical user to manage authorized_keys files on a Linux (Fedora/CentOS/Ubuntu/Debian) server?

    - by Tom H
    (Edit: clarification below.) We have a number of groups of developers that change frequently, and a security policy requiring individual logins to servers using RSA or DSA public keys, which is achieved via the standard method of adding id_dsa.pub to each user's authorized_keys file. I am using chef to sync the user accounts across machines; however, our previous method of using webmin to manage the user passwords is not designed for key-based auth, and hence is not easy to use for non-technical users.

    The developers are logging in from the WAN using SSH; they can either provide their own key, or an administrator will send them a private key. The development machines are located in the cloud and we have a single server available to host the master set of accounts.

    Obviously I could deploy LDAP or another centralised authentication system, but that seems a bit overblown when webmin worked well for the simple case. It is easy to achieve synchronised users, groups and passwords across a bunch of low-security development boxes using webmin's clustered users and groups. However, looking at the currently installed webmin, it is not as easy to create the authorized_keys entries as it is to create user accounts and passwords. (It's possible, but not easy - some functionality is in the usermin module, or would require some tedious steps.)

    Ideally I'd like a web interface that is pretty much dedicated to creating users and groups, can generate key pairs on the fly, and can accept pasted-in public keys to add to a user's authorized_keys file, as sketched below. If the tool synced the users and keys as well, that would be great, but I can use chef to do that part if the accounts are created correctly on the "master" server.
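
    Whatever front-end is chosen, the operation it has to automate on the master server is small; a sketch for one hypothetical user:

        # append a developer's public key and fix ownership/permissions
        mkdir -p /home/alice/.ssh
        cat alice_id_dsa.pub >> /home/alice/.ssh/authorized_keys
        chmod 700 /home/alice/.ssh
        chmod 600 /home/alice/.ssh/authorized_keys
        chown -R alice: /home/alice/.ssh

    That also suggests a fallback if no suitable web tool turns up: a minimal form that validates the pasted key (ssh-keygen -l -f <file> exits non-zero on garbage) and runs the lines above, with chef distributing the resulting authorized_keys files as already planned.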

  • User directive in nginx generates error despite running as UID root

    - by Joost Schuur
    I'm running nginx on a Mac OS X machine, installed with brew, and when I launch nginx, even with sudo, I get the following warning in my log file over and over again:

        4/21/11 2:03:42 AM org.nginx[3788] nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /usr/local/etc/nginx/conf/nginx.conf:2

    From nginx.conf:

        user jschuur staff;

    I'm already launching nginx with sudo, since I want the thing to listen on port 80. Shouldn't that be enough to give it the proper super-user privileges? The nginx binary as it's installed:

        jschuur@Glenna:sbin ? master ls -la
        total 4544
        drwxr-xr-x   3 jschuur staff     102 Apr 12 20:53 .
        drwxrwxr-x  15 jschuur staff     510 Apr 12 15:25 ..
        -rwxr-xr-x   1 jschuur staff 2325648 Apr 12 20:39 nginx

    FWIW, I recompiled the binary to set Passenger up and moved it from its original location into /usr/local/sbin.

    Update: As it turns out, Mac OS X was restarting nginx after I'd stopped it, because the launchd plist in ~/Library/LaunchAgents had set it to 'KeepAlive'. However, because I installed this plist into my local user's LaunchAgents folder as opposed to /Library/LaunchAgents (or better yet /Library/LaunchDaemons, which run before you even log on), it wasn't executed as root. Because of an error about not having permissions to use port 80, it actually exited right away, but it still wrote to the same log file as the nginx process I started with sudo. I had thought the errors stemming from the automatic restart were actually coming from my manual restart via sudo.

    So, bottom line, problem solved. The real problem here was the homebrew instructions specifically asking you to install the plist file into a location that wouldn't allow a local site to use port 80.
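
    For anyone hitting the same thing, the fix implied above is to run the launchd job at system level so nginx starts as root; a sketch (the label and paths follow the homebrew convention of the time and may differ on your install):

        # move the job out of the per-user agents directory
        sudo mv ~/Library/LaunchAgents/homebrew.mxcl.nginx.plist /Library/LaunchDaemons/
        sudo launchctl load -w /Library/LaunchDaemons/homebrew.mxcl.nginx.plist

    LaunchDaemons jobs run as root before login, which both satisfies the "user" directive and allows binding to port 80.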

  • Git clone/push/pull - where does that username come from?

    - by Kuroki Kaze
    I've set up gitosis and am able to pull/push through SSH. Gitosis is installed on a Debian Lenny server; I'm using Git from a Windows machine (msysgit). The strange thing is, if I enable loglevel = DEBUG in gitosis.conf, I see something like this when doing any action against the gitosis server:

        D:\Kaze\source\test-project>git pull origin master
        DEBUG:gitosis.serve.main:Got command "git-upload-pack 'test_project.git'"
        DEBUG:gitosis.access.haveAccess:Access check for '[email protected]' as 'writable' on 'test_project.git'...
        DEBUG:gitosis.access.haveAccess:Stripping .git suffix from 'test_project.git', new value 'test_project'
        DEBUG:gitosis.group.getMembership:found '[email protected]' in 'test'
        DEBUG:gitosis.access.haveAccess:Access ok for '[email protected]' as 'writable' on 'test_project'
        DEBUG:gitosis.access.haveAccess:Using prefix 'repositories' for 'test_project'
        DEBUG:gitosis.serve.main:Serving git-upload-pack 'repositories/test_project.git'
        From 192.168.175.128:test_project
         * branch            master     -> FETCH_HEAD
        Already up-to-date.

    The question is: why am I [email protected]? This email is in the global user.email config variable, too. Yesterday, when gitosis was installed, it saw me as kaze@KAZE; this is the name under which I was added to the gitosis-admin group (and it worked). But today git (or gitosis) started to see me as [email protected]. This is true for all repositories I push or clone. I had to add this address to gitosis.conf directly on the server to be able to edit configs again (it worked). There are two public keys in keydir: [email protected] and [email protected]; their content is identical and they both end with kaze@KAZE. The origin URL looks like git@lennyserver:test_project. Now, the question is: why did Git (or gitosis) suddenly decide to call me by email instead of name@machinename? I've changed a couple of things trying to set up gitosis (updated git on the server to 1.6.0 for example), but maybe I broke something in my local git installation?
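
    A detail worth checking: gitosis does not read user.email at all; the identity in those DEBUG lines is the filename of the public key in keydir/ that matched the key your SSH client offered. With two files holding the identical key, gitosis reports whichever one ended up in the server's authorized_keys entry. A quick way to confirm, as a sketch:

        # in the gitosis-admin checkout: both names carry the same key
        ls keydir/
        # [email protected]  [email protected]

        # on the Lenny server: see which name gitosis wrote next to the key
        sudo grep -o 'gitosis-serve [^"]*' ~git/.ssh/authorized_keys

    Deleting the unwanted duplicate from keydir/ and pushing the admin repo should make the kaze@KAZE identity come back.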

  • Apache 2 Fails to Start After Upgrade with No Errors

    - by Mark Davidson
    Hi all, hoping someone can help me with a server issue. Recently we upgraded to the latest Apache on two boxes within our organisation: one being the master box, the other being for failover. The upgrade went fine on the master box, but on the failover box Apache fails to start with no errors being output or logged. Both boxes have the exact same configuration, so I found this a bit strange.

    I've reinstalled Apache and have been through checking the configs and did not find any obvious errors. Eventually I ran a syntax check on each config file being included and found that one of the files apparently has syntax errors:

        Invalid command 'Order', perhaps misspelled or defined by a module not included in the server configuration
        Invalid command 'php_value', perhaps misspelled or defined by a module not included in the server configuration
        Invalid command 'GeoIPEnable', perhaps misspelled or defined by a module not included in the server configuration

    I've triple-checked that all the modules are enabled, but it still fails. I've googled these errors a lot but have been unable to find a solution. I was wondering if anyone has encountered such a problem before and could point me towards a solution. Thanks for your help in advance.

    P.S.: Apache-related versions on the server:

        ii  apache2                 2.2.3-4+etch10       Next generation, scalable, extendable web se
        ii  apache2-mpm-prefork     2.2.3-4+etch10       Traditional model for Apache HTTPD 2.1
        ii  apache2-utils           2.2.3-4+etch10       utility programs for webservers
        ii  apache2.2-common        2.2.3-4+etch10       Next generation, scalable, extendable web se
        ii  libapache2-mod-geoip    1.1.8-2              GeoIP support for apache2
        ii  libapache2-mod-php5     5.2.0+dfsg-8+etch15  server-side, HTML-embedded scripting languag
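
    Those three directives come from mod_authz_host (Order), mod_php5 (php_value) and mod_geoip (GeoIPEnable), so the syntax checker is most likely being run without the environment that pulls in the LoadModule lines. A sketch of checks that tend to localise this on Debian:

        # apache2ctl sources /etc/apache2/envvars first; a bare "apache2 -t" may not
        apache2ctl configtest

        # confirm the modules are really linked into mods-enabled
        ls -l /etc/apache2/mods-enabled/ | grep -E 'php5|geoip|authz_host'

    If configtest passes while the bare binary complains, the failover box's startup script (or its envvars file) is what differs from the master, not the site configs themselves.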

  • bond0 and xen = crash

    - by Rajat
    Bonding with Xen:

    1 - Stop all guests. Reboot dom0 after running "chkconfig xend off" and "chkconfig xendomains off".

    2 - Configure bond0 by enslaving eth0 and eth1 to it. I added the below two entries to /etc/modprobe.conf:

        alias bond0 bonding
        options bond0 mode=6,miimon=100

    Content of /etc/sysconfig/network-scripts/ifcfg-eth0:

        DEVICE=eth0
        USERCTL=no
        ONBOOT=yes
        MASTER=bond0
        SLAVE=yes
        BOOTPROTO=none

    Content of /etc/sysconfig/network-scripts/ifcfg-eth1:

        DEVICE=eth1
        USERCTL=no
        ONBOOT=yes
        MASTER=bond0
        SLAVE=yes
        BOOTPROTO=none

    Content of /etc/sysconfig/network-scripts/ifcfg-bond0:

        DEVICE=bond0
        IPADDR=
        NETMASK=
        ONBOOT=yes
        BOOTPROTO=static
        USERCTL=no

    Did "modprobe bond0" and "service network restart" after that.

    3 - Edit /etc/xen/xend-config.sxp, changing

        (network-script network-bridge)

    to

        (network-script 'network-bridge netdev=bond0')

    4 - Start xend: "service xend start".

    5 - chkconfig xend on.

    6 - modprobe bond0

    7 - more /proc/net/bonding/bond0

    8 - Create guest images as usual and bridge them to xenbr0.

    That is the configuration I did for my Xen kernel on RHEL 5.3. After I reboot the host server, in place of bond0 I get pbond0, and the host drops off the network; I can only ping the VMs on the host server. Does anyone have any idea why Xen's bond0 is acting like that, or what the solution is to get from pbond0 back to bond0?
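
    Two observations that may help. First, seeing pbond0 is expected: Xen's network-bridge script renames the physical device with a "p" prefix (bond0 becomes pbond0), creates the bridge, and gives the bridge the old name, so pbond0 by itself is not the fault. Second, bonding mode 6 (balance-alb) rewrites MAC addresses in ways known to confuse bridges; mode 1 (active-backup) is the usual safe choice under a Xen bridge. A sketch of the modprobe.conf change (note the space-separated options syntax; the comma in "mode=6,miimon=100" may itself prevent miimon from being set):

        alias bond0 bonding
        options bond0 mode=1 miimon=100

    If connectivity still drops after switching modes, checking that the bridge picked up the renamed device (brctl show) and that /proc/net/bonding/bond0 reports the expected mode would be the next steps.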

  • Postfix not working

    - by user1488723
    A while ago I installed the Postfix mail server on my Ubuntu 10.04 VPS. At the time it was working fine, but now it's just stopped working. I was trying to enable SASL authentication and somewhere it must have gone really wrong. I've studied the Postfix main.cf and done everything in an orderly fashion to ensure that nothing is wrong. I also have Dovecot installed and have configured dovecot.conf to run with Postfix.

    If I try to do telnet localhost 25 while logged in on the server, I just get:

        Connection closed by foreign host.

    If I try to do telnet mail.example.com 25 "from the outside", I get:

        telnet: Unable to connect to remote host: No route to host

    And when I check the server log after the failed attempts I see this:

        Jun 28 15:49:31 msv postfix/smtpd[11839]: initializing the server-side TLS engine
        Jun 28 15:49:31 msv postfix/smtpd[11839]: connect from localhost.localdomain[127.0.0.1]
        Jun 28 15:49:31 msv postfix/smtpd[11839]: warning: SASL: Connect to /var/spool/postfix/private/auth failed: Connection refused
        Jun 28 15:49:31 msv postfix/smtpd[11839]: fatal: no SASL authentication mechanisms
        Jun 28 15:49:32 msv postfix/master[11598]: warning: process /usr/lib/postfix/smtpd pid 11839 exit status 1
        Jun 28 15:49:32 msv postfix/master[11598]: warning: /usr/lib/postfix/smtpd: bad command startup -- throttling

    main.cf looks like this:

        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no
        append_dot_mydomain = no
        delay_warning_time = 4h
        myhostname = mail.example.com
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        mydomain = example.com
        myorigin = $mydomain
        mydestination = $mydomain
        relayhost =
        mynetworks = 127.0.0.1
        mailbox_command = procmail -a "$EXTENSION"
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        smtpd_use_tls = yes
        smtpd_tls_loglevel = 2
        smtpd_tls_cert_file = /etc/postfix/ssl/smtpd.crt
        smtpd_tls_key_file = /etc/postfix/ssl/smtpd.key
        smtpd_tls_CAfile = /etc/postfix/ssl/cacert.pem
        smtpd_sasl_auth_enable = yes
        smtpd_client_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_unauth_destination
        smtpd_sender_restrictions = permit_sasl_authenticated, permit_mynetworks
        smtpd_recipient_restrictions = permit_sasl_authenticated, reject_unauth_destination
        broken_sasl_auth_clients = yes
        smtpd_sasl_type = dovecot
        smtpd_sasl_path = /var/spool/postfix/private/auth
        smtpd_sasl_security_options = noanonymous

    dovecot.conf looks like this:

        protocols = imap imaps
        disable_plaintext_auth = no
        log_timestamp = "%b %d %H:%M:%S "
        ssl = yes
        ssl_cert_file = /etc/postfix/ssl/smtpd.crt
        ssl_key_file = /etc/postfix/ssl/smtpd.key
        mail_location = maildir:~/mail
        mail_access_groups = mail
        auth_username_chars = abcdefghijklmnopqrstuvwxyz
        protocol imap {
            imap_client_workarounds = delay-newmail tb-extra-mailbox-sep
        }
        auth default {
            mechanisms = plain login
            passdb pam {
            }
            userdb passwd {
            }
            socket listen {
                client {
                    path = /var/spool/postfix/private/auth
                    user = postfix
                    group = postfix
                    mode = 0660
                }
            }
        }
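
    The log points at the immediate cause: smtpd dies ("fatal: no SASL authentication mechanisms") because nothing is answering on /var/spool/postfix/private/auth, and master then throttles the service, which is why telnet sees the connection close immediately. A hedged checklist:

        # does dovecot's auth socket actually exist where main.cf points?
        ls -l /var/spool/postfix/private/auth

        # if not, restart dovecot and watch its log for errors creating the socket
        /etc/init.d/dovecot restart
        tail -f /var/log/mail.log

    If the socket appears with postfix:postfix ownership and mode 0660 (matching the dovecot.conf block above), smtpd should stop exiting and port 25 should stay open; the separate "No route to host" from outside looks more like a firewall or routing issue than a Postfix one.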

  • PHP memory_limit local value does not match php.ini value

    - by Buttle Butkus
    CentOS system. Summary: I changed memory_limit in the master and local php.ini, and yet there is no change in the local value for one particular virtual host.

    Trying to improve performance, I set the memory_limit to 1024M in /etc/php.ini. phpinfo() shows master and local values for the other virtual hosts on the server as 1024M. Changing the value in /etc/php.ini changes all values, except one: one site is stuck with a local value of 256M.

    I thought I had found the problem: there is a php.ini file (which I didn't know about) in that site's root, and it had memory_limit = 256M. I changed it to 1024M. Problem solved? No.

    And now I don't know where to look. Obviously, I've restarted Apache (/etc/init.d/httpd restart), and that usually does the trick. I also turned off the APC cache, though I don't think it would cache ini files. And finally, I tried adding this to the virtual host in httpd.conf:

        php_value memory_limit 536870912

    (that's 512 MB expressed in bytes), and that had no effect whatsoever. What else could be the problem? Thanks.
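
    A couple of hedged places to look, since a per-directory php.ini only applies to CGI/FastCGI setups, while mod_php reads .htaccess overrides and per-vhost php_admin_value:

        # which ini files does PHP actually load? (CLI may differ from Apache)
        php -i | grep -i 'configuration file'

        # any lingering overrides for that one site? (/path/to/site is a placeholder)
        grep -Rn "memory_limit" /etc/php.d/ /etc/httpd/conf.d/ /path/to/site/.htaccess

    A php_admin_value set elsewhere would silently win over both php.ini and php_value, which would match the "changes everywhere except one site" symptom.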

  • Either nginx+php-fpm bad config, or nginx+php-fpm cannot handle high query load?

    - by The Wolf
    I have WordPress installed on my server, configured (hopefully) with nginx + php-fpm + MariaDB. I am trying to import a 1.5MB XML file using the WordPress importer. Every time I try to upload it, the import gets cut off - meaning I get just a blank screen as the result.

    Here is my error log (just two of the errors posted here):

        [error] 858#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.xx.xx, server: xxx.com, request: "GET xxxx.html HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "xxx.com"
        [error] 858#0: *13 connect() failed (111: Connection refused) while connecting to upstream, client: xxx.x.xx.xx, server: xxx.com, request: "GET xxxx.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "xxx.com"

    I don't know why it can't process the WordPress export XML. I already increased max_file_upload etc., but nothing happens. Hope somebody can help me. Here are my configs.

    nginx.conf:

        user nginx;
        worker_processes 8;

        error_log /var/log/nginx/error.log warn;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            access_log /var/log/nginx/access.log main;
            sendfile on;
            #tcp_nopush on;
            server_tokens off;
            keepalive_timeout 65;
            fastcgi_read_timeout 500;
            #gzip on;
            client_max_body_size 2M;

    php-fpm.conf:

        ;;;;;;;;;;;;;;;;;;;;;
        ; FPM Configuration ;
        ;;;;;;;;;;;;;;;;;;;;;

        ; All relative paths in this configuration file are relative to PHP's install
        ; prefix.

        ; Include one or more files. If glob(3) exists, it is used to include a bunch of
        ; files from a glob(3) pattern. This directive can be used everywhere in the
        ; file.
        include=/etc/php-fpm.d/*.conf

        ;;;;;;;;;;;;;;;;;;
        ; Global Options ;
        ;;;;;;;;;;;;;;;;;;

        [global]
        ; Pid file
        ; Default Value: none
        pid = /var/run/php-fpm/php-fpm.pid

        ; Error log file
        ; Default Value: /var/log/php-fpm.log
        error_log = /var/log/php-fpm/error.log

        ; Log level
        ; Possible Values: alert, error, warning, notice, debug
        ; Default Value: notice
        ;log_level = notice

        ; If this number of child processes exit with SIGSEGV or SIGBUS within the time
        ; interval set by emergency_restart_interval then FPM will restart. A value
        ; of '0' means 'Off'.
        ; Default Value: 0
        ;emergency_restart_threshold = 0

        ; Interval of time used by emergency_restart_interval to determine when
        ; a graceful restart will be initiated. This can be useful to work around
        ; accidental corruptions in an accelerator's shared memory.
        ; Available Units: s(econds), m(inutes), h(ours), or d(ays)
        ; Default Unit: seconds
        ; Default Value: 0
        ;emergency_restart_interval = 0

        ; Time limit for child processes to wait for a reaction on signals from master.
        ; Available units: s(econds), m(inutes), h(ours), or d(ays)
        ; Default Unit: seconds
        ; Default Value: 0
        ;process_control_timeout = 0

        ; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging.
        ; Default Value: yes
        daemonize = no

        ;;;;;;;;;;;;;;;;;;;;
        ; Pool Definitions ;
        ;;;;;;;;;;;;;;;;;;;;

        ; See /etc/php-fpm.d/*.conf

    ps aux:

        USER     PID %CPU %MEM    VSZ   RSS TTY    STAT START  TIME COMMAND
        root       1  0.0  0.1   2900  1380 ?      Ss   Jun02  0:00 init
        root       2  0.0  0.0      0     0 ?      S    Jun02  0:00 [kthreadd/9308]
        root       3  0.0  0.0      0     0 ?      S    Jun02  0:00 [khelper/9308]
        root     124  0.0  0.0   2464   576 ?      S<s  Jun02  0:00 /sbin/udevd -d
        root     460  0.0  0.1  35976  1308 ?      Sl   Jun02  0:00 /sbin/rsyslogd -i /var/run/syslogd.pid -c 5
        root     474  0.0  0.0   8940  1028 ?      Ss   Jun02  0:00 /usr/sbin/sshd
        root     481  0.0  0.0   3264   876 ?      Ss   Jun02  0:00 xinetd -stayalive -pidfile /var/run/xinetd.pid
        root     491  0.0  0.1   6268  1432 ?      S    Jun02  0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/host.busilak.com.
        mysql    584  0.1  6.8 679072 71456 ?      Sl   Jun02  0:04 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --use
        root     586  0.0  0.3  12008  3820 ?      Ss   Jun02  0:01 sshd: root@pts/0
        root     629  0.0  0.0   9140   756 ?      Ss   Jun02  0:00 /usr/sbin/saslauthd -m /var/run/saslauthd -a pam -n 2
        root     630  0.0  0.0   9140   520 ?      S    Jun02  0:00 /usr/sbin/saslauthd -m /var/run/saslauthd -a pam -n 2
        root     645  0.0  0.1  12788  1928 ?      Ss   Jun02  0:01 sendmail: accepting connections
        smmsp    653  0.0  0.1  12576  1728 ?      Ss   Jun02  0:00 sendmail: Queue runner@01:00:00 for /var/spool/clientmqueue
        root     691  0.0  0.1   7148  1184 ?      Ss   Jun02  0:00 crond
        root     698  0.0  0.1   6272  1688 pts/0  Ss   Jun02  0:00 -bash
        root    1006  0.0  0.0   7828   924 ?      Ss   00:30  0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
        nginx   1007  0.0  0.1   8156  1724 ?      S    00:30  0:00 nginx: worker process
        nginx   1008  0.0  0.1   8024  1360 ?      S    00:30  0:00 nginx: worker process
        nginx   1009  0.0  0.1   8020  1356 ?      S    00:30  0:00 nginx: worker process
        nginx   1011  0.0  0.1   8024  1360 ?      S    00:30  0:00 nginx: worker process
        nginx   1012  0.0  0.1   8024  1360 ?      S    00:30  0:00 nginx: worker process
        nginx   1013  0.0  0.1   8024  1360 ?      S    00:30  0:00 nginx: worker process
        nginx   1014  0.0  0.1   8024  1360 ?      S    00:30  0:00 nginx: worker process
        nginx   1015  0.0  0.1   8024  1344 ?      S    00:30  0:00 nginx: worker process
        root    1030  0.0  0.2  25396  2904 ?      Ss   00:30  0:00 php-fpm: master process (/etc/php-fpm.conf)
        apache  1031  0.0  1.9  40700 20624 ?      S    00:30  0:00 php-fpm: pool www
        apache  1032  0.0  2.0  41924 21888 ?      S    00:30  0:01 php-fpm: pool www
        apache  1033  0.0  1.9  41212 20848 ?      S    00:30  0:01 php-fpm: pool www
        apache  1034  0.0  1.9  40956 20792 ?      S    00:30  0:01 php-fpm: pool www
        apache  1035  0.0  2.0  41560 21556 ?      S    00:30  0:02 php-fpm: pool www
        apache  1040  0.0  1.8  39292 19120 ?      S    00:30  0:00 php-fpm: pool www
        root    1125  0.0  0.0   6080  1040 pts/0  R+   01:04  0:00 ps aux

    netstat -l:

        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address               Foreign Address  State
        tcp        0      0 *:ssh                       *:*              LISTEN
        tcp        0      0 localhost.localdomain:smtp  *:*              LISTEN
        tcp        0      0 localhost.locald:cslistener *:*              LISTEN
        tcp        0      0 *:mysql                     *:*              LISTEN
        tcp        0      0 *:http                      *:*              LISTEN
        tcp        0      0 *:ssh                       *:*              LISTEN
        Active UNIX domain sockets (only servers)
        Proto RefCnt Flags   Type   State     I-Node   Path
        unix  2      [ ACC ] STREAM LISTENING 60575947 /var/run/saslauthd/mux
        unix  2      [ ACC ] STREAM LISTENING 60574168 @/com/ubuntu/upstart
        unix  2      [ ACC ] STREAM LISTENING 60575873 /var/lib/mysql/mysql.sock

    Hope somebody can help me figure out what the problem is.
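
    A reading note for the netstat output: "localhost.locald:cslistener" is port 9000 (cslistener is its /etc/services name), so php-fpm is listening where nginx expects. "Connection refused" against an existing listener usually means php-fpm was restarting or its backlog overflowed at that moment, and the blank page on import points at limits rather than connectivity. A hedged set of knobs to inspect (paths per the configs above; the values are examples, not recommendations):

        ; /etc/php-fpm.d/www.conf - only 6 pool workers are visible in ps aux
        pm.max_children = 20

        ; php.ini - a 1.5MB import can run long and parse large
        max_execution_time = 300
        upload_max_filesize = 8M
        post_max_size = 8M

    nginx's client_max_body_size 2M and fastcgi_read_timeout 500 already clear a 1.5MB upload, so the PHP-side limits are the likelier cut-off point.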

  • dovecot login issue with plain passwords

    - by user3028
    I am having an odd problem in Dovecot: the first time I try to log in via telnet, Dovecot gives an error; the second time it works, both within the same telnet session. This is the telnet session - note the 'BAD Error in IMAP command received by server' and the "a OK" just after that:

        telnet 192.168.1.2 143
        * OK Waiting for authentication process to respond..
        * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE STARTTLS AUTH=PLAIN] Dovecot ready.
        a login someUserLogin supersecretpassword
        * BAD Error in IMAP command received by server.
        a login someUserLogin supersecretpassword
        a OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE SORT SORT=DISPLAY THREAD=REFERENCES THREAD=REFS MULTIAPPEND UNSELECT CHILDREN NAMESPACE UIDPLUS LIST-EXTENDED I18NLEVEL=1 CONDSTORE QRESYNC ESEARCH ESORT SEARCHRES WITHIN CONTEXT=SEARCH LIST-STATUS] Logged in

    Dovecot configuration (dovecot -n):

        # 2.0.19: /etc/dovecot/dovecot.conf
        # OS: Linux 3.5.0-34-generic x86_64 Ubuntu 12.04.2 LTS
        auth_debug = yes
        auth_verbose = yes
        disable_plaintext_auth = no
        login_trusted_networks = 192.168.1.0/16
        mail_location = maildir:~/Maildir
        passdb {
            driver = pam
        }
        protocols = " imap"
        ssl_cert = </etc/ssl/certs/dovecot.pem
        ssl_key = </etc/ssl/private/dovecot.pem
        userdb {
            driver = passwd
        }

    This is the log file:

        Jul 3 12:27:51 linuxServer dovecot: auth: Debug: Loading modules from directory: /usr/lib/dovecot/modules/auth
        Jul 3 12:27:51 linuxServer dovecot: auth: Debug: auth client connected (pid=23499)
        Jul 3 12:28:06 linuxServer dovecot: auth: Debug: client in: AUTH#0111#011PLAIN#011service=imap#011secured#011no-penalty#011lip=192.168.1.2#011rip=192.169.1.3#011lport=143#011rport=50438#011resp=<hidden>
        Jul 3 12:28:06 linuxServer dovecot: auth-worker: Debug: Loading modules from directory: /usr/lib/dovecot/modules/auth
        Jul 3 12:28:06 linuxServer dovecot: auth-worker: Debug: pam(someUserLogin,192.169.1.3): lookup service=dovecot
        Jul 3 12:28:06 linuxServer dovecot: auth-worker: Debug: pam(someUserLogin,192.169.1.3): #1/1 style=1 msg=Password:
        Jul 3 12:28:06 linuxServer dovecot: auth: Debug: client out: OK#0111#011user=someUserLogin
        Jul 3 12:28:06 linuxServer dovecot: auth: Debug: master in: REQUEST#0111823473665#01123499#0111#0113a58da53e091957d3cd306ac4114f0b9
        Jul 3 12:28:06 linuxServer dovecot: auth: Debug: passwd(someUserLogin,192.169.1.3): lookup
        Jul 3 12:28:06 linuxServer dovecot: auth: Debug: master out: USER#0111823473665#011someUserLogin#011system_groups_user=someUserLogin#011uid=1000#011gid=1000#011home=/home/someUserLogin
        Jul 3 12:28:06 linuxServer dovecot: imap-login: Login: user=<someUserLogin>, method=PLAIN, rip=192.169.1.3, lip=192.168.1.2, mpid=23503, secured

  • Hiera datatypes won't load in Puppet

    - by Cole Shores
    I have spent a couple of days on this, followed the instructions on http://downloads.puppetlabs.com/docs/puppetmanual.pdf and even the Puppet Training Advanced Puppet manual. When I run a test against it, the results always come back as 'nil' and I'm not sure why. I am running Puppet 3.6.1 Community Edition with Hiera 1.2.1 on SLES 11.

    My puppet.conf file at /etc/puppet/puppet.conf consists of:

        [main]
        # The Puppet log directory.
        # The default value is '$vardir/log'.
        logdir = /var/log/puppet

        # Where Puppet PID files are kept.
        # The default value is '$vardir/run'.
        rundir = /var/run/puppet

        # Where SSL certificates are kept.
        # The default value is '$confdir/ssl'.
        ssldir = $vardir/ssl
        certificate_revocation = false

        [master]
        hiera_config=/etc/puppet/hiera.yaml
        reporturl = http://puppet2.vvmedia.com/reports/upload
        ssl_client_header = SSL_CLIENT_S_DN
        ssl_client_verify_header = SSL_CLIENT_VERIFY
        # certname = dev-puppetmaster2.vvmedia.com
        # ca_name = 'dev-puppetmaster2.vvmedia.com'
        # facts_terminus = rest
        # inventory_server = localhost
        # ca = false

        [agent]
        # The file in which puppetd stores a list of the classes
        # associated with the retrieved configuration. Can be loaded in
        # the separate ``puppet`` executable using the ``--loadclasses``
        # option.
        # The default value is '$confdir/classes.txt'.
        classfile = $vardir/classes.txt

        # Where puppetd caches the local configuration. An
        # extension indicating the cache format is added automatically.
        # The default value is '$confdir/localconfig'.
        localconfig = $vardir/localconfig

    My /etc/puppet/hiera.yaml consists of:

        :backends: yaml
        :yaml:
          :datadir: /etc/puppet/hieradata
        :hierarchy:
          - common
          - database

    I have a directory created at /etc/puppet/hieradata, and within it are:

    /etc/puppet/hieradata/common.yaml:

        :nameserver: ["dnsserverfoo1", "dnsserverfoo2"]
        :smtp_server: relay.internalfoo.com
        :syslog_server: syslogfoo.com
        :logstash_shipper: logstashfoo.com
        :syslog_backup_nfs: nfsfoo:/vol/logs
        :auth_method: ldap
        :manage_root: true

    and /etc/puppet/hieradata/database.yaml:

        :enable_graphital: true
        :mysql_server_package: MySQL-server
        :mysql_client_package: MySQL-client
        :allowed_groups_login: extranet_users

    Does anyone have any idea what could be causing Hiera to not load the requested values? I have even tried restarting the master. Thanks in advance, Cole
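
    One likely culprit, worth checking before anything else: the keys in common.yaml and database.yaml are written with a leading colon (:nameserver:), which YAML parses as Ruby symbols, while hiera('nameserver') looks up the plain string 'nameserver' and returns nil. The colon prefix is correct in hiera.yaml's own settings but not in the data files. A sketch of the fix and a command-line test:

        # /etc/puppet/hieradata/common.yaml - string keys, no leading colon
        nameserver: ["dnsserverfoo1", "dnsserverfoo2"]
        smtp_server: relay.internalfoo.com

        # then test outside Puppet:
        hiera -c /etc/puppet/hiera.yaml nameserver
        hiera -c /etc/puppet/hiera.yaml smtp_server

    If the hiera CLI returns the values but the master still serves nil, the next suspect would be restarting the master once more so it re-reads hiera.yaml, which is only loaded at startup.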

  • Translating IPTables rule to UFW

    - by Dario Fumagalli
    We are using an Ubuntu 12.04 x64 LTS VPS. The firewall being used is UFW. I have set up a Varnish + LEMP stack, along with other things, including an Openswan IPsec VPN from our office to the VPS data center. A second in-house Ubuntu box is to act as MySQL slave and fetch data from the VPS through the VPN. The master's ppp0 is seen as 10.1.2.1 from the slave; they ping, etc.

    I have done the various required tasks, but I can't get the client (slave) MySQL (nor telnet 10.1.2.1 3306) to access the master through the VPN unless I issue this fairly obvious iptables command:

        iptables -A INPUT -s 10.1.2.0/24 -p tcp --dport 3306 -j ACCEPT

    (I willingly restricted the accepted input to that subnet.) With this rule everything works just fine! However, I want to translate this command to UFW syntax so as to keep everything in one place.

    Now, I admit being inexperienced with UFW. I prepared rules like:

        ufw allow proto tcp from 10.1.2.0/24 port mysql

    and two or three variations involving specifying 3306 instead of mysql, specifying a target IP (MySQL's my.cnf at the moment is configured as 0.0.0.0), and similar, but I just don't seem to be able to replicate the simple iptables rule in a functional way. Could anyone kindly give me a suggestion that is not to dump UFW? Thanks in advance.
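
    For what it's worth, the usual UFW spelling of that iptables rule needs the "to any port" clause - without it, "port mysql" attaches to the source side rather than the destination:

        ufw allow proto tcp from 10.1.2.0/24 to any port 3306
        ufw status numbered      # verify the rule was added

    This is a sketch of the equivalent rule, not a guarantee that nothing else (e.g. rule ordering against the IPsec traffic) interferes; "ufw status verbose" plus "iptables -L ufw-user-input -n -v" should show whether the translated rule is actually being hit.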

  • DRBD Replication failure

    - by user62513
    A couple of weeks ago I set up a two-node CRM system with one of the managed resources being MySQL over DRBD. Today, for maintenance reasons, I restarted both nodes, but now they can't connect to each other anymore. DRBD fell out of sync and I followed this guide to get it back connected, but it's only able to run successfully on one node.

    This strange thing happens if I put both nodes in standby with crm node standby and then try:

    crm node online node0 before crm node online node1: all the CRM resources start successfully, but the DRBD partitions are still running in StandAlone state.

    crm node online node1 before crm node online node0: the DRBD resource fails to start, thus causing MySQL not to start.

    If I standby both resources and call crm node online node0, it times out and prints this error:

        Error setting standby=off (section=nodes, set=<null>): Remote node did not respond
        Error performing operation: Remote node did not respond

    Is there anything I'm doing wrong here? An alternative would be just doing MySQL replication, but I'm not sure how to promote a slave to master when the master database is not available.
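
    StandAlone on both sides after a dual restart is the classic DRBD split-brain signature, and the recovery is asymmetric: one node's changes must be discarded. A sketch, assuming the resource is named r0 and node1 is the side being sacrificed (check roles first with drbdadm role r0 and the kernel log):

        # on the node whose data will be thrown away
        drbdadm secondary r0
        drbdadm connect --discard-my-data r0
        # (older 8.3 syntax: drbdadm -- --discard-my-data connect r0)

        # on the surviving node
        drbdadm connect r0

    On the last question: with plain MySQL replication, promoting a slave when the master is gone amounts to running STOP SLAVE and RESET SLAVE on the slave and repointing the application at it - but at that point you are rebuilding exactly the failover logic that Pacemaker/CRM was providing.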

  • Is this way of using Excel 2007 pivot tables for BI scalable?

    - by Sim
    Hi all,

    Background: We need to consolidate sales data across the country to do analysis. Our Internet connection/IT expertise/IT investment is not very strong, so a full BI solution is out of the question. I tried several SaaS BI solutions (GoodData, ZohoReports) and while they're good, they don't seem to fully support what we need. We're looking at about 2 million records for every 2 months.

    My current approach: Our 10 sites currently gather data from all their branches and consolidate it into one Excel file with a pivot table and embedded source data. At HQ, I will request the 10 sites to send back those Excel files periodically, and we will import them into our MSSQL server. There will be a master Excel file that has the same pivot table (as those from the site Excel files), with the MSSQL server as its data source.

    More details: For testing, I currently use MSSQL 2008 Express on my laptop. So far, I have imported our transactions for the past 2 months and there are 2 million+ rows in 1 table in MSSQL (we just use 1 table, corresponding to our common pivot table structure). The DB size is ~600 MB. The master Excel file, not including the source data, is < 10 MB; including the source data increases the size to 60 MB (so I suppose Office 2007 automatically compresses the data?). I have tried using the pivot table (drag-and-drop fields) and the performance so far is OK (my laptop specs: C2D T7200, 3GB RAM, Windows XP).

    So my questions are: If we're looking at a full year of transactions (roughly 15 million rows in MSSQL 2008 Express, 3.6 GB in size), is there any issue with 15 million rows in 1 table in SQL Express? Is there any performance issue with the pivot table at that point? Can it still embed the source data? (I googled but didn't find the maximum size of source data Excel 2007 can embed.) Any other suggestions on how we can do this better? Given that we can't afford a full BI solution, is there any lightweight/budget/SaaS BI that you can recommend?

    Thanks

  • In an environment with multiple WiFi access points, do wireless clients sometimes connect to more than one at the same time?

    - by Bobby Burgess
    This is more of a curiosity than a problem, but in this new office I have two D-Link DAP-2553s connected in a master/slave array (this just means the master keeps certain configuration options aligned with the slave). The network is set to 802.11n-only, and each AP has the same SSID and WPA2 key. The only difference is that they are on different channels (1 and 11). The WiFi network itself is working well: users can roam around and the signal/speed is fairly consistent.

    However, when I look at the 802.11 client list in the web admin page for each of the two APs, I see that certain clients are connected to both, for extended periods of time, though I assume they are only passing data through one of them. Not every client is seen on each AP, but at any given time the same MAC address of a WiFi adapter can be associated (and remain associated) with both APs. The client list auto-refreshes every few seconds, so I believe I'm looking at the most recent rather than stale information. One of the WiFi adapters that consistently associates with both APs is an Intel Centrino Wireless-N 1030 (laptop chip).

    Is it part of the WiFi standard that more than one association per WiFi card can be established concurrently on separate APs?

  • Fixing a typo in machine name

    - by justSteve
    When I installed Windows, I had a typo in the machine name, which I later corrected from the system's 'Computer Name/Domain Changes' dialog - the workstation is a member of a workgroup, not a domain. From everything I can see, the renamed machine name is correct.

    Shift gears: I'm importing SQL logins from my remote server to this, my development workstation, and have used the script presented here - a script that generates a CREATE statement for each login found. While I was preparing to run this script's output (from the remote box) I needed to change the domain name from the remote one to my local machine's name - so I ran the same script locally (in order to see what SQL thinks my domain name is). SQL has the original machine name - the one with the typo. The scripts toss errors if I try to create logins with that identifier:

        CREATE LOGIN [Setve\Admin] FROM WINDOWS WITH DEFAULT_DATABASE = [master]

    But it works correctly if I use the updated machine name:

        CREATE LOGIN [Steve\Admin] FROM WINDOWS WITH DEFAULT_DATABASE = [master]

    So the problem is: do I have a problem I need to solve? Somewhere, deep in the guts of SQL Server, it has a record of a machine name that does not exist. Should I find and fix that discrepancy?

    thx
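
    SQL Server does keep its own copy of the machine name from install time; it is what @@SERVERNAME reports, and the documented way to update it is sp_dropserver/sp_addserver followed by a service restart. A sketch using the names from the post:

        -- see what SQL Server still believes the server is called
        SELECT @@SERVERNAME;

        -- re-register under the corrected name
        EXEC sp_dropserver 'SETVE';
        EXEC sp_addserver 'STEVE', 'local';
        -- then restart the SQL Server service for @@SERVERNAME to change

    The failing CREATE LOGIN is a separate matter: [Setve\Admin] fails simply because Windows can no longer resolve an account under the old name, so generating the logins with the corrected machine name, as done above, is the right call.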
