Search Results

Search found 5583 results on 224 pages for 'global nomad'.


  • Rsyslogd not listening on port

    - by amorfis
    I installed rsyslogd on an Ubuntu server and started it. Everything looks fine, but the port the server should listen on is not open.

        ubuntu@node7:~$ sudo service rsyslog restart
        rsyslog stop/waiting
        rsyslog start/running, process 14114

    Netstat shows it is not listening:

        ubuntu@node7:~$ netstat -tlan
        Active Internet connections (servers and established)
        Proto Recv-Q Send-Q Local Address        Foreign Address      State
        tcp        0      0 0.0.0.0:22           0.0.0.0:*            LISTEN
        tcp        0    320 172.22.0.17:22       10.8.8.38:61335      ESTABLISHED
        tcp6       0      0 :::22                :::*                 LISTEN
        tcp6       0      0 :::2776              :::*                 LISTEN
        tcp6       0      0 :::2777              :::*                 LISTEN
        tcp6       0      0 172.22.0.17:2777     172.22.0.11:56554    ESTABLISHED
        tcp6       0      0 172.22.0.17:2776     172.22.0.11:39780    ESTABLISHED

    This is what /etc/rsyslog.conf looks like (most comments omitted):

        ubuntu@node7:~$ cat /etc/rsyslog.conf
        #################
        #### MODULES ####
        #################
        $ModLoad imuxsock # provides support for local system logging
        $ModLoad imklog   # provides kernel logging support (previously done by rklogd)
        $ModLoad imtcp
        $InputTCPServerRun 514

        ###########################
        #### GLOBAL DIRECTIVES ####
        ###########################
        $ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
        $RepeatedMsgReduction on
        $WorkDirectory /var/spool/rsyslog
        $FileOwner syslog
        $FileGroup adm
        $FileCreateMode 0640
        $DirCreateMode 0755
        $Umask 0022
        $PrivDropToUser syslog
        $PrivDropToGroup adm
        $IncludeConfig /etc/rsyslog.d/*.conf

    In /etc/rsyslog.d/35-server-per-host.conf I have the following lines, and I suspect they could be the cause. What do they mean?

        # Stop processing of all non-local messages. You can process remote messages
        # on levels less than 35.
        :fromhost-ip,!isequal,"127.0.0.1" ~

    And if they are the cause, how can I change them so the server listens, receives and logs messages?

    UPDATE: I commented out the suspected line, but it is still not listening on port 514.
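
    A hedged diagnostic sketch (standard Ubuntu paths and tools assumed): validate the whole configuration tree first, since a parse error in any /etc/rsyslog.d/*.conf file can abort loading before $InputTCPServerRun takes effect, then look specifically for the TCP 514 listener.

        # Dry-run config check; prints the first error it hits, if any.
        sudo rsyslogd -N1

        # Restart and look specifically for a listener on 514.
        sudo service rsyslog restart
        sudo netstat -tlnp | grep ':514'

        # From another host, push a raw syslog line over TCP to confirm
        # end-to-end delivery (nc is an assumption; any TCP client works).
        echo '<13>test message' | nc -w1 172.22.0.17 514

    Note that the :fromhost-ip filter only discards messages after they are received; by itself it should not stop the daemon from binding the port.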


  • Setting up Mako with Cherrypy on nginx through FastCGI

    - by xuniluser
    I'm trying to use TemplateLookup from Mako, but can't seem to get it to work. The layout of the test site is:

        /var/www
            main.py
            templates/
                index.html

    Nginx's config is set up as:

        location / {
            fastcgi_pass 127.0.0.1:8080;
            fastcgi_param SERVER_NAME $server_name;
            fastcgi_param SERVER_PORT $server_port;
            fastcgi_param SERVER_PROTOCOL $server_protocol;
            fastcgi_param PATH_INFO $fastcgi_script_name;
            fastcgi_param REQUEST_METHOD $request_method;
            fastcgi_param QUERY_STRING $query_string;
            fastcgi_param CONTENT_TYPE $content_type;
            fastcgi_param CONTENT_LENGTH $content_length;
            fastcgi_pass_header Authorization;
            fastcgi_intercept_errors off;
        }

    CherryPy's config has:

        [global]
        server.socket_port = 8080
        server.thread_pool = 10
        engine.autoreload_on = False
        tools.sessions.on = True

    A simple CherryPy setup in main.py works fine:

        import cherrypy

        class Main:
            @cherrypy.expose
            def index(self):
                return 'Hello'

        cherrypy.tree.mount(Main(), '/', config='config')

    Now, if I modify this to use Mako's template lookup, I get a 500 error. I know it has something to do with serving static files, and I've tried over a dozen different configurations according to the CherryPy wiki, but none of them work. Here's the bare setup I have for the templates:

        import cherrypy
        from mako.template import Template
        from mako.lookup import TemplateLookup

        templates = TemplateLookup(directories=['templates'], output_encoding='utf-8')

        class Main:
            @cherrypy.expose
            def index(self):
                return templates.get_template('index.html').render(msg='hello')

        cherrypy.tree.mount(Main(), '/', config='config')

    Does anyone know how I can get this to work?
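
    One thing worth checking (an assumption on my part, not something confirmed in the post): directories=['templates'] is resolved relative to the process's current working directory, and under FastCGI that is often not /var/www, which makes the lookup fail and surface as a 500. A minimal sketch that anchors the lookup to the script's own location:

        import os

        import cherrypy
        from mako.lookup import TemplateLookup

        # Resolve the template directory relative to this file instead of
        # the FastCGI process's working directory.
        BASE_DIR = os.path.dirname(os.path.abspath(__file__))
        templates = TemplateLookup(directories=[os.path.join(BASE_DIR, 'templates')],
                                   output_encoding='utf-8')

        class Main:
            @cherrypy.expose
            def index(self):
                return templates.get_template('index.html').render(msg='hello')

        cherrypy.tree.mount(Main(), '/', config='config')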


  • DNS-Based Environment Determination

    - by zvolkov
    Found the following here. The question is: where can I find more details on how exactly to implement this on Windows? Any guide or how-to? Or maybe you can provide your invaluable suggestions? Specifically, how do I make it so that "all QA servers would first resolve entries in qa.example.com first and then if that lookup failed they would try example.com"? (I'm a dev, not a DNS specialist, but our IT support has refused to help with this.)

        Use DNS Based Environment Determination for your servers. Do this by
        initially splitting your top level domain into a number of sub domains
        depending on their function, and then creating DNS Service Names in
        each of the sub domains pointing to the relevant server for that
        service. Based on the list above we would then have:

            * clientdb.prod.example.com for Production
            * clientdb.perf.example.com for Performance Testing
            * clientdb.qa.example.com for QA
            * clientdb.dev.example.com for Development

        Servers then resolve entries in their relevant sub domain by function.
        That is, all QA servers would first resolve entries in qa.example.com
        and then, if that lookup failed, they would try example.com. This
        allows you to have a single configuration entry for your client
        database hostname (clientdb) that would resolve correctly in all
        environments. This technique has the added advantage of still having
        global services defined in a common top level domain.

    This seems to be related to providing "split horizon" DNS service. Reading that, I see that I will probably need a separate DNS server for each environment. Is this true, or does Windows support some form of "tagging" the records to be visible depending on the requestor's IP?
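
    For the client-side half of this (a hedged sketch; newer systems can set the same thing through Group Policy under Computer Configuration > Administrative Templates > Network > DNS Client > "DNS suffix search list"): the "try qa.example.com first, then example.com" behaviour is exactly what the Windows DNS suffix search list does, and it needs no extra DNS servers.

        rem Make unqualified lookups such as "clientdb" resolve as
        rem clientdb.qa.example.com first, then clientdb.example.com.
        reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters ^
            /v SearchList /t REG_SZ /d "qa.example.com,example.com" /f

    Visibility of records by requestor IP, on the other hand, is split-horizon territory; to my knowledge the Windows DNS server of that era has no per-record "tagging", so separate zones (or servers) per environment are the usual answer.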


  • --log-slave-updates is OFF but updates received from master are still logged to slave binary log?

    - by quanta
    MySQL version 5.5.14. According to the documentation, by default a slave does not log to its own binary log any updates that are received from a master server. Here is my config on the slave:

        # egrep 'bin|slave' /etc/my.cnf
        relay-log=mysqld-relay-bin
        log-bin = /var/log/mysql/mysql-bin
        binlog-format=MIXED
        sync_binlog = 1
        log-bin-trust-function-creators = 1

        mysql> show global variables like 'log_slave%';
        +-------------------+-------+
        | Variable_name     | Value |
        +-------------------+-------+
        | log_slave_updates | OFF   |
        +-------------------+-------+
        1 row in set (0.01 sec)

        mysql> select @@log_slave_updates;
        +---------------------+
        | @@log_slave_updates |
        +---------------------+
        |                   0 |
        +---------------------+
        1 row in set (0.00 sec)

    But the slave still logs the updates received from the master to its binary logs. Look at the file sizes:

        -rw-rw---- 1 mysql mysql  37M Apr  1 01:00 /var/log/mysql/mysql-bin.001256
        -rw-rw---- 1 mysql mysql  25M Apr  2 01:00 /var/log/mysql/mysql-bin.001257
        -rw-rw---- 1 mysql mysql  46M Apr  3 01:00 /var/log/mysql/mysql-bin.001258
        -rw-rw---- 1 mysql mysql 115M Apr  4 01:00 /var/log/mysql/mysql-bin.001259
        -rw-rw---- 1 mysql mysql 105M Apr  4 18:54 /var/log/mysql/mysql-bin.001260

    And here is a sample query from reading these binary files with the mysqlbinlog utility:

        #120404 19:08:57 server id 3  end_log_pos 110324763 Query thread_id=382435 exec_time=0 error_code=0
        SET TIMESTAMP=1333541337/*!*/;
        INSERT INTO norep_SplitValues
        VALUES ( NAME_CONST('cur_string',_utf8'118212' COLLATE 'utf8_general_ci'))
        /*!*/;
        # at 110324763

    Did I miss something?
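
    One way to confirm who actually wrote those events (a diagnostic sketch; the binlog name is taken from the listing above): events generated by the slave itself carry the slave's own server id, while events replicated from the master keep the master's id, so tallying the ids inside a suspect binlog shows whether these really are "updates received from the master".

        # The slave's own id.
        mysql -e "SELECT @@server_id;"

        # Tally the originating server id of every event in the binlog.
        mysqlbinlog /var/log/mysql/mysql-bin.001260 \
            | grep -o 'server id [0-9]*' | sort | uniq -c

    If the ids turn out to be the slave's own, the writes are coming from something connecting to the slave directly (triggers, events, or an application), not from replication.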


  • Virtual machines interconnection inside Proxmox 2.1 Cluster

    - by Anton
    We have 3 physical servers (each with 1 NIC) in different datacentres, all interconnected by an OpenVPN bridged private network (10.x.x.x). Inside this network we have a fully functional 3-node Proxmox 2.1 cluster. So, the actual question is: is there any "proper" way to make a "global" local network (172.16.x.x) for all VMs inside the cluster, so that even if we move a VM from one node to another we can still reach it by static IP regardless of its physical location? By the way, we can't add a dedicated NIC to each server. Thanks in advance.

    EDIT: I have tried to make a separate OpenVPN bridge for 172.16.x.x. Now I have two interfaces on each server:

        SRV1:
            openvpnbr1 - 172.16.13.1
            vmbr0      - 172.16.1.1
        SRV2:
            openvpnbr1 - 172.16.13.2
            vmbr0      - 172.16.2.1

    But now there is no connection between those interfaces:

        SRV1: ping 172.16.13.2
        From 172.16.1.1 icmp_seq=2 Destination Host Unreachable
        SRV2: ping 172.16.13.1
        From 172.16.2.1 icmp_seq=2 Destination Host Unreachable

    If I shut down the vmbr0 interfaces there is a connection between the servers over OpenVPN, but vmbr0 is used by Proxmox... Where am I wrong?
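
    A quick check worth running on SRV1 (hedged; the addresses are taken from the post): with vmbr0 holding 172.16.1.1 and openvpnbr1 holding 172.16.13.1, overlapping 172.16.x.x routes can make the kernel send packets for 172.16.13.2 out of the wrong bridge, which would produce exactly the "Destination Host Unreachable" from 172.16.1.1 shown above.

        # Which interface does the kernel pick for the peer's bridge IP?
        ip route get 172.16.13.2

        # List every 172.16 route so overlaps are visible.
        ip route show | grep '^172\.16'

    If the netmasks on the two bridges overlap (for example both /16), narrowing them, or routing distinct /24s per purpose, is the usual fix.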


  • Shadow copy referencing invalid volume from symbolic link

    - by ccook
    I recently replaced my motherboard after the last one failed (it was shorting and causing random reboots). I'm sure this was not healthy for the machine, and a clean install would do wonders, but I'd like to fix the current install. That aside, I've been tracking down a pair of errors in the application log.

        Volume Shadow Copy Service error: Error calling a routine on a Shadow
        Copy Provider {b5946137-7b9f-4925-af80-51abd60b20d5}. Routine details
        IVssSnapshotProvider::QueryVolumesSupportedForSnapshots(ProviderId,29,...)
        [hr = 0x80042302, A Volume Shadow Copy Service component encountered an
        unexpected error. Check the Application event log for more information. ].

        Operation: Query volumes supported by this provider
        Context: Provider ID: {b5946137-7b9f-4925-af80-51abd60b20d5}
                 Snapshot Context: 29

    Followed by:

        Volume Shadow Copy Service error: Unexpected error calling routine
        Error calling CreateFile on volume '\\?\Volume{f4bda86e-049d-11e1-9255-bcaec56690a1}\'.
        hr = 0x80070020, The process cannot access the file because it is being
        used by another process.

    This error is reproducible at the command line, creating the two event log entries:

        C:\Windows\system32>vssadmin list volumes
        vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
        (C) Copyright 2001-2005 Microsoft Corp.

        Error: The shadow copy provider had an unexpected error while trying to
        process the specified command.

    Using WinObj from Sysinternals, I have tracked down the global object:

        '\\?\Volume{f4bda86e-049d-11e1-9255-bcaec56690a1}\' - SymbolicLink - '\Device\HarddiskVolume8'

    Running DISKPART and issuing "list volume" within it lists volumes 0 through 6; there is no HarddiskVolume8. How can I remove this reference to HarddiskVolume8 and get shadow copy up and running?
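
    A hedged way to enumerate what Windows still holds mount points for (mountvol ships with Windows; whether the stale GUID shows up there rather than only in the object namespace is an assumption):

        rem Lists every \\?\Volume{GUID}\ the mount manager knows about,
        rem including ones with no drive letter; compare against DISKPART's
        rem volumes 0-6 to spot an orphaned HarddiskVolume8 entry.
        mountvol

        rem If the stale GUID appears with a mount path, this removes the
        rem mount point without touching the volume itself (placeholder path).
        mountvol C:\some\mount\point /D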


  • Where to place Nginx IP blacklist config file?

    - by ProfessionalAmateur
    I have an Nginx web server hosting two sites. I created a blockips.conf file to blacklist IP addresses that are constantly probing the server, and included this file in nginx.conf. However, in the access logs for the sites I still see these IP addresses showing up. Do I need to include the blacklist in each site's conf instead of the global conf for Nginx?

    Here is my nginx.conf:

        user nginx;
        worker_processes 1;

        error_log /var/log/nginx/error.log warn;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            access_log /var/log/nginx/access.log main;
            sendfile on;
            keepalive_timeout 65;
            include /etc/nginx/conf.d/*.conf;
            # Load virtual host configuration files.
            include /etc/nginx/sites-enabled/*;
            # BLOCK SPAMMERS IP ADDRESSES
            include /etc/nginx/conf.d/blockips.conf;
        }

    blockips.conf:

        deny 58.218.199.250;

    access.log still shows this IP address:

        58.218.199.250 - - [27/Sep/2012:06:41:03 -0600] "GET http://59.53.91.9/proxy/judge.php HTTP/1.1" 403 570 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)" "-"

    What am I doing incorrectly?
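
    One detail worth a second look before changing anything (an observation, not a confirmed diagnosis): the logged request returned status 403, which is what deny produces, so the blacklist may already be working; denied requests are still written to the access log by default. A quick sanity check, with a placeholder hostname:

        # Confirm the running config includes the blacklist and reload it.
        nginx -t && nginx -s reload

        # From an allowed host, a page should return 200; from a denied IP
        # the same request returns 403 (and is still logged).
        curl -I http://your-site.example/

    If the goal is to keep blocked probes out of the logs entirely, blocking upstream of nginx (e.g. with iptables) is the usual route on nginx versions of this vintage.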


  • MediaWiki migrated from Tiger to Snow Leopard throwing exceptions

    - by Matt S
    I had an old laptop running Mac OS X 10.4 with MacPorts for web development: Apache 2, PHP 5.3.2, MySQL 5, etc. I got a new laptop running Mac OS X 10.6, installed MacPorts, and installed the same web development apps: Apache 2, PHP 5.3.2, MySQL 5, etc., all the same versions as on my old laptop. A MediaWiki site (version 1.15) was copied over from my old system (via the Migration Assistant). Having a fresh MySQL setup, I dumped my old database and imported it on the new system. When I try to browse to MediaWiki's "Special" pages, I get the following exception:

        Invalid language code requested
        Backtrace:
        #0 /languages/Language.php(2539): Language::loadLocalisation(NULL)
        #1 /includes/MessageCache.php(846): Language::getFallbackFor(NULL)
        #2 /includes/MessageCache.php(821): MessageCache->processMessagesArray(Array, NULL)
        #3 /includes/GlobalFunctions.php(2901): MessageCache->loadMessagesFile('/Users/matt/Sit...', false)
        #4 /extensions/OpenID/OpenID.setup.php(181): wfLoadExtensionMessages('OpenID')
        #5 [internal function]: OpenIDLocalizedPageName(Array, 'en')
        #6 /includes/Hooks.php(117): call_user_func_array('OpenIDLocalized...', Array)
        #7 /languages/Language.php(1851): wfRunHooks('LanguageGetSpec...', Array)
        #8 /includes/SpecialPage.php(240): Language->getSpecialPageAliases()
        #9 /includes/SpecialPage.php(262): SpecialPage::initAliasList()
        #10 /includes/SpecialPage.php(406): SpecialPage::resolveAlias('UserLogin')
        #11 /includes/SpecialPage.php(507): SpecialPage::getPageByAlias('UserLogin')
        #12 /includes/Wiki.php(229): SpecialPage::executePath(Object(Title))
        #13 /includes/Wiki.php(59): MediaWiki->initializeSpecialCases(Object(Title), Object(OutputPage), Object(WebRequest))
        #14 /index.php(116): MediaWiki->initialize(Object(Title), NULL, Object(OutputPage), Object(User), Object(WebRequest))
        #15 {main}

    I tried to step through MediaWiki's code, but it's a mess; there are global variables everywhere. If I change the code slightly to get around the exception, the page comes up blank and there are no errors (implying there are multiple problems). Has anyone else got MediaWiki 1.15 working on OS X 10.6 with MacPorts? Is there anything in the migration from Tiger that could cause a problem? Any clues where to look for answers?
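
    The backtrace runs through the OpenID extension (frame #4) before the language code goes NULL, so a cheap isolation test (a sketch; it assumes the extension is enabled via a require_once line in LocalSettings.php, which the post doesn't show) is to disable that extension and reload a Special page:

        # Comment out the OpenID include in LocalSettings.php, keeping a backup.
        sed -i.bak 's|^require_once.*OpenID.setup.php.*|#&|' LocalSettings.php

    If the exception disappears, the problem is the extension's localisation hook rather than the migrated core install.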


  • Windows XP corrupts registry every few hours

    - by Ilya Kazakevich
    There is a Dell XPS 400 with Windows Media Center installed. It is installed on RAID (Intel Matrix Storage) built into the chipset south bridge. The RAID has two 150 GB WDC drives connected as a mirror. All drivers and updates are installed (SP3 and so on).

    A week ago the PC changed its video mode to 256 colors (like VESA mode), and after several moments I got a BSOD: c000021a: 0xc0000005. Dr. Watson did not create a dump although it is installed as the default debugger. After reboot it said that the config file is missing or corrupted. So I booted to the recovery console and found that the registry file (config) was suspiciously small. I replaced it with one from a recovery point and Windows booted successfully. But after about 3 hours it crashed again in the same way!

    I looked in the event viewer: it said that Explorer.exe failed to open \global??\DLIAFS. I looked in WinObj and found that it is a device. I set "deny from everyone" on this device's ACL, and after several hours my Windows crashed. I restored the registry, booted again, and there was no error about DLIAFS. I did a full chkdsk and it did not find anything bad, but I did find an event about an error paging to \Harddrive1\D. I do not have a pagefile there, but I thought I should check my disk again. Unfortunately I can't use SMART tools on RAID, but I downloaded the latest software from Intel (it can do the same things the RAID BIOS can, but from Windows). It verified my disks, found some errors, fixed them, and then I rebooted. And it crashed again. I am lost. What (except kernel debugging) could be done here? Thanks


  • Passive FTP Server Port Configuration Troubles Win2003

    - by Chris
    Windows Server 2003, ports 20 & 21 are open. IIS 6, Direct Metabase Edit enabled. I configured the FTP service's passive range to 5500-5550, added 5500-5550 to the Windows Firewall, ran iisreset, and double-checked by restarting the FTP service. Nothing has changed: when I connect and enter passive mode, it still hangs whenever I try to LIST or transfer files. Active mode is just as useless.

        Microsoft Windows [Version 6.1.7600]
        Copyright (c) 2009 Microsoft Corporation. All rights reserved.

        C:\Users\user>ftp
        ftp> open x.x.x.x
        Connected to x.x.x.x.
        220-Microsoft FTP Service xxxxxxxxxxxxxxxxxx
        220 xxxxxxxxxxxxxxxxxx
        User (x.x.x.x:(none)): user
        331 Password required for user.
        Password:
        230-YOUR ACTIVITY IS BEING RECORDED TO THE FULLEST EXTENT
        230 User user logged in.
        ftp> QUOTE PASV
        227 Entering Passive Mode (82,19,25,134,21,124)
        ftp> ls
        200 PORT command successful.
        150 Opening ASCII mode data connection for file list.

    and it hangs. Now, I can see from Microsoft documentation that on newer Windows releases additional steps such as these are suggested, but they don't work on 2003:

        netsh advfirewall firewall add rule name="FTP Service" action=allow service=ftpsvc protocol=TCP dir=in
        netsh advfirewall set global StatefulFTP disable

    Is there anything I am missing? And what is this StatefulFTP malarkey at the end?

    EDIT: I can connect and transfer binary files using the WinSCP client, so the problem must be with my FTP commands, no? Can anyone see anything wrong with my Windows FTP client example? Why would it hang on ls? I tried QUOTE LIST as well, and that just hangs too. The Windows FTP client doesn't work in active mode either; it hangs if I go "binary" and then "put". This worked before I added 5500-5550 on the router. I have since added this range to the router, but it made no difference to the Windows FTP client.
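
    One detail in the transcript stands out (an observation, hedged): immediately after QUOTE PASV, the ls step prints "200 PORT command successful", i.e. the client still issued an active-mode PORT command. The stock Windows ftp.exe has no passive-mode support; QUOTE PASV makes the server answer 227, but the client ignores it for data connections. Testing with a passive-capable command-line client keeps the comparison fair (curl's availability on the client is an assumption):

        :: --ftp-pasv forces passive mode for the data connection;
        :: --list-only mirrors the failing ls.
        curl --ftp-pasv --list-only -u user ftp://x.x.x.x/

    That would also square with WinSCP, which does real passive transfers, working while ftp.exe hangs.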


  • Getting 502 instead of 503 when all backend servers are down running HAProxy behind Apache

    - by scarba05
    I'm testing HAProxy as a dedicated load balancer behind Apache 2.2, replacing our current configuration where we use Apache's own load balancer. In our current Apache-only setup, if all the backend (origin) servers are down, Apache serves a 503 Service Unavailable message. With HAProxy I get a 502 Bad Gateway response instead.

    I'm using a simple reverse-proxy rewrite rule in Apache:

        RewriteRule ^/(.*) http://127.0.0.1:8000/$1 [last,proxy]

    In HAProxy I have the following (running in the default tcp mode):

        defaults
            log global
            option tcp-smart-accept
            timeout connect 7s
            timeout client 60s
            timeout queue 120s
            timeout server 60s

        listen my_server 127.0.0.1:8000
            balance leastconn
            server backend1 127.0.0.1:8001 check observe layer4 maxconn 2
            server backend1 127.0.0.1:8001 check observe layer4 maxconn 2

    Testing by connecting directly to the load balancer when the backend servers are down:

        [root@dev ~]# wget http://127.0.0.1:8000/ test.html
        --2012-05-28 11:45:28-- http://127.0.0.1:8000/
        Connecting to 127.0.0.1:8000... connected.
        HTTP request sent, awaiting response... No data received.

    So presumably this is down to the fact that HAProxy accepts the connection and then closes it.
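
    In pure tcp mode HAProxy has no HTTP layer, so when every server is down it can only accept and drop the connection; Apache's proxy then reports the dead upstream as a 502. A hedged sketch of an http-mode variant that can answer 503 itself (the errorfile path is an assumption, and the duplicated backend1 line in the post looks like a paste slip, shown here as two distinct servers):

        defaults
            mode http
            log global
            timeout connect 7s
            timeout client 60s
            timeout queue 120s
            timeout server 60s

        listen my_server 127.0.0.1:8000
            balance leastconn
            # Returned when no server in the farm is available.
            errorfile 503 /etc/haproxy/errors/503.http
            server backend1 127.0.0.1:8001 check observe layer4 maxconn 2
            server backend2 127.0.0.1:8002 check observe layer4 maxconn 2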


  • samba "username map" stopped working after upgrade to 3.6

    - by Kris_R
    It was time to upgrade our group server (new HDs, problems with the old DRBD installation, etc.). Going as usual for CentOS, I upgraded the whole system from 6.3 to 6.4. The latter came with Samba 3.6, where the old one had 3.5. I transferred most users by copying /etc/passwd and /etc/shadow, and the Samba accounts with pdbedit. Homes were on an NFS drive. The translation of unix accounts to Samba accounts is located in /etc/samba/smbusers.

    Strangely enough, on some Windows clients there were problems connecting to the Samba shares. In one case the only thing that worked was to use the unix account name instead of the Windows name. In another, it was possible to mount the network drive and open it in Windows Explorer, but other applications like Total Commander, on attempting to open the drive, gave the message "Cannot connect to z:" (sometimes requesting user/pass at this point).

    smb.conf has the following entries:

        [global]
        security = user
        passdb backend = tdbsam
        username map = /etc/samba/smbusers
        ...

        [Kris]
        comment = Kris's Private
        path = /SMB/Users/Kris
        writeable = yes
        read only = no
        browseable = yes
        users = krisr
        printable = no
        security mask = 0777
        force security mode = 0
        directory security mask = 0777
        force directory security mode = 0
        force create mode = 0775
        force directory mode = 6775

    The smbusers file:

        # Unix_name = SMB_name1 SMB_name2 ...
        krisr = Kris

    Of course testparm runs without any errors. With Samba 3.5 I was used to output of the form "Mapped user kris to krisr"; nothing like that happens now, just the message "check_sam_security: Couldn't find user Kris in passdb". I read on the web that some people had problems with 3.6 and security = ADS, but those were not helpful for me. I'm seriously considering downgrading back to Samba 3.5, but before taking that step I wanted to ask whether somebody knows a solution to these problems.
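
    A server-side reproduction with verbose logging usually shows where the mapping stops (a diagnostic sketch; share and account names are taken from the post, and whether the map is consulted at all in 3.6's code path is exactly what the trace should reveal):

        # Try the share as the Windows-side name with debug output.
        smbclient //localhost/Kris -U Kris -d 3

        # Confirm the unix-side account is really in the tdbsam backend.
        pdbedit -L | grep -i krisr

    There are reports of username map behaviour changing between Samba releases depending on the security mode; comparing the debug trace above against the same commands on a remaining 3.5 box would confirm whether the map is being applied before or after the passdb lookup.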


  • Git clone/push/pull - where does that username come from?

    - by Kuroki Kaze
    I've set up gitosis and am able to pull/push through ssh. Gitosis is installed on a Debian Lenny server; I'm using git from a Windows machine (msysgit). The strange thing is that if I enable loglevel = DEBUG in gitosis.conf, I see something like this when doing anything with the gitosis server:

        D:\Kaze\source\test-project>git pull origin master
        DEBUG:gitosis.serve.main:Got command "git-upload-pack 'test_project.git'"
        DEBUG:gitosis.access.haveAccess:Access check for '[email protected]' as 'writable' on 'test_project.git'...
        DEBUG:gitosis.access.haveAccess:Stripping .git suffix from 'test_project.git', new value 'test_project'
        DEBUG:gitosis.group.getMembership:found '[email protected]' in 'test'
        DEBUG:gitosis.access.haveAccess:Access ok for '[email protected]' as 'writable' on 'test_project'
        DEBUG:gitosis.access.haveAccess:Using prefix 'repositories' for 'test_project'
        DEBUG:gitosis.serve.main:Serving git-upload-pack 'repositories/test_project.git'
        From 192.168.175.128:test_project
         * branch            master     -> FETCH_HEAD
        Already up-to-date.

    The question is: why am I [email protected]? That email is in the global user.email config variable, too. Yesterday, when gitosis was installed, it saw me as kaze@KAZE; this is the name under which I was added to the gitosis-admin group (and it worked). But today git (or gitosis) started to see me as [email protected]. This is true for all repositories I push or clone. I had to add this address to gitosis.conf directly on the server to be able to edit configs again (which worked). There are two public keys in keydir: [email protected] and [email protected]; their content is identical and they end with kaze@KAZE. The origin URL looks like git@lennyserver:test_project. So the question is: why did git (or gitosis) suddenly decide to call me by email instead of name@machinename? I changed a couple of things trying to set up gitosis (updated git on the server to 1.6.0, for example), but maybe I broke something in my local git installation?
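
    Background that may explain it (hedged, based on how gitosis generally works rather than on anything in the post): gitosis identifies you purely by which keydir/*.pub file matches the SSH key your client offers; each key in keydir becomes a line in the git user's authorized_keys with a forced command naming that identity. Two keydir files with identical key material therefore collide, and sshd simply uses whichever authorized_keys line matches first, so the reported identity can flip when the file is regenerated.

        # On the server: see which identity each authorized_keys line maps to
        # (the git user's home directory path varies by install).
        sudo grep -o 'gitosis-serve [^"]*' ~git/.ssh/authorized_keys

        # On the client: see which private key is actually offered.
        ssh -v git@lennyserver 2>&1 | grep -i offering

    Removing one of the duplicate .pub files from keydir (and pushing gitosis-admin) should make the identity deterministic again.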


  • Add IPv6 support to DirectAdmin server

    - by George Boot
    I just set up a new DirectAdmin server, and I want to prepare it for IPv6 use. My ISP has given me a range of IPv6 addresses that I can use. Let's say that the range is 2a01:7c8:**:1f::. My network adapter uses DHCP to obtain its IP addresses. When I type ifconfig eth0 I get the following result:

        eth0      Link encap:Ethernet  HWaddr 52:**:**:**:ce:f3
                  inet addr:37.**.**.44  Bcast:37.**.**.255  Mask:255.255.255.0
                  inet6 addr: 2a01:7c8:****:1f::/64 Scope:Global
                  inet6 addr: fe80::5054:ff:fe87:cef3/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:38941 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:29439 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:3779534 (3.6 MiB)  TX bytes:5089379 (4.8 MiB)

    As you can see, I have an IPv6 address set, but I can't ping6 an IPv6 host; I get the error "connect: Network is unreachable". I decided that I needed a gateway, so I tried to add one (2a01:7c8:**::1 is the gateway of my ISP):

        ip -6 route add default via 2a01:7c8:****::1 dev eth0

    But it throws an error: "RTNETLINK answers: No route to host". Does somebody know what to do and how to solve this issue? Thanks a lot!
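
    Two things stand out (hedged; the masked addresses make this partly guesswork). First, the interface carries the bare network address 2a01:7c8:****:1f::/64 rather than a host address within it. Second, the gateway 2a01:7c8:****::1 lies outside that /64, and a default route via a gateway that is not on-link is exactly what produces "RTNETLINK answers: No route to host". A sketch of one common fix (the ::2 host suffix is an assumption; substitute the real prefix for the masked part):

        # Give eth0 a real host address inside the delegated /64.
        ip -6 addr add 2a01:7c8:xxxx:1f::2/64 dev eth0

        # Tell the kernel the gateway is reachable on-link, then default via it.
        ip -6 route add 2a01:7c8:xxxx::1 dev eth0
        ip -6 route add default via 2a01:7c8:xxxx::1 dev eth0

        # Verify against a well-known IPv6 host.
        ping6 -c3 2001:4860:4860::8888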


  • Minimum permissions needed to create a user Home Folder in Windows Active Directory

    - by Jim
    We would like the help desk to have the responsibility of creating user home folders instead of our 2nd-level support. The help desk global group is already an Account Operator, so in Active Directory they are able to edit all user attributes just fine. The problem is figuring out the minimum level of permissions needed on the file server to create the home share, without giving them access to everyone's home shares. If they open AD Users and Computers, open the properties for a user, enter \\home\users\%username% in the profile tab and then click OK, they get the following error:

        The \\home\users\username home folder was not created because you do not
        have create access on the server. The user account has been updated with
        the new home folder value but you must create the directory manually
        after obtaining the required access right.

    Right now I have given the help desk group Full Control on the root folder only (no files or subdirectories). The directory actually gets created, but the permissions on the newly created folder show only Administrators with full control, and no permissions for the configured user account. It sure sounds like I'd have to make the help desk local admins on the file servers, which is what I'd like to avoid, especially since the file servers are a large cluster hosting much, much more than the entire org's home share structure.
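
    For reference, a hedged sketch of a grant tighter than Full Control (the share path and group name are placeholders; icacls ships with Server 2003 SP2 and later; AD creates the folder in the help desk user's security context, so that context needs create rights on the root):

        rem Create-subfolder, list, traverse and read-attributes rights on
        rem the root only; (NP) stops the grant propagating into the home
        rem folders themselves. Whether ADUC also needs WDAC/WO (change
        rem permissions / take ownership) on the created folders to stamp
        rem the per-user ACL is worth testing before rolling out.
        icacls \\fileserver\users /grant "DOMAIN\HelpDesk":(NP)(AD,RD,RA,X)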


  • Server 2003 Terminal Services Printers not redirecting, no sessions created.

    - by mikerdz
    OK, odd scenario on a Windows Server 2003 Standard box running as a Terminal Server. On Friday I installed 2 new Windows 7 machines to replace older XP machines. After adding these machines and their local printers, none of the other 16 Windows 7 machines can redirect printing to the server. I have checked Group Policy on the domain controller; nothing is being blocked. In Terminal Services Manager, the client settings are set to Use Client Settings. On the RDP client, port redirection is enabled. I have tried disabling the Use Client Settings option and manually selecting the options for print redirection and default printer connection, but it still does not work.

    After some researching, I found this MS article: http://support.microsoft.com/kb/2492632. I went ahead and added the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\Wds\rdpwd\fEnablePrintRDR DWORD that the article references and set it to "1" to enable the option. I restarted the server, but it still would not print. I am getting quite desperate with this issue because nothing seems to have changed when installing the two new clients and printers. I uninstalled the print drivers for the printers from the server. I have even gone as far as connecting each of the printers manually via a UNC path (\\computername\printer), but even though that works, it prints awfully slowly. Please help!
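
    A quick way to confirm the registry change from KB2492632 actually took (hedged; the path is copied from the post):

        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\Wds\rdpwd" /v fEnablePrintRDR

    If redirection still fails only for sessions from the new machines, comparing the RDP client settings (Local Resources > Printers) and the server-side session properties in Terminal Services Manager for one working and one failing client is a cheap next step.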


  • jdbc4 CommunicationsException

    - by letronje
    I have a machine running a Java app talking to a MySQL instance running on the same machine. The app uses the JDBC4 drivers from MySQL. I keep getting com.mysql.jdbc.exceptions.jdbc4.CommunicationsException at random times. Here is the whole message:

        Could not open JDBC Connection for transaction; nested exception is
        com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet
        successfully received from the server was 25899 milliseconds ago. The
        last packet sent successfully to the server was 25899 milliseconds ago,
        which is longer than the server configured value of 'wait_timeout'. You
        should consider either expiring and/or testing connection validity
        before use in your application, increasing the server configured values
        for client timeouts, or using the Connector/J connection property
        'autoReconnect=true' to avoid this problem.

    For MySQL, the global 'wait_timeout' and 'interactive_timeout' are set to 3600 seconds, and 'connect_timeout' is set to 60 seconds. The wait_timeout value is much higher than the 26 seconds (25899 ms) mentioned in the exception trace. I use DBCP for connection pooling, and here is the Spring bean config for the datasource:

        <bean id="dataSource" destroy-method="close"
              class="org.apache.commons.dbcp.BasicDataSource">
            <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
            <property name="url" value="jdbc:mysql://localhost:3306/db"/>
            <property name="username" value="xxx"/>
            <property name="password" value="xxx"/>
            <property name="poolPreparedStatements" value="false"/>
            <property name="maxActive" value="3"/>
            <property name="maxIdle" value="3"/>
        </bean>

    Any idea why this could be happening? Will using c3p0 solve the problem?
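
    A common mitigation with commons-dbcp (a hedged sketch; it addresses stale pooled connections, which the exception text hints at but does not prove) is to test connections before use and evict long-idle ones, rather than relying on autoReconnect:

        <bean id="dataSource" destroy-method="close"
              class="org.apache.commons.dbcp.BasicDataSource">
            <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
            <property name="url" value="jdbc:mysql://localhost:3306/db"/>
            <property name="username" value="xxx"/>
            <property name="password" value="xxx"/>
            <!-- Validate each connection as it is borrowed from the pool. -->
            <property name="testOnBorrow" value="true"/>
            <property name="validationQuery" value="SELECT 1"/>
            <!-- Periodically evict connections that have sat idle, so they
                 are gone before the server's wait_timeout closes them. -->
            <property name="testWhileIdle" value="true"/>
            <property name="timeBetweenEvictionRunsMillis" value="300000"/>
            <property name="minEvictableIdleTimeMillis" value="600000"/>
            <property name="maxActive" value="3"/>
            <property name="maxIdle" value="3"/>
        </bean>

    c3p0 can do the equivalent with preferredTestQuery plus idleConnectionTestPeriod, so switching pools alone isn't the fix; the validation settings are.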


  • Apache2 Segmentation fault with wsgi_module

    - by a coder
    Apache 2.2.3 is running as an existing web server under RHEL 5. I'm attempting to set up Trac using wsgi_module. RHEL 5 ships with Python 2.4, so in order to use the current version of Trac (1.0) I needed to install it with easy_install-2.6. Trac works with the default mod_python, but users strongly discourage that module as it is officially dead. Using RHEL's package manager, I downloaded and installed python26-mod_wsgi.so. I backed up httpd.conf, then made the following additions:

        LoadModule wsgi_module modules/python26-mod_wsgi.so
        #...#
        WSGIScriptAlias /trac /www/virtualhosts/trac/deploy/cgi-bin/trac.wsgi
        <Directory /www/virtualhosts/trac/deploy/cgi-bin>
            WSGIApplicationGroup %{GLOBAL}
            Order deny,allow
            Allow from all
        </Directory>

    Next I moved trac.conf to trac.conf.bak (it contains the mod_python calls). I tested the configuration using:

        apachectl configtest

    Syntax is OK. So I reloaded the server config using:

        service httpd reload

    At this point, all virtual-hosted sites stopped responding. I restored my backup copy of httpd.conf and reloaded the server config, and the virtual-hosted sites are being served again. A quick look at the httpd error_log shows:

        [Mon Oct 08 10:20:04 2012] [info] mod_wsgi (pid=28282): Initializing Python.
        [Mon Oct 08 10:20:04 2012] [info] mod_wsgi (pid=28280): Attach interpreter ''.
        [Mon Oct 08 10:20:04 2012] [debug] proxy_util.c(1817): proxy: grabbed scoreboard slot 0 in child 28283 for worker proxy:reverse
        [Mon Oct 08 10:20:04 2012] [debug] proxy_util.c(1836): proxy: worker proxy:reverse already initialized
        [Mon Oct 08 10:20:04 2012] [debug] proxy_util.c(1930): proxy: initialized single connection worker 0 in child 28283 for (*)
        [Mon Oct 08 10:20:04 2012] [info] mod_wsgi (pid=28283): Initializing Python.
        [Mon Oct 08 10:20:04 2012] [notice] child pid 28249 exit signal Segmentation fault (11)
        [Mon Oct 08 10:20:04 2012] [notice] child pid 28250 exit signal Segmentation fault (11)
        [Mon Oct 08 10:20:04 2012] [notice] child pid 28251 exit signal Segmentation fault (11)

    There are many similar lines; this is just a snippet of the log file. Any suggestions on what could be causing the segmentation faults?
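
    Given the RHEL 5 mix of a Python 2.4 mod_python and a Python 2.6 mod_wsgi, one plausible cause of the child segfaults (a guess consistent with the log, not confirmed by it) is two Python runtimes loaded into the same httpd process. Worth checking before anything else:

        # List loaded Apache modules; if both python_module and wsgi_module
        # appear, comment out the mod_python LoadModule while testing.
        httpd -M 2>/dev/null | grep -Ei 'python|wsgi'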


  • ntop to analyse bandwidth usage on multiple ASA 5505

    - by dunxd
    I have set up a NetFlow server at our data centre, which is connected via VPN to ~40 remote offices using Cisco ASA 5505s. The aim is to analyse usage data and find out exactly how the remote connections are being used. I followed http://techowto.files.wordpress.com/2008/09/ntop-guide.pdf to set up ntop and https://supportforums.cisco.com/docs/DOC-6114 to set up the ASAs.

    I can see from the Plugin NetFlow Statistics page that NetFlow packets from my ASAs are being received; the counter is increasing. However, I am not seeing any breakdown on the Global Traffic Statistics page after switching to the NetFlow interface; I'm just seeing a pie chart showing 100% of traffic for eth0. The interfaces and documentation are a little hard to follow, so I am not sure I have got things configured correctly.

    When setting up my NetFlow-device.2 I can specify a "Virtual NetFlow Interface Network Address". The web UI says: "This value is in the form of a network address and mask on the network where the actual NetFlow probe is located."

        * Is this a network address (e.g. 192.168.0.0/24) or an actual host IP address (192.168.0.1/24)?
        * If it should be a network address, is it the network one of my ASAs is on, or the network my ntop server is on?
        * If a host IP address, is it the IP address used by eth0 on my ntop server, the IP address of an ASA, or something else?
        * Do I need a separate virtual interface for each ASA I am collecting NetFlow data from?

    Any guidance would be greatly welcome.


  • How (in)secure are cell phones in reality?

    - by Aron Rotteveel
    I was recently re-reading an old Wired article about the Kaminsky DNS vulnerability and the story behind it. In this article there was a quote that came across as a little exaggerated to me:

        "The first thing I want to say to you," Vixie told Kaminsky, trying to
        contain the flood of feeling, "is never, ever repeat what you just told
        me over a cell phone." Vixie knew how easy it was to eavesdrop on a cell
        signal, and he had heard enough to know that he was facing a problem of
        global significance. If the information were intercepted by the wrong
        people, the wired world could be held ransom. Hackers could wreak havoc.
        Billions of dollars were at stake, and Vixie wasn't going to take any
        risks.

    Reading this, I could not help but feel it was a bit blown up and theatrical. Now, I know absolutely nothing about cell phones and the security problems involved, but to my understanding, cell phone security has improved quite a bit over the past few years. So my question is: how insecure are cell phones in reality? Are there any good articles that dig a bit deeper into this matter?


  • Website cannot be accessed with Google DNS because of unsigned DNS

    - by Sinan Samet
    I get this error on http://dnscheck.pingdom.com/?domain=stakeholdergame.com:

        Inconsistent security for stakeholdergame.com - DS found at parent, but
        no DNSKEY found at child.

    People can't access my site with Google Public DNS because of this. How do I solve the problem? dig @ns1.haveabyte.nl stakeholdergame.com DS shows me this:

        ; <<>> DiG 9.8.3-P1 <<>> @ns1.haveabyte.nl stakeholdergame.com DS
        ; (1 server found)
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 42223
        ;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0
        ;; WARNING: recursion requested but not available

        ;; QUESTION SECTION:
        ;stakeholdergame.com.   IN  DS

        ;; AUTHORITY SECTION:
        stakeholdergame.com.  14400  IN  SOA  ns1.haveabyte.nl. hostmaster.stakeholdergame.com. 2014030300 14400 3600 1209600 86400

        ;; Query time: 21 msec
        ;; SERVER: 79.170.93.174#53(79.170.93.174)
        ;; WHEN: Tue Jun 10 11:20:41 2014
        ;; MSG SIZE  rcvd: 100
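
    The pingdom message itself describes the break (hedged interpretation): the .com parent zone still publishes a DS record for the domain, but the zone no longer serves matching DNSKEYs, so validating resolvers such as Google Public DNS must treat every answer as bogus and return SERVFAIL. The usual fixes are to re-sign the zone or to have the registrar remove the DS record. Note also that querying the child for DS, as the dig above does, is not the informative check; DS lives in the parent zone, and the record validators need from the child is DNSKEY. A few checks:

        # DS as published by the parent (walks down from the root).
        dig +trace DS stakeholdergame.com

        # DNSKEY at the authoritative server; empty here means broken.
        dig DNSKEY stakeholdergame.com @ns1.haveabyte.nl

        # How a validating resolver sees it (SERVFAIL expected while broken).
        dig stakeholdergame.com @8.8.8.8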


  • Improve speed of "start menu" in Linux Mint 10 - Ubuntu 10.10 derivative

    - by Gabriel L. Oliveira
    I have a global menu (including the Applications, Administration and System tabs) that takes too much time, for me, to load: about 2.5 seconds. This time is only taken on first use; after it has loaded, subsequent opens take less than 0.2 milliseconds. The menu took even longer before (about 5 seconds), and I found that was because of the "Other" part of the menu, which included many applications installed with Wine, so I removed all of them (I didn't need them at all).

    I have "normal" programming knowledge, and I think the first opening of the menu involves some kind of caching function that looks for which installed apps need to be placed in the menu. But I haven't found this function, so I can't analyse in detail what it is doing (whether it searches for files under "~/.local/share/applications" or anything else). Also, I found that hitting Alt-F2 fires this caching function too, because after waiting for it to load, opening the menu took less than 0.2 milliseconds.

    So, could anyone help me reduce this time? I found on the internet that some users could reduce the time by resizing the application icons, but most of my icons are already at 25x25. Any other ideas? Maybe loading it in a separate process, or including it in startup... I don't know.

    PS: Sorry if this is an awkward question; I just don't like waiting for things to happen, and I think this process should be smoother than it is now. Thanks in advance!


  • Either a bad nginx+php-fpm config, or nginx+php-fpm cannot handle the load?

    - by The Wolf
    I have WordPress installed on my server, configured (hopefully) with nginx + php-fpm + MariaDB. I am trying to import a 1.5 MB XML file using the WordPress importer. Every time I try to upload it, the import gets cut off, meaning I get just a blank screen as the result. Here is my error log (just 2 of the errors):

        [error] 858#0: *1 connect() failed (111: Connection refused) while connecting to upstream,
        client: xx.xxx.xx.xx, server: xxx.com, request: "GET xxxx.html HTTP/1.1",
        upstream: "fastcgi://127.0.0.1:9000", host: "xxx.com"
        [error] 858#0: *13 connect() failed (111: Connection refused) while connecting to upstream,
        client: xxx.x.xx.xx, server: xxx.com, request: "GET xxxx.php HTTP/1.1",
        upstream: "fastcgi://127.0.0.1:9000", host: "xxx.com"

    I don't know why it can't process the WordPress export XML. I have already increased max_file_upload etc., but nothing happens. I hope somebody can help me. Here are my configs.

    nginx.conf:

        user nginx;
        worker_processes 8;

        error_log /var/log/nginx/error.log warn;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            access_log /var/log/nginx/access.log main;
            sendfile on;
            #tcp_nopush on;
            server_tokens off;
            keepalive_timeout 65;
            fastcgi_read_timeout 500;
            #gzip on;
            client_max_body_size 2M;

    php-fpm.conf (comments trimmed):

        ;;;;;;;;;;;;;;;;;;;;;
        ; FPM Configuration ;
        ;;;;;;;;;;;;;;;;;;;;;
        include=/etc/php-fpm.d/*.conf

        ;;;;;;;;;;;;;;;;;;
        ; Global Options ;
        ;;;;;;;;;;;;;;;;;;
        [global]
        pid = /var/run/php-fpm/php-fpm.pid
        error_log = /var/log/php-fpm/error.log
        ;log_level = notice
        ;emergency_restart_threshold = 0
        ;emergency_restart_interval = 0
        ;process_control_timeout = 0
        ; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging.
        daemonize = no

        ;;;;;;;;;;;;;;;;;;;;
        ; Pool Definitions ;
        ;;;;;;;;;;;;;;;;;;;;
        ; See /etc/php-fpm.d/*.conf

    ps aux:

        [root@host etc]# ps aux
        USER       PID %CPU %MEM    VSZ   RSS TTY   STAT START TIME COMMAND
        root         1  0.0  0.1   2900  1380 ?     Ss   Jun02 0:00 init
        root         2  0.0  0.0      0     0 ?     S    Jun02 0:00 [kthreadd/9308]
        root         3  0.0  0.0      0     0 ?     S    Jun02 0:00 [khelper/9308]
        root       124  0.0  0.0   2464   576 ?     S<s  Jun02 0:00 /sbin/udevd -d
        root       460  0.0  0.1  35976  1308 ?     Sl   Jun02 0:00 /sbin/rsyslogd -i /var/run/syslogd.pid -c 5
        root       474  0.0  0.0   8940  1028 ?     Ss   Jun02 0:00 /usr/sbin/sshd
        root       481  0.0  0.0   3264   876 ?     Ss   Jun02 0:00 xinetd -stayalive -pidfile /var/run/xinetd.pid
        root       491  0.0  0.1   6268  1432 ?     S    Jun02 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/host.busilak.com.
        mysql      584  0.1  6.8 679072 71456 ?     Sl   Jun02 0:04 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --use
        root       586  0.0  0.3  12008  3820 ?     Ss   Jun02 0:01 sshd: root@pts/0
        root       629  0.0  0.0   9140   756 ?     Ss   Jun02 0:00 /usr/sbin/saslauthd -m /var/run/saslauthd -a pam -n 2
        root       630  0.0  0.0   9140   520 ?     S    Jun02 0:00 /usr/sbin/saslauthd -m /var/run/saslauthd -a pam -n 2
        root       645  0.0  0.1  12788  1928 ?     Ss   Jun02 0:01 sendmail: accepting connections
        smmsp      653  0.0  0.1  12576  1728 ?     Ss   Jun02 0:00 sendmail: Queue runner@01:00:00 for /var/spool/clientmqueue
        root       691  0.0  0.1   7148  1184 ?     Ss   Jun02 0:00 crond
        root       698  0.0  0.1   6272  1688 pts/0 Ss   Jun02 0:00 -bash
        root      1006  0.0  0.0   7828   924 ?     Ss   00:30 0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
        nginx     1007  0.0  0.1   8156  1724 ?     S    00:30 0:00 nginx: worker process
        nginx     1008  0.0  0.1   8024  1360 ?     S    00:30 0:00 nginx: worker process
        nginx     1009  0.0  0.1   8020  1356 ?     S    00:30 0:00 nginx: worker process
        nginx     1011  0.0  0.1   8024  1360 ?     S    00:30 0:00 nginx: worker process
        nginx     1012  0.0  0.1   8024  1360 ?     S    00:30 0:00 nginx: worker process
        nginx     1013  0.0  0.1   8024  1360 ?     S    00:30 0:00 nginx: worker process
        nginx     1014  0.0  0.1   8024  1360 ?     S    00:30 0:00 nginx: worker process
        nginx     1015  0.0  0.1   8024  1344 ?     S    00:30 0:00 nginx: worker process
        root      1030  0.0  0.2  25396  2904 ?     Ss   00:30 0:00 php-fpm: master process (/etc/php-fpm.conf)
        apache    1031  0.0  1.9  40700 20624 ?     S    00:30 0:00 php-fpm: pool www
        apache    1032  0.0  2.0  41924 21888 ?     S    00:30 0:01 php-fpm: pool www
        apache    1033  0.0  1.9  41212 20848 ?     S    00:30 0:01 php-fpm: pool www
        apache    1034  0.0  1.9  40956 20792 ?     S    00:30 0:01 php-fpm: pool www
        apache    1035  0.0  2.0  41560 21556 ?     S    00:30 0:02 php-fpm: pool www
        apache    1040  0.0  1.8  39292 19120 ?     S    00:30 0:00 php-fpm: pool www
        root      1125  0.0  0.0   6080  1040 pts/0 R+   01:04 0:00 ps aux

    netstat -l:

        [root@host etc]# netstat -l
        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address               Foreign Address  State
        tcp        0      0 *:ssh                       *:*              LISTEN
        tcp        0      0 localhost.localdomain:smtp  *:*              LISTEN
        tcp        0      0 localhost.locald:cslistener *:*              LISTEN
        tcp        0      0 *:mysql                     *:*              LISTEN
        tcp        0      0 *:http                      *:*              LISTEN
        tcp        0      0 *:ssh                       *:*              LISTEN
        Active UNIX domain sockets (only servers)
        Proto RefCnt Flags   Type   State     I-Node   Path
        unix  2      [ ACC ] STREAM LISTENING 60575947 /var/run/saslauthd/mux
        unix  2      [ ACC ] STREAM LISTENING 60574168 @/com/ubuntu/upstart
        unix  2      [ ACC ] STREAM LISTENING 60575873 /var/lib/mysql/mysql.sock

    I hope somebody can help me figure out what the problem is.
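
    Given that nginx reports "(111: Connection refused)" against fastcgi://127.0.0.1:9000, the first thing to establish (a diagnostic sketch; the CentOS pool file path is an assumption) is whether php-fpm is reliably listening there, and whether the PHP limits allow a 1.5 MB upload plus a long-running import. Note that the netstat above shows localhost:cslistener, which is port 9000 in /etc/services, so the listener exists at least some of the time; the refused connections may coincide with php-fpm restarts or worker exhaustion.

        # Is anything on 9000 right now, and which process owns it?
        netstat -tlnp | grep ':9000'

        # Pool listen address and process-manager limits.
        grep -E '^(listen|pm\.|request_terminate_timeout)' /etc/php-fpm.d/www.conf

        # Upload and runtime limits a WordPress import depends on.
        php -r 'foreach (["upload_max_filesize","post_max_size","max_execution_time","memory_limit"] as $k) echo "$k=", ini_get($k), "\n";'

    Note also that nginx's client_max_body_size is 2M here, only just above the 1.5 MB file; raising it costs nothing while testing.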


  • Bind a key to a command-line command in Mac OS X?

    - by Stefan Lasiewski
    I have a Mac PowerBook running Leopard (10.5.8). Does Leopard provide an easy way to bind keys to commands which are typically run on the command line? For example, I can open up Terminal.app and run:

        /System/Library/Frameworks/ScreenSaver.framework/Resources/ScreenSaverEngine.app/Contents/MacOS/ScreenSaverEngine

    which activates the screensaver and locks my screen. What if I want to bind 'Apple-key L' to this command and execute it globally, regardless of which application is in use at the moment? Can I do this, or can I only run ScreenSaverEngine from a Terminal window?

    I tried to set up global keyboard shortcuts, but it seems this won't let me bind a key to an arbitrary shell command:

        Note: You can create keyboard shortcuts only for existing menu commands.
        You cannot define keyboard shortcuts for general purpose tasks such as
        opening an application or switching between applications.

    I tried to set up an application keyboard shortcut, but commands like ScreenSaverEngine don't seem to be applications. Note that this screensaver/lock-screen command is just one example; I have come across other nifty commands I might want to bind to a key combination as well. I can do this in GNOME and Windows (with varying success). How about with Leopard? Should I be looking at doing this with AppleScript? (I haven't used that since the HyperCard days...)
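
    Since the post already asks about AppleScript: one low-tooling route (a sketch; it assumes a third-party hotkey utility to attach the key, since Leopard itself doesn't bind keys to arbitrary scripts) is to wrap the shell command in a script application:

        -- Save this from Script Editor as an application, then bind a key to
        -- launching it with a utility such as Quicksilver or Spark.
        -- "do shell script" runs the same command the post uses in Terminal.
        do shell script "/System/Library/Frameworks/ScreenSaver.framework/Resources/ScreenSaverEngine.app/Contents/MacOS/ScreenSaverEngine"

    Quicksilver and Spark are third-party hotkey tools of that era; the "do shell script" command itself is standard AppleScript.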

