Search Results

Search found 22893 results on 916 pages for 'client scripting'.

Page 769/916 | < Previous Page | 765 766 767 768 769 770 771 772 773 774 775 776  | Next Page >

  • Getting 400 Bad Request when requesting by server name on nginx/uwsgi

    - by Marc Hughes
    I'm trying to run two different sites on nginx via different ports (they each have a load balancer that points to the appropriate port). The first site works perfectly. The second site:

        If I access http://localhost:81/ it works correctly
        If I access http://127.0.0.1:81/ it works correctly
        If I access the hostname http://THEHOSTNAME:81/ it fails with a 400 error
        If I access the public IP http://x.x.x.x:81/ it fails with a 400 error

    I've set the error_log to info, but the only lines I get in the logs when this happens are:

        ==> /var/log/nginx/access.log <==
        10.183.38.141 - - [24/Aug/2014:21:03:28 +0000] "GET / HTTP/1.1" 400 37 "-" "curl/7.36.0" "-"

        ==> /var/log/nginx/error.log <==
        2014/08/24 21:03:28 [info] 7029#0: *5 client 10.183.38.141 closed keepalive connection

    In my uwsgi log, I only see this:

        [pid: 6870|app: 0|req: 87/92] 10.28.23.224 () {32 vars in 380 bytes} [Sun Aug 24 21:05:21 2014] GET / => generated 26 bytes in 1 msecs (HTTP/1.1 400) 2 headers in 82 bytes (1 switches on core 2)

    What should be my next step in debugging this?
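
    Note that the uwsgi line shows the 400 being generated by the application itself, not by nginx. If the application happens to be Django (an assumption; the question never names the framework), host-header validation would produce exactly this split between localhost and hostname requests, since Django answers 400 for any Host value missing from its whitelist. A minimal sketch:

        # settings.py -- hypothetical Django app behind uwsgi; hostnames are placeholders
        # With DEBUG = False, any request whose Host header is not listed here
        # is answered with a bare 400 before any view code runs.
        ALLOWED_HOSTS = ['localhost', '127.0.0.1', 'THEHOSTNAME', 'x.x.x.x']

    Comparing curl -H "Host: localhost" http://THEHOSTNAME:81/ against a plain request would confirm or rule this out in one step.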

  • How to remotely open gedit with SFTP URL in Gnome through SSH?

    - by Álvaro Justen
    My setup is weird and I can't change it now. I have two machines: local-machine (my desktop running Ubuntu with Gnome) and remote-machine (a virtual machine, also running Ubuntu, but without X). On both machines I have my private and public SSH keys. I need to SSH from remote-machine to local-machine and run gedit (on local-machine, under the default $DISPLAY), but opening a file on remote-machine through SFTP. Something like this:

        myuser@remote-machine:~$ ssh local-machine "DISPLAY=:0.0 gedit sftp://remote-machine/some/file"

    The command above doesn't work. gedit shows this message:

        Could not open the file sftp://remote-machine/some/file. gedit cannot handle sftp: locations.

    Note that /some/file exists on remote-machine, and I can SSH normally from remote-machine to local-machine using my SSH key without any problems! I can run the command DISPLAY=:0.0 gedit sftp://remote-machine/some/file in a terminal on local-machine and gedit opens the file on remote-machine without any problems - but the terminal in which I executed the command is running in DISPLAY :0 (really, it's gnome-terminal). I also tried the -t option of the SSH client (to force pseudo-tty allocation), but it didn't work. If I run DISPLAY=:0.0 gedit sftp://remote-machine/some/file on local-machine but under a tty (for example in tty1, by pressing <Ctrl>+<Alt>+<F1>), it does not work either - I get the same error as when running from remote-machine. I found that if I pass the environment variable DBUS_SESSION_BUS_ADDRESS with a correct value, it works! So, if I do something like this:

        myuser@local-machine:~$ env | grep DBUS_SESSION_BUS_ADDRESS > env.txt
        myuser@local-machine:~$ scp env.txt remote-machine:

    and then:

        myuser@remote-machine:~$ ssh local-machine "DISPLAY=:0.0 $(cat env.txt) gedit sftp://remote-machine/some/file"

    it works! The problem is that I'm not on local-machine, so I can't get the correct value for this env variable. Is there any other way to make this work?
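
    Since the question already establishes that only DBUS_SESSION_BUS_ADDRESS is missing, one workaround is to read it out of a process that is already part of the desktop session. A sketch, run from remote-machine; it assumes a gnome-session process exists for myuser on local-machine (process name and username are illustrative):

        ssh local-machine 'export $(tr "\0" "\n" < /proc/$(pgrep -n -u myuser gnome-session)/environ | grep DBUS_SESSION_BUS_ADDRESS); DISPLAY=:0.0 gedit sftp://remote-machine/some/file'

    /proc/PID/environ holds the session's environment NUL-separated, so the tr call splits it into lines and grep pulls out the one variable gedit needs.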

  • Sharepoint db issue after DB move to SQL 08

    - by JohnyV
    Recently we moved our SharePoint 2007 database from a SQL Server 2000 server to an x64 SQL Server 2008 server. All seemed well; however, there is a problem where SQL Server stops running and the service has to be restarted. The errors mention insufficient internal memory, etc. I have tried starting the database with -g384, which was the default in SQL 2000 (256 is the default for 2008, I believe), but this has not rectified the issue. I was advised that the issue might be rectified by upgrading to WSS 3.0 SP2; however, when I tried to install it, I got another error after the SP2 update and had to revert to a VM snapshot. The error after the service pack is:

        Server error: http://go.microsoft.com/fwlink?LinkID=96177

    So I guess I have a few questions: how can I fix the first issue, and how can I fix the second? I have checked out many forums and posts and tried a few things, and still get no joy. Any assistance would be great.

    UPDATE: I have fixed the "Server error: http://go.microsoft.com/fwlink?LinkID=96177" problem: I needed to run the WSS SP2 as well as the Office Servers SP2, then the configuration wizard, and then the MOSS configuration worked. The errors I am getting in SQL are:

        SQL Server was unable to run a new system task, either because there is insufficient memory or the number of configured sessions exceeds the maximum allowed in the server. Verify that the server has adequate memory. Use sp_configure with option 'user connections' to check the maximum number of user connections allowed. Use sys.dm_exec_sessions to check the current number of sessions, including user processes.

        A read operation on a large object failed while sending data to the client. A common cause for this is if the application is running in READ UNCOMMITTED isolation level. The connection will be terminated.

        There is insufficient system memory in resource pool 'internal' to run this query.

    These errors are raised by a user that was created as a service account for SharePoint.
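
    The first error text itself names the two knobs to inspect. A short T-SQL sketch of those checks (the memory value is illustrative, not a recommendation):

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'user connections';              -- configured session limit (0 = dynamic)
        SELECT COUNT(*) FROM sys.dm_exec_sessions;         -- sessions actually open right now
        EXEC sp_configure 'max server memory (MB)', 4096;  -- cap the buffer pool, leaving headroom
        RECONFIGURE;

    On x64 the -g startup option only sizes the memory-to-reserve region, so capping max server memory is usually the more relevant lever for 'internal' resource pool errors.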

  • Configure Plesk only for Tomcat-Java

    - by AJIT RANA
    I need to configure Tomcat on a Linux dedicated server, through Plesk, to serve only a Java project. The following services are running on it:

        1. Apache on port 80
        2. Tomcat on ports 8080/9080
        3. MySQL on port 3306

    The problem is this: I need to serve only the Java project from port 80 on this server. At the moment, when a user types my site name, the default page served is index.html or a .php file from the Apache document root. So how can I serve the Java project from the default port 80 after deploying the .war file? Users who want to access my site don't know the Tomcat port (9080 here), nor the deployed application name.

    In detail: suppose my site is www.example.com, hosted on a Linux dedicated server with Plesk installed, along with Apache, Tomcat and MySQL. Right now, to reach the Java project I have to enter www.example.com:9080/java_project_name/ in the browser. How can I make the project available at just www.example.com, serving the default .jsp page from the java_project_name application? I don't want the port number or the project name in the URL; my clients only know www.example.com, and when they browse to it, it should serve the default page from the java_project directory. What do I need to do to implement this? Please help. Thanks
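
    A common way to put Apache in front of Tomcat for this is a reverse-proxy virtual host. A sketch under assumptions (mod_proxy and mod_proxy_http enabled, Tomcat's HTTP connector on 9080, application context java_project_name):

        <VirtualHost *:80>
            ServerName www.example.com
            # forward everything at / to the Tomcat webapp, hiding the port and context
            ProxyPass        / http://localhost:9080/java_project_name/
            ProxyPassReverse / http://localhost:9080/java_project_name/
        </VirtualHost>

    Under Plesk, this kind of directive usually belongs in the domain's vhost.conf include rather than the main Apache configuration, so Plesk doesn't overwrite it.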

  • mysql master-master setup as a way to simply master-slave promotion

    - by Chris Go
    I'm trying to see if the following plan is viable. The goal here is HA (uptime), not load: writes are fine on one MySQL 5.5 server (with InnoDB), but obviously not possible when the database is down. Currently I have a master-slave replication setup, which works fine except that it has no automatic promotion. What I am planning is to set up master-master replication and get this "automatic promotion" using Amazon Route 53 DNS failover (health checks). What I am trying to avoid is the auto-increment trick, because the "business folks" got used to the auto-incrementing PK producing consecutive numbers (yeah, I know this is bad, but the data goes back to 2004). So: set up master-master replication WITHOUT the auto-increment collision prevention. The primary master is db1.domain.com and the secondary master is db2.domain.com. In Amazon Route 53, set up a DNS failover record for db.domain.com:

        primary failover: db1.domain.com, with a TCP health check on its IP address, port 3306
        secondary failover: db2.domain.com, with a TCP health check on its IP address, port 3306

    Most of the time (99%), unless tcp://db1.domain.com:3306 is dead, db1.domain.com will be served for DNS hits on db.domain.com; in fact, hopefully this is 100%. The possible downside is the loss of a primary key (collision), and I think I am OK with losing one order. We are a low-data-volume B2B business and can just call our client up if this occurs (like an order disappearing). Does this sound like a good plan? Then I will also run another slave replication with db1.domain.com as "master" to a slave-db1.domain.com - not sure why, maybe for heavy SELECTs?
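
    A minimal my.cnf sketch of the replication half of this plan (assumptions: replication users and binary logging already exist on both boxes; values are illustrative):

        # db1.domain.com -- db2.domain.com mirrors this with server-id = 2
        [mysqld]
        server-id         = 1
        log_bin           = mysql-bin
        log_slave_updates = 1
        # deliberately NOT setting auto_increment_increment / auto_increment_offset,
        # accepting the small collision window the question describes

    Each server is then pointed at the other with CHANGE MASTER TO ... followed by START SLAVE, which is what makes the pair symmetric.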

  • Remote SQL server connection failure

    - by Sevki
    I am trying to connect to my SQL Server 2008 Web instance and I'm failing horribly: I get error 26. Before you jump on me, I have already done the following:

        Checked the spelling of the SQL Server instance name specified in the connection string.
        Used the SQL Server Surface Area Configuration tool to enable SQL Server to accept remote connections over the TCP or named pipes protocols. (For more information about the tool, see Surface Area Configuration for Services and Connections.)
        Made sure the firewall on the server instance of SQL Server is configured to open ports for SQL Server and the SQL Server Browser port (UDP 1434).
        Made sure the SQL Server Browser service is started on the server.

    In addition to these, I have disabled the firewall completely and tried other ports. Nothing works: the same credentials work on the server but not on the client. This is the exact error message:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (.Net SqlClient Data Provider)

    Can anybody help?
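
    Error 26 is raised while resolving the instance, before authentication ever happens. One way to take the Browser service out of the equation (a sketch; host, database and port are placeholders) is to connect to an explicit TCP port, which skips the UDP 1434 lookup entirely:

        Server=tcp:myserver.example.com,1433;Database=mydb;User Id=appuser;Password=...;

    If that connects, the problem is UDP 1434 being filtered somewhere between client and server rather than the instance itself.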

  • Contacts Issue: Google Contacts, Nokia 5800 And Outlook, Sync and Images

    - by josecortesp
    Hello everyone, this is a multi-part question, so please feel free to answer just one part or all of them. My actual situation is: Google Contacts syncs with the Nokia 5800 using MfE (Google Sync), via Google's fake "Exchange server"; Outlook syncs with Google using the GOContactSync software (http://www.webgear.co.nz/Products/GOContactSync.aspx). What I want is an easy (or not) way to correctly sync the three of them, even if I have to change some of my scenario. I know the Outlook-to-Google contacts sync is not good yet, but if someone can suggest me better software than GOContactSync, that would be good. What I really want is to sync at least name, phone, email and image. Then, is there a way to get all my Google contacts to use their own Google image by default? I've noticed that I usually have to choose it manually, and sometimes it doesn't work. I'd like to use some service, or someone's service, to do this automatically. I have tried services like Soocial, but it gave me a lot of trouble after the first sync, because Soocial can ONLY get ALL your contacts, not just "My Contacts", and it also gave me about 100 bad contacts in Outlook because of their beta client. The final goals of all of this are to:

        Get only "My Contacts" from Google in Outlook and on the 5800, with their images
        Get ALL my contacts with their default image, meaning the contact's personal image
        Get a solution to correctly sync the three of them, using Google's as the base for the rest

    Thanks in advance. Again, feel free to suggest solutions to just one or all of them.

  • Apache mod-pagespeed installation affects mod-spdy?

    - by tim peterson
    Recently my site (an https connection, running on an Amazon EC2 Ubuntu Apache 2.2) has this issue where I need to load the page several times (3-4) before it will load normally without issue. It will then load normally as long as I keep loading pages regularly (every couple of seconds). It will stall again if I don't load pages for a few minutes. It has nothing to do with my application, because I don't have this problem with the exact same app codebase on the Apache installation on my laptop. The only things to my knowledge that I've changed are that I recently installed mod_spdy and then, a few weeks later, mod_pagespeed (https://developers.google.com/speed/pagespeed/mod). However, I have since turned mod_pagespeed off by setting it to off in its pagespeed.conf. Unfortunately, that didn't solve the problem. The lines below are how every one of the last 10 lines of my error.log looks:

        # tail -f /var/log/apache2/error.log
        ...
        [32728:32729:ERROR:mod_spdy.cc(162)] request->chunked == 1 in request GET / HTTP/1.1
        [Sat Jun 02 04:50:08 2012] [warn] [client 50.136.93.153] [stream 5] [32728:32729:WARNING:http_to_spdy_filter.cc(113)] HttpToSpdyFilter is not the last filter in the chain: chunk

    Any thoughts? Thank you, tim
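
    If the goal is to rule the two modules out entirely, disabling them at the Apache level is more conclusive than toggling their own config files. On Ubuntu that is typically done like this (module names are the usual defaults from the Google packages, an assumption worth verifying in /etc/apache2/mods-enabled):

        sudo a2dismod pagespeed      # fully unload mod_pagespeed
        sudo a2dismod spdy           # fully unload mod_spdy
        sudo service apache2 restart

    If the stalls disappear with both unloaded, re-enabling them one at a time isolates the culprit.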

  • IPv6: Can't ping anything - "Operation not permitted"

    - by Matthew Iselin
    I've been working on getting IPv6 support into my network, and had everything working properly for a short while. The server is running Ubuntu Server 8.10. Now, however, whenever I attempt to do anything related to IPv6 on the server, I get "Operation not permitted". This comes from things like wide-dhcpv6-client (when trying to get an IPv6 address from the ISP) and radvd - both log errors of this type into syslog. Even pinging the loopback interface fails:

        xxx@gordon:~$ ping6 ::1
        PING ::1(::1) 56 data bytes
        ping: sendmsg: Operation not permitted
        ping: sendmsg: Operation not permitted
        ping: sendmsg: Operation not permitted
        ^C
        --- ::1 ping statistics ---
        3 packets transmitted, 0 received, 100% packet loss, time 2014ms

        xxx@gordon:~$ sudo ping6 ::1
        sudo: unable to resolve host gordon
        PING ::1(::1) 56 data bytes
        ping: sendmsg: Operation not permitted
        ping: sendmsg: Operation not permitted
        ping: sendmsg: Operation not permitted
        ^C
        --- ::1 ping statistics ---
        3 packets transmitted, 0 received, 100% packet loss, time 2014ms

    As you can see, I have attempted pinging as root, as most of the material I've found on the internet points to a permission problem. However, that has not helped. Any hints to getting unstuck would be appreciated.
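
    "Operation not permitted" from sendmsg, even as root, usually points at a packet filter rather than file permissions. Assuming ip6tables is in use on this box (an assumption; some firewall scripts set it silently), a quick check would be:

        sudo ip6tables -L -n -v          # look for a DROP/REJECT policy or rule on OUTPUT
        sudo ip6tables -P OUTPUT ACCEPT  # temporarily, only to confirm the diagnosis

    If the ping starts working after the second command, the fix belongs in whatever firewall script installed the restrictive IPv6 policy.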

  • chmod -R 777 / on ubuntu - numerous problems

    - by ncatnow
    A client has accidentally given the entire filesystem full permissions on their Ubuntu 10.04 box:

        chmod -R 777 httpdocs/cd /

    As you can see, they attempted to cd to the root and instead gave chmod a fun parameter to play with. The first sign of the problem was the inability to use su, which gave an authentication error; sudo also complained of a missing setuid bit. This was fixed by logging in as root on the machine itself and running chmod +s /usr/bin/sudo. I can now sudo su and do what I need to as root. su still gives an authentication failure. I followed the advice here: http://swiss.ubuntuforums.org/showthread.php?t=1180661&page=2

        chmod 0755 /
        chmod 0755 /*
        chmod 1777 /tmp
        chmod 0750 /root
        chmod 0700 /lost+found

    I then tried to reset the root password. I still cannot su to become root, or su root. The system seems to be running fine otherwise. Are there any suggestions for getting su to work once again? Where can I look for more problems?
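
    For the su failure specifically: a recursive 777 clears setuid bits (mode 0777 contains no 4000), and without setuid root, su cannot read the shadow file, which surfaces as an authentication failure even with the right password. A sketch of the usual repairs, assuming standard Ubuntu paths and groups:

        chmod 4755 /usr/bin/su        # restore the setuid bit the 777 sweep stripped
        chmod 4755 /usr/bin/passwd    # passwd is setuid root on Ubuntu as well
        chown root:shadow /etc/shadow
        chmod 640 /etc/shadow         # shadow must not stay world-writable

    Other setuid binaries (ping, mount, ssh-agent, ...) were stripped the same way, so comparing against a healthy 10.04 box or reinstalling the affected packages is worth considering.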

  • getting "No LoginModules configured" for JAAS login under WebSphere security domain

    - by user1739040
    I have a JAX-RPC web service running on WebSphere V7. It requires a UsernameToken for security. I have a custom login module (MyLoginModule) which extracts the username and password, and that module is defined as a JAAS application login in the WebSphere admin console. Using IBM RAD 8.0, I have bound the token consumer to the login module using the JAAS config name of the module. This all works fine and happy on my development server. Now I realize that, for deployment to another server, I am required to move the JAAS login from global security to a security domain. When I do that, it breaks my web service. I get this SOAP fault message:

        com.ibm.wsspi.wssecurity.SoapSecurityException: WSEC6520E: Construction of the login context failed. The exception is : javax.security.auth.login.LoginException: No LoginModules configured for MyLoginModule

    According to the IBM docs: "The JAAS application logins, the JAAS system logins, and the JAAS J2C authentication data aliases can all be configured at the domain level. By default, all of the applications in the system have access to the JAAS logins configured at the global level. The security runtime first checks for the JAAS logins at the domain level. If it does not find them, it then checks for them in the global security configuration. Configure any of these JAAS logins at a domain only when you need to specify a login that is used exclusively by the applications in the security domain." So I am looking to make sure my application is in the domain, and I have tried everything I can think of (I have assigned the domain to "all scopes", to the entire cell, etc.). No luck; I keep getting the same error response to my web service client. Any help or hints are appreciated.

  • SQL Server 2008 login problem with ASP.NET application: Failed to open the explicitly specified database

    - by eulerfx
    I am running SQL Server 2008 Express Edition on Windows Server 2008 with an ASP.NET application which must access the server. The ASP.NET application is associated with an application pool that runs under the NetworkService account. This account in turn has a Login and a User record on SQL Server in the required database. When I attempt to run the ASP.NET website I get a blank page, and in the error log I find this information event record:

        Login failed for user 'NT AUTHORITY\NETWORK SERVICE'. Reason: Failed to open the explicitly specified database. [CLIENT: myLocalMachine]

    The connection string has Trusted_Connection=True; and the required database specified. When I explicitly specify the user name and password instead, I get another login error stating the password is incorrect, even though the same username/password combination works through SQL Server Management Studio. The NETWORK SERVICE account seems to have all the required privileges for the database. Also, I made a test ASP.NET website project which does a simple select from a table in that database, and using the same config file I do not get the error; it seems to work. Is it something to do with trust levels, then? The original ASP.NET web app references various DLLs, including open source libraries. Also, the application does not seem to be able to write to the event log itself, throwing a security exception, even though everything in the config files, including machine.config, states the app is in full trust.
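
    Since the failure happens at the "open the explicitly specified database" step, the login-to-user mapping inside that specific database is worth re-checking even though it looks present. A T-SQL sketch of what the mapping should amount to (the database name is a placeholder):

        USE MyAspNetDb;
        CREATE USER [NT AUTHORITY\NETWORK SERVICE]
            FOR LOGIN [NT AUTHORITY\NETWORK SERVICE];
        EXEC sp_addrolemember 'db_datareader', 'NT AUTHORITY\NETWORK SERVICE';

    If the user already exists but is orphaned or denied CONNECT, the same error is produced, so GRANT CONNECT TO [NT AUTHORITY\NETWORK SERVICE]; inside the database is the other half of the check.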

  • Thunderbird doesn't show folders on a new Dovecot install

    - by Zoran Zaric
    Hey, I set up a new mail server with Postfix and Dovecot some days ago. Everything is working except that Thunderbird doesn't show any folders; Evolution shows me all folders. I migrated from a Courier install using imapsync. In the filesystem the folders don't have INBOX in their names, so the folders are called .Folder 1, not .INBOX.Folder 1. This is the output of dovecot -n:

        # 1.0.10: /etc/dovecot/dovecot.conf
        Warning: mail_extra_groups setting was often used insecurely so it is now deprecated, use mail_access_groups or mail_privileged_group instead
        base_dir: /var/run/dovecot/
        log_timestamp: "%Y-%m-%d %H:%M:%S "
        protocols: imap pop3
        listen(default): *:143
        listen(imap): *:143
        listen(pop3): *:110
        disable_plaintext_auth: no
        login_dir: /var/run/dovecot//login
        login_executable(default): /usr/lib/dovecot/imap-login
        login_executable(imap): /usr/lib/dovecot/imap-login
        login_executable(pop3): /usr/lib/dovecot/pop3-login
        first_valid_uid: 1001
        last_valid_uid: 1001
        mail_extra_groups: vmail
        mail_access_groups: vmail
        mail_location: maildir:/var/vmail/%d/%u
        maildir_copy_with_hardlinks: yes
        mail_executable(default): /usr/lib/dovecot/imap
        mail_executable(imap): /usr/lib/dovecot/imap
        mail_executable(pop3): /usr/lib/dovecot/pop3
        mail_plugin_dir(default): /usr/lib/dovecot/modules/imap
        mail_plugin_dir(imap): /usr/lib/dovecot/modules/imap
        mail_plugin_dir(pop3): /usr/lib/dovecot/modules/pop3
        pop3_uidl_format(default):
        pop3_uidl_format(imap):
        pop3_uidl_format(pop3): %08Xu%08Xv
        auth default:
          user: nobody
          passdb:
            driver: sql
            args: /etc/dovecot/dovecot-sql.conf
          userdb:
            driver: sql
            args: /etc/dovecot/dovecot-sql.conf
          socket:
            type: listen
            client:
              path: /var/spool/postfix/private/auth
              mode: 432
              user: postfix
              group: postfix
            master:
              path: /var/run/dovecot/auth-master
              mode: 432
              user: vmail
              group: vmail

    Thanks!
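
    The usual culprit after a Courier-to-Dovecot migration is the namespace prefix: Courier names everything INBOX.Folder, and a client that cached that layout can stop seeing folders when the prefix disappears. A sketch of the Dovecot 1.x namespace block that restores the old prefix (assuming the Maildir layout shown above):

        namespace private {
          separator = .
          prefix = INBOX.
          inbox = yes
        }

    The alternative is to leave the server as it is and delete and re-create the account in Thunderbird, so it rediscovers the new folder layout instead of using its cached subscriptions.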

  • Apache & SVN on Ubuntu - Post-commit hook fails silently, pre-commit hook “Permission Denied”

    - by 113169587962668775787
    I've been struggling for the past couple of days to get post-commit email notifications working on my SVN server (running over HTTP with Apache2 on Ubuntu 9.10). SVN commits work fine, but for some reason the hooks are not being properly executed. Here are the configuration settings: users access the repo via HTTP with the Apache dav_svn module (I created users/passwords via htpasswd in a dav_svn.passwd file). dav_svn.conf:

        <Location /svn/repos>
          DAV svn
          SVNPath /home/svn/repos
          AuthType Basic
          AuthName "Subversion Repository"
          AuthUserFile /etc/apache2/dav_svn.passwd
          Require valid-user
        </Location>

    I created a post-commit hook file that writes a simple message to a file in the repository root, /home/svn/repos/hooks/post-commit:

        #!/bin/sh
        REPOS="$1"
        REV="$2"
        /bin/echo 'worked' > ${REPOS}/postcommit.log

    I set the entire repository to be owned by www-data (the Apache user) and assigned 755 permissions to the post-commit script. When I test the post-commit script as the www-data user in an empty environment, it works:

        sudo -u www-data env - /home/svn/repos/hooks/post-commit /home/svn/repos 7

    But when I commit from a client machine, the commit is successful, yet the post-commit script does not seem to be executed. I also tried running a simple script as the pre-commit hook, and I get an error even with an empty pre-commit script:

        Commit failed (details follow): Can't create null stdout for hook '/home/svn/repos/hooks/pre-commit': Permission denied

    I did a few searches on Google for this error, and I presume this is an issue with the Apache user (www-data) not having adequate permissions, specifically to open /dev/null. I also read that the reason post-commit fails silently is that it doesn't report on stdout. Anyway, I've also tried giving the Apache user (www-data) ownership of the entire repository and edited the Apache virtual host to allow operations on the server root, and I'm still getting permission denied. /etc/apache2/sites-available/primarydomain.conf:

        <Directory />
          Options FollowSymLinks
          AllowOverride None
          Order allow,deny
          Allow from all
        </Directory>

    Any ideas/suggestions would be greatly appreciated! Thanks
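
    The "Can't create null stdout" wording points at /dev/null itself rather than at the hook: Subversion opens /dev/null for the hook's stdout, so Apache's user has to be able to write to it, and a stray redirect run as root can occasionally replace the device with a plain root-owned file. A quick check-and-repair sketch:

        ls -l /dev/null      # should read: crw-rw-rw- 1 root root 1, 3 ...
        # if it is a regular file or lacks world write permission, recreate it:
        sudo rm /dev/null && sudo mknod -m 666 /dev/null c 1 3

    Once the pre-commit "Permission denied" goes away, the post-commit hook will most likely start firing too, since both fail on the same open.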

  • MySQL socket connections working, but not port connections

    - by Neil
    I installed MySQL Community 5.1.45 on my Snow Leopard 10.6, using the pkg from their site. I had previously installed a MySQL binary from entropy.ch. In the previous installation, connections were working fine before I upgraded to Snow Leopard. In Snow Leopard, both installations are problematic. Using an app called Sequel Pro, if I connect with the socket option, it connects properly. However, a standard connection with the same credentials doesn't work. From what I've understood, socket connections happen on the machine itself between processes, whereas normal connections occur over the network/ports - in this case a loopback to my machine, since the server and client are both on the same machine. My new CakePHP installation isn't able to connect to the db with the root credentials I provided. By the way, I've been starting the MySQL server using the preference pane. When I tried running mysqld from the terminal, it gave me:

        100323  1:54:37 [Warning] Can't create test file /usr/local/mysql-5.1.45-osx10.6-x86_64/data/mbp.lower-test
        100323  1:54:37 [Warning] Can't create test file /usr/local/mysql-5.1.45-osx10.6-x86_64/data/mbp.lower-test
        mysqld: Can't change dir to '/usr/local/mysql-5.1.45-osx10.6-x86_64/data/' (Errcode: 13)
        100323  1:54:37 [ERROR] Aborting
        100323  1:54:37 [Note] mysqld: Shutdown complete

    mbp is the name of my machine. How do I fix this so that my webserver can connect to the MySQL server?
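
    A server that accepts socket connections but not TCP ones has often been started with networking disabled, or bound to an unexpected address. A sketch of what to look for (file locations are the package defaults, an assumption):

        # in /etc/my.cnf -- either line would produce exactly this symptom
        # skip-networking            <- comment out if present
        # bind-address = 127.0.0.1   <- fine for loopback clients, but must be correct

        mysql -h 127.0.0.1 -P 3306 -u root -p   # force a TCP connection to test

    The Errcode 13 from the manual start is a separate issue: the data directory belongs to the MySQL service user, so running mysqld by hand as another user can't enter it, which is why the preference pane start works while the terminal start aborts.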

  • Wifi Drops Connections with WPA2-PSK

    - by graf_ignotiev
    I run a small computer lab made up of 10 computers of identical hardware and software (Dell Latitudes with Windows 7 x64 Enterprise), and I use a ZyWALL 2WG as a router/firewall. Nine of the computers connect to the router over wifi using WPA2-PSK encryption, while the last one is connected by ethernet cable. I'm having a problem where any computer connected to the wifi occasionally drops off the network (it cannot be pinged and the client cannot ping the gateway). It only happens on the wifi side, and only when the encryption is WPA2-PSK or WPA-PSK. I tried using another router of a different make and model and had no problems. Thinking it could be a software error, I reset the router to factory defaults and installed the newest firmware (V4.04(AQI.8) | 04/09/2010), but I still have the problem. The 802.1X log gives the following error:

        User logout because of user disassociation.

    with this note:

        WPA2-PSK:00242c582ece:logout

    where 00242c582ece is the MAC address of the device. At this point I'm out of things to try and leads to follow. It looks like this user had the same or a similar problem, but none of those proposed solutions work for me.

  • Two hosting providers running simultaneously... possible / not possible? good practice / unnecessary?

    - by user29600
    For the sake of their reputations, I won't mention the names; I'll just use "ABC Web Dev" for the business I worked for previously and "XYZ Hosting" for the hosting company they used. I recently found out that XYZ Hosting had some sort of incident where they ended up losing a lot of their clients' data - including ABC Web Dev's. ABC Web Dev was able to recover some of their customers' websites, after pulling them from their local development computers and putting them up on another hosting provider. They ended up losing a lot of clients because of it, and their reputation was ruined. I'm starting my own web dev company and I don't want to run into this same issue. I'm planning on using Rackspace, but although they are a great company, according to Wikipedia they have still had downtime in their past. I thought it might be a good idea to try to run two providers at once, to ensure that if anything happened to one, the websites would still be live on the other. I know the websites would have to be served from one server at a time, but if there's a way to redirect requests to the second server when the first one is down, that would solve my issue. As a note, we will have a staging environment set up locally, which will allow for quick recovery if a provider does have any issues; however, I'd like to avoid any downtime at all if possible. So my questions are: Has anyone tried running two providers simultaneously? Would this be considered good practice, or am I going too far? Is there really any way to run two simultaneously, where one server acts as a backup?

  • Netscaler - Involving Response Body Replacement with GeoIP Implementation

    - by MrGoodbyte
    I have a NetScaler (10000) NS9.3: Build 52.3.cl. There is a document containing a short tutorial about GeoIP database implementation at http://support.citrix.com/article/CTX130701. I have added the location database by following this tutorial successfully, but instead of dropping connections I need to replace a string in the response body. To do that, I created a rewrite action with these parameters:

        Name: ns_country_replacement_action
        Type: REPLACE_ALL
        Target Expression: HTTP.RES.BODY(50000)
        Replacement Text: HTTP.RES.HEADER("international")
        Pattern: \{\/\*COUNTRY_NAME\*\/\}

    To insert the header called "international", I tried to create another rewrite action with these parameters (I'll bind this one's policy with higher priority):

        Name: ns_country_injection_action
        Type: INSERT_HTTP_HEADER
        Header Name: international
        Header Value: CLIENT.IP.SRC.MATCHES_LOCATION("*.US.*.*.*.*").NOT

    When I click the Create button, it says:

        Compound expression syntax error, [.*.*").NOT^, 50]

    I'm not sure, but I think one of two things might cause this error: the expression I use for MATCHES_LOCATION is wrong (though I use the same expression as the tutorial), or MATCHES_LOCATION().NOT returns a BOOLEAN while the field expects a STRING. How can I get this to work? Am I using the right tools to accomplish what I need to do? Thank you. -Umut
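
    The second guess is the likely one: INSERT_HTTP_HEADER wants a STRING expression for the header value, and MATCHES_LOCATION(...).NOT is a BOOLEAN. The standard workaround is to move the boolean into the rewrite policy's rule and make the header value a constant string. A sketch in NetScaler CLI syntax (names reused from the question; treat the exact quoting as an assumption to verify on the box):

        add rewrite action ns_country_injection_action insert_http_header international "\"true\""
        add rewrite policy ns_country_injection_policy "CLIENT.IP.SRC.MATCHES_LOCATION(\"*.US.*.*.*.*\").NOT" ns_country_injection_action

    With this arrangement, the replacement action only ever sees the header when the location check matched, which is the behaviour the two-action design is after.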

  • Using dnsmasq for accessing multiple nameservers assigned by DHCP

    - by Ash
    At my work desktop running openSUSE 11.4, I have a local network which gets its address, domain (work.site) and nameservers (10.100.1.1, 10.100.1.2) through DHCP - these get written into /etc/resolv.conf. I access the internet through the work network, and these two nameservers end up answering any public domain name lookups. I also have a private VPN that I connect to. The nameserver (10.111.1.1) and domain (private.site) rarely change for this network, but currently they're pushed by the openVPN client into NetworkManager, and they also get merged into the existing /etc/resolv.conf. My resolv.conf ultimately ends up looking like this:

        search private.site work.site
        nameserver 127.0.0.1
        nameserver 10.111.1.1
        nameserver 10.100.1.1

    As you can see, the second nameserver from my work network was pushed out because of the three-entry limit. It's fine for now, but it would be a problem if that nameserver went down for maintenance or something. So I found out that dnsmasq could help me here, and I set up dnsmasq as a local DNS resolver without any DHCP support. This is my /etc/dnsmasq.conf right now:

        resolv-file=/etc/resolv.conf
        server=/private.site/10.111.1.1
        server=/1.111.10.in-addr.arpa/10.111.1.1
        listen-address=127.0.0.1
        bind-interfaces
        log-queries

    I've made dnsmasq get the list of nameservers from /etc/resolv.conf, since NetworkManager seems to be updating this list correctly (for a maximum of 3 nameservers). I'm able to resolve the host names in both networks correctly. So these are the questions I have: Is there a way to make either NetworkManager or dhclient write the list of nameservers somewhere else, which I can point dnsmasq at as its resolv-file? And how do I make dnsmasq use certain nameservers as the default for all queries? Right now I notice that lookups for public domains on the internet are usually sent to both nameservers - the one on work.site as well as private.site. It would be good if I could limit this to work.site only.
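
    For the second question, dnsmasq can be told to ignore resolv.conf entirely and be given an explicit server list, with the domain-qualified entries taking precedence for their own domains. A sketch using the addresses from the question (the trade-off is that the work nameservers become hard-coded rather than learned from DHCP):

        no-resolv                                  # ignore /etc/resolv.conf completely
        server=10.100.1.1                          # work nameservers: defaults for everything
        server=10.100.1.2
        server=/private.site/10.111.1.1            # VPN domain only
        server=/1.111.10.in-addr.arpa/10.111.1.1   # and its reverse zone

    With this layout, public lookups go only to the work.site servers, and only private.site names ever reach the VPN nameserver.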

  • Help diagnosing Likewise Open Active Directory authentication problem

    - by purpletonic
    I have two servers which were, until recently, authenticating against the company's Active Directory domain controller. I believe a recent change to the Active Directory administrator password caused the servers to stop authenticating against AD. I tried to add the servers back to the domain using the command:

        domainjoin-cli join example.com adusername

    This seemed to work without complaints, but when I try to log in via ssh with my domain account, I get an invalid password error. When I run the command lw-enum-users, it prints all of the domain users, and looking up my own account, I see that it is valid and my password hasn't expired. I also ran lw-get-status and received the following:

        LSA Server Status:
        Agent version: 5.0.0
        Uptime: 0 days 3 hours 35 minutes 46 seconds
        [Authentication provider: lsa-activedirectory-provider]
          Status: Online
          Mode: Un-provisioned
          Domain: example.com
          Forest: example.com
          Site: Default-First-Site-Name
          Online check interval: 300 seconds
          [Trusted Domains: 1]
          [Domain: EXAMPLE]
            DNS Domain: example.com
            Netbios name: EXAMPLE
            Forest name: example.com
            Trustee DNS name:
            Client site name: Default-First-Site-Name
            Domain SID: S-1-5-24-1081533780-4562211299-822531512
            Domain GUID: 057f0239-7715-4711-e64b-eb5eeed20e65
            Trust Flags: [0x001d]
              [0x0001 - In forest]
              [0x0004 - Tree root]
              [0x0008 - Primary]
              [0x0010 - Native]
            Trust type: Up Level
            Trust Attributes: [0x0000]
            Trust Direction: Primary Domain
            Trust Mode: In my forest Trust (MFT)
            Domain flags: [0x0001]
              [0x0001 - Primary]
            [Domain Controller (DC) Information]
              DC Name: dc1.example.com
              DC Address: 10.11.0.103
              DC Site: Default-First-Site-Name
              DC Flags: [0x000003fd]
              DC Is PDC: yes
              DC is time server: yes
              DC has writeable DS: yes
              DC is Global Catalog: yes
              DC is running KDC: yes
        [Authentication provider: lsa-local-provider]
          Status: Online
          Mode: Local system

    Anyone got any ideas about what might be occurring? Thanks in advance!

  • Viewing a large-resolution VNC server through a small-resolution viewer in Ubuntu

    - by Madiyaan Damha
    I have two Ubuntu computers: one with a large screen resolution (1920x1600) that is running the default Ubuntu VNC server, and another with a resolution of about 1200x1024 that I use to VNC into the server (I use the default Ubuntu VNC viewer). Everything works fine, except that there are annoying scrollbars in the viewer, because the server's desktop resolution is so much higher than the viewer's. Is there a way to:

        1. Scale the server's desktop down to the viewer's resolution? I know there will be a loss of image quality, but I am willing to try it out. This should be something like how Windows Media Player or VLC scales down its window (with some interpolation of pixels).
        2. Automatically shrink the server's resolution to the client's when I connect, and scale the resolution back when I disconnect? This seems like a less attractive solution.
        3. Any other solution that gurus out there use?

    I am sure someone has experienced this before (annoying scrollbars), so there must be a solution out there. Thanks,
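
    For option 1, the stock vino server has no downscaling of its own, but swapping in x11vnc (plainly a different server than the default, exporting the same :0 desktop) gives server-side scaling; a sketch:

        # on the 1920x1600 machine: export the running desktop, pre-scaled to the viewer size
        x11vnc -display :0 -scale 1200x1024 -forever

    The viewer then receives a 1200x1024 framebuffer and the scrollbars disappear, at the cost of the interpolation blur the question already anticipates.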

  • haproxy and tomcat intermittent hangs

    - by user7347
    I am trying to run haproxy in front of Tomcat on a Solaris x86 box, but I am getting intermittent failures. At seemingly random intervals, a request just hangs until haproxy times out the connection. I thought maybe it was my app, but I've been able to reproduce it with the Tomcat manager app, and hitting Tomcat directly there are no problems at all. Hitting it repeatedly with curl will cause the error within 10-15 tries:

        curl -ikL http://admin:admin@<my server>:81/manager/status

    haproxy is running on port 81, Tomcat on port 7000. haproxy returns a 504 gateway timeout to the client and puts this into the log file:

        Sep 7 21:39:53 localhost haproxy[16887]: xxx.xxx.xxx.xxx:65168 [07/Sep/2009:21:39:23.005] http_proxy http_proxy/tomcat7000 5/0/0/-1/30014 504 194 - - sHNN 0/0/0/0/0 0/0 "GET /manager/status HTTP/1.1"

    Tomcat shows nothing: no error in the logs and no indication that the request ever makes it to the Tomcat server. The request count is not incremented, and the manager app shows activity on only one thread, the one serving up the manager app itself. Here are my haproxy and Tomcat connector settings. I've been playing with both a good deal trying to chase down the issue, so they may not be ideal, but they definitely don't seem like they should cause this error.

    server.xml:

        <Connector port="7000" protocol="HTTP/1.1"
                   enableLookups="false"
                   maxKeepAliveRequests="1"
                   connectionLinger="10" />

    haproxy config:

        global
            log loghost local0
            chroot /var/haproxy

        listen http_proxy :81
            mode http
            log global
            option httplog
            option httpclose
            clitimeout 150000
            srvtimeout 30000
            contimeout 3000
            balance roundrobin
            cookie SERVERID insert
            server tomcat7000 127.0.0.1:7000 cookie server00 check inter 2000
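
    One combination worth testing (an assumption about the cause, not a confirmed diagnosis): maxKeepAliveRequests="1" makes Tomcat close every connection after a single request, while option httpclose only rewrites Connection headers rather than actively closing, which leaves a window where haproxy sends a request down a connection Tomcat is already tearing down - and the sHNN/504 with no matching Tomcat log line fits that race. A sketch of the two changes, applied separately to see which one matters:

        # haproxy side: actively close both directions after each response
        option forceclose

        <!-- Tomcat side: let keep-alive behave normally instead of one-and-close -->
        <Connector port="7000" protocol="HTTP/1.1" enableLookups="false" />

    Also worth noting: connectionLinger="10" alters SO_LINGER behaviour on close and is usually best left at its default while debugging connection teardown.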

  • Can't get an IBM xSeries 345 server to load Windows Server 2003 using ServerGuide utility

    - by Kyle Noland
    I have a client that has an IBM xSeries 345 eServer. Per the IBM support website, I have downloaded the ServerGuide Setup 7.4.17 installation ISO and burned a bootable CD. The CD boots fine and loads the utility. I walk through the following screens without any issue: set the date and time; detect the IBM ServeRAID card and install the latest firmware; clear the hard disks; set up the RAID array. The next step is to format the NOS partition. I select my partition size and the utility goes through the following steps: creating the NOS partition, formatting the NOS partition (NTFS), and copying W32 files. The copying of the W32 files takes about 10 minutes, and I can see the CD drive and disks working hard. When the copying is complete, I'm taken to a blank page with just "NOS Partitioning" at the top. At the bottom of the screen are the familiar Back and Exit buttons. I can see the place where the Next button should be, and if I click on it I can tell there is something there, but the space is empty: no button is displayed, and clicking the empty spot never takes me to the next screen. I can't load the OS until I get past this part. I have already tried burning multiple copies and versions of the ServerGuide CD, and letting the final screen just sit there over the weekend, thinking it might advance after syncing the drives or something. Has anybody else seen this? I'm really at a loss here. EDIT: I found another person who has the exact same problem as me: http://www.ibm.com/developerworks/forums/thread.jspa?messageID=14451763

  • Using virtual IP with stunnel and haproxy

    - by beardtwizzle
    Hi there, we have a load-balancer setup in which an HTTPS request flows through the following steps:

        Client -> DNS -> stunnel on load-balancer -> HAProxy on LB -> web server

    This setup works perfectly when stunnel is listening on the local IP of the load-balancer. However, in our setup we have two load-balancers, and we want to be able to listen on a virtual IP which only ever exists on one LB at a time (keepalived flips the IP to the second LB if the first one falls over). HAProxy has no problem doing this (and I can ping the assigned virtual IP on the load-balancer I'm testing), but stunnel seems to hate the concept. Has anyone achieved this before? Below is my stunnel config - as you can see, I'm actually listening for ALL traffic on 443:

        cert = /etc/ssl/certs/mycert.crt
        key = /etc/ssl/certs/mykey.key
        ;setuid = nobody
        ;setgid = nogroup
        pid = /etc/stunnel/stunnel.pid
        debug = 3
        output = /etc/stunnel/stunnel.log
        socket = l:TCP_NODELAY=1
        socket = r:TCP_NODELAY=1

        [https]
        accept = 443
        connect = 127.0.0.1:8443
        TIMEOUTclose = 0
        xforwardedfor=yes

    Sorry for the long-winded question!
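
    The usual way to let a daemon bind a keepalived VIP that isn't currently assigned to the box is to allow non-local binds at the kernel level (assuming a Linux kernel under the LBs, which the keepalived setup suggests); a sketch:

        # /etc/sysctl.conf -- allow binding to an address not yet present on any interface
        net.ipv4.ip_nonlocal_bind = 1

        sysctl -p   # apply without a reboot

    With that set, stunnel can be given accept = <VIP>:443 on both load-balancers, and whichever one currently holds the VIP answers the traffic.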

  • Cassandra Remote Connection

    - by Lyuben Todorov
    I'm not managing to connect to Cassandra from outside machines. The database is hosted on a Windows machine, and I'm trying to connect from a Mac (but this shouldn't cause problems). A local connection works:

        C:\cassandra\bin>cassandra-cli
        Starting Cassandra Client
        Connected to: "Test Cluster" on 127.0.0.1/9160
        Welcome to Cassandra CLI version 1.1.6

    But it fails from other machines on the same network:

        bin/cassandra-cli --host 192.168.0.10 --port 9160
        org.apache.thrift.transport.TTransportException: java.net.ConnectException: Operation timed out
            at org.apache.thrift.transport.TSocket.open(TSocket.java:183)
            at org.apache.thrift.transport.TFramedTransport.open(TFramedTransport.java:81)
            at org.apache.cassandra.cli.CliMain.connect(CliMain.java:70)
            at org.apache.cassandra.cli.CliMain.main(CliMain.java:246)
        Exception connecting to 192.168.0.10/9160. Reason: Operation timed out.
        Welcome to Cassandra CLI version 1.2.0-beta3
        Type 'help;' or '?' for help.
        Type 'quit;' or 'exit;' to quit.

    There is a router on the network, but these ports have been triggered: 1024, 7000, 7001, 7199, 9160. The same ports were forwarded to 192.168.0.10 (where Cassandra is hosted). The Cassandra version is 1.0.7, and these are the settings I think I need to change in cassandra.yaml:

        listen_address: 192.168.0.10
        rpc_address:

    I'm not really sure if I've missed any steps. Any help would be appreciated.
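
    The usual cause of exactly this symptom (local connections work, remote ones time out) is rpc_address left empty or bound to localhost, so the Thrift port never listens on the LAN interface. A sketch of the two settings involved (addresses taken from the question):

        # cassandra.yaml
        listen_address: 192.168.0.10   # inter-node traffic (ports 7000/7001)
        rpc_address: 0.0.0.0           # Thrift clients on port 9160, all interfaces

    Cassandra needs a restart after the change, and for the CLI only port 9160 has to be reachable; the other forwarded ports matter for node-to-node and JMX traffic, not for clients.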
