Search Results

Search found 11051 results on 443 pages for 'bind variables'.

Page 243/443 | < Previous Page | 239 240 241 242 243 244 245 246 247 248 249 250  | Next Page >

  • How do I enable write access for an sFTP only user under Ubuntu?

    - by Jon Cage
    I'm running Ubuntu 12.04 and am trying to configure a user to allow chroot'd sFTP connections to another section of the filesystem. I've added the following to my /etc/ssh/sshd_config file: Match Group mygroup X11Forwarding no AllowTcpForwarding no ForceCommand internal-sftp ChrootDirectory /home/%u I've set their home directory so that it's owned by root but has their group. I've created a mount --bind from /home/myuser/transfers to /my/filesystem which appears to be navigable. The problem I'm having is that I'm not able to write to any part of the filesystem which makes this pretty useless as an FTP server. What am I missing? What can I check?
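
    A note on why writes fail, with a hedged sketch of the usual fix: sshd insists that the ChrootDirectory itself (and every parent of it) is owned by root and not group- or world-writable, so the user can never write at the top of the chroot. The standard pattern is to keep /home/<user> root-owned and give the user a writable directory inside it, then bind-mount the real target onto that directory. The commands below are a sketch using the names from the question:

        # keep the chroot itself root-owned and non-writable
        chown root:mygroup /home/myuser
        chmod 750 /home/myuser

        # create a writable directory *inside* the chroot, owned by the user,
        # and bind-mount the real filesystem onto it
        mkdir -p /home/myuser/transfers
        chown myuser:mygroup /home/myuser/transfers
        mount --bind /my/filesystem /home/myuser/transfers

        service ssh restart

    With that layout the Match block from the question can stay as-is; the user lands in a read-only / and can write only under /transfers.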

    Read the article

  • nginx dynamic servername with regular expression doesn't work for co.uk

    - by redn0x
    I'm trying to set up an nginx server which dynamically loads content from a folder for a domain. To do this I'm using regular expressions in the server name like so: server_name ((?<subdomain>.+)\.)?(?<domain>.+)\.(?<tld>.*); This creates 3 variables for nginx to use later on; for example, the url test.foo.example.com evaluates to: $subdomain = test.foo $domain = example $tld = com The problem arises when the co.uk top-level domain is used. In this case the url test.foo.example.co.uk evaluates to: $subdomain = test.foo.example $domain = co $tld = uk How can I edit the regular expression so that it will also work for co.uk?
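
    A hedged sketch of one way around this: the greedy domain group will always leave only the last label for the tld, so two-part public suffixes have to be spelled out in the tld alternation (and the expression anchored). The list below is illustrative and would need to contain every two-part suffix the server is expected to handle:

        server_name ~^((?<subdomain>.+)\.)?(?<domain>[^.]+)\.(?<tld>co\.uk|org\.uk|com|net|org)$;

    With that pattern test.foo.example.co.uk gives $subdomain = test.foo, $domain = example, $tld = co.uk, while test.foo.example.com still gives $domain = example, $tld = com.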

    Read the article

  • HAProxy with 'backup' failure

    - by A.RG
    I have a simple configuration of 2 MySQL servers being load balanced by HAProxy. For an unfortunate reason I need to use them in Active/Passive mode. So I thought I'd configure one DB as 'backup' and go to sleep. But I was wrong. Whenever I add 'backup' to the server line, HAProxy throws a communication link error (essentially saying 'no DB available'); without the 'backup' it works great. It just doesn't consider that server as a valid option any more... I have tried this configuration: listen mysql 10.0.0.109:3307 mode tcp balance roundrobin option httpchk server db01 10.0.0.236:3306 server db02 10.0.0.68:3306 backup and also this configuration: frontend mysql_proxy bind 10.0.0.109:3307 default_backend mysql backend mysql mode tcp balance roundrobin option httpchk server db01 10.0.0.236:3306 server db02 10.0.0.68:3306 backup Nothing worked! Can anyone point me in the right direction? Thanks
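
    For what it's worth, a commonly used shape for an active/passive MySQL pair puts an explicit check on each server line and uses a MySQL-aware health check instead of option httpchk (an HTTP probe against port 3306 returns nothing HAProxy can interpret). This is a sketch, not a tested drop-in; the haproxy_check user is an assumption and would have to exist on both databases:

        listen mysql
            bind 10.0.0.109:3307
            mode tcp
            balance roundrobin
            option mysql-check user haproxy_check
            server db01 10.0.0.236:3306 check
            server db02 10.0.0.68:3306  check backup

    Without the check keyword HAProxy has no health information at all, so marking db02 as backup leaves it with nothing sensible to fail over on.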

    Read the article

  • jQuery::Scrollable Div does not work

    - by Legend
    I am trying to create a scrollable DIV using the following: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html> <head> <title>Test</title> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/> <script type="text/javascript" src="lib/jquery/jquery-1.3.2.js"></script> <style type="text/css"> div.container { overflow:hidden; width:200px; height:200px; } div.content { position:relative; top:0; } </style> <script type="text/javascript"> $(function(){ $(".container a.up").bind("click", function(){ var topVal = $(this).parents(".container").find(".content").css("top"); $(this).parents(".container").find(".content").css("top", topVal-10); }); $(".container a.dn").bind("click", function(){ var topVal = $(this).parents(".container").find(".content").css("top"); $(this).parents(".container").find(".content").css("top", topVal+10); }); }); </script> </head> <body> <div class="container"> <p> <a href="#" class="up">Up</a> / <a href="#" class="dn">Down</a> </p> <div class="content"> <p>Hello World 1</p> <p>Hello World 2</p> <p>Hello World 3</p> <p>Hello World 4</p> <p>Hello World 5</p> <p>Hello World 6</p> <p>Hello World 7</p> <p>Hello World 8</p> <p>Hello World 9</p> <p>Hello World 10</p> <p>Hello World 10</p> <p>Hello World 11</p> <p>Hello World 12</p> <p>Hello World 13</p> <p>Hello World 14</p> <p>Hello World 15</p> <p>Hello World 16</p> <p>Hello World 17</p> <p>Hello World 18</p> <p>Hello World 19</p> <p>Hello World 20</p> <p>Hello World 21</p> <p>Hello World 22</p> <p>Hello World 23</p> <p>Hello World 24</p> <p>Hello World 25</p> <p>Hello World 26</p> <p>Hello World 27</p> </div> </div> </body> </html> I don't know where I am messing up, but it simply refuses to work. Any suggestions?
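
    A likely culprit, offered as a sketch rather than a certain diagnosis: .css("top") returns a string such as "0px", so topVal - 10 evaluates to NaN and topVal + 10 to the string "0px10", and neither moves the content. Parsing the value to a number first is usually enough:

        $(function () {
            // read the current "top" as a number; "auto"/"" fall back to 0
            function currentTop($content) {
                return parseInt($content.css("top"), 10) || 0;
            }
            $(".container a.up").bind("click", function (e) {
                e.preventDefault();
                var $content = $(this).parents(".container").find(".content");
                $content.css("top", (currentTop($content) - 10) + "px");
            });
            $(".container a.dn").bind("click", function (e) {
                e.preventDefault();
                var $content = $(this).parents(".container").find(".content");
                $content.css("top", (currentTop($content) + 10) + "px");
            });
        });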

    Read the article

  • Does Dynamic DNS require separate subdomains?

    - by kce
    Hello. I have a functioning DHCP/DNS (ISC Bind 9.6, DHCP 3.1.1) server running on Debian that I would like to add DynamicDNS functionality to. I have a pretty simple question: Does DynamicDNS require (or recommend) separate sub-domains? I have seen a few tutorials where the clients that acquire their IP addresses and other networking information via DHCP are on a different sub-domain from the servers, which are statically configured (both in terms of IP and DNS). For example: all the clients are on ws.example.org and the servers on example.org. Right now all of our servers and clients are in the same domain (example.org) but spread across different zone files (because we have multiple subnets). The clients are configured with DHCP and the servers are configured statically. If I want to set up DynamicDNS for the clients, should I use a separate sub-domain? What's the best practice here (and why or why not would it be a bad idea to do otherwise)? Thanks.
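
    For reference, a separate sub-domain is not required, but delegating the DHCP clients to their own zone (ws.example.org) keeps dynamic updates out of the hand-maintained zone files for the statically configured servers, which is the main reason the tutorials do it. A sketch of the relevant pieces, with the key name and file path invented for illustration (the TSIG key itself would be generated with dnssec-keygen and declared in both daemons):

        # named.conf.local (sketch)
        zone "ws.example.org" {
            type master;
            file "/var/cache/bind/db.ws.example.org";
            allow-update { key "dhcp-update-key"; };
        };

        # dhcpd.conf (sketch)
        ddns-update-style interim;
        zone ws.example.org. {
            primary 127.0.0.1;
            key dhcp-update-key;
        }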

    Read the article

  • Incremental PCA

    - by smichak
    Hi, Lately, I've been looking into an implementation of an incremental PCA algorithm in python - I couldn't find something that would meet my needs so I did some reading and implemented an algorithm I found in some paper. Here is the module's code - the relevant paper on which it is based is mentioned in the module's documentation. I would appreciate any feedback from people who are interested in this. Micha #!/usr/bin/env python """ Incremental PCA calculation module. Based on P.Hall, D. Marshall and R. Martin "Incremental Eigenalysis for Classification" which appeared in British Machine Vision Conference, volume 1, pages 286-295, September 1998. Principal components are updated sequentially as new observations are introduced. Each new observation (x) is projected on the eigenspace spanned by the current principal components (U) and the residual vector (r = x - U(U.T*x)) is used as a new principal component (U' = [U r]). The new principal components are then rotated by a rotation matrix (R) whose columns are the eigenvectors of the transformed covariance matrix (D=U'.T*C*U) to yield p + 1 principal components. From those, only the first p are selected. """ __author__ = "Micha Kalfon" import numpy as np _ZERO_THRESHOLD = 1e-9 # Everything below this is zero class IPCA(object): """Incremental PCA calculation object. General Parameters: m - Number of variables per observation n - Number of observations p - Dimension to which the data should be reduced """ def __init__(self, m, p): """Creates an incremental PCA object for m-dimensional observations in order to reduce them to a p-dimensional subspace. @param m: Number of variables per observation. @param p: Number of principle components. @return: An IPCA object. """ self._m = float(m) self._n = 0.0 self._p = float(p) self._mean = np.matrix(np.zeros((m , 1), dtype=np.float64)) self._covariance = np.matrix(np.zeros((m, m), dtype=np.float64)) self._eigenvectors = np.matrix(np.zeros((m, p), dtype=np.float64)) self._eigenvalues = np.matrix(np.zeros((1, p), dtype=np.float64)) def update(self, x): """Updates with a new observation vector x. @param x: Next observation as a column vector (m x 1). """ m = self._m n = self._n p = self._p mean = self._mean C = self._covariance U = self._eigenvectors E = self._eigenvalues if type(x) is not np.matrix or x.shape != (m, 1): raise TypeError('Input is not a matrix (%d, 1)' % int(m)) # Update covariance matrix and mean vector and centralize input around # new mean oldmean = mean mean = (n*mean + x) / (n + 1.0) C = (n*C + x*x.T + n*oldmean*oldmean.T - (n+1)*mean*mean.T) / (n + 1.0) x -= mean # Project new input on current p-dimensional subspace and calculate # the normalized residual vector g = U.T*x r = x - (U*g) r = (r / np.linalg.norm(r)) if not _is_zero(r) else np.zeros_like(r) # Extend the transformation matrix with the residual vector and find # the rotation matrix by solving the eigenproblem DR=RE U = np.concatenate((U, r), 1) D = U.T*C*U (E, R) = np.linalg.eigh(D) # Sort eigenvalues and eigenvectors from largest to smallest to get the # rotation matrix R sorter = list(reversed(E.argsort(0))) E = E[sorter] R = R[:,sorter] # Apply the rotation matrix U = U*R # Select only p largest eigenvectors and values and update state self._n += 1.0 self._mean = mean self._covariance = C self._eigenvectors = U[:, 0:p] self._eigenvalues = E[0:p] @property def components(self): """Returns a matrix with the current principal components as columns. 
""" return self._eigenvectors @property def variances(self): """Returns a list with the appropriate variance along each principal component. """ return self._eigenvalues def _is_zero(x): """Return a boolean indicating whether the given vector is a zero vector up to a threshold. """ return np.fabs(x).min() < _ZERO_THRESHOLD if __name__ == '__main__': import sys def pca_svd(X): X = X - X.mean(0).repeat(X.shape[0], 0) [_, _, V] = np.linalg.svd(X) return V N = 1000 obs = np.matrix([np.random.normal(size=10) for _ in xrange(N)]) V = pca_svd(obs) print V[0:2] pca = IPCA(obs.shape[1], 2) for i in xrange(obs.shape[0]): x = obs[i,:].transpose() pca.update(x) U = pca.components print U

    Read the article

  • Prevent IIS7 HTTPS from binding to all SSL IP addresses

    - by robpaveza
    I've had this interesting problem with IIS7. I have a number of HTTPS sites in IIS7. That hasn't been a problem, until I wanted to go and set up VisualSVN Server using an SSL certificate. The installer had trouble starting the service. When I looked in the event log, the error was that "the file is already in use by another process." I figured that the "file" was really a socket, and checked with netstat - even though IIS was only bound to three specific IP addresses (.160, .156, and .168) with port 443, it was consuming *:443. I could stop the World Wide Web Publishing Service, start VisualSVN, and then start IIS, but then none of my SSL servers would start. Any helpful hints about how I could make IIS not try to default-bind to *:443? Thanks!!
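
    One avenue worth checking, sketched below: HTTP.sys, which owns the HTTPS sockets on behalf of IIS, listens on every address unless it is given an explicit IP listen list, so adding only the three IIS addresses to that list should stop it from grabbing *:443. The addresses here are placeholders for the .160/.156/.168 ones from the question:

        netsh http show iplisten
        netsh http add iplisten ipaddress=x.x.x.160
        netsh http add iplisten ipaddress=x.x.x.156
        netsh http add iplisten ipaddress=x.x.x.168
        net stop http /y
        net start w3svc

    After that, VisualSVN should be able to bind its own address on 443.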

    Read the article

  • apache2 installation conflicts with previous lampp content

    - by Fruit
    Hej, I downloaded xampp-linux-1.7.3a.tar.gz and installed it manually, not with aptitude. That was a mistake, as it turned out: due to some configuration issues I decided to remove it and install Apache via aptitude install apache2. Now the previous lampp conflicts with the new apache2 and I get this error message: Starting web server apache2 apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName (98)Address already in use: make_sock: could not bind to address 0.0.0.0:80 no listening sockets available, shutting down How can I delete the existing lampp files entirely, or how can I configure apache2 to work as intended? Any help will be appreciated.
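
    "Address already in use" on 0.0.0.0:80 means something is still listening there, which in this situation is almost always the old XAMPP Apache started from /opt/lampp. A hedged sketch of the cleanup (paths assume the default XAMPP install location):

        sudo /opt/lampp/lampp stop          # stop the old XAMPP stack if it is running
        sudo netstat -tlnp | grep ':80 '    # confirm nothing else holds port 80
        sudo rm -rf /opt/lampp              # XAMPP lives entirely under /opt/lampp
        sudo /etc/init.d/apache2 start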

    Read the article

  • Chrome Java Plugin not working after moving Java installation directory

    - by Florian Peschka
    I recently moved all my Java installations from C:\Program Files\Java to D:\Java, as my C:\ partition only had a few MB of free space left and Windows kept on bugging me about it. I also edited every registry entry in HKEY_LOCAL_MACHINE\SOFTWARE\JavaSoft that pointed to the old directory to now point to the new one. Then I changed the PATH variables and removed the old files, did a restart, and all Java applications work just fine. Yet Chrome is the only application that seems to have a problem with what I did, as the Java Plugin isn't recognized any more. Are there any registry entries I may have missed? Maybe in the Chrome registry? How can I make this work again?

    Read the article

  • opengl, Black lines in-between tiles

    - by MiJyn
    When its translated in an integral value (1,2,3, etc....), there are no black lines in-between the tiles, it looks fine. But when it's translated to a non-integral (1.1, 1.5, 1.67), there are small blackish lines between each tile (I'm imagining that it's due to subpixel rendering, right?) ... and it doesn't look pretty =P So... what should I do? This is my image-loading code, by the way: bool Image::load_opengl() { this->id = 0; glGenTextures(1, &this->id); this->bind(); // Parameters... TODO: Should we change this? glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, this->size.x, this->size.y, 0, GL_BGRA, GL_UNSIGNED_BYTE, (void*) FreeImage_GetBits(this->data)); this->unbind(); return true; } I've also tried using: glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP); and: glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); Here is my image drawing code: void Image::draw(Pos pos, CROP crop, SCALE scale) { if (!this->loaded || this->id == 0) { return; } // Start position & size Pos s_p; Pos s_s; // End size Pos e_s; if (crop.active) { s_p = crop.pos / this->size; s_s = crop.size / this->size; //debug("%f %f", s_s.x, s_s.y); s_s = s_s + s_p; s_s.clamp(1); //debug("%f %f", s_s.x, s_s.y); } else { s_s = 1; } if (scale.active) { e_s = scale.size; } else if (crop.active) { e_s = crop.size; } else { e_s = this->size; } // FIXME: Is this okay? s_p.y = 1 - s_p.y; s_s.y = 1 - s_s.y; // TODO: Make this use VAO/VBO's!! glPushMatrix(); glTranslate(pos.x, pos.y, 0); this->bind(); glBegin(GL_QUADS); glTexCoord2(s_p.x, s_p.y); glVertex2(0, 0); glTexCoord2(s_s.x, s_p.y); glVertex2(e_s.x, 0); glTexCoord2(s_s.x, s_s.y); glVertex2(e_s.x, e_s.y); glTexCoord2(s_p.x, s_s.y); glVertex2(0, e_s.y); glEnd(); this->unbind(); glPopMatrix(); } OpenGL Initialization code: void game__gl_init() { glMatrixMode(GL_PROJECTION); glLoadIdentity(); gluOrtho2D(0.0, config.window.size.x, config.window.size.y, 0.0); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glDisable(GL_DEPTH_TEST); glEnable(GL_BLEND); glEnable(GL_TEXTURE_2D); glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); } Screenshots of the issue:
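
    For reference, seams at fractional translations usually come from GL_LINEAR sampling pulling in texels from beyond the tile edge once the quad no longer lands on pixel boundaries. Two mitigations that are commonly suggested (sketch only; if the tiles live in a shared atlas, the real cure is to pad each tile with a border of duplicated pixels):

        /* sketch: nearest-neighbour filtering stops the sampler blending
           across tile borders for discrete tile art */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

        /* sketch: or keep GL_LINEAR but snap the translation to whole pixels
           before drawing the tile layer (floorf comes from <math.h>) */
        glTranslatef(floorf(pos.x + 0.5f), floorf(pos.y + 0.5f), 0.0f);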

    Read the article

  • SSH as root using public key still prompts for password on RHEL 6.1

    - by Dean Schulze
    I've generated rsa keys with cygwin ssh-keygen and copied them to the server with ssh-copy-id -i id_rsa.pub [email protected] I've got the following settings in my /etc/ssh/sshd_config file RSAAuthentication yes PubkeyAuthentication yes AuthorizedKeysFile .ssh/authorized_keys PermitRootLogin yes When I ssh [email protected] it still prompts for a password. The output below from /usr/sbin/sshd -d says that a matching keys was found in the .ssh/authorized_keys file, but it still requires a password from the client. I've read a bunch of web postings about permissions on files and directories, but nothing works. Is it possible to ssh with keys in RHEL 6.1 or is this forbidden? The debug output from ssh and sshd is below. $ ssh -v [email protected] OpenSSH_6.1p1, OpenSSL 1.0.1c 10 May 2012 debug1: Connecting to my.ip.address [my.ip.address] port 22. debug1: Connection established. debug1: identity file /home/dschulze/.ssh/id_rsa type 1 debug1: identity file /home/dschulze/.ssh/id_rsa-cert type -1 debug1: identity file /home/dschulze/.ssh/id_dsa type 2 debug1: identity file /home/dschulze/.ssh/id_dsa-cert type -1 debug1: identity file /home/dschulze/.ssh/id_ecdsa type -1 debug1: identity file /home/dschulze/.ssh/id_ecdsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3 debug1: match: OpenSSH_5.3 pat OpenSSH_5* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_6.1 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: RSA 9f:00:e0:1e:a2:cd:05:53:c8:21:d5:69:25:80:39:92 debug1: Host 'my.ip.address' is known and matches the RSA host key. debug1: Found key in /home/dschulze/.ssh/known_hosts:3 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/dschulze/.ssh/id_rsa debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password debug1: Offering DSA public key: /home/dschulze/.ssh/id_dsa debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password debug1: Trying private key: /home/dschulze/.ssh/id_ecdsa debug1: Next authentication method: password Here is the server output from /usr/sbin/sshd -d [root@ga2-lab .ssh]# /usr/sbin/sshd -d debug1: sshd version OpenSSH_5.3p1 debug1: read PEM private key done: type RSA debug1: private host key: #0 type 1 RSA debug1: read PEM private key done: type DSA debug1: private host key: #1 type 2 DSA debug1: rexec_argv[0]='/usr/sbin/sshd' debug1: rexec_argv[1]='-d' debug1: Bind to port 22 on 0.0.0.0. Server listening on 0.0.0.0 port 22. debug1: Bind to port 22 on ::. Server listening on :: port 22. debug1: Server will not fork when running in debugging mode. 
debug1: rexec start in 5 out 5 newsock 5 pipe -1 sock 8 debug1: inetd sockets after dupping: 3, 3 Connection from 172.60.254.24 port 53401 debug1: Client protocol version 2.0; client software version OpenSSH_6.1 debug1: match: OpenSSH_6.1 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.3 debug1: permanently_set_uid: 74/74 debug1: list_hostkey_types: ssh-rsa,ssh-dss debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: client->server aes128-ctr hmac-md5 none debug1: kex: server->client aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST received debug1: SSH2_MSG_KEX_DH_GEX_GROUP sent debug1: expecting SSH2_MSG_KEX_DH_GEX_INIT debug1: SSH2_MSG_KEX_DH_GEX_REPLY sent debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: KEX done debug1: userauth-request for user root service ssh-connection method none debug1: attempt 0 failures 0 debug1: PAM: initializing for "root" debug1: userauth-request for user root service ssh-connection method publickey debug1: attempt 1 failures 0 debug1: test whether pkalg/pkblob are acceptable debug1: PAM: setting PAM_RHOST to "172.60.254.24" debug1: PAM: setting PAM_TTY to "ssh" debug1: temporarily_use_uid: 0/0 (e=0/0) debug1: trying public key file /root/.ssh/authorized_keys debug1: fd 4 clearing O_NONBLOCK debug1: matching key found: file /root/.ssh/authorized_keys, line 1 Found matching RSA key: db:b3:b9:b1:c9:df:6d:e1:03:5b:57:d3:d9:c4:4e:5c debug1: restore_uid: 0/0 Postponed publickey for root from 172.60.254.24 port 53401 ssh2 debug1: userauth-request for user root service ssh-connection method publickey debug1: attempt 2 failures 0 debug1: temporarily_use_uid: 0/0 (e=0/0) debug1: trying public key file /root/.ssh/authorized_keys debug1: fd 4 clearing O_NONBLOCK debug1: matching key found: file /root/.ssh/authorized_keys, line 1 Found matching RSA key: db:b3:b9:b1:c9:df:6d:e1:03:5b:57:d3:d9:c4:4e:5c debug1: restore_uid: 0/0 debug1: ssh_rsa_verify: signature correct debug1: do_pam_account: called Accepted publickey for root from 172.60.254.24 port 53401 ssh2 debug1: monitor_child_preauth: root has been authenticated by privileged process debug1: temporarily_use_uid: 0/0 (e=0/0) debug1: ssh_gssapi_storecreds: Not a GSSAPI mechanism debug1: restore_uid: 0/0 debug1: SELinux support enabled debug1: PAM: establishing credentials PAM: pam_open_session(): Authentication failure debug1: Entering interactive session for SSH2. debug1: server_init_dispatch_20 debug1: server_input_channel_open: ctype session rchan 0 win 1048576 max 16384 debug1: input_session_request debug1: channel 0: new [server-session] debug1: session_new: session 0 debug1: session_open: channel 0 debug1: session_open: session 0: link with channel 0 debug1: server_input_channel_open: confirm session debug1: server_input_global_request: rtype [email protected] want_reply 0 debug1: server_input_channel_req: channel 0 request pty-req reply 1 debug1: session_by_channel: session 0 channel 0 debug1: session_input_channel_req: session 0 req pty-req debug1: Allocating pty. debug1: session_pty_req: session 0 alloc /dev/pts/1 ssh_selinux_setup_pty: security_compute_relabel: Invalid argument debug1: server_input_channel_req: channel 0 request shell reply 1 debug1: session_by_channel: session 0 channel 0 debug1: session_input_channel_req: session 0 req shell debug1: Setting controlling tty using TIOCSCTTY. debug1: Received SIGCHLD. 
debug1: session_by_pid: pid 17323 debug1: session_exit_message: session 0 channel 0 pid 17323 debug1: session_exit_message: release channel 0 debug1: session_pty_cleanup: session 0 release /dev/pts/1 debug1: session_by_channel: session 0 channel 0 debug1: session_close_by_channel: channel 0 child 0 debug1: session_close: session 0 pid 0 debug1: channel 0: free: server-session, nchannels 1 Received disconnect from 172.60.254.24: 11: disconnected by user debug1: do_cleanup debug1: PAM: cleanup debug1: PAM: deleting credentials
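
    One detail worth pulling out of the logs, plus a hedged sketch of the usual remedy: the server-side debug actually shows "Accepted publickey for root", and the failure that follows is "PAM: pam_open_session(): Authentication failure" together with an SELinux error on the pty, which points at file contexts/permissions rather than the key itself. The checks below are a sketch for root's home on RHEL 6:

        chmod 700 /root/.ssh
        chmod 600 /root/.ssh/authorized_keys
        restorecon -R -v /root/.ssh     # re-label files copied in by ssh-copy-id
        getenforce                      # if Enforcing, `setenforce 0` briefly to confirm SELinux is involved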

    Read the article

  • Bug in CF9: values for unique struct keys referenced and overwritten by other keys.

    - by Gin Doe
    We've run into a serious issue with CF9 wherein values for certain struct keys can be referenced by other keys, despite those other keys never being set. See the following examples: Edit: Looks like it isn't just something our servers ate. This is Adobe bug-track ticket 81884: http://cfbugs.adobe.com/cfbugreport/flexbugui/cfbugtracker/main.html#bugId=81884. <cfset a = { AO = "foo" } /> <cfset b = { AO = "foo", B0 = "bar" } /> <cfoutput> The following should throw an error. Instead both keys refer to the same value. <br />Struct a: <cfdump var="#a#" /> <br />a.AO: #a.AO# <br />a.B0: #a.B0# <hr /> The following should show a struct with 2 distinct keys and values. Instead it contains a single key, "AO", with a value of "bar". <br />Struct b: <cfdump var="#b#" /> This is obviously a complete show-stopper for us. I'd be curious to know if anyone has encountered this or can reproduce this in their environment. For us, it happens 100% of the time on Apache/CF9 running on Linux, both RH4 and RH5. We're using the default JRun install on Java 1.6.0_14. To see the extent of the problem, we ran a quick loop to find other naming sequences that are affected and found hundreds of matches for 2 letter key names. A similar loop found more conflicts in 3 letter names. <cfoutput>Testing a range of affected key combinations. This found hundreds of cases on our platform. Aborting after 50 here.</cfoutput> <cfscript> teststring = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"; stringlen = len(teststring); matchesfound = 0; matches = ""; for (i1 = 1; i1 <= stringlen; i1++) { symbol1 = mid(teststring, i1, 1); for (i2 = 1; i2 <= stringlen; i2++) { teststruct = structnew(); symbol2 = mid(teststring, i2, 1); symbolwhole = symbol1 & symbol2; teststruct[ symbolwhole ] = "a string"; for (q1 = 1; q1 <= stringlen; q1++) { innersymbol1 = mid(teststring, q1, 1); for (q2 = 1; q2 <= stringlen; q2++) { innersymbol2 = mid(teststring, q2, 1); innersymbolwhole = innersymbol1 & innersymbol2; if ((i1 != q1 || i2 != q2) && structkeyexists(teststruct, innersymbolwhole)) { // another affected pair of keys! writeoutput ("<br />#symbolwhole# = #innersymbolwhole#"); if (matchesfound++ > 50) { // we've seen enough abort; } } } } } } </cfscript> And edit again: This doesn't just affect struct keys but names in the variables scope as well. At least the variables scope has the presence of mind to throw an error, "can't load a null": <cfset test_b0 = "foo" /> <cfset test_ao = "bar" /> <cfoutput> test_b0: #test_b0# <br />test_ao: #test_ao# </cfoutput>

    Read the article

  • Mismatched coordinate systems in LWJGL with Mouse and Textures

    - by Braains
    I'm not really sure how to expand on this other than to say that it appears that my LWJGL seems to have different coordinate systems for the Mouse and for painting Textures. It seems that Textures have the usual Java2D way of putting (0, 0) in the upper-left corner, while the Mouse goes by the more sensible way of having the origin in the lower-left corner. I've checked my code a bunch but I don't see anything modifying the values between where I read them and where I use them. It's thrown me for a loop and I can't quite figure it out. I'll post all the code that involves the Mouse input and Texture painting for you guys to look at. private static void pollHelpers() { while(Mouse.next()) { InputHelper.acceptMouseInput(Mouse.getEventButton(), Mouse.getEventX(), Mouse.getEventY()); } while (Keyboard.next()) { if (Keyboard.getEventKeyState()) { InputHelper.acceptKeyboardInput(Keyboard.getEventKey(), true); } else { InputHelper.acceptKeyboardInput(Keyboard.getEventKey(), false); } } } public static void acceptMouseInput(int mouseButton, int x, int y) { for(InputHelper ie: InputHelper.instances) { if(ie.checkRectangle(x, y)) { ie.sendMouseInputToParent(mouseButton); } } } private void sendMouseInputToParent(int mouseButton) { parent.onClick(mouseButton); } public boolean checkRectangle(int x, int y) { //y = InputManager.HEIGHT - y; See below for explanation return x > parent.getX() && x < parent.getX() + parent.getWidth() && y > parent.getY() && y < parent.getY() + parent.getHeight(); } I put this line of code in because it temporarily fixed the coordinate system problem. However, I want to have my code to be as independent as possible so I want to remove as much reliance on other classes as possible. These are the only methods that touch the Mouse input, and as far as I can tell none of them change anything so all is good here. Now for the Texture methods: public void draw() { if(!clicked || deactivatedImage == null) { activatedImage.bind(); glBegin(GL_QUADS); { DrawHelper.drawGLQuad(activatedImage, x, y, width, height); } glEnd(); } else { deactivatedImage.bind(); glBegin(GL_QUADS); { DrawHelper.drawGLQuad(deactivatedImage, x, y, width, height); } glEnd(); } } public static void drawGLQuad(Texture texture, float x, float y, float width, float height) { glTexCoord2f(x, y); glVertex2f(x, y); glTexCoord2f(x, y + texture.getHeight()); glVertex2f(x, y + height); glTexCoord2f(x + texture.getWidth(), y + texture.getHeight()); glVertex2f(x + width, y +height); glTexCoord2f(x + texture.getWidth(), y); glVertex2f(x + width, y); } I'll be honest, I do not have the faintest clue as to what is really happening in this method but I was able to put it together on my own and get it to work. my guess is that this is where the problem is. Thanks for any help!
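
    For reference, the mismatch is expected: gluOrtho2D(0, width, height, 0) puts the origin at the top-left for everything that is drawn, while LWJGL's Mouse reports coordinates from the bottom-left. Converting once, at the input boundary, keeps the rest of the code in a single coordinate system (sketch, assuming LWJGL 2's Display class is on hand):

        // sketch: flip the mouse Y into the top-left system used for drawing
        int x = Mouse.getEventX();
        int y = Display.getHeight() - Mouse.getEventY() - 1;
        InputHelper.acceptMouseInput(Mouse.getEventButton(), x, y);

    That makes the commented-out "y = InputManager.HEIGHT - y" line in checkRectangle unnecessary, so checkRectangle no longer needs to know about any other class.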

    Read the article

  • Linux Bridge, Samba netbios name/hostname access

    - by Christopher Wilson
    I am currently running a Linux bridge in the following configuration: ADSL Modem: 192.168.1.1 Linux Bridge: eth0: 192.168.1.2 eth1: no address Wireless Router: 192.168.0.1 My issue is that I cannot access the "Linux Bridge" shares using the WINS name of the server from client systems (yes, I understand it is a transparent bridge, but I can access it via the 192.168.1.2 address, which is not on the same subnet as the client systems). This is the global section of my SMB.CONF: [global] unix extensions = off os level = 20 netbios name = server guest account = nobody server string = 447 Server security = share #unix extensions = no #wins support = yes #wins server = 192.168.0.1 name resolve order = wins lmhosts hosts bcast interfaces bridge1 eth0 eth1 lo bind interfaces only = yes Can I access a bridged server using its WINS name to reach the Samba shares? Cheers Chris
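
    For context, NetBIOS name lookups are broadcast-based by default, and broadcasts do not cross from the 192.168.0.x client network to the bridge's 192.168.1.x side, so the name "server" is never resolved even though the IP works. Two hedged options, sketched with the addresses from the question:

        # option 1: make one box the WINS server (smb.conf on the bridge)
        [global]
            wins support = yes
        # and point the clients (or their DHCP scope) at 192.168.1.2 as WINS server

        # option 2: a static mapping in each Windows client's LMHOSTS file
        192.168.1.2    SERVER    #PRE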

    Read the article

  • Receiving POST data in ASP.NET

    - by grast
    Hi, I want to use ASP for code generation in a C# desktop application. To achieve this, I set up a simple host (derived from System.MarshalByRefObject) that processes a System.Web.Hosting.SimpleWorkerRequest via HttpRuntime.ProcessRequest. This processes the ASPX script specified by the incoming request (using System.Net.HttpListener to wait for requests). The client-part is represented by a System.ComponentModel.BackgroundWorker that builds the System.Net.HttpWebRequest and receives the response from the server. A simplified version of my client-part-code looks like this: private void SendRequest(object sender, DoWorkEventArgs e) { // create request with GET parameter var uri = "http://localhost:9876/test.aspx?getTest=321"; var request = (HttpWebRequest)WebRequest.Create(uri); // append POST parameter request.Method = "POST"; request.ContentType = "application/x-www-form-urlencoded"; var postData = Encoding.Default.GetBytes("postTest=654"); var postDataStream = request.GetRequestStream(); postDataStream.Write(postData, 0, postData.Length); // send request, wait for response and store/print content using (var response = (HttpWebResponse)request.GetResponse()) { using (var reader = new StreamReader(response.GetResponseStream(), Encoding.UTF8)) { _processsedContent = reader.ReadToEnd(); Debug.Print(_processsedContent); } } } My server-part-code looks like this (without exception-handling etc.): public void ProcessRequests() { // HttpListener at http://localhost:9876/ var listener = SetupListener(); // SimpleHost created by ApplicationHost.CreateApplicationHost var host = SetupHost(); while (_running) { var context = listener.GetContext(); using (var writer = new StreamWriter(context.Response.OutputStream)) { // process ASP script and send response back to client host.ProcessRequest(GetPage(context), GetQuery(context), writer); } context.Response.Close(); } } So far all this works fine as long as I just use GET parameters. But when it comes to receiving POST data in my ASPX script I run into trouble. For testing I use the following script: // GET parameters are working: var getTest = Request.QueryString["getTest"]; Response.Write("getTest: " + getTest); // prints "getTest: 321" // don't know how to access POST parameters: var postTest1 = Request.Form["postTest"]; // Request.Form is empty?! Response.Write("postTest1: " + postTest1); // so this prints "postTest1: " var postTest2 = Request.Params["postTest"]; // Request.Params is empty?! Response.Write("postTest2: " + postTest2); // so this prints "postTest2: " It seems that the System.Web.HttpRequest object I'm dealing with in ASP does not contain any information about my POST parameter "postTest". I inspected it in debug mode and none of the members did contain neither the parameter-name "postTest" nor the parameter-value "654". I also tried the BinaryRead method of Request, but unfortunately it is empty. This corresponds to Request.InputStream==null and Request.ContentLength==0. And to make things really confusing the Request.HttpMethod member is set to "GET"?! To isolate the problem I tested the code by using a PHP script instead of the ASPX script. This is very simple: print_r($_GET); // prints all GET variables print_r($_POST); // prints all POST variables And the result is: Array ( [getTest] = 321 ) Array ( [postTest] = 654 ) So with the PHP script it works, I can access the POST data. Why does the ASPX script don't? What am I doing wrong? Is there a special accessor or method in the Response object? 
Can anyone give a hint or even know how to solve this? Thanks in advance.
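
    A plausible explanation, offered as a sketch rather than a confirmed fix: a SimpleWorkerRequest built only from a page and a query string never sees the request body or the verb, so inside the ASPX page Request.Form is empty and HttpMethod falls back to GET. One way around it is to subclass SimpleWorkerRequest and hand it the verb, content type and body read from the HttpListener request. The class below is illustrative (the name PostAwareWorkerRequest and its wiring are assumptions, not part of any framework API):

        using System.IO;
        using System.Web;
        using System.Web.Hosting;

        // sketch: a worker request that forwards the POST verb, headers and body
        public class PostAwareWorkerRequest : SimpleWorkerRequest
        {
            private readonly string _verb;
            private readonly string _contentType;
            private readonly byte[] _body;

            public PostAwareWorkerRequest(string page, string query, TextWriter output,
                                          string verb, string contentType, byte[] body)
                : base(page, query, output)
            {
                _verb = verb;
                _contentType = contentType ?? "";
                _body = body ?? new byte[0];
            }

            public override string GetHttpVerbName() { return _verb; }

            public override string GetKnownRequestHeader(int index)
            {
                if (index == HeaderContentType)   return _contentType;
                if (index == HeaderContentLength) return _body.Length.ToString();
                return base.GetKnownRequestHeader(index);
            }

            public override byte[] GetPreloadedEntityBody() { return _body; }
            public override bool IsEntireEntityBodyIsPreloaded() { return true; }
        }

    The host side would then read context.Request.InputStream into a byte array and call HttpRuntime.ProcessRequest with an instance of this class instead of a plain SimpleWorkerRequest.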

    Read the article

  • Mount /tmp in /mnt on EC2

    - by Claudio Poli
    I was wondering what is the best way to mount the /tmp endpoint in the ephemeral storage /mnt on an EC2 instance and give the ubuntu user default write permissions. Some suggest editing /etc/rc.local this way: mkdir -p /mnt/tmp && mount --bind -o nobootwait /mnt/tmp /tmp However that doesn't work for me. I tried editing the fstab: /dev/xvdb /tmp auto defaults,nobootwait,comment=cloudconfig 0 2 and giving it a umask=0777, however it doesn't work because of cloudconfig. I'm using Ubuntu 12.04.
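
    A variant that has worked for others, sketched here with no guarantee for this exact AMI: do the bind mount from rc.local (nobootwait is an fstab-only option, so passing it to mount with -o is likely part of the problem) and give the directory the sticky, world-writable mode a normal /tmp has, which also covers the ubuntu user:

        # /etc/rc.local, before "exit 0"
        mkdir -p /mnt/tmp
        chmod 1777 /mnt/tmp
        mount --bind /mnt/tmp /tmp

    The fstab alternative would be a bind entry (/mnt/tmp  /tmp  none  bind  0  0) rather than mounting /dev/xvdb directly on /tmp, since /dev/xvdb is already mounted on /mnt by cloud-init.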

    Read the article

  • IIS- defining a website as a dev site

    - by Lock
    I am new to IIS. Is there a way during the setup of IIS to have a variable of some sort set that I can use to tell my site that this is the development copy? I am using PHP via IIS 7.5 and would like to have a file with a few lines that define which databases etc is used by my application. Is this the purpose of web.config? I would love there to be a place in the setup of the website where I can set a few variables that are accessibly by my application. That way, when I migrate files to live, I don't need to worry about access details to databases etc.

    Read the article

  • Samba4 advice for production use

    - by pgb
    I have an old Samba 3 + LDAP server installed that needs to be rebuilt. I'm weighing my options: Windows Server seems too expensive at the moment, and Samba 4 appears to be a nice option, coupled with the latest Bind 9, which can dynamically add the computers to the DNS. I have about 30 workstations, so I still consider it a small network. My questions are: Is Samba 4 stable enough for production? It seems as if the Samba team is too cautious on when to call their version final, or even beta, as compared with other open source projects. What Linux distribution would you recommend to set it up? I usually use Ubuntu Server, but may use another one if installing / maintaining Samba 4 is better on it.

    Read the article

  • Can't set up IIS Web Server on Server 2008 x64 correctly (what have I missed?)

    - by balexandre
    Using a VM I installed Windows 2008 Server x64 and, as the screenshots in the original post show, added the IIS role and assigned all of its role features. But if I have an ASP.NET (aspx) page that does (C#) Session["test-session"] = "A"; and read it in another page, I always get nothing! NOTE: I do have an entire ASP.NET web application; the example above is just to be succinct and explicit about the problem I'm facing. Does anyone know what I have to do to the server so I can use Session variables? All help is greatly appreciated, thank you
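
    For what it's worth, session state is an ASP.NET concern rather than an IIS role feature, so the first things to rule out are the sessionState configuration and an application pool that recycles (or runs more than one worker process) between the two requests, since InProc sessions live inside a single worker process. A minimal, hedged web.config fragment:

        <configuration>
          <system.web>
            <!-- sketch: explicit in-process session state with cookies enabled -->
            <sessionState mode="InProc" cookieless="false" timeout="20" />
          </system.web>
        </configuration>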

    Read the article

  • Access the failing addresses within bounce_message_file in exim

    - by mkurz
    We are running exim version 4.72. I'm using bounce_message_file to customize our bounce messages, as described in the documentation. Within this bounce message template I can access various variables like $message_exim_id, etc. Is there also a way to access the list of the failing addresses (maybe with their error messages) in such a variable (as a string, a list, ... whatever) or in any other way? It does not really matter how and in which format they are; I just want to access them within the bounce message template file. (I am referring to the failed addresses usually listed automatically after the line "This is a permanent error. The following address(es) failed:") Thank you very much for your help!

    Read the article

  • Openstack keystone issue

    - by kevin
    I am having an issue with Keystone. Keystone is configured with the users nova, glance and admin, and their endpoints are also defined. When performing keystone token-get it shows a token, but for commands like keystone user-list it shows: No handlers could be found for logger "keystoneclient.client" Unable to communicate with identity service: 404 Not Found The resource could not be found. (HTTP 404) After setting these env variables it worked: export SERVICE_ENDPOINT=http://192.168.10.15:35357/v2.0 export SERVICE_TOKEN=token But after that, keystone token-get shows 'Client' object has no attribute 'service_catalog'. Why is that, and how can it be fixed? Any ideas
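
    For reference, the two styles of authentication the keystone client supports do not mix well: the SERVICE_TOKEN/SERVICE_ENDPOINT pair is the admin-token bootstrap path (fine for user-list, endpoint-create and so on), while token-get needs real user credentials, which is typically why it breaks once the service variables are exported. A hedged sketch of keeping the two sets separate (URLs match the endpoint from the question, the credentials are placeholders):

        # bootstrap/admin-token style (keystone user-list, endpoint-create, ...)
        unset OS_USERNAME OS_PASSWORD OS_TENANT_NAME OS_AUTH_URL
        export SERVICE_TOKEN=<admin_token from keystone.conf>
        export SERVICE_ENDPOINT=http://192.168.10.15:35357/v2.0

        # normal credential style (keystone token-get, ...)
        unset SERVICE_TOKEN SERVICE_ENDPOINT
        export OS_USERNAME=admin
        export OS_PASSWORD=<password>
        export OS_TENANT_NAME=admin
        export OS_AUTH_URL=http://192.168.10.15:5000/v2.0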

    Read the article

  • %HOMEPATH% Posh-Git error in Powershell, in ConEmu on Windows 7 64-bit

    - by atwright
    I am always getting the following error in Posh-Git in Powershell, in ConEmu on Windows 7 64-bit: Resolve-Path : Cannot find path 'C:\wamp\www\MobileApps\Backbone\%HOMEPATH%' because it does not exist. At D:\Users\Andy\Documents\WindowsPowerShell\Modules\posh-git\GitUtils.ps1:265 char:13 + $home = Resolve-Path (Invoke-NullCoalescing $Env:HOME ~) + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : ObjectNotFound: (C:\wamp\www\Mob...bone\%HOMEPATH%:String) [Resolve-Path], ItemNotFoundException + FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.ResolvePathComma nd Join-Path : Cannot bind argument to parameter 'Path' because it is null. At D:\Users\Andy\Documents\WindowsPowerShell\Modules\posh-git\GitUtils.ps1:266 char:29 + Resolve-Path (Join-Path $home ".ssh\$File") -ErrorAction SilentlyContinue 2> ... + ~~~~~ + CategoryInfo : InvalidData: (:) [Join-Path], ParameterBindingValidationExc eption + FullyQualifiedErrorId : ParameterArgumentValidationErrorNullNotAllowed,Microsoft.Po werShell.Commands.JoinPathCommand Can anybody advise what might be wrong?
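
    The resolved path ending in a literal %HOMEPATH% suggests $Env:HOME is set to an unexpanded string (something in the ConEmu/startup chain passing the variable through verbatim), so posh-git resolves it relative to the current directory and fails. A hedged workaround is to repair HOME at the top of the PowerShell profile, before posh-git is imported:

        # sketch: $PROFILE, before Import-Module posh-git
        if (-not $Env:HOME -or $Env:HOME -match '%') {
            $Env:HOME = $Env:USERPROFILE
        }
        Import-Module posh-git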

    Read the article

  • Socket in C: recv overwrite a char[]

    - by Possa
    Hi all, I'm trying to make a little client-server script like many others that I've done in the past. But in this one I have a problem. It is better if I post the code and the output it give me. Code: #include <mysql.h> //not important now #include <stdlib.h> #include <sys/types.h> #include <sys/socket.h> #include <netinet/in.h> #include <arpa/inet.h> #include <netdb.h> #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <signal.h> #include <string.h> //constant definition #define SERVER_PORT 2121 #define LINESIZE 21 //global var definition char victim_ip[LINESIZE], file_write[LINESIZE], hacker_ip[LINESIZE]; //function void leggi (int); //not use now for debugging purpose //void scriviDB (); //not important now main () { int sock, client_len, fd; struct sockaddr_in server, client; // transport end point if((sock = socket(AF_INET, SOCK_STREAM, 0)) == -1) { perror("system call socket fail"); exit(1); } server.sin_family = AF_INET; server.sin_addr.s_addr = inet_addr("10.10.10.1"); server.sin_port = htons(SERVER_PORT); // binding address at transport end point if (bind(sock, (struct sockaddr *)&server, sizeof server) == -1) { perror("system call bind fail"); exit(1); } //fprintf(stderr, "Server open: listening.\n"); listen(sock, 5); /* managae client connection */ while (1) { client_len = sizeof(client); if ((fd = accept(sock, (struct sockaddr *)&client, &client_len)) < 0) { perror("accepting connection"); exit(1); } strcpy(hacker_ip, inet_ntoa(client.sin_addr)); printf("1 %s\n", hacker_ip); //debugging purpose //leggi(fd); ////////////////////////// //receive client recv(fd, victim_ip, LINESIZE, 0); victim_ip[sizeof(victim_ip)] = '\0'; printf("2 %s\n", hacker_ip); //debugging purpose recv(fd, file_write, LINESIZE, 0); file_write[sizeof(file_write)] = '\0'; printf("3 %s\n", hacker_ip); //debugging purpose printf("%s@%s for %s\n", file_write, victim_ip, hacker_ip); //send to client send(fd, hacker_ip, 40, 0); //now is hacker_ip for debug ///////////////////////// close(fd); }//end while exit(0); } //end main Client send string: ./send -i 10.10.10.4 -f filename.ext so the script send -i (IP) and -f (FILE) at the server. Here's my output server side: 1 10.10.10.6 2 10.10.10.6 3 [email protected] for As you can see the printf(3) and the printf(ip,file,ip) fail. I don't know how and where but someone overwrite my hacker_ip string. Thanks for your help! :)
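
    One concrete bug visible in the listing, for whatever it is worth: victim_ip[sizeof(victim_ip)] = '\0' writes one byte past the end of a 21-byte array (valid indices are 0..20), and because the three buffers are adjacent globals that stray byte can land in the neighbouring string; recv() also does not NUL-terminate, so the terminator belongs at the number of bytes actually received. A sketch of just the receive part:

        /* sketch: terminate at the length recv() returned, never at sizeof(buf) */
        ssize_t n;

        n = recv(fd, victim_ip, sizeof(victim_ip) - 1, 0);
        if (n < 0) { perror("recv victim_ip"); close(fd); continue; }
        victim_ip[n] = '\0';

        n = recv(fd, file_write, sizeof(file_write) - 1, 0);
        if (n < 0) { perror("recv file_write"); close(fd); continue; }
        file_write[n] = '\0';

    Note also that the two client sends can arrive in a single TCP segment, so a delimiter (or a length prefix) between the IP and the filename is needed to split them reliably.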

    Read the article

  • ft_stopword_file not picked up

    - by Alex Holsgrove
    I have a VPS server with a company called Webfusion. I want to remove some or all of the FULLTEXT stopwords because some specific words need to be searchable in my DB content. I opened /etc/mysql/my.cnf and added the line ft_stopword_file="". I restarted the mysql service, ran a repair table and then tried my MATCH query with no success. I ran SHOW VARIABLES LIKE 'ft_%' and it simply shows (built-in) next to the stopword file. I am running WAMP on my workstation, and whilst I realise this isn't configured the same as a commercial VPS, the above method worked just fine there. Could someone please offer some guidance?
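
    A few things that commonly trip this up, offered as a sketch rather than a certain fix: the setting has to sit under the [mysqld] group of the my.cnf the server actually reads, some builds take an empty file path more reliably than "", and after the restart every FULLTEXT index has to be rebuilt before the change shows up (this all assumes MyISAM tables, which is where FULLTEXT lived at the time). The table and path names below are placeholders:

        # /etc/mysql/my.cnf
        [mysqld]
        ft_stopword_file = /etc/mysql/empty_stopwords.txt   # an empty file

        -- after `service mysql restart`:
        REPAIR TABLE mydb.mytable QUICK;
        SHOW VARIABLES LIKE 'ft_%';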

    Read the article

  • Ubuntu and mysql server. Something isnt allowing me to connect

    - by acidzombie24
    I have a question about MySQL settings (http://serverfault.com/questions/94054/remote-connections-and-mysql-on-ubuntu/94088#94088); now I want to figure out why I cannot connect. I made sure bind-address was commented out. I can ping the server from within the VM, but I cannot reach it from within the VM using mysqladmin --protocol=tcp --host=self_ip ping. I also followed along and checked whether my ports were open, and they look like they are. I set up Samba on that VM and can access it with no problem as well. It looks like Ubuntu does not have a firewall enabled either (I figured this out before), so I am stumped why the server isn't allowing my connection. Apparently the config file works on another person's setup: http://www.pastie.org/742545 I am using Ubuntu 6.06 LTS just because of 'support' reasons. So hopefully this will be 'easy'?
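
    One era-specific thing to check, offered purely as a hedge: some Debian/Ubuntu MySQL packages of that vintage enabled skip-networking in my.cnf, which makes mysqld ignore TCP completely and would explain why even mysqladmin --protocol=tcp against the server's own address fails while everything else on the box looks fine. A quick sketch:

        grep -nE 'skip-networking|bind-address' /etc/mysql/my.cnf
        sudo netstat -tlnp | grep 3306        # is mysqld listening on TCP at all?
        mysqladmin --protocol=socket ping     # the local socket should still answer

    If skip-networking is there, comment it out (and leave bind-address commented out or set to 0.0.0.0), restart MySQL, and make sure the MySQL account also has a grant for the remote host, e.g. 'user'@'%', since host-based grants are checked after the TCP connection succeeds.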

    Read the article
