Search Results

Search found 10383 results on 416 pages for 'exact match'.


  • Amazon EC2 pem file stopped working suddenly

    - by Jashwant
    I was connecting to Amazon EC2 through SSH and it was working well. But all of a sudden, it stopped working. I am not able to connect anymore with the same key file. What can go wrong ? Here's the debug info. ssh -vvv -i ~/Downloads/mykey.pem [email protected] OpenSSH_6.1p1 Debian-4, OpenSSL 1.0.1c 10 May 2012 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to ec2-54-222-60-78.eu.compute.amazonaws.com [54.229.60.78] port 22. debug1: Connection established. debug3: Incorrect RSA1 identifier debug3: Could not load "/home/jashwant/Downloads/mykey.pem" as a RSA1 public key debug1: identity file /home/jashwant/Downloads/mykey.pem type -1 debug1: identity file /home/jashwant/Downloads/mykey.pem-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1.1 debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1.1 pat OpenSSH_5* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_6.1p1 Debian-4 debug2: fd 3 setting O_NONBLOCK debug3: load_hostkeys: loading entries for host "ec2-54-222-60-78.eu.compute.amazonaws.com" from file "/home/jashwant/.ssh/known_hosts" debug3: load_hostkeys: found key type ECDSA in file /home/jashwant/.ssh/known_hosts:4 debug3: load_hostkeys: loaded 1 keys debug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,[email protected],[email protected],[email protected],[email protected],ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256 debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: 
aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 none debug2: mac_setup: found hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 none debug1: sending SSH2_MSG_KEX_ECDH_INIT debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug1: Server host key: ECDSA d8:05:8e:fe:37:2d:1e:2c:f1:27:c2:e7:90:7f:45:48 debug3: load_hostkeys: loading entries for host "ec2-54-222-60-78.eu.compute.amazonaws.com" from file "/home/jashwant/.ssh/known_hosts" debug3: load_hostkeys: found key type ECDSA in file /home/jashwant/.ssh/known_hosts:4 debug3: load_hostkeys: loaded 1 keys debug3: load_hostkeys: loading entries for host "54.229.60.78" from file "/home/jashwant/.ssh/known_hosts" debug3: load_hostkeys: found key type ECDSA in file /home/jashwant/.ssh/known_hosts:5 debug3: load_hostkeys: loaded 1 keys debug1: Host 'ec2-54-222-60-78.eu.compute.amazonaws.com' is known and matches the ECDSA host key. debug1: Found key in /home/jashwant/.ssh/known_hosts:4 debug1: ssh_ecdsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: jashwant@jashwant-linux (0x7f827cbe4f00) debug2: key: /home/jashwant/Downloads/mykey.pem ((nil)) debug1: Authentications that can continue: publickey debug3: start over, passed a different list publickey debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: jashwant@jashwant-linux debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey debug1: Trying private key: /home/jashwant/Downloads/mykey.pem debug1: read PEM private key done: type RSA debug3: sign_and_send_pubkey: RSA 9b:7d:9f:2e:7a:ef:51:a2:4e:fb:0c:c0:e8:d4:66:12 debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey debug2: we did not send a packet, disable method debug1: No more authentication methods to try. Permission denied (publickey). I've already googled everything and checked : Public DNS is same (It hasnt changed), Username is ubuntu as it's a Ubuntu AMI ( Used the same earlier), Permission is 400 on mykey.pem file ssh port is enabled via security groups ( Used the same ealier )
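    The debug output shows the key being offered and then rejected with "Permission denied (publickey)", which usually points at the server side: a missing or unreadable entry in ~/.ssh/authorized_keys, wrong permissions on the remote home directory or .ssh folder, or the wrong username. One quick local check is to derive the public key from the .pem and compare it with what the instance has in authorized_keys; the sketch below is one way to do that (a hedged example, assuming the third-party Python "cryptography" package, version 3.1 or later; it is not part of the question).

        # Derive the OpenSSH public key line from a .pem so it can be compared
        # against ~/.ssh/authorized_keys on the instance (for example after
        # attaching its volume to another instance).
        from cryptography.hazmat.primitives import serialization

        PEM_PATH = "/home/jashwant/Downloads/mykey.pem"  # path taken from the question

        with open(PEM_PATH, "rb") as f:
            private_key = serialization.load_pem_private_key(f.read(), password=None)

        public_line = private_key.public_key().public_bytes(
            encoding=serialization.Encoding.OpenSSH,
            format=serialization.PublicFormat.OpenSSH,
        ).decode()

        # This line should match an entry in the instance's authorized_keys file.
        print(public_line)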

    Read the article

  • How to create Large resumable download from a secured location .NET

    - by Kelvin H
    I need to preface I'm not a .NET coder at all, but to get partial functionality, I modified a technet chunkedfilefetch.aspx script that uses chunked Data Reading and writing Streamed method of doing file transfer, to get me half-way. iStream = New System.IO.FileStream(path, System.IO.FileMode.Open, _ IO.FileAccess.Read, IO.FileShare.Read) dataToRead = iStream.Length Response.ContentType = "application/octet-stream" Response.AddHeader("Content-Length", file.Length.ToString()) Response.AddHeader("Content-Disposition", "attachment; filename=" & filedownload) ' Read and send the file 16,000 bytes at a time. ' While dataToRead 0 If Response.IsClientConnected Then length = iStream.Read(buffer, 0, 16000) Response.OutputStream.Write(buffer, 0, length) Response.Flush() ReDim buffer(16000) ' Clear the buffer ' dataToRead = dataToRead - length Else ' Prevent infinite loop if user disconnects ' dataToRead = -1 End If End While This works great on files up to 2GB and is fully functioning now.. But only one problem it doesn't allow for resume. I took the original code called it fetch.aspx and pass an orderNUM through the URL. fetch.aspx&ordernum=xxxxxxx It then reads the filename/location from the database occording to the ordernumber, and chunks it out from a secured location NOT under the webroot. I need a way to make this resumable, by the nature of the internet and large files people always get disconnected and would like to resume where they left off. But any resumable articles i've read, assume the file is within the webroot.. ie. http://www.devx.com/dotnet/Article/22533/1954 Great article and works well, but I need to stream from a secured location. I'm not a .NET coder at all, at best i can do a bit of coldfusion, if anyone could help me modify a handler to do this, i would really appreciate it. Requirements: I Have a working fetch.aspx script that functions well and uses the above code snippet as a base for the streamed downloading. Download files are large 600MB and are stored in a secured location outside of the webroot. Users click on the fetch.aspx to start the download, and would therefore be clicking it again if it was to fail. If the ext is a .ASPX and the file being sent is a AVI, clicking on it would completely bypass an IHTTP handler mapped to .AVI ext, so this confuses me From what I understand the browser will read and match etag value and file modified date to determine they are talking about the same file, then a subsequent accept-range is exchanged between the browser and IIS. Since this dialog happens with IIS, we need to use a handler to intercept and respond accordingly, but clicking on the link would send it to an ASPX file which the handeler needs to be on an AVI fiel.. Also confusing me. If there is a way to request the initial HTTP request header containing etag, accept-range into the normal .ASPX file, i could read those values and if the accept-range and etag exist, start chunking at that byte value somehow? but I couldn't find a way to transfer the http request headers since they seem to get lost at the IIS level. OrderNum which is passed in the URL string is unique and could be used as the ETag Response.AddHeader("ETag", request("ordernum")) Files need to be resumable and chunked out due to size. File extensions are .AVI so a handler could be written around it. 
IIS 6.0 Web Server Any help would really be appreciated, i've been reading and reading and downloading code, but none of the examples given meet my situation with the original file being streamed from outside of the webroot. Please help me get a handle on these httphandlers :)
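    Whichever hosting shape is chosen (page or handler), the resumable part is plain HTTP: the client re-requests with a Range header (and optionally If-Range carrying the ETag), and the server answers 206 Partial Content with a Content-Range header, then streams starting at that offset, which the existing chunked loop can do by seeking iStream before the while loop. The sketch below is a hedged, language-neutral illustration of that header arithmetic (written in Python purely for illustration; it is not the ASP.NET handler and the function name is made up).

        # Map a "Range: bytes=N-" or "bytes=N-M" request header to a start offset,
        # an end offset, and the response headers a resumable download needs.
        def plan_partial_response(range_header, file_size):
            """Return (status, start, end, headers) for a single-range request."""
            if not range_header or not range_header.startswith("bytes="):
                return 200, 0, file_size - 1, {"Content-Length": str(file_size)}

            spec = range_header[len("bytes="):].split(",")[0]  # multi-range ignored here
            first, _, last = spec.partition("-")
            if not first:                  # suffix ranges ("-500") not handled in this sketch
                return 200, 0, file_size - 1, {"Content-Length": str(file_size)}

            start = int(first)
            end = int(last) if last else file_size - 1
            if start >= file_size:
                return 416, 0, 0, {"Content-Range": "bytes */%d" % file_size}

            headers = {
                "Content-Range": "bytes %d-%d/%d" % (start, end, file_size),
                "Content-Length": str(end - start + 1),
                "Accept-Ranges": "bytes",
            }
            return 206, start, end, headers

        # Example: the client dropped after 100 MB of a 600 MB file and retries.
        print(plan_partial_response("bytes=104857600-", 629145600))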

    Read the article

  • Can't log in via SSH to any accounts set to use /bin/bash as a default shell

    - by Gui Ambros
    I'm trying to install bash as the default shell on a ARM Linux running on an embedded device (Synology DS212+ NAS). But there's something really wrong, and I can't figure out what it is. Symptoms: 1) Root has /bin/bash as default shell, and can log in normally via SSH: $ grep root /etc/passwd root:x:0:0:root:/root:/bin/bash $ ssh root@NAS root@NAS's password: Last login: Sun Dec 16 14:06:56 2012 from desktop # 2) joeuser has /bin/bash as default shell, and receives "Permission denied" when trying to log in via SSH: $ grep joeuser /etc/passwd joeuser:x:1029:100:Joe User:/home/joeuser:/bin/bash $ ssh joeuser@localhost joeuser@NAS's password: Last login: Sun Dec 16 14:07:22 2012 from desktop Permission denied, please try again. Connection to localhost closed. 3) changing joeuser's shell back to /bin/sh: $ grep joeuser /etc/passwd joeuser:x:1029:100:Joe User:/home/joeuser:/bin/sh $ ssh joeuser@localhost Last login: Sun Dec 16 15:50:52 2012 from localhost $ To make things even more strange, I can log in as joeuser using /bin/bash using the serial console (!). Also a su - joeuser as root works fine, so the bash binary itself is working fine. In an act of despair, I changed joeuser's uid to 0 on /etc/passwd, but also didn't work, so it doesn't seem to be anything permission related. Seems that bash is doing some extra checking that sshd didn't like, and blocking the connections for non-root users. Maybe some sort of sanity checking - or terminal emulation - that is triggering the SIGCHLD, but only when called via ssh. I already went through every single item on sshd_config, and also put SSHD in debug mode, but didn't find anything strange. Here's my /etc/ssh/sshd_config: LogLevel DEBUG LoginGraceTime 2m PermitRootLogin yes RSAAuthentication yes PubkeyAuthentication yes AuthorizedKeysFile %h/.ssh/authorized_keys ChallengeResponseAuthentication no UsePAM yes AllowTcpForwarding no ChrootDirectory none Subsystem sftp internal-sftp -f DAEMON -u 000 And here's the output from /usr/syno/sbin/sshd -d, showing the failed attempt of joeuser trying to log in, with /bin/bash as the shell: debug1: Config token is loglevel debug1: Config token is logingracetime debug1: Config token is permitrootlogin debug1: Config token is rsaauthentication debug1: Config token is pubkeyauthentication debug1: Config token is authorizedkeysfile debug1: Config token is challengeresponseauthentication debug1: Config token is usepam debug1: Config token is allowtcpforwarding debug1: Config token is chrootdirectory debug1: Config token is subsystem debug1: HPN Buffer Size: 87380 debug1: sshd version OpenSSH_5.8p1-hpn13v11 debug1: read PEM private key done: type RSA debug1: private host key: #0 type 1 RSA debug1: read PEM private key done: type DSA debug1: private host key: #1 type 2 DSA debug1: read PEM private key done: type ECDSA debug1: private host key: #2 type 3 ECDSA debug1: rexec_argv[0]='/usr/syno/sbin/sshd' debug1: rexec_argv[1]='-d' Set /proc/self/oom_adj from 0 to -17 debug1: Bind to port 22 on ::. debug1: Server TCP RWIN socket size: 87380 debug1: HPN Buffer Size: 87380 Server listening on :: port 22. debug1: Bind to port 22 on 0.0.0.0. debug1: Server TCP RWIN socket size: 87380 debug1: HPN Buffer Size: 87380 Server listening on 0.0.0.0 port 22. debug1: Server will not fork when running in debugging mode. 
debug1: rexec start in 6 out 6 newsock 6 pipe -1 sock 9 debug1: inetd sockets after dupping: 4, 4 Connection from 127.0.0.1 port 52212 debug1: HPN Disabled: 0, HPN Buffer Size: 87380 debug1: Client protocol version 2.0; client software version OpenSSH_5.8p1-hpn13v11 SSH: Server;Ltype: Version;Remote: 127.0.0.1-52212;Protocol: 2.0;Client: OpenSSH_5.8p1-hpn13v11 debug1: match: OpenSSH_5.8p1-hpn13v11 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.8p1-hpn13v11 debug1: permanently_set_uid: 1024/100 debug1: MYFLAG IS 1 debug1: list_hostkey_types: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: AUTH STATE IS 0 debug1: REQUESTED ENC.NAME is 'aes128-ctr' debug1: kex: client->server aes128-ctr hmac-md5 none SSH: Server;Ltype: Kex;Remote: 127.0.0.1-52212;Enc: aes128-ctr;MAC: hmac-md5;Comp: none debug1: REQUESTED ENC.NAME is 'aes128-ctr' debug1: kex: server->client aes128-ctr hmac-md5 none debug1: expecting SSH2_MSG_KEX_ECDH_INIT debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: KEX done debug1: userauth-request for user joeuser service ssh-connection method none SSH: Server;Ltype: Authname;Remote: 127.0.0.1-52212;Name: joeuser debug1: attempt 0 failures 0 debug1: Config token is loglevel debug1: Config token is logingracetime debug1: Config token is permitrootlogin debug1: Config token is rsaauthentication debug1: Config token is pubkeyauthentication debug1: Config token is authorizedkeysfile debug1: Config token is challengeresponseauthentication debug1: Config token is usepam debug1: Config token is allowtcpforwarding debug1: Config token is chrootdirectory debug1: Config token is subsystem debug1: PAM: initializing for "joeuser" debug1: PAM: setting PAM_RHOST to "localhost" debug1: PAM: setting PAM_TTY to "ssh" debug1: userauth-request for user joeuser service ssh-connection method password debug1: attempt 1 failures 0 debug1: do_pam_account: called Accepted password for joeuser from 127.0.0.1 port 52212 ssh2 debug1: monitor_child_preauth: joeuser has been authenticated by privileged process debug1: PAM: establishing credentials User child is on pid 9129 debug1: Entering interactive session for SSH2. debug1: server_init_dispatch_20 debug1: server_input_channel_open: ctype session rchan 0 win 65536 max 16384 debug1: input_session_request debug1: channel 0: new [server-session] debug1: session_new: session 0 debug1: session_open: channel 0 debug1: session_open: session 0: link with channel 0 debug1: server_input_channel_open: confirm session debug1: server_input_global_request: rtype [email protected] want_reply 0 debug1: server_input_channel_req: channel 0 request pty-req reply 1 debug1: session_by_channel: session 0 channel 0 debug1: session_input_channel_req: session 0 req pty-req debug1: Allocating pty. debug1: session_new: session 0 debug1: session_pty_req: session 0 alloc /dev/pts/1 debug1: server_input_channel_req: channel 0 request shell reply 1 debug1: session_by_channel: session 0 channel 0 debug1: session_input_channel_req: session 0 req shell debug1: Setting controlling tty using TIOCSCTTY. debug1: Received SIGCHLD. 
debug1: session_by_pid: pid 9130 debug1: session_exit_message: session 0 channel 0 pid 9130 debug1: session_exit_message: release channel 0 debug1: session_by_tty: session 0 tty /dev/pts/1 debug1: session_pty_cleanup: session 0 release /dev/pts/1 Received disconnect from 127.0.0.1: 11: disconnected by user debug1: do_cleanup debug1: do_cleanup debug1: PAM: cleanup debug1: PAM: closing session debug1: PAM: deleting credentials Here you have the full output of sshd -dd, together with ssh -vv. Bash: # bash --version GNU bash, version 3.2.49(1)-release (arm-none-linux-gnueabi) Copyright (C) 2007 Free Software Foundation, Inc. The bash binary was cross compiled from source. I also tried using a pre-compiled binary from the Optware distribution, but had the exact same problem. I checked for missing shared libraries using objdump -x, but they're all there. Any ideas what could be causing this "Permission denied, please try again."? I'm almost diving in the bash source code to investigate, but trying to avoid hours chasing something that may be silly.
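    For what it's worth, the log shows the password being accepted and the pty being allocated, followed immediately by SIGCHLD and a disconnect, so sshd itself is happy and the shell process is exiting as soon as it starts. Before diving into the bash source, two cheap things to rule out are whether the shell path in /etc/passwd is present and executable, and whether it is listed in /etc/shells (some login/PAM paths, e.g. pam_shells, refuse shells that are not listed). A hedged Python sketch of those checks, not taken from the question:

        # Quick diagnostics: does the user's login shell exist, is it executable,
        # and is it listed in /etc/shells?
        import os
        import pwd

        USERNAME = "joeuser"  # name taken from the question

        shell = pwd.getpwnam(USERNAME).pw_shell
        print("login shell:   ", shell)
        print("exists:        ", os.path.exists(shell))
        print("executable:    ", os.access(shell, os.X_OK))

        try:
            with open("/etc/shells") as f:
                allowed = [line.strip() for line in f
                           if line.strip() and not line.startswith("#")]
            print("in /etc/shells:", shell in allowed)
        except FileNotFoundError:
            print("/etc/shells not present on this system")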

    Read the article

  • XmlException - inserting attribute gives "unexpected token" exception

    - by Anders Svensson
    Hi, I have an XmlDocument object in C# that I transform, using XslTransform, to HTML. In the stylesheet I insert an id attribute in a span tag, taking the id from an element in the XmlDocument. Here is the template for the element: <xsl:template match="word"> <span> <xsl:attribute name="id"><xsl:value-of select="@id"></xsl:value-of></xsl:attribute> <xsl:apply-templates/> </span> </xsl:template> But then I want to process the result document as an XHTML document (using the XmlDocument DOM). So I'm taking a selected element in the HTML, creating a range out of it, and trying to load the element using LoadXml(): wordElem.LoadXml(range.htmlText); But this gives me the following exception: "'598' is an unexpected token. The expected token is '"' or '''. Line 1, position 10." And if I move the cursor over the range.htmlText, I see the tags for the element, and the "id" shows without quotes, which confuses me (i.e. SPAN id=598 instead of SPAN id="598"). To confuse the matter further, if I insert a blank space or something like that in the value of the id in the stylesheet, it works fine, i.e.: <span> <xsl:attribute name="id"><xsl:text> </xsl:text> <xsl:value-of select="@id"></xsl:value-of></xsl:attribute> <xsl:apply-templates/> </span> (Notice the whitespace in the xsl:text element). Now if I move the cursor over the range.htmlText, I see an id with quotes as usual in attributes (and as it shows if I open the HTML file in Notepad or something). What is going on here? Why can't I insert an attribute this way and have a result that is acceptable as XHTML for XmlDocument to read? I feel I am missing something fundamental, but all this surprises me, since I do this sort of transformation using xsl:attribute to insert attributes all the time for other types of XSL transformations. Why doesn't XmlDocument accept this value? By the way, it doesn't matter if it is an id attribute. I have tried with the "class" attribute, "style" etc., and also using literal values such as "style" and setting the value to "color:red" and so on. The parser always complains it is an invalid token, and does not include quotes for the value unless there is a whitespace or something else in there (linebreaks etc.). I hope I have provided enough information. Any help will be greatly appreciated. Basically, what I want to accomplish is to set an id on a span element in the HTML, select a word in a WebBrowser control with this document loaded, and get the id attribute out of the selected element. I've accomplished everything, and can actually do what I want, but only if I use a regex e.g. to get the attribute value, and I want to be able to use XmlDocument instead to simply get the value out of the attribute directly. I'm sure I'm missing something simple, but if so please tell me. Regards, Anders
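    The error is consistent with the quotes simply not being there in range.htmlText: Internet Explorer's HTML serializer omits quotes around attribute values that are plain alphanumerics (SPAN id=598), which is legal HTML but not well-formed XML, so LoadXml() stops exactly where the bare 598 starts (line 1, position 10); adding the leading space makes the value need quoting, which is why that version round-trips. Below is a hedged workaround sketch, shown in Python purely to illustrate the idea of re-quoting bare attribute values before parsing (the same regex approach can be applied in C# before calling LoadXml).

        # Re-quote bare attribute values so the fragment becomes well-formed XML
        # before it is parsed.
        import re
        import xml.etree.ElementTree as ET

        html_text = '<SPAN id=598>word</SPAN>'        # shape reported in the question

        fixed = re.sub(r'(\s[\w:-]+)=([^\s"\'>]+)', r'\1="\2"', html_text)
        print(fixed)                                   # <SPAN id="598">word</SPAN>

        elem = ET.fromstring(fixed)
        print(elem.get("id"))                          # 598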

    Read the article

  • twitter bootstrap collapsable nav menu

    - by clifgray
    I am using twitter bootstrap on my web app to do all of the styling and everything is great for the most part. The problem is that my nav bar, once collapsed, will not drop down when I click the dropdown icon. I have listed my HTML but I think the problem is with my javascript seeing as this fiddle with the exact same code: http://jsfiddle.net/YWUmb/30/ works just fine. <div class="navbar navbar-fixed-top"> <div class="navbar-inner"> <div class="container"> <a class="btn btn-navbar" data-toggle="collapse" data-target=".nav-collapse"> <span class="icon-bar"></span> <span class="icon-bar"></span> <span class="icon-bar"></span> </a> <a class="brand" href="/">Exployre</a> <div class="nav-collapse"> <ul class="nav"> <li><a href="/">Home</a></li> <li><a href="/profile">Profile</a></li> <li><a href="/sponsors">Sponsors</a></li> <li><a href="/working">Exployrers</a></li> <li><a href="/about">About</a></li> <li><a href="/share">Share</a></li> <li> {% if user %} User: {{ user }}<br> <a href="{{ logout }}">Logout</a> {% else %} <a href="/login">Signin or Register</a> {% endif %} <li> <a href="/profile"> {% if visits %} {% if visits > 10 %} Prestige: {{ visits }} {% elif visits > 5 %} {% endif %} {% endif %} </a> </li> <li> <form action="/search" class="navbar-search"> <div> <input type="text" placeholder="Search" class="search-query" name="q" size="55"/> </div> </form> <script type="text/javascript" src="http://www.google.com/coop/cse/brand?form=cse-search-box&amp;lang=en"></script> </li> </ul> </div><!--/.nav-collapse --> </div> </div> </div> <script type="text/javascript" src="/static/js/bootstrap.js"></script> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script> Is there anything wrong with my source code links or anything additional that I need to do to make it work?

    Read the article

  • Common Mercator Projection formulas for Google Maps not working correctly

    - by Tom Halladay
    I am building a Tile Overlay server for Google maps in C#, and have found a few different code examples for calculating Y from Latitude. After getting them to work in general, I started to notice certain cases where the overlays were not lining up properly. To test this, I made a test harness to compare Google Map's Mercator LatToY conversion against the formulas I found online. As you can see below, they do not match in certain cases. Case #1 Zoomed Out: The problem is most evident when zoomed out. Up close, the problem is barely visible. Case #2 Point Proximity to Top & Bottom of viewing bounds: The problem is worse in the middle of the viewing bounds, and gets better towards the edges. This behavior can negate the behavior of Case #1 The Test: I created a google maps page to display red lines using the Google Map API's built in Mercator conversion, and overlay this with an image using the reference code for doing Mercator conversion. These conversions are represented as black lines. Compare the difference. The Results: Check out the top-most and bottom-most lines: The problem gets visually larger but numerically smaller as you zoom in: And it all but disappears at closer zoom levels, regardless of screen orientation. The Code: Google Maps Client Side Code: var lat = 0; for (lat = -80; lat <= 80; lat += 5) { map.addOverlay(new GPolyline([new GLatLng(lat, -180), new GLatLng(lat, 0)], "#FF0033", 2)); map.addOverlay(new GPolyline([new GLatLng(lat, 0), new GLatLng(lat, 180)], "#FF0033", 2)); } Server Side Code: Tile Cutter : http://mapki.com/wiki/Tile_Cutter OpenStreetMap Wiki : http://wiki.openstreetmap.org/wiki/Mercator protected override void ImageOverlay_ComposeImage(ref Bitmap ZipCodeBitMap) { Graphics LinesGraphic = Graphics.FromImage(ZipCodeBitMap); Int32 MapWidth = Convert.ToInt32(Math.Pow(2, zoom) * 255); Point Offset = Cartographer.Mercator2.toZoomedPixelCoords(North, West, zoom); TrimPoint(ref Offset, MapWidth); for (Double lat = -80; lat <= 80; lat += 5) { Point StartPoint = Cartographer.Mercator2.toZoomedPixelCoords(lat, -179, zoom); Point EndPoint = Cartographer.Mercator2.toZoomedPixelCoords(lat, -1, zoom); TrimPoint(ref StartPoint, MapWidth); TrimPoint(ref EndPoint, MapWidth); StartPoint.X = StartPoint.X - Offset.X; EndPoint.X = EndPoint.X - Offset.X; StartPoint.Y = StartPoint.Y - Offset.Y; EndPoint.Y = EndPoint.Y - Offset.Y; LinesGraphic.DrawLine(new Pen(Color.Black, 2), StartPoint.X, StartPoint.Y, EndPoint.X, EndPoint.Y); LinesGraphic.DrawString( lat.ToString(), new Font("Verdana", 10), new SolidBrush(Color.Black), new Point( Convert.ToInt32((width / 3.0) * 2.0), StartPoint.Y)); } } protected void TrimPoint(ref Point point, Int32 MapWidth) { point.X = Math.Max(point.X, 0); point.X = Math.Min(point.X, MapWidth - 1); point.Y = Math.Max(point.Y, 0); point.Y = Math.Min(point.Y, MapWidth - 1); } So, Anyone ever experienced this? Dare I ask, resolved this? Or simply have a better C# implementation of Mercator Project coordinate conversion? Thanks!
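    For reference, the usual Web Mercator pixel math treats the world at zoom z as a square of tile_size * 2^z pixels, with tile_size = 256 for Google Maps tiles; the server-side snippet above computes MapWidth as Math.Pow(2, zoom) * 255, so that 255-versus-256 constant is worth double-checking against whatever Mercator2.toZoomedPixelCoords assumes. Below is a hedged Python sketch of the standard formulas (illustration only, not the C# helper), which can be used to spot-check a few latitudes against the values the C# code produces.

        # Standard Web Mercator: lat/lon in degrees -> global pixel coordinates.
        import math

        def lat_lon_to_pixels(lat, lon, zoom, tile_size=256):
            world = tile_size * (2 ** zoom)        # world size in pixels at this zoom
            x = (lon + 180.0) / 360.0 * world
            sin_lat = math.sin(math.radians(lat))
            y = (0.5 - math.log((1.0 + sin_lat) / (1.0 - sin_lat)) / (4.0 * math.pi)) * world
            return x, y

        for lat in (-80, -45, 0, 45, 80):
            print(lat, lat_lon_to_pixels(lat, -179.0, zoom=2))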

    Read the article

  • handling long running large transactions with perl dbi

    - by 1stdayonthejob
    I've got a large transaction that consists of getting lots of data from database A, doing some manipulations with this data, then inserting the manipulated data into database B. I've only got permissions to select in database A, but I can create tables and insert/update etc. in database B. The manipulation and insertion part is written in Perl and already in use for loading data into database B from other data sources, so all that's required is to get the necessary data from database A and use it to initialize the Perl classes. How can I go about doing this so I can easily track back and pick up from where the error happened if any error occurs during the manipulation or insertion procedures (database disconnection, problems with class initialization because of invalid values, hard disk failure etc...)? Doing the transaction in one go doesn't seem like a good option because the amount of data from database A means it would take at least a day or two for data manipulation and insertion into database B. The data from database A can be grouped into around 1000 groups using unique keys, with each key containing 1000s of rows. One way I thought I could do this is to write a script that does commits per group, meaning I've got to track which group has already been inserted into database B. The only way I can think of to track the progress of which groups have been processed or not is either in a log file or in a table in database B. A second way I thought could work is to dump all the necessary fields needed for loading the classes for manipulation and insertion into a flatfile, read the file to initialize the classes and insert into database B. This also means that I've got to do some logging, but it should narrow it down to the exact row in the flatfile if any error occurs. The script will look something like this:
        use strict;
        use warnings;
        use DBI;

        #connect to database A
        my $dbh = DBI->connect('dbi:oracle:my_db', $user, $password, { RaiseError => 1, AutoCommit => 0 });

        #statement to get data based on group unique key
        my $sth = $dbh->prepare($my_sql);

        my @groups; #I have a list of this already

        open my $fh, '>>', 'my_logfile' or die "can't open logfile $!";

        eval {
            foreach my $g (@groups){
                #subroutine to check if group has already been processed, either from log file or from database table
                next if is_processed($g);

                $sth->execute($g);
                my $data = $sth->fetchall_arrayref;

                #manipulate $data, then use it to load perl classes for insertion into database B
                #.
                #.
                #.
            }
            print $fh "$g\n";
        };
        if ($@){
            $dbh->rollback;
            die "something wrong...rollback";
        }
    So if any errors do occur, I can just run this script again and it should skip the groups or rows that have been processed and continue. Both these methods are just variations on the same theme, and both require going back to where I've been tracking my progress (in table or file), skipping the ones that have been committed to database B and processing the remaining data. I'm sure there's a better way of doing this but am struggling to think of other solutions. Is there another way of handling large transactions between databases that require data manipulation between getting data out from one and inserting into another? The process doesn't need to be all in Perl, as long as I can reuse the Perl classes for manipulating and inserting the data into the database.
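    Either variant comes down to the same pattern: commit once per group, and write the progress marker for that group inside the same transaction as the group's rows, so a crash can never record a group as done without its data (a separate log file can drift out of step with the database in exactly that window). The loop shape is language independent; below is a hedged Python/DB-API sketch of the pattern (table and column names are made up for illustration, and sqlite3 merely stands in for the real driver for database B).

        # Commit per group; the progress marker is written in the same transaction
        # as the group's rows, so restarts can safely skip finished groups.
        import sqlite3

        db_b = sqlite3.connect("database_b.sqlite")
        db_b.execute("CREATE TABLE IF NOT EXISTS load_progress (group_key TEXT PRIMARY KEY)")
        db_b.execute("CREATE TABLE IF NOT EXISTS target_table (group_key TEXT, payload TEXT)")

        def already_processed(group_key):
            row = db_b.execute("SELECT 1 FROM load_progress WHERE group_key = ?",
                               (group_key,)).fetchone()
            return row is not None

        def load_group(group_key, rows):
            with db_b:  # one transaction: commits on success, rolls back on exception
                db_b.executemany(
                    "INSERT INTO target_table (group_key, payload) VALUES (?, ?)",
                    [(group_key, payload) for payload in rows])
                db_b.execute("INSERT INTO load_progress (group_key) VALUES (?)",
                             (group_key,))

        for group_key, rows in [("g1", ["a", "b"]), ("g2", ["c"])]:
            if already_processed(group_key):
                continue
            # fetch from database A and manipulate here, then:
            load_group(group_key, rows)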

    Read the article

  • copying link id to an input field

    - by zurna
    The code works fine when links are images. But I have a problem when there are flash movies. Another page opens with undefined in the address bar when it needs to copy the link id to imagesID input box. <script type="text/javascript"> var $input = $("#imagesID"); // <-- your input field $('a.thumb').click(function() { var value = $input.val(); var id = $(this).attr('id'); if (value.match(id)) { value = value.replace(id + ';', ''); } else { value += id + ';'; } $input.val(value); }); </script> <ul class="thumbs"> <li> <a class="thumb" id="62"> <img src="/FLPM/media/images/2A9L1V2X_sm.jpg" alt="Dock" id="62" class="floatLeft" /> </a> <br /> <a href="?Process=&IMAGEID=62" class="thumb"><span class="floatLeft">DELETE</span></a> </li> <li> <a class="thumb" id="61"> <img src="/FLPM/media/images/0E7Q9Z0C_sm.jpg" alt="Desert Landscape" id="61" class="floatLeft" /> </a> <br /> <a href="?Process=&IMAGEID=61" class="thumb"><span class="floatLeft">DELETE</span></a> </li> <li> <a class="thumb" id="60"> <img src="/FLPM/media/images/8R5D7M8O_sm.jpg" alt="Creek" id="60" class="floatLeft" /> </a> <br /> <a href="?Process=&IMAGEID=60" class="thumb"><span class="floatLeft">DELETE</span></a> </li> <li> <a class="thumb" id="59"> <object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" codebase="http://active.macromedia.com/flash4/cabs/swflash.cab#version=4,0,0,0" id="59" width="150" height="100" style="float:left; border:5px solid #CCCCCC; margin:5px 10px 10px 0;"> <param name="scale" value="exactfit"> <param name="AllowScriptAccess" value="always" /> <embed name="name" src="http://www.refinethetaste.com/FLPM/media/flashes/7P4A6K7M.swf" quality="high" scale="exactfit" width="150" height="100" type="application/x-shockwave-flash" AllowScriptAccess="always" pluginspage="http://www.macromedia.com/shockwave/download/index.cgi? P1_Prod_Version=ShockwaveFlash"> </embed> </object> </a> <br /> <a href="?Process=&IMAGEID=59" class="thumb"><span class="floatLeft">DELETE</span></a> </li> </ul>

    Read the article

  • XML to CSV using XSLT help.

    - by Adam Kahtava
    I'd like to convert XML into CSV using an XSLT, but when applying the XSL from the SO thread titled XML To CSV XSLT against my input: <WhoisRecord> <DomainName>127.0.0.1</DomainName> <RegistryData> <AbuseContact> <Email>[email protected]</Email> <Name>Internet Corporation for Assigned Names and Number</Name> <Phone>+1-310-301-5820</Phone> </AbuseContact> <AdministrativeContact i:nil="true"/> <BillingContact i:nil="true"/> <CreatedDate/> <RawText>...</RawText> <Registrant> <Address>4676 Admiralty Way, Suite 330</Address> <City>Marina del Rey</City> <Country>US</Country> <Name>Internet Assigned Numbers Authority</Name> <PostalCode>90292-6695</PostalCode> <StateProv>CA</StateProv> </Registrant> <TechnicalContact> <Email>[email protected]</Email> <Name>Internet Corporation for Assigned Names and Number</Name> <Phone>+1-310-301-5820</Phone> </TechnicalContact> <UpdatedDate>2010-04-14</UpdatedDate> <ZoneContact i:nil="true"/> </RegistryData> </WhoisRecord> I end up with: [email protected] Corporation for Assigned Names and Number+1-310-301-5820, , , , ..., 4676 Admiralty Way, Suite 330Marina del ReyUSInternet Assigned Numbers Authority90292-6695CA, [email protected] Corporation for Assigned Names and Number+1-310-301-5820, 2010-04-14, My problem is that the resulting transformation is missing nodes (like the DomainName element containing the IP address) and some child nodes are concatenated without commas (like the children of AbuseContact). I'd like to see all the XML output in CSV form, and strings like: "[email protected] Corporation for Assigned Names and Number+1-310-301-5820," delimited by commas. My XSL is pretty rusty. Your help is appreciated. :) Here's the XSL I'm using: <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output method="text" encoding="iso-8859-1"/> <xsl:strip-space elements="*" /> <xsl:template match="/*/child::*"> <xsl:for-each select="child::*"> <xsl:if test="position() != last()"><xsl:value-of select="normalize-space(.)"/>, </xsl:if> <xsl:if test="position() = last()"><xsl:value-of select="normalize-space(.)"/><xsl:text> </xsl:text> </xsl:if> </xsl:for-each> </xsl:template> </xsl:stylesheet>
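    Both symptoms follow from what that stylesheet does: the template matches /*/child::* (here DomainName and RegistryData), but the inner for-each only emits element children, so DomainName, which has none, contributes nothing, and xsl:value-of on an element such as AbuseContact outputs all of its descendant text run together rather than comma-separated. If XSLT is not a hard requirement, the same flattening is easy to express imperatively; the following is a hedged Python sketch (illustration only, run against a trimmed copy of the input with the unbound i: prefix removed).

        # Flatten every leaf element into one CSV row: keeps DomainName and puts a
        # comma between AbuseContact's children.
        import csv
        import io
        import xml.etree.ElementTree as ET

        xml_text = """<WhoisRecord>
          <DomainName>127.0.0.1</DomainName>
          <RegistryData>
            <AbuseContact><Email>abuse@example.org</Email><Name>ICANN</Name></AbuseContact>
            <UpdatedDate>2010-04-14</UpdatedDate>
          </RegistryData>
        </WhoisRecord>"""

        leaves = [(elem.text or "").strip()
                  for elem in ET.fromstring(xml_text).iter()
                  if len(elem) == 0]            # leaf elements only

        out = io.StringIO()
        csv.writer(out).writerow(leaves)
        print(out.getvalue().strip())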

    Read the article

  • JQuery - Adding Cells to Dynamic Tree Table

    - by BPotocki
    I can't quite wrap my head around this one... A table like the one below is spit out by a renderer from some XML. The XML it comes from has a variable depth. (Please point out if I'm using the rowspans in a horrific manner as well; I just arrived at this table through some experimentation.) <table border="1"> <tr> <td rowspan="12">> 1</td> </tr> <tr> <td rowspan="3">1 > 2</td> </tr> <tr> <td>1 > 2 > 5</td> </tr> <tr> <td>1 > 2 > 6</td> </tr> <tr> <td rowspan="3">1 > 3</td> </tr> <tr> <td>1 > 3 > 7</td> </tr> <tr> <td>1 > 3 > 8</td> </tr> <tr> <td rowspan="3">1 > 4</td> </tr> <tr> <td>1 > 4 > 9</td> </tr> <tr> <td>1 > 4 > 10</td> </tr> </table> What I'm trying to accomplish in jQuery is this... The user clicks cell "1 > 2" (which will have an ID). An item is dynamically added to the third level, but UNDER "1 > 2 > 6". What I could do is just add another row underneath "1 > 2" and increase the rowspan of "1 > 2", but the new cell would then appear out of order amongst "1 > 2 > 5" and "1 > 2 > 6". I don't really need the exact code to do this, just something to stop my head from spinning around this... Thanks!

    Read the article

  • How to set up port forwarding and firewall settings for torrents using Transmsission on Mac OSX 10.5

    - by Liz
    I have picked up bits of advice here and there on the internet and got someway through this tortuous exercise (after it took 18 hours to download the first torrent I tried yesterday - magnet-link for a film). Where I have got stuck is with configuring the firewall on the Netgear Router but I am not sure if I have caused the problem myself by something else I have done configuring the Mac System Preferences for Security or Networking. I have been following the sections of these instructions that seem to apply, although they are written for a different OSX version (don't know which one, but the screen shots do not match what I see) and I am not wanting to set up my Mac as a server and attending to the parts that apply to port forwarding for Netgear rather than LinkSys: http://homepage.mac.com/car1son/static_port_fwd_intro.html I have been trying to follow these instructions: Instructions for DG834, DG834G, DG824M, FR114W, FM114P, FR114P, FR328S, FVL328, FVS328, FVS338, FVX538, FWAG114, FWG114P, or FVS318v3 These routers do port forwarding by assigning port numbers to a "service" associated with the application you want to run. "Rules" are set for particular services. Rules block or allow access, based on various conditions such as the time of day and the name of the service. To Create a New Inbound or Outbound Rule 1. Submit the router's address in an Internet browser. (The default is 192.168.0.1). 2. Enter the router's username and password. 3. From the main menu, click Security > Rules. 4. Click Add for inbound or outbound traffic, as appropriate to the application you are planning to run. 5. Select the Service. The services the router knows about are listed in the drop down. If the service you want is not listed, add it as described in the next section. 6. Select the Action, for example ALLOW always. 7. For Send to LAN Server, enter the IP address of the local server. Note that this is also the IP address the computers on your LAN will access. 8. For WAN User choose Any, or limit access to particular IP addresses. 9. For Log selection it is reasonable to turn logs on, especially at the beginning when you are unsure of the result of the changes you are making. Later, you may want to set logs to "Never" for performance reasons. 10. Click Apply. As noted in user manual for some models: * Consider using the Dynamic DNS feature on the Advanced menu, so that external users can find your network when the DHCP lease is renewed by your ISP. * If your own LAN server uses DHCP, and your IPs change on rebooting, consider using the Reserved IP Address feature in the LAN IP menu. To Add a Service for These Routers 1. Click Security > Services > Add Custom Service. 2. Enter any name you choose for the service. 3. Select whether the service is to use TCP or UDP. If you are unsure, select both. 4. Enter the lowest port number used by the service. 5. Enter the highest port number used. If the service uses only one port number, enter the same number. 6. Click Apply. There is no "Security - Rules" submenu in the Netgear page, so I have been trying to access "Security - Firewall Rules". I can access everthing else in the Netgear settings as Admin but I cannot get the "Firewall Rules" section to open up. (I am not 100% sure I will know exactly what to do if and when I do get it opened up!) 
I haven't managed to find though searching the internet any instructions that would seem to apply specifically to what I am trying to achieve, so would be very grateful if someone could either point me in the right direction or give me some advice directly. Best wishes, Liz

    Read the article

  • OpenGL-ES Texture Mapping. Texture is reversed?

    - by Feet
    I am trying to get my head around Texture mapping, I thought I had it the other day after asking this. However, I am having some trouble with my texture coordinates being flipped from what I am expecting. I am loading my texture like so int[] textures = new int[1]; gl.glGenTextures(1, textures, 0); _textureID = textures[0]; gl.glBindTexture(GL10.GL_TEXTURE_2D, _textureID); Bitmap bmp = BitmapFactory.decodeResource( _context.getResources(), R.drawable.die_1); GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bmp, 0); gl.glTexParameterx(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR); gl.glTexParameterx(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR); bmp.recycle(); My cube is this float vertices[] = { // Front face -width, -height, depth, // 0 width, -height, depth, // 1 width, height, depth, // 2 -width, height, depth, // 3 // Back Face width, -height, -depth, // 4 -width, -height, -depth, // 5 -width, height, -depth, // 6 width, height, -depth, // 7 // Left face -width, -height, -depth, // 8 -width, -height, depth, // 9 -width, height, depth, // 10 -width, height, -depth, // 11 // Right face width, -height, depth, // 12 width, -height, -depth, // 13 width, height, -depth, // 14 width, height, depth, // 15 // Top face -width, height, depth, // 16 width, height, depth, // 17 width, height, -depth, // 18 -width, height, -depth, // 19 // Bottom face -width, -height, -depth, // 20 width, -height, -depth, // 21 width, -height, depth, // 22 -width, -height, depth, // 23 }; short indices[] = { // Front 0,1,2, 0,2,3, // Back 4,5,6, 4,6,7, // Left 8,9,10, 8,10,11, // Right 12,13,14, 12,14,15, // Top 16,17,18, 16,18,19, // Bottom 20,21,22, 20,22,23, }; float texCoords[] = { // Front face textured only, for simplicity 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f}; And it is drawn like so // Counter-clockwise winding. gl.glFrontFace(GL10.GL_CCW); // Enable face culling. gl.glEnable(GL10.GL_CULL_FACE); // What faces to remove with the face culling. gl.glCullFace(GL10.GL_BACK); // Enabled the vertices buffer for writing and to be used during // rendering. gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); // Specifies the location and data format of an array of vertex // coordinates to use when rendering. gl.glVertexPointer(3, GL10.GL_FLOAT, 0, verticesBuffer); if (normalsBuffer != null) { // Enabled the normal buffer for writing and to be used during rendering. gl.glEnableClientState(GL10.GL_NORMAL_ARRAY); // Specifies the location and data format of an array of normals to use when rendering. gl.glNormalPointer(GL10.GL_FLOAT, 0, normalsBuffer); } // Set flat color gl.glColor4f(rgba[0], rgba[1], rgba[2], rgba[3]); // Smooth color if ( colorBuffer != null ) { // Enable the color array buffer to be used during rendering. gl.glEnableClientState(GL10.GL_COLOR_ARRAY); // Point out the where the color buffer is. gl.glColorPointer(4, GL10.GL_FLOAT, 0, colorBuffer); } // Use textures? if ( textureBuffer != null) { gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY); gl.glEnable(GL10.GL_TEXTURE_2D); gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer); } // Translation and rotation before drawing gl.glTranslatef(x, y, z); gl.glRotatef(rx, 1, 0, 0); gl.glRotatef(ry, 0, 1, 0); gl.glRotatef(rz, 0, 0, 1); gl.glDrawElements(GL10.GL_TRIANGLES, numOfIndices, GL10.GL_UNSIGNED_SHORT, indicesBuffer); // Disable the vertices buffer. 
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); gl.glDisableClientState(GL10.GL_COLOR_ARRAY); gl.glDisableClientState(GL10.GL_NORMAL_ARRAY); gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY); // Disable face culling. gl.glDisable(GL10.GL_CULL_FACE); However my front face looks like this I also add, I haven't got any normals set, are textures affected by normals? float texCoords[] = { // Front 1.0f, 1.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f} It seems as if the texture is being flipped, so the coordinates don't match up properly?

    Read the article

  • Does this code follow the definition of recursion?

    - by dekz
    Hi All, I have a piece of code which I doubt is an implementation of recursion by its definition. My understanding is that the code must call itself, i.e. the exact same function. I also question whether writing the code this way adds the additional overhead usually associated with recursion. What are your thoughts?
        class dhObject {
        public:
            dhObject** children;
            int numChildren;
            GLdouble linkLength; //ai
            GLdouble theta; //angle of rot about the z axis
            GLdouble twist; //about the x axis
            GLdouble displacement; // displacement from the end point of prev along z
            GLdouble thetaMax;
            GLdouble thetaMin;
            GLdouble thetaInc;
            GLdouble direction;

            dhObject(ifstream &fin) {
                fin >> numChildren >> linkLength >> theta >> twist >> displacement >> thetaMax >> thetaMin;
                //std::cout << numChildren << std::endl;
                direction = 1;
                thetaInc = 1.0;
                if (numChildren > 0) {
                    children = new dhObject*[numChildren];
                    for(int i = 0; i < numChildren; ++i) {
                        children[i] = new dhObject(fin);
                    }
                }
            }

            void traverse(void) {
                glPushMatrix();
                //draw move initial and draw
                transform();
                draw();
                //draw children
                for(int i = 0; i < numChildren; ++i) {
                    children[i]->traverse();
                }
                glPopMatrix();
            }

            void update(void) {
                //Update the animation, if it has finished all animation go backwards
                if (theta <= thetaMin) {
                    thetaInc = 1.0;
                } else if (theta >= thetaMax) {
                    thetaInc = -1.0;
                }
                theta += thetaInc;
                //std::cout << thetaMin << " " << theta << " " << thetaMax << std::endl;
                for(int i = 0; i < numChildren; ++i) {
                    children[i]->update();
                }
            }

            void draw(void) {
                glPushMatrix();
                glColor3f (0.0f,0.0f,1.0f);
                glutSolidCube(0.1);
                glPopMatrix();
            }

            void transform(void) {
                //Move in the correct way, R, T, T, R
                glRotatef(theta, 0, 0, 1.0);
                glTranslatef(0,0,displacement);
                glTranslatef(linkLength, 0,0);
                glRotatef(twist, 1.0,0.0,0.0);
            }
        };
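    For what it's worth, a method that calls the same method on each of its children is the textbook shape of tree (structural) recursion; the only difference from a free function calling itself is the implicit object the call is made on, and the overhead is the usual one stack frame per level of the hierarchy. A minimal Python sketch of the same shape, purely for comparison (not a translation of the C++ above):

        # A method that invokes itself on each child: ordinary tree recursion.
        class DHNode:
            def __init__(self, name, children=None):
                self.name = name
                self.children = children or []

            def traverse(self, depth=0):
                print("  " * depth + self.name)   # stands in for transform()/draw()
                for child in self.children:
                    child.traverse(depth + 1)     # the recursive call

        root = DHNode("base", [DHNode("link1", [DHNode("link2")]), DHNode("link1b")])
        root.traverse()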

    Read the article

  • How do I combine or merge grouped nodes?

    - by LOlliffe
    Using the XSL: <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xs="http://www.w3.org/2001/XMLSchema" exclude-result-prefixes="xs" version="2.0"> <xsl:output method="xml"/> <xsl:template match="/"> <records> <record> <!-- Group record by bigID, for further processing --> <xsl:for-each-group select="records/record" group-by="bigID"> <xsl:sort select="bigID"/> <xsl:for-each select="current-group()"> <!-- Create new combined record --> <bigID> <!-- <xsl:value-of select="."/> --> <xsl:for-each select="."> <xsl:value-of select="bigID"/> </xsl:for-each> </bigID> <text> <xsl:value-of select="text"/> </text> </xsl:for-each> </xsl:for-each-group> </record> </records> </xsl:template> I'm trying to change: <?xml version="1.0" encoding="UTF-8"?> <records> <record> <bigID>123</bigID> <text>Contains text for 123</text> <bigID>456</bigID> <text>Some 456 text</text> <bigID>123</bigID> <text>More 123 text</text> <bigID>123</bigID> <text>Yet more 123 text</text> </record> into: <?xml version="1.0" encoding="UTF-8"?> <records> <record> <bigID>123</bigID> <text>Contains text for 123</text> <text>More 123 text</text> <text>Yet more 123 text</text> </bigID> <bigID>456 <text>Some 456 text</text> </bigID> </record> Right now, I'm just listing the grouped <bigIDs, individually. I'm missing the step after grouping, where I combine the grouped <bigID nodes. My suspicion is that I need to use the "key" function somehow, but I'm not sure. Thanks for any help.
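    Once the records are grouped, current-group() already holds every entry that shares a bigID, so the usual XSLT 2.0 shape is to emit the key once (current-grouping-key()) and then loop over current-group() for the text children, rather than emitting a bigID per source element. The merge itself is easier to see outside XSLT; here is a hedged Python sketch of the same grouping, for comparison only.

        # Group (bigID, text) pairs by bigID and emit each bigID once with all of
        # its text children.
        from itertools import groupby

        records = [
            ("123", "Contains text for 123"),
            ("456", "Some 456 text"),
            ("123", "More 123 text"),
            ("123", "Yet more 123 text"),
        ]

        by_id = sorted(records, key=lambda r: r[0])
        for big_id, group in groupby(by_id, key=lambda r: r[0]):
            print("bigID", big_id)
            for _, text in group:
                print("  text:", text)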

    Read the article

  • WCF via SSL connectivity problems

    - by Brett Widmeier
    Hello, I am hosting a WCF service from inside a Windows service using WAS. When I set the service to listen on 127.0.0.1, I have connectivity from my local machine as well as from my network. However, when I set it to listen on my outbound interface port 443, I can no longer even see the wsdl by connecting with a browser. Strangely, I can connect to the service by using telnet. The cert I am using was generated for my interface by a CA, and I have successfully used this exact cert with this service before. When checking the application log, I see that the service starts without error and is listening on the correct interface. From this information, it seems to me that the config file is in a valid state, but somehow misconfigured for what I want. I have, however, previously deployed this same setup on other sites using this config file. In case it is helpful, below is my config file. Any thoughts? <!--<system.diagnostics> <sources> <source name="System.ServiceModel" switchValue="Warning, ActivityTracing" propagateActivity="true"> <listeners> <add type="System.Diagnostics.DefaultTraceListener" name="Default"> <filter type="" /> </add> <add name="ServiceModelTraceListener"> <filter type="" /> </add> </listeners> </source> </sources> <sharedListeners> <add initializeData="app_tracelog.svclog" type="System.Diagnostics.XmlWriterTraceListener, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" name="ServiceModelTraceListener" traceOutputOptions="Timestamp"> <filter type="" /> </add> </sharedListeners> </system.diagnostics>--> <appSettings/> <connectionStrings/> <system.serviceModel> <!--<diagnostics> <messageLogging logEntireMessage="true" logMalformedMessages="true" logMessagesAtServiceLevel="true" logMessagesAtTransportLevel="true" maxMessagesToLog ="1000" maxSizeOfMessageToLog="524288"/> </diagnostics>--> <bindings> <basicHttpBinding> <binding name="basicHttps"> <security mode="Transport"> <transport clientCredentialType="None"/> <message /> </security> </binding> </basicHttpBinding> </bindings> <services> <service behaviorConfiguration="ServiceBehavior" name="<fully qualified name of service>"> <endpoint address="" binding="basicHttpBinding" name="OrdersSoap" contract="<fully qualified name of contract>" bindingNamespace="http://emr.orders.com/WebServices" bindingConfiguration="basicHttps" /> <endpoint binding="mexHttpsBinding" address="mex" contract="IMetadataExchange" /> <host> <baseAddresses> <add baseAddress="https://<external IP>/<name of service>>/" /> </baseAddresses> </host> </service> </services> <behaviors> <serviceBehaviors> <behavior name="ServiceBehavior"> <serviceMetadata httpsGetEnabled="False"/> <serviceDebug includeExceptionDetailInFaults="True" /> <dataContractSerializer maxItemsInObjectGraph="2147483646"/> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel>

    Read the article

  • using JQuery and Prototype in the same page; more explanation needed!

    - by xenogen
    Hi everybody! I'm continuously having the problem when i use jquery lightbox (which runs prototype) and jquery news slider. I tried the "noconflict" method. The problem is I don't know the exact place to put the code. So, here, i'm putting my scripts within . So, please troubleshoot it and explain me where to put the patch. thank you very much. <head> <meta http-equiv="Content-Language" content="en-us"> <meta http-equiv="Content-Type" content="text/html; charset=utf-8"> <title>Jquery</title> <script type="text/javascript" src="lb/js/prototype.js"></script> <script type="text/javascript" src="lb/js/scriptaculous.js?load=effects"></script> <script type="text/javascript" src="lb/js/lightbox.js"></script> <link href="lb/css/lightbox.css" rel="stylesheet" type="text/css" /> <script type="text/javascript" src="news/jquery-1.2.3.pack.js"></script> <script type="text/javascript" src="news/jquery.easynews.js"></script> <style> html { background-color: #FFA928; font: normal 76% "Arial", "Lucida Grande",Verdana, Sans-Serif; color:black; } a { text-decoration: none; font-weight: bold; } .news_style{ display:none; } .news_show { background-color: white; color:black; width:350px; height:150px; font: normal 100% "Arial", "Lucida Grande",Verdana, Sans-Serif; overflow: auto; } .news_border { background-color: white; width:350px; height:150px; font: normal 100% "Arial", "Lucida Grande",Verdana, Sans-Serif; border: 1px solid gray; padding: 5px 5px 5px 5px; overflow: auto; } .news_mark{ background-color:white ; font: normal 70% "Arial", "Lucida Grande",Verdana, Sans-Serif; border: 0px solid gray; width:361px; height:35px; color:black; text-align:center; } .news_title{ font: bold 120% "Arial", "Lucida Grande",Verdana, Sans-Serif; border: 0px solid gray; padding: 5px 0px 9px 5px; color:black; } .news_show img{ margin-left: 5px; margin-right: 5px; } .buttondiv { position: absolute; /*float: left;*/ /*top: 169px;*/ padding: 5px 5px 5px 5px; background-color:white ; border: 1px solid gray; /*border-top-color: white;*/ border-top:none; height:20px; } </style> <script> $(document).ready(function(){ var newsoption1 = { firstname: "mynews", secondname: "showhere", thirdname:"news_display", fourthname:"news_button", newsspeed:'6000' } $.init_news(newsoption1); var myoffset=$('#news_button').offset(); var mytop=myoffset.top-1; $('#news_button').css({top:mytop}); }); </script> </head>

    Read the article

  • Wireless internet connection connects but internet does not work (no packets received). Wired does.

    - by Rodney
    When I connect my PC via ethernet cable to my ADSL router it works fine. When I connect via Wireless it connects and the internet will work for a random amount of time and then stop working. It stays connected with a strong signal but no packets are received. My laptop/iphone are right next to it and wireless works fine. If I open the Wireless USB status, it says it is connected to my SSID with full strength (54 mps - I am 3 meteres away from my router) and the activty shows as Packets 594 SENT and 105 RECEIVED (this goes up VERY slowly) I have tried the following: Turned off anitvirus and firewall completely. Tested the wifi signal- I am writing this on my laptop which is next to my PC and also has full wifi strength. Tried a different wireless adapter - I dug out an old PCI wireless card - it does the exact same thing. Compared all wireless settings to my laptop. I can ping google.com and it replies (sometimes with packet loss) When I reboot the PC it will connect for a minute or two (random time) and then just stops again. I tried Firefox, IE etc. no joy I have updated all latest versions (Netgear WG111v2) and drivers Checked Event Log - nothing unusual Ping the router (and even connect as admin for the few minutes when the internet does work) Changed the MTU down to 1200 using DrTCP Checked Device Manager for conflicts - none. I ping the router from the PC (192.168.0.10 - 192.168.0.1) and it replies with 4 packets. BUT, on my router admin page (which I access via http on my laptop wirelessly) - if I ping 192.168.0.10 all packets timeout (pinging my laptop 192.168.0.12 works fine) My router admin page shows the leased IP address for 192.168.0.10 (ie it is definitely talking to the router initially) Now I am out of ideas - please help. I think it is an OS/Software issue as I have tried 2 different wireless adapaters (PCI and USB) with the same result but all other wireless devices work fine around mine). It's not the firewall. It is getting assigned an IP address correctly (my PC gets 192.168.0.10, my laptop is .12) It is assigned by DHCP. As soon as I plug in the ethernet cable it all works fine. Repairing the adapter sometimes helps but it will always stop working after a random time. The wireless adapter always shows as connected with Excellent signal but the internet does not work. I am running Windows XP SP3 and have tried a Netgear WG111v2 USB adapter. Thanks in advance! UPDATE: The internet seems to be working, it is just either sending packets too small or slow to work (some small pages load bits of them very slowly but then hang). 
XP seems to have a networking diagnostic app - here is the output: Last diagnostic run time: 08/30/10 08:16:38 IP Configuration Diagnostic Invalid IP address info Valid IP address detected: 192.168.0.10 IP Layer Diagnostic Corrupted IP routing table info The default route is valid info The loopback route is valid info The local host route is valid info The local subnet route is valid Invalid ARP cache entries action The ARP cache has been flushed Gateway Diagnostic Gateway info The following proxy configuration is being used by IE: Automatically Detect Settings:Disabled Automatic Configuration Script: Proxy Server: Proxy Bypass list: info This computer has the following default gateway entry(ies): 192.168.0.1 info This computer has the following IP address(es): 192.168.0.10 info The default gateway is in the same subnet as this computer info The default gateway entry is a valid unicast address info The default gateway address was resolved via ARP in 1 try(ies) info The default gateway was reached via ICMP Ping in 1 try(ies) info TCP port 80 on host 65.55.12.249 was successfully reached info The Internet host www.microsoft.com was successfully reached info The default gateway is OK DNS Client Diagnostic DNS - Not a home user scenario info Using Web Proxy: no info Resolving name ok for (www.microsoft.com): yes No DNS servers DNS failure HTTP, HTTPS, FTP Diagnostic HTTP, HTTPS, FTP connectivity info FTP (Passive): Successfully connected to ftp.microsoft.com. info HTTP: Successfully connected to www.microsoft.com. warn HTTPS: Error 12002 connecting to www.microsoft.com: The operation timed out warn HTTPS: Error 12002 connecting to www.passport.net: The operation timed out error Could not make an HTTPS connection. info Redirecting user to support call WinSock Diagnostic WinSock status info All base service provider entries are present in the Winsock catalog. info The Winsock Service provider chains are valid. info Provider entry MSAFD Tcpip [TCP/IP] passed the loopback communication test. info Provider entry MSAFD Tcpip [UDP/IP] passed the loopback communication test. info Provider entry RSVP UDP Service Provider passed the loopback communication test. info Provider entry RSVP TCP Service Provider passed the loopback communication test. info Connectivity is valid for all Winsock service providers. Wireless Diagnostic Wireless - Service disabled Wireless - User SSID action User input required: Specify network name or SSID Wireless - First time setup info The Wireless Network name (SSID) to which the user would like to connect = RodSof Wifi. 
Wireless - Radio off info Valid IP address detected: 192.168.0.10 Wireless - Out of range Wireless - Hardware issue Wireless - Novice user Wireless - Ad-hoc network Wireless - Less preferred Wireless - 802.1x enabled Wireless - Configuration mismatch Wireless - Low SNR Network Adapter Diagnostic Network location detection info Using home Internet connection Network adapter identification info Network connection: Name=Local Area Connection 2, Device=Realtek RTL8168C(P)/8111C(P) PCI-E Gigabit Ethernet NIC, MediaType=LAN, SubMediaType=LAN info Network connection: Name=Wireless USB, Device=NETGEAR WG111v2 54Mbps Wireless USB 2.0 Adapter, MediaType=LAN, SubMediaType=WIRELESS info Both Ethernet and Wireless connections available, prompting user for selection action User input required: Select network connection info Wireless connection selected Network adapter status info Network connection status: Connected HTTP, HTTPS, FTP Diagnostic HTTP, HTTPS, FTP connectivity info FTP (Active): Successfully connected to ftp.microsoft.com. warn HTTP: Error 12007 connecting to www.microsoft.com: The server name or address could not be resolved warn HTTP: Error 12002 connecting to www.hotmail.com: The operation timed out warn HTTPS: Error 12002 connecting to www.passport.net: The operation timed out warn HTTPS: Error 12002 connecting to www.microsoft.com: The operation timed out error Could not make an HTTP connection. error Could not make an HTTPS connection.
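    Given that the XP diagnostic ends with "No DNS servers / DNS failure" and HTTPS timeouts while the Winsock providers test clean, one hedged next step is to reset the TCP/IP and Winsock stacks and pin DNS to the router. These are standard XP commands; the connection name "Wireless USB" is taken from the diagnostic output above and may differ on the machine in question.

        rem Minimal sketch of a network stack reset on Windows XP (run from a command prompt, then reboot).
        rem "Wireless USB" is the connection name reported by the diagnostic and is an assumption here.
        netsh winsock reset catalog
        netsh int ip reset c:\ipreset.log
        ipconfig /flushdns
        rem Optionally pin DNS to the router instead of relying on DHCP-supplied servers:
        netsh interface ip set dns name="Wireless USB" source=static addr=192.168.0.1

    If the drops persist after a reboot, two different adapters failing on the same machine while other clients stay up can also fit interference or a router-side issue, so changing the wireless channel on the router is a cheap second test.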

    Read the article

  • Build a gem with native extension (Gem::Installer::ExtensionBuildError)

    - by Arnaud Leymet
    I have the following configuration: uname -a : Linux 2.6.24.2 i686 GNU/Linux (Ubuntu) ruby -v : ruby 1.9.0 (2007-12-25 revision 14709) [i486-linux] rails -v : Rails 3.0.0.beta3 gem -v : 1.3.5 rake --version : rake, version 0.8.7 make -v : GNU Make 3.81 gem env : RUBYGEMS VERSION: 1.3.5 RUBY VERSION: 1.9.0 (2007-12-25 patchlevel 0) [i486-linux] INSTALLATION DIRECTORY: /usr/lib/ruby1.9/gems/1.9.0 RUBY EXECUTABLE: /usr/bin/ruby1.9 EXECUTABLE DIRECTORY: /usr/bin RUBYGEMS PLATFORMS: ruby x86-linux GEM PATHS: /usr/lib/ruby1.9/gems/1.9.0 /root/.gem/ruby/1.9.0 GEM CONFIGURATION: :update_sources = true :verbose = true :benchmark = false :backtrace = false :bulk_threshold = 1000 REMOTE SOURCES: http://gems.rubyforge.org/ And when I try this simple command: gem install nokogiri Here is what I get: # gem install nokogiri Building native extensions. This could take a while... ERROR: Error installing nokogiri: ERROR: Failed to build gem native extension. /usr/bin/ruby1.9 extconf.rb checking for iconv.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/include,/usr/include/libxml2,/usr/include,/usr/include/libxml2... yes checking for libxml/parser.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/include,/usr/include/libxml2,/usr/include,/usr/include/libxml2... yes checking for libxslt/xslt.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/include,/usr/include/libxml2,/usr/include,/usr/include/libxml2... yes checking for libexslt/exslt.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/include,/usr/include/libxml2,/usr/include,/usr/include/libxml2... yes checking for xmlParseDoc() in -lxml2... yes checking for xsltParseStylesheetDoc() in -lxslt... yes checking for exsltFuncRegister() in -lexslt... yes checking for xmlRelaxNGSetParserStructuredErrors()... yes checking for xmlRelaxNGSetParserStructuredErrors()... yes checking for xmlRelaxNGSetValidStructuredErrors()... yes checking for xmlSchemaSetValidStructuredErrors()... yes checking for xmlSchemaSetParserStructuredErrors()... yes creating Makefile make cc -I. -I/usr/include/libxml2 -I/usr/include -I/usr/include/ruby-1.9.0/i486-linux -I/usr/include/ruby-1.9.0 -I. -DHAVE_XMLRELAXNGSETPARSERSTRUCTUREDERRORS -DHAVE_XMLRELAXNGSETPARSERSTRUCTUREDERRORS -DHAVE_XMLRELAXNGSETVALIDSTRUCTUREDERRORS -DHAVE_XMLSCHEMASETVALIDSTRUCTUREDERRORS -DHAVE_XMLSCHEMASETPARSERSTRUCTUREDERRORS -I/opt/local/include/ -I/opt/local/include/libxml2 -I/opt/local/include -D_FILE_OFFSET_BITS=64 -fPIC -fno-strict-aliasing -g -fPIC -g -DXP_UNIX -O3 -Wall -Wcast-qual -Wwrite-strings -Wconversion -Wmissing-noreturn -Winline -o xml_document_fragment.o -c xml_document_fragment.c In the included file starting at ./nokogiri.h:75, From ./xml_document_fragment.h:4, From xml_document_fragment.c:1: ./xml_document.h:5:16: error: st.h : No file or folder with this type make: *** [xml_document_fragment.o] Error 1 Gem files will remain installed in /usr/lib/ruby1.9/gems/1.9.0/gems/nokogiri-1.4.1 for inspection. 
Results logged to /usr/lib/ruby1.9/gems/1.9.0/gems/nokogiri-1.4.1/ext/nokogiri/gem_make.out
The "gem_make.out" file contains the exact same information as described above. If I try with another gem: gem install gherkin Here is what I get: # gem install gherkin Building native extensions. This could take a while... ERROR: Error installing gherkin: ERROR: Failed to build gem native extension. /usr/bin/ruby1.9 extconf.rb checking for main() in -lc... yes creating Makefile make cc -I. -I/usr/include/ruby-1.9.0/i486-linux -I/usr/include/ruby-1.9.0 -I. -D_FILE_OFFSET_BITS=64 -fPIC -fno-strict-aliasing -g -fPIC -o gherkin_lexer_ar.o -c gherkin_lexer_ar.c /Users/aslakhellesoy/scm/gherkin/tasks/../ragel/i18n/ar.c.rl:11:16: error: re.h: No such file or directory make: *** [gherkin_lexer_ar.o] Error 1 Gem files will remain installed in /usr/lib/ruby1.9/gems/1.9.0/gems/gherkin-1.0.30 for inspection. Results logged to /usr/lib/ruby1.9/gems/1.9.0/gems/gherkin-1.0.30/ext/gherkin_lexer_ar/gem_make.out In fact, whenever I try to install a gem with a native extension, I get the same type of error. Does that ring a bell for anyone?
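    Both failures stop on one of Ruby's own C headers (st.h, re.h) rather than on libxml2/libxslt, which the extconf checks already found; that usually means the interpreter's development headers are not installed, and ruby 1.9.0 (patchlevel 0) is a development release that many gems of that era never supported. A sketch of the usual remedy on Ubuntu follows; the package names are assumptions that vary by release.

        # sketch only - package names vary by Ubuntu release and are assumptions here
        sudo apt-get install ruby1.9-dev        # Ruby's C development headers (st.h, re.h, ...)
        sudo gem install nokogiri
        # if the build still fails, moving to a stable interpreter (1.8.7 or 1.9.1+)
        # together with its matching -dev package is the usual fix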

    Read the article

  • Why is my code returning a Null Object Reference error when using WatiN?

    - by Fuzz Evans
    I keep getting a Null Object Reference error, but can't tell why. I have a CSV file that contains 100 urls. The file is read into an array called "lines". public partial class Form1 : System.Windows.Forms.Form { string[] lines; public Form1() ... private void ReadLinksIntoMemory() { //this reads the chosen csv file into our "lines" array //and splits on commas and new lines to create new objects within the array using (StreamReader sr = new StreamReader(@"C:\temp.csv")) { //reads everything in our csv into 1 long line string fileContents = sr.ReadToEnd(); //splits the 1 long line read in into multiple objects of the lines array lines = fileContents.Split(new string[] { ",", Environment.NewLine }, StringSplitOptions.RemoveEmptyEntries); sr.Dispose(); } } The next part is where I get the null object error. When I try to use WatiN to go to the first item in the lines array it says I'm referencing a null object. private void GoToEditLinks() { for (int i = 0; i < lines.Length; i++) { //go to each link sequentially myIE.GoTo(lines[i].ToString()); //sleep so we can make sure the page loads System.Threading.Thread.Sleep(5000); } } When I debug the code it says that the GoTo request calls lines, which is null. It seems like I need to declare the array, but don't I need to tell it an exact size to do that? Example: lines = new string[10] I thought I could use lines.Length to tell it how big to make the array, but that didn't work. What is weird to me is that I can use the following code without a problem: //returns the accurate number of urls that were in the CSV we read in earlier txtbx1.text = lines.Length; //or //this returns the last entry in the csv, as well as the last entry in the array TextBox2.Text = lines[lines.Length - 1]; I am confused: the array clearly has items in it and they can be used to fill a text box, but when I try to use them in my for loop it says it's a null reference.
    UPDATE: By placing my cursor on both calls to lines and pressing F12 I find they both go to the same instance. The next thought is that I am not calling ReadLinksIntoMemory in time; below is my code: private void button1_Click(object sender, EventArgs e) { button1.Enabled = false; ReadLinksIntoMemory(); GoToEditLinks(); button1.Enabled = true; } Unless I'm mistaken, the code says that the ReadLinksIntoMemory method must complete before GoToEditLinks can be called? If ReadLinksIntoMemory didn't finish in time I shouldn't be able to fill my text boxes with the lines array length and/or last entry.
    UPDATE: Stepping into the method GoToEditLinks() I see that lines is null before it calls: myIE.GoTo(lines[i]); but when it hits the GoTo command the value changes from null to the url it is supposed to go to, and at that same time it gives me the null object error?
    UPDATE: I added an IsNullOrEmpty check method and the lines array passes it without any issue. I'm beginning to think it is an issue with WatiN and the myIE.GoTo command. I think this is the stack trace/call stack? Program.exe!Program.Form1.GoToEditLinks() Line 284 C# Program.exe!Program.Form1.button1_Click(object sender, System.EventArgs e) Line 191 + 0x8 bytes C# [External Code] Program.exe!Program.Program.Main() Line 18 + 0x1d bytes C# [External Code]
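    Since the array demonstrably has contents and the exception surfaces inside GoToEditLinks itself, the null reference is more plausibly the myIE field (never assigned before GoTo runs) than lines; the debugger's value display can be misleading once the exception has been thrown. Below is a minimal hedged C# sketch; the WatiN.Core.IE usage and the guard are assumptions about how the field was meant to be initialised, not the original code.

        // Minimal sketch - assumes the WatiN.Core assembly is referenced.
        // Creating the browser locally rules out an uninitialised myIE field,
        // which is the more likely source of the NullReferenceException here.
        private void GoToEditLinks()
        {
            if (lines == null || lines.Length == 0)
                throw new InvalidOperationException("ReadLinksIntoMemory() has not populated 'lines'.");

            using (WatiN.Core.IE browser = new WatiN.Core.IE())
            {
                foreach (string url in lines)
                {
                    browser.GoTo(url);          // WatiN waits for the navigation itself,
                    browser.WaitForComplete();  // so the fixed 5-second sleep is usually unnecessary
                }
            }
        }

    WatiN also expects the calling thread to be STA, so the [STAThread] attribute on Main (or an STA worker thread) is worth checking as well.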

    Read the article

  • Character-encoding problem spring

    - by aelshereay
    Hi All, I am stuck on a big encoding problem with my website! I use Spring 3, Tomcat 6, and a MySQL db. I want to support German and Czech along with English in my website. I created all the JSPs as UTF-8 files, and in each JSP I include the following: <%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> I created messages.properties (the default, which is Czech), messages_de.properties, and messages_en.properties. All of them are saved as UTF-8 files.
    I added the following to web.xml: <filter> <filter-name>encodingFilter</filter-name> <filter-class>org.springframework.web.filter.CharacterEncodingFilter</filter-class> <init-param> <param-name>encoding</param-name> <param-value>UTF-8</param-value> </init-param> <init-param> <param-name>forceEncoding</param-name> <param-value>true</param-value> </init-param> </filter> <locale-encoding-mapping-list> <locale-encoding-mapping> <locale>en</locale> <encoding>UTF-8</encoding> </locale-encoding-mapping> <locale-encoding-mapping> <locale>cz</locale> <encoding>UTF-8</encoding> </locale-encoding-mapping> <locale-encoding-mapping> <locale>de</locale> <encoding>UTF-8</encoding> </locale-encoding-mapping> </locale-encoding-mapping-list> <filter-mapping> <filter-name>encodingFilter</filter-name> <url-pattern>/*</url-pattern> </filter-mapping>
    And I added the following to my applicationContext.xml: <bean id="messageSource" class="org.springframework.context.support.ResourceBundleMessageSource" p:basenames="messages"/> <!-- Declare the Interceptor --> <mvc:interceptors> <bean class="org.springframework.web.servlet.i18n.LocaleChangeInterceptor" p:paramName="locale" /> </mvc:interceptors> <!-- Declare the Resolver --> <bean id="localeResolver" class="org.springframework.web.servlet.i18n.SessionLocaleResolver" />
    I set the useBodyEncodingForURI attribute to true in the Connector element of server.xml under %CATALINA_HOME%/conf, and another time tried to add URIEncoding="UTF-8" instead. I created all the tables and fields with charset [utf8] and collation [utf8_general_ci]. The encoding in my browser is UTF-8 (BTW, I have IE8 and Firefox 3.6.3). When I open the MySQL Query Browser and manually insert Czech or German data, it is inserted correctly, and displayed correctly in my app as well.
    So, here's the list of problems I have: By default the messages.properties (Czech) should load; instead messages_en.properties loads by default. In the web form, when I enter Czech data and click submit, in the Controller I print out the data to the console before saving it to the db; what's being printed is not correct (it has strange chars), and this is the exact data that gets saved to the db. I don't know where the mistake is! I can't see why it isn't working even though I did what has worked for other people. Please help me, I have been stuck on this problem for days and it is driving me crazy! Thank you in advance.
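    The web.xml filter and JSP directives only cover the HTTP side. When the text already looks wrong in the controller's console output, the usual suspects are the encoding filter not being mapped before every other filter and, for persistence, the JDBC connection not being forced to UTF-8 (MySQL Connector/J otherwise falls back to the server's default charset). The snippet below is a hedged sketch of a datasource definition; the pooling class, database name and credentials are placeholders, not taken from the question.

        <!-- sketch: force the MySQL connection itself to UTF-8; the pooling class is an assumption -->
        <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource">
            <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
            <property name="url"
                      value="jdbc:mysql://localhost:3306/mydb?useUnicode=true&amp;characterEncoding=UTF-8"/>
            <property name="username" value="appuser"/>
            <property name="password" value="secret"/>
        </bean>

    With forceEncoding=true on the filter and useUnicode/characterEncoding on the connection, URIEncoding on the Tomcat Connector only matters for parameters passed in GET URLs.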

    Read the article

  • XPath 2.0 query - how to test for the first occurrence of an element in an XML document

    - by Mike
    I have the following XSL template: <xsl:template match="para"> <fo:block xsl:use-attribute-sets="paragraph.para"> <!-- if first para in document --> <!--<xsl:if test="//para[1] intersect .">--> <xsl:if test="//para[1] intersect ."> <xsl:attribute name="space-after">10pt</xsl:attribute> <xsl:attribute name="background-color">yellow</xsl:attribute> </xsl:if> <xsl:choose> <xsl:when test="preceding-sibling::*[1][self::title]"> <xsl:attribute name="text-indent">0em</xsl:attribute> </xsl:when> <xsl:when test="parent::item"> <xsl:attribute name="text-indent">0em</xsl:attribute> </xsl:when> <xsl:otherwise> <xsl:attribute name="text-indent">1em</xsl:attribute> </xsl:otherwise> </xsl:choose> <xsl:apply-templates/> </fo:block> </xsl:template>
    The problem I am having is selecting the very first para node in the document from the following XML: <document> <section> <paragraph> <para>Para Text 1*#</para> <para>Para Text 2</para> </paragraph> </section> <paragraph> <para>Para Text 3*</para> <para>Para Text 4</para> <para>Para Text 5</para> <sub-paragraph> <para>Para Text 6*</para> <para>Para Text 7</para> </sub-paragraph> </paragraph> <appendix> <paragraph> <para>Para Text 8*</para> </paragraph> <paragraph> <para>Para Text 9</para> </paragraph> </appendix> </document>
    The XPath I am currently using is "//para[1] intersect .", which selects the first para in each group of para elements (denoted with a * in the XML sample). Any ideas on how I can select just the first occurrence of para within the document (denoted with a #)?
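    Because //para[1] means "the first para child of each parent", every group matches; selecting the first para in document order needs the parentheses applied before the positional predicate, and node identity can then be tested with the XPath 2.0 "is" operator. A minimal sketch of the changed test:

        <!-- sketch: true only for the very first para in document order -->
        <xsl:if test=". is (//para)[1]">
            <xsl:attribute name="space-after">10pt</xsl:attribute>
            <xsl:attribute name="background-color">yellow</xsl:attribute>
        </xsl:if>

    For this document, not(preceding::para) is an equivalent test, and it also works in XPath 1.0.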

    Read the article

  • Two UIViews, one UIViewController (in one UINavigationController)

    - by jdandrea
    Given an iPhone app with a UITableViewController pushed onto a UINavigationController, I would like to add a right bar button item to toggle between the table view and an "alternate" view of the same data. Let's also say that this other view uses the same data but is not a UITableView. Now, I know variations on this question already exist on Stack Overflow. However, in this case, that alternate view would not be pushed onto the UINavigationController. It would be visually akin to flipping the current UIViewController's table view over and revealing the other view, then being able to flip back. In other words, it's intended to take up a single spot in the UINavigationController hierarchy. Moreover, whatever selection you ultimately make from within either view will push a common UIViewController onto the UINavigationController stack. Still more info: We don't want to use a separate UINavigationController just to handle this pair of views, and we don't want to split these apart via a UITabBarController either. Visually and contextually, the UX is meant to show two sides of the same coin. It's just that those two sides happen to involve their own View Controllers in normal practice. Now … it turns out I have already gone and quickly set this up to see how it might work! However, upon stepping back to examine it, I get the distinct impression that I went about it in a rather non-MVC way, which of course concerns me a bit. Here's what I did at a high level. Right now, I have a UIViewController (not a UITableViewController) that handles all commonalities between the two views, such as fetching the raw data. I also have two NIBs, one for each view, and two UIView objects to go along with them. (One of them is a UITableView, which is a kind of UIView.) I switch between the views using animation (easy enough). Also, in an effort to keep things encapsulated, the now-split-apart UITableView (not the UIViewController!) acts as its own delegate and data source, fetching data from the VC. The VC is set up as a weak, non-retained object in the table view. In parallel, the alternate view gets at the raw data from the VC in the exact same way. So, there are a few things that smell funny here. The weak linking from child to parent, while polite, seems like it might be wrong. Making a UITableView the table's data source and delegate also seems odd to me, thinking that a view controller is where you want to put that per Apple's MVC diagrams. As it stands now, it would appear as if the view knows about the model, which isn't good. Loading up both views in advance also seems odd, because lazy loading is no longer in effect. Losing the benefits of a UITableViewController (like auto-scrolling to cells with text fields) is also a bit frustrating, and I'd rather not reinvent the wheel to work around that as well. Given all of the above, and given we want that "flip effect" in the context of a single spot on a single UINavigationController, and given that both views are two sides of the same coin, is there a better, more obvious way to design this that I'm just happening to miss completely? Clues appreciated!
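    One way to keep a single spot on the navigation stack without giving up MVC is to let the one view controller own both subviews and remain the table's dataSource/delegate, so that neither view ever talks to the model directly, and to swap the visible view with a flip transition. The sketch below is Objective-C against the iOS 4 block-based transition API; the showingTable/altView/tableView property names are assumptions, not part of the original design.

        // sketch: the controller owns both views and the data; assumes iOS 4+ for the block API
        - (void)toggleViews:(id)sender {
            UIView *fromView = self.showingTable ? self.tableView : self.altView;
            UIView *toView   = self.showingTable ? self.altView   : self.tableView;
            [UIView transitionWithView:self.view
                              duration:0.5
                               options:UIViewAnimationOptionTransitionFlipFromRight
                            animations:^{
                                [fromView removeFromSuperview];
                                toView.frame = self.view.bounds;
                                [self.view addSubview:toView];
                            }
                            completion:nil];
            self.showingTable = !self.showingTable;
        }

    Lazy loading can be kept by creating altView the first time the toggle fires, and since the controller (not the views) implements UITableViewDataSource, the conveniences lost by dropping UITableViewController have to be re-added by hand, such as scrolling to a cell whose text field gains focus.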

    Read the article

  • Cisco ASA 5505 - L2TP over IPsec

    - by xraminx
    I have followed this document on cisco site to set up the L2TP over IPsec connection. When I try to establish a VPN to ASA 5505 from my Windows XP, after I click on "connect" button, the "Connecting ...." dialog box appears and after a while I get this error message: Error 800: Unable to establish VPN connection. The VPN server may be unreachable, or security parameters may not be configured properly for this connection. ASA version 7.2(4) ASDM version 5.2(4) Windows XP SP3 Windows XP and ASA 5505 are on the same LAN for test purposes. Edit 1: There are two VLANs defined on the cisco device (the standard setup on cisco ASA5505). - port 0 is on VLAN2, outside; - and ports 1 to 7 on VLAN1, inside. I run a cable from my linksys home router (10.50.10.1) to the cisco ASA5505 router on port 0 (outside). Port 0 have IP 192.168.1.1 used internally by cisco and I have also assigned the external IP 10.50.10.206 to port 0 (outside). I run a cable from Windows XP to Cisco router on port 1 (inside). Port 1 is assigned an IP from Cisco router 192.168.1.2. The Windows XP is also connected to my linksys home router via wireless (10.50.10.141). Edit 2: When I try to establish vpn, the Cisco device real time Log viewer shows 7 entries like this: Severity:5 Date:Sep 15 2009 Time: 14:51:29 SyslogID: 713904 Destination IP = 10.50.10.141, Decription: No crypto map bound to interface... dropping pkt Edit 3: This is the setup on the router right now. Result of the command: "show run" : Saved : ASA Version 7.2(4) ! hostname ciscoasa domain-name default.domain.invalid enable password HGFHGFGHFHGHGFHGF encrypted passwd NMMNMNMNMNMNMN encrypted names name 192.168.1.200 WebServer1 name 10.50.10.206 external-ip-address ! interface Vlan1 nameif inside security-level 100 ip address 192.168.1.1 255.255.255.0 ! interface Vlan2 nameif outside security-level 0 ip address external-ip-address 255.0.0.0 ! interface Vlan3 no nameif security-level 50 no ip address ! interface Ethernet0/0 switchport access vlan 2 ! interface Ethernet0/1 ! interface Ethernet0/2 ! interface Ethernet0/3 ! interface Ethernet0/4 ! interface Ethernet0/5 ! interface Ethernet0/6 ! interface Ethernet0/7 ! 
ftp mode passive dns server-group DefaultDNS domain-name default.domain.invalid object-group service l2tp udp port-object eq 1701 access-list outside_access_in remark Allow incoming tcp/http access-list outside_access_in extended permit tcp any host WebServer1 eq www access-list outside_access_in extended permit udp any any eq 1701 access-list inside_nat0_outbound extended permit ip any 192.168.1.208 255.255.255.240 access-list inside_cryptomap_1 extended permit ip interface outside interface inside pager lines 24 logging enable logging asdm informational mtu inside 1500 mtu outside 1500 ip local pool PPTP-VPN 192.168.1.210-192.168.1.220 mask 255.255.255.0 icmp unreachable rate-limit 1 burst-size 1 asdm image disk0:/asdm-524.bin no asdm history enable arp timeout 14400 global (outside) 1 interface nat (inside) 0 access-list inside_nat0_outbound nat (inside) 1 0.0.0.0 0.0.0.0 static (inside,outside) tcp interface www WebServer1 www netmask 255.255.255.255 access-group outside_access_in in interface outside timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute http server enable http 192.168.1.0 255.255.255.0 inside no snmp-server location no snmp-server contact snmp-server enable traps snmp authentication linkup linkdown coldstart crypto ipsec transform-set TRANS_ESP_3DES_SHA esp-3des esp-sha-hmac crypto ipsec transform-set TRANS_ESP_3DES_SHA mode transport crypto ipsec transform-set TRANS_ESP_3DES_MD5 esp-3des esp-md5-hmac crypto ipsec transform-set TRANS_ESP_3DES_MD5 mode transport crypto map outside_map 1 match address inside_cryptomap_1 crypto map outside_map 1 set transform-set TRANS_ESP_3DES_MD5 crypto map outside_map interface inside crypto isakmp enable outside crypto isakmp policy 10 authentication pre-share encryption 3des hash md5 group 2 lifetime 86400 telnet timeout 5 ssh timeout 5 console timeout 0 dhcpd auto_config outside ! dhcpd address 192.168.1.2-192.168.1.33 inside dhcpd enable inside ! group-policy DefaultRAGroup internal group-policy DefaultRAGroup attributes dns-server value 192.168.1.1 vpn-tunnel-protocol IPSec l2tp-ipsec username myusername password FGHFGHFHGFHGFGFHF nt-encrypted tunnel-group DefaultRAGroup general-attributes address-pool PPTP-VPN default-group-policy DefaultRAGroup tunnel-group DefaultRAGroup ipsec-attributes pre-shared-key * tunnel-group DefaultRAGroup ppp-attributes no authentication chap authentication ms-chap-v2 ! ! prompt hostname context Cryptochecksum:a9331e84064f27e6220a8667bf5076c1 : end
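    In the configuration shown, the crypto map is bound to the inside interface and matched with a static ACL, while the Windows client arrives on the outside; that is consistent with the 713904 "No crypto map bound to interface... dropping pkt" messages in the log. The lines below are a hedged sketch of the usual remote-access arrangement (a dynamic crypto map bound to outside); the names and sequence numbers are assumptions, and the Cisco document being followed should take precedence.

        ! sketch only - verify against the Cisco L2TP-over-IPsec configuration guide
        no crypto map outside_map interface inside
        crypto dynamic-map outside_dyn_map 20 set transform-set TRANS_ESP_3DES_SHA
        crypto map outside_map 65535 ipsec-isakmp dynamic outside_dyn_map
        crypto map outside_map interface outside

    The transport-mode transform set already defined (TRANS_ESP_3DES_SHA) is the one Windows L2TP/IPsec clients expect, so it is reused here.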

    Read the article

  • Need some help deciphering a line of assembler code, from .NET JITted code

    - by Lasse V. Karlsen
    In a C# constructor that ends up with a call to this(...), the actual call gets translated to this: 0000003d call dword ptr ds:[199B88E8h] What are the contents of the DS register here? I know it's the data segment, but is this a call through a VMT table or similar? I doubt it though, since this(...) wouldn't be a call to a virtual method, just another constructor. I ask because the value at that location seems to be bad in some way: if I hit F11, Trace Into (Visual Studio 2008), on that call instruction, the program crashes with an access violation. The code is deep inside a 3rd party control library, where, though I have the source code, I don't have the assemblies compiled with enough debug information that I can trace it through C# code, only through the disassembler, and then I have to match that back to the actual code. The C# code in question is this: public AxisRangeData(AxisRange range) : this(range, range.Axis) { } Reflector shows me this IL code: .maxstack 8 L_0000: ldarg.0 L_0001: ldarg.1 L_0002: ldarg.1 L_0003: callvirt instance class DevExpress.XtraCharts.AxisBase DevExpress.XtraCharts.AxisRange::get_Axis() L_0008: call instance void DevExpress.XtraCharts.Native.AxisRangeData::.ctor(class DevExpress.XtraCharts.ChartElement, class DevExpress.XtraCharts.AxisBase) L_000d: ret It's that last call there, to the other constructor of the same class, that fails. The debugger never surfaces inside the other method, it just crashes. The disassembly for the method after JITting is this: 00000000 push ebp 00000001 mov ebp,esp 00000003 sub esp,14h 00000006 mov dword ptr [ebp-4],ecx 00000009 mov dword ptr [ebp-8],edx 0000000c cmp dword ptr ds:[18890E24h],0 00000013 je 0000001A 00000015 call 61843511 0000001a mov eax,dword ptr [ebp-4] 0000001d mov dword ptr [ebp-0Ch],eax 00000020 mov eax,dword ptr [ebp-8] 00000023 mov dword ptr [ebp-10h],eax 00000026 mov ecx,dword ptr [ebp-8] 00000029 cmp dword ptr [ecx],ecx 0000002b call dword ptr ds:[1889D0DCh] // range.Axis 00000031 mov dword ptr [ebp-14h],eax 00000034 push dword ptr [ebp-14h] 00000037 mov edx,dword ptr [ebp-10h] 0000003a mov ecx,dword ptr [ebp-0Ch] 0000003d call dword ptr ds:[199B88E8h] // this(range, range.Axis)? 00000043 nop 00000044 mov esp,ebp 00000046 pop ebp 00000047 ret
    Basically what I'm asking is this: What is the purpose of the ds:[ADDR] indirection here? A VMT table is only for virtual methods, isn't it? And this is a constructor. Could the constructor have yet to be JITted, which could mean that the call would actually go through a JIT shim? I'm afraid I'm in deep water here, so anything could help.
    Edit: Well, the problem just got worse, or better, or whatever. We are developing the .NET feature in a C# project in a Visual Studio 2008 solution, and debugging and developing through Visual Studio. However, in the end, this code will be loaded into a .NET runtime hosted by a Win32 Delphi application. In order to facilitate easy experimentation with such features, we can also configure the Visual Studio project/solution/debugger to copy the produced DLLs to the Delphi app's directory, and then execute the Delphi app through the Visual Studio debugger. It turns out the problem goes away if I run the program outside of the debugger, but during debugging it crops up every time. Not sure that helps, but since the code isn't slated for production release for another 6 months or so, it takes some of the pressure off for the test release that we have soon.
I'll dive into the memory parts later, probably not until after the weekend, and post a follow-up.
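    One way to test the "constructor not yet JITted, so the slot still points at a pre-stub" theory without DevExpress symbols is to force the target constructor to be compiled before the failing call site runs; if the access violation then moves into (or disappears from) the constructor body, the ds:[...] indirection was indeed the JIT fix-up slot rather than a vtable. The C# below is only a diagnostic sketch; the reflection lookup is an assumption, since AxisRangeData may be internal to the DevExpress assembly and not reachable via typeof.

        // sketch: pre-JIT a constructor so a crash in the call-through-pointer can be
        // separated from a crash inside the constructor body
        using System;
        using System.Reflection;
        using System.Runtime.CompilerServices;

        static class PreJit
        {
            public static void Constructor(Type type, params Type[] parameterTypes)
            {
                ConstructorInfo ctor = type.GetConstructor(
                    BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic,
                    null, parameterTypes, null);
                if (ctor != null)
                    RuntimeHelpers.PrepareMethod(ctor.MethodHandle);  // JIT-compiles the body now
            }
        }

        // hypothetical usage, assuming the types are accessible:
        // PreJit.Constructor(typeof(AxisRangeData), typeof(ChartElement), typeof(AxisBase));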

    Read the article

  • Java: how to do fast copy of a BufferedImage's pixels? (include unit test)

    - by WizardOfOdds
    I want to do a copy (of a rectangle area) of the ARGB values from a source BufferedImage into a destination BufferedImage. No compositing should be done: if I copy a pixel with an ARGB value of 0x8000BE50 (alpha value at 128), then the destination pixel must be exactly 0x8000BE50, totally overriding the destination pixel. I've got a very precise question and I made a unit test to show what I need. The unit test is fully functional and self-contained and is passing fine and is doing precisely what I want. However, I want a faster and more memory efficient method to replace copySrcIntoDstAt(...). That's the whole point of my question: I'm not after how to "fill" the image in a faster way (what I did is just an example to have a unit test). All I want is to know what would be a fast and memory efficient way to do it (ie fast and not creating needless objects). The proof-of-concept implementation I've made is obviously very memory efficient, but it is slow (doing one getRGB and one setRGB for every pixel). Schematically, I've got this: (where A indicates corresponding pixels from the destination image before the copy) AAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAA And I want to have this: AAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAABBBBAAA AAAAAAAAAAAAABBBBAAA AAAAAAAAAAAAAAAAAAAA where 'B' represents the pixels from the src image. I'm looking for an exact replacement of the method, not for an API link/quote. import org.junit.Test; import java.awt.image.BufferedImage; import static org.junit.Assert.*; public class TestCopy { private static final int COL1 = 0x8000BE50; // alpha at 128 private static final int COL2 = 0x1732FE87; // alpha at 23 @Test public void testPixelsCopy() { final BufferedImage src = new BufferedImage( 5, 5, BufferedImage.TYPE_INT_ARGB ); final BufferedImage dst = new BufferedImage( 20, 20, BufferedImage.TYPE_INT_ARGB ); convenienceFill( src, COL1 ); convenienceFill( dst, COL2 ); copySrcIntoDstAt( src, dst, 3, 4 ); for (int x = 0; x < dst.getWidth(); x++) { for (int y = 0; y < dst.getHeight(); y++) { if ( x >= 3 && x <= 7 && y >= 4 && y <= 8 ) { assertEquals( COL1, dst.getRGB(x,y) ); } else { assertEquals( COL2, dst.getRGB(x,y) ); } } } } // clipping is unnecessary private static void copySrcIntoDstAt( final BufferedImage src, final BufferedImage dst, final int dx, final int dy ) { // TODO: replace this by a much more efficient method for (int x = 0; x < src.getWidth(); x++) { for (int y = 0; y < src.getHeight(); y++) { dst.setRGB( dx + x, dy + y, src.getRGB(x,y) ); } } } // This method is just a convenience method, there's // no point in optimizing this method, this is not what // this question is about private static void convenienceFill( final BufferedImage bi, final int color ) { for (int x = 0; x < bi.getWidth(); x++) { for (int y = 0; y < bi.getHeight(); y++) { bi.setRGB( x, y, color ); } } } }
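    For TYPE_INT_ARGB images, drawing the source through a Graphics2D whose composite is set to AlphaComposite.Src overwrites the destination pixels, alpha included, with no blending, which matches the "no compositing" requirement, and it runs through Java2D's optimised blit loops instead of per-pixel getRGB/setRGB calls. The following is a sketch of a drop-in body for copySrcIntoDstAt and should pass the unit test above; it needs java.awt.AlphaComposite and java.awt.Graphics2D imports.

        // drop-in sketch for copySrcIntoDstAt: AlphaComposite.Src replaces destination
        // pixels (including alpha) instead of blending with them
        private static void copySrcIntoDstAt(final BufferedImage src,
                                             final BufferedImage dst,
                                             final int dx, final int dy) {
            final Graphics2D g = dst.createGraphics();
            try {
                g.setComposite(AlphaComposite.Src);
                g.drawImage(src, dx, dy, null);
            } finally {
                g.dispose();
            }
        }

    If even this shows up in profiles, the next step is copying the backing int[] rows directly from the rasters (DataBufferInt plus System.arraycopy), but the composite approach is usually fast enough.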

    Read the article

< Previous Page | 390 391 392 393 394 395 396 397 398 399 400 401  | Next Page >