Search Results

Search found 4740 results on 190 pages for 'split mirror'.

Page 43/190 | < Previous Page | 39 40 41 42 43 44 45 46 47 48 49 50  | Next Page >

  • Cannot get libcurl-devl on OpenSUSE 11.3

    - by Dai
    I have a server running OpenSUSE 11.3 that I can't really upgrade to a newer version of OpenSUSE (it's a managed appliance). I have some PHP shell scripts that need to run on the server that have a dependency on both cURL and OpenSSL. I discovered that the PHP 5.3.3 binaries on the server did not include OpenSSL but did include cURL I downloaded the latest PHP sources, extracted them, and ran ./configure --with-openssl --with-zlib --with-bcmath --with-curl --with-readline --with-libxml --enable-sockets This failed: the configure script complained that it couldn't find cURL: checking for cURL support... yes checking for cURL in default path... not found configure: error: Please reinstall the libcurl distribution - easy.h should be in /include/curl/ I tried to install libcurl by running zypper install libcurl-devl This failed too: doom:~/phpworksite/php-5.5.15 # zypper install libcurl-devl Loading repository data... Warning: Repository 'Updates for openSUSE 11.3 11.3-1.82' appears to outdated. Consider using a different mirror or server. Warning: Repository 'openSUSE_11.3_Updates' appears to outdated. Consider using a different mirror or server. Reading installed packages... 'libcurl-devl' not found in package names. Trying capabilities. No provider of 'libcurl-devl' found. Resolving package dependencies... Nothing to do. However, libcurl-devl is listed when I run zypper search curl. doom:~/phpworksite/php-5.5.15 # zypper search curl Loading repository data... Warning: Repository 'Updates for openSUSE 11.3 11.3-1.82' appears to outdated. Consider using a different mirror or server. Warning: Repository 'openSUSE_11.3_Updates' appears to outdated. Consider using a different mirror or server. Reading installed packages... S | Name | Summary | Type --+-----------------------------+----------------------------------------------------------+-------- i | curl | A Tool for Transferring Data from URLs | package | curlftpfs | Filesystem for mounting FTP hosts using FUSE and libcurl | package | libcurl-devel | A Tool for Transferring Data from URLs | package i | libcurl4 | cURL shared library version 4 | package i | perl-WWW-Curl | Perl extension interface for libcurl | package i | php5-curl | PHP5 Extension Module | package | python-curl | Python module interface to the cURL library | package | python-curl-doc | Documentation for python-curl | package | xmms2-plugin-curl | Curl Support for xmms2 | package | xmms2-plugin-curl-debuginfo | Debug information for package xmms2-plugin-curl | package doom:~/phpworksite/php-5.5.15 # Here are the current repositories. doom:~/phpworksite/php-5.5.15 # zypper repos # | Alias | Name | Enabled | Refresh ---+----------------------------------------------+----------------------------------------------+---------+-------- 1 | PHP_extensions_(openSUSE_11.3) | PHP_extensions_(openSUSE_11.3) | No | Yes 2 | Packman_11.3 | Packman_11.3 | Yes | Yes 3 | Updates for openSUSE 11.3 11.3-1.82 | Updates for openSUSE 11.3 11.3-1.82 | Yes | Yes 4 | openSUSE_11.3_OSS | openSUSE_11.3_OSS | Yes | Yes 5 | openSUSE_11.3_Updates | openSUSE_11.3_Updates | Yes | Yes 6 | openSUSE_BuildService_-_devel:languages:perl | openSUSE_BuildService_-_devel:languages:perl | No | Yes 7 | repo-debug | openSUSE-11.3-Debug | No | Yes 8 | repo-non-oss | openSUSE-11.3-Non-Oss | Yes | Yes 9 | repo-oss | openSUSE-11.3-Oss | Yes | Yes 10 | repo-source | openSUSE-11.3-Source | No | Yes BTW, I did try building PHP without cURL, however it broke a lot of things, so apparently I really need cURL. 
My question: how can I install libcurl-devl (or just install cURL) so that I can build PHP?
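
    Worth noting before anything else: the zypper search output quoted above lists the package as libcurl-devel, with an "e", while the failing install command asks for libcurl-devl. Assuming the openSUSE 11.3 OSS repository is still reachable, a minimal sketch of the fix is just:

        # the search output shows 'libcurl-devel', not 'libcurl-devl'
        zypper install libcurl-devel

        # then re-run the configure line from above
        ./configure --with-openssl --with-zlib --with-bcmath --with-curl \
                    --with-readline --with-libxml --enable-sockets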

    Read the article

  • Upgrading PHP from 5.1 to 5.2 on CentOS 5.4

    - by andufo
    i'm trying to upgrade php 5.1 to 5.2 on a CentOS 5.4 I use: yum upgrade php The result is this (check out the last part): [root@mail httpd]# yum update php Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * addons: mirror.raystedman.net * base: mirrors.serveraxis.net * centosplus: mirrors.tummy.com * contrib: mirror.raystedman.net * extras: mirror.raystedman.net * updates: mirrors.netdna.com Setting up Update Process Resolving Dependencies --> Running transaction check --> Processing Dependency: php = 5.1.6-27.el5 for package: php-devel --> Processing Dependency: php = 5.1.6 for package: php-eaccelerator ---> Package php.x86_64 0:5.2.10-1.el5.centos set to be updated --> Processing Dependency: php-cli = 5.2.10-1.el5.centos for package: php --> Processing Dependency: php-common = 5.2.10-1.el5.centos for package: php --> Running transaction check --> Processing Dependency: php = 5.1.6 for package: php-eaccelerator ---> Package php-cli.x86_64 0:5.2.10-1.el5.centos set to be updated --> Processing Dependency: php-common = 5.1.6-27.el5 for package: php-xml --> Processing Dependency: php-common = 5.1.6-27.el5 for package: php-pdo --> Processing Dependency: php-common = 5.1.6-27.el5 for package: php-gd --> Processing Dependency: php-common = 5.1.6-27.el5 for package: php-ldap --> Processing Dependency: php-common = 5.1.6-27.el5 for package: php-mbstring --> Processing Dependency: php-common = 5.1.6-27.el5 for package: php-mysql --> Processing Dependency: php-common = 5.1.6-27.el5 for package: php-imap ---> Package php-common.x86_64 0:5.2.10-1.el5.centos set to be updated ---> Package php-devel.x86_64 0:5.2.10-1.el5.centos set to be updated --> Running transaction check --> Processing Dependency: php = 5.1.6 for package: php-eaccelerator ---> Package php-gd.x86_64 0:5.2.10-1.el5.centos set to be updated ---> Package php-imap.x86_64 0:5.2.10-1.el5.centos set to be updated ---> Package php-ldap.x86_64 0:5.2.10-1.el5.centos set to be updated ---> Package php-mbstring.x86_64 0:5.2.10-1.el5.centos set to be updated ---> Package php-mysql.x86_64 0:5.2.10-1.el5.centos set to be updated ---> Package php-pdo.x86_64 0:5.2.10-1.el5.centos set to be updated ---> Package php-xml.x86_64 0:5.2.10-1.el5.centos set to be updated --> Finished Dependency Resolution php-eaccelerator-5.1.6_0.9.5.2-4.el5.rf.x86_64 from installed has depsolving problems --> Missing Dependency: php = 5.1.6 is needed by package php-eaccelerator-5.1.6_0.9.5.2-4.el5.rf.x86_64 (installed) Error: Missing Dependency: php = 5.1.6 is needed by package php-eaccelerator-5.1.6_0.9.5.2-4.el5.rf.x86_64 (installed) You could try using --skip-broken to work around the problem You could try running: package-cleanup --problems package-cleanup --dupes rpm -Va --nofiles --nodigest The program package-cleanup is found in the yum-utils package. [root@mail httpd]# What are the consequences of using --skip-broken? Any recommendations?
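
    The blocker in the transcript is php-eaccelerator from the third-party "rf" repository, which pins php = 5.1.6 exactly; --skip-broken would merely leave that stale package installed against the new PHP. A sketch of the usual way around it, assuming nothing else depends on php-eaccelerator:

        # remove the package that pins php = 5.1.6, then update
        yum remove php-eaccelerator
        yum update php
        # afterwards, install an eAccelerator build that matches PHP 5.2,
        # if the repository provides one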

    Read the article

  • PHP install sqlite3 extension

    - by Kevin
    We are using PHP 5.3.6 here, but we used the --without-sqlite3 command when compiling PHP. (It stands in the 'Configure Command' column). But, it is very risky to recompile PHP on that server; there are many visitors. How can we install/use sqlite3? Regards, Kevin [EDIT] yum repolist gives: Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: mirror.nl.leaseweb.net * extras: mirror.nl.leaseweb.net * updates: mirror.nl.leaseweb.net repo id repo name status base CentOS-5 - Base 3,566 extras CentOS-5 - Extras 237 updates CentOS-5 - Updates 376 repolist: 4,179 rpm -qa | grep php gives: php-pdo-5.3.6-1.w5 php-mysql-5.3.6-1.w5 psa-php5-configurator-1.5.3-cos5.build95101022.10 php-mbstring-5.3.6-1.w5 php-imap-5.3.6-1.w5 php-cli-5.3.6-1.w5 php-gd-5.3.6-1.w5 php-5.3.6-1.w5 php-common-5.3.6-1.w5 php-xml-5.3.6-1.w5 php -i | grep sqlite gives: PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/sqlite3.so' - /usr/lib64/php/modules/sqlite3.so: cannot open shared object file: No such file or directory in Unknown on line 0 Configure Command => './configure' '--build=x86_64-redhat-linux-gnu' '--host=x86_64-redhat-linux-gnu' '--target=x86_64-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--localstatedir=/var' '--sharedstatedir=/usr/com' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--cache-file=../config.cache' '--with-libdir=lib64' '--with-config-file-path=/etc' '--with-config-file-scan-dir=/etc/php.d' '--disable-debug' '--with-pic' '--disable-rpath' '--without-pear' '--with-bz2' '--with-exec-dir=/usr/bin' '--with-freetype-dir=/usr' '--with-png-dir=/usr' '--with-xpm-dir=/usr' '--enable-gd-native-ttf' '--without-gdbm' '--with-gettext' '--with-gmp' '--with-iconv' '--with-jpeg-dir=/usr' '--with-openssl' '--with-pcre-regex=/usr' '--with-zlib' '--with-layout=GNU' '--enable-exif' '--enable-ftp' '--enable-magic-quotes' '--enable-sockets' '--enable-sysvsem' '--enable-sysvshm' '--enable-sysvmsg' '--with-kerberos' '--enable-ucd-snmp-hack' '--enable-shmop' '--enable-calendar' '--without-mime-magic' '--without-sqlite' '--without-sqlite3' '--with-libxml-dir=/usr' '--enable-xml' '--with-system-tzdata' '--enable-force-cgi-redirect' '--enable-pcntl' '--with-imap=shared' '--with-imap-ssl' '--enable-mbstring=shared' '--enable-mbregex' '--with-gd=shared' '--enable-bcmath=shared' '--enable-dba=shared' '--with-db4=/usr' '--with-xmlrpc=shared' '--with-ldap=shared' '--with-ldap-sasl' '--with-mysql=shared,/usr' '--with-mysqli=shared,/usr/bin/mysql_config' '--enable-dom=shared' '--with-pgsql=shared' '--enable-wddx=shared' '--with-snmp=shared,/usr' '--enable-soap=shared' '--with-xsl=shared,/usr' '--enable-xmlreader=shared' '--enable-xmlwriter=shared' '--with-curl=shared,/usr' '--enable-fastcgi' '--enable-pdo=shared' '--with-pdo-odbc=shared,unixODBC,/usr' '--with-pdo-mysql=shared,/usr' '--with-pdo-pgsql=shared,/usr' '--with-pdo-sqlite=shared,/usr' '--with-pdo-dblib=shared,/usr' '--enable-json=shared' '--enable-zip=shared' '--with-readline' '--with-pspell=shared' '--enable-phar=shared' '--with-mcrypt=shared,/usr' '--with-tidy=shared,/usr' '--with-mssql=shared,/usr' '--enable-sysvmsg=shared' '--enable-sysvshm=shared' '--enable-sysvsem=shared' '--enable-posix=shared' '--with-unixODBC=shared,/usr' '--enable-fileinfo=shared' '--enable-intl=shared' '--with-icu-dir=/usr' 
'--with-recode=shared,/usr' /etc/php.d/pdo_sqlite.ini, /etc/php.d/sqlite3.ini, PHP Warning: Unknown: It is not safe to rely on the system's timezone settings. You are *required* to use the date.timezone setting or the date_default_timezone_set() function. In case you used any of those methods and you are still getting this warning, you most likely misspelled the timezone identifier. We selected 'Europe/Berlin' for 'CET/1.0/no DST' instead in Unknown on line 0 PDO drivers => mysql, sqlite pdo_sqlite PWD => /root/sqlite _SERVER["PWD"] => /root/sqlite _ENV["PWD"] => /root/sqlite
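
    Since the Configure Command above shows most extensions were built shared anyway, one common approach is to compile only ext/sqlite3 as a shared object instead of recompiling all of PHP. A sketch, assuming the matching PHP 5.3.6 source tree is unpacked and the php-devel tooling (phpize) is installed:

        cd php-5.3.6/ext/sqlite3   # path assumes the unpacked 5.3.6 sources
        phpize                     # provided by the php-devel package
        ./configure
        make && make install       # drops sqlite3.so into the extension dir
        # the startup warning above shows /etc/php.d/sqlite3.ini already
        # references sqlite3.so, so a web-server reload should pick it up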

    Read the article

  • ssh authentication nfs

    - by user40135
    Hi all I would like to do ssh from machine "ub0" to another machine "ub1" without using passwords. I setup using nfs on "ub0" but still I am asked to insert a password. Here is my scenario: * machine ub0 and ub1 have the same user "mpiu", with same pwd, same userid, and same group id * the 2 servers are sharing a folder that is the HOME directory for "mpiu" * I did a chmod 700 on the .ssh * I created a key using ssh-keygene -t dsa * I did "cat id_dsa.pub authorized_keys". On this last file I tried also chmod 600 and chmod 640 * off course I can guarantee that on machine ub1 the user "shared_user" can see the same fodler that wes mounted with no problem. Below the content of my .ssh folder Code: authorized_keys id_dsa id_dsa.pub known_hosts After all of this calling wathever function "ssh ub1 hostname" I am requested my password. Do you know what I can try? I also UNcommented in the ssh_config file for both machines this line IdentityFile ~/.ssh/id_dsa I also tried ssh -i $HOME/.ssh/id_dsa mpiu@ub1 Below the ssh -vv Code: OpenSSH_5.1p1 Debian-3ubuntu1, OpenSSL 0.9.8g 19 Oct 2007 OpenSSH_5.1p1 Debian-3ubuntu1, OpenSSL 0.9.8g 19 Oct 2007 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to ub1 [192.168.2.9] port 22. debug1: Connection established. debug2: key_type_from_name: unknown key type '-----BEGIN' debug2: key_type_from_name: unknown key type '-----END' debug1: identity file /mirror/mpiu/.ssh/id_dsa type 2 debug1: Checking blacklist file /usr/share/ssh/blacklist.DSA-1024 debug1: Checking blacklist file /etc/ssh/blacklist.DSA-1024 debug1: Remote protocol version 2.0, remote software version lshd-2.0.4 lsh - a GNU ssh debug1: no match: lshd-2.0.4 lsh - a GNU ssh debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.1p1 Debian-3ubuntu1 debug2: fd 3 setting O_NONBLOCK debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,spki-sign-rsa debug2: kex_parse_kexinit: aes256-cbc,3des-cbc,blowfish-cbc,arcfour debug2: kex_parse_kexinit: aes256-cbc,3des-cbc,blowfish-cbc,arcfour debug2: kex_parse_kexinit: hmac-sha1,hmac-md5 debug2: kex_parse_kexinit: hmac-sha1,hmac-md5 debug2: kex_parse_kexinit: none,zlib debug2: kex_parse_kexinit: none,zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: 
first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server-client 3des-cbc hmac-md5 none debug2: mac_setup: found hmac-md5 debug1: kex: client-server 3des-cbc hmac-md5 none debug2: dh_gen_key: priv key bits set: 183/384 debug2: bits set: 1028/2048 debug1: sending SSH2_MSG_KEXDH_INIT debug1: expecting SSH2_MSG_KEXDH_REPLY debug1: Host 'ub1' is known and matches the RSA host key. debug1: Found key in /mirror/mpiu/.ssh/known_hosts:1 debug2: bits set: 1039/2048 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /mirror/mpiu/.ssh/id_dsa (0xb874b098) debug1: Authentications that can continue: password,publickey debug1: Next authentication method: publickey debug1: Offering public key: /mirror/mpiu/.ssh/id_dsa debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: password,publickey debug2: we did not send a packet, disable method debug1: Next authentication method: password mpiu@ub1's password: I hangs here!
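
    Two details in the description stand out. First, "cat id_dsa.pub authorized_keys" only prints both files to the terminal; appending needs a redirect. Second, the debug output shows the remote side is lshd-2.0.4 (GNU lsh), not OpenSSH, and lshd keeps its own key store instead of reading ~/.ssh/authorized_keys. A sketch of both fixes; the lsh commands assume the lsh-utils tools are installed on ub1, and their exact names can vary by distribution:

        # on ub0: append (not just print) the public key, and fix permissions;
        # an NFS-shared home must not be group- or world-writable
        cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
        chmod 755 ~ && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys

        # but note the server is lshd ("remote software version lshd-2.0.4"),
        # which wants keys registered under ~/.lsh/ in SPKI format:
        ssh-conv < ~/.ssh/id_dsa.pub > /tmp/key.spki
        lsh-authorize /tmp/key.spki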

    Read the article

  • Add a small RAID card? Will it help overall stability and performance of my nine hard drives?

    - by Ray
    Hi, Will I get any extra genuine added performance and RAID stability if I insert a basic RAID card into a PCI-E x1 slot? I am considering the Adaptec 1220SA - 2 port SATA , pci-express (1x) , raid 0/1. Ok it only supports two SATA drives. Purpose is to help support the eight internal hard drives (1TB each), a DVD drive and an external e-SATA connected 2TB hard drive - by dealing with two of the internal hard drives. My current configuration of eight internal 1TB Barracuda (7200.12) SATA hard drives, one external 2TB SATA Western Digital Green Drive (e-SATA) and one DVD drive can already be supported by the Intel P55 & JMicron controllers on the ASUS motherboard : the Intel P55 (controls six HDD; configured as three x RAID 1), and the JMicron (controls two HDD as one RAID 1, as well as the DVD drive and the external SATA drive via the motherboard's e-SATA port (controlled by the JMicron)). Bigger picture details : I have an ASUS motherboard designed for the LGA1156 type processor and it includes the Intel P55 Express Chipset and JMicron. I am using the Intel Core i7-870 processor, and have 8GB DDR3 (1333) memory (four x 2GB Corsair DIMMs). Enough overall power. The power supply is more than sufficicient for the system. Corsair AX850. The system will never need the full 850 watts (future : second graphics card). The RAID card would provide hardware RAID 1 for two of the eight intrnal drives. It would either reduce the load on : the Intel P55 firmware RAID support, or replace the JMicron controller's RAID 1 set. I am busy installing the above configuration using Windows 7 Ultimate 64-bit as the OS. The RAID card is a last minute addition to the plan. Is it worth spending the extra R700 - R900 on the Adaptec 1220SA, or equivalent RAID card? I cannot afford to spend yet another R2000 - R3000 on a RAID card that would support many SATA2 hard drives, with a better RAID, example the RAID 5. My Issue & assumption : I am trusting that the Intel P55 chipset can properly handle six drives, configured as three * RAID 1. I am assuming that the JMicron can handle, using its RED SATA ports, one RAID-1 (two HDDs). The DVD drive connects to the JMicron optical SATA port 1 (white port 1). White port 2 is not used. The e-SATA connection is from the JMicron straight to, and through the motherboard - to an on-board (rear panel) e-SATA port. Am I being a little hopeful in only using the on-board Intel P55 and the JMicron? Is it a waste of money to install a RAID card that handles two SATA2 drives? OR Is it wisdom to take the pressure a little off the Intel P55? Obviously I am interested in data security, hence RAID 1, not RAID Zero. RAID 5 would be nice. The CPU, Intel Core i7-870 will provide the clout. Context to nine drives : I am using virtualisation with Windows 7 Ultimate. Bootable VMs. The operating system gets a mirror. Loaded apps gets a mirror. The current design data is kept in another mirror and Another mirror is back-up one and / or VM territory. Then the external 2TB drive (via e-SATA) is the next layer of data security and then finally, I use off-site data security. Thanks.

    Read the article

  • ovs-vsctl: "eth0" is not a valid UUID

    - by Przemek Lach
    I'm trying to setup an open v-switch inside my Ubuntu 12.04 Server VM. I have created three interfaces for this VM and I want to create a port mirror inside of the VM using these there interfaces and open v-switch. There are three Host-Only Adapters: eth0, eth1, eth2. The idea is that three other VM's will be connected to these adapters. One of these VM's will stream UDP video to eth0 and I want the vswitch'd VM to mirror those packets from eth0 onto eth1 and eth2. Each of the VM's connected to eth1 and eth2 will get the same video stream. I performed the following steps to install open v-switch: $ apt-get install python-simplejson python-qt4 python-twisted-conch automake autoconf gcc uml-utilities libtool build-essential $ apt-get install build-essential autoconf automake pkg-config $ wget http://openvswitch.org/releases/openvswitch-1.7.1.tar.gz $ tar xf http://openvswitch.org/releases/openvswitch-1.7.1.tar.gz $ cd http://openvswitch.org/releases/openvswitch-1.7.1.tar.gz $ apt-get install libssl-dev iproute tcpdump linux-headers-`uname -r` $ ./boot.sh $ ./configure - -with-linux=/lib/modules/`uname -r`/build $ make $ sudo make install After installation I configured as follows: $ insmod datapath/linux/openvswitch.ko $ sudo touch /usr/local/etc/ovs-vswitchd.conf $ mkdir -p /usr/local/etc/openvswitch $ ovsdb-tool create /usr/local/etc/openvswitch/conf.db Then I started the server: $ ovsdb-server /usr/local/etc/openvswitch/conf.db \ --remote=punix:/usr/local/var/run/openvswitch/db.sock \ --remote=db:Open_vSwitch,manager_options \ --private-key=db:SSL,private_key \ --certificate=db:SSL,certificate \ --bootstrap-ca-cert=db:SSL,ca_cert --pidfile --detach --log-file $ ovs-vsctl –no-wait init (run only once) $ ovs-vswitchd --pidfile --detach The above steps I got from this tutorial and it all worked fine. I then proceeded to add a port mirror based on the open v-switch documentation under Port Mirroring. 
I successfully completed the following commands: $ ovs-vsctl add-br br0 $ ovs-vsctl add-port br0 eth0 $ ovs-vsctl add-port br0 eth1 $ ovs-vsctl add-port br0 eth2 $ ifconfig eth0 promisc up $ ifconfig eth1 promisc up $ ifconfig eth2 promisc up At this point when I run ovs-vsctl show I get the following: 75bda8c2-b870-438b-9115-e36288ea1cd8 Bridge "br0" Port "br0" Interface "br0" type: internal Port "eth0" Interface "eth0" Port "eth2" Interface "eth2" Port "eth1" Interface "eth1" And when I run ifconfig I get the following: eth0 Link encap:Ethernet HWaddr 08:00:27:9f:51:ca inet6 addr: fe80::a00:27ff:fe9f:51ca/64 Scope:Link UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1 RX packets:17 errors:0 dropped:0 overruns:0 frame:0 TX packets:6 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1494 (1.4 KB) TX bytes:468 (468.0 B) eth1 Link encap:Ethernet HWaddr 08:00:27:53:02:d4 inet6 addr: fe80::a00:27ff:fe53:2d4/64 Scope:Link UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1 RX packets:17 errors:0 dropped:0 overruns:0 frame:0 TX packets:6 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1494 (1.4 KB) TX bytes:468 (468.0 B) eth2 Link encap:Ethernet HWaddr 08:00:27:cb:a5:93 inet6 addr: fe80::a00:27ff:fecb:a593/64 Scope:Link UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1 RX packets:17 errors:0 dropped:0 overruns:0 frame:0 TX packets:6 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1494 (1.4 KB) TX bytes:468 (468.0 B) eth3 Link encap:Ethernet HWaddr 08:00:27:df:bb:d8 inet addr:192.168.1.139 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fedf:bbd8/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:2211 errors:0 dropped:0 overruns:0 frame:0 TX packets:1196 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:182987 (182.9 KB) TX bytes:125441 (125.4 KB) NOTE: I use eth3 as a bridge adapter for SSH'ing into the VM. So now, I think I've done everything correctly but when I try to create the bridge using the following command: $ ovs-vsctl -- set Bridge br0 mirrors=@m -- --id=@eth0 get Port eth0 -- --id=@eth1 get Port eth1 -- --id=@m create Mirror name=app1Mirror select-dst-port=eth0 select-src-port=@eth0 output-port=@eth1,eth2 I get the following error: ovs-vsctl: "eth0" is not a valid UUID I don't understand why it's not able to find the interfaces?
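
    The failing command has two spots where ovs-vsctl sees a bare name instead of a record reference: select-dst-port=eth0 (no @, which is exactly what the "not a valid UUID" error complains about) and output-port=@eth1,eth2 (a mirror's output-port is a single port reference, so feeding both eth1 and eth2 needs two mirrors). A sketch of a corrected invocation, modeled on the port-mirroring example in the ovs-vsctl documentation:

        ovs-vsctl -- set Bridge br0 mirrors=@m \
          -- --id=@eth0 get Port eth0 \
          -- --id=@eth1 get Port eth1 \
          -- --id=@m create Mirror name=app1Mirror \
               select-src-port=@eth0 select-dst-port=@eth0 output-port=@eth1
        # repeat with a second Mirror record (set Bridge br0 mirrors=@m1,@m2)
        # to mirror the same traffic onto eth2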

    Read the article

  • jQuery load Google Visualization API with AJAX

    - by Curro
    Hello. There is an issue that I cannot solve, I've been looking a lot in the internet but found nothing. I have this JavaScript that is used to do an Ajax request by PHP. When the request is done, it calls a function that uses the Google Visualization API to draw an annotatedtimeline to present the data. The script works great without AJAX, if I do everything inline it works great, but when I try to do it with AJAX it doesn't work!!! The error that I get is in the declaration of the "data" DataTable, in the Google Chrome Developer Tools I get a Uncaught TypeError: Cannot read property 'DataTable' of undefined. When the script gets to the error, everything on the page is cleared, it just shows a blank page. So I don't know how to make it work. Please help Thanks in advance $(document).ready(function(){ // Get TIER1Tickets $("#divTendency").addClass("loading"); $.ajax({ type: "POST", url: "getTIER1Tickets.php", data: "", success: function(html){ // Succesful, load visualization API and send data google.load('visualization', '1', {'packages': ['annotatedtimeline']}); google.setOnLoadCallback(drawData(html)); } }); }); function drawData(response){ $("#divTendency").removeClass("loading"); // Data comes from PHP like: <CSV ticket count for each day>*<CSV dates for ticket counts>*<total number of days counted> // So it has to be split first by * then by , var dataArray = response.split("*"); var dataTickets = dataArray[0]; var dataDates = dataArray[1]; var dataCount = dataArray[2]; // The comma separation now splits the ticket counts and the dates var dataTicketArray = dataTickets.split(","); var dataDatesArray = dataDates.split(","); // Visualization data var data = new google.visualization.DataTable(); data.addColumn('date', 'Date'); data.addColumn('number', 'Tickets'); data.addRows(dataCount); var dateSplit = new Array(); for(var i = 0 ; i < dataCount ; i++){ // Separating the data because must be entered as "new Date(YYYY,M,D)" dateSplit = dataDatesArray[i].split("-"); data.setValue(i, 0, new Date(dateSplit[2],dateSplit[1],dateSplit[0])); data.setValue(i, 1, parseInt(dataTicketArray[i])); } var annotatedtimeline = new google.visualization.AnnotatedTimeLine(document.getElementById('divTendency')); annotatedtimeline.draw(data, {displayAnnotations: true}); }
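
    Two lines in the quoted script plausibly explain both symptoms. google.load(), when called after the page has finished loading, falls back to document.write(), which blanks the page, and google.setOnLoadCallback(drawData(html)) calls drawData immediately (before google.visualization exists) instead of passing a function. A sketch of the success handler using the loader's documented callback setting:

        success: function (html) {
            // pass a function, don't invoke it; the callback option also
            // lets the API load after page load without document.write
            google.load('visualization', '1', {
                packages: ['annotatedtimeline'],
                callback: function () { drawData(html); }
            });
        }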

    Read the article

  • Infinite loop when adding a row to a list in a class in python3

    - by Margaret
    I have a script which contains two classes. (I'm obviously deleting a lot of stuff that I don't believe is relevant to the error I'm dealing with.) The eventual task is to create a decision tree, as I mentioned in this question. Unfortunately, I'm getting an infinite loop, and I'm having difficulty identifying why. I've identified the line of code that's going haywire, but I would have thought the iterator and the list I'm adding to would be different objects. Is there some side effect of list's .append functionality that I'm not aware of? Or am I making some other blindingly obvious mistake? class Dataset: individuals = [] #Becomes a list of dictionaries, in which each dictionary is a row from the CSV with the headers as keys def field_set(self): #Returns a list of the fields in individuals[] that can be used to split the data (i.e. have more than one value amongst the individuals def classified(self, predicted_value): #Returns True if all the individuals have the same value for predicted_value def fields_exhausted(self, predicted_value): #Returns True if all the individuals are identical except for predicted_value def lowest_entropy_value(self, predicted_value): #Returns the field that will reduce <a href="http://en.wikipedia.org/wiki/Entropy_%28information_theory%29">entropy</a> the most def __init__(self, individuals=[]): and class Node: ds = Dataset() #The data that is associated with this Node links = [] #List of Nodes, the offspring Nodes of this node level = 0 #Tree depth of this Node split_value = '' #Field used to split out this Node from the parent node node_value = '' #Value used to split out this Node from the parent Node def split_dataset(self, split_value): fields = [] #List of options for split_value amongst the individuals datasets = {} #Dictionary of Datasets, each one with a value from fields[] as its key for field in self.ds.field_set()[split_value]: #Populates the keys of fields[] fields.append(field) datasets[field] = Dataset() for i in self.ds.individuals: #Adds individuals to the datasets.dataset that matches their result for split_value datasets[i[split_value]].individuals.append(i) #<---Causes an infinite loop on the second hit for field in fields: #Creates subnodes from each of the datasets.Dataset options self.add_subnode(datasets[field],split_value,field) def add_subnode(self, dataset, split_value='', node_value=''): def __init__(self, level, dataset=Dataset()): My initialisation code is currently: if __name__ == '__main__': filename = (sys.argv[1]) #Takes in a CSV file predicted_value = "# class" #Identifies the field from the CSV file that should be predicted base_dataset = parse_csv(filename) #Turns the CSV file into a list of lists parsed_dataset = individual_list(base_dataset) #Turns the list of lists into a list of dictionaries root = Node(0, Dataset(parsed_dataset)) #Creates a root node, passing it the full dataset root.split_dataset(root.ds.lowest_entropy_value(predicted_value)) #Performs the first split, creating multiple subnodes n = root.links[0] n.split_dataset(n.ds.lowest_entropy_value(predicted_value)) #Attempts to split the first subnode.
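
    The loop that never ends is visible in the class definitions themselves: individuals = [] and links = [] are class attributes, so every Dataset built via the defaults shares one list. Appending to datasets[...].individuals therefore also grows self.ds.individuals, the very list the for loop is iterating over. The mutable default in __init__(self, individuals=[]) has the same sharing problem. A minimal sketch of the fix:

        class Dataset:
            def __init__(self, individuals=None):
                # a fresh list per instance; a default of [] is created once
                # and shared by every call that relies on it
                self.individuals = individuals if individuals is not None else []

        class Node:
            def __init__(self, level, dataset=None):
                self.ds = dataset if dataset is not None else Dataset()
                self.links = []   # per-instance, not class-level
                self.level = level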

    Read the article

  • Convert string from getline into a number

    - by haskellguy
    I am trying to create a 2D array with vectors. I have a file that has a set of numbers on each line. So what I did was implement a split function so that every time I have a new number (separated by \t) it splits it off and adds it to the vector vector<double> &split(const string &s, char delim, vector<double> &elems) { stringstream ss(s); string item; while (getline(ss, item, delim)) { cout << item << endl; double number = atof(item.c_str()); cout << number; elems.push_back(number); } return elems; } vector<double> split(const string &s, char delim) { vector<double> elems; split(s, delim, elems); return elems; } After that I simply iterate through it. int main() { ifstream file("./data/file.txt"); string row; vector< vector<double> > matrix; int line_count = -1; while (getline(file, row)) { line_count++; if (line_count <= 4) continue; vector<double> cols = split(row, '\t'); matrix.push_back(cols); } ... } Now my issue is in this bit here: while (getline(ss, item, delim)) { cout << item << endl; double number = atof(item.c_str()); cout << number; where item.c_str() is converted to a 0. Shouldn't that still be a string with the same value as item? It works in a separate example if I go straight from string to c_string, but when I use this getline I end up in this error situation. Any hints?
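
    atof() returns 0 silently for anything it cannot parse, which hides the actual culprit (an empty token, a '\r' left over from Windows line endings, or a non-numeric header field are typical causes). A sketch of a drop-in replacement built on strtod, which reports where parsing stopped:

        #include <cstdio>
        #include <cstdlib>
        #include <string>

        double to_double(const std::string &item) {
            char *end = NULL;
            double number = std::strtod(item.c_str(), &end);
            // anything atof would silently turn into 0 gets reported here
            if (end == item.c_str() || *end != '\0')
                std::fprintf(stderr, "could not fully parse '%s'\n", item.c_str());
            return number;
        }

    Calling to_double(item) in place of atof(item.c_str()) inside the getline loop should make the offending token visible.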

    Read the article

  • How to compare DateTime Objects while looping through a list?

    - by Taniq
    I'm trying to loop through a list (csv) containing two fields; a name and a date. There are various duplicated names and various dates in the list. I'm trying to deduce for each name in the list, where there are multiple instances of the same name, which corresponding date is the latest. I realise, from looking at another answer, that I need to use the DateTime.Compare method which is fine, but my problem is working out which date is later. Once I know this I need to produce a file with unique names and the latest date relating to it. This is my first question which makes me a newbie. EDIT: Initially I thought it would be 'ok' to set the LatestDate object to a date that wouldn't show up in my file, therefore making any later dates in the file the LatestDate. Here's my coding so far: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.IO; namespace flybe_overwriter { class Program { static DateTime currentDate; static DateTime latestDate = new DateTime(1000,1,1); static HashSet<string> uniqueNames = new HashSet<string>(); static string indexpath = @"e:\flybe test\indexing.csv"; static string[] indexlist = File.ReadAllLines(indexpath); static StreamWriter outputfile = new StreamWriter(@"e:\flybe test\match.csv"); static void Main(string[] args) { foreach (string entry in indexlist) { uniqueNames.Add(entry.Split(',')[0]); } HashSet<string>.Enumerator fenum = new HashSet<string>.Enumerator(); fenum = uniqueNames.GetEnumerator(); while (fenum.MoveNext()) { string currentName = fenum.Current; foreach (string line in indexlist) { currentDate = new DateTime(Convert.ToInt32(line.Split(',')[1].Substring(4, 4)), Convert.ToInt32(line.Split(',')[1].Substring(2, 2)), Convert.ToInt32(line.Split(',')[1].Substring(0, 2))); if (currentName == line.Split(',')[0]) { if(DateTime.Compare(latestDate.Date, currentDate.Date) < 1) { // Console.WriteLine(currentName + " " + latestDate.ToShortDateString() + " is earlier than " + currentDate.ToShortDateString()); } else if (DateTime.Compare(latestDate.Date, currentDate.Date) > 1) { // Console.WriteLine(currentName + " " + latestDate.ToShortDateString() + " is later than " + currentDate.ToShortDateString()); } else if (DateTime.Compare(latestDate.Date, currentDate.Date) == 0) { // Console.WriteLine(currentName + " " + latestDate.ToShortDateString() + " is the same as " + currentDate.ToShortDateString()); } } } } } } } Any help appreciated. Thanks.
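
    One bug worth flagging first: DateTime.Compare only returns -1, 0 or 1, so the "> 1" branch above can never run. For the underlying task (latest date per name), a dictionary keyed on the name removes both the sentinel latestDate and the nested loops. A sketch, assuming the same name,ddMMyyyy CSV layout parsed above (add using System.Globalization):

        var latest = new Dictionary<string, DateTime>();
        foreach (string entry in File.ReadAllLines(indexpath))
        {
            string name = entry.Split(',')[0];
            DateTime date = DateTime.ParseExact(entry.Split(',')[1].Substring(0, 8),
                                                "ddMMyyyy", CultureInfo.InvariantCulture);
            if (!latest.ContainsKey(name) || date > latest[name])
                latest[name] = date;   // keep the later of the two dates
        }
        foreach (var pair in latest)
            outputfile.WriteLine(pair.Key + "," + pair.Value.ToString("ddMMyyyy"));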

    Read the article

  • Pass data to another page

    - by user2331416
    I am trying to pass some data from one page to another page by using jquery but it dose not working, below is the code which I want to click the text in source page and the destination page will hide the current text. Source page: <html> <head> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js"></script> <script type="text/javascript"> $(function () { $("a.pass").bind("click", function () { var url = "Destination.html?name=" + encodeURIComponent($("a.pass").text()); window.location.href = url; }); }); </script> </head> <body> <a class="pass">a</a><br /> <a class="pass">b</a><br /> <a class="pass">c</a><br /> <a class="pass">d</a> </body> </html> Destination page: <html> <head> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js"></script> <script type="text/javascript"> var queryString = new Array(); $(function () { if (queryString.length == 0) { if (window.location.search.split('?').length > 1) { var params = window.location.search.split('?')[1].split('&'); for (var i = 0; i < params.length; i++) { var key = params[i].split('=')[0]; var value = decodeURIComponent(params[i].split('=')[1]); queryString[key] = value; } } } if (queryString["name"] != null) { var data = queryString["name"] $("p.+'data'").hide(); } }); </script> </head> <body> <p class="a">a</p> <p class="b">b</p> <p class="c">c</p> <p class="d">d</p> </body> </html> Please Help.
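
    On the destination page the selector is the problem: $("p.+'data'").hide() hands jQuery the literal string p.+'data' instead of concatenating the variable, so nothing matches. A sketch of the intended concatenation:

        if (queryString["name"] != null) {
            var data = queryString["name"];
            $("p." + data).hide();   // e.g. name=a hides <p class="a">
        }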

    Read the article

  • “Query cost (relative to the batch)” <> Query cost relative to batch

    - by Dave Ballantyne
    OK, so that is quite a contradictory title, but unfortunately it is true that a common misconception is that the query with the highest percentage relative to batch is the worst performing.  Simply put, it is a lie, or more accurately we dont understand what these figures mean. Consider the two below simple queries: SELECT * FROM Person.BusinessEntity JOIN Person.BusinessEntityAddress ON Person.BusinessEntity.BusinessEntityID = Person.BusinessEntityAddress.BusinessEntityID go SELECT * FROM Sales.SalesOrderDetail JOIN Sales.SalesOrderHeader ON Sales.SalesOrderDetail.SalesOrderID = Sales.SalesOrderHeader.SalesOrderID After executing these and looking at the plans, I see this : So, a 13% / 87% split ,  but 13% / 87% of WHAT ? CPU ? Duration ? Reads ? Writes ? or some magical weighted algorithm ?  In a Profiler trace of the two we can find the metrics we are interested in. CPU and duration are well out but what about reads (210 and 1935)? To save you doing the maths, though you are more than welcome to, that’s a 90.2% / 9.8% split.  Close, but no cigar. Lets try a different tact.  Looking at the execution plan the “Estimated Subtree cost” of query 1 is 0.29449 and query 2 its 1.96596.  Again to save you the maths that works out to 13.03% and 86.97%, round those and thats the figures we are after.  But, what is the worrying word there ? “Estimated”.  So these are not “actual”  execution costs,  but what’s the problem in comparing the estimated costs to derive a meaning of “Most Costly”.  Well, in the case of simple queries such as the above , probably not a lot.  In more complicated queries , a fair bit. By modifying the second query to also show the total number of lines on each order SELECT *,COUNT(*) OVER (PARTITION BY Sales.SalesOrderDetail.SalesOrderID) FROM Sales.SalesOrderDetail JOIN Sales.SalesOrderHeader ON Sales.SalesOrderDetail.SalesOrderID = Sales.SalesOrderHeader.SalesOrderID The split in percentages is now 6% / 94% and the profiler metrics are : Even more of a discrepancy. Estimates can be out with actuals for a whole host of reasons,  scalar UDF’s are a particular bug bear of mine and in-fact the cost of a udf call is entirely hidden inside the execution plan.  It always estimates to 0 (well, a very small number). Take for instance the following udf Create Function dbo.udfSumSalesForCustomer(@CustomerId integer) returns money as begin Declare @Sum money Select @Sum= SUM(SalesOrderHeader.TotalDue) from Sales.SalesOrderHeader where CustomerID = @CustomerId return @Sum end If we have two statements , one that fires the udf and another that doesn't: Select CustomerID from Sales.Customer order by CustomerID go Select CustomerID,dbo.udfSumSalesForCustomer(Customer.CustomerID) from Sales.Customer order by CustomerID The costs relative to batch is a 50/50 split, but the has to be an actual cost of firing the udf. Indeed profiler shows us : No where even remotely near 50/50!!!! Moving forward to window framing functionality in SQL Server 2012 the optimizer sees ROWS and RANGE ( see here for their functional differences) as the same ‘cost’ too SELECT SalesOrderDetailID,SalesOrderId, SUM(LineTotal) OVER(PARTITION BY salesorderid ORDER BY Salesorderdetailid RANGE unbounded preceding) from Sales.SalesOrderdetail go SELECT SalesOrderDetailID,SalesOrderId, SUM(LineTotal) OVER(PARTITION BY salesorderid ORDER BY Salesorderdetailid Rows unbounded preceding) from Sales.SalesOrderdetail By now it wont be a great display to show you the Profiler trace reads a *tiny* bit different. 
    So, the moral of the story: percentage relative to batch can give a rough ‘finger in the air’ measurement, but don't rely on it as fact.
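
    For readers who want actual numbers without running a Profiler trace, the same reads/CPU/duration figures quoted above can be pulled per query in Management Studio; a quick sketch using the first query from the post:

        SET STATISTICS IO ON;    -- actual reads per table
        SET STATISTICS TIME ON;  -- actual CPU and elapsed time

        SELECT * FROM Person.BusinessEntity
        JOIN Person.BusinessEntityAddress
          ON Person.BusinessEntity.BusinessEntityID =
             Person.BusinessEntityAddress.BusinessEntityID;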

    Read the article

  • Is the Leptonica implementation of 'Modified Median Cut' not using the median at all?

    - by TheCodeJunkie
    I'm playing around a bit with image processing and decided to read up on how color quantization worked and after a bit of reading I found the Modified Median Cut Quantization algorithm. I've been reading the code of the C implementation in Leptonica library and came across something I thought was a bit odd. Now I want to stress that I am far from an expert in this area, not am I a math-head, so I am predicting that this all comes down to me not understanding all of it and not that the implementation of the algorithm is wrong at all. The algorithm states that the vbox should be split along the lagest axis and that it should be split using the following logic The largest axis is divided by locating the bin with the median pixel (by population), selecting the longer side, and dividing in the center of that side. We could have simply put the bin with the median pixel in the shorter side, but in the early stages of subdivision, this tends to put low density clusters (that are not considered in the subdivision) in the same vbox as part of a high density cluster that will outvote it in median vbox color, even with future median-based subdivisions. The algorithm used here is particularly important in early subdivisions, and 3is useful for giving visible but low population color clusters their own vbox. This has little effect on the subdivision of high density clusters, which ultimately will have roughly equal population in their vboxes. For the sake of the argument, let's assume that we have a vbox that we are in the process of splitting and that the red axis is the largest. In the Leptonica algorithm, on line 01297, the code appears to do the following Iterate over all the possible green and blue variations of the red color For each iteration it adds to the total number of pixels (population) it's found along the red axis For each red color it sum up the population of the current red and the previous ones, thus storing an accumulated value, for each red note: when I say 'red' I mean each point along the axis that is covered by the iteration, the actual color may not be red but contains a certain amount of red So for the sake of illustration, assume we have 9 "bins" along the red axis and that they have the following populations 4 8 20 16 1 9 12 8 8 After the iteration of all red bins, the partialsum array will contain the following count for the bins mentioned above 4 12 32 48 49 58 70 78 86 And total would have a value of 86 Once that's done it's time to perform the actual median cut and for the red axis this is performed on line 01346 It iterates over bins and check they accumulated sum. And here's the part that throws me of from the description of the algorithm. It looks for the first bin that has a value that is greater than total/2 Wouldn't total/2 mean that it is looking for a bin that has a value that is greater than the average value and not the median ? The median for the above bins would be 49 The use of 43 or 49 could potentially have a huge impact on how the boxes are split, even though the algorithm then proceeds by moving to the center of the larger side of where the matched value was.. Another thing that puzzles me a bit is that the paper specified that the bin with the median value should be located, but does not mention how to proceed if there are an even number of bins.. the median would be the result of (a+b)/2 and it's not guaranteed that any of the bins contains that population count. 
    So this is what makes me think that there are some approximations going on that are negligible because of how the split actually takes place at the center of the larger side of the selected bin. Sorry if it got a bit long-winded, but I wanted to be as thorough as I could because it's been driving me nuts for a couple of days now ;)
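
    The arithmetic is easy to check with the example bins. A sketch of the loop's logic (not Leptonica's code) showing that total/2 = 43 selects the bin whose running sum is 48:

        bins = [4, 8, 20, 16, 1, 9, 12, 8, 8]
        partialsum, total = [], 0
        for b in bins:
            total += b
            partialsum.append(total)   # 4, 12, 32, 48, 49, 58, 70, 78, 86
        cut = next(i for i, s in enumerate(partialsum) if s > total / 2)
        print(cut, partialsum[cut])    # bin index 3, running sum 48 > 43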

    Read the article

  • C#: Regex to extract portions of file name

    - by jakesankey
    I have text files formatted as such: R156484COMP_004A7001_20100104_065119.txt I need to consistently extract the R****COMP, the 004A7001 number, 20100104 (date), and don't care about the 065119 number. the problem is that not ALL of the files being parsed have the exact naming convention. some may be like this: R168166CRIT_156B2075_SU2_20091223_123456.txt or R285476COMP_SU1_125A6025_20100407_123456.txt So how could I use regex instead of split to ensure I am always getting that serial (ex. 004A7001), the date (ex. 20100104), and the R****COMP (or CRIT)??? Here is what I do now but it only gets the files formatted like my first example. if (file.Count(c => c == '_') != 3) continue; and further down in the code I have: string RNumber = Path.GetFileNameWithoutExtension(file); string RNumberE = RNumber.Split('_')[0]; string RNumberD = RNumber.Split('_')[1]; string RNumberDate = RNumber.Split('_')[2]; DateTime dateTime = DateTime.ParseExact(RNumberDate, "yyyyMMdd", Thread.CurrentThread.CurrentCulture); string cmmDate = dateTime.ToString("dd-MMM-yyyy");
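
    A sketch of patterns keyed to the stable pieces of the name (the leading R...COMP/CRIT token, a 3-digits/letter/4-digits serial, an 8-digit date) rather than to underscore positions, so the optional SU1/SU2 token no longer matters:

        string name = Path.GetFileNameWithoutExtension(file);
        // leading token such as R156484COMP or R168166CRIT
        string rNumber = Regex.Match(name, @"^R\d+(COMP|CRIT)").Value;
        // serial such as 004A7001 or 125A6025
        string serial  = Regex.Match(name, @"(?<=_)\d{3}[A-Z]\d{4}(?=_)").Value;
        // 8-digit date followed by another underscore
        string date    = Regex.Match(name, @"(?<=_)\d{8}(?=_)").Value;
        DateTime dateTime = DateTime.ParseExact(date, "yyyyMMdd",
                                                Thread.CurrentThread.CurrentCulture);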

    Read the article

  • Perl unpack in list context

    - by drewk
    A common 'Perlism' is generating a list as something to loop over in this form: for($str=~/./g) { print "the next character from \"$str\"=$_\n"; } In this case the global match regex returns a list that is one character in turn from the string $str, and assigns that value to $_ Instead of a regex, split can be used in the same way or 'a'..'z', map, etc. I am investigating unpack to generate a field by field interpretation of a string. I have always found unpack to be less straightforward to the way my brain works, and I have never really dug that deeply into it. As a simple case, I want to generate a list that is one character in each element from a string using unpack (yes -- I know I can do it with split(//,$str) and /./g but I really want to see if unpack can be used this way...) Obviously, I can use a field list for unpack that is unpack("A1" x length($str), $str) but is there some other way that kinda looks like globbing? ie, can I call unpack(some_format,$str) either in list context or in a loop such that unpack will return the next group of character in the format group until $str is exausted? I have read The Perl 5.12 Pack pod and the Perl 5.12 pack tutorial and the Perkmonks tutorial Here is the sample code: #!/usr/bin/perl use warnings; use strict; my $str=join('',('a'..'z', 'A'..'Z')); #the alphabet... $str=~s/(.{1,3})/$1 /g; #...in groups of three print "str=$str\n\n"; for ($str=~/./g) { print "regex: = $_\n"; } for(split(//,$str)) { print "split: \$_=$_\n"; } for(unpack("A1" x length($str), $str)) { print "unpack: \$_=$_\n"; }
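
    There is in fact a globbing-like template: since Perl 5.8, pack/unpack templates accept parenthesised groups with a * repeat count, so one template consumes the whole string in list context. A sketch against the $str built above:

        # one character at a time, analogous to split(//, $str)
        for (unpack("(A1)*", $str)) {
            print "unpack: \$_=$_\n";
        }

        # or the three-character groups themselves: A strips the trailing
        # space from each four-byte chunk
        for (unpack("(A4)*", $str)) {
            print "group: \$_=$_\n";
        }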

    Read the article

  • Syntax Error with John Resig's Micro Templating after changing template tags <# {% {{ etc..

    - by optician
    I'm having a bit of trouble with John Resig's Micro templating. Can anyone help me with why it isn't working? This is the template <script type="text/html" id="row_tmpl"> test content {%=id%} {%=name%} </script> And the modified section of the engine str .replace(/[\r\t\n]/g, " ") .split("{%").join("\t") .replace(/((^|%>)[^\t]*)'/g, "$1\r") .replace(/\t=(.*?)%>/g, "',$1,'") .split("\t").join("');") .split("%}").join("p.push('") .split("\r").join("\\'") + "');}return p.join('');"); and the javascript var dataObject = { "id": "27", "name": "some more content" }; var html = tmpl("row_tmpl", dataObject); and the result, as you can see =id and =name seem to be in the wrong place? Apart from changing the template syntax blocks from <% % to {% %} I haven't changed anything. This is from Firefox. Error: syntax error Line: 30, Column: 89 Source Code: var p=[],print=function(){p.push.apply(p,arguments);};with(obj){p.push(' test content ');=idp.push(' ');=namep.push(' ');}return p.join('');
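
    Comparing the modified chain against the error output gives it away: only the split/join steps were converted to {% and %}, while the two replace() steps still look for %>. The ={expr} parts therefore never get rewritten, which is exactly the ');=idp.push(' visible in the generated source. A sketch with all four references kept in sync:

        str.replace(/[\r\t\n]/g, " ")
           .split("{%").join("\t")
           .replace(/((^|%})[^\t]*)'/g, "$1\r")   // was %>
           .replace(/\t=(.*?)%}/g, "',$1,'")      // was %>
           .split("\t").join("');")
           .split("%}").join("p.push('")
           .split("\r").join("\\'")
           + "');}return p.join('');"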

    Read the article

  • Regex to extract portions of file name

    - by jakesankey
    I have text files formatted as such: R156484COMP_004A7001_20100104_065119.txt I need to consistently extract the R****COMP, the 004A7001 number, 20100104 (date), and don't care about the 065119 number. the problem is that not ALL of the files being parsed have the exact naming convention. some may be like this: R168166CRIT_156B2075_SU2_20091223_123456.txt or R285476COMP_SU1_125A6025_20100407_123456.txt So how could I use regex instead of split to ensure I am always getting that serial (ex. 004A7001), the date (ex. 20100104), and the R****COMP (or CRIT)??? Here is what I do now but it only gets the files formatted like my first example. if (file.Count(c => c == '_') != 3) continue; and further down in the code I have: string RNumber = Path.GetFileNameWithoutExtension(file); string RNumberE = RNumber.Split('_')[0]; string RNumberD = RNumber.Split('_')[1]; string RNumberDate = RNumber.Split('_')[2]; DateTime dateTime = DateTime.ParseExact(RNumberDate, "yyyyMMdd", Thread.CurrentThread.CurrentCulture); string cmmDate = dateTime.ToString("dd-MMM-yyyy"); UPDATE: This is now where I am at -- I get an error to parse RNumberDate to an actual date format. "Cannot implicitly convert type 'RegularExpressions.Match' to 'string' string RNumber = Path.GetFileNameWithoutExtension(file); Match RNumberE = Regex.Match(RNumber, @"^(R|L)\d{6}(COMP|CRIT|TEST|SU[1-9])(?=_)", RegexOptions.IgnoreCase); Match RNumberD = Regex.Match(RNumber, @"(?<=_)\d{3}[A-Z]\d{4}(?=_)", RegexOptions.IgnoreCase); Match RNumberDate = Regex.Match(RNumber, @"(?<=_)\d{8}(?=_)", RegexOptions.IgnoreCase); DateTime dateTime = DateTime.ParseExact(RNumberDate, "yyyyMMdd", Thread.CurrentThread.CurrentCulture); string cmmDate = dateTime.ToString("dd-MMM-yyyy")
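
    The compile error in the UPDATE is because Regex.Match returns a Match object, not a string; its .Value property (an empty string when nothing matched) is what ParseExact needs:

        Match RNumberDate = Regex.Match(RNumber, @"(?<=_)\d{8}(?=_)",
                                        RegexOptions.IgnoreCase);
        DateTime dateTime = DateTime.ParseExact(RNumberDate.Value, "yyyyMMdd",
                                                Thread.CurrentThread.CurrentCulture);
        string cmmDate = dateTime.ToString("dd-MMM-yyyy");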

    Read the article

  • Use LaTeX Listings to correctly detect and syntax highlight embedded code of a different language in

    - by D W
    I have scripts that have one-liners or sort scripts from other languages within them. How can I have LaTeX listings detect this and change the syntax formating language withing the script? This would be especially useful for awk withing bash I believe. Bash #!/bin/bash ... # usage message to catch bad input without invoking R ... # any bash pre-processing of input ... # etc echo "hello world" R --vanilla << EOF # Data on motor octane ratings for various gasoline blends x <- c(88.5,87.7,83.4,86.7,87.5,91.5,88.6,100.3, 95.6,93.3,94.7,91.1,91.0,94.2,87.5,89.9, 88.3,87.6,84.3,86.7,88.2,90.8,88.3,98.8, 94.2,92.7,93.2,91.0,90.3,93.4,88.5,90.1, 89.2,88.3,85.3,87.9,88.6,90.9,89.0,96.1, 93.3,91.8,92.3,90.4,90.1,93.0,88.7,89.9, 89.8,89.6,87.4,88.9,91.2,89.3,94.4,92.7, 91.8,91.6,90.4,91.1,92.6,89.8,90.6,91.1, 90.4,89.3,89.7,90.3,91.6,90.5,93.7,92.7, 92.2,92.2,91.2,91.0,92.2,90.0,90.7) x length(x) mean(x);var(x) stem(x) EOF perl -n -e ' @t = split(/\t/); %t2 = map { $_ => 1 } split(/,/,$t[1]); $t[1] = join(",",keys %t2); print join("\t",@t); ' knownGeneFromUCSC.txt awk -F'\t' '{ n = split($2, t, ","); _2 = x split(x, _) # use delete _ if supported for (i = 0; ++i <= n;) _[t[i]]++ || _2 = _2 ? _2 "," t[i] : t[i] $2 = _2 }-3' OFS='\t' infile Python #!/usr/local/bin/python print "Hello World" os.system(""" VAR=even; sed -i "s/$VAR/odd/" testfile; for i in `cat testfile` ; do echo $i; done; echo "now the tr command is removing the vowels"; cat testfile |tr 'aeiou' ' ' """)
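
    listings has no automatic detection of an embedded language. The two usual workarounds are alsolanguage (which merges a second keyword set over the whole block, imperfectly) or splitting the script across back-to-back lstlisting environments at the heredoc boundaries. A sketch of the latter, using lines from the bash example above:

        \lstset{basicstyle=\ttfamily\small}

        \begin{lstlisting}[language=bash]
        #!/bin/bash
        echo "hello world"
        R --vanilla << EOF
        \end{lstlisting}
        \begin{lstlisting}[language=R]
        x <- c(88.5, 87.7, 83.4)
        mean(x); var(x)
        \end{lstlisting}
        \begin{lstlisting}[language=bash]
        EOF
        \end{lstlisting}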

    Read the article

  • Validate domain against LDAP?

    - by lucian.jp
    I have a procedure to get the name of the logged user show on the site. I get it this way : var winIdentity = (WindowsIdentity) HttpContext.Current.User.Identity; if (winIdentity != null) { string domainUser = winIdentity.Name.Replace(@"\", "/"); string domain = winIdentity.Name.Split('\\')[0]; string user = winIdentity.Name.Split('\\')[1]; var myDe = new DirectoryEntry(ConfigurationManager.ConnectionStrings["LDAP"].ConnectionString, ConfigurationManager.AppSettings["LDAPCredentials"].Split(';')[0], ConfigurationManager.AppSettings["LDAPCredentials"].Split(';')[1]); var deSearcher = new DirectorySearcher(myDe) {Filter = "(&(sAMAccountName=" + user + "))"}; SearchResult result = deSearcher.FindOne(); if (result != null) { DirectoryEntry userDe = result.GetDirectoryEntry(); lblNameAD.Text = string.Format(lblNameAD.Text, userDe.Properties["givenName"].Value, userDe.Properties["sn"].Value); } else { var adEntry = new DirectoryEntry("WinNT://" + domainUser); string fullname = adEntry.Properties["FullName"].Value.ToString(); lblNameAD.Text = string.Format(lblNameAD.Text, !string.IsNullOrEmpty(fullname) ? fullname : user, null); } } Probleme id that if I have a local useraccount with the same username that one from LDAP, it passes the check and return the name. EX: local\MyUser domain\MyUser Both return the name from AD even if the one from local isn't a domain account. It would be perfect if I could search in LDAP for domainuser, but it seems I can't. I also tried to restrict the DC with the DirectorySearcher but the domain name is "domain", but I only have "dc=dom" and "dc=com" and no DC for full domain name.
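
    Since the sAMAccountName filter cannot tell which authority issued the token, one option is to reject non-domain logons before ever querying LDAP by comparing the token's domain part with the expected one. A sketch; the "DomainNetbiosName" app setting is hypothetical and stands in for however the expected NetBIOS domain name is configured:

        string domain = winIdentity.Name.Split('\\')[0];
        // hypothetical setting: the NetBIOS name of the AD domain that the
        // LDAP connection string points at
        string expected = ConfigurationManager.AppSettings["DomainNetbiosName"];
        if (!domain.Equals(expected, StringComparison.OrdinalIgnoreCase))
        {
            // local account such as MACHINE\MyUser: skip the LDAP lookup
            lblNameAD.Text = string.Format(lblNameAD.Text, user, null);
            return;
        }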

    Read the article

  • To have efficient many-to-many relation in Java

    - by Masi
    How can you make the efficient many-to-many -relation from fileID to Words and from word to fileIDs without database -tools like Postgres in Java? I have the following classes. The relation from fileID to words is cheap, but not the reverse, since I need three for -loops for it. My solution is not apparently efficient. Other options may be to create an extra class that have word as an ID with the ArrayList of fileIDs. Reply to JacobM's answer The relevant part of MyFile's constructors is: /** * Synopsis of data in wordToWordConutInFile.txt: * fileID|wordID|wordCount * * Synopsis of the data in the file wordToWordID.txt: * word|wordID **/ /** * Getting words by getting first wordIDs from wordToWordCountInFile.txt and then words in wordToWordID.txt. */ InputStream in2 = new FileInputStream("/home/dev/wordToWordCountInFile.txt"); BufferedReader fi2 = new BufferedReader(new InputStreamReader(in2)); ArrayList<Integer> wordIDs = new ArrayList<Integer>(); String line = null; while ((line = fi2.readLine()) != null) { if ((new Integer(line.split("|")[0]) == currentFileID)) { wordIDs.add(new Integer(line.split("|")[6])); } } in2.close(); // Getting now the words by wordIDs. InputStream in3 = new FileInputStream("/home/dev/wordToWordID.txt"); BufferedReader fi3 = new BufferedReader(new InputStreamReader(in3)); line = null; while ((line = fi3.readLine()) != null) { for (Integer wordID : wordIDs) { if (wordID == (new Integer(line.split("|")[1]))) { this.words.add(new Word(new String(line.split("|")[0]), fileID)); break; } } } in3.close(); this.words.addAll(words); The constructor of Word is at the paste.
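
    Two notes. First, line.split("|") is a bug in its own right: String.split takes a regex, and a bare | is alternation, so it splits between every character; the pipe needs escaping as split("\\|"). Second, the cheap way to make word-to-fileIDs as fast as the forward direction is an inverted index built once while reading the files. A minimal sketch, independent of the MyFile/Word classes:

        // word -> file IDs: the mapping the three nested loops reconstruct
        Map<String, List<Integer>> wordToFileIds =
                new HashMap<String, List<Integer>>();

        void add(String word, int fileId) {
            List<Integer> ids = wordToFileIds.get(word);
            if (ids == null) {
                ids = new ArrayList<Integer>();
                wordToFileIds.put(word, ids);
            }
            ids.add(fileId);
        }
        // remember: fields are '|'-separated, so parse with line.split("\\|")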

    Read the article

  • What is better in WPF for UI layout, using one Grid, or nested Grids.

    - by Matthijs Wessels
    I am making a UI in WPF. I have a bunch of functional areas and I use a Grid to organize them. Now the Grid that I want is not uniform: some functional areas will span multiple cells in the Grid. I was wondering what the best practice is for solving this. Should I create one grid and then, for each functional area, set it to span multiple cells, or should I split it up into multiple nested Grids? In this image, the leftmost panel (panels separated by the gray bar) is what I want. The middle panel shows one grid where the blue lines are overlapped by a functional area. The rightmost panel shows how I could do it with nested grids. You can see the green grid has one horizontal split. In the bottom cell is the yellow Grid with a vertical split. Inside the left cell is the red Grid with again a horizontal split. I was just wondering which is best practice, the middle or the right panel.
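
    For reference, the spanning (single-grid) variant needs no nesting: the overlapping area just sets Grid.RowSpan or Grid.ColumnSpan. A sketch of a 2x2 grid whose left area covers both rows; the Border elements are placeholders for the functional areas:

        <Grid>
          <Grid.RowDefinitions>
            <RowDefinition Height="*" />
            <RowDefinition Height="*" />
          </Grid.RowDefinitions>
          <Grid.ColumnDefinitions>
            <ColumnDefinition Width="*" />
            <ColumnDefinition Width="*" />
          </Grid.ColumnDefinitions>
          <!-- left functional area spans both rows -->
          <Border Grid.Column="0" Grid.RowSpan="2" />
          <Border Grid.Column="1" Grid.Row="0" />
          <Border Grid.Column="1" Grid.Row="1" />
        </Grid>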

    Read the article

  • JavaScript .splice() not working correctly

    - by adardesign
    I am setting a cookie for each navigation container that is clicked on. It sets an array that is joined and set the cookie value. if its clicked again then its removed from the array. It somehow buggy. It only splices after clicking on other elements. and then it behaves weird. Thanks much. var navLinkToOpen; var setNavCookie = function(value){ var isSet = false; var checkCookies = checkNavCookie() setCookieHelper = checkCookies? checkCookies.split(","): []; console.log("value passed", value) for(i in setCookieHelper){ if(value == setCookieHelper[i]){ setCookieHelper.splice(value,1); isSet = true; } } if(!isSet){ setCookieHelper.push(value) } setCookieHelper.join(",") document.cookie = "navLinkToOpen"+"="+setCookieHelper; } var checkNavCookie = function(){ var allCookies = document.cookie.split( ';' ); for (i = 0; i < allCookies.length; i++ ){ temp = allCookies[i].split("=") if(temp[0].match("navLinkToOpen")){ var getValue = temp[1] } } return getValue || false } $(document).ready(function() { $("#LeftNav li").has("b").addClass("navHeader").not(":first").siblings("li").hide() $(".navHeader").click(function(){ $(this).toggleClass("collapsed").nextUntil("li:has('b')").slideToggle(300); setNavCookie($('.navHeader').index($(this))) return false }) console.log("init",document.cookie) var testCookies = checkNavCookie(); if(testCookies){ finalArrayValue = testCookies.split(",") for(i in finalArrayValue){ $(".navHeader").eq(finalArrayValue[i]).toggleClass("collapsed").nextUntil(".navHeader").slideToggle (0); } } });
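
    Two lines in setNavCookie explain the behaviour: splice(value, 1) treats the stored value as an array index (Array#splice takes a start index, not an element to remove), and setCookieHelper.join(",") returns a new string that is discarded, since join does not mutate the array. A sketch of the corrected body:

        for (var i = 0; i < setCookieHelper.length; i++) {
            if (value == setCookieHelper[i]) {
                setCookieHelper.splice(i, 1);   // splice by index, not by value
                isSet = true;
                break;
            }
        }
        if (!isSet) {
            setCookieHelper.push(value);
        }
        // join() returns a string; it does not modify the array in place
        document.cookie = "navLinkToOpen=" + setCookieHelper.join(",");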

    Read the article

  • Python: add two lists and create a new list

    - by Adam G.
    lst1 = ['company1,AAA,7381.0 ', 'company1,BBB,-8333.0 ', 'company1,CCC, 3079.999 ', 'company1,DDD,5699.0 ', 'company1,EEE,1640.0 ', 'company1,FFF,-600.0 ', 'company1,GGG,3822.0 ', 'company1,HHH,-600.0 ', 'company1,JJJ,-4631.0 ', 'company1,KKK,-400.0 '] lst2 =['company1,AAA,-4805.0 ', 'company1,ZZZ,-2576.0 ', 'company1,BBB,1674.0 ', 'company1,CCC,3600.0 ', 'company1,DDD,1743.998 '] output I need == ['company1,AAA,2576.0','company1,ZZZ,-2576.0 ','company1,KKK,-400.0 ' etc etc] I need to add it similar product number in each list and move it to a new list. I also need any symbol not being added together to be added to that new list. I am having problems with moving through each list. This is what I have: h = [] z = [] a = [] for g in hhl: spl1 = g.split(",") h.append(spl1[1]) for j in c_hhl: spl2 = j.split(",") **if spl2[1] in h: converted_num =(float(spl2[2]) +float(spl1[2])) pos=('{0},{1},{2}'.format(spl2[0],spl2[1],converted_num)) z.append(pos)** else: pos=('{0},{1},{2}'.format(spl2[0],spl2[1],spl2[2])) z.append(pos) for f in z: spl3 = f.split(",") a.append(spl3[1]) for n in hhl[:]: spl4 = n.split(",") if spl4[1] in a: got = (spl4[0],spl4[1],spl4[2]) hhl.remove(n) smash = hhl+z #for i in smash: for i in smash: print(i) I am having problem iterating through the list to make sure I get all of the simliar product to a new list,(bold) and any product not in list 1 but in lst2 to the new list and vice versa. I am sure there is a much easier way.
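
    Assuming the stated goal (sum the third field where a company/product pair appears in both lists, and carry lone entries through unchanged), a dict keyed on the first two fields collapses the nested loops. A sketch producing the expected company1,AAA,2576.0:

        totals = {}
        for rec in lst1 + lst2:
            company, product, amount = rec.split(",")
            key = (company, product.strip())
            # pairs present in both lists get summed; lone ones pass through
            totals[key] = totals.get(key, 0.0) + float(amount)

        for (company, product), amount in totals.items():
            print("{0},{1},{2}".format(company, product, amount))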

    Read the article

  • RetinaJS and LESS : Background image doesn't show on iOS

    - by jidma
    I am trying to make a background image into a retina image using LESS CSS and RetinaJs: in my index.html file : <html> <head> <meta http-equiv="content-type" content="text/html; charset=UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=0, minimum-scale=1.0, maximum-scale=1.0"> <meta name="apple-mobile-web-app-capable" content="yes"> <meta name="apple-mobile-web-app-status-bar-style" content="black"> [...] <link type="text/css" rel="stylesheet/less" href="resources/css/retina.less"> <script type="text/javascript" src="resources/js/less-1.3.0.minjs" ></script> [...] </head> <body> [...] <script type="text/javascript" src="resources/js/retina.js"></script> </body> </html> in my retina.less file: .at2x(@path, @w: auto, @h: auto) { background-image: url("@{path}"); @at2x_path: ~`"@{path}".split('.').slice(0, "@{path}".split('.').length - 1).join(".") + "@2x" + "." + "@{path}".split('.')["@{path}".split('.').length - 1]`; @media all and (-webkit-min-device-pixel-ratio : 1.5) { background-image: url("@{at2x_path}"); background-size: @w @h; } } .topMenu { .at2x('../../resources/img/topMenuTitle.png'); } I have both topMenuTitle.png (320px x 40px) and [email protected] (640px x 80px) in the same folder. When test this code: In Firefox i have the normal Background In the XCode iPhone simulator I also have the normal Background In the iPhone device, I don't have any background at all. I'm using GWT if that matters. Any suggestions ? Thanks.
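
    One thing to check before suspecting the JavaScript interpolation: .topMenu calls .at2x() without width/height, so inside the media query background-size resolves to auto auto and the 640x80 asset paints at double size, showing only part of the image. Hard-coding the rule the mixin should emit is a quick way to isolate that; a sketch using the dimensions given above:

        .topMenu {
          background-image: url('../../resources/img/topMenuTitle.png');
        }
        @media all and (-webkit-min-device-pixel-ratio: 1.5) {
          .topMenu {
            background-image: url('../../resources/img/[email protected]');
            background-size: 320px 40px;   /* CSS-pixel size of the 1x asset */
          }
        }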

    Read the article
