Search Results

Search found 2025 results on 81 pages for 'hough transform'.


  • Using AES encryption in .NET - CryptographicException saying the padding is invalid and cannot be removed

    - by Jake Petroules
    I wrote some AES encryption code in C# and I am having trouble getting it to encrypt and decrypt properly. If I enter "test" as the passphrase and "This data must be kept secret from everyone!" I receive the following exception: System.Security.Cryptography.CryptographicException: Padding is invalid and cannot be removed. at System.Security.Cryptography.RijndaelManagedTransform.DecryptData(Byte[] inputBuffer, Int32 inputOffset, Int32 inputCount, Byte[]& outputBuffer, Int32 outputOffset, PaddingMode paddingMode, Boolean fLast) at System.Security.Cryptography.RijndaelManagedTransform.TransformFinalBlock(Byte[] inputBuffer, Int32 inputOffset, Int32 inputCount) at System.Security.Cryptography.CryptoStream.FlushFinalBlock() at System.Security.Cryptography.CryptoStream.Dispose(Boolean disposing) at System.IO.Stream.Close() at System.IO.Stream.Dispose() ... And if I enter something less than 16 characters I get no output. I believe I need some special handling in the encryption since AES is a block cipher, but I'm not sure exactly what that is, and I wasn't able to find any examples on the web showing how. Here is my code: using System; using System.IO; using System.Security.Cryptography; using System.Text; public static class DatabaseCrypto { public static EncryptedData Encrypt(string password, string data) { return DatabaseCrypto.Transform(true, password, data, null, null) as EncryptedData; } public static string Decrypt(string password, EncryptedData data) { return DatabaseCrypto.Transform(false, password, data.DataString, data.SaltString, data.MACString) as string; } private static object Transform(bool encrypt, string password, string data, string saltString, string macString) { using (AesManaged aes = new AesManaged()) { aes.Mode = CipherMode.CBC; aes.Padding = PaddingMode.PKCS7; int key_len = aes.KeySize / 8; int iv_len = aes.BlockSize / 8; const int salt_size = 8; const int iterations = 8192; byte[] salt = encrypt ? new Rfc2898DeriveBytes(string.Empty, salt_size).Salt : Convert.FromBase64String(saltString); byte[] bc_key = new Rfc2898DeriveBytes("BLK" + password, salt, iterations).GetBytes(key_len); byte[] iv = new Rfc2898DeriveBytes("IV" + password, salt, iterations).GetBytes(iv_len); byte[] mac_key = new Rfc2898DeriveBytes("MAC" + password, salt, iterations).GetBytes(16); aes.Key = bc_key; aes.IV = iv; byte[] rawData = encrypt ? Encoding.UTF8.GetBytes(data) : Convert.FromBase64String(data); using (ICryptoTransform transform = encrypt ? aes.CreateEncryptor() : aes.CreateDecryptor()) using (MemoryStream memoryStream = encrypt ? new MemoryStream() : new MemoryStream(rawData)) using (CryptoStream cryptoStream = new CryptoStream(memoryStream, transform, encrypt ? 
CryptoStreamMode.Write : CryptoStreamMode.Read)) { if (encrypt) { cryptoStream.Write(rawData, 0, rawData.Length); return new EncryptedData(salt, mac_key, memoryStream.ToArray()); } else { byte[] originalData = new byte[rawData.Length]; int count = cryptoStream.Read(originalData, 0, originalData.Length); return Encoding.UTF8.GetString(originalData, 0, count); } } } } } public class EncryptedData { public EncryptedData() { } public EncryptedData(byte[] salt, byte[] mac, byte[] data) { this.Salt = salt; this.MAC = mac; this.Data = data; } public EncryptedData(string salt, string mac, string data) { this.SaltString = salt; this.MACString = mac; this.DataString = data; } public byte[] Salt { get; set; } public string SaltString { get { return Convert.ToBase64String(this.Salt); } set { this.Salt = Convert.FromBase64String(value); } } public byte[] MAC { get; set; } public string MACString { get { return Convert.ToBase64String(this.MAC); } set { this.MAC = Convert.FromBase64String(value); } } public byte[] Data { get; set; } public string DataString { get { return Convert.ToBase64String(this.Data); } set { this.Data = Convert.FromBase64String(value); } } } static void ReadTest() { Console.WriteLine("Enter password: "); string password = Console.ReadLine(); using (StreamReader reader = new StreamReader("aes.cs.txt")) { EncryptedData enc = new EncryptedData(); enc.SaltString = reader.ReadLine(); enc.MACString = reader.ReadLine(); enc.DataString = reader.ReadLine(); Console.WriteLine("The decrypted data was: " + DatabaseCrypto.Decrypt(password, enc)); } } static void WriteTest() { Console.WriteLine("Enter data: "); string data = Console.ReadLine(); Console.WriteLine("Enter password: "); string password = Console.ReadLine(); EncryptedData enc = DatabaseCrypto.Encrypt(password, data); using (StreamWriter stream = new StreamWriter("aes.cs.txt")) { stream.WriteLine(enc.SaltString); stream.WriteLine(enc.MACString); stream.WriteLine(enc.DataString); Console.WriteLine("The encrypted data was: " + enc.DataString); } }
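
    A likely cause of both symptoms (no output at all for inputs shorter than one block, and "Padding is invalid" on decrypt for longer ones) is that memoryStream.ToArray() is called while the CryptoStream is still open: a write-mode CryptoStream only emits the final, padded block when FlushFinalBlock() is called or the stream is closed, so the ciphertext captured here is always missing its last block. The sketch below is a minimal round trip showing the ordering only; it is not the poster's full scheme (the AesRoundTrip name is illustrative, and key/IV derivation and the MAC are omitted).

        // Minimal sketch (illustrative only): ToArray() must come after the
        // CryptoStream has flushed its final padded block.
        using System;
        using System.IO;
        using System.Security.Cryptography;
        using System.Text;

        static class AesRoundTrip
        {
            public static byte[] Encrypt(byte[] key, byte[] iv, string plainText)
            {
                using (var aes = new AesManaged { Mode = CipherMode.CBC, Padding = PaddingMode.PKCS7, Key = key, IV = iv })
                using (var ms = new MemoryStream())
                {
                    using (var cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
                    {
                        byte[] raw = Encoding.UTF8.GetBytes(plainText);
                        cs.Write(raw, 0, raw.Length);
                        cs.FlushFinalBlock();      // emits the last, padded block
                    }
                    return ms.ToArray();           // safe: ciphertext is complete here
                }
            }

            public static string Decrypt(byte[] key, byte[] iv, byte[] cipherText)
            {
                using (var aes = new AesManaged { Mode = CipherMode.CBC, Padding = PaddingMode.PKCS7, Key = key, IV = iv })
                using (var ms = new MemoryStream(cipherText))
                using (var cs = new CryptoStream(ms, aes.CreateDecryptor(), CryptoStreamMode.Read))
                using (var reader = new StreamReader(cs, Encoding.UTF8))
                {
                    return reader.ReadToEnd();     // reads up to the unpadded end of stream
                }
            }
        }

    In the original code the equivalent fix would be to close (or flush) the CryptoStream before taking memoryStream.ToArray() in the encrypt branch.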


  • jBullet Collision/Physics not working correctly

    - by Kenneth Bray
    Below is the code for one of my objects in the game I am creating (yes although this is a cube, I am not making anything remotely like MineCraft), and my issue is I while the cube will display and is does follow the physics if the cube falls, it does not interact with any other objects in the game. If I was to have multiple cubes in screen at once they all just sit there, or shoot off in all directions never stopping. Anyway, I am new to jBullet, and any help would be appreciated. package Object; import static org.lwjgl.opengl.GL11.GL_QUADS; import static org.lwjgl.opengl.GL11.glBegin; import static org.lwjgl.opengl.GL11.glColor3f; import static org.lwjgl.opengl.GL11.glEnd; import static org.lwjgl.opengl.GL11.glPopMatrix; import static org.lwjgl.opengl.GL11.glPushMatrix; import static org.lwjgl.opengl.GL11.glVertex3f; import javax.vecmath.Matrix4f; import javax.vecmath.Quat4f; import javax.vecmath.Vector3f; import com.bulletphysics.collision.shapes.BoxShape; import com.bulletphysics.collision.shapes.CollisionShape; import com.bulletphysics.dynamics.RigidBody; import com.bulletphysics.dynamics.RigidBodyConstructionInfo; import com.bulletphysics.linearmath.DefaultMotionState; import com.bulletphysics.linearmath.Transform; public class Cube { // Cube size/shape variables private float size; boolean cubeCollidable; boolean cubeDestroyable; // Position variables - currently this defines the center of the cube private float posX; private float posY; private float posZ; // Rotation variables - should be between 0 and 359, might consider letting rotation go higher though I can't think of a purpose currently private float rotX; private float rotY; private float rotZ; //collision shape is a box shape CollisionShape fallShape; // setup the motion state for the ball DefaultMotionState fallMotionState; Vector3f fallInertia = new Vector3f(0, 1, 0); RigidBodyConstructionInfo fallRigidBodyCI; public RigidBody fallRigidBody; int mass = 1; // Constructor public Cube(float pX, float pY, float pZ, float pSize) { posX = pX; posY = pY; posZ = pZ; size = pSize; rotX = 0; rotY = 0; rotZ = 0; // define the physics based on the values passed in fallShape = new BoxShape(new Vector3f(size, size, size)); fallMotionState = new DefaultMotionState(new Transform(new Matrix4f(new Quat4f(0, 0, 0, 1), new Vector3f(0, 50, 0), 1f))); fallRigidBodyCI = new RigidBodyConstructionInfo(mass, fallMotionState, fallShape, fallInertia); fallRigidBody = new RigidBody(fallRigidBodyCI); } public void Update() { Transform trans = new Transform(); fallRigidBody.getMotionState().getWorldTransform(trans); posY = trans.origin.x; posX = trans.origin.y; posZ = trans.origin.z; } public void Draw() { fallShape.calculateLocalInertia(mass, fallInertia); // center point posX, posY, posZ float radius = size / 2; //top glPushMatrix(); glBegin(GL_QUADS); { glColor3f(1.0f,0.0f,0.0f); // red glVertex3f(posX + radius, posY + radius, posZ - radius); glVertex3f(posX - radius, posY + radius, posZ - radius); glVertex3f(posX - radius, posY + radius, posZ + radius); glVertex3f(posX + radius, posY + radius, posZ + radius); } glEnd(); glPopMatrix(); //bottom glPushMatrix(); glBegin(GL_QUADS); { glColor3f(1.0f,1.0f,0.0f); // ?? 
color glVertex3f(posX + radius, posY - radius, posZ + radius); glVertex3f(posX - radius, posY - radius, posZ + radius); glVertex3f(posX - radius, posY - radius, posZ - radius); glVertex3f(posX + radius, posY - radius, posZ - radius); } glEnd(); glPopMatrix(); //right side glPushMatrix(); glBegin(GL_QUADS); { glColor3f(1.0f,0.0f,1.0f); // ?? color glVertex3f(posX + radius, posY + radius, posZ + radius); glVertex3f(posX + radius, posY - radius, posZ + radius); glVertex3f(posX + radius, posY - radius, posZ - radius); glVertex3f(posX + radius, posY + radius, posZ - radius); } glEnd(); glPopMatrix(); //left side glPushMatrix(); glBegin(GL_QUADS); { glColor3f(0.0f,1.0f,1.0f); // ?? color glVertex3f(posX - radius, posY + radius, posZ - radius); glVertex3f(posX - radius, posY - radius, posZ - radius); glVertex3f(posX - radius, posY - radius, posZ + radius); glVertex3f(posX - radius, posY + radius, posZ + radius); } glEnd(); glPopMatrix(); //front side glPushMatrix(); glBegin(GL_QUADS); { glColor3f(0.0f,0.0f,1.0f); //blue glVertex3f(posX + radius, posY + radius, posZ + radius); glVertex3f(posX - radius, posY + radius, posZ + radius); glVertex3f(posX - radius, posY - radius, posZ + radius); glVertex3f(posX + radius, posY - radius, posZ + radius); } glEnd(); glPopMatrix(); //back side glPushMatrix(); glBegin(GL_QUADS); { glColor3f(0.0f,1.0f,0.0f); // green glVertex3f(posX + radius, posY - radius, posZ - radius); glVertex3f(posX - radius, posY - radius, posZ - radius); glVertex3f(posX - radius, posY + radius, posZ - radius); glVertex3f(posX + radius, posY + radius, posZ - radius); } glEnd(); glPopMatrix(); } }


  • Is this XSLT correct for the XML file which I have developed?

    - by atrueguy
    This is my XML file. I need to develop a xslt for this. <?xml version="1.0" encoding="ISO-8859-1"?> <!--<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">--> <!-- Generator: Arbortext IsoDraw 7.0 --> <svg width="100%" height="100%" viewBox="0 0 214.819 278.002"> <g id="Standard_x0020_layer"/> <g id="Catalog"> <line stroke-width="0.353" stroke-linecap="butt" x1="5.839" y1="262.185" x2="209.039" y2="262.185"/> <text transform="matrix(0.984 0 0 0.93 183.515 265.271)" stroke="none" fill="#000000" font-family="'Helvetica'" font-size="3.174">© 2009 k Co.</text> <text transform="matrix(0.994 0 0 0.93 7.235 265.3)" stroke="none" fill="#000000" font-family="'Helvetica'" font-size="3.174">087156-8-</text> <text transform="matrix(0.995 0 0 0.93 21.708 265.357)" stroke="none" fill="#000000" font-family="'Helvetica'" font-size="3.174" font-weight="bold">AB</text> <path stroke-width="0.088" stroke-linecap="butt" stroke-dasharray="2.822 1.058" d="M162.037 107.578L174.439 100.417L180.698 104.03"/> <g id="AUTOID_20445" class="52971"> <line stroke-width="0.088" stroke-linecap="butt" x1="68.859" y1="43.621" x2="65.643" y2="45.399"/> <text transform="matrix(0.944 0 0 0.93 69.165 43.356)" stroke="none" fill="#000000" font-family="'Helvetica'" font-size="2.775" font-weight="bold">52971</text> </g> </g> </svg> I have developed a XSLT for this in this way, but I am failing to produce the desired output can any one help me in this. <?xml version="1.0" encoding="ISO-8859-1"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:svg="http://www.w3.org/2000/svg"> <xsl:template match="/"> <fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format"> <fo:layout-master-set> <fo:simple-page-master master-name="simple" page-height="11in" page-width="8.5in"> <fo:region-body margin="0.7in" margin-top="1.15in" margin-left=".8in"/> <fo:region-before extent="1.5in"/> <fo:region-after extent="1.5in"/> <fo:region-start extent="1.5in"/> <fo:region-end extent="1.5in"/> </fo:simple-page-master> </fo:layout-master-set> <fo:page-sequence master-reference="simple"> <fo:flow flow-name="xsl-region-body"> <fo:block> <fo:instream-foreign-object> <svg:svg xmlns:svg="http://www.w3.org/2000/svg" height="100%" width="100%" viewBox="0 0 214.819 278.002"> <xsl:for-each select="svg/g"> <svg:g style="stroke:none;fill:#000000;"> <svg:path> <xsl:variable name="s"> <xsl:value-of select="translate(@d,' ','')"/> </xsl:variable> <xsl:attribute name="d"><xsl:value-of select="translate($s,',',' ')"/></xsl:attribute> </svg:path> </svg:g> </xsl:for-each> <xsl:for-each select="svg/g"> <svg:line x1 = "{$x1}" y1 = "{$y1}" x2 = "{$x2}" y2 = "{$y2}" style = "stroke-width: 0.088; stroke: black;"/> <line xmlns="http://www.w3.org/2000/svg" x1="{$x1}" y1="{$y1}" x2="{$x2}" y2="{$y2}" stroke-width="0.088" stroke="black" fill="#000000" /> </xsl:for-each> </svg:svg> </fo:instream-foreign-object> </fo:block> </fo:flow> </fo:page-sequence> </fo:root> </xsl:template> </xsl:stylesheet> Please help me in this


  • unexplainable packet drops with 5 ethernet NICs and low traffic on Ubuntu

    - by jon
    I'm stuck on problem where my machine started to drops packets with no sign of ANY system load or high interrupt usage after an upgrade to Ubuntu 12.04. My server is a network monitoring sensor, running Ubuntu LTS 12.04, it passively collects packets from 5 interfaces doing network intrusion type stuff. Before the upgrade I managed to collect 200+GB of packets a day while writing them to disk with around 0% packet loss depending on the day with the help of CPU affinity and NIC IRQ to CPU bindings. Now I lose a great deal of packets with none of my applications running and at very low PPS rate which a modern workstation NIC would have no trouble with. Specs: x64 Xeon 4 cores 3.2 Ghz 16 GB RAM NICs: 5 Intel Pro NICs using the e1000 driver (NAPI). [1] eth0 and eth1 are integrated NICs (in the motherboard) There are 2 other PCI-X network cards, each with 2 Ethernet ports. 3 of the interfaces are running at Gigabit Ethernet, the others are not because they're attached to hubs. Specs: [2] http://support.dell.com/support/edocs/systems/pe2850/en/ug/t1390aa.htm uptime 17:36:00 up 1:43, 2 users, load average: 0.00, 0.01, 0.05 # uname -a Linux nms 3.2.0-29-generic #46-Ubuntu SMP Fri Jul 27 17:03:23 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux I also have the CPU governor set to performance mode and irqbalance off. The problem still occurs with them on. # lspci -t -vv -[0000:00]-+-00.0 Intel Corporation E7520 Memory Controller Hub +-02.0-[01-03]--+-00.0-[02]----0e.0 Dell PowerEdge Expandable RAID controller 4 | \-00.2-[03]-- +-04.0-[04]-- +-05.0-[05-07]--+-00.0-[06]----07.0 Intel Corporation 82541GI Gigabit Ethernet Controller | \-00.2-[07]----08.0 Intel Corporation 82541GI Gigabit Ethernet Controller +-06.0-[08-0a]--+-00.0-[09]--+-04.0 Intel Corporation 82546EB Gigabit Ethernet Controller (Copper) | | \-04.1 Intel Corporation 82546EB Gigabit Ethernet Controller (Copper) | \-00.2-[0a]--+-02.0 Digium, Inc. Wildcard TE210P/TE212P dual-span T1/E1/J1 card 3.3V | +-03.0 Intel Corporation 82546EB Gigabit Ethernet Controller (Copper) | \-03.1 Intel Corporation 82546EB Gigabit Ethernet Controller (Copper) +-1d.0 Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #1 +-1d.1 Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #2 +-1d.2 Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #3 +-1d.7 Intel Corporation 82801EB/ER (ICH5/ICH5R) USB2 EHCI Controller +-1e.0-[0b]----0d.0 Advanced Micro Devices [AMD] nee ATI RV100 QY [Radeon 7000/VE] +-1f.0 Intel Corporation 82801EB/ER (ICH5/ICH5R) LPC Interface Bridge \-1f.1 Intel Corporation 82801EB/ER (ICH5/ICH5R) IDE Controller I believe the NIC nor the NIC drivers are dropping the packets because ethtool reports 0 under rx_missed_errors and rx_no_buffer_count for each interface. On the old system, if it couldn't keep up this is where the drops would be. I drop packets on multiple interfaces just about every second, usually in small increments of 2-4. I tried all these sysctl values, I'm currently using the uncommented ones. # cat /etc/sysctl.conf # high net.core.netdev_max_backlog = 3000000 net.core.rmem_max = 16000000 net.core.rmem_default = 8000000 # defaults #net.core.netdev_max_backlog = 1000 #net.core.rmem_max = 131071 #net.core.rmem_default = 163480 # moderate #net.core.netdev_max_backlog = 10000 #net.core.rmem_max = 33554432 #net.core.rmem_default = 33554432 Here's an example of an interface stats report with ethtool. 
They are all the same, nothing is out of the ordinary ( I think ), so I'm only going to show one: ethtool -S eth2 NIC statistics: rx_packets: 7498 tx_packets: 0 rx_bytes: 2722585 tx_bytes: 0 rx_broadcast: 327 tx_broadcast: 0 rx_multicast: 1504 tx_multicast: 0 rx_errors: 0 tx_errors: 0 tx_dropped: 0 multicast: 1504 collisions: 0 rx_length_errors: 0 rx_over_errors: 0 rx_crc_errors: 0 rx_frame_errors: 0 rx_no_buffer_count: 0 rx_missed_errors: 0 tx_aborted_errors: 0 tx_carrier_errors: 0 tx_fifo_errors: 0 tx_heartbeat_errors: 0 tx_window_errors: 0 tx_abort_late_coll: 0 tx_deferred_ok: 0 tx_single_coll_ok: 0 tx_multi_coll_ok: 0 tx_timeout_count: 0 tx_restart_queue: 0 rx_long_length_errors: 0 rx_short_length_errors: 0 rx_align_errors: 0 tx_tcp_seg_good: 0 tx_tcp_seg_failed: 0 rx_flow_control_xon: 0 rx_flow_control_xoff: 0 tx_flow_control_xon: 0 tx_flow_control_xoff: 0 rx_long_byte_count: 2722585 rx_csum_offload_good: 0 rx_csum_offload_errors: 0 alloc_rx_buff_failed: 0 tx_smbus: 0 rx_smbus: 0 dropped_smbus: 01 # ifconfig eth0 Link encap:Ethernet HWaddr 00:11:43:e0:e2:8c UP BROADCAST RUNNING NOARP PROMISC ALLMULTI MULTICAST MTU:1500 Metric:1 RX packets:373348 errors:16 dropped:95 overruns:0 frame:16 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:356830572 (356.8 MB) TX bytes:0 (0.0 B) eth1 Link encap:Ethernet HWaddr 00:11:43:e0:e2:8d UP BROADCAST RUNNING NOARP PROMISC ALLMULTI MULTICAST MTU:1500 Metric:1 RX packets:13616 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:8690528 (8.6 MB) TX bytes:0 (0.0 B) eth2 Link encap:Ethernet HWaddr 00:04:23:e1:77:6a UP BROADCAST RUNNING NOARP PROMISC ALLMULTI MULTICAST MTU:1500 Metric:1 RX packets:7750 errors:0 dropped:471 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:2780935 (2.7 MB) TX bytes:0 (0.0 B) eth3 Link encap:Ethernet HWaddr 00:04:23:e1:77:6b UP BROADCAST RUNNING NOARP PROMISC ALLMULTI MULTICAST MTU:1500 Metric:1 RX packets:5112 errors:0 dropped:206 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:639472 (639.4 KB) TX bytes:0 (0.0 B) eth4 Link encap:Ethernet HWaddr 00:04:23:b6:35:6c UP BROADCAST RUNNING NOARP PROMISC ALLMULTI MULTICAST MTU:1500 Metric:1 RX packets:961467 errors:0 dropped:935 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:958561305 (958.5 MB) TX bytes:0 (0.0 B) eth5 Link encap:Ethernet HWaddr 00:04:23:b6:35:6d inet addr:192.168.1.6 Bcast:192.168.1.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:4264 errors:0 dropped:16 overruns:0 frame:0 TX packets:699 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:572228 (572.2 KB) TX bytes:124456 (124.4 KB) I tried the defaults, then started to play around with settings. I wasn't using any flow control and I increased the RxDescriptor count to 4096 before the upgrade as well without any problems. # cat /etc/modprobe.d/e1000.conf options e1000 XsumRX=0,0,0,0,0 RxDescriptors=4096,4096,4096,4096,4096 FlowControl=0,0,0,0,0 debug=16 Here's my network configuration file, I turned off checksumming and various offloading mechanisms along with setting CPU affinity with heavy use interfaces getting an entire CPU and light use interfaces sharing a CPU. I used these settings prior to the upgrade without problems. 
# cat /etc/network/interfaces # The loopback network interface auto lo iface lo inet loopback # The primary network interface auto eth0 iface eth0 inet manual pre-up /sbin/ethtool -G eth0 rx 4096 tx 0 pre-up /sbin/ethtool -K eth0 gro off gso off rx off pre-up /sbin/ethtool -A eth0 rx off autoneg off up ifconfig eth0 0.0.0.0 -arp promisc mtu 1500 allmulti txqueuelen 0 up post-up echo "4" > /proc/irq/48/smp_affinity down ifconfig eth0 down post-down /sbin/ethtool -G eth0 rx 256 tx 256 post-down /sbin/ethtool -K eth0 gro on gso on rx on post-down /sbin/ethtool -A eth0 rx on autoneg on auto eth1 iface eth1 inet manual pre-up /sbin/ethtool -G eth1 rx 4096 tx 0 pre-up /sbin/ethtool -K eth1 gro off gso off rx off pre-up /sbin/ethtool -A eth1 rx off autoneg off up ifconfig eth1 0.0.0.0 -arp promisc mtu 1500 allmulti txqueuelen 0 up post-up echo "4" > /proc/irq/49/smp_affinity down ifconfig eth1 down post-down /sbin/ethtool -G eth1 rx 256 tx 256 post-down /sbin/ethtool -K eth1 gro on gso on rx on post-down /sbin/ethtool -A eth1 rx on autoneg on auto eth2 iface eth2 inet manual pre-up /sbin/ethtool -G eth2 rx 4096 tx 0 pre-up /sbin/ethtool -K eth2 gro off gso off rx off pre-up /sbin/ethtool -A eth2 rx off autoneg off up ifconfig eth2 0.0.0.0 -arp promisc mtu 1500 allmulti txqueuelen 0 up post-up echo "1" > /proc/irq/82/smp_affinity down ifconfig eth2 down post-down /sbin/ethtool -G eth2 rx 256 tx 256 post-down /sbin/ethtool -K eth2 gro on gso on rx on post-down /sbin/ethtool -A eth2 rx on autoneg on auto eth3 iface eth3 inet manual pre-up /sbin/ethtool -G eth3 rx 4096 tx 0 pre-up /sbin/ethtool -K eth3 gro off gso off rx off pre-up /sbin/ethtool -A eth3 rx off autoneg off up ifconfig eth3 0.0.0.0 -arp promisc mtu 1500 allmulti txqueuelen 0 up post-up echo "2" > /proc/irq/83/smp_affinity down ifconfig eth3 down post-down /sbin/ethtool -G eth3 rx 256 tx 256 post-down /sbin/ethtool -K eth3 gro on gso on rx on post-down /sbin/ethtool -A eth3 rx on autoneg on auto eth4 iface eth4 inet manual pre-up /sbin/ethtool -G eth4 rx 4096 tx 0 pre-up /sbin/ethtool -K eth4 gro off gso off rx off pre-up /sbin/ethtool -A eth4 rx off autoneg off up ifconfig eth4 0.0.0.0 -arp promisc mtu 1500 allmulti txqueuelen 0 up post-up echo "4" > /proc/irq/77/smp_affinity down ifconfig eth4 down post-down /sbin/ethtool -G eth4 rx 256 tx 256 post-down /sbin/ethtool -K eth4 gro on gso on rx on post-down /sbin/ethtool -A eth4 rx on autoneg on auto eth5 iface eth5 inet static pre-up /etc/fw.conf address 192.168.1.1 netmask 255.255.255.0 broadcast 192.168.1.255 gateway 192.168.1.1 dns-nameservers 192.168.1.2 192.168.1.3 up ifconfig eth5 up post-up echo "8" > /proc/irq/77/smp_affinity down ifconfig eth5 down Here's a few examples of packet drops, i ran one after another, probabling totaling 3 or 4 seconds. You can see increases in the drops from the 1st and 3rd. This was a non-busy time, very little traffic. # awk '{ print $1,$5 }' /proc/net/dev Inter-| face drop eth3: 225 lo: 0 eth2: 505 eth1: 0 eth5: 17 eth0: 105 eth4: 1034 # awk '{ print $1,$5 }' /proc/net/dev Inter-| face drop eth3: 225 lo: 0 eth2: 507 eth1: 0 eth5: 17 eth0: 105 eth4: 1034 # awk '{ print $1,$5 }' /proc/net/dev Inter-| face drop eth3: 227 lo: 0 eth2: 512 eth1: 0 eth5: 17 eth0: 105 eth4: 1039 I tried the pci=noacpi options. With and without, it's the same. 
This is what my interrupt stats looked like before the upgrade, after, with ACPI on PCI it showed multiple NICs bound to an interrupt and shared with other devices such as USB drives which I didn't like so I think i'm going to keep it with ACPI off as it's easier to designate sole purpose interrupts. Is there any advantage I would have using the default i.e. ACPI w/ PCI. ? # cat /etc/default/grub | grep CMD_LINE GRUB_CMDLINE_LINUX_DEFAULT="ipv6.disable=1 noacpi pci=noacpi" GRUB_CMDLINE_LINUX="" # cat /proc/interrupts CPU0 CPU1 CPU2 CPU3 0: 45 0 0 16 IO-APIC-edge timer 1: 1 0 0 7936 IO-APIC-edge i8042 2: 0 0 0 0 XT-PIC-XT-PIC cascade 6: 0 0 0 3 IO-APIC-edge floppy 8: 0 0 0 1 IO-APIC-edge rtc0 9: 0 0 0 0 IO-APIC-edge acpi 12: 0 0 0 1809 IO-APIC-edge i8042 14: 1 0 0 4498 IO-APIC-edge ata_piix 15: 0 0 0 0 IO-APIC-edge ata_piix 16: 0 0 0 0 IO-APIC-fasteoi uhci_hcd:usb2 18: 0 0 0 1350 IO-APIC-fasteoi uhci_hcd:usb4, radeon 19: 0 0 0 0 IO-APIC-fasteoi uhci_hcd:usb3 23: 0 0 0 4099 IO-APIC-fasteoi ehci_hcd:usb1 38: 0 0 0 61963 IO-APIC-fasteoi megaraid 48: 0 0 1002319 4 IO-APIC-fasteoi eth0 49: 0 0 38772 3 IO-APIC-fasteoi eth1 77: 0 0 130076 432159 IO-APIC-fasteoi eth4 78: 0 0 0 23917 IO-APIC-fasteoi eth5 82: 1329033 0 0 4 IO-APIC-fasteoi eth2 83: 0 4886525 0 6 IO-APIC-fasteoi eth3 NMI: 5 6 4 5 Non-maskable interrupts LOC: 61409 57076 64257 114764 Local timer interrupts SPU: 0 0 0 0 Spurious interrupts IWI: 0 0 0 0 IRQ work interrupts RES: 17956 25333 13436 14789 Rescheduling interrupts CAL: 22436 607 539 478 Function call interrupts TLB: 1525 1458 4600 4151 TLB shootdowns TRM: 0 0 0 0 Thermal event interrupts THR: 0 0 0 0 Threshold APIC interrupts MCE: 0 0 0 0 Machine check exceptions MCP: 16 16 16 16 Machine check polls ERR: 0 MIS: 0 Here's sample output of vmstat, showing the system. Barebones system right now. root@nms:~# vmstat -S m 1 procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 0 0 14992 192 1029 0 0 56 2 419 29 1 0 99 0 0 0 0 14992 192 1029 0 0 0 0 922 27 0 0 100 0 0 0 0 14991 192 1029 0 0 0 36 763 50 0 0 100 0 0 0 0 14991 192 1029 0 0 0 0 646 35 0 0 100 0 0 0 0 14991 192 1029 0 0 0 0 722 54 0 0 100 0 0 0 0 14991 192 1029 0 0 0 0 793 27 0 0 100 0 ^C Here's dmesg output. I can't figure out why my PCI-X slots are negotiated as PCI. The network cards are all PCI-X with the exception of the integrated NICs that came with the server. In the output below it looks as if eth3 and eth2 negotiated at PCI-X speeds rather than PCI:66Mhz. Wouldn't they all drop to PCI:66Mhz? If your integrated NICs are PCI, as labeled below (eth0,eth1), then wouldn't all devices on your bus speed drop down to that slower bus speed? If not, I still don't know why only one of my NICs ( each has two ethernet ports) is labeled as PCI-X in the output below. Does that mean it is running at PCI-X speeds are is it showing that it's capable? # dmesg | grep e1000 [ 3678.349337] e1000: Intel(R) PRO/1000 Network Driver - version 7.3.21-k8-NAPI [ 3678.349342] e1000: Copyright (c) 1999-2006 Intel Corporation. 
[ 3678.349394] e1000 0000:06:07.0: PCI->APIC IRQ transform: INT A -> IRQ 48 [ 3678.409725] e1000 0000:06:07.0: Receive Descriptors set to 4096 [ 3678.409730] e1000 0000:06:07.0: Checksum Offload Disabled [ 3678.409734] e1000 0000:06:07.0: Flow Control Disabled [ 3678.586409] e1000 0000:06:07.0: eth0: (PCI:66MHz:32-bit) 00:11:43:e0:e2:8c [ 3678.586419] e1000 0000:06:07.0: eth0: Intel(R) PRO/1000 Network Connection [ 3678.586642] e1000 0000:07:08.0: PCI->APIC IRQ transform: INT A -> IRQ 49 [ 3678.649854] e1000 0000:07:08.0: Receive Descriptors set to 4096 [ 3678.649859] e1000 0000:07:08.0: Checksum Offload Disabled [ 3678.649863] e1000 0000:07:08.0: Flow Control Disabled [ 3678.826436] e1000 0000:07:08.0: eth1: (PCI:66MHz:32-bit) 00:11:43:e0:e2:8d [ 3678.826444] e1000 0000:07:08.0: eth1: Intel(R) PRO/1000 Network Connection [ 3678.826627] e1000 0000:09:04.0: PCI->APIC IRQ transform: INT A -> IRQ 82 [ 3679.093266] e1000 0000:09:04.0: Receive Descriptors set to 4096 [ 3679.093271] e1000 0000:09:04.0: Checksum Offload Disabled [ 3679.093275] e1000 0000:09:04.0: Flow Control Disabled [ 3679.130239] e1000 0000:09:04.0: eth2: (PCI-X:133MHz:64-bit) 00:04:23:e1:77:6a [ 3679.130246] e1000 0000:09:04.0: eth2: Intel(R) PRO/1000 Network Connection [ 3679.130449] e1000 0000:09:04.1: PCI->APIC IRQ transform: INT B -> IRQ 83 [ 3679.397312] e1000 0000:09:04.1: Receive Descriptors set to 4096 [ 3679.397318] e1000 0000:09:04.1: Checksum Offload Disabled [ 3679.397321] e1000 0000:09:04.1: Flow Control Disabled [ 3679.434350] e1000 0000:09:04.1: eth3: (PCI-X:133MHz:64-bit) 00:04:23:e1:77:6b [ 3679.434360] e1000 0000:09:04.1: eth3: Intel(R) PRO/1000 Network Connection [ 3679.434553] e1000 0000:0a:03.0: PCI->APIC IRQ transform: INT A -> IRQ 77 [ 3679.704072] e1000 0000:0a:03.0: Receive Descriptors set to 4096 [ 3679.704077] e1000 0000:0a:03.0: Checksum Offload Disabled [ 3679.704081] e1000 0000:0a:03.0: Flow Control Disabled [ 3679.738364] e1000 0000:0a:03.0: eth4: (PCI:33MHz:64-bit) 00:04:23:b6:35:6c [ 3679.738371] e1000 0000:0a:03.0: eth4: Intel(R) PRO/1000 Network Connection [ 3679.738538] e1000 0000:0a:03.1: PCI->APIC IRQ transform: INT B -> IRQ 78 [ 3680.046060] e1000 0000:0a:03.1: eth5: (PCI:33MHz:64-bit) 00:04:23:b6:35:6d [ 3680.046067] e1000 0000:0a:03.1: eth5: Intel(R) PRO/1000 Network Connection [ 3682.132415] e1000: eth0 NIC Link is Up 100 Mbps Half Duplex, Flow Control: None [ 3682.224423] e1000: eth1 NIC Link is Up 100 Mbps Half Duplex, Flow Control: None [ 3682.316385] e1000: eth2 NIC Link is Up 100 Mbps Half Duplex, Flow Control: None [ 3682.408391] e1000: eth3 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None [ 3682.500396] e1000: eth4 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None [ 3682.708401] e1000: eth5 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX At first I thought it was the NIC drivers but I'm not so sure. I really have no idea where else to look at the moment. Any help is greatly appreciated as I'm struggling with this. If you need more information just ask. Thanks! [1]http://www.cs.fsu.edu/~baker/devices/lxr/http/source/linux/Documentation/networking/e1000.txt?v=2.6.11.8 [2] http://support.dell.com/support/edocs/systems/pe2850/en/ug/t1390aa.htm


  • How to implement multi-source XSLT mapping in 11g BPEL

    - by [email protected]
    In SOA 11g, you can create a XSLT mapper that uses multiple sources as the input. To implement a multi-source mapper, just follow the instructions below, Drag and drop a Transform Activity to a BPEL process Double-click on the Transform Activity, the Transform dialog window appears. Add source variables by clicking the Add icon and selecting the variable and part of the variable as needed. You can select multiple input variables. The first variable represents the main XML input to the XSL mapping, while additional variables that are added here are defined in the XSL mapping as input parameters. Select the target variable and its part if available. Specify the mapper file name, the default file name is xsl/Transformation_%SEQ%.xsl, where %SEQ% represents the sequence number of the mapper. Click OK, the xls file will be opened in the graphical mode. You can map the sources to the target as usual. Open the mapper source code, you will notice the variable representing the additional source payload, is defined as the input parameter in the map source spec and body<mapSources>    <source type="XSD">      <schema location="../xsd/po.xsd"/>      <rootElement name="PurchaseOrder" namespace="http://www.oracle.com/pcbpel/po"/>    </source>    <source type="XSD">      <schema location="../xsd/customer.xsd"/>      <rootElement name="Customer" namespace="http://www.oracle.com/pcbpel/Customer"/>      <param name="v_customer" />    </source>  </mapSources>...<xsl:param name="v_customer"/> Let's take a look at the BPEL source code used to execute xslt mapper. <assign name="Transform_1">            <bpelx:annotation>                <bpelx:pattern>transformation</bpelx:pattern>            </bpelx:annotation>            <copy>                <from expression="ora:doXSLTransformForDoc('xsl/Transformation_1.xsl',bpws:getVariableData('v_po'),'v_customer',bpws:getVariableData('v_customer'))"/>                <to variable="v_invoice"/>            </copy>        </assign> You will see BPEL uses ora:doXSLTransformForDoc XPath function to execute the XSLT mapper.This function returns the result of  XSLT transformation when the xslt template matching the document. The signature of this function is  ora:doXSLTransformForDoc(template,input, [paramQName, paramValue]*).Wheretemplate is the XSLT mapper nameinput is the string representation of xml input, paramQName is the parameter defined in the xslt mapper as the additional sourceparameterValue is the additional source payload. You can add more sources to the mapper at the later stage, but you have to modify the ora:doXSLTransformForDoc in the BPEL source code and make sure it passes correct parameter and its value pair that reflects the changes in the XSLT mapper.So the best practices are : create the variables before creating the mapping file, therefore you can add multiple sources when you define the transformation in the first place, which is more straightforward than adding them later on. Review ora:doXSLTransformForDoc code in the BPEL source and make sure it passes the correct parameters to the mapper.


  • Logging errors caused by exceptions deep in the application

    - by Kaleb Pederson
    What are best-practices for logging deep within an application's source? Is it bad practice to have multiple event log entries for a single error? For example, let's say that I have an ETL system whose transform step involves: a transformer, pipeline, processing algorithm, and processing engine. In brief, the transformer takes in an input file, parses out records, and sends the records through the pipeline. The pipeline aggregates the results of the processing algorithm (which could do serial or parallel processing). The processing algorithm sends each record through one or more processing engines. So, I have at least four levels: Transformer - Pipeline - Algorithm - Engine. My code might then look something like the following: class Transformer { void Process(InputSource input) { try { var inRecords = _parser.Parse(input.Stream); var outRecords = _pipeline.Transform(inRecords); } catch (Exception ex) { var inner = new ProcessException(input, ex); _logger.Error("Unable to parse source " + input.Name, inner); throw inner; } } } class Pipeline { IEnumerable<Result> Transform(IEnumerable<Record> records) { // NOTE: no try/catch as I have no useful information to provide // at this point in the process var results = _algorithm.Process(records); // examine and do useful things with results return results; } } class Algorithm { IEnumerable<Result> Process(IEnumerable<Record> records) { var results = new List<Result>(); foreach (var engine in Engines) { foreach (var record in records) { try { engine.Process(record); } catch (Exception ex) { var inner = new EngineProcessingException(engine, record, ex); _logger.Error("Engine {0} unable to parse record {1}", engine, record); throw inner; } } } } } class Engine { Result Process(Record record) { for (int i=0; i<record.SubRecords.Count; ++i) { try { Validate(record.subRecords[i]); } catch (Exception ex) { var inner = new RecordValidationException(record, i, ex); _logger.Error( "Validation of subrecord {0} failed for record {1}", i, record ); } } } } There's a few important things to notice: A single error at the deepest level causes three log entries (ugly? DOS?) Thrown exceptions contain all important and useful information Logging only happens when failure to do so would cause loss of useful information at a lower level. Thoughts and concerns: I don't like having so many log entries for each error I don't want to lose important, useful data; the exceptions contain all the important but the stacktrace is typically the only thing displayed besides the message. I can log at different levels (e.g., warning, informational) The higher level classes should be completely unaware of the structure of the lower-level exceptions (which may change as the different implementations are replaced). The information available at higher levels should not be passed to the lower levels. So, to restate the main questions: What are best-practices for logging deep within an application's source? Is it bad practice to have multiple event log entries for a single error?
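
    One common pattern that addresses the "three log entries per error" concern is to keep the wrap-and-rethrow-with-context that the lower layers already do, but log only once, at the outermost boundary, walking the InnerException chain so no layer's context is lost. A minimal sketch, assuming Transformer and InputSource from the question and some ILogger-style field standing in for whatever _logger actually is:

        // Minimal sketch: lower layers wrap and rethrow but do not log; the top-level
        // boundary writes a single entry that carries every layer's context.
        using System;
        using System.Text;

        class TransformBoundary
        {
            private readonly Transformer _transformer;   // from the question's code
            private readonly ILogger _logger;            // assumed logging interface

            public TransformBoundary(Transformer transformer, ILogger logger)
            {
                _transformer = transformer;
                _logger = logger;
            }

            public void ProcessSafely(InputSource input)
            {
                try
                {
                    _transformer.Process(input);
                }
                catch (Exception ex)
                {
                    // Collect the message from each wrapped exception in the chain.
                    var sb = new StringBuilder();
                    for (Exception e = ex; e != null; e = e.InnerException)
                        sb.AppendLine(e.GetType().Name + ": " + e.Message);

                    _logger.Error(sb.ToString(), ex);    // one entry, full chain + stack trace
                    throw;                               // rethrow without losing the original stack
                }
            }
        }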


  • Incorrect lighting results with deferred rendering

    - by Lasse
    I am trying to render a light-pass to a texture which I will later apply on the scene. But I seem to calculate the light position wrong. I am working on view-space. In the image above, I am outputting the attenuation of a point light which is currently covering the whole screen. The light is at 0,10,0 position, and I transform it to view-space first: Vector4 pos; Vector4 tmp = new Vector4 (light.Position, 1); // Transform light position for shader Vector4.Transform (ref tmp, ref Camera.ViewMatrix, out pos); shader.SendUniform ("LightViewPosition", ref pos); Now to me that does not look as it should. What I think it should look like is that the white area should be on the center of the scene. The camera is at the corner of the scene, and it seems as if the light would move along with the camera. Here's the fragment shader code: void main(){ // default black color vec3 color = vec3(0); // Pixel coordinates on screen without depth vec2 PixelCoordinates = gl_FragCoord.xy / ScreenSize; // Get pixel position using depth from texture vec4 depthtexel = texture( DepthTexture, PixelCoordinates ); float depthSample = unpack_depth(depthtexel); // Get pixel coordinates on camera-space by multiplying the // coordinate on screen-space by inverse projection matrix vec4 world = (ImP * RemapMatrix * vec4(PixelCoordinates, depthSample, 1.0)); // Undo the perspective calculations vec3 pixelPosition = (world.xyz / world.w) * 3; // How far the light should reach from it's point of origin float lightReach = LightColor.a / 2; // Vector in between light and pixel vec3 lightDir = (LightViewPosition.xyz - pixelPosition); float lightDistance = length(lightDir); vec3 lightDirN = normalize(lightDir); // Discard pixels too far from light source //if(lightReach < lightDistance) discard; // Get normal from texture vec3 normal = normalize((texture( NormalTexture, PixelCoordinates ).xyz * 2) - 1); // Half vector between the light direction and eye, used for specular component vec3 halfVector = normalize(lightDirN + normalize(-pixelPosition)); // Dot product of normal and light direction float NdotL = dot(normal, lightDirN); float attenuation = pow(lightReach / lightDistance, LightFalloff); // If pixel is lit by the light if(NdotL > 0) { // I have moved stuff from here to above so I can debug them. // Diffuse light color color += LightColor.rgb * NdotL * attenuation; // Specular light color color += LightColor.xyz * pow(max(dot(halfVector, normal), 0.0), 4.0) * attenuation; } RT0 = vec4(color, 1); //RT0 = vec4(pixelPosition, 1); //RT0 = vec4(depthSample, depthSample, depthSample, 1); //RT0 = vec4(NdotL, NdotL, NdotL, 1); RT0 = vec4(attenuation, attenuation, attenuation, 1); //RT0 = vec4(lightReach, lightReach, lightReach, 1); //RT0 = depthtexel; //RT0 = 100 / vec4(lightDistance, lightDistance, lightDistance, 1); //RT0 = vec4(lightDirN, 1); //RT0 = vec4(halfVector, 1); //RT0 = vec4(LightColor.xyz,1); //RT0 = vec4(LightViewPosition.xyz/100, 1); //RT0 = vec4(LightPosition.xyz, 1); //RT0 = vec4(normal,1); } What am I doing wrong here?


  • Arcball Problems with UDK

    - by opdude
    I'm trying to re-create an arcball example from a Nehe, where an object can be rotated in a more realistic way while floating in the air (in my game the object is attached to the player at a distance like for example the Physics Gun) however I'm having trouble getting this to work with UDK. I have created an LGArcBall which follows the example from Nehe and I've compared outputs from this with the example code. I think where my problem lies is what I do to the Quaternion that is returned from the LGArcBall. Currently I am taking the returned Quaternion converting it to a rotation matrix. Getting the product of the last rotation (set when the object is first clicked) and then returning that into a Rotator and setting that to the objects rotation. If you could point me in the right direction that would be great, my code can be found below. class LGArcBall extends Object; var Quat StartRotation; var Vector StartVector; var float AdjustWidth, AdjustHeight, Epsilon; function SetBounds(float NewWidth, float NewHeight) { AdjustWidth = 1.0f / ((NewWidth - 1.0f) * 0.5f); AdjustHeight = 1.0f / ((NewHeight - 1.0f) * 0.5f); } function StartDrag(Vector2D startPoint, Quat rotation) { StartVector = MapToSphere(startPoint); } function Quat Update(Vector2D currentPoint) { local Vector currentVector, perp; local Quat newRot; //Map the new point to the sphere currentVector = MapToSphere(currentPoint); //Compute the vector perpendicular to the start and current perp = startVector cross currentVector; //Make sure our length is larger than Epsilon if (VSize(perp) > Epsilon) { //Return the perpendicular vector as the transform newRot.X = perp.X; newRot.Y = perp.Y; newRot.Z = perp.Z; //In the quaternion values, w is cosine (theta / 2), where //theta is the rotation angle newRot.W = startVector dot currentVector; } else { //The two vectors coincide, so return an identity transform newRot.X = 0.0f; newRot.Y = 0.0f; newRot.Z = 0.0f; newRot.W = 0.0f; } return newRot; } function Vector MapToSphere(Vector2D point) { local float x, y, length, norm; local Vector result; //Transform the mouse coords to [-1..1] //and inverse the Y coord x = (point.X * AdjustWidth) - 1.0f; y = 1.0f - (point.Y * AdjustHeight); length = (x * x) + (y * y); //If the point is mapped outside of the sphere //( length > radius squared) if (length > 1.0f) { norm = 1.0f / Sqrt(length); //Return the "normalized" vector, a point on the sphere result.X = x * norm; result.Y = y * norm; result.Z = 0.0f; } else //It's inside of the sphere { //Return a vector to the point mapped inside the sphere //sqrt(radius squared - length) result.X = x; result.Y = y; result.Z = Sqrt(1.0f - length); } return result; } DefaultProperties { Epsilon = 0.000001f } I'm then attempting to rotate that object when the mouse is dragged, with the following update code in my PlayerController. //Get Mouse Position MousePosition.X = LGMouseInterfacePlayerInput(PlayerInput).MousePosition.X; MousePosition.Y = LGMouseInterfacePlayerInput(PlayerInput).MousePosition.Y; newQuat = ArcBall.Update(MousePosition); rotMatrix = MakeRotationMatrix(QuatToRotator(newQuat)); rotMatrix = rotMatrix * LastRot; LGMoveableActor(movingPawn.CurrentUseableObject).SetPhysics(EPhysics.PHYS_Rotating); LGMoveableActor(movingPawn.CurrentUseableObject).SetRotation(MatrixGetRotator(rotMatrix));


  • 11gR2 RAC ASM????

    - by Liu Maclean(???)
    11gR2 RAC?ocr?votedisk???????ASM??, ????10g??????2?RAC????????????,  ?? 11gR2 ?ASM?spfile??????ASM diskgroup???????ASM??????? ????????????,????? ASM?????mount diskgroup??????diskgroup????, ??ASM??????ASM spfile????????,?2???????? ????T.askmaclean.com?????ASM?????: hello maclean, ??spfile??ASMCMD> spget+CRSDG/rac/asmparameterfile/registry.253.787925627?????,ASM ?????ORACLE instance,?????????????diskgroup,????????????????????????????thanks.! ?????????: ?11.2??Oracle Cluterware??voting disk files?????????11.1?10.2????,11.2??voting disk file??????OCR?, ?????11.2??ocr?votedisk?????ASM? , ???11.2?voting disk file??GPNP profile??CSS voting file discovery string???? CSS voting disk file?discovery string???ASM,??????ASM discovery string???  ????????udev???????ASM???LUN, ??udev????????/dev/rasm-disk* , ????gpnptool get????gpnp profile: [grid@maclean1 trace]$ gpnptool get Warning: some command line parameters were defaulted. Resulting command line: /g01/grid/app/11.2.0/grid/bin/gpnptool.bin get -o- <?xml version="1.0" encoding="UTF-8"?><gpnp:GPnP-Profile Version="1.0" xmlns="http://www.grid-pnp.org/2005/11/gpnp-profile" xmlns:gpnp="http://www.grid-pnp.org/2005/11/gpnp-profile" xmlns:orcl="http://www.oracle.com/gpnp/2005/11/gpnp-profile" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.grid-pnp.org/2005/11/gpnp-profile gpnp-profile.xsd" ProfileSequence="9" ClusterUId="452185be9cd14ff4ffdc7688ec5439bf" ClusterName="maclean-cluster" PALocation=""><gpnp:Network-Profile><gpnp:HostNetwork id="gen" HostName="*"><gpnp:Network id="net1" IP="192.168.1.0" Adapter="eth0" Use="public"/><gpnp:Network id="net2" IP="172.168.1.0" Adapter="eth1" Use="cluster_interconnect"/></gpnp:HostNetwork></gpnp:Network-Profile>< orcl:CSS-Profile id="css" DiscoveryString="+asm" LeaseDuration="400"/><orcl:ASM-Profile id="asm" DiscoveryString="/dev/rasm*" SPFile="+SYSTEMDG/maclean-cluster/asmparameterfile/registry.253.788682933"/>< ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#"><ds:SignedInfo><ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/><ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/><ds:Reference URI=""><ds:Transforms><ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/><ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"> <InclusiveNamespaces xmlns="http://www.w3.org/2001/10/xml-exc-c14n#" PrefixList="gpnp orcl xsi"/></ds:Transform></ds:Transforms>< ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><ds:DigestValue>L1SLg10AqGEauCQ4ne9quucITZA=</ds:DigestValue>< /ds:Reference></ds:SignedInfo><ds:SignatureValue>rTyZm9vfcQCMuian6isnAThUmsV4xPoK2fteMc1l0GIvRvHncMwLQzPM/QrXCGGTCEvgvXzUPEKzmdX2oy5vLcztN60UHr6AJtA2JYYodmrsFwEyVBQ1D6wH+HQiOe2SG9UzdQnNtWSbjD4jfZkeQWyMPfWdKm071Ek0Rfb4nxE=</ds:SignatureValue></ds:Signature></gpnp:GPnP-Profile> Success. ?????2???: <orcl:CSS-Profile id=”css” DiscoveryString=”+asm” LeaseDuration=”400?/>==»css voting disk??+ASM<orcl:ASM-Profile id=”asm” DiscoveryString=”/dev/rasm*” SPFile=”+SYSTEMDG/maclean-cluster/asmparameterfile/registry.253.788682933?/>==»??????ASM?DiscoveryString=”/dev/rasm*”,?ASM??????????????,SPFILE???ASM Parameter FILE?ALIAS ???????GPNP???ASM Parameter FILE?ALIAS,?????ASM???????SPFILE,???Diskgroup?Mount???????ASM ALIAS?????? 
??????+SYSTEMDG/maclean-cluster/asmparameterfile/registry.253.788682933??SPFILE?ASM??????: [grid@maclean1 wallets]$ sqlplus / as sysasm SQL*Plus: Release 11.2.0.3.0 Production on Tue Jul 17 05:45:35 2012 Copyright (c) 1982, 2011, Oracle. All rights reserved. Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production With the Real Application Clusters and Automatic Storage Management options SQL> set linesize 140 pagesize 1400 col "FILE NAME" format a40 set head on select NAME "FILE NAME", AU_KFFXP "AU NUMBER", NUMBER_KFFXP "FILE NUMBER", DISK_KFFXP "DISK NUMBER" from x$kffxp, v$asm_alias where GROUP_KFFXP = GROUP_NUMBER and NUMBER_KFFXP = FILE_NUMBER and name in ('REGISTRY.253.788682933') order by DISK_KFFXP,AU_KFFXP; FILE NAME AU NUMBER FILE NUMBER DISK NUMBER ---------------------------------------- ---------- ----------- ----------- REGISTRY.253.788682933 39 253 1 REGISTRY.253.788682933 35 253 3 REGISTRY.253.788682933 35 253 4 SQL> col path for a50 SQL> select disk_number,path from v$asm_disk where disk_number in (1,3,4) and GROUP_NUMBER=3; DISK_NUMBER PATH ----------- -------------------------------------------------- 3 /dev/rasm-diske 4 /dev/rasm-diskf 1 /dev/rasm-diskc ?????ASM SPFILE??????(redundancy=high),????? /dev/rasm-diskc?AU=39?/dev/rasm-diske AU=35?/dev/rasm-diskf AU=35? ????kfed?????????ASM DISK?header: [grid@maclean1 wallets]$ kfed read /dev/rasm-diske|grep spfile kfdhdb.spfile: 35 ; 0x0f4: 0x00000023 [grid@maclean1 wallets]$ kfed read /dev/rasm-diskc|grep spfile kfdhdb.spfile: 39 ; 0x0f4: 0x00000027 [grid@maclean1 wallets]$ kfed read /dev/rasm-diskf|grep spfile kfdhdb.spfile: 35 ; 0x0f4: 0x00000023 ????ASM disk header?kfdhdb.spfile??ASM SPFILE???DISK??AU NUMBER????, ASM???????????GPNP PROFILE?? DiscoveryString?????????,????ASM disk header?????kfdhdb.spfile??????,?????MOUNT DISKGROUP??????ASM SPFILE,?????ASM, ?????????????????


  • Loading cross domain XML with Javascript using a hybrid iframe-proxy/xsl/jsonp concept?

    - by Josef
    On our site www.foo.com we want to download and use http://feeds.foo.com/feed.xml with Javascript. We'll obviously use Access-Control but for browsers that don't support it we are considering the following as a fallback: On www.foo.com, we set document.domain, provide a callback function and load the feed into a (hidden) iframe: document.domain = 'foo.com'; function receive_data(data) { // process data }; var proxy = document.createElement('iframe'); proxy.src = 'http://feeds.foo.com/feed.xml'; document.body.appendChild(proxy); On feeds.foo.com, add an XSL to feed.xml and use it to transform the feed into an html document that also sets document.domain and calls the callback function in its parent with the feed data as json: <?xml version="1.0"?> <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0"> <xsl:template match="ROOT"> <html><body> <script type="text/javascript"> document.domain = 'foo.com'; parent.receive_data([<xsl:apply-templates/>]); </script> </body></html> </xsl:template> <!-- templates that transform data into json objects go here --> </xsl:stylesheet> Is there a better way to load XML from feeds.foo.com and what are the ramifications of this iframe-proxy/xslt/jsonp trick? (..and in what cases will it fail?) Remarks This does not work in Safari & Chrome but since both support Access-Control it's fine. We want little or no change to feeds.foo.com We are aware of (but not interested in) server-side proxy solutions update: wrote about it


  • WPF ToolTip Style with dynamic LayoutTransform

    - by NoOne
    I have an app that scales it's UI and I want to scale the ToolTips with it. I have tried doing this: <Style TargetType="{x:Type ToolTip}"> <Setter Property="LayoutTransform" Value="{DynamicResource scaleTransf}"/> ... </Style> ...where scaleTransf is a resource that I change via code: Application.Current.Resources["scaleTransf"] = new ScaleTransform(...); Most of the ToolTips do get scaled in size but some of them that are created by C# code don't get scaled. I've checked and it seems that I don't set their Style or LayoutTransform by code, so I don't really understand what is going wrong... Moreover, I have the impression that the above XAML code worked fine a few days ago. :( Is there sth I can do to make it work all the time without setting the LayoutTransform in code-behind? EDIT : The ToolTips that don't change scale are the ones that have become visible before. EDIT2 : Extra code: <ScaleTransform x:Key="scaleTransf" ScaleX="1" ScaleY="1"/> I have also tried this: Application.Current.Resources.Remove("scaleTransf"); Application.Current.Resources.Add("scaleTransf", new ScaleTransform(val, val)); EDIT3 : My attempt to solve this using a DependencyProperty: In MainWindow.xaml.cs : public static readonly DependencyProperty TransformToApplyProperty = DependencyProperty.Register("TransformToApply", typeof(Transform), typeof(MainWindow)); public Transform TransformToApply { get { return (Transform)this.GetValue(TransformToApplyProperty); } } Somewhere in MainWindow, in response to a user input: this.SetValue(TransformToApplyProperty, new ScaleTransform(val, val)); XAML Style: <Style TargetType="{x:Type ToolTip}"> <Setter Property="LayoutTransform" Value="{Binding TransformToApply, ElementName=MainWindow}"/> ... Using this code, not a single one of the ToolTips seem to scale accordingly.
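
    For the ToolTips that are created in code and never pick up the implicit style, one thing worth trying is giving each of them an explicit dynamic-resource reference to the same transform, so they re-evaluate it when the resource is swapped just like the styled ones do. A minimal sketch (the tip and owner variables are placeholders, not names from the question):

        // Sketch: a programmatically created ToolTip references the shared transform
        // directly, so replacing Application.Current.Resources["scaleTransf"] is picked
        // up even when the implicit Style was never applied.
        using System.Windows;
        using System.Windows.Controls;

        var tip = new ToolTip { Content = "Scaled tooltip" };
        tip.SetResourceReference(FrameworkElement.LayoutTransformProperty, "scaleTransf");
        owner.ToolTip = tip;   // 'owner' stands for whichever element gets the tooltip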


  • UItextfield text alignment issue in cocos2D iPhone

    - by shreya
    Hi All, please take a look at the following screen shot. Here is my code (I am using cocos2D): CGAffineTransform transform = CGAffineTransformMakeRotation(3.14159/2); _view = [[CCDirector sharedDirector]openGLView]; // Input the user name _nameField = [[UITextField alloc]initWithFrame:CGRectMake(130.0, 270.0, 200.0, 30.0)]; _nameField.transform = transform; _nameField.adjustsFontSizeToFitWidth = YES; _nameField.textColor = [UIColor blackColor]; [_nameField setFont:[UIFont fontWithName:@"Arial" size:14]]; _nameField.placeholder = @"<Enter Your Name>"; _nameField.backgroundColor = [UIColor clearColor]; _nameField.autocorrectionType = UITextAutocorrectionTypeNo; _nameField.autocapitalizationType = UITextAutocapitalizationTypeAllCharacters; _nameField.textAlignment = UITextAlignmentCenter; _nameField.keyboardType = UIKeyboardTypeDefault; _nameField.returnKeyType = UIReturnKeyDone; _nameField.tag = 0; _nameField.delegate = self; _nameField.clearButtonMode = UITextFieldViewModeNever; _nameField.borderStyle = UITextBorderStyleRoundedRect; [_view addSubview:_nameField]; The problem is that the text is typed along the top of the text field. I want it vertically centered, not at the top.


  • trigger config transformation in TFS 2010 or msbuild

    - by grenade
    I'm attempting to make use of configuration transformations in a continuous integration environment. I need a way to tell the TFS build agent to perform the transformations. I was kind of hoping it would just work after discovering the config transform files (web.qa-release.config, web.production-release.config, etc...). But it doesn't. I have a TFS build definition that builds the right configurations (qa-release, production-release, etc...) and I have some specific .proj files that get built within these definitions and those contain some environment specific parameters eg: <PropertyGroup Condition=" '$(Configuration)'=='production-release' "> <TargetHost Condition=" '$(TargetHost)'=='' ">qa.web</TargetHost> ... </PropertyGroup> <PropertyGroup Condition=" '$(Configuration)'=='qa-release' "> <TargetHost Condition=" '$(TargetHost)'=='' ">production.web</TargetHost> ... </PropertyGroup> I know from the output that the correct configurations are being built. Now I just need to learn how to trigger the config transformations. Is there some hocus pocus that I can add to the final .proj in the build to kick off the transform and blow away the individual transform files?
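
    Two common ways to trigger the transforms from a build are to call the TransformXml MSBuild task (it ships with the web publishing targets) from a custom target, or to invoke the XDT API directly from a small tool run as a build step. The sketch below takes the second route. It assumes a reference to the XDT assembly (Microsoft.Web.Publishing.Tasks.dll in the VS 2010 timeframe; the Microsoft.Web.XmlTransform namespace used here comes from the later standalone packaging and may need adjusting for 2010), and the file paths are placeholders.

        // Minimal sketch: apply web.<configuration>.config to web.config outside of
        // the publish pipeline, e.g. from a console tool invoked by the build.
        using System;
        using Microsoft.Web.XmlTransform;   // assumption: XDT assembly is referenced

        class ApplyConfigTransform
        {
            static int Main(string[] args)
            {
                string source = args[0];        // e.g. web.config
                string transformFile = args[1]; // e.g. web.qa-release.config
                string destination = args[2];   // e.g. _published\web.config

                var doc = new XmlTransformableDocument { PreserveWhitespace = true };
                doc.Load(source);

                using (var xdt = new XmlTransformation(transformFile))
                {
                    if (!xdt.Apply(doc))
                    {
                        Console.Error.WriteLine("Transform failed: " + transformFile);
                        return 1;
                    }
                }

                doc.Save(destination);
                return 0;
            }
        }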


  • CSS Rotation & IE: absolute positioning seems to break IE

    - by user263900
    I'm trying to rotate a variety of text blocks so they are vertically oriented, and position them in very specific locations on a diagram which will be previewed and then printed. CSS rotates the text very nicely in IE, FF, even Opera. But when I try to position a rotated element, IE 7 & 8 (not worried about 6) breaks completely and the element stays in its original location. Any way around this? I really need to-the-pixel control of where these labels are located. HTML <div class="content rotate"> <div id="Div1" class="txtblock">Ardvark Avacado<br />Awkward</div> <div id="Div2" class="txtblock">Brownies<br />Bacteria Brussel Sprouts</div> </div> CSS div.content { position: relative; width: 300px; height: 300px; margin: 30px; border-top: black 4px solid; border-right: blue 4px solid; border-bottom: black 4px dashed; border-left: blue 4px dashed; } .rotate { -webkit-transform: rotate(-90deg); -moz-transform: rotate(-90deg); -o-transform: rotate(-90deg); filter: progid:DXImageTransform.Microsoft.BasicImage(rotation=3); } .txtblock { width: auto; position: absolute; } #Div1 { left:44px; top:70px; border:red 3px solid; } #Div2 { left:13px; top:170px; border:purple 3px solid; }


  • Oracle performance problems with large batch of XSL operations

    - by FrustratedWithFormsDesigner
    I have a system that is performing many XSL transformations on XMLType objects. The problem is that the system gradually slows down over time, and sometimes crashes when it runs out of memory. It seems that the slow down (and possibly memory crash) is around the dbms_xslprocessor.processXSL function call, which gradually takes longer and longer to complete. The code looks like this: v_doc dbms_xmldom.DOMDocument; v_transformer dbms_xmldom.DOMDocument; v_XSLprocessor dbms_xslprocessor.Processor; v_stylesheet dbms_xslprocessor.Stylesheet; v_clob clob; ... transformer := PKG_STUFF.getXSL(); v_transformer := dbms_xmldom.newDOMDocument(transformer); v_XSLprocessor := Dbms_Xslprocessor.newProcessor; v_stylesheet := dbms_xslprocessor.newStylesheet(v_transformer, ''); ... for source_data in (select id in source_tbl) loop begin v_doc := PKG_CONVERT.convert(in_id => source_data.id); --start time of operation v_begin_op_time := dbms_utility.get_time; --reset the CLOB v_clob := ' '; --Apply XSL Transform dbms_xslprocessor.processXSL(p => v_XSLprocessor, ss => v_stylesheet, xmldoc => v_Doc, cl => v_clob); v_doc := dbms_xmldom.newDOMDocument(XMLType(v_clob)); --end time v_end_op_time := dbms_utility.get_time; --calculate duration v_time_taken := (((v_end_op_time - v_begin_op_time))); --log the duration PKG_LOG.log_message('Time taken to transform XML: '||v_time_taken); ... ... DBMS_XMLDOM.freeDocument(v_Doc); DBMS_LOB.freetemporary(lob_loc => v_clob); end loop; The time taken to transform the XML is slowly creeping up (I suppose it might also be the call to dbms_xmldom.newDOMDocument, but I had thought that to be fairly straightforward). I have no idea why.... :( (Oracle 10g)

  • Modify coverflow component, add back view?

    - by hib
    Hi all, I am developing an iPhone application in which I need to implement a coverflow component. I got a good library from here, and it is working nicely. Now I want to show a different back view, with 6 buttons, when the user double-taps an image in the coverflow. For that, the tapku library author has implemented a delegate method:

        - (void) coverflowView:(TKCoverflowView*)coverflowView coverAtIndexWasDoubleTapped:(int)index {
            TKCoverView *cover = [coverflowView coverAtIndex:index];
            //if(cover == nil) return;
            // [UIView beginAnimations:nil context:nil];
            // [UIView setAnimationDuration:1];
            // [UIView setAnimationTransition:UIViewAnimationTransitionFlipFromLeft forView:cover cache:YES];
            // [UIView commitAnimations];

            // ********************* MY BACK VIEW *********************
            UIView *c;
            NSArray *array = [[NSBundle mainBundle] loadNibNamed:@"TestMine" owner:nil options:nil];
            c = [array objectAtIndex:0];
            [cover addSubview:c];
            [UIView beginAnimations:nil context:NULL];
            [UIView setAnimationDuration:1.0];
            [UIView setAnimationTransition:UIViewAnimationTransitionFlipFromLeft forView:cover cache:YES];
            CGAffineTransform transform = CGAffineTransformMakeScale(1.2, 1.2);
            cover.transform = transform;
            [UIView commitAnimations];
            NSLog(@"Index: %d", index);
        }

    I am not very familiar with the drawing code used in this example, and my back view is not working properly. Can anyone modify this method so that double-tapping an image flips to the back view, and double-tapping again flips back to the original image? I need help.
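    A sketch of a toggling version (the tag value 99 is arbitrary, and the frame assignment assumes the back view should fill the cover): it adds the back view on the first double-tap and removes it on the next, wrapping both in the same flip transition so the animation plays in each direction.

        - (void)coverflowView:(TKCoverflowView *)coverflowView coverAtIndexWasDoubleTapped:(int)index {
            TKCoverView *cover = [coverflowView coverAtIndex:index];
            if (cover == nil) return;
            UIView *back = [cover viewWithTag:99];   // nil on the first double-tap

            [UIView beginAnimations:nil context:NULL];
            [UIView setAnimationDuration:1.0];
            [UIView setAnimationTransition:UIViewAnimationTransitionFlipFromLeft
                                   forView:cover cache:YES];
            if (back == nil) {
                // first double-tap: flip to the back view loaded from the nib
                UIView *c = [[[NSBundle mainBundle] loadNibNamed:@"TestMine"
                                                           owner:nil options:nil] objectAtIndex:0];
                c.tag = 99;
                c.frame = cover.bounds;
                [cover addSubview:c];
            } else {
                // second double-tap: flip back to the original image
                [back removeFromSuperview];
            }
            [UIView commitAnimations];
        }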

  • CSS 3 - Scaling CSS Transitions

    - by Viv Shc
    I am trying to scale an image on mouseenter, which works. I would like the image to gradually scale up with an ease transition; I used ease-in-out, but it's not working. Any suggestions? Also, I call addClass and removeClass twice in the jQuery code. Is there a way to only call each once? Thanks!

        <style>
        .image { opacity: 0.5; }
        .image.opaque { opacity: 1; }
        .size {
            transform: scale(1.2);
            -ms-transform: scale(1.2);      /* IE 9 */
            -webkit-transform: scale(1.2);  /* Safari and Chrome */
            -webkit-transition: scale 2s ease-in-out;
            -moz-transition: scale 2s ease-in-out;
            -o-transition: scale 2s ease-in-out;
            -ms-transition: scale 2s ease-in-out;
            transition: scale 2s ease-in-out;
            transition: opacity 2s;
        }
        </style>
        <script>
        $(document).ready(function() {
            $(".image").mouseenter(function() {
                $(this).addClass("opaque");
                $(this).addClass("size");
            });
            $(".image").mouseleave(function() {
                $(this).removeClass("opaque");
                $(this).removeClass("size");
            });
        });
        </script>
        <div id="gallery">
            <h3>Gallery of images</h3>
            <img class="image" src="images/gnu.jpg" height="200px" width="250px">
            <img class="image" src="images/tiger.jpg" height="200px" width="250px">
            <img class="image" src="images/black_rhino.jpg" height="200px" width="250px">
            <img class="image" src="images/cape_buffalo.jpg" height="200px" width="250px">
        </div>
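    Two separate issues here, sketched as a likely fix: `scale` is not an animatable CSS property name, so the transition must target `transform` instead; and the final unprefixed `transition: opacity 2s;` overwrites the transform transition declared one line above it, so both properties have to be listed in a single declaration. Putting the transition on `.image` itself (so it also eases back on mouse-leave) and folding the scale into the existing `opaque` class also answers the second question, since jQuery's `hover()` can fire one handler on both enter and leave:

        .image {
            opacity: 0.5;
            -webkit-transition: opacity 2s, -webkit-transform 2s ease-in-out;
            -moz-transition: opacity 2s, -moz-transform 2s ease-in-out;
            -o-transition: opacity 2s, -o-transform 2s ease-in-out;
            transition: opacity 2s, transform 2s ease-in-out;
        }
        .image.opaque {
            opacity: 1;
            -webkit-transform: scale(1.2);
            -moz-transform: scale(1.2);
            -ms-transform: scale(1.2);
            -o-transform: scale(1.2);
            transform: scale(1.2);
        }

        // one handler covers mouseenter and mouseleave
        $(document).ready(function() {
            $(".image").hover(function() {
                $(this).toggleClass("opaque");
            });
        });

    (IE9 understands -ms-transform but not transitions, so there the scale applies instantly rather than easing.)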

  • SL3 TreeView Not Programmatically Scrolling

    - by Chris
    I have a treeview in Silverlight 3, bound to an observable collection of hierarchical data. When the page loads, all nodes in the treeview are initially collapsed. I have functionality that allows a certain item in the treeview to be selected programmatically. The problem arises when an item is selected that isn't immediately visible (i.e. one or more parent nodes are collapsed). I programmatically expand them, but when I then try to programmatically scroll the selected item into view so the user can see it, it doesn't work. Looking into this further, I believe it has to do with the calculated viewport height of the scroll viewer. It almost seems like a timing issue: if the item's parent node is already expanded and the item is then programmatically selected, the code that scrolls the selected treeview item into view works perfectly. Please refer to the extension method below that I am using to scroll the item into view. Any help or suggestions on how to correct this would be greatly appreciated. Thanks.

        public static void BringIntoViewForScrollViewer(this FrameworkElement frameworkElement, ScrollViewer scrollViewer)
        {
            var transform = frameworkElement.TransformToVisual(scrollViewer);
            var positionInScrollViewer = transform.Transform(new Point(0, 0));
            if (positionInScrollViewer.Y < 0 || positionInScrollViewer.Y > scrollViewer.ViewportHeight)
                scrollViewer.ScrollToVerticalOffset(scrollViewer.VerticalOffset + positionInScrollViewer.Y - ScrollPadding);
        }
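    That timing hunch is consistent with how expansion works: newly expanded containers are only measured on the next layout pass, so scrolling immediately afterwards computes offsets against a stale viewport. A sketch of the usual workaround (treeView, selectedItem, and scrollViewer stand in for your own members) is to defer the scroll with Dispatcher.BeginInvoke so it runs after layout has caught up:

        // expand the parent nodes first, then queue the scroll behind the layout pass
        Dispatcher.BeginInvoke(() =>
        {
            var container = treeView.ItemContainerGenerator
                                    .ContainerFromItem(selectedItem) as FrameworkElement;
            if (container != null)
                container.BringIntoViewForScrollViewer(scrollViewer);
        });

    Calling UpdateLayout() on the treeview before measuring can achieve the same thing when deferring is inconvenient.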

  • Precision of cos(atan2(y,x)) versus using complex<double>, C++

    - by Ivan
    Hi all, I'm writing some coordinate transformations (more specifically the Joukowsky Transform; see the Wikipedia article on the Joukowsky Transform), and I'm interested in performance, but of course also precision. I'm trying to do the coordinate transformations in two ways:

    1) Calculating the real and imaginary parts separately, using double precision, as below:

        double r2 = chi.x*chi.x + chi.y*chi.y;
        //double sq = pow(r2,-0.5*n) + pow(r2,0.5*n); //slow!!!
        double sq = sqrt(r2); //way faster!
        double co = cos(atan2(chi.y,chi.x));
        double si = sin(atan2(chi.y,chi.x));
        Z.x = 0.5*(co*sq + co/sq);
        Z.y = 0.5*si*sq;

    where chi and Z are simple structures with double x and y as members.

    2) Using complex<double>:

        Z = 0.5 * (chi + (1.0 / chi));

    where Z and chi are complex<double>.

    The interesting part is that case 1) is indeed faster (about 20%), but the precision is bad: after the inverse transform there is an error in the third decimal place, while the complex<double> version gives back the exact number. So, is the problem in cos(atan2) and sin(atan2)? But if it is, how does complex<double> handle that? Thanks!
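    Two observations, offered as a sketch rather than a verified diagnosis. First, Z.y = 0.5*si*sq drops the si/sq term that the complex form 0.5*(chi + 1.0/chi) implies, and that alone would produce exactly this kind of mismatch. Second, the trig round-trip is unnecessary: cos(atan2(y,x)) is algebraically x/sqrt(x*x + y*y), and sin(atan2(y,x)) is y/sqrt(x*x + y*y), so both calls can be replaced by divisions:

        #include <cmath>

        struct Vec2 { double x, y; };  // stand-in for the poster's chi/Z structs

        Vec2 joukowsky(Vec2 chi) {
            double sq = std::sqrt(chi.x*chi.x + chi.y*chi.y);
            double co = chi.x / sq;    // == cos(atan2(chi.y, chi.x)), no trig calls
            double si = chi.y / sq;    // == sin(atan2(chi.y, chi.x))
            Vec2 Z;
            Z.x = 0.5 * (co*sq + co/sq);
            Z.y = 0.5 * (si*sq - si/sq);  // note the second term, matching the complex form
            return Z;
        }

    This should be both faster than case 1) and as accurate as the complex<double> version, since it avoids the double rounding through atan2 and then cos/sin.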

  • Advanced tasks using Web.Config transformation

    - by dcadenas
    Does anyone know if there is a way to "transform" specific sections of values instead of replacing the whole value or attribute? For example, I've got several appSettings entries that specify the URLs for different web services. These entries are slightly different in the dev environment than in production; some differences are less trivial than others:

        <!-- DEV ENTRY -->
        <appSettings>
          <add key="serviceName1_WebsService_Url" value="http://wsServiceName1.dev.domain.com/v1.2.3.4/entryPoint.asmx" />
          <add key="serviceName2_WebsService_Url" value="http://ma1-lab.lab1.domain.com/v1.2.3.4/entryPoint.asmx" />
        </appSettings>

        <!-- PROD ENTRY -->
        <appSettings>
          <add key="serviceName1_WebsService_Url" value="http://wsServiceName1.prod.domain.com/v1.2.3.4/entryPoint.asmx" />
          <add key="serviceName2_WebsService_Url" value="http://ws.ServiceName2.domain.com/v1.2.3.4/entryPoint.asmx" />
        </appSettings>

    So far, I know I can do something like this in the Web.Release.config:

        <add xdt:Locator="Match(key)" xdt:Transform="SetAttributes(value)" key="serviceName1_WebsService_Url" value="http://wsServiceName1.prod.domain.com/v1.2.3.4/entryPoint.asmx" />
        <add xdt:Locator="Match(key)" xdt:Transform="SetAttributes(value)" key="serviceName2_WebsService_Url" value="http://ws.ServiceName2.domain.com/v1.2.3.4/entryPoint.asmx" />

    However, every time the version for that web service is updated, I would have to update the Web.Release.config as well, which defeats the purpose of simplifying my web.config updates. I know I could also split the URL into different sections and update them independently, but I'd rather have it all in one key. I've looked through the available web.config transforms but nothing seems to be geared towards what I am trying to accomplish. These are the websites I am using as references: Vishal Joshi's blog, MSDN Help, and a Channel 9 video. Any help would be much appreciated! -D
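    For what it's worth, the xdt: transforms operate on whole attributes, so there is no built-in substring replacement. One workaround, sketched here with hypothetical key names (serviceName1_Host and WebService_Version are inventions, not existing keys), is to transform only the environment-specific host and assemble the URL in code, so a version bump touches one shared, untransformed key:

        using System.Configuration;

        // hypothetical keys: only the host differs per environment,
        // the version lives in a single shared key
        string host    = ConfigurationManager.AppSettings["serviceName1_Host"];
        string version = ConfigurationManager.AppSettings["WebService_Version"];
        string url     = string.Format("http://{0}/{1}/entryPoint.asmx", host, version);

    It does mean two keys instead of one, but the Web.Release.config transform then never needs to change when the version does.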

  • WPF create a list of controls that can be scrolled via the mouse but still remain functional

    - by Mark
    I have a list of controls that I am displaying via a horizontally oriented WrapPanel. I have implemented a "click and drag" scrolling technique so that the user scrolls by clicking and dragging, like so:

        <Canvas x:Name="ParentCanvas" PreviewMouseDown="Canvas_MouseDown" MouseMove="Canvas_MouseMove">
          <WrapPanel x:Name="_wrapPanel">
            <WrapPanel.RenderTransform>
              <TranslateTransform />
            </WrapPanel.RenderTransform>
            <!-- controls are all in here ... -->
          </WrapPanel>
        </Canvas>

    Then in the code-behind:

        private Point _mousePosition;
        private Point _lastMousePosition;

        private void Canvas_MouseDown(object sender, MouseButtonEventArgs e)
        {
            _lastMousePosition = e.GetPosition(ParentCanvas);
            e.Handled = true;
        }

        private void Canvas_MouseMove(object sender, MouseEventArgs e)
        {
            _mousePosition = e.GetPosition(ParentCanvas);
            var delta = _mousePosition - _lastMousePosition;
            if (e.LeftButton == MouseButtonState.Pressed && delta.X != 0)
            {
                var transform = ((TranslateTransform)_wrapPanel.RenderTransform).Clone();
                transform.X += delta.X;
                _wrapPanel.RenderTransform = transform;
                _lastMousePosition = _mousePosition;
            }
        }

    This all works fine. But what I want is that while the user is dragging, the items within the WrapPanel don't respond (the user is only browsing), yet when the user performs a full click, they do respond. Just like the iPhone: when you press and drag on an app icon, it does not open the app but scrolls the screen, yet when you tap the icon, the app starts. I hope this makes sense. Cheers, Mark
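    One approach, sketched under the assumption that a MouseUp="Canvas_MouseUp" handler is added to the Canvas: treat the gesture as a drag only once it passes the system drag threshold, and until then let every event flow to the children untouched. Capturing the mouse at the threshold is what steals the interaction away from the child controls mid-drag:

        private Point _downPosition;
        private bool _isDragging;

        private void Canvas_MouseDown(object sender, MouseButtonEventArgs e)
        {
            _downPosition = _lastMousePosition = e.GetPosition(ParentCanvas);
            _isDragging = false;               // note: no e.Handled = true here
        }

        private void Canvas_MouseMove(object sender, MouseEventArgs e)
        {
            if (e.LeftButton != MouseButtonState.Pressed) return;
            _mousePosition = e.GetPosition(ParentCanvas);

            if (!_isDragging && Math.Abs(_mousePosition.X - _downPosition.X) >
                                SystemParameters.MinimumHorizontalDragDistance)
            {
                _isDragging = true;
                ParentCanvas.CaptureMouse();   // children stop seeing events from here on
            }

            if (_isDragging)
            {
                var delta = _mousePosition - _lastMousePosition;
                var transform = ((TranslateTransform)_wrapPanel.RenderTransform).Clone();
                transform.X += delta.X;
                _wrapPanel.RenderTransform = transform;
            }
            _lastMousePosition = _mousePosition;
        }

        private void Canvas_MouseUp(object sender, MouseButtonEventArgs e)
        {
            if (_isDragging) ParentCanvas.ReleaseMouseCapture();
        }

    Since nothing is marked handled before the threshold is crossed, a plain tap passes straight through to the control underneath, mirroring the iPhone behaviour described above.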

  • Problem with poor font rendering with CSS3 transitions, jQuery, & Google Fonts

    - by Justin
    In Firefox, there is no problem. Here's an image: http://cl.ly/3R0L1q3P1r11040e3T1i In Safari, the text is rendering poorly: http://cl.ly/0a1101341r2E1D2d1W46 In IE7 & IE8, it's much worse, but I don't have a picture. Sorry :( I'm using the Isotope jQuery plugin, and the CSS3 transitions seem to cause the poor font rendering. I'm also using the Google Font API. Here's what the CSS transitions for Isotope are written as:

        /**** Isotope CSS3 transitions ****/
        .isotope, .isotope .isotope-item {
            -webkit-transition-duration: 0.8s;
            -moz-transition-duration: 0.8s;
            transition-duration: 0.8s;
        }
        .isotope {
            -webkit-transition-property: height, width;
            -moz-transition-property: height, width;
            transition-property: height, width;
        }
        .isotope .isotope-item {
            -webkit-transition-property: -webkit-transform, opacity;
            -moz-transition-property: -moz-transform, opacity;
            transition-property: transform, opacity;
        }

    I appreciate any help with this. Looks great in Firefox! Thanks!
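    WebKit re-rasterizes text while a layer is being transformed, which often produces exactly this mid-transition blur; in IE7/8 the DXImageTransform filters that emulate opacity and transforms disable ClearType entirely. A hedged sketch of the commonly suggested mitigations (none is guaranteed; results vary by browser version):

        /* keep the animated items on a composited layer throughout */
        .isotope .isotope-item {
            -webkit-backface-visibility: hidden;
        }
        /* evens out the smoothing mode Safari switches to during transforms */
        body {
            -webkit-font-smoothing: antialiased;
        }

    For IE7/8 the usual advice is blunter: serve the layout changes without the filter-based opacity animation, since ClearType and DirectX filters simply don't mix.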

  • Using ms: XPath functions inside XPathExpression

    - by Filini
    I am trying to use the Microsoft XPath extension functions (such as ms:string-compare, http://msdn.microsoft.com/en-us/library/ms256114.aspx) inside an XPathExpression object. These functions are extensions from the MSXML library, and if I use them in an XslCompiledTransform (simply adding the "ms" namespace) they work like a charm:

        var xsl = @"<?xml version=""1.0"" encoding=""UTF-8""?>
        <xsl:stylesheet version=""2.0"" xmlns:xsl=""http://www.w3.org/1999/XSL/Transform""
            xmlns:xs=""http://www.w3.org/2001/XMLSchema""
            xmlns:fn=""http://www.w3.org/2005/xpath-functions""
            xmlns:ms=""urn:schemas-microsoft-com:xslt"">
          <xsl:output method=""xml"" version=""1.0"" encoding=""UTF-8"" indent=""yes""/>
          <xsl:template match=""/Data"">
            <xsl:element name=""Result"">
              <xsl:value-of select=""ms:string-compare(@timeout1, @timeout2)""/>
            </xsl:element>
          </xsl:template>
        </xsl:stylesheet>";

        var xslDocument = new XmlDocument();
        xslDocument.LoadXml(xsl);
        var transform = new XslCompiledTransform();
        transform.Load(xslDocument);

    Then I tried using them in an XPathExpression:

        XPathNavigator nav = document.DocumentElement.CreateNavigator();
        XPathExpression expr = nav.Compile("ms:string-compare(/Data/@timeout1, /Data/@timeout2)");
        XmlNamespaceManager manager = new XmlNamespaceManager(document.NameTable);
        manager.AddNamespace("ms", "urn:schemas-microsoft-com:xslt");
        expr.SetContext(manager);
        nav.Evaluate(expr);

    But I get an exception: "XsltContext is needed for this query because of an unknown function". XsltContext is a specialized XmlNamespaceManager, but I don't know if it's possible to instantiate it without an actual XslCompiledTransform (it's abstract) and use it as my expression context. Is there any way to do this (or any other way to use the ms: extensions inside an XPathExpression)?
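    XPathExpression never resolves extension functions itself; the "unknown function" error is asking for a custom XsltContext subclass that supplies the implementation. A sketch of one (the class is hypothetical, and the ordinal comparison in Invoke is an approximation of ms:string-compare's semantics, which are culture- and option-sensitive):

        using System;
        using System.Xml;
        using System.Xml.XPath;
        using System.Xml.Xsl;

        public class MsXsltContext : XsltContext
        {
            public override IXsltContextFunction ResolveFunction(string prefix, string name, XPathResultType[] argTypes)
            {
                if (LookupNamespace(prefix) == "urn:schemas-microsoft-com:xslt" && name == "string-compare")
                    return new StringCompareFunction();
                return null;
            }

            public override IXsltContextVariable ResolveVariable(string prefix, string name) { return null; }
            public override int CompareDocument(string baseUri, string nextbaseUri) { return 0; }
            public override bool PreserveWhitespace(XPathNavigator node) { return false; }
            public override bool Whitespace { get { return false; } }

            private class StringCompareFunction : IXsltContextFunction
            {
                public int Minargs { get { return 2; } }
                public int Maxargs { get { return 2; } }
                public XPathResultType ReturnType { get { return XPathResultType.Number; } }
                public XPathResultType[] ArgTypes
                {
                    get { return new[] { XPathResultType.String, XPathResultType.String }; }
                }

                public object Invoke(XsltContext xsltContext, object[] args, XPathNavigator docContext)
                {
                    // approximation of ms:string-compare's default behaviour: -1, 0 or 1
                    return (double)Math.Sign(string.CompareOrdinal(ToStr(args[0]), ToStr(args[1])));
                }

                private static string ToStr(object arg)
                {
                    var it = arg as XPathNodeIterator;   // node-set args arrive as iterators
                    if (it != null) return it.MoveNext() ? it.Current.Value : string.Empty;
                    return Convert.ToString(arg);
                }
            }
        }

    Usage then replaces the XmlNamespaceManager in the question:

        var ctx = new MsXsltContext();
        ctx.AddNamespace("ms", "urn:schemas-microsoft-com:xslt");
        expr.SetContext(ctx);
        nav.Evaluate(expr);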

  • Concept: Information Into Memory Location.

    - by Richeve S. Bebedor
    I am having trouble conceptualizing an algorithm that transforms any piece of information into an appropriate memory location in any data structure I devise. To give you an idea: I have a JPanel instance and another Container instance of some subtype (this is in Java, because I love the language), and I collected those instances into a data structure that is not specific to those types but can hold any kind of object. My procedure for fetching the data again is to extract features common to every object in the structure and transform them into an integer memory location (or any data type that suits the transformation), so that I can access the location directly, without sorting or O(n) search algorithms (which would be fine, but I want to try my own way XD). The data structure could be anything: binary tree, linked list, array, set, and the like. What matters is that I don't need successive comparisons to locate information in big structures. Technically: say an array DS contains a JLabel instance named "HelloWorld" at index 124324, among a multitude of other objects (please disregard the efficiency of the array itself; I just want to explain the concept XD). Any search algorithm would be slow to arrive at that location, so instead I want to map "HelloWorld" to 124324 with a function applicable to all data types, and do a direct lookup: DS[extractLocation("HelloWorld")]. I know this may sound crazy, but I want to test this concept of a non-sorting, feature-extracting search for any data structure. My main problem is how to transform the information to be stored into the memory location where it was stored.
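    What this describes is, in essence, a hash function plus a hash table: the "feature extraction" is hashing the key, and the transformed "memory location" is the bucket index the hash maps to. A minimal Java sketch (HashMap already implements exactly this contract, so it makes a useful baseline to test a hand-rolled extractLocation against):

        import java.util.HashMap;
        import javax.swing.JLabel;

        public class FeatureLookup {
            public static void main(String[] args) {
                // the map hashes the key straight to a bucket index --
                // conceptually, extractLocation(key) == hash(key) % capacity
                HashMap<String, Object> ds = new HashMap<String, Object>();
                ds.put("HelloWorld", new JLabel("HelloWorld"));

                // direct retrieval: no sorting, no O(n) scan
                JLabel label = (JLabel) ds.get("HelloWorld");
                System.out.println(label.getText());
            }
        }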

  • SVG rotate deletes elements

    - by user1468661
    I'm trying to generate SVG code in a web application. Here's an example of the output:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
        <svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"
             xmlns:ev="http://www.w3.org/2001/xml-events" version="1.1" baseProfile="full"
             width="1000px" height="600px">
          <rect x="147.50198255352893" y="109.43695479777953" width="15.860428231562253" height="295.79698651863595" stroke="rgb(0,0,0)" fill="rgb(255,255,255)" stroke-width="3" transform="rotate(20 155.43219666931006 257.3354480570975)"/>
          <rect x="163.36241078509119" y="405.2339413164155" width="379.85725614591587" height="-23.79064234734335" stroke="rgb(0,0,0)" fill="rgb(255,255,255)" stroke-width="3" transform="rotate(20 353.2910388580491 393.3386201427438)"/>
          <rect x="543.219666931007" y="381.44329896907215" width="22.204599524187188" height="-353.6875495638382" stroke="rgb(0,0,0)" fill="rgb(255,255,255)" stroke-width="3" transform="rotate(20 554.3219666931006 204.59952418715304)"/>
        </svg>

    There should be three rotated rectangles, but in Chrome, Safari, and Inkscape only one of them is displayed. I've googled and have no clue what is wrong. Thanks for your help.
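    The rotation is a red herring: the second and third rects have negative height values, and SVG treats a negative width or height on a rect as an error that disables rendering of that element, which is why only the first rectangle shows up. Normalizing the generator so y is the smaller coordinate and height its absolute value should restore them; a sketch of the corrected rects (values rounded, rotation pivots unchanged):

        <rect x="163.362" y="381.443" width="379.857" height="23.791"
              stroke="rgb(0,0,0)" fill="rgb(255,255,255)" stroke-width="3"
              transform="rotate(20 353.291 393.339)"/>
        <rect x="543.220" y="27.756" width="22.205" height="353.688"
              stroke="rgb(0,0,0)" fill="rgb(255,255,255)" stroke-width="3"
              transform="rotate(20 554.322 204.600)"/>

    In the generating code this is just y = min(y1, y2) and height = abs(y2 - y1) before writing the element out.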
