Search Results

Search found 11509 results on 461 pages for 'description logic'.


  • What category of combinatorial problems appear on the logic games section of the LSAT?

    - by Merjit
    There's a category of logic problem on the LSAT that goes like this: Seven consecutive time slots for a broadcast, numbered in chronological order 1 through 7, will be filled by six song tapes (G, H, L, O, P, S) and exactly one news tape. Each tape is to be assigned to a different time slot, and no tape is longer than any other tape. The broadcast is subject to the following restrictions: L must be played immediately before O. The news tape must be played at some time after L. There must be exactly two time slots between G and P, regardless of whether G comes before P or whether G comes after P. I'm interested in generating a list of permutations that satisfy the conditions as a way of studying for the test and as a programming challenge. However, I'm not sure what class of permutation problem this is. I've generalized this type of problem as follows: Given an n-length array A: How many ways can a set of n unique items be arranged within A? E.g. how many ways are there to rearrange ABCDEFG? If the length of the set of unique items is less than the length of A, how many ways can the set be arranged within A if items in the set may occur more than once? E.g. ABCDEF = AABCDEF; ABBCDEF, etc. How many ways can a set of unique items be arranged within A if the items of the set are subject to "blocking conditions"? My thought is to encode the restrictions and then use something like Python's itertools to generate the permutations. Thoughts and suggestions are welcome.
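
    A brute-force sketch of this, in hypothetical Python, is to enumerate all orderings with itertools and keep only those that pass the restrictions; it assumes "exactly two time slots between G and P" means their slot numbers differ by three, and it models the news tape as a seventh symbol N.

```python
from itertools import permutations

tapes = ["G", "H", "L", "O", "P", "S", "N"]  # N stands in for the news tape

def satisfies_restrictions(order):
    pos = {tape: slot for slot, tape in enumerate(order, start=1)}
    return (
        pos["O"] == pos["L"] + 1            # L is played immediately before O
        and pos["N"] > pos["L"]             # the news tape comes some time after L
        and abs(pos["G"] - pos["P"]) == 3   # exactly two slots between G and P
    )

solutions = [p for p in permutations(tapes) if satisfies_restrictions(p)]
print(len(solutions))          # number of valid schedules
print(" ".join(solutions[0]))  # one example schedule
```

    With only 7 tapes there are 5040 candidate orderings, so filtering the full permutation set is instantaneous; each restriction becomes one small predicate, which is a convenient way to encode the "blocking conditions" mentioned above.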

    Read the article

  • Is unnecessary error handling recommended in business logic? E.g. null checks, percentage limit checks, etc.

    - by novice_at_work
    We usually put unnecessary checks in our business logic to avoid failures. E.g. 1. public ObjectABC funcABC(){ ObjectABC obj = new ObjectABC(); .......... .......... // it's never set to null here. .......... return obj; } ObjectABC o = funcABC(); if(o!=null){ //do something } Why do we need this null check if we are sure that it will never be null? Is it a good practice or not? 2. int pplReached = funA(..,..,..); int totalPpl = funB(..,..,..); funA() just puts a few more restrictions over the result of funB(). Double percentage = (totalPpl==0||totalPpl<pplReached) ? 0.0 : pplReached/totalPpl; Do we again need this check? The question is: aren't we swallowing some fundamental issue by putting in such checks? Issues which should ideally surface are hidden by them. What is the recommended way?
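
    To make the trade-off concrete, here is a small hypothetical Python sketch of the percentage example: the first version silently maps an impossible input to 0.0 and hides the upstream problem, while the second lets it surface.

```python
def percentage_reached_defensive(ppl_reached, total_ppl):
    # Swallows the problem: an impossible input quietly becomes 0.0.
    if total_ppl == 0 or total_ppl < ppl_reached:
        return 0.0
    return ppl_reached / total_ppl

def percentage_reached_fail_fast(ppl_reached, total_ppl):
    # Surfaces the problem: "reached" exceeding "total" is a bug upstream.
    if total_ppl < ppl_reached:
        raise ValueError("reached %d exceeds total %d" % (ppl_reached, total_ppl))
    return ppl_reached / total_ppl if total_ppl else 0.0
```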

    Read the article

  • Where should my "filtering" logic reside with Linq-2-SQL and ASP.NET-MVC in View or Controller?

    - by Nate Bross
    I have a main table, TableA, with several "child" tables, TableAChild1 and TableAChild2. I have a view which shows the information in TableA and then has two columns of all items in TableAChild1 and TableAChild2 respectively; they are rendered with partial views. Both child tables have a bit field for VisibleToAll, and depending on user role, I'd like to either display all related rows, or only related rows where VisibleToAll = true. This code feels like it should be in the controller, but I'm not sure how it would look, because as it stands the controller (limited version) looks like this: return View("TableADetailView", repos.GetTableA(id)); Would something like this even work, and would it be bad? What if my DataContext gets submitted; would that delete all the rows that have VisibleToAll == false? var tblA = repos.GetTableA(id); tblA.TableAChild1 = tblA.TableAChild1.Where(tmp => tmp.VisibleToAll == true); tblA.TableAChild2 = tblA.TableAChild2.Where(tmp => tmp.VisibleToAll == true); return View("TableADetailView", tblA); It would also be simple to add that logic to the RenderPartial call from the main view: <% Html.RenderPartial("TableAChild1", Model.TableAChild1.Where(tmp => tmp.VisibleToAll == true)); %>

    Read the article

  • Stuck on the logic of creating tags for posts (like SO tags)? (PHP)

    - by ggfan
    I am stuck on how to create tags for each post on my site. I am not sure how to add the tags into the database. Currently I have 3 tables:

    Tags:        TagID, TagName
    Posting:     posting_id, title
    PostingTags: posting_id, tagid

    The Tags table is just the names of the tags (e.g. 1 PHP, 2 MySQL, 3 HTML). The Posting table holds the posts (e.g. 1 What is PHP?, 2 What is CSS?, 3 What is HTML?). The PostingTags table stores the relation between postings and tags. When a user submits a posting, I insert the data into the "posting" table. It automatically inserts the posting_id for each post (posting_id is a primary key). $title = mysqli_real_escape_string($dbc, trim($_POST['title'])); $query4 = "INSERT INTO posting (title) VALUES ('$title')"; mysqli_query($dbc, $query4); However, how do I insert the tags for each post? When users are filling out the form, there is a checkbox area for all the available tags and they check off whatever tags they want. (I am not yet handling users typing in their own tags.) This shows each tag with a checkbox; when users check off a tag, it gets stored in an array called "postingtag[]". <label class="styled">Select Tags:</label> <?php $dbc = mysqli_connect(DB_HOST, DB_USER, DB_PASSWORD, DB_NAME); $query5 = "SELECT * FROM tags ORDER BY tagname"; $data5 = mysqli_query($dbc, $query5); while ($row5 = mysqli_fetch_array($data5)) { echo '<li><input type="checkbox" name="postingtag[]" value="'.$row5['tagname'].'">'.$row5['tagname'].'</li>'; } ?> My question is: how do I insert the tags in the array ("postingtag") into my "postingtags" table? Should I... $postingtag = $_POST["postingtag"]; foreach($postingtag as $value){ $query5 = "INSERT INTO postingtags (posting_id, tagID) VALUES (____, $value)"; mysqli_query($dbc, $query5); } 1. In this query, how do I get the posting_id value of the post? I am stuck on the logic here, so if someone can explain the next step, I would appreciate it! Is there an easier way to insert tags?
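
    The usual pattern is: insert the posting, read back its auto-generated id (in PHP that is what mysqli_insert_id($dbc) returns immediately after the INSERT), then insert one row per checked tag into the join table. Below is a rough Python/SQLite sketch of that flow, not the site's actual PHP; it assumes the checkboxes submit tag ids rather than tag names.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE posting (posting_id INTEGER PRIMARY KEY, title TEXT)")
cur.execute("CREATE TABLE tags (tagid INTEGER PRIMARY KEY, tagname TEXT)")
cur.execute("CREATE TABLE postingtags (posting_id INTEGER, tagid INTEGER)")
cur.executemany("INSERT INTO tags (tagname) VALUES (?)", [("PHP",), ("MySQL",), ("HTML",)])

# 1. Insert the post and capture its auto-generated primary key.
cur.execute("INSERT INTO posting (title) VALUES (?)", ("What is PHP?",))
posting_id = cur.lastrowid   # PHP equivalent: mysqli_insert_id($dbc)

# 2. Insert one join-table row per checked tag.
checked_tag_ids = [1, 2]     # values coming from the postingtag[] checkboxes
cur.executemany(
    "INSERT INTO postingtags (posting_id, tagid) VALUES (?, ?)",
    [(posting_id, tag_id) for tag_id in checked_tag_ids],
)
conn.commit()
```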

    Read the article

  • Explanation of converting/exporting an XML document to a relational database using XSLT

    - by Yaaqov
    I would like to better understand the basic steps needed to take an XML document like this breakfast menu... <?xml version="1.0" encoding="ISO-8859-1"?> <breakfast_menu> <food> <name>Belgian Waffles</name> <price>$5.95</price> <description>two of our famous Belgian Waffles with plenty of real maple syrup</description> <calories>650</calories> </food> <food> <name>Strawberry Belgian Waffles</name> <price>$7.95</price> <description>light Belgian waffles covered with strawberries and whipped cream</description> <calories>900</calories> </food> <food> <name>Berry-Berry Belgian Waffles</name> <price>$8.95</price> <description>light Belgian waffles covered with an assortment of fresh berries and whipped cream</description> <calories>900</calories> </food> <food> <name>French Toast</name> <price>$4.50</price> <description>thick slices made from our homemade sourdough bread</description> <calories>600</calories> </food> <food> <name>Homestyle Breakfast</name> <price>$6.95</price> <description>two eggs, bacon or sausage, toast, and our ever-popular hash browns</description> <calories>950</calories> </food> </breakfast_menu> ...and "export" it to, say, an Access or MySQL database using XSLT, creating two joined tables:

    Table: breakfast_menu
      Field: menu_item_id
      Field: food_id
    Table: food
      Field: food_id
      Field: name
      Field: price
      Field: description
      Field: calories

    If there are online tutorials on this that you know of, I'd be interested in learning more as well. Thanks.
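
    One thing worth noting is that XSLT by itself only emits text, so the usual route is to have the stylesheet generate something the database can load (CSV files or a stream of INSERT statements), one output per table. Purely to illustrate the target shape of the two tables, here is a hypothetical Python/SQLite version of the same flattening (not XSLT): each food element becomes a food row plus a breakfast_menu row pointing at it.

```python
import sqlite3
import xml.etree.ElementTree as ET

root = ET.parse("breakfast_menu.xml").getroot()   # the document shown above

conn = sqlite3.connect("menu.db")
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS food ("
            "food_id INTEGER PRIMARY KEY, name TEXT, price TEXT, "
            "description TEXT, calories INTEGER)")
cur.execute("CREATE TABLE IF NOT EXISTS breakfast_menu ("
            "menu_item_id INTEGER PRIMARY KEY, "
            "food_id INTEGER REFERENCES food(food_id))")

for food in root.findall("food"):
    # One row in food per <food> element...
    cur.execute(
        "INSERT INTO food (name, price, description, calories) VALUES (?, ?, ?, ?)",
        (food.findtext("name"), food.findtext("price"),
         food.findtext("description"), int(food.findtext("calories"))),
    )
    # ...and one row in breakfast_menu referencing it.
    cur.execute("INSERT INTO breakfast_menu (food_id) VALUES (?)", (cur.lastrowid,))

conn.commit()
```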

    Read the article

  • Autodetect/mount SD cards and run a script for them on Linux

    - by Brendan
    Hey everyone, I'm currently running SME Server, and need to have a script run upon the attachment of SD cards to my server. The script itself works fine (it copies the contents of the cards), but the automounting and execution of the script is where I'm having issues. I have a USB hub consisting of 10 USB ports, which shows up as:
    [root@server ~]# lsusb
    Bus 004 Device 002: ID 0000:0000
    Bus 004 Device 001: ID 0000:0000
    Bus 003 Device 001: ID 0000:0000
    Bus 002 Device 001: ID 0000:0000
    Bus 001 Device 055: ID 1a40:0101 TERMINUS TECHNOLOGY INC.
    Bus 001 Device 051: ID 1a40:0101 TERMINUS TECHNOLOGY INC.
    Bus 001 Device 050: ID 1a40:0101 TERMINUS TECHNOLOGY INC.
    Bus 001 Device 001: ID 0000:0000
    (The hub is the TERMINUS TECHNOLOGY INC. entries.) As I cannot plug SD cards directly into the server, I use USB-to-SD-card attachments (10 of them) plugged into the hub to read the cards. Upon plugging the 10 attachments (without cards) into the hub, lsusb yields the following:
    [root@server ~]# lsusb
    Bus 004 Device 002: ID 0000:0000
    Bus 004 Device 001: ID 0000:0000
    Bus 003 Device 001: ID 0000:0000
    Bus 002 Device 001: ID 0000:0000
    Bus 001 Device 073: ID 05e3:0723 Genesys Logic, Inc.
    Bus 001 Device 072: ID 05e3:0723 Genesys Logic, Inc.
    Bus 001 Device 071: ID 05e3:0723 Genesys Logic, Inc.
    Bus 001 Device 070: ID 05e3:0723 Genesys Logic, Inc.
    Bus 001 Device 069: ID 05e3:0723 Genesys Logic, Inc.
    Bus 001 Device 068: ID 05e3:0723 Genesys Logic, Inc.
    Bus 001 Device 067: ID 05e3:0723 Genesys Logic, Inc.
    Bus 001 Device 066: ID 05e3:0723 Genesys Logic, Inc.
    Bus 001 Device 065: ID 05e3:0723 Genesys Logic, Inc.
    Bus 001 Device 064: ID 05e3:0723 Genesys Logic, Inc.
    Bus 001 Device 055: ID 1a40:0101 TERMINUS TECHNOLOGY INC.
    Bus 001 Device 051: ID 1a40:0101 TERMINUS TECHNOLOGY INC.
    Bus 001 Device 050: ID 1a40:0101 TERMINUS TECHNOLOGY INC.
    Bus 001 Device 001: ID 0000:0000
    As you can see, the readers are the "Genesys Logic, Inc." entries. Plugging an SD card into a reader doesn't affect lsusb (it reads exactly as above), however my system recognises the cards fine, as indicated by dmesg:
    Attached scsi generic sg11 at scsi54, channel 0, id 0, lun 0, type 0
    USB Mass Storage device found at 73
    SCSI device sdd: 31388672 512-byte hdwr sectors (16071 MB)
    sdd: Write Protect is on
    sdd: Mode Sense: 03 00 80 00
    sdd: assuming drive cache: write through
    SCSI device sdd: 31388672 512-byte hdwr sectors (16071 MB)
    sdd: Write Protect is on
    sdd: Mode Sense: 03 00 80 00
    sdd: assuming drive cache: write through
    sdd: sdd1
    SCSI device sdd: 31388672 512-byte hdwr sectors (16071 MB)
    sdd: Write Protect is on
    sdd: Mode Sense: 03 00 80 00
    sdd: assuming drive cache: write through
    SCSI device sdd: 31388672 512-byte hdwr sectors (16071 MB)
    sdd: Write Protect is on
    sdd: Mode Sense: 03 00 80 00
    sdd: assuming drive cache: write through
    sdd: sdd1
    SCSI device sdd: 31388672 512-byte hdwr sectors (16071 MB)
    sdd: Write Protect is on
    sdd: Mode Sense: 03 00 80 00
    sdd: assuming drive cache: write through
    SCSI device sdd: 31388672 512-byte hdwr sectors (16071 MB)
    sdd: Write Protect is on
    sdd: Mode Sense: 03 00 80 00
    sdd: assuming drive cache: write through
    sdd: sdd1
    If I manually mount sdd1 (mount /dev/sdd1 /somedirectory/) this works fine. What I'm really after is a solution that automounts each of the cards as they are inserted into the readers, and executes a script for them (this will involve copying their contents to another directory).
    My problem is that I don't know how to do this. I don't think udev will work, as the USB devices don't change; however, if I could somehow get udev working with /dev/disk/by-path/ I think this is doable (it seems to keep constant entries). ls /dev/disk/by-path returns:
    pci-0000:00:1d.7-usb-0:4.1.1.1:1.0-scsi-0:0:0:0
    pci-0000:00:1d.7-usb-0:4.1.1.2:1.0-scsi-0:0:0:0
    pci-0000:00:1d.7-usb-0:4.1.1.3:1.0-scsi-0:0:0:0
    pci-0000:00:1d.7-usb-0:4.1.1.4:1.0-scsi-0:0:0:0
    pci-0000:00:1d.7-usb-0:4.1.2:1.0-scsi-0:0:0:0
    pci-0000:00:1d.7-usb-0:4.1.3:1.0-scsi-0:0:0:0
    pci-0000:00:1d.7-usb-0:4.1.4:1.0-scsi-0:0:0:0
    pci-0000:00:1d.7-usb-0:4.2:1.0-scsi-0:0:0:0
    pci-0000:00:1d.7-usb-0:4.3:1.0-scsi-0:0:0:0
    pci-0000:00:1d.7-usb-0:4.4:1.0-scsi-0:0:0:0
    pci-0000:00:1d.7-usb-0:4.4:1.0-scsi-0:0:0:0-part1
    pci-0000:0b:01.0-scsi-0:0:1:0
    pci-0000:0b:01.0-scsi-0:0:1:0-part1
    pci-0000:0b:01.0-scsi-0:0:1:0-part2
    From the above, we can see I have only one card plugged into a reader (pci-0000:00:1d.7-usb-0:4.4:1.0-scsi-0:0:0:0-part1). Running mount /dev/disk/by-path/pci-0000\:00\:1d.7-usb-0\:4.4\:1.0-scsi-0\:0\:0\:0-part1 works and places the card under /media/usbdisk/, however: mount /dev/disk/by-path/pci-0000\:00\:1d.7-usb-0\:4.4\:1.0-scsi-0\:0\:0\:0-part1 slot1/ doesn't work, and returns "mount: can't get address for /dev/disk/by-path/pci-0000". Any ideas and solutions would be great; I've seen the knowledge of a lot of the guys on here before, so I'm hopeful someone can help me out. Thanks
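
    Whatever ends up triggering it (udev, hotplug, or simply a polling loop), the per-card work can be scripted against the stable by-path names. The sketch below is hypothetical Python: the glob pattern is inferred from the listing above, and the mount root and copy destination are made up; it mounts each card partition read-only, copies its contents, and unmounts again.

```python
import glob
import os
import subprocess

# First partition of each reader hanging off the USB hub (pattern assumed from the listing above).
CARD_PARTITIONS = "/dev/disk/by-path/pci-0000:00:1d.7-usb-0:4.*-part1"
MOUNT_ROOT = "/mnt/sdcards"
COPY_TARGET = "/data/incoming"   # hypothetical destination directory

for idx, dev in enumerate(sorted(glob.glob(CARD_PARTITIONS)), start=1):
    mountpoint = os.path.join(MOUNT_ROOT, "slot%d" % idx)
    dest = os.path.join(COPY_TARGET, "slot%d" % idx)
    for path in (mountpoint, dest):
        if not os.path.isdir(path):
            os.makedirs(path)
    if subprocess.call(["mount", "-o", "ro", dev, mountpoint]) != 0:
        continue                      # no readable card in this slot; skip it
    try:
        subprocess.call(["cp", "-a", mountpoint + "/.", dest])
    finally:
        subprocess.call(["umount", mountpoint])
```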

    Read the article

  • Cisco ASA: How to route PPPoE-assigned subnet?

    - by Martijn Heemels
    We've just received a fiber uplink, and I'm trying to configure our Cisco ASA 5505 to properly use it. The provider requires us to connect via PPPoE, and I managed to configure the ASA as a PPPoE client and establish a connection. The ASA is assigned an IP address by PPPoE, and I can ping out from the ASA to the internet, but I should have access to an entire /28 subnet. I can't figure out how to get that subnet configured on the ASA, so that I can route or NAT the available public addresses to various internal hosts. My assigned range is: 188.xx.xx.176/28 The address I get via PPPoE is 188.xx.xx.177/32, which according to our provider is our Default Gateway address. They claim the subnet is correctly routed to us on their side. How does the ASA know which range it is responsible for on the Fiber interface? How do I use the addresses from my range? To clarify my config; The ASA is currently configured to default-route to our ADSL uplink on port Ethernet0/0 (interface vlan2, nicknamed Outside). The fiber is connected to port Ethernet0/2 (interface vlan50, nicknamed Fiber) so I can configure and test it before making it the default route. Once I'm clear on how to set it all up, I'll fully replace the Outside interface with Fiber. My config (rather long): : Saved : ASA Version 8.3(2)4 ! hostname gw domain-name example.com enable password ****** encrypted passwd ****** encrypted names name 10.10.1.0 Inside-dhcp-network description Desktops and clients that receive their IP via DHCP name 10.10.0.208 svn.example.com description Subversion server name 10.10.0.205 marvin.example.com description LAMP development server name 10.10.0.206 dns.example.com description DNS, DHCP, NTP ! interface Vlan2 description Old ADSL WAN connection nameif outside security-level 0 ip address 192.168.1.2 255.255.255.252 ! interface Vlan10 description LAN vlan 10 Regular LAN traffic nameif inside security-level 100 ip address 10.10.0.254 255.255.0.0 ! interface Vlan11 description LAN vlan 11 Lab/test traffic nameif lab security-level 90 ip address 10.11.0.254 255.255.0.0 ! interface Vlan20 description LAN vlan 20 ISCSI traffic nameif iscsi security-level 100 ip address 10.20.0.254 255.255.0.0 ! interface Vlan30 description LAN vlan 30 DMZ traffic nameif dmz security-level 50 ip address 10.30.0.254 255.255.0.0 ! interface Vlan40 description LAN vlan 40 Guests access to the internet nameif guests security-level 50 ip address 10.40.0.254 255.255.0.0 ! interface Vlan50 description New WAN Corporate Internet over fiber nameif fiber security-level 0 pppoe client vpdn group KPN ip address pppoe ! interface Ethernet0/0 switchport access vlan 2 speed 100 duplex full ! interface Ethernet0/1 switchport trunk allowed vlan 10,11,30,40 switchport trunk native vlan 10 switchport mode trunk ! interface Ethernet0/2 switchport access vlan 50 speed 100 duplex full ! interface Ethernet0/3 shutdown ! interface Ethernet0/4 shutdown ! interface Ethernet0/5 switchport access vlan 20 ! interface Ethernet0/6 shutdown ! interface Ethernet0/7 shutdown ! 
boot system disk0:/asa832-4-k8.bin ftp mode passive clock timezone CEST 1 clock summer-time CEDT recurring last Sun Mar 2:00 last Sun Oct 3:00 dns domain-lookup inside dns server-group DefaultDNS name-server dns.example.com domain-name example.com same-security-traffic permit inter-interface same-security-traffic permit intra-interface object network inside-net subnet 10.10.0.0 255.255.0.0 object network svn.example.com host 10.10.0.208 object network marvin.example.com host 10.10.0.205 object network lab-net subnet 10.11.0.0 255.255.0.0 object network dmz-net subnet 10.30.0.0 255.255.0.0 object network guests-net subnet 10.40.0.0 255.255.0.0 object network dhcp-subnet subnet 10.10.1.0 255.255.255.0 description DHCP assigned addresses on Vlan 10 object network Inside-vpnpool description Pool of assignable addresses for VPN clients object network vpn-subnet subnet 10.10.3.0 255.255.255.0 description Address pool assignable to VPN clients object network dns.example.com host 10.10.0.206 description DNS, DHCP, NTP object-group service iscsi tcp description iscsi storage traffic port-object eq 3260 access-list outside_access_in remark Allow access from outside to HTTP on svn. access-list outside_access_in extended permit tcp any object svn.example.com eq www access-list Insiders!_splitTunnelAcl standard permit 10.10.0.0 255.255.0.0 access-list iscsi_access_in remark Prevent disruption of iscsi traffic from outside the iscsi vlan. access-list iscsi_access_in extended deny tcp any interface iscsi object-group iscsi log warnings ! snmp-map DenyV1 deny version 1 ! pager lines 24 logging enable logging timestamp logging asdm-buffer-size 512 logging monitor warnings logging buffered warnings logging history critical logging asdm errors logging flash-bufferwrap logging flash-minimum-free 4000 logging flash-maximum-allocation 2000 mtu outside 1500 mtu inside 1500 mtu lab 1500 mtu iscsi 9000 mtu dmz 1500 mtu guests 1500 mtu fiber 1492 ip local pool DHCP_VPN 10.10.3.1-10.10.3.20 mask 255.255.0.0 ip verify reverse-path interface outside no failover icmp unreachable rate-limit 10 burst-size 5 asdm image disk0:/asdm-635.bin asdm history enable arp timeout 14400 nat (inside,outside) source static any any destination static vpn-subnet vpn-subnet ! 
object network inside-net nat (inside,outside) dynamic interface object network svn.example.com nat (inside,outside) static interface service tcp www www object network lab-net nat (lab,outside) dynamic interface object network dmz-net nat (dmz,outside) dynamic interface object network guests-net nat (guests,outside) dynamic interface access-group outside_access_in in interface outside access-group iscsi_access_in in interface iscsi route outside 0.0.0.0 0.0.0.0 192.168.1.1 1 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute timeout tcp-proxy-reassembly 0:01:00 dynamic-access-policy-record DfltAccessPolicy aaa-server SBS2003 protocol radius aaa-server SBS2003 (inside) host 10.10.0.204 timeout 5 key ***** aaa authentication enable console SBS2003 LOCAL aaa authentication ssh console SBS2003 LOCAL aaa authentication telnet console SBS2003 LOCAL http server enable http 10.10.0.0 255.255.0.0 inside snmp-server host inside 10.10.0.207 community ***** version 2c snmp-server location Server room snmp-server contact [email protected] snmp-server community ***** snmp-server enable traps snmp authentication linkup linkdown coldstart snmp-server enable traps syslog crypto ipsec transform-set TRANS_ESP_AES-256_SHA esp-aes-256 esp-sha-hmac crypto ipsec transform-set TRANS_ESP_AES-256_SHA mode transport crypto ipsec transform-set ESP-AES-256-MD5 esp-aes-256 esp-md5-hmac crypto ipsec transform-set ESP-DES-SHA esp-des esp-sha-hmac crypto ipsec transform-set ESP-DES-MD5 esp-des esp-md5-hmac crypto ipsec transform-set ESP-AES-192-MD5 esp-aes-192 esp-md5-hmac crypto ipsec transform-set ESP-3DES-MD5 esp-3des esp-md5-hmac crypto ipsec transform-set ESP-AES-256-SHA esp-aes-256 esp-sha-hmac crypto ipsec transform-set ESP-AES-128-SHA esp-aes esp-sha-hmac crypto ipsec transform-set ESP-AES-192-SHA esp-aes-192 esp-sha-hmac crypto ipsec transform-set ESP-AES-128-MD5 esp-aes esp-md5-hmac crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac crypto ipsec security-association lifetime seconds 28800 crypto ipsec security-association lifetime kilobytes 4608000 crypto dynamic-map outside_dyn_map 20 set pfs group5 crypto dynamic-map outside_dyn_map 20 set transform-set TRANS_ESP_AES-256_SHA crypto dynamic-map SYSTEM_DEFAULT_CRYPTO_MAP 65535 set transform-set ESP-AES-128-SHA ESP-AES-128-MD5 ESP-AES-192-SHA ESP-AES-192-MD5 ESP-AES-256-SHA ESP-AES-256-MD5 ESP-3DES-SHA ESP-3DES-MD5 ESP-DES-SHA ESP-DES-MD5 crypto map outside_map 65535 ipsec-isakmp dynamic SYSTEM_DEFAULT_CRYPTO_MAP crypto map outside_map interface outside crypto isakmp enable outside crypto isakmp policy 1 authentication pre-share encryption 3des hash sha group 2 lifetime 86400 telnet 10.10.0.0 255.255.0.0 inside telnet timeout 5 ssh scopy enable ssh 10.10.0.0 255.255.0.0 inside ssh timeout 5 ssh version 2 console timeout 30 management-access inside vpdn group KPN request dialout pppoe vpdn group KPN localname INSIDERS vpdn group KPN ppp authentication pap vpdn username INSIDERS password ***** store-local dhcpd address 10.40.1.0-10.40.1.100 guests dhcpd dns 8.8.8.8 8.8.4.4 interface guests dhcpd update dns interface guests dhcpd enable guests ! 
threat-detection basic-threat threat-detection scanning-threat threat-detection statistics host number-of-rate 2 threat-detection statistics port number-of-rate 3 threat-detection statistics protocol number-of-rate 3 threat-detection statistics access-list threat-detection statistics tcp-intercept rate-interval 30 burst-rate 400 average-rate 200 ntp server dns.example.com source inside prefer webvpn group-policy DfltGrpPolicy attributes vpn-tunnel-protocol IPSec l2tp-ipsec group-policy Insiders! internal group-policy Insiders! attributes wins-server value 10.10.0.205 dns-server value 10.10.0.206 vpn-tunnel-protocol IPSec l2tp-ipsec split-tunnel-policy tunnelspecified split-tunnel-network-list value Insiders!_splitTunnelAcl default-domain value example.com username martijn password ****** encrypted privilege 15 username marcel password ****** encrypted privilege 15 tunnel-group DefaultRAGroup ipsec-attributes pre-shared-key ***** tunnel-group Insiders! type remote-access tunnel-group Insiders! general-attributes address-pool DHCP_VPN authentication-server-group SBS2003 LOCAL default-group-policy Insiders! tunnel-group Insiders! ipsec-attributes pre-shared-key ***** ! class-map global-class match default-inspection-traffic class-map type inspect http match-all asdm_medium_security_methods match not request method head match not request method post match not request method get ! ! policy-map type inspect dns preset_dns_map parameters message-length maximum 512 policy-map type inspect http http_inspection_policy parameters protocol-violation action drop-connection policy-map global-policy class global-class inspect dns inspect esmtp inspect ftp inspect h323 h225 inspect h323 ras inspect http inspect icmp inspect icmp error inspect mgcp inspect netbios inspect pptp inspect rtsp inspect snmp DenyV1 ! service-policy global-policy global smtp-server 123.123.123.123 prompt hostname context call-home profile CiscoTAC-1 no active destination address http https://tools.cisco.com/its/service/oddce/services/DDCEService destination address email [email protected] destination transport-method http subscribe-to-alert-group diagnostic subscribe-to-alert-group environment subscribe-to-alert-group inventory periodic monthly subscribe-to-alert-group configuration periodic monthly subscribe-to-alert-group telemetry periodic daily hpm topN enable Cryptochecksum:a76bbcf8b19019771c6d3eeecb95c1ca : end asdm image disk0:/asdm-635.bin asdm location svn.example.com 255.255.255.255 inside asdm location marvin.example.com 255.255.255.255 inside asdm location dns.example.com 255.255.255.255 inside asdm history enable

    Read the article

  • PHP Markdown Extra and Definition Lists

    - by Kirk Bentley
    I'm currently generating a definition list with PHP Markdown Extra using the following syntax:

    Term
    : Description
    : Description Two

    My Other Term
    : Description

    which generates the following HTML: <dl> <dt>Term</dt> <dd>Description</dd> <dd>Description Two</dd> <dt>My Other Term</dt> <dd>Description</dd> </dl> Does anyone know how I can get Markdown to create a separate definition list for each term and its descriptions, producing markup like this? <dl> <dt>Term</dt> <dd>Description</dd> <dd>Description Two</dd> </dl> <dl> <dt>My Other Term</dt> <dd>Description</dd> </dl>

    Read the article

  • QueryReadStore loads JSON into DataGrid, but JsonRestStore does not (from the same source)

    - by labratmatt
    I'm building a Dojo DataGrid from JSON data provided by my REST interface. The DataGrid loads the data fine using a QueryReadStore, but doesn't seem to work with the same same data piped into a JsonRestStore. I'm using the following Dojo libs with Dojo 1.4.1: dojo.require("dojox.data.JsonRestStore"); dojo.require("dojox.grid.DataGrid"); dojo.require("dojox.data.QueryReadStore"); dojo.require("dojo.parser"); I declare my stores in the following manner: var storeJRS = new dojox.data.JsonRestStore({target:"api/collaborations.php/1", idAttribute: 'items[].id'}); var storeQRS = new dojox.data.QueryReadStore({url:"api/collaborations.php/1", requestMethod:"get"}); I create my grid layout like this: var gridLayout = [ new dojox.grid.cells.RowIndex({ name: "Row #", width: 5, styles: "text-align: left;" }), { name: "Name", field: "name", styles: "text-align:right;", width:20 }, { name: "Description", field: "description", width:30 } ]; I create my DataGrid as follows: The above works, but if I use QueryReadStore as my store, the grid is created with the headers (Name, Description), but it isn't populated with any rows: <div dojoType="dojox.grid.DataGrid" jsid="grid3" store="storeQRS" structure="gridLayout" style="height:500px; width:1000px;"></div> Using FireBug, I can see that QueryReadStore is getting my JSON data from my REST interface. It looks like the following: {"numRows":6,"items":[{"name":"My Super Cool Collab","description":"This is for all the super cool people in the super cool group","id":1},{"name":"My Other Super Cool","description":"This is for all the other super cool people","id":3},{"name":"This is another coll","description":"This is just some other collab","id":4},{"name":"some new collab","description":"this is a new collab","id":5},{"name":"yet another new coll","description":"uh huh","id":6},{"name":"asdf","description":"asdf","id":7}]} Any ideas? Thanks.

    Read the article

  • Get node content from an XML file and transform it into a PHP array?

    - by Kirzilla
    Hello, I'm looking for a solution to extract a node from a large XML file (using xmlstarlet, http://xmlstar.sourceforge.net/) and then parse this node into a PHP array. elements.xml: <?xml version="1.0"?> <elements> <element id="1" par1="val1_1" par2="val1_2" par3="val1_3"> <title>element 1 title</title> <description>element 1 description</description> </element> <element id="2" par1="val2_1" par2="val2_2" par3="val2_3"> <title>element 2 title</title> <description>element 2 description</description> </element> </elements> To extract the element tag with id="1" using xmlstarlet, I'm executing this shell command... xmlstarlet sel -t -c "/elements/element[id=1]" elements.xml This shell command outputs something like this... <element id="1" par1="val1_1" par2="val1_2" par3="val1_3"> <title>element 1 title</title> <description>element 1 description</description> </element> How could I parse this shell output into a PHP array? Thank you.
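
    The question asks for PHP, but the shape of the parsing is easy to see in a small hypothetical Python sketch: run the same xmlstarlet command, then turn the returned element into a dictionary of its attributes plus the text of its children (the equivalent of the desired PHP array). The predicate is written with @id here because id is an attribute in the sample file. In PHP, feeding the same fragment to simplexml_load_string() gives a comparable structure.

```python
import subprocess
import xml.etree.ElementTree as ET

# Run the same xmlstarlet extraction and capture its output (assumes xmlstarlet is installed).
xml_fragment = subprocess.check_output(
    ["xmlstarlet", "sel", "-t", "-c", "/elements/element[@id=1]", "elements.xml"]
)

elem = ET.fromstring(xml_fragment)
record = dict(elem.attrib)                                 # id, par1, par2, par3
record.update({child.tag: child.text for child in elem})   # title, description
print(record)
```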

    Read the article

  • VB.NET class inherits a base class and implements an interface issue (works in C#)

    - by 300 baud
    I am trying to create a class in VB.NET which inherits a base abstract class and also implements an interface. The interface declares a string property called Description. The base class contains a string property called Description. The main class inherits the base class and implements the interface. The existence of the Description property in the base class fulfills the interface requirements. This works fine in C# but causes issues in VB.NET. First, here is an example of the C# code which works: public interface IFoo { string Description { get; set; } } public abstract class FooBase { public string Description { get; set; } } public class MyFoo : FooBase, IFoo { } Now here is the VB.NET version which gives a compiler error: Public Interface IFoo Property Description() As String End Interface Public MustInherit Class FooBase Private _Description As String Public Property Description() As String Get Return _Description End Get Set(ByVal value As String) _Description = value End Set End Property End Class Public Class MyFoo Inherits FooBase Implements IFoo End Class If I make the base class (FooBase) implement the interface and add the Implements IFoo.Description to the property all is good, but I do not want the base class to implement the interface. The compiler error is: Class 'MyFoo' must implement 'Property Description() As String' for interface 'IFoo'. Implementing property must have matching 'ReadOnly' or 'WriteOnly' specifiers. Can VB.NET not handle this, or do I need to change my syntax somewhere to get this to work?

    Read the article

  • XML output in HTML

    - by umitunal
    Hi, my XML data is: &lt;Item "/api/items/4000000002011"&gt;&lt;ItemID&gt;4000000002011&lt;/ItemID&gt;&lt;Name&gt;Sample Item1&lt;/Name&gt;&lt;Description&gt;Sample Description&lt;/Description&gt;&lt;Rate&gt;34.00&lt;/Rate&gt;&lt;Tax1Name&gt;PST&lt;/Tax1Name&gt;&lt;Tax1Percentage&gt;8&lt;/Tax1Percentage&gt;&lt;Tax2Name/&gt;&lt;Tax2Percentage/&gt;&lt;/Item&gt; HTML output: <Item uri="/api/items/4000000002011"> <ItemID>4000000002011</ItemID><Name>Sample Item1</Name><Description>Sample Description</Description><Rate>34.00</Rate><Tax1Name>PST</Tax1Name><Tax1Percentage>8</Tax1Percentage><Tax2Name/><Tax2Percentage/></Item> I want this HTML output. How do I get it? <Item "/api/items/4000000002011"> <ItemID>4000000002011</ItemID> <Name>Sample Item1</Name> <Description>Sample Description</Description> <Rate>34.00</Rate> <Tax1Name>PST</Tax1Name> <Tax1Percentage>8</Tax1Percentage> <Tax2Name/> <Tax2Percentage/> </Item>

    Read the article

  • jquery, moving the current item to the top

    - by azz0r
    Hello, The problem I'm having is that if the fourth item has a long description, it cuts off as it goes below #features. What I was hoping is someone would have a suggestion on how to make the current item selected to move to the top? Code below: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <script type="text/javascript" src="/library/jquery/1.4.js"></script> <script> Model.FeatureBar = { current:0, items:{}, init: function init(options){ var me = this; me.triggers = [] me.slides = [] this.container = jQuery('#features'); jQuery('.feature').each(function(i){ me.triggers[i] = {url: jQuery(this).children('.feature-title a').href, title: jQuery(this).children('.feature-title'),description:jQuery(this).children('.feature-description')} me.slides[i] = {image: jQuery(this).children('.feature-image')} }); for(var i in this.slides){ this.slides[i].image.hide(); this.triggers[i].description.hide(); } Model.FeatureBar.goToItem(0); setInterval(function(){Model.FeatureBar.next()},5000); }, next: function next(){ var i = (this.current+1 < this.triggers.length) ? this.current+1 : 0; this.goToItem(i); }, previous: function previous(){ var i = (this.current-1 > 1) ? this.current-1 : this.triggers.length; this.goToItem(i); }, goToItem: function goToItem(i) { if (!this.slides[i]) { throw 'Slide out of range'; } this.triggers[this.current].description.slideUp(); this.triggers[i].description.slideDown(); this.slides[this.current].image.hide(); this.slides[i].image.show(); this.current = i; }, } </script> </head> <body> <div id="features"> <div class="feature current"> <div style="display: none;" class="feature-image"> <a href="google.com"><img src="/design/images/four-oh-four.png"></a> </div> <h2 class="feature-title"><a href="google.com">Item 3</a></h2> <p style="display: none;" class="feature-description"><a href="google.com">Item 3 description</a></p> </div> <div class="feature"> <div class="movie-cover"> <a href="/movie/movie"><img src="/image1" alt="" title=""></a> </div> <div style="display: block;" class="feature-image"> <a href="/rudeboiz-14-arse-splitters/movie"><img src="/images/Featured/rude.png"></a> </div> <h2 class="feature-title"><a href="/rude/movie">Item 2</a></h2> <p style="display: block;" class="feature-description"><a href="/rude/movie">Item 2 description</a></p> </div> <div class="feature"> <div style="display: none;" class="feature-image"> <a href="/pissed-up-brits/movie"><img src="/design/images/four-oh-four.png"></a> </div> <h2 class="feature-title"><a href="/pis/movie">Item 1</a></h2> <p style="display: none;" class="feature-description"><a href="/pis/movie">Item one description</a></p> </div> <div class="feature"> <div style="display: none;" class="feature-image"> <a href="what.com"><img src="/images/Featured/indie.png"></a> </div> <h2 class="feature-title"><a href="what.com">Item 4</a></h2> <p style="display: none;" class="feature-description"><a href="what.com">Item 4 description</a></p> </div> </div> </body> </html>

    Read the article

  • Ruby on Rails: attr_accessor for submodels

    - by williamjones
    I'm working with some models where a lot of a given model's key attributes are actually stored in a submodel. Example: class WikiArticle has_many :revisions has_one :current_revision, :class_name => "Revision", :order => "created_at DESC" end class Revision has_one :wiki_article end The Revision class has a ton of database fields, and the WikiArticle has very few. However, I often have to access a Revision's fields from the context of a WikiArticle. The most important case of this is probably on creating an article. I've been doing that with lots of methods that look like this, one for each field: def description if @description @description elsif current_revision current_revision.description else "" end end def description=(string) @description = string end And then on my save, I save @description into a new revision. This whole thing reminds me a lot of attr_accessor, only it doesn't seem like I can get attr_accessor to do what I need. How can I define an attr_submodel_accessor such that I could just give field names and have it automatically create all those methods the way attr_accessor does?
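
    In Ruby this kind of accessor is normally generated with define_method inside a class-level macro (or delegated with ActiveSupport's delegate). Just to make the mechanics of the pattern explicit, here is the same idea sketched in Python rather than Ruby: a factory builds one property per field name, each falling back to current_revision when no local override has been set. All names here are hypothetical.

```python
def submodel_accessor(*field_names):
    """Create properties that read from current_revision unless overridden locally."""
    def decorate(cls):
        for name in field_names:
            private = "_" + name

            def getter(self, private=private, name=name):
                override = getattr(self, private, None)
                if override is not None:
                    return override                      # value set locally, pending save
                revision = getattr(self, "current_revision", None)
                return getattr(revision, name, "") if revision else ""

            def setter(self, value, private=private):
                setattr(self, private, value)

            setattr(cls, name, property(getter, setter))
        return cls
    return decorate

@submodel_accessor("description", "body")
class WikiArticle(object):
    current_revision = None   # stand-in for the has_one association
```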

    Read the article

  • DotNetNuke + XPath = Custom navigation menu DNNMenu HTML render

    - by Rui Santos
    I'm developing a skin for DotNetNuke 5 using the Component DNN Done Right menu by Mark Alan which uses XSL-T to convert the XML sitemap into an HTML navigation. The XML sitemap outputs the following structure: <Root > <root > <node id="40" text="Home" url="http://localhost/dnn/Home.aspx" enabled="1" selected="0" breadcrumb="0" first="1" last="0" only="0" depth="0" > <node id="58" text="Child1" url="http://localhost/dnn/Home/Child1.aspx" enabled="1" selected="0" breadcrumb="0" first="1" last="0" only="0" depth="1" > <keywords >Child1</keywords> <description >Child1</description> <node id="59" text="Child1 SubItem1" url="http://localhost/dnn/Home/Child1/Child1SubItem1.aspx" enabled="1" selected="0" breadcrumb="0" first="1" last="0" only="0" depth="2" > <keywords >Child1 SubItem1</keywords> <description >Child1 SubItem1</description> </node> <node id="60" text="Child1 SubItem2" url="http://localhost/dnn/Home/Child1/Child1SubItem2.aspx" enabled="1" selected="0" breadcrumb="0" first="0" last="0" only="0" depth="2" > <keywords >Child1 SubItem2</keywords> <description >Child1 SubItem2</description> </node> <node id="61" text="Child1 SubItem3" url="http://localhost/dnn/Home/Child1/Child1SubItem3.aspx" enabled="1" selected="0" breadcrumb="0" first="0" last="1" only="0" depth="2" > <keywords >Child1 SubItem3</keywords> <description >Child1 SubItem3</description> </node> </node> <node id="65" text="Child2" url="http://localhost/dnn/Home/Child2.aspx" enabled="1" selected="0" breadcrumb="0" first="0" last="1" only="0" depth="1" > <keywords >Child2</keywords> <description >Child2</description> <node id="66" text="Child2 SubItem1" url="http://localhost/dnn/Home/Child2/Child2SubItem1.aspx" enabled="1" selected="0" breadcrumb="0" first="1" last="0" only="0" depth="2" > <keywords >Child2 SubItem1</keywords> <description >Child2 SubItem1</description> </node> <node id="67" text="Child2 SubItem2" url="http://localhost/dnn/Home/Child2/Child2SubItem2.aspx" enabled="1" selected="0" breadcrumb="0" first="0" last="1" only="0" depth="2" > <keywords >Child2 SubItem2</keywords> <description >Child2 SubItem2</description> </node> </node> </node> </root> </Root> My Goal is to render this XML block into this HTML Navigation structure only using UL's LI's, etc.. <ul id="topnav"> <li> <a href="#" class="home">Home</a> <!-- Parent Node - Depth0 --> <div class="sub"> <ul> <li><h2><a href="#">Child1</a></h2></li> <!-- Parent Node 1 - Depth1 --> <li><a href="#">Child1 SubItem1</a></li> <!-- ChildNode - Depth2 --> <li><a href="#">Child1 SubItem2</a></li> <!-- ChildNode - Depth2 --> <li><a href="#">Child1 SubItem3</a></li ><!-- ChildNode - Depth2 --> </ul> <ul> <li><h2><a href="#">Child2</a></h2></li> <!-- Parent Node 2 - Depth1 --> <li><a href="#">Child2 SubItem1</a></li> <!-- ChildNode - Depth2 --> <li><a href="#">Child2 SubItem2</a></li> <!-- ChildNode - Depth2 --> </ul> </div> </li> </ul> Can anyone help with the XSL coding? I'm just starting now with XSL..

    Read the article

  • How to construct a query to update a nested array document in MongoDB?

    - by GowtGM
    I have the following document in MongoDB: { "_id" : ObjectId("506e9e54a4e8f51423679428"), "description" : "ffffffffffffffff", "menus" : [ { "_id" : ObjectId("506e9e5aa4e8f51423679429"), "description" : "ffffffffffffffffffff", "items" : [ { "name" : "xcvxc", "description" : "vxvxcvxc", "text" : "vxcvxcvx", "menuKey" : "0", "onSelect" : "1", "_id" : ObjectId("506e9f07a4e8f5142367942f") } , { "name" : "abcd", "description" : "qqq", "text" : "qqq", "menuKey" : "0", "onSelect" : "3", "_id" : ObjectId("507e9f07a4e8f5142367942f") } ] }, { "_id" : ObjectId("506e9e5aa4e8f51423679429"), "description" : "rrrrr", "items" : [ { "name" : "xcc", "description" : "vx", "text" : "vxc", "menuKey" : "0", "onSelect" : "2", "_id" : ObjectId("506e9f07a4e8f5142367942f") } ] } ] } Now, I want to update the following document: { "name" : "abcd", "description" : "qqq", "text" : "qqq", "menuKey" : "0", "onSelect" : "3", "_id" : ObjectId("507e9f07a4e8f5142367942f") } I have the main document id "_id" : ObjectId("506e9e54a4e8f51423679428") and menus id "_id" : ObjectId("506e9e54a4e8f51423679428") as well as items id "_id" : ObjectId("507e9f07a4e8f5142367942f") which is to be updated. I have tried using the following query: db.collection.update({ "_id" : { "$oid" : "506e9e54a4e8f51423679428"} , "menus._id" : { "$oid" : "506e9e5aa4e8f51423679429"}},{ "$set" : { "menus.$.items" : { "_id" : { "$oid" : "506e9f07a4e8f5142367942f"}} , "menus.$.items.$.name" : "xcvxc66666", ...}},false,false); but it's not working.
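
    One constraint worth knowing: a single update cannot use the positional $ operator at two nesting levels, which is why the menus.$.items.$.name form fails. On MongoDB 3.6 and later the arrayFilters option can address both levels. A hypothetical PyMongo sketch (database, collection, and new field values are assumptions):

```python
from bson import ObjectId
from pymongo import MongoClient

coll = MongoClient().mydb.menus_collection   # hypothetical database/collection names

coll.update_one(
    {"_id": ObjectId("506e9e54a4e8f51423679428")},
    {"$set": {
        "menus.$[m].items.$[i].name": "xcvxc66666",
        "menus.$[m].items.$[i].description": "updated description",
    }},
    array_filters=[
        {"m._id": ObjectId("506e9e5aa4e8f51423679429")},   # which menu
        {"i._id": ObjectId("507e9f07a4e8f5142367942f")},   # which item inside it
    ],
)
```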

    Read the article

  • Eclipse Hell . . . Failed to read the project description file (.project)

    - by Kobojunkie
    I think Eclipse is trying to make me miserable. A couple of hours ago, my project was working and compiling well. Suddenly that all changed. Eclipse somehow wipes out all changes I have made to my files(activity, manifest etc.) I make sure to save often but when I go to run the project, I get the error that I have a build error. I checked and there was none, so I go to close Eclipse, so I can reopen and see if the errors will go away. Instead what happens is Eclipse wipes clean all my files and I end up with a project on disk with lots of blank code files. I try to run anyway, and I get the error message below. Failed to read the project description file (.project) for 'com.example.android.nfc.simulator.FakeTagsActivity.FakeTagsActivity'. The file has been changed on disk, and it now contains invalid information. The project will not function properly until the description file is restored to a valid state. Anyone have an idea what in the world this is about and how I can rectify this?

    Read the article

  • /users/tags should contain scores

    - by Sean Patrick Floyd
    I am implementing some simple JavaScript/bookmarklet based apps that show some reputation info, including the score in the User's top tags (roughly based on this previous bookmarklet of mine). Now I can get a user's top tags (using the API), and I can also get the per-tag score if the user is logged in, by dynamically parsing the tag's top users page. But it costs me one AJAX request per tag and I have to download 10+k to extract a single numeric value. It would save a lot of traffic if the tags in <api>/users/<userid>/tags had a score field. The data seems to be there, after all the top users pages use it, so it would just be a question of exposing the data. Suggested structure: "tags": [ { "name": { "description": "name of the tag", "values": "string", "optional": false, "suggested_buffer_size": 25 }, "score": { "description": "tag score, sum of up votes for answers on non-wiki questions", "values": "32-bit signed integer", "optional": false }, "count": { "description": "tag count, exact meaning depends on context", "values": "32-bit signed integer", "optional": false }, "restricted_to": { "description": "user types that can make use of this tag, lack of this field indicates it is useable by all", "values": "one of anonymous, unregistered, registered, or moderator", "optional": true }, "fulfills_required": { "description": "indicates whether this tag is one of those that is required to be on a post", "values": "boolean", "optional": false }, "user_id": { "description": "user associated with this tag, depends on context", "values": "32-bit signed integer", "optional": true } } ]

    Read the article

  • Running Outlook from VB with multiple email addresses [migrated]

    - by Mac
    I am sending emails from my VB6 system and I am having problems with sending a single email to various email addresses. The code is as follows: On Error Resume Next Err.Clear Set oOutLookObject = CreateObject("Outlook.Application") If Err <> 0 Then MsgBox "Email error. Err = " & Err & " Description = " & Err.Description EmailValid = "N" Exit Function End If Set oEmailItem = oOutLookObject.CreateItem(0) If Err <> 0 Then MsgBox "Email error. Err = " & Err & " Description = " & Err.Description EmailValid = "N" Exit Function End If With oEmailItem .Recipients.Add (SMRecipients) .Subject = SMSubject .Importance = IMPORTANCENORMAL .Body = SMBody For i = 1 To 10 If RTrim(SMAttach(i)) <> "" Then .attachments.Add SMAttach(1) 'i) Else Exit For End If Next i .send End With If Err <> 0 Then MsgBox "Email error. Err = " & Err & " Description = " & Err.Description EmailValid = "N" Exit Function End If ''' .Attachments.Add ("c:\temp\test2.txt") Set oOutLookObject = Nothing I have set SMRecipients to a single email address and it is fine, but when I add more addresses separated by semicolons or spaces it only sends to the original address. My system runs under XP. Another point is that it used to find the addresses in the Outlook Address Book, and where they were not specific enough it would display the matching addresses for selection of the correct one. It no longer does this.

    Read the article

  • How To Edit XML File

    - by thebourneid
    I have a movie collection catalogue with local links to folders and files for an easy access. Recently I reorganaized my entire hard disk space and I need to update the links and I'm trying to do that automatically with perl. I can export the data in a xml file and import it again. I can extract the new filepaths with the use of File::Find but I'm stuck with two problems. I have no idea how to connect the $title from the new filepath with the corresponding $title from the xml file. I'm dealing with such files for the first time and I don't know how to proceed with the replacement process. Here is what I've done till now use strict; use warnings; use File::Basename; use File::Find; use File::Spec; use XML::Simple; use Data::Dumper; my $dir_target = 'somepath'; find(\&a, $dir_target); sub a { /\.iso$/ or return; my $fn = $File::Find::name; $fn =~ s/\//\\/g; $fn =~ /(.*\\)(.*)/; my $path = $1; my $filename = $2; my $title = (File::Spec->splitdir($fn))[2]; $title =~ s/(.*?)\s\(\d+\)$/$1/; $title =~ s/~/:/; $title =~ s/`/?/; my $link_local = '<link><description>Folder</description><url>'.$path.'</url><urltype>Movie</urltype></link><link><description>'.$filename.'</description><url>'.$fn.'</url><urltype>Movie</urltype></link>' unless $title eq ''; my $txt = 'somepath/log.txt'; my $xml_in = XMLin('somepath/test.xml', ForceArray => 1, KeepRoot => 1); my $xml_out = XMLout($xml_in, OutputFile => 'somepath/test_out.xml', KeepRoot=>1); open F, ">>", $txt; print F $link_local."\n\n"; close F; } And here is a snippet of the data I need to edit. If found imdb and dvdempire link - do not touch. if found local links replace, otherwise insert. I'm willing to complete the code myself but need some directions how to proceed further. Thanks. <title>$title</title> ....... <links> <link> <description>IMDB</description> <url>http://www.imdb.com/title/VARIABLE</url> <urltype>URL</urltype> </link> <link> <description>DVD Empire</description> <url>http://www.dvdempire.com/VARIABLE</url> <urltype>URL</urltype> </link> <link> <description>Folder</description> <url>OLD_FOLDERPATH</url> <urltype>Movie</urltype> </link> <link> <description>OLD_FILENAME</description> <url>OLD_FILENAMEPATH</url> <urltype>Movie</urltype> </link> </links>

    Read the article

  • How do I edit an XML file with Perl?

    - by thebourneid
    I have a movie collection catalogue with local links to folders and files for an easy access. Recently I reorganaized my entire hard disk space and I need to update the links and I'm trying to do that automatically with Perl. I can export the data in a XML file and import it again. I can extract the new filepaths with the use of File::Find but I'm stuck with two problems. I have no idea how to connect the $title from the new filepath with the corresponding $title from the XML file. I'm dealing with such files for the first time and I don't know how to proceed with the replacement process. Here is what I've done till now use strict; use warnings; use File::Basename; use File::Find; use File::Spec; use XML::Simple; use Data::Dumper; my $dir_target = 'D:/Movies/'; my %titles_locations = (); find(\&file_handler, $dir_target); sub file_handler { /\.iso$/ or return; my $fn = $File::Find::name; $fn =~ s/\//\\/g; $fn =~ /(.*\\)(.*)/; my $path = $1; my $filename = $2; my $title = (File::Spec->splitdir($fn))[2]; $title =~ s/(.*?)\s\(\d+\)$/$1/; $title =~ s/~/:/; $title =~ s/`/?/; my $link_local = '<link><description>Folder</description><url>'.$path.'</url><urltype>Movie</urltype></link><link><description>'.$filename.'</description><url>'.$fn.'</url><urltype>Movie</urltype></link>' unless $title eq ''; $titles_locations{$title} = {'filename'=>$filename, 'path'=>$path }; } my $xml_in = XMLin('somepath/test.xml', ForceArray => 1, KeepRoot => 1); my $title = {'key1' => 'title', 'key2' => 'links'}; foreach my $link (keys %$title) { } print Data::Dumper->Dump([$title]); my $xml_out = XMLout($xml_in, OutputFile => 'somepath/test_out.xml', KeepRoot=>1); And here is a snippet of the data I need to edit. If found imdb and dvdempire link - do not touch. if found local links replace, otherwise insert. I'm willing to complete the code myself but need some directions how to proceed further. Thanks. <title>$title</title> ....... <links> <link> <description>IMDB</description> <url>http://www.imdb.com/title/VARIABLE</url> <urltype>URL</urltype> </link> <link> <description>DVD Empire</description> <url>http://www.dvdempire.com/VARIABLE</url> <urltype>URL</urltype> </link> <link> <description>Folder</description> <url>OLD_FOLDERPATH</url> <urltype>Movie</urltype> </link> <link> <description>OLD_FILENAME</description> <url>OLD_FILENAMEPATH</url> <urltype>Movie</urltype> </link> </links>

    Read the article

  • Linq To Xml problems using XElement's method Elements(XName)

    - by Dorian McHensie
    Hello everyone. I have a problem using Linq To Xml. A simple code. I have this XML: <?xml version="1.0" encoding="utf-8" ?> <data xmlns="http://www.xxx.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.xxx.com/directory file.xsd"> <contact> <name>aaa</name> <email>[email protected]</email> <birthdate>2002-09-22</birthdate> <telephone>000:000000</telephone> <description>Description for this contact</description> </contact> <contact> <name>sss</name> <email>[email protected]</email> <birthdate>2002-09-22</birthdate> <telephone>000:000000</telephone> <description>Description for this contact</description> </contact> <contact> <name>bbb</name> <email>[email protected]</email> <birthdate>2002-09-22</birthdate> <telephone>000:000000</telephone> <description>Description for this contact</description> </contact> <contact> <name>ccc</name> <email>[email protected]</email> <birthdate>2002-09-22</birthdate> <telephone>000:000000</telephone> <description>Description for this contact</description> </contact> I want to get every contact mapping it on an object Contact. To do this I use this fragment of code: XDocument XDoc = XDocument.Load(System.Web.HttpRuntime.AppDomainAppPath + this.filesource); XElement XRoot = XDoc.Root; //XElement XEl = XElement.Load(this.filesource); var results = from e in XRoot.Elements("contact") select new Contact((string)e.Element("name"), (string)e.Element("email"), "1-1-1", null, null); List<Contact> cntcts = new List<Contact>(); foreach (Contact cntct in results) { cntcts.Add(cntct); } Contact[] c = cntcts.ToArray(); // Encapsulating element Elements<Contact> final = new Elements<Contact>(c); Ok just don't mind that all: focus on this: When I get the root node, it is all right, I get it correctly. When I use the select directive I try to get every node saying: from e in XRoot.Elements("contact") OK here's the problem: if I use: from e in XRoot.Elements() I get all contact nodes, but if I use: from e in XRoot.Elements("contact") I GET NOTHING: Empty SET. OK you tell me: Use the other one: OK I DO SO, let's use: from e in XRoot.Elements(), I get all nodes anyway, THAT's RIGHT BUT HERE COMES THE OTHER PROBLEM: When Saying: select new Contact((string)e.Element("name"), (string)e.Element("email"), "1-1-1", null, null); I Try to access <name>, <email>... I HAVE TO USE .Element("name") AND IT DOES NOT WORK TOO!!!!!!!!WHAT THE HELL IS THIS????????????? IT SEEMS THAT I DOES NOT MATCH THE NAME I PASS But how is it possible. I know that Elements() function takes, overloaded, one argument that is an XName which is mapped onto a string. Please consider that the code I wrote come from an example, It should work. Please somebody help me. THANKS IN ADVANCE

    Read the article

  • HDFS: some datanodes of the cluster are suddenly disconnected while reducers are running

    - by user1429825
    I have 8 slave computers and 1 master computer running Hadoop (version 0.21). Some datanodes of the cluster are suddenly disconnected while I am running MapReduce code on 10 GB of data. After all mappers finish and around 80% of the reducers have been processed, one or more datanodes randomly drop off the network, and then the other datanodes start to disappear as well, even if I kill the MapReduce job as soon as I notice the first disconnection. I have tried changing dfs.datanode.max.xcievers to 4096, turning off the firewalls on all compute nodes, disabling SELinux and raising the open-file limit to 20000, but none of it helped. Does anyone have an idea how to solve this problem? The following is the error log from MapReduce:

    12/06/01 12:31:29 INFO mapreduce.Job: Task Id : attempt_201206011227_0001_r_000006_0, Status : FAILED
    java.io.IOException: Bad connect ack with firstBadLink as ***.***.***.148:20010
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:889)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:820)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:427)

    And these are the logs from a datanode:

    2012-06-01 13:01:01,118 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-5549263231281364844_3453 src: /*.*.*.147:56205 dest: /*.*.*.142:20010
    2012-06-01 13:01:01,136 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020) Starting thread to transfer block blk_-3849519151985279385_5906 to *.*.*.147:20010
    2012-06-01 13:01:19,135 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-5797481564121417802_3453 to *.*.*.146:20010 got java.net.ConnectException: Connection timed out
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:373)
        at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1257)
        at java.lang.Thread.run(Thread.java:722)
    2012-06-01 13:06:20,342 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_6674438989226364081_3453
    2012-06-01 13:09:01,781 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-3849519151985279385_5906 to *.*.*.147:20010 got java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/*.*.*.142:60057 remote=/*.*.*.147:20010]
        at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
        at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:164)
        at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:203)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:388)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:476)
        at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1284)
        at java.lang.Thread.run(Thread.java:722)

    hdfs-site.xml:

    <configuration>
      <property>
        <name>dfs.name.dir</name>
        <value>/home/hadoop/data/name</value>
      </property>
      <property>
        <name>dfs.data.dir</name>
        <value>/home/hadoop/data/hdfs1,/home/hadoop/data/hdfs2,/home/hadoop/data/hdfs3,/home/hadoop/data/hdfs4,/home/hadoop/data/hdfs5</value>
      </property>
      <property>
        <name>dfs.replication</name>
        <value>3</value>
      </property>
      <property>
        <name>dfs.datanode.max.xcievers</name>
        <value>4096</value>
      </property>
      <property>
        <name>dfs.http.address</name>
        <value>0.0.0.0:20070</value>
        <description>50070 The address and the base port where the dfs namenode web ui will listen on. If the port is 0 then the server will start on a free port.</description>
      </property>
      <property>
        <name>dfs.datanode.http.address</name>
        <value>0.0.0.0:20075</value>
        <description>50075 The datanode http server address and port. If the port is 0 then the server will start on a free port.</description>
      </property>
      <property>
        <name>dfs.secondary.http.address</name>
        <value>0.0.0.0:20090</value>
        <description>50090 The secondary namenode http server address and port. If the port is 0 then the server will start on a free port.</description>
      </property>
      <property>
        <name>dfs.datanode.address</name>
        <value>0.0.0.0:20010</value>
        <description>50010 The address where the datanode server will listen to. If the port is 0 then the server will start on a free port.</description>
      </property>
      <property>
        <name>dfs.datanode.ipc.address</name>
        <value>0.0.0.0:20020</value>
        <description>50020 The datanode ipc server address and port. If the port is 0 then the server will start on a free port.</description>
      </property>
      <property>
        <name>dfs.datanode.https.address</name>
        <value>0.0.0.0:20475</value>
      </property>
      <property>
        <name>dfs.https.address</name>
        <value>0.0.0.0:20470</value>
      </property>
    </configuration>

    mapred-site.xml:

    <configuration>
      <property>
        <name>mapred.job.tracker</name>
        <value>masternode:29001</value>
      </property>
      <property>
        <name>mapred.system.dir</name>
        <value>/home/hadoop/data/mapreduce/system</value>
      </property>
      <property>
        <name>mapred.local.dir</name>
        <value>/home/hadoop/data/mapreduce/local</value>
      </property>
      <property>
        <name>mapred.map.tasks</name>
        <value>32</value>
        <description>default number of map tasks per job.</description>
      </property>
      <property>
        <name>mapred.tasktracker.map.tasks.maximum</name>
        <value>4</value>
      </property>
      <property>
        <name>mapred.reduce.tasks</name>
        <value>8</value>
        <description>default number of reduce tasks per job.</description>
      </property>
      <property>
        <name>mapred.map.child.java.opts</name>
        <value>-Xmx2048M</value>
      </property>
      <property>
        <name>io.sort.mb</name>
        <value>500</value>
      </property>
      <property>
        <name>mapred.task.timeout</name>
        <value>1800000</value> <!-- 30 minutes -->
      </property>
      <property>
        <name>mapred.job.tracker.http.address</name>
        <value>0.0.0.0:20030</value>
        <description>50030 The job tracker http server address and port the server will listen on. If the port is 0 then the server will start on a free port.</description>
      </property>
      <property>
        <name>mapred.task.tracker.http.address</name>
        <value>0.0.0.0:20060</value>
        <description>50060</description>
      </property>
    </configuration>
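    One thing I have not tried yet is raising the DFS socket timeouts, since the datanode log shows a 480000 ms write timeout. A sketch of what I assume that change would look like in hdfs-site.xml is below; the property names (dfs.socket.timeout, dfs.datanode.socket.write.timeout) are the ones from the stock hdfs-default.xml of the 0.20/1.x line and I have not confirmed they behave the same way in 0.21:

    <!-- Sketch only: assumed property names, values in milliseconds -->
    <property>
      <name>dfs.socket.timeout</name>
      <value>600000</value> <!-- DFS read timeout; the usual default is 60000 -->
    </property>
    <property>
      <name>dfs.datanode.socket.write.timeout</name>
      <value>600000</value> <!-- DFS write timeout; the usual default is 480000, matching the timeout in the log -->
    </property>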

    Read the article

  • Currency Conversion in Oracle BI applications

    - by Saurabh Verma
    Authored by Vijay Aggarwal and Hichem Sellami

    A typical data warehouse contains a Star and/or Snowflake schema, made up of Dimensions and Facts. The facts store various numerical information, including amounts; for example, Order Amount, Invoice Amount, etc. With the truly global nature of business nowadays, end-users want to view reports in their own currency or in a global/common currency as defined by their business. This presents a unique opportunity in BI to provide the amounts at converted rates, either by pre-storing them or by doing on-the-fly conversions while displaying the reports to the users.

    Source Systems

    OBIA caters to various source systems such as EBS, PSFT, Siebel, JDE, Fusion, etc. Each source has its own unique and intricate ways of defining and storing currency data, doing currency conversions and presenting them to the OLTP users. For example, EBS stores conversion rates between currencies classified by rate type, such as Corporate rate, Spot rate, Period rate, etc. Siebel stores exchange rates by rate types like Daily. EBS/Fusion store the conversion rates for each day, whereas PSFT/Siebel store them for a range of days. PSFT has a Rate Multiplication Factor and a Rate Division Factor and we need to calculate the rate based on them, whereas other source systems store the currency exchange rate directly.

    OBIA Design

    The consolidation of data from various disparate source systems poses the challenge of conforming various currencies, rate types, exchange rates etc., and of designing the best way to present the amounts to the users without affecting performance. When consolidating the data for reporting in OBIA, we have designed the mechanisms in the Common Dimension to allow users to report based on their required currencies. OBIA Facts store amounts in various currencies:

    Document Currency: This is the currency of the actual transaction. For a multinational company, this can be in various currencies.

    Local Currency: This is the base currency in which the accounting entries are recorded by the business. This is generally defined in the Ledger of the company.

    Global Currencies: OBIA provides five Global Currencies. Three are used across all modules; the last two are for CRM only. A Global currency is very useful when creating reports where the data is viewed enterprise-wide. For example, a US-based multinational would want to see the reports in USD, so the company will choose USD as one of the global currencies. OBIA allows users to define up to five global currencies during the initial implementation.

    The term Currency Preference is used to designate the set of values Document Currency, Local Currency, Global Currency 1, Global Currency 2 and Global Currency 3, which are shared among all modules. There are four more currency preferences, specific to certain modules: Global Currency 4 (aka CRM Currency) and Global Currency 5, which are used in CRM; and Project Currency and Contract Currency, used in Project Analytics. When choosing Local Currency as the currency preference, the data will show in the currency of the Ledger (or Business Unit) in the prompt, so it is important to select one Ledger or Business Unit when viewing data in Local Currency. More on this can be found in the section: Toggling Currency Preferences in the Dashboard.

    Design Logic

    When extracting the fact data, the OOTB mappings extract and load the document amount and the local amount into the target tables. They also load the exchange rates required to convert the document amount into the corresponding global amounts.
    If the source system only provides the document amount in the transaction, the extract mapping does a lookup to get the local currency code and the local exchange rate. The load mapping then uses the local currency code and rate to derive the local amount. The load mapping also fetches the Global Currencies and looks up the corresponding exchange rates.

    The lookup of exchange rates is done via the Exchange Rate Dimension, provided as a Common/Conforming Dimension in OBIA. The Exchange Rate Dimension stores the exchange rates between various currencies for a date range and Rate Type. Two physical tables, W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G, are used to provide the lookups and conversions between currencies; the data is loaded from the source system's Ledger tables. W_EXCH_RATE_G stores the exchange rates between currencies with a date range. W_GLOBAL_EXCH_RATE_G, on the other hand, stores the currency conversions between the document currency and the pre-defined five Global Currencies for each day. Based on the requirements, the fact mappings can decide to use one or both tables to do the conversion. Currency design in OBIA also taps into the MLS and Domain architecture, allowing users to map the currencies to a universal Domain at implementation time. This is especially important for companies deploying and using OBIA with multiple source adapters.

    Some Gotchas to Look for

    It is necessary to think through the currencies during the initial implementation.

    1) Identify the various types of currencies used by your business. Understand what your Local (or Base) and Document currency will be. Identify the various global currencies in which your users will want to look at the reports; this will be based on the global nature of your business. Changes to these currencies later in the project, while permitted, may cause full data loads and hence lost time.

    2) If you have a multi-source system, make sure that the Global Currencies and Global Rate Types chosen in Configuration Manager have the corresponding source-specific counterparts. In other words, make sure that for every DW-specific value chosen for Currency Code or Rate Type, a source Domain mapping is already done.

    Technical Section

    This section briefly describes the technical scenarios employed in the OBIA adaptors to extract data from each source system. In OBIA, we have two main tables which store the Currency Rate information as explained in the previous sections: W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G. W_EXCH_RATE_G stores all the currency conversions present in the source system and captures data for a date range. W_GLOBAL_EXCH_RATE_G has Global Currency conversions stored at a daily level; the challenge here is to store all 5 Global Currency exchange rates in a single record for each From Currency. Let's voyage further into the source system extraction logic for each of these tables and understand the flow briefly.

    EBS: In EBS, currency data is stored in the GL_DAILY_RATES table. As the name indicates, the GL_DAILY_RATES EBS table has data at a daily level. However, in our warehouse we store the data with a date range and insert a new range record only when the exchange rate changes for a particular From Currency, To Currency and Rate Type. Below are the main logical steps that we employ in this process:

    0. (Incremental flow only) Clean up the data in W_EXCH_RATE_G: delete the records which have Start Date > minimum conversion date, and update the End Date of the existing records.
    1. Compress the daily data from the GL_DAILY_RATES table into range records. The incremental map uses $$XRATE_UPD_NUM_DAY as an extra parameter.
    2. Generate the Previous Rate, Previous Date and Next Date for each daily record from the OLTP.
    3. Filter out the records whose Conversion Rate is the same as the Previous Rate, or whose Conversion Date lies within a single-day range.
    4. Mark the records as 'Keep' and 'Filter', and also get the final End Date for the single range record (unique combination of From Date, To Date, Rate and Conversion Date).
    5. Filter the records marked as 'Filter' in the INFA map.

    The above steps load W_EXCH_RATE_GS. Step 0 updates/deletes W_EXCH_RATE_G directly; the SIL map then inserts/updates the GS data into W_EXCH_RATE_G. These steps convert the daily records in GL_DAILY_RATES into range records in W_EXCH_RATE_G. We do not need such special logic for loading W_GLOBAL_EXCH_RATE_G. This is a table where we store data at a daily granular level; however, we need to pivot the data because data present in multiple rows in the source tables needs to be stored in different columns of the same row in the DW. We use GROUP BY and CASE logic to achieve this.

    Fusion: Fusion has extraction logic very similar to EBS. The only difference is that the cleanup logic mentioned in step 0 above does not use the $$XRATE_UPD_NUM_DAY parameter. In Fusion we bring in all the exchange rates in incremental runs as well and do the cleanup; the SIL then takes care of inserts/updates accordingly.

    PeopleSoft: PeopleSoft does not have From Date and To Date explicitly in the source tables. Let's look at an example (please note that this is achieved from PS1 onwards only):

    1 Jan 2010 – USD to INR – 45
    31 Jan 2010 – USD to INR – 46

    PSFT stores records in the above fashion. This means that the exchange rate of 45 for USD to INR is applicable from 1 Jan 2010 to 30 Jan 2010, and we need to store the data in this fashion in the DW. Also, PSFT has the exchange rate stored as RATE_MULT and RATE_DIV, and we need to compute RATE_MULT/RATE_DIV to get the correct exchange rate. We generate the From Date and To Date while extracting data from the source, and this has certain assumptions: if a record gets updated/inserted in the source, it will be extracted in incremental. Also, if this updated/inserted record lies between other dates, we also extract the preceding and succeeding records (based on dates) of this record. This is required because we need to generate a range record and we have 3 records whose ranges have changed. Taking the same example as above, if a new record is inserted on 15 Jan 2010, the new ranges are 1 Jan to 14 Jan, 15 Jan to 30 Jan, and 31 Jan to the next available date. Even though the 1 Jan and 31 Jan records have not changed, we will still extract them because their ranges are affected. Similar logic is used for Global Exchange Rate extraction: we create the range records and load them into a temporary table, then join to the Day Dimension, create individual records and pivot the data to get the 5 Global Exchange Rates for each From Currency, Date and Rate Type.

    Siebel: Siebel facts depend heavily on Global Exchange Rates, and almost none of them really use individual exchange rates. In other words, W_GLOBAL_EXCH_RATE_G is the main table used in Siebel from the PS1 release onwards. As of January 2002, the Euro Triangulation method for converting between currencies belonging to EMU members is not needed for present and future currency exchanges.
    However, the method is still available in Siebel applications, as are the old currencies, so that historical data can be maintained accurately. The following description applies only to historical data needing conversion prior to the 2002 switch to the Euro for the EMU member countries. If a country is a member of the European Monetary Union (EMU), you should convert its currency to other currencies through the Euro. This is called triangulation, and it is used whenever either currency being converted has EMU Triangulation checked. Due to this, there are multiple extraction flows in SEBL, i.e. EUR to EMU, EUR to NonEMU, EUR to DMC and so on. We load W_EXCH_RATE_G through multiple flows with these data; this has been kept the same as in previous versions of OBIA. W_GLOBAL_EXCH_RATE_G, being a new table, does not have such needs. However, SEBL does not have From Date and To Date columns in the source tables, similar to PSFT, so we use extraction logic similar to that explained in the PSFT section for SEBL as well.

    What if all 5 Global Currencies configured are the same?

    As mentioned in previous sections, from PS1 onwards we store Global Exchange Rates in the W_GLOBAL_EXCH_RATE_G table. The extraction logic for this table involves pivoting data from multiple rows into a single row with 5 Global Exchange Rates in 5 columns; as mentioned in previous sections, we use CASE and GROUP BY functions to achieve this. This approach poses a unique problem when all 5 Global Currencies chosen are the same. For example, if the user configures all 5 Global Currencies as 'USD', the extract logic will not be able to generate a record for From Currency=USD, because not all source systems will have a USD->USD conversion record. We have _Generated mappings to take care of this case: we generate a record with Conversion Rate=1 for such cases.

    Reusable Lookups

    Before PS1, we had a Mapplet for currency conversions. In PS1, we only have reusable Lookups: LKP_W_EXCH_RATE_G and LKP_W_GLOBAL_EXCH_RATE_G. These lookups have another layer of logic so that all the lookup conditions are met when they are used in various fact mappings. Any user who wants to do a lookup on W_EXCH_RATE_G or W_GLOBAL_EXCH_RATE_G should and must use these Lookups; a direct join or lookup on the tables might return wrong data.

    Changing Currency Preferences in the Dashboard

    In the 796x series, all amount metrics in OBIA showed the Global1 amount, and customers needed to change the metric definitions to show them in another currency preference. Project Analytics has supported currency preferences since the 7.9.6 release, though, and it published a Tech Note for customers of other modules to add toggling between currency preferences to the solution.

    List of Currency Preferences

    Starting from the 11.1.1.x release, the BI Platform added a new feature to support multiple currencies. The new session variable (PREFERRED_CURRENCY) is populated through a newly introduced currency prompt. This prompt can take its values from the XML file userpref_currencies_OBIA.xml, which is hosted in the BI Server installation folder under <home>\instances\instance1\config\OracleBIPresentationServicesComponent\coreapplication_obips1\userpref_currencies.xml. This file contains the list of currency preferences, like "Local Currency", "Global Currency 1", ..., which customers can also rename to give them more meaningful business names. There are two options for showing the list of currency preferences to the user in the dashboard: Static and Dynamic.
    In Static mode, all users will see the full list as defined in the user preference currencies file. In Dynamic mode, the list shown in the currency prompt drop-down is the result of a dynamic query specified in the same file. Customers can build some security into the RPD so that the list of currency preferences is based on the user roles. BI Applications built a subject area, "Dynamic Currency Preference", to run this query and give every user only the list of currency preferences required by his application roles.

    Adding Currency to an Amount Field

    When the user selects one of the items from the currency prompt, all the amounts on that page will show in the currency corresponding to that preference. For example, if the user selects "Global Currency 1" from the prompt, all data will be shown in Global Currency 1 as specified in the Configuration Manager. If the user selects "Local Currency", all amount fields will show in the currency of the Business Unit selected in the BU filter of the same page. If no particular Business Unit is selected in that filter, and the data selected by the query contains amounts in more than one currency (for example, one BU has USD as its functional currency and another has EUR), then subtotals will not be available (USD and EUR amounts cannot be added in one field), and depending on the setup (see next paragraph), the user may receive an error.

    There are two ways to add the currency field to an amount metric:

    In the form of a currency code, like USD, EUR, ... For this, the user needs to add the field "Apps Common Currency Code" to the report. This field is in every subject area, usually under the table "Currency Tag" or "Currency Code".

    In the form of a currency symbol ($ for USD, € for EUR, ...). For this, the user needs to format the amount metrics in the report as a currency column, by specifying the currency tag column in the Column Properties option of the Column Actions drop-down list. Typically this column should be the "BI Common Currency Code" available in every subject area. Select the Column Properties option in the Edit list of a metric. In the Data Format tab, select Custom as Treat Number As. Enter the following syntax under Custom Number Format: [$:currencyTagColumn=Subjectarea.table.column], where Column is the "BI Common Currency Code" defined to take the currency code value based on the currency preference chosen by the user in the currency preference prompt.
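    To make that last step concrete, a hypothetical Custom Number Format entry could look like the line below; the subject area, folder and column names are made up for illustration and would need to be replaced with the ones exposed in your own repository (quoting rules for names containing spaces may also vary):

    [$:currencyTagColumn="Financials - AP Invoices"."Currency Tag"."BI Common Currency Code"]

    Similarly, the GROUP BY/CASE pivot described in the technical section can be sketched in plain SQL. The staging table and column names below are assumptions for illustration only; the real OBIA mappings resolve the five target currencies from Configuration Manager rather than hard-coding them:

    -- Sketch: pivot one row per (From Currency, Date, Rate Type) with five global rates
    SELECT from_currency_code,
           conversion_date,
           rate_type,
           MAX(CASE WHEN to_currency_code = 'USD' THEN exchange_rate END) AS global1_exchange_rate,
           MAX(CASE WHEN to_currency_code = 'EUR' THEN exchange_rate END) AS global2_exchange_rate,
           MAX(CASE WHEN to_currency_code = 'GBP' THEN exchange_rate END) AS global3_exchange_rate,
           MAX(CASE WHEN to_currency_code = 'JPY' THEN exchange_rate END) AS global4_exchange_rate,
           MAX(CASE WHEN to_currency_code = 'AUD' THEN exchange_rate END) AS global5_exchange_rate
    FROM   stg_exchange_rates
    GROUP BY from_currency_code, conversion_date, rate_type;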

    Read the article

  • How do I create MIDI data to trigger an Ultrabeat pattern in Logic?

    - by user289581
    I've made some Ultrabeat patterns but I can't figure out how to trigger them. I make the pattern on C-1, then I go back to the arrange window and hit record and then hit C1 on my keyboard, and it doesn't play the pattern, it just plays a single note. I have Pattern Mode enabled on Ultrabeat and I'm using a one-shot trigger but I can't seem to trigger my pattern to play at all. What am I doing wrong?

    Read the article
