Search Results

Search found 4472 results on 179 pages for 'unknown publisher'.


  • javax.naming.InvalidNameException using Oracle BPM and weblogic when accessing directory

    - by alfredozn
    We are getting this exception when we start our cluster (2 managed servers, 1 admin), we have deployed only the ears corresponding to the OBPM 10.3.1 SP1 in a weblogic 10.3. When the server cluster starts, one of the managed servers (the first to start) get overloaded and ran out of connections to the directory DB because of this repeatedly error. It looks like the engine is trying to get the info from the LDAP server but I don't know why it is building a wrong query. fuego.directory.DirectoryRuntimeException: Exception [javax.naming.InvalidNameException: CN=Alvarez Guerrero Bernardo DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,CN=Deleted Objects,DC=corp: [LDAP: error code 34 - 0000208F: NameErr: DSID-031001BA, problem 2006 (BAD_NAME), data 8349, best match of: 'CN=Alvarez Guerrero Bernardo DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,CN=Deleted Objects,DC=corp,dc=televisa,dc=com,dc=mx' ^@]; remaining name 'CN=Alvarez Guerrero Bernardo DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,CN=Deleted Objects,DC=corp']. at fuego.directory.DirectoryRuntimeException.wrapException(DirectoryRuntimeException.java:85) at fuego.directory.hybrid.ldap.JNDIQueryExecutor.selectById(JNDIQueryExecutor.java:163) at fuego.directory.hybrid.ldap.JNDIQueryExecutor.selectById(JNDIQueryExecutor.java:110) at fuego.directory.hybrid.ldap.Repository.selectById(Repository.java:38) at fuego.directory.hybrid.msad.MSADGroupValueProvider.getAssignedParticipantsInternal(MSADGroupValueProvider.java:124) at fuego.directory.hybrid.msad.MSADGroupValueProvider.getAssignedParticipants(MSADGroupValueProvider.java:70) at fuego.directory.hybrid.ldap.Group$7.getValue(Group.java:149) at fuego.directory.hybrid.ldap.Group$7.getValue(Group.java:152) at fuego.directory.hybrid.ldap.LDAPResult.getValue(LDAPResult.java:76) at fuego.directory.hybrid.ldap.LDAPOrganizationGroupAccessor.setInfo(LDAPOrganizationGroupAccessor.java:352) at fuego.directory.hybrid.ldap.LDAPOrganizationGroupAccessor.build(LDAPOrganizationGroupAccessor.java:121) at fuego.directory.hybrid.ldap.LDAPOrganizationGroupAccessor.build(LDAPOrganizationGroupAccessor.java:114) at fuego.directory.hybrid.ldap.LDAPOrganizationGroupAccessor.fetchGroup(LDAPOrganizationGroupAccessor.java:94) at fuego.directory.hybrid.HybridGroupAccessor.fetchGroup(HybridGroupAccessor.java:146) at sun.reflect.GeneratedMethodAccessor66.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at fuego.directory.provider.DirectorySessionImpl$AccessorProxy.invoke(DirectorySessionImpl.java:756) at $Proxy66.fetchGroup(Unknown Source) at fuego.directory.DirOrganizationalGroup.fetch(DirOrganizationalGroup.java:275) at fuego.metadata.GroupManager.loadGroup(GroupManager.java:225) at fuego.metadata.GroupManager.find(GroupManager.java:57) at fuego.metadata.ParticipantManager.addNestedGroups(ParticipantManager.java:621) at fuego.metadata.ParticipantManager.buildCompleteRoleAssignments(ParticipantManager.java:527) at fuego.metadata.Participant$RoleTransitiveClousure.build(Participant.java:760) at fuego.metadata.Participant$RoleTransitiveClousure.access$100(Participant.java:692) at fuego.metadata.Participant.buildRoles(Participant.java:401) at fuego.metadata.Participant.updateMembers(Participant.java:372) at fuego.metadata.Participant.<init>(Participant.java:64) at fuego.metadata.Participant.createUncacheParticipant(Participant.java:84) at 
fuego.server.persistence.jdbc.JdbcProcessInstancePersMgr.loadItems(JdbcProcessInstancePersMgr.java:1706) at fuego.server.persistence.Persistence.loadInstanceItems(Persistence.java:838) at fuego.server.AbstractInstanceService.readInstance(AbstractInstanceService.java:791) at fuego.ejbengine.EJBInstanceService.getLockedROImpl(EJBInstanceService.java:218) at fuego.server.AbstractInstanceService.getLockedROImpl(AbstractInstanceService.java:892) at fuego.server.AbstractInstanceService.getLockedImpl(AbstractInstanceService.java:743) at fuego.server.AbstractInstanceService.getLockedImpl(AbstractInstanceService.java:730) at fuego.server.AbstractInstanceService.getLocked(AbstractInstanceService.java:144) at fuego.server.AbstractInstanceService.getLocked(AbstractInstanceService.java:162) at fuego.server.AbstractInstanceService.unselectAllItems(AbstractInstanceService.java:454) at fuego.server.execution.ToDoItemUnselect.execute(ToDoItemUnselect.java:105) at fuego.server.execution.DefaultEngineExecution$AtomicExecutionTA.runTransaction(DefaultEngineExecution.java:304) at fuego.transaction.TransactionAction.startNestedTransaction(TransactionAction.java:527) at fuego.transaction.TransactionAction.startTransaction(TransactionAction.java:548) at fuego.transaction.TransactionAction.start(TransactionAction.java:212) at fuego.server.execution.DefaultEngineExecution.executeImmediate(DefaultEngineExecution.java:123) at fuego.server.execution.DefaultEngineExecution.executeAutomaticWork(DefaultEngineExecution.java:62) at fuego.server.execution.EngineExecution.executeAutomaticWork(EngineExecution.java:42) at fuego.server.execution.ToDoItem.executeAutomaticWork(ToDoItem.java:261) at fuego.ejbengine.ItemExecutionBean$1.execute(ItemExecutionBean.java:223) at fuego.server.execution.DefaultEngineExecution$AtomicExecutionTA.runTransaction(DefaultEngineExecution.java:304) at fuego.transaction.TransactionAction.startBaseTransaction(TransactionAction.java:470) at fuego.transaction.TransactionAction.startTransaction(TransactionAction.java:551) at fuego.transaction.TransactionAction.start(TransactionAction.java:212) at fuego.server.execution.DefaultEngineExecution.executeImmediate(DefaultEngineExecution.java:123) at fuego.server.execution.EngineExecution.executeImmediate(EngineExecution.java:66) at fuego.ejbengine.ItemExecutionBean.processMessage(ItemExecutionBean.java:209) at fuego.ejbengine.ItemExecutionBean.onMessage(ItemExecutionBean.java:120) at weblogic.ejb.container.internal.MDListener.execute(MDListener.java:466) at weblogic.ejb.container.internal.MDListener.transactionalOnMessage(MDListener.java:371) at weblogic.ejb.container.internal.MDListener.onMessage(MDListener.java:327) at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:4547) at weblogic.jms.client.JMSSession.execute(JMSSession.java:4233) at weblogic.jms.client.JMSSession.executeMessage(JMSSession.java:3709) at weblogic.jms.client.JMSSession.access$000(JMSSession.java:114) at weblogic.jms.client.JMSSession$UseForRunnable.run(JMSSession.java:5058) at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:516) at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201) at weblogic.work.ExecuteThread.run(ExecuteThread.java:173) Caused by: javax.naming.InvalidNameException: CN=Alvarez Guerrero Bernardo DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,CN=Deleted Objects,DC=corp: [LDAP: error code 34 - 0000208F: NameErr: DSID-031001BA, problem 2006 (BAD_NAME), data 8349, best match of: 'CN=Alvarez Guerrero Bernardo 
DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,CN=Deleted Objects,DC=corp,dc=televisa,dc=com,dc=mx' ^@]; remaining name 'CN=Alvarez Guerrero Bernardo DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,CN=Deleted Objects,DC=corp' at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2979) at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2794) at com.sun.jndi.ldap.LdapCtx.searchAux(LdapCtx.java:1826) at com.sun.jndi.ldap.LdapCtx.c_search(LdapCtx.java:1749) at com.sun.jndi.toolkit.ctx.ComponentDirContext.p_search(ComponentDirContext.java:368) at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:338) at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:321) at javax.naming.directory.InitialDirContext.search(InitialDirContext.java:248) at fuego.jndi.FaultTolerantLdapContext.search(FaultTolerantLdapContext.java:612) at fuego.directory.hybrid.ldap.JNDIQueryExecutor.selectById(JNDIQueryExecutor.java:136) ... 67 more
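    As a way to confirm what the error message suggests (the engine is searching a DN that stops at DC=corp instead of the directory's full naming context, dc=televisa,dc=com,dc=mx), here is a minimal Python sketch using the third-party ldap3 package. The hostname and bind credentials are placeholders, not values from this environment; only the failing DN is copied from the exception above.

        # Diagnostic sketch only -- not part of OBPM. Host and credentials are hypothetical.
        from ldap3 import Server, Connection, ALL, BASE

        FAILING_DN = ("CN=Alvarez Guerrero Bernardo DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,"
                      "CN=Deleted Objects,DC=corp")

        server = Server("ldap.example.corp", get_info=ALL)   # placeholder host
        conn = Connection(server,
                          "CN=svc,OU=Service Accounts,DC=corp,DC=televisa,DC=com,DC=mx",  # placeholder
                          "secret", auto_bind=True)

        # The naming context the directory actually serves, e.g. DC=corp,DC=televisa,DC=com,DC=mx
        base = server.info.other["defaultNamingContext"][0]
        print("Directory base:", base)
        print("Failing DN already ends with that base:",
              FAILING_DN.lower().endswith(base.lower()))

        # A base-scoped read of the truncated DN should reproduce LDAP error 34 (BAD_NAME),
        # while the fully qualified DN should at worst return noSuchObject. (Reading entries
        # under CN=Deleted Objects also needs the "show deleted" control, so "not found" here
        # does not mean the object is gone -- the point is only BAD_NAME vs. a clean lookup.)
        for dn in (FAILING_DN, FAILING_DN.replace("DC=corp", base, 1)):
            ok = conn.search(dn, "(objectClass=*)", search_scope=BASE, attributes=["cn"])
            print(dn, "->", "found" if ok else conn.result["description"])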


  • ssh authentication nfs

    - by user40135
    Hi all I would like to do ssh from machine "ub0" to another machine "ub1" without using passwords. I setup using nfs on "ub0" but still I am asked to insert a password. Here is my scenario: * machine ub0 and ub1 have the same user "mpiu", with same pwd, same userid, and same group id * the 2 servers are sharing a folder that is the HOME directory for "mpiu" * I did a chmod 700 on the .ssh * I created a key using ssh-keygene -t dsa * I did "cat id_dsa.pub authorized_keys". On this last file I tried also chmod 600 and chmod 640 * off course I can guarantee that on machine ub1 the user "shared_user" can see the same fodler that wes mounted with no problem. Below the content of my .ssh folder Code: authorized_keys id_dsa id_dsa.pub known_hosts After all of this calling wathever function "ssh ub1 hostname" I am requested my password. Do you know what I can try? I also UNcommented in the ssh_config file for both machines this line IdentityFile ~/.ssh/id_dsa I also tried ssh -i $HOME/.ssh/id_dsa mpiu@ub1 Below the ssh -vv Code: OpenSSH_5.1p1 Debian-3ubuntu1, OpenSSL 0.9.8g 19 Oct 2007 OpenSSH_5.1p1 Debian-3ubuntu1, OpenSSL 0.9.8g 19 Oct 2007 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to ub1 [192.168.2.9] port 22. debug1: Connection established. debug2: key_type_from_name: unknown key type '-----BEGIN' debug2: key_type_from_name: unknown key type '-----END' debug1: identity file /mirror/mpiu/.ssh/id_dsa type 2 debug1: Checking blacklist file /usr/share/ssh/blacklist.DSA-1024 debug1: Checking blacklist file /etc/ssh/blacklist.DSA-1024 debug1: Remote protocol version 2.0, remote software version lshd-2.0.4 lsh - a GNU ssh debug1: no match: lshd-2.0.4 lsh - a GNU ssh debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.1p1 Debian-3ubuntu1 debug2: fd 3 setting O_NONBLOCK debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,spki-sign-rsa debug2: kex_parse_kexinit: aes256-cbc,3des-cbc,blowfish-cbc,arcfour debug2: kex_parse_kexinit: aes256-cbc,3des-cbc,blowfish-cbc,arcfour debug2: kex_parse_kexinit: hmac-sha1,hmac-md5 debug2: kex_parse_kexinit: hmac-sha1,hmac-md5 debug2: kex_parse_kexinit: none,zlib debug2: kex_parse_kexinit: none,zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: 
first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server-client 3des-cbc hmac-md5 none debug2: mac_setup: found hmac-md5 debug1: kex: client-server 3des-cbc hmac-md5 none debug2: dh_gen_key: priv key bits set: 183/384 debug2: bits set: 1028/2048 debug1: sending SSH2_MSG_KEXDH_INIT debug1: expecting SSH2_MSG_KEXDH_REPLY debug1: Host 'ub1' is known and matches the RSA host key. debug1: Found key in /mirror/mpiu/.ssh/known_hosts:1 debug2: bits set: 1039/2048 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /mirror/mpiu/.ssh/id_dsa (0xb874b098) debug1: Authentications that can continue: password,publickey debug1: Next authentication method: publickey debug1: Offering public key: /mirror/mpiu/.ssh/id_dsa debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: password,publickey debug2: we did not send a packet, disable method debug1: Next authentication method: password mpiu@ub1's password: I hangs here!
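    To separate "the server refuses this key" from "the OpenSSH client isn't offering it properly", here is a rough Python sketch using the third-party paramiko library to attempt publickey auth with the same DSA key directly. The host, user and key path are taken from the question; everything else is an assumption (including that your paramiko version still accepts DSA keys). Note also that the remote side is lshd (GNU lsh), not OpenSSH, and lshd may not read ~/.ssh/authorized_keys the way sshd does -- that is worth double-checking, since it alone would explain the fallback to password auth.

        # Diagnostic sketch only: force file-based publickey auth and report the result.
        import paramiko

        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        try:
            client.connect(
                "ub1",
                username="mpiu",
                key_filename="/mirror/mpiu/.ssh/id_dsa",
                allow_agent=False,      # ignore any agent keys
                look_for_keys=False,    # only use the key file above
            )
            _, stdout, _ = client.exec_command("hostname")
            print("publickey auth worked:", stdout.read().decode().strip())
        except paramiko.AuthenticationException as exc:
            print("publickey auth refused by the server:", exc)
        finally:
            client.close()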


  • SQL 2008 Replication corrupt data problem

    - by Jonathan K
    We took a SQL 2000 database, took a LiteSpeed backup, and restored it on a SQL 2008 active/passive cluster. Then we set up replication to replicate the data back to SQL 2000, so 2008 is the publisher/distributor and 2000 is doing a pull subscription. Everything works well, except we occasionally get corrupt data in varchar/text fields on the subscriber. For example, we have a table with 4500 records. When we run this statement: update MedstaffProvider set Notes = 'Cell Phone: 360.123.4567 Answering Service: 360.123.9876' where LastName = 'smith' the record in the 2008 database is updated as expected. But in the subscriber database we get gibberish in the notes field: óPÌ[1] T $Oé[1] ð²ñ. K Here's what we know: This is repeatable, meaning we can run that same query all day long and get the same gibberish. If you alter the update statement slightly, the data gets replicated just fine. The collation on both databases is the same. So far we've only detected the problem with text/varchar fields (the notes field above is text). Only one or two records in a table are impacted. The table structure looks identical in both 2000 and 2008; we haven't made any changes. We have found one workaround that fixes the problem: if we recreate the table in 2008 (say as MedStaffProvider2), insert all the data, drop the original table, rename the new table to its original name, set up replication again, and run the exact same update statement, it works as expected. Does anyone have any idea what might be happening here? Or are there any other techniques we can use to troubleshoot this? I've found a workaround, but I would really like to understand why this is happening.
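    One way to narrow down where the gibberish is introduced is to compare the raw column value on both sides right after the update. Below is a rough sketch in Python using the third-party pyodbc package; the connection strings and server names are placeholders. This only confirms whether the corruption is already present in what the subscriber stored, as opposed to a client or display artifact.

        # Diagnostic sketch: fetch Notes from publisher and subscriber and compare.
        import pyodbc

        QUERY = "SELECT Notes FROM MedstaffProvider WHERE LastName = ?"

        def fetch_notes(conn_str):
            conn = pyodbc.connect(conn_str)
            try:
                row = conn.cursor().execute(QUERY, "smith").fetchone()
                return row[0] if row else None
            finally:
                conn.close()

        publisher = fetch_notes(
            "DRIVER={SQL Server};SERVER=PUB2008\\INST;DATABASE=MyDb;Trusted_Connection=yes")
        subscriber = fetch_notes(
            "DRIVER={SQL Server};SERVER=SUB2000;DATABASE=MyDb;Trusted_Connection=yes")

        for name, value in (("publisher", publisher), ("subscriber", subscriber)):
            print(name, repr(value))

        print("values identical:", publisher == subscriber)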


  • wireless internet in Linux is very very slow... but in Windows... everything's fine

    - by Cody Acer
    yesterday when i was connecting to our neighbors wifi connection which is the signal strength is below 50%, i cant browse anything... even ping to gateway. 100% packet loss, and sometimes.. i can connect awesomely i can open my facebook account for 15 minutes but after 15min.. connection is extremely slow. but not windows i can surf even the signal str is very poor weird ey??.. root@Emely:~# lspci -knn 00:00.0 Host bridge [0600]: Intel Corporation Atom Processor D4xx/D5xx/N4xx/N5xx DMI Bridge [8086:a010] Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] Kernel driver in use: agpgart-intel 00:02.0 VGA compatible controller [0300]: Intel Corporation Atom Processor D4xx/D5xx/N4xx/N5xx Integrated Graphics Controller [8086:a011] Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] Kernel driver in use: i915 Kernel modules: i915 00:02.1 Display controller [0380]: Intel Corporation Atom Processor D4xx/D5xx/N4xx/N5xx Integrated Graphics Controller [8086:a012] Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] 00:1b.0 Audio device [0403]: Intel Corporation NM10/ICH7 Family High Definition Audio Controller [8086:27d8] (rev 02) Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] Kernel driver in use: snd_hda_intel Kernel modules: snd-hda-intel 00:1c.0 PCI bridge [0604]: Intel Corporation NM10/ICH7 Family PCI Express Port 1 [8086:27d0] (rev 02) Kernel driver in use: pcieport Kernel modules: shpchp 00:1c.1 PCI bridge [0604]: Intel Corporation NM10/ICH7 Family PCI Express Port 2 [8086:27d2] (rev 02) Kernel driver in use: pcieport Kernel modules: shpchp 00:1c.2 PCI bridge [0604]: Intel Corporation NM10/ICH7 Family PCI Express Port 3 [8086:27d4] (rev 02) Kernel driver in use: pcieport Kernel modules: shpchp 00:1c.3 PCI bridge [0604]: Intel Corporation NM10/ICH7 Family PCI Express Port 4 [8086:27d6] (rev 02) Kernel driver in use: pcieport Kernel modules: shpchp 00:1d.0 USB controller [0c03]: Intel Corporation NM10/ICH7 Family USB UHCI Controller #1 [8086:27c8] (rev 02) Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] Kernel driver in use: uhci_hcd 00:1d.1 USB controller [0c03]: Intel Corporation NM10/ICH7 Family USB UHCI Controller #2 [8086:27c9] (rev 02) Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] Kernel driver in use: uhci_hcd 00:1d.2 USB controller [0c03]: Intel Corporation NM10/ICH7 Family USB UHCI Controller #3 [8086:27ca] (rev 02) Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] Kernel driver in use: uhci_hcd 00:1d.3 USB controller [0c03]: Intel Corporation NM10/ICH7 Family USB UHCI Controller #4 [8086:27cb] (rev 02) Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] Kernel driver in use: uhci_hcd 00:1d.7 USB controller [0c03]: Intel Corporation NM10/ICH7 Family USB2 EHCI Controller [8086:27cc] (rev 02) Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] Kernel driver in use: ehci-pci 00:1e.0 PCI bridge [0604]: Intel Corporation 82801 Mobile PCI Bridge [8086:2448] (rev e2) 00:1f.0 ISA bridge [0601]: Intel Corporation NM10 Family LPC Controller [8086:27bc] (rev 02) Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] Kernel driver in use: lpc_ich Kernel modules: lpc_ich 00:1f.2 SATA controller [0106]: Intel Corporation NM10/ICH7 Family SATA Controller [AHCI mode] [8086:27c1] (rev 02) Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] Kernel driver in use: ahci Kernel modules: ahci 00:1f.3 SMBus [0c05]: Intel Corporation NM10/ICH7 Family 
SMBus Controller [8086:27da] (rev 02) Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] Kernel modules: i2c-i801 05:00.0 Network controller [0280]: Broadcom Corporation BCM4313 802.11bgn Wireless Network Adapter [14e4:4727] (rev 01) Subsystem: Wistron NeWeb Corp. Device [185f:051a] Kernel driver in use: bcma-pci-bridge Kernel modules: bcma 09:00.0 Ethernet controller [0200]: Marvell Technology Group Ltd. 88E8040 PCI-E Fast Ethernet Controller [11ab:4354] Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072] Kernel driver in use: sky2 Kernel modules: sky2 root@Emely:~# ip addr show 1: lo: mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: mtu 1500 qdisc pfifo_fast state DOWN qlen 1000 link/ether e8:11:32:2e:a6:fd brd ff:ff:ff:ff:ff:ff 3: wlan0: mtu 1500 qdisc mq state UP qlen 1000 link/ether 00:1b:b1:a9:ac:e0 brd ff:ff:ff:ff:ff:ff inet 192.168.1.108/24 brd 192.168.1.255 scope global wlan0 inet6 fe80::21b:b1ff:fea9:ace0/64 scope link valid_lft forever preferred_lft forever root@Emely:~# ip link show 1: lo: mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: eth0: mtu 1500 qdisc pfifo_fast state DOWN qlen 1000 link/ether e8:11:32:2e:a6:fd brd ff:ff:ff:ff:ff:ff 3: wlan0: mtu 1500 qdisc mq state UP qlen 1000 link/ether 00:1b:b1:a9:ac:e0 brd ff:ff:ff:ff:ff:ff root@Emely:~# rfkill list all 0: samsung-wlan: Wireless LAN Soft blocked: no Hard blocked: no 1: samsung-bluetooth: Bluetooth Soft blocked: no Hard blocked: no 2: hci0: Bluetooth Soft blocked: no Hard blocked: no 3: phy0: Wireless LAN Soft blocked: no Hard blocked: no is this a wireless driver issue?
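    Since the symptom is "works for a while, then collapses", one low-effort way to gather evidence is to log signal level and packet loss together over time. The sketch below assumes the iw and ping tools are installed, that the interface is wlan0 (as in the ip output above), and uses a placeholder gateway address. If loss spikes while the signal stays roughly the same, that points more toward the BCM4313 driver than toward the link itself.

        # Rough monitoring loop: print signal (dBm) and packet loss every 10 seconds.
        import re
        import subprocess
        import time

        IFACE = "wlan0"
        GATEWAY = "192.168.1.1"   # placeholder -- use your actual default gateway

        def signal_dbm():
            out = subprocess.run(["iw", "dev", IFACE, "link"],
                                 capture_output=True, text=True).stdout
            m = re.search(r"signal:\s*(-?\d+) dBm", out)
            return int(m.group(1)) if m else None

        def loss_percent():
            out = subprocess.run(["ping", "-c", "5", "-W", "2", GATEWAY],
                                 capture_output=True, text=True).stdout
            m = re.search(r"(\d+)% packet loss", out)
            return int(m.group(1)) if m else None

        while True:
            print(time.strftime("%H:%M:%S"),
                  "signal:", signal_dbm(), "dBm,",
                  "loss:", loss_percent(), "%")
            time.sleep(10)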


  • Tool to modify properties/metadata of a PDF? i.e. Change "Title", "Author"? Sony Reader showing som

    - by Chris W. Rea
    I own a Sony Reader PRS-600 ebook reader. I bought a ton of Manning Publications ebooks (DRM-free) recently. Many of the books are PDFs since not all the ones I wanted are available in epub format. The problem: Some of the PDF books I purchased have incorrect or missing metadata. Making things worse, the Sony Reader only displays the "Title" from the PDF metadata when displaying book titles in the reader's collection of books! The Reader doesn't display the filename. So, even though I have a PDF informatively named "Windows PowerShell In Action.pdf", it shows up as "untitled" in the Reader. Imagine how useful the Reader's list of book titles becomes when many are just "untitled" or "unnamed document" ! Yes, it is maddening. So – short of expecting the publisher to fix the files or Sony to add a filename-based list instead, I'm looking for a way to fix the PDF metadata. I can view the metadata with Adobe Reader, but it doesn't permit modification of the properties. Leading to: Question: Is there a tool – free, or cheap – and either for PC or Mac, that can modify the properties / metadata of a DRM-free PDF document? I want to correct "Title" and "Author" fields, specifically.
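    For a free, scriptable route, Python's third-party pypdf package can rewrite the document information dictionary of a DRM-free PDF. The sketch below is one way to do it; the output filename and the corrected Title/Author values are placeholders to adjust per book.

        # Copy the pages into a new PDF and overwrite Title/Author in the metadata.
        from pypdf import PdfReader, PdfWriter

        SRC = "Windows PowerShell In Action.pdf"
        DST = "Windows PowerShell In Action (fixed).pdf"

        reader = PdfReader(SRC)
        writer = PdfWriter()
        for page in reader.pages:
            writer.add_page(page)

        # Keep whatever metadata is already there, then override the fields that matter.
        if reader.metadata:
            writer.add_metadata(reader.metadata)
        writer.add_metadata({
            "/Title": "Windows PowerShell in Action",
            "/Author": "Bruce Payette",   # placeholder -- set the real author
        })

        with open(DST, "wb") as f:
            writer.write(f)

    GUI tools exist as well, but the point of the sketch is that the title the Reader displays is simply the /Title entry in the PDF's document information dictionary, which this rewrites.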


  • GlassFish v3: Security related updates?

    - by chris_l
    I've used GlassFish v3.0 as my main development application server for a few weeks now. Now that I want to install it on my VPS, I'd like to get the latest security updates, because GlassFish v3 Release 3.0 (Open Source Edition or not) is already a few months old, and v3.1 is only available as "early access" nightlies (see https://glassfish.dev.java.net/public/downloadsindex.html). GlassFish offers an update mechanism (via pkg or updateTool), but when I simply try to get the latest updates (pkg image-update), it finds nothing. However, when I change the preferred publisher to dev.glassfish.org, I get a list with lots of updates. The odd thing is that I haven't been able to find any description of the exact meaning of the various publishers/repositories (release, stable, contrib and dev) anywhere on the web, most importantly one answering the question: am I supposed to use the "dev" repository for security updates, or is it (probably more likely) for unstable updates? Where do I get security updates from, then? Or are there simply no security updates yet? Asking on the GlassFish forum resulted in 56 views but 0 answers.


  • DNS Server Behind NAT

    - by Bryan
    I've got a Bind 9 DNS server sitting behind a NAT firewall, assume the Internet facing IP is 1.2.3.4 There are no restrictions on outgoing traffic, and port 53 (TCP/UDP) is forwarded from 1.2.3.4 to the internal DNS server (10.0.0.1). There are no IP Tables rules on either the VPS or the internal Bind 9 server. From a remote Linux VPS located elsewhere on the internet, nslookup works fine # nslookup foo.example.com 1.2.3.4 Server: 1.2.3.4 Address: 1.2.3.4#53 Name: foo.example.com Addresss: 9.9.9.9 However, when using the host command on the remote VPS, I receive the following output: # host foo.example.com 1.2.3.4 ;; reply from unexpected source: 1.2.3.4#13731, expected 1.2.3.4#53 ;; reply from unexpected source: 1.2.3.4#13731, expected 1.2.3.4#53 ;; connection timed out; no servers could be reached. From the VPS, I can establish a connection (using telnet) to 1.2.3.4:53 From the internal DNS server (10.0.0.1), the host command appears to be fine: # host foo.example.com 127.0.0.1 Using domain server: Name: 127.0.0.1 Address: 127.0.0.1#53 Aliases: foo.example.com has address 9.9.9.9 Any suggestions as to why the host command on my VPS is complaining about the reply coming back from another port, and what can I do to fix this? Further info: From a windows host external to the network >nslookup foo.example.com 1.2.3.4 DNS request timeout timeout was 2 seconds Server: UnKnown Address: 1.2.3.4 DNS request timed out. timeout was 2 seconds DNS request timed out. timeout was 2 seconds DNS request timed out. timeout was 2 seconds DNS request timed out. timeout was 2 seconds *** Request to UnKnown timed-out This is a default install of bind from Ubuntu 12.04 LTS, with around 11 zones configured. $ named -v BIND 9.8.1-P1 TCP Dump (filtered) from internal DNS server 20:36:29.175701 IP pc.external.com.57226 > dns.example.com.domain: 1+ PTR? 4.3.2.1.in-addr.arpa. (45) 20:36:29.175948 IP dns.example.com.domain > pc.external.com.57226: 1 Refused- 0/0/0 (45) 20:36:31.179786 IP pc.external.com.57227 > dns.example.com.domain: 2+[|domain] 20:36:31.179960 IP dns.example.com.domain > pc.external.com.57227: 2 Refused-[|domain] 20:36:33.180653 IP pc.external.com.57228 > dns.example.com.domain: 3+[|domain] 20:36:33.180906 IP dns.example.com.domain > pc.external.com.57228: 3 Refused-[|domain] 20:36:35.185182 IP pc.external.com.57229 > dns.example.com.domain: 4+ A? foo.example.com. (45) 20:36:35.185362 IP dns.example.com.domain > pc.external.com.57229: 4*- 1/1/1 (95) 20:36:37.182844 IP pc.external.com.57230 > dns.example.com.domain: 5+ AAAA? foo.example.com. (45) 20:36:37.182991 IP dns.example.com.domain > pc.external.com.57230: 5*- 0/1/0 (119) TCP Dump from client during query 21:24:52.054374 IP pc.external.com.43845 > dns.example.com.53: 6142+ A? foo.example.com. (45) 21:24:52.104694 IP dns.example.com.29242 > pc.external.com.43845: UDP, length 95
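    The "reply from unexpected source: 1.2.3.4#13731, expected 1.2.3.4#53" line is the key symptom: the answer is arriving with a rewritten source port, which host rejects (and which matches the Windows nslookup timeouts). Here is a small self-contained Python sketch to confirm that from any outside box, using only the standard library; the name and server address are taken from the question.

        # Send one UDP DNS query to 1.2.3.4:53 and report where the reply comes from.
        import random
        import socket
        import struct

        SERVER = "1.2.3.4"
        NAME = "foo.example.com"

        def build_query(name):
            txid = random.randint(0, 0xFFFF)
            header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)  # RD set, one question
            qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
            return header + qname + struct.pack(">HH", 1, 1)           # QTYPE=A, QCLASS=IN

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(5)
        sock.sendto(build_query(NAME), (SERVER, 53))
        try:
            data, (addr, port) = sock.recvfrom(512)
            verdict = "OK" if port == 53 else "source port rewritten by the NAT device"
            print(f"reply from {addr}:{port} ({len(data)} bytes) -- {verdict}")
        except socket.timeout:
            print("no reply within 5 seconds")

    If the port comes back as something other than 53, the place to look is the NAT/firewall's UDP port translation for replies from 10.0.0.1:53, not BIND itself; treat that as a pointer to investigate rather than a definitive answer.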


  • Cannot Start MySQL Server on Fresh MAMP Install

    - by alexpelan
    I'm using Mac OS X 10.6.2 on my Macbook Pro. I can get the apache server to start, but not the mysql server, on both the default apache and default MAMP ports. When I try to go to my start page, I get the message "Error: Could not connect to MySQL server!" . Here's what's in my mysql error log: 00513 02:00:07 mysqld_safe mysqld from pid file /Applications/MAMP/tmp/mysql/mysql.pid ended 100513 02:00:16 mysqld_safe Starting mysqld daemon with databases from /Applications/MAMP/db/mysql 100513 2:00:16 [Warning] The syntax '--log_slow_queries' is deprecated and will be removed in a future release. Please use '--slow_query_log'/'--slow_query_log_file' instead. 100513 2:00:16 [Warning] You have forced lower_case_table_names to 0 through a command-line option, even though your file system '/Applications/MAMP/db/mysql/' is case insensitive. This means that you can corrupt a MyISAM table by accessing it with different cases. You should consider changing lower_case_table_names to 1 or 2 100513 2:00:16 [Warning] One can only use the --user switch if running as root 100513 2:00:16 [Note] Plugin 'FEDERATED' is disabled. 100513 2:00:16 [Note] Plugin 'ndbcluster' is disabled. InnoDB: Error: log file /usr/local/mysql/data/ib_logfile0 is of different size 0 5242880 bytes InnoDB: than specified in the .cnf file 0 16777216 bytes! 100513 2:00:16 [ERROR] Plugin 'InnoDB' init function returned error. 100513 2:00:16 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed. 100513 2:00:16 [ERROR] /Applications/MAMP/Library/libexec/mysqld: unknown option '--skip-bdb' 100513 2:00:16 [ERROR] Aborting 100513 2:00:16 [Note] /Applications/MAMP/Library/libexec/mysqld: Shutdown complete 100513 02:00:16 mysqld_safe mysqld from pid file /Applications/MAMP/tmp/mysql/mysql.pid ended A couple of things: 1) There are a bunch of different .cnf files that come with MAMP (my-huge, my-medium, etc.)...how can I tell which one is actually being used? 2) I deleted the ib_logfile0 and ib_logfile1 as recommended by another post on serverfault, and then ended up with more errors: 100519 16:01:30 InnoDB: Log file /usr/local/mysql/data/ib_logfile0 did not exist: new to be created InnoDB: Setting log file /usr/local/mysql/data/ib_logfile0 size to 16 MB InnoDB: Database physically writes the file full: wait... 100519 16:01:30 InnoDB: Log file /usr/local/mysql/data/ib_logfile1 did not exist: new to be created InnoDB: Setting log file /usr/local/mysql/data/ib_logfile1 size to 16 MB InnoDB: Database physically writes the file full: wait... InnoDB: The log sequence number in ibdata files does not match InnoDB: the log sequence number in the ib_logfiles! 100519 16:01:31 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... 100519 16:01:31 InnoDB: Started; log sequence number 0 44556 100519 16:01:31 [ERROR] /Applications/MAMP/Library/libexec/mysqld: unknown option '--skip-bdb' 100519 16:01:31 [ERROR] Aborting And then I got this the next time I tried to run it: InnoDB: Unable to lock /usr/local/mysql/data/ibdata1, error: 35 InnoDB: Check that you do not already have another mysqld process InnoDB: using the same InnoDB data or log files. Sorry that this is a lot of information, but I don't want to leave anything out. Thanks.


  • IP address reuse on macvlan devices

    - by Alex Bubnoff
    I'm trying to create easy to use and possibly simple testing environment for some product and got some strange behaviour of macvlan's. What I'm trying to achieve: make a toolset for one-line start/stop of lxc containers(via docker) bound to external ip(I have enough of it on host machine). So, I'm doing something like this: docker run -d -name=container_name container_image pipework eth1 container_name ip/prefix_len@gateway and pipework here does this: GUEST_IFNAME=ph$NSPID$eth1 ip link add link eth1 dev $GUEST_IFNAME type macvlan mode bridge ip link set eth1 up ip link set $GUEST_IFNAME netns $NSPID ip netns exec $NSPID ip link set $GUEST_IFNAME name eth1 ip netns exec $NSPID ip addr add $IPADDR dev eth1 ip netns exec $NSPID ip route delete default ip netns exec $NSPID ip link set eth1 up ip netns exec $NSPID ip route replace default via $GATEWAY ip netns exec $NSPID arping -c 1 -A -I eth1 $IPADDR And it works for first time per IP. But for second time and later packets for containers IP isn't getting into container, while all configuration seem fine. So it looks like this: External machine ? ping 212.76.131.212 ....silence.... Host machine root@ubuntu:~# ip link show eth1 2: eth1: mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:15:17:c9:e1:c9 brd ff:ff:ff:ff:ff:ff root@ubuntu:~# ip addr show eth1 2: eth1: mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:15:17:c9:e1:c9 brd ff:ff:ff:ff:ff:ff root@ubuntu:~# tcpdump -v -i eth1 icmp tcpdump: WARNING: eth1: no IPv4 address assigned tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes 00:00:46.542042 IP (tos 0x0, ttl 60, id 9623, offset 0, flags [DF], proto ICMP (1), length 84) 5.134.221.98 212.76.131.212: ICMP echo request, id 6718, seq 2345, length 64 00:00:47.549969 IP (tos 0x0, ttl 60, id 9624, offset 0, flags [DF], proto ICMP (1), length 84) 5.134.221.98 212.76.131.212: ICMP echo request, id 6718, seq 2346, length 64 00:00:48.558143 IP (tos 0x0, ttl 60, id 9625, offset 0, flags [DF], proto ICMP (1), length 84) 5.134.221.98 212.76.131.212: ICMP echo request, id 6718, seq 2347, length 64 00:00:49.566319 IP (tos 0x0, ttl 60, id 9626, offset 0, flags [DF], proto ICMP (1), length 84) 5.134.221.98 212.76.131.212: ICMP echo request, id 6718, seq 2348, length 64 00:00:50.573999 IP (tos 0x0, ttl 60, id 9627, offset 0, flags [DF], proto ICMP (1), length 84) 5.134.221.98 212.76.131.212: ICMP echo request, id 6718, seq 2349, length 64 ^C 5 packets captured 5 packets received by filter 0 packets dropped by kernel 1 packet dropped by interface Host machine, netns of container root@ubuntu:~# ip netns exec 32053 ip link show eth1 48: eth1@if2: mtu 1500 qdisc noqueue state UNKNOWN link/ether b2:12:f7:cc:a1:9d brd ff:ff:ff:ff:ff:ff root@ubuntu:~# ip netns exec 32053 ip addr show eth1 48: eth1@if2: mtu 1500 qdisc noqueue state UNKNOWN link/ether b2:12:f7:cc:a1:9d brd ff:ff:ff:ff:ff:ff inet 212.76.131.212/29 scope global eth1 inet6 fe80::b012:f7ff:fecc:a19d/64 scope link valid_lft forever preferred_lft forever root@ubuntu:~# ip netns exec 32053 tcpdump -v -i eth1 icmp tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes ....silence.... ^C 0 packets captured 0 packets received by filter 0 packets dropped by kernel So, can anyone say, what can it be? Can this be caused by not a bug in macvlan implementation? Is there any tools I can use to debug that configuration?


  • Can't sync filesystem without reboot

    - by Fabio
    I'm having an issue with a linux server. Once a week the running mysql instance hangs and there is no way to fully stop it. If I kill it, it remains in zombie status and init does not reap its pid. The server is used for staging deployments and some internal tools, so it's not under heavy load. The only process constantly used id mysql and for this I think that it's the only process which suffer of this issue. I've searched system logs for errors and the only thing I found is this error (repeated a couple of times) in dmesg output: [706560.640085] INFO: task mysqld:31965 blocked for more than 120 seconds. [706560.640198] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [706560.640312] mysqld D ffff88032fd93f40 0 31965 1 0x00000000 [706560.640317] ffff880242a27d18 0000000000000086 ffff88031a50dd00 ffff880242a27fd8 [706560.640321] ffff880242a27fd8 ffff880242a27fd8 ffff88031e549740 ffff88031a50dd00 [706560.640325] ffff88031a50dd00 ffff88032fd947f8 0000000000000002 ffffffff8112f250 [706560.640328] Call Trace: [706560.640338] [<ffffffff8112f250>] ? __lock_page+0x70/0x70 [706560.640344] [<ffffffff816cb1b9>] schedule+0x29/0x70 [706560.640347] [<ffffffff816cb28f>] io_schedule+0x8f/0xd0 [706560.640350] [<ffffffff8112f25e>] sleep_on_page+0xe/0x20 [706560.640353] [<ffffffff816c9900>] __wait_on_bit+0x60/0x90 [706560.640356] [<ffffffff8112f390>] wait_on_page_bit+0x80/0x90 [706560.640360] [<ffffffff8107dce0>] ? autoremove_wake_function+0x40/0x40 [706560.640363] [<ffffffff8112f891>] filemap_fdatawait_range+0x101/0x190 [706560.640366] [<ffffffff81130975>] filemap_write_and_wait_range+0x65/0x70 [706560.640371] [<ffffffff8122e441>] ext4_sync_file+0x71/0x320 [706560.640376] [<ffffffff811c3e6d>] do_fsync+0x5d/0x90 [706560.640379] [<ffffffff811c40d0>] sys_fsync+0x10/0x20 [706560.640383] [<ffffffff816d495d>] system_call_fastpath+0x1a/0x1f When this happens the only way to make everything working again is a full reboot, but in order to do that I'm forced to use this command after I've manually stopped all running processes echo b > /proc/sysrq-trigger otherwise normal reboot process hangs forever. I've tracked reboots script and I've found out that also the reboot process hangs on a sync call, this one in /etc/init.d/sendsigs (I'm on ubuntu) # Flush the kernel I/O buffer before we start to kill # processes, to make sure the IO of already stopped services to # not slow down the remaining processes to a point where they # are accidentily killed with SIGKILL because they did not # manage to shut down in time. sync I'm almost sure that the cause of this is an hardware issue (the RAID controller???) also because I've other two machines with the same hardware and software configuration and they don't suffer of this, but I can't find any hint in syslog or dmesg. I've also installed smartmontools and mcelog packages but none of them did report any issue. What can I do to track the cause of this issue? Today is happened again, here is the status of system after triggering a reboot init---console-kit-dae---64*[{console-kit-dae}] +-dbus-daemon +-mcelog +-mysqld---{mysqld} +-newrelic-daemon---newrelic-daemon---11*[{newrelic-daemon}] +-ntpd +-polkitd---{polkitd} +-python3 +-rpc.idmapd +-rpc.statd +-rpcbind +-sh---rc---S20sendsigs---sync +-smartd +-snmpd +-sshd---sshd---zsh---sudo---zsh---pstree +-sshd---sshd---zsh---sudo---zsh And here is the status of sync process # ps aux | grep sync root 3637 0.1 0.0 4352 372 ? D 05:53 0:00 sync i.e. Uninterruptible sleep... 
Hardware specs as reported by lshw I think the raid controller is a fake raid. I usually don't deal with hardware (and for the record I don't have physical access to it) description: Computer product: X7DBP () vendor: Supermicro version: 0123456789 serial: 0123456789 width: 64 bits capabilities: smbios-2.4 dmi-2.4 vsyscall32 configuration: administrator_password=disabled boot=normal frontpanel_password=unknown keyboard_password=unknown power-on_password=disabled uuid=53D19F64-D663-A017-8922-0030487C1FEE *-core description: Motherboard product: X7DBP vendor: Supermicro physical id: 0 version: PCB Version serial: 0123456789 *-firmware description: BIOS vendor: Phoenix Technologies LTD physical id: 0 version: 6.00 date: 05/29/2007 size: 106KiB capacity: 960KiB capabilities: pci pnp upgrade shadowing escd cdboot bootselect edd int13floppy2880 acpi usb ls120boot zipboot biosbootspecification *-storage description: RAID bus controller product: 631xESB/632xESB SATA RAID Controller vendor: Intel Corporation physical id: 1f.2 bus info: pci@0000:00:1f.2 version: 09 width: 32 bits clock: 66MHz capabilities: storage pm bus_master cap_list configuration: driver=ahci latency=0 resources: irq:19 ioport:18a0(size=8) ioport:1874(size=4) ioport:1878(size=8) ioport:1870(size=4) ioport:1880(size=32) memory:d8500400-d85007ff


  • Application Does Not Start in Windows 7

    - by Jim Fell
    I recently installed a new 60GB SSD as my primary hard drive and re-installed Windows 7 Professional 64-bit. I then installed SSD Fresh from Abelssoft to optimize Windows to run on the SSD. It seemed to install okay, but when I try to run the utility, its splash screen appears briefly before it quietly closes. No errors are displayed; the utility just fails to launch. I have run SSD Fresh on another SSD-equipped Windows 7 Pro x64 computer in the past without any problems. Does anyone know what might be preventing the program from running? I tried running sfc /scannow from the command line (with administrator privileges), shutting down the Spybot Resident, and disabling the firewall and virus scanner. I also tried running the tool as administrator; I even tried reinstalling it, running the installer as administrator. No luck. Every time I try to launch the program the Event Viewer logs this same set of errors: Error 4/2/2012 11:35:44 PM Application Error 1000 (100) Faulting application name: SSDFresh.exe, version: 1.0.0.0, time stamp: 0x4f2a45d8 Faulting module name: unknown, version: 0.0.0.0, time stamp: 0x00000000 Exception code: 0xc0000005 Fault offset: 0x000007ff0016dbba Faulting process id: 0x994 Faulting application start time: 0x01cd11fd9fe978df Faulting application path: C:\Program Files (x86)\SSD Fresh\SSDFresh.exe Faulting module path: unknown Report Id: dfeed551-7df0-11e1-a2c7-002522c47ec0 Error 4/2/2012 11:35:43 PM .NET Runtime 1026 None Application: SSDFresh.exe Framework Version: v4.0.30319 Description: The process was terminated due to an unhandled exception. Exception Info: System.NullReferenceException Stack: at AbBugReporter.BugForm.InitLanguage() at AbBugReporter.BugForm..ctor(AbFlexTrans.LanguageInfo, AbBugReporter.BugReportManager, Boolean) at AbBugReporter.BugReportManager.Show(System.Exception) at SSDFresh.App.App_DispatcherUnhandledException(System.Object, System.Windows.Threading.DispatcherUnhandledExceptionEventArgs) at System.Windows.Threading.Dispatcher.CatchException(System.Exception) at MS.Internal.Threading.ExceptionFilterHelper.TryCatchWhen(System.Object, System.Delegate, System.Object, Int32, System.Delegate) at System.Windows.Threading.Dispatcher.WrappedInvoke(System.Delegate, System.Object, Int32, System.Delegate) at System.Windows.Threading.Dispatcher.InvokeImpl(System.Windows.Threading.DispatcherPriority, System.TimeSpan, System.Delegate, System.Object, Int32) at MS.Win32.HwndSubclass.SubclassWndProc(IntPtr, Int32, IntPtr, IntPtr) at MS.Win32.UnsafeNativeMethods.DispatchMessage(System.Windows.Interop.MSG ByRef) at System.Windows.Threading.Dispatcher.PushFrameImpl(System.Windows.Threading.DispatcherFrame) at System.Windows.Application.RunInternal(System.Windows.Window) at System.Windows.Application.Run() at SSDFresh.App.Main() Error 4/2/2012 11:35:39 PM SideBySide 59 None Activation context generation failed for "C:\Windows\Microsoft.NET\Framework64\v4.0.30319\csc.exe".Error in manifest or policy file "C:\Windows\Microsoft.NET\Framework64\v4.0.30319\csc.exe.Config" on line 0. Invalid Xml syntax. 
Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None Error 4/2/2012 11:35:39 PM SideBySide 59 None For those who are interested, here is my system configuration: ASRock M3A770DE AM3 AMD 770 ATX AMD Motherboard AMD Athlon II X3 455 Rana 3.3GHz Socket AM3 95W Triple-Core Desktop Processor ADX455WFGMBOX G.SKILL Value Series 8GB (2 x 4GB) 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10600) Desktop Memory Model F3-10600CL9D-8GBNT Mushkin Enhanced Chronos Deluxe MKNSSDCR60GB-DX 2.5" 60GB SATA III Synchronous MLC Internal Solid State Drive (SSD) (Primary/Boot HD) Western Digital Caviar Blue RFHWD1600AAJS 160GB 7200 RPM SATA 3.0Gb/s 3.5" Internal Hard Drive -Bare Drive (Secondary HD) Sony Optiarc CD/DVD Burner Black SATA Model AD-7261S-0B LightScribe Support RAIDMAX RX-850AE 850W ATX12V v2.3 / EPS12V SLI Certified CrossFire Ready 80 PLUS GOLD Certified Modular Active PFC Power Supply ASUS HD7850-DC2-2GD5 Radeon HD 7850 2GB 256-bit GDDR5 PCI Express 3.0 x16 HDCP Ready CrossFireX Support Video Card Asus ML228H 21.5" Full HD LED BackLight LED Monitor Slim Design (x3)


  • Linux Kernel not passing through multicast UDP packets

    - by buecking
    Recently I've set up a new Ubuntu Server 10.04 and noticed my UDP server is no longer able to see any multicast data sent to the interface, even after joining the multicast group. I've got the exact same set up on two other Ubuntu 8.04.4 LTS machines and there is no problem receiving data after joining the same multicast group. The ethernet card is a Broadcom netXtreme II BCM5709 and the driver used is: b $ ethtool -i eth1 driver: bnx2 version: 2.0.2 firmware-version: 5.0.11 NCSI 2.0.5 bus-info: 0000:01:00.1 I'm using smcroute to manage my multicast registrations. b$ smcroute -d b$ smcroute -j eth1 233.37.54.71 After joining the group ip maddr shows the newly added registration. b$ ip maddr 1: lo inet 224.0.0.1 inet6 ff02::1 2: eth0 link 33:33:ff:40:c6:ad link 01:00:5e:00:00:01 link 33:33:00:00:00:01 inet 224.0.0.1 inet6 ff02::1:ff40:c6ad inet6 ff02::1 3: eth1 link 01:00:5e:25:36:47 link 01:00:5e:25:36:3e link 01:00:5e:25:36:3d link 33:33:ff:40:c6:af link 01:00:5e:00:00:01 link 33:33:00:00:00:01 inet 233.37.54.71 <------- McastGroup. inet 224.0.0.1 inet6 ff02::1:ff40:c6af inet6 ff02::1 So far so good, I can see that I'm receiving data for this multicast group. b$ sudo tcpdump -i eth1 -s 65534 host 233.37.54.71 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth1, link-type EN10MB (Ethernet), capture size 65534 bytes 09:30:09.924337 IP 192.164.1.120.58848 > 233.37.54.71.15572: UDP, length 212 09:30:09.947547 IP 192.164.1.120.58848 > 233.37.54.71.15572: UDP, length 212 09:30:10.108378 IP 192.164.1.120.58866 > 233.37.54.71.15574: UDP, length 268 09:30:10.196841 IP 192.164.1.120.58848 > 233.37.54.71.15572: UDP, length 212 ... I can also confirm that the interface is receiving mcast packets. b $ ethtool -S eth1 | grep mcast_pack rx_mcast_packets: 103998 tx_mcast_packets: 33 Now here's the problem. When I try to capture the traffic using a simple ruby UDP server I receive zero data! Here's a simple server that reads data send on port 15572 and prints the first two characters. This works on the two 8.04.4 Ubuntu Servers, but not the 10.04 server. require 'socket' s = UDPSocket.new s.bind("", 15572) 5.times do text, sender = s.recvfrom(2) puts text end If I send a UDP packet crafted in ruby to localhost, the server receives it and prints out the first two characters. So I know that the server above is working correctly. irb(main):001:0> require 'socket' => true irb(main):002:0> s = UDPSocket.new => #<UDPSocket:0x7f3ccd6615f0> irb(main):003:0> s.send("I2 XXX", 0, 'localhost', 15572) When I check the protocol statistics I see that InMcastPkts is not increasing. While on the other 8.04 servers, on the same network, received a few thousands packets in 10 seconds. b $ netstat -sgu ; sleep 10 ; netstat -sgu IcmpMsg: InType3: 11 OutType3: 11 Udp: 446 packets received 4 packets to unknown port received. 0 packet receive errors 461 packets sent UdpLite: IpExt: InMcastPkts: 4654 <--------- Same as below OutMcastPkts: 3426 InBcastPkts: 9854 InOctets: -1691733021 OutOctets: 51187936 InMcastOctets: 145207 OutMcastOctets: 109680 InBcastOctets: 1246341 IcmpMsg: InType3: 11 OutType3: 11 Udp: 446 packets received 4 packets to unknown port received. 0 packet receive errors 461 packets sent UdpLite: IpExt: InMcastPkts: 4656 <-------------- Same as above OutMcastPkts: 3427 InBcastPkts: 9854 InOctets: -1690886265 OutOctets: 51188788 InMcastOctets: 145267 OutMcastOctets: 109712 InBcastOctets: 1246341 If I try forcing the interface into promisc mode nothing changes. 
At this point I'm stuck. I've confirmed the kernel config has multicast enabled. Perhaps there are other config options I should be checking? b $ grep CONFIG_IP_MULTICAST /boot/config-2.6.32-23-server CONFIG_IP_MULTICAST=y Any thoughts on where to go from here?
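    One way to split the problem is to take smcroute out of the picture and have the receiving socket join the group itself. The sketch below uses only the Python standard library; it joins 233.37.54.71 on eth1 via IP_ADD_MEMBERSHIP using the Linux ip_mreqn form (interface given by index, since eth1 has no IPv4 address per the tcpdump warning). If this receives data while the Ruby server stays silent, the missing piece is the socket-level join; if it also sees nothing, something between the NIC and the socket is discarding the datagrams (reverse-path filtering is a common suspect on newer kernels, but that is an assumption to verify, not something the logs above prove).

        # Minimal multicast receiver that performs its own group join on eth1.
        import socket
        import struct

        GROUP = "233.37.54.71"
        PORT = 15572
        IFACE = "eth1"

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", PORT))

        # struct ip_mreqn: group address, local address (any), interface index.
        mreq = struct.pack("4s4si",
                           socket.inet_aton(GROUP),
                           socket.inet_aton("0.0.0.0"),
                           socket.if_nametoindex(IFACE))
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

        for _ in range(5):
            data, sender = sock.recvfrom(2048)
            print(sender, data[:2])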


  • Merging cuesheet chapter halves into single track for an audiobook

    - by TheSavo
    I have an audiobook that I have ripped, and I need some help constructing chapters. I have already made some cue sheets:

        TITLE "Bookname"
        PERFORMER "the Author"
        FILE "File1.FLAC" wave ; 23971906.667 milliseconds
          TRACK 01 AUDIO
            TITLE "_Intro"
            INDEX 01 00:00:00
          TRACK 02 AUDIO
            TITLE "CH 01"
            INDEX 01 24:15:50
          TRACK 03 AUDIO
            TITLE "CH 02"
            INDEX 01 66:21:00
          TRACK 04 AUDIO
            TITLE "CH 03"
            INDEX 01 87:05:00

    The audiobook is in two files. The chapter at the end of the first file is continued in the second file; however, the second file restates the front matter (the publisher, the book title, and so on) before the chapter resumes. I would like to merge the two 'halves' of the chapter into one seamless track. The only way I can think to do this would be: bulk cut down the tracks, drop the junk info into a junk track, continue the track listings as normal, then take the two "halves" of the target chapter and build a separate cue sheet for them. I know there has to be an easier way; I am OK with making the 'junk' info a 'gap' or something. These are FLAC files that will be converted to MP3 for my phone and other portable devices. I have read the primers on cue sheets, but I am just not getting it.
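    For the actual merge step, one approach that avoids hand-editing cue sheets is to cut both halves out of the FLAC files by their cue times and write a single chapter track. Below is a sketch using the third-party pydub package (which needs ffmpeg or avconv installed); the second file's name and all of the slice times are placeholders to replace with the real values from your cue sheets.

        # Join the two halves of the split chapter into one FLAC track.
        from pydub import AudioSegment

        def cue_to_ms(index):                 # cue INDEX is MM:SS:FF, 75 frames per second
            minutes, seconds, frames = (int(x) for x in index.split(":"))
            return (minutes * 60 + seconds) * 1000 + round(frames * 1000 / 75)

        file1 = AudioSegment.from_file("File1.FLAC", format="flac")
        file2 = AudioSegment.from_file("File2.FLAC", format="flac")   # placeholder name

        first_half = file1[cue_to_ms("87:05:00"):]          # CH 03 start through end of file 1
        # Skip the restated publisher/title intro at the start of file 2 and stop where
        # the next chapter begins (both times are placeholders).
        second_half = file2[cue_to_ms("00:45:00"):cue_to_ms("24:10:00")]

        (first_half + second_half).export("CH 03.flac", format="flac")

    The remaining chapters can then keep their normal cue entries, with the merged file standing in for the split chapter.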


  • Determine wifi capabilities of Windows box (with WSUS install rules)

    - by Hagen von Eitzen
    I need to determine whether a computer is in fact a laptop with wifi capabilities (with emphasis on wifi rather than laptop). More precisely, I want to distribute a piece of software I wrote via WSUS and Local Update Publisher to those clients. To this end, I want to create appropriate "Package installable rules", that is, simple rules used by the Windows Update Service on the client to decide beforehand whether or not an update/installation package is applicable. Typically, such "installable rules" are logical combinations of rules of type "File exists", "Registry Key exists", "WMI Query" and "MSI Product Installed", so I'd prefer one of those methods. The method I hope someone here can help me find should work with Windows 7/Vista, preferably also with XP. My guess is that a WMI query is the way to go, but I have little experience with that. I have found that one can, for example, query for EnclosureType, and that might detect a laptop. However, I would be much happier if an actually available wifi interface could be detected. Does anyone have an idea how to approach this? If there is anything you need me to clarify, don't hesitate to comment.
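    As a starting point for the "WMI Query" route, the sketch below (Python with the third-party wmi package, for prototyping on a test client) checks the NDIS physical medium reported per adapter; 802.11 adapters report medium types 1 (WirelessLan) or 9 (Native802_11). Treat the class name, namespace and value mapping as assumptions to verify before turning the equivalent WQL ("SELECT * FROM MSNdis_PhysicalMediumType WHERE NdisPhysicalMediumType = 9") into a Local Update Publisher installable rule.

        # Prototype: does this machine expose an 802.11-capable network adapter?
        import wmi

        WIFI_MEDIUMS = {1, 9}   # NdisPhysicalMediumWirelessLan, NdisPhysicalMediumNative802_11

        c = wmi.WMI(namespace=r"root\wmi")
        wifi_adapters = [
            a.InstanceName
            for a in c.MSNdis_PhysicalMediumType()
            if a.NdisPhysicalMediumType in WIFI_MEDIUMS
        ]

        print("Wi-Fi capable:", bool(wifi_adapters))
        for name in wifi_adapters:
            print(" ", name)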


  • Installing .NET Framework 4 Client Profile breaks Windows Update

    - by Richard
    I have a Samsung NC-10 netbook with a fresh install of Windows 7 Home Premium 32-bit (it only had 2GB). If Microsoft .NET Framework 4 Client Profile is installed on it, Windows Update will always return error code 8024402F ("Windows Update encountered an unknown error"). As soon as I uninstall it, Windows Update works just fine again. Out of the four computers in this house, only this netbook has the problem. My question is: How can I get the .NET Framework 4 Client Profile installed on my netbook and continue to have a functioning Windows Update? ----- More information ----- The hard-drive recently died on my netbook so I replaced it with a nice new SSD and did a fresh installation of Windows 7 Home Premium (SP1) - along with all the updates. At some point I found that, when I ran Windows Update, I was greeted with error code 8024402F ("Windows Update encountered an unknown error"). Looking in C:\Windows\WindowsUpdate.log, I saw the following issue: WARNING: ECP: Failed to validate cab file digest downloaded from http://download.windowsupdate.com/msdownload/update/software/dflt/2012/02/4913552_4a5c9563d1f58c77f30d0d5c9999e4b8bff3ab21.cab with error 0x80091007 WARNING: ECP: This roundtrip contained some optimized updates which failed. New Update count = 0, Old Count = 3 FATAL: ProcessCoreMetadata did not return any update to be committed WARNING: Sync of Updates: 0x8024402f WARNING: SyncServerUpdatesInternal failed: 0x8024402f When I downloaded the CAB from the URL listed and opened it, it contained a file called 4913552.txt. A search on Google suggested that it's related to Microsoft .NET Framework 4 Client Profile. Other people had reported problems with it breaking Windows Update, but they were running Windows XP. I tried the steps outlined on the Microsoft site for this error code, but it reported that there was nothing wrong. I also found this superuser question, I tried all the answers listed but none of them made any difference. My router doesn't block ActiveX, changing my internet settings in IE made no difference, assuming it was a corrupted update repository didn't do anything (except wipe my update history), my date and time was correct, switching to Google's DNS didn't work and neither did disabling IPv6. Figuring that this update was corrupted, I repaired it and nothing changed. In desperation I un-installed it and Windows Update started working again! Brilliant! I then downloaded the full version from the Microsoft website, installed it and, thankfully, Windows Update continued to work just fine. A week later I turn on my netbook and Windows Update is broken again with exactly the same error message and log entries as before. Repairing .NET Framework 4 Client Profile did nothing, removing it entirely solved the problem again. Thinking this might be some odd Windows installation issue, I formatted the hard-drive and re-installed Windows. Same problem as before - as soon as .NET Framework 4 Client Profile ended up on the netbook, Windows Update stopped working and reported error 8024402F. As soon as it was un-installed, everything worked just fine again. There are three other machines in this house and all of them have working Windows Update and this Client Profile. Does anyone know why it causes this netbook to break and, more importantly, how I can fix it?


  • Certificates required for WHQL-certified drivers

    - by Kasius
    The 64-bit Windows 7 image that we deploy to machines at our site does not contain all of the certificates included on a default Windows image. Automatic root certificate installation is also disabled per policy from higher in the organization. We have had a lot of trouble installing many WHQL-certified drivers from reputable companies (e.g. HP, Lexmark, Dell), and I hypothesize that a required certificate is missing from one of the certificate stores on the machine. The error we typically get is: "The driver cannot be installed because it is either not digitally signed or not signed in the appropriate manner." I know that it is signed. A .CAT file is included, and it has the following tree from top to bottom:

        Microsoft Root Authority (thumbprint a4 34 89 15 9a 52 0f 0d 93 d0 32 cc af 37 e7 fe 20 a8 b4 19)
        Microsoft Windows Hardware Compatibility PCA (thumbprint 93 b8 d8 82 0a 32 db 20 a5 ea b6 8d 86 ad 67 8e fa 14 ea 41)
        Microsoft Windows Hardware Compatibility Publisher (thumbprint b0 50 45 45 42 4e be 2c 16 2f 62 5b bf 5a e6 9b 96 bf 0b 0b)

    What certificates are required to install WHQL-certified drivers? Is it possibly something other than certificates? Thanks! NOTE: I have posted this question on Technet as well, but honestly, I've never had a lot of luck posting questions on the Technet forums.
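    One quick way to test the missing-certificate hypothesis on an affected machine is to check whether the chain's root and intermediate certificates are present in the machine stores at all. The sketch below shells out to the built-in certutil tool and searches the Root and intermediate ("CA") stores for the thumbprints listed above; which store each certificate is supposed to live in is an assumption to confirm, while the thumbprints themselves come from the .CAT chain.

        # Check the LocalMachine Root and CA stores for the chain's thumbprints.
        import subprocess

        THUMBPRINTS = {
            "Microsoft Root Authority":
                "a43489159a520f0d93d032ccaf37e7fe20a8b419",
            "Microsoft Windows Hardware Compatibility PCA":
                "93b8d8820a32db20a5eab68d86ad678efa14ea41",
        }

        def store_dump(store):
            out = subprocess.run(["certutil", "-store", store],
                                 capture_output=True, text=True).stdout
            return out.lower().replace(" ", "")

        dumps = {store: store_dump(store) for store in ("Root", "CA")}

        for name, thumb in THUMBPRINTS.items():
            found = [store for store, dump in dumps.items() if thumb in dump]
            status = "present in " + ", ".join(found) if found else "NOT FOUND"
            print(f"{name}: {status}")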


  • How do I push my initial snapshot to a subscriber server in SQL Server 2000?

    - by Kev
    I'm configuring Transactional Replication using the Push model. The scenario is: The SQL Servers: SQL01 (publisher) and SQL02 (subscriber) - both running SQL 2000 SP4. Both servers are standalone (i.e. not domain members) Both servers have their FQDN and NETBIOS names in their HOSTS files I've managed to configure SQL01 to publish my database and configured a Push subscription for SQL02 using the Push New Subscription wizard and set the Distribution Agent to update the subscription continuously. On the Push Subscription wizard "Initialise Subscription" page I've selected "Yes, initialise the schema and data" and ticked the "Start the Snapshot Agent to begin the initialisation process immediately" option. All the required services are running (SQL Agent). When I complete the wizard and browse the Replication - Publications folder I can see my publication (blue book with arrow). The publication shows the Push subscription and its status is Pending. If I look in the c:\Program Files\Microsoft SQL Server\Mssql\Repldata folder I see a number of T-SQL scripts for each table e.g. Products.bcp, Products.sch, Products.idx. What should happen now? Should my replicated database now (magically) appear on the subscription server?


  • Lenovo System Update Breaks Windows Live

    - by wolfvilleian
    Hey everyone, I've been racking my brain (and fingers from typing) trying to solve this issue, to no avail. I have a Lenovo computer, and I installed their system update tool to install all my missing drivers. However, after this tool is installed, Windows Live 2011 breaks: it will no longer sign in, giving error number 8e5e0247, and none of the solutions online have helped. It appears that a language setting somewhere gets set to en_ms, while mine should be en_ca. My computer is running Windows 7 x64. When I try to sign on to Messenger, it gives an error that (after some research) means the locale or language is not supported. I've searched my computer for any reference to en_ms but found none. A few other things seem to have broken as well: when a UAC box comes up, it is no longer able to identify the publisher of anything, and the indexing service does not work (I'm not sure if the indexing issue is related, but the UAC issue happened right after installation). I had this issue before and don't remember how I fixed it, but I believe it had something to do with environment variables. When it goes to sign in, it gets as far as "Loading contacts", then stops and goes back to the sign-in screen. Has anyone seen this before? Thanks

    Read the article

  • SQL 2008 R2 replication error: The process could not connect to Distributor

    - by Lance Lefebure
    I have two servers running SQL 2008 R2 Standard, each with an instance named "MAIN". I have a small test database on my primary server (one table, 13 rows) that I want to replicate to a second server as a proof-of-concept for some larger databases that I want to replicate. I set up the primary server to be a publisher and distributor, and set the database to do transactional replication. I copied the data to the second server via a backup/restore, not via a snapshot (which I'll have to do with the larger databases due to database size and limited bandwidth). I followed the instructions here: http://gnawgnu.blogspot.com/2009/11/sql-2008-transactional-replication-and.html Now on the subscriber, I go under Replication / Local Subscriptions / Right click / Properties on my subscription to the DB. The status of the last synchronization shows a status of: "The process could not connect to Distributor 'PRIMARYSERVER\MAIN'." Data IS replicating from the primary to the secondary. Any record I add on the primary shows up on the secondary server within seconds. Is the Distributor part of the Snapshot system that I'm not using, or is it part of the transaction replication stuff? Thanks, Lance
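
    The Distributor is very much part of transactional replication (the Log Reader Agent writes commands into the distribution database, and the Distribution Agent delivers them to subscribers), and since rows are flowing, delivery itself is working; the status message usually means that whichever machine is asking for status cannot reach PRIMARYSERVER\MAIN. A quick, hedged connectivity check to run from the subscriber box (the connection string is an assumption and uses Windows authentication):

        // Sketch only: run on the subscriber to confirm it can reach the
        // Distributor instance and its distribution database.
        using System;
        using System.Data.SqlClient;

        class DistributorPing
        {
            static void Main()
            {
                var connectionString =
                    @"Data Source=PRIMARYSERVER\MAIN;Initial Catalog=distribution;" +
                    "Integrated Security=SSPI;Connect Timeout=10";
                try
                {
                    using (var conn = new SqlConnection(connectionString))
                    {
                        conn.Open();
                        using (var cmd = new SqlCommand("SELECT @@SERVERNAME", conn))
                        {
                            Console.WriteLine("Connected to: " + cmd.ExecuteScalar());
                        }
                    }
                }
                catch (SqlException ex)
                {
                    // Failures here (name resolution, firewall, SQL Browser for the
                    // named instance, or permissions) would match the
                    // "could not connect to Distributor" status.
                    Console.WriteLine("Could not connect: " + ex.Message);
                }
            }
        }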

    Read the article

  • IPv6 host route is deleted after PMTU expires

    - by SAPikachu
    I am experimenting my new IPv6 tunnel setup between my local Ubuntu box and a scratch Linode. I set up some docker containers, configured 6in4 tunnel server and IPv6 forwarding on the Linode: # uname -a Linux argo 3.15.4-x86_64-linode45 #1 SMP Mon Jul 7 08:42:36 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux # ip addr .. snipped .. 48: sit-sapikachu: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1472 qdisc noqueue state UNKNOWN group default link/sit 106.185.41.115 peer 1.2.3.4 inet6 fd00::1/64 scope global valid_lft forever preferred_lft forever inet6 fe80::6ab9:2973/64 scope link valid_lft forever preferred_lft forever 13: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff inet 172.17.42.1/16 scope global docker0 valid_lft forever preferred_lft forever inet6 fc00::1/64 scope global valid_lft forever preferred_lft forever inet6 fe80::5484:7aff:fefe:9799/64 scope link valid_lft forever preferred_lft forever // Docker containers are bridged to docker0 On my local box, I configured a 6in4 tunnel interface to connect to the Linode box, and added a host route to one of the docker container: # uname -a Linux sapikachu-netbox 3.13.0-24-generic #47-Ubuntu SMP Fri May 2 23:30:00 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux # ip addr .. snipped .. 16: sit-argo: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default link/sit 0.0.0.0 peer 106.185.41.115 inet6 fd00::2/64 scope global valid_lft forever preferred_lft forever inet6 fe80::a97:302/64 scope link valid_lft forever preferred_lft forever inet6 fe80::ac19:1/64 scope link valid_lft forever preferred_lft forever inet6 fe80::c0a8:1f0/64 scope link valid_lft forever preferred_lft forever inet6 fe80::c0a8:1fa/64 scope link valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether *** brd ff:ff:ff:ff:ff:ff .. snipped .. inet6 fd00:0:1::1/64 scope global valid_lft forever preferred_lft forever inet6 fe80::2e0:6fff:fe0e:365e/64 scope link valid_lft forever preferred_lft forever # ip route replace fc00::1875:8606:d8c1:8a9d via fd00::1 # Add route to docker container # ip -6 route .. snipped unrelated routes fc00::1875:8606:d8c1:8a9d via fd00::1 dev sit-argo metric 1024 expires 590sec mtu 1472 fd00::/64 dev sit-argo proto kernel metric 256 fd00:0:1::/64 dev eth0 proto kernel metric 256 fe80::/64 dev sit-argo proto kernel metric 256 (Note that tunnel MTU on my local box is different from the server, this is intentional for testing) After adding the host route to the docker container (fc00::1875:8606:d8c1:8a9d), I can ping the container without problem until the route expires. After that I couldn't get reply any more. If I run ip -6 route in a few seconds after expiration, expiration time of the host route will be a negative number: fc00::1875:8606:d8c1:8a9d via fd00::1 dev sit-argo metric 1024 expires -1sec And output of ip route get fc00::1875:8606:d8c1:8a9d shows that it is routed to my default IPv6 gateway (which fails to route it correctly of course, since the address is not globally routable). After some time, the host route disappears without a trace. This problem won't happen if I do either one of the following things: Set MTU of tunnel on my local box to be the same as the server (1472). The route won't have expiration time in both ip -6 route and ip route get in this case. Instead of adding a host route, add a route with network mask (even /127 works). 
In this case ip -6 route shows the route without an expiration time; ip route get shows an expiration time, but the route is correctly refreshed after it expires. Although this problem can easily be worked around, I am curious to know why it happens. Is there an error in my configuration, or is this a kernel bug?
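
    Not an answer to the "why", but as a stopgap while investigating, the host route can be re-applied before the PMTU-derived entry gets a chance to expire. A sketch (destination, gateway and interface come from the setup above; the 300-second interval is an arbitrary assumption), shelling out to iproute2 and runnable under Mono/.NET on the local box:

        // Sketch only: periodically re-pins the host route so the expiring
        // PMTU-derived entry never drops it. Interval is an assumption.
        using System;
        using System.Diagnostics;
        using System.Threading;

        class RouteRefresher
        {
            static void Main()
            {
                const string args =
                    "-6 route replace fc00::1875:8606:d8c1:8a9d via fd00::1 dev sit-argo";

                while (true)
                {
                    var psi = new ProcessStartInfo("ip", args) { UseShellExecute = false };
                    using (var ip = Process.Start(psi))
                    {
                        ip.WaitForExit();
                        Console.WriteLine("ip route replace exited with " + ip.ExitCode);
                    }
                    Thread.Sleep(TimeSpan.FromSeconds(300));
                }
            }
        }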

    Read the article

  • Windows 7 x64 support for Intel GMA 3650 (or GMA 3600)

    - by Loom
    I recently purchased an Intel D2700 MUD motherboard and I cannot find drivers for the Win7 x64 integrated graphics (Intel GMA 3650, aka PowerVR SGX545). The accompanying CD contains the Win7 x32 version only. When I run it I get an error: "This computer does not meet the minimum requirements for installing the software." I tried to use the online Intel Driver Update Utility for graphics, from Chrome, Firefox and Internet Explorer, without success. First a UAC prompt appears, and then an endlessly spinning progress bar with the text "Analyzing computer...". The text in the UAC prompt is: Program file name: System Requirements Lab; Verified publisher: Husdawg, LLC. I downloaded this utility (intel_srldetect_4.5.5.0) and started it from my hard disk. I got an error: "A network error occurred while attempting to read from the file: C:\Users\Loom\Downloads\SystemRequirementsLab_intel_4.5.5.0.msi". The standard VGA driver works for this video card but without hardware acceleration: "Hardware acceleration is either disabled or not supported by your video card driver, which could slow game performance. Make sure you have the latest video card driver installed and that hardware acceleration is turned on." Where can I get an appropriate driver?

    Read the article

  • Using jQuery validation plugin with tabbed navigation

    - by user3438917
    I have a tabbed navigation wizard, for which the first section needs to be validated before proceeding to the next tab. The validation should trigger when the user hits the "next" button. I am unable to get the validation to trigger though: <form id="target-group" novalidate="novalidate"> <div class="box"> <div class='box-header-main'><h2><img src="assets/img/list.png" /> Target Group Information</h2></div> <br /> <div class='box'> <div class='box-header-property'><h2><span data-bind="text:Name">New Target Group</span> | <i class='fa fa-file'></i></h2></div> <br /> <div class='row'> <div id='flight-wizard'> <div id='content' class='col-lg-12'> <div class='col-lg-12'> <div id='tabs'> <ul> <li id="targetgroup-info-tab"><a href='#tabs-1'><i class="fa fa-info-circle"></i>Target Group Info</a></li> <li id="zone-tab"><a href='#tabs-2'><i class="fa fa-map-marker"></i>Zones</a></li> </ul> <div id='tabs-1'> <div class='row'> <div class='col-xs-6'> <div class='form-group'> Name<sup>*</sup> <input id="selectError0" name="name" class='form-control col-xs-12' data-bind="value: asdf" placeholder='Enter Name ...' /> </div> <form class='form-horizontal'> <div class='form-group'> Product(s)<sup>*</sup> <div class='controls' id='products'> <select id='selectError3' class='form-control' data-bind="options:test, optionsText: 'Name', optionsValue : 'test', value: test, optionsCaption: 'Choose Product...'"></select> </div> </div> </form> </div> <!--RIGHT PANE--> <div class='col-xs-6'> <div class='form-group'> Platform<sup>*</sup> <div class='controls'> <select id="selectError2" class='form-control' data-bind="options:test, optionsText: 'Name', optionsValue: 'test', value : test, optionsCaption: 'Choose Platform...'"></select> </div> </div> <form class='form-horizontal'> <div class='form-group'> AdTypes(s)<sup>*</sup> <div class='controls' id='adtypes'> <select multiple="" id='adtypesselect' class='form-control' data-rel="chosen" data-bind="options:test, optionsText: 'Name', optionsValue : 'test', selectedOptions: test, optionsCaption: 'test...'"></select> </div> </div> </form> <button id="btn_cancel_large" class='btn btn-large btn-primary btn-round'><i class='fa fa-ban' /></i> Cancel</button> <button id="btn-next-large" class='btn btn-large btn-primary btn-round'>Next <i class='fa fa-arrow-circle-right'></i></button> </div> <!--end of right pane--> </div> </div> <div id='tabs-2'> <div class='row'> <div class='col-lg-12'> <div class='row'> <div class='col-lg-12'> <div id='zones_list' class='box-content'> <div id='add-new-targetgroupzone' class='add-new'><i class='fa fa-plus-circle'></i><a href='/#/inventory/targeting/' onclick="return false;">Add Zone</a></div> <table id="results" width="100%"> <thead> <tr> <th>Publisher</th> <th>Property</th> <th>Zone</th> <th>AdTypes</th> <th width='10%'>Quick&nbsp;Actions</th> </tr> </thead> </table> </div> </div> </div> </div> </div> <br /> <div class="btn_row"> <button id="btn_cancel_large2" class='btn btn-large btn-primary btn-round'><i class='fa fa-ban' /></i> Cancel</button> <button id="btn-submit-large" class='btn btn-large btn-primary btn-round'>Submit <i class='fa fa-arrow-circle-down'></i></button> </div> </div> </div> </div> </div> </div> </div> </div> </div> </form> <form id="zones-form" style="display: none;" novalidate="novalidate" class="slideup-form"> <div class="box"> <div class="box-header-panel"> <h2>Add Target Group Zone</h2> <div class="box-icon" id="zones-form-close"> <i class="fa fa-arrow-circle-down"></i> </div> </div> <div class="box-content 
clearfix"> <div class="box-content"> <table id="zones-list" width="100%"> <thead> <tr> <th>Publisher</th> <th>Property</th> <th>Zone</th> <th>AdTypes</th> <th width='10%'>Quick&nbsp;Actions</th> </tr> </thead> </table> </div> </div> </div> </div> </form> jQuery: $("#target-group").validate({ rules: { name: { required: true } }, messages: { name: "Name required", } }); $('#btn-next-large').click(function () { if ($('#target-group').valid()) $tabs.tabs('select', $(this).attr("rel")); });

    Read the article

  • Parse XML document

    - by Neil
    I am trying to parse a remote XML document (from Amazon AWS): <ItemLookupResponse xmlns="http://webservices.amazon.com/AWSECommerceService/2009-03-31"> <OperationRequest> <RequestId>011d32c5-4fab-4c7d-8785-ac48b9bda6da</RequestId> <Arguments> <Argument Name="Condition" Value="New"></Argument> <Argument Name="Operation" Value="ItemLookup"></Argument> <Argument Name="Service" Value="AWSECommerceService"></Argument> <Argument Name="Signature" Value="73l8oLJhITTsWtHxsdrS3BMKsdf01n37PE8u/XCbsJM="></Argument> <Argument Name="MerchantId" Value="Amazon"></Argument> <Argument Name="Version" Value="2009-03-31"></Argument> <Argument Name="ItemId" Value="603084260089"></Argument> <Argument Name="IdType" Value="UPC"></Argument> <Argument Name="AWSAccessKeyId" Value="[myAccessKey]"></Argument> <Argument Name="Timestamp" Value="2010-06-14T15:03:27Z"></Argument> <Argument Name="ResponseGroup" Value="OfferSummary,ItemAttributes"></Argument> <Argument Name="SearchIndex" Value="All"></Argument> </Arguments> <RequestProcessingTime>0.0318510000000000</RequestProcessingTime> </OperationRequest> <Items> <Request> <IsValid>True</IsValid> <ItemLookupRequest> <Condition>New</Condition> <DeliveryMethod>Ship</DeliveryMethod> <IdType>UPC</IdType> <MerchantId>Amazon</MerchantId> <OfferPage>1</OfferPage> <ItemId>603084260089</ItemId> <ResponseGroup>OfferSummary</ResponseGroup> <ResponseGroup>ItemAttributes</ResponseGroup> <ReviewPage>1</ReviewPage> <ReviewSort>-SubmissionDate</ReviewSort> <SearchIndex>All</SearchIndex> <VariationPage>All</VariationPage> </ItemLookupRequest> </Request> <Item> <ASIN>B0000UTUNI</ASIN> <DetailPageURL>http://www.amazon.com/Garnier-Fructis-Fortifying-Conditioner-Minute/dp/B0000UTUNI%3FSubscriptionId%3DAKIAIYPTKHCWTRWWPWBQ%26tag%3Dws%26linkCode%3Dxm2%26camp%3D2025%26creative%3D165953%26creativeASIN%3DB0000UTUNI</DetailPageURL> <ItemLinks> <ItemLink> <Description>Technical Details</Description> <URL>http://www.amazon.com/Garnier-Fructis-Fortifying-Conditioner-Minute/dp/tech-data/B0000UTUNI%3FSubscriptionId%3DAKIAIYPTKHCWTRWWPWBQ%26tag%3Dws%26linkCode%3Dxm2%26camp%3D2025%26creative%3D386001%26creativeASIN%3DB0000UTUNI</URL> </ItemLink> <ItemLink> <Description>Add To Baby Registry</Description> <URL>http://www.amazon.com/gp/registry/baby/add-item.html%3Fasin.0%3DB0000UTUNI%26SubscriptionId%3DAKIAIYPTKHCWTRWWPWBQ%26tag%3Dws%26linkCode%3Dxm2%26camp%3D2025%26creative%3D386001%26creativeASIN%3DB0000UTUNI</URL> </ItemLink> <ItemLink> <Description>Add To Wedding Registry</Description> <URL>http://www.amazon.com/gp/registry/wedding/add-item.html%3Fasin.0%3DB0000UTUNI%26SubscriptionId%3DAKIAIYPTKHCWTRWWPWBQ%26tag%3Dws%26linkCode%3Dxm2%26camp%3D2025%26creative%3D386001%26creativeASIN%3DB0000UTUNI</URL> </ItemLink> <ItemLink> <Description>Add To Wishlist</Description> <URL>http://www.amazon.com/gp/registry/wishlist/add-item.html%3Fasin.0%3DB0000UTUNI%26SubscriptionId%3DAKIAIYPTKHCWTRWWPWBQ%26tag%3Dws%26linkCode%3Dxm2%26camp%3D2025%26creative%3D386001%26creativeASIN%3DB0000UTUNI</URL> </ItemLink> <ItemLink> <Description>Tell A Friend</Description> <URL>http://www.amazon.com/gp/pdp/taf/B0000UTUNI%3FSubscriptionId%3DAKIAIYPTKHCWTRWWPWBQ%26tag%3Dws%26linkCode%3Dxm2%26camp%3D2025%26creative%3D386001%26creativeASIN%3DB0000UTUNI</URL> </ItemLink> <ItemLink> <Description>All Customer Reviews</Description> <URL>http://www.amazon.com/review/product/B0000UTUNI%3FSubscriptionId%3DAKIAIYPTKHCWTRWWPWBQ%26tag%3Dws%26linkCode%3Dxm2%26camp%3D2025%26creative%3D386001%26creativeASIN%3DB0000UTUNI</URL> </ItemLink> 
<ItemLink> <Description>All Offers</Description> <URL>http://www.amazon.com/gp/offer-listing/B0000UTUNI%3FSubscriptionId%3DAKIAIYPTKHCWTRWWPWBQ%26tag%3Dws%26linkCode%3Dxm2%26camp%3D2025%26creative%3D386001%26creativeASIN%3DB0000UTUNI</URL> </ItemLink> </ItemLinks> <ItemAttributes> <Binding>Health and Beauty</Binding> <Brand>Garnier</Brand> <EAN>0603084260089</EAN> <Feature>Helps restore strength and shine</Feature> <Feature>Penetrates deep to nourish, repair and rejuvenate</Feature> <Feature>Makes hair softer and more manageable without weighing it down</Feature> <ItemDimensions> <Weight Units="hundredths-pounds">40</Weight> </ItemDimensions> <Label>Garnier</Label> <ListPrice> <Amount>419</Amount> <CurrencyCode>USD</CurrencyCode> <FormattedPrice>$4.19</FormattedPrice> </ListPrice> <Manufacturer>Garnier</Manufacturer> <NumberOfItems>1</NumberOfItems> <ProductGroup>Health and Beauty</ProductGroup> <ProductTypeName>ABIS_DRUGSTORE</ProductTypeName> <Publisher>Garnier</Publisher> <Size>5.0 oz</Size> <Studio>Garnier</Studio> <Title>Garnier Fructis Fortifying Fortifying Deep Conditioner, 3 Minute Masque - 5 oz</Title> <UPC>603084260089</UPC> </ItemAttributes> <OfferSummary> <LowestNewPrice> <Amount>229</Amount> <CurrencyCode>USD</CurrencyCode> <FormattedPrice>$2.29</FormattedPrice> </LowestNewPrice> <TotalNew>7</TotalNew> <TotalUsed>0</TotalUsed> <TotalCollectible>0</TotalCollectible> <TotalRefurbished>0</TotalRefurbished> </OfferSummary> </Item> </Items> </ItemLookupResponse> I am trying to extract data from the XML stream using XPathDocument, but with no luck: WebRequest request = HttpWebRequest.Create(url); WebResponse response = request.GetResponse(); //XmlDocument doc = new XmlDocument(); XPathDocument Doc = new XPathDocument(response.GetResponseStream()); XPathNavigator nav = Doc.CreateNavigator(); XPathNodeIterator ListPrice = nav.Select("/ItemLookupResponse/Items/Item/ItemAttributes/ListPrice"); foreach (XPathNavigator node in ListPrice) { Response.Write(node.GetAttribute("Amount", NAMESPACE)); } What am I missing? Thanks in advance!!
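
    One likely missing piece: the response declares a default namespace (xmlns="http://webservices.amazon.com/AWSECommerceService/2009-03-31"), so an XPath like /ItemLookupResponse/... with no prefix matches nothing. Below is a sketch of the same lookup with an XmlNamespaceManager bound to that namespace (the "aws" prefix is arbitrary, and the URL is a placeholder); note that Amount is a child element of ListPrice rather than an attribute:

        // Sketch only: namespace-aware XPath over the ItemLookupResponse stream.
        using System;
        using System.Net;
        using System.Xml;
        using System.Xml.XPath;

        class ListPriceReader
        {
            static void Main()
            {
                string url = "http://example.invalid/aws-item-lookup"; // placeholder
                WebRequest request = WebRequest.Create(url);

                using (WebResponse response = request.GetResponse())
                {
                    var doc = new XPathDocument(response.GetResponseStream());
                    XPathNavigator nav = doc.CreateNavigator();

                    // Map an arbitrary prefix to the document's default namespace.
                    var ns = new XmlNamespaceManager(nav.NameTable);
                    ns.AddNamespace("aws",
                        "http://webservices.amazon.com/AWSECommerceService/2009-03-31");

                    XPathNodeIterator listPrice = nav.Select(
                        "/aws:ItemLookupResponse/aws:Items/aws:Item/aws:ItemAttributes/aws:ListPrice",
                        ns);

                    foreach (XPathNavigator node in listPrice)
                    {
                        // Amount and FormattedPrice are elements, so read their values.
                        Console.WriteLine(node.SelectSingleNode("aws:Amount", ns).Value);
                        Console.WriteLine(node.SelectSingleNode("aws:FormattedPrice", ns).Value);
                    }
                }
            }
        }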

    Read the article

  • RSS feeds in Orchard

    - by Bertrand Le Roy
    When we added RSS to Orchard, we wanted to make it easy for any module to expose any contents as a feed. We also wanted the rendering of the feed to be handled by Orchard in order to minimize the amount of work from the module developer. A typical example of such feed exposition is of course blog feeds. We have an IFeedManager interface for which you can get the built-in implementation through dependency injection. Look at the BlogController constructor for an example: public BlogController( IOrchardServices services, IBlogService blogService, IBlogSlugConstraint blogSlugConstraint, IFeedManager feedManager, RouteCollection routeCollection) { If you look a little further in that same controller, in the Item action, you’ll see a call to the Register method of the feed manager: _feedManager.Register(blog); This in reality is a call into an extension method that is specialized for blogs, but we could have made the two calls to the actual generic Register directly in the action instead, that is just an implementation detail: feedManager.Register(blog.Name, "rss", new RouteValueDictionary { { "containerid", blog.Id } }); feedManager.Register(blog.Name + " - Comments", "rss", new RouteValueDictionary { { "commentedoncontainer", blog.Id } }); What those two effective calls are doing is to register two feeds: one for the blog itself and one for the comments on the blog. For each call, the name of the feed is provided, then we have the type of feed (“rss”) and some values to be injected into the generic RSS route that will be used later to route the feed to the right providers. This is all you have to do to expose a new feed. If you’re only interested in exposing feeds, you can stop right there. If on the other hand you want to know what happens after that under the hood, carry on. What happens after that is that the feedmanager will take care of formatting the link tag for the feed (see FeedManager.GetRegisteredLinks). The GetRegisteredLinks method itself will be called from a specialized filter, FeedFilter. FeedFilter is an MVC filter and the event we’re interested in hooking into is OnResultExecuting, which happens after the controller action has returned an ActionResult and just before MVC executes that action result. In other words, our feed registration has already been called but the view is not yet rendered. Here’s the code for OnResultExecuting: model.Zones.AddAction("head:after", html => html.ViewContext.Writer.Write( _feedManager.GetRegisteredLinks(html))); This is another piece of code whose execution is differed. It is saying that whenever comes time to render the “head” zone, this code should be called right after. The code itself is rendering the link tags. As a result of all that, here’s what can be found in an Orchard blog’s head section: <link rel="alternate" type="application/rss+xml"     title="Tales from the Evil Empire"     href="/rss?containerid=5" /> <link rel="alternate" type="application/rss+xml"     title="Tales from the Evil Empire - Comments"     href="/rss?commentedoncontainer=5" /> The generic action that these two feeds point to is Index on FeedController. That controller has three important dependencies: an IFeedBuilderProvider, an IFeedQueryProvider and an IFeedItemProvider. Different implementations of these interfaces can provide different formats of feeds, such as RSS and Atom. The Match method enables each of the competing providers to provide a priority for themselves based on arbitrary criteria that can be found on the FeedContext. 
This means that a provider can be selected based not only on the desired format, but also on the nature of the objects being exposed as a feed or on something even more arbitrary such as the destination device (you could imagine for example giving shorter text only excerpts of posts on mobile devices, and full HTML on desktop). The key here is extensibility and dynamic competition and collaboration from unknown and loosely coupled parts. You’ll find this pattern pretty much everywhere in the Orchard architecture. The RssFeedBuilder implementation of IFeedBuilderProvider is also a regular controller with a Process action that builds a RssResult, which is itself a thin ActionResult wrapper around an XDocument. Let’s get back to the FeedController’s Index action. After having called into each known feed builder to get its priority on the currently requested feed, it will select the one with the highest priority. The next thing it needs to do is to actually fetch the data for the feed. This again is a collaborative effort from a priori unknown providers, the implementations of IFeedQueryProvider. There are several implementations by default in Orchard, the choice of which is again done through a Match method. ContainerFeedQuery for example chimes in when a “containerid” parameter is found in the context (see URL in the link tag above): public FeedQueryMatch Match(FeedContext context) { var containerIdValue = context.ValueProvider.GetValue("containerid"); if (containerIdValue == null) return null; return new FeedQueryMatch { FeedQuery = this, Priority = -5 }; } The actual work is done in the Execute method, which finds the right container content item in the Orchard database and adds elements for each of them. In other words, the feed query provider knows how to retrieve the list of content items to add to the feed. The last step is to translate each of the content items into feed entries, which is done by implementations of IFeedItemBuilder. There is no Match method this time. Instead, all providers are called with the collection of items (or more accurately with the FeedContext, but this contains the list of items, which is what’s relevant in most cases). Each provider can then choose to pick those items that it knows how to treat and transform them into the format requested. This enables the construction of heterogeneous feeds that expose content items of various types into a single feed. That will be extremely important when you’ll want to expose a single feed for all your site. So here are feeds in Orchard in a nutshell. The main point here is that there is a fair number of components involved, with some complexity in implementation in order to allow for extreme flexibility, but the part that you use to expose a new feed is extremely simple and light: declare that you want your content exposed as a feed and you’re done. There are cases where you’ll have to dive in and provide new implementations for some or all of the interfaces involved, but that requirement will only arise as needed. For example, you might need to create a new feed item builder to include your custom content type but that effort will be extremely focused on the specialized task at hand. The rest of the system won’t need to change. So what do you think?
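
    Pulling the registration side together, here is a minimal sketch of what a module controller needs in order to expose its own feed, mirroring the IFeedManager calls shown above (the controller name, route value and namespace import are illustrative assumptions):

        // Sketch only: a hypothetical module controller asking Orchard to
        // advertise an RSS feed for a container of content items.
        using System.Web.Mvc;
        using System.Web.Routing;
        using Orchard.Core.Feeds; // assumed location of IFeedManager

        public class EventListController : Controller
        {
            private readonly IFeedManager _feedManager;

            public EventListController(IFeedManager feedManager)
            {
                _feedManager = feedManager;
            }

            public ActionResult Index(int containerId)
            {
                // Same pattern as the blog: feed name, format, and the route
                // values the generic /rss action will use to find the content.
                _feedManager.Register("Upcoming events", "rss",
                    new RouteValueDictionary { { "containerid", containerId } });

                // FeedFilter later injects the matching <link rel="alternate" ...>
                // tags into the head zone when this action's view renders.
                return View();
            }
        }

    Everything downstream (feed builder selection, query providers, item builders) then kicks in as described above, without the module having to know about any of it.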

    Read the article

< Previous Page | 90 91 92 93 94 95 96 97 98 99 100 101  | Next Page >