Search Results

Search found 2698 results on 108 pages for 'steve 72'.


  • SQL - get latest records from table where field is unique

    - by 89stevenharris
    I have a table of data as follows:

        id   status   conversation_id   message_id   date_created
        1    1        1                 72           2012-01-01 00:00:00
        2    2        1                 87           2012-03-03 00:00:00
        3    2        2                 95           2012-05-05 00:00:00

    I want to get all the rows from the table in date_created DESC order, but only one row per conversation_id. So in the case of the example data above, I would want to get the rows with id 2 and 3. Any advice is much appreciated.
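
    A common way to get one latest row per conversation_id is to join the table against a per-conversation MAX(date_created) subquery. A minimal sketch, assuming the table is named messages (the post does not name it) and that date_created is unique within each conversation:

        -- Latest message per conversation, newest conversations first.
        SELECT m.*
        FROM messages m
        JOIN (
            SELECT conversation_id, MAX(date_created) AS max_created
            FROM messages
            GROUP BY conversation_id
        ) latest
          ON  latest.conversation_id = m.conversation_id
          AND latest.max_created     = m.date_created
        ORDER BY m.date_created DESC;

    If two rows in the same conversation could share a date_created, the join would return both; a window function such as ROW_NUMBER() OVER (PARTITION BY conversation_id ORDER BY date_created DESC) is the usual alternative where the database supports it.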


  • 'Unable to read symbols' error

    - by cannyboy
    When I 'Build and Go' on the device, the console shows: warning: Unable to read symbols for ""/Users/Steve/Blue/build/Debug-iphoneos"/Blue.app/Blue" (file not found). warning: Unable to read symbols for ""/Users/Steve/Blue/build/Debug-iphoneos"/Blue.app/Blue" (file not found). Is this something I should worry about? If so, where should I look to find the root of the issue? The app works OK, but I'm just worried that this might be an AppStore approval issue.


  • How can I build pyv8 from source on FreeBSD against the v8 port?

    - by Utkonos
    I am unable to build pyv8 from source on FreeBSD. I have installed the /usr/ports/lang/v8 port, and I'm running into the following error. It seems that pyv8 wants to build v8 itself even though v8 is already built and installed. How can I point pyv8 to the already installed location of v8? # python setup.py build Found Google v8 base on V8_HOME , update it to the latest SVN trunk at running build ==================== INFO: Installing or updating GYP... -------------------- INFO: Check out GYP from SVN ... DEBUG: make dependencies ERROR: Check out GYP from SVN failed: code=2 DEBUG: "Makefile", line 43: Missing dependency operator "Makefile", line 45: Need an operator "Makefile", line 46: Need an operator "Makefile", line 48: Need an operator "Makefile", line 50: Need an operator "Makefile", line 52: Need an operator "Makefile", line 54: Missing dependency operator "Makefile", line 56: Need an operator "Makefile", line 58: Missing dependency operator "Makefile", line 60: Need an operator "Makefile", line 62: Missing dependency operator "Makefile", line 64: Need an operator "Makefile", line 66: Missing dependency operator "Makefile", line 68: Need an operator "Makefile", line 70: Missing dependency operator "Makefile", line 72: Need an operator "Makefile", line 73: Missing dependency operator "Makefile", line 75: Need an operator "Makefile", line 77: Missing dependency operator "Makefile", line 79: Need an operator "Makefile", line 81: Missing dependency operator "Makefile", line 83: Need an operator "Makefile", line 85: Missing dependency operator "Makefile", line 87: Need an operator "Makefile", line 89: Need an operator "Makefile", line 91: Missing dependency operator "Makefile", line 93: Need an operator "Makefile", line 95: Need an operator "Makefile", line 97: Need an operator "Makefile", line 99: Missing dependency operator "Makefile", line 101: Need an operator "Makefile", line 103: Missing dependency operator "Makefile", line 105: Need an operator "Makefile", line 107: Missing dependency operator "Makefile", line 109: Need an operator "Makefile", line 111: Missing dependency operator "Makefile", line 113: Need an operator "Makefile", line 115: Missing dependency operator "Makefile", line 117: Need an operator Error expanding embedded variable. ==================== INFO: Patching the GYP scripts INFO: patch the Google v8 build/standalone.gypi file to enable RTTI and C++ Exceptions ==================== INFO: building Google v8 with GYP for x64 platform with release mode -------------------- INFO: build v8 from SVN ... 
DEBUG: make verifyheap=off component=shared_library visibility=on gdbjit=off liveobjectlist=off regexp=native disassembler=off objectprint=off debuggersupport=on extrachecks=off snapshot=on werror=on x64.release ERROR: build v8 from SVN failed: code=2 DEBUG: "Makefile", line 43: Missing dependency operator "Makefile", line 45: Need an operator "Makefile", line 46: Need an operator "Makefile", line 48: Need an operator "Makefile", line 50: Need an operator "Makefile", line 52: Need an operator "Makefile", line 54: Missing dependency operator "Makefile", line 56: Need an operator "Makefile", line 58: Missing dependency operator "Makefile", line 60: Need an operator "Makefile", line 62: Missing dependency operator "Makefile", line 64: Need an operator "Makefile", line 66: Missing dependency operator "Makefile", line 68: Need an operator "Makefile", line 70: Missing dependency operator "Makefile", line 72: Need an operator "Makefile", line 73: Missing dependency operator "Makefile", line 75: Need an operator "Makefile", line 77: Missing dependency operator "Makefile", line 79: Need an operator "Makefile", line 81: Missing dependency operator "Makefile", line 83: Need an operator "Makefile", line 85: Missing dependency operator "Makefile", line 87: Need an operator "Makefile", line 89: Need an operator "Makefile", line 91: Missing dependency operator "Makefile", line 93: Need an operator "Makefile", line 95: Need an operator "Makefile", line 97: Need an operator "Makefile", line 99: Missing dependency operator "Makefile", line 101: Need an operator "Makefile", line 103: Missing dependency operator "Makefile", line 105: Need an operator "Makefile", line 107: Missing dependency operator "Makefile", line 109: Need an operator "Makefile", line 111: Missing dependency operator "Makefile", line 113: Need an operator "Makefile", line 115: Missing dependency operator "Makefile", line 117: Need an operator Error expanding embedded variable. The files that are installed by the v8 port are the following (in /usr/local): bin/d8 include/v8.h include/v8-debug.h include/v8-preparser.h include/v8-profiler.h include/v8-testing.h include/v8stdint.h lib/libv8.so lib/libv8.so.1
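
    The "Missing dependency operator" / "Need an operator" lines are what BSD make prints when it is fed a GNU-style Makefile, so the failure happens because setup.py is trying to check out and build its own copy of v8 instead of using the port. The build log itself refers to a V8_HOME variable, so a hedged sketch (it is an assumption that this pyv8 revision fully honors V8_HOME; /usr/local is taken from the port's file list above) would be:

        # Point pyv8 at the v8 the port already installed instead of letting
        # setup.py fetch and rebuild v8 from SVN.
        export V8_HOME=/usr/local
        python setup.py build
        python setup.py install

    If setup.py still insists on driving v8's own Makefile, installing devel/gmake and making sure GNU make (gmake) is used for that step should at least get past the BSD-make parse errors.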


  • I want to change DPI with ImageMagick without changing the actual byte-size of the image data

    - by user1694803
    I feel so horribly sorry that I have to ask this question here, but after hours of researching how to do an actually very simple task I'm still failing... In Gimp there is a very simple way to do what I want. I only have the German dialog installed but I'll try to translate it. I'm talking about going to "Picture-PrintingSize" and then adjusting the values "X-Resolution" and "Y-Resolution", which are known to me as so-called DPI values. You can also choose the format, which by default is "Pixel/Inch". (In German the dialog is "Bild-Druckgröße" and there "X-Auflösung" and "Y-Auflösung".) Ok, the values there are often "72" by default. When I change them to e.g. "300" this has the effect that the image stays the same on the computer, but if I print it, it will look smaller, yet all the details are still there, just smaller - it has a higher resolution on the printed paper (but a smaller size... which is fine for me). I am often doing that when I am working with LaTeX, or to be exact with the command "pdflatex" on a recent Ubuntu machine. When I do the above process with Gimp manually, everything works just fine. The images appear smaller in the resulting PDF but with high printing quality. What I am trying to do is to automate the process of going into Gimp and adjusting the DPI values. Since ImageMagick is known to be superb and I have used it for many other tasks, I tried to achieve my goal with this tool. But it just does not do what I want. After trying a lot of things I think this should actually be the command that is my friend: convert input.png -density 300 output.png This should set the DPI to 300, as I can read everywhere on the web. It seems to work. When I check the file it stays the same. file input.png output.png input.png: PNG image data, 611 x 453, 8-bit grayscale, non-interlaced output.png: PNG image data, 611 x 453, 8-bit grayscale, non-interlaced When I use this command, it seems like it did what I wanted: identify -verbose output.png | grep 300 Resolution: 300x300 PNG:pHYs : x_res=300, y_res=300, units=0 (Funnily enough, the same output comes for input.png, which confuses me... so this might be the wrong parameter to watch?) But when I now render my TeX with "pdflatex" the image is still big and blurry. Also, when I open the image with Gimp again the DPI values are set to "72" instead of "300". So there actually was no effect at all. Now what is the problem here? Am I getting something completely wrong? I can't be that wrong, since everything works just fine with Gimp... Thanks for any help with this. I am also open to other automated solutions which are easily done on a Linux system...
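
    The identify output above shows units=0, i.e. the PNG pHYs chunk was written with an unspecified unit, and GIMP falls back to 72 DPI when the unit is missing. If the goal is only to change the stored DPI without resampling, a sketch along these lines may do it - the key is passing -units explicitly:

        # Write an explicit unit together with the density so the PNG stores
        # "pixels per inch" rather than an unspecified unit.
        convert input.png -units PixelsPerInch -density 300 output.png

        # Verify what actually got stored (resolution and its units):
        identify -format "%x x %y %U\n" output.png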


  • unable to sniff traffic despite network interface being in monitor or promiscuous mode

    - by user65126
    I'm trying to sniff out my network's wireless traffic but am having issues. I'm able to put the card in monitor mode, but am unable to see any traffic except broadcasts, multicasts and probe/beacon frames. I have two network interfaces on this laptop. One is connected normally to 'linksys' and the other is in monitor mode. The interface in monitor mode is on the right channel. I'm not associated with the access point because, as I understand, I don't need to if using monitor mode (vs promiscuous). When I try to ping the router ip, I'm not seeing that traffic show up in wireshark. Here's my ifconfig settings: daniel@seasonBlack:~$ ifconfig eth0 Link encap:Ethernet HWaddr 00:1f:29:9e:b2:89 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:16 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:112 errors:0 dropped:0 overruns:0 frame:0 TX packets:112 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:8518 (8.5 KB) TX bytes:8518 (8.5 KB) wlan0 Link encap:Ethernet HWaddr 00:21:00:34:f7:f4 inet addr:192.168.1.116 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::221:ff:fe34:f7f4/64 Scope:Link UP BROADCAST RUNNING MTU:1500 Metric:1 RX packets:9758 errors:0 dropped:0 overruns:0 frame:0 TX packets:4869 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:3291516 (3.2 MB) TX bytes:677386 (677.3 KB) wlan1 Link encap:UNSPEC HWaddr 00-02-72-7B-92-53-33-34-00-00-00-00-00-00-00-00 UP BROADCAST NOTRAILERS PROMISC ALLMULTI MTU:1500 Metric:1 RX packets:112754 errors:0 dropped:0 overruns:0 frame:0 TX packets:101 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:18569124 (18.5 MB) TX bytes:12874 (12.8 KB) wmaster0 Link encap:UNSPEC HWaddr 00-21-00-34-F7-F4-00-00-00-00-00-00-00-00-00-00 UP RUNNING MTU:0 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) wmaster1 Link encap:UNSPEC HWaddr 00-02-72-7B-92-53-00-00-00-00-00-00-00-00-00-00 UP RUNNING MTU:0 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Here's my iwconfig settings: daniel@seasonBlack:~$ iwconfig lo no wireless extensions. eth0 no wireless extensions. wmaster0 no wireless extensions. wlan0 IEEE 802.11bg ESSID:"linksys" Mode:Managed Frequency:2.437 GHz Access Point: 00:18:F8:D6:17:34 Bit Rate=54 Mb/s Tx-Power=27 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off Link Quality=68/70 Signal level=-42 dBm Noise level=-69 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:0 Invalid misc:0 Missed beacon:0 wmaster1 no wireless extensions. wlan1 IEEE 802.11bg Mode:Monitor Frequency:2.437 GHz Tx-Power=27 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off Link Quality:0 Signal level:0 Noise level:0 Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:0 Invalid misc:0 Missed beacon:0 Here's how I know I'm on the right channel: daniel@seasonBlack:~$ iwlist channel lo no frequency information. eth0 no frequency information. wmaster0 no frequency information. 
wlan0 11 channels in total; available frequencies : Channel 01 : 2.412 GHz Channel 02 : 2.417 GHz Channel 03 : 2.422 GHz Channel 04 : 2.427 GHz Channel 05 : 2.432 GHz Channel 06 : 2.437 GHz Channel 07 : 2.442 GHz Channel 08 : 2.447 GHz Channel 09 : 2.452 GHz Channel 10 : 2.457 GHz Channel 11 : 2.462 GHz Current Frequency=2.437 GHz (Channel 6) wmaster1 no frequency information. wlan1 11 channels in total; available frequencies : Channel 01 : 2.412 GHz Channel 02 : 2.417 GHz Channel 03 : 2.422 GHz Channel 04 : 2.427 GHz Channel 05 : 2.432 GHz Channel 06 : 2.437 GHz Channel 07 : 2.442 GHz Channel 08 : 2.447 GHz Channel 09 : 2.452 GHz Channel 10 : 2.457 GHz Channel 11 : 2.462 GHz Current Frequency=2.437 GHz (Channel 6)
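
    One thing worth keeping in mind: in monitor mode the card captures raw 802.11 frames, and on a WPA/WPA2 network the data frames of other stations are encrypted with per-station keys, so only beacons, probes and broadcast/multicast traffic show up readable; Wireshark can decrypt WPA-PSK traffic only if it is given the passphrase and the capture includes that station's 4-way handshake (EAPOL). A small sketch, assuming wlan1 is the monitor-mode interface and the AP is on channel 6 as shown above:

        # Keep the monitor interface on the AP's channel, then capture with the
        # 802.11 link-layer headers visible to see what is really on the air.
        sudo iwconfig wlan1 channel 6
        sudo tcpdump -i wlan1 -e -n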


  • vconfig-created virtual interface and trunking - is the interface untagged or tagged for that VLAN ID?

    - by kce
    I am trying to setup an additional VLAN on our Debian-based router/firewall (which exists as a virtual machine on Hyper-V), our core switch (an HP Procurve 5406) and a remote HP ProCurve 2610 that is connected via a WAN Transparent Lan Service (TLS) link. Let's work backwards from the network edge: The Debian server has an external connection attached to eth0. The internal interface is eth1, which is connected directly from our Hyper-V host to the 5406. The port that eth1 is attached to is setup as Trk12. The 2610 is attached to Trk9 (which trunks a whole slew of VLANs - Trk9 is our TLS head). I can successfully ping the management IP addresses for my VLAN from both switches but I cannot ping, from either switch, the virtual interface for my new VLAN on the Debian-base router and firewall. The existing VLAN works fine. What gives? The port eth1 is attached to is a trunk, the existing VLAN (ID 98) is untagged on the trunk, the new VLAN (ID 198) is tagged. VLAN 198 is tagged on Trk9 on the 5406 and on the 2610. I can ping the other switch's management IP (10.100.198.2 and 10.100.198.3) from the other respective switch. That leg of the VLAN works - however I cannot communicate with eth1.198's 10.100.198.1. I feel like I'm missing something elementary but what it is remains illusive to me. I suspect the issue is with the vconfig created eth1.198. It should pass the tagged VLAN 198 packets correct? But they cannot seem to get any further than the 5406. Communication on the existing VLAN 98 works fine. From the Debian box: eth1: eth1 Link encap:Ethernet HWaddr 00:15:5d:34:5e:03 inet addr:10.100.0.1 Bcast:10.100.255.255 Mask:255.255.0.0 inet6 addr: fe80::215:5dff:fe34:5e03/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:12179786 errors:0 dropped:0 overruns:0 frame:0 TX packets:20210532 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1586498028 (1.4 GiB) TX bytes:26154226278 (24.3 GiB) Interrupt:9 Base address:0xec00 eth1.198: eth1.198 Link encap:Ethernet HWaddr 00:15:5d:34:5e:03 inet addr:10.100.198.1 Bcast:10.100.198.255 Mask:255.255.255.0 inet6 addr: fe80::215:5dff:fe34:5e03/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1496 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:72 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:3528 (3.4 KiB) # cat /proc/net/vlan/eth1.198: eth1.198 VID: 198 REORDER_HDR: 0 dev->priv_flags: 1 total frames received 0 total bytes received 0 Broadcast/Multicast Rcvd 0 total frames transmitted 72 total bytes transmitted 3528 total headroom inc 0 total encap on xmit 39 Device: eth1 INGRESS priority mappings: 0:0 1:0 2:0 3:0 4:0 5:0 6:0 7:0 EGRESS priority mappings: # ip route 10.100.198.0/24 dev eth1.198 proto kernel scope link src 10.100.198.1 206.174.64.0/20 dev eth0 proto kernel scope link src 206.174.66.14 10.100.0.0/16 dev eth1 proto kernel scope link src 10.100.0.1 default via 206.174.64.1 dev eth0 # iptables -L -v Chain INPUT (policy DROP 6875 packets, 637K bytes) pkts bytes target prot opt in out source destination 41 4320 ACCEPT all -- lo any anywhere anywhere 11481 1560K ACCEPT all -- any any anywhere anywhere state RELATED,ESTABLISHED 107 8058 ACCEPT icmp -- any any anywhere anywhere 0 0 ACCEPT tcp -- eth1 any 10.100.0.0/24 anywhere tcp dpt:ssh 701 317K ACCEPT udp -- eth1 any anywhere anywhere udp dpts:bootps:bootpc Chain FORWARD (policy DROP 1 packets, 40 bytes) pkts bytes target prot opt in out source destination 156K 25M ACCEPT 
all -- eth1 any anywhere anywhere 215K 248M ACCEPT all -- eth0 eth1 anywhere anywhere state RELATED,ESTABLISHED 0 0 ACCEPT all -- eth1.198 any anywhere anywhere 0 0 ACCEPT all -- eth0 eth1.198 anywhere anywhere state RELATED,ESTABLISHED Chain OUTPUT (policy ACCEPT 13048 packets, 1640K bytes) pkts bytes target prot opt in out source destination From the 5406: # show vlan ports trk12 detail Status and Counters - VLAN Information - for ports Trk12 VLAN ID Name | Status Voice Jumbo Mode ------- -------------------- + ---------- ----- ----- -------- 98 WIFI | Port-based No No Untagged 198 VLAN198 | Port-based No No Tagged
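
    Since the untagged VLAN 98 works but the tagged VLAN 198 does not, a useful first check is whether 802.1Q frames tagged 198 ever reach eth1 at all; Hyper-V virtual switches strip VLAN tags unless the virtual machine's network adapter is explicitly put into trunk mode, so the tag may be dropped before it reaches the Debian guest. A quick capture sketch on the router (the "vlan 198" pcap filter matches tagged frames):

        # On the Debian guest: do any frames tagged with VLAN 198 arrive on eth1?
        tcpdump -i eth1 -e -n vlan 198

        # And does anything make it up to the VLAN interface itself?
        tcpdump -i eth1.198 -e -n icmp

    If nothing tagged shows up on eth1 while the switch side looks correct, the Hyper-V virtual switch port is the likely place the tags are being stripped (on Server 2012 and later that is configured with Set-VMNetworkAdapterVlan in trunk mode; older Hyper-V versions only expose a single access VLAN per adapter in the GUI).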


  • Too many BIND "query (cache) denied" log entries - DNS attack?

    - by Jake
    Once Bind crashed and I did: tail -f /var/log/messages I see a massive number of logs every second. Is this a DNS attack? or is there something wrong? Sometimes I see a domain in logs like this: dOmAin.com (upper and lower). As you see there is only one single domain in the logs with different IPs Oct 10 02:21:26 mail named[20831]: client 74.125.189.18#38921: query (cache) 'ns1.domain2.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 192.221.144.171#38833: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 74.125.189.17#42428: query (cache) 'ns2.domain2.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 192.221.146.27#37899: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 193.203.82.66#39263: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 8.0.16.170#59723: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 80.169.197.66#32903: query (cache) 'dOmAin.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 134.58.60.1#47558: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 192.221.146.34#47387: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 8.0.16.8#59392: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 74.125.189.19#64395: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 217.72.163.3#42190: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 83.146.21.252#22020: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 192.221.146.116#57342: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 193.203.82.66#52020: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 8.0.16.72#64317: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 80.169.197.66#31989: query (cache) 'dOmAin.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 74.125.189.18#47436: query (cache) 'ns2.domain2.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 74.125.189.16#44005: query (cache) 'ns1.domain2.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 85.132.31.10#50379: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 94.241.128.3#60106: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 85.132.31.10#59118: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 212.95.135.78#27811: query (cache) 'domain.com/A/IN' denied /etc/resolv.conf ; generated by /sbin/dhclient-script nameserver 4.2.2.4 nameserver 8.8.4.4 Bind config: // generated by named-bootconf.pl options { directory "/var/named"; /* * If there is a firewall between you and nameservers you want * to talk to, you might need to uncomment the query-source * directive below. Previous versions of BIND always asked * questions using port 53, but BIND 8.1 uses an unprivileged * port by default. */ // query-source address * port 53; allow-transfer { none; }; allow-recursion { localnets; }; //listen-on-v6 { any; }; notify no; }; // // a caching only nameserver config // controls { inet 127.0.0.1 allow { localhost; } keys { rndckey; }; }; zone "." 
IN { type hint; file "named.ca"; }; zone "localhost" IN { type master; file "localhost.zone"; allow-update { none; }; }; zone "0.0.127.in-addr.arpa" IN { type master; file "named.local"; allow-update { none; }; };
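
    Those log lines are BIND doing what this config asks: allow-recursion is limited to localnets, so recursive/cache queries arriving from outside are refused and logged. The pattern of a single domain queried from many unrelated source addresses looks like spoofed-source probing or an attempted reflection rather than legitimate traffic. A hedged sanity check, run from some host outside the network (the address below is a placeholder for this server's public IP):

        # A closed resolver should answer REFUSED / "recursion not available":
        dig @203.0.113.10 example.org A +time=3 +tries=1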


  • ISP 5 Device Limit ... again

    - by Tommo
    Sorry for the delay in responding to the suggestions that were posted in my first question (ISP 5 Device Limit - double NAT the solution?). I've been travelling and have not been able to try anything. Below is what I've tried and where I have not been successful. Any more help gratefully appreciated. I figure I need to give a more comprehensive overview of what I've got and how it's set up. First of all - I am using all Apple products here. I am iMac, iPad, iPhone, Apple TV, Airport Express and Time Capsule. I used to like the way that it 'just worked'. Now I find that it requires a bit of encouragement before it 'just works'. So, as I stated in my original question; my ISP has a router in my building that is limiting me to 5 devices. I am hard wired into this router and I can neither access it physically nor logically (they won't let me access it). Also, I only appear to be able to connect to it through the LAN ports on my Time Capsule. Any device I connect appears to be on a rolling IP list with the following settings: Router 91.72.80.1 Devices then get assigned IPv4 addresses in the range (as far as I can see) from 91.72.80.2 onwards. SubNet Mask 255.255.255.0 DNS Servers 213.132.63.25, 80.227.2.4 I have my Time Capsule / Router in Bridge-Mode which means I am limited to the 5 devices and cannot use Guest Networks etc. What I've tried today. Static IPs: On all devices, I went from DHCP to Static and put in the same information when they had connected using DHCP. Somewhat surprisingly this did not work. None of the devices enjoyed any connection to the router and certainly no internet connection. Intentional Double-NAT - Time Capsule to 'DHCP and NAT': By selecting DHCP and NAT on my Router I was able to connect devices to my Time Capsule in the range 10.0.1.2 to 10.0.1.200. This offered no internet connectivity and didn't really help the situation. In this mode, however, I was able to force the devices - individually and laboriously - to look for the Router and previously listed DNSs by inputting the numbers from 'Bridge-mode' into the STATIC settings and then resetting the connection. The Router then appeared to assign a distinct IP address to the device and it worked on the network. I had this working for more than 5 devices. However, this is not a great solution because as soon as one of the mobile devices left the building it needed repointing to the Router. The connections were also not very stable. Especially when trying to hold onto a VPN. Spoofing a few MAC addresses: I'm afraid I don't really know what this would achieve, nor how to do it on an Apple device… So … I'm almost back at Square One. I have had to withdraw to the Bridge-Mode position again with the 5 device limit to see if there's a better course of action to follow. ANY help would be much appreciated. I am positive that I cannot be the only one suffering under this 5 device limit!


  • HDMI video connection cuts top and bottom borders of screen

    - by Luis Alvarado
    Ok, this is an extension of another problem I had with a VGA connection and an Nvidia GeForce GT 440 card. Here goes the explanation of this particular problem: I have a Soneview 32" TV. This TV has many connections, including VGA (first reason I bought it), HDMI (second reason, but I did not have an HDMI cable at that time) and DVI. I have had this TV for a little over a month now; actually I got it to celebrate the release of Ubuntu 11.10 and started using it exactly on that date (I know, too much of a fan, but hey, I like geek stuff). I started using it with the VGA cable. After 2 weeks I bought an Nvidia GT440 card. The previous 9500GT was working correctly with no problems whatsoever. I installed the GT440 and the first problem that I encountered using this latest card is mentioned here: Nvidia GT 440 black screen problem when loading lightdm greeter. The solution to that problem was to disconnect and then reconnect the VGA cable. This would result in the screen showing me the lightdm screen for my login. If I did not disconnect and reconnect the cable I could be there forever thinking that there is no video signal. I got tired of looking for answers that did not work and for solutions that made me literally have to install Ubuntu again. I just went and bought an HDMI cable and swapped the VGA one for it. It worked and I did not have to disconnect/reconnect the cable, but now I have this problem at any resolution. My normal resolution is 1920x1080 (this TV is 1080 HD), so over VGA I could use this resolution with no problem, but over HDMI I am getting the borders cut off. Here is a pic: As you can see from the pic, the Launcher icons only show less than 50% of their width. Forget about the top and bottom parts; I can access them with the mouse but I cannot see them on the screen. It is as if they are outside of the TV's view. Basically around 20 to 30 pixels are gone from all sides. I searched around and came to running xrandr --verbose to see what it could detect from the TV.
I got this: cyrex@cyrex:~$ xrandr --verbose xrandr: Failed to get size of gamma for output default Screen 0: minimum 320 x 175, current 1920 x 1080, maximum 1920 x 1080 default connected 1920x1080+0+0 (0x164) normal (normal) 0mm x 0mm Identifier: 0x163 Timestamp: 465485 Subpixel: unknown Clones: CRTC: 0 CRTCs: 0 Transform: 1.000000 0.000000 0.000000 0.000000 1.000000 0.000000 0.000000 0.000000 1.000000 filter: 1920x1080 (0x164) 103.7MHz *current h: width 1920 start 0 end 0 total 1920 skew 0 clock 54.0KHz v: height 1080 start 0 end 0 total 1080 clock 50.0Hz 1920x1080 (0x165) 105.8MHz h: width 1920 start 0 end 0 total 1920 skew 0 clock 55.1KHz v: height 1080 start 0 end 0 total 1080 clock 51.0Hz 1920x1080 (0x166) 107.8MHz h: width 1920 start 0 end 0 total 1920 skew 0 clock 56.2KHz v: height 1080 start 0 end 0 total 1080 clock 52.0Hz 1920x1080 (0x167) 109.9MHz h: width 1920 start 0 end 0 total 1920 skew 0 clock 57.2KHz v: height 1080 start 0 end 0 total 1080 clock 53.0Hz 1920x1080 (0x168) 112.0MHz h: width 1920 start 0 end 0 total 1920 skew 0 clock 58.3KHz v: height 1080 start 0 end 0 total 1080 clock 54.0Hz 1920x1080 (0x169) 114.0MHz h: width 1920 start 0 end 0 total 1920 skew 0 clock 59.4KHz v: height 1080 start 0 end 0 total 1080 clock 55.0Hz 1680x1050 (0x16a) 98.8MHz h: width 1680 start 0 end 0 total 1680 skew 0 clock 58.8KHz v: height 1050 start 0 end 0 total 1050 clock 56.0Hz 1680x1050 (0x16b) 100.5MHz h: width 1680 start 0 end 0 total 1680 skew 0 clock 59.9KHz v: height 1050 start 0 end 0 total 1050 clock 57.0Hz 1600x1024 (0x16c) 95.0MHz h: width 1600 start 0 end 0 total 1600 skew 0 clock 59.4KHz v: height 1024 start 0 end 0 total 1024 clock 58.0Hz 1440x900 (0x16d) 76.5MHz h: width 1440 start 0 end 0 total 1440 skew 0 clock 53.1KHz v: height 900 start 0 end 0 total 900 clock 59.0Hz 1360x768 (0x171) 65.8MHz h: width 1360 start 0 end 0 total 1360 skew 0 clock 48.4KHz v: height 768 start 0 end 0 total 768 clock 63.0Hz 1360x768 (0x172) 66.8MHz h: width 1360 start 0 end 0 total 1360 skew 0 clock 49.2KHz v: height 768 start 0 end 0 total 768 clock 64.0Hz 1280x1024 (0x173) 85.2MHz h: width 1280 start 0 end 0 total 1280 skew 0 clock 66.6KHz v: height 1024 start 0 end 0 total 1024 clock 65.0Hz 1280x960 (0x176) 83.6MHz h: width 1280 start 0 end 0 total 1280 skew 0 clock 65.3KHz v: height 960 start 0 end 0 total 960 clock 68.0Hz 1280x960 (0x177) 84.8MHz h: width 1280 start 0 end 0 total 1280 skew 0 clock 66.2KHz v: height 960 start 0 end 0 total 960 clock 69.0Hz 1280x720 (0x178) 64.5MHz h: width 1280 start 0 end 0 total 1280 skew 0 clock 50.4KHz v: height 720 start 0 end 0 total 720 clock 70.0Hz 1280x720 (0x179) 65.4MHz h: width 1280 start 0 end 0 total 1280 skew 0 clock 51.1KHz v: height 720 start 0 end 0 total 720 clock 71.0Hz 1280x720 (0x17a) 66.4MHz h: width 1280 start 0 end 0 total 1280 skew 0 clock 51.8KHz v: height 720 start 0 end 0 total 720 clock 72.0Hz 1152x864 (0x17b) 72.7MHz h: width 1152 start 0 end 0 total 1152 skew 0 clock 63.1KHz v: height 864 start 0 end 0 total 864 clock 73.0Hz 1152x864 (0x17c) 73.7MHz h: width 1152 start 0 end 0 total 1152 skew 0 clock 63.9KHz v: height 864 start 0 end 0 total 864 clock 74.0Hz ....Many Resolutions later... 
320x200 (0x1d1) 10.2MHz h: width 320 start 0 end 0 total 320 skew 0 clock 31.8KHz v: height 200 start 0 end 0 total 200 clock 159.0Hz 320x175 (0x1d2) 9.0MHz h: width 320 start 0 end 0 total 320 skew 0 clock 28.0KHz v: height 175 start 0 end 0 total 175 clock 160.0Hz 1920x1080 (0x1dd) 333.8MHz h: width 1920 start 0 end 0 total 1920 skew 0 clock 173.9KHz v: height 1080 start 0 end 0 total 1080 clock 161.0Hz If it helps, the Refresh Rate at 1920x1080 is 60. There is a flickering effect at this resolution using HDMI but not VGA which I imagine is related to the borders cut off issue am asking here. I have also done the following but this will only solve the problem on lower resolutions than 1920x1080 or on others TV (My father has a Sony TV where this problem is also solved): NVIDIA WAY Go to Nvidia-Settings and there will be an option that will have more features if a HDMI cable is connected. In the next pic the option is DFP-1 (CNDLCD) but this name changes depending on what device the PC is connected to: Uncheck Force Full GPU Scaling What this will do for resolutions LOWER than 1920x1080 (At least in my case) is solve the flickering problem and fix the borders cut by the monitor. Save to Xorg.conf file the changes made after changing to a resolution acceptable to your eyes. TV WAY If you TV has OSD Menu and this menu has options for scanning the screen resolution or auto adjusting to it, disable them. Specifically the option about SCAN. If you have an option for AV Mode disable it. Basically disable any option that needs to scan and scale the resolution. Test one by one. In the case of my father's TV this did it. In my case, the Nvidia solved it for lower resolutions. NOTE: In the case this is not solved in the next couple of weeks I will add this as the answer but take into consideration that the issue is still active with 1920x1080 resolutions.


  • Chrome and Firefox not able to find some sites after Ubuntu's recent update?

    - by gkr
    Loading of revision3.com in Chrome stops and status saying "waiting for static.inplay.tubemogul.com" grooveshark.com in Chrome never loads But wikipedia.org, google.com works just normal same behavior in Firefox too. I use wired DSL connection in Ubuntu 12.04. I guess this started happening after I upgraded the Chrome browser or Flash plugin few days ago from Ubuntu updates through update-manager. Thanks EDIT: Everything works normal in my WinXP. Problem is only in Ubuntu 12.04. This is what I get when I use wget gkr@gkr-desktop:~$ wget revision3.com --2012-06-12 08:58:01-- http://revision3.com/ Resolving revision3.com (revision3.com)... 173.192.117.198 Connecting to revision3.com (revision3.com)|173.192.117.198|:80... connected. HTTP request sent, awaiting response... 200 OK Length: unspecified [text/html] Saving to: `index.html' [ ] 81,046 32.0K/s in 2.5s 2012-06-12 08:58:04 (32.0 KB/s) - `index.html' saved [81046] gkr@gkr-desktop:~$ wget static.inplay.tubemogul.com --2012-06-12 08:51:25-- http://static.inplay.tubemogul.com/ Resolving static.inplay.tubemogul.com (static.inplay.tubemogul.com)... 72.21.81.253 Connecting to static.inplay.tubemogul.com (static.inplay.tubemogul.com)|72.21.81.253|:80... connected. HTTP request sent, awaiting response... 404 Not Found 2012-06-12 08:51:25 ERROR 404: Not Found. grooveshark.com nevers responses so I have to Ctrl+C to terminate wget. gkr@gkr-desktop:~$ wget grooveshark.com --2012-06-12 08:51:33-- http://grooveshark.com/ Resolving grooveshark.com (grooveshark.com)... 8.20.213.76 Connecting to grooveshark.com (grooveshark.com)|8.20.213.76|:80... connected. HTTP request sent, awaiting response... ^C gkr@gkr-desktop:~$ This is the apt term.log of updates I mentioned Log started: 2012-06-09 04:51:45 (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 161392 files and directories currently installed.) Preparing to replace google-chrome-stable 19.0.1084.52-r138391 (using .../google-chrome-stable_19.0.1084.56-r140965_i386.deb) ... Unpacking replacement google-chrome-stable ... Preparing to replace libpulse0 1:1.1-0ubuntu15 (using .../libpulse0_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement libpulse0 ... Preparing to replace libpulse-mainloop-glib0 1:1.1-0ubuntu15 (using .../libpulse-mainloop-glib0_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement libpulse-mainloop-glib0 ... Preparing to replace libpulsedsp 1:1.1-0ubuntu15 (using .../libpulsedsp_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement libpulsedsp ... Preparing to replace sudo 1.8.3p1-1ubuntu3.2 (using .../sudo_1.8.3p1-1ubuntu3.3_i386.deb) ... Unpacking replacement sudo ... Preparing to replace flashplugin-installer 11.2.202.235ubuntu0.12.04.1 (using .../flashplugin-installer_11.2.202.236ubuntu0.12.04.1_i386.deb) ... Unpacking replacement flashplugin-installer ... Preparing to replace pulseaudio-utils 1:1.1-0ubuntu15 (using .../pulseaudio-utils_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement pulseaudio-utils ... 
Preparing to replace pulseaudio 1:1.1-0ubuntu15 (using .../pulseaudio_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement pulseaudio ... Preparing to replace pulseaudio-module-gconf 1:1.1-0ubuntu15 (using .../pulseaudio-module-gconf_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement pulseaudio-module-gconf ... Preparing to replace pulseaudio-module-x11 1:1.1-0ubuntu15 (using .../pulseaudio-module-x11_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement pulseaudio-module-x11 ... Preparing to replace shared-mime-info 1.0-0ubuntu4 (using .../shared-mime-info_1.0-0ubuntu4.1_i386.deb) ... Unpacking replacement shared-mime-info ... Preparing to replace pulseaudio-module-bluetooth 1:1.1-0ubuntu15 (using .../pulseaudio-module-bluetooth_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement pulseaudio-module-bluetooth ... Processing triggers for menu ... /usr/share/menu/downverter.menu: 1: /usr/share/menu/downverter.menu: Syntax error: word unexpected (expecting ")") Processing triggers for man-db ... locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Processing triggers for desktop-file-utils ... Processing triggers for gnome-menus ... Processing triggers for ureadahead ... ureadahead will be reprofiled on next reboot Processing triggers for update-notifier-common ... flashplugin-installer: downloading http://archive.canonical.com/pool/partner/a/adobe-flashplugin/adobe-flashplugin_11.2.202.236.orig.tar.gz Installing from local file /tmp/tmpNrBt4g.gz Flash Plugin installed. Processing triggers for doc-base ... Processing 1 changed doc-base file... Registering documents with scrollkeeper... Setting up google-chrome-stable (19.0.1084.56-r140965) ... Setting up libpulse0 (1:1.1-0ubuntu15.1) ... Setting up libpulse-mainloop-glib0 (1:1.1-0ubuntu15.1) ... Setting up libpulsedsp (1:1.1-0ubuntu15.1) ... Setting up sudo (1.8.3p1-1ubuntu3.3) ... Installing new version of config file /etc/pam.d/sudo ... Setting up flashplugin-installer (11.2.202.236ubuntu0.12.04.1) ... locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Setting up pulseaudio-utils (1:1.1-0ubuntu15.1) ... Setting up pulseaudio (1:1.1-0ubuntu15.1) ... Setting up pulseaudio-module-gconf (1:1.1-0ubuntu15.1) ... Setting up pulseaudio-module-x11 (1:1.1-0ubuntu15.1) ... Setting up shared-mime-info (1.0-0ubuntu4.1) ... Setting up pulseaudio-module-bluetooth (1:1.1-0ubuntu15.1) ... Processing triggers for menu ... /usr/share/menu/downverter.menu: 1: /usr/share/menu/downverter.menu: Syntax error: word unexpected (expecting ")") Processing triggers for libc-bin ... 
ldconfig deferred processing now taking place Log ended: 2012-06-09 04:53:32 This is from history.log Start-Date: 2012-06-09 04:51:45 Commandline: aptdaemon role='role-commit-packages' sender=':1.56' Upgrade: shared-mime-info:i386 (1.0-0ubuntu4, 1.0-0ubuntu4.1), pulseaudio:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), libpulse-mainloop-glib0:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), sudo:i386 (1.8.3p1-1ubuntu3.2, 1.8.3p1-1ubuntu3.3), pulseaudio-module-bluetooth:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), pulseaudio-module-x11:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), flashplugin-installer:i386 (11.2.202.235ubuntu0.12.04.1, 11.2.202.236ubuntu0.12.04.1), pulseaudio-utils:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), pulseaudio-module-gconf:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), libpulse0:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), libpulsedsp:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), google-chrome-stable:i386 (19.0.1084.52-r138391, 19.0.1084.56-r140965) End-Date: 2012-06-09 04:53:32


  • Dependency Injection with Spring/Junit/JPA

    - by Steve
    I'm trying to create JUnit tests for my JPA DAO classes, using Spring 2.5.6 and JUnit 4.8.1. My test case looks like this: @RunWith(SpringJUnit4ClassRunner.class) @ContextConfiguration(locations={"classpath:config/jpaDaoTestsConfig.xml"} ) public class MenuItem_Junit4_JPATest extends BaseJPATestCase { private ApplicationContext context; private InputStream dataInputStream; private IDataSet dataSet; @Resource private IMenuItemDao menuItemDao; @Test public void testFindAll() throws Exception { assertEquals(272, menuItemDao.findAll().size()); } ... Other test methods ommitted for brevity ... } I have the following in my jpaDaoTestsConfig.xml: <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p="http://www.springframework.org/schema/p" xmlns:tx="http://www.springframework.org/schema/tx" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd"> <!-- uses the persistence unit defined in the META-INF/persistence.xml JPA configuration file --> <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalEntityManagerFactoryBean"> <property name="persistenceUnitName" value="CONOPS_PU" /> </bean> <bean id="groupDao" class="mil.navy.ndms.conops.common.dao.impl.jpa.GroupDao" lazy-init="true" /> <bean id="permissionDao" class="mil.navy.ndms.conops.common.dao.impl.jpa.PermissionDao" lazy-init="true" /> <bean id="applicationUserDao" class="mil.navy.ndms.conops.common.dao.impl.jpa.ApplicationUserDao" lazy-init="true" /> <bean id="conopsUserDao" class="mil.navy.ndms.conops.common.dao.impl.jpa.ConopsUserDao" lazy-init="true" /> <bean id="menuItemDao" class="mil.navy.ndms.conops.common.dao.impl.jpa.MenuItemDao" lazy-init="true" /> <!-- enables interpretation of the @Required annotation to ensure that dependency injection actually occures --> <bean class="org.springframework.beans.factory.annotation.RequiredAnnotationBeanPostProcessor"/> <!-- enables interpretation of the @PersistenceUnit/@PersistenceContext annotations providing convenient access to EntityManagerFactory/EntityManager --> <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor"/> <!-- transaction manager for use with a single JPA EntityManagerFactory for transactional data access to a single datasource --> <bean id="jpaTransactionManager" class="org.springframework.orm.jpa.JpaTransactionManager"> <property name="entityManagerFactory" ref="entityManagerFactory"/> </bean> <!-- enables interpretation of the @Transactional annotation for declerative transaction managment using the specified JpaTransactionManager --> <tx:annotation-driven transaction-manager="jpaTransactionManager" proxy-target-class="false"/> </beans> Now, when I try to run this, I get the following: SEVERE: Caught exception while allowing TestExecutionListener [org.springframework.test.context.support.DependencyInjectionTestExecutionListener@fa60fa6] to prepare test instance [null(mil.navy.ndms.conops.common.dao.impl.MenuItem_Junit4_JPATest)] org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'mil.navy.ndms.conops.common.dao.impl.MenuItem_Junit4_JPATest': Injection of resource fields failed; nested exception is java.lang.IllegalStateException: Specified field type [interface javax.persistence.EntityManagerFactory] is incompatible 
with resource type [javax.persistence.EntityManager] at org.springframework.context.annotation.CommonAnnotationBeanPostProcessor.postProcessAfterInstantiation(CommonAnnotationBeanPostProcessor.java:292) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:959) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireBeanProperties(AbstractAutowireCapableBeanFactory.java:329) at org.springframework.test.context.support.DependencyInjectionTestExecutionListener.injectDependencies(DependencyInjectionTestExecutionListener.java:110) at org.springframework.test.context.support.DependencyInjectionTestExecutionListener.prepareTestInstance(DependencyInjectionTestExecutionListener.java:75) at org.springframework.test.context.TestContextManager.prepareTestInstance(TestContextManager.java:255) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.createTest(SpringJUnit4ClassRunner.java:93) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.invokeTestMethod(SpringJUnit4ClassRunner.java:130) at org.junit.internal.runners.JUnit4ClassRunner.runMethods(JUnit4ClassRunner.java:61) at org.junit.internal.runners.JUnit4ClassRunner$1.run(JUnit4ClassRunner.java:54) at org.junit.internal.runners.ClassRoadie.runUnprotected(ClassRoadie.java:34) at org.junit.internal.runners.ClassRoadie.runProtected(ClassRoadie.java:44) at org.junit.internal.runners.JUnit4ClassRunner.run(JUnit4ClassRunner.java:52) at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:45) at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:460) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:673) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:386) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196) Caused by: java.lang.IllegalStateException: Specified field type [interface javax.persistence.EntityManagerFactory] is incompatible with resource type [javax.persistence.EntityManager] at org.springframework.beans.factory.annotation.InjectionMetadata$InjectedElement.checkResourceType(InjectionMetadata.java:159) at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor$PersistenceElement.(PersistenceAnnotationBeanPostProcessor.java:559) at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor$1.doWith(PersistenceAnnotationBeanPostProcessor.java:359) at org.springframework.util.ReflectionUtils.doWithFields(ReflectionUtils.java:492) at org.springframework.util.ReflectionUtils.doWithFields(ReflectionUtils.java:469) at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor.findPersistenceMetadata(PersistenceAnnotationBeanPostProcessor.java:351) at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor.postProcessMergedBeanDefinition(PersistenceAnnotationBeanPostProcessor.java:296) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyMergedBeanDefinitionPostProcessors(AbstractAutowireCapableBeanFactory.java:745) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:448) at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$1.run(AbstractAutowireCapableBeanFactory.java:409) at java.security.AccessController.doPrivileged(AccessController.java:219) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:380) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:264) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:221) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:261) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:185) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:168) at org.springframework.context.annotation.CommonAnnotationBeanPostProcessor.autowireResource(CommonAnnotationBeanPostProcessor.java:435) at org.springframework.context.annotation.CommonAnnotationBeanPostProcessor.getResource(CommonAnnotationBeanPostProcessor.java:409) at org.springframework.context.annotation.CommonAnnotationBeanPostProcessor$ResourceElement.getResourceToInject(CommonAnnotationBeanPostProcessor.java:537) at org.springframework.beans.factory.annotation.InjectionMetadata$InjectedElement.inject(InjectionMetadata.java:180) at org.springframework.beans.factory.annotation.InjectionMetadata.injectFields(InjectionMetadata.java:105) at org.springframework.context.annotation.CommonAnnotationBeanPostProcessor.postProcessAfterInstantiation(CommonAnnotationBeanPostProcessor.java:289) ... 18 more It seems to be telling me that its attempting to store an EntityManager object into an EntityManagerFactory field, but I don't understand how or why. My DAO classes accept both an EntityManager and EntityManagerFactory via the @PersistenceContext attribute, and they work find if I load them up and run them without the @ContextConfiguration attribute (i.e. if I just use the XmlApplcationContext to load the DAO and the EntityManagerFactory directly in setUp ()). Any insights would be appreciated. Thanks. --Steve
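
    The "Specified field type [interface javax.persistence.EntityManagerFactory] is incompatible with resource type [javax.persistence.EntityManager]" message is what Spring's PersistenceAnnotationBeanPostProcessor raises when a field marked for persistence-context injection is declared as an EntityManagerFactory: @PersistenceContext injects an EntityManager, while @PersistenceUnit is the annotation that injects an EntityManagerFactory. A minimal sketch of the convention (class and field names are illustrative; the unit name is the one from the post's persistence configuration):

        import javax.persistence.EntityManager;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.PersistenceContext;
        import javax.persistence.PersistenceUnit;

        public class MenuItemDao {

            // @PersistenceContext injects a container-managed EntityManager,
            // so the field must be typed as EntityManager.
            @PersistenceContext(unitName = "CONOPS_PU")
            private EntityManager entityManager;

            // To hold the factory itself, the matching annotation is
            // @PersistenceUnit, not @PersistenceContext.
            @PersistenceUnit(unitName = "CONOPS_PU")
            private EntityManagerFactory entityManagerFactory;
        }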


  • Silverlight/Web Service Serializing Interface for use Client Side

    - by Steve Brouillard
    I have a Silverlight solution that references a third-party web service. This web service generates XML, which is then processed into objects for use in Silverlight binding. At one point the processing of XML to objects was done client-side, but we ran into performance issues and decided to move this processing to the proxies in the hosting web project to improve performance (which it did). This is obviously a gross over-simplification, but should work. My basic project structure looks like this. Solution Solution.Web - Holds the web page that hosts Silverlight, as well as proxies that access web services and process as required, and obviously the references to those web services. Solution.Infrastructure - Holds references to the proxy web services in the .Web project, all genned code from serialized objects from those proxies and code around those objects that needs to be client-side. Solution.Book - The particular project that uses the objects in question after they are processed down into Infrastructure. I've defined the following interface and class in the Web project. They represent the type of objects that the XML from the original third-party service gets transformed into, and since this is the only project in the Silverlight app that is actually server-side, that was the place to define and use them. //Doesn't get much simpler than this. public interface INavigable { string Description { get; set; } } //Very simple class too public class IndexEntry : INavigable { public List<IndexCM> CMItems { get; set; } public string CPTCode { get; set; } public string DefinitionOfAbbreviations { get; set; } public string Description { get; set; } public string EtiologyCode { get; set; } public bool HighScore { get; set; } public IndexToTabularCommandArguments IndexToTabularCommandArgument { get; set; } public bool IsExpanded { get; set; } public string ManifestationCode { get; set; } public string MorphologyCode { get; set; } public List<TextItem> NonEssentialModifiersAndQualifyingText { get; set; } public string OtherItalics { get; set; } public IndexEntry Parent { get; set; } public int Score { get; set; } public string SeeAlsoReference { get; set; } public string SeeReference { get; set; } public List<IndexEntry> SubEntries { get; set; } public int Words { get; set; } } Again, both of these items are defined in the Web project. Notice that IndexEntry implements INavigable. When the code for IndexEntry is auto-genned in the Infrastructure project, the definition of the class does not include the implementation of INavigable. After discovering this, I thought "no problem, I'll create another partial class file reiterating the implementation". Unfortunately (I'm guessing because it isn't being serialized), that interface isn't recognized in the Infrastructure project, so I can't simply do that. Here's where it gets really weird. The BOOK project CAN see the INavigable interface. In fact I use it in Book, though Book has no reference to the web service in the Web project where the thing is defined, though Infrastructure does. Just as a test, I linked to the INavigable source file from inside the Infrastructure project. That allowed me to reference it in that project and compile, but it causes havoc in the Book project, because now there's a conflict between the one defined in Infrastructure and the one defined in the Web project's web service. This is behavior I would expect. So, to try and sum up a bit.
    The Web project has a web service that processes data from a third-party service, and has a class and an interface defined in it. The class implements the interface. The Infrastructure project references the web service in the Web project, and the Book project references the Infrastructure project. The implementation of the interface in the class does NOT serialize down, so the auto-genned code in Infrastructure does not show this relationship, breaking code further down-stream. The Book project, which is further down-stream, CAN see the interface as defined in the Web project, even though its only reference is through the Infrastructure project, which CAN'T see it. Am I simply missing something easy here? Can I apply an attribute to either the interface definition or to its implementation in the class to ensure its visibility downstream? Anything else I can do here? I know this is a bit convoluted; to anyone still with me here, thanks for your patience and any advice you might have. Cheers, Steve
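
    Interface implementations are not part of the serialized data contract, so the proxy generator only reproduces the members it can see in the WSDL/schema; the generated IndexEntry will never carry INavigable on its own. A common pattern is to move INavigable into a Silverlight-compatible class library referenced by Web, Infrastructure and Book, and to re-attach it to the generated type through a partial class in Infrastructure (generated proxy classes are partial). A sketch with invented namespace names - they would have to match the shared library and the generated proxy's actual namespaces:

        // In the shared class library referenced by Web, Infrastructure and Book:
        namespace Conops.Shared
        {
            public interface INavigable
            {
                string Description { get; set; }
            }
        }

        // In the Infrastructure project, in the same namespace as the generated
        // proxy type (placeholder name below):
        namespace Solution.Infrastructure.WebProxy
        {
            // The generated half of this partial class already declares the
            // Description property, which is what satisfies the interface.
            public partial class IndexEntry : Conops.Shared.INavigable
            {
            }
        }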


  • Linux router: ping doesn't route back

    - by El Barto
    I have a Debian box which I'm trying to set up as a router and an Ubuntu box which I'm using as a client. My problem is that when the Ubuntu client tries to ping a server on the Internet, all the packets are lost (though, as you can see below, they seem to go to the server and back without problem). I'm doing this in the Ubuntu Box: # ping -I eth1 my.remote-server.com PING my.remote-server.com (X.X.X.X) from 10.1.1.12 eth1: 56(84) bytes of data. ^C --- my.remote-server.com ping statistics --- 13 packets transmitted, 0 received, 100% packet loss, time 12094ms (I changed the name and IP of the remote server for privacy). From the Debian Router I see this: # tcpdump -i eth1 -qtln icmp tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 305, seq 7, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 305, seq 8, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 305, seq 8, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 305, seq 9, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 305, seq 9, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 305, seq 10, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 305, seq 10, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 305, seq 11, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 305, seq 11, length 64 ^C 9 packets captured 9 packets received by filter 0 packets dropped by kernel # tcpdump -i eth2 -qtln icmp tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth2, link-type EN10MB (Ethernet), capture size 65535 bytes IP 192.168.1.10 > X.X.X.X: ICMP echo request, id 360, seq 213, length 64 IP X.X.X.X > 192.168.1.10: ICMP echo reply, id 360, seq 213, length 64 IP 192.168.1.10 > X.X.X.X: ICMP echo request, id 360, seq 214, length 64 IP X.X.X.X > 192.168.1.10: ICMP echo reply, id 360, seq 214, length 64 IP 192.168.1.10 > X.X.X.X: ICMP echo request, id 360, seq 215, length 64 IP X.X.X.X > 192.168.1.10: ICMP echo reply, id 360, seq 215, length 64 IP 192.168.1.10 > X.X.X.X: ICMP echo request, id 360, seq 216, length 64 IP X.X.X.X > 192.168.1.10: ICMP echo reply, id 360, seq 216, length 64 IP 192.168.1.10 > X.X.X.X: ICMP echo request, id 360, seq 217, length 64 IP X.X.X.X > 192.168.1.10: ICMP echo reply, id 360, seq 217, length 64 ^C 10 packets captured 10 packets received by filter 0 packets dropped by kernel And at the remote server I see this: # tcpdump -i eth0 -qtln icmp tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 1, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 1, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 2, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 2, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 3, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 3, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 4, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 4, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 5, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 5, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 6, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 6, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 7, 
length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 7, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 8, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 8, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 9, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 9, length 64 18 packets captured 228 packets received by filter 92 packets dropped by kernel Here "X.X.X.X" is my remote server's IP and "Y.Y.Y.Y" is my local network's public IP. So, what I understand is that the ping packets are coming out of the Ubuntu box (10.1.1.12), to the router (10.1.1.1), from there to the next router (192.168.1.1) and reaching the remote server (X.X.X.X). Then they come back all the way to the Debian router, but they never reach the Ubuntu box back. What am I missing? Here's the Debian router setup: # ifconfig eth1 Link encap:Ethernet HWaddr 94:0c:6d:82:0d:98 inet addr:10.1.1.1 Bcast:10.1.1.255 Mask:255.255.255.0 inet6 addr: fe80::960c:6dff:fe82:d98/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:105761 errors:0 dropped:0 overruns:0 frame:0 TX packets:48944 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:40298768 (38.4 MiB) TX bytes:44831595 (42.7 MiB) Interrupt:19 Base address:0x6000 eth2 Link encap:Ethernet HWaddr 6c:f0:49:a4:47:38 inet addr:192.168.1.10 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::6ef0:49ff:fea4:4738/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:38335992 errors:0 dropped:0 overruns:0 frame:0 TX packets:37097705 errors:0 dropped:0 overruns:0 carrier:1 collisions:0 txqueuelen:1000 RX bytes:4260680226 (3.9 GiB) TX bytes:3759806551 (3.5 GiB) Interrupt:27 eth3 Link encap:Ethernet HWaddr 94:0c:6d:82:c8:72 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:20 Base address:0x2000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:3408 errors:0 dropped:0 overruns:0 frame:0 TX packets:3408 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:358445 (350.0 KiB) TX bytes:358445 (350.0 KiB) tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:10.8.0.1 P-t-P:10.8.0.2 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:2767779 errors:0 dropped:0 overruns:0 frame:0 TX packets:1569477 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:3609469393 (3.3 GiB) TX bytes:96113978 (91.6 MiB) # route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 10.8.0.2 0.0.0.0 255.255.255.255 UH 0 0 0 tun0 127.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 lo 10.8.0.0 10.8.0.2 255.255.255.0 UG 0 0 0 tun0 192.168.1.0 0.0.0.0 255.255.255.0 U 1 0 0 eth2 10.1.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 eth2 # arp -n # Note: Here I have changed all the different MACs except the ones corresponding to the Ubuntu box (on 10.1.1.12 and 192.168.1.12) Address HWtype HWaddress Flags Mask Iface 192.168.1.118 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.72 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.94 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.102 ether NN:NN:NN:NN:NN:NN C eth2 10.1.1.12 ether 00:1e:67:15:2b:f0 C eth1 192.168.1.86 ether NN:NN:NN:NN:NN:NN C 
eth2 192.168.1.2 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.61 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.64 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.116 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.91 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.52 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.93 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.87 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.92 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.100 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.40 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.53 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.1 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.83 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.89 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.12 ether 00:1e:67:15:2b:f1 C eth2 192.168.1.77 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.66 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.90 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.65 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.41 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.78 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.123 ether NN:NN:NN:NN:NN:NN C eth2 # iptables -L -n Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination # iptables -L -n -t nat Chain PREROUTING (policy ACCEPT) target prot opt source destination Chain POSTROUTING (policy ACCEPT) target prot opt source destination MASQUERADE all -- 10.1.1.0/24 !10.1.1.0/24 MASQUERADE all -- !10.1.1.0/24 10.1.1.0/24 Chain OUTPUT (policy ACCEPT) target prot opt source destination And here's the Ubuntu box: # ifconfig eth0 Link encap:Ethernet HWaddr 00:1e:67:15:2b:f1 inet addr:192.168.1.12 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::21e:67ff:fe15:2bf1/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:28785139 errors:0 dropped:0 overruns:0 frame:0 TX packets:19050735 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:32068182803 (32.0 GB) TX bytes:6061333280 (6.0 GB) Interrupt:16 Memory:b1a00000-b1a20000 eth1 Link encap:Ethernet HWaddr 00:1e:67:15:2b:f0 inet addr:10.1.1.12 Bcast:10.1.1.255 Mask:255.255.255.0 inet6 addr: fe80::21e:67ff:fe15:2bf0/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:285086 errors:0 dropped:0 overruns:0 frame:0 TX packets:12719 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:30817249 (30.8 MB) TX bytes:2153228 (2.1 MB) Interrupt:16 Memory:b1900000-b1920000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:86048 errors:0 dropped:0 overruns:0 frame:0 TX packets:86048 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:11426538 (11.4 MB) TX bytes:11426538 (11.4 MB) # route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 eth0 0.0.0.0 10.1.1.1 0.0.0.0 UG 100 0 0 eth1 10.1.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 10.8.0.0 192.168.1.10 255.255.255.0 UG 0 0 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 eth0 192.168.1.0 0.0.0.0 255.255.255.0 U 1 0 0 eth0 # arp -n # Note: Here I have changed all the different MACs except the ones corresponding to the Debian box (on 10.1.1.1 and 192.168.1.10) Address HWtype HWaddress Flags Mask Iface 192.168.1.70 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.90 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.97 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.103 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.13 ether 
NN:NN:NN:NN:NN:NN C eth0 192.168.1.120 (incomplete) eth0 192.168.1.111 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.118 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.51 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.102 (incomplete) eth0 192.168.1.64 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.52 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.74 (incomplete) eth0 192.168.1.94 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.121 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.72 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.87 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.91 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.71 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.78 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.83 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.88 (incomplete) eth0 192.168.1.82 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.98 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.100 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.93 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.73 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.11 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.85 (incomplete) eth0 192.168.1.112 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.89 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.65 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.81 ether NN:NN:NN:NN:NN:NN C eth0 10.1.1.1 ether 94:0c:6d:82:0d:98 C eth1 192.168.1.53 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.116 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.61 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.10 ether 6c:f0:49:a4:47:38 C eth0 192.168.1.86 (incomplete) eth0 192.168.1.119 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.66 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.1 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.1 ether NN:NN:NN:NN:NN:NN C eth1 192.168.1.92 ether NN:NN:NN:NN:NN:NN C eth0 # iptables -L -n Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination # iptables -L -n -t nat Chain PREROUTING (policy ACCEPT) target prot opt source destination Chain INPUT (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination Chain POSTROUTING (policy ACCEPT) target prot opt source destination Edit: Following Patrick's suggestion, I did a tcpdump con the Ubuntu box and I see this: # tcpdump -i eth1 -qtln icmp tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 21967, seq 1, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 21967, seq 1, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 21967, seq 2, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 21967, seq 2, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 21967, seq 3, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 21967, seq 3, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 21967, seq 4, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 21967, seq 4, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 21967, seq 5, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 21967, seq 5, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 21967, seq 6, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 21967, seq 6, length 64 ^C 12 packets captured 12 packets received by filter 0 packets dropped by kernel So the question is: if all packets seem to be coming and going, why does ping report 100% packet loss?
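
    One thing still worth checking (an assumption on my part; nothing in the captures proves it): Linux reverse-path filtering on the Ubuntu box. The replies arrive on eth1, but the lower-metric default route points out eth0, and with strict rp_filter the kernel silently drops packets whose source address would not be routed back through the interface they arrived on, before ping ever sees them. A quick, reversible test:
        # show the current settings (assuming these are the relevant knobs)
        sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.eth1.rp_filter
        # relax the filter temporarily and re-run the ping
        sudo sysctl -w net.ipv4.conf.all.rp_filter=0
        sudo sysctl -w net.ipv4.conf.eth1.rp_filter=0
    If the ping starts working with the filter relaxed, a cleaner long-term fix would be source-based routing (an ip rule sending traffic sourced from 10.1.1.12 out via 10.1.1.1) rather than leaving rp_filter disabled.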

    Read the article

  • Beyond Chatting: What ‘Social’ Means for CRM

    - by Divya Malik
    A guest post by Steve Diamond, Senior Director, Outbound Product Management, Oracle. In a recent post on the Oracle Applications blog, my colleague Steve Boese asked three questions related to the widespread popularity and incredibly rapid growth of Facebook, Pinterest, and LinkedIn. Steve then addressed the many applications for collaborative solutions in the area of Human Capital Management. So, in turning to a conversation about Customer Relationship Management (CRM) and Sales Force Automation (SFA), let me ask you one simple question. How many sales people, particularly at business-to-business companies, consistently meet or beat their quotas in their roles by working alone, with no collaboration among fellow sales people, sales executives, employees in product groups, in service, in Legal, third-party partners, etc.? Hello? Is anybody out there? What’s that cricket noise I hear? That’s correct. Nobody! When it comes to Sales, introverts arguably have a distinct disadvantage. While it’s certainly a truism that “success” in most professional endeavors requires working with people, it’s a mandatory success factor in Sales. This fact became abundantly clear to me one early morning in the late 1990s when I joined the former Hyperion Solutions (now part of Oracle) and attended a Sales Award Ceremony. The Head of Sales at that time gave out dozens of awards – none of them to individuals and all of them to TEAMS of individuals. That’s how it works in Sales. Your colleagues help provide you with product intelligence and competitive intelligence. They help you build the best presentations, pitches, and proposals. They help you develop the most killer RFPs. They align you with the best product people to ensure you’re matching the best products for the opportunity and join you in critical meetings. They help knock the socks off your prospects in “bake off” demos. They bring in the best partners to either add complementary products to your opportunity or help you implement a solution. They work with you as a collective team. And so how is all this collaboration STILL typically done today? Through email. And yet we all silently or not so silently grimace about email. It’s relatively siloed. It’s painful to search. It’s difficult to align by topic. And it’s nearly impossible to re-trace meaningful and helpful conversations that occurred among a group or a team at some point in history. This is where social networking for Sales comes into play. It’s about PURPOSEFUL social networking versus chattering. What is purposeful social networking? It’s collaboration that’s built around opportunities, accounts, and contacts. It’s collaboration that delivers valuable context – on the target company, and on key competitors – just to name two examples. It’s collaboration that can scale to provide coaching for larger numbers of sales representatives, both for general purposes, and as we’ve largely discussed here, for specific ‘deals.’ And it’s collaboration that allows a team of people to collectively edit and iterate on a document like an RFP or a soon-to-be killer presentation that is maintained in a central repository, with no time wasted searching for it or worrying about version control. But lest we get carried away, let’s remember that collaboration “happens” among sales people whether there is specialized software to support it or not. The human practice of sales has not changed much in the last 80 to 90 years. Collaboration has been a mainstay during this entire time. 
But what social networking in general, and Oracle Social Network in particular, delivers is the opportunity for sales teams to dramatically increase their effectiveness and efficiency – to identify and close more high-quality, lucrative opportunities more quickly. For most sales organizations, this is how the game is won. To learn more, please visit Oracle Social Network and Oracle Fusion Customer Relationship Management on oracle.com.

    Read the article

  • Retail CEO Interviews

    - by David Dorf
    Businessweek's 2012 Interview Issue has interviews with three retail CEOs that are worth a quick read.  I copied some excerpts below, but please follow the links to the entire interviews. Ron Johnson, CEO JCPenney: Take me through your merchandising. One of the things I learned from Steve [Jobs]—Steve said three times in his life he had the chance to be part of the change of an interface. If you change the interface, you can dramatically change the entire experience of the product. For Steve, that was the mouse, the scroll wheel on the iPod, and then the [touch]screen. What we’re trying to do here is change the interface of retail. What we call that is the street, and you’re standing in the middle of it. When you walk into a store today, you’re overwhelmed by merchandise. There is a narrow aisle. Typically, it’s filled with product on tables and you’re overwhelmed with the noise of signs and promotions. Especially in the age of the Internet, the idea of going to a very large store and having so much abundance is actually not very appealing. The first thing you find here is you’re inspired. I have used the mannequins. The street is actually this new navigation path for a retail store. So if you come in here—you’ll notice that these aisles are 14 feet wide. These are wider than Nordstrom’s (JWN). Slide show of JCPenney store. Walter Robb, co-CEO Whole Foods: What did you learn from the recent recession about selling groceries? It was a lot of humble pie, because our sales experienced a drop that I have never seen in 32 years of retail. Customers left us in droves. We also learned that there were some very loyal customers who loved Whole Foods (WFM), people who said, “I like what you stand for. I like coming here. I like this experience.” That was very affirming. I think the realization was that we’ve got some customers, and we need to make sure we know who they are. So instead of chasing every customer out there, we started doing customer discussion groups. We were growing for growth’s sake, which is not a good strategy. We were chasing the rainbow. We cut the growth in half overnight and said, “All right, slow down. Let’s make sure we’re doing this better and more thoroughly and more thoughtfully.” This company is a mission-based company. This company started to change the world by bringing healthier food to the world. It’s not about the money, it’s about the impact, and this company is back on track as a result of those experiences. Video of Whole Foods store tour. Kay Krill, CEO Ann Taylor: You’ve worked in retail all your life. What drew you to it? I graduated from college, and I did not know what I wanted to do. Macy’s (M) came to campus to interview for their training program, and I thought, “Let me give it a try.” I got the job and fell in love with the industry. The president of Macy’s at the time said, “If you don’t wake up every morning dying to go to work, then retailing is not for you; it has to be in your blood.” It was in my blood. I love the fact that every day is different. You can get to be creative one day, financial the next day, marketing the next. I love going to stores. I love talking to associates. I love talking to clients. There’s not a predictable day.

    Read the article

  • Microsoft Build 2012 Day 1 Keynote Summary

    - by Tim Murphy
    So I have finally dried the tears after watching the Keynote for Build 2012.  This wasn’t because it was an emotional presentation, but because for the second year I missed the goodies.  Each on-site attendee got a Surface RT, a Lumia 920 and a voucher for 100GB of SkyDrive storage. The event was opened with the announcement that in the three days since the launch of Windows 8 over 4 million upgrades have been sold.  I don’t care who you are, that is an impressive stat.  Ballmer then spent a fair amount of time remaking the case for the Windows and Windows Phone platforms, similar to what we have heard over the last two launch events. There were some cool, but non-essential demos.  The one that was the most fun was the Perceptive Pixel 82” slate device.  At first glance I wondered why I would ever want such a device, but then Ballmer explained its possible use for schools and boardrooms.  That actually made sense. Then things got strange.  Steve started explaining features that developers could leverage.  Usually this type of information is left to the product leads.  He focused on the integration with the Charms features such as Search and Share. Steve “Guggs” Guggenheim showed off an app from Disney that would appeal to my kids, called “Agent P”, which is based on Phineas and Ferb.  Then he got to the meat of the presentation.  We found out that you could add a tile that can be used to sell ad space.  In the same vein we also found out that you could use Microsoft’s, PayPal’s or any commerce engine of your own creation or choosing. For those who are interested in sports, and especially in developing sports apps, the short presentation from Michael Bayle of ESPN was worth catching.  He introduced the ESPN app, which has tons of features.  For the developers in the crowd he also mentioned that ESPN has an API available at developer.espn.com. During the launch events we were told apps were coming.  In this presentation we were actually shown a scrolling list of logos and told about a couple of them.  Ballmer specifically mentioned Twitter, SAP and DropBox.  These are impressive names, and they were just a couple from a long list of impressive names. Steve Ballmer addressed the question of why you should develop for the Windows 8 platform.  He feels that Microsoft has the best commercial terms for developers, a better way to build apps than other platforms, and a variety of form factors.  His key point, though, was the available volume of customers given the current Windows install base and assuming even flat growth of the platform.  This he backed with a promise that Microsoft is going to do better at marketing, and that you won’t be able to avoid the ads they are bringing out. The last section of the keynote was presented by Kevin Gallo from the Windows Phone team.  This was the real reason I tuned into the webcast.  He impressed upon those watching that the strength of developing for the Microsoft platform is the common programming model that now exists.  While there are differences between form factor implementations, you can leverage code across them. He claimed that 90% of developer requests for Windows Phone 8 had been implemented.  
    These include:
      - More controls with better performance
      - Better live tiles, including lock screen integration
      - Speech support in custom apps
      - Easier submission to the marketplace
      - App camera integration
      - VOIP and chat support
      - Bluetooth and NFC support
      - Native C++ development
      - Direct 3D development
    The quote from Kevin that stood out for me was that “Take your Dramamine and buckle your seatbelt type of games are coming to Windows Phone 8”.  He backed this up by displaying a list of game development frameworks and then having Unity come out and do a demo. Ok, almost done … The last two things of note for me were the announcement that the SDK is immediately available at dev.windowsphone.com and that they were reducing the cost of an individual developer account to $8 for the next 8 days. Let the development commence. del.icio.us Tags: Build 2012,Windows 8,Windows Phone 8,Windows Phone

    Read the article

  • Oracle at HR Tech: What a Difference a Year Makes

    - by Natalia Rachelson
    Last week, I had the privilege of attending the famous HR Technology Conference (HR Tech) in my new hometown of Chicago. This annual event, which draws the who's who in the world of HR technology, was by far the biggest yet.  It wasn't just the highest level of attendance that was mind-blowing, but also the amazing quality of attendees. Kudos go to the organizers, especially Bill Kutik, for pulling together such a phenomenal conference. Conference highlights included Naomi Bloom's (http://infullbloom.us) Masters Panel and Mark Hurd's General Session on the last day of the conference. Naomi managed to do the seemingly impossible -- get all of the industry heavyweights and fierce competitors to travel to Chicago for her panel. Here are the executives she hosted:
      - Our own Steve Miranda
      - Sanjay Poonen, President Global Solutions, SAP
      - Stan Swete, CTO, Workday
      - Mike Capone, VP for Product Development and CIO, ADP
      - John Wookey, EVP, Social Applications, Salesforce.com
      - Adam Rogers, CTO, Ultimate Software
    I bet you think "WOW" when you look at these names. Just this panel by itself would have been enough of a draw for any tech conference, so Naomi and Bill really scored. TechTarget published a great review of the conference here.  And here are a few highlights from Steve. "Steve Miranda, EVP Apps Dev Oracle, said delivering software in the cloud helps vendors shape their products to customer needs more efficiently. "As vendors, we're able to improve the software faster," he said. "We can see in real time what customers are using and not using." Miranda underscored Oracle's commitment to socializing its HCM platform, and named recruiting as an area where social has had a significant impact. "We want to make social a part of the fabric, not a separate piece," he said. "Already, if you're doing recruiting without social, it probably doesn't make any sense."" Having Mark Hurd at the conference was another real treat and everyone took notice.  The Business of HR publication covered Mark's participation at HR Tech and the full article is available here. Here is what Business of HR had to say: "In truth, the story of Oracle today is a story similar to many of the current and potential customers they faced at the conference this week. Their business is changing and growing. They've dealt with acquisitions of their own and their competitors continue to nip at their heels. They are dealing with growth (and yes, they are hiring in case you're interested). They have concerns about talent as well. If Oracle feels as strongly about their products as they seem to be, they will be getting their co-president in front of a lot more groups of current and potential customers like they did at the HR Technology Conference this year. And here's hoping this is one executive who won't stop talking about the importance of talent just because he isn't at the HR tech conference anymore." Natalia Rachelson, Senior Director, Oracle Applications

    Read the article

  • Why Apple’s New SDK Limitation is So Offensive

    - by TStewartDev
    I am not an Apple fanboy, nor have I ever been. However, I have owned a Mac, an iPod, and an iPhone in my lifetime, and for more than a decade, I have defended Apple against the untruths that the haters so enjoy spewing. I encouraged my wife to buy a MacBook when she needed a new laptop two years ago, and I often recommend them to my friends and relatives. I have proudly and happily used my first generation iPhone for nearly three years. Now, for the first time in well over ten years, I find myself ready to swear off Apple and encourage everyone I know to do the same. I was disappointed when Apple wouldn't allow native apps, but I still bought the iPhone. I've stomached their ambiguous app approval process even though it's apparent that Steve may just reject your app because he doesn't like you or feels threatened by you (I'm still lamenting the rejection of the Google Voice app). But, as a developer, I can no longer tolerate Apple's terms and the kind of totalitarian control they indicate Apple wants. In case you are not already familiar, Apple has dictated in their OS 4.0 SDK license agreement (the now infamous Section 3.3.1) that all apps developed for the iPhone must be coded in C, C++, or Objective C, and moreover, that using any cross-compiling platforms is a violation of the agreement. For those of you who aren't developers, let me try to illustrate why this angers those of us who are. Imagine you're a professional writer. You've had articles published in some journals and magazines, and you've got a couple popular books out there, too. You've got an idea for a new book, and so you take it to your publisher. Your publisher agrees that it's a good idea. "But," says the publisher, "we want to hold our books to a tighter standard so that our readers get the experience we want them to have. Therefore, from now on, all our writers may only use words from this list of the 10,000 most common English words. Furthermore, if you cite any other works or quote anyone, they must comply with that same list, or you'll have to rewrite the entire work as well in case our readers want to look up your citation." What do you do? If your work is a children's book, this probably isn't a big deal to you. If it's an autobiography, textbook, or even a novel, though, you're going to have a lot of trouble describing your content with only common words. It's going to take you longer to complete your book, too, since you'll be looking up less common words frequently to see if you can use them. You could always go to another publisher, but this one has the best ability to distribute your book. The next largest distributor can only do a quarter as much. You could abandon the project altogether, but then everyone loses. Isn't this a silly scenario? Who would put such a limitation on writers? Yet this is very much what Apple is doing. They are using their dominant position in the market to coerce developers to write their apps exclusively for the iPhone OS by making it too expensive to write for multiple platforms. It is at least a threefold attack, striking at Adobe who is set to release a compiler that lets Flash source be compiled to iPhone binaries; striking at Google whose Android platform stands the best chance at the moment of providing serious competition to the iPhone; and reinforcing their own strong position by keeping popular apps exclusively to iPhone. 
And while developers are already very upset about this, the sad fact is that most of us will cave and give in to Apple because consumers don't know any better. They will continue to buy Apple's toy forcing developers to play Apple's maniacal game in order to make any money, at least until Steve Jobs decides he doesn't like them or he intends to release a competing application (bye-bye OpenFeint). Apple has been kept in check on the desktop front by a very dominant Microsoft, but I'm afraid that their success with iPods, iTunes, and iPhones has created a monster that we may have to bear until it is slain by an anti-trust suit or dies with the retirement of Steve Jobs.

    Read the article

  • MySQL: Request to select the last 10 send/received messages to/by different users

    - by Yako malin
    I want to select the last 10 messages you received or sent, keeping only one message per distinct user (the most recent exchange with each person). For example, the results must look like this: 1. John1 - last message received 04/17/10 3:12 2. Thomy - last message sent 04/16/10 1:26 3. Pamela - last message received 04/12/10 3:51 4. Freddy - last message received 03/28/10 9:00 5. Jack - last message sent 03/20/10 4:53 6. Tom - last message received 02/01/10 7:41 ..... The table looks like this: CREATE TABLE `messages` ( `id` int(11) NOT NULL AUTO_INCREMENT, `time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, `sender` int(11) DEFAULT NULL, `receiver` int(11) DEFAULT NULL, `content` text ) I think Facebook (and the iPhone) use this approach: when you go to your mailbox, you see the last message received/sent grouped by user (friend). To take an example, if I have these messages (already ordered): **Mike** **Tom** **Pam** Mike Mike **John** John Pam **Steve** **Bobby** Steve Steve Bobby. Only the messages marked with ** should be returned, because they are the LAST messages I sent/received per user. In other words, I want the last message of EACH discussion. What is the solution?
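
    One possible shape for the query (a sketch only, untested against this exact schema, with 42 standing in as a placeholder for the logged-in user's id): compute the latest timestamp per correspondent, then join back to pick up the matching row.
        SELECT m.*
        FROM messages m
        JOIN (
            -- latest message time per "other party" of user 42
            SELECT IF(sender = 42, receiver, sender) AS other_user,
                   MAX(`time`) AS last_time
            FROM messages
            WHERE sender = 42 OR receiver = 42
            GROUP BY other_user
        ) latest
          ON IF(m.sender = 42, m.receiver, m.sender) = latest.other_user
         AND m.`time` = latest.last_time
        WHERE m.sender = 42 OR m.receiver = 42
        ORDER BY m.`time` DESC
        LIMIT 10;
    Note that two messages sharing the same timestamp with the same correspondent would both be returned, so a tiebreaker on id may be needed.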

    Read the article

  • Linking to an item in another node (XSLT)

    - by Andrew Parisi
    I have an XML document with companies listed in it. I want to create a link with XSLT that contains the <link> child of the next node. Sorry if this is confusing; here is some sample XML of what I'm working with: <portfolio> <company> <name>Dano Industries</name> <link>dano.xml</link> </company> <company> <name>Mike and Co.</name> <link>mike.xml</link> </company> <company> <name>Steve Inc.</name> <link>steve.xml</link> </company> </portfolio> I want two links, "BACK" and "NEXT". While on mike.xml, I want BACK to link to "dano.xml" and NEXT to link to "steve.xml", etc., and have it change dynamically on each page based on the nodes around it. I want to do this because I may add to and change the list as I go along, so I don't want to have to re-link everything manually. How can I achieve this? Sorry, I am new to XSLT, so please explain with a solution if possible! Thanks in advance!
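
    One approach that might work (a sketch, untested; it assumes each company page is produced from the portfolio index above and that a $current parameter carries the file name of the page being generated, e.g. 'mike.xml') is to use the preceding-sibling and following-sibling axes:
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <!-- file name of the page being generated, passed in by whatever drives the transform -->
          <xsl:param name="current" select="'mike.xml'"/>

          <xsl:template match="/portfolio">
            <!-- the company whose <link> matches the current page -->
            <xsl:variable name="me" select="company[link = $current]"/>
            <xsl:if test="$me/preceding-sibling::company[1]">
              <a href="{$me/preceding-sibling::company[1]/link}">BACK</a>
            </xsl:if>
            <xsl:if test="$me/following-sibling::company[1]">
              <a href="{$me/following-sibling::company[1]/link}">NEXT</a>
            </xsl:if>
          </xsl:template>
        </xsl:stylesheet>
    Because BACK and NEXT are looked up from the list itself, adding or reordering <company> entries should not require re-linking anything by hand.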

    Read the article

  • linq to xml enumerating over descendants

    - by gh9
    Hi, I'm trying to write a simple LINQ query from a tutorial I read, but I cannot seem to get it to work. I am trying to display both addresses in the attached XML document, but I can only display the first one. Can someone help me figure out why both aren't being printed? Thank you very much.
        <?xml version="1.0" encoding="utf-8" ?>
        <Emails>
          <Email group="FooBar">
            <Subject>Test subject</Subject>
            <Content>Test Content</Content>
            <EmailTo>
              <Address>[email protected]</Address>
              <Address>[email protected]</Address>
            </EmailTo>
          </Email>
        </Emails>

        Dim steve = (From email In emailList.Descendants("Email") _
                     Where (email.Attribute("group").Value.Equals("FooBar")) _
                     Select content = email.Element("EmailTo").Descendants("Address")).ToList()

        If Not steve Is Nothing Then
            For Each addr In steve
                Console.WriteLine(addr.Value)
            Next
            Console.ReadLine()
        End If
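
    One likely explanation (an assumption, not verified): the Select clause produces one collection of <Address> elements per <Email>, and in VB the .Value extension on a sequence of XElement returns only the first element's value, so only the first address prints. A sketch of a possible fix is to flatten the sequence with a second From clause:
        ' flatten to one string per <Address> instead of one collection per <Email>
        Dim addresses = (From email In emailList.Descendants("Email") _
                         From addr In email.Element("EmailTo").Descendants("Address") _
                         Where email.Attribute("group").Value.Equals("FooBar") _
                         Select addr.Value).ToList()

        For Each address In addresses
            Console.WriteLine(address)
        Next
    The same flattening can be done in method syntax with SelectMany.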

    Read the article

  • Windows 7, IIS 7.5, Selfssl

    - by Steve
    The Windows IIS 6 Resource Kit won't install on Windows 7 (Home Premium), so I copied it from another machine, and selfssl.exe is giving me: Failed to generate the cryptographic key: 0x5 I tried the instructions here but am still getting the above error. I'm trying to set the common name of the certificate to a name other than the machine name so I can avoid the certificate errors in the browser. This is a test web application. I know I can just test through the browser errors, but I'd like to mimic real-world conditions as much as possible. Is there any other way to generate your own SSL certificates for IIS 7.5?
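
    Two possibilities worth trying (assumptions, not verified on this machine): error 0x5 is the Windows code for access denied, so running the command prompt elevated ("Run as administrator") may let selfssl.exe create the key; alternatively, the certificate can be generated outside IIS, for example with OpenSSL, and imported through IIS Manager's Server Certificates feature. A rough sketch, with dev.mysite.local as a made-up common name:
        rem generate a self-signed certificate with a custom common name
        openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout dev.key -out dev.crt -subj "/CN=dev.mysite.local"
        rem bundle key and certificate into a .pfx that IIS can import
        openssl pkcs12 -export -out dev.pfx -inkey dev.key -in dev.crt
    The .pfx can then be imported and bound to the test site; to silence the browser completely, the certificate still has to be added to Trusted Root Certification Authorities and the chosen name mapped in the hosts file.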

    Read the article
