Search Results

Search found 1157 results on 47 pages for 'latency'.


  • Crime Scene Investigation: SQL Server

    - by Rodney Landrum
    “The packages are running slower in Prod than they are in Dev.” My week began with this simple declaration from one of our lead BI developers, quickly followed by an emailed spreadsheet demonstrating that, over 5 executions, an extensive ETL process was running on average 630 seconds faster on Dev than on Prod. The situation needed some scientific investigation to determine why the same code, the same data, and the same schema would yield consistently slower results on a more powerful server. Prod had yet to be officially christened with a “Go Live” date, so I had the time, and having recently been binge-watching CSI: New York, I also had the inclination.

    An inspection of the two systems, Prod and Dev, revealed the first surprise: although Prod was indeed a “bigger” system, with double the amount of RAM of Dev, the latter actually had twice as many processor cores. On neither system did I see much sign of resources being heavily taxed while the ETL process was running. Without any real supporting evidence, I jumped to a conclusion that my years of performance tuning should have helped me avoid: that the hardware differences explained the better performance on Dev. We spent time setting up a Test system, similarly scoped to Prod except with 4 times the cores, and ported everything across. The results of our careful benchmarks left us truly bemused; the ETL process on the new server was slower than on both other systems. We burned more time tweaking server configurations and monitoring IO and network latency, several times believing we’d uncovered the smoking gun, until the results of subsequent test runs pitched us back into confusion.

    Finally, I decided, enough was enough. Hadn’t I learned very early in my DBA career that almost all bottlenecks were caused by code and database design, not hardware? It was time to get back to basics. With over 100 SSIS packages and hundreds of queries, each handling specific tasks such as file loads, bulk inserts, transforms, logging, and so on, the task seemed formidable. And yet, after barely an hour spent with Profiler, Extended Events, and the wait statistics DMVs, I had a lead in the shape of a query that joined three tables containing millions of rows, returned 3279 results, but performed 239K logical reads. As soon as I looked at the execution plans for the query in Dev and Test, I saw the culprit: an implicit conversion warning on a join predicate field that was numeric in one table and a varchar(50) in another! I turned this information over to the BI developers, who quickly resolved the data type mismatches and found and fixed “several” others as well. After the schema changes, the same query with the same databases ran in under 1 second on all systems, and the logical reads dropped to fewer than 300. The analysis also revealed that on Dev the ETL task was pulling data across a LAN, whereas Prod and Test were connected across a slower WAN, in large part explaining why the same process ran slower on the latter two systems. Loading the data locally on Prod delivered a further 20% gain in performance.

    As we progress through our DBA careers we learn valuable lessons. Sometimes, with a project deadline looming and pressure mounting, we choose to forget them. I was close to giving in to the temptation to throw more hardware at the problem. I’m pleased at least that I resisted, though I still kick myself for not looking at the code on day one. It can seem a daunting prospect to return to the fundamentals of the code so close to roll-out, but with the right tools, and surprisingly little time, you can collect the evidence that reveals the true problem. It is a lesson I trust I will remember for my next 20 years as a DBA, if I’m ever again tempted to bypass the evidence.

    Read the article

  • Observing flow control idle time in TCP

    - by user12820842
    Previously I described how to observe congestion control strategies during transmission, and here I talked about TCP's sliding window approach for handling flow control on the receive side. A neat trick would now be to put the pieces together and ask the following question - how often is TCP transmission blocked by congestion control (send-side flow control) versus a zero-sized send window (which is the receiver saying it cannot process any more data)? So in effect we are asking whether the size of the receive window of the peer or the congestion control strategy may be sub-optimal. The result of such a problem would be that we have TCP data that we could be transmitting but we are not, potentially affecting throughput. So flow control is in effect:

    - when the congestion window is less than or equal to the number of bytes outstanding on the connection. We can derive this from args[3]->tcps_snxt - args[3]->tcps_suna, i.e. the difference between the next sequence number to send and the lowest unacknowledged sequence number; and
    - when the window in the TCP segment received is advertised as 0.

    We time from these events until we send new data (i.e. until args[4]->tcp_seq >= the snxt value recorded when the window closed). Here's the script:

        #!/usr/sbin/dtrace -s

        #pragma D option quiet

        tcp:::send
        / (args[3]->tcps_snxt - args[3]->tcps_suna) >= args[3]->tcps_cwnd /
        {
                cwndclosed[args[1]->cs_cid] = timestamp;
                cwndsnxt[args[1]->cs_cid] = args[3]->tcps_snxt;
                @numclosed["cwnd", args[2]->ip_daddr, args[4]->tcp_dport] = count();
        }

        tcp:::send
        / cwndclosed[args[1]->cs_cid] && args[4]->tcp_seq >= cwndsnxt[args[1]->cs_cid] /
        {
                @meantimeclosed["cwnd", args[2]->ip_daddr, args[4]->tcp_dport] =
                    avg(timestamp - cwndclosed[args[1]->cs_cid]);
                @stddevtimeclosed["cwnd", args[2]->ip_daddr, args[4]->tcp_dport] =
                    stddev(timestamp - cwndclosed[args[1]->cs_cid]);
                @numclosed["cwnd", args[2]->ip_daddr, args[4]->tcp_dport] = count();
                cwndclosed[args[1]->cs_cid] = 0;
                cwndsnxt[args[1]->cs_cid] = 0;
        }

        tcp:::receive
        / args[4]->tcp_window == 0 &&
          (args[4]->tcp_flags & (TH_SYN|TH_RST|TH_FIN)) == 0 /
        {
                swndclosed[args[1]->cs_cid] = timestamp;
                swndsnxt[args[1]->cs_cid] = args[3]->tcps_snxt;
                @numclosed["swnd", args[2]->ip_saddr, args[4]->tcp_dport] = count();
        }

        tcp:::send
        / swndclosed[args[1]->cs_cid] && args[4]->tcp_seq >= swndsnxt[args[1]->cs_cid] /
        {
                @meantimeclosed["swnd", args[2]->ip_daddr, args[4]->tcp_sport] =
                    avg(timestamp - swndclosed[args[1]->cs_cid]);
                @stddevtimeclosed["swnd", args[2]->ip_daddr, args[4]->tcp_sport] =
                    stddev(timestamp - swndclosed[args[1]->cs_cid]);
                swndclosed[args[1]->cs_cid] = 0;
                swndsnxt[args[1]->cs_cid] = 0;
        }

        END
        {
                printf("%-6s %-20s %-8s %-25s %-8s %-8s\n", "Window", "Remote host",
                    "Port", "TCP Avg WndClosed(ns)", "StdDev", "Num");
                printa("%-6s %-20s %-8d %@-25d %@-8d %@-8d\n", @meantimeclosed,
                    @stddevtimeclosed, @numclosed);
        }

    So this script will show us whether the peer's receive window size is preventing flow ("swnd" events) or whether congestion control is limiting flow ("cwnd" events). As an example, I traced on a server with a large file transfer in progress via a webserver and with an active ssh connection running "find / -depth -print". Here is the output:

        ^C
        Window Remote host          Port     TCP Avg WndClosed(ns)     StdDev   Num
        cwnd   10.175.96.92         80       86064329                  77311705 125
        cwnd   10.175.96.92         22       122068522                 151039669 81

    So we see that in this case the congestion window closes 125 times for the port 80 connections and 81 times for ssh. The average time the window is closed is 0.086 sec for port 80 and 0.12 sec for port 22.
So if you wish to change the congestion control algorithm in Oracle Solaris 11, a useful first step may be to see if congestion really is an issue on your network. Scripts like the one posted above can help assess this, but it's worth reiterating that if congestion control is occurring, that's not necessarily a problem that needs fixing. Recall that congestion control is about controlling flow to prevent large-scale drops, so looking at congestion events in isolation doesn't tell us the whole story. For example, are we seeing more congestion events with one control algorithm, but more drops/retransmissions with another? As always, it's best to start with measures of throughput and latency before arriving at a specific hypothesis such as "my congestion control algorithm is sub-optimal".

    Read the article

  • Replicating between Cloud and On-Premises using Oracle GoldenGate

    - by Ananth R. Tiru
    Do you have applications running on the cloud that you need to connect with your on-premises systems? The most likely answer to this question is a resounding YES! If so, then you understand the importance of keeping the data fresh at all times across the cloud and on-premises environments. This is also one of the key focus areas for the new GoldenGate 12c release, which we announced a couple of weeks ago via a press release. Most enterprises have spent years avoiding the data “silos” that inhibit productivity. For example, an enterprise which has adopted a CRM strategy could be relying on an on-premises marketing application used for developing and nurturing leads, while at the same time using a SaaS-based sales application to create opportunities and quotes. The sales and marketing teams which use these systems need to be able to access and share the data in a reliable and cohesive way. This example extends to other application areas such as HR, Supply Chain, and Finance, and to the demands users place on getting a consistent view of the data.

    When it comes to moving data in hybrid environments, some of the key requirements include minimal latency, reliability, and security:

    - Data must remain fresh. As data ages it becomes less relevant and less valuable; day-old data is often insufficient in today’s competitive landscape.
    - Reliability must be guaranteed despite system or connectivity issues that can occur between the cloud and on-premises instances.
    - Security is a key concern when replicating between cloud and on-premises instances.

    There are several options to consider when replicating between the cloud and on-premises instances.

    Option 1 - Secured network established between the cloud and on-premises: A secured network is established between the cloud and on-premises, which enables the applications (including replication software) running on the cloud and on-premises to have seamless connectivity to other applications, irrespective of where they are physically located.

    Option 2 - Restricted network established between the cloud and on-premises: A restricted network is established between the cloud and on-premises instances, which enables certain ports (required by replication) to be opened on both the cloud and the on-premises instances, and white-lists the IP addresses of the cloud and on-premises instances.

    Option 3 - Restricted network access from on-premises and cloud through an HTTP proxy: This option can be considered when the ports required by the applications (including replication software) are not open and the cloud instance is not white-listed on the on-premises instance. This option of tunneling through an HTTP proxy may only be considered when proper security exceptions are obtained.

    Oracle GoldenGate is used by major Fortune 500 companies and other industry leaders worldwide to support mission-critical systems for data availability and integration. Oracle GoldenGate addresses the requirements for ensuring data consistency between cloud and on-premises instances, thus enabling the business process to run effectively and reliably. The architecture diagram below illustrates the scenario where the cloud and the on-premises instance are connected using GoldenGate through a secured network. In this scenario, Oracle GoldenGate is installed and configured on both the cloud and the on-premises instances. On the cloud instance, Oracle GoldenGate is installed and configured on the machine where the database instance can be accessed.

    Oracle GoldenGate can be configured for unidirectional or bi-directional replication between the cloud and on-premises instances. The specific configuration details of the Oracle GoldenGate processes will depend upon the option selected for establishing connectivity between the cloud and on-premises instances. The knowledge article (ID 1588484.1) titled 'Replicating between Cloud and On-Premises using Oracle GoldenGate' discusses the options for replicating between the cloud and on-premises instances in detail. The article can be found on My Oracle Support. To learn more about Oracle GoldenGate 12c, register for our launch webcast, where we will go into these new features in more detail. You may also want to download our white paper, "Oracle GoldenGate 12c Release 1 New Features Overview". I would love to hear your requirements for replicating between on-premises and cloud instances, as well as your comments about the strategy discussed in the knowledge article to address your needs. Please post your comments in this blog or in the Oracle GoldenGate public forum - https://forums.oracle.com/community/developer/english/business_intelligence/system_management_and_integration/goldengate

    Read the article

  • What's wrong with my wireless?

    - by dazzle
    I am having issues with my wireless connection. My connection is constantly disconnecting, then attempting to reconnect, reconnecting momentarily, then disconnecting, etc., on time scales that range from seconds to minutes. In the meantime, needless to say, I'm having significant packet loss. I'm running Ubuntu 14.04 64-bit, updated and upgraded to today. Here is my card and driver:

        delta@sager:~$ lspci -vq | grep -i wireless -B 1 -A 5
        04:00.0 Network controller: Intel Corporation Wireless 7260 (rev 73)
            Subsystem: Intel Corporation Dual Band Wireless-AC 7260
            Flags: bus master, fast devsel, latency 0, IRQ 47
            Memory at f7d00000 (64-bit, non-prefetchable) [size=8K]
            Capabilities:
            Kernel driver in use: iwlwifi

    Here is my kernel:

        delta@sager:~$ uname -r
        3.13.0-34-generic

    None of the other machines on my home network are having these issues. Windows Vista is networking without issue, for goodness sake ;-) Here is a small clipping from the output of dmesg. As you can see, I am getting a cfg80211 message of some sort over and over again (FYI, I've replaced my MAC address with a series of dashes, so anytime there is a ---------------, that is where the MAC address was):

        [ 1881.739161] wlan1: authenticate with ---------------
        [ 1881.741561] wlan1: send auth to --------------- (try 1/3)
        [ 1881.743440] wlan1: authenticated
        [ 1881.746027] wlan1: associate with --------------- (try 1/3)
        [ 1881.749244] wlan1: RX AssocResp from --------------- (capab=0x411 status=0 aid=4)
        [ 1881.754727] wlan1: associated
        [ 1881.754827] cfg80211: Calling CRDA for country: US
        [ 1881.761552] cfg80211: Regulatory domain changed to country: US
        [ 1881.761559] cfg80211:   (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
        [ 1881.761564] cfg80211:   (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2700 mBm)
        [ 1881.761568] cfg80211:   (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 1700 mBm)
        [ 1881.761571] cfg80211:   (5250000 KHz - 5330000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1881.761574] cfg80211:   (5490000 KHz - 5600000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1881.761577] cfg80211:   (5650000 KHz - 5710000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1881.761580] cfg80211:   (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 3000 mBm)
        [ 1881.761584] cfg80211:   (57240000 KHz - 63720000 KHz @ 2160000 KHz), (N/A, 4000 mBm)
        [ 1882.391038] cfg80211: Calling CRDA to update world regulatory domain
        [ 1882.396254] cfg80211: World regulatory domain updated:
        [ 1882.396260] cfg80211:   (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
        [ 1882.396265] cfg80211:   (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1882.396268] cfg80211:   (2457000 KHz - 2482000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1882.396271] cfg80211:   (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
        [ 1882.396274] cfg80211:   (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1882.396277] cfg80211:   (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1886.148252] wlan1: authenticate with ---------------
        [ 1886.150005] wlan1: send auth to --------------- (try 1/3)
        [ 1886.151807] wlan1: authenticated
        [ 1886.154847] wlan1: associate with --------------- (try 1/3)
        [ 1886.158147] wlan1: RX AssocResp from --------------- (capab=0x411 status=0 aid=4)
        [ 1886.163464] wlan1: associated
        [ 1886.163520] wlan1: Limiting TX power to 30 (30 - 0) dBm as advertised by ---------------
        [ 1886.163588] cfg80211: Calling CRDA for country: US
        [ 1886.170500] cfg80211: Regulatory domain changed to country: US
        [ 1886.170508] cfg80211:   (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
        [ 1886.170513] cfg80211:   (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2700 mBm)
        [ 1886.170517] cfg80211:   (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 1700 mBm)
        [ 1886.170520] cfg80211:   (5250000 KHz - 5330000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1886.170523] cfg80211:   (5490000 KHz - 5600000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1886.170526] cfg80211:   (5650000 KHz - 5710000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1886.170529] cfg80211:   (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 3000 mBm)
        [ 1886.170533] cfg80211:   (57240000 KHz - 63720000 KHz @ 2160000 KHz), (N/A, 4000 mBm)
        [ 1887.200197] cfg80211: Calling CRDA to update world regulatory domain
        [ 1887.203655] cfg80211: World regulatory domain updated:
        [ 1887.203659] cfg80211:   (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
        [ 1887.203662] cfg80211:   (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1887.203664] cfg80211:   (2457000 KHz - 2482000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1887.203666] cfg80211:   (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
        [ 1887.203668] cfg80211:   (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1887.203670] cfg80211:   (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)

    I've poked around on AskUbuntu and have not found any adequate solutions; I have also found similar threads that were left unanswered. Any advice/experience/threads I might be able to pull on would be greatly appreciated. In your opinion, is this a kernel issue, hardware issue, etc.? Thanks in advance.

    EDIT: chili, here's the output of iwconfig:

        delta@sager:~$ iwconfig
        wlan1     IEEE 802.11abg  ESSID:"LANbeforetime"
                  Mode:Managed  Frequency:2.412 GHz  Access Point: -----------
                  Bit Rate=48 Mb/s   Tx-Power=16 dBm
                  Retry long limit:7   RTS thr:off   Fragment thr:off
                  Power Management:off
                  Link Quality=44/70  Signal level=-66 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:0  Invalid misc:80   Missed beacon:0

        eth0      no wireless extensions.

        lo        no wireless extensions.

    Read the article

  • Randomly losing wireless connection with Cubuntu 12.04

    - by statquant
    I am presently experiencing random disconnections from my wireless network. It seems to be getting more and more frequent (however, I have not seen any clear pattern). This is killing me... Here is some information that should help (from the Ubuntu forums). Thanks for reading.

    Machine: Acer Aspire S3

        statquant@euclide:~$ lsb_release -d
        Description:    Ubuntu 12.04.1 LTS
        statquant@euclide:~$ uname -mr
        3.2.0-33-generic x86_64
        statquant@euclide:~$ sudo /etc/init.d/networking restart
         * Running /etc/init.d/networking restart is deprecated because it may not enable again some interfaces
         * Reconfiguring network interfaces...
        statquant@euclide:~$ lspci
        02:00.0 Network controller: Atheros Communications Inc. AR9485 Wireless Network Adapter (rev 01)
        statquant@euclide:~$ lsusb
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 001 Device 004: ID 064e:c321 Suyin Corp.
        Bus 002 Device 003: ID 0bda:0129 Realtek Semiconductor Corp.
        statquant@euclide:~$ ifconfig
        wlan0     Link encap:Ethernet  HWaddr 74:de:2b:dd:c4:78
                  inet addr:192.168.1.3  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::76de:2bff:fedd:c478/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:913 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:802 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:873218 (873.2 KB)  TX bytes:125826 (125.8 KB)
        statquant@euclide:~$ iwconfig
        wlan0     IEEE 802.11bgn  ESSID:"Bbox-D646D1"
                  Mode:Managed  Frequency:2.437 GHz  Access Point: 00:19:70:80:01:6C
                  Bit Rate=65 Mb/s   Tx-Power=16 dBm
                  Retry long limit:7   RTS thr:off   Fragment thr:off
                  Power Management:on
                  Link Quality=56/70  Signal level=-54 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:0  Invalid misc:71   Missed beacon:0
        statquant@euclide:~$ dmesg | grep "wlan"
        [   17.495866] ADDRCONF(NETDEV_UP): wlan0: link is not ready
        [   17.498950] ADDRCONF(NETDEV_UP): wlan0: link is not ready
        [   20.072015] wlan0: authenticate with 00:19:70:80:01:6c (try 1)
        [   20.269853] wlan0: authenticate with 00:19:70:80:01:6c (try 2)
        [   20.272386] wlan0: authenticated
        [   20.298682] wlan0: associate with 00:19:70:80:01:6c (try 1)
        [   20.302321] wlan0: RX AssocResp from 00:19:70:80:01:6c (capab=0x431 status=0 aid=1)
        [   20.302325] wlan0: associated
        [   20.307307] ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready
        [   30.402292] wlan0: no IPv6 routers present
        statquant@euclide:~$ sudo lshw -C network
        [sudo] password for statquant:
          *-network
               description: Wireless interface
               product: AR9485 Wireless Network Adapter
               vendor: Atheros Communications Inc.
               physical id: 0
               bus info: pci@0000:02:00.0
               logical name: wlan0
               version: 01
               serial: 74:de:2b:dd:c4:78
               width: 64 bits
               clock: 33MHz
               capabilities: pm msi pciexpress bus_master cap_list rom ethernet physical wireless
               configuration: broadcast=yes driver=ath9k driverversion=3.2.0-33-generic firmware=N/A ip=192.168.1.3 latency=0 link=yes multicast=yes wireless=IEEE 802.11bgn
               resources: irq:17 memory:c0400000-c047ffff memory:afb00000-afb0ffff
        statquant@euclide:~$ iwlist scan
        wlan0     Scan completed :
                  Cell 01 - Address: 00:19:70:80:01:6C
                            Channel:6
                            Frequency:2.437 GHz (Channel 6)
                            Quality=56/70  Signal level=-54 dBm
                            Encryption key:on
                            ESSID:"Bbox-D646D1"
                            Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 6 Mb/s
                                      9 Mb/s; 12 Mb/s; 18 Mb/s
                            Bit Rates:24 Mb/s; 36 Mb/s; 48 Mb/s; 54 Mb/s
                            Mode:Master
                            Extra:tsf=000000125fb152bb
                            Extra: Last beacon: 40020ms ago
                            IE: Unknown: 000B42626F782D443634364431
                            IE: Unknown: 010882848B960C121824
                            IE: Unknown: 030106
                            IE: IEEE 802.11i/WPA2 Version 1
                                Group Cipher : TKIP
                                Pairwise Ciphers (2) : CCMP TKIP
                                Authentication Suites (1) : PSK
                            IE: WPA Version 1
                                Group Cipher : TKIP
                                Pairwise Ciphers (2) : CCMP TKIP
                                Authentication Suites (1) : PSK
                            IE: Unknown: 2A0100
                            IE: Unknown: 32043048606C
                            IE: Unknown: DD180050F2020101820003A4000027A4000042435E0062322F00
                            IE: Unknown: 2D1A4C101BFF00000000000000000000000000000000000000000000
                            IE: Unknown: 3D1606080800000000000000000000000000000000000000
                            IE: Unknown: DD0900037F01010000FF7F
                            IE: Unknown: DD0A00037F04010000000000

    And... finally, please note that I did the following (after looking for fixes to similar problems), but unfortunately it did not work:

        sudo modprobe -r iwlwifi
        sudo modprobe iwlwifi 11n_disable=1

    Read the article

  • No Wireless Networks, BCM4313 [duplicate]

    - by TalonPlz
    This question already has an answer here: How to Install Broadcom Wireless Drivers (BCM43xx) (38 answers)

    Just bought this little Asus 10-inch laptop that came with Ubuntu 12.04. Everything at my home was fine: wireless identified and connected. As soon as I went to my girlfriend's house the trouble started. I couldn't connect to wireless (authentication... times out and asks for authentication). I started doing internet searching and tried a few solutions posted online using terminal commands. No solutions. I decided to upgrade to 12.10-13.04, and that left me with a worse problem: I can no longer see ANY networks whatsoever. The wireless card is ON, without a doubt. The wired connection works. I have been fumbling with driver versions to no avail, and have no idea which driver I am currently running. I have a vague idea of what terminal lines to run.

    lshw:

               resources: irq:17 memory:f7d00000-f7dfffff
          *-network
               description: Wireless interface
               product: BCM4313 802.11b/g/n Wireless LAN Controller
               vendor: Broadcom Corporation
               physical id: 0
               bus info: pci@0000:02:00.0
               logical name: eth2
               version: 01
               serial: dc:85:de:56:c4:ea
               width: 64 bits
               clock: 33MHz
               capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
               configuration: broadcast=yes driver=wl0 driverversion=6.20.155.1 (r326264) latency=0 multicast=yes wireless=IEEE 802.11abg
               resources: irq:17 memory:f7d00000-f7d03fff

    iwconfig:

        eth1      no wireless extensions.

        eth2      IEEE 802.11abg  ESSID:off/any
                  Mode:Managed  Access Point: Not-Associated
                  Retry long limit:7   RTS thr:off   Fragment thr:off
                  Power Management:off

        lo        no wireless extensions.

    I am new and excited to start my Ubuntu and Linux life, and this is only the first of my few hiccups, I am sure! :) Thanks all

    UPDATE: Report from 2nd answer:

        talon@Black1015E:~$ sudo apt-get remove --purge bcmwl-kernel-source
        [sudo] password for talon:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package 'bcmwl-kernel-source' is not installed, so not removed
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        talon@Black1015E:~$ wget http://us.archive.ubuntu.com/ubuntu/pool/restricted/b/bcmwl/bcmwl-kernel-source_5.100.82.112+bdcom-0ubuntu3_amd64.deb
        --2013-10-22 18:50:32--  http://us.archive.ubuntu.com/ubuntu/pool/restricted/b/bcmwl/bcmwl-kernel-source_5.100.82.112+bdcom-0ubuntu3_amd64.deb
        Resolving us.archive.ubuntu.com (us.archive.ubuntu.com)... 2001:67c:1562::15, 2001:67c:1562::13, 2001:67c:1562::14, ...
        Connecting to us.archive.ubuntu.com (us.archive.ubuntu.com)|2001:67c:1562::15|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 1181334 (1.1M) [application/x-debian-package]
        Saving to: ‘bcmwl-kernel-source_5.100.82.112+bdcom-0ubuntu3_amd64.deb’

        100%[======================================>] 1,181,334   3.37MB/s   in 0.3s

        2013-10-22 18:50:33 (3.37 MB/s) - ‘bcmwl-kernel-source_5.100.82.112+bdcom-0ubuntu3_amd64.deb’ saved [1181334/1181334]

        talon@Black1015E:~$ arvh
        No command 'arvh' found, did you mean:
         Command 'arch' from package 'coreutils' (main)
        arvh: command not found
        talon@Black1015E:~$ arch
        x86_64
        talon@Black1015E:~$ sudo dpkg -i bcmwl*.deb
        Selecting previously unselected package bcmwl-kernel-source.
        (Reading database ... 171895 files and directories currently installed.)
        Unpacking bcmwl-kernel-source (from bcmwl-kernel-source_5.100.82.112+bdcom-0ubuntu3_amd64.deb) ...
        Setting up bcmwl-kernel-source (5.100.82.112+bdcom-0ubuntu3) ...
        Loading new bcmwl-5.100.82.112+bdcom DKMS files...
        First Installation: checking all kernels...
        Building only for 3.8.0-32-generic
        Building for architecture x86_64
        Building initial module for 3.8.0-32-generic
        Done.

        wl:
        Running module version sanity check.
         - Original module
           - No original module exists within this kernel
         - Installation
           - Installing to /lib/modules/3.8.0-32-generic/updates/dkms/
        depmod........
        DKMS: install completed.
        Error: Module b43 is not currently loaded
        Error: Module b43legacy is not currently loaded
        Error: Module ssb is not currently loaded
        Error: Module bcm43xx is not currently loaded
        Error: Module brcm80211 is not currently loaded
        Error: Module brcmfmac is not currently loaded
        update-initramfs: deferring update (trigger activated)
        Processing triggers for initramfs-tools ...
        update-initramfs: Generating /boot/initrd.img-3.8.0-32-generic

    rebooting now

    Read the article

  • Fun tips with Analytics

    - by user12620172
    If you read this blog, I am assuming you are at least familiar with the Analytics functions in the ZFSSA. They are basically amazing, very powerful and deep. However, you may not be aware of some great, hidden functions inside the Analytics screen. Now, I’m not going over every tool, as we have done that before, and you can hover your mouse over the toolbar buttons and they will tell you what they do. But check this out. Open a metric (CPU Percent Utilization works fine), and click on the “Hour” button, which is the 2nd clock icon. That’s easy: you are now looking at the last hour of data. Now, hold down your Shift key and click it again. Now you are looking at 2 hours of data. Hold down Shift and click it again, and you are looking at 3 hours of data. Are you catching on yet? You can do this not only with the ‘Hour’ button, but also with the ‘Minute’, ‘Day’, ‘Week’, and ‘Month’ buttons. Very cool. It also works with the ‘Show Minimum’ and ‘Show Maximum’ buttons, allowing you to go to the next iteration of either of those. One last button you can Shift-click is the handy ‘Drill’ button. This button usually drills down on one specific aspect of your metric. If you Shift-click it, it will display a “Rainbow Highlight” of the current metric. This works best if the metric has many ‘Range Average’ items in the left-hand window. Give it a shot.

    Also, one will sometimes click on a certain second of data in the graph. In one example, I clicked 4:57 and 21 seconds, and the 'Range Average' on the left went away and was replaced by the time stamp. It seems at this point to some people that you are now stuck and cannot get back to an average for the whole chart. However, you can actually click on the actual time stamp of "4:57:21" right above the chart. Even though your mouse does not change into the typical browser finger that most links show, you can click it, and it will change your range back to the full metric.

    Another trick you may like is to save a certain view or look of a group of graphs. Most of you know you can save a worksheet, but did you know you could Sync them, Pause them, and then Save it? This will save the paused state, allowing you to view it forever the way you see it now.

    Heatmaps. Heatmaps are cool; some metrics use them and some don't. If you have one and wish to zoom it vertically, try this: open a heatmap metric (I believe every metric that deals with latency will show as a heatmap), select one or two of the ranges on the left, and click the "Change Outlier Elimination" button. Click it again and check out what it does.

    Enjoy. Perhaps my next blog entry will be the best Analytics metrics to keep your eyes on, and how you can use the Alerts feature to watch them for you. Steve

    Read the article

  • Webcast On-Demand: Building Java EE Apps That Scale

    - by jeckels
    With some awesome work by one of our architects, Randy Stafford, we recently completed a webcast on scaling Java EE apps efficiently. Did you miss it? No problem. We have a replay available on-demand for you. Just hit the '+' sign drop-down for access. Topics include:

    - Domain object caching
    - Service response caching
    - Session state caching
    - JSR-107
    - HotCache and more!

    Further, we had several interesting questions asked by our audience, and we thought we'd share a sampling of those here for you, just in case you had the same queries yourself. Enjoy!

    What is the largest Coherence deployment out there? We have seen deployments with over 500 JVMs in the Coherence cluster, and deployments with over 1000 JVMs using the Coherence jar file, in one system. On the management side there is an ecosystem of monitoring tools from Oracle and third parties, with dashboards graphing values from Coherence's JMX instrumentation. For lifecycle management we have seen a lot of custom scripting over the years, but we've also integrated closely with WebLogic to leverage its management ecosystem for deploying Coherence-based applications and managing process life cycles. That integration introduces a new Java EE archive type, the Grid Archive or GAR, which embeds in an EAR and can be seen by a WAR in WebLogic. That integration also doesn't require any extra WebLogic licensing if Coherence is licensed.

    How is Coherence different from a NoSQL database like MongoDB? Coherence can be considered a NoSQL technology. It pre-dates the NoSQL movement, having been first released in 2001, whereas the term "NoSQL" was coined in 2009. Coherence has a primarily key-value data model but can also be used for document data models. Coherence manages data in memory currently, though disk persistence is in a future release currently in beta testing. Where the data is managed yields a few differences from the most well-known NoSQL products: access latency is faster with Coherence, though well-known NoSQL databases can manage more data. Coherence also has features that well-known NoSQL databases lack, such as grid computing, eventing, and data source integration. Finally, Coherence has had 15 years of maturation and hardening from usage in mission-critical systems across a variety of industries, particularly financial services.

    Can I use Coherence for local caching? Yes, and you get additional features beyond just a java.util.Map: you get expiration capabilities, size-limitation capabilities, eventing capabilities, etc.

    Are there APIs available for GoldenGate HotCache? It's mostly a black box: you configure it, and it just puts objects into your caches. However, you can treat it as a glass box and use Coherence event interceptors to enhance its behavior, and there are use cases for that.

    Are Coherence caches updated transactionally? Coherence provides several mechanisms for concurrency control. If a project insists on full-blown JTA / XA distributed transactions, Coherence caches can participate as resources, but nobody does that because it's a performance and scalability anti-pattern. At finer granularity, Coherence guarantees strict ordering of all operations (reads and writes) against a single cache key if the operations are done using Coherence's "EntryProcessor" feature. And Coherence has a unique feature called "partition-level transactions" which guarantees atomic writes of multiple cache entries (even in different caches) without requiring JTA / XA distributed transaction semantics.
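
    To make the EntryProcessor point concrete, here is a minimal sketch (not from the webcast; the cache name, key, and counter value type are hypothetical) of an atomic read-modify-write against a single key, using the 3.x-era Coherence API:

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.util.InvocableMap;
        import com.tangosol.util.processor.AbstractProcessor;

        public class CounterIncrement extends AbstractProcessor {
            // Runs on the storage member that owns the key, so concurrent
            // invocations against the same key are strictly ordered.
            public Object process(InvocableMap.Entry entry) {
                Integer current = entry.isPresent() ? (Integer) entry.getValue() : Integer.valueOf(0);
                Integer next = Integer.valueOf(current.intValue() + 1);
                entry.setValue(next);
                return next;
            }

            public static void main(String[] args) {
                NamedCache cache = CacheFactory.getCache("counters"); // hypothetical cache name
                Object newValue = cache.invoke("page-hits", new CounterIncrement());
                System.out.println("counter is now " + newValue);
            }
        }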

    Read the article

  • high performance hibernate insert

    - by luke
    I am working on a latency-sensitive part of an application; basically I will receive a network event, transform the data, and then insert all the data into the DB. After profiling, I see that basically all my time is spent trying to save the data. Here is the code:

        private void insertAllData(Collection<Data> dataItems)
        {
            long start_time = System.currentTimeMillis();
            long save_time = 0;
            long commit_time = 0;
            Transaction tx = null;
            try
            {
                Session s = HibernateSessionFactory.getSession();
                s.setCacheMode(CacheMode.IGNORE);
                s.setFlushMode(FlushMode.NEVER);
                tx = s.beginTransaction();
                for(Data data : dataItems)
                {
                    s.saveOrUpdate(data);
                }
                save_time = System.currentTimeMillis();
                tx.commit();
                s.flush();
                s.clear();
            }
            catch(HibernateException ex)
            {
                if(tx != null)
                    tx.rollback();
            }
            commit_time = System.currentTimeMillis();
            System.out.println("Save: " + (save_time - start_time));
            System.out.println("Commit: " + (commit_time - save_time));
            System.out.println();
        }

    The size of the collection is always less than 20. Here is the timing data that I see:

        Save: 27
        Commit: 9

        Save: 27
        Commit: 9

        Save: 26
        Commit: 9

        Save: 36
        Commit: 9

        Save: 44
        Commit: 0

    This is confusing to me. I figured that the save should be quick and all the time should be spent on the commit, but clearly I'm wrong. I have also tried removing the transaction (it's not really necessary), but I saw worse times... I have set hibernate.jdbc.batch_size=20... I need this operation to be as fast as possible; ideally there would only be one round trip to the database. How can I do this?
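
    For comparison, here is a minimal plain-JDBC sketch of the kind of single-batch round trip the question is aiming for (the table, columns, and Data fields are hypothetical stand-ins for the question's Data type; Hibernate can approach similar behavior when hibernate.jdbc.batch_size actually takes effect and the ID generator doesn't force row-by-row inserts):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;
        import java.util.Collection;

        public class BatchInsert {
            // Hypothetical stand-in for the question's Data type.
            static final class Data {
                final long id; final String payload;
                Data(long id, String payload) { this.id = id; this.payload = payload; }
            }

            // Queues all rows into one PreparedStatement batch so the driver can
            // send the whole set to the server in (ideally) a single round trip.
            static void insertAll(Connection conn, Collection<Data> items) throws SQLException {
                String sql = "INSERT INTO data_item (id, payload) VALUES (?, ?)"; // hypothetical table
                conn.setAutoCommit(false);
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    for (Data d : items) {
                        ps.setLong(1, d.id);
                        ps.setString(2, d.payload);
                        ps.addBatch();
                    }
                    ps.executeBatch(); // one batched statement instead of N inserts
                    conn.commit();
                } catch (SQLException e) {
                    conn.rollback();
                    throw e;
                }
            }
        }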

    Read the article

  • Setting up a "cookieless domain" to improve site performance

    - by Django Reinhardt
    I was reading Google's documentation about improving site speed. One of their recommendations is serving static content (images, CSS, JS, etc.) from a "cookieless domain":

        Static content, such as images, JS and CSS files, don't need to be accompanied by cookies, as there is no user interaction with these resources. You can decrease request latency by serving static resources from a domain that doesn't serve cookies.

    Google then says that the best way to do this is to buy a new domain and point it at your current one:

        To reserve a cookieless domain for serving static content, register a new domain name and configure your DNS database with a CNAME record that points the new domain to your existing domain A record. Configure your web server to serve static resources from the new domain, and do not allow any cookies to be set anywhere on this domain. In your web pages, reference the domain name in the URLs for the static resources.

    This is pretty straightforward stuff, except for the bit where it says to "configure your web server to serve static resources from the new domain, and do not allow any cookies to be set anywhere on this domain". From what I've read, there's no setting in IIS that lets you say "serve static resources", so how do I prevent ASP.NET from setting cookies on this new domain? At present, even if I just request a .jpg from the new domain, it sets a cookie on my browser, even though our application's cookies are set on our old domain. For example, ASP.NET sets an ".ASPXANONYMOUS" cookie that (as far as I'm aware) we're not telling it to set. Apologies if this is a real newb question, I'm new at this! Thanks.

    Read the article

  • ARMv6 FIQ, acknowledge interrupt

    - by fastmonkeywheels
    I'm working with an i.MX35 ARMv6-core processor. I have interrupt 62 configured as a FIQ, with my handler installed and being called. My handler at the moment just toggles an output pin so I can test latency with a scope. With the code below, once I trigger the FIQ it continues forever, as fast as it can, apparently not being acknowledged. I'm triggering the FIQ by means of the Interrupt Force Register, so I'm assured that the source isn't triggering it this fast. If I disable interrupt 62 in the AVIC in my FIQ routine, the interrupt only triggers once. I have read the sections on the VIC Port in the ARM1136JF-S and ARM1136J-S Technical Reference Manual, and it covers the proper exit procedure. I'm only using one FIQ handler, so I have no need to branch. The line that I don't understand is:

        STR R0, [R8, #AckFinished]

    I'm not sure what AckFinished is supposed to be or what this command is supposed to do. My FIQ handler is below:

        ldr r9, IOMUX_ADDR12
        ldr r8, [r9]
        orr r8, #0x08       @ top LED
        str r8, [r9]        @ turn on LED
        bic r8, #0x08       @ top LED
        str r8, [r9]        @ turn off LED
        subs pc, r14, #4

        IOMUX_ADDR12: .word 0xFC2A4000  @ remapped IOMUX addr

    My handler returns just fine, and normal system operation resumes if I disable the interrupt after the first go; otherwise it triggers constantly and the system appears to hang. Do you think my assumption is right that the core isn't acknowledging the AVIC, or could there be another cause of this FIQ triggering?

    Read the article

  • How does real-time collaboration with multiple clients work in a system using operation transformation?

    - by Saikat Chakrabarti
    I just finished reading "High-Latency, Low-Bandwidth Windowing in the Jupiter Collaboration System", and I mostly followed everything until part 6: global consistency. This part describes how the system described in the paper can be extended to accommodate multiple clients connected to the server. However, the explanation is very short, and essentially says the system will work if the central server merely forwards client messages to all the other clients. I don't really understand how this works, though. What state vector would be sent in the message that is forwarded to all the other clients? Does the server maintain separate state vectors for each client? Does it maintain a separate copy of the widgets locally for each client? The simple example I can think of is this setup: imagine client A, a server, and client B, with client A and client B both connected to the server. To start, all three have the state object "ABCD". Then client A sends the message "insert character F at position 0" at the same time client B sends the message "insert character G at position 0" to the server. It seems like simply relaying client A's message to client B and vice versa doesn't actually handle this case. So what exactly does the server do?
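
    As a hedged illustration of the transform step itself (this is not the paper's code, and the tie-break favoring the lower site ID is one common convention, not necessarily Jupiter's), here is a minimal sketch of how the two concurrent inserts from the example can be transformed so both sites converge:

        // Minimal operational-transformation sketch for concurrent inserts.
        final class Insert {
            final int position;
            final char character;
            final int siteId; // used only to break ties deterministically

            Insert(int position, char character, int siteId) {
                this.position = position;
                this.character = character;
                this.siteId = siteId;
            }

            // Returns this op adjusted for the fact that 'other'
            // has already been applied to the document.
            Insert transformAgainst(Insert other) {
                boolean shift = other.position < position
                        || (other.position == position && other.siteId < siteId);
                return shift ? new Insert(position + 1, character, siteId) : this;
            }

            static String apply(String doc, Insert op) {
                return doc.substring(0, op.position) + op.character + doc.substring(op.position);
            }

            public static void main(String[] args) {
                String doc = "ABCD";
                Insert a = new Insert(0, 'F', 1); // client A's insert
                Insert b = new Insert(0, 'G', 2); // client B's concurrent insert
                // Site A applies its own op, then B's op transformed against it:
                String atA = apply(apply(doc, a), b.transformAgainst(a));
                // Site B applies its own op, then A's op transformed against it:
                String atB = apply(apply(doc, b), a.transformAgainst(b));
                System.out.println(atA + " == " + atB); // FGABCD == FGABCD
            }
        }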

    Read the article

  • moving audio over a local network using GStreamer

    - by James Turner
    I need to move real-time audio between two Linux machines, which are both running custom software (of mine) that builds on top of GStreamer. (The software already has other communication between the machines, over a separate TCP-based protocol - I mention this in case having reliable out-of-band data makes a difference to the solution.) The audio input will be a microphone / line-in on the sending machine, with normal audio output as the sink on the destination; alsasrc and alsasink are the most likely elements, though for testing I have been using audiotestsrc instead of a real microphone. GStreamer offers a multitude of ways to move data around over networks - RTP, RTSP, GDP payloading, UDP and TCP servers, clients and sockets, and so on. There are also many examples on the web of streaming both audio and video - but none of them seem to work for me in practice; either the destination pipeline fails to negotiate caps, or I hear a single packet and then the pipeline stalls, or the destination pipeline bails out immediately with no data available. In all cases, I'm testing on the command line with just gst-launch. No compression of the audio data is required - raw audio, or trivial WAV, uLaw or aLaw encoding is fine; what's more important is low-ish latency.
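
    As a hedged starting point (host, port, and audio format are placeholders, and this assumes the GStreamer 0.10-era RTP elements), one pipeline pair that commonly gets past the caps-negotiation problem is raw L16 audio over RTP, with the caps stated explicitly on the receiving udpsrc, since RTP caps cannot be negotiated across a UDP link. Sender:

        gst-launch audiotestsrc ! audioconvert ! \
            audio/x-raw-int,rate=44100,channels=1,width=16,depth=16,signed=true,endianness=4321 ! \
            rtpL16pay ! udpsink host=192.168.1.5 port=5000

    Receiver:

        gst-launch udpsrc port=5000 \
            caps="application/x-rtp,media=(string)audio,clock-rate=(int)44100,encoding-name=(string)L16,channels=(int)1" ! \
            rtpL16depay ! audioconvert ! alsasink

    The key detail is that the caps string on udpsrc must match what the payloader produces; without it, the depayloader has no way to know the clock rate or channel count, which matches the "fails to negotiate caps" symptom described above.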

    Read the article

  • Avoiding GC thrashing with WSE 3.0 MTOM service

    - by Leon Breedt
    For historical reasons, I have some WSE 3.0 web services that I cannot upgrade to WCF on the server side yet (it is also a substantial amount of work to do so). These web services are being used for file transfers from client to server, using MTOM encoding. This also cannot be changed in the short term, for reasons of compatibility. Secondly, they are being called from both Java and .NET, and therefore need to be cross-platform, hence MTOM. How it works is that an "upload" WebMethod is called by the client, sending up a chunk of data at a time, since files being transferred could potentially be gigabytes in size. However, due to not being able to control parts of the stack before the WebMethod is invoked, I cannot control the memory usage patterns of the web service. The problem I am running into is that for file sizes from 50MB or so onwards, performance is absolutely killed by GC, since it appears that WSE 3.0 buffers each chunk received from the client in a new byte[] array, and by the time we've done 50MB we're spending 20-30% of our time doing GC. I've played with various chunk sizes, from 16k to 2MB, with no great difference in results. Smaller chunks are killed by the latency involved with round-tripping, and larger chunks just postpone the slowdown until GC kicks in. Any bright ideas on cutting down on the garbage created by WSE? Can I plug into the pipeline somehow and jury-rig something that has access to the client's request stream and streams it to the WebMethod? I'm aware that it is possible to "stream" responses to the client using WSE (albeit very ugly), but this problem is with requests from the client.

    Read the article

  • 4.0/WCF: Best approach for a bi-directional message bus?

    - by TomTom
    Just a technology update, now that .NET 4.0 is out. I am writing an application that communicates with the server through what is basically a message bus (instead of method calls). This is based on the internal architecture of the application (which is multi-threaded, passing the messages around). There are a limited number of messages going from the client to the server, and quite a lot more from the server to the client. Most of those can be handled via a separate, specialized mechanism, but in the end we are talking about possibly 10-100 small messages per second going from the server to the client. The client is supposed to operate under "internet conditions". This means possibly home end users behind standard NAT devices (i.e. typical DSL routers) - a firewalled, secure and thus "open" network cannot be assumed. I want as little latency and as little overhead for the communication as possible. What is the technologically best way to handle the message bus callback? I have no problem regularly calling the server for message delivery if something needs to be sent... but what are my options for handling the messages from the server to the client? How does WsDualHttp work, especially under a NAT scenario? Just as a note: polling is most likely out - the main problem is that I would have either significant overhead or a significant delay, and neither is really wanted. Technically, I would love some sort of streaming approach, where the server can write messages to a stream as it generates them, and they get sent to the client as they come. I'm not sure this is doable with WCF, though (if not, I may actually decide to handle the whole message part outside of WCF and just do control / login / setup / destruction via WCF).

    Read the article

  • proper use of volatile keyword

    - by luke
    I think I have a pretty good idea about the volatile keyword in Java, but I'm thinking about refactoring some code and I thought it would be a good idea to use it. I have a class that basically works as a DB cache. It holds a bunch of objects that it has read from a database, serves requests for those objects, and occasionally refreshes from the database (based on a timeout). Here's the skeleton:

        public class Cache
        {
            private HashMap mappings = ....;
            private long last_update_time;

            private void loadMappingsFromDB()
            {
                //....
            }

            private void checkLoad()
            {
                if(System.currentTimeMillis() - last_update_time > TIMEOUT)
                    loadMappingsFromDB();
            }

            public Data get(ID id)
            {
                checkLoad();
                //.. look it up
            }
        }

    So the concern is that loadMappingsFromDB could be a high-latency operation, and that's not acceptable. Initially I thought I could spin up a thread on cache startup and have it sleep and then update the cache in the background. But then I would need to synchronize my class (or the map), and I would just be trading an occasional big pause for making every cache access slower. Then I thought: why not use volatile? I could define the map reference as volatile:

        private volatile HashMap mappings = ....;

    and then in get (or anywhere else that uses the mappings variable) I would just make a local copy of the reference:

        public Data get(ID id)
        {
            HashMap local = mappings;
            //.. look it up using local
        }

    and then the background thread would just load into a temporary map and swap the references in the class:

        HashMap tmp;
        // load tmp from DB
        mappings = tmp; // swap variables, forcing a write barrier

    Does this approach make sense? And is it actually thread-safe?

    Read the article

  • Optimizing quality for available bandwidth in Flash/RTMFP

    - by Artem M.
    I'm developing a simple one-on-one P2P video chat using ActionScript, and I'd like to ensure the best video quality for the peers given their bandwidth. This means:

    1. Setting the best quality, given the available bandwidth, when the chat starts.
    2. Responding to network congestion during the chat by decreasing the quality.

    The task is similar to dynamic stream switching, but P2P has its specifics that make dynamic streaming approaches unworkable. For example, the maxBytesPerSecond metric monitored in dynamic stream switching is pretty useless in P2P, where the receiving NetStream's buffer size is set to 0 to minimize latency. So far, it looks like the most reliable QoS metric for P2P is SRTT. In my simulated tests on a local network, it shoots up to 500 ms and more when a bandwidth limit is introduced. However, it gives no hint as to how best to adjust the value for bandwidth in Camera.setQuality(0, bandwidth) to respond to the congestion. I've done lots of experiments, and I still don't see a clear and simple solution to the problem. I'm also wondering how this issue is addressed (if at all) in other RTMFP chat solutions.

    Read the article

  • MsSql Server high Resource Waits and Head Blocker

    - by MartinHN
    Hi, I have an MS SQL Server 2008 Standard installation running the database for a webshop. The current size of the database is 2.5 GB. It is running on Windows 2008 Standard, with dual Intel Xeon X5355 @ 2.00 GHz and 4 GB RAM. When I open the Activity Monitor, I see a Wait Time (ms/sec) of 5000 in the "Other" category. In the Processes list, for all connections from the webshop, the Head Blocker value is 1. I see every day that when I try to access the website, it can take 20-30 secs before it even starts to "work". I know that it is not network latency (I have a 301 redirect from the same server that is executed instantly). Once the first request has been served, it seems as if it's no longer asleep, and every subsequent request is served instantly. The problem was worse two weeks ago, until I changed every query to include WITH (NOLOCK). But I still experience the problem, and the wait times in the Activity Monitor are about the same. The largest table (Images) has 32764 rows (448576 KB). Some tables exceed 300000 rows, though they're much smaller in size than the Images table. I have the default clustered index for every primary key column, only. Any ideas?

    Read the article

  • 1k of Program Space, 64 bytes of RAM. Is assembly an absolute must?

    - by Earlz
    (If you're lazy, see the bottom for the TL;DR.)

    Hello, I am planning to build a new (prototype) project dealing with physical computing. Basically, I have wires. These wires all need to have their voltage read at the same time. More than a few hundred microseconds' difference between the readings of each wire will completely screw it up. The Arduino takes about 114 microseconds per reading, so the most I could read is 2 or 3 wires before the latency would skew the accuracy of the readings.

    So my plan is to have an Arduino as the "master" of an array of ATTinys. The Arduino is pretty cramped for space, but it's a massive playground compared to the tinys. An ATTiny13A has 1k of flash ROM (program space), 64 bytes of RAM, and 64 bytes of (not-durable and slow) EEPROM. (I'm choosing this for price as well as size.) The ATTinys in my system will not do much. Basically, all they will do is wait for a signal from the master, read the voltage of 1 or 2 wires and store it in RAM (or possibly EEPROM if it's that cramped), and then send it to the master using only 1 wire for data (no room for more than that!).

    So far, then, all I should have to do is implement trivial voltage-reading code (using the built-in ADC). But it's the communication bit I'm worried about. Do you think a communication protocol (using just 1 wire!) could even be implemented in such constraints?

    TL;DR: In less than 1k of program space and 64 bytes of RAM (and 64 bytes of EEPROM), do you think it is possible to implement a 1-wire communication protocol? Would I need to drop to assembly to make it fit? I know that currently my Arduino programs linking to the Wiring library are over 8k, so I'm a bit concerned.

    Read the article

  • Netty options for real-time distribution of small messages to a large number of clients?

    - by user439407
    I am designing a (near) real-time Netty server to distribute a large number of very small messages to a large number of clients across the internet. In internal, "go as fast as you can" testing, I found that I could do 10k clients no sweat, but now that we are trying to go across the internet, where the latency, bandwidth, etc. vary pretty wildly, we are running into the dreaded OutOfMemory issues, even with 2 gigs of RAM. I have tried various workarounds (setting the socket stack sizes smaller, setting high and low water marks, cancelling things that are too old), and they help a little, but only a little. What would be some good ways to optimize Netty for sending large numbers of small messages without significant delays? Also, the bulk of the traffic consists of one kind of message that I don't particularly care about losing. I would use UDP, but because we don't control the clients, that's not really a possibility. Is it possible to set a separate timeout solely for this kind of message without affecting the other messages? Any insight you could offer would be greatly appreciated.
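
    One hedged sketch of the usual Netty 3-era backpressure pattern (the class and message names here are hypothetical): pair the high/low water marks mentioned above with a writability check, so droppable messages are simply skipped for clients whose outbound buffers are already full, instead of being queued until the heap blows up:

        import org.jboss.netty.channel.Channel;

        public class DroppableBroadcaster {
            // Writes a low-value update only if the channel's outbound buffer is
            // below its configured high water mark; slow clients silently miss
            // these updates instead of growing the heap toward OutOfMemoryError.
            // (The water marks themselves live in the NIO channel config, e.g.
            // writeBufferHighWaterMark / writeBufferLowWaterMark.)
            public void send(Channel channel, Object droppableMsg) {
                if (channel.isConnected() && channel.isWritable()) {
                    channel.write(droppableMsg);
                }
                // else: drop it; the next update will supersede it anyway
            }
        }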

    Read the article

  • Designing code for a duo Website + AIR desktop App

    - by faB
    I want to use AIR to create an OFFLINE version of a "webapp" kind of website (lots of ajax, front-end code). Not having gotten much further than the HelloWorld example, I keep wondering: how do you design your code to maximize code reuse between the website (say in PHP, Java or .NET) and the AIR app? Can you actually re-use 100% of the front-end code, provided that it is designed to account for the AIR app? How would you go about doing that? For example, the website makes many ajax calls, which have latency, and uses listeners. The AIR app doesn't need listeners; it could run database requests synchronously, and it doesn't need to make ajax calls, right? Would you write an abstraction layer for that? So that the same calls in the AIR app will not do an xmlhttp request but instead run the server-side logic within AIR, and then call the listener? So you don't have to rewrite the front-end code patterns? Does this make sense? It's really hard to search for on Google. I'm thinking there must be a good article somewhere by somebody who has been through this, and perhaps a framework for doing it?

    Read the article

  • Java: Efficiency of the readLine method of the BufferedReader and possible alternatives

    - by Luhar
    We are working to reduce the latency and increase the performance of a process written in Java that consumes data (XML strings) from a socket via the readLine() method of the BufferedReader class. The data is delimited by the end-of-line separator (\n), and each line can be of variable length (6Kbit - 32Kbit). Our code looks like:

        Socket sock = connection;
        InputStream in = sock.getInputStream();
        BufferedReader inputReader = new BufferedReader(new InputStreamReader(in));
        ...
        do
        {
            String input = inputReader.readLine();
            // Executor call to parse the input in a separate thread
        } while(true);

    So I have a couple of questions:

    - Will the inputReader.readLine() method return as soon as it hits the \n character, or will it wait till the buffer is full?
    - Is there a faster way of picking up data from the socket than using a BufferedReader?
    - What happens when the size of the input string is smaller than the size of the socket's receive buffer?
    - What happens when the size of the input string is bigger than the size of the socket's receive buffer?

    I am getting to grips (slowly) with Java's IO libraries, so any pointers are much appreciated. Thank you!
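
    For what it's worth, here is a hedged, self-contained way to observe the behavior behind the first question (this is an illustration, not the poster's code): feed a BufferedReader through a pipe and watch readLine() return as soon as the '\n' arrives, long before its internal buffer (8192 chars by default) could fill:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.io.PipedInputStream;
        import java.io.PipedOutputStream;

        public class ReadLineDemo {
            public static void main(String[] args) throws IOException {
                final PipedOutputStream out = new PipedOutputStream();
                PipedInputStream in = new PipedInputStream(out);
                BufferedReader reader = new BufferedReader(new InputStreamReader(in));

                Thread writer = new Thread(new Runnable() {
                    public void run() {
                        try {
                            Thread.sleep(100);               // let readLine() block first
                            out.write("hello\n".getBytes()); // 6 bytes, nowhere near 8K
                            out.flush();
                        } catch (Exception ignored) {}
                    }
                });
                writer.start();

                // Blocks only until the newline is seen, then returns immediately.
                String line = reader.readLine();
                System.out.println("readLine() returned \"" + line + "\" as soon as the newline arrived");
            }
        }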

    Read the article

  • Which are the RDBMS that minimize the server roundtrips? Which RDBMS are better (in this area) than MS SQL?

    - by user193655
    When the latency is high ("when pinging the server takes time"), the server roundtrips make the difference. Now I don't want to focus on the roundtrips created in programming, but on the roundtrips that occur "under the hood" in the DB engine, i.e. the roundtrips that are 100% dependent on how the RDBMS itself is written. I have been told that Firebird has more roundtrips than MySQL, but this is the only information I have. I am currently supporting MS SQL but I'd like to change RDBMS (because I use the Express Editions, and in my scenario they are quite limiting from a performance point of view), so to make a wise choice I would like to include this point in my "RDBMS comparison feature matrix" to understand which is the best RDBMS to choose as an alternative to MS SQL. So the bold sentence above would make me prefer MySQL to Firebird (for the roundtrips aspect, not in general), but can anyone add information? And where does MS SQL rank? Is someone able to "rank" the roundtrip performance of the main RDBMSs, or at least: MS SQL, MySQL, PostgreSQL, Firebird? (I am not interested in Oracle since it is not free, and if I have to change I would change to a free RDBMS.) Anyway, MySQL (as mentioned several times on Stack Overflow) has an unclear future and a not-100%-free license, so my final choice will probably fall on PostgreSQL or Firebird. Additional info: you could answer my question by making a simple list like: MSSQL: 3; MySQL: 1; Firebird: 2; PostgreSQL: 2 (where 1 is good, 2 average, 3 bad). Of course, if you can post some links where the roundtrips per RDBMS are compared, that would be great.

    Read the article

  • multiple ajax requests with jquery

    - by Emil
    I've got problems with the async nature of JavaScript / jQuery. Let's say the following (no latency is accounted for, in order to keep it simple): I have three buttons (A, B, C) on a page; each of the buttons adds an item to a shopping cart with one ajax request each. If I put an intentional delay of 5 seconds in the server-side script (PHP) and push the buttons 1 second apart, I want the result to be the following:

        Request A, 5 seconds
        Request B, 6 seconds
        Request C, 7 seconds

    However, the result is like this:

        Request A, 5 seconds
        Request B, 10 seconds
        Request C, 15 seconds

    This has to mean that the requests are queued and not run simultaneously, right? Isn't this the opposite of what async means? I also tried adding a random GET parameter to the URL in order to force some uniqueness on the request; no luck, though. I did read a little about this: if you avoid using the same request object (?), this problem won't occur. Is it possible to force this behaviour in jQuery? This is the code that I am using:

        $.ajax(
        {
            url : strAjaxUrl + '?random=' + Math.floor(Math.random()*9999999999),
            data : 'ajax=add-to-cart&product=' + product,
            type : 'GET',
            success : function(responseData)
            {
                // update ui
            },
            error : function(responseData)
            {
                // show error
            }
        });

    I also tried both GET and POST; no difference. I want the requests to be sent right when the button is clicked, not when the previous request is finished. I want the requests to be run simultaneously, not in a queue.

    Read the article
