Search Results

Search found 5919 results on 237 pages for 'io priority'.


  • Writing the tests for FluentPath

    Writing the tests for FluentPath is a challenge. The library is a wrapper around a legacy API (System.IO) that wasn't designed to be easily testable. If it were more testable, the sensible testing methodology would be to tell System.IO to act against a mock file system, which would let me verify that my code performs the expected file system operations without having to manipulate the actual, physical file system: what we are testing here is FluentPath, not System.IO. Unfortunately, that...
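
    The testing methodology the excerpt describes — pointing the code under test at a mock file system — is the classic dependency-inversion pattern. A minimal Java sketch of the general idea (all names here are illustrative; FluentPath is a .NET library and this is not its API):

        // Hide the file system behind an interface so tests can substitute a fake.
        interface FileSystem {
            void writeAllText(String path, String contents);
            String readAllText(String path);
            boolean exists(String path);
        }

        // In-memory fake used by tests: no physical I/O ever happens.
        class InMemoryFileSystem implements FileSystem {
            private final java.util.Map<String, String> files = new java.util.HashMap<>();
            public void writeAllText(String path, String contents) { files.put(path, contents); }
            public String readAllText(String path) { return files.get(path); }
            public boolean exists(String path) { return files.containsKey(path); }
        }

        // Code under test depends on the interface, never on java.io directly,
        // so a test can assert on the fake's state instead of touching disk.
        class ReportWriter {
            private final FileSystem fs;
            ReportWriter(FileSystem fs) { this.fs = fs; }
            void save(String path, String report) { fs.writeAllText(path, report); }
        }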

    Read the article

  • QoS for Cisco Router to Prioritize Voice and Interactive Traffic

    - by TJ Huffington
    I have a Cisco 891W NATing voice and data to the internet over a 10mbit/2mbit connection. Voice traffic gets degraded when I upload large files, and pings time out as well. I tried to configure a QoS policy, but it's basically not doing anything: voice traffic still degrades when upload bandwidth gets saturated. Here is my current configuration:

        class-map match-any QoS-Transactional
         match protocol ssh
         match protocol xwindows
        class-map match-any QoS-Voice
         match protocol rtp audio
        class-map match-any QoS-Bulk
         match protocol secure-nntp
         match protocol smtp
         match protocol tftp
         match protocol ftp
        class-map match-any QoS-Management
         match protocol snmp
         match protocol dns
         match protocol secure-imap
        class-map match-any QoS-Inter-Video
         match protocol rtp video
        class-map match-any QoS-Voice-Control
         match access-group name Voice-Control
        policy-map QoS-Priority-Output
         class QoS-Voice
          priority percent 25
          set dscp ef
         class QoS-Inter-Video
          bandwidth remaining percent 10
          set dscp af41
         class QoS-Transactional
          bandwidth remaining percent 25
          random-detect dscp-based
          set dscp af21
         class QoS-Bulk
          bandwidth remaining percent 5
          random-detect dscp-based
          set dscp af11
         class QoS-Management
          bandwidth remaining percent 1
          set dscp cs2
         class QoS-Voice-Control
          priority percent 5
          set dscp ef
         class class-default
          fair-queue
        interface FastEthernet8
         bandwidth 1024
         bandwidth receive 20480
         ip address dhcp
         ip nat outside
         ip virtual-reassembly
         duplex auto
         speed auto
         auto discovery qos
         crypto map mymap
         max-reserved-bandwidth 80
         service-policy output QoS-Priority-Output
        crypto map mymap 10 ipsec-isakmp
         set peer 1.2.3.4 default
         set transform-set ESP-3DES-SHA
         match address 110
         qos pre-classify
        !

    fa8 is my connection to the internet. Voice traffic goes over a VPN ("mymap") to the SIP server; that's why I specified "qos pre-classify", which I believe is the way to classify traffic over the VPN. However, even when I ping a public IP while saturating upload bandwidth, the latency is exceptionally high. Is this configuration correct? Are there any suggestions that might make this work for my setup? Thanks in advance.

    Read the article

  • /etc/security/limits.conf for setting program limits in Linux

    - by Flavius Akerele
    I have the following inside /etc/security/limits.conf (I have specified root separately because * will not include it):

        user2 - core      unlimited
        *     - core      0
        root  - core      0
        *     - rss       512000
        root  - rss       512000
        *     - nproc     100
        root  - nproc     100
        *     - maxlogins 1
        root  - maxlogins 1

    I run a program as user2 (./programname), but /proc/3498/limits says cores are disabled:

        Limit                     Soft Limit   Hard Limit   Units
        Max cpu time              unlimited    unlimited    seconds
        Max file size             unlimited    unlimited    bytes
        Max data size             unlimited    unlimited    bytes
        Max stack size            8388608      unlimited    bytes
        Max core file size        0            0            bytes
        Max resident set          524288000    524288000    bytes
        Max processes             100          100          processes
        Max open files            1024         1024         files
        Max locked memory         65536        65536        bytes
        Max address space         unlimited    unlimited    bytes
        Max file locks            unlimited    unlimited    locks
        Max pending signals       14001        14001        signals
        Max msgqueue size         819200       819200       bytes
        Max nice priority         0            0
        Max realtime priority     0            0
        Max realtime timeout      unlimited    unlimited    us

    Both ulimit -Sa and ulimit -Ha report that cores are disabled:

        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 14001
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) 512000
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) unlimited
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 100
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

    Why are cores disabled?

    Read the article

  • CLSF & CLK 2013 Trip Report by Jeff Liu

    - by jamesmorris
    This is a contributed post from Jeff Liu, lead XFS developer for the Oracle mainline Linux kernel team. Recently, I attended both the China Linux Storage and Filesystem workshop (CLSF) and the China Linux Kernel conference (CLK), which were held in Shanghai. Here are the highlights of both events.

    CLSF - 17th October

    XFS update (led by Jeff Liu)

    XFS keeps making rapid progress with a lot of changes, especially focused on infrastructure/performance improvements as well as new feature development. This is reflected in a sample statistic comparing XFS, Ext4+JBD2, and Btrfs:

        # git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/xfs|fs/ext4+fs/jbd2|fs/btrfs
        XFS:       141 files changed, 27598 insertions(+), 19113 deletions(-)
        Ext4+JBD2:  39 files changed, 10487 insertions(+),  5454 deletions(-)
        Btrfs:      70 files changed, 19875 insertions(+),  8130 deletions(-)

    What made up those changes in XFS?

    - Self-describing metadata (CRC32c). This new feature contributed about 70% of the code changes; it can be enabled via `mkfs.xfs -m crc=1 /dev/xxx` for the v5 superblock.
    - Transaction log space reservation improvements. With this change, the log space reservation can be calculated at mount time rather than at runtime, reducing CPU overhead.
    - User namespace support, so both XFS and USERNS can be enabled in the kernel configuration beginning with Linux 3.10. Thanks to Dwight Engen for his efforts on this.
    - Split project/group quota inodes. Originally, project quota could not be enabled together with group quota because they shared the same quota file inode; now it works, but only for the v5 superblock, i.e., with CRC enabled.
    - CONFIG_XFS_WARN, a new lightweight runtime debugger that can be deployed in production environments.
    - Readahead during log object recovery, which speeds up log replay significantly.
    - Speculative preallocation inode tracking, clearing, and throttling. The main purpose is to deal with inodes that have post-EOF space due to speculative preallocation, and to support improved quota management that frees up a significant amount of unwritten space at or near EDQUOT. It supports background scanning, which occurs on a longish interval (5 minutes by default, tunable), and on-demand scanning/trimming via ioctl(2).

    Bitter arguments ensued from this session, especially around the comparison between Ext4 and Btrfs in different areas; I had to spend a whole morning of the first day answering those questions. We basically agreed that XFS is the best choice on Linux nowadays, because:

    - Stability: XFS has a good stability record over the past 10 years. Fengguang Wu, who leads the 0-day kernel test project, also said that he has observed fewer errors than in other filesystems over the past year or more. I owe this to the XFS upstream code reviewers, who always perform serious code review as well as testing.
    - Good performance for large and small files: "XFS does not work well for small files" has been an outdated story for years.
    - Best choice (maybe) for distributed PB-scale filesystems: e.g., Ceph recommends deploying OSD daemons on XFS because Ext4 has a limited xattr size.
    - Best choice for large storage (>16TB): Ext4 does not support a single file larger than about 15.95TB.
    - Scalability: any objection to XFS being best on this point? :)
    - XFS deals with transaction concurrency better than Ext4. Why? The maximum size of the log in XFS is 2038MB, compared to 128MB in Ext4.
    - Misc: Ext4 is widely used and has proved fast and stable under various loads and scenarios; XFS just needs more customers, and Btrfs is still on the road to maturity.
    Ceph Introduction (led by Li Wang)

    This was a hot topic. Li gave us a nice introduction to the design as well as their current work. The Ceph client has actually been included in the Linux kernel since 2.6.34, and has been supported by OpenStack since Folsom, but it seems it has not yet been widely deployed in production environments. Their major work focuses on inline data support. Ceph separates metadata and data storage, so a file access needs two round trips: fetch the metadata from the MDS, then get the data from an OSD; small-file access is therefore limited by network latency. The solution is to store the data of small files together with the metadata, so that when a small file is accessed, the metadata server can push both metadata and data to the client at the same time. In this way they reduce the overhead of calculating the data offset and save the communication with the OSD. For this feature they have only run some small-scale testing, but really saw noticeable improvements. Test environment: Intel 2-CPU, 12-core machine, 64GB RAM, Ubuntu 12.04, Ceph 0.56.6 with a 200GB SATA disk, 15 OSDs, 1 MDS, 1 MON. The sequential read performance for 1K-size files improved by about 50%. I asked Li and Zheng Yan (the core developer of Ceph, who also worked on Btrfs) whether Ceph is really stable and can be deployed in production for large-scale, PB-level storage, but they could not give a positive answer; it looks like Ceph is not even rolled out across Dreamhost (subject to confirmation). According to Li, they have only deployed Ceph for small-scale storage (32 nodes), although they would like to try 6000 nodes in the future.

    Improve Linux swap for flash storage (led by Shaohua Li)

    Because of its high density, low power, and low price, flash storage (SSD) is a good candidate to partially replace DRAM. A quick answer for this is using SSD as swap. But Linux swap is designed for slow hard-disk storage, so there are a lot of challenges to using SSD efficiently for swap.

    Swap-out:

    - swap_map scan: swap_map is the in-memory data structure that tracks swap disk usage, but it is a slow linear scan. It becomes a bottleneck when finding many adjacent pages on SSD. Shaohua Li changed it to a cluster (128K) list, resulting in an O(1) algorithm. However, this approach needs restrictive cluster alignment and is only enabled for SSD.
    - IO pattern: in most cases the swap I/O has an interleaved pattern, because of multiple reclaimers or because a free cluster is shared by all reclaimers. The block layer can merge interleaved I/O to some extent, but we cannot count on it completely. Hence a per-CPU cluster was added on top of the previous change; it helps a reclaimer do sequential I/O, and the block layer can merge the I/O more easily.
    - TLB flush: if we are reclaiming an active page, we first move the page from the active LRU list to the inactive LRU list, and then reclaim the page from the inactive LRU to swap it out. During this process we need to clear the PTE twice: first the 'A' (ACCESSED) bit, then the 'P' (PRESENT) bit. Processors need to send lots of IPIs, which makes the TLB flush really expensive. Some work has been done to improve this, including reworking smp_call_function_many() or removing the first TLB flush on x86, but there are still some arguments, and only part of the work has been pushed to mainline.

    Swap-in:

    - Page fault does iodepth=1 synchronous I/O, which is a bit wasteful if it only issues a page-sized I/O. The obvious solution is swap readahead.
      But the current in-kernel swap readahead is arbitrary (always 8 pages), and it performs poorly for both random and sequential access workloads. Shaohua introduced a new madvise(MADV_WILLNEED) flag to do swap prefetch, so the change lives in the userspace API and leaves the in-kernel readahead unchanged (though I think some improvement could also be done there).

    Swap discard: as we know, discard is important for SSD write throughput, but the current swap discard implementation is synchronous. He changed it to an asynchronous discard, which allows discard and write to run at the same time. Meanwhile, the unit of discard was also optimized to a cluster.

    Misc: lock contention. With many concurrent swap-outs and swap-ins, contention on locks such as anon_vma or swap_lock is high, so he changed swap_lock to a per-swap-device lock. There is still some lock contention on very fast SSDs, though, because of the swap-cache address_space lock.

    Zproject (led by Bob Liu)

    Bob gave us a very nice introduction to the current memory compression status. There are now three projects (zswap/zram/zcache) that all aim to smooth out swap I/O storms and improve performance, but each has its own pros and cons.

    - ZSWAP is implemented on top of the frontswap API and uses a dynamic allocator named zbud to allocate free pages. "Zbud" means pairs of zpages are "buddied": it can store at most two compressed pages in one page frame, so the maximum compression ratio is 50%. Each page frame is LRU-linked and can be shrunk under memory pressure. If the compressed memory pool reaches its limit, shrinking or reclaim happens: a page frame is decompressed into two newly allocated pages, which are then written to the real swap device, but this can fail when allocating the two pages.
    - ZRAM acts as a compressed ramdisk used as a swap device, and it uses zsmalloc as its allocator, which has high density but may have fragmentation issues. Besides, page reclaim is hard, since more pages are needed to uncompress and free just one page. ZRAM is preferred by embedded systems, which may not have any real swap device. Both ZRAM and ZSWAP are currently in the drivers/staging tree, and in the mm community there have been discussions about merging ZRAM into ZSWAP or vice versa, but no agreement yet.
    - ZCACHE handles file page compression, but it was removed from staging recently.

    From industry (led by Tang Jie, LSI)

    An LSI engineer introduced several new products to us. The first was RAID 5/6 cards that use full-stripe writes to improve performance. The second was the SandForce flash controller, which can understand data file types (data entropy) to reduce write amplification (WA) for nearly all writes. It's called DuraWrite, and a typical WA is 0.5. What's more, if its Dynamic Logical Capacity function module is enabled, the controller can do data compression transparently to the upper layers. LSI testing shows that with this virtual capacity enabled, a 1x TB drive can support up to 2x TB of capacity, but the application must monitor free flash space to maintain optimal performance and guard against free-flash-space exhaustion. He said the most useful application is for databases. Another thing worth mentioning is an NV-DRAM memory in NMR/Raptor that is directly exposed to the host system: applications can access the NV-DRAM directly through a memory address, using the standard mmap() system call. He said this is very useful for database logging today.
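
    As an aside, the mmap() access model described above is also reachable from managed runtimes. A hedged Java sketch using FileChannel.map (the backing path here is hypothetical; a real NV-DRAM device would be exposed by its own driver):

        import java.io.RandomAccessFile;
        import java.nio.MappedByteBuffer;
        import java.nio.channels.FileChannel;

        public class MmapLogDemo {
            public static void main(String[] args) throws Exception {
                // Map 4 KiB of a file directly into the process address space;
                // subsequent reads/writes go through memory, not read()/write().
                try (RandomAccessFile raf = new RandomAccessFile("/tmp/nvlog.img", "rw");
                     FileChannel ch = raf.getChannel()) {
                    MappedByteBuffer log = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
                    log.putLong(0, System.nanoTime()); // write a log record header in place
                    log.force();                       // flush the mapped region to the device
                }
            }
        }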
    This kind of NVM product has begun to appear in recent years, and it is said that Samsung is building a research center in China for related products. IMHO, NVM will affect the current OS layers, especially the file system; e.g., journaling may need a redesign to fully utilize such nonvolatile memory.

    OCFS2 (led by Canquan Shen)

    Without a doubt, Huawei has been the biggest contributor to OCFS2 in the past two years. They have posted 46 upstream patches, and 39 patches have been merged. Their current project is based on 32/64-node clusters, but they have also tried 128 nodes at the experimental stage. The major work in progress is support for ATS (atomic test and set), which can work together with DLM. This idea looks inspired by VMware's VMFS locking; see http://blogs.vmware.com/vsphere/2012/05/vmfs-locking-uncovered.html

    CLK - 18th October 2013

    Improving Linux Development with Better Tools (Andi Kleen)

    This talk focused on how to find and fix bugs as Linux's complexity keeps growing. Generally, we can do this with the following kinds of tools:

    - Static code checkers, e.g. sparse, smatch, coccinelle, the clang checker, checkpatch, gcc -W/LTO, stanse. These can check a lot of things, from simple mistakes to complex problems, but the challenges are: some are very slow; false positives; and it may take a concentrated effort to get the false-positive rate down. In particular, no static checker he found can follow indirect calls ("OO in C", common in the kernel):

        struct foo_ops {
            int (*do_foo)(struct foo *obj);
        };
        foo->do_foo(foo);

    - Dynamic runtime checkers, e.g. thread checkers, kmemcheck, lockdep. Ideally all kernel code would come with a test suite; then someone could run all the dynamic checkers.
    - Fuzzers/test suites. E.g. Trinity is a great tool; it finds many bugs, but needs a manual model for each syscall. Modern fuzzers use automatic feedback, but not yet for the kernel: http://taviso.decsystem.org/making_software_dumber.pdf
    - Debuggers/tracers to understand code, e.g. ftrace, which can dump on events/oops/custom triggers, but in many cases there is still too much overhead to run it at all times during debugging.
    - Tools to read and understand source, e.g. grep/cscope, which work great for many cases but do not understand indirect pointers (the OO-in-C model used in the kernel). Give us all "do_foo" instances:

        struct foo_ops {
            int (*do_foo)(struct foo *obj);
        } = { .do_foo = my_foo };
        foo->do_foo(foo);

      It would be great to have a cscope-like tool that understands this based on types/initializers.

    XFS: The High Performance Enterprise File System (Jeff Liu) [slides]

    I gave a talk introducing the disk layout and unique features, as well as the recent changes. The slides include some charts comparing the small-file performance of XFS, Btrfs, and Ext4. About a dozen users raised their hands when I asked who had experience with XFS. I remember that when I asked the same question at LinuxCon Japan, only 3 people raised their hands, but they were Chris Mason, Ric Wheeler, and one other attendee. The attendee questions mainly focused on stability and comparisons with other file systems.

    Linux Containers (Feng Gao)

    The speaker introduced the purpose of the various kinds of namespaces, including mount/UTS/IPC/Network/Pid/User, as well as the system API/ABI. For the userspace tools, he mainly focused on libvirt LXC rather than LXC itself.
    Libvirt LXC is another userspace container management tool, implemented as one type of libvirt driver. It can manage containers, create namespaces, create a private filesystem layout for a container, create devices for a container, and set up resource controllers via cgroups. In this talk, Feng also mentioned two possible new namespaces for the future: the first is an audit namespace, though it is not yet clear whether it should be assigned to the user namespace; the other is a syslog namespace, but the question is, do we really need it?

    In-memory Compression (Bob Liu)

    Same as at CLSF, a nice introduction that I have already covered above.

    Misc

    There were some other talks related to ACPI-based memory hotplug, smart wake-affinity in the scheduler, etc., but my head is not big enough to record all those things.

    -- Jeff Liu

    Read the article

  • Stale statistics on a newly created temporary table in a stored procedure can lead to poor performance

    - by sqlworkshops
    When you create a temporary table, you expect a new table with no past history (statistics based on past existence), but this is not true if you have fewer than 6 updates to the temporary table. This can lead to poor performance of queries that are sensitive to the contents of temporary tables.

    I was optimizing SQL Server performance at one of my customers who provides search functionality on their website. They use a stored procedure with a temporary table for the search. The performance of the search depended on who searched what in the past; option (recompile) by itself had no effect. Sometimes a simple search led to a timeout because of non-optimal plan usage due to this behavior. This is not a plan caching issue, but rather a temporary table statistics caching issue, which is part of the temporary object caching feature introduced in SQL Server 2005 and also present in SQL Server 2008 and SQL Server 2012. In this customer's case we implemented a workaround to avoid the issue (see below for example workarounds).

    When temporary tables are cached, the statistics are not newly created but rather cached from the past and updated based on the automatic update statistics threshold. Caching temporary tables/objects is good for performance, but caching stale statistics from the past is not optimal. We can work around this issue by disabling temporary table caching, which is done by explicitly executing a DDL statement on the temporary table. One possibility is to execute an alter table statement, but this can lead to a duplicate constraint name error on concurrent stored procedure execution. The other way to work around it is to create an index.

    I think there might be many customers in such a situation without knowing that stale statistics are being cached along with the temporary table, leading to poor performance. The ideal solution would be a more aggressive statistics update when the temporary table has fewer rows and temporary table caching is used; I will open a Connect item to report this issue. Meanwhile, you can mitigate the issue by creating an index on the temporary table. You can monitor active temporary tables using the Windows Server Performance Monitor counter SQL Server: General Statistics -> Active Temp Tables.
    The script to understand the issue and the workaround is listed below:

        set nocount on
        set statistics time off
        set statistics io off
        drop table tab7
        go
        create table tab7 (c1 int primary key clustered, c2 int, c3 char(200))
        go
        create index test on tab7(c2, c1, c3)
        go
        begin tran
        declare @i int
        set @i = 1
        while @i <= 50000
        begin
        insert into tab7 values (@i, 1, 'a')
        set @i = @i + 1
        end
        commit tran
        go
        insert into tab7 values (50001, 1, 'a')
        go
        checkpoint
        go
        drop proc test_slow
        go
        create proc test_slow @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        end
        go
        dbcc dropcleanbuffers
        set statistics time on
        set statistics io on
        go
        --high reads as expected for parameter '1'
        exec test_slow 1
        go
        dbcc dropcleanbuffers
        go
        --high reads that are not expected for parameter '2'
        exec test_slow 2
        go
        drop proc test_with_recompile
        go
        create proc test_with_recompile @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        option (recompile)
        end
        go
        dbcc dropcleanbuffers
        set statistics time on
        set statistics io on
        go
        --high reads as expected for parameter '1'
        exec test_with_recompile 1
        go
        dbcc dropcleanbuffers
        go
        --high reads that are not expected for parameter '2'
        --low reads on 3rd execution as expected for parameter '2'
        exec test_with_recompile 2
        go
        drop proc test_with_alter_table_recompile
        go
        create proc test_with_alter_table_recompile @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        --to avoid caching of temporary tables one can create a constraint
        --but this might lead to duplicate constraint name error on concurrent usage
        alter table #temp1 add constraint test123 unique(c1)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        option (recompile)
        end
        go
        dbcc dropcleanbuffers
        set statistics time on
        set statistics io on
        go
        --high reads as expected for parameter '1'
        exec test_with_alter_table_recompile 1
        go
        dbcc dropcleanbuffers
        go
        --low reads as expected for parameter '2'
        exec test_with_alter_table_recompile 2
        go
        drop proc test_with_index_recompile
        go
        create proc test_with_index_recompile @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        --to avoid caching of temporary tables one can create an index
        create index test on #temp1(c1)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        option (recompile)
        end
        go
        set statistics time on
        set statistics io on
        dbcc dropcleanbuffers
        go
        --high reads as expected for parameter '1'
        exec test_with_index_recompile 1
        go
        dbcc dropcleanbuffers
        go
        --low reads as expected for parameter '2'
        exec test_with_index_recompile 2
        go

    Read the article

  • Does anyone really understand how HFSC scheduling in Linux/BSD works?

    - by Mecki
    I read the original SIGCOMM '97 PostScript paper about HFSC; it is very technical, but I understand the basic concept. Instead of giving a linear service curve (as with pretty much every other scheduling algorithm), you can specify a convex or concave service curve, and thus it is possible to decouple bandwidth and delay. However, even though this paper mentions two kinds of scheduling algorithms being used (real-time and link-share), it always mentions only ONE curve per scheduling class (the decoupling is done by specifying this curve; only one curve is needed for that). Now HFSC has been implemented for BSD (OpenBSD, FreeBSD, etc.) using the ALTQ scheduling framework, and it has been implemented for Linux using the TC scheduling framework (part of iproute2). Both implementations added two additional service curves that were NOT in the original paper: a real-time service curve and an upper-limit service curve. Again, please note that the original paper mentions two scheduling algorithms (real-time and link-share), but in that paper both work with one single service curve; there never were two independent service curves for either one, as you currently find in BSD and Linux. Even worse, some versions of ALTQ seem to add an additional queue priority to HFSC (there is no such thing as priority in the original paper either). I found several BSD HowTos mentioning this priority setting (even though the man page of the latest ALTQ release knows no such parameter for HFSC, so officially it does not even exist). This all makes HFSC scheduling even more complex than the algorithm described in the original paper, and there are tons of tutorials on the Internet that often contradict each other, one claiming the opposite of the other. This is probably the main reason why nobody really seems to understand how HFSC scheduling really works. Before I can ask my questions, we need a sample setup of some kind. I'll use a very simple one, as seen in the image below. Here are some questions I cannot answer because the tutorials contradict each other:

    1. What do I need a real-time curve for at all? Assuming A1, A2, B1, B2 are all 128 kbit/s link-share (no real-time curve for either one), then each of those will get 128 kbit/s if the root has 512 kbit/s to distribute (and A and B are both 256 kbit/s, of course), right? Why would I additionally give A1 and B1 a real-time curve with 128 kbit/s? What would this be good for? To give those two a higher priority? According to the original paper, I can give them a higher priority by using a curve; that's what HFSC is all about, after all. By giving both classes a curve of [256kbit/s 20ms 128kbit/s], both automatically have twice the priority of A2 and B2 (while still only getting 128 kbit/s on average).

    2. Does the real-time bandwidth count towards the link-share bandwidth? E.g. if A1 and B1 both only have 64 kbit/s real-time and 64 kbit/s link-share bandwidth, does that mean once they are served 64 kbit/s via real-time, their link-share requirement is satisfied as well (they might get excess bandwidth, but let's ignore that for a second), or does it mean they get another 64 kbit/s via link-share? So does each class have a bandwidth "requirement" of real-time plus link-share? Or does a class only have a higher requirement than the real-time curve if the link-share curve is higher than the real-time curve (the current link-share requirement equals the specified link-share requirement minus the real-time bandwidth already provided to this class)?

    3. Is the upper-limit curve applied to real-time as well, only to link-share, or maybe to both? Some tutorials say one way, some say the other. Some even claim upper-limit is the maximum for real-time bandwidth + link-share bandwidth. What is the truth?

    4. Assuming A2 and B2 are both 128 kbit/s, does it make any difference whether A1 and B1 are 128 kbit/s link-share only, or 64 kbit/s real-time and 128 kbit/s link-share, and if so, what difference?

    5. If I use the separate real-time curve to increase the priorities of classes, why would I need "curves" at all? Why is real-time not a flat value and link-share also a flat value? Why are both curves? The need for curves is clear in the original paper, because there is only one attribute of that kind per class. But now, having three attributes (real-time, link-share, and upper-limit), what do I still need curves on each one for? Why would I want the curves' shapes (not the average bandwidth, but their slopes) to be different for real-time and link-share traffic?

    6. According to the little documentation available, real-time curve values are totally ignored for inner classes (class A and B); they are only applied to leaf classes (A1, A2, B1, B2). If that is true, why does the ALTQ HFSC sample configuration (search for "3.3 Sample configuration") set real-time curves on inner classes and claim that those set the guaranteed rate of those inner classes? Isn't that completely pointless? (Note: pshare sets the link-share curve in ALTQ and grate the real-time curve; you can see this in the paragraph above the sample configuration.)

    7. Some tutorials say the sum of all real-time curves may not be higher than 80% of the line speed; others say it must not be higher than 70% of the line speed. Which one is right, or are they maybe both wrong?

    8. One tutorial said you should forget all the theory. No matter how things really work (schedulers and bandwidth distribution), imagine the three curves according to the following "simplified mind model": real-time is the guaranteed bandwidth that this class will always get. Link-share is the bandwidth that this class wants to become fully satisfied, but satisfaction cannot be guaranteed. In case there is excess bandwidth, the class might even be offered more bandwidth than necessary to become satisfied, but it may never use more than upper-limit says. For all this to work, the sum of all real-time bandwidths may not be above xx% of the line speed (see the question above; the percentage varies). Question: is this more or less accurate, or a total misunderstanding of HFSC?

    9. And if the assumption above is really accurate, where is prioritization in that model? E.g. every class might have a real-time bandwidth (guaranteed), a link-share bandwidth (not guaranteed), and maybe an upper-limit, but still some classes have higher priority needs than other classes. In that case I must still prioritize somehow, even among the real-time traffic of those classes. Would I prioritize by the slope of the curves? And if so, which curve? The real-time curve? The link-share curve? The upper-limit curve? All of them? Would I give all of them the same slope, or each a different one, and how do I find out the right slope?

    I still haven't lost hope that there exists at least a handful of people in this world who really understand HFSC and are able to answer all these questions accurately. And doing so without contradicting each other in the answers would be really nice ;-)
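
    For reference, the [m1 d m2] triplets used above denote HFSC's two-piece linear service curve; the standard definition (reconstructed here for convenience, not part of the original question) is

        S(t) = \begin{cases} m_1 \, t & 0 \le t \le d \\ m_1 d + m_2 (t - d) & t > d \end{cases}

    so [256kbit/s 20ms 128kbit/s] means a backlogged class is served at 256 kbit/s for its first 20 ms and at 128 kbit/s afterwards, which is exactly the bandwidth/delay decoupling the question describes.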

    Read the article

  • Why am I getting an IndexOutOfBoundsException here?

    - by Berzerker
    I'm getting an index out of bounds exception thrown and I don't know why, within my replaceValue method below.

        [null, (10,4), (52,3), (39,9), (78,7), (63,8), (42,2), (50,411)]
        replacement value test:411
        size=7
        [null, (10,4), (52,3), (39,9), (78,7), (63,8), (42,2), (50,101)]
        removal test of :(10,4)
        [null, (39,9), (52,3), (42,2), (78,7), (63,8), (50,101)]
        size=6

    I try to replace the value again here and get an error...

        package heappriorityqueue;

        import java.util.*;

        public class HeapPriorityQueue<K,V> {
            protected ArrayList<Entry<K,V>> heap;
            protected Comparator<K> comp;
            int size = 0;

            protected static class MyEntry<K,V> implements Entry<K,V> {
                protected K key;
                protected V value;
                protected int loc;

                public MyEntry(K k, V v, int i) { key = k; value = v; loc = i; }
                public K getKey() { return key; }
                public V getValue() { return value; }
                public int getLoc() { return loc; }
                public String toString() { return "(" + key + "," + value + ")"; }
                void setKey(K k1) { key = k1; }
                void setValue(V v1) { value = v1; }
                public void setLoc(int i) { loc = i; }
            }

            public HeapPriorityQueue() {
                heap = new ArrayList<Entry<K,V>>();
                heap.add(0, null);
                comp = new DefaultComparator<K>();
            }

            public HeapPriorityQueue(Comparator<K> c) {
                heap = new ArrayList<Entry<K,V>>();
                heap.add(0, null);
                comp = c;
            }

            public int size() { return size; }
            public boolean isEmpty() { return size == 0; }

            public Entry<K,V> min() throws EmptyPriorityQueueException {
                if (isEmpty())
                    throw new EmptyPriorityQueueException("Priority Queue is Empty");
                return heap.get(1);
            }

            public Entry<K,V> insert(K k, V v) {
                size++;
                Entry<K,V> entry = new MyEntry<K,V>(k, v, size);
                heap.add(size, entry);
                upHeap(size);
                return entry;
            }

            public Entry<K,V> removeMin() throws EmptyPriorityQueueException {
                if (isEmpty())
                    throw new EmptyPriorityQueueException("Priority Queue is Empty");
                if (size == 1)
                    return heap.remove(1);
                Entry<K,V> min = heap.get(1);
                heap.set(1, heap.remove(size));
                size--;
                downHeap(1);
                return min;
            }

            // replace the value field of entry e in the priority
            // queue with the given value v, and return the old value
            public V replaceValue(Entry<K,V> e, V v)
                    throws InvalidEntryException, EmptyPriorityQueueException {
                if (isEmpty())
                    throw new EmptyPriorityQueueException("Priority Queue is Empty.");
                checkEntry(e);
                int i = e.getLoc();
                // This is where I am getting the IndexOutOfBounds exception, on heap.get(i):
                Entry<K,V> entry = heap.get(i);
                V oldVal = entry.getValue();
                K key = entry.getKey();
                Entry<K,V> insert = new MyEntry<K,V>(key, v, i);
                heap.set(i, insert);
                return oldVal;
            }

            // replace the key field of entry e in the priority
            // queue with the given key k, and return the old key
            public K replaceKey(Entry<K,V> e, K k)
                    throws InvalidEntryException, EmptyPriorityQueueException, InvalidKeyException {
                if (isEmpty())
                    throw new EmptyPriorityQueueException("Priority Queue is Empty.");
                checkKey(k);
                checkEntry(e);
                K oldKey = e.getKey();
                int i = e.getLoc();
                Entry<K,V> entry = new MyEntry<K,V>(k, e.getValue(), i);
                heap.set(i, entry);
                downHeap(i);
                upHeap(i);
                return oldKey;
            }

            // remove entry e from the priority queue and return it
            public Entry<K,V> remove(Entry<K,V> e)
                    throws InvalidEntryException, EmptyPriorityQueueException {
                if (isEmpty())
                    throw new EmptyPriorityQueueException("Priority Queue is Empty.");
                MyEntry<K,V> entry = checkEntry(e);
                if (size == 1)
                    return heap.remove(size--);
                int i = e.getLoc();
                heap.set(i, heap.remove(size--));
                downHeap(i);
                return entry;
            }

            protected void upHeap(int i) {
                while (i > 1) {
                    if (comp.compare(heap.get(i/2).getKey(), heap.get(i).getKey()) <= 0)
                        break;
                    swap(i/2, i);
                    i = i/2;
                }
            }

            protected void downHeap(int i) {
                int smallerChild;
                while (size >= 2*i) {
                    smallerChild = 2*i;
                    if (size >= 2*i + 1)
                        if (comp.compare(heap.get(2*i + 1).getKey(), heap.get(2*i).getKey()) < 0)
                            smallerChild = 2*i + 1;
                    if (comp.compare(heap.get(i).getKey(), heap.get(smallerChild).getKey()) <= 0)
                        break;
                    swap(i, smallerChild);
                    i = smallerChild;
                }
            }

            protected void swap(int j, int i) {
                heap.get(j).setLoc(i);
                heap.get(i).setLoc(j);
                Entry<K,V> temp;
                temp = heap.get(j);
                heap.set(j, heap.get(i));
                heap.set(i, temp);
            }

            public String toString() {
                return heap.toString();
            }

            protected MyEntry<K,V> checkEntry(Entry<K,V> ent) throws InvalidEntryException {
                if (ent == null || !(ent instanceof MyEntry))
                    throw new InvalidEntryException("Invalid entry.");
                return (MyEntry) ent;
            }

            protected void checkKey(K key) throws InvalidKeyException {
                try { comp.compare(key, key); }
                catch (Exception e) { throw new InvalidKeyException("Invalid key."); }
            }
        }
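
    For context on the index arithmetic this code relies on: in a 1-indexed array-backed binary heap (slot 0 here is a null placeholder), the standard relations are

        \mathrm{parent}(i) = \lfloor i/2 \rfloor, \qquad \mathrm{left}(i) = 2i, \qquad \mathrm{right}(i) = 2i + 1, \qquad 1 \le i \le \mathrm{size}

    which is why upHeap divides the index by 2 and downHeap compares slots 2i and 2i+1. Any entry whose stored loc drifts outside 1..size will make heap.get(loc) throw exactly this kind of exception, so the loc bookkeeping is the first place to audit.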

    Read the article

  • Eclipse Error Exporting Web Project as WAR

    - by Anand
    Hi, I have the following error when I export my WAR file:

        org.eclipse.core.runtime.CoreException: Extended Operation failure: org.eclipse.jst.j2ee.internal.web.archive.operations.WebComponentExportOperation
            at org.eclipse.wst.common.frameworks.internal.datamodel.ui.DataModelWizard.performFinish(DataModelWizard.java:189)
            at org.eclipse.jface.wizard.WizardDialog.finishPressed(WizardDialog.java:752)
            at org.eclipse.jface.wizard.WizardDialog.buttonPressed(WizardDialog.java:373)
            at org.eclipse.jface.dialogs.Dialog$2.widgetSelected(Dialog.java:624)
            at org.eclipse.swt.widgets.TypedListener.handleEvent(TypedListener.java:228)
            at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:84)
            at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1003)
            at org.eclipse.swt.widgets.Display.runDeferredEvents(Display.java:3880)
            at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3473)
            at org.eclipse.jface.window.Window.runEventLoop(Window.java:825)
            at org.eclipse.jface.window.Window.open(Window.java:801)
            at org.eclipse.ui.internal.handlers.WizardHandler$Export.executeHandler(WizardHandler.java:97)
            at org.eclipse.ui.internal.handlers.WizardHandler.execute(WizardHandler.java:273)
            at org.eclipse.ui.internal.handlers.HandlerProxy.execute(HandlerProxy.java:294)
            at org.eclipse.core.commands.Command.executeWithChecks(Command.java:476)
            at org.eclipse.core.commands.ParameterizedCommand.executeWithChecks(ParameterizedCommand.java:508)
            at org.eclipse.ui.internal.handlers.HandlerService.executeCommand(HandlerService.java:169)
            at org.eclipse.ui.internal.handlers.SlaveHandlerService.executeCommand(SlaveHandlerService.java:241)
            at org.eclipse.ui.internal.actions.CommandAction.runWithEvent(CommandAction.java:157)
            at org.eclipse.ui.internal.actions.CommandAction.run(CommandAction.java:171)
            at org.eclipse.ui.actions.ExportResourcesAction.run(ExportResourcesAction.java:116)
            at org.eclipse.ui.actions.BaseSelectionListenerAction.runWithEvent(BaseSelectionListenerAction.java:168)
            at org.eclipse.jface.action.ActionContributionItem.handleWidgetSelection(ActionContributionItem.java:584)
            at org.eclipse.jface.action.ActionContributionItem.access$2(ActionContributionItem.java:501)
            at org.eclipse.jface.action.ActionContributionItem$5.handleEvent(ActionContributionItem.java:411)
            at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:84)
            at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1003)
            at org.eclipse.swt.widgets.Display.runDeferredEvents(Display.java:3880)
            at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3473)
            at org.eclipse.ui.internal.Workbench.runEventLoop(Workbench.java:2405)
            at org.eclipse.ui.internal.Workbench.runUI(Workbench.java:2369)
            at org.eclipse.ui.internal.Workbench.access$4(Workbench.java:2221)
            at org.eclipse.ui.internal.Workbench$5.run(Workbench.java:500)
            at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332)
            at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:493)
            at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:149)
            at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:113)
            at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:194)
            at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110)
            at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79)
            at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:368)
            at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:179)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
            at java.lang.reflect.Method.invoke(Unknown Source)
            at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:559)
            at org.eclipse.equinox.launcher.Main.basicRun(Main.java:514)
            at org.eclipse.equinox.launcher.Main.run(Main.java:1311)
        Caused by: org.eclipse.core.commands.ExecutionException: Error exporting War File
            at org.eclipse.jst.j2ee.internal.archive.operations.J2EEArtifactExportOperation.execute(J2EEArtifactExportOperation.java:131)
            at org.eclipse.wst.common.frameworks.internal.datamodel.DataModelPausibleOperationImpl$1.run(DataModelPausibleOperationImpl.java:376)
            at org.eclipse.core.internal.resources.Workspace.run(Workspace.java:1800)
            at org.eclipse.wst.common.frameworks.internal.datamodel.DataModelPausibleOperationImpl.runOperation(DataModelPausibleOperationImpl.java:401)
            at org.eclipse.wst.common.frameworks.internal.datamodel.DataModelPausibleOperationImpl.runOperation(DataModelPausibleOperationImpl.java:352)
            at org.eclipse.wst.common.frameworks.internal.datamodel.DataModelPausibleOperationImpl.doExecute(DataModelPausibleOperationImpl.java:242)
            at org.eclipse.wst.common.frameworks.internal.datamodel.DataModelPausibleOperationImpl.executeImpl(DataModelPausibleOperationImpl.java:214)
            at org.eclipse.wst.common.frameworks.internal.datamodel.DataModelPausibleOperationImpl.cacheThreadAndContinue(DataModelPausibleOperationImpl.java:89)
            at org.eclipse.wst.common.frameworks.internal.datamodel.DataModelPausibleOperationImpl.execute(DataModelPausibleOperationImpl.java:202)
            at org.eclipse.wst.common.frameworks.internal.datamodel.ui.DataModelWizard$1$CatchThrowableRunnableWithProgress.run(DataModelWizard.java:218)
            at org.eclipse.jface.operation.ModalContext$ModalContextThread.run(ModalContext.java:121)
        Caused by: org.eclipse.jst.j2ee.commonarchivecore.internal.exception.SaveFailureException: Error opening archive for export..
            at org.eclipse.jst.j2ee.internal.web.archive.operations.WebComponentExportOperation.export(WebComponentExportOperation.java:64)
            at org.eclipse.jst.j2ee.internal.archive.operations.J2EEArtifactExportOperation.execute(J2EEArtifactExportOperation.java:123)
            ... 10 more
        Caused by: org.eclipse.jst.jee.archive.ArchiveSaveFailureException: Error saving archive: WebComponentArchiveLoadAdapter, Component: P/Nautilus2 to output path: D:/Nautilus2.war
            at org.eclipse.jst.jee.archive.internal.ArchiveFactoryImpl.saveArchive(ArchiveFactoryImpl.java:84)
            at org.eclipse.jst.j2ee.internal.archive.operations.J2EEArtifactExportOperation.saveArchive(J2EEArtifactExportOperation.java:306)
            at org.eclipse.jst.j2ee.internal.web.archive.operations.WebComponentExportOperation.export(WebComponentExportOperation.java:50)
            ... 11 more
        Caused by: java.io.FileNotFoundException: D:\myproject.war (Access is denied)
            at java.io.FileOutputStream.open(Native Method)
            at java.io.FileOutputStream.<init>(Unknown Source)
            at java.io.FileOutputStream.<init>(Unknown Source)
            at org.eclipse.jst.jee.archive.internal.ArchiveFactoryImpl.createSaveAdapterForJar(ArchiveFactoryImpl.java:108)
            at org.eclipse.jst.jee.archive.internal.ArchiveFactoryImpl.saveArchive(ArchiveFactoryImpl.java:74)
            ... 13 more
        Contains: Extended Operation failure: org.eclipse.jst.j2ee.internal.web.archive.operations.WebComponentExportOperation

    Can anyone help me out with this?

    Read the article

  • java serial I/O: handling USB serial connection/disconnection in a robust manner

    - by Jason S
    I'm using rxtx for serial I/O handling in Java with an FTDI2232H that provides a USB comm port. It works great, with one exception: if I unplug the USB cable, so that the COM port disappears at runtime, it spews exceptions left and right:

    java.io.IOException: No error in nativeavailable
        at gnu.io.RXTXPort.nativeavailable(Native Method)
        at gnu.io.RXTXPort$SerialInputStream.read(RXTXPort.java:1427)
        at gnu.io.RXTXPort$SerialInputStream.read(RXTXPort.java:1339)

    and when I re-plug the cable, it does not recover. Is there any way to get rxtx to work properly with USB comm port connection/disconnection? (I've tried to post to the rxtx mailing list, but for some strange reason I cannot send messages even though I am subscribed to the list. I've emailed the list admin and have gotten no response.) If not, is there another serial I/O framework that does?
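    One workaround people use with rxtx is to treat any IOException from the port as "device gone", close the port entirely, and poll until the port identifier reappears before reopening. A rough, untested sketch of that pattern; the port name "COM5" is a placeholder for whatever the FTDI device maps to:

        import gnu.io.CommPortIdentifier;
        import gnu.io.SerialPort;
        import java.io.InputStream;

        public class ReconnectingReader {
            public static void main(String[] args) throws InterruptedException {
                while (true) {
                    SerialPort port = null;
                    try {
                        CommPortIdentifier id = CommPortIdentifier.getPortIdentifier("COM5");
                        port = (SerialPort) id.open("ReconnectingReader", 2000);
                        InputStream in = port.getInputStream();
                        int b;
                        while ((b = in.read()) != -1) {
                            // handle the byte here
                        }
                    } catch (Exception e) {
                        // NoSuchPortException, PortInUseException and the IOExceptions
                        // thrown on unplug all land here; fall through and reconnect.
                    } finally {
                        if (port != null) {
                            port.close();
                        }
                    }
                    Thread.sleep(1000); // back off before probing for the port again
                }
            }
        }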

    Read the article

  • Problem reading from two separate InputStreams

    - by Emil H
    I'm building a Yammer client for Android in Scala and have encountered the following issue. When two AsyncTasks try to parse an XML response from the Yammer API (not the same response; each task has its own InputStream), the underlying stream throws an IOException with the message "null SSL pointer", as seen below:

    Uncaught handler: thread AsyncTask #1 exiting due to uncaught exception
    java.lang.RuntimeException: An error occured while executing doInBackground()
        at android.os.AsyncTask$3.done(AsyncTask.java:200)
        at java.util.concurrent.FutureTask$Sync.innerSetException(FutureTask.java:234)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:258)
        at java.util.concurrent.FutureTask.run(FutureTask.java:122)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:648)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:673)
        at java.lang.Thread.run(Thread.java:1060)
    Caused by: java.io.IOException: null SSL pointer
        at org.apache.harmony.xnet.provider.jsse.OpenSSLSocketImpl.nativeread(Native Method)
        at org.apache.harmony.xnet.provider.jsse.OpenSSLSocketImpl.access$300(OpenSSLSocketImpl.java:55)
        at org.apache.harmony.xnet.provider.jsse.OpenSSLSocketImpl$SSLInputStream.read(OpenSSLSocketImpl.java:524)
        at org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:103)
        at org.apache.http.impl.io.AbstractSessionInputBuffer.read(AbstractSessionInputBuffer.java:134)
        at org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:174)
        at org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:188)
        at org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:178)
        at org.apache.harmony.xml.ExpatParser.parseFragment(ExpatParser.java:504)
        at org.apache.harmony.xml.ExpatParser.parseDocument(ExpatParser.java:467)
        at org.apache.harmony.xml.ExpatReader.parse(ExpatReader.java:329)
        at org.apache.harmony.xml.ExpatReader.parse(ExpatReader.java:286)
        at javax.xml.parsers.SAXParser.parse(SAXParser.java:361)
        at org.mediocre.util.XMLParser$.loadXML(XMLParser.scala:28)
        at org.mediocre.util.XMLParser$.loadXML(XMLParser.scala:12)
        .....

    Searching for the error didn't bring much clarity. Does this have something to do with the response from the server? Or is it something else? The complete code can be found at: http://github.com/archevel/YammerTime I get no error if I wait until the first response is finished and then let the other complete. The request is made with the DefaultHttpClient, but this is supposedly thread-safe. What am I missing? If anything needs to be clarified, just ask :) Cheers, Emil H
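    One detail that may matter here: DefaultHttpClient is thread-safe only when it is built on a thread-safe connection manager; the no-argument constructor uses a single-connection manager, so two AsyncTasks reading concurrently can stomp on each other's SSL state. A sketch of building one shared client (HttpClient 4.0-era API, as bundled with Android; treat it as a suggestion rather than a confirmed fix):

        import org.apache.http.client.HttpClient;
        import org.apache.http.conn.scheme.PlainSocketFactory;
        import org.apache.http.conn.scheme.Scheme;
        import org.apache.http.conn.scheme.SchemeRegistry;
        import org.apache.http.conn.ssl.SSLSocketFactory;
        import org.apache.http.impl.client.DefaultHttpClient;
        import org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager;
        import org.apache.http.params.BasicHttpParams;
        import org.apache.http.params.HttpParams;

        public final class SharedClient {
            // A DefaultHttpClient built on ThreadSafeClientConnManager can be
            // shared safely between concurrent AsyncTasks.
            public static HttpClient create() {
                HttpParams params = new BasicHttpParams();
                SchemeRegistry registry = new SchemeRegistry();
                registry.register(new Scheme("http", PlainSocketFactory.getSocketFactory(), 80));
                registry.register(new Scheme("https", SSLSocketFactory.getSocketFactory(), 443));
                return new DefaultHttpClient(new ThreadSafeClientConnManager(params, registry), params);
            }

            private SharedClient() { }
        }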

    Read the article

  • BufferedReader.readLine() gives error java.net.SocketException: Software caused connection abort: recv failed

    - by javatcp
    I am trying to code my program such that it keeps running in the while loop, checking readLine(), until the buffered reader gets something from my TCP client, but I get this error as soon as the program executes:

    Mar 31, 2010 11:03:36 PM deswash.DESWashView$5 run
    SEVERE: null
    java.net.SocketException: Software caused connection abort: recv failed
        at java.net.SocketInputStream.socketRead0(Native Method)
        at java.net.SocketInputStream.read(SocketInputStream.java:129)
        at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:264)
        at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:306)
        at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:158)
        at java.io.InputStreamReader.read(InputStreamReader.java:167)
        at java.io.BufferedReader.fill(BufferedReader.java:136)
        at java.io.BufferedReader.readLine(BufferedReader.java:299)
        at java.io.BufferedReader.readLine(BufferedReader.java:362)
        at deswash.DESWashView$5.run(DESWashView.java:448)

    The second line in the following code throws the error:

    while(running){
        String temp = in.readLine();
        if(!(temp.equals(null))){
            int inid = Integer.parseInt(temp);
            stationList.add(inid);
        }
    }
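    As a side note, temp.equals(null) can never work as a null test: readLine() returns null at end of stream, and calling .equals on that null reference itself throws a NullPointerException. The SocketException means the peer closed or reset the connection mid-read, so the loop also needs to handle that. A hedged sketch of a null-safe variant, assuming the in, running and stationList members from the snippet above (the wrapper class is hypothetical, added only to make it self-contained):

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.util.List;

        public class StationReader {
            private volatile boolean running = true;   // mirrors the flag in the question
            private final List<Integer> stationList;   // mirrors the list in the question

            public StationReader(List<Integer> stationList) {
                this.stationList = stationList;
            }

            public void readLoop(BufferedReader in) {
                try {
                    String temp;
                    // readLine() returns null at end of stream; comparing with
                    // != null avoids the NPE that temp.equals(null) would cause.
                    while (running && (temp = in.readLine()) != null) {
                        stationList.add(Integer.parseInt(temp));
                    }
                } catch (IOException e) {
                    // "Software caused connection abort: recv failed" lands here
                    // when the TCP client disconnects or resets; stop the loop.
                    running = false;
                }
            }
        }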

    Read the article

  • error executing aapt, all of a sudden

    - by pjv
    I know there are a lot of these topics around, but none seem to help in my case, nor describe it exactly. The best similar one is "aapt not found under the right path". My problem is that I can be using Eclipse for a whole evening, programming, compiling and using my device, and then suddenly I get "error executing aapt" for my current project, and of course R.java isn't (properly) generated anymore. If I then restart Eclipse, everything goes away. I'm seeing this once a day on average, however. I've recently switched to amd64 and installed the latest Android 2.3 SDK and matching tools. I know that there is now a platform-tools folder that has an aapt version that should work independently of the SDK version. At first I had added this directory to my PATH, as instructed on the SDK website. I've also tried not adding it to my path and making a link platforms/android-9/tools so that every SDK version might use its own old copy. Needless to say, platform-tools/aapt is there and has the right permissions, and I have been able to execute it on the command line at any time. When I do write a faulty xml file or such, and appropriately get an error, I see an extra line that says "aapt: /lib32/libz.so.1: no version information available". I'm running a recent Gentoo Linux system. I have everything installed to support x86 on amd64, but have re-emerged emul-linux-x86-baselibs and zlib just to be sure. The problem persists. I do see some pages that spell horror over some zlib bugs, but I'm not sure if that's related. I realize I'm not on the reference Ubuntu platform, but surely the difference cannot be that great? It might very well be a bug in aapt or the tools themselves. Why would it suddenly stop working? I also experienced that the ids in R.java were incorrect: simple findViewById() code would give ClassCastExceptions because of mixed-up ids one time, and then work perfectly without any code changes but only a "clean project", in the aftermath of a failing aapt. Finally, I've run a few commands on aapt that don't seem to add any extra information:

    # ldd aapt
    ./aapt: /lib32/libz.so.1: no version information available (required by ./aapt)
        linux-gate.so.1 => (0xffffe000)
        librt.so.1 => /lib32/librt.so.1 (0x4f864000)
        libpthread.so.0 => /lib32/libpthread.so.0 (0x4f849000)
        libz.so.1 => /lib32/libz.so.1 (0xf7707000)
        libstdc++.so.6 => /usr/lib/gcc/x86_64-pc-linux-gnu/4.4.4/32/libstdc++.so.6 (0x415e9000)
        libm.so.6 => /lib32/libm.so.6 (0x4f876000)
        libgcc_s.so.1 => /lib32/libgcc_s.so.1 (0x4fac6000)
        libc.so.6 => /lib32/libc.so.6 (0x4f5ed000)
        /lib/ld-linux.so.2 (0x4f5ca000)

    # file aapt
    aapt: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.15, not stripped

    Can anybody tell anything wrong with my configuration? Does it smell like a bug perhaps (otherwise let's report it (again))?

    Update 2011-01-06: I've gained some more knowledge. When I recently was trying to export a signed apk, I ran into another error message (full details from the Eclipse error view) regarding aapt that I hadn't seen before. Note here too that I can just restart Eclipse and export apks again without problems, at least for a little while. I'm starting to think it is related to a lack of memory on my system. The message "Onvoldoende geheugen beschikbaar" is Dutch for "insufficient memory available". I have also been seeing insufficient memory errors in DDMS when I'm dumping HPROF files.
    Here is the error log (shortened):

    !ENTRY com.android.ide.eclipse.adt 4 0 2011-01-05 23:11:16.097
    !MESSAGE Export Wizard Error
    !STACK 1
    org.eclipse.core.runtime.CoreException: Failed to export application
        at com.android.ide.eclipse.adt.internal.project.ExportHelper.exportReleaseApk(Unknown Source)
        at com.android.ide.eclipse.adt.internal.wizards.export.ExportWizard.doExport(Unknown Source)
        at com.android.ide.eclipse.adt.internal.wizards.export.ExportWizard.access$0(Unknown Source)
        at com.android.ide.eclipse.adt.internal.wizards.export.ExportWizard$1.run(Unknown Source)
        at org.eclipse.jface.operation.ModalContext$ModalContextThread.run(ModalContext.java:121)
    Caused by: com.android.ide.eclipse.adt.internal.build.AaptExecException: Error executing aapt. Please check aapt is present at /opt/android-sdk/android-sdk-linux_x86-1.6_r1/platform-tools/aapt
        at com.android.ide.eclipse.adt.internal.build.BuildHelper.executeAapt(Unknown Source)
        at com.android.ide.eclipse.adt.internal.build.BuildHelper.packageResources(Unknown Source)
        ... 5 more
    Caused by: java.io.IOException: Cannot run program "/opt/android-sdk/android-sdk-linux_x86-1.6_r1/platform-tools/aapt": java.io.IOException: error=12, Onvoldoende geheugen beschikbaar
        ...
    Caused by: java.io.IOException: java.io.IOException: error=12, Onvoldoende geheugen beschikbaar
        ...
    !SUBENTRY 1 com.android.ide.eclipse.adt 4 0 2011-01-05 23:11:16.098
    !MESSAGE Failed to export application
    !STACK 0
    com.android.ide.eclipse.adt.internal.build.AaptExecException: Error executing aapt. Please check aapt is present at /opt/android-sdk/android-sdk-linux_x86-1.6_r1/platform-tools/aapt
        at com.android.ide.eclipse.adt.internal.build.BuildHelper.executeAapt(Unknown Source)
        at com.android.ide.eclipse.adt.internal.build.BuildHelper.packageResources(Unknown Source)
        at com.android.ide.eclipse.adt.internal.project.ExportHelper.exportReleaseApk(Unknown Source)
        at com.android.ide.eclipse.adt.internal.wizards.export.ExportWizard.doExport(Unknown Source)
        at com.android.ide.eclipse.adt.internal.wizards.export.ExportWizard.access$0(Unknown Source)
        at com.android.ide.eclipse.adt.internal.wizards.export.ExportWizard$1.run(Unknown Source)
        at org.eclipse.jface.operation.ModalContext$ModalContextThread.run(ModalContext.java:121)
    Caused by: java.io.IOException: Cannot run program "/opt/android-sdk/android-sdk-linux_x86-1.6_r1/platform-tools/aapt": java.io.IOException: error=12, Onvoldoende geheugen beschikbaar
        ...
    Caused by: java.io.IOException: java.io.IOException: error=12, Onvoldoende geheugen beschikbaar
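    One plausible reading of error=12 (ENOMEM): Eclipse starts aapt via Runtime.exec(), which on Linux is fork() plus exec(), and fork() momentarily needs to duplicate the parent's address space. With a large Eclipse heap and little free memory or swap, the fork can fail even though aapt itself is tiny, which would also explain why a restart clears it for a while. A hypothetical probe to reproduce this outside Eclipse; run it with a deliberately large -Xmx. The aapt path is the one from this log, and the "version" argument is assumed to simply print aapt's version string:

        public class ForkProbe {
            public static void main(String[] args) throws Exception {
                // Launching any external program goes through the same
                // fork()+exec() path Eclipse uses for aapt; with a large -Xmx
                // and little free memory this is where error=12 shows up.
                Process p = new ProcessBuilder(
                        "/opt/android-sdk/android-sdk-linux_x86-1.6_r1/platform-tools/aapt",
                        "version")
                        .redirectErrorStream(true)
                        .start();
                System.out.println("aapt exited with " + p.waitFor());
            }
        }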

    Read the article

  • Ambiguous reference in Stream

    - by Sharpeye500
    This is the webservice method I have:

    LoadImageFromDB(int ID, ref Stream streamReturnVal)

    I have this at the top of the file:

    using Stream = System.IO.MemoryStream;

    Whenever I consume this method (update web reference) from a web application, I get this error:

    'Stream' is an ambiguous reference between 'System.IO.Stream' and 'WebReference.Stream'

    Any thoughts? In the webservice class:

    using Stream = System.IO.MemoryStream;
    LoadImageFromDB(int ID, ref Stream streamReturnVal);

    In the web page where the above webservice is consumed:

    using WebReference;
    Stream streamReturnVal = null;
    streamReturnVal = new MemoryStream();
    WebserviceInstanceName.LoadImageFromDB(100, streamReturnVal);

    PS: Stream is from System.IO.Stream.

    Read the article

  • How do I fix my "Stream closed" error in spring-ws?

    - by mcherm
    I have working code using the spring-ws library to respond to SOAP requests. I moved this code to a different project (I'm merging projects) and now it is failing. I would like to figure out the reason for the failure. The symptom I get is this: when the HTTP request arrives, Spring begins handling the call. Then I get the following exception:

    org.springframework.ws.soap.saaj.SaajSoapEnvelopeException: Could not access envelope: java.io.IOException: Stream closed; nested exception is javax.xml.soap.SOAPException: java.io.IOException: Stream closed
        at org.springframework.ws.soap.saaj.SaajSoapMessage.getEnvelope(SaajSoapMessage.java:109)
        <<more lines that don't matter>>
    Caused by: java.io.IOException: Stream closed
        at java.io.PushbackInputStream.ensureOpen(PushbackInputStream.java:57)
        at java.io.PushbackInputStream.read(PushbackInputStream.java:116)
        at org.apache.xerces.impl.XMLEntityManager$RewindableInputStream.read(Unknown Source)
        at org.apache.xerces.impl.XMLEntityManager.setupCurrentEntity(Unknown Source)
        at org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
        at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
        at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
        at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
        at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source)
        at org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser.parse(Unknown Source)
        at org.apache.xerces.jaxp.SAXParserImpl.parse(Unknown Source)
        at org.apache.axis.encoding.DeserializationContext.parse(DeserializationContext.java:227)
        at org.apache.axis.SOAPPart.getAsSOAPEnvelope(SOAPPart.java:696)
        ... 30 more

    Examining it in a debugger, it appears that Spring successfully handles the HTTP headers, but then chokes when it begins to process the contents of the SOAP message itself, on reading the very first character of the body. Some googling for the error message suggests that the PushbackInputStream apparently used for reading from the socket is either read twice, or has close() called on it and is then read afterward. It is happening inside spring-ws, not my code, and since it worked fine before I moved the code to the new project, it must have something to do with the versions of Spring or of something it uses, like Axis or Xerces. But I can't find any differences in versions of these! Has anyone encountered this error before? Or do you have any suggestions of approaches I could take in troubleshooting this?
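    One detail stands out in the trace: the bottom frames are in org.apache.axis.SOAPPart, which suggests the merged project's classpath now resolves SAAJ to Axis 1's implementation rather than to whatever the old project used. Since SAAJ picks its MessageFactory through the javax.xml.soap.MessageFactory pluggability lookup, a quick probe shows which implementation wins on the new classpath (the diagnosis here is a guess; verify it before excluding any axis jars):

        import javax.xml.soap.MessageFactory;

        public class WhichSaaj {
            public static void main(String[] args) throws Exception {
                // Prints the SAAJ MessageFactory the classpath resolves to. If this
                // is an org.apache.axis class, the merged project pulled in Axis 1's
                // SAAJ jar, which is a plausible source of the double-read behaviour.
                System.out.println(MessageFactory.newInstance().getClass().getName());
            }
        }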

    Read the article

  • Haskell typeclass

    - by Geoff
    I have a Haskell typeclass question. I can't munge the syntax to get this (seemingly reasonable) program to compile under GHC:

    import Control.Concurrent.MVar

    blah1 :: [a] -> IO ([a])
    blah1 = return

    blah2 :: [a] -> IO (MVar [a])
    blah2 = newMVar

    class Blah b where
      blah :: [a] -> IO (b a)

    instance Blah [] where
      blah = blah1

    -- BOOM
    instance Blah (MVar []) where
      blah = blah2

    main :: IO ()
    main = do
      putStrLn "Ok"

    I get the following error message, which kind of makes sense, but I don't know how to fix it:

    `[]' is not applied to enough type arguments
    Expected kind `*', but `[]' has kind `* -> *'
    In the type `MVar []'
    In the instance declaration for `Blah (MVar [])'

    Read the article

  • how do you convert a DirectoryInfo to a string

    - by steve
    The problem is that I can't convert it to a string:

    Dim path As String = "..\..\..\Tier1 downloads\CourseVB\"

    If countNumberOfFolders > 0 Then 'if there is a folder then
        ' make a reference to a directory
        Dim di As New IO.DirectoryInfo(path)
        Dim diar1 As IO.DirectoryInfo() = di.GetDirectories()
        Dim dra As IO.DirectoryInfo
        'list the names of all files in the specified directory
        For Each dra In diar1
            Dim lessonDirectoryName() As Lesson
            lessonDirectoryName(0).lessonName = dra
        Next

    The lesson is an object, and lessonName is a property of type String. How do I convert the DirectoryInfo to a string?

    Read the article

  • PerlIO in Windows PowerShell and CMD.exe

    - by Evan Carroll
    Apparently, a Perl script I have produces two different output files depending on whether I run it under Windows PowerShell or cmd.exe. The script can be found at the bottom of this question. The file handle is opened with IO::File; I believe that PerlIO is doing some screwy stuff. It seems as if under cmd.exe a much more compact encoding is chosen (4.09 KB), compared to PowerShell, which generates a file nearly twice the size (8.19 KB). This script takes a shell script and generates a Windows batch file. It seems like the one generated under cmd.exe is just regular ASCII (1-byte characters), while the other one appears to be UTF-16 (first two bytes FF FE). Can someone verify and explain why PerlIO works differently under Windows PowerShell than under cmd.exe? Also, how do I explicitly get an ASCII-magic PerlIO filehandle using IO::File? Currently, only the file generated with cmd.exe is executable. The UTF-16 .bat (I think that's the encoding) is not executable by either PowerShell or cmd.exe. BTW, we're using Perl 5.12.1 for MSWin32.

    #!/usr/bin/env perl
    use strict;
    use warnings;
    use File::Spec;
    use IO::File;
    use IO::Dir;
    use feature ':5.10';

    my $bash_ftp_script = File::Spec->catfile( 'bin', 'dm-ftp-push' );
    my $fh = IO::File->new( $bash_ftp_script, 'r' ) or die $!;
    my @lines = grep $_ !~ /^#.*/, <$fh>;
    my $file = join '', @lines;
    $file =~ s/ \\\n/ /gm;
    $file =~ tr/'\t/"/d;
    $file =~ s/ +/ /g;
    $file =~ s/\b"|"\b/"/g;
    my @singleLnFile = grep /ncftp|echo/, split $/, $file;
    s/\$PWD\///g for @singleLnFile;
    my $dh = IO::Dir->new( '.' );
    my @files = grep /\.pl$/, $dh->read;
    say 'echo off';
    say "perl $_" for @files;
    say for @singleLnFile;
    1;

    Read the article

  • Qooxdoo REST JSON request problem - unexpected token and then timeout

    - by freiksenet
    Hello! I am learning the Qooxdoo framework and I am trying to make it work with a small Django web service. The Django web service just returns JSON data like this:

    { "name": "Football", "description": "The most popular sport." }

    Then I use the following code to query that URL:

    var req = new qx.io.remote.Request(url, "GET", "application/json");
    req.toggleCrossDomain();
    req.addListener("completed", function(e) {
      alert(e.getContent());
    });
    req.send();

    Unfortunately, when I execute the code I get an unexpected token error and then the request times out:

    Uncaught SyntaxError: Unexpected token :    Native.js:91013011
    qx.io.remote.RequestQueue[246]: Timeout: transport 248    Native.js:91013011
    qx.io.remote.RequestQueue[246]: 5036ms > 5000ms    Native.js:91013013
    qx.io.remote.Exchange[248]: Timeout: implementation 249

    JSLint reports that this is valid JSON, so I wonder why Qooxdoo doesn't parse it correctly.

    Read the article

  • How to read integer in Erlang?

    - by Jace Jung
    I'm trying to read user input of an integer (like "cin >> nInput;" in C++). I found the io:fread BIF at http://www.erlang.org/doc/man/io.html, so I wrote code like this:

    {ok, X} = io:fread("input : ", "~d"),
    io:format("~p~n", [X]).

    But when I input 10, the Erlang terminal keeps giving me "\n", not 10. I assume fread automatically reads 10 and converts it into a string. How can I read an integer value directly? Is there any way to do this? Thank you for reading this.

    Read the article

  • grails Sql error

    - by Srinath
    Hi, I was getting an issue when using the new Sql method in Grails:

    import groovy.sql.Sql

    def datasource

    def organization_config = new Sql(dataSource)
    def orgs = organization_config.rows("select o.organizationId, o.name from organization o ")
    session.setAttribute("org_results", orgs);

    The application is running, but I get these errors when I restart the Tomcat server:

    SEVERE: IOException while loading persisted sessions: java.io.WriteAbortedException: writing aborted; java.io.NotSerializableException: groovy.sql.GroovyRowResult
    java.io.WriteAbortedException: writing aborted; java.io.NotSerializableException: groovy.sql.GroovyRowResult

    Can anyone please tell me why this is happening? Thanks in advance, sri..
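    The error itself is about session persistence rather than the query: on shutdown, Tomcat serializes session attributes to disk, and groovy.sql.GroovyRowResult is not Serializable (at least in the Groovy version in play here). Since each row behaves as a Map, one workaround is to copy the rows into plain HashMaps before putting them in the session; a sketch of the idea, written in Java for clarity and untested against this Grails version:

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public final class SessionRows {
            // Copies each row into a HashMap, which is Serializable, so the
            // container can persist the session across restarts.
            public static ArrayList<HashMap<Object, Object>> toSerializable(List<? extends Map<?, ?>> rows) {
                ArrayList<HashMap<Object, Object>> plain = new ArrayList<HashMap<Object, Object>>();
                for (Map<?, ?> row : rows) {
                    plain.add(new HashMap<Object, Object>(row));
                }
                return plain;
            }

            private SessionRows() { }
        }

    From the Grails side that would be something like session.setAttribute("org_results", SessionRows.toSerializable(orgs)), or the equivalent copy done directly in Groovy.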

    Read the article

  • Streaming files from EventMachine handler?

    - by Noah
    I am creating a streaming EventMachine server. I'm concerned about avoiding blocking IO or doing anything else to muck up the event loop. From what I've read, Ruby's non-blocking IO can be used to stream files in a non-blocking way, or I can call next_tick, but I'm a little unclear about which of these approaches is preferable. Part of the problem is that I have not found a good explanation of the non-blocking IO library functions in Ruby. Short version: assuming a long-lived network IO operation (several wall-clock minutes of streaming per file transfer), what is the best way to do this in EventMachine without gumming up the event loop?

    while 1 do
      file.read do |bytes|
        @conn.send_data bytes
      end
    end

    I understand that the above code will block, and I'm wondering what to put in its place. Also, I cannot use the FileStreamer class that is part of EventMachine as is, because I need to manipulate the data after it's read but before it's sent. Thanks, Noah

    Read the article

  • Webstart omits cookie, resulting in EOFException in ObjectInputStream when accessing Servlets?!

    - by Houtman
    Hi, my app is started both from the command line and by using a JNLP file. I'm running Java version 1.6.0_14. First I had the problem that I created the buffered input and output streams in the incorrect order; I found the solution here at Stack Overflow, so starting from the command line works fine now. But when starting the app using Webstart, it ends here:

    java.io.EOFException
        at java.io.ObjectInputStream$PeekInputStream.readFully(Unknown Source)
        at java.io.ObjectInputStream$BlockDataInputStream.readShort(Unknown Source)
        at java.io.ObjectInputStream.readStreamHeader(Unknown Source)
        at java.io.ObjectInputStream.<init>(Unknown Source)
        at <..>remoting.thinclient.RemoteSocketChannel.<init>(RemoteSocketChannel.java:76)

    I found some posts regarding similar problems: at ibm.com (identifies a cookie problem) and at bugs.sun.com (identifies the problem as solved in 6u10(b12)?). The first suggests that there is a problem in Webstart with cookies, though it doesn't seem to be acknowledged as a proper Java bug. Still, I am a bit lost in the solution provided regarding the cookies (ibm link). Can anyone expand on the cookie solution? I can't find information on how the cookie is generated in the first place. Many thanks.
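    If the IBM write-up is right that an unexpected cookie on the Web Start request is what corrupts the object stream, one experiment (untested; java.net.CookieManager exists as of Java 6, which matches 1.6.0_14) is to install a cookie handler that accepts nothing before the first servlet call:

        import java.net.CookieHandler;
        import java.net.CookieManager;
        import java.net.CookiePolicy;

        public class NoCookies {
            // Replaces whatever CookieHandler the Web Start runtime installed
            // with one that stores and sends no cookies at all, so the
            // URLConnection carrying the serialized objects stays cookie-free.
            public static void install() {
                CookieHandler.setDefault(new CookieManager(null, CookiePolicy.ACCEPT_NONE));
            }
        }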

    Read the article

< Previous Page | 46 47 48 49 50 51 52 53 54 55 56 57  | Next Page >