Search Results

Search found 6587 results on 264 pages for 'slow motion'.

Page 102/264 | < Previous Page | 98 99 100 101 102 103 104 105 106 107 108 109  | Next Page >

  • Problem with jQuery array (being a newbie)

    - by user236106
    <html> <head> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3/jquery.min.js" type="text/javascript"></script> <script type="text/jscript"> $('document').ready(function (){ var clickNo=0; $('div#pics>img').hide() $('div#pics>input').click(function(event){ $('div#pics>img')[0].show('slow'); }) }); </script> </head> <body> <div id="pics"> <input type="button" value="next" /> <--a bunch of img tags. I had to resort to this comment because the system won't let me include the img tag in the code--> </div> </body> </html> I can't understand why the line $('div#pics>img')[0].show('slow'); is not working.
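    For what it's worth, the likely culprit is that indexing a jQuery object with [0] returns a raw DOM element, which has no .show() method. A minimal sketch of the usual fix, using .eq() to keep the jQuery wrapper (reusing the question's clickNo counter to step through the images is an assumption about the intent):

      $(document).ready(function () {
          var clickNo = 0;
          $('div#pics>img').hide();
          $('div#pics>input').click(function () {
              // .eq(n) returns a jQuery object for the nth match;
              // [n] returns a bare DOM element with no .show() method
              $('div#pics>img').eq(clickNo).show('slow');
              clickNo++;
          });
      });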

    Read the article

  • odd behavior when checking if radio button selected in jQuery

    - by RememberME
    I had the following check in my jQuery which I thought was working fine to see if a radio button was checked. if ($("input[@name='companyType']:checked").attr('id') == "primary") { ... } Here are the radio buttons: <p> <label>Company Type:</label> <label for="primary"><input onclick="javascript: $('#sec').hide('slow');$('#primary_company').find('option:first').attr('selected','selected');" type="radio" name="companyType" id="primary" checked />Primary</label> <label for="secondary"><input onclick="javascript: $('#sec').show('slow');" type="radio" name="companyType" id="secondary" />Subsidiary</label> </p> Then, it suddenly stopped working (or so I thought). I did some debugging and finally realized that it was returning an id of "approved_status". Elsewhere on my form I have a checkbox called "approved_status". I realized that when I originally tested this, I must have tested it on records where approved_status is false. And now most of my approved_statuses are true/checked. I changed the code to this: var id = $("input:radio[@name='companyType']:checked").attr('id'); alert(id); if (id == "primary") { And it's now properly returning "primary" or "secondary" as the id. So, it is working, but it seems that it's not checking the name at all and is now just checking radio buttons. I just want to know for future use, what's wrong with the original code, because I can see possibly having two different radio sets on a page, and then my new fix probably wouldn't work. Thanks!
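    A plausible explanation: the XPath-style @ prefix was deprecated in jQuery 1.2 and removed in 1.3, so the attribute filter likely stopped being applied and the selector degenerated into a plain input:checked, which matches the first checked input on the page, checkbox included. A sketch of the @-free form, scoped to radio buttons (selector strings taken from the question):

      // attribute selectors take no '@' in current jQuery;
      // :radio restricts the match to radio buttons, so a checked
      // checkbox elsewhere on the form can never slip in
      var id = $("input:radio[name='companyType']:checked").attr('id');
      if (id === "primary") {
          $('#sec').hide('slow');
      }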

    Read the article

  • Calling Multiple functions simultaneously

    - by Noob
    I'm trying to call two different functions for two different HTML elements at the same time, but the second function isn't being read at all. I'm also trying to use the id to specify which corresponding elements to grab data from. Here's what I have: function changeImage(id) { var s = document.getElementById('showcase'); var simg = s.getElementsByTagName('img'); var slen = simg.length; for(i=0; i < slen; i++) { simg[i].style.display = 'none'; } $('#' + id).fadeIn('slow', 0); function createComment(jim) { //alert('hello?'); var d = document.getElementById('description'); var dh = document.getElementsByTagName('p'); var dlen = dh.length; //alert(dh); for(i=0; i < dlen; i++) { alert(dh); dh[i].style.display = 'none'; } $('#' + jim).fadeIn('slow', 0); }
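    As pasted, changeImage is missing its closing brace, so createComment ends up nested inside it and is never visible to the rest of the page. A sketch with the brace restored and the two functions side by side (ids taken from the question; the jQuery-only rewrite of the hide loops is an assumption):

      function changeImage(id) {
          // hide every image in the showcase, then fade in the chosen one
          $('#showcase img').hide();
          $('#' + id).fadeIn('slow');
      } // <-- this closing brace is what the pasted code is missing

      function createComment(jim) {
          // hide every paragraph in the description, then fade in the chosen one
          $('#description p').hide();
          $('#' + jim).fadeIn('slow');
      }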

    Read the article

  • Need Help on JavaScript to Trim

    - by MacpaLtd
    I am facing some problems with my server space. The images are using all the space on the server, making it slow. As it is an eCommerce website, it cannot be slow or we lose customers. If I have the following: SKU's : ABC123-001 > catName > Phone ABC753-851 > catName > MAC AT1233-098 > catName > PC How can I use trim to make it the following: SKU's : 123 > catName > Phone 753 > catName > MAC 1233 > catName > PC Which I would use in the following script: <script type="text/javascript"> $(function(){ var sku = $("#ProductBreadcrumb ul li:last").text(); $(".ProductThumbImage img").attr('src','http://img.example.com/images/'+catName+'/'+sku+'.jpg'); }); </script> So, basically, the output for the picture link would be: http://img.example.com/images/phone/123.jpg http://img.example.com/images/mac/753.jpg http://img.example.com/images/pc/1233.jpg So the first problem I have to solve is: how can I trim it? I am not familiar with JavaScript, so any help would be really appreciated :D
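    Assuming the SKU pattern is always leading letters, then the wanted digits, then a dash and a suffix, one way to trim it is to strip the letter prefix and keep what precedes the dash. A sketch (element ids copied from the question; catName is assumed to be defined elsewhere on the page):

      $(function () {
          var sku = $("#ProductBreadcrumb ul li:last").text();
          // "ABC123-001" -> "123", "AT1233-098" -> "1233":
          // drop the leading letters, then keep the part before the dash
          var trimmed = sku.replace(/^[A-Za-z]+/, '').split('-')[0];
          $(".ProductThumbImage img").attr('src',
              'http://img.example.com/images/' + catName + '/' + trimmed + '.jpg');
      });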

    Read the article

  • Combine MD5 hashes of multiple files

    - by user685869
    I have 7 files that I'm generating MD5 hashes for. The hashes are used to ensure that a remote copy of the data store is identical to the local copy. Unfortunately, the link between these two copies of the data is mind numbingly slow. Changes to the data are very rare but I have a requirement that the data be synchronized at all times (or as soon as possible). Rather than passing 7 different MD5 hashes across my (extremely slow) communications link, I'd like to generate the hash for each file and then combine these hashes into a single hash which I can then transfer and then re-calculate/use for comparison on the remote side. If the "combined hash" differs, then I'd start sending the 7 individual hashes to determine exactly which file(s) have been changed. For example, here are the MD5 hashes for the 7 files as of last week: 0709d609d69385255c496436eb50402c 709465a74411bd596595c7b9b158ae6a 4ab657320ef33e3d5eb498e4c13d41b7 3b49c6ab199994fd776bb63761414e72 0fc28c5a010fc3c06c0c930c88e31a15 c4ecd214662cac5aae0e53f6f252bf0e 8b086431e43148a2c2d943ba30d31cc6 I'd like to combine these hashes together such that I get a single unique value (perhaps another MD5 hash?) that I can then send to the remote system. On the remote system, I'd then perform the same calculation to determine if the data as a whole has been changed. If it has, then I'd start sending the individual hashes, etc. The most important factor is that my "combined hash" be short enough so that it uses less bandwidth than just sending all 7 hashes in the first place. I thought of writing the 7 MD5 hashes to a file and then hashing that file but is there a better way?
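    The simplest scheme along these lines is to concatenate the seven digests in a fixed order and hash the concatenation, which yields a single 32-character value to ship across the link; both sides just have to agree on the file order. A sketch in Node.js (file names are hypothetical placeholders):

      const crypto = require('crypto');
      const fs = require('fs');

      // hash each file, then hash the concatenated digests in a fixed order
      const files = ['file1.dat', 'file2.dat', 'file3.dat']; // placeholders
      const digests = files.map(f =>
          crypto.createHash('md5').update(fs.readFileSync(f)).digest('hex'));
      const combined = crypto.createHash('md5')
          .update(digests.join('')).digest('hex');
      console.log(combined); // one 32-char hash to send instead of seven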

    Read the article

  • Utility for indexing a directory?

    - by achacha
    Here is what I am trying to do: I have a directory (with sub-directories) of source files; I need to index them so I can find files fast (find as I type) and open them for compare/analysis. I don't want it to scan the content, just a filename index for quick lookup. I do this when trying to determine if a class exists in a given tree (we maintain directory trees for each release, which adds up to a lot of files), and sometimes I want to quickly check files to see how something was implemented, etc. Most of these directories are on remote servers (sometimes on the other side of the world) or on a VM (which is on a server far away), so I only want to read the directory trees once, which is why running find every time is way too slow, and doing 'find . > foo.txt' and then searching that file is a bit tedious. It's kind of like how "Find Resource" works in Eclipse after it indexes all files, but it's a bit of a chore to import/remove directories in Eclipse every time. Eclipse is also very slow when dealing with remote volumes. Any suggestions are appreciated :)
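    Absent a ready-made tool, the one-pass index can be approximated with a short script: walk the tree once, keep the paths in memory, and filter as you type. A Node.js sketch of the idea (the root path is a placeholder):

      const fs = require('fs');
      const path = require('path');

      // walk the tree once, collecting every file path
      function walk(dir, out) {
          for (const name of fs.readdirSync(dir)) {
              const full = path.join(dir, name);
              if (fs.statSync(full).isDirectory()) walk(full, out);
              else out.push(full);
          }
          return out;
      }

      const index = walk('/path/to/source/tree', []); // slow remote read, done once
      // later lookups are in-memory substring filters, no re-scan needed
      console.log(index.filter(p => p.includes('Foo')));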

    Read the article

  • Vacancy Tracking Algorithm implementation in C++

    - by Dave
    I'm trying to use the vacancy tracking algorithm to perform transposition of multidimensional arrays in C++. The arrays come as void pointers, so I'm using address manipulation to perform the copies. Basically, there is an algorithm that starts with an offset and works its way through the whole 1-d representation of the array like Swiss cheese, knocking out other offsets until it gets back to the original one. Then, you have to start at the next, untouched offset and do it again. You repeat until all offsets have been touched. Right now, I'm using a std::set to just fill up all possible offsets (0 up to the multiplicative fold of the dimensions of the array). Then, as I go through the algorithm, I erase from the set. I figured this would be fastest because I need to randomly access offsets in the tree/set and delete them, and then quickly find the next untouched/undeleted offset. First of all, filling up the set is very slow and it seems like there must be a better way. It individually calls new for every insert, so if I have 5 million offsets, that's 5 million allocations, plus constantly re-balancing the tree, which as you know is not fast for a pre-sorted sequence. Second, deleting is slow as well. Third, assuming 4-byte data types like int and float, I'm actually using as much memory as the array itself just to store this list of untouched offsets. Fourth, determining if there are any untouched offsets and getting one of them is fast -- a good thing. Does anyone have suggestions for any of these issues?
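    One alternative worth weighing: since the offsets are dense integers in [0, N), a byte-per-offset bitmap avoids both the per-node allocation and the rebalancing of a tree set. The bookkeeping is sketched below in JavaScript for consistency with the rest of this page; the C++ analogue would be a std::vector<char> (or std::vector<bool> for one bit per offset):

      const N = 5000000;                  // total number of offsets
      const touched = new Uint8Array(N);  // 0 = untouched, 1 = done; one allocation
      let cursor = 0;                     // everything below cursor is touched

      // mark an offset as visited: O(1), no tree rebalancing
      function mark(offset) { touched[offset] = 1; }

      // find the next untouched offset; the cursor only moves forward,
      // so the scan costs O(N) in total over the whole transposition
      function nextUntouched() {
          while (cursor < N && touched[cursor]) cursor++;
          return cursor < N ? cursor : -1;
      }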

    Read the article

  • jQuery ajax success help

    - by Jason
    Hi all, I am implementing a "Quick delete" function on a page I am creating. The way it works is this: 1: You click the "delete" button in the table row for the record that you want to delete. 2: The page sends a request to the ajax page, which returns a success message of "yes" or a failure message of "no". My issue is that if I get a success message of "yes", I want to hide the row for that record. I am having issues "finding" the row using jQuery. Here is my jQuery code: $(document).ready(function(){ $(".pane .btn-delete").click(function(){ var element = $(this); var del_id = element.attr("id"); var dataString = 'action=del&cid=' + del_id; if(confirm("Are you sure you want to delete this content block?")) { $("#msgbox").addClass('ajaxmsg').text('Checking permissions....').fadeIn(1000); $.ajax({ type: "get", url: "ajax/admArticles_ajax.php", data: dataString, success: function(data){ switch(data) { case "yes": $("#msgbox").addClass('ajaxmsg').text('Deleting content block....').fadeIn(1000); $(this).parents(".pane").animate({ backgroundColor: "#fbc7c7" }, "fast") .animate({ opacity: "hide" }, "slow") break case "no": $("#msgbox").removeClass().addClass('error').text('You do not have the correct permissions to delete this content....').fadeIn(1000); break default: }; } }); } return false; }); }); This is the line of code I am using to hide the row; however, it is not working, because I don't think $(this).parents(".pane") finds the element. $(this).parents(".pane").animate({ backgroundColor: "#fbc7c7" }, "fast") .animate({ opacity: "hide" }, "slow") Any help would be greatly appreciated. Thanks...
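    Inside the success callback, this no longer refers to the clicked button; it refers to the ajax settings object, so $(this).parents(".pane") matches nothing. The click handler already saves the button in element, and reusing that captured variable is the simplest fix; a sketch of the success branch (jQuery's context option would be another way to restore this):

      success: function (data) {
          if (data === "yes") {
              $("#msgbox").addClass('ajaxmsg')
                  .text('Deleting content block....').fadeIn(1000);
              // 'element' was captured in the click handler's closure,
              // so it still points at the delete button that was clicked
              element.parents(".pane")
                  .animate({ backgroundColor: "#fbc7c7" }, "fast")
                  .animate({ opacity: "hide" }, "slow");
          }
      }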

    Read the article

  • Fastest way to read/store lots of multidimensional data? (Java)

    - by RemiX
    I have three questions about three nested loops: for (int x=0; x<400; x++) { for (int y=0; y<300; y++) { for (int z=0; z<400; z++) { // compute and store value } } } And I need to store all computed values. My standard approach would be to use a 3D-array: values[x][y][z] = 1; // test value but this turns out to be slow: it takes 192 ms to complete this loop, where a single int-assignment int value = 1; // test value takes only 66 ms. 1) Why is an array so relatively slow? 2) And why does it get even slower when I put this in the inner loop: values[z][y][x] = 1; // (notice x and z switched) This takes more than 4 seconds! 3) Most importantly: Can I use a data structure that is as quick as the assignment of a single integer, but can store as much data as the 3D-array?
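    A common answer to question 3 is to flatten the three dimensions into one contiguous block and compute the index by hand, which also explains question 2: the innermost loop should walk adjacent memory. The idea is sketched below in JavaScript with a typed array, to stay in one language for this page; in Java the equivalent would be a single int[400 * 300 * 400] indexed the same way:

      const X = 400, Y = 300, Z = 400;
      const values = new Float64Array(X * Y * Z); // one contiguous allocation

      for (let x = 0; x < X; x++) {
          for (let y = 0; y < Y; y++) {
              const base = (x * Y + y) * Z; // row offset hoisted out of the loop
              for (let z = 0; z < Z; z++) {
                  // z is innermost, so consecutive iterations touch
                  // consecutive memory locations (cache-friendly order)
                  values[base + z] = 1;
              }
          }
      }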

    Read the article

  • jQuery: make two fades complete at the same time

    - by odavy
    Hello again, just another quick one: I am noticing differences with fadeOut depending on whether this is the target. Here's my structure: I have rows of data on my page, and each row has two icons. One is an update icon for that row, one is a delete icon for that row. When a user clicks the update icon for a particular row, I want both the update and the delete icons to fade away. So, in order to fade the thing the user clicked (the update button) and its corresponding delete button, I am using... $(this).next().add(this).fadeOut('slow'); ...which works, but the two elements don't fade at the same time. this fades first (which is the update icon), and then this.next fades out (the delete icon). But if I specify the two elements by name... $('#updS2, #delS2').fadeOut('slow'); then they fade together. Why is it different? Apologies for Friday rambling.
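    Hard to say without the markup why the built collection behaves differently; one possibility is that .next() is not landing on the delete icon at all. A robust way to sidestep the traversal is to give each row's two icons a shared class and fade the collection in one call, the same shape as the version that already works (class name and row structure below are assumptions):

      // assumes markup like:
      // <tr> ... <img class="row-icon" id="updS2"> <img class="row-icon" id="delS2"> ... </tr>
      $('.row-icon').click(function () {
          // one collection, one fadeOut: both icons animate together,
          // just like the explicit $('#updS2, #delS2') version
          $(this).closest('tr').find('.row-icon').fadeOut('slow');
      });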

    Read the article

  • CLSF & CLK 2013 Trip Report by Jeff Liu

    - by jamesmorris
    This is a contributed post from Jeff Liu, lead XFS developer for the Oracle mainline Linux kernel team. Recently, I attended both the China Linux Storage and Filesystem workshop (CLSF) and the China Linux Kernel conference (CLK), which were held in Shanghai. Here are the highlights of both events.

    CLSF - 17th October

    XFS update (led by Jeff Liu)

    XFS keeps up rapid progress with a lot of changes, focused on infrastructure/performance improvements as well as new feature development. This is reflected in a sample statistic comparing XFS, Ext4+JBD2 and Btrfs via: # git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/xfs | fs/ext4 fs/jbd2 | fs/btrfs
    XFS: 141 files changed, 27598 insertions(+), 19113 deletions(-)
    Ext4+JBD2: 39 files changed, 10487 insertions(+), 5454 deletions(-)
    Btrfs: 70 files changed, 19875 insertions(+), 8130 deletions(-)
    What made up those changes in XFS?
    - Self-describing metadata (CRC32c). This new feature contributed about 70% of the code changes; it can be enabled via `mkfs.xfs -m crc=1 /dev/xxx` for the v5 superblock.
    - Transaction log space reservation improvements. With this change, the log space reservation is calculated at mount time rather than at runtime, reducing CPU overhead.
    - User namespace support, so both XFS and USERNS can be enabled in the kernel configuration beginning with Linux 3.10. Thanks to Dwight Engen for his efforts on this.
    - Split project/group quota inodes. Originally, project quota could not be enabled together with group quota because they shared the same quota file inode; now it works, but only for the v5 superblock, i.e. with CRC enabled.
    - CONFIG_XFS_WARN, a new lightweight runtime debugger which can be deployed in production environments.
    - Readahead of log objects during recovery, which speeds up log replay significantly.
    - Speculative preallocation inode tracking, clearing and throttling. The main purpose is to deal with inodes that carry post-EOF space due to speculative preallocation, and to support improved quota management that frees up a significant amount of unwritten space at or near EDQUOT. It supports background scanning, which occurs on a longish interval (5 minutes by default, tunable), and on-demand scanning/trimming via ioctl(2).
    Bitter arguments ensued from this session, especially over the comparison between Ext4 and Btrfs in different areas; I had to spend the whole first morning answering those questions. We basically agreed that XFS is the best choice on Linux nowadays, because:
    - Stability: XFS has a good stability record over the past 10 years. Fengguang Wu, who leads the 0-day kernel test project, also said that he has observed fewer errors in XFS than in other filesystems over the past year or more. I credit the XFS upstream code reviewers, who always perform serious code review as well as testing.
    - Good performance for large and small files; "XFS does not work well for small files" has been an old story for years now.
    - Best choice (maybe) for distributed PB-scale filesystems; e.g. Ceph recommends deploying the OSD daemon on XFS because Ext4 has a limited xattr size.
    - Best choice for large storage (>16TB); Ext4 does not support a single file larger than about 15.95TB.
    - Scalability: any objection that XFS is best on this point? :)
    - XFS deals with transaction concurrency better than Ext4. Why? The maximum log size in XFS is 2038MB, compared to 128MB in Ext4.
    - Misc: Ext4 is widely used and has proven fast and stable under various loads and scenarios; XFS just needs more customers, and Btrfs is still on the road to maturity.
    Ceph Introduction (led by Li Wang)

    This was a hot topic. Li gave us a nice introduction to the design as well as their current work. The Ceph client has actually been included in the Linux kernel since 2.6.34 and supported by OpenStack since Folsom, but it seems it has not yet been widely deployed in production environments. Their major work focuses on inline data support, to offset the cost of separating metadata and data storage and reduce file access time: a file access needs two round trips, fetching the metadata from the MDS and then getting the data from an OSD, so small-file access is limited by network latency. The solution is to store the data of small files alongside the metadata, so that when a small file is accessed the metadata server can push both metadata and data to the client at the same time. This reduces the overhead of calculating the data offset and saves the communication with the OSD. For this feature they have only run some small-scale testing, but saw noticeable improvements. Test environment: Intel 2 CPU 12 Core, 64GB RAM, Ubuntu 12.04, Ceph 0.56.6 with 200GB SATA disk, 15 OSD, 1 MDS, 1 MON. Sequential read performance for 1K-size files improved by about 50%. I asked Li and Zheng Yan (the core developer of Ceph, who also worked on Btrfs) whether Ceph is really stable enough to be deployed in production for large-scale, PB-level storage, but they could not give a positive answer; it looks as if Ceph is not even rolled out across Dreamhost (subject to confirmation). According to Li, they have only deployed Ceph on small-scale storage (32 nodes), although they would like to try 6000 nodes in the future.

    Improve Linux swap for flash storage (led by Shaohua Li)

    Because of its high density, low power and low price, flash storage (SSD) is a good candidate to partially replace DRAM. A quick answer to this is to use the SSD as swap. But Linux swap was designed for slow hard-disk storage, so there are a lot of challenges in using SSDs efficiently for swap.

    SWAPOUT, swap_map scan: swap_map is the in-memory data structure that tracks swap disk usage, but it is a slow linear scan. It becomes a bottleneck when finding many adjacent pages on an SSD. Shaohua Li changed it to a cluster (128K) list, resulting in an O(1) algorithm. However, this approach needs restrictive cluster alignment and is only enabled for SSDs.

    IO pattern: in most cases, swap IO arrives in an interleaved pattern, because of multiple reclaimers or because a free cluster is shared by all reclaimers. The block layer can merge interleaved IO to some extent, but we cannot count on it completely. Hence a per-CPU cluster was added on top of the previous change; it helps each reclaimer do sequential IO, and the block layer finds it easier to merge the IO.

    TLB flush: if we are reclaiming an active page, we must first move the page from the active LRU list to the inactive LRU list, and then reclaim the page from the inactive LRU to swap it out. During this process we need to clear the PTE twice: first the 'A' (ACCESSED) bit, then the 'P' (PRESENT) bit. Processors need to send lots of IPIs, which makes the TLB flush really expensive. Some work has been done to improve this, including reworking smp_call_function_many() or removing the first TLB flush on x86, but there are still some arguments here and only parts of the work have been pushed to mainline.

    SWAPIN: a page fault does iodepth=1 synchronous IO, but it is a bit wasteful to issue only a page-sized IO. The obvious solution is swap readahead.
    But the current in-kernel swap readahead is arbitrary (always 8 pages), and it doesn't perform well for either random or sequential access workloads. Shaohua introduced a new flag for madvise(MADV_WILLNEED) to do swap prefetch, so the change lives in the userspace API and leaves the in-kernel readahead unchanged (though I think some improvement could be made there too).

    SWAP discard: as we know, discard is important for SSD write throughput, but the current swap discard implementation is synchronous. He changed it to asynchronous discard, which allows discard and write to run at the same time. Meanwhile, the unit of discard is also optimized to a cluster.

    Misc, lock contention: with many concurrent swapouts and swapins, contention on locks such as anon_vma or swap_lock is high, so he changed swap_lock to a per-swap-device lock. Some lock contention remains on very fast SSDs, though, because of the swapcache address_space lock.

    Zproject (led by Bob Liu)

    Bob gave us a very nice introduction to the current state of memory compression. There are now three projects (zswap/zram/zcache) which all aim to smooth out swap IO storms and improve performance, but each has its own pros and cons.

    ZSWAP is implemented on top of the frontswap API and uses a dynamic allocator named zbud to allocate free pages. "Zbud" means that pairs of zpages are "buddied": it can store at most two compressed pages in one page frame, so the maximum compression ratio is 50%. Each page frame is LRU-linked and can be shrunk under memory pressure. If the compressed memory pool reaches its limit, shrinking or reclaim happens: a page frame is decompressed into two newly allocated pages which are then written to the real swap device, but this can fail when allocating the two pages.

    ZRAM acts as a compressed ramdisk that can be used as a swap device, and uses zsmalloc as its allocator, which has high density but may have fragmentation issues. Besides, page reclaim is hard, since more pages are needed to decompress and free just one page. ZRAM is preferred by embedded systems, which may not have any real swap device. Both ZRAM and ZSWAP are currently in the driver/staging tree, and in the mm community there have been some discussions about merging ZRAM into ZSWAP or vice versa, but no agreement yet.

    ZCACHE handled file page compression, but it was removed from staging recently.

    From industry (led by Tang Jie, LSI)

    An LSI engineer introduced several new products to us. The first was RAID5/6 cards that use full-stripe writes to improve performance. The second was the SandForce flash controller, which can understand data file types (data entropy) to reduce write amplification (WA) for nearly all writes; this is called DuraWrite, and the typical WA is 0.5. What's more, with its Dynamic Logical Capacity function module enabled, the controller can do data compression transparently to the upper layers. LSI's testing shows that with this virtual capacity enabled, a 1x TB drive can support up to 2x TB of capacity, but the application must monitor free flash space to maintain optimal performance and to guard against free-flash-space exhaustion. He said the most useful application for this is databases. Another thing worth mentioning is an NV-DRAM memory in NMR/Raptor which is directly exposed to the host system: applications can access the NV-DRAM directly via a memory address, using the standard system call mmap(). He said it is very useful for database logging today.
    NVM products of this kind have begun to appear in recent years, and it is said that Samsung is building a research center in China for related products. IMHO, NVM will have an effect on the current OS layers, especially on file systems; e.g. journaling may need to be redesigned to fully utilize this nonvolatile memory.

    OCFS2 (led by Canquan Shen)

    Without a doubt, Huawei has been the biggest contributor to OCFS2 over the past two years. They have posted 46 upstream patches, and 39 patches have been merged. Their current project is based on 32/64-node clusters, but they have also tried 128 nodes at the experimental stage. The major work they are doing is support for ATS (atomic test and set), which can work together with DLM. This idea looks inspired by VMware's VMFS locking, i.e. http://blogs.vmware.com/vsphere/2012/05/vmfs-locking-uncovered.html

    CLK - 18th October 2013

    Improving Linux Development with Better Tools (Andi Kleen)

    This talk focused on how to find and solve bugs as Linux's complexity grows. Generally, we can do this with the following kinds of tools:
    - Static code checkers, e.g. sparse, smatch, coccinelle, the clang checker, checkpatch, gcc -W/LTO, stanse. These can check a lot of things, from simple mistakes to complex problems, but the challenges are: some are very slow; false positives; and it may take a concentrated effort to get the false positives down. In particular, no static checker he has found can follow indirect calls ("OO in C", common in the kernel): struct foo_ops { int (*do_foo)(struct foo *obj); } foo->do_foo(foo);
    - Dynamic runtime checkers, e.g. thread checkers, kmemcheck, lockdep. Ideally all kernel code would come with a test suite; then someone could run all the dynamic checkers.
    - Fuzzers/test suites, e.g. Trinity, which is a great tool: it finds many bugs, but needs a manual model for each syscall. Modern fuzzers use automatic feedback, but not yet for the kernel: http://taviso.decsystem.org/making_software_dumber.pdf
    - Debuggers/tracers for understanding code, e.g. ftrace, which can dump on events/oopses/custom triggers, but in many cases still has too much overhead to run at all times during debugging.
    - Tools to read and understand source, e.g. grep/cscope, which work great for many cases but do not understand indirect pointers (the OO-in-C model used in the kernel). Asked for all "do_foo" instances in struct foo_ops { int (*do_foo)(struct foo *obj); } = { .do_foo = my_foo }; foo->do_foo(foo); they come up short; it would be great to have a cscope-like tool that understands this based on types/initializers.

    XFS: The High Performance Enterprise File System (Jeff Liu) [slides]

    I gave a talk introducing the disk layout and unique features, as well as the recent changes. The slides include some charts showing the performance of XFS/Btrfs/Ext4 for small files. About a dozen attendees raised their hands when I asked who had experience with XFS. I remember that when I asked the same question at LinuxCon Japan, only 3 people raised their hands, but they were Chris Mason, Ric Wheeler, and one other attendee. The audience questions mainly focused on stability and on comparisons with other file systems.

    Linux Containers (Feng Gao)

    The speaker introduced the purpose of the various namespaces, including mount/UTS/IPC/Network/Pid/User, as well as the system API/ABI. For the userspace tools, he mainly focused on libvirt LXC rather than LXC itself.
    Libvirt LXC is another userspace container management tool, implemented as a libvirt driver; it can manage containers, create namespaces, create a private filesystem layout for a container, create devices for a container, and set up resource controllers via cgroups. In this talk, Feng also mentioned two possible new namespaces for the future: the first is an audit namespace, though it is not yet certain whether it should be assigned to the user namespace; the other is a syslog namespace, but the question is, do we really need it?

    In-memory Compression (Bob Liu)

    The same as at CLSF: a nice introduction that I have already covered above.

    Misc

    There were some other talks on ACPI-based memory hotplug, smart wake-affinity in the scheduler, etc., but my head is not big enough to record all of those things.

    -- Jeff Liu

    Read the article

  • How to recover a zip password using CUDA (GPU)?

    - by marc
    Welcome. How can I recover a zip password on Linux using CUDA (GPU)? For two days I've been trying with "fcrackzip", but it's too slow. A few months back I saw an application that can use the GPU / CUDA and get a large performance boost compared to the CPU. If brute force using CUDA is not possible, please tell me what's the best application for a dictionary attack, and where I can find the best (largest) dictionary. Regards

    Read the article

  • hp dl580 g5 diag error

    - by maruti
    Server disk access is slow, and running Insight Diagnostics reports these errors:
    Error: 640003: DST Error
    Error: 640006: The Read and/or Write HARD error rate is above threshold. "This drive has experienced/recorded error conditions reported by diagnosis and requires replacement."

    Read the article

  • SMB2 traffic crashes network?

    - by Phil Cross
    We've been having significant network slowdown issues over the past few weeks, primarily on Friday mornings. We run Windows 7 client machines with Windows Server 2008 R2 servers. What generally happens is that the network starts to slow down massively at 08:55 and resumes normal speeds at around 09:20. This affects everything on the network, from logging on and resetting passwords to opening programs and files. On my client machine, physical memory usage remains at around 40% (normal) and CPU usage hovers around 0-10%, idle. The servers' memory usage spikes massively and remains quite intense during the times mentioned above. I have taken several Wireshark captures, both during the slowdown and when the network operates fine. One of the main things I noticed is the increase in SMB2 entries in the Wireshark log during the slowdown:
    Record Time Source Destination Protocol Length Info
    382 3.976460000 10.47.35.11 10.47.32.3 SMB2 362 Create Request File: pcross\My Documents
    413 4.525047000 10.47.35.11 10.47.32.3 SMB2 146 Close Request File: pcross\My Documents
    441 5.235927000 10.47.32.3 10.47.35.11 SMB2 298 Create Response File: pcross\My Documents\Downloads
    442 5.236199000 10.47.35.11 10.47.32.3 SMB2 260 Find Request File: pcross\My Documents\Downloads SMB2_FIND_ID_BOTH_DIRECTORY_INFO Pattern: *; Find Request File: pcross\My Documents\Downloads SMB2_FIND_ID_BOTH_DIRECTORY_INFO Pattern: *
    573 6.327634000 10.47.35.11 10.47.32.3 SMB2 146 Close Request File: pcross\My Documents\Downloads
    703 7.664186000 10.47.35.11 10.47.32.3 SMB2 394 Create Request File: pcross\My Documents\Downloads\WestlandsProspectus\P24 __ P21.pdf
    These are some of the SMB2 records, from a list of a couple of hundred, which originate from my computer with a destination of the file server. One interesting thing to note is that the last entry in the examples above is for a PDF file. That file was not open anywhere on my computer, or on anyone else's; no folders containing the file were open either. When I took another capture while the network was running fine, there were hardly any SMB2 entries, and the ones that were displayed mainly came from Wireshark. We currently have around 800 computers, 90 Macs, and 200 laptops and netbooks. Our concern is that if this traffic is happening on my computer, is it happening on other computers, and if so, would those computers be adding to the slow network issues? Again, this only happens at certain times. We're pretty sure it's not our antivirus. Is there any way to narrow down what is initiating this SMB traffic during those particular times? Any extra advice, or links to resources, would be appreciated.

    Read the article

  • Free solution to backup folders to external SFTP server with shadow copy

    - by Sergiy Byelozyorov
    I have an account at university on a Linux machine with 10TB of free space accessible via SFTP. I would like to back up my Windows 7 x64 laptop to the university machine. Currently I am using rsync+cygwin, but the backup is pretty slow (without shadow copy) and I hate the console window appearing every day on my screen when I log in. So I am looking for something like Windows Backup but with support for SFTP. A combination of tools would work too.

    Read the article

  • How to make Ultra VNC viewer faster?

    - by karthik
    I am trying to share the screen of a system A with another system B. The normal screen sharing is good, but when I run a 3-D program on system A and try to view it from system B, I see the screen frame by frame. The response time is too slow. My requirement is to show the 3-D program to another person. How can I make VNC as fast as it can be?

    Read the article

  • copy windows registry and/or other locked files

    - by karolrvn
    Hi. While improving my (personal) backup system, I noticed that I cannot copy certain locked files, like the Windows registry files. Is there a way to copy such things? Or a specific solution for the registry (I know of the regedit File/Export "solution", but that exports to text format and seems slow)? AFAIK, on Linux the locking system is advisory, while on Windows it is mandatory. Can I somehow bypass the mandatory-ness for backup purposes etc.? TIA.

    Read the article

  • Question about torrents

    - by c0mrade
    Why do my torrents go at a rate of 25-30 kb/s while my regular HTTP downloads reach up to 300 kb/s? Is my ISP to blame for this? I mean, the torrents have like a thousand seeders and they're still very slow. How are they blocking torrent speed, and can I bypass it?

    Read the article

  • Debian (wheezy) force cache to RAM

    - by Marek Javurek
    I have a Linux server running about 6 game servers. I have 3 GB of RAM in total but use only about 500 MB. Is there a way to cache one of my game servers (all the files, even maps that aren't actually in use, etc., about 1.5 GB) in RAM? The reason I want to do this is that my Linux server is virtual and the hard drives are very slow, so the IO wait time is really long. IO: http://i.stack.imgur.com/7HLhB.png

    Read the article

  • How to switch between the upper and lower pane in emacs?

    - by Anthony Kong
    I am using the Erlang mode in Aquamacs. By default, the mode creates a new pane and a buffer "*erlang*" when I hit C-c C-k to compile an Erlang file (as seen in the attached screenshot). What is the easiest way to switch between these two panes? I do not think "C-x b" is a good fit in this case, because typing 'C-x b' and then "*erlang*" is slow, considering I have to switch between my files and the Erlang shell rather frequently.

    Read the article

  • How to retrieve names of all private MSMQ queues - efficiently?

    - by Damian Powell
    How can I retrieve the names of all of the private MSMQ queues on the local machine, without using System.Messaging.MessageQueue.GetPrivateQueuesByMachine(".")? I'm using PowerShell so any solution using COM, WMI, or .NET is acceptable, although the latter is preferable. Note that this StackOverflow question has a solution that returns all of the queue objects. I don't want the objects (it's too slow and a little flakey when there are lots of queues), I just want their names.

    Read the article

< Previous Page | 98 99 100 101 102 103 104 105 106 107 108 109  | Next Page >