Search Results

Search found 251 results on 11 pages for 'fragmentation'.

Page 5/11

  • The Evolution of Television and Home Entertainment

    - by Bill Evjen
    This is a group that is focused on entertainment in the aviation industry. I am attending their conference for the first time as it relates to my job at Swank Motion Pictures and what we do for our various markets. I will post my notes here.

    The Evolution of Television and Home Entertainment, by Patrick Cosson, Veebeam

    TV has been the center of living rooms for some time; conversations and culture evolve around it. The way we consume this content has been changing dramatically. After TV, we had the MTV revolution. It created shorter attention spans and made us more materialistic, narcissistic, and not easily impressed. Then we came to the Internet. The amount of content has expanded; it contains a ton of user-generated content and provides filtering, organization, and distribution. We now have a problem: we are in the age of digital excess. We can access whatever we want, and at the same time we are mobile. The challenge we have now is curation. The trends we see:

    - a rapid shift from scheduled to on-demand consumption
    - a move from cable to Internet protocols
    - rapid fragmentation of media
    - a transition from the TV set to a variety of screens
    - social connections becoming mediators and amplifiers

    TiVo and the shift to on demand: it is driven by the time crunch and provides personal experiences. Once old consumption habits are changed, there is no way back. Experience shows that people load up content and then bring it with them on planes, to hotels, etc.

    Rapid fragmentation of media sources: many new professional content sources and channels, the rise of digital distribution, and the rise of user-generated content contribute to the wealth of content sources and abundant choice. Netflix, BBC iPlayer, Hulu, Pandora, iTunes, Amazon Video, Vudu, Voddler, Spotify: these companies didn't exist five years ago. People now expect this kind of consumption and are now thinking about how to deliver all these tools.

    Transition from the TV set to multi-screens: the TV screen has traditionally been the dominant consumption screen for TV and video. Now the PC, game consoles, and various mobile devices are rapidly becoming common video devices. Multi-screen is now the norm.

    Social connections becoming key mediators: social networking enablers, which increasingly funnel traffic on the web, will become an integral part of the discovery, consumption, and sharing model for television. The revolution will be broadcast on Facebook and Twitter.

    There is business disruption: a lot of new entrants, rapid internationalization, increasing competition from existing media players, and a fragmenting audience base.

    The web browser: freedom to access any site versus the fight over the walled garden. Most devices are not powerful enough to support a full browser, but the PC will always be present in the living room, and a wireless link between PC and TV can output 1080p, play anything, and stay secure.

    Key players and their challenges. Services: Internet media is increasingly interconnected to social media and publicly shared UGC; content delivery is moving to IPTV; rights-management issues are creating silos and hindering a great user experience and growth. Devices: devices are becoming people's windows into all kinds of media from all kinds of sources; there won't be a consolidation of the device landscape, rather the opposite, so finding the right niche makes the most sense.

    We are moving to an on-demand, streaming world. People want full access to anything.

    Read the article

  • Access cost of dynamically created objects with dynamically allocated members

    - by user343547
    I'm building an application which will have dynamically allocated objects of type A, each with a dynamically allocated member (v), similar to the class below:

        class A {
            int a;
            int b;
            int* v;
        };

    where:

    - The memory for v will be allocated in the constructor.
    - v will be allocated once when an object of type A is created and will never need to be resized.
    - The size of v will vary across all instances of A.

    The application will potentially have a huge number of such objects and mostly needs to stream a large number of these objects through the CPU, but only needs to perform very simple computations on the member variables.

    - Could having v dynamically allocated mean that an instance of A and its member v are not located together in memory?
    - What tools and techniques can be used to test whether this fragmentation is a performance bottleneck?
    - If such fragmentation is a performance issue, are there any techniques that could allow A and v to be allocated in a contiguous region of memory?
    - Are there any techniques to aid memory access, such as a pre-fetching scheme? For example, fetch an object of type A and operate on the other member variables while pre-fetching v.
    - If the size of v, or an acceptable maximum size, could be known at compile time, would replacing v with a fixed-size array like int v[max_length] lead to better performance?

    The target platforms are standard desktop machines with x86/AMD64 processors, Windows or Linux OSes, and compiled using either GCC or MSVC compilers.
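    For illustration, a minimal sketch of one way to guarantee that A and v are contiguous: put them in a single allocation, with v pointing at a buffer placed directly after the struct. This assumes C++11; create/destroy are hypothetical helper names, not part of the question's code:

        #include <cstddef>
        #include <cstdlib>
        #include <new>

        // One malloc holds the object plus its variable-length buffer,
        // so the fields and v's data share locality in memory.
        struct A {
            int a;
            int b;
            std::size_t n;  // element count of v
            int* v;         // points just past the struct itself

            static A* create(std::size_t n) {
                void* raw = std::malloc(sizeof(A) + n * sizeof(int));
                if (!raw) throw std::bad_alloc();
                A* obj = new (raw) A{};
                obj->n = n;
                obj->v = reinterpret_cast<int*>(obj + 1);  // trailing buffer
                return obj;
            }
            static void destroy(A* obj) {
                obj->~A();
                std::free(obj);
            }
        };

    Whether this actually beats separate allocations on a given workload is exactly the kind of thing to confirm with a cache profiler such as cachegrind, perf, or VTune.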

    Read the article

  • !gcroot output leads nowhere

    - by Jeff Costa
    I am troubleshooting memory fragmentation in an app pool, as evidenced by a small number of Free objects consuming the most space on the heap:

        0x000007ff00256728      6,543     3,890,208 System.Collections.Hashtable+bucket[]
        0x000007ff002649a8      7,297    22,979,560 System.Byte[]
        0x000007ff001e0d90    251,347    30,374,304 System.String
        0x0000000001d0c830        373    48,036,816 Free

    Running the !dumpgen 3 command reveals the fragmentation; there is a repeating pattern of Free and System.Object[] objects of the same size:

        000000017feb7350    24 **** FREE ****
        000000017feb7368  8192 System.Object[]
        000000017feb9368    24 **** FREE ****
        000000017feb9380  8192 System.Object[]
        000000017febb380    24 **** FREE ****
        000000017febb398  8192 System.Object[]
        000000017febd398    24 **** FREE ****
        000000017febd3b0  8192 System.Object[]
        000000017febf3b0    24 **** FREE ****
        000000017febf3c8  8192 System.Object[]
        000000017fec13c8    24 **** FREE ****
        000000017fec13e0  8192 System.Object[]
        000000017fec33e0    24 **** FREE ****
        000000017fec33f8  8192 System.Object[]
        000000017fec53f8    24 **** FREE ****
        000000017fec5410 14024 System.Object[]
        000000017fec8ad8    24 **** FREE ****
        000000017fec8af0  8192 System.Object[]
        000000017fecaaf0    24 **** FREE ****
        000000017fecab08  8192 System.Object[]
        000000017feccb08    24 **** FREE ****
        000000017feccb20  8192 System.Object[]
        000000017feceb20    24 **** FREE ****
        000000017feceb38  8192 System.Object[]
        000000017fed0b38    24 **** FREE ****
        000000017fed0b50  8192 System.Object[]
        000000017fed2b50    24 **** FREE ****
        000000017fed2b68  8192 System.Object[]

    When I try to obtain the root of one of the System.Object[] instances with !gcroot, I get a pinned handle but no additional stack data:

        Scan Thread 41 OSThread 1044
        DOMAIN(0000000001D51330):HANDLE(Pinned):15217e8:Root: 000000017fe60fe8(System.Object[])

    As you can see, there is no additional data to go on. Running a !handle command also yields nothing:

        0:041> !handle 000000017fe7a068 ff
        Handle 000000017fe7a068
        Type <Error retrieving type>
        unable to query object information
        unable to query object information
        No object specific information available

    How can I trace out this memory leak when I cannot find what is rooting the System.Object[]?

    Read the article

  • APC (PHP Cache) Uptime 0 minutes, not caching

    - by Jussi
    My goal is to implement APC as an opcode cache for a Drupal 6 production site. I have so far tested APC with several PHP files, with and without including other PHP files via include_once. I have also tried to tweak the apc.ini values for shm_size, apc.include_once_override and apc.stat, restarting Apache every time. The result is that apc.php shows no changes in any values (except, of course, the changed apc.ini values are shown as they should be). Every time I refresh the apc.php test page, the start time resets to the current time, showing an uptime of 0 minutes.

    The apc.php test page shows:

        General Cache Information
        APC Version        3.1.9
        PHP Version        5.2.10
        APC Host           xxxx.xx.xx
        Server Software    Apache/2.2.3 (CentOS)
        Shared Memory      1 Segment(s) with 128.0 MBytes (mmap memory, pthread mutex Locks locking)
        Start Time         2011/07/26 11:53:56
        Uptime             0 minutes
        File Upload Support 1

        Cached Files 0 (0.0 Bytes)
        Hits 1    Misses 1
        Request Rate (hits, misses) 2.00 cache requests/second
        Hit Rate 1.00 cache requests/second    Miss Rate 1.00 cache requests/second
        Insert Rate 0.00 cache requests/second    Cache full count 0

        Cached Variables 0 (0.0 Bytes)
        Hits 0    Misses 0
        Request Rate (hits, misses) 0.00 cache requests/second
        Hit Rate 0.00 cache requests/second    Miss Rate 0.00 cache requests/second
        Insert Rate 0.00 cache requests/second    Cache full count 0

        apc.cache_by_default 1
        apc.canonicalize 1
        apc.coredump_unmap 0
        apc.enable_cli 0
        apc.enabled 1
        apc.file_md5 0
        apc.file_update_protection 2
        apc.filters
        apc.gc_ttl 3600
        apc.include_once_override 0
        apc.lazy_classes 0
        apc.lazy_functions 0
        apc.max_file_size 16
        apc.mmap_file_mask /tmp/apcphp5.095eRm
        apc.num_files_hint 1024
        apc.preload_path
        apc.report_autofilter 0
        apc.rfc1867 0
        apc.rfc1867_freq 0
        apc.rfc1867_name APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix upload_
        apc.rfc1867_ttl 3600
        apc.serializer default
        apc.shm_segments 1
        apc.shm_size 128M
        apc.slam_defense 0
        apc.stat 0
        apc.stat_ctime 0
        apc.ttl 7200
        apc.use_request_time 1
        apc.user_entries_hint 4096
        apc.user_ttl 7200
        apc.write_lock 1

        Host Status Diagrams:
        Free: 128.0 MBytes (100.0%)    Hits: 1 (50.0%)
        Used: 20.3 KBytes (0.0%)       Misses: 1 (50.0%)
        Detailed Memory Usage and Fragmentation:
        Fragmentation: 0%

    phpinfo shows:

        Server API             CGI/FastCGI
        APC:
        Version                3.1.9
        APC Debugging          Enabled
        MMAP Support           Enabled
        MMAP File Mask         /tmp/apcphp5.JkKDk7
        Locking type           pthread mutex Locks
        Serialization Support  php
        Revision               $Revision: 308812 $
        Build Date             Jul 21 2011 14:31:12

    I followed these steps to find out whether suexec settings would prevent caching: http://www.litespeedtech.com/support/forum/showthread.php?t=4189

        [root@host /]# ps -ef|grep lsphp
        root 20402 17833 0 11:21 pts/0 00:00:00 grep lsphp
        [root@host /]# ps -waux
        root 17833 0.0 0.1 5004 1484 pts/0 S 10:39 0:00 bash

    ...which indicates that there is no lsphp running on the host. I also read the following article and its comments, concluding that in my case the problem is not suexec, as the user apache is the httpd process owner: http://www.brandonturner.net/blog/2009/07/fastcgi_with_php_opcode_cache/

    Also, the suexec command is not recognized when logged in and launched as root on the host, and I am almost confident that there is no cPanel running on the host, so I cannot check whether a setting there would reset the running cache process at some interval.

    This leaves me with few clues where to head next. I tried to set (with chown and chgrp) apache as the owner of the apc.php file and some test PHP files, resulting in a 500 server error. Is there a way to check whether the file permissions prevent APC from staying running? I'm tremendously grateful for any suggestions or help.

    Read the article

  • Shrink SQL Server database

    - by hani
    My SQL Server 2008 database file (.mdf) is nearly 24 MB, but the log file has grown to 15 GB. If I want to shrink the database, what are the important points to take into consideration? Will shrinking cause any index fragmentation, and will it affect my database performance?

    Read the article

  • How do I prevent TCP connection freezes over an OpenVPN network?

    - by Jason R
    New details added at the end of this question; it's possible that I'm zeroing in on the cause.

    I have a UDP OpenVPN-based VPN set up in tap mode (I need tap because I need the VPN to pass multicast packets, which doesn't seem to be possible with tun networks) with a handful of clients across the Internet. I've been experiencing frequent TCP connection freezes over the VPN. That is, I will establish a TCP connection (e.g. an SSH connection, but other protocols have similar issues), and at some point during the session, traffic will cease being transmitted over that TCP session. This seems to be related to points at which large data transfers occur, such as when I execute an ls command in an SSH session, or when I cat a long log file.

    Some Google searches turn up a number of answers like this previous one on Server Fault, indicating that the likely culprit is an MTU issue: during periods of high traffic, the VPN is trying to send packets that get dropped somewhere in the pipes between the VPN endpoints. The above-linked answer suggests using the following OpenVPN configuration settings to mitigate the problem:

        fragment 1400
        mssfix

    This should limit the MTU used on the VPN to 1400 bytes and fix the TCP maximum segment size to prevent the generation of any packets larger than that. This seems to mitigate the problem a bit, but I still frequently see the freezes. I've tried a number of sizes as arguments to the fragment directive: 1200, 1000, 576, all with similar results. I can't think of any strange network topology between the two ends that could trigger such a problem: the VPN server is running on a pfSense machine connected directly to the Internet, and my client is also connected directly to the Internet at another location.

    One other strange piece of the puzzle: if I run the tracepath utility, that seems to band-aid the problem. A sample run looks like:

        [~]$ tracepath -n 192.168.100.91
         1:  192.168.100.90    0.039ms pmtu 1500
         1:  192.168.100.91   40.823ms reached
         1:  192.168.100.91   19.846ms reached
             Resume: pmtu 1500 hops 1 back 64

    The above run is between two clients on the VPN: I initiated the trace from 192.168.100.90 to the destination of 192.168.100.91. Both clients were configured with fragment 1200; mssfix; in an attempt to limit the MTU used on the link. The above results would seem to suggest that tracepath was able to detect a path MTU of 1500 bytes between the two clients. I would assume that it would be somewhat smaller due to the fragmentation settings specified in the OpenVPN configuration. I found that result somewhat strange. Even stranger, however: if I have a TCP connection in the stalled state (e.g. an SSH session with a directory listing that froze in the middle), then executing the tracepath command shown above causes the connection to start up again! I can't figure out any reasonable explanation for why this would be the case, but I feel like this might be pointing toward a solution to ultimately eradicate the problem.

    Does anyone have any recommendations for other things to try?

    Edit: I've come back and looked at this a bit further, and have found only more confounding information. I set the OpenVPN connection to fragment at 1400 bytes, as shown above. Then, I connected to the VPN from across the Internet and used Wireshark to look at the UDP packets that were sent to the VPN server while the stall occurred. None were greater than the specified 1400-byte count, so the fragmentation seems to be functioning properly. To verify that even a 1400-byte MTU would be sufficient, I pinged the VPN server using the following (Linux) command:

        ping <host> -s 1450 -M do

    This (I believe) sends a 1450-byte packet with fragmentation disabled (I at least verified that it didn't work if I set it to an obviously-too-large value like 1600 bytes). These seem to work just fine; I get replies back from the host with no issue. So maybe this isn't an MTU issue at all; I'm just confused as to what else it might be!

    Edit 2: The rabbit hole just keeps getting deeper: I've now isolated the problem a bit more. It seems to be related to the exact OS that the VPN client uses. I have successfully duplicated the problem on at least three Ubuntu machines (versions 12.04 through 13.04). I can reliably duplicate an SSH connection freeze within a minute or so by just cat-ing a large log file. However, if I do the same test using a CentOS 6 machine as a client, then I don't see the problem! I've tested using the exact same OpenVPN client version as I was using on the Ubuntu machines. I can cat log files for hours without seeing the connection freeze. This seems to provide some insight as to the ultimate cause, but I'm just not sure what that insight is.

    I have examined the traffic over the VPN using Wireshark. I'm not a TCP expert, so I'm not sure what to make of the gory details, but the gist is that at some point, a UDP packet gets dropped due to the limited bandwidth of the Internet link, causing TCP retransmissions inside the VPN tunnel. On the CentOS client, these retransmissions occur properly and things move on happily. At some point with the Ubuntu clients, though, the remote end starts retransmitting the same TCP segment over and over (with the transmit delay increasing between each retransmission). The client sends what looks like a valid TCP ACK to each retransmission, but the remote end still continues to transmit the same TCP segment periodically. This extends ad infinitum and the connection stalls.

    My question here would be: does anyone have any recommendations for how to troubleshoot and/or determine the root cause of the TCP issue? It's as if the remote end isn't accepting the ACK messages sent by the VPN client. One common difference between the CentOS node and the various Ubuntu releases is that Ubuntu has a much more recent Linux kernel (from 3.2 in Ubuntu 12.04 to 3.8 in 13.04). A pointer to some new kernel bug, maybe? I'm assuming that if that were so, I wouldn't be the only one experiencing the problem; this doesn't seem like a particularly exotic setup.

    Read the article

  • MOSS 2007 SP2 DB index maintenance

    - by Mike H
    I've read in the "About Service Pack 2 for SharePoint Products and Technologies" paper that SP2 includes an update for the Update Statistics timer job that causes SharePoint to run SQL Server's online index rebuild feature (p. 4). I'm uncertain of the terminology here, but is this the rebuild that SQL Server uses for minor fragmentation (up to around 40%) and that leaves the DB online? I'm also guessing that it will therefore not rebuild severely fragmented indexes, as I think that requires the DB to come offline. Can someone please confirm my understanding here?

    Read the article

  • Configure APC for maximum hit rate

    - by Steven De Groote
    I'm currently running PHP 5 with APC, the latter with its default configuration. However, after setting up Munin to monitor APC, I'm surprised by the results:

        apc.shm_size: 30
        apc.gc_ttl: 3600
        apc.ttl: 0
        Used: 14MB
        Request rate: 100 requests/second
        Fragmentation: 0
        Hit ratio: 80% (dropping to 0 a few times per hour)

    So the obvious question: how can I adapt the configuration to achieve a higher hit rate? I find it very strange that the available memory is not fully used while the hit ratio is still below what I would expect. Thanks for any hints!

    Read the article

  • Limit NFS block size from server side?

    - by paulw1128
    Is it possible to enforce a maximum rsize/wsize in nfsd? I'm having issues related to IP fragmentation (yes, I'm stuck with NFS-over-UDP, contrary to the warnings in the man page), and I have no practical access to the client mount command (it's buried in one of many TFTP boot images). http://nfs.sourceforge.net/nfs-howto/ar01s05.html lists a kernel source parameter limiting the maximum block size, but I'm not going to get away with recompiling the nfsd kernel module, so that's not really an option either :-(

    Read the article

  • apc fragments and configuration

    - by Jourkey
    I have quite a bit of fragmentation. Is this a serious problem? How can I fix this? How can I make sure it doesn't happen again? Are there any other config recommendations for me? Thanks! http://i48.tinypic.com/zv5r4g.jpg

    Read the article

  • HTG Explains: Why Linux Doesn’t Need Defragmenting

    - by Chris Hoffman
    If you’re a Linux user, you’ve probably heard that you don’t need to defragment your Linux file systems. You’ll also notice that Linux distributions don’t come with disk-defragmenting utilities. But why is that? To understand why Linux file systems don’t need defragmenting in normal use – and Windows ones do – you’ll need to understand why fragmentation occurs and how Linux and Windows file systems work differently from each other.

    Read the article

  • How does ecryptfs impact hard disk performance?

    - by Freddi
    I have my home directory encrypted with ecryptfs. Does ecryptfs lead to fragmentation? I have the feeling that reading files, displaying folders, and logging in have continuously become slower and slower (although it was not noticeably slow at the beginning). The hard disk makes a lot of seek noise even if I open only a text file. In /home/.ecryptfs I see many big archives (which probably contain the encrypted files), so I'm wondering whether Linux file system online defragmentation gains anything here. What options do I have to increase performance? Or should I consider just doing without encryption?

    Read the article

  • File access slow after deletion of many files

    - by stefan
    I recently accidentally created millions of files in one folder (roughly 5 million) and, due to limitations, I couldn't process them correctly (the maximum argument count was exceeded for wc, ls, and similar tools). So I deleted them, which took quite a while, but now they're gone. I deleted the files with a regular rm, and none of them were system files. So the files are definitely deleted, but the system is now very slow on file operations: ls, cat, and auto-complete by pressing Tab freeze the terminal for several seconds. Is this some sort of fragmentation issue? Or is it an issue with the files somehow still being present?

    Read the article

  • Bring 2 GB Large Pages to Solaris 10

    - by Giri Mandalika
    A few facts:

    - 8 KB is the default page size on Oracle Solaris 10 and 11 as of this writing
    - Both hardware and software must have support for 2 GB large pages
    - SPARC T4 processors are capable of supporting 2 GB pages
    - The Oracle Solaris 11 kernel has built-in support for 2 GB pages
    - Oracle Solaris 10 has no default support for 2 GB pages
    - Memory-intensive 64-bit applications may benefit the most from using 2 GB pages

    Prerequisites:

    - OS: Oracle Solaris 10 8/11 (Update 10) or later
    - Hardware: Oracle servers with SPARC T4 processors, e.g., SPARC T4-1, T4-2 or T4-4, SPARC SuperCluster T4-4

    Steps to enable 2 GB large pages on Oracle Solaris 10:

    1. Install the latest kernel patch, or ensure that 147440-04 or later was installed (check the patch download instructions).
    2. Add the following line to /etc/system and reboot:

        set max_uheap_lpsize=0x80000000

    3. Finally, check the output of the following command when the system is back online: pagesize -a

    e.g.,

        % pagesize -a
        8192          <-- 8K
        65536         <-- 64K
        4194304       <-- 4M
        268435456     <-- 256M
        2147483648    <-- 2G

        % uname -a
        SunOS jar-jar 5.10 Generic_147440-21 sun4v sparc sun4v

    Also see: Solaris 9 or later: More performance with Large Pages (MPSS); Large page support for instructions (text) in Solaris 10 1/06; Solaris: How To Disable Out Of The Box (OOB) Large Page Support?; Memory fragmentation / Large Pages on Solaris x86

    Read the article

  • Android devices reportedly suffer more hardware failures than those running Windows Phone or iOS, according to WDS

    Android devices reportedly suffer more hardware failures than Windows Phone and iOS devices, according to WDS. The results of a study conducted by the firm WDS show that Android devices fall victim to hardware failures more often than others. The survey was carried out over one year in Europe, North America, Australia and South Africa. It draws on roughly 600,000 customer support calls and reveals that the hardware failure rate of Android smartphones comes to about 14%. According to WDS, the most frequent hardware failures on Android devices are attributable to the fragmentation of the mobile platform and to its adoption by a wide range of manufacturers. ...

    Read the article

  • Google: new Gmail applications for Android and iOS, and a new YouTube for Apple's iDevices

    Gmail: new applications for Android and iOS, and a new YouTube for Apple's iDevices. Fragmentation also affects Google's applications for its own mobile OS. Its new Gmail for Android, which has just been released, is in fact only available for versions 4.0 and later (currently 4.1) of the system. Among the improvements to the mobile mail client are, in no particular order: a new preview of photos in emails, the ability to attach a video or a photo directly, automatic adjustment of the font size to the screen, and swiping messages to the right to sort or delete them. ...

    Read the article

  • Which Git-based MIS tracks workflow like Trac/Redmine, but minimalistically on the console?

    - by hhh
    Definitions: MIS = management information system. There are lists of console-based solutions here and some GUI hacks here. I'm fed up with installing all those dependencies and with GUI things that ship without makefiles, so which console-based MIS would you suggest for a game-development team with a graphics repo, an animation repo, a code repo, a stories repo, etc.? P.S. I do use Git submodules, and the reason for the repo fragmentation is roles and size: certain repos, such as graphics repos, tend to be quite large, so it is better to keep them separate. Perhaps useful to readers interested in this: http://stackoverflow.com/questions/5881578/trac-vs-redmine https://github.com/jchris/sofa

    Read the article

  • VDC Research Webcast: Engineering Business Value in the IoT with Java 8

    - by tangelucci
    Date: Thursday, June 19, 2014
    Time: 9:30 AM PDT, 12:30 PM EDT, 17:30 GMT

    The growth of the Internet of Things (IoT) opens up new service-driven opportunities, delivering increased efficiencies, better customer value, and improved quality of life. Realizing the full potential of the Internet of Things requires that we change how we view and build devices. These next-generation systems provide the core foundation of the services, rapidly transforming data to information to value. From healthcare to building control systems to vehicle telematics systems, the IoT focuses on how connected devices can become more intelligent, enhance interoperability with other devices, systems and services, and drive timely decisions while delivering real business return for all. Join this webcast to learn about:

    - Driving both revenue opportunities and operational efficiencies for the IoT value chain
    - Leveraging Java to make devices more secure
    - How Java can help overcome resource gaps around intelligent connected devices
    - Suggestions on how to better manage fragmentation in embedded devices

    Register here: http://event.on24.com/r.htm?e=793757&s=1&k=4EA8426D0D31C60A2EDB139635FF75AB

    Read the article

  • In your opinion, which are the most successful Android smartphones? Take part in our poll

    In your opinion, which are the most successful Android smartphones? Take part in our poll. Android fragmentation is a thorny problem for developers, all the more so because it is twofold: versions of the OS for smartphones on one side and for tablets on the other, and different versions of the OS within each category. On top of this complexity comes another (unlike iOS): the diversity of hardware, the first consequence of which for developers is the variety of screen sizes. Finally (unlike Windows Phone, whose UI (Metro) cannot be modified by manufacturers), Android lets each manufacturer customize the interface...

    Read the article

  • "Chunked" MemoryStream

    - by Karol Kolenda
    I'm looking for an implementation of MemoryStream that does not allocate memory as one big block, but rather as a collection of chunks. I want to store a few GB of data in memory (64-bit) and avoid the limitations imposed by memory fragmentation.
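    For illustration only, since the question asks about .NET's MemoryStream: a minimal C++14 sketch of the chunked idea (ChunkedBuffer and its members are hypothetical names, not an existing API). Growth happens in fixed-size chunks, so no single contiguous multi-GB allocation is ever requested:

        #include <algorithm>
        #include <cstddef>
        #include <cstring>
        #include <memory>
        #include <vector>

        // Grows by fixed-size chunks instead of one huge block, so the
        // allocator never has to find gigabytes of contiguous space.
        class ChunkedBuffer {
            static constexpr std::size_t kChunkSize = 1 << 20;  // 1 MiB per chunk
            std::vector<std::unique_ptr<char[]>> chunks_;
            std::size_t size_ = 0;  // total bytes written

        public:
            void append(const char* data, std::size_t len) {
                while (len > 0) {
                    std::size_t offset = size_ % kChunkSize;
                    if (offset == 0)  // current chunk full (or none yet): add one
                        chunks_.push_back(std::make_unique<char[]>(kChunkSize));
                    std::size_t n = std::min(len, kChunkSize - offset);
                    std::memcpy(chunks_.back().get() + offset, data, n);
                    size_ += n;
                    data += n;
                    len -= n;
                }
            }
            std::size_t size() const { return size_; }
            char byte_at(std::size_t i) const {  // random access across chunks
                return chunks_[i / kChunkSize][i % kChunkSize];
            }
        };

    A .NET implementation would follow the same shape; keeping each chunk well below the ~85 KB large-object-heap threshold is what avoids the fragmentation pressure the question describes.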

    Read the article

  • How does heap compaction work quickly?

    - by Mason Wheeler
    They say that compacting garbage collectors are faster than traditional memory management because they only have to collect live objects, and by rearranging them in memory so everything's in one contiguous block, you end up with no heap fragmentation. But how can that be done quickly? It seems to me that that's equivalent to the bin-packing problem, which is NP-hard and can't be completed in a reasonable amount of time on a large dataset within the current limits of our knowledge about computation. What am I missing?
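    Not from the question, but as a hedged illustration of the usual answer: a mark-compact collector that slides live objects toward one end preserves their address order, so every forwarding address falls out of a single linear prefix-sum pass; no packing decision is ever made. A toy C++ sketch (plan_compaction is an illustrative name):

        #include <cstddef>
        #include <iostream>
        #include <vector>

        struct Block {
            std::size_t size;
            bool live;
            std::size_t new_addr;  // filled in by the compaction plan
        };

        // Sliding compaction: keep live blocks in their original order and
        // pack them at the bottom of the heap. One linear pass computes the
        // forwarding addresses; a second pass would move the bytes and fix
        // up pointers. No bin packing is involved.
        void plan_compaction(std::vector<Block>& heap) {
            std::size_t cursor = 0;  // next free address at the heap bottom
            for (Block& b : heap) {
                if (b.live) {
                    b.new_addr = cursor;
                    cursor += b.size;
                }
            }
        }

        int main() {
            // A fragmented toy heap: live objects interleaved with garbage.
            std::vector<Block> heap = {
                {64, true}, {128, false}, {32, true}, {256, false}, {96, true}};
            plan_compaction(heap);
            for (const Block& b : heap)
                if (b.live)
                    std::cout << "live block of " << b.size
                              << " bytes slides to offset " << b.new_addr << '\n';
        }

    Bin packing only arises when items must be fitted optimally into multiple fixed-capacity bins; a sliding compactor has a single open-ended region and never reorders, so the work stays linear in the number of live objects.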

    Read the article

  • Reconstructing data from PCAP sniff

    - by Ishi
    Hi everyone! I am trying to sniff HTTP data through libpcap and get the complete HTTP content (header + payload) after processing the TCP payload. As per my discussion at http://stackoverflow.com/questions/2905430/writing-an-http-sniffer-or-any-other-application-level-sniffer , I am facing problems due to fragmentation: I need to reconstruct the whole stream (or defragment it) to get a complete HTTP message, and this is where I need some help. Thanks in anticipation!
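    A hedged sketch of the standard reassembly approach, independent of libpcap specifics (StreamReassembler is a hypothetical name; sequence-number wraparound and partially overlapping retransmissions are elided): buffer each captured TCP segment keyed by its sequence number, then emit whatever has become contiguous, which handles out-of-order arrival:

        #include <cstdint>
        #include <map>
        #include <string>

        // Reassembles one direction of a TCP stream from captured segments.
        // Construct with the sequence number of the first data byte
        // (ISN + 1 after the SYN); segments may arrive out of order.
        class StreamReassembler {
            std::map<uint32_t, std::string> segments_;  // seq -> payload
            uint32_t next_seq_;                          // next byte we expect
            std::string stream_;                         // in-order bytes so far

        public:
            explicit StreamReassembler(uint32_t first_seq) : next_seq_(first_seq) {}

            void add_segment(uint32_t seq, const std::string& payload) {
                if (payload.empty()) return;
                if (seq < next_seq_) return;  // retransmission of emitted data
                segments_.emplace(seq, payload);
                drain();
            }

            const std::string& data() const { return stream_; }  // feed to HTTP parser

        private:
            void drain() {
                // Append every buffered segment that is now contiguous.
                for (auto it = segments_.find(next_seq_); it != segments_.end();
                     it = segments_.find(next_seq_)) {
                    stream_ += it->second;
                    next_seq_ += static_cast<uint32_t>(it->second.size());
                    segments_.erase(it);
                }
            }
        };

    Once data() contains the blank line terminating the HTTP headers, the header block can be parsed and the payload length read from Content-Length (or from the chunked encoding).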

    Read the article

