Search Results

Search found 3768 results on 151 pages for 'lite byte'.

Page 55/151 | < Previous Page | 51 52 53 54 55 56 57 58 59 60 61 62  | Next Page >

  • Free CodeRush Plugin: VS Add New Items Simplified!

    Check out this new CodeRush plugin that saves you time by letting you create items from Visual Studio's "Add New Item" dialog without opening the dialog! Watch the CodeRush DX_NewItem plugin video to learn how to install and use it: Download DX_NewItem Here. Be sure to leave Rory feedback below for another great CodeRush plugin. Thanks, Rory! Want to experience a better Visual Studio? Install CodeRush by downloading the free lite version here: CodeRush Xpress.

    Read the article

  • How can I test different TV display types for my XBLIG game?

    - by Steve Dunn
    The XBLIG submission checklist says to ensure that everything critical to gameplay is visible in the 'TitleSafeArea'. I've received a bug report during play-testing of my game (Crazy Balloon Lite) that says parts of my scrolling 2D map are only half visible (i.e. chopped off at the left of the screen). I've tested my game myself on a 47" TV and also a 19" VGA monitor, both of which look fine. The bug report says the issue occurs on a standard 20" TV. My question is: without buying TVs of different sizes, is there a way to test what my game will look like on different sized displays?
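    Short of buying hardware, one workaround (my suggestion, not from the post): XNA exposes the console's title-safe region as GraphicsDevice.Viewport.TitleSafeArea, so you can outline it every frame and spot anything that crosses it while testing. A minimal sketch, where spriteBatch is the usual SpriteBatch field and pixel is a hypothetical 1x1 white Texture2D created in LoadContent:

        // Inside Game.Draw(): outline the title-safe region in red, 2px thick.
        Rectangle safe = GraphicsDevice.Viewport.TitleSafeArea;
        spriteBatch.Begin();
        spriteBatch.Draw(pixel, new Rectangle(safe.Left, safe.Top, safe.Width, 2), Color.Red);
        spriteBatch.Draw(pixel, new Rectangle(safe.Left, safe.Bottom - 2, safe.Width, 2), Color.Red);
        spriteBatch.Draw(pixel, new Rectangle(safe.Left, safe.Top, 2, safe.Height), Color.Red);
        spriteBatch.Draw(pixel, new Rectangle(safe.Right - 2, safe.Top, 2, safe.Height), Color.Red);
        spriteBatch.End();

    On a Windows PC the property typically spans the whole viewport, so also try substituting an artificially smaller rectangle (say, the inner 80-90% of the viewport) to mimic the overscan of older TVs.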

    Read the article

  • Making a Scrolling Background in a Stacking Game

    - by David Dimalanta
    Is it a good idea to use a LibGDX parallax background for a stacking game (e.g. PAPA STACKer Lite)? For example, the player starts by dragging and dropping blocks; then, when the next piece reaches the top of the screen, the view should automatically scroll up to the next available space. Related to that, does this involve the camera code (OrthographicCamera), so that the screen appears to be 720x1280 while the world is actually, say, 1440x2560? And finally, can the background scrolling either run once from start to finish or repeat infinitely?

    Read the article

  • Installer says I'd need a wireless firmware, but WLAN works without it out of the box

    - by unor
    While installing 12.04.1 (Alternate) on a netbook (Samsung N150Plus), Ubuntu informed me that I'd need the firmware brcm/bcm43xx-0.fw. I skipped this step. If I understand this page correctly, the mentioned firmware is a WLAN driver. However, after the Ubuntu installation was completed, the WLAN worked out of the box. After a few minutes, Ubuntu informed me that there was a firmware update available, which turned out to be the mentioned firmware. Do I need to install this? Is Ubuntu using some kind of "lite" WLAN driver, and might I run into problems with certain networks in the future if I don't use the firmware? Would there be benefits to using the firmware (maybe longer battery life because of lower power consumption)?

    Read the article

  • What is the recommended minimum object size for gzip benefits?

    - by utt73
    I'm working on improving page display times, and one of the methods is to gzip content from the web server. Google recommends: "Note that gzipping is only beneficial for larger resources. Due to the overhead and latency of compression and decompression, you should only gzip files above a certain size threshold; we recommend a minimum range between 150 and 1000 bytes. Gzipping files below 150 bytes can actually make them larger." We serve our content through Akamai, using their network as a proxy and CDN. What they've told me: "Following up on your question regarding the minimum size at which Akamai will compress the requested object when sending it to the end user: the minimum size is 860 bytes." My reply: What is the reason that Akamai's minimum size is 860 bytes? And why, for example, is this not the case for files Akamai serves for Facebook? Google recommends gzipping more aggressively, and that seems appropriate on our site, where the most frequent hits, by far, are AJAX calls that are <860 bytes. Akamai's response: "The reason 860 bytes is the minimum size for compression is twofold: (1) The overhead of compressing an object under 860 bytes outweighs the performance gain. (2) Objects under 860 bytes can be transmitted via a single packet anyway, so there isn't a compelling reason to compress them." So I'm here for some fact checking. Is packet size really the end of the reasoning behind the 860-byte limit? And why would high-traffic sites push this lower, closer to the 150-byte limit... just to save on bandwidth costs, or is there a performance gain in doing so?
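    The small-file penalty is easy to verify empirically. A minimal C# sketch (my own illustration, not from the post) that gzips a tiny payload and a large repetitive one:

        using System;
        using System.IO;
        using System.IO.Compression;
        using System.Text;

        class GzipOverheadDemo
        {
            static int GzipSize(byte[] input)
            {
                using (var ms = new MemoryStream())
                {
                    using (var gz = new GZipStream(ms, CompressionMode.Compress))
                        gz.Write(input, 0, input.Length);
                    return ms.ToArray().Length;   // ToArray still works after the stream is closed
                }
            }

            static void Main()
            {
                // A 22-byte JSON-ish payload: gzip's ~18-byte header/trailer plus
                // deflate framing usually makes the output larger than the input.
                byte[] small = Encoding.UTF8.GetBytes("{\"ok\":true,\"id\":12345}");
                Console.WriteLine(small.Length + " bytes -> " + GzipSize(small) + " bytes gzipped");

                // A 10 KB repetitive payload compresses dramatically.
                byte[] large = Encoding.UTF8.GetBytes(new string('a', 10240));
                Console.WriteLine(large.Length + " bytes -> " + GzipSize(large) + " bytes gzipped");
            }
        }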

    Read the article

  • Group "Discussion" software?

    - by Kayle
    My client wants a "lite" forum... not unlike these Stack Exchange sites, but even a little lighter. There's a screenshot below of the discussion group she likes most. You can also go here to see it for yourself if you like. I don't think traditional forum apps will display, functionally, in this manner. Is there any software I can use to get a similar result? A web service would be acceptable as well.

    Read the article

  • Can't read .cso files but I can read their .hlsl versions?

    - by Jader J Rivera
    Well, I've been trying to read a .cso file to use as a shader for a DirectX program I'm currently making. The problem is that no matter how I implemented a way to read the file, it never worked. After fidgeting around, I discovered that it's only the .cso files I can't read; I can read anything else (meaning the code works), even their .hlsl files. Which is strange, because the .hlsl (high level shader language) files are supposed to be compiled into the .cso (compiled shader object) files. What I'm currently doing is:

        vector<byte> Read(string File)
        {
            vector<byte> Text;
            fstream file(File, ios::in | ios::ate | ios::binary);
            if (file.is_open())
            {
                Text.resize(file.tellg());
                file.seekg(0, ios::beg);
                file.read(reinterpret_cast<char*>(&Text[0]), Text.size());
                file.close();
            }
            return Text;
        }

    If I then call it:

        Read("VertexShader.hlsl"); // works
        Read("VertexShader.cso");  // doesn't work?!

    And I need the .cso version of the shader to draw my sexy triangles. Without it my life and application will never continue, and I have no idea what could be wrong. (I've also asked this at Stack Overflow, but still no answers.)

    Read the article

  • What is the advantage to using a factor of 1024 instead of 1000 for disk size units?

    - by Joe Z.
    When considering the disk space of a storage medium, normally the computer or operating system will represent it in terms of powers of 1024 - a kilobyte is 1,024 bytes, a megabyte is 1,048,576 bytes, a gigabyte is 1,073,741,824 bytes, and so on. But I don't see any practical reason why this convention was adopted. Usually when disk size is represented in kilo-, mega-, or giga-bytes, it has to be converted into decimal first. In places where a power-of-two byte count actually matters (like the block size on a file system), the size is given in bytes anyway (e.g. 4096 bytes). Was it just a little aesthetic novelty that computer makers decided to adopt, but storage medium vendors decided to disregard? Whenever you buy a hard drive, there's always a disclaimer nowadays that says "One gigabyte means one billion bytes". It would feel like using the binary definition of "gigabyte" would artificially inflate the byte count of a device, making drive-makers have to pack 1.1 terabytes into a drive in order to have it show up as "1 TB", or to simply pack 1 terabyte in and have it show up as "931 GB" (and most of them do the latter). Some people have decided to use units like "KiB" or "MiB" in favour of "KB" and "MB" in order to distinguish the two. But is there any merit to the binary prefixes in the first place? There's probably a bit of old history I'm not aware of on this topic, and if there is, I'm looking for somebody to explain it. (Apologies if this is in the wrong place. I felt that a question on best practice might belong here, but I have faith that it will be migrated to the right place if it's incorrect.)
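    The "931 GB" figure is just the two conventions colliding, and it's quick to reproduce. A one-file C# sketch (my illustration):

        using System;

        class Units
        {
            static void Main()
            {
                long marketedBytes = 1000L * 1000 * 1000 * 1000;        // "1 TB" as sold
                double binaryGB = marketedBytes / (1024.0 * 1024 * 1024);
                Console.WriteLine(binaryGB.ToString("F0") + " GiB");    // prints 931
            }
        }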

    Read the article

  • Recommended design pattern to handle multiple compression algorithms for a class hierarchy

    - by sgorozco
    For all you OOD experts: what would be the recommended way to model the following scenario? I have a class hierarchy similar to the following one:

        class Base { ... }
        class Derived1 : Base { ... }
        class Derived2 : Base { ... }
        ...

    Next, I would like to implement different compression/decompression engines for this hierarchy. (I already have code for several strategies that best handle different cases, like file compression, network stream compression, legacy system compression, etc.) I would like the compression strategy to be pluggable and chosen at runtime; however, I'm not sure how to handle the class hierarchy. Currently I have a tightly coupled design that looks like this:

        interface ICompressor
        {
            byte[] Compress(Base instance);
        }

        class Strategy1Compressor : ICompressor
        {
            byte[] Compress(Base instance)
            {
                // Common compression guts for Base class
                ...
                if (instance is Derived1)
                {
                    // Compression guts for Derived1 class
                }
                if (instance is Derived2)
                {
                    // Compression guts for Derived2 class
                }
                // Additional compression logic to handle other class derivations
                ...
            }
        }

    As it is, whenever I add a new derived class inheriting from Base, I have to modify all compression strategies to take this new class into account. Is there a design pattern that allows me to decouple this, and to easily introduce more classes into the Base hierarchy and/or additional compression strategies?
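    For comparison, the decoupling that usually comes up for this exact situation is double dispatch, i.e. the Visitor pattern. A rough C# sketch of the shape (my illustration, not from the post):

        // Each Base subtype routes itself to a type-specific overload on the
        // compressor, so the "is" checks disappear.
        abstract class Base
        {
            public abstract byte[] Accept(ICompressor compressor);
        }

        class Derived1 : Base
        {
            public override byte[] Accept(ICompressor c) { return c.Compress(this); }
        }

        class Derived2 : Base
        {
            public override byte[] Accept(ICompressor c) { return c.Compress(this); }
        }

        interface ICompressor
        {
            byte[] Compress(Derived1 instance);
            byte[] Compress(Derived2 instance);
        }

    The trade-off: adding a new derived class now means adding an overload to every compressor instead, so this only wins if compression strategies change more often than the hierarchy does.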

    Read the article

  • How do I parse a header with two different versions [ID3] while avoiding code duplication?

    - by user66141
    I really hope you can give me some interesting viewpoints for my situation, because I am not satisfied with my current approach. I am writing an MP3 parser, starting with an ID3v2 parser. Right now I'm working on the extended header parsing. My issue is that the optional header is defined differently in versions 2.3 and 2.4 of the tag. The 2.3 version of the optional header is defined as follows:

        struct ID3_3_EXTENDED_HEADER {
            DWORD dwExtHeaderSize;    // Extended header size (either 6 or 8 bytes, excluded)
            WORD  wExtFlags;          // Extended header flags
            DWORD dwSizeOfPadding;    // Size of padding (tag size excluding frames and headers)
        };

    While the 2.4 version is defined as:

        struct ID3_4_EXTENDED_HEADER {
            DWORD dwExtHeaderSize;    // Extended header size (synchsafe int)
            BYTE  bNumberOfFlagBytes; // Number of flag bytes
            BYTE  bFlags;             // Flags
        };

    How could I parse the header while minimizing code duplication? Using two different functions to parse each version sounds less than great, and a single function with a different flow for each case is similar. Are there any good practices for this kind of issue? Any tips for avoiding code duplication? Any help would be appreciated.
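    One shape that keeps the shared flow in a single place is table-driven parsing: read the field both versions have in common, then dispatch the version-specific tail. A C# analogue of the C structs above (hypothetical names, my sketch, not from the post):

        using System;
        using System.Collections.Generic;
        using System.IO;

        class ExtendedHeader
        {
            public uint Size;                // common to 2.3 and 2.4
            public byte[] VersionSpecific;   // raw tail, interpreted per version
        }

        static class ExtendedHeaderParser
        {
            // minor tag version -> number of version-specific bytes after the size
            static readonly Dictionary<int, int> TailLength = new Dictionary<int, int>
            {
                { 3, 6 },  // WORD flags + DWORD padding size
                { 4, 2 },  // flag-byte count + flags
            };

            public static ExtendedHeader Parse(BinaryReader r, int minorVersion)
            {
                var h = new ExtendedHeader();
                h.Size = r.ReadUInt32();   // note: 2.4 stores this as a synchsafe
                                           // int, so decode it before using it
                h.VersionSpecific = r.ReadBytes(TailLength[minorVersion]);
                return h;
            }
        }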

    Read the article

  • An alternative to multiple inheritance when creating an abstraction layer?

    - by sebf
    In my project I am creating an abstraction layer for some APIs. The purpose of the layer is to make multi-platform support easier, and also to simplify the APIs down to the feature set that I need while providing some functionality whose implementation will be unique to each platform. At the moment, I have implemented it by defining an abstract class, which has methods that create objects implementing interfaces. The abstract class and these interfaces define the capabilities of my abstraction layer. The implementation of these in my layer should of course be arbitrary from the POV of my application, but I have done it, for my first API, by creating chains of subclasses which add more specific functionality as the features of the APIs they expose become less generic. An example will probably demonstrate this better:

        // The interface as seen by the application
        interface IGenericResource
        {
            byte[] GetSomeData();
        }

        interface ISpecificResourceOne : IGenericResource
        {
            int SomePropertyOfResourceOne { get; }
        }

        interface ISpecificResourceTwo : IGenericResource
        {
            string SomePropertyOfResourceTwo { get; }
        }

        public abstract class MyLayer
        {
            public abstract ISpecificResourceOne CreateResourceOne();
            public abstract ISpecificResourceTwo CreateResourceTwo();
            public abstract void UseResourceOne(ISpecificResourceOne one);
            public abstract void UseResourceTwo(ISpecificResourceTwo two);
        }

        // The layer as created in my library
        public class LowLevelResource : IGenericResource
        {
            public byte[] GetSomeData() { ... }
        }

        public class ResourceOne : LowLevelResource, ISpecificResourceOne
        {
            public int SomePropertyOfResourceOne { get { ... } }
        }

        public class ResourceTwo : ResourceOne, ISpecificResourceTwo
        {
            public string SomePropertyOfResourceTwo { get { ... } }
        }

        public partial class Implementation : MyLayer
        {
            public override void UseResourceOne(ISpecificResourceOne one)
            {
                DoStuff((ResourceOne)one);
            }
        }

    As can be seen, I am essentially trying to have two inheritance chains on the same object, but of course I can't do this, so I simulate the second chain with interfaces. The thing is, though, I don't like using interfaces for this; it seems wrong. In my mind an interface defines a contract: any class that implements that interface should be usable where that interface is used, but here that is clearly not the case, because the interfaces are being used to allow an object from the layer to masquerade as something else without the application needing access to its definition. What technique would allow me to define a comprehensive, intuitive collection of objects for an abstraction layer, while their implementation remains independent? (Language is C#)
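    One direction that avoids the masquerade entirely (my sketch, not from the post) is composition: each public layer object wraps the platform resource it builds on instead of inheriting from it, so nothing has to live in two hierarchies at once.

        // Sketch: the resource exposes only its own contract and forwards the
        // generic calls to the wrapped platform-specific object.
        public class ComposedResourceTwo : ISpecificResourceTwo
        {
            private readonly LowLevelResource inner;   // platform-specific guts

            public ComposedResourceTwo(LowLevelResource inner) { this.inner = inner; }

            public byte[] GetSomeData() { return inner.GetSomeData(); }
            public string SomePropertyOfResourceTwo { get { return "..."; } }
        }

    The cast in UseResourceOne then becomes a lookup of the wrapped instance, and the interfaces go back to being honest contracts.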

    Read the article

  • DevConnections: Customer Testimonials

    Check out this short video of a couple of very satisfied DevExpress customers who dropped by the booth to share their success stories using DevExpress. There were many other customers that we either didn't or couldn't get on tape, but I'd like to say "Thank You!" to everyone who holds a DevExpress license. Want to experience a better Visual Studio? Install CodeRush by downloading the free lite version here: CodeRush Xpress. Or better yet, try the full-blown package free for 30 days: CodeRush.

    Read the article

  • Reconstruct a file from a TCP stream

    - by Abhishek Chanda
    I have a client and a server, and a third box which sees all packets from the server to the client (but not the other way around). Now, when the client requests a file from the server (over HTTP), the third box sees the response, and I am trying to reconstruct the file there. I am using libpcap to capture the TCP segments. Here is what I did:
    1. Listen for packets on an interface.
    2. Group all packets which have the same ACK number.
    3. Sort each group by SEQ number.
    4. Extract the data from each packet, concatenate it, and write it to disk.
    The problem is that the file thus generated is not exactly the same as the original file. Does everything sound correct here? Some more details: I am using C++; the packet data is being stored as std::vector<char>; and I did change the byte order while reading the ACK and SEQ numbers from the packet, using ntohl. I am not sure if I need to change the byte order for the data as well. I tried reversing the data from each packet before combining them; even that did not work. Is there something I am missing?
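    For reference, a common reassembly approach keys segments by their sequence number alone (the ACK field describes the reverse direction) and writes each payload at offset seq - initialSeq. A C# sketch of that idea (my illustration, assuming no sequence-number wraparound and a complete capture):

        using System;
        using System.Collections.Generic;
        using System.IO;

        class StreamReassembler
        {
            // seq number -> payload; SortedDictionary keeps segments in stream order
            readonly SortedDictionary<uint, byte[]> segments = new SortedDictionary<uint, byte[]>();

            public void AddSegment(uint seq, byte[] payload)
            {
                if (payload.Length > 0)
                    segments[seq] = payload;   // a retransmission simply overwrites
            }

            public void WriteTo(Stream output, uint initialSeq)
            {
                foreach (var kv in segments)
                {
                    output.Seek(kv.Key - initialSeq, SeekOrigin.Begin);   // position in the stream
                    output.Write(kv.Value, 0, kv.Value.Length);
                }
            }
        }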

    Read the article

  • UDF Partition reported full when it is not

    - by Capt.Nemo
    I was using these instructions to set up an external hard disk with UDF. I have been able to set up a multi-partition system using those instructions, but I seem to have hit a wall where the partition is reported as full while writing to the disk, while every other tool available to me reports it as free. (Relevant lshw output and a screenshot of the disk are in the full question.) Both the output of df and the file manager (caja) report the disk as free:

        Filesystem  Size  Used  Avail  Use%  Mounted on
        /dev/sda9   9.0G  7.6G  910M    90%  /
        udev        974M   12K  974M     1%  /dev
        /dev/sda1    50G   47G  295M   100%  /media/Data
        /dev/sda6    49G   41G  5.9G    88%  /home
        /dev/sda2   155G  127G   29G    82%  /media/Entertainment
        /dev/sda8    14G   13G  516M    96%  /media/Stuff
        /dev/sdb2   120G  1.9G  112G     2%  /media/3c887659-5676-4946-875b-b797be508ce7
        /dev/sdb3    11G  2.6G  7.7G    25%  /media/108b0a1d-fd1a-4f38-b1c6-4ad1a20e34a3
        /dev/sdb1   802G   34G  768G     5%  /media/disk

    I seem to have hit a wall near the 35 GB mark. Despite the usage being shown as 35 GB of 860 GB everywhere, the following happens on a write attempt:

        [2017][/media/Dory]$ echo D >> echo
        bash: echo: write error: No space left on device

    Writing byte by byte, the maximum I can take it to is 34719248K. The weirdest part is that when the disk is mounted in Windows, Windows can write to it easily, and the writes read back fine in Ubuntu. However, the used-bytes figure remains at 34719248K in Ubuntu (it does go higher in Windows).

    Read the article

  • Technology choice: Spring 3.0 or Java EE 6 for your Java projects in 2010?

    At the end of 2009, two much-anticipated releases were finally delivered. On one side, Spring 3.0, a framework already heavily used in the enterprise, arriving with a few new features: REST support, an expression language, etc. Its main advantage: it can already be used on the servers that are ubiquitous in production (Tomcat, WebSphere 6.1, ...). On the other side, Java EE 6, a new version of the standard that is much lighter and more flexible than previous versions. It also offers a Web Profile, which provides functionality quite similar to what Spring offers (EJB Lite, dependency injection, ...). For now, the only available implementation is GlassFish. On your side, which solution do you us...

    Read the article

  • How can I compile an IP-address-to-country lookup database and make it available for free?

    - by Nick
    How would I go about compiling an accurate database of IP addresses and their related countries to make available as an open source download for any web developer who wants to perform a geographic IP lookup? It seems that a company called MaxMind has a monopoly on geographic IP data, because most online tutorials I've seen for country lookups based on IP addresses start by suggesting a subscription to MaxMind's paid service (or their less accurate free 'Lite' version). I'm not completely averse to paying for their solution or using the free one, but the concept of an accurate open source equivalent that anyone can use without restriction appeals to me, and I think it would be useful for the web development community. How is geographic IP data collected, and how realistic is it to hope to maintain an up-to-date open version?
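    For background on how such data is collected: the five regional registries (ARIN, RIPE NCC, APNIC, LACNIC, AFRINIC) each publish daily "delegated" statistics files listing the country each allocated block was assigned to, and open geo-IP datasets generally start from those. A C# sketch (my illustration, not from the post) of turning one such pipe-delimited file into lookup ranges:

        // Format per data row: registry|cc|type|start|value|date|status
        // For ipv4 rows, "value" is the number of addresses in the block.
        using System;
        using System.IO;
        using System.Linq;

        class IpRange
        {
            public uint Start;     // first address, as a 32-bit integer
            public uint Count;     // number of addresses in the block
            public string Country; // ISO 3166 two-letter code
        }

        static class DelegatedParser
        {
            public static IpRange[] Parse(string path)
            {
                return File.ReadLines(path)
                    .Where(l => !l.StartsWith("#"))                  // skip comments
                    .Select(l => l.Split('|'))
                    .Where(f => f.Length >= 7 && f[2] == "ipv4"      // data rows only,
                             && f[1].Length == 2)                    // skip summary lines
                    .Select(f => new IpRange
                    {
                        Start = ToUInt(f[3]),
                        Count = uint.Parse(f[4]),
                        Country = f[1],
                    })
                    .ToArray();
            }

            static uint ToUInt(string dotted)
            {
                var p = dotted.Split('.').Select(byte.Parse).ToArray();
                return ((uint)p[0] << 24) | ((uint)p[1] << 16) | ((uint)p[2] << 8) | p[3];
            }
        }

    Note that registry data records where a block was allocated, not where it is actually used, which is one reason commercial databases that add measurement data tend to be more accurate.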

    Read the article

  • Petstore using Java EE 6? Almost!

    - by arungupta
    Antonio Goncalves, a Java Champion, JUG leader, and well-known author, has started building a Petstore-like application using Java EE 6. The complete end-to-end sample application builds an eCommerce website and follows the Java EE 6 design principles of being simple and easy to use to its core. It uses several technologies from the platform, such as JPA 2.0, CDI 1.0, Bean Validation 1.0, EJB Lite 3.1, JSF 2.0, and JAX-RS 1.1. The two goals of the project are:
    • use Java EE 6 and just Java EE 6: no external framework or dependency
    • make it simple: no complex business algorithm
    The application works with GlassFish and JBoss today, and there are plans to add support for TomEE. Download the source code from github.com/agoncal/agoncal-application-petstore-ee6. And feel free to fork it if you want to use a fancy toolkit as the front-end or show some nicer back-end integration. Some other sources of similar end-to-end applications are:
    • Java EE 6 Tutorial
    • Java EE 6 Galleria
    • Java EE 6 Hands-on Lab

    Read the article

  • IDirect3DDevice9::GetRenderTargetData() returns no data

    - by P. Avery
    I've got a simple function to get the render-target data of an RT (with the default pool). This particular RT has a resolution of 1x1 (it's the 10th and final mip of a texture). Here is my code to get the data into IDirect3DSurface9 *pTargetSurface:

        IDirect3DSurface9 *pSOS = NULL;
        pd3dDevice->CreateOffscreenPlainSurface( 1, 1, D3DFMT_A8R8G8B8, D3DPOOL_SYSTEMMEM, &pSOS, NULL );

        // get residual energy
        if( FAILED( hr = pd3dDevice->GetRenderTargetData( pTargetSurface, pSOS ) ) )
        {
            DebugStringDX( ClassName, "Failed to IDirect3DDevice9::GetRenderTargetData() at DownsampleArea()", __LINE__, hr );
            goto Exit;
        }

        // lock surface
        if( FAILED( hr = pSOS->LockRect( &rct, NULL, D3DLOCK_READONLY ) ) )
        {
            DebugStringDX( ClassName, "Failed to IDirect3DSurface9::LockRect() at DownsampleArea()", __LINE__, hr );
            goto Exit;
        }

        // get residual energy from downsampled texture
        pByte = ( BYTE* )rct.pBits;
        D3DXVECTOR4 vEnergy;
        vEnergy.z = ( float )pByte[ 0 ] / 255.0f;
        vEnergy.y = ( float )pByte[ 1 ] / 255.0f;
        vEnergy.x = ( float )pByte[ 2 ] / 255.0f;
        vEnergy.w = ( float )pByte[ 3 ] / 255.0f;

        V( pSOS->UnlockRect() );

    All formatting and settings are correct, and DirectX in debug mode shows no errors. The problem is that the four bytes above are 0, which I know to be incorrect from debugging with PIX: it shows the RGB channels as 0.078 and alpha as 1. Those values are not below what a single byte can represent (1/255). Any ideas? Am I copying the render-target data correctly?

    Read the article

  • Tell the CDI 2 Expert Group What You Think!

    - by reza_rahman
    Since its introduction in Java EE 6, CDI has become a key API for the platform. CDI 1.1 was a relatively minor release included in Java EE 7, as was CDI 1.2 (to be included in GlassFish 4.0.1). We have much higher expectations for CDI 2 (projected to be included in Java EE 8) under the new leadership of Antoine Sabot-Durand. Much as we conducted the Java EE 8 survey to solidify the future direction of the platform, CDI 2 is now undergoing the same effort. Towards this goal, the CDI 2 leadership is now soliciting feedback on some very specific items via an open survey. Topics include the likes of Java SE bootstrap, asynchronous processing, modularity, EJB-style @Startup and @Asynchronous in CDI, configuration, and CDI Lite. You can of course also provide free-form input on anything that's not on the survey. Take the survey now on the CDI specification site and help shape the future of CDI 2 and Java EE 8!

    Read the article

  • USB Port not working for Ubuntu 14.04 LTS

    - by user292125
    I have recently installed Ubuntu 14.04 LTS on my system, but the USB port does not seem to be working. When I type lsusb in a terminal, the following is displayed:

        Bus 002 Device 114: ID 0951:1665 Kingston Technology
        Bus 002 Device 004: ID 413c:2106 Dell Computer Corp. Dell QuietKey Keyboard
        Bus 002 Device 003: ID 04ca:0062 Lite-On Technology Corp.
        Bus 002 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 001 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    However, the disk utility shows no media. Kindly help me with this; suggestions will be appreciated.

    Read the article

  • Which way to make money on Android? Ads, purchase, or a trial version?

    - by otakun85
    Hi, I want to release an app. In your experience, what is the best way to make money on Android? I've seen some solutions out there, but which one is the best?

    Solution A: the app is free and has ads.
      + long-term income
      + free for the customer
      ~ money depends on usage
      - ads make apps ugly
      - needs an internet connection, which your app may not otherwise need

    Solution B: the app is purchased.
      + instant income
      ~ money only once
      ~ the customer has only 15 minutes to try out the app
      - you need a credit card on the Google market
      - the customer has to hand over money :D

    Solution C: a trial version.

    Solution D: a free "Lite" version with ads and a paid full version without ads.

    Any other ideas?

    Read the article

  • How to disguise a server-side mob as another?

    - by Shaun Wild
    I've been working on a Minecraft server mod, and I want to be able to add a new entity to the server but have the server send packets to the client imitating another mob. For example, let's say I have EntityPlayerNPC.class. What I want is to have all of the packets that get sent to the client look like those of another player on the server, thereby allowing me to add custom NPCs. Thinking about the theory, I'm sure this can be done. I've tried looking around for where the packets are being sent from and whatnot; can anyone think up a solution? Edit: I tried adding a new constructor to the Packet20NamedEntitySpawn class, like so:

        public Packet20NamedEntitySpawn(String username, EntityLiving e)
        {
            this.entityId = 0;
            this.name = username;
            this.xPosition = MathHelper.floor_double(e.posX * 32.0D);
            this.yPosition = MathHelper.floor_double(e.posY * 32.0D);
            this.zPosition = MathHelper.floor_double(e.posZ * 32.0D);
            this.rotation = (byte)((int)(e.rotationYaw * 256.0F / 360.0F));
            this.pitch = (byte)((int)(e.rotationPitch * 256.0F / 360.0F));
            this.metadata = e.getDataWatcher();
        }

    Unfortunately, that didn't work. :(

    Read the article

  • NeDB: a lightweight database written in JavaScript is released; simple and persistent, it can be used in-memory

    "Where could I find a lightweight database to use in my Node.js projects?" A developer named Louis Chatriot asked himself this question. Finding nothing concrete in his research that met his expectations, he developed his own solution in JavaScript, which he subsequently named NeDB. Chatriot's goal is not to compete with established heavyweights like MongoDB or Couch; indeed, NeDB is derived from MongoDB. Chatriot compares it to a kind of SQLite tailored for Node.js projects. NeDB supports indexing. The developer...

    Read the article

  • What does the Sys_PageIn() function do in Quake?

    - by Philip
    I've noticed that the following function is called during the initialization of the original Quake:

        volatile int sys_checksum;

        // **lots of code**

        void Sys_PageIn (void *ptr, int size)
        {
            byte *x;
            int j, m, n;

            // touch all memory to make sure it's there. The 16-page skip is to
            // keep Win 95 from thinking we're trying to page ourselves in (we are
            // doing that, of course, but there's no reason we shouldn't)
            x = (byte *)ptr;

            for (n=0 ; n<4 ; n++)
            {
                for (m=0 ; m<(size - 16 * 0x1000) ; m += 4)
                {
                    sys_checksum += *(int *)&x[m];
                    sys_checksum += *(int *)&x[m + 16 * 0x10000];
                }
            }
        }

    I think I'm just not familiar enough with paging to understand this function. The void *ptr passed to the function is a recently malloc()'d piece of memory that is size bytes big. This is the whole function (j is an unreferenced variable). My best guess is that the volatile int sys_checksum forces the system to physically read all of the space that was just malloc()'d, perhaps to make sure those pages are actually resident in memory. Is this right? And why would someone do this? Is it for some antiquated Win95 reason?

    Read the article
