Search Results

Search found 11409 results on 457 pages for 'large teams'.

Page 204/457 | < Previous Page | 200 201 202 203 204 205 206 207 208 209 210 211  | Next Page >

  • Question with "extern" in C

    - by why
    When programming, I would like to split one large file (the one containing the main function) into many small files. A common case is that functions in the small files need to modify a variable defined in the main file, so I think extern is very useful! For instance:

        in main.c:
            extern int i = 100;

        in small.c:
            extern int i;
            fprintf(stdout, "var from main file: %d\n", i);

    I just want to know: is my understanding right?
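    For reference, a minimal two-file sketch of the usual extern arrangement (an editorial illustration, not from the post; the helper function report() is an assumption). The shared variable is defined once, with its initializer and without extern, in main.c, and is only declared extern where it is used:

        /* main.c */
        #include <stdio.h>

        int i = 100;             /* the single definition of the shared variable */
        void report(void);       /* implemented in small.c */

        int main(void)
        {
            report();            /* code in small.c can read or modify i */
            return 0;
        }

        /* small.c */
        #include <stdio.h>

        extern int i;            /* declaration only; refers to the definition in main.c */

        void report(void)
        {
            fprintf(stdout, "var from main file: %d\n", i);
        }

    Compiled together (for example, cc main.c small.c), both files refer to the same i.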

    Read the article

  • Good tool to convert sourcecode to PDF?

    - by Toad
    I have the daunting task of getting familiar with, and possibly re-architecting, large pieces of old source code. I was hoping there would be a nice tool to convert PHP (in my case), but let's make it more general: any language to PDF, for offline browsing on a Kindle or iPad. It would be ideal if it created indexes/hyperlinks automatically, so that function calls can easily be browsed into.

    Read the article

  • Improving Javascript Load Times - Concatenation vs Many + Cache

    - by El Yobo
    I'm wondering which of the following will give better performance for a page that loads a large amount of JavaScript (jQuery + jQuery UI + various other JavaScript files). I have gone through most of the YSlow and Google Page Speed material, but am left wondering about a particular detail.

    A key thing for me here is that the site I'm working on is not on the public net; it's a business-to-business platform where almost all users are repeat visitors (and therefore have caches of the data, which is something YSlow assumes will not be the case for a large number of visitors).

    First up, the standard approach recommended by tools such as YSlow is to concatenate the JavaScript, compress it, and serve it up in a single file loaded at the end of your page. This approach sounds reasonably effective, but I think a key part of the reasoning here is to improve performance for users without cached data.

    The system I currently have is something like this:

    * All JavaScript files are compressed and loaded at the bottom of the page.
    * All JavaScript files have far-future cache expiration dates, so they will remain (for most users) in the cache for a long time.
    * Pages only load the JavaScript files that they require, rather than loading one monolithic file, most of which will not be required.

    Now, my understanding is that, if the cache expiration date for a JavaScript file has not been reached, then the cached version is used immediately; there is no HTTP request sent to the server at all. If this is correct, I would assume that having multiple script tags is not causing any performance penalty, as I'm still not making any additional requests on most pages (recalling from above that almost all users have populated caches).

    In addition to this, not loading the unneeded JS means that the browser doesn't have to interpret or execute all the additional code it isn't going to need; as a B2B application, most of our users are unfortunately stuck with IE6 and its painfully slow JS engine. Another benefit is that, when code changes, only the affected files need to be fetched again, rather than the whole set (granted, the whole set would only need to be fetched once, so this is not so much of a benefit). I'm also looking at using LabJS to allow for parallel loading of the JS when it's not cached.

    So, what do people think is the better approach? In a similar vein, what do you think about a similar approach to CSS - is monolithic better?

    Read the article

  • How to load an image in its correct pixel depth

    - by extropy
    I have a bunch of monochrome (1bpp) PNG images that I want to load and pass to pdfSharp. Image.FromFile loads the images fine, but it always uses 32bpp, regardless of the pixel depth of the file. That results in very large generated PDF files. Is there a way to load images at their native pixel depth?

    Read the article

  • Sybase PowerDesigner Change Many (Find/Replace/Convert) Data Item's Data Types

    - by Andy
    Hello, I have a relatively large Conceptual Data Model in PowerDesigner. After generating a Physical Data Model and seeing the DBMS data types, I need to update all of the data types (NUMBER/TEXT) for each data item. I'd like to either do a find/replace within the Conceptual Data Model or somehow map to different data types when creating the Physical Data Model. For example, change the automatic conversion of Text - CLOB to Text - NVARCHAR(20). Thanks!

    Read the article

  • It says I have an indented block when I don't?

    - by user3728373
        def cave():
            global key
            global response
            print('''
        You find yourself standing infront of a cave.
        You venture into the cave to find a large door blocking your path.
        (insert key, turn around''')
            response = input("Enter a command: ")
            while response != 'insert key' or response != 'turn around':
                if response == 'insert key' or response == 'turn around':
                    break
                print('Choose one of the options: ')
                response = input()
            if response == 'insert key':
                if key == 1:
                    win()
                else:
                    print('''You don't have a key. Get One!!''')
            elif response == 'turn around':
                home()

    Read the article

  • Reliable data serving

    - by Madhu
    How can I make sure my file serving is reliable and scalable? How many parallel requests can it handle? I am thinking beyond hardware capability and bandwidth. I am following http://stackoverflow.com/questions/55709/streaming-large-files-in-a-java-servlet

    Read the article

  • Why are there differing definitions of INT64_MIN? And why do they behave differently?

    - by abelenky
    The stdint.h header at my company reads:

        #define INT64_MIN -9223372036854775808LL

    But in some code in my project, a programmer wrote:

        #undef INT64_MIN
        #define INT64_MIN (-9223372036854775807LL - 1)

    He then uses this definition in the code. The project compiles with no warnings or errors. When I attempted to remove his definition and use the default one, I got:

        error: integer constant is so large that it is unsigned

    The two definitions appear to be equivalent. Why does one compile fine while the other fails?
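    A minimal C sketch of the distinction (an editorial illustration, not from the post; the macro names BAD_INT64_MIN and GOOD_INT64_MIN are invented for it). The unary minus is not part of the constant, so 9223372036854775808LL must stand on its own and it exceeds LLONG_MAX (9223372036854775807); the second form builds the minimum only from constants that fit:

        #include <stdio.h>

        /* #define BAD_INT64_MIN  (-9223372036854775808LL) */
        /* the literal above is out of range for long long, so typical compilers     */
        /* report "integer constant is so large that it is unsigned" when it is used */

        #define GOOD_INT64_MIN (-9223372036854775807LL - 1)  /* every constant is representable */

        int main(void)
        {
            printf("%lld\n", GOOD_INT64_MIN);   /* prints -9223372036854775808 */
            return 0;
        }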

    Read the article

  • Can computer clusters be used for general everyday applications?

    - by Matt Pascoe
    Does anyone know how a computer cluster can be used for everyday applications, such as video games? I would like to build a computer cluster that can run applications over the cluster, even ones not specifically designed for clusters, and still see a performance increase. One use would be for video games, but I would also like to use the increased computing power to run a large network of virtualized machines.

    Read the article

  • A quick way to map an unordered list of longs to buffer locations?

    - by alhazen
    I have a large number of points (indexed by long) that are processed by multiple threads, and I'm using a buffer to hold the output results in order. As the number of points processed is huge, what would be an efficient way to map the indexes of the points to the corresponding ordered position in the buffer? Example:

        long     bufferIndex           bufferIndex
        index    (if BufferSize = 2)   (if BufferSize = 4)
        ----------------------------------------------
        2938     0                     0
        2939     1                     1
        2941     1                     3
        2940     0                     2

    Thanks.
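    If the point indexes really do form a dense run starting at some known base (an assumption on my part; the base 2938 and the sample values below are taken from the example), the mapping is just an offset modulo the buffer size, as in this minimal C sketch:

        #include <stdio.h>

        /* slot in the output buffer for a given point index */
        static long slot_for(long index, long base, long buffer_size)
        {
            return (index - base) % buffer_size;
        }

        int main(void)
        {
            long base = 2938;                          /* first index in the example */
            long indexes[] = { 2938, 2939, 2941, 2940 };

            for (int k = 0; k < 4; k++)
                printf("%ld -> %ld (BufferSize 2), %ld (BufferSize 4)\n",
                       indexes[k],
                       slot_for(indexes[k], base, 2),
                       slot_for(indexes[k], base, 4));
            return 0;
        }

    A concurrent version would still need to handle wrap-around once a buffer-full of results has been flushed, which the sketch ignores.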

    Read the article

  • Tuning garbage collections for low latency

    - by elec
    I'm looking for arguments as to how best to size the young generation (with respect to the old generation) in an environment where low latency is critical. My own testing tends to show that latency is lowest when the young generation is fairly large (e.g. -XX:NewRatio < 3); however, I cannot reconcile this with the intuition that the larger the young generation, the more time it should take to garbage collect. The application runs on Linux, JDK 6 before update 14, i.e. G1 is not available.

    Read the article

  • Detect the file size of a link's href using JavaScript

    - by noblethrasher
    Hi, I would like to write a script to detect the file size of the target of a link on a web page. Right now I have a function that finds all links to PDF files (i.e. the href ends with '.pdf') and appends the string '[pdf]' to the innerText. I would like to extend it so that I can also append some text advising the user that the target is a large file (e.g. greater than 1MB). Thanks.

    Read the article

  • What libraries provide cross-platform 3D and P2P support?

    - by uckelman
    I'm trying to find a constellation of libraries which, taken together, meet the following requirements:

    * Smooth scaling, rotation, panning (in two dimensions). I'll have a large bitmap (or SVG, in some cases), maybe up to 10000x10000 pixels, which serves as a map, with some middling number of small bitmaps (or, again, possibly SVG) that can be dragged around over it. I need to be able to zoom, rotate, and pan this scene; however, the view will always be normal to (i.e., looking head-on at) the large bitmap, so I'm not really using the depth dimension.
    * Peer-to-peer. I'd like for multiple users to be able to connect in order to share one of the scenes mentioned above, preferably peer-to-peer, without much configuration by the user. I'm intending to have a server running for cases where users are unable to connect P2P; I'd like to have the failover happen automatically, or possibly have some way of promoting clients who are capable to be servers themselves.
    * Synchronization. Once a user has started dragging one of the small bitmaps (a piece), no other user should be able to drag that piece until the drag stops. I haven't thought of exactly how to do this; there might be a simple solution, or this kind of synchronization might be something that a library provides.
    * Cross(ish)-platform. I need to be able to run on Linux, Windows, and Mac OS. It would be nice to also be able to run on tablets. Having mostly the same code for all platforms is a plus, but not absolutely necessary.
    * (L)GPL compatible. I'm planning to release under the LGPL or GPL, preferably the latter, so I need libraries which have compatible licenses.

    I'm not set on any particular language; I'd like to use the library or libraries which make the work easiest, though my preference is to work in at most two languages for the project. (The Model could potentially be in one language and the View in another, so they could talk to each other via some protocol I define, if that would get me a better selection of libraries to use.) Can anyone offer suggestions for what to use?

    Read the article

  • What are GC holes?

    - by tianyi
    I wrote a long-lived TCP connection socket server in C#. Memory spikes happen in my server. I used dotNet Memory Profiler (a tool) to detect where the memory leaks. The profiler indicates that the private heap is huge, and the memory looks something like the breakdown below (the numbers are not real; what I want to show is that the "Holes" in generation #0 and generation #2 are very, very large, while the data size is normal):

        Managed heaps - 1,500,000KB
          Normal heap - 1,400,000KB
            Generation #0 - 600,000KB
              Data - 100,000KB
              "Holes" - 500,000KB
            Generation #1 - xxKB
              Data - 0KB
              "Holes" - xKB
            Generation #2 - xxxxxxxxxxxxxKB
              Data - 100,000KB
              "Holes" - 700,000KB
          Large heap - 131,072KB
            Large heap - 83KB
            Overhead/unused - 130,989KB
              Overhead - 0KB

    However, what is a GC hole? I read an article about them: http://kaushalp.blogspot.com/2007/04/what-is-gc-hole-and-how-to-create-gc.html The author said: The code snippet below is the simplest way to introduce a GC hole into the system.

        //OBJECTREF is a typedef for Object*.
        {
            PointerTable *pTBL = o_pObjectClass->GetPointerTable();

            OBJECTREF aObj = AllocateObjectMemory(pTBL);
            OBJECTREF bObj = AllocateObjectMemory(pTBL);

            //WRONG!!! "aObj" may point to garbage if the second
            //"AllocateObjectMemory" triggered a GC.
            DoSomething(aObj, bObj);
        }

    All it does is allocate two managed objects, and then does something with them both. This code compiles fine, and if you run simple pre-checkin tests, it will probably "work." But this code will crash eventually. Why? If the second call to "AllocateObjectMemory" triggers a GC, that GC discards the object instance you just assigned to "aObj". This code, like all C++ code inside the CLR, is compiled by a non-managed compiler, and the GC cannot know that "aObj" holds a root reference to an object you want kept live.

    ========================================================================

    I can't understand what he explained. Does the sample mean aObj becomes a wild pointer after the GC? Does it mean something like:

        {
            aObj = (object *)malloc(sizeof(object));
            free(aObj);
            function(aObj);   /* ? */
        }

    I hope somebody can explain it.
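    As an editorial aside, a rough plain-C analogy of the hazard the article describes (an assumption about its intent, not the CLR code itself): a raw pointer is saved, a later call can move or free the storage it points to, and the stale pointer is then used anyway:

        #include <stdlib.h>
        #include <string.h>

        int main(void)
        {
            char *buf = malloc(16);
            if (!buf) return 1;
            strcpy(buf, "hello");

            char *a = buf;                       /* like aObj: a saved raw pointer nothing tracks */

            char *grown = realloc(buf, 1 << 20); /* like the second allocation triggering a GC:   */
            if (!grown) { free(buf); return 1; } /* the block may have been moved by the call...  */

            /* ...so "a" may now dangle; dereferencing it here would be undefined behaviour,
             * just as using aObj after an untracked GC compaction reads reclaimed memory. */
            (void)a;
            free(grown);
            return 0;
        }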

    Read the article

  • Which library should I use for server-side image manipulation on Node.JS?

    - by Andrew
    I found a quite large list of available libraries on the Node.JS wiki, but I'm not sure which of those are more mature and provide better performance. Basically I want to do the following:

    * load some images to a server from external sources
    * put them onto one big canvas
    * crop and mask them a bit
    * apply a filter or two
    * resize the final image and give a link to it

    Big plus if the node package works on both Linux and Windows.

    Read the article

  • Why do some APIs provide mostly interfaces, not classes?

    - by Lord Torgamus
    Some Java APIs provide a large number of interfaces and few classes. For example, the Stellent/Oracle UCM API is composed of roughly 80% interfaces/20% classes, and many of the classes are just exceptions. What is the technical reason for preferring interfaces to classes? Is it just an effort to minimize coupling? To improve encapsulation/information hiding? Something else?

    Read the article

  • Why is J2EE scalable?

    - by py213py
    I heard from various sources that J2EE is highly scalable, but to me it seems that you could never scale a J2EE application to the level of the Google search engine or any other large website. I would like to hear the technical reasons why it is considered so scalable.

    Read the article

  • Recommended migration strategy for C++ project in Visual Studio 6

    - by jacobsee
    For a large application written in C++ using Visual Studio 6, what is the best way to move into the modern era? I'd like to take an incremental approach where we slowly move portions of the code, and write new features in C#, for example, compiled into a library or DLL that can be referenced from the legacy application. Is this possible, and what is the best way to do it?

    Read the article
