Search Results

Search found 10098 results on 404 pages for 'per pixel'.

Page 256/404

  • Multicast Messages to multiple clients on the same machine

    - by Christopher Chase
    I'm trying to write a server/service that broadcasts a message on the LAN every second or so, kind of like service discovery. The message needs to be received by multiple client programs that could be on the same machine or on different machines, and there could be more than one program running on each machine at the same time. I'm using Delphi 7 with Indy 9.0.18. Where I'm stuck is whether I should be using UDP or IP multicast, or whether this is even possible. I've managed to get it working with IP multicast with one client per machine, but even after many tries with different bindings, max/min ports, etc., I can't seem to find a solution.
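
    A sketch of the underlying socket setup, written in C# rather than Delphi/Indy (the Indy calls differ, but the socket options involved are the same): several processes on one machine can receive the same multicast datagrams as long as each one enables address reuse before binding the shared port and then joins the group. The group address 239.0.0.1 and port 5000 are arbitrary examples, not values from the question.

        using System;
        using System.Net;
        using System.Net.Sockets;
        using System.Text;

        class MulticastListener
        {
            static void Main()
            {
                var client = new UdpClient();
                // Allow several listeners on this machine to bind the same port.
                client.Client.SetSocketOption(SocketOptionLevel.Socket,
                                              SocketOptionName.ReuseAddress, true);
                client.Client.Bind(new IPEndPoint(IPAddress.Any, 5000));
                client.JoinMulticastGroup(IPAddress.Parse("239.0.0.1"));

                var remote = new IPEndPoint(IPAddress.Any, 0);
                while (true)
                {
                    byte[] payload = client.Receive(ref remote);   // blocks until a datagram arrives
                    Console.WriteLine(Encoding.UTF8.GetString(payload));
                }
            }
        }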

    Read the article

  • What is the underlying reason for not being able to put arrays of pointers in unsafe structs in C#?

    - by cons
    If one could put an array of pointers to child structs inside unsafe structs in C#, like one can in C, constructing complex data structures without the overhead of having one object per node would be a lot easier and less of a time sink, as well as syntactically cleaner and much more readable. Is there a deep architectural reason why fixed arrays inside unsafe structs are only allowed to be composed of "value types" and not pointers? I assume allowing only explicitly named pointers inside structs must be a deliberate decision to weaken the language, but I can't find any documentation about why this is so, or the reasoning for not allowing pointer arrays inside structs, since I would assume the garbage collector shouldn't care what is going on in structs marked as unsafe. Digital Mars' D handles structs and pointers elegantly in comparison, and I miss being able to rapidly develop succinct data structures; by making references abstract in C#, a lot of power seems to have been removed from the language, even though pointers are still there, at least in a marketing sense. Maybe I'm wrong to expect languages to become more powerful at representing complex data structures efficiently over time.
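
    A minimal sketch of the limitation and the usual workaround (not from the question; the Node type and field names are made up for illustration): fixed buffers inside unsafe structs may only contain primitive value types, so a fixed array of Node* won't compile, whereas a single pointer field plus an externally allocated block does.

        using System;
        using System.Runtime.InteropServices;

        unsafe struct Node
        {
            public int Value;
            public Node** Children;   // points to an unmanaged array of Node* allocated elsewhere
            public int ChildCount;
            // fixed int Scratch[8];    // allowed: fixed buffer of a primitive value type
            // fixed Node* Kids[8];     // not allowed: pointers can't be fixed-buffer elements
        }

        static class Demo
        {
            static unsafe void Main()
            {
                var parent = new Node { Value = 1, ChildCount = 4 };
                parent.Children = (Node**)Marshal.AllocHGlobal(sizeof(Node*) * parent.ChildCount);
                try
                {
                    for (int i = 0; i < parent.ChildCount; i++)
                        parent.Children[i] = null;   // no children attached yet
                    Console.WriteLine(parent.Value);
                }
                finally
                {
                    Marshal.FreeHGlobal((IntPtr)parent.Children);
                }
            }
        }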

    Read the article

  • trouble configuring WCF to use session

    - by Michael
    I am having trouble configuring a WCF service to run in session mode. As a test I wrote this simple service:

        [ServiceContract]
        public interface IService1
        {
            [OperationContract]
            string AddData(int value);
        }

        [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
        internal class Service1 : IService1, IDisposable
        {
            private int acc;

            public Service1()
            {
                acc = 0;
            }

            public string AddData(int value)
            {
                acc += value;
                return string.Format("Accumulator value: {0}", acc);
            }

            #region IDisposable Members
            public void Dispose()
            {
            }
            #endregion
        }

    I am using the net.tcp binding with its default configuration and the reliable session flag enabled. As far as I understand, such a service should run with no problems in session mode. But the service runs as if in per-call mode: each time I call AddData, the constructor gets called before AddData executes and Dispose() is called after the call. Any ideas why this might be happening?
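
    A hedged guess at the usual cause rather than a confirmed answer: unless the contract explicitly requires a session, PerSession instancing can silently degrade to per-call behaviour. Making the requirement explicit on the contract looks something like this:

        using System.ServiceModel;

        // Sketch only: the contract opts into sessions; net.tcp then carries the session on its
        // transport, and a new service instance is created once per client channel, not per call.
        [ServiceContract(SessionMode = SessionMode.Required)]
        public interface IService1
        {
            [OperationContract]
            string AddData(int value);
        }

    If the contract already requires a session, the next thing worth checking is whether the client keeps a single proxy/channel open between calls; opening a new channel for each call also produces one service instance per call.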

    Read the article

  • Counting number of children in hierarchical SQL data

    - by moontear
    For a simple data structure such as this:

        ID  parentID  Text        Price
        1             Root
        2   1         Flowers
        3   1         Electro
        4   2         Rose        10
        5   2         Violet      5
        6   4         Red Rose    12
        7   3         Television  100
        8   3         Radio       70
        9   8         Webradio    90

    For reference, the hierarchy tree looks like this:

        1 Root
        |- 2 Flowers
        |  |- 4 Rose (10)
        |  |  |- 6 Red Rose (12)
        |  |- 5 Violet (5)
        |- 3 Electro
           |- 7 Television (100)
           |- 8 Radio (70)
              |- 9 Webradio (90)

    I'd like to count the number of children (all descendants) under each node, so I would get a new column "NoOfChildren" like so:

        ID  parentID  Text        Price  NoOfChildren
        1             Root               8
        2   1         Flowers            3
        3   1         Electro            3
        4   2         Rose        10     1
        5   2         Violet      5      0
        6   4         Red Rose    12     0
        7   3         Television  100    0
        8   3         Radio       70     1
        9   8         Webradio    90     0

    I read a few things about hierarchical data, but I somehow get stuck on the multiple inner joins on the parentIDs. Maybe someone could help me out here. moon

    Read the article

  • compiling actionscript from command line using MXMLC

    - by I. J. Kennedy
    I have a tiny ActionScript "project" consisting of two files, call them foo.as and bar.as. For reasons I won't go into, I really really want to build the .SWF from the command line, without setting up a formal project of any kind. Every compiler I've ever used lets you do this, but for the life of me I can't figure out how to coerce MXMLC into compiling these two files and linking them into a SWF. Naively, I try mxmlc foo.as bar.as, but I'm informed that only one source file is allowed. OK, supposing I compiled these two files separately, how would I link them together to get the final SWF? NOTE: The only reason I have two files instead of one is the requirement of only one class per file. I tried putting both classes in one file and making one of the classes "private" or "internal", but neither of these ideas worked. I would be ecstatic to find out I can put more than one class in a file (with only one being public).

    Read the article

  • Separate log files for each web application and shared libraries with log4j

    - by oo_olo_oo
    I have a few web applications running on a Tomcat server. Each application contains its own copy of the log4j library inside its own WAR, which allows a separate, flexible logging configuration per application. I also have a few shared libraries (kept in Tomcat's shared libraries directory). I would like the shared libraries' logging output to end up together with the output of the application that uses them (for example: if application A logs to file a.log and uses library b.jar, I would like b.jar to log to the a.log file as well). The problem is that the shared libraries are loaded by the shared classloader, which means they can't see the loggers defined by the applications. Is there any solution for this issue?

    Read the article

  • Google Visualization legend format Jquery ui tabs

    - by Liam
    I've got an issue when using the Google Visualization API line chart with jQuery UI tabs. I've got two graphs on two tabs. The first graph, which is visible by default, displays fine; the second chart, on the hidden tab, seems to be messing up the key (legend). I've tried changing the options but nothing I do seems to make any difference. Here are my options:

        options = {
            'title': title,
            titleTextStyle: { color: color, fontSize: 20 },
            'width': 950,
            'height': 400,
            hAxis: { minorGridlines: { count: x } },
            chartArea: { width: 880 },
            legend: { position: 'bottom', textStyle: { fontSize: 10 } }
        };

        // Instantiate and draw our chart, passing in some options.
        var chart = new google.visualization.LineChart(document.getElementById(divId));
        chart.draw(data, options);
        $('#tabs').tabs();

    Any thoughts on what is causing this and how to prevent it? Edit: If I remove the tabs() call it displays correctly. As per the answer below from @Vipul, I tried setting the div to a fixed width; no difference.

    Read the article

  • Does anyone really understand how HFSC scheduling in Linux/BSD works?

    - by Mecki
    I read the original SIGCOMM '97 PostScript paper about HFSC. It is very technical, but I understand the basic concept: instead of giving a linear service curve (as with pretty much every other scheduling algorithm), you can specify a convex or concave service curve, and thus it is possible to decouple bandwidth and delay. However, even though this paper mentions two kinds of scheduling algorithms being used (real-time and link-share), it always only mentions ONE curve per scheduling class (the decoupling is done by specifying this curve; only one curve is needed for that). Now HFSC has been implemented for BSD (OpenBSD, FreeBSD, etc.) using the ALTQ scheduling framework, and it has been implemented for Linux using the TC scheduling framework (part of iproute2). Both implementations added two additional service curves that were NOT in the original paper! A real-time service curve and an upper-limit service curve. Again, please note that the original paper mentions two scheduling algorithms (real-time and link-share), but in that paper both work with one single service curve. There never were two independent service curves for either one, as you currently find in BSD and Linux. Even worse, some versions of ALTQ seem to add an additional queue priority to HFSC (there is no such thing as priority in the original paper either). I found several BSD HOWTOs mentioning this priority setting (even though the man page of the latest ALTQ release knows no such parameter for HFSC, so officially it does not even exist). This all makes HFSC scheduling even more complex than the algorithm described in the original paper, and there are tons of tutorials on the Internet that often contradict each other, one claiming the opposite of the other. This is probably the main reason why nobody really seems to understand how HFSC scheduling really works. Before I can ask my questions, we need a sample setup of some kind. I'll use a very simple one, as seen in the image below. Here are some questions I cannot answer because the tutorials contradict each other:

    1. What do I need a real-time curve for at all? Assuming A1, A2, B1, B2 are all 128 kbit/s link-share (no real-time curve for either one), then each of those will get 128 kbit/s if the root has 512 kbit/s to distribute (and A and B are both 256 kbit/s of course), right? Why would I additionally give A1 and B1 a real-time curve with 128 kbit/s? What would this be good for? To give those two a higher priority? According to the original paper I can give them a higher priority by using a curve; that's what HFSC is all about after all. By giving both classes a curve of [256kbit/s 20ms 128kbit/s], both automatically have twice the priority of A2 and B2 (while still only getting 128 kbit/s on average).

    2. Does the real-time bandwidth count towards the link-share bandwidth? E.g. if A1 and B1 both only have 64 kbit/s real-time and 64 kbit/s link-share bandwidth, does that mean once they are served 64 kbit/s via real-time, their link-share requirement is satisfied as well (they might get excess bandwidth, but let's ignore that for a second), or does that mean they get another 64 kbit/s via link-share? So does each class have a bandwidth "requirement" of real-time plus link-share? Or does a class only have a higher requirement than the real-time curve if the link-share curve is higher than the real-time curve (current link-share requirement equals specified link-share requirement minus real-time bandwidth already provided to this class)?

    3. Is the upper-limit curve applied to real-time as well, only to link-share, or maybe to both? Some tutorials say one way, some say the other. Some even claim upper-limit is the maximum for real-time bandwidth plus link-share bandwidth. What is the truth?

    4. Assuming A2 and B2 are both 128 kbit/s, does it make any difference if A1 and B1 are 128 kbit/s link-share only, or 64 kbit/s real-time and 128 kbit/s link-share, and if so, what difference?

    5. If I use the separate real-time curve to increase the priorities of classes, why would I need "curves" at all? Why isn't real-time a flat value and link-share also a flat value? Why are both curves? The need for curves is clear in the original paper, because there is only one attribute of that kind per class. But now, having three attributes (real-time, link-share, and upper-limit), why do I still need curves for each one? Why would I want the curves' shapes (not the average bandwidth, but their slopes) to be different for real-time and link-share traffic?

    6. According to the little documentation available, real-time curve values are totally ignored for inner classes (classes A and B); they are only applied to leaf classes (A1, A2, B1, B2). If that is true, why does the ALTQ HFSC sample configuration (search for "3.3 Sample configuration") set real-time curves on inner classes and claim that those set the guaranteed rate of those inner classes? Isn't that completely pointless? (Note: pshare sets the link-share curve in ALTQ and grate the real-time curve; you can see this in the paragraph above the sample configuration.)

    7. Some tutorials say the sum of all real-time curves may not be higher than 80% of the line speed, others say it must not be higher than 70% of the line speed. Which one is right, or are they maybe both wrong?

    8. One tutorial said you should forget all the theory. No matter how things really work (schedulers and bandwidth distribution), imagine the three curves according to the following "simplified mind model": real-time is the guaranteed bandwidth that this class will always get. Link-share is the bandwidth that this class wants in order to become fully satisfied, but satisfaction cannot be guaranteed. In case there is excess bandwidth, the class might even get offered more bandwidth than necessary to become satisfied, but it may never use more than upper-limit says. For all this to work, the sum of all real-time bandwidths may not be above xx% of the line speed (see the question above; the percentage varies). Question: Is this more or less accurate, or a total misunderstanding of HFSC?

    9. And if the assumption above is really accurate, where is prioritization in that model? E.g. every class might have a real-time bandwidth (guaranteed), a link-share bandwidth (not guaranteed) and maybe an upper limit, but still some classes have higher priority needs than other classes. In that case I must still prioritize somehow, even among the real-time traffic of those classes. Would I prioritize by the slope of the curves? And if so, which curve? The real-time curve? The link-share curve? The upper-limit curve? All of them? Would I give all of them the same slope, or each a different one, and how do I find out the right slope?

    I still haven't lost hope that there exists at least a handful of people in this world who really understand HFSC and are able to answer all these questions accurately. And doing so without contradicting each other in the answers would be really nice ;-)

    Read the article

  • Free data warehouse - Infobright, Hadoop/Hive or what ?

    - by peperg
    I need to store a large number of small data objects (millions of rows per month). Once they're saved they won't change. I need to: store them securely; use them for analysis (mostly time-oriented); retrieve some raw data occasionally. It would be nice if it could be used with JasperReports or BIRT. My first shot was Infobright Community - just a column-oriented, read-only storage mechanism for MySQL. On the other hand, people say that a NoSQL approach could be better. Hadoop + Hive looks promising, but the documentation looks poor and the version number is less than 1.0. I heard about Hypertable, Pentaho, MongoDB... Do you have any recommendations? (Yes, I found some topics here, but they were from a year or two ago.)

    Read the article

  • UnitOfWork & StructureMap & Desktop Application

    - by Afshin Gh
    When developing a web app with StructureMap I normally use HybridHttpOrThreadLocalScoped for my unit-of-work part (the unit of work starts up per web request and so on). What about a desktop app (Windows service, console, ...)? There is no session or request in a desktop app. How should I manage the UoW in this situation? Any good reference or article? I don't want to make it a singleton. What should I do?
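
    One common pattern, sketched here as a guess rather than a definitive answer: in a desktop app or service, scope the unit of work to an explicit logical operation instead of a request, for example with a StructureMap nested container (available in StructureMap 2.6+; IUnitOfWork and OrderService below are made-up types for illustration).

        using StructureMap;

        public interface IUnitOfWork { void Commit(); }        // made-up abstraction
        public class OrderService                               // made-up consumer of the UoW
        {
            public void Process(int orderId) { /* ... */ }
        }

        public class Worker
        {
            // Each logical operation (a button click, a polled batch, a handled message)
            // gets its own nested container; scoped instances live and die with it.
            public void ProcessOrder(IContainer container, int orderId)
            {
                using (IContainer nested = container.GetNestedContainer())
                {
                    var unitOfWork = nested.GetInstance<IUnitOfWork>();
                    var service = nested.GetInstance<OrderService>();

                    service.Process(orderId);
                    unitOfWork.Commit();
                }   // disposing the nested container disposes its scoped instances with it
            }
        }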

    Read the article

  • Functions as pointers in Objective-C

    - by richman0829
    This is a question from Learn Objective-C on the Mac, about functions as pointers. What I typed in, as per the recipe, was:

        NSString *boolString(BOOL yesNo)
        {
            if (yesNo) {
                return (@"YES");
            } else {
                return (@"NO");
            }
        } // boolString

    The pointer asterisk in the first line doesn't seem necessary, yet deleting it results in an error message. But what does it do? In NSString *boolString(yesNo); what seems to be going on is that a function is defined as a pointer to an NSString. The call without the asterisk, NSLog(@"are %d and %d different? %@", 5, 5, boolString(areTheyDifferent));, returns an NSString of YES or NO. But how can it return an NSString when it's a pointer? It might return the ADDRESS of an NSString; or, if dereferenced, it could return the CONTENTS of that address (an NSString such as YES or NO). Yet I see no place where it is dereferenced.

    Read the article

  • Django ORM QuerySet intersection by a field

    - by Sri Raghavan
    These are the (pseudo) models I've got:

        Blog: name, etc.
        Article: name, blog, creator, etc.
        User (as per django.contrib.auth)

    So my problem is this: I've got two users. I want to get all of the articles that the two users published on the same blog (no matter which blog). I can't simply filter the Article model by both users, because that would yield the set of articles created by both users, which is obviously not what I want. But can I filter somehow to get all of the articles where a field of the object matches between the two querysets?

    Read the article

  • Hidden Features of ActionScript

    - by Ole Jak
    What are some of the hidden features of ActionScript? ActionScript is a widely used language and has been around for many years, so at least among the existing features, do you know any that are not well known but very useful? Of course, this question is along the lines of: Hidden Features of JavaScript, Hidden Features of CSS, Hidden Features of C#, Hidden Features of VB.NET, Hidden Features of Java, Hidden Features of ASP.NET, Hidden Features of Python, Hidden Features of TextPad, Hidden Features of Eclipse, Hidden Features of HTML. Do not mention features of ActionScript 2.0, since it is quite old and not everyone can understand it now. Please specify one feature per answer. Note that it's not always a great idea to use these hidden features; often they are surprising and confusing to others reading your code.

    Read the article

  • How can I block based on URL (from address bar) in a safari extension

    - by PerilousApricot
    I'm trying to write an extension that will block access to a (configurable) list of URLs if they are accessed more than N times per hour. From what I understand, I need to have a start script pass a "should I load this?" message to a global HTML page (which can access the settings object to get the list of URLs), and the global page gives a thumbs-up/thumbs-down message back to the start script to allow or deny loading. That works out fine for me, but when I use the usual beforeLoad/canLoad handlers, I get messages for all the sub-items that need to be loaded (images, etc.), which screws up the accesses-per-hour limit I'm trying to enforce. Is there a way to synchronously pass messages back and forth between the two sandboxes so I can tell the global HTML page "this is the URL in the window bar and the timestamp for when this request came in", so I can limit duplicate requests? Thanks!

    Read the article

  • SQL Server: Must numbers all be specified with Latin numeral digits?

    - by Ian Boyd
    Does SQL Server expect numbers to be specified with digits from the Latin alphabet, e.g. 0123456789? Is it valid to give SQL Server digits in other alphabets? Rosetta Stone:

        Latin:   0123456789
        Arabic:  ٠١٢٣٤٥٦٧٨٩
        Bengali: ০১২৩৪৫৬৭৮৯

    I know that the client (ADO) will convert 8-bit strings to 16-bit Unicode strings using the current culture. But the client is also converting numbers to strings using the current culture, e.g.:

        SELECT * FROM Inventory WHERE Quantity > ???,??

    which throws SQL Server into fits. I know that the server/database has its defined code page and locale, but that is for strings. Will SQL Server interpret numbers using the active (or per-login specified) locale, or must all numeric values be specified with Latin numeral digits?
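
    Not an answer on which digit alphabets T-SQL itself accepts, but a hedged side note: if the root problem is the ADO client formatting numbers into the SQL text with the current culture, passing the value as a parameter avoids culture-specific digits and separators entirely, since the number travels as a typed value rather than as text. A minimal ADO.NET-style sketch (the table and column names come from the question; the method name and threshold are made up):

        using System.Data;
        using System.Data.SqlClient;

        static void PrintInventoryAbove(SqlConnection connection, decimal threshold)
        {
            // The threshold travels as a typed parameter, never as culture-formatted text.
            using (var cmd = new SqlCommand("SELECT * FROM Inventory WHERE Quantity > @qty", connection))
            {
                cmd.Parameters.Add("@qty", SqlDbType.Decimal).Value = threshold;
                using (SqlDataReader reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // process each matching row
                    }
                }
            }
        }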

    Read the article

  • Need an algorithm to group several parameters of a person under the person's name

    - by QuickMist
    Hi. I have a bunch of names in alphabetical order, with multiple instances of the same name all in alphabetical order so that the names are all grouped together. Beside each name, after a comma, I have a role that has been assigned to them, one name-role pair per line, something like what's shown below:

        name1,role1
        name1,role2
        name1,role3
        name1,role8
        name2,role8
        name2,role2
        name2,role4
        name3,role1
        name4,role5
        name4,role1
        ...

    I am looking for an algorithm that takes the above .csv file as input and creates an output .csv file in the following format:

        name1,role1,role2,role3,role8
        name2,role8,role2,role4
        name3,role1
        name4,role5,role1
        ...

    So basically I want each name to appear only once, and then the roles to be printed in CSV format next to the names, for all names and roles in the input file. The algorithm should be language independent. I would appreciate it if it does NOT use OOP principles :-) I am a newbie.
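
    A short sketch of one way to do it, written in C# only because it has to be written in something (the approach itself is language independent and purely procedural): read line by line, collect roles in a map keyed by name, then write one line per name. Since the input is already sorted by name, a streaming version that flushes whenever the name changes would also work; the map version below is just simpler. The file names are placeholders.

        using System.Collections.Generic;
        using System.IO;

        class GroupRoles
        {
            static void Main()
            {
                // Preserve first-seen order of names while collecting their roles.
                var order = new List<string>();
                var roles = new Dictionary<string, List<string>>();

                foreach (string line in File.ReadLines("input.csv"))
                {
                    if (string.IsNullOrEmpty(line))
                        continue;

                    string[] parts = line.Split(',');
                    string name = parts[0];
                    string role = parts[1];

                    List<string> list;
                    if (!roles.TryGetValue(name, out list))
                    {
                        list = new List<string>();
                        roles[name] = list;
                        order.Add(name);
                    }
                    list.Add(role);
                }

                using (var writer = new StreamWriter("output.csv"))
                {
                    foreach (string name in order)
                        writer.WriteLine(name + "," + string.Join(",", roles[name].ToArray()));
                }
            }
        }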

    Read the article

  • Do Scala/Erlang use something like green threads or not?

    - by CHAPa
    Hi all, I'm reading a lot about how Scala and Erlang do lightweight threads and about their concurrency model (the actor model). Of course, some doubts appear in my head. Do Scala/Erlang use an approach similar to the old green-thread model once used by Java? For example, suppose there is a machine with 2 cores: will the Scala/Erlang environment fork one thread per processor, with the other threads scheduled in user space by the Scala VM / Erlang VM environment? Is that correct? How does that really work under the hood? Thanks a lot.

    Read the article

  • How should I go about creating a point system for users like SO and Yahoo Answers? (PHP)

    - by ggfan
    I am creating a voting system for a Q&A site project in which, if a user asks a question, he/she loses 5 points; answering a question is +5, voting on a question is +1, etc. (kind of like SO and Yahoo Answers). To handle the basic arithmetic, I have a "users_points" table that relates the user_id and their total points:

        +---------+--------+
        | user_id | points |
        +---------+--------+
        |    1    |   100  |
        |    2    |    54  |
        +---------+--------+

    Basically, if the user does a certain task, it would add or subtract the points. How do I prevent users from, say, voting an answer up 100 times? For example, I want a user to only be able to vote once per question, etc.

    Read the article

  • ETL mechanisms for MySQL to SQL Server over WAN

    - by Troy Hunt
    I’m looking for some feedback on mechanisms to batch data from MySQL Community Server 5.1.32 with an external host down to an internal SQL Server 05 Enterprise machine over VPN. The external box accumulates data throughout business hours (about 100Mb per day), which then needs to be transferred internationally across a WAN connection to an internal corporate environment before some BI work is performed. This should just be change-sets making their way down each night. I’m interested in thoughts on the ETL mechanisms people have successfully used in similar scenarios before. SSIS seems like a potential candidate; can anyone comment on the suitability for this scenario? Alternatively, other thoughts on how to do this in a cost-conscious way would be most appreciated. Thanks!

    Read the article

  • Super wide, but not so tall, bitmap?

    - by Erik Karlsson
    Howdy folks. Is there any way to create a more space/resource-efficient bitmap? Currently I'm trying to render a file approx. 800 px high but around 720,000 px wide. It crashes my application, presumably because of the sheer memory size of the bitmap. Can I do it more efficiently, like creating it as a GIF directly rather than only when I save it at the end? I'm trying to save a series of lines/rectangles from a real-world reading, and I want it to be 1 px per 1/100th of a second. Any input? And by the way, I never manage to log on using Google OpenID :/
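
    A sketch of one possible workaround, assuming .NET and System.Drawing (the post doesn't say which framework is in use): rather than holding one 720,000 px wide bitmap in memory, render and save the image as a series of narrower strips, each small enough to allocate comfortably. The strip width, file names and drawing callback are all made up for illustration.

        using System;
        using System.Drawing;
        using System.Drawing.Imaging;

        static class StripRenderer
        {
            // draw receives a Graphics already translated so the caller can keep using global x coordinates.
            public static void RenderInStrips(int totalWidth, int height, int stripWidth,
                                              Action<Graphics> draw)
            {
                for (int x0 = 0; x0 < totalWidth; x0 += stripWidth)
                {
                    int width = Math.Min(stripWidth, totalWidth - x0);
                    using (var strip = new Bitmap(width, height))
                    using (var g = Graphics.FromImage(strip))
                    {
                        g.Clear(Color.White);
                        g.TranslateTransform(-x0, 0);   // shift so global coordinates land inside this strip
                        draw(g);
                        strip.Save(string.Format("strip_{0}.png", x0), ImageFormat.Png);
                    }
                }
            }
        }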

    Read the article

  • Using Parallel Extensions with ThreadStatic attribute. Could it leak memory?

    - by the-locster
    I'm using Parallel Extensions fairly heavily and I've just now encountered a case where using thread-local storage might be sensible, to allow re-use of objects by worker threads. As such I was looking at the ThreadStatic attribute, which marks a static field/variable as having a unique value per thread. It seems to me that it would be unwise to use PE with the ThreadStatic attribute without any guarantee of thread re-use by PE. That is, if threads are created and destroyed to some degree, would the variables (and thus the objects they point to) remain in thread-local storage for some indeterminate amount of time, thus causing a memory leak? Or perhaps the thread storage is tied to the threads and disposed of when the threads are disposed? But then you still potentially have threads in a pool that are long-lived and that accumulate thread-local storage from the various pieces of code the threads are used for. Is there a better approach to obtaining thread-local storage with PE? Thank you.
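
    A sketch of the alternative the TPL itself offers for this situation (not necessarily the only answer): the localInit/localFinally overloads of Parallel.ForEach give each partition its own state object whose lifetime is bounded by the parallel loop, so nothing is left behind in thread-local storage regardless of which pool threads were used. The example just sums string lengths; the per-partition accumulator stands in for whatever reusable object you had in mind.

        using System.Collections.Generic;
        using System.Threading;
        using System.Threading.Tasks;

        static class LocalStateDemo
        {
            public static long SumLengths(IEnumerable<string> items)
            {
                long total = 0;
                Parallel.ForEach(
                    items,
                    () => 0L,                                         // localInit: one accumulator per partition
                    (item, loopState, subtotal) => subtotal + item.Length,
                    subtotal => Interlocked.Add(ref total, subtotal)  // localFinally: merge once per partition
                );
                return total;
            }
        }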

    Read the article

  • Freetype2 failing under WoW64

    - by Necrolis
    I built a TTF-to-D3D-texture function using FreeType2 (2.3.9) to generate grayscale maps from fonts. It works great under native Win32; however, on WoW64 it just explodes (well, FT_Done_Face and FT_Load_Glyph do). From some debugging, it seems to be a problem with HeapFree as called by free from FT_Free. I know it should work, as games like WCIII, which to the best of my knowledge use FreeType2, run fine. This is my code, stripped of the D3D code (which causes no problems on its own):

        FT_Face pFace = NULL;
        FT_Error nError = 0;
        FT_Byte* pFont = static_cast<FT_Byte*>(ARCHIVE_LoadFile(pBuffer,&nSize));
        if((nError = FT_New_Memory_Face(pLibrary,pFont,nSize,0,&pFace)) == 0)
        {
            FT_Set_Char_Size(pFace,nSize << 6,nSize << 6,96,96);
            for(unsigned char c = 0; c < 95; c++)
            {
                if(!FT_Load_Glyph(pFace,FT_Get_Char_Index(pFace,c + 32),FT_LOAD_RENDER))
                {
                    FT_Glyph pGlyph;
                    if(!FT_Get_Glyph(pFace->glyph,&pGlyph))
                    {
                        LOG("GET: %c",c + 32);
                        FT_Glyph_To_Bitmap(&pGlyph,FT_RENDER_MODE_NORMAL,0,1);
                        FT_BitmapGlyph pGlyphMap = reinterpret_cast<FT_BitmapGlyph>(pGlyph);
                        FT_Bitmap* pBitmap = &pGlyphMap->bitmap;
                        const size_t nWidth = pBitmap->width;
                        const size_t nHeight = pBitmap->rows;
                        //add to texture atlas
                    }
                }
            }
        }
        else
        {
            FT_Done_Face(pFace);
            delete pFont;
            return FALSE;
        }
        FT_Done_Face(pFace);
        delete pFont;
        return TRUE;
        }

    ARCHIVE_LoadFile returns blocks allocated with new. As a secondary question, I would like to render a font using pixel sizes. I came across FT_Set_Pixel_Sizes, but I'm unsure whether this stretches the font to fit the size or bounds it to a size. What I would like to do is render all the glyphs at, say, 24 px (MS Word size here), then turn that into a signed distance field in a 32 px area. Update: After much fiddling, I got a test app to work, which leads me to think the problems arise from threading, as my code runs in a secondary thread. I have compiled FreeType into a static lib using the multithreaded DLL runtime, and my app uses the multithreaded libs. Going to see if I can set up a multithreaded test. I also updated to 2.4.4 to see if the problem was a known but fixed bug; it didn't help, however. Update 2: After some more fiddling, it turns out I wasn't using the correct lib for 2.4.4. After fixing that, the test app works 100%, but the main app still crashes when FT_Done_Face is called; it still seems to be a crash in the memory heap management of Windows. Is it possible that there is a bug in FreeType2 that makes it blow up under user threads?

    Read the article

  • select only new rows in Oracle

    - by Hlex
    Hi, I have a table with a varchar2 column as the primary key. The transaction volume is about 1,000,000 per day. My app wakes up every 5 minutes to generate a text file by querying only the new records; it should remember the last point it reached and process only the records added since then. 1) Do you have an idea how to query this with good performance? I am able to add a new column if needed. 2) What do you think this process should be done with? PL/SQL? Java?

    Read the article

  • Resolving HttpRequestScoped Instances outside of a HttpRequest in Autofac

    - by Page Brooks
    Suppose I have a dependency that is registered as HttpRequestScoped so there is only one instance per request. How could I resolve a dependency of the same type outside of an HttpRequest? For example:

        // Global.asax.cs registration
        builder.Register(c => new MyDataContext(connString)).As<IDatabase>().HttpRequestScoped();
        _containerProvider = new ContainerProvider(builder.Build());

        // This event handler gets fired outside of a request
        // when a cached item is removed from the cache.
        public void CacheItemRemoved(string k, object v, CacheItemRemovedReason r)
        {
            // I'm trying to resolve like so, but this doesn't work...
            var dataContext = _containerProvider.ApplicationContainer.Resolve<IDatabase>();

            // Do stuff with data context.
        }

    The above code throws a DependencyResolutionException when it executes the CacheItemRemoved handler:

        No scope matching the expression 'value(Autofac.Builder.RegistrationBuilder`3+<>c__DisplayClass0[MyApp.Core.Data.MyDataContext,Autofac.Builder.SimpleActivatorData,Autofac.Builder.SingleRegistrationStyle]).lifetimeScopeTag.Equals(scope.Tag)' is visible from the scope in which the instance was requested.
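
    A hedged sketch of one way out (details depend on the Autofac version in use): HttpRequestScoped registrations are tagged to the request lifetime scope, so the root ApplicationContainer can never satisfy them. Outside a request you can begin your own lifetime scope and resolve from that, provided the registration is also resolvable there, for example by registering it per lifetime scope instead of per request.

        // Registration that works both inside and outside a request (assumption: switching the
        // lifetime to per-lifetime-scope is acceptable for MyDataContext).
        builder.Register(c => new MyDataContext(connString))
               .As<IDatabase>()
               .InstancePerLifetimeScope();

        // Cache callback, which runs outside any HTTP request:
        public void CacheItemRemoved(string k, object v, CacheItemRemovedReason r)
        {
            using (var scope = _containerProvider.ApplicationContainer.BeginLifetimeScope())
            {
                var dataContext = scope.Resolve<IDatabase>();
                // Do stuff with dataContext; it is disposed along with the scope.
            }
        }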

    Read the article

  • Why is there a 2 GB memory limit when running in 64-bit Windows?

    - by Roland Bengtsson
    I'm a member of a team that develops a Delphi application. The memory requirements are huge; 500 MB is normal, but in some cases we get an out-of-memory exception. The memory allocated in those cases is typically between 1000 and 1700 MB. We of course want a 64-bit compiler, but that won't happen now (and if it happens we must also convert to Unicode, but that is another story...). My question is: why is there a 2 GB memory limit per process when running in a 64-bit environment? Pointers are 32-bit, so I would think 4 GB would be the right limit. I use Delphi 2007.

    Read the article
