Search Results

Search found 11409 results on 457 pages for 'large teams'.

Page 393 of 457

  • How can I split up a component using cfinclude and still use inheritance?

    - by rip747
    Note: this is just a simplified example of what I'm trying to do, to get the idea across. I want to use cfinclude inside cfcomponent so that I can group related methods into separate files for more manageability. The problem I'm running into is when I try to extend another component that also uses cfinclude to manage its methods, as demonstrated below. Note that ComponentA extends ComponentB:

        ComponentA
        ==========
        <cfcomponent output="false" extends="componentb">
            <cfinclude template="componenta/methods.cfm">
        </cfcomponent>

        componenta/methods.cfm
        ======================
        <cffunction name="a"><cfreturn "componenta-a"></cffunction>
        <cffunction name="b"><cfreturn "componenta-b"></cffunction>
        <cffunction name="c"><cfreturn "componenta-c"></cffunction>
        <cffunction name="d"><cfreturn super.a()></cffunction>

        ComponentB
        ==========
        <cfcomponent output="false">
            <cfinclude template="componentb/methods.cfm">
        </cfcomponent>

        componentb/methods.cfm
        ======================
        <cffunction name="a"><cfreturn "componentb-a"></cffunction>
        <cffunction name="b"><cfreturn "componentb-b"></cffunction>
        <cffunction name="c"><cfreturn "componentb-c"></cffunction>

    When I try to initialize ComponentA, I get the error: "Routines cannot be declared more than once. The routine a has been declared twice in different templates." The whole reason for this is that cfinclude is evaluated at RUN TIME instead of COMPILE TIME. Short of moving the methods into the components themselves and eliminating the use of cfinclude, how can I get around this? Or does someone have a better idea for splitting up large components?

    Read the article

  • jQuery - Find all elements with class="X" and POST them to the server to insert into the DB

    - by nobosh
    Given a large text block from a WYSIWYG, like:

        Lorem ipsum dolor sit amet, <span class="X" id="12">consectetur adipiscing elit</span>. Donec interdum, neque at posuere scelerisque, justo tortor tempus diam, eu hendrerit libero velit sed magna. Morbi laoreet <span class="X" id="13">tincidunt quam in facilisis.</span> Cras lacinia turpis viverra lacus <span class="X" id="14">egestas elementum. Curabitur sed diam ipsum.</span>

    How can I use jQuery to find the following:

        <span class="X" id="12">consectetur adipiscing elit</span>
        <span class="X" id="13">tincidunt quam in facilisis.</span>
        <span class="X" id="14">egestas elementum. Curabitur sed diam ipsum.</span>

    and post it to the server as follows:

        12, consectetur adipiscing
        13, tincidunt quam in facilisis.
        14, egestas elementum. Curabitur sed diam ipsum.

    That way, ColdFusion can loop through the results and make 3 inserts into the DB. Thanks
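
    One possible approach (a sketch, not tested against this exact markup): use $.map to collect the id/text pairs from every span.X, then send them in one POST. The "save.cfm" endpoint and the "data" field name are hypothetical, and JSON.stringify assumes a modern browser or the json2.js shim.

        // Collect [{id: "12", text: "consectetur adipiscing elit"}, ...]
        var items = $("span.X").map(function () {
            return { id: this.id, text: $(this).text() };
        }).get();

        // One request; the server-side page can deserialize and loop over
        // the array to run its inserts.
        $.post("save.cfm", { data: JSON.stringify(items) });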

    Read the article

  • How to use a TFileStream to read 2D matrices into a dynamic array?

    - by Robert Frank
    I need to read a large (2000x2000) matrix of binary data from a file into a dynamic array with Delphi 2010. I don't know the dimensions until run time. I've never read raw data like this, and don't know IEEE formats, so I'm posting this to see if I'm on track. I plan to use a TFileStream to read one row at a time. I need to be able to read as many of these formats as possible:

    - 16-bit two's complement binary integer
    - 32-bit two's complement binary integer
    - 64-bit two's complement binary integer
    - IEEE single precision floating-point

    For 32-bit two's complement, I'm thinking of something like the code below. Changing to Int64 and Int16 should be straightforward:

        type
          TRowData = array of Int32;

        procedure ReadMatrix;
        var
          Matrix: array of TRowData;
          NumberOfRows: Cardinal;
          NumberOfCols: Cardinal;
          CurRow: Integer;
        begin
          NumberOfRows := 20;   // not known until run time
          NumberOfCols := 100;  // not known until run time
          SetLength(Matrix, NumberOfRows);
          for CurRow := 0 to NumberOfRows do
          begin
            SetLength(Matrix[CurRow], NumberOfCols);
            FileStream.ReadBuffer(Matrix[CurRow], NumberOfCols * SizeOf(Int32));
          end;
        end;

    How can I read the IEEE format? Am I on the right track? Any suggestions on this code, or on how to elegantly extend it for all 4 data types above? Since my post-processing will be the same after reading this data, I guess I'll have to copy the matrix into a common format when done. I have no problem just having four procedures (one for each data type) like the one above, but perhaps there's an elegant way to use RTTI, or buffers and then Move()s, so that the same code works for all 4 data types? Thanks!
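
    Two details in the sketch above are worth noting, and they carry over to the IEEE case: Delphi dynamic arrays are indexed 0..Length-1, so the loop should run "0 to NumberOfRows - 1", and ReadBuffer wants the first element (Matrix[CurRow][0]) rather than the array variable itself, which would otherwise be overwritten. For the IEEE format, Delphi's Single type is already an IEEE 754 single-precision float, so (assuming the file's byte order matches the machine's) only the element type changes. A hedged sketch:

        procedure ReadSingleRow(FileStream: TFileStream; NumberOfCols: Cardinal);
        var
          Row: array of Single;  // Single = IEEE 754 single precision
        begin
          SetLength(Row, NumberOfCols);
          // Pass the first element so the bytes land in the row's data,
          // not over the dynamic-array pointer.
          FileStream.ReadBuffer(Row[0], NumberOfCols * SizeOf(Single));
        end;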

    Read the article

  • When is a bool not a bool (compiler warning C4800)

    - by omatai
    Consider this being compiled in MS Visual Studio 2005 (and probably others):

        CPoint point1( 1, 2 );
        CPoint point2( 3, 4 );
        const bool point1And2Identical( point1 == point2 );           // C4800 warning
        const bool point1And2TheSame( ( point1 == point2 ) == TRUE ); // no warning

    What the...? Is the MSVC compiler brain-dead? As far as I can tell, TRUE is #defined as 1, without any type information. So by what magic is there any difference between these two lines? Surely the type of the expression inside the brackets is the same in both cases? [This part of the question is now satisfactorily answered in the comments just below.]

    Personally, I think that avoiding the warning by using the == TRUE option is ugly (though less ugly than the != 0 alternative, despite that being more strictly correct), and it is better to use #pragma warning( disable:4800 ) to imply "my code is good, the compiler is an ass". Agree?

    Note - I have seen all manner of discussion on C4800 talking about assigning ints to bools, or casting a burger combo with large fries (hold the onions) to a bool, and wondering why there are strange results. I can't find a clear answer on what seems like a much simpler question... one that might just shine light on C4800 in general.
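
    A sketch of the likely mechanics, assuming CPoint's operator== returns BOOL (an int typedef), as MFC's does: initializing a bool from an int truncates, which is exactly what C4800 flags, while comparing the int against TRUE produces a genuine bool first.

        typedef int BOOL;                       // stand-in for the MFC typedef
        const BOOL cmp = 1;                     // stand-in for (point1 == point2)
        const bool viaTruncation = cmp;         // int -> bool: C4800 fires
        const bool viaComparison = (cmp == 1);  // int == int yields bool: quiet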

    Read the article

  • C# Convert string to nullable type (int, double, etc...)

    - by Nathan Koop
    I am attempting to do some data conversion. Unfortunately, much of the data is in strings, where it should be ints or doubles, etc... So what I've got is something like:

        double? amount = Convert.ToDouble(strAmount);

    The problem with this approach is when strAmount is empty: I want amount to be null, so that when I add it into the database the column will be null. So I ended up writing this:

        double? amount = null;
        if (strAmount.Trim().Length > 0)
        {
            amount = Convert.ToDouble(strAmount);
        }

    Now this works fine, but I have five lines of code instead of one. This makes things a little more difficult to read, especially when I have a large number of columns to convert. I thought I'd use an extension method on the string class plus generics to pass in the type, since it could be a double, an int, or a long. So I tried this:

        public static class GenericExtension
        {
            public static Nullable<T> ConvertToNullable<T>(this string s, T type) where T : struct
            {
                if (s.Trim().Length > 0)
                {
                    return (Nullable<T>)s;
                }
                return null;
            }
        }

    But I get the error: Cannot convert type 'string' to 'T?'. Is there a way around this? I am not very familiar with creating methods using generics.
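
    A minimal sketch of one way around it: a string can't be cast to T?, but Convert.ChangeType can produce the underlying struct at run time. It throws on unconvertible input, so this assumes the non-empty data is well-formed; the class and method names are illustrative.

        using System;

        public static class NullableConversionExtensions
        {
            public static T? ToNullable<T>(this string s) where T : struct
            {
                if (s == null || s.Trim().Length == 0)
                    return null;
                // ChangeType boxes the converted value; unbox it as T.
                return (T)Convert.ChangeType(s, typeof(T));
            }
        }

        // Usage: double? amount = strAmount.ToNullable<double>();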

    Read the article

  • What goes into the main function?

    - by Woltan
    I am looking for a best-practice tip on what goes into the main function of a C++ program. Currently I think two approaches are possible (although the "margins" of those approaches can be arbitrarily close to each other):

    1. Write a "Master" class that receives the parameters passed to the main function, and handle the complete program in that "Master" class (of course also making use of other classes). The main function would then be reduced to a minimum number of lines:

        #include "MasterClass.h"

        int main(int args, char* argv[])
        {
            MasterClass MC(args, argv);
        }

    2. Write the "complete" program in the main function, making use of user-defined objects of course. However, there are also global functions involved, and the main function can get somewhat large.

    I am looking for some general guidelines on how to write the main function of a program in C++. I came across this issue by trying to write some unit tests for the first approach, which is a little difficult since most of the methods are private. Thanks in advance for any help, suggestion, link, ...
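
    A sketch of a middle ground that keeps the first approach testable: have main() do nothing but forward to a run() method, and give the class a normal public interface that unit tests can drive directly. Names here are illustrative.

        #include <string>
        #include <utility>
        #include <vector>

        class Application {
        public:
            explicit Application(std::vector<std::string> args) : args_(std::move(args)) {}
            // Public, testable entry point; main() stays a one-liner.
            int run() { /* program logic goes here */ return 0; }
        private:
            std::vector<std::string> args_;
        };

        int main(int argc, char* argv[]) {
            return Application(std::vector<std::string>(argv, argv + argc)).run();
        }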

    Read the article

  • How can I manage building library projects that produce both a static lib and a dll?

    - by Scott Langham
    I've got a large Visual Studio solution with ~50 projects. There are configurations for StaticDebug, StaticRelease, Debug and Release. Some libraries are needed in both DLL and static lib form. To get them, we rebuild the solution with a different configuration. The Configuration Manager window is used to set up which projects need to build in which flavours: static lib, dynamic DLL or both. This can be quite tricky to manage, and it's a bit annoying to have to build the solution multiple times and select the configurations in the right order (static versions need building before non-static versions).

    I'm wondering: instead of this current scheme, might it be simpler to manage if, for the projects that need to produce both a static lib and a dynamic DLL, I created two projects, e.g.:

        CoreLib
        CoreDll

    I could either make both of these projects reference all the same files and build them twice, or, I'm wondering, would it be possible to build CoreLib and then get CoreDll to link against it to generate the DLL? I guess my question is: do you have any advice on how to structure your projects in this kind of situation? Thanks.

    Read the article

  • Can you dynamically combine multiple conditional functions into one in Python?

    - by erich
    I'm curious whether it's possible to take several conditional functions and create one function that checks them all (e.g. the way a generator takes a procedure for iterating through a series and creates an iterator).

    The basic use case is when you have a large number of conditional parameters (e.g. "max_a", "min_a", "max_b", "min_b", etc.), many of which could be blank. They would all be passed to this "function-creating" function, which would then return one function that checks them all. Below is an example of a naive way of doing what I'm asking:

        def combining_function(max_a, min_a, max_b, min_b, ...):
            f_array = []
            if max_a is not None:
                f_array.append(lambda x: x.a < max_a)
            if min_a is not None:
                f_array.append(lambda x: x.a > min_a)
            ...
            return lambda x: all([f(x) for f in f_array])

    What I'm wondering is: what is the most efficient way to achieve what's being done above? It seems like executing a function call for every function in f_array would create a decent amount of overhead, but perhaps I'm engaging in premature/unnecessary optimization. Regardless, I'd be interested to see if anyone else has come across use cases like this and how they proceeded. Also, if this isn't possible in Python, is it possible in other (perhaps more functional) languages?
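
    One sketch that avoids building a lambda per condition: keep (attribute, comparison, bound) tuples and fold them into a single closure. The operator module supplies the comparisons, so one generator expression replaces the chain of function objects; the usage names are illustrative.

        import operator

        def make_checker(conditions):
            # conditions: iterable of (attr_name, compare, bound),
            # e.g. ("a", operator.lt, max_a); None bounds are dropped.
            active = [c for c in conditions if c[2] is not None]

            def check(x):
                return all(compare(getattr(x, attr), bound)
                           for attr, compare, bound in active)
            return check

        # Hypothetical usage:
        # check = make_checker([("a", operator.lt, max_a),
        #                       ("a", operator.gt, min_a)])
        # if check(item): ...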

    Read the article

  • How do software projects go over budget and under-deliver?

    - by Carlos
    I've come across this story quite a few times here in the UK: NHS Computer System. Summary: we're spunking £12 billion on some health software with barely anything working.

    I was sitting in the office discussing this with my colleagues, and we had a little think about it. From what I can see, all the NHS needs is a database + a middle tier of drugs/hospitals/patients/prescriptions objects, and various GUIs for doctors and nurses to look at. You'd also need to think about security and scalability. And you'd need to sit around a hospital/pharmacy/GP's office for a bit to figure out what they need. But, all told, I'd say I could knock together something with that kind of structure in a couple of days, and maybe throw in a month or two to make it work at scale. If I had a few million quid, I could probably hire some really excellent designers to make a maintainable codebase, and also buy appropriate hardware to run the system on.

    I hate to trivialize something that seems to have caused so much trouble, but to me it looks like just a big distributed CRUD + UI system. So how on earth did this project bloat to £12B without producing much useful software? As I don't think the software sounds so complicated, I can only imagine that something about how it was organised caused this mess. Is it outsourcing that's the problem? Is it a failure to get the software designers to understand the medical business? What are your experiences with projects that went over budget and under-delivered? What are best practices for large projects? Have you ever worked on such a project?

    Read the article

  • KSH: Variables containing double quotes

    - by nitrobass24
    I have a string called STRING1 that could contain double quotes. I am echoing the string through sed to pull out punctuation, then sending it to an array to count certain words. The problem is I cannot echo variables containing double quotes through to sed.

    I am crawling our filesystems looking for files that use FTP commands. I grep each file for "ftp":

        STRING1=`grep -i ftp $filename`

    If you echo $STRING1, this is the output (just one example):

        myserver> echo $STRING1
        echo "Your file `basename $1` is too large to e-mail. You must ftp the file to BMC tech support. \c"
        echo "Then, ftp it to ftp.bmc.com with the user name 'anonymous.' \c"
        echo "When the ftp is successful, notify technical support by phone (800-537-1813) or by e-mail ([email protected].)"

    Then I have this code:

        STRING2=`echo $STRING1|sed 's/[^a-zA-Z0-9]/ /g'`

    I have tried double-quoting $STRING1, like:

        STRING2=`echo "$STRING1"|sed 's/[^a-zA-Z0-9]/ /g'`

    but that does not work. Single quotes just send $STRING1 as a literal string to sed, so that did not work either. What else can I do here?
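
    A hedged sketch of the usual fix: skip the intermediate echo entirely and let grep feed sed through a pipe, quoting every expansion so the shell never re-parses the matched text (backticks included). If STRING1 must be kept, printf is also safer than echo, since echo may interpret sequences like \c.

        # Pipe grep straight into sed; nothing is re-parsed by the shell.
        STRING2=$(grep -i ftp "$filename" | sed 's/[^a-zA-Z0-9]/ /g')

        # Or, keeping STRING1: printf avoids echo's escape handling.
        STRING2=$(printf '%s\n' "$STRING1" | sed 's/[^a-zA-Z0-9]/ /g')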

    Read the article

  • MPI Odd/Even Compare-Split Deadlock

    - by erebel55
    I'm trying to write an MPI version of a program that runs an odd/even compare-split operation on n randomly generated elements. Process 0 should generate the elements and send nlocal of them to each of the other processes (keeping the first nlocal for itself). From there, process 0 should print out its results after running the CompareSplit algorithm, then receive the results from the other processes' runs of the algorithm, and finally print out the results it has just received. I have a large chunk of this already done, but I'm getting a deadlock that I can't seem to fix. I would greatly appreciate any hints. Here is my code: http://pastie.org/3742474

    Right now I'm pretty sure the deadlock is coming from the Send/Recv at lines 134 and 151. I've tried changing the Send to use "tag" instead of myrank for the tag parameter, but when I did that I just kept getting "MPI_ERR_TAG: invalid tag" for some reason. Obviously I would also run the algorithm on processor 0, but I took that part out for now until I figure out what is going wrong. Any help is appreciated.
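
    Without the pastie this is guesswork, but the classic deadlock in odd/even exchanges is two partners both blocking in MPI_Send before either reaches its MPI_Recv. A sketch of the standard fix: do the paired exchange with MPI_Sendrecv, which cannot deadlock (buffer and variable names are illustrative).

        #include <mpi.h>

        /* Exchange this rank's nlocal ints with its partner in one call.
         * MPI_Sendrecv pairs the send and receive internally, so neither
         * side can block waiting for the other to post a receive. Note
         * that tags must lie in [0, MPI_TAG_UB]; anything outside that
         * range raises MPI_ERR_TAG. */
        static void exchange(int *mine, int *theirs, int nlocal, int partner)
        {
            MPI_Sendrecv(mine,   nlocal, MPI_INT, partner, 0,
                         theirs, nlocal, MPI_INT, partner, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }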

    Read the article

  • How do I determine whether calculation was completed, or detect interrupted calculation?

    - by BenTobin
    I have a rather large workbook that takes a really long time to calculate. It used to be quite a challenge to get it to calculate all the way through, since Excel is so eager to silently abort calculation if you so much as look at it. To help alleviate the problem, I created some VBA code to initiate the calculation, launched from a form; the result is that it is not quite as easy to interrupt the calculation process, but it is still possible. (I can easily do this by clicking the close X on the form, but I imagine there are other ways.)

    Rather than taking more steps to make calculation harder to interrupt, I'd like the code to detect whether calculation is complete, so it can notify the user rather than just blindly forging on into the rest of the steps in my code. So far, I can't find any way to do that. I've seen references to Application.CalculationState, but the value is xlDone after I interrupt calculation, even if I interrupt it after a few seconds (a full calculation normally takes around an hour). I can't think of a way to do this by checking the values of cells, since I don't know which one is calculated last. I see that there is a way to mark cells as "dirty", but I haven't been able to find a way to check the dirtiness of a cell, and I don't know if that's even the right path to take, since I'd likely have to check every cell in every sheet. The act of interrupting calculation does not raise an error, so my On Error doesn't get triggered. Is there anything I'm missing? Any ideas?

    Read the article

  • Flex Tree Properties, Null Reference?

    - by mvrak
    I am pulling down a large XML file, and I have no control over its structure. I used a custom label function that uses the tag name to display the structure as a Flex tree, but then it breaks. I am guessing it has something to do with my other function, which reads attribute values from the selected node. See the code:

        <mx:Tree x="254" y="21" width="498" height="579" id="xmllisttree"
                 labelFunction="namer" dataProvider="{treeData}"
                 showRoot="false" change="treeChanged(event)" />

    and the CDATA:

        import mx.rpc.events.ResultEvent;

        [Bindable]
        private var fullXML:XMLList;

        private function contentHandler(evt:ResultEvent):void {
            fullXML = evt.result.page;
        }

        [Bindable]
        public var selectedNode:Object;

        public function treeChanged(event:Event):void {
            selectedNode = Tree(event.target).selectedItem;
        }

        public function namer(item:Object):String {
            var node:XML = XML(item);
            var nodeName:QName = node.name();
            var stringtest:String = "bunny";
            return nodeName.localName;
        }

    The error is: TypeError: Error #1009: Cannot access a property or method of a null object reference. Where is the null reference?

    Read the article

  • Python parsing error message functions

    - by user1716168
    The code below was created by me with the help of many SO veterans. It takes an entered math expression and splits it into operators and operands for later use. I have created two functions: the parsing function that splits, and the error function. I am having problems with the error function because it won't display my error messages; I feel the function is being ignored when the code runs. An error should print if an expression such as 3//3+4 is entered, where there are two operators together, or where there are more than two operators in the expression overall, but the error messages don't print. My code is below:

        def errors():
            numExtrapolation, opExtrapolation = parse(expression)
            if (len(numExtrapolation) == 3) and (len(opExtrapolation) != 2):
                print("Bad1")
            if (len(numExtrapolation) == 2) and (len(opExtrapolation) != 1):
                print("Bad2")

        def parse(expression):
            operators = set("*/+-")
            opExtrapolate = []
            numExtrapolate = []
            buff = []
            for i in expression:
                if i in operators:
                    numExtrapolate.append(''.join(buff))
                    buff = []
                    opExtrapolate.append(i)
                    opExtrapolation = opExtrapolate
                else:
                    buff.append(i)
            numExtrapolate.append(''.join(buff))
            numExtrapolation = numExtrapolate
            # just some debugging print statements
            print(numExtrapolation)
            print("z:", len(opExtrapolation))
            return numExtrapolation, opExtrapolation

        errors()

    Any help would be appreciated. Please don't introduce new code that is any more advanced than the code already here. I am looking for a solution to my problem, not large new code segments. Thanks.
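
    One observation, sketched within the code already shown: for input like 3//3+4, parse() returns four operands (one of them the empty string '') and three operators, so neither of the two length conditions in errors() is ever true. Checking for the empty-string operand catches adjacent operators directly; the signature here assumes the expression is passed in rather than read from a global.

        def errors(expression):
            numExtrapolation, opExtrapolation = parse(expression)
            if "" in numExtrapolation:      # empty operand = two operators in a row
                print("Bad: two operators together")
            if len(opExtrapolation) > 2:    # more than two operators overall
                print("Bad: too many operators")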

    Read the article

  • Sphinx - delimiters

    - by yoda
    Hi, I would like to know if the Sphinx engine works with any delimiters (like commas and periods in normal MySQL). My question comes from the urge not to use them at all, but to escape them, or at least to ensure they don't cause conflicts when performing MATCH operations with FULLTEXT searches. I have problems dealing with them in MySQL by default, and I would prefer not to be forced to replace those delimiters with other characters to get a good set of results. Sorry if I'm saying something stupid, but I don't have experience with Sphinx or other complementary (?) search engines.

    To give you an example: if I search for "Passat 2.0 TDI", MySQL by default would treat the period as a delimiter, and since "2" and "0" are too short to be considered words by default, the results would be a bit messed up. Is this easy to handle with Sphinx (or another search engine)? I'm open to suggestions. This is for a large project, with probably more than 500,000 possible records (not trivial at all). Cheers!

    Read the article

  • Booking logic and architecture, database sync: Hotels, tennis courts reservation system ...

    - by coulix
    Hello Stackers, imagine that you want to design a tennis booking system. You have 5 tennis clubs as partners, with no online API allowing you to check on their side whether a court is booked or not: you have to build that part as well.

    Every time a booking is made on their side, we want our system to know about it, probably via a POST request from the tennis partner to our server. Every time a booking is made on our website, we want to push the booking to their system. The difficulty is that their system needs to be online and accessible from outside. Its IP may change, so we would have to use a DNS updater. In case their system is not available, we still accept the booking and fall back to an async email with an 'I confirm booking / reject booking' link sent to the club.

    I find the whole process quite complex, and I was wondering how online hotel booking systems and hotels work together. Do they all have their data open and online? The good thing is that the data will grow large, which fits nicely with a NoSQL store like CouchDB ;)

    Read the article

  • Reduce file size for charts pasted from Excel into Word

    - by Steve Clanton
    I have been creating reports by copying some charts and data from an Excel document into a Word document. I am pasting into a content control, so I use ChartObject.CopyPicture in Excel and ContentControl.Range.Paste in Word. This is done in a loop:

        Set ws = ThisWorkbook.Worksheets("Charts")
        With ws
            For Each cc In wordDocument.ContentControls
                If cc.Range.InlineShapes.Count > 0 Then
                    scaleHeight = cc.Range.InlineShapes(1).scaleHeight
                    scaleWidth = cc.Range.InlineShapes(1).scaleWidth
                    cc.Range.InlineShapes(1).Delete
                    .ChartObjects(cc.Tag).CopyPicture Appearance:=xlScreen, Format:=xlPicture
                    cc.Range.Paste
                    cc.Range.InlineShapes(1).scaleHeight = scaleHeight
                    cc.Range.InlineShapes(1).scaleWidth = scaleWidth
                ElseIf ...
            Next cc
        End With

    Creating these reports using Office 2007 yielded files that were around 6 MB, but creating them (using the same worksheet and document) in Office 2010 yields a file around 10 times as large. After unzipping the docx, I found that the extra size comes from the EMF files that correspond to the charts pasted in using VBA. Where they ranged from 360 to 900 KB before, they are now 5-18 MB, and the graphics are not visibly better. I am able to CopyPicture with the format xlBitmap, and while that is somewhat smaller, it is still larger than the EMF generated by Office 2007 and noticeably poorer quality.

    Are there any other options for reducing the file size? Ideally, I would like to produce a file with the same resolution for the charts as I did using Office 2007. Is there any way that uses VBA only (without modifying the charts in the spreadsheet)?

    Read the article

  • Data structure choices for high-speed, memory-efficient detection of duplicate strings

    - by Jonathan Holland
    I have an interesting problem that could be solved in a number of ways:

    - I have a function that takes in a string.
    - If the function has never seen this string before, it needs to perform some processing.
    - If the function has seen the string before, it needs to skip processing.
    - After a specified amount of time, the function should accept duplicate strings again.
    - This function may be called thousands of times per second, and the string data may be very large.

    This is a highly abstracted explanation of the real application; I'm just trying to get down to the core concept for the purpose of the question.

    The function will need to store state in order to detect duplicates. It also will need to store an associated timestamp in order to expire duplicates. It does NOT need to store the strings; a unique hash of the string would be fine, provided there are no false positives due to collisions (use a perfect hash?) and the hash function is performant enough.

    The naive implementation would be simply (in C#): Dictionary<String,DateTime>, though in the interest of lowering the memory footprint and potentially increasing performance, I'm evaluating custom data structures to handle this instead of a basic hashtable. So, given these constraints, what would you use?

    EDIT, some additional information that might change proposed implementations: 99% of the strings will not be duplicates. Almost all of the duplicates will arrive back to back, or nearly sequentially. In the real world, the function will be called from multiple worker threads, so state management will need to be synchronized.
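
    Given the EDIT about multiple threads, one hedged starting point before anything custom: a ConcurrentDictionary keyed by the string (or a hash of it), with the last-seen timestamp as the value and lazy expiry on lookup. A sketch:

        using System;
        using System.Collections.Concurrent;

        public sealed class DuplicateFilter
        {
            private readonly ConcurrentDictionary<string, DateTime> _seen =
                new ConcurrentDictionary<string, DateTime>();
            private readonly TimeSpan _ttl;

            public DuplicateFilter(TimeSpan ttl) { _ttl = ttl; }

            // True if the caller should process this string now.
            public bool ShouldProcess(string s)
            {
                DateTime now = DateTime.UtcNow;
                DateTime seenAt = _seen.GetOrAdd(s, now);
                if (seenAt == now) return true;          // first sighting: our add won
                if (now - seenAt < _ttl) return false;   // recent duplicate: skip
                _seen[s] = now;                          // expired: accept again
                return true;
            }
        }

    Entries are only ever rewritten, never removed, so a periodic sweep (or keying on a fixed-size hash such as a 128-bit digest instead of the full string) would be needed to bound memory.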

    Read the article

  • How to remove empty tables from a MySQL backup file.

    - by user280708
    I have multiple large MySQL backup files, all from different DBs and having different schemas. I want to load the backups into our EDW, but I don't want to load the empty tables. Right now I'm cutting out the empty tables using AWK on the backup files, but I'm wondering if there's a better way to do this.

    If anyone is interested, this is my AWK script. EDIT: I noticed today that this script has some problems, so please beware if you actually want to try to use it. Your output may be WRONG... I will post my changes as I make them.

        # File: remove_empty_tables.awk
        # Copyright (c) Northwestern University, 2010
        # http://edw.northwestern.edu
        /^--$/ {
            i = 0
            line[++i] = $0
            getline
            if ($0 ~ /-- Definition/) {
                inserts = 0
                while ($0 !~ / ALTER TABLE .* ENABLE KEYS /) {
                    # If we already have an insert:
                    if (inserts > 0)
                        print
                    else {
                        # If we found an INSERT statement, the table is NOT empty:
                        if ($0 ~ /^INSERT /) {
                            ++inserts
                            # Dump the lines before the INSERT and then the INSERT:
                            for (j = 1; j <= i; ++j)
                                print line[j]
                            i = 0
                            print $0
                        }
                        # Otherwise we may yet find an insert, so save the line:
                        else
                            line[++i] = $0
                    }
                    getline  # go to the next line
                }
                line[++i] = $0
                getline
                line[++i] = $0
                getline
                if (inserts > 0) {
                    for (j = 1; j <= i; ++j)
                        print line[j]
                    print $0
                }
                next
            } else {
                print "--"
            }
        }
        { print }
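
    If the source databases are still reachable, one hedged alternative is to avoid dumping empty tables in the first place: ask information_schema which tables have rows, then pass only those to mysqldump. (table_rows is an estimate for InnoDB, so treat this as a heuristic; "mydb" and the missing credentials are placeholders.)

        TABLES=$(mysql -N -e "SELECT table_name FROM information_schema.tables \
                 WHERE table_schema = 'mydb' AND table_rows > 0")
        mysqldump mydb $TABLES > mydb_nonempty.sql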

    Read the article

  • Move SELECT to SQL Server side

    - by noober
    Hello all, I have an SQLCLR trigger. It contains a large and messy SELECT inside, with parts like:

        (CASE WHEN EXISTS(SELECT * FROM INSERTED I WHERE I.ID = R.ID)
              THEN '1' ELSE '0' END) AS IsUpdated -- Is selected row just added?

    as well as JOINs, etc. I like to have the result as a single table with everything included.

    Question 1: Can I move this SELECT to the SQL Server side? If yes, how? By "move", I mean creating a stored procedure or something else that can be executed before reading the dataset in a while loop. (The two following questions make sense only if the answer is "yes".)

    Why do I want to move the SELECT? First off, I don't like mixing SQL with C# code. Second, I suppose that server-side queries run faster, since the server has more chances to cache them.

    Question 2: Am I right? Is this a form of optimization?

    Also, the SELECT contains constant strings, but they are localizable. For instance, in WHERE R.Status = "Enabled", "Enabled" should be changed for French, German, etc. So, I want to write 2 static methods -- OnCreate and OnDestroy -- and mark them as stored procedures. When registering/unregistering my assembly on the server side, I would just call them respectively. In OnCreate I would format the SELECT string, replacing {0}, {1}, ... with the required values from the assembly resources. Then I can localize the resources only, not every script.

    Question 3: Is this a good idea? Is there an existing attribute that marks methods to be executed by SQL Server automatically after (un)registration of an assembly? Regards,

    Read the article

  • Counting XML elements in a file on Android

    - by CSharperWithJava
    Take a simple XML file formatted like this:

        <Lists>
            <List>
                <Note/>
                ...
                <Note/>
            </List>
            <List>
                <Note/>
                ...
                <Note/>
            </List>
        </Lists>

    Each node has some attributes that actually hold the data of the file. I need a very quick way to count the number of each type of element (List and Note). Lists is simply the root and doesn't matter. I can do this with a simple string search or something similar, but I need to make this as fast as possible.

    Design parameters:

    - Must be in Java (Android application).
    - Must AVOID allocating memory as much as possible.
    - Must return the total number of Note elements and the number of List elements in the file, regardless of location in the file.
    - The number of Lists will typically be small (1-4), and the number of Notes can potentially be very large (upwards of 1000, typically 100) per file.

    I look forward to your suggestions.
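
    A hedged sketch using the pull parser Android bundles (org.xmlpull.v1): it streams the document and allocates little beyond the parser itself, which fits the low-allocation requirement. The class name is illustrative.

        import java.io.Reader;
        import org.xmlpull.v1.XmlPullParser;
        import org.xmlpull.v1.XmlPullParserFactory;

        public final class NodeCounter {
            /** Returns {listCount, noteCount} for the given XML stream. */
            public static int[] count(Reader in) throws Exception {
                XmlPullParser p = XmlPullParserFactory.newInstance().newPullParser();
                p.setInput(in);
                int lists = 0, notes = 0;
                for (int e = p.getEventType(); e != XmlPullParser.END_DOCUMENT; e = p.next()) {
                    if (e == XmlPullParser.START_TAG) {
                        if ("List".equals(p.getName())) lists++;
                        else if ("Note".equals(p.getName())) notes++;
                    }
                }
                return new int[] { lists, notes };
            }
        }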

    Read the article

  • Where can I find a powerful, standards compliant, web-based interactive org chart API?

    - by Adam Soltys
    Hi, I'm looking to build an interactive web-based org chart for a large organization. I somewhat like the interface at ancestry.com, where you can hover over people, pan/zoom around, and click on different nodes to make them the root. Ideally, I'd like it if people could belong to multiple organizational entities like committees, working groups, etc. In other words, the API should support graphs in general, not just trees. I'd like to be able to visually explode each organizational substructure into its constituents by clicking on it, with a nice animation of the employees ballooning or spilling out, so you can really interactively drill down through the organization.

    I found http://code.google.com/apis/visualization/documentation/gallery/orgchart.html but it looks a bit rudimentary. I know there are desktop tools like OrgPlus and Visio that can build static charts, but I'm really looking for a free, web-based API with open, standards-based output like SVG or HTML5 Canvas elements, rather than Flash or some proprietary output. Something I can embed into a custom web application and style myself. Something interactive.

    Read the article

  • Memory mapping of files and system cache behavior in WinXP

    - by Canopus
    Our application is memory intensive and deals with reading a large number of disk files. The total load can be more than 3 GB. There is a custom memory manager that uses memory-mapped files to achieve reading of such a huge amount of data: the files are mapped into the process memory space only when needed, so the process memory stays well under control. But what we observe is that, with memory mapping, the system cache keeps growing until it occupies all the available physical memory, which slows down the entire system.

    My question is how to prevent the system cache from hogging physical memory. I attempted to remove the file buffering (by using FILE_FLAG_NO_BUFFERING), but then read operations take a considerable amount of time and slow down application performance. How can I achieve the scalability without sacrificing much performance? What are the common techniques used in such cases? I don't have a good understanding of WinXP's caching behavior, so any good links explaining it would also be helpful.

    Read the article

  • PHP custom function code optimization

    - by Alex
    Now comes the hard part. How do you optimize this function?

        function coin_matrix($test, $revs) {
            $coin = array();
            for ($i = 0; $i < count($test); $i++) {
                foreach ($revs as $j => $rev) {
                    foreach ($revs as $k => $rev) {
                        if ($j != $k && $test[$i][$j] != null && $test[$i][$k] != null) {
                            if (!isset($coin[$test[$i][$j]])) {
                                $coin[$test[$i][$j]] = array();
                            }
                            if (!isset($coin[$test[$i][$j]][$test[$i][$k]])) {
                                $coin[$test[$i][$j]][$test[$i][$k]] = 0;
                            }
                            $coin[$test[$i][$j]][$test[$i][$k]] += 1 / ($some_var - 1);
                        }
                    }
                }
            }
            return $coin;
        }

    I'm not that good at this, and if the arrays are large, it runs forever. The function is supposed to find all pairs of values from a two-dimensional array and sum them like this:

        $coin[$i][$j] += sum_of_pairs_in_array_row / [count(elements_of_row) - 1]

    Thanks a lot!
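
    A hedged sketch of the main savings: collect each row's non-null values once, so the inner pair loop stops re-testing nulls and re-indexing $test on every iteration. ($some_var is undefined inside the original function; here it is assumed to be passed in.)

        function coin_matrix($test, $revs, $some_var) {
            $coin = array();
            $inc = 1 / ($some_var - 1);   // hoisted out of the loops
            foreach ($test as $row) {
                $vals = array();          // the non-null values of this row
                foreach ($revs as $j => $rev) {
                    if (isset($row[$j]) && $row[$j] != null) {
                        $vals[] = $row[$j];
                    }
                }
                $n = count($vals);
                for ($a = 0; $a < $n; $a++) {
                    for ($b = 0; $b < $n; $b++) {
                        if ($a != $b) {
                            if (!isset($coin[$vals[$a]][$vals[$b]])) {
                                $coin[$vals[$a]][$vals[$b]] = 0;
                            }
                            $coin[$vals[$a]][$vals[$b]] += $inc;
                        }
                    }
                }
            }
            return $coin;
        }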

    Read the article

  • File upload issue

    - by Varun
    I am working on a PHP-based ticket management system. While creating a ticket, one can upload an attachment. I want to put a limit (say 10 MB) per uploaded file. To implement this, I plan the following:

    1. In php.ini, set post_max_size = 10M.
    2. In the PHP script which receives the POST: since the file is larger than post_max_size, $_FILES[] will be empty, but I can still check the Content-Length header and discard the upload if the size is more than 10 MB.

    While testing this, I tried uploading a 1 GB file and analysed the HTTP traffic, and this is what I found:

    - The entire 1 GB of data is first uploaded to the server temporarily and discarded once the HTTP request completes. Though I couldn't find out exactly where the file was getting saved (it was not in the temporary directory on the server), my HTTP traffic analyzer showed that the browser did send 1 GB of data to the server.
    - The PHP script execution started only after completion of the HTTP request (i.e. after uploading the entire 1 GB).

    Now I have 2 concerns:

    a) People may exploit my server bandwidth by uploading large files, which I will have to discard anyway.
    b) Even worse, if someone starts uploading a huge file (say 100 GB), the entire 100 GB is first uploaded to the server temporarily, which means that for that period it will consume that much memory on my server.

    What's the common solution for this? Am I missing something here?
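
    A hedged note on the usual division of labour: PHP cannot stop a request body the web server has already accepted, so the size cap is normally enforced one layer down. With Apache, for example, LimitRequestBody makes the server reject oversized bodies before the script ever runs (10485760 bytes = 10 MB; adjust for your setup):

        # Apache configuration (httpd.conf or .htaccess)
        LimitRequestBody 10485760

    php.ini's upload_max_filesize should be set alongside post_max_size as well, so PHP's own limits agree with the server's.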

    Read the article
