Search Results

Search found 11409 results on 457 pages for 'large teams'.


  • What to return when making an Ajax request

    - by Russell
    When we return data from an Ajax call, is it better to return a document containing HTML to display on the page, or to return XML/JSON data which can then be processed? I know different circumstances may determine what 'better' means, but I really want to know which is more appropriate in which circumstances. I am working on the framework for a large ASP.NET application, using jQuery Ajax (forms plugin). My initial thought was to return the data as XML and then process it accordingly, but this increases the processing required in JavaScript to populate the page. I am trying to balance flexible, clear and simple. Thanks in advance for your knowledge and information.
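
    For comparison, a minimal sketch of the two styles the question weighs; the URLs and field names are made up for illustration:

        // JSON approach: the server returns data, the client owns the markup
        $.getJSON('/orders/summary', function (data) {
            $('#total').text(data.total);
            $('#count').text(data.count);
        });

        // HTML approach: the server owns the markup, the client just injects it
        $('#panel').load('/orders/summary-html');

    The JSON route keeps the payload small and reusable by other callers; the HTML route keeps the JavaScript trivial at the cost of tying the endpoint to one page.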

    Read the article

  • How can I generate a random human-readable colour from a seed? C#

    - by SLC
    Got a logfile, and it has all kinds of text in it. Currently it is just displayed in one colour, and each entry says something like:
        Log from section 1: Some text here
        Log from section 125: Some text here
        Log from section 17: Some text here
        Log from section 1: Some text here
        Log from section 125: Some text here
        Log from section 1: Some text here
        Log from section 17: Some text here
    Now the logfile is displayed in real time, and it would be nice to make the rows with the same section number the same colour. However, there could potentially be quite a large range of numbers. What I want to do is create a method that takes a number and generates a colour from it, with the same number always producing the same colour. The colour must be readable against a black background, though, so #000000 is no good, nor is #101010 or anything too dark to read. Ideally two similar numbers will not produce nearly the same colour, because in the above examples the numbers 1 and 17 might come out too close, and some numbers might be in the 10,000 range. Any ideas on this?
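
    A sketch of one seeded approach: spread hues with the golden angle so nearby section numbers land far apart on the colour wheel, and pin saturation/lightness so everything stays readable on black. HslToColor is a helper defined here, not a framework method; only System.Drawing.Color is assumed:

        // same section number -> same colour; nearby numbers -> distant hues
        static Color ColorFromSection(int section)
        {
            double hue = (section * 137.508) % 360.0;  // golden-angle spacing
            return HslToColor(hue, 0.90, 0.65);        // bright enough for black
        }

        static Color HslToColor(double h, double s, double l)
        {
            double c = (1 - Math.Abs(2 * l - 1)) * s;
            double x = c * (1 - Math.Abs(h / 60.0 % 2 - 1));
            double m = l - c / 2;
            double r = 0, g = 0, b = 0;
            if (h < 60)       { r = c; g = x; }
            else if (h < 120) { r = x; g = c; }
            else if (h < 180) { g = c; b = x; }
            else if (h < 240) { g = x; b = c; }
            else if (h < 300) { r = x; b = c; }
            else              { r = c; b = x; }
            return Color.FromArgb((int)((r + m) * 255),
                                  (int)((g + m) * 255),
                                  (int)((b + m) * 255));
        }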

    Read the article

  • BufferedReader ready method in a while loop to determine EOF?

    - by BobTurbo
    I have a large file (the English Wikipedia articles-only database dump, as an XML file) which I am reading one character at a time using BufferedReader. The pseudocode is:
        file = BufferedReader...
        while (file.ready())
            character = file.read()
    Is this actually valid? Or will ready() just return false when it is waiting for the HDD to return data, and not only when EOF has been reached? I tried to use if (file.read() == -1) but seemed to run into an infinite loop that I literally could not find. I am just wondering if it is reading the whole file, as my statistics say 444 380 Wikipedia pages have been read, but I thought there were many more articles.
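
    For what it's worth, ready() only reports whether the next read can complete without blocking, so it is not an EOF test. The usual idiom reads until -1 in a single expression, so the check never consumes a character separately from the use (file here is the BufferedReader from the question):

        int c;
        while ((c = file.read()) != -1) {
            char character = (char) c;
            // ... process character ...
        }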

    Read the article

  • .NET Performance: Deep Recursion vs Queue

    - by JeffN825
    I'm writing a component that needs to walk large object graphs, sometimes 20-30 levels deep. What is the most performant way of walking the graph? A. Enqueueing "steps" so as to avoid deep recursion, or B. a DFS (depth-first search) which may step many levels deep and have a "deep" stack trace at times. I guess the question I'm asking is: is there a performance hit in .NET for doing a DFS that causes a "deep" stack trace? If so, what is the hit? And would I be better off with some BFS by means of queueing up steps that would otherwise have been handled recursively in a DFS? Sorry if I'm being unclear. Thanks.
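
    A third option worth noting: you can keep DFS visit order while avoiding deep call stacks by managing the stack yourself on the heap. A minimal sketch, where Node, Children and Visit stand in for the question's graph types:

        var stack = new Stack<Node>();
        stack.Push(root);
        while (stack.Count > 0)
        {
            Node node = stack.Pop();
            Visit(node);                    // hypothetical per-node work
            foreach (Node child in node.Children)
                stack.Push(child);          // heap-allocated: 20-30 levels is trivial
        }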

    Read the article

  • Anyone Know a Great Sparse One Dimensional Array Library in Python?

    - by TheJacobTaylor
    I am working on an algorithm in Python that uses arrays heavily. The arrays are typically sparse and are read from and written to constantly. I am currently using relatively large native arrays, and the performance is good but the memory usage is high (as expected). I would like an array implementation that does not waste space on values that are not used, and that allows an index offset other than zero. As an example, if my indices start at 1,000,000 I would like to be able to index my array starting at 1,000,000 and not be required to waste memory on a million unused values. Array reads and writes need to be fast. Expanding into new territory can incur a small delay, but reads and writes should be O(1) if possible. Does anybody know of a library that can do it? Thanks!
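
    In case no library fits, a dict-backed wrapper already meets most of these requirements; a minimal sketch (class and names invented here):

        class SparseArray:
            """O(1) average reads/writes; no storage for untouched indices;
            any starting offset comes for free."""
            def __init__(self, default=0):
                self.default = default
                self.data = {}
            def __getitem__(self, i):
                return self.data.get(i, self.default)
            def __setitem__(self, i, value):
                self.data[i] = value

        a = SparseArray()
        a[1000000] = 3.14   # no memory spent on indices 0..999999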

    Read the article

  • Comparing two XML files

    - by Ragini
    I have two large XML files, almost 1.4 MB each. I want to compare them and see the differing parts. I am using Linux. Is there any free tool which can do this for me? Or any other technique? I used the "diff" command in Linux and tried to output the result to another file (diff file1.xml file2.xml result.xml), but the resulting file showed "Could not parse the xml". However, it showed something on screen. I would like the differing part to be stored somewhere if possible (or at least I should be able to see it properly). Thanks Ragini
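
    One thing to note: diff takes exactly two file operands, so a third name is treated as an extra operand rather than an output file; redirect instead. A sketch, with the optional canonicalisation step assuming xmllint (shipped with libxml2 on most distributions) is installed:

        diff file1.xml file2.xml > result.txt

        # Optional: canonicalise first so formatting-only differences
        # (e.g. attribute order) don't show up as changes.
        xmllint --c14n file1.xml > a.c14n
        xmllint --c14n file2.xml > b.c14n
        diff a.c14n b.c14n > result.txt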

    Read the article

  • Optimal directory structure for filesystem

    - by Pankaj
    We have a large-scale web application which has millions of customers. Each customer can have documents, organised by document type; we may have 20-30 types of documents. We are planning to use GlusterFS for storing these documents. I'm trying to find out what the limitations of Gluster are as far as the number of files/directories goes. Do we need to have a hierarchical directory structure? What would be the optimal directory structure? Does this make sense?
        CustomerId/
            DocumentType/
                File1
                File2
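
    A common refinement, sketched below with an invented helper, is to fan customers out under a couple of hash-derived levels so that no single directory accumulates millions of entries, whatever the filesystem's per-directory limits turn out to be:

        import hashlib
        import os

        def document_path(root, customer_id, doc_type, filename):
            # e.g. /data/3f/a2/123456/invoice/file1.pdf
            digest = hashlib.md5(str(customer_id).encode()).hexdigest()
            return os.path.join(root, digest[:2], digest[2:4],
                                str(customer_id), doc_type, filename)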

    Read the article

  • How do I count the number of bytes read by TextReader.ReadLine()?

    - by Steve Guidi
    I am parsing a very large file of records (one per line, each of varying length), and I'd like to keep track of the number of bytes I've read in the file so that I can recover in the event of a failure. I wrote the following:
        string record = myTextReader.ReadLine();
        bytesRead += record.Length;
        ParseRecord(record);
    However, this doesn't work, since ReadLine() strips any CR/LF characters from the line. Furthermore, a line may be terminated by either CR, LF, or CRLF, which means I can't just add 1 to bytesRead. Is there an easy way to get the actual line length, or do I write my own ReadLine() method in terms of the granular Read() operations?
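
    A sketch of the hand-rolled route built on TextReader.Read(); it assumes a single-byte encoding (for UTF-8 you would count bytes with Encoding.GetByteCount over the consumed characters instead), and needs using System.Text:

        static string ReadLineCounted(TextReader reader, ref long bytesRead)
        {
            var sb = new StringBuilder();
            int c;
            while ((c = reader.Read()) != -1)
            {
                bytesRead++;                          // count every char consumed
                if (c == '\n') return sb.ToString();  // LF ends the line
                if (c == '\r')
                {
                    // swallow the LF of a CRLF pair; a bare CR also ends a line
                    if (reader.Peek() == '\n') { reader.Read(); bytesRead++; }
                    return sb.ToString();
                }
                sb.Append((char)c);
            }
            return sb.Length > 0 ? sb.ToString() : null;  // null only at clean EOF
        }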

    Read the article

  • Programmatically dial a series of numbers on a modem?

    - by Nathan Long
    At work, we just got a large number of exotic cellular devices that need to be programmed. To do this, you plug in a standard home telephone and dial a series of numbers, with pauses between them. To me, this is a task that begs to be automated, and we've got one Linux desktop (a test Asterisk machine) with a modem on it. Does anybody know an easy way to approach this task? I could install Ruby or Python on that desktop. I also know PHP, but I'm not sure how to run it outside of a server setup, and it seems silly to install Apache for this.
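
    A rough sketch of the Python route, assuming a Hayes-compatible modem on /dev/ttyS0 and the third-party pyserial package; the phone numbers are placeholders, and a comma in an AT dial string is the conventional "pause" character:

        import serial
        import time

        port = serial.Serial('/dev/ttyS0', 9600, timeout=1)
        numbers = ['5551234', '5555678', '5559012']      # placeholders
        # ATDT dials; each comma inserts a pause between number groups
        port.write(b'ATDT' + ','.join(numbers).encode() + b'\r')
        time.sleep(30)   # stay off-hook long enough for the sequence to play out
        port.close()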

    Read the article

  • Python - from file to data structure?

    - by Seafoid
    Hi, I have a large file comprising ~100,000 lines. Each line corresponds to a cluster, and each entry within each line is a reference ID for another file (a protein structure in this case), e.g.
        1hgn 1dju 3nmj 8kfn
        9opu 7gfb
        4bui
    I need to read the file in as a list of lists where each line is a sublist, thus preserving the integrity of each cluster, e.g.
        nested_list = [['1hgn', '1dju', '3nmj', '8kfn'], ['9opu', '7gfb'], ['4bui']]
    My current code creates a nested list, but the entries within each sublist are a single string rather than separate items. Therefore, I cannot slice the list with indices so easily. Any help greatly appreciated. Thanks, S :-)
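
    If the current code appends each whole line, str.split() is likely the missing piece; a minimal sketch, with clusters.txt standing in for the real filename:

        with open('clusters.txt') as f:
            nested_list = [line.split() for line in f]

        # nested_list[0][2] == '3nmj', nested_list[1] == ['9opu', '7gfb']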

    Read the article

  • Good code visualization / refactoring tools for C++?

    - by Paul D.
    I've found myself coming across a lot of reasonably large, complicated codebases at work recently, which I've been asked to either review or refactor or both. This can be extremely time-consuming when the code is highly concurrent, makes heavy use of templates (particularly static polymorphism), and has logic that depends on callbacks/signals/condition variables/etc. Are there any good visualization tools for C++, period? And of those, are there any that actually play well with "advanced" C++ features? Anything would probably be better than my approach now, which is basically pen and paper or stepping through the debugger. The debugger method can be good for following a particular code path, but isn't great for seeing the big picture you really need when doing serious refactoring. EDIT: I should mention that Visual Studio plugins aren't going to be much help to me, since our stuff is mostly Linux-only.

    Read the article

  • On Solaris, what is the difference between cut and gcut?

    - by Chris J
    I recently came across this crazy script bug on one of my Solaris machines. I found that cut on Solaris skips lines from the files that it processes (or at least from very large ones - 800 MB in my case).
        > cut -f 1 test.tsv | wc -l
        457030
        > gcut -f 1 test.tsv | wc -l
        840571
        > cut -f 1 test.tsv > temp_cut_1.txt
        > gcut -f 1 test.tsv > temp_gcut_1.txt
        > diff temp_cut_1.txt temp_gcut_1.txt | grep '[<]' | wc -l
        0
    My question is: what the hell is going on with Solaris cut? My solution is updating my scripts to use gcut, but... what the hell?
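
    A hypothesis worth testing rather than a documented fact: the classic Solaris text utilities enforce fixed line-length limits that the GNU versions (gcut) do not, so over-long lines may be silently dropped or truncated. A quick check of that guess:

        # count lines longer than a suspected limit (2048 is a guess;
        # try 1024 and 4096 as well and compare against the missing-line count)
        awk 'length($0) > 2048' test.tsv | wc -l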

    Read the article

  • Autofiltered List; cross-row formula

    - by Chris Gunner
    I have a large autofiltered list (~600 rows). Some of the rows are summary rows, and for each one I want to use a UDF to display the lowest priority listed in any of its 'child' cells. I can pass the right cells to my formula, but they are no longer correct if the list is re-ordered in any way. Is there a way to give the formula the right cell and have it recognise that I want that row and only ever that row? I can do it with a VLOOKUP that looks at a hidden column listing whether the 'child' row matches the right criteria, but with 600 rows, and each parent row requiring about a dozen 'child' cells, it's too slow.

    Read the article

  • Process.Exited event is not being called

    - by liys
    Hi all, I have the following code snippet to call into the command line:
        p = new Process();
        ProcessStartInfo psi = new ProcessStartInfo();
        psi.FileName = "cmd.exe";
        psi.Arguments = "/C " + "type " + "[abc].pdf";
        psi.UseShellExecute = false;
        psi.RedirectStandardInput = false;
        psi.RedirectStandardOutput = true;
        psi.CreateNoWindow = true;
        p.StartInfo = psi;
        p.EnableRaisingEvents = true;
        p.Exited += new EventHandler(p_Exited);
        p.Start();
        p.WaitForExit();
    Strangely, when [abc] is a small PDF file (8 KB), p_Exited is called, but when it's a large PDF file (120 KB) it is never called. Any clues? Thanks,
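
    One likely culprit: RedirectStandardOutput is true but nothing ever reads the output, so once the pipe buffer fills, the child blocks forever - which matches a hang that depends on file size. A sketch of draining stdout asynchronously before waiting:

        p.OutputDataReceived += (sender, e) =>
        {
            // consume (or simply discard) each line so the pipe never fills up
        };
        p.Start();
        p.BeginOutputReadLine();
        p.WaitForExit();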

    Read the article

  • Can you reuse a mysql result set in PHP?

    - by MarathonStudios
    I have a result set I pull from a large database:
        $result = mysql_query($sql);
    I loop through this recordset once to pull specific bits of data and compute averages, using while ($row = mysql_fetch_array($result)). Later in the page, I want to loop through this same recordset again and output everything, but because I consumed the recordset earlier, my second loop returns nothing. I finally hacked around this by looping through a second, identical recordset ($result2 = mysql_query($sql);), but I hate to make the same SQL call twice. Is there any way I can loop through the same result set multiple times?
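
    The legacy mysql extension can rewind a result set's internal row pointer with mysql_data_seek(), so the second query shouldn't be needed; a minimal sketch:

        <?php
        $result = mysql_query($sql);
        while ($row = mysql_fetch_array($result)) {
            // first pass: compute averages
        }
        mysql_data_seek($result, 0);  // rewind to row 0
        while ($row = mysql_fetch_array($result)) {
            // second pass: output everything
        }
        ?>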

    Read the article

  • Mean of Sampleset and powered Sampleset

    - by Milla Well
    I am working on an ICA implementation which is based on the assumption that all source signals are independent. So I checked up on the basic concepts of dependence vs. correlation, and tried to demonstrate them on sample data:
        from numpy import *
        from numpy.random import *

        k = 1000
        s = 10000
        mn = 0
        mnPow = 0
        for i in arange(1, k):
            a = randn(s)
            a = a - mean(a)
            mn = mn + mean(a)
            mnPow = mnPow + mean(a**3)
        print "Mean X: ", mn/k
        print "Mean X^3: ", mnPow/k
    But I couldn't reproduce the last step of this example, E(X^3) = 0:
        >> Mean X: -1.11174580826e-18
        >> Mean X^3: -0.00125229267144
    The first value I would consider to be zero, but isn't the second value too large? Since I subtract the mean of a, I expected the mean of a^3 to be zero as well. Does the problem lie in the random number generator, in the precision of the numerical values, or in my misunderstanding of the concepts of mean and expected value?
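
    A back-of-the-envelope check suggests the second value is consistent with zero rather than too large. For X ~ N(0,1), E[X^3] = 0 and Var(X^3) = E[X^6] = 15, so the standard error of the reported average over s = 10^4 samples and k = 10^3 trials is roughly

        \mathrm{SE}\left[\overline{X^3}\right]
          = \sqrt{\frac{\operatorname{Var}(X^3)}{s\,k}}
          = \sqrt{\frac{15}{10^{7}}}
          \approx 1.2 \times 10^{-3}

    The observed -0.00125 is about one standard error from zero, i.e. exactly the size of fluctuation sampling alone should produce.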

    Read the article

  • SSIS - Limiting Concurrent Connections

    - by Bigtoe
    Hi folks, I am using SSIS to connect to a legacy mainframe database, and it allows only 5 concurrent connections at a time. I have a Data Flow task with many tables to transfer, and it fails because of this limitation. I have split up the Data Flow task into separate data flows, and this is working for the moment, but it is not optimal, as they need to be sequenced, and one large transfer in a flow holds up subsequent transfers. Anyone any idea of how to limit the number of connections in a single data flow? I had a look at using the Engine Threads property, but this did not make any difference. Any help much appreciated.

    Read the article

  • Creating a DIB by specifying only the size with GDI+ and .NET...

    - by Kris Erickson
    I have just recently discovered the difference between different constructors in GDI+. Going:
        var bmp = new Bitmap(width, height, pixelFormat);
    creates a DDB (device-dependent bitmap), whereas:
        var bmp = new Bitmap(someFile);
    creates a DIB (device-independent bitmap). This is usually not important, except when handling very large images (where a DDB will run out of memory, and run out of memory at different sizes depending on the machine and its video memory). I need to create a DIB rather than a DDB, but specify the height, width and pixel format. Does anyone know how to do this in .NET? Also, is there a guide to what type of bitmap (DIB or DDB) is created by which Bitmap constructor?
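
    One approach worth trying, sketched here: allocate the pixel buffer yourself and wrap it with the Bitmap(width, height, stride, format, scan0) constructor, so GDI+ never owns the allocation. Whether this sidesteps the DDB behaviour described above is an assumption to verify; the buffer must outlive the bitmap, and the stride must be a multiple of 4:

        int width = 20000, height = 20000;
        int stride = width * 4;  // 4 bytes per pixel for Format32bppArgb
        IntPtr scan0 = Marshal.AllocHGlobal(new IntPtr((long)stride * height));
        var bmp = new Bitmap(width, height, stride,
                             PixelFormat.Format32bppArgb, scan0);
        // ... use bmp ...
        bmp.Dispose();
        Marshal.FreeHGlobal(scan0);  // only after the bitmap is disposed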

    Read the article

  • Question about the mathematical properties of hashes

    - by levand
    Take a commonly used binary hash function - for example, SHA-256. As the name implies, it outputs a 256-bit value. Let A be the set of all possible 256-bit binary values. A is extremely large, but finite. Let B be the set of all possible binary values. B is infinite. Let C be the set of values obtained by running SHA-256 on every member of B. Obviously this can't be done in practice, but I'm guessing we can still do mathematical analysis of it. My question: by necessity, C ⊆ A. But does C = A?

    Read the article

  • Using memory mapping in C for reading binary

    - by user1320912
    I am trying to read data from a binary file and process it. It is a very large file, so I thought I would use memory mapping so I can read the file byte by byte. I am getting a few compiler errors while doing this. I am doing this on a Linux platform.
        #include <unistd.h>
        #include <sys/types.h>
        #include <sys/mman.h>

        int fd;
        char *data;
        fd = open("data.bin", O_RDONLY);
        pagesize = 4000;
        data = mmap((caddr_t)0, pagesize, PROT_READ, MAP_SHARED, fd, pagesize);
    The errors I get are: caddr_t undeclared, O_RDONLY undeclared, and mmap has too few arguments. Could someone help me out?
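
    For reference, a sketch of a version that should compile on Linux. The key changes are including fcntl.h (which declares open() and O_RDONLY), putting the statements inside a function, and using the real page size instead of 4000, since mmap's offset argument must be page-aligned (0 maps from the start of the file):

        #include <unistd.h>
        #include <fcntl.h>
        #include <sys/types.h>
        #include <sys/mman.h>

        int main(void)
        {
            int fd = open("data.bin", O_RDONLY);
            long pagesize = sysconf(_SC_PAGESIZE);   /* usually 4096 */
            char *data = mmap(NULL, pagesize, PROT_READ, MAP_SHARED, fd, 0);
            if (fd < 0 || data == MAP_FAILED)
                return 1;
            /* data[0] .. data[pagesize - 1] are now readable bytes */
            munmap(data, pagesize);
            return 0;
        }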

    Read the article

  • Javascript: variable scope & the evils of globals

    - by Nick
    I'm trying to be good, I really am, but I can't see how to do it :) Any advice on how to avoid using a global here would be greatly appreciated. Let's call the global G.
        Function A: builds G by AJAX
        Function B: uses G
        Function C: calls B; called by numerous event handlers attached to DOM elements (type 1)
        Function D: calls B; called by numerous event handlers attached to DOM elements (type 2)
    I can't see how I can get around using a global here. The DOM elements (types 1 & 2) are created in other functions (E & F) which are unconnected with A. I don't want to pass G to each event handler (because it's large and there are lots of these event handlers), and doing so would require the same kind of solution as I'm seeking here (i.e., getting G to E & F). The global G, BTW, is an array that is needed to build other elements as they, in turn, are built by AJAX. I'm not convinced that a singleton is a real solution, either. Thanks.
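
    One common way out, sketched below with made-up names: wrap everything in a module (an immediately-invoked function) so G becomes a private variable that A-D close over, and have E and F ask the module for handlers rather than for G itself:

        var logModule = (function () {
            var g = [];                                  // the former global G
            return {
                load: function () { /* A: AJAX call fills g */ },
                handleType1: function () { /* C -> B: reads g */ },
                handleType2: function () { /* D -> B: reads g */ }
            };
        }());

        // E and F attach logModule.handleType1 / logModule.handleType2 to
        // their DOM elements without ever touching g directly.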

    Read the article

  • Stealing the contents of another application's tree view

    - by User1
    I have an application with a very large TreeView control in Java. I want to get the contents of the tree control as a list (just strings, not a JList) of XPath-like elements for the leaves only. Here's an example:
        root
        |-Item1
        | |-Item1.1
        | | |-Item1.1.1 (leaf)
        | |-Item1.2 (leaf)
        |-Item2
        | |-Item2.1 (leaf)
    Would output:
        /Item1/Item1.1/Item1.1.1
        /Item1/Item1.2
        /Item2/Item2.1
    I don't have any source code or anything handy like that. Is there a tool I can use to dig into the window item itself and pull out this data? I don't mind if there are a few post-processing steps, because typing it in by hand is my only other option.

    Read the article

  • How to find thousands of company names?

    - by schefdev
    How can I find or generate thousands of company names for testing and demo purposes? (Address, phone number, and related information would be nice too.) I've got a system I'm building which includes business contact information. Pretty common no doubt. My test/demo database currently has randomly generated individual's names loaded (thanks to a handy IRS spreadsheet I found). This has worked great for internal testing and review purposes, but it looks really odd when shown to prospective customers. I've tried various online public information sources (e.g. EDGAR, and county based property records searches), but these all require me to manually stitch together the results in blocks of 50 names or so at a time. I could do this, but was really hoping for a search service or data store out there that had this type of information readily searchable and retrievable in very large batches.

    Read the article

  • What is a good architecture for a Lift-JPA application?

    - by egervari
    I was wondering: what is the best practice for a JPA model in Lift? I noticed that in the JPA demo application there is just a Model object that is like a super-object that does everything. I don't think this can be the most scalable approach, no? Is it wise to still use the DAO pattern in Lift? For example, there's some code that looks a tad bloated and could be simplified across all model objects:
        Model.remove(Model.getReference(classOf[Author], someId))
    Could be:
        AuthorDao.remove(someId)
    I'd appreciate any tips for setting up something that will work with the way Lift wants to work and is also easy to organize and maintain. Preferably from someone who has actually used JPA on a medium to large Lift site, rather than just postulating what Spring does (we know how to do that) ;) The first phase of development will be around 30-40 tables, and we will eventually get to over 100... we need a scalable, neat approach.
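
    For what it's worth, a minimal sketch of the DAO shape described above; Author, the Long key type and the Model.entityManager accessor are stand-ins for the application's own entities and Lift-JPA plumbing:

        import javax.persistence.EntityManager

        abstract class Dao[T, ID <: AnyRef](clazz: Class[T]) {
          def em: EntityManager                    // supplied by the app's JPA wiring
          def find(id: ID): Option[T] = Option(em.find(clazz, id))
          def remove(id: ID): Unit = em.remove(em.getReference(clazz, id))
        }

        object AuthorDao extends Dao[Author, java.lang.Long](classOf[Author]) {
          def em = Model.entityManager             // hypothetical accessor
        }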

    Read the article

  • Access DB with SQL Server Back End

    - by uyuni99
    I have an old Access application that has a lot of code in forms and reports. The database is getting too large, and I am thinking of moving the back end to SQL Server. My requirements are as follows: the DB needs to be multiuser, the users (3-5) will need to log in over the web, and I would prefer not to re-write the forms and reports in ASP or some other web front end. When I think about my choices, I see them as:
        1. Have an Access ADP front end and allow remote log-in to the server where it is stored. Not sure if it is possible for 2 users to log in simultaneously.
        2. Distribute an ADP front end to the users, but I am not sure if it is possible to connect to a SQL Server back end over the internet, and the network traffic may be an issue.
        3. Any other solution?
    I appreciate all help. u

    Read the article
