Search Results

Search found 11409 results on 457 pages for 'large teams'.


  • 3-clique counting in a graph

    - by Legend
    I am operating on a (not so) large graph with about 380K edges. I wrote a program to count the number of 3-cliques in the graph. A quick example:

        List of edges:
            A - B
            B - C
            C - A
            C - D

        List of 3-cliques:
            A - B - C

    A 3-clique is nothing but a triangle in the graph. Currently I am doing this using PHP+MySQL. As expected, it is not fast enough. Is there a way to do this in pure MySQL (perhaps a way to insert all 3-cliques into a table)?
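
    For reference, here is a minimal Python sketch of the triangle count being described, assuming the undirected edge list has already been loaded into memory and that each edge appears exactly once (the edges below are the example from the question; everything else is illustrative):

        # Count 3-cliques (triangles) in an undirected edge list.
        edges = [("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")]

        # Build an adjacency set per node.
        adj = {}
        for u, v in edges:
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)

        # For each edge, common neighbours of its endpoints close a triangle.
        # Every triangle is counted once per edge, hence the division by 3.
        triangles = sum(len(adj[u] & adj[v]) for u, v in edges) // 3
        print(triangles)  # 1 for the example above

    In pure SQL the same counting rule is usually expressed as a triple self-join on the edge table; the in-memory version above just shows the rule itself.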

    Read the article

  • Android font out of view on small screen

    - by user581949
    Hi everyone. I have several TextViews in a RelativeLayout that take up the majority of the screen in landscape view, and the font size I have set is quite big (150dp). The TextViews are all timers, and the one furthest to the right is the "seconds" TextView. My problem is that when testing on a phone with a small screen resolution, the seconds are pushed well outside the edge of the screen and can't be seen. They are positioned perfectly on normal to large screen resolutions, just not on a small screen. Is there any way I can force the "seconds" TextView to stay on screen without adjusting the font size or the margins between each TextView? Even if it ends up looking cramped on a small screen, I can live with that. Any help is greatly appreciated. Thanks. This is the corresponding code:

    Read the article

  • Faking a Single Address Space

    - by dsimcha
    I have a large scientific computing task that parallelizes very well with SMP, but at too fine-grained a level to be easily parallelized via explicit message passing. I'd like to parallelize it across address spaces and physical machines. Is it feasible to create a scheduler that would parallelize already multithreaded code across multiple physical computers, under the following conditions?

        - The code is already multithreaded and can scale pretty well on SMP configurations.
        - The fact that not all of the threads are running in the same address space or on the same physical machine must be transparent to the program, even if this comes at a significant performance penalty in some use cases.
        - You may assume that all of the physical machines involved are running operating systems and CPU architectures that are binary compatible.
        - Things like locks and atomic operations may be slow (having network latency to deal with and all) but must "just work".

    Read the article

  • C++ shifting bits

    - by Bobby
    I am new to shifting bits, but I am trying to debug the following snippet:

        if (!(strcmp(arr[i].GetValType(), "f64")))
        {
            dem_content_buff[BytFldPos]     = tmp_data;
            dem_content_buff[BytFldPos + 1] = tmp_data >> 8;
            dem_content_buff[BytFldPos + 2] = tmp_data >> 16;
            dem_content_buff[BytFldPos + 3] = tmp_data >> 24;
            dem_content_buff[BytFldPos + 4] = tmp_data >> 32;
            dem_content_buff[BytFldPos + 5] = tmp_data >> 40;
            dem_content_buff[BytFldPos + 6] = tmp_data >> 48;
            dem_content_buff[BytFldPos + 7] = tmp_data >> 56;
        }

    I am getting a warning saying the lines that shift by 32 through 56 have a shift count that is too large. The "f64" in the predicate just means that the data should be 64-bit data. How should this be done?
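
    For reference, the byte layout the snippet is after (a 64-bit value split into eight little-endian bytes) can be sketched in Python as below; tmp_data mirrors the name in the question but its value is made up, and Python integers are arbitrary precision, so the shifts past 32 bits that trigger the C++ warning are not an issue here:

        import struct

        tmp_data = 0x0123456789ABCDEF  # hypothetical 64-bit value

        # Shift-and-mask one byte at a time, as the C++ snippet intends.
        by_hand = bytes((tmp_data >> (8 * k)) & 0xFF for k in range(8))

        # The same layout via struct: '<Q' is an unsigned 64-bit little-endian integer.
        packed = struct.pack("<Q", tmp_data)

        assert by_hand == packed
        print(packed.hex())  # 'efcdab8967452301'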

    Read the article

  • Looking for a specific kind of WEB framework, no malarkey please

    - by Hello you all men
    We do maintenance on a number of systems. I'm finally in a place where I'm the boss for once, and have to design a large system that will have a long maintenance contract. There are a couple of tasks I find myself always repeating:

        1) similar tasks for users with JS and those without
        2) similar things for content and RSS/Atom feeds, etc.

    To handle these I will need appropriate handling of assets (think JS files, CSS, themes/templates, etc.), an excellent auth/user system, JavaScript/AJAX forethought, and appropriate model setups. CodeIgniter fails on many of these. Basically, with enough time I could build this system with Zend, but I'm curious what else is out there, as Zend is also kind of a heavyweight. We need something that is rapid but maintainable; CodeIgniter is not maintainable. We will have a lot of AJAX APIs in place for the design team to play with. At first I thought jQuery was cool, but now I'm looking at Dojo.

    Read the article

  • copy - paste in javascript

    - by Dumbledore of flash
    I have this code:

        <input name="mpan[]" value="" maxlength="2" size="2">
        <input name="mpan[]" value="" maxlength="2" size="3">
        <input name="mpan[]" value="" maxlength="2" size="3">
        <input name="mpan[]" value="" maxlength="2" size="12">

    What I have to do: I am given a large key, for example 0380112129021. When I press Ctrl+C on that key, select any box and press Ctrl+V, the number should automatically get pasted across the different boxes: for example, the first input box gets 03, the next gets 801, the next gets 112, and the rest gets pasted into the last one, 129021. How do I achieve this with JavaScript?

    Read the article

  • Profiling help required

    - by Mick
    I have a profiling issue - imagine I have the following code...

        void main()
        {
            well_written_function();
            badly_written_function();
        }

        void well_written_function()
        {
            for (a small number)
            {
                highly_optimised_subroutine();
            }
        }

        void badly_written_function()
        {
            for (a wastefully and unnecessarily large number)
            {
                highly_optimised_subroutine();
            }
        }

        void highly_optimised_subroutine()
        {
            // lots of code
        }

    If I run this under vtune (or other profilers) it is very hard to spot that anything is wrong. All the hotspots will appear in the section marked "// lots of code" which is already optimised. The badly_written_function() will not be highlighted in any way even though it is the cause of all the trouble. Is there some feature of vtune that will help me find the problem? Is there some sort of mode whereby I can find the time taken by badly_written_function() and all of its sub-functions?

    Read the article

  • IDL-like parser that turns a document definition into powerful classes?

    - by paniq
    I am looking for an IDL-like (or whatever) translator which turns a DOM- or JSON-like document definition into classes which:

        - are accessible from both C++ and Python, within the same application
        - expose document properties as ints, floats, strings, binary blobs and compounds: array, string dict (both nestable) - basically the JSON type feature set
        - allow changes to be tracked to refresh views of an editing UI
        - provide a change history to enable undo/redo operations
        - can be serialized to and from JSON (or some kind of binary format)
        - allow large data chunks to be kept on disk, with parts only loaded on demand
        - provide non-blocking, thread-safe read/write access to exchange data with realtime threads
        - allow multiple editors in different processes (or even on different machines) to view and modify the document

    The thing that comes closest so far is the Blender 2.5 DNA/RNA system, but it's not available as a separate library and is badly documented. Most of all, I'm trying to make sure that such a lib does not exist yet, so I know my time is not wasted when I start to design and write such a thing. It's supposed to provide a great foundation for writing editing UI components.

    Read the article

  • Invisible JFrame/JTable how much faster ?

    - by chacko
    I have a Swing app with a JFrame containing lots of internal frames, each holding a large JTable. Those JTables get updated continuously, so there is a lot of repainting going on. In some circumstances I can simply keep the JFrame invisible (frame.setVisible(false)). Does anybody know whether I would gain something considerable in terms of performance this way, say a 50% gain, or only something like 2%? Some explanation of what to expect would also be appreciated. Thanks.

    Read the article

  • NOT LIKE not working on comparison to a column

    - by rodling
    The data is fairly large and takes a few minutes to run every time, so debugging this problem is taking a lot of time. When I run LIKE concat('%',T.item,'%') on smaller data, it seems to identify items properly. However, when I run it on the main DB (the code shown), it still shows many (maybe even all) of the exceptions. EDIT: it seems that when I add NOT it stops identifying items.

        select distinct T.comment
        from (select comment, source, item
              from data, non_informative
              where ticker != "O" and source != 7 and source != 6) as T
        where T.comment not like concat('%', T.item, '%')
        order by T.comment;

    comment and source are in data; item is in non_informative. Some items from T.item: 'Stock Analysis -', '#InsideTrades', 'IIROC Trade'. An example comment which should be removed: '#InsideTrades #4 | MACNAB CRAIG (Director,Officer,Chief Executive Officer): Filed Form 4 for $NNN (NATIONAL RETA'. I can't figure out why it still shows all the items.
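
    For clarity, here is the intended filter (keep only comments that contain none of the items) restated as a small Python sketch; the item strings and the long comment are taken from the question, while the second comment is made up for illustration:

        comments = [
            "#InsideTrades #4 | MACNAB CRAIG (Director,Officer,Chief Executive Officer): "
            "Filed Form 4 for $NNN (NATIONAL RETA",
            "Quarterly results look solid",  # hypothetical comment containing no item
        ]
        items = ["Stock Analysis -", "#InsideTrades", "IIROC Trade"]

        # Keep a comment only if no item occurs in it as a substring.
        kept = [c for c in comments if not any(item in c for item in items)]
        print(kept)  # ['Quarterly results look solid']

    Note that in the SQL above, NOT LIKE is checked per (comment, item) row produced by the join, so a distinct comment survives whenever at least one item fails to match it - a weaker condition than "none of the items match" - which may be why so many comments still appear.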

    Read the article

  • Hibernate entities stored as HttpSession attribute values

    - by njudge
    I'm dealing with a legacy Java application with a large, fairly messy codebase. There's a fairly standard 'User' object that gets stored in the HttpSession between requests, so the servlets do stuff like this at the top:

        HttpSession session = request.getSession(true);
        User user = (User) session.getAttribute("User");

    The old user authentication layer (which I won't describe; suffice to say, it did not use a database) is being replaced with code mapped to the DB with Hibernate. So 'User' is now a Hibernate entity. My understanding of Hibernate object life cycles is a little fuzzy, but it seems like storing 'User' in the HttpSession now becomes a problem, because it will be retrieved in a different transaction during the next request. What is the right thing to be doing here? Can I just use the Hibernate Session object's update() method to reattach the User instance the next time around? Do I need to?

    Read the article

  • How to avoid StaleObjectStateException when transaction updates thousands of entities?

    - by ThinkFloyd
    We are using Hibernate 3.6.0.Final with JPA 2 and Spring 3.0.5 for a large-scale enterprise application running on Tomcat 7 and MySQL 5.5. Most of the transactions in the application live for less than a second and update 5-10 entities, but in some use cases we need to update more than 10-20K entities in a single transaction, which takes a few minutes. As a result, more than 70% of the time such a transaction fails with StaleObjectStateException, because some of those entities were updated by another transaction in the meantime. We maintain a version column in all tables and generally retry on StaleObjectStateException, but since these long transactions take so long, I am not sure that retrying will ever let us escape the StaleObjectStateException. A lot of activity keeps updating these entities during busy hours, so we cannot go with a pessimistic approach either, because it could potentially halt many activities in the system. Please suggest how to fix this long-transaction issue; we cannot simply split the work into thousands of small, independent transactions, because we cannot afford inconsistent data if some of them fail and some succeed.

    Read the article

  • What algorithm would you use to code a parrot?

    - by Phil H
    A parrot learns the most commonly uttered words and phrases in its vicinity so it can repeat them at inappropriate moments. So how would you create a software version? Assuming it has access to a microphone and can record sound at will, how would you code it without requiring infinite resources? The best I can imagine is to divide the stream using silences in the sound, then use some pattern recognition to encode each segment as a list of tokens, storing new ones as you meet them. By hashing the token sequences and counting occurrences in a database, you could build up a picture of the most frequently uttered phrases. But given the huge variety in phrases, how do you prevent this from just becoming a huge list? And the sheer number of pairs to match would surely generate a lot of false positives from the combinatorial nature of matching. Would you use a neural net, since that's how a real parrot manages it? Or is there another, cleverer way of matching large-scale patterns in analogue data?
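
    As a sketch of just the token-counting half of this idea (ignoring the audio side entirely), here is a minimal Python version that hashes token sequences and counts occurrences; the utterances and the phrase-length cap are made up for illustration:

        from collections import Counter

        # Pretend a recognizer has already turned utterances into token lists.
        utterances = [
            ["polly", "wants", "a", "cracker"],
            ["who", "is", "a", "good", "bird"],
            ["polly", "wants", "a", "cracker"],
        ]

        MAX_PHRASE_LEN = 4
        phrase_counts = Counter()

        for tokens in utterances:
            # Count every contiguous run of tokens up to MAX_PHRASE_LEN long.
            for n in range(1, MAX_PHRASE_LEN + 1):
                for i in range(len(tokens) - n + 1):
                    phrase_counts[tuple(tokens[i:i + n])] += 1

        # The "parrot" repeats the phrases it has heard most often.
        for phrase, count in phrase_counts.most_common(3):
            print(count, " ".join(phrase))

    Keeping the table from growing without bound usually comes down to capping the phrase length and periodically pruning low-count entries, which is exactly the concern the question raises.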

    Read the article

  • overlay items from mysql

    - by user1285471
    I have several locations stored in tables in a MySQL database which I want displayed as POIs. The table is too large and changes too frequently to use SQLite, so I need to get the info from MySQL. I have set up a MapView indicating the device location using a drawable image, but I now need to add the other locations from the database. Then, when a user clicks on an item, I need a popup with options to route there using Google Maps navigation, or to view the mobi site, phone, or email. I am new to Android. Please assist. I have gone through the overlay tutorial, but there is no indication of how to get the info from the DB. Is there a tutorial somewhere with some example code for this?

    Read the article

  • Including full LaTeX documents within others.

    - by Chris Clarke
    I'm currently finishing off my dissertation and would like to be able to include some documents within my LaTeX document. The files I'd like to include are weekly reports, themselves written in LaTeX, for my supervisor. Obviously all the documents are page-numbered separately, but I would like them to be included in the final document. I could concatenate all the final PDFs using Ghostscript or some other tool, but I would like to have consistent numbering throughout the document. I have tried including the LaTeX from each document in the main document, but the preamble etc. causes problems, and the small title I have in each report takes a whole page. In summary, I'm looking for a way of including a number of one- or two-page self-contained LaTeX files in a large report, keeping their original layouts, but changing the page numbering.

    Read the article

  • Are there libraries or techniques for collecting and weighing keywords from a block of text?

    - by Soviut
    I have a field in my database that can contain large blocks of text. I need to make this searchable but don't have the ability to use full text searching. Instead, on update, I want my business layer to process the block of text and extract keywords from it which I can save as searchable metadata. Ideally, these keywords could then be weighed based on the number of times they appear in the block of text. Naturally, words like "the", "and", "of", etc. should be discarded as they just add a lot of noise to the search. Are there tools or libraries in Python that can do this filtering or should I roll my own?
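
    Since the question mentions Python, here is a minimal sketch of the roll-your-own approach using only the standard library; the tiny stop-word set and the raw term-frequency weighting are both simplifying assumptions for illustration:

        import re
        from collections import Counter

        STOP_WORDS = {"the", "and", "of", "a", "an", "to", "in", "is", "it", "that"}

        def extract_keywords(text, top_n=10):
            """Return the top_n most frequent non-stop-words with their counts."""
            words = re.findall(r"[a-z']+", text.lower())
            counts = Counter(w for w in words if w not in STOP_WORDS and len(w) > 2)
            return counts.most_common(top_n)

        sample = ("The quick brown fox jumps over the lazy dog. "
                  "The dog barks, and the fox runs off into the woods.")
        print(extract_keywords(sample, top_n=5))

    Libraries such as NLTK ship ready-made stop-word lists and stemmers that do this filtering more thoroughly than a hand-rolled set.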

    Read the article

  • Which are the current/emerging desktop development technologies worth looking into?

    - by heeboir
    Greetings. With all the development effort going into the web and the emerging technologies in that area, I'm left wondering: what is a state-of-the-art way to implement desktop applications in this day and age? If you were to start a new application of considerable size from scratch, what technology would you invest your efforts in (focusing on cross-platform portability, decent performance and interoperability with existing standards)? I've looked into the Adobe AIR platform, which appears quite impressive but seems rather limited for supporting a large application. Would something like Java/SWT still be a sensible choice? Does something like GWT fit the bill? Thanks. P.S. I'm leaving my question a bit open-ended in an effort to gather diverse answers. Surely this is a subjective matter and there is no right or wrong answer.

    Read the article

  • Offline Database Write Cache in C#

    - by Todd Gardner
    I have a Windows service that receives a large amount of data that needs to be transformed and persisted to a database. To ensure that we do not lose data, I want to create a "write cache" that keeps accepting data regardless of whether the database is online. Once the database becomes available again, I want it to flush the contents of the cache back into the database. I've seen some articles indicating that I might be able to do this with NHibernate, but I haven't found anything conclusive. What options exist for this, and is NHibernate the appropriate direction?

    Read the article

  • Filtering out unique rows in MySQL

    - by jpatokal
    So I've got a large amount of SQL data that looks basically like this:

        user | src | dst
           1 |   1 |   1
           1 |   1 |   1
           1 |   1 |   2
           1 |   1 |   2
           2 |   1 |   1
           2 |   1 |   3

    I want to filter out pairs of (src, dst) that are unique to one user (even if that user has duplicates), leaving behind only those pairs belonging to more than one user:

        user | src | dst
           1 |   1 |   1
           1 |   1 |   1
           2 |   1 |   1

    In other words, pair (1,2) is unique to user 1 and pair (1,3) to user 2, so they're dropped, leaving behind only all instances of pair (1,1). Any ideas? The answers to the question below can find the non-unique pairs, but my SQL-fu doesn't suffice to handle the complication of requiring that they belong to multiple users as well.

    [SQL question] How to select non "unique" rows
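
    For clarity, the filtering rule restated as a small Python sketch over the rows from the example (the row data is copied from the question; the rest is illustrative):

        from collections import defaultdict

        rows = [  # (user, src, dst) tuples from the example above
            (1, 1, 1), (1, 1, 1), (1, 1, 2), (1, 1, 2), (2, 1, 1), (2, 1, 3),
        ]

        # Record which users each (src, dst) pair belongs to.
        users_per_pair = defaultdict(set)
        for user, src, dst in rows:
            users_per_pair[(src, dst)].add(user)

        # Keep only rows whose (src, dst) pair is shared by more than one user.
        kept = [r for r in rows if len(users_per_pair[(r[1], r[2])]) > 1]
        print(kept)  # [(1, 1, 1), (1, 1, 1), (2, 1, 1)]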

    Read the article

  • problem reading a csv file in python

    - by Hossein
    Hi, I am trying to read a very simple but somewhat large (800MB) CSV file using the csv library in Python. The delimiter is a single tab and each line consists of some numbers. Each line is a record, and I have 20681 rows in my file. I had some problems during my calculations using this file; it always stops at a certain row. I got suspicious about the number of rows in the file, so I used the code below to count them:

        tfdf_Reader = csv.reader(open('v2-host_tfdf_en.txt'), delimiter=' ')
        c = 0
        for row in tfdf_Reader:
            c = c + 1
        print c

    To my surprise, c is printed with the value of 61722! Why is this happening? What am I doing wrong?
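
    For comparison, a minimal row count over a tab-delimited file usually looks like the sketch below (Python 3 syntax; the file name is taken from the question, and the explicit '\t' delimiter is an assumption about the data):

        import csv

        count = 0
        with open("v2-host_tfdf_en.txt", newline="") as f:
            reader = csv.reader(f, delimiter="\t")  # tab-separated fields, one record per line
            for row in reader:
                count += 1
        print(count)

    If the two counts still disagree, it usually means the file's notion of a line differs from whatever produced the 20681 figure - for example bare carriage returns, or newlines inside quoted fields.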

    Read the article

  • How do I find the millionth number in the series: 2 3 4 6 9 13 19 28 42 63 ... ?

    - by HH
    It takes about a minute to reach term 3000 on my computer, but I need to know the millionth number in the series. The definition is recursive, so I cannot see any shortcut except to calculate everything before the millionth number. How can I calculate the millionth number in the series quickly?

    Series definition: n_{i+1} = floor(3/2 * n_i), with n_0 = 2. Interestingly, only one site lists the series according to Google: this one.

    Too-slow Bash code:

        #!/bin/bash
        function series {
            # bc emits backslash line continuations for very large numbers; tr/sed strip them
            n=$( echo "3/2*$n" | bc -l | tr '\n' ' ' | sed -e 's@\\@@g' -e 's@ @@g' )
            n=$( echo $n/1 | bc )   # dummy floor
        }
        n=2
        nth=1
        while [ true ]    # $nth -lt 500 ]
        do
            series $n     # n gets its new value through the global variable
            echo $nth $n
            nth=$( echo $nth + 1 | bc )   # nth++
        done
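
    Since most of the cost above is bash spawning bc on every step, the same recurrence in Python (whose integers are arbitrary precision) is sketched below. Bear in mind that the millionth term is roughly 2 * 1.5^1,000,000, a number with on the order of 10^5 digits, so it remains a big-integer computation however it is written:

        def nth_term(k):
            """Return n_k for the recurrence n_{i+1} = floor(3/2 * n_i), n_0 = 2."""
            n = 2
            for _ in range(k):
                n = (3 * n) // 2  # integer arithmetic, so the floor comes for free
            return n

        # The first few terms match the series in the title.
        print([nth_term(i) for i in range(10)])  # [2, 3, 4, 6, 9, 13, 19, 28, 42, 63]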

    Read the article

  • Can I force MySQL to output results before query is completed?

    - by Gordon Royle
    I have a large MySQL table (about 750 million rows) and I just want to extract a couple of columns. SELECT id, delid FROM tbl_name; No joins or selection criteria or anything. There is an index on both fields (separately). In principle, it could just start reading the table and spitting out the values immediately, but in practice the whole system just chews up memory and basically grinds to a halt. It seems like the entire query is being executed and the output stored somewhere before ANY output is produced... I've searched on unbuffering, turning off caches etc, but just cannot find the answer. (mysqldump is almost what I want except it dumps the whole table - but at least it just starts producing output immediately)

    Read the article

  • Parsing a string representing a float *with an exponent* in Python

    - by Lucas
    Hi, I have a large file with numbers in the form 6,52353753563E-7, so there's an exponent in that string, and float() dies on it. While I could write custom code to pre-process the string into something float() can eat, I'm looking for the Pythonic way of converting these into a float (something like a format string passed somewhere). I must say I'm surprised float() can't handle strings with such an exponent; this is pretty common stuff. I'm using Python 2.6, but 3.1 is an option if need be.
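
    For what it's worth, the exponent itself is something float() accepts; in the example it is the comma decimal separator that trips it up. A minimal sketch of the simplest workaround, assuming the comma really is a decimal point (as in many European locales):

        s = "6,52353753563E-7"

        # float() already understands E notation; only the comma needs translating.
        value = float(s.replace(",", "."))
        print(value)  # 6.52353753563e-07

    A locale-aware alternative is locale.atof after setting LC_NUMERIC to a comma-decimal locale, if relying on the system's locale data is acceptable.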

    Read the article

  • Override onDraw to change how the drawing occurs (Android)

    - by Casebash
    I want to change how my UI elements display, so I am overriding onDraw. The following code allows me to change a View to be drawn using PorterDuff.Mode.DARKEN. Unfortunately, this method involves creating a bitmap the size of the entire screen, drawing to it, and then drawing this large bitmap back to the main screen, for each UI element. This isn't at all efficient. Is it possible to achieve this in a more efficient way?

        @Override
        protected void onDraw(Canvas canvas) {
            // TODO: Reduce the burden from multiple drawing
            Bitmap bitmap = Bitmap.createBitmap(canvas.getWidth(), canvas.getHeight(), Config.ARGB_8888);
            Log.e("tmp", canvas.getClipBounds().toString());
            Canvas offscreen = new Canvas(bitmap);
            super.onDraw(offscreen);
            // Then draw onscreen
            Paint p = new Paint();
            p.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.DARKEN));
            canvas.drawBitmap(bitmap, 0, 0, p);
        }

    Read the article

  • Resources for Win32 C/C++ programming

    - by EricM
    I have experience in a variety of languages (Java, Perl, C#, PHP, JavaScript, ANSI C for microprocessors, Objective-C and others), but Win32 programming is not an area I've done a lot of work in. Now part of my job entails maintaining a large Win32 codebase that stretches back 15 years and includes everything from C written originally for Win95, to MFC, to COM, to 64-bit code for Win7, to C++ using Boost, and so on. If there's a variation on how to do something, it's in there. Are there any good Win32 C/C++ references that both discuss the proper way to do things today and give a little sense of how things evolved? Something like this discussion of all the various boolean types, or how to approach the API monstrosity of simply copying a string. I don't see my career heading too far down this path, but I do like to understand what I'm working with, and I think this is an important part of programming history. Thanks, Eric

    Read the article
