Search Results

Search found 11409 results on 457 pages for 'large teams'.


  • What videoconferencing platforms work best for distributed software development teams?

    - by user11347
    Today I had a religious experience: I participated in a videoconference using a high-quality Polycom system. It made a huge difference in communication quality -- people I previously had a terrible time understanding now sounded like Shakespeare. Seeing a high-quality video image was enormously helpful. I asked operations how much the Polycom cost and they said $20K new, or $4K off eBay. At that price it's no solution for people who work from home, or who work in offices in groups of 3 or fewer. My budget for a videoconferencing system is a few hundred dollars per person. Skype is not nearly good enough, and I haven't seen a consumer webcam that is good enough either. Does such a solution exist? I'm looking to collaborate both with people who are close by (in the same city but not in the same room) and far away (on different continents).

    Read the article

  • Is there a fast way to jump to element using XMLReader?

    - by Derk
    I am using XMLReader to read a large XML file with about 1 million elements on the level I am reading from. However, I've calculated that it will take over 10 seconds to jump to, for instance, element 500,000 using XMLReader::next([string $localname]) or XMLReader::read(void). That is not very usable. Is there a faster way to do this?
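
    A streaming reader has no random access, so the usual workaround is to scan the file once, record the byte offset where each element starts, and seek straight to the right offset on later jumps. A minimal sketch of the index-building pass in Python (the element name and chunk size are assumptions; the same idea ports to PHP with fopen/ftell/fseek):

        def build_offset_index(path, tag=b"<item", chunk_size=1 << 20):
            # one linear pass: remember the byte offset where each element starts
            offsets = []
            consumed = 0   # bytes of the file already fully scanned
            tail = b""     # carry-over so a tag split across chunks is still found
            with open(path, "rb") as f:
                while chunk := f.read(chunk_size):
                    buf = tail + chunk
                    base = consumed - len(tail)   # file offset of buf[0]
                    i = buf.find(tag)
                    while i != -1:
                        offsets.append(base + i)
                        i = buf.find(tag, i + 1)
                    consumed += len(chunk)
                    tail = buf[-(len(tag) - 1):]
            return offsets   # later: f.seek(offsets[500_000]) and parse from there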

    Read the article

  • Parsing Huge XML Files in PHP

    - by Ian
    I'm trying to parse the DMOZ content/structure XML files into MySQL, but all the existing scripts for this are very old and don't work well. How can I go about opening a large (1GB+) XML file in PHP for parsing?
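
    The technique that keeps memory flat at any file size is a pull/streaming parser (XMLReader or the expat functions in PHP) that handles one element and then discards it. A sketch of the pattern using Python's iterparse, assuming the records of interest are Topic elements -- PHP's XMLReader follows the same loop:

        import xml.etree.ElementTree as ET

        def stream_records(path, tag="Topic"):        # tag name is an assumption
            context = ET.iterparse(path, events=("start", "end"))
            _, root = next(context)                   # grab the root element
            for event, elem in context:
                if event == "end" and elem.tag == tag:
                    yield elem.attrib                 # hand off one record...
                    root.clear()                      # ...then free everything parsed so far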

    Read the article

  • Very large database, very small portion of it being retrieved in real time

    - by ming yeow
    Hi folks, I have an interesting database problem. I have a DB that is 150GB in size, and my memory buffer is 8GB. Most of my data is rarely retrieved, or mainly retrieved by backend processes. I would much prefer to keep it around, because some features require it. Some of it (namely some tables, and some identifiable parts of certain tables) is used very often in a user-facing manner. How can I make sure the latter is always kept in memory? (There is more than enough space for it.) More info: we are on Ruby on Rails, the database is MySQL, and our tables are stored using InnoDB. We shard the data across 2 partitions, and because of the sharding we store most of our data as JSON blobs, indexing only the primary keys.

    Read the article

  • Can't Deploy or Upload Large SSRS 2008 Report from VS or IE

    - by Bratch
    So far in this project I have two reports in VS2008/BIDS. The first one contains 1 tablix and is about 100k. The second one contains 3 tablixes (tablices?) and is about 257k. I can successfully deploy the smaller report from VS and I can upload it from the Report Manager in IE. I can view/run it from Report Manager and I can get to the Report Server (web service) URL from my browser just fine. Everything is done over HTTPS and there is nothing wrong with the certificates. With the larger report, the error I get in VS is "The operation has timed out" after about 100 seconds. The error when I upload from IE is "The underlying connection was closed: An unexpected error occurred on a send" after about 130 seconds. In the RSReportServer.config file I tried changing Authentication/EnableAuthPersistence from true to false and restarting the service, but I still get the error. I have the key "SecureConnectionLevel" set to 2; changing this to 0 and turning off SSL is not going to be an option. I added a registry key named "MaxRequestBytes" to HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\HTTP\Parameters, set it to 5242880 (5MB), and restarted the HTTP and SRS services as suggested in a forum post by Jin Chen of MSFT. I still cannot upload the larger report. This is on MS SQL 2008 and WS 2003. Below is part of a log file entry from ...\Reporting Services\LogFiles when I attempted to upload from IE.

        library!WindowsService_0!89c!02/10/2010-07:57:57:: i INFO: Call to CleanBatch() ends
        ui!ReportManager_0-1!438!02/10/2010-07:59:33:: e ERROR: The underlying connection was closed: An unexpected error occurred on a send.
        ui!ReportManager_0-1!438!02/10/2010-07:59:34:: e ERROR: HTTP status code -- 500
        -------Details--------
        System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a send. ---> System.IO.IOException: Unable to write data to the transport connection: An established connection was aborted by the software in your host machine. ---> System.Net.Sockets.SocketException: An established connection was aborted by the software in your host machine
           at System.Net.Sockets.Socket.MultipleSend(BufferOffsetSize[] buffers, SocketFlags socketFlags)
           at System.Net.Sockets.NetworkStream.MultipleWrite(BufferOffsetSize[] buffers)
           --- End of inner exception stack trace ---
        ...

    Read the article

  • MPMoviePlayerController on large videos causes massive memory spike, and a level 1 memory warning

    - by Shizam
    When viewing images my application hums along nicely with low memory consumption; once I try to watch a video using MPMoviePlayerController, memory usage spikes way up, dwarfing the previous memory graph, and if I play the video it causes a 'memory warning. Level=1' message. The video files (mp4) aren't even that big, 40MB or so, and it doesn't matter if I play the file streamed from a URL or loaded from a local file; actually the memory spike is even worse if I try to stream it. Here is the code I use to create the player:

        if (_photo.videoPath != nil) {
            _movieViewController = [[MPMoviePlayerViewController alloc] initWithContentURL:[NSURL fileURLWithPath:_photo.videoPath]];
        } else {
            _movieViewController = [[MPMoviePlayerViewController alloc] initWithContentURL:[NSURL URLWithString:_photo.videoURL]];
        }
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(videoMetaListener:)
                                                     name:MPMovieDurationAvailableNotification
                                                   object:_movieViewController.moviePlayer];
        _movieViewController.moviePlayer.scalingMode = MPMovieScalingModeAspectFit;
        _movieViewController.moviePlayer.shouldAutoplay = YES;
        _movieViewController.moviePlayer.controlStyle = MPMovieControlStyleEmbedded;

    Anybody else running into issues playing video? Also, I checked for leaks; there are none reported.

    Read the article

  • PHP file upload: second file does not upload while the first does, with no error

    - by Curtis
    So I have a script I have been using and it generally works well with multiple files... When I upload a very large file in a multiple-file upload, only the first file is uploaded, and I am not seeing any errors as to why. I figure this is related to a timeout setting but cannot figure it out -- any ideas? I have the following set in my .htaccess file:

        php_value post_max_size 1024M
        php_value upload_max_filesize 1024M
        php_value memory_limit 600M
        php_value output_buffering on
        php_value max_execution_time 259200
        php_value max_input_time 259200
        php_value session.cookie_lifetime 0
        php_value session.gc_maxlifetime 259200
        php_value default_socket_timeout 259200

    Read the article

  • The remote server returned an unexpected response: (413) Request Entity Too Large

    - by user1583591
    If anyone can help me figure out why I am getting the following error when making a call to my WCF service, I would be eternally grateful:

        The maximum message size quota for incoming messages (65536) has been exceeded. To increase the quota, use the MaxReceivedMessageSize property on the appropriate binding element.

    I have tried modifying the config file on both the service and client, and made sure the service name includes the namespace, but I can't seem to make any progress. Here are my service config settings:

        <services>
          <service name="CCC.CA-CP &amp; Sightlines Campus Carbon Calculator">
            <endpoint address="" binding="basicHttpBinding" bindingConfiguration="Binding2"
                      contract="CCC.ICCCService" behaviorConfiguration="WebBehavior2" />
          </service>
        </services>
        <bindings>
          <basicHttpBinding>
            <binding name="Binding2" sendTimeout="00:01:00" allowCookies="false"
                     bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard"
                     maxBufferSize="2147483647" maxBufferPoolSize="52428800"
                     maxReceivedMessageSize="2147483647" messageEncoding="Text"
                     textEncoding="utf-8" transferMode="Buffered" useDefaultWebProxy="true">
              <readerQuotas maxDepth="32" maxStringContentLength="2147483647"
                            maxArrayLength="16384" maxBytesPerRead="20000"
                            maxNameTableCharCount="16384" />
            </binding>
          </basicHttpBinding>
        </bindings>
        ..
        <dataContractSerializer maxItemsInObjectGraph="12097151" />
        ...
        <requestLimits maxAllowedContentLength="157286400" />
        ...
        <httpRuntime useFullyQualifiedRedirectUrl="true" maxRequestLength="2147483647"...

    I also set the client config with the same binding values. Here is the service contract:

        namespace CCC {
            [ServiceContract(Name = "CA-CP & Sightlines Campus Carbon Calculator", Namespace = "http://www.sightlines.com/CCC/01")]
            public interface ICCCService {
                ....
            }
        }

    Thanks in advance for any help given!

    Read the article

  • Modulo in JavaScript - large number

    - by Benedikt R.
    Hi! I'm trying to calculate with JavaScript's modulo operator, but I don't get the right result (which should be 1). Here is a hardcoded piece of code:

        var checkSum = 210501700012345678131468;
        alert(checkSum % 97); // Result: 66

    What's the problem here? Regards, Benedikt
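
    The literal is far beyond JavaScript's Number.MAX_SAFE_INTEGER (2^53 - 1), so precision is lost before % ever runs. The standard workaround is to keep the number as a string and fold it into the remainder one digit at a time; a sketch of that digit-wise reduction in Python (the same loop works in JavaScript over a string):

        def mod97(digits: str) -> int:
            # fold in one digit at a time so no intermediate value can overflow
            r = 0
            for d in digits:
                r = (r * 10 + int(d)) % 97
            return r

        print(mod97("210501700012345678131468"))  # -> 1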

    Read the article

  • CGContextDrawPDFPage taking up large amounts of memory

    - by Ed Marty
    I have a PDF file that I want to draw in outline form. I want to draw the first several pages of the document each in their own UIImage to use on a button, so that when clicked, the main display will navigate to the clicked page. However, CGContextDrawPDFPage seems to be using copious amounts of memory when attempting to draw the page. Even though the image is only supposed to be around 100px tall, the application crashes while drawing one page in particular, which according to Instruments allocates about 13 MB of memory just for the one page. Here's the code for drawing:

        //Note: This is always called in a background thread, but the autorelease pool is set up elsewhere
        + (void) drawPage:(CGPDFPageRef)m_page inRect:(CGRect)rect inContext:(CGContextRef)g {
            CGPDFBox box = kCGPDFMediaBox;
            CGAffineTransform t = CGPDFPageGetDrawingTransform(m_page, box, rect, 0, YES);
            CGRect pageRect = CGPDFPageGetBoxRect(m_page, box);
            //Start the drawing
            CGContextSaveGState(g);
            //Clip to our bounding box
            CGContextClipToRect(g, pageRect);
            //Now we have to flip the origin to top-left instead of bottom-left
            //First: flip the y-axis
            CGContextScaleCTM(g, 1, -1);
            //Second: move the origin
            CGContextTranslateCTM(g, 0, -rect.size.height);
            //Now apply the transform to draw the page within the rect
            CGContextConcatCTM(g, t);
            //Finally, draw the page
            //The important bit. Commenting out the following line "fixes" the crashing issue.
            CGContextDrawPDFPage(g, m_page);
            CGContextRestoreGState(g);
        }

    Is there a better way to draw this image that doesn't take up huge amounts of memory?

    Read the article

  • Large Django application layout

    - by Rob Golding
    I am in a team developing a web-based university portal, which will be based on Django. We are still in the exploratory stages, and I am trying to find the best way to lay out the project/development environment. My initial idea is to develop the system as a Django "app" which contains sub-applications to separate out the different parts of the system. The reason I intend to make these "sub" applications is that they would have no use outside the parent application whatsoever, so there would be little point in distributing them separately. We envisage that the portal will be installed in multiple locations (at different universities, for example), so the main app can be dropped into a number of Django projects to install it. We therefore have a different repository for each location's project, which is really just a settings.py file defining the installed portal applications and a urls.py routing the URLs to it. I have started to write some initial code, though, and I've come up against a problem: some of the code that handles user authentication and profiles seems to be without a home. It doesn't conceptually belong in the portal application, as it doesn't relate to the portal's functionality. It also, however, can't go in the project repository, as I would then be duplicating the code across each location's repository. If I then discovered a bug in this code, for example, I would have to replicate the fix manually across all of the locations' project files. My idea for a fix is to make all the project repos a fork of a "master" location project, so that I can pull any changes from that master. I think this is messy, though, and it means I have one more repository to look after. I'm looking for a better way to structure this project. Can anyone recommend a solution, or a similar example I can take a look at? The problem seems to be that I am developing a Django project rather than just a Django application.
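
    A common resolution is to pull the homeless auth/profile code into its own reusable app, version it as a package, and have every location's project install it alongside the portal. A sketch of what one location's settings.py might then contain (all app names here are hypothetical):

        # settings.py for one location's project (app names are illustrative)
        INSTALLED_APPS = [
            "django.contrib.auth",
            "django.contrib.contenttypes",
            "django.contrib.sessions",

            "portal",             # the main portal app with its sub-applications
            "portal.courses",
            "portal.timetable",

            "portal_accounts",    # shared auth/profile code, installed as a package
        ]

    A bug fix then lands once, in the portal_accounts repository, and each location picks it up on its next dependency upgrade rather than via manual replication across forks.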

    Read the article

  • Horizontal Scrolling Flash Game/Large Horizontal Scene

    - by Nathan
    Hello, I'm currently learning Flash (CS4, AS3) and am creating a game. I currently have one .fla file with 4 scenes; I move from left to right across scene 1, then on to scene 2 and again go from left to right. This is the kind of game where items pop up that need to be clicked on, and you get points. Is there any way I can combine these into 1 scene? Flash only allows a maximum stage width of 2880px. The reason for asking is that the transition between the scenes is RUBBISH, and my ActionScript is not working correctly between scenes (it loses values). Any help would be greatly appreciated! Nathan

    Read the article

  • 2D colliding n-body simulation (fast collision detection for a large number of balls)

    - by osgx
    Hello, I want to write a program simulating the motion of a high number (N = 1000 to 10^5 and more) of bodies (circles) on a 2D plane. All bodies have equal size, and the only force between them is elastic collision. I want something like a gas simulation but at a larger scale, with more balls and a denser filling of the plane (not a sparse gas model, but something like a boiling-water model). So I want a fast method of detecting whether ball number i has any other ball on its path within a distance of 2*radius + V*delta_t. I don't want to do a full collision search against all N balls for each ball i (that search is O(N^2)).
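
    The standard fix is a broad phase that only compares nearby balls: hash every ball into a uniform grid whose cell size matches the search distance (2*radius + V*delta_t), then test each ball only against its own cell and the 8 neighbouring cells. A sketch of that spatial hashing in Python:

        from collections import defaultdict

        def candidate_pairs(balls, cell):
            # balls: list of (x, y); cell: roughly 2*radius + V*delta_t
            grid = defaultdict(list)
            for i, (x, y) in enumerate(balls):
                grid[(int(x // cell), int(y // cell))].append(i)

            pairs = set()
            for (cx, cy), members in grid.items():
                # a collision partner can only sit in this cell or one of its 8 neighbours
                for dx in (-1, 0, 1):
                    for dy in (-1, 0, 1):
                        for j in grid.get((cx + dx, cy + dy), ()):
                            for i in members:
                                if i < j:
                                    pairs.add((i, j))
            return pairs

    Only the pairs that survive this filter need the exact circle-circle test, which turns the per-step cost from O(N^2) into roughly O(N) for reasonably uniform densities.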

    Read the article

  • Importing a large dataset into a database

    - by peaceful
    I'm a beginning programmer in the areas relevant to this question, so if possible, it'd be helpful to avoid assuming I know a lot already. I'm trying to import the OpenLibrary dataset into a local Postgres database. After it's imported, I plan to use it as a starting seed for a Ruby on Rails application that will include information on books. The OpenLibrary datasets are available here, in a modified JSON format: http://openlibrary.org/dev/docs/jsondump I only need very basic information for my application, much less than what is provided in the dumps. I'm only trying to get out book titles, author names, and relationships between books and authors. Below are two typical entries from their dataset, the first for an author, and the second for a book (they seem to have an entry for each edition of a book). The entries seem to lead off with a primary key, and then with a type, before including the actual JSON database dump.

        /a/OL2A /type/author {"name": "U. Venkatakrishna Rao", "personal_name": "U. Venkatakrishna Rao", "last_modified": {"type": "/type/datetime", "value": "2008-09-10 08:44:01.978456"}, "key": "/a/OL2A", "birth_date": "1904", "type": {"key": "/type/author"}, "id": 99, "revision": 3}

        /b/OL345M /type/edition {"publishers": ["Social Science Research Project, Dept. of Geography, University of Dacca"], "pagination": "ii, 54 p.", "title": "Land use in Fayadabad area", "lccn": ["sa 65000491"], "subject_place": ["East Pakistan", "Dacca region."], "number_of_pages": 54, "languages": [{"comment": "initial import", "code": "eng", "name": "English", "key": "/l/eng"}], "lc_classifications": ["S471.P162 E23"], "publish_date": "1963", "publish_country": "pk ", "key": "/b/OL345M", "authors": [{"birth_date": "1911", "name": "Nafis Ahmad", "key": "/a/OL302A", "personal_name": "Nafis Ahmad"}], "publish_places": ["Dacca, East Pakistan"], "by_statement": "[by] Nafis Ahmad and F. Karim Khan.", "oclc_numbers": ["4671066"], "contributions": ["Khan, Fazle Karim, joint author."], "subjects": ["Land use -- East Pakistan -- Dacca region."]}

    The size of the uncompressed dumps is enormous: about 2GB for the authors list, and 18GB for the book editions list. OpenLibrary does not provide any tools for this themselves; they provide a simple unoptimized Python script for reading sample data (which, unlike the actual dumps, comes in pure JSON format), but they estimate that if it were modified for use on their actual data it would take 2 months (!) to finish loading. How can I read this into the database? I assume I'll need to write a program to do this. What language, and any guidance on how I should do it to finish in a reasonable amount of time? The only scripting language I have any experience with is Ruby.
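
    Since each dump line is just a key, a type, and a JSON payload, one workable approach is to stream the file line by line and batch-insert only the fields needed. A rough sketch in Python, assuming whitespace-separated leading fields as in the samples above and a DB-API connection such as psycopg2 (table and column names are made up):

        import json

        def import_editions(dump_path, conn, batch_size=1000):
            cur = conn.cursor()
            batch = []
            with open(dump_path, encoding="utf-8") as f:
                for line in f:
                    # split off key and type; the rest of the line is the JSON payload
                    key, typ, payload = line.split(maxsplit=2)
                    if typ != "/type/edition":
                        continue
                    record = json.loads(payload)
                    authors = [a.get("key") for a in record.get("authors", [])]
                    batch.append((key, record.get("title"), json.dumps(authors)))
                    if len(batch) >= batch_size:
                        cur.executemany(
                            "INSERT INTO editions (olid, title, author_keys) VALUES (%s, %s, %s)",
                            batch,
                        )
                        conn.commit()
                        batch.clear()
            if batch:
                cur.executemany(
                    "INSERT INTO editions (olid, title, author_keys) VALUES (%s, %s, %s)",
                    batch,
                )
                conn.commit()

    Because nothing but the current line and the current batch is ever held in memory, the run time is bounded by parsing and insert speed rather than RAM, and batching the inserts (or switching to Postgres COPY) is what keeps it from taking months.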

    Read the article

  • How to refactor large projects in visual studio

    - by Aaron
    I always run into a problem where my projects in Visual Studio (2008) become huge monstrosities and everything is generally thrown into a Web Application project. I know from checking out some open source stuff that they tend to have multiple projects within a solution, each with their own responsibilities. Does anyone have any advice for how to refactor this out? What should be in a separate project vs. part of the web project? Can you point me to any reference materials on the subject, or is it just something you become accustomed to with time?

    Read the article

  • Visualizing Undirected Graph That's Too Large for GraphViz?

    - by Gabe
    Hi Everyone, I was wondering if anyone has any advice for rendering an undirected graph with 178,000 nodes and 500,000 edges. I've tried Neato, Tulip, and Cytoscape. Neato doesn't even come remotely close, and Tulip and Cytoscape claim they can handle it but don't seem to be able to. (Tulip does nothing and Cytoscape claims to be working, and then just stops.) Does anyone have any ideas? I'd just like a vector format file (ps or pdf) with a remotely reasonable layout of the nodes. Thanks!

    Read the article

  • Working with a large data object between ruby processes

    - by Gdeglin
    I have a Ruby hash that reaches approximately 10 megabytes if written to a file using Marshal.dump. After gzip compression it is approximately 500 kilobytes. Iterating through and altering this hash is very fast in Ruby (fractions of a millisecond). Even copying it is extremely fast. The problem is that I need to share the data in this hash between Ruby on Rails processes. In order to do this using the Rails cache (file_store or memcached) I need to Marshal.dump the data first; however, this incurs a 1000 millisecond delay when serializing it and a 400 millisecond delay when deserializing it. Ideally I would want to be able to save and load this hash from each process in under 100 milliseconds. One idea is to spawn a new Ruby process to hold this hash and provide an API for the other processes to modify or process the data within it, but I want to avoid doing this unless I'm certain there are no other ways to share this object quickly. Is there a way I can more directly share this hash between processes without needing to serialize or deserialize it? Here is the code I'm using to generate a hash similar to the one I'm working with:

        @a = []
        0.upto(500) do |r|
          @a[r] = []
          0.upto(10_000) do |c|
            if rand(10) == 0
              @a[r][c] = 1 # 10% chance of being 1
            else
              @a[r][c] = 0
            end
          end
        end
        @c = Marshal.dump(@a) # 1000 milliseconds
        Marshal.load(@c)      # 400 milliseconds

    Update: Since my original question did not receive many responses, I'm assuming there's no solution as easy as I would have hoped. Presently I'm considering two options:

    1. Create a Sinatra application to store this hash, with an API to modify/access it.
    2. Create a C application to do the same as #1, but a lot faster.

    The scope of my problem has increased such that the hash may be larger than my original example, so #2 may be necessary. But I have no idea where to start in terms of writing a C application that exposes an appropriate API. A good walkthrough of how best to implement #1 or #2 may receive best-answer credit.

    Read the article

  • Slow select when inserting large amounts of data (MYSQL)

    - by siannopollo
    I have a process that imports a lot of data (950k rows) using inserts of 500 rows at a time. The process generally takes about 12 hours, which isn't too bad. Normally a query on the table is pretty quick (under 1 second), as I've put (what I think to be) the proper indexes in place. The problem I'm having is trying to run a query while the import process is running: it makes the query take almost 2 minutes! What can I do to make these two things not compete for resources (or whatever)? I've looked into "insert delayed", but I'm not sure I want to change the table to MyISAM. Thanks for any help!

    Read the article

  • Large file uploads from web pages

    - by jerrygarciuh
    Hi folks, I code primarily in PHP and Perl. I have a client who is insisting on taking video submissions (any encoding) from the public via one of their pages, rather than letting YouTube do its job. The server in question is a virtual machine, and I can adjust ini settings for max post size, max upload size, etc. as needed. My initial thought is to use a Flash-based uploader with PHP on the back end, but I wondered if someone might have useful advice and experience on the subject? Peace, JG

    Read the article

  • Very large image manipulation and tiling

    - by Mohammad
    I need software, a program (Java), or a method for tiling very large images (more than 140MB). I have used ImageMagick's convert tool, Photoshop, CorelDRAW, and MATLAB (on Windows), but I keep running into memory limits: there isn't enough RAM, ImageMagick is very slow, and the results are not usable. I don't know how to load only a small part of the image from disk into RAM instead of the whole file.
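
    One library designed for exactly this problem is libvips, which processes images in strips rather than decoding the whole file, so RAM use stays small regardless of image size. A minimal sketch via its Python binding pyvips (the filename is a placeholder; this assumes a format vips can stream, such as TIFF or PNG):

        import pyvips

        # "sequential" access decodes the image strip by strip, so only a
        # small window of pixels is ever resident in RAM
        image = pyvips.Image.new_from_file("huge_image.tif", access="sequential")

        # write a Deep Zoom-style tile pyramid (tiles plus zoom levels)
        image.dzsave("huge_image_tiles")

    pyvips also exposes image.crop(left, top, width, height) for extracting a single region once the file is open.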

    Read the article

  • Large scale storage for incrementally-appended documents?

    - by Ben Dilts
    I need to store hundreds of thousands (right now, potentially many millions) of documents that start out empty and are appended to frequently, but never otherwise updated or deleted. These documents are not interrelated in any way, and just need to be accessed by some unique ID. Read accesses fetch some subset of the document, which almost always starts midway through at some indexed location (e.g. "document #4324319, save #53 to the end"). These documents start very small, at several KB. They typically reach a final size around 500KB, but many reach 10MB or more. I'm currently using MySQL (InnoDB) to store these documents. Each of the incremental saves is just dumped into one big table with the document ID it belongs to, so reading part of a document looks like "select * from saves where document_id=14 and save_id >= 53 order by save_id", followed by manually concatenating it all together in code. Ideally, I'd like the storage solution to be easily horizontally scalable, with redundancy across servers (e.g. each document stored on at least 3 nodes) and easy recovery of crashed servers. I've looked at CouchDB and MongoDB as possible replacements for MySQL, but I'm not sure either of them makes a whole lot of sense for this particular application, though I'm open to being convinced. Any input on a good storage solution?

    Read the article

  • Large number of simultaneous long-running operations in Qt

    - by Hostile Fork
    I have some long-running operations that number in the hundreds. At the moment they are each on their own thread. My main goal in using threads is not to speed these operations up; the more important thing in this case is that they appear to run simultaneously. I'm aware of cooperative multitasking and fibers. However, I'm trying to avoid anything that would require touching the code in the operations, e.g. peppering them with things like yieldToScheduler(). I also don't want to prescribe that these routines be stylized to emit queues of bite-sized task items... I want to treat them as black boxes. For the moment I can live with these downsides:

    - Maximum # of threads tends to be O(1000)
    - Cost per thread is O(1MB)

    To address the bad cache performance due to context switches, I did have the idea of a timer which would juggle the priorities such that only idealThreadCount() threads were ever at Normal priority, with all the rest set to Idle. This would let me widen the timeslices, which would mean fewer context switches and still be okay for my purposes. Question #1: Is that a good idea at all? One certain downside is that it won't work on Linux (the docs say QThread::setPriority() does nothing there). Question #2: Any other ideas or approaches? Is QtConcurrent thinking about this scenario? (Some related reading: how-many-threads-does-it-take-to-make-them-a-bad-choice, many-threads-or-as-few-threads-as-possible, maximum-number-of-threads-per-process-in-linux)

    Read the article

  • Storing a large list in isolatedStorage on WP7

    - by Ra
    I'm storing a List of around 3,000 objects in IsolatedStorage using XML serialization. It takes too long to deserialize, and I was wondering if you have any recommendations to speed it up. The time is tolerable when deserializing up to 500 objects, but it takes forever for 3,000. Does it just take longer on the emulator, and will it be faster on the phone? I did a whole bunch of searching, and one article said to use a binary stream reader, but I can't find it. Whether I store in binary or XML doesn't matter; I just want to persist the List. I don't want to look at asynchronous loading just yet...

    Read the article

  • How large should my recv buffer be when calling recv in the socket library

    - by Silmaril89
    Hi, I have a few questions about the socket library in C. Here is a snippet of code I'll refer to in my questions:

        char recv_buffer[3000];
        recv(socket, recv_buffer, 3000, 0);

    First, how do I decide how big to make recv_buffer? I'm using 3000, but it's arbitrary. Second, what happens if recv() receives a packet bigger than my buffer? Third, how can I know whether I have received the entire message without calling recv again and having it wait forever when there is nothing left to receive? And finally, is there a way I can make the buffer not have a fixed amount of space, so that I can keep adding to it without fear of running out? Maybe using strcat to concatenate the latest recv() response to the buffer? I know it's a lot of questions in one, but I would greatly appreciate any responses.
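
    For the third and fourth questions, the usual pattern is to loop on recv, appending each chunk to a growable buffer until the peer closes the connection (or a length prefix/delimiter marks the message as complete). A sketch of that loop in Python, whose socket API mirrors the C calls:

        import socket

        def recv_all(sock: socket.socket, chunk_size: int = 4096) -> bytes:
            # accumulate chunks until the peer shuts down the connection;
            # the joined bytes grow as needed, unlike a fixed char array
            parts = []
            while True:
                chunk = sock.recv(chunk_size)
                if not chunk:          # empty result means the peer closed
                    break
                parts.append(chunk)
            return b"".join(parts)

    The chunk size here only tunes how many bytes each recv call can return at once; correctness comes from the loop, not from guessing a buffer big enough for the whole message.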

    Read the article
