Search Results

Search found 11409 results on 457 pages for 'large teams'.


  • Using Partitions for a large MySQL table

    - by user293594
    An update on my attempts to implement a 505,000,000-row table in MySQL on my MacBook Pro: following the advice given, I have partitioned my table tr (i UNSIGNED INT NOT NULL, j UNSIGNED INT NOT NULL, A FLOAT(12,8) NOT NULL, nu BIGINT NOT NULL, KEY (nu), KEY (A)) with a range on nu. nu ought to be a real number, but since I only have 6-d.p. accuracy and the maximum value of nu is 30000, I multiplied it by 10^8 and made it a BIGINT - I gather one can't use FLOAT or DOUBLE values to PARTITION a MySQL table. Anyway, I have 15 partitions (p0: nu<25,000,000,000, p1: nu<50,000,000,000, etc.). I was thinking that this should speed up a typical SELECT: SELECT * FROM tr WHERE nu>95000000000 AND nu<100000000000 AND A>1 to something of the order of the same query on a table consisting of only the data in the relevant partition (<30 secs). But it's taking 30 mins+ to return rows for queries within a partition, and double that if the query spans two (contiguous) partitions. I realise I could just have 15 different tables and query them separately, but is there a way to do this 'automatically' with partitions? Has anyone got any suggestions?
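
    A quick way to check whether partition pruning is actually happening is to ask the optimizer. Below is a minimal sketch (Python with mysql-connector-python; connection details are placeholders and only three of the fifteen partitions are spelled out) that recreates the layout described and runs EXPLAIN PARTITIONS, which on MySQL 5.1 and 5.5 lists the partitions a query will touch (newer versions show the same information in plain EXPLAIN output). If the query reports all partitions rather than just the one covering 95-100 billion, pruning isn't kicking in and the slowness lies elsewhere.

        import mysql.connector  # assumed driver; any DB-API connector works the same way

        DDL = """
        CREATE TABLE tr (
            i  INT UNSIGNED NOT NULL,
            j  INT UNSIGNED NOT NULL,
            A  FLOAT(12,8)  NOT NULL,
            nu BIGINT       NOT NULL,
            KEY (nu),
            KEY (A)
        )
        PARTITION BY RANGE (nu) (
            PARTITION p0  VALUES LESS THAN (25000000000),
            PARTITION p1  VALUES LESS THAN (50000000000),
            -- ... p2 through p13 omitted here for brevity ...
            PARTITION p14 VALUES LESS THAN MAXVALUE
        )
        """

        cnx = mysql.connector.connect(user="...", password="...", database="...")  # placeholders
        cur = cnx.cursor()
        cur.execute(DDL)
        cur.execute("EXPLAIN PARTITIONS SELECT * FROM tr "
                    "WHERE nu > 95000000000 AND nu < 100000000000 AND A > 1")
        for row in cur:
            print(row)  # the 'partitions' column should name only the partition(s) covering the range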

    Read the article

  • Reading large excel file with PHP

    - by Itamar Bar-Lev
    I'm trying to read a 17 MB Excel file (2003) with PHPExcel 1.7.3c, but it crashes while loading the file, after exceeding the 120-second limit I have. Is there another library that can do it more efficiently? I have no need for styling, I only need it to support UTF-8. Thanks for your help
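
    Within PHPExcel itself, the usual suggestion is to implement its read filter interface (PHPExcel_Reader_IReadFilter) and pull the sheet in row chunks rather than loading everything at once. If the job doesn't have to stay in PHP, here is a rough sketch using Python's xlrd, which reads the old 2003 .xls format and, with on_demand=True, parses sheets only when asked for; the file name is a placeholder.

        import xlrd  # reads .xls (Excel 97-2003) workbooks

        book = xlrd.open_workbook("big-report.xls", on_demand=True)
        for name in book.sheet_names():
            sheet = book.sheet_by_name(name)
            for r in range(sheet.nrows):
                row = sheet.row_values(r)  # plain Python values, Unicode text included
                # ... process the row here ...
            book.unload_sheet(name)        # release this sheet's cells before the next one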

    Read the article

  • Opening Large (24 GB) File In C

    - by zacaj
    I'm trying to read in a 24 GB XML file in C, but it won't work. I'm printing out the current position using ftell() as I read it in, but once it gets to a big enough number, it goes back to a small number and starts over, never even getting 20% through the file. I assume this is a problem with the range of the variable used to store the position (long), which can go up to about 4,000,000,000 according to http://msdn.microsoft.com/en-us/library/s3f49ktz%28VS.80%29.aspx, while my file is 25,000,000,000 bytes in size. A long long should work, but how would I change what my compiler (Cygwin/mingw32) uses, or get it to have fopen64?

    Read the article

  • How to pick a chunksize for python multiprocessing with large datasets

    - by Sandro
    I am attempting to use Python to gain some performance on a task that can be highly parallelized using http://docs.python.org/library/multiprocessing. When looking at their library they say to use chunksize for very long iterables. Now, my iterable is not long, but one of the dicts that it contains is huge: ~100,000 entries, with tuples as keys and numpy arrays for values. How would I set the chunksize to handle this, and how can I transfer this data quickly? Thank you.
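
    For what it's worth, chunksize only batches the (short) iterable handed to Pool.map; the big dict is better shipped to each worker once, via an initializer, instead of travelling with every task. A minimal sketch under those assumptions (the worker function and stand-in data are made up):

        from multiprocessing import Pool
        import numpy as np

        _big_lookup = None  # populated once per worker process

        def _init_worker(lookup):
            # Runs once in each worker; avoids re-pickling the ~100k-entry dict per task.
            global _big_lookup
            _big_lookup = lookup

        def work(key):
            # Hypothetical per-item task: look the key up and reduce its numpy array.
            return key, float(_big_lookup[key].sum())

        if __name__ == "__main__":
            big_lookup = {(i, i + 1): np.ones(10) for i in range(100_000)}  # stand-in data
            keys = list(big_lookup)
            with Pool(processes=4, initializer=_init_worker, initargs=(big_lookup,)) as pool:
                # chunksize batches only the short iterable of keys; a few hundred per chunk
                # keeps IPC overhead low without starving workers.
                results = pool.map(work, keys, chunksize=500)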

    Read the article

  • Seamlessly use large background images on webpages

    - by Ben Shelock
    I want to have huge background images on my site, but without giving the user a hard time downloading them and without the site looking ugly as the background loads. They would be no bigger than 1920 x 1080 pixels, however it's hard to say in terms of kilobytes/megabytes. What are my options here and which are most effective? I'm not too bothered about bandwidth, I just want the user to think everything looks nice ;)

    Read the article

  • Export large amount of data from Oracle 10G to SQL Server 2005

    - by uniball
    Dear all, I need to export 100 million data rows (avg row length ~ 100 bytes) from an Oracle 10G database table into SQL Server (over a WAN/VLAN with 6 Mbit/s capacity) on a regular basis. So far, these are the options that I have tried, with a quick summary. Has anyone tried this before? Are there other, better options? Which option would be the best in terms of performance and reliability? The time taken has been calculated using tests on smaller amounts of data and then extrapolating to estimate the time required.
    1. Using the data import wizard on the SQL Server, or SSIS packages, to import the data. It will take around 150 hours to complete the task.
    2. Using an Oracle batch job to spool data into a comma-delimited flat file, then using an SSIS package to FTP this file to the SQL Server and load directly from the flat file. The issue here is the size of the flat file, which is expected to run into GBs.
    3. Although this option is drastically different, I am even considering using a Linked Server to query the Oracle data directly at run-time to avoid bringing in data. Performance is a big problem and I have limited control over the Oracle database in terms of creating table indexes.
    Regards, Uniball
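
    For option 2, the flat-file size is usually manageable if the spool is compressed as it is written; a delimited dump of numeric and short-text rows tends to shrink severalfold under gzip, which matters over a 6 Mbit/s link. A rough sketch of that extract step (Python with cx_Oracle; the connect string, query and file name are placeholders):

        import csv
        import gzip

        import cx_Oracle

        conn = cx_Oracle.connect("user/password@oracle-host/service")  # placeholder DSN
        cur = conn.cursor()
        cur.arraysize = 10_000                      # fetch in large batches, not row by row
        cur.execute("SELECT col1, col2, col3 FROM source_table")

        with gzip.open("extract.csv.gz", "wt", newline="") as fh:
            writer = csv.writer(fh)
            while True:
                rows = cur.fetchmany()              # uses arraysize
                if not rows:
                    break
                writer.writerows(rows)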

    Read the article

  • Large Video Uploads via a website

    - by Andrew
    Some of the problems that can happen are timeouts, disconnections, and not being able to resume a file and having to start from the beginning. Assuming these files are up to around 5 GB in size, what is the best solution for dealing with this problem? I'm using a Drupal 6 install for the website. Some of my constraints due to the server setup I have to deal with:
    - Shared hosting with a max of 200 connections at a time (unlimited disk space)
    - Shared hosting: unable to create users through an API (so I can't automatically generate FTP accounts)
    - I do have the ability to run cron-type scripts via a Drupal module.
    My initial thought was to create FTP users based off of Drupal accounts and require them to download an FTP client for their OS of choice. But the lack of an API to auto-create FTP accounts and the inability to do it from the command line kind of hinder that solution. If there's a workaround someone can think of, let me know! Thanks

    Read the article

  • [perl] Efficient processing of large text

    - by jesper
    I have a text file that contains over one million URLs. I have to process this file in order to assign URLs to groups, based on host address: { 'http://www.ex1.com' => ['http://www.ex1.com/...', 'http://www.ex1.com/...', ...], 'http://www.ex2.com' => ['http://www.ex2.com/...', 'http://www.ex2.com/...', ...] } My current basic solution takes about 600 MB of RAM to do this (the size of the file is about 300 MB). Could you provide some more efficient ways? My current solution simply reads line by line, extracts the host address by regex and puts the URL into a hash.
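
    One common way to shrink this kind of structure is to avoid keeping the full URL in every array element: key the hash on the host and store only the path/query suffix, which cuts the per-entry string cost considerably. A rough equivalent of the hash-of-arrays approach, sketched in Python rather than Perl:

        from urllib.parse import urlsplit
        from collections import defaultdict

        def group_urls(path):
            # Keyed by scheme://host; storing only the path+query suffix avoids
            # repeating the host prefix in every stored entry.
            groups = defaultdict(list)
            with open(path, encoding="utf-8") as fh:
                for line in fh:
                    url = line.strip()
                    if not url:
                        continue
                    parts = urlsplit(url)
                    host = f"{parts.scheme}://{parts.netloc}"
                    groups[host].append(url[len(host):])
            return groups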

    Read the article

  • Fastest way for inserting very large number of records into a Table in SQL

    - by Irchi
    The problem is, we have a huge number of records (more than a million) to be inserted into a single table from a Java application. The records are created by the Java code; it's not a move from another table, so INSERT/SELECT won't help. Currently, my bottleneck is the INSERT statements. I'm using PreparedStatement to speed up the process, but I can't get more than 50 records per second on a normal server. The table is not complicated at all, and there are no indexes defined on it. The process takes too long, and the time it takes will cause problems. What can I do to get the maximum speed (INSERTs per second) possible? Database: MS SQL 2008. Application: Java-based, using the Microsoft JDBC driver.
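
    The usual fix is to batch the inserts and commit in groups rather than statement by statement; with the Microsoft JDBC driver that means PreparedStatement.addBatch()/executeBatch() with auto-commit turned off. A sketch of the same pattern (shown with Python and pyodbc purely for illustration; the connection string and table are placeholders):

        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;DATABASE=mydb;"
            "UID=user;PWD=password"
        )
        conn.autocommit = False
        cur = conn.cursor()
        cur.fast_executemany = True        # send parameter arrays instead of row-by-row round trips

        rows = [(i, f"name-{i}") for i in range(1_000_000)]  # stand-in for the generated records
        sql = "INSERT INTO target_table (id, name) VALUES (?, ?)"
        batch = 10_000
        for start in range(0, len(rows), batch):
            cur.executemany(sql, rows[start:start + batch])
            conn.commit()                  # commit per batch, not per row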

    Read the article

  • Large strings: Text files or SQL DB?

    - by Tommo
    I am coding a forum system using PHP. I am currently storing a thread's ID, title, author, views and other attributes in an SQL database, and then storing the thread body (the HTML and BBCode) in text files inside a folder named after the thread ID. In practice it's really simple to grab the database values and then just grab the thread body from the text file, but I was wondering if this is the 'proper way'? I personally have no problems doing this, but if it turns out it is massively inefficient and I should instead store both the thread body HTML and BBCode in the database, then I will change. However, to me it seems wrong to store such a (very possibly) huge string of multi-line text along with lots of different characters in a database - I was taught that databases are more for short field 'values' rather than website content. I would just like a definitive answer to this because it's been bugging me for ages whether I've been doing it properly. Does anyone know how popular forum systems store threads?

    Read the article

  • Processing large (over 1 Gig) files in PHP using stream_filter_*

    - by mike
    $fp_src = fopen('file', 'r');
    $filter = stream_filter_prepend($fp_src, 'convert.iconv.ISO-8859-1/UTF-8');
    while (fread($fp_src, 4096)) {
        ++$count;
        if ($count % 1000 == 0) print ftell($fp_src)."\n";
    }
    When I run this, the script ends up consuming ~200 MB of RAM after going through just 35 MB of the file. Running it without the stream_filter zips right through with a constant memory footprint of ~10 MB. What gives?

    Read the article

  • Creating a multi-column rollover image gallery with HTML 5

    - by nikolaosk
    I know it has been a while since I blogged about HTML 5. I have two posts in this blog about HTML 5. You can find them here and here. I am creating a small content website (only text, images and a contact form) for a friend of mine. He wanted a rollover gallery. The whole concept is that we have some small thumbnails on a page; the user hovers over them and they appear enlarged in a designated container/placeholder on the page. I try not to use JavaScript for effects on a web page, and that is what I will be doing in this post. Well, some people will say that HTML 5 is not supported in all browsers. That is true, but most modern browsers support most of its recommendations. For people who still use IE 6 some hacks must be devised. To be totally honest, I cannot understand why anyone at this day and time is still using IE 6.0. That really is beyond me. The point of having a web browser is to be able to ENJOY the great experience that the web offers today. Two very nice sites that show you which features and specifications are implemented by various browsers and their versions are http://caniuse.com/ and http://html5test.com/. At this time Chrome seems to support most of the HTML 5 specification. Another excellent way to find out whether a browser supports HTML 5 and CSS 3 features is to use the lightweight JavaScript library Modernizr. In this hands-on example I will be using Expression Web 4.0. This application is not free, but you can use any HTML editor you like, for example Visual Studio 2012 Express edition, which you can download here. To be absolutely clear, this is not (and could not be) a detailed tutorial on HTML 5. There are other great resources for that. Navigate to the excellent interactive tutorials of W3Schools. Another excellent resource is HTML 5 Doctor. For the people who are not yet convinced that they should invest time and resources in becoming experts on HTML 5, I should point out that HTML 5 websites will be ranked higher than others: search engines will be able to better locate the content of our site and judge its relevance/importance, since it uses semantic tags. Let's move now to the actual hands-on example. In this case (since I am a mad Liverpool supporter) I will create a rollover image gallery of Liverpool F.C. legends. I create a folder on my desktop.
I name it Liverpool Gallery.Then I create two subfolders in it, large-images (I place the large images in there) and thumbs (I place the small images in there).Then I create an empty .html file called LiverpoolLegends.html and an empty .css file called style.css.Please have a look at the HTML Markup that I typed in my fancy editor package below<!doctype html><html lang="en"><head><title>Liverpool Legends Gallery</title><meta charset="utf-8"><link rel="stylesheet" type="text/css" href="style.css"></head><body><header><h1>A page dedicated to Liverpool Legends</h1><h2>Do hover over the images with the mouse to see the full picture</h2></header><ul id="column1"><li><a href="http://weblogs.asp.net/controlpanel/blogs/posteditor.aspx?SelectedNavItem=Posts§ionid=1153&postid=8927200#"><img src="thumbs/john-barnes.jpg" alt=""><img class="large" src="large-images/john-barnes-large.jpg" alt=""></a></li><li><a href="http://weblogs.asp.net/controlpanel/blogs/posteditor.aspx?SelectedNavItem=Posts§ionid=1153&postid=8927200#"><img src="thumbs/ian-rush.jpg" alt=""><img class="large" src="large-images/ian-rush-large.jpg" alt=""></a></li><li><a href="http://weblogs.asp.net/controlpanel/blogs/posteditor.aspx?SelectedNavItem=Posts§ionid=1153&postid=8927200#"><img src="thumbs/graeme-souness.jpg" alt=""><img class="large" src="large-images/graeme-souness-large.jpg" alt=""></a></li></ul><ul id="column2"><li><a href="http://weblogs.asp.net/controlpanel/blogs/posteditor.aspx?SelectedNavItem=Posts§ionid=1153&postid=8927200#"><img src="thumbs/steven-gerrard.jpg" alt=""><img class="large" src="large-images/steven-gerrard-large.jpg" alt=""></a></li><li><a href="http://weblogs.asp.net/controlpanel/blogs/posteditor.aspx?SelectedNavItem=Posts§ionid=1153&postid=8927200#"><img src="thumbs/kenny-dalglish.jpg" alt=""><img class="large" src="large-images/kenny-dalglish-large.jpg" alt=""></a></li><li><a href="http://weblogs.asp.net/controlpanel/blogs/posteditor.aspx?SelectedNavItem=Posts§ionid=1153&postid=8927200#"><img src="thumbs/robbie-fowler.jpg" alt=""><img class="large" src="large-images/robbie-fowler-large.jpg" alt=""></a></li></ul><ul id="column3"><li><a href="http://weblogs.asp.net/controlpanel/blogs/posteditor.aspx?SelectedNavItem=Posts§ionid=1153&postid=8927200#"><img src="thumbs/alan-hansen.jpg" alt=""><img class="large" src="large-images/alan-hansen-large.jpg" alt=""></a></li><li><a href="http://weblogs.asp.net/controlpanel/blogs/posteditor.aspx?SelectedNavItem=Posts§ionid=1153&postid=8927200#"><img src="thumbs/michael-owen.jpg" alt=""><img class="large" src="large-images/michael-owen-large.jpg" alt=""></a></li></ul></body></html> It is very easy to follow the markup. Please have a look at the new doctype and the new semantic tag <header>. 
I have 3 columns and I place my images in there. There is a class called "large". I will use this class in my CSS code to hide the large image when the mouse is not hovering over an image. Make sure you validate your HTML 5 page in the validator found here. Have a look at the CSS code below that makes it all happen.
img { border:none;}
#column1 { position: absolute; top: 30; left: 100; }
li { margin: 15px; list-style-type:none;}
#column1 a img.large { position: absolute; top: 0; left:700px; visibility: hidden;}
#column1 a:hover { background: white;}
#column1 a:hover img.large { visibility:visible;}
#column2 { position: absolute; top: 30; left: 195px; }
li { margin: 5px; list-style-type:none;}
#column2 a img.large { position: absolute; top: 0; left:510px; margin-left:0; visibility: hidden;}
#column2 a:hover { background: white;}
#column2 a:hover img.large { visibility:visible;}
#column3 { position: absolute; top: 30; left: 400px; width:108px;}
li { margin: 5px; list-style-type:none;}
#column3 a img.large { width: 260px; height:260px; position: absolute; top: 0; left:315px; margin-left:0; visibility: hidden;}
#column3 a:hover { background: white;}
#column3 a:hover img.large { visibility:visible;}
In the first line of the CSS code I set the images to have no border. Then I place the first column on the page and remove the bullets from the list elements. Then I use the large CSS class to position the large image and hide it. Finally, when the hover event takes place, I make the image visible. I repeat the process for the next two columns. I have tested the page with IE 10 and the latest versions of Opera, Chrome and Firefox. Feel free to style your HTML 5 gallery any way you want through the magic of CSS. I did not bother adding background colors and borders because that was beyond the scope of this post. Hope it helps!!!!

    Read the article

  • Genetic algorithms with large chromosomes

    - by Howie
    I'm trying to solve the graph partitioning problem on large graphs (between a billion and a trillion elements) using a GA. The problem is that even one chromosome will take several gigs of memory. Are there any general compression techniques for chromosome encoding? Or should I look into distributed GA? NOTE: using some sort of evolutionary algorithm for this problem is a must! GA seems to be the best fit (although not for such large chromosomes). EDIT: I'm looking for state-of-the-art methods that other authors have used to solve the problem of large chromosomes. Note that I'm looking for either a more general solution or a solution particular to graph partitioning. Basically I'm looking for related work, as I, too, am attempting to use a GA for the problem of graph partitioning. So far I haven't found anyone who has had this problem of large chromosomes, or who has tried to solve it.
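
    On the encoding side, one common trick is simply to pack the assignment: a 2-way partition needs only one bit per vertex, so a billion-element graph costs roughly 125 MB per chromosome instead of several gigabytes, and a k-way partition fits in ceil(log2 k) bits per vertex. A minimal sketch of that idea (assuming numpy is acceptable; fitness evaluation and distribution are out of scope):

        import numpy as np

        class PartitionChromosome:
            """2-way graph-partition assignment packed one bit per vertex.

            A billion vertices fits in ~125 MB, instead of the gigabytes that one
            Python object (or even one int) per gene would need.
            """
            def __init__(self, n_vertices, rng=None):
                self.n = n_vertices
                rng = rng or np.random.default_rng()
                n_bytes = (n_vertices + 7) // 8
                self.bits = rng.integers(0, 256, size=n_bytes, dtype=np.uint8)

            def side(self, v):
                # Which of the two parts vertex v belongs to (0 or 1).
                return int((self.bits[v >> 3] >> (v & 7)) & 1)

            def flip(self, v):
                # Point mutation: move vertex v to the other part.
                self.bits[v >> 3] ^= np.uint8(1 << (v & 7))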

    Read the article

  • How to elevate engineering culture at large corporations?

    - by davidk01
    One thing I have realized working at a large corporation is that it doesn't matter how smart you are, because if everyone else doesn't see the value in what you are doing then you are not going to get very far. It's much harder to convince 1000 people that a certain part of the software stack should be in Groovy than it is to convince 10 people of the same thing. I'm curious how people go about elevating the engineering culture at large corporations, because I've been running into walls left and right and I would like to be more proactive about how I go about it. I have been advocating tech talks and tech demos, along with code reviews, as potential solutions. Do people have other suggestions? Note that 1000 people and Groovy are just representative examples. I am not married to Groovy or any other language, and 1000 people is meant to indicate large scale and the question of how to go about teaching a large group of people about best practices and engineering principles in general.

    Read the article

  • What are the algorithms that are used for working with large data in popular web applications

    - by Moss Farmer
    I am looking for some well-known algorithms that can be considered while handling very large amounts of data (edit: by a large amount of data I refer to records in a database, excluding blobs). These algorithms, if not in totality then in parts, may be used in big web applications like Twitter, Last.fm, Amazon, etc. Specifically, I'm looking for names of, or links to, such algorithms. My primary interest lies in developing a very deep understanding of working with large database records and writing efficient code for working with them.

    Read the article

  • Solution to easily share large files with non-tech-savvy users?

    - by Tim
    Hey all, We've got a server setup at work which we'd like to use to exchange large files with known clients easily. We're looking into software to facilitate this, but somehow typing "large file hosting" into Google gives questionable results.. ;) We've come up with the following requirements, and I hope any of you can point us in the direction of a solution that offers this functionality, or is malleable to our needs.
    - Synchronization / revision management is of no concern; it's mostly single large (up to 1+ GB) file uploads & downloads we'll need.
    - We'd like to make the downloads expire & be removed after a certain number of days / downloads, to limit the amount of cleanup we'd have to do.
    - The data files exchanged sometimes hold confidential information, so the URLs generated should be random and not publicly visible.
    - Our users are of the less technically savvy variety, so a simple web form would be best over a desktop client (because we also have to support a mix of operating systems). As for use of the system, we'd either like to send out generated random URLs for them to upload their files, or have an easy way to manage & expire users.
    - Works on a Linux (Ubuntu) server (so nothing .NET-related please)
    Does anyone know of software that fits the above criteria? We've already seen a few instances of this within the scientific community, but nothing we could use directly.. Best regards, Tim

    Read the article

  • How to manage two separate testing teams using different test tracking tools

    - by newuser
    I have two independent testing teams currently testing the same application. One team is using ClearQuest, and the other is using Mantis. It has been a huge effort to manage all of the duplicate bug reports. What options would improve this situation? My constraint is that the ClearQuest team will not change test-reporting tools. A migration to ClearQuest also comes with a large training effort.

    Read the article

  • What editor/viewer to use to inspect large text based files?

    - by Turismo
    Are there any text editors/viewers (preferably on Windows, but other platforms are also OK) that can handle files of 500 MB or more? The editors I checked so far (Notepad++, Notepad, Eclipse) all choked on files of that size. Edit: Many thanks for the great suggestions. I tried gvim, as it was the top-voted suggestion and was available on Windows. I opened the file in a reasonable time. After that, scrolling and searching were very smooth as long as syntax highlighting was turned off. Of the other editors mentioned, TextPad and EmEditor both claim to handle large files very well. EmEditor seems to be built exactly for editing large files. I'll probably try both and report back.

    Read the article

  • large product image structured data and visibility

    - by Mark Resølved
    On an eCommerce site we have two images for a product: one medium-sized image shown at the top of the page, and one large photo shown on click in an overlay. We use http://schema.org/Product microdata on the page. We'd like the large, initially hidden, photo to be the main image for the product, as it's the better-looking one, so it's also referenced in the XML sitemap as <image:image>. So we also put the itemprop="image" attribute on the hidden large image. But I'm wondering: is it a bad idea to use a microdata attribute on a hidden style="display:none;" element? Is there a better way to embed the main image in terms of SEO, without showing it initially?

    Read the article

  • CPU Usage in Very Large Coherence Clusters

    - by jpurdy
    When sizing Coherence installations, one of the complicating factors is that these installations (by their very nature) tend to be application-specific, with some being large, memory-intensive caches, with others acting as I/O-intensive transaction-processing platforms, and still others performing CPU-intensive calculations across the data grid. Regardless of the primary resource requirements, Coherence sizing calculations are inherently empirical, in that there are so many permutations that a simple spreadsheet approach to sizing is rarely optimal (though it can provide a good starting estimate). So we typically recommend measuring actual resource usage (primarily CPU cycles, network bandwidth and memory) at a given load, and then extrapolating from those measurements. Of course there may be multiple types of load, and these may have varying degrees of correlation -- for example, an increased request rate may drive up the number of objects "pinned" in memory at any point, but the increase may be less than linear if those objects are naturally shared by concurrent requests. But for most reasonably-designed applications, a linear resource model will be reasonably accurate for most levels of scale. However, at extreme scale, sizing becomes a bit more complicated as certain cluster management operations -- while very infrequent -- become increasingly critical. This is because certain operations do not naturally tend to scale out. In a small cluster, sizing is primarily driven by the request rate, required cache size, or other application-driven metrics. In larger clusters (e.g. those with hundreds of cluster members), certain infrastructure tasks become intensive, in particular those related to members joining and leaving the cluster, such as introducing new cluster members to the rest of the cluster, or publishing the location of partitions during rebalancing. These tasks have a strong tendency to require all updates to be routed via a single member for the sake of cluster stability and data integrity. Fortunately that member is dynamically assigned in Coherence, so it is not a single point of failure, but it may still become a single point of bottleneck (until the cluster finishes its reconfiguration, at which point this member will have a similar load to the rest of the members). The most common cause of scaling issues in large clusters is disabling multicast (by configuring well-known addresses, aka WKA). This obviously impacts network usage, but it also has a large impact on CPU usage, primarily since the senior member must directly communicate certain messages with every other cluster member, and this communication requires significant CPU time. In particular, the need to notify the rest of the cluster about membership changes and corresponding partition reassignments adds stress to the senior member. Given that portions of the network stack may tend to be single-threaded (both in Coherence and the underlying OS), this may be even more problematic on servers with poor single-threaded performance. As a result of this, some extremely large clusters may be configured with a smaller number of partitions than ideal. This results in the size of each partition being increased. When a cache server fails, the other servers will use their fractional backups to recover the state of that server (and take over responsibility for their backed-up portion of that state). 
The finest granularity of this recovery is a single partition, and the single service thread can not accept new requests during this recovery. Ordinarily, recovery is practically instantaneous (it is roughly equivalent to the time required to iterate over a set of backup backing map entries and move them to the primary backing map in the same JVM). But certain factors can increase this duration drastically (to several seconds): large partitions, sufficiently slow single-threaded CPU performance, many or expensive indexes to rebuild, etc. The solution of course is to mitigate each of those factors but in many cases this may be challenging. Larger clusters also lead to the temptation to place more load on the available hardware resources, spreading CPU resources thin. As an example, while we've long been aware of how garbage collection can cause significant pauses, it usually isn't viewed as a major consumer of CPU (in terms of overall system throughput). Typically, the use of a concurrent collector allows greater responsiveness by minimizing pause times, at the cost of reducing system throughput. However, at a recent engagement, we were forced to turn off the concurrent collector and use a traditional parallel "stop the world" collector to reduce CPU usage to an acceptable level. In summary, there are some less obvious factors that may result in excessive CPU consumption in a larger cluster, so it is even more critical to test at full scale, even though allocating sufficient hardware may often be much more difficult for these large clusters.

    Read the article

  • How to create a large level game?

    - by Siddharth
    I want to know how to create a large game which has more than one level in it, with the levels loaded from an XML file. In my game I have many objects for each level, which I have to load when the user clicks on it. For example, my game currently contains 20 levels, and right now I load the graphics for all 20 levels up front. But the correct way would be to load only the graphics for the particular level being played, and I don't know how to do that, so please explain it with a game example. At present I create a class for each of my game object images by extending Sprite. I know this is not a suitable way, so please provide guidance on it. Basically I want to know how to create large games in AndEngine. Please help with this, as it will also help other community members, because AndEngine does not have proper documentation for developers learning how to manage a large game.
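
    The usual pattern, regardless of engine, is to parse only the selected level's XML and load its textures when the player picks it, releasing the previous level's assets first; in AndEngine that roughly corresponds to loading a level's texture atlases when its scene is set up and unloading them when the level is left. A language-neutral sketch of the bookkeeping (the file layout, tag names and loader below are hypothetical):

        import xml.etree.ElementTree as ET

        def load_texture(path):
            # Stand-in for the engine's texture loader; here it just reads the bytes.
            with open(path, "rb") as fh:
                return fh.read()

        class LevelManager:
            """Load one level's assets on demand, dropping the previous level's first."""
            def __init__(self, level_dir):
                self.level_dir = level_dir
                self.level_id = None
                self.assets = {}

            def switch_to(self, level_id):
                self.assets.clear()  # let the old level's textures be freed
                tree = ET.parse(f"{self.level_dir}/level{level_id}.xml")
                for obj in tree.getroot().iter("object"):
                    texture = obj.get("texture")
                    if texture and texture not in self.assets:
                        self.assets[texture] = load_texture(f"{self.level_dir}/{texture}")
                self.level_id = level_id
                return self.assets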

    Read the article

  • How to gain understanding of large systems? [closed]

    - by vonolsson
    Possible Duplicate: How do you dive into large code bases? I have worked as a developer on C/C++ applications for mobile platforms (Windows Mobile and Symbian) for about six years. About a year ago, however, I changed jobs and currently work with large(!) enterprise systems with high security and availability requirements, all developed in Java. My problem is that I am having a hard time getting a grip on the architecture of the systems, even after a year working with them, and understanding systems other people have built has never been my strong suit. The fact that I haven't worked with enterprise systems before doesn't exactly help. Does anyone have a good approach for learning and understanding large systems? Are there any particular techniques and/or patterns I should read up on?

    Read the article
