Search Results

Search found 10417 results on 417 pages for 'large'.

Page 86/417 | < Previous Page | 82 83 84 85 86 87 88 89 90 91 92 93  | Next Page >

  • Passing objects by reference or not in C#

    - by Piku
    Suppose I have a class like this:

        public class ThingManager {
            List<SomeClass> ItemList;

            public void AddToList(SomeClass Item) {
                ItemList.Add(Item);
            }

            public void ProcessListItems() {
                // go through list one item at a time, get item from list,
                // modify item according to class' purpose
            }
        }

    Assume "SomeClass" is a fairly large class containing methods and members that are quite complex (List<>s and arrays, for example) and that there may be a large quantity of them, so not copying vast amounts of data around the program is important. Should the "AddToList" method have "ref" in it or not? And why? It's like trying to learn pointers in C all over again ;-) (which is probably why I am getting confused, I'm trying to relate these to pointers. In C it'd be "SomeClass *Item" and a list of "SomeClass *" variables.)
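
    A minimal sketch of the distinction (the Value field is illustrative, not from the question): class instances in C# are reference types, so passing one to AddToList copies only the reference, never the object's data; ref would only matter if the method needed to reassign the caller's variable itself.

        using System;
        using System.Collections.Generic;

        public class SomeClass { public int Value; }

        public class ThingManager
        {
            private List<SomeClass> ItemList = new List<SomeClass>();

            // No "ref" needed: only the reference (a pointer-sized value) is copied.
            public void AddToList(SomeClass item) { ItemList.Add(item); }

            public void ProcessListItems()
            {
                foreach (SomeClass item in ItemList) item.Value += 1; // mutates the shared objects
            }
        }

        public static class Demo
        {
            public static void Main()
            {
                var manager = new ThingManager();
                var thing = new SomeClass { Value = 41 };
                manager.AddToList(thing);       // no copy of the object's data is made
                manager.ProcessListItems();
                Console.WriteLine(thing.Value); // prints 42: the list holds the same object
            }
        }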

    Read the article

  • Fastest way to zero out a 2d array in C?

    - by Eddy
    I want to repeatedly zero a large 2D array in C. This is what I do at the moment:

        for (j = 0; j < n; j++) {
            for (i = 0; i < n; i++) {
                array[i][j] = 0;
            }
        }

    I've tried using memset:

        memset(array, 0, sizeof(array));

    But this only works for 1D arrays. When I printf the contents of the 2D array, the first row is zeroes, but then I get a load of random large numbers and it crashes.
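
    A minimal sketch of the usual resolution (assuming the distinction below matches the actual declaration): a true T[n][n] is one contiguous block, so a single memset clears it; the crash pattern described is typical when array is really a T** with separately allocated rows (or a decayed function parameter), which must be cleared row by row.

        #include <string.h>

        #define N 1000
        static double array2d[N][N];   /* contiguous: N*N doubles in one block */

        void zero_2d(void)
        {
            /* sizeof array2d is the size of the whole block, so one memset suffices */
            memset(array2d, 0, sizeof array2d);
        }

        /* If instead the rows were allocated separately (double **rows), each row
           is its own block and must be cleared individually: */
        void zero_rows(double **rows, size_t n)
        {
            for (size_t i = 0; i < n; i++)
                memset(rows[i], 0, n * sizeof rows[i][0]);
        }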

    Read the article

  • Google Translation API not working for even one page long documents

    - by Saubhagya
    I'm using the Google Translation API to translate text from Chinese Simplified to English in my C# program. The problem is that if the text is small (around one line) the API is able to translate it, but if the text is larger (more than 3 lines) it gives an exception saying "The remote server returned an unexpected response: (414) Request-URI Too Large.". However, if I use translate.google.com in my browser it works fine. Please tell me how I can process large documents using the Google Translate API in my desktop application written in C#.
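
    A sketch of one common workaround, heavily hedged: the endpoint and parameter names below are from the old AJAX Language API and may not match the version in use. A 414 means the text is going out in a GET query string, so either split the document into chunks under the URL limit, or send it as a POST body, which that API accepted when the method-override header was set.

        using System;
        using System.Collections.Specialized;
        using System.Net;
        using System.Text;

        class TranslateSketch
        {
            static string Translate(string text)
            {
                using (var client = new WebClient())
                {
                    client.Encoding = Encoding.UTF8;
                    // POST instead of GET avoids the 414 URI-length limit.
                    // (Assumption: old AJAX Language API behavior and endpoint.)
                    client.Headers["X-HTTP-Method-Override"] = "GET";
                    var data = new NameValueCollection
                    {
                        { "v", "1.0" },
                        { "q", text },
                        { "langpair", "zh-CN|en" }
                    };
                    byte[] response = client.UploadValues(
                        "https://ajax.googleapis.com/ajax/services/language/translate", data);
                    return Encoding.UTF8.GetString(response); // JSON; parse translatedText out of it
                }
            }
        }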

    Read the article

  • How do you convert a parent-child (adjacency) table to a nested set using PHP and MySQL?

    - by mrbinky3000
    I've spent the last few hours trying to find the solution to this question online. I've found plenty of examples of how to convert from nested set to adjacency... but few that go the other way around. The examples I have found either don't work or use MySQL procedures. Unfortunately, I can't use procedures for this project; I need a pure PHP solution. I have a table that uses the adjacency model below:

        id  parent_id  category
        1   0          ROOT_NODE
        2   1          Books
        3   1          CD's
        4   1          Magazines
        5   2          Books/Hardcover
        6   2          Books/Large Format
        7   4          Magazines/Vintage

    And I would like to convert it to the nested set table below:

        id  left  right  category
        1   1     14     Root Node
        2   2     7      Books
        3   3     4      Books/Hardcover
        4   5     6      Books/Large Format
        5   8     9      CD's
        6   10    13     Magazines
        7   11    12     Magazines/Vintage

    I have a function, based on the pseudo-code from this forum post (http://www.sitepoint.com/forums/showthread.php?t=320444), but it doesn't work: I get multiple rows that have the same value for left, which should not happen.

        <?php
        /**
        -- Table structure for table `adjacent_table`
        CREATE TABLE IF NOT EXISTS `adjacent_table` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `father_id` int(11) DEFAULT NULL,
          `category` varchar(128) DEFAULT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=8 ;

        -- Dumping data for table `adjacent_table`
        INSERT INTO `adjacent_table` (`id`, `father_id`, `category`) VALUES
        (1, 0, 'ROOT'),
        (2, 1, 'Books'),
        (3, 1, 'CD''s'),
        (4, 1, 'Magazines'),
        (5, 2, 'Hard Cover'),
        (6, 2, 'Large Format'),
        (7, 4, 'Vintage');

        -- Table structure for table `nested_table`
        CREATE TABLE IF NOT EXISTS `nested_table` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `lft` int(11) DEFAULT NULL,
          `rgt` int(11) DEFAULT NULL,
          `category` varchar(128) DEFAULT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ;
        */

        mysql_connect('localhost', 'USER', 'PASSWORD') or die(mysql_error());
        mysql_select_db('DATABASE') or die(mysql_error());

        adjacent_to_nested(0);

        /**
         * adjacent_to_nested
         *
         * Reads an "adjacency model" table and converts it to a "nested set" table.
         * @param integer $i_id   Should be the id of the "root node" in the adjacency table
         * @param integer $i_left Should only be used on recursive calls. Holds the current value for lft
         */
        function adjacent_to_nested($i_id, $i_left = 0)
        {
            // the right value of this node is the left value + 1
            $i_right = $i_left + 1;

            // get all children of this node
            $a_children = get_source_children($i_id);
            foreach ($a_children as $a) {
                // recursive execution of this function for each child of this node;
                // $i_right is the current right value, which is incremented by the
                // recursive call
                $i_right = adjacent_to_nested($a['id'], $i_right);

                // insert stuff into our new "nested sets" table
                $s_query = "
                    INSERT INTO `nested_table`
                    (`id`, `lft`, `rgt`, `category`)
                    VALUES (
                        NULL,
                        '" . $i_left . "',
                        '" . $i_right . "',
                        '" . mysql_real_escape_string($a['category']) . "'
                    )";
                if (!mysql_query($s_query)) {
                    echo "<pre>$s_query</pre>\n";
                    throw new Exception(mysql_error());
                }
                echo "<p>$s_query</p>\n";

                // get the newly created row id
                $i_new_nested_id = mysql_insert_id();
            }
            return $i_right + 1;
        }

        /**
         * get_source_children
         *
         * Examines the "adjacency" table and finds all the immediate children of a node
         * @param integer $i_id The unique id for a node in the adjacent_table table
         * @return array Returns an array of results or an empty array if no results.
         */
        function get_source_children($i_id)
        {
            $a_return = array();
            $s_query = "SELECT * FROM `adjacent_table` WHERE `father_id` = '" . $i_id . "'";
            if (!$i_result = mysql_query($s_query)) {
                echo "<pre>$s_query</pre>\n";
                throw new Exception(mysql_error());
            }
            if (mysql_num_rows($i_result) > 0) {
                while ($a = mysql_fetch_assoc($i_result)) {
                    $a_return[] = $a;
                }
            }
            return $a_return;
        }
        ?>

    This is the output of the above script:

        INSERT INTO nested_table (id, lft, rgt, category) VALUES ( NULL, '2', '5', 'Hard Cover' )
        INSERT INTO nested_table (id, lft, rgt, category) VALUES ( NULL, '2', '7', 'Large Format' )
        INSERT INTO nested_table (id, lft, rgt, category) VALUES ( NULL, '1', '8', 'Books' )
        INSERT INTO nested_table (id, lft, rgt, category) VALUES ( NULL, '1', '10', 'CD\'s' )
        INSERT INTO nested_table (id, lft, rgt, category) VALUES ( NULL, '10', '13', 'Vintage' )
        INSERT INTO nested_table (id, lft, rgt, category) VALUES ( NULL, '1', '14', 'Magazines' )
        INSERT INTO nested_table (id, lft, rgt, category) VALUES ( NULL, '0', '15', 'ROOT' )

    As you can see, there are multiple rows sharing the lft value of '1', and the same goes for '2'. In a nested set, the values for left and right must be unique.

    UPDATE - PROBLEM SOLVED

    First off, I had mistakenly believed that the source table (the one in adjacency-list format) needed to be altered to include a root node. This is not the case. Secondly, I found a cached page on BING (of all places) with a class that does the trick. I've altered it for PHP5 and converted the original author's MySQL-related bits to basic PHP (he was using some DB class); you can convert them to your own database abstraction class later if you want. Obviously, if your "source table" has other columns that you want to move to the nested set table, you will have to adjust the write method in the class below. Hopefully this will save someone else from the same problems in the future.

        <?php
        /**
        -- Table structure for table `adjacent_table`
        DROP TABLE IF EXISTS `adjacent_table`;
        CREATE TABLE IF NOT EXISTS `adjacent_table` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `father_id` int(11) DEFAULT NULL,
          `category` varchar(128) DEFAULT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=8 ;

        -- Dumping data for table `adjacent_table`
        INSERT INTO `adjacent_table` (`id`, `father_id`, `category`) VALUES
        (1, 0, 'Books'),
        (2, 0, 'CD''s'),
        (3, 0, 'Magazines'),
        (4, 1, 'Hard Cover'),
        (5, 1, 'Large Format'),
        (6, 3, 'Vintage');

        -- Table structure for table `nested_table`
        DROP TABLE IF EXISTS `nested_table`;
        CREATE TABLE IF NOT EXISTS `nested_table` (
          `lft` int(11) NOT NULL DEFAULT '0',
          `rgt` int(11) DEFAULT NULL,
          `id` int(11) DEFAULT NULL,
          `category` varchar(128) DEFAULT NULL,
          PRIMARY KEY (`lft`),
          UNIQUE KEY `id` (`id`),
          UNIQUE KEY `rgt` (`rgt`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1;
        */

        /**
         * @class   tree_transformer
         * @author  Paul Houle, Matthew Toledo
         * @created 2008-11-04
         * @url     http://gen5.info/q/2008/11/04/nested-sets-php-verb-objects-and-noun-objects/
         */
        class tree_transformer
        {
            private $i_count;
            private $a_link;

            public function __construct($a_link)
            {
                if (!is_array($a_link)) {
                    throw new Exception("First parameter should be an array. Instead, it was type '" . gettype($a_link) . "'");
                }
                $this->i_count = 1;
                $this->a_link = $a_link;
            }

            public function traverse($i_id)
            {
                $i_lft = $this->i_count;
                $this->i_count++;
                $a_kid = $this->get_children($i_id);
                if ($a_kid) {
                    foreach ($a_kid as $a_child) {
                        $this->traverse($a_child);
                    }
                }
                $i_rgt = $this->i_count;
                $this->i_count++;
                $this->write($i_lft, $i_rgt, $i_id);
            }

            private function get_children($i_id)
            {
                return $this->a_link[$i_id];
            }

            private function write($i_lft, $i_rgt, $i_id)
            {
                // fetch the source row
                $s_query = "SELECT * FROM `adjacent_table` WHERE `id` = '" . $i_id . "'";
                if (!$i_result = mysql_query($s_query)) {
                    echo "<pre>$s_query</pre>\n";
                    throw new Exception(mysql_error());
                }
                $a_source = array();
                if (mysql_num_rows($i_result)) {
                    $a_source = mysql_fetch_assoc($i_result);
                }

                // root node? label it unless already labeled in the source table
                if (1 == $i_lft && empty($a_source['category'])) {
                    $a_source['category'] = 'ROOT';
                }

                // insert into the new nested tree table;
                // use mysql_real_escape_string because one value, "CD's", has a single quote
                $s_query = "
                    INSERT INTO `nested_table`
                    (`id`, `lft`, `rgt`, `category`)
                    VALUES (
                        '" . $i_id . "',
                        '" . $i_lft . "',
                        '" . $i_rgt . "',
                        '" . mysql_real_escape_string($a_source['category']) . "'
                    )";
                if (!$i_result = mysql_query($s_query)) {
                    echo "<pre>$s_query</pre>\n";
                    throw new Exception(mysql_error());
                } else {
                    // success: provide feedback
                    echo "<p>$s_query</p>\n";
                }
            }
        }

        mysql_connect('localhost', 'USER', 'PASSWORD') or die(mysql_error());
        mysql_select_db('DATABASE') or die(mysql_error());

        // build a complete copy of the adjacency table in RAM
        $s_query = "SELECT `id`, `father_id` FROM `adjacent_table`";
        $i_result = mysql_query($s_query);
        $a_rows = array();
        while ($a_rows[] = mysql_fetch_assoc($i_result));

        $a_link = array();
        foreach ($a_rows as $a_row) {
            $i_father_id = $a_row['father_id'];
            $i_child_id = $a_row['id'];
            if (!array_key_exists($i_father_id, $a_link)) {
                $a_link[$i_father_id] = array();
            }
            $a_link[$i_father_id][] = $i_child_id;
        }

        $o_tree_transformer = new tree_transformer($a_link);
        $o_tree_transformer->traverse(0);
        ?>

    Read the article

  • Is there a language designed for code golf?

    - by J S
    I am not really a fan of code golf, but I have to wonder: is there an esoteric language designed for it? I mean a language with the following properties:

    - Common programs may be expressed in a very small number of characters
    - It uses the ASCII character set effectively (for example, common operators are not identifiers, so they don't have to be separated by whitespace; character usage is distributed more or less evenly because we cannot use Huffman coding; and so on)
    - Apart from the terse syntax, it should have very expressive and clean semantics (like, let's say, Python or Scheme); it shouldn't be difficult to program in
    - It doesn't need features for large-scale programs, such as OOP, but it definitely should allow custom functions and data structures
    - It should have a large standard library, and identifiers in this library should be as short as possible

    Maybe it should be called CG? Languages that could be a source of inspiration are Forth, APL and Joy.

    Read the article

  • Who wrote 250k tests for WebKit?

    - by amwinter
    Assuming a yield of 3 per hour, that's 83,000 hours. 8 hours a day makes 10,500 days; divide by thirty to get 350 mythical man-months. I call them mythical because writing 125 tests per person per week is unreal. Can any wise soul out there on SO shed some light on what sort of mythical men write unreal quantities of tests for large software projects? Thank you.

    Update: chrisw thinks there are only 20k tests (check out his explanation below).

    PS: I'd really like to hear from folks who have worked on projects with large test bases.

    Read the article

  • Can Crystal Reports Scale to Fit Page

    - by Jacob Reyes
    Hello, can a Crystal Report be scaled to fit the page? I'm hoping to achieve something similar to Microsoft Excel's Scale to Fit feature, wherein a large spreadsheet can be scaled to fit an 8.5"x11" page. (In MS Excel 2007, go to Page Layout > Scale to Fit.) I'm searching for a way to make a large report fit onto a smaller page during printing. For example, a report designed for a Legal (8.5"x14") page must be able to shrink when print-previewed for a Letter (8.5"x11") page. In my Crystal Report, it should be scaled to fit the page by default. I was thinking maybe there's a Crystal Reports setting or C# code technique that I missed. Any hint or link in the right direction is appreciated. Thanks!

    Read the article

  • What's the fastest way to strip and replace a document of high unicode characters using Python?

    - by Rhubarb
    I am looking to replace, throughout a large document, all high Unicode characters, such as accented Es and left and right quotes, with "normal" counterparts in the low range, such as a regular 'E' and straight quotes. I need to perform this on a very large document rather often. I see an example of this in what I think might be Perl here: http://www.designmeme.com/mtplugins/lowdown.txt Is there a fast way of doing this in Python without using s.replace(...).replace(...).replace(...)...? I've tried this with just a few characters to replace, and stripping the document became really slow.
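
    A minimal sketch of the usual fast approaches (table entries are illustrative): build the replacements once and apply them in a single pass with translate, then fall back to NFKD decomposition for anything unmapped, such as accented letters.

        import unicodedata

        # One-pass replacement table: code point -> replacement string.
        REPLACEMENTS = {
            0x2018: "'", 0x2019: "'",   # left/right single quotes
            0x201C: '"', 0x201D: '"',   # left/right double quotes
            0x2013: '-', 0x2014: '-',   # en/em dash
        }

        def lowdown(text):
            # First apply the explicit table in a single pass...
            text = text.translate(REPLACEMENTS)
            # ...then strip remaining accents: NFKD splits 'e-acute' into 'e'
            # plus a combining mark, and the ascii round-trip drops the marks.
            text = unicodedata.normalize('NFKD', text)
            return text.encode('ascii', 'ignore').decode('ascii')

        print(lowdown(u"\u201CVoil\u00E0!\u201D \u2014 caf\u00E9"))  # "Voila!" - cafe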

    Read the article

  • How do I delete the 32k errored document?

    - by Ramkumar
    I have some documents. If I try to open one of them, it shows an error like "field is too large 32k or view's column & selection formulas are too large". Whenever I try to delete the document, I get the same error; I am not able to delete it. We can try to get at the document via the back end, but there I cannot get a handle on the document: whatever search I try, the document collection count is 0. Important: I am using Notes 6.5.2. Thanks in advance.

    Read the article

  • Working effectively unit tests / Anyone tried the in-assembly approach?

    - by CodingCrapper
    I'm trying to re-introduce unit testing to my team, as our current coverage is very poor. Our system is quite large: 40+ projects/assemblies. We currently use a project named [SystemName].Test.csproj where all the test code is dumped, organised into folders that mirror the namespaces. This approach is not very scalable and makes it difficult to find tests. I've been thinking about adding a Tests folder to each project; this would put the unit tests "in the developer's face" and make them easy to find. The downside is that the production release code would contain references to NUnit and NMock, as well as the test code and test data... Has anyone tried this approach? How is everyone else working with unit tests on large projects? Having a Tests project per "real" project/assembly would introduce too many new projects. Thanks in advance.
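
    A sketch of one way to keep the tests in each project without shipping them (assumption: MSBuild-style .csproj, illustrative paths): compile the Tests folder and the NUnit reference only in Debug builds, so Release binaries carry neither the test code nor the test dependencies.

        <!-- Inside each .csproj: include tests and NUnit only for Debug builds -->
        <ItemGroup Condition=" '$(Configuration)' == 'Debug' ">
          <Compile Include="Tests\**\*.cs" />
          <Reference Include="nunit.framework">
            <HintPath>..\lib\nunit.framework.dll</HintPath>
          </Reference>
        </ItemGroup>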

    Read the article

  • Overlapping images w/ image maps obstructing each other

    - by Hamster
    Information: the images have large transparent sections, so each must be overlapped to create the needed effect. Specifically, the clickable portions of each image are in weird trapezoid shapes meant to be pressed up against each other. The images have image maps, with large portions being overlapped by the transparent portions of other nearby (trapezoid) images. I don't expect any change in z-indexes will solve this... Combining the image files into a single larger one overlaid with an image map for each section seems less than ideal, especially since I may need to re-order or rename them later and such. Never mind hover animations and other possibilities down the road. What would be the best workaround?
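
    A sketch of one workaround that keeps the image files separate (coordinates and file names are illustrative; assumes absolute positioning): make the trapezoid images purely visual, and put a single transparent image on top that carries the only image map. Poly-shaped areas then define the exact trapezoid hotspots, so nothing underneath needs its own map or z-index juggling.

        <div style="position:relative; width:240px; height:80px;">
          <!-- visual layers: the trapezoid images, no maps of their own -->
          <img src="piece-a.png" style="position:absolute; left:0;    top:0;">
          <img src="piece-b.png" style="position:absolute; left:90px; top:0;">
          <!-- one transparent image on top owns all the hotspots -->
          <img src="transparent.png" usemap="#hotspots"
               style="position:absolute; left:0; top:0; width:240px; height:80px;">
        </div>
        <map name="hotspots">
          <!-- coords are x,y corner pairs tracing each trapezoid -->
          <area shape="poly" coords="0,0, 120,0, 90,80, 30,80"    href="#a" alt="A">
          <area shape="poly" coords="120,0, 240,0, 210,80, 90,80" href="#b" alt="B">
        </map>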

    Read the article

  • Adding a new column to Table which contains live data

    - by Ardman
    I have a large table consisting of over 60 million records, and I would like to add 2 new columns for data migration purposes. There are indexes on the table and some of them are large. So, by adding the 2 new columns to the table, will I run the risk of slowing down the database whilst it attempts to add them, and maybe timing out? Or will it just work? I know that if I try to rearrange the columns SQL Server will ask me to drop and re-create the table, so I definitely don't want that. Is this something everyone is challenged with?
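
    A sketch of the low-risk form (table and column names are illustrative): on SQL Server, adding nullable columns with no default is a metadata-only change, so the 60 million rows are not rewritten; it is a NOT NULL column with a default that forces a long, blocking update on older versions.

        -- Metadata-only on SQL Server: existing rows are not touched.
        ALTER TABLE dbo.BigTable
            ADD MigrationStatus int NULL,
                MigratedAt datetime NULL;

        -- By contrast, this rewrites every row on older versions and can block for a long time:
        -- ALTER TABLE dbo.BigTable ADD MigrationStatus int NOT NULL DEFAULT 0;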

    Read the article

  • CUDA small kernel 2d convolution - how to do it

    - by paulAl
    I've been experimenting with CUDA kernels for days to perform a fast 2D convolution between a 500x500 image (but I could also vary the dimensions) and a very small 2D kernel (a Laplacian 2D kernel, so it's a 3x3 kernel... too small to take huge advantage of all the CUDA threads). I created a classic CPU implementation (two for loops, as easy as you would think) and then I started creating CUDA kernels. After a few disappointing attempts to perform a faster convolution, I ended up with this code: http://www.evl.uic.edu/sjames/cs525/final.html (see the Shared Memory section). It basically lets a 16x16 thread block load all the convolution data it needs into shared memory and then performs the convolution. Nothing: the CPU is still a lot faster. I didn't try the FFT approach because the CUDA SDK states that it is efficient with large kernel sizes. Whether or not you read everything I wrote, my question is: how can I perform a fast 2D convolution between a relatively large image and a very small kernel (3x3) with CUDA?
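
    A minimal sketch of the shape such a kernel usually takes for a 3x3 filter (illustrative, not tuned): keep the nine coefficients in __constant__ memory (uploaded once with cudaMemcpyToSymbol), give each thread one output pixel, and make sure the image is transferred to the device once and reused; for a 500x500 image the PCIe copy easily dominates the arithmetic.

        #include <cuda_runtime.h>

        __constant__ float d_kernel[9];   // 3x3 Laplacian coefficients, uploaded once

        __global__ void conv3x3(const float *in, float *out, int width, int height)
        {
            int x = blockIdx.x * blockDim.x + threadIdx.x;
            int y = blockIdx.y * blockDim.y + threadIdx.y;
            if (x >= width || y >= height) return;

            float acc = 0.0f;
            for (int ky = -1; ky <= 1; ky++) {
                for (int kx = -1; kx <= 1; kx++) {
                    // clamp-to-edge addressing at the borders
                    int ix = min(max(x + kx, 0), width - 1);
                    int iy = min(max(y + ky, 0), height - 1);
                    acc += in[iy * width + ix] * d_kernel[(ky + 1) * 3 + (kx + 1)];
                }
            }
            out[y * width + x] = acc;
        }

        void launch(const float *d_in, float *d_out, int width, int height)
        {
            dim3 block(16, 16);
            dim3 grid((width + 15) / 16, (height + 15) / 16);
            conv3x3<<<grid, block>>>(d_in, d_out, width, height);
        }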

    Read the article

  • Love coding but offered a server/network job -- any advice?

    - by Pete
    I really enjoy software development. I've done it for going on 3 years now full-time for a small company and still find it interesting and exciting. I haven't had much server/network experience but have an opportunity to work for a large IT company dealing with server setups, configurations, maintenance and some networking work as well. The thing is, I'm not sure whether to accept. If I were to take this, it would have relatively little if any coding and I'm guessing would start me down a career path away from coding. The only thing is the company is large enough and has a coding division so I guess in a few years I could transition back to the software side of things if I wanted, but I'm just not sure whether I would enjoy the server/network side of things. Any advice is greatly appreciated. Especially if you have had a similar situation occur. Thanks!

    Read the article

  • Log with timestamps that have millisecond accuracy & resolution in Windows C++

    - by Psychic
    I'm aware that for timing accuracy, functions like timeGetTime, timeBeginPeriod, QueryPerformanceCounter, etc. are great, giving both good resolution and accuracy, but only based on time-since-boot, with no direct link to clock time. However, I don't want to time events as such. I want to be able to produce an exact timestamp (local time) so that I can display it in a log file, e.g. 31-12-2010 12:38:35.345, for each entry made. (I need the millisecond accuracy.) The standard Windows time functions, like GetLocalTime, whilst they give millisecond values, don't have millisecond resolution, depending on the OS running. I'm using XP, so I can't expect much better than about 15ms resolution. What I need is a way to get the best of both worlds, without creating a large overhead to get the required output. Overly large methods/calculations would mean that the logger would start to eat up too much time during its operation. What would be the best/simplest way to do this?
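
    A sketch of the usual hybrid (assumption: a single calibration at startup is acceptable; the counter drifts from the system clock over long runs, so recalibrate periodically if the logger runs for days): capture the wall clock and a QueryPerformanceCounter reading once, then derive each log timestamp from the counter delta, which costs one QPC call per log line.

        #include <windows.h>
        #include <stdio.h>

        static LARGE_INTEGER g_freq, g_start;   // QPC frequency and calibration point
        static ULARGE_INTEGER g_startFiletime;  // wall clock at calibration (100ns units)

        void init_log_clock(void)
        {
            FILETIME ft;
            QueryPerformanceFrequency(&g_freq);
            GetSystemTimeAsFileTime(&ft);       // wall clock (UTC), coarse resolution
            QueryPerformanceCounter(&g_start);  // fine-grained counter, same instant
            g_startFiletime.LowPart  = ft.dwLowDateTime;
            g_startFiletime.HighPart = ft.dwHighDateTime;
        }

        void format_timestamp(char *buf, size_t len)
        {
            LARGE_INTEGER now;
            QueryPerformanceCounter(&now);
            ULONGLONG delta = now.QuadPart - g_start.QuadPart;
            // split the conversion to 100ns units to avoid 64-bit overflow
            ULONGLONG elapsed100ns = (delta / g_freq.QuadPart) * 10000000ULL
                                   + (delta % g_freq.QuadPart) * 10000000ULL / g_freq.QuadPart;

            ULARGE_INTEGER t = g_startFiletime;
            t.QuadPart += elapsed100ns;         // wall clock = calibration + counter delta

            FILETIME ft, local;
            SYSTEMTIME st;
            ft.dwLowDateTime  = t.LowPart;
            ft.dwHighDateTime = t.HighPart;
            FileTimeToLocalFileTime(&ft, &local);
            FileTimeToSystemTime(&local, &st);
            _snprintf(buf, len, "%02d-%02d-%04d %02d:%02d:%02d.%03d",
                      st.wDay, st.wMonth, st.wYear,
                      st.wHour, st.wMinute, st.wSecond, st.wMilliseconds);
        }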

    Read the article

  • Rails - Active Record :conditions overrides :select

    - by Nick
    I have a fairly large model and I want to retrieve only a select set of fields for each record in order to keep the JSON string I am building small. Using :select with find works great, but my key goal is to use conditional logic with an associated model. Is the only way to do this really with a lambda in a named scope? I'm dreading that, perhaps unnecessarily, but I'd like to understand if there is a way to make the :select work with a condition. This works:

        @sites = Site.find :all, :select => 'id,foo,bar'

    When I try this:

        @sites = Site.find :all, :select => 'id,foo,bar',
          :include => [:relatedmodel],
          :conditions => ["relatedmodel.type in (?)", params[:filters]]

    The condition works, but each record includes all of the Site attributes, which makes my JSON string way too large. Thanks for any pointers!
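
    A sketch of the usual escape hatch (names taken from the question; the table name in :conditions must match the actual table): :include triggers eager loading, which builds its own SELECT and discards yours, whereas :joins adds the JOIN for filtering only, so the :select survives.

        # :joins filters on the association without eager-loading it,
        # so :select is honored and the JSON stays small.
        @sites = Site.find :all,
          :select     => 'sites.id, sites.foo, sites.bar',
          :joins      => :relatedmodel,
          :conditions => ["relatedmodel.type IN (?)", params[:filters]]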

    Read the article

  • Why does the parent page get refreshed when I click the link to open a Thickbox-styled form?

    - by user333205
    Hi, all: I'm using Thickbox 3.1 to show a signup form. The form content comes from a jQuery AJAX post. The jQuery lib is version 1.4.2. I placed a "signup" link in a div area which is part of my other large pages, and the whole content of that div area is AJAX-posted from my server. To make Thickbox work in the above arrangement, I have modified the Thickbox code a little, like this:

        // add thickbox to href & area elements that have a class of .thickbox
        function tb_init(domChunk) {
            $(domChunk).live('click', function() {
                var t = this.title || this.name || null;
                var a = this.href || this.alt;
                var g = this.rel || false;
                tb_show(t, a, g);
                this.blur();
                return false;
            });
        }

    This modification is the only change from the original version. Because the "signup" link is placed in AJAX-loaded content, I use live() instead of binding the click event directly. When I tested on my PC, the Thickbox worked well: I could see the signup form quickly, without the content of the parent page (here, the other large pages) appearing to refresh. But after uploading my site files to the virtual host, when I click the "signup" link the signup form is presented very slowly, and the large page visibly refreshes because the browser (IE6) keeps reloading images from the server. These images are set as background images in CSS files. I think the slowness is down to the slow network connection, but why does the parent page get refreshed? And why does the browser reload those images one more time? Haven't those images already been cached on the local computer's disk? Is there a way to stop that reloading? Sometimes the signup form can't be displayed at all due to the slow network connection. To see the problem, you can access http://www.juliantec.info/track-the-source.html and click the second link in the left grey area; that is the "signup" link mentioned above. Thanks!

    Read the article

  • Long IF tree with strings

    - by DalGr
    I have a C program which uses Lua for scripting. In order to keep readability and avoid importing several constants into the individual Lua states, I condense a large number of functions into a single call (such as ObjectSet(id, "ANGLE", 45)) by using an "action" string. To do this I have a large if tree comparing the action string to a list (such as if (stringcompare(action, "ANGLE")) ... else if (stringcompare(action, "X")) ... etc.). This approach works well, it's not really slow within the program, and adding a new action is fairly quick. But I'm feeling perfectionist. Is there a better way to do this in C? And with Lua in heavy use, maybe there is a way to use it for this purpose? (Embedded "chunks" making a dictionary?) Although this part is mostly curiosity.
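
    A minimal sketch of the usual replacement for the if tree (the setter functions are hypothetical): a static table of name/function pairs, scanned linearly, or kept sorted and searched with bsearch if the action list grows large. The Lua-side alternative is the same idea with a table of closures keyed by the action string.

        #include <string.h>

        typedef void (*action_fn)(int id, double value);

        /* hypothetical setters implemented elsewhere in the program */
        void set_angle(int id, double v);
        void set_x(int id, double v);

        static const struct { const char *name; action_fn fn; } ACTIONS[] = {
            { "ANGLE", set_angle },
            { "X",     set_x     },
        };

        int object_set(int id, const char *action, double value)
        {
            for (size_t i = 0; i < sizeof ACTIONS / sizeof ACTIONS[0]; i++) {
                if (strcmp(ACTIONS[i].name, action) == 0) {
                    ACTIONS[i].fn(id, value);
                    return 1;
                }
            }
            return 0; /* unknown action */
        }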

    Read the article

  • MS SQL Server 2000 tables

    - by klork
    We currently have an MS SQL Server 2000 database with one table containing data for multiple users. The data is keyed by memberid, which is an integer field, and the table has a clustered index on memberid. The table is now about 200 million rows, and indexing and maintenance are becoming issues. We are debating splitting the table into a one-table-per-user model. This would imply that we would end up with a very large number of tables, potentially up to 2,147,483,647, considering just positive values. My questions:

    1) Does anyone have any experience with an MS SQL Server (2000/2005) installation with millions of tables?
    2) What are the implications of this architecture with regards to maintenance and access using Query Analyzer, Enterprise Manager, etc.?
    3) What are the implications of having such a large number of indexes in a database instance?

    All comments are appreciated. Thanks.

    Read the article

  • What's The Best Object-Relational Mapping Tool For .NET?

    - by Icono123
    I've worked on a few Java web projects and we've always used Hibernate for our data object layer. I haven't worked on a large-scale ASP.NET site and I'm unsure which solution to choose. I'm tempted to try NHibernate, but I don't like the fact that it uses so many third-party libraries. I found this list on Wikipedia of available ORM software: http://en.wikipedia.org/wiki/List_of_object-relational_mapping_software#.NET What ORM have you used? Was it easy to use? Would you recommend using it again? Was it used on a small, medium, or large project? Would you write your own? Thanks.

    Read the article

  • Best method for Binding ComboBox

    - by LnDCobra
    I am going to be developing a large project which will include a large number of ComboBoxes. Most of these combo boxes will be bound to a database field which is related to another dataset/table. For instance, I have the following 2 tables:

        Company  {CompanyID, CompanyName, MainContact}
        Contacts {ContactID, ContactName}

    When the user clicks to edit a company, a TextBox will be there to edit the company name, along with a ComboBox. The way I am currently doing it is binding the ComboBox to the Contacts dataset and manually updating the Company MainContact field in code-behind. Is there any way for me to bind the selected item to the Company MainContact field in XAML, with the items displaying ContactName, and eliminate the code-behind? The reason is that when you start creating hundreds of combo boxes all over the application, it gets long-winded to write code-behind for each update.
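
    A sketch of the pure-XAML binding (names from the question; assumes the DataContext exposes the company row and a Contacts collection): SelectedValue/SelectedValuePath push the chosen ContactID straight into MainContact, with no code-behind per box.

        <ComboBox ItemsSource="{Binding Contacts}"
                  DisplayMemberPath="ContactName"
                  SelectedValuePath="ContactID"
                  SelectedValue="{Binding Company.MainContact, Mode=TwoWay}" />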

    Read the article

  • Is there a way to combine streaming data retrieval with hibernate?

    - by Steve B.
    For the purposes of handling very large collections (and by very large I just mean "likely to throw an OutOfMemory error"), it seems problematic to use Hibernate, because normally collection retrieval is done in a block, i.e. List values = session.createQuery("from X").list(), where you monolithically grab all N million values and then process them. What I'd prefer to do is retrieve the values as an iterator, so that I grab 1000 or so (or whatever's a reasonable page size) at a time. Apart from writing my own iteration (which seems likely to be re-inventing the wheel), is there a Hibernate-native way to handle this?
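
    A sketch of the Hibernate-native route (entity name X and batch size are from the question's framing): Query.scroll() with a forward-only cursor streams rows instead of materializing the whole list, and a periodic session.clear() keeps the first-level cache from growing without bound.

        import org.hibernate.ScrollMode;
        import org.hibernate.ScrollableResults;
        import org.hibernate.Session;

        public class StreamingReader {
            public void processAll(Session session) {
                ScrollableResults results = session.createQuery("from X")
                        .setFetchSize(1000)              // hint to the JDBC driver
                        .scroll(ScrollMode.FORWARD_ONLY);
                int count = 0;
                while (results.next()) {
                    X row = (X) results.get(0);
                    process(row);
                    if (++count % 1000 == 0) {
                        session.flush();                 // push pending changes, if any
                        session.clear();                 // evict processed entities from the session cache
                    }
                }
                results.close();
            }

            private void process(X row) { /* per-row work goes here */ }
        }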

    Read the article

  • Why does this bash command take up all space on device?

    - by chelmertz
    Hey! I'm a little new to searching via bash, so feel free to give me suggestions on the methods to use instead of this, which I'll never use again :) I'm searching for occurrences of a string, recursively, in a directory with ~50 not-that-large PHP files in it; some in the current directory, some in directories beneath it, three levels of directories down at most. The method I'm using is:

        find . | xargs grep "module" > module.txt

    When in simple (one-level) directories, this works fine, but in this case the file grew to 4 GB until it filled up all the space on the partition :) It wasn't even done yet... Would someone educate me so I won't embarrass myself again?
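
    A sketch of what likely happened and the fix: module.txt is created inside the directory being searched, so find feeds it back to grep, which keeps appending its own output to itself until the disk fills. Write the result outside the tree (or exclude it) and the loop disappears.

        # Feedback loop: module.txt is inside ., so grep reads its own output.
        # find . | xargs grep "module" > module.txt

        # Fix 1: write the output outside the searched tree
        grep -r "module" . > /tmp/module.txt

        # Fix 2: stay in place but search only the PHP sources
        grep -r --include='*.php' "module" . > module.txt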

    Read the article

  • Fastest way to do a weighted tag search in SQL Server

    - by Hasan Khan
    My table is as follows:

        ObjectID  bigint
        Tag       nvarchar(50)
        Weight    float
        Type      tinyint

    I want to search for all objects that have the tags 'big' or 'large', and I want the objectid values in order of the sum of weights (so objects having both tags will be on top):

        select objectid, row_number() over (order by sum(weight) desc) as rowid
        from tags
        where tag in ('big', 'large') and type = 0
        group by objectid

    The reason for row_number() is that I want paging over the results. The query in its current form is very slow, taking a minute to execute over 16 million tags. What should I do to make it faster? I have a non-clustered index on (objectid, tag, type). Any suggestions?
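
    A sketch of an index that lets this query run without touching the base table (assumption: SQL Server 2005 or later for INCLUDE; index name is illustrative): seek on (tag, type) and carry objectid and weight in the leaf, so the 16 million rows are never scanned. The existing (objectid, tag, type) index leads with the wrong column for this predicate, so it cannot seek on the tag values.

        -- Covering index: the WHERE columns lead, the SELECT columns ride along.
        CREATE NONCLUSTERED INDEX IX_tags_tag_type
            ON dbo.tags (tag, type)
            INCLUDE (objectid, weight);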

    Read the article
