Search Results

Search found 11362 results on 455 pages for 'big o analysis'.


  • SQL indexes for "not equal" searches

    - by bortzmeyer
    An SQL index lets me quickly find the strings that match my query. Now I have to search a big table for the strings that do not match. Of course, the normal index does not help, and I have to do a slow sequential scan:

        essais=> \d phone_idx
        Index "public.phone_idx"
         Column | Type
        --------+------
         phone  | text
        btree, for table "public.phonespersons"

        essais=> EXPLAIN SELECT person FROM PhonesPersons WHERE phone = '+33 1234567';
                                           QUERY PLAN
        -------------------------------------------------------------------------------
         Index Scan using phone_idx on phonespersons  (cost=0.00..8.41 rows=1 width=4)
           Index Cond: (phone = '+33 1234567'::text)
        (2 rows)

        essais=> EXPLAIN SELECT person FROM PhonesPersons WHERE phone != '+33 1234567';
                                    QUERY PLAN
        ----------------------------------------------------------------------
         Seq Scan on phonespersons  (cost=0.00..18621.00 rows=999999 width=4)
           Filter: (phone <> '+33 1234567'::text)
        (2 rows)

    I understand (see Mark Byers' very good explanations) that PostgreSQL can decide not to use an index when it sees that a sequential scan would be faster (for instance, if almost all the tuples match). But here, "not equal" searches are really slower. Is there any way to make these "is not equal to" searches faster?

    Here is another example, to address Mark Byers' excellent remarks. The index is used for the '=' query (which returns the vast majority of tuples) but not for the '!=' query:

        essais=> EXPLAIN ANALYZE SELECT person FROM EmailsPersons WHERE tld(email) = 'fr';
                                                                  QUERY PLAN
        ------------------------------------------------------------------------------------------------------------------------------------
         Index Scan using tld_idx on emailspersons  (cost=0.25..4010.79 rows=97033 width=4) (actual time=0.137..261.123 rows=97110 loops=1)
           Index Cond: (tld(email) = 'fr'::text)
         Total runtime: 444.800 ms
        (3 rows)

        essais=> EXPLAIN ANALYZE SELECT person FROM EmailsPersons WHERE tld(email) != 'fr';
                                                               QUERY PLAN
        --------------------------------------------------------------------------------------------------------------------
         Seq Scan on emailspersons  (cost=0.00..27129.00 rows=2967 width=4) (actual time=1.004..1031.224 rows=2890 loops=1)
           Filter: (tld(email) <> 'fr'::text)
         Total runtime: 1037.278 ms
        (3 rows)

    The DBMS is PostgreSQL 8.3 (but I can upgrade to 8.4).

  • Touch draw in Quartz 2D/Core Graphics

    - by OgreSwamp
    Hello, I'm trying to implement a "hand draw tool". At the moment the algorithm looks like this (I won't insert any code because the methods are quite big, but I will try to explain the idea):

    Drawing:
    1. In the touchesStarted: method I create NSMutableArray *pointsArray, add the point to it, and call setNeedsDisplay:.
    2. In the touchesMoved: method I calculate the points between the last point added to pointsArray and the current point, add all of them to pointsArray, and call setNeedsDisplay:.
    3. In the touchesFinished: event I calculate the points between the last added point in the array and the current point, set the flag touchesWereFinished, and call setNeedsDisplay:.

    Render: the drawRect: method checks whether pointsArray != nil and whether there is any data in it. If there is, it starts drawing circles at each point of this array. If the touchesWereFinished flag is set, it saves the current context to a UIImage, releases pointsArray, sets it to nil, and resets the flag.

    There are a lot of disadvantages to this method:
    - It is slow.
    - It becomes extremely slow when the user touches and moves the finger for a long time; the array becomes enormous.
    - "Lines" composed of circles are ugly.

    I would like to change my algorithm to make it a bit faster and the lines smoother. As a result I would like to have lines like in the picture at the following URL (sorry, not enough reputation to insert an image): http://2.bp.blogspot.com/_r5VzEAUYXJ4/SrOYp8tJCPI/AAAAAAAAAMw/ZwDKXiHlhV0/s320/SketchBook+Mobile(4).png

    Can you advise me how I can draw lines this way (smooth and slim at the edges)? I thought of drawing circles with an alpha gradient at the edges (to make the lines smoother), but IMHO that would be extremely slow. Thanks for the help.

  • Splitting 25MB .txt file into smaller files using text delimiter

    - by user574141
    Regards. I am new to Python and Perl, and I have been trying to solve a simple problem while getting tied in knots with syntax. I hope someone has the time and patience to help.

    I have a 25MB file in ".txt" format which contains news-wire articles going back to 1970. Each news story is concatenated to the next, with only the "Copyright" statement as a delimiter. Each news story starts with "Item XX of XXX DOCUMENTS". There are certain metadata that are repeated throughout; I will use these for tagging later on. I wish to split this 25MB file into separate .txt files, each containing one news story (i.e. the text between "DOCUMENTS" and "Copyright"), saving each under a different name (obviously).

    I am trying to:
    1. open the file...
    2. iterate over the lines in the file, checking for the delimiter, and if it is not present, writing the line to a list
    3. write that list to a separate small file.

    I'm having big problems with changing filenames using the counter, and how do I make Python start from where I left off? Is the "seek" function appropriate? So far I have been trying this approach, completely unsuccessfully:

        myfile = open("myfile.txt", 'r')
        filenumber = 0
        for line in myfile.readline():
            filenumber += 1
            w = 0
            while myfile.readline() != '\s+DOCUMENTS\s*\n'
                ### read my line into a list
                mysmallfile()['w'] = [myfile.readline()]
                w += 1
        output = open('C:\\Users\\dunner7\\Documents\###how do I change the filename each iteration???', 'w')
        output.writelines(mysmallfile)
        ###go back to start.

    Thank you for your time and patience. RD
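
    A minimal sketch of one way to do the split: instead of mixing readline() loops, stream the file once and start a new output file each time a story header appears. It assumes each story begins with a line matching "Item XX of XXX DOCUMENTS" (per the description above); the output filename pattern is made up for illustration:

        import re

        # A story starts on a line like "Item 12 of 345 DOCUMENTS".
        start_marker = re.compile(r'\bItem\s+\d+\s+of\s+\d+\s+DOCUMENTS\b')

        def flush(lines, number):
            # Write the accumulated story, if any, to its own numbered file.
            if lines:
                with open('story_%04d.txt' % number, 'w') as out:
                    out.writelines(lines)

        story, filenumber = [], 0
        with open('myfile.txt', 'r') as myfile:
            for line in myfile:
                if start_marker.search(line):      # a new story begins here
                    flush(story, filenumber)
                    filenumber += 1
                    story = []
                story.append(line)
            flush(story, filenumber)               # don't forget the last story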

  • IIS7 dynamic content compression and webservices

    - by vandalo
    I am moving an old asmx webservice to a new server with IIS7. This webservice basically sends a big dataset (10MB+) to a WinForms application. The old solution was implemented using a custom SOAP extension which compressed the content before sending the stream to the client. The client, of course, implemented the same custom SOAP extension to decompress the stream into a dataset. Everything has worked pretty well for years.

    My customer doesn't want to change the code by upgrading to WCF. They just want to put the old app on the new server and use the new dynamic content compression features. We're testing things on a test server (Windows Server 2008) and it seems to be working pretty well, even if it seems slow: we can't see any difference in performance (speed) between the uncompressed and compressed streams.

    Here's the question. Where should I put the settings? Most people say I can't put them in my web.config; others say they can be put there. I am a bit confused. Are there any tricks or things I should know? What about mimeTypes? Should I set some parameters somewhere, considering my stream is XML (a dataset)?

    Thanks to everyone who would like to help. Alberto

  • Best practices for withstanding launch day traffic burst

    - by Sam McAfee
    We are working on a website for a client that (for once) is expected to get a fair amount of traffic on day one. There are press releases, people are blogging about it, etc. I am a little concerned that we're going to fall flat on our face on day one. What are the main things you would look at to ensure (in advance, without real traffic data) that you can stay standing after a big launch?

    Details: this is a L/A/M/PHP stack, using an internally developed MVC framework. It is currently being launched on one server, with Apache and MySQL both on it, but we can break that up if need be. We are already installing memcached and doing as much PHP-level caching as we can think of. Some of the pages are rather query-intensive, and we are using Smarty as our template engine. Keep in mind there is no time to change any of these major aspects--this is just the setup. What sorts of things should we watch out for?

  • OpenSource Projects - Is there a site which lists projects that need more developers?

    - by Jamie
    Morning/afternoon/evening all,

    Do any of you know of a website which lists open source projects that are in need of more help? Let me elaborate: I would like to work on another open source project (I already work on a couple). However, it would be nice to have a site which lists lots of OS projects, their aims, deadlines, workload, how many more developers they need, etc. Of course, I could just pick a topic I'm interested in, find an OS project and then work on it, but it would be nice to see a diversified list of projects. Primarily because some little-known awesome projects get little attention, while big projects such as jQuery forks, Adium, GIMP etc. get a lot of attention because they are well known (and of course because they are great) and thus get a lot of developers working on them. It would be nice to see some little-known projects getting more attention and thus hopefully drawing some people to work on them.

    Currently there are many websites hosting OS projects, such as GitHub, SourceForge, Google Code etc. A website to centralise all of this into one place and categorise it would be awesome. Let me know your thoughts please. I'm not looking for an answer per se, so I will mark it as a community wiki. Your thoughts would be great.

  • Call .NET Webservice with Android

    - by Lasse P
    Hi, I know this question has been asked here before, but I don't think those answers were adequate for my needs. We have a SOAP webservice that is used for an iPhone application, but it is possible that we'll need an Android-specific version or a proxy of the service, so we have the option to go with either SOAP or JSON. I have a few concerns about both methods.

    SOAP solution:
    - Is it possible to generate Java source code from a WSDL file? If so, will it include some kind of proxy class to invoke the webservice, and will it work in the Android environment at all?
    - Google has not provided any SOAP library in Android, so I need to use a 3rd-party one; any suggestions?
    - What about the performance/overhead of parsing and transmitting SOAP XML over the wire versus the JSON solution?

    JSON solution:
    - There are a few classes in the Android SDK that will let me parse JSON, but do they support generic parsing, as in having the result parsed as a complex type? Or would I need to implement that myself?
    - I have read about two libraries before here on Stack Overflow, GSON and Jackson. What is the difference, performance- and usability-wise (from a developer's perspective)? Do you guys have any experience with either of those libraries?

    So I guess the big question is: which method should I go with? I hope you can help me out. Thanks in advance :-)

  • Speed of QHash lookups using QStrings as keys.

    - by Ryan R.
    I need to draw a dynamic overlay on a QImage. The component parts of the overlay are defined in XML and parsed out to a QHash<QString, QPicture>, where the QString is the name (such as "crosshairs") and the QPicture is the resolution-independent drawing. I then draw components of the overlay as they are needed, at a position determined at runtime.

    Example: I have 10 pictures in my QHash composing every possible element of a HUD. During a particular frame of video I need to draw 6 of them at different positions on the image. During the next frame something has changed and now I only need to draw 4 of them, but 2 of those positions have changed.

    Now to my question: if I am trying to do this quickly, should I redefine my QHash as QHash<int, QPicture> and enumerate the keys to avoid the overhead of string comparisons, or will the comparisons not make a very big impact on performance? I can easily make the conversion to integer keys, as the XML parser and the overlay composer are completely separate classes; but I would like to use a consistent data structure across the application. Should I overcome my desire for consistency and re-usability in order to increase performance? Will it even matter very much if I do?

  • Is there an IDE/compiler PC benchmark I can use to compare my PC's performance?

    - by RickL
    I'm looking for a benchmark (and results on other PCs) which would give me an idea of the development performance gain I could get by upgrading my PC; the benchmark could also be used to justify the upgrade to my boss.

    I use Visual Studio 2008 for my development, so I'd like to get an idea of by what factor the build times would be improved. It would also be good if the benchmark could incorporate IDE performance (i.e. when editing, using IntelliSense, opening code files, etc.) into its result. I currently have an AMD 3800x2 with 2GB RAM on Vista 32. For example, I'd like to know what kind of performance gain I'd see in Visual Studio 2008 with a Q6600 and 4GB RAM on Vista 64. And also with other processors and other RAM sizes... it would also be good to see whether hard disk performance is a big factor.

    EDIT: I mentioned Vista 64 because I'm aware that Vista 32 can only use 3GB RAM at maximum. So I'd presume that wanting to use more RAM would require Vista 64, but perhaps it could still be slower overall if there is a large overhead in using the 32-bit VS 2008 on a 64-bit OS.

  • Is there a programmatic way to transform a sequence of image files into a PDF?

    - by Salim Fadhley
    I have a sequence of JPG images, each scan already cropped to the exact size of one page. They are sequential pages of a valuable and out-of-print book. The publishing application requires that these pages be submitted as a single PDF file.

    I could take each of these images and just paste them into a word processor (e.g. OpenOffice); unfortunately, the problem is that it's a very big book and I've got quite a few of these books to get through, so it would obviously be time-consuming. This is volunteer work!

    My second idea was to use LaTeX: I could make a very simple document that consists of nothing more than a series of inline image includes. I'm sure that this approach could be made to work; it's just a little on the complex side for something which seems like a very simple job. It occurred to me that there must be a simpler way, so any suggestions? I'm on Ubuntu 9.10, and my primary programming language is Python, but if the solution is super-simple I'd happily adopt any technology that works.
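
    Since Python is on the table: a minimal sketch using the Pillow imaging library, which can write a multi-page PDF directly (the file-naming pattern is an assumption for illustration):

        import glob
        from PIL import Image

        # Collect the page scans in order; adjust the pattern to the real filenames.
        pages = [Image.open(f).convert("RGB")
                 for f in sorted(glob.glob("page_*.jpg"))]

        # save_all + append_images turns the whole sequence into a single PDF.
        pages[0].save("book.pdf", save_all=True, append_images=pages[1:])

    Note that this holds every page in memory at once; for a very large book, a streaming tool such as ImageMagick's convert or the img2pdf utility may be a better fit.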

  • What is the difference between calling Stream.Write and using a StreamWriter?

    - by John Nelson
    What is the difference between instantiating a Stream object, such as MemoryStream, and calling the memoryStream.Write() method to write to the stream, versus instantiating a StreamWriter object with the stream and calling streamWriter.Write()?

    Consider the following scenario: you have a method that takes a Stream, writes a value, and returns it. The stream is read from later on, so the position must be reset. There are two possible ways of doing it (both seem to work):

        // Instantiate a MemoryStream somewhere
        // - Passed to the following two methods
        MemoryStream memoryStream = new MemoryStream();

        // Not using a StreamWriter
        private static Stream WriteToStream(Stream stream, string value)
        {
            stream.Write(Encoding.Default.GetBytes(value), 0, value.Length);
            stream.Flush();
            stream.Position = 0;
            return stream;
        }

        // Using a StreamWriter
        private static Stream WriteToStreamWithWriter(Stream stream, string value)
        {
            StreamWriter sw = new StreamWriter(stream);
            sw.Write(value, 0, value.Length);
            sw.Flush();
            stream.Position = 0;
            return stream;
        }

    This is partially a scope problem, as I don't want to close the stream after writing to it, since it will be read from later. I also certainly don't want to dispose of it, because that would close my stream. The difference seems to be that not using a StreamWriter introduces a direct dependency on Encoding.Default, but I'm not sure that's a very big deal. What's the difference, if any?

  • How do you work around memcached's key/value limitations?

    - by mjy
    Memcached has length limitations for keys (250?) and values (roughly 1MB), as well as some (to my knowledge) not very well defined character restrictions for keys. What is the best way to work around those, in your opinion? I use the Perl API Cache::Memcached.

    What I do currently: if the original value was too big, I store a special string ("parts:<number>") as the main key's value, and in that case I store <number> parts under keys named 1+<main key>, 2+<main key>, etc. This seems "OK" (but messy) for some cases and not so good for others, and it has the intrinsic problem that some of the parts might be missing at any time (so space is wasted keeping the others, and time is wasted reading them).

    As for the key limitations, one could probably implement hashing and store the full key (to work around collisions) in the value, but I haven't needed to do this yet. Has anyone come up with a more elegant way, or even a Perl API that handles arbitrary data sizes (and key values) transparently? Has anyone hacked the memcached server to support arbitrary keys/values?
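
    The chunking idea itself ports to any client. A rough sketch of it in Python (the python-memcached client here stands in for Cache::Memcached; the "parts:" manifest convention mirrors the question, and plain string values are assumed):

        import memcache

        mc = memcache.Client(['127.0.0.1:11211'])
        CHUNK = 900 * 1024  # stay safely under memcached's ~1MB value limit

        def set_big(key, value):
            if len(value) <= CHUNK:
                return mc.set(key, value)
            parts = [value[i:i + CHUNK] for i in range(0, len(value), CHUNK)]
            for n, part in enumerate(parts):
                mc.set('%d+%s' % (n, key), part)
            return mc.set(key, 'parts:%d' % len(parts))   # the manifest

        def get_big(key):
            value = mc.get(key)
            if value is None or not value.startswith('parts:'):
                return value
            n = int(value.split(':', 1)[1])
            keys = ['%d+%s' % (i, key) for i in range(n)]
            parts = mc.get_multi(keys)
            if len(parts) != n:     # a chunk was evicted: treat it as a miss
                return None
            return ''.join(parts[k] for k in keys)

    The eviction check addresses the "some parts might be missing" problem by degrading to a clean cache miss instead of returning a torn value.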

  • Running code/script as a result of a form submission in ASP.NET

    - by firmbeliever
    An outside vendor did some HTML work for us, and I'm filling in the actual functionality. I have an issue that I need help with. He created a simple HTML page that is opened as a modal pop-up. It contains a form with a few input fields and a submit button. On submitting, an email should be sent using info from the input fields. I turned his simple HTML page into a simple aspx page, added runat=server to the form, and added the C# code inside script tags to create and send the email.

    It technically works, but has a big issue. After the information is submitted and the email is sent, the page (which is supposed to just be a modal pop-up type thing) gets reloaded, but it is now no longer a pop-up; it's reloaded as a standalone page. So I'm trying to find out if there is a way to get the form to just execute those few lines of C# code on submission without reloading the form.

    I'm somewhat aware of CGI scripts, but from what I've read, that can be buggy with IIS and all. Plus I'd like to think I could get these few lines of code to run without creating a separate executable. Any help is greatly appreciated.

  • Question about counting sort

    - by davit-datuashvili
    Hi, I have written the following code, which prints the elements in sorted order. The one big problem is that it uses two additional arrays. Here is my code:

        public class occurance {
            public static final int n = 5;

            public static void main(String[] args) {
                // n is the maximum possible value in the array; suppose n=5,
                // then the array may be:
                int a[] = new int[]{3, 4, 4, 2, 1, 3, 5}; // all elements are <= n
                // create arrays of size a.length*n
                int b[] = new int[a.length * n];
                int c[] = new int[b.length];
                for (int i = 0; i < b.length; i++) {
                    b[i] = 0;
                    c[i] = 0;
                }
                for (int i = 0; i < a.length; i++) {
                    if (b[a[i]] == 1) {
                        c[a[i]] = 1;
                    } else {
                        b[a[i]] = 1;
                    }
                }
                for (int i = 0; i < b.length; i++) {
                    if (b[i] == 1) {
                        System.out.println(i);
                    }
                    if (c[i] == 1) {
                        System.out.println(i);
                    }
                }
            }
        }
        // prints: 1 2 3 3 4 4 5

    I have two questions:
    1. What is the complexity of this algorithm? I mean the running time.
    2. How do I put these elements into another array in sorted order?
    Thanks
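
    For comparison, a sketch of a textbook counting sort in Python that keeps every occurrence (not just two) and writes the result into a separate sorted array; it runs in O(m + k) time, where m is the input length and k is the maximum value:

        def counting_sort(a, k):
            # counts[v] = how many times value v occurs in a (values are 0..k)
            counts = [0] * (k + 1)
            for v in a:
                counts[v] += 1
            # Emit each value as many times as it occurred, in increasing order.
            out = []
            for v in range(k + 1):
                out.extend([v] * counts[v])
            return out

        print(counting_sort([3, 4, 4, 2, 1, 3, 5], 5))  # [1, 2, 3, 3, 4, 4, 5]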

  • How to make an accessor for a Dictionary such that the returned Dictionary cannot be changed (C# / 2.0)

    - by matti
    I thought of the solution below because the collection is very, very small. But what if it were big?

        private Dictionary<string, OfTable> _folderData = new Dictionary<string, OfTable>();

        public Dictionary<string, OfTable> FolderData
        {
            get { return new Dictionary<string, OfTable>(_folderData); }
        }

    With List you can do:

        public class MyClass
        {
            private List<int> _items = new List<int>();
            public IList<int> Items
            {
                get { return _items.AsReadOnly(); }
            }
        }

    That would be nice! Thanks in advance, cheers & BR - Matti

    Update: now that I think about it, the objects in the collection live on the heap, so my solution does not prevent the caller from modifying them, because both Dictionarys contain references to the same objects! Does this apply to the List example above as well?

        class OfTable
        {
            private string _wTableName;
            private int _table;
            private List<int> _classes;
            private string _label;

            public OfTable()
            {
                _classes = new List<int>();
            }

            public int Table
            {
                get { return _table; }
                set { _table = value; }
            }

            public List<int> Classes
            {
                get { return _classes; }
                set { _classes = value; }
            }

            public string Label
            {
                get { return _label; }
                set { _label = value; }
            }
        }

    So how do I make this immutable?

  • Performing an SVD on tweets: memory problem

    - by plotti
    I have generated a huge CSV file as the output from my POS tagging and stemming. It looks like this:

                   word1  word2  word3  ...  word14400
        person1      1      2      0           1
        person2      0      0      1           0
        ...
        person650

    It contains the word counts for each person; like this I get characteristic vectors for each person. I want to run an SVD on this beast, but it seems the matrix is too big to be held in memory to perform the operation. My question is: should I reduce the column size by removing words which have a column sum of, for example, 1, which means that they have been used only once? Do I bias the data too much with this attempt?

    I tried the RapidMiner approach of loading the CSV into a DB and then sequentially reading it in batches for processing, as RapidMiner proposes. But MySQL can't store that many columns in a table. If I transpose the data, and then re-transpose it on import, it also takes ages.

    So in general I am asking for advice on how to perform an SVD on such a corpus.
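
    One hedged alternative to trimming columns: with 650 people the matrix is short and wide, and most counts are zero, so a sparse matrix plus a truncated SVD avoids ever materializing the dense 650 x 14400 array. A sketch with SciPy (the toy triplets stand in for the real counts, which could be collected while streaming the CSV line by line):

        from scipy.sparse import csr_matrix
        from scipy.sparse.linalg import svds

        # (person i used word j `val` times), collected from the CSV
        rows = [0, 0, 0, 1]
        cols = [0, 1, 3, 2]
        vals = [1.0, 2.0, 1.0, 1.0]

        X = csr_matrix((vals, (rows, cols)), shape=(650, 14400))

        # Truncated SVD: only the top k singular triplets, computed iteratively.
        U, s, Vt = svds(X, k=50)
        print(U.shape, s.shape, Vt.shape)   # (650, 50) (50,) (50, 14400)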

  • Find the number of congruent triangles?

    - by avd
    Say I have a square from (0,0) to (z,z). I am given a triangle within this square which has integer coordinates for all its vertices. Find the number of triangles within this square which are congruent to this triangle and have integer coordinates.

    My algorithm is as follows:
    1. Find the minimum bounding rectangle (MBR) for the given triangle.
    2. Find the number of congruent triangles, y, within that MBR, obtained by reflecting and rotating the given triangle. y can be 2, 4 or 8.
    3. Now find how many such MBRs can be drawn within the given big square, say x (this is similar to finding the number of squares on a chessboard).
    4. x*y is the required answer.

    Am I counting some triangles more than once, or am I missing something with this algorithm? It is a problem on an online judge, and it gives me a wrong answer. I have thought a lot about it, but I am not able to figure it out.
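
    For step 3, the chessboard-style count can be made explicit: an MBR of width w and height h placed inside the z x z square admits

        x = (z - w + 1)(z - h + 1)

    positions, since its lower-left corner can sit at any integer point (i, j) with 0 <= i <= z - w and 0 <= j <= z - h.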

  • Is there a table of OpenGL extensions, versions, and hardware support somewhere?

    - by Thomas
    I'm looking for some resource that can help me decide what OpenGL version my game needs at minimum, and what features to support through extensions. Ideally, a table of the following format:

                        1.0  1.1  1.2  1.2.1  1.3  ...
        multitexture     -   ARB  ARB  core   core
        texture_float    -   EXT  EXT  ARB    ARB
        ...

    (Not sure about the values I put in, but you get the idea.) The extension specs themselves, at opengl.org, list the minimum OpenGL version they need, so that part is easy. However, many extensions have been accepted and became core standard in subsequent OpenGL versions, and it is very hard to find out when that happened. The only way I could find is to compare the full OpenGL standards document for each version.

    On a related note, I would also very much like to know which extensions/features are supported by which hardware, to help me decide what features I can safely use in my game and which ones I need to make optional. For example, a big honkin' table like this:

                      MAX_TEXTURE_IMAGE_UNITS  MAX_VERTEX_TEXTURE_IMAGE_UNITS  ...
        GeForce 6xxx             8                           4
        GeForce 7xxx            16                           8
        ATi x300                 8                           4
        ...

    (Again, I'm making the values up.) The table could list hardware limitations from glGet, but also support for particular extensions, and limitations of such extension support (e.g. which floating-point texture formats are supported in hardware). Any pointers to these or similar resources would be hugely appreciated!

  • Incremental deploy from a shell script

    - by WishCow
    I have a project where I'm forced to use FTP as the means of deploying files to the live server. I'm developing on Linux, so I hacked together a bash script that makes a backup of the FTP server's contents, deletes all the files on the FTP server, and uploads all the fresh files from the Mercurial repository (taking care of user-uploaded files and folders, making post-deploy changes, etc.). It's working well, but the project is starting to get big enough to make the deployment process too long.

    I'd like to modify the script to look up which files have changed and deploy only the modified files (the backup is fine as it is for now). I'm using Mercurial as a VCS, so my idea is to somehow request the changed files between two revisions from it, iterate over the changed files, upload each modified file, and delete each removed file. I can use hg log -vr rev1:rev2 and, from the output, carve out the changed files with grep/sed/etc. Two problems:

    1. I have heard the horror stories that parsing the output of ls leads to insanity, so my guess is that the same applies here: if I try to parse the output of hg log, the variables will undergo word-splitting and all kinds of transformations.
    2. hg log doesn't tell me whether a file was modified, added or deleted. Differentiating between modified and deleted files would be the least I need.

    So, what would be the correct way to do this? I'm using yafc as an FTP client, in case it's needed, but I'm willing to switch.
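
    A hedged sketch of the lookup step that sidesteps both problems at once: hg status (unlike hg log) prints one record per changed file with an M/A/R status code, and its -0 flag NUL-terminates the records so unusual filenames survive parsing. Driving it from Python instead of bash avoids the word-splitting worries; the revision names and the two-character "M " record layout are assumptions to verify against your hg version:

        import subprocess

        out = subprocess.check_output(
            ['hg', 'status', '--rev', 'rev1', '--rev', 'rev2', '-0'])

        to_upload, to_delete = [], []
        for record in out.split(b'\0'):
            if not record:
                continue
            status, path = record[:1], record[2:].decode()  # e.g. b'M some/file'
            if status in (b'M', b'A'):      # modified or added: upload it
                to_upload.append(path)
            elif status == b'R':            # removed: delete it on the server
                to_delete.append(path)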

  • SIMPLE PHP MVC Framework [closed]

    - by Allen
    I need a simple and basic MVC example to get me started. I don't want to use any of the available packaged frameworks. What I need is a simple example of a simple PHP MVC framework that would allow, at most, the basic creation of a simple multi-page site. I am asking for a simple example because I learn best from simple real-world examples. Big popular frameworks (such as CodeIgniter) are too much for me to even try to understand, and any other "simple" examples I have found are not well explained or seem a little sketchy in general.

    I should add that most examples of simple MVC frameworks I see use mod_rewrite (for URL routing) or some other Apache-only method; I run PHP on IIS. I need to be able to understand a basic MVC framework, so that I could develop my own that would allow me to easily extend functionality with classes. I am at the point where I understand basic design patterns and MVC pretty well. I understand them in theory, but when it comes down to actually building a real-world, simple, well-designed MVC framework in PHP, I'm stuck.

    Edit: I just want to note that I am looking for a simple example that an experienced programmer could whip up in under an hour. I mean simple as in bare-bones simple. I don't want to use any huge frameworks; I am trying to roll my own. I need a decent SIMPLE example to get me going.

  • XSLT 1.0: restrict entries in a nodeset

    - by Mike
    Hi, being relatively new to XSLT I have what I hope is a simple question. I have some flat XML files, which can be pretty big (e.g. 7MB), that I need to make 'more hierarchical'. For example, the flat XML might look like this:

        <D0011>
        ....
        ....

    and it should end up looking like this:

        <D0011>
        ....
        ....

    I have a working XSLT for this; it essentially gets a nodeset of all the b elements and then uses the 'following-sibling' axis to get a nodeset of the nodes following the current b node (i.e. following-sibling::*[position() = $nodePos]). Then recursion is used to add the siblings into the result tree until another b element is found (I have parameterised it, of course, to make it more generic). I also have a solution that just sends the position in the XML of the next b node and selects the nodes after that one, one after another (using recursion), via a *[position() = $nodePos] selection.

    The problem is that the time to execute the transformation increases unacceptably with the size of the XML file. Looking into it with XML Spy, it seems that it is the 'following-sibling' and 'position()=' parts that take the time in the two respective methods. What I really need is a way of restricting the number of nodes in the above selections, so fewer comparisons are performed: every time the position is tested, every node in the nodeset is tested to see if its position is the right one. Is there a way to do that? Any other suggestions? Thanks, Mike

  • Wasteful Ajax Page Loading

    - by Matt Dawdy
    I've started a new job, and the portion of the project I'm working on has a very odd structure. Every page is a .NET aspx page, and it loads just fine, but nothing is really done at load time. Everything is really loaded from a jQuery document-ready handler. What is even more... interesting... is that the ready handler makes some Ajax calls that drop entire .aspx pages into divs on the page, but first it strips out several parts of the returned page. This is the "magic" script the previous programmer ran on all the HTML returned from his Ajax calls:

        function CleanupResponseText(responseText, uniqueName) {
            responseText = responseText.replace("theForm.submit();",
                "SubmitSubForm(theForm, $(theForm).parent());");
            responseText = responseText.replace(new RegExp("theForm", "g"), uniqueName);
            responseText = responseText.replace(new RegExp("doPostBack", "g"),
                "doPostBack" + uniqueName);
            return responseText;
        }

    He then intercepts any kind of form postback and runs his own form submission function:

        function SubmitSubForm(form, container) {
            //ShowLoading(container);
            $(form).ajaxSubmit({
                url: $(form).attr("action"),
                success: function(responseText) {
                    $(container).html(CleanupResponseText(responseText, form.id));
                    $("form", container).css("margin-top", "0").css("padding-top", "0");
                    //HideLoading(container);
                }
            });
        }

    Am I way off base in thinking that this is less than optimal? I mean, how does a browser take out the html and head and other tags that don't have anything to do with what you are really trying to drop into that div? Also, he's returning things like asp:GridView controls and the associated ViewState, which can be quite large if his dataset is big. Has anyone seen this before?

  • Export large amount of data from Oracle 10G to SQL Server 2005

    - by uniball
    Dear all, I need to export 100 million data rows (average row length ~100 bytes) from an Oracle 10G database table into SQL Server 2005 (over a WAN/VLAN with 6Mbit/s capacity) on a regular basis. So far, these are the options that I have tried, with a quick summary. Has anyone tried this before? Are there other, better options? Which option would be the best in terms of performance and reliability? The times have been calculated by testing on smaller amounts of data and extrapolating to estimate the time required.

    1. Using the data import wizard on the SQL Server, or SSIS packages, to import the data. It will take around 150 hours to complete the task.
    2. Using an Oracle batch job to spool the data into a comma-delimited flat file, then using an SSIS package to FTP this file to the SQL Server and load directly from the flat file. The issue here is the size of the flat file, which is expected to run into GBs.
    3. Although this option is drastically different, I am even considering using a Linked Server to query the Oracle data directly at run time, to avoid bringing in the data. Performance is a big problem, and I have limited control over the Oracle database in terms of creating table indexes.

    Regards, Uniball

  • Initialize listitem with blanks?

    - by VBartilucci
    Say I have a list made up of a list item which contains three strings. I add a new list item and try to assign the values of said strings from an outside source. If one of those items is unassigned, the value in the list item remains null (unassigned). As a result, I get an error if I try to assign that value to a field on my page.

    I could do an IsNullOrEmpty check for each field on the page, but that seems inefficient. I'd rather initialize the values to "" (empty string) in the code-behind and send valid data. I can do it manually:

        ClaimPwk emptyNode = new ClaimPwk();
        emptyNode.cdeAttachmentControl = "";
        emptyNode.cdeRptTransmission = "";
        emptyNode.cdeRptType = "";
        headerKeys.Add(emptyNode);

    But I have some BIG list items, and writing that out for those will get tedious. So is there a command, or just plain an easier way, to initialize a list item's strings to empty strings instead of null? Or has anyone got a better idea?

  • git commit best practices

    - by Ivan Z. Siu
    I am using git to manage a C++ project. When I am working on the project, I find it hard to organize the changes into commits when changing things that are related to many places. For example, I may change a class interface in a .h file, which will affect the corresponding .cpp file and also other files using it. I am not sure whether it is reasonable to put all the stuff into one big commit.

    Intuitively, I think commits should be modular, each of them corresponding to a functional update/change, so that collaborators can pick things accordingly. But it seems that sometimes it is inevitable to include lots of files and changes in order to make a functional change actually work. Searching did not yield any good suggestions or tips, hence I wonder if anyone could give me some best practices for making commits. Thanks!

    PS. I've been using git for a while and I know how to interactively add/rebase/split/amend/... What I am asking about is the PHILOSOPHY part.
