Search Results

Search found 298 results on 12 pages for 'truncate'.

Page 8 of 12

  • Is there an optimal config/format for a TIFF when using Tesseract or other OCR?

    - by Zando
    I'm having a bizarre problem with Tesseract. I have a name, "Janice", in a 200x40-pixel TIFF that Tesseract interprets as blank. I'm running hundreds of names through Tesseract and they are processed fine. What I'm actually doing, though, is breaking up a larger TIFF into smaller TIFFs of one word each. In the larger TIFF, Tesseract recognizes "Janice". What could cause it to hiccup on a TIFF that contains only that word (and there's enough space around the word that none of its pixels are truncated)? I'm using ImageMagick to split the big TIFF; are there options I should set when reconstituting the new TIFF files?
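    For reference, a minimal sketch of conversion settings that tend to keep Tesseract happy when cutting words out of a larger page: roughly 300 DPI, 8-bit depth, no compression, and a white border so edge pixels survive the crop. This assumes ImageMagick's convert is on the PATH; the file names and crop geometry below are hypothetical.

        import subprocess

        def extract_word(src_tiff, out_tiff, geometry):
            # Cut one word out of the page and rebuild it as a plain, OCR-friendly TIFF.
            subprocess.check_call([
                "convert", src_tiff,
                "-crop", geometry, "+repage",              # cut the word, reset canvas offsets
                "-bordercolor", "white", "-border", "10",  # pad so edge pixels aren't clipped
                "-units", "PixelsPerInch", "-density", "300",  # keep a sane resolution tag
                "-depth", "8",
                "-compress", "none",                       # avoid exotic TIFF compression
                out_tiff,
            ])

        extract_word("page.tif", "word_janice.tif", "200x40+120+560")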

    Read the article

  • VARCHAR does not work as expected in Apache Derby

    - by Tom Brito
    I'm having the same problem as this question: How can I truncate a VARCHAR to the table field length AUTOMATICALLY in Derby using SQL? To be specific:

        CREATE TABLE A ( B VARCHAR(2) );
        INSERT INTO A (B) VALUES ('1234');

    would throw a SQLException: A truncation error was encountered trying to shrink VARCHAR '1234' to length 2. That part is already answered: No. You should chop it off after checking the metadata. Or if you don't want to check the metadata every time, then you must keep your code and database in sync. But that's not a big deal; it's a usual practice in validators. But my doubt is: isn't VARCHAR supposed to vary its size to fit the data? What's wrong with Apache Derby's VARCHAR?
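    For what it's worth, the accepted advice above ("chop it off after checking the metadata") is only a few lines of client-side code. A minimal sketch, assuming the column lengths are read once from the schema (the hard-coded map below is hypothetical):

        # Hypothetical map of column name -> declared VARCHAR length; in practice
        # this could be read once from DatabaseMetaData / SYS.SYSCOLUMNS.
        COLUMN_LENGTHS = {"B": 2}

        def fit(row):
            # Truncate string values to their declared column length.
            return {col: (val[:COLUMN_LENGTHS[col]] if col in COLUMN_LENGTHS else val)
                    for col, val in row.items()}

        print(fit({"B": "1234"}))  # {'B': '12'} -- Derby never sees the oversized value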

    Read the article

  • Phantom updates due to decimal precision on calculated properties

    - by Jamie Ide
    This article describes my problem. I have several properties that are calculated. These are typed as decimal(9,2) in SQL Server and decimal in my C# classes. An example of the problem is:

    1. An object is loaded with a property value of 14.9.
    2. A calculation is performed and the property value is changed to 14.90393.
    3. When the session is flushed, NHibernate issues an update because the property is dirty.
    4. Since the database field is decimal(9,2), the stored value doesn't change.

    Basically, a phantom update is issued every time this object is loaded. I don't want to truncate the calculations in my business objects because that tightly couples them to the database, and I don't want to lose the precision in other calculations. I tried setting scale and precision or CustomType("Decimal(9,2)") in the mapping file, but this appears to only affect schema generation. My only reasonable option appears to be creating an IUserType implementation to handle this. Is there a better solution?

    Read the article

  • SSIS - Parallel Execution of Tasks - How efficient is it?

    - by Randy Minder
    I am building an SSIS package that will contain dozens of Sequence tasks. Each Sequence task will contain three tasks: one to truncate a destination table and remove its indexes, another to import data from a source table, and a third to add the indexes back to the destination table. My question is this: I currently have nine of these Sequence tasks built, and none is dependent on any of the others. When I execute the package, SSIS seems to do a pretty good job of determining which tasks in which Sequence to execute, which, by the way, appears to be quite random. As I continue adding more Sequences, should I attempt to be smarter about how SSIS executes them, or is SSIS smart enough to do it itself? Thanks.

    Read the article

  • How to optimize this user ranking query

    - by James Simpson
    I have 2 tables (users, userRankings) in a system that needs rankings updated every 10 minutes. I use the following code to update the rankings, which works fairly well, but there is still a full table scan involved, which slows things down with a few hundred thousand users.

        mysql_query("TRUNCATE TABLE userRankings");
        mysql_query("INSERT INTO userRankings (userid) SELECT id FROM users ORDER BY score DESC");
        mysql_query("UPDATE users a, userRankings b SET a.rank = b.rank WHERE a.id = b.userid");

    In the userRankings table, rank is the primary key and userid is an index. Both tables are MyISAM (I've wondered if it might be beneficial to make userRankings InnoDB).
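    One hedged alternative, sketched below: MySQL allows ORDER BY in a single-table UPDATE, so a session variable can number the rows in place, skipping the TRUNCATE, the INSERT ... SELECT, and the join entirely. Connection details are hypothetical, and rank is assumed to be a plain INT column on users.

        import MySQLdb

        db = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="game")
        cur = db.cursor()
        cur.execute("SET @r := 0")
        # Single-table UPDATE supports ORDER BY, so this numbers rows by score
        # in one pass instead of rebuilding userRankings every 10 minutes.
        cur.execute("UPDATE users SET rank = (@r := @r + 1) ORDER BY score DESC")
        db.commit()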

    Read the article

  • XmlDocument.WriteTo truncates resultant file

    - by Brad Heller
    Trying to serialize an XmlDocument to a file. The XmlDocument is rather large; however, in the debugger I can see that the InnerXml property has all of the XML blob in it -- it's not truncated there. Here's the code that writes my XmlDocument object to a file:

        // Write that string to a file.
        var fileStream = new FileStream("AdditionalData.xml", FileMode.OpenOrCreate, FileAccess.Write);
        xmlDocument.WriteTo(new XmlTextWriter(fileStream, Encoding.UTF8) {Formatting = Formatting.Indented});
        fileStream.Close();

    The file that's produced only writes out to around line 5,760 -- it's actually truncated in the middle of a tag! Anyone have any ideas why it would truncate here?
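    The classic cause of exactly this symptom is closing the underlying stream while a buffered writer on top of it still holds data -- here, the XmlTextWriter is never flushed or closed before fileStream.Close(). A rough Python analogue of the same failure mode (the file name is kept from the question; the data is made up):

        import io

        xml = "<root>" + "<item>data</item>" * 50000 + "</root>"

        raw = open("AdditionalData.xml", "wb")
        writer = io.TextIOWrapper(raw, encoding="utf-8")
        writer.write(xml)
        raw.close()   # underlying file closed while `writer` still buffers the tail,
                      # so the output stops mid-tag -- flush/close the writer first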

    Read the article

  • Tooltips problem: making this JavaScript work with my Smarty foreach loop, help please!

    - by Kyle Sevenoaks
    I am using an example of tooltips from http://www.dynamicdrive.com/dynamicindex5/stickytooltip.htm on www.euroworker.no/order. I have this code to work with, but it just doesn't seem to work correctly; I've tried everything I can think of (not a lot of things). Here's the code:

        {foreach from=$cart.cartItems item="item" name="cart"}
        <div class="{zebra loop="cart"}">
          <div id="sgproductview">
            <div id="cart2Varekode">
              <p>
                {if $product.sku}
                  <span class="param">{$item.product.sku}</span>
                {else}
                  <span>{img src=$item.Product.DefaultImage.paths.1 alt=$item.Product.name_lang|escape}</span>
                {/if}
              </p>
            </div>
            <div id="cart2Produkt">
              <p>
                {if $item.Product.ID}
                  <a href="{productUrl product=$item.Product}" data-tooltip="sticky{$smarty.foreach.cart.iteration}" target="_blank">{$item.Product.name_lang|truncate:20}</a>
                {else}
                  <span>{$item.Product.name_lang|truncate:20}</span> </a>
                {/if}
              </p>
              <p>
                {include file="order/itemVariations.tpl"}
                {include file="order/block/itemOptions.tpl"}
                {if $multi}
                  {include file="order/selectItemAddress.tpl" item=$item}
                {/if}
              </p>
            </div>
            {if $item.Product.DefaultImage.paths.3}
            <div id="mystickytooltip" class="stickytooltip">
              <div style="padding:5px;">
                <div id="sticky1" class="atip" style="width:200px;">
                  <img src="{$item.Product.DefaultImage.paths.3}" alt="{$item.Product.name_lang|escape}"><br>
                  {$item.Product.name_lang}
                </div>
                <div id="sticky2" class="atip" style="width:200px;">
                  <img src={$item.Product.DefaultImage.paths.3} alt="{$item.Product.name_lang|escape}"><br>
                  {$item.formattedPrice}
                </div>
                <div id="sticky3" class="atip" style="width:200px;">
                  <img src="{$item.Product.DefaultImage.paths.3}" alt="{$item.Product.name_lang|escape}"><br>
                  {$item.Product.name_lang}PRODUCT 3
                </div>
                <div id="sticky4" class="atip" style="width:200px;">
                  <img src="{$item.Product.DefaultImage.paths.3}" alt="{$item.Product.name_lang|escape}"><br>
                  {$item.Product.name_lang}
                </div>
              </div>
            </div>
            {/if}
            <div id="cart2Price">
              <p class="actualPrice">{$item.formattedPrice}</p>
            </div>
            <div id="salg"></div>
            <div id="cart2Salg"><p></p></div>
            <div id="antallbox">
              <p class="cartQuant">{textfield name="item_`$item.ID`" class="text"}</p>
            </div>
            <div id="cart2Total">
              <p>
                {if $item.count == 1}
                  <span class="basePrice">{$item.formattedBasePrice}</span><span class="actualPrice">{$item.formattedPrice}</span>
                {else}
                  {$item.formattedDisplaySubTotal}
                  <div class="subTotalCalc">
                    {$item.count} x <span class="basePrice">{$item.formattedBasePrice}</span><span class="actualPrice">{$item.formattedPrice}</span>
                  </div>
                {/if}
              </p>
            </div>
            <div id="delete">
              {if 'ENABLE_WISHLISTS'|config}
                <a href="{link controller=order action=moveToWishList id=$item.ID query="return=`$return`"}">{t _move_to_wishlist}</a>
              {/if}
              <a id="slett" href="{link controller=order action=delete id=$item.ID query="return=`$return`"}" title="Slett"><!--{t _remove}--></a>
            </div>
          </div>
        </div>
        {/foreach}

    Can anyone help? {html_image} doesn't work, by the way, and all the extensions are present and correct.

    Read the article

  • Base64 encoding in PHP not working for '&' and '#'?

    - by Angad
    My knowledge of base64 is pretty limited. I am using it as an alternative to string escaping in a content management system, since I had been warned about weaknesses found in mysql_real_escape_string(); and quite sheepishly so, as I am aware of how much it inflates text size. PHP seems to truncate everything after an instance of # or & in the string; please help me out of this one. Also, comment on whether using base64 to maintain the 'trueness' of post content in the CMS is just plain wrongheaded, or a wise move. Thanks for your time :)
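    Base64 itself is almost certainly not the problem here; '#' and '&' are URL metacharacters, so unencoded form or query-string data gets cut at the first occurrence. A quick sketch demonstrating both halves:

        import base64
        from urllib.parse import quote

        text = "Tom & Jerry #1"

        # base64 round-trips '&' and '#' losslessly:
        encoded = base64.b64encode(text.encode("utf-8"))
        assert base64.b64decode(encoded).decode("utf-8") == text

        # The transport is the usual culprit: '#' starts a URL fragment and '&'
        # separates query parameters. Percent-encoding fixes that without the
        # size inflation of base64:
        print(quote(text))  # Tom%20%26%20Jerry%20%231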

    Read the article

  • Smart pagination algorithm

    - by silvertab
    I'm looking for an example algorithm for smart pagination. By smart, I mean that I only want to show, for example, 2 pages adjacent to the current page, so instead of ending up with a ridiculously long page list, I truncate it. Here's a quick example to make it clearer... this is what I have now:

        Pages: 1 2 3 4 [5] 6 7 8 9 10 11

    This is what I want to end up with:

        Pages: ... 3 4 [5] 6 7 ...

    (In this example, I'm only showing 2 pages adjacent to the current page.) I'm implementing it in PHP/MySQL, and the "basic" pagination (no truncating) is already coded; I'm just looking for an example to optimize it. It can be an example in any language, as long as it gives me an idea of how to implement it.
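    A minimal sketch of the windowing logic in Python (the PHP port is mechanical); adjacent controls how many neighbours are shown on each side of the current page:

        def window(current, last, adjacent=2):
            # Pages to render, e.g. ['...', 3, 4, 5, 6, 7, '...'] for window(5, 11).
            lo = max(current - adjacent, 1)
            hi = min(current + adjacent, last)
            pages = ["..."] if lo > 1 else []
            pages.extend(range(lo, hi + 1))
            if hi < last:
                pages.append("...")
            return pages

        print(window(5, 11))  # ['...', 3, 4, 5, 6, 7, '...']
        print(window(1, 11))  # [1, 2, 3, '...']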

    Read the article

  • How important is it to use short names for Python packages and modules?

    - by Dan
    PEP 8 says that Python package and module names should be short, since some file systems will truncate long names. And I'm trying to follow Python conventions in a new project. But I really like long, descriptive names. So I'm wondering: how short do names need to be to comply with PEP 8? And does anyone really worry about this anymore? I'm tempted to ignore this recommendation and use longer names, thinking it isn't all that relevant anymore. Does anyone think this recommendation is still worth following? If yes, why? And how short is short enough?

    Read the article

  • Need help formatting the results page after searching

    - by kshama
    Hi, I have built a small text-based search engine on RoR which will display relevant records containing a specified search word. Since a few of the records have more than 1000 words, I have truncated each result to 200 characters. My view file search.html.erb looks like this:

        <% @results_with_ranks.each do |result| -%>
          <% content_id = rtable.find(result[0]).content_id %>
          <% content = Content.find(content_id) %>
          <%= truncate content.body, :length => 200 %><br/>
          <p> Record id <%= content.id %></p>
          <hr style="color:blue">
        <% end -%>

    I want to provide an option so that whenever any truncated record is selected, its entire body is displayed. I also want to paginate the results page, displaying some fixed number of records per page. Can anybody help me do this? Thanks in advance.

    Read the article

  • Creating a new Guid inside a code snippet using C#

    - by Rob
    I want to make an IntelliSense code snippet, invoked with Ctrl+K, Ctrl+X, that actually executes code when it runs... for example, I would like to do the following:

        <![CDATA[string.Format("{MM/dd/yyyy}", System.DateTime.Now);]]>

    But rather than giving me that string value, I want the date in the format specified. Another example of what I want is to create a new Guid but truncate it to the first octet. So I would want to create a new Guid using System.Guid.NewGuid(); to give me {798400D6-7CEC-41f9-B6AA-116B926802FE}, for example, but I want just the value 798400D6 from the code snippet. I'm open to not using an IntelliSense code snippet... I just thought that would be easy.

    Read the article

  • Efficiently trimming PostgreSQL tables

    - by agilefall
    I have about 10 tables with over 2 million records each and one with 30 million. I would like to efficiently remove older data from each of these tables. My general algorithm is:

    1. create a temp table for each large table and populate it with newer data
    2. truncate the original tables
    3. copy the tmp data back to the original tables using: insert into originaltable (select * from tmp_table)

    However, the last step of copying the data back is taking longer than I'd like. I thought about deleting the original tables and making the temp tables "permanent", but I would lose constraint/foreign key info. If I delete from the tables directly, it takes much longer. Given that I need to preserve all foreign keys and constraints, are there any faster ways of removing the older data? Thanks.
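    For reference, a sketch of the rotation described above as one transaction in psycopg2 (table, column, and cutoff are hypothetical). Keeping it in a single transaction means readers never see an empty table, and PostgreSQL's TRUNCATE is transactional, so a failure rolls the whole thing back:

        import psycopg2

        conn = psycopg2.connect("dbname=mydb")
        cur = conn.cursor()
        cur.execute("""
            CREATE TEMP TABLE events_keep AS
            SELECT * FROM events WHERE created_at >= now() - interval '30 days'
        """)
        # Assumes no other table has a foreign key pointing at events;
        # otherwise TRUNCATE needs CASCADE or the FKs dropped first.
        cur.execute("TRUNCATE events")
        cur.execute("INSERT INTO events SELECT * FROM events_keep")
        conn.commit()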

    Read the article

  • Handling of data truncation (short reads/writes) in FUSE

    - by Vi
    I expect any good program to do all of its reads and writes in a loop until all the data is written/read, rather than relying on write writing everything (even with regular files). Am I right? I implemented a simple FUSE filesystem which only allows reading and writing with small buffers, so it very often reports that fewer bytes were written than were in the buffer (using -o direct_io). Some programs work, some do not (notably mountlo). Are they buggy, or should programs not expect truncated writes and reads from regular files? In general, are seekable file descriptors expected to truncate data the way sockets and pipes do?
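    For the first question: yes -- POSIX allows short reads and writes, so robust callers loop. A minimal sketch of the loops in question:

        import os

        def write_all(fd, data):
            # os.write may accept fewer bytes than offered; keep going until done.
            view = memoryview(data)
            while view:
                written = os.write(fd, view)
                view = view[written:]

        def read_exact(fd, count):
            # os.read may return a short chunk; loop until count bytes or EOF.
            chunks = []
            while count > 0:
                chunk = os.read(fd, count)
                if not chunk:  # EOF
                    break
                chunks.append(chunk)
                count -= len(chunk)
            return b"".join(chunks)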

    Read the article

  • How to remove duplicate records in a table?

    - by Mason Wheeler
    I've got a table in a testing DB that someone apparently got a little too trigger-happy on when running INSERT scripts to set it up. The schema looks like this:

        ID            UNIQUEIDENTIFIER
        TYPE_INT      SMALLINT
        SYSTEM_VALUE  SMALLINT
        NAME          VARCHAR
        MAPPED_VALUE  VARCHAR

    It's supposed to have a few dozen rows. It has about 200,000, most of which are duplicates in which TYPE_INT, SYSTEM_VALUE, NAME and MAPPED_VALUE are all identical and ID is not. Now, I could probably make a script to clean this up that creates a temporary table in memory, uses INSERT .. SELECT DISTINCT to grab all the unique values, TRUNCATEs the original table and then copies everything back. But is there a simpler way to do it, like a DELETE query with something special in the WHERE clause?
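    There is -- on SQL Server 2005 or later, a ROW_NUMBER() CTE does it in one DELETE, no temp table needed. A sketch with a hypothetical table name:

        import pyodbc

        conn = pyodbc.connect("DSN=testdb")
        conn.execute("""
            WITH ranked AS (
                SELECT ROW_NUMBER() OVER (
                           PARTITION BY TYPE_INT, SYSTEM_VALUE, NAME, MAPPED_VALUE
                           ORDER BY ID) AS rn
                FROM MAPPINGS
            )
            DELETE FROM ranked WHERE rn > 1  -- keeps one row per duplicate group
        """)
        conn.commit()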

    Read the article

  • SQL: Is it quicker to insert sorted data into a table?

    - by AngryWhenHungry
    A table in Sybase has a unique varchar(32) column and a few other columns. It is indexed on this column too. At regular intervals, I need to truncate it and repopulate it with fresh data from other tables:

        insert into MyTable
        select list_of_columns
        from OtherTable
        where some_simple_conditions
        order by MyUniqueId

    If we are dealing with a few thousand rows, would the order by clause on the select help speed up the insert? If so, would the gain in time compensate for the extra time needed to order the select query? I could try this out, but currently my data set is small and the results wouldn't say much.

    Read the article

  • SQL Server 2000: how to automate importing data from Excel

    - by Stan
    Say the source data comes in Excel format; below is how I import it:

    1. Convert to csv format via MS Excel
    2. Roughly find bad rows/columns by inspecting
    3. Back up the table that needs to be updated in SQL Query Analyzer
    4. Truncate the table (may need to drop foreign key constraints as well)
    5. Import data from the revised csv file in SQL Server Enterprise Manager
    6. If there's an error like duplicate columns, check the original csv and remove them

    I was wondering how to make this procedure more efficient at every step? I have some ideas, but they're not complete. For steps 2 and 6: use scripts that can check automatically and print out all bad row/column data, so it's easier to remove all the errors at once. For steps 3 and 5: is there any way to update the table automatically without manually going through the import steps? Could the community advise, please? Thanks.
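    For steps 2 and 6, a small script can scan the csv once and report every bad row up front. A sketch (the file name and expected column count are hypothetical):

        import csv

        def check(path, expected_cols):
            seen = {}
            with open(path, newline="") as f:
                for lineno, row in enumerate(csv.reader(f), start=1):
                    if len(row) != expected_cols:
                        print("line %d: %d columns, expected %d"
                              % (lineno, len(row), expected_cols))
                    key = tuple(row)
                    if key in seen:
                        print("line %d: duplicate of line %d" % (lineno, seen[key]))
                    else:
                        seen[key] = lineno

        check("revised.csv", 12)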

    Read the article

  • How do I prevent buffer overflow converting a double to char?

    - by Tommy
    I'm converting a double to a char string:

        char txt[10];
        double num;
        num = 45.344322345;
        sprintf(txt, "%.1f", num);

    and using ".1f" to truncate the decimal places to the tenths digit, i.e. txt contains 45.3. I usually use precision in sprintf to ensure the char buffer is not overflowed. How can I do that here while also truncating the decimal, without using snprintf? (e.g. if num = 345694876345.3 for some reason) Thanks

    Read the article

  • PHP cURL: get headers only from post

    - by Stewart
    There ought to be a way of sending a POST request and getting back just the headers.

        $ch = curl_init('http://www.stackoverflow.com/');
        curl_setopt($ch, CURLOPT_HEADER, true);
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_NOBODY, true);

    doesn't work, because all setting CURLOPT_NOBODY does is change the request method to HEAD, thereby overriding CURLOPT_POST. I could just leave the last of these lines out and only process the headers, but is there a more efficient way? It's also odd that there doesn't seem to be a way in cURL to truncate the received content to a specified length, as there is with file_get_contents.
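    For comparison, the same trick is straightforward in Python's requests: with stream=True the body is not downloaded until it is touched, so a real POST can be made and only the status line and headers consumed. The URL and payload here are placeholders:

        import requests

        r = requests.post("http://www.stackoverflow.com/", data={"q": "test"}, stream=True)
        print(r.status_code)
        print(r.headers.get("Content-Type"))
        r.close()  # drop the connection without ever reading the body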

    Read the article

  • Highly efficient filesystem APIs for certain kinds of operations

    - by romkyns
    I occasionally find myself needing certain filesystem APIs which could be implemented very efficiently if supported by the filesystem, but I've never heard of them. For example:

    - Truncate a file from the beginning, on an allocation unit boundary
    - Split a file into two on an allocation unit boundary
    - Insert or remove a chunk from the middle of a file, again on an allocation unit boundary

    The only way that I know of to do things like these is to rewrite the data into a new file. This has the benefit that the allocation unit is no longer relevant, but is extremely slow in comparison to some low-level filesystem magic. I understand that the alignment requirements mean that the methods aren't always applicable, but I think they can still be useful. For example, a file archiver may be able to trim down the archive very efficiently after the user deletes a file from the archive, even if that leaves a small amount of garbage on either side for alignment reasons. Is it really the case that such APIs don't exist, or am I simply not aware of them? I am mostly interested in NTFS, but hearing about other filesystems will be interesting too.

    Read the article

  • How to change particular column entries in a mysql table when uploading data from csv file?

    - by understack
    I upload data into a MySQL table from a csv file in the standard way, like this:

        TRUNCATE TABLE table_name;

        load data local infile '/path/to/file/file_name.csv'
        into table table_name
        fields terminated by ',' enclosed by '"'
        lines terminated by '\r\n'
        (id, name, type, deleted);

    Every 'deleted' column entry in the csv file has either a 'current' or a 'deleted' value. Question: when the csv data is being loaded into the table, I want to put the current date into the table for all the corresponding 'deleted' entries in the csv file, and null for the 'current' entries. How can I do this? Example csv file:

        id_1, name_1, type_1, current
        id_2, name_1, type_2, deleted
        id_3, name_3, type_3, current

    The table after loading this data should look like this:

        id_1, name_1, type_1, null
        id_2, name_1, type_2, 2010-05-10
        id_3, name_3, type_3, null

    Edit: I could probably run a separate query after loading the csv file. I'm wondering if it could be done in the same query?
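    It can be done in the same statement: LOAD DATA can read a field into a user variable and derive the real column in a SET clause. A sketch run through MySQLdb (connection details are hypothetical; the SQL works as-is in the mysql client too, provided local_infile is enabled on the server):

        import MySQLdb

        db = MySQLdb.connect(host="localhost", user="app", passwd="secret",
                             db="mydb", local_infile=1)
        db.cursor().execute("""
            LOAD DATA LOCAL INFILE '/path/to/file/file_name.csv'
            INTO TABLE table_name
            FIELDS TERMINATED BY ',' ENCLOSED BY '"'
            LINES TERMINATED BY '\\r\\n'
            (id, name, type, @flag)
            SET deleted = IF(@flag = 'deleted', CURDATE(), NULL)
        """)
        db.commit()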

    Read the article

  • SQLAuthority News – Monthly Roundup of Best SQL Posts

    - by pinaldave
    After receiving lots of requests from different readers for a long time, I have decided to write my first monthly roundup. If you all like it, I will continue writing one every month. In fact, I really like the idea, as I was able to go back and read all of my posts written this month. The month started with answering one of the most common questions asked of me: What is AdventureWorks? Many of you know the answer, but to my surprise quite a few readers did not. There were a few extra blog posts along the same lines:

    - SQL SERVER – The Difference between Dual Core vs. Core 2 Duo
    - SQLAuthority News – Wireless Router Security and Attached Devices – Complex Password
    - SQL SERVER – DATE and TIME in SQL Server 2008

    DMVs are also among the handiest tools available in SQL Server; I have written the following blog posts where DMVs are used in scripts:

    - SQL SERVER – Get Latest SQL Query for Sessions – DMV
    - SQL SERVER – Find Most Expensive Queries Using DMV
    - SQL SERVER – List All the DMV and DMF on Server

    I was able to write two follow-ups to my earlier series on finding the size of indexes using different SQL scripts, and in fact one of the articles uses PowerShell as well. This was my very first attempt at using PowerShell.

    - SQL SERVER – Size of Index Table for Each Index – Solution 2
    - SQL SERVER – Size of Index Table for Each Index – Solution 3 – Powershell
    - SQL SERVER – Four Posts on Removing the Bookmark Lookup – Key Lookup

    Without realizing it, I wrote a series of blog posts on disabled indexes; here is the complete list. I plan to write one more follow-up on the same.

    - SQL SERVER – Disable Clustered Index and Data Insert
    - SQL SERVER – Understanding ALTER INDEX ALL REBUILD with Disabled Clustered Index
    - SQL SERVER – Disabled Index and Update Statistics

    Two special posts which I found very interesting to write are the following:

    - SQL SERVER – SHRINKFILE and TRUNCATE Log File in SQL Server 2008
    - SQL SERVER – Simple Example of Snapshot Isolation – Reduce the Blocking Transactions

    In personal adventures, I won the Community Impact Award for last year from Microsoft. Please leave a comment about how I can improve this roundup or what more details I should include. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • What is new in Oracle SOA Suite 11g R1 PS6? by Shanny Anoep

    - by JuergenKress
    Oracle has released a new version, 11.1.1.7.0, of their Oracle Fusion Middleware product line. This version includes Patch Set #6 (PS6) for Oracle SOA Suite 11g R1, with a big list of improvements and fixes for each component in that suite. In this post we will highlight some of the interesting updates with regard to troubleshooting, performance, reliability and scalability.

    Infrastructure/Purging scripts: Database growth is a common problem for large-scale Oracle SOA Suite deployments. Oracle already provides multiple purging strategies for the SOA Suite runtime database. This patch set includes two new scripts for purging most of the runtime data:

    - Table Recreation Script (TRS): This script can be used to reclaim as much database space as possible while still retaining the open instances. It can be used as a corrective action for databases that grew excessively, for example when purging was not performed at all. It should be used as a single corrective action only; the script does not replace the normal purging scripts.
    - Truncate script: Removes all records from the SOA Suite runtime tables without dropping the tables. This script can be used for cloning SOA Suite environments without copying the instance data, or for recreating test scenarios by cleaning all the runtime data.

    The Oracle SOA Suite Administrator's Guide contains a table with the available purging strategies.

    Diagnostic dumps: Using WLST you could already dump diagnostic information about various components of the SOA Suite. This version adds support for retrieving more information on BPEL and Adapters from the command line. New diagnostic dumps are available for BPEL to get information on thread pools, average processing time for BPEL components, and average waiting times for asynchronous instances. This information can be very useful for performance analysis or troubleshooting. With WLST this information can be retrieved from the command line and included in monitoring or reporting. Read the full article here.

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Technorati Tags: SOA Suite PS6,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • What's the best way to cache a growing database table for HTML generation?

    - by McLeopold
    I've got a database table which will grow by about 5000 rows an hour. For a key that I would be querying by, the result set will grow by about 1 row every hour. I would like a web page to show the latest rows for a key, 50 at a time (this is configurable). I would like to implement memcache to keep database activity low for reads. If I run a query and create a cache result for each page of 50 results, that works until a new entry is added. At that point the page of latest results gets the new result and the oldest result drops off. This cascades down the list of cached pages, causing me to update every cached result. It seems like a poor design. I could build the cache pages backwards; then for each page requested I would get the latest 2 pages and truncate to the proper length of 50. I'm not sure if this is good or bad. Ideally, the mechanism I use to insert a new row would also know how to invalidate the proper cache results. Has someone already solved this problem in a widely accepted way? What's the best method of doing this? EDIT: If my understanding of the MySQL query cache is correct, it has table-level granularity of invalidation. Given that I have about 5000 updates before a query on a key would need to be invalidated, it seems the database query cache would not help. MS SQL caches execution plans and frequently accessed data pages, so it may do better in this scenario. My query is not against a single table with TOP N; one version has joins to several tables and another has sub-selects. Also, since I want to cache the generated HTML table, I'm wondering if a cache at the web server level would be appropriate? Is there really no benefit to any type of caching? Is the best advice really to just let a website query go through all the layers and hit the database on every request?
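    A sketch of the "build the cache pages backwards" idea, with a plain dict standing in for memcache (all names are hypothetical): pages are numbered from the oldest row, so every page except the newest is immutable, and an insert invalidates exactly one cache key instead of cascading through all of them.

        PAGE = 50
        cache = {}  # stand-in for memcache

        def page_key(key, n):
            return "rows:%s:%d" % (key, n)

        def get_page(key, n, fetch_page):
            # fetch_page(key, n) is the (hypothetical) database query for page n.
            k = page_key(key, n)
            if k not in cache:
                cache[k] = fetch_page(key, n)
            return cache[k]

        def latest_rows(key, total_rows, fetch_page):
            # Newest PAGE rows, oldest-to-newest; reverse for display.
            last = (total_rows - 1) // PAGE   # index of the newest, partial page
            rows = get_page(key, last, fetch_page)
            if len(rows) < PAGE and last > 0:
                prev = get_page(key, last - 1, fetch_page)
                rows = prev[len(rows) - PAGE:] + rows  # top up from the full page before it
            return rows

        def on_insert(key, total_rows_after):
            # The new row lands on exactly one page; drop only that key.
            cache.pop(page_key(key, (total_rows_after - 1) // PAGE), None)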

    Read the article
