Search Results

Search found 436 results on 18 pages for 'insertion'.

Page 7/18 | < Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >

  • NSTextView's adjustScroll method

    - by Mike
    I'm trying to implement typewriter scrolling in my Cocoa text editor, keeping the insertion point centered vertically in its scrollview. Toward this end, I have subclassed NSClipView to provide a scrollToPointWithoutConstraint method, which scrolls the document to a specified point without calling constrainScrollPoint. This is necessary because for short documents the insertion point can't be centered unless we scroll beyond the document's bounds. This seems reasonably straightforward so far and does what I want. The problem comes in when I try to scroll using the scroll bars. If I'm scrolled to the end of the document, such that part of the scroll view contains an area outside the document's bounds, trying to scroll up by a small increment causes the scroll view to jump, immediately clamping to the document's actual bounds. I gather that I might need to subclass NSTextView and override the adjustScroll method; this is where my actual question begins. The proposedVisibleRect that is passed to adjustScroll already has its dimensions adjusted so that they lie within the document's actual bounds. Is there a way that I can change the value of proposedVisibleRect before adjustScroll is called? Alternatively, am I going about this entirely wrong? Any suggestions would be greatly appreciated at this point.

    Read the article

  • priority queue with limited space: looking for a good algorithm

    - by SigTerm
    This is not homework. I'm using a small "priority queue" (implemented as an array at the moment) for storing the last N items with the smallest values. This is a bit slow - O(N) item insertion time - and the current implementation keeps track of the largest item in the array and discards any item that wouldn't fit, but I would still like to reduce the number of operations further. I'm looking for a priority queue algorithm that matches the following requirements:

    - The queue can be implemented as an array, which has a fixed size and _cannot_ grow. Dynamic memory allocation during any queue operation is strictly forbidden.
    - Anything that doesn't fit into the array is discarded, but the queue keeps all the smallest elements ever encountered.
    - O(log(N)) insertion time (i.e. adding an element to the queue should take at most O(log(N))).
    - (optional) O(1) access to the *largest* item in the queue (the queue stores the *smallest* items, so the largest item will be discarded first, and I'll need it to reduce the number of operations).
    - Easy to implement/understand. Ideally something similar to binary search - once you understand it, you remember it forever.

    The elements need not be sorted in any way; I just need to keep the N smallest values ever encountered, and when I need them, I'll access all of them at once. So technically it doesn't have to be a queue - I just need the N smallest values to be stored. I initially thought about using binary heaps (they can easily be implemented on top of an array), but apparently they don't behave well when the array can't grow anymore. Linked lists and arrays would require extra time for moving things around. The STL priority queue grows and uses dynamic allocation (I may be wrong about that, though). So, any other ideas?
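
    A fixed-size max-heap actually satisfies every constraint above: keep the N smallest values seen so far in a heap ordered largest-first, and compare each new value against the root. A minimal sketch in Python (heapq implements a min-heap, so values are negated to simulate a max-heap; the class name and sample data are illustrative):

      import heapq

      class NSmallest:
          """Keep the N smallest values seen: O(log N) insertion, O(1) access
          to the largest kept value, and no growth past N slots."""
          def __init__(self, n):
              self.n = n
              self.heap = []  # holds negated values: a min-heap of -v is a max-heap of v

          def offer(self, value):
              if len(self.heap) < self.n:
                  heapq.heappush(self.heap, -value)     # still filling: O(log N)
              elif value < -self.heap[0]:               # beats the largest kept value
                  heapq.heapreplace(self.heap, -value)  # pop root + push, one sift: O(log N)
              # otherwise discard in O(1)

          def largest(self):
              return -self.heap[0]                      # O(1) peek at the discard candidate

          def values(self):
              return [-v for v in self.heap]            # unordered, as the question allows

      q = NSmallest(5)
      for x in [9, 1, 8, 2, 7, 3, 6, 4, 5, 0]:
          q.offer(x)
      print(sorted(q.values()))  # [0, 1, 2, 3, 4]

    Since the heap lives in a plain fixed-capacity array, the same logic ports to C with no allocation at all; in C++, push_heap/pop_heap over a pre-reserved buffer gives the same behavior.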

    Read the article

  • Why isn't this simple PHP/MySQL code working?

    - by Sammy
    I am very new to PHP/MySQL and this is causing me to lose hair. I am trying to build a multi-level site navigation. In this part of my script I am readying the sub and parent categories coming from a form for insertion into the database:

      // get child categories
      $catFields = $_POST['categories'];
      if (is_array($catFields)) {
          $categories = $categories;
          for ($i=0; $i<count($catFields); $i++) {
              $categories = $categories . $catFields[$i]";
          }
      }

      // get parent category
      $select = mysql_query("SELECT parent FROM categories WHERE id = $categories");
      while ($return = mysql_fetch_assoc($select)) {
          $parentId = $return['parent'];
      }

    The first part of my script works fine: it grabs all the categories that the user has chosen to assign to a post by checking the checkboxes in a form, and readies them for insertion into the database. But the second part does not work and I can't understand why. I am trying to match a category with a parent that is stored in its own table, but it returns nothing even though the categories all have parents. Can anyone tell me why this is? P.S. The $categories variable contains the sub-category id.
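
    One likely culprit, for what it's worth: the loop concatenates several category ids into a single string (with a stray quote that is a parse error exactly as quoted), and WHERE id = ... can only ever match one id. The usual shape for matching several ids is an IN (...) list built with placeholders. A sketch of that pattern in Python/sqlite3, since the idea is database-agnostic (table and data invented for illustration):

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE categories (id INTEGER PRIMARY KEY, parent INTEGER)")
      conn.executemany("INSERT INTO categories VALUES (?, ?)",
                       [(3, 1), (5, 1), (7, 2)])

      selected_ids = [3, 5, 7]                          # what the form checkboxes produced
      placeholders = ",".join("?" * len(selected_ids))  # "?,?,?"
      rows = conn.execute(
          "SELECT id, parent FROM categories WHERE id IN (%s)" % placeholders,
          selected_ids).fetchall()
      for cat_id, parent_id in rows:
          print(cat_id, parent_id)  # (3, 1) (5, 1) (7, 2)

    Placeholders also sidestep the SQL injection risk of interpolating $_POST values straight into the query string, which the quoted PHP has as well.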

    Read the article

  • How to return a value using Ajax

    - by Priyanka
    Hello. I have an Ajax file in which code is written to accept values from the user; these values are then handled in an Ajax function as follows:

      $(document).ready(function(){
          $("#newsletterform").validate();
          $('#Submit').click(function(){
              var name = $('#newsletter_name').val();
              var email = $('#newsletter_email').val();
              sendValue(email, name);
          });
      });

    The function for passing values to, and getting values back from, the other file:

      function sendValue(str, name){
          $.post(
              "newsletter/subscribe.php",   // Ajax file
              { sendValue: str, sendVal: name },
              function(data2){
                  $('#display').html(data2.returnValue);
              },
              "json"   // how you want the data formatted when it is returned from the server
          );
      }

    These values are passed to another file called "subscribe.php", which inserts into the database and returns a value to the Ajax function as follows:

      echo json_encode(array("returnValue" => $msg));

    $msg contains the message to be displayed. Now, this works fine on localhost - I get the return values and message properly - but when I upload it to the server it gives me an error:

      data2 is null
      [Break on this error] $('#display').html(data2.returnValue);

    It only gives an error for the return value; the insertion and mail-sending functionality work fine. Please provide me with a good solution so that I can get back the return values without any error. Thanks in advance.

    Read the article

  • A GUID as the MySQL table's Primary Key or as a separate column

    - by Ben
    I have a multi-process program that performs, in a 2-hour period, 5-10 million inserts into a 34GB table within a single master/slave MySQL setup (plus an equal number of reads in that period). The table in question has only 5 fields and 3 (single-field) indexes. The primary key is auto-incrementing. I am far from a DBA, but the database appears to be crippled during this two-hour period. So, I have a couple of general questions. 1) How much bang will I get out of batching these writes into units of 10? Currently, I am writing each insert serially because, after writing, I immediately need to know, in my program, the resulting primary key of each insert. The PK is the only unique field presently, and approximating the order of insertion with something like a datetime field or a multi-column value is not acceptable. If I perform a bulk insert, I won't know these IDs, which is a problem. So, I've been thinking about turning the auto-increment primary key into a GUID and enforcing uniqueness. I've also been kicking around the idea of creating a new column just for the purposes of the GUID, though I don't really see what that achieves that the PK approach doesn't already offer. As far as I can tell, the big downside to making the PK a randomly generated number is that the index would take a long time to update on each insert (since insertion order would not be sequential). Is that an acceptable approach for a table that is taking this number of writes? Thanks, Ben
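
    The appeal of client-generated GUIDs is exactly that the ids are known before the batch is written, so bulk inserts no longer lose them. A sketch of the pattern in Python (sqlite3 stands in for the MySQL connection, and the table is invented; with MySQL one would typically store the UUID in a BINARY(16) column to keep index entries small):

      import sqlite3
      import uuid

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE events (id BLOB PRIMARY KEY, payload TEXT)")

      payloads = ["a", "b", "c"]
      rows = [(uuid.uuid4().bytes, p) for p in payloads]  # ids exist before the write
      conn.executemany("INSERT INTO events (id, payload) VALUES (?, ?)", rows)

      for row_id, payload in rows:  # the program already holds every id it just wrote
          print(uuid.UUID(bytes=row_id), payload)

    The random-insertion-order penalty the question anticipates is real for B-tree indexes; time-ordered variants (e.g. a UUIDv1 with its timestamp bytes reordered - the trick MySQL 8's UUID_TO_BIN(..., 1) later packaged) are the usual way to get known-in-advance ids without fully random index insertion.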

    Read the article

  • ASP.net MVC Linq-To-SQL Many-To-Many Field Binding

    - by user336858
    Hi there, The short version of this question is: "Is there a way to gracefully handle database insertion for an object that has a many-to-many field that has been set up in a partial class?" Apologies if it's been asked before.

    Example: Suppose I have a typical MVC setup with the tables:

      Posts {PostID, ...}
      Categories {CategoryID, ...}

    A post can have more than one category, and a category can identify more than one post. Thus suppose further that I need an extra table:

      PostCategories {PostID, CategoryID, ...}

    This handles the many-to-many relationship between posts and categories. As far as I know, there's no way to do this in Linq-to-SQL right now, so I have to shoehorn it in by adding a partial Post class to the project to add that functionality. Something like:

      public partial class Post
      {
          public IEnumerable<Category> Categories { get { ... } set { ... } }
      }

    So I can now create a "Create" view that automatically populates a "Categories" UI item. This is where the trouble starts. So here's my question: How do you get automatic object model binding to work cleanly with an object that has a many-to-many relationship to control? The workaround that makes many-to-many relationships possible relies on the Post object having a PostID in order to be associated with CategoryID(s), which is only issued after the Post object has been submitted for validation and insertion. Bit of a Catch-22 here. Any terminology, links, or tips you can provide would be tremendously helpful!

    Read the article

  • How to optimize a PostgreSQL server for a "write once, read many"-type infrastructure?

    - by mhu
    Greetings, I am working on a piece of software that logs entries (and related tagging) in a PostgreSQL database for storage and retrieval. We never update any data once it has been inserted; we might remove it when the entry gets too old, but this is done at most once a day. Stored entries can be retrieved by users. The insertion of new entries can happen rather fast and regularly, so the database will commonly hold several million elements. The tables used are pretty simple: one table for ids, raw content and insertion date; and one table storing tags and their values associated with an id. User searches mostly concern tag values, so SELECTs usually consist of JOIN queries on ids across the two tables. To sum it up:

    - 2 tables
    - lots of INSERTs
    - no UPDATEs
    - some DELETEs, once a day at most
    - some user-generated SELECTs with JOINs
    - a huge data set

    What would an optimal server configuration (software and hardware - I assume, for example, that RAID10 could help) be for my PostgreSQL server, given these requirements? By optimal, I mean one that allows SELECT queries to take a reasonably small amount of time. I can provide more information about the current setup (like tables, indexes...) if needed.
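
    On the software side, a common starting point for an insert-heavy, read-mostly workload like this is to give PostgreSQL generous memory settings and spread out checkpoint I/O. A hedged postgresql.conf sketch (values are illustrative for a dedicated box with around 16 GB of RAM - starting points to benchmark, not a prescription):

      # postgresql.conf - illustrative starting points only
      shared_buffers = 4GB                 # ~25% of RAM on a dedicated server
      effective_cache_size = 12GB          # planner hint: shared_buffers + OS cache
      work_mem = 64MB                      # per sort/hash node; helps the JOINed SELECTs
      maintenance_work_mem = 1GB           # faster index builds and vacuums after the daily DELETE
      checkpoint_completion_target = 0.9   # spread checkpoint writes over the interval
      wal_buffers = 16MB                   # helps sustained INSERT throughput
      synchronous_commit = off             # only if losing the last few ms of entries on a crash is acceptable

    Beyond the config file, batching INSERTs into transactions (or using COPY) and putting the WAL on its own disks usually matter as much as any single parameter; RAID10 for the data volume, as the question guesses, is the conventional choice over RAID5 at this write rate.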

    Read the article

  • How can I create an array of random numbers in C++

    - by Nick
    Instead of ELEMENTS being 25, is there a way to randomly generate a large array of elements - 10000, 100000, or even 1000000 elements - and then use my insertion sort algorithms on it? I am trying to have a large array of elements, use insertion sort to put them in order, and then also in reverse order. Next I use clock() from time.h to figure out the run time of each algorithm. I am trying to test with a large amount of numbers.

      #include <iostream>
      #include <cstdlib>   // rand(), srand(), system()
      #include <ctime>     // clock(), CLOCKS_PER_SEC, time()
      using namespace std;

      #define ELEMENTS 25

      void insertion_sort(int x[], int length);
      void insertion_sort_reverse(int x[], int length);

      int main()
      {
          clock_t tStart = clock();
          int B[ELEMENTS] = {4,2,5,6,1,3,17,14,67,45,32,66,88,
                             78,69,92,93,21,25,23,71,61,59,60,30};
          // To test larger sizes, raise ELEMENTS and replace the static
          // initializer above with randomly generated values:
          //   srand(time(0));
          //   for (int i = 0; i < ELEMENTS; i++) B[i] = rand();
          // (For very large ELEMENTS, allocate B on the heap with new[] -
          // a multi-megabyte local array can overflow the stack.)
          int x;
          cout << "Not Sorted: " << endl;
          for (x = 0; x < ELEMENTS; x++)
              cout << B[x] << endl;
          insertion_sort(B, ELEMENTS);
          cout << "Sorted Normal: " << endl;
          for (x = 0; x < ELEMENTS; x++)
              cout << B[x] << endl;
          insertion_sort_reverse(B, ELEMENTS);
          cout << "Sorted Reverse: " << endl;
          for (x = 0; x < ELEMENTS; x++)
              cout << B[x] << endl;
          // CLK_TCK is non-standard; CLOCKS_PER_SEC is the portable name, and
          // subtracting tStart measures elapsed time rather than the raw clock().
          double seconds = double(clock() - tStart) / CLOCKS_PER_SEC;
          cout << "This program has been running for " << seconds << " seconds." << endl;
          system("pause");
          return 0;
      }

    Read the article

  • SQLAuthority News – DotNET Challenge of Sorting Generic List

    - by pinaldave
    This is a quick announcement of a .NET challenge posted by Nupur Dave. She has asked a very interesting question. If you are interested in learning .NET - and winning an iPad from Red Gate - I strongly suggest that all of you attempt the quiz. Here is the question: How do you insert an item into a sorted generic list such that the list remains sorted after insertion? You can visit .NET Challenge to answer the question. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: DotNet, Nupur Dave
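
    Whatever the winning .NET answer turns out to be, the underlying idea is a binary search for the insertion point, which keeps each insert at O(log n) comparisons plus the element shift. A sketch of the idea in Python, whose bisect module packages exactly this (in C# the analogous pair is List<T>.BinarySearch followed by Insert):

      import bisect

      sorted_list = [1, 3, 4, 8]
      bisect.insort(sorted_list, 5)  # binary-search the spot, then insert
      print(sorted_list)             # [1, 3, 4, 5, 8]

      # The equivalent two-step form:
      i = bisect.bisect_left(sorted_list, 7)
      sorted_list.insert(i, 7)
      print(sorted_list)             # [1, 3, 4, 5, 7, 8]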

    Read the article

  • Favorite Visual Studio 2010 Extensions, Update

    - by Scott Dorman
    With the release of the Visual Studio Pro Power Tools (and many other new extensions having been released), my list of favorite Visual Studio extensions has changed. All of these extensions are available in the Visual Studio Gallery. Here is the list of extensions that I currently have installed and find useful:

    - Bing Start Page
    - CodeCompare
    - Collapse Selection In Solution Explorer
    - Collapse Solution
    - Color Picker Completion
    - Extension Analyzer
    - Find Results Highlighter
    - Find Results Tweak (Available from CodePlex)
    - Format Document
    - HelpViewerKeywordIndex
    - HighlightMultiWord
    - Image Insertion
    - Indentation Matcher Extension
    - ItalicComments
    - MoveToRegionVSX
    - Numbered Bookmarks
    - PowerCommands for Visual Studio 2010
    - Regular Expressions Margin
    - Search Work Items for TFS 2010
    - Source Outliner
    - Spell Checker
    - Structure Adornment (this also installs the following extensions: BlockTagger, BlockTaggerImpl, SettingsStore, SettingsStoreImpl)
    - StyleCop
    - Team Foundation Server Power Tools
    - TFS Auto Shelve
    - Visual Studio Color Theme Editor
    - Visual Studio Pro Power Tools
    - VS10x Code Map
    - VS10x Code Marker
    - VS10x Collapse All Projects
    - VS10x Editor View Enhancer
    - VS10x Insert Debug Names
    - VS10x Selection Popup
    - VS10x Super Copy Paste
    - VSCommands 2010
    - Word Wrap with Auto-Indent

    Technorati Tags: Visual Studio, Extensions

    Read the article

  • Talend Enterprise Data Integration overperforms on Oracle SPARC T4

    - by Amir Javanshir
    The SPARC T microprocessor, released in 2005 by Sun Microsystems and now continued at Oracle, has a good track record in parallel execution and multi-threaded performance. However, it was less suited to pure single-threaded workloads. The new SPARC T4 processor fills that gap by offering 5x better single-thread performance than previous generations. Following our long-term relationship with Talend, a fast-growing ISV positioned by Gartner in the "Visionaries" quadrant of the "Magic Quadrant for Data Integration Tools", we decided to test some of their integration components with the T4 chip - more precisely on a T4-1 system - in order to verify first-hand whether this new processor stands up to its promises. Several tests were performed, mainly focused on:

    - single-thread performance of the new SPARC T4 processor compared to an older SPARC T2+ processor;
    - overall throughput of the SPARC T4-1 server using multiple threads.

    The tests consisted of reading large amounts of data - tens of gigabytes - then processing and writing them back to a file or an Oracle 11gR2 database table. They are CPU-, memory- and IO-bound tests. Given the main focus of this project - CPU performance - bottlenecks were removed as much as possible on the memory and IO subsystems. When possible, the data to process was put into the ZFS filesystem cache, for instance. Also, two external storage devices were directly attached to the servers under test, each one divided into two ZFS pools for read and write operations.

    Multi-thread: testing throughput on the Oracle T4-1

    The tests were performed with different numbers of simultaneous threads (1, 2, 4, 8, 12, 16, 32, 48 and 64) and using different storage devices: Flash, Fibre Channel storage, two striped internal disks, and one single internal disk. All storage devices used ZFS for filesystem and volume management. Each thread read a dedicated 1GB file containing 12.5M lines with the following structure:

      customerID;FirstName;LastName;StreetAddress;City;State;Zip;Cust_Status;Since_DT;Status_DT
      1;Ronald;Reagan;South Highway;Santa Fe;Montana;98756;A;04-06-2006;09-08-2008
      2;Theodore;Roosevelt;Timberlane Drive;Columbus;Louisiana;75677;A;10-05-2009;27-05-2008
      3;Andrew;Madison;S Rustle St;Santa Fe;Arkansas;75677;A;29-04-2005;09-02-2008
      4;Dwight;Adams;South Roosevelt Drive;Baton Rouge;Vermont;75677;A;15-02-2004;26-01-2007
      [...]

    The following graphs present the results of our tests. Unsurprisingly, up to 16 threads all files fit in the ZFS cache, a.k.a. the L2ARC: once the cache is hot, there is no performance difference depending on the underlying storage. From 16 threads upwards, however, it is clear that IO becomes a bottleneck, so having a good IO subsystem is key. Single-disk performance collapses, whereas the Sun F5100 and ST6180 arrays allow the T4-1 to scale quite seamlessly. From 32 to 64 threads, the performance is almost constant, with just a slow decline. For the database load tests, only the best IO configuration - using external storage devices - was used, hosting the Oracle tablespaces and redo log files. Using the Sun Storage F5100 array allows the T4-1 server to scale up to 48 parallel JVM processes before saturating the CPU. The final result is a staggering 646K lines per second inserted into an Oracle table using 48 parallel threads.

    Single-thread: testing single-thread performance

    Seven different tests were performed on both servers. Given that only one thread - thus one file - was read, no IO bottleneck was involved, all data being served from the ZFS cache.

    - Read File → Filter → Write File: read a file, filter the data, write the filtered data to a new file. The filter is set on the "Status" column: only lines with status "A" are selected. This limits each output file to about 500 MB.
    - Read File → Load Database Table: read a file, insert into a single Oracle table.
    - Average: read a file, compute the average of a numeric column, write the result to a new file.
    - Division & Square Root: read a file, perform a division and square root on a numeric column, write the resulting data to a new file.
    - Oracle DB Dump: dump the content of an Oracle table (12.5M rows) into a CSV file.
    - Transform: read a file, transform it, write the resulting data to a new file. The transformations applied are: set the address column to upper case and add an extra column at the end, which is the concatenation of two columns.
    - Sort: read a file, sort a numeric and an alphanumeric column, write the resulting data to a new file.

    The following table presents the final results of the tests. The throughput unit is thousands of lines per second processed (K lines/second). Improvement is the percentage of improvement between the T5140 and the T4-1.

      Test                     T4-1 (Time s.)   T5140 (Time s.)   Improvement   T4-1 (Throughput)   T5140 (Throughput)
      Read/Filter/Write        125              806               645%          100                 16
      Read/Load Database       195              1111              570%          64                  11
      Average                  96               557               580%          130                 22
      Division & Square Root   161              1054              655%          78                  12
      Oracle DB Dump           164              945               576%          76                  13
      Transform                159              1124              707%          79                  11
      Sort                     251              1336              532%          50                  9

    The improvement in single-thread performance is quite dramatic: depending on the test, the T4 is between 5.4 and 7 times faster than the T2+. It seems clear that the SPARC T4 processor has gone a long way toward filling the gap in single-thread performance, without sacrificing multi-threaded capability, as it still shows very impressive scaling on heavy-duty multi-threaded jobs. Finally, as always at Oracle ISV Engineering, we are happy to help our ISV partners test their own applications on our platforms, so don't hesitate to contact us and let's see what SPARC T4-based systems can do for your application!

    "As described in this benchmark, Talend Enterprise Data Integration has overperformed on the T4. I was generally happy to see that the T4 gave scaling opportunities for many scenarios like complex aggregations. Row-by-row insertion in an Oracle DB is faster, with more than 650,000 rows per second, without using any bulk Oracle capabilities!" - Cedric Carbone, Talend CTO.

    Read the article

  • Data structure: sort and search effectively

    - by Jiten Shah
    I need to have a data structure with, say, 4 keys, and I can sort on any of these keys. What data structure can I opt for? Sorting time should be very small. I thought of a tree, but it will only help searching on one key; for the other keys I'd have to rebuild the tree on that particular key and then search. Is there any data structure that can take care of all 4 keys at the same time? These 4 fields total 12 bytes, and the total size of each record is 40 bytes. I have memory constraints too. The operations are: insertion, deletion, and sorting on different keys.
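
    One common shape for this: store each record once and keep one sorted index per key, so all four orderings are maintained incrementally instead of being re-sorted on demand. A sketch in Python (field layout invented; each index holds (key value, record id) pairs, so four indexes over 12 bytes of keys cost far less memory than four copies of 40-byte records):

      import bisect

      records = {}                           # record id -> record tuple
      indexes = {k: [] for k in range(4)}    # key position -> sorted (value, id) pairs

      def insert(rec_id, rec):
          records[rec_id] = rec
          for k in range(4):
              bisect.insort(indexes[k], (rec[k], rec_id))  # O(log n) search + shift

      def delete(rec_id):
          rec = records.pop(rec_id)
          for k in range(4):
              i = bisect.bisect_left(indexes[k], (rec[k], rec_id))
              indexes[k].pop(i)

      def in_order(k):
          """All records ordered by key k, with no sorting at query time."""
          return [records[rec_id] for _, rec_id in indexes[k]]

      insert(1, (5, "b", 9, 100))
      insert(2, (3, "a", 7, 200))
      print(in_order(0))  # sorted by the first key: record 2, then record 1

    The shifts inside insort are O(n) moves of small entries; if even that is too slow, the same one-store-plus-per-key-index idea works with four balanced trees or skip lists over record ids.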

    Read the article

  • Setting up autocompletion in an Eclipse RCP application - a tutorial by Alain Bernard

    Hello, I'd like to share a new article covering how to set up autocompletion in an Eclipse RCP application. Autocompletion is the well-known mechanism that offers the user a list of choices for the insertion of content into their document. In this article, we look at the two main ways to set it up: either on SWT components such as text fields and combo boxes, or in text editors. http://alain-bernard.developpez.com/...to-completion/ Happy reading, and feel free to use this thread for any remarks or questions!

    Read the article

  • Prevent Nautilus showing a partition mounted in a bash script

    - by bcbc
    In my bash script I mount partitions, check them, copy files to them, and unmount. When the script mounts a partition, Nautilus pops up with a window showing the partition and stealing focus. This is something I want to avoid. Note: I know I can change this behaviour in System Settings > Details > Removable media > "Never prompt or start programs on media insertion", but I don't want to change the behaviour globally - e.g. for a USB stick being plugged in - I just want to prevent it in my bash script. Actually, this auto-display doesn't seem consistent: if I run the exact same command from the terminal, Nautilus doesn't show, and I know there are other mounts in my script that don't show. So what could be causing this? Here's an example of the code:

      mkdir -p $target/home
      mount $target/home $homedev

    Thanks in advance

    Read the article

  • Inside the Raspberry Pi Factory

    - by Jason Fitzpatrick
    Curious where your pint-sized Raspberry Pi came from? You might be surprised to learn it was built, tested, and packaged all in an equally pint-sized factory in South Wales. Nick Heath of Tech Republic takes us on a photo tour of the Raspberry Pi factory with a stop at each stage of production and testing. The photo above shows one of the manual construction steps, the insertion of the large components such as the USB and Ethernet ports. Hit up the link below for the full tour. Raspberry Pi: Inside the Pi Factory [Tech Republic]

    Read the article

  • SanDisk unveils a 128 GB CompactFlash card at $1,500 apiece - do such capacities make sense?

    SanDisk has just launched a new CompactFlash memory card with impressive specifications: 128 GB of storage capacity and a write speed of 100 MB per second. Pressing the shutter to take a picture would almost take longer than transferring the digital image to the card's circuits! Moreover, its dimensions help dissipate the heat produced by this high data-transfer rate, and also leave more room for the insertion of a protective layer that shields it from extreme temperatures. But this Extreme Pro CompactFlash has a price, and not a small one... It costs 15...

    Read the article

  • QuadTree: store only points, or regions?

    - by alekop
    I am developing a quadtree to keep track of moving objects for collision detection. Each object has a bounding shape; let's say they are all circles. (It's a 2D top-down game.) I am unsure whether to store only the position of each object, or the whole bounding shape. If working with points, insertion and subdivision are easy, because objects will never span multiple nodes. On the other hand, a proximity query for an object may miss collisions, because it won't take the objects' dimensions into account - how do you calculate the query region when you only have points? If working with regions, how do you handle an object that spans multiple nodes? Should it be inserted into the nearest parent node that completely contains it, even if this exceeds the node's capacity? Thanks.
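
    For what it's worth, a common answer to the region variant is exactly the one the question suggests: insert each bounding rect into the deepest node that fully contains it, letting straddling objects live at interior nodes (capacity then only counts the objects stored at that node, so straddlers never force it to be exceeded). A minimal sketch, with rects as (x, y, w, h) and illustrative names:

      class QuadTree:
          CAPACITY = 4

          def __init__(self, x, y, w, h):
              self.bounds = (x, y, w, h)
              self.items = []        # rects stored at this node
              self.children = None   # four subtrees after subdivision

          def contains(self, r):
              x, y, w, h = self.bounds
              rx, ry, rw, rh = r
              return x <= rx and y <= ry and rx + rw <= x + w and ry + rh <= y + h

          def insert(self, r):
              if self.children is None and len(self.items) >= self.CAPACITY:
                  self.subdivide()
              if self.children is not None:
                  for c in self.children:
                      if c.contains(r):      # fits entirely inside one quadrant
                          return c.insert(r)
              self.items.append(r)           # straddles a boundary: keep it here

          def subdivide(self):
              x, y, w, h = self.bounds
              hw, hh = w / 2, h / 2
              self.children = [QuadTree(x, y, hw, hh), QuadTree(x + hw, y, hw, hh),
                               QuadTree(x, y + hh, hw, hh), QuadTree(x + hw, y + hh, hw, hh)]
              for r in self.items[:]:        # push down items that now fit a child
                  for c in self.children:
                      if c.contains(r):
                          self.items.remove(r)
                          c.insert(r)
                          break

      tree = QuadTree(0, 0, 100, 100)
      for r in [(1, 1, 2, 2), (60, 5, 3, 3), (5, 70, 2, 2), (80, 80, 4, 4), (48, 10, 6, 6)]:
          tree.insert(r)
      # After subdivision, (48, 10, 6, 6) straddles the vertical midline (x = 50)
      # and stays in the root's items; the other four sink into the quadrants.

    A point-only tree can still work for circles of bounded radius, which answers the other half of the question: inflate every query rect by the maximum object radius. The region version above trades that padding for a few objects parked at interior nodes.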

    Read the article

  • Recommend an open source CMS for single page web site

    - by RedMan
    Hi, I want to create a single-page web site like http://kiskolabs.com/ or http://www.carat.se to display my portfolio. I want to add new products after launching the site without having to edit the entire site. I've looked at OpenCart (too much for a single-page site), Magento (more for e-commerce), and WordPress (couldn't find open source / free templates to start from). Can you suggest a CMS that will support the creation of a single-page site and allow insertion of new products without having to edit the entire page? I would prefer a CMS that also has open source / free templates which I can tweak for my use. I can work with PHP, MySQL, and XML. If it is an easier option I can do PSD-to-site (but I don't know much about this at all).

    Read the article

  • Start script when connecting phone through USB

    - by choel
    Trying to run a script when my phone is plugged in via USB, I made a udev rule that looks like this in /etc/udev/rules.d/85-lazydroid.rule:

      ATTRS{idVendor}=="22b8", ATTRS{idProduct}=="428c", RUN+="/home/joel/.lazydroid"

    And the script .lazydroid looks like this:

      #!/bin/bash
      exec adb forward tcp:8080 tcp:8080 &
      exec chromium-browser 127.0.0.1:8080 --new-window &

    The script itself runs fine. The trick is I can't get the script to run upon insertion of the phone. And it's the right ID according to:

      lsusb | grep Motorola
      Bus 002 Device 042: ID 22b8:428c Motorola PCS

    Any ideas?

    Read the article

  • Are "skip deltas" unique to svn?

    - by echinodermata
    The good folks who created the SVN version control system use a structure they refer to as "skip deltas" to store the revision history of files internally. A revision is stored as a delta against an earlier revision. However, revision N is not necessarily stored as a delta against revision N-1, like this:

      0 <- 1 <- 2 <- 3 <- 4 <- 5 <- 6 <- 7 <- 8 <- 9

    Instead, revision N is stored as a delta against N-f(N), where f(N) is the greatest power of two that divides N:

      0 <- 1
      2 <- 3
      4 <- 5
      6 <- 7
      0 <------ 2
      4 <------ 6
      0 <---------------- 4
      0 <------------------------------------ 8 <- 9

    (Superficially it looks like a skip list, but really it's not that similar - for instance, skip deltas are not interested in supporting insertion in the middle of the list.) You can read more about it here. My question is: Do other systems use skip deltas? Were skip deltas known/used/published before SVN, or did the creators of SVN invent it themselves?
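
    The arithmetic here is compact: for N > 0 the greatest power of two dividing N is the lowest set bit, N & -N in two's complement, and each hop N -> N - f(N) clears one set bit, so reconstructing any revision follows at most popcount(N) <= log2(N) + 1 deltas. A quick illustration in Python:

      def f(n):
          """Greatest power of two dividing n (the lowest set bit), for n > 0."""
          return n & -n

      def delta_chain(n):
          """Revisions visited when reconstructing revision n from revision 0."""
          chain = [n]
          while n > 0:
              n -= f(n)          # clears the lowest set bit
              chain.append(n)
          return chain[::-1]     # oldest first

      print(delta_chain(9))      # [0, 8, 9] - two deltas, not nine
      print(delta_chain(7))      # [0, 4, 6, 7]
      print(max(len(delta_chain(n)) for n in range(1, 1024)))
      # 11 - n = 1023 has ten set bits, so ten deltas from revision 0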

    Read the article

  • Importing CSV file to SQL Server...

    - by sam
    Hi guys, I am trying to import a CSV file to a SQL Server database, with no success. I am still a newbie to SQL Server. Thanks.

      Operation stopped...

      Initializing Data Flow Task (Success)
      Initializing Connections (Success)
      Setting SQL Command (Success)
      Setting Source Connection (Success)
      Setting Destination Connection (Success)

      Validating (Success)
      Messages:
      - Warning 0x80049304: Data Flow Task 1: Warning: Could not open global shared memory to communicate with performance DLL; data flow performance counters are not available. To resolve, run this package as an administrator, or on the system's console. (SQL Server Import and Export Wizard)

      Prepare for Execute (Success)

      Pre-execute (Success)
      Messages:
      - Information 0x402090dc: Data Flow Task 1: The processing of file "D:\test.csv" has started. (SQL Server Import and Export Wizard)

      Executing (Error)
      Messages:
      - Error 0xc002f210: Drop table(s) SQL Task 1: Executing the query "drop table [dbo].[test]" failed with the following error: "Cannot drop the table 'dbo.test', because it does not exist or you do not have permission.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly. (SQL Server Import and Export Wizard)
      - Error 0xc02020a1: Data Flow Task 1: Data conversion failed. The data conversion for column ""Code"" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.". (SQL Server Import and Export Wizard)
      - Error 0xc020902a: Data Flow Task 1: The "output column ""Code"" (38)" failed because truncation occurred, and the truncation row disposition on "output column ""Code"" (38)" specifies failure on truncation. A truncation error occurred on the specified object of the specified component. (SQL Server Import and Export Wizard)
      - Error 0xc0202092: Data Flow Task 1: An error occurred while processing file "D:\test.csv" on data row 21. (SQL Server Import and Export Wizard)
      - Error 0xc0047038: Data Flow Task 1: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on component "Source - test_csv" (1) returned error code 0xC0202092. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure. (SQL Server Import and Export Wizard)

      Copying to [dbo].[test] (Stopped)

      Post-execute (Success)
      Messages:
      - Information 0x402090dd: Data Flow Task 1: The processing of file "D:\test.csv" has ended. (SQL Server Import and Export Wizard)
      - Information 0x402090df: Data Flow Task 1: The final commit for the data insertion in "component "Destination - test" (70)" has started. (SQL Server Import and Export Wizard)
      - Information 0x402090e0: Data Flow Task 1: The final commit for the data insertion in "component "Destination - test" (70)" has ended. (SQL Server Import and Export Wizard)
      - Information 0x4004300b: Data Flow Task 1: "component "Destination - test" (70)" wrote 0 rows. (SQL Server Import and Export Wizard)

    Read the article

  • How to get identities of inserted data records using SQL bulk copy

    - by Olga
    Hello, I have an ADO.NET DataTable with about 100,000 records. In this table there is a column "xyID" which has no values in it, because they are generated upon insertion into my MSSQL database. Now I have the problem that I need these IDs for other processes. I am looking for a way to bulk copy this DataTable into the MSSQL database and, within the same "step", to "fill" my DataTable with the generated IDs. Thank you for your answers!
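
    One workaround, sketched here in Python with pyodbc since SqlBulkCopy itself never reports identities (table, column, and connection details are invented for illustration): bulk-load into a keyless staging table, then move the rows with a single INSERT ... OUTPUT, which streams the generated xyID values back as a result set in one round trip:

      import pyodbc  # any ODBC connection to SQL Server works the same way

      conn = pyodbc.connect("DSN=mssql;DATABASE=mydb")  # hypothetical connection string
      cur = conn.cursor()

      rows = [("alpha",), ("beta",), ("gamma",)]        # stand-in for the DataTable contents
      cur.executemany("INSERT INTO staging_target (payload) VALUES (?)", rows)
      # (With 100k rows, SqlBulkCopy / bcp into the staging table replaces this step.)

      # Move everything across in one statement; OUTPUT returns the new identities.
      cur.execute("""
          INSERT INTO target (payload)
          OUTPUT INSERTED.xyID, INSERTED.payload
          SELECT payload FROM staging_target
      """)
      for xy_id, payload in cur.fetchall():
          print(xy_id, payload)  # pair the generated IDs back up with the source rows
      conn.commit()

    Note that pairing the returned IDs back to the original rows needs some identifying column (here "payload") carried through the OUTPUT clause.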

    Read the article

  • Disable text selection in Qt/WebKit GUI

    - by Adal
    I'm checking whether it would be possible to implement a GUI using HTML through PyQt and WebKit. One of the problems is that, using the mouse, you can select the text making up the interface. Can this behaviour be disabled? Also, the mouse pointer changes to an insertion caret while over the selectable text. I would like to disable this without disabling the hand pointer which appears when over a clickable link.
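
    WebKit honors the -webkit-user-select CSS property, so one approach is to inject a user stylesheet into the view. A sketch with PyQt4's QtWebKit (module layout as in PyQt4; the CSS also swaps the I-beam for the default arrow cursor while keeping the hand on links):

      import base64
      from PyQt4.QtCore import QUrl
      from PyQt4.QtGui import QApplication
      from PyQt4.QtWebKit import QWebView

      # Disable selection and the I-beam everywhere, but keep links clickable-looking.
      css = b"* { -webkit-user-select: none; cursor: default; } a { cursor: pointer; }"
      css_url = QUrl("data:text/css;charset=utf-8;base64," +
                     base64.b64encode(css).decode("ascii"))

      app = QApplication([])
      view = QWebView()
      view.settings().setUserStyleSheetUrl(css_url)  # applied to every page this view loads
      view.load(QUrl("http://example.com/"))
      view.show()
      app.exec_()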

    Read the article
