Search Results

Search found 10463 results on 419 pages for 'task tracking'.


  • Update table using SSIS

    - by thursdaysgeek
    I am trying to update a field in one table with data from another table, based on a common key. In straight SQL it would be something like:

        UPDATE e
        SET e.IDMSObjID = s.IDMSObjID
        FROM EHSIT e
        INNER JOIN EHSIDMS s ON e.SITENUM = s.SITE_CODE

    However, the two tables are not in the same database, so I'm trying to use SSIS to do the update. Also, SITENUM and SITE_CODE are varchar in one table and nvarchar in the other, so I'll have to add a data conversion for the keys to match. How do I do it? I have a Data Flow task with EHSIDMS as the source and EHSIT as the destination, plus a Data Conversion transformation to convert the Unicode column to non-Unicode. But how do I update based on the match? I've tried setting the destination's data access mode to a SQL command, but it doesn't appear to see the source table. And if I just map the field to be updated, how would it limit the update to rows whose keys match? I'm about to export my source table to Excel and try importing from there, although it seems all that would buy me is dropping the data conversion step. Shouldn't there be an update data task or something? Is it one of the Data Flow transformations, and I'm just not figuring out which?
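
    One common way to finish this in SSIS (a hedged sketch, not the only option): keep EHSIDMS as the source, convert the key in the Data Conversion step, then replace the destination with an OLE DB Command transformation that points at the EHSIT database and runs a parameterized update per row:

        UPDATE EHSIT
        SET IDMSObjID = ?
        WHERE SITENUM = ?

    The converted IDMSObjID and key columns map to the two parameters in the transformation's column mappings. Row-by-row updates are slow on large tables; the usual alternative is to land the source rows in a staging table on the destination server and run one set-based UPDATE ... JOIN there.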


  • Execute JavaScript from within a C# assembly

    - by ScottKoon
    I'd like to execute JavaScript code from within a C# assembly and have the results of the JavaScript returned to the calling C# code. It's easier to define the things I'm not trying to do: I'm not trying to call a JavaScript function on a web page from my code-behind. I'm not trying to load a WebBrowser control. I don't want the JavaScript to perform an AJAX call to a server. What I want to do is write unit tests in JavaScript and have the unit tests output JSON; even plain text would be fine. Then I want a generic C# class/executable that can load the file containing the JS, run the JS unit tests, scrape/load the results, and return a pass/fail with details during a post-build task. I think it's possible using the old ActiveX ScriptControl, but it seems like there ought to be a .NET way to do this without using Silverlight, the DLR, or anything else that hasn't shipped yet. Anyone have any ideas?

    Update: from Brad Abrams' blog:

        namespace Microsoft.JScript.Vsa {
            [Obsolete("There is no replacement for this feature. Please see the ICodeCompiler documentation for additional help. http://go.microsoft.com/fwlink/?linkid=14202")]

    Clarification: we have unit tests for our JavaScript functions that are written in JavaScript using the JSUnit framework. Right now during our build process we have to manually load a web page and click a button to ensure that all of the JavaScript unit tests pass. I'd like to be able to execute the tests during the post-build process when our automated C# unit tests are run, report the success/failure alongside our C# unit tests, and use them as an indicator of whether the build is broken.
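
    One stock-framework route (a sketch, with the caveat that JScript .NET is not a browser engine, so DOM-dependent JSUnit tests won't run unmodified): compile the script with JScriptCodeProvider and call into it via reflection. The Tests.Runner names below are hypothetical.

        using System;
        using System.CodeDom.Compiler;
        using System.Reflection;
        using Microsoft.JScript; // add a reference to Microsoft.JScript.dll

        class JsTestRunner
        {
            static void Main()
            {
                // Wrap the script in a package/class so it can be found by name.
                const string source = @"
                    package Tests {
                        class Runner {
                            public function RunAll() : String {
                                return '{ ""passed"": 1, ""failed"": 0 }';
                            }
                        }
                    }";

                var provider = new JScriptCodeProvider();
                var options = new CompilerParameters { GenerateInMemory = true };
                CompilerResults results = provider.CompileAssemblyFromSource(options, source);
                if (results.Errors.HasErrors)
                    throw new InvalidOperationException(results.Errors[0].ToString());

                Type type = results.CompiledAssembly.GetType("Tests.Runner");
                object runner = Activator.CreateInstance(type);
                string output = (string)type.InvokeMember("RunAll",
                    BindingFlags.InvokeMethod, null, runner, new object[0]);
                Console.WriteLine(output); // the post-build task parses this
            }
        }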


  • Image Resizing and Compression

    - by GSTAR
    Hi guys, I'm implementing an image upload facility for my website. The uploading part is complete; what I'm working on now is manipulating the images. For this task I am using PHPThumb (http://phpthumb.gxdlabs.com). As I go along I'm running into potential issues with resizing and compression. Basically I want to achieve the following results:

    1. The ideal image dimensions are 800px width by 600px height. If an uploaded image exceeds either dimension, resize it to meet the requirements; otherwise leave it as it is.
    2. The ideal file size is 200KB. If an uploaded image exceeds this, compress it to meet the requirement; otherwise leave it as it is.

    So in a nutshell: check the dimensions and resize if required, then check the file size and compress if required. Has anybody done anything like this, and could you give me some pointers? Is PHPThumb the right tool to do this with?
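
    A minimal sketch of the two checks in plain GD (sidestepping PHPThumb's own API rather than guessing at it; assumes JPEG input and hypothetical file paths):

        <?php
        $src = 'upload.jpg';   // hypothetical uploaded file
        list($w, $h) = getimagesize($src);
        $img = imagecreatefromjpeg($src);

        // 1) Resize if either dimension exceeds 800x600, keeping the aspect ratio.
        $scale = min(800 / $w, 600 / $h, 1);
        if ($scale < 1) {
            $nw = (int)($w * $scale);
            $nh = (int)($h * $scale);
            $resized = imagecreatetruecolor($nw, $nh);
            imagecopyresampled($resized, $img, 0, 0, 0, 0, $nw, $nh, $w, $h);
            $img = $resized;
        }

        // 2) Re-encode at decreasing quality until the file fits in 200KB.
        $quality = 90;
        do {
            imagejpeg($img, 'out.jpg', $quality);
            clearstatcache();   // filesize() results are cached per path
            $quality -= 10;
        } while (filesize('out.jpg') > 200 * 1024 && $quality >= 10);

    PHPThumb wraps the same GD calls underneath, so if it exposes resize and JPEG-quality options the loop translates directly.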


  • jQuery tooltip absolute position above a link which is inside paragraph text?

    - by BerggreenDK
    I am trying to retrieve the position of an HTML element inside a paragraph, e.g. a span or an anchor. I would also like the width of the element, so that when I hover over it I can activate/build/show a sort of toolbar/tooltip above the element dynamically. I need it to be added dynamically to existing content, so some kind of "search-replace" jQuery thingy that scans the elements within e.g. a DIV and then does this for everything that matches. The main question is: how do I retrieve the "current absolute" position of the element I am hovering with the mouse? I don't want the toolbar/tooltip to follow the mouse; instead it must "snap" to the element it is hovering, so I was thinking: place the BOX -20px from the current element and match its width. Possible? Is there a jQuery plugin for this already? Sample code:

        <div class="helper">
            <h1>headline</h1>
            <p>Here is some sample text. But <a href="somewhere.htm" class="help help45">this is with an explanation you can hover</a>. <a href="somewhereelse.htm">And this isn't.</a></p>
            <ul>
                <li>We could also do it <a href="somewhere.htm" class="help help32">inside a bullet list</a></li>
            </ul>
        </div>

    The .help class triggers the "help", and .help45 or .help32 identifies the help section to be shown (that's another task for later; I'm hoping to retrieve the "45" from "help45" so a server lookup for id=45 can load the text to be shown). NB: if the page scrolls etc., the help tip still needs to follow the item on the page until closed/hidden.
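
    A sketch of the positioning part (assumes jQuery and a single hypothetical #tooltip div, roughly 20px tall):

        // Bind to every .help element inside .helper blocks.
        $('div.helper .help').hover(function () {
            var $el = $(this);
            var pos = $el.offset();   // document coordinates of the element
            $('#tooltip')
                .css({
                    position: 'absolute',          // positioned in the page, so it stays snapped
                    left: pos.left + 'px',
                    top: (pos.top - 20) + 'px',    // 20px above the element
                    width: $el.outerWidth() + 'px'
                })
                .show();
        }, function () {
            $('#tooltip').hide();
        });

    Because the box is positioned in document coordinates rather than against the mouse, it follows the element when the page scrolls. The numeric id can be pulled out with something like ($(this).attr('class').match(/help(\d+)/) || [])[1].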


  • Data structure to build and lookup set of integer ranges

    - by actual
    I have a set of uint32 integers; there may be millions of items in the set. 50-70% of them are consecutive, but in the input stream they appear in unpredictable order. I need to:

    1. Compress this set into ranges to achieve a space-efficient representation. I have already implemented this with a trivial algorithm; since the ranges are computed only once, speed is not important here. After this transformation the number of resulting ranges is typically within 5,000-10,000, many of them single-item, of course.
    2. Test membership of some integer; information about the specific containing range is not required. This one must be very fast: O(1). I was thinking about minimal perfect hash functions, but they do not play well with ranges. Bitsets are very space-inefficient. Other structures, like binary trees, have O(log n) complexity; worse, their implementations make many conditional jumps that the processor cannot predict well, giving poor performance.

    Is there any data structure or algorithm specialized in integer ranges that solves this task?
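
    For reference, the baseline everything else gets measured against: a sorted array of disjoint ranges plus a binary search (a sketch in C). It is O(log n), not the O(1) asked for, but with 5,000-10,000 ranges the array fits comfortably in cache, and the branch inside the loop is the kind compilers can turn into a conditional move:

        #include <stdint.h>
        #include <stddef.h>

        typedef struct { uint32_t lo, hi; } range_t;  /* inclusive, disjoint, sorted by lo */

        /* Returns 1 if x is inside one of the n ranges. */
        int contains(const range_t *r, size_t n, uint32_t x)
        {
            size_t lo = 0, hi = n;                /* half-open search window */
            while (lo < hi) {
                size_t mid = lo + (hi - lo) / 2;
                if (r[mid].lo <= x)
                    lo = mid + 1;                 /* candidate range is at mid or later */
                else
                    hi = mid;
            }
            /* lo is now the first range starting above x; check its predecessor. */
            return lo > 0 && x <= r[lo - 1].hi;
        }

    Trading memory for the O(1) requirement, one option is a two-level table: split the 32-bit key into a 16-bit page index and a 16-bit offset, and store per page either "all in", "all out", or a pointer to a 64-Kbit bitmap; only pages that mix members and non-members need real bitmaps.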


  • .NET Application with SQL Server CE Database

    - by blu
    I just started using SQL Server CE 3.5 in my WinForms application (C# in VS 2008 SP1). I've noticed a couple of interesting things I'd like some input on.

    1. Copying of the sdf file to bin. My sdf file is located inside an Infrastructure project that houses my repository implementations. When the application is first debugged, the sdf was copied to bin\debug, and that copy is where all future reads/writes operate. At some point when this is deployed, the file will go into a data folder via ClickOnce, but during development where should I be putting this sdf? Is having it in the bin typical, or are there any other recommendations?

    2. Updating the sdf. It appears that writing to the sdf file does not immediately update the database. I am using Linq-to-SQL and am calling SubmitChanges, but on read the values are not returned. However, if I close the application and re-open it, the added value is there. Is there an additional flush step I need to take? What is causing this: file locking, buffering, something else?

    Update 3. Unit tests. I have an MSTest project, and the sdf file is not being copied to the correct output directory. I have the settings:

        Build Action: Content
        Copy to Output Directory: Copy Always

    The message is: System.Data.SqlServerCe.SqlCeException: The database file cannot be found. Check the path to the database.

    I appreciate any guidance on these questions, thanks. If there is a tutorial beyond what is on MSDN that you know about, that would be great too. Working with CE is proving to be a difficult task and I welcome any help I can find.
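
    On point 3, a hedged aside: MSTest runs tests from its own deployment directory under TestResults, not the project's bin folder, so "Copy Always" isn't enough by itself. Marking the file as a deployment item is the usual fix; the file name here is hypothetical:

        [TestClass]
        [DeploymentItem("Infrastructure.sdf")]   // copied into the test run's directory
        public class RepositoryTests
        {
            // ...
        }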


  • Using custom Qt subclasses in Python

    - by kwatford
    First off: I'm new to both Qt and SWIG. I'm currently reading the documentation for both, but that is a time-consuming task, so I'm looking for some spoilers: it's good to know up-front whether something just won't work. I'm attempting to formulate a modular architecture for some in-house software. The core components are in C++ and exposed via SWIG to Python for experimentation and rapid prototyping of new components. Qt seems to have some classes I could use to avoid reinventing the wheel too much here, but I'm concerned about how some of the bits will fit together. Specifically, if I create some C++ classes, I'll need to expose them via SWIG. Some of these classes likely subclass Qt classes or otherwise have Qt types exposed in their public interfaces. This seems like it could raise some complications. There are already two interfaces to Qt in Python, PyQt and PySide; we will probably use PySide for licensing reasons. About how painful should I expect it to be to get a SWIG-wrapped custom subclass of a Qt class to play nicely with either of these? What complications should I know about up front?


  • Rails creating users, roles, and projects

    - by Bobby
    I am still fairly new to Rails and ActiveRecord, so please excuse any oversights. I have three models that I'm trying to tie together (and a fourth to actually do the tying) to create a permission scheme using user-defined roles.

        class User < ActiveRecord::Base
          has_many :user_projects
          has_many :projects, :through => :user_projects
          has_many :project_roles, :through => :user_projects
        end

        class Project < ActiveRecord::Base
          has_many :user_projects
          has_many :users, :through => :user_projects
          has_many :project_roles
        end

        class ProjectRole < ActiveRecord::Base
          belongs_to :projects
          belongs_to :user_projects
        end

        class UserProject < ActiveRecord::Base
          belongs_to :user
          belongs_to :project
          has_one :project_role
          attr_accessible :project_role_id
        end

    The project_roles model contains a user-defined role name and booleans that define whether the given role has permission for a specific task. I'm looking for an elegant solution to reference that from anywhere within the project piece of my application. I already have a role system implemented for the entire application; what I'm really after is that users will be able to manage their own roles on a per-project basis. Every project gets set up with an immutable default admin role, and the project creator gets added upon project creation. Since the users are creating the roles, I would like to be able to pull a list of role names from the project and user models through associations (for display purposes), but for testing access I would like to simply reference roles by what they have access to, without having to reference them by name. Perhaps something like this?

        def has_perm?(permission, user)
          # The permission that I'm testing
          user.current_project.project_roles.each do |role|
            if role.send(permission)   # Not sure that's right...
              do_stuff
            end
          end
        end

    I think I'm in over my head on this one, because I keep running in circles on how best to implement this.
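
    That sketch is close to workable (hedged; it assumes the permission booleans are columns on project_roles, so send hits an attribute reader, and that current_project exists). Ruby's any? tightens it, and respond_to? guards against a mistyped permission name:

        def has_perm?(permission, user)
          user.current_project.project_roles.any? do |role|
            role.respond_to?(permission) && role.send(permission)
          end
        end

    Separately, ProjectRole's belongs_to targets should be singular (belongs_to :project), and a belongs_to pointing at the join model is probably not what's wanted there.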


  • What is the correct approach to using GWT with persistent objects?

    - by dankilman
    Hi, I am currently working on a simple web application for Google App Engine using GWT; it should be noted that this is my first attempt at such a task. I have run into the following problem/dilemma. I have a simple class (getters/setters and nothing more; for the sake of clarity I will refer to this class as DataHolder) and I want to make it persistent. To do so I have used JDO, which required me to add some annotations, and more specifically to add a Key field to be used as the primary key. The problem is that using the Key class requires me to import com.google.appengine.api.datastore.Key, which is fine on the server side, but then I can't use DataHolder on the client side, because GWT doesn't allow it (as far as I know). So I have created a sister class, ClientDataHolder, which is almost identical, though it doesn't have the JDO annotations or the Key field. This actually works, but it feels like I'm doing something wrong: this approach requires maintaining two separate classes for each entity I wish to have. So my question is: is there a better way of doing this? Thank you.
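
    The two-class setup is essentially the DTO pattern, and the usual way to keep it from hurting is to make the copy rule explicit in a single server-side converter (a sketch with hypothetical accessors; the Key travels as its websafe string form):

        import com.google.appengine.api.datastore.KeyFactory;
        import java.io.Serializable;

        // Client-safe twin of DataHolder: no JDO annotations, no datastore Key.
        public class ClientDataHolder implements Serializable {
            public String encodedKey;   // Key encoded via KeyFactory.keyToString
            public String name;

            public ClientDataHolder() {}  // zero-arg constructor required by GWT-RPC
        }

        // Server-side only: the one place the two classes are kept in sync.
        class DataHolderConverter {
            static ClientDataHolder toClient(DataHolder entity) {
                ClientDataHolder dto = new ClientDataHolder();
                dto.encodedKey = KeyFactory.keyToString(entity.getKey());
                dto.name = entity.getName();
                return dto;
            }
        }

    The other common route is to avoid the Key field entirely and declare the JDO primary key as an encoded String (@Persistent plus the gae.encoded-pk extension), which GWT can serialize, letting one class serve both sides.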


  • Attempt to use a while loop for the 'next' arg of a for loop generates a 'wrong # args' error

    - by JerryK
    I am attempting to teach myself to program using Tcl. The task I've set myself to motivate my learning of Tcl is to solve the 8-queens problem. My approach to creating a program is to successively 'prototype' a solution. I asked an earlier question about correctly laying out the nested for loops and received a helpful answer. To my dismay, I find that the next development of my code creates the same interpreter error: "wrong # args". I have been careful to have an open brace at the end of the line preceding the while loop command. I've also tried to put the arguments of the while loop in braces; this generates a different error. I have sincerely tried to understand the Tcl syntax man page suggested by the answerer of my earlier question, not too successfully. Here is the code:

        set allowd 1
        set notallowd 0

        for {set r1p 1} {$r1p <= 8} {incr r1p} {
            puts "1st row q placed at $r1p"
            # re-initialize r2 'free for q placemnt' array after every change of r1 q pos:
            for {set i 1} {$i <= 8} {incr i} {
                set r2($i) $allowd
            }
            for { set r2($r1p) $notallowd
                  set r2([expr $r1p-1]) $notallowd
                  set r2([expr $r1p+1]) $notallowd
                  set r2p 1 } {$r2p <= 8} {
                # 'next' arg of r2 forloop will be a whileloop: while r2($r2p) == $notallowd
                incr r2p
            } {
                puts "2nd row q placed at $r2p"
                # end of 'commnd' arg of r2 forloop
            }
        }

    Where am I going wrong?

    EDIT (to provide a clear reply to @slebetman): as stated in my text, I did brace the arguments of the while loop (indeed, that was how I first wrote the code). Below is exactly the layout of the r2 for loop I tried:

        for { set r2($r1p) $notallowd
              set r2([expr $r1p-1]) $notallowd
              set r2([expr $r1p+1]) $notallowd
              set r2p 1 } {$r2p <= 8} {
            # 'next' arg of r2 forloop will be a whileloop:
            while { r2($r2p) == $notallowd } {
                incr r2p
            }
        } {
            puts "2nd row q placed at $r2p"
        }

    but this generates the fatal interpreter error: unknown math function 'r2' while compiling while { r2($r2p ....
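
    For what it's worth, that last error message points at the likely fix (hedged, but this is standard Tcl behaviour): the brace-quoted condition is handed to expr, and in expr syntax a bare name followed by parentheses is parsed as a math function call, hence "unknown math function 'r2'". Array elements read inside an expression need the dollar sign:

        # expr treats r2(...) as a math function; $r2($r2p) is an array read
        while { $r2($r2p) == $notallowd } {
            incr r2p
        }

    With that change the condition compiles; whether the loop then does what the algorithm needs is a separate question (as written it can step past index 8 into a nonexistent array element).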


  • How can I dynamically set the default namespace declaration of an XSLT transformation's output XML?

    - by Paralife
    I can do it, but not for the default namespace, using <xsl:namespace>. If I try it for the default namespace:

        <xsl:namespace name="" select="myUri"/>

    it never works: it demands that I explicitly define the namespace of the element before I can use the above null-prefix declaration. The reason I want this is that I have a task to transform an input XML file to another output XML. The output XML has many elements, and I don't want to have to explicitly set the namespace on every element. That's why I want to set the default once and never bother again. But the default must be computed from data in the source XML. It does not change during the whole transformation, but it is dependent on the input XML data. Any solution?

    EDIT 1: To sum up: I want to create a namespace dynamically and set it as the default namespace of the output XML document. The URI of the namespace is derived from data in the input XML. If I use <xsl:namespace> on my root output element, I cannot create a default namespace for it, only a prefixed one. And even with the prefixed one, it does not propagate to children.
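
    One workaround that sidesteps xsl:namespace entirely (a sketch; the variable's select path is hypothetical): compute the URI once, then build every element with xsl:element, whose namespace attribute accepts an attribute value template. A single generic template applies it to the whole tree, so no per-element work is needed:

        <xsl:variable name="uri" select="/*/@target-ns"/>  <!-- computed from the input -->

        <!-- Rebuild each element in the computed namespace and recurse. -->
        <xsl:template match="*">
          <xsl:element name="{local-name()}" namespace="{$uri}">
            <xsl:copy-of select="@*"/>
            <xsl:apply-templates/>
          </xsl:element>
        </xsl:template>

    Since every element ends up in the same namespace with no prefix, serializers typically emit it as a single default xmlns declaration on the root.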


  • Crash when using STL vector::at instead of operator[]

    - by Jamie Cook
    I have a method as follows (from a class that implements the TBB task interface, though it is not currently multithreaded). My problem is that two ways of accessing a vector cause quite different behaviour: one works and the other causes the entire program to bomb out quite spectacularly (this is a plugin, and normally a crash would be caught by the host, but this one takes out the host program as well; as I said, quite spectacular).

        void PtBranchAndBoundIterationOriginRunner::runOrigin(int origin, int time) const  // NOTE: const method
        {
            BOOST_FOREACH(int accessMode, m_props->GetAccessModes())
            {
                // get a const reference to the appropriate vector from a member variable
                // map<int, vector<double>> m_rowTotalsByAccessMode;
                const vector<double>& rowTotalsForAccessMode = m_rowTotalsByAccessMode.find(accessMode)->second;

                if (origin != 129) continue;  // additional debug constraint: I know the vector's only non-zero element is at index 129
                m_job->Write("size: " + ToString(rowTotalsForAccessMode.size()));

                try
                {
                    // check for early return... i.e. nothing to do for this origin
                    if (!rowTotalsForAccessMode[origin]) continue;     // <- this works
                    if (!rowTotalsForAccessMode.at(origin)) continue;  // <- this crashes
                }
                catch (...)
                {
                    m_job->Write("Caught an exception");  // but it's not an exception
                }

                // do some other stuff
            }
        }

    I hate not asking a well-defined question, but at the moment my best phrasing is: "WTF?" I'm compiling this with Intel C++ 11.0.074 [IA-32] using Microsoft (R) Visual Studio Version 9.0.21022.8, and my implementation of vector has:

        const_reference operator[](size_type _Pos) const
        {   // subscript nonmutable sequence
        #if _HAS_ITERATOR_DEBUGGING
            if (size() <= _Pos)
            {
                _DEBUG_ERROR("vector subscript out of range");
                _SCL_SECURE_OUT_OF_RANGE;
            }
        #endif /* _HAS_ITERATOR_DEBUGGING */
            _SCL_SECURE_VALIDATE_RANGE(_Pos < size());
            return (*(_Myfirst + _Pos));
        }

    (Iterator debugging is off, I'm pretty sure.) And:

        const_reference at(size_type _Pos) const
        {   // subscript nonmutable sequence with checking
            if (size() <= _Pos)
                _Xran();
            return (*(begin() + _Pos));
        }

    So the only difference I can see is that at calls begin() instead of simply using _Myfirst. How could that possibly cause such a huge difference in behaviour?
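
    One classic culprit in the plugin scenario (a hedged guess, not a diagnosis): with checked iterators enabled, begin() constructs an iterator that touches extra bookkeeping members inside the vector, so a vector created by a module built with one _SECURE_SCL/_HAS_ITERATOR_DEBUGGING setting and read by a module built with another has mismatched layouts, and at() faults where operator[] (which only dereferences _Myfirst) gets lucky. A cheap way to see what each module was actually built with:

        // Drop this into a translation unit in each module and compare at build time.
        #if _SECURE_SCL
        #pragma message("checked iterators (_SECURE_SCL) ON")
        #else
        #pragma message("checked iterators (_SECURE_SCL) OFF")
        #endif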


  • How to copy files without slowing down my app?

    - by Kevin Gebhardt
    I have a bunch of little files in my assets which need to be copied to the SD card on the first start of my app. The copy code I got from here, placed in an IntentService, works like a charm. However, when I start to copy many little files, the whole app gets incredibly slow (I'm not really sure why, by the way), which is a really bad experience for the user on first start. As other apps ran normally during that time, I tried to start a child process for the service; that didn't work, as I can't access my assets from another process, as far as I understood. Has anybody out there an idea how to a) copy the files without blocking my app, b) get through to my assets from a private process (process=":myOtherProcess" in the manifest), or c) solve the problem in a completely different way?

    Edit: To make this clearer: the copying already takes place in a separate thread (started automatically by IntentService). The problem is not separating the task of copying, but that the copying in a dedicated thread somehow affects the rest of the app (e.g. blocking too many app-specific resources?) but not other apps (so it's not hogging the whole CPU or something).

    Edit 2: Problem solved. It turns out there wasn't really a problem; see my answer below.
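
    For reference, a minimal per-file copy as it might look inside the IntentService's worker thread (a sketch; a larger buffer and a lowered thread priority are the usual knobs when a copy loop starves the UI):

        // import android.content.Context; import java.io.*;
        private void copyAsset(Context ctx, String name, File destDir) throws IOException {
            Thread.currentThread().setPriority(Thread.MIN_PRIORITY);  // yield to the UI thread
            InputStream in = ctx.getAssets().open(name);
            OutputStream out = new FileOutputStream(new File(destDir, name));
            try {
                byte[] buf = new byte[64 * 1024];  // big buffer: fewer read/write calls per file
                int n;
                while ((n = in.read(buf)) > 0) {
                    out.write(buf, 0, n);
                }
            } finally {
                in.close();
                out.close();
            }
        }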


  • The application has stopped unexpectedly: How to Debug?

    - by Android Eve
    Please note: unlike many other questions with the subject title "application has stopped unexpectedly", I am not asking for troubleshooting of a particular problem. Rather, I am asking for an outline of the best strategy for an Android/Eclipse/Java rookie to tackle the formidable task of digesting huge amounts of information in order to develop (and debug!) a simple Android application. In my case, I took the sample skeleton app from the SDK, modified it slightly, and what did I get the moment I tried to run it?

        The application (process.com.example.android.skeletonapp) has stopped unexpectedly. Please try again.

    OK, so I know that I have to look at LogCat. It's full of timestamped lines staring at me... What do I do now? What do I need to look for? Is there a way to single-step the program to find the statement that makes the app crash? (I thought Java programs never crash, but apparently I was mistaken.) How do I place a breakpoint? Can you recommend an Android debugging tutorial online, other than this one?
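
    A starting point (a sketch of the usual workflow): the crash is an uncaught Java exception, and its stack trace lands in LogCat under the AndroidRuntime tag. From a shell, filtering to just those errors makes it easy to find:

        adb logcat AndroidRuntime:E *:S

    The first "Caused by:" line mentioning one of your own classes is normally the statement to blame. Breakpoints work as in any Eclipse Java project: double-click the left margin on that line, then launch with Debug As > Android Application.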


  • Time taken for memcpy decreases after certain point

    - by tss
    I have code which increases the size of a block of memory (identified by a pointer) exponentially. Instead of realloc, I use malloc followed by memcpy. Something like this:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <time.h>

        int main(void)
        {
            int size = 5, newsize;
            int *c = malloc(size * sizeof(int));
            int *temp;

            while (1) {
                newsize = 2 * size;

                clock_t begin = clock();               /* begin time */
                temp = malloc(newsize * sizeof(int));
                memcpy(temp, c, size * sizeof(int));
                clock_t end = clock();                 /* end time */

                /* print time in milliseconds */
                printf("%d -> %d : %ld ms\n", size, newsize,
                       (long)((end - begin) * 1000 / CLOCKS_PER_SEC));

                c = temp;
                size = newsize;
            }
        }

    Thus the number of bytes getting copied increases exponentially, and the time required for the task also increases almost linearly with the size. However, after a certain point the time taken abruptly drops to a very small value and then remains constant. I recorded times for similar code copying data of my own type:

        5       -> 10      : 2 ms
        10      -> 20      : 2 ms
        ...
        2560    -> 5120    : 5 ms
        ...
        20480   -> 40960   : 30 ms
        40960   -> 91920   : 58 ms
        367680  -> 735360  : 2 ms
        735360  -> 1470720 : 2 ms
        1470720 -> 2941440 : 2 ms

    What is the reason for this drop in time? Does a more optimal memcpy method get called when the size is large?


  • Pascal's Triangle by recursion

    - by Olpers
    Note: my class teacher gave me this question as an assignment. I am not asking you to do it for me, but please tell me how to do it with recursion. Binomial coefficients can be calculated using Pascal's triangle:

            1          n = 0
           1 1
          1 2 1
         1 3 3 1
        1 4 6 4 1      n = 4

    Each new level of the triangle has 1's on the ends; the interior numbers are the sums of the two numbers above them.

    Task: write a program that includes a recursive function to produce a list of binomial coefficients for the power n using the Pascal's triangle technique. For example: input = 2, output = 1 2 1; input = 4, output = 1 4 6 4 1.

    I have done this much so far, but tell me how to do it with recursion:

        #include <stdio.h>

        int main()
        {
            int length, i, j, k;

            // Accepting length from user
            printf("Enter the length of pascal's triangle : ");
            scanf("%d", &length);

            // Printing the pascal's triangle
            for (i = 1; i <= length; i++)
            {
                for (j = 1; j <= length - i; j++)
                    printf(" ");
                for (k = 1; k < i; k++)
                    printf("%d", k);
                for (k = i; k >= 1; k--)
                    printf("%d", k);
                printf("\n");
            }

            return 0;
        }
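
    A recursive sketch of the coefficient itself, straight from the triangle's rule that each interior entry is the sum of the two above it, C(n,k) = C(n-1,k-1) + C(n-1,k), with 1's on the ends:

        #include <stdio.h>

        /* Binomial coefficient via Pascal's rule (exponential without memoization,
           but it is exactly the recursion the assignment describes). */
        int binom(int n, int k)
        {
            if (k == 0 || k == n)
                return 1;   /* the 1's on the ends of each row */
            return binom(n - 1, k - 1) + binom(n - 1, k);
        }

        int main(void)
        {
            int n, k;
            scanf("%d", &n);
            for (k = 0; k <= n; k++)
                printf("%d ", binom(n, k));   /* n = 4 prints: 1 4 6 4 1 */
            printf("\n");
            return 0;
        }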


  • Is there a design pattern to cut down on code duplication when subclassing Activities in Android?

    - by Daniel Lew
    I've got a common task that I do with some Activities: downloading data and then displaying it. I've got the downloading part down pat; it is, of course, a little tricky due to the possibility of the user changing the orientation or cancelling the Activity before the download is complete, but the code is there. There is enough code handling these cases that I don't want to copy/paste it into each Activity I have, so I created an abstract subclass of Activity which handles a single background download and then calls a method that fills the page with data. This all works. The issue is that, due to single inheritance, I am forced to recreate the exact same class for any other type of Activity; for example, I use Activity, ListActivity and MapActivity. Using the same technique for all three requires three duplicate classes, each extending a different Activity. Is there a design pattern that can cut down on the code duplication? As it stands, I have saved much duplication already, but it pains me to see the exact same code in three classes just so each can subclass a different type of Activity.
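
    One standard escape from single inheritance here is composition: move the download-and-retain logic into a helper object that each Activity flavor owns, talking back through a small callback interface. A sketch with hypothetical names:

        // Implemented by whichever Activity hosts the download.
        interface DownloadListener {
            void onDataReady(byte[] data);
        }

        // All the orientation/cancellation bookkeeping lives here, once.
        class DownloadHelper {
            private final DownloadListener listener;

            DownloadHelper(DownloadListener listener) {
                this.listener = listener;
            }

            void start(final String url) {
                new Thread(new Runnable() {
                    public void run() {
                        listener.onDataReady(fetch(url));  // existing download code goes here
                    }
                }).start();
            }

            private byte[] fetch(String url) { /* ... */ return new byte[0]; }
        }

        // Each Activity type composes the helper instead of sharing a base class:
        public class ItemsActivity extends android.app.ListActivity implements DownloadListener {
            private final DownloadHelper helper = new DownloadHelper(this);

            public void onDataReady(byte[] data) {
                runOnUiThread(new Runnable() {
                    public void run() { /* fill the list */ }
                });
            }
        }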


  • Was Visual Studio 2008 or 2010 written to use multiple cores?

    - by Erx_VB.NExT.Coder
    Basically I want to know whether the Visual Studio 2010 IDE and/or compiler were written to make use of a multi-core environment (I understand we can target multi-core environments in '08 and '10, but that is not my question). I am trying to decide between a higher-clocked dual core and a lower-clocked quad core, as I want to figure out which processor will give me the absolute best possible experience with Visual Studio 2010 (the IDE and the background compiler). If the most important sections (the background compiler and other IDE tasks) run on one core, then that core will max out sooner on a quad core, especially if the background compiler is the heaviest task; I would imagine it would be difficult to separate that work into more than one process. So even if the IDE uses multiple cores, you might still be better off with a higher clock if the majority of the processing is bound to one core. I am a VB programmer; they've made great performance improvements in Beta 2, congrats, but I would love to be able to use VS seamlessly. Anyone have any ideas?


  • lapply slower than for-loop when used for a BiomaRt query. Is that expected?

    - by ptocquin
    I would like to query a database using the biomaRt package. I have loci and want to retrieve some related information, let's say descriptions. I first tried lapply but was surprised by the time needed to perform the task. I then tried a more basic for loop and got a faster result. Is that expected, or is something wrong with my code or with my understanding of apply? I have read other posts dealing with *apply vs. for-loop performance (here, for example), and I was aware that improved performance should not be expected, but I don't understand why performance here is actually lower. Here is a reproducible example.

    1) Loading the library and selecting the database:

        library("biomaRt")
        athaliana <- useMart("plants_mart_14")
        athaliana <- useDataset("athaliana_eg_gene", mart = athaliana)

    2) Querying the database:

        loci <- c("at1g01300", "at1g01800", "at1g01900", "at1g02335",
                  "at1g02790", "at1g03220", "at1g03230", "at1g04040",
                  "at1g04110", "at1g05240")

    I create a function for use in lapply:

        foo <- function(loci) {
          getBM("description", "tair_locus", loci, athaliana)
        }

    When I use this function on the first element:

        > system.time(foo(cwp_loci[1]))
        utilisateur     système      écoulé
              0.020       0.004       1.599

    When I use lapply to retrieve the data for all values:

        > system.time(lapply(loci, foo))
        utilisateur     système      écoulé
              0.220       0.000      16.376

    I then created a new function, adding a for loop:

        foo2 <- function(loci) {
          for (i in loci) {
            getBM("description", "tair_locus", loci[i], athaliana)
          }
        }

    Here is the result:

        > system.time(foo2(loci))
        utilisateur     système      écoulé
              0.204       0.004      10.919

    Of course, this will be applied to a big list of loci, so the best performing option is needed. I thank you for your assistance.

    EDIT: Following the recommendation of @MartinMorgan, simply passing the vector loci to getBM greatly improves query efficiency. Simpler is better.

        > system.time(lapply(loci, foo))
        utilisateur     système      écoulé
              0.236       0.024     110.512
        > system.time(foo2(loci))
        utilisateur     système      écoulé
              0.208       0.040     116.099
        > system.time(foo(loci))
        utilisateur     système      écoulé
              0.028       0.000       6.193


  • C# Hook Forms / Windows / Dialogs etc. (via HWND?) to Capture Video Buffer (D3D Device?)

    - by Drax
    I am looking to create a very simple C# application which runs full-screen in Direct3D and is able to grab the desktop 'scene', mapping each window from the desktop to a textured polygon in my D3D scene. I'm hoping to create a simplistic "3D desktop" type of application as an experiment, and I'm wondering if there is a specific method for doing something like the following:

    1. Get a list of all the windows open on the desktop (a list of HWNDs?).
    2. Grab the X, Y position of each window, as well as its width and height.
    3. Grab the rendered image of each window (magic happens here).
    4. Create a new texture/surface in D3D using the width and height of each window, and apply the image we grabbed as a texture.

    Is there an efficient 'best practice' for acquiring the actual images being rendered to the desktop? Is there also a 'best practice' for "extending the desktop" to a virtual second, third, etc. "desktop", being able to swap between them, and creating a unique instance of the taskbar for each virtual desktop? Thanks a million for any suggestions!
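
    Steps 1-3 map onto a handful of user32 calls (a sketch; PrintWindow asks each window to paint into an offscreen bitmap, which can then be uploaded as a D3D texture, though some windows, notably accelerated ones, may come back black):

        using System;
        using System.Drawing;
        using System.Runtime.InteropServices;

        static class WindowCapture
        {
            delegate bool EnumWindowsProc(IntPtr hWnd, IntPtr lParam);

            [DllImport("user32.dll")]
            static extern bool EnumWindows(EnumWindowsProc callback, IntPtr lParam);

            [DllImport("user32.dll")]
            static extern bool IsWindowVisible(IntPtr hWnd);

            [DllImport("user32.dll")]
            static extern bool GetWindowRect(IntPtr hWnd, out RECT rect);

            [DllImport("user32.dll")]
            static extern bool PrintWindow(IntPtr hWnd, IntPtr hdcBlt, uint flags);

            [StructLayout(LayoutKind.Sequential)]
            struct RECT { public int Left, Top, Right, Bottom; }

            // Capture every visible top-level window into a Bitmap (steps 1-3).
            public static void CaptureAll()
            {
                EnumWindows((hWnd, lParam) =>
                {
                    if (!IsWindowVisible(hWnd)) return true;   // keep enumerating

                    RECT r;
                    GetWindowRect(hWnd, out r);
                    int w = r.Right - r.Left, h = r.Bottom - r.Top;
                    if (w <= 0 || h <= 0) return true;

                    using (var bmp = new Bitmap(w, h))
                    using (var g = Graphics.FromImage(bmp))
                    {
                        IntPtr hdc = g.GetHdc();
                        PrintWindow(hWnd, hdc, 0);   // the window draws itself into bmp
                        g.ReleaseHdc(hdc);
                        // bmp can now be uploaded as a texture (step 4), e.g. via a stream
                    }
                    return true;
                }, IntPtr.Zero);
            }
        }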


  • VB.NET Update Access Database with DataTable

    - by sinDizzy
    I've been perusing some help forums and some help books but can't seem to get my head wrapped around this. My task is to read data from two text files and then load that data into an existing MS Access 2007 database. So here is what I'm trying to do:

    1. Read data from the first text file and, for every line of data, add a row to a DataTable, using CarID as my unique field.
    2. Read data from the second text file and look for an existing CarID in the DataTable; if it exists, update that row, and if it doesn't, add a new row.
    3. Once I'm done, push the contents of the DataTable to the database.

    What I have so far:

        Dim sSQL As String = "SELECT * FROM tblCars"
        Dim da As New OleDb.OleDbDataAdapter(sSQL, conn)
        Dim ds As New DataSet
        da.Fill(ds, "CarData")
        Dim cb As New OleDb.OleDbCommandBuilder(da)

        'loop: read a line of text and parse it out; gets dd, dc, and carID

        'create a new empty row
        Dim dsNewRow As DataRow = ds.Tables("CarData").NewRow()

        'update the new row with fresh data
        dsNewRow.Item("DriveDate") = dd
        dsNewRow.Item("DCode") = dc
        dsNewRow.Item("CarNum") = carID
        'about 15 more fields

        'add the filled row to the DataSet table
        ds.Tables("CarData").Rows.Add(dsNewRow)
        'end loop

        'update the database with the new rows
        da.Update(ds, "CarData")

    Questions:

    1. In constructing my table I use "SELECT * FROM tblCars", but what if that table already holds millions of records? Isn't that a waste of resources? Should I be trying something different if I want to update with new records?
    2. Once I'm done with the first text file, I move to the next one. What's the best approach here: first look for an existing record based on CarNum, or create a second table and merge the two at the end?
    3. Finally, when the DataTable is fully populated and I'm pushing it to the database, I want to make sure that if records already exist with the three primary fields (DriveDate, DCode, and CarNum) they get updated with the new fields, and if they don't exist the records get appended. Is that possible with my process? tia AGP
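
    On question 2, a hedged sketch of the lookup-or-add step using DataTable.Select (fine at moderate sizes; a Dictionary keyed on the fields scales better, and the date formatting in the filter string may need adjusting):

        'Look for an existing row by key; # delimits dates in DataTable filter syntax.
        Dim filter As String = String.Format( _
            "DriveDate = #{0}# AND DCode = '{1}' AND CarNum = '{2}'", dd, dc, carID)
        Dim matches As DataRow() = ds.Tables("CarData").Select(filter)

        If matches.Length > 0 Then
            'Existing record: overwrite its fields with the fresh data.
            matches(0)("DriveDate") = dd
            '... remaining fields ...
        Else
            'No match: append a new row exactly as before.
            Dim dsNewRow As DataRow = ds.Tables("CarData").NewRow()
            dsNewRow("DriveDate") = dd
            dsNewRow("DCode") = dc
            dsNewRow("CarNum") = carID
            ds.Tables("CarData").Rows.Add(dsNewRow)
        End If

    On question 1, filling the DataSet with "SELECT * FROM tblCars WHERE 1 = 0" pulls the schema without any rows, which keeps the adapter's insert path working while avoiding the million-row load (at the cost of not being able to detect existing database rows locally).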


  • Is it safe to use a boolean flag to stop a thread from running in C#

    - by Lirik
    My main concern is with the boolean flag: is it safe to use it without any synchronization? I've read in several places that it's atomic.

        class MyTask
        {
            private ManualResetEvent startSignal;
            private CountDownLatch latch;
            private bool running;

            MyTask(CountDownLatch latch)
            {
                running = false;
                this.latch = latch;
                startSignal = new ManualResetEvent(false);
            }

            // A method which runs in a thread
            public void Run()
            {
                startSignal.WaitOne();
                while (running)
                {
                    startSignal.WaitOne();
                    //... some code
                }
                latch.Signal();
            }

            public void Stop()
            {
                running = false;
                startSignal.Set();
            }

            public void Start()
            {
                running = true;
                startSignal.Set();
            }

            public void Pause()
            {
                startSignal.Reset();
            }

            public void Resume()
            {
                startSignal.Set();
            }
        }

    Is this a safe way to design a task? Any suggestions, improvements, comments? Note: I wrote my own custom CountDownLatch class, in case you're wondering where I'm getting it from.
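
    On the flag itself (hedged, but this is well-documented .NET behaviour): bool reads and writes are atomic, so the field can't tear, but atomicity isn't visibility; without a memory barrier the JIT may cache the value in a register and the loop may never observe Stop(). The minimal fix is:

        private volatile bool running;   // forces reads/writes to go to memory

    In this particular design the WaitOne() calls already act as synchronization points each iteration, which tends to mask the problem, but declaring the field volatile makes the intent explicit rather than incidental.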


  • Rapid Opening and Closing System.IO.StreamWriter in C#

    - by ccomet
    Suppose you have a file that you are programmatically logging information into with regard to a process: kind of like your typical debug Console.WriteLine, but due to the nature of the code you're testing you don't have a console to write to, so you have to write somewhere like a file. My current program uses System.IO.StreamWriter for this task. My question is about the approach to using the StreamWriter. Is it better to open just one StreamWriter instance, do all of the writes, and close it when the entire process is done? Or is it better to open a new StreamWriter instance to write a line into the file, then immediately close it, and do this every time something needs to be written? The latter approach would probably be facilitated by a method that does just that for a given message, rather than bloating the main process code with excessive repetition. But having a method to aid the implementation doesn't necessarily make it the better choice. Are there significant advantages to picking one approach or the other? Or are they functionally equivalent, leaving the choice on the shoulders of the programmer?
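
    For concreteness, the two shapes being compared (a sketch; the single long-lived writer with AutoFlush gives prompt, durable lines without re-paying the open/close cost per message):

        // using System; using System.IO;

        // One writer held open for the whole process:
        using (var log = new StreamWriter("process.log", true) { AutoFlush = true })
        {
            log.WriteLine("step 1 done");   // flushed to the OS after each write
            // ... the rest of the process ...
        }

        // Open-per-message alternative (simpler call sites, far more I/O churn):
        static void Log(string message)
        {
            File.AppendAllText("process.log", message + Environment.NewLine);
        }

    Functionally both produce the same file; the per-message version pays a file open, seek-to-end, and close on every line, and gives up holding the file locked against concurrent writers.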


  • Automatically Organize Tags in Tax/Folksonomy

    - by Rob Wilkerson
    I'm working on a process that will perform natural language processing (NLP) on one, and potentially several, of our content-rich sites. What I'd like to do once the NLP is complete is to automatically organize the output (generally a set of terms that you might think of as tags, given the prevalence of that metaphor) into some kind of standard or generally accepted organizational structure. In a perfect world, I'd really like this to be crowd-sourced under the folksonomy concept (as opposed to a taxonomy), since the ultimate goal is to target/appeal to real people rather than "domain experts", but I'm open to ideas and best practices. For the obvious purpose of scalability, I'd like to automate the population of this tax/folksonomy so that "some guy" in the team/organization isn't responsible for looking at a bunch of words (with or without context) and arbitrarily fleshing out the contextual components of the tree. I have a few ideas for doing this that require some research to establish viability, but I have exactly zero practical experience with this sort of thing, so the ideas really just boil down to stuff I made up that might perform some role in accomplishing the task. Imagining that others have vastly more experience with this sort of thing, I'm hoping I can stand on your shoulders. Thanks for your thoughts and insights.


  • Query crashes MS Access

    - by user284651
    THE TASK: I am in the process of migrating a DB from MS Access to Maximizer. In order to do this I must take 64 tables in MS Access and merge them into one. The output must be in the form of a TAB or CSV file, which will then be imported into Maximizer.

    THE PROBLEM: Access is unable to perform a query this complex, it seems, as it crashes any time I run the query.

    ALTERNATIVES: I have thought about a few alternatives, and would like to do the least time-consuming one, while also taking advantage of any opportunities to learn something new:

    1. Export each table to CSV, import into SQLite, and then write a query to do what Access fails to do (merge 64 tables).
    2. Export each table to CSV and write a script to access each one and merge the CSVs into a single CSV.
    3. Somehow connect to the MS Access DB (API) and write a script to pull data from each table and merge it into a CSV file.

    QUESTION: What do you recommend?
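
    A sketch of alternative 2 in Python (assumes the 64 exports share one header row and sit in a hypothetical exports/ folder; this is a straight concatenation, so if "merge" means joining on a key it doesn't apply):

        import csv
        import glob

        with open('merged.csv', 'w', newline='') as out:
            writer = None
            for path in sorted(glob.glob('exports/*.csv')):  # the 64 exported tables
                with open(path, newline='') as f:
                    reader = csv.reader(f)
                    header = next(reader)            # each file's header row
                    if writer is None:
                        writer = csv.writer(out)
                        writer.writerow(header)      # write the header once
                    for row in reader:
                        writer.writerow(row)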

