Search Results

Search found 1863 results on 75 pages for 'clone of anton makrushin'.

Page 65/75 | < Previous Page | 61 62 63 64 65 66 67 68 69 70 71 72  | Next Page >

  • Add newline to a text field (Win32)

    - by user146780
    I'm making a Notepad clone. Right now my text loads fine, but where there are newline characters, they do not produce new lines in the text field. I load it like this:

        void LoadText(HWND ctrl, HWND parent)
        {
            int leng;
            char buf[330000];
            char FileBuffer[500];
            memset(FileBuffer, 0, 500);
            FileBuffer[0] = '*';
            FileBuffer[1] = '.';
            FileBuffer[2] = 't';
            FileBuffer[3] = 'x';
            FileBuffer[4] = 't';
            OPENFILENAMEA ofn;
            memset(&ofn, 0, sizeof(OPENFILENAMEA));
            ofn.lStructSize = sizeof(OPENFILENAMEA);
            ofn.hwndOwner = parent;
            ofn.lpstrFile = FileBuffer;
            ofn.nMaxFile = 500;
            ofn.lpstrFilter = "Filetype (*.txt)\0\0";
            ofn.lpstrDefExt = "txt";
            ofn.Flags = OFN_EXPLORER;
            if (!GetOpenFileNameA(&ofn)) {
                return;
            }
            ifstream *file;
            file = new ifstream(FileBuffer, ios::in);
            int lenn;
            lenn = 0;
            while (!file->eof()) {
                buf[lenn] = file->get();
                lenn += 1;
            }
            buf[lenn - 1] = 0;
            file->read(buf, lenn);
            SetWindowTextA(ctrl, buf);
            file->close();
        }

    How can I make it honor the newline characters? Thanks
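
    A likely cause (not confirmed by the question) is that Win32 EDIT controls expect CRLF ("\r\n") line endings, while a text-mode ifstream hands back bare "\n". A minimal sketch of the conversion, assuming the file contents are already in a buffer, might look like this:

        #include <string>
        #include <windows.h>

        // Hypothetical helper: expand bare '\n' into "\r\n" so the EDIT control
        // renders line breaks. Assumes 'text' holds the raw file contents.
        void SetEditText(HWND ctrl, const std::string &text)
        {
            std::string crlf;
            crlf.reserve(text.size());
            for (std::string::size_type i = 0; i < text.size(); ++i) {
                if (text[i] == '\n' && (i == 0 || text[i - 1] != '\r'))
                    crlf += '\r';          // insert the missing carriage return
                crlf += text[i];
            }
            SetWindowTextA(ctrl, crlf.c_str());
        }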

  • How can I coordinate code review tool and RCS (specifically git)

    - by Chris Nelson
    We're committed to git for code management. We're trying to find a tool that will help us systematize code reviews. We're considering Gerrit and Code Collaborator but would welcome other suggestions. We're having a problem answering the question, "How do we know every commit was reviewed?" (Or "What commits have yet to be reviewed?") One answer would be to submit every commit or every push for review and track incomplete reviews in the review tool. I'm not entirely happy with relying on another tool -- especially if it's not open source -- to tell us this. What seems to be a better answer is to rely on sign-offs in git (e.g., "Signed-off-by: Chris Nelson") and use a hook in the review tool to sign off commits on behalf of the reviewer. An advantage of this is that if we use some other review mechanism for some commits, we have just one place to look for results. One problem with this is that we can't require review before push, because the review tool is unlikely to have access to the developer's private repository clone to add the sign-off. Any ideas on integrating code review with code management to achieve ease of use and high visibility of unreviewed changes?
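
    If the sign-off convention is adopted, one hedged way to answer "what has not been reviewed yet?" is to scan commit messages for the trailer; a rough shell sketch (the branch name and trailer text are assumptions):

        # List commits on master whose message lacks a Signed-off-by trailer.
        git log --no-merges --format='%H' master | while read commit; do
            git show -s --format=%B "$commit" | grep -q '^Signed-off-by:' \
                || echo "unreviewed: $commit"
        done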

  • Help me with DB design

    - by eugeneK
    Hi, I'm developing a text ads system -- a small clone of Google Ads. Here is a diagram with the common tables. To keep it short: an advertiser can have up to 10 variants of the same campaign with different text variations, can geo-target his ads, and unique impressions count only for an IP that hasn't been on a given site for more than 24 hours. Pretty simple, but the question is what I'm lacking here, from your experience, because later it would be much harder to fix design flaws. Have I over-normalized the DB, or not normalized it as much as needed? Second question: my end goal is to get ads for a user from, e.g., Germany who hasn't seen the same ad on the same site for 24 hours, as long as the ads fit the user's country. Each impression is counted, as is each click if there is one. I need to get 5 "random" ads based on IP, country and higher CPC (pay per click). How can I achieve this with the current design, or how should I design the database so that it would be easy to get ads and show stats for advertisers... thanks for any help...
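
    Since the actual schema isn't shown, here is only a rough sketch of the kind of query the second requirement implies; every table and column name below is an assumption rather than the asker's design, and T-SQL syntax is assumed:

        -- Hypothetical schema: ads(ad_id, country, cpc), impressions(ad_id, site_id, ip, seen_at).
        -- Pick 5 ads for this country that the visitor's IP has not seen on this site in
        -- the last 24 hours, preferring a higher cost-per-click.
        SELECT TOP 5 a.ad_id, a.cpc
        FROM ads AS a
        WHERE a.country = @country
          AND NOT EXISTS (SELECT 1
                          FROM impressions AS i
                          WHERE i.ad_id = a.ad_id
                            AND i.site_id = @site_id
                            AND i.ip = @ip
                            AND i.seen_at > DATEADD(HOUR, -24, GETDATE()))
        ORDER BY a.cpc DESC, NEWID();   -- NEWID() adds randomness among equal bids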

  • Help to understand the issue with protected method

    - by zeroed
    I'm reading the Sybex Complete Java 2 Certification Study Guide, April 2005 (ISBN 0782144195). This book is for Java developers who want to pass the Java certification. After a chapter about access modifiers (along with other modifiers) I found the following question (#17): True or false: If class Y extends class X, the two classes are in different packages, and class X has a protected method called abby(), then any instance of Y may call the abby() method of any other instance of Y. This question confused me a little. As far as I know, you can call a protected method on any variable of the same class (or subclasses). You cannot call it on variables whose type is higher in the hierarchy than yours (e.g. interfaces that you implement). For example, you cannot clone any object just because you inherit it. But the question says nothing about the variable type, only about the instance type. I was a little confused and answered "true". The answer in the book is: False. An object that inherits a protected method from a superclass in a different package may call that method on itself but not on other instances of the same class. There is nothing here about the variable type, only about the instance type. This is very strange, I do not understand it. Can anybody explain what is going on here?
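
    For reference, the rule (JLS §6.6.2) hinges on the static type of the reference, not on the instance: from code in the subclass, access through a reference whose type is the subclass (or a subclass of it) is allowed, while access through a superclass-typed reference is not. A small hedged example (package names and method bodies are made up):

        package b;

        import a.X;   // X declares: protected void abby() { ... }

        public class Y extends X {
            void demo(Y otherY, X someX) {
                this.abby();     // compiles: accessed through the inheriting class itself
                otherY.abby();   // compiles: the reference's static type is Y (JLS 6.6.2.1)
                // someX.abby(); // does NOT compile: X is in another package and the
                                 // reference's static type is the superclass X
            }
        }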

  • call a custom event from an item renderer in flex 4

    - by john
    I have a renderer:

        <fx:Script>
            <![CDATA[
                [Event(name="addToCart",type="event.ProductEvent")]
                import mx.collections.ArrayCollection;

                protected function button1_clickHandler(event:MouseEvent):void
                {
                    var eventObj:ProductEvent = new ProductEvent("addToCart", data.price, data.descript);
                    dispatchEvent(eventObj);
                }
            ]]>
        </fx:Script>
        <s:Label text="{data.descript}"/>
        <mx:Image source="{data.url}" width="50" height="50" width.hovered="100" height.hovered="100"/>
        <s:Label text="{data.price}"/>
        <s:Button includeIn="hovered" click="button1_clickHandler(event)" label="buy"/>

    and the custom event class:

        package events
        {
            import flash.events.Event;

            [Bindable]
            public class ProductEvent extends Event
            {
                public var price:String;
                public var descript:String;

                public function ProductEvent(type:String, price:String, descript:String)
                {
                    super(type);
                    this.price = price;
                    this.descript = descript;
                }

                override public function clone():Event
                {
                    return new ProductEvent(type, price, descript);
                }
            }
        }

    but I cannot call that event in ... -- any ideas? thanks

  • Compiling x264 with mp4 support problem

    - by johnas
    I'm trying to compile x264 with mp4 output support. I downloaded the latest version from their git by typing

        git clone git://git.videolan.org/x264.git

    When I run ./configure it configures and I'm able to make it. But when I try to configure it with ./configure --enable-mp4-output and then try to make it, it returns a strange error, indicating that there is a compilation error. The error message looks like:

        ... Lots of similar errors ...
        output/mp4.c:297: warning: implicit declaration of function ‘gf_isom_add_sample’
        output/mp4.c:297: error: ‘mp4_hnd_t’ has no member named ‘p_file’
        output/mp4.c:297: error: ‘mp4_hnd_t’ has no member named ‘i_track’
        output/mp4.c:297: error: ‘mp4_hnd_t’ has no member named ‘i_descidx’
        output/mp4.c:297: error: ‘mp4_hnd_t’ has no member named ‘p_sample’
        output/mp4.c:299: error: ‘mp4_hnd_t’ has no member named ‘p_sample’
        output/mp4.c:300: error: ‘mp4_hnd_t’ has no member named ‘i_numframe’

    I've tried different releases. I've installed gpac, ffmpeg, and tried numerous tips from the net. But I still can't get it to work. The reason I want it with mp4 output is that I want to use ffmpeg to create mp4 files encoded with x264. I'm running Ubuntu Server 9.10, 32-bit.

  • Coredump in Multithreading Application in RHEL-5

    - by Chinnu
    I am working on a multi-threading application and it is dumping core frequently. I have not been able to analyze the core. The core shows this:

        Core was generated by 'thread-process'.
        Program terminated with signal 6, Aborted.
        #0  0x00000038f4e30045 in raise () from /lib64/libc.so.6
        (gdb) where
        #0  0x00000038f4e30045 in raise () from /lib64/libc.so.6
        #1  0x00000038f4e31ae0 in abort () from /lib64/libc.so.6
        #2  0x00000038f4e681bb in __libc_message () from /lib64/libc.so.6
        #3  0x00000038f4e72b96 in free () from /lib64/libc.so.6
        #4  0x000000000046a137 in std::string::substr ()
        #5  0x000000000042c549 in std::operator<< <char, std::char_traits<char>, std::allocator<char> > ()
        #6  0x000000000042cc1d in std::operator<< <char, std::char_traits<char>, std::allocator<char> > ()
        #7  0x000000000046b069 in std::string::substr ()
        #8  0x000000000046c866 in std::string::substr ()
        #9  0x0000000000431062 in std::operator<< <char, std::char_traits<char>, std::allocator<char> > ()
        #10 0x00000038f5a062e7 in start_thread () from /lib64/libpthread.so.0
        #11 0x00000038f4ece3bd in clone () from /lib64/libc.so.6
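
    An abort inside free() usually points at heap corruption somewhere earlier, not at the frame that crashed, so the backtrace alone can be misleading. As a hedged first step (assuming the binary can be rebuilt; the compile line below is only a placeholder for the real build), running the process under valgrind often pinpoints the original bad write:

        # Rebuild with symbols and no optimization, then let memcheck watch the heap.
        g++ -g -O0 -pthread -o thread-process *.cc
        valgrind --tool=memcheck --leak-check=full --track-origins=yes ./thread-process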

  • git pull fails after another member pushes something

    - by naiad
    Here's the story... we have a GitHub account. I clone the repository... then I can work with it, commit things, push things, etc. I use Linux with the command line and git version 1.7.7.3. Then another user, using Eclipse and the git plugin for Eclipse, eGit 1.1.0, pushes something, and it appears in the GitHub web pages as the last commit, but when I try to pull:

        $ git pull
        remote: Counting objects: 13, done.
        remote: Compressing objects: 100% (6/6), done.
        remote: Total 9 (delta 2), reused 7 (delta 0)
        Unpacking objects: 100% (9/9), done.
        error: unable to find 3e6c5386cab0c605877f296642d5183f582964b6
        fatal: object 3e6c5386cab0c605877f296642d5183f582964b6 not found

    "3e6c5386cab0c605877f296642d5183f582964b6" is the commit hash of the last commit, done by the other user... there's no problem at all browsing it through the web page... but for me it's impossible to pull it. It's strange, because my command line output mentions that commit hash, so it knows that one is the last commit in the GitHub system, but my git cannot pull it! Maybe the git protocol used in eGit is incompatible with the console git 1.7.7.3...

  • Svn repository split problem

    - by Tuminoid
    I want to split a directory from a large Subversion repository into a repository of its own, and keep the history of the files in that directory. I tried the regular way of doing it first:

        svnadmin dump /path/to/repo > largerepo.dump
        cat largerepo.dump | svndumpfilter include my/directory > mydir.dump

    but that does not work, since the directory has been moved and copied over the years and files have been moved into and out of it to other parts of the repository. The result is a lot of these:

        svndumpfilter: Invalid copy source path '/some/old/path'

    The next thing I tried was to include those /some/old/path entries as they appear, and after a long, long list of included files and directories, svndumpfilter completes, BUT importing the resulting dump doesn't produce the same files the directory currently has. So, how do I properly split the directory from that repository while keeping the history? EDIT: I specifically want trunk/myproj to be the trunk in a new repository PLUS have the new repository include none of the other old stuff, i.e. there should be no way for anyone to update to an old revision from before the split and get/see the files. The svndumpfilter solution I tried would achieve exactly that; sadly it's not doable since the paths/files have been moved around. The solution by ng isn't acceptable since it's basically a clone plus removal of extras, which keeps ALL the history, not just the relevant myproj history. BUMP: C'mon, there must be someone who definitely knows whether this is doable or not, and how!

  • Merging changes to a workspace with uncommitted changes

    - by Kim L
    We've just recently switched over from SVN to Mercurial, but now we are running into problems with our workflow. Example: I have my local clone of the repository which I work on. I'm making some highly experimental changes to our code base, something that I don't want to commit before I'm sure it works the way it is supposed to; I don't want to commit it even locally. Now, simultaneously, my co-worker has made some significant improvements/bug fixes which I need. He pushes his commits to our main repository. The question is, how can I merge his changes into my workspace without being required to commit all my changes, since I need his changes to test my own code? A more day-to-day problem we have with the exact same workflow is that we have a couple of configuration files in the repository. Each developer makes a couple of small environment-specific changes to the configuration files, but does not commit the changes. These few uncommitted files hinder us from making any merges into our workspace, just like the example above. Ideally, the configuration files probably shouldn't be in the repository; unfortunately, that's just how it has to be here, for reasons I won't go into.
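
    One hedged way to handle this in Mercurial, assuming the shelve extension is available and enabled, is to set the uncommitted work aside, pull and update, then restore it:

        hg shelve      # set aside the uncommitted experimental edits
        hg pull -u     # bring in the co-worker's pushed changesets and update to them
        hg unshelve    # reapply the shelved edits on top of the new parent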

  • Copying a directory that is version controlled

    - by ibz
    I am curious whether it is OK to copy a directory that is under version control and start working on both copies. I know it can differ from one VCS to another, but I intentionally don't specify any VCS since I am curious about the different cases. I was talking to a coworker recently about doing it in SVN. I think it should be OK, but I am still not 100% sure, since I don't know exactly what SVN stores in the working copy. However, if we talk about the DVCS world, things might be even more unclear, since every working copy is a repository by itself. Being faced with doing this in bzr now, I decided to ask the question. Later edit: Some people asked why I would want to do that. Here is the whole story: In the case of SVN, it was because, being out of the office, the connection to the SVN server was really slow, so my coworker and I decided to check out the sources only once and make a local copy. That's what we did and it worked OK, but I am still wondering whether it is guaranteed to work or whether it just happened to. In the bzr case, I am planning to move the "main" repo to another server. So I was thinking to just copy it there and start considering that the main repo. I guess the safest is to make a clone, though.

  • jquery pagination plugin problem

    - by John the horn
    I am using a pagination plugin called "pagination". Here is the script that initializes it:

        function pageselectCallback(page_index, jq){
            var new_content = jQuery('#hiddenresult div.result:eq(' + page_index + ')').clone();
            $('#Searchresult').empty().append(new_content);
            return false;
        }

        /**
         * Initialisation function for pagination
         */
        function initPagination() {
            // count entries inside the hidden content
            var num_entries = jQuery('#hiddenresult div.result').length;
            // Create content inside pagination element
            $("#Pagination").pagination(num_entries, {
                callback: pageselectCallback,
                items_per_page: 1 // Show only one item per page
            });
        }

        // When document is ready, initialize pagination
        $(document).ready(function(){
            initPagination();
        });

    and I am using this script for the accordion on the page:

        <script type="text/javascript">
        $(document).ready(function(){
            $('#content').show();
            $('#content1').hide();
            $('#content2').hide();
            $('#content3').hide();
            $('#content4').hide();
            $('#content5').hide();
            $('a.bucatarii_serie').click(function(){
                $("div[id^='content']").hide();
                $('#content').toggle('slow');
            });
            $('a.livinguri_serie').click(function(){
                $("div[id^='content']").hide();
                $('#content1').toggle('slow');
            });
            $('a.dormitoare_serie').click(function(){
                $("div[id^='content']").hide();
                $('#content2').toggle('slow');
            });
            $('a.tratamente').click(function(){
                $("div[id^='content']").hide();
                $('#content3').toggle('slow');
            });
            $('a.chirurgie').click(function(){
                $("div[id^='content']").hide();
                $('#content4').toggle('slow');
            });
            $('a.frizerie').click(function(){
                $("div[id^='content']").hide();
                $('#content5').toggle('slow');
            });
        });
        </script>

    My problem is that I don't know how to implement multiple paginations on the same page in different containers inside a bigger container.
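
    A rough sketch of one way to run two independent pagination areas, based only on the plugin call shown above; the second set of element IDs is an assumption:

        // Hypothetical second result list (#hiddenresult2 / #Searchresult2 / #Pagination2).
        function makePager(hiddenSel, targetSel, pagerSel) {
            function callback(page_index, jq) {
                var new_content = jQuery(hiddenSel + ' div.result:eq(' + page_index + ')').clone();
                $(targetSel).empty().append(new_content);
                return false;
            }
            $(pagerSel).pagination(jQuery(hiddenSel + ' div.result').length, {
                callback: callback,
                items_per_page: 1
            });
        }

        $(document).ready(function(){
            makePager('#hiddenresult',  '#Searchresult',  '#Pagination');   // first container
            makePager('#hiddenresult2', '#Searchresult2', '#Pagination2');  // second container
        });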

  • Split large repo into multiple subrepos and preserve history (Mercurial)

    - by Andrew
    We have a large base of code that contains several shared projects, solution files, etc. in one directory in SVN. We're migrating to Mercurial. I would like to take this opportunity to reorganize our code into several repositories to make cloning for branching have less overhead. I've already successfully converted our repo from SVN to Mercurial while preserving history. My question: how do I break all the different projects into separate repositories while preserving their history? Here is an example of what our single repository (OurPlatform) currently looks like:

        /OurPlatform
            ---- Core
            ---- Core.Tests
            ---- Database
            ---- Database.Tests
            ---- CMS
            ---- CMS.Tests
            ---- Product1.Domain
            ---- Product1.Stresstester
            ---- Product1.Web
            ---- Product1.Web.Tests
            ---- Product2.Domain
            ---- Product2.Stresstester
            ---- Product2.Web
            ---- Product2.Web.Tests
            ==== Product1.sln
            ==== Product2.sln

    All of those are folders containing VS projects except for the solution files. Product1.sln and Product2.sln both reference all of the other projects. Ideally, I'd like to take each of those folders and turn them into separate Hg repos, and also add new repos for each project (they would act as parent repos). Then, if someone was going to work on Product1, they would clone the Product1 repo, which contained Product1.sln and subrepo references to ReferenceAssemblies, Core, Core.Tests, Database, Database.Tests, CMS, and CMS.Tests. So, it's easy to do this by just hg init'ing in the project directories. But can it be done while preserving history? Or is there a better way to arrange this?
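
    One hedged approach is Mercurial's convert extension (it has to be enabled in .hgrc), run once per project against the already-converted repo with a filemap that carves out one folder and keeps its history; the paths below just mirror the example layout:

        $ cat filemap-core.txt        # one filemap per project to extract
        include Core
        rename Core .
        $ hg convert --filemap filemap-core.txt OurPlatform OurPlatform-Core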

  • Git : Failed at pushing to remote server, ' REPOSITORY_PATH ' is not a git command

    - by Judarkness
    I'm using Git with TortoiseGit on Windows XP, and I have a remote bare repository on Windows Vista, 64-bit version. When I tried to push my local files to the remote bare repository, I got the following error message:

        git.exe push "origin" master:master
        git: 'C:/Git_Repository/.git' is not a git command. See 'git --help'.
        fatal: The remote end hung up unexpectedly

    The arbitrary URL is: username@serverip:C:/Git_Repository/.git. The same arbitrary URL worked just fine while doing clone/fetch/pull. Access from a local directory on the remote machine to this bare repository has no problem either, so I believe there is something wrong with my path. I can push/pull at GitHub correctly, but there I was using the URL provided by GitHub. Does anyone know what's wrong with my configuration? Here is my remote .git/config:

        [core]
            repositoryformatversion = 0
            filemode = false
            bare = true
            logallrefupdates = true
            ignorecase = true
            hideDotFiles = dotGitOnly

    Here is my local .git/config:

        [core]
            repositoryformatversion = 0
            filemode = false
            bare = false
            logallrefupdates = true
            symlinks = false
            ignorecase = true
            hideDotFiles = dotGitOnly
        [remote "origin"]
            fetch = +refs/heads
            url = username@serverip:C:/Git_Repository/.git
        [branch "master"]
            remote = origin
            merge = refs/heads/master

    Thanks.

  • Reference 3.5 assembly from 4.0 winforms phail

    - by Dean Lunz
    So I have this utility library that is compiled as a DLL under .NET 3.5, and it is used by my ASP.NET 3.5 website. I created a .NET 4.0 WinForms app to push data onto the website. I want to make use of the functionality in the utilities library from this WinForms app. The problem is that when I reference the utilities library and use the code in it, IntelliSense barks at me, saying that it can't find the objects in that library. The thing is, I would switch the WinForms app to 3.5, which fixes the problem, but I am using Tasks, which require 4.0. And because my website and utilities library both run 3.5, and my website is hosted at GoDaddy, which currently only supports ASP.NET 3.5, compiling my utilities library under 4.0 for my WinForms app is not going to work because it breaks my website. I have tried the app.config trick a la useLegacyV2RuntimeActivationPolicy="true"... but that did not help. Obviously I could start a new utilities project for 4.0 and copy the code files from the existing utilities library, then reference the new 4.0 utilities library in my WinForms app, but that strikes me as rather overkill when all I want to do is reference the library and use its functionality. Not to mention that I would have two utility libraries both containing the exact same code, and if I update the code in one I will need to make sure that the other is also updated. I could use add-file-as-link, but you get the idea. So is there anything else I could try, or any other way to solve or get around this? Or am I just going to have to break down and create an identical clone of the utilities library for 4.0?

  • Does Ctypes Structures and POINTERS automatically free the memory when the Python object is deleted?

    - by jsbueno
    When using Python ctypes there are Structures, which allow you to clone C structures on the Python side, and POINTER objects, which create a sophisticated Python object from a memory address value and can be used to pass objects by reference back and forth to C code. What I could not find in the documentation or elsewhere is what happens when a Python object containing a Structure class -- one that was dereferenced from a pointer returned by C code (that is, the C function allocated the memory for the structure) -- is itself deleted. Is the memory for the original C structure freed? If not, how do I do it? Furthermore: what if the Structure contains pointers itself, to other data that was also allocated by the C function? Does the deletion of the Structure object free the pointers in its members? (I doubt it.) If not, how do I do it? Trying to call the system "free" from Python for the pointers in the Structure is crashing Python for me. In other words, I have this structure filled in by a C function call:

        class PIX(ctypes.Structure):
            """Comments not generated """
            _fields_ = [
                ("w", ctypes.c_uint32),
                ("h", ctypes.c_uint32),
                ("d", ctypes.c_uint32),
                ("wpl", ctypes.c_uint32),
                ("refcount", ctypes.c_uint32),
                ("xres", ctypes.c_uint32),
                ("yres", ctypes.c_uint32),
                ("informat", ctypes.c_int32),
                ("text", ctypes.POINTER(ctypes.c_char)),
                ("colormap", ctypes.POINTER(PIXCOLORMAP)),
                ("data", ctypes.POINTER(ctypes.c_uint32))
            ]

    And I want to free the memory it is using up from Python code.
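
    ctypes only frees memory that ctypes itself allocated; memory handed back by a C function is not released when the Python wrapper is garbage-collected, and it should be released by whatever allocator created it. A hedged sketch, assuming this PIX comes from the leptonica library (which exposes pixDestroy for exactly this purpose); the library name and calls are assumptions about the asker's setup:

        import ctypes

        leptonica = ctypes.CDLL("liblept.so")          # assumed library providing the PIX type
        leptonica.pixCreate.restype = ctypes.POINTER(PIX)

        pix_ptr = leptonica.pixCreate(100, 100, 32)    # the C side allocates the structure
        # ... use pix_ptr.contents ...
        leptonica.pixDestroy(ctypes.byref(pix_ptr))    # let the owning library free it and
                                                       # its nested allocations
        pix_ptr = None                                 # dropping the Python wrapper alone
                                                       # would not have freed anything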

  • jQueryUI: Sortable <thead> messes up in ANY IE when the CSS position property is set to RELATIVE

    - by Zlatev
    As the title describes most of the problem, here is the example code that works as expected in Firefox. Could someone provide a workaround or fix? In action:

    JavaScript

        $(function() {
            $('table thead').sortable({
                items: 'th',
                containment: 'document',
                helper: 'clone',
                cursor: 'move',
                placeholder: 'placeHold',
                start: function(e, ui) {
                    $overlay = $('<div>').css({
                        position: 'fixed',
                        left: 0,
                        top: 0,
                        backgroundColor: 'black',
                        opacity: 0.4,
                        width: '100%',
                        height: '100%',
                        zIndex: 500
                    }).attr('id', 'sortOverlay').prependTo(document.body);
                    $(this).parent().css({ position: 'relative', zIndex: 1000 });
                },
                stop: function(e, ui) {
                    $('#sortOverlay').remove();
                    $(this).parent().css({ position: 'static' });
                }
            });
        });

    CSS

        <style type="text/css">
            table { background-color: #f3f3f3; }
            table thead { background-color: #c1c1c1; }
            .placeHold { background-color: white; }
        </style>

    HTML

        <table>
            <thead><th>th1</th><th>th2</th><th>th3</th><th>th4</th></thead>
            <tbody>
                <tr>
                    <td>content</td><td>content</td><td>content</td><td>content</td>
                </tr>
            </tbody>
        </table>

  • jquery ui drag and drop showing feedback

    - by sea_1987
    Hi there, I have some drag-and-drop functionality on my website. I want to highlight the droppable area with a change in border color when the draggable element is clicked/starting to be dragged. If the click or drag stops, I want the border of the droppable element to change back to its original state. I currently have this code, but it does not work very well:

        $(".drag_check").draggable({helper: "clone", opacity: "0.5"});

        $(".drag_check").mousedown(function() {
            $('.searchPage').css("border", "solid 3px #00FF66").fadeIn(1000);
        });

        $(".drag_check").mouseup(function(){
            $('.searchPage').css("border", "solid 3px #E2E5F1").fadeIn(1000);
        })

        $(".searchPage").droppable({
            accept: ".drag_check",
            hoverClass: "dropHover",
            drop: function(ev, ui) {
                var droppedItem = ui.draggable.children();
                cv_file = ui.draggable.map(function() {
                    // map the names and values of each of the selected checkboxes into an array
                    return ui.draggable.children().attr('name') + "=" + ui.draggable.children().attr('value');
                }).get();
                var link = ui.draggable.children().attr('name').substr(ui.draggable.children().attr('name').indexOf("[") + 1, ui.draggable.children().attr('name').lastIndexOf("]") - 8)
                $.ajax({
                    type: "POST",
                    url: "/search",
                    data: ui.draggable.children().attr('name') + "=" + ui.draggable.children().val() + "&save=Save CVs",
                    success: function(){
                        window.alert(cv_file + "&save=Save CVs");
                        $('.shortList').append('<li><span class="inp_bg"><input type="checkbox" name="remove_cv' + link + '" value="Y" /></span><a href="/cv/' + link + '/">' + link + '</a></li>');
                        $('.searchPage').css("border", "solid 3px #E2E5F1").fadeIn(1000);
                    },
                    error: function() {
                        alert("Somthing has gone wrong");
                    }
                });
            }
        });
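
    Rather than separate mousedown/mouseup handlers, the highlight could be tied to the drag itself; a small hedged sketch using draggable's own start and stop callbacks (selectors and colors reused from the code above):

        $(".drag_check").draggable({
            helper: "clone",
            opacity: "0.5",
            start: function() {
                // highlight the drop target for the whole duration of the drag
                $('.searchPage').css("border", "solid 3px #00FF66");
            },
            stop: function() {
                // restore the original border when the drag ends, dropped or not
                $('.searchPage').css("border", "solid 3px #E2E5F1");
            }
        });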

  • Using git-svn with slightly strange svn layout

    - by Ibrahim
    Hi guys, I'm doing an internship and they are using SVN (although there has been some discussion of moving to hg or git, but that's not in the immediate future). I like git, so I would like to use git-svn to interact with the SVN repository and be able to do local commits and branches and stuff like that (rebasing before committing to SVN, of course). However, there is one slight wrinkle: the SVN repository layout is a little weird. It basically looks like this:

        /FOO
            +- branches
            +- tags
            +- trunk
                +- FOO
                +- myproject

    Basically, my project has been stuck into a subdirectory of trunk, and there is another project that is also a subdirectory of the trunk. If I use git-svn and only clone the directory for my project instead of the root, will it get confused or cause any problems? I just wonder because the commit numbers are incremented for the entire repository and not just my project, so would commits be off or anything like that? I probably wouldn't push any branches or tags to SVN because I'd prefer to just do those locally in git, and I don't know how git-svn deals with branches and tags anyway, and no one else uses them, so I find little point in doing so. Thanks for the help!
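
    For what it's worth, git-svn can be pointed directly at the project subdirectory; a hedged sketch (the server URL is invented, the layout is taken from the diagram above):

        # Only the history that touches trunk/myproject is fetched; revisions elsewhere
        # in the repository simply do not get corresponding git commits.
        git svn clone --trunk=trunk/myproject \
            https://svn.example.com/FOO myproject-git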

  • javascript scope problem when lambda function refers to a variable in enclosing loop

    - by Stefan Blixt
    First question on Stack Overflow :) Hope I won't embarrass myself... I have a JavaScript function that loads a list of albums and then creates a list item for each album. The list item should be clickable, so I call jQuery's click() with a function that does stuff. I do this in a loop. My problem is that all items seem to get the same click function, even though I try to make a new one that does different stuff in each iteration. Another possibility is that the iteration variable is global somehow, and the function refers to it. Code below. debug() is just an encapsulation of Firebug's console.debug().

        function processAlbumList(data, c) {
            for (var album in data) {
                var newAlbum = $('<li class="albumLoader">' + data[album].title + '</li>').clone();
                var clickAlbum = function() {
                    debug("contents: " + album);
                };
                debug("Album: " + album + "/" + data[album].title);
                $('.albumlist').append(newAlbum);
                $(newAlbum).click(clickAlbum);
            }
        }

    Here is a transcript of what it prints when the above function runs; after that are some debug lines caused by me clicking on different items. It always prints "10", which is the last value that the album variable takes (there are 10 albums).

        Album: 0/Live on radio.electro-music.com
        Album: 1/Doodles
        Album: 2/Misc Stuff
        Album: 3/Drawer Collection
        Album: 4/Misc Electronic Stuff
        Album: 5/Odds & Ends
        Album: 6/Tumbler
        Album: 7/Bakelit 32
        Album: 8/Film
        Album: 9/Bakelit
        Album: 10/Slow Zoom/Atomic Heart
        contents: 10
        contents: 10
        contents: 10
        contents: 10
        contents: 10

    Any ideas? Driving me up the wall, this is. :) /Stefan
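
    This looks like the classic closure-over-a-loop-variable problem: every handler closes over the same album variable, which already has its final value by the time any click fires. One hedged fix is to give each iteration its own scope, for example (keeping the original names):

        function processAlbumList(data, c) {
            for (var album in data) {
                // An immediately-invoked function gives each handler its own 'album' copy.
                (function(album) {
                    var newAlbum = $('<li class="albumLoader">' + data[album].title + '</li>');
                    newAlbum.click(function() {
                        debug("contents: " + album);
                    });
                    $('.albumlist').append(newAlbum);
                })(album);
            }
        }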

  • Delete Duplicate records from large csv file C# .Net

    - by Sandhurst
    I have created a solution which reads a large CSV file, currently 20-30 MB in size. I have tried to delete the duplicate rows based on certain column values that the user chooses at run time, using the usual technique of finding duplicate rows, but it is so slow that it seems the program is not working at all. What other technique can be applied to remove duplicate records from a CSV file? Here's the code; I'm definitely doing something wrong:

        DataTable dtCSV = ReadCsv(file, columns); // columns is a List<string>
        DataTable dt = RemoveDuplicateRecords(dtCSV, columns);

        private DataTable RemoveDuplicateRecords(DataTable dtCSV, List<string> columns)
        {
            DataView dv = dtCSV.DefaultView;
            string RowFilter = string.Empty;
            if (dt == null)
                dt = dv.ToTable().Clone();
            DataRow row = dtCSV.Rows[0];
            foreach (DataRow row in dtCSV.Rows)
            {
                try
                {
                    RowFilter = string.Empty;
                    foreach (string column in columns)
                    {
                        string col = column;
                        RowFilter += "[" + col + "]" + "='" + row[col].ToString().Replace("'", "''") + "' and ";
                    }
                    RowFilter = RowFilter.Substring(0, RowFilter.Length - 4);
                    dv.RowFilter = RowFilter;
                    DataRow dr = dt.NewRow();
                    bool result = RowExists(dt, RowFilter);
                    if (!result)
                    {
                        dr.ItemArray = dv.ToTable().Rows[0].ItemArray;
                        dt.Rows.Add(dr);
                    }
                }
                catch (Exception ex)
                {
                }
            }
            return dt;
        }
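
    The RowFilter approach re-filters the whole DataView for every row, which is roughly O(n²). A hedged alternative is to build a composite key per row and keep a HashSet of keys already seen, which is a single pass; a rough sketch (the method shape and separator choice are assumptions):

        // Requires System.Data, System.Collections.Generic and System.Linq.
        private static DataTable RemoveDuplicateRecords(DataTable source, List<string> columns)
        {
            DataTable result = source.Clone();                 // same schema, no rows
            var seen = new HashSet<string>();

            foreach (DataRow row in source.Rows)
            {
                // Build a composite key from the user-selected columns.
                string key = string.Join("\u0001",
                    columns.Select(c => Convert.ToString(row[c])));

                if (seen.Add(key))                             // Add returns false for duplicates
                    result.ImportRow(row);
            }
            return result;
        }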

  • How to apply minor updates to Drupal 6 on shared hosting

    - by marty.fried
    I've got Drupal working on a shared host, and I uploaded some modules from my home system successfully, but I've got the message that there is a security update for my version, and I should update immediately. I'm not sure how I'm supposed to do that. It seems like the update is an entire new installation. I originally installed it using the hosting company's installer, Fantastico. Should I simply over-write the existing installation with the new files? Or ignore the message? I realize I shouldn't over-write the sites folder, or anything I've modified. The instructions that come with the download seem to be for a major version upgrade, and are way too much trouble for frequent security updates. Searching Drupal's site shows many other methods, but no indication of anything official. And some were ridiculously error-prone, and not really useful. I don't have shell access to the hosting site, although I can pay extra to get it if I really need to. Or, maybe I can clone the site on my local Linux system, do the update using a script, then upload the whole thing. Does anyone have experience with this situation?

  • core dump during std::_List_node_base::unhook()

    - by Ron
    I have a program that uses std::list. The program uses threads which act on the std::list as producers and consumers. When a message is dealt with by the consumer, it is removed from the list using pop_front(). But during pop_front there is a core dump. The gdb trace is below. Could you help me get some insight into this issue?

        (gdb) bt full
        #0  0xf7531d7b in std::_List_node_base::unhook () from /usr/lib/libstdc++.so.6
        No symbol table info available.
        #1  0x0805c600 in std::list ::_M_erase (this=0x806b08c, __position={_M_node = 0x8075308})
            at /opt/target/usr/include/c++/4.2.0/bits/stl_list.h:1169
            __n = (class std::_List_node<myMsg> *) 0x0
        #2  0x0805c6af in std::list ::pop_front (this=0x806b08c)
            at /opt/target/usr/include/c++/4.2.0/bits/stl_list.h:750
        No locals.
        #3  0x0805afb6 in Base::run () at ../../src/Base.cc:342
            nSentBytes = 130
            tmpnm = {_vptr.myMsg = 0x80652c0, m_msg = 0x8075140 "{0130,MSG_TYPE=ND_FUNCTION,ORG_PNAME=P01vm01Ax,FUNCTION=LOG,PARAM_CNT=3,DATETIME=06/12/2010 02:59:26.187,LOGNAME=N,ENTRY=Debug 0 }", m_from = 0x8096ee0 "P01vm01Ax", m_to = 0x0, static m_logged = false, static m_pLogMutex = {_data = {_lock = 0, __count = 0, __owner = 0, __kind = 0, _nusers = 0, {_spins = 0, _list = {_next = 0x0}}}, __size = '\0' , __align = 0}}
            newMsg = {_vptr.myMsg = 0x80652c0, m_msg = 0x0, m_from = 0x0, m_to = 0x0, static m_logged = false, static m_pLogMutex = {_data = {_lock = 0, __count = 0, __owner = 0, __kind = 0, _nusers = 0, {_spins = 0, _list = {_next = 0x0}}}, __size = '\0' , __align = 0}}
            strBuffer = "{0440,MSG_TYPE=NG_FUNCTION,ORG_PNAME=mach01./opt/abc/VAvsk/abc/comp/DML/gendrs.pl.17560,DST_PNAME=P01vm01Ax,FUNCTION=DRS_REPLICATE,CAUSE_DML_ERROR=N,CORRUPT_DATA=N,CORRUPT_HEADER=N,DEBUG=Y,EXTENDED_RU"...
            fds = {{fd = 5, events = 1, revents = 0}}
            retval = 0
            iWaitTime = 0
        #4  0x0805b277 in startRun () at ../../src/Base.cc:454
        No locals.
        #5  0xf7effe7b in start_thread () from /lib/libpthread.so.0
        No symbol table info available.
        #6  0xf744d82e in clone () from /lib/libc.so.6
        No symbol table info available.
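
    A crash inside _List_node_base::unhook in a producer/consumer setup is very often a synchronization problem: std::list gives no thread-safety guarantees, so concurrent push_back/pop_front can corrupt the node links. If that is the cause (the trace alone cannot confirm it), every access to the list needs to be serialized; a minimal pthread-style sketch with invented names:

        #include <list>
        #include <pthread.h>

        static std::list<myMsg> msgQueue;                    // myMsg is the type from the backtrace
        static pthread_mutex_t queueLock = PTHREAD_MUTEX_INITIALIZER;

        void produce(const myMsg &m)
        {
            pthread_mutex_lock(&queueLock);
            msgQueue.push_back(m);                           // every mutation happens under the lock
            pthread_mutex_unlock(&queueLock);
        }

        bool consume(myMsg &out)
        {
            pthread_mutex_lock(&queueLock);
            bool have = !msgQueue.empty();
            if (have) {
                out = msgQueue.front();
                msgQueue.pop_front();                        // pop only while holding the lock
            }
            pthread_mutex_unlock(&queueLock);
            return have;
        }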

  • How should I avoid memoization causing bugs in Ruby?

    - by Andrew Grimm
    Is there a consensus on how to avoid memoization causing bugs due to mutable state? In this example, a cached result had its state mutated, and therefore gave the wrong result the second time it was called.

        class Greeter
          def initialize
            @greeting_cache = {}
          end

          def expensive_greeting_calculation(formality)
            case formality
              when :casual then "Hi"
              when :formal then "Hello"
            end
          end

          def greeting(formality)
            unless @greeting_cache.has_key?(formality)
              @greeting_cache[formality] = expensive_greeting_calculation(formality)
            end
            @greeting_cache[formality]
          end
        end

        def memoization_mutator
          greeter = Greeter.new
          first_person = "Bob"
          # Mildly contrived in this case,
          # but you could encounter this in more complex scenarios
          puts(greeter.greeting(:casual) << " " << first_person)  # => Hi Bob
          second_person = "Sue"
          puts(greeter.greeting(:casual) << " " << second_person) # => Hi Bob Sue
        end

        memoization_mutator

    Approaches I can see to avoid this are:

    1. greeting could return a dup or clone of @greeting_cache[formality]
    2. greeting could freeze the result of @greeting_cache[formality]. That'd cause an exception to be raised when memoization_mutator appends strings to it.
    3. Check all code that uses the result of greeting to ensure none of it does any mutating of the string.

    Is there a consensus on the best approach? Is the only disadvantage of doing (1) or (2) decreased performance? (I also suspect freezing an object may not work fully if it has references to other objects.) Side note: this problem doesn't affect the main application of memoization: as Fixnums are immutable, calculating Fibonacci sequences doesn't have problems with mutable state. :)
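
    For what it's worth, a small sketch of options (1) and (2) applied to the greeting method above; which trade-off is "best" is exactly the open question:

        # Option 2: freeze what goes into the cache, so callers that mutate it fail fast.
        def greeting(formality)
          unless @greeting_cache.has_key?(formality)
            @greeting_cache[formality] = expensive_greeting_calculation(formality).freeze
          end
          @greeting_cache[formality]
        end

        # Option 1: hand each caller its own copy, keeping the cached original pristine.
        def greeting(formality)
          @greeting_cache[formality] ||= expensive_greeting_calculation(formality)
          @greeting_cache[formality].dup
        end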

  • 'Forward-Compatible' Program Design

    - by Jeffrey Kern
    The majority of the questions I've asked here so far on Stack Overflow have been about how to implement individual concepts and techniques towards developing a software-based NES clone via the XNA environment. The small samples that I've thrown together on my PC work relatively well and everything. Except I hit a brick wall. How do I merge all of these samples together? Having proof-of-concept is amazing, except when you need it to go beyond just that. I now have samples strewn about that I'm trying to merge, some of them incomplete. And now I'm stuck with the chicken-and-egg situation where I would like to incorporate these samples together, to make sure they work, but I cannot without test data. And I don't have tools to create test data, because they'd need to be based off of the individual pieces that need to be put together. In my mind, I'm having nightmares with circular references. For my sample data, I am hoping to save it in XML and write a specification -- and then make sample data by hand -- but I'm too paranoid of manually creating an XML file full of incorrect data and blaming it on my code, or vice versa. It doesn't help that the end result of my work is graphics-oriented, which makes it interesting how a graphic on the screen can be visualized in XML nodes. I guess my question is this: What design patterns and disciplines exist in the coding world that address this type of concern? I've always relied on brute-force coding and restarting a project with a whole new code base in attempts to further my goals, but I doubt that would be the best way to do so. Within my college career, the majority of my programming was to work on simple projects that came out of a book, or with a given correct data set and a verifiable result. I don't have that, as my own design documents that I am going by could be terribly wrong.
