Search Results

Search found 22569 results on 903 pages for 'win32 process'.


  • capturing CMD batch file parameter list; write to file for later processing

    - by BobB
    I have written a batch file that is launched as a post processing utility by a program. The batch file reads ~24 parameters supplied by the calling program, stores them into variables, and then writes them to various text files. Since the max input variable in CMD is %9, it's necessary to use the 'shift' command to repeatedly read and store these individually to named variables. Because the program outputs several similar batch files, the result is opening several CMD windows sequentially, assigning variables and writing data files. This ties up the calling program for too long. It occurs to me that I could free up the calling program much faster if maybe there's a way to write a very simple batch file that can write all the command parameters to a text file, where I can process them later. Basically, just grab the parameter list, write it and done. Q: Is there some way to treat an entire series of parameter data as one big text string and write it to one big variable... and then echo the whole big thing to one text file? Then later read the string into %n variables when there's no program waiting to resume? Parameter list is something like 25 - 30 words, less than 200 characters. Sample parameter list: "First Name" "Lastname" "123 Steet Name Way" "Cityname" ST 12345 1004968 06/01/2010 "Firstname+Lastname" 101738 "On Account" 20.67 xy-1z 1 8.95 3.00 1.39 0 0 239 8.95 Items in quotes are processed as string variables. List is space delimited. Any suggestions?
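
    For what it's worth, CMD already has a one-shot answer to the literal question: %* expands to the entire parameter list as a single string, so no shift loop is needed just to capture it. A minimal sketch (the output path is illustrative):

        @echo off
        rem %* expands to every argument exactly as passed, quotes included.
        rem (The dot after echo guards the no-arguments edge case.)
        echo.%* >> "C:\temp\params.txt"

    When no program is waiting, a later script can read each saved line back and split it into tokens with a for /f loop.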


  • How to best future proof my application that needs to connect to Outlook?

    - by Troy
    I have a contact management application written in Delphi which has a “Sync with Outlook” feature that I developed 10 years ago. Now I’m going back to add some features and fix some bugs. This sync feature uses the Outlook object model to get started, but it has an optional mode called “Use MAPI Enhancements” where it uses pure MAPI to speed up how it looks for changes, and to allow notes to be synced with RTF instead of just plain text. I'm wondering whether supporting two parallel paths of execution is a good idea. If I went with all MAPI, I believe I'd avoid some security prompts, and I'd avoid situations where anti-virus "script-blocking" features block my app from connecting to Outlook. But on the down side, I believe my 32-bit app would not be able to connect to 64-bit Outlook 2010 using MAPI. And I wonder about the future of MAPI in general. If I stick with the Outlook object model, will my 32-bit app be able to connect to it (since it's out-of-process COM)? If so, this is a compelling reason to keep my Outlook object model execution path in place. But if not, and if my app needs to be compiled for x64, then why not just go with pure MAPI?


  • Ruby Design Problem for SQL Bulk Inserter

    - by crunchyt
    This is a Ruby design problem. How can I make a reusable flat file parser that can perform different data scrubbing operations per call, return the emitted results from each scrubbing operation to the caller, and perform bulk SQL insertions? Now, before anyone gets narky/concerned, I have written this code already in a very unDRY fashion, which is why I am asking any Ruby rockstars out there for some assistance. Basically, every time I want to perform this logic, I create two nested loops with custom processing in between, buffer each processed line to an array, and output to the DB as a bulk insert when the buffer size limit is reached. Although I have written lots of helpers, the main pattern is being copy-pasted every time. Not very DRY! Here is a Ruby/pseudocode example of what I am repeating:

        lines_from_file.each do |line|
          line.match(/some regex/).each do |sub_str|
            # Process substring into useful format
            # EG1: Simple gsub() call
            # EG2: Custom function call to do complex scrubbing
            #      and matching, emitting results to array
            # EG3: Loop to match opening/closing/nested brackets
            #      or other delimiters and emit results to array
          end
          # Add processed lines to a buffer as SQL insert statement
          @buffer << PREPARED INSERT STATEMENT
          # Flush buffer when "buffer size limit reached" or "end of file"
          if sql_buffer_full || last_line_reached
            @dbc.insert(SQL INSERTS FROM BUFFER)
            @buffer = nil
          end
        end

    I am familiar with Proc/Lambda functions. However, because I want to pass two separate procs to the one function, I am not sure how to proceed. I have some ideas about how to solve this, but I would really like to see what real Rubyists suggest. Over to you. Thanks in advance :D
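
    A minimal sketch of one way to DRY this up, passing the matcher and the scrubber in as lambdas; the names (bulk_load, BUFFER_LIMIT, @dbc, the items table) are illustrative, not from the original code:

        BUFFER_LIMIT = 500

        # Reusable parser: the caller supplies the match pattern and a
        # scrubbing lambda, so each call can process substrings differently.
        def bulk_load(lines, pattern, scrubber)
          buffer  = []
          emitted = []                       # returned to the caller at the end
          lines.each do |line|
            line.scan(pattern).each do |sub_str|
              row = scrubber.call(sub_str)   # per-call scrubbing logic
              emitted << row
              buffer  << "INSERT INTO items VALUES (#{row})"
            end
            if buffer.size >= BUFFER_LIMIT
              @dbc.insert(buffer.join(";\n"))  # bulk flush
              buffer.clear
            end
          end
          @dbc.insert(buffer.join(";\n")) unless buffer.empty?  # final flush
          emitted
        end

        # Each call passes its own scrubbing behaviour:
        # bulk_load(lines_from_file, /some regex/, ->(s) { s.gsub(/\s+/, ' ') })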


  • Can shared memory be read and validated without mutexes?

    - by Bribles
    On Linux I'm using shmget and shmat to set up a shared memory segment that one process will write to and one or more processes will read from. The data that is being shared is a few megabytes in size and when updated is completely rewritten; it's never partially updated. I have my shared memory segment laid out as follows:

        -------------------------
        | t0 | actual data | t1 |
        -------------------------

    where t0 and t1 are copies of the time when the writer began its update (with enough precision such that successive updates are guaranteed to have differing times). The writer first writes to t1, then copies in the data, then writes to t0. The reader, on the other hand, reads t0, then the data, then t1. If the reader gets the same value for t0 and t1 then it considers the data consistent and valid; if not, it tries again. Does this procedure ensure that if the reader thinks the data is valid then it actually is? Do I need to worry about out-of-order execution (OOE)? If so, would the reader using memcpy to get the entire shared memory segment overcome the OOE issues on the reader side? (This assumes that memcpy performs its copy linearly, ascending through the address space. Is that assumption valid?)
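
    For what it's worth, this layout is essentially a seqlock, and the usual answer is that the timestamp reads and the data copy must be separated by memory barriers, or the compiler and CPU are free to reorder them; memcpy itself makes no ordering promise. A sketch of a barrier-fenced reader, assuming GCC builtins and an illustrative segment size:

        #include <string.h>

        #define DATA_SIZE (4 * 1024 * 1024)   /* illustrative size */

        struct shm_seg {
            volatile unsigned long t0;
            char data[DATA_SIZE];
            volatile unsigned long t1;
        };

        /* Returns 1 and fills `out` if a consistent snapshot was read. */
        int try_read(struct shm_seg *seg, char *out)
        {
            unsigned long a = seg->t0;
            __sync_synchronize();          /* order the t0 load before the copy */
            memcpy(out, (const char *)seg->data, DATA_SIZE);
            __sync_synchronize();          /* order the copy before the t1 load */
            unsigned long b = seg->t1;
            return a == b;                 /* equal stamps => consistent data */
        }

    The writer needs matching barriers between its t1 store, the data copy, and the t0 store.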


  • Access 2007 and Special/Unicode Characters in SQL

    - by blockcipher
    I have a small Access 2007 database, and I need to be able to import data from an existing spreadsheet and put it into our new relational model. For the most part this seems to work pretty well. Part of the process is attempting to see if a record already exists in a target table using SQL. For example, if I extract book information out of the current row in the spreadsheet, it may contain a title and abstract. I use SQL to get the ID of a matching record, if it exists. This works fine except when I have data that's in a non-English language. In this case, it seems that there is some punctuation that is causing me problems. At least I think it's punctuation, as I do have some non-English fields without punctuation that don't give me any problems. Is there a built-in function that can escape these characters? Currently I have a small function that escapes the single-quote character, but that isn't enough. Or is there a list of Unicode characters that can interfere with how SQL wants data quoted? Thanks in advance.
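
    One way to sidestep escaping entirely, assuming the lookups run from VBA: bind the value through a parameterised DAO QueryDef, so quotes and Unicode punctuation in the data never touch the SQL text. A sketch with illustrative table and field names:

        ' Temporary QueryDef ("" name) with a typed parameter: the title text
        ' is bound as a value, so nothing in it needs to be escaped.
        Dim qd As DAO.QueryDef
        Dim rs As DAO.Recordset
        Set qd = CurrentDb.CreateQueryDef("", _
            "PARAMETERS pTitle TEXT(255); " & _
            "SELECT ID FROM Books WHERE Title = pTitle")
        qd.Parameters("pTitle").Value = strTitle
        Set rs = qd.OpenRecordset()
        If Not rs.EOF Then MsgBox "Existing ID: " & rs!ID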


  • [WEB] Local/Dev/Live deployment - best workflow

    - by Adam Kiss
    Hello! Situation: we are a little company with 3 people; each has a localhost webserver, and most projects (previous and current) are on one shared network disk. We have a virtual server, where some of our clients' sites and our own site are hosted. Our standard workflow is:

        Coder PC → Programmer localhost → dev domain (client.company.com) → live version (client.com)

    It often happens that two or three guys are working on the same project at the same time - one is on the dev version, two are on localhost. When finished, we try to synchronize the files on the dev version and ideally not mess up any files (thanks ILMV :]), which *knock knock* doesn't happen often. And then one of us deploys the dev version on the live webserver.

    Question: we are looking for a way to simplify this workflow while updating websites - ideally some sort of diff uploader or, probably, a VCS (Git/SVN/...), but we are not completely sure where to begin or which way would be ideal. Therefore I ask you, fellow stackoverflowers, for your experience with website/application deployment and your recommended workflow. We will probably also need to use a Mac in the process, so if that won't be a problem, even better. Thank you
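
    One common Git-based pattern, sketched below with illustrative paths, and only one of many reasonable setups: each developer pushes to a bare repository on the server, and a post-receive hook checks the pushed code out into the dev docroot; promoting to live is then another push or a tagged checkout.

        # On the server: a bare repo plus a hook that deploys on push.
        git init --bare /srv/git/client.git
        cat > /srv/git/client.git/hooks/post-receive <<'EOF'
        #!/bin/sh
        # Check the pushed branch out into the dev vhost's docroot.
        GIT_WORK_TREE=/var/www/client.company.com git checkout -f
        EOF
        chmod +x /srv/git/client.git/hooks/post-receive

        # On each developer machine (Mac or otherwise):
        git remote add dev ssh://server/srv/git/client.git
        git push dev master

    Git runs the same way on the Macs, so the mixed-platform requirement is not a problem.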


  • MySql Query lag time?

    - by Click Upvote
    When there are multiple PHP scripts running in parallel, each making an UPDATE query to the same record in the same table repeatedly, is it possible for there to be a 'lag time' before the table is updated by each query? I have basically 5-6 instances of a PHP script running in parallel, having been launched via cron. Each script gets all the records in the items table, and then loops through them and processes them. However, to avoid processing the same item more than once, I store the id of the last item being processed in a separate table. So this is how my code works:

        function getCurrentItem() {
            $sql = "SELECT currentItemId FROM settings";
            $result = $this->db->query($sql);
            return $result->get('currentItemId');
        }

        function setCurrentItem($id) {
            $sql = "UPDATE settings SET currentItemId='$id'";
            $this->db->query($sql);
        }

        $currentItem = $this->getCurrentItem();
        $sql = "SELECT * FROM items WHERE status='pending' AND id > $currentItem";
        $result = $this->db->query($sql);
        $items = $result->getAll();

        foreach ($items as $i) {
            // Check if $i has been processed by a different instance of the
            // script, and if so, leave it untouched.
            if ($this->getCurrentItem() > $i->id) continue;
            $this->setCurrentItem($i->id);
            // Process the item here
        }

    But despite all the precautions, most items are being processed more than once. Which makes me think that there is some lag time between the update queries being run by the PHP script and when the database actually updates the record. Is that true? And if so, what other mechanism should I use to ensure that the PHP scripts always get only the latest currentItemId, even when there are multiple scripts running in parallel? Would using a text file instead of the db help?
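
    For what it's worth, the gap here is usually not database lag but the race between getCurrentItem() and setCurrentItem(): two instances can both read the old id before either one writes. One standard fix is to claim each row with a single atomic UPDATE, sketched below against the same assumed db wrapper (affectedRows() is a hypothetical stand-in for whatever row-count accessor your wrapper exposes):

        foreach ($items as $i) {
            // Claim the row in one atomic statement: only the instance whose
            // UPDATE actually changes the row wins it.
            $sql = "UPDATE items SET status = 'processing'
                    WHERE id = {$i->id} AND status = 'pending'";
            $this->db->query($sql);
            if ($this->db->affectedRows() != 1) {
                continue; // another instance already claimed this item
            }
            // Process the item here, then mark it done.
            $this->db->query("UPDATE items SET status = 'done' WHERE id = {$i->id}");
        }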


  • Application's result affected by another running application

    - by Jamie Keeling
    This is a follow-on from my previous question, although this is about something else. I've been having a problem where, for some reason, the message that I pass from one process to another only displays the first letter, in this case "M". My application is based on an MSDN sample, so to make sure I hadn't missed something, I created a separate solution, added the MSDN sample (without any of my changes) and, unsurprisingly, it works fine. Now for the weird bit: when I have the MSDN sample running (as in debugging) and my own application running at the same time, the text prints out fine without any problems. The second I run mine on its own, without the original MSDN sample being open, it fails to work and only shows an "M". I've looked in the debugger and don't notice anything suspicious (it's a slightly dated picture; I've since fixed the data type inconsistency). Can anyone suggest a solution? I've never encountered anything like this before. To look at my source code, it's easier to just follow the link I posted at the top of the question; there's no point in me posting it twice. Thank you for any help.


  • Implementing a scrabble trainer

    - by bstullkid
    Hello, I've recently been playing a lot of online Scrabble, so I decided to make a program that quickly searches through a dictionary of 200,000+ words with an input of up to 26 letters. My first attempt failed, as it took a while when you input 8 or more letters (just a basic look through the dictionary, crossing out a letter if it's found). So I made a tree-like structure where each node contains an array of 26 of the same structure and a flag to indicate the end of a word; with that, it can output all possible words in under a second, even with an input of 26 characters. But it seems that when I input 12 or more letters with some of the same characters repeated, I get duplicates; can anyone see why I would be getting duplicates with this code? (I'll post my program at the bottom; see the sketch below for the kind of thing I mean.) Also, the next step once the duplicates are weeded out is to actually be able to input the letters on the game board and then have it calculate the best word you can make on a given board. I am having trouble trying to figure out a good algorithm that can analyze a scrabble board and an input of letters and output a result; the possible words that could be made I have no problem with, but actually checking a board efficiently (i.e. can this word fit here, or here, etc., without creating a non-dictionary word in the process on some other string of letters) is hard. Does anyone have an idea for an approach to that? (Given a scrabble board and an input of 7 letters, find all possible valid words or word sets that you can make.) Crap, I forgot to email myself the code from my other computer that's in another state... I'll post it on Monday when I get back there! BTW, the dictionary I'm using is SOWPODS (http://www.calvin.edu/~rpruim/scrabble/ospd3.txt)
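
    Without the posted code this is a guess, but the classic cause of those duplicates is trying each rack letter at a recursion level even when the same letter appears twice on the rack: both tiles produce the identical branch. A sketch of the usual fix, against an assumed node layout (child[26] plus an is_word flag):

        #include <stdio.h>

        struct node {
            struct node *child[26];
            int is_word;
        };

        /* Enumerate words; `rack` holds the remaining letters, `len` many. */
        void search(struct node *n, char *rack, int len, char *word, int depth)
        {
            int tried[26] = {0};
            for (int i = 0; i < len; i++) {
                int c = rack[i] - 'a';
                if (tried[c] || n->child[c] == NULL)
                    continue;              /* skip dead ends and repeat letters */
                tried[c] = 1;              /* each distinct letter once per level */
                word[depth] = rack[i];
                if (n->child[c]->is_word) {
                    word[depth + 1] = '\0';
                    puts(word);
                }
                char saved = rack[i];      /* remove this tile: swap-with-last */
                rack[i] = rack[len - 1];
                search(n->child[c], rack, len - 1, word, depth + 1);
                rack[i] = saved;           /* restore the rack for the next i */
            }
        }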


  • Easily (as in WYSIWYG) customize the docbook output

    - by Sukima
    I've used DocBook in the past and I love the idea behind the separation of content from presentation. I am very comfortable editing XML directly. In my extensive search to find the best documenting solution for my needs, I keep coming back to this one pipeline: DocBook → build system (ant, make, etc.) → output. I have seen lots of information concerning the best WYSIWYG, XML, and text editors for writing DocBook, including alternative markup languages like AsciiDoc. All these solutions focus on the creation of DocBook, or on the nightmare of the DocBook tool chain. No one ever addresses the output side, other than to say "just use XSL" or "custom scripts". When tasked to make a document or manual, I don't want to spend countless hours attempting to reprogram, customize, and modify the XSL, CSS, and shell scripts (i.e. O'Reilly books). That is a very arduous task. My query: is there a tool that makes the customizing easier? And is there anything similar to, say, Pages or Word, in that the user creates a template and the tool chain does the rest? Attempting to do a visual task like pretty logos, and fixing all the broken layouts that the default XSL comes up with (pagination is a mess), is very difficult from a text editor. Content is easy. Editing DocBook XSL was truly a nightmare when I did it in the past. I've searched and I find lots of info on XML editors but nothing on XSL editors. Or am I lacking a key understanding of the process? Thanks.
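
    No WYSIWYG for the output side to suggest, but one point that makes the XSL half less arduous: the supported route is a small customization layer that imports the stock stylesheet and only overrides parameters, rather than editing the distributed XSL files themselves. A sketch (the import path and parameter values are illustrative):

        <?xml version="1.0"?>
        <!-- custom-fo.xsl: import the stock DocBook FO stylesheet, then
             override parameters; the distributed files stay untouched. -->
        <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                        version="1.0">
          <xsl:import href="docbook-xsl/fo/docbook.xsl"/>
          <xsl:param name="paper.type">A4</xsl:param>
          <xsl:param name="double.sided">1</xsl:param>
          <xsl:param name="header.image.filename">logo.svg</xsl:param>
        </xsl:stylesheet>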


  • How To Share Information Between Django and Javascript?

    - by Randy
    So I am pretty new to both Django and JavaScript (I am using jQuery) and I am wondering if I am doing a hack, or if there are more slick ways to send client-side-displayed database ids back to the Django server side. Here is my process: I have a dataTable (http://datatables.net) that displays rows of data by using the bProcessing option to retrieve records from the database via AJAX. The URL in my urls.py is something like:

        url(r'^assets/activitylog/(?P<cid>.*)$', views.getActivityTable_ajax, name="activitylog_table"),

    and my dataTable AJAX-relevant code is something like:

        "sAjaxSource": "/assets/activitylog/" + getIDFromHTML(),

    where the JavaScript function getIDFromHTML(), which grabs the <cid> used by the Django view, is simply:

        function getIDFromHTML(){
            // Simply return the text in the #release_id div element from the HTML
            return $("#release_id").html();
        };

    This is the part that seems "hacky" to me. I am inserting into my template code the database id that I am using in the dataTables URL (with display:none in the CSS) just so I can pass it back to the view. Most of this is necessitated because one cannot use Django template tags in JavaScript code unless the code is embedded into the HTML itself, which I am not (and will not) do. The only other thing that I have found is to change the URL, getting rid of the parameter passed in:

        url(r'^assets/activitylog', views.getActivityTable_ajax, name="activitylog_table"),

    and change the view code to:

        def getActivityTable_ajax(request):
            """Returns the activity for a given pid from HTTP GET ajax request"""
            pid = int(urlparse.urlparse(request.META['HTTP_REFERER']).path.split('/')[-1])
            # rest of view code here...

    since the id that I need is on the end of this referer URL. This way I don't have to monkey around with embedding the hidden database id into the HTML and passing it back via AJAX to the table-population view code. Is it okay to use HTTP_REFERER in the request object in this manner? Am I going about this in the totally wrong way? Thanks in advance!
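
    One middle ground that avoids both the hidden div and HTTP_REFERER (which proxies and privacy settings can strip): render the id as a data- attribute on the table element in the template, then read it with jQuery. A sketch; the element id and attribute name are illustrative:

        <!-- In the template: -->
        <table id="activity_table" data-release-id="{{ release.id }}"> ... </table>

        // In the script, jQuery's .data() reads the attribute:
        "sAjaxSource": "/assets/activitylog/" + $("#activity_table").data("release-id"),

    This keeps the id out of the visible markup without relying on a display:none element, and no template tags end up inside the JavaScript file.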


  • Why do I get empty request from the Jakarta Commons HttpClient?

    - by polyurethan
    I have a problem with the Jakarta Commons HttpClient. Before my self-written HttpServer gets the real request, there is one request which is completely empty. That's the first problem. The second problem is that sometimes the request data ends after the third or fourth line of the HTTP request:

        POST / HTTP/1.1
        User-Agent: Jakarta Commons-HttpClient/3.1
        Host: 127.0.0.1:4232

    For debugging I am using the Axis TCPMonitor. There everything is fine, except the empty request. How I process the stream:

        StringBuffer requestBuffer = new StringBuffer();
        InputStreamReader is = new InputStreamReader(socket.getInputStream(), "UTF-8");
        int byteIn = -1;
        do {
            byteIn = is.read();
            if (byteIn > 0) {
                requestBuffer.append((char) byteIn);
            }
        } while (byteIn != -1 && is.ready());
        String requestData = requestBuffer.toString();

    How I send the request:

        client.getParams().setSoTimeout(30000);
        method = new PostMethod(url.getPath());
        method.getParams().setContentCharset("utf-8");
        method.setRequestHeader("Content-Type", "application/xml; charset=utf-8");
        method.addRequestHeader("Connection", "close");
        method.setFollowRedirects(false);
        byte[] requestXml = getRequestXml();
        method.setRequestEntity(new InputStreamRequestEntity(new ByteArrayInputStream(requestXml)));
        client.executeMethod(method);
        int statusCode = method.getStatusCode();

    Does anyone have an idea how to solve these problems? Alex
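
    A hedged guess at the truncation, at least: InputStreamReader.ready() reports whether data is already buffered, not whether the request is finished, so the loop stops whenever the client pauses mid-send. A sturdier sketch reads headers up to the blank line and then Content-Length bytes; it is simplified (it conflates bytes and chars, which breaks on multi-byte UTF-8, so a real server must count raw bytes):

        StringBuffer requestBuffer = new StringBuffer();
        BufferedReader in = new BufferedReader(
                new InputStreamReader(socket.getInputStream(), "UTF-8"));
        int contentLength = 0;
        String line;
        // Headers end at the first empty line.
        while ((line = in.readLine()) != null && !line.isEmpty()) {
            if (line.toLowerCase().startsWith("content-length:")) {
                contentLength = Integer.parseInt(line.substring(15).trim());
            }
            requestBuffer.append(line).append("\r\n");
        }
        // Then read exactly the advertised body length, blocking as needed.
        char[] body = new char[contentLength];
        int read = 0;
        while (read < contentLength) {
            int n = in.read(body, read, contentLength - read);
            if (n < 0) break;
            read += n;
        }
        requestBuffer.append(body, 0, read);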


  • Uploading a file fails under WordPress

    - by Ash
    I'm using WordPress and I'm following the W3Schools guide for uploading a file. HTML code:

        <html>
        <body>
        <form action="upload_file.php" method="post" enctype="multipart/form-data">
          <label for="file">Filename:</label>
          <input type="file" name="file" id="file" />
          <br />
          <input type="submit" name="submit" value="Submit" />
        </form>
        </body>
        </html>

    PHP code (upload_file.php):

        <?php
        if ($_FILES["file"]["error"] > 0) {
            echo "Error: " . $_FILES["file"]["error"] . "<br />";
        } else {
            echo "Upload: " . $_FILES["file"]["name"] . "<br />";
            echo "Type: " . $_FILES["file"]["type"] . "<br />";
            echo "Size: " . ($_FILES["file"]["size"] / 1024) . " Kb<br />";
            echo "Stored in: " . $_FILES["file"]["tmp_name"];
        }
        ?>

    The HTML code is pasted into a PHP page template, and the PHP file sits under the WP installation directory under www. The problem is, when I submit the file I get Error: 1. If I comment out the "if" part of the PHP code and leave the "else" part, I get:

        Upload: IMG_4258.JPG
        Type:
        Size: 0 Kb
        Stored in:

    So at least I know the PHP code is running. But what's causing it to fail? Is there a problem with the code, or is WordPress meddling with the process?
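
    For the record, error code 1 is PHP's UPLOAD_ERR_INI_SIZE: the uploaded file is bigger than the upload_max_filesize limit in php.ini, which is also why the size arrives as 0. A quick check (a sketch; the echoed text is illustrative):

        <?php
        // $_FILES error code 1 == UPLOAD_ERR_INI_SIZE.
        if ($_FILES["file"]["error"] === UPLOAD_ERR_INI_SIZE) {
            echo "File exceeds upload_max_filesize ("
               . ini_get('upload_max_filesize') . ")";
        }
        ?>

    Raising upload_max_filesize (and post_max_size, which caps the whole request) in php.ini or the host's configuration is the usual fix.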


  • Delphi: Error when starting MCI

    - by marco92w
    I use the TMediaPlayer component for playing music. It works fine with most of my tracks, but it doesn't work with some of them. When I try to play those, an error message is shown (a screenshot originally appeared here) which is German but roughly means:

        In the project pMusicPlayer.exe an exception of the class EMCIDeviceError
        occurred. Message: "Error when starting MCI.". Process was stopped.
        Continue with "Single Command/Statement" or "Start".

    The program quits directly after calling the procedure "Play" of TMediaPlayer. This error occurred with the following file, for example:

        file size: 7.40 MB
        duration:  4:02 minutes
        bitrate:   256 kBit/s

    I re-encoded this file with a bitrate of 128 kBit/s, and thus a file size of 3.70 MB: it works fine! What's wrong with the first file? Windows Media Player and other programs can play it without any problems. Is it possible that Delphi's TMediaPlayer cannot handle big files (e.g. 5 MB) or files with a high bitrate (e.g. 256 kBit/s)? What can I do to solve the problem?
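
    One way to get past the generic EMCIDeviceError, as a diagnostic sketch (assumes the MMSystem unit; the path and alias are illustrative): drive MCI directly and ask it for its own error text. The exact error code is worth having, since MCI's stock mpegvideo driver is known to be picky about some MP3 encodings.

        uses MMSystem, SysUtils, Dialogs;

        procedure CheckTrack;
        var
          err: MCIERROR;
          msg: array[0..255] of Char;
        begin
          // Open the file through MCI ourselves instead of via TMediaPlayer.
          err := mciSendString('open "C:\music\track.mp3" type mpegvideo alias trk',
                               nil, 0, 0);
          if err <> 0 then
          begin
            mciGetErrorString(err, msg, SizeOf(msg));
            ShowMessage(StrPas(msg));   // the driver's own error description
          end
          else
            mciSendString('close trk', nil, 0, 0);
        end;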


  • Excel automation: Close event missing

    - by chiccodoro
    Another hi all, I am doing Excel automation via Interop in C#, and I want to be informed when a workbook is closed. However, there is no Close event on the workbook, nor a Quit event on the application. Has anybody done this before? How can I write a piece of code which reacts to the workbook being closed (and which is only executed if the workbook is really closed)? Ideally it should run after the workbook has closed, so I can rely on the file to reflect all changes. Details about what I found so far: There is a BeforeClose() event, but if there are unsaved changes, this event is raised before the user is asked whether to save them, so at the moment I process the event I don't have the final file and I cannot release the COM objects - both things that I need to have/do. I don't even know whether the workbook will actually be closed, since the user might choose to abort closing. Then there is a BeforeSave() event. If the user chooses "Yes" to save unsaved changes, then BeforeSave() is executed after BeforeClose(). However, if the user chooses to abort and then hits File/Save, the exact same order of events is executed. Further, if the user chooses "No", BeforeSave() isn't executed at all. The same ambiguity holds as long as the user hasn't clicked any of these options.
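
    One workaround I've seen, as a sketch rather than a polished solution: treat BeforeClose as "a close was attempted", then check shortly afterwards whether the workbook actually disappeared from Application.Workbooks, since the user may have cancelled. The callback name and delay are illustrative:

        using System.Linq;
        using Excel = Microsoft.Office.Interop.Excel;

        void WatchWorkbook(Excel.Application app, Excel.Workbook workbook)
        {
            workbook.BeforeClose += delegate(ref bool cancel)
            {
                string name = workbook.Name;
                var timer = new System.Timers.Timer(500) { AutoReset = false };
                timer.Elapsed += (s, e) =>
                {
                    // If the name is gone from the collection, the close
                    // really happened and the file on disk is final.
                    // (In production, marshal back to the automation thread
                    // before touching COM objects from this timer callback.)
                    bool stillOpen = app.Workbooks.Cast<Excel.Workbook>()
                                        .Any(wb => wb.Name == name);
                    if (!stillOpen)
                        OnWorkbookReallyClosed(name);   // hypothetical callback
                };
                timer.Start();
            };
        }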


  • Advice Please: SQL Server Identity vs Unique Identifier keys when using Entity Framework

    - by c.batt
    I'm in the process of designing a fairly complex system. One of our primary concerns is supporting SQL Server peer-to-peer replication. The idea is to support several geographically separated nodes. A secondary concern has been using a modern ORM in the middle tier. Our first choice has always been Entity Framework, mainly because the developers like to work with it. (They love the LINQ support.) So here's the problem: with peer-to-peer replication in mind, I settled on using uniqueidentifier with a default value of newsequentialid() for the primary key of every table. This seemed to provide a good balance between avoiding key collisions and reducing index fragmentation. However, it turns out that the current version of Entity Framework has a very strange limitation: if an entity's key column is a uniqueidentifier (GUID), then it cannot be configured to use the default value (newsequentialid()) provided by the database. The application layer must generate the GUID and populate the key value. So here's the debate:

        1. Abandon Entity Framework and use another ORM:
           - use NHibernate and give up LINQ support, or
           - use linq2sql and give up future support (not to mention getting
             bound to SQL Server on the DB side)
        2. Abandon GUIDs and go with another PK strategy
        3. Devise a method to generate sequential GUIDs (COMBs?) at the
           application layer

    I'm leaning towards option 1 with linq2sql (my developers really like linq2[stuff]) and option 3. That's mainly because I'm somewhat ignorant of alternative key strategies that support the replication scheme we're aiming for while also keeping things sane from a developer's perspective. Any insight or opinion would be greatly appreciated.
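
    For option 3, a minimal sketch of a client-side COMB-style generator (an illustration, not EF-specific): random bytes up front, with the timestamp placed in the six bytes SQL Server compares first when ordering uniqueidentifiers, so inserts stay roughly sequential without newsequentialid():

        using System;

        static class CombGuid
        {
            // Sequential-ish GUID: SQL Server orders uniqueidentifier values
            // by bytes 10..15 first, so the timestamp goes there.
            public static Guid NewComb()
            {
                byte[] g = Guid.NewGuid().ToByteArray();
                byte[] t = BitConverter.GetBytes(DateTime.UtcNow.Ticks);
                if (BitConverter.IsLittleEndian)
                    Array.Reverse(t);           // big-endian: high bytes first
                Array.Copy(t, 2, g, 10, 6);     // 6 high-order timestamp bytes
                return new Guid(g);
            }
        }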


  • How do I get jQuery's Uploadify plugin to work with ASP.NET MVC?

    - by KingNestor
    I'm in the process of trying to get the jQuery plugin Uploadify to work with ASP.NET MVC. I've got the plugin showing up fine with the following JavaScript snippet:

        <script type="text/javascript">
            $(document).ready(function() {
                $('#fileUpload').fileUpload({
                    'uploader': '/Content/Flash/uploader.swf',
                    'script': '/Placement/Upload',
                    'folder': '/uploads',
                    'multi': 'true',
                    'buttonText': 'Browse',
                    'displayData': 'speed',
                    'simUploadLimit': 2,
                    'cancelImg': '/Content/Images/cancel.png'
                });
            });
        </script>

    which seems like all is well and good. If you notice, the "script" attribute is set to /Placement/Upload, which is my Placement controller and my Upload action. The main problem is, I'm having difficulty getting this action to fire to receive the file. I've set a breakpoint on that action, and when I select a file to upload, it isn't getting executed. I've tried changing the method signature based off this article:

        public string Upload(HttpPostedFileBase FileData)
        {
            /*
             * Do something with the FileData
             */
            return "Upload OK!";
        }

    But this still doesn't fire. Can anyone help me get the Upload controller action's signature right so it will actually fire? I can then handle dealing with the file data myself. I just need some help getting the action to fire.
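
    A couple of usual suspects, sketched below: by default Uploadify posts the file under the form field name Filedata, and the Flash player doesn't carry the browser's cookies, so authentication filters can silently reject the request before the action fires. One hedged version of the action ([HttpPost] is MVC 2-style; on MVC 1 use [AcceptVerbs(HttpVerbs.Post)]):

        using System.IO;
        using System.Web;
        using System.Web.Mvc;

        public class PlacementController : Controller
        {
            [HttpPost]
            public ActionResult Upload(HttpPostedFileBase fileData)
            {
                // Fall back to the raw collection in case the parameter name
                // doesn't bind to Uploadify's "Filedata" field.
                var file = fileData ?? Request.Files["Filedata"];
                if (file != null && file.ContentLength > 0)
                {
                    string path = Path.Combine(Server.MapPath("~/uploads"),
                                               Path.GetFileName(file.FileName));
                    file.SaveAs(path);
                }
                return Content("Upload OK!");
            }
        }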


  • NUnit integration programmatically with spring

    - by harkon
    Hi! I have designed a component-based architecture framework, and I use NUnit for isolated testing - okay so far. Now I want to enable integration tests, which use the real implementations of the existing components. Each component has a life cycle (init, start and stop), and I created an NUnit component: in its start section, the NUnit console runner is executed. Okay - if I have a test fixture class in my DLLs in the execution path, the runner executes it - fine! But, and this is crucial: each implementation to be tested already exists in the process, and I want to use those instances for testing. If I use the NUnit runner in the current way, each instance is created twice. And above all: I have a Spring container and an implementation registry. Via this registry I can get access to all instances in the process. But how do I give the test fixtures access to the existing registry? Sure, I could start the component architecture framework in the startup of the NUnit runner - but this is not what I want. My guide is the Apache Cactus framework (with JUnit and Tomcat, JBoss, etc.). Can someone help? Thanks a lot! Check: http://cone.codeplex.com
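
    One pragmatic sketch, under a big assumption: it only works if the tests execute in the same process as the live components (i.e. the runner is hosted in-process rather than spawned as a separate console executable). Expose the running registry through a well-known static holder that fixtures read; every name below is hypothetical, including the registry interface:

        // Hypothetical registry surface, standing in for your real one.
        public interface IImplementationRegistry
        {
            T Resolve<T>();
        }

        public interface IPaymentService { }

        // Set once by the framework's start section, read by fixtures.
        public static class RunningContext
        {
            public static IImplementationRegistry Registry { get; set; }
        }

        // In the component's start phase, before the tests are kicked off:
        //     RunningContext.Registry = existingRegistry;

        [NUnit.Framework.TestFixture]
        public class PaymentIntegrationTests
        {
            [NUnit.Framework.Test]
            public void ResolvesTheLiveInstance()
            {
                var svc = RunningContext.Registry.Resolve<IPaymentService>();
                NUnit.Framework.Assert.IsNotNull(svc);
            }
        }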


  • OpenID Authentication using AuthLogic Error

    - by Steve
    Hi, I am trying to implement OpenID authentication using Authlogic. I have installed the open_id_authentication plugin in the process, but when I enter

        rake open_id_authentication:db:create --trace

    I get the following error:

        (in /Users/felix/login)
        rake aborted!
        Don't know how to build task 'open_id_authentication:db:create'
        /usr/local/lib/ruby/gems/1.9.1/gems/rake-0.8.7/lib/rake.rb:1728:in `[]'
        /usr/local/lib/ruby/gems/1.9.1/gems/rake-0.8.7/lib/rake.rb:2050:in `invoke_task'
        /usr/local/lib/ruby/gems/1.9.1/gems/rake-0.8.7/lib/rake.rb:2029:in `block (2 levels) in top_level'
        /usr/local/lib/ruby/gems/1.9.1/gems/rake-0.8.7/lib/rake.rb:2029:in `each'
        /usr/local/lib/ruby/gems/1.9.1/gems/rake-0.8.7/lib/rake.rb:2029:in `block in top_level'
        /usr/local/lib/ruby/gems/1.9.1/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
        /usr/local/lib/ruby/gems/1.9.1/gems/rake-0.8.7/lib/rake.rb:2023:in `top_level'
        /usr/local/lib/ruby/gems/1.9.1/gems/rake-0.8.7/lib/rake.rb:2001:in `block in run'
        /usr/local/lib/ruby/gems/1.9.1/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
        /usr/local/lib/ruby/gems/1.9.1/gems/rake-0.8.7/lib/rake.rb:1998:in `run'
        /usr/local/lib/ruby/gems/1.9.1/gems/rake-0.8.7/bin/rake:31:in `<top (required)>'
        /usr/local/bin/rake:19:in `load'
        /usr/local/bin/rake:19:in `<main>'

    Can someone tell me what I am doing incorrectly? Thanks



  • How much detail should be in a project plan or spec?

    - by DeanMc
    I have an issue that I feel many programmers can relate to... I have worked on many small-scale projects. After my initial paper brainstorm I tend to start coding. What I come up with is usually a rough working model of the actual application. I design in a disconnected fashion, so I am talking about underlying code libraries; user interfaces come last, as the library usually dictates what is needed in the UI. As my projects get bigger, I worry that so should my "spec" or design document. The above paragraph, from my investigations, is echoed all across the internet in one fashion or another. Where a UI is concerned there is a bit more information, but it is UI-specific and does not relate to code libraries. What I am beginning to realise is that maybe code is code is code. It seems from my extensive research that there is no 1:1 mapping between a design document and the code. When I need to research a topic I dump information into OneNote, and from there I prioritise features into versions and then into related chunks, so that development runs in a fairly linear fashion; my tasks tend to look like so:

        - Implement Binary File Reader
        - Implement Binary File Writer
        - Create Object to encapsulate Data for expression to the caller

    Now any programmer worth his salt is aware that between those three to-do items could be a potential wall of code that expands out to multiple files. I have tried to map the complete code process for each task, but I simply don't think it can be done effectively. By the time one mangles pseudocode it is essentially code anyway, so the time investment is negated. So my question is this: am I right in assuming that the best documentation is the code itself? We are all in agreement that a high-level overview is needed. How high should this be? Do you design to statement, class or concept level? What works for you?


  • Using pointers to adjust global objects in objective-c

    - by Rob
    Ok, so I am working with two sets of data that are extremely similar, and at the same time these data sets are both global NSMutableArrays within the object:

        data_set_one = [[NSMutableArray alloc] init];
        data_set_two = [[NSMutableArray alloc] init];

    Two new NSMutableArrays are loaded, which need to be added to the old, existing data. These arrays are also global:

        xml_dataset_one = [[NSMutableArray alloc] init];
        xml_dataset_two = [[NSMutableArray alloc] init];

    To reduce code duplication (and because these data sets are so similar), I wrote a void method within the class to handle the data combination process for both arrays:

        - (void)constructData:(NSMutableArray *)data
            fromDownloadArray:(NSMutableArray *)down
            withMatchSelector:(NSString *)sel_str

    Now, I have a decent understanding of object-oriented programming, so I was thinking that if I were to invoke the method with the global arrays like so...

        [self constructData:data_set_one fromDownloadArray:xml_dataset_one withMatchSelector:@"id"];

    ...then the global array (data_set_one) would reflect the changes that happen to the array within the method. Sadly, this is not the case: data_set_one doesn't reflect the changes (e.g. new objects within the array) outside of the method. Here is a code snippet of the problem:

        // data_set_one is empty
        // xml_dataset_one has a few objects
        [self constructData:data_set_one fromDownloadArray:xml_dataset_one withMatchSelector:@"id"];
        // data_set_one should now match xml_dataset_one, but when echoed to
        // the screen it appears to remain empty

    And here is the gist of the code for the method; any help is appreciated:

        - (void)constructData:(NSMutableArray *)data
            fromDownloadArray:(NSMutableArray *)down
            withMatchSelector:(NSString *)sel_str
        {
            if ([data count] == 0) {
                data = down; // set data equal to downloaded data
            } else if ([down count] == 0) {
                // download yields no results, do nothing
            } else {
                // combine the two arrays here
            }
        }

    This project is not ARC-enabled. Thanks for the help guys! Rob
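
    For what it's worth, the usual explanation and fix: Objective-C passes the pointer by value, so `data = down;` only rebinds the method's local parameter, and the caller's global still points at the original (empty) array. Mutating the array the caller handed in makes the change visible everywhere. A sketch:

        - (void)constructData:(NSMutableArray *)data
            fromDownloadArray:(NSMutableArray *)down
            withMatchSelector:(NSString *)sel_str
        {
            if ([data count] == 0) {
                // Fill the caller's array in place instead of reassigning
                // the local pointer.
                [data addObjectsFromArray:down];
            } else if ([down count] == 0) {
                // download yields no results, do nothing
            } else {
                // merge in place here, matching on sel_str
            }
        }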


  • Managing multiple customer databases in ASP.NET MVC application

    - by Robert Harvey
    I am building an application that requires separate SQL Server databases for each customer. To achieve this, I need to be able to create a new customer folder, put a copy of a prototype database in the folder, change the name of the database, and attach it as a new "database instance" to SQL Server. The prototype database contains all of the required table, field and index definitions, but no data records. I will be using SMO to manage attaching, detaching and renaming the databases. In the process of creating the prototype database, I tried attaching a copy of the database (companion .MDF, .LDF pair) to SQL Server, using Sql Server Management Studio, and discovered that SSMS expects the database to reside in c:\program files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\MyDatabaseName.MDF Is this a "feature" of SQL Server? Is there a way to manage individual databases in separate directories? Or am I going to have to put all of the customer databases in the same directory? (I was hoping for a little better control than this). NOTE: I am currently using SQL Server Express, but for testing purposes only. The production database will be SQL Server 2008, Enterprise version. So "User Instances" are not an option.
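
    For what it's worth, the default data directory is only a default: both T-SQL and SMO will attach database files from any folder the SQL Server service account can read and write. A sketch with illustrative paths:

        -- Attach a customer database from its own folder.
        CREATE DATABASE CustomerA
        ON (FILENAME = 'D:\Customers\CustomerA\MyDatabaseName.MDF'),
           (FILENAME = 'D:\Customers\CustomerA\MyDatabaseName_log.LDF')
        FOR ATTACH;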


  • Binding a null variable in a PreparedStatement

    - by Paul Tomblin
    I swear this used to work, but it's not working in this case. I'm trying to match col1, col2 and col3, even if one or more of them is null. I know that in some languages I've had to resort to circumlocutions like ((? is null AND col1 is null) OR col1 = ?). Is that required here?

        PreparedStatement selStmt = getConn().prepareStatement(
            "SELECT * " +
            "FROM tbl1 " +
            "WHERE col1 = ? AND col2 = ? AND col3 = ?");
        try {
            int col = 1;
            setInt(selStmt, col++, col1);
            setInt(selStmt, col++, col2);
            setInt(selStmt, col++, col3);
            ResultSet rs = selStmt.executeQuery();
            try {
                while (rs.next()) {
                    // process row
                }
            } finally {
                rs.close();
            }
        } finally {
            selStmt.close();
        }

        // Does the equivalent of stmt.setInt(col, i) but preserves nullness.
        protected static void setInt(PreparedStatement stmt, int col, Integer i)
                throws SQLException {
            if (i == null)
                stmt.setNull(col, java.sql.Types.INTEGER);
            else
                stmt.setInt(col, i);
        }
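
    As far as I know, the circumlocution is still required: in SQL, col = NULL is never true, so binding with setNull() produces a predicate that matches no rows rather than matching NULLs. A sketch of the query with each value bound twice, reusing the setInt helper above:

        PreparedStatement selStmt = getConn().prepareStatement(
            "SELECT * FROM tbl1 " +
            "WHERE (col1 = ? OR (? IS NULL AND col1 IS NULL)) " +
            "  AND (col2 = ? OR (? IS NULL AND col2 IS NULL)) " +
            "  AND (col3 = ? OR (? IS NULL AND col3 IS NULL))");
        int col = 1;
        // Each column's value is bound twice: once for the equality test
        // and once for the IS NULL test.
        for (Integer v : new Integer[] { col1, col1, col2, col2, col3, col3 }) {
            setInt(selStmt, col++, v);
        }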


  • What is a good automated data import method for SQL Server?

    - by Joel Potter
    I'm in the process of porting some SQL Server 2005 databases to SQL Server 2008. One of these databases has an associated import application (a Windows task) which uses SSIS with a DTS package to import a large dataset from an MS Access database nightly. In upgrading to SQL Server 2008, I discovered that I can't run the same console application which has been performing the imports, due to the missing manageddts DLL in SQL Server 2008. It's several years old and in need of a rewrite for various reasons; plus, I've been fairly unhappy with DTS in general. The original reason DTS was chosen was speed (a 5-minute import time compared to 30+ for ADO.NET). The format of the data to import is out of my control (the client likes Access). I would also like to be able to run the import from a machine completely separate from the server hosting SQL Server, preferably with minimal SQL features installed. Options I've considered:

        - Creating an Access application to connect to both databases (SQL
          Server and Access) and perform the import (ugh!)
        - Revisiting ADO.NET to see if the original implementation was
          poorly written
        - Updated SSIS packages

    What other technologies should I be considering for this job?
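
    If the ADO.NET option gets revisited, one thing worth checking is whether the old implementation inserted row by row; SqlBulkCopy streams an entire reader in bulk and runs fine from a client machine with no SQL Server features installed. A sketch (connection strings and table names illustrative):

        using System.Data.OleDb;
        using System.Data.SqlClient;

        // Stream rows straight from the Access table into SQL Server.
        using (var src = new OleDbConnection(accessConnectionString))
        using (var cmd = new OleDbCommand("SELECT * FROM ImportTable", src))
        {
            src.Open();
            using (var reader = cmd.ExecuteReader())
            using (var bulk = new SqlBulkCopy(sqlServerConnectionString))
            {
                bulk.DestinationTableName = "dbo.ImportTable";
                bulk.BatchSize = 5000;          // commit in chunks
                bulk.BulkCopyTimeout = 0;       // no timeout for big loads
                bulk.WriteToServer(reader);
            }
        }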

