Search Results

Search found 93727 results on 3750 pages for 'code documentation'.


  • share code between check and process methods

    - by undu
    My job is to refactor an old library for GIS vector data processing. The main class encapsulates a collection of building outlines, and offers different methods for checking data consistency. Those checking functions have an optional parameter that allows them to also perform some processing. For instance: std::vector<Point> checkIntersections(int process_mode = 0); This method tests if some building outlines are intersecting, and returns the intersection points. But if you pass a non-zero argument, the method will modify the outlines to remove the intersections. I think it's pretty bad (at the call site, a reader not familiar with the code base will assume that a method called checkSomething only performs a check and doesn't modify data) and I want to change this. I also want to avoid code duplication, as the check and process methods are mostly similar. So I was thinking of something like this: // a private worker std::vector<Point> workerIntersections(int process_mode = 0) { // it's the equivalent of the current checkIntersections, it may perform // a process depending on process_mode } // public interfaces for check and process std::vector<Point> checkIntersections() /* const */ { return workerIntersections(0); } std::vector<Point> processIntersections(int process_mode /*I have different process modes*/) { return workerIntersections(process_mode); } But that forces me to break const correctness, as workerIntersections is a non-const method. How can I separate check and process, avoiding code duplication and keeping const-correctness?
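
    A minimal sketch of one way out, assuming the existing worker really can be split into a detection half and a repair half (the class and helper names below are illustrative, not from the original library): keep the check method const and detection-only, and let the non-const process method reuse that detection before it mutates the outlines.

      #include <vector>

      struct Point { double x, y; };   // stand-in for the library's real Point type

      class BuildingOutlines {
      public:
          // Detection only: safe to call on a const object.
          std::vector<Point> checkIntersections() const
          {
              return findIntersections();
          }

          // Detect, then repair: the only mutating entry point.
          std::vector<Point> processIntersections(int process_mode)
          {
              std::vector<Point> hits = findIntersections();
              removeIntersections(hits, process_mode);
              return hits;
          }

      private:
          // The const half of the old workerIntersections(): pure detection.
          std::vector<Point> findIntersections() const { return std::vector<Point>(); /* detection logic */ }
          // The non-const half: rewrites the outlines for the chosen process_mode.
          void removeIntersections(const std::vector<Point>&, int) { /* repair logic */ }
      };

    This keeps a single copy of the detection logic while letting the compiler enforce that checkIntersections never modifies the outlines.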

    Read the article

  • I’m new to C++ and unsure about how to improve this code [migrated]

    - by Laian Alsabbagh
    The purpose of the following code is to generate 100 nodes and distribute them randomly in the 500*500 range (X,Y). This was the first step: #include<iostream> #include <fstream> #include<cstdlib> #include<cmath> using namespace std; int main() { const int x = 0, y = 1; int nodes[100][2]; ofstream myfile; myfile.open ("example.txt"); myfile << "Writing this to a file.\n"; for (int i=0; i<100 ;i++) { nodes[i][x] = rand() % 501; nodes[i][y] = rand() % 501; myfile <<nodes[i][x]<<" "<<nodes[i][y]<<"\n"; } myfile.close(); } Now the next step is to improve this code to distribute the nodes in order ("I must divide both xy_coordinates as: x = 0-100-200-300-400-500 & y = 0-100-200-300-400-500; next is to distribute the nodes (regardless of the number of nodes) in order, starting from (0,100) … (100,100) .. (100,200) … until I reach the last point (500,500)"). I'm really confused about how to do it correctly. I started by thinking of defining a 2-dimensional array and then two for loops: int no_nodes=100; int XY_coordinate[500][500]; for (int i=0;i<no_nodes; i++) { for (int j=0;j<no_nodes; j++)
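
    A minimal sketch of one reading of that requirement, assuming every grid point from 0 to 500 in steps of 100 should be written in order to the same example.txt file:

      #include <fstream>

      int main()
      {
          std::ofstream myfile("example.txt");
          const int step = 100, last = 500;
          // Walk the grid in order: (0,0), (0,100), ..., (0,500), (100,0), ..., (500,500).
          for (int x = 0; x <= last; x += step)
              for (int y = 0; y <= last; y += step)
                  myfile << x << " " << y << "\n";
          return 0;
      }

    If only part of the grid is wanted (the question starts its example at (0,100)), the loop bounds are the only thing that needs to change.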

    Read the article

  • Unable to build my c++ code with g++ 4.6.3

    - by Mriganka
    I am facing multiple issues with building my c++ code on Ubuntu 12.04. This code was building and running fine on RH Enterprise. I am using g++ 4.6.3. Here's the output of g++ -v. g++ -v Using built-in specs. COLLECT_GCC=g++ COLLECT_LTO_WRAPPER=/usr/lib/gcc/i686-linux-gnu/4.6/lto-wrapper Target: i686-linux-gnu Configured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 4.6.3-1ubuntu5' --with-bugurl=file:///usr/share/doc/gcc-4.6/README.Bugs --enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-4.6 --enable-shared --enable-linker-build-id --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.6 --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-gnu-unique-object --enable-plugin --enable-objc-gc --enable-targets=all --disable-werror --with-arch-32=i686 --with-tune=generic --enable-checking=release --build=i686-linux-gnu --host=i686-linux-gnu --target=i686-linux-gnu Thread model: posix gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) Here's a sample of my code: #include "Word.h" #include <string> using namespace std; pthread_mutex_t Word::_lock = PTHREAD_MUTEX_INITIALIZER; Word::Word(): _occurrences(1) { memset(_buf, 0, 25); } Word::Word(char *str): _occurrences(1) { memset(_buf, 0, 25); if (str != NULL) { strncpy(_buf, str, strlen(str)); } } g++ -c -ansi or g++ -c -std=c++98 or g++ -c -std=c++03, none of these options are able to build the code correctly. I get the following compilation errors: mriganka@ubuntu:~/WordCount$ make g++ -c -g -ansi Word.cpp -o Word.o Word.cpp: In constructor ‘Word::Word()’: Word.cpp:10:21: error: ‘memset’ was not declared in this scope Word.cpp: In constructor ‘Word::Word(char*)’: Word.cpp:16:21: error: ‘memset’ was not declared in this scope Word.cpp:19:34: error: ‘strlen’ was not declared in this scope Word.cpp:19:35: error: ‘strncpy’ was not declared in this scope Word.cpp: In member function ‘void Word::operator=(const Word&)’: Word.cpp:37:42: error: ‘strlen’ was not declared in this scope Word.cpp:37:43: error: ‘strncpy’ was not declared in this scope Word.cpp: In copy constructor ‘Word::Word(const Word&)’: Word.cpp:44:21: error: ‘memset’ was not declared in this scope Word.cpp:45:52: error: ‘strlen’ was not declared in this scope Word.cpp:45:53: error: ‘strncpy’ was not declared in this scope So basically g++ 4.6.3 on Ubuntu 12.04 is not able to recognize the standard c++ headers. And I am not finding a way out of this situation. Second problem: In order to make progress, I included <string.h> instead of <string>. But now I am facing linking errors with my message queue and pthread library functions.
Here's the error that I am getting: mriganka@ubuntu:~/WordCount$ make g++ -c -g -ansi Word.cpp -o Word.o g++ -lrt -I/usr/lib/i386-linux-gnu Word.o HashMap.o main.o -o word_count main.o: In function `main': /home/mriganka/WordCount/main.cpp:75: undefined reference to `pthread_create' /home/mriganka/WordCount/main.cpp:90: undefined reference to `mq_open' /home/mriganka/WordCount/main.cpp:93: undefined reference to `mq_getattr' /home/mriganka/WordCount/main.cpp:113: undefined reference to `mq_send' /home/mriganka/WordCount/main.cpp:123: undefined reference to `pthread_join' /home/mriganka/WordCount/main.cpp:129: undefined reference to `mq_close' /home/mriganka/WordCount/main.cpp:130: undefined reference to `mq_unlink' main.o: In function `count_words(void*)': /home/mriganka/WordCount/main.cpp:151: undefined reference to `mq_open' /home/mriganka/WordCount/main.cpp:154: undefined reference to `mq_getattr' /home/mriganka/WordCount/main.cpp:162: undefined reference to `mq_timedreceive' collect2: ld returned 1 exit status Here's my makefile: CC=g++ CFLAGS=-c -g -ansi LDFLAGS=-lrt INC=-I/usr/lib/i386-linux-gnu SOURCES=Word.cpp HashMap.cpp main.cpp OBJECTS=$(SOURCES:.cpp=.o) EXECUTABLE=word_count all: $(SOURCES) $(EXECUTABLE) $(EXECUTABLE): $(OBJECTS) $(CC) $(LDFLAGS) $(INC) -pthread $(OBJECTS) -o $@ .cpp.o: $(CC) $(CFLAGS) $< -o $@ clean: rm -f *.o word_count Please help me to resolve both the issues. I searched online relentlessly for any solution of these problems, but no one seems to have encountered these issues.
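
    A likely explanation for both problems: with GCC 4.6 the libstdc++ headers no longer pull in <cstring> transitively, so memset, strlen and strncpy have to be declared explicitly, and on the link line libraries listed before the object files that use them can be discarded, so -lrt (and -pthread or -lpthread) generally belong after $(OBJECTS) in the makefile rather than in front of them. A minimal sketch of the Word.cpp preamble under those assumptions, taking _buf to be the 25-byte array implied by the memset calls:

      #include "Word.h"
      #include <cstring>   // memset, strlen, strncpy are declared here, not pulled in by <string>
      #include <string>

      Word::Word() : _occurrences(1)
      {
          std::memset(_buf, 0, sizeof(_buf));
      }

      Word::Word(char *str) : _occurrences(1)
      {
          std::memset(_buf, 0, sizeof(_buf));
          if (str != NULL)
              std::strncpy(_buf, str, sizeof(_buf) - 1);   // also caps the copy at the buffer size
      }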

    Read the article

  • RPi and Java Embedded GPIO: Writing Java code to blink LED

    - by hinkmond
    So, you've followed the previous steps to install Java Embedded on your Raspberry Pi, you went to Fry's and picked up some jumper wires, LEDs, and resistors, you hooked up the wires, LED, and resistor to the correct pins, and now you want to start programming in Java on your RPi? Yes? OK, then... Here we go. You can use the following source code to blink your first LED on your RPi using Java. In the code you can see that I'm not using any complicated gpio libraries like wiringpi or pi4j, and I'm not doing any low-level pin manipulation like you can in C. And, I'm not using python (hell no!). This is Java programming, so we keep it simpler (and more readable) than those other programming languages. See: Write Java code to do this In the Java code, I'm opening up the RPi Debian Wheezy well-defined file handles to control the GPIO ports. First I'm resetting everything using the unexport/export file handles. (On the RPi, if you open the well-defined file handles and write certain ASCII text to them, you can drive your GPIO to perform certain operations. See this GPIO reference). Next, I write a "1" then "0" to the value file handle of the GPIO0 port (see the previous pinout diagram). That makes the LED blink. Then, I loop to infinity. Easy, huh? /* * Java Embedded Raspberry Pi GPIO app */ package jerpigpio; import java.io.FileWriter; /** * * @author hinkmond */ public class JerpiGPIO { static final String GPIO_OUT = "out"; static final String GPIO_ON = "1"; static final String GPIO_OFF = "0"; static final String GPIO_CH00="0"; /** * @param args the command line arguments */ public static void main(String[] args) { FileWriter commandFile; try { /*** Init GPIO port for output ***/ // Open file handles to GPIO port unexport and export controls FileWriter unexportFile = new FileWriter("/sys/class/gpio/unexport"); FileWriter exportFile = new FileWriter("/sys/class/gpio/export"); // Reset the port unexportFile.write(GPIO_CH00); unexportFile.flush(); // Set the port for use exportFile.write(GPIO_CH00); exportFile.flush(); // Open file handle to port input/output control FileWriter directionFile = new FileWriter("/sys/class/gpio/gpio"+GPIO_CH00+"/direction"); // Set port for output directionFile.write(GPIO_OUT); directionFile.flush(); /*--- Send commands to GPIO port ---*/ // Open file handle to issue commands to GPIO port commandFile = new FileWriter("/sys/class/gpio/gpio"+GPIO_CH00+"/value"); // Loop forever while (true) { // Set GPIO port ON commandFile.write(GPIO_ON); commandFile.flush(); // Wait for a while java.lang.Thread.sleep(200); // Set GPIO port OFF commandFile.write(GPIO_OFF); commandFile.flush(); // Wait for a while java.lang.Thread.sleep(200); } } catch (Exception exception) { exception.printStackTrace(); } } } Hinkmond

    Read the article

  • EF4 Code First Control Unicode and Decimal Precision, Scale with Attributes

    - by Dane Morgridge
    There are several attributes available when using code first with the Entity Framework 4 CTP5 Code First option.  When working with strings you can use [MaxLength(length)] to control the length and [Required] will work on all properties.  But there are a few things missing. By default all strings will be created using Unicode so you will get nvarchar instead of varchar.  You can change this using the fluent API or you can create an attribute to make the change.  If you have a lot of properties, the attribute will be much easier and require less code. You will need to add two classes to your project to create the attribute itself: public class UnicodeAttribute : Attribute { bool _isUnicode; public UnicodeAttribute(bool isUnicode) { _isUnicode = isUnicode; } public bool IsUnicode { get { return _isUnicode; } } } public class UnicodeAttributeConvention : AttributeConfigurationConvention<PropertyInfo, StringPropertyConfiguration, UnicodeAttribute> { public override void Apply(PropertyInfo memberInfo, StringPropertyConfiguration configuration, UnicodeAttribute attribute) { configuration.IsUnicode = attribute.IsUnicode; } } The UnicodeAttribute class gives you a [Unicode] attribute that you can use on your properties and the UnicodeAttributeConvention will tell EF how to handle the attribute. You will need to add a line to the OnModelCreating method inside your context for EF to recognize the attribute: protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder) { modelBuilder.Conventions.Add(new UnicodeAttributeConvention()); base.OnModelCreating(modelBuilder); } Once you have this done, you can use the attribute in your classes to make sure that you get database types of varchar instead of nvarchar: [Unicode(false)] public string Name { get; set; } Another option that is missing is the ability to set the precision and scale on a decimal.  By default decimals get created as (18,0).  If you need decimals to be something like (9,2) then you can once again use the fluent API or create a custom attribute.
    As with the unicode attribute, you will need to add two classes to your project: public class DecimalPrecisionAttribute : Attribute { int _precision; private int _scale; public DecimalPrecisionAttribute(int precision, int scale) { _precision = precision; _scale = scale; } public int Precision { get { return _precision; } } public int Scale { get { return _scale; } } } public class DecimalPrecisionAttributeConvention : AttributeConfigurationConvention<PropertyInfo, DecimalPropertyConfiguration, DecimalPrecisionAttribute> { public override void Apply(PropertyInfo memberInfo, DecimalPropertyConfiguration configuration, DecimalPrecisionAttribute attribute) { configuration.Precision = Convert.ToByte(attribute.Precision); configuration.Scale = Convert.ToByte(attribute.Scale); } } Add your line to the OnModelCreating: protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder) { modelBuilder.Conventions.Add(new UnicodeAttributeConvention()); modelBuilder.Conventions.Add(new DecimalPrecisionAttributeConvention()); base.OnModelCreating(modelBuilder); } Now you can use the following on your properties: [DecimalPrecision(9,2)] public decimal Cost { get; set; } Both these options use the same concepts so if there are other attributes that you want to use, you can create them quite simply.  The key to it all is the PropertyConfiguration classes.  If there is a class for the datatype, then you should be able to write an attribute to set almost everything you need.  You could also create a single attribute to encapsulate all of the possible string combinations instead of having multiple attributes on each property. All in all, I am loving code first and having attributes to control database generation instead of using the fluent API is huge and saves me a great deal of time.

    Read the article

  • When does a Tumbling Window Start in StreamInsight

    Whilst getting some courseware ready I was playing around writing some code and I decided to very simply show when a window starts and ends based on you asking for a TumblingWindow of n time units in StreamInsight.  I thought this was going to be a two second thing but what I found was something I haven’t found documented anywhere until now.  All this code is written in C# and will slot straight into my favourite quick-win dev tool LinqPad.  Let’s first create a sample dataset:  var EnumerableCollection = new [] { new {id = 1, StartTime = DateTime.Parse("2010-10-01 12:00:00 PM").ToLocalTime()}, new {id = 2, StartTime = DateTime.Parse("2010-10-01 12:20:00 PM").ToLocalTime()}, new {id = 3, StartTime = DateTime.Parse("2010-10-01 12:30:00 PM").ToLocalTime()}, new {id = 4, StartTime = DateTime.Parse("2010-10-01 12:40:00 PM").ToLocalTime()}, new {id = 5, StartTime = DateTime.Parse("2010-10-01 12:50:00 PM").ToLocalTime()}, new {id = 6, StartTime = DateTime.Parse("2010-10-01 01:00:00 PM").ToLocalTime()}, new {id = 7, StartTime = DateTime.Parse("2010-10-01 01:10:00 PM").ToLocalTime()}, new {id = 8, StartTime = DateTime.Parse("2010-10-01 02:00:00 PM").ToLocalTime()}, new {id = 9, StartTime = DateTime.Parse("2010-10-01 03:20:00 PM").ToLocalTime()}, new {id = 10, StartTime = DateTime.Parse("2010-10-01 03:30:00 PM").ToLocalTime()}, new {id = 11, StartTime = DateTime.Parse("2010-10-01 04:40:00 PM").ToLocalTime()}, new {id = 12, StartTime = DateTime.Parse("2010-10-01 04:50:00 PM").ToLocalTime()}, new {id = 13, StartTime = DateTime.Parse("2010-10-01 05:00:00 PM").ToLocalTime()}, new {id = 14, StartTime = DateTime.Parse("2010-10-01 05:10:00 PM").ToLocalTime()} };  Now let’s create a stream of point events:  var inputStream = EnumerableCollection .ToPointStream(Application,evt=> PointEvent .CreateInsert(evt.StartTime,evt),AdvanceTimeSettings.StrictlyIncreasingStartTime);  Now we can create our windows over the stream.  The first window we will create is a one hour tumbling window.  We’ll count the events in the window, but what we do here is not the point; the point is our window edges.  var windowedStream = from win in inputStream.TumblingWindow(TimeSpan.FromHours(1),HoppingWindowOutputPolicy.ClipToWindowEnd) select new {CountOfEntries = win.Count()};  Now we can have a look at what we get.  I am only going to show the first non-Cti event as that is enough to demonstrate what is going on:  windowedStream.ToIntervalEnumerable().First(e=> e.EventKind == EventKind.Insert).Dump("First Row from Windowed Stream");  The results are below:  EventKind Insert  StartTime 01/10/2010 12:00  EndTime 01/10/2010 13:00  { CountOfEntries = 5 }  Payload CountOfEntries 5  Now this makes sense and is quite often the window width specified in examples.  So what happens if I change the windowing code now to var windowedStream = from win in inputStream.TumblingWindow(TimeSpan.FromHours(5),HoppingWindowOutputPolicy.ClipToWindowEnd) select new {CountOfEntries = win.Count()}; Now where does your window start?  What about var windowedStream = from win in inputStream.TumblingWindow(TimeSpan.FromMinutes(13),HoppingWindowOutputPolicy.ClipToWindowEnd) select new {CountOfEntries = win.Count()};  Well for the first example your window will start at 01/10/2010 10:00:00, and for the second example it will start at 01/10/2010 11:55:00. Surprised?  Here is the reason why, and thanks to the StreamInsight team for listening.  Windows start at TimeSpan.MinValue.
    Windows of the size you specified in your code are then created from that point onwards. If a window contains no events, it is not produced by the engine in the output. This is why window start times can be before the first event is created.
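
    A minimal sketch of that alignment rule as plain arithmetic (not StreamInsight code): the window containing an event starts at the origin plus the largest whole multiple of the window size that does not pass the event.

      #include <cstdint>

      // Start of the window containing event_time, when windows of length window_size
      // are tiled forward from a fixed origin (TimeSpan.MinValue in StreamInsight).
      // All values are in the same unit, e.g. ticks, and event_time >= origin.
      int64_t window_start(int64_t event_time, int64_t origin, int64_t window_size)
      {
          int64_t elapsed = event_time - origin;
          return origin + (elapsed / window_size) * window_size;   // integer division floors here
      }

    In the post's own output the one-hour window happens to land on the hour while the five-hour and 13-minute windows land at 10:00 and 11:55, which is consistent with tiling from a fixed origin rather than from the first event.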

    Read the article

  • Simple OCR programming tutorials/articles

    - by daddz
    I'm interested in simple OCR methods and algorithms. And by simple I mean simple! Best would be a tutorial/article/documentation without dependencies on 3rd-party libraries, if that's even possible. I would really like to build up my knowledge from the ground up. The programming language doesn't matter. Thanks in advance! Edit: An interesting link I found in another question: OCR and Neural Nets in JavaScript. I especially like the Python implementation.

    Read the article

  • 10 lines of code per day is the global average!? -- true?

    - by Earlz
    Ok, so last year I participated in a high school curriculum contest thing at a college (I currently attend this college). I actually got 1st in it but was still a bit angry I didn't get every single one right. The most baffling of the questions on there was: How many lines of code does the average programmer write per day? A. 5 B. 10 C. 25 D. 30 Aside from being a subjective question which depends on language and everything else, I was more baffled at what they had as the correct answer: 10. Even on my bad days at my job I touch more than 10 lines of code (either adding, modifying, or deleting) per day. And when I took this test I had only programmed as a hobby, where it was common for me to write a few hundred lines for one of my new projects per day. Where are they getting this random number of ten!? Is this published somewhere? A quick googling found me nothing.

    Read the article

  • Is there a list of known web crawlers?

    - by J. Pablo Fernández
    I'm trying to get accurate download numbers for some files on a web server. I look at the user agents and some are clearly bots or web crawlers, but for many I'm not sure: they may or may not be web crawlers, and they are causing many downloads, so it's important for me to know. Is there somewhere a list of known web crawlers with some documentation like user agent, IPs, behavior, etc.? I'm not interested in the official ones, like Google's, Yahoo's, or Microsoft's. Those are generally well behaved and self-identified.

    Read the article

  • Has MSDN Dropped Compact Framework already?

    - by Vaccano
    MSDN documentation used to indicate if a method was supported on the Compact Framework, but now I can't find that info anymore. I know that Microsoft has dropped Compact Framework like a hot potato, but I did not know that they had ripped it out of the docs. As an example of what I am talking about, here is a link to the Graphics Members. They used to show which methods were supported in the Compact Framework next to each method. Now they do not. Also, here are two methods: Graphics.MeasureString Method (String, Font, Int32) Graphics.MeasureString Method (String, Font) The first is not supported in the Compact Framework, but the second is. But the docs don't tell you that (at least not at the bottom where they used to). Am I missing something? Is there a way to still get this info?

    Read the article

  • is there any easy way to auto-generate code like this?

    - by Kevin Yang
    I have a file which will be used across many app projects. The only difference between these files is the web service reference name. Code like this: public void Test(){ Kevin.ServiceReference1.Service1Client client = new Kevin.ServiceReference1.Service1Client(); // do something.... } In code like the above, 'Kevin.ServiceReference1' will be replaced by the specific app project's namespace. So, according to DRY (don't repeat yourself), I shouldn't just copy the file into many projects and rename that part manually. Is there any way I can easily replace some parts of my template file with something related to the project?

    Read the article

  • VS2010 / Code Analysis: Turn off a rule for a project without custom ruleset....

    - by TomTom
    ...any chance? The scenario is this: For our company we develop a standard for how code should look. This will be the full MS rule set as it looks now. For some specific projects we may want to turn off specific rules, simply because for that specific project this is a "known exception". Example? CA1026 - while perfectly OK in most cases, there are 1-2 specific libraries we don't want to change. We also want to avoid having a custom rule set. OTOH, putting in a suppress attribute on every occurrence gets pretty convoluted pretty fast. Is there any way to turn off a code analysis warning for a complete assembly without a custom rule set? We'd rather have that in a specific file (GlobalSuppressions.cs) than in a rule set, for maintenance reasons and to be more explicit ;)

    Read the article

  • Hiding privates from Javascript Intellisense

    - by Robert Koritnik
    Is it possible to hide certain functions/fields from displaying in the JavaScript IntelliSense drop-down list in Visual Studio 2008? Either by JavaScript documentation XML or by naming privates in a certain way? I've seen <private /> in the jQuery vsdoc file that implies exactly this behaviour, but it doesn't meet my expectations: { __hiddenField: 0, /// <private /> increment: function(){ /// <summary>Increments a private variable</summary> __hiddenField++; } } But since fields can't contain documentation (because they have no body) they have to be documented at the top. But it still doesn't work: { /// <field name="__hiddenField" type="Number" private="true">PRIVATE USE</field> __hiddenField: 0, increment: function(){ /// <summary>Increments a private variable</summary> __hiddenField++; } } Impossible is a perfectly possible answer and will be accepted if you have the knowledge that it's actually not possible.

    Read the article

  • How to document an XML Schema?

    - by lucas clemente
    I have developed an XML schema for an application I wrote. Now I want to document the valid structure for the end user; however, I can't come up with any natural way to do this. I've seen things like xs3p, which essentially converts an xsd schema to an HTML representation, however that doesn't look like good documentation to me; the user shouldn't need to know anything about schemas to understand what he is allowed to do. Any ideas how to document this? Any programs / editors / graphical solutions or simply concepts I can build on?

    Read the article

  • Are there any PHP DocBlock parser tools available?

    - by Beau Simensen
    I would like to build some smaller-scale but highly customized documentation sites for a few projects. PhpDocumentor is pretty great but it is very heavy. I thought about trying to tweak the templates for that, but after spending only a couple of minutes looking into it I decided that it would be too much work. Ideally I'd like to see something to which I could pass a bunch of files and have it return all of the files, classes, properties and methods, along with their metadata, so that I could build out some simple templates based on the data. Are there any DocBlock parser-only projects that will help me in this task, or am I stuck reinventing that wheel?

    Read the article

  • SQL Server - Schema/Code Analysis Rules - What would your rules include?

    - by Randy Minder
    We're using Visual Studio Database Edition (DBPro) to manage our schema. This is a great tool that, among the many things it can do, can analyse our schema and T-SQL code based on rules (much like what FxCop does with C# code), and flag certain things as warnings and errors. Some example rules might be that every table must have a primary key, no underscores in column names, every stored procedure must have comments, etc. The number of rules built into DBPro is fairly small, and a bit odd. Fortunately DBPro has an API that allows the developer to create their own. I'm curious as to the types of rules you and your DB team would create (both schema rules and T-SQL rules). Looking at some of your rules might help us decide what we should consider. Thanks - Randy

    Read the article

  • CSS code to put text saved on another server into a Blogspot blog?

    - by Nok Imchen
    I have a blog hosted on blogspot dot com. In that blog, I want to put some data, like a Google search string, automatically. I want it to be done this way: just put some code (server-side scripting) linking to a text file or PHP file, and the code will extract the text and output it in my Blogspot blog. What I DON'T want is to use JavaScript, because if I use JavaScript then the output will be seen only on the user's screen. I want the output to be seen by Google Bot too. Thanking you in anticipation.

    Read the article

  • How do I get code coverage of Perl cgi script when executed by Selenium?

    - by Kurt W. Leucht
    I'm using Eclipse EPIC IDE to write some Perl cgi scripts which call some Perl modules that I have also written. The EPIC IDE lets me configure a Perl CGI "run configuration" which runs my CGI script. And then I've got Selenium set up and one of my unit test files runs some Selenium commands to run my cgi script through its paces. But the coverage report from Module::Build dispatch 'testcover' doesn't show that any of my module code has been executed. It's been executed by my cgi script, but I guess the CGI script was run manually and was not executed directly by my unit test file, so maybe that's why the coverage isn't being recognized. Is there a way to do this right so I can integrate Selenium and unit test files and code coverage all together somehow?

    Read the article

  • Read line comments in INI file

    - by Dave Jarvis
    The parse_ini_file function removes comments in configuration files. What would you do to keep the comments that are associated with the next line? For example: [email] ; Verify that the email's domain has a mail exchange (MX) record. validate_domain = true Am thinking of using X(HT)ML and XSLT to transform the content into an INI file (so that the documentation and options can be single sourced). For example: <h1>email</h1> <p>Verify that the email's domain has a mail exchange (MX) record.</p> <dl> <dt>validate_domain</dt> <dd>true</dd> </dl> Any other ideas?

    Read the article

  • Explaining persistent data structures in simple terms

    - by Jason Baker
    I'm working on a library for Python that implements some persistent data structures (mainly as a learning exercise). However, I'm beginning to learn that explaining persistent data structures to people unfamiliar with them can be difficult. Can someone help me think of an easy (or at least the least complicated) way to describe persistent data structures to them? I've had a couple of people tell me that the documentation that I have is somewhat confusing. (And before anyone asks, no I don't mean persistent data structures as in persisted to the file system. Google persistent data structures if you're unclear on this.)
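
    One explanation that tends to work is a tiny example showing that an "update" returns a new version while every earlier version stays valid, because the versions share structure instead of copying it. A minimal sketch of that idea (illustrative only; deliberately not Python and not the library's API):

      #include <iostream>
      #include <memory>

      // A persistent singly linked list: nodes are immutable and shared between versions.
      struct Node {
          int value;
          std::shared_ptr<const Node> next;
      };
      using List = std::shared_ptr<const Node>;

      // "Adding" never touches the old list; it builds a new head that points at it.
      List push(int value, List tail) {
          return std::make_shared<const Node>(Node{value, tail});
      }

      int main() {
          List v1 = push(1, nullptr);   // version 1: [1]
          List v2 = push(2, v1);        // version 2: [2, 1], sharing version 1's node
          std::cout << v1->value << " " << v2->value << "\n";   // both versions still readable
          return 0;
      }

    The picture to draw for readers is two short lists sharing a tail: nothing is ever overwritten, so every version you have ever held stays exactly as it was.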

    Read the article

  • How can I graph the Lines of Code history for git repo?

    - by dbr
    Basically I want to get the number of lines-of-code in the repository after each commit. The only (really crappy) ways I have found are to use git filter-branch to run "wc -l *", or a script that runs git reset --hard on each commit and then runs wc -l. To make it a bit clearer, when the tool is run, it would output the lines of code of the very first commit, then the second and so on. This is what I want the tool to output (as an example): me@something:~/$ gitsloc --branch master 10 48 153 450 1734 1542 I've played around with the ruby 'git' library, but the closest I found was using the .lines() method on a diff, which seems like it should give the added lines (but does not.. it returns 0 when you delete lines for example) require 'rubygems' require 'git' total = 0 g = Git.open(working_dir = '/Users/dbr/Desktop/code_projects/tvdb_api') last = nil g.log.each do |cur| diff = g.diff(last, cur) total = total + diff.lines puts total last = cur end

    Read the article

  • ASP.NET MVC reminds me of old Classic ASP spaghetti code...

    - by EdenMachine
    I just went through some MVC tutorials after checking this site out for a while. Is it just me, or do MVC View pages bring back HORRIBLE flashbacks of Classic ASP spaghetti code, with all the jumping in and out of HTML and ASP.NET with yellow delimiters everywhere making it impossible to read? Whatever happened to the importance of code/design separation?? I was really sold on the new technology until the tutorials hit the View page development section. Or am I missing something? (And don't say you can use the template to help, because that's just moving the spaghetti to another location - it sweeps it under the rug - it doesn't fix the problem)

    Read the article

  • What do the different columns (of letters) mean for the svn merge output?

    - by James
    The output of SVN merge has 4 columns of letters listed before the file name. I understand the meaning of the letters (mostly) but I can't find any information on the meanings of the columns, and so only have a vague understanding based on context. Can anyone point me to the documentation on this? Based on context I've been able to infer that the first column is about text changes to a file, and the second seems to be related to use of the svn ignore command on a folder (or maybe it's just properties of the file?). I've never seen a letter in the third column and hence I have no idea what it means; it might be tree conflicts? This is the one I'm mostly worried about because I don't know how to handle it yet.

    Read the article
