Search Results

Search found 10046 results on 402 pages for 'repository pattern'.

Page 178/402 | < Previous Page | 174 175 176 177 178 179 180 181 182 183 184 185  | Next Page >

  • Subversive Connector Discovery error

    - by Zac
    Trying to install Subversive for the first time. I successfully installed the Subversive "Team Provider" via the Available Sites dialog in Help > Install New Software, added a site that points to the Helios update site, restarted Eclipse and tried to open the SVN Repository Browser; it automatically launched a Subversive Connector Discovery dialog, presenting me with 6 or 7 connectors to choose from. I've tried every single one, and they all fail, stating that an "error occurred" and that I need to check the "error log" (the Eclipse error log?) for details. I then exited the dialog and noticed that I can now view all the SVN-related perspective panes. (1) Was it alright for me to cancel that Connector Discovery process, or will I need it to create/edit repositories locally? If I need it, what was going wrong, and does anyone have any ideas for how to get it working? (2) I'm trying to understand the SVN Repositories feature for creating a new repository. How do I point it to my local SVN server instance? The default is to query me for a URL that I assume I would point to one of those "publicly available" repos.... which I don't want! Thanks for any and all help here!

    Read the article

  • AS3 : RegExp exec method in loop problem

    - by Boun
    Hi everyone, I need some help with RegExp in AS3. I have a simple pattern:

        patternYouTube = new RegExp("v(?:\/|=)([A-Z0-9_-]+)", "gi");

    This pattern looks for the YouTube video id. For example:

        var tmpUrl : String;
        var result : Object;
        var toto : Array = new Array();
        toto = ["http://www.youtube.com/v/J-vCxmjCm-8&autoplay=1", "http://www.youtube.com/v/xFTRnE1WBmU&autoplay=1"];
        var i : uint;
        for (i = 0; i < toto.length; i++)
        {
            tmpUrl = toto[i];
            result = patternYouTube.exec(tmpUrl);
            if (result != null && result.length != 0)
            {
                trace(result);
            }
        }

    When i == 0, it works perfectly; Flash returns: v/J-vCxmjCm-8,J-vCxmjCm-8. When i == 1, it fails; Flash returns: null. When I reverse the two strings in my array, so that toto = ["http://www.youtube.com/v/xFTRnE1WBmU&autoplay=1", "http://www.youtube.com/v/J-vCxmjCm-8&autoplay=1"], then when i == 0 it works perfectly (Flash returns xFTRnE1WBmU) and when i == 1 it fails (Flash returns null). Do you have any idea what the problem in the loop is?
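
    A likely culprit (not confirmed in the thread) is the "g" flag: a global RegExp in AS3 keeps its lastIndex between exec() calls, even when the next call is given a different string, so the second URL is scanned from an offset that already lies past its "v/" segment. A minimal sketch of the usual workaround, resetting lastIndex before each new string:

        var patternYouTube : RegExp = new RegExp("v(?:\/|=)([A-Z0-9_-]+)", "gi");
        var urls : Array = ["http://www.youtube.com/v/J-vCxmjCm-8&autoplay=1",
                            "http://www.youtube.com/v/xFTRnE1WBmU&autoplay=1"];
        for (var i : uint = 0; i < urls.length; i++)
        {
            patternYouTube.lastIndex = 0;            // start each new string from index 0
            var result : Object = patternYouTube.exec(urls[i]);
            if (result != null)
            {
                trace(result[1]);                    // the captured video id
            }
        }

    Dropping the "g" flag entirely would have the same effect here, since only the first match per URL is wanted.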

    Read the article

  • Need help modifying C++ application to accept continuous piped input in Linux

    - by GreeenGuru
    The goal is to mine packet headers for URLs visited, using tcpdump. So far, I can save a packet header to a file using:

        tcpdump "dst port 80 and tcp[13] & 0x08 = 8" -A -s 300 | tee -a ./Desktop/packets.txt

    And I've written a program to parse the header and extract the URL when given the following command:

        cat ~/Desktop/packets.txt | ./packet-parser.exe

    But what I want to be able to do is pipe tcpdump directly into my program, which will then log the data:

        tcpdump "dst port 80 and tcp[13] & 0x08 = 8" -A -s 300 | ./packet-parser.exe

    Here is the program as it is. The question is: how do I need to change it to support continuous input from tcpdump?

        #include <boost/regex.hpp>
        #include <fstream>   // std::ofstream and std::ios::app
        #include <cstdio>    // std::perror
        #include <string>
        #include <iostream>

        int main()
        {
            // Make sure to open the file in append mode
            std::ofstream file_out("/var/local/GreeenLogger/url.log", std::ios::app);
            if (not file_out)
                std::perror("/var/local/GreeenLogger/url.log");
            else
            {
                std::string text;

                // Get multiple lines of input -- raw
                std::getline(std::cin, text, '\0');

                const boost::regex pattern("GET (\\S+) HTTP.*?[\\r\\n]+Host: (\\S+)");
                boost::smatch match_object;
                bool match = boost::regex_search(text, match_object, pattern);
                if (match)
                {
                    std::string output;
                    output = match_object[2] + match_object[1];
                    file_out << output << '\n';
                    std::cout << output << std::endl;
                }
                file_out.close();
            }
        }

    Thank you ahead of time for the help!
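
    One way to handle continuous input - a sketch, not the original author's code - is to replace the single getline-until-EOF call with a loop that keeps reading std::cin line by line and runs the regex on the text gathered so far. The "clear the buffer after each match" policy is an assumption about the tcpdump output; the paths and the regex are taken from the question.

        #include <boost/regex.hpp>
        #include <fstream>
        #include <iostream>
        #include <string>

        int main()
        {
            std::ofstream file_out("/var/local/GreeenLogger/url.log", std::ios::app);
            const boost::regex pattern("GET (\\S+) HTTP.*?[\\r\\n]+Host: (\\S+)");

            std::string block;   // text accumulated since the last match
            std::string line;
            while (std::getline(std::cin, line))      // returns false only at end-of-input
            {
                block += line;
                block += '\n';

                boost::smatch match_object;
                if (boost::regex_search(block, match_object, pattern))
                {
                    const std::string output = match_object[2].str() + match_object[1].str();
                    file_out << output << std::endl;  // std::endl flushes, so the log updates live
                    std::cout << output << std::endl;
                    block.clear();                    // start collecting the next request
                }
            }
        }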

    Read the article

  • Database structure - is mySQL the right choice?

    - by Industrial
    Hi everyone, We are currently planning the database structure of a quite complex e-commerce web app that has flexibility as its main cornerstone. Our app features a large amount of data (products), and we have run into a slight headache trying to keep performance high without compromising normalization rules in the database, or leaving our highly beloved flexibility concept behind when integrating product options (also widely known as product attributes or parameters). Based on various references and sources available, we have drawn up lists of pros and cons for all the major and well-known database patterns that solve this. After comparing them, we have come up with two final alternatives:

    EAV (Entity-attribute-value model):
        Pros: The database is used for all sorting.
        Cons: All related queries will include a number of joins between multiple tables in order to complete the collection of data.

    SLOB (Serialized LOB, also known as Facade?):
        Pros: Very flexible. Keeps the number of necessary joins low compared to an EAV design pattern. Easy to update/add/remove data for each product.
        Cons: All sorting will be done by the application instead of the database. Will use lots of performance (memory?) when big datasets are processed by a large number of users.

    Our main questions: Which pattern/structure would you use, or maybe even a different solution? Are there better databases besides MySQL available nowadays to accomplish what we want? Thanks a lot!
    Reference: http://stackoverflow.com/questions/695752/product-table-many-kinds-of-product-each-product-has-many-parameters
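
    For readers unfamiliar with the trade-off being weighed above, a minimal sketch of what an EAV layout looks like in MySQL (table and column names are invented for illustration); the query at the end shows exactly the per-attribute join that the EAV "Cons" point refers to:

        -- one row per product
        CREATE TABLE product (
            id    INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            name  VARCHAR(255) NOT NULL
        );

        -- one row per attribute definition ("color", "weight", ...)
        CREATE TABLE attribute (
            id    INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            name  VARCHAR(64) NOT NULL
        );

        -- one row per (product, attribute) value; FKs to product.id and attribute.id
        CREATE TABLE product_attribute (
            product_id   INT UNSIGNED NOT NULL,
            attribute_id INT UNSIGNED NOT NULL,
            value        VARCHAR(255),
            PRIMARY KEY (product_id, attribute_id)
        );

        -- listing products sorted by one attribute costs one join per attribute involved
        SELECT p.id, p.name, pa.value AS color
        FROM product p
        JOIN product_attribute pa ON pa.product_id = p.id
        JOIN attribute a          ON a.id = pa.attribute_id AND a.name = 'color'
        ORDER BY pa.value;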

    Read the article

  • CCNet: "Failing Tasks : FilteredSourceControl: CheckForModifications" error

    - by Marc
    I've installed CCNet and now I'm trying to set up a link to our repository. When I visit the CCNet dashboard website the project shows up ok, but when I click the Force button I receive this error in the messages column:

        Failing Tasks : FilteredSourceControl: CheckForModifications

    If I log into the server as the account which I've specified CCNet should use to connect to the repository, and do an Update on the project by hand (i.e. using SVN.exe or TortoiseSVN) the update works fine. My sourcecontrol section of the CCNet.config file is below.

        <sourcecontrol type="filtered">
          <sourceControlProvider type="svn" autoGetSource="true">
            <executable>E:\SVNServer\bin\svn.exe</executable>
            <trunkUrl>https://bserver.int:4443/trunk</trunkUrl>
            <workingDirectory>E:\buildserver</workingDirectory>
            <username>USER</username>
            <password>PASSWORD</password>
          </sourceControlProvider>
          <inclusionFilters>
            <pathFilter>
              <pattern>**/*.*</pattern>
            </pathFilter>
          </inclusionFilters>
        </sourcecontrol>

    Both the cruisecontrol.net website and google seem utterly devoid of any information on this error, other than that it probably relates to the inclusionFilters section in the block above. Can anyone provide any ideas?

    Read the article

  • java.lang.ClassCastException: $Proxy99 cannot be cast

    - by svaret
    Hi, I am using JBoss 4.2.2 and Java 6. The deployed ear's name is apa.ear. In a servlet I have the following code line:

        placeBid = (PlaceBid) context.lookup("apa/" + PlaceBid.class.getSimpleName() + "/remote");

    I have a generated jboss-app.xml like this:

        <jboss-app>
          <loader-repository>apa:app=ejb3</loader-repository>
        </jboss-app>

    When trying to get the PlaceBid via the context I get this exception:

        java.lang.ClassCastException: $Proxy99 cannot be cast to se.nextit.actionbazaar.buslogic.PlaceBid

    The PlaceBid interface looks like this:

        @Remote
        public interface PlaceBid
        {
            Long addBid(String userId, Long itemId, Double bidPrice);
        }

    When I run the example that comes with EJB3 in Action, it works. The EJB3 in Action sample code comes with an Ant build; I want to use Maven, so I have rearranged the code some. However, I don't understand what I am doing wrong here. I have some thoughts about the jboss-app.xml file - I am not sure what its content should look like. Grateful for any help. Best wishes, Lasse
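
    A ClassCastException on a remote proxy usually means the same interface has been loaded by two different classloaders (for example once inside the scoped apa:app=ejb3 loader repository and once from a copy of the interface jar packaged with the war). That is only a hypothesis here, but it is cheap to check - a sketch of a small diagnostic helper (names invented), called from the servlet with the same context and JNDI name used above:

        // Hypothetical debugging helper; not part of the original code.
        import javax.naming.Context;
        import javax.naming.NamingException;

        public final class LoaderCheck
        {
            public static void dump(Context context, String jndiName, Class<?> expected)
                    throws NamingException
            {
                Object raw = context.lookup(jndiName);

                // The proxy implements *some* version of the remote interface;
                // compare its classloader with the one the servlet was compiled against.
                for (Class<?> iface : raw.getClass().getInterfaces())
                {
                    System.out.println(iface.getName() + " loaded by " + iface.getClassLoader());
                }
                System.out.println(expected.getName() + " loaded by " + expected.getClassLoader());
            }
        }

    Called as LoaderCheck.dump(context, "apa/PlaceBid/remote", PlaceBid.class): if the two PlaceBid lines report different classloaders, the cast can never succeed, and the usual cure is to make sure the interface is deployed only once, or that the war shares the ear's loader repository.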

    Read the article

  • Converting from CVS to SVN to Hg, does 'hg convert' require pointing to an SVN checkout or just a repo?

    - by Terry
    As part of migrating full CVS history to Hg, I've used cvs2svn to create an SVN repo in a local directory. Its first-level directory structure is:

        2010-04-21  09:39 AM    <DIR>          .
        2010-04-21  09:39 AM    <DIR>          ..
        2010-04-21  09:39 AM    <DIR>          locks
        2010-04-21  09:39 AM    <DIR>          hooks
        2010-04-21  09:39 AM    <DIR>          conf
        2010-04-21  09:39 AM               229 README.txt
        2010-04-21  11:45 AM    <DIR>          db
        2010-04-21  09:39 AM                 2 format
                       2 File(s)            231 bytes

    After setting up hg and the convert extension and attempting the conversion, I get the following:

        C:\>hg convert file://localhost/Users/terry/Desktop/repoSVN
        assuming destination repoSVN-hg
        initializing destination repoSVN-hg repository
        file://localhost/Users/terry/Desktop/repoSVN does not look like a CVS checkout
        file://localhost/Users/terry/Desktop/repoSVN does not look like a Git repo
        file://localhost/Users/terry/Desktop/repoSVN does not look like a Subversion repo
        file://localhost/Users/terry/Desktop/repoSVN is not a local Mercurial repo
        file://localhost/Users/terry/Desktop/repoSVN does not look like a darcs repo
        file://localhost/Users/terry/Desktop/repoSVN does not look like a monotone repo
        file://localhost/Users/terry/Desktop/repoSVN does not look like a GNU Arch repo
        file://localhost/Users/terry/Desktop/repoSVN does not look like a Bazaar repo
        file://localhost/Users/terry/Desktop/repoSVN does not look like a P4 repo
        abort: file://localhost/Users/terry/Desktop/repoSVN: missing or unsupported repository

    I have TortoiseHg installed. For info, hg version reports: Mercurial Distributed SCM (version 1.4.3). This version of Mercurial seems to have some svn bindings, if library.zip in the install is to be believed. Do I need to do a checkout and point hg convert to it for this to work properly?

    Read the article

  • How to configure MAVEN?

    - by i2ijeya
    I am a newbie to Maven and I have gone through the configuration steps given on the Apache site, but I still can't configure it. Could anyone please help me with simple steps to configure Maven on Windows? Thanks in advance.
    EDITED

        C:\Documents and Settings\arselv>mvn install
        [INFO] Scanning for projects...
        [INFO] ------------------------------------------------------------------------
        [INFO] Building Maven Default Project
        [INFO]    task-segment: [install]
        [INFO] ------------------------------------------------------------------------
        Downloading: http://repo1.maven.org/maven2/org/apache/maven/plugins/maven-resources-plugin/2.3/maven-resources-plugin-2.3.pom
        Downloading: http://repo1.maven.org/maven2/org/apache/maven/plugins/maven-resources-plugin/2.3/maven-resources-plugin-2.3.pom
        [INFO] ------------------------------------------------------------------------
        [ERROR] BUILD ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] Error building POM (may not be this project's POM).
        Project ID: org.apache.maven.plugins:maven-resources-plugin
        Reason: POM 'org.apache.maven.plugins:maven-resources-plugin' not found in repository:
          Unable to download the artifact from any repository
          org.apache.maven.plugins:maven-resources-plugin:pom:2.3
        from the specified remote repositories:
          central (http://repo1.maven.org/maven2)
        for project org.apache.maven.plugins:maven-resources-plugin
        [INFO] ------------------------------------------------------------------------
        [INFO] For more information, run Maven with the -e switch
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 42 seconds
        [INFO] Finished at: Fri Feb 05 13:10:06 IST 2010
        [INFO] Final Memory: 2M/5M
        [INFO] ------------------------------------------------------------------------

    So the above is the error I get while trying to follow the steps given on the Apache site.
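
    Two things commonly produce this output, both offered here as assumptions about this particular setup: mvn install was run in a directory without a pom.xml (hence "Building Maven Default Project"), and Maven could not reach http://repo1.maven.org - on a corporate network that often means a missing proxy entry. If a proxy is the issue, a minimal sketch of the relevant section of ~/.m2/settings.xml (host and port are invented placeholders):

        <settings>
          <proxies>
            <proxy>
              <id>corporate-proxy</id>
              <active>true</active>
              <protocol>http</protocol>
              <host>proxy.example.com</host>
              <port>8080</port>
              <!-- add <username>/<password> only if the proxy requires them -->
            </proxy>
          </proxies>
        </settings>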

    Read the article

  • Calculate time of method execution and send to WCF service async

    - by Tim
    I need to implement time calculation for repository methods in my ASP.NET MVC project classes. The problem is that I need to send the time-calculation data to a WCF service, which is time consuming. I am thinking about threads, which could help call the WCF service asynchronously, but I have very little experience with them. Do I need to create a new thread each time, or can I create a global thread? If so, how? I have something like this:

    StopWatch class:

        public class StopWatch
        {
            private DateTime _startTime;
            private DateTime _endTime;

            public void Start()
            {
                _startTime = DateTime.Now;
            }

            protected void StopTimerAndWriteStatistics()
            {
                _endTime = DateTime.Now;
                TimeSpan timeResult = _endTime - _startTime;

                // WCF proxy object
                var reporting = AppServerUtility.GetProxy<IReporting>();

                // Send data to server
                reporting.WriteStatistics(_startTime, _endTime, timeResult, "some information");
            }

            public void Stop()
            {
                // Here is the thread I have a question about
                var thread = new Thread(StopTimerAndWriteStatistics);
                thread.Start();
            }
        }

    Using the StopWatch class in a repository:

        public class SomeRepository
        {
            public List<ObjectInfo> List()
            {
                StopWatch sw = new StopWatch();
                sw.Start();

                // performing long-running operation

                sw.Stop();
            }
        }

    What am I doing wrong with threads?
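
    One common alternative - sketched here as a suggestion, not as the project's actual design - is to avoid creating a dedicated Thread per measurement and hand only the slow WCF call to the thread pool. The end time is captured synchronously so the measurement is not skewed by pool scheduling; AppServerUtility and IReporting are the types from the question.

        using System;
        using System.Threading;

        public class StopWatch
        {
            private DateTime _startTime;

            public void Start()
            {
                _startTime = DateTime.Now;
            }

            public void Stop()
            {
                DateTime endTime = DateTime.Now;          // measure now, report later
                DateTime startTime = _startTime;
                TimeSpan elapsed = endTime - startTime;

                // Queue only the slow part (the WCF call) on a pool thread.
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    var reporting = AppServerUtility.GetProxy<IReporting>();
                    reporting.WriteStatistics(startTime, endTime, elapsed, "some information");
                });
            }
        }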

    Read the article

  • ASP.NET MembershipProvider and StructureMap

    - by Ben
    I was using the default AspNetSqlMembershipProvider in my application. Authentication is performed via an AuthenticationService (since I'm also supporting other forms of membership, like OpenID). My AuthenticationService takes a MembershipProvider as a constructor parameter, and I am injecting the dependency using StructureMap like so:

        For<MembershipProvider>().Use(Membership.Provider);

    This will use the MembershipProvider configured in web.config. All this works great. However, now I have rolled my own MembershipProvider that makes use of a repository class. Since the MembershipProvider isn't exactly IoC friendly, I added the following code to the MembershipProvider.Initialize method:

        _membershipRepository = ObjectFactory.GetInstance<IMembershipRepository>();

    However, this raises an exception, as if StructureMap hasn't been initialized (it cannot get an instance of IMembershipRepository). Yet if I remove that code and put breakpoints at my MembershipProvider's Initialize method and my StructureMap bootstrapper, it does appear that StructureMap is configured before the MembershipProvider is initialized. My only workaround so far is to add the above code to each method in the MembershipProvider that needs the repository. This works fine, but I am curious as to why I can't get my instance in the Initialize method. Is the MembershipProvider performing some internal initialization that runs before any of my own application code does? Thanks Ben

    Read the article

  • Team Foundation Server - A programmer's guide

    - by Filip Ekberg
    In addition to my previous topic on How to use SVN, Branch? Tag? Trunk?, I would like to get in-depth on how a programmer should/could use TFS. The things that are most interesting to me are not how to set up the server, but rather how you use it on a daily basis. In the area of software engineering, where your responsibility lies not only in code but in architecture, documentation and other fields, you need to have a collection of your work, preferably in the same place. So these are my points of interest, which I would like to get more knowledge about:

    How would you structure a TFS workspace / project to support lots of different customers / projects, and maybe different projects per customer?

    Splitting up the folder structure of the above project into different pieces such as Code, Documents - Architecture, Requirements and others: what more could there be, and what would be a nice, commonly used folder structure?

    An easy-to-browse repository; again the folder structure here is important, however this point is more aimed at different explorers for the repository, not only the built-in Team Foundation Explorer.

    These are just a couple of the points that I would like to know more about. Suggestions on beginners' guides, in-depth guides and links covering the above would be very much helpful; please feel free to add other important knowledge points to this as well.

    Read the article

  • load class not in classpath dynamically in web application - without using custom classloader

    - by swdeveloper
    I am developing a web application which generates Java classes on the fly.

    1. For example, it generates the class com.people.Customer.java.
    2. In my code, I dynamically compile this to get com.people.Customer.class and store it in some directory, say repository/com/people/Customer.class, which is not on the classpath of my application server. My application server (I am using WebSphere Application Server / Apache Tomcat etc.) picks up classes from the WEB-INF/classes directory; the classloader uses this to load the classes.
    3. After compilation I need to load this class so that it becomes accessible to other classes using it after its creation.
    4. When I use Thread.currentThread().getContextClassLoader().loadClass("com.people.Customer"), the classloader is obviously not able to load the class, since it's not on the classpath (not in WEB-INF/classes). For similar reasons, getResource(..) or getResourceAsStream(..) does not work either.

    I need a way to read the class Customer.class, maybe as a stream (or any other way would do), and then load it. The constraints are: I cannot add the repository folder to the WEB-INF/classes folder, and I cannot create a new custom ClassLoader - if I create a new ClassLoader and it loads the class, the class will not be accessible to its parent ClassLoader. Is there any way of achieving this? If not, in the worst case, is there a way of overriding the default class loader with a custom class loader for web applications, so that the same classloader is used throughout the entire lifecycle of my web application? Appreciate any solution :)
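
    For comparison - and setting aside the "no custom ClassLoader" constraint for a moment - the conventional way to load classes from a directory outside the classpath is a URLClassLoader parented to the web app's context classloader. This is only a sketch of that standard approach (the repository path comes from the question, the class name is invented); classes loaded this way are visible to any code that holds the Class or the loader reference, but not to classes that the parent loader resolves on its own:

        import java.io.File;
        import java.net.URL;
        import java.net.URLClassLoader;

        public final class RepositoryLoader
        {
            public static Class<?> load(String className) throws Exception
            {
                File repository = new File("repository");   // directory holding com/people/Customer.class

                URLClassLoader loader = new URLClassLoader(
                        new URL[] { repository.toURI().toURL() },
                        Thread.currentThread().getContextClassLoader());

                // Delegates to the parent first, then falls back to the repository directory.
                return loader.loadClass(className);
            }
        }

    A hypothetical call would then be Class<?> customer = RepositoryLoader.load("com.people.Customer");, with RepositoryLoader being an invented name for illustration only.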

    Read the article

  • git pull not working

    - by dorelal
    I am not using github. We have git set up on our own machine. I created a branch from master called experiment. However, when I try to do git pull I get the following message:

        > git pull
        You asked me to pull without telling me which branch you
        want to merge with, and 'branch.experiment.merge' in
        your configuration file does not tell me either. Please
        specify which branch you want to merge on the command line and
        try again (e.g. 'git pull <repository> <refspec>').
        See git-pull(1) for details.

    Here is the result of git remote show origin:

        > git remote show origin
        * remote origin
          Fetch URL: ssh://git.domain.com/var/git/app.git
          Push  URL: ssh://git.domain.com/var/git/app.git
          HEAD branch: master
          Remote branches:
            experiment tracked
            master     tracked
          Local branches configured for 'git pull':
            master merges with remote master
          Local refs configured for 'git push':
            experiment pushes to experiment (local out of date)
            master     pushes to master     (up to date)

    As I read the message above, experiment is mapped to origin/experiment, and my local repository knows that it is out of date. Then why am I not able to do git pull?
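
    Worth noting: in the listing above, only master appears under "Local branches configured for 'git pull'", so the local experiment branch has no upstream recorded for pulling - only a push mapping. A couple of ways to deal with that, sketched with the branch and remote names taken from the question:

        # one-off: say explicitly what to merge
        git pull origin experiment

        # or record the tracking relationship once, after which a plain `git pull` works
        git config branch.experiment.remote origin
        git config branch.experiment.merge refs/heads/experiment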

    Read the article

  • What does this svn2git error mean?

    - by Hisham
    I am trying to import my repository from svn to git using svn2git, but it seems like it's failing when it hits a branch. What's the problem?

        Found possible branch point: https://s.aaa.com/repo/trunk/project => https://s.aaa.com/repo/branches/project-beta1.0, 128
        Use of uninitialized value in substitution (s///) at /opt/local/libexec/git-core/git-svn line 1728.
        Use of uninitialized value in concatenation (.) or string at /opt/local/libexec/git-core/git-svn line 1728.
        refs/remotes/trunk: 'https://s.aaa.com/repo' not found in ''
        Running command: git branch -l --no-color
        * master
        Running command: git branch -r --no-color
          trunk
        Running command: git checkout trunk
        Note: checking out 'trunk'.

        You are in 'detached HEAD' state. You can look around, make experimental
        changes and commit them, and you can discard any commits you make in this
        state without impacting any branches by performing another checkout.

        If you want to create a new branch to retain commits you create, you may
        do so (now or later) by using -b with the checkout command again. Example:

          git checkout -b new_branch_name

        HEAD is now at f4e6268... Changing svn repository in cap files
        Running command: git branch -D master
        Deleted branch master (was f4e6268).
        Running command: git checkout -f -b master
        Switched to a new branch 'master'
        Running command: git gc
        Counting objects: 450, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (368/368), done.
        Writing objects: 100% (450/450), done.
        Total 450 (delta 63), reused 450 (delta 63)

    Read the article

  • How to let m2eclipse use Nexus repositories instead of the local Maven one

    - by lisak
    I have this situation: an artifact in the local Maven repo that I don't want to use any more. Instead, I want it to be downloaded by Maven from a proxied Nexus remote repository. It's a typical situation, because a lot of artifacts are just called name-SNAPSHOT: the artifact changes but the name stays the same. Eclipse with m2eclipse is running.

    1. I delete the entire directory of the artifact in the local Maven repo.
    2. m2eclipse "Reindex local maven repository" - which, I guess, creates a new Nexus index for the local Maven repo.
    3. Project > Maven > Update Dependencies - now m2eclipse should run Maven, which doesn't see the artifact in the local Maven repo, so it uses the Nexus repositories to download it (expected behavior).

    Instead, the directory structure in the local Maven repo is recreated and there is this file, "m2e-lastUpdated.properties", with the following inside:

        local|http\://nexus\:8082/nexus-webapp-1.6.0/content/groups/public|javadoc=1274399332215
        local|http\://nexus\:8082/nexus-webapp-1.6.0/content/groups/public|sources=1274399332161

    and m2eclipse says:

        Missing artifact net.sourceforge.htmlunit:htmlunit:jar:2.8-SNAPSHOT:compile

    even though the artifact physically exists here:

        nexus:8082/nexus-webapp-1.6.0/content/repositories/htmlunit-snapshot/net/sourceforge/htmlunit/htmlunit/2.8-SNAPSHOT/htmlunit-2.8-SNAPSHOT.jar

    Maven just doesn't use this location at all. Trust me, I tried everything; this m2eclipse behavior is terrible.
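
    Two things worth checking, offered as assumptions since the POM and settings.xml aren't shown: Maven only consults the Nexus group if it is declared as a repository or, more commonly, configured as a mirror in ~/.m2/settings.xml; and once a lookup has failed, m2eclipse caches that failure in the lastUpdated marker files, which need to be deleted before it will retry. A minimal mirror section, using the group URL from the question:

        <settings>
          <mirrors>
            <mirror>
              <id>nexus</id>
              <mirrorOf>*</mirrorOf>
              <url>http://nexus:8082/nexus-webapp-1.6.0/content/groups/public</url>
            </mirror>
          </mirrors>
        </settings>

    With that in place, deleting the artifact directory (including the m2e-lastUpdated.properties file) and rerunning Update Dependencies should make Maven go through the Nexus group rather than straight to the original remotes.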

    Read the article

  • Maven Eclipse plugin and classpath issues in Eclipse

    - by subferno
    I am using Eclipse 3.5.1, Maven 2.0.9 and Java 1.4, with the Maven Eclipse plugin 2.7 and WTP 2.0 (not m2Eclipse). I have a flat multi-module project that is set up as follows: module Parent holds the parent pom, and modules A and B depend on C. Importing the four modules for the first time shows compile errors, as expected, since I have not yet run the eclipse plugin. With my local repository empty, running eclipse clean resolves all compile errors and dependencies within my local workspace. If I make some minor code changes to module B and run the eclipse plugin again, compile errors show up in modules A and B - errors about classes that cannot be found. It's like module C is no longer on the classpath for A and B to see. I look at the .classpath file and it's definitely pointing at the right modules in the Eclipse workspace. If I delete the Maven repository and do the eclipse clean again, the compile errors about unresolved classes are fixed. Also, if I run the eclipse clean command with the useProjectReferences flag set to false and then rerun it with true, Eclipse rebuilds my workspace and the errors go away. What's going on?

    Read the article

  • seam iText integration libraries

    - by Joshua
    The Seam iText integration seems to use an older version of the iText jars. Would it be possible to use the latest iText 5.0.2 jars as part of the Maven dependencies? Has anyone done this before?

        http://repository.jboss.org/maven2/org/jboss/seam/jboss-seam-pdf/2.2.0.GA/jboss-seam-pdf-2.2.0.GA.pom
        http://repository.jboss.org/maven2/com/lowagie/itext/2.1.2/itext-2.1.2.pom

    The following dependency pulls in version 2.1.2 of iText; I am not sure how to make it use the latest version, 5.0.2, of iText.

        <dependency>
          <groupId>org.jboss.seam</groupId>
          <artifactId>jboss-seam-pdf</artifactId>
          <version>${jboss-seam.version}</version>
          <exclusions>
            <exclusion>
              <groupId>org.jboss.seam</groupId>
              <artifactId>jboss-seam</artifactId>
            </exclusion>
            <exclusion>
              <groupId>org.jboss.seam</groupId>
              <artifactId>jboss-seam-ui</artifactId>
            </exclusion>
          </exclusions>
        </dependency>

    Read the article

  • Am I deleting this properly?

    - by atch
    I have some struct:

        struct A
        {
            const char* name_;
            A* left_;
            A* right_;

            A(const char* name) : name_(name), left_(nullptr), right_(nullptr) {}
            A(const A&);
            //A(const A*);   // ToDo
            A& operator=(const A&);
            ~A() { /* ToDo */ };
        };

        /* Just to compile */
        A& A::operator=(const A& pattern)
        {
            // check for self-assignment
            if (this != &pattern)
            {
                void* p = new char[sizeof(A)];
            }
            return *this;
        }

        A::A(const A& pat)
        {
            void* p = new char[sizeof(A)];
            A* tmp = new (p) A("tmp");
            tmp->~A();
            delete tmp;   // I WONDER IF HERE I SHOULD USE A DIFFERENT delete[]?
        }

        int _tmain(int argc, _TCHAR* argv[])
        {
            A a("a");
            A b = a;
            cin.get();
            return 0;
        }

    Guys, I know this is far from ideal and far from finished, but please don't tell me how to do it all properly - I'm trying to figure that out myself. The only thing I would like to know is whether I'm deleting my memory in the proper way. Thanks.
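
    For readers following along (rather than as a direct answer to the poster): storage obtained with new char[] must be released with delete[] on the original char pointer, after the object's destructor has been called exactly once; calling delete tmp there both destroys the object a second time and frees through the wrong type. A minimal sketch of the conventional placement-new teardown, using a stripped-down struct rather than the one above:

        #include <new>      // declares placement operator new

        struct A
        {
            explicit A(const char* name) : name_(name) {}
            const char* name_;
        };

        int main()
        {
            char* storage = new char[sizeof(A)];   // raw bytes, no A constructed yet
            A* tmp = new (storage) A("tmp");       // construct A in that storage

            tmp->~A();                             // destroy exactly once, explicitly
            delete[] storage;                      // release with the matching delete[]

            // alternative: pair ::operator new(sizeof(A)) with ::operator delete(p)
            return 0;
        }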

    Read the article

  • Is a VCS appropriate for usage by a designer?

    - by iconiK
    I know that a VCS is absolutely critical for a developer to increase productivity and protect the code, no doubts about it. But what about a designer using, say, Photoshop (though it's not specific to any tool, just to make my point clearer)? VCSs use delta compression to store different versions of files. This works very well for code, but for images that's a problem. Raster image files are binary formats, though vector image files are text (SVG comes to mind) and pose no problem. The problem comes with .psd files (and any other image "source" file): those can get pretty big, and since I'm not familiar with the format, I'll treat them as binary files. How would a VCS work under these conditions? The repository could get pretty darned big if the VCS server isn't able to diff the files efficiently (or worse, not at all), and over time this can become a really big pain when someone needs to check out the repository (or clone it if using a DVCS). Have any of you used a VCS for this purpose? How well does it work? I'm mostly interested in Mercurial, though this is a general situation that applies to any VCS.

    Read the article

  • ASP.NET MVC Image refreshing

    - by user295541
    Hi, I have an Employee object which has an image property. The Image class contains image metadata such as the image caption and the image file name. If I upload a new image for an employee asynchronously, without a full postback, the new image does not appear on the page. I use a GUID to name the image file to avoid caching. I modify the image the following way:

        ctrEmployee employee = Repository.Get(PassedItemID);
        if (employee.ctrImage != null)
        {
            string fullFileName = serverFolder + employee.ctrImage.FileName;
            FileInfo TheFile = new FileInfo(fullFileName);
            if (TheFile.Exists)
            {
                TheFile.Delete();
            }
            fileName = Guid.NewGuid() + ".jpg";
            employee.ctrImage.FileName = fileName;
        }
        resizedBmp.Save(string.Format("{0}{1}", serverFolder, fileName), System.Drawing.Imaging.ImageFormat.Jpeg);
        Repository.Edit<ctrEmployee>(employee);
        ImageID = employee.Image.Value;
        return PartialView(UserControlPaths.Thumbnail, new ThumbnailDataModel(employee.Image.Value, 150, 150));

    The partial view has an image tag which gets the saved image URL string, which is a GUID. Does anybody have an idea what I'm doing wrong?

    Read the article

  • How do you capture a group with regex?

    - by Sylvain
    Hi, I'm trying to extract a string from another using regex. I'm using the POSIX regex functions (regcomp, regexec, ...), and I fail at capturing a group. For instance, let the pattern be something as simple as "MAIL FROM:<(.*)>" (with REG_EXTENDED cflags). I want to capture everything between '<' and '>'. My problem is that regmatch_t gives me the boundaries of the whole pattern (MAIL FROM:<...>) instead of just what's between the parentheses. What am I missing? Thanks in advance.

    edit: some code

        #define SENDER_REGEX "MAIL FROM:<(.*)>"

        int main(int ac, char **av)
        {
            regex_t regex;
            int status;
            regmatch_t pmatch[1];

            if (regcomp(&regex, SENDER_REGEX, REG_ICASE|REG_EXTENDED) != 0)
                printf("regcomp error\n");
            status = regexec(&regex, av[1], 1, pmatch, 0);
            regfree(&regex);
            if (!status)
                printf("matched from %d (%c) to %d (%c)\n",
                       pmatch[0].rm_so, av[1][pmatch[0].rm_so],
                       pmatch[0].rm_eo, av[1][pmatch[0].rm_eo]);
            return (0);
        }

    outputs:

        $ ./a.out "012345MAIL FROM:<abcd>$"
        matched from 6 (M) to 22 ($)

    solution: as RarrRarrRarr said, the indices are indeed in pmatch[1].rm_so and pmatch[1].rm_eo; hence regmatch_t pmatch[1]; becomes regmatch_t pmatch[2]; and regexec(&regex, av[1], 1, pmatch, 0); becomes regexec(&regex, av[1], 2, pmatch, 0); Thanks :)

    Read the article

  • How to use InterfaceInterceptor with NServiceBus UnityBuilder

    - by David
    I have a MessageHandler with a dependency declared as:

        public IRepository<Person> PersonRepository { get; set; }

    I set up my Unity container with:

        IUnityContainer container = new UnityContainer();
        container.AddNewExtension<Interception>();
        container.RegisterType(typeof(IRepository<Person>), typeof(Example.Data.InMemory.Repository<Person>));
        container.Configure<Interception>()
            .AddPolicy("PersonAddAdvice")
            .AddMatchingRule(new AddMatchingRule())
            .AddCallHandler(new PersonAddedEventPublishingCallHandler());
        container.Configure<Interception>()
            .SetInterceptorFor<IRepository<Person>>(new InterfaceInterceptor());

    Then I init my endpoint with:

        class EndpointConfig : IConfigureThisEndpoint, AsA_Publisher, IWantCustomInitialization
        {
            public void Init()
            {
                NServiceBus.Configure.With()
                    .Log4Net()
                    .UnityBuilder(ContainerConfig.LoadContainer())
                    .XmlSerializer();
            }
        }

    (ContainerConfig.LoadContainer() returns the pre-configured IUnityContainer.) The problem is that the IRepository<Person> dependency in my MessageHandler is not resolved (it's null when handling the messages). If I remove the interception configuration from the Unity container and simply do this:

        IUnityContainer container = new UnityContainer();
        container.RegisterType(typeof(IRepository<Person>), typeof(Example.Data.InMemory.Repository<Person>));

    it works as expected, and the MessageHandler has an instance of IRepository resolved.

    Read the article

  • Subsonic 3, SimpleRepository, SQL Server: How to find rows with a null field?

    - by desautelsj
    How can I use Subsonic's Find<T> method to search for rows with a field containing the "null" value? For the sake of the discussion, let's assume I have a C# class called "Visit" which contains a nullable DateTime field called "SynchronizedOn", and let's also assume that the Subsonic migration has created the corresponding "Visits" table and the "SynchronizedOn" field. If I were to write the SQL query myself, I would write something like:

        SELECT * FROM Visits WHERE SynchronizedOn IS NULL

    When I use the following code:

        var visits = myRepository.Find<Visit>(x => x.SynchronizedOn == null);

    Subsonic turns it into the following SQL query:

        SELECT * FROM Visits WHERE SynchronizedOn == null

    which never returns any rows. I tried the following code, but it throws an error:

        visits = repository.Find<Visit>(x => x.SynchronizedOn.HasValue);

    I was able to use the following syntax:

        var query = from v in repository.All<Visit>()
                    where v.SynchronizedOn == null
                    orderby v.CreatedOn
                    select v;
        visits = query.ToList<Visit>();

    but it's not as nice and short as using the Find<T> method. Does anyone know how I can specify the "SynchronizedOn IS NULL" condition in the Find<T> method?

    Read the article

  • vlad the deployer: why do I need a scm folder?

    - by egarcia
    I'm learning to use vlad the deployer and I've got a question. Since I'm still learning, I don't know what is pertinent to the question and what isn't, so please bear with me if I'm a little verbose. I've got 2 environments for a new application (test and production) besides my development machine. I've figured out this way to do the initial setup in my vlad.rake:

        namespace :test do
          task :set do
            set :domain, 'test.myserver.com'
          end
        end

        namespace :production do
          task :set do
            set :domain, 'www.myserver.com'
          end
        end

    This way I can have environment-specific stuff inside the namespaces and still have shared tasks. For example, this would be the initial setup for test:

        rake vlad:test:set vlad:setup vlad:update

    This creates the following folders on my test server:

        releases/
        scm/
        shared/
        current -> symlink to the last release (inside the releases folder)

    My question is: what's the point of the scm folder? Every time I do vlad:update, the following happens:

        1. svn checkout into the scm/ folder above
        2. svn export into the /releases/{date} folder
        3. update of the current symlink

    So scm is a copy of my repository... but then there's an "export" copy of the repository in /releases/{date}, and that is the one used by the application... scm doesn't seem to be used by anything? Wouldn't I be just fine without the scm folder?

    Read the article

  • Publish Maven artifacts on FTP with Hudson FTP Publisher Plugin

    - by jaguard
    I'm building a number of artefacts (zip files for different environments: test, dev) with the maven-assembly-plugin, using a specialized Maven profile. I want to copy/collect these artefacts on an FTP server, keeping the version (01.07.10.16.Wed-1626) as a folder, so I need to copy from test/build/01.07.10.16.Wed-1626/ to ftp://my-ftp-server:21/projects/myserver-1.7/01.07.10.16.Wed-1626/. The layout of the Maven output is this:

        target/
            build/
                01.07.10.16.Wed-1626/
                    my-server-01.07.10.16.Wed-1626-dev.zip
                    my-server-01.07.10.16.Wed-1626-test.zip

    For copying the artefacts I'm using the FTP Publisher Plugin, but it seems I am missing something: even though the build is OK and the artefacts are built without problems, the job finishes without copying the artefacts, and there is no log info in the console about copying them. My FTP publisher config (FTP repository hosts) is:

        Hostname: my-ftp-server
        Port: 21
        Timeout: 10000
        Root Repository Path: projects
        User Name: my-user
        Password: my-pass

    My Hudson job FTP publisher config (Publish artifacts to FTP) is:

        FTP site: my-ftp-server
        Files to upload
            Source: target/build/**
            Destination: myserver-1.7

    1: Is there any log where I can check for FTP copy errors?
    2: Is there any problem with the file pattern (source) or with the destination?

    Read the article
