Search Results

Search found 745 results on 30 pages for 'fork bomb'.

Page 18/30

  • What sorts of tools make a Django Developer valuable? [closed]

    - by MrOodles
    I am a Django Consultant and I want to increase the value that I provide to clients. My first question was an epic failure according to the FAQ. So I'll try again before I delete it. What types of tools should a Django developer have in his tool belt to increase the value to the client? Would a collection of project templates be useful? Are there open-source project templates available that can be forked and altered? Is there a proper way to configure templates to include dependencies for certain types of projects? What about deployment scripts using tools like Puppet or Chef? Does it make a lot of sense to fork Django apps on GitHub and make contributions to open source projects there? Would clients perceive extra value in programmers who contribute to open source projects? Are there industry best practices for implementing continuous integration in a Django project? I want the answer to be open-ended, as I'm at the beginning of my research. I am curious to know what sorts of tools other Django consultants use on a daily and per-project basis, and how they use them.

    Read the article

  • how to merge changes from original project -- GitHub in Windows

    - by user62046
    I created an account at https://github.com/, forked someone's project so I have my own repository, installed the GitHub client for Windows, and cloned my repository to my local drive, where I will do my work. But during development I would like to merge in the changes made to the official, original project, and I couldn't find how to do this. Previously I used the TortoiseSVN client for Windows, which has an "SVN Update" option that updates the project to the latest revision. But I am new to GitHub and its client, and don't know how to do the equivalent.
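
    For reference, a minimal sketch of the usual way to pull in upstream changes from a command line (the GitHub client also ships a shell); the upstream URL below is a placeholder for the original project's address:

        git remote add upstream https://github.com/original-owner/original-project.git
        git fetch upstream
        git merge upstream/master    # or: git rebase upstream/master
        git push origin master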

    Read the article

  • Run command on startup / login (Mac OS X)

    - by Wolfy87
    I was not sure if this belonged on Stack Overflow or here; I settled for here. I was wondering which file I should place this bash command in so it runs on startup: # Start the MongoDB server /Applications/MongoDB/bin/mongod --dbpath /usr/local/mongo/data --fork --logpath /usr/local/mongo/log I have been scouring the net and think it is one of ~/.bashrc, ~/.profile, /etc/bashrc, /etc/profile or ~/.bash_profile. I have tried these, but they seem to run on terminal startup, not Mac startup. Am I missing a file here? Thanks for any help you can give.
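
    For reference, shell profile files only run when a terminal session starts; per-login startup jobs on Mac OS X go through launchd. A minimal launch agent sketch (the label and file name are illustrative assumptions), saved as ~/Library/LaunchAgents/org.mongodb.mongod.plist and loaded with launchctl load; note that launchd expects the process to stay in the foreground, so the --fork flag is dropped:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Label</key>
            <string>org.mongodb.mongod</string>
            <key>ProgramArguments</key>
            <array>
                <string>/Applications/MongoDB/bin/mongod</string>
                <string>--dbpath</string>
                <string>/usr/local/mongo/data</string>
                <string>--logpath</string>
                <string>/usr/local/mongo/log</string>
            </array>
            <key>RunAtLoad</key>
            <true/>
        </dict>
        </plist>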

    Read the article

  • Is there any script to do accounting for the proftpd's xferlog?

    - by Aseques
    I would like to convert the xferlog format that proftpd uses into per-user in/out bytes, to get a summary of how much traffic each user uses per month. The exact format is this: Thu Oct 17 12:47:05 2013 1 123.123.123.123 74852 /home/vftp/doc1.txt b _ i r user ftp 0 * c Thu Oct 17 12:47:06 2013 2 123.123.123.123 86321 /home/vftp/doc2.txt b _ i r user ftp 0 * c So far I have only found a script that makes a nice report but not exactly what I need; that one can be found here. I might create a fork of it and publish it somewhere, but this has probably been done many times already. I also just found a well-hidden page on the proftpd site with some more examples here.
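
    For reference, a minimal awk sketch that totals bytes per user and direction; the field positions ($14 username, $12 direction, $8 size) are assumed from the sample lines above, and the log path is a placeholder:

        # prints totals like "user i 74852" (incoming) and "user o 86321" (outgoing)
        awk '{ bytes[$14 " " $12] += $8 }
             END { for (k in bytes) print k, bytes[k] }' /var/log/xferlog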

    Read the article

  • Using (embedding?) wireshark in a C application for sniffing

    - by happy_emi
    I'm writing a C/C++ application which needs (among other things) to sniff packets and save the output in a file. This file will be read and processed by Wireshark after a few days, using a Lua script to do some other stuff. My question is about the sniffing part, which must be provided by my application. I can see two ways to do this: 1) Fork the Wireshark process in the background (using the command-line version, of course). 2) Use Wireshark as a library: no forking, but include something like "wireshark.h" and link against libwireshark.so, using function calls to do the sniffing. So far I haven't found any documentation about #2, and it seems that #1 is the "right way" to embed sniffing capabilities in my code. Do you think I'm doing the right thing? Does Wireshark allow embedding as a library?
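
    For reference, packet capture is commonly done with libpcap (Wireshark's own capture is built on it) rather than by linking libwireshark, and the resulting .pcap file opens directly in Wireshark. A minimal sketch, with the device name and packet count as illustrative assumptions:

        #include <pcap.h>
        #include <stdio.h>

        int main(void) {
            char errbuf[PCAP_ERRBUF_SIZE];
            /* device name is an assumption; pcap_findalldevs() can pick one at run-time */
            pcap_t *handle = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
            if (!handle) { fprintf(stderr, "pcap_open_live: %s\n", errbuf); return 1; }

            pcap_dumper_t *dumper = pcap_dump_open(handle, "capture.pcap");
            if (!dumper) { fprintf(stderr, "%s\n", pcap_geterr(handle)); return 1; }

            /* capture 100 packets and append each one to the dump file */
            pcap_loop(handle, 100, pcap_dump, (u_char *)dumper);

            pcap_dump_close(dumper);
            pcap_close(handle);
            return 0;
        }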

    Read the article

  • Java SE 7 launch event announcement (original post in Japanese)

    - by user13137856
    (Original post in Japanese; portions are garbled in this excerpt.) An announcement for a Java SE 7 event held on Thursday, July 7, 2011, from 14:00 to 19:00 at a 13th-floor venue, tied to the NetBeans 7.0 release and JDK 7. The session line-up includes a Java keynote, a Java SE 7 + 8 overview, HotRockit and what's new in Java, NetBeans 7.0 + Project Coin, InvokeDynamic, the Fork/Join Framework and Lambda, and the new New I/O API.

    Read the article

  • Configuring ant to run unit tests. Where should libraries be? How should classpath be configured? av

    - by chillitom
    Hi All, I'm trying to run my junit tests using ant. The tests are kicked off using a JUnit 4 test suite. If I run this direct from Eclipse the tests complete without error. However if I run it from ant then many of the tests fail with this error repeated over and over until the junit task crashes. [junit] java.util.zip.ZipException: error in opening zip file [junit] at java.util.zip.ZipFile.open(Native Method) [junit] at java.util.zip.ZipFile.(ZipFile.java:114) [junit] at java.util.zip.ZipFile.(ZipFile.java:131) [junit] at org.apache.tools.ant.AntClassLoader.getResourceURL(AntClassLoader.java:1028) [junit] at org.apache.tools.ant.AntClassLoader$ResourceEnumeration.findNextResource(AntClassLoader.java:147) [junit] at org.apache.tools.ant.AntClassLoader$ResourceEnumeration.nextElement(AntClassLoader.java:130) [junit] at org.apache.tools.ant.util.CollectionUtils$CompoundEnumeration.nextElement(CollectionUtils.java:198) [junit] at sun.misc.CompoundEnumeration.nextElement(CompoundEnumeration.java:43) [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTask.checkForkedPath(JUnitTask.java:1128) [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTask.executeAsForked(JUnitTask.java:1013) [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTask.execute(JUnitTask.java:834) [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTask.executeOrQueue(JUnitTask.java:1785) [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTask.execute(JUnitTask.java:785) [junit] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:288) [junit] at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source) [junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [junit] at java.lang.reflect.Method.invoke(Method.java:597) [junit] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [junit] at org.apache.tools.ant.Task.perform(Task.java:348) [junit] at org.apache.tools.ant.Target.execute(Target.java:357) [junit] at org.apache.tools.ant.Target.performTasks(Target.java:385) [junit] at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1337) [junit] at org.apache.tools.ant.Project.executeTarget(Project.java:1306) [junit] at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41) [junit] at org.apache.tools.ant.Project.executeTargets(Project.java:1189) [junit] at org.apache.tools.ant.Main.runBuild(Main.java:758) [junit] at org.apache.tools.ant.Main.startAnt(Main.java:217) [junit] at org.apache.tools.ant.launch.Launcher.run(Launcher.java:257) [junit] at org.apache.tools.ant.launch.Launcher.main(Launcher.java:104) my test running task is as follows: <target name="run-junit-tests" depends="compile-tests,clean-results"> <mkdir dir="${test.results.dir}"/> <junit failureproperty="tests.failed" fork="true" showoutput="yes" includeantruntime="false"> <classpath refid="test.run.path" /> <formatter type="xml" /> <test name="project.AllTests" todir="${basedir}/test-results" /> </junit> <fail if="tests.failed" message="Unit tests failed"/> </target> I've verified that the classpath contains the following as well as all of the program code and libraries: ant-junit.jar ant-launcher.jar ant.jar easymock.jar easymockclassextension.jar junit-4.4.jar I've tried debugging to find out which ZipFile it is trying to open with no luck, I've tried toggling includeantruntime and fork and i've tried running ant with ant -lib test/libs where test/libs contains the ant and junit libraries. 
Any info about what causes this exception, or how you've configured ant to successfully run unit tests, is gratefully received. Ant 1.7.1 (Ubuntu), Java 1.6.0_10, JUnit 4.4. Thanks. Update - Fixed: Found my problem. I had included my classes directory in my path using a fileset as opposed to a pathelement; this was causing .class files to be opened as ZipFiles, which of course threw an exception.
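
    For reference, a sketch of the fix described in the update (the path id and directory names are illustrative): reference the compiled-classes directory with a pathelement so Ant puts the directory itself on the classpath, and keep filesets for jars only.

        <path id="test.run.path">
            <!-- directory of compiled classes: a pathelement, not a fileset of .class files -->
            <pathelement location="${basedir}/classes" />
            <!-- jars can still be pulled in with a fileset -->
            <fileset dir="${basedir}/lib">
                <include name="*.jar" />
            </fileset>
        </path>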

    Read the article

  • Ant + JUnit: NoClassDefFoundError

    - by K-Boo
    Ok, I'm frustrated! I've hunted around for a good number of hours and am still stumped. Environment: WinXP, Eclipse Galileo 3.5 (straight install - no extra plugins). So, I have a simple JUnit test. It runs fine from it's internal Eclipse JUnit run configuration. This class has no dependencies on anything. To narrow this problem down as much as possible it simply contains: @Test public void testX() { assertEquals("1", new Integer(1).toString()); } No sweat so far. Now I want to take the super advanced step of running this test case from within Ant (the final goal is to integrate with Hudson). So, I create a build.xml: <project name="Test" default="basic"> <property name="default.target.dir" value="${basedir}/target" /> <property name="test.report.dir" value="${default.target.dir}/test-reports" /> <target name="basic"> <mkdir dir="${test.report.dir}" /> <junit fork="true" printSummary="true" showOutput="true"> <formatter type="plain" /> <classpath> <pathelement path="${basedir}/bin "/> </classpath> <batchtest fork="true" todir="${test.report.dir}" > <fileset dir="${basedir}/bin"> <include name="**/*Test.*" /> </fileset> </batchtest> </junit> </target> </project> ${basedir} is the Java project name in the workspace that contains the source, classes and build file. All .java's and the build.xml are in ${basedir}/src. The .class files are in ${basedir}/bin. I have added eclipse-install-dir/plugins/org.junit4_4.5.0.v20090423/junit.jar to the Ant Runtime Classpath via Windows / Preferences / Ant / Runtime / Contributed Entries. ant-junit.jar is in Ant Home Entries. So, what happens when I run this insanely complex target? My report file contains: Testsuite: com.xyz.test.RussianTest Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 0 sec Testcase: initializationError took 0 sec Caused an ERROR org/hamcrest/SelfDescribing java.lang.NoClassDefFoundError: org/hamcrest/SelfDescribing at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClass(Unknown Source) at java.security.SecureClassLoader.defineClass(Unknown Source) at java.net.URLClassLoader.defineClass(Unknown Source) at java.net.URLClassLoader.access$000(Unknown Source) at java.net.URLClassLoader$1.run(Unknown Source) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at java.lang.ClassLoader.loadClassInternal(Unknown Source) at java.lang.reflect.Constructor.newInstance(Unknown Source) Caused by: java.lang.ClassNotFoundException: org.hamcrest.SelfDescribing at java.net.URLClassLoader$1.run(Unknown Source) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at java.lang.ClassLoader.loadClassInternal(Unknown Source) What is this org.hamcrest.SelfDescribing class? Something to do with mocks? OK, fine. But why the dependency? I'm not doing anything at all with it. This is literally a Java project with no dependencies other than JUnit. Stumped (and frustrated)!!
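
    For what it's worth, the NoClassDefFoundError suggests hamcrest-core (the jar that provides org.hamcrest.SelfDescribing, which JUnit 4.5 depends on) is missing from the junit task classpath; a sketch of one possible fix, with the jar location as a placeholder:

        <classpath>
            <pathelement path="${basedir}/bin" />
            <!-- placeholder path: point this at a hamcrest-core jar, e.g. the
                 org.hamcrest.core_*.jar shipped in the Eclipse plugins folder -->
            <pathelement location="${lib.dir}/hamcrest-core-1.1.jar" />
        </classpath>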

    Read the article

  • Help with simple linux shell implementation

    - by nunos
    I am implementing a simple version of a linux shell in c. I have succesfully written the parser, but I am having some trouble forking out the child process. However, I think the problem is due to arrays, pointers and such, because just started C with this project and am not still very knowledgable with them. I am getting a segmentation fault and don't know where from. Any help is greatly appreciated. #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <string.h> #include <sys/wait.h> #include <sys/types.h> #define MAX_COMMAND_LENGTH 250 #define MAX_ARG_LENGTH 250 typedef enum {false, true} bool; typedef struct { char **arg; char *infile; char *outfile; int background; } Command_Info; int parse_cmd(char *cmd_line, Command_Info *cmd_info) { char *arg; char *args[MAX_ARG_LENGTH]; int i = 0; arg = strtok(cmd_line, " "); while (arg != NULL) { args[i] = arg; arg = strtok(NULL, " "); i++; } int num_elems = i; if (num_elems == 0) return -1; cmd_info->infile = NULL; cmd_info->outfile = NULL; cmd_info->background = 0; int iarg = 0; for (i = 0; i < num_elems-1; i++) { if (!strcmp(args[i], "<")) { if (args[i+1] != NULL) cmd_info->infile = args[++i]; else return -1; } else if (!strcmp(args[i], ">")) { if (args[i+1] != NULL) cmd_info->outfile = args[++i]; else return -1; } else cmd_info->arg[iarg++] = args[i]; } if (!strcmp(args[i], "&")) cmd_info->background = true; else cmd_info->arg[iarg++] = args[i]; cmd_info->arg[iarg] = NULL; return 0; } void print_cmd(Command_Info *cmd_info) { int i; for (i = 0; cmd_info->arg[i] != NULL; i++) printf("arg[%d]=\"%s\"\n", i, cmd_info->arg[i]); printf("arg[%d]=\"%s\"\n", i, cmd_info->arg[i]); printf("infile=\"%s\"\n", cmd_info->infile); printf("outfile=\"%s\"\n", cmd_info->outfile); printf("background=\"%d\"\n", cmd_info->background); } void get_cmd(char* str) { fgets(str, MAX_COMMAND_LENGTH, stdin); str[strlen(str)-1] = '\0'; //apaga o '\n' do fim } pid_t exec_simple(Command_Info *cmd_info) { pid_t pid = fork(); if (pid < 0) { perror("Fork Error"); return -1; } if (pid == 0) { execvp(cmd_info->arg[0], cmd_info->arg); perror(cmd_info->arg[0]); exit(1); } return pid; } int main(int argc, char* argv[]) { while (true) { char cmd_line[MAX_COMMAND_LENGTH]; Command_Info cmd_info; printf(">>> "); get_cmd(cmd_line); if ( (parse_cmd(cmd_line, &cmd_info) == -1) ) return -1; parse_cmd(cmd_line, &cmd_info); if (!strcmp(cmd_info.arg[0], "exit")) exit(0); pid_t pid = exec_simple(&cmd_info); waitpid(pid, NULL, 0); } return 0; } Thanks.
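
    For reference, one likely source of the segmentation fault is that Command_Info.arg is declared as char ** but never given any storage before parse_cmd writes through it (cmd_info->arg[iarg++] = ...). A minimal sketch of one way to fix it, with the array size as an assumption:

        typedef struct {
            /* was: char **arg;  -- an uninitialized pointer, so arg[i] = ... is undefined behaviour */
            char *arg[MAX_ARG_LENGTH];
            char *infile;
            char *outfile;
            int background;
        } Command_Info;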

    Read the article

  • From NaN to Infinity...and Beyond!

    - by Tony Davis
    It is hard to believe that it was once possible to corrupt a SQL Server database by storing perfectly normal data values into a table, but it is true. In SQL Server 2000 and before, one could inadvertently load invalid data values into certain data types via RPC calls or bulk insert methods rather than DML. In the particular case of the FLOAT data type, this meant that common 'special values' for this type, namely NaN (not-a-number) and +/- infinity, could be quite happily plugged into the database from an application and stored as 'out-of-range' values. This was like a time-bomb. When one then tried to query this data, the values were unsupported and so data pages containing them were flagged as being corrupt. Any query that needed to read a column containing the special value could fail or return unpredictable results. Microsoft even had to issue a hotfix to deal with failures in the automatic recovery process, caused by the presence of these NaN values, which rendered the whole database inaccessible! This problem is history for those of us on more current versions of SQL Server, but its ghost still haunts us. Recently, for example, a developer on Red Gate's SQL Response team reported a strange problem when attempting to load historical monitoring data into a SQL Server 2005 database via the C# ADO.NET provider. The ratios used in some of their reporting calculations occasionally threw out NaN or infinity values, and the subsequent attempts to load these values resulted in a nasty error. It turns out to be a different manifestation of the same problem. SQL Server 2005 still does not fully support the IEEE 754 standard for floating point numbers, in that the FLOAT data type still cannot handle NaN or infinity values. Instead, Microsoft just added validation checks that prevent the 'invalid' values from being loaded in the first place. For people migrating databases that contained out-of-range FLOAT (or DATETIME etc.) data from SQL Server 2000 to SQL Server 2005, Microsoft added a DATA_PURITY clause to the latter's version of the DBCC CHECKDB (or CHECKTABLE) command. When enabled, this will seek out the corrupt data, but won't fix it. You have to do that yourself in what can often be a slow, painful manual process. Our development team, after a quizzical shrug of the shoulders, simply decided to represent NaN and infinity values as NULL, and move on, accepting the minor inconvenience of not being able to tell them apart. However, what of scientific, engineering and other applications that really would like the luxury of being able to both store and access these perfectly reasonable floating point data values? The sticking point seems to be the stipulation in the IEEE 754 standard that, when NaN is compared to any other value including itself, the answer is "unequal" (i.e. FALSE). This is clearly different from normal number comparisons and has repercussions for such things as indexing operations. Even so, this hardly applies to infinity values, which are single definite values. In fact, there is some encouraging talk in the Connect note on this issue that they might be supported 'in the SQL Server 2008 timeframe'. It didn't happen: SQL Server 2008 doesn't support NaN or infinity values, though one could be forgiven for thinking otherwise, based on the MSDN documentation for the FLOAT type, which states that "The behavior of float and real follows the IEEE 754 specification on approximate numeric data types".
However, the truth is revealed in the XPath documentation, which states that "…float (53) is not exactly IEEE 754. For example, neither NaN (Not-a-Number) nor infinity is used…". Is it really so hard to fix this problem the right way, and properly support in SQL Server the IEEE 754 standard for the floating point data type, NaNs, infinities and all? Oracle seems to have managed it quite nicely with its BINARY_FLOAT and BINARY_DOUBLE types, so it is technically possible. We have an enterprise-class database that is marketed as being part of an 'integrated' Windows platform. Absurdly, we have .NET and XPath libraries that fully support the standard for floating point numbers, and we can't even properly store these values, let alone query them, in the SQL Server database! Cheers, Tony.
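
    For reference, a sketch of the post-migration purity check mentioned above (the database name is a placeholder); it reports, but does not repair, out-of-range values:

        -- run once against a database upgraded from SQL Server 2000
        DBCC CHECKDB ('MyDatabase') WITH DATA_PURITY;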

    Read the article

  • What is the SharePoint Action Framework and Why do I need it ?

    - by SAF
    For those out there who are a little curious as to whether SAF is any use to your organisation, please read this FAQ. What is SAF? SAF is free to use. SAF is the "SharePoint Action Framework"; it was built by myself and Hugo (plus a few others along the way). SAF is written entirely in C#, with the code available from http://saf.codeplex.com. SAF is a way to automate SharePoint configuration changes. An Action is a command/class/task/script written in C# that performs a unit of execution against SharePoint, such as "CreateWeb" or "AddLookupColumn". A SAF Macro is a collection of one or more Actions. A SAF Macro can be run from MSBuild, a Feature, StsAdm or plain old .NET code. Parameters can be passed to a Macro at run-time from a variety of sources such as environment variables, *.config files, MSBuild properties, Feature properties, command-line args or .NET code. SAF emits lots of trace statements at run-time; these can be viewed using DebugView. One Action can pass parameters to another Action. Parameters can be set using expression syntax such as "DateTime.Now". You should consider SAF if you suffer from one of the following symptoms... "Our developers write lots of code to deploy changes at release time - it's always rushed" "I don't want my developers shelling out to Powershell or Stsadm from a Feature". "We have loads of Console applications now, I have lost track of where they are, or what they do" "We seem to be writing similar scripts against SharePoint in lots of ways, testing is hard". "My scripts often have lots of errors - they are done at the last minute". "When something goes wrong - I have no idea what went wrong or how to solve it". "Our Features get stuck and bomb out half way through - there's no way to roll them back". "We have tons of Features now - I can't keep track". "We deploy Features to run one-off tasks" "We have a library of reusable scripts, but we can only run it in one way; sometimes we want to run it from MSBuild and a Feature". "I want to automate the deployment of changes to our development environment". "I would like to run a housekeeping task on a scheduled basis" So I like the sound of SAF - what are the problems? Realistically, there are a few things that need to be considered: Someone on your team will need to spend a day or two understanding SAF and deciding exactly how you want to use it. I would suggest a Tech Lead, SysAdmin or SharePoint Architect will need to download it, try out the examples and look through the unit tests. Ask us questions. Although SAF can be downloaded and set to go in a few minutes, you will still need to address issues such as "Do you want to execute your Macros in MSBuild or from a Feature?" You will need to decide who is going to do your deployments - is it each developer for themselves, or do you require a dedicated Build Manager? As most environments (Dev, QA, Live etc.) require different settings (e.g. URLs, database names, accounts etc.), you will more than likely want to define these and set up a properties file for each environment. (These can then be injected into SAF at run-time.) There may be no Action to solve your particular problem. If this is the case, suggest it to us - we can try and write it, or you can write it yourself. It's very easy to write a new Action - we have an approach to easily unit test it, document it and author it. For example, I wrote one to deploy a WSP in 2 hours the other day. Alternatively, SAF can also call Stsadm commands and Powershell scripts. Anyway, I do hope this helps!
If you still need help, or a quick start, we can also offer consultancy around SAF. If you want to know more give us a call or drop an email to [email protected]

    Read the article

  • PostgreSQL: Rolling back a transaction within a plpgsql function?

    - by jamieb
    Coming from the MS SQL world, I tend to make heavy use of stored procedures. I'm currently writing an application that uses a lot of PostgreSQL plpgsql functions. What I'd like to do is roll back all INSERTs/UPDATEs contained within a particular function if I get an exception at any point within it. I was originally under the impression that each function is wrapped in its own transaction and that an exception would automatically roll back everything. However, that doesn't seem to be the case. I'm wondering if I ought to be using savepoints in combination with exception handling instead? But I don't really understand the difference between a transaction and a savepoint well enough to know if this is the best approach. Any advice please? CREATE OR REPLACE FUNCTION do_something( _an_input_var int ) RETURNS bool AS $$ DECLARE _a_variable int; BEGIN INSERT INTO tableA (col1, col2, col3) VALUES (0, 1, 2); INSERT INTO tableB (col1, col2, col3) VALUES (0, 1, 'whoops! not an integer'); -- The exception will cause the function to bomb, but the values -- inserted into "tableA" are not rolled back. RETURN True; END; $$ LANGUAGE plpgsql;
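
    For reference, a minimal sketch of the exception-handling approach being asked about (table and column names follow the example above): in PL/pgSQL, a BEGIN ... EXCEPTION block runs on an implicit savepoint, so by the time the handler is entered, all changes made inside that block have been rolled back.

        CREATE OR REPLACE FUNCTION do_something(_an_input_var int) RETURNS bool AS $$
        BEGIN
            INSERT INTO tableA (col1, col2, col3) VALUES (0, 1, 2);
            INSERT INTO tableB (col1, col2, col3) VALUES (0, 1, 'whoops! not an integer');
            RETURN True;
        EXCEPTION WHEN others THEN
            -- both inserts above are rolled back before control reaches here
            RAISE NOTICE 'do_something failed: %', SQLERRM;
            RETURN False;
        END;
        $$ LANGUAGE plpgsql;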

    Read the article

  • PHP get "Application Root" in Xampp on Windows correctly

    - by Michael Mao
    Hi all: I found this thread on StackOverflow about how to get the "Application Root" from inside my web app. However, most of the approaches suggested in that thread can hardly be applied to my XAMPP setup on Windows. Say I've got a "common.php" which lives inside my web app's app directory: / /app/common.php /protected/index.php In my common.php, what I've got is like this: define('DOCROOT', $_SERVER['DOCUMENT_ROOT']); define('ABSPATH', dirname(__FILE__)); define('COMMONSCRIPT', $_SERVER['PHP_SELF']); After I required common.php inside /protected/index.php, I found this: C:/xampp/htdocs //<== echo DOCROOT; C:\xampp\htdocs\comic\app //<== echo ABSPATH /comic/protected/index.php //<== echo COMMONSCRIPT So the most troublesome part is that the path delimiters are not universal; plus, it seems all superglobals from the $_SERVER[] associative array, such as $_SERVER['PHP_SELF'], are relative to the "caller" script, not the "callee" script. It seems that I can only rely on dirname(__FILE__) to make sure this always returns an absolute path to the common.php file. I can certainly parse the returned values of DOCROOT and ABSPATH and then calculate the correct "application root". For instance, I can compare the parts after htdocs and substitute all backslashes with slashes to get a Unix-like path. I wonder, is this the right way to handle this? What if I deploy my web app on a LAMP environment? Would this environment-dependent approach bomb me out? I have used some PHP frameworks such as CakePHP and CodeIgniter; to be frank, they just work on either LAMP or WAMP, but I am not sure how they arrived at such an elegant solution. Many thanks in advance for all the hints and suggestions.
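
    For reference, a small sketch of the normalise-and-compare approach described above (it assumes common.php lives in <app-root>/app/, as in the layout shown):

        <?php
        // normalise Windows backslashes so both paths use forward slashes
        $appRoot = str_replace('\\', '/', dirname(dirname(__FILE__)));            // e.g. C:/xampp/htdocs/comic
        $docRoot = rtrim(str_replace('\\', '/', $_SERVER['DOCUMENT_ROOT']), '/'); // e.g. C:/xampp/htdocs

        define('APPROOT', $appRoot);                            // filesystem path of the application
        define('BASEURL', substr($appRoot, strlen($docRoot)));  // URL prefix, e.g. /comic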

    Read the article

  • Freemarker - lack of special character in email subject template cause email content crash

    - by freakman
    I'm fighting with a strange error. I'm using separate FreeMarker templates for the mail subject and body, sent using org.springframework.mail.javamail.JavaMailSender. Only templates that contain some special Swedish character work in my application (yes, you read that right... not the other way around). If I delete it, my email content crashes. It then contains: MIME-Version: 1.0 Content-Type: text/html;charset=UTF-8 Content-Transfer-Encoding: 7bit .. html code here .. My freemarker.properties file: locale=sv_SE classic_compatible=false number_format= date_format=yyyy-MM-dd time_format=HH:mm datetime_format=yyyy-MM-dd HH:mm output_encoding=UTF-8 url_escaping_charset=UTF-8 auto_import=spring.ftl as spring auto_include= default_encoding=UTF-8 localized_lookup=true strict_syntax=true whitespace_stripping=true template_update_delay=10 I've tried converting the subject file with the dos2unix tool. Using 'file -bi subject.ftl' shows that the encoding is us-ascii; with the special character added, it is utf-8. This thing is surprisingly strange to me... //SOLUTION: use :set bomb and save the file in vim (i.e. write the file with a UTF-8 byte order mark).

    Read the article

  • AJAX vs ActiveX/Flash for browser-based game

    - by iconiK
    I have been following the usage of JavaScript for the past few years, and with the release of extremely fast scripting engines (V8, SquirrelFish Extreme, TraceMonkey, etc.) the possibilities of JavaScript have increased dramatically. However, the usage share of Internet Explorer coupled with its total lack of support for recent standards makes me want to drop a bomb on Microsoft's HQ, as it creates a huge amount of problems for any website. The game will need to be pretty dynamic client-side, with animations and other eye-candy things, but not a full-blown game like those that run directly in the OS using DirectX or OpenGL. However, this might be a bit of a stretch for JavaScript and will certainly feel extremely slow in Internet Explorer (given that the current IE engine can be hundreds of times slower than SFX; we'll see what IE9 brings). Would it be better to just do the whole thing in Flash? I know this means requiring the plug-in, AND I have no experience whatsoever with Flash (other than browsing YouTube :P). It also means I can't just output directly from PHP; I would have to use XML or some other format to pass data to it (JSON is directly integrated in JS, and PHP can deal with it easily). Another idea would be to provide an alternative interface just for IE, though I don't know how (ActiveX maybe? or with Flash, but then why not just provide it to all browsers), or to simply not support it and require the use of other browsers, although this is plain stupid from a business perspective. So here I am, wondering what approach to take and thus asking for your advice. How should I build the client side? AJAX in all browsers, Flash in all browsers, or a mix (AJAX for "modern" browsers and something else for the "grandpa": IE)?

    Read the article

  • howto distinguish composition and self-typing use-cases

    - by ayvango
    Scala has two instruments for expressing object composition: the original self-type concept and well-known plain composition. I'm curious which situations I should use each in. There are obvious differences in their applicability. Self-type requires you to use traits. Object composition allows you to change extensions at run-time with a var declaration. Leaving technical details aside, I can see two indicators to help classify the use cases. If an object is used as a combinator for a complex structure such as a tree, or simply has several similarly typed parts (a 1-car-to-4-wheels relation), then it should use composition. There is an extreme opposite use case: let's assume one trait becomes too big to observe clearly and gets split. It is quite natural that you should use self-types in this case. Those rules are not absolute. You can do extra work to convert code between these techniques; e.g. you may replace 4-wheel composition with self-typing over Product4. You may use Cake[T <: MyType] {part : MyType} instead of Cake { this : MyType => } for cake pattern dependencies. But both cases seem counterintuitive and give you extra work. There are plenty of boundary use cases, though. One-to-one relations are very hard to decide on. Is there any simple rule to decide which kind of technique is preferable? Self-type makes your classes abstract; composition makes your code verbose. Self-type gives you problems with blending namespaces and also gives you extra typing for free (you get not just a cocktail of two elements but a gasoline-motor-oil cocktail known as a petrol bomb). How can I choose between them? What hints are there? Update: Let us discuss the following example: the Adapter pattern. What benefits does it have with the self-typing and composition approaches?
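
    For reference, a minimal Scala sketch of the two styles side by side (the Engine/Car names are purely illustrative):

        trait Engine { def start(): Unit }

        // self-type: Car stays abstract until it is mixed together with an Engine
        trait Car { this: Engine =>
          def drive(): Unit = start()
        }

        // composition: Car owns an Engine and can swap it at run-time
        class ComposedCar(var engine: Engine) {
          def drive(): Unit = engine.start()
        }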

    Read the article

  • How To Go About Updating Old C Code

    - by Ben313
    Hello: I have been working on some 10-year-old C code at my job this week, and after implementing a few changes, I went to the boss and asked if he needed anything else done. That's when he dropped the bomb. My next task was to go through the 7000 or so lines and understand more of the code, AND to modularize the code somewhat. I asked him how he would like the source code modularized, and he said to start putting the old C code into C++ classes. Being a good worker, I nodded my head yes, and went back to my desk, where I sit now, wondering how in the world to take this code and "modularize" it. It's already in 20 source files, each with its own purpose and function. In addition, there are three "main" structs. Each of these structures has 30-plus fields, many of them being other, smaller structs. It's a complete mess to try to understand, but almost every single function in the program is passed a pointer to one of the structs, and uses the struct heavily. Is there any clean way for me to shoehorn this into classes? I am resolved to do it if it can be done; I just have no idea how to begin.
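
    For reference, a common incremental first step (the names below are purely illustrative) is to leave the existing structs and functions alone and wrap each struct in a thin C++ class that forwards to the old code, so callers can migrate one at a time:

        struct LegacyData;                        // one of the three big structs, unchanged
        extern "C" void legacy_process(LegacyData *d);

        class DataModule {
        public:
            explicit DataModule(LegacyData *d) : data_(d) {}
            void process() { legacy_process(data_); }   // forwards to the old C function
        private:
            LegacyData *data_;                    // not owned; lifetime managed as before
        };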

    Read the article

  • How to handle people who lie on their resume

    - by Juliet
    I'm conducting technical interviews to fill a few .NET positions. Many of the people I interview really do know .NET pretty well, but I find at least 90% of them embellish their skillset anywhere between "a little" and "quite drastically". Sometimes they fabricate skills relevant to the position they're applying for, sometimes they don't. Most of the people I interview, even the most egregious liars, are not scam artists. They just want to stand out among the crowd, so they drop a few buzzwords on their resume like "JBoss", "LINQ", "web services", "Django" or whatever, just to pad their skillset and stay competitive. (You might wonder whether a person who lies about those skills is just bluffing their way through a technical interview. My interviews involve a lot of hands-on coding and problem-solving -- people who attempt to bluff will bomb the hands-on coding portion in the first 3 minutes.) These are two open-ended questions, but it would really help me out when I make my recommendations to the hiring managers: 1) Regarding interviewing etiquette, should I attempt to determine whether a person really possesses all of the skills they claim to have? Can I do this without making the candidate feel uncomfortable? 2) Regarding the final decision, should I recommend candidates who are genuinely qualified for the positions they're applying for, even if they've fabricated portions of their skillset?

    Read the article

  • crash when using stl vector at instead of operator[]

    - by Jamie Cook
    I have a method as follows (from a class than implements TBB task interface - not currently multithreading though) My problem is that two ways of accessing a vector are causing quite different behaviour - one works and the other causes the entire program to bomb out quite spectacularly (this is a plugin and normally a crash will be caught by the host - but this one takes out the host program as well! As I said quite spectacular) void PtBranchAndBoundIterationOriginRunner::runOrigin(int origin, int time) const // NOTE: const method { BOOST_FOREACH(int accessMode, m_props->GetAccessModes()) { // get a const reference to appropriate vector from member variable // map<int, vector<double>> m_rowTotalsByAccessMode; const vector<double>& rowTotalsForAccessMode = m_rowTotalsByAccessMode.find(accessMode)->second; if (origin != 129) continue; // Additional debug constrain: I know that the vector only has one non-zero element at index 129 m_job->Write("size: " + ToString(rowTotalsForAccessMode.size())); try { // check for early return... i.e. nothing to do for this origin if (!rowTotalsForAccessMode[origin]) continue; // <- this works if (!rowTotalsForAccessMode.at(origin)) continue; // <- this crashes } catch (...) { m_job->Write("Caught an exception"); // but its not an exception } // do some other stuff } } I hate not putting in well defined questions but at the moment my best phrasing is : "WTF?" I'm compiling this with Intel C++ 11.0.074 [IA-32] using Microsoft (R) Visual Studio Version 9.0.21022.8 and my implementation of vector has const_reference operator[](size_type _Pos) const { // subscript nonmutable sequence #if _HAS_ITERATOR_DEBUGGING if (size() <= _Pos) { _DEBUG_ERROR("vector subscript out of range"); _SCL_SECURE_OUT_OF_RANGE; } #endif /* _HAS_ITERATOR_DEBUGGING */ _SCL_SECURE_VALIDATE_RANGE(_Pos < size()); return (*(_Myfirst + _Pos)); } (Iterator debugging is off - I'm pretty sure) and const_reference at(size_type _Pos) const { // subscript nonmutable sequence with checking if (size() <= _Pos) _Xran(); return (*(begin() + _Pos)); } So the only difference I can see is that at calls begin instead of simply using _Myfirst - but how could that possibly be causing such a huge difference in behaviour?

    Read the article

  • Cannot determine ethernet address for proxy ARP on PPTP

    - by Linux Intel
    I installed pptp server on a centos 6 64bit server PPTP Server ip : 55.66.77.10 PPTP Local ip : 10.0.0.1 Client1 IP : 10.0.0.60 centos 5 64bit Client2 IP : 10.0.0.61 centos5 64bit PPTP Server can ping Client1 And client 1 can ping PPTP Server PPTP Server can ping Client2 And client 2 can ping PPTP Server The problem is client 1 can not ping Client 2 and i get this error also on PPTP server error log Cannot determine ethernet address for proxy ARP Ping from Client2 to Client1 PING 10.0.0.60 (10.0.0.60) 56(84) bytes of data. --- 10.0.0.60 ping statistics --- 6 packets transmitted, 0 received, 100% packet loss, time 5000ms route -n on PPTP Server Destination Gateway Genmask Flags Metric Ref Use Iface 10.0.0.60 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0 10.0.0.61 0.0.0.0 255.255.255.255 UH 0 0 0 ppp1 55.66.77.10 0.0.0.0 255.255.255.248 U 0 0 0 eth0 10.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 eth0 0.0.0.0 55.66.77.19 0.0.0.0 UG 0 0 0 eth0 route -n On Client 1 Destination Gateway Genmask Flags Metric Ref Use Iface 10.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0 55.66.77.10 70.14.13.19 255.255.255.255 UGH 0 0 0 eth0 10.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 eth1 0.0.0.0 70.14.13.19 0.0.0.0 UG 0 0 0 eth0 route -n On Client 2 Destination Gateway Genmask Flags Metric Ref Use Iface 10.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0 55.66.77.10 84.56.120.60 255.255.255.255 UGH 0 0 0 eth1 10.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 eth0 0.0.0.0 84.56.120.60 0.0.0.0 UG 0 0 0 eth1 cat /etc/ppp/options.pptpd on PPTP server ############################################################################### # $Id: options.pptpd,v 1.11 2005/12/29 01:21:09 quozl Exp $ # # Sample Poptop PPP options file /etc/ppp/options.pptpd # Options used by PPP when a connection arrives from a client. # This file is pointed to by /etc/pptpd.conf option keyword. # Changes are effective on the next connection. See "man pppd". # # You are expected to change this file to suit your system. As # packaged, it requires PPP 2.4.2 and the kernel MPPE module. ############################################################################### # Authentication # Name of the local system for authentication purposes # (must match the second field in /etc/ppp/chap-secrets entries) name pptpd # Strip the domain prefix from the username before authentication. # (applies if you use pppd with chapms-strip-domain patch) #chapms-strip-domain # Encryption # (There have been multiple versions of PPP with encryption support, # choose with of the following sections you will use.) # BSD licensed ppp-2.4.2 upstream with MPPE only, kernel module ppp_mppe.o # {{{ refuse-pap refuse-chap refuse-mschap # Require the peer to authenticate itself using MS-CHAPv2 [Microsoft # Challenge Handshake Authentication Protocol, Version 2] authentication. require-mschap-v2 # Require MPPE 128-bit encryption # (note that MPPE requires the use of MSCHAP-V2 during authentication) require-mppe-128 # }}} # OpenSSL licensed ppp-2.4.1 fork with MPPE only, kernel module mppe.o # {{{ #-chap #-chapms # Require the peer to authenticate itself using MS-CHAPv2 [Microsoft # Challenge Handshake Authentication Protocol, Version 2] authentication. #+chapms-v2 # Require MPPE encryption # (note that MPPE requires the use of MSCHAP-V2 during authentication) #mppe-40 # enable either 40-bit or 128-bit, not both #mppe-128 #mppe-stateless # }}} # Network and Routing # If pppd is acting as a server for Microsoft Windows clients, this # option allows pppd to supply one or two DNS (Domain Name Server) # addresses to the clients. 
The first instance of this option # specifies the primary DNS address; the second instance (if given) # specifies the secondary DNS address. #ms-dns 10.0.0.1 #ms-dns 10.0.0.2 # If pppd is acting as a server for Microsoft Windows or "Samba" # clients, this option allows pppd to supply one or two WINS (Windows # Internet Name Services) server addresses to the clients. The first # instance of this option specifies the primary WINS address; the # second instance (if given) specifies the secondary WINS address. #ms-wins 10.0.0.3 #ms-wins 10.0.0.4 # Add an entry to this system's ARP [Address Resolution Protocol] # table with the IP address of the peer and the Ethernet address of this # system. This will have the effect of making the peer appear to other # systems to be on the local ethernet. # (you do not need this if your PPTP server is responsible for routing # packets to the clients -- James Cameron) proxyarp # Normally pptpd passes the IP address to pppd, but if pptpd has been # given the delegate option in pptpd.conf or the --delegate command line # option, then pppd will use chap-secrets or radius to allocate the # client IP address. The default local IP address used at the server # end is often the same as the address of the server. To override this, # specify the local IP address here. # (you must not use this unless you have used the delegate option) #10.8.0.100 # Logging # Enable connection debugging facilities. # (see your syslog configuration for where pppd sends to) debug # Print out all the option values which have been set. # (often requested by mailing list to verify options) #dump # Miscellaneous # Create a UUCP-style lock file for the pseudo-tty to ensure exclusive # access. lock # Disable BSD-Compress compression nobsdcomp # Disable Van Jacobson compression # (needed on some networks with Windows 9x/ME/XP clients, see posting to # poptop-server on 14th April 2005 by Pawel Pokrywka and followups, # http://marc.theaimsgroup.com/?t=111343175400006&r=1&w=2 ) novj novjccomp # turn off logging to stderr, since this may be redirected to pptpd, # which may trigger a loopback nologfd # put plugins here # (putting them higher up may cause them to sent messages to the pty) cat /etc/ppp/options.pptp on Client1 and Client2 ############################################################################### # $Id: options.pptp,v 1.3 2006/03/26 23:11:05 quozl Exp $ # # Sample PPTP PPP options file /etc/ppp/options.pptp # Options used by PPP when a connection is made by a PPTP client. # This file can be referred to by an /etc/ppp/peers file for the tunnel. # Changes are effective on the next connection. See "man pppd". # # You are expected to change this file to suit your system. As # packaged, it requires PPP 2.4.2 or later from http://ppp.samba.org/ # and the kernel MPPE module available from the CVS repository also on # http://ppp.samba.org/, which is packaged for DKMS as kernel_ppp_mppe. ############################################################################### # Lock the port lock # Authentication # We don't need the tunnel server to authenticate itself noauth # We won't do PAP, EAP, CHAP, or MSCHAP, but we will accept MSCHAP-V2 # (you may need to remove these refusals if the server is not using MPPE) refuse-pap refuse-eap refuse-chap refuse-mschap # Compression # Turn off compression protocols we know won't be used nobsdcomp nodeflate # Encryption # (There have been multiple versions of PPP with encryption support, # choose which of the following sections you will use. 
Note that MPPE # requires the use of MSCHAP-V2 during authentication) # # Note that using PPTP with MPPE and MSCHAP-V2 should be considered # insecure: # http://marc.info/?l=pptpclient-devel&m=134372640219039&w=2 # https://github.com/moxie0/chapcrack/blob/master/README.md # http://technet.microsoft.com/en-us/security/advisory/2743314 # http://ppp.samba.org/ the PPP project version of PPP by Paul Mackarras # ppp-2.4.2 or later with MPPE only, kernel module ppp_mppe.o # If the kernel is booted in FIPS mode (fips=1), the ppp_mppe.ko module # is not allowed and PPTP-MPPE is not available. # {{{ # Require MPPE 128-bit encryption #require-mppe-128 # }}} # http://mppe-mppc.alphacron.de/ fork from PPP project by Jan Dubiec # ppp-2.4.2 or later with MPPE and MPPC, kernel module ppp_mppe_mppc.o # {{{ # Require MPPE 128-bit encryption #mppe required,stateless # }}} IPtables is stopped on clients and server, Also net.ipv4.ip_forward = 1 is enabled on PPTP Server. How can i solve this problem .?

    Read the article

  • Very basic running of drools 5, basic setup and quickstart

    - by Berlin Brown
    Is there a more comprehensive quick start for Drools 5? I was attempting to run the simple Hello World .drl rule, but I wanted to do it through an ant script, possibly with just javac/java. Note: I am running completely without Eclipse or any other IDE. I get the following error: test: [java] Exception in thread "main" org.drools.RuntimeDroolsException: Unable to load dialect 'org.drools.rule.builder.dialect.java.JavaDialectConfiguration:java:org.drools.rule.builder.dialect.java.JavaDialectConfiguration' [java] at org.drools.compiler.PackageBuilderConfiguration.addDialect(PackageBuilderConfiguration.java:274) [java] at org.drools.compiler.PackageBuilderConfiguration.buildDialectConfigurationMap(PackageBuilderConfiguration.java:259) [java] at org.drools.compiler.PackageBuilderConfiguration.init(PackageBuilderConfiguration.java:176) [java] at org.drools.compiler.PackageBuilderConfiguration.<init>(PackageBuilderConfiguration.java:153) [java] at org.drools.compiler.PackageBuilder.<init>(PackageBuilder.java:242) [java] at org.drools.compiler.PackageBuilder.<init>(PackageBuilder.java:142) [java] at org.drools.builder.impl.KnowledgeBuilderProviderImpl.newKnowledgeBuilder(KnowledgeBuilderProviderImpl.java:29) [java] at org.drools.builder.KnowledgeBuilderFactory.newKnowledgeBuilder(KnowledgeBuilderFactory.java:29) [java] at org.berlin.rpg.rules.Rules.rules(Rules.java:33) [java] at org.berlin.rpg.rules.Rules.main(Rules.java:73) [java] Caused by: java.lang.RuntimeException: The Eclipse JDT Core jar is not in the classpath [java] at org.drools.rule.builder.dialect.java.JavaDialectConfiguration.setCompiler(JavaDialectConfiguration.java:94) [java] at org.drools.rule.builder.dialect.java.JavaDialectConfiguration.init(JavaDialectConfiguration.java:55) [java] at org.drools.compiler.PackageBuilderConfiguration.addDialect(PackageBuilderConfiguration.java:270) [java] ... 9 more [java] Java Result: 1 ... ... I do include the following libraries with my javac and java target: <path id="classpath"> <pathelement location="${lib.dir}" /> <pathelement location="${lib.dir}/drools-api-5.0.1.jar" /> <pathelement location="${lib.dir}/drools-compiler-5.0.1.jar" /> <pathelement location="${lib.dir}/drools-core-5.0.1.jar" /> <pathelement location="${lib.dir}/janino-2.5.15.jar" /> </path> Here is the Java code that is throwing the error. I commented out the java.compiler code; that didn't work either.
public void rules() { /* final Properties properties = new Properties(); properties.setProperty( "drools.dialect.java.compiler", "JANINO" ); PackageBuilderConfiguration cfg = new PackageBuilderConfiguration( properties ); JavaDialectConfiguration javaConf = (JavaDialectConfiguration) cfg.getDialectConfiguration( "java" ); */ final KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder(); // this will parse and compile in one step kbuilder.add(ResourceFactory.newClassPathResource("HelloWorld.drl", Rules.class), ResourceType.DRL); // Check the builder for errors if (kbuilder.hasErrors()) { System.out.println(kbuilder.getErrors().toString()); throw new RuntimeException("Unable to compile \"HelloWorld.drl\"."); } // Get the compiled packages (which are serializable) final Collection<KnowledgePackage> pkgs = kbuilder.getKnowledgePackages(); // Add the packages to a knowledgebase (deploy the knowledge packages). final KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase(); kbase.addKnowledgePackages(pkgs); final StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession(); ksession.setGlobal("list", new ArrayList<Object>()); ksession.addEventListener(new DebugAgendaEventListener()); ksession.addEventListener(new DebugWorkingMemoryEventListener()); // Setup the audit logging KnowledgeRuntimeLogger logger = KnowledgeRuntimeLoggerFactory.newFileLogger(ksession, "log/helloworld"); final Message message = new Message(); message.setMessage("Hello World"); message.setStatus(Message.HELLO); ksession.insert(message); ksession.fireAllRules(); logger.close(); ksession.dispose(); } ... Here I don't think Ant is relevant because I have fork set to true: <target name="test" depends="compile"> <java classname="org.berlin.rpg.rules.Rules" fork="true"> <classpath refid="classpath.rt" /> <classpath> <pathelement location="${basedir}" /> <pathelement location="${build.classes.dir}" /> </classpath> </java> </target> The error is thrown at line 1. Basically, I haven't done anything except call final KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder(); I am running with Windows XP, Java6, and within Ant.1.7. The most recent (as of yesterday) version 5 of Drools-Rules.
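
    For reference, the "Eclipse JDT Core jar is not in the classpath" message comes from the default Java dialect compiler; a sketch of the two usual ways around it (the jar name below is a placeholder for whatever Eclipse JDT core artifact ships with the Drools release):

        <!-- option 1: put the Eclipse JDT core compiler jar on the same path as the drools jars -->
        <pathelement location="${lib.dir}/org.eclipse.jdt.core.jar" />

        <!-- option 2: switch the dialect to JANINO instead (the property name matches the
             commented-out code above), e.g. as a jvmarg on the <java> task -->
        <jvmarg value="-Ddrools.dialect.java.compiler=JANINO" />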

    Read the article

  • Silverlight: DataContractSerializer cannot handle read only collection properties

    - by moonground.de
    Hey Stackoverflowers :) For our Silverlight project (SL4) I'm using a Model which might contain Lists (IList<AnotherModel>). According to good practice and rule CA2227:CollectionPropertiesShouldBeReadOnly, the IList properties don't have a public setter. We serialize the Model using the DataContractSerializer, which works. But when I try to deserialize, a SecurityException is thrown by DataContractSerializer's ReadObject(Stream) method, complaining that the target property (pointing to the IList property) cannot be set due to a missing public setter. Since the DataContractSerializer is sealed and neither extensible nor flexible, I currently see no way to add some kind of additional rule that would deserialize the ILists using a foreach loop calling the Add() method, or some other way of transferring the collection items. I've also tried to dig into the DataContractSerializer source (using Reflector) to create a little fork, but it looks like I'd have to dig very deep, and replicating whole serialization classes doesn't seem to be a viable solution. Do you see another way to deserialize a List with no public setter using the DataContractSerializer? Thank you very much in advance for your ideas! Thomas

    Read the article

  • activemq-maven-plugin ignore files in classpath?

    - by Oscar Chan
    I have been trying to get activemq-maven-plugin to run activemq with configuration in classpath of the bundle. However, I don't have much luck. It seems that the activemq-maven-plugin just ignore resources (resources/main/conf/activemq.properties) the local bundle. I checked the jar and target/classes and they are built into the right local. I am able to get plugin to run (mvn activemq:run) if I take out the PropertyPlaceholderConfigurer bean in the activemq.xml Did I do anything wrong? Here is the output [INFO] ------------------------------------------------------------------------ [ERROR] BUILD ERROR [INFO] ------------------------------------------------------------------------ [INFO] Failed to start ActiveMQ Broker Embedded error: Could not load properties; nested exception is java.io.FileNotFoundException: class path resource [conf/activemq.properties] cannot be opened because it does not exist [INFO] ------------------------------------------------------------------------ [INFO] For more information, run Maven with the -e switch [INFO] ------------------------------------------------------------------------ [INFO] Total time: 2 seconds [INFO] Finished at: Mon May 03 15:56:05 PDT 2010 [INFO] Final Memory: 11M/79M [INFO] ------------------------------------------------------------------------ Here is the pom.xml, which I specific the plugin to look up activemq.xml via file, that works. However, in the activemq.xml <?xml version="1.0"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>oc.test</groupId> <artifactId>mq</artifactId> <version>0.1</version> <name>mq</name> <url>http://maven.apache.org</url> <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3.8.1</version> <scope>test</scope> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.apache.activemq.tooling</groupId> <artifactId>maven-activemq-plugin</artifactId> <version>5.3.1</version> <configuration> <configUri>xbean:file:src/main/resources/conf/activemq.xml</configUri> <fork>false</fork> <systemProperties> <property> <name>javax.net.ssl.keyStorePassword</name> <value>password</value> </property> <property> <name>org.apache.activemq.default.directory.prefix</name> <value>./target/</value> </property> </systemProperties> </configuration> <dependencies> <dependency> <groupId>org.springframework</groupId> <artifactId>spring</artifactId> <version>2.5.5</version> </dependency> </dependencies> </plugin> </plugins> </build> </project> Here is the src/main/resources/conf/activemq.xml <?xml version="1.0"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:amq="http://activemq.apache.org/schema/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd "> <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer"> <property name="locations"> <value>classpath:conf/activemq.properties</value> </property> <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_OVERRIDE"/> </bean> <broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="./data"> <!-- The transport 
connectors ActiveMQ will listen to --> <transportConnectors> <transportConnector name="openwire" uri="tcp://localhost:61616"/> </transportConnectors> </broker> </beans> Here is the src/main/resources/conf/activemq.properties activemq.port=61616

    Read the article

  • "Requested registry access is not allowed." on Windows 7 / Vista

    - by Trainee4Life
    I'm attempting to write a key to the Registry. It works on Windows XP, but fails on Windows 7 / Vista. The code below throws a SecurityException with the description "Requested registry access is not allowed." RegistryKey regKey = Registry.LocalMachine.OpenSubKey("SOFTWARE\\App_Name\\" + subKey, true); I realise that this has to do with the UAC settings, but I couldn't figure out an ideal workaround. I don't want to fork out another process, and maybe don't even want to prompt for any credentials. I just want it to work the same way as on Windows XP. I have modified the manifest file and removed the requestedExecutionLevel node. This seems to do the trick. Is there any other possible workaround, and are there any serious flaws with the "manifest" solution?
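
    For reference, a sketch of the manifest element being discussed, with the usual values shown; note that writing under HKEY_LOCAL_MACHINE\SOFTWARE generally requires the process to run elevated:

        <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
          <security>
            <requestedPrivileges>
              <!-- asInvoker runs with the caller's token; requireAdministrator forces elevation -->
              <requestedExecutionLevel level="asInvoker" uiAccess="false" />
            </requestedPrivileges>
          </security>
        </trustInfo>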

    Read the article

  • Passenger Bundler::PathError

    - by firecall
    So I'm going around in circles with this - I'm using a fork of the Paperclip Rails gem to get it to work with Rails 3. It works fine on my OS X box with Passenger, but on my server (CentOS 5) I get this error: git://github.com/lmumar/paperclip.git (at rails3) is not checked out. Please run bundle install (Bundler::PathError) I tried bundle pack, but that doesn't pack gems from GitHub. I read a post about setting the path to the BUNDLE_HOME in my application.rb file, which I tried: ENV['BUNDLER_HOME']="~/.bundle/ruby/1.8/bundler/gems/" But that doesn't work. Any ideas, anyone? I don't know what else to do and have no idea how to debug or trace the problem further :( Passenger version 2.2.11. Thanks.

    Read the article
