Search Results

Search found 3397 results on 136 pages for 'libwww perl'.


  • Can Foswiki be used as a distributed Redmine replacement? [closed]

    - by Tobias Kienzler
    I am quite familiar with and love using git, among other reasons due to its distributed nature. Now I'd like to set up some similarly distributed (FOSS) project management software with features similar to what Redmine offers, such as: issue and time tracking, milestones, Gantt charts, calendar; git integration, maybe some automatic linking of commits and issues; a wiki (preferably with MathJax support); forum, news, notifications; multiple projects. However, I am looking for a solution that does not require a permanently accessible server, i.e. as in git, each user should have their own copy which can be easily synchronized with the others, though it should be possible to not have a copy of every project on every machine. Since trac uses multiple instances for multiple projects anyway, I was considering using that, but I neither know how well it adapts to simply putting its database under git (which would be the easiest way to handle the distribution, git being used anyway), nor does it include all of Redmine's features. After checking http://www.wikimatrix.org for wikis with an integrated tracking system and RCS support, and filtering out seemingly stale projects, the choices basically boil down to Foswiki, TWiki and Ikiwiki. The latter doesn't seem to offer as many usability features, and in the TWiki vs Foswiki issue I tend towards the latter. Finally, there is Fossil, which starts from the other end by attempting to replace git entirely and tracking itself. I am however not too comfortable with the thought of replacing git, and Fossil's non-SCM features don't seem to be as developed. Now, before I invest too much time when someone else might already have tried this, I basically have two questions: Are there crucial features of project management software like Redmine that Foswiki does not provide, even with all the extensions available? How would one set Foswiki up to use git instead of the Perl RcsLite?

  • Apache Server-Side Includes Refuse to Work (Tried everything in the docs but still no joy)

    - by raindog308
    Trying to get Apache server-side includes to work. Really simple: I just want to include a footer on each page. Apache 2.2:

        # ./httpd -v
        Server version: Apache/2.2.21 (Unix)
        Server built: Dec 4 2011 18:24:53
        Cpanel::Easy::Apache v3.7.2 rev9999

    mod_include is compiled in:

        # /usr/local/apache/bin/httpd -l | grep mod_include
        mod_include.c

    And it's in httpd.conf:

        # grep shtml httpd.conf
        AddType text/html .shtml
        DirectoryIndex index.html.var index.htm index.html index.shtml index.xhtml index.wml index.perl index.pl index.plx index.ppl index.cgi index.jsp index.js index.jp index.php4 index.php3 index.php index.phtml default.htm default.html home.htm index.php5 Default.html Default.htm home.html
        AddHandler server-parsed .shtml
        AddType text/html .shtml

    In the web directory I created a .htaccess with:

        Options +Includes

    And then in the document, I have:

        <h1>next should be the include</h1>
        <!--#include virtual="/footer.html" -->
        <h1>include done</h1>

    And I see nothing in between those headers. Tried file=, also with/without absolute path. Is there something else I'm missing? I see the same thing on another unrelated server (more or less stock CentOS 6), so I suspect the problem is between keyboard and chair...
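
    For comparison, a minimal known-good SSI setup for Apache 2.2 looks like the sketch below (the directory path is illustrative). One classic silent failure matching these symptoms: if AllowOverride is None for the directory, the .htaccess file is never read at all, so its Options +Includes never takes effect and mod_include quietly serves the file unparsed:

        # httpd.conf (or vhost) - minimal SSI enablement for Apache 2.2
        <Directory "/usr/local/apache/htdocs">
            Options +Includes
            AllowOverride Options
        </Directory>

        AddType text/html .shtml
        # Filter-based 2.x equivalent of "AddHandler server-parsed .shtml":
        AddOutputFilter INCLUDES .shtml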

  • Choosing the right language for the job

    - by Ampt
    I'm currently working for a company on an engineering team of about 5-6 people and have been given the job of heading up the redesign of an embedded system tester. We've decided on the general requirements and attributes that would be desirable in the system, and now I have to decide on a language to use, or at the very least come up with a list of languages with pros and cons to present to the team. The general idea of the project: we currently have a tester written in C++, which was never designed to be a tester but has instead evolved into one over the course of 3-4 years out of need. Writing tests for a new product requires modifying the 'framework' and writing code that is completely non-human-readable and unintuitive, due to the way the system was originally designed. We've decided that the time to modify this tester for each new product we want to test has become too high, and we want to partially re-write the system so that we can program the actual tests in a scripting language, which would then use the modified C++ framework on the back end to test the actual systems. The C++ framework would be responsible for doing all the actual work, and the scripting language would just integrate with it to tell the framework what to do. Never having programmed in a scripting language (we program embedded systems), I've run into a wall: I have no experience with any of the languages we could possibly use, but must somehow give pros and cons of each so that we can choose the best one for the job. Currently my short list of possibilities includes Python, Tcl, Lua, and Perl. My question is this: how can a person evaluate a language that he or she has never used before? What criteria are good indicators of a language's potential usability on a project? While helpful suggestions for my particular case are appreciated, I feel that this is a good skill to possess and would like to be able to apply it to many different projects.
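
    For a feel of the end state, here is a hypothetical test script in Python; every name below is invented, standing in for whatever binding the C++ framework would expose (e.g. generated with SWIG or wrapped through a thin C API):

        # Stand-in for the C++-backed binding; the real class would call the framework.
        class TestFramework:
            def power_on(self): pass
            def power_off(self): pass
            def read_voltage(self, rail):
                return {"3V3": 3.31, "1V8": 1.79}[rail]   # canned values for the sketch

        def test_voltage_rails(dut):
            dut.power_on()
            for rail, expected in (("3V3", 3.3), ("1V8", 1.8)):
                measured = dut.read_voltage(rail)
                assert abs(measured - expected) < 0.05, "%s out of spec: %s" % (rail, measured)
            dut.power_off()

        test_voltage_rails(TestFramework())
        print("all rail tests passed")

    One evaluation criterion this suggests: whichever of Python, Tcl, Lua or Perl makes that kind of file easiest for the whole team to read and write is a strong signal, independent of benchmark numbers.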

  • Is it wise to ask about design decisions made on a product during an interview?

    - by Desolate Planet
    I've been thinking about interview questions lately, reflecting on bad interview experiences I've had in the past. One of particular note is when I asked the interviewer why the team chose Spring over EJB3 in their product. The interviewer pretty much tore my face off, yelling "Because Spring is not the be-all and end-all of Java software development, do you want this job or not?" In response, I told him that this probably wasn't the job for me and I walked out of the interview. He had told me at the start of the interview that they had high staff turnover and that the product had gone from Modula-3 to Perl to Java; then, after one technical question, he went up in flames. It seemed obvious to me that he was toxic to the company with that kind of attitude. Question: Is it a good idea to probe on architectural choices taken in an interview? If not, why? From my own point of view, an interview is a two-way process. If the interviewers are testing me on my technical skills, I've got every right to ask them the same kind of questions to 1) figure out what their mindset and attitude towards developing software solutions are, and 2) figure out whether they are in line with how I would approach problems of that kind. It's very possible that the interviewer who got angry was a bad interviewer and forgot that an interview is a two-way process. If I had been asked this, I would simply have said something along the lines of wanting to leverage the container more, but I certainly wouldn't have tried to put him in a state of meek capitulation. The interviewer in question was the lead developer on the team.

  • Software for video subscription service

    - by Clinton Blackmore
    I'd like to sell instructional videos over the web. Primarily, I'd like users to subscribe to the site and get access to the videos over the internet. Secondarily, I might sell DVDs for those who have poor internet connections or would like a physical copy, and possibly eBooks and the like in the future. Regarding the subscriptions: I'd like a system that automatically sends out e-mails when it is time to renew; I'd like to be able to offer free trials; and users without a free trial or subscription should not be able to access the content. Incidentally, I plan to host the videos on my current web host and move them to a CDN when volume (and capital) makes this a good idea. While I have no intention of going crazy with DRM, it seems expedient not to link directly to the files. How can I link to them indirectly? It would also be nice to support multiple payment processors; specifically, I'd like to avoid a PayPal-only approach. Are there any web applications (or plugins) you'd recommend for something like this? While I've set up and administered several web technologies, I've never done anything with e-commerce. I see there are possibilities like osCommerce, one friend recommends WordPress with plugins, and it really appears that for any given CMS you can graft on components like this, although I imagine that not all are created equal. As I'm not tied to a particular web application (and open source software that can run on a LAMP [p = perl, python, php] stack is preferable), I'd like to make a good choice at the beginning.
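
    On the "not linking directly" point: a common pattern is to serve files only through URLs carrying an expiring HMAC signature, so the real file location is never exposed and old links die on their own. A stdlib-only Python sketch of the idea (the secret, URL layout and TTL are placeholders; the /stream/ handler would call verify() before sending any bytes):

        import hashlib, hmac, time

        SECRET = b"change-me"   # placeholder; store outside the web root

        def sign_url(video_id, ttl=3600):
            expires = int(time.time()) + ttl
            msg = ("%s:%d" % (video_id, expires)).encode()
            sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
            return "/stream/%s?expires=%d&sig=%s" % (video_id, expires, sig)

        def verify(video_id, expires, sig):
            if int(expires) < time.time():
                return False    # link has expired
            msg = ("%s:%s" % (video_id, expires)).encode()
            good = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
            return hmac.compare_digest(good, sig)   # constant-time comparison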

  • rkhunter: what is the right way to handle warnings?

    - by zuba
    I googled a bit and checked out the first two links it found:

        http://www.skullbox.net/rkhunter.php
        http://www.techerator.com/2011/07/how-to-detect-rootkits-in-linux-with-rkhunter/

    They don't mention what I should do about warnings such as these:

        Warning: The command '/bin/which' has been replaced by a script: /bin/which: POSIX shell script text executable
        Warning: The command '/usr/sbin/adduser' has been replaced by a script: /usr/sbin/adduser: a /usr/bin/perl script text executable
        Warning: The command '/usr/bin/ldd' has been replaced by a script: /usr/bin/ldd: Bourne-Again shell script text executable
        Warning: The file properties have changed:
        File: /usr/bin/lynx
        Current hash: 95e81c36428c9d955e8915a7b551b1ffed2c3f28
        Stored hash : a46af7e4154a96d926a0f32790181eabf02c60a4

    Q1: Is there a more extended HowTo that explains how to deal with the different kinds of warnings?

    And the second question: were my actions sufficient to resolve these warnings?

    a) Find the package which contains the suspicious file, e.g. debianutils for the file /bin/which:

        ~ > dpkg -S /bin/which
        debianutils: /bin/which

    b) Check the debianutils package checksums:

        ~ > debsums debianutils
        /bin/run-parts OK
        /bin/tempfile OK
        /bin/which OK
        /sbin/installkernel OK
        /usr/bin/savelog OK
        /usr/sbin/add-shell OK
        /usr/sbin/remove-shell OK
        /usr/share/man/man1/which.1.gz OK
        /usr/share/man/man1/tempfile.1.gz OK
        /usr/share/man/man8/savelog.8.gz OK
        /usr/share/man/man8/add-shell.8.gz OK
        /usr/share/man/man8/remove-shell.8.gz OK
        /usr/share/man/man8/run-parts.8.gz OK
        /usr/share/man/man8/installkernel.8.gz OK
        /usr/share/man/fr/man1/which.1.gz OK
        /usr/share/man/fr/man1/tempfile.1.gz OK
        /usr/share/man/fr/man8/remove-shell.8.gz OK
        /usr/share/man/fr/man8/run-parts.8.gz OK
        /usr/share/man/fr/man8/savelog.8.gz OK
        /usr/share/man/fr/man8/add-shell.8.gz OK
        /usr/share/man/fr/man8/installkernel.8.gz OK
        /usr/share/doc/debianutils/copyright OK
        /usr/share/doc/debianutils/changelog.gz OK
        /usr/share/doc/debianutils/README.shells.gz OK
        /usr/share/debianutils/shells OK

    c) Relax about /bin/which, as I see:

        /bin/which OK

    d) Whitelist the file /bin/which in /etc/rkhunter.conf:

        SCRIPTWHITELIST="/bin/which"

    e) For warnings like the one for /usr/bin/lynx, update the stored checksum with:

        rkhunter --propupd /usr/bin/lynx.cur

    Q2: Am I resolving such warnings the right way?
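
    Steps a) and b) can be rolled into one quick check; a minimal sketch, assuming a Debian/Ubuntu system with debsums installed:

        #!/bin/sh
        # For a file rkhunter warns about: find the owning package, then
        # verify that file's checksum against the package database.
        f=/bin/which
        pkg=$(dpkg -S "$f" | cut -d: -f1)
        echo "$f belongs to $pkg"
        debsums "$pkg" | grep -F "$f"    # "OK" means the file matches the package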

  • Unit Tests as a learning tool - a good idea?

    - by Ekkehard.Horner
    I'm interested in ways and means of learning (a) programming language(s) efficiently. I believe that using unit test concepts and infrastructure early in that process is a good thing, even better than starting with "Hello world". Why: To write a decent program even for a toy/restricted problem in a new language, you have to master many heterogeneous concepts (control flow, variables, IO, ...), and you are tempted to glance over details just to get your program to work. Putting (your understanding of) the facts about the new language into assertions with good descriptions (= success messages) enforces thinking things through, clearness, and precision. Grouping topics and adding assertions to such groups is much easier than incorporating features from chapter 2 of your "Learning X" book into your chapter 1 program. Why not: 'Real' unit tests are meant to output "1234 tests ok; 1 failure: saveWorld() chokes on negative input"; 'didactic' unit tests should output relevant facts about the new language, like:

        perl6 10-string.t
        # ### p5chop ...
        ok 13 - p5chop( "cbä" ) returns "ä"
        ok 14 - after that, victim is changed to "cb"
        # ### (p6) chop ...
        ok 27 - (p6) chop( "cbä" ) returns chopped copy: "cb"
        ok 18 - after that, victim is unchanged: "cbä"
        # ### chomp ...

    So (mis?)using unit tests may be counterproductive: practicing, while learning, actions you wouldn't use professionally. How: Writing 'didactic' unit tests in languages with lightweight testing systems (Perl 5/6) is easy; (mis?)using more elaborate systems (JUnit, CppUnit) may not be worth the effort or may not suit a person just starting with a new language. So: Is using unit tests as a learning tool a bad idea? Can the unit test tool(s) of your favourite language(s) be used didactically? Should implementation details (eventually) be discussed here or over at stackoverflow.com?
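
    The same 'didactic test' idea works with Python's stdlib unittest; a small sketch for comparison with the Perl 6 output above, where each test name states the language fact being pinned down:

        import unittest

        class StringFacts(unittest.TestCase):
            """Each assertion documents a fact about Python strings."""

            def test_strip_returns_copy_and_leaves_victim_unchanged(self):
                victim = "  cbä  "
                self.assertEqual(victim.strip(), "cbä")   # stripped copy
                self.assertEqual(victim, "  cbä  ")       # original untouched

            def test_slicing_past_the_end_does_not_raise(self):
                self.assertEqual("cb"[0:99], "cb")

        if __name__ == "__main__":
            unittest.main(verbosity=2)   # verbose mode prints each fact as it passes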

  • Is Movable Type among the most secure PHP blogs? How secure are the various PHP blog applications?

    - by user6025
    Basically I'm trying to find a blog platform for a website, and security is the highest priority in our case. We don't need any features that I would imagine are special. WordPress was our first idea, but its reputation precedes it, and though it may have cleaned up its act lately, I'm not seeing much solid evidence. I get the impression that Movable Type (at least the Perl version) has a much better reputation for security than WordPress (historically at least). I'm not sure I want to take a chance with WordPress at this point, but is there some objective source I can go to to back up (or counter) the notion that MT is at least among the best? Secunia doesn't recommend using their stats for comparisons, and securityfocus.com doesn't have stats at all that I can see. Searching http://web.nvd.nist.gov makes MT look way better than WP (at least in 2007), but this site was referenced by MT's own page boasting about their security, so I don't know how relevant it is or how seriously people take it. Any suggestions on sites where I could/should make a somewhat objective comparison?

  • Formatting PHP: what works more efficiently?

    - by JamesM-SiteGen
    Hello fellow programmers, I was just wondering what makes PHP run faster. I have a few habits I always follow, but they only improve how well I can read the code; what about the interpreter? Should I include the curly braces when there is only one statement to run?

        if(...){
          echo "test";
        }

        # Or..

        if(...)
          echo "test";

    Which should be used? I have also found http://beta.phpformatter.com/ and I find the following settings to be good, but are they?

        Indentation:
          Indentation style: {K&R (One true brace style)}
          Indent with: {Tabs}
          Starting indentation: [1]
          Indentation: [1]
        Common:
          [x] Remove all comments
          [x] Remove empty lines
          [x] Align assignments statements nicely
          [ ] Put a comment with the condition after if, while, for, foreach, declare and catch statements
        Improvement:
          [x] Remove lines with just a semicolon (;)
          [x] Make normal comments (//) from perl comments (#)
          [x] Make long opening tag (<?php) from short one (<?)
        Brackets:
          [x] Space inside brackets- ( )
          [x] Space inside empty brackets- ( )
          [x] Space inside block brackets- [ ]
          [x] Space inside empty block brackets- [ ]

    Tiny var names: often I go through my code and change $var1 to $a, $var2 to $b and so on. I do include comments at the start of the file to show me what each letter means. Final note: so am I doing the right thing with the curly braces and the settings? Are there any great tips that help it run faster?
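
    Braces are purely a parse-time detail; both forms should compile to the same opcodes, so any runtime difference ought to be measurement noise. A quick sketch to convince yourself (the timings printed are illustrative, not a promise):

        <?php
        $start = microtime(true);
        for ($i = 0; $i < 1000000; $i++) { if ($i % 2) { $x = 1; } }
        $withBraces = microtime(true) - $start;

        $start = microtime(true);
        for ($i = 0; $i < 1000000; $i++) if ($i % 2) $x = 1;
        $noBraces = microtime(true) - $start;

        printf("with braces: %.4fs, without: %.4fs\n", $withBraces, $noBraces);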

  • What does your Lisp workflow look like?

    - by Duncan Bayne
    I'm learning Lisp at the moment, coming from a language progression that is Locomotive BASIC - Z80 Assembler - Pascal - C - Perl - C# - Ruby. My approach is to simultaneously: write a simple web-scraper using SBCL, QuickLisp, closure-html, and drakma; and watch the SICP lectures. I think this is working well; I'm developing good 'Lisp goggles', in that I can now read Lisp reasonably easily. I'm also getting a feel for how the Lisp ecosystem works, e.g. Quicklisp for dependencies. What I'm really missing, though, is a sense of how a seasoned Lisper actually works. When I'm coding for .NET, I have Visual Studio set up with ReSharper and VisualSVN. I write tests, I implement, I refactor, I commit. Then, when I've done enough of that to complete a story, I write some AUATs. Then I kick off a release build on TeamCity to push the new functionality out to the customer for testing and, hopefully, approval. If it's an app that needs an installer, I use either WiX or InnoSetup, obviously building the installer through the CI system. So, my question is: as an experienced Lisper, what does your workflow look like? Do you work mostly in the REPL, or in the editor? How do you do unit tests? Continuous integration? Packaging and deployment? When you sit down at your desk, steaming mug of coffee to one side and a framed photo of John McCarthy to the other, what is it that you do? Currently, I feel like I am getting to grips with Lisp coding, but not Lisp development...

  • Netcat I/O enhancements

    - by user13277689
    When Netcat was integrated into OpenSolaris, it was already clear that a couple of enhancements would be needed. The biggest set of changes made after Solaris 11 Express was released brings various I/O enhancements to the netcat shipped with Solaris 11. Also, since Solaris 11, the netcat package is installed by default in all distribution forms (live CD, text install, ...). Now, let's take a look at the new functionality:

        /usr/bin/netcat : alternative program name (symlink)
        -b bufsize      : I/O buffer size
        -E              : use exclusive bind for the listening socket
        -e program      : program to execute
        -F              : no network close upon EOF on stdin
        -i timeout      : extension of timeout specification
        -L timeout      : linger on close timeout
        -l -p port addr : previously not allowed usage
        -m byte_count   : quit after receiving byte_count bytes
        -N file         : pattern for UDP scanning
        -I bufsize      : size of input socket buffer
        -O bufsize      : size of output socket buffer
        -R redir_spec   : port redirection (addr/port[/{tcp,udp}] syntax of redir_spec)
        -Z              : bypass zone boundaries
        -q timeout      : timeout after EOF on stdin

    Obviously, the Swiss army knife of networking tools just got a bit thicker. While the options are pretty self-explanatory by themselves, their combination with other options, the context of use, or boundary values of the option arguments make it possible to construct small but powerful tools. For example: the port redirector allows you to convert a TCP stream to UDP datagrams; the buffer size specification makes it possible to send one-byte TCP segments or to produce IP fragments easily; the socket linger option can be used to produce TCP RST segments by setting the timeout to 0; and the execute option makes it possible to simulate TCP/UDP servers or clients with a shell/Python/Perl/whatever script. If you find some other helpful uses, please share them via the comments. The manual page nc(1) contains more details, along with examples of how to use some of these new options.
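
    To make that concrete, two hypothetical one-liners built from the option descriptions above; the exact flag syntax is assumed from this post, so check nc(1) on Solaris 11 before relying on them:

        # Tear the connection down with a TCP RST instead of a FIN,
        # by setting the close linger timeout to 0 (-L):
        printf 'QUIT\r\n' | nc -L 0 mail.example.com 25

        # Stop after the first 512 bytes of whatever the server sends (-m):
        nc -m 512 www.example.com 80 < /dev/null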

  • How do I create my own programming language and a compiler for it

    - by Dave
    I am well-versed in programming and have come across languages including BASIC, FORTRAN, COBOL, LISP, LOGO, Java, C++, C, MATLAB, Mathematica, Python, Ruby, Perl, JavaScript, Assembly and so on. I can't understand how people create programming languages and devise compilers for them. I also couldn't understand how people create OSes like Windows, Mac, UNIX, DOS and so on. Another thing that is mysterious to me is how people create libraries like OpenGL, OpenCL, OpenCV, Cocoa, MFC and so on. The last thing I am unable to figure out is how scientists devise an assembly language and an assembler for a microprocessor. I would really like to learn all of this, and I am 15 years old. I have always wanted to be a computer scientist, someone like Babbage, Turing, Shannon, or Dennis Ritchie. I have already read Aho's compiler design book and Tanenbaum's OS concepts book, and they only discuss concepts and code at a high level. They don't go into the details and nuances of how to devise a compiler or operating system. I want a concrete understanding so that I can create one myself, not just an understanding of what a thread, semaphore, process, or parser is. I asked my brother about all this. He is an SB student in EECS at MIT and hasn't got a clue how to actually create all this stuff in the real world. All he knows is an understanding of compiler design and OS concepts like the ones you have mentioned (i.e. threads, synchronization, concurrency, memory management, lexical analysis, intermediate code generation and so on).
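
    For a first concrete taste of "creating a language": the front end of every compiler starts with a tokenizer and a parser. A minimal sketch in Python for arithmetic expressions (a recursive-descent interpreter with integer division and no error handling):

        import re

        TOKEN = re.compile(r"\s*(?:(\d+)|(.))")   # integers or single-char operators

        def tokenize(src):
            for num, op in TOKEN.findall(src):
                yield ("num", int(num)) if num else ("op", op)

        def evaluate(tokens):
            """expr := term (('+'|'-') term)* ; term := NUM (('*'|'/') NUM)*"""
            toks = list(tokens) + [("end", None)]
            pos = [0]
            def next_tok():
                pos[0] += 1
                return toks[pos[0] - 1]
            def term():
                val = next_tok()[1]
                while toks[pos[0]] in (("op", "*"), ("op", "/")):
                    op, rhs = next_tok()[1], next_tok()[1]
                    val = val * rhs if op == "*" else val // rhs
                return val
            def expr():
                val = term()
                while toks[pos[0]] in (("op", "+"), ("op", "-")):
                    op, rhs = next_tok()[1], term()
                    val = val + rhs if op == "+" else val - rhs
                return val
            return expr()

        print(evaluate(tokenize("2+3*4-6/2")))   # 11: operator precedence falls out of the grammar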

  • Tried teaching myself to program before college, accidentally overwhelmed myself, tips?

    - by Gunnar Keith
    I'm sixteen, I'm deeply interested in programming, and I'm currently taking IT classes during my mornings in high school. Last year, I tried teaching myself to code. It was quite exciting, but all I did was watch TheNewBoston's videos on YouTube for Python. After his tutorials, I just did research and made some CMD programs, and that's it. After that, I got cocky and got my feet wet in many other languages: Java, C++, C#, Perl, Ruby... and it overwhelmed me, which made coding less fun. I want to go to college for a 2-year programming course, and I want to make writing code my profession. How do you recommend I attack re-learning it all? Start with Python? Not even try? Also, I'm not 100% on math, but I'm good friends with a lot of programmers who say they suck at math but manage to code just fine. I'm not looking for negative feedback; I just want a proper head start on things before college.

  • KVM not installed?

    - by NJRandy
    When I run virt-manager and click the icon to create a new virtual machine, I get an error that KVM is not installed or is not loaded. I use Ubuntu GNOME 14.04. All qemu packages are version 2.0.0+dfsg-2ubuntu1 (qemu-kvm and many other qemu packages installed). libvirt packages: 1.2.2-0ubuntu13.1 (libvirt0, libvirt-bin, libvirt-doc, python-libvirt); virt-manager 0.9.5-1ubuntu3. When I open a terminal and enter lsmod | grep kvm, I get nothing back: no lines showing kvm or kvm_amd, and no error of any kind. Hardware: Tyan S2877 with dual Opteron 285s. I have the latest BIOS and don't see any setting in there to turn virtualization on or off. When I run sudo apt-get -s install qemu-kvm, here are the results:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        qemu-kvm is already the newest version.
        The following packages were automatically installed and are no longer required:
          kde-l10n-engb libgtk2-gladexml-perl libqt4-test libvncserver0
        Use 'apt-get autoremove' to remove them.
        0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.

    @jobin: the problem was my hardware. I just bought it a few months ago, although obviously used. LOL
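
    A quick sanity check for this class of problem: KVM needs the CPU to advertise hardware virtualization (the 'svm' flag for AMD-V, 'vmx' for Intel VT-x), and a count of 0 below matches the asker's conclusion that these Opterons simply predate AMD-V:

        # 0 = no hardware virtualization; the kvm_amd/kvm_intel module cannot load
        egrep -c '(svm|vmx)' /proc/cpuinfo

        # If the flag is present but the module still is not loaded, try:
        sudo modprobe kvm_amd    # or kvm_intel on Intel CPUs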

  • Script language native extensions - avoiding name collisions and cluttering others' namespace

    - by H2CO3
    I have developed a small scripting language and I've just started writing the very first native library bindings. This is practically the first time I'm writing a native extension to a script language, so I've run into a conceptual issue. I'd like to write glue code for popular libraries so that they can be used from this language, and because of the design of the engine I've written, this is achieved using an array of C structs describing the function name visible by the virtual machine, along with a function pointer. Thus, a native binding is really just a global array variable, and now I must obviously give it a (preferably good) name. In C, it's idiomatic to put one's own functions in a "namespace" by prepending a custom prefix to function names, as in myscript_parse_source() or myscript_run_bytecode(). The custom name shall ideally describe the name of the library which it is part of. Here arises the confusion. Let's say I'm writing a binding for libcURL. In this case, it seems reasonable to call my extension library curl_myscript_binding, like this: MYSCRIPT_API const MyScriptExtFunc curl_myscript_lib[10]; But now this collides with the curl namespace. (I have even thought about calling it curlmyscript_lib but unfortunately, libcURL does not exclusively use the curl_ prefix -- the public APIs contain macros like CURLCODE_* and CURLOPT_*, so I assume this would clutter the namespace as well.) Another option would be to declare it as myscript_curl_lib, but that's good only as long as I'm the only one who writes bindings (since I know what I am doing with my namespace). As soon as other contributors start to add their own native bindings, they now clutter the myscript namespace. (I've done some research, and it seems that for example the Perl cURL binding follows this pattern. Not sure what I should think about that...) So how do you suggest I name my variables? Are there any general guidelines that should be followed?
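
    A minimal C sketch of the myscript_<library>_ variant being weighed; every type and name here is hypothetical, standing in for the engine's real binding structures:

        #include <stddef.h>

        typedef int (*MyScriptNativeFn)(void *vm);   /* assumed VM handle type */

        typedef struct {
            const char      *name;   /* function name visible to the VM */
            MyScriptNativeFn fn;
        } MyScriptExtFunc;

        static int myscript_curl_get(void *vm) { (void)vm; /* glue code */ return 0; }

        /* 'myscript' owns the C prefix; 'curl' scopes it per bound library,
         * so contributors never collide with libcurl's own curl_/CURL* names. */
        const MyScriptExtFunc myscript_ext_curl_lib[] = {
            { "curl_get", myscript_curl_get },
            { NULL, NULL }                     /* sentinel */
        };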

  • ODI 11g – Faster Files

    - by David Allan
    Deep in the trenches of ODI development, I raised my head above the parapet to read a few odds and ends and wondered: why don't they know this? Such as this article here: in the past, customers (see forum) were told to use a staging route, which has a big overhead for large files. This KM is an example of ODI's great extensibility. It's quite simple, just a new KM that improves the out-of-the-box experience (just build the mapping and the appropriate KM is used) and improves out-of-the-box performance for file-to-file data movement. This improvement to the out-of-the-box handling of File to File data integration cases (from the 11.1.1.5.2 companion CD onwards) dramatically speeds up file integration. In the past I had seen consultants write Perl versions of the file-to-file integration case; now Oracle ships this KM to fill the gap. You can find the documentation for the IKM here. The KM uses pure Java to perform the integration, using java.io classes to read and write the file in a pipe. It uses Java threading to super-charge the file processing, and it can process several source files at once when the datastore's resource name contains a wildcard. This is a big step for regular file processing on the way to super-charging big data files using Hadoop; the KM works with the lightweight agent and regular filesystems. In my design below, transforming a bunch of files, the IKM File to File (Java) knowledge module was assigned by default. I pointed the KM at my JDK (since the KM generates and compiles Java), and I also increased the thread count to 2 to take advantage of my 2 processors. For my illustration I transformed (you can also filter if desired) and moved about 1.3 GB with 2 threads in 140 seconds (with a single thread it took 220 seconds); by no means was this on any supercomputer, by the way. The great thing here is that it worked well out of the box, from design to execution, without any funky configuration, and, a big plus, it was much faster than before. So if you are doing any file-to-file transformations, check it out!

  • Reaching Intermediate Programming Status

    - by George Stocker
    I am a software engineer who has held positions programming in VBA (though I dare not consider that 'real' experience, as it was trial and error!), Perl with CGI, C#, and ASP.NET. The latter two are post-undergraduate, with my entrance into the 'real world'. I'm 2 years out of college and have had 5 years of experience (total) across the languages I've mentioned. However, when it comes to my resume, I can only put 2 years down for C# and less than a year for ASP.NET. I feel like I know C#, but I still have to spend time asking 'What does this method do?', whereas some of the more senior engineers can immediately say "Oh, method X does this" without ever having looked at that method before. So I know empirically that there's a gulf there, but I'm not exactly sure how to bridge it. I've started programming in Project Euler and picked up a book on design patterns, but I still feel like I spend each day treading water instead of moving forward. That isn't to say that I don't feel like I've made progress; it just means that as far as I've come, I still see the mountain top way off in the distance. My question is this: how did you overcome this plateau? How long did it take you? What methods can you suggest to assist me? I've read through Code Complete, The Mythical Man-Month, and CLR via C#, 2nd edition; my question is: what do I do now? Edit: I just found this question on projects for an intermediate-level programmer. I think it adds to the discussion (though it does not supplant my question), so I'm adding it to the question as "For More Information".

  • sqlplus: Running "set lines" and "set pagesize" automatically

    - by katsumii
    This is a follow-up to my previous entry, "Using the full tty real estate with sqlplus" (INOUE Katsumi @ Tokyo). 'rlwrap' is widely used to add a history function and command line editing to 'sqlplus'. Here's another, but again kludgy, implementation. First, the alias:

        alias sqlplus="rlwrap -z ~/sqlplus.filter sqlplus"

    And this is the content of the filter:

        #!/usr/bin/env perl
        use lib ($ENV{RLWRAP_FILTERDIR} or ".");
        use RlwrapFilter;
        use POSIX qw(:signal_h);
        use strict;

        my $filter = new RlwrapFilter;
        $filter -> prompt_handler(\&prompt);
        sigprocmask(SIG_UNBLOCK, POSIX::SigSet->new(28));
        $SIG{WINCH} = 'winchHandler';
        $filter -> run;

        sub winchHandler {
            $filter -> input_handler(\&input);
            sigprocmask(SIG_UNBLOCK, POSIX::SigSet->new(28));
            $SIG{WINCH} = 'winchHandler';
            $filter -> run;
        }

        sub input {
            $filter -> input_handler(undef);
            return `resize |sed -n "1s/COLUMNS=/set linesize /p;2s/LINES=/set pagesize /p"` . $_;
        }

        sub prompt {
            if ($_ =~ "SQL> ") {
                $filter -> input_handler(\&input);
                $filter -> prompt_handler(undef);
            }
            return $_;
        }

    I hope I can compare these two implementations after more testing and feedback.

  • Open Source Projects for Beginning Coders?

    - by MattDMo
    After working as a molecular biologist at the bench for many years, I lost my job last year and am thinking about a career change. I've been using open-source software and doing Linux system administration since the mid 90s, and have written/improved some small shell/Perl/PHP scripts, and am very comfortable building from source, but never progressed to creating non-trivial programs de novo. I want to move to actually learning real programming skills and contributing back to the community, with the possible eventual goal of getting into bioinformatics as a career in the future. I'm a stay-at-home dad now, so I have some time on my hands. I've done a lot of research on languages, and have settled on Python as my major focus for now. I'm set up on GitHub, but haven't forked anything yet. I've looked around OpenHatch some, but nothing really grabbed me. I've heard the advice to work on what you use/love, but that category is so broad that I'm having trouble finding any one thing to get started on. What are your suggestions for getting started? How do you pick a project that will welcome your (possibly amateurish) help? With a fairly limited skill set, how do you find a request that you can handle? What are common newbie mistakes to avoid? Any other advice?

  • How to create Java zip archives with a max file size limit [closed]

    - by Marci Casvan
    I need to write an algorithm in Java (for an Android app) to read a folder containing more folders, each of those containing images and audio files, so the structure is: mainDir/subfolders/myFile1.jpg. It must be in Java; something like a Perl script is not an option. The limit should preferably apply to the compressed archive, in order to squeeze as many files as possible into each zip before mailing it. Just a normal zip (no jar). My problem is that I need to limit the size of each archive to 16 MB and, at runtime, create as many archives as needed to contain all the files from my main mainDir folder. I tried several examples from the net and read the Java documentation, but I can't manage to understand it all and put it together the way I need. Has someone done this before, or does anyone have a link or an example for me? I resolved reading the files with a recursive method, but I can't write the logic for the zip creation. I'm open to suggestions or, better, a working example. EDIT: "FileNotFoundException (no such file or directory)" was my initial post at Stack Overflow. I got an answer to it, but I can't set the size of the ZipEntry, the logic doesn't work, and when extracting my files from the zip I get a "compression method not supported" error.
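
    A minimal sketch of the rollover logic in plain java.util.zip: count the bytes actually written to the current archive and start a new one before the cap would be crossed. The archive names and slack margin are assumptions, and the uncompressed size is used as a rough upper bound for the next entry, which is reasonable for already-compressed media like JPEG and MP3:

        import java.io.FilterOutputStream;
        import java.io.IOException;
        import java.io.OutputStream;
        import java.nio.file.*;
        import java.util.List;
        import java.util.stream.Collectors;
        import java.util.stream.Stream;
        import java.util.zip.ZipEntry;
        import java.util.zip.ZipOutputStream;

        public class SplitZipper {
            static final long LIMIT = 16L * 1024 * 1024;  // 16 MB cap per archive
            static final long SLACK = 64L * 1024;         // room for the central directory

            /** Counts the raw bytes written to the underlying zip file. */
            static class CountingOut extends FilterOutputStream {
                long count;
                CountingOut(OutputStream out) { super(out); }
                @Override public void write(int b) throws IOException { out.write(b); count++; }
                @Override public void write(byte[] b, int off, int len) throws IOException {
                    out.write(b, off, len); count += len;
                }
            }

            public static void main(String[] args) throws IOException {
                Path root = Paths.get(args[0]);
                List<Path> files;
                try (Stream<Path> s = Files.walk(root)) {
                    files = s.filter(Files::isRegularFile).sorted().collect(Collectors.toList());
                }
                int part = 0;
                CountingOut counter = null;
                ZipOutputStream zip = null;
                for (Path f : files) {
                    long size = Files.size(f);
                    // Roll over when the next file could push us past the cap.
                    if (zip == null || counter.count + size + SLACK > LIMIT) {
                        if (zip != null) zip.close();
                        Path archive = Paths.get("archive-" + (++part) + ".zip");
                        counter = new CountingOut(Files.newOutputStream(archive));
                        zip = new ZipOutputStream(counter);
                    }
                    zip.putNextEntry(new ZipEntry(root.relativize(f).toString()));
                    Files.copy(f, zip);          // streams the file into the entry
                    zip.closeEntry();            // flushes this entry's compressed bytes
                }
                if (zip != null) zip.close();
            }
        }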

  • How to sell logistical procedures that require less time to perform but more finesse?

    - by foampile
    I am working with a group where part of the responsibility is managing a certain set of configuration files which, of course, have the same skeleton/structure across different environments but different values (server, user, this setting, that setting, etc.). A pretty classic scenario... The problem is that everyone just goes and modifies the final, environment-specific files, basically repeating the work for every environment. Personally, I am offended at having to perform repeatable, mundane tasks in this day and age when we have the technology to automate it all. So I devised a very simple procedure: abstract the files into templates, stub the environment-specific values with parameters, and then write a simple Perl script that, given a template and an environment matrix with env-specific values for each parameter, produces the final file. This is nothing special, cutting-edge or revolutionary; I am pretty sure that 20 years ago efficient shops did their CM like that. However, it requires that changes be made at the template level and then distributed across the different environments using the script, not made in the final environment-specific files. This is where I am encountering resentment: they feel "comfortable" doing it their old, manual, repetitive way. Personally, I don't have a problem with them working hard rather than smart, but the problem is that when I have to build on top of someone else's changes, I have to merge their changes into my template from a specific file, which takes time and is grueling. So my question is: how do I go about selling my method, which is so much faster, in an environment that is resistant to change and where most things have to be done at the level of the least competent team member?
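
    The poster's script is Perl; here is a minimal Python sketch of the same template-plus-matrix idea, with the file layout (a JSON matrix mapping environment names to parameter values) invented for illustration:

        #!/usr/bin/env python3
        from string import Template
        import json, sys

        def render(template_path, matrix_path, env, out_path):
            # matrix.json, e.g.: {"prod": {"server": "db1", "user": "app"}, "qa": {...}}
            with open(matrix_path) as f:
                matrix = json.load(f)
            with open(template_path) as f:
                tpl = Template(f.read())
            # substitute() raises KeyError for any $param missing from the matrix,
            # so a half-filled config can never sneak out to an environment.
            with open(out_path, "w") as f:
                f.write(tpl.substitute(matrix[env]))

        if __name__ == "__main__":
            render(*sys.argv[1:5])   # template matrix.json envname outfile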

  • Package libxul not found - Kiwix Wikipedia in Ubuntu Precise 12.04

    - by JHOSmAN
    I'm trying to install the Kiwix service, but it needs a library that is not available for Ubuntu 12.04 LTS Precise. I'll leave the log here; if someone could tell me how to install it, I would appreciate it.

        kiwix-0.9# ls
        aclocal.m4  COMPILE       config.sub    COPYING  install-sh  ltmain.sh    missing  static
        AUTHORS     config.guess  configure     depcomp  kiwix       Makefile.am  README
        CHANGELOG   config.log    configure.ac  desktop  Makefile.in  src
        libxul-dev_1.8.1.16+nobinonly-0ubuntu1_all.deb

        root@ubuntu-MM061:/home/ubuntu/Escritorio/kiwix-0.9# ./configure
        checking for a BSD-compatible install... /usr/bin/install -c
        checking whether build environment is sane... yes
        [...]
        checking for strtol... yes
        Package libxul was not found in the pkg-config search path.
        Perhaps you should add the directory containing `libxul.pc'
        to the PKG_CONFIG_PATH environment variable
        No package 'libxul' found
        Package libxul was not found in the pkg-config search path.
        Perhaps you should add the directory containing `libxul.pc'
        to the PKG_CONFIG_PATH environment variable
        No package 'libxul' found
        checking for /stable... no
        checking for "/nsISupports.idl"... no
        configure: error: unable to find nsISupports.idl

        # apt-get install libxul
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package libxul
