Search Results

Search found 3541 results on 142 pages for 'idiomatic perl'.

Page 124/142 | < Previous Page | 120 121 122 123 124 125 126 127 128 129 130 131  | Next Page >

  • How do I create my own programming language and a compiler for it

    - by Dave
    I am thorough with programming and have come across languages including BASIC, FORTRAN, COBOL, LISP, LOGO, Java, C++, C, MATLAB, Mathematica, Python, Ruby, Perl, JavaScript, Assembly and so on. I can't understand how people create programming languages and devise compilers for them. I also can't understand how people create operating systems like Windows, Mac, UNIX, DOS and so on. The other thing that is mysterious to me is how people create libraries like OpenGL, OpenCL, OpenCV, Cocoa, MFC and so on. The last thing I am unable to figure out is how scientists devise an assembly language and an assembler for a microprocessor. I would really like to learn all of this, and I am 15 years old. I have always wanted to be a computer scientist, someone like Babbage, Turing, Shannon, or Dennis Ritchie. I have already read Aho's Compiler Design and Tanenbaum's OS concepts book, and they only discuss concepts and code at a high level. They don't go into the details and nuances of how to devise a compiler or operating system. I want a concrete understanding so that I can create one myself, not just an understanding of what a thread, semaphore, process, or parse is. I asked my brother about all this. He is an SB student in EECS at MIT and hasn't got a clue how to actually create any of this in the real world. All he knows is an understanding of compiler design and OS concepts like the ones you have mentioned (i.e. threads, synchronization, concurrency, memory management, lexical analysis, intermediate code generation and so on).
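
    To make "parsing" concrete for a moment, the front end of a compiler can start as small as this: a toy Perl sketch (an illustration, not taken from either book) that tokenizes an arithmetic expression and parses it by recursive descent, evaluating as it goes. A real compiler would emit a syntax tree or code instead of a number, but the lexer/parser shape is the same.

      #!/usr/bin/env perl
      # Toy compiler front end: a lexer plus a recursive-descent parser/evaluator
      # for expressions such as "2 + 3 * (4 + 5)".
      use strict;
      use warnings;

      my @tokens;

      sub lex {
          my ($src) = @_;
          while ($src =~ /\s*(\d+|[()+*]|\S)/g) {
              my $tok = $1;
              die "unexpected character '$tok'\n" unless $tok =~ /^(?:\d+|[()+*])$/;
              push @tokens, $tok;
          }
      }

      # Grammar: expr -> term ('+' term)*    term -> factor ('*' factor)*
      #          factor -> NUMBER | '(' expr ')'
      sub expr {
          my $value = term();
          while (@tokens and $tokens[0] eq '+') { shift @tokens; $value += term(); }
          return $value;
      }

      sub term {
          my $value = factor();
          while (@tokens and $tokens[0] eq '*') { shift @tokens; $value *= factor(); }
          return $value;
      }

      sub factor {
          my $tok = shift(@tokens) // die "unexpected end of input\n";
          return $tok if $tok =~ /^\d+$/;
          die "expected a number or '('\n" unless $tok eq '(';
          my $value = expr();
          die "expected ')'\n" unless @tokens and shift(@tokens) eq ')';
          return $value;
      }

      lex('2 + 3 * (4 + 5)');
      print expr(), "\n";   # prints 29: the grammar gives '*' higher precedence than '+'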

    Read the article

  • Tried teaching myself to program before college, accidentally overwhelmed myself, tips?

    - by Gunnar Keith
    I'm sixteen, I'm overly interested in programming, and I'm currently taking IT classes during my mornings in high school. Last year, I tried teaching myself to code. It was quite exciting, but all I did was watch TheNewBoston's Python videos on YouTube. After his tutorials, I just did research, made some CMD programs, and that's it. After that, I got cocky and got my feet wet in many other languages: Java, C++, C#, Perl, Ruby... and it overwhelmed me, which made it less fun to code. I want to go to college for a 2-year programming course, and I want to make writing code my profession. But how do you recommend I attack re-learning it all? Start with Python? Don't even try? Also, I'm not 100% on math, but I'm good friends with a lot of programmers who say they suck at math but manage to code just fine. I'm not looking for negative feedback; I just want the proper head start on things before college.

    Read the article

  • KVM not installed?

    - by NJRandy
    When I run virt-manager and click on the icon to create a new virtual machine, I get an error that KVM is not installed or is not loaded. I use Ubuntu GNOME 14.04. All qemu packages are version 2.0.0+dfsg-2ubuntu1 (qemu-kvm and many other qemu packages are installed). The libvirt packages (libvirt0, libvirt-bin, libvirt-doc, python-libvirt) are 1.2.2-0ubuntu13.1, and virt-manager is 0.9.5-1ubuntu3. When I open a terminal and enter lsmod | grep kvm, I get nothing returned: no lines showing kvm or kvm_amd, and no error of any kind. Hardware: Tyan S2877 with dual Opteron 285s. I have the latest BIOS and don't see any setting in there to turn virtualization on or off. When I run sudo apt-get -s install qemu-kvm, here are the results:

      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      qemu-kvm is already the newest version.
      The following packages were automatically installed and are no longer required:
        kde-l10n-engb libgtk2-gladexml-perl libqt4-test libvncserver0
      Use 'apt-get autoremove' to remove them.
      0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.

    @jobin: the problem was my hardware. I just bought it a few months ago, although obviously used LOL
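
    The asker's own conclusion ("the problem was my hardware") points the right way: KVM needs the CPU to advertise hardware virtualization, which shows up as the svm flag (AMD-V) or vmx flag (VT-x) in /proc/cpuinfo, and without it kvm_amd/kvm_intel have nothing to use. A minimal Perl sketch of that check (svm/vmx are the standard flag names, nothing specific to this machine):

      #!/usr/bin/env perl
      # Check /proc/cpuinfo for hardware virtualization flags:
      # svm = AMD-V (used by kvm_amd), vmx = Intel VT-x (used by kvm_intel).
      use strict;
      use warnings;

      open my $cpuinfo, '<', '/proc/cpuinfo' or die "cannot read /proc/cpuinfo: $!";
      my %flags;
      while (<$cpuinfo>) {
          next unless /^flags\s*:\s*(.*)/;
          $flags{$_} = 1 for split ' ', $1;
      }
      close $cpuinfo;

      if ($flags{svm}) {
          print "CPU advertises AMD-V (svm) - kvm_amd should be usable\n";
      } elsif ($flags{vmx}) {
          print "CPU advertises VT-x (vmx) - kvm_intel should be usable\n";
      } else {
          print "No svm/vmx flag - this CPU cannot do KVM hardware virtualization\n";
      }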

    Read the article

  • ODI 11g – Faster Files

    - by David Allan
    Deep in the trenches of ODI development I raised my head above the parapet to read a few odds and ends and then think: why don't they know this? Such as this article here: in the past customers (see forum) were told to use a staging route, which has a big overhead for large files. This KM is an example of the great extensibility capabilities of ODI. It's quite simple, just a new KM that improves the out-of-the-box experience (just build the mapping and the appropriate KM is used) and improves out-of-the-box performance for file-to-file data movement. This improved out-of-the-box handling of File to File data integration cases (from the 11.1.1.5.2 companion CD onwards) dramatically speeds up file integration. In the past I had seen some consultants write Perl versions of the file to file integration case; now Oracle ships this KM to fill the gap. You can find the documentation for the IKM here. The KM uses pure Java to perform the integration, using java.io classes to read and write the file in a pipe, and it uses Java threading to super-charge the file processing; it can process several source files at once when the datastore's resource name contains a wildcard. This is a big step for regular file processing on the way to super-charging big data files using Hadoop: the KM works with the lightweight agent and regular filesystems. So in my design below, transforming a bunch of files, the IKM File to File (Java) knowledge module was assigned by default. I pointed the KM at my JDK (since the KM generates and compiles Java), and I also increased the thread count to 2 to take advantage of my 2 processors. For my illustration I transformed (you can also filter if desired) and moved about 1.3 GB with 2 threads in 140 seconds (with a single thread it took 220 seconds); by no means was this on any supercomputer. The great thing here is that it worked well out of the box from design to execution without any funky configuration, plus (and it's a big plus) it was much faster than before. So if you are doing any file-to-file transformations, check it out!
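
    For context, a hand-rolled mover of the kind the post alludes to ("consultants write Perl versions of the file to file integration case") might look roughly like the sketch below. This is purely illustrative of the old approach; the shipped KM itself is pure Java and multi-threaded.

      #!/usr/bin/env perl
      # Illustrative sketch of a pre-KM file-to-file move: expand a wildcarded
      # resource name and stream every matching source file into one target file.
      use strict;
      use warnings;

      my ($src_pattern, $target) = @ARGV;
      die "usage: $0 '<source glob>' <target file>\n" unless $src_pattern && $target;

      open my $out, '>', $target or die "cannot write $target: $!";
      binmode $out;

      for my $src (glob $src_pattern) {
          open my $in, '<', $src or die "cannot read $src: $!";
          binmode $in;
          my $buf;
          # Copy in 1 MB chunks rather than line by line.
          print {$out} $buf while read($in, $buf, 1 << 20);
          close $in;
      }
      close $out;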

    Read the article

  • Reaching Intermediate Programming Status

    - by George Stocker
    I am a software engineer who's had positions programming in VBA (though I dare not consider that 'real' experience, as it was trial and error!), Perl w/ CGI, C#, and ASP.NET. The latter two are post-undergraduate, with my entrance into the 'real world'. I'm 2 years out of college and have had 5 years of experience (total) across the languages I've mentioned. However, when it comes to my resume, I can only put 2 years down for C# and less than a year down for ASP.NET. I feel like I know C#, but I still have to spend time going 'What does this method do?', whereas some of the more senior engineers can immediately say, 'Oh, Method X does this,' without ever having looked at that method before. So I know empirically that there's a gulf there, but I'm not exactly sure how to bridge it. I've started working through Project Euler and I picked up a book on design patterns, but I still feel like I spend each day treading water instead of moving forward. That isn't to say that I don't feel like I've made progress; it just means that as far as I come each day, I still see the mountain top way off in the distance. My question is this: How did you overcome this plateau? How long did it take you? What methods can you suggest to assist me? I've read through Code Complete, The Mythical Man Month, and CLR via C#, 2nd edition; what do I do now? Edit: I just found this question on projects for an intermediate-level programmer. I think it adds to the discussion (though it does not supplant my question). As such, I'm adding it to the question as a "For More Information".

    Read the article

  • sqlplus: Running "set lines" and "set pagesize" automatically

    - by katsumii
    This is a followup to my previous entry, "Using the full tty real estate with sqlplus" (INOUE Katsumi @ Tokyo). 'rlwrap' is widely used for adding history and command-line editing to 'sqlplus'. Here's another, but again kludgy, implementation. First, this is the alias:

      alias sqlplus="rlwrap -z ~/sqlplus.filter sqlplus"

    And this is the filter file content:

      #!/usr/bin/env perl
      use lib ($ENV{RLWRAP_FILTERDIR} or ".");
      use RlwrapFilter;
      use POSIX qw(:signal_h);
      use strict;

      my $filter = new RlwrapFilter;
      $filter -> prompt_handler(\&prompt);
      sigprocmask(SIG_UNBLOCK, POSIX::SigSet->new(28));
      $SIG{WINCH} = 'winchHandler';
      $filter -> run;

      sub winchHandler {
          $filter -> input_handler(\&input);
          sigprocmask(SIG_UNBLOCK, POSIX::SigSet->new(28));
          $SIG{WINCH} = 'winchHandler';
          $filter -> run;
      }

      sub input {
          $filter -> input_handler(undef);
          return `resize |sed -n "1s/COLUMNS=/set linesize /p;2s/LINES=/set pagesize /p"` . $_;
      }

      sub prompt {
          if ($_ =~ "SQL> ") {
              $filter -> input_handler(\&input);
              $filter -> prompt_handler(undef);
          }
          return $_;
      }

    I hope I can compare these 2 implementations after testing more and getting some feedback.

    Read the article

  • Open Source Projects for Beginning Coders?

    - by MattDMo
    After working as a molecular biologist at the bench for many years, I lost my job last year and am thinking about a career change. I've been using open-source software and doing Linux system administration since the mid 90s, have written/improved some small shell/Perl/PHP scripts, and am very comfortable building from source, but I never progressed to creating non-trivial programs de novo. I want to move on to learning real programming skills and contributing back to the community, with the possible eventual goal of a career in bioinformatics. I'm a stay-at-home dad now, so I have some time on my hands. I've done a lot of research on languages and have settled on Python as my major focus for now. I'm set up on GitHub, but haven't forked anything yet. I've looked around OpenHatch some, but nothing really grabbed me. I've heard the advice to work on what you use/love, but that category is so broad that I'm having trouble finding any one thing to get started on. What are your suggestions for getting started? How do you pick a project that will welcome your (possibly amateurish) help? With a fairly limited skill set, how do you find a request that you can handle? What are common newbie mistakes to avoid? Any other advice?

    Read the article

  • how to create java zip archives with a max file size limit [closed]

    - by Marci Casvan
    I need to write an algorithm in Java (for an Android app) to read a folder containing more folders, each of those containing images and audio files, so the structure is this: mainDir/subfolders/myFile1.jpg. It must be in Java; something like a Perl script is not an option. The archive should preferably be compressed, in order to squeeze in as many files as possible before mailing the zip. Just a normal zip (no jar). My problem is that I need to limit the size of each archive to 16 MB and, at runtime, create as many archives as needed to contain all the files from my main mainDir folder. I tried several examples from the net and I read the Java documentation, but I can't manage to understand it and put it all together the way I need it. Has someone done this before, or does anyone have a link or an example for me? I resolved the reading of the files with a recursive method, but I can't write the logic for the zip creation. I'm open to suggestions or, better, a working example. EDIT: FileNotFoundException (no such file or directory) was my initial post at Stack Overflow. I got an answer to it, but I can't set the size of the ZipEntry, the logic doesn't work, and when extracting my files from the zip I get a 'compression method not supported' error.

    Read the article

  • Package libxul not found - Kiwix Wikipedia in Ubuntu Precise 12.04

    - by JHOSmAN
    I'm trying to install Kiwix, but it needs a library that is not available for Ubuntu 12.04 LTS Precise. I'm leaving the log below; if someone could tell me how to install it, I would appreciate it. The kiwix-0.9 source directory contains the usual autotools files plus a downloaded libxul-dev_1.8.1.16+nobinonly-0ubuntu1_all.deb. Running ./configure works through its checks (compiler, libtool, system headers, pkg-config, perl and so on all pass) until it reaches the xulrunner dependency:

      Package libxul was not found in the pkg-config search path.
      Perhaps you should add the directory containing 'libxul.pc'
      to the PKG_CONFIG_PATH environment variable
      No package 'libxul' found
      checking for /stable... no
      checking for "/nsISupports.idl"... no
      configure: error: unable to find nsISupports.idl

    Trying apt-get install libxul then fails with "E: Unable to locate package libxul".

    Read the article

  • How to sell logistical procedures that require less time to perform but more finesse?

    - by foampile
    I am working with a group where part of the responsibilities is managing a certain set of configuration files which, of course, have the same skeleton/structure across different environments but different values (like server, user, this setting, that setting, etc.). Pretty classic scenario... The problem is that everyone just goes and modifies the final, environment-specific files and basically repeats the work for every environment. Personally, I am offended to have to perform repeatable, mundane tasks in this day and age when we have technologies to automate it all. So I devised a very simple procedure of abstracting the files into templates, stubbing env-specific values out as parameters, and then wrote a simple Perl script that, given a template and an environment matrix with env-specific values for each param, produces the final file. This is nothing special, cutting-edge or revolutionary; I am pretty sure that 20 years ago efficient places did their CM like that. However, it requires that changes are made at the template level and then distributed across the environments using the script, not made in the final environment-specific files. This is where I am encountering resentment, as they feel "comfortable" doing it their old, manual, repeated-labor way. Personally, I don't have a problem with them working hard rather than smart, but the problem is that when I have to build on top of someone else's changes, I have to merge their changes back into my template from a specific file, which takes time and is grueling. So my question is: how do I go about selling my method, which makes things so much faster, in an environment that is resentful to change and where most things have to be done at the level of the least competent team member?
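
    A minimal sketch of the kind of template-expansion script described above. The {{PARAM}} placeholder syntax and the tab-separated environment matrix (one header row of parameter names, one row per environment) are assumptions for illustration, not the poster's actual format.

      #!/usr/bin/env perl
      # Expand a config template for one environment, e.g.:
      #   expand.pl template.conf environments.tsv prod > prod.conf
      use strict;
      use warnings;

      my ($template_file, $matrix_file, $env) = @ARGV;
      die "usage: $0 <template> <matrix.tsv> <environment>\n" unless $env;

      # Load the parameter values for the requested environment.
      open my $mx, '<', $matrix_file or die "cannot read $matrix_file: $!";
      chomp(my $header = <$mx>);
      my @params = split /\t/, $header;          # first column is the environment name
      my %value;
      while (<$mx>) {
          chomp;
          my @cols = split /\t/;
          next unless @cols && $cols[0] eq $env;
          @value{ @params[1 .. $#params] } = @cols[1 .. $#cols];
      }
      close $mx;
      die "environment '$env' not found in $matrix_file\n" unless %value;

      # Substitute {{PARAM}} placeholders and emit the final file on stdout.
      open my $tpl, '<', $template_file or die "cannot read $template_file: $!";
      while (<$tpl>) {
          s/\{\{(\w+)\}\}/ exists $value{$1} ? $value{$1} : die "no value for $1\n" /ge;
          print;
      }
      close $tpl;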

    Read the article

  • How do I get away from PHP in the web industry?

    - by Kurtis
    I'm just looking for some tips on getting away from using PHP for web development. I'm self-employed, but it seems like all of the work I find deals with PHP. I'm not complaining about the work, just the poor choice of a language that is incredibly popular. I'd love to do my web development in Python, Perl, C#, or even a fun and fancy functional language. There's the old saying that you don't tell a carpenter what kind of hammer to use. At the same time, you do tell them what kind of material to build your house out of and how much you're willing to spend. The problem I am running into is that I don't know how to get out of this spiral. I can't just turn down work, because then I wouldn't have any. I really don't want to go work for another company, and even if I did, I'd probably still be stuck using something I don't enjoy. I'm hoping someone has "been there" before and might have some good ideas on how to get out of this situation. Thanks!

    Read the article

  • Picking Core Language For Large Scale Web Platform

    - by ryanzec
    I have worked with PHP and ASP.NET quite a bit and have also played around with a few other languages for web development. I am now at a point where I need to start building a backend platform that will have the ability to support a large set of applications, and I am trying to figure out which language I want to choose as my core language. When I say core language I mean the language that the majority of the backend code is going to be in. This is not to say that other languages won't be used, because my guess is that they will, but I want a large majority of the code (90%-98%) to be in one language. While I see the benefit of using the language that is best for the job, having 15% in PHP, 15% in ASP.NET, 5% in Perl, 10% in Python, 15% in Ruby, etc. seems like a very bad idea to me (not to mention that integrating everything seamlessly would probably add a bit of overhead). If you were going to build a large-scale web platform that needs to support multiple applications from scratch, what would you choose as your core language, and why?

    Read the article

  • Dropping the full-time high-pay gig - I need help choosing a smart path that I can rely on to produce enough to survive comfortably ($2,500 per month)

    - by Jeff V
    I have about 6 years of full-time experience developing web applications and tools. I know Perl, Python, PHP, Ruby, and a good deal of SQL and relational theory. I have never had to choose a self-employed path, as I have always had full-time work or a bank account (credit cards) to support a big project. I'm planning to move out of the country to an area that will not offer local employment, and I need some advice on what to focus on. I want to move in no more than six months; I have enough savings to live for an additional six months, but I would like to conserve it as much as possible. I enjoy taking risks, so I'm not looking for discussion of whether this is a good idea or not. I want advice on the most reliable solution given my skill set. Some paths I'm considering: learn Objective-C and build quality Apple software; develop subscription-based web tools for SEO or other marketing applications; attempt to acquire freelance projects by developing a reputation within open source projects, freelancer.com, and other online communities. The last time I left my job, I was building a startup (that went under), and missed out on living in a beautiful place due to the amount of time I worked. I would like to work 30-40 hours per week max. I can dedicate 10-15 hours per week while at my current job to prepare and learn. A preemptive thanks for the advice...

    Read the article

  • Static pages for large photo album

    - by Phil P
    I'm looking for advice on software for managing a largish photo album for a website: 2000+ pictures, one-time drop (probably). I normally use MarginalHack's Album, which does what I want: it pre-generates thumbnails and HTML for the pictures, so I can serve them without needing a dynamic run-time and there's less attack surface to worry about. However, it doesn't handle pagination or the like, so it's unwieldy for this case. This is a one-time drop for pictures from a wedding, with a shared usercode/password for distribution to the guests; I don't wish to put the pictures in a third-party hosting environment. I don't wish to use PHP, simply because that's another run-time to worry about; I might relent and use something dynamic if it's Python or Perl based (as I can maintain things written in those). I currently have Apache serving static, Album-generated files, with some sub-directories to divide up the content and make it a little more manageable. Something like Album but with pagination already handled would be great, but I'm willing to have something a little more dynamic if it lets people comment or caption and stores the extra data in something like an SQLite DB. I'd want something lightweight, not a full-blown CMS with security updates every three months. I don't want to upload pictures of other people's children to a third-party free service where I don't know what the revenue model is. (For my site: revenue is none, costs out of pocket.) Existing server hosting is *nix, Apache, some WSGI. Client-side I have MacOS. Any advice?
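
    For the pagination gap specifically, the missing piece is small enough to script on top of Album's pre-generated thumbnails. A rough Perl sketch that chunks a directory of thumbnails into static index pages (the file layout, page size, and output names are illustrative assumptions, not how Album itself works):

      #!/usr/bin/env perl
      # Write static index_1.html, index_2.html, ... pages, each linking up to
      # PER_PAGE thumbnails, with previous/next navigation between pages.
      use strict;
      use warnings;

      use constant PER_PAGE => 36;

      my $dir    = shift // 'thumbs';
      my @images = sort glob("$dir/*.jpg");
      my $pages  = int((@images + PER_PAGE - 1) / PER_PAGE);

      for my $p (1 .. $pages) {
          my $first = ($p - 1) * PER_PAGE;
          my $last  = $p * PER_PAGE - 1;
          $last = $#images if $last > $#images;

          open my $fh, '>', "index_$p.html" or die "cannot write index_$p.html: $!";
          print {$fh} "<html><body>\n";
          print {$fh} qq{<a href="$_"><img src="$_" alt=""></a>\n} for @images[$first .. $last];
          # Simple previous / next links between the generated pages.
          print {$fh} '<a href="index_' . ($p - 1) . '.html">Previous</a>' . "\n" if $p > 1;
          print {$fh} '<a href="index_' . ($p + 1) . '.html">Next</a>' . "\n"     if $p < $pages;
          print {$fh} "</body></html>\n";
          close $fh;
      }
      print "wrote $pages page(s) for ", scalar @images, " images\n";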

    Read the article

  • Where is he now?

    - by Chris G. Williams
    A couple months ago, I announced I was leaving Magenic in order to take a break from consulting. I figured I'd post an update as to what I'm doing now, since I haven't exactly been slacking off. 1) I accepted a position as a Lead Developer with RealPage. I work on a number of internal-use applications for a subsidiary known as LevelOne. The majority of my work is in ASP.NET, a surprising amount of VB.NET, some C#, and I'm picking up a few new tools for my belt... specifically Python, MongoDB and Perl. 2) I am still the owner of Big Robot Games, a retail game store / coffee shop in the South Carolina upstate region. I'm not as involved in the day-to-day activity as I was, but I'm there most nights and weekends, when I'm not off doing other things, like #3. 3) I am on the staff of Rock Revolt Magazine as a journalist, covering live performances as well as interviewing bands, providing album and video game reviews, fixing the website and the occasional prison ink. (Just kidding on that last one.) 4) In whatever time is left over, I still manage to bang out a little code on Heroic Adventure! (aka HA!) and talk about Windows Phone, XNA and whatever else suits me, wherever they'll let me. I guess that's about it.

    Read the article

  • Is there a simple, flat, XML-based query-able data storage solution? [closed]

    - by alex gray
    I have been in long pursuit of an XML-based query-able data store, and despite continued searches and evaluations, I have yet to find a solution that meets my needs, which include: data is wholly contained within XML nodes, in flat text files; there is a "native", or at least unobtrusive, method with which to perform Create/Read/Update/Delete (CRUD) operations on the "schema" (I would consider access via HTTP, XHR, JavaScript, PHP, Bash, or Perl to be unobtrusive, depending on the complexity of the set of dependencies); server-side file-system reads and writes; and a client-side interface element, accessible in any browser without a plug-in. Some extra, preferred (but optional) requirements: respond to simple SQL, or similar-syntax, queries; serve the data from a bare-bones HTTPS server, with no "extra stuff", either via XMLHttpRequest, HTTP proper, or JSON. A few thoughts: what I'm looking for may be possible via some Java server implementations, but for the sake of this question, please do not suggest that unless it meets ALL the requirements. Java, especially on the client side, is not really an option, nor is it appealing from a development viewpoint. I know walking the filesystem is a stretch, and I've heard it's possible with XPath or XSLT, but as far as I know that's not ready for primetime, nor even yet a recommendation. However, the ability to recursively traverse the filesystem is needed for such a system to be truly useful. At this point, I have basically implemented what I described via, of all things, CGI and Bash, but there has to be an easier way. Thoughts?
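
    As a point of reference for the "unobtrusive Perl access" requirement, read-only XPath queries over flat XML files take only a few lines with XML::LibXML. A minimal sketch (the directory layout and the example query are assumptions; update/delete would need a write-back step on top of this):

      #!/usr/bin/env perl
      # Walk a directory tree of flat .xml files and run one XPath query
      # against each, printing the text content of every matching node.
      use strict;
      use warnings;

      use File::Find;
      use XML::LibXML;

      my ($root, $xpath) = @ARGV;
      die qq{usage: $0 <data dir> <xpath>\ne.g.   $0 data '//record[\@status="active"]'\n}
          unless $root && $xpath;

      my $parser = XML::LibXML->new();

      find(sub {
          return unless /\.xml\z/;
          my $doc = eval { $parser->parse_file($_) };   # $_ is the file name in the current dir
          unless ($doc) {
              warn "skipping $File::Find::name: $@";
              return;
          }
          for my $node ($doc->findnodes($xpath)) {
              print "$File::Find::name: ", $node->textContent, "\n";
          }
      }, $root);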

    Read the article

  • Why does there seem to be a lot of fear in choosing the "wrong" language to learn?

    - by Shewbox
    Perhaps it's just me, but as a current CS student I have already come across many questions on this site and elsewhere, not just "Which language should I use for X?" but also "Does anyone still use language Y?" My first CS class was taught in Scheme which, if I'm not mistaken, isn't used widely (at least in comparison to languages like Java, PHP, Python, etc.). Many of my classmates balked at the idea of having to learn a language they would never have to use again, but I don't quite understand where so much of this fear of learning less popular languages comes from. No, I may not use Scheme in any job I get, but I certainly don't regret having learned it (albeit in a very beginner-level, not very in-depth manner in that one semester). I am taking a search engines class this semester, which is done in Perl, and again I am seeing classmates complaining about the language choice. I can understand having a favorite language and disliking others, but why do some get worked up over learning a language in the first place? Can you really learn the "wrong" language? Isn't learning something like Scheme or Haskell good mental exercise if nothing else, and useful at least as exposure to different ways of solving problems?

    Read the article

  • How do you keep down your urge to learn many things [closed]

    - by devsundar
    One of the difficulties I have is keeping down my urge to learn new things (languages, tools, frameworks, etc.). I know it's good to stay on the bleeding edge, but at the same time I want to learn things properly. I can really see that I need to strike a balance between staying on the bleeding edge and knowing things properly. For example: before choosing Arch (desktop), Ubuntu (server) and Knoppix (portable) as my favourite distributions, depending on situation, I tried virtually all the popular Linux distributions. Name any popular Linux distro (Redhat, Ubuntu, Arch, Suse, Knoppix, Slax, Slackware) and I have tried it for some time. In fact I have spent a few years experimenting with operating systems. Before choosing Python and JavaScript (Node.js), I tried all the languages I came across: Scala, Haskell, Erlang, Ruby, Python, Perl, Scheme. The same applies to databases: all the popular RDBMSs (Oracle, MySQL, Postgres, SQLite [favourite], etc.) and NoSQL stores (Mongo, Couch, Neo4j, etc.). Advantages I see: we get an overall picture of the technologies/tools/languages; it's useful for selecting the right tool for the job; we develop a taste and choose the one we like. Disadvantages: I feel that I spend so much time on this, and I see a need to strike a balance. In summary: if I see a blog post on Hacker News about CoffeeScript, I will try it out irrespective of what I am currently learning (say, Haskell). I switch back to learning Haskell, then I see Dart and check that out, and this continues. Effectively I take more time to learn Haskell, but learn about other new stuff on the way. The question I have is: how do you strike a balance between staying on the bleeding edge and learning properly?

    Read the article

  • What is the best objective way to measure language popularity trends? (What's better than TIOBE?)

    - by Eric Wilson
    The best way I know of to get data on computer language popularity is the TIOBE index. But everyone knows that TIOBE is hopelessly flawed. (If someone provides a link to support this, I'll add it here.) So is there any data on programming language popularity that is generally considered meaningful? The only other option I know of is to look at the trends at indeed.com, which is inherently flawed, being based on job postings. It isn't like I would make a future language decision solely based on an index, but it might provide a useful balance to the skewed perspective one obtains by talking to one's friends and colleagues. To illustrate that bias, I'll point out that, based on the experience of those I personally know, the only languages used professionally today (in order of popularity) are Java, C#, Groovy, JavaScript, Ruby, Objective-C, and Perl. (Though it is evident that C, C++ and PHP were used in the past.) So my question is: everyone bashes TIOBE, but is there anything else? If so, can anyone explain how we know the alternative has better methodology? Thanks.

    Read the article

  • How can I test linkable/executable files that require re-hosting or retargeting?

    - by hagubear
    Due to data protection, I cannot discuss the fine details of the work itself, so apologies. PROBLEM CASE: Sometimes my software projects require merging/integration with third-party (customer or other suppliers') software. This software often arrives as linkable executables or object code (requiring that my source code be retargeted and linked with it). When I get the executables or object code, I cannot validate their operation fully without integrating them with my system. My initial thought is that executables are not meant to be unit tested, they are meant to be linked with other systems, but what is the guarantee that post-linkage and integration behaviour will be okay? There is also no sufficient documentation available (from the customer) to indicate how to go about integrating the executables or object files. I know this is a philosophical question, but apparently not enough research could be found at this moment to point to a solution. I was hoping people could help me go in the right direction by suggesting approaches. To start, I have found out that avionics OEM software is often rehosted and retargeted by third parties, e.g. simulator makers. I wonder how they test it. Surely the source code will not be supplied, due to IPR regulations. UPDATE: I have received reasonable and very useful suggestions regarding this area. My current struggle has shifted to testing third-party OBJECT code that needs to be linked with my own source code (retargeted) on my host machine. How can I even test object code? Surely I need to link it first to even think about doing anything. Is it the post-link behaviour that needs to be determined and scripted (using Perl, Tcl, etc.) so that inputs and outputs can be verified? No clue!! :( thanks,
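
    The "script the post-link behaviour" idea at the end might look something like the sketch below: a hypothetical black-box harness in Perl that feeds the linked deliverable known inputs and compares its output against expected results. The binary name and the tests/*.in / tests/*.expected layout are assumptions for illustration.

      #!/usr/bin/env perl
      # Black-box harness: for every tests/<case>.in, run the linked binary with
      # that input on stdin and compare its stdout against tests/<case>.expected.
      use strict;
      use warnings;

      my $binary = './integrated_app';   # the retargeted + linked executable (assumed name)
      my ($pass, $fail) = (0, 0);

      for my $input (sort glob 'tests/*.in') {
          (my $expected_file = $input) =~ s/\.in\z/.expected/;

          my $got    = `$binary < $input 2>&1`;
          my $status = $? >> 8;

          my $expected = do {
              open my $fh, '<', $expected_file or die "missing $expected_file: $!";
              local $/;
              <$fh>;
          };

          if ($status == 0 && $got eq $expected) {
              $pass++;
          } else {
              $fail++;
              warn "FAIL $input (exit status $status)\n";
          }
      }

      print "$pass passed, $fail failed\n";
      exit($fail ? 1 : 0);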

    Read the article

  • What packages are safe to uninstall to reduce installation size?

    - by mathematician1975
    This question is similar to a previous question I asked, "How can I turn my desktop Ubuntu 8.04 into a command line only install?". I was wondering if anyone can recommend any other bulky packages from the standard 8.04 installation whose removal would reduce the on-disk size of my installation. All I really require is socket functionality, g++ and gcc, some kind of text editor, and an SSH client and server. Things that I don't require are media players, audio packages, and the more "superficial" kind of desktop niceties. Is there anything particularly large in a standard install that is safe for me to remove without compromising the requirements above? I am a bit apprehensive about uninstalling things, and I am not totally confident that removing particular items won't have a negative effect on the functionality of other things I might need (for example, would it be safe for me to remove everything to do with Perl, or do the system/kernel/other processes require it?). Basically I would like to be left with the kind of packages that would have been installed by the CLI version of 8.04 (had the alternative ISO image not been faulty). Any help/suggestions would be gratefully received.

    Read the article

  • How do I develop a database-utilizing application in an agile/test-driven-development way?

    - by user39019
    I want to add databases (traditional client/server RDBMSs like MySQL/PostgreSQL, as opposed to NoSQL or embedded databases) to my toolbox as a developer. I've been using SQLite for simpler projects with only one client, but now I want to do more complicated things (i.e., db-backed web development). I usually like following agile and/or test-driven-development principles. I generally code in Perl or Python. Questions: How do I test my code such that each run of the test suite starts with a 'pristine' state? Do I run a separate instance of the database server for every test? Do I use a temporary database? How do I design my tables/schema so that it is flexible with respect to changing requirements? Do I start with an ORM for my language, or do I stick to hand-coding SQL? One thing I don't find appealing is having to change more than one thing (say, the CREATE TABLE statement and the associated CRUD statements) for one schema change, because that's error-prone. On the other hand, I expect ORMs to be a little slower and harder to debug than raw SQL. What is the general strategy for migrating data between one version of the program and a newer one? Do I carefully write ALTER TABLE statements between each version, or do I dump the data and import it fresh in the new version?
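
    On the 'pristine state per test run' question, one common pattern in Perl is to stand up a throwaway database inside the test itself, so nothing persists between runs. A minimal sketch using DBI with an in-memory SQLite database (the table and the assertion are illustrative; a real suite would build the schema from the application's own migration scripts):

      #!/usr/bin/env perl
      # Each test file gets its own private in-memory database, creates the
      # schema it needs, and throws everything away when the process exits.
      use strict;
      use warnings;

      use DBI;
      use Test::More;

      # A fresh database per run: ':memory:' never touches disk or other tests.
      my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                             { RaiseError => 1, AutoCommit => 1 });

      # Build the schema this test needs.
      $dbh->do('CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)');

      # Exercise the code under test; plain SQL stands in for it here.
      $dbh->do('INSERT INTO users (name) VALUES (?)', undef, 'alice');
      my ($count) = $dbh->selectrow_array('SELECT COUNT(*) FROM users');

      is($count, 1, 'one user inserted into a pristine database');

      done_testing();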

    Read the article

  • Looking for some advice on the next steps to take [closed]

    - by mopsyd
    I am looking for some advice on the next step to take in the development of my programming skills. I was directed here when asking this question on Stack Overflow. What I know already: I have a solid grasp of XHTML, XML, PHP, JavaScript, MySQL, and ActionScript; a working knowledge of VB; and a slight grasp of Java from tinkering with a Minecraft server. Some brief exposure to the Unreal Engine in college. Some skills with SQL Server, MS SQL, Office integration, etc. Also some knowledge of Asterisk and PBX/VoIP. I've been coding off and on since the age of 8, but I have no computer science education aside from what I have taught myself or learned from work/freelance. I work in OS X mostly, but can use/troubleshoot Windows and Ubuntu fluently too. Decent with both UNIX and DOS CLI. What I'm considering: I'm looking to learn a scripting language to build web apps, help streamline the home server that I am building, and run shell scripts. Being able to help code games later is a big plus. My question: Between Java, Ruby, Perl, and Python, which would be the best investment of my time considering what I already know and what direction I would like to take my skillset? What are good resources for your suggested direction? Thanks in advance.

    Read the article

< Previous Page | 120 121 122 123 124 125 126 127 128 129 130 131  | Next Page >