Search Results

Search found 559 results on 23 pages for 'preprocessor abuse'.

Page 20 of 23

  • Why Software Sucks...and What You Can Do About It – book review

    - by DigiMortal
    How do our users see the products we write for them, and how happy are they with our work? Are they able to get their work done without fighting cool features and crashes, or do they just switch off the resisting part of their brain to survive our software? Yeah, the overall software usability landscape is not very nice. Okay, it is not nice at all. Fortunately, Why Software Sucks...and What You Can Do About It by David S. Platt explains everything.

    Why Software Sucks… is a book for software users, but I consider it a must-read for developers as well, and especially for their managers, whose politics often kill usability topics as soon as they appear. For managers, usability is a soft topic that can be bent whichever way suits the current state of the project. Although developers are not UI designers or usability experts, they are still very often forced to deal with these topics, and this is how usability problems start (of course, designers are also able to produce designs that are confusing and too hard for users, but this blog is about development).

    I found this book to be a very interesting and funny read. It is not a humor book, but it explains everything in a way you will remember well afterwards. It took me about three evenings to get through it, and I am still enjoying what I found and how the author explains our weird young field to end users. I suggest this book to all developers: while you are demanding that your management hire or outsource a usability expert, you will at least be causing less pain to end users. So go and buy this book, just like I did. And… many thanks to Mr. Platt :) There is one more book I suggest you read if you are interested in usability: Don't Make Me Think: A Common Sense Approach to Web Usability, 2nd Edition by Steve Krug.

    Editorial review from Amazon
    Today’s software sucks. There’s no other good way to say it. It’s unsafe, allowing criminal programs to creep through the Internet wires into our very bedrooms. It’s unreliable, crashing when we need it most, wiping out hours or days of work with no way to get it back. And it’s hard to use, requiring large amounts of head-banging to figure out the simplest operations. It’s no secret that software sucks. You know that from personal experience, whether you use computers for work or personal tasks. In this book, programming insider David Platt explains why that’s the case and, more importantly, why it doesn’t have to be that way. And he explains it in plain, jargon-free English that’s a joy to read, using real-world examples with which you’re already familiar. In the end, he suggests what you, as a typical user, without a technical background, can do about this sad state of our software—how you, as an informed consumer, don’t have to take the abuse that bad software dishes out. As you might expect from the book’s title, Dave’s exposé is laced with humor—sometimes outrageous, but always dead on. You’ll laugh out loud as you recall incidents with your own software that made you cry. You’ll slap your thigh with the same hand that so often pounded your computer desk and wished it was a bad programmer’s face. But Dave hasn’t written this book just for laughs. He’s written it to give long-overdue voice to your own discovery—that software does, indeed, suck, but it shouldn’t.

    Table of contents
    Acknowledgments
    Introduction
    Chapter 1: Who’re You Calling a Dummy? (Where We Came From; Why It Still Sucks Today; Control versus Ease of Use; I Don’t Care How Your Program Works; A Bad Feature and a Good One; Stopping the Proceedings with Idiocy; Testing on Live Animals; Where We Are and What You Can Do)
    Chapter 2: Tangled in the Web (Where We Came From; How It Works; Why It Still Sucks Today; Client-Centered Design versus Server-Centered Design; Where’s My Eye Opener?; It’s Obvious—Not!; Splash, Flash, and Animation; Testing on Live Animals; What You Can Do about It)
    Chapter 3: Keep Me Safe (The Way It Was; Why It Sucks Today; What Programmers Need to Know, but Don’t; A Human Operation; Budgeting for Hassles; Users Are Lazy; Social Engineering; Last Word on Security; What You Can Do)
    Chapter 4: Who the Heck Are You? (Where We Came From; Why It Still Sucks Today; Incompatible Requirements; OK, So Now What?)
    Chapter 5: Who’re You Looking At? (Yes, They Know You; Why It Sucks More Than Ever Today; Users Don’t Know Where the Risks Are; What They Know First; Milk You with Cookies?; Privacy Policy Nonsense; Covering Your Tracks; The Google Conundrum; Solution)
    Chapter 6: Ten Thousand Geeks, Crazed on Jolt Cola (See Them in Their Native Habitat; All These Geeks; Who Speaks, and When, and about What; Selling It; The Next Generation of Geeks—Passing It On)
    Chapter 7: Who Are These Crazy Bastards Anyway? (Homo Logicus; Testosterone Poisoning; Control and Contentment; Making Models; Geeks and Jocks; Jargon; Brains and Constraints; Seven Habits of Geeks)
    Chapter 8: Microsoft: Can’t Live With ’Em and Can’t Live Without ’Em (They Run the World; Me and Them; Where We Came From; Why It Sucks Today; Damned if You Do, Damned if You Don’t; We Love to Hate Them; Plus ça Change; Growing-Up Pains; What You Can Do about It; The Last Word)
    Chapter 9: Doing Something About It (1. Buy; 2. Tell; 3. Ridicule; 4. Trust; 5. Organize)
    Epilogue
    About the Author

    Read the article

  • Installing gtk-config and/or fsv in Ubuntu 10.10

    - by Wayne Werner
    Hi, I'm trying to install the File System Visualizer (think "It's a UNIX System! I know this!" from Jurassic Park) on Ubuntu 10.10. I've got the .tar.gz downloaded, and extracted. However, when I ./configure, I get this output: loading cache ./config.cache checking for a BSD compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking whether make sets ${MAKE}... yes checking for working aclocal... found checking for working autoconf... found checking for working automake... found checking for working autoheader... found checking for working makeinfo... missing checking for gcc... gcc checking whether the C compiler (gcc ) works... yes checking whether the C compiler (gcc ) is a cross-compiler... no checking whether we are using GNU C... yes checking whether gcc accepts -g... yes checking how to run the C preprocessor... gcc -E checking for ranlib... ranlib checking for POSIXized ISC... no checking for dirent.h that defines DIR... yes checking for opendir in -ldir... no checking for ANSI C header files... yes checking whether time.h and sys/time.h may both be included... yes checking for strings.h... yes checking for sys/time.h... yes checking for unistd.h... yes checking for working const... yes checking for mode_t... yes checking for uid_t in sys/types.h... yes checking for pid_t... yes checking for size_t... yes checking for comparison_fn_t... yes checking for st_blocks in struct stat... yes checking whether struct tm is in sys/time.h or time.h... time.h checking for working alloca.h... yes checking for alloca... yes checking for working fnmatch... yes checking for strftime... yes checking for getcwd... yes checking for gettimeofday... yes checking for mktime... yes checking for strcspn... yes checking for strdup... yes checking for strspn... yes checking for strtod... yes checking for strtoul... yes checking for scandir... yes checking for inline... inline checking for off_t... yes checking for unistd.h... (cached) yes checking for getpagesize... yes checking for working mmap... yes checking for argz.h... yes checking for limits.h... yes checking for locale.h... yes checking for nl_types.h... yes checking for malloc.h... yes checking for string.h... yes checking for unistd.h... (cached) yes checking for sys/param.h... yes checking for getcwd... (cached) yes checking for munmap... yes checking for putenv... yes checking for setenv... yes checking for setlocale... yes checking for strchr... yes checking for strcasecmp... yes checking for strdup... (cached) yes checking for __argz_count... yes checking for __argz_stringify... yes checking for __argz_next... yes checking for stpcpy... yes checking for LC_MESSAGES... yes checking whether NLS is requested... yes checking whether included gettext is requested... no checking for libintl.h... yes checking for gettext in libc... yes checking for msgfmt... /usr/bin/msgfmt checking for dcgettext... yes checking for gmsgfmt... /usr/bin/msgfmt checking for xgettext... /usr/bin/xgettext checking for gtk-config... no checking for GTK - version >= 1.2.1... no *** The gtk-config script installed by GTK could not be found *** If GTK was installed in PREFIX, make sure PREFIX/bin is in *** your path, or set the GTK_CONFIG environment variable to the *** full path to gtk-config. configure: error: Cannot find proper GTK+ version Obviously it's looking for gtk-config. However, apparently it doesn't exist in the repos anymore. Then this post mentioned that gtkglarea solved their problem, as mentioned in this file. 
    Of course, that poster neatly forgets to mention exactly what gtkglarea solved or how it did so, and Google is mostly devoid of information on the problem. So I come here asking for help! I would like to install fsv, but configure tells me gtk-config doesn't exist. How can I fix this problem on Ubuntu 10.10? Thanks!

    Read the article

  • Open Source but not Free Software (or vice versa)

    - by TRiG
    The definition of "Free Software" from the Free Software Foundation: “Free software” is a matter of liberty, not price. To understand the concept, you should think of “free” as in “free speech,” not as in “free beer.” Free software is a matter of the users' freedom to run, copy, distribute, study, change and improve the software. More precisely, it means that the program's users have the four essential freedoms: The freedom to run the program, for any purpose (freedom 0). The freedom to study how the program works, and change it to make it do what you wish (freedom 1). Access to the source code is a precondition for this. The freedom to redistribute copies so you can help your neighbor (freedom 2). The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this. A program is free software if users have all of these freedoms. Thus, you should be free to redistribute copies, either with or without modifications, either gratis or charging a fee for distribution, to anyone anywhere. Being free to do these things means (among other things) that you do not have to ask or pay for permission to do so. The definition of "Open Source Software" from the Open Source Initiative: Open source doesn't just mean access to the source code. The distribution terms of open-source software must comply with the following criteria: Free Redistribution The license shall not restrict any party from selling or giving away the software as a component of an aggregate software distribution containing programs from several different sources. The license shall not require a royalty or other fee for such sale. Source Code The program must include source code, and must allow distribution in source code as well as compiled form. Where some form of a product is not distributed with source code, there must be a well-publicized means of obtaining the source code for no more than a reasonable reproduction cost preferably, downloading via the Internet without charge. The source code must be the preferred form in which a programmer would modify the program. Deliberately obfuscated source code is not allowed. Intermediate forms such as the output of a preprocessor or translator are not allowed. Derived Works The license must allow modifications and derived works, and must allow them to be distributed under the same terms as the license of the original software. Integrity of The Author's Source Code The license may restrict source-code from being distributed in modified form only if the license allows the distribution of "patch files" with the source code for the purpose of modifying the program at build time. The license must explicitly permit distribution of software built from modified source code. The license may require derived works to carry a different name or version number from the original software. No Discrimination Against Persons or Groups The license must not discriminate against any person or group of persons. No Discrimination Against Fields of Endeavor The license must not restrict anyone from making use of the program in a specific field of endeavor. For example, it may not restrict the program from being used in a business, or from being used for genetic research. Distribution of License The rights attached to the program must apply to all to whom the program is redistributed without the need for execution of an additional license by those parties. 
License Must Not Be Specific to a Product The rights attached to the program must not depend on the program's being part of a particular software distribution. If the program is extracted from that distribution and used or distributed within the terms of the program's license, all parties to whom the program is redistributed should have the same rights as those that are granted in conjunction with the original software distribution. License Must Not Restrict Other Software The license must not place restrictions on other software that is distributed along with the licensed software. For example, the license must not insist that all other programs distributed on the same medium must be open-source software. License Must Be Technology-Neutral No provision of the license may be predicated on any individual technology or style of interface. These definitions, although they derive from very different ideologies, are broadly compatible, and most Free Software is also Open Source Software and vice versa. I believe, however, that it is possible for this not to be the case: It is possible for software to be Open Source without being Free, or to be Free without being Open Source. Questions Is my belief correct? Is it possible for software to fall into one camp and not the other? Does any such software actually exist? Please give examples. Clarification I've already accepted an answer now, but I seem to have confused a lot of people, so perhaps a clarification is in order. I was not asking about the difference between copyleft (or "viral", though I don't like that term) and non-copyleft ("permissive") licenses. Nor was I asking about your personal idiosyncratic definitions of "Free" and "Open". I was asking about "Free Software as defined by the FSF" and "Open Source Software as defined by the OSI". Are the two always the same? Is it possible to be one without being the other? And the answer, it seems, is that it's impossible to be Free without being Open, but possible to be Open without being Free. Thank you everyone who actually answered the question.

    Read the article

  • problem occur during installation of moses scripts

    - by lenny99
    we got error when compile moses-script. process of it as follows: minakshi@minakshi-Vostro-3500:~/Desktop/monu/moses/scripts$ make release # Compile the parts make all make[1]: Entering directory `/home/minakshi/Desktop/monu/moses/scripts' # Building memscore may fail e.g. if boost is not available. # We ignore this because traditional scoring will still work and memscore isn't used by default. cd training/memscore ; \ ./configure && make \ || ( echo "WARNING: Building memscore failed."; \ echo 'training/memscore/memscore' >> ../../release-exclude ) checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for gawk... no checking for mawk... mawk checking whether make sets $(MAKE)... yes checking for g++... g++ checking whether the C++ compiler works... yes checking for C++ compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C++ compiler... yes checking whether g++ accepts -g... yes checking for style of include used by make... GNU checking dependency style of g++... gcc3 checking for gcc... gcc checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... none needed checking dependency style of gcc... gcc3 checking for boostlib >= 1.31.0... yes checking for cos in -lm... yes checking for gzopen in -lz... yes checking for cblas_dgemm in -lgslcblas... no checking for gsl_blas_dgemm in -lgsl... no checking how to run the C++ preprocessor... g++ -E checking for grep that handles long lines and -e... /bin/grep checking for egrep... /bin/grep -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking n_gram.h usability... no checking n_gram.h presence... no checking for n_gram.h... no checking for size_t... yes checking for ptrdiff_t... yes configure: creating ./config.status config.status: creating Makefile config.status: creating config.h config.status: config.h is unchanged config.status: executing depfiles commands make[2]: Entering directory `/home/minakshi/Desktop/monu/moses/scripts/training/memscore' make all-am make[3]: Entering directory `/home/minakshi/Desktop/monu/moses/scripts/training/memscore' make[3]: Leaving directory `/home/minakshi/Desktop/monu/moses/scripts/training/memscore' make[2]: Leaving directory `/home/minakshi/Desktop/monu/moses/scripts/training/memscore' touch release-exclude # No files excluded by default pwd=`pwd`; \ for subdir in cmert-0.5 phrase-extract symal mbr lexical-reordering; do \ make -C training/$subdir || exit 1; \ echo "### Compiler $subdir"; \ cd $pwd; \ done make[2]: Entering directory `/home/minakshi/Desktop/monu/moses/scripts/training/cmert-0.5' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/home/minakshi/Desktop/monu/moses/scripts/training/cmert-0.5' ### Compiler cmert-0.5 make[2]: Entering directory `/home/minakshi/Desktop/monu/moses/scripts/training/phrase-extract' make[2]: Nothing to be done for `all'. 
make[2]: Leaving directory `/home/minakshi/Desktop/monu/moses/scripts/training/phrase-extract' ### Compiler phrase-extract make[2]: Entering directory `/home/minakshi/Desktop/monu/moses/scripts/training/symal' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/home/minakshi/Desktop/monu/moses/scripts/training/symal' ### Compiler symal make[2]: Entering directory `/home/minakshi/Desktop/monu/moses/scripts/training/mbr' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/home/minakshi/Desktop/monu/moses/scripts/training/mbr' ### Compiler mbr make[2]: Entering directory `/home/minakshi/Desktop/monu/moses/scripts/training/lexical-reordering' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/home/minakshi/Desktop/monu/moses/scripts/training/lexical-reordering' ### Compiler lexical-reordering ## All files that need compilation were compiled make[1]: Leaving directory `/home/minakshi/Desktop/monu/moses/scripts' /bin/sh: ./check-dependencies.pl: not found make: *** [release] Error 127 We don't know why this error occurs? check-dependencies.pl file existed in scripts folder ...

    Read the article

  • Amanda Todd&ndash;What Parents Can Learn From Her Story

    - by D'Arcy Lussier
    Amanda Todd was a bullied teenager who committed suicide this week. Her story has become headline news due in part to her You Tube video she posted telling her story:   The story is heartbreaking for so many reasons, but I wanted to talk about what we as parents can learn from this. Being the dad to two girls, one that’s 10, I’m very aware of the dangers that the internet holds. When I saw her story, one thing jumped out at me – unmonitored internet access at an early age. My daughter (then 9) came home from a friends place once and asked if she could be in a YouTube video with her friend. Apparently this friend was allowed to do whatever she wanted on the internet, including posting goofy videos. This set off warning bells and we ensured our daughter realized the dangers and that she was not to ever post videos of herself. In looking at Amanda’s story, the access to unmonitored internet time along with just being a young girl and being flattered by an online predator were the key events that ultimately led to her suicide. Yes, the reaction of her classmates and “friends” was horrible as well, I’m not diluting that. But our youth don’t fully understand yet that what they do on the internet today will follow them potentially forever. And the people they meet online aren’t necessarily who they claim to be. So what can we as parents learn from Amanda’s story? Parents Shouldn’t Feel Bad About Being Internet Police Our job as parents is in part to protect our kids and keep them safe, even if they don’t like our measures. This includes monitoring, supervising, and restricting their internet activities. In our house we have a family computer in the living room that the kids can watch videos and surf the web. It’s in plain view of everyone, so you can’t hide what you’re looking at. If our daughter goes to a friend’s place, we ask about what they did and what they played. If the computer comes up, we ask about what they did on it. Luckily our daughter is very up front and honest in telling us things, so we have very open discussions. Parents Need to Be Honest About the Dangers of the Internet I’m sure every generation says that “kids grow up so fast these days”, but in our case the internet really does push our kids to be exposed to things they otherwise wouldn’t experience. One wrong word in a Google search, a click of a link in a spam email, or just general curiosity can expose a child to things they aren’t ready for or should never be exposed to (and I’m not just talking about adult material – have you seen some of the graphic pictures from war zones posted on news sites recently?). Our stance as parents has been to be open about discussing the dangers with our kids before they encounter any content – be proactive instead of reactionary. Part of this is alerting them to the monsters that lurk on the internet as well. As kids explore the world wide web, they’re eventually going to encounter some chat room or some Facebook friend invite or other personal connection with someone. More than ever kids need to be educated on the dangers of engaging with people online and sharing personal information. 
You can think of it as an evolved discussion that our parents had with us about using the phone: “Don’t say ‘I’m home alone’, don’t say when mom or dad get home, don’t tell them any information, etc.” Parents Need to Talk Self Worth at Home Katie makes the point better than I ever could (one bad word towards the end): Our children need to understand their value beyond what the latest issue of TigerBeat says, or the media who continues flaunting physical attributes over intelligence and character, or a society that puts focus on status and wealth. They also have to realize that just because someone pays you a compliment, that doesn’t mean you should ignore personal boundaries and limits. What does this have to do with the internet? Well, in days past if you wanted to be social you had to go out somewhere. Now you can video chat with any number of people from the comfort of wherever your laptop happens to be – and not just text but full HD video with sound! While innocent children head online in the hopes of meeting cool people, predators with bad intentions are heading online too. As much as we try to monitor their online activity and be honest about the dangers of the internet, the human side of our kids isn’t something we can control. But we can try to influence them to see themselves as not needing to search out the acceptance of complete strangers online. Way easier said than done, but ensuring self-worth is something discussed, encouraged, and celebrated is a step in the right direction. Parental Wake Up Call This post is not a critique of Amanda’s parents. The reality is that cyber bullying/abuse is happening every day, and there are millions of parents that have no clue its happening to their children. Amanda’s story is a wake up call that our children’s online activities may be putting them in danger. My heart goes out to the parents of this girl. As a father of daughters, I can’t imagine what I would do if I found my daughter having to hide in a ditch to avoid a mob or call 911 to report my daughter had attempted suicide by drinking bleach or deal with a child turning to drugs/alcohol/cutting to cope. It would be horrendous if we as parents didn’t re-evaluate our family internet policies in light of this event. And in the end, Amanda’s video was meant to bring attention to her plight and encourage others going through the same thing. We may not be kids, but we can still honour her memory by helping safeguard our children.

    Read the article

  • How to tell whether your programmers are under-performing?

    - by A Team Lead
    I am a team lead with 5+ developers. I have a developer (let's call him A) who is a good programmer, who writes good, clean, easy-to-understand code. However, he is somewhat difficult to manage, and sometimes I wonder whether he is really under-performing or not.

    Our company requires the developers to indicate their work progress in the bug tracker we use, not so much to monitor the programmers as to let the stakeholders know the progress. The thing is, A only updates a task's progress when it is done (maybe 3 weeks after it is first worked on), and this leaves everyone wondering what is going on in the middle of the development week. He wouldn't change this habit despite repeated probing. (It's OK, developers hate paperwork; I do, too.)

    In the last 2-3 months he has been on leave quite often due to various events-- either he is sick, or he has a lot of personal events to attend, etc. (It's OK, bad things come in streaks. It's just a coincidence.)

    We define sprints, or roadmaps, for each month. At the beginning of the sprint, we discuss the amount of work each of the developers has to do in the sprint, and the developers get to set the amount of time they need for each task. He usually won't be able to complete all of them. (It's OK, developers regularly miss deadlines through no fault of their own.)

    If only one or two of the above things happened, I wouldn't feel that A is under-performing, but they all happen together. So I have the feeling that A is under-performing and maybe-- God forbid-- slacking off. This is just a feeling based on my years of experience as a programmer. But I could be wrong. It is notoriously hard to measure the work of a programmer, given that no two tasks are alike and there is no standard, objective way to measure a programmer's commitment to your company. It is downright impossible to tell whether a programmer is doing his job or slacking off. All you can do is trust them-- yeah, trusting them and giving them autonomy is the best way for programmers to work, I know that, so don't start a lecture on why you need to trust your programmers, thank you very much-- but if they abuse your trust, can you know?

    My question is: how can you tell whether your programmers are under-performing? Surely there are experienced team leads who know better than I do on this?

    Outcome: I had a straight talk with him regarding my perception of his performance. He was indignant when I suggested that I had the feeling he wasn't performing at his best level. He felt that this was a completely unfair feeling. I then replied that this was my feeling and I didn't know whether it was right or not. He would have none of this and ended the discussion immediately. Before he left he said that he "would try to give more to the company" in a very cold tone. I was taken aback by his reaction. I am sure that I offended him in some way. Not too sure whether it was the right thing for me to be so frank with him, though.

    Extra notes: I hate micromanaging, so all we have for our software process is the sprint (where tasks get prioritized and assigned, and at the end of the month there is a review of the amount of work done). Developers are required to update their tasks as they go along every day. There is no standup meeting or anything of the sort, mainly because we have the freedom to work from home and everyone cherishes this freedom. Although I am the one who sets the deadline, the developers provide the estimate for each task, and I decide-- based on those estimates-- which tasks go into a particular sprint. If they can't finish the tasks by the end of the sprint, I push them to the next one. So theoretically one could do only 1 or 2 tasks during the whole sprint and push the remaining 99 tasks to the next sprint, and he would still be fine as long as he justifies this-- in the form of daily work progress updates.

    Read the article

  • Rkhunter 122 suspect files; do I have a problem?

    - by user276166
    I am new to ubuntu. I am using Xfce Ubuntu 14.04 LTS. I have ran rkhunter a few weeks age and only got a few warnings. The forum said that they were normal. But, this time rkhunter reported 122 warnings. Please advise. casey@Shaman:~$ sudo rkhunter -c [ Rootkit Hunter version 1.4.0 ] Checking system commands... Performing 'strings' command checks Checking 'strings' command [ OK ] Performing 'shared libraries' checks Checking for preloading variables [ None found ] Checking for preloaded libraries [ None found ] Checking LD_LIBRARY_PATH variable [ Not found ] Performing file properties checks Checking for prerequisites [ Warning ] /usr/sbin/adduser [ Warning ] /usr/sbin/chroot [ Warning ] /usr/sbin/cron [ OK ] /usr/sbin/groupadd [ Warning ] /usr/sbin/groupdel [ Warning ] /usr/sbin/groupmod [ Warning ] /usr/sbin/grpck [ Warning ] /usr/sbin/nologin [ Warning ] /usr/sbin/pwck [ Warning ] /usr/sbin/rsyslogd [ Warning ] /usr/sbin/useradd [ Warning ] /usr/sbin/userdel [ Warning ] /usr/sbin/usermod [ Warning ] /usr/sbin/vipw [ Warning ] /usr/bin/awk [ Warning ] /usr/bin/basename [ Warning ] /usr/bin/chattr [ Warning ] /usr/bin/cut [ Warning ] /usr/bin/diff [ Warning ] /usr/bin/dirname [ Warning ] /usr/bin/dpkg [ Warning ] /usr/bin/dpkg-query [ Warning ] /usr/bin/du [ Warning ] /usr/bin/env [ Warning ] /usr/bin/file [ Warning ] /usr/bin/find [ Warning ] /usr/bin/GET [ Warning ] /usr/bin/groups [ Warning ] /usr/bin/head [ Warning ] /usr/bin/id [ Warning ] /usr/bin/killall [ OK ] /usr/bin/last [ Warning ] /usr/bin/lastlog [ Warning ] /usr/bin/ldd [ Warning ] /usr/bin/less [ OK ] /usr/bin/locate [ OK ] /usr/bin/logger [ Warning ] /usr/bin/lsattr [ Warning ] /usr/bin/lsof [ OK ] /usr/bin/mail [ OK ] /usr/bin/md5sum [ Warning ] /usr/bin/mlocate [ OK ] /usr/bin/newgrp [ Warning ] /usr/bin/passwd [ Warning ] /usr/bin/perl [ Warning ] /usr/bin/pgrep [ Warning ] /usr/bin/pkill [ Warning ] /usr/bin/pstree [ OK ] /usr/bin/rkhunter [ OK ] /usr/bin/rpm [ Warning ] /usr/bin/runcon [ Warning ] /usr/bin/sha1sum [ Warning ] /usr/bin/sha224sum [ Warning ] /usr/bin/sha256sum [ Warning ] /usr/bin/sha384sum [ Warning ] /usr/bin/sha512sum [ Warning ] /usr/bin/size [ Warning ] /usr/bin/sort [ Warning ] /usr/bin/stat [ Warning ] /usr/bin/strace [ Warning ] /usr/bin/strings [ Warning ] /usr/bin/sudo [ Warning ] /usr/bin/tail [ Warning ] /usr/bin/test [ Warning ] /usr/bin/top [ Warning ] /usr/bin/touch [ Warning ] /usr/bin/tr [ Warning ] /usr/bin/uniq [ Warning ] /usr/bin/users [ Warning ] /usr/bin/vmstat [ Warning ] /usr/bin/w [ Warning ] /usr/bin/watch [ Warning ] /usr/bin/wc [ Warning ] /usr/bin/wget [ Warning ] /usr/bin/whatis [ Warning ] /usr/bin/whereis [ Warning ] /usr/bin/which [ OK ] /usr/bin/who [ Warning ] /usr/bin/whoami [ Warning ] /usr/bin/unhide.rb [ Warning ] /usr/bin/mawk [ Warning ] /usr/bin/lwp-request [ Warning ] /usr/bin/heirloom-mailx [ OK ] /usr/bin/w.procps [ Warning ] /sbin/depmod [ Warning ] /sbin/fsck [ Warning ] /sbin/ifconfig [ Warning ] /sbin/ifdown [ Warning ] /sbin/ifup [ Warning ] /sbin/init [ Warning ] /sbin/insmod [ Warning ] /sbin/ip [ Warning ] /sbin/lsmod [ Warning ] /sbin/modinfo [ Warning ] /sbin/modprobe [ Warning ] /sbin/rmmod [ Warning ] /sbin/route [ Warning ] /sbin/runlevel [ Warning ] /sbin/sulogin [ Warning ] /sbin/sysctl [ Warning ] /bin/bash [ Warning ] /bin/cat [ Warning ] /bin/chmod [ Warning ] /bin/chown [ Warning ] /bin/cp [ Warning ] /bin/date [ Warning ] /bin/df [ Warning ] /bin/dmesg [ Warning ] /bin/echo [ Warning ] /bin/ed [ OK ] /bin/egrep [ Warning ] /bin/fgrep 
[ Warning ] /bin/fuser [ OK ] /bin/grep [ Warning ] /bin/ip [ Warning ] /bin/kill [ Warning ] /bin/less [ OK ] /bin/login [ Warning ] /bin/ls [ Warning ] /bin/lsmod [ Warning ] /bin/mktemp [ Warning ] /bin/more [ Warning ] /bin/mount [ Warning ] /bin/mv [ Warning ] /bin/netstat [ Warning ] /bin/ping [ Warning ] /bin/ps [ Warning ] /bin/pwd [ Warning ] /bin/readlink [ Warning ] /bin/sed [ Warning ] /bin/sh [ Warning ] /bin/su [ Warning ] /bin/touch [ Warning ] /bin/uname [ Warning ] /bin/which [ OK ] /bin/kmod [ Warning ] /bin/dash [ Warning ] [Press <ENTER> to continue] Checking for rootkits... Performing check of known rootkit files and directories 55808 Trojan - Variant A [ Not found ] ADM Worm [ Not found ] AjaKit Rootkit [ Not found ] Adore Rootkit [ Not found ] aPa Kit [ Not found ] Apache Worm [ Not found ] Ambient (ark) Rootkit [ Not found ] Balaur Rootkit [ Not found ] BeastKit Rootkit [ Not found ] beX2 Rootkit [ Not found ] BOBKit Rootkit [ Not found ] cb Rootkit [ Not found ] CiNIK Worm (Slapper.B variant) [ Not found ] Danny-Boy's Abuse Kit [ Not found ] Devil RootKit [ Not found ] Dica-Kit Rootkit [ Not found ] Dreams Rootkit [ Not found ] Duarawkz Rootkit [ Not found ] Enye LKM [ Not found ] Flea Linux Rootkit [ Not found ] Fu Rootkit [ Not found ] Fuck`it Rootkit [ Not found ] GasKit Rootkit [ Not found ] Heroin LKM [ Not found ] HjC Kit [ Not found ] ignoKit Rootkit [ Not found ] IntoXonia-NG Rootkit [ Not found ] Irix Rootkit [ Not found ] Jynx Rootkit [ Not found ] KBeast Rootkit [ Not found ] Kitko Rootkit [ Not found ] Knark Rootkit [ Not found ] ld-linuxv.so Rootkit [ Not found ] Li0n Worm [ Not found ] Lockit / LJK2 Rootkit [ Not found ] Mood-NT Rootkit [ Not found ] MRK Rootkit [ Not found ] Ni0 Rootkit [ Not found ] Ohhara Rootkit [ Not found ] Optic Kit (Tux) Worm [ Not found ] Oz Rootkit [ Not found ] Phalanx Rootkit [ Not found ] Phalanx2 Rootkit [ Not found ] Phalanx2 Rootkit (extended tests) [ Not found ] Portacelo Rootkit [ Not found ] R3dstorm Toolkit [ Not found ] RH-Sharpe's Rootkit [ Not found ] RSHA's Rootkit [ Not found ] Scalper Worm [ Not found ] Sebek LKM [ Not found ] Shutdown Rootkit [ Not found ] SHV4 Rootkit [ Not found ] SHV5 Rootkit [ Not found ] Sin Rootkit [ Not found ] Slapper Worm [ Not found ] Sneakin Rootkit [ Not found ] 'Spanish' Rootkit [ Not found ] Suckit Rootkit [ Not found ] Superkit Rootkit [ Not found ] TBD (Telnet BackDoor) [ Not found ] TeLeKiT Rootkit [ Not found ] T0rn Rootkit [ Not found ] trNkit Rootkit [ Not found ] Trojanit Kit [ Not found ] Tuxtendo Rootkit [ Not found ] URK Rootkit [ Not found ] Vampire Rootkit [ Not found ] VcKit Rootkit [ Not found ] Volc Rootkit [ Not found ] Xzibit Rootkit [ Not found ] zaRwT.KiT Rootkit [ Not found ] ZK Rootkit [ Not found ] [Press <ENTER> to continue] Performing additional rootkit checks Suckit Rookit additional checks [ OK ] Checking for possible rootkit files and directories [ None found ] Checking for possible rootkit strings [ None found ] Performing malware checks Checking running processes for suspicious files [ None found ] Checking for login backdoors [ None found ] Checking for suspicious directories [ None found ] Checking for sniffer log files [ None found ] Performing Linux specific checks Checking loaded kernel modules [ OK ] Checking kernel module names [ OK ] [Press <ENTER> to continue] Checking the network... 
Performing checks on the network ports Checking for backdoor ports [ None found ] Checking for hidden ports [ Skipped ] Performing checks on the network interfaces Checking for promiscuous interfaces [ None found ] Checking the local host... Performing system boot checks Checking for local host name [ Found ] Checking for system startup files [ Found ] Checking system startup files for malware [ None found ] Performing group and account checks Checking for passwd file [ Found ] Checking for root equivalent (UID 0) accounts [ None found ] Checking for passwordless accounts [ None found ] Checking for passwd file changes [ Warning ] Checking for group file changes [ Warning ] Checking root account shell history files [ None found ] Performing system configuration file checks Checking for SSH configuration file [ Not found ] Checking for running syslog daemon [ Found ] Checking for syslog configuration file [ Found ] Checking if syslog remote logging is allowed [ Not allowed ] Performing filesystem checks Checking /dev for suspicious file types [ Warning ] Checking for hidden files and directories [ Warning ] [Press <ENTER> to continue] System checks summary ===================== File properties checks... Required commands check failed Files checked: 137 Suspect files: 122 Rootkit checks... Rootkits checked : 291 Possible rootkits: 0 Applications checks... All checks skipped The system checks took: 5 minutes and 11 seconds All results have been written to the log file (/var/log/rkhunter.log)

    Read the article

  • Myths about Coding Craftsmanship part 2

    - by tom
    Myth 3: The source of all bad code is inept developers and stupid people

    When you review code, is this what you assume? Shame on you. You are probably making assumptions in your code if you are assuming so much already. Bad code can be the result of any number of causes, including but not limited to: using dated techniques (like boxing when generics are available), not following standards ("look how he does the spacing between arguments!" or "did he really just name that variable 'bln_Hello_Cats'?"), being redundant, using properties, methods, or objects in a novel way (like switching on button.Text between "Hello World" and "Hello World " // clever use of the space character… sigh), not following the SOLID principles, and hacking around assumptions made in earlier iterations / hacking in features that should be worked into the overall design. The first two issues, while annoying, are pretty easy to spot and can be fixed easily. If your coding team is made up of experienced professionals who are passionate about staying current, these shouldn't be happening. If you work with a variety of skills, backgrounds, and experience levels, then there will be some of this stuff going on. If you have an opportunity to mentor such a developer who is receptive to constructive criticism, don't be a jerk; help them, and the codebase will improve. A little patience can improve the codebase, your work environment, and even your perspective.

    The novelty and redundancy I have encountered have often been the use of creativity when language knowledge was perceived as unavailable or too time-consuming. When developers learn on the job you get a lot of this. Rather than going to MSDN, developers will use what they know. Depending on the constraints of their assignment, hacking together what they know may seem quite practical. This was not stupid, though I often wonder how much time is actually "saved" by hacking. These issues are often harder to untangle, if we ever do. They can also grow out of control as we write hack after hack to make it work and get back to some development that is satisfying.

    Hacking upon an existing hack is what I call "feeding the monster". Code monsters are anti-patterns and hacks gone wild. The reason code monsters continue to get bigger is that they keep growing in scope, touching more and more of the application. This is not the result of dumb developers. It is probably the result of avoiding design, of not taking the time to understand the problems, or of failing to anticipate or communicate the vision of the product. If our developers don't understand the purpose of a feature or product, how do we expect potential customers to do so?

    Forethought and organization are often what is missing from bad code. Developers who do not use the SOLID principles should be encouraged to learn them and be given guidance on how to apply them. The time "saved" by giving hackers room to hack will be paid back, and then some: not as technical debt, but as shoddy work that, if not replaced, will be struggled with again and again. Bad code is not (usually) the result of dumb developers; it is the result of trying to do too much without the proper resources, and of neglecting the right thing that needs doing in favor of the first thoughtless thing that comes into our heads. Object-oriented code is all about relationships between objects. Coders who believe their coworkers are all fools tend to write objects that are difficult to work with, not eager to explain themselves, and erratic and irrational in their behavior.

    If you constantly find you are surrounded by idiots, you may want to ask yourself if you are being unreasonable, if you are being closed-minded, or if you have chosen the right profession. Opening your mind up to the idea that you probably work with rational, well-intentioned people will probably make you a better coder, and it might even make you less grumpy. If you are surrounded by jerks who do not engage in the exchange of ideas, who do not care about their customers or the durability of the code you are building together, then I suggest you find a new place to work.

    Myth 4: Customers don't care about "beautiful" code

    Craftsmanship is customer-focused because it means that the job was done right: the product will withstand the abuse, modifications, and scrutiny of our customers. Users can appreciate a predictable timeline for a release, a product delivered on time and on budget, a feature set that does not interfere with the task(s) it is supporting, quick turnarounds on exception messages, self-healing issues, and fewer issues overall. These are all hindered by skimping on craftsmanship, whether we are writing data access code or reusable code.

    What do you think? Does bad code come primarily from low-IQ individuals? Do customers care about beautiful code?

    Read the article

  • [AS3/C#] Byte encryption ( DES-CBC zero pad )

    - by mark_dj
    Hi there, I'm currently writing my own AMF TcpSocketServer. Everything works well so far: I can send and receive objects, and I use some serialization/deserialization code. Now I've started working on the encryption code, and I am not so familiar with this stuff. I work with bytes; is DES-CBC a good way to encrypt them? Or are there other, more performant/secure ways to send my data? Note that performance is a must :).

    When I call ReadAmf3Object with the decrypter specified, I get an InvalidOperationException thrown by my ReadAmf3Object function when I read out the first byte: the Amf3TypeCode isn't one of the specified codes (they range from 0 to 16, I believe: Bool, String, Int, DateTime, etc.). I get type codes varying from 97 to 254. Does anyone know what's going wrong? I think it has something to do with the encryption part, since the deserializer works fine without the encryption. Am I using the right padding/mode/key? I used http://code.google.com/p/as3crypto/ as the AS3 encryption/decryption library, and I wrote an async TCP server with some abuse of the threadpool ;) Anyway, here is some code:

    C# crypter initialization code:

        System.Security.Cryptography.DESCryptoServiceProvider crypter = new DESCryptoServiceProvider();
        crypter.Padding = PaddingMode.Zeros;
        crypter.Mode = CipherMode.CBC;
        crypter.Key = Encoding.ASCII.GetBytes("TESTTEST");

    AS3:

        private static var _KEY:ByteArray = Hex.toArray(Hex.fromString("TESTTEST"));
        private static var _TYPE:String = "des-cbc";

        public static function encrypt(array:ByteArray):ByteArray
        {
            var pad:IPad = new NullPad;
            var mode:ICipher = Crypto.getCipher(_TYPE, _KEY, pad);
            pad.setBlockSize(mode.getBlockSize());
            mode.encrypt(array);
            return array;
        }

        public static function decrypt(array:ByteArray):ByteArray
        {
            var pad:IPad = new NullPad;
            var mode:ICipher = Crypto.getCipher(_TYPE, _KEY, pad);
            pad.setBlockSize(mode.getBlockSize());
            mode.decrypt(array);
            return array;
        }

    C# read/unserialize/decrypt code:

        public override object Read(int length)
        {
            object d;
            using (MemoryStream stream = new MemoryStream())
            {
                stream.Write(this._readBuffer, 0, length);
                stream.Position = 0;
                if (this.Decrypter != null)
                {
                    using (CryptoStream c = new CryptoStream(stream, this.Decrypter, CryptoStreamMode.Read))
                    using (AmfReader reader = new AmfReader(c))
                    {
                        d = reader.ReadAmf3Object();
                    }
                }
                else
                {
                    using (AmfReader reader = new AmfReader(stream))
                    {
                        d = reader.ReadAmf3Object();
                    }
                }
            }
            return d;
        }

    Read the article

  • Solving a difficult incomplete type error

    - by ChAoS
    I get an incomplete type error when trying to compile my code. I know that it is related to includes, but my project is large and it uses several templates so I can't find which type is actually incomplete. The error message doesn't help either: Compiling: ../../../addons/ofxTableGestures/src/Graphics/objects/CursorFeedback.cpp In file included from ../../../addons/ofxTableGestures/ext/boost/fusion/include/invoke_procedure.hpp:10, from ../../../addons/ofxTableGestures/src/oscGestures/tuioApp.hpp:46, from /home/thechaos/Projectes/of_preRelease_v0061_linux_FAT/addons/../apps/OF-TangibleFramework/ofxTableGestures/src/Graphics/objects/CursorFeedback.hpp:35, from /home/thechaos/Projectes/of_preRelease_v0061_linux_FAT/addons/../apps/OF-TangibleFramework/ofxTableGestures/src/Graphics/objects/CursorFeedback.cpp:31: ../../../addons/ofxTableGestures/ext/boost/fusion/functional/invocation/invoke_procedure.hpp: In function ‘void boost::fusion::invoke_procedure(Function, const Sequence&) [with Function = void (tuio::CanBasicFingers<Graphic>::*)(long int, float, float, float, float, float), Sequence = boost::fusion::joint_view<boost::fusion::joint_view<boost::fusion::iterator_range<boost::fusion::vector_iterator<const boost::fusion::vector6<long int, float, float, float, float, float>, 0>, boost::fusion::vector_iterator<boost::fusion::vector6<long int, float, float, float, float, float>, 0> >, const boost::fusion::single_view<tuio::CanBasicFingers<Graphic>*> >, boost::fusion::iterator_range<boost::fusion::vector_iterator<boost::fusion::vector6<long int, float, float, float, float, float>, 0>, boost::fusion::vector_iterator<const boost::fusion::vector6<long int, float, float, float, float, float>, 6> > >]’: ../../../addons/ofxTableGestures/src/oscGestures/tuioApp.hpp:122: instantiated from ‘void tuio::AlternateCallback<C, M, E>::run(tuio::TEvent*) [with C = tuio::CanBasicFingers<Graphic>, M = void (tuio::CanBasicFingers<Graphic>::*)(long int, float, float, float, float, float), E = tuio::TeventBasicFingersMoveFinger]’ /home/thechaos/Projectes/of_preRelease_v0061_linux_FAT/addons/../apps/OF-TangibleFramework/ofxTableGestures/src/Graphics/objects/CursorFeedback.cpp:64: instantiated from here ../../../addons/ofxTableGestures/ext/boost/fusion/functional/invocation/invoke_procedure.hpp:88: error: incomplete type ‘boost::fusion::detail::invoke_procedure_impl<void (tuio::CanBasicFingers<Graphic>::*)(long int, float, float, float, float, float), const boost::fusion::joint_view<boost::fusion::joint_view<boost::fusion::iterator_range<boost::fusion::vector_iterator<const boost::fusion::vector6<long int, float, float, float, float, float>, 0>, boost::fusion::vector_iterator<boost::fusion::vector6<long int, float, float, float, float, float>, 0> >, const boost::fusion::single_view<tuio::CanBasicFingers<Graphic>*> >, boost::fusion::iterator_range<boost::fusion::vector_iterator<boost::fusion::vector6<long int, float, float, float, float, float>, 0>, boost::fusion::vector_iterator<const boost::fusion::vector6<long int, float, float, float, float, float>, 6> > >, 7, true, false>’ used in nested name specifier If I copy the conflictive code to a same file I can compile it. So I know that the code itself is OK, the problem is the way I instantiate it. How can I trace the origin of this error? Is there any way to get the trace of the c++ compiler and preprocessor to get more informative messages?
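    One plausible reading of that error (an assumption on my part, not something stated in the post): the member function being invoked takes six arguments, so with the object prepended the Fusion sequence has seven elements, and Boost.Fusion's invoke_procedure machinery is preprocessor-generated only up to a default arity of 6. That would leave the 7-argument invoke_procedure_impl specialization undeclared, which is exactly an "incomplete type". If that is the cause, defining the corresponding limit macro before any Boost.Fusion header is included may make the error go away; the macro name below comes from boost/fusion/functional/invocation/limits.hpp and is worth double-checking against your Boost version.

        // Hedged sketch: raise the preprocessor-generated arity limit before any
        // Boost.Fusion header is pulled in (e.g. at the very top of tuioApp.hpp).
        // Verify the macro name against your Boost version before relying on it.
        #define BOOST_FUSION_INVOKE_PROCEDURE_MAX_ARITY 10
        #include <boost/fusion/include/invoke_procedure.hpp>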

    Read the article

  • Storing objects in the array

    - by Ockonal
    Hello, I want to store boost signals objects in a map (association: signal name -> signal object). The signals' signatures differ, so the mapped type should be boost::any:

        map<string, any> mSignalAssociation;

    The question is how to store the objects without defining a type for each new signal signature:

        typedef boost::signals2::signal<void (int KeyCode)> sigKeyPressed;
        mSignalAssociation.insert(make_pair("KeyPressed", sigKeyPressed()));

        // This is what I need: passing an object without a type definition
        mSignalAssociation["KeyPressed"] = (typename boost::signals2::signal<void (int KeyCode)>());

        // One more attempt that doesn't work, and I don't want to use this anyway
        sigKeyPressed mKeyPressed;
        mSignalAssociation["KeyPressed"] = mKeyPressed;

    All these attempts produce this error:

    /usr/include/boost/noncopyable.hpp: In copy constructor ‘boost::signals2::signal_base::signal_base(const boost::signals2::signal_base&)’: In file included from /usr/include/boost/signals2/detail/signals_common.hpp:17:0, /usr/include/boost/noncopyable.hpp:27:7: error: ‘boost::noncopyable_::noncopyable::noncopyable(const boost::noncopyable_::noncopyable&)’ is private /usr/include/boost/signals2/signal_base.hpp:22:5: error: within this context
    ----------
    /usr/include/boost/signals2/detail/signal_template.hpp: In copy constructor ‘boost::signals2::signal1<void, int&, boost::signals2::optional_last_value<void>, int, std::less<int>, boost::function<void(int)>, boost::function<void(const boost::signals2::connection&, int)>, boost::signals2::mutex>::signal1(const boost::signals2::signal1<void, int, boost::signals2::optional_last_value<void>, int, std::less<int>, boost::function<void(int)>, boost::function<void(const boost::signals2::connection&, int)>, boost::signals2::mutex>&)’: In file included from /usr/include/boost/preprocessor/iteration/detail/iter/forward1.hpp:52:0, /usr/include/boost/signals2/detail/signal_template.hpp:578:5: note: synthesized method ‘boost::signals2::signal_base::signal_base(const boost::signals2::signal_base&)’ first required here from /usr/include/boost/signals2.hpp:16,
    ---------
    /usr/include/boost/signals2/preprocessed_signal.hpp: In copy constructor ‘boost::signals2::signal<void(int)>::signal(const boost::signals2::signal<void(int)>&)’: In file included from /usr/include/boost/signals2/signal.hpp:36:0, /usr/include/boost/signals2/preprocessed_signal.hpp:42:5: note: synthesized method ‘boost::signals2::signal1<void, int, boost::signals2::optional_last_value<void>, int, std::less<int>, boost::function<void(int)>, boost::function<void(const boost::signals2::connection&, int)>, boost::signals2::mutex>::signal1(const boost::signals2::signal1<void, int, boost::signals2::optional_last_value<void>, int, std::less<int>, boost::function<void(int)>, boost::function<void(const boost::signals2::connection&, int)>, boost::signals2::mutex>&)’ first required here from /home/ockonal/Workspace/Projects/Pseudoform-2/include/Core/Systems.hpp:6,
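    One possible workaround (a sketch, not from the original question): boost::signals2::signal is noncopyable, and boost::any stores a copy of whatever it is given, which is exactly what the private-copy-constructor errors above are complaining about. Storing a boost::shared_ptr to the signal gives the map a copyable handle; the trade-off is that code connecting to or firing the signal still has to know its exact type for the any_cast. The map and signal names are the poster's; the rest is invented for illustration.

        #include <map>
        #include <string>
        #include <boost/any.hpp>
        #include <boost/shared_ptr.hpp>
        #include <boost/make_shared.hpp>
        #include <boost/signals2/signal.hpp>

        typedef boost::signals2::signal<void (int KeyCode)> sigKeyPressed;

        std::map<std::string, boost::any> mSignalAssociation;

        void onKeyPressed(int keyCode) { /* react to the key */ }

        void example()
        {
            // Store a shared_ptr (copyable) instead of the signal itself (noncopyable).
            boost::shared_ptr<sigKeyPressed> keyPressed = boost::make_shared<sigKeyPressed>();
            mSignalAssociation["KeyPressed"] = keyPressed;

            // To use it later, cast back to the exact shared_ptr type.
            boost::shared_ptr<sigKeyPressed> sig =
                boost::any_cast< boost::shared_ptr<sigKeyPressed> >(mSignalAssociation["KeyPressed"]);
            sig->connect(&onKeyPressed);
            (*sig)(42); // fire the signal with a key code
        }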

    Read the article

  • Best practices for sending automated daily emails from web service

    - by Tauren
    I am running a web service that currently sends confirmation emails out to new users via the gmail smtp servers. As I'm only getting a few new users each day, this hasn't been a problem. I've recently added new features to the webapp that will require a customized message to be sent out to each user every day. Think of this as similar to the regular messages LinkedIn sends out that give you a status report on the activity in your network. Every user's message will be different. With thousands of users, this means thousands of unique messages will be sent each day. Edit: I've since found that these types of email are called "transactional or relationship messages". Spamtacular has a good article on differentiating between marketing and transactional email. I don't think using gmail's smtp servers will cut it anymore, but I don't know that for sure. I don't know what gmail's maximum outgoing messages per account is (it might be 100/day), but they limit outgoing mail to 500 recipients per message. I'm not sending a single message to 500 recipients, but I'm going to be sending 1000's of customized messages with each recipient getting one per day. I'm interested to learn any best practices for doing this (especially for Java-based webapps). Here are some of my thoughts and concerns on it: Should I set up my own outgoing mail server? If I do this, it seems like I'll have all sorts of other issues to worry about, such as preventing mail server abuse, monitoring bounces, allowing ways to opt-out of emails, etc. Are there any tools or services to help with this? Maybe something like OpenEMM or a services like MailChimp? But those seem focused more toward email marketing campaigns. I don't think I should have the webapp itself handle sending emails as it currently is for new user signups. I'm thinking I should setup a separate messaging server that can access the same backend/datastore as the webapp. Thoughts on this? Should I consider setting up some sort of message queueing service to help with this, such as JMS, RabbitMQ, ActiveMQ, etc.? Do I need to provide users a way to opt-out? Do I need to flag these as bulk messages? I don't really consider these email marketing messages, but I'm unsure what is considered appropriate or proper netiquette. Any advice is appreciated. I'm also very interested in open source tools or web services that simplify things and could help me to ramp up as quickly as possible. Thanks!

    Read the article

  • Authorizing a computer to access a web application

    - by HackedByChinese
    I have a web application, and am tasked with adding secure sign-on to bolster security, akin to what Google has added to Google accounts.

    Use Case
    Essentially, when a user logs in, we want to detect whether the user has previously authorized this computer. If the computer has not been authorized, the user is sent a one-time password (via email, SMS, or phone call) that they must enter, at which point they may choose to remember this computer. In the web application, we will track authorized devices, allowing users to see when/where they logged in from that device last, and deauthorize any devices if they so choose. We require a solution that is very light touch (meaning, requiring no client-side software installation), and works with Safari, Chrome, Firefox, and IE 7+ (unfortunately). We will offer x509 security, which provides adequate security, but we still need a solution for customers that can't or won't use x509. My intention is to store authorization information using cookies (or, potentially, using local storage, degrading to flash cookies, and then normal cookies).

    At First Blush
    Track two separate values (local data or cookies): a hash representing a secure sign-on token, as well as a device token. Both values are driven (and recorded) by the web application, and dictated to the client. The SSO token is dependent on the device as well as a sequence number. This effectively allows devices to be deauthorized (all SSO tokens become invalid) and mitigates replay (not effectively, though, which is why I'm asking this question) through the use of a sequence number, and uses a nonce.

    Problem
    With this solution, it's possible for someone to just copy the SSO and device tokens and use them in another request. While the sequence number will help me detect such an abuse and thus deauthorize the device, the detection and response can only happen after the valid device and malicious request both attempt access, which is ample time for damage to be done. I feel like using HMAC would be better: track the device and the sequence, create a nonce and a timestamp, hash them with a private key, then send the hash plus those values as plain text. The server does the same (in addition to validating the device and sequence) and compares. That seems much easier, and much more reliable.... assuming we can securely negotiate, exchange, and store private keys.

    Question
    So then, how can I securely negotiate a private key for each authorized device, and then securely store that key? Is it at least acceptable if I settle for storing the private key using local storage or flash cookies and just say it's "good enough"? Or is there something I can do to my original draft to mitigate the vulnerability I describe?
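    A minimal sketch of the HMAC check described above (not from the original question; it assumes OpenSSL's HMAC API, invents the field names, and deliberately sidesteps the open problem of how the per-device secret is negotiated and stored):

        #include <openssl/hmac.h>
        #include <openssl/evp.h>
        #include <iomanip>
        #include <sstream>
        #include <string>

        // Hypothetical helper: compute an HMAC-SHA256 tag over the values the client
        // sends in plain text (device id, sequence number, nonce, timestamp), keyed
        // with the per-device secret the server issued when the device was authorized.
        std::string deviceTag(const std::string& secret,
                              const std::string& deviceId,
                              long sequence,
                              const std::string& nonce,
                              long timestamp)
        {
            std::ostringstream msg;
            msg << deviceId << '|' << sequence << '|' << nonce << '|' << timestamp;
            const std::string data = msg.str();

            unsigned char mac[EVP_MAX_MD_SIZE];
            unsigned int macLen = 0;
            HMAC(EVP_sha256(),
                 secret.data(), static_cast<int>(secret.size()),
                 reinterpret_cast<const unsigned char*>(data.data()), data.size(),
                 mac, &macLen);

            // Hex-encode so the tag can travel in a cookie or form field.
            std::ostringstream hex;
            for (unsigned int i = 0; i < macLen; ++i)
                hex << std::hex << std::setw(2) << std::setfill('0') << static_cast<int>(mac[i]);
            return hex.str();
        }

        // The server recomputes the tag from its own copy of the secret, compares it
        // (ideally with a constant-time comparison) against the tag the client presented,
        // and additionally checks that the sequence number is strictly increasing.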

    Read the article

  • Singleton: How should it be used

    - by Loki Astari
    Edit: From another question I provided an answer that has links to a lot of questions/answers about singletons: More info about singletons here: So I have read the thread Singletons: good design or a crutch? And the argument still rages. I see Singletons as a Design Pattern (good and bad). The problem with Singleton is not the Pattern but rather the users (sorry everybody). Everybody and their father thinks they can implement one correctly (and from the many interviews I have done, most people can't). Also because everybody thinks they can implement a correct Singleton they abuse the Pattern and use it in situations that are not appropriate (replacing global variables with Singletons!). So the main questions that need to be answered are: When should you use a Singleton? How do you implement a Singleton correctly? My hope for this article is that we can collect together in a single place (rather than having to google and search multiple sites) an authoritative source of when (and then how) to use a Singleton correctly. Also appropriate would be a list of Anti-Usages and common bad implementations explaining why they fail to work, and, for good implementations, their weaknesses. So get the ball rolling: I will hold my hand up and say this is what I use, but it probably has problems. I like Scott Meyers' handling of the subject in his book "Effective C++". Good Situations to use Singletons (not many): Logging frameworks Thread recycling pools /* * C++ Singleton * Limitation: Single Threaded Design * See: http://www.aristeia.com/Papers/DDJ_Jul_Aug_2004_revised.pdf * For problems associated with locking in multi threaded applications * * Limitation: * If you use this Singleton (A) within a destructor of another Singleton (B) * This Singleton (A) must be fully constructed before the constructor of (B) * is called. */ class MySingleton { private: // Private Constructor MySingleton(); // Stop the compiler from generating methods to copy the object MySingleton(MySingleton const& copy); // Not Implemented MySingleton& operator=(MySingleton const& copy); // Not Implemented public: static MySingleton& getInstance() { // The only instance // Guaranteed to be lazy initialized // Guaranteed that it will be destroyed correctly static MySingleton instance; return instance; } }; OK. Let's get some criticism and other implementations together. :-)
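
    One small update worth adding to the criticism pile: the "Single Threaded Design" limitation in the code above mostly goes away with C++11, where initialization of a function-local static is guaranteed to happen exactly once even under concurrent calls. A minimal sketch, assuming a C++11 (or later) compiler:

        // Meyers-style singleton; since C++11 the local static's initialization
        // is thread-safe, so getInstance() needs no explicit locking.
        class MySingleton {
        public:
            static MySingleton& getInstance() {
                static MySingleton instance;   // constructed once, on first use
                return instance;
            }
            // Copying is disabled explicitly instead of via undefined privates.
            MySingleton(MySingleton const&) = delete;
            MySingleton& operator=(MySingleton const&) = delete;
        private:
            MySingleton() = default;           // only getInstance() constructs it
        };

    The destruction-order caveat from the original comment block (singleton A used from singleton B's destructor) still applies.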

    Read the article

  • Automatically generate table of function pointers in C.

    - by jeremytrimble
    I'm looking for a way to automatically (as part of the compilation/build process) generate a "table" of function pointers in C. Specifically, I want to generate an array of structures something like: typedef struct { void (*p_func)(void); char * funcName; } funcRecord; /* Automatically generate the lines below: */ extern void func1(void); extern void func2(void); /* ... */ funcRecord funcTable[] = { { .p_func = &func1, .funcName = "func1" }, { .p_func = &func2, .funcName = "func2" } /* ... */ }; /* End automatically-generated code. */ ...where func1 and func2 are defined in other source files. So, given a set of source files, each of which contains a single function that takes no arguments and returns void, how would one automatically (as part of the build process) generate an array like the one above that contains each of the functions from the files? I'd like to be able to add new files and have them automatically inserted into the table when I re-compile. I realize that this probably isn't achievable using the C language or preprocessor alone, so consider any common *nix-style tools fair game (e.g. make, perl, shell scripts (if you have to)). But Why? You're probably wondering why anyone would want to do this. I'm creating a small test framework for a library of common mathematical routines. Under this framework, there will be many small "test cases," each of which has only a few lines of code that will exercise each math function. I'd like each test case to live in its own source file as a short function. All of the test cases will get built into a single executable, and the test case(s) to be run can be specified on the command line when invoking the executable. The main() function will search through the table and, if it finds a match, jump to the test case function. Automating the process of building up the "catalog" of test cases ensures that test cases don't get left out (for instance, because someone forgets to add it to the table) and makes it very simple for maintainers to add new test cases in the future (just create a new source file in the correct directory, for instance). Hopefully someone out there has done something like this before. Thanks, StackOverflow community!
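
    For what it's worth, one low-tech way to get most of this without leaving the preprocessor is the X-macro idiom: keep the list of test-case names in one place and expand it twice, once into extern declarations and once into table entries. The list itself still has to be touched when a file is added (or be regenerated by a small script that globs the test directory), so this is only a sketch and the macro names are made up:

        /* X-macro sketch: the single list of test-case names below would, in
         * practice, live in its own header (possibly generated by a script
         * that scans the test-case directory). Compiles as C or C++. */
        #define TEST_CASES(X) \
            X(func1)          \
            X(func2)

        typedef struct {
            void (*p_func)(void);
            const char *funcName;
        } funcRecord;

        /* Expansion 1: extern declarations for every listed test case. */
        #define DECLARE_TEST(name) extern void name(void);
        TEST_CASES(DECLARE_TEST)
        #undef DECLARE_TEST

        /* Expansion 2: one table entry per test case; # stringizes the name. */
        #define TABLE_ENTRY(name) { &name, #name },
        funcRecord funcTable[] = {
            TEST_CASES(TABLE_ENTRY)
        };
        #undef TABLE_ENTRY

        enum { funcTableCount = sizeof(funcTable) / sizeof(funcTable[0]) };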

    Read the article

  • Creating meaningful routes in wizard style ASP.NET MVC form

    - by R0MANARMY
    I apologize in advance for a long question; figured it's better to have a bit more information than not enough. I'm working on an application with a fairly complex form (~100 fields on it). In order to make the UI a little more presentable the fields are organized into regions and split across multiple (~10) tabs (not unlike this, but each tab does a submit/redirect to the next tab). This large input form can also be in one of 3 views (read only, editable, print friendly). The form represents a large domain object (let's call it Foo). I have a controller for said domain object (FooController). It makes sense to me to have the controller be responsible for all the CRUD related operations. Here are the problems I'm having trouble figuring out. Goals: I'd like to keep to conventions so that Foo/Create creates a new record Foo/Delete deletes a record Foo/Edit/{foo_id} takes you to the first tab of the form ...etc I'd like to be able to not repeat the data access code such that I can have Foo/Edit/{foo_id}/tab1 Foo/View/{foo_id}/tab1 Foo/Print/{foo_id}/tab1 ...etc use the same data access code to get the data and just specify which view to use to render it. My current implementation has a massive FooController with Create, Delete, Tab1, Tab2, etc actions. Tab actions are split out into separate files for organization (using partial classes, which may or may not be abuse of partial classes). The problem I'm running into is how to organize my controller(s) and routes to make that happen. I have the default route {controller}/{action}/{id}, which handles goal 1 properly but doesn't quite play nice with goal 2. I tried to address goal 2 by defining extra routes like so: routes.MapRoute( "FooEdit", "Foo/Edit/{id}/{action}", new { controller = "Foo", action = "Tab1", mode = "Edit", id = (string)null } ); routes.MapRoute( "FooView", "Foo/View/{id}/{action}", new { controller = "Foo", action = "Tab1", mode = "View", id = (string)null } ); routes.MapRoute( "FooPrint", "Foo/Print/{id}/{action}", new { controller = "Foo", action = "Tab1", mode = "Print", id = (string)null } ); However defining these extra routes causes the Url.Action to generate routes like Foo/Edit/Create instead of Foo/Create. That leads me to believe I designed something very very wrong, but this is my first attempt at an asp.net mvc project and I don't know any better. Any advice with this particular situation would be awesome, but feedback on design in similar projects is welcome.

    Read the article

  • Alternatives to requiring users to register for an account?

    - by jamieb
    I'm working on a side project to build a new web app idea of mine. For the sake of discussion, let's say this app displays a random photograph of a famous work of art. On a scale of 1 to 5, users are asked to rate how well they like each piece of art, and then are shown the next photo. Eventually, the app is able to get a sense of the person's style and is able to recommend artwork that he/she may find pleasing. The whole concept is similar to Netflix. I understand how to do all the preference matching logic (although not as sophisticated as Netflix). But I'd like to find a way to do this without requiring that users create an account first. This is a novelty website that a typical user might use only a handful of times. Requiring registration is overkill and will likely drastically reduce its utility. I'd like to allow people to begin rating artwork within five seconds of their initial pageview, yet maintain the integrity of the voting (since recommendations are predicated on how other people have rated the various pieces of artwork). Can it be done? Some ideas: OpenID. The perfect solution except for the fact that it's not widely used and my target audience isn't the most technically adept demographic. Text message. User inputs phone number and is texted a four-digit code to key into the web app. Quick, easy, and a great way to limit abuse. However, privacy concerns abound... people are probably even less likely to give me their phone number than their email address. Facebook login. I personally don't have a Facebook account due to privacy concerns. And I'd really hate to support such a proprietary platform. Hash code/Bookmark. The visitor's initial pageview generates a 5 or 6 digit alphanumeric code that is embedded in each subsequent URL. They can bookmark any page to save their state. Good: Very simple system that doesn't require any user action. Bad: Very easy to stuff the ballot box, and it might be difficult to account for users sharing the link containing their ID code via email or social networking sites.
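
    On the hash-code/bookmark idea: the site's stack isn't mentioned, so purely as an illustration (the function name and alphabet are arbitrary), here is a C++ sketch of generating a short code that is hard to guess; it does nothing about ballot stuffing, which would still need rate limiting or similar on the server side.

        // Sketch: a 6-character code drawn from a 62-character alphabet gives
        // 62^6 (~5.7e10) possibilities, enough to make blind guessing impractical.
        // std::random_device quality is implementation-defined; a real deployment
        // would want an explicitly cryptographic RNG.
        #include <cstddef>
        #include <random>
        #include <string>

        std::string makeVisitorCode(std::size_t length = 6) {
            static const std::string alphabet =
                "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
            std::random_device rd;
            std::uniform_int_distribution<std::size_t> pick(0, alphabet.size() - 1);
            std::string code;
            for (std::size_t i = 0; i < length; ++i)
                code += alphabet[pick(rd)];   // one independently drawn character
            return code;
        }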

    Read the article

  • C++ type-checking at compile-time

    - by Masterofpsi
    Hi, all. I'm pretty new to C++, and I'm writing a small library (mostly for my own projects) in C++. In the process of designing a type hierarchy, I've run into the problem of defining the assignment operator. I've taken the basic approach that was eventually reached in this article, which is that for every class MyClass in a hierarchy derived from a class Base you define two assignment operators like so: class MyClass: public Base { public: MyClass& operator =(MyClass const& rhs); virtual MyClass& operator =(Base const& rhs); }; // automatically gets defined, so we make it call the virtual function below MyClass& MyClass::operator =(MyClass const& rhs) { return (*this = static_cast<Base const&>(rhs)); } MyClass& MyClass::operator =(Base const& rhs) { assert(typeid(rhs) == typeid(*this)); // assigning to different types is a logical error MyClass const& casted_rhs = dynamic_cast<MyClass const&>(rhs); try { // allocate new variables Base::operator =(rhs); } catch(...) { // delete the allocated variables throw; } // assign to member variables return *this; } The part I'm concerned with is the assertion for type equality. Since I'm writing a library, where assertions will presumably be compiled out of the final result, this has led me to go with a scheme that looks more like this: class MyClass: public Base { public: MyClass& operator =(MyClass const& rhs); // etc virtual inline MyClass& operator =(Base const& rhs) { assert(typeid(rhs) == typeid(*this)); return this->set(static_cast<Base const&>(rhs)); } private: MyClass& set(Base const& rhs); // same basic thing }; But I've been wondering if I could check the types at compile-time. I looked into Boost.TypeTraits, and I came close by doing BOOST_MPL_ASSERT((boost::is_same<BOOST_TYPEOF(*this), BOOST_TYPEOF(rhs)>)); but since rhs is declared as a reference to the parent class and not the derived class, it choked. Now that I think about it, my reasoning seems silly -- I was hoping that since the function was inline, it would be able to check the actual parameters themselves, but of course the preprocessor always gets run before the compiler. But I was wondering if anyone knew of any other way I could enforce this kind of check at compile-time.
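
    As the question itself suspects, a compile-time check can only ever see the static type of rhs; the dynamic type doesn't exist until run time, which is why the typeid assert (or dynamic_cast) stays in the picture. If the check really must move to compile time, the usual route is to change the design so the type is static, for example with CRTP. The sketch below assumes C++11 (static_assert and <type_traits> instead of the Boost macros) and is an alternative design, not a drop-in fix for the virtual operator=:

        #include <type_traits>

        // CRTP sketch: Base knows the derived type statically, so assigning
        // across two different derived types simply fails to compile.
        template <typename Derived>
        class Base {
        public:
            Derived& assign(Base const& rhs) {
                static_assert(std::is_base_of<Base, Derived>::value,
                              "Derived must inherit from Base<Derived>");
                // ...copy the Base part from rhs here...
                (void)rhs;
                return static_cast<Derived&>(*this);
            }
        };

        class MyClass : public Base<MyClass> {
        public:
            MyClass& operator=(MyClass const& rhs) {
                Base<MyClass>::assign(rhs);   // only another MyClass is accepted
                // ...copy MyClass members here...
                return *this;
            }
        };

        // MyClass a, b;  a = b;    // fine
        // OtherClass c;  a = c;    // would not compile: no matching operator=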

    Read the article

  • How do we, as a community, help encourage programming in public schools? (Or state Schools for the U

    - by NoMoreZealots
    PRIMARY MOTIVATION My office gets involved with the "First Robotics" competitions, and one thing that lingers year to year is that the students typically have no preparation for doing even simple programming as part of the public school system. While the science classes provide some basic grasp of mechanical and electrical concepts, by and large computer programming gets no coverage in the curriculum. (This may be different in other areas of the country/world.) What makes it worse is that there is only a short period of time in which to prepare the students and help them design the robot. Talking to some professors from local colleges, it's a problem because you can't assume even the most basic understanding from freshman CS majors. Languages like Python, Lua and BASIC are simple enough for at least high-school-level students, if not younger. SCOPE So how do you get public schools to support programming, at least to the level of the "Try it in BASIC" examples that used to be at the end of a chapter in my Algebra book? At least enough to prepare them for events such as the FIRST Robotics competitions, whose primary objectives are to teach problem solving and teamwork, and possibly to foster an interest in Math, Science and Engineering in general. (Not force-feed it to them, as some people here seem to be implying.) Edit: Why teach kids: (Since 2000 CS enrollment in US colleges has decreased by 70% while college enrollment has increased; this is a PROBLEM.) Saying there is no value in teaching someone programming in Jr./High school because they might think "they know programming" is like saying there's no value in teaching high school science and physics, because they might decide they "know physics," leading to abuse like: "I passed a high school physics class, I'm going to develop a Unified Quantum Gravitational Theory." Better-prepared students are better students. Instead it would allow college programs to raise the bar on the entry-level courses, allowing students to be weeded out based on their understanding of more advanced material. Plus, people who did poorly in that topic in high school aren't as likely to say "I think there's money in computers, so I'll do computer science." Plus, if people take it in high school and decide THEN that it's not for them, it's better than them wasting their money to PAY a college to figure that out. The result is that people who take the degree are more likely to succeed and be there for the RIGHT reasons. (i.e. It's what they REALLY want to do. And that's REALLY the key to being good at anything.) Programming is like anything else: the more practice and genuine interest you have, the better you get. If you start them later, they get less practice. The earlier you give them the opportunity to start, the more practice they will get. All other things equal, the more practice, the better the programmer.

    Read the article

  • ASP.Net 4.0 - Response required in SiteMap building?

    - by Nick Craver
    I'm running into an issue upgrading a project to .Net 4.0...and having trouble finding any reason for the issue (or at least, the change causing it). Given the freshness of 4.0, there aren't a lot of blogs out there covering issues yet, so I'm hoping someone here has an idea. Preface: this is a Web Forms application, coming from 3.5 SP1 to 4.0. In the Application_Start event we're iterating through the SiteMap and constructing routes based off data there (prettifying URLs mostly, with some utility added); that part isn't failing though...or at least it isn't getting that far. It seems that calling the SiteMap.RootNode (inside application_start) causes 4.0 to eat it, since the XmlSiteMapProvider.GetNodeFromXmlNode method has been changed; looking in Reflector you can see it's hitting HttpResponse.ApplyAppPathModifier here: str2 = HttpContext.Current.Response.ApplyAppPathModifier(str2); HttpResponse wasn't used at all in this method in the 2.0 CLR, so what we had worked fine; in 4.0, though, that method is called as a result of this stack: [HttpException (0x80004005): Response is not available in this context.] System.Web.XmlSiteMapProvider.GetNodeFromXmlNode(XmlNode xmlNode, Queue queue) System.Web.XmlSiteMapProvider.ConvertFromXmlNode(Queue queue) System.Web.XmlSiteMapProvider.BuildSiteMap() System.Web.XmlSiteMapProvider.get_RootNode() System.Web.SiteMap.get_RootNode() Since Response isn't available here in 4.0, we get an error. To reproduce this, you can narrow the test case down to this in global: protected void Application_Start(object sender, EventArgs e) { var s = SiteMap.RootNode; //Kaboom! //or just var r = Context.Response; //or var r = HttpContext.Current.Response; //all result in the same "not available" error } Question: Am I missing something obvious here? Or, is there another event added in 4.0 that's recommended for anything related to SiteMap on startup? For anyone curious/willing to help, I've created a very minimal project (a default VS 2010 ASP.Net 4.0 site, all the bells & whistles removed and only a blank sitemap and the Application_Start code added). It's a small 10kb zip available here: http://www.ncraver.com/Test/SiteMapTest.zip Update: Not a great solution, but the current work-around is to do the work in Application_BeginRequest, like this: private static bool routesRegistered = false; protected void Application_BeginRequest(object sender, EventArgs e) { if (!routesRegistered) { Application.Lock(); if (!routesRegistered) RouteManager.RegisterRoutes(RouteTable.Routes); routesRegistered = true; Application.UnLock(); } } I don't particularly like this; it feels like an abuse of the event to bypass the issue. Does anyone have at least a better work-around, since the .Net 4 behavior with SiteMap is not likely to change?

    Read the article

  • Why do some questions get closed for no reason? [closed]

    - by IVlad
    Recently there was a question asking about generating all subsets of a set using a stack and a queue, which was closed (and now deleted it seems) as not a real question for no good reason, since it didn't fit into any of these conditions: It's difficult to tell what is being asked here. No, it was clear what was being asked. This question is ambiguous, vague, incomplete, or rhetorical and cannot be reasonably answered in its current form. Not ambiguous, not vague, not incomplete, definitely not rhetorical and could easily be answered if one knew the solution. Now, the exact same thing has happened with this question: http://stackoverflow.com/questions/2791982/a-shortest-path-problem-with-superheroes-and-intergalactic-journeys/2793746#2793746 I am interested in hearing a logical argument for why that question is either ambiguous, vague, incomplete, rhetorical or cannot reasonably be answered in its current form. It seems that (the same bunch of) people like to close questions that they think are homework questions, especially when they think people want to be served the solution on a platter, which is also not the case: Any suggestions or ideas of how this problem might be solved would be most welcomed. Most of the time the people asking these questions are very reasonable and appreciate even the most vague idea, yet their question is closed. Let's go further and assume that it IS a homework problem. So what? When I registered here I didn't see any rule that said not to post homework problems, nor do I see such a rule now. What is wrong with posting homework problems that makes people hunt them down with a passion to close them without even reading the entire question body? This site is full of questions asked by people who get paid to know the things they are asking, yet their questions are considered fine. How is solving someone's homework problem worse? In some places (like where I live), computer science is a mandatory high school subject, and not everyone is interested in it. How is helping at least those people worse than doing someone's JOB? Not answering homework questions is fine and it's everyone's choice, but I consider closing them to be an act of power abuse, selfishness, and an insult to the fellow community members who are also interested in a solution or want feedback on their proposed solution. So my questions are: - Why do questions like the above get closed for reasons that do not apply? Why do you close them? Why don't you? - Why doesn't a vote to reopen a question reopen it automatically? Needing 5 votes for a reopen takes too long, and it's not fair because one reopen vote basically cancels out a close vote, making it 4 close votes (or 5 to 1, which is the same as only 4 people wanting to close the question), which isn't enough to close the question. I think a question should only be closed when CloseVotes - ReopenVotes >= 5. I'm hoping this will stay up, but I realize it probably won't. In either case, I think this is worth saying and discussing, since it IS community-related.

    Read the article

  • What is a good platform for building a game framework targeting both web and native languages?

    - by fuzzyTew
    I would like to develop (or find, if one is already in development) a framework with support for accelerated graphics and sound built on a system flexible enough to compile to the following: native ppc/x86/x86_64/arm binaries or a language which compiles to them javascript actionscript bytecode or a language which compiles to it (actionscript 3, haxe) optionally java I imagine, for example, creating an API where I can open windows and make OpenGL-like calls and the framework maps this in a relatively efficient manner to either WebGL with a canvas object, 3d graphics in Flash, OpenGL ES 2 with EGL, or desktop OpenGL in an X11, Windows, or Cocoa window. I have so far looked into these avenues: Building the game library in haXe Pros: Targets exist for php, javascript, actionscript bytecode, c++ High level, object oriented language Cons: No support for finally{} blocks or destructors, making resource cleanup difficult C++ target does not allow room for producing highly optimized libraries -- the foreign function interface requires all primitive types be boxed in a wrapper object, as if writing bindings for a scripting language; these feel unideal for real-time graphics and audio, especially exporting low-level functions. Doesn't seem quite yet mature Using the C preprocessor to create a translator, writing programs entirely with macros Pros: CPP is widespread and simple to use Cons: This is an arduous task and probably the wrong tool for the job CPP implementations differ widely in support for features (e.g. xcode cpp has no variadic macros despite claiming C99 compliance) There is little-to-no room for optimization in this route Using llvm's support for multiple backends to target c/c++ to web languages Pros: Can code in c/c++ LLVM is a very mature highly optimizing compiler performing e.g. global inlining Targets exist for actionscript (alchemy) and javascript (emscripten) Cons: Actionscript target is closed source, unmaintained, and buggy. Javascript targets do not use features of HTML5 for appropriate optimization (e.g. linear memory with typed arrays) and are immature An LLVM target must convert from low-level bytecode, so high-level constructs are lost and bloated unreadable code is created from translating individual instructions, which may be more difficult for an unprepared JIT to optimize. "jump" instructions cause problems for languages with no "goto" statements. Using libclang to write a translator from C/C++ to web languages Pros: A beautiful parsing library providing easy access to the code structure Can code in C/C++ Has sponsored developer effort from Apple Cons: Incomplete; current feature set targets IDEs. Basic operators are unexposed and must be manually parsed from the returned AST element to be identified. Translating code prior to compilation may forgo optimizations assumed in c/c++ such as inlining. Creating new code generators for clang to translate into web languages Pros: Can code in C/C++ as libclang Cons: There is no API; code structure is unstable A much larger job than using libclang; the innards of clang are complex Building the game library in Common Lisp Pros: Flexible, ancient, well-developed language Extensive introspection should ease writing translators Translators exist for at least javascript Cons: Unfamiliar language No standardized library functions, widely varying implementations Which of these avenues should I pursue? Do you know of any others, or any systems that might be useful? Does a general project like this exist somewhere already? 
    Thank you for any input.

    Read the article

  • Ruby: Parse, replace, and evaluate a string formula

    - by Swartz
    I'm creating a simple Ruby on Rails survey application for a friend's psychological survey project. So we have surveys, each survey has a bunch of questions, and each question has a set of options participants can choose from. Nothing exciting. One of the interesting aspects is that each answer option has a score value associated with it. And so for each survey a total score needs to be calculated based on these values. Now my idea, instead of hard-coding the calculations, is to allow the user to add a formula by which the total survey score will be calculated. Example formulas: "Q1 + Q2 + Q3" "(Q1 + Q2 + Q3) / 3" "(10 - Q1) + Q2 + (Q3 * 2)" So just basic math (with some extra parentheses for clarity). The idea is to keep the formulas very simple such that anyone with basic math can enter them without resorting to some fancy syntax. My idea is to take any given formula and replace placeholders such as Q1, Q2, etc. with the score values based on what the participant chooses. And then eval() the newly formed string. Something like this: f = "(Q1 + Q2 + Q3) / 2" # some crazy formula for this survey values = {:Q1 => 1, :Q2 => 2, :Q3 => 2} # values for substitution result = f.gsub(/(Q\d+)/) {|m| values[$1.to_sym] } # string to be eval()-ed eval(result) So my questions are: Is there a better way to do this? I'm open to any suggestions. How to handle formulas where not all placeholders were successfully replaced (e.g. one question wasn't answered)? Ex: {:Q3 => 2} wasn't in the values hash? My idea is to rescue eval()... any thoughts? How do I get the proper result? It should be 2.5, but due to integer arithmetic, it will truncate to 2. I can't expect the people providing the formula to understand this nuance (e.g. to write / 2.0). I don't expect this, but how can I best protect eval() from abuse (e.g. bad formula, manipulated values coming in)? Thank you!

    Read the article

  • Is this a legitimate implementation of a 'remember me' function for my web app?

    - by user246114
    Hi, I'm trying to add a "remember me" feature to my web app to let a user stay logged in between browser restarts. I think I got the bulk of it. I'm using google app engine for the backend which lets me use java servlets. Here is some pseudo-code to demo: public class MyServlet { public void handleRequest() { if (getThreadLocalRequest().getSession().getAttribute("user") != null) { // User already has session running for them. } else { // No session, but check if they chose 'remember me' during // their initial login, if so we can have them 'auto log in' // now. Cookie[] cookies = getThreadLocalRequest().getCookies(); if (cookies.find("rememberMePlz").exists()) { // The value of this cookie is the cookie id, which is a // unique string that is in no way based upon the user's // name/email/id, and is hard to randomly generate. String cookieid = cookies.find("rememberMePlz").value(); // Get the user object associated with this cookie id from // the data store, would probably be a two-step process like: // // select * from cookies where cookieid = 'cookieid'; // select * from users where userid = 'userid fetched from above select'; User user = DataStore.getUserByCookieId(cookieid); if (user != null) { // Start session for them. getThreadLocalRequest().getSession() .setAttribute("user", user); } else { // Either couldn't find a matching cookie with the // supplied id, or maybe we expired the cookie on // our side or blocked it. } } } } } // On first login, if user wanted us to remember them, we'd generate // an instance of this object for them in the data store. We send the // cookieid value down to the client and they persist it on their side // in the "rememberMePlz" cookie. public class CookieLong { private String mCookieId; private String mUserId; private long mExpirationDate; } Alright, this all makes sense. The only frightening thing is what happens if someone finds out the value of the cookie? A malicious individual could set that cookie in their browser and access my site, and essentially be logged in as the user associated with it! On the same note, I guess this is why the cookie ids must be difficult to randomly generate, because a malicious user doesn't have to steal someone's cookie - they could just randomly assign cookie values and start logging in as whichever user happens to be associated with that cookie, if any, right? Scary stuff, I feel like I should at least include the username in the client cookie such that when it presents itself to the server, I won't auto-login unless the username+cookieid match in the DataStore. Any comments would be great, I'm new to this and trying to figure out a best practice. I'm not writing a site which contains any sensitive personal information, but I'd like to minimize any potential for abuse all the same, Thanks

    Read the article

  • functional, bind1st and mem_fun

    - by Neil G
    Why won't this compile? #include <functional> #include <boost/function.hpp> class A { A() { typedef boost::function<void ()> FunctionCall; FunctionCall f = std::bind1st(std::mem_fun(&A::process), this); } void process() {} }; Errors: In file included from /opt/local/include/gcc44/c++/bits/stl_function.h:712, from /opt/local/include/gcc44/c++/functional:50, from a.cc:1: /opt/local/include/gcc44/c++/backward/binders.h: In instantiation of 'std::binder1st<std::mem_fun_t<void, A> >': a.cc:7: instantiated from here /opt/local/include/gcc44/c++/backward/binders.h:100: error: no type named 'second_argument_type' in 'class std::mem_fun_t<void, A>' /opt/local/include/gcc44/c++/backward/binders.h:103: error: no type named 'first_argument_type' in 'class std::mem_fun_t<void, A>' /opt/local/include/gcc44/c++/backward/binders.h:106: error: no type named 'first_argument_type' in 'class std::mem_fun_t<void, A>' /opt/local/include/gcc44/c++/backward/binders.h:111: error: no type named 'second_argument_type' in 'class std::mem_fun_t<void, A>' /opt/local/include/gcc44/c++/backward/binders.h:117: error: no type named 'second_argument_type' in 'class std::mem_fun_t<void, A>' /opt/local/include/gcc44/c++/backward/binders.h: In function 'std::binder1st<_Operation> std::bind1st(const _Operation&, const _Tp&) [with _Operation = std::mem_fun_t<void, A>, _Tp = A*]': a.cc:7: instantiated from here /opt/local/include/gcc44/c++/backward/binders.h:126: error: no type named 'first_argument_type' in 'class std::mem_fun_t<void, A>' In file included from /opt/local/include/boost/function/detail/maybe_include.hpp:13, from /opt/local/include/boost/function/detail/function_iterate.hpp:14, from /opt/local/include/boost/preprocessor/iteration/detail/iter/forward1.hpp:47, from /opt/local/include/boost/function.hpp:64, from a.cc:2: /opt/local/include/boost/function/function_template.hpp: In static member function 'static void boost::detail::function::void_function_obj_invoker0<FunctionObj, R>::invoke(boost::detail::function::function_buffer&) [with FunctionObj = std::binder1st<std::mem_fun_t<void, A> >, R = void]': /opt/local/include/boost/function/function_template.hpp:913: instantiated from 'void boost::function0<R>::assign_to(Functor) [with Functor = std::binder1st<std::mem_fun_t<void, A> >, R = void]' /opt/local/include/boost/function/function_template.hpp:722: instantiated from 'boost::function0<R>::function0(Functor, typename boost::enable_if_c<boost::type_traits::ice_not::value, int>::type) [with Functor = std::binder1st<std::mem_fun_t<void, A> >, R = void]' /opt/local/include/boost/function/function_template.hpp:1064: instantiated from 'boost::function<R()>::function(Functor, typename boost::enable_if_c<boost::type_traits::ice_not::value, int>::type) [with Functor = std::binder1st<std::mem_fun_t<void, A> >, R = void]' a.cc:7: instantiated from here /opt/local/include/boost/function/function_template.hpp:153: error: no match for call to '(std::binder1st<std::mem_fun_t<void, A> >) ()'
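
    The errors boil down to this: std::bind1st needs an adaptable binary functor (one that defines first_argument_type and second_argument_type), but std::mem_fun(&A::process) produces a unary std::mem_fun_t<void, A> taking just the A*, and binding away the object pointer of a unary functor would leave a nullary callable, which the old binders cannot express. Since boost::function is already in use, boost::bind (or, on newer compilers, std::bind or a lambda) is the conventional way out; a minimal sketch, with the members made public so it actually builds:

        #include <boost/bind.hpp>
        #include <boost/function.hpp>

        class A {
        public:
            A() {
                typedef boost::function<void ()> FunctionCall;
                // bind the member function and the object pointer in one step
                FunctionCall f = boost::bind(&A::process, this);
                f();                       // invokes this->process()
            }
            void process() {}
        };

        int main() {
            A a;                           // constructing A exercises the bound call
        }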

    Read the article

< Previous Page | 16 17 18 19 20 21 22 23  | Next Page >