Search Results

Search found 2214 results on 89 pages for 'science experiments'.


  • Facial Recognition for Retail

    - by David Dorf
    My son decided to do his science project on how the brain recognizes faces. Faces are so complicated and important that the brain has a dedicated area for just that purpose. During our research, we came across some emerging uses for facial recognition in the retail industry.

    If you believe the movies, recognizing faces as they walk by a camera is easy for computers, but that's not the reality. Huge investments are being made by the U.S. government in this area, with a focus on airport security. Now, companies like Eye See are leveraging that research for marketing purposes. They do things like track eyes while viewing newspaper ads to see which ads get more "eye time." This can help marketers make better placement and color decisions.

    But what caught my eye (that was too easy) was their new mannequins that watch shoppers. These mannequins, being tested at European retailers like Benetton, watch shoppers who walk by and identify their gender, race, and age. This helps the retailer better understand the types of customers being attracted to the outfit on the mannequin. Of course, to be most accurate, the software has pictures of the employees so they can be filtered out. Since the mannequins are closer to the shoppers and at eye level, they are more accurate than traditional in-ceiling LP cameras.

    Marketing agency RedPepper is offering retailers the ability to recognize loyalty shoppers at their doors using Facedeal. For customers who have opted into the program, when they enter the store their face is recognized and they are checked in. Then, as a reward, they are sent an offer on their smartphone.

    It won't be long before retailers begin to listen to shoppers as they walk the aisles; keywords can then be collected and aggregated to give the retailer an idea of what people are saying about their stores and products. Sentiment analysis based on what's said, or even on facial expressions, can't be far off.

    Clearly retailers need to be cautious and respect customer privacy. That's why these technologies are emerging slowly. But since the next generation of shoppers is less concerned about privacy, I expect these technologies to appear sporadically in the next five years and then go mainstream. Time will tell.

    Read the article

  • How can I be prepared to join a company?

    - by Aerovistae
    There's more to it than that, but this title was the best way I could think of to sum it up.

    I'm a senior in a good computer science program, and I'm graduating early. About to start interviews and all whatnot. I'm not a super-experienced programmer, not one of those people who started in middle school. I'm decent at this, but I'm not among the best, not nearly. I have to do an awful lot of googling.

    So today I'm meeting some fellow for lunch at a campus cafe to discuss some front-end details when this tall, good-looking guy begs pardon, says he's new to campus, and asks if we know where he can go to sign up to recruit developers. It quickly evolves into a long conversation: he's the CEO of a seems-to-be-doing-well start-up, hiring passionate interns and full-timers. Sounds great!

    I take one look at his site on my own computer later and immediately spot a major bug. No idea how to fix it, but I see it. I go over to the page code, and good god. It's the standard amount of code you would expect from a full-scale web application, a couple dozen pages of HTML and scripts. I don't even know where to start reading it. I've built sites from scratch, but obviously never on that scale, nor have I ever worked on one of that scale. I have no idea which bit might generate the bug.

    But that sets me thinking: how could someone like me possibly settle into an environment like that? A start-up is a very high-pressure working environment. I don't know if I can work at that pace under those constraints -- I would hate to let people down. And with only 10 employees, it's not like anyone has much time to help you get your bearings.

    Somewhere in there is a question. Can you see it? I'm asking for general advice here. Maybe even anecdotal advice. Is joining a start-up right out of college a scary process? Am I overestimating what it would take to figure out the mass of code behind this site? What's the likelihood that a decent but only moderately experienced coder could earn his pay at such a place? For instance, I know nothing of server-side/back-end programming. Never touched it. That scares me.

    Read the article

  • New grad: To overcome a complete lack of experience, should I ditch a creative pet project in favor of one that would demonstrate more applicable skills?

    - by Hart Simha
    I am currently working on a project on github that I think would be a good demonstration of my initiative, creativity, and enthusiasm. It is an educational game I am developing in pygame that enables the user to learn to improve their development productivity by using vim, specifically with python, though learning to code faster with vim should be transferable to any language. I think this is something that might have mass appeal and benefit a lot of people in a measurable way.

    However, I am graduating from college in a month (my degree is computer science with a minor in English), with no experience that is relevant to helping me get any kind of job in the field, and a GPA that doesn't tout my merits. I could pursue a career in game development, but it's not necessarily what I'm most interested in, and I see myself applying to startups around the country. To the places I am looking at applying, showing that I have experience with pygame is going to be largely irrelevant, except as a demonstration of my ability to code, period. A lot of skills that ARE more marketable, such as data modeling, GIS, mobile application development, JavaScript, the .NET framework, and various web development technologies, are not going to be showcased by this project (on the upside, employers do like to see familiarity with git and python).

    I'm wondering if I should sink all my free time in the next couple of months into this project, since I'm motivated and interested in it, and whether the value of being able to demonstrate ambition and 'good ideas' (for lack of a better term, and in my own opinion) will compensate for the absence of more sought-after skills. I am probably at a point where I should either commit fully to this project now, or put it on the back burner in favor of something else, and I am leaning towards continuing with what I am already working on, because I think it's a great idea and something achievable for me with enough dedication over the next couple of months.

    But the most important thing to me is being able to get a job out of college, which I am exceedingly concerned about, as the professional landscape I am navigating for the first time is a lot more intimidating than I could have anticipated, with almost every job (even short-term contract positions) requiring years of experience which I lack. So in brief: the common denominator in answers to the question "How can I overcome experience requirements for a job?" seems to be "Show off your own project." I want to know WHICH project I should work on to best increase my chances of getting a job out of college, keeping in mind that I have no experience. I believe this question is applicable to any new grad who lacks demonstrable experience.

    Read the article

  • Are closures with side-effects considered "functional style"?

    - by Giorgio
    Many modern programming languages support some concept of closure, i.e. of a piece of code (a block or a function) that:

    - can be treated as a value, and therefore stored in a variable, passed around to different parts of the code, defined in one part of a program and invoked in a totally different part of the same program;
    - can capture variables from the context in which it is defined, and access them when it is later invoked (possibly in a totally different context).

    Here is an example of a closure written in Scala:

      def filterList(xs: List[Int], lowerBound: Int): List[Int] =
        xs.filter(x => x >= lowerBound)

    The function literal x => x >= lowerBound contains the free variable lowerBound, which is closed (bound) by the argument of the function filterList that has the same name. The closure is passed to the library method filter, which can invoke it repeatedly as a normal function.

    I have been reading a lot of questions and answers on this site and, as far as I understand, the term closure is often automatically associated with functional programming and functional programming style. The definition of functional programming on Wikipedia reads:

      In computer science, functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It emphasizes the application of functions, in contrast to the imperative programming style, which emphasizes changes in state.

    and further on:

      [...] in functional code, the output value of a function depends only on the arguments that are input to the function [...]. Eliminating side effects can make it much easier to understand and predict the behavior of a program, which is one of the key motivations for the development of functional programming.

    On the other hand, many closure constructs provided by programming languages allow a closure to capture non-local variables and change them when the closure is invoked, thus producing a side effect on the environment in which they were defined. In this case, closures implement the first idea of functional programming (functions are first-class entities that can be moved around like other values) but neglect the second idea (avoiding side effects).

    Is this use of closures with side effects considered functional style, or are closures considered a more general construct that can be used both for a functional and a non-functional programming style? Is there any literature on this topic?

    IMPORTANT NOTE: I am not questioning the usefulness of side effects or of having closures with side effects. Also, I am not interested in a discussion about the advantages / disadvantages of closures with or without side effects. I am only interested to know whether using such closures is still considered functional style by the proponents of functional programming or whether, on the contrary, their use is discouraged when using a functional style.
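
    For illustration, here is a minimal sketch (in C++ rather than the question's Scala) of exactly the dichotomy being asked about: a closure that is a first-class value, yet whose invocation mutates the variable it captured.

      #include <functional>
      #include <iostream>
      #include <vector>

      int main() {
          int sum = 0;                          // non-local state captured by the closure
          std::vector<int> xs = {1, 2, 3, 4};

          // First idea of FP satisfied: the closure is a value -- storable, passable.
          std::function<void(int)> add = [&sum](int x) { sum += x; };

          // Second idea neglected: each call side-effects the environment.
          for (int x : xs) add(x);

          std::cout << sum << '\n';             // prints 10; 'add' is not referentially transparent
      }

    The same construct supports both styles; whether its use here still counts as "functional" is precisely what the question asks.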

    Read the article

  • Understanding the levels of computing

    - by RParadox
    Sorry for my confused question; I'm looking for some pointers. Up to now I have been working mostly with Java and Python on the application layer, and I have only a vague understanding of operating systems and hardware. I want to understand much more about the lower levels of computing, but it gets really overwhelming somehow. At university I took a class about microprogramming, i.e. how processors are hard-wired to implement the ASM codes. Up to now I always thought I wouldn't get more done if I learned more about the "low level".

    One question I have is: how is it even possible that hardware gets hidden almost completely from the developer? Is it accurate to say that the operating system is a software layer for the hardware? One small example: in programming I have never come across the need to understand what L2 or L3 cache is. For the typical business application environment one almost never needs to understand assembler and the lower levels of computing, because nowadays there is a technology stack for almost anything. I guess the whole point of these lower levels is to provide an interface to the higher levels. On the other hand, I wonder how much influence the lower levels can have, for example this whole graphics computing thing.

    Then, on the other side, there is the theoretical computer science branch, which works on abstract computing models. However, I have also rarely encountered situations where I found it helpful to think in the categories of complexity models, proof verification, etc. I sort of know that there is a complexity class called NP, and that such problems are kind of impossible to solve for a big number of N. What I'm missing is a reference for a framework to think about these things. It seems to me that there are all kinds of different camps, who rarely interact.

    The last few weeks I have been reading about security issues. Here, somehow, many of the different layers come together. Attacks and exploits almost always occur on the lower level, so in this case it is necessary to learn about the details of the OSI layers, the inner workings of an OS, etc.
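
    To make the L2/L3 cache example concrete, here is a small illustrative C++ experiment (a sketch added for this digest, not part of the original question): both passes below touch the same 64 MiB and perform the same additions, yet on most machines the strided pass is several times slower, purely because of cache-line and prefetch behavior that the application layer never names.

      #include <chrono>
      #include <cstddef>
      #include <cstdio>
      #include <vector>

      int main() {
          const std::size_t n = 64 * 1024 * 1024;
          std::vector<char> buf(n, 1);
          long sum = 0;

          auto bench = [&](std::size_t stride) {
              auto t0 = std::chrono::steady_clock::now();
              for (std::size_t start = 0; start < stride; ++start)
                  for (std::size_t i = start; i < n; i += stride)
                      sum += buf[i];            // same total work for any stride
              auto t1 = std::chrono::steady_clock::now();
              return std::chrono::duration<double>(t1 - t0).count();
          };

          std::printf("stride 1:    %.3f s\n", bench(1));      // cache-friendly
          std::printf("stride 4096: %.3f s\n", bench(4096));   // cache-hostile
          return sum == 0;                      // keep 'sum' observable so loops aren't elided
      }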

    Read the article

  • Do OO, TDD, and Refactoring to Smaller Functions affect Speed of Code?

    - by Dennis
    In the computer science field, I have noticed a notable shift in thinking when it comes to programming. The advice as it stands now is:

    - write smaller, more testable code
    - refactor existing code into smaller and smaller chunks of code until most of your methods/functions are just a few lines long
    - write functions that only do one thing (which makes them smaller again)

    This is a change compared to the "old" or "bad" code practices where you had methods spanning 2500 lines and big classes doing everything.

    My question is this: when it all comes down to machine code, to 1s and 0s, to assembly instructions, should I be at all concerned that my class-separated code with a variety of small-to-tiny functions generates too much extra overhead? While I am not exactly familiar with how OO code and function calls are handled in ASM in the end, I do have some idea. I assume that each extra function call, object call, or include call (in some languages) generates an extra set of instructions, thereby increasing the code's volume and adding various overhead, without adding actual "useful" code. I also imagine that good optimizations can be applied to the ASM before it is actually run on the hardware, but that optimization can only do so much too.

    Hence, my question -- how much overhead (in space and speed) does well-separated code (split up across hundreds of files, classes, and methods) actually introduce compared to having "one big method that contains everything", due to this overhead?

    UPDATE for clarity: I am assuming that adding more and more functions and more and more objects and classes to a code base will result in more and more parameter passing between smaller code pieces. It was said somewhere (quote TBD) that up to 70% of all code is made up of ASM's MOV instruction - loading CPU registers with the proper variables, not the actual computation being done. In my case, you load up the CPU's time with PUSH/POP instructions to provide linkage and parameter passing between various pieces of code. The smaller you make your pieces of code, the more "linkage" overhead is required. I am concerned that this linkage adds to software bloat and slow-down, and I am wondering whether I should be concerned about this, and how much, if at all, because current and future generations of programmers who are building software for the next century will have to live with and consume software built using these practices.

    UPDATE: Multiple files. I am writing new code now that is slowly replacing old code. In particular I've noted that one of the old classes was a ~3000-line file (as mentioned earlier). Now it is becoming a set of 15-20 files located across various directories, including test files and not including the PHP framework I am using to bind some things together. More files are coming as well. When it comes to disk I/O, loading multiple files is slower than loading one large file. Of course not all files are loaded; they are loaded as needed, and disk caching and memory caching options exist, and yet I still believe that loading multiple files takes more processing than loading a single file into memory. I am adding that to my concern.
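
    One hedged data point on the linkage concern: for small functions, modern optimizing compilers usually erase the call entirely. A C++ sketch (compile with -O2 and compare the assembly of both loops, e.g. on godbolt.org) in which the two versions typically produce identical machine code:

      #include <cstddef>

      inline int square(int x) { return x * x; }    // tiny single-purpose function

      int sum_with_calls(const int* xs, std::size_t n) {
          int s = 0;
          for (std::size_t i = 0; i < n; ++i)
              s += square(xs[i]);                   // call is inlined away at -O2
          return s;
      }

      int sum_hand_inlined(const int* xs, std::size_t n) {
          int s = 0;
          for (std::size_t i = 0; i < n; ++i)
              s += xs[i] * xs[i];                   // the "one big method" style, by hand
          return s;
      }

    Virtual dispatch, cross-module calls, and per-file I/O can still carry real cost, but PUSH/POP linkage for small statically-resolved calls is largely eliminated by today's optimizers.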

    Read the article

  • Which one to select for my future career: Java, C#, Azure, or Apex?

    - by user636195
    Hi folks, I am going to study for a Masters in Computer Science in the U.S.A., starting in a week. I did my B.Sc. over the past three years, and after my freshman year I started working on projects (in C# and, very rarely, in Java) for the past two years (i.e. while I was a second- and third-year student).

    Now I am at a college where all of the programming courses are going to be taught in Java only (using Eclipse). I am going to stay at this college for 8 months on campus and then be fully employed for two years at other companies as a CPT. I really love to work on Microsoft products because, for me, they are simple and easy to use and understand. My future plan is to work in cloud computing and to be a cloud-based business owner in the near future.

    Since the college is going to teach us and have us do every project in Java, I am confused about which programming language to use that will help me and enhance my career, and of course I want to select the one I like working with. I have also heard a lot about Azure (Microsoft's) and Apex (Salesforce.com's cloud computing programming language).

    Would you please give me your advice and recommendations based on my situation? Should I study only Java, or should I study C# or Azure alongside Java on my own? The reason I ask is that, since I have no clue how Azure works and how long it will take me to learn the language, I am really confused about which one to select (Java vs. C#, and Azure vs. Apex, or whatever other popular and widely used cloud computing language there is). Do you think I can get a job in cloud computing if I study Azure or Apex on my own, without experience?

    There is also one short-term issue I need to consider: salary. Since I have to pay my student loan, I need to get a good job that will let me pay off my loan within two years. But, as I said, my long-term plan is to get experience in cloud computing (from the programming to the administrative side, i.e. every area of cloud computing) and then have my own business, maybe within 5-10 years. What do you think? Thank you for your time.

    Read the article

  • Is it smart to take a year off from school to get experience?

    - by user134147
    Firstly, I apologize if this question is not appropriate for the site, but I've seen other similar (though slightly deviant) questions on this site before, and I know the people here are the most qualified to answer my question.

    Anyway, I'm currently between my sophomore and junior years at a 4-year university, and after a bit of deliberation I've decided on computer science as a major (a BA, by the way, as a BS would require me to stay at least an extra year the way our program is set up). I've been interested in programming for a few months now and I've developed a passion for it in a very short time. I began learning C++, migrating to Java recently when I learned my school focuses on that language.

    Now, I should mention that the concept of higher education has never sat well with me, so part of my motivation for wanting to take time off is to truly challenge myself and see what I can accomplish when I actually try at something. The autodidact in me finds it difficult to focus on my passions while trying to keep a high GPA in unrelated classes. However, I understand the times we live in and therefore would plan to complete my degree after this year off.

    So my question is whether or not the skills I would learn in a year off from college could justify the time away from school. Unfortunately, I don't believe I know enough yet to gain any professional experience (internship, etc.), so I would mostly focus my time on learning Java and another platform, possibly WordPress (to gain an understanding of web programming concepts, as I have not yet decided what field I want to get into, and to make some money to fund my off-year), and on delving into security concepts, which also interest me. I'm hoping I could work on projects during this time, such as simple applications or contributions to open source software, to enhance my resume once I do finish school, so I can find a job out of college more easily. I do not want to be the new hire who knows nothing beyond the concepts of his Java textbooks.

    Does anyone have any input about these thoughts of mine, or any ideas for where I should focus my studies or how high I might set the bar for my work? Thanks a lot everyone!

    Read the article

  • Bazaar newbie question about repository structures

    - by esc1729
    I want to use Bazaar on Windows XP for web development and related tasks. Most of the files are edited locally and then transferred via FTP to the server. Just now the repository sits on my local workstation. Later on it should be shared locally with some co-workers. Perhaps we will use a local Linux server as a centralized repository, but this structure has not been decided yet. But first I need to understand the impacts of the different repository setups, which so far I do not understand at all.

    Using Bazaar Explorer on Windows XP I've created a 'shared tree repository' from the option list of the init dialogue in some location dev-filter/. Bazaar Explorer tells me:

      Created repository with treeless branches at F:/bzr.local/dev-filter
      Created branch at F:/bzr.local/dev-filter/trunk
      Created working tree at F:/bzr.local/dev-filter/work

    OK so far. Now I move a bunch of files into the work directory and add and commit them as Rev 1, 'Start Revision'. Then I work on some of these files and commit them again as Rev 2.

    Here my confusion starts. Shouldn't both revisions go into the trunk? The trunk is still empty, apart from the .bzr directory, which only holds some management information. If I delete my working directory, which I tried during these first experiments, everything is gone. There's obviously no hidden storage of those files. OK, perhaps I need to push it into the trunk? That does not work either. Entering the work/ directory and initiating a 'push' to the trunk, Bazaar Explorer tells me:

      No new revisions to push.

    So what? This looks like a severe conceptual misunderstanding on my side about what should happen.

    Edit, 2010-02-03: Some conclusions. What I have learned meanwhile is this:

    - I think I should switch to the command line until I really understand what's going on, at least for creating the repositories and branches. Bazaar Explorer introduces a new level of abstraction which I can only handle if I understand the level beneath.
    - One of the secrets of working with Bazaar, at least for me, is understanding those .bzr directories: their particular properties and states when created with 'bzr init', 'bzr init-repository', 'bzr branch', etc. in all their variants, and how they are plumbed together.
    - While there's a whole chapter on 'Organizing your workspace' in the Bazaar User Guide, it's more or less workflow oriented. The manual contains a lot of directory structures for the given examples. What I would prefer beside this, and have not (or only in rudimentary form) found so far, is some graphical representation of those 'Lego like' .bzr building blocks showing how all the parts link together. So I started to invent some simple notation while working through the examples, looking into the .bzr directories to document what information is stored there, where it comes from, how and to what it is linked, whether it is complete or shared, etc.

    Erich Schreiber
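
    For orientation, here is roughly the same layout built from the command line (a sketch using standard bzr commands; the paths are the ones above). The detail that usually resolves this confusion is the last step: commits made inside a checkout are recorded in the branch it is bound to, not only in the working tree.

      bzr init-repo --no-trees F:/bzr.local/dev-filter      # shared repository; branches keep history only
      bzr init F:/bzr.local/dev-filter/trunk                # the trunk branch, empty for now
      bzr checkout F:/bzr.local/dev-filter/trunk F:/bzr.local/dev-filter/work
      cd F:/bzr.local/dev-filter/work
      bzr add
      bzr commit -m "Start Revision"                        # lands in trunk, because work is bound to it

    If 'work' was instead created as an independent branch rather than a checkout, its revisions stay local to it until explicitly pushed or merged, which would match the observed empty trunk.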

    Read the article

  • How can I extract paragraphs and selected lines with Perl?

    - by neversaint
    I have a text where I need:

    1. to extract the whole paragraph under the section "AceView summary" until the line that starts with "Please quote" (not to be included);
    2. to extract the line that starts with "The closest human gene";
    3. to store them in an array with two elements.

    The text looks like this (also on pastebin):

      AceView: gene:1700049G17Rik, a comprehensive annotation of human, mouse and worm genes with mRNAs or ESTsAceView.
      <META NAME="title" CONTENT=" AceView: gene:1700049G17Rik a comprehensive annotation of human, mouse and worm genes with mRNAs or EST">
      <META NAME="keywords" CONTENT=" AceView, genes, Acembly, AceDB, Homo sapiens, Human, nematode, Worm, Caenorhabditis elegans , WormGenes, WormBase, mouse, mammal, Arabidopsis, gene, alternative splicing variant, structure, sequence, DNA, EST, mRNA, cDNA clone, transcript, transcription, genome, transcriptome, proteome, peptide, GenBank accession, dbest, RefSeq, LocusLink, non-coding, coding, exon, intron, boundary, exon-intron junction, donor, acceptor, 3'UTR, 5'UTR, uORF, poly A, poly-A site, molecular function, protein annotation, isoform, gene family, Pfam, motif ,Blast, Psort, GO, taxonomy, homolog, cellular compartment, disease, illness, phenotype, RNA interference, RNAi, knock out mutant expression, regulation, protein interaction, genetic, map, antisense, trans-splicing, operon, chromosome, domain, selenocysteine, Start, Met, Stop, U12, RNA editing, bibliography">
      <META NAME="Description" CONTENT= " AceView offers a comprehensive annotation of human, mouse and nematode genes reconstructed by co-alignment and clustering of all publicly available mRNAs and ESTs on the genome sequence. Our goals are to offer a reliable up-to-date resource on the genes, their functions, alternative variants, expression, regulation and interactions, in the hope to stimulate further validating experiments at the bench ">
      <meta name="author" content="Danielle Thierry-Mieg and Jean Thierry-Mieg, NCBI/NLM/NIH, [email protected]">
      <!--
        var myurl="av.cgi?db=mouse" ;
        var db="mouse" ;
        var doSwf="s" ;
        var classe="gene" ;
      //-->

    However, I am stuck with the following script logic. What's the right way to achieve this?

      #!/usr/bin/perl -w
      use Carp;                                  # needed for croak()

      my $INFILE_file_name = $file;              # input file name

      open ( INFILE, '<', $INFILE_file_name )
          or croak "$0 : failed to open input file $INFILE_file_name : $!\n";

      my @allsum;
      while ( <INFILE> ) {
          chomp;
          my $line = $_;
          my @temp1 = ();
          if ( $line =~ /^ AceView summary/ ) {
              print "$line\n";
              push @temp1, $line;
          }
          elsif ( $line =~ /Please quote/ ) {
              push @allsum, [@temp1];
              @temp1 = ();
          }
          elsif ( $line =~ /The closest human gene/ ) {
              push @allsum, $line;
          }
      }
      close ( INFILE );                          # close input file

      # Do something with @allsum

    There are many files like that I need to process.

    Read the article

  • dojo.require() prevents Firefox from rendering the page

    - by Eduard Wirch
    I'm experiencing strange behavior with Firefox and Dojo. I have an HTML page with these lines in the <head> section:

      ...
      <script type="text/javascript" src="dojo.js" djconfig="parseOnLoad: true, locale: 'de'"></script>
      <script type="text/javascript">
          dojo.require("dojo.number");
      </script>
      ...

    Sometimes the page loads normally. But sometimes it won't: Firefox will fetch the whole HTML page but not render it. I see only a gray window.

    After some experiments I figured out that the rendering problem has something to do with the load time of the HTML. Firefox starts evaluating the HTML page while loading it. If the page takes too long to load, the above JavaScript will be executed BEFORE the HTML finishes loading. If this happens, I get the gray window. Advising Firefox to show me the source code of the page will display the correct, complete HTML code. BUT: if I save the page to disk (File - Save Page As...) the HTML code is truncated and the above part looks like this:

      ...
      <script type="text/javascript" src="dojo.js" djconfig="parseOnLoad: true, locale: 'de'"></script>
      <script type="text/javascript">
          dojo.require("dojo.number");
      </script></head><body></body></html>

    This explains why I get to see a gray area. But why does this code appear there? I assume the require() method of Dojo does something "evil", but I can't figure out what. There is no document.write("</head><body></body></html>"); in the Dojo code; I checked for it.

    The problem would be fixed if I placed the dojo.require("dojo.number"); statement in the window.load event:

      <script type="text/javascript">
          window.load = function() {
              dojo.require("dojo.number");
          }
      </script>

    But I'm curious why this happens. Is there a JavaScript function which forces Firefox to stop evaluating the page? Does Dojo do something "bad"? Can anyone explain this behavior to me?

    EDIT: Dojo 1.3.1, no JS errors or warnings.

    Read the article

  • JavaScript/Dojo Module Pattern - how to debug?

    - by djna
    I'm working with Dojo and using the "Module Pattern" as described in Mastering Dojo. So far as I can see this pattern is a general, and widely used, JavaScript pattern.

    My question is: how do we debug our modules? So far I've not been able to persuade Firebug to show me the source of my module. Firebug seems to show only the dojo eval statement used to execute the factory method. Hence I'm not able to step through my module source. I've tried putting "debugger" statements in my module code, and Firebug seems to halt correctly, but does not show the source.

    Contrived example code below. This is just an example of sufficient complexity to make the need for debugging plausible; it's not intended to be useful code.

    The page:

      <!-- Experiments with Debugging -->
      <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
      <html>
      <head>
        <title>console me</title>
        <style type="text/css">
          @import "../dojoroot/dojo/resources/dojo.css";
          @import "../dojoroot/dijit/themes/tundra/tundra.css";
          @import "edf.css";
        </style>
        <script type="text/javascript" src="../dojoroot/dojo/dojo.js"></script>
        <script type="text/javascript">
          dojo.registerModulePath("mytest", "../../mytest");
          dojo.require("mytest.example");
          dojo.addOnLoad(function(){
            mytest.example.greet();
          });
        </script>
      </head>
      <body class="tundra">
        <div id="bulletin">
          <p>Just Testing</p>
        </div>
      </body>
      </html>
      <!-- END: snip1 -->

    The JavaScript I'd like to debug:

      dojo.provide("mytest.example");
      dojo.require("dijit.layout.ContentPane");

      /**
       * define module
       */
      (function(){
        // define the main program functions...
        var example = mytest.example;

        example.greet = function(args) {
          var bulletin = dojo.byId("bulletin");
          console.log("bulletin:" + bulletin);
          if (bulletin) {
            var content = new dijit.layout.ContentPane({
              id: "dummy",
              region: "center"
            });
            content.setContent('Greetings!');
            dojo._destroyElement(bulletin);
            dojo.place(content.domNode, dojo.body(), "first");
            console.log("greeting done");
          } else {
            console.error("no bulletin board");
          }
        }
      })();

    Read the article

  • Specialization of template after instantiation?

    - by Onur Cobanoglu
    Hi, my full code is too long, but here is a snippet that reflects the essence of my problem:

      class BPCFGParser {
      public:
        ...
        class Edge { ... };
        class ActiveEquivClass { ... };
        class PassiveEquivClass { ... };
        struct EqActiveEquivClass { ... };
        struct EqPassiveEquivClass { ... };

        unordered_map<ActiveEquivClass, Edge *, hash<ActiveEquivClass>, EqActiveEquivClass> discovered_active_edges;
        unordered_map<PassiveEquivClass, Edge *, hash<PassiveEquivClass>, EqPassiveEquivClass> discovered_passive_edges;
      };

      namespace std {
        template <>
        class hash<BPCFGParser::ActiveEquivClass> {
        public:
          size_t operator()(const BPCFGParser::ActiveEquivClass & aec) const { }
        };
        template <>
        class hash<BPCFGParser::PassiveEquivClass> {
        public:
          size_t operator()(const BPCFGParser::PassiveEquivClass & pec) const { }
        };
      }

    When I compile this code, I get the following errors:

      In file included from BPCFGParser.cpp:3,
                       from experiments.cpp:2:
      BPCFGParser.h:408: error: specialization of ‘std::hash<BPCFGParser::ActiveEquivClass>’ after instantiation
      BPCFGParser.h:408: error: redefinition of ‘class std::hash<BPCFGParser::ActiveEquivClass>’
      /usr/include/c++/4.3/tr1_impl/functional_hash.h:44: error: previous definition of ‘class std::hash<BPCFGParser::ActiveEquivClass>’
      BPCFGParser.h:445: error: specialization of ‘std::hash<BPCFGParser::PassiveEquivClass>’ after instantiation
      BPCFGParser.h:445: error: redefinition of ‘class std::hash<BPCFGParser::PassiveEquivClass>’
      /usr/include/c++/4.3/tr1_impl/functional_hash.h:44: error: previous definition of ‘class std::hash<BPCFGParser::PassiveEquivClass>’

    Now I have to specialize std::hash for these classes (because the standard std::hash definition does not include user-defined types). When I move these template specializations before the definition of class BPCFGParser, I get a variety of errors for a variety of different things tried, and somewhere (http://www.parashift.com/c++-faq-lite/misc-technical-issues.html) I read that:

      Whenever you use a class as a template parameter, the declaration of that class must be complete and not simply forward declared.

    So I'm stuck. I cannot specialize the templates after the BPCFGParser definition, and I cannot specialize them before the BPCFGParser definition; how may I get this working? Thanks in advance, Onur
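
    A commonly suggested way around this circularity (a sketch, not necessarily the poster's eventual fix): since the maps already name their hasher as an explicit template argument, a user-defined functor can stand in for the std::hash specialization, and it can be defined inside the enclosing class, after the nested classes are complete:

      #include <cstddef>
      #include <unordered_map>
      using std::size_t;
      using std::unordered_map;

      class BPCFGParser {
      public:
        class Edge { /* ... */ };
        class ActiveEquivClass { /* ... */ };
        class PassiveEquivClass { /* ... */ };
        struct EqActiveEquivClass { /* ... */ };
        struct EqPassiveEquivClass { /* ... */ };

        // Hand-rolled hashers: no std::hash specialization is needed, so the
        // "specialization after instantiation" problem never arises.
        struct HashActiveEquivClass {
          size_t operator()(const ActiveEquivClass & aec) const { /* ... */ return 0; }
        };
        struct HashPassiveEquivClass {
          size_t operator()(const PassiveEquivClass & pec) const { /* ... */ return 0; }
        };

        unordered_map<ActiveEquivClass, Edge *,
                      HashActiveEquivClass, EqActiveEquivClass> discovered_active_edges;
        unordered_map<PassiveEquivClass, Edge *,
                      HashPassiveEquivClass, EqPassiveEquivClass> discovered_passive_edges;
      };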

    Read the article

  • Is it too early to start designing for Task Parallel Library?

    - by Joe Erickson
    I have been following the development of the .NET Task Parallel Library (TPL) with great interest since Microsoft first announced it. There is no doubt in my mind that we will eventually take advantage of TPL. What I am questioning is whether it makes sense to start taking advantage of TPL when Visual Studio 2010 and .NET 4.0 are released, or whether it makes sense to wait a while longer.

    Why Start Now?

    - The .NET 4.0 Task Parallel Library appears to be well designed, and some relatively simple tests demonstrate that it works well on today's multi-core CPUs.
    - I have been very interested in the potential advantages of using multiple lightweight threads to speed up our software since buying my first quad-processor Dell PowerEdge 6400 about seven years ago. Experiments at that time indicated that it was not worth the effort, which I attributed largely to the overhead of moving data between each CPU's cache (there was no shared cache back then) and RAM.
    - Competitive advantage - some of our customers can never get enough performance, and there is no doubt that we can build a faster product using TPL today.
    - It sounds fun. Yes, I realize that some developers would rather poke themselves in the eye with a sharp stick, but we really enjoy maximizing performance.

    Why Wait?

    - Are today's Intel Nehalem CPUs representative of where we are going as multi-core support matures? You can purchase a Nehalem CPU with 4 cores which share a single level 3 cache today, and most likely a 6-core CPU sharing a single level 3 cache by the time Visual Studio 2010 / .NET 4.0 are released. Obviously the number of cores will go up over time, but what about the architecture? As the number of cores goes up, will they still share a cache? One issue with Nehalem is the fact that, even though there is a very fast interconnect between the cores, they have non-uniform memory access (NUMA), which can lead to lower performance and less predictable results. Will future multi-core architectures be able to do away with NUMA?
    - Similarly, will the .NET Task Parallel Library change as it matures, requiring modifications to code to fully take advantage of it?

    Limitations

    Our core engine is 100% C# and has to run without full trust, so we are limited to using .NET APIs.

    Read the article

  • How to implement a "soft barrier" in multithreaded C++

    - by Jason
    I have some multithreaded C++ code with the following structure:

      do_thread_specific_work();
      update_shared_variables();
      // checkpoint A
      do_thread_specific_work_not_modifying_shared_variables();
      // checkpoint B
      do_thread_specific_work_requiring_all_threads_have_updated_shared_variables();

    What follows checkpoint B is work that could have started as soon as all threads had reached checkpoint A, hence my notion of a "soft barrier". Typically multithreading libraries only provide "hard barriers" in which all threads must reach some point before any can continue. Obviously a hard barrier could be used at checkpoint B. Using a soft barrier can lead to better execution time, especially since the work between checkpoints A and B may not be load-balanced between the threads (i.e. one slow thread which has reached checkpoint A but not B could be causing all the others to wait at the barrier just before checkpoint B).

    I've tried using atomics to synchronize things, and I know with 100% certainty that it is NOT guaranteed to work. For example, using OpenMP syntax, before the parallel section start with:

      shared_thread_counter = num_threads;  // known at compile time
      #pragma omp flush

    Then at checkpoint A:

      #pragma omp atomic
      shared_thread_counter--;

    Then at checkpoint B (using polling):

      #pragma omp flush
      while (shared_thread_counter > 0) {
          usleep(1);  // can be removed, but better to limit memory bandwidth
          #pragma omp flush
      }

    I've designed some experiments in which I use an atomic to indicate that some operation before it is finished. The experiment would work with 2 threads most of the time, but consistently fails when I have lots of threads (like 20 or 30). I suspect this is because of the caching structure of modern CPUs. Even if one thread updates some other value before doing the atomic decrement, it is not guaranteed to be read by another thread in that order. Consider the case when the other value is a cache miss and the atomic decrement is a cache hit.

    So back to my question: how do you CORRECTLY implement this "soft barrier"? Is there any built-in feature that guarantees such functionality? I'd prefer OpenMP, but I'm familiar with most of the other common multithreading libraries. As a workaround right now, I'm using a hard barrier at checkpoint B and I've restructured my code to make the work between checkpoints A and B automatically load-balancing between the threads (which has been rather difficult at times). Thanks for any advice/insight :)
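
    For comparison, a hedged sketch of the same structure in portable C++11 atomics (not OpenMP, and offered as an illustration of the ordering guarantees rather than a verified fix for the code above): the release on the decrement pairs with the acquire on the polling load, so every write made before checkpoint A is visible to any thread that observes the counter reach zero - the guarantee the flush-based version lacks.

      #include <atomic>
      #include <thread>

      // Placeholders standing in for the question's pseudocode steps.
      void do_thread_specific_work() {}
      void update_shared_variables() {}
      void do_thread_specific_work_not_modifying_shared_variables() {}
      void do_thread_specific_work_requiring_all_threads_have_updated_shared_variables() {}

      std::atomic<int> remaining;   // set to the thread count before the threads start

      void worker() {
          do_thread_specific_work();
          update_shared_variables();

          // Checkpoint A: release ordering publishes all the writes above.
          remaining.fetch_sub(1, std::memory_order_release);

          do_thread_specific_work_not_modifying_shared_variables();

          // Checkpoint B: acquire ordering pairs with the releases above.
          while (remaining.load(std::memory_order_acquire) > 0)
              std::this_thread::yield();   // soft barrier: waits only if someone lags

          do_thread_specific_work_requiring_all_threads_have_updated_shared_variables();
      }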

    Read the article

  • Optimising speeds in HDF5 using PyTables

    - by Sree Aurovindh
    The problem concerns the writing speed of the computers (10 32-bit machines) and PostgreSQL query performance. I will explain the scenario in detail.

    I have about 80 GB of data (along with appropriate database indexes in place). I am trying to read it from the PostgreSQL database and write it into HDF5 using PyTables. I have 1 table and 5 variable arrays in one HDF5 file. The implementation of HDF5 is not multithreaded or enabled for symmetric multiprocessing. I have rented about 10 computers for a day and am trying to write with them in order to speed up my data handling.

    As for the PostgreSQL table, the overall record count is 140 million, and I have 5 tables related by primary/foreign key references. I am not using joins, as they are not scalable. So for a single lookup I do 6 lookups without joins and write the results into HDF5 format. For each lookup I do 6 inserts into the table and its corresponding arrays.

    The queries are really simple:

      select * from x.train where tr_id=1   -- primary key & indexed
      select q_t from x.qt where q_id=2     -- non-primary key but indexed
      -- (similarly, five queries)

    Each computer writes two HDF5 files, and hence the total count comes to around 20 files.

    Some calculations and statistics:

    - Total number of records: 143,700,000
    - Total number of records per file: 143,700,000 / 20 = 7,185,000
    - Total number of records in each file: 7,185,000 * 5 = 35,925,000

    Current PostgreSQL database config: my current machine has 8 GB of RAM with an i7 2nd-generation processor. I made changes to the following in the postgresql configuration file: shared_buffers: 2 GB, effective_cache_size: 4 GB.

    Note on current performance: I have run it for about ten hours and the performance is as follows: the total number of records written for each file is about 621,000 * 5 = 3,105,000. The bottleneck is that I can only rent the machines for 10 hours per day (overnight), and if it processes at this speed it will take about 11 days, which is too long for my experiments.

    Please suggest how I could improve things. Questions:

    1. Should I use symmetric multiprocessing on those desktops (each has 2 cores with about 2 GB of RAM)? In that case, what is suggested or preferable?
    2. If I change my postgresql configuration file and increase the RAM, will it enhance my process?
    3. Should I use multithreading? In that case, any links or pointers would be of great help.

    Thanks, Sree Aurovindh V

    Read the article

  • Switch between speakerphone and headset on Android

    - by user210504
    Hi! I wish to know if there is a way to switch between the speaker and the headset dynamically in an Android application. I am using this sample code, which I found online, for my experiments:

      final float frequency = 440;
      float increment = (float)(2*Math.PI) * frequency / 44100; // angular increment for each sample
      float angle = 0;
      AndroidAudioDevice device = new AndroidAudioDevice( );

      AudioManager am = (AudioManager)getSystemService(AUDIO_SERVICE);
      am.setMode(AudioManager.MODE_IN_CALL);

      float samples[] = new float[1024];
      int count = 0;
      while( count < 10 ) {
          count++;
          for( int i = 0; i < samples.length; i++ ) {
              samples[i] = (float)Math.sin( angle );
              angle += increment;
          }
          device.writeSamples( samples );
      }
      device.stop();
      am.setMode(AudioManager.MODE_NORMAL);

    And the next class:

      public class AndroidAudioDevice {
          AudioTrack track;
          short[] buffer = new short[1024];

          public AndroidAudioDevice( ) {
              int minSize = AudioTrack.getMinBufferSize( 44100,
                      AudioFormat.CHANNEL_CONFIGURATION_MONO,
                      AudioFormat.ENCODING_PCM_16BIT );
              track = new AudioTrack( AudioManager.STREAM_VOICE_CALL, 44100,
                      AudioFormat.CHANNEL_CONFIGURATION_MONO,
                      AudioFormat.ENCODING_PCM_16BIT,
                      minSize, AudioTrack.MODE_STREAM);
              track.play();
          }

          public void writeSamples(float[] samples) {
              fillBuffer( samples );
              track.write( buffer, 0, samples.length );
          }

          private void fillBuffer( float[] samples ) {
              if( buffer.length < samples.length )
                  buffer = new short[samples.length];
              for( int i = 0; i < samples.length; i++ )
                  buffer[i] = (short)(samples[i] * Short.MAX_VALUE);
          }

          public void stop() {
              track.stop();
          }
      }

    As per my understanding this should play audio on the headset, because we have not enabled the speakerphone. However, the audio is playing on the speakerphone.

    1. Am I doing something wrong here?
    2. What would be a way to switch between the internal speaker and the speakerphone dynamically for the same piece of code?

    Any help will be appreciated.

    Read the article

  • Implementing coroutines in Java

    - by JUST MY correct OPINION
    This question is related to my question on existing coroutine implementations in Java. If, as I suspect, it turns out that there is no full implementation of coroutines currently available in Java, what would be required to implement them?

    As I said in that question, I know about the following:

    1. You can implement "coroutines" as threads/thread pools behind the scenes.
    2. You can do tricksy things with JVM bytecode behind the scenes to make coroutines possible.
    3. The so-called "Da Vinci Machine" JVM implementation has primitives that make coroutines doable without bytecode manipulation.
    4. There are also various JNI-based approaches to coroutines.

    I'll address each one's deficiencies in turn.

    Thread-based coroutines. This "solution" is pathological. The whole point of coroutines is to avoid the overhead of threading, locking, kernel scheduling, etc. Coroutines are supposed to be light and fast and to execute only in user space. Implementing them in terms of full-tilt threads with tight restrictions gets rid of all the advantages.

    JVM bytecode manipulation. This solution is more practical, albeit a bit difficult to pull off. This is roughly the same as jumping down into assembly language for coroutine libraries in C (which is how many of them work), with the advantage that you have only one architecture to worry about and get right. It also ties you down to running your code only on fully compliant JVM stacks (which means, for example, no Android) unless you can find a way to do the same thing on the non-compliant stack. If you do find a way to do this, however, you have now doubled your system complexity and testing needs.

    The Da Vinci Machine. The Da Vinci Machine is cool for experimentation, but since it is not a standard JVM its features aren't going to be available everywhere. Indeed, I suspect most production environments would specifically forbid the use of the Da Vinci Machine. Thus I could use it for cool experiments but not for any code I expect to release to the real world. This also has a problem similar to the JVM bytecode manipulation solution above: it won't work on alternative stacks (like Android's).

    JNI implementation. This solution renders the point of doing this in Java at all moot. Each combination of CPU and operating system requires independent testing, and each is a point of potentially frustrating subtle failure. Alternatively, of course, I could tie myself down to one platform entirely, but this, too, makes the point of doing things in Java entirely moot.

    So... is there any way to implement coroutines in Java without using one of these four techniques? Or will I be forced to use the one of those four that smells the least (JVM manipulation) instead?

    Read the article

  • functional dependencies vs type families

    - by mhwombat
    I'm developing a framework for running experiments with artificial life, and I'm trying to use type families instead of functional dependencies. Type families seem to be the preferred approach among Haskellers, but I've run into a situation where functional dependencies seem like a better fit. Am I missing a trick?

    Here's the design using type families. (This code compiles OK.)

      {-# LANGUAGE TypeFamilies, FlexibleContexts #-}

      import Control.Monad.State (StateT)

      class Agent a where
        agentId :: a -> String
        liveALittle :: Universe u => a -> StateT u IO a
        -- plus other functions

      class Universe u where
        type MyAgent u :: *
        withAgent :: (MyAgent u -> StateT u IO (MyAgent u)) -> String -> StateT u IO ()
        -- plus other functions

      data SimpleUniverse = SimpleUniverse
        { mainDir :: FilePath
        -- plus other fields
        }

      defaultWithAgent :: (MyAgent u -> StateT u IO (MyAgent u)) -> String -> StateT u IO ()
      defaultWithAgent = undefined -- stub
      -- plus default implementations for other functions

      --
      -- In order to use my framework, the user will need to create a type
      -- that implements the Agent class...
      --

      data Bug = Bug String deriving (Show, Eq)

      instance Agent Bug where
        agentId (Bug s) = s
        liveALittle bug = return bug -- stub

      --
      -- .. and they'll also need to make SimpleUniverse an instance of Universe
      -- for their agent type.
      --

      instance Universe SimpleUniverse where
        type MyAgent SimpleUniverse = Bug
        withAgent = defaultWithAgent -- boilerplate
        -- plus similar boilerplate for other functions

    Is there a way to avoid forcing my users to write those last two lines of boilerplate? Compare with the version using fundeps, below, which seems to make things simpler for my users. (The use of UndecidableInstances may be a red flag.) (This code also compiles OK.)

      {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
                   FlexibleInstances, UndecidableInstances #-}

      import Control.Monad.State (StateT)

      class Agent a where
        agentId :: a -> String
        liveALittle :: Universe u a => a -> StateT u IO a
        -- plus other functions

      class Universe u a | u -> a where
        withAgent :: Agent a => (a -> StateT u IO a) -> String -> StateT u IO ()
        -- plus other functions

      data SimpleUniverse = SimpleUniverse
        { mainDir :: FilePath
        -- plus other fields
        }

      instance Universe SimpleUniverse a where
        withAgent = undefined -- stub
        -- plus implementations for other functions

      --
      -- In order to use my framework, the user will need to create a type
      -- that implements the Agent class...
      --

      data Bug = Bug String deriving (Show, Eq)

      instance Agent Bug where
        agentId (Bug s) = s
        liveALittle bug = return bug -- stub

      --
      -- And now my users only have to write stuff like...
      --

      u :: SimpleUniverse
      u = SimpleUniverse "mydir"

    Read the article

  • Optimizing sparse dot-product in C#

    - by Haggai
    Hello. I'm trying to calculate the dot product of two very sparse associative arrays. The arrays contain an ID and a value, so the calculation should be done only on those IDs that are common to both arrays, e.g.

      <(1, 0.5), (3, 0.7), (12, 1.3)> * <(2, 0.4), (3, 2.3), (12, 4.7)> = 0.7*2.3 + 1.3*4.7

    My implementation (call it dict) currently uses Dictionaries, but it is too slow for my taste.

      double dot_product(IDictionary<int, double> arr1, IDictionary<int, double> arr2)
      {
          double res = 0;
          double val2;
          foreach (KeyValuePair<int, double> p in arr1)
              if (arr2.TryGetValue(p.Key, out val2))
                  res += p.Value * val2;
          return res;
      }

    The full arrays have about 500,000 entries each, while the sparse ones have only tens to hundreds of entries each.

    I did some experiments with toy versions of dot products. First I tried multiplying just two double arrays to see the ultimate speed I could get (let's call this "flat"). Then I tried changing the implementation of the associative-array multiplication to use an int[] ID array and a double[] values array, walking along both ID arrays together and multiplying when they are equal (let's call this "double"). I then tried running all three versions in debug or release, with F5 or Ctrl-F5. The results are as follows:

      debug F5:    dict: 5.29s  double: 4.18s (79% of dict)  flat: 0.99s (19% of dict, 24% of double)
      debug ^F5:   dict: 5.23s  double: 4.19s (80% of dict)  flat: 0.98s (19% of dict, 23% of double)
      release F5:  dict: 5.29s  double: 3.08s (58% of dict)  flat: 0.81s (15% of dict, 26% of double)
      release ^F5: dict: 4.62s  double: 1.22s (26% of dict)  flat: 0.29s ( 6% of dict, 24% of double)

    I don't understand these results. Why isn't the dictionary version optimized in release F5 as the double and flat versions are? Why is it only slightly optimized in the release ^F5 version while the other two are heavily optimized? Also, since converting my code to the "double" scheme would mean lots of work - do you have any suggestions for how to optimize the dictionary one?

    Thanks! Haggai
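
    For reference, a sketch of the merge-style "double" scheme described above (written here in C++ as a language-neutral illustration; the question's code is C#): walk the two ID-sorted arrays in lockstep, multiplying only on matching IDs, which is O(n1 + n2) with no hashing at all.

      #include <cstddef>

      // Sparse dot product over two parallel, ID-sorted arrays.
      double sparse_dot(const int* ids1, const double* vals1, std::size_t n1,
                        const int* ids2, const double* vals2, std::size_t n2) {
          double res = 0.0;
          std::size_t i = 0, j = 0;
          while (i < n1 && j < n2) {
              if (ids1[i] == ids2[j]) {
                  res += vals1[i] * vals2[j];   // common ID contributes to the product
                  ++i; ++j;
              } else if (ids1[i] < ids2[j]) {
                  ++i;                          // advance whichever array has the smaller ID
              } else {
                  ++j;
              }
          }
          return res;
      }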

    Read the article

  • iPhone OpenGL scrolling background jumps when texture is drawn for first time

    - by Magnum39
    I have been fighting a problem for a while now and would appreciate any help anybody could give.

    I have a sprite that moves within a landscape. The sprite remains in the center of the screen and the background moves to simulate that the sprite is moving within the landscape. I have split the landscape into sections so that I only draw the sections of the landscape that I need (those that are on screen).

    The problem: as a new texture section of the landscape appears on the screen (is drawn for the first time), the movement jumps, almost like a frame is missed. I have done some timing experiments and I do not think a frame is missed. My processing is well below the 30 fps that I have the animation set to. It only happens the first time the texture section is drawn. Is there something extra that is done the first time a texture is drawn?

    Here is the code:

      - (void) render {
          // Sets up an array of values to use as the sprite vertices.
          const GLfloat sVerts[] = {
              -1.6f, -1.6f,
               1.6f, -1.6f,
              -1.6f,  1.6f,
               1.6f,  1.6f,
          };
          static const GLfloat sTexCoords[] = {
              0.0, 1.0,
              1.0, 1.0,
              0.0, 0.0,
              1.0, 0.0
          };
          glDisableClientState(GL_COLOR_ARRAY);
          glEnableClientState(GL_VERTEX_ARRAY);
          glEnableClientState(GL_TEXTURE_COORD_ARRAY);

          // Setup opengl to draw the object in correct orientation, size, position, etc
          glLoadIdentity();

          // Enable use of the texture
          glEnable(GL_TEXTURE_2D);
          glVertexPointer(2, GL_FLOAT, 0, sVerts);
          glTexCoordPointer(2, GL_FLOAT, 0, sTexCoords);

          // draw the texture
          // set the position of the first tile
          float xOffset = -4.8;
          float yOffset = 4.8;
          int i;
          int y;
          int currentTexture = textureA;
          for (i = 0; i < 2; i++) {
              for (y = 0; y < 2; y++) {
                  // test for the texture tile on the screen; if not on screen then do not draw
                  float localX = xOffset + (3.21 * y);
                  float localY = yOffset - (3.21 * i);
                  float xDiff = monkeyX - localX;
                  float yDiff = monkeyY - localY;
                  if (((xDiff < 3.2) && (xDiff > -3.2)) && ((yDiff < 2.7) && (yDiff > -2.7))) {
                      // bind the texture and set the vertex data pointers
                      glBindTexture(GL_TEXTURE_2D, spriteTexture[currentTexture]);

                      // move to draw position for the texture
                      glLoadIdentity();
                      glTranslatef((localX + self.positionX), (localY + self.positionY), 0.0);

                      // draw the texture
                      glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
                  }
                  currentTexture++;
              }
          }
      }

    Read the article

  • Nginx case-insensitive reverse proxy rewrites

    - by BrianM
    I'm looking to set up an nginx reverse proxy to make some upcoming server moves and load-balanced implementations much easier within our apps. Since our servers are all IIS, case sensitivity hasn't been an issue, but now with nginx it's becoming one for me. I am simply looking to do a rewrite regardless of case.

    Infrastructure notes:

    - All backend servers are IIS
    - Most services are WCF services
    - I am trying to simplify the URLs so I can move services around as we continue to build out

    I can't make my location case-insensitive due to the following error:

      nginx: [emerg] "proxy_pass" cannot have URI part in location given by regular expression, or inside named location, or inside "if" statement, or inside "limit_except" block in /etc/nginx/sites-enabled/test.conf:101

    The main part of my conf file where I am trying to handle the rewrite is as follows:

      location /svc_test {
          proxy_set_header x-real-ip $remote_addr;
          proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
          proxy_set_header host $http_host;
          proxy_pass http://backend/serviceSite/WFCService.svc;
      }

      location ~* /test {
          rewrite ^/(.*)/$ /svc_test/$1 last;
      }

    It's the /test location that I can't get figured out. If I call http://nginxserver/svc_test/help I get the WCF help page to display correctly and I can make all available REST calls. This HAS to be a boneheaded regex issue on my part, but I have tried several variations and all I can get are 404 or 500 errors from nginx. This is NOT rocket science, so can someone point me in the right direction so I can look like an idiot and just move on?
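
    One observation worth checking (a hedged sketch, not a verified fix): the rewrite pattern ^/(.*)/$ only matches URIs that end in a slash, so a request like /test/help never matches and falls through to a 404. Since the ~* location has already matched case-insensitively, the rewrite can simply replace the first path segment and restart matching against the case-sensitive /svc_test prefix location:

      location ~* ^/test(/.*)?$ {
          # the location regex already handled case-insensitivity;
          # strip the first segment and re-match against /svc_test
          rewrite ^/[^/]+(/.*)?$ /svc_test$1 last;
      }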

    Read the article

  • Where can I ask questions that aren't IT questions?

    - by Adam Davis
    Thank you for your confidence in our abilities, but have you read this?

    FAQ: We get a lot of interesting technical questions on here, but Server Fault is meant to be first and foremost a resource for system administrators and IT professionals. In other words, Server Fault isn't meant to cater to every computer problem - just those that only exist in or can best be solved in the IT and sysadmin domain.

    Yes, someone here might be able to help you, but you'll find that other forums more focused on your topic can give you a much better answer than a bunch of IT professionals. It's likely that your question will be downvoted, closed, and in some cases marked "offensive." It's not that we hate you; it's just that we like to keep our corn pops separate from our cocoa puffs. In that vein, here are a bunch of other forums where you can get help. (This list contains the top two forums for each category as voted for below.)

    - Programming: Stack Overflow
    - Consumer Level Computer Hardware: Ars Technica OpenForum, Tom's Hardware Forums
    - Computer Software: Anandtech Forum
    - Web Design/Hosting/CMS: SitePoint Forums, Web Hosting Talk
    - Math, Science, Engineering: Physics Forums, PlanetMath
    - Other/General: Ask Metafilter, Google Search, Mahalo Answers

    Check out the answers below for even more suggestions!

    What forums can people go to to ask the questions that are off topic here? Please list only ONE forum per answer so votes can bring the best forums to the top.

    Return to FAQ Index

    Read the article

  • 'pskill \\hostname winlogon' might budge a server "stuck rebooting", but why?

    - by Snoi
    Question: executing the remote (Sysinternals) command

      pskill \\machine winlogon

    can budge a server that is stuck rebooting, but how/why does this work? And how do you know which service to kill?

    To recreate (e.g.): you run Windows Update, allow a reboot, and ... NOTHING! RDP gets cut off but the server does not reboot. Just about every other service seems to stay up.

    Further background: I've faced this problem on VMs hosted around the planet for some years, and have used various sc.exe and shutdown commands to learn the state of, and attempt remote reboot of, servers in such a state, with limited success. Most datacentres don't offer any way to see the true console or power off/on such machines. They charge $$ for you to call them to do such simple things after hours, when you nearly always have to run your maintenance tasks. E.g.:

      NET USE \\machine\IPC$ /USER:login password
      sc \\machine query RpcSs
      sc \\machine query TermService
      sc \\machine query wuauserv
      tasklist /s machine

    This occasionally works for me...

      shutdown /m \\machine /r /f /t: 0

    ...but more often than not it fails with:

      A system shutdown is in progress (1115).

    I found this question, and the answer by @Tweek, and it worked really well, but was I just lucky? Can not RDP to Win 2003 box or initiate remote restart. @Tweek said to run:

      pskill \\hostname winlogon

    ...and that got me past this situation in a new way (Server 2008 R2 in my most recent case) - really useful! I just need to understand whether I got lucky or there is more science here. What I'd like to know is: why the winlogon process? @Livne said to use "tasklist /s HostName" to see what the culprit is, but how do you tell from the listed output? It's just a list of running tasks, etc. From that I would not know what to look for, nor could I see anything about the winlogon process that suggested to my eyes it was the one to kill.

    Read the article

  • Scenarios for Bazaar and SVN interaction

    - by Adam Badura
    At our company we are using an SVN repository. I do programming from both work (the main place) and home (mostly experiments and refactoring). Those are two different machines, in different networks, and almost never turned on at the same time (after all, I'm either at work or at home...).

    I wanted to give a distributed version control system a chance and solve some of the issues associated with the SVN-based process and having two machines. Of git, Mercurial, and Bazaar, I chose to start with Bazaar since it claims to be designed to be used by human beings. It's my first time with a distributed system, and having a nice and easy user interface was important to me.

    Features I wanted to achieve:

    - Being able to update from the SVN repository and commit to it.
    - Being able to commit locally the steps of my work on a task.
    - Being able to have a few separate tasks at the same time in their own local branches.
    - Being able to share those branches between my work and home computers.

    As a means of transport between the work and home computers I wanted to use a pen drive. The company server will not work, since I may not install anything there. Neither will a web-service repository, as I may not upload source code to the web (especially if it would be public, which seems to be the common case in free web services). This transport should be Bazaar-based (or whatever else I end up with) so it can be done more or less automatically, but manually copying and pasting some folders or generating patch files (provided they would work - I have bad experience with patch files in SVN) would work as well if there is no better solution. The pen drive should only be used for transportation, though; I do not want to edit or build there.

    I tried following the Bazaar guidelines for integration with SVN, but I failed. I tried both bzr svn-import and bzr checkout, providing the URL of my repository as both https://... and svn+https://.... In some cases it had issues with certificates, but the output specified an argument to ignore them, so I did that. Sometimes it asked me to log in (in other cases maybe it remembered... I don't know), which I did. All runs were very slow (this could be our server's issue) and at some point were interrupted due to a connection interruption (this almost for sure is our server's issue: it truncates the connection after some time). But since (as opposed to SVN) restarting starts anew rather than resuming from the point where it was interrupted, I was unable to reach all of the ~19000 revisions (usually ending somewhere around 150).

    What should I do with Bazaar, and how? Is it possible to somehow import an SVN repository from a local checkout (so that I do not suffer the connection truncation)? I was told that a colleague who used to work with us did something similar (importing an SVN repository with full history) with Mercurial in practically no time. So I'm now seriously considering trying Mercurial, even if only to see whether that will work. But also, what are your general guidelines for achieving the listed features?

    Read the article
