Search Results

Search found 2224 results on 89 pages for 'scientific computing'.

Page 81/89

  • Architecture for Qt SIGNAL with subclass-specific, templated argument type

    - by Barry Wark
    I am developing a scientific data acquisition application using Qt. Since I'm not a deep expert in Qt, I'd like some architecture advice from the community on the following problem: The application supports several hardware acquisition interfaces, but I would like to provide a common API on top of those interfaces. Each interface has a sample data type and units for its data, so I'm representing a vector of samples from each device as a std::vector of Boost.Units quantities (i.e. std::vector<boost::units::quantity<unit,sample_type> >). I'd like to use a multicast-style architecture, where each data source broadcasts newly received data to one or more interested parties. Qt's signal/slot mechanism is an obvious fit for this style. So, I'd like each data source to emit a signal like

        typedef std::vector<boost::units::quantity<unit,sample_type> > SampleVector;

        signals:
            void samplesAcquired(SampleVector sampleVector);

    for the unit and sample_type appropriate for that device. Since templated QObject subclasses aren't supported by the meta-object compiler, there doesn't seem to be a way to have a (templated) base class for all data sources which defines the samplesAcquired signal. In other words, the following won't work:

        template<typename T, typename U> // sample type and units
        class DataSource : public QObject {
            Q_OBJECT
            ...
        public:
            typedef std::vector<boost::units::quantity<U,T> > SampleVector;
        signals:
            void samplesAcquired(SampleVector sampleVector);
        };

    The best option I've been able to come up with is a two-layered approach:

        template<typename T, typename U> // sample type and units
        class IAcquiredSamples {
        public:
            typedef std::vector<boost::units::quantity<U,T> > SampleVector;
            virtual shared_ptr<SampleVector> acquiredData(TimeStamp ts, unsigned long nsamples);
        };

        class DataSource : public QObject {
            ...
        signals:
            void samplesAcquired(TimeStamp ts, unsigned long nsamples);
        };

    The samplesAcquired signal now gives a timestamp and number of samples for the acquisition, and clients must use the IAcquiredSamples API to retrieve those samples. Obviously data sources must subclass both DataSource and IAcquiredSamples. The disadvantage of this approach appears to be a loss of simplicity in the API... it would be much nicer if clients could get the acquired samples in the connected slot. Being able to use Qt's queued connections would also make threading issues easier, instead of having to manage them in the acquiredData method within each subclass. One other possibility is to use a QVariant argument. This necessarily puts the onus on each subclass to register its particular sample vector type with Q_DECLARE_METATYPE/qRegisterMetaType. Not really a big deal. Clients of the base class, however, will have no way of knowing what type the QVariant value type is, unless a tag struct is also passed with the signal. I consider this solution at least as convoluted as the one above, as it forces clients of the abstract base class API to deal with some of the gnarlier aspects of the type system. So, is there a way to achieve the templated signal parameter? Is there a better architecture than the one I've proposed?
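    For illustration, here is a minimal sketch of the QVariant option discussed above. The class names and the plain double payload are placeholders (a real version would carry the Boost.Units vector), and like any Q_OBJECT class it needs moc:

        #include <QObject>
        #include <QMetaType>
        #include <QVariant>
        #include <vector>

        typedef std::vector<double> SampleVector;   // stand-in for the Boost.Units vector
        Q_DECLARE_METATYPE(SampleVector)

        // Non-templated base: the signal carries a type-erased payload.
        class DataSource : public QObject {
            Q_OBJECT
        signals:
            void samplesAcquired(QVariant samples);
        };

        // Concrete device: only it knows the concrete sample vector type.
        class VoltageSource : public DataSource {
            Q_OBJECT
        public:
            VoltageSource() {
                // Registering once lets queued connections copy the payload.
                qRegisterMetaType<SampleVector>("SampleVector");
            }
            void acquire(const SampleVector& samples) {
                emit samplesAcquired(QVariant::fromValue(samples));
            }
        };

        // A slot that knows the concrete type recovers it with:
        //   SampleVector v = samples.value<SampleVector>();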

    Read the article

  • Randomized quicksort: probability of comparing two elements?

    - by bantu
    I am reading "Probability and Computing" by M.Mitzenmacher, E.Upfal and I have problems understanding how the probability of comparison of two elements is calculated. Input: the list (y1,y2,...,YN) of numbers. We are looking for pivot element. Question: what is probability that two elements yi and yj (ji) will be compared? Answer (from book): yi and yj will be compared if either yi or yj will be selected as pivot in first draw from sequence (yi,yi+1,...,yj-1,yj). So the probablity is: 2/(y-i+1). The problem for me is initial claim: for example, picking up yi in the first draw from the whole list will cause the comparison with yj (and vice-versa) and the probability is 2/n. So, rather the "reverse" claim is true -- none of the (yi+1,...,yj-1) elements can be selected beforeyi or yj, but the "pool" size is not fixed (in first draw it is n for sure, but on the second it is smaller). Could someone please explain this, how the authors come up with such simplified conclusion? Thank you in advance

    Read the article

  • What are practical guidelines for evaluating a language's "Turing Completeness"?

    - by AShelly
    I've read "what-is-turing-complete" and the wikipedia page, but I'm less interested in a formal proof than in the practical implications of being Turing Complete. What I'm actually trying to decide is if the toy language I've just designed could be used as a general-purpose language. I know I can prove it is if I can write a Turing machine with it. But I don't want to go through that exercise until I'm fairly certain of success. Is there a minimum set of features without which Turing Completeness is impossible? Is there a set of features which virtually guarantees completeness? (My guess is that conditional branching and a readable/writeable memory store will get me most of the way there) EDIT: I think I've gone off on a tangent by saying "Turing Complete". I'm trying to guess with reasonable confidence that a newly invented language with a certain feature set (or alternately, a VM with a certain instruction set) would be able to compute anything worth computing. I know proving you can building a Turing machine with it is one way, but not the only way. What I was hoping for was a set of guidelines like: "if it can do X,Y,and Z, it can probably do anything".

    Read the article

  • Performance of Java matrix math libraries?

    - by dfrankow
    We are computing something whose runtime is bound by matrix operations. (Some details below if interested.) This experience prompted the following question: Do folks have experience with the performance of Java libraries for matrix math (e.g., multiply, inverse, etc.)? For example:

    - JAMA: http://math.nist.gov/javanumerics/jama/
    - COLT: http://acs.lbl.gov/~hoschek/colt/
    - Apache Commons Math: http://commons.apache.org/math/

    I searched and found nothing. Details of our speed comparison: We are using Intel Fortran (ifort (IFORT) 10.1 20070913). We have reimplemented it in Java (1.6) using Apache Commons Math 1.2 matrix ops, and it agrees to all of its digits of accuracy. (We have reasons for wanting it in Java.) (Java doubles, Fortran real*8.) Fortran: 6 minutes; Java: 33 minutes; same machine. jvisualvm profiling shows much time spent in RealMatrixImpl.{getEntry,isValidCoordinate} (which appear to be gone in the unreleased Apache Commons Math 2.0, but 2.0 is no faster). Fortran is using ATLAS BLAS routines (dpotrf, etc.). Obviously this could depend on our code in each language, but we believe most of the time is spent in equivalent matrix operations. In several other computations that do not involve libraries, Java has not been much slower, and sometimes much faster.

    Read the article

  • 1k of Program Space, 64 bytes of RAM. Is assembly an absolute must?

    - by Earlz
    (If you're lazy, see the bottom for a TL;DR.) Hello, I am planning to build a new (prototype) project dealing with physical computing. Basically, I have wires. These wires all need to have their voltage read at the same time. More than a few hundred microseconds difference between the readings of each wire will completely screw it up. The Arduino takes about 114 microseconds per reading, so the most I could read is 2 or 3 wires before the latency would skew the accuracy of the readings. So my plan is to have an Arduino as the "master" of an array of ATTinys. The Arduino is pretty cramped for space, but it's a massive playground compared to the tinys. An ATTiny13A has 1k of flash ROM (program space), 64 bytes of RAM, and 64 bytes of (not-durable and slow) EEPROM. (I'm choosing this for price as well as size.) The ATTinys in my system will not do much. Basically, all they will do is wait for a signal from the master, then read the voltage of 1 or 2 wires and store it in RAM (or possibly EEPROM if it's that cramped), and then send it to the master using only 1 wire for data (no room for more than that!). So far, then, all I should have to do is implement trivial voltage-reading code (using the built-in ADC). But it's this communication bit I'm worried about. Do you think a communication protocol (using just 1 wire!) could even be implemented in such constraints? TL;DR: In less than 1k of program space and 64 bytes of RAM (and 64 bytes of EEPROM), do you think it is possible to implement a 1-wire communication protocol? Would I need to drop to assembly to make it fit? I know that currently my Arduino programs linking to the Wiring library are over 8k, so I'm a bit concerned.
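    For scale, the transmit half of a bit-banged single-wire link is tiny even in C, so assembly shouldn't be an absolute must. A minimal sketch using avr-libc; the pin choice, bit period, and lack of framing or error checking are all assumptions:

        #include <stdint.h>
        #include <avr/io.h>
        #include <util/delay.h>   // requires F_CPU to be defined

        // Shift one byte out MSB-first on PB0 with a fixed bit period that the
        // master must agree on; real code would add a start bit and a checksum.
        static void send_byte(uint8_t b) {
            DDRB |= _BV(PB0);                        // drive the wire
            for (uint8_t i = 0; i < 8; ++i) {
                if (b & 0x80) PORTB |= _BV(PB0);
                else          PORTB &= ~_BV(PB0);
                _delay_us(50);                       // agreed bit period
                b <<= 1;
            }
            PORTB &= ~_BV(PB0);                      // leave the wire idle low
        }

        int main(void) {
            for (;;) send_byte(0xA5);                // endless test pattern
        }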

    Read the article

  • Calculate an Internet (aka IP, aka RFC791) checksum in C#

    - by Pat
    Interestingly, I can find implementations of the Internet checksum in almost every language except C#. Does anyone have an implementation to share? Remember, the Internet Protocol specifies that: "The checksum field is the 16 bit one's complement of the one's complement sum of all 16 bit words in the header. For purposes of computing the checksum, the value of the checksum field is zero." More explanation can be found from Dr. Math. There are some efficiency pointers available, but that's not really a large concern for me at this point. Please include your tests! (Edit: Valid comment regarding testing someone else's code, but I am going off of the protocol and don't have test vectors of my own, and I would rather unit test it than put it into production to see if it matches what is currently being used! ;-) Edit: Here are some unit tests that I came up with. They test an extension method which iterates through the entire byte collection. Please comment if you find fault in the tests.

        [TestMethod()]
        public void InternetChecksum_SimplestValidValue_ShouldMatch()
        {
            IEnumerable<byte> value = new byte[1]; // should work for any-length array of zeros
            ushort expected = 0xFFFF;
            ushort actual = value.InternetChecksum();
            Assert.AreEqual(expected, actual);
        }

        [TestMethod()]
        public void InternetChecksum_ValidSingleByteExtreme_ShouldMatch()
        {
            IEnumerable<byte> value = new byte[] { 0xFF };
            ushort expected = 0xFF;
            ushort actual = value.InternetChecksum();
            Assert.AreEqual(expected, actual);
        }

        [TestMethod()]
        public void InternetChecksum_ValidMultiByteExtrema_ShouldMatch()
        {
            IEnumerable<byte> value = new byte[] { 0x00, 0xFF };
            ushort expected = 0xFF00;
            ushort actual = value.InternetChecksum();
            Assert.AreEqual(expected, actual);
        }
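    The question asks for C#, but the algorithm itself is only a few lines in any language. A sketch in C++ of the RFC 1071 sum-fold-complement, which reproduces the expected values in the unit tests above (porting it to a C# extension method is mostly syntax):

        #include <cstdint>
        #include <cstdio>
        #include <vector>

        uint16_t internet_checksum(const std::vector<uint8_t>& data) {
            uint32_t sum = 0;
            // Sum big-endian 16-bit words; a trailing odd byte is zero-padded.
            for (std::size_t i = 0; i < data.size(); i += 2) {
                uint16_t word = static_cast<uint16_t>(data[i]) << 8;
                if (i + 1 < data.size()) word |= data[i + 1];
                sum += word;
            }
            // Fold the carries back in (one's-complement addition), then invert.
            while (sum >> 16) sum = (sum & 0xFFFF) + (sum >> 16);
            return static_cast<uint16_t>(~sum);
        }

        int main() {
            std::printf("%04X\n", (unsigned)internet_checksum(std::vector<uint8_t>(1, 0))); // FFFF
            std::printf("%04X\n", (unsigned)internet_checksum({0xFF}));                     // 00FF
            std::printf("%04X\n", (unsigned)internet_checksum({0x00, 0xFF}));               // FF00
        }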

    Read the article

  • Tips for fixing bad coding/dev habits?

    - by dfafa
    I want to become a better coder, so I have decided to sign up for a computing science program; maybe a formal education can assist me. I started working on smaller projects to learn, but currently I have really bad coding/dev habits which are hindering my productivity as the codebase increases. I have highlighted them below, and perhaps someone could make suggestions (or redirect me to resources) or propose a more efficient method. Most stuff that I made in the past were web apps.

    - I usually develop with PuTTY + nano... I just love the minimalist feel.
    - I use WinSCP and develop directly on my private web server... too lazy to do it on localhost and upload it later.
    - I don't use version control... which one do I need? Sometimes Ctrl+Z doesn't work well.
    - When I run out of ideas for naming variables, I use swear words instead. I swear a lot when I get stuck... how do I deal with anger issues?
    - My code looks ugly, with comments everywhere.
    - I would rather use procedural coding; I find "thinking" in OO difficult and time consuming.
    - I "write first, think later", and refactor code only if I am getting paid for it.
    - I dislike configuring Linux distros, Apache, MySQL, scaling, designing graphics and layouts.
    - I do not like writing tests.
    - I like working alone and do not like sharing code.
    - I have an econ degree.
    - I dislike reading other people's code; I would rather write it on my own.

    It seems my only true desire is to translate my ideas into a working prototype as fast as possible... it seems like I am very uninterested in the other details. Could it be that I am not cut out to be a coder after all? Is going back to study comp sci a bad idea?

    Read the article

  • Using game of life or other virtual environment for artificial (intelligence) life simulation? [closed]

    - by Berlin Brown
    One of my interests in AI focuses not so much on data but more on biological computing. This includes neural networks, mapping the brain, cellular automata, virtual life and environments. Described below is an exciting project that involves developing a virtual environment for bots to evolve in. "Polyworld is a cross-platform (Linux, Mac OS X) program written by Larry Yaeger to evolve Artificial Intelligence through natural selection and evolutionary algorithms." http://en.wikipedia.org/wiki/Polyworld Polyworld is a promising project for studying virtual life, but it is still far from creating an "intelligent autonomous" agent. Here is my question: in theory, what parameters would you use to create an AI environment? Possibly a brain environment? Possibly multiple self-contained life organisms that have their own "brain" or life structures? I would like to create a spin on the Game of Life simulation. What if you had a 64x64 Game of Life grid, but instead of one grid, you had N grids? The N grids are your "life force". If all of the Game of Life entities die in a particular grid, then that entire grid dies. A group of grids makes up a life form. I don't have an immediate goal. First, I want to simulate an environment, visualize what is going on in it with OpenGL, and see if there are any interesting properties to the environment. I then want to add "scarce resources" and see if the AI environment can manage resources adequately.
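    As a starting point for the simulation half, here is a compact sketch (C++) of one update step for a single 64x64 grid with toroidal wrapping; the N-grid "life force" layering would sit on top of something like this:

        #include <array>

        constexpr int N = 64;
        using Grid = std::array<std::array<bool, N>, N>;

        // One synchronous update of Conway's rules on a wrapping (toroidal) grid.
        Grid step(const Grid& g) {
            Grid out{};
            for (int y = 0; y < N; ++y)
                for (int x = 0; x < N; ++x) {
                    int live = 0;
                    for (int dy = -1; dy <= 1; ++dy)
                        for (int dx = -1; dx <= 1; ++dx) {
                            if (dx == 0 && dy == 0) continue;
                            live += g[(y + dy + N) % N][(x + dx + N) % N];
                        }
                    // Survive on 2 or 3 neighbours; be born on exactly 3.
                    out[y][x] = g[y][x] ? (live == 2 || live == 3) : (live == 3);
                }
            return out;
        }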

    Read the article

  • 1k of Program Space, 64 bytes of RAM. Is 1 wire communication possible?

    - by Earlz
    (If you're lazy, see the bottom for a TL;DR.) Hello, I am planning to build a new (prototype) project dealing with physical computing. Basically, I have wires. These wires all need to have their voltage read at the same time. More than a few hundred microseconds difference between the readings of each wire will completely screw it up. The Arduino takes about 114 microseconds per reading, so the most I could read is 2 or 3 wires before the latency would skew the accuracy of the readings. So my plan is to have an Arduino as the "master" of an array of ATTinys. The Arduino is pretty cramped for space, but it's a massive playground compared to the tinys. An ATTiny13A has 1k of flash ROM (program space), 64 bytes of RAM, and 64 bytes of (not-durable and slow) EEPROM. (I'm choosing this for price as well as size.) The ATTinys in my system will not do much. Basically, all they will do is wait for a signal from the master, then read the voltage of 1 or 2 wires and store it in RAM (or possibly EEPROM if it's that cramped), and then send it to the master using only 1 wire for data (no room for more than that!). So far, then, all I should have to do is implement trivial voltage-reading code (using the built-in ADC). But it's this communication bit I'm worried about. Do you think a communication protocol (using just 1 wire!) could even be implemented in such constraints? TL;DR: In less than 1k of program space and 64 bytes of RAM (and 64 bytes of EEPROM), do you think it is possible to implement a 1-wire communication protocol? Would I need to drop to assembly to make it fit? I know that currently my Arduino programs linking to the Wiring library are over 8k, so I'm a bit concerned.

    Read the article

  • Building a PayPal-based membership website - total noob - would appreciate help

    - by Ali
    This is a follow-up on my question about PayPal integration. I'm working on a membership site for racing fans. My membership site has 3 membership levels: free, gold, and premium. When a user signs up, he/she gets a free membership on the spot, but has the option to upgrade to a gold membership for 4 dollars a month or a premium membership for 10 dollars a month. I've gone through the PayPal integration guide a few times, though, and still have only a vague understanding of how to get this to work. I think the recurring payments option would be fine; however, I don't know how to implement this in my system. When a user decides to go for a paid account, i.e. gold or premium from basic, what should I do on both my code side and on the PayPal account side? I'd really appreciate it if anyone would outline what I'd have to do here. Plus, when a user decides to upgrade from, let's say, a gold to a premium account, there is the issue of computing how much should be charged for the upgrade. E.g.: a user has been billed 4 dollars, and the next day opts to go for a premium account; assuming that the surplus for the rest of the month is 5 dollars, and that from then on all payments would be a recurring 10 dollars monthly, how do I implement this? And in case a user decides to downgrade from a premium account of 10 dollars a month to a gold account of 4 dollars a month, how do I handle the surplus which would have to be refunded for that month alone, and change the membership? And likewise, if someone wishes to cancel their membership and go back to a free account, how do I refund whatever is owed and cancel the subscription? I'm sorry if it sounds like I'm asking to be spoon-fed :( I'm quite new to this, this is for a client, and I would really appreciate all the help here and really have to get this working right. Thanks again everyone - waiting for all your replies.
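    None of this is PayPal-specific, and the charge-the-difference policy below is an assumption rather than anything from their API, but the proration arithmetic itself is small. A sketch (C++) for the gold-to-premium upgrade case:

        #include <cstdio>

        // Charge the prorated difference now; the full new rate applies from the
        // next billing cycle. A negative result is the refund owed on a downgrade.
        // All names and the day-based policy are illustrative.
        double prorated_change(double oldMonthly, double newMonthly,
                               int daysLeft, int daysInMonth) {
            double unusedCredit       = oldMonthly * daysLeft / daysInMonth;
            double remainderAtNewRate = newMonthly * daysLeft / daysInMonth;
            return remainderAtNewRate - unusedCredit;
        }

        int main() {
            // Gold ($4) -> Premium ($10) with 15 of 30 days left: charge $3.00 now.
            std::printf("$%.2f\n", prorated_change(4.0, 10.0, 15, 30));
            // Premium -> Gold at the same point: prints $-3.00, i.e. a $3.00 refund.
            std::printf("$%.2f\n", prorated_change(10.0, 4.0, 15, 30));
        }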

    Read the article

  • Magic function `bash` not found

    - by inspectorG4dget
    I have a bunch of simulations that I want to run on a high-performance cluster, on which I must make reservations to get computing time. Since the reservations are bounded by time, I am developing an automation script that I can scp to the cluster and run. This script will then download the relevant simulation files, run them, and upload the results. Part of this automation script is in bash (cp, scp, etc.) and the rest is in Python. In order to develop this automation, I am using an IPython notebook. So far, I've coded all the Python automation stuff in my IPython notebook and am trying to write the bash part of it now. However, it seems that the %%bash magic doesn't work in my IPython notebook. I get the following error when I have this code in my cell:

        %%bash
        echo hi

    Error:

        File "<ipython-input-22-62ec98e35224>", line 3
            echo hi
                  ^
        SyntaxError: invalid syntax

    On a whim, I tried this:

        %%bash
        print "hi"

    Error:

        hi
        ERROR: Magic function `bash` not found.

    So I tried this with %%system, %%!, and %%shell, but none of those work; they all give me the same error. Why is this happening? How can I fix this? Metadata: IPython 0.13.dev, Python 2.7.1, Mac OS X Lion.

    Read the article

  • Performing calculations on subsets of data in R

    - by Vivi
    I want to perform calculations for each company number in the column PERMNO of my data frame, a summary of which can be seen here:

        > summary(companydataRETS)
             PERMNO           RET
         Min.   :10000   Min.   :-0.971698
         1st Qu.:32716   1st Qu.:-0.011905
         Median :61735   Median : 0.000000
         Mean   :56788   Mean   : 0.000799
         3rd Qu.:80280   3rd Qu.: 0.010989
         Max.   :93436   Max.   :19.000000

    My solution so far was to create a variable with all possible company numbers:

        compns <- companydataRETS[!duplicated(companydataRETS[,"PERMNO"]), "PERMNO"]

    and then use a foreach loop with parallel computing, which calls my function get.rho(), which in turn performs the desired calculations:

        rhos <- foreach (i=1:length(compns), .combine=rbind) %dopar%
            get.rho(subset(companydataRETS[,"RET"], companydataRETS$PERMNO == compns[i]))

    I tested it on a subset of my data and it all works. The problem is that I have 72 million observations, and even after leaving the computer working overnight, it still didn't finish. I am new to R, so I imagine my code structure can be improved upon and there is a better (quicker, less computationally intensive) way to perform this same task (perhaps using apply or with, neither of which I understand). Any suggestions?

    Read the article

  • Natural problems to solve using closures

    - by m.u.sheikh
    I have read quite a few articles on closures and, embarrassingly enough, I still don't understand the concept! Articles explain how to create a closure with a few examples, but I don't see any point in paying much attention to them, as they largely look like contrived examples. I am not saying all of them are contrived, just that the ones I found looked contrived, and I didn't see how, even after understanding them, I would be able to use them. So in order to understand closures, I am looking for a few real problems that can be solved very naturally using closures. For instance, a natural way to explain recursion to a person is to explain the computation of n!. It is very natural to understand a problem like computing the factorial of a number using recursion. Similarly, it is almost a no-brainer to find an element in an unsorted array by reading each element and comparing it with the number in question. Also, at a different level, doing object-oriented programming also makes sense. So I am trying to find a number of problems that could be solved with or without closures, but where using closures makes thinking about them and solving them easier. Also, there are two types of closures: each call to a closure can either create a copy of the environment variables or reference the same variables. So what sort of problems can be solved more naturally with which of the closure implementations?
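    For a concrete picture of the copy-versus-reference distinction mentioned above, C++ lambdas make both capture modes explicit, and the counter factory at the end is the classic "natural" closure example: state that survives the call without writing a class.

        #include <cstdio>

        int main() {
            int counter = 0;
            auto by_copy = [=] { return counter; };  // snapshots counter == 0 now
            auto by_ref  = [&] { return counter; };  // always sees the live variable
            counter = 42;
            std::printf("copy: %d, ref: %d\n", by_copy(), by_ref());  // copy: 0, ref: 42

            // A counter factory: n outlives the call that created it.
            auto make_counter = [] {
                int n = 0;
                return [n]() mutable { return ++n; };
            };
            auto next = make_counter();
            int a = next(), b = next(), c = next();
            std::printf("%d %d %d\n", a, b, c);      // 1 2 3
            return 0;
        }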

    Read the article

  • What can I read from the iPad Camera Connection Kit?

    - by HELVETICADE
    I'm building a small controller device that I'd like to partner with a computer. I've settled on using OSC out from my custom-built hardware and am pretty satisfied with what I can get from WOscLib. Two goals I'd like to achieve are portability and a very nice ratio between battery and computing power, and this has lured me towards using iPhone OS to accomplish my goals. I think the iPad would suit my needs perfectly, except that using wifi to broadcast OSC out from my device requires a third device and would destroy the goal of portability, whilst also introducing potential latency and stability headaches. My question is pretty simple: Can I push my OSC out FROM my controller TO an iPad via USB and the Camera Connection Kit? If I could accomplish this, the two major goals of my project would be fulfilled very nicely. This seems like it should be a simple little question, but researching this obsessively over the past few weeks has left me almost more uncertain than if I had done no research at all. I'd really like some more confidence before I go down this route, and it seems like it should be possible. Any insight would be very, very appreciated.

    Read the article

  • Neural Networks or Human-computer interaction

    - by Shahin
    I will be entering my third year of university in the next academic year, once I've finished my placement year as a web developer, and I would like to hear some opinions on the two modules in the title. I'm interested in both; however, I want to pick one that will be relevant to my career and that I can apply to systems I develop. I'm doing an Internet Computing degree; it covers web development, networking, database work and programming. Though I had set myself on becoming a web developer, I'm not so sure about that any more, so I am trying not to limit myself to that area of development. I know HCI would help me as a web developer, but do you think it's worth it? Do you think neural network knowledge could realistically help me in a system I write in the future? Thanks. EDIT: Hi guys, I thought it would be useful to follow up with what I decided to do and how it's worked out. I picked Artificial Neural Networks over HCI, and I've really enjoyed it. Having a peek into cognitive science and machine learning has ignited my interest in the subject area, and I will be hoping to take on a postgraduate project a few years from now, when I can afford it. I have got a job which I am starting after my final exams (which are in a few days), and I was indeed asked if I had done a module in HCI or similar. It didn't seem to matter, as it isn't a front-end developer position! I would recommend taking the module if you have it as an option, as well as any module involving biological computation; it will open up more doors should you want to go on to postgraduate research in the future. Thanks again, Shahin

    Read the article

  • What should be taught in a "Fundamentals of programming" course at university?

    - by Dervin Thunk
    I have started a new question (see here), because I think the topic is of importance in a more general form. The question is now: if you were a professor in a Computer Science department at some university, what would make it into your course? This is a programming course: second term, first year, computer science/computer engineering. Remember you have a limited amount of time, students are at different levels of competence, and some may become scientists, but some will also go on to be programmers in companies of different kinds. You have to cater to all. Bonus: What language? (Although see this question for my current thoughts about this...) Maybe you want to attach a course outline from some university? See here for an even more general question about this. Answer: I can't really summarize this post... I guess it was too subjective. However, it looks like we have to cover the history of computing up to a certain extent, computer architecture (memory, registers, whatever), C, and finally some basic algorithms and data structures in a problem-solving fashion. This will be the bare bones of the course. Thanks all. I will accept the most-voted-up answer to close the thread, as it should be done.

    Read the article

  • Which MySQL fork/version to pick?

    - by Drew
    As most of you know, Sun acquired MySQL (and later Oracle acquired Sun), and during these acquisitions there was a lot of FUD in the MySQL community, which resulted in the creation of various forks. Today we have MySQL from MySQL, Percona (XtraDB) MySQL, OurDelta MySQL, MariaDB, and Drizzle, to name a few. Which brings us to the source of the problem: we are in the process of upgrading our databases (hardware/software), and I would like to know which one of the forks I should go with. Each has its own set of pros/cons. We are currently using MySQL 5.0.x from MySQL, on Linux, on an 8-core machine. Our new hardware is a monster with 32 cores and 32GB of memory, connecting to fast NetApp storage via FC. I would like to stick with MySQL from MySQL, but I have heard horror stories about how badly MySQL 5.1 performs on many cores. I have also heard that MySQL 5.4 performs better on multi-core machines, but that's still not production-ready. In addition, I have heard a lot of good things about Percona builds. This is what I know so far:

    - MySQL 5.1 from MySQL: reliable choice, but doesn't scale well on a big machine
    - Percona: scales well, good backing company; I don't have much experience with it
    - MariaDB: don't know much about it besides that it was founded by original MySQL developers (including Monty)
    - OurDelta: don't know much
    - Drizzle: mostly optimized for cloud computing

    I would like to know the general opinion on this problem. Which build/version should I go with? How are you guys picking your builds/versions? Thanks!

    Read the article

  • Is it possible to send OSC commands to an iPad via the Camera Connection Kit?

    - by HELVETICADE
    I'm building a small controller device that I'd like to partner with a computer. I've settled on using OSC out from my custom-built hardware and am pretty satisfied with what I can get from WOscLib. Two goals I'd like to achieve are portability and a very nice ratio between battery and computing power, and this has lured me towards using iPhone OS to accomplish my goals. I think the iPad would suit my needs perfectly, except that using wifi to broadcast OSC out from my device requires that device to be connected to a third device with a wifi chip, and this would destroy the goal of portability, whilst also introducing potential latency and stability headaches. My question is pretty simple: Can I push OSC commands FROM my controller TO an iPad via USB and the Camera Connection Kit? If I could accomplish this, the two major goals of my project would be fulfilled very nicely. This seems like it should be a simple little question, but researching this obsessively over the past few weeks has left me almost more uncertain than if I had done no research at all. I'd really like some more confidence before I go down this route, and it seems like it should be possible. Any insight would be very, very appreciated.

    Read the article

  • Storing date/times as UTC in database

    - by James
    I am storing date/times in the database as UTC and converting them inside my application back to local time based on the specific timezone. Say, for example, I have the following date/time: 01/04/2010 00:00. Say it is for a country, e.g. the UK, which observes DST (Daylight Saving Time), and at this particular time we are in daylight saving. When I convert this date to UTC and store it in the database, it is actually stored as 31/03/2010 23:00, as the date is adjusted -1 hour for DST. This works fine when you are observing DST at the time of submission. However, what happens when the clocks are adjusted back? When I pull that date from the database and convert it to local time, that particular datetime would be seen as 31/03/2010 23:00, when in reality it was processed as 01/04/2010 00:00. Correct me if I am wrong, but isn't this a bit of a flaw when storing times as UTC? Example of timezone conversion: Basically, what I am doing is storing the date/times of when information is submitted to my system, in order to allow users to do a range report. Here is how I am storing the date/times:

        public DateTime LocalDateTime(string timeZoneId)
        {
            var tzi = TimeZoneInfo.FindSystemTimeZoneById(timeZoneId);
            return TimeZoneInfo.ConvertTimeFromUtc(DateTime.UtcNow, tzi).ToLocalTime();
        }

    Storing as UTC:

        var localDateTime = LocalDateTime("AUS Eastern Standard Time");
        WriteToDB(localDateTime.ToUniversalTime());
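    For what it's worth, the usual resolution is that the conversion back must apply the zone's rules for the stored instant, not the rules in force at read time. A sketch using C++20 <chrono> time-zone support (where the standard library ships the tz database; Europe/London and the dates are just this question's example):

        #include <chrono>
        #include <iostream>

        int main() {
            using namespace std::chrono;
            const time_zone* london = locate_zone("Europe/London");

            // Local wall-clock time during BST: 01/04/2010 00:00.
            zoned_time submitted{london, local_days{2010y/4/1} + 0h};
            auto utc = submitted.get_sys_time();          // 31/03/2010 23:00 UTC
            std::cout << "stored (UTC): " << utc << '\n';

            // Converting back derives the offset from the instant itself, so the
            // original wall time comes back even if DST has since ended:
            std::cout << "displayed:    " << zoned_time{london, utc} << '\n';
        }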

    Read the article

  • How should I progressively enhance this content with JavaScript?

    - by Sean Dunwoody
    The background to this problem is that I'm doing a computing project which involves some drop-down boxes for input and a text input where the user can enter a date. I've used YUI to enhance the form, so the calendar input uses the YUI calendar widget, and the drop-down list is converted into a horizontal list of inputs where the user only has to click once to select any input, as opposed to two clicks with the drop-down box (hope that makes sense; not sure how to explain it clearly). The problem is that in the design section of my project I stated that I would follow progressive enhancement principles. I am struggling, however, to ensure that users without JavaScript are able to view the drop-down box / text input on said page. This is not necessarily because I do not know how, but because the two methods I've tried seem unsatisfactory. Method 1 - I tried using YUI to hide the text box and drop-down list. This seemed like the ideal solution; however, there was quite a noticeable lag when loading the page (especially for the first time): the text box and drop-down list were visible for at least a second. I included the script for this just before the end of the body tag. Is there any way that I can run it on load with YUI? Would that help? Method 2 - Use the noscript tag... however, I am loath to do this, as while it would be a simple solution, I have read various bad things about the noscript tag. Is there a way of making method one work? Or is there a better way of doing this that I am yet to encounter?

    Read the article

  • Regexp in Java problem

    - by Staszek28
    Hello! I found a problem while testing my NLP system. I have the Java regex "(.\.\s)*Dendryt.*", and for the string "v Table of Contents List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . " it just doesn't stop computing. It's clear that this regex's complexity is very high; I will try to refactor it. Do you have any suggestions for me for future regex development? Thanks.

    Read the article

  • Calculating distance from latitude, longitude and height using a geocentric co-ordinate system

    - by Sarge
    I've implemented this method in JavaScript and I'm roughly 2.5% out, and I'd like to understand why. My input data is an array of points represented as latitude, longitude and height above the WGS84 ellipsoid. These points are taken from data collected by a wrist-mounted GPS device during a marathon race. My algorithm was to convert each point to cartesian geocentric co-ordinates and then compute the Euclidean distance (cf. Pythagoras). Cartesian geocentric is also known as Earth-Centred Earth-Fixed, i.e. it's an X, Y, Z co-ordinate system which rotates with the earth. My test data was from a marathon, so the distance should be very close to 42.26km. However, the distance comes to about 43.4km. I've tried various approaches and nothing changes the result by more than a metre. E.g. I replaced the height data with data from the NASA SRTM mission, I've set the height to zero, etc. Using Google, I found two points in the literature where lat, lon, height had been transformed, and my transformation algorithm matches. What could explain this? Am I expecting too much from JavaScript's double representation? (The X, Y, Z numbers are very big, but the differences between two points are very small.) My alternative is to move to computing the geodesic across the WGS84 ellipsoid using Vincenty's algorithm (or similar) and then calculating the Euclidean distance using the two heights, but this seems inaccurate. Thanks in advance for your help!
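    For reference, a sketch of the standard geodetic-to-ECEF conversion (C++ here, but it transliterates directly to JavaScript). If your code matches this, the transform is probably not the culprit; one thing worth checking instead is that summing point-to-point chords over noisy GPS fixes systematically overestimates path length, which could plausibly account for a few percent.

        #include <cmath>
        #include <cstdio>

        struct Ecef { double x, y, z; };

        // Geodetic (degrees, metres above the WGS84 ellipsoid) -> ECEF metres.
        Ecef toEcef(double latDeg, double lonDeg, double h) {
            const double kPi = 3.14159265358979323846;
            const double a   = 6378137.0;             // WGS84 semi-major axis (m)
            const double f   = 1.0 / 298.257223563;   // WGS84 flattening
            const double e2  = f * (2.0 - f);         // first eccentricity squared
            const double lat = latDeg * kPi / 180.0, lon = lonDeg * kPi / 180.0;
            const double sinLat = std::sin(lat);
            const double N = a / std::sqrt(1.0 - e2 * sinLat * sinLat);  // prime vertical radius
            return { (N + h) * std::cos(lat) * std::cos(lon),
                     (N + h) * std::cos(lat) * std::sin(lon),
                     (N * (1.0 - e2) + h) * sinLat };
        }

        // Straight-line (chord) distance; three-argument hypot is C++17.
        double chord(const Ecef& p, const Ecef& q) {
            return std::hypot(p.x - q.x, p.y - q.y, p.z - q.z);
        }

        int main() {
            // Example: two nearby fixes on a course, heights in metres.
            Ecef p = toEcef(51.5007, -0.1246, 20.0);
            Ecef q = toEcef(51.5008, -0.1245, 20.0);
            std::printf("segment: %.2f m\n", chord(p, q));
        }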

    Read the article

  • ssh & script problem

    - by Nishanth
    I am having a strange problem while doing ssh. I am not sure where the message "Unmatched `" is coming from. What I need to do is run script, which logs what I am doing on the terminal to a text file. After ssh:

        Sun Microsystems Inc.   SunOS 5.8   Generic Patch   October 2001
        This is /etc/motd, last updated 3 Feb 2003.
        To learn about the UCS system and other aspects of computing at UL-Lafayette
        visit our home page http://helpdesk.louisiana.edu/ .
        For more information about system use, contact the Help Desk, Stephens Hall,
        Room 201, 482-5516 (x25516), during normal UL office hours; or send e-mail
        to [email protected].
        ATTENTION: Unsecure Telnet and FTP will be turned off soon. Please make
        arrange to use ssh or sftp. Putty(telnet) and WinSCP(ftp) would be a good
        replacement.
        Unmatched `
        d13.ucs.louisiana.edu% bash
        bash-2.04$ script -a myInformation.txt
        Script started, file is myInformation.txt
        Unmatched `
        d13.ucs.louisiana.edu%

    When I tried to start script with the name myInformation.txt, you can see the message I am getting: "Script started, file is myInformation.txt". But again I get that "Unmatched `", and it comes out of bash, as you can notice. What is the problem? Any insights would be greatly appreciated. Note: a file with the name myInformation.txt is being created, but nothing goes into it. I have even tried running certain commands like ls and then exiting the script with Ctrl+D, but when I open the file, nothing is there.

    Read the article

  • Determining if a value is in range, with the 0=360 degree problem

    - by Raven
    Hi, I am making a piece of code for a DirectX app. Its purpose is to not show faces that are not visible. Normally this would just use the Z-buffer, but I'm doing many moves and rotations of the mesh, so I would like to skip them and save computing power. I will describe this with a cube. You are looking from the front, so you see just one face, and you don't need to rotate the 5 that are left. If you had one side of a cube made from 100*100 meshes, it would be great not to have to turn around 50k meshes that you really don't need. So I have stored the X, Y, Z rotation of the camera (the Z rotation I'm not using), and also the X, Y, Z rotation of the faces. In this simplified cube, I would see the faces that make this statement true:

        cRot // camera rotation in degrees
        oRot // face rotation in degrees
        if (oRot.x > cRot.x-90 && oRot.x < cRot.x+90 &&
            oRot.y > cRot.y-90 && oRot.y < cRot.y+90)

    But there comes a problem. If I rotate around, the camera can get to a value of 330, for example. In this state, I would see the front and right sides of the cube. The right side has rotation 270, so that's all right in the if statement. The problem is with the 0 rotation of the front face, which is also 360 degrees. So my question is how to make this statement work, because when I use modulo it will fail for that right side, and that way it won't work for 0=360.
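    A common way around the wrap-around is to compare a normalized angular difference instead of the raw angles. A small sketch (C++) with the 90-degree threshold from the statement above, applied one axis at a time:

        #include <cmath>
        #include <cstdio>

        // Difference normalized into (-180, 180], so 0 and 360 coincide.
        double angleDiff(double a, double b) {
            double d = std::fmod(a - b, 360.0);
            if (d > 180.0)   d -= 360.0;
            if (d <= -180.0) d += 360.0;
            return d;
        }

        // Apply per axis (x, then y), just like the compound test above.
        bool withinQuarter(double oRot, double cRot) {
            return std::fabs(angleDiff(oRot, cRot)) < 90.0;
        }

        int main() {
            std::printf("%d\n", withinQuarter(0.0, 330.0));    // 1: front face at camera 330
            std::printf("%d\n", withinQuarter(270.0, 330.0));  // 1: right face too
            std::printf("%d\n", withinQuarter(180.0, 330.0));  // 0: back face hidden
        }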

    Read the article

  • jQuery: Trouble with draggable across cloned elements

    - by Rosarch
    I'm trying to implement:

    1. User drags a draggable li to a droppable li.
    2. The original li is no longer draggable.
    3. A new li is cloned from the original li and is appended to the droppable li.

    I can't get it to work.

        function moveToTerm(original_course, helper, term) {
            var cloned_course = original_course.clone(true);
            original_course.addClass('already-scheduled');
            original_course.draggable('disable');
            cloned_course.draggable();
            cloned_course.appendTo(term).hide().fadeIn('slow');
        }

    This works fine, except now the cloned_course is not draggable. A droppable li:

        <li class="term ui-droppable">
            <strong>Fall 2010</strong>
            <li class="course">Computing Cultures</li>
            <!-- this course was just dropped. I want it to be draggable but it's not -->
            <li class="course ui-draggable" style="display: list-item;">New Media and Society</li>
        </li>

    What am I doing wrong?

    Read the article
