Search Results

Search found 59686 results on 2388 pages for 'time precision'.


  • How should I fix problems with file permissions while restoring from Time Machine?

    - by Andrew Grimm
    While restoring files from a Time Machine backup, I got the error message "The operation can’t be completed because you don’t have permission to access some of the items." because of problems with files in one folder. What's the safest way to deal with this? The folder in question has permissions like:

      Andrew-Grimms-MacBook-Pro:kmer agrimm$ pwd
      /Volumes/Time Machine Backups/Backups.backupdb/Andrew Grimm’s MacBook Pro/2010-12-09-224309/Macintosh HD/Users/agrimm/ruby/kmer
      Andrew-Grimms-MacBook-Pro:kmer agrimm$ ls -ltra
      total 6156896
      drwxrwxrwx@ 19 agrimm staff 680 18 Jan 2008 Saccharomyces_cerevisiae
      -r--------@ 1 agrimm staff 60221852 4 Aug 2009 hs_ref_GRCh37_chrY.fa
      -r--------@ 1 agrimm staff 157488804 4 Aug 2009 hs_ref_GRCh37_chrX.fa
      (snip a few files)
      -r--------@ 1 agrimm staff 676063 27 Oct 2009 NC_001143.fna
      -rw-r--r--@ 1 agrimm staff 6148 23 Mar 2010 .DS_Store
      drwxr-xr-x@ 3 agrimm staff 1530 23 Mar 2010 .
      drwxr-xr-x@ 30 agrimm staff 1054 20 Nov 14:43 ..

    Is it OK to use sudo chmod, or is there a safer approach? Background: files within the original folder on my computer also had odd permissions; I suspect I may have used sudo to copy some files from a thumb drive onto my computer.
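    A minimal sketch of a safer route than chmod-ing inside the backup (Time Machine keeps its own permissions and ACLs inside Backups.backupdb, so changing them in place risks confusing later restores): copy the folder out with elevated privileges, then take ownership of the copy only. The destination path and group are illustrative.

      # copy out of the backup preserving attributes, then own the copy
      sudo cp -Rp "/Volumes/Time Machine Backups/Backups.backupdb/Andrew Grimm’s MacBook Pro/2010-12-09-224309/Macintosh HD/Users/agrimm/ruby/kmer" ~/ruby/
      sudo chown -R agrimm:staff ~/ruby/kmer
      chmod -R u+rw ~/ruby/kmer

    The backup itself stays untouched, which keeps future Time Machine operations safe.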

    Read the article

  • How do I set a time in a time_select view helper?

    - by brad
    I have a time_select in which I am trying to set a time value as follows:

      <%= f.time_select :start_time, :value => (@invoice.start_time ? @invoice.start_time : Time.now) %>

    This always produces a time selector with the current time rather than the value of @invoice.start_time. @invoice.start_time is in fact a datetime object, but it is passed to the time selector just fine if I use:

      <%= f.time_select :start_time %>

    I guess what I'm really asking is how to use the :value option with the time_select helper. Attempts like the following don't seem to produce the desired result:

      <%= f.time_select :start_time, :value => (Time.now + 2.hours) %>
      <%= f.time_select :start_time, :value => "14:30" %>
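    A minimal sketch of one reliable workaround: the form-builder time_select reads its value from the form object's attribute rather than from a :value option, so preset the attribute before the form renders (the controller and action names are illustrative).

      # app/controllers/invoices_controller.rb (hypothetical)
      def edit
        @invoice = Invoice.find(params[:id])
        @invoice.start_time ||= Time.now   # only fills in a blank start_time
      end

      <%# the view then needs no :value option %>
      <%= f.time_select :start_time %>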

    Read the article

  • Why doesn't MySQL support millisecond precision?

    - by Byron Whitlock
    So I just found the most frustrating bug ever in MySQL: apparently the TIMESTAMP field, and the supporting functions, do not support any greater precision than seconds!? I am using PHP and Doctrine, and I really need those microseconds (I am using the actAs: [Timestampable] property). I found that I can use a BIGINT field to store the values. But will Doctrine add the milliseconds? I think it just assigns NOW() to the field. I am also worried that the date-manipulation functions (in SQL) sprinkled through the code will break. I also saw something about compiling a UDF extension. That is not an acceptable workaround, because I or a future maintainer will upgrade and, poof, the change is gone. Has anyone found a suitable workaround?
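    A minimal sketch of the BIGINT route, setting the value client-side instead of with NOW(), which only has second resolution (the table and connection details are assumed, and this is plain PDO rather than Doctrine). For reference, later MySQL versions (5.6+) added native fractional seconds via TIMESTAMP(6)/DATETIME(6).

      <?php
      // assumed schema: CREATE TABLE events (id INT AUTO_INCREMENT PRIMARY KEY,
      //                                      created_us BIGINT NOT NULL);
      $pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');

      // microseconds since the Unix epoch, computed in PHP
      $createdUs = (int) round(microtime(true) * 1000000);

      $stmt = $pdo->prepare('INSERT INTO events (created_us) VALUES (?)');
      $stmt->execute([$createdUs]);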

    Read the article

  • Best way to output a full-precision double into a text file

    - by flevine100
    Hi, I need to use an existing text file to store some very precise values. When read back in, the numbers essentially need to be exactly equivalent to the ones that were originally written. Now, a normal person would use a binary file... for a number of reasons, that's not possible in this case. So... do any of you have a good way of encoding a double as a string of characters (aside from simply increasing the printed precision)? My first thought was to cast the double to a char[] and write out the chars. I don't think that's going to work, because some of the characters are not visible, produce sounds, or even terminate strings ('\0'... I'm talking to you!). Thoughts?
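    A minimal sketch of the standard answer, assuming a correctly rounded C library conversion: 17 significant decimal digits (max_digits10 for an IEEE-754 double) are enough for an exact round trip, and "%a" hexadecimal floats are exact by construction if that format is acceptable.

      #include <cstdio>
      #include <cassert>

      int main() {
          double original = 1.0 / 3.0;

          char buf[64];
          std::snprintf(buf, sizeof buf, "%.17g", original);  // decimal, round-trip safe

          double restored = 0.0;
          std::sscanf(buf, "%lf", &restored);
          assert(restored == original);        // exact round trip, bit for bit

          std::printf("%s -> %a\n", buf, restored);  // %a shows the exact hex form
          return 0;
      }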

    Read the article

  • Floating-point precision nuances

    - by user247077
    Hi, I found this code in NVIDIA's CUDA SDK samples.

      void computeGold(float* reference, float* idata, const unsigned int len)
      {
          reference[0] = 0;
          double total_sum = 0;
          unsigned int i;
          for (i = 1; i < len; ++i)
          {
              total_sum += idata[i-1];
              reference[i] = idata[i-1] + reference[i-1];
          }
          // Here it should be okay to use != because we have integer values
          // in a range where float can be exactly represented
          if (total_sum != reference[i-1])
              printf("Warning: exceeding single-precision accuracy. Scan will be inaccurate.\n");
      }
      // (C) NVIDIA Corp

    Can somebody please tell me a case where the warning would be printed, and most importantly, why? Thank you very much.
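    The warning fires once the running sum grows past 2^24: beyond that a float can no longer represent every integer, so the float prefix sum in reference[] starts dropping low-order bits while the double accumulator stays exact. A minimal sketch of the effect, using 20 million ones as the integer-valued input:

      #include <stdio.h>

      int main(void) {
          double total_sum = 0.0;   /* exact at these magnitudes */
          float  running   = 0.0f;  /* plays the role of reference[] */
          for (long i = 0; i < 20000000; ++i) {  /* sum passes 2^24 = 16777216 */
              total_sum += 1.0f;
              running   += 1.0f;    /* stalls at 16777216.0f: x + 1 rounds back to x */
          }
          if (total_sum != (double)running)
              printf("drifted: float says %.1f, double says %.1f\n",
                     (double)running, total_sum);
          return 0;
      }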

    Read the article

  • Possible loss of precision / [type] cannot be dereferenced

    - by Samuel
    I have been looking around a lot but I simply can't find a nice solution to this...

      Point mouse = MouseInfo.getPointerInfo().getLocation();
      int dx = (BULLET_SPEED * Math.abs(x - mouse.getX())) /
               (Math.abs(y - mouse.getY()) + Math.abs(x - mouse.getX())) *
               (x - mouse.getX()) / Math.abs(x - mouse.getX());

    Written this way I get "possible loss of precision"; when I change e.g. (x - mouse.getX()) to (x - mouse.getX()).doubleValue() it says "double cannot be dereferenced", and when I add intValue() somewhere it says "int cannot be dereferenced". What's my mistake? [x and y are ints; BULLET_SPEED is a static final int.] Thanks!
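    Both messages have one root cause: Point.getX()/getY() return double, so the whole expression becomes double (hence "possible loss of precision" when assigned to an int), and primitives have no methods to call on them (hence "cannot be dereferenced"). A minimal sketch using the Point's public int fields instead, so everything stays in int arithmetic (the constant and method shape are assumed for illustration):

      import java.awt.MouseInfo;
      import java.awt.Point;

      public class BulletAim {
          static final int BULLET_SPEED = 10; // assumed value, for illustration

          static int dx(int x, int y) {
              Point mouse = MouseInfo.getPointerInfo().getLocation();
              int ax = Math.abs(x - mouse.x);          // int fields, not getX()
              int ay = Math.abs(y - mouse.y);
              int sign = Integer.signum(x - mouse.x);  // also avoids the divide-by-abs sign trick
              return BULLET_SPEED * ax / (ay + ax) * sign;
          }

          public static void main(String[] args) {
              System.out.println(dx(100, 100));
          }
      }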

    Read the article

  • Setting precision on std::cout in entire file scope - C++ iomanip

    - by Ivan
    Hi all, I'm doing some calculations, and the results are being saved in a file. I have to output very precise results, near the precision of a double, and I'm using iomanip's setprecision(int) for that. The problem is that I have to put setprecision everywhere in the output, like this:

      func1() {
          cout << setprecision(12) << value;
          cout << setprecision(10) << value2;
      }

      func2() {
          cout << setprecision(17) << value4;
          cout << setprecision(3)  << value42;
      }

    And that is very cumbersome. Is there a way to set the cout precision and fixed modifier more generally? Thanks
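    A minimal sketch of the key point: precision (and the fixed flag) are persistent stream state, so setting them once on std::cout applies to all subsequent floating-point output until changed again.

      #include <iostream>
      #include <iomanip>
      #include <limits>

      int main() {
          // set once; sticks for the rest of the program (unlike setw)
          std::cout << std::setprecision(std::numeric_limits<double>::max_digits10);
          // equivalently: std::cout.precision(17);
          // add std::cout << std::fixed; here if fixed-point notation is wanted

          double value  = 1.0 / 3.0;
          double value2 = 2.0 / 7.0;
          std::cout << value  << '\n';   // both lines use the precision set above
          std::cout << value2 << '\n';
          return 0;
      }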

    Read the article

  • Is typeid of a type name always evaluated at compile time in C++?

    - by cyril42e
    I wanted to check that typeid is evaluated at compile time when used with a type name (i.e. typeid(int), typeid(std::string)...). To do so, I repeated the comparison of two typeid calls in a loop and compiled with optimizations enabled, in order to see whether the compiler simplified the loop away (judging by the execution time, which is 1 us when it simplifies versus 160 ms when it does not). And I get strange results, because sometimes the compiler simplifies the code, and sometimes it does not. I use g++ (I tried different 4.x versions), and here is the program:

      #include <iostream>
      #include <typeinfo>
      #include <time.h>

      class DisplayData {};
      class RobotDisplay: public DisplayData {};
      class SensorDisplay: public DisplayData {};
      class RobotQt {};
      class SensorQt {};

      timespec tp1, tp2;
      const int n = 1000000000;

      int main()
      {
          int avg = 0;
          clock_gettime(CLOCK_REALTIME, &tp1);
          for(int i = 0; i < n; ++i) {
              // if (typeid(RobotQt) == typeid(RobotDisplay))   // (1) compile time
              // if (typeid(SensorQt) == typeid(SensorDisplay)) // (2) compile time
              if (typeid(RobotQt) == typeid(RobotDisplay) ||
                  typeid(SensorQt) == typeid(SensorDisplay))    // (3) not compile time ???!!!
                  avg++;
              else
                  avg--;
          }
          clock_gettime(CLOCK_REALTIME, &tp2);
          std::cout << "time (" << avg << "): "
                    << (tp2.tv_sec-tp1.tv_sec)*1000000000+(tp2.tv_nsec-tp1.tv_nsec)
                    << " ns" << std::endl;
      }

    The conditions under which this problem appears are not clear, but:

    - if there is no inheritance involved, no problem (always compile time)
    - if I do only one comparison, no problem
    - the problem appears only with a disjunction of comparisons in which all the terms are false

    So is there something I didn't get about how typeid works (is it always supposed to be evaluated at compile time when used with type names?), or might this be a gcc bug in evaluation or optimization? About the context: I tracked the problem down to this very simplified example, but my goal is to use typeid with template types (as partial function template specialization is not possible). Thanks for your help!

    Read the article

  • Using * Width & Precision Specifiers With boost::format

    - by John Dibling
    I am trying to use width and precision specifiers with boost::format, like this:

      #include <boost/format.hpp>
      #include <string>

      int main()
      {
          int n = 5;
          std::string s = (boost::format("%*.*s") % (n*2) % (n*2) % "Hello").str();
          return 0;
      }

    But this doesn't work, because boost::format doesn't support the * specifier; Boost throws an exception when parsing the string. Is there a way to accomplish the same goal, preferably with a drop-in replacement?
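    A minimal sketch of one drop-in workaround: since the width and precision are ordinary integers at the call site, build the printf-style format string itself at runtime (boost::io::group with stream manipulators is another route).

      #include <boost/format.hpp>
      #include <iostream>
      #include <string>

      int main() {
          int n = 5;
          // assemble "%10.10s" dynamically instead of using '*'
          std::string fmt = "%" + std::to_string(n * 2) + "."
                                + std::to_string(n * 2) + "s";
          std::string s = (boost::format(fmt) % "Hello").str();
          std::cout << '[' << s << "]\n";   // [     Hello]
          return 0;
      }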

    Read the article

  • Reading variable with double float precision from a text file with gnuplot

    - by user3636322
    I have a text file containing data in 3 columns, like below:

      0.0100000000 | 0.0058077299 | -0.0000000288
      0.0110000000 | 0.0075128707 | -0.0000000373
      0.0120000000 | 0.0093579693 | -0.0000000465

    I want to read the values from this file in gnuplot and use them to draw graphs. What I do (e.g. to pick the value from row 2, column 3) is:

      ii = 2
      a_0 = system("awk '{ if (NR == " . ii . ") printf \"%f\", $3}' " . datafile)
      a_0 = a_0 + 0.

    but what ends up in a_0 is zero! How can I increase the precision to get the exact value?
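    A minimal sketch of a fix, assuming the file looks exactly as shown: awk's "%f" keeps only 6 decimals by default, so a value like -0.0000000373 prints as -0.000000; and with " | " separators, awk's default whitespace splitting makes $3 the second numeric column rather than the third, so the field separator should be set explicitly.

      ii = 2
      a_0 = system("awk -F'|' '{ if (NR == " . ii . ") printf \"%.10e\", $3}' " . datafile)
      a_0 = a_0 + 0.0
      print a_0    # expect -3.73e-08 for row 2, column 3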

    Read the article

  • warning: '0' flag ignored with precision and ‘%i’ gnu_printf format

    - by morpheous
    I am getting the following warning when compiling some legacy C code on Ubuntu Karmic, using gcc 4.4.1:

      src/filename.c:385: warning: '0' flag ignored with precision and ‘%i’ gnu_printf format

    The snippet that causes the warning is:

      char buffer[256];
      long fnum;
      /* some initialization code here ... */
      sprintf(buffer, "F%03.3i.DTA", (int)fnum);  /* <- warning emitted here */

    I think I understand the warning, but I would like to check here whether I am right, and what the correct way of resolving it is.
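    The warning is accurate: for integer conversions, a precision already means "pad with leading zeros to at least this many digits", so the '0' flag in "%03.3i" is redundant and ignored. A minimal sketch of two quiet equivalents:

      #include <stdio.h>

      int main(void) {
          char buffer[256];
          long fnum = 7;

          sprintf(buffer, "F%.3i.DTA", (int)fnum);  /* precision only    -> F007.DTA */
          puts(buffer);
          sprintf(buffer, "F%03i.DTA", (int)fnum);  /* width + '0' flag  -> F007.DTA */
          puts(buffer);
          return 0;
      }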

    Read the article

  • Speakers, Please Check Your Time

    - by AjarnMark
    Woodrow Wilson was once asked how long it would take him to prepare for a 10 minute speech. He replied "Two weeks". He was then asked how long it would take for a 1 hour speech. "One week", he replied. 2 hour speech? "I'm ready right now," he replied. Whether that is a true story or an urban legend, I don’t really know, but either way, it is a poignant reminder for all speakers, and particularly apropos this week leading up to the PASS Community Summit. (Cross-posted to the PASS Professional Development Virtual Chapter blog #PASSProfDev.)

    What’s the point of that story?  Simply this…if you have plenty of time to do your presentation, you don’t need to prepare much because it is easy to throw in more and more material to stretch out to your allotted time.  But if you are on a tight time constraint, then it will take significant preparation to distill your talk down to only the essential points.

    I have attended seven of the last eight North American Summit events, and every one of them has been fantastic.  The speakers are great, the material is timely and relevant, and the networking opportunities are awesome.  And every year, there is one little thing that just bugs me…speakers going over their allotted time.  Why does it bother me so?  Well, if you look at a typical schedule for a Summit, you’ll see that there are six or more sessions going on at the same time, and only 15 minutes to move from one to another.  If you’re trying to maximize your training dollar by attending something during every session time slot, and you don’t want to be the last guy trying to squeeze into the middle of the row, then those 15 minutes can be critical.  All the more so if you need to stop and use the bathroom or if you have to hike to the opposite end of the convention center.  It is really a bad position to find yourself having to choose between learning the last key points of Speaker A who is going over time, and getting over to Speaker B on time so you don’t miss her key opening remarks.

    And frankly, I think it is just rude.  Yes, the speakers are the function, after all they are bringing the content that the rest of us are paying to learn.  But it is also an honor to be given the opportunity to speak at a conference like this, and no one speaker is so important that the conference would be a disaster without him.  Speakers know when they submit their abstract, long before the conference, how much time they will have.  It has been the same pattern at the Summit for at least the last eight years.  Program Sessions are 75 minutes long.  Some speakers who have a good track record, and meet other qualifying criteria, are extended an invitation to present a Spotlight Session which is 90 minutes (a 20% increase).  So there really is no excuse.  It’s not like you were promised a 2-hour segment and then discovered when you got here that it was only 75 minutes.  In fact, it’s not like PASS advertised 90-minute sessions for everyone and then a select few were cut back to only 75.  As a speaker, you know well before you get here which type of session you are doing and how long it is, so as a professional, you should plan accordingly.

    Now you might think that this only happens to rookies, but I’ll tell you that some of the worst offenders are big-name veterans who draw huge attendance numbers for their sessions.  Some attendees blow this off as, “Hey, it’s so-and-so, and I’d stay here for hours and listen to him/her talk.”  To which I would reply, “Then they should have submitted for a pre- or post-conference day-long seminar instead, but don’t try to squeeze your day-long talk into a 90-minute session.”  Now I don’t really believe that these speakers are being malicious or just selfishly trying to extend their time in the spotlight.  I think that most of them are merely being undisciplined and did not trim their presentation sufficiently, or allowed themselves to get off-track (often in a generous attempt to help someone in the audience with a question or problem that really should have been noted for further discussion after the session).

    So here is my recommendation…my plea, even.  TRIM THE FAT!  Now.  Before it’s too late.  Before you even get on the airplane, take a long, hard look at your presentation and eliminate some of the points that you originally thought you had to make, but in reality are not truly crucial to your main topic.  Delete a few slides.  Test your demos and have them already scripted rather than typing them during your talk.  It is better to cut out too much and end up with plenty of time at the end for Questions & Answers.  And you can always keep some notes on the stuff that you cut out so that you could fill it back in at the end as bonus material if you really do end up with a whole bunch of time on your hands.  But I don’t think you will.  And if you do, that will look even better to the audience as it will look like you’re giving them something extra that not every audience gets.  And they will thank you for that.

    Read the article

  • Time tracker for lxde

    - by deshmukh
    I have only recently started using LXDE, and I am liking it. It is blazing fast, not at all resource-hungry, and just does what I want. The only thing I am missing is a time-tracker tool. I have been using Hamster Time Tracker on GNOME for quite some time. In LXDE, I can still launch the application, but there are no reminders when the time limit is up, etc.; the time tracker is just another window. Is there any way to get Hamster working in LXDE with notifications for time-up, an icon in the panel, etc.? Alternatively, is there another application like Hamster that does all that Hamster does and works in LXDE?

    Read the article

  • What is the point in using real time?

    - by bobobobo
    I understand that a lot of frameworks provide the real time elapsed since the last frame (which should average 16-17 ms): GetTimeElapsedSinceLastFrame, which gives you the wall-clock time. But should we use this information in basic physics simulation? It looks to me to be a bad idea. Say there is a slight lag on the machine, for whatever reason (say a virus scanner starts up). The calculations all jump, and there is no need for this. Why not use a virtual second and ignore wall-clock time? For gameplay on the level of Commander Keen, shouldn't you always use the virtual second and not real time (besides stopwatch timing for racing games)? I don't see a need to use real time rather than a fixed 16 ms time step.
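    A minimal sketch of the usual middle ground, a fixed-timestep loop: the simulation always advances in fixed virtual steps, and the wall clock only decides how many steps to run per frame, with lag spikes clamped so the physics never sees a huge dt (the names and constants here are illustrative).

      #include <chrono>
      #include <cstdio>

      static void updateGame(double dt) { std::printf("step dt=%f\n", dt); } // placeholder

      int main() {
          using clock = std::chrono::steady_clock;
          const double kStep = 1.0 / 60.0;      // fixed ~16.7 ms virtual step
          double accumulator = 0.0;
          auto previous = clock::now();

          for (int frameNo = 0; frameNo < 120; ++frameNo) {   // stand-in game loop
              auto now = clock::now();
              double frame = std::chrono::duration<double>(now - previous).count();
              previous = now;
              if (frame > 0.25) frame = 0.25;   // clamp hiccups (e.g. virus scanner)
              accumulator += frame;

              while (accumulator >= kStep) {    // catch up in fixed virtual steps
                  updateGame(kStep);
                  accumulator -= kStep;
              }
              // render here, optionally interpolating by accumulator / kStep
          }
          return 0;
      }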

    Read the article

  • Issues with time slicing

    - by user12331
    I was trying to see the effect of time slicing, and how it can consume a significant amount of time. Actually, I was trying to divide a certain piece of work among a number of threads and see the effect. I have a two-core processor, so two threads can run in parallel. I was trying to see: if I have work w that is done by 2 threads, and the same work done by t threads with each thread doing w/t of the work, how much of a role does time slicing play? As time slicing is itself time-consuming, I was expecting that when I do the same work with a t-thread process, it will take more time than the two-thread process. Any suggestions?
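    A minimal sketch of the experiment (names and the busy-work loop are assumed): split a fixed amount of CPU-bound work across t threads and time it. On a two-core machine, little gain is expected past t = 2; larger t mostly adds scheduling and time-slicing overhead.

      public class SliceTest {
          static final long TOTAL_ITERS = 2_000_000_000L;

          public static void main(String[] args) throws Exception {
              for (int t : new int[] {1, 2, 4, 8, 16}) {
                  long start = System.nanoTime();
                  Thread[] workers = new Thread[t];
                  long share = TOTAL_ITERS / t;           // each thread does w/t
                  for (int i = 0; i < t; i++) {
                      workers[i] = new Thread(() -> {
                          long sum = 0;
                          for (long j = 0; j < share; j++) sum += j;  // busy work
                          if (sum == 42) System.out.println();  // defeat dead-code elimination
                      });
                      workers[i].start();
                  }
                  for (Thread w : workers) w.join();
                  System.out.printf("t=%d: %d ms%n", t, (System.nanoTime() - start) / 1_000_000);
              }
          }
      }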

    Read the article

  • Estimating file transfer time over network?

    - by rocko
    I am transferring a file from one server to another. To estimate the time it would take to transfer some GBs of data over the network, I am pinging the destination IP and taking the average round-trip time. For example: I ping 172.26.26.36 and get an average round-trip time of x ms. Since ping sends 32 bytes of data each time, I estimate the network speed to be 2*32*8 (bits) / x = y Mbps (multiplying by 2 because x is the average round-trip time). So transferring 5 GB of data will take 5000/y seconds. Am I correct in this method of estimating time? If you find any mistake, or have any other good method, please share.
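    A minimal sketch of the estimate exactly as described, with an assumed average ping time. One caveat worth noting: ping measures the round-trip latency of a tiny packet, not sustained throughput, so this formula badly underestimates real link speed; timing a test transfer of a known-size file is far more reliable.

      rtt_ms = 2.0                    # assumed average round-trip time (ms)
      payload_bits = 32 * 8           # ping payload size per direction, in bits

      mbps = (2 * payload_bits / rtt_ms) * 1000 / 1e6   # the question's formula
      seconds = 5 * 8000 / mbps                          # 5 GB = 40000 Mbit

      print(f"estimated link speed: {mbps:.3f} Mbit/s")
      print(f"estimated time for 5 GB: {seconds:.0f} s")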

    Read the article

  • How to stop Time Machine on a Mac from using a removable disk?

    - by ablmf
    A friend of mine recently bought a Mac, and somehow, when she connects her removable disk to the computer, Time Machine takes control of the device and uses it as a backup device automatically, so she can no longer use the disk for any other purpose. When we connect it to Windows, it is not recognized any more. How can we get it back under control?

    Read the article

  • Apple Time Capsule: can I use it for servers?

    - by Patrick
    Hi, I was wondering if I can use a Time Capsule from a server. Let's say I have an Ubuntu server, and I'm running some websites and web applications on it. I would install these applications on Ubuntu, but then store the "file folders" of each website or application (with images, videos, etc.) on the Time Capsule, leaving only the application files on the server. Is this feasible? Thanks, Patrick

    Read the article

  • How to recycle/reuse/continue Time Machine for a new Mac?

    - by bmargulies
    I have been backing up a MacBook Pro to an external hard disk with Time Machine. I got a new laptop, used the FireWire connector to pull the universe across to it, and started it up. It does not want to just pick up where I left off with the backups; it wants to start a new backup sequence, and thus I need a ton of additional disk space. Does anyone know a way to force it to just incrementally back up to the existing backup set?

    Read the article
