Search Results

Search found 73851 results on 2955 pages for 'time machine'.

Page 39/2955

  • How can I find files added to the system within X minutes of a specific time?

    - by Jack W-H
    I have done a fresh install of Mac OS X Mountain Lion today on a new MacBook. Because this was a new install, when I finally got round to configuring some of my own developer things, I was surprised to find that some app had installed a binary into /usr/local/bin - a single binary called galileod. Interestingly, I can't find anything online about galileod. I had only installed the bare minimum of software at this point. Looking at the file's columns I can see Date Modified was 9th November 2012, but Date Added to the system was today at 17:01. It's now 10:20 PM and I can't remember which software I was installing at that point. So how do I find out which other files were installed to the system within, say, 5 minutes either side of 17:01? EDIT: I found out what galileod was by running galileod --help - it is a binary used with Fitbit to communicate with the USB dongle. So that's the mystery solved - but it would still be interesting to know how to find files added within X minutes of a given time, for future reference.
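
    Finder's "Date Added" column comes from the Spotlight attribute kMDItemDateAdded (you can see it for a single file with mdls), so Spotlight is the most faithful source for this. As a rough alternative, here is a Python sketch that walks a directory and compares the file creation time (st_birthtime, available on macOS) against a window around the time in question; the target date, window and search root are assumptions to adjust:

        import os
        from datetime import datetime, timedelta

        # Assumed target: 5 minutes either side of 17:01 on the "Date Added" day.
        target = datetime(2012, 11, 9, 17, 1)
        window = timedelta(minutes=5)

        for root, dirs, files in os.walk("/usr/local"):
            for name in files:
                path = os.path.join(root, name)
                try:
                    st = os.stat(path)
                except OSError:
                    continue
                # st_birthtime is the creation time on macOS - close to, but not
                # identical to, Finder's "Date Added" (kMDItemDateAdded).
                created = datetime.fromtimestamp(st.st_birthtime)
                if abs(created - target) <= window:
                    print(created, path)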

    Read the article

  • how to back up data from a machine that keeps hanging

    - by Amit Phatarphekar
    Hello - I have a storage server running OpenSolaris, but lately it's been acting up - it hangs at random times due to some SCSI/ATA related error messages. I've tried to fix it without any progress, so I'm giving up now. The machine keeps hanging every 30 minutes or 1 hour... sometimes after 4 hours. It's very unpredictable. So I've decided to just reformat the storage server and start from scratch... maybe I'll just not use Solaris and install something else, since the errors are related to Solaris running on ATA HDDs or something. Question: before I reformat it, I want to back up some of the important data on it. For example, it has a VM with 200 GB disk files, a whole bunch of ISOs stored on it, etc. I'm using a simple scp to copy the files over to a different machine. My issue is that, because the machine hangs, sometimes my file copy is incomplete and I have to start all over again. Let's say I'm trying to copy a 200 GB file which takes about 4 hours... if the machine hangs before the whole file is copied over, I have to recopy the file from scratch. Is there a solution to copy the files over such that if the machine hangs or the network goes down, the copying can resume from where it left off? Like if 50 GB of a 200 GB file was copied and the machine hung, next time it'll just continue copying the rest, instead of starting all over again. Thanks Amit
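
    The usual tool for this is rsync with --partial (and --append-verify in rsync 3.x), which keeps partially transferred files and continues them instead of restarting. The underlying idea is simply "seek past the bytes already copied and append", as in this rough local-copy sketch in Python (a hypothetical helper with made-up paths, not a substitute for rsync's checksumming):

        import os

        def resume_copy(src, dst, chunk=4 * 1024 * 1024):
            """Copy src to dst, continuing from however many bytes dst already has."""
            done = os.path.getsize(dst) if os.path.exists(dst) else 0
            with open(src, "rb") as fin, open(dst, "ab") as fout:
                fin.seek(done)          # skip whatever was copied before the hang
                while True:
                    buf = fin.read(chunk)
                    if not buf:
                        break
                    fout.write(buf)

        resume_copy("/tank/vm/disk0.vmdk", "/backup/disk0.vmdk")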

    Read the article

  • Running docker in VPC and accessing container from another VPC machine

    - by Bogdan Gaza
    I'm having issues while running Docker in an AWS VPC. Here is my setup: I've got two machines running in the VPC, 10.0.100.150 and 10.0.100.151, both with elastic IPs assigned to them, both running in the same internet-enabled subnet. Let's say I'm running a web server that serves static files in a container on the 10.0.100.150 machine. The container IP is 172.17.0.2, and port 8111 is forwarded to port 8111 on the machine. I'm trying to access the static files from my local machine (or another non-VPC machine; I also tried an EC2 instance not running in the VPC) and it works flawlessly. If I try to access the files from the other machine (10.0.100.151), it hangs. I'm using wget to pull the files. I tried to debug it with tcpdump and ngrep, and what I have seen is that the request reaches the container. If I ngrep on the host machine I see the requests going in but no response coming back. If I ngrep on the container I see the requests going in and the response going back. I've tried multiple iptables setups (with postrouting enabled, with manually forwarded ports, etc.) but no success. Help in any way - even debugging directions - would be much appreciated. Thanks!

    Read the article

  • How to have a soft-real-time process in presence of heavily swapping IO-intensive background load?

    - by Vi
    The player is already prioritised:

        schedtool: PID 32301: PRIO 4, POLICY R: SCHED_RR, NICE -20, AFFINITY 0xf
        ionice: realtime: prio 4

    But the music is stuttering anyway. The background load is low priority (SCHED_IDLEPRIO, idle ionice), but it uses a lot of memory (more than is physically available) and does a lot of IO and calculation. Latencytop shows about 1500 ms for "Following symlink", "Writing buffer to disk (sync)", "Page fault" and "Writing a page to disk", both for the background load and for unrelated processes. The load average is 10 and counting. Why can't the system allocate, say, 200 MHz of one of the cores, 32 MB of memory and at least one IO opportunity per second to mplayer to keep it happy, while continuing the calculations in the background? Or: why can't it leave the background task and swap loving each other, but keep the rest of the system behaving as if there were no background load? How can I have RT processes AND a heavy background load simultaneously (without virtual machines)?

    Read the article

  • moving files and directories between two machines, via a third, preserving permissions and usernames

    - by Jarmund
    The situation is as follows: Machine A has a file repository accessible via rsync. Machine B needs the above-mentioned files with all permissions and ownerships intact (including groups etc). Machine C has access to both A and B, but has a completely different set of users. Normally, I would just rsync everything over directly between A and B, but due to severely limited bandwidth at the moment I need something different, as rsync times out after building the list of the 430 files (49 MB uncompressed... can be compressed down to ~7 MB). What I've tried so far: rsync everything over from A to C, tar it, copy the tarball over, and then untar it; however, this messes up the ownership and/or the permissions. To rsync it from A to C, I run this command: rsync --numeric-ids --password-file=/root/rsync_pwd_file -oaPvu rsync://[email protected]/portal_2/ ./portal_2/ ...and from the looks of things, the files do end up on C with the correct ownerships/permissions/flags/everything (not 100% sure, though... are there any more switches I can throw in there? Did I miss something?). Copying the tarball over is simple enough (slow as a one-legged turtle due to the bandwidth, but it checksums out alright). What I'm unsure of is the flags and switches for creating and extracting the tarball, so could someone please provide the full commands for creating a tarball from /root/portal_2 on machine C (with everything intact) and extracting the tarball into /var/ex/portal_2 on machine B? Also, are there any other approaches worth mentioning that could allow me to perform this? I have root access to A and C, whereas I only have rsync access to B. PS: I'm running rsync v2.6.9 on machine B, and unfortunately I do not have the opportunity to upgrade to v3.
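
    For the tar step itself, GNU tar's -p (preserve permissions) and --numeric-owner switches are the usual answer: something like tar -czpf portal_2.tar.gz -C /root portal_2 to create, and, as root, tar -xzpf portal_2.tar.gz --numeric-owner -C /var/ex to extract. The same round trip can be sketched with Python's tarfile module (a rough sketch; extraction must run as root so ownership can be restored, and numeric_owner requires Python 3.5+):

        import tarfile

        # On machine C: pack /root/portal_2, keeping modes and numeric uid/gid.
        with tarfile.open("portal_2.tar.gz", "w:gz") as tar:
            tar.add("/root/portal_2", arcname="portal_2")

        # After copying the tarball to machine B, extract as root; numeric_owner
        # avoids remapping uids/gids by name, which matters because C's users differ.
        with tarfile.open("portal_2.tar.gz", "r:gz") as tar:
            tar.extractall("/var/ex", numeric_owner=True)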

    Read the article

  • ASP.NET MVC: MVC Time Planner is available at CodePlex

    - by DigiMortal
    Almost every week I get e-mails where my dear readers ask for the source code of my ASP.NET MVC and FullCalendar example. I have some great news, guys! I ported my sample application to Visual Studio 2010 and made it available at CodePlex. Feel free to visit the MVC Time Planner page. NB! The current release of MVC Time Planner is the initial one and is basically a conversion from the VS2008 example solution to VS2010. The current source code is not study material as such, but it gives you an idea of how to make the calendars work together. Future releases will introduce many design and architectural improvements, and I have also planned some new features. How does MVC Time Planner look? The image on the right shows what the time planner looks like. It uses the default design elements of ASP.NET MVC applications and jQueryUI. If I find some artistic skills in myself I will improve the design too, of course. :) Currently only the day view of the calendar is available; other views are coming in the near future (a week or two, I hope). Important links: the MVC Time Planner page @ CodePlex, documentation, the release plan, help and support (questions, ideas, other communication), and bugs and feature requests. If you have any questions or are interested in new features, please feel free to contact me through the MVC Time Planner discussion forums.

    Read the article

  • Time Travel 101

    - by Jim Duffy
    I'm thinking maybe I should have used Time Crunching 101 as the title instead... or maybe "Duh Duffy, where have you been? Everyone knows that!" Ok, so maybe you won't actually learn how to travel through time from this post, but you will learn how to cram more learning into one day. We all know you can't make it to every conference, every presentation, or every training session. The good news is that many of those events make their content available to either watch online or download for off-line viewing. The problem is, who has time to sit and watch all those presentations in real time? Not me. One trick I use is to view the content at an increased play rate. Why listen to a boring speaker like me drone on for the entire length of the session when you can listen to them drone on in almost half the time? :-) I view nearly all off-line content with Windows Media Player, though I'm sure you can implement this idea with any media playback software. The idea is changing the playback speed you view the content at. With Windows Media Player you can change the play speed from the menu system. Once you have the Play Speed Settings panel open you can specify the playback speed. Depending on the content and the presenter I can typically listen at between 1.6 and 2.0 times normal speed. My Florida edumacation taught me that playing the video back at twice the speed means I'll listen to it twice as fast, and that means I can view it in almost 1/2 the time. Too bad it won't make me twice as smart. :-) I hope this helps you speed your way through more training content. Have a day. :-|

    Read the article

  • Passing elapsed time to the update function from the game loop

    - by Sri Harsha Chilakapati
    I want to pass the elapsed time to the update() method, as this would make it easy to implement animations and other time-related concepts. Here's my game loop:

        public void gameLoop(){
            boolean running = true;
            long gameTime = getCurrentTime();
            long elapsedTime = 0;
            long lastUpdateTime = 0;
            int loops;
            while (running){
                loops = 0;
                while (getCurrentTime() > gameTime && loops < Global.MAX_FRAMESKIP){
                    elapsedTime = getCurrentTime() - lastUpdateTime;
                    lastUpdateTime = getCurrentTime();
                    update(elapsedTime);
                    gameTime += SKIP_STEPS;
                    loops++;
                }
                displayGame();
            }
        }

    The getCurrentTime() method:

        public long getCurrentTime(){
            return (System.nanoTime()/1000000);
        }

    The update() method:

        long time = 0;

        public void update(long elapsedTime){
            time += elapsedTime;
            if (time >= 1000){
                System.out.println("A second elapsed");
                time -= 1000;
            }
        }

    But this is printing the message for 3 seconds. Thanks.

    Read the article

  • Inconsistency between date-time in terminal and clock

    - by Franck
    I can't figure out why there is a 5-hour difference between the GUI clock and the date command in the terminal. My BIOS clock is set to GMT... Any ideas?

        franck@franck-ThinkPad-T61:~$ date
        mercredi 11 avril 2012, 02:48:47 (UTC-0500)
        franck@franck-ThinkPad-T61:~$ sudo dpkg-reconfigure tzdata
        Current default time zone: 'Europe/Paris'
        Local time is now:      Wed Apr 11 09:49:02 CEST 2012.
        Universal Time is now:  Wed Apr 11 07:49:02 UTC 2012.
        franck@franck-ThinkPad-T61:~$ tail /etc/timezone
        Europe/Paris
        franck@franck-ThinkPad-T61:~$ date
        mercredi 11 avril 2012, 02:49:21 (UTC-0500)
        franck@franck-ThinkPad-T61:~$ sudo dpkg-reconfigure --frontend noninteractive tzdata
        Current default time zone: 'Europe/Paris'
        Local time is now:      Wed Apr 11 09:49:27 CEST 2012.
        Universal Time is now:  Wed Apr 11 07:49:27 UTC 2012.
        franck@franck-ThinkPad-T61:~$ date
        mercredi 11 avril 2012, 02:49:30 (UTC-0500)
        franck@franck-ThinkPad-T61:~$ sudo cat /etc/default/rcS
        #
        # /etc/default/rcS
        #
        # Default settings for the scripts in /etc/rcS.d/
        #
        # For information about these variables see the rcS(5) manual page.
        #
        # This file belongs to the "initscripts" package.
        TMPTIME=0
        SULOGIN=no
        DELAYLOGIN=no
        UTC=yes
        VERBOSE=no
        FSCKFIX=no
        franck@franck-ThinkPad-T61:~$ sudo hwclock --show
        mer. 11 avril 2012 07:49:48 CDT  -0.555705 secondes

    Read the article

  • Using the Java SE 8 Date Time API with JPA 2.1

    - by reza_rahman
    Most of you are hopefully aware of the new Date Time API included in Java SE 8. If you are not, you should check it out right now using the Java Tutorial Trail dedicated to the topic. It is a significant leap forward in processing temporal data in Java. For those who already use Joda-Time the changes will look very familiar - very simplistically speaking, the Java SE 8 feature is basically Joda-Time standardized. Quite naturally you will likely want to use the new Date Time API in your JPA domain model to better represent temporal data. The problem is that JPA 2.1 does not support the new API out of the box. So what are you to do? Fortunately you can make use of fairly simple JPA 2.1 Type Converters to use the Date Time API in your JPA domain classes. Steven Gertiser shows you how to do it in an extremely well-written blog entry. Besides explaining the problem and the solution, the entry is actually very good for getting a better understanding of JPA 2.1 Type Converters as well. I think such a set of converters may be a good fit for Apache DeltaSpike as a Java EE 7 extension? In case you are wondering about Java SE 8 support in the JPA specification itself, Nick Williams has already entered an excellent, well-researched JIRA entry asking for such support in a future version of the JPA specification that's well worth looking at. Another possibility, of course, is for JPA providers to start supporting the Date Time API natively before anything is formalized in the specification. What do you think?

    Read the article

  • How does an optimizing compiler react to a program with nested loops?

    - by D.Singh
    Say you have a bunch of nested loops:

        public void testMethod() {
            for (int i = 0; i < 1203; i++) {
                // some computation
                for (int k = 2; k < 123; k++) {
                    // some computation
                    for (int j = 2; j < 12312; j++) {
                        // some computation
                        for (int l = 2; l < 123123; l++) {
                            // some computation
                            for (int p = 2; p < 12312; p++) {
                                // some computation
                            }
                        }
                    }
                }
            }
        }

    When the above code reaches the stage where the compiler will try to optimize it (I believe that's when the intermediate language needs to be converted to machine code?), what will the compiler try to do? Is there any significant optimization that will take place? I understand that the optimizer can break up loops by means of loop fission, but that is only per loop, isn't it? What I mean with my question is: will it take any action exclusively based on seeing the nested loops, or will it just optimize the loops one by one? If the Java VM complicates the explanation, then please just assume it's C or C++ code.

    Read the article

  • How to create a Semantic Network like wordnet based on Wikipedia?

    - by Forbidden Overseer
    I am an undergraduate student and I have to create a semantic network based on Wikipedia. This semantic network would be similar to WordNet (except that it is based on Wikipedia and is concerned with "streams of text/topics" rather than simple words, etc.), and I am thinking of using the Wikipedia XML dumps for the purpose. I guess I need to learn how to parse the XML and "some other things" related to NLP and probably machine learning, but I am in no way sure about anything that comes after the XML parsing. Is parsing the XML dump into text a good starting step? Any alternatives? What would be the steps involved, after parsing the XML into text, to create a functional semantic network? What are the things/concepts I should learn in order to do them? I am not directly asking for book recommendations, but if you have read a book/article that teaches anything related/helpful, please mention it. This may include a reference to already existing implementations on the subject. Please correct me if I was wrong somewhere. Thanks!
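
    For the parsing step: the dumps come as one huge XML file, so the usual approach is to stream it rather than load it into memory. A rough Python sketch with xml.etree.ElementTree.iterparse - the dump file name and the namespace handling are assumptions to adjust for the dump you actually download:

        import bz2
        import xml.etree.ElementTree as ET

        def iter_pages(path="enwiki-latest-pages-articles.xml.bz2"):
            """Yield (title, wikitext) pairs from a Wikipedia XML dump, streaming."""
            with bz2.open(path, "rb") as f:
                title, text = None, None
                for event, elem in ET.iterparse(f, events=("end",)):
                    tag = elem.tag.rsplit("}", 1)[-1]     # drop the xmlns prefix
                    if tag == "title":
                        title = elem.text
                    elif tag == "text":
                        text = elem.text
                    elif tag == "page":
                        yield title, text or ""
                        elem.clear()                      # free memory as we go

        for title, text in iter_pages():
            print(title, len(text))
            break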

    Read the article

  • Good Laptop .NET Developer VM Setup

    - by Steve Brouillard
    I was torn between putting this question on this site or SuperUser. I've tried to do a good bit of searching on this, and while I find plenty of info on why to go with a VM or not, there isn't much practical advice on HOW to best set things up. Here's what I currently have: an HP EliteBook 1540, quad-core, 8 GB memory, 500 GB 7200 RPM HD, eSATA port. Decent machine; it should work just fine. Windows 7 64-bit is the host OS, which also acts as my OS for day-to-day basic stuff (email, Word docs, etc.). VMware Desktop runs a Windows 7 64-bit guest OS with all my .NET dev tools, frameworks, etc. loaded on it. It's configured to use 2 cores and up to 6 GB of memory - I figure the dev environment will need more than email, Word, etc. So this seemed like a good option to me, but I find that with the VM running, things tend to slow down all around on both the host and guest OS. Memory and CPU utilization don't seem to be an issue, but I/O does. I tried running the VM on an external eSATA drive, figuring that the extra channel might pick up the slack. Things only got worse (could be my eSATA enclosure). So, for all of that I have basically two questions in one. Has anyone used this sort of setup, and are there any gotchas - either around the VMware configuration or anything else I may have missed - that you can point me to? Is there another option that might work better? For example, I've considered trying a lighter-weight host OS and running both of my environments as VMs. I tried this with Server 2008 Hyper-V, but I lose too much laptop functionality going this route, so I never completed the setup. I'm not averse to Linux as a host OS, though I'm no Linux expert. If I'm missing any critical info, feel free to ask. Thanks in advance for your help. Steve

    Read the article

  • 2D Animation Smoothness - Delta time vs. Kinematics

    - by viperld002
    I'm animating a sprite in 2D with key frames of rotation and xy-positions. I've recently had a discussion with someone saying that when the device (it happens to be an iPad using cocos2d) hits a performance bump due to whatever else the user may be doing, lag will arise, and that the best way to fight it is to not use actual positions, but velocities, accelerations and torques with kinematics. His message is to evaluate the positions and rotations from these speeds at the current point in time. I've never experienced a situation where I've heard of using kinematics to stem lag in 2D animations, and I am not sure how effective it could be. Also, it seems to be overkill. The application is not networked, so it's all running on a local device. The desired effect is that the animation always plays as closely as it can to the target frame rate. Wouldn't the technique suffer the same problems as just using the time since the last frame, or a fixed time step, since the kinematics would also require some time value to perform the calculation? What techniques could you suggest to best achieve the desired effect? EDIT 1: Thank you for your responses, they are very illuminating. I want to clarify my question before choosing an answer, however, to make sure that this post really serves its purpose. I have a sprite of a ball, and a text file with 3 arrays' worth of information (rotation, translations x, translations y), with each unit of information existing as a key frame to be stepped through (0 to 49 and back to 0 to replay it again). I have this playing by interpolating from the current key frame to the next, every n units of time. The animation is visibly correct when compared to a video I was given of it, and it is smooth because of the interpolations between the key frames. This is the existing state of the project. There are no physics simulated, only a static animation of a ball moving in a way an artist specifically designed. Should I, instead of rotation in degrees and translations by positions in space, derive velocities, accelerations and torques to express this static animation as a function of time? As in, position now = foo(time now), where foo uses kinematics.
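
    For what it's worth, the standard "delta time" approach already gives frame-rate independence without deriving any physics: sample the keyframe tracks as a function of elapsed wall-clock time, so a slow frame simply jumps further along the animation instead of slowing it down. A minimal sketch in Python (the keyframe rate and the track contents are assumptions):

        import time

        FRAME_TIME = 1.0 / 30.0            # assumed: each keyframe covers 1/30 s

        def sample(track, elapsed):
            """Linearly interpolate a list of keyframe values at `elapsed` seconds."""
            pos = (elapsed / FRAME_TIME) % (len(track) - 1)
            i = int(pos)
            frac = pos - i
            return track[i] * (1.0 - frac) + track[i + 1] * frac

        rotation_track = [i * 2.0 for i in range(50)]   # stand-in for the artist's data

        start = time.monotonic()
        for _ in range(3):
            print(sample(rotation_track, time.monotonic() - start))
            time.sleep(0.02)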

    Read the article

  • Need theoretical help, how to comprehend an if-else dependency net

    - by macbie
    I am facing the following issue: I'm writing a program that manages some properties, some of which are general and some specific. Each property is a pair of a key and a value, and, for example: if a general property is given and a specific property with exactly the same key and value already existed, then the general property swaps out the specific one in the register. If there are two identical general properties, both remain in the register. And so on; it is like a net of dependencies. In my case I can handle it intuitively and foresee all the cases, but only because the system is not too vast. What if it were? I have met this kind of problem a few times in many different programs and languages (e.g. working with C semaphores), and my question is: How do you approach this kind of problem? Is this connected with finite state machines, graph theory or something similar? How can I be sure that I have considered the whole system and each possible case? Could you recommend some resources (books, sites) to learn from?
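
    To make the swap rule concrete, here is one possible reading of it as a tiny register class in Python (a sketch only - the names and the exact rule are assumptions based on the description above):

        class PropertyRegister:
            """A general entry replaces a specific entry with the same key and
            value; duplicate general entries all remain."""

            def __init__(self):
                self.specific = {}     # key -> value, at most one per key
                self.general = []      # (key, value) pairs, duplicates allowed

            def add_specific(self, key, value):
                self.specific[key] = value

            def add_general(self, key, value):
                if self.specific.get(key) == value:
                    del self.specific[key]      # the general property swaps it out
                self.general.append((key, value))

        reg = PropertyRegister()
        reg.add_specific("color", "red")
        reg.add_general("color", "red")         # swaps out the specific entry
        reg.add_general("color", "red")         # both general entries remain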

    Read the article

  • Docker vs ESXi for Startup Projects - Deploying Code for Dev Testing

    - by JasonG
    Why hello there little programmer dude! I have a question for you and all of your experience and knowledge. I have an ESXi whitebox that I built, an 8-core dude that sits in the corner. I made a mistake recently and took the key that had ESXi on it, formatted it and used it for something else. No big deal, because the last project I worked on had stalled out. I'm about to pick up another project and now I need to spin up a whole bunch of stuff for CI, QA + DB, a ticket tracker, wikis, etc. I've been hearing a lot about Docker recently, and as this is just a consumer-grade machine, I'm wondering if it may make more sense for me to use Docker on CoreOS and then put everything there - Bamboo or Hudson, JIRA, Confluence, Postgres for the tools to use, then a QA env. I can't really seem to find any documents that directly compare traditional VM infrastructure vs Docker solutions, and I'm wondering if it is fair to compare them. Is there any reason why CoreOS w/ containers would be a strictly worse solution? Or do you have any insight into why I may want to stick with ESXi? I've looked on multiple occasions and can't find a good reason not to. I'm not going to run a production env on the server, so I don't need HA when updating security or the OS, for example, where ESXi would allow me to restart one VM at a time. I can just shut the thing down and bring it back up if I need a reboot, no problem. So what's up with this container stuff? Is it a fair replacement for ESXi? I'm guessing the Atlassian products would run much better and my RAM would go a lot farther using Docker. Probably the CPU would run much cooler too, and my expensive HDD space would be better utilized.

    Read the article

  • How to change the date/time in Python for all modules?

    - by Felix Schwarz
    When I write business logic, my code often depends on the current time - for example, the algorithm which looks at each unfinished order and checks whether an invoice should be sent (which depends on the number of days since the job ended). In these cases creating an invoice is not triggered by an explicit user action but by a background job. Now this creates a problem for me when it comes to testing: I can test invoice creation itself easily, but it is hard to create an order in a test and check that the background job identifies the correct orders at the correct time. So far I have found two solutions:
    1. In the test setup, calculate the job dates relative to the current date. Downside: the code becomes quite complicated, as there are no explicit dates written anymore. Sometimes the business logic is pretty complex for edge cases, so it becomes hard to debug due to all these relative dates.
    2. I have my own date/time accessor functions which I use throughout my code. In the test I just set a current date and all modules get this date, so I can easily simulate an order created in February and check that the invoice is created in April. Downside: third-party modules do not use this mechanism, so it's really hard to integrate and test these.
    The second approach has been far more successful for me, so I'm looking for a way to set the time that Python's datetime and time modules return. Setting the date is usually enough; I don't need to set the current hour or second (even though that would be nice). Is there such a utility? Is there an (internal) Python API that I can use?
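
    For the second approach, the accessor can live in one tiny module of your own; third-party libraries such as freezegun take the other route and monkey-patch datetime itself during a test. A minimal sketch of the accessor idea (the module and function names are made up):

        # clock.py - the only place the rest of the code asks for "today"
        import datetime

        _frozen_date = None

        def set_today(d):
            """Test hook: make current_date() return d; pass None for the real clock."""
            global _frozen_date
            _frozen_date = d

        def current_date():
            return _frozen_date if _frozen_date is not None else datetime.date.today()

        # In a test:
        # set_today(datetime.date(2024, 2, 1))   # create the order "in February"
        # set_today(datetime.date(2024, 4, 1))   # run the invoice job "in April"
        # set_today(None)                        # back to the real clock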

    Read the article

  • 12.04 - why does the network icon disappear every time my machine is turned off

    - by Howard Walker
    After spending an hour or two working out how to find the network icon, I find that when I reboot the machine I have to go through the whole procedure again.
    1. The network menu is not there, so first I have to find the terminal. It has no particular location that I can find, so usually I have to type "terminal" into the Dash.
    2. Then I have to type nm-applet in the terminal.
    3. The network manager icon then appears, but it only works while the terminal window remains open. Shut down the terminal window and the network disappears.
    This is a pain - plus I have to type my password in every time, which is another waste of time, though this has always been a problem. So can anyone tell me how to make the network connect automatically at startup, without any intervention from me?

    Read the article

  • How can you set a time limit for a PowerShell script to run for?

    - by calrain
    I want to set a time limit on a PowerShell (v2) script so it forcibly exits after that time limit has expired. I see in PHP they have commands like set_time_limit and max_execution_time where you can limit how long the script and even a function can execute for. With my script, a do/while loop that is looking at the time isn't appropriate as I am calling an external code library that can just hang for a long time. I want to limit a block of code and only allow it to run for x seconds, after which I will terminate that code block and return a response to the user that the script timed out. I have looked at background jobs but they operate in a different thread so won't have kill rights over the parent thread. Has anyone dealt with this or have a solution? Thanks!

    Read the article

  • Why do VMs need to be "stack machines" or "register machines" etc.?

    - by Prog
    (This is an extremely newbie-ish question). I've been studying a little about virtual machines. It turns out a lot of them are designed very similarly to physical or theoretical computers. I read that the JVM, for example, is a 'stack machine'. What that means (and correct me if I'm wrong) is that it stores all of its 'temporary memory' on a stack, and performs operations on this stack for all of its opcodes. For example, the source code 2 + 3 will be translated to bytecode similar to:

        push 2
        push 3
        add

    My question is this: JVMs are probably written in C/C++ and such. If so, why doesn't the JVM just execute the C code 2 + 3..? I mean, why does it need a stack, or in other VMs 'registers' - like in a physical computer? The underlying physical CPU takes care of all of this. Why don't VM writers simply execute the interpreted bytecode with 'usual' instructions in the language the VM is programmed in? Why do VMs need to emulate hardware, when the actual hardware already does this for us? Again, very newbie-ish questions. Thanks for your help.
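
    A toy interpreter makes the trade-off easier to see: the bytecode only arrives as data at run time, so the VM cannot simply have written 2 + 3 in its own source - it has to keep intermediate values somewhere generic, and a stack is the simplest such place. A minimal Python sketch (an illustration, not how the JVM is actually implemented):

        def run(bytecode):
            """Execute a list of (opcode, *operands) tuples on an explicit stack."""
            stack = []
            for op, *args in bytecode:
                if op == "push":
                    stack.append(args[0])
                elif op == "add":
                    b, a = stack.pop(), stack.pop()
                    stack.append(a + b)     # the host CPU still does the arithmetic
                else:
                    raise ValueError("unknown opcode: " + op)
            return stack.pop()

        # The bytecode for "2 + 3" is just data, e.g. loaded from a class file:
        print(run([("push", 2), ("push", 3), ("add",)]))   # -> 5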

    Read the article

  • Very slow startup after update to 12.04

    - by M Legoh
    My hardware: Dell Latitude D600 laptop; processor: Intel Pentium (R) 1.8 GHz; memory: 1.2 GiB; graphics: R200 (RV250 4C66) x86/MMX/SSE2 TCL DRI2; disk: 37.7 GB. I did an update from 11.?? using the automatic system update that pops up when you switch on/log on. Ever since then, when I switch on the machine it takes approximately one hour to get to the login screen. Sometimes the actual login will take time too, but sometimes I am logged in within a couple of minutes. Once I am logged in I find I can work normally; the response is not swift, but neither is it so slow that I cannot work. I was wondering if there is anything I could do to speed things up. I deliberately did not do a clean install from disk, because I did not want to lose the settings on the machine.

    Read the article

  • Series On Embedded Development (Part 2) - Build-Time Optionality

    - by user12612705
    In this entry on embedded development, I'm going to discuss build-time optionality (BTO). BTO is the ability to subset your software at build time so you only use what is needed. BTO typically pertains more to software providers than to developers of final products. For example, software providers ship source products, frameworks or platforms which are used by developers to build other products. If you provide a source product, you probably don't have to do anything to support BTO, as the developers using your source will only use the source they need to build their product. If you provide a framework, then there are some things you can do to support BTO. Say you provide a Java framework which supports audio and video. If you provide this framework in a single JAR, then developers who only want audio are forced to ship their product with the video portion of your framework even though they aren't using it. In this case, support providing the framework in separate JARs: break the framework into an audio JAR and a video JAR and let the users of your framework decide which JARs to include in their product. Sometimes this is as simple as packaging, but if, for example, the video functionality depends on the audio functionality, it may require coding work to cleanly separate the two. BTO can also work at install time, and this is sometimes overlooked. Let's say you're building a phone application which can use Near Field Communications (NFC) if it's available on the phone, but doesn't require NFC to work. Typically you'd write one app for all phones (saving you time) - both those that have NFC and those that don't - and just use NFC if it's there. However, for better efficiency, you can detect at install time whether the phone supports NFC and skip installing the NFC portion of your app if it doesn't. This requires that you write the app so it can run without the optional NFC code, and that you write your install app so it can detect NFC and do the right thing at install time. Supporting install-time optionality will save persistent footprint on the phone, something your customers will appreciate, your app "neighbors" will appreciate, and you'll appreciate when they save static footprint for you. In the next article, I'll talk about runtime optionality.

    Read the article

  • How do I build (get/download) time.h library?

    - by coffeenet
    I am trying to build a project on Linux via a Makefile. I keep getting a "cannot find <sys/time.h>" error. I asked around, and I was told that my project doesn't have access to the library folders. Therefore, I am trying to solve this problem by using the time library locally, inside my project's folder. I am very new to Linux, so please forgive my question if it sounds naive. I found this, but I don't know which files I need, or how to build the library. http://sourceware.org/git/?p=glibc.git;a=tree;f=time;h=c950c5d4dd90541e8f3c7e1649fcde4aead989bb;hb=master Where can I find the time.h library/package? How do I build the library?

    Read the article
