Search Results

Search found 69645 results on 2786 pages for 'real time systems'.


  • How to save during real-time collaboration

    - by dev.e.loper
    I want multiple users to edit the same document. The problem I'm facing is that when a new user joins, he might see an outdated document. How do I make sure that new users get the most recent changes? Some solutions I thought of: save on every change (I don't like this solution because it will slow things down in the UI and put load on the DB); or, when a new user joins, trigger a save on all other clients and load the document after they have saved (with this there can still be inconsistency). Any other suggestions would be helpful.
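
    A third option worth sketching (a minimal sketch under assumed names, not a definitive design): have the server keep an append-only change log with increasing sequence numbers, so a joining client loads the last saved snapshot and replays every change recorded after it. New users are then always current, without forcing other clients to save and without writing to the DB on every change:

        #include <stdio.h>

        #define MAX_CHANGES 1024

        typedef struct {
            long seq;           /* monotonically increasing change number  */
            char payload[128];  /* the edit itself (insert/delete op, ...) */
        } Change;

        static Change changelog[MAX_CHANGES];
        static long   change_count = 0;

        /* Called on every edit: a cheap in-memory append, not a DB save. */
        void record_change(const char *payload) {
            if (change_count < MAX_CHANGES) {
                changelog[change_count].seq = change_count + 1;
                snprintf(changelog[change_count].payload,
                         sizeof changelog[change_count].payload, "%s", payload);
                change_count++;
            }
        }

        /* A joining client loads the last saved snapshot (its sequence
         * number is `since`) and replays everything recorded after it. */
        void catch_up(long since) {
            for (long i = since; i < change_count; i++)
                printf("apply change #%ld: %s\n",
                       changelog[i].seq, changelog[i].payload);
        }

        int main(void) {
            record_change("insert 'a' at 0");
            record_change("insert 'b' at 1");
            catch_up(0);  /* new client with an empty snapshot sees both */
            return 0;
        }

    Periodic snapshots (say, every N changes) keep the replay short and bound the log's size.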

    Read the article

  • Internet Time tab has disappeared from the Date and Time applet of the Control Panel

    - by Robert Thornton
    Previously, there was an Internet Time tab on the Date and Time applet of the Control Panel, wherein one could force a query of an internet time server and also type in a different server from the ones supplied. However, this tab has now disappeared, and I need to have it back. I should mention that this machine has never been part of a domain, since it seems that domain-joined machines do not have such a tab. I should be obliged to anyone who can help me restore the missing tab. (Windows 7 Home Premium, Service Pack 1.)
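
    As a workaround while the tab is missing, the same operations the tab exposes (choosing a server and forcing a sync) are available from an elevated command prompt via the built-in w32tm tool; for example, to point the Windows Time service at a server of your choice and force a sync:

        w32tm /config /manualpeerlist:"time.nist.gov" /syncfromflags:manual /update
        w32tm /resync

    This doesn't restore the tab itself, but it confirms whether the Windows Time service is otherwise healthy.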

    Read the article

  • Work Time Start / Stop Tracking Software

    - by Shaharyar
    Is there software that lets you keep track of someone's working time digitally? We are growing to the point where we work remotely, and we would like to have fixed working times. All it needs to do is register when someone starts and stops working (someone could log in somewhere or set a flag; it could really be anything). Do you have any ideas?

    Read the article

  • How to make a system time-zone sensitive?

    - by Jerry Dodge
    I need to implement time zones in a very large and old Delphi system, where there's a central SQL Server database and possibly hundreds of client installations around the world in different time zones. The application already interacts with the database by only using the date/time of the database server. So, all the time stamps saved in both the database and on the client machines are the date/time of the database server when it happened, never the time of the client machine. So, when a client is about to display the date/time of something (such as a transaction) which is coming from this database, it needs to show the date/time converted to the local time zone. This is where I get lost. I would naturally assume there should be something in SQL to recognize the time zone and convert a DateTime field dynamically. I'm not sure if such a thing exists though. If so, that would be perfect, but if not, I need to figure out another way. This Delphi system (multiple projects) utilizes the SQL Server database using ADO components, VCL data-aware controls, and QuickReports (using data sources). So, there's many places where the data goes directly from the database query to rendering on the screen, without any code to actually put this data on the screen. In the end, I need to know when and how should I get the properly converted time? There must be a standard method for this, and I'm hoping SQL Server 2008 R2 has this covered...
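
    For what it's worth, SQL Server 2008 does add a zone-aware datetimeoffset type, but the conversion can also live entirely at the display edge: store every timestamp in one reference zone (the database server's, or better, UTC) and convert just before rendering. A minimal sketch of that client-side conversion in C (standing in for the Delphi display code; names are illustrative):

        #include <stdio.h>
        #include <time.h>

        /* `stored_utc` stands in for a timestamp read from the database.
         * time_t counts seconds since the epoch in UTC, so localtime()
         * is the only place the viewer's time zone enters the picture. */
        void display_local(time_t stored_utc) {
            char buf[64];
            struct tm local;
            localtime_r(&stored_utc, &local);  /* applies this client's TZ */
            strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S %Z", &local);
            printf("shown to user: %s\n", buf);
        }

        int main(void) {
            display_local(time(NULL));  /* demo: "now" as the stored value */
            return 0;
        }

    With this split, the database stores one unambiguous instant and every client renders it in its own zone; the conversion happens in the query or a display handler, never in the stored data.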

    Read the article

  • process taking a long time on an Itanium server

    - by Vishwa
    Hi, we have an application which was recently migrated from a PA-RISC server to an Itanium server. After the migration we noticed a significant increase in the time taken to complete a process. When we tracked the time taken by each part, we found that the system time is 7.59 s and the user time is 3.99 s, but the elapsed time is 70.43 s! Due to this huge elapsed time, the overall performance of the application has dropped sharply. I wrote small programs to perform I/O operations on the Itanium; these run faster on the Itanium than on the PA-RISC. What could be the reason for this high elapsed time?
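
    When elapsed (wall-clock) time dwarfs user + system CPU time, the process is waiting (on I/O, locks, the network, or paging) rather than computing, so the first step is usually to confirm that split around the suspect code. A hedged sketch using the POSIX times() call:

        #include <stdio.h>
        #include <unistd.h>
        #include <sys/times.h>

        int main(void) {
            struct tms start, end;
            long hz = sysconf(_SC_CLK_TCK);      /* clock ticks per second */
            clock_t wall_start = times(&start);  /* also returns wall-clock ticks */

            /* ... the slow part of the workload goes here ... */

            clock_t wall_end = times(&end);
            printf("user:    %.2f s\n", (double)(end.tms_utime - start.tms_utime) / hz);
            printf("system:  %.2f s\n", (double)(end.tms_stime - start.tms_stime) / hz);
            printf("elapsed: %.2f s\n", (double)(wall_end - wall_start) / hz);
            /* elapsed >> user + system means the time is spent blocked,
             * not executing, so look at I/O, locking, or memory pressure. */
            return 0;
        }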

    Read the article

  • What are the advantages of programming under an OS as opposed to a bare-metal executive?

    - by gby
    Assume you are presented with an embedded application to program, in C, on a multi-core environment (think a Cavium or Tilera) and need to choose between two environments: code the application under Linux in SMP mode, or code it under a thin bare-metal executive (something like a very minimal RTOS), perhaps with a single core running UP Linux to serve control tasks. For the purpose of this question, assume that both environments provide the same level of performance guarantees in every measurable aspect of runtime performance, including meaningful actions per second, jitter, latency, real-time considerations - the works. (And yes, I realize this is by far not a trivial assumption; bear with me.) How would you justify going with a Linux SMP based solution rather than a bare-metal thin executive solution? The question may seem silly. It certainly seems obvious to me, but I have to convince someone who does not think the same. Could you help make a list of arguments in favor of choosing a real SMP-aware OS (Linux) vs. a bare-metal executive, assuming performance guarantees are NOT an issue? Many thanks

    Read the article

  • airplane operating systems and choice of programming language

    - by adhg
    I was wondering if anyone knows what operating system is used in commercial airplanes (say Boeing or Airbus). Also, what is the preferred real-time programming language? I heard that Ada is used at Boeing, so my question is: why Ada? What criteria did the Boeing guys use to choose this language? (I guess Java wouldn't be a great choice if the garbage collector wakes up exactly at lift-off.) Thanks!

    Read the article

  • XNA GameTime TotalGameTime slower than real time

    - by robasaurus
    I have set up an empty test project consisting of a System.Diagnostics.Stopwatch and this in the draw method:

        spriteBatch.DrawString(font, gameTime.TotalGameTime.TotalSeconds.ToString(), new Vector2(100, 100), Color.White);
        spriteBatch.DrawString(font, stopwatch.Elapsed.TotalSeconds.ToString(), new Vector2(100, 200), Color.White);

    The GameTime.TotalGameTime displayed is slower than the stopwatch (by about 5 seconds per minute) even though GameTime.IsRunningSlowly is always false. Why is this? The reason this is an issue is that I have a server which uses a stopwatch, and it runs faster than my client game. For instance, my client notifies the server that it has dropped a mine which explodes in one minute. Because the stopwatch is faster, the server state explodes the mine before the client does, and they are out of sync. I don't want to have to notify the client when the server explodes it, as this would use unnecessary bandwidth.
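
    Whatever the cause of the drift, the mine desync can be sidestepped by exchanging absolute detonation timestamps instead of durations, so neither side depends on the other's game clock ticking at the same rate. A minimal sketch of the idea in C (the names are hypothetical, not XNA API):

        #include <stdio.h>
        #include <time.h>

        typedef struct {
            time_t detonate_at;  /* absolute wall-clock time, not a duration */
        } MineMsg;

        /* Client drops a mine: it sends "explodes AT this time",
         * not "explodes IN 60 seconds". */
        MineMsg drop_mine(int fuse_seconds) {
            MineMsg m = { time(NULL) + fuse_seconds };
            return m;
        }

        /* Both client and server test against their own wall clock,
         * which advances at the real-world rate on both machines. */
        int mine_should_explode(const MineMsg *m) {
            return time(NULL) >= m->detonate_at;
        }

        int main(void) {
            MineMsg m = drop_mine(60);
            printf("exploded yet? %d\n", mine_should_explode(&m));
            return 0;
        }

    (This assumes the two machines' wall clocks are roughly synchronized, e.g. via NTP; it eliminates only the drift between game time and wall time.)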

    Read the article

  • Issues with time slicing

    - by user12331
    I was trying to see the effect of time slicing, and how much time it can consume. I was trying to divide a certain amount of work among a number of threads and observe the effect. I have a two-core processor, so two threads can run in parallel. I wanted to compare work w done by 2 threads against the same work done by t threads, with each thread doing w/t of the work. How big a role does time slicing play in this? Since time slicing is a time-consuming process, I was expecting that when I do the same work with a t-thread process instead of a two-thread process, the t-thread process would take more time. Any suggestions?
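
    A minimal sketch of the experiment with POSIX threads: split a fixed amount of work w across t threads and time the whole run. On a two-core machine, pushing t past 2 mostly adds scheduling/context-switch overhead rather than speed, and the effect is usually small unless the threads thrash the cache:

        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define WORK 100000000L   /* total iterations, split across threads */
        #define MAX_THREADS 64

        static void *worker(void *arg) {
            long n = *(long *)arg;
            volatile double x = 0;  /* volatile: keep the loop from being optimized out */
            for (long i = 0; i < n; i++)
                x += 1.0;
            return NULL;
        }

        /* Build: cc slices.c -lpthread. Run with the thread count as argv[1]. */
        int main(int argc, char **argv) {
            int t = (argc > 1) ? atoi(argv[1]) : 2;
            if (t < 1) t = 1;
            if (t > MAX_THREADS) t = MAX_THREADS;
            pthread_t tid[MAX_THREADS];
            long share = WORK / t;   /* each thread does w/t of the work */
            struct timespec a, b;

            clock_gettime(CLOCK_MONOTONIC, &a);
            for (int i = 0; i < t; i++)
                pthread_create(&tid[i], NULL, worker, &share);
            for (int i = 0; i < t; i++)
                pthread_join(tid[i], NULL);
            clock_gettime(CLOCK_MONOTONIC, &b);

            /* On a two-core box, expect little change from t=2 upward:
             * extra threads mostly add context-switch overhead. */
            printf("%d threads: %.3f s\n", t,
                   (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9);
            return 0;
        }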

    Read the article

  • Time management and self improvement

    - by Filip
    I hope I can open a discussion on this topic, as this is not a specific problem. It's a topic on which I hope to get some ideas about how people in situations similar to mine manage their time. OK: I've been the single developer on a software project for the last 6-8 months. The project uses several technologies, mainly .NET stuff: WPF, WF, NHibernate, WCF, MySQL, and other third-party SDKs relevant to the nature of the project. My experience and knowledge vary; for example, I have a lot of experience in WPF but much less in WCF. I work full time on the project and I'm curious how other programmers who need to multitask across many areas manage their time. I'm a very applied type of person and prefer to code instead of doing research. I feel that doing research "might" slow down the progress of the project, while I recognize that research, and learning more in the areas where I'm not so strong, will ultimately make me more productive. How would you split up your daily time between productive coding and time to experiment, read blogs, go through tutorials, etc.? I would say that I'm coding about 90%+ of my day and devoting some, but very little, time to research and acquiring new knowledge. Thanks for your replies. I think I will adopt a gradual transition to Dominic's block parts. I kind of knew that coding was taking up way too much of my time, but it feels good having a first version of the project completed and ready. With a few months of focused hard work behind me, I hope to get more time to experiment and expand my knowledge. Now I only hope my boss will cut me some slack and stop pressuring me for features...

    Read the article

  • Daylight Saving Time Visualized

    - by Jason Fitzpatrick
    When you map out the Daylight Saving Time adjusted sunrise and sunset times over the course of the year, an interesting pattern emerges. Chart designer Germanium writes: I tried to come up with the reason for the daylight saving time change by just looking at the data for sunset and sunrise times. The figure represents sunset and sunrise times throughout the year. It shows that the daylight saving time change, marked by the lines (DLS), keeps the sunrise time pretty much constant throughout the whole year, while making the sunset time change a lot. The spread of sunrise times as measured by the standard deviation is 42 minutes, which means that the sunrise time changes within that range over the whole year, while the standard deviation of the sunset times is 1:30 hours. Whatever the argument for doing this is, it's pretty clear that the reason is to keep the sunrise time constant. You can read more about the controversial history of Daylight Saving Time here. Daylight Saving Time Explained [via Cool Infographics]

    Read the article

  • Time Tracking on an Agile Team

    - by Stephen.Walther
    What’s the best way to handle time-tracking on an Agile team? Your gut reaction to this question might be to resist any type of time-tracking at all. After all, one of the principles of the Agile Manifesto is “Individuals and interactions over processes and tools”. Forcing the developers on your team to track the amount of time that they devote to completing stories or tasks might seem like useless bureaucratic red tape: an impediment to getting real work done. I completely understand this reaction. I’ve been required to use time-tracking software in the past to account for each hour of my workday. It made me feel like Fred Flintstone punching in at the quarry mine and not like a professional.

    Why You Really Do Need Time-Tracking

    There are, however, legitimate reasons to track time spent on stories even when you are a member of an Agile team. First, if you are working with an outside client, you might need to track the number of hours spent on different stories for the purposes of billing. There might be no way to avoid time-tracking if you want to get paid. Second, the Product Owner needs to know when the work on a story has gone over the original time estimated for the story. The Product Owner is concerned with Return On Investment. If the team has gone massively overtime on a story, then the Product Owner has a legitimate reason to halt work on the story and reconsider the story’s business value. Finally, you might want to track how much time your team spends on different types of stories or tasks. For example, if your team is spending 75% of their time doing testing then you might need to bring in more testers. Or, if 10% of your team’s time is expended performing a software build at the end of each iteration then it is time to consider better ways of automating the build process.

    Time-Tracking in SonicAgile

    For these reasons, we added time-tracking as a feature to SonicAgile, which is our free Agile Project Management tool. We were heavily influenced by Jeff Sutherland (one of the founders of Scrum) in the way that we implemented time-tracking (see his article http://scrum.jeffsutherland.com/2007/03/time-tracking-is-anti-scrum-what-do-you.html). In SonicAgile, time-tracking is disabled by default. If you want to use this feature then the project owner must enable time-tracking in Project Settings. You can choose to estimate using either days or hours. If you are estimating at the level of stories then it makes more sense to choose days. Otherwise, if you are estimating at the level of tasks then it makes more sense to use hours. After you enable time-tracking, you can assign three estimates to a story:

    Original Estimate – This is the estimate that you enter when you first create a story. You don’t change this estimate.
    Time Spent – This is the amount of time that you have already devoted to the story. You update the time spent on each story during your daily standup meeting.
    Time Left – This is the amount of time remaining to complete the story. Again, you update the time left during your daily standup meeting.

    So when you first create a story, you enter an original estimate that becomes the time left. During each daily standup meeting, you update the time spent and time left for each story on the Kanban. If you had perfect predictive power, then the original estimate would always be the same as the sum of the time spent and the time left. For example, if you predict that a story will take 5 days to complete then on day 3, the story should have 3 days spent and 2 days left.

    Unfortunately, never in the history of mankind has anyone accurately predicted the exact amount of time that it takes to complete a story. For this reason, SonicAgile does not update the time spent and time left automatically. Each day, during the daily standup, your team should update the time spent and time left for each story. For example, the following table shows the history of the time estimates for a story that was originally estimated to take 3 days but, eventually, takes 5 days to complete:

        Day    Original Estimate    Time Spent    Time Left
        Day 1  3 days               0 days        3 days
        Day 2  3 days               1 day         2 days
        Day 3  3 days               2 days        2 days
        Day 4  3 days               3 days        2 days
        Day 5  3 days               4 days        0 days

    In the table above, everything goes as predicted until you reach day 3. On day 3, the team realizes that the work will require an additional two days. The situation does not improve on day 4. All of a sudden, on day 5, all of the remaining work gets done. Real work often follows this pattern. There are long periods when nothing gets done punctuated by occasional and unpredictable bursts of progress. We designed SonicAgile to make it as easy as possible to track the time spent and time left on a story.

    Detecting when a Story Goes Over the Original Estimate

    Sometimes, stories take much longer than originally estimated. There’s a surprise. For example, you discover that a new software component is incompatible with existing software components. Or, you discover that you have to go through a month-long certification process to finish a story. In those cases, the Product Owner has a legitimate reason to halt work on a story and re-evaluate the business value of the story. For example, if the Product Owner discovers that a story will require weeks to implement instead of days, then the story might not be worth the expense. SonicAgile displays a warning on both the Backlog and the Kanban when the time spent on a story goes over the original estimate. An icon of a clock is displayed.

    Time-Tracking and Tasks

    Another optional feature of SonicAgile is tasks. If you enable Tasks in Project Settings then you can break stories into one or more tasks. You can perform time-tracking at the level of a story or at the level of a task. If you don’t break a story into tasks then you can enter the time left and time spent for the story. As soon as you break a story into tasks, you can no longer enter the time left and time spent at the level of the story. Instead, the time left and time spent for a story are rolled up from its tasks. On the Kanban, you can see how the time left and time spent for each task get rolled up into each story. The progress bar for the story is rolled up from the progress bars for each task. The original estimate is never rolled up – even when you break a story into tasks. A story’s original estimate is entered separately from the original estimates of each of the story’s tasks.

    Summary

    Not every Agile team can avoid time-tracking. You might be forced to track time to get paid, to detect when you are spending too much time on a particular story, or to track the amount of time that you are devoting to different types of tasks. We designed time-tracking in SonicAgile to require the least amount of work to track the information that you need. Time-tracking is an optional feature. If you enable time-tracking then you can track the original estimate, time left, and time spent for each story and task. You can use time-tracking with SonicAgile for free. Register at http://SonicAgile.com.

    Read the article

  • get the current time in C

    - by Antrromet
    I want to get the current time of my system. For that I'm using the following code in C:

        time_t now;
        struct tm *mytime = localtime(&now);
        if (strftime(buffer, sizeof buffer, "%X", mytime)) {
            printf("time1 = \"%s\"\n", buffer);
        }

    The problem with this code is that it gives some random time, and the random time is different each run. I want the current time of my system. Can anyone please tell me how to solve this issue?
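
    For reference, the snippet never stores anything in now before handing it to localtime(), so it formats whatever garbage happened to be on the stack; that's the "random time". A corrected, self-contained version:

        #include <stdio.h>
        #include <time.h>

        int main(void) {
            char buffer[64];
            time_t now = time(NULL);  /* the missing step: actually fetch the time */
            struct tm *mytime = localtime(&now);
            if (strftime(buffer, sizeof buffer, "%X", mytime))
                printf("time1 = \"%s\"\n", buffer);
            return 0;
        }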

    Read the article

  • Obscure Operating Systems

    - by DLH
    Do you ever get the urge to try random obscure operating systems? I think it's sometimes just fun to use systems that are not widely used. What obscure operating systems have you tried (or have thought about trying)? I've been looking into Haiku lately.

    Read the article

  • time(NULL) returning different time

    - by cornerback84
    I am trying to get the current time in C using time_t current_time = time(NULL). As I understand it, this returns the current time of the system. I then try to convert it into GMT using struct tm *gmt = gmtime(&current_time). I print both times using the ctime() and asctime() functions. The current time on my system is GMT+1, but gmtime() returns the same time as current_time. I can't understand why gmtime() returns the same time. Any help will be appreciated.
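
    A short demonstration of the moving parts, as a sketch: time_t is seconds since the epoch in UTC by definition, so time() itself carries no zone. The zone only enters when formatting: ctime() formats in the process's local zone, while asctime(gmtime(...)) formats in UTC. If both prints come out identical on a GMT+1 machine, one likely culprit is that the process's TZ environment variable is unset or set to UTC:

        #include <stdio.h>
        #include <time.h>

        int main(void) {
            time_t now = time(NULL);  /* seconds since the epoch: zone-free */
            printf("local: %s", ctime(&now));           /* uses local TZ */
            printf("UTC:   %s", asctime(gmtime(&now))); /* always UTC    */
            /* On a GMT+1 system with TZ set correctly, these differ by
             * one hour; if TZ is unset or UTC, they print identically. */
            return 0;
        }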

    Read the article

  • Should Professional Development occur on company time?

    - by jshu
    As a first-time part-time software developer at a small consulting company, I'm struggling to organise time to further my own software development knowledge - whether that's reading a book, keeping up with the popular questions on StackOverflow, researching a technology we're using in-depth, or following the front page of Hacker News. I can see results borne from my self-allocated study time, but listing and demonstrating the skills and knowledge gained through Professional Development is difficult. The company does not have any defined PD policy, and there's a lot of pressure to get something deliverable done now! when working for consultants. I've checked what my coworkers do, and they don't appear to allocate any time to self-improvement; they just work at the problems they're given, looking up specific MSDN references, code samples, and the like as they need them. I realise that PD policy is going to vary across companies of different size and culture, and a company like my own is probably a bit of an edge case. I'd love to hear views and experiences from more seasoned developers; especially those who have to make the PD policy choices in their team or company. I'd also like to learn about the more radical approaches to PD, even if they're completely out there; it's always interesting to see what other people are trying. Not quite a summary, but what I'm trying to ask: Is it common or recommended for companies to allocate PD time? Whose responsibility is it to ensure a developer's knowledge and skills are up to date? Should a part-time work schedule inspire a lower ratio of PD time : work? How can a developer show non-developer coworkers that reading blogs and books is net productive? Is reading blogs and books actually net productive? (references welcomed) Is writing blogs effective as a way of PD? (a recent theme on Hacker News) This is sort of a broad question because I don't know exactly which questions I need to ask here, so any thoughts on relevant issues I haven't addressed are very welcome.

    Read the article

  • How to bill a client for frequently-interrupted time

    - by Greg
    I find that when I'm working on hourly-billable projects (in particular, those that are research/design/architecture-oriented as opposed to straight coding), I'm easily distracted by any number of things: email; grabbing a drink (loss of focus, but nature happens); a link off the webpage I was reading; a wandering mind (easy when the job calls for a lot of thinking); etc. This results in very fragmented time, far too incremental IMO to accurately track with a timeclock, and some of it very gray. I frequently end up billing for only some fraction of the elapsed time I spent in order to feel fair, but sometimes it takes a really long time to put in an 8-hour day. By contrast, when I've worked for salary I've not worried about whether I'm actively working at any given minute; I just get the job done, and I've never had anything but stellar reviews/feedback from past salaried employers, so I think I get the job done well. I personally believe in an 80/20 cycle: I get 80% of my work done during an inspired 20% of my time, but I have to screw around the other 80% of the time in order to get that first 20%. So the question: what billing/time-tracking policy can I adopt in order to be fair to my hourly customers without having to write off my own less-productive 80% that a salaried employer is willing to overlook in light of the complete package? Note: this question is not about how to be more productive or focused. It's about how to work around whatever salient limitations I have in a way that's fair both to me and to my customers. Update: a little clarification (to pre-emptively stop some righteous indignation): I currently have half a dozen different project/client groups. It's not a great situation and I'm working on reducing it to two, but that's my current reality. It's very easy to get off on a thread related to a different project than the one I'm clocking, and I'm not always conscious of it at the time. [I did not intend the question to mean that I was off playing games or making personal calls, etc., and have adjusted the wording above to be clearer. Most of the time. I am only human, and sometimes the mind does force you to take a break! :-)]

    Read the article

  • Problem measuring N times the execution time of a code block

    - by Nazgulled
    EDIT: I just found my problem after writing this long post explaining every little detail... If someone can give me a good answer on what I'm doing wrong and how I can get the execution time in seconds (using a float with 5 decimal places or so), I'll mark that as accepted. Hint: the problem was in how I interpreted the clock_gettime() man page.

    Hi, let's say I have a function named myOperation that I need to measure the execution time of. To measure it, I'm using clock_gettime(), as was recommended here in one of the comments. My teacher recommends us to measure it N times so we can get an average, standard deviation, and median for the final report. He also recommends us to execute myOperation M times instead of just one. If myOperation is a very fast operation, measuring it M times allows us to get a sense of the "real time" it takes, because the clock being used might not have the required precision to measure such an operation. So, executing myOperation only one time or M times really depends on whether the operation itself takes long enough for the clock precision we are using.

    I'm having trouble dealing with that M-times execution. Increasing M decreases (a lot) the final average value, which doesn't make sense to me. It's like this: on average you take 3 to 5 seconds to travel from point A to B. But then you go from A to B and back to A 5 times (which makes it 10 trips, because A to B is the same as B to A) and you measure that. Then you divide by 10, and the average you get is supposed to be the same average you take traveling from point A to B, which is 3 to 5 seconds. This is what I want my code to do, but it's not working. If I keep increasing the number of times I go from A to B and back to A, the average gets lower and lower each time; it makes no sense to me.

    Enough theory, here's my code:

        #include <stdio.h>
        #include <time.h>

        #define MEASUREMENTS 1
        #define OPERATIONS   1

        typedef struct timespec TimeClock;

        TimeClock diffTimeClock(TimeClock start, TimeClock end) {
            TimeClock aux;
            if((end.tv_nsec - start.tv_nsec) < 0) {
                aux.tv_sec = end.tv_sec - start.tv_sec - 1;
                aux.tv_nsec = 1E9 + end.tv_nsec - start.tv_nsec;
            } else {
                aux.tv_sec = end.tv_sec - start.tv_sec;
                aux.tv_nsec = end.tv_nsec - start.tv_nsec;
            }
            return aux;
        }

        int main(void) {
            TimeClock sTime, eTime, dTime;
            int i, j;

            for(i = 0; i < MEASUREMENTS; i++) {
                printf(" » MEASURE %02d\n", i+1);
                clock_gettime(CLOCK_REALTIME, &sTime);
                for(j = 0; j < OPERATIONS; j++) {
                    myOperation();
                }
                clock_gettime(CLOCK_REALTIME, &eTime);
                dTime = diffTimeClock(sTime, eTime);
                printf(" - NSEC (TOTAL): %ld\n", dTime.tv_nsec);
                printf(" - NSEC (OP): %ld\n\n", dTime.tv_nsec / OPERATIONS);
            }
            return 0;
        }

    Notes: the diffTimeClock() function above is from this blog post. I replaced my real operation with myOperation() because it doesn't make any sense to post my real functions, as I would have to post long blocks of code; you can easily code a myOperation() with whatever you like in order to compile the code if you wish.

    As you can see, with OPERATIONS = 1 the results are:

        » MEASURE 01
        - NSEC (TOTAL): 27456580
        - NSEC (OP): 27456580

    For OPERATIONS = 100 the results are:

        » MEASURE 01
        - NSEC (TOTAL): 218929736
        - NSEC (OP): 2189297

    For OPERATIONS = 1000 the results are:

        » MEASURE 01
        - NSEC (TOTAL): 862834890
        - NSEC (OP): 862834

    For OPERATIONS = 10000 the results are:

        » MEASURE 01
        - NSEC (TOTAL): 574133641
        - NSEC (OP): 57413

    Now, I'm not a math wiz, far from it actually, but this doesn't make any sense to me whatsoever. I've already talked about this with a friend who's on this project with me, and he also can't understand the differences. I don't understand why the value gets lower and lower when I increase OPERATIONS. The operation itself should take the same time (on average, of course, not the exact same time) no matter how many times I execute it. You could tell me that that actually depends on the operation itself, the data being read, and that some data could already be in the cache and bla bla, but I don't think that's the problem. In my case, myOperation is reading 5000 lines of text from a CSV file, separating the values by ; and inserting those values into a data structure. For each iteration, I'm destroying the data structure and initializing it again.

    Now that I think of it, I also think that there's a problem measuring time with clock_gettime(); maybe I'm not using it right. I mean, look at the last example, where OPERATIONS = 10000. The total time it took was 574133641 ns, which would be roughly 0.5 s; that's impossible, it took a couple of minutes as I couldn't stand looking at the screen waiting and went to eat something.
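
    Consistent with the hint above, the bug is in reading the result, not in the timing itself: struct timespec splits the interval into whole seconds (tv_sec) plus a sub-second remainder (tv_nsec), and the printfs show only the remainder, so any interval longer than a second appears to "wrap". A sketch of the corrected reporting:

        /* tv_nsec holds only the sub-second part; fold tv_sec in first. */
        long long total_ns = (long long)dTime.tv_sec * 1000000000LL
                           + dTime.tv_nsec;
        printf(" - SEC (TOTAL): %.5f\n", total_ns / 1e9);
        printf(" - SEC (OP):    %.5f\n", total_ns / 1e9 / OPERATIONS);

    With that change, the 10000-operation run reports its true couple-of-minutes duration, and the per-operation averages should stop shrinking.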

    Read the article

  • How does operating system software maintain time clocks?

    - by Neeraj
    Hi everyone, this may sound a bit off-topic, but I couldn't think of a better place to ask. Consider this situation: you install an OS on your system, set the timezone and time, do some stuff, and turn the machine off. (Note that there is no power going into the computer.) The next time you turn it on (say after some hours or days), you see the updated time. How is this possible when my computer is not connected to the internet and was consuming no power during the period it was down? (Is there some kind of hardware hack?) Please clarify!
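
    For context, the usual answer is hardware: a small battery on the motherboard keeps a real-time clock chip (the RTC/CMOS clock) counting while the machine is powered off, and the OS reads it at boot to seed the system clock (then may refine it via NTP once a network is available). On Linux the chip is exposed as a device node; a minimal sketch that reads it directly, assuming /dev/rtc0 exists:

        #include <stdio.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <sys/ioctl.h>
        #include <linux/rtc.h>

        int main(void) {
            int fd = open("/dev/rtc0", O_RDONLY);
            if (fd < 0) { perror("open /dev/rtc0"); return 1; }
            struct rtc_time rt;  /* the battery-backed clock's idea of "now" */
            if (ioctl(fd, RTC_RD_TIME, &rt) < 0) {
                perror("RTC_RD_TIME");
                close(fd);
                return 1;
            }
            printf("RTC: %04d-%02d-%02d %02d:%02d:%02d\n",
                   rt.tm_year + 1900, rt.tm_mon + 1, rt.tm_mday,
                   rt.tm_hour, rt.tm_min, rt.tm_sec);
            close(fd);
            return 0;
        }

    The hwclock(8) utility does essentially the same thing.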

    Read the article

  • two operating systems sharing their file systems with each other (Windows and Linux)

    - by John Kube
    I have two operating systems installed on my notebook computer, Windows Vista and Ubuntu Linux. When I boot up, I'm presented with a bootloader which allows me to choose which one I want to load. I'm interested in sharing each operating system's file system with the other, such that I could access my Windows files from Linux and vice-versa. Is this possible, and if so how would one go about setting it up? Feel free to just post a link to an existing solution if there is one. I would Google for this myself, but I don't even know what to search for, as I don't know what this is called.

    Read the article

  • Time management and self improvement

    - by Filip
    Hi, I hope I can open a discussion on this topic, as this is not a specific problem. It's a topic on which I hope to get some ideas about how people in situations similar to mine manage their time. OK: I've been the single developer on a software project for the last 6-8 months. The project uses several technologies, mainly .NET stuff: WPF, WF, NHibernate, WCF, MySQL, and other third-party SDKs relevant to the nature of the project. My experience and knowledge vary; for example, I have a lot of experience in WPF but much less in WCF. I work full time on the project and I'm curious how other programmers who need to multitask across many areas manage their time. I'm a very applied type of person and prefer to code instead of doing research. I feel that doing research "might" slow down the progress of the project, while I recognize that research, and learning more in the areas where I'm not so strong, will ultimately make me more productive. How would you split up your daily time between productive coding and time to experiment, read blogs, go through tutorials, etc.? I would say that I'm coding about 90%+ of my day and devoting some, but very little, time to research and acquiring new knowledge.

    Read the article

  • How do I implement the bg, &, and fg commands functionality in my custom Unix shell program written in C

    - by user1631009
    I am extending the functionality of a custom Unix shell which I wrote as part of my lab assignment. It currently supports all commands through execvp calls; built-in commands like pwd, cd, history, echo, and export; and also redirection and pipes. Now I want to add support for running a command in the background, e.g. $ls -la&. I also want to implement the bg and fg job-control commands. I know backgrounding can be achieved by forking a new child process and not waiting for it in the parent process. But how do I bring this command back to the foreground using fg? I have the idea of entering each background command in a list, assigning each of them a serial number. But I don't know how to make the processes execute in the background and then bring them back to the foreground. I guess the wait() and waitpid() system calls would come in handy, but I am not that comfortable with them; I tried reading the man pages but am still in the dark. Can someone please explain in layman's terms how to achieve this in Unix systems programming? And does it have something to do with the SIGCONT and SIGTSTP signals?
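
    A hedged sketch of the core mechanics on a POSIX system (error handling omitted): every job runs in its own process group; & just skips the blocking wait; fg hands the terminal to the job's group with tcsetpgrp() and resumes it with SIGCONT; bg resumes a stopped group without the terminal. SIGTSTP (Ctrl-Z) is what stopped the foreground job in the first place, and waitpid() with WUNTRACED is how the shell notices the stop. The shell itself should ignore SIGTTOU so its own tcsetpgrp() calls aren't stopped:

        #include <stdio.h>
        #include <signal.h>
        #include <unistd.h>
        #include <sys/wait.h>

        /* Launch argv as a job in its own process group;
         * background != 0 corresponds to a trailing '&'. */
        pid_t launch(char **argv, int background) {
            pid_t pid = fork();
            if (pid == 0) {                             /* child */
                setpgid(0, 0);                          /* own group = job id */
                if (!background)
                    tcsetpgrp(STDIN_FILENO, getpid());  /* take the terminal  */
                execvp(argv[0], argv);
                perror("execvp");
                _exit(127);
            }
            setpgid(pid, pid);  /* also set in the parent, to avoid a race */
            if (!background) {
                int status;
                waitpid(pid, &status, WUNTRACED);    /* returns on exit OR Ctrl-Z */
                tcsetpgrp(STDIN_FILENO, getpgrp());  /* shell reclaims terminal   */
            }
            return pid;  /* store this in your numbered job list */
        }

        /* fg: give the job the terminal, wake it, wait again. */
        void to_foreground(pid_t job) {
            tcsetpgrp(STDIN_FILENO, job);
            kill(-job, SIGCONT);  /* negative pid signals the whole group */
            int status;
            waitpid(job, &status, WUNTRACED);
            tcsetpgrp(STDIN_FILENO, getpgrp());
        }

        /* bg: wake a stopped job but leave it in the background. */
        void to_background(pid_t job) {
            kill(-job, SIGCONT);
        }

        int main(void) {
            signal(SIGTTOU, SIG_IGN);  /* so tcsetpgrp() from the shell works */
            char *cmd[] = { "sleep", "5", NULL };
            pid_t job = launch(cmd, 1);  /* run in background, like `sleep 5 &` */
            to_foreground(job);          /* then bring it to the foreground     */
            return 0;
        }

    From there, bg and fg are mostly bookkeeping: keep the numbered job list mapping job numbers to process-group ids, and mark an entry stopped when waitpid() reports WIFSTOPPED.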

    Read the article

  • How do I implement the bg, &, and fg commands functionality in my custom Unix shell program written in C

    - by user1631009
    I am trying to extend the functionality of my custom Unix shell which I wrote earlier as part of my lab assignment. It currently supports all commands through execvp calls; built-in commands like pwd, cd, history, echo, and export; and also redirection and pipes. Now I want to add support for running a command in the background, e.g. $ls -la&, and I also want to implement the bg and fg job-control commands. I know backgrounding can be achieved by forking a new child process and not waiting for it in the parent process. But how do I bring this command back to the foreground using fg? I have the idea of entering each background command in a list, assigning each of them a serial number. But I don't know how to make the processes execute in the background and then bring them back to the foreground. I guess the wait() and waitpid() system calls would come in handy, but I am not that comfortable with them; I tried reading the man pages but am still in the dark. Can someone please explain in layman's terms how to achieve this in Unix systems programming? And does it have something to do with the SIGCONT and SIGTSTP signals?

    Read the article

  • R adding infrequent date 'events' to a time series plot

    - by flyingcrab
    Hi, I am just starting with R and have hit a bit of a deadlock with some time-series data. I have a time series (date and value) in 'zoo' format that I want to annotate with a cross when an event occurs. The events are irregular and in CSV format (just the dates, sometimes repeated). I've managed to read the dates into a format that R accepts, but I can't seem to find a way to chart the main time series with the secondary events annotated on top.

    Read the article
