Search Results

Search found 60669 results on 2427 pages for 'time tracking'.


  • c++ thread running time

    - by chnet
    I want to know whether I can calculate the running time of each thread. I have implemented a multithreaded program in C++ using pthreads. As we know, the threads compete for the CPU. Can I use the clock() function to calculate the actual number of CPU clocks each thread consumes? My program looks like:

        Class Thread () { Start(); Run(); Computing(); };

    Start() starts the threads, and each thread then runs the Computing function to do some work. My question is how I can calculate the running time of each thread's Computing function.
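
    clock() is a poor fit here: on POSIX systems it returns CPU time for the whole process, summed across all threads. What the question needs is per-thread CPU time, which on Linux is exposed via clock_gettime(CLOCK_THREAD_CPUTIME_ID). A minimal sketch of that idea, written in Python (which binds the same clock) rather than the question's C++, with a stand-in workload:

        import threading, time

        def computing(n):
            # CLOCK_THREAD_CPUTIME_ID measures CPU time of *this* thread only
            t0 = time.clock_gettime(time.CLOCK_THREAD_CPUTIME_ID)
            total = sum(i * i for i in range(n))   # stand-in for the real Computing() work
            t1 = time.clock_gettime(time.CLOCK_THREAD_CPUTIME_ID)
            print(f"{threading.current_thread().name}: {t1 - t0:.3f}s CPU")

        threads = [threading.Thread(target=computing, args=(2_000_000,)) for _ in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()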

    Read the article

  • Using system time directly to get random numbers

    - by Richard Mar.
    I had to return a random element from an array so I came up with this placeholder:

        return codes[(int) (System.currentTimeMillis() % codes.length - 1)];

    Now that I think of it, I'm tempted to use it in real code. The Random() seeder uses system time as seed in most languages anyway, so why not use that time directly? As a bonus, I'm free from the worry of non-random lower bits of many RNGs. Is this hack coming back to bite me? (The language is Java if that's relevant.)
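
    Two concrete problems, sketched below in Python since the issue is language-independent (the question's code is Java). First, operator precedence: the expression parses as (millis % codes.length) - 1, which can evaluate to -1 and can never select the last element. Second, consecutive calls happen milliseconds apart, so raw-clock indices are strongly correlated rather than uniform; a seeded PRNG avoids both issues:

        import random, time

        codes = ["A", "B", "C", "D", "E"]

        # The question's expression: (millis % len) - 1 can be -1 (an exception in
        # Java, the last element in Python) and never evaluates to len - 1.
        idx = int(time.time() * 1000) % len(codes) - 1

        # Calls made in the same millisecond return the same "random" element,
        # and nearby calls return neighbouring elements. A PRNG does not:
        pick = random.choice(codes)
        print(idx, pick)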

    Read the article

  • long waiting time in linking

    - by ccanan
    Hi, here is the situation. I am using Visual Studio 2005. The solution contains a lot of projects, 34 in all, and the startup project depends on the others. In the linking phase, there is a long wait before the real linking starts. I am pretty sure it's because of the number of project dependencies: when I use a solution with only 10 of the 34 projects (keeping the other projects as headers and libs), linking starts instantly. Does anyone have an idea how I can reduce the waiting time? Thanks.

    Read the article

  • In-Store Tracking Gets a Little Harder

    - by David Dorf
    Remember how Nordstrom was tracking shopper movements within their stores using the unique number, called a MAC address, emitted by the WiFi radio in smartphones?  The phones didn't need to connect to the network, only have their WiFi enabled, as most people do by default.  They did this, presumably, to track shoppers' path to purchase and better understand traffic patterns.  Although there were signs explaining this at the entrances, people didn't like the notion of being tracked.  (Never mind that there are cameras in the ceiling watching them.)  Nordstrom stopped the program.

    To address this concern the Future of Privacy Forum, a Washington think tank, created Smart Store Privacy, a do-not-track service that allows consumers to register their MAC address in much the same way people register their phone numbers in the national do-not-call list.  A group of companies agreed to respect consumers' wishes and ignore smartphones listed in the database.  The database includes Bluetooth identifiers as well.  Of course you could simply turn your Bluetooth and WiFi off when shopping as well.

    Most know that Apple prefers to use BLE beacons to contact and track smartphones within their stores.  This feature extends the typical online experience to also work in physical stores.  By identifying themselves, shoppers can expect a more tailored shopping experience much like what we've come to expect from Amazon's website, with product recommendations and offers that are (usually) relevant.

    But the upcoming release of iOS 8 is purported to have a new feature that randomizes the WiFi MAC address of smartphones during the "probing" phase.  That is, before connecting to the WiFi network, a random MAC address is used so as to keep the smartphone's real MAC address secret.  Unless you actually connect to the store's WiFi, they won't recognize the MAC address.

    The details on this are still sketchy, but if the random MAC is consistent for a short period, retailers will still be able to track movements anonymously, but they won't recognize repeat visitors.  That may be sufficient for traffic analytics, but it will stymie targeted marketing.  In the case of marketing, using iBeacons with opt-in permission from consumers will be the way forward.

    There is always a battle between utility and privacy, so I expect many more changes in this area.  Incidentally, if you'd like to see where beacons are being used, this site tracks them around the world.

    Read the article

  • Response time increasing (worsening) over time with consistent load

    - by NJ
    Ok. I know I don't have a lot of information. That is, essentially, the reason for my question. I am building a game using Flash/Flex and Rails on the back-end. Communication between the two is via WebORB. Here is what is happening: when I start the client, an operation calls the server every 60 seconds (not much, right?) which results in two database SELECTs and an UPDATE, and a resulting response to the client. This repeats every 60 seconds. I deployed a test version on Heroku, and New Relic's RPM told me that response time degraded over time. One client, with one task every 60 seconds. Over several hours the response time drifted from 150ms to over 900ms. I have been able to reproduce this in my development environment (my MacBook Pro), so it isn't a problem on Heroku's side. I am not doing anything sophisticated (by design) in the server app. An action gets called, gets some data from the database, performs an AR update and then returns a response. No caching, etc. Any thoughts? Anyone? I'd really appreciate it.

    Read the article

  • How can we plan projects realistically while accounting for support issues?

    - by Thomas Clayson
    We're having a problem at work: we're trying to schedule work so that we can assess time scales and set deadline dates. The problem is that it's difficult to plan for a project without knowing everything that's going to happen. For instance, right now we've planned all our projects through the start of December, but in that time we will have various in-house and external meetings, teleconferences, and extra work. It's all well and good to say that a project will take three weeks, but if there is a week's worth of interruption in that time then the date of completion will be pushed back a week. The problem is threefold:

    1. When we schedule projects the time scales are taken literally. If we estimate three weeks, the deadline is set for three weeks' time, the client is told, and there is no room for extension.
    2. Interim work and such means that we lose productive time working on the project.
    3. Sometimes clients don't have the time that we need to take to do the work, so they'll come to us and say they need a project done by the end of the month even when we think the work will take two months, not to mention we already have work to be doing.

    We have a Gantt chart which we are trying to fill in with all the projects we have, and we fill in timesheets, but the timesheets are never compared to the Gantt chart. This makes it difficult to say "Well, we scheduled 3 weeks for this project, but we've lost a week here, so the deadline has to move back a week." It's also not professional to keep missing deadlines we've communicated to the client. How do other people deal with this type of situation? How do you manage the planning of projects? How much "extra" time do you schedule into a project to account for non-project work that occurs during it? How do you deal with support issues, bugs, and other things you can't account for during planning?

    UPDATE: Lots of good answers, thank you.

    Read the article

  • Algorithm for tracking progress of controller method running in background

    - by SilentAssassin
    I am using the CodeIgniter framework for PHP on Windows. My problem is that I am trying to track the progress of a controller method running in the background. The controller extracts data from the database (MySQL), does some processing, and then stores the results back in the database. The complete process just described can be considered a single task. A new task can be assigned while another task is running; newly assigned tasks are added to a queue. So if I can track the controller's progress, I can show a status for each of these tasks: "Pending" for tasks in the queue, "In Progress" for the task that is running, and "Done" for tasks that are completed.

    Main issue: I need an algorithm to track how much of the controller method's execution has completed. For instance, this PHP script tracks the progress of an array being counted; there, the current state and the state after total execution are both known, so progress can be tracked. I am not able to devise anything analogous for my case. Maybe what I am trying to achieve is not programmatically possible. If it isn't possible, please suggest a workaround or a completely new approach. If some details are missing, you can mention them. Sorry for my ignorance; this is my first post here. I welcome you to point out my mistakes.

    EDIT: Database outline: The URL(s) and keyword(s) are first entered by the user and stored in tables called link_master and keyword_master respectively. Keywords are then extracted from all the links present in link_master, compared with the keywords entered by the user, and their frequency is calculated, which is the final result; the results are stored in a table called link_result. Sub-links are then extracted from the domain links and stored in a table called sub_link_master. Keywords are again extracted from these sub-links and the corresponding results are stored in a table called sub_link_result. The number of records cannot be defined beforehand, as the number of links on any web page can differ. Only the cardinality of the link_result table can be known in advance: it equals the number of keyword(s) multiplied by the number of URL(s). I insert multiple records at a time using this resource.

    Controller outline: The controller extracts keywords from a web page and also extracts keywords from all the links present on that page, in a method called crawlLink. I used Rolling Curl to extract keywords and web page content; its callback function is where I extract keywords, generate results, and extract valid sub-links. An insertResult method stores the results for links and sub-links in the respective tables. Yes, the processing depends on the number of records; the more records, the more time it takes to execute. Consider this scenario:

    Number of domain links = 1
    Number of keywords = 3
    Number of domain link results generated = 3 (3 x 1, as described above)
    Number of sub-links generated = 41
    Number of sub-link results = 117 (41 x 3 = 123, but some links are not valid or searchable)
    Approximate time taken for the above process to complete = 55 seconds.

    The above result is for a single link. I want to track the progress of the above results getting stored in the database. When all results are stored, the task is complete; while results are getting stored, the task is In Progress. I am not clear how I can track this progress.
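
    One workable pattern, sketched below with SQLite standing in for MySQL and hypothetical table/column names: since the expected result counts become known as the task runs (links x keywords for link_result, and likewise once sub-links are extracted), keep a per-task progress row with an expected count and a completed counter that insertResult increments. The status then falls out of the counters:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE task_progress (
            task_id   INTEGER PRIMARY KEY,
            expected  INTEGER,              -- e.g. links * keywords, updated as sub-links are found
            completed INTEGER DEFAULT 0,
            status    TEXT DEFAULT 'Pending')""")

        def start_task(task_id, expected):
            conn.execute("INSERT INTO task_progress (task_id, expected, status) "
                         "VALUES (?, ?, 'In Progress')", (task_id, expected))

        def record_result(task_id, rows_inserted=1):
            # call once per batch that insertResult writes
            conn.execute("UPDATE task_progress SET completed = completed + ? "
                         "WHERE task_id = ?", (rows_inserted, task_id))
            conn.execute("UPDATE task_progress SET status = 'Done' "
                         "WHERE task_id = ? AND completed >= expected", (task_id,))

        start_task(1, expected=3 + 117)     # the scenario from the question
        record_result(1, rows_inserted=3)
        print(conn.execute("SELECT completed, expected, status FROM task_progress").fetchone())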

    Read the article

  • Terrible ping time with TP-Link wireless router

    - by rabbid
    I am literally a foot away from this useless TL-WR340G/TL-WR340GD router and check out this ping time:

        64 bytes from 192.168.1.8: icmp_seq=291 ttl=64 time=9477.516 ms
        64 bytes from 192.168.1.8: icmp_seq=292 ttl=64 time=8954.423 ms
        64 bytes from 192.168.1.8: icmp_seq=293 ttl=64 time=8262.836 ms
        64 bytes from 192.168.1.8: icmp_seq=294 ttl=64 time=7937.853 ms
        64 bytes from 192.168.1.8: icmp_seq=295 ttl=64 time=7517.768 ms
        64 bytes from 192.168.1.8: icmp_seq=296 ttl=64 time=7106.063 ms
        64 bytes from 192.168.1.8: icmp_seq=297 ttl=64 time=6492.109 ms
        64 bytes from 192.168.1.8: icmp_seq=298 ttl=64 time=5835.305 ms
        64 bytes from 192.168.1.8: icmp_seq=299 ttl=64 time=5314.897 ms
        64 bytes from 192.168.1.8: icmp_seq=300 ttl=64 time=4902.705 ms
        64 bytes from 192.168.1.8: icmp_seq=301 ttl=64 time=4716.959 ms
        64 bytes from 192.168.1.8: icmp_seq=302 ttl=64 time=5224.450 ms
        64 bytes from 192.168.1.8: icmp_seq=303 ttl=64 time=5024.079 ms
        64 bytes from 192.168.1.8: icmp_seq=304 ttl=64 time=5044.100 ms
        64 bytes from 192.168.1.8: icmp_seq=305 ttl=64 time=4477.990 ms
        64 bytes from 192.168.1.8: icmp_seq=306 ttl=64 time=3582.432 ms
        64 bytes from 192.168.1.8: icmp_seq=307 ttl=64 time=2911.896 ms

    At this time mine is the only computer using the router. This happens from time to time. I'd restart the router, and then it'll have a 1-2 ms ping for a while, and then it's back to terrible ping. Is it just a poor quality router? Suggestions? Thank you

    Read the article

  • Tracking my home IP from anywhere on the internet?

    - by oKtosiTe
    I have an ISP that serves semi-permanent IPv4 addresses. They can't promise fixed IP addresses, but unexpected changes are quite rare. This leads me to ask, however: what would be the easiest/most reliable way to track my home IP address so I can access my (Windows 7) home server even in the case of an address change? Please note: for reasons that I don't want to go into, I'd like to avoid using any "dynamic DNS" type services. Instead I'd prefer some way to have the home server leave an occasional/triggered "address stamp" on a remote, off-site server (by SSH, HTTP POST or similar, preferably over an encrypted connection).
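
    A minimal sketch of the "address stamp" idea in Python, suitable for a scheduled task on the Windows 7 server. The endpoint URL and token are hypothetical placeholders for whatever the off-site server accepts, and api.ipify.org is just one example of a what's-my-IP service:

        import json, urllib.request

        # Hypothetical endpoint on your own off-site server; any authenticated
        # HTTPS POST works. This is a sketch, not a finished protocol.
        ENDPOINT = "https://example.com/address-stamp"
        TOKEN = "shared-secret"

        # Ask an external service for the address as seen from outside the LAN.
        ip = urllib.request.urlopen("https://api.ipify.org").read().decode()

        req = urllib.request.Request(
            ENDPOINT,
            data=json.dumps({"ip": ip}).encode(),
            headers={"Content-Type": "application/json",
                     "Authorization": f"Bearer {TOKEN}"},
        )
        urllib.request.urlopen(req)   # POST the stamp over an encrypted connection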

    Read the article

  • Is there any way to properly display negative time spans in Excel?

    - by Pepor
    Is there any way to make Excel show a negative time span? If I subtract two time values (say, when subtracting the actual amount of time spent on something from the amount of time planned for it) and the result is negative, Excel just fills the result cell with hashes to notify me that the result cannot be displayed as a time value. Even OpenOffice.org Calc and Google Spreadsheets can display negative time values. Is there a way to work around that issue by using conditional formatting? I really don't want to create some workaround by calculating the hours and minutes myself or anything like that.
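
    Two workarounds I believe apply here, though both have caveats. Excel can display negative time spans natively if the workbook is switched to the 1904 date system (in Excel's advanced options), at the cost of shifting all existing serial dates. Alternatively, keep the default 1900 system and render the difference as text, which avoids the hashes but yields a string rather than a time value; note the IF orders the operands so TEXT never receives a negative argument:

        =IF(B1>=A1, TEXT(B1-A1, "h:mm"), "-" & TEXT(A1-B1, "h:mm"))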

    Read the article

  • Calculate travel time on road map with semaphores

    - by Ivansek
    I have a road map with intersections. At intersections there are semaphores. For each semaphore I generate a red light time and a green light time, represented with the syntax [R:T1, G:T2], for example:

             119                 185                 250
        A ------- B: [R:6, G:4] ------ C: [R:5, G:5] ------ D

    I want to calculate a car's travel time from A to D. Currently I do this with this pseudo code:

        function get_travel_time(semaphores_configuration) {
            time = 0;
            for (i = 1; i < path.length; i++) {
                prev_node = path[i-1];
                next_node = path[i];
                cost = cost_between(prev_node, next_node);
                time += (cost / movement_speed);  // movement_speed = 50px per second
                light_times = get_light_times(path[i], semaphore_configurations);
                lights_cycle = get_lights_cycle(light_times);  // e.g. [R,R,R,G,G,G,G] for [R:3, G:4]
                lights_sum = light_times.green_time + light_times.red_light;  // cycle length
                light = lights_cycle[cost % lights_sum];
                if (light == "R") {
                    time += light_times.red_light;
                }
            }
            return time;
        }

    So for the distance of 119 between A and B the travel time is 119/50 = 2.38s (the exactly measured time is between 2.5s and 2.6s); then we add time if we arrive at B on a red light. Whether we arrived on a red light is calculated by these lines:

        lights_cycle = get_lights_cycle(light_times)  // e.g. [R,R,R,G,G,G,G] for [R:3, G:4]
        lights_sum = light_times.green_time + light_times.red_light
        light = lights_cycle[cost % lights_sum];
        if (light == "R") {
            time += light_times.red_light;
        }

    This pseudo code doesn't calculate exactly the measured times, though the calculations are very close to them. Any idea how I should calculate this?
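
    Two things in the pseudo code would produce the drift, sketched below under stated assumptions: the index into the cycle should come from the arrival time in seconds (modulo the cycle), not from the distance in pixels, and on a red light the car waits only the remaining red time, not the full red duration. Assuming each cycle starts with red at t = 0:

        def wait_at_light(arrival, red, green):
            # Remaining wait if we arrive during red; zero during green.
            cycle = red + green
            phase = arrival % cycle
            return red - phase if phase < red else 0.0

        def travel_time(legs, speed=50.0):
            # legs: (distance_px, (red_s, green_s)) pairs; None for the final node
            t = 0.0
            for dist, light in legs:
                t += dist / speed
                if light is not None:
                    t += wait_at_light(t, *light)
            return t

        # A -> B -> C -> D from the question: distances 119, 185, 250
        print(travel_time([(119, (6, 4)), (185, (5, 5)), (250, None)]))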

    Read the article

  • Tracking down memory issues affecting a website

    - by gaoshan88
    I've got a website (WordPress-based) that became unresponsive. I SSH'd into the server and saw that we were out of memory. Errors in my Apache log files indicated the same (things failing to be allocated due to lack of memory). Restarting the server fixes it. So I looked in access.log and error.log around the time of the incident, but I see nothing strange: no extra traffic, no unusual requests. In fact the only request around the time of the problem was one from Googlebot for an RSS feed... at that point I start to see 500 response codes in the logs, until the machine was rebooted. I looked in message.log hoping to see something, but there is nothing at all for that entire day (which is odd, as there are entries for every other day). The site has a large amount of memory allocated to it and normally runs using about 30% of what is available. My question: how would you go about trying to track this down at this point? What are some other log files I could check or strategies I could take?

    Read the article

  • I need to calculate the date/time difference between rows in one date/time column

    - by Stringz
    Details: I have a notes table with the following columns:

        ID - INT(3)
        Date - DateTime
        Note - VARCHAR(100)
        Title - VARCHAR(100)
        UserName - VARCHAR(100)

    This table holds NOTES, along with their Titles, entered by UserName at the specified date/time. I need to calculate the DateTime difference between TWO ROWS in the SAME COLUMN. For example, the table contains this piece of information:

        64, '2010-03-26 18:16:13', 'Action History', 'sending to Level 2.', 'Salman Khwaja'
        65, '2010-03-26 18:19:48', 'Assigned By', 'This is note one for the assignment of RF.', 'Salman Khwaja'
        66, '2010-03-27 19:19:48', 'Assigned By', 'This is note one for the assignment of CRF.', 'Salman Khwaja'

    Now I need the following result set in query reports using MySQL:

        TASK - TIME TAKEN
        Action History - 2010-03-26 18:16:13
        Assigned By - 00:03:35
        Assigned By - 25:00:00

    A smarter approach would be:

        TASK - TIME TAKEN
        Action History - 2010-03-26 18:16:13
        Assigned By - 3 minutes 35 seconds
        Assigned By - 1 day, 1 hour

    I would appreciate it if one could give me the PLAIN QUERY along with the PHP code to embed it too.
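
    The SQL side is typically a self-join (or a session variable) pairing each row with the previous row's Date; since I can't verify the exact query for this schema, here is the row-to-row logic sketched in Python on the sample data, where the printed timedelta for the last row comes out as "1 day, 1:00:00":

        from datetime import datetime

        rows = [
            ("Action History", datetime(2010, 3, 26, 18, 16, 13)),
            ("Assigned By",    datetime(2010, 3, 26, 18, 19, 48)),
            ("Assigned By",    datetime(2010, 3, 27, 19, 19, 48)),
        ]

        prev = None
        for title, ts in rows:
            if prev is None:
                print(title, "-", ts)          # first row: show the timestamp itself
            else:
                print(title, "-", ts - prev)   # later rows: delta from the previous note
            prev = ts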

    Read the article

  • Custom date time picker for Windows Forms that is locked to GMT time

    - by m3ntat
    Using Visual Studio 2008, C#, .NET 2.0. I have a Windows Forms client application that contains a scheduling UI section. Currently this is used only in the London office, with the standard date time picker control; the selected time is saved in a UK database (GMT) and a London-based server application processes the schedules. There is a requirement to roll the client out to various global locations, Hong Kong, New York, etc., and allow them to set up schedules that run according to GMT time on the London server. I'll have a label on screen saying "note: schedules are GMT". What I need is a good way to present a date time picker that always shows, and stays in sync with, the database server's GMT time regardless of where the client app is running globally. Suggestions on how to achieve this? Thanks.

    Read the article

  • Temperature anomaly calculation of time series data

    - by neel
    I have a time series like the following:

        Data <- structure(list(Year = c(1991L, 1991L, 1991L, 1991L, 1991L, 1991L,
            1991L, 1991L, 1991L, 1991L, 1991L, 1991L, 1991L, 1992L, 1992L, 1992L,
            1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L,
            1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L,
            1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L,
            1992L, 1992L, 1992L), Month = c(8L, 9L, 9L, 9L, 10L, 10L, 10L, 11L,
            11L, 11L, 12L, 12L, 12L, 1L, 1L, 1L, 2L, 2L, 2L, 3L, 3L, 3L, 4L, 4L,
            4L, 5L, 5L, 5L, 6L, 6L, 6L, 7L, 7L, 7L, 8L, 8L, 8L, 9L, 9L, 9L, 10L,
            10L, 10L, 11L, 11L, 11L, 12L, 12L, 12L), Day = c(30L, 10L, 20L, 30L,
            10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L,
            28L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L,
            20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L,
            10L, 20L, 30L), Hour = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
            0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
            0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
            0L, 0L, 0L, 0L), temperature = c(72.5, 64, 62.5, 64, 64, 53, 52, 52,
            45.5, 49, 50, 50, 59, 63.5, 69.5, 61, 61, NaN, NaN, 39.5, 37, 45.5,
            45, 39, 43.5, 52, 53, 56, 64, 66, 66.5, 73.5, 81, 85, 89.5, 87.5,
            88.5, 83, 84.5, 74, 60.5, 59, 53, 60.5, 62.5, 64.5, 63, 62, 65.5)),
            .Names = c("Year", "Month", "Day", "Hour", "temperature"),
            class = "data.frame", row.names = c(NA, -49L))

    and I have to calculate the standardized anomaly. The steps to calculate the anomalies are as follows:

    1. Monthly temperature departures from the long-term (1991-2007) average are obtained.
    2. These are then standardized by dividing by the standard deviation of monthly temperature.
    3. The standardized monthly anomalies are then weighted by multiplying by the fraction of the average temperature for the given month.
    4. These weighted anomalies are then summed over a 3-month period.

    Can you please help me?
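
    As a compact restatement of those steps in formulas (my reading only; the "fraction of the average temperature" weight is ambiguous in the question, so the weight shown is one plausible interpretation): with \bar{T}_m and \sigma_m the 1991-2007 mean and standard deviation for calendar month m, and \bar{T} the all-month mean,

        z_m = \frac{T_m - \bar{T}_m}{\sigma_m}, \qquad
        w_m = z_m \cdot \frac{\bar{T}_m}{\bar{T}}, \qquad
        A_i = \sum_{m=i}^{i+2} w_m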

    Read the article

  • java time check API

    - by ring bearer
    I am sure this has been done 1000 times in 1000 different places. The question is: I want to know if there is a better/standard/faster way to check whether the current "time" is between two time values given in hh:mm:ss format. For example, my big business logic should not run between 18:00:00 and 18:30:00. Here is what I had in mind:

        public static boolean isCurrentTimeBetween(String starthhmmss, String endhhmmss) throws ParseException {
            DateFormat hhmmssFormat = new SimpleDateFormat("yyyyMMddHH:mm:ss");
            Date now = new Date();
            String yyyyMMdd = hhmmssFormat.format(now).substring(0, 8);
            return (hhmmssFormat.parse(yyyyMMdd + starthhmmss).before(now)
                    && hhmmssFormat.parse(yyyyMMdd + endhhmmss).after(now));
        }

    Example test case:

        String doNotRunBetween = "18:00:00,18:30:00"; // read from props file
        String[] hhmmss = doNotRunBetween.split(",");
        if (isCurrentTimeBetween(hhmmss[0], hhmmss[1])) {
            System.out.println("NOT OK TO RUN");
        } else {
            System.out.println("OK TO RUN");
        }

    What I am looking for is code that is better in performance, looks, and correctness. What I am not looking for: third-party libraries, exception-handling debate, variable naming conventions, method modifier issues.
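
    For comparison, here is the same check sketched in Python (the question is Java, where java.time.LocalTime with its isAfter/isBefore comparisons is the modern equivalent); comparing time-of-day values directly avoids the format-and-reparse round trip:

        from datetime import datetime, time

        def current_time_between(start_hhmmss, end_hhmmss):
            now = datetime.now().time()
            return time.fromisoformat(start_hhmmss) <= now <= time.fromisoformat(end_hhmmss)

        if current_time_between("18:00:00", "18:30:00"):
            print("NOT OK TO RUN")
        else:
            print("OK TO RUN")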

    Read the article

  • average case running time of linear search algorithm

    - by Brahadeesh
    Hi all. I am trying to derive the average-case running time of the deterministic linear search algorithm. The algorithm searches for an element x in an unsorted array A in the order A[1], A[2], A[3], ..., A[n]. It stops when it finds the element x, or proceeds until it reaches the end of the array. I searched on Wikipedia and the answer given was (n+1)/(k+1), where k is the number of times x is present in the array. I approached it in another way and am getting a different answer. Can anyone please give me the correct proof and also let me know what's wrong with my method?

        E(T) = 1*P(1) + 2*P(2) + 3*P(3) + ... + n*P(n)

    where P(i) is the probability that the algorithm runs for time 'i' (i.e. compares 'i' elements).

        P(i) = (n-i)C(k-1) * (n-k)! / n!

    Here, (n-i)C(k-1) is (n-i) Choose (k-1). As the algorithm has reached the ith step, the rest of the k-1 x's must be in the last n-i elements, hence (n-i)C(k-1). (n-k)! is the total number of ways of arranging the rest of the non-x numbers, and n! is the total number of ways of arranging the n elements in the array. I am not getting (n+1)/(k+1) on simplifying.
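
    A standard route to the Wikipedia answer, as a short derivation (treating T as the index of the first occurrence of x in a uniformly random arrangement): the k copies of x split the other n-k elements into k+1 gaps whose expected sizes are equal by symmetry, so the expected number of non-x elements scanned before the first x is (n-k)/(k+1), and

        E[T] = \frac{n-k}{k+1} + 1 = \frac{n+1}{k+1}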

    Read the article

  • Time complexity of a certain program

    - by HokageSama
    In a discussion with my friend I was not able to predict the correct, tight time complexity of a program. The program is below.

        /* This function reads the input array "input" and updates the array "output"
           such that B[i] = index of the nearest greater value among A[i], A[i+1], ..., A[n],
           for all i in [1, n].
           Time complexity: ?? */
        void createNearestRightSidedLargestArr(int* input, int size, int* output) {
            if (!input || size < 1) return;
            // The last element of output is always -1, since no element lies to its right.
            output[size-1] = -1;
            int curr = size - 2;
            int trav;
            while (curr >= 0) {
                if (input[curr] < input[curr + 1]) {
                    output[curr] = curr + 1;
                    curr--;
                    continue;
                }
                trav = curr + 1;
                while (output[trav] != -1 && input[output[trav]] < input[curr])
                    trav = output[trav];
                output[curr--] = output[trav];
            }
        }

    I said the time complexity is O(n^2) but my friend insists that this is not correct. What is the actual time complexity?
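
    For contrast, here is a variant with an easy worst-case bound, sketched in Python: the classic stack-based next-greater-element scan. Each index is pushed and popped at most once, so the whole scan is O(n):

        def nearest_greater_to_right(a):
            out = [-1] * len(a)
            stack = []                      # indices whose next-greater is not yet known
            for i, v in enumerate(a):
                while stack and a[stack[-1]] < v:
                    out[stack.pop()] = i    # v is the first greater value to their right
                stack.append(i)
            return out

        print(nearest_greater_to_right([4, 2, 1, 5, 3]))  # [3, 3, 3, -1, -1]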

    Read the article

  • How to display locale sensitive time format without seconds in python

    - by Tim Kersten
    I can output a locale-sensitive time format using strftime('%X'), but this always includes seconds. How might I display this time format without seconds?

        >>> import locale
        >>> import datetime
        >>> locale.setlocale(locale.LC_ALL, 'en_IE.utf-8')
        'en_IE.utf-8'
        >>> print datetime.datetime.now().strftime('%X')
        12:22:43
        >>> locale.setlocale(locale.LC_ALL, 'zh_TW.utf-8')
        'zh_TW.utf-8'
        >>> print datetime.datetime.now().strftime('%X')
        12時22分58秒

    The only way I can think of doing this is attempting to parse the output of locale.nl_langinfo(locale.T_FMT) and strip out the seconds bit, but that brings its own trickery.

        >>> print locale.nl_langinfo(locale.T_FMT)
        %H時%M分%S秒
        >>> locale.setlocale(locale.LC_ALL, 'en_IE.utf-8')
        'en_IE.utf-8'
        >>> print locale.nl_langinfo(locale.T_FMT)
        %T
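
    A sketch of the format-stripping approach the question mentions, with its known rough edge made explicit: removing '%S' and a preceding separator works for colon-separated locales, but suffix-style locales such as zh_TW keep a trailing character (秒) that needs locale-specific handling:

        import locale, re
        from datetime import datetime

        locale.setlocale(locale.LC_ALL, 'en_IE.utf-8')   # assumes the locale is installed

        fmt = locale.nl_langinfo(locale.T_FMT).replace('%T', '%H:%M:%S')  # expand the %T alias
        fmt = re.sub(r':?%S', '', fmt)   # drop seconds and the separator before them, if any

        print(datetime.now().strftime(fmt))   # e.g. 12:22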

    Read the article

  • Fetching real time data from excel

    - by Umesh Sharma
    I am seriously looking for your valuable help, for the first time here. If possible, please help me. I am developing a VB.NET app in which I read "real time data" from an Excel sheet using "Microsoft.Office.Interop.Excel", i.e. Excel automation. All cells in the Excel sheet fetch stock data from some LOCAL DDE server, like "=XYZ|Bid!GOLD", "=XYZ|Bid!SILVER", "=XYZ|Ask!SILVER" and so on. Some cells also hold fixed values like "Symbol", "Bid Rate", "32.90" etc. The values of DDE-mapped cells (i.e. =XYZ|xxxx!yyy) are continuously changing. THE PROBLEM is here: "FIXED values" from Excel cells come through to my app quite OK, but all DDE-mapped cell values come through as "-2146826246" (when the local DDE server data source is ON) or "-2146826265" (OFF). Although, if I use C#.NET, it's all OK, just not with VB.NET. I want to display a range of Excel cells (A1 to J50) in a VB.NET ListView, refreshing every 200ms (5 times every second).

    Important: Is it possible to BIND "ListView items/column values" to "Excel cells" or some local memory variables? Currently, I am reading Excel "cell by cell" and trying to put the values in the .NET ListView, but CPU usage is very high and it's a very slow process. If yes, then how, please? I am a VFP developer but new to .NET. It's very easy in VFP, so why not in .NET? Please guide me if someone has the solution.
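
    On the performance point, one standard Interop technique is to fetch the whole block in a single COM call instead of 500 per-cell reads: Range(...).Value returns the entire A1:J50 block at once. A minimal sketch of that idea, shown here via Python's pywin32 COM bindings (the question's VB.NET Interop exposes the same Range object), assuming Excel is already running with the sheet open:

        import win32com.client  # pywin32

        excel = win32com.client.GetActiveObject("Excel.Application")
        sheet = excel.ActiveSheet

        # One COM round-trip for the whole block instead of 500 cell-by-cell calls.
        block = sheet.Range("A1:J50").Value   # tuple of 50 row-tuples
        for row in block[:2]:
            print(row)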

    Read the article

  • linux bash script: set date/time variable to auto-update (for inclusion in file names)

    - by user1859492
    Essentially, I have a standard format for file naming conventions. It breaks down to this: target_dateUTC_timeUTC_tool. So, for instance, if I run tcpdump on a target of 'foo', then the file would be foo_dateUTC_timeUTC_tcpdump. Simple enough, but a pain for everyone to constantly (and consistently) enter... so I've tried to create a bash script which sets shell variables like so:

        FILENAME=$TARGET\_$UTCTIME\_$TOOL

    Then, I can just call the variable at runtime, like so:

        tcpdump -w $FILENAME.lpc

    All of this works like a champ. I've got a menu-driven .sh which gives the user the option of viewing the current variables as well as setting them... file generation is a breeze. Unfortunately, by setting the date/time variable, it is locked to the value at the time of creation (naturally). I set the variable like so:

        UTCTIME=$(/bin/date --utc +"%Y%m%d_%H%M%Z")

    What I really need is either a way to create a variable which updates at runtime, or (more likely) another way to skin this cat. While scouring for solutions, I came across similar issues... like this one. But, to be honest, I'm stumped on how to marry the two approaches and create a simple, distributable solution. I can post the entire .sh if anyone cares to review (about 120 lines).
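
    The underlying fix is to evaluate the clock when the name is used, not when the variable is assigned; in bash that means wrapping the date in a function rather than a variable. The same idea as a Python sketch (the target and tool names are just example values):

        from datetime import datetime, timezone

        def filename(target, tool):
            # the timestamp is computed at call time, not at definition time
            stamp = datetime.now(timezone.utc).strftime("%Y%m%d_%H%M%Z")
            return f"{target}_{stamp}_{tool}"

        print(filename("foo", "tcpdump"))   # e.g. foo_20140601_1200UTC_tcpdump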

    Read the article

  • PostgreSQL to Data-Warehouse: Best approach for near-real-time ETL / extraction of data

    - by belvoir
    Background: I have a PostgreSQL (v8.3) database that is heavily optimized for OLTP. I need to extract data from it on a semi-real-time basis (someone is bound to ask what semi-real-time means, and the answer is: as frequently as I reasonably can, but I will be pragmatic; as a benchmark let's say we are hoping for every 15 min) and feed it into a data warehouse.

    How much data? At peak times we are talking approx 80-100k rows per minute hitting the OLTP side; off-peak this will drop significantly to 15-20k. The most frequently updated rows are ~64 bytes each, but there are various tables etc., so the data is quite diverse and can range up to 4000 bytes per row. The OLTP is active 24x5.5.

    Best solution? From what I can piece together, the most practical solution is as follows: create a TRIGGER to write all DML activity to a rotating CSV log file; perform whatever transformations are required; use the native DW data-pump tool to efficiently pump the transformed CSV into the DW.

    Why this approach? TRIGGERS allow selective tables to be targeted rather than being system-wide, their output is configurable (i.e. into a CSV), and they are relatively easy to write and deploy. SLONY uses a similar approach and the overhead is acceptable. CSV is easy and fast to transform, and easy to pump into the DW.

    Alternatives considered: Using native logging (http://www.postgresql.org/docs/8.3/static/runtime-config-logging.html). The problem with this is that it looked very verbose relative to what I needed, and was a little trickier to parse and transform. However, it could be faster, as I presume there is less overhead compared to a TRIGGER. Certainly it would make the admin easier as it is system-wide, but again, I don't need some of the tables (some are used for persistent storage of JMS messages which I do not want to log). Querying the data directly via an ETL tool such as Talend and pumping it into the DW: the problem is the OLTP schema would need tweaking to support this, and that has many negative side effects. Using a tweaked/hacked SLONY: SLONY does a good job of logging and migrating changes to a slave, so the conceptual framework is there, but the proposed solution just seems easier and cleaner. Using the WAL.

    Has anyone done this before? Want to share your thoughts?

    Read the article
