Search Results

  • Processes slow down after running actively for some time

    - by Yervand Aghababyan
    I have several cron jobs running on an Ubuntu machine. Each one does some pretty heavy-load work: the jobs parse files, and the bigger the file, the longer it takes to parse. The strange thing is that if I make the files too big (like 30 MB) the script kind of hangs. It starts processing them really enthusiastically, but after some time (something like 5-10 minutes) the CPU usage of the process drops a lot and it gets into some "zombie" state. If prior to this the process in htop was using 70-80% of the CPU, after this drop occurs it slows down to something like 5-10%, and the load average drops as well. The status of the processes sometimes changes to D in htop, which indicates uninterruptible sleep (usually blocked on I/O), not a zombie as I first assumed. Today I noticed the same behavior in MySQL processes when executing heavy queries (a query took something like 4 hours to execute). The cron jobs are mostly PHP, and during their processing most of the CPU is eaten by the PHP process, not MySQL, so I think the issue is not with a specific language/program but with the way the processes are "managed". The only other place I've seen similar behavior was on my Amazon EC2 micro instance: after some aggressive use of CPU, the CPU quota would take effect and everything would slow down dramatically. But this is a dedicated machine running Ubuntu. What may be the cause?
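
    For diagnosis, the D-state processes can be listed programmatically to confirm it is I/O they are blocked on. A minimal sketch, assuming the third-party psutil package is installed:

        import psutil

        # List processes whose state is "D" in htop/ps terms (uninterruptible sleep)
        for proc in psutil.process_iter(attrs=["pid", "name", "status"]):
            if proc.info["status"] == psutil.STATUS_DISK_SLEEP:
                print(proc.info["pid"], proc.info["name"])

    If the parsers show up here during the slowdown, the next things to check are disk throughput and memory pressure (iostat, vmstat, the kernel log) rather than CPU.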

  • Why does my ftp(e)s server fail about half of the time

    - by user1092608
    I have this ongoing discussion at work regarding our FTP server running via vsftpd. Initially, we opted to serve FTPES instead of SFTP because this seemed the most flexible and straightforward way for our server to offer secure file transmission. Since then, our FTP server has become a source of issues for our end users. Half of the time, users complain about FTP connections not working. I must say, I tested our FTP through different infrastructures (in the field, at random times, at random places) and indeed, behind some configurations (no idea how they are configured, because of the 'field' testing), I receive errors. One of them is:

        Error: Failed to retrieve directory listing (FileZilla)

    Furthermore, behind my basic home configuration, everything seems to run fine. I (think I) did all the basic configuration checks (passive mode? firewall for all ports? ...) and can't seem to find the source. Being a bunch of techies at our small office, yet knowing nothing about infrastructure, some have started suggesting that the FTPS protocol could be the source of the issues ("No, I only knew SFTP so far", "FTPS is not widespread"). I, however, strongly doubt this hypothesis, since reading around on the web and asking questions on Server Fault, everyone seems to deny it. So, as I would like to avoid reconfiguring (this involves messing around in our SSH service, our virtual user setup and the FTP service), I would need some advice on: 1) what could potentially be the general cause? 2) do you have some general tips? 3) would you mind having a look at my configuration file?

        ----- General Settings -----
        write_enable=YES
        dirmessage_enable=YES
        nopriv_user=ftpsecure
        ftpd_banner="Welcome to XXXX FTP!"
        hide_ids=YES
        hide_file=.*
        max_per_ip=10
        max_clients=10
        local_enable=YES
        local_umask=022
        chroot_local_user=YES
        secure_chroot_dir=/usr/share/empty
        userlist_enable=NO
        userlist_deny=YES
        userlist_file=/etc/vsftp_deny_users
        guest_enable=YES
        guest_username=ftpvirtual
        virtual_use_local_privs=YES
        user_sub_token=$USER
        local_root=/srv/ftp/ftpvirtual/$USER
        anonymous_enable=NO
        syslog_enable=NO
        xferlog_enable=YES
        xferlog_file=/var/log/vsftpd_xfer.log
        connect_from_port_20=YES
        pam_service_name=vsftpd
        listen=YES
        listen_port=21
        pasv_enable=YES
        pasv_min_port=30000
        pasv_max_port=30030
        pasv_address=foo
        ssl_enable=YES
        rsa_cert_file=/etc/vsftpd.pem
        rsa_private_key_file=/etc/vsftpd.pem
        force_local_data_ssl=YES
        force_local_logins_ssl=YES
        ssl_tlsv1=YES
        ssl_sslv2=YES
        ssl_sslv3=YES
        ssl_ciphers=HIGH
        anon_mkdir_write_enable=NO
        anon_root=/srv/ftp
        anon_upload_enable=NO
        idle_session_timeout=900
        log_ftp_protocol=NO
        dsa_cert_file=/etc/vsftpd.pem

    Thanks

  • Getting run-time error 1004 in code

    - by krishna123
    I tried the code provided by VBA Express for combining sheets; during execution it displays run-time error 1004: "Application-defined or object-defined error". My scenario: I have an Excel workbook in which the first sheet is "Connection", followed by Sheet1, Sheet2 and so on. I am combining all sheets except the "Connection" sheet by starting with the second sheet. I tried the following line of code to exclude the "Connection" sheet:

        If Not Sheet.Name = "Connection" Then

    but it did not work. Some of the sheets have large data in some cells. Here is the code I am using; the failing line is highlighted:

        Sub CopyFromWorksheets()
            Dim wrk As Workbook     'Workbook object - always good to work with object variables
            Dim sht As Worksheet    'Object for handling worksheets in loop
            Dim trg As Worksheet    'Master worksheet
            Dim rng As Range        'Range object
            Dim colCount As Integer 'Column count in tables in the worksheets

            Set wrk = ActiveWorkbook 'Working in active workbook

            For Each sht In wrk.Worksheets
                If sht.Name = "Master" Then
                    sht.Delete
                    Exit Sub
                End If
            Next sht

            'We don't want screen updating
            Application.ScreenUpdating = False

            'trg.SaveAs "C:\temp\CPReport1.xls"

            'Add new worksheet as the last worksheet
            Set trg = wrk.Worksheets.Add(After:=wrk.Worksheets(wrk.Worksheets.Count))
            'Rename the new worksheet
            trg.Name = "Master"

            'Get column headers from the first worksheet
            'Column count first
            Set sht = wrk.Worksheets(2)
            colCount = sht.Cells(1, 255).End(xlToLeft).Column

            'Now retrieve headers, no copy&paste needed
            With trg.Cells(1, 1).Resize(1, colCount)
                .Value = sht.Cells(1, 1).Resize(1, colCount).Value
                'Set font as bold
                .Font.Bold = True
            End With

            trg.SaveAs "C:\temp\CPReport1.xls"

            'We can start the loop
            'Skip sheet "Connection"
            If Not sht.Name = "Connection" Then
                For Each sht In wrk.Worksheets
                    'If worksheet in loop is the last one, stop execution (it is the Master worksheet)
                    If sht.Index = wrk.Worksheets.Count Then
                        Exit For
                    End If
                    'Data range in worksheet - starts from second row as first rows are the header rows in all worksheets
                    Set rng = sht.Range(sht.Cells(2, 1), sht.Cells(65536, 1).End(xlUp).Resize(, colCount))
                    'Put data into the Master worksheet
                    '----------------- Error in the line below ----------------------------------------------
                    trg.Cells(65536, 1).End(xlUp).Offset(1).Resize(rng.Rows.Count, rng.Columns.Count).Value = rng.Value
                    '-----------------------------------------------------------------------------------------
                Next sht
            End If

            'Fit the columns in Master worksheet
            trg.Columns.AutoFit

            'Dim dest, destyfile
            'dest = "E:\Test_Merge\"
            'destyfile = dest & "_" & trg.Name
            'trg.SaveAs (destyfile)

            'Screen updating should be activated
            Application.ScreenUpdating = True
        End Sub

  • GWT Query fails the second time only

    - by Koran
    Hi, I have a visualization function in GWT which calls for two instances of the same panel, i.e. two queries. Now, suppose one URL is A and the other URL is B. I am facing an issue where, if A is called first, then both A and B work. If B is called first, then only B works; A times out. If I call A both times, only the first A works; the second time it times out. If I call B twice, it works both times without a hitch. Even though the error says it timed out, it actually is not timing out: in the Firefox status bar it shows "transferring data from A", and then it gets stuck. This doesn't even show up on the first query. The only difference between A and B is that B returns very fast, while A returns comparatively slowly. The sample code is given below (closing braces restored):

        public Panel() {
            Runnable onLoadCallback = new Runnable() {
                public void run() {
                    Query query = Query.create(dataUrl);
                    query.setTimeout(60);
                    query.send(new Callback() {
                        public void onResponse(QueryResponse response) {
                            if (response.isError()) {
                                Window.alert(response.getMessage());
                            }
                        }
                    });
                }
            };
            VisualizationUtils.loadVisualizationApi(onLoadCallback, PieChart.PACKAGE);
        }

    What could be the reason for this? I cannot think of any reason why this should happen. Why is this happening only for A and not for B?

    EDIT: More research. The query which works both times (i.e. B) is the example URL given on the GWT visualization site; see comment [1]. So I tried to reproduce it in my App Engine app the following way:

        s = "google.visualization.Query.setResponse({version:'0.6',status:'ok',sig:'106459472',table:{cols:[{id:'A',label:'Source',type:'string',pattern:''},{id:'B',label:'Percent',type:'number',pattern:'#0.01%'}],rows:[{c:[{v:'Oil'},{v:0.37,f:'37.00%'}]},{c:[{v:'Coal'},{v:0.25,f:'25.00%'}]},{c:[{v:'Natural Gas'},{v:0.23,f:'23.00%'}]},{c:[{v:'Nuclear'},{v:0.06,f:'6.00%'}]},{c:[{v:'Biomass'},{v:0.04,f:'4.00%'}]},{c:[{v:'Hydro'},{v:0.03,f:'3.00%'}]},{c:[{v:'Solar Heat'},{v:0.005,f:'0.50%'}]},{c:[{v:'Wind'},{v:0.003,f:'0.30%'}]},{c:[{v:'Geothermal'},{v:0.002,f:'0.20%'}]},{c:[{v:'Biofuels'},{v:0.002,f:'0.20%'}]},{c:[{v:'Solar photovoltaic'},{v:4.0E-4,f:'0.04%'}]}]}});"
        response = HttpResponse(s, content_type="text/plain; charset=utf-8")
        response['Expires'] = time.strftime('%a, %d %b %Y %H:%M:%S GMT', time.gmtime())
        return response

    where s is the data returned when we run the query for B. I tried to add Expires etc. too, since that seems to be the only header with a difference, but now the query fails all the time. For more info, here is the difference between my server's response and the working server's response; they seem pretty similar. First my App Engine response, then the working server fetched via telnet:

        HTTP/1.0 200 OK
        Content-Type: text/plain
        Date: Wed, 16 Jun 2010 11:07:12 GMT
        Server: Google Frontend
        Cache-Control: private, x-gzip-ok=""

        google.visualization.Query.setResponse({version:'0.6',status:'ok',sig:'106459472',table:{cols:[{id:'A',label:'Source',type:'string',pattern:''},{id:'B',label:'Percent',type:'number',pattern:'#0.01%'}],rows:[{c:[{v:'Oil'},{v:0.37,f:'37.00%'}]},{c:[{v:'Coal'},{v:0.25,f:'25.00%'}]},{c:[{v:'Natural Gas'},{v:0.23,f:'23.00%'}]},{c:[{v:'Nuclear'},{v:0.06,f:'6.00%'}]},{c:[{v:'Biomass'},{v:0.04,f:'4.00%'}]},{c:[{v:'Hydro'},{v:0.03,f:'3.00%'}]},{c:[{v:'Solar Heat'},{v:0.005,f:'0.50%'}]},{c:[{v:'Wind'},{v:0.003,f:'0.30%'}]},{c:[{v:'Geothermal'},{v:0.002,f:'0.20%'}]},{c:[{v:'Biofuels'},{v:0.002,f:'0.20%'}]},{c:[{v:'Solar photovoltaic'},{v:4.0E-4,f:'0.04%'}]}]}});
        Connection closed by foreign host.

        Mac$ telnet spreadsheets.google.com 80
        Trying 209.85.231.100...
        Connected to spreadsheets.l.google.com.
        Escape character is '^]'.
        GET http://spreadsheets.google.com/tq?key=pWiorx-0l9mwIuwX5CbEALA&range=A1:B12&gid=0&headers=-1

        HTTP/1.0 200 OK
        Content-Type: text/plain; charset=UTF-8
        Date: Wed, 16 Jun 2010 11:07:58 GMT
        Expires: Wed, 16 Jun 2010 11:07:58 GMT
        Cache-Control: private, max-age=0
        X-Content-Type-Options: nosniff
        X-XSS-Protection: 1; mode=block
        Server: GSE

        google.visualization.Query.setResponse({version:'0.6',status:'ok',sig:'106459472',table:{cols:[{id:'A',label:'Source',type:'string',pattern:''},{id:'B',label:'Percent',type:'number',pattern:'#0.01%'}],rows:[{c:[{v:'Oil'},{v:0.37,f:'37.00%'}]},{c:[{v:'Coal'},{v:0.25,f:'25.00%'}]},{c:[{v:'Natural Gas'},{v:0.23,f:'23.00%'}]},{c:[{v:'Nuclear'},{v:0.06,f:'6.00%'}]},{c:[{v:'Biomass'},{v:0.04,f:'4.00%'}]},{c:[{v:'Hydro'},{v:0.03,f:'3.00%'}]},{c:[{v:'Solar Heat'},{v:0.005,f:'0.50%'}]},{c:[{v:'Wind'},{v:0.003,f:'0.30%'}]},{c:[{v:'Geothermal'},{v:0.002,f:'0.20%'}]},{c:[{v:'Biofuels'},{v:0.002,f:'0.20%'}]},{c:[{v:'Solar photovoltaic'},{v:4.0E-4,f:'0.04%'}]}]}});
        Connection closed by foreign host.

    Also, please note that App Engine did not allow the Expires header to go through; can that be the reason? But if that were the reason, it should not fail when B is sent first and then A.

    Comment [1]: http://spreadsheets.google.com/tq?key=pWiorx-0l9mwIuwX5CbEALA&range=A1:B12&gid=0&headers=-1

  • nginx connection timeout issue on some IPs

    - by sheldon
    I have recently shifted my server to nginx and php-fpm, getting rid of Apache. This has helped improve the speed of my website. Everything seemed to work fine until I came across this issue: nginx keeps throwing connection timeout errors for only certain IPs. One of those IPs is my office IP; we have a backend that is accessed from our office throughout the day. I use supervisord to launch 3 php-fpm processes with workers. This is my typical php-fpm config:

        pm.max_children = 50
        pm.start_servers = 20
        pm.min_spare_servers = 5
        pm.max_spare_servers = 35
        pm.max_requests = 300

    Since I have a server with 4 cores and 2 GB of RAM, this is my nginx setup:

        worker_processes 4;
        worker_rlimit_nofile 8192;

        events {
            worker_connections 1024;
            use epoll;
            multi_accept off;
        }

        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 55;
        recursive_error_pages on;
        server_name_in_redirect off;
        server_tokens off;
        client_header_timeout 3m;
        client_body_timeout 3m;
        send_timeout 3m;
        connection_pool_size 256;
        client_header_buffer_size 8k;
        large_client_header_buffers 4 32k;
        request_pool_size 4k;
        output_buffers 4 32k;
        postpone_output 1460;
        proxy_buffer_size 32k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        fastcgi_connect_timeout 120;
        fastcgi_send_timeout 120;
        fastcgi_read_timeout 180;
        fastcgi_buffer_size 128k;
        fastcgi_buffers 4 256k;
        fastcgi_busy_buffers_size 256k;
        fastcgi_temp_file_write_size 256k;
        fastcgi_intercept_errors on;
        fastcgi_ignore_client_abort off;

    Where am I going wrong with the config? I have tried various settings but the issue still persists. These are the errors I keep getting:

        2011/11/13 18:20:33 [error] 21583#0: *311683 upstream timed out (110: Connection timed out) while reading response header from upstream, client: IP, server: tastykhana.in, request: "GET url HTTP/1.1", upstream: "fastcgi://unix:/var/run/php-fpm.socket:", host: "tastykhana.in", referrer: "url"

  • PHP hits 100% CPU and eats RAM at the same time Monday to Friday

    - by Daniel Samuels
    We run a learning platform for primary schools here in the UK and it's all been running extremely well. However, at around 4 PM Monday to Friday we see the same issue arise: 1-2 PHP threads will spike to 100% CPU and gradually start eating up RAM until the server(s) fall over. 98%+ of our requests are HTTPS; these come into our Layer 7 load balancer, which decrypts the SSL data, adds the X-HTTP-Forwarded-For header and forwards the data on to an application server (we have 2 of those at the moment) on port 80. Our application servers have Varnish on port 80, which takes in the request from the load balancer and passes it through to Nginx on port 81. Nginx then works out which 'vhost' it needs to use and passes any PHP processing through to PHP-CGI, which is listening on a socket (managed through spawn-fcgi). There's an instance of Memcached running too; MySQL runs on a separate server / slave setup. Throughout the day the load will typically go no higher than 0.8 on either of the application servers, but at around 4 PM our problem arises. I've managed to run strace on a few of the affected threads and I always see the same thing:

        stat("/usr/share/zoneinfo/Europe/London", {st_mode=S_IFREG|0644, st_size=3661, ...}) = 0
        stat("/usr/share/zoneinfo/Europe/London", {st_mode=S_IFREG|0644, st_size=3661, ...}) = 0

    This repeats infinitely and never stops until you SIGKILL the process or the OOM killer kills it. There are no cron jobs scheduled to run at that time, and I don't have any way of seeing exactly which Nginx request is associated with the PHP process that is running. We are running PHP 5.3.14, which we upgraded to from 5.3.8 last week to rule out the older version being the problem. This issue has been going on for a few months now and we have no idea what is causing it. We deploy our software very frequently, so it's difficult to track down a specific release which may have started the problem, especially as we do not know the date of the first occurrence of this issue. Varnish is version 3.0.1, Nginx is 1.0.6 (which I understand is about a year old now), and our servers run CentOS release 5.7 (Final) with Intel i3 540s at 3.07 GHz and 8 GB of RAM. There's a discussion on the Debian mailing list about something very similar; you can find that here. Has anyone seen anything like this in the past? Does anyone have any ideas or suggestions? Is there a way of linking an Nginx request directly to a PHP thread? Is there a better way of seeing what the PHP process is doing? (I've seen GDB mentioned, though I'll have to recompile PHP.) Thanks!

  • First time installing Linux/Apache - unable to connect

    - by bob
    It's my first time installing Linux/Apache. I loaded CentOS and LAMPP (XAMPP for Linux) on a machine attached to a LAN, and turned off the distribution's httpd and mysqld (because I didn't want them to conflict with LAMPP):

        chkconfig httpd off
        chkconfig mysqld off

    Then LAMPP started successfully with /opt/lampp/lampp start:

        Starting XAMPP for Linux 1.7.3a...
        XAMPP: Starting Apache with SSL (and PHP5)...
        XAMPP: Starting MySQL...
        XAMPP: Starting ProFTPD...
        XAMPP for Linux started.

    Problem: "Unable to connect - Firefox can't establish a connection to the server at 179.16.51.36." I need some pointers as to where to look next. There are no errors in the error_log file (just some warnings), and I can ping the server. httpd.conf looks like this:

        ServerRoot "/opt/lampp"
        Listen 80
        ServerAdmin [email protected]
        ServerName 179.16.51.36
        DocumentRoot "/opt/lampp/htdocs"

        <Directory />
            Options FollowSymLinks
            AllowOverride None
        </Directory>

        <Directory "/opt/lampp/htdocs">
            Options Indexes FollowSymLinks ExecCGI Includes
            Order allow,deny
            Allow from all
        </Directory>

        ErrorLog logs/error_log
        LogLevel warn

        <IfModule log_config_module>
            LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
            LogFormat "%h %l %u %t \"%r\" %>s %b" common
            <IfModule logio_module>
                LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
            </IfModule>
            CustomLog logs/access_log common
        </IfModule>

        <IfModule alias_module>
            ScriptAlias /cgi-bin/ "/opt/lampp/cgi-bin/"
        </IfModule>

        <IfModule cgid_module>
        </IfModule>

        <Directory "/opt/lampp/cgi-bin">
            AllowOverride None
            Options None
            Order allow,deny
            Allow from all
        </Directory>

        DefaultType text/plain

        <IfModule mime_module>
            TypesConfig etc/mime.types
            AddType application/x-compress .Z
            AddType application/x-gzip .gz .tgz
            AddHandler cgi-script .cgi .pl
            AddType text/html .shtml
            AddOutputFilter INCLUDES .shtml
        </IfModule>

        EnableMMAP off
        EnableSendfile off
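
    As a first check, it can help to distinguish "nothing listening" from "blocked by a firewall" (a stock CentOS install often ships with iptables rules that block inbound port 80). A minimal probe from another machine on the LAN, using the IP from the question:

        import socket

        try:
            # Success means Apache is reachable; "connection refused" means the port
            # is reachable but nothing listens there; a timeout usually points at a
            # firewall silently dropping the packets.
            with socket.create_connection(("179.16.51.36", 80), timeout=5):
                print("TCP connection to port 80 succeeded")
        except OSError as exc:
            print("TCP connection failed:", exc)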

  • All internet requests in Windows time out

    - by Brandon
    So, I've run into a very strange problem with my home wireless network. Previously, at seemingly random times, the router seemed to disconnect all wireless hosts and cause all of the wired hosts to have a "limited connection" according to Windows. In order to fix this, I had to unplug all of the wired hosts from the router, unplug the modem from the router, and power-cycle the router. This seemed to solve the problem until the exact same thing happened a day later and I had to go through the same process again. That's when I noticed something weird. There was one wireless host (a Windows Vista laptop) that seemed to cause the router to disconnect the other hosts whenever it connected; when this happened, only that laptop was able to use the router's wireless. I disconnected it from the wireless (by disabling the wireless adapter), then reconnected it (by re-enabling it), and now it, like the other hosts, couldn't connect. I've never seen anything this strange happen on our network before. So, I restored the router to factory settings and the problem seems to have vanished, except for one crucial issue: another host (a Windows 7 laptop) that was perfectly able to connect before all of the router issues, and even in between the crashing and power-cycling events, now says it's connected and able to reach the Internet, yet all requests time out. In any browser I've tried, the tab says "connecting to [site]..." for a solid minute and then tells me the request timed out. When I try to ping google.com in cmd, it also says the request timed out. In frustration, I booted into a dual-boot Ubuntu installation on the Windows 7 host and the connection works fine, to my surprise; Ubuntu is where I am now typing this rather long question. I haven't looked through the event log in Windows, but I will post anything I find in an edit. I haven't tried connecting (in Windows 7) to any other wireless network. The fact that it works in Ubuntu suggests it's Windows and not the router, but I didn't change any wireless settings in Windows between when it was able to reach the Internet and when it wasn't. Does anyone have any clue what could have happened? I'm open to buying another router, as this one is almost a year old :) but I would like to know what's going on here. Thanks in advance! P.S. Sorry for how long my question is; I'm a little anxious (:

  • OTP or S/KEY - Conversion of Hex string into 6 readable words

    - by Garbit
    As seen in RFC 2289 (the successor to S/KEY, RFC 1760), there is a list of words that must be used when converting the hexadecimal string into a readable format. How would I go about doing this? The RFC says: "The one-time password is therefore converted to, and accepted as, a sequence of six short (1 to 4 letter) English words. Each word is chosen from a dictionary of 2048 words; at 11 bits per word, all one-time passwords may be encoded." Read more: http://www.faqs.org/rfcs/rfc1760.html#ixzz0fu7QvXfe Does this mean converting the hex into decimal and then using that as an index into an array of words? The other thing it could be is using a text encoding, e.g. 1111 might equal "dog" in UTF-8 encoding. Thanks in advance for your help!
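
    That first reading is essentially right: per RFC 2289, the 64-bit one-time password is extended with a 2-bit checksum (the sum of its 32 two-bit groups modulo 4), and the resulting 66 bits are split into six 11-bit values, each of which indexes the standard 2048-word dictionary. No text encoding is involved. A minimal sketch, assuming words is that 2048-entry dictionary loaded as a Python list (exact bit ordering is specified in the RFC; this follows the most-significant-group-first convention):

        def otp_to_words(otp_hex, words):
            # otp_hex: 16 hex digits (64 bits); words: the RFC dictionary (2048 entries)
            value = int(otp_hex, 16)
            # 2-bit checksum: sum of the 32 two-bit groups, modulo 4
            checksum = sum((value >> shift) & 0x3 for shift in range(0, 64, 2)) % 4
            value = (value << 2) | checksum  # 66 bits total
            # Six 11-bit indices, most significant group first
            return [words[(value >> (11 * i)) & 0x7FF] for i in range(5, -1, -1)]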

  • Get seconds since epoch in any POSIX compliant shell

    - by mattbh
    I'd like to know if there's a way to get the number of seconds since the UNIX epoch in any POSIX-compliant shell, without resorting to non-POSIX languages like Perl or using non-POSIX extensions like GNU awk's strftime function. Here is a solution I've already ruled out:

        date +%s    # doesn't work on Solaris

    I've seen some shell scripts suggested before which parse the output of date and then derive seconds from the formatted Gregorian calendar date, but they don't seem to take details like leap seconds into account. GNU awk has the strftime function, but this isn't available in standard awk. I could write a small C program which calls the time function, but the binary would be specific to a particular architecture. Is there a cross-platform way to do this using only POSIX-compliant tools? I'm tempted to give up and accept a dependency on Perl, which is at least widely deployed:

        perl -e 'print time'    # cheating (non-POSIX), but should work on most platforms

  • How do you keep a balance between working, training, health and family?

    - by Jim Burger
    One trend I see in the awesome developers I've met, is that they devote inordinate amounts of time to coding at the expense of (usually) their health. Personally, I also find it hard to motivate myself to keep healthy. Every now and again, I meet a fantastic coder who has it clocked; they are up to date with the latest dev news, have time to read about good programming practices, and to finish it off, have happy wives/husbands and families. How do you guys/gals manage it in the short 24 hours a day that we all have?

  • Linear complexity and quadratic complexity

    - by jasonline
    I'm just not sure... Suppose you have code that can be executed with either of the following complexities: (1) a sequence of O(n) operations, for example two O(n) passes in sequence, or (2) O(n²). The preferred version would be the one that runs in linear time. But would there ever be a point where the sequence of O(n) passes costs so much that O(n²) would be preferred? In other words, is the statement C·O(n) < O(n²) always true for any constant C? Why or why not? What factors would affect the conditions such that it would be better to choose the O(n²) version?
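
    A concrete way to see the crossover: if the sequential version costs C·n steps and the alternative costs n², then C·n < n² exactly when n > C. For C = 100, say, the quadratic version wins only for inputs smaller than 100 elements; for every larger n the linear version wins, and the gap keeps growing. Since big-O describes behavior as n grows without bound, C·O(n) is asymptotically better than O(n²) for any fixed constant C. Choosing the O(n²) version only makes sense when n is known to stay small, or when constant factors (cache behavior, simpler code) dominate at the sizes actually run.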

  • Dynamically add files to a Visual Studio deployment project

    - by Graeme Yeo
    I've been desperately looking for the answer to this and I feel I'm missing something obvious. I need to copy a folder full of data files into the TARGETDIR of my deployment project at compile time. I can see how I would add individual files (i.e. right-click in File System and go to Add - File), but I have a folder full of data files which constantly gets added to, and I'd prefer not to have to add the new files each time I compile. I have tried using a PreBuildEvent to copy the files:

        copy $(ProjectDir)..\Data*.* $(TargetDir)Data\

    which fails with error code 1 when I build. I can't help but feel I'm missing the point here, though. Any suggestions? Thanks in advance. Graeme

  • notification scheduling question

    - by sims
    Hi Stackers! I'm building an app that needs to send out notifications to users depending on user-definable "notifications". So the notifications are not per event; they are arbitrary. A cron job should query the database and send out emails when it finds an event with matching criteria. This is a scheduling app, so naturally one of the criteria is time. I think I've figured it out, but I'm sure there are better ideas out there, as this seems to be a fairly common thing to do. I think I'll limit the user's ability to "minutes beforehand" for getting notified, as opposed to seconds or hours. So I have an eventA and a notificationA. notificationA should be triggered if an event is due within 45 minutes. So eventA starts at 17:30; the user should be notified at 16:45. But the cron job might not run at exactly 00 seconds. So when the difference is not 45 minutes and 0 seconds but, say, 45 minutes and 5 seconds, the notification time is past, the email doesn't get sent, and the user misses the event. Shugar. We should also take into account that the cron job might take a long time to run, so maybe we should only trigger it every 5 minutes. So we need a bigger interval, maybe. So then my guess would be to say:

        if ((eventDueTime - now > notificationTimeValue - interval) &&
            (eventDueTime - now < notificationTimeValue + interval))
                sendTheFrikinNotificationAlready();

    It seems kind of risky if there are thousands of notifications to send out. I guess I could make a thread for each notification, and then a thread for each event that matches the criteria. That might help. Does that make sense? Any other ideas? Thanks!
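
    One robust way around the "cron doesn't fire at exactly :00 seconds" problem is to stop testing for an exact offset and instead have each run cover the half-open window between the previous run and the current one; a late or slow run then can never skip a notification. A sketch of the idea (names like notify_lead and the event shape are illustrative, not from the poster's schema):

        from datetime import timedelta

        def due_notifications(events, last_run, this_run):
            # Yield events whose notification moment falls in [last_run, this_run).
            # Each instant belongs to exactly one window, so nothing is missed and
            # nothing is sent twice, however jittery the cron schedule is.
            for event in events:
                notify_at = event.start_time - timedelta(minutes=event.notify_lead)
                if last_run <= notify_at < this_run:
                    yield event

    Persist this_run at the end of each job and use it as the next run's last_run.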

  • How to check if datetime is older than 20 seconds.

    - by Jelle
    Hello! This is my first time here, so I hope I'm posting this question in the right place. :) I need to build flood control for my script, but I'm not good at all these datetime-to-time conversions with UTC and stuff. I hope you can help me out. I'm using Google App Engine with Python. I've got a db.DateTimeProperty in the Datastore which should be checked: if it's older than 20 seconds, proceed. Could anybody help me out? So, in semi-pseudocode:

        q = db.GqlQuery("SELECT * FROM Kudo WHERE fromuser = :1", user)
        lastplus = q.get()
        if lastplus.date is older than 20 seconds:
            print "Go!"
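
    For what the check itself can look like: App Engine's DateTimeProperty values are naive UTC datetimes, so comparing against datetime.datetime.utcnow() with a timedelta does it. Reusing the query from the question (Python 2, as on App Engine at the time):

        import datetime

        q = db.GqlQuery("SELECT * FROM Kudo WHERE fromuser = :1", user)
        lastplus = q.get()

        # True when the stored timestamp is more than 20 seconds in the past
        if lastplus and datetime.datetime.utcnow() - lastplus.date > datetime.timedelta(seconds=20):
            print "Go!"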

  • what is order notation f(n)=O(g(n))?

    - by Lopa
    Two questions. Question 1: under what circumstances would this [O(f(n)) = O(k·f(n))] be the most appropriate form of time-complexity analysis? Question 2: working from the mathematical definition of O notation, show that O(f(n)) = O(k·f(n)) for a positive constant k. My view: for the first one, I think it is the average-case and worst-case form of time-complexity analysis. Am I right? And what else should I write for it? For the second one, I think we need to define the function mathematically, so is the answer something like: multiplication by a constant just corresponds to a readjustment of the value of the arbitrary constant k in the definition of O?
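
    For the second part, the argument falls straight out of the definition. g(n) in O(k·f(n)) means there are constants C > 0 and n0 with |g(n)| <= C·k·f(n) for all n > n0; since C·k is itself just a positive constant, the same inequality shows g(n) in O(f(n)) with the constant C' = C·k. Conversely, |g(n)| <= C·f(n) can be rewritten as |g(n)| <= (C/k)·k·f(n) for positive k, so g(n) in O(k·f(n)). The two classes therefore contain exactly the same functions, which is the formal reason multiplicative constants are dropped in O notation.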

  • Converting Excel data in Python to find a frequency

    - by Jacob
    Essentially I've got an Excel file with voltage in the first column and time in the second. I want to find the period of the voltage: plotted with voltage on the y axis and time on the x axis, it is periodic, looking similar to a sine function. To find the frequency I have loaded my Excel file into Python, as I think this will make it easier; there may be something I've missed that would simplify this. So far in Python I have:

        import xlrd
        import numpy as N
        import numpy.fft as F
        import matplotlib.pyplot as P

        wb = xlrd.open_workbook('temp7.xls')  # loading the Excel file
        wb.sheet_names()
        sh = wb.sheet_by_index(0)
        first_column = sh.col_values(1)   # values from Excel
        second_column = sh.col_values(2)  # values from Excel

    Now how do I find the frequency from this? Huge thanks to anyone who can help! Jacob
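
    A sketch of the missing step, reusing the aliases above and assuming the samples are evenly spaced in time (the FFT needs a uniform sampling interval) and that the two columns really are voltage and time respectively:

        voltage = N.array(first_column, dtype=float)
        t = N.array(second_column, dtype=float)

        dt = t[1] - t[0]                                    # sampling interval, assumed uniform
        spectrum = N.abs(F.rfft(voltage - voltage.mean()))  # subtract the mean to drop the DC peak
        freqs = F.rfftfreq(len(voltage), d=dt)              # frequency for each FFT bin

        dominant = freqs[N.argmax(spectrum)]                # strongest periodic component
        print("frequency: %g Hz, period: %g s" % (dominant, 1.0 / dominant))

    If the time steps are not uniform, the data needs resampling onto a regular grid (e.g. with N.interp) before the FFT.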

  • basic boost date_time input format question

    - by Chris H
    I've got a pointer to a string (char *) as input. The date/time looks like this:

        Sat, 10 Apr 2010 19:30:00

    I'm only interested in the date, not the time. I created an "input_facet" with the format I want:

        boost::date_time::date_input_facet inFmt("%a %d %b %Y");

    but I'm not sure what to do with it. Ultimately I'd like to create a date object from the string. I'm pretty sure I'm on the right track with that input facet and format, but I have no idea how to use it. Thanks.

  • How should I change my Graph structure (very slow insertion)?

    - by Nazgulled
    Hi, this program I'm working on is a social network, which means there are users and their profiles. The profile structure is UserProfile.

    Now, there are various possible Graph implementations and I don't think I'm using the best one. I have a Graph structure and, inside, a pointer to a linked list of type Vertex. Each Vertex element has a value, a pointer to the next Vertex and a pointer to a linked list of type Edge. Each Edge element has a value (so I can define weights and whatever else is needed), a pointer to the next Edge and a pointer to the Vertex that owns it.

    I have 2 sample files with data to process (in CSV style) and insert into the Graph. The first one is the user data (one user per line); the second one is the user relations (for the graph). The first file is quickly inserted into the graph because I always insert at the head, and there are ~18000 users. The second file takes ages, even though I still insert the edges at the head: it has about ~520000 lines of user relations and takes 13-15 minutes to insert into the Graph. I made a quick test and reading the data is quick, instantaneous really. The problem is the insertion.

    This problem exists because I have a Graph implemented with linked lists for the vertices. Every time I need to insert a relation, I need to look up 2 vertices so I can link them together. Doing this for ~520000 relations takes a while. How should I solve this?

    Solution 1) Some people recommended implementing the Graph (the vertices part) as an array instead of a linked list. This way I have direct access to every vertex and the insertion time would probably drop considerably. But I don't like the idea of allocating an array with [18000] elements. How practical is this? My sample data has ~18000 users, but what if I need far fewer or far more? The linked list approach has that flexibility: I can have whatever size I want as long as there's memory for it. The array doesn't; how am I going to handle such a situation? What are your suggestions? Using linked lists is good for space complexity but bad for time complexity, and using an array is good for time complexity but bad for space complexity. Any thoughts about this solution?

    Solution 2) This project also demands that I have some sort of data structure that allows quick lookup based on a name index and an ID index. For this I decided to use hash tables. My tables are implemented with separate chaining for collision resolution, and when a load factor of 0.70 is reached I recreate the table. I base the next table size on this: http://planetmath.org/encyclopedia/GoodHashTablePrimes.html

    Currently, both hash tables hold a pointer to the UserProfile instead of duplicating the user profile itself. Duplicating would be stupid: changing data would require 3 changes. So I just save the pointer to the UserProfile. The same user profile pointer is also saved as the value in each Graph Vertex. So I have 3 data structures, one Graph and two hash tables, and every single one of them points to the same exact UserProfile. The Graph structure serves the purpose of finding the shortest path and things like that, while the hash tables serve as quick indexes by name and ID.

    What I'm thinking of, to solve my Graph problem, is to have the hash table values point to the corresponding Vertex instead of the UserProfile. It's still a pointer; no more and no less space is used, I just change what I point to. Like this, I can easily and quickly look up each Vertex I need and link them together. This would insert the ~520000 relations pretty quickly. I thought of this solution because I already have the hash tables and I need to have them; then why not take advantage of them for indexing the Graph vertices instead of the user profile? It's basically the same thing: I can still access the UserProfile pretty quickly, just go to the Vertex and then to the UserProfile. But do you see any cons of this second solution against the first one? Or only pros that outweigh the pros and cons of the first solution?

    Other Solution) If you have any other solution, I'm all ears, but please explain the pros and cons of that solution over the previous 2. I really don't have much time to waste with this right now; I need to move on with this project, so if I'm going to make such a change I need to understand exactly what to change and whether that's really the way to go.

    Hopefully no one fell asleep reading this and closed the browser; sorry for the big testament. But I really need to decide what to do about this, and I really need to make a change.

    P.S.: When answering my proposed solutions, please enumerate them as I did, so I know exactly which one you are talking about and don't confuse me more than I already am.

  • How precise is the internal clock of a modern PC?

    - by mafutrct
    I know that 10 years ago, typical clock precision equaled a system-tick, which was in the range of 10-30ms. Over the past years, precision was increased in multiple steps. Nowadays, there are ways to measure time intervals in actual nanoseconds. However, usual frameworks still return time with a precision of only around 15ms. My question is, which steps did increase the precision, how is it possible to measure in nanoseconds, and why are we still often getting less-than-microsecond precision (for instance in .NET). (Disclaimer: It strikes me as odd that this was not asked before, so I guess I missed this question when I searched. Please close and point me to the question in that case, thanks. I believe this belongs on SO and not on any other SOFU site. I understand the difference between precision and accuracy.)

  • Administrators vs Programmers: who gets more people interaction / working hours?

    - by sanksjaya
    Well, I've heard programmers get to interact with other programmers quite a lot. But who gets to meet more new people on a daily basis at work without getting the feeling "Goosh! I'm stuck with him/this for another year :(", admins or coders? And what kind of people does each get to interact with? Secondly, I've had this myth for a long time that, unlike programmers, network/system/security admins get locked up in a den and juiced up late nights and early mornings, and most of the time have to slip out of work without being noticed. But recently one of my seniors from grad school told me he had to work late and on weekends for a product release. How true is this, and how often does it happen to programmers and to admins?

  • What is the best possible technology for pulling huge data from 4 remote servers

    - by Habib Ullah Bahar
    Hello, for one of our projects we need to pull huge real-time stock data from 4 remote servers across two countries. The trivial process here: check the sources at a regular interval and save the updates to the database. But as this is real-time stock data for more than 1000 companies, I have to pull every second, which I think isn't good in terms of memory and bandwidth. Please give me suggestions on which technology/platform we should choose [we are flexible here: PHP, Python, Java, Perl, any one of them is OK for us] so that this can be achieved easily and with good performance.
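
    For scale: polling four sources once per second each is modest if the fetches run concurrently, connections are reused, and database writes are batched. A minimal sketch of that shape in Python; the URLs and the save function are placeholders, and any of the languages listed would support the same pattern:

        import threading
        import time
        import urllib.request

        SOURCES = [
            "http://server1.example.com/feed",  # placeholder endpoints, one per remote server
            "http://server2.example.com/feed",
            "http://server3.example.com/feed",
            "http://server4.example.com/feed",
        ]

        def poll(url, interval=1.0):
            # Fetch one source roughly every `interval` seconds, forever.
            while True:
                started = time.time()
                try:
                    with urllib.request.urlopen(url, timeout=5) as resp:
                        payload = resp.read()
                    # save_to_db(payload)  # placeholder: batch inserts, don't write row by row
                except OSError:
                    pass  # log and continue; one failed fetch shouldn't stop the loop
                time.sleep(max(0.0, interval - (time.time() - started)))

        threads = [threading.Thread(target=poll, args=(u,), daemon=True) for u in SOURCES]
        for th in threads:
            th.start()
        for th in threads:
            th.join()  # keep the main thread alive

    The real decision is less the language than the transport: one long-lived connection per source (or a streaming/push API, if the providers offer one) beats re-handshaking every second.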
