Search Results

Search found 72757 results on 2911 pages for 'load time'.

  • Why are part-time jobs in programming an anomaly?

    - by Mikle
    I recently quit my full-time development job at a mega-corp and decided to look for a part-time job. Since then I've talked to half a dozen potential employers, and every one of them had the same reaction when I said the magic words "part-time" - they all closed up and became suspicious. Now, I understand that it might just be me, so as a control I asked each of them whether I would get an offer if I were willing to work full time, and they all said I probably would. My question is twofold: Why, as an employer, would you give up a competent, even great, developer simply because he wants to work 3 days a week instead of 5? And how do I sell the part-time story better? I usually just list my reasons - that I currently prefer that balance in my life and that I want to work on my own projects - but that leaves them even more suspicious: am I going to start something myself and quit? Am I just lazy?

    Read the article

  • Ask the Readers: How Do You Track Your Time?

    - by Jason Fitzpatrick
    Whether you’re tracking time for a client or keeping track of how you spend your day to bolster productivity, there’s a variety of tools and tricks you can use to get the big picture on where your time is spent. This week we want to hear all about your time tracking tools, tricks, and tips. How do you manage your time? What apps do you use to categorize and sort it? No matter how loosely or tightly you track your time or whether you use an analog or a digital system, we want to hear the ins and outs of it. Sound off in the comments below and then check back in for the What You Said roundup on Friday.

    Read the article

  • Giving the root user priority to maintain Debian (while the server collapses under heavy load)

    - by Saix
    Is there any way to set up Debian to prioritize any (or specific) root activity above everything else? For instance, several times per year something goes wrong (usually human error, by overstressing apache/mysql) and the system becomes unresponsive under heavy load, with load averages around 200 (8-core CPU). I know there are limits that kill PHP scripts after they run too long, but that's not the way to go here because the limit would have to be at least 45 minutes. The problem is that, until I'm able to log in via SSH and restart apache/mysql under this server stress, it nearly hits those 45 minutes anyway. A hardware restart usually causes fsck to run at boot time on all hard drives, since it's usually been a long time since the box was last restarted. I was told it's really not a good idea to disable fsck, but then again, it takes more than an hour to complete. What is the fastest way to restart apache/mysql? Is there any way to give SSH users or the root user higher priority, so that logging in and completing these restart (or rather stop) commands wouldn't take so long? One thing comes to mind: use nice for apache/mysql - but no way, I can't risk limiting those two vital apps 24/7... or could I? I'm a little bit scared that other system processes would then slow the pages down too much - any backup process, swap (if any), etc. There is a pretty heavy PHP framework with 20k visits a day, so it needs every hw/sw resource available. I can't throttle it the whole time, just at certain points when the system becomes unresponsive, so I can maintain it.

    Read the article

  • Slicing a time range into parts

    - by beporter
    First question. Be gentle. I'm working on software that tracks technicians' time spent working on tasks. The software needs to be enhanced to recognize different billable rate multipliers based on the day of the week and the time of day. (For example, "Time and a half after 5 PM on weekdays.") The tech using the software is only required to log the date, his start time and his stop time (in hours and minutes). The software is expected to break the time entry into parts at the boundaries where the rate multipliers change. A single time entry is not permitted to span multiple days. Here is a partial sample of the rate table. The first-level array keys are the days of the week, obviously. The second-level array keys represent the time of day when the new multiplier kicks in and runs until the next sequential entry in the array. The array values are the multiplier for that time range.

        [rateTable] => Array
        (
            [Monday] => Array
            (
                [00:00:00] => 1.5
                [08:00:00] => 1
                [17:00:00] => 1.5
                [23:59:59] => 1
            )
            [Tuesday] => Array
            (
                [00:00:00] => 1.5
                [08:00:00] => 1
                [17:00:00] => 1.5
                [23:59:59] => 1
            )
            ...
        )

    In plain English, this represents a time-and-a-half rate from midnight to 8 am, regular rate from 8 am to 5 pm, and time-and-a-half again from 5 pm till 11:59 pm. The times at which these breaks occur may be arbitrary to the second, and there can be an arbitrary number of them for each day. (This format is entirely negotiable, but my goal is to make it as easily human-readable as possible.) As an example: a time entry logged on Monday from 15:00:00 (3 PM) to 21:00:00 (9 PM) would consist of 2 hours billed at 1x and 4 hours billed at 1.5x. It is also possible for a single time entry to span multiple breaks. Using the example rateTable above, a time entry from 6 AM to 9 PM would have 3 sub-ranges: 6-8 AM @ 1.5x, 8 AM-5 PM @ 1x, and 5-9 PM @ 1.5x. By contrast, it's also possible that a time entry may only run from 08:15:00 to 08:30:00 and be entirely encompassed in the range of a single multiplier. I could really use some help coding up some PHP (or at least devising an algorithm) that can take a day of the week, a start time and a stop time and parse it into the required subparts. It would be ideal to have the output be an array that consists of multiple (start, stop, multiplier) triplets. For the above example, the output would be:

        [output] => Array
        (
            [0] => Array
            (
                [start] => 15:00:00
                [stop] => 17:00:00
                [multiplier] => 1
            )
            [1] => Array
            (
                [start] => 17:00:00
                [stop] => 21:00:00
                [multiplier] => 1.5
            )
        )

    I just plain can't wrap my head around the logic of splitting a single (start, stop) into (potentially) multiple subparts.
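
    For what it's worth, here is a sketch of one way to do the split (my own illustration, not from the post; the splitEntry name and the result layout are made up to match the example above). The idea is to walk the day's boundary times in order, treat each pair of adjacent boundaries as a window with a single multiplier, and clip the entry's start/stop against each window:

        <?php
        // Hypothetical helper: split one same-day time entry ("HH:MM:SS" strings)
        // into sub-ranges according to a rate table shaped like the one above.
        function splitEntry(array $rateTable, $day, $start, $stop)
        {
            $dayRates   = $rateTable[$day];       // e.g. ['00:00:00' => 1.5, ...]
            $boundaries = array_keys($dayRates);  // assumed zero-padded and sorted
            $parts      = [];

            foreach ($boundaries as $i => $from) {
                // This window runs from the current boundary to the next (or end of day).
                $to = isset($boundaries[$i + 1]) ? $boundaries[$i + 1] : '24:00:00';

                // Clip the entry against the window; zero-padded HH:MM:SS strings
                // compare correctly as plain strings.
                $s = max($start, $from);
                $e = min($stop, $to);

                if ($s < $e) {
                    $parts[] = [
                        'start'      => $s,
                        'stop'       => $e,
                        'multiplier' => $dayRates[$from],
                    ];
                }
            }
            return $parts;
        }

        // The example from the post: Monday 15:00-21:00 -> 2 h at 1x, 4 h at 1.5x.
        $rateTable = [
            'Monday' => ['00:00:00' => 1.5, '08:00:00' => 1, '17:00:00' => 1.5, '23:59:59' => 1],
        ];
        print_r(splitEntry($rateTable, 'Monday', '15:00:00', '21:00:00'));

    This assumes each day's boundary keys really are sorted ascending; calling ksort() on the day's array first would make that explicit.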

    Read the article

  • HTTP response time profiling

    - by Sparsh Gupta
    Hello, I have an nginx reverse proxy. The server is close to serving 600-700 requests per second. I have a Munin HTTP load time plugin which is outputting this: http://monitor.wingify.com/munin/visualwebsiteoptimizer.com/lb1.visualwebsiteoptimizer.com-http_loadtime.html The problem is that I am seeing some spikes in the graph; expected response times should always be under 200ms. I am keeping an eye on syslog and messages but I am unable to figure out the actual cause. I was wondering if there is any good HTTP response time profiling system which I can install or embed alongside this nginx server to get detailed reports/logs on the breakdown of time taken by different things, and what exactly is causing the spikes. The profiling system would also help me understand bottlenecks and how I can further optimize latency. Most important right now is to investigate the cause of the spikes in the HTTP load time graphs (a similar pattern is reported by external monitors - Pingdom) and to fix it to get consistent response times. Thanks.
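
    Not something the post asks for by name, but one low-overhead starting point on an nginx box like this is nginx's own access log: nginx exposes $request_time (total time spent serving the request) and, for proxied requests, $upstream_response_time. A sketch of such a log format (the format name and log path are my own choices):

        # inside the http {} block
        log_format timing '$remote_addr "$request" $status '
                          'req=$request_time upstream=$upstream_response_time';

        server {
            # ... existing reverse-proxy configuration ...
            access_log /var/log/nginx/timing.log timing;
        }

    Comparing the two timing fields for the slow requests usually shows whether a spike is spent waiting on the upstream or on the client side of the proxy.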

    Read the article

  • Public free time server

    - by JL.
    I need to get the current date and time from a reliable source, because it's likely that the local system time could be changed. Is it possible to get this from an internet time server - one that has close to 100% uptime - preferably via a webservice method, something that is free and, I have to stress, absolutely reliable? I would hope for an offering from Microsoft, or from the organisation responsible for keeping global time.
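
    Not from the original question, but for illustration: NTP is about as close as it gets to "the organisation responsible for keeping global time", and it can be queried without any webservice wrapper. Below is a minimal SNTP sketch in PHP; the sntp_time helper and the pool.ntp.org host are my own choices, and no single public server should be treated as 100% reliable, so a real deployment would try several servers and fall back gracefully.

        <?php
        // Hypothetical helper: query an SNTP server over UDP and return the
        // current Unix timestamp, or false on failure.
        function sntp_time($server = 'pool.ntp.org', $timeout = 5)
        {
            $sock = @fsockopen('udp://' . $server, 123, $errno, $errstr, $timeout);
            if (!$sock) {
                return false;
            }
            stream_set_timeout($sock, $timeout);

            // 48-byte SNTP request: LI = 0, version = 3, mode = 3 (client).
            fwrite($sock, chr(0x1B) . str_repeat(chr(0), 47));
            $response = fread($sock, 48);
            fclose($sock);

            if (strlen($response) < 48) {
                return false;
            }

            // Transmit timestamp (seconds since 1900) sits at byte offset 40.
            $data = unpack('N', substr($response, 40, 4));
            return $data[1] - 2208988800; // NTP epoch -> Unix epoch
        }

        $ts = sntp_time();
        echo $ts === false ? "lookup failed\n" : date('r', $ts) . "\n";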

    Read the article

  • raid 1 and high load average

    - by melocoton
    I have a server with a high load average; I think the problem is the RAID 1 array.

        cat /proc/mdstat
        Personalities : [raid1]
        md0 : active raid1 sdb1[1] sda1[0]
              256896 blocks [2/2] [UU]
        md3 : active raid1 sdb3[1] sda3[0]
              2562240 blocks [2/2] [UU]
        md4 : active raid1 sdb5[1] sda5[0]
              958566272 blocks [2/2] [UU]
        md1 : active raid1 sdb2[1] sda2[0]
              15366080 blocks [2/2] [UU]

        model name : Intel(R) Core(TM)2 Duo CPU E8400 @ 3.00GHz

        Linux 2.6.18-164.6.1.el5.centos.plus (local)  04/19/2010

        avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
                  17.37   0.01     6.02    26.17    0.00  50.43

        Device:  tps     Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
        sda      61.09   562.65      893.73      1557214   2473546
        sda1     0.01    0.27        0.02        751       42
        sda2     6.11    195.50      169.78      541075    469888
        sda3     0.01    0.23        0.00        641       0
        sda4     0.00    0.01        0.00        18        0
        sda5     54.96   366.54      723.94      1014449   2003616
        sdb      54.40   433.22      893.73      1199015   2473546
        sdb1     0.01    0.16        0.02        436       42
        sdb2     5.31    169.00      169.78      467729    469888
        sdb3     0.01    0.31        0.00        865       0
        sdb4     0.00    0.00        0.00        10        0
        sdb5     49.05   263.65      723.94      729695    2003616
        md1      29.96   364.39      166.68      1008498   461312
        md4      124.15  630.07      713.28      1743822   1974112
        md3      0.05    0.43        0.00        1192      0
        md0      0.04    0.32        0.00        872       10
        dm-0     7.96    83.29       23.02       230530    63720
        dm-1     3.67    51.81       2.73        143394    7560
        dm-2     7.63    67.76       27.35       187546    75696
        dm-3     8.20    134.60      14.02       372514    38792
        dm-4     5.90    10.66       39.35       29498     108912
        dm-5     17.39   24.52       121.79      67850     337080
        dm-6     27.19   229.60      139.89      635442    387168
        dm-7     0.14    1.07        0.28        2970      776
        dm-8     25.84   4.23        202.89      11698     561536
        dm-9     14.77   8.38        112.35      23202     310960
        dm-10    5.29    12.78       29.55       35376     81784
        dm-11    0.16    1.25        0.05        3450      128

    The server runs LVM on md4.

    Read the article

  • Max ping response time?

    - by DougN
    I'm wondering what a maximum (practical) ping response time might be. As far as I know, there isn't a max defined anywhere (TTL, but that's hops, not time). As I think about it, I'm not sure I've ever seen a ping response time of more than a second or so. But as far as I know, there is nothing to stop a remote host from waiting (or being really busy) and not sending the response back for a few seconds. As a simple data point, I just pinged a number of servers around the world and the worst time I could find was 350ms.

    Read the article

  • Amazon CloudFront and EC2: Global Load Balancing

    - by Matt Rogish
    We have an app that is going to store and serve up a decent amount of data in S3 to a global audience where latency should be minimized. So we've been doing tests with Amazon CloudFront and have seen favorable results. However, we need a thin middleware layer (to do security etc.) and we'd like to put that in EC2. Due to security restrictions, this middleware layer will do the file streaming from S3/CloudFront: S3/CloudFront - EC2 - Clients. We can geographically distribute the EC2 nodes (US East/West, and Ireland), but the problem is that a client in the EU would hit our US server and be fed data from there, thus rendering much of the performance benefit of CloudFront moot. I've been digging through the EC2 docs but I can't find a built-in way to get a geographically distributed version of EC2 a la CloudFront. Elastic Load Balancing sounds like the way to go, but I can't seem to find a way to make it route clients based on where they are... Preferably, we'd like to keep the amount of stuff outside of EC2/S3/etc. to a minimum (for obvious reasons). Any ideas how to do that within the EC2/S3 framework? DNS/routing tricks? Thanks!

    Read the article

  • Windows 7 sets wrong time

    - by P a u l
    My Windows 7 64-bit Ultimate machine has set the clock ahead 2 hours. It appears to have done it in increments of 1 hour, with the second 1-hour shift made sometime today. The first, correct, shift for Daylight Saving Time was Sunday morning. In the clock settings it says Mountain Time UTC-7, but the official time should be Mountain Time UTC-6.

    Read the article

  • How to correct time on Windows PDC server without affecting logons

    - by Kieran Walsh
    I know how to set an authoritative time server in Server 2008 R2; that's not what this question is about. I want to know how I can change the time on a network where the PDC (and therefore everything else) is a month out of date. I know that a 5-minute difference in time between clients and the domain prevents logons, so just changing the time on the PDC will break everything. What is the best way to fix this? Thanks, Kieran.

    Read the article

  • Best program for keeping track of Time (motor racing)

    - by Krazy_Kaos
    I need a program to keep track of time. I need the time to be able to run forwards or backwards (for me to choose), and I want to enter the starting time. I also need a program to keep track of lap times. If anyone knows any programs for this kind of thing (racing stuff), I would appreciate it. Even if there are only paid solutions, I would still like to take a look at them (I am starting to write a program in Python and it could be good for inspiration).

    Read the article

  • Load balancing a Windows File Share using HA-Proxy

    - by NathanE
    After pulling my hair out over DFS, I just had this weird and potentially dangerous idea come into my head whereby, just possibly, I might be able to use HA-Proxy to load balance a file share between servers. I've done some rudimentary packet traces and it does appear that TCP port 445 is the only thing involved in Windows file sharing. I had always thought that UDP 139, 135 etc. were also involved, at least in establishing the connection - but apparently not! So I set up a basic test:

        listen SMBTest *:445
            mode tcp
            server Smb1 172.16.61.201:445
            server Smb2 172.16.61.202:445

    And you'll never guess what... it works??? (!) Now obviously there is the whole concern about synchronisation between the file servers (of course). That could easily be taken care of with a little bit of Robocopy scripting. And considering I only need an HA read-only file share, there wouldn't be any issues with regard to file locking etc. Can anyone tell me if what I'm playing with here is fire? I really didn't think it would work at all, and now I'm a little shocked. What would be the downsides? Could this be relied upon for a production environment?

    Read the article

  • 0% CPU in top for all processes, but load average > 1

    - by chrisdew
    On two different servers (Ubuntu 12.04 LTS AMD64) I have seen the following behaviour:

        top - 10:50:05 up 305 days, 21:17,  1 user,  load average: 1.94, 2.52, 2.97
        Tasks: 141 total,   2 running, 139 sleeping,   0 stopped,   0 zombie
        Cpu(s): 41.5%us,  6.5%sy,  0.0%ni, 51.8%id,  0.0%wa,  0.2%hi,  0.1%si,  0.0%st
        Mem:   8178432k total,  5753740k used,  2424692k free,   159480k buffers
        Swap: 15625208k total,        0k used, 15625208k free,  4905292k cached

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
            1 root      20   0 23928 2072 1216 S    0  0.0   0:56.42 init
            2 root      20   0     0    0    0 S    0  0.0   0:00.01 kthreadd
            3 root      RT   0     0    0    0 S    0  0.0   0:01.23 migration/0
            4 root      20   0     0    0    0 S    0  0.0   2:39.82 ksoftirqd/0
            5 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/0
            6 root      RT   0     0    0    0 S    0  0.0   0:02.99 migration/1
            7 root      20   0     0    0    0 S    0  0.0   2:32.15 ksoftirqd/1
            8 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/1
            9 root      RT   0     0    0    0 S    0  0.0   0:11.67 migration/2
           10 root      20   0     0    0    0 S    0  0.0  29:00.34 ksoftirqd/2

    The server is working fine, but top shows all processes as using 0% CPU. A reboot fixed this on an earlier machine, but I haven't yet tried it on this one. I have run top several times, so I am sure that I haven't accidentally pressed '<' or '>' to sort by a different column. Sorting the process list by each of the available columns still shows 0% CPU for all displayed processes. What is going on? Is this a kernel bug? Update: if I use top -p <PID> for a known, busy process, top still displays 0% CPU for that process.

    Read the article

  • How to find date/time used by Cassandra

    - by JDI Lloyd
    Earlier this morning I noticed that one of the nodes in our Cassandra cluster is writing logs an hour in the future, despite the date/time being correct on the OS. A couple of other nodes I checked via their logs appear to be writing logs at the correct time. I now need to go through and check each node in our 80-node cluster and ensure Cassandra is running on the correct time; the problem is that some of the nodes don't write to the logs very often, as they aren't doing much. The question is: is there some form of tool/utility (i.e. nodetool) that can tell me the time that Cassandra is running on? All the systems' dates/times are correct, and an ntpdate cron job has been in place for a while. Servers are set to the Belize timezone to avoid DST changes, so it's nothing to do with that.

    Read the article

  • Time Capsule on Windows 7

    - by Kiva
    Hi guys, I have a Time Capsule to back up my MacBook Pro; all works fine with it. Now, my girlfriend has a PC running Windows 7. She wants to back up her PC with Cobian Backup to the Time Capsule, but her PC doesn't see the Time Capsule, so it's impossible to connect to it. The Time Capsule is connected to my ADSL box over WiFi, and the Mac and the PC are both connected to the box over WiFi. Why doesn't Windows see the TC? I installed Bonjour on the PC but nothing worked. Thanks for your help.

    Read the article

  • Change the output format of zsh's time

    - by YGA
    Hi folks, I've just switched to zsh. However, I really don't like how the time builtin command also outputs the command that it's timing. I much prefer the bash-style output. Anyone know how to switch it over?

    Zsh:

        [casqa1:~/temp]$ time grep foo /dev/null
        /usr/local/gnu/bin/grep --color -i foo /dev/null  0.00s user 0.00s system 53% cpu 0.004 total

    Bash:

        [casqa1:~/temp]$ bash
        casqa1.nyc:~/temp> time grep foo /dev/null
        real 0.0
        user 0.0
        sys 0.0

    Thanks, /YGA
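
    For what it's worth (not part of the original question): zsh's time report is controlled by the TIMEFMT parameter, so a bash-like layout can be approximated along these lines - a sketch using the %E/%U/%S escapes:

        # ~/.zshrc - make zsh's `time` builtin print a bash-style report (sketch)
        TIMEFMT=$'real\t%E\nuser\t%U\nsys\t%S'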

    Read the article

  • AWS VPC ELB vs. Custom Load Balancing

    - by CP510
    So I'm wondering if this is a good idea. I have an Amazon AWS VPC set up with public and private subnets, so I already have the Internet Gateway and NAT. I was going to set up all my web servers (Apache2 instances) and DB servers in the private subnet and use a load balancer / reverse proxy to pick up requests and send them into the private subnet's cluster of servers. My question, then: are Amazon's ELBs a good fit for this, or is it better to set up my own custom instance to handle the public requests and run them through the NAT using nginx or pound? I like the second option just for the sake of having an instance I can log into and check, as well as taking advantage of caching and fail2ban DDoS prevention, and possibly using failsafes to redirect traffic. But I have no experience with ELBs, so I thought I'd ask your opinions. Also, if you have an opinion on this as well: would using the second option allow me to have only 1 public IP address and be able to route SSH connections through port numbers to the respective instances? Thanks in advance!

    Read the article

  • Insufficient channel capacity of 1GBit

    - by Roman S
    There is a caching server (Varnish): it receives data from Amazon S3 on request, saves it for some time and serves it to the client. We have run into the problem of insufficient channel capacity: 1 Gbit is no longer enough. Peak load, concentrated within about 4 hours, completely chokes the channel. Server performance is sufficient for now. Approximately 4.5 TB of data are transmitted per day; more than 100 TB accumulate per month. The first thought that comes to mind is simply to add one more 1 Gbit port and sleep peacefully until 2 Gbit is not enough (it may happen quite quickly) or one server is no longer able to handle the load. Then we would just need to add new caching servers. But then we need a load balancer that always sends requests for one and the same URL to one and the same server (to avoid multiple copies of the same cached objects). Here are the questions: Does the balancer need bandwidth equal to the sum of the bandwidth of all the caching servers? What do we do when the balancer runs out of ports? Should we add more balancers, or solve the problem by means of round-robin DNS? What are the standard approaches to such problems? Can anyone recommend hosting companies that can solve this problem? We are interested in the American and European markets.
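
    For a rough sense of scale (my own back-of-envelope arithmetic, not figures from the post, assuming 1 TB = 10^12 bytes):

        4.5 TB/day × 8 bits/byte ÷ 86,400 s/day ≈ 0.42 Gbit/s average

    If most of that traffic falls inside a roughly 4-hour peak window, the peak rate can easily be several times the daily average, which is consistent with a 1 Gbit port saturating.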

    Read the article

  • Excel workbook intermittently takes 30 seconds to load

    - by Julio Nobre
    I am trying to figure out why a simple .XLS Excel workbook is taking, randomly, 30 seconds to open. Before answering, please bear in mind the following.

    Problem symptoms:
      - Hanging is intermittent and it takes exactly 30 seconds;
      - During hanging there is no CPU or disk activity;
      - It only happens during workbook load; everything runs smoothly after that;
      - Windows Explorer.exe hangs on the folder, but all other folders, the system and applications are still responsive;
      - There are no consecutive hangs; I have to wait a while to reproduce this behaviour;
      - All workbooks were located on a local drive (C:\BPI);
      - The workbook has no macros and no add-ins;
      - Office 2003 has been in use for several years;
      - The computer is running Windows XP;
      - The computer has several mapped network drives, all addressed to the main file server;
      - Recently, the main file server was replaced by Windows SBS 2011 Standard Edition.

    What I have done so far:
      - I have traced the machine's Explorer.exe using Process Monitor, added the Duration column, and filtered by Duration > 1. That is how I found that the hang was taking exactly 30 seconds. For further information, please refer to Oliver Salzburg's tutorial.
      - Using Process Monitor, I have also figured out that five operations were taking most of the collected sample's duration. Looking at the sample image below, in the Operation column you will notice that one single operation was taking 29 seconds.
      - I have tried different workbooks (all of them smaller than 30 KB).
      - I have, temporarily, removed all shortcuts in the user's Documents folder that were pointing to network drives or shares.
      - I have run CCleaner to fix registry issues.
      - I made sure that there were no external links in the tested workbooks.
      - I have reproduced this behaviour for hours and researched extensively on the web.

    (Screenshot: Process Monitor's collected and filtered data)

    Read the article

  • Get time-sheet report from JIRA

    - by John
    I have enabled time tracking in JIRA, and developers are logging time spent. But I can't find a way to get a report on time spent, per user, over a given period. It would save me asking them to separately send me timesheets to check. Is this possible? If so, where do I look?

    Read the article

  • Requests per second slower when using nginx for load balancing

    - by Ed Eliot
    I've set up nginx as a load balancer that reverse proxies requests to 2 Apache servers. I've benchmarked the setup with ab and am getting approximately 35 requests per second, with requests distributed between the 2 backend servers (not using ip_hash). What is confusing me is that if I query either of the backend servers directly via ab, I get around 50 requests per second. I've experimented with a number of different values in ab, the most common being 1000 requests with 100 concurrent connections. Any idea why traffic distributed across 2 servers would result in fewer requests per second than hitting either directly? Additional info: I've experimented with worker_processes values of between 1 and 8, worker_connections between 1024 and 8092, and have also tried keepalive 0 and 65. My main conf currently looks like this:

        user www-data;
        worker_processes 1;

        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        worker_rlimit_nofile 8192;

        events {
            worker_connections 2048;
            use epoll;
        }

        http {
            include /etc/nginx/mime.types;
            sendfile on;
            keepalive_timeout 0;
            tcp_nodelay on;
            gzip on;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    I've got one virtual host (in sites-available) that redirects everything under / to the 2 backends across a local network.
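
    The virtual host itself isn't shown in the post; purely for illustration (my own sketch with made-up backend addresses, not the poster's actual file), a minimal nginx vhost for this kind of proxying usually looks something like:

        upstream apache_backends {
            server 192.168.0.11:80;
            server 192.168.0.12:80;
        }

        server {
            listen 80;

            location / {
                proxy_pass http://apache_backends;
                proxy_set_header Host $host;
            }
        }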

    Read the article
