Search Results

Search found 59278 results on 2372 pages for 'time estimation'.

Page 150/2372 | < Previous Page | 146 147 148 149 150 151 152 153 154 155 156 157  | Next Page >

  • How much time do you need in between large projects?

    - by Mattio
    You've launched a large project at work, something that's been in progress and taken up large chunks of your life for more than 6 months. The post-launch triage is over. Tech support isn't calling you every hour because they don't know how to troubleshoot an issue. Your hours drop from 60+/wk to whatever is normal in your organization (which is hopefully less than 60+!). How much time do you (or your team) need before the next large project begins? I was asked this question at work and I think the ideal minimum is two weeks -- one week to clear your desk and inbox + one week to clear your head and remember what it's like to have a life outside of work. I'd frankly acknowledge that just being asked this question is a huge boon to work/life balance. But I do think it's possible to go too long in between.

    Read the article

  • How does having assets saved on a secondary domain(s) reduce the load time of the website?

    - by AAA
    I went for an interview yesterday where I was asked this question: "How does having assets (images/videos) stored on a secondary domain (assets.example.com) reduce the load time of example.com?" To that I answered that having the code "call" those assets from a secondary domain reduces the traffic coming to the main domain, so the main domain only serves its own bandwidth instead of also serving the requests for assets. Is that correct? Also, if I am correct, would you say it makes sense to start new websites with this in mind, or do you prefer having it done after large traffic rates are achieved?
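    For context: the usual explanation is not bandwidth but connections. Browsers cap the number of simultaneous connections per hostname, so splitting assets onto a second hostname lets more downloads run in parallel, and a cookie-free asset domain also keeps cookies out of every image/script request. A minimal illustration (assets.example.com is the hypothetical hostname from the question):

        <!-- page HTML served from example.com; static assets from a separate hostname -->
        <link rel="stylesheet" href="http://assets.example.com/css/site.css">
        <script src="http://assets.example.com/js/app.js"></script>
        <img src="http://assets.example.com/img/logo.png" alt="logo">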

    Read the article

  • Diminishing Returns - When is it Time to Take Your Website Live?

    Being a perfectionist is not always the best way to be. Especially in a world where being first to market isn't the worst thing ever. Like many things in life, deciding when to take your new website live is a fine balancing act. The sooner you can start building up a readership or client base, the better; but you risk serious damage to your site's reputation if its functional and technical performance is below par. So when is the right time?

    Read the article

  • How to visualize real-time data on Android? [closed]

    - by matarsak
    I want to build an Android app that visualizes real-time data (2D animation). I have set up a UDP channel that receives the data; now I want to visualize it. I know that I could use OpenGL ES, but after a few weeks I don't think I'm able to learn that. What about Android Processing? Could it be used for an extensive visualization task like this, or is it limited in some way? I've heard it's not hard to learn. Any other options?
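    For reference, plain 2D animation like this does not require OpenGL ES: a SurfaceView redrawn from a render thread is usually enough. A minimal sketch in Java (the UDP receive/parse step is omitted; updatePoint is a hypothetical hook the network thread would call):

        import android.content.Context;
        import android.graphics.Canvas;
        import android.graphics.Color;
        import android.graphics.Paint;
        import android.view.SurfaceView;

        // Sketch only: assumes the surface is already created before run() starts.
        public class PlotView extends SurfaceView implements Runnable {
            private final Paint paint = new Paint();
            private volatile boolean running = true;
            private volatile float x, y;  // latest sample, fed by the UDP thread

            public PlotView(Context context) {
                super(context);
                paint.setColor(Color.GREEN);
            }

            public void updatePoint(float nx, float ny) { x = nx; y = ny; }

            public void stopRendering() { running = false; }

            @Override
            public void run() {  // executed on a dedicated render thread
                while (running) {
                    Canvas c = getHolder().lockCanvas();
                    if (c == null) continue;       // surface not ready yet
                    c.drawColor(Color.BLACK);      // clear the frame
                    c.drawCircle(x, y, 8f, paint); // draw the latest data point
                    getHolder().unlockCanvasAndPost(c);
                }
            }
        }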

    Read the article

  • How to store UTC time values in Mongo with Mongoid?

    - by Jerry Cheung
    The behavior I'm observing with the Mongoid adapter is that it saves 'time' fields with the current system timezone into the database. Note that it's the system time zone and not the Rails environment's Time.zone. If I change the system timezone, then subsequent saves will pick up the new system timezone.

        # system currently at UTC-7
        @record.time_attribute = Time.now.utc
        @record.save
        # in mongo, the value is
        #   "time_attribute" : "Mon May 17 2010 12:00:00 GMT-0700 (QYZST)"
        @record.reload.time_attribute.utc?  # => false
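    For what it's worth, Mongoid releases of that era exposed a use_utc option in mongoid.yml; whether it covers the system-timezone behaviour described above should be verified against your Mongoid version:

        # mongoid.yml (option name from the Mongoid 2.x era; verify for your version)
        development:
          host: localhost
          database: myapp_development
          use_utc: true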

    Read the article

  • Does anyone really understand how HFSC scheduling in Linux/BSD works?

    - by Mecki
    I read the original SIGCOMM '97 PostScript paper about HFSC. It is very technical, but I understand the basic concept: instead of giving a linear service curve (as with pretty much every other scheduling algorithm), you can specify a convex or concave service curve, and thus it is possible to decouple bandwidth and delay. However, even though this paper mentions two kinds of scheduling algorithms being used (real-time and link-share), it only ever mentions ONE curve per scheduling class (the decoupling is done by specifying this curve; only one curve is needed for that).

    HFSC has since been implemented for BSD (OpenBSD, FreeBSD, etc.) using the ALTQ scheduling framework, and for Linux using the TC scheduling framework (part of iproute2). Both implementations added two additional service curves that were NOT in the original paper: a real-time service curve and an upper-limit service curve. Again, please note that the original paper mentions two scheduling algorithms (real-time and link-share), but there both work with one single service curve; there were never two independent service curves for either one, as you currently find in BSD and Linux. Even worse, some versions of ALTQ seem to add a queue priority to HFSC (there is no such thing as priority in the original paper either). I found several BSD HowTos mentioning this priority setting (even though the man page of the latest ALTQ release knows no such parameter for HFSC, so officially it does not even exist).

    All this makes HFSC scheduling even more complex than the algorithm described in the original paper, and there are tons of tutorials on the Internet that often contradict each other, one claiming the opposite of the other. This is probably the main reason why nobody really seems to understand how HFSC scheduling really works. Before I can ask my questions, we need a sample setup of some kind. I'll use a very simple one: [image: a class hierarchy with a 512 kbit/s root, inner classes A and B at 256 kbit/s each, and leaf classes A1, A2, B1, B2 at 128 kbit/s each]

    Here are some questions I cannot answer because the tutorials contradict each other:

    1. What do I need a real-time curve for at all? Assuming A1, A2, B1, B2 are all 128 kbit/s link-share (no real-time curve for either one), then each of those will get 128 kbit/s if the root has 512 kbit/s to distribute (and A and B are both 256 kbit/s, of course), right? Why would I additionally give A1 and B1 a real-time curve of 128 kbit/s? What would this be good for? To give those two a higher priority? According to the original paper, I can give them a higher priority by using a curve; that's what HFSC is all about, after all. By giving both classes a curve of [256 kbit/s 20 ms 128 kbit/s], both automatically have twice the priority of A2 and B2 (while still only getting 128 kbit/s on average).

    2. Does the real-time bandwidth count towards the link-share bandwidth? E.g. if A1 and B1 both have only 64 kbit/s real-time and 64 kbit/s link-share bandwidth, does that mean once they are served 64 kbit/s via real-time, their link-share requirement is satisfied as well (they might get excess bandwidth, but let's ignore that for a second), or do they get another 64 kbit/s via link-share? So does each class have a bandwidth "requirement" of real-time plus link-share? Or does a class only have a higher requirement than the real-time curve if the link-share curve is higher than the real-time curve (i.e. the current link-share requirement equals the specified link-share requirement minus the real-time bandwidth already provided to this class)?

    3. Is the upper-limit curve applied to real-time as well, only to link-share, or maybe to both? Some tutorials say one way, some say the other. Some even claim upper-limit is the maximum for real-time bandwidth plus link-share bandwidth. What is the truth?

    4. Assuming A2 and B2 are both 128 kbit/s, does it make any difference whether A1 and B1 are 128 kbit/s link-share only, or 64 kbit/s real-time and 128 kbit/s link-share, and if so, what difference?

    5. If I use the separate real-time curve to increase the priorities of classes, why would I need "curves" at all? Why is real-time not a flat value, and link-share also a flat value? Why are both curves? The need for curves is clear in the original paper, because there is only one attribute of that kind per class. But now, having three attributes (real-time, link-share, and upper-limit), what do I still need curves on each one for? Why would I want the shapes of the curves (not their average bandwidth, but their slopes) to be different for real-time and link-share traffic?

    6. According to the little documentation available, real-time curve values are totally ignored for inner classes (classes A and B); they are only applied to leaf classes (A1, A2, B1, B2). If that is true, why does the ALTQ HFSC sample configuration (search for "3.3 Sample configuration") set real-time curves on inner classes, and claim that those set the guaranteed rate of those inner classes? Isn't that completely pointless? (Note: pshare sets the link-share curve in ALTQ and grate the real-time curve; you can see this in the paragraph above the sample configuration.)

    7. Some tutorials say the sum of all real-time curves may not be higher than 80% of the line speed, others say it must not be higher than 70% of the line speed. Which one is right, or are maybe both wrong?

    8. One tutorial said to forget all the theory. No matter how things really work (schedulers and bandwidth distribution), imagine the three curves according to the following "simplified mind model": real-time is the guaranteed bandwidth that this class will always get. Link-share is the bandwidth that this class wants to become fully satisfied, but satisfaction cannot be guaranteed. In case there is excess bandwidth, the class might even get offered more bandwidth than necessary to become satisfied, but it may never use more than upper-limit says. For all this to work, the sum of all real-time bandwidths may not be above xx% of the line speed (see the question above; the percentage varies). Question: is this more or less accurate, or a total misunderstanding of HFSC?

    9. And if the model above is really accurate, where is prioritization in it? E.g. every class might have a real-time bandwidth (guaranteed), a link-share bandwidth (not guaranteed) and maybe an upper-limit, but still some classes have higher priority needs than other classes. In that case I must still prioritize somehow, even among the real-time traffic of those classes. Would I prioritize by the slope of the curves? And if so, which curve? The real-time curve? The link-share curve? The upper-limit curve? All of them? Would I give all of them the same slope or each a different one, and how do I find out the right slope?

    I still haven't lost hope that there exists at least a handful of people in this world who really understand HFSC and are able to answer all these questions accurately. And doing so without contradicting each other in the answers would be really nice ;-)
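    For concreteness, the sample hierarchy above (512 kbit/s root, A and B at 256 kbit/s, four 128 kbit/s leaves, real-time curves of [256 kbit/s 20 ms 128 kbit/s] on A1 and B1) would typically be written like this with Linux tc; the interface name and exact numbers are illustrative only:

        tc qdisc add dev eth0 root handle 1: hfsc default 12
        tc class add dev eth0 parent 1:  classid 1:1  hfsc ls m2 512kbit ul m2 512kbit
        tc class add dev eth0 parent 1:1 classid 1:10 hfsc ls m2 256kbit   # A
        tc class add dev eth0 parent 1:1 classid 1:20 hfsc ls m2 256kbit   # B
        # leaves: A1/B1 with a real-time curve, A2/B2 link-share only
        tc class add dev eth0 parent 1:10 classid 1:11 hfsc rt m1 256kbit d 20ms m2 128kbit ls m2 128kbit   # A1
        tc class add dev eth0 parent 1:10 classid 1:12 hfsc ls m2 128kbit                                   # A2
        tc class add dev eth0 parent 1:20 classid 1:21 hfsc rt m1 256kbit d 20ms m2 128kbit ls m2 128kbit   # B1
        tc class add dev eth0 parent 1:20 classid 1:22 hfsc ls m2 128kbit                                   # B2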

    Read the article

  • Is programming in your free time proof that programming is your passion? If not, can you still be a good programmer? [closed]

    - by SonofWatson
    Possible Duplicate: I don't program in my spare time. Does that make me a bad developer? A lot of blogs and advice on the web seem to suggest that in order to become a great developer, doing just your day job is not enough. For example, you should contribute to open source projects in your spare time, write smartphone apps, etc. In fact a lot of this advice seems to suggest that if you don't love programming enough to do it all day long then you're probably in the wrong career. That doesn't ring true with me. I enjoy my work, but when I come home from the office I'm not in the mood to jump straight back onto the computer and start coding away until bedtime. I only have a certain number of hours free time each day, and I'd rather spend them on other hobbies, seeing friends or going outside than in front of the computer. I do get a kick out of programming, and do hack around outside of work occasionally. I'm committed to my personal development and spend time reading tech blogs and books as a way to keep learning and becoming better. But that doesn't extend so far as to my wanting to use all my spare time for coding. Does this mean I'm not a 'true' software developer at heart? Is it possible to become a good software developer without doing extra outside your job? I'd be very interested to hear what you think.

    Read the article

  • What is the most time-effective way to monitor & manage threats from bots and/or humans?

    - by CheeseConQueso
    I'm usually overwhelmed by the amount of tools that hosting companies provide to track & quantify traffic data and statistics. I'm equally overwhelmed by the countless flavors of malicious 'attacks' that target any and every website known to man. The security methods used to protect both the back and front end of a website are well documented and straightforward in terms of implementation, but the army of autonomous bots knows no boundaries and will always find a niche of a website to infest. So what can be done to handle the inevitable swarm of bots that pound your domain with brute force? Whenever I look at the error logs for my domains, there are always thousands of entries that look like bots trying to sneak SQL code into the database by tricking the variables in the URL into giving them schema information or private data. My barbaric and time-consuming plan of defense is just to monitor visitor statistics for those obvious patterns of abuse and ban the IPs or IP ranges accordingly. Aside from that, I don't know what else I could do to prevent all of the ping-pong going on all day. Are there any good tools that automatically monitor this background activity (specifically activity that throws errors on the web & DB server) and proactively deal with these sources of mayhem?
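    The ready-made answers here are usually fail2ban or mod_security. As a sketch of automating the "monitor and ban" plan described above, something like the following could work; the log path, the patterns, and the threshold are all assumptions:

        #!/usr/bin/env python3
        """Minimal sketch: scan a web server access log for SQL-injection-looking
        requests and print ban commands. Paths, patterns, and threshold are
        assumptions; tools such as fail2ban do this job far more robustly."""
        import re
        from collections import Counter

        LOG = "/var/log/apache2/access.log"   # assumed location
        SQLI = re.compile(r"(union\s+select|information_schema|sleep\(|' or '1'='1)", re.I)
        THRESHOLD = 5                         # ban after this many suspicious hits

        hits = Counter()
        with open(LOG, encoding="utf-8", errors="replace") as f:
            for line in f:
                ip = line.split(" ", 1)[0]    # common/combined log format starts with the IP
                if SQLI.search(line):
                    hits[ip] += 1

        for ip, n in hits.most_common():
            if n >= THRESHOLD:
                print(f"iptables -A INPUT -s {ip} -j DROP  # {n} suspicious requests")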

    Read the article

  • How can I make fsck run non-interactively at boot time?

    - by Nelson
    I have a headless Ubuntu 12.04 server in a datacenter 1500 miles away. Twice now on reboot the system decided it had to fsck. Unfortunately Ubuntu ran fsck in interactive mode, so I had to ask someone at my datacenter to go over, plug in a console, and press the Y key. How do I set it up so that fsck runs in non-interactive mode at boot time with the -y or -p (aka -a) flag? If I understand Ubuntu's boot process correctly, init invokes mountall, which in turn invokes fsck. However I don't see any way to configure how fsck is invoked. Is this possible? (To head off one suggestion: I'm aware I can use tune2fs -i 0 -c 0 to prevent periodic fscks. That may help a little, but I need the system to try to come back up even if it had a real reason to fsck, say after a power failure.) In response to followup questions, here are the pertinent details of my /etc/fstab. I don't believe I've edited this at all from what Ubuntu put there.

        UUID=3515461e-d425-4525-a07d-da986d2d7e04  /      ext4  errors=remount-ro  0  1
        UUID=90908358-b147-42e2-8235-38c8119f15a6  /boot  ext4  defaults           0  2
        UUID=01f67147-9117-4229-9b98-e97fa526bfc0  none   swap  sw                 0  0
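    On Ubuntu of this vintage, the commonly cited switch is mountall's FSCKFIX setting, which makes the boot-time check repair automatically instead of prompting; worth verifying on your exact release:

        # /etc/default/rcS
        FSCKFIX=yes   # boot-time fsck fixes errors instead of asking on the console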

    Read the article

  • Need help with cybersquatting complaint: can a domain name forward AND resolve at the same time? [on hold]

    - by Alan
    Probably a silly question for you pros... but for this novice here, I just want to make sure my understanding is correct. Context: I am trying to prove that a domain name owner has been cybersquatting and has never used the domain name in question. There are 4 captures from the WayBackMachine over a three-year period that show the domain name resolving to a basic server index page with either no files or a single cgi-bin folder. The domain name owner claims, however, that the domain name was forwarded the entire time to another website, and that these captures probably coincided with occasional "outages." It is my understanding that:

    a) Domain name forwarding is binary: if a domain name is forwarded to a valid site, it cannot simultaneously resolve to a valid IP address. Is this correct?

    b) Domain name forwarding is not subject to "outages": servers can have outages, and websites can be down, but the forwarding itself cannot be down, as this is simply a pointer. (Or, the entire registrar where the DNS settings are hosted would have to malfunction.) Is this correct?

    FINALLY, bonus question for pro webmasters: what is the likelihood that the WayBackMachine would capture the domain name on just those occasions when the webmaster disabled forwarding to supposedly work on the new site? Mucho thanks in advance!
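    Background that may help with (a): registrar "forwarding" is normally implemented as an HTTP redirect served from an IP address the name still resolves to, so resolving and forwarding are not mutually exclusive. Both are easy to observe with standard tools:

        dig +short example.com                 # the name resolves to the forwarding service's IP
        curl -sI http://example.com | head -3  # a forwarded domain answers 301/302 with a Location: header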

    Read the article

  • How to procedurally create/grow an artistic 2D tree in real time (L-system?)

    - by lalan
    Recently I programmed an L-system module; it got me interested further. I am a Plants vs Zombies junkie as well, and really liked the concept of the Tree of Wisdom. I would love to create similar procedural art just for fun and to learn more.

    Question: How should I approach the process of creating an artistic tree (2D, perhaps with a fixed camera/perspective) dynamically? Ideally I would like to start with a plant (only a stem with a leaf) and grow it dynamically using some influence (input/user action) over its structure. These influences may result in different types of branching, curves in branches, spread, location of fruits, color of flowers, etc. I want it to be really full of life/spirit. :)

    Plants vs Zombies: Tree of Wisdom. It would be great to dynamically grow a similar tree, but with a lot more variation and animation happening.

    My background: student/programmer, have used a few game engines (Ogre3D, cocos2d, Unity). Haven't really programmed directly using OpenGL; trying to fix that. :) I am ready to spend considerable time. Please let me know about relevant APIs, and how would an expert like you take on this problem?

    Why 2D? I think it's easier to solve the problem considering only 2 dimensions.

    Artistic inspirations: only the tree, with fruits and leaves, without the shrubs at the bottom; the large tree (visible branches, green leaves, flowers, fruits, etc.) on the left, behind the monkey; PixelJunk's Eden (art-style inspiration); a procedurally generated apple tree using fractals.

    Please let me know if it was easy for you to understand the question; I can elaborate further. I hope a discussion of various approaches will be helpful for everyone. You guys are awesome.
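    Since the question starts from L-systems, here is the minimal rewriting step as a sketch; the rule set and iteration count are arbitrary examples, and interpreting the resulting string as turtle graphics is the separate, artistic half of the problem:

        # Minimal L-system rewriter: axiom + production rules -> branching string.
        # Convention: "F" = draw forward, "+"/"-" = turn, "[" / "]" = push/pop state (a branch).
        def expand(axiom, rules, iterations):
            s = axiom
            for _ in range(iterations):
                s = "".join(rules.get(ch, ch) for ch in s)
            return s

        # Example rule set (arbitrary): a classic bushy branching pattern.
        tree = expand("F", {"F": "FF+[+F-F-F]-[-F+F+F]"}, 3)
        print(len(tree), tree[:60], "...")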

    Read the article

  • Is the Facebook Like JavaScript related to an increase in "time spent downloading a page" in Google Webmaster Tools?

    - by donaldthe
    Hi, I installed the Facebook Like button (JavaScript version) on my website on December 15th. Take a look at the crawl stats report from Google Webmaster Central (Googlebot activity in the last 90 days). The crawl stats are from Googlebot, which as far as I know doesn't execute JavaScript. Could the Facebook Like JavaScript code, "the XFBML version", be related to the large spike in time spent downloading a page? (By the way, the huge spike in November was caused by a mistake where every image request was getting a 301.) I'm not sure what caused the spike to go down by half somewhere in December. It may have been related to a faulty setting in web.config. I'm at a loss as to what I can do about this, or even how to tell whether this is my problem or Googlebot's crawl problem. Here is the Facebook code I am using to create the Like button. It is right after the opening body tag:

        <div id="fb-root"></div>
        <script>
          window.fbAsyncInit = function() {
            FB.init({appId: 'xxxxx', status: true, cookie: true, xfbml: true});
          };
          (function() {
            var e = document.createElement('script');
            e.async = true;
            e.src = document.location.protocol + '//connect.facebook.net/en_US/all.js';
            document.getElementById('fb-root').appendChild(e);
          }());
        </script>

    and this creates the Like box:

        <fb:like show_faces="false"></fb:like>

    If the JavaScript can't be the problem, any ideas on where to start looking would be appreciated.

    Read the article

  • Audio comes out of both the headphones and the speaker at the same time (Ubuntu 12.04 LTS) [closed]

    - by pst007x
    I have the same issue on an Aspire. Ubuntu 12.04 LTS, 64-bit, Realtek audio chip onboard. If I plug in a headset, audio does not switch from the internal speaker to the headset; instead it plays out of both at the same time. I have looked at the alsamixer settings, all on. I installed gnome-alsamixer and noticed 'headphone' was ticked; if I untick it the main audio mutes, and the headphone no longer works. The headset only works together with the internal speaker. Audio works fine on my other desktop and laptop running this release.

        00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 03)

        salvatore@salvatore-Aspire-7730:~$ cat /proc/asound/version
        Advanced Linux Sound Architecture Driver Version 1.0.24.

        salvatore@salvatore-Aspire-7730:~$ head -n 1 /proc/asound/card*/codec#*
        ==> /proc/asound/card0/codec#0 <==
        Codec: Realtek ALC888
        ==> /proc/asound/card0/codec#1 <==
        Codec: LSI ID 1040
        ==> /proc/asound/card0/codec#2 <==
        Codec: Intel Cantiga HDMI

        salvatore@salvatore-Aspire-7730:~$ aplay -l
        **** List of PLAYBACK Hardware Devices ****
        card 0: Intel [HDA Intel], device 0: ALC888 Analog [ALC888 Analog]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: Intel [HDA Intel], device 1: ALC888 Digital [ALC888 Digital]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: Intel [HDA Intel], device 3: HDMI 0 [HDMI 0]
          Subdevices: 1/1
          Subdevice #0: subdevice #0

        salvatore@salvatore-Aspire-7730:~$ uname -a
        Linux salvatore-Aspire-7730 3.2.0-23-generic #36-Ubuntu SMP Tue Apr 10 20:39:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    The alsa-base.conf does not exist. Tried this:

        sudo apt-get remove --purge alsa-base
        sudo apt-get remove --purge pulseaudio
        sudo apt-get install alsa-base
        sudo apt-get install pulseaudio
        sudo alsa force-reload

    Then:

        sudo apt-get purge pulseaudio gstreamer0.10-pulseaudio
        sudo apt-get install pulseaudio gstreamer0.10-pulseaudio indicator-sound

    Also tried this, in a terminal:

        sudo gedit /etc/modprobe.d/alsa-base.conf

    At the end of the file add a new line:

        options snd-hda-intel model=generic

    Save and then reboot. But alsa-base.conf does not exist.

    Read the article

  • How to sell logistical procedures that require less time to perform but more finesse?

    - by foampile
    I am working with a group where part of the responsibilities is managing a certain set of configuration files which, of course, have the same skeleton/structure across different environments but different values (server, user, this setting, that setting, etc.). Pretty classic scenario... The problem is that everyone just goes and modifies the final, environment-specific files, basically repeating the work for every environment. Personally, I am offended to have to perform repeatable, mundane tasks in this day and age when we have technologies to automate it all. So I devised a very simple procedure of abstracting the files into templates, stubbing env-specific values with parameters, and then wrote a simple Perl script that, given a template and an environment matrix with env-specific values for each param, produces the final file (a sketch of the idea follows below). So this is nothing special, cutting-edge or revolutionary -- I am pretty sure that 20 years ago efficient places did their CM like that. However, this requires that changes are made at the template level and then distributed across different environments using the script, not in the final environment-specific files. This is where I am encountering resentment, as they feel "comfortable" doing it the old, manual, repeated-labor way. Personally, I don't have a problem with them working hard rather than smart, but the problem is that when I have to build on top of someone else's changes, I have to merge their changes into my template from a specific file, which takes time and is grueling. So my question is how to go about selling my method, which makes things so much faster, in an environment that is resistant to change and where most things have to be done at the level of the least competent team member?
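    The described procedure fits in a few lines of any scripting language (the original used Perl). A Python sketch with made-up file names and ${param} placeholders:

        #!/usr/bin/env python3
        """Render an environment-specific config from a shared template.
        File names and the ${param} syntax are illustrative, not the original script."""
        import json, sys
        from string import Template

        def render(template_path, matrix_path, env):
            with open(matrix_path) as f:
                values = json.load(f)[env]   # e.g. {"prod": {"server": ...}, "qa": {...}}
            with open(template_path) as f:
                return Template(f.read()).substitute(values)

        if __name__ == "__main__":
            # usage: render.py app.conf.tmpl envs.json prod > app.conf
            print(render(sys.argv[1], sys.argv[2], sys.argv[3]), end="")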

    Read the article

  • How to keep the trunk stable when tests take a long time?

    - by Oak
    We have three sets of test suites:

    - A "small" suite, taking only a couple of hours to run
    - A "medium" suite that takes multiple hours, usually run every night (nightly)
    - A "large" suite that takes a week+ to run

    We also have a bunch of shorter test suites, but I'm not focusing on them here. The current methodology is to run the small suite before each commit to the trunk. Then, the medium suite runs every night, and if in the morning it turns out it failed, we try to isolate which of yesterday's commits was to blame, roll back that commit and retry the tests. A similar process, only at a weekly instead of nightly frequency, is done for the large suite. Unfortunately, the medium suite fails pretty frequently. That means the trunk is often unstable, which is extremely annoying when you want to make modifications and test them. It's annoying because when I check out from the trunk, I cannot know for certain it's stable, and if a test fails I cannot know for certain whether it's my fault or not. My question is: is there some known methodology for handling these kinds of situations in a way which will leave the trunk always in top shape? E.g. "commit into a special precommit branch which will then periodically update the trunk every time the nightly passes". And does it matter if it's a centralized source control system like SVN or a distributed one like git? By the way, I am a junior developer with limited ability to change things; I'm just trying to understand if there's a way to handle this pain I am experiencing.

    Read the article

  • Does it make sense to write tests for legacy code when there is no time for a complete refactoring?

    - by is4
    I usually try to follow the advice of the book Working Effectively with Legacy Code. I break dependencies, move parts of the code to @VisibleForTesting public static methods and to new classes to make the code (or at least some part of it) testable. And I write tests to make sure that I don't break anything when I'm modifying or adding new functions. A colleague says that I shouldn't do this. His reasoning:

    - The original code might not work properly in the first place. And writing tests for it makes future fixes and modifications harder since devs have to understand and modify the tests too.
    - If it's GUI code with some logic (~12 lines, 2-3 if/else blocks, for example), a test isn't worth the trouble since the code is too trivial to begin with.
    - Similar bad patterns could exist in other parts of the codebase, too (which I haven't seen yet; I'm rather new); it will be easier to clean them all up in one big refactoring. Extracting out logic could undermine this future possibility.

    Should I avoid extracting out testable parts and writing tests if we don't have time for a complete refactoring? Is there any disadvantage to this that I should consider?

    Read the article

  • Long 'Wait' time for three PHP/CSS files. Is something blocking them?

    - by William Pitcher
    I have been speed-optimizing a Wordpress site to little effect. There are three CSS-related PHP files from the Wordpress theme that are delaying page loads on the site. One of the three files is basically one line of custom CSS from the custom CSS feature in the theme. You can see what I am talking about with this Pingdom speed test (the yellow is 'Wait'; there are no slow items in the cut-off portion of the image; the full results are on the Pingdom results page).

    1. Any thoughts on what might be causing this? I understand that I have blocking CSS or JS files, but I don't see anything that would be causing that long of a wait. When I ran the P3 Plugin Profiler, Wordpress and all plugins appeared fine -- it is the theme that is taking all the time. GTmetrix recommends avoiding dynamic queries. I assume all the ver=3.61 references are to the version of Wordpress (which I am using). I noticed that my Wordpress sites using other themes don't make this query (at least not over and over).

    2. Is this typical coding practice?

    3. How much negative impact do these query strings have -- a little or a lot?

    I tried searching for similar questions here; please excuse me if I missed something. Sometimes, I know just enough to be dangerous.

    Read the article

  • Do loops kind of reset every time you go through them? [closed]

    - by JacKeown
        #include <iostream>
        using namespace std;

        int main(void)
        {
            cout << " 1\t2\t3\t4\t5\t6\t7\t8\t9" << endl << "" << endl;
            for (int c = 1; c < 10; c++) {
                cout << c << "| ";
                for (int i = 1; i < 10; i++) {
                    cout << i * c << '\t';
                }
                cout << endl;
            }
            return 0;
        }

    Hey, so this code produces a times table... I found it on Google Code's C++ class online... I'm confused about why "i" in the second for loop resets to 1 every time you go through that loop... or is it being declared again in the first parameter? Thanks in advance!

    Read the article

  • Is it normal for GRUB to take some time (15+s) after I choose what to boot?

    - by zarnaik
    I had been planning to change the background of my bootloader for a while, and finally got to it. Now the black screen I get for quite some time was explained: it is still GRUB, because the background image stays while all of the text is gone. After that it simply shows the Lubuntu loading screen for, usually, not more than 3 seconds. I run Lubuntu 12.10. My question is: is this normal behaviour, or is something going wrong, causing GRUB to take longer? Here are the contents of my grub file located at /etc/default/:

        # If you change this file, run 'update-grub' afterwards to update
        # /boot/grub/grub.cfg.
        # For full documentation of the options in this file, see:
        #   info -f grub -n 'Simple configuration'

        GRUB_DEFAULT=0
        #GRUB_HIDDEN_TIMEOUT=0
        GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=5
        GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
        GRUB_CMDLINE_LINUX=""
        GRUB_BACKGROUND="/usr/share/lubuntu/wallpapers/1210-Windmill_by_Ferran_Reyes.png"

        # Uncomment to enable BadRAM filtering, modify to suit your needs
        # This works with Linux (no patch required) and with any kernel that obtains
        # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
        #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"

        # Uncomment to disable graphical terminal (grub-pc only)
        #GRUB_TERMINAL=console

        # The resolution used on graphical terminal
        # note that you can use only modes which your graphic card supports via VBE
        # you can see them in real GRUB with the command `vbeinfo'
        #GRUB_GFXMODE=640x480

        # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
        #GRUB_DISABLE_LINUX_UUID=true

        # Uncomment to disable generation of recovery mode menu entries
        #GRUB_DISABLE_RECOVERY="true"

        # Uncomment to get a beep at grub start
        #GRUB_INIT_TUNE="480 440 1"

    If you need any other information please tell me and I'll do my best to provide it. :)

    Read the article

  • How to read time-phased data from Project Server 2007 directly from the Project Server database?

    - by Nikhil Vaghela
    I am working on a custom web part for Project Web Access, for Project Server 2007. So far we are using only PSI web services to read and write data from and to the Project Server 2007 databases. But there is a significant performance issue when you retrieve time-phased data through the Statusing web service; it is basically an expensive call to query time-phased data for each task. I want to access the time-phased data entered by users for each task by hitting the Project Server database directly. [I do not want the solution suggested at this link: http://blogs.msdn.com/project_programmability/archive/2007/05/24/getting-at-the-task-time-phased-data.aspx as it reads data from the reporting database, which is only populated after the project is published.] I want to get the time-phased data as soon as a user enters it. Any ideas? Thanks.

    Read the article

  • Time zone for Force.com API fiscal date literals?

    - by Nick
    I have a question about fiscal date literals in the Force.com API (http://www.salesforce.com/us/developer/docs/api/Content/sforce_api_calls_soql_select_dateformats.htm): for which time zone are date ranges calculated? For example, suppose we execute the query:

        SELECT Id FROM Opportunity WHERE CloseDate = THIS_FISCAL_QUARTER

    where, according to our company's fiscal settings, THIS_FISCAL_QUARTER runs from Jan 1 to Mar 31. Does the range for THIS_FISCAL_QUARTER use...

    - the user's time zone? For example, if the user's time zone is GMT-8, THIS_FISCAL_QUARTER = Jan 1 00:00 GMT-8 to Mar 31 23:59 GMT-8 (or Jan 1 08:00 UTC to Mar 31 07:59 UTC)
    - the company's default time zone (according to the company profile)? For example, if the company's default time zone is GMT-8, THIS_FISCAL_QUARTER = Jan 1 00:00 GMT-8 to Mar 31 23:59 GMT-8 (or Jan 1 08:00 UTC to Mar 31 07:59 UTC)
    - UTC? THIS_FISCAL_QUARTER = Jan 1 00:00 UTC to Mar 31 23:59 UTC
    - something else?

    Read the article

  • How can a Delphi TForm / TPersistent object calculate its own construction and deserialization time?

    - by mjustin
    For performance tests I need a way to measure the time needed for a form to load its definition from the DFM. All existing forms inherit from a custom form class. To capture the current time, this base class needs overridden methods as "extension points":

    - the start of the deserialization process
    - after the deserialization (can be implemented by overriding the Loaded procedure)
    - the moment just before the execution of the OnFormCreate event

    So the log for TMyForm.Create(nil) could look like:

        00.000 instance created
        00.010 before deserialization
        01.823 after deserialization
        02.340 before OnFormCreate

    Which TObject (or TComponent) methods are best suited? Maybe there are other extension points in the form creation process; please feel free to make suggestions. Background: for some of our app forms which have a very basic structure (with some PageControls and QuantumGrids), I realized that it is not database access or other stuff in OnFormShow but the construction itself that took most of the time (around 2 seconds), which makes me wonder where this time is spent. As a reference object I will also build a mock form which has a similar structure but no code or datamodule connections and measure its creation time.
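    In VCL terms, a plausible set of extension points is sketched below: ReadState wraps the DFM streaming, Loaded runs right after it, and DoCreate fires just before the OnCreate/OnFormCreate handler. This is a sketch, not a verified base class; it assumes the Windows, SysUtils, Classes, and Forms units, and LogTime stands in for your own timing code:

        type
          TTimedForm = class(TForm)
          protected
            procedure ReadState(Reader: TReader); override; // wraps the DFM streaming
            procedure Loaded; override;                     // runs after deserialization
            procedure DoCreate; override;                   // runs just before OnFormCreate
          end;

        procedure LogTime(const Msg: string);
        begin
          // placeholder: GetTickCount resolution (~16 ms) is coarse; use a
          // high-resolution timer such as QueryPerformanceCounter for serious tests
          OutputDebugString(PChar(Format('%d ms: %s', [GetTickCount, Msg])));
        end;

        procedure TTimedForm.ReadState(Reader: TReader);
        begin
          LogTime('before deserialization');
          inherited;
          LogTime('after ReadState');
        end;

        procedure TTimedForm.Loaded;
        begin
          inherited;
          LogTime('after deserialization (Loaded)');
        end;

        procedure TTimedForm.DoCreate;
        begin
          LogTime('before OnFormCreate');
          inherited; // the inherited DoCreate fires the OnCreate event
        end;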

    Read the article
