Search Results

Search found 2823 results on 113 pages for 'crawl rate'.


  • Update Google Sitemap for Mobile

    - by dimo414
    I have a series of utilities to generate Google sitemaps for my whole site. These files are massive, and slow to build. We want to start telling Google that these pages are mobile-crawlable too, by adding them to mobile sitemaps, but the documentation is unclear about whether I need to create physically different files for my mobile URLs than for my normal ones. If this is my current sitemap:

        <?xml version="1.0" encoding="UTF-8" ?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url>
            <loc>http://mobile.example.com/article100.html</loc>
          </url>
        </urlset>

    Can I simply change it to:

        <?xml version="1.0" encoding="UTF-8" ?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
                xmlns:mobile="http://www.google.com/schemas/sitemap-mobile/1.0">
          <url>
            <loc>http://mobile.example.com/article100.html</loc>
            <mobile:mobile/>
          </url>
        </urlset>

    Or do I need to create new files with the additional markup, alongside my existing files?

    Read the article

  • The danger of changing the domain of your portfolio

    - by Mervin
    So I have an online portfolio available at mervin-ux-portfolio.com, but I am planning to change hosts since my current host is hitting me with a very high yearly renewal rate. When I inquired about domain transfers, they told me that since I had not initiated the transfer within 14 days of the domain's expiry, they cannot do it immediately and it would take about two weeks to release the domain name. Since I don't like the idea of my site being down for two weeks, I was wondering if I should start afresh with a new domain on a new host, and what the potential dangers of that would be (I have a full backup of the site, so creating a replica on the new host won't be hard). I also won't be losing any business or work, since I currently work full time, but I was wondering about the challenges of getting the new domain back to the top of search results and basically getting it out there, assuming I go the new-domain route. I know this is strictly not a UX question, but I was hoping people could give some suggestions on what I should do.

    Read the article

  • Texture artifacts on iPad

    - by MrDatabase
    I'm porting an iPhone game to the iPad. When I move textures "quickly" (5.0 pixels every update at a rate of 60 Hz) I start to see little "artifacts" or remnants of where the texture used to be. I'm not sure if I know the correct terminology for this... imagine a texture at some location on the screen... then next to it is the same texture but faded a bit... then the same texture again just faded a bit more. I'm using CADisplayLink to drive my update loop if that helps. Also I didn't see this issue on the 3G or the iPhone 4. Any ideas? Cheers!

    Read the article

  • Cropping images & SEO

    - by user1181950
    So I have a page with a bunch of images of widely varying sizes. The layout of the page is such that the images are all square tiles, so just resizing will produce distorted images. Previously, when users uploaded images, I resized and cropped them appropriately, displayed the new image as the thumbnail, and loaded the full image when the user clicked on it. However, I just realized this is an SEO issue: Google will crawl the thumbnails and stick the thumbnails in Google Images instead of the full images. Is there any way to show a cropped/resized image but have Google Images show the full image? I could do something with CSS using an enclosing div and overflow:hidden, but I'd imagine the performance of that would be pretty bad. Any suggestions? Thanks! PS. I saw this (Make google index the actual image not the thumbnail), but in my case users are continuously uploading images, and the database of images is always changing and pretty big (thousands), so a sitemap would be pretty unwieldy.
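    As an aside, here is a minimal sketch of the upload-time thumbnail step described above, using Pillow (my own illustration with hypothetical paths; it has no bearing on the SEO question itself):

        # Center-crop each upload to a square tile and resize it, as the poster describes.
        from PIL import Image, ImageOps

        def make_square_thumb(src_path, dest_path, size=300):
            with Image.open(src_path) as img:
                # crop to a centered square, then resize to the tile dimensions
                thumb = ImageOps.fit(img.convert("RGB"), (size, size))
                thumb.save(dest_path, quality=85)

        make_square_thumb("uploads/photo123.jpg", "thumbs/photo123.jpg")  # hypothetical paths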

    Read the article

  • Frame timing for GLFW versus GLUT

    - by linello
    I need a library which ensures that the timing between frames is as constant as possible during a visual psychophysics experiment. This is usually done by synchronizing the main loop with the refresh rate of the screen. For example, if my monitor runs at 60 Hz, I would like to specify that frequency to my framework. If my game loop is the following:

        void gameloop() {
            // do some computation
            printDeltaT();
            // flip buffers
        }

    I would like it to print a constant time interval. Is that possible with GLFW?
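    For illustration, here is a minimal vsync-locked loop using the Python GLFW bindings (a sketch of the general approach, assuming the pyGLFW package; the equivalent calls exist in the C API as glfwSwapInterval, glfwSwapBuffers, and so on):

        import glfw

        def main():
            if not glfw.init():
                raise RuntimeError("GLFW failed to initialize")
            window = glfw.create_window(640, 480, "timing test", None, None)
            if not window:
                glfw.terminate()
                raise RuntimeError("window creation failed")
            glfw.make_context_current(window)
            glfw.swap_interval(1)  # block buffer swaps on the monitor refresh (vsync)

            last = glfw.get_time()
            while not glfw.window_should_close(window):
                # ... do some computation / draw ...
                now = glfw.get_time()
                print(f"delta: {(now - last) * 1000:.2f} ms")  # should hover near 16.7 ms at 60 Hz
                last = now
                glfw.swap_buffers(window)  # with swap_interval(1) this returns roughly once per refresh
                glfw.poll_events()
            glfw.terminate()

        if __name__ == "__main__":
            main()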

    Read the article

  • A Patent for Workload Management Based on Service Level Objectives

    - by jsavit
    I'm very pleased to announce that after a tiny :-) wait of about 5 years, my patent application for a workload manager was finally approved.

    Background

    Many operating systems have a resource manager which lets you control machine resources. For example, Solaris provides controls for CPU with several options: shares for proportional CPU allocation (if you have twice as many shares as I do and we are competing for CPU, you'll get about twice as many CPU cycles); dedicated CPU allocation, in which a number of CPUs are exclusively dedicated to an application's use (you can say that a zone or project "owns" 8 CPUs on a 32-CPU machine, for example); and capped CPU, in which you specify the upper bound, or cap, of how much CPU an application gets (for example, you can throttle an application to 0.125 of a CPU). (This isn't meant to be an exhaustive list of Solaris RM controls.)

    Workload management

    Useful as that is (and tragic that some other operating systems have little resource management and isolation, and frighten people into running only one app per OS instance, wastefully sizing every server for the peak workload it might experience), that's not really workload management. With resource management we control the resources and hope that's enough to meet application service objectives. In practice, we hold resource distribution constant, see if that was good enough, and adjust resource distribution if it didn't meet service level objectives. Here's an example of what happens today: Let's try 30% dedicated CPU. Not enough? Let's try 80%. Oh, that's too much: we're achieving much better response time than the objective, but other workloads are starving. Let's back that off and try again. It's not the process I object to; it's that we too often do this manually. Worse, we sometimes identify and adjust the wrong resource and fiddle with it to no useful result. Back in my days as a customer managing large systems, one of my users would call me up to beg for a "CPU boost". Me: "It won't make any difference; there's plenty of spare CPU to be had, and your application is completely I/O bound." User: "Please do it anyway." Me: "Oh, all right, but it won't do you any good." (I did, because he was a friend, but it didn't help.)

    Prior art

    There are some operating environments that take a stab at workload management (rather than resource management), but I find them lacking. I know of one that uses synthetic "service units" composed of the sum of CPU, I/O and memory allocations multiplied by weighting factors. A workload is set to a target rate of service units consumed per second. But this seems to be missing a key point: what is the relationship between artificial "service units" and actually meeting a throughput or response time objective? What if I get plenty of one of the components (so I am getting enough service units), but not enough of the resource that's needed to remove the bottleneck?

    Actual workload management

    That's not really the answer either. What is needed is to specify a workload's service levels in terms of externally visible metrics that are meaningful to a business, such as response times or transactions per second, and have the workload manager figure out which resources are not being adequately provided, and then adjust them as needed. If an application is not meeting its service level objectives and the reason is that it's not getting enough CPU cycles, adjust its CPU resource accordingly. If the reason is that the application isn't getting enough RAM to keep its working set in memory, then adjust its RAM assignment appropriately so it stops swapping. Simple idea, but that's a task we keep dumping on system administrators. In other words: don't hold the number of CPU shares constant and watch the achievement of the service level vary. Instead, hold the service level constant, and dynamically adjust the number of CPU shares (or the amount of other resources like RAM or I/O bandwidth) in order to meet the objective.

    Instrumenting non-instrumented applications

    There's one little problem here: how do I measure application performance in a way that relates to a service level? I don't want to do it based on internal resources like the number of CPU seconds received per minute; we need to make resource decisions based on externally visible and meaningful measures of performance, not synthetic items or internal resource counters. If I have a way of marking the beginning and end of a transaction, I can measure whether or not the application is meeting an objective based on it. If I can observe the delay factors for an application, I can see which resource shortages are slowing it enough to keep it from meeting its objectives, and I can then adjust resource allocations to relieve those shortages. Fortunately, Solaris provides facilities for both marking application progress and determining what factors cause application latency. The Solaris DTrace facility lets me introspect on application behavior: in particular, I can see events like "receive a web hit" and "respond to that web hit", so I can get the transaction rate and response time. DTrace (and tools like prstat) lets me see where latency is being added to an application, so I know which resource to adjust.

    Summary

    After a delay of a mere few years, I am the proud creator of a patent (advice to anyone interested in going through the process: don't hold your breath!). The fundamental idea is fairly simple: instead of holding resources constant and suffering variable levels of success meeting service level objectives, properly characterise the service level objective in meaningful terms, instrument the application to see if it's meeting the objective, and then have a workload manager change resource allocations to remove delays preventing service level attainment. I've done it by hand for a long time; I think that's what a computer should do for me.
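    To make the core idea concrete, here is a minimal control-loop sketch (my own illustration, not the patented implementation; measure_response_time, get_cpu_shares and set_cpu_shares are hypothetical hooks into the instrumentation and the resource manager):

        import time

        TARGET_MS = 200.0     # service level objective, e.g. target response time in ms
        ADJUST_STEP = 0.10    # change shares by 10% per control interval

        def control_loop(workload, measure_response_time, get_cpu_shares, set_cpu_shares):
            """Hold the objective constant and vary the resource, not the other way around."""
            while True:
                observed = measure_response_time(workload)   # externally meaningful metric
                shares = get_cpu_shares(workload)
                if observed > TARGET_MS:
                    # missing the objective: grant more CPU shares
                    set_cpu_shares(workload, shares * (1 + ADJUST_STEP))
                elif observed < 0.8 * TARGET_MS:
                    # comfortably beating it: hand some shares back to other workloads
                    set_cpu_shares(workload, shares * (1 - ADJUST_STEP))
                time.sleep(10)  # control interval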

    Read the article

  • CPU Usage in Very Large Coherence Clusters

    - by jpurdy
    When sizing Coherence installations, one of the complicating factors is that these installations (by their very nature) tend to be application-specific: some are large, memory-intensive caches, others act as I/O-intensive transaction-processing platforms, and still others perform CPU-intensive calculations across the data grid. Regardless of the primary resource requirements, Coherence sizing calculations are inherently empirical, in that there are so many permutations that a simple spreadsheet approach to sizing is rarely optimal (though it can provide a good starting estimate). So we typically recommend measuring actual resource usage (primarily CPU cycles, network bandwidth and memory) at a given load, and then extrapolating from those measurements. Of course there may be multiple types of load, and these may have varying degrees of correlation; for example, an increased request rate may drive up the number of objects "pinned" in memory at any point, but the increase may be less than linear if those objects are naturally shared by concurrent requests. But for most reasonably designed applications, a linear resource model will be reasonably accurate for most levels of scale.

    However, at extreme scale, sizing becomes a bit more complicated, as certain cluster management operations, while very infrequent, become increasingly critical. This is because certain operations do not naturally tend to scale out. In a small cluster, sizing is primarily driven by the request rate, required cache size, or other application-driven metrics. In larger clusters (e.g. those with hundreds of cluster members), certain infrastructure tasks become intensive, in particular those related to members joining and leaving the cluster, such as introducing new cluster members to the rest of the cluster, or publishing the location of partitions during rebalancing. These tasks have a strong tendency to require all updates to be routed via a single member for the sake of cluster stability and data integrity. Fortunately that member is dynamically assigned in Coherence, so it is not a single point of failure, but it may still become a single point of bottleneck (until the cluster finishes its reconfiguration, at which point this member will have a similar load to the rest of the members).

    The most common cause of scaling issues in large clusters is disabling multicast (by configuring well-known addresses, aka WKA). This obviously impacts network usage, but it also has a large impact on CPU usage, primarily because the senior member must directly communicate certain messages with every other cluster member, and this communication requires significant CPU time. In particular, the need to notify the rest of the cluster about membership changes and corresponding partition reassignments adds stress to the senior member. Given that portions of the network stack may tend to be single-threaded (both in Coherence and the underlying OS), this may be even more problematic on servers with poor single-threaded performance.

    As a result, some extremely large clusters may be configured with a smaller number of partitions than ideal, which increases the size of each partition. When a cache server fails, the other servers use their fractional backups to recover the state of that server (and take over responsibility for their backed-up portion of that state). The finest granularity of this recovery is a single partition, and the single service thread cannot accept new requests during this recovery. Ordinarily, recovery is practically instantaneous (roughly equivalent to the time required to iterate over a set of backup backing map entries and move them to the primary backing map in the same JVM). But certain factors can increase this duration drastically (to several seconds): large partitions, sufficiently slow single-threaded CPU performance, many or expensive indexes to rebuild, and so on. The solution of course is to mitigate each of those factors, but in many cases this may be challenging.

    Larger clusters also lead to the temptation to place more load on the available hardware resources, spreading CPU resources thin. As an example, while we've long been aware of how garbage collection can cause significant pauses, it usually isn't viewed as a major consumer of CPU (in terms of overall system throughput). Typically, the use of a concurrent collector allows greater responsiveness by minimizing pause times, at the cost of reducing system throughput. However, at a recent engagement, we were forced to turn off the concurrent collector and use a traditional parallel "stop the world" collector to reduce CPU usage to an acceptable level. In summary, there are some less obvious factors that may result in excessive CPU consumption in a larger cluster, so it is even more critical to test at full scale, even though allocating sufficient hardware may often be much more difficult for these large clusters.
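    As a toy illustration of the extrapolation step mentioned above (my own sketch with made-up numbers): fit a linear resource model to two measured load points and project it, keeping in mind the article's caveat that non-linear effects appear at extreme scale.

        def fit_linear(load1, usage1, load2, usage2):
            """Fit usage = intercept + slope * load from two measurements."""
            slope = (usage2 - usage1) / (load2 - load1)
            intercept = usage1 - slope * load1
            return lambda load: intercept + slope * load

        # measured: 22% CPU at 1,000 req/s and 40% CPU at 2,000 req/s (hypothetical numbers)
        cpu_at = fit_linear(1_000, 22.0, 2_000, 40.0)
        print(cpu_at(5_000))  # projects 94.0% CPU at 5,000 req/s, valid only while the model stays linear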

    Read the article

  • Hard Drive Fundamentals And Verifying Disk Performance

    - by Agnel Kurian
    Over the past few months, my Windows XP machine has slowed down to a crawl. It takes about 10-15 minutes to go from power-up to reaching a responsive state. I have reasons to believe that this is a result of the hard disk slowing down. Questions: Do hard disks slow down as a result of mechanical wear and tear ...or age? How do I check if my disk has slowed down? Conversely, how can I verify that my disk is indeed running at the speed it's designed to run at? Could drivers be at fault here? Do hard disks come with drivers or does Windows use a generic driver?
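    One crude way to answer the "has it slowed down?" question (my own sketch, not a proper benchmark; the file path is hypothetical, and the OS cache and background activity will skew results): time a large sequential read and compare the MB/s against the drive's rated throughput or against a known-good machine. Tools such as HD Tune or the drive vendor's diagnostics do this more rigorously.

        import time

        def read_speed_mb_s(path, chunk_size=1024 * 1024):
            """Sequentially read a file and return the average throughput in MB/s."""
            total = 0
            start = time.time()
            with open(path, "rb") as f:
                while True:
                    chunk = f.read(chunk_size)
                    if not chunk:
                        break
                    total += len(chunk)
            elapsed = max(time.time() - start, 1e-6)
            return (total / (1024 * 1024)) / elapsed

        print(read_speed_mb_s("C:/path/to/some_large_file.iso"))  # hypothetical large file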

    Read the article

  • Performance triage

    - by Dave
    Folks often ask me how to approach a suspected performance issue. My personal strategy is informed by the fact that I work on concurrency issues. (When you have a hammer everything looks like a nail, but I'll try to keep this general.) A good starting point is to ask yourself if the observed performance matches your expectations. Expectations might be derived from known system performance limits, prototypes, and other software or environments that are comparable to your particular system-under-test. Some simple comparisons and microbenchmarks can be useful at this stage. It's also useful to write some very simple programs to validate some of the reported or expected system limits. Can that disk controller really tolerate and sustain 500 reads per second? To reduce the number of confounding factors it's better to try to answer that question with a very simple targeted program. And finally, nothing beats familiarity with the technologies underlying your particular layer.

    On the topic of confounding factors: as our technology stacks become deeper and less transparent, we often find our own technology working against us in some unexpected way to choke performance, rather than simply running into some fundamental system limit. A good example is the warm-up time needed by just-in-time compilers in Java Virtual Machines. I won't delve too far into that particular hole except to say that it's rare to find good benchmarks and methodology for Java code. Another example is power management on x86. Power management is great, but it can take a while for the CPUs to ramp up from low(er) frequencies to full speed. And while I love "turbo" mode, it makes benchmarking applications with multiple threads a chore, as you have to remember to turn it off and then back on, otherwise short single-threaded runs may look abnormally fast compared to runs with higher thread counts. In general, for performance characterization I disable turbo mode and fix the power governor at the "performance" state. Another source of complexity is the scheduler, which I've discussed in prior blog entries.

    Let's say I have a running application and I want to better understand its behavior and performance. We'll presume it's warmed up, is under load, and is in an execution mode representative of what we think the norm would be. It should be in steady state, if a steady-state mode even exists. On Solaris the very first thing I'll do is take a set of "pstack" samples. Pstack briefly stops the process and walks each of the stacks, reporting symbolic information (if available) for each frame. For Java, pstack has been augmented to understand Java frames, and even report inlining. A few pstack samples can provide powerful insight into what's actually going on inside the program. You'll be able to see calling patterns, which threads are blocked on which system calls or synchronization constructs, memory allocation, etc. If your code is CPU-bound then you'll get a good sense of where the cycles are being spent. (I should caution that normal C/C++ inlining can diffuse an otherwise "hot" method into other methods. This is a rare instance where pstack sampling might not immediately point to the key problem.) At this point you'll need to reconcile what you're seeing with pstack against your mental model of what you think the program should be doing. They're often rather different. And generally, if there's a key performance issue, you'll spot it with a moderate number of samples.
    I'll also use OS-level observability tools to look for bottlenecks where threads contend for locks, other situations where threads are blocked, and the distribution of threads over the system. On Solaris some good tools are mpstat and, to a lesser degree, vmstat. Try running "mpstat -a 5" in one window while the application runs concurrently. One key measure is the voluntary context switch rate, "vctx" or "csw", which reflects threads descheduling themselves. It's also good to look at the user, system, and idle CPU percentages. This can give a broad but useful understanding of whether your threads are mostly parked or mostly running. For instance, if your program makes heavy use of malloc/free, it might be the case that you're contending on the central malloc lock in the default allocator. In that case you'd see malloc calling lock in the stack traces, observe a high csw/vctx rate as threads block on the malloc lock, and your "usr" time would be less than expected. Solaris dtrace is a wonderful and invaluable performance tool as well, but in a sense you have to frame and articulate a meaningful and specific question to get a useful answer, so I tend not to use it for first-order screening of problems. It's also most effective for OS- and software-level performance issues, as opposed to HW-level issues. For that reason I recommend mpstat & pstack as the first step in performance triage. If some other OS-level issue is evident then it's good to switch to dtrace to drill more deeply into the problem. Only after I've ruled out OS-level issues do I switch to using hardware performance counters to look for architectural impediments.
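    For readers not on Solaris, a rough analog of the mpstat-style screening above can be cobbled together with psutil (my own sketch, explicitly not one of the tools the post describes): watch a process's voluntary context switches alongside the user/system/idle split, since a climbing voluntary count with low user time suggests threads that are mostly blocking rather than computing.

        import time
        import psutil

        def watch(pid, interval=5):
            proc = psutil.Process(pid)
            prev = proc.num_ctx_switches()
            while True:
                time.sleep(interval)
                cur = proc.num_ctx_switches()
                cpu = psutil.cpu_times_percent(interval=None)
                print(f"voluntary ctx/s: {(cur.voluntary - prev.voluntary) / interval:.0f}  "
                      f"usr: {cpu.user:.0f}%  sys: {cpu.system:.0f}%  idle: {cpu.idle:.0f}%")
                prev = cur

        # watch(12345)  # hypothetical PID of the application under test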

    Read the article

  • Recovering from an incorrectly deployed robots.txt?

    - by Doug T.
    We accidentally deployed a robots.txt from our development site that disallowed all crawling. This has caused traffic to dip dramatically, and Google results to report: "A description for this result is not available because of this site's robots.txt – learn more." We corrected the robots.txt about 1.5 weeks ago, and you can see our robots.txt here. However, search results still report the same robots.txt message, and the same appears to be true for Bing. We've taken the following action: submitted the site to be recrawled through Google Webmaster Tools, and submitted a sitemap to Google (basically doing everything possible to say "Hey, we're here! And we're crawlable!"). Indeed a lot of crawl activity seems to be happening lately, but still no description is crawled. I noticed this question where the problem was specific to a 303 redirect back to a disallowed path. We are 301 redirecting to /blog, but crawling is allowed there. This redirect is due to a site redesign: WordPress paths for posts such as /2012/02/12/yadda yadda have been moved to /blog/2012/02/12. We 301 redirect to WordPress at /blog to keep our Google juice. However, the sitemap we submitted might have /blog URLs; I'm not sure how much this matters. We clearly want to preserve Google juice for URLs linked to us from before our redesign with the /2012/02/... URLs. So perhaps this has prevented some content from getting recrawled? How can we get all of our content, with links pointing at our site from both before and after the redesign, reporting descriptions again? How can we resolve this problem and get our search traffic back to where it used to be?

    Read the article

  • Reality Check Webinar: How Does Your End User Adoption Fare Against 300 other Companies?

    - by Di Seghposs
    Gain insight into Neochange's 2012 Adoption Insight Report and compare your end-user adoption rate and strategy! Discover why user adoption is a key factor in your IT investment's success and how Oracle UPK Professional can help ensure it. Join us as Chris Dowse, CEO of Neochange, and Beth Renstrom, Manager of Oracle UPK Outbound Product Management, reveal the results of the user adoption survey, in which the user service models and productivity levels of 300 organizations are discussed in detail to identify trends that deliver higher business productivity. See how your organization's productivity and service model match up against the companies that are getting the most out of their IT investment. Thursday, April 5, 2012 -- 2:00 pm ET. Click here to register for the webcast.

    Read the article

  • Two Wifi Icons in Panel

    - by Alex
    I have the exact same problem in 13.10 as this user: Two Wifi indicators in panel. Here are some screenshots (not included in this excerpt), and here are screenshots from another user: http://ubuntuforums.org/showthread.php?t=2183020&p=12825563

    ifconfig and iwconfig outputs:

        $ ifconfig
        lo        Link encap:Local Loopback
                  inet addr:XXXXXX  Mask:XXXXXXX
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:65536  Metric:1
                  RX packets:2243 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:2243 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:209889 (209.8 KB)  TX bytes:209889 (209.8 KB)

        wlan0     Link encap:Ethernet  HWaddr XXXXXXXXX
                  inet addr:XXXXXX  Bcast:XXXXXXXX  Mask:XXXXXXX
                  inet6 addr: XXXXXXX Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:5925 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:3361 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:2951818 (2.9 MB)  TX bytes:630579 (630.5 KB)

        $ iwconfig
        lo        no wireless extensions.

        wlan0     IEEE 802.11abgn  ESSID:"XXXXX"
                  Mode:Managed  Frequency:2.437 GHz  Access Point: XXXXXXXX
                  Bit Rate=72.2 Mb/s   Tx-Power=15 dBm
                  Retry long limit:7   RTS thr:off   Fragment thr:off
                  Power Management:on
                  Link Quality=49/70  Signal level=-61 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:153  Invalid misc:472   Missed beacon:0

    Read the article

  • Silverlight XAP Signing Certificate promotion from Thawte

    And the offers keep coming in! Another one of our key partners for testing XAP signing for trusted applications was Thawte. Their group helped provide us with valid certificates to verify that their process and signing worked as expected (and verified) for Silverlight 4. Today I got an email from their marketing department saying they would like to offer Silverlight developers a discount on Thawte code-signing certificates: $89 for 1 year, about 70% off their current rate. That's pretty amazing of...

    Read the article

  • On Mac OS X how can I monitor what is using my internet connection?

    - by Jon Hopkins
    I've got a relatively limited broadband connection (I live miles from the nearest exchange), and from time to time net access (but nothing else) slows to a near crawl. I know from a bit of monitoring software that the connection is being fairly heavily used, which would explain it, but I don't know what's using it. There are certainly plenty of things that might (these days there are dozens of apps that will either regularly or infrequently check data or download updates), but how can I find out? I'm happy to pay a (small amount of) money if needed, though in that case I'd rather it were a recommendation than me just Googling for something.

    Read the article

  • From Transactions To Engagement

    - by David Dorf
    I've mentioned in the past that Oracle has invested quite a bit in acquiring social companies to build out its Social Relationship Management suite.  The concept is to shift away from transactions and towards engagement.  Social media represents a great opportunity to engage with customers, learn what they want, and personalize the shopping experience for them. I look at SRM as the bridge between traditional CRM and CX.  If you're looking for ideas, check out Five Social Retailing Suggestions and Social Analytics and the Customer.  There are lots of ways to leverage social media to enhance the customer experience and thus drive more sales. My friends over at 8th Bridge have just released their Social IQ report in which they rate retailers on their social capabilities.  They also produced a nice infographic so you can consume the data quickly, but I'd still encourage you to download the full report. Retailers interested in upping their SRM abilities should definitely stop by the Oracle booth at NRF in January.

    Read the article

  • 14.04 PlayonLinux and Steam equals jittery choppy freezing video

    - by user2715390
    I'm trying to get some Windows gaming software (Warframe) up and running, and it does run, but the video is very choppy and freezes even though the reported frame rate is 60 fps. I'm using: Ubuntu 14.04, Nvidia 340 driver, PlayOnLinux 4.2.4-2, Wine 1.7.24 & 1.7.22 (switchable), Steam 13 Aug 2014 14:19:47. I used the following link to get it going: Warframe Linux/Ubuntu. Does anyone have any tips for diagnosing this? Anyway, I think it has something to do with syncing. I would like to disable audio in Steam or PlayOnLinux so I can eliminate any sound issues. How do I do this? Regards, WallyZ

    Read the article

  • WIFI card intel r 2200 will not work

    - by Telemarkhero
    I have installed 12.04 on a Fujitsu Siemens Amilo Pro. The wifi card does not work. I have tried various threads but none have the same problem. I think I have the correct firmware, but the following comes up:

        gill@ubuntu:~$ iwconfig
        lo        no wireless extensions.

        eth1      IEEE 802.11bg  ESSID:off/any
                  Mode:Managed  Channel:0  Access Point: Not-Associated
                  Bit Rate:0 kb/s   Tx-Power=off   Sensitivity=8/0
                  Retry limit:7   RTS thr:off   Fragment thr:off
                  Power Management:off
                  Link Quality:0  Signal level:0  Noise level:0
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:0  Invalid misc:0   Missed beacon:0

        eth0      no wireless extensions.

    Also:

        gill@ubuntu:~$ rfkill list all
        0: phy0: Wireless LAN
                Soft blocked: no
                Hard blocked: yes

    There is a power switch for wifi but it does not do anything in Ubuntu. Any pointers gratefully received.

    Read the article

  • At what point should data be sent back to server?

    - by whamsicore
    A good example would be the Stack Exchange "rate" button. When a post is upvoted the arrow changes color immediately. However, there is a grace period during which one can edit one's vote decision (oops! voted by mistake?). Is the upvote action processed immediately, or is it only processed after a set time period, or when the user leaves the page? How exactly is this rating processed? What is the standard for handling dynamic page edits (e.g. Stack Exchange ratings, Facebook posts)?
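    One common pattern (a sketch of my own with hypothetical names, not how Stack Exchange actually implements it) is to update the UI optimistically and debounce the server write, so a quick reversal inside the grace period never reaches the backend:

        import threading

        GRACE_SECONDS = 5.0
        _pending = {}  # (user_id, post_id) -> pending timer

        def send_to_server(user_id, post_id, vote):
            print(f"persisting vote {vote:+d} for post {post_id}")  # stand-in for the real API call

        def _flush(key, user_id, post_id, vote):
            _pending.pop(key, None)
            send_to_server(user_id, post_id, vote)

        def cast_vote(user_id, post_id, vote):
            """Called on every click; only the final state inside the grace window is persisted."""
            key = (user_id, post_id)
            if key in _pending:              # the user changed their mind: cancel the queued write
                _pending.pop(key).cancel()
            timer = threading.Timer(GRACE_SECONDS, _flush, args=(key, user_id, post_id, vote))
            _pending[key] = timer
            timer.start()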

    Read the article

  • Design leaderboard ratings for quiz games

    - by PeterK
    Back in March 2011 I started the following post: How to design a leaderboard? My quiz game has now been out for approximately a year and has sold pretty decently. I am working on updating the game design and am again looking into the leaderboard design to make it better, as I am not happy with it. Currently I rate players on the number of correct answers, which is not good as it does not consider things like the number of games played, difficulty levels, etc. I also have "extended" stats behind the UITableView (leaderboard). The game works as follows: a player can play at three levels of difficulty (hard, medium or easy); difficulty levels can be mixed between players in a game; each game can have one to six players, so there can be single games or duels; and each game has between 2 and 30 questions. As I am considering integrating the Game Center leaderboard, I need to design a better rating system, so I would like to ask for some ideas on how to do the rating based on the above. I am thinking about how much a point should be worth and what it should include.
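    As one illustrative direction (my own sketch, not a recommendation from the post): score each answer by difficulty, normalize by the questions asked, and average recent games, so the rating reflects accuracy and difficulty rather than raw volume.

        DIFFICULTY_WEIGHT = {"easy": 1.0, "medium": 1.5, "hard": 2.0}

        def game_score(results):
            """results: list of (difficulty, answered_correctly) tuples for one player in one game."""
            earned = sum(DIFFICULTY_WEIGHT[d] for d, correct in results if correct)
            possible = sum(DIFFICULTY_WEIGHT[d] for d, _ in results)
            return 1000 * earned / possible if possible else 0   # 0..1000 per game

        def leaderboard_rating(game_scores, recent_n=20):
            """Average the most recent games so ratings stay comparable regardless of games played."""
            recent = game_scores[-recent_n:]
            return sum(recent) / len(recent) if recent else 0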

    Read the article

  • Do I really need Microsoft Updates?

    - by Tony Wong
    When I install a fresh copy of Windows XP Home (bought from the store, not a copy), my PC runs at lightning speed. But when I start installing all the updates and patches, less the .NET 4.0 Client (which seems to bring the machine to a slow crawl), the PC starts to slow down, as if there are more resources to watch or something is happening in the background. So could I not get away with an awesome virus protector and an awesome firewall setup and avoid all the patches? The machine I have is a quad-core with 4 GB RAM and a 2.3 GHz processor. Tons of room, and the machine can run several applications at one time, but when the updates happen it's s-l-o-w!

    Read the article

  • How to make bash script run with a latency (i.e. wait 1 sec at each iterations)?

    - by user2413
    I have this bash script:

        for (( i = 1 ; i <= 160 ; i++ )); do
            qsub myccomputations"${i}".pbs
        done

    Basically, I would prefer a 1-second delay between iterations. The reason is that each iteration sends the program file myccomputations"${i}".pbs to a core node for solving. Solving in this instance involves the use of pseudo-random numbers. I suspect the RNG I use (R's) uses CPU time as its seed, because as things stand I get repeating pseudo-random numbers (at a rate of approximately 1 out of 100). So how do you ask bash to do this?

        for (( i = 1 ; i <= 160 ; i++ )); do
            sleep 1    # pause for one second before submitting the next job
            qsub myccomputations"${i}".pbs
        done

    Read the article

  • Virtual Server 2005 R2 kungfu

    - by AngryHacker
    Does Virtual Server 2005 R2 have a command-line interface that's versatile enough? Here is the situation. I run a Win2k VM on an old, memory-constrained machine. I allocate it 378 MB of RAM and the VM runs just fine. Once a month, inside the VM, I back up a (very large) database, compress it using 7-Zip and FTP it to the backup site (all in a script). Unfortunately the compression part takes a massive amount of RAM (far exceeding the 378 MB); it goes to the paging file, brings absolutely everything to a crawl, and literally takes 2-3 days if left unattended. To fix this, I shut down the VM, temporarily give it 768 MB of RAM, and then the whole thing finishes in 20 minutes. So, is there a way to do the following automatically from the host machine in a script? Shut down the guest OS (I think I've got this part), change the RAM allocation from 378 to 768, start the guest OS again, and then, 1 hour later, do everything in reverse.
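    Virtual Server 2005 R2 exposes a COM API that can be driven from a script on the host; below is a rough sketch from Python via pywin32. Treat the ProgID ("VirtualServer.Application") and the FindVirtualMachine / Memory / Startup / GuestOS.Shutdown members as assumptions to verify against the Virtual Server COM API documentation, and the VM name as hypothetical.

        import time
        import win32com.client

        vs = win32com.client.Dispatch("VirtualServer.Application")  # assumed ProgID
        vm = vs.FindVirtualMachine("MyWin2kVM")                     # hypothetical VM name

        vm.GuestOS.Shutdown()      # ask the guest to shut down cleanly (assumed method)
        time.sleep(120)            # crude wait; polling the VM state would be more robust
        vm.Memory = 768            # RAM in MB (assumed property)
        vm.Startup()

        time.sleep(3600)           # let the backup/compress/ftp job inside the guest finish
        vm.GuestOS.Shutdown()
        time.sleep(120)
        vm.Memory = 378
        vm.Startup()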

    Read the article

  • How to schedule time-of-day upgrades

    - by Richard
    Hello, I'm responsible for about 30 Ubuntu computers at a private K-8 school. We have only a 3 Mbps internet connection serving the entire campus, and I would like to ensure that updates are done in the middle of the night so that daytime tasks are not slowed down. I'm using Ubuntu 10.04 and have set all computers to download and install security updates via the update manager. I have also installed cron-apt and modified the config file to stagger the start times of the upgrades from about 10pm to 4am local time. HOWEVER, this morning I arrived at the school at 7:30am and all the computers were busy downloading a large security update. Needless to say, all internet activity was slowed to a crawl for the next 2 hours, and the computer users were very, very upset. This was exactly the event I'm trying to prevent. It seems that my scheme to ensure middle-of-the-night downloads failed, and I'm not sure why. I've also tried some schemes using unattended-upgrades and crontab, but there always seemed to be something scheduling upgrades in addition to the ones I try to force in the middle of the night. Is there a surefire way to absolutely, positively guarantee that updates will occur only at one specific time? It would be nice if the update manager just had a drop-down menu to specify a designated time. Thanks in advance for any help you can give me.

    Read the article

  • BI&EPM in Focus Oct 2012

    - by Mike.Hallett(at)Oracle-BI&EPM
    Customers

    - Iluka Resources Improves Business Insight into Mining Operations Through Significantly Faster, Customized Analyses
    - Banco do Brasil Monitors Budgets in Real Time, Generates Financial Reports In Minutes Instead of Months
    - General Dynamics Improves Budgeting and Planning and Accelerates Rate Changes by Using Integrated Enterprise Performance Management Suite
    - Facebook achieves world-wide automation of financial close task tracking and management of account reconciliations with Oracle Hyperion Financial Close Management (link)
    - Hess Consolidates Multiple SAP General Ledgers with Oracle Hyperion (link)
    - Navistar Leads with Cutting Edge Hyperion Platform, Including HSF, HPCM (link)

    Enterprise Performance Management

    - Oct 10: Navistar Leverages DRM (Rolta Solutions) (link)
    - Replay: Integrated Business Planning, Featuring Leggett & Platt (link)

    Business Intelligence

    - Report: From Overload to Impact: An Industry Scorecard on Big Data Business Challenges (link | press release)
    - Oct 10: The Top Five Things You Should Know When Migrating from an Old BI Technology to Oracle Business Intelligence Enterprise Edition (Performance Architects) (link)

    Read the article

  • New site not appearing in index after change of address, no feedback from google webmaster tools

    - by Duffy
    Our change of address seems to not be taking effect. Here's the story so far: We're a web company and our product is called The New Hive. Our site used to be at thenewhive.com, but we decided to switch to newhive.com (drop the "the", it's cleaner). So here is the timeline of what I've tried, starting on July 29th: used 301 redirects for all pages (e.g. thenewhive.com/tag/art = newhive.com/tag/art). At this point we noticed that we had disappeared from search results when searching "The New Hive"; the front page used to be all links to our site plus a couple of news articles about the company. So on August 5th I verified the new domain in Webmaster Tools (the old domain was already verified) and submitted a change of address request with Webmaster Tools / Configuration / Change of Address. Then after another week, on August 13th, I went to Webmaster Tools / Health / Fetch as Google, fetched our homepage and a couple of sub-pages (all successfully), and clicked "Submit to Index" for the homepage. As of today (August 23rd) we're still not showing up in the index. We're getting no warnings or feedback of any kind from the dashboard, so I'm inclined to think something's broken with the dashboard rather than that something's wrong with our site from an SEO perspective. From the dashboard: no new messages or recent critical issues. Crawl Errors: no data available. From Health - Index Status:

        Total indexed      0
        Ever crawled       42,490
        Not selected       12
        Blocked by robots  0

    I'm really at a loss here; any help would be appreciated.

    Read the article
