Search Results

Search found 24792 results on 992 pages for 'chris may'.

Page 59/992 | < Previous Page | 55 56 57 58 59 60 61 62 63 64 65 66  | Next Page >

  • How to get tens of millions of pages indexed by Google bot?

    - by Chris Adragna
    We are developing a site that currently has 8 million unique pages, will grow to about 20 million right away, and eventually to about 50 million or more. Before you criticize... yes, it provides unique, useful content. We continually process raw data from public records and, by doing some data scrubbing, entity rollups, and relationship mapping, we've been able to generate quality content, developing a site that's quite useful and also unique, in part due to the breadth of the data. Its PR is 0 (new domain, no links), and we're getting spidered at a rate of about 500 pages per day, putting us at about 30,000 pages indexed thus far. At this rate, it would take over 400 years to index all of our data. I have two questions: Is the rate of indexing directly correlated to PR? By that I mean, is it correlated enough that purchasing an old domain with good PR would get us to a workable indexing rate (in the neighborhood of 100,000 pages per day)? And are there any SEO consultants who specialize in aiding the indexing process itself? We're otherwise doing very well with SEO, on-page especially; besides, the competition for our "long-tail" keyword phrases is pretty low, so our success hinges mostly on the number of pages indexed. Our main competitor has achieved approx 20MM pages indexed in just over one year's time, along with an Alexa 2000-ish ranking. Noteworthy qualities we have in place: page download speed is pretty good (250-500 ms); no errors (no 404 or 500 errors when getting spidered); we use Google Webmaster Tools and log in daily; friendly URLs are in place. I'm afraid to submit sitemaps. Some SEO community postings suggest a new site with millions of pages and no PR is suspicious. There is a Google video of Matt Cutts speaking of a staged on-boarding of large sites, too, in order to avoid increased scrutiny (at approx 2:30 in the video). Clickable site links deliver all pages, no more than four pages deep and typically no more than 250(-ish) internal links on a page. Anchor text for internal links is logical and adds relevance hierarchically to the data on the detail pages. We had previously set the crawl rate to the highest on Webmaster Tools (only about a page every two seconds, max); I recently turned it back to "let Google decide", which is what is advised.
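
    If sitemaps are eventually submitted, the usual approach for a site this size is a sitemap index pointing at many smaller sitemap files, since the sitemap protocol caps each file at 50,000 URLs. A minimal Python sketch of that split; the file names, the example.com domain, and where the URL list comes from are assumptions for illustration, not details from the post:

        import math
        from xml.sax.saxutils import escape

        def write_sitemaps(urls, base_url="https://example.com", per_file=50000):
            """Split a list of URLs into sitemap files plus one sitemap index."""
            n_files = math.ceil(len(urls) / per_file)
            for i in range(n_files):
                chunk = urls[i * per_file:(i + 1) * per_file]
                with open(f"sitemap-{i:05d}.xml", "w") as f:
                    f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
                    f.write('<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n')
                    for u in chunk:
                        f.write(f"  <url><loc>{escape(u)}</loc></url>\n")
                    f.write("</urlset>\n")
            # One index file referencing every sitemap file written above.
            with open("sitemap-index.xml", "w") as f:
                f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
                f.write('<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n')
                for i in range(n_files):
                    f.write(f"  <sitemap><loc>{base_url}/sitemap-{i:05d}.xml</loc></sitemap>\n")
                f.write("</sitemapindex>\n")

    Submitting only a subset of these files at first would also fit the staged on-boarding approach mentioned above.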

    Read the article

  • How to Fix a Stuck Pixel on an LCD Monitor

    - by Chris Hoffman
    Have you ever noticed that a pixel – a little dot on your computer’s LCD monitor – is staying a single color all of the time? You have a stuck pixel. Luckily, stuck pixels aren’t always permanent. Stuck and dead pixels are hardware problems. They’re often caused by manufacturing flaws – pixels aren’t supposed to get stuck or die over time. Image Credit: Alexi Kostibas on Flickr

    Read the article

  • Why does 'top' say my machine is only 50% idle?

    - by Chris Moore
    What's going on here? I'm running nothing on the system, iotop and iftop show the network and hard drive are both idle, and top (sorted by %CPU) shows nothing running. So why is the system only 50% idle? What's the other 50% waiting for? How can I find out?

        top - 12:01:05 up 3 days, 15:03, 1 user, load average: 6.00, 6.01, 6.05
        Tasks: 179 total, 1 running, 178 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.7%us, 0.0%sy, 0.0%ni, 49.7%id, 49.7%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 2053996k total, 1992600k used, 61396k free, 81680k buffers
        Swap: 4092924k total, 10740k used, 4082184k free, 1338636k cached

        PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
        1042 deb 20 0 21468 1412 1000 R 1 0.1 0:00.03 top
        1 root 20 0 24188 1952 1152 S 0 0.1 0:01.44 init
        2 root 20 0 0 0 0 S 0 0.0 0:00.05 kthreadd

    Update: dmesg shows the printer driver misbehaving:

        [28858.561847] cnijnetprn[1503]: segfault at 29 ip 00007f56cf3480f7 sp 00007fffb964ec30 error 4 in libcnnet.so.1.2.0[7f56cf345000+9000]
        [68851.187802] cnijnetprn[9180]: segfault at 29 ip 00007ffe7636a0f7 sp 00007fff9a8b1990 error 4 in libcnnet.so.1.2.0[7ffe76367000+9000]
        [155412.107826] cnijnetprn[19966]: segfault at 29 ip 00007fc31de770f7 sp 00007fffc03aa8e0 error 4 in libcnnet.so.1.2.0[7fc31de74000+9000]

    and also some issue with cp:

        [248041.172067] INFO: task cp:27488 blocked for more than 120 seconds.
        [248041.172071] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [248041.172075] cp D ffffffff81805120 0 27488 27345 0x00000004
        [248041.172080] ffff880078d57a38 0000000000000046 ffff880078d579d8 ffffffff81032a79
        [248041.172085] ffff880078d57fd8 ffff880078d57fd8 ffff880078d57fd8 0000000000012a40
        [248041.172090] ffff88007b818000 ffff880069acc560 ffff880078d57a18 ffff88007f8532c0
        [248041.172095] Call Trace:
        [248041.172104] [<ffffffff81032a79>] ? default_spin_lock_flags+0x9/0x10
        [248041.172109] [<ffffffff8110a360>] ? __lock_page+0x70/0x70
        [248041.172114] [<ffffffff815f0ecf>] schedule+0x3f/0x60

    I did try copying something to the USB stick that's plugged into the router and mounted onto this computer using mount.cifs. That almost always causes everything to lock up, so I'm guessing that's the problem. I'll reboot and stop using mount.cifs.
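
    The 49.7%wa figure in that Cpu(s) line is I/O wait: the CPU is idle while tasks sit in uninterruptible sleep (state D) waiting on a device, which is also why the load average is 6 with almost nothing runnable. A small Python sketch, written as an illustration rather than taken from the post, that lists D-state tasks by reading /proc:

        import os

        def d_state_tasks():
            """Return (pid, command) pairs for tasks in uninterruptible sleep."""
            hung = []
            for pid in filter(str.isdigit, os.listdir("/proc")):
                try:
                    with open(f"/proc/{pid}/stat") as f:
                        data = f.read()
                except OSError:
                    continue  # process exited while we were scanning
                # Field layout is: pid (comm) state ...; comm may contain spaces.
                comm = data[data.index("(") + 1:data.rindex(")")]
                state = data[data.rindex(")") + 2]
                if state == "D":
                    hung.append((int(pid), comm))
            return hung

        if __name__ == "__main__":
            for pid, comm in d_state_tasks():
                print(f"{pid:>7} D {comm}")

    Run while the stall is happening, this should show cp (or whatever is blocked on the cifs mount) stuck in state D.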

    Read the article

  • Why is Ubuntu's clock getting slower or faster?

    - by ændrük
    Ubuntu's clock is off by about a half hour: Where do I even start troubleshooting this? It's allegedly being set "automatically from the Internet". How can I verify that "the Internet" knows what time it is? Details Ubuntu has had plenty of time to communicate with the Internet: $ date; uptime Fri May 18 05:56:00 PDT 2012 05:56:00 up 12 days, 10:48, 2 users, load average: 0.61, 0.96, 1.15 This time server I found via a web search does appear to know the correct time: $ date; ntpdate -q north-america.pool.ntp.org Fri May 18 05:56:09 PDT 2012 server 208.38.65.37, stratum 2, offset 1752.625337, delay 0.10558 server 46.166.138.172, stratum 2, offset 1752.648597, delay 0.10629 server 205.189.158.228, stratum 3, offset 1752.672466, delay 0.11829 18 May 05:56:18 ntpdate[29752]: step time server 208.38.65.37 offset 1752.625337 sec There aren't any reported errors related to NTP: $ grep -ic ntp /var/log/syslog 0 After rebooting, the time was automatically corrected and the following appeared in /var/log/syslog: May 18 17:58:12 aux ntpdate[1891]: step time server 91.189.94.4 offset 1838.497277 sec A log of the offset reported by ntpdate reveals that the clock is drifting by about 9 seconds every hour: $ while true; do ntpdate-debian -q | tail -n 1 >> 'drift.log'; sleep 16m; done ^C $ r -e ' attach(read.table("drift.log", header=FALSE)) clock <- as.POSIXct(paste(V1, V2, V3), format="%d %b %H:%M:%S") fit <- lm(V10~clock) png("drift.png") plot(clock, V10, xlab="Clock time", ylab="Time server offset (s)") abline(fit) mtext(sprintf("Drift rate: %.2f s/hr", fit$coefficients[[2]]*3600)) '
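
    For a rough cross-check of the R fit, the same least-squares slope can be computed from (time, offset) pairs in a few lines of Python. The sample data below is made up purely to show the method; only the procedure mirrors the log above:

        # (seconds since the first sample, offset reported by ntpdate in seconds)
        samples = [
            (0,    1752.6),   # hypothetical readings taken 16 minutes apart
            (960,  1755.0),
            (1920, 1757.4),
            (2880, 1759.8),
        ]

        n = len(samples)
        mean_t = sum(t for t, _ in samples) / n
        mean_o = sum(o for _, o in samples) / n
        slope = (sum((t - mean_t) * (o - mean_o) for t, o in samples)
                 / sum((t - mean_t) ** 2 for t, _ in samples))
        print(f"Drift rate: {slope * 3600:.2f} s/hr")   # prints 9.00 s/hr for this data

    A steady drift of that size usually points at the clock source rather than NTP itself, which is consistent with the step corrections seen after reboot.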

    Read the article

  • Coordinate based travel through multi-line path over elapsed time

    - by Chris
    I have implemented A* Path finding to decide the course of a sprite through multiple waypoints. I have done this for point A to point B locations but am having trouble with multiple waypoints, because on slower devices when the FPS slows and the sprite travels PAST a waypoint I am lost as to the math to switch directions at the proper place. EDIT: To clarify my path finding code is separate in a game thread, this onUpdate method lives in a sprite like class which happens in the UI thread for sprite updating. To be even more clear the path is only updated when objects block the map, at any given point the current path could change but that should not affect the design of the algorithm if I am not mistaken. I do believe all components involved are well designed and accurate, aside from this piece :- ) Here is the scenario: public void onUpdate(float pSecondsElapsed) { // this could be 4x speed, so on slow devices the travel moved between // frames could be very large. What happens with my original algorithm // is it will start actually doing circles around the next waypoint.. pSecondsElapsed *= SomeSpeedModificationValue; final int spriteCurrentX = this.getX(); final int spriteCurrentY = this.getY(); // getCoords contains a large array of the coordinates to each waypoint. // A waypoint is a destination on the map, defined by tile column/row. The // path finder converts these waypoints to X,Y coords. // // I.E: // Given a set of waypoints of 0,0 to 12,23 to 23, 0 on a 23x23 tile map, each tile // being 32x32 pixels. This would translate in the path finder to this: // -> 0,0 to 12,23 // Coord : x=16 y=16 // Coord : x=16 y=48 // Coord : x=16 y=80 // ... // Coord : x=336 y=688 // Coord : x=336 y=720 // Coord : x=368 y=720 // // -> 12,23 to 23,0 -NOTE This direction change gives me trouble specifically // Coord : x=400 y=752 // Coord : x=400 y=720 // Coord : x=400 y=688 // ... // Coord : x=688 y=16 // Coord : x=688 y=0 // Coord : x=720 y=0 // // The current update index, the index specifies the coordinate that you see above // I.E. final int[] coords = getCoords( 2 ); -> x=16 y=80 final int[] coords = getCoords( ... ); // now I have the coords, how do I detect where to set the position? The tricky part // for me is when a direction changes, how do I calculate based on the elapsed time // how far to go up the new direction... I just can't wrap my head around this. this.setPosition(newX, newY); }
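
    One way to avoid circling a waypoint is to treat the frame's travel as a distance budget and walk it along the remaining path segments, carrying any leftover distance into the next segment instead of overshooting the current one. A minimal sketch of that idea in Python; the names and data layout are illustrative, not taken from the game code, and only the waypoint coordinates come from the comments above:

        import math

        def advance_along_path(position, path, distance):
            """Move `distance` units along `path` (a list of (x, y) waypoints),
            starting from `position`, which is assumed to lie on the current segment.
            Returns the new position and the waypoints that still remain."""
            x, y = position
            remaining = list(path)
            while remaining and distance > 0:
                tx, ty = remaining[0]
                seg = math.hypot(tx - x, ty - y)
                if distance < seg:
                    # Stop partway along this segment.
                    x += (tx - x) * distance / seg
                    y += (ty - y) * distance / seg
                    return (x, y), remaining
                # Consume the whole segment and carry the leftover into the next one.
                distance -= seg
                x, y = tx, ty
                remaining.pop(0)
            return (x, y), remaining

        # Example: a large per-frame step crosses the corner at (336, 720) cleanly.
        pos, todo = advance_along_path((336, 688), [(336, 720), (368, 720)], 40)
        print(pos, todo)   # (344.0, 720.0), with (368, 720) still to go

    Because the leftover distance is applied along the new direction, a large pSecondsElapsed simply lands the sprite further along the next segment rather than orbiting the corner.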

    Read the article

  • why would Remmina stop working?

    - by Chris Curvey
    Until sometime last night, I had remmina working fine. I could run RDP through an SSH tunnel and all was well. Then it stopped working. I can get as far as the password dialog for my work machine, but then it just says "Cannot connect to RDP server localhost". I can't even find any logs that look interesting. I've re-installed remmina, cleared my .remmina directory, restarted my machine, and even restarted my gateway. Just to make it really weird, my laptop (which has the same setup -- latest Ubuntu and Remmina) can make the connection just fine. It is even going through the same router, albeit wirelessly. Any thoughts?

    Read the article

  • How to stop fan running always on Asus P8P67LE motherboard with ATI Radeon HD6900

    - by Chris Good
    I'm using Ubuntu 12.04 LTS. I'm not sure if it is the CPU (i7) fan or the video card fan. I've tried using lm-sensors & fancontrol:

        sudo sensors-detect
        Now follows a summary of the probes I have just done.
        Just press ENTER to continue:
        Driver `w83627ehf':
          * ISA bus, address 0x290
            Chip `Nuvoton NCT6776F Super IO Sensors' (confidence: 9)
        Driver `coretemp':
          * Chip `Intel digital thermal sensor' (confidence: 9)
        To load everything that is needed, add this to /etc/modules:
        # Chip drivers
        coretemp
        w83627ehf

    Like many people, I'm also getting this error:

        /usr/sbin/pwmconfig: There are no pwm-capable sensor modules installed

    Here is the output of sensors:

        # sensors
        radeon-pci-0100
        Adapter: PCI adapter
        temp1: +71.0°C

        coretemp-isa-0000
        Adapter: ISA adapter
        Physical id 0: +44.0°C (high = +80.0°C, crit = +98.0°C)
        Core 0: +44.0°C (high = +80.0°C, crit = +98.0°C)
        Core 1: +40.0°C (high = +80.0°C, crit = +98.0°C)
        Core 2: +43.0°C (high = +80.0°C, crit = +98.0°C)
        Core 3: +42.0°C (high = +80.0°C, crit = +98.0°C)

    I'm hoping someone has already solved this for my configuration, because this seems to be a problem for many people and there are many different suggestions.
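
    For reference, the temperatures that sensors prints come from the kernel's hwmon interface, so they can also be read directly from /sys while narrowing down which chip is heating up. A hedged Python sketch; the paths follow the usual hwmon layout, which can vary by driver and kernel version:

        import glob
        import os

        def read_hwmon_temps():
            """Print every hwmon temperature input in degrees Celsius."""
            for path in sorted(glob.glob("/sys/class/hwmon/hwmon*/temp*_input")):
                device = os.path.dirname(path)
                try:
                    with open(os.path.join(device, "name")) as f:
                        chip = f.read().strip()
                    with open(path) as f:
                        millideg = int(f.read().strip())   # values are in millidegrees
                except OSError:
                    continue  # sensor went away or is unreadable
                print(f"{chip:12s} {os.path.basename(path):14s} {millideg / 1000:.1f} C")

        if __name__ == "__main__":
            read_hwmon_temps()

    If the radeon reading is the one climbing, the noisy fan is most likely the video card's rather than the CPU's.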

    Read the article

  • Antenna Aligner Part 7: Connecting the dots

    - by Chris George
    The app is basically ready, so I eagerly started to sort out creating the application entry in iTunes Connect. It's mostly intuitive actually, although I did have to create yet another icon for iTunes sized 512x512 pixels, damn lucky I did the original graphics as vector! It took me longer to write the application description than anything else, I'm so not a tech author! I didn't like the way you have to 'make up' an SKU (Stock Keeping Unit) number. I have to do some googling to find out that it really doesn't matter what it is! It should be more obvious what to do from the actual website itself. That aside, the rest of it was actually fairly straightforward. As well as the details of the application, iPhone and iPad screenshots were also required. This posed somewhat of a problem. The iPhone ones were easy (as I have one!), but I do not (yet) own an iPad . So I thought I'd leave the iPad screenshots out for now. Once the application details were sorted, I moved onto the rights and pricing. At the start of the project I had made the decision that I wouldn't charge any more than the lowest amount £0.59. I believe there is a market for this, but as my first foray into app development I didn't want to take the mick. I did realise, however, that I had built my app with a developer certificate and provisioning profile. This was fairly quickly corrected, and again Nomad made this very easy to switch over to the distribution certificate and provisioning profile. With a sense of excitement I cracked open iTunes connect and clicked the upload button ... ...slight snag... . when the Nomad project was started, Apple allowed uploads of these binaries via iTunes Connect. But this is no longer possible, the only upload path is via the Application Loader available from the Apple Developer program. This itself has one limitation, it only runs on a mac! D'OH!!!  Actually my language was somewhat more colourful when this fact came to light. After picking my laptop up off the floor and putting it back together... ok only joking, but I did nearly throw it out of frustration!... I started to consider the options; I briefly entertained the idea of buying a cheap mac from ebay... no, that defeats the whole object of what I'm doing, plus my wife wouldn't be impressed there are some guys out there in the interweb who will upload your app for a small fee...but I don't really like the idea of giving some faceless email address my apple developer login details, as well as my app binary! find some willing friend with a mac who would kindly let me use it... obviously this is the only sensible option. In the meantime, I informed the Nomad team about this slight 'issue' and they are currently investigating possible solutions...

    Read the article

  • How to Back Up & Restore Your Installed Ubuntu Packages With APTonCD

    - by Chris Hoffman
    APTonCD is an easy way to back up your installed packages to a disc or ISO image. You can quickly restore the packages on another Ubuntu system without downloading anything. After using APTonCD, you can install the backed up packages with a single action, add the packages as a software source, or restore them to your APT cache.

    Read the article

  • How to Back Up Your Linux System With Back In Time

    - by Chris Hoffman
    Ubuntu includes Déjà Dup, an integrated backup tool, but some people prefer Back In Time instead. Back In Time has several advantages over Déjà Dup, including a less-opaque backup format, integrated backup file browser, and more configurability. Déjà Dup still has a few advantages, notably its optional encryption and simpler interface, but Back In Time gives Déjà Dup a run for its money.

    Read the article

  • Is this Ubuntu One DBus signal connection code correct?

    - by Chris Wilson
    This is my first time using DBus so I'm not entirely sure if I'm going about this the right way. I'm attempting to connect to the Ubuntu One DBus service and obtain login credentials for my app, however the slots I've connected to the DBus return signals detailed here never seem to be firing, despite a positive result being returned during the connection. Before I start looking for errors in the details relating to this specific service, could someone please tell me if this code would even work in the first place, or if I've done something wrong here?

        int main() {
            UbuntuOneDBus *u1Dbus = new UbuntuOneDBus;
            if( u1Dbus->init() ){
                qDebug() << "Message queued";
            }
        }

        UbuntuOneDBus::UbuntuOneDBus() {
            service = "com.ubuntuone.Credentials";
            path = "/credentials";
            interface = "com.ubuntuone.CredentialsManagement";
            method = "register";
            signature = "a{ss} (Dict of {String, String})";
            connectReturnSignals();
        }

        bool UbuntuOneDBus::init() {
            QDBusMessage message = QDBusMessage::createMethodCall( service, path, interface, method );
            bool queued = QDBusConnection::sessionBus().send( message );
            return queued;
        }

        void UbuntuOneDBus::connectReturnSignals() {
            bool connectionSuccessful = false;

            connectionSuccessful = QDBusConnection::sessionBus().connect( service, path, interface, "CredentialsFound", "a{ss} (Dict of {String, String})", this, SLOT( credentialsFound() ) );
            if( ! connectionSuccessful )
                qDebug() << "Connection to DBus::CredentialsFound signal failed";

            connectionSuccessful = QDBusConnection::systemBus().connect( service, path, interface, "CredentialsNotFound", "(nothing)", this, SLOT( credentialsNotFound() ) );
            if( ! connectionSuccessful )
                qDebug() << "Connection to DBus::CredentialsNotFound signal failed";

            connectionSuccessful = QDBusConnection::systemBus().connect( service, path, interface, "CredentialsError", "a{ss} (Dict of {String, String})", this, SLOT( credentialsError() ) );
            if( ! connectionSuccessful )
                qDebug() << "Connection to DBus::CredentialsError signal failed";
        }

        void UbuntuOneDBus::credentialsFound() { std::cout << "Credentials found" << std::endl; }

        void UbuntuOneDBus::credentialsNotFound() { std::cout << "Credentials not found" << std::endl; }

        void UbuntuOneDBus::credentialsError() { std::cout << "Credentials error" << std::endl; }
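
    For comparison, the same connect-then-call pattern can be sanity-checked outside Qt with dbus-python. This is a hedged sketch of the general shape rather than a drop-in answer: it assumes the service names quoted in the question and the standard dbus/GLib APIs, and the argument passed to register is a guess. Note that this sketch matches all three signals on the same session bus:

        import dbus
        from dbus.mainloop.glib import DBusGMainLoop
        from gi.repository import GLib

        DBusGMainLoop(set_as_default=True)
        bus = dbus.SessionBus()   # every signal match goes on the same bus

        def on_found(credentials):
            print("Credentials found:", dict(credentials))

        def on_not_found():
            print("Credentials not found")

        def on_error(detail):
            print("Credentials error:", dict(detail))

        for name, handler in [("CredentialsFound", on_found),
                              ("CredentialsNotFound", on_not_found),
                              ("CredentialsError", on_error)]:
            bus.add_signal_receiver(handler,
                                    signal_name=name,
                                    dbus_interface="com.ubuntuone.CredentialsManagement",
                                    path="/credentials")

        # Fire the method call, then wait for whichever signal comes back.
        proxy = bus.get_object("com.ubuntuone.Credentials", "/credentials")
        manager = dbus.Interface(proxy, "com.ubuntuone.CredentialsManagement")
        manager.register(dbus.Dictionary({}, signature="ss"))  # empty a{ss}; real contents assumed

        GLib.MainLoop().run()

    If the signals arrive here but not in the Qt version, the mismatch between sessionBus() and systemBus() in the connect calls above is a good place to start looking.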

    Read the article

  • How To Log Into Multiple Accounts On the Same Website At Once

    - by Chris Hoffman
    If you ever want to sign into two different accounts on the same website at once – say, to have multiple Gmail inboxes open next to each other – you can’t just open a new tab or browser window. Websites store your login state in browser-specific cookies. There are a number of ways you can get another browser window with its own cookies and stay logged into multiple accounts at once.

    Read the article

  • More Than Headsets: 5 Things You Can Do With Bluetooth

    - by Chris Hoffman
    Your laptop, smartphone, and tablet probably all have integrated Bluetooth support. Bluetooth is a standard that allows devices to communicate wirelessly. Most people are familiar with Bluetooth headsets, but there are more things you can do with Bluetooth. To make two Bluetooth devices work together, you’ll have to “pair” them. For example, you can pair a Bluetooth mouse with your laptop, pair a Bluetooth headset with your phone, or pair your smartphone with your laptop.    

    Read the article

  • Antenna Aligner Part 5: Devil is in the detail

    - by Chris George
    "The first 90% of a project takes 90% of the time and the last 10% takes the another 200%"  (excerpt from onista) Now that I have a working app (more or less), it's time to make it pretty and slick. I can't stress enough how useful it is to get other people using your software, and my simple app is no exception. I handed my iPhone to a couple of my colleagues at Red Gate and asked them to use it and give me feedback. Immediately it became apparent that the delay between the list page being shown and the list being drawn was too long, and everyone who tried the app clicked on the "Recalculate" button before it had finished. Similarly, selecting a transmitter heralded a delay before the compass page appeared with similar consequences. All users expected there to be some sort of feedback/spinny etc. to show them it is actually doing something. In a similar vein although for opposite reasons, clicking the Recalculate button did indeed recalculate the available transmitters and redraw them, but it did this too fast! One or two users commented that they didn't know if it had done anything. All of these issues resulted in similar solutions; implement a waiting spinny. Thankfully, jquery mobile has one built in, primarily used for ajax operations. Not wishing to bore you with the many many iterations I went through trying to get this to work, I'll just give you my solution! (Seriously, I was working on this most evenings for at least a week!) The final solution for the recalculate problem came in the form of the code below. $(document).on("click", ".show-page-loading-msg", function () {            var $this = $(this),                theme = $this.jqmData("theme") ||                        $.mobile.loadingMessageTheme;            $.mobile.showPageLoadingMsg(theme, "recalculating", false);            setTimeout(function ()                           { $.mobile.hidePageLoadingMsg(); }, 2000);            getLocationData();        })        .on("click", ".hide-page-loading-msg", function () {              $.mobile.hidePageLoadingMsg();        }); The spinny is activated by setting the class of a button (for example) to the 'show-page-loading-msg' class. Recalculate This means the code above is fired, calling the showPageLoadingMsg on the document.mobile object. Then, after a 2 second timeout, it calls the hidePageLoadingMsg() function. Supposedly, it should show "recalculating" underneath the spinny, but I've not got that to work. I'm wondering if there is a problem with the jquery mobile implementation. Anyway, it doesn't really matter, it's the principle I'm after, and I now have spinnys!

    Read the article

  • HTG Explains: Why You Only Have to Wipe a Disk Once to Erase It

    - by Chris Hoffman
    You’ve probably heard that you need to overwrite a drive multiple times to make the data unrecoverable. Many disk-wiping utilities offer multiple-pass wipes. This is an urban legend – you only need to wipe a drive once. Wiping refers to overwriting a drive with all 0’s, all 1’s, or random data. It’s important to wipe a drive once before disposing of it to make your data unrecoverable, but additional wipes offer a false sense of security. Image Credit: Norlando Pobre on Flickr

    Read the article

  • Clickworthy tweets, the sequel&hellip;

    - by Chris Williams
    Twitter moves fast, and if you don’t stay on top of it, you can miss a lot. I don’t follow a ton of people, but I combine it with topic searches. Here are a few things I’ve found that are worth your time and attention, especially if you’re into video games… development or playing:
    The 15 Greatest Sci-Fi/Horror Games for the Commodore 64 - http://moe.vg/bovATG (via @jlist)
    Practical Tactics for Dealing with Haters! - http://www.fourhourworkweek.com/blog/2010/05/18/tim-ferriss-scam-practical-tactics-for-dealing-with-haters/ (via @The_Zman)
    Assassin’s Creed 2 + $10 Video Game Credit + $5 MP3 Credit - $24.99 on Amazon.com – http://amzn.to/bvRI9h (via @Assassin10k)
    Make Small Good – A design article about not trying to compete with ginormous AAA multimillion dollar titles. - http://www.gamasutra.com/blogs/AlexanderBrandon/20100518/5067/Make_Small_Good.php (via @Kei_tchan) (CW: Excellent article, I do this a lot in my roguelike games!)
    Purposes for Randomization in Game Design – http://bit.ly/cAH7PG (via @gamasutra)

    Read the article

  • DNNWorld 2012, The Trailer

    - by Chris Hammond
    Some people in the asp.net community love to hate on DotNetNuke ( see Shaun's latest blog post comments ), that’s fine, the rest of us are off having a good time with it and the community! Check out the trailer for DNNWorld 2012, coming up in Orlando Florida in October (you can register for DNN World at http://dnnworld.dotnetnuke.com ). For those of you who love to hate on DNN, I challenge you to give it another look. A lot has changed with the platform in the past 10 years, most recently in the...(read more)

    Read the article

  • Is it possible to run the GNOME user manager from XFCE4?

    - by Chris Moore
    If I run 'gnome-control-center' and click on the 'User Accounts' icon, the gnome-control-center crashes. I built it from source to see what's going on, and it turns out it's doing an if (strcmp(getenv("XDG_CURRENT_DESKTOP"), "GNOME")) in panels/user-accounts/um-password-dialog.c, line 690. I don't have an environment variable "XDG_CURRENT_DESKTOP", so the getenv is returning NULL, and the strcmp is segfaulting. Where is XDG_CURRENT_DESKTOP meant to be defined? And shouldn't gnome-control-center check the pointer returned by getenv before passing it to strcmp? Does xfce4 have its own 'User Accounts' tool for creating new users?

    Read the article

  • Comprehensive redesigns

    - by Chris Skardon
    So, last night I realised that I’d made some bad decisions with the database, structure and naming, so… I’ve now refactored it all, and I’m feeling… hmmm… meh about it. I suspect I will redo it all later, but for now it will do…. I’ve also come to the conclusion that I was maybe trying too much for the initial release, so as a consequence I have removed one part of the project… (which, by-the-by, I intend to have published in a month or so – and yes Andy, that is one month longer than I mentioned to you in that email :)) @Html.DisplayFor() I find myself using DisplayFor a lot at the moment, is this correct? I mean – it works, but is that really only for forms? Do I need to use it? Should I use it?

    Read the article

  • How To Switch Webmail Providers Without Losing All Your Email

    - by Chris Hoffman
    Do you use a webmail service you’re unhappy with because it’s where all your email is? There’s good news – you can easily switch, without losing your old email and contacts and without missing email sent to your old address. This guide will help you switch to a shiny new webmail service. The exact ways to switch between email services will differ depending on which webmail provider you’re using. We’ll be focusing on three of the most popular services here: Gmail, Outlook.com (Hotmail), and Yahoo! Mail.

    Read the article

  • Shelving &ndash; What is it &ndash; and more importantly, can it help me?

    - by Chris Skardon
    Since we shifted to TFS we’ve had the ability to perform what is known as ‘shelving’. Shelving (whilst not a wholly new topic in the world of SCC) is new to us, and didn’t exist in our previous SCC solution – SVN. Soo… what is it? What? Shelving is a way to check-in but not check-in your code. By shelving you submit a copy of your ‘pending changes’ to the SCC server, (which maintains a list of the shelvesets) and once that is done you can either continue working, or undo your changes, safe in the knowledge that a backup copy exists on the server. You can unshelve your code at any time and get back to the state you were when you shelved. Yer, that is great but why not just check it in?? Shelvesets don’t have to build. The shelveset you put in there could be entirely broken, or it might solve every bug in the system – shelves aren’t continuously integrated so you can shelve anything. Hmmmm… What else? Shelving allows us to do some pretty cool stuff that beforehand was quite frankly a pain. For instance – Gated Check-ins are implemented via the shelving mechanism, when code is checked-in, what you’re actually doing is shelving it, the Build Controller will build the shelveset with the original code and if it succeeds, the code will be committed, if it fails – well – it’s only you that has to fix the code :) Other nice features are things like the ability to share code you are working on… For example, if I was having trouble with a particular piece of code, I could shelve it, and then you (yes you) could then get that shelveset and check out the problem for yourself, and if you fix it?? Well – you could check-it in! Nice, but day-to-day shizzle? Let’s say you’ve been working on your project and your project manager comes over to you and says: “Hey, errr, bad times, there is an urgent bug we need you to fix, it needs to go out now!” (also for this to play out – we’ll need to assume you’re currently working in the 'release’ branch for another bug fix (maybe))… You could undo all your current changes (obviously you’ll probably backup your code using zip or something I imagine) fix the bug, then re-copy your backup over the top, or you could shelve and unshelve. Perhaps some other uses will awaken the shelver in you… :) Before each checkin – if you shelve, you no longer need to worry (if indeed you do) about resolving conflicts and mysteriously losing your code… Going home at night? Not checking in straight away? Why not shelve, this way – should the worst come to the worst and your local pc gives up, you can just get the shelveset onto another machine and be up and running in literally seconds minutes…

    Read the article

  • How to Move Your Google Authenticator Credentials to a New Android Phone or Tablet

    - by Chris Hoffman
    Most of the app data on your Android is probably synced online and will automatically sync to a new phone or tablet. However, your Google Authenticator credentials won’t — they aren’t synchronized for obvious security reasons. If you’re doing a factory reset, getting a new phone, or just want to copy your credentials to a second device, these steps will help you move your authenticator data over so you won’t lose your access codes.

    Read the article

  • High Performance SQL Views Using WITH(NOLOCK)

    - by gt0084e1
    Every now and then you find a simple way to make everything much faster. We often find customers creating data warehouses or OLAP cubes even though they have a relatively small amount of data (a few gigs) compared to their server memory. If you have more server memory than the size of your database or working set, nearly any aggregate query should run in a second or less. In some situations there may be high traffic from the transactional application, and SQL Server may wait for several other queries to run before giving you your results. The purpose of this is to make sure you don’t get two versions of the truth. In an ATM system, you want to give the bank balance after the withdrawal, not before, or you may get a very unhappy customer. So by default databases are rightly very conservative about this kind of thing. Unfortunately this split-second precision comes at a cost. The performance of the query may not be acceptable by today’s standards because the database has to maintain locks on the server. Fortunately, SQL Server gives you a simple way to ask for the current version of the data without the pending transactions. To better facilitate reporting, you can create a view that includes these directives.

        CREATE VIEW CategoriesAndProducts AS
        SELECT *
        FROM dbo.Categories WITH(NOLOCK)
        INNER JOIN dbo.Products WITH(NOLOCK)
            ON dbo.Categories.CategoryID = dbo.Products.CategoryID

    In some cases queries that were taking minutes end up taking seconds. Much easier than moving the data to a separate database, and it’s still pretty much real time, give or take a few milliseconds. You’ve been warned not to use this for bank balances though. More from Data Stream

    Read the article

  • Package Version Numbers, why are they so important

    - by Chris W Beal
    One of the design goals of IPS has been to allow people to easily move forward to a supported "surface" of components. That is to say, when you # pkg update your system, you get the latest set of components which all work together, based on the packages you already have installed. During development, this has simply meant you update to the latest "build" of the components (during development, we build everything and publish everything every two weeks). Now that we've released Solaris 11 using the IPS technologies, things are a bit more complicated. We need to be able to reflect all the types of Solaris release we are doing, for example Solaris Development builds, Solaris Update builds and "Support Repository Updates" (the replacement for patches), in the version scheme. So simply saying "151" as the build number isn't sufficient to articulate what you are running, or indeed what is available to update to. In my previous blog post I talked about creating your own package, and gave an example FMRI of pkg://tools/[email protected],0.5.11-0.0.0. But it's probably more instructive to look at the FMRI of a Solaris package. The package "core-os" contains all the common utilities and daemons you need to use Solaris.

        $ pkg info core-os
        Name: system/core-os
        Summary: Core Solaris
        Description: Operating system core utilities, daemons, and configuration files.
        Category: System/Core
        State: Installed
        Publisher: solaris
        Version: 0.5.11
        Build Release: 5.11
        Branch: 0.175.0.0.0.2.1
        Packaging Date: Wed Oct 19 07:04:57 2011
        Size: 25.14 MB
        FMRI: pkg://solaris/system/core-os@0.5.11,5.11-0.175.0.0.0.2.1:20111019T070457Z

    The FMRI is what we will concentrate on here. In this package "solaris" is the publisher. You can use the pkg publisher command to see where the solaris publisher gets its bits from:

        $ pkg publisher
        PUBLISHER      TYPE     STATUS   URI
        solaris        origin   online   http://pkg.oracle.com/solaris/release/

    So we can see we get solaris packages from pkg.oracle.com. The package name is system/core-os. Package names can be of arbitrary length, just to allow you to group similar packages together. Now on to the interesting bit, the versions: everything after the @ is part of the version, and IPS will only upgrade to a "higher" version.

        pkg://solaris/system/core-os@0.5.11,5.11-0.175.0.0.0.2.1:20111019T070457Z
        core-os          = Package Name
        0.5.11           = Component - in this case we're saying it's a SunOS 5.11 package
        ,                = separator
        5.11             = Built on version - to indicate what OS version you built the package on
        -                = another separator
        0.175.0.0.0.2.1  = Branch Version
        :                = yet another separator
        20111019T070457Z = Time stamp when the package was published

    So from that we can see the Branch Version seems rather complex. It is necessarily so, to allow us to describe the hierarchy of releases we do. In this example we see the following:

        0.175 : known as the trunkid, incremented each build of a new release of Solaris. During Solaris 11 this should not change
        0     : the Update release for Solaris. 0 for FCS, 1 for Update 1, etc.
        0     : the SRU for Solaris. 0 for FCS, 1 for SRU 1, etc.
        0     : reserved for future use
        2     : build number of the SRU
        1     : Nightly ID - only important for Solaris developers

    Take a hypothetical example: core-os@0.5.11,5.11-0.175.1.5.0.4.1:<something>. This would be build 4 of SRU 5 of Update 1 of Solaris 11. This is actually documented in MOS article 1378134.1, which you can read if you have a support contract.
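
    As a quick illustration of how those pieces nest (my own sketch, not something from the MOS article), a few lines of Python can pull the branch components back out of an FMRI version string; the timestamp in the example is made up:

        def parse_fmri_version(fmri):
            """Split an IPS FMRI like
            pkg://solaris/system/core-os@0.5.11,5.11-0.175.0.0.0.2.1:20111019T070457Z
            into its named parts."""
            name, _, version = fmri.rpartition("@")
            component, _, rest = version.partition(",")
            built_on, _, rest = rest.partition("-")
            branch, _, timestamp = rest.partition(":")
            fields = branch.split(".")
            trunkid = ".".join(fields[:2])
            update, sru, reserved, build, nightly = fields[2:7]
            return {
                "package": name, "component": component, "built on": built_on,
                "trunkid": trunkid, "update": update, "sru": sru,
                "sru build": build, "nightly id": nightly, "timestamp": timestamp,
            }

        parts = parse_fmri_version(
            "pkg://solaris/system/core-os@0.5.11,5.11-0.175.1.5.0.4.1:20120101T000000Z")
        print(parts["update"], parts["sru"], parts["sru build"])   # -> 1 5 4

    Reading the output left to right gives exactly the hypothetical example above: build 4 of SRU 5 of Update 1.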

    Read the article

  • Per-pixel displacement mapping GLSL

    - by Chris
    Im trying to implement a per-pixel displacement shader in GLSL. I read through several papers and "tutorials" I found and ended up with trying to implement the approach NVIDIA used in their Cascade Demo (http://www.slideshare.net/icastano/cascades-demo-secrets) starting at Slide 82. At the moment I am completly stuck with following problem: When I am far away the displacement seems to work. But as more I move closer to my surface, the texture gets bent in x-axis and somehow it looks like there is a little bent in general in one direction. EDIT: I added a video: click I added some screen to illustrate the problem: Well I tried lots of things already and I am starting to get a bit frustrated as my ideas run out. I added my full VS and FS code: VS: #version 400 layout(location = 0) in vec3 IN_VS_Position; layout(location = 1) in vec3 IN_VS_Normal; layout(location = 2) in vec2 IN_VS_Texcoord; layout(location = 3) in vec3 IN_VS_Tangent; layout(location = 4) in vec3 IN_VS_BiTangent; uniform vec3 uLightPos; uniform vec3 uCameraDirection; uniform mat4 uViewProjection; uniform mat4 uModel; uniform mat4 uView; uniform mat3 uNormalMatrix; out vec2 IN_FS_Texcoord; out vec3 IN_FS_CameraDir_Tangent; out vec3 IN_FS_LightDir_Tangent; void main( void ) { IN_FS_Texcoord = IN_VS_Texcoord; vec4 posObject = uModel * vec4(IN_VS_Position, 1.0); vec3 normalObject = (uModel * vec4(IN_VS_Normal, 0.0)).xyz; vec3 tangentObject = (uModel * vec4(IN_VS_Tangent, 0.0)).xyz; //vec3 binormalObject = (uModel * vec4(IN_VS_BiTangent, 0.0)).xyz; vec3 binormalObject = normalize(cross(tangentObject, normalObject)); // uCameraDirection is the camera position, just bad named vec3 fvViewDirection = normalize( uCameraDirection - posObject.xyz); vec3 fvLightDirection = normalize( uLightPos.xyz - posObject.xyz ); IN_FS_CameraDir_Tangent.x = dot( tangentObject, fvViewDirection ); IN_FS_CameraDir_Tangent.y = dot( binormalObject, fvViewDirection ); IN_FS_CameraDir_Tangent.z = dot( normalObject, fvViewDirection ); IN_FS_LightDir_Tangent.x = dot( tangentObject, fvLightDirection ); IN_FS_LightDir_Tangent.y = dot( binormalObject, fvLightDirection ); IN_FS_LightDir_Tangent.z = dot( normalObject, fvLightDirection ); gl_Position = (uViewProjection*uModel) * vec4(IN_VS_Position, 1.0); } The VS just builds the TBN matrix, from incoming normal, tangent and binormal in world space. Calculates the light and eye direction in worldspace. And finally transforms the light and eye direction into tangent space. 
FS: #version 400 // uniforms uniform Light { vec4 fvDiffuse; vec4 fvAmbient; vec4 fvSpecular; }; uniform Material { vec4 diffuse; vec4 ambient; vec4 specular; vec4 emissive; float fSpecularPower; float shininessStrength; }; uniform sampler2D colorSampler; uniform sampler2D normalMapSampler; uniform sampler2D heightMapSampler; in vec2 IN_FS_Texcoord; in vec3 IN_FS_CameraDir_Tangent; in vec3 IN_FS_LightDir_Tangent; out vec4 color; vec2 TraceRay(in float height, in vec2 coords, in vec3 dir, in float mipmap){ vec2 NewCoords = coords; vec2 dUV = - dir.xy * height * 0.08; float SearchHeight = 1.0; float prev_hits = 0.0; float hit_h = 0.0; for(int i=0;i<10;i++){ SearchHeight -= 0.1; NewCoords += dUV; float CurrentHeight = textureLod(heightMapSampler,NewCoords.xy, mipmap).r; float first_hit = clamp((CurrentHeight - SearchHeight - prev_hits) * 499999.0,0.0,1.0); hit_h += first_hit * SearchHeight; prev_hits += first_hit; } NewCoords = coords + dUV * (1.0-hit_h) * 10.0f - dUV; vec2 Temp = NewCoords; SearchHeight = hit_h+0.1; float Start = SearchHeight; dUV *= 0.2; prev_hits = 0.0; hit_h = 0.0; for(int i=0;i<5;i++){ SearchHeight -= 0.02; NewCoords += dUV; float CurrentHeight = textureLod(heightMapSampler,NewCoords.xy, mipmap).r; float first_hit = clamp((CurrentHeight - SearchHeight - prev_hits) * 499999.0,0.0,1.0); hit_h += first_hit * SearchHeight; prev_hits += first_hit; } NewCoords = Temp + dUV * (Start - hit_h) * 50.0f; return NewCoords; } void main( void ) { vec3 fvLightDirection = normalize( IN_FS_LightDir_Tangent ); vec3 fvViewDirection = normalize( IN_FS_CameraDir_Tangent ); float mipmap = 0; vec2 NewCoord = TraceRay(0.1,IN_FS_Texcoord,fvViewDirection,mipmap); //vec2 ddx = dFdx(NewCoord); //vec2 ddy = dFdy(NewCoord); vec3 BumpMapNormal = textureLod(normalMapSampler, NewCoord.xy, mipmap).xyz; BumpMapNormal = normalize(2.0 * BumpMapNormal - vec3(1.0, 1.0, 1.0)); vec3 fvNormal = BumpMapNormal; float fNDotL = dot( fvNormal, fvLightDirection ); vec3 fvReflection = normalize( ( ( 2.0 * fvNormal ) * fNDotL ) - fvLightDirection ); float fRDotV = max( 0.0, dot( fvReflection, fvViewDirection ) ); vec4 fvBaseColor = textureLod( colorSampler, NewCoord.xy,mipmap); vec4 fvTotalAmbient = fvAmbient * fvBaseColor; vec4 fvTotalDiffuse = fvDiffuse * fNDotL * fvBaseColor; vec4 fvTotalSpecular = fvSpecular * ( pow( fRDotV, fSpecularPower ) ); color = ( fvTotalAmbient + (fvTotalDiffuse + fvTotalSpecular) ); } The FS implements the displacement technique in TraceRay method, while always using mipmap level 0. Most of the code is from NVIDIA sample and another paper I found on the web, so I guess there cannot be much wrong in here. At the end it uses the modified UV coords for getting the displaced normal from the normal map and the color from the color map. I looking forward for some ideas. Thanks in advance! Edit: Here is the code loading the heightmap: glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, mWidth, mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, mImageData); glGenerateMipmap(GL_TEXTURE_2D); //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR); //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT); Maybe something wrong in here?

    Read the article
